Biden's Executive Order on AI Was Necessary. Yet, There's Still A Lot to Do
Governments have failed to keep pace with technological change. Is this about to change?
A few days ago, President Biden issued an executive order on safe, secure, and trustworthy artificial intelligence. This has been described as the strongest action ever taken on AI safety, and it’s the first-ever AI executive order from the US government.
The EO is broad and touches upon different areas. In this article, I’d like to discuss some important points.
Standards for AI were necessary
Standards for AI safety and security are some of the main points in the EO.
The EO requires companies to share their safety test results and other critical information with the government. This should prevent any big tech company from building a very powerful AI in secret that could threaten a country’s safety, economy, or public health. The test results should help improve policies and regulations based on real-world data, and companies will be held accountable for the systems they create.
It seems the US government learned this lesson from its mishandling of the social media boom. The days of big tech companies setting the standards themselves should be over.
Is this good news? That depends on whom you trust more (or less) — big tech companies or the government.
Some big tech companies have so far proved unreliable when it comes to safety and security. They've operated under Silicon Valley's "move fast and break things" philosophy, which sounds cool to entrepreneurs but has already had harmful effects on society. If they set the standards themselves, we'll be back in the social media era, when scandals like Cambridge Analytica stayed under the radar until someone spoke out.
I believe it’s important that an external entity has access to critical information provided by AI companies in order to develop standards that ensure the safety of upcoming AI systems.
One of the standards that most caught my attention was detecting AI-generated content and authenticating official content. That leads us to my next point.
All this looks easy on paper, but is it possible to successfully execute it?
Protecting people from the potential risks of AI systems isn’t an easy task. Let’s take deepfakes as an example.
In AI's infancy, the content it produced was easy to spot with the human eye, but over time it has become harder to tell whether something was generated by AI.
AI-generated content isn’t evil by nature. The problem is the way some people use it.
Recently, several malicious uses of AI have been reported, from fake videos of famous people used to scam their followers to AI-generated images used to harass teenage girls.
All this increases the importance of detecting AI-generated content.
According to the EO, the Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content.
This sounds good on paper, but it won't be simple to implement in reality. The EO doesn't give much detail on how it will be executed, so we'll have to wait and see what they come up with.
I don’t want to be a pessimist, but I can’t think of a reliable way to label AI content.
Say they develop a way to watermark the images generated with Midjourney and DALL-E 3. Problem solved, right? Not at all. Many workarounds come to mind: watermark removers, taking a photo or screenshot of the AI-generated image, or simply using alternative apps that produce unwatermarked images.
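To make the workaround problem concrete, here's a minimal sketch of naive invisible watermarking, hiding a label in the least significant bits (LSB) of an image's pixels. This is purely illustrative: it isn't how Midjourney, DALL-E 3, or any official scheme works, and the `embed`/`extract` helpers are names I made up for the demo. Notice how a single lossy re-save — the digital equivalent of a screenshot — wipes the label out:

```python
# Illustrative LSB watermarking sketch (NOT any vendor's real method).
# Assumes Pillow is installed: pip install Pillow
from PIL import Image

WATERMARK = "AI-GENERATED"

def embed(img: Image.Image, text: str) -> Image.Image:
    """Hide `text` in the least significant bit of the red channel."""
    bits = "".join(f"{byte:08b}" for byte in text.encode())
    out = img.convert("RGB").copy()
    pixels = out.load()
    for i, bit in enumerate(bits):
        x, y = i % out.width, i // out.width
        r, g, b = pixels[x, y]
        pixels[x, y] = ((r & ~1) | int(bit), g, b)  # overwrite the LSB
    return out

def extract(img: Image.Image, length: int) -> str:
    """Read back `length` bytes from the red-channel LSBs."""
    pixels = img.convert("RGB").load()
    bits = ""
    for i in range(length * 8):
        x, y = i % img.width, i // img.width
        bits += str(pixels[x, y][0] & 1)
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode(errors="replace")

original = Image.new("RGB", (200, 200), "white")  # stand-in for an AI image
marked = embed(original, WATERMARK)
print(extract(marked, len(WATERMARK)))   # "AI-GENERATED" -- watermark intact

marked.save("marked.jpg", quality=85)    # one lossy re-save...
reloaded = Image.open("marked.jpg")
print(extract(reloaded, len(WATERMARK)))  # garbage -- compression wiped it
```

More robust schemes embed the mark in ways that survive some compression and cropping, but screenshots, regeneration, and unwatermarked alternative tools remain open doors — which is why I'm skeptical that labeling alone can solve this.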
It's getting harder and harder to distinguish genuine content from AI-generated content (even for Biden). Deepfakes keep improving, but the tools used to detect them aren't keeping up.
This is an issue that should also be addressed by the companies that create these tools, not only by the government.
What’s next?
Biden’s executive order on AI is a good start, but there’s a lot to do. Things seem easy on paper, but it’ll be challenging to implement all the points covered in the executive order.
Why? This isn't work that involves only the government; other parties have a role too.
Federal agencies: They will have to keep pace with the latest advancements in AI, a field that keeps moving faster than ever.
Tech companies: Collaboration and transparency from tech companies are expected, but not always guaranteed.
AI professionals: AI talent working for the government (and not only for big tech companies) will be essential to implement this. The government is now launching programs to hire more AI professionals. We'll see whether the necessary talent prefers working for the government over the private sector.
Citizens: People will need to be educated on AI so that society stays aware of how the technology is developing.
This executive order on AI is a good attempt to harness the potential of AI while managing its risks. Yet, there’s a lot to do to successfully implement what’s on paper.