Should AI be regulated? What are experts’ concerns about AI? Is AI regulation even possible?
A growing number of experts are calling for regulation of advanced AI models, saying the technology poses an existential threat to humanity if left to develop unchecked. However, there’s no clear consensus on who should regulate AI or exactly what the process should look like.
Here are some possible approaches, ranging from full US government involvement to partial involvement to none at all.
The Argument for AI Regulation
Should AI be regulated? Here’s why some experts say “yes.”
AI developers and researchers continue to sound alarm bells about the dangers powerful AI models pose to society, now asserting that the technology carries the same deadly potential as nuclear weapons and pandemics. The warning comes amid ongoing concerns that, without regulation, the technology could wreak havoc by:
- Spreading disinformation and replacing jobs.
- Amplifying discrimination, data exploitation, and massive-scale fraud.
- Creating AI monopolies.
- Creating a national security threat.
- Leading to a permanent caste system.
How to Regulate AI
With broad agreement that AI regulation is necessary, the question is who should regulate it, and how? In the face of the technology’s explosive growth, there’s no clear answer. Possibilities include:
- Full government regulation. If the US government believes that AI poses a credible existential threat, it could assume control of AI companies and regulate them as it does other national security assets.
- Moderate government regulation. If the federal government believes AI poses a less serious threat, it could impose more modest regulation:
  - Congress could create a new AI regulation agency with the authority to implement and enforce rules.
  - Existing government agencies could enforce rules in areas under their purview.
- Light regulation. In this model, AI companies would be held accountable through lawsuits for any harm they cause.
- “Soft laws.” In a soft law approach, a private organization would establish regulations for industry members.
- International regulation. An international body similar to the International Atomic Energy Agency could inspect AI systems, mandate audits, test for compliance with safety standards, and place restrictions on how AI is deployed and secured.
Specific Regulation Recommendations
Though nobody’s certain exactly what regulations should look like, some top AI developers recommend creating a checks-and-balances system in which both AI developers and external entities test and monitor advanced AI models. In this system, independent auditors and risk assessors would:
- Determine the technology’s safety before its release.
- Report results to regulatory agencies and the public.
- Flag and report ongoing problems requiring follow-up.
Specific areas experts say these stakeholders should test include AI’s ability to:
- Launch cyberattacks.
- Access weapons.
- Plan and follow through on political strategies.
- Create other AI systems.
- Develop situational awareness and the ability to replicate and sustain itself.
- Manipulate people into believing fake information or thinking they’re communicating with a human rather than a machine.
They further recommend that AI developers:
- Train and test the technology in isolation so it can’t independently interact with other computer networks.
- Develop methods to rapidly shut down or disable AI models’ access to networks when the models display troublesome behavior.
Looking Ahead
Some experts express cautious optimism that, despite fierce competition to dominate the AI landscape, major AI companies are attempting to collaborate on shared standards for the technology’s development and safety. Time will tell whether they continue to cooperate without mediation or regulation from the US government or another entity.