Why are some leaders calling for a pause in the development of powerful AI technologies? Is the AI development pause a good or bad idea?
In March, over 1,000 tech and academic leaders called for a six-month pause in the development of powerful AI systems, citing potential societal harm from disinformation and infrastructure hacking. Others disagree, arguing that the transformation is already underway and that we should focus on AI’s benefits.
Read on to learn whether a pause in AI development is feasible or unrealistic, based on experts’ varied viewpoints.
The AI Development Pause Debate
On March 28, more than 1,000 technology and academic leaders, including Tesla and Twitter head Elon Musk and Apple co-founder Steve Wozniak, signed a letter calling for a six-month AI development pause to assess the risks that powerful artificial intelligence technologies may pose to society. Signatories argue that AI capabilities are growing so big, so fast, that even the technology’s creators don’t fully understand them and can’t predict or control them. With no meaningful regulation of AI systems in the US, it’s unclear whether humans can rein the technology in or adequately respond to the fallout of its potentially harmful actions—which could have catastrophic consequences for the nation.
What is AI? Why are Musk and other leaders calling for a pause in the development of powerful forms of the technology? Should AI development be allowed to continue at full throttle? We’ll examine experts’ varied viewpoints on these questions.
Background
Before we examine why some are calling for a pause in the development of AI technologies, let’s explore a brief history of AI and how it has developed so rapidly. Artificial intelligence tries to simulate or replicate human intelligence in machines. The drive to create intelligent machines dates back to a 1950 paper by mathematician Alan Turing, which proposed the Turing Test to determine whether a computer should be deemed intelligent based on how well it fools humans into thinking it’s human. While some researchers and scholars debate whether AI has passed the Turing Test, others argue that the test isn’t meaningful because simulating intelligence isn’t the same as being intelligent.
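To make the test’s setup concrete, here’s a minimal sketch of the imitation game’s structure in Python. Everything in it—the canned questions, the stand-in responders—is invented for illustration; Turing described the protocol in prose, not code.

```python
import random

# Illustrative sketch of the imitation game (not from Turing's paper):
# a judge questions two unseen parties over the same text channel and
# must guess which one is the machine.

def human_reply(question: str) -> str:
    # Stand-in for a human participant (hypothetical canned answers).
    answers = {
        "What is 7 times 8?": "56, though I had to think for a second.",
        "Describe your childhood.": "Summers at my grandmother's farm.",
    }
    return answers.get(question, "Hmm, let me think about that.")

def machine_reply(question: str) -> str:
    # Stand-in for the program under test (hypothetical canned answers).
    answers = {
        "What is 7 times 8?": "56.",
        "Describe your childhood.": "I grew up in a small town up north.",
    }
    return answers.get(question, "That is an interesting question.")

def run_round(questions):
    # Randomize which hidden channel ("A" or "B") is the machine.
    responders = [("human", human_reply), ("machine", machine_reply)]
    random.shuffle(responders)
    for label, (_, reply) in zip("AB", responders):
        for q in questions:
            print(f"{label} | Q: {q} | A: {reply(q)}")
    # The machine "passes" a round if the judge, seeing only the blind
    # transcript above, cannot reliably tell A from B.
    return {label: name for label, (name, _) in zip("AB", responders)}

if __name__ == "__main__":
    key = run_round(["What is 7 times 8?", "Describe your childhood."])
    print("Answer key (revealed after the judge guesses):", key)
```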
Artificial intelligence has developed considerably since Turing’s time, moving from basic machine learning, in which computers learn patterns from data (which items a particular store tends to stock, for example) without being explicitly programmed for the task, to “deep learning.” The latter uses artificial neural networks, or layers of algorithms mimicking the brain’s structure, to process and learn from vast amounts of data. Google describes its AI technology as a neural network.
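To give a rough sense of what “layers of algorithms” means in practice, here’s a tiny feed-forward network sketched in Python with NumPy. The layer sizes and random weights are invented for illustration; in a real deep-learning system, the weights are learned from large amounts of data rather than drawn at random.

```python
import numpy as np

# A toy 3-layer "deep" network: each layer transforms the previous
# layer's output, so representations are built up stage by stage.
rng = np.random.default_rng(0)

def layer(x, weights, bias):
    # One layer: a linear map followed by a ReLU nonlinearity.
    return np.maximum(0.0, x @ weights + bias)

# Made-up sizes: 4 input features -> 8 -> 8 -> 1 output score.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
W3, b3 = rng.normal(size=(8, 1)), np.zeros(1)

x = rng.normal(size=(1, 4))   # one input example
h1 = layer(x, W1, b1)         # first hidden representation
h2 = layer(h1, W2, b2)        # second hidden representation
score = h2 @ W3 + b3          # final output

print(score)
```

Training, which this sketch omits, is the process of nudging W1 through W3 so the network’s outputs match known examples; stacking many such layers and feeding them vast data is what the term “deep learning” refers to.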
These two applications of computer learning may produce either “weak” AI or “strong” AI, also known as Artificial General Intelligence (AGI). Examples of weak AI are typical chatbots, spam filters, virtual assistants, and self-driving cars (a minimal spam-filter sketch follows the list below). We haven’t achieved AGI—artificial intelligence that can learn and perform every cognitive task a human can. Many AI experts consider AGI the holy grail and agree that it’s not on the current horizon. However, it’s increasingly imaginable:
- Dr. David Ferrucci, who helped build the Watson computer that won at Jeopardy! in 2011, heads a start-up that’s working to combine the vast data-processing capability of Watson and its successors with software that mimics human reasoning. This hybrid type of AI would not only make suggestions but also be able to explain how it arrived at them. Ferrucci says this would position computers to work collaboratively with people on tasks. Businesses are already applying some of the nascent technology.
- Some experts assert that we’re experiencing a “golden decade” of accelerating advances in AI. AI analyst Ajeya Cotra recently raised her estimate of the chance that AI transforms society by 2036 (for instance, by eliminating the need for knowledge workers) from 15% to 35%.
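As noted above, a spam filter is a classic example of weak AI. Here’s a minimal sketch using scikit-learn’s naive Bayes classifier; the four training messages are invented for illustration, whereas a real filter would learn from a large corpus of labeled mail.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny made-up training set; a real filter would use thousands of
# labeled messages.
messages = [
    "win a free prize now",
    "limited offer click here",
    "lunch at noon tomorrow?",
    "meeting notes attached",
]
labels = ["spam", "spam", "ham", "ham"]

# Count word occurrences, then fit a naive Bayes classifier on them.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["claim your free offer"]))   # expected: ['spam']
print(model.predict(["notes from the meeting"]))  # expected: ['ham']
```

A filter like this is “weak” AI in exactly the sense above: it does one narrow task well and nothing else.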
OpenAI & ChatGPT
In November, OpenAI, a research and development company, released ChatGPT—an intelligent chatbot technology that offers human-like responses to nearly any question asked—sparking a frantic race among AI developers, including Microsoft (Bing) and Google (Bard), to create rival technologies.
View #1: Yes, Slow Down AI Development
Some experts agree with Musk that AI creators should pause the development of more powerful forms of AI because:
- Humans can’t control it. AI systems’ capabilities grow exponentially with each new iteration, and even the technology’s creators don’t fully understand the processes that enable it to arrive at mind-bending conclusions, making it nearly impossible to manage. This could be disastrous if, for example, an autonomous weapon makes a mistaken decision faster than the human charged with controlling it can understand and react.
- It can manipulate human thinking and behavior. AI is already capable of altering images in convincing ways that risk provoking harmful human, social, and political responses. Left to run amok, it could produce large-scale disinformation, hack computer systems that control critical infrastructure, and increase phishing attacks.
- It’s fallible. The technology makes mistakes, hallucinates (gives factually incorrect, irrelevant or nonsensical answers), and produces racist, sexist, and biased responses.
- The US lacks AI-specific laws and regulations. Compared with the EU and China, which have developed comprehensive legal and regulatory frameworks for AI, the US lags woefully behind, with no national privacy law, a mere proposal from the White House for an AI Bill of Rights, and a set of reactive, patchwork responses to AI-related problems.
View #2: No, Don’t Slow Down AI Development
Other experts disagree with the proposal to pause the development of AI technologies, in part because there’s no feasible way to enforce a global slowdown and doing so wouldn’t resolve AI-related challenges coming down the pike anyway. As a result, we should embrace forward movement and the positives that come with AI development, which offers the potential to:
- Improve global health and education access. Bill Gates, co-founder of Microsoft (which made a multibillion-dollar investment in OpenAI and is developing its own AI technology), says his foundation will use AI to help the world’s most vulnerable.
- Liberate people to engage positively with the world in ways the technology can’t. Gates argues that AI will support vital, socially valuable work (like scientists’ development of vaccines) and free people from cumbersome tasks (like reviewing emails and scheduling meetings), giving them more time to spend on meaningful activities, like caring for patients and the elderly.
What’s Next?
Some say that, without a pause, continuing to release ever more powerful AI systems to the public before their creators can understand and manage them not only won’t improve the technology but guarantees a world of chaos and Whac-A-Mole problems that will demand developers’ constant attention to fix.
Finally, some argue that a six-month pause in AI development likely isn’t enough time for US legislators to put meaningful, AI-specific guardrails in place given their lack of AI expertise and the technology’s potential for rapid, explosive growth. This, they say, is perhaps the best reason to take a deep breath and slow down.