This article is an excerpt from the Shortform book guide to "The Singularity Is Near" by Ray Kurzweil. Shortform has the world's best summaries and analyses of books you should be reading.
What’s the downside of technological progress? How worried should we be about the future?
The road to the future of the technological Singularity isn’t without its dangers. Every technology has the potential for misuse, and the hazards involved with biotech, nanotech, and AI are substantial. Ray Kurzweil details some of those dangers while arguing that trying to halt technological progress is futile.
Continue reading to get Kurzweil’s take on the negative impact of technology on society and what he recommends as a remedy.
Technology’s Negative Impact
Kurzweil acknowledges the negative impact of technology on society and suggests that the only solution is for responsible people to take an active role in guiding technological regulation and development.
If there’s one lesson to be learned from the 20th century, it’s that technology has the potential to wipe out the human race if mishandled. While the chief threat for most of those years was that of global nuclear annihilation, biotechnology now makes it possible to engineer a virus more contagious or deadly than any in nature. Nevertheless, Kurzweil points out that the threat of artificial pathogens hasn’t slowed down genetic research. Instead, the medical benefits of harnessing the genetic code have only sped up research into new biotechnology applications.
(Shortform note: While Kurzweil was correct to predict that advanced biotech would make it easier to obtain the tools to create pathogens, others argue that the knowledge and skills needed to use those tools are still extremely rare. On the plus side, modern biotechnology is largely responsible for the rapid response to Covid-19. Though boosted by massive funding efforts and worldwide collaboration among scientists, the means to fight Covid-19 came from years of research into mRNA, the messenger molecule the body uses to deliver genetic instructions. Without the accelerating advance of biomedicine, a vaccine could have taken years to develop.)
Nanotech presents an even greater hazard: that swarms of self-replicating, microscopic robots might run amok, disassembling everything in their path, including buildings, animals, plants, and even us. The ultimate nightmare scenario is that unstoppable nanobots, whether by accident or malicious design, spread across the world and reduce all matter to a sea of undifferentiated ooze. Though the technology to create such a plague hasn’t yet been invented, Kurzweil reports that concerned scientists are already discussing what safeguards will have to be developed as nanotechnology research continues.
(Shortform note: Since Kurzweil’s nanobots still seem to be a long way off, the current legal and ethical debate around nanotechnology centers more on the development and regulation of artificial nanomaterials, such as those used in medicine, construction, and computing, which may have unforeseen negative effects on health and the environment. In a field that has so much potential to upend manufacturing, healthcare, and environmental stewardship, the ethics of nanotech research also covers sustainability, social impacts, and economic justice.)
Navigating the Impact
Some people suggest that the best safeguard possible is to ban any further research into hazardous technology, but Kurzweil argues that that’s a non-starter. A complete cessation of scientific research would have to be enforced by a global dictatorship, and since no such dictatorship exists, research will always continue somewhere. He believes that the only defense against the dangers of technology is for governments, corporations, and the scientific community to cooperate in developing responsible regulations and viable defenses that allow for research to progress while putting an infrastructure in place to combat potential hazards.
(Shortform note: With any new technology, ethical research aims to maximize its benefits to society while minimizing its risks. When it comes to new developments in the fields of bioscience and artificial intelligence, ethical concerns may have to include human autonomy, agency, and privacy, as well as accountability and transparency on the part of researchers. Enforcing ethical standards in the sciences is largely the purview of academic institutions, but since government and private entities play a large role in funding research, they share the responsibility for ensuring the beneficial progress of science, just as Kurzweil suggests.)
The most difficult hazard to address is the one presented by artificial intelligence, in particular a strong AI that doesn’t share humanity’s ethics or values. Kurzweil reiterates that against strong AI there may be no defense, because by definition it will be more intelligent and capable than we are. Even if it’s used to augment us rather than replace us, AI will likely empower humanity’s worst instincts as well as its best ones. The one solution Kurzweil offers is to ensure that any future AI learns and grows from the best we have to offer. Artificial intelligence will be humanity’s offspring, and like any good parents, we should guide its growth by presenting the best version of ourselves that we can.
(Shortform note: While Kurzweil raises several concerns about AI, he doesn’t address the fact that artificial intelligence can spread disinformation, amplifying societal divisions and throwing fuel on the fire of sectarian conflict. However, AI may also provide the solution by responding much more quickly to “fake news” campaigns than human-driven media can. Because of its ability to compile and compare large amounts of data, AI may provide the ultimate journalistic tool for separating good information from bad. Working in conjunction with human experts and reporters to provide real-time fact-checking, such systems could halt the spread of false narratives, resulting in a better-informed human race.)
Exercise: Think About the Future
Kurzweil argues that our future will be shaped by simultaneous advances in the fields of biomedicine, nanotechnology, and artificial intelligence. Consider how these have impacted your life and what potential they hold for tomorrow.
- Medical research has created new treatments we could only have dreamed of decades ago. How have you or someone you love benefited from advances in medical science? What other advances would you like to see in your lifetime?
- The device you’re using to read this is the product of decades of circuit miniaturization. What benefits can you picture from continuing that trend? What might be some hazards of making computers even smaller than they are now?
- Artificial intelligence is potentially the most helpful and the most disruptive new technology since the development of atomic power. What concerns do you have about how AI may directly impact your life and society? Which do you feel is greater—its benefits or its hazards—and why?
———End of Preview———
Like what you just read? Read the rest of the world's best book summary and analysis of Ray Kurzweil's "The Singularity Is Near" at Shortform.
Here's what you'll find in our full The Singularity Is Near summary:
- The upcoming technological shift that will change everything
- The revolutions in bioscience, nanotechnology, and artificial intelligence
- How Kurzweil's predictions held up over time