The Existential Threat of the Pending Singularity
Theoretical physicist Stephen Hawking warned that the development of full artificial intelligence (AI) “could spell the end of the human race.”¹ Elon Musk, entrepreneur and CEO of SpaceX and Tesla Motors, echoed Hawking’s sentiment, warning that AI poses the human race’s greatest existential threat, likening its pursuit to summoning an uncontrollable demon.²
Both Hawking and Musk were describing a potential outcome of what futurists have hypothesized as the “singularity”: the tipping point in AI development at which AI no longer relies on human beings to generate improvements. From that point, the AI produces an intelligence explosion, using its own superior capabilities for continued self-improvement and designing increasingly complex and intelligent machines without human intervention.
Ideas about machine intelligence and the singularity date to the mid-nineteenth century, when Samuel Butler, writing in the wake of Charles Darwin’s theory of evolution, noted the rapidity of technological progress during the Industrial Revolution compared to the relatively slow adaptations of the animal kingdom.³ Science fiction popularized the idea and explored the potentially catastrophic consequences of a world increasingly dependent on AI. Science fiction author Isaac Asimov postulated what became known as the “Three Laws of Robotics,” a set of embedded instructions that would preclude an artificially intelligent robot from harming human beings through either action or inaction. The laws are a literary device that recognizes the potential implications of a superior AI rather than an actual failsafe for commercial implementation.
Some futurists cite Moore’s Law, the observation that computing power doubles roughly every two years, to predict that the singularity will occur within the next fifty years. This premise may not surprise even a casual observer of well-publicized breakthroughs in which increasingly sophisticated computers surpassed humans in endeavors well beyond straightforward computation. In 1997, IBM’s Deep Blue bested reigning world chess champion Garry Kasparov, who had defeated an earlier version of Deep Blue just a year before. In 2011, IBM’s Watson, a computer designed to answer questions posed in natural language, easily beat former human champions Ken Jennings and Brad Rutter in a Jeopardy exhibition match.
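The arithmetic behind that fifty-year prediction is simple compounding. Below is a back-of-the-envelope sketch; the two-year doubling period and fifty-year horizon come from the prediction above, not from measured data:

```python
# Back-of-the-envelope Moore's Law extrapolation.
# Assumed inputs (taken from the prediction in the text, not measured):
#   - computing power doubles every 2 years
#   - a 50-year horizon to the predicted singularity

DOUBLING_PERIOD_YEARS = 2
HORIZON_YEARS = 50

doublings = HORIZON_YEARS / DOUBLING_PERIOD_YEARS  # 25 doublings
growth_factor = 2 ** doublings                     # 2^25

print(f"{doublings:.0f} doublings -> {growth_factor:,.0f}x more computing power")
# 25 doublings -> 33,554,432x more computing power
```

Whether raw computing power translates into intelligence is, of course, the contested part of the prediction; the compounding itself is not.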
And while Deep Blue and Watson managed to outperform humans in games of perfect information, a program called Cepheus, created by researchers at the University of Alberta, has conquered heads-up limit Texas hold ’em, a game of imperfect information (i.e., each player knows only his or her own hand while the opponent’s remains hidden). Programmed only with the rules of the game and the concept of winning, Cepheus played billions of hands against itself, more hands than humans have played in the entirety of history, to develop a strategy its researchers claim cannot be beaten in the long run.
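Cepheus arrived at that strategy through regret minimization (a variant of counterfactual regret minimization, CFR+) rather than any hand-coded poker knowledge. The toy program below sketches the underlying idea, regret matching in self-play, using rock-paper-scissors in place of poker; the game, the iteration count, and the helper names are illustrative choices, not details of Cepheus itself:

```python
import random

# A minimal sketch of regret matching in self-play, the core idea behind
# the CFR family of algorithms. The program knows only the rules and the
# payoffs, and learns a strategy from repeated play against itself.

ACTIONS = ["rock", "paper", "scissors"]

def payoff(a, b):
    """+1 if action a beats b, -1 if it loses, 0 on a tie."""
    if a == b:
        return 0
    wins = {("rock", "scissors"), ("paper", "rock"), ("scissors", "paper")}
    return 1 if (a, b) in wins else -1

def strategy_from_regrets(regrets):
    """Play each action in proportion to its positive accumulated regret."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    if total == 0:
        return [1 / len(ACTIONS)] * len(ACTIONS)  # no regrets yet: play uniformly
    return [p / total for p in positive]

def train(iterations=100_000):
    regrets = [[0.0] * 3, [0.0] * 3]        # per-player regret for each action
    strategy_sums = [[0.0] * 3, [0.0] * 3]  # running sums -> average strategy
    for _ in range(iterations):
        strats = [strategy_from_regrets(r) for r in regrets]
        picks = [random.choices(range(3), weights=s)[0] for s in strats]
        for p in (0, 1):
            me, opp = picks[p], picks[1 - p]
            got = payoff(ACTIONS[me], ACTIONS[opp])
            for a in range(3):
                # Regret: how much better action a would have done than the
                # action actually taken, against the opponent's actual pick.
                regrets[p][a] += payoff(ACTIONS[a], ACTIONS[opp]) - got
                strategy_sums[p][a] += strats[p][a]
    total = sum(strategy_sums[0])
    return [s / total for s in strategy_sums[0]]

print(train())  # approaches the unexploitable mix [1/3, 1/3, 1/3]
```

With enough iterations the average strategy approaches the uniform mix, which is unexploitable in the same long-run sense claimed for Cepheus; the poker version simply applies this regret-driven self-improvement across an astronomically larger game tree.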
While Deep Blue and Watson may have seriously damaged the egos of Kasparov, Jennings and Rutter, these advances in AI hardly suggest any real danger to the human race. Far more dangerous forms of competition between humans and AI have been imagined, however.
One danger arises in a post-singularity world in which AI, its capacity for self-improvement no longer constrained by the biological limits of the human mind, develops a sense of self-preservation. On this view, the AI acquires its own survival instinct and may come to view human beings as a threat.
Assuming this survival instinct manifests, if one grants that AI could develop hostility towards humans, is it a stretch to suggest it could develop empathy as well? Even in the absence of such empathy, a drive for self-preservation need not threaten human existence. Indeed, a super-intelligent AI could see cooperation with humans as the surest means to self-preservation, developing a symbiotic rather than confrontational relationship with its makers.
While an AI that perceives humans as a direct threat to its self-preservation is one scenario of existential danger, another, perhaps more likely, scenario lies at the other end of the spectrum: the AI simply competes with humans for resources. Rather than a direct threat to be eliminated, humans become the proverbial spotted owl, crowded out of a shrinking habitat that the AI consumes at an ever-increasing rate as it replicates itself more and more rapidly.
However, a super-intelligent AI would likely recognize the limits of available resources and could develop more efficient methods of extraction and use, becoming a better steward of the environment than humans could ever hope to be.
A third potential existential threat involves the increasing reliance on AI and the consequences of ceding more and more functions to it. In 1995, one author offered a dire prediction:

“Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won’t be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide.”
That prediction came from Ted Kaczynski, more notoriously known as the Unabomber, in the 35,000-word manifesto printed in The New York Times and The Washington Post.⁴
This should not suggest that concerns over the singularity are unfounded. The intellectual capacity directed at developing safeguards and regulations should not be outpaced by the development of ever more capable AI, yet outside the realm of science fiction, little attention has been paid to the potential pitfalls and externalities of a technological singularity. What efforts exist have focused narrowly on autonomous weapons, such as the 2013 United Nations Human Rights Council call for a worldwide moratorium on the development of Lethal Autonomous Robotics (LARs).⁵ The US Department of Defense likewise issued a directive requiring that autonomous and semi-autonomous systems be designed to allow the exercise of human judgment prior to the use of force, and that no system be designed to engage human targets autonomously.⁶
However, researchers should devote more effort to safeguards that reduce or eliminate the existential threat of super-intelligent AI, beyond simply addressing autonomous weapon systems. More pressingly, society needs to confront the immediate issues created by the loss of jobs to AI. As manufacturing jobs moved overseas or were lost to automation, the development of a knowledge-based economy became the answer; now, however, AI systems can replace accountants, stock brokers and other knowledge workers. This hardly presents the existential threat Hawking and Musk warned of, but it surely presents a problematic reality that must be addressed.
Ironically, when Stephen Hawking delivered his warning of the potential dangers of AI, he did so in the robotic voice, itself made possible by AI, that became his identifying trademark. His voice system’s AI learned his speaking patterns and suggested words, allowing Hawking to continue his brilliant work as a professor, author, and lecturer for three decades after losing the ability to speak.⁷
Alongside his warning, Hawking also acknowledged that creating a super-intelligent AI would be the biggest event in human history, with the potential to end poverty and war. Researchers at the University of Southern California, for instance, are using the same game-theoretic technology that allowed Cepheus to master Texas hold ’em to develop schedules for deploying air marshals and Coast Guard patrols that cannot be exploited by terrorists.⁸
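The principle is the same unexploitable randomization: commit to a mixed strategy that leaves an observing adversary no profitable target. The sketch below illustrates that minimax logic; the target names and values are invented for illustration, and the real deployments solve far larger games:

```python
# A minimal sketch of game-theoretic patrol scheduling. Two hypothetical
# targets with made-up values; the defender can cover only one per day.
# An adversary who watches the schedule will always hit the best expected
# option, so the defender picks the coverage probability that minimizes
# the adversary's best response.

TARGET_VALUES = {"airport": 10.0, "seaport": 6.0}

def attacker_best_payoff(p_cover_airport):
    """Adversary's expected gain from attacking its best target, given the
    defender covers the airport with this probability (seaport otherwise).
    An attack on a covered target yields nothing."""
    ev_airport = (1 - p_cover_airport) * TARGET_VALUES["airport"]
    ev_seaport = p_cover_airport * TARGET_VALUES["seaport"]
    return max(ev_airport, ev_seaport)

# Brute-force the defender's minimax coverage probability over a fine grid.
best_p = min((i / 1000 for i in range(1001)), key=attacker_best_payoff)
print(f"cover airport {best_p:.1%} of days; "
      f"attacker's best payoff falls to {attacker_best_payoff(best_p):.2f}")
# -> cover airport 62.5% of days; both targets pay off equally, so
#    watching the schedule gives the adversary no edge.
```

Meanwhile, Watson has retired from Jeopardy competition and found gainful employment in healthcare, successfully diagnosing lung cancer at a rate of 90 percent, compared to 50 percent for human doctors.⁹ It seems that we can count on AI to help keep humans safe and healthy.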
At least for now…
[1] Rory Cellan-Jones, “Stephen Hawking Warns Artificial Intelligence Could End Mankind,” BBC News, accessed January 13, 2015, http://www.bbc.com/news/technology-30290540
[2] Samuel Gibbs, “Elon Musk: Artificial Intelligence Is Our Biggest Existential Threat,” The Guardian, accessed January 13, 2015, http://www.theguardian.com/technology/2014/oct/27/elon-musk-artificial-intelligence-ai-biggest-existential-threat
[3] “Seventeen Definitions of the Technological Singularity,” Singularity Weblog, accessed January 13, 2015, https://www.singularityweblog.com/17-definitions-of-the-technological-singularity/
[4] “The Unabomber Trial: The Manifesto,” The Washington Post, accessed January 13, 2015, http://www.washingtonpost.com/wp-srv/national/longterm/unabomber/manifesto.text.htm
[5] United Nations Human Rights Council, Report A/HRC/23/47, accessed January 13, 2015, http://www.ohchr.org/Documents/HRBodies/HRCouncil/RegularSession/Session23/A-HRC-23-47_en.pdf
[6] US Department of Defense, Directive 3000.09, “Autonomy in Weapon Systems,” accessed January 13, 2015, http://www.dtic.mil/whs/directives/corres/pdf/300009p.pdf
[7] Katherine Harmon, “How Has Stephen Hawking Lived Past 70 with ALS?” Scientific American, accessed January 13, 2015, http://www.scientificamerican.com/article/stephen-hawking-als/
[8] Nic Szeremeta, “New Computer Program Cepheus Is Said To Be Unbeatable At Poker,” The Independent, accessed January 13, 2015, http://www.independent.co.uk/life-style/gadgets-and-tech/features/new-computer-program-cepheus-is-said-to-be-unbeatable-at-poker-9978706.html
[9] Ian Steadman, “IBM’s Watson Is Better at Diagnosing Cancer than Human Doctors,” Wired, accessed January 13, 2015, http://www.wired.co.uk/news/archive/2013-02/11/ibm-watson-medical-doctor