Limits of Artificial Intelligence

Recently several well-known technologists and scientists, including Stephen Hawking, Elon Musk and Bill Gates, have issued warnings about runaway technological progress leading to superintelligent machines that might not be favorably disposed to humanity.
What has not been shown, however, is scientific evidence for such an event.
- John Markoff, When Is the Singularity? Probably Not in Your Lifetime, NYT, April 7, 2016


“I don’t understand why some people are not concerned,” Mr. Gates said in an interview on Reddit.

“I think we should be very careful about artificial intelligence,” Mr. Musk said during an interview at M.I.T. “If I had to guess at what our biggest existential threat is, it’s probably that,” he added. He has also said that artificial intelligence would “summon the demon.”

And Mr. Hawking told the BBC that “the development of full artificial intelligence could spell the end of the human race.”

Ken Goldberg, a University of California, Berkeley, roboticist, has called on the computing world to drop its obsession with singularity, the much-ballyhooed time when computers are predicted to surpass their human designers.
- John Markoff, Relax, the Terminator Is Far Away, NYT, May 25, 2015


The Stanford report [Artificial Intelligence and Life in 2030, from the Stanford One Hundred Year Study on Artificial Intelligence] attempts to define the issues that citizens of a typical North American city will face in computers and robotic systems that mimic human capabilities. The authors explore eight aspects of modern life, including health care, education, entertainment and employment, but specifically do not look at the issue of warfare. They said that military A.I. applications were outside their current scope and expertise, but they did not rule out focusing on weapons in the future.

The report also does not consider the belief of some computer specialists about the possibility of a “singularity” that might lead to machines that are more intelligent and possibly threaten humans.

“It was a conscious decision not to give credence to this in the report,” Dr. Stone said.
- John Markoff, How Tech Giants Are Devising Real Ethics for Artificial Intelligence, NYT, Sept. 1, 2016


Although the Stanford One Hundred Year Study on Artificial Intelligence's Reflections and Framing page mentions singularity concerns in summarizing the topic "Loss of control of AI systems," one of its examples of "interrelated topics of interest," the word singularity does not appear in the group's 2016 report Artificial Intelligence and Life in 2030. This supports the impression given by Stone's comment quoted above: this group of AI experts does not consider the singularity an urgent concern.

The Standing Committee and Study Panel will work to formulate topics of attention and the best ways to approach them. A sampling of interrelated topics of interest includes the following: ...
Loss of control of AI systems. Concerns have been expressed about the possibility of the loss of control of intelligent machines for over a hundred years. The loss of control of AI systems has been a recurrent theme in science fiction and has been raised as a possibility by scientists. Some have speculated that we could one day lose control of AI systems via the rise of superintelligences that do not act in accordance with human wishes—and that such powerful systems would threaten humanity. Speculations about the rise of such uncontrollable machine intelligences have called out different scenarios, including trajectories that take a slow, insidious course of refinement and faster-paced evolution of systems toward a powerful intelligence “singularity.” Are such dystopic outcomes possible? If so, how might these situations arise? What are the paths to these feared outcomes? What might we do proactively to effectively address or lower the likelihood of such outcomes, and thus reduce these concerns? What kind of research would help us to better understand and to address concerns about the rise of a dangerous superintelligence or the occurrence of an “intelligence explosion”? Concerns about the loss of control of AI systems should be addressed via study, dialog, and communication. Anxieties need to be addressed even if they are unwarranted. Studies could reduce anxieties by showing the infeasibility of concerning outcomes or by providing guidance on proactive efforts and policies to reduce the likelihood of the feared outcomes.
- One Hundred Year Study on Artificial Intelligence: Reflections and Framing
