The term “Artificial Intelligence” (or AI) tends to conjure up images of killer robots in movies like The Terminator, Blade Runner and Avengers: Age of Ultron. All these movies warn of the dangers AI poses to humanity, but it’s the Avengers movie that really captures the fear that some technology and scientific luminaries have warned us about. In that movie, Ultron, a sentient robot created by Tony Stark (Iron Man), concludes that in order to save Earth, it has to eradicate humans.
This is exactly the kind of thing that Tesla’s Elon Musk, himself a bit of a Tony Stark-like figure, has been warning about for years. In 2014, he famously likened the unregulated development of AI to “summoning the demon” — one that cannot be controlled.
If you think his views are alarmist, you should know he’s far from alone. Physicist Stephen Hawking has also warned about the potential dangers of AI. “I believe there’s no deep difference between what can be achieved by a biological brain and what can be achieved by a computer,” Hawking said. “It therefore follows that computers can, in theory, emulate human intelligence — and exceed it.”
Not only that, he fears that AI systems could redesign themselves at an ever-increasing rate and that humans, limited by slow biological evolution, wouldn’t be able to compete and would eventually be superseded.
Both Musk and Hawking are members of the board of advisors of the Future of Life Institute (FLI), which lists four existential threats to humanity: nuclear weapons, biotechnology, climate change and, last but not least, artificial intelligence.
In 2015, FLI launched its AI Safety Research programme — funded primarily by a donation from Musk — whose purpose is to fund researchers and institutions working on projects that will help ensure artificial intelligence stays safe and beneficial to humanity.
Alarm bells ringing
Just last month, Musk warned about AI again, this time to a gathering of US governors. He said: “I have exposure to the most cutting-edge AI. I think people should be really concerned about it. I keep sounding the alarm bell, but you know, until people see robots going down the streets killing people, they don’t know how to react, because it seems so ethereal. I think we should be really concerned about AI.”
Musk, no fan of regulation, feels that AI is one sector that does need regulation. “AI is a rare case where I think there should be proactive regulation instead of reactive. I think by the time we’re reactive in AI regulation, it’s too late. Normally, the way regulations are set up is that a whole bunch of bad things happen, there’s a public outcry, and then after many years, the regulatory agencies are set up to regulate that industry.”
Musk went on to say: “There’s a bunch of opposition from the companies who don’t like being told what to do by regulators, and it takes forever. That, in the past has been bad, but not something which represented a fundamental risk to the existence of civilisation. AI is a fundamental risk to the existence of the human civilisation. In a way that car accidents, airplane crashes, faulty drugs, or bad food were not. They were harmful to a set of individuals within society of course, but they were not harmful to society as a whole. AI is a fundamental existential risk for human civilisation, and I don’t think people really appreciate that.”
This view is one that’s famously echoed by Hawking, who says that while AI could lead to the eradication of disease and poverty and the conquest of climate change, it could also bring about all sorts of things we don’t like such as autonomous weapons, economic disruption and machines that develop a will of their own, in conflict with humanity. “In short, the rise of powerful AI will be either the best, or the worst thing, ever to happen to humanity. We don’t yet know which.”
One tech entrepreneur who holds the opposite view to Musk and Hawking is Facebook’s Mark Zuckerberg, who is very bullish on AI. Last month, he conducted a Facebook livestream in which he took questions from the public, and one of the topics touched upon was AI.
Zuckerberg said: “I have pretty strong opinions on this. I’m really optimistic. I think you can build things, and the world gets better. But with AI especially, I’m really optimistic, and I think that people who are naysayers and kind of try to drum up these doomsday scenarios are... I just don’t understand it, it’s really negative. And, in some ways I think it’s pretty irresponsible. Because in the next five to 10 years, AI is going to deliver so many improvements in the quality of our lives.”
In his livestream, Zuckerberg highlighted some of the ways in which AI can keep people safe, such as helping to diagnose diseases more accurately and enhancing the safety of travel through self-driving cars. In contrast to Musk, he doesn’t believe in purposefully slowing down the development of AI. “I have a hard time wrapping my head around that because if you’re arguing against AI, then you’re arguing against safer cars that aren’t going to have accidents. And, you’re arguing against being able to diagnose people when they’re sick. I just don’t see how, in good conscience, some people can do that.”
In response, Musk, whose company is in the business of building self-driving cars, replied in a tweet: “I’ve talked to Mark about this. His understanding of the subject is limited.”
So, who’s right: Musk or Zuckerberg? Actually, the two of them are talking about different aspects of AI, so it’s like comparing apples with oranges. Zuckerberg is talking about using AI for specific purposes, for example in medicine or transportation. Musk is talking about what’s called artificial general intelligence, which is more like the type of AI you see in movies. He’s not talking about the ability to crunch massive amounts of data to fulfil a specific task, but about systems that can plan, create and even imagine — something close to sentience or consciousness.
Musk fears that this could happen if AI development goes unregulated, but many scientists say we’re still nowhere near that point. Computers have, no doubt, beaten human players at chess and Go (an ancient Eastern game of strategy), but even their programmers will concede that those are feats of raw computing power rather than intelligence.
Importance of a ‘kill’ switch
Interestingly though, something happened at the Facebook AI Research lab (FAIR) in June that demonstrated just how smart computers can potentially become. FAIR researchers were stunned to find that their AI agents, or “chatbots”, had developed their own language — without any human input — to make communication more efficient.
English is a rich language that evolved organically over the centuries, and apparently Facebook’s chatbot system found that some phrases in English weren’t necessary for communication. So it diverged from its basic training in English and developed a language that sounds like gibberish to humans but could easily be understood by other chatbots.
Facebook decided to pull the plug on this new language and had its researchers reprogramme the chatbots to use normal English. That seems the sensible thing to do, but if programmes are able to communicate with each other through self-developed languages that make them more efficient, isn’t that a good thing?
Arguably it is, but if left unfettered, there’s a real risk that the AI-generated language could become so complex that at some point the programmers would no longer be able to figure out what the programmes were saying to each other. One doesn’t need to be a science fiction buff to see the dangers of that.
Perhaps the single most important lesson from the Facebook chatbot episode is the importance of having an “off” switch that can’t be overridden by the AI system. If Tony Stark had built one into Ultron, we wouldn’t have had an Avengers movie. But in the real world, a kill switch is absolutely necessary as we continue to develop smarter and smarter computers. I’m sure both Zuckerberg and Musk would agree on that point.