27 Mar 2019
Microsoft co-founder and philanthropist Bill Gates addressed a human-centered AI conference held at Stanford last week, insisting that artificial intelligence has the potential to both help us and hurt us. The most staggering remark he made, and the one that resonated most with his audience, was his comparison of artificial intelligence to nuclear weapons. Yes, you heard that right. Alexa is as dangerous as a nuclear bomb. It takes a moment to sink in, but the man does have a point.
"The world hasn't had that many technologies that are both promising and dangerous," Gates said, mentioning nuclear energy and nuclear weapons as other examples with that much potential for profound change. As for areas where AI has helped society so far, he said, "I won't say there are that many."
He also suggested that medicine and education are promising areas for AI to make a difference. "It's a chance to supercharge the social sciences, with the biggest being education itself," Gates said of AI's promise. This is not the first time Gates has expressed reservations about the ramifications of AI. Back in 2015, he did a Reddit "Ask Me Anything" thread, a Q&A session conducted live on the platform. When asked about the threat that superintelligence poses to humanity, he said, "I am in the camp that is concerned about super intelligence. First, the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that, though, the intelligence is strong enough to be a concern."
Gates is not alone: entrepreneur Elon Musk has also voiced his wariness about the rise of AI. Discussing the risks on "The Joe Rogan Experience" podcast, Musk said, "(AI is) not necessarily bad. It's just outside of human control. It would be very tempting to use AI as a weapon. In fact, it will be used as a weapon. The danger is going to be more humans using it against each other." He added, though, that it worries him less than it used to, since he has adopted a more fatalistic attitude toward the rise of AI.
One of the main reasons such prominent figures are wary of AI is that it is very real right now, not just a piece of science fiction. What have we learned from Hollywood after all these years of watching "I, Robot", "The Matrix" and "Terminator"? All of these movies are loosely based on machines going rogue or artificial intelligence gone horribly wrong. Musk has even gone so far as to liken AI development to "summoning the demon".
The late physicist Stephen Hawking, writing for The Independent in May 2014, also expressed his concerns. "Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all," Hawking wrote.
Most technologists agree that AI is unlikely to experience complex human emotions such as love or hate, so there is no reason to expect a machine to turn benevolent or murderous on its own. The real risk is an AI programmed to wreak havoc that simply does what it was built to do. Such systems already exist in the form of autonomous weapons, which are programmed to kill, and an AI arms race could cause mass casualties. Worse, in the rush to one-up the enemy, these weapons may be built without a simple off switch, exactly the kind of situation in which humans lose control.
Returning to "I, Robot": the robots in it were governed by the three laws of robotics, yet one of them still went rogue. Similarly, real machines may not share our sense of ethics. They may focus only on completing their tasks, regardless of whether what they are doing is right or wrong. A super-intelligent self-driving car programmed to take you to the airport as fast as possible may do exactly that, even if you are in no condition to travel.
We will have to align our goals with those of the AI; without that, there could be dire repercussions for mankind. Misaligned intelligence should be the central concern in any discussion of the dangers of AI. Another major risk is machines turning the data they collect against humans. A machine can easily learn everything about a person; a personal favorite Black Mirror episode, "White Christmas", imagines exactly this with technologies like "Z-Eyes" and the "cookie", and it does not end well for the people involved.
Any powerful technology can be misused. Today, artificial intelligence is put to many good uses: helping us make better medical diagnoses, finding new ways to cure cancer and making our cars safer. Unfortunately, as AI capabilities expand, we will also see them used for dangerous or malicious purposes. Since the technology is advancing so rapidly, it is vital that we start debating how AI can develop positively while minimizing its destructive potential.