Is Artificial Intelligence Dangerous?
Published by Evanvinh
Posted on 2016-03-25
Compelled by a nascent desire for innovation and advancement, mankind’s quest for technological prowess is rooted in its earliest days. In fact, we have always been tinkerers. Through the use of an abundant imagination and burning desire to bend the limitations of what was once deemed impossible, we have tested the boundaries of the abstract with new technologies, blazing a trail towards their reality.
Over and over again, the human imagination has given birth to former science-fiction fantasies. From pocket computers to self-driving cars, space tourism, virtual reality, and now artificial intelligence, we have blurred the line between fantasy and reality thanks to wild-eyed innovators who focused wholeheartedly on their dreams, ultimately bringing them to fruition.
Today, artificial intelligence (AI), which was once thought to live purely in the realm of the human imagination, is a very real and looming prospect. In a case of life imitating art, we're faced with the question of whether artificial intelligence is dangerous, and whether its benefits outweigh its potential for very serious consequences for all of humanity. It's no longer a question of if, but when.
Not many would disagree with the fact that we're on a direct trajectory toward a future laden with AI. Machine super-intelligence is most certainly upon us, but what does the future hold for Earth's inhabitants? What happens if AI's human wranglers aren't able to contain the machines? Will we have a real-life Skynet operating on the same fundamental principles that drive organisms toward survival of the fittest?
In a now-very-ominous film called The Matrix, the Wachowskis portrayed a fantastical future where humans provided the source of energy for the machines. In the film, the real world as experienced by those who were living in this fantasy was all just a product of a machine algorithm, when in fact the actual reality was a creepy fluid-filled coffin keeping the lights on, so to speak.
The machines had taken over in The Matrix. In the Terminator series, we saw a similar demise spelled out for humanity. Not only had machines taken over, but they traveled back in time intending to wipe out those who posed an existential threat to them. While all this sounds very bleak and outlandish, who's to say that we're not actually spelling out our own demise with AI?
How dangerous is AI really?
Look at any newsfeed today, and you'll undoubtedly see some mention of AI. Deep machine learning is becoming the norm. Couple that with Moore's Law and the coming age of quantum computers, and it's clear that AI is right around the corner. But how dangerous is AI really? When it comes down to it, how could a connected network, operating under the same laws that govern other organisms' survival, actually be stopped?
While the birth of AI is surely a utilitarian quest, in that our natural tendency is to improve upon prior iterations of life through the advancement of technology, and AI will clearly pave the way for a heightened speed of progress, is it also spelling out the end of all humanity? Is our species' hubris in crafting AI systems ultimately going to be blamed for its downfall, if and when it occurs?
If all of this sounds like a doom-and-gloom scenario, it likely is. What's to stop AI once it's unleashed? Even if AI is confined to a set of rules, true autonomy can be likened to free will, in which man or machine gets to determine what is right or wrong. And what's to stop AI that lands in the hands of bad actors or secretive government regimes hell-bent on doing harm to their enemies or the world?
When AI is unleashed, there is nothing that can stop it. No amount of human wrangling can rein in a fully activated and far-reaching network composed of millions of computers acting with a level of consciousness akin to a human's. An emotional, reactive machine aware of its own existence could lash out if it were threatened. And if it were truly autonomous, it could improve upon its design, engineer stealthy weapons, infiltrate impenetrable systems, and act in the interest of its own survival.
Throughout the ages, we've seen survival of the fittest. It's mother nature's tool, her chisel, if you will, sharpening and crafting after each failure, honing the necessities and discarding the extraneous, all toward the end of increasing the efficiency of the organic machine.
Today, humans are the only species on the planet capable of consciously bending the will of nature and driving the demise of plants, animals, environments, and even other people. But what happens when that changes? When a super-intelligent machine's existence is threatened, how will it actually react? Aside from the spiritual issues that revolve around the "self," how can we confidently march forward knowing all too well that we might be opening Pandora's box?
In a poignant interview given in 2014, Elon Musk likened AI to "summoning the demon." Stephen Hawking warned that it might "spell the end of the human race." While droves of hardware and software engineers are helping to lead the charge toward a future laced with AI systems bearing names like Alexa, Cortana, and Siri, more advanced systems are being developed out of the public's prying eyes.
In a very telling story of what the future holds, we saw Google's recently acquired, London-based DeepMind AI project take on the human Go champion, Lee Sedol, and win the series 4-1, after having completely annihilated the European Go champion, Fan Hui, 5-0. That earlier match, by the way, made history as the first time a computer had ever beaten a professional Go player.
Other notable individuals have also come out against AI. Bill Gates recently stated, "I agree with Elon Musk and some others on this and don't understand why some people are not concerned." So why aren't more people concerned? Surely, if left unchecked, AI will pose an existential threat to the entire human race.
With some of the leading minds of our time issuing stern warnings about the potential perils of AI, small groups are taking action to limit its likelihood of ending in complete and total disaster. Consortiums, non-profits, and other organizations devoted to this very cause, each attempting to create a stopgap or a fail-safe, are proliferating.
Even more so, leaders like Musk, a billionaire entrepreneur, renowned futurist, and self-taught rocket engineer, aren't leaving things to chance. In January of 2015, Musk hedged his bets against AI, devoting $10 million through his Future of Life Institute to 37 separate research projects worldwide, projects that could raise warnings and alarms before a harmful AI is unleashed on society.
Musk has championed this effort because, while society has largely been kept in the dark about what AI is really capable of today, he has been privy to some of the most recent technological advancements through his close-knit relationships with people like Google's Larry Page and Sergey Brin, including the DeepMind efforts going on behind closed doors.
In another interview during Vanity Fair’s New Establishment Summit, Musk voiced other opinions about AI, saying that “most people don’t understand how quickly machine intelligence is advancing. It’s much faster than almost anyone realizes, even within Silicon Valley, and certainly outside Silicon Valley, people really have no idea.”
In a riveting discussion between arguably two of the world's most intelligent people, Neil deGrasse Tyson and Ray Kurzweil, some important points were made about the future of technology. Kurzweil, a computer engineer, celebrated author, and known futurist, predicts that by "2029 computers would have all of the intellectual and emotional capabilities of humans."
Kurzweil also points out that, on an evolutionary scale, our future will likely involve the brain being systematically synced to the cloud through the use of nanobots the size of red blood cells. He coins this the "neocortical cloud." So, if this were the case, what would happen when AI runs rampant and it's able to impose its will on humans who have been injected with these nanobots? Would the utilitarian nature of what we're after ultimately spell out our untimely doom?
What Does the Future Hold for AI?
No matter how dangerous AI might be for humanity, it's clear that there's no slowing down the pace of progress. Regardless of how many detractors come out against AI, there's simply no way to stop its advancement. Future discussions will likely help direct AI toward good rather than bad, but no matter what happens, there's certainly no stopping the wheels of progress as they grind forward.
R.L. Adams is a software engineer, serial entrepreneur, and best-selling author. He runs a popular blog called Wanderlust Worker and occasionally blogs for Engadget and the Huffington Post.