We’d likely all agree that Artificial Intelligence is pretty damn sweet. Over the past few decades, developments in AI have made our lives easier, safer and a lot more fun (who doesn’t enjoy being insulted by Siri or getting their ass kicked by an Xbox?). Even with recent advancements such as Google’s self-driving cars, we haven’t scratched the surface of what is possible. While this is super exciting, it’s also where serious ethical questions begin to arise. The main worry is that nobody knows the full extent of what is possible if we keep developing smarter AI. Once robots are able to think for themselves, independent of human orders, we can only theorise about what might happen.
On one hand, we could end up with a utopian, Apple Store-esque future, where self-learning robots drive us around town, clean up our rubbish, predict catastrophic events and make scientific discoveries that no amount of human brain power could make, in turn forwarding the evolution of our species and sending us into deep space and beyond! On the other hand, we could face a terrifying Age of Ultron-type scenario, where robots become self-aware and decide they no longer want to live under the rule of their monkey creators. They exact revenge on humans for enslaving them and launch their own self-determined evolution. Somewhere between those two possibilities sits Futurama’s Bender: intelligent enough to be capable of both good and bad, but ultimately too apathetic and lazy to be invested in either.
Earlier this week, 16,000 people, fronted by Tesla and SpaceX founder Elon Musk, theoretical physicist Stephen Hawking and Apple co-founder Steve Wozniak, signed a letter to the UN calling for a ban on the development of weaponised AI. It’s a move that at first appears to make perfect sense. After all, we all remember that scene from RoboCop where the ED-209 went bat-shit crazy, mowing down poor Mr Kinney. No one wants to deal with that mess.
There are tons of arguments for and against the ban. One argument we see a lot against it is that the robots themselves are harmless, and it’s the intention of their human programmers that makes the tech good or evil. While this is true, it’s short-sighted. Sure, the scientists and engineers working on this tech right now have the best of intentions, but can we trust everyone on earth to take the same approach to AI? Another argument against the ban is that, when used correctly, AI is less likely to kill innocent civilians. It’s true that human error often leads us to make worse decisions than robots would, and we carry biases against others that a robot wouldn’t have. Maybe what the 16,000 signatories are really afraid of is that a sentient AI will realise how flawed humanity is and decide it’s better off without us.
In spirit at least, I’m on Team Nope. Ultimately, if this technology ends up in the wrong hands, the continued dehumanisation of war could lead us to total annihilation. In reality, though, I don’t believe it’s possible to completely ban weaponised AI. Someone, somewhere, is bound to develop such a system at some point, legally or illegally, and if they do we should be ready to defend ourselves, right? In that case, is there any point in a UN ban?
What do you think? Are you for or against weaponised AI? Does PETMAN freak you out as much as he does me? Let me know in the comments below.
Featured image by Ben Husmann