by Alexandra Bánfi
As a human race, we consistently strive for efficiency and convenience, a desire that is only growing as Artificial Intelligence fuels an obsession with technological ease. We strive for the Internet of Things (IoT), a concept in which all household items, from our microwave to our car, would be connected via the internet. Not only does this invention deepen society's idle reliance on technology, but a single weak spot in the network is enough to compromise its security. If somebody can hack into your microwave, for example, they gain easier access to other items, such as your car. The desire for the IoT threatens our ability to function as a society without technology, and it also introduces devices into our homes that could jeopardize our privacy. However, there are also less-discussed issues that could have detrimental effects on our society, such as lethal autonomous robots (LARs). We do not seem to be preparing ourselves for the possible repercussions of lethal AI in the wrong hands.
We currently only have Narrow AI, meaning a computer can match human intelligence only at the specific task it has been programmed to complete. AI in its current form doesn't pose much of a threat to society: it has no innate intelligence, and even where it operates independently it still requires human intervention. Deep learning, a subfield of machine learning, is the most recent revolution in AI. It comprises a family of algorithms that 'learn' and improve from data. A simple example: if we were to feed an algorithm numerous pictures of cats and dogs, in time it would learn to recognise the features and patterns specific to a cat or a dog, allowing it to classify the two. While this is a simplistic example, these algorithms can be scaled to much larger models, which have enabled consumer products such as Amazon's Alexa and autonomous cars. These developments in AI do pose a threat in terms of creating a society reliant on technology and eroding our privacy, but the AI itself has no innate intelligence or consciousness. Humans are still needed to design and develop the most suitable architecture for a deep learning model. General AI, the notion that machines could possess intelligence equal to a human mind, is currently impossible. This 'super-human' AI is at the root of many fears of AI, and rightly so. However, there is no specific timeline for when it will be possible.
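To make the cat-and-dog example concrete, here is a minimal sketch of how such a classifier might be built with a small convolutional network using the Keras API. The directory layout ("data/train" with one sub-folder per class), the image size and all other hyperparameters are illustrative assumptions, not a recipe the author describes.

```python
# Minimal sketch of a binary image classifier (cats vs. dogs) with Keras.
# Assumes a hypothetical directory layout: data/train/cat/*.jpg, data/train/dog/*.jpg
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Load labelled images; labels are inferred from the sub-folder names.
train_ds = keras.utils.image_dataset_from_directory(
    "data/train",
    image_size=(128, 128),
    batch_size=32,
    label_mode="binary",   # two classes -> a single 0/1 label
)

# A small convolutional network: stacked conv/pool layers pick up visual
# features (edges, textures, shapes); the final sigmoid outputs a probability.
model = keras.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=(128, 128, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# 'Learning from data': over repeated passes, the network adjusts its weights
# so its predictions better match the labelled examples.
model.fit(train_ds, epochs=5)
```

The same pattern, scaled up with far more layers and data, underlies the larger models behind products like voice assistants and autonomous cars.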
The notion of 'super-human' AI has plagued society with a fear of AI robots overthrowing the human race, a concern sensationalised by Hollywood. While this may become a concern in the future, for now it is unattainable, and there are many nearer-term concerns that must be acknowledged. Society's concerns can be divided into two main categories. Firstly, people fear that machines will lead to mass unemployment for humans; they will, to a certain degree, but they will also create numerous jobs. The issue is often oversimplified, and in reality it raises many ethical dilemmas. For example, if a surgeon has a 95% success rate but a robot has a 99.9% success rate, is it morally better to allow the surgeon to keep their livelihood or to give the patient a better chance at survival? The second fear is that robots will become sentient and intellectually superior to humans, threatening the existence of the human race. This extreme conception of the future of AI stems from ideas disseminated in Hollywood films. Perhaps we may achieve true, general AI in the very distant future. But in my admittedly non-scientific opinion, sentience cannot be mathematically programmed, and the idea of robots overthrowing the human race is highly unlikely.
While these concerns are merited, there are other implications of our current AI that pose frightening consequences for our society. The main concern, for me, is AI-driven LARs. In 2015, 3,978 AI and robotics researchers and 22,539 others signed an open letter voicing their fears about the lack of regulation of LARs, which could be "feasible within years, not decades". The development of LARs for warfare has been envisioned as the third revolution in weaponry, after gunpowder and nuclear arms, and experts anticipate "a global arms race". We are still living with the socio-political tremors that defined the latter part of the 20th century and embedded a global fear for the future. Why are we more concerned about the unrealistic notion of a robot takeover than about reliving the arms race? While these weapons would decrease the number of casualties for their owners, they would also lower the threshold for declaring war. I see a future in which diplomatic alternatives to war are cast aside as violent conflict becomes a more acceptable way to solve global issues.
Technology has developed at an unprecedented rate, but it will have to slow at some point. We should stop concerning ourselves with AI robots that will destroy humanity and concern ourselves with the hands in which AI currently resides. In an article for EURACTIV, Alexandra Brzozowski notes that "an artificial intelligence system is only as good as the data it is being fed." We should not be scared of the AI in itself, but of the people who are creating, using and feeding data to it. I find Amazon's Alexa scary because it provides an avenue through which microphones can be installed in our homes. I recognise that the AI is not the risk in itself; the risk lies in how the data it collects is used. The development of LARs is a more imminent threat, as it will shift moral compasses and destroy whatever humanity was present in war. While the idea of super-smart AI is truly frightening, it is inconceivable at present. Near-future developments in AI suggest that the downfall of the human race will be humans themselves, not a robot apocalypse.