Humans have dominated the world for thousands of years. Scientifically speaking, humans are no different from other animals on this planet; what sets us apart is our intelligence. Humans are by nature a dominating species that considers itself superior, and as a result we unfortunately treat other species badly, whether by hunting them, holding them in captivity, or using them in experiments.
However, the closest thing to our intelligence is not a living species but our own creation: the computer. Invented in the 20th century as a means of calculation, computers have come a long way from being mere number-crunching machines.
Consider size, where multi-storied, tower-sized machines have evolved into the tiny pocket devices we call smartphones, or speed, where today's computers are billions of times faster than the first ever built. Computers have affected our lives so profoundly that it is hard to imagine the modern world without them, and automated computers running programmed instructions have made life easier still.
However, the rapid evolution of computers will not stop at speed improvements and size reductions. We have now reached the stage of creating computers with artificial intelligence, designed to speak and operate on their own with the aim of serving humans better. Take, for example, autonomous cars, smart appliances, and even android robots: the developments on these fronts are tremendous.
However, serving humans is just one angle. Militaries around the world are going as far as creating AI robots that can take the front line on a battlefield. Killing humans, the enemy, is what these robots would ultimately be designed for.
Nevertheless, in a recent interview published by The Telegraph (UK), leading artificial intelligence scientist Stuart Russell, a professor of computer science at the University of California, Berkeley, warned that the technology to create "killer robots" is already here and needs to be banned. He further said that "allowing machines to choose to kill humans" would be "devastating" for world peace and security.
The professor, who has worked in the field of artificial intelligence (AI) for more than 35 years, also warned that there is no guarantee that AI cannot go rogue.
We have seen plenty of science-fiction films depicting AI going rogue and taking over mankind. While much of what fiction once imagined has eventually come true, these scenarios too may be on their way to reality.
Pressure group ‘Campaign to Stop Killer Robots’ released a short (fictional) film it produced (in November 2017) to a meeting of countries participating in the Convention on Conventional Weapons, which painted a shocking and scary scenario based on existing technologies.
The video, entitled 'Slaughterbots', opens with an enthusiastic CEO on stage unveiling a new product to an excited crowd. Instead of a new smartphone or consumer tech innovation, he reveals a miniature drone that uses facial recognition to identify its target before delivering a small yet lethal explosive blast to the skull, somewhat like a heat-seeking missile, but far smaller and more precise.
The fictional CEO in the video boasts: “A $25 million order now buys this, enough to kill half a city — ‘the bad half’. Nuclear is obsolete, take out your entire enemy virtually risk-free. Just characterise him, release the swarm and rest easy.”
However, the film shows the weapons quickly falling into the hands of terrorists who use them to slaughter politicians and a classroom of students.
Professor Russell said: "This short film is more than just speculation; it shows the results of technologies that we already have. [AI's] potential to benefit humanity is enormous, even in defence. But allowing machines to choose to kill humans will be devastating to our security and freedom — thousands of my fellow researchers agree. We have an opportunity to prevent the future you just saw, but the window to act is closing fast."
More than 70 countries participating in the Convention on Conventional Weapons have been meeting in Geneva this week to discuss a potential worldwide ban on lethal robots. The convention has already prohibited weapons such as blinding lasers before they were widely acquired or used.
Autonomous weapons that have a degree of human control, such as drones, are already used by the militaries of advanced countries such as the UK, US, Israel, and China.
The Campaign to Stop Killer Robots is arguing that modern low-cost sensors and recent advances in artificial intelligence have made it possible to design a weapons system that could attack and kill without human control. Jody Williams, a 1997 Nobel Peace Laureate and co-founder of the campaign, said: “To avoid a future where machines select and attack targets without further human intervention, countries must draw the line against unchecked autonomy in weapon systems.
"With adequate political will, governments can negotiate an international treaty to ban killer robots, fully autonomous weapons, within two years' time."
Earlier in July SpaceX and Tesla head Elon Musk described AI as the “biggest risk we face as a civilisation” and warned that it needed to be regulated before “people see robots go down the street killing people”. Musk’s quote at the time was in response to Facebook shutting down an experiment where two artificially intelligent programs appeared to be chatting with each other in an unknown language only they understood.
All this brings us to a question: what makes us think that these non-living beings, without emotions, with access to the limitless data of the internet, and now able to think on their own, will follow our commands? Will future robots overpower us, wipe us out, and rule, or will AI evolve for the better and co-exist with the human race? Only time will tell.