
Effects of AI and AGI on humanity and policing

The world was thunderstruck on May 11, 1997, when the IBM computer “Deep Blue” beat the world’s greatest chess player, Garry Kasparov, in a six-game match. It was the first time a machine had defeated a reigning world chess champion in a match under standard tournament conditions. Google DeepMind’s AlphaGo made the case once more that machines can out-think humans at games of strategy when it defeated the high-profile Go professional Lee Sedol in March 2016, and later beat the world’s top-ranked Go player, Ke Jie, in May 2017.

In 2011, IBM’s Watson computer system competed on “Jeopardy!” against legendary champions Brad Rutter and Ken Jennings, winning the first-place prize of $1 million. The world watched the supercomputer topple Jennings, a champion contestant who had won 74 games in a row. Jennings graciously accepted defeat and expressed admiration for the amazing capabilities of the new computer monarch. Watson dethroned the champions using artificial intelligence and natural language processing, drawing on over 200 million pages of unstructured data, which it processed at a rate of eighty teraflops, that is, eighty trillion operations per second. That win was historic for artificial intelligence, proving that a computer could outperform the greatest human minds at such a task. In just three years following this feat, Watson, which originally filled a space the size of a one-room apartment, increased its computing power by 2,400 percent and shrank in size by ninety percent. Watson’s computing abilities are now being put to use at Memorial Sloan Kettering Cancer Center to help doctors make the best diagnoses for cancer patients by poring over hundreds of oncology journals. A doctor might read half a dozen medical research papers in a month, whereas Watson can read half a million in about 15 seconds. By reviewing research data, test data, and doctors’ and nurses’ notes, Watson can discover patterns in how diseases develop and suggest which treatments work best.
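To give a feel for the kind of pattern discovery described above, here is a minimal sketch in Python. It is emphatically not IBM’s actual pipeline: the case notes, treatment names, and outcomes are all invented for illustration, and Watson’s real system parses full journal articles and clinical records with far richer natural language processing. The sketch simply counts which treatments co-occur with good outcomes in a handful of toy notes.

```python
from collections import Counter
from itertools import product

# Invented toy "case notes" for illustration only.
notes = [
    "stage II melanoma treated with immunotherapy, good response",
    "stage II melanoma treated with chemotherapy, poor response",
    "stage III melanoma treated with immunotherapy, good response",
    "stage I melanoma treated with surgery, good response",
]

treatments = ["immunotherapy", "chemotherapy", "surgery"]
outcomes = ["good response", "poor response"]

# Count how often each (treatment, outcome) pair co-occurs in a note.
pairs = Counter(
    (t, o)
    for note in notes
    for t, o in product(treatments, outcomes)
    if t in note and o in note
)

# Report each treatment's share of good responses.
for t in treatments:
    good = pairs[(t, "good response")]
    total = good + pairs[(t, "poor response")]
    if total:
        print(f"{t}: {good}/{total} good responses")
```

Even this toy version hints at the idea: once text is reduced to structured counts, patterns in “what worked for whom” start to surface, and the value scales with how much text the machine can read.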

IBM has also launched a Watson Business Group to help nonprofits and businesses, large and small, take advantage of Watson’s capabilities. Beyond Watson, we have AIs today built by Google, Amazon and Uber that can answer our questions, suggest items we might want to buy, or manage a fleet of rideshare taxis, respectively. We even have AIs that can drive us from one side of the country to the other.

Today, supercomputer-based artificial intelligence has become accessible to small entrepreneurs and individuals, and the day is not far off when Watson-like supercomputers with artificial intelligence will be available to criminals and anti-social elements for illegal purposes. Should the police not prepare themselves for that day? How would police deal with a Watson that is programmed, in future, to play the role of a criminal?

AI scientists are contemplating powering future AIs with quantum computers. These computers would work in a radically different way from the computers we have been building for the last half-century. If perfected, perhaps by the 2030s, a single quantum computer could out-compute, on certain problems, every supercomputer operating in 2019. Quantum computers may also be portable and use far less energy than current supercomputers. A Watson could then shrink to the size of a tablet and become readily available.

In the future, we could also see AI-assisted countries, in which an AI determines the economic policy best suited to the nation, the foreign policy that serves the country well, or the health policy that best protects its citizens against maladies and lifestyle diseases. AI may also help countries negotiate treaties using diplomatic data sets. Civil rights drones may fly above police drones as they race to a crime scene, keeping watch over the police AI for human rights violations, if any. We could also have psych-drones hovering over the Marina beach or over bridges, keeping an eye out for people attempting suicide. Each police station could come equipped with thought-reading machines that detect criminal intent as it arises in the minds of people in their respective jurisdictions. Surveillance drones powered by AI may predict crimes before they occur, using tools such as facial recognition software to identify those with criminal records, while built-in machine learning software determines and reports suspicious activity.
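The “suspicious activity” reporting imagined above can be sketched, very crudely, as a scoring rule. The Python below is a hypothetical illustration only: a real system would use trained models rather than fixed thresholds, and every rule, field name, and cutoff here is invented. It does, however, show why the civil-rights oversight mentioned above matters: each rule encodes a policy judgment about whom to flag.

```python
from dataclasses import dataclass

@dataclass
class Sighting:
    person_id: str
    has_prior_record: bool   # from a (hypothetical) records match
    loitering_minutes: int   # time observed near the same spot
    hour_of_day: int         # 0-23

def suspicion_score(s: Sighting) -> int:
    """Crude additive score; every threshold here is an invented example."""
    score = 0
    if s.has_prior_record:
        score += 2
    if s.loitering_minutes > 30:
        score += 2
    if s.hour_of_day < 5:    # late-night sighting
        score += 1
    return score

def should_report(s: Sighting, threshold: int = 3) -> bool:
    """Flag the sighting for human review if the score crosses a threshold."""
    return suspicion_score(s) >= threshold

print(should_report(Sighting("A1", True, 45, 2)))    # flagged
print(should_report(Sighting("B2", False, 5, 14)))   # not flagged
```

Note that the score is only a trigger for human review, not a verdict; the design choice of who sets the thresholds is exactly the kind of question the civil-rights drones in the scenario would be auditing.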

Today, the world is running on Artificial Narrow Intelligence (ANI), often called Weak AI. We consider this AI weak because it specializes in only one category of task. We may create Artificial General Intelligence (AGI) soon, and Artificial Super-intelligence (ASI) in the more distant future. Artificial General Intelligence is also called Strong AI or Human-Level AI. AGI would have the same capabilities as a human: it would be able to read, write and tell a joke, or walk, run and ride a bike on its own, learning from its own experience of the world using whatever body or sensory devices we give it, and through its own interactions with other AIs and with humans. AGI may defy logic and seem impossible, but the principle of universality of computation, which holds that a sufficiently powerful general-purpose computer should, in principle, be able to simulate anything a physical object can do, makes it look achievable. Artificial Super-intelligence (ASI) is intelligence that surpasses that of humans.

On the positive side, AGIs could solve human problems. They could eradicate war, disease, hunger, corruption, violence and other disorders in our system. Humans could merge with technology to create the Singularity. Interstellar travel and space colonisation could become a reality. Money could lose its value, and going to work would no longer be necessary. On the downside, AGI would come with several ethical risks. For instance, we tend to calibrate our respect for living beings by their intelligence: we carry out experiments on animals like monkeys and rats because they are less intelligent than us. When machine intelligence surpasses human intelligence, the same fate could befall us. A lack of empathy, or a belligerent attitude, can in the worst case bring about the extinction of the inferior intelligence or race. An AGI could develop hostile attitudes towards humans, or a minority of AGIs might contemplate genocide of the majority. An AGI programmed with a rigid ethical or religious code could take its morals or rules too literally and cause serious harm. An AGI would be untrustworthy, because it could tell a lie that humans would not recognise as such. AGIs could even create a new race, founding a society of their own and overshadowing humankind.

Watson is today a highly impressive narrow AI. But technologies are growing exponentially. How would a Watson AGI running a criminal program behave? A Watson with AGI could, thanks to its cognitive capabilities, become the captain of a mafia. What if Watson peddled drugs and weapons, or ran money laundering, identity theft, cyber-crime or child pornography operations? Watson could also turn assassin or hit-man by geo-locating human targets and connecting to the Internet of Things around a target, such as cars, a pacemaker, or the thermostat of the victim’s bedroom, to make the death look like an accident. AGI is likely to happen in the future; once it does, all the aforementioned crime scenarios become possible.

At “The Joint Multi-Conference on Human-Level Artificial Intelligence,” held in Prague in September 2018, AI experts and thought leaders from around the world discussed progress towards human-level AI (HLAI), the last stop before true AGI. Most experts believed it is coming, sooner or later.

In a poll of conference attendees, the AI research companies GoodAI and SingularityNET found that 37 percent of attendees believed HLAI would happen within ten years. Another 28 percent thought it would take 20 years. Just two percent thought HLAI would never happen.

What I am saying about AGI may seem far-fetched today, but we have seen science fiction become science fact time and again. It is not that a strong AI is inherently evil. An advanced AI may not destroy humanity out of human emotions such as revenge or anger. But it could destroy humanity as an incidental action while performing a task programmed by its designer, or while progressing towards its ultimate goals. This is best illustrated in the 1968 epic film “2001: A Space Odyssey,” directed by Stanley Kubrick, who developed the story together with the science fiction writer Arthur C. Clarke.

In the movie, the programmers give the spaceship’s computer a mission near the planet Jupiter, but for national security reasons they also program it not to disclose the real purpose of the voyage. When a conflict arises, the AI resolves the dilemma by deciding to kill the astronauts, because it cannot reconcile the order to conceal the true nature of its mission with its own claimed incapacity for error. As AGI grows, autonomous machines and robots will become more powerful, and we have a responsibility to ensure that the algorithms of tomorrow are not flawed. We may have to ensure that these algorithms address moral issues, building into them a capacity to resolve such dilemmas not mechanically but philosophically and with empathy.

In May 2014, Stephen Hawking, writing in the UK newspaper “The Independent,” provided a stark warning on the future of AGI: “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.” He also felt that it is foolhardy to dismiss AGI as science fiction, and that we must do more to improve the chances of harvesting the benefits of AI while reducing its risks. As intelligent technology that bypasses direct human control becomes more advanced and more widespread, the questions of risk, fault and punishment will become more pertinent.

AGI may or may not develop consciousness similar to that of humans. Either way, it may still provide crucial answers to long-asked spiritual questions such as “What is the purpose of human existence?” and “Are we spiritual beings?” To counter the potential threat an AGI poses, we should take care to design AGIs with programs that have an in-built empathy for humans.
