Controversial project used as an example of success

AI task force report does not address the ethical implications but sees cost, technology availability and capability deficits as challenges.

Hyderabad: India’s Artificial Intelligence (AI) Task Force Report cites Google’s involvement in Project Maven as an example of a thriving application of AI in national security. Project Maven is a US Department of Defense (DoD) AI project to analyse drone video footage, which could potentially identify human targets and improve drone strikes on the battlefield.

Over the last two months, Google employees have criticised the company for working on a project for targeted killings, and some of them even resigned over it. In an open letter to Sundar Pichai, they claimed that the US military could weaponise AI and apply the technology towards refining drone strikes and conducting other kinds of lethal attacks.

The task force, led by Tata Sons Chairman N. Chandrasekaran, submitted its final report on using AI for military superiority to Defence Minister Nirmala Sitharaman in July. The report does not address the ethical implications, but sees cost, technology availability and capability deficits as the challenges to implementing AI in India.

Experts opine that such systems have massive repercussions for freedom of expression and privacy, and are also capable of making grave errors.

Ms Vidushi Marda, a lawyer and researcher, said, “The argument for using AI for national security is that these machines learn from large amounts of data, and because of this knowledge they are all-knowing and can keep us safe. The reality is less comforting. AI does not understand context or social and cultural nuance.” She added that some facial recognition applications tested on large crowds in India, for example, have shown abysmal levels of accuracy. AI has often been criticised for being inaccurate and for having a tendency to disadvantage marginalised and vulnerable groups.

Another lawyer, speaking on condition of anonymity, said, “The term ‘national security’ is often loosely used, and the kind of data used in autonomous surveillance systems may not protect basic fundamental rights. Added to that, the task force document lists how Project Maven’s transformation of tactical data analysis is valuable in fighting ISIS.”