
When Algorithms Decide Sentences and Fly Planes: The Growing Debate Over AI Liability


Hyderabad: Beyond the chatbot on your phone, it is no secret that artificial intelligence has found its way into courtrooms, aeroplane cockpits, hospitals and police stations, places where decisions alter liberty, safety and life expectancy. These systems are advancing quickly; clarity about who answers when they fail is not.

Human beings have always trusted tools, and have often depended on them too much. Errors in such crucial areas leave no room for casual experimentation. In a courtroom in New Zealand recently, Judge Tom Gilbert discovered that apology letters submitted in an arson case had been written with the help of artificial intelligence.

Remorse can reduce a sentence, but the judge questioned whether computer-generated contrition was personal moral reckoning at all. Research by psychologist Jim Everett at the University of Kent, based on studies involving around 4,000 participants, suggests people tend to see AI-assisted writing as less authentic and less trustworthy. In that case, the defendant received only partial credit for remorse. Efficiency met the limits of human judgment.

In aviation, the scenario is different, and larger: lives are at stake. Commercial aircraft rely on the autopilot for much of a flight, as Capt. Augustine Joseph, a seaplane operator, explained: “Once the aircraft reaches roughly 500 to 1,000 feet, autopilot is typically engaged.” Pilots programme altitude, speed and route, and continue to update the system when air traffic control issues new instructions. Capt. Joseph noted that advanced landing systems can guide an aircraft onto a runway without a pilot touching the controls.

“AI isn’t used so much in the flying part yet, but it’s going to come,” he said. He expected gradual change in predictive maintenance, crew management and traffic planning before any greater cockpit reliance on AI, but maintained that change would be slow. “Accountability and liability are the big questions. If something happens, who is responsible?”

That question becomes important when human instinct clashes with a system’s recommendation. “In the initial stages, it has to be a joint process between the human and the AI,” he said. Over time, if validation builds, pilots may be instructed to follow system guidance. Regulators would stand as gatekeepers, and liability would be shared across authorities, airlines and manufacturers.

Capt. C.J. Chandrasekhar, director at Sky Choppers Logistics, offered a more cautious view. “In civil aviation or commercial passenger aircraft where lives are involved, I don’t think AI will be successful,” he said. He accepted that AI has a role in ground handling and cargo logistics, perhaps in drones. “But not in passenger operations or aircraft operations. Definitely not.”

His concern was blunt. “How safe is AI? And who is responsible if something happens?” The same question of liability comes back.

Further, AI systems learn from historical data. In India, researchers have warned that models trained on social patterns can absorb caste hierarchies, with surnames and occupations becoming proxies for caste. Hyderabad-based software engineer Kiran R. pointed to personalisation as an everyday example.

“Just like your Instagram feed differs from someone else’s, the information and decisions by AI vary based on who you are,” he said. “A man and a woman may be shown different content. Someone from one class background may see different news than someone from another.”

When it comes to military use, the clearest example is the recent public statement from the AI company Anthropic, which described refusing US government applications such as mass domestic surveillance and fully autonomous weapons, even where legally permitted. The company argued that mass surveillance threatens democratic liberties and that current AI systems are not reliable enough to power weapons without human control. However, the Pentagon has signalled it would contract only with companies accepting “any lawful use”.

Healthcare and policing follow comparable paths. When a diagnosis is missed or a certain class of neighbourhood is over-policed, the trail of responsibility runs through data, software, institutional policy and human oversight. Courts then assess outcomes long after the design choices that produced them were made.

If an AI system one day recommends a sentence, flags a suspect, guides a plane or filters a medical test, and something goes wrong, who stands up to answer? The programmer who wrote the code, the company that sold it, the official who approved it, or the operator who deferred to it?


(Source: Deccan Chronicle)