
Soon, robots that can make moral decisions?

Can there be autonomous robots with a sense of right and wrong?

Washington: Scientists, including one of Indian origin, are exploring the challenges of developing robots capable of making moral decisions.

Researchers from Tufts University, Brown University, and Rensselaer Polytechnic Institute are teaming with the US Navy to explore the challenges of infusing autonomous robots with a sense of right and wrong, and of the consequences of both.

"Moral competence can be roughly thought about as the ability to learn, reason with, act upon, and talk about the laws and societal conventions on which humans tend to agree," said principal investigator Matthias Scheutz, professor of computer science at Tufts School of Engineering and director of the Human-Robot Interaction Laboratory (HRI Lab) at Tufts.

"The question is whether machines - or any other artificial system, for that matter - can emulate and exercise these abilities," Scheutz said.

The project, funded by the Office of Naval Research (ONR) in Arlington, will first isolate essential elements of human moral competence through theoretical and empirical research.

Based on those results, the researchers will develop verifiable formal frameworks for modelling human-level moral reasoning. Next, they will implement corresponding mechanisms for moral competence in a computational architecture.

"Our lab will develop unique algorithms and computational mechanisms integrated into an existing and proven architecture for autonomous robots," said Scheutz.

"The augmented architecture will be flexible enough to allow for a robot's dynamic override of planned actions based on moral reasoning," said Scheutz.

Once the architecture is established, the researchers can begin to evaluate how machines perform in human-robot interaction experiments in which robots face various dilemmas, make decisions, and explain those decisions in ways that are acceptable to humans.

Selmer Bringsjord, head of the Cognitive Science Department at RPI, and Naveen Govindarajulu, a post-doctoral researcher working with him, are focused on how to engineer ethics into a robot so that moral logic is intrinsic to these artificial beings.

In Bringsjord's approach, every robot decision would automatically pass through at least a preliminary, lightning-quick ethical check using simple logics inspired by today's most advanced artificially intelligent question-answering computers.

If that check reveals a need for deeper, deliberate moral reasoning, such reasoning would be triggered inside the robot, using newly invented logics tailor-made for the task.
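As a rough illustration of that two-tier scheme, the sketch below screens each decision with a fast rule-based check and escalates unresolved cases to a slower, deliberate step. The rules, names, and "deep reasoning" stub are invented for illustration and do not reflect RPI's actual logics.

```python
# Hypothetical two-tier ethical filter: every decision gets a fast check;
# only cases the fast check cannot settle are escalated to slow reasoning.

from enum import Enum


class Verdict(Enum):
    PERMIT = 1
    FORBID = 2
    UNCERTAIN = 3  # triggers the slow path


def fast_check(action: str) -> Verdict:
    """Lightning-quick screen: match against small permit/forbid lists."""
    if action in {"report_status", "recharge"}:
        return Verdict.PERMIT
    if action in {"fire_weapon_at_civilian"}:
        return Verdict.FORBID
    return Verdict.UNCERTAIN


def deep_reasoning(action: str, context: dict) -> Verdict:
    """Stand-in for deliberate moral reasoning over the full situation."""
    # Toy rule only: defer to explicit human approval in the context.
    if context.get("human_supervisor_approves"):
        return Verdict.PERMIT
    return Verdict.FORBID


def decide(action: str, context: dict) -> Verdict:
    verdict = fast_check(action)
    if verdict is Verdict.UNCERTAIN:
        verdict = deep_reasoning(action, context)
    return verdict


print(decide("enter_restricted_area", {"human_supervisor_approves": True}))
# -> Verdict.PERMIT
```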

"We're talking about robots designed to be autonomous; hence the main purpose of building them in the first place is that you don't have to tell them what to do," Bringsjord said.

"When an unforeseen situation arises, a capacity for deeper, on-board reasoning must be in place, because no finite rule set created ahead of time by humans can anticipate every possible scenario," Bringsjord added.

(Source: PTI)