
'Roboethics' Not Science Fiction Anymore

As policy lags behind tech advances, researchers try 'ethical' programming.

Emi Sasagawa | 14 Jun 2014 | TheTyee.ca

Emi Sasagawa is completing a practicum at The Tyee.

Can a robot love? Can it think? How about kill?

These questions have been endlessly explored in sci-fi novels, but lately they have become a topic of international diplomacy. The United Nations probed present-day robot ethics last month at the four-day Convention on Certain Conventional Weapons meeting in Geneva.

The meeting brought together experts and government officials to talk about the opportunities and dangers of killer robots. No international agreement was reached, but the discussion made clear that autonomous robot technology moves much faster than the policies governing it.

Meanwhile, here in B.C., robotics experts are investigating the ethical implications inherent in firsthand interactions between humans and robots.

At the heart of the debate is the question of where to draw the line. Whether we're talking about killer, caregiving ("assistive") or industrial robots, the key issues are the same: How far are we willing to delegate human tasks to a machine? And is it possible to create machines that think and behave in ways that minimize harm to humans?

Within the last decade, programming robots to think and behave ethically has shifted from an Isaac Asimov fantasy to a thriving area of study. In 2010, Michael Anderson, a computer science professor at the University of Hartford, and Susan Anderson, a philosophy professor at the University of Connecticut, programmed what they called the first ethical robot.

Four years ago, the married couple fused Michael's expertise in robotics and programming with Susan's work in ethics to test the limits of what a machine could do. They used NAO, a programmable human-like robot launched by Aldebaran Robotics in 2008, to conduct their experiments. At a prototypical level, they wanted to show that it was possible to control a robot's behaviour using an ethical principle.

To do this, they examined a seemingly innocuous task: reminding a patient to take their medication. But even that simple act has ethical implications. Is it ethical to have a robot remind a patient to take their medication? If so, how often should the reminders come? More importantly, how forceful should the robot be?

They then looked at different scenarios for the task. Say it was time to take the medication. NAO approaches its owner to remind them. The patient refuses. Should the robot insist? Should it come back later?

For each scenario, the Andersons determined an acceptable behaviour, or "right decision." Michael created an algorithm based on the sum of all these decisions, which then generated a general ethical principle, later encoded into NAO.

The researchers concluded that the ethical principle in this experiment was that "a health care robot should challenge a patient's decision -- violating the patient's autonomy -- whenever doing otherwise would fail to prevent harm or severely violate the duty of promoting patient welfare."

It meant that NAO had agency to decide whether a patient should be reminded to take their medication.
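The article doesn't give the Andersons' actual algorithm, but the principle they describe can be sketched as a simple duty-weighing rule. In the hypothetical Python below, the scenario fields, duty scores and thresholds are illustrative assumptions, not the researchers' real model:

```python
# Hypothetical sketch of the kind of principle described above: a
# reminder robot weighs duties (preventing harm, promoting welfare,
# respecting autonomy) and only challenges a patient's refusal when
# not doing so would fail to prevent harm or severely undermine the
# patient's welfare. Scores and thresholds are illustrative only.

from dataclasses import dataclass

@dataclass
class Scenario:
    harm_if_skipped: int   # 0 (none) .. 2 (serious harm expected)
    welfare_loss: int      # 0 (none) .. 2 (severe loss of benefit)
    patient_refused: bool  # has the patient already said "no"?

def decide(s: Scenario) -> str:
    """Return the robot's next action for a medication reminder."""
    if not s.patient_refused:
        return "remind"  # routine reminder, no conflict yet
    # Paraphrase of the principle quoted in the article: override
    # autonomy only when doing otherwise would fail to prevent harm
    # or severely violate the duty of promoting patient welfare.
    if s.harm_if_skipped >= 2 or s.welfare_loss >= 2:
        return "challenge refusal / notify overseer"
    return "respect refusal, try again later"

# Example: a refused dose whose omission causes no immediate harm
print(decide(Scenario(harm_if_skipped=0, welfare_loss=1, patient_refused=True)))
# -> "respect refusal, try again later"
```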

The Andersons recognize not everyone would support putting medical decisions in the hands of a machine. But they believe there are also ethical implications in not creating ethical robots that provide services society needs.

"Ethics is not only about what we, and robots, shouldn't do, but also what we, and robots, could and should do," said Michael. "If there is a need for certain machines and we can ensure that they will behave ethically, don't we have an obligation to create them?"

New research says no 'right' answer

Skepticism is not the only obstacle researchers like the Andersons face. Creating robots that know right from wrong implies that we, as humans, agree on what is ethical and what is not.

AJung Moon, a PhD candidate at the Collaborative Advanced Robotics and Intelligent Systems (CARIS) Lab at the University of British Columbia, has reservations about our ability to agree on a set of universal ethical principles. Moon said there is a difference between the appropriate decision and the right decision.

"I don't think a robot would be able to make the right decisions on their own, because I don't think people agree 100 per cent on what the right decision is," she added. According to her research, the "right" thing for a robot to do would actually be to consult humans and act based on this interaction.

Moon studies roboethics, a field exploring the ethical considerations that arise in the design, construction, use and treatment of artificially intelligent beings. Last month she conducted an experiment with other members of the CARIS Lab to investigate one of those considerations.

The researchers wanted to know what humans think a robot should do when it has to share a resource, like a copy machine, with a human. The resource can't be used by two individuals at once, so who should get access first? Would the answer change depending on the situation?

Moon asked a sample group to consider 12 different scenarios where an elevator was the shared resource. Variables changed from scenario to scenario. For example, the human could be inside the elevator or waiting with the robot. The human could be standing with nothing in hand, carrying heavy items, or in a wheelchair. Finally, the robot could be carrying either urgent or non-urgent mail.

For each scenario, respondents were asked to rank the appropriateness of the robot's behaviour. In one, the robot is carrying urgent mail. When it reaches the elevator, there is a person in a wheelchair waiting. What should it do?

In this case, Moon found the socially appropriate behaviour was for the robot to engage in conversation with the person and make a decision based on the interaction. For example, if the robot asked "Are you in a hurry?" and the person said yes, the robot would let them ride the elevator and wait for the next available one.
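A rough way to picture this interaction-based approach is a decision routine that, instead of ranking its own task above the human's, asks and yields based on the answer. The sketch below is a hypothetical illustration; the function names and the yes/no protocol are assumptions, not the CARIS Lab's implementation:

```python
# Hypothetical sketch of consulting the human before claiming a
# shared resource (here, an elevator), in the spirit of Moon's
# experiment. All names and the dialogue protocol are illustrative.

from typing import Callable

def resolve_elevator_conflict(robot_has_urgent_mail: bool,
                              human_waiting: bool,
                              ask_human: Callable[[str], bool]) -> str:
    """Decide who takes the shared elevator first."""
    if not human_waiting:
        return "robot rides now"
    if not robot_has_urgent_mail:
        return "robot yields to the human"
    # Conflict: urgent mail vs. a waiting person. Consult the human
    # instead of deciding unilaterally.
    if ask_human("Are you in a hurry?"):
        return "robot yields and waits for the next elevator"
    return "robot rides now, with the human's consent"

# Example run, with a canned answer standing in for real dialogue
print(resolve_elevator_conflict(True, True, lambda question: True))
# -> "robot yields and waits for the next elevator"
```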

Moon believes there is a lot more to a decision than a robot acting as an independent agent. "There is something to be said about studying dynamic communication and interaction, which considers more than a robot that just knows it all."

Licence to kill

Autonomous robots like NAO don't seem like such a bad idea, especially if they can help care for seniors. But on the flip side, with several high-tech militaries working on machines with combat autonomy, the reality of killer robots has become an international concern.

Killer robots -- or lethal autonomous weapons systems, as the UN calls them -- are machines capable of selecting targets and using force, lethal or otherwise, without meaningful human control. These devices are not a reality just yet. But several nations, including China, Israel, Russia, the United Kingdom and the United States, are already working on how to give greater combat autonomy to machines.

"As a concept, the killer robot represents a stark shift in automation policy -- a willful, intentional and unprecedented removal of humans from the kill decision loop," said Ian Kerr, Canada research chair in ethics, law and technology at the University of Ottawa.

Self-navigating cruise missiles and pilotless drones have already distanced humans from war through the physical separation of the machines and the individuals who control them. What will happen if we give military robots the power to kill on their own, without any human consultation?

Proponents say that substituting machines for humans will make the waging of war more effective, for example by reducing casualties. They believe killer robots will do better both physically and ethically, because they are not susceptible to the same risks -- bias, strong emotions, exhaustion -- that humans are. But according to Mary Wareham, co-ordinator of the Campaign to Stop Killer Robots, the dangers of developing these technologies outweigh any intended benefit.

She's concerned about moving away from a system where human decision-making is central to the waging of war, to one where machines have the power to make those decisions.

Wareham doubts such weapons would be able to comply with current international humanitarian law. "Also, there is an accountability gap. If a robot does something wrong, who is responsible? Who will be held accountable?" she asks.

She adds that her campaign doesn't aim to wholly prohibit autonomy or robotics. "We see a way to prohibit the development of fully autonomous military robots that would enable other robotic activities to continue," she said.

Last month the campaign, which is backed by over 50 non-governmental organizations worldwide, came to Canada to lobby the federal government to say "no" to killer robots. "Canada has yet to state a position as a stakeholder in this issue in any firm way," said the University of Ottawa's Kerr, who is also involved in the campaign.

The professor believes the solution lies in an international agreement. But as the technology advances faster than government is able to make decisions, basic guidelines are needed more than ever, he said.

"As robots become more and more prevalent in society, one of the main ethical questions becomes: when should a human being delegate what was once human activity or human decision-making to a machine system?" said Kerr.  [Tyee]
