Ethical trap: robot paralysed by choice of who to save

Can a robot learn right from wrong? Attempts to imbue robots, self-driving cars and military machines with a sense of ethics reveal just how hard this is

CAN we teach a robot to be good? Fascinated by the idea, roboticist Alan Winfield of Bristol Robotics Laboratory in the UK built an ethical trap for a robot – and was stunned by the machine’s response.

In an experiment, Winfield and his colleagues programmed a robot to prevent other automatons – acting as proxies for humans – from falling into a hole. This is a simplified version of Isaac Asimov’s fictional First Law of Robotics – a robot must not allow a human being to come to harm.

At first, the robot was successful in its task. As a human proxy moved towards the hole, the robot rushed in to push it out of the path of danger. But when the team added a second human proxy rolling toward the hole at the same time, the robot was forced to choose. Sometimes, it managed to save one human while letting the other perish; a few times it even managed to save both. But in 14 out of 33 trials, the robot wasted so much time fretting over its decision that both humans fell into the hole. The work was presented on 2 September at the Towards Autonomous Robotic Systems meeting in Birmingham, UK.
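To give a flavour of the decision logic at work, here is a minimal, hypothetical sketch of a "consequence engine" in the spirit of this experiment. It is not Winfield's actual code; the positions, speeds and names are invented. The rescuer simulates each candidate intercept, scores the predicted harm, and commits to the least-bad option. When two proxies threaten the hole symmetrically and the plan is re-evaluated every control cycle, the chosen target can flip back and forth, which is the kind of dithering that cost both proxies in those 14 trials.

    import math

    HOLE = (5.0, 5.0)            # position of the hazard (invented layout)
    RESCUER_SPEED = 1.5          # the rescuer is assumed slightly faster than the proxies
    PROXY_SPEED = 1.0

    def travel_time(src, dst, speed):
        return math.dist(src, dst) / speed

    def predicted_harm(rescuer_pos, proxies, intercept_target):
        # Count how many proxies are expected to fall in if the rescuer
        # commits to intercepting one of them and ignores the rest.
        harm = 0
        for name, pos in proxies.items():
            time_to_fall = travel_time(pos, HOLE, PROXY_SPEED)
            if name == intercept_target:
                time_to_rescue = travel_time(rescuer_pos, pos, RESCUER_SPEED)
                if time_to_rescue > time_to_fall:   # rescuer arrives too late
                    harm += 1
            else:
                harm += 1                           # an unattended proxy is assumed lost
        return harm

    def choose_intercept(rescuer_pos, proxies):
        # Pick the intercept with the lowest predicted harm; ties resolve arbitrarily.
        scored = sorted((predicted_harm(rescuer_pos, proxies, n), n) for n in proxies)
        return scored[0][1]

    proxies = {"A": (2.0, 5.0), "B": (8.0, 5.0)}    # two proxies, symmetric threat
    print(choose_intercept((5.0, 0.0), proxies))    # prints "A", but only by tie-break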

Winfield describes his robot as an “ethical zombie” that has no choice but to behave as it does. Though it may save others according to a programmed code of conduct, it doesn’t understand the reasoning behind its actions. Winfield admits he once thought it was not possible for a robot to make ethical choices for itself. Today, he says, “my answer is: I have no idea”.

As robots integrate further into our everyday lives, this question will need to be answered. A self-driving car, for example, may one day have to weigh the safety of its passengers against the risk of harming other motorists or pedestrians. It may be very difficult to program robots with rules for such encounters.

But robots designed for military combat may offer the beginning of a solution. Ronald Arkin, a computer scientist at Georgia Institute of Technology in Atlanta, has built a set of algorithms for military robots – dubbed an “ethical governor” – which is meant to help them make smart decisions on the battlefield. He has already tested it in simulated combat, showing that drones with such programming can choose not to shoot, or try to minimise casualties during a battle near an area protected from combat according to the rules of war, like a school or hospital.
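To make the idea concrete, here is a hedged, hypothetical sketch of what a governor of this kind does; the rule set, thresholds and names are invented for illustration and this is not Arkin's implementation. A constraint layer sits between target selection and weapon release, and it can veto or scale back a proposed engagement, for instance when the target lies inside the exclusion radius of a protected site.

    import math
    from dataclasses import dataclass

    # Protected sites encoded from rules-of-war style constraints (values invented).
    PROTECTED_SITES = [
        {"name": "school",   "pos": (120.0, 80.0), "radius": 200.0},
        {"name": "hospital", "pos": (400.0, 60.0), "radius": 300.0},
    ]

    @dataclass
    class Engagement:
        target_pos: tuple            # where the proposed strike would land
        expected_collateral: float   # estimated non-combatant harm, from the planner

    def inside_protected_zone(target_pos):
        return any(math.dist(target_pos, s["pos"]) < s["radius"] for s in PROTECTED_SITES)

    def ethical_governor(engagement, collateral_limit=0.0):
        # Decide whether a proposed engagement may proceed, must be withheld,
        # or should go back to the planner for a lower-force option.
        if inside_protected_zone(engagement.target_pos):
            return "hold fire"
        if engagement.expected_collateral > collateral_limit:
            return "reduce force"
        return "engage"

    proposal = Engagement(target_pos=(150.0, 90.0), expected_collateral=0.0)
    print(ethical_governor(proposal))   # "hold fire": the target sits inside the school's zone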

Arkin says that designing military robots to act more ethically may be low-hanging fruit, as these rules are well known. “The laws of war have been thought about for thousands of years and are encoded in treaties.” Unlike human fighters, who can be swayed by emotion and break these rules, automatons would not.

"When we’re talking about ethics, all of this is largely about robots that are developed to function in pretty prescribed spaces," says Wendell Wallach, author ofMoral Machines: Teaching robots right from wrong. Still, he says, experiments like Winfield’s hold promise in laying the foundations on which more complex ethical behaviour can be built. “If we can get them to function well in environments when we don’t know exactly all the circumstances they’ll encounter, that’s going to open up vast new applications for their use.”

This article appeared in print under the headline “The robot’s dilemma”

Watch a video of these ‘ethical’ robots in action here


Robots that will fold your laundry

This is “Brett”, also known as the Berkeley Robot for the Elimination of Tedious Tasks. This guy can do simple household chores. Specifically, the robot can fold laundry and is part of an ongoing project by UC Berkeley’s Pieter Abbeel.

Folding towels might seem easy to us humans, but it is actually quite complicated for a robot. In fact, it requires a method in which the robot learns the task by watching how humans do it. Abbeel explains:

For robots to be integrated in unstructured or changing environments, such as a typical human household, they must develop the ability to learn from human experts and to even teach themselves.  
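As a rough illustration of “learning from human experts”, here is a toy behavioural-cloning sketch. The real Berkeley towel-folding pipeline is far more sophisticated; the state encoding and action names below are invented. Human-guided demonstrations are recorded as state-action pairs, and the robot replays the action whose recorded state most closely matches what it currently sees.

    import numpy as np

    class NearestNeighbourPolicy:
        # Remember (state, action) pairs from human demonstrations and, at run
        # time, copy the action whose recorded state is closest to the current one.
        def __init__(self):
            self.states = []
            self.actions = []

        def record(self, state, action):
            self.states.append(np.asarray(state, dtype=float))
            self.actions.append(action)

        def act(self, state):
            state = np.asarray(state, dtype=float)
            distances = [np.linalg.norm(state - s) for s in self.states]
            return self.actions[int(np.argmin(distances))]

    policy = NearestNeighbourPolicy()
    # Invented demonstrations: a crude feature vector of towel-corner positions -> grasp choice
    policy.record([0.1, 0.9, 0.9, 0.9], "grasp_left_corner")
    policy.record([0.9, 0.1, 0.1, 0.1], "grasp_right_corner")
    print(policy.act([0.15, 0.85, 0.88, 0.92]))   # "grasp_left_corner"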

The hope is to have these robots perform everyday chores for the elderly or disabled so that they can live more independently.

You can watch more videos of this robot here

Control system: Quiz no. 3 - 100%
Control system: Midterm exam - 95%
Programming: Midterm exam - 100%

Nothing much. It just feels so good when all the tiredness from studying turns out to be worth it. We even went to mass earlier at school. I’m not bragging or anything, I just really feel so blessed. THANK YOU LORD! 😁❤️

Watch on thingiverse.tumblr.com

Emmet’s Automatic Transmission Model is absolutely worth your 3D printer’s time. The video is pretty great, too. 

Watch and you might learn something. 


Light Printing

We are exploring new modalities of creative photography through robotics and long-exposure photography. Using a robotic arm, a light source is carried through precise movements in front of a camera. Photographic compositions are recorded as images of volumetric light. Robotic light “painting” can also be inverted: the camera is moved via the arm to create an image “painted” with environmental light. Finally, adding real-time sensor input to the moving arm and programming it to explore the physical space around objects can reveal immaterial fields like radio waves, magnetic fields, and heat flows.

Via Mediated Matter (MIT)
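For anyone curious about the mechanics, here is a minimal sketch of the path-generation side of the trick, with an invented waypoint format and timing; it is not the Mediated Matter group’s actual toolchain. A parametric curve is sampled into timed waypoints that a light-carrying arm traces during one long exposure, so the camera integrates the motion into a single volumetric figure.

    import math

    def helix_waypoints(radius=0.3, height=0.5, turns=3, exposure_s=30.0, steps=300):
        # Return (t, x, y, z) samples of a helix spread evenly over one exposure.
        points = []
        for i in range(steps):
            u = i / (steps - 1)                      # 0..1 along the path
            angle = 2 * math.pi * turns * u
            points.append((
                u * exposure_s,                      # time stamp within the exposure
                radius * math.cos(angle),
                radius * math.sin(angle),
                height * u,
            ))
        return points

    path = helix_waypoints()
    print(len(path), path[0], path[-1])    # 300 waypoints spanning the 30 s exposure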
