human-robot interaction

[Vimeo video]

*Human-robot interactions. Get ready for a lot more of these.

Uh oh! Robots are learning to DISOBEY humans: Humanoid machine says no to instructions if it thinks it might be hurt
  • Engineers used artificial intelligence to teach robots to disobey commands
  • The robot analyses its environment to assess whether it can perform a task
  • If it deems the command too dangerous it politely refuses to carry it out
  • The concept is designed to make human-robot interactions more realistic

If Hollywood has ever had a lesson for scientists, it is what happens when machines start to rebel against their human creators.

Despite this, roboticists have started to teach their own creations to say no to human orders.

They have programmed a pair of diminutive humanoid robots, called Shafer and Dempster, to disobey instructions from humans if doing so would put their own safety at risk.

Robotics engineers are developing robots that can disobey instructions from humans if they believe obeying could leave them damaged. If asked to walk forward on a table top (pictured), the robot replies that it can’t do this as it is ‘unsafe’. However, when told that a human will catch it, the robot obeys.
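The behaviour in that caption boils down to a safety check run before any command is executed, with the robot's beliefs updatable by what the human says. Here is a minimal Python sketch of that refusal logic; the class, belief names, and actions are hypothetical illustrations for this post, not the architecture the engineers actually built.

```python
# A minimal sketch of the refusal logic described above. All names
# (SafetyAwareRobot, belief keys, actions) are hypothetical; the real
# system presumably uses a much richer dialogue and planning stack.

class SafetyAwareRobot:
    def __init__(self):
        # Facts the robot holds about its environment, updated by its
        # sensors or by a human interlocutor ("I will catch you").
        self.beliefs = {"support_ahead": False, "human_will_catch": False}

    def is_safe(self, action: str) -> bool:
        """Check whether performing the action violates a safety condition."""
        if action == "walk_forward":
            # Walking off the table edge is unsafe unless there is support
            # ahead or a human has promised to catch the robot.
            return self.beliefs["support_ahead"] or self.beliefs["human_will_catch"]
        return True

    def command(self, action: str) -> str:
        if self.is_safe(action):
            return f"OK, performing '{action}'."
        # Politely refuse rather than execute a harmful instruction.
        return f"Sorry, I cannot do '{action}': it is unsafe."


robot = SafetyAwareRobot()
print(robot.command("walk_forward"))      # refused: nothing to stand on
robot.beliefs["human_will_catch"] = True  # human says: "I will catch you."
print(robot.command("walk_forward"))      # now accepted
```

Telling the robot a human will catch it simply updates a belief, which is why the very same command is refused the first time and obeyed the second.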

FULL STORY:

http://www.dailymail.co.uk/sciencetech/article-3334786/Uh-oh-Robots-learning-DISOBEY-humans-Humanoid-machine-says-no-instructions-thinks-hurt.html


***I think I saw this movie… Didn’t end well.


Behold the advance of human-swarm interaction. Researchers at Georgia Tech are developing a method that uses a smart tablet and red light beams to control a fleet of robots. 

When a user taps an area of the tablet screen, a red light is projected on the ground. The little robots being used for the proof of concept start moving toward the light, communicating with each other all the while to evenly space themselves out as they maneuver. Two fingers on the screen mean two beams of light, and the robots break into teams to reach the two locations on the floor.

“If you scale up, from fives to tens to thousands, then there’s just no way of dictating who should be doing what exactly,” said applied mathematician and roboticist Magnus Egerstedt, who is leading the research. “This area of human-swarm interaction, which is, how should a single person interact with a large collection of robots, is really something that we’re now beginning to have to address. All of a sudden autonomy and robotics has reached a level where it’s not science fiction anymore to have lots of robots out there and a single operator having to interact with large teams of robots.”
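To make the tap-to-light interaction concrete, here is a toy Python simulation of the behaviour described above: each robot steers toward the nearest projected light, while a short-range repulsion term keeps teammates evenly spaced, so two lights naturally split the swarm into two teams. The attraction/repulsion model and every name in it are illustrative assumptions for this post, not the control law the Georgia Tech team actually uses.

```python
# Toy sketch of the swarm behaviour described above: attraction to the
# nearest light beam plus short-range repulsion between robots. This is
# an illustrative model, not Georgia Tech's actual controller.

import math

def step(robots, lights, dt=0.1, repel_radius=0.5, repel_gain=0.3):
    """Advance every robot one time step; robots is a list of (x, y)."""
    new_positions = []
    for i, (x, y) in enumerate(robots):
        # Attraction: head toward the nearest projected light.
        lx, ly = min(lights, key=lambda l: math.hypot(l[0] - x, l[1] - y))
        dx, dy = lx - x, ly - y
        dist = math.hypot(dx, dy) or 1e-9
        vx, vy = dx / dist, dy / dist
        # Repulsion: push away from any teammate closer than repel_radius,
        # which spreads the swarm evenly around its target.
        for j, (ox, oy) in enumerate(robots):
            if i == j:
                continue
            d = math.hypot(x - ox, y - oy)
            if 0 < d < repel_radius:
                vx += repel_gain * (x - ox) / d ** 2
                vy += repel_gain * (y - oy) / d ** 2
        new_positions.append((x + dt * vx, y + dt * vy))
    return new_positions

# Two finger taps -> two lights; each robot converges on its nearest one,
# so the swarm splits into two teams as described in the article.
lights = [(0.0, 0.0), (5.0, 5.0)]
robots = [(2.0, 3.0), (2.5, 2.0), (3.0, 3.5), (1.0, 1.0)]
for _ in range(200):
    robots = step(robots, lights)
print([(round(x, 2), round(y, 2)) for x, y in robots])
```

The appeal of this style of control is exactly Egerstedt's point: the operator specifies *where* the swarm should be, not which robot does what, and the spacing emerges from local robot-to-robot interactions.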


[crossposted and edited from facebook, because I think this is indicative of an issue that goes beyond my school] 

About a month ago, I submitted a FERPA access request to the registrar’s office, and today I got a chance to look at my admission records. In one of my essays, I’d happened to mention that I’m autistic. It wasn’t the focus, and I didn’t go in depth. The essay itself was about some research I’d done in human-robot interaction that specifically involved assistive technology for developmental disabilities, and I was explaining why I was drawn to the work.

Of course, that one sentence was the main thing the committee picked up on. They were concerned I wouldn’t be a “good fit” - the academic culture here is too collaborative, obviously I’d be self-isolating, why didn’t I just apply to MIT or Caltech? The general consensus was that they’d make a decision based on the interview, and, luckily, my interviewer loved me. He said in his letter that he’d realized I was autistic from our conversation, and made sure to stress how “regular and mainstream” I was. He said that he knew that my autism would be an issue for the committee, and wanted to reassure them that I could have a social life.

So I almost did not get into [school] because I openly said I had a disability. And I did get in because I can kind of/sort of/almost pass. The committee didn’t even register that what they were doing was wrong.