Robot Scorpion. After nearly four years, authorities in Japan are still trying to get a good look at the nuclear fuel left inside the three reactors that melted down at the Fukushima power plant after an earthquake and tsunami slammed the coast. After multiple failed attempts with various robots, Toshiba has built this scorpion-inspired machine, which carries a camera instead of a stinger on its tail. Read more at PopSci


AADRL Spyropoulos Design Lab

Visual portfolio reel of various projects on self-assembling robotics that could apply to architecture - a good primer on current ideas in the field:

Research from the AADRL Spyropoulos Design Lab exploring an architecture that is self-aware, self-structuring and self-assembling. The research explores high populations of mobile agents that evolve an architecture that moves beyond the fixed and finite towards a behavioural model of interactive human and machine ecologies.

You can find out more about the AA DRL program at their website here


The robot that learns from its mistakes

UC Berkeley researchers have developed algorithms that enable robots to learn motor tasks through trial and error using a process that more closely approximates the way humans learn. This marks a major milestone in the field of artificial intelligence.

Motor learning is much more challenging than passive recognition of images and sounds.

The researchers demonstrated their technique, a type of reinforcement learning, by having a robot complete various motor tasks without pre-programmed details about its surroundings.

“Most robotic applications are in controlled environments where objects are in predictable positions,” said UC Berkeley faculty member Trevor Darrell, who is one of the leads on the project.

“The challenge of putting robots into real-life settings, like homes or offices, is that those environments are constantly changing. The robot must be able to perceive and adapt to its surroundings,” said Darrell.

Conventional but impractical approaches to helping a robot make its way through a 3D world include pre-programming it to handle the vast range of possible scenarios, or creating simulated environments within which the robot operates.

Instead, the researchers turned to a new branch of artificial intelligence known as deep learning, which is loosely inspired by the neural circuitry of the human brain when it perceives and interacts with the world.

Deep learning helps the robot recognize patterns and categories among the sensory data it is receiving.
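The Berkeley team's actual method (a form of deep reinforcement learning) is far more sophisticated, but the core trial-and-error idea can be sketched in a few lines. The toy task below, a robot tuning a single motor angle toward a target, the `reward` function, and the hill-climbing loop are all illustrative assumptions, not the researchers' code:

```python
import random

def reward(angle, target=0.7):
    # Illustrative reward: higher the closer the motor angle is to the target.
    return -abs(angle - target)

def trial_and_error(trials=500, step=0.05, seed=0):
    """Learn a motor parameter by trial and error:
    randomly perturb the angle, keep the change only if reward improves."""
    rng = random.Random(seed)
    angle = 0.0
    best = reward(angle)
    for _ in range(trials):
        candidate = angle + rng.uniform(-step, step)
        r = reward(candidate)
        if r > best:          # keep only improvements
            angle, best = candidate, r
    return angle

learned = trial_and_error()
print(round(learned, 2))  # converges near the 0.7 target
```

Nothing here is pre-programmed about where the target is; the robot only sees a scalar reward after each trial, which is the essential structure of reinforcement learning.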

Learn more about how deep learning works in the robot



360 video from Norwegian news site Teknisk Ukeblad looks inside a robotic automated warehouse.

Note that the video below should be viewed on a smartphone or tablet with the latest YouTube app installed so you can change your perspective. The video itself is unlisted, so it will not appear in search results.

[Google Translation:] When TU visited Komplett's expanded robotic warehouse, we mounted a 360-degree camera on top of robot no. 60. Here you can see a glimpse of the camera rig, which actually consists of two modified GoPro Hero4 cameras with fisheye lenses. Together they produce a video that lets you look in all directions.

More Here - Link to video Here