
Captives #B04

Documentation from artist Quayola of a stone sculpture being formed with an industrial robot to create a work combining the contemporary and the classical:

Captives is an ongoing series of digital and physical sculptures, a contemporary interpretation of Michelangelo’s unfinished series “Prigioni” (1513-1534) and his technique of “non-finito”.

The work explores the tension and equilibrium between form and matter, man-made objects of perfection and complex, chaotic forms of nature. Whilst referencing Renaissance sculptures, the focus of this series shifts from pure figurative representation to the articulation of matter itself. As in the original “Prigioni” the classic figures are left unfinished, documenting the very history of their creation and transformation.

Mathematical functions and processes describe computer-generated geological formations that evolve endlessly, morphing into classical figures. Industrial computer-controlled robots sculpt the resulting geometries into life-size “unfinished” sculptures.
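To make that generative idea concrete, here is a minimal sketch (in no way Quayola's actual pipeline; every name and number is illustrative) of how a noise-perturbed "geological" point cloud can be blended toward a target figure with a single morph parameter, stopping partway for a "non-finito" result:

```python
# Minimal sketch (not Quayola's pipeline): blend a noise-perturbed "rock"
# point cloud toward a target "figure" point cloud with a morph parameter t.
import numpy as np

rng = np.random.default_rng(0)

def unit_sphere_points(n):
    """Sample n roughly uniform points on the unit sphere."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def rocky_formation(points, roughness=0.35):
    """Displace points outward with smooth pseudo-random noise."""
    phase = rng.uniform(0, 2 * np.pi, size=3)
    noise = sum(np.sin(4 * points[:, i] + phase[i]) for i in range(3))
    return points * (1.0 + roughness * noise[:, None])

def morph(rock, figure, t):
    """Interpolate between the chaotic formation (t=0) and the figure (t=1)."""
    return (1.0 - t) * rock + t * figure

n = 2000
base = unit_sphere_points(n)
rock = rocky_formation(base)
figure = base * np.array([0.6, 1.4, 0.6])   # stand-in for a scanned classical figure
half_finished = morph(rock, figure, 0.5)     # "non-finito": stop partway
print(half_finished.shape)                   # (2000, 3) vertices ready for meshing/carving
```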

More Here

Robotic gloves developed to give stroke patients therapy at home.

A team of European researchers has been developing robotic gloves aimed at helping stroke victims receive advanced therapy at home. The SCRIPT project (Supervised Care and Rehabilitation Involving Personal Tele-robotics) has led to two prototypes that help develop hand and wrist movement while monitoring and recording the patient’s ability to perform a variety of tasks.
The system is designed to allow patients to continue receiving therapy at home once in-clinic rehab sessions are over. The hope is that well-targeted therapy in the comfort of the home will lead to meaningful improvements in patients who might otherwise plateau in their motor ability.
Dr Farshid Amirabdollahian, a senior lecturer in adaptive systems at the University of Hertfordshire’s School of Computer Science who co-ordinated the project, said: “This project focused on therapies for stroke patients at home. Our goal was to make motivating therapies available to people to practise at home using this system, hoping that they have a vested interest to practise and will do so. We tried this system with 30 patients and found that patients indeed practised at home, on average around 100 minutes each week, and some showed clinical improvements in their hand and arm function.”
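The clinical value of home practice rests on measuring what patients actually do, and whether they keep improving. As a purely hypothetical illustration (this is not the SCRIPT project's software; the data structure, scores, and thresholds are invented), weekly practice minutes and task scores might be aggregated like this, with a simple flag for a plateau in performance:

```python
# Hypothetical sketch (not the SCRIPT project's software): aggregate home
# therapy sessions per week and flag a plateau in task scores.
from dataclasses import dataclass
from datetime import date
from collections import defaultdict

@dataclass
class Session:
    day: date
    minutes: int        # time spent practising
    task_score: float   # illustrative 0-100 performance score

def weekly_summary(sessions):
    """Group sessions by ISO week: total minutes and mean score per week."""
    weeks = defaultdict(list)
    for s in sessions:
        weeks[s.day.isocalendar()[:2]].append(s)
    return {wk: (sum(s.minutes for s in ss),
                 sum(s.task_score for s in ss) / len(ss))
            for wk, ss in sorted(weeks.items())}

def plateaued(summary, min_gain=1.0):
    """Flag a plateau if the weekly mean score has stopped improving."""
    scores = [score for _, score in summary.values()]
    return len(scores) >= 3 and all(b - a < min_gain
                                    for a, b in zip(scores, scores[1:]))

sessions = [Session(date(2014, 9, d), 25, 40 + d) for d in (1, 3, 5, 8, 10, 12)]
summary = weekly_summary(sessions)
print(summary, plateaued(summary))
```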

[YouTube video]

Zoobotics: Introducing Pleurobot

EPFL has developed a salamander-like robot called Pleurobot that not only looks like an animal designed by Neil Gaiman but actually behaves like one (talking features excluded). Its motion is based on 3D X-ray movies of a real salamander walking and swimming.

Contrary to our previous bio-inspired approaches, in this new approach we make use of recent advances in cineradiography to benefit from the advantages that a biomimetic design can offer. We recorded three-dimensional X-ray videos of salamanders, Pleurodeles waltl, walking on the ground, walking underwater and swimming. Tracking up to 64 points on the animal’s skeleton, we were able to record three-dimensional movements of bones in great detail. Using optimization on all the recorded postures for the three gaits, we deduced the number and position of active and passive joints needed for the robot to reproduce the animal movements with reasonable accuracy in three dimensions.
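To give a flavour of that last step (a toy sketch only, not EPFL's optimization; the synthetic "tracking data" stands in for the real cineradiography recordings), one can approximate each tracked spine frame with a chain of rigid segments and ask how few joints keep the reconstruction error reasonable:

```python
# Toy sketch of the joint-count question (not EPFL's actual code): given
# tracked spine points over many frames, how many rigid segments are needed
# to reproduce the motion within a tolerance?
import numpy as np

def spine_frames(n_points=64, n_frames=40):
    """Synthetic stand-in for X-ray tracking: a travelling body wave."""
    s = np.linspace(0.0, 1.0, n_points)
    t = np.linspace(0.0, 2 * np.pi, n_frames)
    x = np.tile(s, (n_frames, 1))
    y = 0.08 * np.sin(2 * np.pi * 2 * s[None, :] - t[:, None])
    return np.stack([x, y], axis=-1)            # (frames, points, 2)

def piecewise_fit(frame, n_segments):
    """Approximate one frame's spine by n_segments straight segments."""
    idx = np.linspace(0, len(frame) - 1, n_segments + 1).astype(int)
    approx = np.empty_like(frame)
    for a, b in zip(idx[:-1], idx[1:]):
        w = np.linspace(0, 1, b - a + 1)[:, None]
        approx[a:b + 1] = (1 - w) * frame[a] + w * frame[b]
    return approx

def error_for(frames, n_segments):
    """Mean point-wise error over all frames for a given segment count."""
    return np.mean([np.linalg.norm(piecewise_fit(f, n_segments) - f, axis=1).mean()
                    for f in frames])

frames = spine_frames()
for k in range(2, 12):
    print(k, "segments -> mean error", round(error_for(frames, k), 4))
# Pick the smallest k whose error is "reasonable" for the robot's spine.
```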

What’s next? Pleurobot could be part of near-future search-and-rescue applications, and "In the future, we plan to use Pleurobot’s design methodology to bring early tetrapods to ‘life’." Dinosaurs FTW.

[read more] [via IEEE] [picture credits: Konstantinos Karakasiliotis & Robin Thandiackal, BioRob, EPFL, 2013]

Robot model for infant learning shows bodily posture may affect memory and learning

An Indiana University cognitive scientist and collaborators have found that posture is critical in the early stages of acquiring new knowledge.

The study, conducted by Linda Smith, a professor in the IU Bloomington College of Arts and Sciences’ Department of Psychological and Brain Sciences, in collaboration with a roboticist from England and a developmental psychologist from the University of Wisconsin-Madison, offers a new approach to studying the way “objects of cognition,” such as words or memories of physical objects, are tied to the position of the body.

"This study shows that the body plays a role in early object name learning, and how toddlers use the body’s position in space to connect ideas," Smith said. "The creation of a robot model for infant learning has far-reaching implications for how the brains of young people work."

The research, “Posture Affects How Robots and Infants Map Words to Objects,” was published today in PLOS ONE, an open-access, peer-reviewed online journal.

Using both robots and infants, researchers examined the role bodily position played in the brain’s ability to “map” names to objects. They found that consistency of the body’s posture and its spatial relation to an object, while the object was shown and its name spoken aloud, was critical to successfully connecting the name to the object.

The new insights stem from the field of epigenetic robotics, in which researchers are working to create robots that learn and develop like children, through interaction with their environment. Morse, the roboticist on the team, applied Smith’s earlier research to create a learning robot in which cognitive processes emerge from the physical constraints and capacities of its body.

"A number of studies suggest that memory is tightly tied to the location of an object," Smith said. "None, however, have shown that bodily position plays a role or that, if you shift your body, you could forget."

To reach these conclusions, the study’s authors conducted a series of experiments, first with robots programmed to map the name of an object to the object through shared association with a posture, then with children aged 12 to 18 months.

In one experiment, a robot was first shown an object situated to its left, then a different object to its right, and the process was repeated several times to create an association between each object and one of the robot’s two postures. Next, with no objects in place, the robot was directed to look toward the location where the left object had been and given a command that elicited the posture it had taken when viewing that object. The two objects were then presented in the same locations without naming, and finally in different locations as their names were spoken aloud. This caused the robot to turn and reach toward the object now associated with the name.

The robot consistently indicated a connection between the object and its name during 20 repeats of the experiment. But in subsequent tests where the target and another object were placed in both locations — so as to not be associated with a specific posture — the robot failed to recognize the target object. When replicated with infants, there were only slight differences in the results: The infant data, like that of the robot, implicated the role of posture in connecting names to objects.
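As a rough illustration of the idea (this is not the architecture Morse used in the study; the class, postures and novel word below are invented for the example), a learner that never links names to objects directly, only to whatever posture is active when each occurs, behaves the same way: a consistent posture carries the mapping, and mixed postures break it.

```python
# Toy sketch of posture-mediated word learning (illustrative only, not the
# study's model): names and objects are never paired directly; both are
# associated with the active body posture, and the shared posture links them.
from collections import defaultdict

class PostureLearner:
    def __init__(self):
        self.posture_to_objects = defaultdict(lambda: defaultdict(int))
        self.word_to_postures = defaultdict(lambda: defaultdict(int))

    def see(self, posture, obj):
        self.posture_to_objects[posture][obj] += 1

    def hear(self, posture, word):
        self.word_to_postures[posture][word] += 1

    def lookup(self, word):
        """Pick the object most strongly linked to the word via shared postures."""
        scores = defaultdict(int)
        for posture, words in self.word_to_postures.items():
            for obj, n in self.posture_to_objects[posture].items():
                scores[obj] += words[word] * n
        if not scores:
            return None
        ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
        if len(ranked) > 1 and ranked[0][1] == ranked[1][1]:
            return None                      # tie: no posture disambiguates
        return ranked[0][0]

# Consistent postures: the novel word is heard while in the left-reaching posture.
bot = PostureLearner()
for _ in range(5):
    bot.see("reach_left", "objA"); bot.see("reach_right", "objB")
bot.hear("reach_left", "modi")
print(bot.lookup("modi"))                    # -> objA

# Inconsistent postures: both objects seen at both locations, mapping fails.
bot2 = PostureLearner()
for _ in range(5):
    bot2.see("reach_left", "objA"); bot2.see("reach_left", "objB")
    bot2.see("reach_right", "objA"); bot2.see("reach_right", "objB")
bot2.hear("reach_left", "modi")
print(bot2.lookup("modi"))                   # -> None (no consistent posture)
```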

"These experiments may provide a new way to investigate the way cognition is connected to the body, as well as new evidence that mental entities, such as thoughts, words and representations of objects, which seem to have no spatial or bodily components, first take shape through spatial relationship of the body within the surrounding world," Smith said.

Smith’s research has long focused on creating a framework for understanding cognition that differs from the traditional view, which separates physical actions such as handling objects or walking up a hill from cognitive actions such as learning language or playing chess.

Additional research is needed to determine whether this study’s results apply to infants only, or more broadly to the relationship between the brain, the body and memory, she added. The study may also provide new approaches to research on developmental disorders in which difficulties with motor coordination and cognitive development are well-documented but poorly understood.