Thumbs-up for mind-controlled robotic arm

A paralysed woman who controlled a robotic arm using just her thoughts has taken another step towards restoring natural movement by performing a range of complex hand movements with the arm.

Thanks to researchers at the University of Pittsburgh, Jan Scheuermann, who has longstanding quadriplegia and has been taking part in the study for over two years, has gone from giving “high fives” to the “thumbs-up” after increasing the manoeuvrability of the robotic arm from seven dimensions (7D) to 10 dimensions (10D).

The extra dimensions come from four hand movements – finger abduction, a scoop, thumb extension and a pinch – and have enabled Scheuermann to pick up, grasp and move a range of objects much more precisely than with the previous 7D control.

It is hoped that these latest results, which have been published in IOP Publishing’s Journal of Neural Engineering, can build on previous demonstrations and eventually allow robotic arms to restore natural arm and hand movements in people with upper limb paralysis.

Scheuermann, 55, from Pittsburgh, Pennsylvania, has been paralysed from the neck down since 2003 due to a neurodegenerative condition. After her eligibility for the research study was confirmed in 2012, she underwent surgery to be fitted with two quarter-inch electrode grids, each with 96 tiny contact points, in the regions of her brain responsible for right arm and hand movements.

After the electrode grids in Scheuermann’s brain were connected to a computer, creating a brain-machine interface, the 96 individual contact points picked up pulses of electricity that were fired between the neurons. Computer algorithms were used to decode these firing signals and identify the patterns associated with a particular arm movement, such as raising the arm or turning the wrist.
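The decoding step described above can be pictured with a toy sketch. The example below is purely illustrative and assumes a ridge-regularised linear mapping from the 96 channels' firing rates to a movement command; the study's actual decoding algorithms are more sophisticated, and all data here is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy calibration data: firing rates from 96 contact points recorded
# while the participant imagines movements with known intended outcomes.
n_samples, n_channels, n_dims = 500, 96, 10
true_weights = rng.normal(size=(n_channels, n_dims))
rates = rng.poisson(lam=5.0, size=(n_samples, n_channels)).astype(float)
intended = rates @ true_weights + rng.normal(scale=0.1, size=(n_samples, n_dims))

# Fit a ridge-regularised linear decoder: intended ≈ rates @ W
lam = 1.0
W = np.linalg.solve(rates.T @ rates + lam * np.eye(n_channels),
                    rates.T @ intended)

# Decode a new burst of activity into a 10-D movement command
# (translation, orientation, and hand shape combined).
new_rates = rng.poisson(lam=5.0, size=(1, n_channels)).astype(float)
command = new_rates @ W
print(command.shape)
```

The key idea is the calibration phase: the decoder's weights are learned from paired recordings of neural activity and intended movement, after which new activity can be translated into commands in real time.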

By simply thinking of controlling her arm movements, Scheuermann was then able to make the robotic arm reach out to objects, as well as move it in a number of directions and flex and rotate the wrist. It also enabled her to high-five the researchers and feed herself dark chocolate.

Two years on from the initial results, the researchers at the University of Pittsburgh have now shown that Scheuermann can successfully manoeuvre the robotic arm in a further four dimensions through a number of hand movements, allowing for more detailed interaction with objects.

The researchers used a virtual reality computer program to calibrate Scheuermann’s control over the robotic arm, and discovered that it is crucial to include virtual objects in this training period in order to allow reliable, real-time interaction with objects.

Co-author of the study Dr Jennifer Collinger said: “10D control allowed Jan to interact with objects in different ways, just as people use their hands to pick up objects depending on their shapes and what they intend to do with them. We hope to repeat this level of control with additional participants and to make the system more robust, so that people who might benefit from it will one day be able to use brain-machine interfaces in daily life.

“We also plan to study whether the incorporation of sensory feedback, such as the touch and feel of an object, can improve neuroprosthetic control.”

Commenting on the latest results, Scheuermann said: “This has been a fantastic, thrilling, wild ride, and I am so glad I’ve done this.

“This study has enriched my life, given me new friends and co-workers, helped me contribute to research and taken my breath away. For the rest of my life, I will thank God every day for getting to be part of this team.”


So after New Year’s, and after I get the 900 dollars of my Ambrosine suit paid off, I have decided to make the full-bodied animatronics. I just watched a brilliant tutorial. The skeleton was made from wood, wire and some type of caulking or stuffing. Then of course it was furred. It probably will take months. But slow and steady wins the race. I think I will have someone do the voicing and singing for the animatronics for me. Except for Clara of course. She’ll have my voice. I’m just going to keep it simple like the older animatronics from the 60s and 70s.

Maybe I could use these to impress my college professor XD After all, I do plan on teaming up with Scott after I graduate high school and maybe college, but I might make it while I am still in college. A real scale model of Freddy Fazbear’s pizzeria. Y’know, it’ll have the pizza and arcade. Woo, family fun. So it’ll be just a real pizza place, except the animatronics probably won’t kill anyone.



Japanese robotics fighting contest where the makers have no technical skills and the robots have to be as crappy as possible - video embedded below:

Hebocon is a robot sumo-wrestling tournament for those who don’t have the technical skills to actually make robots. Thirty-one pseudo-robots that don’t even move properly came together to go head to head. The robots in this tournament are nothing like what their builders initially imagined: robots get forgotten on the train, carefully planned strategies end only in failure, and entrants brag about secret moves their robots don’t really have. This is the heated battle between crappy but lovable robots.


Researcher Advances Robotic Surgery Technique to Treat Previously Inoperable Head and Neck Cancer Tumors

In a groundbreaking new study, UCLA researchers have for the first time advanced a robot-assisted surgical technique to successfully access a previously unreachable area of the head and neck.

This pioneering method can now be used safely and efficiently to remove tumors that were often previously thought to be inoperable, or that necessitated highly invasive surgical techniques in combination with chemotherapy or radiation therapy.

Developed by Dr. Abie Mendelsohn, UCLA Jonsson Comprehensive Cancer Center member and director of head and neck robotic surgery at UCLA, this new approach provides the surgical community with a leading-edge technology roadmap to treat patients who had little or no hope of living cancer-free lives.

“This is a revolutionary new approach that uses highly advanced technology to reach the deepest areas of the head and neck,” said Mendelsohn, lead author of the study. “Patients can now be treated in a manner equivalent to that of a straightforward dental procedure and go back to leading normal, healthy lives in a matter of days with few or even no side effects.”

A New Approach to Saving Lives

The parapharyngeal space is a pyramid-shaped area that lies near the base of the skull and connects several deep compartments of the head and neck. It is lined with many large blood vessels, nerves and complex facial muscles, which often makes access to the space via traditional surgical approaches impossible or highly invasive.

Current surgical techniques can require external incisions in the patient’s neck, or splitting of the jaw bone or areas close to the voice box. Chemotherapy and radiation therapy are also often required, further complicating recovery and potentially putting patients at risk of serious (or even lethal) side effects.

Approved by the U.S. Food & Drug Administration in 2009, Trans Oral Robotic Surgery (TORS) uses the da Vinci robotic surgical system; the technique was developed at UCLA by its specialized head and neck surgical program. TORS is a minimally invasive procedure in which a surgical robot, under the full control of a specially trained physician, operates with a three-dimensional, high-definition video camera and robotic arms.

These miniature “arms” can navigate the small, tight and delicate areas of the mouth without the need for external incisions, while a retraction system allows the surgeon to see the entire surgical area at once. The surgeon works from an operating console just steps from the patient’s bed, and every movement of his or her wrists and fingers is transformed into movement of the surgical instruments.

Over the course of the robotic program’s development, Mendelsohn refined, adapted and advanced the TORS techniques to allow surgical instruments and the 3-D imaging tools to at last reach and operate safely within the parapharyngeal space, amongst other recessed areas of the head and neck.

Currently, Mendelsohn’s new procedure largely benefits patients with tumors located in the throat near the tonsils and tongue, but it continues to be adapted and expanded in scope and impact.

“We are tremendously excited about the possibilities for the surgical community with this new advancement of TORS,” said Mendelsohn. “Now patients have options they never had before, and we can even develop potential applications for the procedure beyond the surface of the head and neck.”

The study was published online ahead of print in the journal Head & Neck.

David Alpern: One Patient’s Story

In 2012, David Alpern received devastating news. He was diagnosed with throat cancer, and the treatment options his doctors described sounded worse than the disease.

“They described a procedure where your face is split in half and it’s basically reconstructive surgery. I was completely freaked out,” said Alpern, a husband and father of two.

After careful examination and imaging at UCLA, Mendelsohn determined that David was a perfect candidate for TORS, and David was up and about within days of the procedure. As with the more than 100 similar TORS surgeries performed with Mendelsohn at the controls, David’s tumor was removed, and he is now completely cancer-free.

“I try not to get too cocky or excited that I beat cancer, but I think I did,” said David. “There are no side effects at this point. My hopes are just to watch my kids grow up and enjoy my family and my life.”


Thought control makes robot arm grab and move objects

A woman paralysed from the neck down can now grab a ball with a robotic arm – just by thinking about it.

Jan Scheuermann, who lost control of her limbs in 2003, was able to make complex hand movements using the robot arm. She successfully picked up and moved a variety of objects, from a tiny cube to a tube standing upright (see video).

The system, developed by Jennifer Collinger at the University of Pittsburgh, Pennsylvania, and colleagues, uses two small electrode grids implanted in Scheuermann’s brain, in the region of the left motor cortex responsible for controlling her right arm and hand. The devices were connected to a computer, which analysed electrical brain activity picked up by 96 contact points within the grids.

Computer algorithms learned to match the electrical patterns to Scheuermann’s thoughts about making specific movements with a hand or arm. These patterns were then translated into the real movements of a robotic arm.

The latest version of the algorithm can detect four patterns of activity related to the shape of the hand, adding a scooping shape, thumb extension and a pinching action to the repertoire of possible movements. The improvements allow Scheuermann to control the artificial limb with 10 degrees of freedom simultaneously.
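Detecting which hand-shape pattern is present can be pictured as a classification problem. The sketch below uses nearest-centroid matching on synthetic activity vectors; this is an illustration only, not the decoding method the researchers actually used.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "template" activity patterns (96 channels) for the four
# hand shapes named above. Real templates come from calibration data.
shapes = ["finger abduction", "scoop", "thumb extension", "pinch"]
centroids = {s: rng.normal(size=96) for s in shapes}

def classify(pattern, centroids):
    """Return the hand shape whose template is closest to the pattern."""
    return min(centroids, key=lambda s: np.linalg.norm(pattern - centroids[s]))

# A noisy observation of the "pinch" pattern should still decode as pinch.
observed = centroids["pinch"] + rng.normal(scale=0.1, size=96)
print(classify(observed, centroids))  # pinch
```

Because the templates are far apart relative to the noise, even a corrupted observation lands nearest to the correct hand shape, which is what makes real-time control feasible.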

The team say that such a wide range of motion has never been achieved in this way before. They hope that adding the capacity to use different hand shapes will expand the range of activities paralysed people can perform independently – from gestures to manipulation of objects – once the technology moves outside the lab.

Journal reference: Journal of Neural Engineering, DOI: 10.1088/1741-2552/12/1/016011

Robotic Rape and Robotic Child Sexual Abuse: Should they be criminalised? Paper by John Danaher

John Danaher has published a paper (forthcoming in Criminal Law and Philosophy) asking: what if sex robots are deliberately designed and used to replicate acts of rape and child sexual abuse?


Soon there will be sex robots. The creation of such devices raises a host of social, legal and ethical questions. In this article, I focus in on one of them. What if these sex robots are deliberately designed and used to replicate acts of rape and child sexual abuse? Should the creation and use of such robots be criminalised, even if no person is harmed by the acts performed? I offer an argument for thinking that they should be. The argument consists of two premises. The first claims that it can be a proper object of the criminal law to regulate wrongful conduct with no extrinsically harmful effects on others (the moralistic premise). The second claims that the use (and possibly the manufacture) of robots that replicate acts of rape and child sexual abuse would be wrongful, even if such usage had no extrinsically harmful effects on others. I defend both premises of this argument and consider its implications for the criminal law. I do not offer a conclusive argument for criminalisation, nor would I wish to be interpreted as doing so; instead, I offer a tentative argument and a framework for future debate. This framework may also lead one to question the proposed rationales for criminalisation.

Important questions. Legal, ethical and social implications are often neglected in the technological optimism of our time, meaning that most of the present pictures of the future lack an important perspective.

[read the paper] [John Danaher] [via Miles Brundage]

Researchers Have Put The Mind Of A Worm Inside A LEGO Robot

Are we any closer to those deadly, killer A.I. robots that science fiction has been promising for the past few decades? The short answer is no, but researchers have gotten a few steps closer by putting the ‘mind’ of a worm into a LEGO robot. Our brains are extremely complex compared to that of the nematode C. elegans, whose brain has only 302 neurons. To take advantage of this small number, researchers from the OpenWorm project have implemented their own software in a LEGO robot. The software mimics the nematode’s stimulus responses and acts as if it were actually a worm.

The whole process is complex, so we’ll simplify it. A neural map of the C. elegans brain was developed by one of OpenWorm’s founders, Timothy Busbice, and the map implemented in the robot allows for sensory input and motor output. When a sensor picks up information, it sends a value to its respective sensory “neuron” over UDP packets (an internet communications protocol), and that neuron then tells the motors to output an action, whether it be reversing because the robot “smelled” something or going forward. Unlike your typical “if-then” robot, this worm robot outputs actions based on which neurons are activated by the values delivered by the sensors. It’s a bit more random, but pretty neat.

I-Programming news reports that Busbice is working on a Raspberry Pi version of the worm robot. It’s not finished yet, but expect to see it soon. What do you think? Is this just some clever programming, or are we on the verge of some serious robot A.I.? (via Researchers Have Put The Mind Of A Worm Inside A LEGO Robot | SimpleBotics)
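The sensor-to-neuron UDP wiring can be sketched in a few lines. The port handling, message format and the "smell" threshold below are all assumptions for illustration; the article describes the OpenWorm setup only at a high level.

```python
import socket
import threading

# One "sensory neuron" listening on a local UDP socket. The OS picks
# a free port; the sensor learns the address before sending.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
neuron_addr = server.getsockname()

def sensory_neuron(sock):
    """Receive one sensor reading and reply with a motor command."""
    data, addr = sock.recvfrom(64)
    reading = float(data.decode())
    # A strong "smell" makes the worm reverse; otherwise it keeps going.
    sock.sendto(b"reverse" if reading > 0.5 else b"forward", addr)

t = threading.Thread(target=sensory_neuron, args=(server,))
t.start()

# The sensor side: fire a strong reading at the neuron and read the
# motor command it sends back.
sensor = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sensor.bind(("127.0.0.1", 0))
sensor.sendto(b"0.9", neuron_addr)  # nose sensor fires strongly
reply, _ = sensor.recvfrom(64)
t.join()
server.close()
sensor.close()
print(reply.decode())  # reverse
```

In the real robot there are hundreds of such neurons, and the motor output emerges from which of them fire together rather than from a single threshold rule, which is why its behaviour looks less scripted than a typical "if-then" robot.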



Robotic audio project by Polish creatives panGenerator features eight articulated speakers which output sound as a choir – video embedded below:

the eight-channel robotic choir

the performer’s voice is processed in eight independent channels and fed to the speakers; the movement of each speaker is directly connected with the frequency and amplitude of the generated sound