microchipping

We had this handsome fancy pigeon admitted on Tuesday. He is quite clearly a lost pet, but has no rings on his feet, no microchip, and no address stamp on his wings.

We will keep him for 7 days to allow the owner time to come forward, but happily we already have a great home lined up for him if nobody claims him.

Please SHARE to help us find his owner!

Get your cameras ready! Here comes male supermodel Tyson Beckford! This handsome gentleman is a 55 lb, 5-6 year old Boxer originally from North Carolina. Not only does he have the looks, he’s got the brains as well, since Tyson is already housebroken, crate trained and can sit on command! Tyson’s foster mom says he’s a “sweet humble dog” who enjoys the company of other dogs, but is equally happy to lounge on the couch. He’s a gentle and friendly pup and ready for that special someone! Tyson is neutered, up to date on vaccines and microchipped.

Email seespotrescued@gmail.com for an adoption application or download it here: http://bit.ly/SSR-APP-MAY2014. You can also check out our other adoptable dogs here!

It looks like Geordi La Forge’s VISOR is already outdated. A tiny 3mm microchip has given vision back to the blind. Scientists and doctors in Oxford implanted a new “bionic eye” microchip in the eyes of two blind individuals last month during grueling eight-hour operations. The chips were placed in the back of the eyes and connected with electrodes. Weeks later, both individuals — Chris James and Robin Millar — have regained ‘useful vision’ and are well on their way to recognizing faces and seeing once again, reports Sky News.

Read more: http://www.digitaltrends.com/cool-tech/bionic-eyes-activate-microchip-gives-sight-to-the-blind/

Birth control, now in 16-year microchip form 

Thanks to the Bill and Melinda Gates Foundation, a woman who doesn’t want to get pregnant could soon implant a matchstick-sized, wireless chip under her arm, stomach or butt and be “on the pill” for years — 16 years, to be exact.

At the moment, no hormonal birth control exists that lasts for more than five years. Non-hormonal copper IUDs last 12.

Bioengineers create circuit board modeled on the human brain

Stanford bioengineers have developed faster, more energy-efficient microchips based on the human brain – 9,000 times faster and using significantly less power than a typical PC. This offers greater possibilities for advances in robotics and a new way of understanding the brain. For instance, a chip as fast and efficient as the human brain could drive prosthetic limbs with the speed and complexity of our own actions.

Stanford bioengineers have developed a new circuit board modeled on the human brain, possibly opening up new frontiers in robotics and computing.

For all their sophistication, computers pale in comparison to the brain. The modest cortex of the mouse, for instance, operates 9,000 times faster than a personal computer simulation of its functions.

Not only is the PC slower, it takes 40,000 times more power to run, writes Kwabena Boahen, associate professor of bioengineering at Stanford, in an article for the Proceedings of the IEEE.

"From a pure energy perspective, the brain is hard to match," says Boahen, whose article surveys how "neuromorphic" researchers in the United States and Europe are using silicon and software to build electronic systems that mimic neurons and synapses.

Boahen and his team have developed Neurogrid, a circuit board consisting of 16 custom-designed “Neurocore” chips. Together these 16 chips can simulate 1 million neurons and billions of synaptic connections. The team designed these chips with power efficiency in mind. Their strategy was to enable certain synapses to share hardware circuits. The result was Neurogrid – a device about the size of an iPad that can simulate orders of magnitude more neurons and synapses than other brain mimics on the power it takes to run a tablet computer.

The National Institutes of Health funded development of this million-neuron prototype with a five-year Pioneer Award. Now Boahen stands ready for the next steps – lowering costs and creating compiler software that would enable engineers and computer scientists with no knowledge of neuroscience to solve problems – such as controlling a humanoid robot – using Neurogrid.

Its speed and low power characteristics make Neurogrid ideal for more than just modeling the human brain. Boahen is working with other Stanford scientists to develop prosthetic limbs for paralyzed people that would be controlled by a Neurocore-like chip.

"Right now, you have to know how the brain works to program one of these," said Boahen, gesturing at the $40,000 prototype board on the desk of his Stanford office. "We want to create a neurocompiler so that you would not need to know anything about synapses and neurons to able to use one of these."

Brain ferment

In his article, Boahen notes the larger context of neuromorphic research, including the European Union’s Human Brain Project, which aims to simulate a human brain on a supercomputer. By contrast, the U.S. BRAIN Project – short for Brain Research through Advancing Innovative Neurotechnologies – has taken a tool-building approach by challenging scientists, including many at Stanford, to develop new kinds of tools that can read out the activity of thousands or even millions of neurons in the brain as well as write in complex patterns of activity.

Zooming in from the big picture, Boahen’s article focuses on two projects comparable to Neurogrid that attempt to model brain functions in silicon and/or software.

One of these efforts is IBM’s SyNAPSE Project – short for Systems of Neuromorphic Adaptive Plastic Scalable Electronics. As the name implies, SyNAPSE involves a bid to redesign chips, code-named Golden Gate, to emulate the ability of neurons to make a great many synaptic connections – a feature that helps the brain solve problems on the fly. At present a Golden Gate chip consists of 256 digital neurons each equipped with 1,024 digital synaptic circuits, with IBM on track to greatly increase the numbers of neurons in the system.

Heidelberg University’s BrainScales project has the ambitious goal of developing analog chips to mimic the behaviors of neurons and synapses. Their HICANN chip – short for High Input Count Analog Neural Network – would be the core of a system designed to accelerate brain simulations, enabling researchers to model, in a compressed time frame, drug interactions that might take months to play out. At present, the HICANN system can emulate 512 neurons each equipped with 224 synaptic circuits, with a roadmap to greatly expand that hardware base.

Each of these research teams has made different technical choices, such as whether to dedicate each hardware circuit to modeling a single neural element (e.g., a single synapse) or several (e.g., by activating the hardware circuit twice to model the effect of two active synapses). These choices have resulted in different trade-offs in terms of capability and performance.

In his analysis, Boahen creates a single metric to account for total system cost – including the size of the chip, how many neurons it simulates and the power it consumes.

Neurogrid was by far the most cost-effective way to simulate neurons, in keeping with Boahen’s goal of creating a system affordable enough to be widely used in research.
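
Boahen's article defines this metric rigorously; purely as an illustration of the idea, one can fold chip area, power draw, and neuron count into a single cost-per-neuron figure along the following lines. The function and the dollar weightings below are invented for this sketch and are not taken from the paper.

    # Hypothetical cost-per-neuron metric; the dollar weightings are
    # placeholders, not figures from Boahen's article.
    def cost_per_neuron(chip_area_mm2, power_watts, neurons,
                        dollars_per_mm2=10.0, dollars_per_watt=5.0):
        """Fold silicon area and power draw into one dollar figure per neuron."""
        total = chip_area_mm2 * dollars_per_mm2 + power_watts * dollars_per_watt
        return total / neurons

    # Example: a 16-chip, million-neuron board (all inputs illustrative).
    print(cost_per_neuron(chip_area_mm2=16 * 165, power_watts=3.0, neurons=1_000_000))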

Speed and efficiency

But much work lies ahead. Each of the current million-neuron Neurogrid circuit boards costs about $40,000. Boahen believes dramatic cost reductions are possible. Neurogrid is based on 16 Neurocores, each of which supports 65,536 neurons. Those chips were made using 15-year-old fabrication technologies.

By switching to modern manufacturing processes and fabricating the chips in large volumes, he could cut a Neurocore’s cost 100-fold – suggesting a million-neuron board for $400 a copy. With that cheaper hardware and compiler software to make it easy to configure, these neuromorphic systems could find numerous applications.
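
The arithmetic behind those figures is straightforward; a quick sanity check using only the numbers quoted above:

    cores = 16
    neurons_per_core = 65_536
    print(cores * neurons_per_core)   # 1,048,576: the "million-neuron" board
    print(40_000 / 100)               # 400.0: the projected per-board cost in dollars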

For instance, a chip as fast and efficient as the human brain could drive prosthetic limbs with the speed and complexity of our own actions – but without being tethered to a power source. Krishna Shenoy, an electrical engineering professor at Stanford and Boahen’s neighbor at the interdisciplinary Bio-X center, is developing ways of reading brain signals to understand movement. Boahen envisions a Neurocore-like chip that could be implanted in a paralyzed person’s brain, interpreting those intended movements and translating them to commands for prosthetic limbs without overheating the brain.

A small prosthetic arm in Boahen’s lab is currently controlled by Neurogrid to execute movement commands in real time. For now it doesn’t look like much, but its simple levers and joints hold hope for robotic limbs of the future.

Of course, all of these neuromorphic efforts are dwarfed by the complexity and efficiency of the human brain.

In his article, Boahen notes that Neurogrid is about 100,000 times more energy efficient than a personal computer simulation of 1 million neurons. Yet it is an energy hog compared to our biological CPU.

"The human brain, with 80,000 times more neurons than Neurogrid, consumes only three times as much power," Boahen writes. "Achieving this level of energy efficiency while offering greater configurability and scale is the ultimate challenge neuromorphic engineers face."

Neuroelectronics: Smart connections

Kwabena Boahen got his first computer in 1982, when he was a teenager living in Accra. “It was a really cool device,” he recalls. He just had to connect up a cassette player for storage and a television set for a monitor, and he could start writing programs.

But Boahen wasn’t so impressed when he found out how the guts of his computer worked. “I learned how the central processing unit is constantly shuffling data back and forth. And I thought to myself, ‘Man! It really has to work like crazy!’” He instinctively felt that computers needed a little more ‘Africa’ in their design, “something more distributed, more fluid and less rigid”.

Today, as a bioengineer at Stanford University in California, Boahen is among a small band of researchers trying to create this kind of computing by reverse-engineering the brain.

The brain is remarkably energy efficient and can carry out computations that challenge the world’s largest supercomputers, even though it relies on decidedly imperfect components: neurons that are a slow, variable, organic mess. Comprehending language, conducting abstract reasoning, controlling movement — the brain does all this and more in a package that is smaller than a shoebox, consumes less power than a household light bulb, and contains nothing remotely like a central processor.

To achieve similar feats in silicon, researchers are building systems of non-digital chips that function as much as possible like networks of real neurons. Just a few years ago, Boahen completed a device called Neurogrid that emulates a million neurons — about as many as there are in a honeybee’s brain. And now, after a quarter-century of development, applications for ‘neuromorphic technology’ are finally in sight. The technique holds promise for anything that needs to be small and run on low power, from smartphones and robots to artificial eyes and ears. That prospect has attracted many investigators to the field during the past five years, along with hundreds of millions of dollars in research funding from agencies in both the United States and Europe.

Neuromorphic devices are also providing neuroscientists with a powerful research tool, says Giacomo Indiveri at the Institute of Neuroinformatics (INI) in Zurich, Switzerland. By seeing which models of neural function do or do not work as expected in real physical systems, he says, “you get insight into why the brain is built the way it is”.

And, says Boahen, the neuromorphic approach should help to circumvent a looming limitation to Moore’s law — the longstanding trend of computer-chip manufacturers managing to double the number of transistors they can fit into a given space every two years or so. This relentless shrinkage will soon lead to the creation of silicon circuits so small and tightly packed that they no longer generate clean signals: electrons will leak through the components, making them as messy as neurons. Some researchers are aiming to solve this problem with software fixes, for example by using statistical error-correction techniques similar to those that help the Internet to run smoothly. But ultimately, argues Boahen, the most effective solution is the same one the brain arrived at millions of years ago.

“My goal is a new computing paradigm,” Boahen says, “something that will compute even when the components are too small to be reliable.”

Silicon cells

The neuromorphic idea goes back to the 1980s and Carver Mead, a world-renowned pioneer in microchip design at the California Institute of Technology in Pasadena. He coined the term and was one of the first to emphasize the brain’s huge energy-efficiency advantage. “That’s been the fascination for me,” he says, “how in the heck can the brain do what it does?”

Mead’s strategy for answering that question was to mimic the brain’s low-power processing with ‘sub-threshold’ silicon: circuitry that operates at voltages too small to flip a standard computer bit from a 0 to a 1. At those voltages, there is still a tiny, irregular trickle of electrons running through the transistors — a spontaneous ebb and flow of current that is remarkably similar in size and variability to that carried by ions flowing through a channel in a neuron. With the addition of microscopic capacitors, resistors and other components to control these currents, Mead reasoned, it should be possible to make tiny circuits that exhibit the same electrical behaviour as real neurons. They could be linked up in decentralized networks that function much like real neural circuits in the brain, with communication lines running between components rather than through a central processor.

By the 1990s, Mead and his colleagues had shown it was possible to build a realistic silicon neuron. That device could accept outside electrical input through junctions that performed the role of synapses, the tiny structures through which nerve impulses jump from one neuron to the next. It allowed the incoming signals to build up voltage in the circuit’s interior, much as they do in real neurons. And if the accumulating voltage passed a certain threshold, the silicon neuron ‘fired’, producing a series of voltage spikes that travelled along a wire playing the part of an axon, the neuron’s communication cable. Although the spikes were ‘digital’ in the sense that they were either on or off, the body of the silicon neuron operated — like real neurons — in a non-digital way, meaning that the voltages and currents weren’t restricted to a few discrete values as they are in conventional chips.
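
The firing behaviour described here is essentially the textbook leaky integrate-and-fire model. As a rough software caricature (not Mead's actual sub-threshold circuit equations, and with arbitrary parameter values), it can be simulated in a few lines of Python:

    import numpy as np

    def simulate_lif(input_current, dt=1e-4, tau=0.02, v_rest=0.0,
                     v_thresh=1.0, v_reset=0.0):
        """Leaky integrate-and-fire: integrate input, leak toward rest, spike at threshold."""
        v = v_rest
        spike_times = []
        for step, current in enumerate(input_current):
            v += (-(v - v_rest) + current) * (dt / tau)  # leak plus integration
            if v >= v_thresh:          # threshold crossing: an all-or-nothing spike
                spike_times.append(step * dt)
                v = v_reset            # reset, as the silicon neuron does after firing
        return spike_times

    spikes = simulate_lif(np.full(5000, 1.5))  # constant drive for 0.5 s
    print(len(spikes), spikes[:3])             # regular, repeated firing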

That behaviour mimics one key to the brain’s low-power usage: just like their biological counterparts, the silicon neurons simply integrated inputs, using very little energy, until they fired. By contrast, a conventional computer needs a constant flow of energy to run an internal clock, whether or not the chips are computing anything.

Mead’s group also demonstrated decentralized neural circuits — most notably in a silicon version of the eye’s retina. That device captured light using a 50-by-50 grid of detectors. When their activity was displayed on a computer screen, these silicon cells showed much the same response as their real counterparts to light, shadow and motion. Like the brain, this device saves energy by sending only the data that matters: most of the cells in the retina don’t fire until the light level changes. This has the effect of highlighting the edges of moving objects, while minimizing the amount of data that has to be transmitted and processed.
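
In software terms, the retina's change-driven output amounts to differencing successive frames and reporting only the pixels whose change crosses a threshold, much as today's event cameras do. A minimal sketch, with frame size and threshold chosen arbitrarily:

    import numpy as np

    def events_between(prev_frame, next_frame, threshold=0.1):
        """Return (row, col, polarity) only for pixels whose change exceeds threshold."""
        diff = next_frame - prev_frame
        rows, cols = np.where(np.abs(diff) > threshold)
        return [(r, c, int(np.sign(diff[r, c]))) for r, c in zip(rows, cols)]

    prev = np.zeros((50, 50))          # the 50-by-50 grid of the silicon retina
    nxt = prev.copy()
    nxt[10:20, 10:20] = 1.0            # a bright square appears
    print(len(events_between(prev, nxt)))  # 100 events; unchanged pixels stay silent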

Coding challenge

In those early days, researchers had their hands full mastering single-chip devices such as the silicon retina, says Boahen, who joined Mead’s lab in 1990. But by the end of the 1990s, he says, “we wanted to build a brain, and for that we needed large-scale communication”. That was a huge challenge: the standard coding algorithms for chip-to-chip communication had been devised for precisely coordinated digital signals, and wouldn’t work for the more-random spikes created by neuromorphic systems. Only in the 2000s did Boahen and others devise circuitry and algorithms that would work in this messier system, opening the way for a flurry of development in large-scale neuromorphic systems.
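
The scheme that grew out of this work is widely known as address-event representation: rather than sending clocked bit patterns, each chip transmits the address of whichever neuron just fired, and the events are ordered in time. A stripped-down illustration of the idea (conceptual only, not Boahen's actual protocol or circuitry):

    from typing import Dict, List, Tuple

    def encode_spikes(spike_times: Dict[int, List[float]]) -> List[Tuple[float, int]]:
        """Merge per-neuron spike times into one time-ordered (time, address) stream."""
        events = [(t, address) for address, times in spike_times.items() for t in times]
        return sorted(events)

    stream = encode_spikes({3: [0.001, 0.004], 7: [0.002]})
    print(stream)  # [(0.001, 3), (0.002, 7), (0.004, 3)]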

Among the first applications were large-scale emulators to give neuroscientists an easy way to test models of brain function. In September 2006, for example, Boahen launched the Neurogrid project: an effort to emulate a million neurons. That is only a tiny chunk of the 86 billion neurons in the human brain, but enough to model several of the densely interconnected columns of neurons thought to form the computational units of the human cortex. Neuroscientists can program Neurogrid to emulate almost any model of the cortex, says Boahen. They can then watch their model run at the same speed as the brain — hundreds to thousands of times faster than a conventional digital simulation. Graduate students and researchers have used it to test theoretical models of neural function for processes such as working memory, decision-making and visual attention.

“In terms of real efficiency, in terms of fidelity to the brain’s neuronal networks, Kwabena’s Neurogrid is well in advance of other large-scale neuromorphic systems,” says Rodney Douglas, co-founder of the INI and co-developer of the silicon neuron.

But no system is perfect, as Boahen himself is quick to point out. One of Neurogrid’s biggest shortcomings is that its synapses — of which there are, on average, 5,000 per neuron — are simplified connections that cannot be modified individually. This means that the system cannot be used to model learning, which occurs in the brain when synapses are modified by experience. Given the limited space available on the chip, squeezing in the complex circuitry needed to make each synapse behave in a more realistic manner would require circuit elements about a thousand times smaller in area than they are at present — in the realm of nanotechnology. This is currently impossible, although a newly developed class of nanometre-scale memory devices called ‘memristors’ could someday solve the problem.
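
To see what individually modifiable synapses would buy, consider the classic Hebbian learning rule, which strengthens each connection in proportion to the joint activity of the two neurons it links. Supporting an update like this at every synapse is exactly the per-connection circuitry Neurogrid lacks room for. A toy sketch, illustrative only:

    import numpy as np

    def hebbian_step(weights, pre_rates, post_rates, learning_rate=0.01):
        """Strengthen each synapse in proportion to joint pre/post activity."""
        return weights + learning_rate * np.outer(post_rates, pre_rates)

    w = np.zeros((2, 3))                       # 2 post-neurons, 3 pre-neurons
    w = hebbian_step(w, pre_rates=np.array([1.0, 0.0, 0.5]),
                     post_rates=np.array([0.2, 1.0]))
    print(w)  # only co-active pre/post pairs gain weight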

Another issue stems from inevitable variations in the fabrication process, which mean that every neuromorphic chip performs slightly differently. “The variability is still much less than what is observed in the brain,” says Boahen — but it does mean that programs for Neurogrid have to allow for substantial variations in the silicon neurons’ firing rates.

This issue has led some researchers to abandon Mead’s original idea of using sub-threshold chips. Instead, they are using more conventional digital systems that are still neuromorphic in the sense that they mimic the electrical behaviour of individual neurons, but are more predictable and much easier to program — at the cost of using more power.

A leading example is the SpiNNaker Project, led since 2005 by computer engineer Steve Furber at the University of Manchester, UK. This system uses a version of the very-low-power digital chips — which Furber helped to develop — that are found in many smartphones. SpiNNaker can currently emulate up to 5 million neurons. These neurons are simpler than those in Neurogrid and burn more power, says Furber, but the system’s purpose is similar: “running large-scale brain models in biological real time”.

Another effort sticks with neuron-like chips, but boosts their speed. Neurogrid’s neurons operate at exactly the same rate as real ones. But the European BrainScaleS project, headed by former accelerator physicist Karlheinz Meier at Heidelberg University in Germany, is developing a neuromorphic system that currently emulates 400,000 neurons running up to 10,000 times faster than real time. This means it consumes about 10,000 times more energy than equivalent processes in the brain. But the speed is a boon for some neuroscience researchers. “We can simulate a day of neural activity in 10 seconds,” Meier says.

Furber and Meier now have the money to push for bigger and better. Together they constitute the neuromorphic arm of the European Union’s ten-year, €1-billion (US$1.3-billion) Human Brain Project, which was officially launched last month. The roughly €100 million devoted to neuromorphic research will allow Furber’s group to scale up his system to 500 million digital neurons; Meier’s group, meanwhile, is aiming for 4 million.

The success of these research-oriented projects has helped to stoke interest in the idea of using neuromorphic hardware for practical, ultra-low-power applications in devices from phones to robots. Until recently, that hadn’t been a priority in the computer industry. Chip designers could usually minimize energy consumption by simplifying circuit design, or splitting computations over multiple processor ‘cores’ that can run in parallel or shut down when they are not needed.

But these approaches can only achieve so much. Since 2008, the US Defense Advanced Research Projects Agency has spent more than $100 million on its SyNAPSE project to develop compact, low-power neuromorphic technology. One of the project’s main contractors, the cognitive computing group at IBM’s research centre in Almaden, California, has used its share of the money to develop digital, 256-neuron chips that can be used as building blocks for larger-scale systems.

Brain power

Boahen is pursuing his own approach to practical applications — most notably in an as-yet-unnamed initiative he started in April. The project is based on Spaun: a design for a computer model of the brain that includes the parts responsible for vision, movement and decision-making. Spaun relies on a programming language for neural circuitry developed a decade ago by Chris Eliasmith, a theoretical neuroscientist at the University of Waterloo in Ontario, Canada. A user just has to specify a desired neural function — the generation of instructions to move an arm, for example — and Eliasmith’s system will automatically design a network of spiking neurons to carry out that function.
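
Eliasmith's framework is implemented in his group's open-source Nengo library, which was also used to build Spaun. A minimal sketch of the workflow described above, with arbitrary network sizes: declare the function you want computed (here, squaring a signal), and the toolchain solves for a spiking network that approximates it.

    import numpy as np
    import nengo  # open-source NEF implementation: pip install nengo

    with nengo.Network() as model:
        stimulus = nengo.Node(output=lambda t: np.sin(2 * np.pi * t))
        a = nengo.Ensemble(n_neurons=100, dimensions=1)  # spiking LIF neurons by default
        b = nengo.Ensemble(n_neurons=100, dimensions=1)
        nengo.Connection(stimulus, a)
        nengo.Connection(a, b, function=lambda x: x ** 2)  # the function we asked for
        probe = nengo.Probe(b, synapse=0.01)               # filtered, decoded output

    with nengo.Simulator(model) as sim:  # builds the network, then runs it
        sim.run(1.0)
    print(sim.data[probe][-5:])  # spiking estimate of sin(2*pi*t)**2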

To see if it would work, Eliasmith and his colleagues simulated Spaun on a conventional computer. They showed that, with 2.5 million simulated neurons plus a simulated retina and hand, it could copy handwritten digits, recall the items in a list, work out the next number in a given sequence and carry out several other cognitive tasks. That’s an unprecedented range of abilities by neural simulation standards, says Boahen. But the Spaun simulation ran about 9,000 times slower than real time, taking 2.5 hours to simulate 1 second of behaviour.

Boahen contacted Eliasmith with the obvious proposition: build a physical version of Spaun using real-time neuromorphic hardware. “I got very excited,” says Eliasmith, for whom the match seemed perfect. “You’ve got the peanut butter, we’ve got the chocolate!”

With funding from the US Office of Naval Research, Boahen and Eliasmith have put together a team that plans to build a small-scale prototype in three years and a full-scale system in five. For sensory input they will use neuromorphic retinas and cochleas developed at the INI, says Boahen. For output, they have a robotic arm. But the cognitive hardware will be built from scratch. “This is not a new Neurogrid, but a whole new architecture,” he says. It will trade a certain amount of realism for practicality, relying on “very simple, very efficient neurons so that we can scale to the millions”.

The system is explicitly designed for real-world applications. On a five-year timescale, says Boahen, “we envision building fully autonomous robots that interact with their environments in a meaningful way, and operate in real-time while [their brains] consume as much electricity as a cell phone”. Such devices would be much more flexible and adaptive than today’s autonomous robots, and would consume considerably less power.

In the longer term, Boahen adds, the project could pave the way for compact, low-power processors in any computer system, not just robotics. If researchers really have managed to capture the essential ingredients that make the brain so efficient, compact and robust, then it could be the salvation of an industry about to run into a wall as chips get ever smaller.

“But we won’t know for sure,” Boahen says, “until we try.”

The energies flowing through these things are, interestingly, becoming more and more dense. If you take the amount of energy that flows through one gram per second in a galaxy, it is increased when it goes through a star, and it is actually increased in life…We don’t realize this. We think of the sun as being a hugely immense amount of energy. Yet the amount of energy running through a sunflower, per gram per second of its livelihood, is actually greater than in the sun… Animals have even higher energy usage than the plant, and a jet engine has even higher than an animal. The most energy-dense thing that we know about in the entire universe is the computer chip in your computer. It is sending more energy per gram per second through that than anything we know. In fact, if it was to send it through any faster, it would melt or explode. It is so energy-dense that it is actually at the edge of explosion.
—  Kevin Kelly

She doesn’t have any tags on and she’s not microchipped. She’s only 12 weeks old. (Originally posted by lilypad_thevizsla on instagram).