Simple Neural Networks
Here is an easy way to create a network which can learn:
The network is a collection of nodes. Each node has an input, an energy level, a threshold, an output, a synaptic map, an input map, and an output map.
Inputs are set high by the environment or via feedback loops from other nodes' outputs.
When a node's input goes high, add one threshold's worth of energy to its energy level.
When a node's energy level exceeds its threshold, it triggers. When a node triggers, remove one threshold's worth of energy from its energy level, then check the node's synaptic map and add one energy to each node listed there. If the node is listed in its own map, add one energy to the node's output.
Outputs control actions in the environment or feedback to the input of other nodes.
This collection of nodes is an Individual.
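The node mechanics above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation; the class and function names are my own.

```python
# Sketch of the node mechanics described above (illustrative only).

class Node:
    def __init__(self, threshold, synaptic_map):
        self.threshold = threshold        # energy needed before the node can trigger
        self.energy = 0                   # current energy level
        self.synaptic_map = synaptic_map  # indices of nodes this node feeds
        self.output = 0                   # raised when the node lists itself in its map


def step(nodes, inputs):
    """One update pass over an Individual (a list of nodes)."""
    # Inputs set high: add one threshold's worth of energy.
    for node, inp in zip(nodes, inputs):
        if inp:
            node.energy += node.threshold
    # Trigger any node whose energy exceeds its threshold.
    for i, node in enumerate(nodes):
        node.output = 0
        while node.energy > node.threshold:
            node.energy -= node.threshold          # spend one threshold of energy
            for target in node.synaptic_map:
                if target == i:
                    node.output += 1               # self-reference drives the output
                else:
                    nodes[target].energy += 1      # add one energy to mapped nodes
```

Note that a single high input leaves a node exactly at its threshold, so triggering requires extra energy arriving from other nodes' synapses or repeated inputs, which is what lets small circuits integrate activity over time.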
Take a small group of individuals and have them all operate on the same set of inputs. Monitor each individual's outputs and score them by how few errors they make compared to the output pattern you intend.
After each round, replace the poorest-scoring individual with a copy of the highest-scoring individual, but mutate one mapping in the copy.
Over the course of many generations (and mutations), your population should evolve closer to the correct solution.
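The evolutionary loop above can be sketched as follows. The representation of an individual (a list of synaptic maps) and the `score` function are assumptions for illustration; any error metric against your intended output pattern would do.

```python
import random

def evolve(population, score, rounds, num_nodes):
    """Replace the worst individual each round with a mutated copy of the best.

    population: list of individuals, each a list of synaptic maps (assumed layout).
    score: error count vs. the intended output pattern (lower is better).
    """
    for _ in range(rounds):
        ranked = sorted(population, key=score)     # fewest errors first
        best, worst = ranked[0], ranked[-1]
        clone = [list(m) for m in best]            # copy the highest scorer
        # Mutate one mapping: repoint a random synapse at a random node.
        node = random.randrange(len(clone))
        if clone[node]:
            clone[node][random.randrange(len(clone[node]))] = random.randrange(num_nodes)
        population[population.index(worst)] = clone
    return min(population, key=score)
```

Because the best individual is never removed, the population's best score can only improve (or stay level) from generation to generation.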
This minimalist network can be trained to solve many fascinating and complex problems. Enjoy!
@tymkrs / @whixr
Million-core ARM machine aims to simulate brain
Manchester academics aim to use a million ARM processing cores to simulate the neuron network of the human brain and investigate new models of computing.
The bedrock of the SpiNNaker computing architecture is formed of 50,000 or so ARM 968-series multi-core, low-powered embedded processors, which passed their functionality tests “with flying colours”, Manchester University said on Thursday.
“The most fundamental deliverable from this project is a generic computing platform that can be used to test hypotheses that are emerging from psychology and neuroscience about how information flows through the brain,” Steve Furber, Manchester University’s ICL Professor of Computer Engineering and leader of the project, told ZDNet UK.
Furber also hopes that by closely approximating the structure of the brain, the researchers will be able to investigate more distributed and resilient computer systems. “At the moment, the way we build computers is not able to cope with component failure, but the brain does. We don’t know how to design things with that resilience,” he said. Furber helped design the Advanced RISC Machine (ARM) 32-bit processor while at Acorn in the 1980s, before ARM was spun off as a separate company.
Eventually, the chips will form a supercomputer built on the SpiNNaker — spiking neural network — architecture, in which each chip sits within a two-dimensional mesh network connected to six or so others. Each processor has 18 cores and around 100 million transistors, and is attached to 128 megabytes of DRAM, which has a billion transistors. For comparison, a single Intel Core i5-750 processor has 774 million transistors, while Intel’s server and supercomputing processor, the Xeon Nehalem-EX, has around 2.3 billion.
Once built, the computer will be accessible to other academics and researchers via the internet, possibly through the UK’s research network Janet, Furber said.
Testing the system
At the moment, the researchers are testing the system with a card containing four ARM processors, giving 72 cores in total; they then hope to expand this and build a card-based system of 1,000 cores. By the end of the year the researchers hope to assemble a SpiNNaker architecture with 10,000 cores and anticipate achieving a million cores by the end of 2012, Furber said.
Each chip will mimic the spikes that neurons produce when they pass information to one another. “A spike is basically a fixed-energy impulse, so what you need to communicate is which neuron spiked and when it spiked,” Furber said. “When a processor that’s modelling a neuron computes that that neuron should spike, it drops a 32-bit identifier into a 40-bit packet that goes into the local fabric”, at which point an on-chip router steers the packet to where it must go, Furber explained.
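The packet scheme Furber describes — a 32-bit neuron identifier inside a 40-bit packet — can be sketched with simple bit operations. The 8-bit header field here is an assumption for illustration; the article does not specify SpiNNaker's actual packet layout.

```python
# Illustrative only: a 32-bit neuron ID carried in a 40-bit packet.
# The 8-bit header is an assumed placeholder, not SpiNNaker's real format.

def make_spike_packet(neuron_id, header=0):
    assert 0 <= neuron_id < 2**32 and 0 <= header < 2**8
    return (header << 32) | neuron_id        # 40-bit packet value


def unpack_spike_packet(packet):
    return packet >> 32, packet & 0xFFFFFFFF  # (header, neuron_id)
```

Because a spike carries no payload beyond its identity and timing, such tiny packets are all the fabric needs to route, which is what makes a million-core spiking network feasible.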
Because any processor can be turned into any particular neuron, the entire supercomputer can be modified, so while it can only simulate around one percent of the human brain, it can be modified to simulate different parts for other experiments.
“Imagine our machine as a giant FPGA [field-programmable gate array] where the individual components are not logic gates, but are neurons,” Furber explained. “Configuring a big machine is a significant software challenge, which we are working our way up towards.”
To that end, the researchers have ported a variant of the high-level programming language Python to work on the SpiNNaker architecture.
Toward a wholly digital brain?
A complete simulation of the entire brain is still far off. In June, a French academic predicted that a digital brain would be possible by around 2023.
Furber estimates the SpiNNaker architecture has a rough scale limit of around four billion neurons, compared with the roughly 100 billion in the human brain, but with further research this barrier could be broken.
“The real limit from our point of view is the kind of research budget that we can expect to get as a university research group,” Furber said. Additionally, he feels that the community SpiNNaker is targeted at, such as neuroscientists and psychologists, would not “sensibly be able to exploit” a whole brain model because understanding of this part of the brain is patchy.
“In the cortex there are tens of different types of neurons and they interconnect with each other in specific ways and information about the ways they connect and the strength is almost nonexistent,” he said.
The project has been funded by a £5m grant from the EPSRC, of which Manchester received £2.5m, with the rest going to the universities of Southampton, Cambridge and Sheffield. Additionally, Manchester has received some further small grants for the project, and an earlier grant of £750,000.
The researchers chose to use ARM because of Furber’s familiarity with the architecture and the relatively low power consumption — one watt per processor — of the ARM 968 chips.
Livesheets used to build a neural network
It’s a technical-looking model, but with a bit of explanation it could be added to your workspace and used to build more complex, self-explanatory neural network-based decision makers. Many optical character recognition and fingerprint matching tools use neural networks, and Livesheets could potentially be used for these once we’ve added better connectivity.
Watch this space!