neuronal networks


An experimental music video clip generated by a neural network

Creative coder @mario-klingemann puts together a method to produce sound-reactive visuals from neural-network-trained datasets:

The visuals in this clip are generated from the sound itself using a plug-and-play generative network. The synchronization seems to be a bit off at the end - WIP. 


Learning is physical. Learning means the modification, growth, and pruning of our neurons, connections (called synapses), and neuronal networks, through experience… we are cultivating our own neuronal networks.
—  Dr. James Zull, Biochem professor and author of The Art of Changing the Brain – Enriching Teaching by Exploring the Biology of Learning.
Psychotherapy works by going deep into the brain and its neurons and changing their structure by turning on the right genes. Psychiatrist Dr. Susan Vaughan has argued that the talking cure works by ‘talking to neurons,’ and that an effective psychotherapist or psychoanalyst is a ‘microsurgeon of the mind’ who helps patients make needed alterations in neuronal networks.
—  Norman Doidge, author of The Brain That Changes Itself
Balancing Time and Space in the Brain: A New Model Holds Promise for Predicting Brain Dynamics

For as long as scientists have been listening in on the activity of the brain, they have been trying to understand the source of its noisy, apparently random, activity. In the past 20 years, “balanced network theory” has emerged to explain this apparent randomness through a balance of excitation and inhibition in recurrently coupled networks of neurons. A team of scientists has extended the balanced model to provide deep and testable predictions linking brain circuits to brain activity.

Lead investigators at the University of Pittsburgh say the new model accurately explains experimental findings about the highly variable responses of neurons in the brains of living animals. On Oct. 31, their paper, “The spatial structure of correlated neuronal variability,” was published online by the journal Nature Neuroscience.

The new model provides a much richer understanding of how activity is coordinated between neurons in neural circuits. The model could be used in the future to discover neural “signatures” that predict brain activity associated with learning or disease, say the investigators.

“Normally, brain activity appears highly random and variable most of the time, which looks like a weird way to compute,” said Brent Doiron, associate professor of mathematics at Pitt, senior author on the paper, and a member of the University of Pittsburgh Brain Institute (UPBI). “To understand the mechanics of neural computation, you need to know how the dynamics of a neuronal network depends on the network’s architecture, and this latest research brings us significantly closer to achieving this goal.”

Earlier versions of the balanced network theory captured how the timing and frequency of inputs—excitatory and inhibitory—shaped the emergence of variability in neural behavior, but these models used shortcuts that were biologically unrealistic, according to Doiron.

“The original balanced model ignored the spatial dependence of wiring in the brain, but it has long been known that neuron pairs that are near one another have a higher likelihood of connecting than pairs that are separated by larger distances. Earlier models produced unrealistic behavior—either completely random activity that was unlike the brain or completely synchronized neural behavior, such as you would see in a deep seizure. You could produce nothing in between.”
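The spatial wiring rule Doiron describes can be sketched in a few lines. This is a toy illustration of distance-dependent connectivity only, not the authors' model; the Gaussian fall-off, its width, and the neuron count are all arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def connect(positions, sigma=0.1):
    """Sample a random connectivity matrix where the probability that
    neuron j connects to neuron i falls off as a Gaussian of distance."""
    n = len(positions)
    d = np.abs(positions[:, None] - positions[None, :])   # pairwise distances
    p = np.exp(-d**2 / (2 * sigma**2))                    # decaying probability
    np.fill_diagonal(p, 0.0)                              # no self-connections
    return rng.random((n, n)) < p

positions = np.linspace(0, 1, 200)   # neurons placed along a 1-D segment
W = connect(positions)

# Nearby pairs should be connected far more often than distant pairs.
near = W[np.abs(positions[:, None] - positions[None, :]) < 0.05].mean()
far = W[np.abs(positions[:, None] - positions[None, :]) > 0.5].mean()
print(near > far)
```

Under this rule the model interpolates between the two unrealistic extremes: wiring is neither uniformly random nor all-to-all, but falls off smoothly with distance.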

In the context of this balance, neurons are in a constant state of tension. According to co-author Matthew Smith, assistant professor of ophthalmology at Pitt and a member of UPBI, “It’s like balancing on one foot on your toes. If there are small overcorrections, the result is big fluctuations in neural firing, or communication.”

The new model accounts for temporal and spatial characteristics of neural networks and the correlations in the activity between neurons—whether firing in one neuron is correlated with firing in another. The model is such a substantial improvement that the scientists could use it to predict the behavior of living neurons examined in the area of the brain that processes the visual world.

After developing the model, the scientists examined data from the living visual cortex and found that their model accurately predicted the behavior of neurons based on how far apart they were. The activity of nearby neuron pairs was strongly correlated. At an intermediate distance, pairs of neurons were anticorrelated (when one responded more, the other responded less), and at greater distances still they were independent.

“This model will help us to better understand how the brain computes information because it’s a big step forward in describing how network structure determines network variability,” said Doiron. “Any serious theory of brain computation must take into account the noise in the code. A shift in neuronal variability accompanies important cognitive functions, such as attention and learning, as well as being a signature of devastating pathologies like Parkinson’s disease and epilepsy.”

While the scientists examined the visual cortex, they believe their model could be used to predict activity in other parts of the brain, such as areas that process auditory or olfactory cues. And they believe that the model generalizes to the brains of all mammals. In fact, the team found that a neural signature predicted by their model appeared in the visual cortex of living mice studied by another team of investigators.

“A hallmark of the computational approach that Doiron and Smith are taking is that its goal is to infer general principles of brain function that can be broadly applied to many scenarios. Remarkably, we still don’t have things like the laws of gravity for understanding the brain, but this is an important step for providing good theories in neuroscience that will allow us to make sense of the explosion of new experimental data that can now be collected,” said Nathan Urban, associate director of UPBI.


Think back to a really vivid memory. Got it? Now try to remember what you had for lunch three weeks ago. That second memory probably isn’t as strong—but why not? Why do we remember some things, and not others? And why do memories eventually fade?

Let’s look at how memories form in the first place. When you experience something – like dialing a phone number – the experience is converted into a pulse of electrical energy that zips along a network of neurons. Information first lands in short-term memory, where it’s available for anywhere from a few seconds to a couple of minutes. It’s then transferred to long-term memory through areas such as the hippocampus, and finally to several storage regions across the brain. Neurons throughout the brain communicate at dedicated sites called synapses using specialized neurotransmitters. If two neurons communicate repeatedly, a remarkable thing happens: the efficiency of communication between them increases. This process, called long-term potentiation, is considered to be a mechanism by which memories are stored long-term.
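The "efficiency of communication increases" step can be caricatured with a one-line Hebbian update. This is a deliberately minimal sketch; real long-term potentiation involves receptor dynamics and spike timing, and the learning rate and activity values here are arbitrary:

```python
# Hebbian sketch of long-term potentiation: when two units are repeatedly
# active together, the weight (communication efficiency) between them grows.

def hebbian_update(w, pre, post, lr=0.1):
    """Strengthen the connection in proportion to coincident activity."""
    return w + lr * pre * post

w = 0.1                      # initial synaptic efficiency
for _ in range(20):          # repeated co-activation, as in rehearsal
    w = hebbian_update(w, pre=1.0, post=1.0)

print(round(w, 2))           # 0.1 + 20 * 0.1 = 2.1
```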

But why do memories fade? Check out the TED-Ed Lesson “How memories form and how we lose them” by Catharine Young.

Animation by Patrick Smith

Supporting the damaged brain

A new study shows that embryonic nerve cells can functionally integrate into local neural networks when transplanted into damaged areas of the visual cortex of adult mice.

(Image caption: Neuronal transplants (blue) connect with host neurons (yellow) in the adult mouse brain in a highly specific manner, rebuilding neural networks lost upon injury. Credit: Sofia Grade, LMU/Helmholtz Zentrum München)

When it comes to recovering from insult, the adult human brain has very little ability to compensate for nerve-cell loss. Biomedical researchers and clinicians are therefore exploring the possibility of using transplanted nerve cells to replace neurons that have been irreparably damaged as a result of trauma or disease. Previous studies have suggested there is potential to remedy at least some of the clinical symptoms resulting from acquired brain disease through the transplantation of fetal nerve cells into damaged neuronal networks. However, it is not clear whether transplanted intact neurons can be sufficiently integrated to result in restored function of the lesioned network. Now researchers based at LMU Munich, the Max Planck Institute for Neurobiology in Martinsried and the Helmholtz Zentrum München have demonstrated that, in mice, transplanted embryonic nerve cells can indeed be incorporated into an existing network in such a way that they correctly carry out the tasks performed by the damaged cells originally found in that position. Such work is of importance in the potential treatment of all acquired brain diseases, including neurodegenerative illnesses such as Alzheimer’s or Parkinson’s disease, as well as strokes and trauma, given that each disease state leads to the large-scale, irreversible loss of nerve cells and the acquisition of what is usually a lifelong neurological deficit for the affected person.

In the study published in Nature, researchers of the Ludwig Maximilians University Munich, the Max Planck Institute of Neurobiology, and the Helmholtz Zentrum München have specifically asked whether transplanted embryonic nerve cells can functionally integrate into the visual cortex of adult mice. “This region of the brain is ideal for such experiments,” says Magdalena Götz, joint leader of the study together with Mark Hübener. Hübener is a specialist in the structure and function of the mouse visual cortex in Professor Tobias Bonhoeffer’s Department (Synapses – Circuits – Plasticity) at the MPI for Neurobiology. As Hübener explains, “we know so much about the functions of the nerve cells in this region and the connections between them that we can readily assess whether the implanted nerve cells actually perform the tasks normally carried out by the network.” In their experiments, the team transplanted embryonic nerve cells from the cerebral cortex into lesioned areas of the visual cortex of adult mice. Over the course of the following weeks and months, they monitored the behavior of the implanted, immature neurons by means of two-photon microscopy to ascertain whether they differentiated into so-called pyramidal cells, a cell type normally found in the area of interest. “The very fact that the cells survived and continued to develop was very encouraging,” Hübener remarks. “But things got really exciting when we took a closer look at the electrical activity of the transplanted cells.” In their joint study, PhD student Susanne Falkner and Postdoc Sofia Grade were able to show that the new cells formed the synaptic connections that neurons in their position in the network would normally make, and that they responded to visual stimuli.

The team then went on to characterize, for the first time, the broader pattern of connections made by the transplanted neurons. Astonishingly, they found that pyramidal cells derived from the transplanted immature neurons formed functional connections with the appropriate nerve cells all over the brain. In other words, they received precisely the same inputs as their predecessors in the network. In addition, they were able to process that information and pass it on to the downstream neurons which had also differentiated in the correct manner. “These findings demonstrate that the implanted nerve cells have integrated with high precision into a neuronal network into which, under normal conditions, new nerve cells would never have been incorporated,” explains Götz, whose work at the Helmholtz Zentrum and at LMU focuses on finding ways to replace lost neurons in the central nervous system. The new study reveals that immature neurons are capable of correctly responding to differentiation signals in the adult mammalian brain and can close functional gaps in an existing neural network.


An image recognition network dreams about every object it knows. Part 1/2: animals

Video from Ville-Matias Heikkilä uses a deep-dream-like technique to reveal the collected neural dataset on various animals (and not puppyslugs). The video here displays 500 of them:

Network used: VGG CNN-S (pretrained with Imagenet)

There are 1000 output neurons in the network, one for each image recognition category. In this video, the output of each of these neurons is separately amplified using backpropagation (i.e. deep dreaming).
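Mechanically, "amplifying the output of one neuron using backpropagation" means gradient ascent on the input. Here is a stripped-down sketch with a random linear layer standing in for VGG CNN-S; the layer, step size, and category index are placeholders, not the video's actual setup:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((1000, 64)) * 0.1   # stand-in for a trained network
x = rng.standard_normal(64) * 0.01          # stand-in for an input image
target = 42                                 # output neuron (category) to amplify

before = W[target] @ x
for _ in range(100):
    # For a linear layer, d(output[target])/dx = W[target],
    # so gradient ascent just steps along that row.
    x += 0.1 * W[target]
after = W[target] @ x

print(after > before)   # the chosen neuron's activation has grown
```

In a deep network the gradient is obtained by backpropagation through all layers rather than read off a weight row, but the loop is the same: repeatedly nudge the input to make one output fire harder.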

More Here

Can the brain feel it? The world’s smallest extracellular needle-electrodes

A research team in the Department of Electrical and Electronic Information Engineering and the Electronics-Inspired Interdisciplinary Research Institute (EIIRIS) at Toyohashi University of Technology developed 5-μm-diameter needle-electrodes on 1 mm × 1 mm block modules. This tiny needle may help solve the mysteries of the brain and facilitate the development of a brain-machine interface. The research results were reported in Scientific Reports on Oct 25, 2016.

(Image caption: Extracellular needle-electrode with a diameter of 5 μm mounted on a connector)

The neuronal networks in the human brain are extremely complex. Microfabricated silicon needle-electrode devices are expected to be an innovation that can record and analyze the electrical activities of the microscale neuronal circuits in the brain.

However, smaller needle technologies (e.g., needle diameter < 10 μm) are necessary to reduce damage to brain tissue. In addition to the needle geometry, the device substrate should be minimized, not only to reduce the total damage to tissue but also to improve the accessibility of the electrode in the brain. Such electrode technologies will enable new experimental concepts in neurophysiology.


The individual microneedles are fabricated on block modules that are small enough to use in the narrow spaces present in brain tissue, as demonstrated by recordings from mouse cerebral cortex. In addition, the block module markedly improves design variability in packaging, enabling numerous in vivo recording applications.

“We demonstrated the high design variability in the packaging of our electrode device, and in vivo neuronal recordings were performed by simply placing the device on a mouse’s brain. We were very surprised that high quality signals of a single unit were stably recorded over a long period using the 5-μm-diameter needle,” explained the first author, Assistant Professor Hirohito Sawahata, and co-author, researcher Shota Yamagiwa.

The leader of the research team, Associate Professor Takeshi Kawano said: “Our silicon needle technology offers low invasive neuronal recordings and provides novel methodologies for electrophysiology; therefore, it has the potential to enhance experimental neuroscience.” He added, “We expect the development of applications to solve the mysteries of the brain and the development of brain–machine interfaces.”
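For readers curious what "stably recording high-quality single-unit signals" involves downstream, here is a hypothetical first-pass analysis: threshold-crossing spike detection on a synthetic extracellular trace. Nothing here is from the paper; the sampling rate, spike amplitude, and robust threshold rule are all assumptions for the sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 20_000                              # sampling rate in Hz (assumed)
trace = rng.normal(0, 1, fs)             # one second of synthetic noise
true_spikes = [2000, 9000, 15000]
for t in true_spikes:
    trace[t] -= 12                       # extracellular spikes are negative-going

# Robust noise estimate: median(|x|)/0.6745 approximates the noise SD,
# so this sets the threshold at roughly -5 standard deviations.
threshold = -5 * np.median(np.abs(trace)) / 0.6745

# A spike is counted where the trace crosses the threshold from above.
crossings = np.flatnonzero((trace[1:] < threshold) & (trace[:-1] >= threshold))
print(len(crossings), "spikes detected")
```

Real pipelines add filtering and spike sorting on top of this, but threshold crossing is the usual first step for turning a raw needle-electrode trace into spike times.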

New insights into neural computations in cerebral cortex

Study by Max Planck Florida scientists points to an active role for dendrites in cortical processing.

Advancing our understanding of neural circuits in the cerebral cortex

The cerebral cortex is the largest and most complex area of the brain, comprising 20 billion neurons and 60 trillion synapses – a neuronal network whose proper function is critical for sensory perception, motor control, and cognition. The part of the cerebral cortex devoted to vision has played a key role in elucidating fundamental principles that are used by cortical circuits to encode information. Because edges supply a wealth of information about our visual world, neurons in visual cortex respond selectively to edge orientation: some prefer vertical edges, others horizontal, and still others every angle in between. Individual neurons also exhibit considerable diversity in their degree of selectivity, some responding to a narrow range of orientations, others to a broad range. These differences in selectivity are critical for accurately encoding the visual information in natural scenes, but the underlying mechanisms that account for this diversity remain unclear. In their recent publication in Nature Neuroscience, MPFI researchers Daniel Wilson, David Whitney, Ben Scholl and David Fitzpatrick describe how this diversity comes about and, in the process, provide new insights into the powerful role that dendrites play in cortical processing.

The research team addressed this issue using new microscopic imaging technologies that allowed them for the first time to assess the input/output functions of individual cortical neurons in the living brain. By using in vivo 2-photon calcium imaging, they were able to characterize the orientation tuning and spatial arrangement of synaptic inputs to the dendritic spines of individual neurons in ferret visual cortex, and compare dendritic spine and cell body responses.

The researchers found that they were able to reliably predict the orientation preference of individual neurons simply by adding up the responses of their dendritic spines. However, the responses of the dendritic spines did not account for the degree of orientation selectivity exhibited by individual neurons. In looking for factors that could account for differences in selectivity, they noticed that spines with similar orientation preference were often spatially clustered along the dendrite and that neurons that had a greater number of these clusters exhibited greater selectivity. They also discovered that this functional clustering was correlated with localized dendritic events that are likely to enhance the inputs from the clustered spines.
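The summation result can be illustrated with made-up tuning curves: adding up spine responses whose preferences cluster around one orientation predicts a cell preference near that orientation. All numbers below are invented for the sketch and are not the study's data:

```python
import numpy as np

angles = np.linspace(0, 180, 181)   # orientation in degrees (180-periodic)

def tuning(pref, width=20):
    """Gaussian-like orientation tuning curve peaked at `pref` degrees."""
    d = np.minimum(np.abs(angles - pref), 180 - np.abs(angles - pref))
    return np.exp(-d**2 / (2 * width**2))

# Hypothetical spines: most preferences cluster around 45 degrees,
# with a couple of outliers, as seen along real dendrites.
spine_prefs = [40, 42, 45, 48, 50, 90, 120]
summed = sum(tuning(p) for p in spine_prefs)

predicted = angles[np.argmax(summed)]
print(predicted)   # peaks near 45, dominated by the clustered spines
```

The clustered spines dominate the sum, so the predicted preference lands near their shared orientation; in the study, localized dendritic events further amplify exactly such clustered inputs, sharpening selectivity beyond what linear summation alone provides.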

So not only did the researchers solve the riddle of orientation selectivity, they provided evidence that dendrites endow neurons with more computational power than previously thought. While this study focused specifically on information coding in visual cortex, it is likely that functional clustering of inputs within the dendritic field is a common principle influencing neuronal input/output functions throughout the cerebral cortex, significantly enhancing the brain’s information processing capabilities.


An image recognition network dreams about every object it knows. Part 2/2: non-animals

Second video from Ville-Matias Heikkilä uses a deep-dream-like technique to visually reveal the collected neural dataset, this time featuring man-made objects and food:

Network used: VGG CNN-S (pretrained with Imagenet)

There are 1000 output neurons in the network, one for each image recognition category. In this video, the output of each of these neurons is separately amplified using backpropagation (i.e. deep dreaming).

The middle line shows the category title of the amplified neuron. The bottom line shows the category title of the highest competing neuron. Color coding: green = amplification very successful (second tier far behind), yellow = close competition with the second tier, red = this is the second tier.

Some parameters adjusted mid-rendering, sorry.


The first video (on the animal dataset) can be found here