I (we) am/are a hivemind of 20,000 bees, under the dictatorial rule of a queen, piloting a colossal human-sized mecha shell (as established in previous asks)
By carefully synchronizing the wing-beats of all 20,000 drones, I (we) create an interplay of frequencies which establishes a sort of consciousness. Each bee acts as a single node capable of sending, receiving, and passing along impulses, not unlike the neurons within a neural network. Vibrations from my (our) wings are sent through the air as a conduit, carrying pulses from bee to bee in the form of oscillating air pressure. These high-frequency waves take the role of electrical pulses in a typical human brain. This creates a “Singularity” among the bee horde which functions as a coherent mind.
In my (our) opinion, this cohesion and cooperation satisfies a human’s definition of “single” as opposed to “many”
Thank you for asking me (us) about the complicated technicalities of my (our) being. It makes me (us) feel appreciated.
Blue Brain team finds 'Multi-dimensional universe' in brain networks
For most people, it is a stretch of the imagination to understand the world in four dimensions but a new study has discovered structures in the brain with up to eleven dimensions – ground-breaking work that is beginning to reveal the brain’s deepest architectural secrets.
Using algebraic topology in a way that it has never been used before in Neuroscience, a team from the Blue Brain Project has uncovered a universe of multi-dimensional geometrical structures and spaces within the networks of the brain.
Learning is physical. Learning means the modification, growth, and pruning of our neurons, connections (called synapses), and neuronal networks, through experience… we are cultivating our own neuronal networks.
Psychotherapy works by going deep into the brain and its neurons and changing their structure by turning on the right genes. Psychiatrist Dr. Susan Vaughan has argued that the talking cure works by ‘talking to neurons,’ and that an effective psychotherapist or psychoanalyst is a 'microsurgeon of the mind’ who helps patients make needed alterations in neuronal networks.
How to measure the relevance of a voice you hear on the internet:
A series of thoughts you can implement to cope with criticism or hatred.
Step 1: if whatever this person is complaining about vanished, what would they do with themselves that is useful to society? Would they likely move on to complaining vehemently about something else? Or would they suddenly have time for their abandoned engineering careers, their medical degree, or perhaps their terrible art?
If the answer is “nothing”, then disregard them
Step 2: is this person actually trying to improve whatever situation it is they’re complaining about? Are they providing strategies, speaking in a way that welcomes equal and frank discussion, and allowing for ideas other than their own? Are they providing resources which a person can use to reproduce their opinions for themselves, or to assist in fixing the situation? Are citations being made? Is information being given in an even-handed way? If not, then they are not there to improve anything. They point out the discrepancies to mock them, not to heal them.
Step 3: Does the person honestly have a single ounce of care for the people with whom they clash? The best way to root out a professional duelist is to watch how easily he goes for his revolver - which is to say, attacking as a natural beginning is not constructive. It indicates a lack of care or concern for anyone but themselves. It means their opinion is entirely self-serving, and all other arguments they may make that incorporate other people or their situations (using veterans to lambast a presidential administration, for example) are utilized only as pawns for their own selfish reasons. If no strategies, resources, or alternatives are offered, and all arguments are framed in terms of abusive, dismissive language or exaggerated situations…they are there just to brawl…disregard.
Step 4: if the person uses nothing but inflammatory language, mockery, or sarcasm, they have but one thing in mind - which is to hurt the feelings of one group to entertain another. All their points are henceforth invalidated because they refused to obey the rules of decorum.
Step 5: if they tell you, in ANY capacity, that how you feel about something, how your soul reacts, is wrong…if they try to correct that with negative judgements or inflammatory language…they aren’t actually interested in your opinion. It contradicts theirs. They will not listen. They operate with the arrogance that allows them the freedom to question you, but not the humility to receive similar. They literally have a neuronal network weighted against your words and will never hear you. Don’t bother trying.
Step 6: if what they say makes you feel like ripping off one of their arms so that you can use the jagged bone to scoop out their eyes so that you can piss into their open skull…disregard them.
My advice in all cases is the same:
Leave it alone for a time. Walk away. Let your mind come up with all its clever rejoinders. Tell others. Vent your frustrations. Most importantly - breathe. Center yourself. Focus all that anger and hurt into a fine, thin sheet of steel, hammer and temper it in your focus, sharpen it on your energy level. But never wield it unless you’re willing to cross swords, fight for blood, be injured, divide your mind between the fray and the strategy.
Don’t tax yourself with responding to these people unless you know for a fact, you can outlast them. They have nothing else to live for. This is all there is for them. So their all goes into it.
If anything can compromise you or harm you…leave it.
Unless you know that someone else is being harmed, and then the decision is up to you. You can step in the way, deflect the rage, distract. Or you can walk away. Both have consequences.
But think on this: they’re relying on your strength to provide them with entertainment. They’re relying on weakness to let them keep shouting from their soapboxes. Either way you act, they are waiting for it and will get something from that if they can.
Because they are leeches. And as we have established in the steps of thought…have nothing constructive to add.
This goes for me too. I am old and set in my ways. I have certain notions about the world. I try to be flexible but I have my own thoughts. If I am ever all of these in one place with regards to an issue…
Please chastise me. I bite…but I will work hard not to.
I didn’t want to post a selfie for this. It didn’t feel significant enough. My haircut, my eyes, the lines of my face, the shape of my nose, these are all the last things you need to know about me if you want to know autism. There is no autistic look, no physical trait, no outfit we all wear to be seen. Autism is our brain, the dense tissue of cells, fibers and liquid - the ugliest organ, one might say, a light shadow of pink and a lot of slime, nothing remarkable. Yet it holds all our thoughts and dreams, all our fears and hopes, all our memories, and our identities. All of the things that make us, us.
Autism is our brain and it can’t be seen or heard or touched. But many things can be. Many things are so very obvious. My faked, exaggerated facial expressions and awkward raptor hands. My intonations, too high, too low, voice too loud, too expressive. My fingers, always moving, always going through rounds and rounds of repetitive motions. My shakes and flinches in reaction to bad sounds and unexpected touches. My words, often “smart” and sophisticated, sometimes carefully prepared, and repeated again and again. And my happy flappy hands when I feel the joy channel through me like a lightning strike. Those are things you can notice: if I allow you to. Or if I’m too tired to hide them.
I hide them because I have been taught to. Not by so-called therapists, thank god, but by people around me. They did not give me stickers for saying please and thank you. They did not take away my toys for not making eye contact. They just bullied and shamed me for years until I picked it up myself.
Every time a kid at school laughed at me for taking so long to tie my shoelaces, or ridiculed me for talking about science fiction, or tricked me into an embarrassing situation because they could - I learned. Every time a teacher blamed me for not being able to get up early in the morning, or accused me of deliberately being rude, or told my parents they should “beat me once or twice” to fix my problems - I remembered.
And I trained myself to pretend. I became an outstanding actor. I rehearsed every word, every expression, every step of every scenario, until I forgot why I was doing it. I painstakingly copied everyone I interacted with, from their smile to the way they moved their hands when talking, until I forgot what it was like to be myself. I thought I was broken, and I was repairing myself. Only it didn’t make me feel better. It only made me feel more broken.
I am autistic. It is in my brain, in that complicated network of neurons we call ourselves. But around me I have a shell. A cover, maybe, like the camouflage suits that soldiers wear. I made it for myself, one thread at a time, because I had to. Autism is there, underneath, but the outside world sees the cover. I know now I am not broken. I know now I am wired that way. I do not wish to have that cover anymore, yet I can’t get rid of it. I try to. I learn to live as an autistic person, not as a broken neurotypical, and I am shedding that cover, slowly, one thread at a time.
This is why we need acceptance, not awareness. Awareness would just put a new word in the mouth of my bullies to shout at my back. Awareness would just give a reason to my teachers not to help me and a cause to write down on my “expelled” papers. Awareness would just make me feel like I am a tragedy, a burden, a fate worse than death and… how is that any different from what I felt for so many years?
Acceptance tells me that my struggles are real, and can be made less with support and accommodations. Acceptance tells me that the way I move, the way I talk, the way I am is okay, a part of natural human variation, and not something to be ashamed of. Acceptance tells me I am not alone, and there are people like me out there. Acceptance tells me my life can be beautiful, amazing, fulfilling, and just as happy as a neurotypical life, no matter how much help I need or how much I can do. Acceptance tells me - it is not all bad. There is a place in this world for you.
So today, do not support Autism Speaks, do not support Light It Up Blue, and do not support autism awareness. Awareness is the last thing we need right now! What we need is for people to understand us and to stop trying to fix us. Maybe we aren’t the ones who are broken. Maybe society is. Maybe it’s time to fix society. And then, there will be a place for us, just the way we are.
In @fishingboatproceeds’s new book, Turtles All the Way Down, the narrator experiences things she calls thought spirals, where, as the phrase suggests, your thoughts consume you, spinning inward and coiling tighter and tighter until they possess your entire consciousness in that moment.
This past month has been incredibly hard for me. A few years ago, my therapist recommended journaling, doodling, or coloring as a way to cope with my panic and anxiety in a healthy manner. During the past few days, I’ve pulled out this piece whenever I felt like a thought spiral was on the verge of forming.
My thoughts as of lately have been a lot like this:
If I fail this assignment I won’t get a good grade in the course.
Then I will have a lower gpa.
Then I won’t get into graduate school.
Then I’ll have to endure more emotional abuse from my family.
Then I’ll have to deal with the fact that I’m a failure.
Then the sacrifices my parents made to get me here were made in vain.
Then what kind of person am I?
Why do I deserve these privileges when there are others more worthy?
Am I studying enough?
Am I trying hard enough?
Am I enough?
So each spiral in this drawing marks a time when things felt beyond my control, when I couldn’t handle or process my emotions or thoughts, or when I became fixated on all the overwhelming negative possibilities and consequences that could happen.
I wish I could say I’m getting better, but just because our brains and neuronal networks exhibit immense plasticity doesn’t mean change will happen when you need it.
And even though evolution can occur over time, adaptive changes within a species don’t occur because these organisms have a stronger will to live.
But I still wake up each morning, surrounded by people who care. And I think for now, waking up each morning to see another day should be enough.
Inflamed Support Cells Appear to Contribute to Some Kinds of Autism
But researchers found that when glial cells were normal, they “rescued” autistic neurons in culture, causing the latter to behave normally.
Modeling the interplay between neurons and astrocytes derived from children with Autism Spectrum Disorder (ASD), researchers at University of California San Diego School of Medicine, with colleagues in Brazil, say innate inflammation in the latter appears to contribute to neuronal dysfunction in at least some forms of the disease.
The findings, published in the current issue of Biological Psychiatry, are the first to demonstrate that supporting brain cells, called astrocytes, may play a role in some subtypes of ASD. But more importantly, the research, using induced pluripotent stem cells, suggests the neuronal damage might be reversible through novel anti-inflammatory therapies.
A confocal micrograph of a stained astrocyte grown in tissue culture. Blue indicates DNA, revealing the nucleus of the astrocyte and other cells. Image courtesy of EnCor
To conduct the study, scientists took dental pulp cells from donated baby teeth of three children with diagnoses of non-syndromic autism (part of the ongoing “Tooth Fairy Project”) and reprogrammed the cells to become either neurons or astrocytes, a type of glia or support cell found abundantly in the brain. The cells were grown into organoids, essentially mini-brains in a dish.
Though genetically distinct, all three children displayed stereotypical ASD behaviors, such as lack of verbal skills or social interaction. When researchers examined the developed organoids in microscopic detail, they noted that the neurons had fewer synapses (connections to other neurons) and other network defects. Additionally, some astrocytes showed high levels of interleukin 6 (IL-6), a pro-inflammatory protein. High levels of IL-6 are toxic to neurons.
The researchers co-cultured astrocytes derived from the ASD children with neurons derived from normal controls. The healthy neurons behaved like ASD neurons, said co-senior author Alysson R. Muotri, PhD, professor in the UC San Diego School of Medicine departments of Pediatrics and Cellular and Molecular Medicine, director of the UC San Diego Stem Cell Program and a member of the Sanford Consortium for Regenerative Medicine.
“But more importantly, the opposite was true. When we co-cultured ASD neurons with normal astrocytes, we could rescue the cellular defects. The neurons reverted to normal functioning and behavior.”
Scientists discover new mechanism of how brain networks form
Scientists have discovered that networks of inhibitory brain cells, or neurons, develop through a mechanism opposite to the one followed by excitatory networks. Excitatory neurons sculpt and refine maps of the external world throughout development and experience, while inhibitory neurons form maps that become broader with maturation. This discovery adds a new piece to the puzzle of how the brain organizes and processes information. Knowing how the normal brain works is an important step toward understanding the nature of neurological conditions and opens the possibility of finding treatments in the future.
“The brain represents the external world as specific maps of activity created by networks of neurons,” said senior author Dr. Benjamin Arenkiel, associate professor of molecular and human genetics and of neuroscience at Baylor College of Medicine, who studies neural maps in the olfactory system of the laboratory mouse. “Most of these maps have been studied in the excitatory circuits of the brain because excitatory neurons in the cortex outnumber inhibitory neurons.”

The studies of excitatory maps have revealed that they begin as a diffuse and overlapping network of cells. “With time,” said Arenkiel, “experience sculpts this diffuse pattern of activity into better defined areas, such that individual mouse whiskers, for instance, are represented by discrete segments of the brain cortex. This progression from a diffuse to a refined pattern occurs in many areas of the brain.”

In addition to excitatory networks, the brain has inhibitory networks that also respond to external stimuli and regulate the activity of neural networks. How the inhibitory networks develop, however, has remained a mystery.

In this study, Arenkiel and colleagues studied the development of maps of inhibitory neurons in the olfactory system of the mouse.
Studying inhibitory brain networks of the mouse sense of smell
“Unlike sight, hearing or other senses, the sense of smell in the mouse detects discrete scents from a large array of molecules,” said Arenkiel, who is also a McNair Scholar at Baylor.

Mice can detect a vast number of scents thanks in part to a complex network of inhibitory neurons. Inhibitory neurons are the most abundant type of cell in the mouse brain area dedicated to processing scent. To support this network, newly born inhibitory neurons are continually added and integrated into the circuits.

Arenkiel and colleagues followed the paths of these newly added neurons over time to determine how inhibitory circuits develop. First, they genetically labeled the cells so they would glow when the neurons were active. Then, they offered individual scents to the mice and visually recorded through a microscope the areas or networks of the brain that glowed for each scent the live, anesthetized animal smelled. The scientists repeated the experiment several times to determine how the networks changed as the animal learned to identify each scent.
The scientists expected that inhibitory networks would mature in a way similar to that of excitatory networks; that is, the more the animal experienced a scent, the better defined the networks of activity would become. Surprisingly, the scientists discovered that the inhibitory brain circuits of the mouse sense of smell develop in a manner opposite to the excitatory circuits. Instead of becoming narrowly defined areas, the inhibitory circuits become broader. Thanks to this new finding, scientists now better understand how the brain organizes and processes information.
Arenkiel and colleagues think that the inhibitory networks work hand-in-hand with the excitatory networks. They propose that the interaction between excitatory and inhibitory networks could be compared to a network of roads (excitatory networks) whose traffic is regulated by a network of traffic lights (inhibitory networks). The scientists suggest that the formation of useful neural maps depends on inhibitory networks driving the refinement of excitatory networks, and that this new information will be essential for developing new approaches to repairing brain tissue.
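One way to make “refined” versus “broader” concrete is to score a neural map by the fraction of recorded sites active above some threshold. The sketch below is purely illustrative, with hypothetical numbers (not data from the study): a refining excitatory map shrinks toward a few strongly active sites, while a maturing inhibitory map recruits more of them.

```python
def map_breadth(responses, threshold=0.5):
    """Fraction of recorded sites whose response exceeds a threshold.

    A crude, hypothetical proxy for how 'broad' a map of activity is.
    """
    active = sum(1 for r in responses if r >= threshold)
    return active / len(responses)

# Hypothetical excitatory map: diffuse early, sharpened by experience.
excitatory_early = [0.6, 0.7, 0.6, 0.5, 0.6, 0.7]
excitatory_late = [0.1, 0.9, 0.8, 0.1, 0.0, 0.1]

# Hypothetical inhibitory map: narrow early, broader with maturation.
inhibitory_early = [0.0, 0.8, 0.1, 0.0, 0.0, 0.1]
inhibitory_late = [0.6, 0.9, 0.7, 0.6, 0.5, 0.6]
```

With this toy measure, the excitatory map's breadth falls over time while the inhibitory map's breadth grows, matching the opposite developmental trajectories described above.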
In “Doctor Bashir, I presume?” Julian says he was taken to Adigeon Prime just before his seventh birthday. Then he says: “Over the course of the next two months, my genetic structure was manipulated to accelerate the growth of neuronal networks in my cerebral cortex, and a whole new Julian Bashir was born.” He thinks he was born when they finished the DNA resequencing.
In “Distant Voices” we can see that he’s not too happy that his 30th birthday is coming. We are given an explanation - the passage of time, the end of youth. But maybe… maybe he doesn’t think it really is his birthday? The DNA resequencing surely wasn’t finished exactly on the day he turned seven. So maybe he thinks his real birthday is on some other date? Maybe he thinks it’s the birthday of little Jules who was killed? Maybe he’s not grumpy just about his 30th birthday, but about all his birthdays in general? Because he thinks they are fake, because they remind him of what he manages to forget on normal days: that he is not from nature.
ARTIFICIAL INTELLIGENCE ANALYZES GRAVITATIONAL LENSES 10 MILLION TIMES FASTER
** Synopsis: SLAC and Stanford researchers demonstrate that brain-mimicking ‘neural networks’ can revolutionize the way astrophysicists analyze their most complex data, including extreme distortions in spacetime that are crucial for our understanding of the universe. **
Researchers from the Department of Energy’s SLAC National Accelerator Laboratory and Stanford University have for the first time shown that neural networks – a form of artificial intelligence – can accurately analyze the complex distortions in spacetime known as gravitational lenses 10 million times faster than traditional methods.
“Analyses that typically take weeks to months to complete, that require the input of experts and that are computationally demanding, can be done by neural nets within a fraction of a second, in a fully automated way and, in principle, on a cell phone’s computer chip,” said postdoctoral fellow Laurence Perreault Levasseur, a co-author of a study published today in Nature.
Lightning Fast Complex Analysis
The team at the Kavli Institute for Particle Astrophysics and Cosmology (KIPAC), a joint institute of SLAC and Stanford, used neural networks to analyze images of strong gravitational lensing, where the image of a faraway galaxy is multiplied and distorted into rings and arcs by the gravity of a massive object, such as a galaxy cluster, that’s closer to us. The distortions provide important clues about how mass is distributed in space and how that distribution changes over time – properties linked to invisible dark matter that makes up 85 percent of all matter in the universe and to dark energy that’s accelerating the expansion of the universe.
Until now this type of analysis has been a tedious process that involves comparing actual images of lenses with a large number of computer simulations of mathematical lensing models. This can take weeks to months for a single lens.
But with the neural networks, the researchers were able to do the same analysis in a few seconds, which they demonstrated using real images from NASA’s Hubble Space Telescope and simulated ones.
To train the neural networks in what to look for, the researchers showed them about half a million simulated images of gravitational lenses for about a day. Once trained, the networks were able to analyze new lenses almost instantaneously with a precision that was comparable to traditional analysis methods. In a separate paper, submitted to The Astrophysical Journal Letters, the team reports how these networks can also determine the uncertainties of their analyses.
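The simulate-then-train workflow described above can be caricatured in a few lines of plain Python. This is only a sketch under toy assumptions: a single number stands in for a lens image, and a least-squares fit stands in for the deep network, but the shape of the pipeline is the same, i.e., train on simulated (observation, parameter) pairs, then map new observations to parameters almost instantly.

```python
import random

random.seed(0)

def simulate(mass):
    """Toy stand-in for a ray-traced lens simulation: the observable
    (say, a ring radius) grows with lens mass, plus measurement noise.
    All names and numbers here are hypothetical."""
    return 2.0 * mass + random.gauss(0.0, 0.01)

# "Training set": simulated (observable, true-parameter) pairs.
masses = [random.uniform(1.0, 10.0) for _ in range(500)]
radii = [simulate(m) for m in masses]

# Fit observable -> parameter by least squares (a deep network plays
# this role in the real analysis).
n = len(masses)
mean_r = sum(radii) / n
mean_m = sum(masses) / n
slope = (sum((r - mean_r) * (m - mean_m) for r, m in zip(radii, masses))
         / sum((r - mean_r) ** 2 for r in radii))
intercept = mean_m - slope * mean_r

def predict(radius):
    """Near-instant 'analysis' of a new observation."""
    return slope * radius + intercept

estimate = predict(simulate(5.0))  # recovers the parameter quickly
```

The expensive part (the simulations and the fit) happens once, up front; afterwards each new lens costs only a single function evaluation, which is the source of the speedup the researchers describe.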
Prepared for Data Floods of the Future
“The neural networks we tested – three publicly available neural nets and one that we developed ourselves – were able to determine the properties of each lens, including how its mass was distributed and how much it magnified the image of the background galaxy,” said the study’s lead author Yashar Hezaveh, a NASA Hubble postdoctoral fellow at KIPAC.
This goes far beyond recent applications of neural networks in astrophysics, which were limited to solving classification problems, such as determining whether an image shows a gravitational lens or not.
The ability to sift through large amounts of data and perform complex analyses very quickly and in a fully automated fashion could transform astrophysics in a way that is much needed for future sky surveys that will look deeper into the universe – and produce more data – than ever before.
The Large Synoptic Survey Telescope (LSST), for example, whose 3.2-gigapixel camera is currently under construction at SLAC, will provide unparalleled views of the universe and is expected to increase the number of known strong gravitational lenses from a few hundred today to tens of thousands.
“We won’t have enough people to analyze all these data in a timely manner with the traditional methods,” Perreault Levasseur said. “Neural networks will help us identify interesting objects and analyze them quickly. This will give us more time to ask the right questions about the universe.”
A Revolutionary Approach
Neural networks are inspired by the architecture of the human brain, in which a dense network of neurons quickly processes and analyzes information.
In the artificial version, the “neurons” are single computational units that are associated with the pixels of the image being analyzed. The neurons are organized into layers, up to hundreds of layers deep. Each layer searches for features in the image. Once the first layer has found a certain feature, it transmits the information to the next layer, which then searches for another feature within that feature, and so on.
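The layered idea described above can be sketched in plain Python (purely illustrative, not any production framework): each “layer” scans its input for a feature and hands the resulting feature map to the next layer, which looks for a feature within that feature. The kernels here are fixed by hand for clarity; real networks learn them from data.

```python
def conv2d(image, kernel):
    """Valid 2-D convolution (strictly, cross-correlation) of one channel."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            out[i][j] = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw)
            )
    return out

def relu(fmap):
    """Nonlinearity: keep only positive evidence for the feature."""
    return [[max(0.0, v) for v in row] for row in fmap]

# Two stacked layers on a tiny 3x3 "image": the first detects
# left-to-right contrast edges, the second detects vertical agreement
# between those edges ("a feature within a feature").
image = [[0, 0, 1],
         [0, 1, 1],
         [1, 1, 1]]
layer1 = relu(conv2d(image, [[-1, 1]]))   # horizontal-contrast detector
layer2 = relu(conv2d(layer1, [[1], [1]])) # stacks the edge evidence
```

Stacking more such layers, hundreds deep, with learned rather than hand-picked kernels, gives the convolutional networks described in the figure caption below.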
“The amazing thing is that neural networks learn by themselves what features to look for,” said KIPAC staff scientist Phil Marshall, a co-author of the paper. “This is comparable to the way small children learn to recognize objects. You don’t tell them exactly what a dog is; you just show them pictures of dogs.”
But in this case, Hezaveh said, “It’s as if they not only picked photos of dogs from a pile of photos, but also returned information about the dogs’ weight, height and age.”
Although the KIPAC scientists ran their tests on the Sherlock high-performance computing cluster at the Stanford Research Computing Center, they could have done their computations on a laptop or even on a cell phone, they said. In fact, one of the neural networks they tested was designed to work on iPhones.
“Neural nets have been applied to astrophysical problems in the past with mixed outcomes,” said KIPAC faculty member Roger Blandford, who was not a co-author on the paper. “But new algorithms combined with modern graphics processing units, or GPUs, can produce extremely fast and reliable results, as the gravitational lens problem tackled in this paper dramatically demonstrates. There is considerable optimism that this will become the approach of choice for many more data processing and analysis problems in astrophysics and other fields.”
TOP IMAGES….KIPAC researchers used images of strongly lensed galaxies taken with the Hubble Space Telescope to test the performance of neural networks, which promise to speed up complex astrophysical analyses tremendously. (Yashar Hezaveh/Laurence Perreault Levasseur/Phil Marshall/Stanford/SLAC National Accelerator Laboratory; NASA/ESA)
LOWER IMAGE….Scheme of an artificial neural network, with individual computational units organized into hundreds of layers. Each layer searches for certain features in the input image (at left). The last layer provides the result of the analysis. The researchers used particular kinds of neural networks, called convolutional neural networks, in which individual computational units (neurons, gray spheres) of each layer are also organized into 2-D slabs that bundle information about the original image into larger computational units. (Greg Stewart/SLAC National Accelerator Laboratory)
Balancing Time and Space in the Brain: A New Model Holds Promise for Predicting Brain Dynamics
For as long as scientists have been listening in on the activity of the brain, they have been trying to understand the source of its noisy, apparently random, activity. In the past 20 years, “balanced network theory” has emerged to explain this apparent randomness through a balance of excitation and inhibition in recurrently coupled networks of neurons. A team of scientists has extended the balanced model to provide deep and testable predictions linking brain circuits to brain activity. Lead investigators at the University of Pittsburgh say the new model accurately explains experimental findings about the highly variable responses of neurons in the brains of living animals. On Oct. 31, their paper, “The spatial structure of correlated neuronal variability,” was published online by the journal Nature Neuroscience.
The new model provides a much richer understanding of how activity is coordinated between neurons in neural circuits. The model could be used in the future to discover neural “signatures” that predict brain activity associated with learning or disease, say the investigators.

“Normally, brain activity appears highly random and variable most of the time, which looks like a weird way to compute,” said Brent Doiron, associate professor of mathematics at Pitt, senior author on the paper, and a member of the University of Pittsburgh Brain Institute (UPBI). “To understand the mechanics of neural computation, you need to know how the dynamics of a neuronal network depends on the network’s architecture, and this latest research brings us significantly closer to achieving this goal.”
Earlier versions of the balanced network theory captured how the timing and frequency of inputs—excitatory and inhibitory—shaped the emergence of variability in neural behavior, but these models used shortcuts that were biologically unrealistic, according to Doiron.

“The original balanced model ignored the spatial dependence of wiring in the brain, but it has long been known that neuron pairs that are near one another have a higher likelihood of connecting than pairs that are separated by larger distances. Earlier models produced unrealistic behavior—either completely random activity that was unlike the brain or completely synchronized neural behavior, such as you would see in a deep seizure. You could produce nothing in between.”
In the context of this balance, neurons are in a constant state of tension. According to co-author Matthew Smith, assistant professor of ophthalmology at Pitt and a member of UPBI, “It’s like balancing on one foot on your toes. If there are small overcorrections, the result is big fluctuations in neural firing, or communication.”

The new model accounts for temporal and spatial characteristics of neural networks and the correlations in the activity between neurons—whether firing in one neuron is correlated with firing in another. The model is such a substantial improvement that the scientists could use it to predict the behavior of living neurons examined in the area of the brain that processes the visual world.
After developing the model, the scientists examined data from the living visual cortex and found that their model accurately predicted the behavior of neurons based on how far apart they were. The activity of nearby neuron pairs was strongly correlated; at an intermediate distance, pairs of neurons were anticorrelated (when one responded more, the other responded less); and at greater distances still they were uncorrelated.
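“Correlated” and “anticorrelated” here refer to the trial-to-trial co-fluctuation of spike counts between two neurons, commonly quantified with a Pearson correlation coefficient. A minimal sketch with made-up counts (not data from the study):

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical spike counts over five repeated trials:
near_a, near_b = [5, 8, 3, 9, 6], [6, 9, 4, 8, 5]  # fluctuate together
mid_a, mid_b = [5, 8, 3, 9, 6], [8, 4, 9, 2, 6]    # push-pull

r_near = pearson(near_a, near_b)  # positive: correlated
r_mid = pearson(mid_a, mid_b)     # negative: anticorrelated
```

A coefficient near +1 means the pair fires up and down together across trials, near -1 means one fires more as the other fires less, and near 0 means the fluctuations are unrelated, the three regimes the model predicts at increasing distances.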
“This model will help us to better understand how the brain computes information because it’s a big step forward in describing how network structure determines network variability,” said Doiron. “Any serious theory of brain computation must take into account the noise in the code. A shift in neuronal variability accompanies important cognitive functions, such as attention and learning, as well as being a signature of devastating pathologies like Parkinson’s disease and epilepsy.”
While the scientists examined the visual cortex, they believe their model could be used to predict activity in other parts of the brain, such as areas that process auditory or olfactory cues. And they believe that the model generalizes to the brains of all mammals. In fact, the team found that a neural signature predicted by their model appeared in the visual cortex of living mice studied by another team of researchers.
“A hallmark of the computational approach that Doiron and Smith are
taking is that its goal is to infer general principles of brain function
that can be broadly applied to many scenarios. Remarkably, we still
don’t have things like the laws of gravity for understanding the brain,
but this is an important step for providing good theories in
neuroscience that will allow us to make sense of the explosion of new
experimental data that can now be collected,” said Nathan Urban,
associate director of UPBI.
Researchers create organic nanowire synaptic transistors that emulate the working principles of biological synapses
A team of researchers with the Pohang University of Science and
Technology in Korea has created organic nanowire synaptic transistors
that emulate the working principles of biological synapses. As they
describe in their paper published in the journal Science Advances,
the artificial synapses they have created use much smaller amounts of
power than other devices developed thus far and rival that of their
biological counterparts. Scientists are taking multiple paths towards
building next-generation
computers—some are fixated on finding a material to replace silicon,
others are working towards building a quantum machine, while still
others are busy trying to build something much more like the human
mind: a hybrid system of sorts with organic artificial parts meant to
mimic those found in the brain. In this new effort, the team in Korea
has reached a new milestone in creating an artificial synapse—one that
has very nearly the same power requirements as those inside our skulls.
Until now, artificial synapses have consumed far more power than
human synapses, which researchers calculate use on the order of 10
femtojoules each time one fires. The new synapse created by the
team requires just 1.23 femtojoules per event—far lower than anything
achieved thus far, and on par with their natural rival. Though the
artificial synapses use less power per event, they do not yet perform
the same range of functions, so natural biology is still ahead. Plus
there is the issue of transferring information from one neuron to
another. The “wires” used by the human body are still much thinner than
the metal kind being used by scientists—still, researchers are making
progress.
As part of this latest effort, the team placed 144 of their
artificial synapses on a 4-inch wafer and connected them together in a
two-dimensional mesh with wires that were just 200 to 300 nanometers on
average. The idea was to test the possibility of causing the synapses to
fire (open or close) based on information coming from a wire, or being
sent from other artificial neurons. Each synapse mimicked the natural
kind in shape as well—they were long and thin and were made of two types
of organic material that allowed for holding or releasing ions.
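The per-event energy figures quoted above are easy to put in perspective with a little arithmetic. The event rate below is an arbitrary assumption, chosen only to make the comparison concrete:

```python
# Energy per synaptic event, in femtojoules (figures from the article)
BIO_SYNAPSE_FJ = 10.0   # estimated for a biological synapse
ARTIFICIAL_FJ = 1.23    # the organic nanowire synaptic transistor

events_per_second = 1e6  # hypothetical firing load, assumed for illustration

def power_watts(fj_per_event, rate_hz):
    """Average power for a given per-event energy and event rate."""
    return fj_per_event * 1e-15 * rate_hz  # 1 femtojoule = 1e-15 joule

bio = power_watts(BIO_SYNAPSE_FJ, events_per_second)
art = power_watts(ARTIFICIAL_FJ, events_per_second)
print(f"biological: {bio:.2e} W, artificial: {art:.2e} W")
print(f"artificial energy per event is {ARTIFICIAL_FJ / BIO_SYNAPSE_FJ:.0%} of biological")
```

At any firing rate, the ratio is the same: the artificial synapse uses roughly an eighth of the energy per event attributed to its biological counterpart.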
The new artificial synapses are one more step on the road towards a
computer that works in ways very similar to the human brain, and most
believe if we ever get there, the machines we create will be far more
powerful than anything nature has ever produced.
Image: Schematic of biological neuronal network and an ONW ST that emulates a biological synapse.
Insect Nervous System Copied To Boost Computing Power
by Charles Q. Choi
Brains are the most powerful computers known. Now microchips built to mimic insects’ nervous systems have been shown to successfully tackle technical computing problems like object recognition and data mining, researchers say.
Attempts to recreate how the brain works are nothing new. Computing principles underlying how the organ operates have inspired computer programs known as neural networks, which have been used for decades to analyze data. The artificial neurons that make up these programs imitate the brain’s neurons, with each one capable of sending, receiving and processing information.
However, real biological neural networks rely on electrical impulses known as spikes. Simulating networks of spiking neurons with software is computationally intensive, setting limits on how long these simulations can run and how large they can get.
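A minimal leaky integrate-and-fire neuron shows why software simulation of spiking networks is costly: the membrane potential must be stepped through thousands of small time increments to produce even one second of activity, and a network multiplies that by every neuron. This is the generic textbook model, not the chips described in the article:

```python
# Minimal leaky integrate-and-fire (LIF) neuron in arbitrary units.

def simulate_lif(current, steps, dt=0.001, tau=0.02, v_reset=0.0, v_thresh=1.0):
    """Return spike times for a constant input current."""
    v = v_reset
    spikes = []
    for i in range(steps):
        # leaky integration: dv/dt = (-v + current) / tau, Euler step
        v += dt * (-v + current) / tau
        if v >= v_thresh:        # threshold crossing emits a spike
            spikes.append(i * dt)
            v = v_reset          # membrane potential resets after the spike
    return spikes

spikes = simulate_lif(current=1.5, steps=1000)
print(f"{len(spikes)} spikes in 1 s of simulated time")
```

Even this single neuron needs 1,000 update steps per simulated second; dedicated neuromorphic hardware sidesteps that loop by letting physics do the integration.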
An image recognition network dreams about every object it knows. Part 1/2: animals
Video from Ville-Matias Heikkilä uses a deep-dream-like technique to reveal what the network has learned about various animals (and not puppyslugs); the video here displays 500 of them:
Network used: VGG CNN-S (pretrained with Imagenet)
There are 1000
output neurons in the network, one for each image recognition category.
In this video, the output of each of these neurons is separately
amplified using backpropagation (i.e. deep dreaming).
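The amplification step is ordinary gradient ascent on the input: nudge the image in the direction that makes one chosen output neuron respond more. Below is a schematic stand-in using a random linear "network"; real deep dreaming backpropagates through a trained CNN such as VGG, but the principle is the same:

```python
import numpy as np

# Toy "deep dream": amplify one output neuron of a random linear network
# by gradient ascent on the input. All names and sizes here are invented
# for illustration; they are not from the VGG CNN-S setup in the video.

rng = np.random.default_rng(1)
n_pixels, n_classes = 64, 10
W = rng.normal(size=(n_classes, n_pixels))  # stand-in for trained weights

def dream(x, neuron, steps=100, lr=0.01):
    """Gradient-ascent the input so that output `neuron` responds more."""
    for _ in range(steps):
        # For y = W @ x, the gradient of y[neuron] w.r.t. x is W[neuron],
        # so each step moves the "image" along that row of the weights.
        x = x + lr * W[neuron]
    return x

x0 = rng.normal(size=n_pixels)          # random starting "image"
x1 = dream(x0, neuron=3)                # amplify category 3
print("before:", (W @ x0)[3], "after:", (W @ x1)[3])
```

With a real network the same loop produces the familiar dream imagery, because the gradient direction is shaped by everything the trained layers respond to.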