A giant neuron found wrapped around entire mouse brain

Like ivy plants that send runners out searching for something to cling to, the brain’s neurons send out shoots that connect with other neurons throughout the organ. A new digital reconstruction method shows three neurons that branch extensively throughout the brain, including one that wraps around its entire outer layer. The finding may help to explain how the brain creates consciousness.

Christof Koch, president of the Allen Institute for Brain Science in Seattle, Washington, explained his group’s new technique at a 15 February meeting of the Brain Research through Advancing Innovative Neurotechnologies initiative in Bethesda, Maryland. He showed how the team traced three neurons from a small, thin sheet of cells called the claustrum — an area that Koch believes acts as the seat of consciousness in mice and humans.

A digital reconstruction of a neuron that encircles the mouse brain. Allen Institute for Brain Science

The Nervous System

The Nervous System is made up of the Central Nervous System (CNS) and the Peripheral Nervous System (PNS).

Central Nervous System (CNS)

The CNS has two main functions: controlling behaviours and regulating physiological processes by receiving and sending messages to and from various parts of the body. The CNS is split into two parts:

  •  The Spinal Cord:
    • Relays information between the brain and the rest of the body, allowing for bodily processes to be regulated and voluntary movement to be coordinated
  • The Brain: split into four areas
      • Cerebrum
        • Split into two hemispheres:
          • Left: aware of past and future, controls the right side of the body and is the logical side
          • Right: aware of the present, controlling the left side of the body and is the creative side.
            • These are joined by the corpus callosum which is a band of nerve fibres allowing for communication between the two
        • Split into four lobes:
          • Frontal: associated with reasoning and motor skills, sexual habits, risk taking and socialisation
          • Parietal: deals with sensory information, perception and spatial reasoning
          • Temporal: Deals with sound and language, linked to memory (hippocampus)
          • Occipital: deals with visual information
      • Cerebellum: associated with motor skills, balance and muscle coordination
      • Diencephalon: contains two key structures
        • Hypothalamus- regulates temperature, hunger and thirst, so is the link between the endocrine system and the nervous system
        • Thalamus- relay station for nerve impulses (from senses)
      • Brain Stem: regulates automatic functions while linking brain to spinal cord

Peripheral Nervous System (PNS)

The PNS is everything within the nervous system that isn’t the brain or spinal cord. It is in charge of relaying nerve impulses from the CNS to the rest of the body and vice versa. It is split into two sections:

  • Somatic Nervous System: sends sensory information to and from the CNS, and allows for quick reflex actions
  • Autonomic Nervous System: regulates the involuntary actions of the body, such as breathing, and is vital for everyday functioning. This is further split into two subsections:
    • Sympathetic Nervous System- “Fight-or-flight”:
      • uses noradrenaline, a stimulant which allows for quick action in emergency situations where we feel threatened. It sends signals to organs and glands as needed, increasing heart rate and blood pressure and dilating blood vessels. It allows for the release of stored energy, enhancing functions needed for survival, and temporarily suppresses non-vital functions such as urination and digestion
    • Parasympathetic Nervous System- “Rest-and-digest”:
      • uses acetylcholine, an inhibitor, to restore calm after an emergency: slowing the heartbeat, reducing blood pressure, restarting digestion and allowing for energy conservation.

@typicalacademic and I were discussing the idea of “Portuguese Man-O’-War aliens, where each citizen is a hive of sub-organisms, which are themselves sapient”

-Sub-organisms vary in size from individual neurons to ~2cm; the simplest ones have about the processing power of a smartphone, and the most complex are on par with an average human adult

-On the whole, citizens skew fairly cautious and conservative

-Ritualized exchange of sub-organisms is common, in a sort of sex/diplomatic summit hybrid (“Sexual Congress”, if you will); one of the biggest social schisms is “let’s keep it at that” vs. “DISSOLVE THE CONCEPT OF SELF”

-The main unit of social organization is a social/diplomatic circle(/polycule) of about a dozen citizens (but citizens are typically members of many different circles)

-Murdering a citizen is the gravest possible taboo, but conversely, they tend to not assign much moral weight to one-mind beings

-“You sure are indecisive!” carries the subtext of “You sure are slutty! (And/or a Filthy Collectivist)”

-Reproduction is carried out by dissolving into 2-4 new organisms; the process wipes out most of the stored memories and minds, but some odds and ends linger

-Merging infant citizens from different parents is possible, but intensely illegal, and even the (mainstream) Collectivists disavow it

-Neuron sub-organisms are replaced every few months, but the most complex pieces can live for decades; a citizen’s overall lifespan is hard to measure by any conventional standard, and even staunch Individualists take pride in being part of a densely interwoven species



**Synopsis: SLAC and Stanford researchers demonstrate that brain-mimicking ‘neural networks’ can revolutionize the way astrophysicists analyze their most complex data, including extreme distortions in spacetime that are crucial for our understanding of the universe.**

Researchers from the Department of Energy’s SLAC National Accelerator Laboratory and Stanford University have for the first time shown that neural networks – a form of artificial intelligence – can accurately analyze the complex distortions in spacetime known as gravitational lenses 10 million times faster than traditional methods.

“Analyses that typically take weeks to months to complete, that require the input of experts and that are computationally demanding, can be done by neural nets within a fraction of a second, in a fully automated way and, in principle, on a cell phone’s computer chip,” said postdoctoral fellow Laurence Perreault Levasseur, a co-author of a study published today in Nature.

Lightning Fast Complex Analysis

The team at the Kavli Institute for Particle Astrophysics and Cosmology (KIPAC), a joint institute of SLAC and Stanford, used neural networks to analyze images of strong gravitational lensing, where the image of a faraway galaxy is multiplied and distorted into rings and arcs by the gravity of a massive object, such as a galaxy cluster, that’s closer to us. The distortions provide important clues about how mass is distributed in space and how that distribution changes over time – properties linked to invisible dark matter that makes up 85 percent of all matter in the universe and to dark energy that’s accelerating the expansion of the universe.

Until now this type of analysis has been a tedious process that involves comparing actual images of lenses with a large number of computer simulations of mathematical lensing models. This can take weeks to months for a single lens.

But with the neural networks, the researchers were able to do the same analysis in a few seconds, which they demonstrated using real images from NASA’s Hubble Space Telescope and simulated ones.

To train the neural networks in what to look for, the researchers showed them about half a million simulated images of gravitational lenses for about a day. Once trained, the networks were able to analyze new lenses almost instantaneously with a precision that was comparable to traditional analysis methods. In a separate paper, submitted to The Astrophysical Journal Letters, the team reports how these networks can also determine the uncertainties of their analyses.

Prepared for Data Floods of the Future

“The neural networks we tested – three publicly available neural nets and one that we developed ourselves – were able to determine the properties of each lens, including how its mass was distributed and how much it magnified the image of the background galaxy,” said the study’s lead author Yashar Hezaveh, a NASA Hubble postdoctoral fellow at KIPAC.

This goes far beyond recent applications of neural networks in astrophysics, which were limited to solving classification problems, such as determining whether an image shows a gravitational lens or not.

The ability to sift through large amounts of data and perform complex analyses very quickly and in a fully automated fashion could transform astrophysics in a way that is much needed for future sky surveys that will look deeper into the universe – and produce more data – than ever before.

The Large Synoptic Survey Telescope (LSST), for example, whose 3.2-gigapixel camera is currently under construction at SLAC, will provide unparalleled views of the universe and is expected to increase the number of known strong gravitational lenses from a few hundred today to tens of thousands.

“We won’t have enough people to analyze all these data in a timely manner with the traditional methods,” Perreault Levasseur said. “Neural networks will help us identify interesting objects and analyze them quickly. This will give us more time to ask the right questions about the universe.”

A Revolutionary Approach

Neural networks are inspired by the architecture of the human brain, in which a dense network of neurons quickly processes and analyzes information.

In the artificial version, the “neurons” are single computational units that are associated with the pixels of the image being analyzed. The neurons are organized into layers, up to hundreds of layers deep. Each layer searches for features in the image. Once the first layer has found a certain feature, it transmits the information to the next layer, which then searches for another feature within that feature, and so on.
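The layer-by-layer feature search described above can be sketched in a few lines of Python. This is a toy illustration, not the KIPAC networks: the filters are hand-coded rather than learned, and the image, filter values and variable names are made up for the example. It shows how one layer’s feature map becomes the next layer’s input, so the second layer finds a “feature within a feature.”

```python
# Toy two-layer convolutional feature search: layer 1 finds edges,
# layer 2 finds extended edges within layer 1's feature map.

def conv2d(image, kernel):
    """Valid 2-D convolution (strictly, cross-correlation, as in most deep nets)."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            out[i][j] = sum(image[i + a][j + b] * kernel[a][b]
                            for a in range(kh) for b in range(kw))
    return out

def relu(fmap):
    """Nonlinearity applied between layers."""
    return [[max(0.0, v) for v in row] for row in fmap]

# Toy 6x6 "image" with a vertical bright edge down the middle.
image = [[0, 0, 0, 1, 1, 1] for _ in range(6)]

# Layer 1: a vertical-edge filter responds where brightness jumps left-to-right.
edge_filter = [[-1, 1],
               [-1, 1]]
layer1 = relu(conv2d(image, edge_filter))

# Layer 2: searches for a feature within layer 1's output -- an edge
# that extends over several rows.
extent_filter = [[1], [1], [1]]
layer2 = relu(conv2d(layer1, extent_filter))

# The response is strongest along the column where the long edge sits.
print(max(max(row) for row in layer2))
```

In a real convolutional network the filter values are not hand-coded but learned from training examples, which is the point of the half-million simulated lens images mentioned above.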

“The amazing thing is that neural networks learn by themselves what features to look for,” said KIPAC staff scientist Phil Marshall, a co-author of the paper. “This is comparable to the way small children learn to recognize objects. You don’t tell them exactly what a dog is; you just show them pictures of dogs.”

But in this case, Hezaveh said, “It’s as if they not only picked photos of dogs from a pile of photos, but also returned information about the dogs’ weight, height and age.”

Although the KIPAC scientists ran their tests on the Sherlock high-performance computing cluster at the Stanford Research Computing Center, they could have done their computations on a laptop or even on a cell phone, they said. In fact, one of the neural networks they tested was designed to work on iPhones.

“Neural nets have been applied to astrophysical problems in the past with mixed outcomes,” said KIPAC faculty member Roger Blandford, who was not a co-author on the paper. “But new algorithms combined with modern graphics processing units, or GPUs, can produce extremely fast and reliable results, as the gravitational lens problem tackled in this paper dramatically demonstrates. There is considerable optimism that this will become the approach of choice for many more data processing and analysis problems in astrophysics and other fields.”

(Top image caption: KIPAC researchers used images of strongly lensed galaxies taken with the Hubble Space Telescope to test the performance of neural networks, which promise to speed up complex astrophysical analyses tremendously. Credit: Yashar Hezaveh/Laurence Perreault Levasseur/Phil Marshall/Stanford/SLAC National Accelerator Laboratory; NASA/ESA)

(Lower image caption: Scheme of an artificial neural network, with individual computational units organized into hundreds of layers. Each layer searches for certain features in the input image (at left). The last layer provides the result of the analysis. The researchers used particular kinds of neural networks, called convolutional neural networks, in which individual computational units (neurons, gray spheres) of each layer are also organized into 2-D slabs that bundle information about the original image into larger computational units. Credit: Greg Stewart/SLAC National Accelerator Laboratory)

anonymous asked:

girl... your eyes... are the organs of vision. They detect light and convert it into electro-chemical impulses in neurons. In higher organisms, the eye is a complex optical system which collects light from the surrounding environment, regulates its intensity through a diaphragm, focuses it through an adjustable assembly of lenses to form an image, converts this image into a set of electrical signals, and transmits these signals to the brain through complex neural pathways that connect th


Brain’s biological clock stimulates thirst before sleep

The brain’s biological clock stimulates thirst in the hours before sleep, according to a study published in the journal Nature by researchers from the Research Institute of the McGill University Health Centre (RI-MUHC).

The finding – along with the discovery of the molecular process behind it – provides the first insight into how the clock regulates a physiological function. And while the research was conducted in mice, “the findings could point the way toward drugs that target receptors implicated in problems that people experience from shift work or jet lag,” says the study’s senior author, Charles Bourque, a professor in McGill’s Department of Neurology and a scientist in the Brain Repair and Integrative Neuroscience Program at the RI-MUHC.

Scientists knew that rodents show a surge in water intake during the last two hours before sleep. The study by Bourque’s group revealed that this behavior is not motivated by any physiological reason, such as dehydration. So if they don’t need to drink water, why do they?

The team of researchers, which included lead author and Ph.D. student Claire Gizowski, found that restricting the access of mice to water during the surge period resulted in significant dehydration towards the end of the sleep cycle. So the increase in water intake before sleep is a preemptive strike that guards against dehydration and serves to keep the animal healthy and properly hydrated.

Then the researchers looked for the mechanism that sets this thirst response in motion. It’s well established that the brain harbors a hydration sensor that contains thirst neurons. So they wondered whether the SCN (suprachiasmatic nuclei), the brain region that regulates circadian cycles – a.k.a. the biological clock – could be communicating with those thirst neurons.

The team suspected that vasopressin, a neuropeptide produced by the SCN, might play a critical role. To confirm that, they used so-called “sniffer cells” designed to fluoresce in the presence of vasopressin. When they applied these cells to rodent brain tissue and then electrically stimulated the SCN, Bourque says, “We saw a big increase in the output of the sniffer cells, indicating that vasopressin is being released in that area as a result of stimulating the clock.”

To explore if vasopressin was stimulating thirst neurons, the researchers employed optogenetics, a cutting-edge technique that uses laser light to turn neurons on or off. Using genetically engineered mice whose vasopressin neurons contain a light activated molecule, the researchers were able to show that vasopressin does, indeed, turn on thirst neurons.

“Although this study was performed in rodents, it points toward an explanation as to why we often experience thirst and ingest liquids such as water or milk before bedtime,” Bourque says. “More importantly, this advance in our understanding of how the clock executes a circadian rhythm has applications in situations such as jet lag and shift work. All our organs follow a circadian rhythm, which helps optimize how they function. Shift work forces people out of their natural rhythms, which can have repercussions on health. Knowing how the clock works gives us more potential to actually do something about it.”

Researchers discover how parts of the brain work together, or alone

Our brains have billions of neurons grouped into different regions. These regions often work alone but sometimes must join forces. How do regions communicate selectively?

Stanford researchers may have solved a riddle about the inner workings of the brain, which consists of billions of neurons, organized into many different regions, with each region primarily responsible for different tasks.

The various regions of the brain often work independently, relying on the neurons inside that region to do their work. At other times, however, two regions must cooperate to accomplish the task at hand. The riddle is this: what mechanism allows two brain regions to communicate when they need to cooperate yet avoid interfering with one another when they must work alone?

In a paper published today in Nature Neuroscience, a team led by Stanford electrical engineering professor Krishna Shenoy reveals a previously unknown process that helps two brain regions cooperate when joint action is required to perform a task.

“This is among the first mechanisms reported in the literature for letting brain areas process information continuously but only communicate what they need to,” said Matthew T. Kaufman, who was a postdoctoral scholar in the Shenoy lab when he co-authored the paper.


Signal replicas make a flexible sensor

When a jogger sets out on his evening run, the active movements of his arms and legs are accompanied by involuntary changes in the position of the head relative to the rest of the body. Yet the jogger does not experience feelings of dizziness like those induced in the passive riders of a rollercoaster, who have no control over the abrupt dips and swoops to which they are exposed. The reason for the difference lies in the vestibular organ (VO) located in the inner ear, which controls balance and posture. The VO senses ongoing self-motion and ensures that, while running, the jogger unconsciously compensates for the accompanying changes in the orientation of the head. The capacity to adapt and respond appropriately to both slight and substantial displacements of the head in turn implies that the sensory hair cells in the inner ear can react to widely varying stimulus intensities.

(Image caption: Fluorescence image showing two nerves (stained in red and green), which are responsible for transmitting information from the hair cells to the brain and from neurons (small green dots) that alter hair cell sensitivity, respectively)

In collaboration with Dr. John Simmers at the Centre national de la recherche scientifique (CNRS) at the University of Bordeaux, neurobiologists Dr. Boris Chagnaud, Roberto Banchi and Professor Hans Straka at LMU’s Department of Biology II, have now shown, for the first time, how this feat is achieved. Their findings reveal that cells in the spinal cord which generate the rhythmic patterns of neural and muscle activity required for locomotion also adaptively alter the sensitivity of the hair cells in the VO, enabling them to respond appropriately to the broad range of incoming signal amplitudes. The results are reported in the online journal “Nature Communications”. As Boris Chagnaud points out, “we are not really aware of what movement actually involves because our balance organs react immediately to alterations in posture and head position. The hair cells, which detect the resulting changes in fluid flow in the semicircular canals in the inner ear, enable us to keep our balance without any conscious effort.”

Using tadpoles as an experimental model system, the researchers investigated how the hair cells manage to sense both low- and high-amplitude movements and produce the signals that control the appropriate compensatory response. The tadpole’s balance organs operate on the same principle as the bilateral VOs in humans, and the nerve circuits responsible for communication between the hair cells and the motor neurons in the spinal cord are organized in essentially identical ways.

The role of replicate signals

When a tadpole initiates a voluntary movement, e.g., begins to swim by moving its tail from side to side, nerve cells in the spinal cord send copies of the motor commands to so-called efferent neurons in the brainstem that project to the hair cells in the inner ear. “The effect of this signal is to reduce the sensitivity of the hair cells,” says Chagnaud. By dampening the intrinsic sensitivity of the hair cells, the input from the spinal cord effectively adapts the VO’s dynamic range. This process enables the balance organ to maintain responsiveness to high-amplitude “afferent” stimuli from the periphery, and thus to modulate the head movements that accompany propulsive swimming.

Hence the whole adaptation process is controlled by neurons in the spinal cord, which transmit signals to the VO via nerve cells located in the brainstem just before the muscles carry out the next locomotory behavior. These signals thus notify the VO in advance about the temporal form of the impending movement. “This feedforward principle is crucial, because it prepares the hair cells to react appropriately to the next movement,” Chagnaud explains. “The direct impact of input from the spinal cord on the sensitivity of sensory nerve cells in the balance organ demonstrates the importance of interactions between sensory and motor systems, and it underlines the significance of the interplay between different components of the central nervous system – in this case, the spinal cord and the brainstem. Here, evolution has not only come up with an elegant means of anticipating the effects of locomotion on the body but also of compensating for them in an adaptive fashion.”
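The feedforward gain-control principle described above can be illustrated with a small sketch. This is a deliberate simplification, not the authors’ model: the class, numbers and damping factor are all invented for the example. The idea is that a copy of the motor command reaches the sensor just before the self-generated movement and temporarily lowers its gain, so a large self-generated stimulus produces a response in the same working range as a small external one.

```python
# Toy sketch of efference-copy gain control: a copy of the motor command
# dampens hair-cell sensitivity just before the self-generated movement.

class HairCell:
    def __init__(self, gain=1.0):
        self.gain = gain

    def receive_efference_copy(self, damping=0.2):
        """A motor-command copy from the spinal cord lowers sensitivity."""
        self.gain *= damping

    def reset(self):
        """Sensitivity recovers once the movement is over."""
        self.gain = 1.0

    def respond(self, stimulus):
        """Response scales with both stimulus strength and current gain."""
        return self.gain * stimulus

cell = HairCell()

# Weak external stimulus at rest: full sensitivity.
rest_response = cell.respond(1.0)   # -> 1.0

# Strong self-generated stimulus during swimming, preceded by the
# efference copy: the dampened cell stays within its working range.
cell.receive_efference_copy()
swim_response = cell.respond(5.0)   # -> 1.0

cell.reset()
print(rest_response, swim_response)
```

Because the damping arrives before the movement rather than in reaction to it, the sensor never has to recover from saturation — which is the “feedforward” advantage Chagnaud describes.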

The LMU group now intends to study whether all the hair cells in the inner ear respond to efferent information emanating from the spinal cord, or whether the VO possesses subpopulations of hair cells that are specialized for receiving impulses that signal either fast or slow movements.

Circumventricular Organs.

Circumventricular organs (CVO) are so named because they are positioned at distinct sites around the margin of the ventricular system of the brain.

They are among the few sites in the brain that have an incomplete blood-brain barrier. As a result, neurons located in circumventricular organs can directly sense the concentrations of various compounds in the bloodstream, particularly peptide hormones, without the need for the specialized transport systems that move those compounds across the blood-brain barrier.

Circumventricular organs can be classified into two groups:

Sensory organs:

  • Area postrema: implicated as a chemoreceptor trigger zone for vomiting
  • Subfornical organ (SFO): implicated in osmoregulation and cardiovascular regulation
  • Organum vasculosum of the lamina terminalis (OVLT): involved in osmo- and sodium regulation

Secretory organs:

  • Subcommissural organ (SCO): formed by ependymal and hypendymal cells highly specialized in the secretion of proteins.
  • Neurohypophysis: stores and releases oxytocin and vasopressin
  • Pineal gland: secretes melatonin and is associated with circadian rhythms
  • Median eminence: where hypothalamic releasing hormones enter the hypophyseal portal circulation en route to the pituitary

Researchers Use Human Stem Cells to Create Light-Sensitive Retina in a Dish

Using a type of human stem cell, Johns Hopkins researchers say they have created a three-dimensional complement of human retinal tissue in the laboratory, which notably includes functioning photoreceptor cells capable of responding to light, the first step in the process of converting it into visual images.

(Image caption: Rod photoreceptors (in green) within a “mini retina” derived from human iPS cells in the lab. Image courtesy of Johns Hopkins Medicine)

“We have basically created a miniature human retina in a dish that not only has the architectural organization of the retina but also has the ability to sense light,” says study leader M. Valeria Canto-Soler, Ph.D., an assistant professor of ophthalmology at the Johns Hopkins University School of Medicine. She says the work, reported online June 10 in the journal Nature Communications, “advances opportunities for vision-saving research and may ultimately lead to technologies that restore vision in people with retinal diseases.”

Like many processes in the body, vision depends on many different types of cells working in concert, in this case to turn light into something that can be recognized by the brain as an image. Canto-Soler cautions that photoreceptors are only part of the story in the complex eye-brain process of vision, and her lab hasn’t yet recreated all of the functions of the human eye and its links to the visual cortex of the brain. “Is our lab retina capable of producing a visual signal that the brain can interpret into an image? Probably not, but this is a good start,” she says.

The achievement emerged from experiments with human induced pluripotent stem cells (iPS) and could, eventually, enable genetically engineered retinal cell transplants that halt or even reverse a patient’s march toward blindness, the researchers say.

The iPS cells are adult cells that have been genetically reprogrammed to their most primitive state. Under the right circumstances, they can develop into most or all of the 200 cell types in the human body. In this case, the Johns Hopkins team turned them into retinal progenitor cells destined to form light-sensitive retinal tissue that lines the back of the eye.

Using a simple, straightforward technique they developed to foster the growth of the retinal progenitors, Canto-Soler and her team saw retinal cells and then tissue grow in their petri dishes, says Xiufeng Zhong, Ph.D., a postdoctoral researcher in Canto-Soler’s lab. The growth, she says, corresponded in timing and duration to retinal development in a human fetus in the womb. Moreover, the photoreceptors were mature enough to develop outer segments, a structure essential for photoreceptors to function.

Retinal tissue is complex, comprising seven major cell types, including six kinds of neurons, which are all organized into specific cell layers that absorb and process light, “see,” and transmit those visual signals to the brain for interpretation. The lab-grown retinas recreate the three-dimensional architecture of the human retina. “We knew that a 3-D cellular structure was necessary if we wanted to reproduce functional characteristics of the retina,” says Canto-Soler, “but when we began this work, we didn’t think stem cells would be able to build up a retina almost on their own. In our system, somehow the cells knew what to do.”

When the retinal tissue was at a stage equivalent to 28 weeks of development in the womb, with fairly mature photoreceptors, the researchers tested these mini-retinas to see if the photoreceptors could in fact sense and transform light into visual signals.

They did so by placing an electrode into a single photoreceptor cell and then giving a pulse of light to the cell, which reacted in a biochemical pattern similar to the behavior of photoreceptors in people exposed to light.

Specifically, she says, the lab-grown photoreceptors responded to light the way retinal rods do. Human retinas contain two major photoreceptor cell types called rods and cones. The vast majority of photoreceptors in humans are rods, which enable vision in low light. The retinas grown by the Johns Hopkins team were also dominated by rods.

Canto-Soler says that the newly developed system gives them the ability to generate hundreds of mini-retinas at a time directly from a person affected by a particular retinal disease such as retinitis pigmentosa. This provides a unique biological system to study the cause of retinal diseases directly in human tissue, instead of relying on animal models.

The system, she says, also opens an array of possibilities for personalized medicine such as testing drugs to treat these diseases in a patient-specific way. In the long term, the potential is also there to replace diseased or dead retinal tissue with lab-grown material to restore vision.