A Fold in the Brain is Linked to Keeping Reality and Imagination Separate, Study Finds


  • The researchers looked at MRI brain scans of a large group of healthy adults. In particular, they were looking for the paracingulate sulcus (PCS), a fold near the front of the brain. There’s a lot of variability in the PCS: some people have quite distinctive folds, others have barely any. It’s in a part of the brain known to be important in keeping track of reality, which is why the researchers chose to study it. Of the 53 people selected for the study, some had this fold on both sides of their brain, some had it on one side, and some had no fold.
  • The participants saw some well-known word pairs in full (“Jekyll and Hyde”) and some as half pairs (“Jekyll and ?”). If they saw only half of a pair, they were asked to imagine the other half (“Hyde”). After each pair or half pair, either the participant or the experimenter said the whole pair aloud.
  • Once they’d seen all the pairs, the participants were asked two questions about each phrase: Did you see both words of the pair, or just one? And who said the phrase aloud, you or the experimenter?
  • People who didn’t have the fold on either side of their brains did worse on both questions—remembering whether something was real or imagined, and remembering who had done something—than people whose brains had the fold. But they felt just as confident in their answers, meaning they didn’t realize they’d been mixing up internal and external events.

Along with schizophrenia, the PCS may also be of interest for studies of the ability to lucid dream.


Promachoteuthis sulcus

Promachoteuthis sulcus is a species of promachoteuthid squid. It is distinguished from related taxa on the basis of several morphological features: nuchal fusion between the head and mantle, much larger size of arm suckers compared to club suckers, greater width of tentacle base than arm base, a recessed club base, and the presence of an aboral tentacle groove.

photo credits: ourbreathingplanet

Why does Hodor in Game of Thrones only say one word? Neuroscience explains

Hodor hodor hodor. Hodor hodor? Hodor. Hodor-hodor. Hodor!

Oh, um, excuse me. Did you catch what I said?

Fans of the hit HBO show Game of Thrones, the fifth season of which premieres this Sunday, know what I’m referencing, anyway. Hodor is the brawny, simple-minded stableboy of the Stark family in Winterfell. His defining characteristic, of course, is that he only speaks a single word: “Hodor.”

But those who read the A Song of Ice and Fire book series by George R. R. Martin may know something that the TV fans don’t: his name isn’t actually Hodor. According to his great-grandmother Old Nan, his real name is Walder. “No one knew where ‘Hodor’ had come from,” she says, “but when he started saying it, they started calling him by it. It was the only word he had.”

Whether he intended it or not, Martin created a character who is a textbook example of someone with a neurological condition called expressive aphasia.


Researchers Find Brain Area That Integrates Speech’s Rhythms

Duke and MIT scientists have discovered an area of the brain that is sensitive to the timing of speech, a crucial element of spoken language.

Timing matters to the structure of human speech. For example, phonemes are the shortest, most basic units of speech and last an average of 30 to 60 milliseconds. By comparison, syllables take longer: 200 to 300 milliseconds. Most whole words are longer still.

To understand speech, the brain needs to somehow integrate this rapidly evolving information.


The auditory system, like other sensory systems, likely takes shortcuts to cope with the onslaught of information – by, for example, sampling information in chunks similar in length to that of an average consonant or syllable, says study co-author Tobias Overath, an assistant research professor of psychology and neuroscience at Duke. The other corresponding author is Josh McDermott from MIT.

In a study appearing May 18 in the journal Nature Neuroscience, Overath and his collaborators cut recordings of foreign speech into short chunks ranging from 30 to 960 milliseconds in length, and then reassembled the pieces using a novel algorithm to create new sounds that the authors call ‘speech quilts’.

The shorter the pieces of the resulting speech quilts, the greater the disruption was to the original structure of the speech.
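To make the quilting idea concrete, here is a minimal sketch of how such stimuli could be built, assuming a mono recording sampled at a known rate. The study used a novel algorithm to reorder and stitch the segments smoothly; the simple shuffle below is only an illustration of the basic idea, and the sample rate and durations are assumptions.

```python
import random
import numpy as np

def make_speech_quilt(signal, sr, seg_ms, seed=0):
    """Toy 'speech quilt': chop a recording into fixed-length segments and
    reassemble them in shuffled order. (The published method reorders
    segments with an algorithm that smooths the boundaries between them;
    this is only a simplified illustration.)"""
    rng = random.Random(seed)
    seg_len = int(sr * seg_ms / 1000)              # segment length in samples
    n_segs = len(signal) // seg_len
    segments = [signal[i * seg_len:(i + 1) * seg_len] for i in range(n_segs)]
    rng.shuffle(segments)                          # destroys structure longer than seg_ms
    return np.concatenate(segments)

sr = 16000                                         # assumed sample rate
speech = np.random.randn(sr * 5)                   # stand-in for a 5-second recording
quilt_short = make_speech_quilt(speech, sr, 30)    # heavily scrambles speech structure
quilt_long = make_speech_quilt(speech, sr, 960)    # preserves much longer stretches
```

The shorter the segment length, the less of the original temporal structure survives, which is exactly the manipulation the brain responses were compared against.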

To measure the activity of neurons in real time, the scientists played speech quilts to study participants while scanning their brains in a functional magnetic resonance imaging machine. The team hypothesized that brain areas involved in speech processing would show larger responses to speech quilts made up of longer segments.

Indeed, a region of the brain called the superior temporal sulcus (STS) became highly active during the 480- and 960-millisecond quilts compared with the 30-millisecond quilts.

In contrast, other areas of the brain involved in processing sound did not change their response as a result of the differences in the sound quilts.

“That was pretty exciting. We knew we were onto something,” said Overath, who is a member of the Duke Institute for Brain Sciences.

The superior temporal sulcus is known to integrate auditory and other sensory information. But it had not previously been shown to be sensitive to the time structure of speech.

To rule out other explanations for the activation of the STS, the researchers tested numerous control sounds designed to mimic speech. One synthetic sound shared the frequency content of speech but lacked its rhythms. Another removed all the pitch from the speech. A third used environmental sounds.

They quilted each of these control stimuli, chopping them up in either 30- or 960-millisecond pieces and stitching them back together, before playing them to participants. The STS didn’t seem responsive to the quilting manipulation when it was applied to these control sounds.

“We really went to great lengths to be certain that the effect we were seeing in STS was due to speech-specific processing and not due to some other explanation, for example, pitch in the sound or it being a natural sound as opposed to some computer-generated sound,” Overath said.

The group plans to study whether the response in the STS is similar for foreign speech that is phonetically very different from English, such as Mandarin, or for quilts of familiar speech that is intelligible and has meaning. For familiar speech they might see stronger activation on the left side of the brain, which is thought to be dominant in processing language.

Hallucinations linked to differences in brain structure

People diagnosed with schizophrenia who are prone to hallucinations are likely to have structural differences in a key region of the brain compared to both healthy individuals and people diagnosed with schizophrenia who do not hallucinate, according to newly published research.

The study, led by the University of Cambridge in collaboration with Durham University, Macquarie University, and Trinity College Dublin, found that reductions in the length of the paracingulate sulcus (PCS), a fold towards the front of the brain, were associated with increased risk of hallucinations in people diagnosed with schizophrenia.

The PCS is one of the last structural folds to develop in the brain before birth, and varies in size between individuals. In a previous study, a team of researchers led by Dr Jon Simons from the Department of Psychology at the University of Cambridge found that variation in the length of the PCS in healthy individuals was linked to the ability to distinguish real from imagined information, a process known as ‘reality monitoring’.

In this new study, published in the journal Nature Communications, Dr Simons and his colleagues analysed 153 structural MRI scans of people diagnosed with schizophrenia and matched control participants, measuring the length of the PCS in each participant’s brain. As difficulty distinguishing self-generated information from that perceived in the outside world may be responsible for many kinds of hallucinations, the researchers wanted to assess whether there was a link between length of the PCS and propensity to hallucinate.

The researchers found that in people diagnosed with schizophrenia, a 1 cm reduction in the fold’s length increased the likelihood of hallucinations by nearly 20%. The effect was observed regardless of whether hallucinations were auditory or visual in nature, consistent with a reality monitoring explanation.
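As a rough illustration of how a per-centimetre effect like this could be estimated, here is a hedged sketch using simulated data and a logistic regression of hallucination status on PCS length. The article does not state the study's actual statistical model, so the modelling choice, the simulated values, and the variable names below are assumptions for illustration only.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated stand-in for the 153 scans: PCS length in cm and whether the
# person experiences hallucinations (1) or not (0). All values are invented.
pcs_length = np.clip(rng.normal(loc=4.0, scale=1.5, size=153), 0.5, None)
p_halluc = 1.0 / (1.0 + np.exp(-(0.8 - 0.2 * pcs_length)))  # shorter PCS -> higher risk
hallucinates = rng.binomial(1, p_halluc)

# Logistic regression of hallucination status on PCS length.
model = sm.Logit(hallucinates, sm.add_constant(pcs_length)).fit(disp=False)
beta_per_cm = model.params[1]                # change in log-odds per +1 cm of PCS
odds_ratio_per_cm_shorter = np.exp(-beta_per_cm)
print(odds_ratio_per_cm_shorter)             # ~1.2 would mean ~20% higher odds per 1 cm reduction
```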

“Schizophrenia is a complex spectrum of conditions that is associated with many differences throughout the brain, so it can be difficult to make specific links between brain areas and the symptoms that are often observed,” says Dr Simons. “By comparing brain structure in a large number of people diagnosed with schizophrenia with and without the experience of hallucinations, we have been able to identify a particular brain region that seems to be associated with a key symptom of the disorder.”

The researchers believe that changes in other areas of the brain are likely also important in generating the complex phenomena of hallucinations, possibly including regions that process visual and auditory perceptual information. In people who experience hallucinations, these areas may produce altered perceptions which, due to differences in reality monitoring processes supported by regions around the PCS, may be misattributed as being real. For example, a person may vividly imagine a voice but judge that it arises from the outside world, experiencing the voice as a hallucination.

“We think that the PCS is involved in brain networks that help us recognise information that has been generated ourselves,” adds Dr Jane Garrison, first author of the study. “People with a shorter PCS seem less able to distinguish the origin of such information, and appear more likely to experience it as having been generated externally.

“Hallucinations are very complex phenomena that are a hallmark of mental illness and, in different forms, are also quite common across the general population. There is likely to be more than one explanation for why they arise, but this finding seems to help explain why some people experience things that are not actually real.”

(Image caption: Left: A composite image showing the brain lesions of people with spelling difficulty after strokes. Right: An image of a healthy brain depicting the regions typically active during spelling)

What Goes Wrong in the Brain When Someone Can’t Spell

By studying stroke victims who have lost the ability to spell, researchers have pinpointed the parts of the brain that control how we write words.

In the latest issue of the journal Brain, Johns Hopkins University neuroscientists link basic spelling difficulties for the first time with damage to seemingly unrelated regions of the brain, shedding new light on the mechanics of language and memory.

“When something goes wrong with spelling, it’s not one thing that always happens — different things can happen and they come from different breakdowns in the brain’s machinery,” said lead author Brenda Rapp, a professor in the Department of Cognitive Science. “Depending on what part breaks, you’ll have different symptoms.”

Rapp’s team studied 15 years’ worth of cases in which 33 people were left with spelling impairments after suffering strokes. Some of the people had long-term memory difficulties, others working-memory issues.

With long-term memory difficulties, people can’t remember how to spell words they used to know and tend to make educated guesses. They could probably correctly guess a predictably spelled word like “camp,” but with a more unpredictable spelling like “sauce,” they might try “soss.” In severe cases, people trying to spell “lion” might offer things like “lonp,” “lint” and even “tiger.” With working memory issues, people know how to spell words but they have trouble choosing the correct letters or assembling the letters in the correct order — “lion” might be “liot,” “lin,” “lino,” or “liont.”

The team used computer mapping to chart the brain lesions of each individual and found that in the long-term memory cases, damage appeared in two areas of the left hemisphere, one towards the front of the brain and the other at the lower part of the brain towards the back. In the working-memory cases, the lesions were also primarily in the left hemisphere, but in a very different area in the upper part of the brain towards the back.
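The article does not name the mapping software or the exact analysis, but the basic idea of charting lesions by group can be sketched as follows: binary lesion masks registered to a common template are summed within each deficit group, and voxels damaged in many patients mark where that group's lesions cluster. The grid size and the split of the 33 cases below are illustrative assumptions.

```python
import numpy as np

template_shape = (91, 109, 91)   # assumed common template grid

# Hypothetical binary lesion masks (1 = damaged voxel), one per patient,
# already registered to the template. The group split is invented.
ltm_masks = [np.zeros(template_shape, dtype=np.uint8) for _ in range(17)]  # long-term memory deficits
wm_masks = [np.zeros(template_shape, dtype=np.uint8) for _ in range(16)]   # working-memory deficits

# Overlap maps: at each voxel, how many patients in the group have a lesion there.
ltm_overlap = np.sum(ltm_masks, axis=0)
wm_overlap = np.sum(wm_masks, axis=0)

# Voxels damaged in at least half of a group highlight where that group's
# lesions concentrate; comparing the two maps shows how distinct the regions are.
ltm_hotspots = ltm_overlap >= len(ltm_masks) / 2
wm_hotspots = wm_overlap >= len(wm_masks) / 2
```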

“I was surprised to see how distant and distinct the brain regions are that support these two subcomponents of the writing process, especially two subcomponents that are so closely inter-related during spelling that some have argued that they shouldn’t be thought of as separate functions,” Rapp said. “You might have thought that they would be closer together and harder to tease apart.”

Though science knows quite a bit about how the brain handles reading, these findings offer some of the first clear evidence of how it spells, an understanding that could lead to improved behavioral treatments after brain damage and more effective ways to teach spelling.

Researchers examine how a face represents a whole person in the brain

The sight of a face offers the brain something special. More than a set of features, it conveys the emotions, intent, and identity of the whole individual. The same is not true for the body; cues such as posture convey some social information, but the image of a body does not substitute for a face.

(Image caption: Friend or foe?: The brain patches activated by the sight of a face (red) or a body (blue) appear above in the flattened representation of the area around one macaque’s superior temporal sulcus (dark gray))

A brain imaging study at the Rockefeller University offers some insight into how faces achieve this special status. The scientists found that certain spots dedicated to processing faces in the primate brain prefer faces with bodies—evidence they are combining both facial and body information to represent an individual.

The study, published on October 13 in the Proceedings of the National Academy of Sciences, was conducted in rhesus macaque monkeys. Humans have a similar system that responds to faces, suggesting the findings have relevance for understanding our own social processing as well.

“The body, arguably, is the most important contextual clue a viewer has to help make sense of a face,” says senior author Winrich Freiwald, an assistant professor and head of the Laboratory of Neural Systems. “Work by Clark Fisher, a graduate student in my lab, is remarkable in that it shows how the face-processing network places information about a face into its natural context as part of the body, and so begins to generate a sense of agency associated with the whole individual.”

How the brain processes faces and bodies
In work published in 2008, Freiwald and his colleague Doris Tsao showed that a network of patches along a deep groove in the sides of the macaque brain act as a specialized system for processing faces. A similar system has been found in the human brain, although it is not yet clear how the respective networks align. Both macaque and human brains also have separate patches that respond to bodies.

The conventional anatomical wisdom is that both species’ brains process faces and bodies independently. However, some studies of human perception suggest a more complex situation. For instance, one study found people’s perception of the emotion shown by a face can be altered by body posture, even when the viewers were told to disregard the body.

More than the sum of the parts
In the study, Fisher began by showing macaques still images that displayed the face of a fellow macaque alone, the body without a face, or the entire animal.

Using high-resolution brain activity scans, captured with a method known as functional magnetic resonance imaging, he recorded how each of the six macaque face patches, located in a part of the brain called the superior temporal sulcus (STS), responded. This approach was intended to reveal if a patch reacted strictly to faces or to some degree to bodies as well—or if a patch preferred a face and body together more than the sum of both presented separately.

“The only known way to get what we call a superadditive response, which exceeds those prompted by an individual face and body combined, is if there is some kind of interaction between the facial and body information in the patch,” Fisher says. This interaction is important because it suggests the brain is no longer just receiving information from the eyes, but beginning to make sense of it.
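A minimal sketch of that superadditivity criterion, with invented response values: a patch counts as superadditive when its response to the whole animal exceeds the sum of its responses to the face alone and the body alone. The real analysis, of course, relied on statistical tests applied to measured fMRI responses rather than a simple comparison of single numbers.

```python
def is_superadditive(resp_whole, resp_face, resp_body):
    """True if the response to the whole animal exceeds the sum of the
    responses to the face alone and the body alone (illustrative check only)."""
    return resp_whole > resp_face + resp_body

# Toy per-patch mean responses in arbitrary units (values are invented).
patch_responses = {
    "patch_A": {"whole": 1.9, "face": 1.0, "body": 0.3},   # superadditive
    "patch_B": {"whole": 1.2, "face": 1.0, "body": 0.3},   # not superadditive
}
for name, r in patch_responses.items():
    print(name, is_superadditive(r["whole"], r["face"], r["body"]))
```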

Two of the face patches, one in particular, showed evidence of a superadditive response. When Fisher replaced the macaque bodies with images of other objects—a metronome, a spray bottle, a power tool—superadditivity disappeared from these patches. This result suggested the two patches were responding specifically to bodies, not just any object.

He also performed the same experiments while looking at two neighboring body patches, but these patches appeared largely uninterested in faces. This finding appears to match the asymmetry found in human social perception—the fact that bodies influence our perception of faces, while faces do not really add to our reading of bodies.

A critical node
Aside from body context, another crucial clue to the state of mind and intent of another individual comes from the motion of his or her face. Previously, Fisher and Freiwald found face patches respond to facial motion. As it turns out, the face patches’ preferences for bodies and for facial motion intersect at one particular patch. Located within a region at the front of the STS, this patch responds strongly to both.

“We now think this particular face patch might be a critical node in social cognition, the process by which the brain infers a sense of agency for another individual and so determines how to interact appropriately,” Freiwald says.

Cassini view of Enceladus, February 15, 2016

This image was taken during Cassini’s final close flyby of Enceladus. It captures Enceladus’ heavily fractured southern hemisphere from a distance of about 83,000 kilometers. Running left to right near the terminator is Cashmere Sulcus, and extending north towards the limb is Labtayt Sulcus. Mosul Sulcus is near the limb on the left. The south pole itself is in winter night.

Credit: NASA / JPL / SSI / Justin Cowart

Researchers uncover "predictive neuron orchestra" behind looking and reaching movements

Different groups of neurons “predict” the body’s subsequent looking and reaching movements, suggesting an orchestration among distinct parts of the brain, a team of neuroscientists has found. The study enhances our understanding of the decision-making process, potentially offering insights into different forms of mental illness—afflictions in which this dynamic is typically impaired.

“Identifying which neurons are involved in looking and reaching actions means we can actually see them firing before these decisions are made, offering a crystal ball of sorts into subsequent movements,” said Bijan Pesaran, an associate professor at NYU’s Center for Neural Science, member of NYU’s Institute for the Interdisciplinary Study of Decision Making, and the study’s senior author.

It’s long been known that selecting and planning actions involve recruiting neurons across many areas of the brain. Specifically, it had been previously established that neurons in the lateral, or side, portion of the brain’s intraparietal sulcus (IPS) were active prior to eye movements while neurons on its medial bank fired before arm movements.

Less clear, however, is how ensembles of neurons work together to make decisions—such as eyeing a target, then reaching for it.

To address this question in their study, which appears in the journal Nature Neuroscience, the researchers examined different groups of neurons that were active ahead of a decision that involved discrete actions: eye movement and arm movement, or reach. This allowed the scientists to map an array of neuronal activity during two simultaneous actions.

In the study, primates engaged in a series of activities that involved both looking and reaching for different colored targets on a computer screen. During these tasks, the scientists recorded neural activity in the IPS.

Here, they found “coherent” patterns of spiking activity among groups of neurons in both the lateral and medial regions of the IPS that predicted both eye and reaching movements. Other groups of neurons fired spikes without coherent patterns, and they did not predict the movements. The results, then, offered both a prediction of subsequent actions—based on preceding neuronal activity—and indicated an orchestration between these distinct sets of neurons.
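As a rough sketch of what a “coherent” spiking pattern between two populations might look like analytically, here is a toy example using binned spike counts and scipy's spectral coherence estimate. The study's actual recording and analysis pipeline is not described in the article, so the bin size, the simulated data, and the choice of coherence function below are assumptions for illustration.

```python
import numpy as np
from scipy.signal import coherence

fs = 1000                                   # assume spike counts binned at 1 ms
rng = np.random.default_rng(1)

# Simulated binned spike counts for two populations recorded simultaneously
# from the lateral and medial banks of the IPS. A shared drive makes them
# fluctuate together, mimicking "coherent" pre-movement activity.
shared_drive = rng.poisson(0.05, size=5000)
lateral_spikes = shared_drive + rng.poisson(0.02, size=5000)
medial_spikes = shared_drive + rng.poisson(0.02, size=5000)

# Spectral coherence between the two signals: values near 1 mean the
# populations co-fluctuate at that frequency; values near 0 mean they do not.
freqs, coh = coherence(lateral_spikes, medial_spikes, fs=fs, nperseg=512)
print(coh.max())
```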

“The timing of the spiking of these populations of neurons indicates they are working together ahead of a decision being made—apparently ‘sharing’ information before any overt action is taken,” observes Pesaran.


Pairing: Dean x Reader (Female)

Word count: 904

My entry for @torn-and-frayed’s Songs of Supernatural Challenge.

This was so much fun!

Song: ‘Your Hooks On Me’ by Little Charlie and The Nightcats.

Warnings: SMUT-FLUFF. (Unprotected sex, this is only fiction! Always wrap it up!)

Dean looks down between them, to where their bodies are linked together, cock rigid, that big crooked vein traveling from his shaft to the sulcus under the head, popped up and throbbing.


A study demonstrates the possibility of changing the behaviour of the gaze by transcranial magnetic stimulation

It is widely accepted that gaze plays an essential role in social interactions. From a very young age, human beings look others in the eye, because information from the eyes allows us to infer their intentions and feelings. In the brain, many studies have highlighted the importance of a specific region, the superior temporal sulcus (STS), in the perception of gaze and in gaze behaviour. To date, however, no experimental data had demonstrated that gaze behaviour can be modified by artificially modulating a neural network.

Work conducted by Inserm Unit 1000, funded by AP-HP, has confirmed that targeted intervention in the STS can affect gaze behaviour. The researchers used transcranial magnetic stimulation (TMS), a non-invasive and painless method in which a magnetic pulse is applied to the brain through the skull, to inhibit the STS, and used eye-tracking (oculometry) to study the resulting changes in gaze. They showed 15 subjects films of actors and recorded how the subjects looked at these films before and after inhibition of the STS. Compared with the baseline measurement, the subjects’ gaze moved significantly further away from the actors’ eyes. Inhibition of the superior temporal sulcus therefore selectively and transiently disrupts the tendency to direct one’s gaze to another person’s eyes.
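A minimal sketch of the kind of eye-tracking measure described, with invented screen coordinates: the average distance between gaze samples and the actors' eye region is compared before and after STS inhibition. The coordinates, units, and eye-region definition below are assumptions, not the study's actual analysis.

```python
import numpy as np

def mean_distance_to_eyes(gaze_xy, eyes_xy):
    """Average Euclidean distance (in pixels) between gaze samples and the
    centre of the actors' eye region, matched frame by frame."""
    return float(np.mean(np.linalg.norm(gaze_xy - eyes_xy, axis=1)))

rng = np.random.default_rng(2)
eyes = np.tile([640.0, 300.0], (1000, 1))                 # invented eye-region centre per frame
gaze_before = eyes + rng.normal(0, 40, size=(1000, 2))    # gaze clusters near the eyes
gaze_after = eyes + rng.normal(60, 80, size=(1000, 2))    # gaze drifts away after STS inhibition

print(mean_distance_to_eyes(gaze_before, eyes))   # smaller: baseline
print(mean_distance_to_eyes(gaze_after, eyes))    # larger: after STS inhibition
```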

These results offer new therapeutic prospects for autistic patients. Many brain imaging studies have revealed anatomical and functional differences in the STS in such patients, who often do not show a marked preference for other people’s eyes.

For Prof Monica Zilbovicius, “given that TMS can be applied so as to inhibit or stimulate a certain brain area, stimulating the STS using TMS could cause an increase in gazing into the eyes. This is an avenue we will explore during the next stage of our research”.


Consumers can expect changes to the Crest toothpastes on store shelves nationwide.

24-Hour News 8 took a look at what that means for the tube already in medicine cabinets across the country.

Trish Walraven, a dental hygienist, wanted people to take a good look at the blue specks in their toothpaste.

Walraven said she had no clue what they were when she spotted the particles a few years ago in the gum lines of her patients.

“We thought it was maybe a cleaning product or something that people were chewing,” said Walraven.

It turns out those blue specks are polyethylene. It’s a plastic used in products like grocery bags, bulletproof vests and even knee replacements.

Walraven found Crest appeared to use the plastic in its products too.

24-Hour News 8 also spoke with Doctor Domenick Zero who is the Director of Oral Health Research at the IU School of Dentistry.

“In theory, if these particles are trapped in the gingival sulcus, sort of the space between the tooth and the gum tissue, it could act as an irritant. I don’t know of any studies that support that,” said Zero.

Walraven said polyethylene is used in toothpaste for decorative purposes only.

“From an oral health point of view, there’s really no benefit. They are pretty much inert particles and they are there to bring color and pizazz to the toothpaste,” said Zero.

Procter and Gamble, the makers of Crest, released a statement in an e-mail saying in part, “P&G understands there is a growing preference for them to remove this ingredient, so P&G will.”

Dr. Zero says there’s no immediate reason for people to be concerned about the product.

“These micro-particles, microbeads are approved by the FDA, so it’s legal, it’s possible for the manufacturers to include them.”

Procter and Gamble also said the majority of Crest products will be microbead-free by March of 2015.

Consumers can check the ingredient list on their toothpaste for microbeads. Just look for the key word: polyethylene.