Try, try again? Study says no

When it comes to learning languages, adults and children have different strengths. Adults excel at absorbing the vocabulary needed to navigate a grocery store or order food in a restaurant, but children have an uncanny ability to pick up on subtle nuances of language that often elude adults. Within months of living in a foreign country, a young child may speak a second language like a native speaker.

Brain structure plays an important role in this “sensitive period” for learning language, which is believed to end around adolescence. The young brain is equipped with neural circuits that can analyze sounds and build a coherent set of rules for constructing words and sentences out of those sounds. Once these language structures are established, it’s difficult to build another one for a new language.

In a new study, a team of neuroscientists and psychologists led by Amy Finn, a postdoc at MIT’s McGovern Institute for Brain Research, has found evidence for another factor that contributes to adults’ language difficulties: When learning certain elements of language, adults’ more highly developed cognitive skills actually get in the way. The researchers discovered that the harder adults tried to learn an artificial language, the worse they were at deciphering the language’s morphology — the structure and deployment of linguistic units such as root words, suffixes, and prefixes.

“We found that effort helps you in most situations, for things like figuring out what the units of language that you need to know are, and basic ordering of elements. But when trying to learn morphology, at least in this artificial language we created, it’s actually worse when you try,” Finn says.

Finn and colleagues from the University of California at Santa Barbara, Stanford University, and the University of British Columbia describe their findings in the July 21 issue of PLoS One. Carla Hudson Kam, an associate professor of linguistics at British Columbia, is the paper’s senior author.

Too much brainpower

Linguists have known for decades that children are skilled at absorbing certain tricky elements of language, such as irregular past participles (examples of which, in English, include “gone” and “been”) or complicated verb tenses like the subjunctive.

“Children will ultimately perform better than adults in terms of their command of the grammar and the structural components of language — some of the more idiosyncratic, difficult-to-articulate aspects of language that even most native speakers don’t have conscious awareness of,” Finn says.

In 1990, linguist Elissa Newport hypothesized that adults have trouble learning those nuances because they try to analyze too much information at once. Adults have a much more highly developed prefrontal cortex than children, and they tend to throw all of that brainpower at learning a second language. This high-powered processing may actually interfere with certain elements of learning language.

“It’s an idea that’s been around for a long time, but there hasn’t been any data that experimentally show that it’s true,” Finn says.

Finn and her colleagues designed an experiment to test whether exerting more effort would help or hinder success. First, they created nine nonsense words, each with two syllables. Each word fell into one of three categories (A, B, and C), defined by the order of consonant and vowel sounds.

Study subjects listened to the artificial language for about 10 minutes. One group of subjects was told not to overanalyze what they heard, but not to tune it out either. To help them not overthink the language, they were given the option of completing a puzzle or coloring while they listened. The other group was told to try to identify the words they were hearing.

Each group heard the same recording, which was a series of three-word sequences — first a word from category A, then one from category B, then category C — with no pauses between words. Previous studies have shown that adults, babies, and even monkeys can parse this kind of information into word units, a task known as word segmentation.
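
To make that design concrete, here is a minimal Python sketch of how such a continuous stream could be generated. The nonsense words below are invented placeholders (the study's actual words, and the consonant-vowel patterns that defined each category, are not reproduced here); only the fixed A-then-B-then-C ordering and the lack of pauses follow the description above.

```python
import random

# A minimal sketch of the stimulus construction described above. The nine
# two-syllable words are invented placeholders, NOT the words used in the
# study; only the fixed A -> B -> C ordering and the absence of pauses
# between words follow the article's description.
CATEGORY_WORDS = {
    "A": ["kibo", "dalu", "rupa"],
    "B": ["meto", "sifu", "gano"],
    "C": ["lepi", "bovu", "temi"],
}

def build_stream(n_sentences: int = 200, seed: int = 0) -> str:
    """Concatenate randomly chosen A-B-C word triplets into one long string."""
    rng = random.Random(seed)
    words = []
    for _ in range(n_sentences):
        for category in ("A", "B", "C"):
            words.append(rng.choice(CATEGORY_WORDS[category]))
    return "".join(words)  # no pauses, so no visible word boundaries

print(build_stream(n_sentences=3))
# e.g. 'rupasifulepidalumetobovu...' -- the listener has to segment this
```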

Subjects from both groups were successful at word segmentation, although the group that tried harder performed a little better. Both groups also performed well in a task called word ordering, which required subjects to choose between a correct word sequence (ABC) and an incorrect sequence (such as ACB) of words they had previously heard.

The final test measured skill in identifying the language’s morphology. The researchers played a three-word sequence that included a word the subjects had not heard before, but which fit into one of the three categories. When asked to judge whether this new word was in the correct location, the subjects who had been asked to pay closer attention to the original word stream performed much worse than those who had listened more passively.

“This research is exciting because it provides evidence indicating that effortful learning leads to different results depending upon the kind of information learners are trying to master,” says Michael Ramscar, a professor of linguistics at the University of Tübingen who was not part of the research team. “The results indicate that learning to identify relatively simple parts of language, such as words, is facilitated by effortful learning, whereas learning more complex aspects of language, such as grammatical features, is impeded by effortful learning.”

Turning off effort

The findings support a theory of language acquisition that suggests that some parts of language are learned through procedural memory, while others are learned through declarative memory. Under this theory, declarative memory, which stores knowledge and facts, would be more useful for learning vocabulary and certain rules of grammar. Procedural memory, which guides tasks we perform without conscious awareness of how we learned them, would be more useful for learning subtle rules related to language morphology.

“It’s likely to be the procedural memory system that’s really important for learning these difficult morphological aspects of language. In fact, when you use the declarative memory system, it doesn’t help you, it harms you,” Finn says.

Still unresolved is the question of whether adults can overcome this language-learning obstacle. Finn says she does not have a good answer yet but she is now testing the effects of “turning off” the adult prefrontal cortex using a technique called transcranial magnetic stimulation. Other interventions she plans to study include distracting the prefrontal cortex by forcing it to perform other tasks while language is heard, and treating subjects with drugs that impair activity in that brain region.

Neural Basis of Prejudice and Stereotyping

As social beings, humans have the capacity to make quick evaluations that allow for discernment of in-groups (us) and out-groups (them). However, these fast computations also set the stage for social categorizations, including prejudice and stereotyping.

According to David Amodio, author of the review I am summarizing: 

Social prejudices are scaffolded by basic-level neurocognitive structures, but their expression is guided by personal goals and normative expectations, played out in dyadic and intergroup settings; this is truly the human brain in vivo.

But what is the role of the brain in prejudice and stereotypes? First, let’s start by defining and distinguishing between the two: 

Prejudice refers to preconceptions — often negative — about groups or individuals based on their social, racial, or ethnic affiliations, whereas stereotypes are generalized characteristics ascribed to a social group, such as personal traits or circumstantial attributes. However, these two are rarely solo operators and often work in combination to influence social behavior.

Research on the neural basis of prejudice has placed emphasis on brain areas implicated in emotion and motivation. These include the amygdala, insula, striatum, and regions of the prefrontal cortex (see top figure). Specifically, the amygdala is involved in the rapid processing of social category cues, including racial groups, in terms of potential threat or reward. The striatum mediates approach-related instrumental responses, while the insula, an area implicated in disgust, supports visceral and subjective emotional responses toward social ingroups or outgroups. Affect-driven judgments of social outgroup members rely on the orbitofrontal cortex (OFC) and may be characterized by reduced activity in the ventral medial prefrontal cortex (mPFC), a region involved in empathy and mentalizing. Together, these structures are thought to form a core network that underlies the experience and expression of prejudice.

In contrast to prejudice, which reflects an evaluative or emotional component of social bias, stereotypes represent the cognitive component. As such, stereotyping is a little more complex because it involves the encoding and storage of stereotype concepts, the selection and activation of these concepts into working memory, and their application in judgments and behaviors. When it comes to social judgments, I find it useful to think of prejudice as a low road, and stereotypes as a high road (which recruits higher-order cortical areas). For example, stereotyping involves cortical structures supporting more general forms of semantic memory, object memory, retrieval, and conceptual activation, such as the temporal lobes and inferior frontal gyrus (IFG), as well as regions that are involved in impression formation, like the mPFC (see bottom figure).

Importantly, although prejudice and stereotyping share overlapping neural circuitry, they are considered distinct and dissociable networks. Also, it is important to remember that areas such as the mPFC include many subdivisions that may contribute to different aspects of each network. This matters because these within-structure subdivisions are usually not readily identifiable in neuroimaging studies. Anyway, if you want to learn more about the specifics of these networks and see real-world examples of them at work, read the full review article (see below).

Source:

Amodio, D. (2014). The neuroscience of prejudice and stereotyping. Nature Reviews Neuroscience. doi: 10.1038/nrn3800

Why Does the Brain Remember Dreams?

Some people recall a dream every morning, whereas others rarely recall one. A team led by Perrine Ruby, an Inserm Research Fellow at the Lyon Neuroscience Research Center, has studied the brain activity of these two types of dreamers in order to understand the differences between them.

In a study published in the journal Neuropsychopharmacology, the researchers show that the temporoparietal junction, an information-processing hub in the brain, is more active in high dream recallers. Increased activity in this brain region might facilitate attention orienting toward external stimuli and promote intrasleep wakefulness, thereby facilitating the encoding of dreams in memory.

A Great Big Pile of Leaves' FANTASTIC debut LP Have You Seen My Prefrontal Cortex? — re-issued on vinyl for the first time since originally being released via Sinking Ships Records back in 2010 and subsequently out of print — is available now through us as a beautiful double-LP with gatefold packaging and insert! Pre-orders are up now!

Reblog this for a chance to win a test press of the album!

When good people do bad things

When people get together in groups, unusual things can happen — both good and bad. Groups create important social institutions that an individual could not achieve alone, but there can be a darker side to such alliances: Belonging to a group makes people more likely to harm others outside the group.

“Although humans exhibit strong preferences for equity and moral prohibitions against harm in many contexts, people’s priorities change when there is an ‘us’ and a ‘them,’” says Rebecca Saxe, an associate professor of cognitive neuroscience at MIT. “A group of people will often engage in actions that are contrary to the private moral standards of each individual in that group, sweeping otherwise decent individuals into ‘mobs’ that commit looting, vandalism, even physical brutality.”

Several factors play into this transformation. When people are in a group, they feel more anonymous, and less likely to be caught doing something wrong. They may also feel a diminished sense of personal responsibility for collective actions.

Saxe and colleagues recently studied a third factor that cognitive scientists believe may be involved in this group dynamic: the hypothesis that when people are in groups, they “lose touch” with their own morals and beliefs, and become more likely to do things that they would normally believe are wrong.

In a study that recently went online in the journal NeuroImage, the researchers measured brain activity in a part of the brain involved in thinking about oneself. They found that in some people, this activity was reduced when the subjects participated in a competition as part of a group, compared with when they competed as individuals. Those people were more likely to harm their competitors than people who did not exhibit this decreased brain activity.

“This process alone does not account for intergroup conflict: Groups also promote anonymity, diminish personal responsibility, and encourage reframing harmful actions as ‘necessary for the greater good.’ Still, these results suggest that at least in some cases, explicitly reflecting on one’s own personal moral standards may help to attenuate the influence of ‘mob mentality,’” says Mina Cikara, a former MIT postdoc and lead author of the NeuroImage paper.

Group dynamics

Cikara, who is now an assistant professor at Carnegie Mellon University, started this research project after experiencing the consequences of a “mob mentality”: During a visit to Yankee Stadium, her husband was ceaselessly heckled by Yankees fans for wearing a Red Sox cap. “What I decided to do was take the hat from him, thinking I would be a lesser target by virtue of the fact that I was a woman,” Cikara says. “I was so wrong. I have never been called names like that in my entire life.”

The harassment, which continued throughout the trip back to Manhattan, provoked a strong reaction in Cikara, who isn’t even a Red Sox fan.

“It was a really amazing experience because what I realized was I had gone from being an individual to being seen as a member of ‘Red Sox Nation.’ And the way that people responded to me, and the way I felt myself responding back, had changed, by virtue of this visual cue — the baseball hat,” she says. “Once you start feeling attacked on behalf of your group, however arbitrary, it changes your psychology.”

Cikara, then a third-year graduate student at Princeton University, started to investigate the neural mechanisms behind the group dynamics that produce bad behavior. In the new study, done at MIT, Cikara, Saxe (who is also an associate member of MIT’s McGovern Institute for Brain Research), former Harvard University graduate student Anna Jenkins, and former MIT lab manager Nicholas Dufour focused on a part of the brain called the medial prefrontal cortex. When someone is reflecting on himself or herself, this part of the brain lights up in functional magnetic resonance imaging (fMRI) brain scans.

A couple of weeks before the study participants came in for the experiment, the researchers surveyed each of them about their social-media habits, as well as their moral beliefs and behavior. This allowed the researchers to create individualized statements for each subject that were true for that person — for example, “I have stolen food from shared refrigerators” or “I always apologize after bumping into someone.”

When the subjects arrived at the lab, their brains were scanned as they played a game once on their own and once as part of a team. The purpose of the game was to press a button if they saw a statement related to social media, such as “I have more than 600 Facebook friends.”

The subjects also saw their personalized moral statements mixed in with sentences about social media. Brain scans revealed that when subjects were playing for themselves, the medial prefrontal cortex lit up much more when they read moral statements about themselves than statements about others, consistent with previous findings. However, during the team competition, some people showed a much smaller difference in medial prefrontal cortex activation when they saw the moral statements about themselves compared to those about other people.

Those people also turned out to be much more likely to harm members of the competing group during a task performed after the game. Each subject was asked to select photos that would appear with the published study, from a set of four photos apiece of two teammates and two members of the opposing team. The subjects with suppressed medial prefrontal cortex activity chose the least flattering photos of the opposing team members, but not of their own teammates.
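
As a rough, illustrative sketch of the relationship described here (not the authors' actual fMRI analysis, and using randomly generated stand-in numbers), one can compute for each subject how much the "self minus other" mPFC response shrinks during team play and correlate that suppression with a count of unflattering photo choices:

```python
import numpy as np
from scipy.stats import pearsonr

# Illustrative sketch only: fake data standing in for per-subject mPFC
# responses to self- vs. other-referential moral statements.
rng = np.random.default_rng(0)
n_subjects = 24

solo_self = rng.normal(1.0, 0.2, n_subjects)   # solo condition, "self" statements
solo_other = rng.normal(0.4, 0.2, n_subjects)  # solo condition, "other" statements
team_self = rng.normal(0.7, 0.3, n_subjects)   # team condition, "self" statements
team_other = rng.normal(0.4, 0.2, n_subjects)  # team condition, "other" statements

# How much self-referential differentiation is lost when competing as a group.
suppression = (solo_self - solo_other) - (team_self - team_other)

# Fake harm measure: number of unflattering opponent photos chosen (0-4),
# generated here so that more suppression tends to mean more harm.
harm = np.clip(np.round(1 + 2 * suppression + rng.normal(0, 0.5, n_subjects)), 0, 4)

r, p = pearsonr(suppression, harm)
print(f"suppression vs. harm: r = {r:.2f}, p = {p:.3f}")
```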

“This is a nice way of using neuroimaging to try to get insight into something that behaviorally has been really hard to explore,” says David Rand, an assistant professor of psychology at Yale University who was not involved in the research. “It’s been hard to get a direct handle on the extent to which people within a group are tapping into their own understanding of things versus the group’s understanding.”

Getting lost

The researchers also found that after the game, people with reduced medial prefrontal cortex activity had more difficulty remembering the moral statements they had heard during the game.

“If you need to encode something with regard to the self and that ability is somehow undermined when you’re competing with a group, then you should have poor memory associated with that reduction in medial prefrontal cortex signal, and that’s exactly what we see,” Cikara says.

Cikara hopes to follow up on these findings to investigate what makes some people more likely to become “lost” in a group than others. She is also interested in studying whether people are slower to recognize themselves or pick themselves out of a photo lineup after being absorbed in a group activity.

How is the human brain different from other primate brains? 

Let’s start with what is likely common knowledge: humans have a bigger brain, about three times bigger than an ape’s. But as I’ve said before, bigger doesn’t necessarily equate with better. So what else is different in humans? 

For starters, comparative studies between non-human primates and humans have revealed differences in the volume and distribution of white matter (the component of the brain that includes glia and myelinated axons, which give it its white appearance). 

White matter differences can be telling, but it seems like most of the answer lies in the prefrontal cortex, an area in the frontal lobe of the brain that regulates abstract thinking, thought analysis, decision-making, and behavior. For those who are familiar with Freudian concepts, think of it as being similar to the ego, the structure that enables you to balance conflicting thoughts and make decisions based on potential outcomes.

Scientists have found that the human brain not only features larger areas within the prefrontal cortex compared to the chimpanzee brain, but is also more gyrified. (FYI: gyri are the ridges that increase the brain’s surface area.) In addition, humans have different neurotransmitter innervation of the PFC. For example, humans have a higher number of dopaminergic afferents in layers III, IV, and V and a greater density of serotonergic axons in layers IV and V. 

Recently, a study led by Semendeferi and colleagues has revealed differences in the spatial organization of neurons in the prefrontal cortex that you can see (first figure above). The region shown above is known as Brodmann area 10 (BA10) and it is thought to contribute to abstract thinking and other complex cognitive processes. As you can see, the neurons in the human brain have more space in between them compared to the neurons in the chimpanzee. This extra space is thought to provide more room for connections between neurons, thus enabling more complex information processing and cognition. 

As a control, the researchers studied the organization of other cortical areas, such as the primary visual cortex, primary somatosensory cortex, and primary motor cortex, and found only subtle differences, with no large-scale differences between humans and the other primates (see second diagram). 
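
One simple, toy way to quantify "space between neurons" is the mean nearest-neighbor distance between cell centroids in a two-dimensional section. The sketch below is not the morphometric method Semendeferi and colleagues used, and the point clouds are randomly generated stand-ins; it only illustrates the kind of measurement involved.

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_nearest_neighbor_distance(coords: np.ndarray) -> float:
    """Mean distance from each neuron centroid to its nearest neighbor.

    coords: (n_neurons, 2) array of centroid positions (e.g. in micrometres).
    """
    tree = cKDTree(coords)
    # k=2 because each point's nearest hit is itself at distance 0.
    distances, _ = tree.query(coords, k=2)
    return float(distances[:, 1].mean())

rng = np.random.default_rng(1)
# Fake point clouds with equal neuron counts; the "human-like" patch covers a
# larger area, so its neurons end up further apart on average.
chimp_like = rng.uniform(0, 100, size=(500, 2))
human_like = rng.uniform(0, 130, size=(500, 2))

print("chimp-like spacing (um):", round(mean_nearest_neighbor_distance(chimp_like), 2))
print("human-like spacing (um):", round(mean_nearest_neighbor_distance(human_like), 2))
```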

Who knew that a little space in a higher order structure could make such a difference? 

Source: 

Schoenemann, P.T., Sheehan, M.J., & Glotzer, L.D. (2005). Prefrontal white matter volume is disproportionately larger in humans than in other primates. Nature Neuroscience, 8(2): 242-252. 

Semendeferi et al. (2011). Spatial organization of neurons in the frontal pole sets humans apart from great apes. Cerebral Cortex, 21(7): 1485-97. doi: 10.1093/cercor/bhq191

Shout now! How nerve cells initiate voluntary calls

"Should I say something or not?" Human beings are not alone in pondering this dilemma – animals also face decisions when they communicate by voice. University of Tübingen neurobiologists Dr. Steffen Hage and Professor Andreas Nieder have now demonstrated that nerve cells in the brain signal the targeted initiation of calls – forming the basis of voluntary vocal expression. Their results are published in Nature Communications.

When we speak, we use the sounds we make for a specific purpose – we intentionally say what we think, or consciously withhold information. Animals, however, usually make sounds according to what they feel at that moment. Even our closest relations among the primates make sounds as a reflex based on their mood. Now, Tübingen neuroscientists have shown that rhesus monkeys are able to call (or be silent) on command. They can instrumentalize the sounds they make in a targeted way, an important behavioral ability which we also use to put language to a purpose.

To find out how nerve cells in the brain initiate the production of controlled vocal sounds, the researchers taught rhesus monkeys to call out quickly when a spot appeared on a computer screen. While the monkeys performed this task, measurements taken in their prefrontal cortex revealed astonishing reactions in the cells there. The nerve cells became active whenever the monkey saw the spot of light that was the instruction to call out. But if the monkey simply called out spontaneously, these nerve cells were not activated. The cells therefore did not signal just any vocalization – only calls that the monkey actively decided to make.

The results published in Nature Communications provide valuable insights into the neurobiological foundations of vocalization. “We want to understand the physiological mechanisms in the brain which lead to the voluntary production of calls,” says Dr. Steffen Hage of the Institute for Neurobiology, “because it played a key role in the evolution of the human ability to use speech.” The study offers important indicators of the function of a part of the brain which in humans has developed into one of the central locations for controlling speech. “Disorders in this part of the human brain lead to severe speech disorders or even complete loss of speech in the patient,” Professor Andreas Nieder explains. The results – giving insights into how the production of sound is initiated – may help us better understand speech disorders.

A blood test for suicide?

Johns Hopkins researchers say they have discovered a chemical alteration in a single human gene linked to stress reactions that, if confirmed in larger studies, could give doctors a simple blood test to reliably predict a person’s risk of attempting suicide.

The discovery, described online in The American Journal of Psychiatry, suggests that changes in a gene involved in the brain’s response to stress hormones play a significant role in turning what might otherwise be an unremarkable reaction to the strain of everyday life into suicidal thoughts and behaviors.

“Suicide is a major preventable public health problem, but we have been stymied in our prevention efforts because we have no consistent way to predict those who are at increased risk of killing themselves,” says study leader Zachary Kaminsky, Ph.D., an assistant professor of psychiatry and behavioral sciences at the Johns Hopkins University School of Medicine. “With a test like ours, we may be able to stem suicide rates by identifying those people and intervening early enough to head off a catastrophe.”

For his series of experiments, Kaminsky and his colleagues focused on a genetic mutation in a gene known as SKA2. By looking at brain samples from mentally ill and healthy people, the researchers found that in samples from people who had died by suicide, levels of SKA2 were significantly reduced.

Within this common mutation, they then found in some subjects an epigenetic modification that altered the way the SKA2 gene functioned without changing the gene’s underlying DNA sequence. The modification added chemicals called methyl groups to the gene. Higher levels of methylation were then found in the same study subjects who had killed themselves. The higher levels of methylation among suicide decedents were then replicated in two independent brain cohorts.

In another part of the study, the researchers tested three different sets of blood samples. The largest, involving 325 participants in the Johns Hopkins Center for Prevention Research Study, showed similar methylation increases at SKA2 in individuals with suicidal thoughts or attempts. The researchers then designed a model analysis that predicted, with 80 percent certainty, which of the participants were experiencing suicidal thoughts or had attempted suicide. Those with more severe risk of suicide were predicted with 90 percent accuracy. In the youngest data set, they were able to identify with 96 percent accuracy whether or not a participant had attempted suicide, based on blood test results.
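
For intuition about what such a predictive model involves (the authors' published model is more elaborate, and the data below are randomly generated placeholders, not the study's data), here is a toy sketch that treats the problem as binary classification from SKA2 methylation alone:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Toy sketch only: randomly generated stand-in data, NOT the study's data or
# its published model. It shows the general shape of "predict suicidal
# ideation from SKA2 methylation" as a binary classification problem.
rng = np.random.default_rng(42)
n = 325  # size of the largest blood-sample cohort mentioned above

ska2_methylation = rng.beta(2, 5, size=n)  # fraction methylated (0-1)
ideation = (ska2_methylation + rng.normal(0, 0.15, n) > 0.4).astype(int)  # fake labels

X = ska2_methylation.reshape(-1, 1)
model = LogisticRegression()
accuracy = cross_val_score(model, X, ideation, cv=5, scoring="accuracy").mean()
print(f"cross-validated accuracy on the toy data: {accuracy:.0%}")
```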

The SKA2 gene is expressed in the prefrontal cortex of the brain, which is involved in inhibiting negative thoughts and controlling impulsive behavior. SKA2 is specifically responsible for chaperoning stress hormone receptors into cells’ nuclei so they can do their job. If there isn’t enough SKA2, or it is altered in some way, the stress hormone receptor is unable to suppress the release of cortisol throughout the brain. Previous research has shown that such cortisol release is abnormal in people who attempt or die by suicide.

Kaminsky says a test based on these findings might best be used to predict future suicide attempts in those who are ill, to restrict lethal means or methods among those at risk, or to make decisions regarding the intensity of intervention approaches.

He says that it might make sense for the military to test whether members have the gene mutation that makes them more vulnerable. Those at risk could be more closely monitored when they returned home after deployment. A test could also be useful in a psychiatric emergency room, he says, as part of a suicide risk assessment.

The test could be used in all sorts of safety assessment decisions like the need for hospitalization and closeness of monitoring. Kaminsky says another possible use that needs more study could be to inform treatment decisions, such as whether or not to give certain medications that have been linked with suicidal thoughts.

“We have found a gene that we think could be really important for consistently identifying a range of behaviors from suicidal thoughts to attempts to completions,” Kaminsky says. “We need to study this in a larger sample but we believe that we might be able to monitor the blood to identify those at risk of suicide.”

A rodent model of teenage bullying

Many teenagers experience bullying, and adolescent bullying is a severe stressor associated with greater incidence for psychiatric disorders that can persist into adulthood, including depression and substance use disorders. Such disorders are characterized by deficits in executive functioning, which are known to be mediated by medial prefrontal cortex (mPFC) and dopamine (DA) activity. 

To assess how adolescent bullying changes DA in the mPFC, M.J. Watt and colleagues employed a rodent model of repeated social aggression during adolescence, in which peripubertal rats were exposed to an older, aggressive adult male for 10 minutes a day over a 5-day period (PN35-39). Following each “bullying bout”, a wire mesh was placed in the chamber to separate the intruder (i.e., the aggressor) from the test rat for 25 minutes, which is a form of chronic psychosocial stress and may serve to model the intimidation many teens experience from their own aggressors (e.g., schoolmates and peers). The researchers found that rats experiencing adolescent social defeat showed increased mPFC DA activity at PN40 (the day after the last bullying session), which was maintained until PN49, even though the stressor had not been experienced since PN39. However, at PN56 these same animals had decreased mPFC DA activity, which may be due to over-compensation to reduce the elevated activity seen from PN40-49. Furthermore, mPFC DA hypofunction induced by social defeat stress was specific to animals experiencing it during adolescence, as the same manipulation in adults only led to increased mPFC DA activity following exposure to social aggression. Importantly, this manipulation did not change norepinephrine (NE) or serotonin (5-HT) activity in the mPFC, and animals experiencing social defeat stress during adolescence did not become more aggressive later in development. Collectively, these findings suggest that mPFC DA is particularly susceptible to social stressors during adolescence and that social defeat changes not only mPFC DA function but also behavioral responses to later-life social events.

On a side note, I would be interested to see how these animals perform in a go/no-go task or in a rodent model of the Iowa gambling task. Also, how would these animals perform in a self-administration paradigm, and how would it be modulated by social context? Also, I really like the idea of framing the social defeat paradigm as a model of bullying instead of depression. Kudos!

Lesson of the day: Be nice to each other, because you never know how your words/actions are affecting the other person. 

Source: 

M.J. Watt, L.C. Miller, J.L. Scholl, K.J. Renner, A.M. Novick, G.L. Forster. Trajectory of alterations to cortical dopamine activity in a model of teenage bullying. Program No. 84.03/ZZ23. 2013 Neuroscience Meeting Planner. San Diego, CA: Society for Neuroscience, 2013. Online. 

The Rise and Fall of the Lobotomy

LOBOTOMY is a psychosurgical procedure in which the connections of the prefrontal cortex and underlying structures are severed, the theory being that this leads to the uncoupling of the brain’s emotional centers and the seat of intellect. This is supposed to leave the patient in a state of calm awareness. Unfortunately the majority of cases led to irreversible vegetative states.

The lobotomy was first performed on humans in the 1890s. About half a century later, it was being touted by some as a miracle cure for mental illness, and its use became widespread; during its heyday in the 1940s and ’50s, the lobotomy was performed on some 40,000 patients in the United States, and on around 10,000 in Western Europe. The procedure became popular because there was no alternative, and because it was seen to alleviate several social crises: overcrowding in psychiatric institutions, and the increasing cost of caring for mentally ill patients.

The American clinical neurologist Walter Freeman (1895-1972) had been following the work of earlier lobotomists closely, and had also attended a symposium on the frontal lobe. It was Freeman who introduced the lobotomy to the United States, and who would later become the biggest advocate of the technique.

Freeman developed a quicker method, the so-called “ice-pick” lobotomy, which he performed for the first time on January 17th, 1946. With the patient rendered unconscious by electroshock, an instrument was inserted above the eyeball and driven through the bone of the orbit with a hammer. Once inside the brain, the instrument was moved back and forth to sever connections; this was then repeated on the other side.

Freeman happily performed ice-pick lobotomies on anyone who was referred to him. During his career, he would perform almost 3,500 operations. Like the earlier lobotomies performed in the late 1800s, those performed by Freeman were blind, and also gave mixed results. Some of his patients could return to work, while others were left in something like a vegetative state.

Most famously, Freeman lobotomized President John F. Kennedy’s sister Rosemary, who was 23 years of age when the operation was performed; it left her incapacitated for the rest of her life. (She is pictured above in her before and after states.)

It was largely because of Freeman that the lobotomy became so popular. He traveled across the U.S., teaching his technique to groups of psychiatrists who were not qualified to perform surgery. Freeman was very much a showman; he often deliberately tried to shock observers by performing two-handed lobotomies, or by performing the operation in a production-line manner. (He once lobotomized 25 women in a single day.) Journalists were often present on his “tours” of hospitals, so that his appearance would end up on the front page of the local newspaper; he was also featured in highly popular publications such as Time and Life. Often, these news stories exaggerated the success of lobotomy in alleviating the symptoms of mental illness.

The use of lobotomies began to decline in the mid- to late-1950s, for several reasons. Firstly, although there had always been critics of the technique, opposition to its use became very fierce. Secondly, and most importantly, anti-psychotic drugs, such as chlorpromazine, became widely available. These had much the same effect as psychosurgery gone wrong; thus, the surgical method was quickly superseded by the chemical lobotomy.

Brain scans link concern for justice with reason, not emotion

People who care about justice are swayed more by reason than emotion, according to new brain scan research from the University of Chicago’s Department of Psychology and Center for Cognitive and Social Neuroscience.

Psychologists have found that some individuals react more strongly than others to situations that invoke a sense of justice—for example, seeing a person being treated unfairly or mercifully. The new study used brain scans to analyze the thought processes of people with high “justice sensitivity.”

“We were interested to examine how individual differences about justice and fairness are represented in the brain to better understand the contribution of emotion and cognition in moral judgment,” explained lead author Jean Decety, the Irving B. Harris Professor of Psychology and Psychiatry.    

Using a functional magnetic resonance imaging (fMRI) brain-scanning device, the team studied what happened in the participants’ brains as they judged videos depicting behavior that was morally good or bad. For example, they saw a person put money in a beggar’s cup or kick the beggar’s cup away. The participants were asked to rate on a scale how much they would blame or praise the actor seen in the video. People in the study also completed questionnaires that assessed cognitive and emotional empathy, as well as their justice sensitivity.

As expected, study participants who scored high on the justice sensitivity questionnaire assigned significantly more blame when they were evaluating scenes of harm, Decety said. They also registered more praise for scenes showing a person helping another individual.

But the brain imaging also yielded surprises. During the behavior-evaluation exercise, people with high justice sensitivity showed more activity than average participants in parts of the brain associated with higher-order cognition, whereas activity in brain areas commonly linked with emotional processing did not differ.

The conclusion was clear, Decety said: “Individuals who are sensitive to justice and fairness do not seem to be emotionally driven. Rather, they are cognitively driven.” 

According to Decety, one implication is that the search for justice and the moral missions of human rights organizations and others do not come primarily from sentimental motivations, as they are often portrayed. Instead, that drive may have more to do with sophisticated analysis and mental calculation.

Decety adds that evaluating good actions elicited relatively high activity in the region of the brain involved in decision-making, motivation and rewards. This finding suggests that perhaps individuals make judgments about behavior based on how they process the reward value of good actions as compared to bad actions.

“Our results provide some of the first evidence for the role of justice sensitivity in enhancing neural processing of moral information in specific components of the brain network involved in moral judgment,” Decety said.
