Posts tagged "science"
Neuroscientists propose revolutionary DNA-based approach to map wiring of whole brain

Cold Spring Harbor, NY — A team of neuroscientists has proposed a new and potentially revolutionary way of obtaining a neuronal connectivity map (the “connectome”) of the whole brain of the mouse. The details are set forth in an essay published October 23 in the open-access journal PLOS Biology.
The team, led by Professor Anthony Zador, Ph.D., of Cold Spring Harbor Laboratory, aims to provide a comprehensive account of neural connectivity. At present the only method for obtaining this information with high precision relies on examining individual cell-to-cell contacts (synapses) in electron microscopes. But such methods are slow, expensive and labor-intensive.
Zador and colleagues instead propose to exploit high-throughput DNA sequencing to probe the connectivity of neural circuits at the resolution of single neurons.
"Our method renders the connectivity problem in a format in which the data are readable by currently available high-throughput genome sequencing machines," says Zador. "We propose to do this via a process we’re now developing, called BOINC: the barcoding of individual neuronal connections."
The proposal comes at a time when a number of scientific teams in the U.S. are progressing in their efforts to map connections in the mammalian brain. These efforts use injections of tracer dyes or viruses to map neuronal connectivity at a “mesoscopic” scale—a mid-range resolution that makes it possible to follow neural fibers between brain regions. Other groups are scaling up approaches based on electron microscopy.
Zador’s team wants to trace connectivity “beyond the mesoscopic,” at the level of synaptic contacts between pairs of individual neurons, throughout the brain. The BOINC barcoding technique, now undergoing proof-of-concept testing, will be able, says Zador, “to provide immediate insight into the computations that a circuit performs.” In practice, he adds, most neural computations are not currently understood at this level of precision, partly because detailed circuit information is not available for mammals. The BOINC method promises to be much faster and cheaper than approaches based on electron microscopy, Zador says.
The BOINC method consists of three steps. First, each neuron is labeled with a specific DNA barcode. A barcode consisting of just 20 random DNA “letters” can uniquely label a trillion neurons—many more than exist in the mouse brain.
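To see why 20 letters go so far: each position in the barcode can be one of four bases (A, C, G, T), so a 20-letter barcode has 4^20, roughly 1.1 trillion, possible sequences, compared with tens of millions of neurons in a mouse brain. A minimal back-of-the-envelope check (the neuron count is a rough literature figure, not a number from the essay):

```python
# Rough capacity check for 20-letter DNA barcodes (illustration only;
# not code from the PLOS Biology essay).
BASES = 4                      # A, C, G, T
BARCODE_LENGTH = 20

possible_barcodes = BASES ** BARCODE_LENGTH   # 4**20 = 1,099,511,627,776 (~1.1 trillion)
mouse_neurons = 70e6                          # rough estimate of neurons in a mouse brain

print(f"{possible_barcodes:,} possible barcodes")
print(f"~{possible_barcodes / mouse_neurons:,.0f} barcodes available per neuron")
```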
The second step looks at neurons that are synaptically connected and associates their respective barcodes with one another. One way to do this is by exploiting a virus such as the pseudorabies virus, which can move genetic material across synapses.
"To share barcodes across synapses, the virus must be engineered to carry the barcode within its own genetic sequence," explains Zador. "After the virus spreads across synapses, each neuron effectively ends up as a bag of barcodes, comprising its own code and those from synaptically coupled partners."
The third step involves joining barcodes from synaptically connected neurons to make single pieces of DNA, which can then be read via existing high-throughput DNA sequencing methods. These double-barcode sequences can then be analyzed computationally to reveal the synaptic wiring diagram of the brain.
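Conceptually, that last computational step is bookkeeping: every sequenced molecule names a pair of barcodes, and tallying those pairs gives a connectivity table. A minimal sketch of the idea, with made-up reads standing in for real sequencing output (the actual BOINC analysis pipeline is still being developed and will be far more involved):

```python
from collections import Counter

# Hypothetical barcode pairs recovered from sequencing joined DNA molecules.
# Each tuple is (barcode of one neuron, barcode of a synaptically coupled partner);
# real barcodes would be 20-mers and the read list would run to many millions.
reads = [
    ("AAGTC", "GGCAT"),
    ("AAGTC", "GGCAT"),
    ("AAGTC", "TTACG"),
    ("CCGTA", "TTACG"),
    ("CCGTA", "TTACG"),
]

# Tally how many sequenced molecules support each putative connection.
support = Counter(reads)

# Keep pairs seen at least twice as candidate synaptic connections
# (an arbitrary illustrative threshold, not a criterion from the essay).
connectome = {pair: n for pair, n in support.items() if n >= 2}
print(connectome)   # {('AAGTC', 'GGCAT'): 2, ('CCGTA', 'TTACG'): 2}
```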
Taken together, says Zador, if BOINC succeeds in its current proof-of-concept tests, it will offer a dramatically inexpensive and rapid means of assembling a connectome, even of the complex brains of mammals.
(EurekAlert)

How Fear Can Skew Spatial Perception
That snake heading towards you may be further away than it appears. Fear can skew our perception of approaching objects, causing us to underestimate the distance of a threatening one, finds a study published in Current Biology.
"Our results show that emotion and perception are not fully dissociable in the mind," says Emory psychologist Stella Lourenco, co-author of the study. "Fear can alter even basic aspects of how we perceive the world around us. This has clear implications for understanding clinical phobias."
Lourenco conducted the research with Matthew Longo, a psychologist at Birkbeck, University of London.
People generally have a well-developed sense for when objects heading towards them will make contact, including a split-second cushion for dodging or blocking the object, if necessary. The researchers set up an experiment to test the effect of fear on the accuracy of that skill.
Study participants made time-to-collision judgments of images on a computer screen. The images expanded in size over one second before disappearing, to simulate “looming,” an optical pattern used instinctively to judge collision time. The study participants were instructed to gauge when each of the visual stimuli on the computer screen would have collided with them by pressing a button.
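For the curious, the looming cue itself has simple geometry: as an object of fixed size approaches, the visual angle it subtends grows faster and faster, and dividing that angle by its rate of expansion approximates the time left before contact. A rough sketch with made-up numbers, just to illustrate the cue the participants were judging (not the study's actual stimuli or analysis):

```python
import math

def visual_angle(size_m, distance_m):
    """Visual angle (radians) subtended by an object of a given width at a given distance."""
    return 2 * math.atan(size_m / (2 * distance_m))

# Hypothetical approaching object: 0.3 m wide, starting 5 m away, closing at 4 m/s.
size, start_distance, speed = 0.3, 5.0, 4.0
dt = 0.05   # time step for the finite-difference expansion rate

for step in range(3):
    t = step * dt
    d_now = start_distance - speed * t
    d_next = start_distance - speed * (t + dt)
    theta = visual_angle(size, d_now)
    expansion_rate = (visual_angle(size, d_next) - theta) / dt
    tau = theta / expansion_rate   # looming-based estimate of time-to-collision
    print(f"t={t:.2f}s  distance={d_now:.2f}m  tau={tau:.2f}s  true TTC={d_now / speed:.2f}s")
```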
The participants tended to underestimate the collision time for images of threatening objects, such as a snake or spider, as compared to non-threatening images, such as a rabbit or butterfly.
The results challenge the traditional view of looming as a purely optical cue to object approach. “We’re showing that what the object is affects how we perceive looming. If we’re afraid of something, we perceive it as making contact sooner,” Longo says.
"Even more striking," Lourenco adds, "it is possible to predict how much a participant will underestimate the collision time of an object by assessing the amount of fear they have for that object. The more fearful someone reported feeling of spiders, for example, the more they underestimated time-to-collision for a looming spider. That makes adaptive sense: If an object is dangerous, it’s better to swerve a half-second too soon than a half-second too late."
The researchers note that it’s unclear whether fear of an object makes the object appear to travel faster, or whether that fear makes the viewer expand their sense of personal space, which is generally about an arm’s length away.
"We’d like to distinguish between these two possibilities in future research. Doing so will allow us to shed insight on the mechanics of basic aspects of spatial perception and the mechanisms underlying particular phobias," Lourenco says.

What You Hear Could Depend on What Your Hands are Doing
ScienceDaily (Oct. 14, 2012) — New research links motor skills and perception, and in doing so offers a new understanding of what the left and right brain hemispheres “hear.” Georgetown University Medical Center researchers say these findings may eventually point to strategies to help stroke patients recover their language abilities, and to improve speech recognition in children with dyslexia.

The study, presented at Neuroscience 2012, the annual meeting of the Society for Neuroscience, is the first to match human behavior with left brain/right brain auditory processing tasks. Before this research, neuroimaging tests had hinted at differences in such processing.
"Language is processed mainly in the left hemisphere, and some have suggested that this is because the left hemisphere specializes in analyzing very rapidly changing sounds,” says the study’s senior investigator, Peter E. Turkeltaub, M.D., Ph.D., a neurologist in the Center for Brain Plasticity and Recovery. This newly created center is a joint program of Georgetown University and MedStar National Rehabilitation Network.
Turkeltaub and his team hid rapidly and slowly changing sounds in background noise and asked 24 volunteers to simply indicate whether they heard the sounds by pressing a button.
"We asked the subjects to respond to sounds hidden in background noise," Turkeltaub explained. "Each subject was told to use their right hand to respond during the first 20 sounds, then their left hand for the next 20 second, then right, then left, and so on." He says when a subject was using their right hand, they heard the rapidly changing sounds more often than when they used their left hand, and vice versa for the slowly changing sounds.
"Since the left hemisphere controls the right hand and vice versa, these results demonstrate that the two hemispheres specialize in different kinds of sounds — the left hemisphere likes rapidly changing sounds, such as consonants, and the right hemisphere likes slowly changing sounds, such as syllables or intonation," Turkeltaub explains. "These results also demonstrate the interaction between motor systems and perception. It’s really pretty amazing. Imagine you’re waving an American flag while listening to one of the presidential candidates. The speech will actually sound slightly different to you depending on whether the flag is in your left hand or your right hand."
Ultimately, Turkeltaub hopes that understanding the basic organization of auditory systems and how they interact with motor systems will help explain why language resides in the left hemisphere of the brain, and will lead to new treatments for language disorders, like aphasia (language difficulties after stroke or brain injury) or dyslexia.
"If we can understand the basic brain organization for audition, this might ultimately lead to new treatments for people who have speech recognition problems due to stroke or other brain injury. Understanding better the specific roles of the two hemispheres in auditory processing will be a big step in that direction. If we find that people with aphasia, who typically have injuries to the left hemisphere, have difficulty recognizing speech because of problems with low-level auditory perception of rapidly changing sounds, maybe training the specific auditory processing deficits will improve their ability to recognize speech," Turkeltaub concludes.

The Worst Noises in the World: Why We Recoil at Unpleasant Noises
Heightened activity between the emotional and auditory parts of the brain explains why the sound of chalk on a blackboard or a knife on a bottle is so unpleasant.
In a study published today in the Journal of Neuroscience and funded by the Wellcome Trust, Newcastle University scientists reveal the interaction between the region of the brain that processes sound, the auditory cortex, and the amygdala, which is active in the processing of negative emotions when we hear unpleasant sounds.
Brain imaging has shown that when we hear an unpleasant noise, the amygdala modulates the response of the auditory cortex, heightening activity and provoking our negative reaction.
"It appears there is something very primitive kicking in," says Dr Sukhbinder Kumar, the paper’s author from Newcastle University. "It’s a possible distress signal from the amygdala to the auditory cortex."
Researchers at the Wellcome Trust Centre for Neuroimaging at UCL and Newcastle University used functional magnetic resonance imaging (fMRI) to examine how the brains of 13 volunteers responded to a range of sounds. Listening to the noises inside the scanner, the volunteers rated them from the most unpleasant — the sound of a knife on a bottle — to the most pleasing — bubbling water. Researchers were then able to study the brain response to each type of sound.
Researchers found that the activity of the amygdala and the auditory cortex varied in direct relation to the ratings of perceived unpleasantness given by the subjects. The emotional part of the brain, the amygdala, in effect takes charge and modulates the activity of the auditory part of the brain so that our perception of a highly unpleasant sound, such as a knife on a bottle, is heightened as compared to a soothing sound, such as bubbling water.
Analysis of the acoustic features of the sounds showed that anything in the frequency range of around 2,000 to 5,000 Hz was rated as unpleasant. Dr Kumar explains: “This is the frequency range where our ears are most sensitive. Although there’s still much debate as to why our ears are most sensitive in this range, it does include sounds of screams which we find intrinsically unpleasant.”
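As a rough illustration of that point, one can take a sound, find where its spectrum peaks, and ask whether the peak falls in the 2,000–5,000 Hz band. A hedged sketch with a synthetic tone standing in for a real recording (the study's stimuli and acoustic analysis were, of course, more sophisticated):

```python
import numpy as np

def peak_frequency(signal, sample_rate):
    """Frequency (Hz) at which the signal's magnitude spectrum is largest."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

sample_rate = 44_100
t = np.arange(0, 1.0, 1.0 / sample_rate)
# Synthetic stand-in for a screechy sound: a pure 3 kHz tone, one second long.
screech = np.sin(2 * np.pi * 3000 * t)

peak = peak_frequency(screech, sample_rate)
print(f"Spectral peak at {peak:.0f} Hz; "
      f"inside the 2-5 kHz 'unpleasant' band: {2000 <= peak <= 5000}")
```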
Scientifically, a better understanding of the brain’s reaction to noise could help our understanding of medical conditions where people have a decreased sound tolerance, such as hyperacusis, misophonia (literally a “hatred of sound”) and autism, in which there can be sensitivity to noise.
Professor Tim Griffiths from Newcastle University, who led the study, says: “This work sheds new light on the interaction of the amygdala and the auditory cortex. This might be a new inroad into emotional disorders and disorders like tinnitus and migraine in which there seems to be heightened perception of the unpleasant aspects of sounds.”

The Science of Lucid Dreaming - learn how to control your dreams!

#submission


Researchers Discover that the Sleeping Brain Behaves as if it’s Remembering Something
UCLA researchers have for the first time measured the activity of a brain region known to be involved in learning, memory and Alzheimer’s disease during sleep. They discovered that this part of the brain behaves as if it’s remembering something, even under anesthesia, a finding that counters conventional theories about memory consolidation during sleep.
The research team simultaneously measured the activity of single neurons from multiple parts of the brain involved in memory formation. The technique allowed them to determine which brain region was activating other areas of the brain and how that activation was spreading, said study senior author Mayank R. Mehta, a professor of neurophysics in UCLA’s departments of neurology, neurobiology, physics and astronomy.
In particular, Mehta and his team looked at three connected brain regions in mice – the new brain or the neocortex, the old brain or the hippocampus, and the entorhinal cortex, an intermediate brain that connects the new and the old brains. While previous studies have suggested that the dialogue between the old and the new brain during sleep was critical for memory formation, researchers had not investigated the contribution of the entorhinal cortex to this conversation, which turned out to be a game changer, Mehta said. His team found that the entorhinal cortex showed what is called persistent activity, which is thought to mediate working memory during waking life, for example when people pay close attention to remember things temporarily, such as recalling a phone number or following directions.
(read more)

Language Learning Makes the Brain Grow, Swedish Study Suggests 
At the Swedish Armed Forces Interpreter Academy in the city of Uppsala, young people with a flair for languages go from having no knowledge of a language such as Arabic, Russian or Dari to speaking it fluently in the space of 13 months. From morning to evening, weekdays and weekends, the recruits study at a pace unlike that of any other language course.
Researchers used the interpreter recruits to study what such intensive language learning does to the brain. As a control group, they used medicine and cognitive science students at Umeå University — students who also study hard, but not languages. Both groups were given MRI scans before and after a three-month period of intensive study. While the brain structure of the control group remained unchanged, specific parts of the brain of the language students grew. The parts that developed in size were the hippocampus, a deep-lying brain structure that is involved in learning new material and spatial navigation, and three areas in the cerebral cortex.
“We were surprised that different parts of the brain developed to different degrees depending on how well the students performed and how much effort they had had to put in to keep up with the course,” says Johan Mårtensson, a researcher in psychology at Lund University, Sweden.
Students with greater growth in the hippocampus and areas of the cerebral cortex related to language learning (superior temporal gyrus) had better language skills than the other students. In students who had to put more effort into their learning, greater growth was seen in an area of the motor region of the cerebral cortex (middle frontal gyrus). The areas of the brain in which the changes took place are thus linked to how easy one finds it to learn a language, and their development varies according to performance.
Previous research from other groups has indicated that Alzheimer’s disease has a later onset in bilingual or multilingual groups.
“Even if we cannot compare three months of intensive language study with a lifetime of being bilingual, there is a lot to suggest that learning languages is a good way to keep the brain in shape,” says Johan Mårtensson.

What Number Is Halfway Between 1 and 9? Is It 5 — Or 3?
Ask adults from the industrialized world what number is halfway between 1 and 9, and most will say 5. But pose the same question to small children, or people living in some traditional societies, and they’re likely to answer 3.

Cognitive scientists theorize that that’s because it’s actually more natural for humans to think logarithmically than linearly: 3⁰ is 1, and 3² is 9, so logarithmically, the number halfway between them is 3¹, or 3. Neural circuits seem to bear out that theory. For instance, psychological experiments suggest that multiplying the intensity of some sensory stimuli causes a linear increase in perceived intensity.
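Put another way, the linear midpoint is the arithmetic mean, (1 + 9) / 2 = 5, while the logarithmic midpoint is the geometric mean, √(1 × 9) = 3. A two-line check:

```python
import math

a, b = 1, 9
linear_midpoint = (a + b) / 2            # arithmetic mean -> 5.0
log_midpoint = math.sqrt(a * b)          # geometric mean  -> 3.0
# Same thing done the "logarithmic" way: average in log space, then exponentiate back.
log_midpoint_alt = math.exp((math.log(a) + math.log(b)) / 2)
print(linear_midpoint, log_midpoint, round(log_midpoint_alt, 6))   # 5.0 3.0 3.0
```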
(click here to read the rest of the article)
This is a freaking awesome study, by the way.

Universal Map of Vision in the Human Brain
Nearly 100 years after a British neurologist first mapped the blind spots caused by missile wounds to the brains of soldiers, Perelman School of Medicine researchers at the University of Pennsylvania have perfected his map using modern-day technology. Their results create a map of vision in the brain based upon an individual’s brain structure, even for people who cannot see. The result can, among other things, guide efforts to restore vision using a neural prosthesis that stimulates the surface of the brain.
The study appears in the latest issue of Current Biology, a Cell Press journal.
Scientists frequently use a brain imaging technique called functional MRI (fMRI) to measure the seemingly unique activation map of vision on an individual’s brain. This fMRI test requires staring at a flashing screen for many minutes while brain activity is measured, which is an impossibility for people blinded by eye disease. The Penn team has solved this problem by finding a common mathematical description across people of the relationship between visual function and brain anatomy.
"By measuring brain anatomy and applying an algorithm, we can now accurately predict how the visual world for an individual should be arranged on the surface of the brain," said senior author Geoffrey Aguirre, MD, PhD, assistant professor of Neurology. "We are already using this advance to study how vision loss changes the organization of the brain."
The researchers combined traditional fMRI measures of brain activity from 25 people with normal vision. They then identified a precise statistical relationship between the structure of the folds of the brain and the representation of the visual world.
"At first, it seems like the visual area of the brain has a different shape and size in every person," said co-lead author Noah Benson, PhD, post-doctoral researcher in Psychology and Neurology. "Building upon prior studies of regularities in brain anatomy, we found that these individual differences go away when examined with our mathematical template."
A World War I neurologist, Gordon Holmes, is generally credited with creating the first schematic of this relationship. “He produced a remarkably accurate map in 1918 with only the crudest of techniques,” said co-lead author Omar Butt, MD/PhD candidate in the Perelman School of Medicine at Penn. “We have now locked down the details, but it’s taken 100 years and a lot of technology to get it right.”
The research was funded by grants from the Pennsylvania State CURE fund and the National Institutes of Health (P30 EY001583, P30 NS045839-08, R01 EY020516-01A1).

Homolog of Mammalian Neocortex Found in Bird Brain
A seemingly unique part of the human and mammalian brain is the neocortex, a layered structure on the outer surface of the organ where most higher-order processing is thought to occur. But new research at the University of Chicago has found cells similar to those of the mammalian neocortex in the brains of birds, sitting in a vastly different anatomical structure.

The work, published in Proceedings of the National Academy of Sciences, confirms a 50-year-old hypothesis about the identity of a mysterious structure in the bird brain that has provoked decades of scientific debate. The research also sheds new light on the evolution of the brain and opens up new animal models for studying the neocortex.
"If you want to study motor neurons or dopamine cells, which are biomedically important, you can study them in mammals, in chick embryos, in zebrafish. But for these neurons of the cerebral cortex, we could only do that in mammals before," said Clifton Ragsdale, PhD, associate professor of neurobiology at University of Chicago Biological Sciences and senior author of the study. "Now, we can take advantage of these other experimental systems to ask how they are specified, can they regenerate, and other questions."
Both the mammalian neocortex and a structure in the bird brain called the dorsal ventricular ridge (DVR) originate from an embryonic region called the telencephalon. But the two regions mature into very different shapes, with the neocortex made up of six distinct cortical layers while the DVR contains large clusters of neurons called nuclei.
Because of this divergent anatomy, many scientists proposed that the bird DVR does not correspond to the mammalian cortex, but is analogous to another mammalian brain structure called the amygdala.
"All mammals have a neocortex, and it’s virtually identical across all of them," said Jennifer Dugas-Ford, PhD, postdoctoral researcher at the University of Chicago and first author on the paper. "But when you go to the next closest group, the birds and reptiles, they don’t have anything that looks remotely similar to neocortex."
But in the 1960s, neuroscientist Harvey Karten studied the neural inputs and outputs of the DVR, finding that they were remarkably similar to the pathways traveling to and from the neocortex in mammals. As a result, he proposed that the DVR performs a similar function to the neocortex despite its dramatically different anatomy.
Dugas-Ford, Ragsdale and co-author Joanna Rowell decided to test Karten’s hypothesis by using recently discovered sets of molecular markers that can identify specific layers of mammalian cortex: the layer 4 “input” neurons or layer 5 “output” neurons. The researchers then looked for whether these marker genes were expressed in the DVR nuclei.
In two different bird species — chicken and zebra finch — the layer 4 and layer 5 markers were expressed by distinct nuclei of the DVR, supporting Karten’s hypothesis that the structure contains cells homologous to those of mammalian neocortex.
"Here was a completely different line of evidence," Ragsdale said. "There were molecular markers that picked out specific layers of cortex; whereas the original Karten theory was based just on connections, and some people dismissed that. But in two very distant birds, all of the gene expression fits together very nicely with the connections."
Dugas-Ford called the evidence “really incredible.”
"All of our markers were exactly where they thought they would be in the DVR when you’re comparing them to the neocortex," she said.
A similar experiment was conducted in a species of turtle, and revealed yet another anatomical possibility for these neocortex-like cells. Instead of a six-layer neocortex or a cluster of nuclei, the turtle brain had layer 4- and 5-like cells distributed along a single layer of the species’ dorsal cortex.
"I think that’s the interesting part, that you can have all these different morphologies built with the same cell types, just in different conformations," Rowell said. "It’s a neocortex or a big clump of nuclei, and then in reptiles they have an unusual dorsal cortex unlike either of those."
Future experiments will test the developmental steps that shape these neurons into various structures, and the relative pros and cons of these anatomical differences. The complex language and tool-use of some bird species suggest that the nuclear organization of this pathway is also capable of supporting advanced functions — and may even offer advantages over the mammalian brain.
"If you wanted to have a special nuclear processing center in Broca’s area to carry out language processing, you can’t do that in a mammal," Ragsdale said. "But in a bird they have these special nuclei that are involved in vocalization. It’s as if you have additional flexibility: You can have shorter circuits, longer circuits, you can have specialized processing centers."
Beyond the structural differences, the discovery of homologous neocortex cell types will allow scientists to study cortical neurons in bird species such as the chicken, a common model used for examining embryonic development. Such research could help scientists more easily study the neurons lost in paralysis, deafness, blindness, and other neurological conditions.

How Memory Load Leaves Us ‘Blind’ to New Visual Information
Trying to keep an image we’ve just seen in memory can leave us blind to things we are ‘looking’ at, according to the results of a new study supported by the Wellcome Trust.
It’s been known for some time that when our brains are focused on a task, we can fail to see other things that are in plain sight. This phenomenon, known as ‘inattentional blindness’, is exemplified by the famous ‘invisible gorilla’ experiment in which people watching a video of players passing around a basketball and counting the number of passes fail to observe a man in a gorilla suit walking across the centre of the screen.
The new results reveal that our visual field does not need to be cluttered with other objects to cause this ‘blindness’ and that focusing on remembering something we have just seen is enough to make us unaware of things that happen around us.
Professor Nilli Lavie from UCL Institute of Cognitive Neuroscience, who led the study, explains: “An example of where this is relevant in the real world is when people are following directions on a sat nav while driving.
"Our research would suggest that focusing on remembering the directions we’ve just seen on the screen means that we’re more likely to fail to observe other hazards around us on the road, for example an approaching motorbike or a pedestrian on a crossing, even though we may be ‘looking’ at where we’re going."
Participants in the study were given a visual memory task to complete while the researchers looked at the activity in their brains using functional magnetic resonance imaging. The findings revealed that while the participants were occupied with remembering an image they had just been shown, they failed to notice a flash of light that they were asked to detect, even though there was nothing else in their visual field at the time.
The participants could easily detect the flash of light when their mind was not loaded, suggesting a ‘load-induced blindness’. At the same time, the team observed that there was reduced activity in the area of the brain that processes incoming visual information — the primary visual cortex.
Professor Lavie adds: “The ‘blindness’ seems to be caused by a breakdown in visual messages getting to the brain at the earliest stage in the pathway of information flow, which means that while the eyes ‘see’ the object, the brain does not.”
The idea that there is competition in the brain for limited information processing power is known as load theory and was first proposed by Professor Lavie more than a decade ago. The theory explains why the brain fails to detect even conspicuous events in the visual field, like the man in a gorilla suit, when attention is focused on a task that involves a high level of information load.
The research reveals a pathway of competition in the brain between new visual information and our short-term visual memory that was not appreciated before. In other words, the act of remembering something we’ve seen that isn’t currently in our field of vision means that we don’t see what we’re looking at.

‘Psychopaths’ Have An Impaired Sense of Smell

Psychopathy is a broad term that covers a severe personality disorder characterized by callousness, manipulation, sensation-seeking and antisocial behaviors, traits which may also be found in otherwise healthy and functional people. Studies have shown that people with psychopathic traits have impaired functioning in the front part of the brain – the area largely responsible for functions such as planning, impulse control and acting in accordance with social norms. In addition, a dysfunction in these areas in the front part of the brain is linked to an impaired sense of smell.

Mahmut and Stevenson looked at whether a poor sense of smell was linked to higher levels of psychopathic tendencies among 79 non-criminal adults living in the community. First they assessed the participants’ olfactory ability as well as the sensitivity of their olfactory system. They also measured subjects’ levels of psychopathy, looking at four measures: manipulation; callousness; erratic lifestyles; and criminal tendencies. Finally, they noted how much or how little the participants empathized with other people’s feelings.

(Neuroscience News)


I heard about this study on nature-neuroscience. These are scientists with some serious intestinal fortitude. It takes a lot of guts to apply for research grants, submit a manuscript and publish in the face of outside claims that your research is fanciful and based on bunk science. Now that the behavioral data exist, they should run an fMRI follow-up to implicate the associational regions that underlie this phenomenon. Is it simple Pavlovian conditioning, or something else?

scinerds

LEARNING IN YOUR SLEEP

Sleeping and learning go hand in hand, studies have shown for years. Even a brief nap can boost your memory and sharpen your thinking. But the relationship goes deeper than that. In a new study, scientists report that the brain can actually learn something new during sleep.

Scientists used to believe that a sleeping brain was taking a break. But it turns out it can be taught a thing or two, scientists reported in a study published in August.

“The brain is not passive while you sleep,” neuroscientist Anat Arzi told Science News. “It’s quite active. You can do quite a lot of things while you are asleep.” Arzi researches olfaction, or the sense of smell, at the Weizmann Institute of Science in Rehovot, Israel. She worked on the new study.

Arzi and her coworkers didn’t try to teach the sleeping volunteers any complex information, like new words or facts. (So sleeping on top of your study notes won’t boost your grades.) Instead, the scientists taught snoozing volunteers to make new connections between smells and sounds.

When we smell something nice, like a flower, we automatically take deep breaths. When we smell something bad, like the stench of a dumpster, we automatically take short breaths. These natural reactions maximize our exposure to good smells and minimize our exposure to bad ones. Arzi and her coworkers based their experiment on these reactions and the knowledge that our senses don’t turn off while we slumber.

Once the volunteers fell asleep in the lab, the scientists went to work. They gave the volunteers a whiff of something pleasant, like shampoo, and at the same time played a particular musical note. The volunteers didn’t wake up, but they did hear — and sniff deeply. Then the scientists gave the volunteers a whiff of something repulsive, like rotten fish, and played a different musical note. Again, the volunteers heard and smelled — a short snort this time — but didn’t wake up. The researchers repeated the experiment while the volunteers slept.
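
Schematically, the conditioning procedure just described might be rendered as in the sketch below. This is an assumed, simplified trial structure rather than the researchers' own protocol or code; the tone, odor and respiration hardware is stubbed out, and the names (TONE_TO_ODOR, conditioning_phase, test_phase) are hypothetical.

```python
# Hypothetical sketch of tone-odor conditioning during sleep.
# Not the researchers' code: hardware calls are replaced by prints.

TONE_TO_ODOR = {"tone_A": "pleasant odor (shampoo)",
                "tone_B": "unpleasant odor (rotten fish)"}

def present(stimulus):
    print(f"present {stimulus}")            # stands in for speaker / olfactometer

def record_sniff():
    print("record sniff volume")            # stands in for the respiration sensor

def conditioning_phase(repetitions=4):
    """Pair each tone with its odor while the participant sleeps."""
    for _ in range(repetitions):
        for tone, odor in TONE_TO_ODOR.items():
            present(tone)
            present(odor)
            record_sniff()                  # deep sniff expected for the pleasant odor

def test_phase():
    """Play the tones alone; differing sniff volumes would indicate learning."""
    for tone in TONE_TO_ODOR:
        present(tone)
        record_sniff()                      # no odor is delivered this time

conditioning_phase()
test_phase()
```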

After just four repetitions, volunteers made a connection between the musical notes and their paired smells. When the scientists played the musical tone that went with good smells, the sleepers inhaled deeply — even though there was no good smell to sniff. And when the scientists played the musical tone that went with foul odors, the sleepers inhaled briefly — despite there being no bad smell.

“They learned what the tone signified,” Arzi concluded.

The next day, the volunteers woke up with the sound-smell connection intact. They inhaled deeply when hearing one tone and cut their breaths short when hearing the other. Which must have been odd for them: Imagine walking down the street and taking a deep breath upon hearing a particular sound!

Caltech study shows that the distance at which facial photos are taken influences perception.

“It turns out that faces photographed quite close-up are geometrically warped, compared to photos taken at a larger distance,” explains Bryan. “Of course, the close picture would also normally be larger, higher resolution and have different lighting—but we controlled for all of that in our study. What you’re left with is a warping effect that is so subtle that nobody in our study actually noticed it. Nonetheless, it’s a perceptual clue that influenced their judgments.”

That subtle distance warping, however, had a big effect: close-up photos made people look less trustworthy, according to study participants. The close-up photo subjects were also judged to look less attractive and competent.

(Neuroscience News)
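
To see why camera distance warps facial geometry at all, a simple pinhole-camera calculation is enough: projected size scales inversely with depth, so features closer to the lens (the nose) are magnified relative to features farther back (the ears), and the effect fades as the camera moves away. The sketch below uses an assumed, illustrative depth difference of about 12 cm between nose tip and ears; it is not the study's method or data.

```python
# Illustrative pinhole-camera calculation (assumed geometry, not from the study).
# In a pinhole camera, projected size scales as 1/depth, so a feature closer to
# the lens is enlarged relative to a feature farther away.

def nose_magnification(camera_to_nose_m, nose_to_ear_depth_m=0.12):
    """Factor by which the nose appears enlarged relative to the ears,
    compared with a view from very far away (where the factor approaches 1)."""
    nose_depth = camera_to_nose_m
    ear_depth = camera_to_nose_m + nose_to_ear_depth_m
    return ear_depth / nose_depth            # > 1 means the nose looks exaggerated

for distance in (0.45, 1.0, 3.0):            # close-up, portrait, distant (metres)
    factor = nose_magnification(distance)
    print(f"{distance:4.2f} m: nose appears {factor:.2f}x its far-away relative size")
```

Under these assumed numbers, a shot at arm's length renders the nose roughly a quarter larger relative to the ears than a shot from a few metres away, where the distortion nearly vanishes. That is the kind of subtle, distance-dependent warping the quote above refers to.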