Posts tagged "visual cognition"
Universal Map of Vision in the Human Brain
Nearly 100 years after a British neurologist first mapped the blind spots caused by missile wounds to the brains of soldiers, Perelman School of Medicine researchers at the University of Pennsylvania have perfected his map using modern-day technology. The result is a map of vision in the brain based upon an individual’s brain structure, one that can be derived even for people who cannot see. Among other things, it can guide efforts to restore vision using a neural prosthesis that stimulates the surface of the brain.
The study appears in the latest issue of Current Biology, a Cell Press journal.
Scientists frequently use a brain imaging technique called functional MRI (fMRI) to measure the seemingly unique activation map of vision on an individual’s brain. This fMRI test requires staring at a flashing screen for many minutes while brain activity is measured, which is an impossibility for people blinded by eye disease. The Penn team has solved this problem by finding a mathematical description of the relationship between visual function and brain anatomy that holds across people.
"By measuring brain anatomy and applying an algorithm, we can now accurately predict how the visual world for an individual should be arranged on the surface of the brain," said senior author Geoffrey Aguirre, MD, PhD, assistant professor of Neurology. "We are already using this advance to study how vision loss changes the organization of the brain."
The researchers combined traditional fMRI measures of brain activity from 25 people with normal vision. They then identified a precise statistical relationship between the structure of the folds of the brain and the representation of the visual world.
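The paper’s actual template is derived by registering functional maps to each subject’s cortical anatomy, and its equations are not reproduced here. Purely as an illustration of the kind of lawful visual-field-to-cortex mapping such a template captures, the sketch below uses the classic complex-log approximation of the V1 retinotopic map; the constants A_DEG and K_MM are conventional textbook values, not parameters from the study.
```python
import numpy as np

# A_DEG and K_MM are conventional textbook values, not parameters from the paper.
A_DEG = 0.75   # foveal offset (degrees of visual angle)
K_MM = 15.0    # cortical magnification scale (mm)

def visual_field_to_cortex(ecc_deg, angle_deg):
    """Map a visual-field location to a position on a flattened V1 (in mm)."""
    z = ecc_deg * np.exp(1j * np.deg2rad(angle_deg))  # visual-field location as a complex number
    w = K_MM * np.log(z + A_DEG)                      # classic complex-log (Schwartz) mapping
    return w.real, w.imag

for ecc in (0.5, 2, 8, 32):
    x, y = visual_field_to_cortex(ecc, 45)
    print(f"{ecc:>4} deg eccentricity -> {x:6.1f} mm along the eccentricity axis of V1")
```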
"At first, it seems like the visual area of the brain has a different shape and size in every person," said co-lead author Noah Benson, PhD, post-doctoral researcher in Psychology and Neurology. "Building upon prior studies of regularities in brain anatomy, we found that these individual differences go away when examined with our mathematical template."
A World War I neurologist, Gordon Holmes, is generally credited with creating the first schematic of this relationship. “He produced a remarkably accurate map in 1918 with only the crudest of techniques,” said co-lead author Omar Butt, MD/PhD candidate in the Perelman School of Medicine at Penn. “We have now locked down the details, but it’s taken 100 years and a lot of technology to get it right.”
The research was funded by grants from the Pennsylvania State CURE fund and the National Institutes of Health (P30 EY001583, P30 NS045839-08, R01 EY020516-01A1).

Neural network gets an idea of number without counting

AN ARTIFICIAL brain has taught itself to estimate the number of objects in an image without actually counting them, emulating abilities displayed by some animals including lions and fish, as well as humans.
Because the model was not preprogrammed with numerical capabilities, the feat suggests that this skill emerges due to general learning processes rather than number-specific mechanisms. “It answers the question of how numerosity emerges without teaching anything about numbers in the first place,” says Marco Zorzi at the University of Padua in Italy, who led the work.
The finding may also help us to understand dyscalculia - where people find it nearly impossible to acquire basic number and arithmetic skills - and enhance robotics and computer vision.
The skill in question is known as approximate number sense. A simple test of ANS involves looking at two groups of dots on a page and intuitively knowing which has more dots, even though you have not counted them. Fish use ANS to pick the larger, and therefore safer, shoal to swim in.
To investigate ANS, Zorzi and colleague Ivilin Stoianov used a computerised neural network that responds to images and generates new “fantasy” ones based on rules that it deduces from the original images. The software models a retina-like layer of neurons that fire in response to the raw pixels, plus two deeper layers that do more sophisticated processing based on signals from layers above.
The pair fed the network 51,800 images, each containing up to 32 rectangles of varying sizes. In response to each image, the program strengthened or weakened connections between neurons so that its image generation model was refined by the pattern it had just “seen”. Zorzi likens it to “learning how to visualise what it has just experienced”.
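The description above (a retina-like layer driven by raw pixels, two deeper layers, and learning that amounts to regenerating what was just seen) is the signature of a stacked generative network trained layer by layer. The sketch below is a minimal numpy version of that idea, built from restricted Boltzmann machines trained with one-step contrastive divergence; the image size, layer sizes, learning rate, and number of training images are illustrative assumptions rather than the published model’s settings.
```python
import numpy as np

rng = np.random.default_rng(0)

def make_image(n_rect, side=28):
    """Binary image containing n_rect small rectangles of random size and position."""
    img = np.zeros((side, side))
    for _ in range(n_rect):
        h, w = rng.integers(2, 6, size=2)
        r, c = rng.integers(0, side - 6, size=2)
        img[r:r + h, c:c + w] = 1.0
    return img.ravel()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """One restricted Boltzmann machine layer, trained with 1-step contrastive divergence."""
    def __init__(self, n_vis, n_hid, lr=0.05):
        self.W = 0.01 * rng.standard_normal((n_vis, n_hid))
        self.a = np.zeros(n_vis)   # visible biases
        self.b = np.zeros(n_hid)   # hidden biases
        self.lr = lr

    def hid(self, v):
        return sigmoid(v @ self.W + self.b)

    def vis(self, h):
        return sigmoid(h @ self.W.T + self.a)

    def cd1(self, v0):
        h0 = self.hid(v0)
        h_sample = (rng.random(h0.shape) < h0).astype(float)
        v1 = self.vis(h_sample)
        h1 = self.hid(v1)
        self.W += self.lr * (np.outer(v0, h0) - np.outer(v1, h1))
        self.a += self.lr * (v0 - v1)
        self.b += self.lr * (h0 - h1)

# Train a two-layer stack on images containing 1-32 rectangles, layer by layer,
# so each layer learns to reconstruct ("visualise") what it has just seen.
rbm1 = RBM(28 * 28, 80)
rbm2 = RBM(80, 40)
dataset = [(make_image(n), n) for n in rng.integers(1, 33, size=2000)]
for img, _ in dataset:
    rbm1.cd1(img)
for img, _ in dataset:
    rbm2.cd1(rbm1.hid(img))
```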
Infants demonstrate ANS without being taught, so the network was not preprogrammed with the concept of “amount”. But when Zorzi and Stoianov looked at the network’s behaviour, they discovered a subset of neurons in the deepest layer that fired more often as the number of objects in the image decreased. This suggested that the network had learned to estimate the number of objects in each image as part of its rules for generating images. This behaviour was independent of the total surface area of the objects, emphasising that the neurons were detecting number.
What’s more, these firing patterns followed the trend shown by neurons inside the parietal cortex of monkeys. This region is involved in knowledge of numbers, suggesting that the model might reflect how real brains work.
To see if these patterns could give rise to ANS, the pair created a second program and fed it the firing patterns of the number-detecting neurons in the first program. They also fed it information on whether the number of objects associated with each firing pattern was bigger or smaller than a reference number. Trained in this way, the model could estimate whether a fresh image contained more or fewer than a given number of objects (Nature Neuroscience, DOI: 10.1038/nn.2996).
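Continuing the illustrative sketch above, the second stage can be approximated by a simple logistic read-out trained on the deepest layer’s activity, using only “more or fewer than a reference number” labels; the reference value of 8 and the training details are assumptions, not the study’s.
```python
# Continuing the sketch above: a logistic read-out trained only on
# "more / fewer than a reference number" labels, never on exact counts.
reference = 8
X = np.array([rbm2.hid(rbm1.hid(img)) for img, _ in dataset])   # deepest-layer activity
y = np.array([1.0 if n > reference else 0.0 for _, n in dataset])

w = np.zeros(X.shape[1])
b = 0.0
for _ in range(300):                         # plain gradient descent on the logistic loss
    p = sigmoid(X @ w + b)
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * np.mean(p - y)

test = rbm2.hid(rbm1.hid(make_image(20)))    # a fresh image with 20 rectangles
print("more than", reference, "objects?", sigmoid(test @ w + b) > 0.5)
```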
Brian Butterworth, who studies mathematical cognition at University College London, says the work breaks new ground. “It gives an explanation for how we estimate number when we can’t count.”
(Content provided by New Scientist)

Attention and awareness uncoupled in brain imaging experiments
These are bistable visual stimuli used for awareness studies. The left diagram shows a classic example, the Necker cube, in which the perceived depth of the surfaces switches over time. On the right, a binocular rivalry stimulus is shown. By presenting one grating to one eye and the other grating to the other eye, our percept starts to switch between the two gratings. Interestingly, as in our main stimuli, the unpatterned donut region also takes over the left grating when the right stimulus is perceived. These stimuli are ideal and widely used tools for investigating the neural correlates of visual awareness because our percept switches while the physical stimulus remains constant. Credit: MPI for Biological Cybernetics

Flashed face distortion effect: Grotesque faces from relative spaces

Abstract: We describe a novel face distortion effect resulting from the fast-paced presentation of eye-aligned faces. When cycling through the faces on a computer screen, each face seems to become a caricature of itself and some faces appear highly deformed, even grotesque. The degree of distortion is greatest for faces that deviate from the others in the set on a particular dimension (eg if a person has a large forehead, it looks particularly large). This new method of image presentation, based on alignment and speed, could provide a useful tool for investigating contrastive distortion effects and face adaptation.

From the journal Perception
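As a rough sketch of the presentation procedure the abstract describes, the snippet below cycles pairs of eye-aligned face images on either side of a central fixation point at a fixed rate; the folder name aligned_faces/, the 250 ms frame duration, and the matplotlib-based display are all illustrative assumptions rather than the authors’ stimulus code.
```python
import glob
import matplotlib.pyplot as plt
import matplotlib.animation as animation
import matplotlib.image as mpimg

# "aligned_faces/*.png" is a hypothetical folder of eye-aligned face photographs;
# the 250 ms frame duration is likewise an illustrative choice.
faces = [mpimg.imread(f) for f in sorted(glob.glob("aligned_faces/*.png"))]

fig, (ax_left, ax_right) = plt.subplots(1, 2)
for ax in (ax_left, ax_right):
    ax.axis("off")
fig.suptitle("+", fontsize=24)   # central fixation cross; the faces sit in the periphery

left = ax_left.imshow(faces[0])
right = ax_right.imshow(faces[1])

def update(i):
    # A new pair of faces replaces the previous pair on every frame.
    left.set_data(faces[i % len(faces)])
    right.set_data(faces[(i + 1) % len(faces)])
    return left, right

anim = animation.FuncAnimation(fig, update, frames=200, interval=250)
plt.show()
```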

Neuroscientists Find Famous Optical Illusion Surprisingly Potent

The yellow jacket (Rocky, the mascot of the University of Rochester) appears to be expanding. But he is not. He is staying still. We simply think he is growing because our brains have adapted to the inward motion of the background and that has become our new status quo. Similar situations arise constantly in our day-to-day lives – jump off a moving treadmill and everything around you seems to be in motion for a moment.

This age-old illusion, first documented by Aristotle, is called the Motion Aftereffect by today’s scientists. Why does it happen, though? Is it because we are consciously aware that the background is moving in one direction, causing our brains to shift their frame of reference so that we can ignore this motion? Or is it an automatic, subconscious response?

Davis Glasser, a doctoral student in the University of Rochester’s Department of Brain and Cognitive Sciences, thinks he has found the answer. The results of a study done by Glasser, along with his advisor, Professor Duje Tadin, and colleagues James Tsui and Christopher Pack of the Montreal Neurological Institute, will be published this week in the journal Proceedings of the National Academy of Sciences (PNAS).

In their paper, the scientists show that humans experience the Motion Aftereffect even if the motion that they see in the background is so brief that they can’t even tell whether it is heading to the right or the left.

Even when shown a video of a backdrop that is moving for only 1/40 of a second (25 milliseconds) – so short that the direction it is moving cannot be consciously distinguished – a subject’s brain automatically adjusts. If the subject is then shown a stationary object, it will appear to him as though it is moving in the opposite direction of the background motion. In recordings from a motion center in the brain called cortical area MT, the researchers found neurons that, following a brief exposure to motion, respond to stationary objects as if they are actually moving. It is these neurons that the researchers think are responsible for the illusory motion of stationary objects that people see during the Motion Aftereffect.
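A toy model helps make the proposed mechanism concrete: if brief exposure to motion depresses the gain of the direction-tuned neurons it drives, a later stationary stimulus evokes an imbalanced population response that reads out as motion in the opposite direction. The tuning curves, the 30% gain change, and the population-vector decoder below are illustrative assumptions, not the model used in the paper.
```python
import numpy as np

# Preferred directions of a toy population of direction-tuned "MT" neurons.
prefs = np.deg2rad(np.arange(0, 360, 15))

def responses(stim_dir, gains):
    """Von Mises tuning curves scaled by per-neuron gains; None = stationary test."""
    if stim_dir is None:
        drive = np.full_like(prefs, 0.3)            # weak, untuned drive from a static object
    else:
        drive = np.exp(2.0 * (np.cos(prefs - stim_dir) - 1.0))
    return gains * drive

def decode(r):
    """Population-vector readout: direction (deg) and overall motion signal."""
    x, y = np.sum(r * np.cos(prefs)), np.sum(r * np.sin(prefs))
    return np.rad2deg(np.arctan2(y, x)), np.hypot(x, y)

gains = np.ones_like(prefs)

# A brief rightward (0 deg) adapter depresses the neurons it drives most strongly.
adapter = responses(0.0, gains)
gains = gains * (1.0 - 0.3 * adapter / adapter.max())

# A stationary test object now produces an imbalanced population response,
# which decodes as motion away from the adapted direction: the aftereffect.
direction, strength = decode(responses(None, gains))
print(f"decoded direction for the stationary test: {direction:.1f} deg")
```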

This discovery reveals that the Motion Aftereffect illusion is not just a compelling visual oddity: It is caused by neural processes that happen essentially every time we see moving objects. The next phase of the group’s study will attempt to find out whether this rapid motion adaptation serves a beneficial purpose – in other words, does this rapid adaptation actually improve your ability to estimate the speed and direction of relevant moving objects, such as a baseball flying toward you?

(University of Rochester)

Source of Key Brain Function Located: How to Comprehend a Scene in Less Than a Second
Scientists at the University of Southern California have pinned down the region of the brain responsible for a key survival trait: our ability to comprehend a scene — even one never previously encountered — in a fraction of a second.
The key is to process the interacting objects that comprise a scene more quickly than unrelated objects, according to corresponding author Irving Biederman, professor of psychology and computer science in the USC Dornsife College and the Harold W. Dornsife Chair in Neuroscience.
The study appears in the June 1 issue of The Journal of Neuroscience.
The brain’s ability to understand a whole scene on the fly “gives us an enormous edge on an organism that would have to look at objects one by one and slowly add them up,” Biederman said. What’s more, the interaction of objects in a scene actually allows the brain to identify those objects faster than if they were not interacting.
While previous research had already established the existence of this “scene-facilitation effect,” the location of the part of the brain responsible for the effect remained a mystery. That’s what Biederman and lead author Jiye G. Kim, a doctoral student in Biederman’s lab, set out to uncover with Chi-Hung Juan of the Institute of Cognitive Neuroscience at the National Central University in Taiwan.
"The ‘where’ in the brain gives us clues as to the ‘how,’" Biederman said. This study is the latest in an ongoing effort by Biederman and Kim to unlock the complex way in which the brain processes visual experience. The goal, as Biederman puts it, is to understand "how we get mind from brain."
To find out the “where” of the scene-facilitation effect, the researchers flashed drawings of pairs of objects for just 1/20 of a second. Some of these objects were depicted as interacting, such as a hand grasping for a pen, and some were not, with the hand reaching away from the pen. The test subjects were asked to press a button if a label on the screen matched either one of the two objects, which it did on half of the presentations.
A recent study by Kim and Biederman suggested that the source of the scene-facilitation effect was the lateral occipital cortex, or LO, which is a portion of the brain’s visual processing center located between the ear and the back of the skull. However, the possibility existed that the LO was receiving help from the intraparietal sulcus, or IPS, which is a groove in the brain closer to the top of the head.
The IPS is engaged with implementing visual attention, and the fact that interacting objects may attract more attention left open the possibility that perhaps it was providing the LO with assistance.
While participants took the test, electromagnetic currents were used to alternately zap subjects’ LO or IPS, temporarily numbing each region in turn and preventing it from providing assistance with the task.
All of the participants were pre-screened to ensure they could safely receive the treatment, known as transcranial magnetic stimulation (TMS), which produces minimal discomfort.
By measuring how accurate participants were in detecting objects shown as interacting or not interacting when either the LO or IPS were zapped, researchers could see how much help that part of the brain was providing. The results were clear: zapping the LO eliminated the scene-facilitation effect. Zapping the IPS, however, did nothing.
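The comparison behind that conclusion is simply the accuracy advantage for interacting over non-interacting pairs, computed separately for each stimulation site; the numbers in the sketch below are invented solely to show the arithmetic and are not data from the study.
```python
# Hypothetical accuracies chosen only to illustrate the comparison; they are
# not the values reported in the study.
accuracy = {
    ("LO TMS",  "interacting"):     0.72,
    ("LO TMS",  "non-interacting"): 0.72,   # facilitation abolished
    ("IPS TMS", "interacting"):     0.79,
    ("IPS TMS", "non-interacting"): 0.71,   # facilitation intact
}

for site in ("LO TMS", "IPS TMS"):
    effect = accuracy[(site, "interacting")] - accuracy[(site, "non-interacting")]
    print(f"{site}: scene-facilitation effect = {effect:+.2f}")
```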
When it comes to providing a competitive edge in identifying objects that are part of an interaction, the lateral occipital cortex appears to be working alone. Or, at least, without help from the intraparietal sulcus.

Scientists trick the Brain Into Experiencing Life as Barbie-Doll Sized
Imagine shrinking to the size of a doll in your sleep. When you wake up, will you perceive yourself as tiny or the world as being populated by giants? Researchers at Karolinska Institutet in Sweden may have found the answer.
According to the textbooks, our perception of size and distance is a product of how the brain interprets different visual cues, such as the size of an object on the retina and its movement across the visual field. Some researchers have claimed that our bodies also influence our perception of the world, so that the taller you are, the shorter distances appear to be. However, there has been no way of testing this hypothesis experimentally — until now.
Henrik Ehrsson and his colleagues at Karolinska Institutet have already managed to create the illusion of body-swapping with other people or mannequins. Now they have used the same techniques to create the illusion of having a very small, doll-sized body or a very large, 13-foot-tall body.
Their results, published in the online open access journal PLoS ONE, show for the first time that the size of our bodies has a profound effect on how we perceive the space around us.
"Tiny bodies perceive the world as huge, and vice versa," says study leader Henrik Ehrsson.
The altered perception of space was assessed by having subjects estimate the size of different blocks and then walk over to the blocks with their eyes shut. The illusion of having a small body caused an overestimation of size and distance, an effect that was reversed for large bodies.
One strategy that the brain uses to judge size is comparison: if a person stands beside a tree, the brain computes the size of both. However, one’s own sensed body seems to serve as a fundamental reference that affects this and other visual mechanisms.
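A toy calculation captures the “body as a reference” idea: if judged size scales with the ratio of a familiar body size to the currently owned body, a shrunken body inflates apparent size and distance while a giant body compresses them. The reference height and the simple ratio rule below are illustrative assumptions, not the model tested in the paper.
```python
# NORMAL_BODY_M and the simple ratio rule are illustrative assumptions, not the
# model fitted in the paper.
NORMAL_BODY_M = 1.75        # the familiar, full-sized body used as the reference "ruler"

def perceived_size(actual_m, owned_body_m):
    """Judged size grows as the owned body shrinks, and vice versa."""
    return actual_m * (NORMAL_BODY_M / owned_body_m)

block = 0.30                                  # a 30 cm block
for body in (0.30, 1.75, 4.0):                # doll-sized, normal, and ~13-foot bodies
    print(f"owned body {body:4.2f} m -> the block looks about {perceived_size(block, body):.2f} m")
```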
"Even though we know just how large people are, the illusion makes us perceive other people as giants; it’s a very weird experience," says Dr Ehrsson, who also tried the experiment on himself.
The study also shows that it is perfectly possible to create an illusion of body-swapping with extremely small or large artificial bodies; an effect that Dr Ehrsson believes has considerable potential practical applications.
"It’s possible, in theory, to produce an illusion of being a microscopic robot that can carry out operations in the human body, or a giant robot repairing a nuclear power plant after an accident," he says.
(Public Library of Science) 
Gaping hole of a confound: participants’ previous experience of being average-sized. Their spatial memories will serve as the templates to which all ‘shrunken’ experiences are compared, thereby forming a relative association between being bigger and being smaller. Cool research but uninspired conclusion.
