It really gets me how your brain completes what you see. The first picture shows what your brain tells you you see. But in reality, your retina is covered by blood vessels, and you look through them all the time. There's also a blind spot in each eye that your brain erases and fills in for you by averaging the light conditions around it. And your cornea and lens flip the picture, so the image on your retina is both horizontally and vertically inverted.
So the second picture shows what is really projected on your retina.
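The brain's "filling-in" of the blind spot can be caricatured as simple interpolation: replace the missing pixels with the average of their visible neighbors. Here is a minimal sketch of that idea; the function name, neighborhood size, and toy image are all illustrative, not a model of actual neural processing:

```python
import numpy as np

def fill_blind_spot(image, mask):
    """Fill masked pixels with the mean of the unmasked pixels in a
    small neighborhood -- a crude stand-in for perceptual filling-in."""
    filled = image.astype(float).copy()
    ys, xs = np.where(mask)
    for y, x in zip(ys, xs):
        # 5x5 neighborhood, clipped to the image bounds
        y0, y1 = max(y - 2, 0), min(y + 3, image.shape[0])
        x0, x1 = max(x - 2, 0), min(x + 3, image.shape[1])
        patch = filled[y0:y1, x0:x1]
        local_mask = mask[y0:y1, x0:x1]
        known = patch[~local_mask]          # only visible pixels
        if known.size:
            filled[y, x] = known.mean()
    return filled

# A uniform gray field with a small "blind spot" of missing (zero) pixels
img = np.full((10, 10), 128.0)
hole = np.zeros((10, 10), dtype=bool)
hole[4:6, 4:6] = True
img[hole] = 0.0
out = fill_blind_spot(img, hole)
print(out[4, 4])  # -> 128.0, the hole is "erased" by its surround
```

On a uniform surround the hole vanishes completely, which is exactly why you never notice your own blind spot against an even background.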
The two eyes differ in their optical properties, which produces a different blur in each retina; nevertheless, we see sharp images because the visual system calibrates itself. An international study led by the Consejo Superior de Investigaciones Científicas (CSIC) has discovered that when each eye has a different level of blur, our brain uses the image projected through the less aberrated eye as its reference for sharpness. The research has been published in Current Biology.
“Our impression of what is sharp is determined by the sharper of the two images projected through the eyes,” explains CSIC researcher Susana Marcos of the Instituto de Óptica Daza de Valdés. The research reveals that, despite these differences in blur, each eye's judgment of what looks sharp is the same, regardless of which eye is used in the test, and it coincides with the blur of the image projected through the less aberrated eye.
The nature of these visual calibrations is important for understanding the consequences of refractive errors that differ between the two eyes. “For instance, one available solution for correcting presbyopia is monovision, in which a different refractive correction is provided for each eye: the dominant eye is corrected for distance viewing and the other for near viewing. Understanding visual calibration under different levels of blur is essential for understanding patients' visual processing; the main objective is to provide the best possible correction,” the researchers conclude.
With 576-megapixel resolution, our eyes are incredible cameras, capturing roughly 72 times more detail than the iPhone 6. To do this, our retinas are packed with many different cell types that help transmit light information to the brain. We know very little, however, about how these cells interconnect, so researchers have turned to mapping and tracing how one cell connects with another…and you can help. A team of scientists at MIT has developed an online game called EyeWire that allows anyone to figure out how cells connect in the retina, with real scientific implications. This image was generated from players correctly tracing connections from one cell to the next, generating a complete connectivity map for these seven cells.
Image by Amy Robinson, Alex Norton, Sebastian Seung, William Silversmith, Jinseop Kim, Kisuk Lee, Aleks Zlasteski, Matt Green, Matthew Balkam, Rachel Prentki, Marissa Sorek, Celia David, Devon Jones, and Doug Bland.
Any science textbook will tell you we can’t see infrared light. Like X-rays and radio waves, infrared light waves are outside the visual spectrum. But an international team of researchers co-led by scientists at Washington University School of Medicine in St. Louis has found that under certain conditions, the retina can sense infrared light after all.
Using cells from the retinas of mice and people, and powerful lasers that emit pulses of infrared light, the researchers found that when laser light pulses rapidly, light-sensing cells in the retina sometimes get a double hit of infrared energy. When that happens, the eye is able to detect light that falls outside the visible spectrum.
“We’re using what we learned in these experiments to try to develop a new tool that would allow physicians to not only examine the eye but also to stimulate specific parts of the retina to determine whether it’s functioning properly,” said senior investigator Vladimir J. Kefalov, PhD, associate professor of ophthalmology and visual sciences at Washington University. “We hope that ultimately this discovery will have some very practical applications.”
The eye can detect light at wavelengths in the visual spectrum. Other wavelengths, such as infrared and ultraviolet, are supposed to be invisible to the human eye, but Washington University scientists have found that under certain conditions, it’s possible for us to see otherwise invisible infrared light. Credit: Sara Dickherber
If your eyes deceive you, blame your brain. Many optical illusions work because what we see clashes with what we expect to see.
That 3D movie? Give credit to filmmakers who exploit binocular
vision, or the way the brain merges the slightly different images from
the two eyes to create depth.
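The depth cue in binocular vision comes down to disparity: the closer an object, the larger the horizontal shift between its left-eye and right-eye images. A minimal sketch of the standard pinhole-stereo relation, depth = f·B/d; the focal length and baseline numbers are assumed for illustration only:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic pinhole stereo relation: depth = f * B / d.
    focal_px: focal length in pixels; baseline_m: separation between
    the two viewpoints; disparity_px: horizontal shift of a feature
    between the left and right images."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Human-like numbers (illustrative): ~6.5 cm interpupillary distance,
# a nominal 1000-px focal length.
print(depth_from_disparity(1000, 0.065, 10))   # -> 6.5 (meters)
print(depth_from_disparity(1000, 0.065, 100))  # -> 0.65 (meters)
```

Note the inverse relationship: a tenfold larger disparity means a tenfold nearer object, which is why stereo depth is most precise at close range.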
These are examples of the brain making sense of the information
coming from the eyes in order to produce what we “see.” The brain
combines signals that reach your retina with the models your brain has
learned to predict what to expect when you move through the world. Your
brain solves problems by inferring what is the most likely cause of any
given image on your retina, based on knowledge or experience.
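That kind of inference over likely causes can be sketched with Bayes' rule applied to a discrete set of hypotheses. The hypotheses and probabilities below are invented for illustration, not taken from the research described here:

```python
def posterior(prior, likelihood):
    """Bayes' rule over discrete hypotheses:
    P(cause | image) is proportional to P(image | cause) * P(cause)."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnorm.values())                 # normalizing constant
    return {h: p / z for h, p in unnorm.items()}

# Two candidate causes of an ambiguous retinal image.
prior = {"shadow": 0.9, "object": 0.1}       # learned expectations
likelihood = {"shadow": 0.3, "object": 0.8}  # how well each fits the image
post = posterior(prior, likelihood)
print(post)  # the strong prior keeps "shadow" the most likely cause
```

Even though "object" explains the image better, the learned prior dominates, which is one way to think about why optical illusions persist: the brain's expectations can outvote the raw evidence.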
(Image caption: Experiments
tested detection of changes in direction of motion (left-hand pathway)
or depth (right-hand pathway, in blue) after neurons in V2/V3 were
inactivated. Credit: Born lab)
Scientists have explored the complex puzzle of visual perception with
increasing precision, discovering that individual neurons are tuned to
detect very specific motions: up, but not down; right, but not left; and so on for every direction. These same neurons, which live in the brain’s middle
temporal visual area, are also sensitive to relative depth.
Now a Harvard Medical School team led by Richard Born
has uncovered key principles about the way those neurons work,
explaining how the brain uses sensory information to guide the decisions
that underlie behaviors. Their findings, reported in Neuron, illuminate the nature and origin of the neural signals used to solve perceptual tasks.
Based on their previous work, the researchers knew that they could
selectively interfere with signals concerning depth, while leaving the
signals for direction of motion intact. They wanted to learn what
happened next, after the visual information was received and used to
make a judgment about the visual stimulus.
Was the next step based on “bottom-up” information coming from the
retina as sensory evidence? Or, as in optical illusions, did top-down
information originating in the brain’s decision centers influence what
happened in response to a visual stimulus?
“We were able to show that there’s a direct bottom-up contribution to
these signals,” said Born, HMS professor of neurobiology and senior
author of the paper. “It’s told us some very interesting things about
how the brain makes calculations and combines information from different
sources, and how that information influences behaviors.”
In their experiments with nonhuman primates, the researchers cooled
specific neurons to temporarily block their signals, in the same way
that ice makes a sprained ankle feel better because it prevents pain
neurons from firing.
The team selectively blocked pathways that provide information about
visual depth—how far something is from the viewer—but not the direction
of motion. The animals were trained to watch flickering dots on a
screen, something like “snow” on an old television, and detect when the
dots suddenly lined up and moved in one direction or changed in depth.
If the animal detected motion or a change in depth, making an eye movement to look at the changed stimulus would result in delivery of a reward.
When the neurons were inactivated, the animals were less likely to
detect depth, but their ability to detect motion was not affected. This
told the scientists that feed-forward information, not feedback, was
being used by the animal to make its decision. Their findings help
explain how relative motion and depth work together.
“Combining two pathways that compute two different things in the same
neurons is essential for vision, we think,” Born said. “But for these
two particular calculations, first you have to compute them separately
before you can put them together.”
Born believes there are other implications of their work.
“We think that the same operations that are happening in the visual
system are happening at higher levels of the brain, so that by
understanding these circuits that are easier to study we think we will
gain traction on those higher level questions,” Born said.
Glaucoma, the second leading cause of blindness, usually stems from elevated eye pressure, which in turn damages and destroys specialized neurons in the eye known as retinal ganglion cells. To better understand these cellular changes and how they influence the progression and severity of glaucoma, researchers at University of California, San Diego School of Medicine and Shiley Eye Institute turned to a mouse model of the disease. Their study, published Feb. 10 in The Journal of Neuroscience, reveals how some types of retinal ganglion cells alter their structures within seven days of elevated eye pressure, while others do not.
“Understanding the timing and pattern of cellular changes leading to retinal ganglion cell death in glaucoma should facilitate the development of tools to detect and slow or stop those cellular changes, and ultimately preserve vision,” said Andrew D. Huberman, PhD, assistant professor of neurosciences, neurobiology and ophthalmology. Huberman co-authored the study with Rana N. El-Danaf, PhD, a postdoctoral researcher in his lab.
Retinal ganglion cells are specialized neurons that send visual information from the eye’s retina to the brain. Increased pressure within the eye can contribute to retinal ganglion cell damage, leading to glaucoma. Even with pressure-lowering drugs, these cells eventually die, leading to vision loss.
In this study, Huberman and El-Danaf used a mouse model engineered to express a green fluorescent protein in specific retinal ganglion cell subtypes. This tool allowed them to examine four subtypes of retinal ganglion cells, which differ in the region of the retina to which they send the majority of their dendrites (cellular branches). Within seven days of elevated eye pressure, all retinal ganglion cells that send most or all of their dendrites to a region known as the OFF sublamina underwent significant rearrangements, such as reductions in the number and length of dendritic branches. Retinal ganglion cells with connections in the ON part of the retina did not.
“We are very excited about this discovery,” Huberman said. “One of the major challenges to the detection and treatment of glaucoma is that you have to lose a lot of cells, or eye pressure has to go way up, before you know you have the disease. These results tell us we should design visual field tests that specifically probe the function of certain retinal cells. In collaboration with the other research members of the Glaucoma Research Foundation Catalyst for a Cure, we are doing just that, and we are confident these results will positively impact human patients in the near future.”
Pictured: Example of retinal ganglion cells with dendrites in the retina of a healthy eye.
Here a mouse retina is seen en face with these “J” retinal ganglion cells marked by the expression of one fluorescent protein. The millions of other entangled neurons are not marked and thus are invisible in this image. Image obtained with a confocal scanning microscope and pseudocolored.
Driving a car at 40 mph, you see a child dart into the street. You hit the brakes. Disaster averted.
But how did your eyes detect that movement? It’s a question that has confounded scientists.
Now, studying mice, researchers at Washington University School of Medicine
in St. Louis have an answer: A neural circuit in the retina at the back
of the eye carries signals that enable the eye to detect movement. The
finding could help in efforts to build artificial retinas for people who
have suffered vision loss.
The research is published June 16 in the online journal eLife.
The research team identified specific cell types that form a neural
circuit to carry signals from the eye’s photoreceptors — the rods and
cones that sense light — to the brain’s visual cortex, where those
signals are translated into an image.
“This ability to detect motion is key for animals, allowing them to
detect the presence of predators,” said principal investigator Daniel
Kerschensteiner, MD, an assistant professor of ophthalmology and visual
sciences. “And we know that these same cells are found not only in mice
but in rabbits, cats, primates and likely humans, too. The cells look
similar in every species, and we would assume they function in a similar
manner as well.”
Studying the neural circuit, Tahnbee Kim, a graduate student in
Kerschensteiner’s lab, identified a specific type of cell called an
amacrine cell that’s key to detecting motion. Amacrine cells are thought
to inhibit, or tamp down, the activity of other cells called ganglion
cells. This process ensures that the brain doesn’t receive too much
visual information, which could distort an image.
Using a technique that combines a powerful microscope with a method
that allows researchers to track how often retinal cells fire, the
researchers also showed that when there is motion in the visual field, a
specific subtype of amacrine cell excites ganglion cells, signaling the
brain so it becomes aware that an object is moving.
The discovery that this type of cell transmits object-motion signals
is an important step in understanding how the eye senses motion. It also
provides a high level of detail that will be needed to design
computerized, artificial retinas, which will need to detect motion as
well as sense light.
“There are many elements in the retinal circuitry that we haven’t
figured out yet,” said Kerschensteiner, also an assistant professor of
anatomy and neurobiology. “We know the signals from the rods and cones
are transmitted to the retina — where the amacrine and ganglion cells
are located — and that’s really where the ‘magic’ happens that allows us
to see what we see. Unfortunately, we still have a very limited
understanding of what most of the cells in the inner retina actually do.”
The aging process affects everything from cardiovascular function to memory to sexuality. Most worrisome for many, however, is the potential loss of eyesight due to retinal degeneration.
New progress toward a prosthetic retina could help alleviate conditions that result from problems with this vital part of the eye. An encouraging new study published in Nano Letters describes a novel device, tested on animal-derived retinal models, that has the potential to treat a number of eye diseases. The proof-of-concept artificial retina was developed by an international team led by Prof. Yael Hanein of Tel Aviv University’s School of Electrical Engineering, head of TAU’s Center for Nanoscience and Nanotechnology, and includes researchers from TAU, the Hebrew University of Jerusalem, and Newcastle University.
Lilach Bareket, Nir Waiskopf, David Rand, Gur Lubin, Moshe David-Pur, Jacob Ben-Dov, Soumyendu Roy, Cyril Eleftheriou, Evelyne Sernagor, Ori Cheshnovsky, Uri Banin, Yael Hanein. Semiconductor Nanorod–Carbon Nanotube Biomimetic Films for Wire-Free Photostimulation of Blind Retinas. Nano Letters, 2014; 14 (11): 6685 DOI: 10.1021/nl5034304
We report the development of a semiconductor nanorod-carbon nanotube based platform for wire-free, light induced retina stimulation. A plasma polymerized acrylic acid midlayer was used to achieve covalent conjugation of semiconductor nanorods directly onto neuro-adhesive, three-dimensional carbon nanotube surfaces. Photocurrent, photovoltage, and fluorescence lifetime measurements validate efficient charge transfer between the nanorods and the carbon nanotube films. Successful stimulation of a light-insensitive chick retina suggests the potential use of this novel platform in future artificial retina applications.