Crash course in neuroscience
First and foremost, we’ll need to remember that our eyes start processing information right at the retina, from the very first instant that light enters them. You might remember from school biology lessons that there are two types of photoreceptor cells: the famous rods and cones. The former help us see in low-light conditions and are thus mostly responsible for night vision, whereas the latter are sensitive to different wavelengths of light, helping us see color in normal lighting conditions.
There are several theories on how exactly we see color, but generally we can say that there are three broad types of cones, each responsive to a particular range of wavelengths in the visible spectrum – hence the term trichromatic vision. We’ll just note that it’s not only the response of these cells but also the processing of that response in the brain’s visual cortex that produces the actual colors as we see and know them.
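To make the idea of trichromatic coding a bit more concrete, here’s a toy sketch in Python: three cone types approximated as Gaussian sensitivity curves. The peak wavelengths are rough textbook values and the Gaussian shape is purely illustrative – real cone sensitivities are measured curves, not formulas:

```python
import numpy as np

# Toy trichromatic model: three cone types approximated as Gaussian
# sensitivity curves (peak wavelengths are rough textbook values)
peaks = {"S": 420.0, "M": 534.0, "L": 564.0}   # nanometers
width = 50.0                                    # illustrative curve width

def cone_responses(wavelength_nm):
    """Return the (S, M, L) activation levels for a monochromatic light."""
    return {name: float(np.exp(-((wavelength_nm - peak) / width) ** 2))
            for name, peak in peaks.items()}

# A 650 nm (red) light excites L cones far more than M or S cones;
# the brain reads such triplets of activations as a color
red = cone_responses(650.0)
```

The point of the sketch is that any visible wavelength becomes just three numbers – and it is the ratio between them, interpreted downstream, that we experience as color.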
So, we see everything thanks to differently-specialized cells in our retinas, each reacting to the tiny fraction of the visual scene it’s responsible for. But, predictably or not, such specialized cells are not found only in our retinas. In a series of experiments conducted in the 1960s, David H. Hubel and Torsten Wiesel demonstrated (initially almost by accident!) that there were specific neurons in the visual cortex of a cat that activated in response to lines positioned at a certain angle – and only at that angle. There were others that reacted to movement, and others still that reacted to patterns of light and darkness. These results were revolutionary, as they showed how the brain forms representations of perceived objects from their simple elements. Naturally, this discovery earned Hubel and Wiesel the 1981 Nobel Prize in Physiology or Medicine.
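The principle Hubel and Wiesel uncovered can be sketched as a toy Python model: a “simple cell” whose receptive field is a line at one particular angle responds most strongly to a stimulus at exactly that angle. Everything here – the image size, the line width, the overlap rule – is a made-up illustration, not a model of real neurons:

```python
import numpy as np

def line_image(angle_deg, n=32):
    """A small image containing one line through the center at the given angle."""
    y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    t = np.deg2rad(angle_deg)
    # distance from each pixel to the line through the origin
    d = np.abs(x * np.sin(t) - y * np.cos(t))
    return (d < 1.5).astype(float)

# A toy "simple cell": its receptive field is a line at 45 degrees,
# and its response is simply the overlap with the stimulus
receptive_field = line_image(45)
responses = {angle: float((line_image(angle) * receptive_field).sum())
             for angle in (0, 45, 90, 135)}
```

Run it and the response at 45° towers over the others – the cell is “tuned” to one orientation, just like the cortical neurons in the cat experiments.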
Let’s sum it all up so far: both our color vision and our perception of objects actually “consist” of little elements somehow assembled together in the depths of our brains. But where does physics, or rather, optics, come in?
Changing the lens
This autumn, I was lucky enough to attend several lectures by Sergey Stafeev, a professor at ITMO’s Faculty of Physics and Engineering, as part of my course on neuroiconics – the Russian term for the field uniting neurophysiology and optics (my curriculum calls it neuroimaging, even though that is a completely different thing). The course is a collaboration between St. Petersburg State University and ITMO.
The lectures were unique from every possible perspective: not only were they held at ITMO’s Museum of Optics, home to countless curiosities, real-life illusions, holograms, and much more, but they also opened up so many new ways of studying and understanding vision.
Among the many things discussed, we approached the Fourier transform (a way of representing a function in terms of its frequencies) and learned to “decode” an image by its Fourier spectrum. During the practical part of the lecture, Prof. Stafeev placed glass plates with various patterns and shapes inside them in the path of a light beam, demonstrating how, through the diffraction of light, a triangular shape, say, is transformed into a pattern of lines at different angles.
Later, we would perform these transformations with a computer program and then edit the resulting spectra, deactivating some of their frequencies and observing the subsequent changes in the original image. When only the horizontal lines became blurred while the vertical ones stayed intact, it was hard not to notice the parallel between this effect and the cells identified by Hubel and Wiesel. It turns out our brains do their own kind of Fourier transform – how can you not wonder at that!
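Here’s a minimal Python (NumPy) sketch of the same experiment: build a test image containing horizontal and vertical lines, compute its 2D Fourier spectrum, “deactivate” part of the frequencies, and transform back. The test image and the filtering band are, of course, my own toy choices:

```python
import numpy as np

# Build a test image: horizontal and vertical line gratings
n = 64
img = np.zeros((n, n))
img[::8, :] = 1.0   # horizontal lines (the image varies along the vertical axis)
img[:, ::8] = 1.0   # vertical lines (the image varies along the horizontal axis)

# Forward 2D Fourier transform, shifted so low frequencies sit at the center
spectrum = np.fft.fftshift(np.fft.fft2(img))

# "Deactivate" the vertical frequencies: keep only a narrow horizontal band
# through the center, which is where the vertical lines' energy lives
mask = np.zeros_like(spectrum)
c = n // 2
mask[c - 1:c + 2, :] = 1.0
filtered = spectrum * mask

# Inverse transform: the horizontal lines blur away, the vertical ones survive
result = np.abs(np.fft.ifft2(np.fft.ifftshift(filtered)))
```

Zero out a vertical band instead, and it is the vertical lines that vanish – exactly the effect we produced in class.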
How are those bits and pieces organized, though? The answer – well, at least one of the possible ones – is hidden right in the first exhibition hall of the Museum. If you haven’t had the chance to visit it yet, know that its first section is devoted first and foremost to holograms, with the step-by-step process of creating one modelled in miniature with actual lasers.
In simple terms, a hologram is a record of the light waves reflected from an object. Two beams of light, usually split from a single laser, are used to make one: the reference beam, serving as, well, the reference, and the illumination (or object) beam, the one that gets reflected off the object and carries all the information about it from different angles. And this is what makes holography so unique: because the interference pattern of the two beams covers the photographic plate completely, each element of the plate holds complete information about the object. To make it clearer: if we break the plate, even the smallest piece of it will still show us the whole object recorded on it. Yes, each piece will show the object from a slightly different point of view, but it will still be complete, with no parts missing.
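As a toy illustration of the recording step, here is a 1D Python sketch of two beams interfering on a plate. The wavelength (a HeNe laser’s 633 nm) and the 5° angle between the beams are my own assumptions for the sake of the example:

```python
import numpy as np

# Toy 1D model: a reference beam and an object beam hitting a plate
# at slightly different angles produce an interference fringe pattern
wavelength = 633e-9                  # assumed: red HeNe laser, meters
k = 2 * np.pi / wavelength
x = np.linspace(0, 50e-6, 2000)      # 50 micrometers across the plate

theta = np.deg2rad(5)                # assumed angle between the two beams
reference = np.ones_like(x, dtype=complex)       # reference beam, head-on
obj = np.exp(1j * k * np.sin(theta) * x)         # object beam, tilted by theta

# The plate records intensity: |ref + obj|^2, a cosine fringe pattern
intensity = np.abs(reference + obj) ** 2

# Fringe spacing predicted by theory: wavelength / sin(theta)
expected_spacing = wavelength / np.sin(theta)
```

Every point of the plate receives light from both beams, which is the 1D version of why every fragment of a real holographic plate still “knows” about the whole object.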
This astounding feature was what led neuroscientist Karl Pribram and physicist David Bohm to suggest that different brain regions oscillate (or vibrate) at different frequencies, creating interference patterns – just like light does in the making of a hologram. Thus, this model suggests, just like in a hologram, any region of the brain should contain all of the information stored in it “as a whole.” This idea, known as the holographic brain, remains a hypothesis, for, as you might remember, we still know next to nothing about how consciousness emerges from the activity of our neurons and their connections.
The good news is that by viewing age-old problems from different angles, we can finally shed some light on them, moving closer step-by-step to unraveling the world’s greatest mysteries. Who knows, maybe one day the collaboration of physics and neurophysiology will lead us to a unified theory of mind and matter.
But where can you, too, find out more about all this stuff, you ask? Well, stay tuned to announcements – chances are, there will soon be a similar course for students of ITMO’s Master’s program in Art & Science.