The process of viewing a film begins in the director's mind and ends in the viewer's head. But what happens before a film reaches its destination in a spectator's mind is quite amazing. Those of us who talk about film, commission films or make films for a living should not only continuously strive to create better films, but also contemplate how films are perceived.
This article isn't about film-set anecdotes, creative processes, storytelling or dramatic narratives. For once, two covert protagonists take centre stage, without which viewing films – and I'm sure no one will contradict me here – would be impossible: the eye and the brain. (Even though research and modern technology have now made it possible for us to view a film with our tongue.) But before we explain how seeing with the tongue works, let's take a leap back and start from scratch, where it all begins: at birth.
The development of sight
Up until a few decades ago we were under the impression that newborn babies could hardly see, hear or even feel pain. Today science has radically changed its mind. Babies are surprisingly capable of making meaningful sense of the signals they receive. A baby may appear to be watching on indifferently, but in fact all its senses are working in overdrive. Even in the womb, the senses of touch, smell, taste and hearing are being prepared for the sensations of the outside world. The only sensory organ that is barely stimulated before birth is the eye, even though the eyes become fully functional in the third trimester of pregnancy. Unborn babies can perceive changes in light inside the womb (for example, if their mother is sunbathing on a beach with her bump exposed to the sun).
In the first months after birth, a baby's awareness increases and it starts to connect what it sees with its other senses. At this stage the senses of smell, taste and touch develop faster than sight or hearing. But it's not long before the eye becomes the most important sensory organ. No other medium can transmit as much detailed information to our brain – even over great distances – as light can. The things and details we see around us leave traces in our brains and influence the way we see things in the future.
By the time we turn six, our visual acuity corresponds to that of an adult. Our eyes have reached the peak of their development. The brain, however – the nerve centre that controls sight – keeps developing into old age. It spends a lifetime interpreting our vision and other senses to best adapt them to our own individual realities. That's good news for Tarantino too: it means the master director and professed prophet of the B-movie still stands a chance of developing his passions beyond his favourite film genre.
A chaotic visual reality
We experience a sequence of different sounds as a unity only because our brains are capable of combining individual noises into a melody. The same happens with visual impressions. Even seemingly simple sensory impressions don't penetrate our consciousness unfiltered. Some areas of the brain, for example, work non-stop to ensure that the colour of an object we look at remains stable. In reality, perceived colour always depends on how warm or cold the light falling on an object is. Thanks to the brain's corrective function, we will always see a rose as red, regardless of whether it is lit by the glaring midday sun, a blue energy-saving bulb or the dim light of dusk. The brain's colour correction also makes sure that all parts of the rose look equally red, even if some petals are in shadow or in a different light – which means that in reality a rose reflects a spectrum of colours that is anything but uniformly red.
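The brain's colour correction can be loosely compared to the automatic white balance in a camera. As a purely illustrative sketch (not a model of the visual system, and with names of our own choosing), here is the classic "gray-world" balance, which assumes the average colour of a scene is gray and rescales each colour channel accordingly:

```python
def gray_world_balance(pixels):
    """Remove a colour cast, assuming the average scene colour is gray.

    pixels: list of (r, g, b) tuples with values in [0, 1].
    Returns rescaled pixels whose three channel averages are equal.
    """
    n = len(pixels)
    # Average of each colour channel across the whole scene.
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(means) / 3
    # Scale each channel so its average matches the overall gray level.
    scale = [gray / m for m in means]
    return [tuple(min(1.0, p[c] * scale[c]) for c in range(3)) for p in pixels]

# A small scene, then the same scene under a warm (reddish) light:
scene = [(0.2, 0.3, 0.4), (0.5, 0.1, 0.3), (0.3, 0.4, 0.2)]
warm = [(r * 1.4, g * 1.0, b * 0.7) for r, g, b in scene]
balanced = gray_world_balance(warm)
```

After balancing, the warm cast of the light is removed and the three channel averages coincide – a crude, mechanical cousin of what the brain does with the rose.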
This construction of reality is vital for human survival. Perhaps in Adam and Eve's day wild animals had no appetite for humans; but once humans were banished from the Garden of Eden, they had to learn to take deliberate decisions. Decisions require a frame of reference and reference values; they can only be made in a clearly defined and comprehensible world. The brain creates this world for every person individually. Countless neurons form huge cell assemblies so that we can perceive our surroundings as an uninterrupted whole. Without our brain's corrective autofocus, we would be completely overwhelmed, incapable of processing all these impressions. The brain organises and filters the chaotic reality that surrounds us. It bundles sensations together, weighs them against expectations and experience, and creates a solid image of the here and now in our consciousness. Facebook uses algorithms based on our browsing history to show us what we want to see. Our brain is different: it shows us only what we have to see.
Scientific research into the way our memory works shows how important visual impressions really are: an hour later, a person can remember only 10% of what they have heard and 17% of what they have read. It's a completely different story if the person experiences or sees (!) something for themselves: in that case 80% of it sticks. And it gets better: if a visual experience is connected to an activity (including any interaction within a virtual reality), we remember a staggering 90% of it.
The famous "Gorilla Experiment" conducted by the American psychologist Daniel Simons in 1999 impressively demonstrates how we block out visual information we deem unnecessary. Simons filmed a basketball game and asked viewers to count how many times the ball was passed between the players of one of the teams. When asked afterwards whether anything unusual had happened, more than half of the viewers said no. Their brains had blocked out an event that should have been impossible to miss: during the game, a man in a gorilla suit had walked onto the court, fooled around right in front of the camera doing his best gorilla impression, and walked off again. The conclusion of the study: the act of seeing depends to an alarming extent on what we pay attention to. How we perceive something always has to do with what we already know and what we expect to happen.
Why we feel films
In 1998 a study was published in the U.S. about the connection between the body and perception. The participants had to sit down, resting their left arm on the table in front of them. The researcher hid their left hand from view and laid a visible rubber hand beside it. As both hands were stroked with a paintbrush, the participants didn't feel the sensation in their real left hand, but in the artificial hand they could see. Their body had absorbed this visual perception into its self-awareness. The study shows that an apparent match between visual information and touch is enough for the brain to perceive a visible object as part of its body, even though it doesn't belong to the organism at all. On this premise it is hardly surprising that we not only see films and videos, but also feel them. Whether we like it or not, the brain synchronises our own bodily awareness with the visual impulses a film conveys to us.
Seeing with the tongue
Our ability to see isn't limited to what our eyes can do. Human perception proves surprisingly flexible when it comes to visual stimuli. This malleability exists because, although the human sensory organs each react to specific environmental stimuli (such as sound, light or scents), our bodies always send the same kind of neural code – made up of electrical signals – to the brain. Depending on which part of the brain receives and processes these signals, perception changes.
A clear example of this are blind people who have learned to see with their tongue instead of their eyes. For this tongue sight to be possible, three things are needed: a video camera, a small plate covered in 400 electrodes, and training. The electrodes on the plate translate the image from the video camera, which is mounted on the person's head, into a series of electrical impulses. If a blind person places the plate on their tongue, the electrodes transmit the impulses through the saliva to the tongue's nerves.
At first the brain struggles to make sense of these strange impulses. But with repeated training, the tongue's nerves, which normally react to tactile stimuli, quickly learn to interpret the electrical impulses in a new way. It's not long before the brain reads them as the kind of impressions that generally describe the act of seeing: they provide information on the shape and size of an object, the direction of movement and spatial depth. Thanks to this tiny plate on their tongue, and with practice, a blind person can reach for a cup of coffee, catch a ball or even read.
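One way to picture the translation from camera image to electrode pattern: the frame is divided into as many regions as there are electrodes, and each electrode is driven by the average brightness of its region. A minimal sketch under this simplifying assumption (the function name and the 20×20 layout of the 400 electrodes are ours; the real device's processing is more elaborate):

```python
def frame_to_electrodes(frame, grid=20):
    """Map a grayscale camera frame onto a grid x grid electrode array.

    frame: list of rows of brightness values in [0, 1].
    Returns a grid x grid list of stimulation strengths, each the
    average brightness of the corresponding image region.
    """
    h, w = len(frame), len(frame[0])
    electrodes = []
    for gy in range(grid):
        row = []
        for gx in range(grid):
            # Image region covered by this electrode.
            y0, y1 = gy * h // grid, (gy + 1) * h // grid
            x0, x1 = gx * w // grid, (gx + 1) * w // grid
            region = [frame[y][x] for y in range(y0, y1) for x in range(x0, x1)]
            row.append(sum(region) / len(region))
        electrodes.append(row)
    return electrodes

# A 40x40 frame: dark background with a bright square in the upper left.
frame = [[1.0 if x < 20 and y < 20 else 0.0 for x in range(40)] for y in range(40)]
pattern = frame_to_electrodes(frame)  # 400 values, one per electrode
```

A bright shape in the camera's view thus becomes a patch of stronger stimulation on the tongue, which the brain, with training, learns to read as shape, size and position.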
Seeing with the tongue is only a first step in a field of development whose limits are as yet unknown. Science is already working on ways to feed new kinds of data into our brains by means of similar tongue plates: information that our natural senses are normally incapable of perceiving. Infrared signals or UV light are harmless examples of such new technology.
Data glasses and virtual reality (VR), but especially the combination of organic consciousness and artificial intelligence – which promises to become a reality in the not too distant future – will shift the way we see and experience (or rather: the way we interpret and process what we see and experience) into a completely new universe for generations to come.
What exactly these new visual worlds will reveal before our inner eye is still unclear. With its cerebral cortex, its billions of neurons and its incredible capacity for learning, our brain surely still has one or two surprises up its sleeve.
Conclusion: Viewing films and the cinema in our head
The way we see and view films today is the result of 500 million years of evolution. Five billion neurons are busy processing visual stimuli in our brain, engaging 30 different brain areas that translate the signals from our eyes into a seemingly uniform image.
According to the latest brain research, only a small proportion of our vision depends on what we "really" see – on what arrives as light signals and is converted via the optic nerves and the thalamus into a visual perception in our brain. Most of what transforms our visual perception into lasting impressions stems from the brain itself. It is memories and feelings that, of their own accord, combine with the visual signals in our brain to form individual, seemingly logical and graspable experiences we can personally understand.
Octopuses provide a great example of what seeing really means. Due to a coincidence of convergent evolution, octopuses have visual organs similar to those of humans. Their visual reality, too, is created not in the eye but in the brain (a vital asset for both species) – which makes for a whole different cinema in an octopus's head. Despite the similarity of our visual organs, they perceive their surroundings in a completely different way than we humans do. That's why we recommend you refrain from working with these eight-armed sea creatures when creating videos and films.
More than a century ago, Karl Marx came to the conclusion, with regard to social development, that our being determines our consciousness. When it comes to the cinema in our head and to communication, the opposite is true: the unconscious defines our being. Seen from the point of view of neural research, there is no such thing as an audience – only a group of individual human beings, each of whom experiences a film in their own way. That is undoubtedly also where the strength of the moving image lies.
Sources and additional reading:
- Gerhard Roth: Wie einzigartig ist der Mensch? Die lange Evolution der Gehirne und des Geistes, Springer Spektrum, 2010
- Vicki Bruce & Andy Young: Face Perception, Psychology Press, 2011
- Herbert Bruhn, Reinhard Kopiez, Andreas C. Lehmann (eds.): Musikpsychologie – das neue Handbuch, Rowohlts Enzyklopädie, 2008
- GEOkompakt, No. 36: Unsere Sinne, 09/2013