Consciousness is defined as the state of being aware of and responsive to one’s surroundings, but what really makes us conscious, and how can we quantify that?
Consciousness is both the most familiar and most puzzling aspect of our existence. All of our experiences are conscious, yet it is seemingly impossible to quantify the subjective nature of these experiences. Does a computer know “what it’s like” to be in its state? Does your cat? Even newborn babies and brain-damaged patients seem to display some signs of consciousness, but how much? These questions could keep you lying awake at night.
Being able to quantify consciousness would solve many problems, such as explaining why some areas of the brain are vital to consciousness while others seem to matter little. One theory, proposed in 2008 by the US-based neuroscientist Giulio Tononi, has been gaining support in the scientific community.
Integrated Information Theory (IIT) implies that consciousness can, in principle, arise in any system where information processing is going on. The theory states that a system, whether a computer or a living creature, is conscious if two physical conditions are met.
Firstly, the system must contain a vast repertoire of information, a property called being differentiated, to borrow a term from mathematics. For example, both your brain and your computer can hold highly differentiated information: each can represent every frame of a movie, with every frame clearly distinct from the others. Yet one of these is conscious and one is not – so what’s the difference?
The second condition is that the system must be highly integrated. Whatever information you are conscious of is presented to your brain as a unified whole: you cannot separate the frames of a film into a series of independent images, no matter how hard you try, nor can you fully isolate the information arriving from each of your senses. Integration is what distinguishes our brains from other complex systems, such as computers.
To capture this, IIT condenses everything into a single number, phi, which measures how much integrated information a system contains. Something with a low phi, such as a computer, will not be conscious, whereas something with a high enough phi, like a human brain, will be.
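Computing the real phi is mathematically involved, but the underlying intuition can be sketched with a toy stand-in: the mutual information between two halves of a tiny system, which is zero when the parts are statistically independent. This is a simplified illustration of "integration", not IIT's actual phi calculation:

```python
import numpy as np

def mutual_information(joint):
    """Mutual information I(A;B) in bits, from a joint probability
    table p(a, b). A crude proxy for integration: it is zero when
    the two parts carry no information about each other."""
    pa = joint.sum(axis=1, keepdims=True)   # marginal p(a)
    pb = joint.sum(axis=0, keepdims=True)   # marginal p(b)
    nz = joint > 0                          # avoid log(0)
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (pa @ pb)[nz])))

# Two independent coin flips: differentiated (four states) but not integrated.
independent = np.full((2, 2), 0.25)

# Two perfectly coupled units: the state of one fixes the state of the other.
coupled = np.array([[0.5, 0.0],
                    [0.0, 0.5]])

print(mutual_information(independent))  # → 0.0
print(mutual_information(coupled))      # → 1.0
```

Both systems can occupy multiple states (differentiation), but only the coupled one scores above zero on this toy integration measure.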
If the “amount of consciousness” corresponds to the amount of integrated information in a system, then phi should change across different states of consciousness. This makes the idea physically testable, and researchers have now developed an instrument to do so. By stimulating the brain with electromagnetic pulses and measuring the complexity of the resulting neural activity, they were able to distinguish awake brains from anaesthetised ones. Using the same technique, they could even discriminate between vegetative and minimally conscious patients, and between dreamless and dream-filled states of sleep.
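The complexity scoring in such experiments is based on how well the brain's response compresses: a stereotyped response compresses to almost nothing, while a rich, varied one resists compression. As a rough illustration only (using Python's `zlib` as a stand-in for the actual compression-based index, and toy binary strings in place of real EEG recordings):

```python
import random
import zlib

def response_complexity(bits):
    """Compressed size (in bytes) of a binarised response pattern.
    zlib here is a crude stand-in for the compression step used to
    score the complexity of the brain's evoked activity."""
    return len(zlib.compress(bits.encode()))

# A repetitive, anaesthetised-like response pattern compresses well...
repetitive = "01" * 500

# ...while an irregular, awake-like pattern of the same length does not.
random.seed(0)
irregular = "".join(random.choice("01") for _ in range(1000))

print(response_complexity(repetitive) < response_complexity(irregular))  # → True
```

The point is not the absolute numbers but the gap: the same stimulation yields a far more compressible echo in an unconscious brain than in a conscious one.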
Interestingly, IIT also predicts why the cerebellum, the part of our brain responsible for coordinating movement, seems to contribute only slightly to consciousness, despite containing roughly four times more neurons than the cerebral cortex, the part of our brain responsible for information processing and other “higher-order” functions. The cerebellum, IIT suggests, is an information-rich and therefore highly differentiated area, yet it fails the second condition of consciousness: it is not highly integrated.
There is still much work to be done in this field, yet the theory has already put forward some striking implications. If consciousness is defined by the amount of integrated information in a system, then probably all complex systems, or at least all creatures with brains, have some form of consciousness. But does being able to quantify consciousness mean that we can now create a conscious computer? Are the challenges of artificial consciousness closer to being solved than we think?