From Single Neuron to a Conscious Brain
Why do we have consciousness, and what is it? Thomas H. Huxley (known as "Darwin's bulldog") had some ideas on it, but found it hard to explain. Today we have a much better understanding of the brain, and a good grasp of informatics (computer science, machine learning). Using both, I will try to sketch how, in small steps, a brain might evolve that can experience consciousness.
(1) Sensory input is mixed with internal state, producing output. This can start with just a single neuron or a system that behaves like one.
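Step 1 can be sketched as a single artificial neuron whose output depends on both sensory input and an internal state. This is only an illustration; the weights, the "hunger" state, and the logistic activation are my assumptions, not biology.

```python
import math

def neuron(sensory_inputs, internal_state, weights, state_weight, bias):
    """Mix weighted sensory input with internal state, squash to an output."""
    total = sum(w * x for w, x in zip(weights, sensory_inputs))
    total += state_weight * internal_state + bias
    return 1.0 / (1.0 + math.exp(-total))  # logistic activation in (0, 1)

# The same food smell produces a stronger response when the internal
# "hunger" state is high: input mixed with state, producing output.
smell = [0.8]
sated = neuron(smell, internal_state=0.0, weights=[1.5], state_weight=2.0, bias=-2.0)
hungry = neuron(smell, internal_state=1.0, weights=[1.5], state_weight=2.0, bias=-2.0)
```

The point is only that output is a function of input *and* state, so identical stimuli can yield different behavior.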
(2) Old sensory data persists even after the sensors have moved on, perhaps through some neurons behaving like a recurrent neural network.
(3) Persisted input is predicted, compensating for the animal's own movement or for the movement of things around it. The goal is to minimize surprise. From this step onwards we are talking about a lot of neurons.
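Steps 2 and 3 together can be sketched in a few lines, under two assumptions of mine: persistence is a leaky memory trace, and "surprise" is just squared prediction error, reduced by a simple delta-rule update.

```python
def run(observations, decay=0.8, lr=0.02):
    """Persist input in a leaky trace and learn to predict the next observation."""
    trace, weight = 0.0, 0.0
    surprises = []
    for obs in observations:
        prediction = weight * trace          # predict from the persisted input
        error = obs - prediction             # surprise = prediction error
        surprises.append(error * error)
        weight += lr * error * trace         # adapt to reduce future surprise
        trace = decay * trace + obs          # old input persists, slowly decaying
    return surprises

# On a steady input, surprise shrinks as the predictor adapts.
surprises = run([1.0] * 50)
```

A constant stimulus starts out maximally surprising and becomes boring, which is roughly the behavior we want from "minimize surprise".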
(4) The more individual things, and the more distinct properties, the animal recognizes, the more surprise can be minimized, and the better it can react to the things in its environment.
(5.1) Some recognized things might persist longer: where food is, where the nest is, where the borders of the territory are. A kind of high-level memory appears. (This is a bit of a side step, in parallel with 5.2.)
(5.2) Without actually acting, the animal can predict what to expect from an action and react to that expected future. A kind of high-level learning appears, learning from feedback such as reward or pain.
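Step 5.2 resembles what machine learning calls value learning: keep an estimate of what each action leads to, and nudge it toward the feedback actually received. A toy sketch, where the actions ("approach", "flee") and their payoffs are invented for illustration:

```python
def update(value, feedback, lr=0.5):
    """Move the expected outcome of an action toward the feedback received."""
    return value + lr * (feedback - value)

expected = {"approach": 0.0, "flee": 0.0}

# Approaching this smell keeps paying off (+1.0 reward),
# fleeing keeps costing a little energy (-0.2 pain).
for _ in range(10):
    expected["approach"] = update(expected["approach"], +1.0)
    expected["flee"] = update(expected["flee"], -0.2)

best = max(expected, key=expected.get)  # react to the expected future
```

After a few repetitions the animal "knows" what each action leads to, without needing to re-experience the pain each time.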
(6) Futures with multiple hypothetical actions can be predicted, perhaps never taking those actions because the predicted future is deemed harmful or not useful.
(7) Multiple plans can be compared, weighing their expected utility against their potential harm.
(8) Plans might start to include not just hypothetical actions but hypothetical tools, like sticks or rocks. And now goals appear, like getting a stick to get a fruit.
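Steps 6 to 8 amount to search: simulate each hypothetical plan with the predictive model, score the imagined outcome, and only act when some plan beats doing nothing. A crude sketch; the one-dimensional "world", the step cost, and the utility numbers are all assumptions for illustration:

```python
def imagine(position, plan):
    """Predict where a sequence of hypothetical moves would end up (toy world)."""
    for step in plan:
        position += step
    return position

def choose(position, plans, goal, cost_per_step=0.1):
    """Score each imagined future; return the best plan, or None if none is worth it."""
    best_plan, best_utility = None, 0.0   # doing nothing has utility 0
    for plan in plans:
        outcome = imagine(position, plan)
        reward = 1.0 if outcome == goal else 0.0
        utility = reward - cost_per_step * len(plan)
        if utility > best_utility:
            best_plan, best_utility = plan, utility
    return best_plan

# Fruit at position 3: two plans reach it, and the cheaper one wins;
# the third plan wanders and is never acted upon.
plan = choose(position=0, plans=[[1, 1, 1], [2, 1], [1, -1, 1]], goal=3)
```

Note that `choose` can return `None`: the hypothetical actions were all "taken" in imagination only, matching step 6's idea of predicting futures without acting.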
By now the brain has become abstract. Plans and their utility or harm are represented, yet they are not things in the environment. Things can even be imagined, like sticks that are needed but not available. And another kind of memory must come about, to store plans with the things involved, and goals with their subgoals.
(9) That more abstract memory can be used to store experience, and to reason about what leads to what based on that experience.
By now we have abstract reasoning, even reasoning about reasoning. And it includes non-real things like "plans" and "beliefs" and "needs" and "rewards" and "self". The moment the brain can reason about who is doing the reasoning, and the "self" feels just like any other thing, is likely the moment consciousness appears.
But this is just one line of development. At some points other things come into play as well, like recognizing others as similar to you and recognizing their behavior: to predict and act upon, perhaps to cooperate with, or to mimic and learn from.
Similarly, once things become more abstract, they might be represented by labels. And recognizing others, it might be useful to communicate things by communicating the labels. And the more communication becomes useful, the more useful labels become. And the more labels you have, the more abstract things there are, the more reasoning can be applied.
Though communication probably appears much sooner. Just producing a sound when surprised helps others react usefully, and producing a sound when you feel happy and safe helps others feel safe.
A lot of brain machinery is dedicated to input persistence and prediction, and a lot of planning and execution is unconscious. No matter how well you understand football by thinking about it long and hard, you will have to train to master the movements and the second-to-second decision making.
Inhibiting the highest level of memory, or inhibiting the abstract reasoning system, should each stop the experience of being conscious. That is probably only a small portion of the brain that needs to be inhibited.
Also, don't be surprised by how much the brain can do without conscious or deliberate thought.
Surely there is much more to say. But perhaps the main prediction is that consciousness is just physics, and just biology. And can come about through very gradual changes with selection for their advantages.
Have you ever had this happen: just as you grab your phone to call someone, it rings, and it is that person? Telepathy? Probably not.
Your brain had already figured out you should call that person and had the plan ready; you just hadn't noticed it yet (or hadn't stored it yet). Such a state might even linger for hours. Then, when that person does call and you see the name on the screen, you suddenly become aware of the plan and are amazed at the coincidence.
Some neuroscience-based views can sound a lot like trying to understand software behavior by studying the hardware. This is not entirely without merit, as you would expect nature to build its software through what in a computer we would call hardware acceleration. But I would guess that the more abstract the processing, the less specific the hardware.
Another theory is Integrated Information Theory, which looks at how much feedback a system has on itself. While self-feedback is necessary, it does not look like a good measure for consciousness: one can construct systems with very high feedback that are by no means conscious. The steps described here actually introduce feedback at four levels, at steps 1, 2, 5, and 9. But at the last step something new happens: a higher level (e.g. "plan") is made part of the normal level (e.g. real-world things).
Some confusion also stems from the definition of "conscious": the clinical one (not asleep, not knocked out) versus the dictionary one (being self-aware). Is a mouse conscious? Is it intelligent and creative? Yes. Is it aware of this? Probably not. (It sits somewhere between steps 5 and 8.) What about robots like BigDog or its successors? Perhaps a way to think about it is this: an animal can clearly experience pain, but how far can it experience injustice?
Then there are the more philosophical views, discussing qualia or the hard problem of consciousness. To me that sounds like answering the wrong question with the wrong models: useful for describing what consciousness feels like, but not for explaining what brings it about.
Or worse, mystical ideas about consciousness, claiming that the universe is conscious, usually in connection with quantum mechanics. The only thing quantum mechanics has in common with consciousness is that both are hard to understand. And just because consciousness is a pattern of information processing, that does not make every pattern conscious.
An excellent talk on the topic by Max Tegmark: