
An Observer Observing Itself

written by Onne Gorter on 2017/01/01

In my previous two posts (1, 2), I have been thinking about consciousness. The main idea is that the brain keeps a history of its decisions, so that later consequences can adjust the model it used to reach those decisions. This is very useful, because this feedback loop continuously improves the brain's model of the world.
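As a toy sketch of this loop, in Python, with everything (the two options, the reward, the learning rate) made up for illustration: a model makes decisions, remembers them, and later outcomes nudge the estimates that produced those decisions.

    import random

    class WorldModel:
        def __init__(self, options):
            # Estimated worth of each option; a trivially small 'model of the world'.
            self.value = {o: 0.0 for o in options}

        def decide(self):
            # Mostly exploit the current model, occasionally explore.
            if random.random() < 0.1:
                return random.choice(list(self.value))
            return max(self.value, key=self.value.get)

        def adjust(self, decision, outcome, rate=0.1):
            # A later consequence nudges the estimate behind the decision.
            self.value[decision] += rate * (outcome - self.value[decision])

    model = WorldModel(["left", "right"])
    history = []  # the record of past decisions

    for _ in range(500):
        choice = model.decide()
        history.append(choice)
        # In a brain the consequence would arrive much later;
        # here the feedback is immediate for brevity.
        outcome = 1.0 if choice == "right" else 0.0  # the world happens to reward 'right'
        model.adjust(choice, outcome)  # the feedback loop: consequence adjusts the model

    print(model.value)  # 'right' ends up valued higher, purely from feedback on past decisions

After a few hundred steps the model prefers the rewarded option, improved only by the consequences of its own history.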

And when that history of decisions includes higher-level concepts, like a self, we get a system that reflects and sees what it was doing and why. This is a self-observing system. And that probably feels like something to the system doing it.

That this creates feeling and meaning is almost unavoidable, because any evaluation of the decisions and outcomes uses the model to simulate the world and the self, then checks whether the predicted outcome is meaningful to the system. In other words, it checks whether it feels good.

Our brains are self-improving reality simulators.
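As a tiny sketch of that evaluation step, again with everything invented for illustration: a state, a predict function standing in for the reality simulator, and a single number standing in for "feels good".

    from dataclasses import dataclass

    @dataclass
    class State:
        energy: float  # stands in for everything the system cares about

    def predict(state, action):
        # The model simulates the world and the self one step ahead.
        return State(energy=state.energy + (1.0 if action == "eat" else -0.5))

    def feels_good(state):
        # 'Meaningful to the system' collapsed into a single number.
        return state.energy

    def decide(state, actions):
        # Evaluate each candidate decision in simulation and pick the
        # one whose predicted outcome feels best.
        return max(actions, key=lambda a: feels_good(predict(state, a)))

    print(decide(State(energy=0.0), ["eat", "run"]))  # prints 'eat'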

Similar Ideas

Joscha Bach presents a very similar idea here:

To get a much bigger picture, including answers to life, the universe, and everything, Joscha gave a series of amazing talks at the CCC.

The Inevitable Hard Problems

Inevitably, people bring up the hard problem of consciousness, or other objections like the Chinese Room. (See Hard Problem, Chinese Room.)

I would just like to point out that those are philosophical devices. We don't know if consciousness is hard. Maybe in certain information-processing systems, consciousness, qualia, meaning, and feelings arise trivially.

The only thing we can say is that it is really hard to understand how. It is hard to see how the brain creates meaning using neurons as building blocks. It is equally hard to see how a computer program could create meaning from its symbols.
