Building a conscious mind with LLMs

Recent research points toward LLMs being, at their core, highly effective compression machines… with a trained model like GPT-3.5 being, essentially, a compression of vast quantities of human knowledge.

With this perspective, we can see more clearly how a language model, on its own, is not a consciousness, but rather a type of memory that can be used to predict or infer appropriate data/information decompression. Granted, with massive billion-parameter compression, we might find that not just facts and connections between facts are being compressed, but also meta-properties of those facts, allowing for a kind of “understanding” in the decompression that is already super-human. But at the end of the day, we would still not consider such a system conscious, or even a mind.
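To make the compression framing concrete, here’s a toy sketch (my own illustration, not from any of the papers): under ideal arithmetic coding, a text can be stored in roughly -log2 P(text) bits, so a model that predicts the next token well is, by the same math, a good compressor. A character-level bigram model stands in for the LLM here.

```python
import math
from collections import Counter, defaultdict

# Toy illustration of "prediction = compression": the better a model predicts
# the next character, the fewer bits the text costs under ideal coding.

def train_bigram(corpus: str):
    """Count next-character frequencies for each preceding character."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1
    return counts

def bits_to_encode(text: str, counts, alphabet_size: int) -> float:
    """Ideal code length of `text` in bits: sum of -log2 P(next | prev),
    with add-one smoothing so unseen characters stay encodable."""
    bits = 0.0
    for prev, nxt in zip(text, text[1:]):
        total = sum(counts[prev].values()) + alphabet_size
        p = (counts[prev][nxt] + 1) / total
        bits += -math.log2(p)
    return bits

corpus = "the cat sat on the mat. the cat sat on the hat. " * 20
model = train_bigram(corpus)
sample = "the cat sat on the mat."
raw_bits = 8 * len(sample)  # naive 1 byte per character
model_bits = bits_to_encode(sample, model, alphabet_size=len(set(corpus)))
print(f"raw: {raw_bits} bits, under the model: {model_bits:.1f} bits")
```

Swap the bigram model for a billion-parameter LLM and the same arithmetic is what makes the “LLMs are compressors” claim more than a metaphor.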

So let’s consider the additional properties and architecture a system would need before we’d call it a conscious mind. For fun, I’m going to call our imaginary synthetic conscious mind “ISCM”. I have a few thoughts:

  • Continuous input - Like us, ISCM would require continuous, relevant inputs to build an internalized belief state about the world.
  • Belief state - Like us, ISCM can’t know anything with absolute certainty. “Facts” are disproven all the time, and even our most basic facts (this blanket is red) rest on colloquial usage of language. ISCM might assign “confidence” to its beliefs, holding some more firmly than others.
  • Goals - For ISCM to have agency, it must have goals. The goals would be interrelated.
  • Planning - ISCM must be able to make plans to reach its goals based on its current understanding of the world.
  • Act - ISCM would need to effect change to reach its goals. Even if entirely disembodied, ISCM must be able to take purely mental actions: “change its mind”, record new facts, research, plan, etc. Effecting real-world digital or physical change would require additional non-mental actions like computer-system plugins (writing emails, looking up the weather, booking tickets, etc.) and actuators (move to X, cool down the room, dispense medication, etc.).
  • Temporality - I broke this out as a separate property because it is a core assumption of all of the above. Inputs happen at specific times; goals and planning are sequential and time-bound; and acting, being imperfect, requires feedback to determine whether the action produced the intended outcome. Our current generation of LLMs treats time as semantically equivalent to any other property, but a conscious mind that interacts with the world needs a much deeper integration of time. (A rough sketch of how these pieces might compose follows this list.)
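Here’s that sketch. Every class and function name is invented for illustration; the planner and actions are stubs where real LLM calls and plugins would go. It’s a sketch of the shape, not an implementation:

```python
import time
from dataclasses import dataclass

@dataclass
class Belief:
    statement: str
    confidence: float   # 0.0-1.0; never absolute certainty
    updated_at: float   # beliefs are time-stamped, not eternal facts

@dataclass
class Goal:
    description: str
    priority: float

class ISCM:
    def __init__(self, goals):
        self.beliefs = {}  # statement -> Belief
        self.goals = sorted(goals, key=lambda g: -g.priority)

    def perceive(self, observation: str):
        """Continuous input: fold a timestamped observation into the belief state."""
        prior = self.beliefs.get(observation)
        # Repeated observations raise confidence, but nothing ever reaches 1.0.
        confidence = min(0.99, prior.confidence + 0.1) if prior else 0.5
        self.beliefs[observation] = Belief(observation, confidence, time.time())

    def plan(self) -> str:
        """Planning stub: a real system would ask an LLM to propose steps
        from goals + current beliefs; here we just name the top goal."""
        return f"work toward: {self.goals[0].description}"

    def act(self, plan: str) -> str:
        """Acting stub: a plugin call or actuator command would go here."""
        return f"outcome of '{plan}'"

    def step(self, observation: str):
        self.perceive(observation)
        outcome = self.act(self.plan())
        self.perceive(outcome)  # feedback: outcomes re-enter as new input

mind = ISCM([Goal("keep the room at 21C", priority=1.0)])
mind.step("room temperature is 19C")
for belief in mind.beliefs.values():
    print(f"{belief.confidence:.2f}  {belief.statement}")
```

The important line is the last one in step(): an action’s outcome re-enters as a new timestamped input, which is exactly the feedback loop the Temporality property demands.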

What’s not on this list?

  • Embodiment - Some theories say consciousness and mind cannot exist outside a physical body. While it’s clear that a consciousness with no interconnection to the physical world would be quite limited, it also seems clear that our other properties of Continuous Input and Belief State can encompass this idea: better input (sensations, perceptions) and better beliefs (judgements, feelings, intuition) would make a more capable ISCM. Indeed, we could limit ISCM’s inputs to a single “body of sensors” like an android, but we might also give it the input of multiple independent physical forms.
  • Feeling - Another way of saying “inputs” and “goals” (e.g., fight-or-flight).
  • Intuition - A more sophisticated form of belief state management that includes meta-analysis and pattern matching.

What else? :smiley:


Fascinating thoughts; I’m discovering new terms like Continuous Inputs and Belief States with this post. I’m curious about humans’ continuous inputs and how we sometimes pause them and ‘remove ourselves’ from the moment when we daydream. I wonder whether these kinds of worldly assessments might benefit an ISCM’s temporality and perhaps, somehow, its agency.

I tried reading the arXiv paper but it was a bit over my head; perhaps I’ll try again with the help of ChatGPT.

Thanks for posting!


Curious if you’ve had a chance to dive into this paper released yesterday by Wes Gurnee & Max Tegmark, “Language Models Represent Space and Time”: https://arxiv.org/abs/2310.02207

Love the idea about daydreaming, where many novel ideas originate. I’m wondering if this relates to the subconscious or unconscious mind as mentioned in the video?

Another aspect that came to my mind is personality. It’s somewhat related to the belief state but not entirely. Personality plays a critical role in how we interpret and react to incidents. Personality is innate, and even identical twins can have different personalities.

The decision-making process, from input to actions, varies across personalities; consider extroverted versus introverted individuals. In Human Design, Manifestors are trailblazers and initiators of new things, while Reflectors react to their environments.


I did skim that paper. It somewhat debunked a claim I’ve been making that LLMs have no sense of time. I still think we’ll introduce a more fundamental property of time into models at some point, rather than relying on the emergent representation described in the paper, but it’s really interesting to see how model training is figuring out time on its own.


Right! We kind of have a way of jumping through time and space using our memory and imagination. Additionally, some people view sleep/dreaming as a mind’s means of integrating everything. These, along with reflection, seem like a meta-process that feeds back into continuous input, with the input being more derivative than the initial impressions. And of course we can then feed it through again and again: “last night I dreamed…”, “I have this recurring dream…”, “my dreams are generally…”, “I dream” :smiley: Each pass adjusts our belief states.
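To sketch that feedback loop in code (purely illustrative; reflect() is a placeholder for whatever model call would do the actual summarizing):

```python
# Reflection as "derivative input": each pass takes the most recent item in
# the input stream, produces a more abstract version, and appends it back
# into the stream, so later inputs are reflections on reflections.

def reflect(memory: str) -> str:
    """Placeholder abstraction: keep roughly the first half of the words.
    A real ISCM would ask a model to summarize the memory instead."""
    words = memory.split()
    return " ".join(words[: max(2, len(words) // 2)])

stream = ["last night I dreamed I was flying over the mountains near home"]
for _ in range(3):
    derivative = reflect(stream[-1])  # reflect on the latest input...
    stream.append(derivative)         # ...and feed it back in as new input

for depth, item in enumerate(stream):
    print(f"depth {depth}: {item}")
```

Each deeper layer carries less raw detail and more abstraction, which is roughly the “I dreamed about X” → “I dream” progression above.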


Right! ISCM seems to be missing properties like “personality”, “temperament”, and “genetics”, which might also be where we would put environmental influences like health, affluence, or society: all things that shape how inputs get turned into beliefs.
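If we bolted that onto the earlier sketch, it might look like a set of fixed parameters biasing how perceive() turns an input into a belief update. Again, the names and the update rule are invented purely for illustration:

```python
from dataclasses import dataclass

# Hypothetical "temperament" knobs that bias how inputs become beliefs.

@dataclass
class Temperament:
    openness: float    # how strongly a single new input can move a belief
    skepticism: float  # how tightly confidence is capped

def updated_confidence(prior: float, temperament: Temperament) -> float:
    """One illustrative rule: openness scales the boost a repeated
    observation gives; skepticism lowers the ceiling on certainty."""
    boost = 0.1 * temperament.openness
    cap = 0.99 - 0.2 * temperament.skepticism
    return min(cap, prior + boost)

cautious = Temperament(openness=0.5, skepticism=0.8)
credulous = Temperament(openness=1.5, skepticism=0.1)
print(updated_confidence(0.5, cautious))   # small, tightly capped update
print(updated_confidence(0.5, credulous))  # larger, looser update
```

Two ISCMs fed identical inputs would then diverge in their belief states, which is roughly what we mean when we say identical twins have different personalities.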

A massive paper very related to this topic just dropped from David Shapiro, who has been thinking about cognitive architectures for a while: Conceptual Framework for Autonomous Cognitive Entities (arxiv.org)