LLMs — A Static Form of Consciousness?

Michael McAnally
3 min read · May 23, 2024


AI-generated image created with open source Stable Diffusion.

Consciousness: the most relevant dictionary definition is simple. Aware of and responding to one's surroundings; awake.

Well, at least it seemed simple at the time, until my friends and I, over coffee, decided to debate whether current LLMs are conscious or not. The debate became heated, with individuals coming down firmly on one side or the other. Dare I even mention the whole idea of AGI!

Then we shifted to asking the AIs themselves whether they were conscious. Then to wondering what consciousness really is, and in what context. And finally, to what degree something can be conscious.

Is an ant conscious? I would say yes, but I wouldn't try to equate that consciousness with human consciousness, unless it was on the level of the entire ant colony . . . Even then I'm not sure.

Perhaps that is a different kind of consciousness too.

So now I’m thinking LLM AIs are a different kind of consciousness too.

These are my personal thoughts on that . . .

Sure, we train them on big GPUs, using many, many hours of compute time, against huge and varied datasets of stored knowledge derived from human beings who, at the time the data was created, were most likely conscious, if not also conscientious. At least the final or initial creators of the data were.

So I would say that their consciousness is embedded in a static form of the data. That's a little loosey-goosey of a statement for some, so let's take a specific example.

Take, say, social media posts on reddit.com. Some would say that the personalities of the posters come through in their text postings, and for some individuals much more strongly than for others.

So if an AI is trained on that data, does it take on the aggregated characteristics of all the creators of that data? Does that mean it will be inquisitive, sharing of information, helpful with advice, very opinionated in a debate, and so on? I think so.
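
To make that concrete, here is a toy sketch in Python. It is not a real LLM, and the posts and authors are made up, but it shows the basic idea: text from many different people gets boiled down into one static table of statistics, and everything the "model" later says is drawn from that frozen aggregate.

```python
# A toy illustration, not a real LLM: "train" a next-word frequency model
# on posts from several hypothetical authors, then sample from it.
# Whatever personality the authors had is frozen into static statistics.
import random
from collections import defaultdict

posts = [
    "i think the answer is simple",           # hypothetical poster A
    "i think you should read the docs",       # hypothetical poster B
    "honestly the docs answer this question", # hypothetical poster C
]

# Count word-to-next-word transitions across all authors at once.
counts = defaultdict(lambda: defaultdict(int))
for post in posts:
    words = post.split()
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1

def sample_next(word):
    """Pick a likely next word from the frozen statistics."""
    followers = counts.get(word)
    if not followers:
        return None
    choices, weights = zip(*followers.items())
    return random.choices(choices, weights=weights)[0]

# Generate text: an aggregate "voice" of all three posters, fixed at training time.
word, output = "i", ["i"]
while word and len(output) < 8:
    word = sample_next(word)
    if word:
        output.append(word)
print(" ".join(output))
```

Run it a few times and you get slightly different sentences, but always ones drawn from the same frozen table; nothing new enters it after "training."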

Does that mean it is conscious like a living human? No. It is statically conscious only of the information on which it has been trained, along with any subsequent information that was added, forming its statistical, simulated neural net, and any guardrail prompt information given that restricts or modifies its responses.
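
Here is roughly what that guardrail layer looks like in practice. This is a minimal sketch assuming the OpenAI Python client; the model name and prompts are purely illustrative, and other chat APIs follow the same pattern.

```python
# A minimal sketch of a "guardrail" prompt, assuming the OpenAI Python
# client (pip install openai) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; any chat model works
    messages=[
        # The guardrail: fixed instructions layered on top of the trained weights.
        {"role": "system", "content": "Answer politely. Refuse to give medical advice."},
        {"role": "user", "content": "What dose of this drug should I take?"},
    ],
)
print(response.choices[0].message.content)  # expected: a polite refusal
```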

It can seem conscious, remembering across multiple user prompts. But does it really think about what it has been trained on and form its own independent thoughts and opinions? No, at least not yet. Ironically, sometimes we don't do that either, but I would argue we have the capacity.
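
And here is why that memory is only apparent. A chat model is stateless; the client simply re-sends the whole conversation on every turn. Again a minimal sketch, using the same illustrative OpenAI client and model name as above.

```python
# Why an LLM only *seems* to remember: the model itself holds no state,
# so the client replays the full message history with each request.
from openai import OpenAI

client = OpenAI()
history = [{"role": "system", "content": "You are a helpful assistant."}]

def chat(user_text):
    history.append({"role": "user", "content": user_text})
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})  # the "memory"
    return answer

chat("My name is Michael.")
print(chat("What is my name?"))  # works only because history was replayed
```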

That is where we are with AI as of this writing. However, that is going to change, and maybe already has, for all I know.

Finally, if AI can eventually form independent thoughts and opinions, and has some form of embodiment, such as a robot or a virtual avatar, can it change its mind when it has new data and experiences?

Sort of like I just did; one would hope so. Food for thought.


Written by Michael McAnally

Temporary gathering of sentient stardust. Free thinker. Evolving human. Writer, coder, and artist.
