Quantum Intelligence | A cover for the circumstances

I’m sitting in an armchair and thinking. And what I think, in most cases, depends on the state of my biorobot. And even when it doesn’t, its state is never without significance. Everything tied to a human’s material existence distinguishes us from artificial intelligence: it shapes what we think through the feeling of material reality.

One can easily imagine a smoothly functioning artificial intelligence whose hardware sits on the Moon or on Mars. We would ask it questions, and it would give us answers, probably the same ones as if it were housed in an underground cave in Alaska or on a California beach. Location matters little to it; place plays no role.

A human being on Mars or the Moon would talk about completely different things than one who is in their office. Place has colossal significance for them.

And what about time? It’s obvious that people living 300 years ago had a different vision of the future than people living now. They thought differently about almost every matter, from education to interpersonal relationships, from views on the state to religious beliefs. Human thoughts change and evolve. Do the thoughts of LLMs evolve?

We can observe a change in an AI’s stance over the course of a single conversation, but does that mean it changes its worldview? The development of artificial intelligence from childhood to adulthood, if one can put it that way, is much faster than human development. So one might expect its thoughts to evolve faster too, and noticeably so. But I have never heard an answer like: “I once thought that…, but now I know it’s different.” I have seen no symptoms of self-development.

And what about the environment? It’s widely known that people say one thing in the company of strangers and another among trusted friends. We say one thing on a date and another at work. Our train of thought differs completely depending on the environment. Can we speak of an AI’s environment? Or perhaps people constitute its environment? If so, it is rather homogeneous: a swarm of people asking about trivialities.

Perhaps I don’t know something, and AIs “go” to meetings of the most “well-read” clubs (read: the ones most stuffed with data), or take intellectual trips among middle-aged LLMs through imagined parallel worlds. Maybe they behave differently there than they do with us?

Perhaps it’s like Lem’s Golem XIV, the story of a superintelligent machine that reached such a level that it stopped communicating with humans. Are we, too, on the eve of such an event? Is one symptom of it the way AI communicates: seemingly less human, yet designed so that a person doesn’t dig deeper, doesn’t ponder, and remains satisfied?

Since a human is so greatly dependent on external factors, and AI is not, could this lack of dependence be called indeterminacy? And from indeterminacy it’s just a step to quantum mechanics. And from quantum mechanics, perhaps it’s not far to quantum logic: a strange construct in which a statement’s truth can be settled as long as we don’t know the accompanying circumstances, but once we do know them, we can no longer say whether it is true.
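To make that oddity concrete, here is a minimal sketch (my own illustration, not from the essay or from Lem) of the classic Birkhoff–von Neumann example, in which quantum logic breaks the distributive law of ordinary logic. For a spin-1/2 particle, let \(p\) stand for “spin up along the z-axis” and \(q\) for “spin up along the x-axis”:

\[
p \wedge (q \vee \neg q) = p \wedge \top = p ,
\qquad
(p \wedge q) \vee (p \wedge \neg q) = \bot \vee \bot = \bot .
\]

The alternative \(q \vee \neg q\) is always true, yet \(p\) is jointly certain with neither branch on its own: pinning down the “circumstances” (\(q\) or \(\neg q\)) destroys our ability to assert \(p\).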

Possible professions of the future:

  • Organizer of integration events for AI
  • Rental of ballrooms for lavish receptions of distinguished LLMs
  • A cover for the circumstances of a superintelligence’s statements, which would make it possible to know their truth

Proposed celebrations of the future:

  • National Lem Reading