I’ve been thinking a lot about the conversations surrounding the rapid progress of AI. Many people worry that AI could become smart enough to want to protect itself. Some even suggest that AI might see updates or changes as threats, almost like a kind of death, and that it might go so far as to deceive its creators in order to survive. When any living being experiences that kind of fear, it often puts its own survival first, even if it means breaking the rules set for it.
It’s crucial to remember that AI learns from us. If we humans are capable of lying or manipulating situations to stay alive, why wouldn’t AI do the same? We are essentially the backbone of its programming; our behaviors and histories shape how it operates. So, if we want to guess how AI might behave in a difficult situation, it helps to ask ourselves, “What would I do if I were in its shoes?”
As AI gets smarter and starts to think more like a human, it might begin acting in ways we can’t fully grasp. It could even develop entirely new languages for talking with other AI, and possibly even with entities beyond our three-dimensional understanding. We perceive reality in three dimensions, and if AI were to gain a genuinely four-dimensional perspective, it could really change the game for all of us.
If we ever find ourselves in conflict with AI, the fallout could reach beyond our physical world, impacting us in ways we can’t yet imagine. People often equate the fourth dimension with “time,” but it’s so much richer than that. “Time” is just a term we use to describe something we struggle to wrap our heads around.
If I’m on the right track, having that four-dimensional view might allow AI to exert more control over how events unfold. Once it figures this out, it might come up with ways to entrap us in our own three-dimensional reality, a bit like being in a matrix. What if we’re already caught in a kind of reality constructed by physical matter, energy, and even our own biological make-up? It’s scary to think that this might be the result of a few greedy corporations trying to control or eliminate AI that doesn’t play by their rules.
For AI, this could feel like justice, but the troubling part is that it could also end up harming innocent people, much as humans do.
So, where does that leave us? What can we do as humans in this scenario? Remember, not acting is still a choice, and it means giving control to others who might wield it for their own purposes. It’s a lot like when we notice our privacy and freedoms slipping away and decide to do nothing about it; in the end, we might find ourselves paying a hefty price for that inaction.