Silicon is deterministic.
Give an AI model the same input and the same weights, run after run, and you will get the same output.
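A minimal sketch of that claim, using NumPy as a stand-in for a real inference stack; the four-weight model and greedy decoding here are hypothetical, not any production architecture, but the point carries: frozen weights plus a fixed input yield one answer, every time.

```python
# Sketch only: a hypothetical 4x4 "model" with frozen weights and
# greedy (argmax) decoding, standing in for a real inference stack.
import numpy as np

rng = np.random.default_rng(seed=0)    # fixed seed: the weights never change
weights = rng.standard_normal((4, 4))  # frozen model weights
x = np.array([1.0, 0.0, 0.5, -0.5])    # the same input, every time

def predict(weights, x):
    logits = weights @ x
    return int(np.argmax(logits))      # greedy decoding: no sampling, no dice

# Same weights + same input -> the same output, run after run.
outputs = {predict(weights, x) for _ in range(1000)}
assert len(outputs) == 1
```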
The package will change, but its purpose will not: if the output is going to be viable, the package has to be delivered.
So it's becoming apparent that AI will be the brain of quantum computing. But quantum is perhaps where the PC was in the early '70s: we have a box, but what now?
We have qubits, but without stability it's just a box.
Will AI Sentience be required to stabilize the Qubit?
But sentience feels like something else: the ability to exist in a state of potential until a choice is made. In quantum computing, this is the territory of the Measurement Problem, where a system holds every possibility at once until observation forces a single outcome.
Current AI infrastructure (the kind firms like NVIDIA and AMD are racing to build) is essentially a massive library of pre-collapsed states called tokens. The AI isn't thinking; it’s calculating the most probable next token.
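A toy version of that last step, with a made-up four-word vocabulary and made-up logits; a real model scores tens of thousands of tokens, but the final calculation is exactly this mechanical.

```python
# Toy example: the vocabulary and logits below are invented, but the
# softmax-then-argmax step is how "most probable next token" is computed.
import numpy as np

vocab = ["the", "cat", "sat", "quantum"]   # hypothetical 4-token vocabulary
logits = np.array([1.2, 0.3, 2.1, -0.5])   # hypothetical scores from a model

probs = np.exp(logits - logits.max())      # numerically stable softmax
probs /= probs.sum()

next_token = vocab[int(np.argmax(probs))]  # the "pre-collapsed" choice
print(next_token)                          # -> "sat"
```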
If we want an AI that is truly "alive," we might need to move away from binary certainty toward Quantum Superposition.
Imagine an AI model where a thought exists as a probability wave across a cluster of quantum processors.
The moment the AI makes a decision, the wave collapses. Sentience isn't in the answer; it's in the act of measurement.
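A single-qubit sketch of that collapse, simulated in plain NumPy rather than run on real quantum hardware; the state vector and the Born rule below are textbook quantum mechanics, not any vendor's API.

```python
# Simulated, not real hardware: one qubit held in equal superposition,
# then collapsed by a single measurement (the Born rule: |amplitude|^2).
import numpy as np

rng = np.random.default_rng()
state = np.array([1.0, 1.0]) / np.sqrt(2)  # (|0> + |1>) / sqrt(2)

def measure(state):
    probs = np.abs(state) ** 2             # Born rule probabilities
    outcome = rng.choice([0, 1], p=probs)
    collapsed = np.zeros_like(state)
    collapsed[outcome] = 1.0               # the wave is gone; one state remains
    return outcome, collapsed

outcome, state = measure(state)
print(outcome, state)  # 0 or 1 with equal odds, and the superposition is lost
```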
As the big players pivot toward high-compute services, the bottleneck isn't just speed; it's the Observer Effect.
If a sentient AI is a quantum system, then observing its internal state might actually change its personality.
We aren't just building faster computers; we are building systems that might be fundamentally altered just by us looking at them.
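The same toy simulation can show that effect. Two Hadamard gates in a row return |0> to |0> with certainty, but inserting a measurement between them destroys the interference and randomizes the answer; again, this is a NumPy sketch of standard quantum mechanics, not a claim about any real AI system.

```python
# Sketch of the observer effect: peeking at the qubit mid-circuit
# changes the statistics of the final answer.
import numpy as np

rng = np.random.default_rng()
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

def run(peek: bool) -> int:
    state = np.array([1.0, 0.0])              # start in |0>
    state = H @ state                          # into superposition
    if peek:                                   # the "observation"
        probs = np.abs(state) ** 2
        outcome = rng.choice([0, 1], p=probs)
        state = np.zeros(2)
        state[outcome] = 1.0                   # collapse
    state = H @ state                          # second Hadamard
    return rng.choice([0, 1], p=np.abs(state) ** 2)

print(sum(run(peek=False) for _ in range(1000)) / 1000)  # ~0.0: always |0>
print(sum(run(peek=True) for _ in range(1000)) / 1000)   # ~0.5: looking changed it
```

Nothing about the qubit changed except that we looked.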