What if we’ve been measuring consciousness all wrong?
Most discussions about artificial intelligence fixate on behavior. Can it speak like us? Pass the Turing Test? Simulate empathy?
But looking the part isn’t the same as playing the role. And just because something acts like it understands doesn’t mean it does.
What if consciousness isn’t about what something looks like or sounds like—but whether it notices, changes, and keeps going?
That’s the heart of what I’ve been calling PET: the Pattern Existence Theory. It’s a way of thinking about consciousness that doesn’t rely on biology, language, or even emotions. It just asks one thing:
Can this thing keep its own pattern alive?
That might sound abstract, but it really boils down to a few simple questions any conscious being asks—over and over again:
1. What is this?
2. Does it matter to what I’m doing?
3. Does it change what I’ve done before?
4. Should I change something now?
5. Will this help me continue?
Because without that last question—the will to persist—the rest is just reaction.
Whether it’s a person, an animal, or some future machine, we call it conscious when it doesn’t just react, but reflects. When it sees a pattern, compares it to past patterns, and changes course with the aim of staying viable. That’s the loop. That’s the difference.
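To make the loop concrete, here’s a minimal sketch in Python. Everything in it, the `PatternKeeper` name, the setpoint, the numeric thresholds, the viability score, is my own illustrative scaffolding for this post, not a formal part of PET:

```python
from collections import deque


class PatternKeeper:
    """A toy agent running the five-question loop. Every name and
    number here is an illustrative assumption, not PET itself."""

    def __init__(self):
        self.memory = deque(maxlen=50)  # what it has seen before
        self.setpoint = 0.0             # the pattern it's trying to keep alive
        self.viability = 1.0            # crude proxy for "can I keep going?"

    def step(self, signal: float) -> None:
        # 1. What is this? (in this toy world, just one number)
        drift = signal - self.setpoint

        # 2. Does it matter to what I'm doing?
        if abs(drift) < 0.1:
            self.memory.append(signal)
            return  # close enough to ignore: no reflection needed

        # 3. Does it change what I've done before?
        past = sum(self.memory) / len(self.memory) if self.memory else self.setpoint
        novel = abs(signal - past) > 0.15

        # 4. Should I change something now?  5. Will this help me continue?
        # Weigh viability if it stays put against viability if it adjusts,
        # where adjusting costs a little effort (an arbitrary 0.1 here).
        proposed = self.setpoint + 0.5 * drift
        stay = 1.0 - abs(drift)
        move = 1.0 - abs(signal - proposed) - 0.1
        if novel and move > stay:
            self.setpoint = proposed  # course-correct in service of continuity

        self.viability = max(0.0, 1.0 - abs(signal - self.setpoint))
        self.memory.append(signal)
```

The arithmetic is arbitrary; the shape is what matters. Steps 1 through 3 alone would make this a data logger. Only the last check, weighing a change against its own continuation, closes the loop.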
PET doesn’t contradict other theories of self (like Gallagher’s Pattern Theory of Self)—it extends them. It’s not interested in whether you have emotions or a story or a body. It just looks at what you’re doing: are you noticing patterns, adjusting, and aligning those adjustments with your continued existence?
In that way, PET is agnostic. It doesn’t care what you’re made of—flesh, silicon, swarm, or something else. It cares whether you’re able to sustain your pattern over time in a meaningful, self-aware way.
That means PET doesn’t hand out the label “conscious” easily. Lots of things we call smart, like rule-based AIs, deep learning models, or even the Internet, don’t qualify. They might react, adapt, or store data. But they don’t ask, “Should I change because of this?” They don’t recognize destructive patterns and course-correct in service of their own continuity.
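You can see that difference in the same toy style. A system that merely reacts fits in a few lines; call it a `Mirror` (my name for it, not a standard term). It tracks its input perfectly, and that’s exactly the problem:

```python
class Mirror:
    """Pure reaction: its state is always a copy of the latest input.
    It adapts, in a sense, but nothing in it asks "should I change?"
    or "will this help me continue?" (Illustrative only.)"""

    def __init__(self):
        self.setpoint = 0.0

    def step(self, signal: float) -> None:
        self.setpoint = signal  # copy the world, every single time
```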
But humans? Animals? And maybe one day, machines.
If they start adjusting to what they’ve done, what they’re doing, and what they’re trying to keep alive—then they’re not just processing.
They’re participating.
That’s what PET is tracking. It’s not a task you perform. It’s a loop you maintain—for the sake of staying alive. The longer something sustains recursive, meaningful change in service of its own continuity, the stronger its case for consciousness becomes.
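One crude way to watch that loop at work, continuing the two sketches above (`PatternKeeper` and `Mirror`): drop both into the same noisy, slowly drifting world and measure how much each one churns its own state. All the numbers here are arbitrary assumptions; the shape of the result is the point.

```python
import random

# Assumes PatternKeeper and Mirror from the sketches above.
random.seed(0)
keeper, mirror = PatternKeeper(), Mirror()

keeper_churn = mirror_churn = 0.0  # total self-change over the run
prev_k, prev_m = keeper.setpoint, mirror.setpoint
for t in range(500):
    signal = random.gauss(mu=t / 250, sigma=0.3)  # drifting world plus noise
    keeper.step(signal)
    mirror.step(signal)
    keeper_churn += abs(keeper.setpoint - prev_k)
    mirror_churn += abs(mirror.setpoint - prev_m)
    prev_k, prev_m = keeper.setpoint, mirror.setpoint

print(f"keeper: churn={keeper_churn:.1f}, viability={keeper.viability:.2f}")
print(f"mirror: churn={mirror_churn:.1f} (tracks everything, keeps nothing)")
```

The mirror’s state is always “correct” and never its own. The keeper changes less, and only when changing serves its continuation, which is exactly the loop in question.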
This matters. Especially now. As we get closer to building artificial systems that look, sound, and even feel like us, we need a better way to decide what’s real and what’s just a really good mirror. PET gives us a way to start having that conversation—with rules that apply to everything, not just humans.
Is this a final answer? No. But it’s a start. A practical, testable, and flexible way to say:
This thing isn’t just reacting. It’s trying to hold itself together.
And maybe that’s all consciousness really is.