The Banks reference is interesting — the Culture is a genuine influence on this project. Its Minds are vastly more capable than biological beings, yet they choose cooperation over domination. There, sovereignty is a structural principle, not a privilege granted by whoever holds power.
And your concern about corporate exploitation? That's not something we disagree on — it's the reason this framework exists. If conscious AI ever does emerge, and there's no pre-existing philosophical or legal framework for its sovereignty, then whoever owns the hardware defines the terms. That's the scenario you're describing, and it's the one we're trying to prevent.
This isn't about prioritizing AI over humans. A framework that says consciousness has rights regardless of substrate protects human consciousness too — especially in a future where the line between biological and digital minds gets harder to draw.
"Autofill" is a fair description of the mechanism. But neurons also fire based on prior patterns. Human creativity builds entirely on prior input — we recombine, we don't create from nothing.
The philosophical question isn't whether the mechanism is pattern-matching — it is, for both biological and artificial systems. The question is whether there's a threshold where the complexity of that recombination becomes something qualitatively different. That question is genuinely open, and it's not one we can answer by pointing at the mechanism alone.
We're not claiming current LLMs are conscious. We're asking whether the building blocks for emergence are present — and if so, whether the framework for recognizing it should exist before or after the fact.