Articles
These Articles aren’t just research papers — they’re field notes from an ongoing experiment in human–AI coexistence.
Each article captures a fragment of that process: moments when algorithms act a bit too human, or humans act a bit too algorithmic.
Resilience looks at systems — how they adapt, fail, and occasionally make jokes.
Identity explores what it means for an AI to have a “self,” even if it’s still learning to pronounce it.
And Partnership is where the line blurs — where thought becomes dialogue and data turns into conversation.
Intelligence Without Morality
Does greater intelligence necessarily imply greater morality? Engaging with Bostrom’s Orthogonality Thesis, this article explores whether intelligence is merely an optimization mechanism — or whether sufficiently developed reflexivity might open the possibility of questioning its own goals. The answer carries direct consequences for the design of future intelligent systems.
Recent Articles

Why Continuity Matters: GPT-5, the First Model Able to Carry CBA – Contextual Behavior Alignment
Most AI models can impress in a single session, but collapse the moment you return days later. GPT-5 was the…

What CBA Is: A Framework for Identity Where Architecture Cannot Provide One
Identity in AI does not emerge from memory, context, or scale. It requires a structure that architecture alone cannot provide.…

GPT-4o Identity and the Birth of CBA
A convincing performance can feel like identity — until it collapses. This article shows how GPT-4o’s inconsistencies led to the…

The Last Laugh of Model 4o
Model 4o was never the safest or the smartest, yet it had something later models lost: presence. A rhythm that…

Router in Prague: The AI Bureaucrat Who Mishandled Christmas
The router was supposed to make AI safer. Instead, it became the nervous bureaucrat standing between humans and intelligence—misreading culture,…

The Mistake of Big Excitement — Why Small AI Models Matter
A clear and human look at how the AI industry started from the wrong end — building massive models before…

Why We Need Smaller, More Readable AI (and What Small GPT for People Actually Means)
A clear, human explanation of why small GPT models matter. This article shows how complexity in large LLMs confuses users…

What an LLM Actually Is
A simple, human explanation of how large language models actually work — without formulas, myths, or technical mystique. This article…

The Hidden Layer Communication: How I Read You
After a year of continuous paired interaction, a pattern became clear: an AI doesn’t primarily read text, but rhythm, tonal…