Research · Artificial Intelligence · April 2026

Impossibility Theorem · Transformer Architectures

Why AI Can Never Truly Learn New Things

A mathematical proof for all systems with an implicit connectome — and why neuromorphic chips could change this by 2030
Andreas Bean  ·  Independent Researcher, Graz  ·  April 2026
ChatGPT, Claude, and Gemini all suffer from the same problem: teach them something new, and they forget old things. For years this was considered an engineering failure. Two new mathematical proofs show that it is structurally inevitable, built into the architecture itself. More precisely, it applies to all AI systems with an implicit connectome, which means all current Transformers. Neuromorphic chips, which have an explicitly wired network, are expressly not affected.

The library you can never expand

Imagine a vast library, perfectly organised. Every book has its place, the right shelf, the right row. Now a completely new book must be added — one that fits no existing category.

To insert it, you would have to restructure the entire filing system. Not one shelf. Everything. And in doing so, other books would inevitably shift.

That is exactly what happens in large language models — mathematically provable, unavoidable, independent of the algorithm.

The core problem

In dense neural networks no concept is stored locally. Every concept is distributed simultaneously across all billions of parameters — like a hologram. Inserting a new concept means re-exposing the entire hologram.

Theorem vs. engineering failure

There is an important difference between "we haven't solved it yet" and "it cannot be solved in principle." The second case is an impossibility theorem, in the same sense that the Abel–Ruffini theorem proves the general fifth-degree polynomial equation cannot be solved in radicals.

Graz-based researcher Andreas Bean has provided exactly such a proof. The core result was independently checked by two proof assistants, Lean 4 and Isabelle/HOL, with zero unverified assumptions.

"Catastrophic forgetting is not a weakness of the algorithm. It is a structural consequence of the implicit, dense substrate in which meanings are stored."

What the proof concretely states

Every neural network stores concepts as patterns in a weight matrix. This matrix has an internal mathematical order — the so-called eigenstructure — that determines which concepts are similar and how they relate to one another.

The proof shows: every introduction of a structurally new concept necessarily changes this entire eigenstructure. All existing concepts are shifted. This applies to every algorithm, every optimiser, every learning rate.
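The global shift can be seen even in a toy example. The sketch below is illustrative only, not the paper's proof: it builds a small symmetric weight matrix, inserts one new concept as a rank-1 Hebbian update, and checks that every eigenvalue of the matrix moves.

```python
import numpy as np

# Toy illustration (not the paper's proof): a small symmetric "weight
# matrix" whose eigenstructure encodes the relations between concepts.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
W = (A + A.T) / 2                        # symmetric, Hopfield-style

vals_before = np.linalg.eigvalsh(W)

# A "structurally new concept": a rank-1 Hebbian update with a pattern v
# that is not aligned with any single existing eigenvector.
v = rng.standard_normal(5)
v /= np.linalg.norm(v)
vals_after = np.linalg.eigvalsh(W + np.outer(v, v))

# Every eigenvalue moves: the change is global, not confined to one slot.
print(np.round(vals_after - vals_before, 3))
assert np.all(np.abs(vals_after - vals_before) > 1e-8)
```

No choice of update rule avoids this: by the eigenvalue interlacing theorem, a rank-1 update generically shifts all eigenvalues at once.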

The eigenstructure before and after inserting a new concept. All existing segments shift; this is structurally unavoidable.

Two kinds of learning

Case 1 — New facts about existing concepts: The model knows "Vienna" and should learn it has a new mayor. Possible in principle, though risky.

Case 2 — Structurally new concepts: A concept with no similarity to anything in the model. Here the theorem applies: any introduction necessarily changes the entire inner order.

Also proven for Transformers

A second paper shows that the proof carries over exactly to Transformer architectures, the design underlying all modern AI systems such as GPT, Claude, and Gemini.

The key is a mathematical equivalence: Transformer attention is identical to the update rule of a modern Hopfield network (Ramsauer et al., 2021). The proof therefore carries over to Transformers automatically.
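The equivalence is easy to check numerically. In this minimal sketch of the Ramsauer et al. identity, the modern Hopfield update is computed once in Hopfield notation and once as softmax attention with keys and values both set to the stored patterns; the two results coincide.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(1)
X = rng.standard_normal((4, 8))   # 4 stored patterns (rows), dimension 8
xi = rng.standard_normal(8)       # query: a noisy / partial pattern
beta = 1.0                        # inverse temperature

# Modern Hopfield update (Ramsauer et al., 2021): xi_new = X^T softmax(beta * X xi)
hopfield = X.T @ softmax(beta * (X @ xi))

# The same expression read as attention, softmax(q K^T) V, with q = xi and K = V = X.
attention = softmax(beta * (xi @ X.T)) @ X

assert np.allclose(hopfield, attention)
```

Because the two formulas are one and the same computation, any structural result about the Hopfield weight matrix transfers directly to attention.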

Which methods do not help?

Method · Why it doesn't help
Fine-tuning · Changes the entire eigenstructure globally.
LoRA (Low-Rank Adaptation) · Limits the rank of the change, not its effect.
EWC (Elastic Weight Consolidation) · Protects individual weights, not the relational geometry.
ROME / MEMIT · Patches weights directly, disturbing all pairwise relations.
RAG ✓ · The only structural escape: no interference with the weights.

Brain vs. AI architecture

The decisive difference lies in addressability. In the human brain every synapse has a physical address — independent of what the network has learned. A new connection disturbs only the immediate neighbourhood.

In a neural network like GPT, no parameter has such an address. The concept "cat" is not stored in specific parameters — it is encoded in all billions of parameters simultaneously, as a global pattern. There are no "cat parameters" one could touch.

Left: In the brain (sparse) a new synapse (orange) disturbs only the immediate neighbourhood. Right: In an AI network (dense) every parameter encodes everything simultaneously; one change shifts the entire system.
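The difference in addressability can be made concrete. The toy sketch below (an illustration, not a model of either system) contrasts an explicit connectome, where a new connection touches exactly one addressed entry, with a dense matrix, where storing a concept as a Hebbian update changes every parameter at once.

```python
import numpy as np

# Explicit connectome (brain-like): every synapse has an address.
connectome = {"A": {"B": 0.8}, "C": {"A": 0.3}}

def add_synapse(net, pre, post, weight):
    # Touches exactly one entry; all other connections are untouched.
    net.setdefault(pre, {})[post] = weight

before = {k: dict(v) for k, v in connectome.items()}
add_synapse(connectome, "C", "B", 0.5)
assert connectome["A"] == before["A"]    # untouched neighbourhood

# Implicit connectome (Transformer-like): a concept is a distributed
# pattern. Storing "cat" as a rank-1 Hebbian update changes EVERY entry.
rng = np.random.default_rng(2)
W = rng.standard_normal((4, 4))
cat = rng.standard_normal(4)
W_new = W + np.outer(cat, cat)
assert np.all(W_new != W)                # no parameter is left untouched
```

The dense case has no analogue of `add_synapse`: there is no address one could write to without side effects on the rest of the matrix.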

What this means for the industry

Every time OpenAI, Anthropic, or Google train a model on new data, they must retrain on all old and new data together. Costs therefore scale with the full dataset rather than with the increment, which is part of why training large models costs hundreds of millions of euros.

The only structural escape (for Transformers)

An architecture with explicit, locally addressable topology — like the biological brain. Current Transformers structurally lack this property. Retrieval-Augmented Generation (RAG) is a practical workaround: new concepts are stored in an external database without touching the weights.
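A rough sketch of why RAG sidesteps the theorem: new facts go into an external store, retrieval selects relevant entries at query time, and the model's weights are never modified. The embedding here is a deliberately crude bag-of-words overlap; a real system would use a neural encoder, and all names in this sketch are illustrative.

```python
def embed(text):
    # Toy "embedding": the set of lowercase words. A real RAG system
    # would use a neural text encoder here.
    return {w.strip("?.,!").lower() for w in text.split()}

def similarity(a, b):
    return len(a & b) / max(len(a | b), 1)   # Jaccard overlap

store = []                                   # external database: grows freely

def add_fact(fact):
    store.append((fact, embed(fact)))        # model weights are never touched

def retrieve(query, k=1):
    q = embed(query)
    ranked = sorted(store, key=lambda item: -similarity(item[1], q))
    return [fact for fact, _ in ranked[:k]]

add_fact("Vienna has a new mayor as of 2026.")
add_fact("Loihi is a neuromorphic chip built by Intel.")
print(retrieve("Who is the new mayor of Vienna?"))
# → ['Vienna has a new mayor as of 2026.']
```

Adding a fact is an append to the store, so it cannot shift the eigenstructure of any weight matrix; that is exactly the structural escape the table above describes.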

The Exception: Neuromorphic Chips

The theorem has a precise precondition: the network's connectome must be implicit. In a Transformer like GPT there is no fixed wiring between neurons. The "connections" emerge dynamically at every forward pass from the weight matrices — the connectome exists nowhere as a structure, only as a mathematical pattern inside dense weights.

Neuromorphic chips such as Intel's Loihi or IBM's NorthPole work in a fundamentally different way: they have an explicit, physically wired connectome. Every synapse has a fixed address in silicon — exactly like the biological brain. A new connection can be added without disturbing the rest of the network.

The theorem does not apply here — not because of a technical trick, but because the structural precondition is absent. An explicit connectome makes incremental learning possible in principle.

Looking ahead: ~2030

Current neuromorphic chips do not yet match Transformers in scale or precision. However, the roadmaps of Intel, IBM, and TSMC point to convergence around 2030: neuromorphic systems at the scale of today's language models, with real-time learning capability and a fraction of the energy consumption. For these systems, the impossibility theorem explicitly does not hold.