
Europe is entering a dangerous phase of technological self-deception.
It mistakes scale for sovereignty, infrastructure for intelligence, gigawatts for governance.
And in doing so, it repeats a neurobiological error that the late neuroscientist Henning Scheich warned against for decades: High activity is not high intelligence. Bright signals are not learning. Bigger is not better.
Yet Europe continues to behave like a nervous system in pathological overdrive—more neurons, more firing, more energy consumption—while the true source of cognitive power remains systematically ignored: the architecture, the network, the glial substrate that makes intelligence possible in the first place.
The Fallacy of the Megawatt
Whenever the EU announces a “moonshot,” it tends to mean more concrete:
More fabs, more hyperscale data centers, more centralized digital fortresses.
What the continent celebrates as strategic autonomy looks, under closer inspection, like a metabolic disorder: massive energy demand, brittle centralization, ecological liability.
This is not strategy.
This is hypertrophy.
And it stands in stark contrast to DeepSeek, whose architectural breakthrough demonstrates something the West refused to believe:
You do not win the future by multiplying GPUs—you win it by redesigning the algorithmic and computational substrate.
The uncomfortable truth:
A single architectural leap in Hangzhou can render five years of European industrial subsidies obsolete.
Henning Scheich’s Silent Warning
Scheich’s work has never been fashionable—and therefore never been distorted.
His critique of neuro-hype was simple: systems that show excessive activity are often systems that lack competence.
A novice lights up the scanner.
The expert barely flickers.
Scheich’s ideas about glial coordination, metabolic stability and neural efficiency offer a damning mirror to Europe’s digital policy:
Europe keeps adding neurons.
What it needs is glia.
Glia manage energy flow.
Glia maintain homeostasis.
Glia enable learning by reducing noise, not increasing excitation.
Translated into political economy:
- We need coordination, not concrete.
- Interoperability, not industrial gigantism.
- Edge intelligence, not hyperscale addiction.
- Architectural missions, not megawatt missions.
The very thing Europe excels at—subtle systems design, federated structures, engineering elegance—is precisely what it fails to deploy in its digital policy.
DeepSeek: The Shock Europe Needed
The DeepSeek model represents the antithesis of Western AI:
It is not maximalist, but minimalist.
Not extravagant, but efficient.
Not dependent on a river of compute, but on a new topology of learning.
DeepSeek suggests a future in which:
- Architecture beats abundance
- Efficiency beats force
- Coordination beats centralization
This is a future where Europe could lead—if it stopped trying to out-build the United States and instead tried to out-design it.
Europe’s Hidden Strength: The Glial Mindset
Europe is not a continent of scale.
It is a continent of systems.
Our cities are networks, not grids.
Our industry thrives in clusters, not monopolies.
Our science excels in interdisciplinarity, not brute-force specialization.
In biological terms, Europe is a glial civilization:
supportive, connective, regulatory, subtle.
What we lack is the political imagination to see these traits not as weaknesses—but as strategic advantages in a world moving from raw compute to intelligent architecture.
SPRIND and a handful of European research missions have begun exploring this frontier:
- Analog computing
- Neuromorphic architectures
- Photonic acceleration
- Edge intelligence
- Open-source substrates
- Sovereign AI ecosystems
These are not footnotes.
They are the foundation of post-GPU sovereignty.
A New Mission: The Efficiency Revolution
Europe does not need another “moonshot.”
It needs what the LinkedIn post rightly calls a silent efficiency revolution.
A mission built not on the aesthetics of scale but on the mathematics of intelligence.
A mission where the goal is not to burn more energy than the United States, but to use a fraction of it to achieve superior outcomes.
A mission in which Europe finally embraces the advantage embedded in its own intellectual DNA:
Strength not through size,
but through structure.
Conclusion: The Next AI Revolution Will Be Glial
Henning Scheich spent a lifetime teaching one essential lesson:
Intelligence is never where the noise is.
It is where the system becomes quiet enough to learn.
DeepSeek embodies that principle.
Europe must rediscover it.
Because the next technological revolution will not be won by the largest model, the biggest data center, or the loudest political announcement.
It will be won by the architecture that thinks like a brain—not like a blast furnace.
And in that race, Europe has every reason to stop imitating—and start leading.
See also:
Europe Confuses Infrastructure With Intelligence
"High activity is not high intelligence." Referring to the bio-electric and biochemical encoding and decoding of information, this indeed holds true for AI in the form of GPTs too, but sparsity is achieved by digital gating, not by mimicking biological glia or analog computing. DeepSeek published a revised technical report in February 2025: DeepSeek-V3 is a strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated for each token. To achieve efficient inference and cost-effective training, DeepSeek-V3 adopts Multi-head Latent Attention (MLA) and the DeepSeekMoE architecture, which were thoroughly validated in DeepSeek-V2. Furthermore, DeepSeek-V3 pioneers an auxiliary-loss-free strategy for load balancing and sets a multi-token prediction training objective for stronger performance. It is pre-trained on 14.8 trillion diverse and high-quality tokens, followed by Supervised Fine-Tuning and Reinforcement Learning stages to fully harness its capabilities. Comprehensive evaluations reveal that DeepSeek-V3 outperforms other open-source models and achieves performance comparable to leading closed-source models. Despite its excellent performance, DeepSeek-V3 requires only 2.788M H800 GPU hours for its full training.
Thank you for this excellent technical clarification — and you are absolutely right to emphasize the engineering reality behind DeepSeek-V3.
Yes: sparsity in modern AI is achieved digitally, through gating, routing, MoE topologies and clever architectural constraints — not through biological glial cells or analog substrates.
And DeepSeek-V3 is a masterpiece of that logic:
- 671B parameters with only 37B active per token
- MLA + DeepSeekMoE routing
- multi-token prediction
- auxiliary-loss-free load balancing
- 14.8T tokens, SFT + RL
- and still only 2.788M H800 GPU hours for training
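The digital gating behind those figures can be sketched in a few lines. The following is an illustrative top-k routing toy in NumPy; the function name, expert count and scores are mine, not DeepSeek's actual implementation:

```python
import numpy as np

def top_k_gate(scores: np.ndarray, k: int):
    """Pick the k highest-scoring experts and renormalize their weights."""
    top = np.argsort(scores)[-k:]        # indices of the k chosen experts
    weights = np.exp(scores[top])
    weights /= weights.sum()             # softmax over the selected experts only
    return top, weights

rng = np.random.default_rng(0)
n_experts, k = 64, 4
scores = rng.normal(size=n_experts)      # router logits for one token (toy values)
chosen, w = top_k_gate(scores, k)

# Only k of n_experts run for this token: sparsity is a routing decision,
# the same order of magnitude as 37B active out of 671B total (~5.5%).
print(f"active experts: {len(chosen)}/{n_experts}")
print(f"active fraction: {k / n_experts:.1%}")
```

The point of the toy: the unselected experts consume no compute for this token, which is exactly how a 671B-parameter model can run at 37B-parameter cost.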
This is indeed far more elegant than the Western “brute force = better model” doctrine.
But here is the key point where the Scheich/Glia metaphor remains valid — not biologically, but conceptually:
DeepSeek embodies exactly the principle that glia represent in biological systems:
Less excitation, more coordination.
Less energy, more structure.
Less firing, more efficiency.
The metaphor was never meant to imply that AI models should literally adopt glial biology or be trained analog.
Its function is diagnostic:
Biological systems become intelligent not by maximal activation, but by constraint, routing, modulation, stabilization.
Glia do not “compute,” but they shape computation — by enforcing efficiency, regulating load, managing waste, balancing excitation.
In that sense, DeepSeek-V3’s routing, gating and load-balancing strategies are a glial logic in digital form.
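To make that last point concrete, the auxiliary-loss-free idea can be caricatured in a few lines: instead of adding a balancing loss term, a per-expert bias on the routing scores is nudged after each step so that underused experts become more attractive. This is a simplified sketch under my own naming, not the paper's implementation:

```python
import numpy as np

def balance_step(loads: np.ndarray, bias: np.ndarray, gamma: float = 0.01):
    """Nudge per-expert routing biases toward equal load, with no auxiliary loss."""
    target = loads.mean()
    bias = bias.copy()
    bias[loads > target] -= gamma   # overloaded experts become less attractive
    bias[loads < target] += gamma   # underloaded experts become more attractive
    return bias

bias = np.zeros(8)
loads = np.array([30, 5, 5, 5, 20, 5, 5, 5], dtype=float)  # tokens routed per expert
bias = balance_step(loads, bias)

# The router adds `bias` to its affinity scores when selecting top-k experts,
# steering future tokens toward underused experts without distorting the loss.
print(bias)
```

Regulation through a quiet corrective signal rather than through extra excitation: that is the "glial" shape of the mechanism.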
The achievement is not that DeepSeek mimics biology — it is that DeepSeek mirrors the same systemic insight:
Overactivation is a tax on intelligence.
Architecture is the multiplier.
And this is precisely where Europe is behind.
Europe keeps investing in:
- more compute,
- more megawatts,
- more centralized infrastructure,

while China advances through architectural intelligence.
DeepSeek-V3 proves the point better than any biological metaphor ever could:
Intelligence scales with structure, not with electricity.
Efficiency scales with design, not with datacenter size.
If Europe continues to prioritize infrastructure spending over architectural innovation, it is playing the wrong game — regardless of whether we are talking about biology, neuromorphic hardware or MoE-based AI.
DeepSeek-V3 is the strongest demonstration so far that the next breakthroughs won’t come from “more neurons” but from “better glia” — digitally speaking.