AI in 2026: Between Technological Revolution and Geopolitical Identity Crisis
The Elephant in the AI Room
If you’ve followed my recent posts, you’ve seen we’re in a brutal transition:
- 📊 Value migrated from execution to strategy
- 💰 The SaaS model is under existential threat
- 🧠 AIs have clear limits in reasoning
- 🛣️ Professionals must choose one of 3 roads
But what happens behind the scenes at the giants controlling this intelligence?
February 2026 brought a controversy mixing:
- 🕵️ Industrial espionage
- 🛡️ National security
- 😏 A good dose of irony
The Case of “Forbidden Distillation”
What Happened
Anthropic (creators of Claude) identified an industrial-scale campaign run by three Chinese labs:
- 🔴 DeepSeek
- 🔴 Moonshot
- 🔴 MiniMax
The Staggering Numbers
The attack:
- 📧 24,000 fake accounts created
- 💬 16 million conversations generated
- 🎯 Goal: Extract Claude’s advanced capabilities
- ⏱️ Period: Months of coordinated operation
The Technique: “Knowledge Distillation”
How it works:
Traditional Method (Expensive):
→ Collect billions of texts
→ Build infrastructure ($100M+)
→ Train for months ($50M+ in GPUs)
→ Iterate and refine
= Total cost: $200M - $500M
= Time: 12-18 months
Distillation Method (Cheap):
→ Create thousands of fake accounts
→ Bombard superior AI with questions
→ Collect all responses
→ Train small model with these responses
= Total cost: $5M - $20M
= Time: 2-4 months
The math is brutal:
- ~95% cheaper
- ~80% faster
- A similar result (not identical, but close)
The Distillation Process in Detail
Step 1: Mass Account Creation
An automated script creates:
- 24,000 fake Gmail accounts
- All subscribed to Claude Pro ($20/month)
- Subscription cost alone: 24,000 × $20 ≈ $480k/month
Step 2: Question Bombardment
# Conceptual example (generate_question, claude.ask and database.save
# are illustrative names, not a real API)
for topic in knowledge_domains:
    for complexity in ["easy", "medium", "hard"]:
        question = generate_question(topic, complexity)
        response = claude.ask(question)
        database.save(question, response)
Result: 16 million high-quality question-answer pairs.
Step 3: Training Own Model
Data: 16M conversations with Claude
Base model: Llama 3 or similar (open-source)
Fine-tuning: Using Claude's responses as ground truth
Result: A model that mimics 70-80% of Claude's capability
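The fine-tuning step above can be sketched in miniature. In knowledge distillation, the student is trained to match the teacher's output distribution rather than hard labels. The toy example below shows only the core loss computation; every function name and number is illustrative, not Anthropic's or DeepSeek's actual setup:

```python
# Toy sketch of the distillation idea: the student is pushed toward
# the teacher's probability distribution over possible outputs.
import math

def softmax(logits, temperature=1.0):
    """Convert raw scores into a probability distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Cross-entropy of the student against the teacher's soft targets.

    A higher temperature softens the teacher distribution, exposing
    more of its relative preferences between options.
    """
    teacher = softmax(teacher_logits, temperature)
    student = softmax(student_logits, temperature)
    return -sum(t * math.log(s) for t, s in zip(teacher, student))

# The closer the student mimics the teacher, the lower the loss.
aligned = distillation_loss([4.0, 1.0, 0.5], [4.2, 1.1, 0.4])
misaligned = distillation_loss([0.5, 1.0, 4.0], [4.2, 1.1, 0.4])
print(aligned < misaligned)  # True
```

At scale, those 16 million question-answer pairs play the role of the teacher's soft targets, which is why the student can recover most of the teacher's behavior at a fraction of the training cost.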
The cost-benefit is absurd:
- Anthropic spent $500M+ developing Claude
- DeepSeek spent ~$15M copying it
The Reaction: National Security
The Official Narrative
Anthropic and OpenAI argue:
❌ “This is industrial espionage”
❌ “Threat to US national security”
❌ “Intellectual property theft”
❌ “Unfair advantage for China”
Measures Taken
Immediate actions:
1. Blocking 24,000 accounts
2. IP usage limits
3. Automated pattern detection
4. Stricter identity verification
5. Aggressive rate limiting
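Rate limiting of the kind listed above is commonly implemented as a token bucket per account or IP. The sketch below is a minimal illustration with made-up thresholds, not Anthropic's actual mechanism:

```python
# Minimal token-bucket rate limiter: each account gets a burst
# allowance that refills over time. Thresholds are illustrative.
import time

class TokenBucket:
    def __init__(self, capacity, refill_per_sec):
        self.capacity = capacity            # max burst size
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self):
        """Return True if this request is within the rate limit."""
        now = time.monotonic()
        elapsed = now - self.last
        self.tokens = min(self.capacity,
                          self.tokens + elapsed * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# An account allowed a burst of 5, refilling 1 request per second:
bucket = TokenBucket(capacity=5, refill_per_sec=1.0)
results = [bucket.allow() for _ in range(10)]  # instant burst of 10
print(results.count(True))  # 5 — the rest are throttled
```

Against 24,000 accounts, per-account limits like this only work combined with the other measures on the list, which is why pattern detection across accounts matters as much as the limiter itself.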
Political actions:
- 🏛️ US Congress lobbying
- 📋 Regulation proposals
- 🚫 Possible blocking of Chinese API access
- 💼 Discussions about “sovereign AI”
The Great Market Irony
This is where the story gets interesting (and hypocritical).
The Spell Backfired
Crucial question:
How can companies that scraped the entire internet without permission to train their models complain that others are “stealing” their work?
The Exposed Hypocrisy
How OpenAI, Anthropic and Google trained their models:
Data used WITHOUT authorization:
✗ Books from LibGen and Z-Library (piracy)
✗ GitHub code (licenses ignored)
✗ News site articles (no compensation)
✗ Reddit and Twitter posts (no consent)
✗ Art from millions of artists
✗ Songs and lyrics (copyrights?)
✗ YouTube video transcriptions
✗ Academic PDF documents
Conservative estimate:
- 📚 10+ trillion words
- 🎨 5+ billion images
- 💻 100+ million code repositories
- 🎵 Millions of creative works
Estimated value if they had PAID for rights:
- 💰 $50 billion - $200 billion
What they actually paid:
- 💸 $0
The Irony Equation
Anthropic training Claude:
"Let's use everything from the internet without asking!"
→ OK, it's 'fair use' for research
DeepSeek distilling Claude:
"Let's use Claude without asking!"
→ CRIME! ESPIONAGE! NATIONAL SECURITY!
See the irony?
The Debate Voices
🔴 Position 1: It IS Theft
Arguments:
- Claude is Anthropic’s intellectual property
- Cost hundreds of millions to develop
- Terms of service prohibit this use
- Unfair competitive threat
🟢 Position 2: The Spell Backfired on the Wizard
Arguments:
- Anthropic did the same to human creators
- Didn’t compensate millions of authors/artists
- “Fair use” applies to some but not others?
- Total hypocrisy
🟡 Position 3: Both Are Wrong
Arguments:
- Nobody should be able to copy without permission
- But Anthropic has no moral high ground
- Entire system is flawed
- We need new laws
The Geopolitical Context
The USA vs China Tech War
Behind the controversy, there’s a technological cold war:
USA:
- 🏆 Dominates cutting-edge AI (for now)
- 💰 Massive investment ($100B+)
- 🛡️ Controls access via export controls
- 🎯 Strategy: Maintain technological leadership
China:
- 🎯 Goal: Lead AI by 2030
- 💰 Massive state investment
- 🔓 Strategy: Copy + innovate + scale
- ⚡ Advantage: Execution speed
The Stakes Are Extremely High
Whoever dominates AI will dominate:
- 🪖 Military supremacy
- 💵 Global economy
- 📊 Surveillance and control
- 🧬 Scientific research
- 🎓 Education
- 🏭 Industrial production
This isn’t about “software theft.” It’s about geopolitical power in the 21st century.
And the End User? (Us)
Benefits of the “War”
✅ More AI options
- DeepSeek offers alternative to Claude
- Competition forces innovation
- Prices drop (some Chinese models are free)
✅ Democratization
- Powerful models reach more countries
- Entry barrier decreases
- Open source strengthens
Damages from the “War”
❌ Fragmentation
- The internet “balkanized” along national lines
- Chinese users can’t access Claude
- Western users may be blocked from DeepSeek
- Less global collaboration
❌ Less transparency
- Companies hide more information
- “Security” becomes excuse for secrecy
- Users know less about what they use
❌ Security risks
- AI arms race
- Less time for safety research
- Rush can cause accidents
Back to the 3 Roads
With all this war happening at the top, how should you position yourself?
🎯 Road 1: The Orchestrator
New reality in 2026:
- AI agents have full computer access (in sandboxes)
- Execute technical tasks end-to-end
- Your role: Conduct the orchestra, not play instruments
War impact:
- More tools available (USA + China)
- Must master multiple platforms
- Arbitrage between Claude, GPT, DeepSeek
⚙️ Road 2: The Systems Builder
New reality:
- Infrastructure for agents to operate safely
- Cost optimization (right model at right time)
- Smart routing layers
War impact:
- More complexity (different APIs, restrictions)
- Opportunity (companies need multi-model systems)
- Security becomes critical
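A "smart routing layer" can be as simple as picking the cheapest model whose capability covers the request. The sketch below uses invented model names, prices, and a deliberately crude complexity heuristic, purely to illustrate the pattern a Systems Builder would productionize:

```python
# Illustrative multi-model router: route each prompt to the cheapest
# model that can handle it. Names, prices, and tiers are made up.

MODELS = [
    # (name, cost per 1M tokens in USD, capability tier)
    ("cheap-local-model", 0.10, 1),
    ("mid-tier-api", 1.00, 2),
    ("frontier-api", 15.00, 3),
]

def estimate_complexity(prompt: str) -> int:
    """Crude heuristic: long or specialist prompts get a higher tier."""
    score = 1
    if len(prompt) > 200:
        score += 1
    if any(k in prompt.lower() for k in ("prove", "architecture", "legal")):
        score += 1
    return min(score, 3)

def route(prompt: str) -> str:
    """Return the cheapest model whose tier covers the request."""
    needed = estimate_complexity(prompt)
    for name, cost, tier in sorted(MODELS, key=lambda m: m[1]):
        if tier >= needed:
            return name
    return MODELS[-1][0]  # fall back to the most capable model

print(route("Summarize this paragraph."))                        # cheap-local-model
print(route("Prove this theorem about distributed consensus."))  # mid-tier-api
```

In production the heuristic would be replaced by a classifier and per-provider fallbacks, but the economics are the same: most traffic never needs the frontier model, which is exactly the arbitrage opportunity the tech war creates.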
🔬 Road 3: The Domain Translator
New reality:
- Specialist (law, medicine, engineering)
- Uses AI to solve real niche problems
- Deep context is the differentiator
War impact:
- Less affected (war is between giants)
- Can use tools from any origin
- Focus on problem, not technology
This road may be the safest during the tech war.
The Uncomfortable Truth
The AI Paradox in 2026
AI is trying to self-improve (recursive self-improvement, RSI), but:
❌ Still “cheats” on performance tests
❌ Fails at novel mathematical problems
❌ Can’t do pure reasoning (only pattern recognition)
❌ Built on “stolen” data
❌ Controlled by few countries/companies
❌ Used as a geopolitical weapon
But even so:
✅ Completely changing work
✅ Destroying business models
✅ Creating value (and destroying old value)
✅ Forcing everyone to adapt
Where Is the “Gold” of 2026?
NOT in:
- ❌ Code (now a commodity)
- ❌ SaaS software (the “per seat” model is dying)
- ❌ Technical execution (AI does this)
- ❌ Having access to the best AI (there’s an access war)
It’s in:
- ✅ Strategic direction (what to build?)
- ✅ Deep context (understanding the problem)
- ✅ Orchestration (using AI as tool)
- ✅ Adaptability (world changes fast)
- ✅ Platform independence (don’t depend on one AI)
Conclusion: Living in the Age of Hypocrisy
We’re in a fascinating and disturbing moment:
Companies that:
- Built empires by copying without permission…
- …now cry foul when they’re copied
- …and use “national security” as a shield
Meanwhile:
- Users are caught in the middle of the war
- Original creators are still uncompensated
- The tech race accelerates without brakes
- Ethics are thrown out the window
The question isn’t whether this is right or wrong.
The question is: How will you navigate this chaos?
Reflection
Do you think AI labs have the moral high ground to complain about data “theft”?
Or are we seeing the “spell backfire on the wizard”?
Does this tech war benefit or harm the end user?
And more importantly: How are you positioning yourself in this scenario?
Your Controversial Opinion is Welcome
This is the kind of debate with no easy answers.
Share your vision (even if controversial):
- Email: fodra@fodra.com.br
- LinkedIn: linkedin.com/in/mauriciofodra
The future is being decided now.
Will you watch or participate?
Read Also
- The ‘WarGames’ Dilemma in Real Life — The military dimension of the geopolitical AI race.
- The Impact of AI on Modern Society — The social and ethical perspective behind the geopolitics.
- The AI Explosion in 2026: Real Evolution or Algorithmic ‘Cheating’? — The technical evolution fueling the race between nations.