The Cost of the Cliff: Salesforce and the Regret of Firing 4,000 Specialists
The Second Warning
If the Klarna case taught us about failures of “Intention Engineering”, the Salesforce case shows us the danger of confusing technological potential with operational readiness.
The Optimistic Announcement
In September 2025, Salesforce CEO Marc Benioff announced the cut of 4,000 customer-support employees, stating that he “needed fewer heads” because AI (via Agentforce) would handle the work.
The promise was seductive:
- AI would answer 90% of tickets
- Costs would drop drastically
- Efficiency would increase exponentially
- Customers would get faster responses
Three months later, reality knocked on the door.
When AI Fails in the “Real World”
Salesforce began to pull back on its total dependence on large language models (LLMs) after facing serious reliability problems that shook executives’ confidence.
Internal Data Revealed the Problem
The company’s internal data revealed the reason for the panic:
📉 Dismal Success Rate:
- A Salesforce benchmark study showed that its AI could only resolve 35% of conversations involving multiple steps
📉 The Execution Gap:
- This means that in roughly 2 out of 3 complex interactions, the AI failed to deliver a satisfactory solution
What Happened in Practice
Scenario 1 - Simple Tickets (worked):
Customer: "How do I change my email?"
AI: [Responds correctly in 30 seconds]
✅ Success
Scenario 2 - Complex Tickets (failed):
Customer: "My API integration is breaking after the last update, but only in production; development works. I already tried X, Y, Z."
AI: [Responds with generic solutions]
Customer: "I already tried that, didn't work"
AI: [Loop of repeated responses]
Customer: "I want to speak with a human"
AI: "No agents available at the moment"
❌ Total failure
Result: Enterprise customers (who pay the most) started complaining publicly.
Regret is the New Trend
Salesforce is not alone in this boat.
The Forrester Data
A survey by consulting firm Forrester revealed an alarming fact:
55% of employers today regret having fired employees based on the “promise of AI”
Why the Regret?
Companies got excited seeing AI solving simple and quick tasks, but made massive decisions about their human capital before the technology was ready to handle complex and systemic problems.
The typical sequence:
- ✅ AI works well on simple cases
- 📊 Leadership sees impressive numbers
- 💡 “We can fire X% of the team!”
- ✂️ Mass layoffs
- ⏱️ 2-3 months later…
- ❌ AI fails on complex cases
- 😰 Unsatisfied customers
- 📉 NPS plummets
- 💸 Revenue loss
- 🔄 Try to rehire (but talent is gone)
The Lesson: The Human as the “Safety Net”
What does this scenario tell us about work in the coming times?
1. AI is not “Plug-and-Play”
Replacing accumulated human experience with an algorithm still in the maturing phase is an extremely high financial and reputational risk.
Real example:
Specialist with 10 years of Salesforce:
→ Knows 1000 edge cases
→ Knows when documentation is wrong
→ Has internal network to resolve bugs
→ Understands customer context
AI trained on documentation:
→ Knows documented cases
→ Follows documentation literally
→ No access to tacit knowledge
→ Doesn't understand implementation nuances
2. The Value of Complexity
While AI dominates what is linear, humans remain essential for what is ambiguous.
Linear (AI dominates):
- ✅ “Reset password”
- ✅ “Check status”
- ✅ “Change email”
- ✅ Standard FAQ
Ambiguous (Human necessary):
- ❌ “Why does integration work in dev but not prod?”
- ❌ “How to optimize performance considering our specific architecture?”
- ❌ “What’s the best strategy given our business context?”
- ❌ “How to resolve a conflict between two features?”
3. Indispensable Skill
Knowing how to assess a tool’s real capability before changing the entire company structure is the rarest leadership competency today.
Leaders who got it wrong:
- Trusted too much in controlled demos
- Didn’t test in real scenarios
- Ignored technical team feedback
- Prioritized financial numbers over operations
Leaders who got it right:
- Implemented gradually
- Tested in production with small sample
- Kept humans for complex cases
- Measured correct metrics (satisfaction, not just speed)
The Quote That Sums It All Up
“Potential is not the same as readiness. Don’t sacrifice your team to discover where technology fails.”
Comparing: Klarna vs Salesforce
| Aspect | Klarna | Salesforce |
|---|---|---|
| Layoffs | 853 employees | 4,000 employees |
| Promise | Fast 24/7 service | Automated technical support |
| Failure | Lack of empathy | Technical incapacity |
| AI success rate | ~70% overall | 35% complex cases |
| Time until problems surfaced | 2 months | 3 months |
| Action | Partially rehired | Pulling back from LLMs |
Common pattern: Initial optimism → Hard reality → Regret
What Companies Should Do
❌ What NOT to Do (but many do)
- Fire based on AI demos
- Trust 100% in vendor benchmarks
- Ignore technical team feedback
- Cut entire team at once
- Measure only speed/cost
- Have no plan B
✅ What TO DO (smart approach)
1. Controlled Pilot
   - Test with 10% of cases
   - Keep humans monitoring
   - Collect real feedback
2. Correct Metrics
   - Not just ticket speed
   - But real resolution, satisfaction, and rework
3. Gradual Implementation
   - Month 1-2: AI assists the human (suggests answers)
   - Month 3-4: AI solves simple cases, human reviews
   - Month 5-6: AI autonomous on simple cases, human for complex
   - Month 7+: Evaluate whether to scale
4. Safety Net
   - Always keep specialists for critical cases
   - Create automatic escalation when the AI is uncertain
   - Preserve tacit knowledge
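One common shape for that automatic escalation is a confidence threshold: if the model’s confidence score (from the model itself or a separate classifier) falls below a cutoff, or the customer explicitly asks for a person, the ticket routes to a specialist. A minimal sketch, assuming a hypothetical confidence score and keyword list (not any vendor’s API):

```python
# Minimal sketch of confidence-based escalation to a human agent.
# The confidence score, threshold, and keywords are illustrative
# assumptions -- in practice the score might come from a classifier.

CONFIDENCE_THRESHOLD = 0.75
HUMAN_KEYWORDS = ("human", "agent", "person", "representative")

def route_ticket(message: str, ai_confidence: float) -> str:
    """Return 'human' or 'ai' for a given customer message."""
    # An explicit request for a human always wins -- never trap the
    # customer in a loop of repeated AI answers.
    if any(word in message.lower() for word in HUMAN_KEYWORDS):
        return "human"
    # Low confidence means the AI should not answer autonomously.
    if ai_confidence < CONFIDENCE_THRESHOLD:
        return "human"
    return "ai"

print(route_ticket("How do I change my email?", 0.92))       # ai
print(route_ticket("API breaks in prod but not dev", 0.41))  # human
print(route_ticket("I want to speak with a human", 0.95))    # human
```

This would have prevented the “No agents available at the moment” dead end in Scenario 2: both the low confidence and the explicit request trigger escalation.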
5. Honest Measurement
   - REAL success rate (not just “ticket closed”)
   - Customer satisfaction (NPS, CSAT)
   - Rework (how many tickets reopen?)
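The gap between “ticket closed” and real resolution is easy to make concrete. A sketch with a hypothetical ticket schema (field names are assumptions; adapt them to your helpdesk’s data model):

```python
# Sketch of "honest" support metrics over a list of ticket records.
# The ticket schema and values are hypothetical sample data.

tickets = [
    {"id": 1, "closed": True,  "reopened": False, "csat": 5},
    {"id": 2, "closed": True,  "reopened": True,  "csat": 2},
    {"id": 3, "closed": True,  "reopened": False, "csat": 4},
    {"id": 4, "closed": False, "reopened": False, "csat": None},
]

closed = [t for t in tickets if t["closed"]]

# The "closed" rate alone looks great...
closed_rate = len(closed) / len(tickets)

# ...but real resolution excludes tickets that were reopened (rework).
resolved = [t for t in closed if not t["reopened"]]
real_resolution_rate = len(resolved) / len(tickets)

reopen_rate = sum(t["reopened"] for t in closed) / len(closed)

scores = [t["csat"] for t in closed if t["csat"] is not None]
avg_csat = sum(scores) / len(scores)

print(f"closed: {closed_rate:.0%}, real resolution: {real_resolution_rate:.0%}")
print(f"reopen: {reopen_rate:.0%}, avg CSAT: {avg_csat:.2f}")
```

In this toy sample the closed rate is 75% but real resolution is only 50% — exactly the kind of gap that lets a dashboard look healthy while customers churn.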
Lessons for Professionals
If You’re a Technical Specialist
Your value lies in:
- ✅ Solving what AI can’t
- ✅ Edge case knowledge
- ✅ Real systems experience
- ✅ Network and context
- ✅ Judgment on trade-offs
Protect yourself:
- Document your unique knowledge
- Be visible solving complex cases
- Show where AI fails (without being anti-AI)
- Become AI mentor (not competitor)
If You’re a Leader
Questions to ask before firing:
- Did we test AI in real production cases?
- What’s the success rate on COMPLEX cases?
- What happens when AI fails?
- Do we have a rollback plan?
- Were critical clients consulted?
- Does technical team agree?
If the answer to any is “no”, don’t fire yet.
The Future Is Not Binary
It’s not:
- ❌ AI OR Human
- ❌ Everything automated OR nothing
- ❌ Fire everyone OR keep everyone
It is:
- ✅ AI AND Human (smart hybrid)
- ✅ Automate what works, human for the rest
- ✅ Smaller but more strategic team
Formula that works:
AI: 80% of cases (simple, repetitive)
Human: 20% of cases (complex, critical)
Result: 60% savings + quality maintained
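The 60% figure is only as good as its cost assumptions. A back-of-envelope sketch with hypothetical per-ticket costs (none of these numbers come from Salesforce or Klarna):

```python
# Illustrative back-of-envelope: hybrid (AI + human) cost vs. an
# all-human baseline. All numbers are hypothetical assumptions.

TICKETS_PER_MONTH = 10_000
HUMAN_COST_PER_TICKET = 8.00  # assumed fully loaded cost per ticket
AI_COST_PER_TICKET = 2.00     # assumed inference + tooling cost

baseline = TICKETS_PER_MONTH * HUMAN_COST_PER_TICKET

# Hybrid split: AI handles 80% (simple), humans handle 20% (complex).
hybrid = (0.8 * TICKETS_PER_MONTH * AI_COST_PER_TICKET
          + 0.2 * TICKETS_PER_MONTH * HUMAN_COST_PER_TICKET)

savings = 1 - hybrid / baseline
print(f"Savings: {savings:.0%}")  # Savings: 60% under these assumptions
```

Note how sensitive the result is: halve the assumed AI cost per ticket and savings jump; let the AI’s real resolution rate slip (reopened tickets flow back to humans) and they evaporate. The formula only holds if the 80/20 split reflects *real* resolution, not just “ticket closed”.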
Conclusion
The Salesforce case (like Klarna) teaches us:
1. Don’t confuse potential with reality
   - AI has incredible potential
   - But it’s not ready for everything yet
2. Hasty layoffs cost dearly
   - Saved: $280M in salaries (4,000 × $70k)
   - Lost: much more in unsatisfied customers
3. Human experience is still critical
   - For complex cases
   - For context and judgment
   - For when AI fails
4. Implementation requires wisdom
   - Test before scaling
   - Measure what matters
   - Maintain a safety net
5. 55% regret it
   - More than half get it wrong
   - Don’t be a statistic
   - Learn from others’ mistakes
What Do You Think About This?
Do you think companies will learn from Salesforce and Klarna’s mistakes?
Or will the pressure to cut costs continue generating hasty layoffs?
Is your company making the same mistakes?
Share your experience:
- Email: fodra@fodra.com.br
- LinkedIn: linkedin.com/in/mauriciofodra
History is repeating itself. The difference is whether you learn from it or repeat it.
Read Also
- The Klarna Case: Why Efficiency Doesn’t Always Mean Success — The case that preceded and inspired the Salesforce analysis.
- The End of ‘Pay Per User’: Is AI Killing the SaaS Model? — The business model disruption that pressures companies like Salesforce.
- The Most Valuable Person in the Company in 2026 — Why firing specialists was a mistake: the value of the architect.