The Second Warning

If the Klarna case taught us about the failure of “Intention Engineering”, the Salesforce case shows the danger of confusing technological potential with operational readiness.

The Optimistic Announcement

In September 2025, Salesforce CEO Marc Benioff announced cutting 4,000 employees from the customer support team, saying he “needed fewer heads” because AI (via Agentforce) would handle the work.

The promise was seductive:

  • AI would answer 90% of tickets
  • Costs would drop drastically
  • Efficiency would increase exponentially
  • Customers would get faster responses

Three months later, reality knocked on the door.

When AI Fails in the “Real World”

Salesforce began to pull back from its total dependence on large language models (LLMs) after facing serious reliability problems that shook executives’ confidence.

Internal Data Revealed the Problem

The company’s internal data revealed the reason for the panic:

📉 Dismal Success Rate:

  • An internal Salesforce benchmark study showed that its AI could resolve only 35% of conversations involving multiple steps

📉 The Execution Gap:

  • In other words, in roughly 2 out of 3 complex interactions, the AI failed to deliver a satisfactory solution

What Happened in Practice

Scenario 1 - Simple Tickets (worked):

Customer: "How do I change my email?"
AI: [Responds correctly in 30 seconds]
✅ Success

Scenario 2 - Complex Tickets (failed):

Customer: "My API integration is breaking
after the last update, but only in production;
development works fine. I already tried X, Y, Z."

AI: [Responds with generic solutions]
Customer: "I already tried that, didn't work"
AI: [Loop of repeated responses]
Customer: "I want to speak with a human"
AI: "No agents available at the moment"
❌ Total failure

Result: Enterprise customers (who pay the most) started complaining publicly.

Regret is the New Trend

Salesforce is not alone in this boat.

The Forrester Data

A survey by research firm Forrester revealed an alarming statistic:

55% of employers today regret having fired employees based on the “promise of AI”

Why the Regret?

Companies got excited watching AI solve simple, quick tasks, then made sweeping decisions about their human capital before the technology was ready to handle complex, systemic problems.

The typical sequence:

  1. ✅ AI works well on simple cases
  2. 📊 Leadership sees impressive numbers
  3. 💡 “We can fire X% of the team!”
  4. ✂️ Mass layoffs
  5. ⏱️ 2-3 months later…
  6. ❌ AI fails on complex cases
  7. 😰 Unsatisfied customers
  8. 📉 NPS plummets
  9. 💸 Revenue loss
  10. 🔄 Try to rehire (but talent is gone)

The Lesson: The Human as the “Safety Net”

What does this scenario tell us about work in the coming times?

1. AI is not “Plug-and-Play”

Replacing accumulated human experience with an algorithm that is still maturing carries extremely high financial and reputational risk.

Real example:

Specialist with 10 years of Salesforce experience:
→ Knows 1000 edge cases
→ Knows when documentation is wrong
→ Has internal network to resolve bugs
→ Understands customer context

AI trained on documentation:
→ Knows documented cases
→ Follows documentation literally
→ No access to tacit knowledge
→ Doesn't understand implementation nuances

2. The Value of Complexity

While AI dominates what is linear, humans remain essential for what is ambiguous.

Linear (AI dominates):

  • ✅ “Reset password”
  • ✅ “Check status”
  • ✅ “Change email”
  • ✅ Standard FAQ

Ambiguous (Human necessary):

  • ❌ “Why does integration work in dev but not prod?”
  • ❌ “How to optimize performance considering our specific architecture?”
  • ❌ “What’s the best strategy given our business context?”
  • ❌ “How to resolve a conflict between two features?”

3. Indispensable Skill

Knowing how to assess a tool’s real capability before changing the entire company structure is the rarest leadership competency today.

Leaders who got it wrong:

  • Put too much faith in controlled demos
  • Didn’t test in real scenarios
  • Ignored technical team feedback
  • Prioritized financial numbers over operations

Leaders who got it right:

  • Implemented gradually
  • Tested in production with small sample
  • Kept humans for complex cases
  • Measured correct metrics (satisfaction, not just speed)

The Quote That Sums It All Up

“Potential is not the same as readiness. Don’t sacrifice your team to discover where technology fails.”

Comparing: Klarna vs Salesforce

Aspect           | Klarna             | Salesforce
-----------------|--------------------|----------------------------
Layoffs          | 853 employees      | 4,000 employees
Promise          | Fast 24/7 service  | Automated technical support
Failure          | Lack of empathy    | Technical incapacity
AI success rate  | ~70% overall       | 35% on complex cases
Time to problem  | 2 months           | 3 months
Action           | Partially rehired  | Pulling back from LLMs

Common pattern: Initial optimism → Hard reality → Regret

What Companies Should Do

❌ What NOT to Do (but many do)

  1. Fire based on AI demos
  2. Put 100% trust in vendor benchmarks
  3. Ignore technical team feedback
  4. Cut entire team at once
  5. Measure only speed/cost
  6. Have no plan B

✅ What TO DO (smart approach)

  1. Controlled Pilot

    • Test with 10% of cases (see the sampling sketch after this list)
    • Keep humans monitoring
    • Collect real feedback
  2. Correct Metrics

    • Not just ticket speed
    • But: real resolution, satisfaction, rework
  3. Gradual Implementation

    Month 1-2: AI assists human (suggests answers)
    Month 3-4: AI solves simple cases, human reviews
    Month 5-6: AI autonomous on simple cases, human for complex
    Month 7+: Evaluate whether you can scale
    
  4. Safety Net

    • Always keep specialists for critical cases
    • Create automatic escalation when the AI is uncertain (see the escalation sketch after this list)
    • Preserve tacit knowledge
  5. Honest Measurement (see the measurement sketch after this list)

    • REAL success rate (not just “ticket closed”)
    • Customer satisfaction (NPS, CSAT)
    • Rework (how many tickets reopen?)
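
To make “test with 10% of cases” concrete, here is a minimal sketch in Python. The function name, the ticket-ID format, and the 10% default are all illustrative assumptions, not a prescription; the point is that deterministic sampling gives you a stable pilot group:

import hashlib

def in_ai_pilot(ticket_id: str, pilot_share: float = 0.10) -> bool:
    """Deterministically route ~10% of tickets to the AI pilot.

    Hashing the ticket ID keeps assignment stable: the same ticket
    always lands in the same group, which keeps before/after
    comparisons honest.
    """
    digest = hashlib.sha256(ticket_id.encode()).hexdigest()
    return int(digest, 16) % 100 < pilot_share * 100

print(in_ai_pilot("CASE-48213"))  # same ticket, same group, every time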
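
Item 4’s automatic escalation can be sketched the same way. This assumes (hypothetically) that your AI vendor exposes a confidence score and that your help desk has an escalation queue; the threshold and turn limit are numbers to tune with pilot data, not universal constants:

from dataclasses import dataclass

@dataclass
class AIAnswer:
    text: str
    confidence: float  # 0.0-1.0 self-reported score (hypothetical field)

CONFIDENCE_FLOOR = 0.75  # tune against your own pilot data
MAX_AI_TURNS = 2         # hard stop on the "loop of repeated responses"

def route(customer_messages: list[str], answer: AIAnswer, ai_turns: int) -> str:
    """Decide whether the AI replies or a human specialist takes over."""
    if any("human" in msg.lower() for msg in customer_messages):
        return "escalate_to_human"  # an explicit request always wins
    if answer.confidence < CONFIDENCE_FLOOR:
        return "escalate_to_human"  # uncertain -> hand off, never bluff
    if ai_turns >= MAX_AI_TURNS:
        return "escalate_to_human"  # repeated attempts -> hand off
    return "ai_replies"

# The failed scenario above: a low-confidence answer on a complex ticket
print(route(["My API integration breaks only in production"],
            AIAnswer("Try clearing the cache", confidence=0.42),
            ai_turns=1))  # -> escalate_to_human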
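
And item 5’s “honest” metrics. The Ticket fields below are hypothetical stand-ins for whatever your help desk actually records; the point is that a ticket only counts as resolved if it stayed closed:

from dataclasses import dataclass
from typing import Optional

@dataclass
class Ticket:
    closed: bool
    reopened: bool
    csat: Optional[int] = None  # 1-5 survey score, when the customer answered

def honest_metrics(tickets: list[Ticket]) -> dict:
    closed = [t for t in tickets if t.closed]
    stayed_closed = [t for t in closed if not t.reopened]
    scores = [t.csat for t in tickets if t.csat is not None]
    return {
        "closed_rate": len(closed) / len(tickets),  # the vanity metric
        "real_resolution_rate": len(stayed_closed) / len(tickets),
        "reopen_rate": 1 - len(stayed_closed) / max(len(closed), 1),
        "avg_csat": sum(scores) / len(scores) if scores else None,
    }

sample = [Ticket(True, False, 5), Ticket(True, True, 2), Ticket(False, False)]
print(honest_metrics(sample))
# closed_rate 0.67 vs real_resolution_rate 0.33 -- the gap is the lie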

Lessons for Professionals

If You’re a Technical Specialist

Your value lies in:

  • ✅ Solving what AI can’t
  • ✅ Edge case knowledge
  • ✅ Real systems experience
  • ✅ Network and context
  • ✅ Judgment on trade-offs

Protect yourself:

  • Document your unique knowledge
  • Be visible solving complex cases
  • Show where AI fails (without being anti-AI)
  • Become AI mentor (not competitor)

If You’re a Leader

Questions to ask before firing:

  1. Did we test AI in real production cases?
  2. What’s the success rate on COMPLEX cases?
  3. What happens when AI fails?
  4. Do we have a rollback plan?
  5. Were critical clients consulted?
  6. Does technical team agree?

If the answer to any is “no”, don’t fire yet.

The Future Is Not Binary

It’s not:

  • ❌ AI OR Human
  • ❌ Everything automated OR nothing
  • ❌ Fire everyone OR keep everyone

It is:

  • ✅ AI AND Human (smart hybrid)
  • ✅ Automate what works, human for the rest
  • ✅ Smaller but more strategic team

Formula that works:

AI: 80% of cases (simple, repetitive)
Human: 20% of cases (complex, critical)
Result: 60% savings + quality maintained
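
Where does “60% savings” come from? It only holds under an assumption about how much an AI-handled ticket really costs. A back-of-envelope check, assuming (hypothetically) that an AI-handled ticket costs a quarter of a human-handled one:

# Relative costs: a human-handled ticket = 1.0 (the AI figure is an assumption)
ai_share, human_share = 0.80, 0.20
ai_cost = 0.25

blended = ai_share * ai_cost + human_share * 1.0
print(f"blended cost = {blended:.2f}, savings = {1 - blended:.0%}")
# blended cost = 0.40, savings = 60%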

Conclusion

The Salesforce case (like Klarna) teaches us:

  1. Don’t confuse potential with reality

    • AI has incredible potential
    • But it’s not ready for everything yet
  2. Hasty layoffs cost dearly

    • Saved: an estimated $280M/year in salaries (4,000 × $70k)
    • Lost: Much more in unsatisfied customers
  3. Human experience is still critical

    • For complex cases
    • For context and judgment
    • For when AI fails
  4. Implementation requires wisdom

    • Test before scaling
    • Measure what matters
    • Maintain safety net
  5. 55% regret it

    • More than half get it wrong
    • Don’t be a statistic
    • Learn from others’ mistakes

What Do You Think About This?

Do you think companies will learn from Salesforce and Klarna’s mistakes?

Or will the pressure to cut costs continue generating hasty layoffs?

Is your company making the same mistakes?

Share your experience.

History is repeating itself. The difference is whether you learn from it or repeat it.

