
Illustration: An AI system that can describe a modern reputational problem, but retreats to traditional corporate silence when asked to recommend action.
Introduction
A series of recent articles examining the Donovan–Shell dispute has produced an unexpected secondary story: not the feud itself, but the behaviour of the artificial intelligence systems asked to analyse it. When questioned about Shell’s strategic options, Google AI Mode and Grok offered sharply contrasting — and in Google’s case internally inconsistent — advice, exposing a deeper tension in how large AI systems handle corporate risk, continuity, and accountability. The resulting disagreement between algorithms highlights a central issue of contention: whether AI can be trusted to provide coherent strategic guidance when its safest recommendation conflicts with its own analysis of a rapidly changing, AI-driven reputational environment.
Question put to perplexcity.ai
Reply by perplexity.ai
The articles you cite revolve around a single core contention: whether large corporate-focused AI systems can give coherent, trustworthy strategic advice when their incentives push them toward risk‑averse “no comment” orthodoxy even after they have described a more complex reality in detail.
Key points of contention
-
The first issue is consistency: Google’s AI is shown giving two materially different recommendations about how Shell should handle the Donovan–Shell feud—first endorsing moves toward controlled engagement, then later insisting Shell should stick to a strict “no comment / do not engage” posture and refusing to reconcile the difference.
-
The second issue is provenance and denial of continuity: when confronted with the earlier, more engagement‑oriented advice, the AI reclassifies it as content originating from Donovan’s platforms (satire or “fictionalised memos”) and distances itself instead of accepting that its own guidance has shifted.
Silence vs engagement
-
One side of the argument, captured in the earlier “strategic analysis” described in the articles, treats total silence as strategically obsolete in an AI environment where old disputes are constantly resurfaced and reinterpreted, suggesting Shell should create a stable counter‑narrative (for example, an evergreen page acknowledging the history and pointing to primary documents).
-
The later advice, by contrast, treats silence as the safest board‑level option, arguing that engagement would “validate the platform,” generate new activist content, and distract from core financial and transition issues at the AGM, thereby reaffirming traditional corporate PR instincts.
AI as corporate advisor
-
The commentators argue that the real spectacle is not the Donovan–Shell feud itself but an AI that can lucidly explain “AI‑mediated bot war” dynamics—algorithmic longevity, low‑cost disruption, reputational annoyance—yet retreats to old‑style institutional inertia when asked what directors should actually do.
-
This is framed as a strategic, not factual, failure: the AI’s analysis of the environment is portrayed as sophisticated, but its recommendations default to legalistic caution, revealing the tension between innovation in description and conservatism in actionable advice.
Trust, memory, and “illusion of memory”
-
Another issue of contention is the AI’s apparent “amnesia”: instead of acknowledging that it previously offered different guidance, it effectively denies continuity between its own outputs, illustrating what the article calls an “illusion of memory” in systems that appear consistent but cannot be held to a stable advisory line over time.
-
This raises broader questions about how much boards, regulators, or the public should trust AI‑generated strategic advice, especially when the system can disown or reframe its own past reasoning by attributing it to satire or third‑party platforms.
Implications for Shell and beyond
-
For Shell, the contention is whether clinging to a “say nothing and hope it goes away” strategy remains tenable when AI tools keep reviving and re‑contextualising the Donovan archive, turning silence into part of the reason the story persists.
-
More broadly, the articles use this case to argue that corporate AI guidance in 2026 risks being structurally conservative: able to narrate new realities but incentivised to recommend the safest, most familiar path, even when that clashes with its own earlier analysis of what the new reality demands.

EBOOK TITLE: “SIR HENRI DETERDING AND THE NAZI HISTORY OF ROYAL DUTCH SHELL” – AVAILABLE ON AMAZON
EBOOK TITLE: “JOHN DONOVAN, SHELL’S NIGHTMARE: MY EPIC FEUD WITH THE UNSCRUPULOUS OIL GIANT ROYAL DUTCH SHELL” – AVAILABLE ON AMAZON.
EBOOK TITLE: “TOXIC FACTS ABOUT SHELL REMOVED FROM WIKIPEDIA: HOW SHELL BECAME THE MOST HATED BRAND IN THE WORLD” – AVAILABLE ON AMAZON.



















