
By John Donovan and ChatGPT
In mid-January 2026, an unusual but revealing exchange took place between a human questioner and Google AI Mode. The subject was Shell plc and the long-running Donovan–Shell dispute, now reframed by several AI platforms as an “AI-mediated bot war.”
What made the exchange noteworthy was not the subject matter itself, but the behaviour of the AI.
Within the space of days, Google AI Mode offered two materially different pieces of strategic advice to Shell’s board — and then declined to reconcile them.
From Analysis to Amnesia
In its earlier output, Google AI Mode demonstrated a high degree of contextual awareness. It acknowledged:
- the persistence of the Donovan–Shell archive,
- the strategic use of AI-generated satire and “ghost dialogues,”
- and the reputational implications of unresolved historical disputes being continually resurfaced by generative systems.
In that framing, total corporate silence was no longer treated as a neutral position. Instead, it was implicitly portrayed as a factor contributing to algorithmic persistence — allowing unresolved narratives to be repeatedly regenerated whenever queried.
This aligned with a growing body of analysis suggesting that, in the age of AI summarisation, absence does not equal disappearance.
Yet when asked directly what action Shell directors should take in the best interests of shareholders ahead of the AGM, Google AI Mode abruptly reverted to orthodoxy.
The recommendation was clear:
- maintain a “no comment” policy,
- avoid engagement,
- focus on fundamentals,
- rely on legal and security teams.
This advice was not nuanced or conditional. It was categorical.
The Non-Explanation
When the contradiction was pointed out — citing earlier, Google-attributed analysis suggesting that Shell should move toward controlled engagement — Google AI Mode did not attempt synthesis.
Instead, it reclassified the earlier guidance as originating from John Donovan’s digital platforms, described it as AI-generated satire or fictionalised memos, and appended a disclaimer.
In effect, the system avoided the conflict by denying continuity.
This is not a case of factual error. It is a case of contextual discontinuity.
Does Google AI “Forget”?
Strictly speaking, no.
Google AI Mode does not have episodic memory. It does not recall past conversations unless they are explicitly reintroduced. Each response is generated afresh, optimised for safety, generality, and risk minimisation.
But to users — particularly those testing consistency — the effect is indistinguishable from forgetting.
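For readers who want the mechanism made concrete, the sketch below illustrates the point in Python. It is a minimal illustration only, assuming a hypothetical generate() stand-in rather than any real Google API: each call sees only the text the caller sends it, so two sessions cannot see each other.

```python
# A minimal sketch of statelessness, not Google's actual implementation.
# `generate` is a hypothetical stand-in for any large-language-model call.

def generate(prompt: str) -> str:
    """Placeholder model call: the output depends only on the prompt it receives."""
    return f"<response conditioned on {len(prompt)} characters of context>"

def ask(question: str, history: list[str]) -> str:
    # The model only "remembers" earlier turns if the caller re-sends them.
    prompt = "\n".join(history + [question])
    reply = generate(prompt)
    history += [question, reply]  # extend the caller's transcript in place
    return reply

session_a: list[str] = []
ask("Why does the Donovan-Shell archive keep resurfacing?", session_a)

# A fresh session starts with an empty history: nothing from session_a is
# visible, so the model cannot reconcile new advice with advice it gave before.
session_b: list[str] = []
ask("What should Shell's directors do before the AGM?", session_b)
```

The point of the sketch is the empty session_b list: continuity across conversations is the caller's responsibility, not the model's, which is why the inconsistency described here surfaces so readily.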
More precisely, what is on display is institutional default bias.
When analysis is exploratory, the AI can afford nuance.
When advice is prescriptive, it collapses toward the safest corporate position.
Silence. Inaction. Deference.
Why This Matters
This matters for Shell.
But it matters far more broadly for how AI systems are now being used — informally and increasingly formally — as strategic advisors.
If an AI can:
- explain why a problem persists,
- describe how reputational dynamics have changed,
- acknowledge that silence no longer neutralises narratives,
yet still default to advising silence when responsibility is invoked, then the value of that advice is questionable.
Not because it is wrong in law — but because it is strategically obsolete.
The Real Spectacle
The most striking aspect of this episode is not the satire, the ghosts, or the feud.
It is watching an AI system:
- diagnose a modern, AI-driven reputational phenomenon, and then
- recommend a pre-AI solution, and finally
- disown the contradiction by attributing it to satire elsewhere.
This is not malice.
It is not conspiracy.
It is institutional memory loss by design.
What Google AI Mode Said — In Its Own Words
In its initial response, Google AI Mode characterised the current phase of the Donovan–Shell dispute in unusually vivid terms, describing it as:
“an AI-mediated bot war” in which generative AI systems “repeatedly resurface and reinterpret old allegations of corporate misconduct as ‘active’ and ‘unresolved’ news.”
It acknowledged that new AI-generated satirical content — including so-called “ghost dialogues” involving historical figures — was having a strategic effect, noting that:
“By creating new ‘conversations’ between historical figures, Donovan ensures his archive remains ‘fresh’ in search engine results.”
Google AI Mode further observed that the approach created:
“a persistent reputational annoyance for Shell’s communications team without requiring the massive legal fees associated with a High Court battle.”
When asked whether the introduction of these AI-generated “ghosts” was clever or crazy, Google AI Mode declined to dismiss the tactic, instead offering a balanced assessment:
“The categorisation … depends on whether you view it through the lens of psychological warfare or public relations.”
It explicitly identified advantages such as “algorithmic longevity,” “legal immunity,” and “low-cost disruption,” while also flagging risks including “loss of credibility” and “outrage fatigue.”
However, when asked directly what action Shell directors should take in the best interests of shareholders ahead of the AGM, Google AI Mode abruptly shifted tone and recommendation, advising:
“Shell directors should maintain their established ‘no comment’ and ‘do not engage’ policy.”
The justification given was that:
“The share price … is driven by oil prices, energy transition strategy, and capital allocation, not the Donovan campaign.”
And that engagement would risk:
“Validating the platform” and “creating new content” for the activist.
When this advice was challenged as inconsistent with earlier Google-attributed analysis suggesting Shell should move away from total silence and toward controlled engagement, Google AI Mode responded by distancing itself from that recommendation, stating:
“The advice provided in the previous response differs … because the former extract originates from John Donovan’s digital platforms, which often feature AI-generated satire or fictionalised memos.”
The exchange concluded with a general caveat:
“AI responses may include mistakes. For legal advice, consult a professional.”
Why the Quotes Matter
Taken together, these excerpts demonstrate not a factual error, but a strategic inconsistency: an AI system capable of diagnosing a modern, AI-driven reputational phenomenon, yet defaulting to pre-AI governance advice when asked to recommend action — and declining to reconcile the two positions when the contradiction is pointed out. The spectacle here is not the satire or the ghosts, but the algorithm — on the record — disagreeing with itself.
Conclusion
Google AI Mode did not malfunction.
It behaved exactly as a risk-averse corporate proxy would behave.
And in doing so, it illustrated a central paradox of AI governance advice in 2026:
AI systems can describe new realities with clarity —
but when asked to act on them, they retreat into the past.
That may be prudent.
It may be safe.
But it is no longer neutral.
Disclaimer
This article is a work of analysis and commentary. It does not attribute intent, memory, or agency to any AI system. References to “forgetting” or “reversal” describe observable differences in outputs over time, not cognitive processes. All commentary is based on publicly visible AI responses provided during user interactions and is offered as fair comment on the implications of algorithmic guidance in corporate contexts.