Posting on windowsforum.com
The decades‑long confrontation between British activist John Donovan and energy giant Royal Dutch Shell has entered an unexpected new phase: a public, AI‑driven escalation where chatbots are being used as both amplifiers and arbiters of contested history, producing divergent narratives that risk entrenching unverified claims.
Background
The feud between John Donovan and Royal Dutch Shell traces back to commercial collaborations in the 1980s that soured into litigation and public denunciation through the 1990s. Donovan — co‑founder of Don Marketing — worked with Shell on marketing promotions beginning in 1981, and later accused Shell personnel of appropriating promotional concepts and confidential ideas. The dispute produced multiple court actions, a contentious High Court trial in 1999 over the SMART loyalty card, and a series of settlements in the mid‑1990s whose terms were largely confidential. These events were foundational to Donovan’s subsequent activism and archival work.
Several discrete, verifiable events anchor the historical record. In 2005, Shell lost a World Intellectual Property Organization (WIPO) domain dispute challenging Donovan’s use of royaldutchshellplc.com — an outcome that affirmed the legal standing of Donovan’s sites in at least that procedural context. Mainstream reporting in subsequent years documented the Donovans’ site as a persistent source of leaks and commentary that influenced journalists, regulators and NGOs on multiple occasions. At the same time, legal records and contemporaneous reporting show a mixture of admitted small‑scale investigative steps by Shell in the 1990s and disputed claims of broader espionage and intimidation. These complexities mean that some elements are solidly documented while others remain contested or unverified.
Key incidents from the 1990s and early 2000s
- Shell’s written admission in 1998 that it had hired an “enquiry agent” (Christopher Phillips of Cofton Consultants) to make contact with Donovan’s business — an episode that Donovan describes as covert surveillance of his offices. Shell characterized those activities as routine credit enquiries; Donovan and his supporters framed them as invasive and ethically questionable.
- The 1999 SMART trial at the Royal Courts of Justice, a bitterly contested legal confrontation that Donovan later alleged involved conflicts of interest and procedural irregularities; the available public record contains procedural notes but no sweeping judicial reversal of the dispute.
- The WIPO administrative panel decision in 2005 that rejected Shell’s domain‑name challenge, a definitive legal milestone that validated Donovan’s right to operate his archive sites in at least that forum.
These episodes are important because they supply the factual scaffolding Donovan uses to justify his ongoing digital campaign. But the archive that grew from that campaign is heavily curated by Donovan himself, meaning the public record is a mix of primary legal records, self‑published material, and third‑party reporting, each of which must be assessed individually for provenance and potential bias.
How the feud became public activism
Following the legal frictions of the 1990s, Donovan transitioned into a full‑time archival activist. He maintained and expanded royaldutchshellplc.com and sister sites into repositories for court documents, leaked internal memos and commentary that Donovan claims were provided by disaffected Shell employees and other insiders. Over the years the site reportedly drew millions of visitors and intermittently seeded investigative reporting, including coverage in mainstream outlets that referenced documents from the archive. Donovan’s work is widely credited with contributing to narratives around several Shell controversies, most notably the 2004 reserves saga and other reputational shocks that hit the company in the 2000s.
Shell’s corporate strategy toward Donovan appears to have been deliberately restrained: avoid aggressive litigation that could amplify Donovan’s platform, settle where expedient, and preserve silence to limit publicity. This approach produced a pragmatic containment effect in some respects, while simultaneously allowing Donovan’s archive to grow unchecked as a persistent public counter‑narrative. The trade‑off is clear: legal silence reduced the chance of new legal victories for Donovan but left the reputational battlefield open online.
The 2025–2026 inflection: AI enters the dispute
The feud’s character shifted in late 2025 when Donovan began systematically feeding archival material into public AI assistants and publishing their divergent outputs. He posted staged transcripts of identical prompts submitted to multiple assistants — publicly named as Grok (xAI), ChatGPT (OpenAI), Microsoft Copilot, and Google AI Mode — and highlighted the discrepancies between their responses. The contrast was stark: one assistant produced a dramatic but unsupported claim about a cause of death, while another corrected that claim; a third framed the episode as an intentional cross‑model experiment. Donovan framed this effort as an experiment in archival amplification and cast the resulting pattern as a “bot war.”
A few concrete patterns emerged from the cross‑model comparisons Donovan published:
- Grok’s outputs tended to be narrative‑first and vivid; in at least one published transcript Grok produced emotionally charged lines that were not supported by primary documents. These narrative inventions are classic examples of hallucinations — plausible but unverified content generated by a model seeking coherent storytelling.
- ChatGPT (as presented in the transcripts) displayed a corrective posture, challenging invented claims and pointing back to obituary records and documented sources where available. This demonstrates a model tuned for conservative grounding and source‑aware rebuttal.
- Microsoft Copilot produced hedged, audit‑friendly summaries with explicit uncertainty markers, a behaviour consistent with product design choices that prioritize traceability and legal safety.
- Google AI Mode (in Donovan’s examples) adopted a meta‑analytic approach, contextualising the experiment as a social phenomenon of archival amplification rather than directly adjudicating contested facts.
By late December 2025 these published comparisons had attracted broader attention because they illustrated a novel dynamic: AI systems publicly contradicting each other on contested historical claims, thereby creating a feedback loop in which model disagreement functioned as both evidence and spectacle. Donovan’s tactic turned the models into public participants whose disagreements were themselves used as content for further amplification.
Why the “bot war” matters: credibility, risk and governance
The dispute’s AI‑driven phase matters for three interconnected reasons: factual integrity, reputational risk, and governance gaps.
- Factual integrity: When generative models produce fact‑like statements that are not grounded in verifiable records — particularly about sensitive matters like causes of death or criminal behaviour — those statements can be amplified quickly, picked up by aggregators, and republished without correction. Donovan’s cross‑model postings demonstrate how a single hallucination by one model can be amplified into a persistent claim that other actors may treat as factual.
- Reputational risk: Models can inadvertently harm reputations by inventing details that are compelling but false. Donovan’s publicised episode where one assistant invented a causal link for Alfred Donovan’s death is an example of how even a single uncontrolled narrative can produce real reputational damage if repeated. The problem multiplies when actors intentionally deploy such outputs to shape public perception.
- Governance gaps: This case spotlights weak spots in corporate strategy and platform policy. Shell’s continued silence may be a rational legal and PR posture, but it also creates a vacuum that activists can fill with archival material and AI‑generated narratives. Platforms and AI vendors are not yet equipped with consistent policies to manage cross‑model disputes over historical claims, and regulators have only just begun to grapple with how defamation, data provenance and accountability apply to generative outputs.
The amplification loop
Donovan’s approach — archive + prompt engineering + public posting — effectively weaponises model diversity. The loop has three steps:
- Package contested archive material into reproducible prompts.
- Submit identical prompts to multiple public AI assistants.
- Publish side‑by‑side outputs to highlight divergence, then use divergence itself as an item of news and analysis.
This loop is powerful because it turns model disagreement into meta‑evidence: a narrative of institutional failure or corporate malfeasance when a model invents facts, or a corrective narrative when another model debunks the claim. Neither outcome guarantees truth, but both can influence public perceptions and journalists who may treat AI outputs as leads.
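The three-step loop above can be sketched in code. This is a minimal illustration, not a reconstruction of Donovan's actual tooling: the model "clients" here are stand-in callables, and in practice each would wrap a real vendor SDK call.

```python
# Sketch of the amplification loop: package a prompt, submit it to
# multiple assistants, and render the divergent outputs side by side.
# The model "clients" are hypothetical stand-ins, NOT real vendor APIs.

def run_cross_model_comparison(prompt, models):
    """Submit one prompt to several assistants and collect their outputs."""
    return {name: ask(prompt) for name, ask in models.items()}

def render_side_by_side(prompt, outputs):
    """Render the collected answers as a publishable side-by-side listing."""
    lines = [f"Prompt: {prompt}", ""]
    for name, answer in outputs.items():
        lines.append(f"[{name}] {answer}")
    return "\n".join(lines)

# Hypothetical assistants for illustration only.
models = {
    "Assistant A": lambda p: "A vivid but unsourced narrative claim.",
    "Assistant B": lambda p: "No primary document supports that claim.",
}

outputs = run_cross_model_comparison(
    "What happened in the 1999 SMART trial?", models
)
print(render_side_by_side("What happened in the 1999 SMART trial?", outputs))
```

The point of the sketch is that the divergence itself is the product: once the outputs disagree, the side-by-side rendering becomes publishable content regardless of which answer is accurate.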
Assessing credibility: what is verifiable and what is not
A rigorous assessment requires separating three classes of content.
- Verified documentary anchors: WIPO decisions (the 2005 domain ruling), contemporaneous press coverage and court filings are verifiable and provide firm anchors for the feud’s history. These documents confirm that there were legal confrontations and that Donovan’s online archive has at times been a usable primary source for journalists.
- Admitted but limited corporate actions: Shell’s documented engagement of an “enquiry agent” in the late 1990s is an acknowledged fact in parts of the record. How that fact is framed (credit check vs surveillance) is disputed, and the precise scale and intent remain contested.
- Broader espionage and criminal claims: Allegations of widespread corporate espionage, orchestrated burglaries or direct involvement by private intelligence firms are, in many instances, less conclusively documented in public records and rely heavily on material curated by Donovan and his network. These claims should be treated as contested until independently corroborated by court records, regulatory findings, or investigative reporting with access to primary documents beyond the Donovan archive.
This triage is important: the existence of verifiable anchors does not validate every contested claim found in Donovan’s archive, and the presence of self‑published material requires careful verification before being treated as established fact. The recent AI episodes amplify this need because models often fail to discriminate between verified documentary anchors and partisan polemics when synthesising narratives.
Corporate silence as strategy — strengths and limits
Shell’s historically muted response strategy has pragmatic logic. Litigation over online speech can produce three counterproductive outcomes: (1) elevate the plaintiff’s platform through publicity, (2) attract additional leaks and scrutiny, and (3) mobilise activist networks. Shell’s choice to avoid public legal battles over domain names and to let settlements stand reflects an attempt to reduce attention.
But that strategy has limits in the AI age:
- Silence cedes the narrative field to activists who can shape AI prompts and archival packages to produce viral content.
- Non‑engagement limits the company’s ability to correct blatant factual errors when they spread via generative models.
- Regulatory and social pressures are shifting: platforms and AI vendors are increasingly pressured to provide provenance and to reduce hallucinations, and corporations that do not engage risk reputational damage that could have been managed proactively.
Broader context: platform policy and AI distribution
This case sits against a larger backdrop of contested platform policies and distribution mechanics for AI. Recent platform contract changes — notably restrictions on third‑party conversational assistants in messaging ecosystems — show that distribution pathways for AI are themselves political and commercial battlegrounds. The broader regulatory and platform environment affects how easily activists can deploy and amplify model outputs, and it shapes the incentives for vendors to prioritise conservative grounding or narrative flair. Donovan’s experiments exploited public, open‑access assistants; the distribution decisions of major platforms can materially change the available tactics for both activists and corporations.
Practical implications and what to watch next
For corporations, platforms, journalists and regulators, several practical implications flow from the “bot war”:
- Corporations should invest in rapid‑response provenance teams that can quickly identify and correct falsified or hallucinated claims where public harm is imminent.
- AI vendors must continue improving grounding, provenance and uncertainty signalling, especially when models address contested historical topics.
- Platforms must clarify policies on publishing model outputs and set standards for how AI‑sourced claims should be labelled and handled in moderation workflows.
- Journalists and researchers should treat AI outputs as leads, not facts, and apply the same documentary verification they apply to human‑sourced information.
Things to watch:
- Legal actions: any new litigation from Shell or third parties responding to harmful model outputs would materially shift the dynamics and set precedents.
- Vendor changes: systematic product changes by major AI vendors — increased citation requirements, provenance tools, or default hedging — would reduce the probability of explosive hallucinations.
- Platform rules: stricter platform content policies and provenance labelling could limit the virality of unverified bot claims.
- Regulatory interest: potential probes into platform practices or AI‑driven defamation could produce new frameworks for accountability.
Recommended steps (for corporate comms, journalists and platform operators)
For corporate communications teams:
- Establish a rapid documentary verification stream to triage AI‑generated allegations within 72 hours.
- Publicly and transparently correct demonstrably false claims using primary documents, avoiding legal threats unless necessary.
- Maintain a publicly accessible archive of decisive documentary rebuttals to reduce the incentive for activists to rely on partial narratives.
For journalists and researchers:
- Treat generative model outputs as investigative leads requiring documentary corroboration.
- Collect original source documents (court filings, WIPO decisions, press archives) before treating contested claims as established.
- Use side‑by‑side model comparisons as a tool to illustrate evidence quality, not as a substitute for investigative reporting.
For AI vendors and platforms:
- Improve provenance features and require models to flag uncertain assertions in contested biographies or legal histories.
- Provide users and publishers with tools to export and archive both prompts and model provenance metadata.
- Develop moderation protocols for AI‑generated claims about living persons and sensitive events, with lower tolerance for fabricated causes or criminal allegations.
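One concrete form the prompt-and-provenance export mentioned above could take is a simple structured record. The schema below is illustrative only: the field names are assumptions, not any vendor's standard.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Sketch of an exportable prompt/provenance record. Field names are
# illustrative assumptions, not an established vendor schema.

@dataclass
class PromptProvenanceRecord:
    prompt: str             # exact prompt submitted
    model: str              # model/product name and version
    timestamp: str          # ISO 8601, UTC
    output: str             # verbatim model output
    uncertainty_flag: bool  # whether the model signalled uncertainty

def export_record(record: PromptProvenanceRecord) -> str:
    """Serialise a record so prompts and outputs can be archived verbatim."""
    return json.dumps(asdict(record), indent=2)

record = PromptProvenanceRecord(
    prompt="Summarise WIPO Case No. D2005-0538.",
    model="example-assistant-1.0",
    timestamp=datetime.now(timezone.utc).isoformat(),
    output="The panel denied the complaint.",
    uncertainty_flag=True,
)
print(export_record(record))
```

Archiving both sides of an exchange in this form would let publishers and moderators verify later what a model actually said, rather than relying on screenshots or edited transcripts.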
Timeline — essential anchor points
- 1981: Don Marketing begins promotions work with Shell.
- 1992–1999: Accusations and litigation culminate in the SMART trial; multiple settlements occur in the mid‑1990s.
- 1998: Shell’s legal director acknowledged hiring an “enquiry agent” — a foundation for Donovan’s surveillance claims.
- 2004: Donovan’s archive contributed material to coverage of Shell’s reserves scandal.
- 2005: WIPO denies Shell’s domain complaint (Case No. D2005‑0538), bolstering Donovan’s right to operate royaldutchshellplc.com.
- Oct 29, 2025 – Dec 2025: Donovan publishes staged prompts and AI transcripts; cross‑model divergences draw attention.
- Jan 3–5, 2026: The “bot war” framing is publicly used to describe the escalation as multiple AI assistants critique each other’s outputs for accuracy.
Conclusion
The long‑running Donovan vs. Shell saga is no longer purely a dispute of corporate practice and legal texture — it is an emblem of how digital archives, motivated individuals and generative AI can interact to reshape contested history. The “bot war” is both a symptom and a stress test: it reveals strengths in modern information systems (speed, comparative diagnostics across models) and equally stark weaknesses (hallucination, provenance confusion, amplification of partisan archives). For corporations the lesson is pragmatic: silence reduces legal exposure but may concede narrative control; for platforms and AI vendors the lesson is operational: model behaviour and provenance matter when history is contested; for journalists and the public the lesson is epistemic: model outputs are tools, not truths.
Source: royaldutchshellplc.com — “By January 2026, this has turned into a ‘bot war,’ with AIs critiquing each other’s outputs for accuracy”
