
BOTS, BARRELS & 114,307 DOCUMENTS: THE DONOVAN–SHELL DIGITAL WAR ESCALATES
Printed below is a 13 February 2026 post on Windows Forum under the dramatic headline: “Bot War: Archival AI Amplification of the Donovan Shell Feud.”
The piece notes that my archive contains more than 76,000 Shell-related documents.
Charming. Historic. Completely out of date.
The current published figure stands at 114,307 documents — and counting.
And that number almost certainly understates the true scale. Why? Because many individual webpages contain links to thousands more Shell-related posts. Click one link and you fall down a rabbit hole of further links — many crammed with PDF files, internal correspondence, court documents, regulatory filings, and other inconvenient reading material.
Some of these gargantuan “mega-pages” are mirrored across multiple websites for security reasons. Belt and braces. Redundancy. Insurance.
Every single page sits there quietly, permanently — a sprawling, searchable monument to decades of controversy.
From Shell’s perspective, it isn’t an archive.
It’s PR poison.
And unlike oil spills, this one can’t be cleaned up with dispersant.
e.g.
https://www.shellnews.net/wikipedia/wikipedia-evidence-file.html
https://www.shellnews.net/2007/royal-dutch-shell-reserves-litigation-may-2007.html
https://shellnews.net/DPA2009/DPA2009INDEXPAGE.html
https://www.shellnews.net/2011/HighCourtTrial.IndexPage.htm
https://shellnews.net/Wiseman/Index.html
https://www.shellnews.net/blog/links.html
and lots of images:
ShellReservesScandal2004.pdf
STEP RIGHT UP: VIEW THE RADIOACTIVE ARCHIVE (SHELL NEED NOT APPLY)
If any genuinely independent third party wishes to visit and see for themselves just how utterly ginormous my Shell archive really is, that can be arranged.
Bring sturdy shoes. And perhaps a Geiger counter.
No one from Shell needs to pop by, of course. The company has demonstrated — both historically and in more recent years — that it prefers to gather information in its own particular way.
Discreetly.
Perhaps one of its former MI6 recruits could handle it. Purely administrative, naturally.
WHY “FEUD” IS NOT TOO STRONG A WORD
Some of the correspondence published below makes it abundantly clear that the word feud is not journalistic flourish. It is documentary fact.
First up: an incendiary Shell press release issued 30 years ago — a broadside that triggered a libel action which Shell ultimately settled. Not exactly the behaviour of a shrinking violet.
Next: correspondence below from a solicitor acting for Shell. In writing, he made clear that the Shell undercover operative “Christopher Phillips” — caught red-handed in our offices examining private mail — was not acting alone. Others, he confirmed, were also making enquiries on Shell’s behalf.
Three decades later, Mr Phillips has yet to re-emerge into daylight.
And finally: an extraordinary exchange, displayed below, between my late father and the same solicitor — a man who issued threats with industrial enthusiasm, including on behalf of parties he did not even represent. The full searchable discussion containing that exchange is also published.
The manuscript at the centre of this legal theatre? It was published in full. It has been freely available ever since. No legal action. No injunction. No courtroom reckoning. Just silence.
As for Sir Philip Watts — the senior Shell executive during that era — he later resigned in disgrace over the Shell reserves scandal, one of the most notorious corporate governance debacles in modern British business history.
Feud?
No.
That’s the polite version.




[Image: four AI-style cards summarizing the 30-year Shell–Donovan dispute, with a gavel and court documents below (ChatGPT, 13 February 2026, 11:51 AM)]
The long-running feud between John Donovan and Royal Dutch Shell has entered a new, digitally amplified phase: a deliberate campaign of feeding decades of archival material into multiple public chatbots and publishing the divergent outputs as a form of public provocation. This “bot war” reframes a 30-year dispute as an experiment in generative‑AI amplification, exposing how archives, adversarial prompting, and model disagreement can create fast-moving reputational shocks that neither traditional PR nor litigation strategies were designed to handle. (royaldutchshellplc.com/2026/02/12/ai-chatbots-escalate-corporate-feud-over-shells-bot-war/)
Background / Overview
For readers who need a concise legal and historical anchor: the Donovans—Alfred and his son John—have run adversarial websites and maintained a sprawling archive of documents, court filings, and contemporaneous material about Shell since the 1990s. The dispute has produced multiple court actions, libel settlements and, crucially for the archive’s durability, a WIPO domain decision (Case No. D2005‑0538) that denied Shell’s 2005 attempt to reclaim several royaldutchshellplc domains. That administrative ruling remains an objective milestone in the public record.
John Donovan now publicly claims that the archive contains more than 76,000 Shell‑related documents, and he has repeatedly used that trove to prompt large public assistants—Microsoft Copilot, OpenAI’s ChatGPT, xAI’s Grok, and Google AI Mode—so their outputs can be compared and amplified. That figure (76,000) is presented consistently on Donovan’s sites and in his public commentary; it should be read as a claim by the activist archive rather than an independently audited inventory. (royaldutchshellplc.com, “Shell vs. Donovan: How a 30-Year Corporate Feud Just Pulled AI Into Its Gravity Well”)
Why this matters now: in late 2025 Donovan staged reproducible prompts across multiple public models and then published the transcripts side by side, explicitly turning model disagreement into public content. That tactic converts differences in training data, retrieval signals, and model design into a multimedia spectacle—short, shareable screenshots showing AIs contradicting one another on historical claims. The result: an old corporate quarrel is suddenly a practical case study in AI hallucination, provenance, and narrative control.
How the “Bot War” Works: Tactics and Mechanics
At its simplest, Donovan’s workflow is reproducible and intentionally theatrical. The pattern he follows explains both the tactic’s reach and its hazards.
Four repeatable steps
- Archive selection — pick a historically contested claim from the Donovan repository (court filings, SAR disclosures, internal memos).
- Prompt engineering — craft a short, precise prompt that references the archival claim and submit it to multiple public assistants within minutes of one another.
- Publication — publish side‑by‑side screenshots or full transcripts of each assistant’s reply, annotated for emphasis.
- Amplification — seed the transcripts into social channels, satire threads (e.g., fictional “ShellBot”), and repeat the process to sustain news cycles.
This deliberately turns model disagreement into content: when Grok produces a vivid narrative, ChatGPT hedges, and Copilot offers an audit-friendly, cautious summary, the public gets a dramatic contrast that functions as evidence of “contradiction” even when the underlying claim remains contested. Donovan frames this dynamic as “adversarial archiving” and “AI‑mediated amplification.”
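For readers who want to see the mechanics concretely, here is a minimal Python sketch of that four-step loop. It is an illustration only: the ask_copilot, ask_chatgpt, ask_grok, and ask_google_ai callables are hypothetical stubs standing in for whatever interface each assistant actually exposes, and nothing here reflects Donovan's own tooling.

```python
import json
from datetime import datetime, timezone

# Hypothetical stand-ins for each public assistant. In practice these would
# wrap whatever web or API interface each vendor exposes; here they simply
# return placeholders so the script runs end to end.
def ask_copilot(prompt: str) -> str: return "[Copilot reply placeholder]"
def ask_chatgpt(prompt: str) -> str: return "[ChatGPT reply placeholder]"
def ask_grok(prompt: str) -> str: return "[Grok reply placeholder]"
def ask_google_ai(prompt: str) -> str: return "[Google AI Mode reply placeholder]"

ASSISTANTS = {
    "Copilot": ask_copilot,
    "ChatGPT": ask_chatgpt,
    "Grok": ask_grok,
    "Google AI Mode": ask_google_ai,
}

def compare_assistants(prompt: str) -> list[dict]:
    """Steps 2-3 of the pattern: submit one prompt to every assistant within
    minutes of one another and keep a timestamped transcript of each reply."""
    transcripts = []
    for name, ask in ASSISTANTS.items():
        transcripts.append({
            "assistant": name,
            "prompt": prompt,
            "reply": ask(prompt),
            "timestamp_utc": datetime.now(timezone.utc).isoformat(),
        })
    return transcripts

if __name__ == "__main__":
    # Step 1: a historically contested claim selected from the archive
    # (wording here is illustrative, not a quotation from the Donovan site).
    prompt = "Summarise the 2004 Shell reserves scandal and its consequences."
    records = compare_assistants(prompt)
    # Step 4: the side-by-side transcript that gets published and amplified.
    print(json.dumps(records, indent=2))
```

The point of the sketch is the structure rather than the stubs: one prompt, several assistants, and a timestamped side-by-side record ready for publication.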
Why models disagree (concise technical primer)
- Differences in training & retrieval: Public assistants rely on different pretraining corpora, retrieval indices, and safety layers; an activist archive that is well-indexed can appear in some retrieval streams and be absent in others.
- Objective functions vary: Some models prioritize fluency and narrative plausibility; others are tuned to grounding and citation‑style outputs. That changes whether a model will invent detail to make a story “coherent.”
- Prompt sensitivity: Identical prompts can produce divergent outputs because of stochastic sampling, temperature settings, or prompt history. Donovan exploits this by publishing reproducible interactions.
These mechanics mean the same archival claim can be restated as a confident assertion, a hedged summary, or an outright correction depending on the assistant.
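The prompt-sensitivity point is easy to demonstrate in isolation. The toy Python sketch below samples a next token from an invented probability distribution at two temperatures; the logits and tokens are made up for illustration and have nothing to do with any real model, but the effect is the same mechanism that lets two runs of the same prompt diverge: low temperature collapses onto the most likely continuation, higher temperature spreads probability across alternatives.

```python
import math
import random
from collections import Counter

# Invented next-token logits for a fragment such as "The reserves scandal was ..."
# (purely illustrative; not taken from any real model).
LOGITS = {"a governance failure": 2.0, "overstated": 1.5,
          "contested": 1.0, "exaggerated by critics": 0.2}

def sample_token(logits: dict[str, float], temperature: float) -> str:
    """Softmax sampling: divide logits by the temperature, exponentiate,
    normalise, then draw one token from the resulting distribution."""
    scaled = {tok: v / temperature for tok, v in logits.items()}
    z = sum(math.exp(v) for v in scaled.values())
    probs = {tok: math.exp(v) / z for tok, v in scaled.items()}
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

if __name__ == "__main__":
    random.seed(0)  # reproducible for the demo
    for temperature in (0.2, 1.5):
        draws = Counter(sample_token(LOGITS, temperature) for _ in range(1000))
        print(f"T={temperature}: {draws.most_common()}")
```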
The Core Risks: Hallucination, Reputation, and Regulatory Exposure
Generative models are probability engines that can sound authoritative even when wrong. That mismatch is central to this story.
Hallucination as a vector of harm
- Hallucinations—plausible but incorrect statements—are not rare. Academic work now provides reliable methods to monitor and detect them, notably semantic entropy and related probes; those tools show that hallucination correlates with uncertainty in semantic space and can be detected algorithmically (a simplified sketch follows this list). But detection is still imperfect and not yet standard practice for corporations relying on public assistants.
- A concrete real‑world example with corporate implications: in 2025 Deloitte produced an AI‑assisted assurance report for an Australian government department that included fabricated academic citations and misquoted court excerpts; Deloitte agreed to return at least part of the contract payment after corrections were demanded. That episode illustrates how AI‑generated errors in formal reports can lead to refunds, reputational hits, and regulatory scrutiny. The Deloitte case is distinct from the Donovan–Shell dispute, but it demonstrates the downstream costs when organizations deploy generative AI without rigorous provenance and verification.
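To make the detection idea less abstract, here is a deliberately simplified Python sketch of the semantic-entropy recipe: sample several answers to the same question, group answers that mean the same thing, and compute the entropy of the resulting meaning clusters (high entropy means the model is semantically unsure). The published methods group answers with a bidirectional-entailment model; this toy substitutes a crude word-overlap test, and the sampled answers are invented, so treat it as an outline of the idea rather than a working detector.

```python
import math

def same_meaning(a: str, b: str, threshold: float = 0.6) -> bool:
    """Crude stand-in for bidirectional entailment: treat two answers as the
    same 'meaning' if their words overlap heavily (Jaccard index)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) >= threshold

def semantic_entropy(answers: list[str]) -> float:
    """Cluster sampled answers by meaning, then compute the entropy of the
    cluster distribution: H = -sum(p * log p) over meaning clusters."""
    clusters: list[list[str]] = []
    for ans in answers:
        for cluster in clusters:
            if same_meaning(ans, cluster[0]):
                cluster.append(ans)
                break
        else:
            clusters.append([ans])
    probs = [len(c) / len(answers) for c in clusters]
    return -sum(p * math.log(p) for p in probs)

if __name__ == "__main__":
    # Invented resamples of a hypothetical model answering one question.
    consistent = ["The settlement was reached in 2004"] * 5
    scattered = ["The settlement was reached in 2004",
                 "Shell never settled the claim",
                 "The case is still before the court",
                 "The settlement was reached in 2004",
                 "No settlement was ever reported"]
    print("consistent:", round(semantic_entropy(consistent), 3))  # 0.0
    print("scattered: ", round(semantic_entropy(scattered), 3))   # > 1.0
```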
Reputational cascades and investor reaction
- Snapshot errors can seed social amplification—screenshots of contradictory assistants are highly shareable and can reach ESG analysts, activists, and retail traders before any corporate rebuttal is prepared. The consequence: reputational noise can become a short‑term market risk even when the underlying claims remain unproven.
- ESG investors increasingly expect transparency about data governance and model safety. If AI‑amplified allegations appear to contradict a firm’s ESG narrative, investors will demand evidence and may reduce scores until clarity returns. That governance pressure increases legal and disclosure costs for public companies.
Legal and editorial burdens
- When an AI invents a causal link (for instance, imputing motives or causes of death) the legal stakes escalate. Donovan’s published transcripts include at least one instance where he highlights a model output that linked a sensitive personal fact to corporate actions—an example that underlines the risk of reputational and defamation claims. Publishers and journalists must treat these outputs as leads requiring documentary corroboration, not as evidence in their own right.
Shell’s Response (and the Perils of Silence)
Shell’s formal posture toward the Donovans has historically been restrained: avoid aggressive suits that could amplify a critic, settle where expedient, and limit public commentary. That approach succeeded in limiting mainstream attention for years but now collides with a different dynamic: silence becomes an absence that AI and archivists can exploit.
PR strategists face a classic dilemma: respond and amplify, or remain silent and let the archive‑plus‑AI narrative accumulate. Donovan’s playbook exploits that dilemma: if Shell replies, it validates the stage; if it keeps quiet, the bot theatre advances unchecked. The point is practical as well as philosophical—in the age of public assistants, silence is an active strategic choice with quantifiable downstream effects. (royaldutchshellplc.com)
Governance, Detection, and Practical Mitigation
Organizations can blunt AI‑driven narrative risk with a programmatic approach that pairs technical safeguards with communications discipline.
Technical controls (minimum viable list)
- Retrieval‑Augmented Generation (RAG) with provenance anchors: use RAG systems that attach verifiable source snippets and URL anchors to any AI answer about sensitive, historical, or legal topics. This reduces hallucination risk and improves traceability (a minimal sketch follows these lists).
- Prompt and output logging: archive every prompt, model, session metadata, and timestamp. These logs are essential for audits, corrections, and legal defenses.
- Semantic‑uncertainty monitoring: deploy semantic‑entropy probes or newer single‑pass detectors (SEPs/Semantic Energy) to flag high‑uncertainty replies before they are published externally. Academic work shows these methods are effective and increasingly practical. (emergentmind.com, “Semantic Entropy Probes for Hallucination Detection in LLMs”)
Process controls
- Pre‑publication vetting: any outward‑facing AI content touching corporate history, safety, or legal matters must pass a rapid documentary corroboration workflow (72‑hour triage recommended).
- Living issues brief: maintain a continuously updated dossier of historical claims and the primary documents that resolve them. This allows communications teams to respond quickly with primary evidence when a bot‑amplified claim surfaces.
- Quarterly model audits: require quarterly vendor and model audits focused on hallucination frequency, provenance behavior, and content safety for topics flagged as reputationally sensitive. This should feed into ESG reporting where relevant.
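As an illustration of the first technical control above, provenance-anchored retrieval, the Python sketch below shows the shape of the idea: retrieve the best-matching snippets from an indexed archive, pass them to the model together with their source URLs, and return the answer bundled with those anchors so every claim can be traced back. The tiny in-memory archive, the keyword scorer, and the generate_answer stub are all hypothetical placeholders; a production system would use a real vector index and a real model call.

```python
from dataclasses import dataclass

@dataclass
class Snippet:
    text: str
    url: str  # provenance anchor attached to every retrieved passage

# Hypothetical in-memory archive; a real deployment would use a vector index.
ARCHIVE = [
    Snippet("2004 restatement of proved reserves and subsequent regulatory settlement.",
            "https://example.com/reserves-restatement"),
    Snippet("WIPO Case No. D2005-0538 denied the 2005 domain complaint.",
            "https://example.com/wipo-d2005-0538"),
    Snippet("Libel action over a 1990s press release was settled out of court.",
            "https://example.com/libel-settlement"),
]

def retrieve(question: str, k: int = 2) -> list[Snippet]:
    """Naive keyword-overlap retrieval; stands in for semantic search."""
    q_words = set(question.lower().split())
    scored = sorted(ARCHIVE,
                    key=lambda s: len(q_words & set(s.text.lower().split())),
                    reverse=True)
    return scored[:k]

def generate_answer(prompt: str) -> str:
    """Stub for the model call; a real system would invoke an LLM here."""
    return "[model answer grounded in the numbered sources above]"

def answer_with_provenance(question: str) -> dict:
    snippets = retrieve(question)
    sources = "\n".join(f"[{i + 1}] {s.text} ({s.url})"
                        for i, s in enumerate(snippets))
    prompt = (f"Answer using ONLY the numbered sources and cite them:\n"
              f"{sources}\n\nQuestion: {question}")
    return {"answer": generate_answer(prompt),
            "sources": [s.url for s in snippets]}  # anchors travel with the answer

if __name__ == "__main__":
    print(answer_with_provenance("What happened in the 2004 reserves case?"))
```

The design choice worth noting is that the source URLs are returned alongside the answer rather than buried in the prompt, which is what makes prompt-and-output logging and later audits meaningful.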
Communications playbook (practical rules)
- Admit uncertainty explicitly when facts are unresolved; provide primary documents when possible.
- Avoid binary denials that repeat false formulations from the archive‑driven output. Instead, contextualize and point to source documents.
- Consider strategic dialogues with the archival operator—where feasible and not legally compromising—to explore closure or clarification. Donovan himself publicly suggests engagement pathways; whether Shell should pursue them is a corporate governance decision. (royaldutchshellplc.com, “Three Decades, Two Donovans, One Supermajor — and Now AI Has Entered the Fight”)
What This Means for Journalists, Platforms, and Investors
- Journalists: treat AI outputs as leads, not facts. Use the same documentary standards for model outputs that you would for anonymous human tips. Side‑by‑side transcripts are useful illustrations but not substitutes for primary sourcing.
- Platforms and vendors: need clearer provenance metadata and default uncertainty signalling for contested historical queries. Donovan’s experiment highlights how public assistants can unintentionally become amplifiers of partisan archives if they do not expose provenance.
- Investors & ESG analysts: monitor both narrative velocity and the presence (or absence) of rapid corporate rebuttal. Speed and transparent evidence release will likely be judged alongside traditional ESG indicators. The Deloitte refund episode underscores investor sensitivity to AI‑enabled errors in formal deliverables.
Strategic Takeaways for Corporate Leaders
- Stop assuming silence is safe. In the era of searchable archives and public assistants, silence cedes narrative advantage to adversarial archivists.
- Operationalize provenance. Any corporate use of public assistants must include RAG systems with strong source linking and a human review loop.
- Detect before you publish. Implement semantic‑entropy probes or single‑pass SEP detectors to flag high‑risk outputs early. Academic methods are now mature enough for operational integration.
- Prepare a documentary rapid‑response team. This team should be able to surface primary documents and issue corrective context on a 48–72 hour cadence.
- Align AI governance with ESG reporting. Expect investors and regulators to ask about model controls, hallucination audit results, and prompt logging. Make disclosure part of the corporate governance routine.
Limits, Uncertainties, and What We Still Don’t Know
- The precise size and composition of the Donovan archive is self‑reported; while royaldutchshellplc.com consistently claims more than 76,000 documents, that number should be treated as the archive’s own accounting rather than an independent audit. The archive’s scale is nonetheless large enough to materially influence search and retrieval signals.
- Attribution of specific model outputs to training data is difficult without vendor provenance logs. Donovan’s method intentionally turns this opacity into spectacle: model disagreement is presented as an argument rather than a technical artefact, and public audiences may not make that distinction. This epistemic gap is why journalists and compliance teams must demand model provenance.
- The Deloitte example shows that even reputable consultancies can produce flawed AI‑assisted reports; however, the decision to refund and reissue does not necessarily prove systemic vendor failure—rather, it illustrates how quickly an unchecked hallucination can become an expensive governance and reputational headache. The details of the Deloitte case have been reported across multiple outlets and the firm acknowledged issues and remediation.
Conclusion: From Risk to Resilience
Donovan’s “bot war” is a blunt demonstration of an emergent truth: archives plus accessible generative AI create a persistent, low‑cost amplification engine that can replay—and remix—decades of contested corporate history in ways that outpace traditional rebuttal mechanisms. The good news is that the same technologies that enable adversarial amplification also supply defensive tools: provenance‑anchored RAG, semantic uncertainty detectors, and archived prompt logs give responsible organizations a path to reduce harm.
But technology alone will not restore credibility. Organizations that combine transparent evidence release, rapid documentary verification, and ethical communications practices will be best placed to convert disruption into resilience. For boards and communications leaders, the immediate task is operational: audit AI workflows, invest in provenance and uncertainty monitoring, and rehearse rapid documentary responses. Those concrete steps can blunt the spectacle of a bot war and return the conversation from spectacle to substantiation—where it belongs.
The Donovan–Shell saga is more than a novelty: it is an early template for how archival actors can weaponize public assistants. The lessons extend well beyond one family or one company. Organizations that move now—aligning technical safeguards with governance, communications, and ethical clarity—will be better prepared for the next archival provocation, and for a future where truth is defended not only by counsel and press offices, but by auditable streams of provenance and a commitment to rapid, documentary transparency.
Source: Royal Dutch Shell Plc .com AI Chatbots Escalate Corporate Feud Over Shell’s Bot War
This website and sisters royaldutchshellgroup.com, shellnazihistory.com, royaldutchshell.website, johndonovan.website, shellnews.net, and shellwikipedia.com, are owned by John Donovan - more information here. There is also a Wikipedia segment.

EBOOK TITLE: “SIR HENRI DETERDING AND THE NAZI HISTORY OF ROYAL DUTCH SHELL” – AVAILABLE ON AMAZON
EBOOK TITLE: “JOHN DONOVAN, SHELL’S NIGHTMARE: MY EPIC FEUD WITH THE UNSCRUPULOUS OIL GIANT ROYAL DUTCH SHELL” – AVAILABLE ON AMAZON.
EBOOK TITLE: “TOXIC FACTS ABOUT SHELL REMOVED FROM WIKIPEDIA: HOW SHELL BECAME THE MOST HATED BRAND IN THE WORLD” – AVAILABLE ON AMAZON.



















