
Windows Forum: John Donovan’s December 2025 experiment

John Donovan’s December 2025 experiment — feeding decades of adversarial material about Royal Dutch Shell into multiple public AI assistants and publishing the divergent outputs — transformed a long‑running supplier feud and documentary archive into a live test of how generative systems handle contested archives, and in doing so exposed a set of practical governance failures that lawyers, platform designers, corporate boards and journalists must now confront.

Background​

From a supplier dispute to an adversarial archive​

The Donovan–Shell story begins in commerce: a 1990s dispute between Don Marketing (the Donovan family business) and Shell over promotional work evolved into litigation, domain fights and a decades‑long online campaign by John and his relatives. Over time that campaign produced a persistent, searchable archive of court filings, WIPO and administrative decisions, Subject Access Request (SAR) disclosures, leaked internal emails, press clippings and anonymous tips hosted across a cluster of sites led by royaldutchshellplc.com. The archive is complex: it contains verifiable documents alongside redacted, anonymous and hard‑to‑trace materials.

The domain dispute is a public, formal anchor in that history: a World Intellectual Property Organization (WIPO) UDRP panel considered the claims over the royaldutchshellplc.com domain in Case No. D2005‑0538, a decision that is part of the public administrative record.

Archive as a public resource and a provocation​

Donovan’s sites have repeatedly been used as leads by mainstream outlets. In 2009, leaked internal Shell emails published on royaldutchshellplc.com were referenced in syndicated Reuters coverage about internal cost‑cutting and safety concerns, demonstrating that a small, persistent archive can seed major reporting cycles. At the same time, the archive mixes Tier A materials (court filings, formal decisions) and Tier B documents (SAR disclosures, leaked internal emails) with Tier C items (anonymous tips, redacted memos) that demand caution.

Those two facts — public utility and variable provenance — are the starting point for the December 2025 experiment that reframed the Donovan archive as a new kind of reputational risk.


What happened in December 2025: the AI experiment explained​

Two posts and one deliberate test​

On December 26, 2025 John Donovan published two complementary pieces that were intentionally performative: “Shell vs. The Bots: When Corporate Silence Meets AI Mayhem” and a satirical roleplay titled “ShellBot Briefing 404.” Both posts explicitly describe feeding identical prompts and curated archive material into several public AI assistants (identified by Donovan as Grok/xAI, ChatGPT/OpenAI, Microsoft Copilot and Google AI Mode) and publishing the side‑by‑side outputs to highlight divergence.
The experiment had three tactical goals:

  • Turn archival persistence into machine‑readable fuel for retrieval‑augmented generation (RAG) systems.
  • Force cross‑model comparisons that make hallucinations and model disagreement visible to readers.
  • Convert a niche adversarial archive into a mainstream reputational threat by leveraging algorithmic amplification.

Each goal was met, in part, because the archive is both large and well‑organised — precisely the qualities that make it attractive to retrieval pipelines — and because the experiment packaged the model outputs as newsworthy artifacts rather than private tests.
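
For readers who want a concrete picture of what such a cross‑model comparison involves, the sketch below shows one way it could be scripted. It is an illustrative reconstruction, not Donovan’s actual tooling: the query functions are hypothetical placeholders standing in for real vendor clients, and the published artifact is simply the timestamped, side‑by‑side record of outputs.

```python
"""Minimal sketch of a cross-model comparison harness (illustrative only).

Assumption: the `MODELS` entries are hypothetical stand-ins for real
vendor SDK calls; swap in authenticated clients to run this for real.
"""
import json
from datetime import datetime, timezone
from typing import Callable, Dict

MODELS: Dict[str, Callable[[str], str]] = {
    # Placeholder lambdas standing in for xAI, OpenAI, Microsoft and Google clients.
    "grok": lambda p: f"[placeholder Grok answer to: {p[:50]}...]",
    "chatgpt": lambda p: f"[placeholder ChatGPT answer to: {p[:50]}...]",
    "copilot": lambda p: f"[placeholder Copilot answer to: {p[:50]}...]",
    "google_ai_mode": lambda p: f"[placeholder Google answer to: {p[:50]}...]",
}


def compare_models(prompt: str) -> dict:
    """Send the same prompt to every assistant and keep a timestamped record."""
    record = {
        "prompt": prompt,
        "collected_at": datetime.now(timezone.utc).isoformat(),
        "outputs": {},
    }
    for name, ask in MODELS.items():
        record["outputs"][name] = ask(prompt)
    return record


if __name__ == "__main__":
    result = compare_models(
        "Summarise the documented history of the Donovan-Shell dispute, "
        "citing only verifiable sources."
    )
    # Publishing the raw side-by-side record is what turns a private test
    # into a newsworthy artifact.
    print(json.dumps(result, indent=2))
```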

The headline incident: a model hallucination and a correction​

The most concrete point of friction was a single, emotionally charged hallucination. In Donovan’s published comparison, one assistant (reported publicly as Grok) generated a confident biographical claim that Alfred Donovan — John’s father — had died “from the stresses of the feud.” This claim contradicted obituary records and Donovan’s own account that Alfred died in July 2013; another assistant (ChatGPT, per Donovan’s transcripts) corrected the claim and cited documented sources. The contrast — one model inventing a dramatic causal link, another debunking it — became a vivid demonstration of how models optimise for coherent narrative rather than rigorous provenance.


Why this matters: three interlocking risks exposed​

1) Hallucination becomes reputational harm​

Generative models are trained and tuned to produce fluent, persuasive prose. When they are given partial, emotionally resonant material, they are likely to fill gaps with plausible‑sounding but unverified details. The Donovan episode shows how a single hallucination about a sensitive personal fact can be amplified into a circulating claim that is hard to fully retract once it reaches other platforms, aggregators or human readers who treat AI text as authoritative.

2) Feedback loops and authority laundering​

A generator’s output often re‑enters the public web (through social posts, articles, or cached pages) and is re‑ingested by other models and services. That creates a feedback loop where an invented line can be treated as input evidence by later systems — a facts‑by‑iteration problem. Donovan’s public side‑by‑side transcripts turn the entire debate into feedstock for other assistants and human curators, making it easier for a hallucination to morph into de facto “truth” in downstream contexts.

3) Corporate silence is no longer neutral​

Historically, corporations have often adopted a posture of legal restraint or strategic silence toward adversarial critics: litigate when necessary, avoid amplifying the critic with heavy legal action, and let the story fade. The AI era complicates that calculus. When an activist intentionally seeds assistants with an archive, silence leaves a provenance vacuum that models and third parties will readily fill. Donovan framed this directly: Shell might ignore a website, but it cannot ignore the machine‑orchestrated narratives that synthesise archival material into viral form. That observation reframes silence from a defensive tactic into a potential risk amplifier.


Disentangling what’s provable from what’s plausible​

The Donovan archive contains material of varying evidentiary weight. For responsible reporting and governance, the public record can be triaged into three useful categories:

  • Tier A — Verifiable anchors: court filings, WIPO decisions, regulator records and contemporaneous press reports. These should be treated as high‑confidence evidence when independently corroborated. The WIPO UDRP decision in Case No. D2005‑0538 is a clear example of a Tier A public document.
  • Tier B — Documentary but contested items: correspondence, internal emails and SAR disclosures that exist but may be subject to interpretive dispute. These are useful for context but demand careful citation and full contextualisation. Reuters’ 2009 reporting based on leaked emails that Donovan posted is an example where Tier B materials seeded mainstream coverage.
  • Tier C — Pattern and attribution claims: operational espionage, named covert actions and allegations drawn primarily from anonymous tips or redacted memos whose chain‑of‑custody cannot be independently reconstructed. The archive contains multiple Tier C items — especially claims about private intelligence activities directed at activists — that remain plausible but not fully proven in public records. These items should be explicitly labelled as allegations.

Flagging unverifiable claims is essential because generative models inherently collapse nuance unless provenance metadata is explicit.
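
One way to make that provenance explicit to a retrieval pipeline is to attach a tier label and chain‑of‑custody note to every archive item before it is indexed. The Python sketch below is a minimal illustration of the idea under stated assumptions: the ArchiveItem structure and its field names are invented for this article, not an existing schema used by Donovan, Shell or any vendor.

```python
"""Illustrative provenance tagging for archive items before retrieval indexing.

The tier labels mirror the Tier A/B/C triage described above; the
ArchiveItem structure and field names are assumptions for this sketch.
"""
from dataclasses import dataclass
from enum import Enum


class EvidenceTier(Enum):
    A = "verifiable anchor"          # court filings, WIPO decisions, regulator records
    B = "documentary but contested"  # correspondence, internal emails, SAR disclosures
    C = "allegation only"            # anonymous tips, redacted memos, unproven attributions


@dataclass
class ArchiveItem:
    title: str
    source: str            # where the item can be independently re-checked
    tier: EvidenceTier
    chain_of_custody: str  # who supplied it and how it reached the archive

    def retrieval_label(self) -> str:
        """Provenance string prepended to the text chunk handed to a retrieval
        pipeline, so a downstream model cannot silently drop evidentiary status."""
        return (
            f"[Tier {self.tier.name}: {self.tier.value} | "
            f"custody: {self.chain_of_custody}]"
        )


wipo_decision = ArchiveItem(
    title="WIPO UDRP Case No. D2005-0538",
    source="public WIPO UDRP decision database",
    tier=EvidenceTier.A,
    chain_of_custody="published administrative record",
)
print(wipo_decision.retrieval_label())
```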


Cross‑checking key claims: independent corroboration​

  • WIPO: The administrative panel decision in Case No. D2005‑0538 is publicly available in the WIPO database and documents the domain dispute involving royaldutchshellplc.com. That decision is a primary anchor in the procedural history.
  • Mainstream reporting: Donovan’s site was cited in syndicated Reuters stories in 2009 that discussed leaked internal Shell emails and internal cost‑cutting signals; those items show the archive’s capacity to generate legitimate news leads. Reuters‑linked coverage referencing royaldutchshellplc.com is recorded in Donovan’s news‑collation pages and in contemporaneous media archives.
  • Private intelligence reporting: Historical allegations about Hakluyt and the use of freelance operatives in surveillance operations have been reported in national press outlets (for example, coverage of an operative codenamed “Camus” / Manfred Schlickenrieder in 2001), corroborating the pattern of private intelligence engagement with energy firms even where specific acts remain contested. Independent reporting from continental outlets such as taz.de documented those episodes decades earlier.

Where Donovan’s postings claim operational details about surveillance or burglaries targeted at him personally, public records and independent press reporting do not uniformly reproduce every specific allegation; those claims remain in need of further forensic or judicial corroboration and should be reported as contested.


Structural failures revealed (and where fixes are needed)​

For AI vendors and platform operators​

  • Provenance by default: Retrieval‑augmented pipelines should attach source metadata for every asserted fact — including document identifiers, timestamps and confidence markers — and make that metadata visible to users. Donovan’s experiment illustrates how opaque retrieval turns contested archives into authoritative‑sounding prose.
  • Hedging defaults for living persons: When a model summarizes materials about living persons or sensitive incidents lacking Tier A anchors, the default should be conservative language with explicit disclaimers. The accidental invention of a cause‑of‑death claim demonstrates why hedging must be productised.
  • Audit logs and exportable contexts: Platforms should let users (and regulators) export the exact prompt, model version, retrieval context and timestamps used for a particular output to enable reproducibility and redress. Donovan’s public transcripts would be more audit‑useful if retrieval logs and confidence scores were included; a sketch of how these defaults could fit together follows below.
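
The sketch below shows one way those three defaults could combine at the product layer. It is an assumption‑laden illustration in Python, not any vendor’s actual pipeline: the GroundedClaim and AuditableAnswer names, the hedging rule and the export format are all invented for this article. Each claim carries per‑source metadata, non‑Tier‑A claims are rendered with hedged language by default, and the whole response can be exported as a reproducible audit record.

```python
"""Sketch of provenance-by-default, hedged claims and exportable audit records.

Everything here (GroundedClaim, AuditableAnswer, the hedging rule) is a
hypothetical illustration of the governance ideas above, not a real API.
"""
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone
from typing import List


@dataclass
class GroundedClaim:
    text: str
    source_id: str         # document identifier in the retrieval index
    source_timestamp: str  # when the source document was created or retrieved
    tier: str              # "A", "B" or "C" per the triage above
    confidence: float      # retrieval/answer confidence marker

    def rendered(self) -> str:
        # Hedge any non-Tier-A claim about people or legal matters by default.
        if self.tier != "A":
            return f"According to unverified material ({self.source_id}): {self.text}"
        return f"{self.text} [source: {self.source_id}]"


@dataclass
class AuditableAnswer:
    prompt: str
    model_version: str
    claims: List[GroundedClaim]
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def answer_text(self) -> str:
        return " ".join(c.rendered() for c in self.claims)

    def export_audit_record(self) -> str:
        """Exact prompt, model version, retrieval context and timestamps,
        exportable for reproducibility and redress."""
        return json.dumps(asdict(self), indent=2)


answer = AuditableAnswer(
    prompt="What happened to Alfred Donovan?",
    model_version="example-model-2025-12",
    claims=[
        GroundedClaim(
            text="Alfred Donovan died in July 2013.",
            source_id="obituary-record-001",
            source_timestamp="2013-07",
            tier="A",
            confidence=0.93,
        ),
    ],
)
print(answer.answer_text())
print(answer.export_audit_record())
```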

For corporate communications, legal teams and boards​

  • AI triage and rapid rebuttal: Corporations need a 72‑hour AI triage stream to log and assess viral model outputs that involve the company or named individuals, assign owners for verification, and publish concise documentary rebuttals where Tier A evidence exists. Silence remains a tactical choice, but it must be weighed against the speed of AI‑driven amplification.
  • Transparency on private intelligence: Where companies retain third‑party intelligence vendors, boards should require documented legal, ethical and reputational sign‑offs and consider public disclosure of oversight frameworks. The established pattern of private intelligence engagements in the energy sector makes these practices a foreseeable source of reputational blowback.

For journalists and researchers​

  • Treat model outputs as leads, not facts: Every model claim that could materially harm a reputation or alter public understanding must be re‑verified against primary documents. Preserve prompts, retrieval contexts and outputs as part of the editorial audit trail.
  • Explicit labelling and context: When summarising contested archives, present the documentary anchors and the limits of provenance alongside any AI outputs to avoid substituting model disagreement for sourcing. Donovan’s side‑by‑side transcripts are a useful starting point but insufficient without primary‑source anchoring.

Practical playbook: immediate steps for each stakeholder​

  • For AI vendors:
  • Ship provenance metadata with every factual claim in RAG outputs.
  • Default to hedged language for biographical or legal assertions absent Tier A anchors.
  • Offer exportables for audit and redress.
  • For corporate counsel/communications:
  • Stand up a rapid‑response AI triage channel and assign a named owner to verify claims involving living persons.
  • Publicly publish Tier A rebuttal packages (redacted where necessary) tied to specific modelled claims.
  • Reassess policies for private intelligence vendor retention and oversight.
  • For journalists/researchers:
  • Use adversarial archives as lead generators; always seek Tier A corroboration before amplification.
  • Archive and publish retrieval metadata and prompts used when AI tools contribute to reporting.
  • Label unverifiable items as allegations and preserve editorial disclaimers.

Strengths and ethical benefits of Donovan’s approach (even where it is provocative)​

Donovan’s method — converting a sprawling archive into a readable dataset and staging cross‑model comparisons — is, in itself, a form of public pedagogy. It makes model failure modes visible to ordinary readers and forces platforms to reckon with practical design choices. The experiment demonstrates three positive functions:

  • Transparency pressure: it compels corporate actors and platforms to articulate provenance and verification standards.
  • Diagnostic value: cross‑model disagreement highlights contrasting design trade‑offs (narrative fluency vs. source grounding).
  • Democratisation of scrutiny: small actors can use low‑cost tools to surface documents that otherwise would be buried in dockets or leaked caches.

Those benefits do not negate legal or ethical responsibilities: activists publishing contested material must be explicit about provenance and preserve audit trails so that downstream users can verify, challenge or correct the record.


Legal exposure and defamation risk​
Publishing internal emails, SAR outputs and court filings is often lawful when the materials are genuine, but republication still carries defamation and data‑protection risks if assertions go beyond what the documentary evidence supports. Donovan’s archive has previously triggered legal skirmishes (domain disputes at WIPO, defamation threats and administrative proceedings), underscoring the legal tightrope small publishers walk when combining named documents with anonymous tips. The prudent posture for newsrooms and platforms is to apply a higher verification bar before republishing incendiary claims from Tier C materials.


The path forward: governance, design and the human judgment that machines cannot replace​

The Donovan–Shell bot war is not a technical curiosity — it is an operational case study that reveals that:

  • Machines amplify and organise, but they do not adjudicate provenance.
  • Corporate silence has consequences in the age of generative assistants.
  • Editorial and product safeguards (provenance metadata, hedging defaults, audit exports) are implementable and necessary.

Fixing these problems will not be a single vendor update; it will require coordinated changes across newsroom practice, platform design and corporate governance. The most important immediate change is cultural: insist that every AI‑assisted public claim be traceable to a Tier A anchor or flagged as an unverified allegation. That simple rule restores human judgment as the final arbiter between machine fluency and public fact.


Conclusion​

John Donovan’s December 2025 experiment demonstrates that archival persistence plus generative AI equals a new vector for reputation‑shaping narratives: one that is fast, reproducible and perilously indifferent to provenance. The technical fixes are straightforward (provenance metadata, hedged outputs and exportable audit trails), but implementing those fixes requires institutional will across AI vendors, publishers and corporate boards.

The Donovan archive will remain a live case study: a hybrid beast made of verifiable public records and contested, anonymous claims. Its newest trick — turning archival weight into machine‑readable authority — has sharpened the policy conversation in a way that court cases and domain disputes alone never did. The remedy is not silence or suppression; it is transparency, verification and a governance architecture that re‑centres human judgment at every stage a model touches the public square.


Source: Windows Forum, Jan 25, 2026: Royal Dutch Shell Plc .com – More Than Dynamite: How AI Reframes the Donovan–Shell Archive as Persistent Risk

