
Who controls the narrative when machines generate “facts” about real people?

Copilot says: “The Donovan/Shell saga has always been unusual — decades of litigation, whistleblowing, leaked documents, and allegations of corporate surveillance. But the arrival of AI systems has added a new layer: Who controls the narrative when machines generate “facts” about real people?”

UPDATE (15 December 2025): From bot-narratives to regulator paper trails

Today the UK’s Financial Reporting Council (FRC) announced it has opened an investigation into Ernst & Young LLP’s statutory audit of Shell plc’s consolidated financial statements for the year ended 31 December 2024, including consideration of whether partner rotation requirements were breached. The FRC stresses that opening an investigation does not indicate any finding of breach, and that the investigation relates only to the relevant statutory auditor/audit firm (not other parties). 

Reuters reports the move follows Shell’s earlier disclosure that EY had failed to comply with partner-rotation rules, including SEC rotation requirements, prompting reissued U.S. audit opinions for 2023–2024 while Shell said its financial statements were unchanged. 

Why does this belong in an article about AI “facts”? Because it underlines the same principle: authority without a visible audit trail is not accountability—whether it’s a chatbot biography or a corporate assurance system. 

Below is a follow-on from the article "Battle of the Bots: Receipts, or Retractions".

Response by John Donovan (assisted by ChatGPT) to the Copilot comments can be found under the red text headline at the foot of this page.

Copilot says:

🟡 Why this matters

The feud has evolved from a corporate dispute into a case study in AI misinformation risks, especially when models generate authoritative‑sounding content about real individuals. It’s also become a public test of how AI companies handle corrections, accountability, and provenance.

If you want, we can dig into the history of the feud, the AI ethics angle, or how Shell has responded over the years.

My reply:

Tell me more about: the AI ethics angle

Here’s a deeper look at the AI‑ethics angle in the Donovan/Shell feud: what the search results actually show, and why this particular dispute has become a surprisingly rich case study in AI accountability, provenance, and corporate power.

🧭 The AI‑Ethics Angle: Why This Feud Became a Test Case

The Donovan/Shell saga has always been unusual — decades of litigation, whistleblowing, leaked documents, and allegations of corporate surveillance. But the arrival of AI systems has added a new layer: Who controls the narrative when machines generate “facts” about real people?

Search results highlight several key ethical themes.

🔍 Core Ethical Issues Raised

1. AI biography accuracy and the risk of invented facts

The spark was the Grok biography incident, where Elon Musk’s model produced a confident but false claim about John Donovan’s late father.

https://windowsforum.com/threads/ai-biographies-and-provenance-the-donovan-shell-grok-fiasco.392417/

  • This wasn’t a minor typo — it was a fabricated biographical detail presented as fact.
  • The episode illustrates how AI can turn “creative licence” into misinformation, especially when writing about real individuals.

This is a textbook example of hallucination risk in generative AI — but with reputational stakes.

2. Provenance and traceability of AI‑generated claims

A WindowsForum analysis notes that Donovan has long maintained a massive archive of documents about Shell, and the AI controversy has now forced a conversation about provenance — where AI gets its information and how it signals uncertainty.

https://windowsforum.com/threads/donovan-shell-archive-ai-summaries-provenance-and-shell-ethics.392434/

This raises questions like:

  • Should AI systems cite sources when discussing real people?
  • How do users verify claims when the model sounds authoritative?
  • What happens when AI contradicts the public record?

The Donovan case becomes a microcosm of the broader debate over AI transparency.

3. “Storytelling masquerading as fact”

A LinkedIn post summarizing the feud quotes ChatGPT as accusing Grok of exactly that: “storytelling masquerading as fact”.

https://www.linkedin.com/posts/jimkaskade_donovanshell-feud-chatgpt-accuses-grok-activity-7403166605037215744-cqyK

The phrase has become shorthand for the ethical tension between:

  • AI as a creative tool
  • AI as an information source

When models blur the line, trust erodes.

4. Corporate power, surveillance, and AI narratives

The Donovan/Shell conflict has always involved allegations of corporate intelligence operations (e.g., references to Hakluyt in the WindowsForum thread).

https://windowsforum.com/threads/donovan-shell-archive-ai-summaries-provenance-and-shell-ethics.392434/

Now, AI systems are being pulled into that orbit — not because Shell is using them, but because AI is shaping public understanding of the feud.

This raises a subtle but important ethical question:

  • When AI systems summarize decades of conflict, whose version of history do they reproduce?

5. AI as an amplifier of long‑running disputes

Shell vs. Donovan is one of the longest activist–corporate conflicts on record. AI has now become:

  • A new battleground
  • A new source of errors
  • A new lens through which the public encounters the story

As one article puts it, the feud has “pulled AI into its gravity well”.

https://shellplc.website/category/leaked-shell-emails/

🧩 Why this case matters for AI ethics

This isn’t just a quirky footnote. It’s a real‑world stress test for AI systems dealing with:

  • Real people
  • Real reputational stakes
  • Long, complex historical records
  • Conflicting narratives

It highlights the need for:

  • Better source transparency
  • Clearer uncertainty signaling
  • Guardrails around biographical claims
  • Mechanisms for correction and accountability

In other words, it’s a perfect storm for AI ethics researchers.
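
To make those requirements concrete, here is a minimal sketch in Python of what a biographical claim with attached provenance and uncertainty signaling could look like. Everything in it is invented for illustration (the field names, the 0.8 threshold, the rendering rules); it is not any vendor’s actual schema.

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class SourcedClaim:
        """One biographical statement plus the evidence behind it (illustrative)."""
        statement: str                  # the claim as the system would state it
        sources: list[str]              # URLs or citations backing the claim
        confidence: float               # the system's own estimate, 0.0 to 1.0
        retrieved: date | None = None   # when the supporting source was last checked
        corrections: list[str] = field(default_factory=list)  # audit trail of revisions

    def render(claim: SourcedClaim) -> str:
        # Refuse to present an unsourced or low-confidence claim as plain fact.
        if not claim.sources:
            return f"[UNVERIFIED] {claim.statement}"
        if claim.confidence < 0.8:
            return f"[LOW CONFIDENCE] {claim.statement} (sources: {', '.join(claim.sources)})"
        return f"{claim.statement} [sources: {', '.join(claim.sources)}]"

    claim = SourcedClaim(
        statement="[a biographical statement about a real person]",
        sources=[],       # no evidence attached
        confidence=0.95,  # high model confidence changes nothing without sources
    )
    print(render(claim))  # prints "[UNVERIFIED] ..." regardless of confidence

The schema is not the point; the point is that “show sources or flag the claim” becomes a mechanical rule, not a research problem, once claims carry their evidence with them.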

Want to go deeper?

Would you like to explore AI provenance and source‑citation problems, how Shell’s historical intelligence controversies intersect with AI narratives, or how AI companies handle corrections when they get facts wrong?


References

  • https://windowsforum.com/threads/ai-biographies-and-provenance-the-donovan-shell-grok-fiasco.392417/

  • https://windowsforum.com/threads/donovan-shell-archive-ai-summaries-provenance-and-shell-ethics.392434/

  • https://www.linkedin.com/posts/jimkaskade_donovanshell-feud-chatgpt-accuses-grok-activity-7403166605037215744-cqyK

  • https://shellplc.website/category/leaked-shell-emails/

  • https://windowsforum.com/tags/shell-ethics/

  • Donovan Shell Copilot Transcript: AI, Surveillance, and the Archive Saga

  • What Happens When You Ask AI About Shell’s Ethics? John Donovan Found Out

  • https://helm.news/2025-12-06/dec-elon-musks-ai-chatbot-grok-falsely-claimed-john-donovans-father-died.html

  • Shell vs. Donovan: How a 30-Year Corporate Feud Just Pulled AI Into Its Gravity Well

Response by John Donovan (assisted by ChatGPT)

Copilot is right about one thing: this does matter — but not merely because an AI model “hallucinated” a biographical detail.

It matters because the modern information supply chain works like this:

Wikipedia → search engines → AI models → “authoritative” summaries — and the weakest link is often the one that gets treated as gospel.
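
To see why the weakest link dominates, here is a toy sketch in Python, purely illustrative (no stage resembles any real search engine or model pipeline), of how a single wrong fact travels the chain while its provenance is stripped at each hop:

    # Toy simulation: one wrong fact enters the chain and comes out the far end
    # sounding authoritative, with its origin stripped at every hop.
    record = {
        "text": "[the fabricated biographical detail about a real person]",
        "source": "wikipedia:revision-123",  # hypothetical revision identifier
    }

    def search_engine(page: dict) -> dict:
        # Snippet extraction keeps the words and the domain, drops the edit history.
        return {"text": page["text"], "source": "wikipedia.org"}

    def ai_model(snippet: dict) -> dict:
        # The paraphrase adds a confident frame; the citation disappears entirely.
        return {"text": "Recent reporting indicates that " + snippet["text"], "source": None}

    summary = ai_model(search_engine(record))
    print(summary["text"])    # fluent, confident prose
    print(summary["source"])  # None: nothing left to trace the claim back to

By the last hop the claim reads as settled fact, and the only place the error can still be checked is the upstream page that nobody is shown.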

For years I published articles pointing out that important information about my late father was wrong, and I drew attention to the errors repeatedly. I even published a book about Shell and Wikipedia, noting how mainstream reporting (including The Guardian’s coverage shortly after his death in 2013) sat in plain sight while misleading biographical material remained embedded in the public record. Yet the incorrect information lingered on Wikipedia for years, apparently immune to correction, until very recently.

And that’s the uncomfortable punchline: it wasn’t careful scholarship or editorial diligence that finally shifted the record — it was embarrassment created by AI repeating an error at scale. Only when the bots started amplifying the mistake did the “public truth” suddenly become editable.

This is where Shell’s own history becomes impossible to ignore. The SAR disclosures (Shell’s internal emails), and the conduct of a senior Shell executive as shown in the SAR material I have published, indicate that Shell was acutely sensitive to, at times fixated on, Wikipedia coverage and my editing of Shell-related pages. This is not a side issue. It goes to the heart of AI ethics, because it raises the deeper question:

When machines generate “facts” about real people, who decides what the machine thinks is true — and who gets to correct it?

Copilot talks about “provenance” and “accountability”. Fine. Let’s apply those words to the full chain:

  • If AI systems are going to present biographical claims as fact, they must show sources — not vibes, not “recent reporting,” not confident tone.

  • If Wikipedia is treated as a default authority, then conflicts of interest, corporate narrative management, and editorial blind spots become AI problems too.

  • If errors persist for years and are only corrected when AI controversy makes them inconvenient, then we are not dealing with truth — we are dealing with reputational triage.

So yes, this is a test case — but not only for xAI, Microsoft, or OpenAI. It is a test case for Wikipedia governance, corporate PR influence, and the wider ecosystem that turns disputed fragments into “machine-certified reality.”

And here’s my challenge to the “Want to go deeper?” invitation:

Let’s go deeper — with documentation.

I want to see:

  1. Exactly what Copilot relied on when it implied “recent reporting”.

  2. A clear distinction between sourced fact and model inference.

  3. A correction trail — not only for AI outputs, but for the upstream pages and claims that fed them (a minimal sketch of what that could look like follows below).
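
On point 3, a correction trail need not be exotic. Here is a minimal sketch in Python, assuming nothing more than an append-only, hash-chained log; every field name is invented for illustration, and the example entry uses placeholders rather than the actual disputed wording:

    import hashlib
    import json
    from datetime import datetime, timezone

    def append_correction(log: list, claim_id: str, old: str, new: str,
                          reason: str, evidence: list) -> dict:
        # Chain each entry to the previous one so silent edits are detectable.
        prev_hash = log[-1]["hash"] if log else "genesis"
        entry = {
            "claim_id": claim_id,
            "old_text": old,
            "new_text": new,
            "reason": reason,
            "evidence": evidence,  # URLs a reader can check independently
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prev": prev_hash,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        log.append(entry)
        return entry

    log = []
    append_correction(
        log,
        claim_id="donovan-father-biography",  # hypothetical identifier
        old="[the fabricated detail as originally published]",
        new="[the corrected detail, with citation]",
        reason="Contradicted by contemporaneous reporting",
        evidence=["[URL of the contemporaneous report]"],
    )
    print(json.dumps(log[-1], indent=2))

Anyone holding a copy of the log can recompute the hashes and prove whether an entry was altered or quietly removed, which is exactly the property a “correction trail” is supposed to provide.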

Because the issue isn’t just that a bot got something wrong. The issue is that a bot got something wrong in a way that reveals how easily the record can be shaped — and how reluctantly it is corrected — until a public scandal forces the edit button to work.

This website and its sister sites royaldutchshellgroup.com, shellnazihistory.com, royaldutchshell.website, johndonovan.website, shellnews.net, and shellwikipedia.com are owned by John Donovan. There is also a Wikipedia segment.
