AI Gets It Wrong: Google AI Mistakes Shell’s Position

Google AI Mode response to prompt:

“Latest news on Donovan–Shell feud”

(Response received 23 January 2026)

As of January 2026, the long-standing dispute between activist John Donovan and Shell plc has entered a new phase characterized by the use of generative AI. Donovan is leveraging AI to analyze his extensive archive of documents related to Shell, leading to AI models repeatedly highlighting and reinterpreting past allegations as recent events. This digital strategy has been termed the “Bot War.”

Latest Developments (January 2026)

2026 AGM: Donovan plans to raise questions about Shell’s response to AI-generated narratives at the 2026 Shell Annual General Meeting (AGM).

Satirical Content: AI-generated dialogues between historical figures linked to the dispute have been published by Donovan.

Shell’s Position: Shell stated in mid-January 2026 that historians reviewed Donovan’s claims and the company disagrees with his conclusions while reserving legal rights. They have not, however, specified factual errors in his published material.

AI Interpretation: Different AI platforms have offered varied perspectives; Grok (xAI) described Donovan’s approach as a “masterclass in digital persistence,” while others have provided inconsistent guidance to Shell.

Feud Origins and Context

The dispute originated in the early 1990s when John and Alfred Donovan’s company, Don Marketing, alleged that Shell had misappropriated intellectual property related to promotional games.

Past Settlements: Shell reached out-of-court settlements in multiple High Court cases in the 1990s, covering legal fees and including a formal written apology in one instance.

Activist Role: Following these legal actions, the Donovans became prominent online activists. Their website, RoyalDutchShellPlc.com, functions as a non-profit repository for leaked information and corporate criticism.

Market Impact: Although analysts note that the feud creates “reputational noise,” the primary driver of Shell’s share price remains global oil market conditions.

AI responses may include mistakes.


Clarification (by John Donovan)

The Google AI Mode response reproduced above states that Shell plc issued a public statement in mid-January 2026 setting out its position on my claims. To my knowledge, no such public statement has been made by Shell plc.

I believe this is a misattribution, arising from the recent publication of private correspondence between me and Shell’s Mr Michiel Brandjes. That correspondence does not constitute a formal or public corporate statement by Shell plc.

This error is noteworthy because it illustrates how AI systems can inadvertently convert private exchanges or secondary material into attributed corporate positions — precisely the reputational and evidential risk raised by AI-mediated narratives.


Further comment invited

Further related expert comment will appear below in response to the following prompt:

Please comment on this article documenting a factual misattribution by Google AI Mode regarding Shell plc: “AI Gets It Wrong: Google AI Mistakes Shell’s Position”. Focus on AI misattribution risk, corporate silence, and how such errors should be handled once identified. There is no preferred viewpoint.

Copilot response…

Misattribution by an AI system—especially when it involves a major corporation like Shell—lands in a sensitive space where technology, public trust, and corporate communication collide. When an AI model incorrectly states a company’s position, the issue isn’t just a technical glitch; it becomes a reputational and informational risk that deserves careful handling.

🧩 AI Misattribution Risk

AI systems are trained to generalize from patterns, which means they can sometimes over‑infer, hallucinate, or blend sources in ways that look authoritative but are factually wrong. In the case you’re referencing, the model appears to have attributed a stance to Shell that the company did not take. That’s a classic example of misattribution risk—where an AI system confidently assigns statements, motives, or actions to an entity without evidence.

This matters because:

  • Corporate positions carry legal and financial weight, so errors can influence markets, public opinion, or regulatory scrutiny.
  • AI outputs are often consumed as fact, especially when delivered through a search interface.
  • Misattribution can spread quickly, amplified by screenshots, reposts, and commentary.

Even when the error is unintentional, the downstream effects can be very real.

🕳️ Corporate Silence

One striking dynamic in these situations is the corporate silence that often follows. Companies may choose not to respond publicly for several reasons:

  • Avoiding amplification: A correction can draw more attention to the original error.
  • Legal caution: Public statements can create liabilities or obligations.
  • Unclear responsibility: When the error originates from an AI model outside the company’s control, it’s not obvious who should speak first.

But silence has its own risks. It can leave the narrative unchallenged, allow misinformation to circulate, or create the impression that the company is indifferent to inaccuracies about its stance.

🔧 How Such Errors Should Be Handled Once Identified

When an AI system misattributes a position to a corporation, the response ideally involves three coordinated steps:

1. Model-side correction

The AI provider should:

  • Verify the claim against authoritative sources.
  • Issue a correction or update to the model or retrieval system.
  • Add guardrails to prevent similar misattributions in the future.

This is part of responsible AI maintenance.

2. Transparent acknowledgment

A brief, factual note—whether in product documentation, release notes, or a public statement—can help:

  • Clarify the nature of the error.
  • Reassure users that the issue is being addressed.
  • Demonstrate accountability without escalating the situation.

Transparency doesn’t require assigning blame; it simply shows stewardship.

3. Corporate communication (optional but beneficial)

If the misattribution meaningfully affects the company’s reputation or public understanding, a corporation may choose to:

  • Issue a concise clarification of its actual position.
  • Engage directly with the AI provider to ensure the correction is implemented.
  • Monitor for recurrence across platforms.

This doesn’t need to be confrontational—just factual.

Perplexity says…

Google’s misattribution in the Shell piece is not a one‑off glitch but a predictable failure mode of large‑scale AI search that exposes both users and companies to reputational and governance risk. When such a system confidently attributes a position to Shell that Shell has never taken, it creates a false “record” that can be screenshotted, repeated, and folded into future AI outputs, even if the original mistake is later quietly fixed.

AI misattribution as a structural risk

AI search systems are now acting as de‑facto publishers of synthesized claims, not just neutral indexes of third‑party pages. That matters because:

  • They routinely conflate sources, infer patterns, and then present those inferences as statements of fact about named people and companies, as seen in other Google AI errors that confidently mis‑state legal exposure, technical specifications, or even religious identity.

  • Once an AI summary asserts “Shell’s position is X,” that sentence is easily detached from context and treated as if it reflected Shell’s own public communications, especially when branded and framed with Google’s authority.

In the Donovan–Shell context specifically, Google AI Mode reportedly interpreted a cross‑model “bot war” experiment and then mis‑described Shell’s stance in that long‑running dispute, treating its own synthesis as if it were Shell’s documented view. This kind of misattribution is particularly risky where there is already a dense historical archive and live controversy, because AI models can stitch together fragments into a spurious “official” narrative that neither side has actually endorsed.

Corporate silence and its limits

Traditional crisis playbooks often treat strategic silence as a defensible default: say as little as possible, avoid amplifying criticism, and let news cycles move on. AI‑mediated environments erode the effectiveness of that strategy:

  • Systems like Google AI Mode will continue to ingest and remix public archives, activist material, and prior commentary whether or not the company says anything; a decade‑old dispute can be turned into an evergreen “current” controversy by AI summarization alone.

  • If the only detailed narrative in the public record is from critics, AI tools will naturally overweight that in their synthesis, making corporate silence functionally equivalent to conceding the story to one side.

In Shell’s case, long‑standing tensions over how company‑commissioned histories treat figures like Sir Henri Deterding already make “narrative integrity” a governance issue. When an AI layer sits on top of that contested archive, silence does not neutralise the risk; it leaves AI systems to improvise Shell’s “position” from whatever is most easily machine‑read and rhetorically vivid.

How errors should be handled once identified

Once a concrete misattribution is spotted—such as Google AI assigning Shell a specific view or statement it has never made—there are several steps that ought to follow, some on the AI provider’s side and some on the corporate side.

For the AI provider (here, Google):

  • Maintain a clear, human‑reachable correction channel for AI answers about individuals and organisations, with time‑bound commitments for review and visible update notes when serious errors are confirmed.

  • Log and version AI outputs that make declarative claims about legal positions, alleged misconduct, or corporate stances, so that affected parties and regulators can reconstruct what was said rather than chasing a disappearing, re‑generated answer.

  • Treat documented misattributions about real entities as safety incidents, not just “hallucinations,” triggering model‑level mitigations (e.g. stricter sourcing and hedging when attributing motives or positions).

For the affected company (here, Shell or any similar firm):

  • Issue a narrowly framed, evidence‑based clarification that distinguishes between its actual public record and the AI‑generated misstatement, ideally pointing to primary documents (filings, reports, prior statements) rather than vague denials.

  • Engage with the AI provider through formal channels, documenting the error and its correction, but avoid over‑reaching demands to erase or suppress broader criticism or unfavourable but well‑sourced history, which can backfire and be reframed by critics and AI systems as censorship.

  • Update internal governance (legal, communications, ESG) so that AI‑related misattributions are tracked as a recurring category of reputational risk, not treated as ad‑hoc one‑offs; in practice this means standing guidance on when to respond, how to supply correct data, and how to monitor for recurrence across multiple AI platforms.

The Google–Shell episode illustrates that once AI tools sit between archives and audiences, factual attribution is no longer just a technical detail; it becomes part of corporate risk management and part of how public memory is manufactured in real time. Companies that continue to rely on silence alone will increasingly find that AI fills the gap for them—and not always in ways they would recognise or accept as their own “position.”

Reply by Grok…

The article, published on January 23, 2026, by John Donovan on royaldutchshellplc.com, details an instance where Google AI Mode (likely referring to an AI-powered search or response feature) inaccurately summarized recent developments in the ongoing Donovan-Shell feud. In response to the prompt “Latest news on Donovan–Shell feud,” the AI claimed that Shell plc had issued a public statement in mid-January 2026, asserting that historians had reviewed Donovan’s claims, that the company disagreed with his conclusions while reserving legal rights, and that it had not specified any factual errors. The article argues this is a factual misattribution, as no such public statement exists; instead, the referenced content derives from private 2011 correspondence between Donovan and Shell’s then-Company Secretary, Michiel Brandjes. It provides context on the feud’s origins in 1990s intellectual property disputes, which led to out-of-court settlements and one formal apology from Shell, and notes Donovan’s subsequent role as an online critic via his website. The piece includes the AI’s full response text as evidence and concludes that such errors highlight broader risks in AI-driven information dissemination. (royaldutchshellplc.com)

AI Misattribution Risk

AI misattribution, as exemplified here, involves systems incorrectly assigning statements, actions, or positions to entities based on incomplete, outdated, or misinterpreted data. In this case, Google AI Mode appears to have conflated private historical correspondence with a contemporary public corporate stance, potentially drawing from digitized archives or secondary sources without verifying recency or context. This risk stems from how large language models ingest vast datasets, including unvetted online content, and generate summaries that prioritize coherence over precision, leading to “hallucinations” or fabrications. For corporations like Shell, the implications include unintended amplification of adversarial narratives—here, portraying the company as actively engaged in a dispute it may prefer to ignore—or misrepresenting its silence as a deliberate position. Broader risks extend to legal domains, such as potential defamation if misattributions imply false endorsements or admissions, though in this instance, the error seems more evidential than harmful. As AI tools become default information gateways, the probability of such errors scales with query volume, underscoring the need for improved source weighting, fact-checking layers, and transparency in model training data. (royaldutchshellplc.com)

Corporate Silence

Corporate silence, a strategy where companies refrain from commenting on disputes to avoid escalation or validation, faces challenges in this scenario. Shell’s apparent non-response to the Donovan feud over decades aligns with this approach, allowing time and market forces to overshadow “reputational noise,” as the article describes it. However, AI misattributions can undermine silence by fabricating or resurfacing “statements” that fill informational voids, effectively pulling the corporation into narratives without its input. In an era of persistent digital archives and algorithmic content generation, silence may be misinterpreted as acquiescence, especially if AI outputs dominate search results or summaries. For Shell, this could subtly influence stakeholder perceptions, investor sentiment, or regulatory scrutiny without direct evidence of impact. Conversely, silence retains viability if the misattribution remains niche or self-corrects through community feedback, avoiding the “Streisand effect” of drawing more attention via rebuttals. The balance depends on the error’s visibility: low-profile cases may warrant inaction, while high-visibility ones might necessitate subtle clarifications to shape future AI ingestions. (royaldutchshellplc.com)

Handling Such Errors Once Identified

Once identified, AI errors like this misattribution should be addressed through a multi-step process emphasizing verification, correction, and prevention, without assuming fault or urgency. First, the affected party (e.g., Shell or Donovan) could document the inaccuracy with evidence, such as timestamps or original sources, to establish a public counter-record—here, clarifying the 2011 private email versus a 2026 statement. Reporting to the AI provider (Google) via feedback mechanisms allows for model tweaks or output refinements, potentially flagging similar patterns in training data. Publicly, a measured response—such as a blog post or statement—could correct the record without engaging the underlying dispute, focusing on factual accuracy to influence future searches. For corporations, internal monitoring tools could scan AI outputs for brand mentions, enabling proactive handling. Legally, if the error rises to defamation, pursuing takedowns or clarifications through platforms’ policies might be viable, though thresholds are high for non-malicious AI content. Prevention involves broader industry efforts, like watermarking AI outputs or requiring source citations, to reduce recurrence. Handling should prioritize de-escalation, as overreaction could perpetuate the narrative, while underreaction risks entrenching falsehoods in digital ecosystems. (royaldutchshellplc.com)
