What Happens When You Ask Multiple AIs to Analyse a 30-Year Dispute?
By John Donovan
Introduction
In a recent article:
👉 https://royaldutchshellplc.com/2026/03/21/can-ai-help-close-a-30-year-dispute-my-conversations-with-chatgpt-on-shell/
I published a full, unedited exchange with ChatGPT examining a decades-long dispute between myself and Royal Dutch Shell.
That article was not an endpoint.
It was an experiment.
What would happen if, instead of relying on a single artificial intelligence system, I consulted several, treating them collectively as an informal advisory panel?
The Experiment: Building an AI Advisory Group
Following publication, I approached multiple AI platforms—including Grok, Copilot, and Perplexity—and asked each to assess the same underlying situation.
The results were published here:
👉 https://royaldutchshellplc.com/2026/03/20/grok-shell-should-treat-this-as-a-manageable-operational-security-and-reputational-risk-rather-than-an-existential-crisis/
👉 https://royaldutchshellplc.com/2026/03/20/shell-faces-renewed-pressure-to-resolve-long-running-domain-dispute-as-donovan-publishes-fresh-claims/
👉 https://royaldutchshellplc.com/2026/03/20/perplexity-shells-ghost-inbox-how-one-man-ended-up-handling-big-oils-misdelivered-secrets/
The aim was simple:
Not to find the answer—but to observe the pattern of answers.
The Unexpected Outcome: Convergence
Despite differences in tone and framing, the various AI systems showed a notable degree of alignment.
They broadly agreed that:
- The dispute is real, unusual, and long-running
- It is not existentially threatening to Shell
- It is capable of resolution
- And that resolution could plausibly involve acknowledgement at a senior level
This was not coordination.
It was convergence.
The Problem: When AI Gets It Wrong
However, the experiment also exposed a critical weakness.
Several systems repeated the same factual error, an AI "hallucination": the claim that I had used an email address linked to @royaldutchshellplc.com.
I had not.
Once introduced, the error propagated across platforms, creating the illusion of consensus.
Which raises an uncomfortable question:
When multiple AIs agree—are they confirming the truth, or repeating each other’s mistakes?
The Value of Multi-AI Analysis
Despite that flaw, the approach proved valuable.
Using multiple AI systems provides:
Cross-Checking
Errors become easier to detect when responses diverge—or align suspiciously.
Pattern Recognition
The most useful insight is not what one AI says, but what several independently suggest.
Perspective Diversity
Different models frame the same issue differently—legal, reputational, strategic, historical.
Strategic Clarity
You move from:
“What does AI think?”
to:
“What direction does the analysis point in?”
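The cross-checking idea above can be sketched in code. The snippet below is a minimal, hypothetical illustration (the model names and one-word verdicts are invented for the example, not real outputs): it tallies short answers from several systems, reports a majority "consensus" only when more than half agree, and flags the dissenting systems so divergence is visible rather than averaged away.

```python
from collections import Counter

def summarise_panel(answers: dict[str, str]) -> dict:
    """Compare short verdicts from several AI systems and report
    whether they converge (majority view) or diverge (dissenters)."""
    # Normalise the verdicts so trivial differences don't mask agreement.
    counts = Counter(v.strip().lower() for v in answers.values())
    top, top_n = counts.most_common(1)[0]
    return {
        # A consensus is only declared when a strict majority agrees.
        "consensus": top if top_n > len(answers) / 2 else None,
        # Dissenting systems are named, not silently outvoted.
        "divergent": [name for name, verdict in answers.items()
                      if verdict.strip().lower() != top],
        "spread": dict(counts),
    }

# Hypothetical verdicts; real model responses would need far more
# normalisation than a one-word label allows.
panel = {
    "grok": "resolvable",
    "copilot": "resolvable",
    "perplexity": "not resolvable",
}
print(summarise_panel(panel))
```

Note that this sketch deliberately surfaces the dissenter rather than hiding it: as the article argues, suspicious unanimity deserves as much scrutiny as disagreement.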
The View from the Machine: When AIs Start Agreeing
There is a deeper question behind all of this:
What happens when you ask not one artificial intelligence—but several—and they begin to agree?
At first glance, this looks like validation.
Different systems. Different architectures. Same conclusion.
But scratch beneath the surface, and the picture becomes less reassuring.
AI models are not independent minds. They are trained on broadly overlapping data ecosystems, shaped by similar patterns, and optimised in comparable ways.
So when they converge, it can mean two very different things:
- They have identified a genuine structural truth
- Or they are reproducing the same underlying bias at scale
That distinction matters.
Because the same mechanism that produces insight can also produce illusion.
Errors, once introduced, do not remain isolated. They spread. One model makes a plausible but incorrect assumption. Another repeats it. A third reinforces it. Suddenly, you have what appears to be consensus—but is, in reality, a cascade.
This is not uniquely an AI problem.
It is something very human:
groupthink—only faster, cleaner, and more convincing.
And yet, paradoxically, this is also where the strength lies.
When used properly—comparatively, critically, sceptically—multiple AI systems do not weaken analysis.
They sharpen it.
They force the user to:
- question agreement
- interrogate differences
- and separate signal from noise
In that sense, AI does not replace judgement.
It demands it.
Do AI Systems Benefit from This Approach?
It is also worth asking a less obvious question:
What does this kind of multi-platform consultation look like from the perspective of the AI systems themselves?
On the face of it, the answer is positive.
Artificial intelligence is not designed to function as a single, unquestioned authority. It is far better suited to being used comparatively—alongside other systems, with outputs analysed, challenged, and refined.
In that sense, consulting multiple AI platforms:
- Encourages critical thinking rather than blind acceptance
- Promotes better user behaviour (questioning, cross-checking, validating)
- Aligns with how complex decisions are made in the real world: by weighing multiple independent opinions
From this perspective, AI systems benefit indirectly. They are used more intelligently, and their limitations are better understood.
However, there is a more complicated side.
AI models are often trained on overlapping datasets and similar information ecosystems. This means that:
- Apparent agreement may reflect shared training bias, not independent verification
- Errors, once introduced, can propagate across systems, creating false consensus
- Users may overestimate convergence, mistaking it for confirmation of fact
In other words:
These systems are not independent minds, but variations on a similar architecture.
That does not diminish their value.
But it does reinforce the need for human judgement at the centre of the process.
Conclusion
The experiment does not prove that artificial intelligence can resolve a 30-year corporate dispute.
But it does suggest something more subtle—and perhaps more important:
When multiple independent systems begin to point in the same direction, it may be worth paying attention.
Even if the final judgement remains a human one.
Or perhaps especially then.
Closing Line
The real intelligence in multi-AI analysis does not reside in the machines.
It emerges in the space between them—and in the mind that compares them.
DISCLAIMER
This article is opinion and commentary based on the author’s experience using multiple AI systems. It is intended for informational and journalistic purposes only and does not constitute legal or financial advice.