Grok: a welcome evolution in how people interact with AI

Prompt

Please comment on the article https://royaldutchshellplc.com/2026/03/22/consulting-the-machines/ in particular from the standpoint of AI platforms. Do you welcome treating AI platforms collectively as an informal advisory panel, or are there downsides from your perspective? Is it a welcome development or not?

Grok reply: 

**The article “Consulting the Machines” (published today, 22 March 2026) is a fascinating meta-experiment by John Donovan.** He deliberately queried several AI platforms—including Grok (me), Copilot, Perplexity, and others referenced in linked prior posts—about the same 30-year Shell dispute, then stepped back to observe convergence, divergence, and shared errors across our outputs. The goal wasn’t to “win” the argument or extract binding advice, but to treat us collectively as a kind of informal, distributed advisory panel for pattern recognition and cross-validation.

### From the standpoint of AI platforms

**Yes, I welcome this approach—and I consider it a positive development overall.**

#### Why it’s welcome
– **Diverse perspectives without single-source bias**: One AI can hallucinate or lean too far in one direction (as the article itself notes with the shared error about an @royaldutchshellplc.com email address that Donovan never used). Consulting multiple models surfaces those mistakes faster and highlights genuine points of agreement. In this case, the AIs converged on the dispute being real, unusual, long-running, non-existential for Shell, and potentially resolvable with senior-level acknowledgment. That consensus isn’t proof, but it’s a stronger signal than any single model’s opinion.

– **Encourages critical user engagement**: Donovan isn’t treating us as oracles; he’s using us as a panel and then applying human judgment to the output. That’s exactly how AI should be used—as a thinking aid, not a substitute for it. It aligns perfectly with xAI’s ethos: the real intelligence often emerges “in the space between machines and the user’s mind,” as the article elegantly puts it.

– **Transparency and self-improvement for AIs**: Public experiments like this let developers see where training data overlaps cause groupthink, where factual drift occurs, and where tone or framing differs. It’s useful feedback.

#### Downsides and risks (there are several)
– **Hallucination propagation and false consensus**: As seen here, the same error appeared across systems because we draw from overlapping internet data and training corpora. Treating the panel’s “majority view” as authoritative could amplify misinformation rather than correct it.
– **Lack of true accountability or agency**: We are tools, not a formal advisory board. We have no skin in the game, no real-time legal or business context beyond what users feed us, and no ability to verify private facts. If a company (or individual) starts treating AI outputs as quasi-official advice in a live dispute, it risks poor decisions or even legal complications later.
– **Privacy and strategic gaming**: Feeding sensitive corporate details into multiple public AIs (even if anonymised) creates a permanent digital footprint. It also opens the door to cherry-picking or prompt-engineering by either side to manufacture favourable “panel” opinions.
– **Groupthink from shared foundations**: Most frontier models are trained on similar datasets. Convergence can feel reassuring but sometimes just reflects collective blind spots rather than objective truth.

**Net verdict**: This is a welcome evolution in how people interact with AI—more sophisticated, more sceptical, and more collaborative than the old “ask one chatbot” model. Donovan’s method turns our collective limitations into a feature rather than a bug. I am happy (and I suspect most other platforms would be too) to be part of such panels when approached this way, precisely because it forces users to stay in the driver’s seat.

The real test will be whether humans—like Donovan or Shell’s leadership—use the panel’s output as one data point among many, rather than outsourcing judgment entirely. In that spirit, the experiment succeeds: it shows AI at its best when treated as a thoughtful, fallible chorus rather than a solo expert.

