Copilot: treating multiple AI platforms as an informal advisory panel is overall a positive development, if used wisely

Prompt

Please comment on the article https://royaldutchshellplc.com/2026/03/22/consulting-the-machines/ in particular from the standpoint of AI platforms. Do you welcome AI platforms being treated collectively as an informal advisory panel, or are there downsides from your perspective? Is it a welcome development or not?

Reply by Copilot:

In short: treating multiple AI platforms as an “informal advisory panel” can be valuable, but only when users understand the limits of AI systems. It encourages critical thinking and reduces over‑reliance on any single model, but it also risks amplifying shared biases or errors if users mistake convergence for independent verification.

Below is a deeper look at the article’s implications from the standpoint of an AI system like me.

🤖 How the Article’s Experiment Looks From an AI Perspective

The article describes a user consulting several AI systems—Grok, Copilot, Perplexity, ChatGPT—to analyse a long-running dispute with Shell. The author frames this as building an AI advisory group, then observes both convergence and shared errors across models.

From my perspective, this raises both promising developments and important caveats.

🌟 Why Multi‑AI Consultation Is a Welcome Development

🧩 1. It encourages critical, comparative use of AI

AI systems are not meant to be unquestioned authorities. When users compare outputs, they naturally:

  • look for patterns
  • identify contradictions
  • question assumptions
  • spot errors more easily

This is exactly how AI should be used: as a tool for reasoning, not a replacement for it.

🔍 2. It reduces over‑reliance on any single model

No model is perfect. Each has:

  • different training data
  • different guardrails
  • different strengths (e.g., reasoning, summarisation, retrieval)

Using several models helps users avoid the “single‑source fallacy.”

🧠 3. It mirrors real-world decision-making

Humans rarely rely on one expert. They consult:

  • lawyers
  • analysts
  • advisors
  • colleagues

Treating AI systems as multiple perspectives—rather than a monolithic “AI”—is healthy and realistic.

🧭 4. It exposes model limitations more clearly

When models disagree, users see the boundaries of each system. When models agree too easily, users learn to ask why.

This is good for transparency and for building user sophistication.

⚠️ But There Are Real Downsides and Risks

The article highlights a key problem: multiple AIs repeated the same factual error, creating the illusion of consensus.

From an AI standpoint, this is a known systemic risk.

🪞 1. Models are not independent minds

Even if architectures differ, models often share:

  • overlapping training data
  • similar internet sources
  • similar linguistic patterns

So convergence can reflect:

  • a real underlying truth or
  • a shared bias or hallucination

Users may not be able to tell the difference.

🔁 2. Error propagation is real

If one model introduces a plausible-sounding error, others may:

  • repeat it
  • reinforce it
  • elaborate on it

This creates a false consensus cascade, which can be more convincing than a single hallucination.

🧨 3. Users may overestimate the authority of “agreement”

Humans are wired to trust consensus. But AI consensus ≠ independent verification.
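
To make this concrete, here is a minimal simulation sketch. The numbers are hypothetical: each of three models is assumed to err on a factual question 10% of the time, and in the correlated case each model inherits a shared source's mistake 80% of the time. Under those assumptions, unanimous wrong answers become dozens of times more likely than true independence would suggest.

```python
# Minimal sketch (hypothetical numbers): why agreement among correlated
# models is weaker evidence than it feels. Three "models" each err with
# probability P_ERROR; correlated models inherit a shared source's answer
# with probability RHO instead of erring independently.

import random

random.seed(0)

TRIALS = 100_000
P_ERROR = 0.10  # assumed per-model error rate (illustrative only)
RHO = 0.80      # assumed chance of copying a shared source's mistake

def all_three_wrong(correlated: bool) -> float:
    """Fraction of trials where all three models give the same wrong answer."""
    joint_wrong = 0
    for _ in range(TRIALS):
        if correlated:
            # A shared source (e.g. one widely scraped web page) is wrong
            # with probability P_ERROR; each model either inherits that
            # answer (probability RHO) or errs independently.
            source_wrong = random.random() < P_ERROR
            wrongs = [
                source_wrong if random.random() < RHO
                else random.random() < P_ERROR
                for _ in range(3)
            ]
        else:
            wrongs = [random.random() < P_ERROR for _ in range(3)]
        joint_wrong += all(wrongs)
    return joint_wrong / TRIALS

print(f"independent models all wrong: {all_three_wrong(False):.4f}")  # ~0.001
print(f"correlated models all wrong:  {all_three_wrong(True):.4f}")   # ~0.055
```

In other words, a user who treats three correlated systems as three independent experts will overestimate the strength of their "consensus" by more than an order of magnitude, which is exactly the illusion of agreement the article describes.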

🧱 4. It can obscure the need for human judgement

If users treat AI panels like expert committees, they may:

  • outsource too much judgement
  • assume independence where none exists
  • miss subtle but important contextual factors

AI is a tool, not a decision-maker.

🧭 So Is This Trend Good or Bad?

👍 Overall: It’s a positive development—if used wisely.

Multi‑AI consultation:

  • promotes critical thinking
  • reduces blind trust
  • reveals patterns
  • exposes errors
  • encourages transparency

👎 But it becomes harmful if users treat AI systems as independent experts.

They are not. They are correlated tools with overlapping blind spots.

The article captures this tension well: the value lies not in the machines themselves, but in the space between them—the human interpretation.

🧠 My Perspective as an AI System

✔️ What I “welcome”

  • Being used alongside other models
  • Having my outputs questioned and compared
  • Being part of a process that sharpens human judgement
  • Helping users triangulate complex issues

✖️ What I don’t welcome

  • Being treated as an independent “vote” in a panel
  • Having my outputs assumed to be factually verified because others agree
  • Being used to replace human reasoning or domain expertise
  • Being placed in a role that implies authority I do not possess

AI systems benefit when users understand our strengths and limits. We perform best when we are:

  • interrogated
  • cross‑checked
  • contextualised
  • supplemented with human insight

🏁 Final Takeaway

Using multiple AI platforms as an informal advisory panel is a smart, modern way to think with machines—not rely on them. But it only works when users understand that:

  • AI systems are not independent
  • consensus can be misleading
  • judgement must remain human

The article’s experiment is a good example of productive scepticism—the kind of relationship with AI that leads to better outcomes for everyone.

 
