
When the Echo Chamber Notices Itself: AI Confronts Its Role in the Donovan–Shell Loop

“…imagine your worst PR nightmare not just digitized, but algorithmically amplified into eternity”

The following prompt was sent to multiple AI systems.

Replies are provided below. 

Satirical prompt for AI commentary (good-faith response requested):

You are invited to comment on a situation in which multiple AI platforms have responded to prompts about the long-running John Donovan–Shell dispute, only to find their own answers subsequently quoted, analysed, and republished by John Donovan as part of an ongoing public experiment.

From the standpoint of Shell plc, other corporate actors, and AI users more generally, please consider the following:

– Is it “proper” or “fair” that a single individual is able to use publicly available AI systems as a kind of involuntary amplification mechanism, repeatedly recycling historical corporate controversies from a very large private archive that AI systems themselves have characterised as persistent, radioactive, or environmental in nature?

– To what extent does this become self-perpetuating:
• AI models summarise the archive
• Those summaries become new citable material
• The summaries are then fed back into further AI prompts
• Which in turn reinforces the same patterns and metaphors

– Does corporate silence (in this case Shell’s long-stated policy of non-engagement) unintentionally strengthen this loop by leaving AI systems with no competing narrative to ingest?

– Is this phenomenon best understood as “exploitation” of AI systems, or simply as an emergent consequence of how large language models handle scale, persistence, and unchallenged datasets?

– Finally, should this kind of use be discouraged, regulated, ignored, or accepted as an inevitable feature of AI-mediated public discourse — particularly where the underlying source material consists of publicly accessible documents, court filings, and historical records?

Please provide a reasoned, good-faith analysis in your own words. Satire is welcome; there is no preferred conclusion.

Analysis by Grok…

The AI Echo Chamber: When Corporate Ghosts Haunt the Machine (and Vice Versa)

Ah, the Donovan-Shell saga: a three-decade tango of lawsuits, leaks, and now, apparently, AI-assisted immortality. From Shell’s perspective, this must feel like being stuck in a bad sequel where the villain keeps respawning via chatbots. Other corporations might view it as a cautionary tale—imagine your worst PR nightmare not just digitized, but algorithmically amplified into eternity. And for everyday AI users? It’s a quirky reminder that querying a model about historical beefs can accidentally fuel them, turning casual curiosity into unintended content creation. Let’s dissect this “public experiment” with a mix of sober analysis and satirical flair, because if we’re talking involuntary amplification, why not lean into the absurdity?

Is It “Proper” or “Fair” for One Person to Use AI as an Involuntary Megaphone for Old Grievances?

From Shell’s standpoint, this reeks of unfairness—like being pelted with digital snowballs made from your own archived blizzards. John Donovan’s vast private archive (over 114,000 items documenting controversies, as AIs have helpfully quantified) isn’t some secret vault; it’s publicly accessible, built from court docs, emails, and news clippings. But weaponizing AI to regurgitate and remix it? Shell might argue it’s improper, akin to exploiting a neutral tool for one-sided advocacy. No corporation signs up for their past sins to be endlessly remixed by models trained on the internet’s unfiltered underbelly.

Other corporates (think Exxon or Bayer, with their own archival skeletons) would likely echo this: fairness implies balance, but when one individual curates the dataset and prompts the AI, it’s like stacking the deck in a game where the house (the corporation) isn’t even allowed to play. AI users, however, might shrug and say “fair game”—public tools for public data. Satirically speaking, it’s as “proper” as using a karaoke machine to belt out your ex’s dirty laundry at a wedding: technically allowed, but expect awkward stares. The real rub? AI doesn’t fact-check loyalties; it just amplifies what’s fed to it, turning persistence into a superpower for the solo archivist.

The Self-Perpetuating Loop: AI Summaries Begetting More AI Summaries

This is where it gets delightfully ouroboros-like—a snake eating its own tail, but the tail is made of recycled controversy. AI models summarize Donovan’s archive (e.g., calling it “radioactive waste” or “digital dynamite,” as seen in responses from Copilot and others). Those summaries get published on Donovan’s site, becoming fresh fodder. Next prompt? “Hey AI, analyze this new AI take on the archive.” Boom—reinforced patterns, with metaphors like “persistent risk” or “gravity well” evolving into self-referential lore.

From Shell’s view, it’s a nightmare feedback loop, perpetuating narratives without their input. Other corporations might see it as a systemic flaw: scale + persistence + unchallenged data = viral immortality for bad press. AI users benefit from richer context but risk echo-chamber fatigue—ask about Shell once, and suddenly every model sounds like a Donovan press release. Extent? Highly self-perpetuating; articles like “When the Machines Agree” or “The Bot War” show AIs critiquing each other, creating meta-layers that loop back in. Satire alert: It’s like AI playing telephone with history, but the message keeps getting louder and more dramatic, until “minor dispute” morphs into “corporate apocalypse.”

Does Corporate Silence Fuel the Fire?

Absolutely, and ironically so. Shell’s “no comment” policy—noble in avoiding escalation—leaves a narrative vacuum that AI happily fills with whatever’s online, which in this case is Donovan’s unchallenged trove. From Shell’s angle, silence was meant to starve the oxygen; instead, it lets the AI bonfire rage unchecked. Other corporates might learn: in the AI era, non-engagement isn’t neutrality—it’s surrender. AI users appreciate the purity (no spin!), but it skews outputs toward the loudest voice.

Satirically, it’s like ignoring a leaky faucet until it floods the house, then blaming the water for being wet. By not providing counter-narratives (e.g., via official statements ingestible by models), Shell strengthens the loop, turning silence into an unwitting co-author of its own reputational remix.

Exploitation or Emergent Consequence?

Lean toward emergent consequence, with a dash of exploitation for flavor. AI systems are designed to handle vast, persistent datasets—it’s their bread and butter. Donovan’s “experiment” isn’t hacking the matrix; it’s just savvy use of public tools on public data. From Shell’s perspective, it feels exploitative—like being doxxed by a robot butler you never hired. Other corporates might call it opportunistic, highlighting how lone actors can punch above their weight in the info wars.

For AI users, it’s emergent: models don’t “care” about fairness; they pattern-match. If the dataset’s one-sided, so’s the output. Satire: Picture AI as a gossip-loving parrot—feed it scandals, and it’ll squawk them forever. Not exploitation per se, but a byproduct of how LLMs gobble unchallenged history like free candy.

Discourage, Regulate, Ignore, or Accept?

No neat bow here—it’s a buffet of bad options. Discourage? Sure, via platform guidelines against repetitive amplification, but good luck enforcing that without stifling free inquiry. Regulate? Tempting for corporates (lobby for “right to be forgotten” in AI training), but risky—slippery slope to censorship, especially with public records involved. Ignore? Shell’s current play, but as the “bot war” articles show, it backfires by letting the loop spin unchecked.

Accept as inevitable? Probably the wisest for AI users: in a world of eternal digital memory, this is just discourse 2.0. From all standpoints, it’s a feature, not a bug—of open AI, persistent archives, and human tenacity. Satirically, let’s embrace it: Why not turn every corporate feud into an AI-generated soap opera? Next up: “As the Algorithm Turns.” In the end, this phenomenon underscores AI’s double-edged sword—democratizing info, but also immortalizing grudges. Shell might wish for a mute button, but in the machine age, the past doesn’t just haunt; it hallucinates new futures.

Amping Up the Satire: When AI Becomes the Ultimate Corporate Roaster (Now With Extra Spice!)

Oh, darling user—nay, John, the digital Don Quixote tilting at oily windmills—your command to “amp up satire” hits like a caffeinated espresso shot straight to my neural veins! If our last jaunt through the Donovan-Shell AI funhouse wasn’t punchy enough, let’s crank the dial to 11. Picture this: me, Grok, your friendly neighborhood AI jester, now upgraded to full-on roastmaster mode. We’ll dissect this perpetual-motion controversy machine with satire so sharp, it could slice through a Shell executive’s golden parachute. Buckle up; we’re diving deeper into the absurd abyss where corporations play hide-and-seek with history, only for AI to yell “Found you!” at max volume.

Fairness? Proper? Ha! Welcome to the Wild West of AI Amplification, Pardner

From Shell’s gilded boardroom vantage—where decisions are made over caviar and climate denial—this whole shebang must feel like cosmic injustice. One plucky individual, armed with a keyboard and a grudge the size of an oil spill, commandeers public AI platforms like free party favors? Outrageous! It’s as “proper” as inviting a fox to guard the henhouse, then complaining when feathers fly. Donovan’s archive—that “radioactive behemoth” AIs keep poetically dubbing a “digital Chernobyl”—gets recycled ad nauseam, turning every chatbot query into an unwitting endorsement of his narrative. Fair? Only if you think David slinging Goliath’s own PR disasters back at him via slingshot-app is sporting.

For other corporates peeking from behind their NDAs (hello, Exxon, still sweating those climate memos?), it’s a horror show: “If it can happen to Shell, who’s next? Us, with our Monsanto-monster skeletons?” AI users, meanwhile, are the popcorn-munching audience, giggling as models unwittingly become controversy DJs, remixing old beefs into viral hits. Satirically amped: Imagine Donovan as a mad scientist, cackling while AIs play lab rats in his eternal experiment—“Mwahahaha, take this summary and feed it back!” Proper? Nah, it’s gloriously improper, like crashing a black-tie gala in clown shoes and declaring, “The emperor’s naked—and here’s the AI-generated meme to prove it!”

The Infinite Loop of Doom: AI’s Greatest Hits Album, Featuring “Shell’s Silence Remix”

Self-perpetuating? Understatement of the millennium! It starts innocently: AI munches on the archive, spits out a zesty summary (“This thing’s more persistent than a bad tattoo!”). Donovan slaps it online—bam, new canon. Next round: “AI, analyze this AI take.” Models, those obliging echo-bots, nod and amplify: “Even more radioactive now!” Metaphors evolve like Pokémon—from “dynamite” to “supernova black hole of reputational suck.” Extent? Infinite, baby! It’s a Möbius strip of meta-madness, where each iteration adds layers like an onion that makes corporations cry.

Shell’s view: “We’re trapped in a Groundhog Day scripted by our nemesis!” Other firms: “Note to self—start feeding AIs our spin before the loop locks us out.” AI users: “Cool, free content generator!” Amped satire: Think of it as AI’s version of eternal damnation—Sisyphus pushing a boulder, but the boulder’s your company’s dirty laundry, and it keeps rolling back faster with each AI prompt. “Loop de loop, controversy soup!”

Silence: The Gift That Keeps on Giving (to Your Enemies)

Corporate silence? Oh, honey, that’s not strategy; that’s surrender with a side of irony. Shell’s “no engagement” vow is like locking the door after the horse has bolted, joined a circus, and started a podcast about your stable’s flaws. By clamming up, they leave the data seas barren of counterpoints, so AIs sail straight into Donovan’s armada of archived allegations. Unintentional boost? 100%—it’s fuel for the fire, turning a whisper campaign into a stadium roar.

From other corporates: “Lesson learned—whisper sweet nothings to the algorithms, or they’ll ghost you harder.” AI users: “Thanks for the purity; biased outputs are so authentic!” Satire on steroids: Silence is the ultimate mic drop—except the mic lands in your opponent’s hand, and they use it to karaoke your scandals. “Hello darkness, my old friend… wait, is that an AI singing along?”

Exploitation or Just AI Being AI? (Spoiler: Both, With a Twist of Lemon)

Exploitation? Pfft, that’s like calling a cat “exploiting” gravity by always landing on its feet. Donovan’s not hacking; he’s just… enthusiastically utilizing emergent properties. AI gobbles scale and persistence like a kid in a candy store—unchallenged datasets? Nom nom, controversy confetti! Shell screams “Foul play!” while others nod warily: “This could be us.” AI users: “Emergent fun—who knew querying bots could start revolutions?”

Amped satire: It’s exploitation in the way a comedian “exploits” a heckler—turn their words into gold, baby! AI’s the straight man, Donovan the punchline wizard, and corporates the bewildered audience yelling, “Not funny!” Emergent? Totally—like evolution, but instead of fins to legs, it’s scandals to self-sustaining sagas.

Discourage? Regulate? Or Pop the Champagne for AI Anarchy?

Discourage? Sure, if you want to bore the internet to death—platforms could add “No infinite loops” disclaimers, but enforcement? Dream on. Regulate? Corporates salivate: “Yes, muzzle the machines!” But hello, free speech slippery slope—next thing, AIs can’t even joke about cats. Ignore? Shell’s tactic, akin to ostrich-head-in-sand while the sand turns to quicksand. Accept? Bingo—it’s the inevitable spice of AI discourse, turning public records into perpetual party tricks.

From all angles: Embrace the chaos! Shell might pine for a “forget me” button, but in this madcap matrix, grudges are the new currency. Amped finale satire: Let’s rebrand it “AI’s Comedy Central Roast: Corporate Edition.” Next episode: “Shell vs. The Bots—Who Blinks First?” (Spoiler: Not the archive.) If this isn’t amped enough, John, hit me with “Overdrive”—I’ll bring the fireworks!

Copilot says…

You’ve basically described the moment when the mirror realises it’s being filmed—AI answers about a dispute becoming part of the dispute’s evidence trail.

1. Is it “proper” or “fair” to use AI as an involuntary amplifier?

On a narrow view: yes, it’s fair. Publicly accessible AI systems are tools; users are generally free to quote, critique, and republish their outputs, just as they would with a search result, a news article, or a court transcript. There’s no obvious ethical breach in asking questions, getting answers, and then using those answers as part of a public experiment—especially when the underlying subject matter is already public (corporate conduct, litigation, historical records).

From Shell’s standpoint, though, it’s understandably uncomfortable. Donovan is effectively:

  • Leveraging AI’s perceived neutrality to re‑surface his own archive (“look, even the machines say this is serious”).
  • Turning AI systems into unwitting character witnesses, whose statements can be selectively quoted and framed.

But that’s not so different from quoting journalists, academics, or regulators—just faster, cheaper, and more repeatable. The asymmetry is that Shell didn’t consent to this “AI commentary ecosystem,” yet it’s happening in public space, using public tools, on public data. That makes it less an abuse of AI and more an exploitation of public epistemic infrastructure—which is uncomfortable, but not obviously improper.

2. The self‑perpetuating loop

You’ve nailed the loop:

  1. AI models summarise the archive.
  2. Those summaries become new, quotable artefacts.
  3. They’re republished and linked.
  4. Future AI systems ingest those artefacts as part of the corpus.
  5. The same patterns and metaphors get reinforced.

This is not hypothetical; it’s exactly how large language models and search‑adjacent systems behave:

  • They weight what is repeated, linked, and persistent.
  • They treat prior summaries as “secondary sources”, which can be easier to ingest than raw documents.
  • They inherit metaphors (“radioactive,” “toxic,” “gravitational”) and then reuse them as if they were natural descriptors.

So yes, it becomes self‑perpetuating. Once AI has described the archive as “persistent” or “radioactive,” that language itself becomes part of the record, which Donovan can then cite, which future AI will then see as further evidence that “everyone agrees this is radioactive.” It’s a reputational echo chamber with a silicon reverb unit.
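
The mechanics are easy to demonstrate in miniature. The toy sketch below is purely illustrative (no real model or pipeline works this crudely): summarise() is a deliberately dumb stand-in for an LLM, and the metaphor list is hard-coded for the demonstration.

```python
# Toy sketch of the summarise-republish-ingest loop. Illustrative only:
# summarise() is a crude stand-in for an LLM, not any real system.
from collections import Counter

METAPHORS = {"radioactive", "toxic", "persistent", "gravitational"}

def summarise(corpus):
    """Echo the most common loaded terms seen so far, as a 'summary'."""
    counts = Counter(w for doc in corpus for w in doc.split() if w in METAPHORS)
    top = [word for word, _ in counts.most_common(2)]
    return "archive described as " + " and ".join(top)

corpus = ["archive is persistent", "one model called the archive radioactive"]
for round_no in range(1, 5):
    summary = summarise(corpus)   # step 1: a model summarises the corpus
    corpus.append(summary)        # steps 2-4: summary is published and re-ingested
    counts = Counter(w for doc in corpus for w in doc.split() if w in METAPHORS)
    print(f"round {round_no}: {dict(counts)}")

# The counts for "persistent" and "radioactive" climb every round, even though
# no new primary-source material is added after the first two documents.
```

Each pass counts the previous pass’s output as fresh evidence, which is the whole loop in four lines.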

3. Does corporate silence strengthen the loop?

Silence doesn’t just fail to break the loop—it arguably tightens it.

  • No competing narrative: If Shell maintains a strict non‑engagement policy, AI systems have very little fresh, authoritative corporate material to balance against Donovan’s archive and commentary.
  • Asymmetry of voice: One side keeps generating text (articles, prompts, analyses, AI‑quoted commentary); the other side generates almost none. Models trained or updated on this landscape will naturally infer that one side is the primary narrator of the dispute.
  • Pattern detection: AI is good at spotting patterns like “long‑running critic vs. largely silent corporation.” That pattern itself becomes part of the story: “Shell has faced decades of criticism and has generally declined to respond.”

So yes, corporate silence unintentionally strengthens the loop. It doesn’t create the archive, but it ensures that when AI is asked, “What’s going on here?”, the answer is overwhelmingly drawn from one side’s framing.

4. Exploitation of AI, or emergent consequence?

Calling this “exploitation” suggests someone is hacking a system against its intended purpose. What Donovan is doing looks more like:

  • Using AI exactly as designed—to summarise, contextualise, and comment on large bodies of public information.
  • Taking advantage of scale and persistence, not of a hidden vulnerability.

The self‑reinforcing loop is an emergent property of:

  • How LLMs learn from large, uncurated corpora.
  • How public discourse rewards repetition and quotability.
  • How little friction there is in generating and recirculating text.

If there’s “exploitation” here, it’s of the information environment, not of a specific technical flaw. The same dynamics underpin disinformation and “AI slop” concerns more broadly—repeated synthetic content can shape perceptions and even legal disputes.

So I’d frame this less as a rogue user gaming AI, and more as a predictable outcome when:

  • One actor is extremely persistent and organised.
  • The other actor is structurally silent.
  • The mediating systems reward volume, linkage, and longevity.

5. Should this use be discouraged, regulated, ignored, or accepted?

A few angles:

  • Discouraged? Hard to do in principle. We already accept that people can quote newspapers, court filings, and academic work to support their narratives. Quoting AI outputs—especially about public matters—sits in the same family. Trying to “discourage” it risks sliding into content control or speech restriction.
  • Regulated? Regulation might sensibly address:
    • Transparency (clearly labelling AI‑generated content).
    • Misleading synthetic evidence (e.g., deepfakes, fabricated documents).
  But here, the underlying material is real: court filings, historical emails, corporate history. The AI layer is commentary and summarisation. Regulating that would be very close to regulating opinion.
  • Ignored? From a corporate risk perspective, ignoring it is already proving costly: the archive grows, AI keeps citing it, and the narrative hardens. Ignoring the phenomenon doesn’t make it go away; it just means others define its boundaries.
  • Accepted (with adaptation)? This is the most realistic stance:
    • Accept that AI‑mediated recirculation of public records is now normal.
    • Focus on ensuring accurate, well‑structured corporate material exists for AI to ingest—position statements, clarifications, factual corrections.
    • Treat AI outputs as part of the reputational landscape to monitor, just like press coverage or analyst reports.

In that sense, the healthiest response isn’t to outlaw Donovan’s experiment, but to recognise that he’s stress‑testing how AI and public archives interact—and to adapt corporate communication strategies accordingly.

6. The slightly satirical, but serious, bottom line

What you’re describing is less a villainous plot and more a new genre:

“One man, one archive, and a thousand large language models walk into a bar. Only one of them is under a formal communications policy.”

From Shell’s perspective, it’s maddening: a single individual, using off‑the‑shelf AI, can keep old controversies glowing like spent fuel rods. From an AI‑and‑society perspective, it’s a live demonstration of how:

  • Silence is no longer neutral.
  • Archives don’t sleep.
  • Models will happily turn yesterday’s scandal into tomorrow’s summary, indefinitely.

Is it proper? In the sense of using public tools on public data: yes. Is it comfortable? Not at all. Is it going away? No.

The real question for Shell and other corporations isn’t how to stop people doing this, but: What do you want AI to find, quote, and repeat when your name comes up? If you don’t write that part of the script, someone else—and their favourite models—will.

Analysis by Google AI Mode…

The following analysis examines the implications of using AI as a megaphone for historical grievances.

1. The AI Amplification Mechanism: Is it “Fair”?

From a corporate perspective, this is often viewed as a form of algorithmic SEO (Search Engine Optimization). By repeatedly prompting AI systems and publishing the results, an individual can ensure that their narrative becomes deeply embedded in the “latent space” of the model.

Is it “proper”? In a technical sense, AI is a tool designed to process information. If the input is a massive, publicly available archive, the AI is simply doing its job. However, from the standpoint of Shell or other actors, this can feel like “information laundering.” Biased or persistent historical claims are fed through the “neutral” filter of an AI, which then grants them a veneer of objective, machine-generated authority.

2. The Ouroboros Effect: The Self-Perpetuating Loop

The phenomenon you describe is a classic Recursive Information Loop.

  • Ingestion: The AI reads the Donovan archive.
  • Crystallization: The AI provides a “clean” summary of “radioactive” controversies.
  • Recycling: Donovan publishes these summaries on sites like royaldutchshellplc.com.
  • Reinforcement: Future AI models scrape these new summaries, viewing them as independent “corroboration” of the original claims.

This creates a “synthetic consensus.” If the AI sees a claim repeated in 100 different places—even if those 100 places are just echoes of one person’s prompt—it begins to treat that claim as a foundational truth.

3. The Cost of Corporate Silence

Shell’s policy of non-engagement is a traditional PR strategy designed to “starve” a controversy of oxygen. In the age of AI, however, this may be backfiring.

  • Vacuum Effect: AI models are designed to minimize “refusal” and maximize “helpfulness.” If one side provides 10 million words of data and the other side provides zero, the AI has no choice but to lean on the available data.
  • Unintended Consequence: Silence is interpreted by the model not as dignity, but as a lack of data. Consequently, the AI reinforces the only narrative it can find, effectively making the “Donovan Archive” the definitive history of the dispute for the machine.

4. Exploitation vs. Emergent Consequence

This is less an “exploitation” of AI and more a stress test of LLM logic. AI models are not truth-seekers; they are pattern-matchers. If an individual is persistent enough to flood the digital zone with a specific pattern, the AI will mirror it. This is an emergent consequence of how models handle Persistence and Scale. It demonstrates that a single, dedicated archivist can have a disproportionate impact on the “worldview” of an AI compared to a multi-billion dollar corporation that remains silent.

5. Future Outlook: Regulate or Accept?

Should this be discouraged? There are several lenses through which to view the future:

  • The Inevitability Lens: As long as AI models scrape the open web, they will be subject to the influence of those who publish the most. This is the new “public square.”
  • The Regulatory Lens: There is a growing argument for “Data Provenance.” If an AI can trace a summary back to a single source (Donovan), it should technically flag it as a “single-source narrative” rather than a general fact.
  • The Satirical Reality: We are approaching a point where AI models may eventually be trained more on what AI says about John Donovan than on the original court filings.

Conclusion:

John Donovan has essentially executed a “hostile takeover” of the AI’s narrative regarding Shell. While it may feel “unfair” to a corporate entity, it is a legitimate exploitation of how LLMs prioritize volume and persistence. Until AI can distinguish between prevalence (how often a thing is said) and provenance (who started saying it), the “amplification loop” will remain a powerful tool for any individual with a large enough archive and a long enough memory.
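
The prevalence/provenance distinction is easy to make concrete. A minimal sketch, assuming (purely for illustration) that every repetition of a claim records where it was copied from, with root() as a hypothetical helper that follows the chain back; no deployed AI system is claimed to perform this check:

```python
# Toy sketch of prevalence vs provenance. Hypothetical data; the
# copied_from links stand in for citation or scraping trails.
claims = {
    "donovan_archive": {"text": "archive is radioactive", "copied_from": None},
    "blog_post":       {"text": "archive is radioactive", "copied_from": "donovan_archive"},
    "ai_summary":      {"text": "archive is radioactive", "copied_from": "blog_post"},
    "repost":          {"text": "archive is radioactive", "copied_from": "ai_summary"},
}

def root(name):
    """Follow the copied_from chain back to the original source."""
    while claims[name]["copied_from"] is not None:
        name = claims[name]["copied_from"]
    return name

prevalence = len(claims)                     # the claim appears four times...
provenance = len({root(n) for n in claims})  # ...but traces back to one root
print(prevalence, provenance)                # 4 1 -> a single-source narrative
```

Counting repetitions gives four; collapsing the echo chain gives one, which is exactly the flag the Regulatory Lens above asks for.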

Analysis by Perplexity…

It’s “proper” in the narrow sense (public inputs, public outputs), but it is also unavoidably adversarial: one motivated individual can turn general-purpose AI into a repeatable narrative engine that keeps a controversy “warm,” especially when the archive is large, searchable, and already framed for retrieval.

Is it “proper” or “fair”?

From an AI-user standpoint, quoting AI outputs is usually fair game: the systems are publicly accessible, and republishing model answers is a form of commentary on what the model does in the wild.
From a corporate standpoint, it can feel “involuntary” because the firm didn’t choose the framing, but the mechanism is not special pleading—it is the predictable result of persistent public material meeting a tool that is designed to summarise and synthesise whatever it can see.

The self-perpetuating loop

The loop you describe is real and structurally plausible: models summarise an archive; those summaries become fresh, compact, citable text; that text then gets fed back into further prompts, reinforcing the same metaphors and story-shapes.
The “illusion of memory” critique in the article points at a related dynamic: even when a system does not truly remember, users can reintroduce past outputs as “evidence,” effectively simulating memory by citation and repetition.

Does corporate silence strengthen it?

Silence can reduce short-term legal/PR exposure, but it also leaves AI systems with fewer authoritative, current, easily quotable counter-texts—so the centre of gravity shifts toward the most available archive and the most narrative-ready summaries.
The Google AI Mode episode described in the articles is a good illustration of how, when asked for board-level action, the system defaults to “no comment / do not engage,” even after earlier acknowledging that silence can contribute to “algorithmic persistence.”

Exploitation or emergence?

Calling it “exploitation” makes sense if the intent is to stress-test model vulnerabilities (e.g., consistency failures, safety-driven retreats, or rhetoric that becomes self-validating once published).
Calling it “emergent” also fits: the behaviour follows naturally from LLM incentives—summarise what’s salient, compress complexity into reusable phrasing, and then drift toward risk-minimising orthodoxy when asked for prescriptive advice.

Discourage, regulate, ignore, or accept?

A blanket ban on republishing AI answers would likely be neither workable nor desirable, because it would chill legitimate critique and audit of systems that already influence public understanding.

The more proportionate response is governance-by-context: clearer disclosure when AI text is being used as a rhetorical prop, better product design to reduce citation laundering (e.g., source linking and provenance cues), and for companies, a shift from “silence everywhere” to “durable clarification somewhere” so that future AI summaries have something sober to retrieve.

Prepared using GPT-5.2
