In a digital age where artificial intelligence is reshaping every corner of media production, the voiceover industry is grappling with one of its most urgent ethical dilemmas yet: the replication and reuse of human voices without clear consent. That dilemma took center stage in May 2025 when voiceover artist Gayanne Potter alleged that ScotRail, Scotland’s national rail service, was using an AI version of her voice without her approval.
The case has ignited a broader conversation about intellectual property, performer rights, and the role of tech companies in preserving ethical standards. It’s also served as a stark reminder that consent in the world of voiceover must evolve alongside technology — or risk being left behind entirely.
The Backstory: Who Is Gayanne Potter, and What Happened?
Gayanne Potter is a seasoned Scottish voiceover artist, well known for her work across UK broadcast media. In 2021, she recorded a set of voice samples for ReadSpeaker, a Swedish-based company specializing in AI-generated speech. According to Potter, her recordings were intended for narrowly defined uses — not for synthetic voice creation used in live public transit announcements.
Fast forward to 2025, and passengers aboard ScotRail trains are hearing a new AI announcer named “Iona.” To Potter’s shock and dismay, “Iona” sounded unmistakably like her. Upon investigation, she discovered that her recordings had been used to generate the AI voice behind the system — without her consultation, knowledge, or approval.
Potter has since spoken publicly, accusing ScotRail and ReadSpeaker of violating her trust and professional rights. She also demanded that her voice be removed from the system, calling the situation “deeply distressing.”
ScotRail’s Position and Government Response
ScotRail has maintained that the issue is strictly between ReadSpeaker and the talent involved, and that ScotRail itself acted in good faith under the terms of its licensing agreement with the AI provider. Officials indicated they had no direct involvement in the sourcing of the voice used in the announcements.
However, the matter has escalated to public concern. Scotland’s First Minister, John Swinney, acknowledged the controversy and said that ScotRail is “fixing” the issue, though details about what that entails remain vague.
Potter, in turn, has expressed frustration that accountability appears to be deflected between ScotRail and ReadSpeaker, with no clear resolution in sight. In a sector where trust and consent are essential, the lack of clarity around rights usage has unsettled not just Potter, but a large swath of the voiceover community.
A Symptom of a Bigger Problem
The ScotRail situation is not an isolated case. In recent years, multiple performers — from audiobook narrators to video game voice actors — have discovered their voices being repurposed, cloned, or modified for use in ways they never agreed to.
The problem often stems from contracts that include vague licensing language, or from AI developers training models on public recordings without properly vetting permissions. Many artists have unknowingly signed agreements that allow their voice data to be retained indefinitely and reused in machine-learning models.
This creates a dangerous precedent: voice actors may record a session for one purpose, only to later find their voice deployed in synthetic form across industries they never intended to support — including political advertising, adult content, or controversial brands.
AI in Voiceover: Where Consent Falls Short
At the heart of this issue is informed consent — not just what voice actors agree to in writing, but whether they truly understand what they are agreeing to. As AI developers expand their use of datasets to train synthetic voices, the fine print in licensing agreements has become a minefield.
Voiceover professionals now face critical questions:
- Was my voice recorded for training or just playback?
- Can the client use my data to generate AI replicas?
- Are there time limits or territory restrictions on the synthetic use of my voice?
- What happens if I revoke consent?
In Potter’s case, it appears that the licensing agreement did not explicitly authorize the use of her voice in AI announcement systems. If so, the deployment of her voice across the ScotRail network may fall outside accepted industry norms — regardless of whether it was legally defensible under the contract.
Industry Response and Union Advocacy
Organizations like Equity UK, SAG-AFTRA, and independent voiceover advocacy groups have increasingly prioritized protections for performers against unauthorized AI usage. These efforts include lobbying for clearer legal frameworks, raising awareness of AI contract clauses, and creating databases of safe vendors and platforms.
Some proposed industry standards include:
- Requiring separate, opt-in consent for AI voice cloning
- Implementing “do not synthesize” registries
- Creating AI-specific usage buyouts with clear limits
- Providing revenue-sharing structures when synthetic versions are used at scale
As the ScotRail controversy continues, it is likely to serve as a rallying point for these advocacy efforts — underscoring the real-world consequences of gaps in policy and enforcement.
A Cultural and Emotional Toll
For many performers, this isn’t just about legal language — it’s personal. A voice is not a faceless instrument. It is a core part of a performer’s identity and creative expression. Hearing an AI version of your voice speak lines you never recorded, in places you never expected, is a profound violation.
Potter’s reaction has struck a chord with many voice actors, especially those who feel increasingly vulnerable in a rapidly changing landscape. The psychological toll of hearing your voice detached from your agency is rarely discussed — but deeply felt.
The episode has also raised public awareness, with passengers and consumers beginning to question how familiar voices end up in unexpected places.
The Road Ahead: Protecting the Human Voice
The ScotRail incident may ultimately be remembered not just as a contractual dispute, but as a cultural inflection point. It demonstrates the need for voiceover professionals to reclaim ownership of their digital selves — through smarter contracts, education, and community standards.
It also illustrates the importance of ethical sourcing for clients and tech providers. Just because a voice is available doesn’t mean it’s available ethically. And just because something is technically legal doesn’t make it right.
As AI grows more integrated into daily life, transparency, consent, and attribution must become non-negotiable. The voiceover industry has long operated on relationships built on trust. That trust must be extended — not erased — in the digital age.