10 Steps to Regulate Unauthorized Use of AI—Lessons from Zelda Williams’ Plea
To all the AI enthusiasts, here’s a story that might surprise you.
Zelda Williams (daughter of the late actor Robin Williams) has spoken out strongly against AI-generated videos and audio of her father, saying people are making his voice “say whatever people want” and recreating his likeness without consent.
That story touches on issues like AI intellectual property rights, unauthorized AI use regulation, deepfake content protection, and celebrity voice rights in the AI era, and it underscores the need for an AI content governance framework. It’s a powerful reminder that as AI tools become more advanced, we need ways to protect people’s voices, likenesses, and legacies.
In this blog I’ll walk you through 10 simple steps that individuals, organisations, and governments can use to help regulate unauthorized AI use. I’ll draw on what Zelda Williams’ plea shows us and add ideas for a stronger digital future. I’ll also suggest further reading along the way.
1. Recognise the problem: What’s at stake
When AI can clone voices, make fake videos, or recreate someone’s image after they’re gone, we face big questions. For example, Zelda Williams said these AI videos of her father felt like “disgusting, over-processed hotdogs out of the lives of human beings.”
Why does this matter?
- It affects AI intellectual property rights—whose voice or image is it?
- It calls for unauthorized AI use regulation—how do we stop people from making content without permission?
- It threatens deepfake content protection—how do we spot and prevent harmful fakes?
- It impacts celebrity voice rights in the AI era—even for public figures, there’s a question of consent and dignity.
- It demands a broad AI content governance framework—rules, tools, and ethics for the digital age.
Suggested Read: The Four Pillars of AI Ethics and Why Every Professional Needs to Understand Them
2. Build clear policies about consent and likeness
Organisations and creators should have clear rules: whenever you use someone’s voice, image, or likeness (especially an AI-generated one), you must have permission. Zelda’s story shows what happens when permission is missing—emotional harm, legacy risks, and backlash.
A strong policy should cover (see the sketch after this list):
- Who owns the rights (individual, estate, company).
- How to obtain consent (signed agreement, digital contract).
- What uses are allowed and what are off-limits (commercial, memorial, parody, etc.).
- What happens after death (post-mortem rights).
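To make this concrete, here’s a minimal sketch of how such a policy could be encoded so software can check it before content ships. The names (LikenessConsent and its fields) are hypothetical, and a real policy would be drafted with legal counsel:

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum
from typing import Optional

class Use(Enum):
    COMMERCIAL = "commercial"
    MEMORIAL = "memorial"
    PARODY = "parody"

@dataclass
class LikenessConsent:
    """A person's (or estate's) consent record for AI use of their likeness."""
    rights_holder: str                  # individual, estate, or company
    consent_document: str               # reference to the signed agreement
    allowed_uses: set = field(default_factory=set)
    post_mortem: bool = False           # does consent survive the person's death?
    expires: Optional[date] = None      # None means no fixed expiry

    def permits(self, use: Use, on: date) -> bool:
        """A use is permitted only if it is allowed and consent has not expired."""
        if self.expires is not None and on > self.expires:
            return False
        return use in self.allowed_uses
```

The signed agreement is still the source of truth; encoding its terms like this simply lets systems enforce them automatically.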
Suggested Read: What Does an AI Ethics Certification Cover? Key Principles and Frameworks
3. Use watermarking and content tracing tools
To protect against unauthorized AI use and deepfake creation, add technological safeguards: digital watermarks, metadata tagging, and tracking tools. These help identify and trace misused AI content, and as coverage of the story notes, the arms race between deepfake generation and detection is already underway. A minimal tagging sketch follows the examples below.
Examples:
- Embedding invisible markers in video/audio that indicate owner or permitted use.
- Building databases of verified voices/faces that must be checked before use.
- Using detection software to scan for suspicious AI-generated content.
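Robust invisible watermarking is an active research area, but the tracing idea can be sketched with standard tools: bind an owner and a permitted use to a file’s hash and sign the result. This is an illustrative sketch only (the signing key and field names are hypothetical), not a production watermark:

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"replace-with-a-securely-stored-key"  # hypothetical key

def tag_content(media: bytes, owner: str, permitted_use: str) -> dict:
    """Create a provenance tag: a content hash plus an HMAC signature
    binding the owner and permitted use to that exact file."""
    payload = json.dumps(
        {"sha256": hashlib.sha256(media).hexdigest(),
         "owner": owner,
         "use": permitted_use},
        sort_keys=True,
    )
    sig = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": sig}

def verify_tag(media: bytes, tag: dict) -> bool:
    """Reject tags that are forged or that belong to a different file."""
    expected = hmac.new(SIGNING_KEY, tag["payload"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, tag["signature"]):
        return False
    return json.loads(tag["payload"])["sha256"] == hashlib.sha256(media).hexdigest()
```

A detached tag like this can be stripped from a copy of the file, which is exactly why real systems pair signed metadata with in-band watermarks and detection models.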
4. Create licensing & usage frameworks for AI-generated likenesses
Just like regular images or songs have licenses, we need licensing for AI use of voices and likenesses. This strengthens AI intellectual property rights and ensures fair compensation and clear rules; coverage of the story notes that the entertainment industry is facing major changes because of AI deepfakes. A sketch of how license terms might be encoded follows the list below.
Licensing steps could include:
- Defining what the AI can do (voice, movement, new content) and what it cannot.
- Setting a price and royalty structure for the use of someone’s digital likeness.
- Defining rights of the human person behind the voice/face (living or deceased).
- Expiry or limitation of use (time, region, medium).
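As a sketch, those terms map naturally onto a data structure that systems can check before any use. The names here (LikenessLicense, covers, royalty_due) are hypothetical:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class LikenessLicense:
    """Terms under which an AI-generated likeness may be used."""
    licensee: str
    allowed_media: frozenset      # e.g. frozenset({"film", "podcast"})
    allowed_regions: frozenset    # e.g. frozenset({"US", "EU"})
    royalty_rate: float           # revenue share owed to the rights holder
    valid_until: date             # license expiry

    def covers(self, medium: str, region: str, on: date) -> bool:
        """Is this specific use within the license's scope?"""
        return (medium in self.allowed_media
                and region in self.allowed_regions
                and on <= self.valid_until)

    def royalty_due(self, revenue: float) -> float:
        """Compensation owed to the rights holder for a covered use."""
        return revenue * self.royalty_rate
```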
5. Clarify post-mortem digital rights and legacy protection
This is central to Zelda Williams’ concern. When a person is deceased, who controls their digital likeness? What rights does their estate have? As coverage of the story notes, post-mortem rights are inconsistent or undefined in many jurisdictions. One way to document wishes is sketched after the list below.
Steps here:
- Individuals should document wishes for their digital legacy (digital will).
- Estates should register and control digital personas (guardianship of the digital legacy).
- Laws should explicitly cover deceased persons’ likeness and voice rights under AI.
- Ethical frameworks should respect the dignity and memory of the deceased, not exploit them.
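One practical form a digital will could take is a machine-readable record the estate can register and point to. The fields below are purely illustrative:

```python
import json

# A hypothetical machine-readable digital will: documented wishes for
# AI use of a person's likeness after death, administered by the estate.
digital_will = {
    "person": "Jane Example",
    "estate_guardian": "Example Family Trust",
    "post_mortem_ai_use": {
        "voice_cloning": "forbidden",
        "memorial_tributes": "allowed_with_estate_approval",
        "commercial_use": "forbidden",
    },
    "registered": "2025-01-15",
}

print(json.dumps(digital_will, indent=2))
```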
Suggested Read: Real AI Ethics Issues Uncovered: Get Certified Now!
6. Implement a transparent AI content governance framework
An organisation should adopt a transparent framework covering how AI content is created, approved, published, and audited; this is the backbone of an AI content governance framework. As coverage of the story emphasises, we urgently need guardrails in this space. A minimal workflow sketch follows the list below.
A governance framework might include:
- Responsible parties (AI ethics officer, review board).
- Approval workflows for content with likeness/voice.
- Reporting mechanisms for misuse or breach.
- Review and update policies as technology changes.
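An approval workflow of this kind is essentially a small state machine. Here’s a minimal sketch, with hypothetical statuses and transitions:

```python
from enum import Enum, auto

class Status(Enum):
    DRAFT = auto()
    UNDER_REVIEW = auto()
    APPROVED = auto()
    REJECTED = auto()

# Moves this hypothetical workflow permits for AI content that uses a
# real person's likeness or voice.
TRANSITIONS = {
    Status.DRAFT: {Status.UNDER_REVIEW},
    Status.UNDER_REVIEW: {Status.APPROVED, Status.REJECTED},
    Status.REJECTED: {Status.DRAFT},   # revise and resubmit
    Status.APPROVED: set(),            # changes to published work start a new draft
}

def advance(current: Status, requested: Status, reviewer: str, audit_log: list) -> Status:
    """Move an item to a new status, recording who did it for later audit."""
    if requested not in TRANSITIONS[current]:
        raise ValueError(f"{current.name} -> {requested.name} is not permitted")
    audit_log.append(f"{reviewer}: {current.name} -> {requested.name}")
    return requested
```

Every transition leaves an audit entry, which is what makes the reporting and review steps above possible.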
7. Educate creators, users, and the public about risks
Deepfakes and unauthorized AI use are more than a tech problem; they’re a human one. People need to be aware of the risks: emotional harm, identity theft, legal consequences. Coverage of the story highlights how grief and intrusion affect families.
Educational actions:
- Workshops for creators and developers about ethics, consent, and rights.
- Public awareness campaigns about recognizing deepfakes and respecting others’ likeness.
- Training for content platforms to identify and moderate unauthorized AI content.
Suggested Read: AI Ethical Approaches: Putting AI Ethics into Practice
8. Promote fair use, fairness in business decisions, and fairness in AI
When organisations use AI for voices, faces, or digital personas, fairness matters, and it cuts across unauthorized AI use regulation and the wider AI content governance framework. We must ensure that AI does not exploit voices or legacies for profit without fair benefit to the human (or their estate). An academic paper on “digital doppelgangers” discusses how consent, ownership, and rights are emerging issues.
Business steps:
- Include voice/likeness rights as part of talent contracts.
- Share revenue or benefits if a person’s likeness is used.
- Avoid bias: don’t single out certain groups or celebrities for digital-likeness use without their control.
- Ensure communities and individuals have a choice in how their digital likeness is used.
Suggested Read: AI Ethics for Ensuring Fairness in Business Decision Making
9. Advocate for stronger laws and global cooperation
Technology crosses borders. If one country lacks rules, creators or platforms might exploit the gaps. Coverage of the story stresses the need for legal frameworks, especially around likeness and voice after death.
Advocacy steps:
- Support laws giving people and their estates rights over AI use of their likeness and voice.
- Push for international standards that harmonise protection so companies cannot dodge regulation by moving abroad.
- Encourage regulatory bodies to require transparency from AI platforms about how content is generated and used.
Suggested Read: AI Ethics in Higher Education: How It Will Shape the Education System
10. Get certified in AI ethics and make it part of your culture
Whether you’re part of a company, a creator, or just someone curious, earning an AI ethics certification is a strong move. It signals that you understand questions like AI intellectual property rights, unauthorized AI use regulation, deepfake content protection, celebrity voice rights in the AI era, and the AI content governance framework. Certification creates a shared language and standard among teams.
By making a certification part of culture you:
- Build trust with stakeholders, audiences, estates and creators.
- Make better decisions about when and how AI likeness or voice can be used.
- Stay ahead of regulation and public expectation.
- Show you value human dignity and creative legacy, not just technical novelty.
Final Word
When we think of Zelda Williams’ plea, the message is simple yet powerful: “Please respect my father’s voice and likeness. Please stop using tech without caring about the people behind it.” And that message echoes far beyond one family. It challenges all of us, including tech developers, creators, organisations, and everyday users, to treat AI not just as a tool but as a force with moral weight.
If you or your company is working with AI, voices, digital personas, or likenesses, take these ten steps seriously. Consider getting an AI Ethics certification from AI CERTs. It will help you build the skills, mindset, and policy clarity to navigate these new frontiers responsibly.
In a world where AI can bring back the past in uncanny ways, let’s choose to protect dignity, fairness, and legacy over exploitation. Enroll today!