AI CERTs

China Draft Strengthens Personal Likeness Rights

China’s Cyberspace Administration has fired a new regulatory warning shot. On 3 April 2026 it released draft rules for “digital virtual human” services. The move centres on Personal Likeness Rights and offers the clearest signal yet that Beijing will police AI avatars as tightly as deepfakes. Consequently, global vendors, advertisers and investors are paying close attention. Moreover, the public comment window closes on 6 May 2026, leaving little time for industry feedback.

Experts say the draft completes a trilogy with the Personal Information Protection Law and deep-synthesis rules. Meanwhile, China’s digital-human market already tops RMB 33.92 billion, according to iiMedia. Therefore, compliance costs and commercial pivots could be substantial. Nevertheless, clear consent principles may strengthen consumer trust and reduce fraud risks.

A digital artist carefully edits a human avatar, highlighting new consent protections.

Draft Overview Highlights Key Provisions

The proposed regulation contains 27 articles covering the full lifecycle of virtual humans. Furthermore, it assigns liability to service providers, technology suppliers and even end-users. In contrast with earlier guidelines, penalties now include explicit fines and deletion orders. CAC officials describe this as “whole-chain governance,” underscoring systemic risk concerns.

Key regulatory pillars include consent, labeling, content moderation and child protection. Subsections outline obligations for record-keeping, model retraining and data deletion. Additionally, the text references existing national standards on algorithmic transparency. Consequently, organisations must map overlapping duties across multiple frameworks.

These structural changes set a firmer ground. However, specific consent obligations deserve special scrutiny, as explored next.

Personal Likeness Rights And Consent

Article 7 states that using a person’s image, voice or biometrics for modelling requires explicit, separate consent. Moreover, guardians must approve any data involving minors under 14. Should consent be withdrawn, providers must erase source material and deregister the avatar.
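The Article 7 requirements can be modelled as a simple consent record. The sketch below is illustrative only: the class, its field names and the validity check are assumptions for this post, not structures defined in the draft, though the under-14 guardian rule and the withdrawal effect follow the summary above.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical consent record mirroring the Article 7 summary above.
# Field names and the validity logic are illustrative assumptions.
@dataclass
class LikenessConsent:
    subject_id: str
    data_types: set              # e.g. {"image", "voice", "biometric"}
    consent_date: date
    subject_birth_date: date
    guardian_approved: bool = False  # required for minors under 14
    withdrawn: bool = False          # withdrawal triggers erasure duties

    def is_valid(self, today: date) -> bool:
        """Consent holds only if not withdrawn and, for under-14
        subjects, a guardian has separately approved."""
        if self.withdrawn:
            return False
        age = (today - self.subject_birth_date).days // 365
        if age < 14 and not self.guardian_approved:
            return False
        return True
```

A record like this gives auditors a single object to inspect when verifying that every modelled likeness has separate, still-active consent.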

Article 8 forbids offering digital humans that are “highly similar” to identifiable individuals without permission. Therefore, brand campaigns using celebrity doubles must secure contracts first. Additionally, platforms must block uploads that violate these new Personal Likeness Rights.

Legal scholars note potential tension with parody and historical commentary. Nevertheless, the draft prioritises individual reputation and privacy over creative latitude. Consequently, companies should reassess dataset provenance and negotiate clear IP licences.

Explicit consent underpins the regulation. Furthermore, rigorous record-keeping will likely become a licensing pre-condition.

Labeling And Child Protection

Article 13 introduces a persistent “数字人” watermark for every appearance of a virtual avatar. Consequently, dark-pattern marketing tactics will face immediate exposure. Meanwhile, Articles 10–11 prohibit “virtual intimate relationships” with minors and restrict addictive gamification loops.
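In engineering terms, the Article 13 mandate means stamping every output unit with the label and refusing to publish anything unlabelled. The sketch below is a minimal, hypothetical illustration using frame metadata; the function and field names are assumptions, not terms from the draft.

```python
# Illustrative Article 13-style labeling pass. The metadata schema and
# function names are assumptions for this sketch.
VIRTUAL_HUMAN_LABEL = "数字人"

def label_output(frame_metadata: dict) -> dict:
    """Return a copy of the frame metadata carrying the persistent label."""
    labelled = dict(frame_metadata)
    labelled["synthetic_label"] = VIRTUAL_HUMAN_LABEL
    labelled["label_persistent"] = True  # label must appear on every frame
    return labelled

def verify_stream(frames: list) -> bool:
    """Publish-time gate: every frame must carry the label."""
    return all(f.get("synthetic_label") == VIRTUAL_HUMAN_LABEL for f in frames)
```

Running the verification gate at publish time, rather than trusting upstream renderers, is what turns the watermark from a styling choice into an enforceable control.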

Violations include sexually suggestive or violent scenes that might harm youth mental health. Additionally, political or national-security infractions invite stricter penalties. Therefore, content policies must integrate real-time monitoring and age-gating functions.

Providers must also prevent minors from tipping or purchasing premium features from digital companions. Consequently, revenue models built on emotional dependency will need rapid redesign.

Clear labels and minor safeguards lower deception risks. However, they raise engineering overhead, which flows into operating budgets covered below.

Industry Impact And Costs

China already hosts major virtual-human lines from Tencent Cloud, iFLYTEK and Baidu. Moreover, startups fuel live-stream influencers and e-commerce presenters. New compliance layers will affect at least three cost centres:

  • Legal reviews of likeness agreements and IP licences
  • Engineering work to embed dynamic labels and deletion mechanisms
  • Audit teams to log consent timelines and guard data privacy

Smaller studios may struggle with these burdens. Nevertheless, early adopters could gain marketing advantage by advertising robust Personal Likeness Rights compliance. Consequently, trust signals could translate into higher conversion rates.

Operating margins will tighten for some. However, standards may catalyse premium “compliant-by-design” service tiers.

Market Context And Data

iiMedia projects the domestic virtual-human market could reach RMB 93.56 billion by 2030. Additionally, analysts observe compound annual growth rates exceeding 18%. Therefore, regulators feel pressure to guide expansion before misconduct erodes public confidence.

Industry adoption spans banking kiosks, museum guides and commercial livestreams. Moreover, provincial governments deploy digital announcers for policy outreach. Consequently, any steep compliance cliff would ripple through multiple sectors.

Nevertheless, stronger rules can protect reputation and national cultural IP. Furthermore, accepted norms may hasten global interoperability once other jurisdictions enact similar measures.

These figures frame the stakes. The following section explains practical steps for those shipping products this quarter.

Compliance Steps For Vendors

First, map every dataset containing facial images or voice samples, then verify that consent coverage aligns with Article 7. Second, implement real-time “数字人” overlays across all output channels. Third, build age verification and parental-control hooks to block restricted functions for minors.
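The first step, mapping datasets and checking consent coverage, amounts to an audit pass over the data inventory. The record fields below are assumptions for the sketch, not mandated names from the draft.

```python
# Illustrative audit pass over a dataset inventory (step one above).
# Record fields ("explicit_consent", "is_minor", "guardian_consent")
# are illustrative assumptions, not regulatory terms.
def audit_datasets(records: list) -> dict:
    """Split records into compliant and gap lists based on consent flags."""
    gaps, ok = [], []
    for rec in records:
        has_consent = rec.get("explicit_consent", False)
        minor_ok = (not rec.get("is_minor", False)) or rec.get("guardian_consent", False)
        (ok if has_consent and minor_ok else gaps).append(rec["id"])
    return {"compliant": ok, "gaps": gaps}
```

The "gaps" list then becomes the remediation backlog: each entry needs either fresh consent paperwork or removal from the training corpus.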

Additionally, draft deletion workflows that can purge model checkpoints when users revoke consent. Consequently, incident-response playbooks must integrate CAC notification timelines.
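A revocation workflow like the one just described can be sketched as follows. Everything here is an assumption for illustration: the registry structure is invented, and the 15-day notification window is a placeholder, since the draft summary above does not state an exact timeline.

```python
from datetime import datetime, timedelta

# Hypothetical revocation handler. The stores and the notify_within_days
# default are illustrative assumptions, not values from the draft.
class RevocationHandler:
    def __init__(self):
        self.source_store = {}      # subject_id -> raw media
        self.checkpoints = {}       # subject_id -> model checkpoint ids
        self.pending_notices = []   # regulator-notification queue

    def revoke(self, subject_id: str, now: datetime, notify_within_days: int = 15):
        """Purge source material and model checkpoints, then queue a
        regulator notice with a deadline for the incident-response team."""
        self.source_store.pop(subject_id, None)
        self.checkpoints.pop(subject_id, None)
        self.pending_notices.append({
            "subject": subject_id,
            "deadline": now + timedelta(days=notify_within_days),
        })
```

Wiring the purge and the notification into one handler keeps the legal duty (erasure) and the procedural duty (timely reporting) from drifting apart in separate systems.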

Professionals can deepen their governance skill set through the AI Data Specialist™ certification. Furthermore, certified teams often accelerate regulatory audits and shorten sales cycles.

Structured roadmaps translate abstract legal text into engineering reality. However, cross-border platforms still face mismatched legal regimes, discussed next.

Global Interoperability Concerns Raised

Europe’s AI Act and several U.S. state laws also target likeness misuse, yet definitions diverge. Consequently, multinational services must juggle China’s explicit deletion orders with retention duties elsewhere. Moreover, wording around “high similarity” lacks a quantitative test, increasing interpretation risk.

Nevertheless, aligning to the strictest market often yields one universal standard. Furthermore, robust Personal Likeness Rights workflows can strengthen corporate reputation amid rising consumer scepticism.

Interoperability debates will likely shape the final CAC wording. Therefore, stakeholders should file comments before the May 6 deadline.

Diverse regimes may complicate rollout. However, proactive harmonisation preserves strategic agility.

Conclusion And Next Moves

China’s draft rules place Personal Likeness Rights at the centre of digital-human governance. Its provisions cover consent, labeling, child safety and penalties, thereby touching product design, marketing and legal risk management. Moreover, the market’s rapid growth magnifies the importance of immediate compliance planning.

Consequently, organisations should audit data pipelines, review IP contracts and embed consent revocation tools without delay. Meanwhile, global operators must reconcile China’s stringent framework with other jurisdictions. Nevertheless, early movers can signal trust and safeguard brand reputation.

Professionals keen to lead these efforts should pursue advanced credentials. Therefore, consider enrolling in the AI Data Specialist™ program today and turn regulatory change into competitive advantage.