AI CERTS
SGI Definition: India’s New Deepfake Compliance Rules Explained
This article unpacks the background, exact language, operational duties, and industry debate surrounding the measure. It also highlights ambiguities that practitioners must navigate before the next product sprint. Readers will find practical insights for counsel, engineers, policy leads, and trust-and-safety teams. Additionally, we outline certification opportunities that can bolster professional readiness. Let us explore how the law defines, enforces, and tests the promise of safer digital communication. Ultimately, responsible innovation depends on clear guardrails like this one.
Why India Added SGI
India estimates that over 750 million internet users will encounter synthetic, AI-driven content during the 2029 national elections. Regulators therefore sought early safeguards to deter hostile actors who weaponise deceptive videos and fabricated information. MeitY cited rising financial scams, revenge pornography, and media impersonation schemes during public consultations.

In contrast, industry bodies argued that existing fraud laws already addressed much of the harmful activity. Nevertheless, authorities insisted that the velocity of deepfakes demands specialised rules targeting provenance, disclosure, and rapid removal.
Consequently, the government inserted the new category within the 2026 Intermediary Guidelines amendment. That insertion empowered officials to demand action within hours rather than days. These policy drivers explain the political urgency behind the change. Stakeholders converge on one point: synthetic deception poses systemic risk. Effective mitigation, however, demands both robust policy and agile engineering. Next, we review the statutory wording that anchors enforcement.
Official SGI Definition Text
The Amendment inserts Rule 2(1)(wa), which sets out the authoritative SGI Definition. It defines synthetically generated information as audio, visual, or audio-visual content that is algorithmically created or altered. The rule then applies a perception test: would an average user find the output indistinguishable from reality? The threshold therefore hinges on audience perception rather than technical process alone.
Routine colour correction, compression, or accessibility edits remain outside this legal threshold. Consequently, newsrooms and educators can still publish polished footage without fear of breaching the statute. However, manipulated election speeches or forged government documents fall squarely under the SGI Definition.
These carved-outs balance innovation with accountability. With the text clarified, platforms must study their new compliance duties.
SGI Definition Compliance Duties
Intermediaries must deploy reasonable technical measures to detect unlawful SGI while preserving legitimate expression. Additionally, they must embed tamper-resistant provenance metadata and visible labels on every lawful synthetic upload. Therefore, watermarking, hashing, and C2PA signatures are gaining traction among product teams. Key compliance pillars include:
- Clear labeling for users
- Persistent provenance metadata
- Automated detection safeguards
- Faster grievance redressal
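As a concrete illustration of the provenance pillar above, the sketch below computes a content hash and bundles it with a disclosure label and timestamp. This is a simplified stand-in, not a real C2PA implementation: production systems attach cryptographically signed manifests, and the generator name, label text, and field names here are hypothetical.

```python
import hashlib
import json
from datetime import datetime, timezone

def build_provenance_manifest(content: bytes, generator: str, label: str) -> dict:
    """Bundle a content hash, disclosure label, and timestamp.

    Simplified stand-in for a signed C2PA manifest; field names are
    illustrative, not taken from the Amendment or the C2PA spec.
    """
    return {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,   # hypothetical tool identifier
        "label": label,           # user-visible disclosure text
        "created_at": datetime.now(timezone.utc).isoformat(),
    }

manifest = build_provenance_manifest(
    b"<synthetic video bytes>", "hypothetical-gen-v1", "AI-generated content"
)
print(json.dumps(manifest, indent=2))
```

Because the hash binds the manifest to the exact bytes, any re-encoding of the file invalidates it, which is precisely the fragility discussed later in this article.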
Significant social media intermediaries face even stricter steps. They must collect user declarations, verify claims with automated tools, and block unlabeled content from publication. Failure triggers potential loss of safe-harbour under Section 79 of the IT Act. Industry lawyers already warn that investors will scrutinise audit trails during funding rounds.
Moreover, platforms must issue quarterly notices reminding users about penalties for SGI misuse. The SGI Definition underpins each duty and clarifies scope during audits. Collectively, these duties mandate new engineering roadmaps. Next, we assess timelines and sanctions that reinforce urgency.
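A minimal pre-publication gate for these SSMI duties might combine the user declaration, an automated detector verdict, and the labelling requirement. The policy choices below, such as blocking unlabelled synthetic content and holding declaration mismatches for review, are assumptions for illustration, not language from the rules.

```python
def may_publish(declared_synthetic: bool,
                detector_flags_synthetic: bool,
                has_label: bool) -> tuple[bool, str]:
    """Decide whether an upload can go live.

    Combines the user's declaration, an automated detector verdict,
    and the visible-label check. The hold-for-review branch is an
    assumed policy, not a statutory requirement.
    """
    is_synthetic = declared_synthetic or detector_flags_synthetic
    if is_synthetic and not has_label:
        return False, "blocked: synthetic content must carry a visible label"
    if detector_flags_synthetic and not declared_synthetic:
        return False, "held for review: detector disagrees with declaration"
    return True, "published"

# A declared, labelled synthetic clip passes; an undeclared,
# unlabelled one is blocked before publication.
print(may_publish(True, True, True))    # (True, 'published')
print(may_publish(True, False, False))  # blocked
```

Real gates would also log each decision, since the article notes that documented moderation trails matter for audits and safe-harbour defence.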
Accelerated Timelines And Penalties
When officials send a lawful order, platforms now have only three hours to remove violating content. Furthermore, they must process intimate-image complaints within two hours. Grievance officers receive just twenty-four hours to acknowledge any ticket.
Most complaints require resolution within seven days, while certain takedowns must close within thirty-six hours. Consequently, legal exposure grows if teams miss these compressed windows. Loss of safe-harbour could invite criminal prosecution under multiple statutes, from POCSO to the Explosives Act. Meanwhile, users must also receive clear notice when their posts are removed under the rule. Every removal clock is expressly tied to the SGI Definition to avoid ambiguity. Legal advisors suggest documenting every moderation decision to demonstrate diligent response.
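These windows translate directly into queue SLAs. The sketch below maps the deadlines described above onto a simple deadline calculator; the ticket-type names are our own labels, not statutory categories.

```python
from datetime import datetime, timedelta, timezone

# Windows drawn from the rule as described above, expressed in hours.
SLA_HOURS = {
    "lawful_removal_order": 3,       # removal after a lawful order
    "intimate_image_complaint": 2,   # intimate-image complaints
    "grievance_acknowledgement": 24, # acknowledge any ticket
    "expedited_takedown": 36,        # certain takedowns
    "standard_resolution": 7 * 24,   # most complaints
}

def deadline(ticket_type: str, received_at: datetime) -> datetime:
    """Compute the compliance deadline for a ticket type."""
    return received_at + timedelta(hours=SLA_HOURS[ticket_type])

received = datetime(2026, 1, 1, 12, 0, tzinfo=timezone.utc)
print(deadline("lawful_removal_order", received))  # 2026-01-01 15:00:00+00:00
```

Keeping the windows in one table makes it easy to alert on approaching breaches across every queue.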
These numeric mandates convert policy rhetoric into operational deadlines. However, implementing them at scale introduces notable technical and resource challenges.
Operational Implementation Challenges
Detection algorithms still misclassify both authentic and synthetic material, producing false positives and negatives. Meanwhile, provenance metadata seldom survives compression, resizing, or cross-platform sharing. WITNESS warns that provenance which fails to travel is practically useless.
Moreover, the perception-based SGI Definition complicates policy enforcement because context alters user judgment. Engineers need consistent thresholds, yet the rule refers only to indistinguishability from reality. Civil society also flags due-process gaps created by ultra-fast takedown clocks.
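The false-positive/false-negative trade-off can be tracked with standard precision and recall metrics, which platforms will likely report to MeitY once error-rate guidance lands. The counts below are invented for illustration.

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Precision (share of flagged items that are truly synthetic) and
    recall (share of synthetic items actually caught) for a detector."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical audit sample: 90 true detections, 10 benign clips
# wrongly flagged (the chilled-speech risk), 30 deepfakes missed.
p, r = precision_recall(tp=90, fp=10, fn=30)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.90 recall=0.75
```

The tension the article describes is visible here: raising recall by flagging more aggressively usually increases false positives, which lowers precision and tags more benign clips.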
Platforms therefore seek clearer standards from MeitY regarding acceptable detection error rates and provenance schemas. Legal teams likewise await guidance on appeals and audit mechanisms. Misclassifications could wrongly tag benign clips as SGI Definition material, chilling speech. Researchers continue racing to improve detection recall without sacrificing precision. These uncertainties raise compliance risk and cost. Industry response provides further perspective.
Industry Reactions And Outlook
IAMAI claims the framework could inadvertently sweep benign memes and educational media into heavy compliance burdens. In contrast, some law firms praise the clarity delivered by the first codified SGI Definition. Nishith Desai Associates emphasises that early movers will retain safe-harbour and brand trust.
Meta and Google have begun piloting watermarking toolchains and user declaration prompts. Additionally, several Indian startups are lobbying for government-endorsed open provenance standards to cut integration costs. Professionals can deepen policy literacy through the AI Writer™ certification. Moreover, compliance experts predict a new vendor ecosystem for provenance services.
Overall, industry remains cautiously optimistic yet demands technical guidance before the deadline. Finally, let us summarise key lessons and next steps.
India’s new rules place the SGI Definition at the heart of platform governance. Consequently, companies must fuse technical, operational, and legal readiness faster than ever. Provenance standards, detection accuracy, and user education will decide long-term success against malicious synthetic content. Collaborative standards bodies may publish interoperable metadata guidelines within the year. Interim uncertainty persists, yet early investment will minimise audit pain and safeguard information integrity.
In contrast, delay could erode safe-harbour and invite reputational damage during high-stakes media moments. Nevertheless, practitioners can upskill swiftly. Enroll in the linked AI Writer™ certification to master compliant storytelling under emerging policy regimes. Regulators, platforms, and civil society should convene pilot sandboxes to test proposed solutions. Act now, and turn regulatory pressure into competitive advantage.
Disclaimer: Some content may be AI-generated or assisted and is provided ‘as is’ for informational purposes only, without warranties of accuracy or completeness, and does not imply endorsement or affiliation.