AI CERTS
India’s Ethical Framework Policy sets strict AI deepfake rules
The changes reveal how the world’s largest open internet market plans to balance innovation, governance, and user rights. This article unpacks the timeline, obligations, and strategic implications for stakeholders seeking operational clarity and competitive advantage.
Policy Timeline And Context
MeitY released draft amendments on 22 October 2025. Subsequently, a consultation window attracted comments from industry, civil society, and research groups such as MANAV. The final text appeared in the Gazette on 10 February 2026 and became enforceable ten days later. Therefore, India moved from concept to law in just four months, a speed that underscores political urgency. Meanwhile, platforms with more than five million Indian users, defined as Significant Social Media Intermediaries, face the heaviest duties. The rapid schedule signals the government’s determination to deliver accountability before India’s next national elections.

These dates frame the operational calendar. However, understanding the duties requires digging deeper into the rulebook. Consequently, professionals must map each clause against existing compliance workflows.
Core Legal Obligations Now
The amended rules centre on four headline requirements. Firstly, any audio, visual, or audiovisual content altered by computing resources qualifies as synthetically generated information (SGI), unless excluded for routine editing or accessibility. Secondly, SGI must carry a visible or audible label covering at least 10 percent of display time or area. Thirdly, embedded metadata must remain intact across uploads and reposts, ensuring forensic traceability. Finally, takedown and grievance windows have shrunk from days to hours, tightening accountability loops.
Key quantitative thresholds guide implementation:
- 10% minimum watermark area for visuals
- 10% minimum audible marker duration for audio
- 5 million registered users triggers SSMI status
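To make the thresholds concrete, here is a minimal sketch of how a compliance pipeline might gate a clip against the two 10 percent requirements. The function name and inputs are illustrative, not drawn from the rules' text, which leaves measurement details to forthcoming technical standards.

```python
def meets_label_thresholds(label_area_px, frame_area_px,
                           marker_seconds, clip_seconds):
    """Check a clip's label against the 10% visual and audio minimums.

    Illustrative only: the rules do not yet specify how area or
    duration should be measured in practice.
    """
    visual_ok = label_area_px / frame_area_px >= 0.10
    audible_ok = marker_seconds / clip_seconds >= 0.10
    return visual_ok and audible_ok

# A 640x360 label on a 1920x1080 frame (~11% of area),
# with a 3-second audible marker on a 30-second clip (10%).
print(meets_label_thresholds(640 * 360, 1920 * 1080, 3, 30))
```

A real implementation would also need to handle label placement, contrast, and per-frame coverage, none of which the headline percentages capture.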
Moreover, safe-harbour protections under Section 79 remain available if intermediaries follow the Ethical Framework Policy faithfully. Consequently, compliance teams should update playbooks without delay. These rules define the new minimum bar. Nevertheless, practical execution still hinges on technical feasibility, discussed next.
Platform Compliance Demands Rise
SSMIs must capture user declarations before posts go live. Additionally, they must deploy reasonable automated systems to verify those declarations against known SGI patterns. In contrast, smaller platforms still need labels when they themselves generate content, yet they avoid pre-publication checks.
Therefore, engineering leaders at Meta, Google, and local startups scramble to integrate provenance standards like C2PA. However, visible watermarks can be cropped, and metadata often disappears during transcoding. Consequently, MeitY encourages layered mechanisms combining labels, cryptographic hashes, and backend logs. Professionals can enhance their expertise with the AI Security Compliance™ certification.
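The layered approach MeitY encourages can be sketched simply: even if a visible watermark is cropped or metadata is stripped during transcoding, a backend log keyed on a content hash lets a platform match re-uploads to the original declaration. This is an illustrative sketch under assumed names, not a C2PA implementation; production systems would likely pair it with perceptual hashing, since a byte-level hash changes on any re-encode.

```python
import hashlib
import time

def log_provenance(media_bytes, label_text, log):
    """Record a content hash alongside the applied label in a backend log.

    Illustrative only: a SHA-256 of the raw bytes survives watermark
    cropping and metadata stripping, but not transcoding.
    """
    digest = hashlib.sha256(media_bytes).hexdigest()
    log.append({"sha256": digest, "label": label_text, "ts": time.time()})
    return digest

log = []
d = log_provenance(b"fake-video-bytes", "AI-generated", log)
print(d)
```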
These requirements elevate cost structures, particularly for early-stage ventures. Nevertheless, timely upgrades will preserve market access and public trust. The stakes justify serious investment.
Technology And Feasibility Risks
Watermark durability remains contested. Furthermore, detection models still deliver false positives when processing compressed videos. Industry group NASSCOM argues that rigid, visible markers could be stripped or spoofed. Meanwhile, MANAV researchers warn that over-reliance on automation may erode rights if legitimate satire gets flagged.
In contrast, MeitY defends the Ethical Framework Policy as technology-neutral. Officials highlight carve-outs for accessibility and education, signalling governance flexibility. Nevertheless, pending technical standards will decide real-world success. Therefore, cross-industry pilots should stress-test watermark resilience, metadata persistence, and latency impacts on publishing pipelines.
These engineering questions illustrate the implementation gap. However, wider policy debates also shape perceptions, as the next section explains.
Industry And Civil Views
Corporate lobbies welcome legal clarity but seek grace periods. Moreover, they push for global alignment to avoid fragmented compliance. Civil-society coalitions demand strict red lines against non-consensual intimate imagery and political impersonation.
Independent lawyers applaud faster grievance redressal yet caution that compressed timelines may chill speech. Additionally, rights advocates criticise opaque takedown orders that bypass judicial scrutiny. Nevertheless, many observers credit the Policy for embedding accountability while preserving safe-harbour incentives.
These perspectives reflect India’s democratic contestation. Consequently, regulators may publish further FAQs to clarify edge cases. Engaged dialogue will help scale adherence without stifling creativity.
Global Alignment Concerns Ahead
India is not legislating in isolation. The European Union’s AI Act, US executive orders, and ASEAN guidelines all reference provenance. However, each region defines obligations differently. Therefore, multinational platforms face divergent watermark sizes, metadata schemas, and reporting formats.
Moreover, trade bodies fear that India’s 10 percent visual label could clash with US fair-use standards. MANAV experts suggest adopting interoperable C2PA manifests to reduce friction. Consequently, MeitY participates in ISO and OECD working groups to harmonise approaches.
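The interoperability idea can be illustrated with a simplified provenance record: one manifest carrying tags for each regulatory regime it satisfies, so a single artefact serves several jurisdictions. The field names below are hypothetical and do not follow the actual C2PA schema, which uses signed claims and assertions embedded in the asset itself.

```python
import json

def build_manifest(asset_sha256, generator, label_visible_pct, jurisdiction_tags):
    """Build a simplified provenance record loosely inspired by C2PA manifests.

    Illustrative only: real C2PA manifests are cryptographically signed
    structures with a defined schema, not plain JSON.
    """
    return {
        "asset": {"sha256": asset_sha256},
        "claim": {"generator": generator, "ai_generated": True},
        "disclosure": {"visible_label_pct": label_visible_pct},
        # One manifest, multiple regulatory mappings, so a single
        # record can serve several compliance regimes at once.
        "jurisdictions": jurisdiction_tags,
    }

m = build_manifest("ab12cd34", "example-model", 10, ["IN-IT-Rules", "EU-AI-Act"])
print(json.dumps(m, indent=2))
```

The design point is the `jurisdictions` list: modular mappings are cheaper to extend than per-region manifest formats.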
These diplomatic efforts may narrow gaps over time. Nevertheless, compliance officers must design modular solutions that adapt to multiple jurisdictions. Flexibility now will cut future retrofit costs.
Implementation Watchpoints Ahead
Several indicators will reveal early success:
- Volume of government takedown orders post 20 February
- Platform reports on SGI detection accuracy
- User surveys tracking label awareness and trust
- PILs challenging constitutionality or over-blocking
Additionally, audit mechanisms will test governance robustness. MeitY could mandate independent assessments, boosting accountability across supply chains. Meanwhile, rights groups will document false removals, ensuring the Policy respects speech rights.
These watchpoints demand sustained monitoring. Consequently, newsrooms, researchers, and compliance teams should build dashboards tracking enforcement metrics. Continuous insight will support timely course corrections.
Conclusion And Outlook
India’s deepfake amendments represent a bold experiment in balancing innovation, accountability, and rights. Moreover, the Ethical Framework Policy establishes explicit labels, rapid takedowns, and metadata norms that reshape platform operations. Governance ambitions remain high, yet technical and legal uncertainties persist. Nevertheless, proactive investment in provenance tooling, staff training, and independent audits can mitigate risk. Therefore, professionals should benchmark progress, engage regulators, and pursue recognised credentials. Start today, strengthen compliance, and turn responsible AI into a market differentiator.