{"id":18611,"date":"2026-02-18T13:18:32","date_gmt":"2026-02-18T07:48:32","guid":{"rendered":"https:\/\/www.aicerts.ai\/news\/?post_type=news&#038;p=18611"},"modified":"2026-02-18T13:18:35","modified_gmt":"2026-02-18T07:48:35","slug":"ai-government-rules-india-tightens-deepfake-takedown-timelines","status":"publish","type":"news","link":"https:\/\/www.aicerts.ai\/news\/ai-government-rules-india-tightens-deepfake-takedown-timelines\/","title":{"rendered":"AI Government Rules: India Tightens Deepfake Takedown Timelines"},"content":{"rendered":"<p>India has fired a starting gun on intensive platform oversight. On 10 February 2026, the Ministry of Electronics &amp; Information Technology (MeitY) amended the Intermediary Guidelines to cover synthetic media. Consequently, every major intermediary now faces compressed response clocks, stricter provenance duties, and sharper penalties. This sweeping move signals that <strong>AI Government<\/strong> regulation has entered an enforcement phase, not a planning phase.<\/p>\n<p>However, the notification also heightens operational risk. Platforms have barely ten days to redesign workflows before the rules activate on 20 February. Meanwhile, civil-society groups warn that speech rights may suffer. Nevertheless, policymakers insist the trade-off favours safety. The coming months will reveal whether India\u2019s digital gatekeepers can juggle speed, accuracy, and constitutional safeguards.<\/p>\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" src=\"https:\/\/aicertswpcdn.blob.core.windows.net\/newsportal\/2026\/02\/tech-team-on-ai-compliance.jpg\" alt=\"Tech professionals discuss AI Government compliance and deepfake takedown timelines.\" \/><figcaption>Technology professionals collaborate on adapting to new AI Government compliance expectations.<\/figcaption><\/figure>\n<h2>New Rules Explained Clearly<\/h2>\n<p>MeitY\u2019s Gazette G.S.R. 
120(E) introduces \u201csynthetically generated information\u201d into statute. The definition captures audio, visual, or audiovisual content that algorithms alter to look authentic. Furthermore, routine edits, accessibility fixes, and bona-fide research remain exempt. Therefore, newsrooms can still colour-correct footage without triggering extra duties.<\/p>\n<p>Additionally, the amendments shorten multiple statutory windows. Government or court <strong>Takedown<\/strong> orders must now be executed within three hours, down from 36 hours. Non-consensual intimate imagery requires removal within two hours. Standard grievance acknowledgments drop from fifteen days to seven. Consequently, platforms must bolster incident desks and automate triage.<\/p>\n<p>The Gazette also clarifies safe-harbour rules under Section 79 of the IT Act. Intermediaries retain immunity if they act in good faith and follow the updated procedures. In contrast, failure to comply may invite civil or criminal exposure.<\/p>\n<p>These baseline amendments redefine duties. However, granular technical mandates intensify the burden, as the next section explains.<\/p>\n<p>This section outlined scope, timelines, and immunity. Next, we examine how the timing crunch reshapes platform playbooks.<\/p>\n<h2>Sharper Timelines For Platforms<\/h2>\n<p>Speed dominates every compliance chart. Moreover, only senior officials of Joint Secretary rank or higher may issue valid government takedown notices. Police must reach Deputy Inspector-General rank to qualify. Each month, a Secretary-level officer reviews takedown statistics for proportionality.<\/p>\n<p>Despite safeguards, industry executives question feasibility. Meta\u2019s Rob Sherman told the India AI Impact Summit that three hours is \u201coperationally challenging.\u201d Additionally, platforms rely on global trust-and-safety follow-the-sun models, where teams span time zones. 
Therefore, domestic deadlines disrupt established escalations.<\/p>\n<p>Consider the comparative numbers:<\/p>\n<ul>\n<li>Government or court <strong>Takedown<\/strong>: 36 hours \u2192 3 hours<\/li>\n<li>Non-consensual imagery: 24 hours \u2192 2 hours<\/li>\n<li>Standard grievance acknowledgment: 15 days \u2192 7 days<\/li>\n<\/ul>\n<p>Furthermore, significant social media intermediaries, defined as services with over five million domestic users, must keep resident grievance officers on call round the clock. Consequently, staffing costs will rise markedly.<\/p>\n<p>These tightened clocks force automation upgrades. However, aggressive filters may erode accuracy, raising Constitution-linked free-speech concerns.<\/p>\n<p>Timing obligations drive both engineering and policy changes. Nevertheless, provenance duties pose an equally complex puzzle, which we explore next.<\/p>\n<h2>Labelling And Provenance Mandates<\/h2>\n<p>Beyond velocity, the amendments demand transparency. Platforms must apply prominent on-screen or audible labels to every piece of synthetically generated information. Moreover, they must embed tamper-resistant metadata whenever technically feasible. Therefore, verification tags should persist across file transfers.<\/p>\n<p>In contrast, the rules prohibit users from disabling labels or stripping metadata. Consequently, interface redesigns become inevitable. Meanwhile, detection remains imperfect. False negatives could slip through, while false positives may block legitimate satire.<\/p>\n<p>Industry bodies IAMAI and Nasscom describe the provenance rule as \u201cunimplementable\u201d at current technology maturity. Nevertheless, MeitY argues that visible disclaimers will curb viral misinformation and protect dignity. 
The tension underscores the perennial <strong>Compliance<\/strong> dilemma in content governance.<\/p>\n<p>Professionals can enhance their expertise with the <a href=\"https:\/\/www.aicerts.ai\/certifications\/security\/ai-ethical-hacker\">AI Ethical Hacker\u2122<\/a> certification. Such training sharpens technical understanding of watermarking, hashing, and chain-of-custody protocols.<\/p>\n<p>Provenance rules aim to anchor authenticity. However, stakeholder reactions reveal deep divisions, as the following analysis shows.<\/p>\n<p>This segment covered labelling, metadata, and implementation pain points. Consequently, attention now turns to external commentary.<\/p>\n<h2>Stakeholder Reactions And Concerns<\/h2>\n<p>Views are sharply split. Government officials frame the package as a dignity shield and a deterrent to deepfake scams. Additionally, they stress that compressed removal windows offer swift relief to victims.<\/p>\n<p>However, the Internet Freedom Foundation warns of prior restraint. The group argues that three-hour response cycles encourage pre-emptive removals to protect safe-harbour. Such precaution, it contends, conflicts with Article 19 of the <strong>Constitution<\/strong>.<\/p>\n<p>Meanwhile, creators fear that blanket tagging of synthetically generated information (SGI) could stigmatise benign uses, including parody and performance art. Industry engineers echo detection worries. Moreover, privacy scholars highlight that persistent identifiers may compromise whistle-blower anonymity.<\/p>\n<p>Nevertheless, some academics welcome the clarity. They assert that transparent labelling can bolster media literacy without chilling speech, provided procedural audits remain open.<\/p>\n<p>Stakeholders agree on one point: the success of <strong>AI Government<\/strong> oversight hinges on nuanced enforcement, not one-size-fits-all directives.<\/p>\n<p>This section unpacked praise and criticism. 
Consequently, we can now evaluate the practical hurdles firms must clear.<\/p>\n<h2>Operational Hurdles And Costs<\/h2>\n<p>Platforms must upgrade detection pipelines, recruit legal specialists, and localise workflows. Furthermore, provenance watermarking demands coordination across content ingestion, processing, and distribution layers. Therefore, engineering roadmaps require swift reprioritisation.<\/p>\n<p>Consequently, budget forecasts shift. One Indian social network estimates a 30 percent rise in trust-and-safety spend for fiscal 2026. Moreover, cross-border services face jurisdictional collisions, because European GDPR rules restrict certain metadata practices. In contrast, the Indian amendments assert sovereignty and urgency.<\/p>\n<p>Additionally, quality assurance becomes harder. Automated flagging models trained on English data struggle with regional dialects. False classification raises both reputational and legal liabilities.<\/p>\n<p>Nevertheless, strategic investment in explainable AI and multilingual datasets could improve precision. Platforms embracing robust <strong>Compliance<\/strong> cultures may convert obligation into competitive advantage.<\/p>\n<p>Operational barriers justify deep planning. However, forward-looking steps can mitigate disruption, as the strategy checklist below details.<\/p>\n<p>This discussion quantified cost and risk. Consequently, attention now turns to the broader governance picture.<\/p>\n<h2>Broader Governance Implications Ahead<\/h2>\n<p>India\u2019s amendments ripple beyond national borders. Moreover, other capitals monitor the experiment as they draft AI playbooks. Therefore, consistent global norms appear unlikely soon.<\/p>\n<p>Additionally, federated provenance standards could emerge through multilateral forums. In contrast, unilateral mandates may spawn fragmentation. 
Nevertheless, the Indian framework spotlights an accelerating trend: <strong>AI Government<\/strong> regulation is moving from principles to penalties.<\/p>\n<p>Consequently, corporate boards must weave synthetic-media risk into enterprise governance charters. Regular audits, cross-functional policy drills, and proactive engagement with regulators become critical. Furthermore, continued dialogue with civil society can align innovation with constitutional values.<\/p>\n<p>These governance shifts foreshadow a maturing digital order. However, leaders still control whether the outcome protects speech while curbing harm.<\/p>\n<p>This section connected domestic rules to global currents. Consequently, readers can synthesise both tactical and strategic insights.<\/p>\n<h3>Strategic Steps For Compliance<\/h3>\n<p>Executives should consider a phased roadmap:<\/p>\n<ol>\n<li>Map all content workflows against the new three-hour and two-hour windows.<\/li>\n<li>Embed watermark libraries and hash-based provenance tools at upload.<\/li>\n<li>Train moderation teams on SGI definitions and constitutional tests.<\/li>\n<li>Conduct stress drills simulating late-night <strong>Takedown<\/strong> notices.<\/li>\n<li>Maintain transparent logs to preserve Section 79 safe-harbour.<\/li>\n<\/ol>\n<p>Moreover, periodic external audits reinforce trust. Consequently, firms demonstrate good-faith efforts to regulators and users alike.<\/p>\n<p>This checklist translates high-level duties into concrete actions. Therefore, organisations can transform obligation into resilience.<\/p>\n<h2>Conclusion And Next Steps<\/h2>\n<p>India\u2019s latest amendments compress response windows, mandate provenance labels, and thrust <strong>AI Government<\/strong> regulation into daily platform operations. Moreover, the three-hour benchmark forces automation upgrades, while metadata duties complicate privacy management. 
Nevertheless, clear guidance on safe-harbour and senior-rank sign-off injects procedural balance.<\/p>\n<p>Civil-society groups fear chill, yet victims of synthetic abuse may gain faster relief. Consequently, outcomes will hinge on nuanced, transparent enforcement. Professionals seeking to navigate this evolving terrain should pursue continual learning. Therefore, explore advanced credentials like the linked AI Ethical Hacker\u2122 program and stay engaged with policy updates to remain future-ready.<\/p>\n<div class=\"cta-section\">\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>India has fired a starting gun on intensive platform oversight. On 10 February 2026, the Ministry of Electronics &amp; Information Technology (MeitY) amended the Intermediary Guidelines to cover synthetic media. Consequently, every major intermediary now faces compressed response clocks, stricter provenance duties, and sharper penalties. This sweeping move signals that AI Government regulation has entered [&hellip;]<\/p>\n","protected":false},"featured_media":18610,"parent":0,"comment_status":"open","ping_status":"closed","template":"","meta":{"_acf_changed":false,"_yoast_wpseo_focuskw":"AI Government","_yoast_wpseo_title":"","_yoast_wpseo_metadesc":"India's new IT rules push AI Government oversight, tighten takedown deadlines, mandate provenance labels, and reshape compliance in 2026.","_yoast_wpseo_canonical":""},"tags":[26223,26224,26226,26225],"news_category":[4],"communities":[],"class_list":["post-18611","news","type-news","status-publish","has-post-thumbnail","hentry","tag-deepfake-takedown","tag-digital-law","tag-provenance-labelling","tag-tech-policy-2026","news_category-ai"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.2 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>AI Government Rules: India Tightens Deepfake Takedown Timelines - AI CERTs News<\/title>\n<meta name=\"description\" content=\"India&#039;s new IT rules 
push AI Government oversight, tighten takedown deadlines, mandate provenance labels, and reshape compliance in 2026.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.aicerts.ai\/news\/ai-government-rules-india-tightens-deepfake-takedown-timelines\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"AI Government Rules: India Tightens Deepfake Takedown Timelines - AI CERTs News\" \/>\n<meta property=\"og:description\" content=\"India&#039;s new IT rules push AI Government oversight, tighten takedown deadlines, mandate provenance labels, and reshape compliance in 2026.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.aicerts.ai\/news\/ai-government-rules-india-tightens-deepfake-takedown-timelines\/\" \/>\n<meta property=\"og:site_name\" content=\"AI CERTs News\" \/>\n<meta property=\"article:modified_time\" content=\"2026-02-18T07:48:35+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/aicertswpcdn.blob.core.windows.net\/newsportal\/2026\/02\/official-reviewing-ai-policies.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1536\" \/>\n\t<meta property=\"og:image:height\" content=\"1024\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data1\" content=\"6 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.aicerts.ai\/news\/ai-government-rules-india-tightens-deepfake-takedown-timelines\/\",\"url\":\"https:\/\/www.aicerts.ai\/news\/ai-government-rules-india-tightens-deepfake-takedown-timelines\/\",\"name\":\"AI Government Rules: India Tightens Deepfake Takedown Timelines - AI CERTs News\",\"isPartOf\":{\"@id\":\"https:\/\/www.aicerts.ai\/news\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.aicerts.ai\/news\/ai-government-rules-india-tightens-deepfake-takedown-timelines\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.aicerts.ai\/news\/ai-government-rules-india-tightens-deepfake-takedown-timelines\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/aicertswpcdn.blob.core.windows.net\/newsportal\/2026\/02\/official-reviewing-ai-policies.jpg\",\"datePublished\":\"2026-02-18T07:48:32+00:00\",\"dateModified\":\"2026-02-18T07:48:35+00:00\",\"description\":\"India's new IT rules push AI Government oversight, tighten takedown deadlines, mandate provenance labels, and reshape compliance in 
2026.\",\"breadcrumb\":{\"@id\":\"https:\/\/www.aicerts.ai\/news\/ai-government-rules-india-tightens-deepfake-takedown-timelines\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.aicerts.ai\/news\/ai-government-rules-india-tightens-deepfake-takedown-timelines\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.aicerts.ai\/news\/ai-government-rules-india-tightens-deepfake-takedown-timelines\/#primaryimage\",\"url\":\"https:\/\/aicertswpcdn.blob.core.windows.net\/newsportal\/2026\/02\/official-reviewing-ai-policies.jpg\",\"contentUrl\":\"https:\/\/aicertswpcdn.blob.core.windows.net\/newsportal\/2026\/02\/official-reviewing-ai-policies.jpg\",\"width\":1536,\"height\":1024,\"caption\":\"An Indian government official examines new AI Government regulations and compliance procedures.\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.aicerts.ai\/news\/ai-government-rules-india-tightens-deepfake-takedown-timelines\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/www.aicerts.ai\/news\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"News\",\"item\":\"https:\/\/www.aicerts.ai\/news\/news\/\"},{\"@type\":\"ListItem\",\"position\":3,\"name\":\"AI Government Rules: India Tightens Deepfake Takedown Timelines\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.aicerts.ai\/news\/#website\",\"url\":\"https:\/\/www.aicerts.ai\/news\/\",\"name\":\"Aicerts 
News\",\"description\":\"\",\"publisher\":{\"@id\":\"https:\/\/www.aicerts.ai\/news\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.aicerts.ai\/news\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/www.aicerts.ai\/news\/#organization\",\"name\":\"Aicerts News\",\"url\":\"https:\/\/www.aicerts.ai\/news\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.aicerts.ai\/news\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/www.aicerts.ai\/news\/wp-content\/uploads\/2024\/09\/news_logo.svg\",\"contentUrl\":\"https:\/\/www.aicerts.ai\/news\/wp-content\/uploads\/2024\/09\/news_logo.svg\",\"width\":1,\"height\":1,\"caption\":\"Aicerts News\"},\"image\":{\"@id\":\"https:\/\/www.aicerts.ai\/news\/#\/schema\/logo\/image\/\"}}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"AI Government Rules: India Tightens Deepfake Takedown Timelines - AI CERTs News","description":"India's new IT rules push AI Government oversight, tighten takedown deadlines, mandate provenance labels, and reshape compliance in 2026.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.aicerts.ai\/news\/ai-government-rules-india-tightens-deepfake-takedown-timelines\/","og_locale":"en_US","og_type":"article","og_title":"AI Government Rules: India Tightens Deepfake Takedown Timelines - AI CERTs News","og_description":"India's new IT rules push AI Government oversight, tighten takedown deadlines, mandate provenance labels, and reshape compliance in 2026.","og_url":"https:\/\/www.aicerts.ai\/news\/ai-government-rules-india-tightens-deepfake-takedown-timelines\/","og_site_name":"AI CERTs News","article_modified_time":"2026-02-18T07:48:35+00:00","og_image":[{"width":1536,"height":1024,"url":"https:\/\/aicertswpcdn.blob.core.windows.net\/newsportal\/2026\/02\/official-reviewing-ai-policies.jpg","type":"image\/jpeg"}],"twitter_card":"summary_large_image","twitter_misc":{"Est. 
reading time":"6 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/www.aicerts.ai\/news\/ai-government-rules-india-tightens-deepfake-takedown-timelines\/","url":"https:\/\/www.aicerts.ai\/news\/ai-government-rules-india-tightens-deepfake-takedown-timelines\/","name":"AI Government Rules: India Tightens Deepfake Takedown Timelines - AI CERTs News","isPartOf":{"@id":"https:\/\/www.aicerts.ai\/news\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.aicerts.ai\/news\/ai-government-rules-india-tightens-deepfake-takedown-timelines\/#primaryimage"},"image":{"@id":"https:\/\/www.aicerts.ai\/news\/ai-government-rules-india-tightens-deepfake-takedown-timelines\/#primaryimage"},"thumbnailUrl":"https:\/\/aicertswpcdn.blob.core.windows.net\/newsportal\/2026\/02\/official-reviewing-ai-policies.jpg","datePublished":"2026-02-18T07:48:32+00:00","dateModified":"2026-02-18T07:48:35+00:00","description":"India's new IT rules push AI Government oversight, tighten takedown deadlines, mandate provenance labels, and reshape compliance in 2026.","breadcrumb":{"@id":"https:\/\/www.aicerts.ai\/news\/ai-government-rules-india-tightens-deepfake-takedown-timelines\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.aicerts.ai\/news\/ai-government-rules-india-tightens-deepfake-takedown-timelines\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.aicerts.ai\/news\/ai-government-rules-india-tightens-deepfake-takedown-timelines\/#primaryimage","url":"https:\/\/aicertswpcdn.blob.core.windows.net\/newsportal\/2026\/02\/official-reviewing-ai-policies.jpg","contentUrl":"https:\/\/aicertswpcdn.blob.core.windows.net\/newsportal\/2026\/02\/official-reviewing-ai-policies.jpg","width":1536,"height":1024,"caption":"An Indian government official examines new AI Government regulations and compliance 
procedures."},{"@type":"BreadcrumbList","@id":"https:\/\/www.aicerts.ai\/news\/ai-government-rules-india-tightens-deepfake-takedown-timelines\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.aicerts.ai\/news\/"},{"@type":"ListItem","position":2,"name":"News","item":"https:\/\/www.aicerts.ai\/news\/news\/"},{"@type":"ListItem","position":3,"name":"AI Government Rules: India Tightens Deepfake Takedown Timelines"}]},{"@type":"WebSite","@id":"https:\/\/www.aicerts.ai\/news\/#website","url":"https:\/\/www.aicerts.ai\/news\/","name":"Aicerts News","description":"","publisher":{"@id":"https:\/\/www.aicerts.ai\/news\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.aicerts.ai\/news\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.aicerts.ai\/news\/#organization","name":"Aicerts News","url":"https:\/\/www.aicerts.ai\/news\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.aicerts.ai\/news\/#\/schema\/logo\/image\/","url":"https:\/\/www.aicerts.ai\/news\/wp-content\/uploads\/2024\/09\/news_logo.svg","contentUrl":"https:\/\/www.aicerts.ai\/news\/wp-content\/uploads\/2024\/09\/news_logo.svg","width":1,"height":1,"caption":"Aicerts 
News"},"image":{"@id":"https:\/\/www.aicerts.ai\/news\/#\/schema\/logo\/image\/"}}]}},"_links":{"self":[{"href":"https:\/\/www.aicerts.ai\/news\/wp-json\/wp\/v2\/news\/18611","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.aicerts.ai\/news\/wp-json\/wp\/v2\/news"}],"about":[{"href":"https:\/\/www.aicerts.ai\/news\/wp-json\/wp\/v2\/types\/news"}],"replies":[{"embeddable":true,"href":"https:\/\/www.aicerts.ai\/news\/wp-json\/wp\/v2\/comments?post=18611"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.aicerts.ai\/news\/wp-json\/wp\/v2\/media\/18610"}],"wp:attachment":[{"href":"https:\/\/www.aicerts.ai\/news\/wp-json\/wp\/v2\/media?parent=18611"}],"wp:term":[{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.aicerts.ai\/news\/wp-json\/wp\/v2\/tags?post=18611"},{"taxonomy":"news_category","embeddable":true,"href":"https:\/\/www.aicerts.ai\/news\/wp-json\/wp\/v2\/news_category?post=18611"},{"taxonomy":"communities","embeddable":true,"href":"https:\/\/www.aicerts.ai\/news\/wp-json\/wp\/v2\/communities?post=18611"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}