{"id":21901,"date":"2026-03-09T19:12:38","date_gmt":"2026-03-09T13:42:38","guid":{"rendered":"https:\/\/www.aicerts.ai\/news\/?post_type=news&#038;p=21901"},"modified":"2026-03-09T19:12:41","modified_gmt":"2026-03-09T13:42:41","slug":"meta-llama-4-herd-unveils-natively-multimodal-models","status":"publish","type":"news","link":"https:\/\/www.aicerts.ai\/news\/meta-llama-4-herd-unveils-natively-multimodal-models\/","title":{"rendered":"Meta Llama 4 Herd Unveils Natively Multimodal Models"},"content":{"rendered":"<p>Meta has escalated the generative AI race with its latest Llama 4 herd release. The announcement spotlights Natively Multimodal Models that integrate text, images, and video from pretraining onward. Consequently, developers now evaluate unprecedented context windows and mixture-of-experts efficiencies. However, Meta released only the Scout and Maverick variants while the massive Behemoth model remains internal.<\/p>\n<p>Industry partners such as GitHub, Azure, and Hugging Face have already posted model cards and endpoints. Meanwhile, analysts debate early benchmark volatility and the practical cost of ten-million-token contexts. This report dissects the technical promises, business implications, and emerging controversies. Furthermore, it maps the Llama Roadmap and licensing caveats facing enterprise adopters.<\/p>\n<figure class=\"wp-block-image size-large\">\n            <img decoding=\"async\" src=\"https:\/\/aicertswpcdn.blob.core.windows.net\/newsportal\/2026\/03\/interface-for-multimodal-ai.jpg\" alt=\"Computer screen processing data for Natively Multimodal Models with text, image, and audio.\" \/><figcaption>A modern AI interface handling multiple data types natively.<\/figcaption><\/figure>\n<p>Professionals will learn where to access the models, how to fine-tune them, and when to expect improvements. In contrast, skeptics will appreciate balanced coverage of performance gaps and compliance risks. 
Consequently, readers gain actionable intelligence for near-term investment decisions. Let\u2019s examine the release in depth.<\/p>\n<h2>Scaling Natively Multimodal Models<\/h2>\n<p>Meta calls Llama 4 a family of Natively Multimodal Models because modality fusion starts during pretraining. Images and video tokens appear alongside text, ensuring shared representations across modalities. Consequently, the models handle screenshots, diagrams, and timelines without separate vision adapters. Mixture-of-experts routing further scales capacity by activating only 17 billion parameters per token.<\/p>\n<p>Therefore, the total parameter count reaches hundreds of billions while inference costs remain manageable. In contrast, dense alternatives of similar size require prohibitive accelerator fleets. Moreover, context windows stretch toward ten million tokens, dwarfing previous-generation limits. Developers can stream entire codebases, compliance archives, or podcast transcripts into a single prompt.<\/p>\n<p>Nevertheless, such extreme contexts demand careful key-value cache engineering and abundant high-bandwidth memory. These architectural choices define the herd\u2019s scalability; meanwhile, platform partners rush to optimize hosting. Scout and Maverick embody scalable multimodality with cost-aware MoE routing. However, strategic timing shaped how Meta presented the herd.<\/p>\n<h2>Meta\u2019s Strategic Herd Reveal<\/h2>\n<p>Meta unveiled the herd between April 5 and 7, 2025, through coordinated blog and partner posts. GitHub, Azure, and Hugging Face published landing pages within hours, underscoring synchronized marketing. Moreover, Meta distinguished two public variants: Scout for giant documents and Maverick for conversational multimodal reasoning. Behemoth, the internal teacher, was mentioned but remained unreleased, fueling speculation about capability gaps.<\/p>\n<p>Consequently, observers debated whether marketing oversold Behemoth\u2019s near-term availability. 
In press briefings, Ahmad Al-Dahle cited continuing FP8 stability tests as the decisive delay factor. Meanwhile, Mark Zuckerberg framed the release as an &#8220;iterative open science milestone&#8221; aligned with the broader Llama Roadmap. Subsequently, partner blogs reiterated that more checkpoints will arrive monthly, mirroring Llama 3\u2019s cadence.<\/p>\n<p>These staged releases appear central to Meta\u2019s strategy of gradual Open Weights disclosure that avoids overwhelming internal QA. Consequently, strategy shaped perception well before benchmarks surfaced. The staggered reveal maintained momentum while preserving Meta\u2019s flexibility. Next, technical details clarify what Scout and Maverick actually deliver.<\/p>\n<h2>Core Technical Model Highlights<\/h2>\n<p>Understanding specifications helps enterprises estimate hosting costs and choose suitable variants. Therefore, we summarize headline numbers below.<\/p>\n<ul>\n<li><strong>Scout:<\/strong> 17B active parameters, 109B total, 10M token window, single H100 inference with quantization.<\/li>\n<li><strong>Maverick:<\/strong> 17B active, 400B total across 128 experts, 1M token window for chat and vision.<\/li>\n<li><strong>Behemoth:<\/strong> 288B active, nearly 2T total, still training, public release postponed.<\/li>\n<li><strong>Training data:<\/strong> Over 30T tokens spanning text, images, and video, doubling Llama 3 scale.<\/li>\n<li><strong>Availability:<\/strong> GitHub Models, Azure AI Foundry, Hugging Face, OCI, and several third-party hosts.<\/li>\n<\/ul>\n<p>These numbers illustrate why Natively Multimodal Models attract both excitement and caution. Moreover, the Open Weights license lets researchers download full checkpoints, though commercial rights remain constrained. Quantized releases in FP8 format lower memory footprints, consequently enabling affordable experimentation. 
Enterprises perceive Natively Multimodal Models as a hedge against separate image and text pipelines.<\/p>\n<p>Nevertheless, running ten-million-token jobs still overwhelms most on-premise clusters. These technical realities shape enterprise planning. However, broad platform support lowers adoption friction. Scout and Maverick offer impressive scale with manageable active parameters. Next, we examine how enterprises already integrate them.<\/p>\n<h2>Enterprise Adoption Momentum Grows<\/h2>\n<p>Within two weeks, GitHub Models declared general availability of both variants via playground and REST API. Azure AI Foundry simultaneously supplied serverless endpoints and fine-tuning pipelines. Meanwhile, Databricks integrated the models into its MosaicML libraries for notebook workflows.<\/p>\n<p>Hugging Face published model cards under the Open Weights initiative, including FP8 and 4-bit quantizations. Oracle and Together AI pushed containerized images to their registries, citing simplified scaling on NVIDIA H100 clusters. Because licensing forbids EU multimodal self-hosting, European clients lean toward managed endpoints.<\/p>\n<p>In contrast, US startups embed Maverick inside chat widgets to analyze sales calls and screenshots. Natively Multimodal Models enable those startups to combine transcripts and product photos within one query. Meanwhile, consulting firms design proofs-of-concept that trace the Llama Roadmap against procurement milestones. These pilots often bundle the <a href=\"https:\/\/www.aicerts.ai\/certifications\/development\/ai-prompt-engineer-2\">AI Prompt Engineer\u2122<\/a> certification to upskill internal teams.<\/p>\n<p>Consequently, momentum reinforces Meta\u2019s Open Weights narrative. Hosted offerings accelerate time-to-value while masking infrastructure complexity. 
Yet, performance uncertainty still clouds large procurement deals.<\/p>\n<h2>Early Benchmark Result Controversies<\/h2>\n<p>Community leaders at LMArena released comparative scoreboards within days of the launch. However, several tests used mismatched quantized builds, understating Maverick\u2019s image reasoning strength. VentureBeat quoted Meta engineers blaming &#8220;implementation bugs&#8221; for inconsistent summarization output.<\/p>\n<p>Independent academics replicated some issues after disabling two expert shards, thereby confirming routing fragility. Nevertheless, broader evaluations found that Natively Multimodal Models still outperform dense baselines on multimodal MMLU. Meta promised patched checkpoints and clearer documentation in upcoming minor releases.<\/p>\n<p>Subsequently, Azure AI Foundry posted a hotfix that improved long-context latency by thirty percent. Benchmark variation highlights the fragile line between research marketing and deployed reliability. Therefore, enterprises request service level objectives before scaling high-risk workloads. These debates feed directly into legal and compliance considerations.<\/p>\n<p>Accuracy controversies underscore the importance of independent validation. Licensing constraints add another critical dimension.<\/p>\n<h2>Licensing And Compliance Reality<\/h2>\n<p>Meta distributes the herd under a community license framed as Open Weights but not true open source. Crucially, the license forbids multimodal usage by entities domiciled in the European Union. Consequently, cloud partners geo-restrict endpoints or require contractual attestations.<\/p>\n<p>Legal scholars warn that downstream users remain liable for misuse, even when prompts inject disallowed images. Moreover, high-traffic platforms must negotiate additional terms once daily active counts pass Meta\u2019s thresholds. 
In contrast, research institutions enjoy cost-free access under non-commercial clauses.<\/p>\n<p>Therefore, compliance teams should compare the Llama Roadmap against regulatory timelines before integrating the models. Documentation from Azure and GitHub attaches explanatory addenda that flag region locks and reporting duties. Professionals can mitigate risk by adopting managed services with built-in usage monitoring.<\/p>\n<p>Natively Multimodal Models raise additional privacy flags because visual input may contain biometric data. These safeguards do not eliminate responsibility; however, they simplify audits. Licensing realities may outweigh raw capability for regulated industries. Finally, we assess Meta\u2019s forward trajectory.<\/p>\n<h2>Road Ahead For Meta<\/h2>\n<p>Meta\u2019s public statements point toward monthly checkpoints that extend the Llama Roadmap into 2026. Behemoth remains the headline goal, promising larger active capacities and refined routing strategies. Moreover, engineers experiment with block-sparse attention to trim latency across long documents.<\/p>\n<p>Rumors suggest Meta will merge video and audio tokenizers, thereby deepening the Natively Multimodal Models philosophy. Subsequently, another mixture-of-experts variant named Pathfinder could enter limited beta. Analysts expect Meta to publish an updated disclosure chart that outlines upgrade sequencing.<\/p>\n<p>Meanwhile, partners lobby for early access to smaller distilled checkpoints optimized for edge accelerators. Therefore, business leaders should monitor model card revisions and latency dashboards. Professionals may also pursue the <a href=\"https:\/\/www.aicerts.ai\/certifications\/development\/ai-prompt-engineer-2\">AI Prompt Engineer\u2122<\/a> credential to align skills with future releases.<\/p>\n<p>These expectations close the technical narrative yet open strategic questions. Meta must balance open access, reliability, and compliance as the roadmap unfolds. 
Stakeholders now need actionable next steps.<\/p>\n<h2>Conclusion And Next Steps<\/h2>\n<p>Meta\u2019s Llama 4 herd pushes boundaries through Natively Multimodal Models and giant context windows. However, early benchmark noise and strict licensing remind enterprises to proceed with diligence. Meanwhile, cloud integrations lower entry barriers by hiding MoE routing complexity.<\/p>\n<p>Professionals should monitor Meta\u2019s official roadmap for Behemoth and forthcoming distilled variants. Consequently, upskilling remains essential. Developers can gain practical prompt design skills via the <a href=\"https:\/\/www.aicerts.ai\/certifications\/development\/ai-prompt-engineer-2\">AI Prompt Engineer\u2122<\/a> course.<\/p>\n<p>Therefore, informed teams will translate emerging capabilities into reliable, compliant products. Explore the certification and start building multimodal prototypes today.<\/p>\n
[&hellip;]<\/p>\n","protected":false},"featured_media":21896,"parent":0,"comment_status":"open","ping_status":"closed","template":"","meta":{"_acf_changed":false,"_yoast_wpseo_focuskw":"Natively Multimodal Models","_yoast_wpseo_title":"","_yoast_wpseo_metadesc":"Explore Meta's Llama 4 herd, its Natively Multimodal Models, Open Weights strategy, and Llama Roadmap shaping enterprise AI innovation for 2025.","_yoast_wpseo_canonical":""},"tags":[29977,29976,29975],"news_category":[4],"communities":[],"class_list":["post-21901","news","type-news","status-publish","has-post-thumbnail","hentry","tag-llama-roadmap","tag-meta-llama-4","tag-natively-multimodal-models","news_category-ai"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.2 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Meta Llama 4 Herd Unveils Natively Multimodal Models - AI CERTs News<\/title>\n<meta name=\"description\" content=\"Explore Meta&#039;s Llama 4 herd, its Natively Multimodal Models, Open Weights strategy, and Llama Roadmap shaping enterprise AI innovation for 2025.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.aicerts.ai\/news\/meta-llama-4-herd-unveils-natively-multimodal-models\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Meta Llama 4 Herd Unveils Natively Multimodal Models - AI CERTs News\" \/>\n<meta property=\"og:description\" content=\"Explore Meta&#039;s Llama 4 herd, its Natively Multimodal Models, Open Weights strategy, and Llama Roadmap shaping enterprise AI innovation for 2025.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.aicerts.ai\/news\/meta-llama-4-herd-unveils-natively-multimodal-models\/\" \/>\n<meta property=\"og:site_name\" content=\"AI CERTs News\" \/>\n<meta 
property=\"article:modified_time\" content=\"2026-03-09T13:42:41+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/aicertswpcdn.blob.core.windows.net\/newsportal\/2026\/03\/collaboration-on-multimodal-models.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1536\" \/>\n\t<meta property=\"og:image:height\" content=\"1024\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data1\" content=\"7 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.aicerts.ai\/news\/meta-llama-4-herd-unveils-natively-multimodal-models\/\",\"url\":\"https:\/\/www.aicerts.ai\/news\/meta-llama-4-herd-unveils-natively-multimodal-models\/\",\"name\":\"Meta Llama 4 Herd Unveils Natively Multimodal Models - AI CERTs News\",\"isPartOf\":{\"@id\":\"https:\/\/www.aicerts.ai\/news\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.aicerts.ai\/news\/meta-llama-4-herd-unveils-natively-multimodal-models\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.aicerts.ai\/news\/meta-llama-4-herd-unveils-natively-multimodal-models\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/aicertswpcdn.blob.core.windows.net\/newsportal\/2026\/03\/collaboration-on-multimodal-models.jpg\",\"datePublished\":\"2026-03-09T13:42:38+00:00\",\"dateModified\":\"2026-03-09T13:42:41+00:00\",\"description\":\"Explore Meta's Llama 4 herd, its Natively Multimodal Models, Open Weights strategy, and Llama Roadmap shaping enterprise AI innovation for 
2025.\",\"breadcrumb\":{\"@id\":\"https:\/\/www.aicerts.ai\/news\/meta-llama-4-herd-unveils-natively-multimodal-models\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.aicerts.ai\/news\/meta-llama-4-herd-unveils-natively-multimodal-models\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.aicerts.ai\/news\/meta-llama-4-herd-unveils-natively-multimodal-models\/#primaryimage\",\"url\":\"https:\/\/aicertswpcdn.blob.core.windows.net\/newsportal\/2026\/03\/collaboration-on-multimodal-models.jpg\",\"contentUrl\":\"https:\/\/aicertswpcdn.blob.core.windows.net\/newsportal\/2026\/03\/collaboration-on-multimodal-models.jpg\",\"width\":1536,\"height\":1024,\"caption\":\"AI experts collaborate on advancing Natively Multimodal Models.\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.aicerts.ai\/news\/meta-llama-4-herd-unveils-natively-multimodal-models\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/www.aicerts.ai\/news\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"News\",\"item\":\"https:\/\/www.aicerts.ai\/news\/news\/\"},{\"@type\":\"ListItem\",\"position\":3,\"name\":\"Meta Llama 4 Herd Unveils Natively Multimodal Models\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.aicerts.ai\/news\/#website\",\"url\":\"https:\/\/www.aicerts.ai\/news\/\",\"name\":\"Aicerts News\",\"description\":\"\",\"publisher\":{\"@id\":\"https:\/\/www.aicerts.ai\/news\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.aicerts.ai\/news\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/www.aicerts.ai\/news\/#organization\",\"name\":\"Aicerts 
News\",\"url\":\"https:\/\/www.aicerts.ai\/news\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.aicerts.ai\/news\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/www.aicerts.ai\/news\/wp-content\/uploads\/2024\/09\/news_logo.svg\",\"contentUrl\":\"https:\/\/www.aicerts.ai\/news\/wp-content\/uploads\/2024\/09\/news_logo.svg\",\"width\":1,\"height\":1,\"caption\":\"Aicerts News\"},\"image\":{\"@id\":\"https:\/\/www.aicerts.ai\/news\/#\/schema\/logo\/image\/\"}}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Meta Llama 4 Herd Unveils Natively Multimodal Models - AI CERTs News","description":"Explore Meta's Llama 4 herd, its Natively Multimodal Models, Open Weights strategy, and Llama Roadmap shaping enterprise AI innovation for 2025.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.aicerts.ai\/news\/meta-llama-4-herd-unveils-natively-multimodal-models\/","og_locale":"en_US","og_type":"article","og_title":"Meta Llama 4 Herd Unveils Natively Multimodal Models - AI CERTs News","og_description":"Explore Meta's Llama 4 herd, its Natively Multimodal Models, Open Weights strategy, and Llama Roadmap shaping enterprise AI innovation for 2025.","og_url":"https:\/\/www.aicerts.ai\/news\/meta-llama-4-herd-unveils-natively-multimodal-models\/","og_site_name":"AI CERTs News","article_modified_time":"2026-03-09T13:42:41+00:00","og_image":[{"width":1536,"height":1024,"url":"https:\/\/aicertswpcdn.blob.core.windows.net\/newsportal\/2026\/03\/collaboration-on-multimodal-models.jpg","type":"image\/jpeg"}],"twitter_card":"summary_large_image","twitter_misc":{"Est. 
reading time":"7 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/www.aicerts.ai\/news\/meta-llama-4-herd-unveils-natively-multimodal-models\/","url":"https:\/\/www.aicerts.ai\/news\/meta-llama-4-herd-unveils-natively-multimodal-models\/","name":"Meta Llama 4 Herd Unveils Natively Multimodal Models - AI CERTs News","isPartOf":{"@id":"https:\/\/www.aicerts.ai\/news\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.aicerts.ai\/news\/meta-llama-4-herd-unveils-natively-multimodal-models\/#primaryimage"},"image":{"@id":"https:\/\/www.aicerts.ai\/news\/meta-llama-4-herd-unveils-natively-multimodal-models\/#primaryimage"},"thumbnailUrl":"https:\/\/aicertswpcdn.blob.core.windows.net\/newsportal\/2026\/03\/collaboration-on-multimodal-models.jpg","datePublished":"2026-03-09T13:42:38+00:00","dateModified":"2026-03-09T13:42:41+00:00","description":"Explore Meta's Llama 4 herd, its Natively Multimodal Models, Open Weights strategy, and Llama Roadmap shaping enterprise AI innovation for 2025.","breadcrumb":{"@id":"https:\/\/www.aicerts.ai\/news\/meta-llama-4-herd-unveils-natively-multimodal-models\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.aicerts.ai\/news\/meta-llama-4-herd-unveils-natively-multimodal-models\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.aicerts.ai\/news\/meta-llama-4-herd-unveils-natively-multimodal-models\/#primaryimage","url":"https:\/\/aicertswpcdn.blob.core.windows.net\/newsportal\/2026\/03\/collaboration-on-multimodal-models.jpg","contentUrl":"https:\/\/aicertswpcdn.blob.core.windows.net\/newsportal\/2026\/03\/collaboration-on-multimodal-models.jpg","width":1536,"height":1024,"caption":"AI experts collaborate on advancing Natively Multimodal 
Models."},{"@type":"BreadcrumbList","@id":"https:\/\/www.aicerts.ai\/news\/meta-llama-4-herd-unveils-natively-multimodal-models\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.aicerts.ai\/news\/"},{"@type":"ListItem","position":2,"name":"News","item":"https:\/\/www.aicerts.ai\/news\/news\/"},{"@type":"ListItem","position":3,"name":"Meta Llama 4 Herd Unveils Natively Multimodal Models"}]},{"@type":"WebSite","@id":"https:\/\/www.aicerts.ai\/news\/#website","url":"https:\/\/www.aicerts.ai\/news\/","name":"Aicerts News","description":"","publisher":{"@id":"https:\/\/www.aicerts.ai\/news\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.aicerts.ai\/news\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.aicerts.ai\/news\/#organization","name":"Aicerts News","url":"https:\/\/www.aicerts.ai\/news\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.aicerts.ai\/news\/#\/schema\/logo\/image\/","url":"https:\/\/www.aicerts.ai\/news\/wp-content\/uploads\/2024\/09\/news_logo.svg","contentUrl":"https:\/\/www.aicerts.ai\/news\/wp-content\/uploads\/2024\/09\/news_logo.svg","width":1,"height":1,"caption":"Aicerts 
News"},"image":{"@id":"https:\/\/www.aicerts.ai\/news\/#\/schema\/logo\/image\/"}}]}},"_links":{"self":[{"href":"https:\/\/www.aicerts.ai\/news\/wp-json\/wp\/v2\/news\/21901","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.aicerts.ai\/news\/wp-json\/wp\/v2\/news"}],"about":[{"href":"https:\/\/www.aicerts.ai\/news\/wp-json\/wp\/v2\/types\/news"}],"replies":[{"embeddable":true,"href":"https:\/\/www.aicerts.ai\/news\/wp-json\/wp\/v2\/comments?post=21901"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.aicerts.ai\/news\/wp-json\/wp\/v2\/media\/21896"}],"wp:attachment":[{"href":"https:\/\/www.aicerts.ai\/news\/wp-json\/wp\/v2\/media?parent=21901"}],"wp:term":[{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.aicerts.ai\/news\/wp-json\/wp\/v2\/tags?post=21901"},{"taxonomy":"news_category","embeddable":true,"href":"https:\/\/www.aicerts.ai\/news\/wp-json\/wp\/v2\/news_category?post=21901"},{"taxonomy":"communities","embeddable":true,"href":"https:\/\/www.aicerts.ai\/news\/wp-json\/wp\/v2\/communities?post=21901"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}