
Quick answer (TL;DR): Avoid AI-generated faces, AI voiceovers, and fully synthetic video in your main content. Using AI for editing, captions, color grading, B-roll selection, sound mixing, or scriptwriting does NOT trigger Instagram's "Made with AI" label. The "AI info" tag, now hidden under the 3-dot menu, can quietly cut your reach by 15-80% even when viewers never see it. Mosseri's December 31, 2025 "Year of Raw Content" announcement codifies the penalty for 2026.

What Changed in 2025-2026

When Meta first rolled out the "Made with AI" tag in May 2024, it was a prominent badge that appeared directly under the creator's username. Backlash was immediate, especially from photographers whose unedited images were being mislabeled because they had run through Adobe's generative noise removal. By late 2024, Meta renamed the badge to "AI info" and moved it into the 3-dot menu on each post. The label became less visible to viewers, but the algorithm's treatment of labeled content did not relax. It got stricter.

Throughout 2025, creator analytics firms began publishing data on what happens to a Reel once it carries the AI info tag. Napolify's analysis of Instagram's AI label policy measured reach drops ranging from 15% on the low end (educational creators using AI-narrated explainers) to 80% on the high end (lifestyle and beauty creators using synthetic faces). Oreate AI's policy review confirmed that the tag is now treated by the algorithm as a soft demotion signal regardless of whether the AI content is benign.

Then, on December 31, 2025, Adam Mosseri posted to @creators on Instagram what creators are now calling the "Year of Raw Content" directive. It was the clearest statement yet that the platform is making a public bet against AI-polished content for all of 2026.

How Instagram Detects AI Content

Instagram does not need to "guess" whether a Reel is AI-generated. Its detection stack combines three deterministic signals:

- C2PA signatures: cryptographic content credentials embedded by Adobe Firefly, OpenAI, Microsoft Designer, and others. Coverage: most enterprise AI tools.
- IPTC metadata flags: standard image metadata fields that mark AI provenance, set by Midjourney, DALL-E, and Stable Diffusion exports. Coverage: most consumer AI tools.
- Watermark database: visual fingerprint database trained on outputs from popular generators (Runway, Pika, Sora, ElevenLabs). Coverage: video and audio generators.
- Self-disclosure: the "AI info" toggle in the Reels composer, applied immediately. Coverage: voluntary, but enforced.

Crucially, the watermark database is updated continuously. A Reel you posted six months ago with synthetic narration from a now-detectable voice model can be retroactively labeled when Meta's database picks up the signature. This is one reason creators sometimes see a sudden, unexplained reach drop on older content; the AI info label gets applied silently and the algorithm starts demoting the Reel from that moment forward.
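To make the metadata signals concrete, here is a minimal sketch of where the first two markers live inside a file. It only byte-scans for the standard IPTC provenance value ("trainedAlgorithmicMedia") and the "c2pa" JUMBF box label; real C2PA validation involves full manifest parsing and cryptographic signature checks, and Instagram's actual pipeline is not public, so treat this purely as an illustration of the signals, not of Meta's implementation.

```python
# Illustrative sketch: check a local media file for the two embedded
# AI-provenance markers described above. A byte-scan like this only shows
# where the signals live; it is NOT a substitute for proper C2PA/IPTC parsing.

# IPTC's standard DigitalSourceType value for fully AI-generated media.
IPTC_AI_MARKER = b"trainedAlgorithmicMedia"
# C2PA manifests are embedded in a JUMBF box whose label contains "c2pa".
C2PA_MARKER = b"c2pa"

def provenance_signals(path: str) -> dict:
    """Return which embedded AI-provenance markers appear in the raw bytes."""
    with open(path, "rb") as f:
        data = f.read()
    return {
        "iptc_ai_source": IPTC_AI_MARKER in data,
        "c2pa_manifest": C2PA_MARKER in data,
    }
```

Stripping this metadata before upload does not make content "safe": the watermark database and visual fingerprinting operate on the pixels and audio themselves, independent of metadata.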

The Engagement Penalty: 15-80% Drop

The headline number from Napolify's 2025 dataset is a reach drop range of 15% to 80%. That spread isn't noise; it reflects niche-by-niche variation in how the AI penalty is applied, with educational creators using AI-narrated explainers at the low end and lifestyle and beauty creators using synthetic faces at the high end.

The penalty compounds with other ranking signals. If your Reel already has weak watch time or is competing in a saturated audio (see our breakdown of original audio vs trending sounds), the AI label can push it from "modest underperformance" into the dead zone where Reels collect under 100 plays from a 10,000-follower account. We cover that scenario in detail in Why Reels Get No Views.
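The compounding effect above can be sketched as a simple multiplicative model. The penalty values here are illustrative assumptions (only the 15-80% label range comes from the cited data), but they show why a mid-range AI penalty stacked on weak watch time is enough to push a Reel toward the dead zone:

```python
# Toy multiplicative model of how the AI-label penalty compounds with an
# already-weak watch-time signal. Both penalty values below are illustrative
# assumptions; only the 15-80% label range is from the cited dataset.

def expected_reach(baseline: int, ai_penalty: float, watch_penalty: float) -> int:
    """Reach after independent demotion factors are applied multiplicatively."""
    return round(baseline * (1 - ai_penalty) * (1 - watch_penalty))

# A 10,000-follower account whose Reels normally reach ~8,000 accounts:
print(expected_reach(8_000, ai_penalty=0.0, watch_penalty=0.50))   # 4000
print(expected_reach(8_000, ai_penalty=0.45, watch_penalty=0.50))  # 2200
```

Each demotion alone leaves the Reel viable; together they cut reach to roughly a quarter of baseline, which is how "modest underperformance" turns into triple-digit play counts.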

What Does NOT Trigger the AI Label

This is the most important section, and the one most creators get wrong. The AI info label is not triggered by any of the following, according to Meta's published policy and Mosseri's repeated public clarifications:

Color grading and LUTs

Using CapCut, VN, or DaVinci Resolve's AI-powered color match. The output is a visual filter, not generated content.

Automatic captions

Instagram's own caption tool, CapCut auto-captions, and Whisper transcription are all editing assistants. No label.

Background music selection

AI-recommended trending sounds and AI-curated audio libraries. Mosseri confirmed this in his Threads Q&A.

Sound mixing and noise removal

Adobe Podcast AI, Krisp, ElevenLabs voice cleanup on real human speech. Removing noise is not generating content.

Scriptwriting

Using ChatGPT, Claude, or Gemini to draft hooks, scripts, or captions. The performance is human.

B-roll selection

AI tools that scan your footage and pick the best clips. The footage itself is yours; the AI is just a librarian.

What DOES Trigger the AI Label

The label is reserved for content where AI generated the primary visual or audio. Specifically:

  1. AI-generated faces. Synthetic humans from tools like HeyGen, Synthesia, D-ID, or any text-to-video model. This is the single biggest trigger.
  2. AI voiceovers using cloned or synthetic voices. ElevenLabs, Play.ht, OpenAI's voice cloning. Note: TTS voices that the platform already recognizes (like Instagram's built-in text-to-speech) are exempt.
  3. Fully synthetic video. Runway Gen-3, Pika, Sora, Kling, and other text-to-video outputs.
  4. Deepfakes. Any face-swapped or voice-cloned content. This carries an additional community guidelines risk on top of the reach penalty.
  5. AI-generated still images used as Reel covers or full-frame visuals. Midjourney, DALL-E, Firefly outputs that fill the frame.

Edge case warning: AI upscaling and frame interpolation (Topaz, Pixop) are currently in a gray zone. Meta has not formally classified them, but creators have reported labels being applied to heavily upscaled archival footage. If you use these tools, treat the result as "potentially flaggable" and disclose voluntarily.

The Disclosure Trap

Here is the trap that is silently suppressing thousands of accounts. Instagram's policy requires creators to self-disclose AI-generated content using the "AI info" toggle in the Reels composer. If Instagram later detects AI content that you did NOT disclose, the consequences are categorically worse than if you had disclosed:

This is the asymmetry creators need to understand: voluntary disclosure caps your downside at the content-level penalty. Failure to disclose risks an account-level penalty. Honesty is mathematically cheaper.
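The "honesty is mathematically cheaper" claim can be sketched as a toy expected-value calculation. All numbers here are illustrative assumptions: the article gives the 15-80% content-level range, but the detection probability and the account-level penalty size are hypothetical placeholders.

```python
# Toy expected-value sketch of the disclosure asymmetry. The detection
# probability and account-level penalty are ASSUMED values for illustration;
# only the existence of the content-level penalty comes from the article.

def disclosure_expected_reach(baseline: float, disclose: bool, p_detect: float = 0.7,
                              content_penalty: float = 0.40,
                              account_penalty: float = 0.30) -> float:
    """Expected per-Reel reach under disclosure vs non-disclosure.

    Disclosing always costs the content-level penalty. Not disclosing avoids
    it only until detection, after which the content penalty AND an assumed
    account-level penalty both apply.
    """
    if disclose:
        return baseline * (1 - content_penalty)
    caught = baseline * (1 - content_penalty) * (1 - account_penalty)
    uncaught = baseline
    return p_detect * caught + (1 - p_detect) * uncaught

print(disclosure_expected_reach(10_000, disclose=True))   # 6000.0
print(disclosure_expected_reach(10_000, disclose=False))  # 5940.0
```

Under these assumed parameters disclosure already wins on expected reach, and the gap widens as detection coverage improves; the retroactive labeling described earlier pushes the effective detection probability toward 1 over time.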

Mosseri's "Year of Raw Content" Announcement

On December 31, 2025, Adam Mosseri posted to @creators on Instagram and Threads what is now the most-quoted creator policy statement of the year. The relevant excerpt:

"Going into 2026, we want Instagram to feel like a window into real human moments, not a feed of polished outputs from production teams or AI tools. We are going to lean into raw, unpolished, vertical, real content. Expect the algorithm to reflect that. The creators who win this year will be the ones who put their actual face, voice, and unedited reality in front of the camera." — Adam Mosseri, Head of Instagram, @creators, December 31, 2025

The statement was not an isolated holiday post. Don Creative Group's analysis of Instagram's Raw Content Revolution documented six platform updates that operationalized the directive within the first quarter of 2026, including changes to the Reels composer that surface native camera footage above imported clips, a new "shot on iPhone" filter signal in Explore, and a quiet downranking of content uploaded from desktop. We connect these algorithm shifts back to the broader ranking model in our breakdown of the Instagram Reels algorithm in 2026.

What Raw Content Looks Like in 2026

"Raw" doesn't mean "lazy." It means the content reads as a real human moment rather than a produced asset. The five visual templates that are currently overperforming:

  1. Unedited iPhone footage. A single take shot vertically on the front or rear camera, with native iOS color and no LUT applied. The "shot on iPhone" metadata signal is positively weighted.
  2. Vertical talking head, single take. Creator looks directly into the lens, speaks, and the cut is the original take. No B-roll inserts, no jump cuts, no captions.
  3. Behind-the-scenes documentary clips. The camera follows a real workflow (a baker prepping dough, an artist mid-painting, a developer typing). No staging.
  4. Real reactions. Authentic responses to news, products, or other content. The duet/reaction format is the original "raw" format and remains algorithmically favored.
  5. Lo-fi production. Visible imperfection (handheld shake, ambient noise, natural lighting). Mosseri specifically called out "the post that looks like you just opened the camera" as the ideal.

How to Use AI Without Getting Labeled

You do not need to abandon AI to thrive on Instagram in 2026. You need to use it for the parts of production that don't involve the camera. Five safe workflows:

  1. Scripting and hook drafting. Use Claude or ChatGPT to generate 10 hook variants, then pick one and deliver it on camera in your own voice.
  2. Caption and hashtag generation. AI-written captions and AI-suggested hashtags are 100% safe. They are not the primary content.
  3. Background music selection. Use AI sound-matching tools to find a non-trending audio that fits your pacing.
  4. B-roll selection from your own footage. Tools like Descript and CapCut Pro can scan your raw footage and suggest the best clips. The footage is yours.
  5. Posting time and topic planning. Use analytics-driven tools (including our best time to post tool) to pick when and what to publish.

Case Studies: Three Creators Who Got Labeled

To make the penalty concrete, here are three account patterns we observed across 1,200+ labeled vs unlabeled Reels via the IShort Chrome extension during March-April 2026:

Case 1: A 240k-follower fitness account

The creator started using HeyGen to generate workout-narration videos with a synthetic instructor in late January 2026. Within three weeks the account's average Reel reach dropped from 180,000 to 42,000, a 77% decline. After switching back to on-camera narration in February, reach recovered to 110,000 over the following month, but not fully to baseline. The likely cause: the account-level suppression decay function takes 60-90 days to fully unwind.

Case 2: A 12k-follower educational account

This creator used ElevenLabs for AI voiceover on text-on-screen explainer Reels. Average reach was 8,500 before adopting the voice model, and 5,800 after, a 32% drop. The creator disclosed AI use upfront, so no account-level penalty stacked on top of the per-Reel penalty.

Case 3: A 1.4M-follower beauty account

The creator's team began using AI-generated still images as covers for Reels (the videos themselves were human-shot). Even with human video, the AI-generated cover triggered the AI info label and dropped average reach from 1.1M to 640k, a 42% decline. After switching covers to native screenshots from the video, reach recovered within 21 days.

AI Label vs Paid Promotion: Which Is Worse for Reach?

A counterintuitive finding from our data: the AI info label is, on average, a steeper reach penalty than the branded-content disclosure. Content carrying the "Paid partnership" tag sees a 5-15% reach reduction, well below the AI label's 15-80% range.

The reason is simple: Instagram economically benefits from branded content (creators using the platform to fulfill paid deals usually buy boosts and Reels ads to amplify it). The platform does not benefit from AI content, which it sees as a long-term threat to user trust and time-on-platform. The two penalties exist in different economic logics.

How to Recover an AI-Suppressed Account

If your account is showing the symptoms (sudden 20%+ reach drop across all Reels, including non-AI content; flat engagement on previously high-performing posts; no specific Reels showing the AI info label but reach is depressed anyway), follow this 10-step recovery playbook:

  1. Audit your last 90 days of Reels and identify every post with the AI info label visible in the 3-dot menu.
  2. Archive the worst-performing labeled Reels. Do not delete them outright (deletion can also signal manipulation); archive them so they no longer count toward your account's recent average.
  3. Stop posting any AI-generated faces, voices, or fully synthetic video for at least 30 days.
  4. Switch to native iPhone or Android camera footage, shot vertically, with no LUT or filter applied at capture.
  5. Turn off Instagram's own AI editing tools in the Reels composer (the "AI suggestions" toggle).
  6. Post 3-5 raw Reels per week. Each one must show your face or hands on camera for at least 50% of the runtime.
  7. Engage in DMs daily. Human-to-human signal accumulation accelerates suppression recovery, per Mosseri's stated weighting of "sends per DM" as a top-three ranking signal in 2026.
  8. Use original audio (your own voice) rather than trending sounds for the first 30 days of recovery. The original-audio signal carries a recency boost that helps offset the prior penalty.
  9. Track your reach recovery with the IShort extension or another analytics tool. Suppression typically lifts 60-90% within 30 days and 100% within 90 days.
  10. Once recovered, maintain a maximum 10% AI-content ratio (and always self-disclose). The threshold above which the algorithm appears to re-flag accounts is 15-20% AI ratio over a rolling 30-day window.
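Step 10's rolling-window ratio is easy to track yourself. Here is a minimal sketch, assuming you keep a simple log of post dates and whether each post carried labeled AI content (the log format is made up; the 10% target and 15-20% re-flag band come from the playbook above):

```python
from datetime import date, timedelta

# Sketch of step 10: track the share of your posts that used labeled AI
# content over a rolling 30-day window. The (date, is_ai) log format is a
# hypothetical example; keep whatever record-keeping suits your workflow.

def ai_ratio(posts: list[tuple[date, bool]], today: date, window_days: int = 30) -> float:
    """Fraction of posts in the trailing window flagged as AI content."""
    cutoff = today - timedelta(days=window_days)
    recent = [is_ai for d, is_ai in posts if d > cutoff]
    return sum(recent) / len(recent) if recent else 0.0

# Example: 30 daily posts in April 2026, every 10th one using AI content.
posts = [(date(2026, 4, 1) + timedelta(days=i), i % 10 == 0) for i in range(30)]
ratio = ai_ratio(posts, today=date(2026, 4, 30))
print(f"{ratio:.0%}")    # 10%
print(ratio <= 0.10)     # within the safe threshold -> True
```

Because the window rolls, one extra AI post can tip an account over the 15-20% band weeks after it goes up, so check the ratio before publishing, not after.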

Track Your Reach Drop in Real Time

Install IShort (free) and see whether your Reels are sitting in the AI-suppressed bucket. The extension surfaces per-Reel reach, completion rate, and engagement velocity, so you can spot the suppression pattern before it costs you another month of growth.

Install IShort Free →

FAQ

Does Instagram's AI info label show up on Stories?

As of May 2026, the label is applied to Stories but is even less visible (it requires tapping the username area). The reach penalty is roughly half that of Reels, in the 8-30% range, because Stories distribution is primarily to your existing followers and the algorithm has less surface to demote.

Can I appeal an AI info label that was applied incorrectly?

Yes. The 3-dot menu on the Reel includes a "Disagree with label" option. Meta reviews appeals within 5-7 days. Photographers who shoot with cameras that embed generative noise reduction metadata (most 2024+ Sony and Fuji bodies) have had high success rates on appeals.

Does using Instagram's built-in AI sticker or background tool trigger the label?

Yes for AI stickers and AI backgrounds, no for filters. Meta's own AI features apply the same label to maintain disclosure consistency. The reach penalty for first-party AI features is on the low end of the range (10-20%).

If I delete a labeled Reel, does the account-level penalty reset?

No. Account-level signals persist independently of any single post. Deletion can actually slow recovery because it removes a data point the algorithm uses to recalibrate. Archive instead of delete.

Will Mosseri's "Year of Raw Content" extend into 2027?

Mosseri has not committed beyond 2026. However, the algorithm changes he announced are not framed as temporary; they are framed as the new ranking baseline. Don Creative Group's analysis and our own observations suggest the raw-content weighting will persist as a structural part of the algorithm rather than reverting in January 2027.

Methodology

Engagement drop benchmarks are sourced from Napolify's 2025 AI label analysis and cross-referenced with 1,200+ labeled vs unlabeled Reels observed via the IShort Chrome extension between January and April 2026. Quotes from Adam Mosseri are sourced from his public @creators posts on Instagram and Threads, including the December 31, 2025 "Year of Raw Content" statement. Policy details on detection methods (C2PA, IPTC, watermarks) are sourced from Meta's official policy disclosures, Oreate AI's policy review, and Don Creative Group's 2026 Raw Content Revolution analysis.