Elon Musk's X Suspends Monetization for Undisclosed AI War Posts

By Global Leaders Insights Team | Mar 04, 2026

Elon Musk’s X has announced a significant policy shift aimed at curbing the spread of undisclosed AI-generated videos depicting armed conflicts: creators who fail to clearly label such content as AI-generated will be suspended from its Creator Revenue Sharing programme for 90 days. Repeat offenders could face permanent removal from the monetization ecosystem.

The move comes against the backdrop of the ongoing Middle East crisis involving the U.S., Israel and Iran, which has led to a deluge of highly realistic AI-generated footage that analysts and fact-checkers say threatens the authenticity of online information. X’s revised policy is specifically designed to limit financial incentives for creators who post such misleading war videos without clear disclosures.

Key Highlights

  • X suspends monetization for creators posting undisclosed AI-generated war videos during ongoing conflict.
  • Repeat violations may lead to permanent removal from X’s revenue sharing programme.

New Rules Target Monetization, Not Platforms

Under the updated guidelines, creators who share AI-generated videos about armed conflict without a proper “Made with AI” label will be suspended from X’s revenue sharing programme — the platform’s mechanism that allows eligible users to earn advertising revenue. A second violation could result in permanent exclusion from the programme. Enforcement will be triggered through Community Notes — X’s crowdsourced fact-checking tool — coupled with metadata and other AI detection signals.

X’s head of product, Nikita Bier, said the policy revision is intended to uphold the authenticity of information on users’ timelines. “Today we are revising our Creator Revenue Sharing policies to maintain authenticity of content on Timeline and prevent manipulation of the programme,” Bier wrote in a post shared with users. “During times of war, it is critical that people have access to authentic information on the ground. With today’s AI technologies, it is trivial to create content that can mislead people.”


Response From Authorities and Users

The policy has drawn attention from global officials. A senior U.S. State Department official recently praised X’s initiative, noting that financial disincentives could promote more accurate content without requiring direct government intervention. “You don’t need a Ministry of Truth to incentivise truth online,” the official said in a social media post endorsing the revised stance.

Some industry observers, however, warn that while the new rules address monetised conflict content, broader misinformation challenges — including politically charged deepfakes and non-war AI abuses — remain outside the scope of this policy and may require additional technical and regulatory responses.

What Creators Must Do

Creators on X who wish to avoid suspension must now add a “Made with AI” disclosure label when posting any AI-generated videos of armed conflict. According to the platform, this label can be selected from the post menu via the “Add Content Disclosures” option. X says it will also continue refining its AI detection and labelling features as part of ongoing product enhancements.