Artificial intelligence (AI) is rapidly distorting public understanding of the escalating confrontation between the United States, Israel and Iran, as fabricated videos, images and manipulated narratives circulate widely on social media platforms.
Technology companies and fact-checking groups say generative AI tools are being used to produce convincing but false footage of missile strikes, destroyed cities and military victories that never occurred, complicating efforts by journalists and the public to verify events during the conflict.
The surge in misinformation intensified after the joint U.S.–Israeli strikes on Iranian targets in late February, widely reported as Operation Lion’s Roar, which sharply raised tensions in the region. Researchers and analysts say the scale of AI-generated propaganda marks one of the first times generative technology has been deployed so extensively during an active military confrontation.
Fake war footage spreads rapidly
Social media platforms have been flooded with fabricated videos showing alleged missile strikes on cities such as Tel Aviv or Dubai, many created using generative AI or edited clips from unrelated footage. Some of these videos amassed millions of views before being debunked. Fact-checking organizations have also flagged AI-generated images falsely claiming to show Iran’s Supreme Leader killed in airstrikes, illustrating how fabricated visuals are being used to influence public perception of the conflict.
Other misinformation campaigns involved coordinated networks of accounts amplifying fabricated clips and narratives. In one case, investigators linked a network of more than 30 hacked social media accounts to the spread of AI-generated videos depicting attacks during the conflict.
Platforms attempt to contain the spread
Social media companies are beginning to tighten rules to curb the spread of AI-generated war content. The platform X, formerly Twitter, said it will suspend users from its revenue-sharing program if they repeatedly post AI-generated conflict videos without clearly labeling them as synthetic media. The policy comes after a surge of misleading clips tied to the Israel–Iran confrontation. Despite these measures, analysts say the speed and sophistication of generative AI make it increasingly difficult to distinguish genuine battlefield footage from fabricated material.
A new front in modern warfare
During previous clashes in the region, misinformation often relied on recycled videos or misleading captions. Generative AI has raised the stakes by allowing actors to produce entirely new scenes that appear realistic enough to fool viewers and sometimes even automated fact-checking systems.
Some analysts warn that such technology could be used strategically by state and non-state actors to shape international opinion, undermine adversaries or trigger panic during crises. For journalists and policymakers, the proliferation of AI-generated propaganda underscores a growing challenge: verifying facts in real time during conflicts where the digital information space has become as contested as the battlefield itself.
