The Israel-Iran conflict has sparked an extraordinary wave of online disinformation, with AI-generated content used to misrepresent military actions and responses. As pro-Iranian and pro-Israeli narratives compete for attention, experts warn that misleading videos and posts are exerting significant influence and complicating an already tense situation.

Surge of AI-Generated Disinformation Amidst Israel-Iran Conflict
As strikes intensify, a flood of AI-generated misinformation is distorting narratives around the Israel-Iran conflict, with far-reaching implications for public perception.
A torrent of disinformation has engulfed the internet since Israel began its airstrikes on Iran on 13 June, according to an analysis by BBC Verify that highlights the troubling use of artificial intelligence to distort realities surrounding military engagements. The strikes prompted retaliatory missile and drone attacks by Iranian forces, and the flow of misleading content has intensified alongside the fighting.
Dozens of AI-generated videos have surfaced purporting to showcase Iran's military strength, alongside misleading clips presenting fabricated scenes of the aftermath of Israeli strikes. These sensationalized posts have amassed a collective viewership of more than 100 million across various platforms. Pro-Israeli accounts are also spreading misinformation, recirculating old footage of protests in Iran and inaccurately presenting it as evidence of growing dissent against the Iranian government and of support for Israel's military operations.
Open-source imagery analysis groups have characterized the magnitude of disinformation as "astonishing," pointing to "engagement farmers" who profit from disseminating misleading content to capture online attention. Reports indicate that this particular conflict marks the first significant instance of generative AI being employed at scale during warfare.
BBC Verify’s investigation identified particular accounts as "super-spreaders" of fake content and observed a surge in follower counts for pro-Iranian accounts. One notable account, Daily Iran Military, jumped from just over 700,000 followers to 1.4 million within six days. The surge has also raised the visibility of a wave of obscure accounts, many carrying blue verification ticks and frequently posting disinformation, which leads some users to mistake them for credible sources.
Among the troubling trends, AI-generated imagery has been used to exaggerate claims of Iranian successes against Israeli targets. One misleading image purporting to show missiles over Tel Aviv has garnered 27 million views. Another depicted a fictional missile strike in Israel, set at night, when poor visibility complicates verification efforts.
Claims involving the destruction of advanced Israeli F-35 aircraft have drawn particular attention. One widely circulated video falsely depicted a damaged fighter jet in the Iranian desert; while the image suggested significant losses for Israeli forces, telltale signs of AI manipulation were visible, including inconsistent sizing of civilians and vehicles. Another notorious video, claiming to show an Iranian attack on an F-35, was later found to have been taken from a flight simulation game.
The heightened focus on the capabilities of Western military technology, particularly the F-35, has been attributed in part to Russian influence operations. Experts suggest the aim is to sow doubt about the effectiveness of these advanced weapons systems.
Much of the disinformation is shared by established accounts with histories of commenting on other conflicts, suggesting diverse motivations: some may aim to monetize the content, while others may have broader political agendas. Pro-Israeli narratives frequently claim significant civil unrest in Iran in response to the airstrikes.
AI-generated content has infiltrated public discussion and media coverage alike, and has even surfaced in official statements from parties to the conflict. Despite efforts by platforms such as TikTok and Instagram to enforce their community guidelines, the spread of misleading content makes it difficult for users to discern truth amid the chaos of conflict.
Experts emphasize the psychological factors at play in the rapid spread of disinformation, noting that emotionally resonant content—particularly during politically charged events—tends to circulate more broadly across social media platforms. As this conflict unfolds, the role of AI-generated misinformation remains a potent and perilous factor in shaping public understanding and response.