Key Points
- The Israel-Iran conflict in June 2025 saw the first large-scale, coordinated use of generative AI deepfakes in wartime disinformation campaigns.
- AI-generated images and videos were used to amplify both Israeli and Iranian narratives, with Iran-linked networks especially focused on fabricating dramatic scenes of damage inside Israel.
- The campaigns reached unprecedented scale on social media, exploiting weakened moderation, with the most-viewed fake videos amassing over 100 million views.
- The conflict illustrated the risk of the liar’s dividend: the prevalence of AI-generated fakes makes even genuine evidence easier to dismiss as false, undermining trust in real reporting and complicating efforts to establish objective truth.
- Future conflicts are highly likely to see even more sophisticated generative AI misinformation, posing significant risks to escalation management and public trust.
Iran’s Use of Generative AI
During the June 2025 escalation between Israel and Iran, pro-Iranian social media networks aggressively deployed generative AI to craft and disseminate highly convincing fake videos and images designed to exaggerate Tehran’s military strength and intimidate adversaries. As Israeli strikes began on June 13, online platforms were flooded with fabricated clips showing supposed missile strikes on Tel Aviv and Ben Gurion Airport. These videos were often created or enhanced using advanced generative tools such as Google’s Veo 3, which can produce hyper-realistic imagery.
Some of the most widely circulated fakes included an image of a jet allegedly shot down over the Iranian desert, which on closer inspection revealed obvious signs of AI manipulation, such as disproportionate sizing of the aircraft relative to nearby civilians and implausible terrain impact. Another viral video, which garnered 27 million views, falsely depicted dozens of missiles falling on Tel Aviv, while a separate clip with over 21 million views on TikTok claimed to show an Israeli F-35 being shot down, when in fact the footage originated from a flight simulator game.
These videos often employed tactics designed to complicate verification, such as night-time settings and recycled footage from unrelated conflicts or video games like Arma 3. Moreover, the campaign was amplified by networks linked to Russian influence operations that have shifted focus from undermining support for Ukraine to sowing doubts about Western military capabilities (and particularly the U.S.-made F-35). In many cases, the fake content was disseminated by well-known accounts, including some monetising the conflict through platforms that reward viral engagement. Collectively, the three most viewed fake videos amassed over 100 million views across social media.
Israel’s Use of Generative AI
While Iran’s use of generative AI disinformation was more extensive and sophisticated, Israeli-aligned accounts also engaged in spreading misleading content, though with a different strategic aim. Rather than focusing on dramatic battle imagery, pro-Israeli campaigns sought to shape perceptions of Iranian domestic instability.
For example, some accounts recirculated old footage of protests in Iran, falsely claiming it showed mounting dissent against the government in response to Israeli strikes. One widely shared AI-generated video purported to show Iranians chanting “we love Israel” in the streets of Tehran, an obvious fabrication designed to suggest popular support for Israel’s military actions.
Even official sources were implicated in these efforts. The Israel Defense Forces (IDF) shared a video depicting missile barrages, which turned out to be old and unrelated footage, leading to a correction in the form of a community note on X (formerly Twitter). Overall, the Israeli approach was less about creating vivid, hyper-realistic destruction and more focused on reinforcing the narrative of Iran’s internal weakness.
The Use of X’s Grok for Verification
As misinformation spread rapidly across platforms, many users turned to X’s integrated AI chatbot, Grok, in hopes of verifying the authenticity of viral videos. However, Grok’s performance revealed the limits of current AI-based moderation and fact-checking.
When faced with a particularly viral video purporting to show bombed-out airport damage, Grok’s responses varied wildly, sometimes saying “likely shows real damage” and other times “likely not authentic,” even within minutes of each other. In one especially troubling case, Grok insisted that an AI-generated video showing an endless convoy of ballistic missiles was real, despite clear signs of manipulation such as rocks moving unnaturally. Researchers documented over 300 Grok responses to just one video, demonstrating both how heavily people relied on the tool and how inconsistent its answers were.
Such failings highlight the dangers of depending too heavily on automated systems for real-time verification during fast-moving conflicts, where the information environment is flooded with sophisticated synthetic content.
GenAI Chatbots for Verification
Beyond Grok, other generative AI chatbots have also been used to check images and videos supposedly depicting the Israel-Iran war, but their performance has been similarly uneven. Tools like OpenAI’s ChatGPT and Google’s Gemini were sometimes able to correctly identify that certain images were not from the current conflict. However, they often misattributed those images to other unrelated military operations, sowing further confusion. Anthropic’s Claude, on the other hand, sometimes refused to authenticate content altogether.
While these chatbots can provide useful structured analysis when given a carefully crafted prompt, the quality and accuracy of their responses vary significantly. As a result, they cannot yet be relied upon as consistent verification tools, especially as generative AI content itself becomes harder to distinguish from authentic media.
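As an illustration of what a carefully crafted prompt might look like, the sketch below uses the openai Python SDK (an assumption on my part; the model name, prompt wording, and claim details are all hypothetical) to ask for a provenance checklist rather than a yes/no verdict. Any answer would still need to be checked against independent sources.

```python
# Illustrative only: assumes the openai package (>=1.0) is installed and an
# OPENAI_API_KEY is set. The model name and claim are placeholders; the point
# is the structure of the prompt, not the specific provider.
from openai import OpenAI

client = OpenAI()

claim = "Video said to show missiles striking Ben Gurion Airport on 13 June 2025"

prompt = f"""You are assisting with open-source verification.
Claim: {claim}
Do NOT state whether the footage is real or fake. Instead, list:
1. What metadata or visual details should be checked first.
2. Which independent sources could confirm or refute the claim.
3. Common signs of AI generation relevant to this type of footage."""

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```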
How to Spot AI-Generated Fake Images and Videos
The rapid evolution of generative AI has made detection far more challenging, but there remain practical techniques to identify fake content. One essential method involves verifying a video’s origin using tools such as the Keyframes feature of the InVID-WeVerify plugin, which extracts frames from a video so they can be run through reverse image searches. This approach can reveal if the footage is old or from an unrelated location. It is important to use multiple search engines, since the original video may not be indexed everywhere; for instance, Yandex often delivers different results than Google Images, as shown in this video tutorial by AFP.
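For readers who prefer to script this first step, the minimal Python sketch below (assuming the opencv-python package and a locally saved clip named suspect_clip.mp4, both hypothetical) extracts frames at regular intervals. Each saved frame can then be uploaded manually to several reverse image search engines, since no single engine indexes everything.

```python
# Minimal keyframe-extraction sketch: mimics only the first step of the
# InVID-WeVerify workflow, i.e. grabbing frames that can then be reverse-searched.
import cv2

VIDEO_PATH = "suspect_clip.mp4"   # hypothetical input file
FRAME_EVERY_SECONDS = 2           # sample one frame every two seconds

cap = cv2.VideoCapture(VIDEO_PATH)
fps = cap.get(cv2.CAP_PROP_FPS) or 30  # fall back if metadata is missing
step = int(fps * FRAME_EVERY_SECONDS)

frame_index = 0
saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_index % step == 0:
        # Each saved frame can be checked on multiple engines (Google, Yandex, ...)
        cv2.imwrite(f"frame_{saved:03d}.jpg", frame)
        saved += 1
    frame_index += 1

cap.release()
print(f"Extracted {saved} frames for reverse image search")
```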
Other clues can be found in the content itself. AI-generated videos frequently contain tell-tale flaws such as unrealistic proportions, glitches in textures, or looping segments. Videos created with tools like Veo 3 tend to be eight seconds long or constructed from similar short clips, which is not proof of fakery but should prompt careful scrutiny. In some cases, watermarks like “Veo 3” may even remain visible at the bottom of online videos.
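As a very rough illustration of the clip-length cue, the sketch below (again Python with OpenCV, reusing the hypothetical suspect_clip.mp4) estimates a video's duration from its frame rate and frame count and flags clips close to eight seconds. A match is only a prompt for closer scrutiny, never evidence of fabrication on its own.

```python
# Rough heuristic sketch: flags clips whose length is close to the ~8-second
# output typical of some text-to-video tools. Thresholds are illustrative.
import cv2

VIDEO_PATH = "suspect_clip.mp4"   # hypothetical input file

cap = cv2.VideoCapture(VIDEO_PATH)
fps = cap.get(cv2.CAP_PROP_FPS) or 30
frames = cap.get(cv2.CAP_PROP_FRAME_COUNT)
cap.release()

duration = frames / fps
if abs(duration - 8.0) < 1.0:
    print(f"Clip is {duration:.1f}s long: consistent with short generated segments, verify further")
else:
    print(f"Clip is {duration:.1f}s long")
```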
Accounts pushing such content often exhibit suspicious behavior, such as rapid follower growth, verified status, and a pattern of posting sensationalist, unverified claims. Given the sophistication of these tools and the deliberate targeting of biased audiences, fact-checking has become both more essential and more difficult.
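To make the idea of behavioural red flags concrete, here is a purely illustrative Python sketch; the fields and thresholds are hypothetical and not drawn from the article, and a high score should only ever trigger manual review, not an automatic verdict.

```python
# Illustrative-only heuristic: crude signals (rapid follower growth, high posting
# volume, repeated unverified claims) that warrant closer manual scrutiny.
from dataclasses import dataclass

@dataclass
class AccountSnapshot:
    followers_30_days_ago: int
    followers_now: int
    posts_last_week: int
    unverified_claims_last_week: int  # e.g. posts later flagged or community-noted

def suspicion_score(acc: AccountSnapshot) -> int:
    """Return a crude 0-3 score; higher means more manual scrutiny is warranted."""
    score = 0
    growth = (acc.followers_now - acc.followers_30_days_ago) / max(acc.followers_30_days_ago, 1)
    if growth > 1.0:                       # more than doubled in a month
        score += 1
    if acc.posts_last_week > 100:          # unusually high posting volume
        score += 1
    if acc.unverified_claims_last_week >= 5:
        score += 1
    return score

print(suspicion_score(AccountSnapshot(2_000, 9_000, 150, 7)))  # -> 3
```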
Deepfakes and the Liar’s Dividend
Perhaps the most insidious consequence of the proliferation of generative AI deepfakes is the so-called liar’s dividend, a concept coined by legal scholars Danielle Citron and Robert Chesney. This term refers to the advantage gained by dishonest actors who exploit public awareness of deepfakes to deny the authenticity of genuine evidence.
In practical terms, once people know that audio, video, and images can be convincingly faked, anyone confronted with incriminating footage can simply claim “It’s a deepfake,” even if the evidence is entirely genuine. In the context of the Israel–Iran conflict, this means that real documentation of civilian casualties or military failures can be dismissed as fabricated, complicating efforts to establish accountability and undermining public trust.
Even after forensic analysis debunks an AI-generated fake, public perceptions may not shift. The mere introduction of doubt can be enough to neutralise the impact of genuine evidence. As generative AI tools become more sophisticated and accessible, this dynamic will only grow more problematic, fostering a deeply cynical information environment where truth itself becomes contested and elusive.
Conclusion
The Israel-Iran conflict represents a turning point in the use of generative AI in warfare. Both sides weaponised these tools, though Iran’s campaign was notably more aggressive and technically sophisticated. Platforms and verification tools have so far proven inadequate to counter the scale and realism of these fakes, sometimes even amplifying confusion instead of resolving it.
Meanwhile, the liar’s dividend ensures that even genuine evidence can be plausibly denied, making truth harder to establish and eroding the foundations of accountability. As generative AI technology continues to evolve, addressing these challenges will be critical for journalists, analysts, and the broader public seeking to understand the reality of modern conflicts.
About the author
Edoardo Camilli is the CEO and Co-founder of Hozint – Horizon Intelligence. With a background in geopolitical analysis and security risk management, he has built his career on helping organisations anticipate and mitigate emerging threats. His primary interest lies in the intersection of open source intelligence (OSINT) and artificial intelligence, where he explores how advanced technologies can transform the collection, analysis, and dissemination of timely, actionable intelligence. Through Hozint, Edoardo works to bridge human expertise and AI-driven tools to deliver accurate, reliable insights in an increasingly complex information environment.