After Hamas militants launched a surprise attack on Israel on October 7, killing at least 1,000 and taking at least 150 hostages, and Israel declared war against Hamas and retaliated, photographs and videos of violence flooded out of the region and onto social media. Some of the images were posted by victims on the ground during the attacks. Some were reportedly seeded by Hamas, but others were years old, taken from conflict zones in other parts of the world, or even from a fictional video game. For the average internet user, knowing what information to trust online has never been more challenging.
Complicating matters even further are the ways in which unconfirmed reports are outpacing the process of verification, finding their way into news coverage and the statements of elected officials, further fueling online falsehoods and confusion. “I never really thought that I would see and have confirmed pictures of terrorists beheading children,” President Joe Biden said last week, referring to widely circulated but as yet unconfirmed reports of Hamas militants beheading infants during the initial attack. The White House later said that Biden had not seen any such pictures and had not independently confirmed reports about the beheading.
As someone who has covered misinformation through dozens of major news events, I know that people flock to social media during a crisis for many reasons. Maybe it’s because the mainstream news doesn’t feel fast or immediate enough, or because the crisis has put them or someone close to them in harm’s way and they need help. Perhaps they want to see and share and say something that captures the reality of an important moment in time because they don’t know what else to do when the world is on fire. Misinformation and manipulation often spread for the same reasons, slipping into the feeds of those who believe it can’t hurt to share a startling video or gruesome photograph or call for aid, even if they’re not sure of the reliability of the source.
When war goes online, the churn of good and bad information is supercharged by the stakes. While state-sponsored information wars existed well before the invention of the internet, social media has enabled all kinds of propaganda and dangerous falsehoods to rapidly reach millions. During the Russian invasion of Ukraine in 2022, for example, livestreamers and scammers reposted old videos to TikTok, claiming they showed the latest from the front lines, in order to get views and trick people into donating to fake fundraisers.
Journalists have had a difficult time following up on video-fueled updates about the situation in Gaza circulating on social media because it is extremely dangerous to report from the region right now. Many news outlets have reporters working from Israel to cover the conflict. Correspondents on the ground in Gaza are trying to keep themselves and their families alive during the Israeli bombing campaign in retaliation for the Hamas attack.
For example, Hamas and Israel have traded blame over a deadly hospital bombing in Gaza City. Hamas is blaming Israel, though US intelligence officials have said, based on initial intelligence, that they think Israel’s assertion that the bombing was the result of a misfired rocket from a militant group in Gaza is correct. Neither version of events has been independently confirmed. And yet, false confirmations on both sides are proliferating, whether from a random X account pretending to be a reporter or a statement from a member of Congress.
Last year, I wrote a guide to being online in wartime to help people navigate the misinformation around Russia’s war in Ukraine. A lot of the advice about how to quickly evaluate a river of online information hasn’t changed much over the years. But social media has changed quite a bit in just a few months and some of the old tricks for verifying unreliable posts need to be modified or unlearned altogether.
This is particularly true on X, formerly known as Twitter, which was once a central destination for those who wanted to follow major news events in real time. Elon Musk, the platform’s owner and CTO, spent the hours after Hamas attacked Israel spreading misinformation about the conflict and even told his 150 million followers to get news on the attack from two verified accounts that have a clear history of sharing false information. Musk’s recommendation had at least 11 million views before it was deleted, according to the Washington Post. All of this came after Musk spent months diminishing the platform’s capacity to moderate against misinformation and hate speech.
Since the initial attack, X users circulated a fabricated White House memo that claimed the US government was sending $8 billion in aid to Israel. An account posing as the Jerusalem Post fueled a false rumor that Israeli Prime Minister Benjamin Netanyahu was in the hospital. And because Twitter’s verification system has been repurposed into a premium badge for paying subscribers, who also get boosted engagement with their tweets, it’s now relatively easy to buy eyeballs on X and imitate expertise on the platform.
Misinformation is an exhausting topic, one that’s difficult to define, and on some platforms, including X, tackling it is no longer a company priority. So, increasingly, it’s up to you to sort through the mess. No online guide will fully protect you against the bad and untrue stuff online. But there are things you can do to navigate the online chaos that follows a major news event.
Understand the platform you’re on
Many large social media platforms have shifted back to prioritizing engagement over reliability for the posts their users see on their feeds. That has created a friendlier environment for online nonsense and coordinated disinformation. The situation is certainly made worse by the transformation of Twitter, once a useful news feed, into X, something drastically different.
X is much less trustworthy and useful these days during breaking news, and evaluating sources on the platform is trickier. On X, a blue check mark once meant that the platform had verified the identity of the person or people behind the account, or that the account officially belonged to an organization. But the badge no longer serves as a verification of identity; it’s now a perk for X’s paying users, who also get boosted engagement, putting their posts in front of more people. Some verified users are also part of a program that pays them based on their engagement on X, so for them, going viral literally pays off.
Plenty of blue-checked X users have indeed been sharing misinformation about the Israel-Hamas war. Some claim to be sharing footage of the war in action when in fact they are just repurposing clips from a video game and getting millions of views. Those videos are also getting views on TikTok.
TikTok has, in some ways, stepped into the role Twitter once had as the key social media app that people turn to in order to follow a major news event. The app, which many think of as an entertainment platform, is very different from Twitter in the 2010s, when it was a must-read for breaking news. While Twitter anointed its share of expert influencers, creators are the main conduit for news on TikTok. The app’s news creators build fandoms around their personalities and promise of independence from, say, mainstream sources. All that said, TikTok also has issues with misinformation.
And then there’s Telegram, one of the platforms Hamas is using to release violent footage. Telegram, which is part group chat and part social media platform, is popular globally, has few moderation practices, and has long been a home for extremists and conspiracy theorists who have left or been banned from more mainstream platforms. More on that later.
Learn to SIFT
The SIFT method, developed by digital literacy expert Mike Caulfield, is a good framework for learning how to evaluate emotionally charged or outrage-inducing online posts in the middle of an unfolding crisis. There are two reasons I like it: First, it’s adaptable to a lot of situations. And second, the goal here isn’t a full fact-check. SIFT is meant to be a quick series of checks anyone can do to decide how much attention a post deserves and whether they feel comfortable sharing it with others.
The SIFT method breaks down to four steps: “Stop, Investigate the source, Find better coverage, and Trace claims, quotes, and media to the original context.” That “Stop” step can do a lot of work during a major, violent conflict like the Israel-Hamas war. People get engagement on questionable or untrue posts during breaking news by tugging on your emotions and beliefs. So if a video, photograph, or post about the war seems to confirm everything you’ve ever believed about a topic or makes you immediately furious or hopeful or upset, stop yourself from instantly sharing it.
Then, investigate the source. This can be done pretty quickly. Click on the account sharing the thing you saw and glance at their information and previous posts. You’re not launching a full-scale investigation here. You’re just trying to get a sense of who has ended up in your feed. Next, find better coverage. That means you open up a bunch of tabs. Is this being reported anywhere else by trustworthy news sources? Has this claim been fact-checked? And finally, trace claims, quotes, and media back to their original context. Open up the news article and run a search for a phrase in the quote you’re about to share. See if you can find that image attributed elsewhere, and make sure the captions describe the same thing.
Check in with yourself
During acts of unfathomable violence, videos of death and maiming circulate online with the imperative to witness. Please understand that you do not have to view violent footage circulating online in order to process a horrible event, whether you feel you can handle seeing it or not.
Check in with yourself and think critically about the role you want to play on- and offline in a moment like this. That might mean resisting the impulse to become an instant breaking news reporter in your group chat. If you don’t have the skill set to evaluate the accuracy of on-the-ground footage from a neighborhood you’ve never visited, you’re not likely to develop it in a matter of minutes.
I’ve tried to avoid giving specific instructions in this guide in terms of what platforms to use or not use as a regular person trying to get news. I’m going to make one now: Especially if you’re unfamiliar with Telegram, now is not the time to indulge your curiosity and dive into the app looking for “raw” footage and live updates. In addition to the risk of encountering and engaging with literal propaganda, Telegram is notoriously bad at surfacing good information.
Your attention is valuable
Online falsehoods need attention and amplification to work. You might not have a big account with a ton of followers, but every reshare matters, both to the circle of people who see your posts online and to the engagement numbers for the original post. Interacting with something on social media — whether a cautious share “in case” it’s true or a repost to point out that something definitely isn’t — signals to the site’s algorithms that you’re interested in that content. In other words, outrage shares are still shares, even if you’re talking about a bad analysis, an unsourced photograph, or an outright lie.
Update, October 18, 6:10 pm ET: This story, originally published on October 12, has been updated to include details on the bombing of a Gaza hospital and the confusion over reports that Hamas militants were beheading children in Israel.