
7.3 Countering Misinformation and Deepfakes

Digital activists face a dual challenge: amplifying truthful narratives while countering a flood of misinformation and AI-generated deepfakes. This guide provides practical steps, real-world examples, and reflection questions to help navigate false information online.

1. Recognizing Disinformation and Deepfakes

Not all falsehoods are created equal. Misinformation is false information spread without the intent to deceive. The sharer often believes it’s true. Disinformation is false information deliberately created or shared to cause harm or mislead. Malinformation is genuine information used maliciously to cause harm – for example, leaking real data to smear someone.

Experts refer to this spectrum as “information disorder,” which also includes conspiracy theories and propaganda. It’s important to identify the intent and nature of false content in order to respond appropriately (e.g. a well-meaning friend sharing a rumor vs. a bad actor staging a hoax).

Misinformation takes many forms. It can be misleading content that frames facts out of context, impostor content from fake sources posing as real ones, manipulated media (like edited photos or videos), fabricated content that is completely made up, or claims taken out of context (false context). Activists should be on the lookout for sensational claims that play on emotions like fear or outrage – if a story or image seems “too outrageous to be true,” trust your instincts and investigate further. Often, viral disinformation will use shocking or emotionally charged narratives to prompt rash sharing before verification.

Images and videos carry powerful influence in digital activism, making them prime targets for manipulation. Clues that a photo may be fake or misrepresented include odd or inconsistent lighting and shadows, signs of digital editing (blurry or pixelated edges where objects were added or removed), or an image that seems old or unrelated to the event it’s tied to. One quick method to test a suspicious image is performing a reverse image search (more on that in the next section) to see if the photo has appeared elsewhere or in a different context. Also, check for metadata (if available) indicating when or where the photo was taken – inconsistencies can reveal if the image is being falsely captioned. For example, during major protests it’s common for old photos or videos from previous events to resurface with new misleading captions. If multiple sources or eyewitnesses can’t corroborate a striking image, that’s a red flag.
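
If you have access to the original file, a quick metadata check takes only a few lines of Python. The snippet below is a minimal sketch, assuming the Pillow imaging library is installed and using a placeholder filename; most social platforms strip this data on upload, so an empty result is not evidence of tampering.

    # Minimal EXIF check with Pillow (pip install Pillow).
    # Social platforms usually strip this data, so treat an empty result
    # as "no evidence" rather than proof of anything.
    from PIL import Image
    from PIL.ExifTags import TAGS

    def print_exif(path):
        img = Image.open(path)
        exif = img.getexif()  # empty mapping if no EXIF survived
        if not exif:
            print("No EXIF metadata found (it may have been stripped).")
            return
        for tag_id, value in exif.items():
            tag = TAGS.get(tag_id, tag_id)  # translate numeric tag IDs to readable names
            print(f"{tag}: {value}")

    print_exif("suspect.jpg")  # placeholder filename

A timestamp or camera model that contradicts the caption is a reason to dig further, not proof of fakery on its own.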

Deepfakes are hyper-realistic AI-manipulated videos or audio that can make someone appear to say or do something they never did. Early deepfake detection was easier – many fakes had telltale glitches like unnatural eye blinking, awkward mouth movements, or “robotic”-sounding speech dubbing. For instance, cheap fake videos often show “logical disconnects, poor dubbing and wide-eyed people who can barely blink.” However, advanced deepfakes today are far more convincing, with edits that are “virtually imperceptible.” Activists should be aware of subtle signs: does the lighting on the person’s face and body match the scene? Do the voice and lip movements sync naturally? Are there any distortions when the person turns their head? In one real-world example, a deepfake video of Ukraine’s President Zelensky was circulated, attempting to make him falsely urge his people to surrender. Alert viewers noticed the strangely motionless face and an uncharacteristic voice tone, which were inconsistent with genuine footage. Authorities quickly confirmed it was a fake and the video was removed, but it illustrates the need for skepticism with shocking video clips.

Whether images or text, misinformation often appeals to our emotions. Be cautious with content that makes you feel a strong reaction (anger, fear, vindication) — bad actors want you to react instantly and share it. Recognize common tropes: scapegoating of certain groups, overly neat or heroic narratives, or one-sided stories with no nuance. These can be signs of orchestrated disinformation campaigns. Developing a habit of pausing and asking “Who might benefit if I believe and spread this?” can help activists see through manipulative narratives. In summary, recognizing misinformation and deepfakes starts with being a critical consumer: know the types of falsehoods, watch for the warning signs in media, and be mindful of emotional ploys.

2. Fact-Checking Strategies and Digital Verification Tools

Identifying a false claim or suspicious video is only half the battle — the next step is verification. Digital activists should equip themselves with fact-checking habits and open-source tools to investigate doubtful content. Here are step-by-step strategies and tools for verifying information:

  1. Pause and Inspect the Source: Before sharing or reacting, check who is providing the information. Is it coming from a known news organization, an established expert, or an anonymous social media account? Examine the website or profile: look for an “About” page, contact info, or other signs of legitimacy. Be wary of impostor sources (e.g., a URL that mimics a real news site). Many fake news sites have slight spelling variations or odd domain extensions. If the source is unclear or sketchy, don’t take the content at face value. Activists should cultivate a list of trusted sources (major news outlets, verified fact-checkers, respected community organizations) to cross-reference important claims.

  2. Cross-Check the Information: A critical fact-checking step is corroboration. See if other reputable outlets or organizations are reporting the same claim. If a dramatic story is true, chances are multiple credible sources will cover it. If you only find the claim on partisan blogs or random social posts, it’s a sign to be skeptical. Use search engines or fact-check sites to see if the story has been debunked already. For example, if someone shares a startling statistic or quote, do a quick search of keywords + “hoax” or the person’s name to see if it’s been verified. Often, fact-checkers like Snopes, PolitiFact, or Reuters Fact Check have already investigated popular rumors. Always ask: where did this information originally come from? Tracing a viral claim back to its origin can reveal if it began as a joke, a misinterpretation, or a deliberate fake.

  3. Verify Images with Reverse Image Search: Images are powerful tools for activists, but they can also be easily miscaptioned or edited. A reverse image search might reveal that a photo claiming to show a current protest is actually from a different country or year. For example, activists discovered that some images purportedly from a 2021 protest were actually taken in 2014 by finding the original Getty Images source. This process exposes “recycled” pictures that bad actors use to mislead audiences. A reverse image search should be a go-to technique for any suspicious or highly viral image. Free tools are available: Google Reverse Image Search, Yandex, and TinEye are three widely used services. To use them, you can upload the image or paste its URL into the search bar (a small helper script for opening all three services at once is sketched after this list). These tools will find other instances of that image online:

    • Google Images has the largest database and even integrated facial recognition, which helps find images of people. It can also sort results by size, which is useful if you need a higher resolution to inspect details.
    • Yandex (a Russian search engine) sometimes finds different or additional matches, especially for images originating from Eastern Europe or Russia.
    • TinEye allows sorting by the image’s earliest appearance online. This is extremely handy to determine the timeline — TinEye can show if an image first appeared years ago, indicating it’s being reused out of context.
  4. Examine Metadata and Visual Evidence: Sometimes, digital files contain hidden data. Metadata (such as when a photo was taken, on what device, or GPS location) can be checked using tools or by viewing file properties. However, note that social media platforms often strip away metadata, and sophisticated fakers can alter it. Another tactic is using an image magnifier or error level analysis to spot edits. For instance, the InVid/WeVerify browser plugin is a free toolkit that helps verify images and videos. It can zoom in on images, perform reverse searches from multiple engines at once, and analyze metadata if available. Activists investigating suspicious media (like a photo that seems altered) can also use dedicated forensic websites such as Forensically or FotoForensics, which analyze images for signs of manipulation. These tools highlight areas with inconsistent noise or compression levels, which can indicate where an image was spliced or airbrushed (a minimal error level analysis sketch appears after this list). For example, a widely shared image of an environmental protest might have had a sign’s text changed digitally — a noise analysis could reveal unnatural pixel patterns around the text. While you don’t need to be a forensics expert, knowing these tools exist means you can do a basic check (or refer it to specialists) when something looks off.

  5. Validate Videos and Detect Deepfakes: Verifying videos is more challenging, but there are methods activists can use:

    • Keyframes & Reverse Search: Take screenshots of key video frames (especially ones showing distinctive objects, signs, or faces) and do a reverse image search on those frames. This can reveal if parts of the video come from older footage. The InVid plugin mentioned above can actually break a video into key frames to facilitate this.
    • Audio Clues: Listen closely to the audio. Deepfake videos might have mismatched voice accents or tone, or a robotic quality if the voice is synthesized. If the speaker’s mouth movements don’t perfectly align with the words, that’s a warning sign of tampering.
    • Deepfake Detection Tools: Cutting-edge solutions are emerging. Companies like Deeptrace Labs have built deepfake detection systems that use deep learning to flag doctored videos. These systems often look at subtle inconsistencies in facial movements, lip-sync, and eye blinking that are hard for deepfake algorithms to get perfect. Another example is the AI Foundation’s Reality Defender 2020, which allows journalists (and in some cases the public) to upload a video and get a report on whether it’s likely fake. These tools are not yet widely available to everyone and they’re not foolproof, but they represent a growing effort to automate deepfake detection. As of now, the “arms race” between deepfake creators and detectors is ongoing, so activists should not rely solely on technology – combine it with human judgment.
    • The Human “Pause” Button: Perhaps the most powerful tool is a human one: restraint. If you see a highly inflammatory video (say, a politician purportedly admitting corruption on camera), resist the urge to share immediately. Take a step back and give it a few hours while fact-checkers and experts analyze it. Often, hoaxes get exposed within a short time. As one guide puts it, have a “human pause button guided by common sense” before amplifying questionable footage. In practice, this means checking reliable fact-checking sites (like PolitiFact or AFP Fact Check) to see if they have evaluated the video. Many fact-checking organizations now rapidly assess viral videos, including deepfakes, and will publish a debunk if it’s false. Spending a moment to search “[person] video fake hoax” can save you from spreading a harmful forgery.
  6. Use Credible Fact-Checking and Verification Services: Leverage the work already done by professionals. Websites such as Snopes, PolitiFact, FactCheck.org, and BBC Reality Check regularly debunk circulating misinformation. For verifying social media content, organizations like First Draft, Bellingcat, or the DFRLab (Digital Forensic Research Lab) publish guides and case studies on verification techniques. There are also community-driven efforts: for example, the browser extension rbutr alerts you if a webpage you’re reading has been contradicted by other articles. Activists might install such tools to get automatic warnings of potential falsehoods. Additionally, if a suspicious claim involves scientific or statistical data, try to find the original study or source. Governments and academic institutions often release reports to counter prevalent myths (e.g., a government health department might publish a “COVID-19 myths vs facts” page addressing current false claims).
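
To make step 3 faster, the short script below opens one hosted image in all three reverse image search services at once. Treat it as a sketch only: the query URL patterns are assumptions that the services may change at any time, and uploading the file through each site’s normal web form works just as well.

    # Open one hosted image in several reverse image search engines at once.
    # NOTE: these query URL patterns are assumptions and may change;
    # manually uploading the image on each site is the reliable fallback.
    import webbrowser
    from urllib.parse import quote

    IMAGE_URL = "https://example.org/suspicious-photo.jpg"  # placeholder

    SEARCH_URLS = [
        "https://lens.google.com/uploadbyurl?url={}",             # Google
        "https://yandex.com/images/search?rpt=imageview&url={}",  # Yandex
        "https://tineye.com/search?url={}",                       # TinEye
    ]

    for pattern in SEARCH_URLS:
        webbrowser.open(pattern.format(quote(IMAGE_URL, safe="")))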
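
For step 4, a rough error level analysis can be improvised in a few lines: re-save the JPEG at a known quality and look at how much each region changes, since spliced or airbrushed areas often recompress differently from the rest of the image. The sketch below assumes the Pillow library and uses placeholder filenames; it is a quick first check, not a replacement for Forensically, FotoForensics, or a specialist.

    # Rough error level analysis (ELA) with Pillow (pip install Pillow).
    # Brighter regions in the output changed more during recompression,
    # which can hint at (but never prove) splicing or airbrushing.
    import io
    from PIL import Image, ImageChops

    def ela(path, quality=90, scale=15):
        original = Image.open(path).convert("RGB")

        # Re-save at a known JPEG quality, then reload the compressed copy.
        buffer = io.BytesIO()
        original.save(buffer, "JPEG", quality=quality)
        buffer.seek(0)
        resaved = Image.open(buffer)

        # Per-pixel difference, exaggerated so faint differences are visible.
        diff = ImageChops.difference(original, resaved)
        return diff.point(lambda px: min(255, px * scale))

    ela("suspect.jpg").save("suspect_ela.png")  # inspect the output by eye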
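
And for the keyframe approach in step 5, the sketch below saves a still frame every few seconds of a video so the images can be fed to a reverse image search. It assumes the opencv-python package is installed; the filename and interval are placeholders you would adjust.

    # Save a still frame every few seconds so it can be reverse image searched.
    # Assumes opencv-python is installed (pip install opencv-python).
    import cv2

    def extract_frames(video_path, every_seconds=5, out_prefix="frame"):
        cap = cv2.VideoCapture(video_path)
        fps = cap.get(cv2.CAP_PROP_FPS) or 30  # fall back if FPS is unreadable
        step = int(fps * every_seconds)        # frames to skip between stills
        index = saved = 0
        while True:
            ok, frame = cap.read()
            if not ok:  # end of video (or unreadable file)
                break
            if index % step == 0:
                cv2.imwrite(f"{out_prefix}_{saved:03d}.jpg", frame)
                saved += 1
            index += 1
        cap.release()
        return saved

    print(extract_frames("suspect_video.mp4"), "frames saved")  # placeholder filename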

Consider creating a personal or team verification checklist for your activist group. For example: whenever you see a viral claim related to your cause, you will 1) verify the source, 2) cross-check at least two reputable outlets, 3) run a reverse image search if applicable, and 4) look at fact-checker findings before sharing or responding. Over time, this routine becomes second nature. Remember that speed is not as important as accuracy in activism messaging; it’s better to be right and trusted than to be first but wrong.

3. Case Studies of Misinformation in Activism

Misinformation has impacted social movements across the political spectrum, often with serious consequences. By examining case studies – both historical and contemporary – activists can learn how false narratives emerge and spread, and how they can derail or distort a movement’s message. Below are several examples (from conservative and liberal perspectives, and some lesser-known historical cases) illustrating misinformation in activism:

  • “Pizzagate” Conspiracy and Real-World Violence (2016): During the 2016 U.S. elections, a baseless conspiracy theory nicknamed “Pizzagate” claimed that a Washington D.C. pizzeria was the hub of a child-trafficking ring involving high-ranking Democrats, including Hillary Clinton. This began with false tweets and posts on extremist forums that went viral. Even though mainstream media debunked it, the rumor proliferated on social media and YouTube. The situation escalated dangerously when a man, believing the online stories, showed up at the pizza shop armed with a rifle and fired shots, attempting to “self-investigate” the fictitious child abuse ring. Thankfully, no one was injured, but this incident showed how “fake news brought real guns” into a family restaurant (as one headline put it). Pizzagate is an example of politically charged disinformation inspiring offline action. It demonstrates the importance of promptly debunking viral conspiracies and the challenge activists face when wild falsehoods capture public attention.

  • COINTELPRO and Disinformation Against Civil Rights (1960s): A lesser-known historical example comes from the civil rights era. The FBI’s Counterintelligence Program (COINTELPRO) actively spread disinformation to disrupt activist groups like the Black Panther Party and others. Declassified documents show the FBI created forged letters to stoke internal distrust and even planned fake communications to expel members. In one case, they impersonated a Black Panther leader in a forged letter, an attempt that reportedly failed only because the Bureau ran out of the correct stationery. The FBI also fabricated inflammatory materials to discredit activists – for example, they produced a bogus “Black Panther Coloring Book” filled with violent images and reportedly distributed it to tarnish the group’s image. This disinformation was aimed both at the public and at the movements themselves, to sow chaos and destroy credibility. For today’s activists, COINTELPRO serves as a cautionary tale: misinformation isn’t new, and even authorities have weaponized fake content to undermine social movements. It underlines why transparency and internal education within activist groups about possible false flags or agent provocateurs are vital.

  • Misinformation during the Black Lives Matter Protests (2020): The nationwide protests against racial injustice following George Floyd’s killing in 2020 were accompanied by a surge of misinformation online. The false claims came from various ideological angles:

    • Conspiracy Theories: Almost immediately, fringe groups spread theories that George Floyd’s death was staged and that he was “not really dead,” suggesting a “deep state” plot. One YouTube conspiracy video falsely alleging Floyd was alive got nearly 1.3 million views before removal. Others claimed the officer involved was an actor. Such false theories aimed to undercut the legitimacy of the protests by insinuating the outrage was built on a hoax.
    • False Funding Narratives: A long-running conservative conspiracy theory about George Soros (a philanthropist often scapegoated in right-wing circles) surged again, accusing him of funding and orchestrating the BLM protests. In one week, Soros was mentioned in 34,000 tweets related to Floyd, and over 90 YouTube videos in multiple languages pushed this narrative. On Facebook, posts with Soros conspiracies spiked dramatically, with 9 of the 10 most popular posts about him spreading false claims tying him to the unrest. This was amplified by notable figures; for instance, the Texas Agriculture Commissioner publicly accused Soros of funding “so-called spontaneous protests,” calling him “pure evil”. The Open Society Foundations (Soros’s organization) issued a statement debunking these allegations and defending the protesters’ genuine motives. The persistence of this false narrative illustrates how activists sometimes must combat external attempts to delegitimize their movements through guilt by association with a boogeyman.
    • “Antifa” Scapegoating and Hoaxes: On the flip side, some on the right spread unsubstantiated claims that antifa (anti-fascist activists) were behind the violence and looting. Despite President Trump labeling antifa as a “Terrorist Organization” and blaming them for unrest, the FBI stated there was no evidence that antifa or any organized extremist group hijacked the protests. Additionally, a fake “riot instruction manual” purporting to be from antifa was circulated online – but it turned out to be an old hoax recycled from 2015 protests. In another case, a Twitter account claiming to be “antifa” and encouraging violence was exposed as actually belonging to a white supremacist group trying to make the protesters look bad. These examples show how false attributions (blaming a scapegoat like antifa) were used to discredit a largely peaceful movement and justify crackdowns. Activists had to rapidly dispel these myths to keep focus on their actual message.
    • Doctored and Miscaptioned Images: Misleading visuals also spread widely. One dramatic photo showed the White House in darkness during the protests with claims that President Trump had turned off the lights – it circulated even among government critics and a former presidential candidate. In reality, the image was old (from 2014) and edited to appear darker; the White House was not actually dark that night. In another rumor, posts claimed animals had escaped from a zoo during protests; these were false reports that gained traction online. Yet another viral image showed the Lincoln Memorial supposedly vandalized with graffiti, which was also a fake – the only defacement had been minor and nowhere near the statue. Each of these false images required quick fact-checks and sharing of correct information (for example, people used reverse image search to expose the White House photo hoax). The spread of such content in 2020 demonstrated how images can be repurposed or doctored to fabricate storylines, and activists learned to be vigilant in verifying visuals coming out of fast-moving events.
  • Misinformation in Other Movements: No movement is immune to misinformation. On the more progressive/liberal side, activists have also been caught by misleading information at times. For example, environmental and climate justice movements have faced false claims that misrepresent data (like cherry-picked weather events to deny climate change) or fake experts sowing doubt. In some cases, well-meaning activists have accidentally shared old photos of disasters thinking they were new. During the 2021 Israel-Palestine clashes, both sides shared old or unrelated images on social media to support their narratives. One widely shared photo by pro-Palestinian accounts showed a child amid rubble, claimed to be recent – but journalists discovered it was actually from Gaza in 2014. Conversely, miscaptioned images were also used to falsely accuse one side or the other of actions they didn’t commit. These instances remind us that misinformation isn’t strictly left or right – it tends to reinforce whatever beliefs the target audience already holds. Activists must therefore approach all viral content, even if it supports their cause, with a critical eye.

Each of these case studies carries lessons. False information can originate from anyone – domestic groups, foreign agents, or just misinformed individuals – and it can target any movement. It can be used to discredit a cause (as with BLM), incite real-world harm (Pizzagate), or fracture groups from within (COINTELPRO). By studying these examples, activists can better anticipate the kinds of myths or fake narratives that might arise around their own campaigns. They also illustrate the need for rapid response: in several cases above, timely fact-checking and communication by activists, journalists, or officials helped contain the spread of the myth.

4. Challenging Disinformation Effectively

Confronting misinformation is a delicate task. Activists want to set the record straight, but responding the wrong way can sometimes backfire or even amplify the falsehood. This section offers ethical and strategic approaches to debunking lies and countering manipulated media while minimizing unintended side effects. The goal is to correct the falsehood, protect your cause’s narrative, and educate others, all without spreading the misinformation further or engaging in toxic exchanges.

Lead with the Truth: Cognitive scientists suggest that when countering a lie, one should start and end with the truth, and place the misleading claim in the middle – a technique known as the truth sandwich. Why? The first thing people hear frames the issue. So, begin your response by stating a true fact or the correct information. Then, identify the claim that was false, briefly address or refute it (without using overly inflammatory language or repeating the exact false claim excessively), and conclude by reinforcing the truth again. For example, if false rumors spread that a protest was “paid for,” an activist might respond: “Our protest had no outside funding – it was a grassroots event organized by local volunteers. Some are falsely claiming we were paid by a foundation; that’s not true and has been debunked. The fact is, community members came together on their own because they care about this issue, not for any money.” This way, the true narrative bookends the statement, and the lie is sandwiched in the middle and identified as false. This approach helps prevent the false claim from sticking in people’s minds by itself. Studies have found that repeating a myth without context can inadvertently make it more familiar and believable, so always pair a mention of the myth with an immediate correction.

Don’t Amplify: One challenge in debunking is that you might inadvertently draw more attention to the misinformation, especially if it was not widely seen to begin with. Before you respond publicly to a false claim, consider how far it has spread. If only a few people encountered it, it might be more effective to correct it in that small group (for instance, replying in the original comment thread or privately informing someone) rather than broadcasting it to a wider audience. In all cases, never share the false content without a correction attached. Psychologists advise against repeating the misinformation on its own, as repetition can make people remember the claim but not the correction. Instead, focus your messaging on the facts you want people to remember. For example, instead of tweeting “No, the mayor did not ban all protests,” say “Rumor control: The mayor’s official statement confirms protests are not banned – gatherings are allowed with safety measures. The claim of a total ban is false.” Notice the correction contains the myth, but it’s embedded with the factual truth and clearly labeled as false. By doing this, you ensure that anyone who hears your counter-message walks away with the correct information, not just the rumor.

Use Evidence and Citations: When possible, support your debunk with credible evidence or references. Link to a reputable news article, an official statement, or a fact-check that confirms what you’re saying. As one example: “This photo is doctored – see the analysis by digital forensics experts here.” Providing a source not only backs up your claim but also allows those who are uncertain to investigate further, which can be persuasive. In activism contexts, if misinformation is harming your cause, consider creating a “rumor control” FAQ page or shareable infographic that lists common false claims and the real facts, with sources. During the 2020 protests, many community groups did this, compiling threads or pages debunking the top circulating myths (such as “No, there are no piles of bricks mysteriously left for rioters – local construction sites have been explained by city officials” with a source). This preemptive clarification can stop rumors from spreading further.

Avoid Personal Attacks: It’s understandable to feel angry when you see lies spreading about a cause you care about. However, responding with insults or attacking the character of those sharing misinformation is likely to be counterproductive. It may entrench their beliefs or alienate onlookers who might otherwise have been swayed by your factual correction. Keep the tone civil and focus on the information, not the person. For instance, instead of “Only an idiot would believe this,” say “I can see why this claim caught attention, but here’s why it’s not accurate….” Maintaining a respectful tone and showing empathy can make people more receptive to the correction. Research suggests that showing understanding (e.g., “I also found that video shocking at first”) before correcting can reduce defensiveness. Remember, the goal is not to “win an argument” but to share truth in a way that others can accept.

Inoculate Your Audience (Prebunking): One effective strategy is inoculation, also known as prebunking. This involves educating people about common misinformation techniques before they encounter them, much like a vaccine prepares the immune system. Activists can engage in prebunking by warning their communities: “Be aware, we expect to see false claims about XYZ circulating — if you see sensational posts about it, double-check the facts.” For example, ahead of a planned protest, you might tell followers, “In past events, we’ve seen trolls spread fake tweets about ‘paid protesters.’ If you see something like that, know that it’s bogus, and refer to our official updates.” By forecasting the kinds of disinformation that might appear, you equip supporters to recognize and dismiss it. This technique was used effectively in Ukraine, where the government warned about potential deepfake videos of their president weeks before one actually appeared. Many Ukrainians were thus primed to doubt and investigate any such video, blunting its impact. Prebunking can be as simple as sharing an article that debunks a common myth or literally saying, “there is a lot of misinformation on this topic; here’s how to spot it.”

Frequent and Timely Corrections: Do not hesitate to correct misinformation whenever you see it causing harm. The idea that correcting false beliefs always backfires (making people cling to their beliefs more) has been largely debunked. In fact, well-executed corrections often succeed in reducing misconceptions, especially if done promptly. The sooner a falsehood is corrected, the less time it has to take root. So, if a false rumor is going viral in your activist circle, address it quickly and widely (being mindful of the amplification concern noted earlier). Repetition isn’t the enemy if it’s repetition of the truth – don’t be afraid to share the correct information multiple times, through multiple channels, especially if the myth keeps resurfacing.

Engage in Productive Dialogue: When countering misinformation interpersonally (like a friend or fellow activist who shares a false story), strive for a conversation, not a lecture. Ask questions that encourage them to think critically: “That claim is surprising – where did you hear it? Have you checked if other sources say the same?” Offer to look it up together. This collaborative approach can be more effective than a blunt “you’re wrong.” If the person seems receptive, guide them through a verification process (for instance, demonstrate a quick reverse image search on the photo they shared). The aim is to not only debunk that particular piece of misinformation, but also to impart some media literacy skills. By treating the other person with respect and as a partner in seeking truth, you’re more likely to open their mind. Even if they don’t fully agree in the moment, you’ve set an example of how to calmly fact-check – a behavior they might emulate later.

Use Humor and Positive Messaging (When Appropriate): Sometimes, especially when combating conspiracy theories or propaganda, a bit of humor can defuse tension and undermine falsehoods. The government of Taiwan famously uses a “humor over rumor” approach – when a hoax about COVID-19 surfaced (like “toilet paper shortage due to mask production”), officials responded with funny memes and jokes while conveying the factual correction. This tactic can prevent the misinformation from seeming threatening or credible by making it look ridiculous. If it suits your movement’s tone, consider using satire, memes, or light-hearted content to call out obvious falsehoods. But make sure to use humor carefully – it’s best reserved for clearly absurd misinformation and when your audience will get the joke. It’s less appropriate for deeply harmful lies (like denying violence or tragedy) where a serious, compassionate correction is needed.

Maintain Ethical High Ground: Activists should hold themselves to high standards of truth-telling. Fight misinformation, but never with disinformation of your own. Don’t exaggerate or manipulate facts to “fight fire with fire.” Not only is that unethical, it will backfire by damaging your credibility and the trust of the public. Commit to honesty in all your communications. Admit mistakes if you share something that turns out false – it actually enhances your credibility to say, “We learned this was incorrect, here’s the update.” By being transparent and accurate, you build a reputation as a reliable source, which is the best defense against false narratives. People will learn to come to you for the real story when they see junk circulating.

5. Reflection and Skill-Building Questions

Countering misinformation is an ongoing learning process. Use these questions to reflect on your experiences and to practice the skills discussed in this guide. They are meant to spur critical thinking about your role in the information ecosystem and how you can improve your digital activism tactics:

  • Think of a time you encountered misinformation. What tipped you off that it might not be true? How did you respond, and what (if anything) would you do differently now after reading this guide?

  • How do you verify information before sharing it? Outline the steps you currently take (if any). Based on the fact-checking strategies in this guide, what additional steps will you commit to implementing going forward to ensure accuracy in what you post or amplify?

  • Deepfakes are becoming more common. How might a deepfake or manipulated video cause harm to a cause you care about? What would you do if you suspect a viral video related to your movement is a deepfake? For example, who would you consult, and how would you alert others without spreading the video further?

  • Misinformation vs. Disinformation vs. Malinformation: In your own words, explain the difference. Why is it important for an activist to understand the motive and intent behind a false piece of content? How might your response differ if the person sharing false info is misinformed (believes it) versus disinforming (intentionally spreading a lie)?

  • Consider your own biases. What kinds of news or posts are you most likely to believe or share without checking? Why do you think that is? How can you remind yourself to stay critical even when information confirms your pre-existing views or hopes about a situation?

  • Scenario practice: Imagine a misleading image is circulating about an event your group is organizing – for example, an edited photo that puts your protest in a bad light. Outline a plan for how you and your team would investigate and respond. Who would fact-check it? How would you communicate the correction to your followers and to the general public?

  • Engaging others in corrections: How would you approach a fellow activist or friend who has been sharing conspiracy theories or false information online? What strategies discussed in Section 4 (like empathy, asking questions, truth sandwich) would you use, and why? What outcome would you hope for?

  • Global awareness: Misinformation can look different elsewhere. Can you identify a misinformation issue in another country or community (perhaps one mentioned earlier in this guide) that surprised you? What can you learn from the strategies activists used in that context? Conversely, if you were collaborating with activists from another country, what about your local misinformation landscape would you need to explain to them?

  • Personal action plan: What are three concrete things you will do in the next month to improve your resilience against misinformation and deepfakes?

Reflecting on these questions will help solidify your understanding and reveal areas where you can grow as a savvy digital activist. The fight against misinformation is collective – by improving our individual skills, we strengthen the movements we support.
