Charlie Kirk Was Shot and Killed in a Post-Content-Moderation World

Minutes after conservative political activist Charlie Kirk was shot yesterday at a speaking engagement at Utah Valley University, jarring videos of the incident began circulating on apps like TikTok, Instagram, and X. In the immediate aftermath, the majority of the videos viewed by WIRED did not contain content warnings. Many began autoplaying before viewers could choose whether to watch. And on X, an AI-generated recap of the incident falsely indicated that Kirk had survived the shooting.
Researchers tracking the spread of the shooting videos on social media say that major social platforms are falling short in enforcing their own content moderation rules, at a moment when political tensions and violence are flaring. And the video of Kirk being fatally shot is falling into a policy loophole, threading the needle between allowable “graphic content” and the category of “glorified violence” that violates platform rules.
“It’s unbelievable how some of these videos are still up. And with the way this stuff spreads, it is absolutely impossible to take down or add warnings to all of these horrific videos if you don’t have a robust trust and safety program,” says Alex Mahadevan, the director of MediaWise at the Poynter Institute.
Over the past two years, social platforms like X, TikTok, Facebook, and Instagram have scaled back their content moderation efforts—in some cases eliminating the work of human moderators who previously acted as a crucial line of defense to protect users from viewing harmful content. Many platforms use AI tools to try to spot and label potentially damaging video content, but the companies don’t always share specifics about how these tools are deployed.
The videos of Kirk being shot show him sitting on a stool answering questions from students and other spectators. The moments before, during, and after he was killed were captured on smartphones and shared quickly on social media. Many videos show Kirk suddenly recoiling, having been shot, and blood pouring from the left side of his neck.
WIRED viewed these videos across TikTok, Instagram, X, Facebook, Threads, and Bluesky. Some of the videos appeared organically in feeds or upon first opening the apps. Others were easily surfaced by searching for keyword terms such as “Charlie Kirk” and “Charlie Kirk shot.” On X, some users advised that people turn off autoplay so they wouldn’t accidentally view videos of the shooting.
“I don't think it is possible to prevent the initial distribution, but I think platforms can do better in preventing the massive distribution through algorithmic feeds, especially to people that did not specifically search for it,” says Martin Degeling, a researcher who audits algorithmic systems and works with organizations like the non-profit AI Forensics.
Degeling was tracking the spread of Kirk shooting videos overnight, and he noted that one clip had reached more than 17 million views on TikTok. According to a screenshot viewed by WIRED, the video in question appears to have been recorded from just a few rows back from where Kirk was sitting on stage. It contained the hashtags: #charliekirk #rip #charliekirkdied #charliekirkincident #ripcharlie. The video has since been removed.
Another TikTok video Degeling shared with WIRED showed a slow-motion, close-up angle of the bullet hitting Kirk’s neck. The tone of the video was conspiratorial: The user who uploaded it added spooky music and a digitally narrated voice, asking, “What is the black thing on his shirt and why did it move like this before he got shot?” As of Thursday morning, the video was still online. It had been up for eight hours and had more than 900 comments (with many saying the “black thing” was a microphone).
As of Thursday morning, on Instagram, a search for “Charlie Kirk shot” surfaced a close-up video of the incident as the first result. The video autoplays as a thumbnail, without warning. At the time of writing, the video had 15.3 million views.
Not only are the Kirk shooting videos spreading rapidly, but some are in clear violation of the platforms’ social media policies. For example, TikTok’s terms of use state that the company does not allow “gory, gruesome, disturbing, or extremely violent content.”
“We are saddened by the assassination of Charlie Kirk and send our deepest condolences to his wife Erika, their two young children, and their family and friends,” TikTok spokesperson Jamie Favazza said in a statement. “These horrific violent acts have no place in our society. We remain committed to proactively enforcing our Community Guidelines and have implemented additional safeguards to prevent people from unexpectedly viewing footage that violates our rules.”
On other platforms, the Kirk video falls into a gray area. Meta’s overarching policy is to age-restrict certain content, require warning labels, and remove some graphic depictions of violence.
A spokesperson for Meta said that, per the company’s Violent and Graphic Content policies, it is applying a “Mark as Sensitive” warning label to footage of the Kirk shooting and age-gating it to users 18 and older. The spokesperson also said that the company has 15,000 people reviewing content for Meta—though it did not say whether these are employees or contractors—and that it does not allow videos that glorify, represent, or support the incident or perpetrator.
Meta also states in its online Transparency Center that it does not allow content of “terrorist attacks, hate events, multiple-victim violence or attempted multiple-victim violence, serial murders, or hate crimes perpetrator-generated content relating to such attacks; or third-party imagery depicting the moment of such attacks on visible victims.” Still, the widely circulated footage of Kirk being shot, for now, is allowable. It will get a warning label and be age-gated, but not removed from Meta platforms unless determined to be in clear violation of the “glorified content” policy.
X tells users that they “may share graphic media if it is properly labeled, not prominently displayed and is not excessively gory or depicting sexual violence.” The platform notes that content that is “explicitly threatening, inciting, glorifying, or expressing desire for violence” is not allowed.
Mahadevan, from the Poynter Institute, says that he saw the Kirk shooting video without his consent multiple times on X on Wednesday, likening it to a version of “4Chan turned into a mainstream social media platform.” (He also says he opened up Facebook on Thursday morning and immediately saw a video of Kirk being shot.)
X did not reply to requests for comment or questions about whether the Kirk video was considered “excessively gory” by X’s standards.
But X appears to have another content moderation problem: A few hours after Kirk was pronounced dead, the AI chatbot Grok, which runs on X, insisted that Kirk was “fine and active as ever.” X did not reply to further questions from WIRED about Grok’s misinformation about the Kirk shooting.
Bluesky has said it’s suspending accounts that encourage violence and taking down close-up videos of the event.
For now, the videos of Charlie Kirk’s shooting continue to spread online.
“This is all psychologically damaging to our society in ways we don’t understand yet,” Mahadevan said. “We’re seeing posts on X of people saying, ‘Congratulations, you’ve radicalized me.’ And part of that is because they’re seeing the video of Kirk being killed. They’re not just reading about it. They’re actually seeing it.”
Additional reporting by Kylie Robison.
Updated: 9/11/2025 4:00 pm EDT: This story has been updated with comment from TikTok and to reflect the current institutional affiliation of a researcher.