(New York) An explicit video of a Pennsylvania man accused of beheading his father, which circulated for hours on YouTube, has once again highlighted social media companies’ inability to keep atrocities posted on their platforms from spreading across the web.
Police said Wednesday they charged Justin Mohn, 32, with first-degree murder and abuse of a corpse after he decapitated his father, Michael, in their Bucks County home and publicized the act in a 14-minute YouTube video that anyone, anywhere could view.
News of the incident — which has been compared to beheading videos posted online by Islamic State militants at the height of their notoriety nearly a decade ago — came as executives from Meta, TikTok and other social media companies were testifying before federal lawmakers frustrated by what they see as a lack of progress in keeping children safe online.
YouTube, which is owned by Google, did not attend the hearing, despite being one of the most popular platforms among teenagers.
The disturbing Pennsylvania video follows other atrocity videos that have surfaced on social media in recent years, including domestic mass shootings livestreamed from Louisville, Kentucky; Memphis, Tennessee; and Buffalo, New York, as well as carnage filmed abroad in Christchurch, New Zealand, and the German city of Halle.
Middletown Township Police Capt. Pete Feeney said the video was posted around 10 p.m. Tuesday and remained online for about five hours, a lag that raises questions about whether social media platforms are adhering to moderation practices that may be more necessary than ever amid the wars in Gaza and Ukraine and a highly contentious U.S. presidential election.
“This is another example of the blatant failure of these companies to protect us,” said Alix Fraser, director of the Council for Responsible Social Media at the nonprofit Issue One. “We can’t trust them to grade their own homework.”
A YouTube spokesperson said the company had removed the video, deleted Mohn’s channel and was tracking and removing any re-uploads that might appear.
The video-sharing site says it uses a combination of artificial intelligence (AI) and human moderators to monitor its platform, but it did not respond to questions about how the video was caught or why it was not caught sooner.
Large social media companies moderate content with powerful automated systems, which can often detect prohibited content before a human can. But that technology can falter when a video is violent and graphic in a new or unusual way, as this one was, said Brian Fishman, co-founder of the trust and safety technology startup Cinder.
That’s when human moderators are “really, really essential,” he said. “The AI is getting better, but it’s not there yet.”
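That division of labor can be pictured as a simple triage rule: an automated classifier’s confidence decides whether a video is removed outright, queued for a human reviewer, or left up, with novel footage tending to land in the uncertain middle band. The Python sketch below is a hypothetical illustration of that pattern, not any company’s actual pipeline; the `violence_score` field, the threshold values and the function names are all assumptions.

```python
from dataclasses import dataclass

# Hypothetical thresholds; real systems tune these per policy area.
AUTO_REMOVE = 0.98   # act without human review above this score
HUMAN_REVIEW = 0.60  # queue for moderators above this score

@dataclass
class Upload:
    video_id: str
    violence_score: float  # output of an automated classifier, 0.0 to 1.0

def triage(upload: Upload) -> str:
    """Route an upload based on classifier confidence.

    Footage that is violent in a new or unusual way often scores in the
    middle band, which is why human moderators remain essential.
    """
    if upload.violence_score >= AUTO_REMOVE:
        return "remove"
    if upload.violence_score >= HUMAN_REVIEW:
        return "human_review"
    return "allow"

# Example: a borderline score goes to a person, not the algorithm.
print(triage(Upload(video_id="abc123", violence_score=0.72)))  # human_review
```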
At about 12:40 a.m. Eastern time Wednesday, the Global Internet Forum to Counter Terrorism, a group set up by technology companies to prevent such videos from spreading online, said it had alerted its members to the video. The forum lets the platform where the original footage appeared submit a “hash,” a digital fingerprint of the video, and notifies nearly two dozen other member companies so they can block it on their platforms.
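Because re-uploads are rarely byte-identical, hash-sharing programs rely on perceptual fingerprints that stay close under small edits (Meta’s open-sourced PDQ algorithm is one production example). The sketch below shows the general idea with a simple 64-bit “difference hash” computed on an image or extracted video frame, assuming the Pillow library is installed; the helper names and the distance threshold are hypothetical, not the forum’s actual scheme.

```python
from PIL import Image  # assumes Pillow is installed

HASH_SIZE = 8  # 8x8 brightness comparisons -> a 64-bit fingerprint

def dhash(path: str) -> int:
    """Compute a 64-bit difference hash of an image or video frame."""
    # Shrink to 9x8 grayscale so each row yields 8 left-vs-right comparisons.
    img = Image.open(path).convert("L").resize((HASH_SIZE + 1, HASH_SIZE))
    pixels = list(img.getdata())
    bits = 0
    for row in range(HASH_SIZE):
        for col in range(HASH_SIZE):
            left = pixels[row * (HASH_SIZE + 1) + col]
            right = pixels[row * (HASH_SIZE + 1) + col + 1]
            bits = (bits << 1) | (left > right)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

def matches_shared_hashes(frame_path: str, shared_hashes: set[int],
                          threshold: int = 10) -> bool:
    """True if a frame is within `threshold` bits of any shared fingerprint,
    so lightly edited re-uploads still match the original footage."""
    h = dhash(frame_path)
    return any(hamming(h, known) <= threshold for known in shared_hashes)
```

The design choice matters: a cryptographic hash would miss a clip that has been cropped, re-encoded or watermarked, while a perceptual hash tolerates those small changes, which is why lightly altered re-uploads can still be flagged across member platforms.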
But by Wednesday morning, the video had already spread to X, where explicit footage of Mohn holding his father’s head remained on the platform for at least seven hours and was viewed 20,000 times. The company did not respond to a request for comment.
Radicalization experts say social media and the internet have reduced barriers for people wanting to explore extremist groups and ideologies, allowing anyone who might be predisposed to violence to find a community that reinforces those ideas.
In the video posted after the killing, Mohn describes his father as a 20-year federal employee, espouses various conspiracy theories and rants against the government.
Most social platforms have policies to remove violent and extremist content. But they can’t detect everything, and the emergence of many newer, less tightly moderated sites has allowed more hateful ideas to spread unchecked, said Michael Jensen, a senior researcher at the Consortium for the Study of Terrorism and Responses to Terrorism, based at the University of Maryland.
Despite the obstacles, social media companies need to be more vigilant in regulating violent content, said Jacob Ware, a fellow at the Council on Foreign Relations.
“The reality is that social media has become a front line against extremism and terrorism,” Ware said. “That is going to require more serious and committed efforts to respond.”
Nora Benavidez, senior counsel at the media advocacy group Free Press, said she would like to see more transparency about the kinds of employees affected by layoffs and more investment in trust and safety workers.
Google, which owns YouTube, laid off hundreds of employees this month working on its hardware, voice assistance and engineering teams. Last year, the company said it was cutting 12,000 jobs across Alphabet.
AP journalists Beatrice Dupuy and Mike Balsamo in New York, and Mike Catalini in Levittown, Pennsylvania, contributed to this report.