Random acts of violence are truly devastating, but they are even worse when they're livestreamed on Facebook and shared thousands of times for the world to see. Robert Godwin Sr., a 74-year-old man from Cleveland, Ohio, was murdered in cold blood on Easter Sunday by Steve Stephens, a 37-year-old Facebook user. Stephens reportedly walked up to Godwin and shot him while videotaping the murder. Stephens later posted the video, captioned "Easter day slaughter," to his Facebook page. The video was shared several thousand times.
Godwin's family is still trying to process his death, and they feel the weight of his murder with each new "share" of the video.
After being alerted to the gruesome video, Facebook removed it, but only after it had been on the platform for two hours. Facebook also removed Stephens' personal page.
Facebook received harsh criticism for not removing the violent content more quickly. Facebook's Vice President of Global Operations conceded the company's response was much too slow. According to Facebook, it didn't receive the first report about the video until an hour and 50 minutes after the incident. Less than 20 minutes after the video of the murder was uploaded, someone reported a separate five-minute Facebook Live video of Stephens confessing to the murder.
Facebook said it would be "reviewing [its] reporting flows to be sure people can report videos and other material that violates [its] standards as easily and quickly as possible." Currently, Facebook doesn't actively search for inappropriate content. Instead, it waits for someone to flag content as inappropriate before it acts.
CEO Mark Zuckerberg announced in February that the company was working on artificial intelligence to help detect video content, but that effort was still very early in development.
How Can Facebook Regulate Violent Content?
Facebook has become the social media powerhouse of the 21st century. What was once created just for college students to connect with their friends is now used by the masses, and the company continues to take risks to stay ahead of its competitors (Instagram, Twitter, Snapchat). Facebook Live, its newest feature, allows users to broadcast live video.
As discussed, Facebook does not have the ability or manpower to actively search for inappropriate or violent content. This is not surprising considering that, at last count in March, Facebook had over 2 billion users worldwide. Once it is notified of violent or inappropriate content, it acts quickly to remove the content and deactivate the offending person's personal page.
Can Facebook Be Liable?
In a word, no. Facebook has no ability to control other people's actions or read the minds of its users. It could not have anticipated that Stephens would murder someone on Easter Sunday and post the video on Facebook.
But what if Stephens had posted a Facebook Live video 24 hours before the murder, declaring, "I'm going to murder someone on Easter Sunday"? What then?
The answer is still "no," but liability becomes a little murkier. Let's say someone noticed the video two hours after it was posted and reported it to Facebook. If Facebook did nothing – did not suspend the account, remove the video, or alert authorities to the possibility of a murder – wouldn't Facebook then have a responsibility to act? No; there is no law requiring Facebook to report a potential crime. But it would probably face some public fallout.
Facebook Videos and the Future
Critics of Facebook suggest there should be laws limiting one's ability to post videos. This concern has only grown as people have started to post all sorts of things, including videos recorded moments before committing suicide. Recently, a teenage couple committed suicide days apart. The boyfriend posted his parting thoughts, clearly riddled with pain and anguish, before saying, "I'm trying to get out all the words before I go."
The problem is that technology is ever changing. Companies like Facebook and Instagram are paving the way in social media, but the law has not quite caught up with their advancements.