Hold social media responsible for murder

Payton Gendron was 18 in May when, authorities said, he drove 230 miles across New York state to a Tops Markets grocery in Buffalo and shot 13 people. He wrote in a manifesto that he picked the supermarket because its customers were mostly African American; all 10 people who died were Black. He wounded one white man and apologized to him; the Black people he wounded he finished off, according to prosecutors. Gendron said he was determined to halt what he saw as the destruction of the white race and to inspire others to follow his lead, so he live-streamed his attack via Twitch, a platform owned by the tech giant Amazon. He also said he learned everything that led to the May 14 massacre online.

“There was little to no influence on my personal beliefs by people I met in person,” he said.

On Nov. 13, 2015, Nohemi Gonzalez, a 23-year-old first-generation college student from Long Beach, was shot dead at La Belle Equipe cafe in Paris by gunmen loyal to the Islamic State terrorist group known as ISIS. She was one of 129 people killed that night at a football stadium, restaurants and a concert hall. Her family says in a lawsuit that the killers learned to hate online, thanks to Google, which “repeatedly recommended ISIS videos to users likely to be susceptible to the calls for terrorism which those videos conveyed, and did so at a time when ISIS recruits … were perpetrating or attempting mass killings in the United States and Europe.”

The Buffalo and Paris killings have spawned legal challenges to the rules governing online communication and given voice to a widening realization that the internet has become a civic and moral blight, enabling grotesque dogma to be popularized and innocent people to be hurt. With tech billionaire Elon Musk’s purchase of messaging behemoth Twitter, public disgust may well deepen. Musk has denounced “content moderation” — blocking problematic material — as an affront to free speech. On Nov. 19, he reinstated ex-President Donald Trump’s Twitter account nearly two years after Trump had been thrown off the platform for summoning the Jan. 6, 2021, assault on the U.S. Capitol.

Now, after the Buffalo bloodbath, New York State Attorney General Letitia James, in a withering 47-page analysis, urged the state to criminalize live-streaming homicides and to expose anyone who distributes images created by the killers to civil liability claims. Most important, James implored Congress to gut the federal law shielding internet companies from suits over harm caused by postings they carry, and to require the likes of Google and YouTube to take reasonable steps to block violent content that might inspire copycats.

That is the same federal immunity in the bull’s-eye of the suit against Google brought by Gonzalez’s parents, which goes before the U.S. Supreme Court this term.

Shielding internet services from liability for their postings became law in the Communications Decency Act, enacted in 1996 when the internet was young. Back then, the platforms could plausibly argue they were passive message boards where users posted what they wanted others to see, acting on their own and offering operators little opportunity to intervene. The services themselves had, or sought, no more control over what was posted than phone companies had over what callers said to each other. Hence the immunity written into the act as the infamous Section 230.

Nowadays, the argument goes, the entire business of internet services has undergone a radical transformation. No longer docile whiteboards, social media are mega-businesses built on aggressively monitoring and manipulating user behavior — dangling incentives and promoting content with pitch-perfect lures, all to maximize the time users spend online and goose the ad revenue their engagement brings in.

The Gonzalez suit that the Supreme Court will hear pays particular attention to “recommendation algorithms” — software that gathers behavioral data and applies inferences about taste and susceptibility to customize the options users choose from. It is this kind of software that identified the young men whose cultural and political sympathies primed them for indoctrination that turned them into ISIS killers. The algorithm became, essentially, a recruitment tool.

That’s a scary picture, and whatever your passion for expressive freedom, it’s impossible to frame a justification for standing by as the world’s most powerful and advanced communications media are used to make killers.

And, let’s be clear, to make money. The picture goes from scary to obscene when the matter of revenue is included. With paid advertising braided into the social media experience, links to content associated with footage of the Buffalo shooting exposed users to ads and generated income. The New York attorney general’s report couldn’t say how much. Among the ads, the New York Times reported, were promos related to a horror movie, clothing stores and video streaming services. A copy of the suspect’s 22-minute video — which was halted two minutes into the actual shooting — appeared on an obscure site that better-known sites linked to and was viewed more than 3 million times. A Facebook link to that site stayed up more than 10 hours, the Washington Post found, and drew some 500 comments and 46,000 shares.

There’s nothing simple about the choices that the outrage over internet-enabled slaughter is forcing upon us. Stripping online services of their immunity from damage claims will strike fear into the wallets of even the most opulent of them, and they will undoubtedly crack down on postings they probably should leave up. I myself have argued that the limp policy response to school shootings may be a by-product of the news media’s refusal to let the public actually see pictures of the carnage — images that the New York attorney general’s proposal could now prohibit.

But a bedrock of socially responsible media operation has to be accountability. We rely on our legal system to enforce standards of fairness and to allow people who have been hurt to be made whole, within the limits of the possible. Once there might have been an argument for internet immunity. No longer. Now it’s clear that, like other powerful institutions, social media cannot be above the law.

Edward Wasserman is a professor and former dean at the Graduate School of Journalism at UC Berkeley.

Editor’s Note: This piece has been updated to reflect the reinstatement of Donald Trump on Twitter.
