
US Supreme Court shields Twitter in high-profile case over liability for terrorist content

The Supreme Court handed a major victory to Silicon Valley on Thursday by protecting online platforms from two lawsuits that legal experts had warned could have upended the internet.

The two decisions preserve social media companies’ ability to avoid lawsuits stemming from terrorism-related content, and deal a defeat to critics who say the tech industry’s platforms have escaped accountability.

In doing so, the court sided with the technology industry and digital rights groups, which had argued that exposing tech platforms to greater liability could break the basic functions of many websites and could even create legal risks for individual internet users.

In one of the two cases, Twitter v. Taamneh, the Supreme Court ruled that Twitter will not have to face claims that it aided and abetted terrorism by hosting tweets created by the terrorist group ISIS.

The court also threw out the closely watched González v. Google case over content moderation on social media, declining to narrow a key federal liability shield for websites known as Section 230 of the Communications Decency Act. Thursday’s decision leaves intact a lower court ruling that protected social media platforms from a wide range of content moderation lawsuits.

The decision in the Twitter case was unanimous and was written by Justice Clarence Thomas, who stated that social media platforms are little different from other digital technologies.

“It may be that criminal groups like ISIS are able to use platforms like the defendants’ for illegal, and sometimes egregious, purposes,” Thomas wrote. “But the same could be said for cell phones, email, or the Internet in general.”

Thomas’ opinion reflected the court’s struggle to identify, in the arguments presented, what kind of speech should trigger social media liability and what kind deserved protection.

“I think the court recognized the importance of these platforms for the communication of billions of people and refrained from interfering with it,” said Samir Jain, vice president for policy at the Center for Democracy and Technology, a group that filed briefs in support of the technology industry.

For months, many legal experts saw the Twitter and Google cases as a sign that the court could be seeking sweeping changes to Section 230, a law that has faced bipartisan criticism over technology companies’ content moderation decisions. Thomas, in particular, had expressed interest in hearing a Section 230 case.

Expectations of a highly disruptive outcome in both cases prompted what Kate Klonick, a St. John’s University law professor, described as an “insane avalanche” of friend-of-the-court briefs.

However, as oral arguments unfolded and judges grappled with the intricacies of Internet speech, the likelihood of massive changes to the law seemed to diminish.

“I think little by little the possibility opened up that… maybe the court has no idea what the hell these cases are about, and MAYBE picked them out of activism, but they’re not prepared to be THAT activist,” Klonick tweeted.

Daphne Keller, director of Stanford University’s Platform Regulation Program, agreed.

“I think this vindicates all of us who were saying, ‘The Supreme Court took the wrong cases, they didn’t raise the issues they really wanted,’” Keller said.

The justices may soon have another opportunity to weigh in on social media. The court is still deciding whether to hear a series of cases over the constitutionality of laws passed by Texas and Florida that restrict online platforms’ ability to moderate content. But the way the court handled the Twitter and Google cases suggests it may approach the new cases with caution.

“The mere fact that the judges are proceeding with caution is a good sign, and suggests a more nuanced understanding of these issues than many feared,” said Evelyn Douek, an adjunct professor at Stanford Law School.

In Thursday’s decision in the Twitter case, the court held that hosting general speech about terrorism does not create legal liability for specific terrorist attacks, raising the bar for future lawsuits of this type.

“We conclude,” Thomas wrote, “that plaintiffs’ arguments are insufficient to establish that these defendants aided and abetted ISIS in carrying out the relevant attack.”

The court also stressed that the plaintiffs “have not alleged that the defendants intentionally provided any substantial assistance” to the attack in question, nor that they “widely and systemically” assisted ISIS in a manner that would make them responsible for “every ISIS attack.”

Twitter v. Taamneh focused on whether social media companies can be sued under US anti-terrorism law for hosting terrorism-related content that is only remotely related to a specific terrorist attack.

The plaintiffs in the case, the family of Nawras Alassaf, who was killed in an ISIS attack in Istanbul in 2017, claimed that social media companies, including Twitter, had knowingly aided ISIS in violation of federal anti-terrorism law by allowing some of the group’s content to persist on their platforms despite policies designed to limit that type of content.

“The countless companies, academics, content creators, and civil society organizations that have joined us in this case will be reassured by this outcome,” Halimah DeLaine Prado, Google’s general counsel, said in a statement. “We will continue to work to safeguard free expression online, combat harmful content, and support companies and creators who benefit from the internet.”

Twitter did not respond to a request for comment.

Justices dismiss the case against Google, leaving Section 230 intact

The court dismissed the case against Google with only a brief opinion. It left intact a lower court ruling that Google is immune from a lawsuit accusing its subsidiary YouTube of complicity in terrorism.

The result will likely come as a relief not only to Google, but also to the many websites and social networking companies that have urged the Supreme Court not to cut legal protection for the Internet.

The opinion was unsigned, and the court said: “We decline to address the application of Section 230 to a complaint that appears to state little, if any, plausible claim for relief. Instead, we vacate the judgment and remand the case for the Ninth Circuit to consider the complaint in light of our decision in Twitter.”

There were no dissents.

The Google case focused on whether the company could be sued over its subsidiary YouTube’s algorithmic promotion of terrorist videos on its platform.

The family of Nohemi González, who was killed in an ISIS attack in Paris in 2015, argued that YouTube’s targeted recommendations violated US anti-terrorism law by helping to radicalize viewers and promote the ISIS worldview.

The suit sought to exclude content recommendations from Section 230 protection, potentially exposing technology platforms to greater liability for how they operate their services.

Google and other technology companies argued that such an interpretation of Section 230 would increase the legal risks of ranking, sorting, and curating online content, a basic feature of the modern internet. Google argued that in that scenario, websites would try to play it safe by either removing far more content than necessary or giving up on content moderation altogether, allowing even more harmful material onto their platforms.

Briefs filed by Craigslist, Microsoft, Yelp, and others argued that the stakes aren’t limited to algorithms and could end up affecting just about anything on the web that might be construed as a recommendation. That could mean even ordinary internet users who volunteer as moderators on various sites could face legal risks, according to a brief filed by Reddit and several volunteer Reddit moderators.

Oregon Democratic Sen. Ron Wyden and former California Republican Rep. Chris Cox, the original co-authors of Section 230, argued in a brief that Congress’s intent in passing the law was to give websites broad discretion to moderate content as they saw fit.

The Biden administration also weighed in on the case. In a brief filed in December, it argued that Section 230 protects Google and YouTube from lawsuits “for failing to remove third-party content, including the content it has recommended.” But, the government’s brief argued, those protections do not extend to Google’s algorithms because they represent the company’s own speech, not that of others.
