Section 230, the provision in 1996’s Communications Decency Act that offers immunity to tech platforms for the third-party content they host, has dominated arguments at the Supreme Court this week. And while a ruling is not expected until summer, at the earliest, there are some potential consequences that marketers should be aware of.
The Supreme Court justices appeared concerned about the sweeping consequences of limiting social media platforms’ immunity from litigation over what their users post.
The oral arguments were presented in Gonzalez v. Google, a case brought after a 23-year-old American student, Nohemi Gonzalez, was killed in a 2015 ISIS attack in Paris. Gonzalez’s family sued YouTube’s parent company in 2016, alleging the video platform was responsible because its algorithms pushed targeted Islamic State video content to viewers.
Complicating the proceedings is that Section 230 was enacted nearly 30 years ago. Since then, new technologies such as artificial intelligence have changed how online content is created and disseminated, bringing into question the law’s efficacy in the current internet landscape.
“[Section 230] was a pre-algorithm statute,” Justice Elena Kagan said. “And everybody is trying their best to figure out how this statute applies, [how] the statute—which was a pre-algorithm statute—applies in a post-algorithm world.”
The court is searching for a way to hold platforms accountable for harmful content recommendations while safeguarding innocuous posts. Still, any decision that increases the burden on platforms to moderate content could pass that cost on to advertisers, UM Worldwide global chief media officer Joshua Lowcock told Adweek.
“This is a necessity that is clearly needed in an industry where [platforms] are cool with monetizing but won’t take on the responsibility of broadcasting [harmful content],” said Mandar Shinde, CEO of identity alternative Blotout.
In a separate case, Twitter v. Taamneh, the Supreme Court will decide whether social media companies can be held liable for aiding and abetting international terrorism for hosting users’ harmful content.
Taking responsibility vs. relinquishing algorithms
If the court breaks precedent and holds YouTube responsible for content delivered through its recommendations, social media platforms would likely be left at a crossroads.
These companies could assume liability for their algorithms, which could open them up to a flood of lawsuits, a concern several justices raised during Tuesday’s hearing.
Or, platforms could entirely abandon algorithms—their core mechanism for keeping users engaged and driving ad revenue. As a result, advertisers would find less value for their ad dollars on social media.
“It would be like advertising on billboards or buses,” said Sarah Sobieraj, professor of sociology at Tufts University and a faculty associate at the Berkman Klein Center for Internet & Society at Harvard University. Ads may get a lot of eyes on them, but advertisers “will only have like the crudest sense” of who’s seeing them.
Platforms could also see an exodus of users who find the experience less appealing, further shrinking the inflow of ad dollars.
Greater transparency into campaign performance
Three industry sources said the least damaging outcome of the hearings would be for social media companies to provide more transparency into algorithmic recommendations and take greater accountability for content, both moderated and recommended.
Platforms like Twitter and Instagram could also give users the ability to opt out of algorithmic recommendations, according to Ana Milicevic, co-founder of programmatic consultancy Sparrow Advisors.
Regardless, any changes to algorithms would have a direct impact on how ads show up on social media platforms. Platforms will also want to offset the cost of hiring content moderators, likely driving up ad prices.
“Marketers can expect changes across performance, price and even ad content adjacency,” said Lowcock.
Regardless of whether a platform takes responsibility for the content it hosts, advertisers still run the reputational risk of placing ads adjacent to harmful content. Marketers may buy on a platform such as YouTube, which may be considered brand-safe overall, but running ads on specific creators’ channels may not fit a campaign strategy or protect brand reputation.
“Marketers will still need to be vigilant over where their ads ultimately run,” said Milicevic.