COMMENTARY ON LAW AND TECHNOLOGY

What Can We Do About Abusive and Toxic Content on Digital Media Platforms?

Feb-15-2018

The Internet is being used to recruit terrorists, to spread disturbing and abusive content, and to circulate fake news, and many of us want to stop this. Facebook allegedly sold ads to Russian operatives whose goal was to sow discord within the American electorate in advance of the 2016 election. One source of this activity was the Internet Research Agency, which has been described as a Russian troll farm. The anonymity of the Internet, combined with technology that automates social media accounts, allows for the creation of echo chambers of bots. In December 2017, Facebook rolled out a tool to allow users to see if they had “Liked” or “Followed” content created by the Internet Research Agency. Facebook and Google together account for about 60% of online advertising revenue. Hence, Facebook and Google likely profit the most from the marketing-style outreach of Internet troll farms.

Into this picture come companies like Unilever, a large consumer goods multinational, which is concerned that Facebook and Google are not behaving responsibly with their power. Unilever’s brands include household names like Lipton tea, Dove, and Ben and Jerry’s, so its advertising choices matter. And Unilever has threatened to boycott Facebook and Google if they do not get better at screening out extremist and illegal content.

Unilever’s approach relies on the power of private market forces. And this may be the only viable approach to combating toxic online content. The law could create barriers to Internet content based on the identity of the sender, but that would probably violate the First Amendment. As a country, we treat the right to free speech as one of our most sacred values. Several other democracies, including France and Germany, have laws against hate speech. We do not. The Supreme Court has instead said that such speech cannot be restricted unless it is aimed at “inciting imminent lawless action.” Hate speech is tolerated up until the point that it involves a present threat.

Some courts have pointed out that it is patronizing to assume that people should be shielded from others’ speech. We believe that ideas should rise and fall on their merits in the so-called marketplace of ideas. If something is false or lacks logical support, the law trusts the reasonable person to come to the right conclusion about it.

However, the First Amendment is also being used as a shield by people who want to divide us from each other. Some argue in favor of requiring people to reveal their real identities online, but a legal requirement like that would interfere with the right to anonymous speech under the First Amendment.

Private market forces do not compel in the same way that the government does, and unlike the government’s actions, a private company’s actions cannot violate the First Amendment except in very limited circumstances. Hence, Unilever’s threatened boycott is a potentially powerful tool against hate and disinformation. Facebook and Google could develop their own rules and contracts with their users, filter out content on their networks, and develop industry standards and metrics for advertising.

The Russian information operations are one focus of Special Counsel Robert Mueller’s criminal investigation, so additional indictments and insights into these activities will likely continue to emerge. For now, private companies and consumers like you and me bear the responsibility of consuming information critically. Organizations like the International Federation of Library Associations and Institutions (IFLA) are also helping us identify fake online content. In short, we have to hold ourselves responsible for evaluating the information in front of us, and it is up to us to hold companies responsible for profits earned from another’s dishonesty or hate-filled content.

Author – Jay Kesan