2021 may still be finding its feet, but the tech companies have carried their exuberance into the new year, not only in the stock market but also in censorship. The January 6 attack on the US Capitol pushed the Silicon Valley overlords to finally take action against big names.
Donald Trump was handed a lifetime ban from Twitter for incitement of violence, followed swiftly by a temporary two-week ban on Facebook and Instagram. Four months later, Facebook’s Oversight Board decided to uphold the ban, while Donald Trump launched his own website. Parler, the social media app with a significant user base of Trump supporters, was banned from Apple’s, Google’s, and Amazon’s platforms for a month. Like him or not, the ex-POTUS had one of the 10 most-followed accounts on Twitter, with a whopping 88 million followers. A few impactful decisions made by a handful of big tech executives cut off one man’s voice to an audience of billions.
Social media platforms have revolutionised our ability to connect and communicate across the globe. Content can now be shared and reach millions of individuals in an instant. News, information, and educational content have never been easier to access. At the same time, cyberbullying, fraud, and scam cases are growing rapidly. Social media creates opportunities to democratise expression and diversify public discourse, but it can also spread disinformation and hate speech. Views on censorship and freedom of speech on social media vary widely. Banning bad content on bigger platforms can be socially riskier over the long term, as that content may simply be shunted elsewhere, to more hidden places.
Some have suggested that engaging with hate speech head-on would be more productive than an outright ban, but challenges appear when trying to achieve this at scale. Social networks have been fairly aggressive in removing hate speech content over the past few years. Below are some key numbers quoted by The Economist:
- Removal of hate speech has risen 10x in the last two years
- 8.4 billion user comments were removed in 2020, up 18x from two years before
- 17 million fake accounts are disabled every day
- Around 45 million videos were removed in 2020
These efforts have been made easier with artificial intelligence, with most offensive comments deleted before users even have a chance to flag them.
Facebook now employs some 15,000 people to moderate content and has even agreed to pay US$52 million to moderators who developed PTSD from looking at the worst of the internet. Although these actions by the platforms help address censorship concerns, deeper issues lie within this nuanced topic. Ultimately, what the Twitter saga showed, along with the Parler episode, is that de-platforming decisions rest on interpretation. The decisions by the social media companies to ban Trump and Parler are based on moral judgment, which sits above the legal standard with which the platforms are required to comply.
Essentially, it becomes a subjective decision based on what the person said, what they intended when they said it, and the outcome of the event. Many believed the ban was the right decision, but some thought a temporary ban would have sufficed, giving time to plan the next course of action. Nevertheless, the focus has shifted from Trump as a polarising individual to an argument about free speech and censorship. The more pertinent issue now lies in the power concentrated in the hands of social media platforms. These platforms operate in the free market when it suits them, but act as quasi-governmental organisations when they feel like it. So what are some of the solutions to this growing issue? What speech should be allowed online, and who should decide?
American law and culture limit the government’s power to regulate speech, on the internet and elsewhere. Congress has offered protections to tech companies by freeing them from most liability for speech that appears on their platforms. The US Supreme Court has held that private companies, in general, are not bound by the First Amendment. Even so, some activists support new government efforts to regulate social media. Although some platforms are large and dominant, their market power can disintegrate, and alternatives exist for speakers excluded from a platform. The history of broadcast regulation shows that government regulation tends to entrench monopolies rather than mitigate them.
Speech on social media directly tied to violence, such as terrorism, may be regulated by government, but more expansive efforts are likely unconstitutional. Preventing harm caused by “fake news” or “hate speech” lies well beyond the jurisdiction of the government, and tech firms appear determined to deal with such harms themselves, leaving little for the government to do. Silicon Valley has toyed with the idea of drawing a distinction between freedom of speech and “freedom of reach”: leaving posts up but reducing their visibility and virality. In 2019, YouTube programmed its algorithm to leave certain posts up but recommend them less. This was an attempt to balance a broad and fair range of opinions while making sure that outright dangerous information does not spread. YouTube may remove video comments that violate its Community Guidelines, but content creators have the option to allow all comments or to hold potentially inappropriate comments for review.
Twitter recently released an improved version of its “prompts” feature, which discourages users from sending potentially harmful or offensive replies and encourages them to think twice before sending any mean tweets. Platforms are also labelling content to show users when it could be misleading. In October 2020, Facebook launched its Oversight Board, a global independent panel of 20 people from a diverse mix of backgrounds, from academia to political and civic leadership, including a former Prime Minister of Denmark. The Board makes content moderation decisions and helps take decision-making out of the hands of a few executives, making those decisions less susceptible to external and political pressure. The Board can only decide whether deleted posts should be reinstated; it cannot rule on posts that have been demoted by the algorithm. QAnon’s removal, Donald Trump’s controversial posts, and the removal of Holocaust denial content were all decisions outside the Board’s scope. The creation of a “social media council” has also been suggested: an independent body not linked to the government that could serve as an unbiased decision maker for social media platforms.
The problem of free speech vs. censorship has no simple solution, and the answers currently lie heavily in the hands of big tech companies. The issue has shown its ability to have significant political and economic impacts around the world, strongly affecting how countries and democracies are run. Censoring, to a certain extent, is the mark of moral failure in a society. It is called for when people endanger the good order of their community through communication. In an ideal world, a society that shares high standards of morality, loyalty, and seemliness would render censoring unnecessary.
This article was originally published on e27.