Should President Biden ask Congress to revoke Section 230?
The January 6 attack on the US Capitol reveals the danger posed by digital media platforms like Facebook, Twitter and YouTube. Here’s what must be done to reform Silicon Valley’s corruption of the media infrastructure.
The beautiful dream of an open and free internet, serving as a global agora of unlimited free speech to provide more democratic participation, has crashed and burned one more time. The January 6 mob attack on the US Capitol was incited and planned over Facebook, Twitter, YouTube and other digital media platforms, with a tragic nudge from the president of the United States. The gripping images of a ransacking mob, five deaths and Congress members cowering on the floor of the House of Representatives are a warning to us all.
How did we arrive here?
Since the birth of the Big Tech media platforms 15 years ago — let’s drop the friendly-sounding misnomer of “social” media — democracies around the world have been subjected to a grand experiment: can a nation’s news and information infrastructure, the lifeblood of any democracy, depend on digital technologies that create a global free-speech zone of unlimited audience size, combined with algorithmic (non-human) curation that spreads massive volumes of mis- and disinformation with unprecedented ease?
The evidence has become frighteningly clear that this experiment has veered off course, like a Frankenstein monster marauding across the landscape. Facebook is no longer simply a “social networking” website — it is the largest media giant in the history of the world, a combination publisher and broadcaster with approximately 2.6 billion regular users, plus billions more on the Facebook-owned WhatsApp and Instagram. A mere 100 pieces of COVID-19 misinformation on Facebook were shared 1.7 million times and viewed 117 million times — a daily reach greater than that of the New York Times, Washington Post, Wall Street Journal, Fox News, ABC and CNN combined.
The FacebookGoogleTwitter media giants have frequently been misused by malicious political operatives, who have run disinformation campaigns to undermine elections in over 70 countries — even helping elect a quasi-dictator in the Philippines — and have exploited the platforms to widely livestream child abuse, pornography and the Christchurch mass murder of Muslims in New Zealand. How can we unite to take action on climate change when a majority of YouTube climate change videos denies the science, and 70% of what YouTube’s two billion users watch comes from its sensation-saturated recommendation algorithms?
Traditional media are subject to certain laws and regulations, including a degree of liability over what they push into the world. While there is much to criticize about mainstream media and corporate broadcasters, at least they use humans to curate the news, and pick and choose what’s in and out of the newstream. That results in a degree of accountability, including potentially libel lawsuits and other forms of Madisonian-like checks and balances.
But with Big Tech media, it’s more like the Wild West, with no sheriff. FacebookGoogleTwitter use algorithmic curators running on automatic pilot, much like killer drones for which no human bears responsibility or liability. That’s dangerous in a democracy. Their pretensions about an “open and free internet” aside, these companies’ primary business strategy — recommending and amplifying sensationalized crazytown content to increase users’ screen time and expose them to more profitable ads — has divided, distracted and outraged people to the point where society is now plagued by a fractured basis for shared truth, sensemaking and political consensus.
And yet they still refuse to de-weaponize their platforms. Ejecting Donald Trump from their services did nothing to change their destructive business model; it merely hid the most visible evidence of it. It was a self-serving act that should fool no one.
We have learned the hard way that non-human curation, when combined with unlimited audience size and frictionless amplification, has completely failed as a foundation for a democracy’s media infrastructure. It’s time to hit reset in a major way, not only to save our democracy, but also to provide the best chance to redesign these digital media technologies so that we retain the promise and decrease the dangers.
To Section 230 or not to Section 230, that is the question
So what to do? Many Facebook critics have called for President Joe Biden to make good on one of his campaign promises by asking Congress to revoke Section 230 of the Communications Decency Act. That 1996 law grants Big Tech media blanket immunity for the worst of the mass content it publishes from users, including illegal content like online harassment, incitement to violence and child pornography. While revoking Section 230 is not a perfect solution, it would make the companies a bit more responsible, deliberative and potentially liable for the worst of the toxic content algorithmically promoted by their platforms, just as traditional media already are.
But let’s be clear: revoking or even tweaking Section 230 would not really have that much of an impact because much content and speech — even a lot of reckless speech — is already protected by the First Amendment.
For example, Donald Trump’s posts on Twitter and Facebook claiming the presidential election was stolen, and his inflammatory speech that YouTube broadcast the morning of the Capitol attack to millions, were false and provocative — but it would be difficult to legally prove that any individuals or institutions were harmed or incited directly by the president’s many outrageous statements. After all, any number of traditional media outlets also have published untrue nonsense without the protections of Section 230, yet they were never held liable.
So revoking Section 230 will likely not be as impactful as its proponents wish, or as its critics fear. If we assign more importance to Section 230 than it merits, it might drain away energy for more impactful reforms.
A better business model — investor-owned utilities
So the real question is: besides dealing with Section 230, what else needs to be done?
To answer this, we have to recognize that these businesses are creating the new public infrastructure of the digital age. That includes search engines (Google and Yahoo), portals for news and connecting (Facebook, Twitter, Instagram, WhatsApp), films, music and live-streaming (YouTube, Netflix, Spotify, Zoom), navigation (Google, Apple), commercial marketplaces (Amazon, Amazon, Amazon) and labor-market platforms (Uber, Upwork, Amazon Mechanical Turk, Clickworker). These companies like to tell us that they are providing all of this for free, that all we have to do is give them access to our private data. But that has turned out to be a very high price indeed, as the Capitol riots showed.
So the federal government should advance the regulatory incentives for a whole new business model: treating many of these companies more like investor-owned utilities. Historically, that has been the approach used by the government in other industries, such as telephone, railroad and power generation. Ironically, even Facebook’s Mark Zuckerberg has suggested such an approach.
As utilities, they would be guided by a digital operating license — just like traditional brick-and-mortar companies must apply for various licenses and permits — that defines the rules and regulations of the new business model.
To begin with, this digital license should require platforms to obtain users’ permission before collecting anyone’s personal data — i.e., opt-in rather than opt-out. These companies never asked for permission to start sucking up our private data or to track our physical locations, or to mass collect every “like,” “share” and “follow” into psychographic profiles that are used by advertisers and political operatives to target users. The platforms started these “data grabs” secretly.
Today, these giant platforms know what you like, think and watch, where you go, and which churches, restaurants and clubs you frequent — they know you better than your spouse or therapist. Should society continue to allow this noxious “surveillance capitalism”? It seems clear that the dangers of this spying outweigh any alleged benefits, such as hyper-targeted advertising that supposedly caters to our individual desires.
Why can’t users have a button on their smartphones that turns data and location tracking on and off at a touch? When finding restaurants near your location, you would turn tracking on, and once that mission is accomplished, turn it off. All of it controlled by the user, not the platform. This is not science fiction: Apple will soon introduce a feature on its iPhones that provides a limited version of this. Technologically, it is doable. Why not use regulation and mandated product design to put an end to surveillance capitalism?
Reining in mega-size
The utility business model also should encourage competition by limiting the mega-scale audience size of these digital media machines; nearly 250 million Americans, about 80 percent of the population, have a profile on one of these platforms. Do most users really need the capacity to reach an audience of thousands or even millions? That’s bigger than kings, prime ministers and presidents were able to reach through most of human history.
A number of organizations have called for an anti-monopoly break up of these companies, like AT&T once was split into the Baby Bells. That intervention has merits, but let’s be clear: if Facebook is forced to spin off WhatsApp and its two billion users, and nothing else about the business model changes, that will just result in another Big Tech media behemoth. More competition is good, but less so if they are competing according to the same market rules that the companies themselves have decided.
So another way to reduce the magnitude of user pools would be through incentives to scrap the targeted-advertising revenue model and switch to users paying a monthly subscription fee, like Netflix and cable TV do. That also would likely result in a decline in users.
Or, the digital media operating license could require that the platforms significantly limit the audience size for any piece of user-generated content to no more than 1,000 people. That’s still far more people than most users actually know or have regular contact with, so it’s hardly a deprivation. And then we could put Facebook’s 10,000 human moderators to work amplifying selected pieces of public-interest information, including information from various leaders, artists and thinkers, rather than playing a losing game of whack-a-mole trying to thwart the flood of crazytown disinformation.
This approach would drastically cut down on the virality of disinformation by introducing necessary friction into the flow of information. It also recognizes that Facebook, Twitter and YouTube are no longer exclusively a “public square” or a global free-speech agora. They are also publishers who, following the Capitol ransacking, decided to stop “publishing” the President of the United States. As publishers, they have more in common with the New York Times, Fox News and the Wall Street Journal than most people have been willing to admit. And that editorial control has grown as the disinformation gushing from these platforms — over the Covid pandemic, racial tensions and charges of a stolen presidential election — has increased. It turns out that human editors and curators, despite their obvious flaws, have some advantages over algorithmic curation.
So this approach would enhance the publisher role of digital media, while also allowing Facebook, Twitter and others to remain a “public square”/”common carrier pipeline” for smaller assemblies of networked friends, family and associates. But those “user public squares” would have limits built into them regarding audience size. Which, come to think of it, is how Facebook used to work in the early years, when it was still a cool invention.
Product liability for the machine
In reality, what Facebook, Twitter and YouTube have built is a kind of machine, in which we can dial certain features up or down to enhance democracy, free speech, networking, news and information-sharing, while minimizing the toxic and dangerous impacts. Viewed as a machine, another relevant framework is a product liability model. Imagine the danger if a manufacturer of a pandemic vaccine or a medical device could start injecting people, or open up patients’ chests to insert its latest artificial organ, without having its products tested and certified before widespread use. Nuclear power plants, high-speed rail and many other systemically important infrastructure services follow such a “precautionary principle” protocol.
In configuring these digital media machines, the operating license also should include restraints on the platforms’ rampant use of specific “engagement” techniques that both research and live experience have shown to be contributing to social isolation, teen depression and suicide, as well as damaging our democracies. These techniques include hyper-targeting of content and advertisements, automated recommendations, addictive behavioral nudges (like pop-up screens, autoplay and infinite scroll), encrypted private internet groups and other “dark pattern” techniques that facilitate disinformation and manipulation.
The US also should update existing laws to ensure they are applied to the online world. Google’s YouTube/YouTubeKids has been violating the Children’s Television Act — which restricts violence and advertising on TV — for many years, resulting in online lawlessness that the Federal Communications Commission should halt. Similarly, the Federal Elections Commission should rein in the quasi-lawless world of online political ads and donor reporting, which has far fewer rules and less transparency than ads in TV and radio broadcasting.
Big Tech media’s frequent outrages against our humanity are supposedly the price we must pay for being able to post our summer vacation and new puppy pics to our “friends,” or for political dissidents and whistleblowers to alert the world to their just causes. Those are all important uses, but the price being paid is very high. We can do better.
The challenge is to establish sensible guardrails for this 21st century digital infrastructure, so that we can harness the positives and greatly mitigate the dangers. America has done this in the past with new technologies and infrastructure, so we should proceed with confidence that we can get this right.