Evasive Zuckerberg shows the Orwellian danger of Facebook

Steven Hill
9 min read · Apr 16, 2018
“Yes, Senator, I am watching you”

Is Mark Zuckerberg really in control of Facebook? Or is he a sorcerer’s apprentice who cannot control his own invention?

Mark Zuckerberg’s testimony in the nation’s capital was chilling to watch. On display was Silicon “catch me if you can” Valley vs. an out-of-touch United States Congress. Does Zuckerberg realize how truth-dodging he looked and sounded? Or does the 33-year-old billionaire CEO just not care, as long as Congress leaves his Frankenstein creation alone? And do the members of Congress realize how clueless and unprepared they looked? Do they realize that what is at stake over the “monster-ization” of Facebook is nothing less than the future of the Internet?

On the surface, this was an exercise in damage control, on the part of both Zuckerberg and the Congress. But at a deeper level the public was provided yet another window into the destructive nature of Facebook, of Mark Zuckerberg’s leadership, as well as the disturbing aspects of Silicon Valley and its mindless mantra of “disruption.” By the end of the hearing, I was left with the daunting question of whether Zuckerberg and his computer geniuses really understand their own mutant creation.

Facebook’s artificial intelligence (AI) has been built (or more accurately, cobbled together) over several years by hundreds of different developers and programmers. Professor Zeynep Tufekci of the University of North Carolina describes the Facebook algorithm as “giant matrices, maybe millions of rows and columns, and not even the programmers understand anymore how exactly it is operating.” So many variables go into its complex and proprietary sorting that Facebook cannot say with authority why something will or will not appear in a user’s news feed, or how and why Russian trolls and their bots were suddenly able to manipulate the algorithms to reach millions of Facebook users — half of all US voters — with targeted fake news. This included such whoppers as the claim that the Pope had endorsed Donald Trump for president, which received nearly 2 million Facebook “engagements” (total number of shares, likes and comments) in the three months leading up to the U.S. election.

Nevertheless, a number of experts have been closely observing this company and have figured out a few of its behavioral patterns. Combined with recent revelations from a whistleblower, here’s what we have learned about how Facebook and its algorithms actually work. It is even more alarming than anyone thought.

Facebook’s targeting machinery is being aimed at users to influence not only what they buy, but also the news they see and the elections they vote in. Given Facebook’s widespread use and influence — with 2 billion global users, it is fast replacing television as the most dominant news, entertainment and commercial medium in the world — this manipulation strikes at the very heart of our democratic societies.

First, Facebook’s “engagement algorithms” use technological surveillance of our online behavior to capture our personal data in a way that would have made the Stasi or Nazis drool with envy. The goal is to generate increasingly accurate, automated predictions of what advertisements we are most influenced by.

Recently Wired reported on a leaked Facebook memo which revealed that the company had offered advertisers the opportunity to target 6.4 million younger users, some only 14 years old, during moments of psychological vulnerability. Facebook monitored posts, photos, and interactions in real time to track emotional lows, and the 23-page document actually highlighted Facebook’s ability to micro-target ads down to “moments when young people need a confidence boost.” As Dr Tufekci explains: “Humans are a social species…We are particularly susceptible to glimmers of novelty, messages of affirmation and belonging, and messages of outrage toward perceived enemies. These kinds of messages are to human community what salt, sugar, and fat are to the human appetite.”

Feeding Frenzy

So Facebook offers these to its users in a grand gluttony of feeding, creating what the company’s first president, Sean Parker, recently called “a social-validation feedback loop.” The platform is specifically designed to keep users clicking, tapping, and scrolling down a bottomless feed, and in the process deliver us to various advertisers.

But that’s not all. Based on our individual profiles, the Facebook engagement algorithms are also designed to feed us sensationalist news (both fake and real) selected to provoke powerful emotions of anger and fear. By reacting to, clicking on and sharing these stories, users are herded by the Facebook “persuasion architecture” into hyper-partisan information ghettos of opinion and alternative facts, referred to as “cognitive bubbles”.
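The dynamic described above — a feed that ranks content by predicted engagement rather than by accuracy or civic value — can be sketched in a few lines of Python. Everything here (the scoring weights, the `outrage_signal` field, the sample posts) is invented for illustration only; Facebook’s actual ranking system is proprietary and vastly more complex.

```python
# Toy illustration of an engagement-ranking feed (NOT Facebook's actual code).
# Each post gets a score from predicted reactions; the feed is sorted by score,
# so content that provokes strong emotions tends to float to the top.

from dataclasses import dataclass

@dataclass
class Post:
    headline: str
    predicted_clicks: float   # model's estimate of click probability
    predicted_shares: float   # model's estimate of share probability
    outrage_signal: float     # 0..1, how emotionally charged the content is

def engagement_score(post: Post) -> float:
    # Shares spread content further than clicks, so they are weighted higher;
    # emotionally charged posts get a multiplier because they historically
    # earn more reactions. These weights are illustrative assumptions.
    return (post.predicted_clicks + 3.0 * post.predicted_shares) * (1.0 + post.outrage_signal)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Highest predicted engagement first -- accuracy plays no role at all.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("City council passes budget", 0.10, 0.01, 0.0),
    Post("THEY are coming for your rights", 0.30, 0.20, 0.9),
    Post("Local bake sale this weekend", 0.05, 0.02, 0.1),
])
print([p.headline for p in feed])
```

Note that nothing in this toy ranker checks whether a post is true. An objective such as “maximize predicted engagement” will reward outrage and sensationalism as a side effect, which is precisely the structural problem the critics quoted here are pointing at.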

One outrageous example was the false conspiracy theory blasted around Facebook during the presidential campaign that Hillary Clinton and her former campaign chair ran a child sex ring in the basement of a pizzeria in Washington DC. Never mind that the restaurant, Comet Ping Pong, doesn’t even have a basement: its staff and owner were hit with a fusillade of abuse and death threats on social media. Matters went from alarming to dangerous when a man walked into Comet Ping Pong with an assault rifle and began shooting (fortunately no one was injured). That was just one of dozens of fake news stories, all of them with absurd storylines. Other stories claimed that Hillary Clinton sold weapons to Isis, and that an FBI agent linked to Clinton’s email leaks had been mysteriously found dead.

A BuzzFeed News analysis found that 17 of the 20 top-performing false election stories were overtly pro-Donald Trump or anti-Hillary Clinton. In the last three months of the presidential campaign, top fake election news stories generated nearly nine million Facebook engagements, which was 20 percent greater than the number received by election stories from 19 major news outlets combined. Fake has become real, and real is now subjugated; this is the world of Facebook.

Choosing The Target

But the impacts go even deeper than fake news. Whistleblower Christopher Wylie revealed to the New York Times and UK’s Observer how his former company, Cambridge Analytica, used personal information swiped from 87 million Facebook users to build a system that created psychological-political profiles of most US voters via online questionnaires. Cambridge Analytica’s board included Trump’s key adviser Steve Bannon, of the alt-right media outlet Breitbart. For the 2016 US presidential election, the Guardian reports that Cambridge Analytica deployed a set of techniques adopted from the US Department of Defense and UK Ministry of Defence, in particular their “psychological operations,” or psyops.

Those methods were focused on changing people’s minds, not through persuasion but through “informational dominance” that relies on disinformation, fake news, rumor and “psychographic messaging.” As the New York Times pointed out, a voter found to be neurotic might be shown a gun-rights commercial featuring burglars breaking into a home, rather than a dry legal defense of the Second Amendment; voters troubled by anxiety would be targeted with ads warning of the dangers posed by the Islamic State, but such ads would be presumed ineffective with those identified as ‘optimistic.’

Cambridge Analytica’s precursor, called SCL Elections, claims that it has used a similar suite of tools in more than 200 elections around the world, including in Italy, Ukraine, Romania, South Africa, Nigeria, Kenya, India, Indonesia, Thailand and many other democracies. When the Facebook data breach first occurred back in 2014, it included a third of active North American users, and nearly a quarter of potential US voters. Wylie told the newspapers, “We exploited Facebook…and built models to exploit what we knew about [their users] and target their inner demons.” Paul-Olivier Dehaye, a data expert and academic based in Switzerland, who published some of the first research into Cambridge Analytica’s processes, says it’s become increasingly apparent that Facebook is “abusive by design.”

Even if the Cambridge Analytica techniques were not wholly influential in swaying voters, as some pundits have claimed, that misses an important point. Left to their own devices, these companies are writing the rules of our collective digital future. Cambridge Analytica, Facebook, Google, Amazon — they are all algorithmically experimenting on us, on “we the public,” as if we were their laboratory guinea pigs. And we are just at the beginning stage of what these technologies eventually will be capable of. The digital stimulation of powerful emotive responses to orchestrate a kind of partisan groupthink is chillingly reminiscent of the Two Minutes Hate in George Orwell’s novel 1984. Dr Tufekci says that “the core business model underlying the Big Tech platforms — harvesting attention with a massive surveillance infrastructure to allow for targeted and mostly automated advertising at a very large scale — is far too compatible with authoritarianism, propaganda, misinformation, and polarization.”

Beyond Social Networks

With its 2 billion users worldwide, Facebook has grown from a pet project started in Zuckerberg’s Harvard dorm to become far more than a social networking platform. It has morphed into a huge news, entertainment and advertising platform that is viewed by more people than any US or European television network, any newspaper or magazine and any online news outlet. It also reaches hundreds of millions of users in the developing world, where the company has tailored its app for low-bandwidth connections and less expensive Android phones. Many grey- and black-market entrepreneurs in the developing world use a Facebook app as their eBay-like commercial gateway for buying and selling.

Google has also mastered these kinds of engagement architectures, and Amazon is well on its way. In so doing, these companies have become three of the most valuable companies in the world. Together, Google and Facebook now account for a whopping 73% of all global online advertising revenue (84% outside China) and 25% of all ad sales, online or off. Amazon is slowly playing catch-up in the advertising game. But this growing duopoly is crowding out other media outlets, forcing even Twitter and Snapchat to fight for the advertising scraps needed to survive.

So the power of algorithms is being put to work for very questionable ends, and has a propensity to result in unforeseen consequences. Viktor Mayer-Schönberger, Professor at Oxford University and co-author of Reinventing Capitalism in the Age of Big Data, says, “The algorithms and datasets behind them will become black boxes that offer us no accountability, traceability, or confidence.” In the past, he says, most computer code could be opened and inspected, making it effectively transparent. But with AI and its enormous datasets enhanced by “machine learning,” the human ability to monitor these technological puzzles is declining.

That leaves us with some disturbing questions, yet ones that hopefully have the potential to point us in the right direction. First, where is the deployment of any kind of “precautionary principle,” like the one used in Europe, which amounts to a Hippocratic oath for technology: “First, do no harm”? These monopolistic platform companies seem to exist everywhere and nowhere, and their products and services have enjoyed widespread access to global markets and consumers. Most people have shown great faith in the presumed benefits of an open Internet, but aren’t the pitfalls of this over-optimism becoming ever more apparent? Now that Facebook, Google, Amazon and Twitter have grown into large global monopolies, major platforms for the entire planet, aren’t they becoming less benign and less of a win-win?

Is Mark Zuckerberg really in control of Facebook? Or is he a sorcerer’s apprentice who cannot control his own invention? The digital leadership of Silicon Valley platform companies has shown itself to be sinisterly irresponsible, teetering on dangerous. At this point, a kind of renationalization of the Internet seems natural and almost inevitable. Nations and regional blocs like the EU and China are starting to re-configure the Net in ways that work for their populations, their values and their future needs.

French president Emmanuel Macron has outlined a forward-looking strategy that seeks to inject European values into the race for AI development. Germany has already passed a “Facebook law,” seeking to hold Zuckerberg accountable for illegal content. Combined with the efforts of EU competition commissioner Margrethe Vestager to enforce a rules-based order (hence her slapping Google with a $2.7 billion fine for manipulating its search results), and the EU’s forthcoming General Data Protection Regulation, the outlines of a European alternative to Silicon Valley are taking shape, however vague. But many parts of the blueprint remain to be filled in.

Should individuals become “data shareholders” who get paid for permitting Facebook and Google to mine our personal data, or should our data be re-conceptualized as “social data” that is protected as part of the commons? Do we need to establish a collaborative CERN-type organization for the development of AI, to ensure the availability of open-source datasets used in the public interest?

One idea that has been proposed is that of turning these kinds of services into public utilities. Another is that of breaking them up as overly big monopolies. Yet another option is that of requiring digital licenses that map out the rules and regulations of operation for Internet-based platforms, much in the way that traditional brick-and-mortar companies must be granted business permits and licenses. Just as a country has a recognized right to protect its physical borders, is it necessary to develop the technological tools and legal framework to protect one’s digital borders?

The evolution of the Digital Age is proceeding rapidly, and just as in bygone eras when oil, phone and Microsoft monopolies eventually needed to be yoked, it is time to figure out the right digital harness for these platform companies. The alternative is to leave the standards and norms that will rule the future to be defined by these Frankenstein companies and their double-tongued CEOs like Mark Zuckerberg.



Steven Hill

fmr. Center for Humane Tech, New America, FairVote; author of “Raw Deal & the Uber Economy,” “Europe’s Promise,” “10 Steps to Repair US Democracy” · Steven-Hill.com · @StevenHill1776