A critique of the Netflix film “The Social Dilemma”. A film that manipulates you with misinformation – as it tries to warn you of manipulation by misinformation

Early view of the social media landscape by Hieronymus Bosch

 

This is long. And as my regular readers know, I hate this faux social environment that rewards simplicity and shortness, and punishes complexity and depth and nuance. “The Social Dilemma” does exactly that. The danger, of course, is that when we venture into complex self-discovery, we walk a tightrope between deep insights – and profound solipsism. We need to avoid pompous navel-gazing and self-indulgence. This is my attempt to do that (long-winded as it is, I might have failed in this piece) in our age of folly, our age of bewilderment, as the world moves from pandemic to protest and back again. A time of sickness and scoundrels.

 

10 October 2020 (Palaiochóra, Crete) – In June 2009, large protests broke out in Iran in the wake of a disputed election result. The unrest did not differ all that much from comparable episodes that had occurred elsewhere in the world over the preceding decades, but many Western observers became convinced that new digital platforms like Twitter and Facebook were propelling the movement. By the time the Arab Spring kicked off with an anti-government uprising in Tunisia the following year, the belief had become widespread that social media was fomenting insurgencies for liberalization in authoritarian regimes.

The most vigorous dissenter from this cheerful consensus was technology critic Evgeny Morozov, whose 2011 book The Net Delusion inveighed against the cyber-utopianism then common among academics, bloggers, journalists, activists, and policymakers. For Morozov, cyber-utopians were captive to a “naïve belief in the emancipatory nature of online communication that rests on a stubborn refusal to acknowledge its downside.” As Morozov predicted, repressive governments would use these very same tools to crack down on their populations. And the disinformation and manipulation to come – which, as he pointed out, was already developing – would have no bounds. His prescience was remarkable. He said dictators would learn to use this technology for sophisticated surveillance and systems of Internet censorship “to an extent that will shock many”.

One of his most interesting points, noted in his book and later detailed and confirmed by several media analysts who examined Twitter APIs, concerned what was actually happening in Iran. Of the three million collected tweets relating to the Iranian election, posted by around 500,000 users during June and July of 2009, fewer than 1,000 of those Twitter users were actually in Tehran. The bulk of pro-protest tweets came from outside of Iran.

I quoted Morozov in 2014 in my first white paper about the subject … the slow, inevitable murder of data privacy and the exponential growth in information manipulation … and in my research noted the blistering criticism of what had been created, coming from the very social media tech insiders who had built it, but who were being muted by the management insiders in control. That latter group was still reinforcing the conviction that social media was “good for freedom”: that their platforms would encourage peer-to-peer information-sharing, that the state’s monopoly on information would be compromised, and that people would be liberated from propaganda and given access to reliable information. Any disinformation could be “minimized”.

This sort of Panglossian enthusiasm seems unimaginably naïve today, when the public has shifted toward a darker perception of the political impacts of social media. In a New York Times column penned last year, science writer Annalee Newitz offered as forthright an expression of the current dystopian consensus as can be found:

“Social media … has poisoned the way we communicate with each other and undermined the democratic process.”

Newitz did not justify any of her assertions, and she did not need to – they were already widely accepted. Much of the traditional media and political class had attributed the victories of Trump, Brexit, and other campaigns it reviled to Facebook and Twitter, which have also been blamed for resurgent racism, xenophobia, misogyny, terrorism, and societal manipulation.

But the reality of this backlash is more complicated than a superficial reading would suggest. Geoff Shullenberger, in the Los Angeles Review of Books, said we need to look at the big picture:

Techno-optimism continues to resurface in contradictory, whiplash-inducing ways alongside the prevailing bleak view of the Internet’s prospects. The new dystopian prophecies have rejected the old utopianism only on the surface — while bringing along its core assumptions about how digital platforms can shape politics.

We all know what happened. News coverage began to treat the West itself as the battlefield for the new digital politics, not authoritarian governments “abroad”. At home, social media disempowered traditional gatekeepers like newspapers and TV channels, creating an untethered information free-for-all and unleashing a flood of “fake news”, conspiracy theories and disinformation. The line had been that with social media “the masses can now organize their dissent”. Well, yeah. Every tribe, every view, every ilk – everybody could. No restrictions. Hey: online mobs are great – so long as they target the right enemies.

 

Yes, technology has allowed the extraordinary to become quotidian. But getting to “obviously” is so very, very hard when it comes to social shifts and technology. It takes a lot of work. In many ways the world is becoming so dynamic and complex that technological capabilities are overwhelming human capabilities to optimally interact with and leverage those technologies. I think as we are hurled headlong into this frenetic pace we suffer from illusions of understanding, a false sense of comprehension, failing to see the looming chasm between what our brain knows and what our mind is capable of accessing.

What a mess. One day, democracy is irreversibly poisoned by social media, which empowers the radical right, authoritarians, racist, misogynist trolls, and massive psycho-manipulation. The next day, the very same platforms are giving rise to a thrilling resurgence of grassroots activism. And stuck right in there are all those cyber-utopian apologists screaming, “Jesus Christ, what have we done!!”

Which takes me to the Netflix film “The Social Dilemma”. Not a real documentary but a documentary-drama hybrid. The film tries to explore the dangerous impacts of social networking on human beings, dwelling on the realm of what happens behind our screens. My guess is that just about everybody reading this post (it initially went out to my TMT and info governance membership listservs, but I subsequently sent it out to my general list) has the background and understanding of how the web and the internet and social media work, so there will be nothing in this film that is new. Other than what the producers left out. And mangled. And dissembled. But many readers may not, so I went over the top with detail.

Oh, and note the irony. Arguably Netflix is the first big internet company to spend the time, money, and resources to create (by all Silicon Valley accounts) the perfect AI “recommendation algorithm” – using the AI technology at the heart of the film’s argument that these internet companies are evil. And though the film is tagged as a “Netflix original” it did not “fund” the film, though given the extraordinary amount of money it has spent to promote and distribute it I suppose the use of “fund” would not be out of turn.

My two opening thoughts about the film, but not to do with the psycho-ops/AI aspects which dominate its 1+ hour run.

First, the Silicon Valley folks interviewed admit they overrated the value of information in itself and underrated the importance of narratives that bestow meaning on information. They misjudged how the openness of the media system would result in an endless stream of new users, channels, and data and simply overwhelm any shared, stable narratives, bringing about what Michael Sacasas has called “narrative collapse”. Michael writes about technology and society and in a recent essay he pulled together this “narrative collapse” issue which you can read by clicking here. It’s a long read but well worth your time.

Second, as a film and video maker I always view media – film, photography, live performance, text – from the point of composition, the “angle” of the producer. From the very start of The Social Dilemma I knew I was going to be manipulated. I was highly skeptical about the bona fides of those interviewed, deca-millionaires and centi-millionaires filmed in their multi-million-dollar “confessional homes”. As Maria Farrell, an Irish tech writer, argued in March, these “prodigal tech bros” get an extremely easy ride toward redemption, and The Social Dilemma is another example of this phenomenon.

Traitors always exert a strange fascination on the human mind. Benedict Arnold, despite the damnatio memoriae proclaimed in the early years of the republic, is better known today than many other heroes of the American Revolution. The public is likewise fascinated when the former foot soldiers of tech (title-inflated by journalists to ‘executive’, almost as if by law) turn on their former corporate benefactors and repent in Cultural Revolution-style struggle sessions.

The most fascinating bit is how the film uses all the hackneyed tropes of visual media to attack social media: misinformation and emotional manipulation to combat the dangers of misinformation and emotional manipulation. A few production points:

• The movie has a run time of 1 hour and 33 minutes. But lop off 4.5 minutes for the intro and the end credits. It was a cinematic pageant fatiguingly focused on ex-Googler Tristan Harris and his company. Tristan is on for a total of twenty-nine minutes, almost 1/3 of the film running time, and three times as long as any other individual interviewed in the film. This was almost a promotion piece for his role as the anti-prophet of the social media religion.

• As a filmmaker I was intrigued by the visuals: the sundry speakers are couched in faux cinema verité devices: the scene intentionally opening before the talking head is settled into his/her seat, the prefatory ruffle of clothing or insertion of a mike depicted to give the scene a false feeling of impromptu reality, but all of it contrived and studiously included for effect. The shots use short focal-length lenses, blurring the distant-seeming backgrounds, making the talking head’s presence pop on the screen.

• I was generally put off by the whole approach, which blended talking-head interviews with some well-known Silicon Valley apostates. And the fictional: the anthropomorphization of the evil “algorithm” in the form of three white dudes standing on … what? The deck of the Starship Enterprise? Although using Vincent Kartheiser from Mad Men was a nice touch. Plus the “after-school special”-style dramatizations of what happens when Johnny and Janey scroll through their mobile phone feeds all day, and how the family is dysfunctional because of their increased usage of social media. As Antonio Garcia-Martinez (more from him below) said in his review: “The social-media sitcom functions as a Hogarthian Rake’s Progress, judgingly depicting every downward step on the path to social-media perdition, and culminating in instructive ruination”. And that piece where the AI is trying to “increase engagement” of the son in the family through any means necessary — including magically forcing some girl they think he likes to interact with him. I’m sorry, but that’s not how any of this works. Yes, it was all a bit much. (Anybody remember Reefer Madness?) The Netflix film is literally emotionally engaging misinformation designed to impact our beliefs and actions. In other words, the same thing that the film claims social media companies are doing.

• Yes, the film does get right certain causes of the social media malaise (more about that in a moment), but it doesn’t look at effects. It totally misses the real issues. And it was “dumbed down” which, to their credit, the producers admitted in several interviews because that approach was necessary to communicate in a way that “appeals to a broad, uninformed audience”. I get that. But it’s a cop out. If you’re going to do a “documentary” that argues that social platforms are uniquely responsible for the fraying of society, you have to show all your work. The whole story.

 

 

I am old enough to remember that businesses that make money by collecting and selling detailed records of private lives were once plainly described as “surveillance companies.” Their rebranding as “social media” has to be the most successful deception since the Department of War became the Department of Defense.

Ah, those youthful and charismatic (all white) male visionaries with signature casual wardrobes: the open-necked blue shirt, the black polo-neck, the marled gray T-shirt and hoodie. We fell for it. These founders who won immense public trust with their emergent technologies, from home computing to social media to the new frontier … artificial intelligence. Their companies have seemed to grow organically within the flourishing ecology of the open Internet, with their attention-getting platforms.

These still-techies but ex-baddies adorn the stage as whistle-blowers to the otherwise unaware masses. The “documentary” intends to highlight the harms of algorithms and techniques designed to increase our screen-time and keep us glued to our devices. It takes on the question of how these big social networking companies understand, and practice, human manipulation.

The legitimacy of the film is derived from the interviews with woke, or maybe spiritually awakened, techies who previously worked at the very companies that they now accuse. Having left these organizations due to ethical concerns (but having all exercised their stock options, by the way), they try to articulate the problem that has no name, and what is not normal. They mention myriad grave issues: from data theft, fake news, Snapchat dysmorphia and tech addiction to depression, manipulation and suicide; from polarization and political extremism to religious fundamentalism.

And they say they did not really know what this would morph into. Tim Kendall, the former president of Pinterest, who also was the Director of Monetization at Facebook, laments that the technology led to the psycho-manipulation, the dopamine rush that social media triggers in our senses, that lets social media “exploit human vulnerabilities”.

WRONG. They knew from the very beginning. Antonio García Martínez (Facebook’s first product manager) in his book Chaos Monkeys, Ben Mezrich in his book The Accidental Billionaires (the film The Social Network was based on the book) plus many others who wrote about the founding of Facebook noted that psychological manipulation was the goal from the start. Zuckerberg was studying for a degree with a double concentration in computer science and – this is the part people tend to forget – psychology. He was very well aware of how people’s minds work and in particular of the social dynamics of popularity and status. He talked about it and wrote about it.

And Silicon Valley billionaire Peter Thiel certainly “got it”. Thiel’s $500,000 investment in 2004 was crucial to the success of the company. But there was a particular reason Facebook caught Thiel’s eye. As Jonathan Taplin wrote in his book Move Fast and Break Things: How Facebook, Google, and Amazon Cornered Culture and Undermined Democracy, Thiel majored in philosophy and was riveted by the ideas of US-based French philosopher René Girard, who in his most influential book, Things Hidden since the Foundation of the World, wrote about “mimetic desire”. Human beings are born with a need for food and shelter. Once these fundamental necessities of life have been acquired, we look around us at what other people are doing, and wanting, and we copy them. And envy them. And prejudice develops. In Thiel’s summary, the idea is “that imitation is at the root of all behavior and people will pursue it, be distracted by it”.

Yes, I am sure the intent of these male wunderkinds was to show netizens a well-rounded, balanced view of the world, through links, ads and posts in news feeds, social networks and similar sites. But Facebook had other intents. Zuckerberg and Thiel realized the platform could also be used to create addicts, and feed them a diet of slanted information that taps into their prejudices, fuels their fears, and manipulates their knowledge and feelings to political or commercial ends. All of this was made possible by using their personal and private information, feeding on their fear, their prejudice, their distraction.

Hmm. Where have I heard that before? Ah, yes, Aldous Huxley in The Divine Within, published in 1950:

Men have always been a prey to distractions, which are the original sins of the mind; but never before today has an attempt been made to organize and exploit distractions, to make of them, because of their economic importance, the core and vital center of human life, to idealize them as the highest manifestations of mental activity. Ours is an age of systematized irrelevances, and the imbecile within us has become one of the Titans, upon whose shoulders rests the weight of our social and economic system. Recollectedness, or the overcoming of distractions, has never been more necessary than now; it has also, we may guess, never been so difficult.

At least Chamath Palihapitiya (an early Facebook exec and now a “conscientious objector”) was honest in the film. He talked about the fact that it was less widely understood the extent to which tech designers are systematically trained in psychological persuasion by their employers — with the intention of using that knowledge in the design of platforms to make users more suggestible. Plus all the technical tools they have at their disposal.

He spent his time at Facebook constantly experimenting on users, trying out minute tactics that would work below the radar of conscious awareness to keep them hooked and goad them into “engaging” more. Default settings, infinite scrolling, “read receipts,” and alerts that another user is typing, are all examples of such tactics. Palihapitiya argued that social media features, such as “like” counts and bright red notifications, are designed to reward engagement, as they work the code, work the process, focusing on mobile.
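The tactics described above all rest on the same behavioral pattern: hold rewards back and release them unpredictably. A minimal sketch of that intermittent-reward scheduling, purely illustrative – the class name, the 30% release probability and the message text are my own inventions, not any platform's actual code:

```python
import random

class NotificationScheduler:
    """Illustrative only: batches social rewards ("likes") and releases
    them on an unpredictable schedule -- the variable-reward pattern the
    film's interviewees describe. All names and numbers are invented."""

    def __init__(self, seed=None):
        self.pending_likes = 0
        self.rng = random.Random(seed)

    def record_like(self):
        # Don't notify the user immediately; hold the reward back.
        self.pending_likes += 1

    def maybe_notify(self):
        # Release the batched rewards only with some probability, so the
        # user keeps checking the app "just in case" (intermittent
        # reinforcement). Otherwise return nothing.
        if self.pending_likes > 0 and self.rng.random() < 0.3:
            burst, self.pending_likes = self.pending_likes, 0
            return f"You have {burst} new likes!"
        return None
```

The point of the sketch is the asymmetry: the like arrives the moment it happens, but the notification arrives whenever the schedule decides it will do the most to pull you back in.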

A briefing note on the “code and process” at play.  My regular clients will recognise this from my memo on how the tubes, wires, plumbing and connections work in social media.

There was a big shift in monetization in 2012 when internet traffic began to switch away from desktop computers towards mobile devices. If you do most of your online reading on a desktop, you are in a minority. The switch was a potential disaster for all businesses which relied on internet advertising, because people don’t much like mobile ads, and were far less likely to click on them than on desktop ads. In other words, although general internet traffic was increasing rapidly, because the growth was coming from mobile, the traffic was becoming proportionately less valuable. If the trend were to continue, every internet business that depended on people clicking links – i.e. pretty much all of them, but especially the giants like Google and Facebook – would be worth much less money.

Facebook solved the problem by means of a technique called “onboarding”, which is basically the process of getting new users to understand and engage with your app enough to keep using it, and then “weaving in” all your known online and offline activity. For more detail on how onboarding works click here.

As a very simple example, the French electronic store “Fnac” wants to give me a coupon. They can snail mail it to me:

Gregory Bufithis
xxxxxxx
xxxxxxx
Paris, France

If they want to reach me on my mobile device, my name there is:

38400000-8cf0-11bd-b23e-10b96e40000d

That’s my quasi-immutable device ID, broadcast hundreds of times a day on mobile ad exchanges. That ID is easily acquired.

My laptop ID/”name” (also easily acquired) is:

07J6yJPMB9juTowar.AWXGQnGPA1MCmThgb9wN4vLoUpg.BUUtWg.rg.FTN.0.AWUxZtUf

Though it may not be obvious, each of these keys is associated with a wealth of our personal behavior data: every website we’ve been to, many things we’ve bought in physical stores, and every app we’ve used and what we did there. They “weave it” all together. Facebook already had a huge amount of information about people and their social networks and their professed likes and dislikes. It simply “graphed” everything they had for every “location address”.
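The “weaving” described above is, at bottom, a join keyed on these identifiers: any behavior observed under any one key enriches the same person's profile. A toy sketch of that identity-graph join – every identifier, event and field name below is fabricated for illustration; real identity graphs are vastly larger and proprietary:

```python
# Toy identity graph. "Onboarding" links every identifier a person is
# seen under (postal address, mobile ad ID, browser ID) to one profile,
# so events observed under ANY key land in the same person's record.
# All identifiers and events here are fabricated for illustration.

identity_links = {
    "device:ad-id-1": "person:123",   # mobile advertising ID
    "browser:cookie-1": "person:123", # browser/laptop identifier
    "postal:addr-1": "person:123",    # snail-mail address
}

observed_events = [
    ("device:ad-id-1", "opened shopping app"),
    ("browser:cookie-1", "visited electronics review site"),
    ("postal:addr-1", "in-store purchase, loyalty card"),
]

def build_profile(person_id):
    # Gather every event whose identifier resolves to this person.
    return [event for key, event in observed_events
            if identity_links.get(key) == person_id]

profile = build_profile("person:123")
```

Three different “names”, one merged behavioral record: that is what makes the quasi-immutable device ID so much more valuable than any single cookie.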

And when Google or Facebook really wants to track you and collect your data they use very sophisticated, proprietary versions of software like GHunt, which extracts information from any Gmail account to get an owner’s name, profile, visits to YouTube, photos across the Net, etc., etc. GHunt is an OSINT tool – “Open Source Intelligence” – a collection/analysis tool for information that is gathered from public, or open, sources. The Google and Facebook proprietary tools are much, much better, more thorough, more invasive. But there are hundreds of these OSINT tools on the “dark web” and thousands of corporations use them. It’s why I’ve said data privacy has been dead for a long time. All data privacy experts know this, but their mantra must remain: help you comply with data privacy protection laws.

Palihapitiya said “we were taking advantage of the fact that social media is a kind of drug, that we all have a biological imperative to connect with other people”, and social media companies are merely using an element of a mobile application that bridges the gap between enticing individuals to give your app a shot and converting them into invested users.

It really is fascinating the way the process works by de-integrating, slicing, dicing, grouping and dividing people into smaller and smaller groups, referred to as cohorts, silos, and bubbles. We are continually fed precise and unique bits of information that harden our prejudices, leading us into an “us vs. them” frame of mind, causing us to doubt the motives, goodness and patriotism of our fellow citizens. These feeds reinforce our thinking that we are “right most of the time.” It results in us having a very poor understanding of the ideas and concerns of people with whom we may disagree. Losing the feeling that “we’re all in this together” is a very bad thing. Imagine, in contrast, if the COVID-19 threat had been handled with an attitude of “something is attacking us, we need to band together to fight, find the best and most effective strategies and implement them together to ensure the fewest number of people are affected until we find a vaccine or cure,” instead of turning it into a political, “us vs. them” fight.
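That slicing into ever-finer cohorts can be pictured as repeated partitioning on behavioral signals: every added signal splits the audience into smaller, more homogeneous buckets. A crude illustration – the signals, values and users below are invented, and real systems use learned embeddings rather than hand-written keys:

```python
from collections import defaultdict

# Invented users and signals, purely for illustration of the principle.
users = [
    {"id": 1, "politics": "left",  "age": "18-25", "topic": "climate"},
    {"id": 2, "politics": "left",  "age": "18-25", "topic": "tech"},
    {"id": 3, "politics": "right", "age": "18-25", "topic": "climate"},
    {"id": 4, "politics": "right", "age": "40-60", "topic": "climate"},
]

def slice_into_cohorts(users, signals):
    # Each signal added to the key subdivides every existing bucket.
    cohorts = defaultdict(list)
    for u in users:
        cohorts[tuple(u[s] for s in signals)].append(u["id"])
    return dict(cohorts)

cohorts = slice_into_cohorts(users, ["politics", "age", "topic"])
# Four users, four distinct cohorts: with enough signals, nearly
# everyone ends up in a bucket of one, each fed its own feed.
```

The punchline is in the arithmetic: a handful of signals with a handful of values each multiplies out to thousands of buckets, which is how a platform can serve “precise and unique bits of information” to each of them.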

Damn. The producers should have given Palihapitiya much more time. He appears for a total of only four minutes. I dug into his other presentations and they are a wealth of technical “this-is-how-it-all-works” knowledge.

And I heard the interviewees frequently say “we were giving the users a dopamine hit”. Dopamine hit? Hold on. This is nonsense, based on ancient, discredited behaviorist myths. Dopamine doesn’t give anyone a “hit.” And people do not “learn” to become addicted through rewards and reinforcements. William Brewer’s classic review of behaviorist experiments found that the presence or absence of reward stimuli or negative reinforcements made no difference to whether subjects learned or did not learn the behavior that the experimenters were looking for. Yet, unexamined behaviorist ideology has seeped into addiction research, generally fused with the most reductive evolutionary psychology — and this documentary has more dubious evolutionary babble than a pickup artist’s handbook. My view? Silicon Valley executives have adopted this as if to explain why they’re capitalist geniuses for stumbling on a new way to make money. Thus telling us that they really have no idea what they’re doing.

But what of the effects of “addiction”, if that’s what it is? Here, The Social Dilemma turns to social psychologist and centrist scold Jonathan Haidt to offer the usual array of alarming statistics. According to him, depression and anxiety are up 62 percent among older teen girls since 2011, and suicide is up by 75 percent. For preteen girls, the equivalent figures are 189 percent and 151 percent. Those statistics are for the United States, but similar data has come up elsewhere.

They even drag in Tim Kendall (remember him from above, the former president of Pinterest?) who says – emphatically – that “these services are killing people.” A more scrupulous documentary might have examined all these issues more closely. Might there be other causes for the rise in depression, anxiety, self-harm, and suicide among young women? Have the lives of young people recently become worse, for example? If there are other causes, how would it be possible to isolate the role of social media? How could we prove that social media is not simply magnifying and purifying existing social trends?

So all I will say is that the Netflix film does get right the point that social media is trying to exploit human vulnerabilities. The notifications and recommendations are designed to capture your attention and put you in a rabbit hole from which it is difficult to extricate yourself. Algorithms, throwing suggestions based on our data, keep pushing us down those rabbit holes. Technology has undeniably overwhelmed human capacities. The one good thing The Social Dilemma does is to encourage open-ended conversations, and it does construct an “aware sphere” for family discussions.

The other bit the film gets right is an exposition of the flip side of social networking. The dawn of the disinformation age is linked directly with the expansion of social media. The virtual communities encourage digital tribalism, wherein like-minded people can isolate themselves in echo-chambers. Then, the truth becomes subjective and poles apart. Everyone has their own truth based on the kind of content and one-sided facts they consume, which is fed to them by algorithms.

 

But the world is a much more complicated place than these filmmakers want to believe. The claim that algorithmic recommendation engines are at the heart of our troubles leaves out vast swathes of the internet that are arguably just as important as the big social networks, and perhaps in some cases even more so. Propaganda, bullying, and disinformation are actually far bigger and more complicated.

Disinformation: this area has been covered very well, in-depth, by Alex Stamos (former Chief Security Officer at Facebook who resigned following disagreements with Facebook executives about how to address the Russian government’s use of its platform to spread disinformation during the 2016 U.S. presidential election), and by the Reuters Institute for the Study of Journalism which is part of University of Oxford’s research centre on issues affecting news media globally. These points are a mash-up of comments from both and are important because media coverage, Twitter discussions of disinformation and this movie would benefit from treating these two classes differently: (a) Disinformation meant to persuade the few persuadables, and (b) disinformation that acts as a shibboleth to prove tribal loyalty. Points:

  • Amplifying category (b) over and over again, for the benefit of Twitter likes or to dunk on the idiots who believe it, only reinforces that loyalty and gives the purveyors of the disinformation control over the narrative.
  • As study after study by the Reuters Institute has shown, (a) is a much smaller category by volume but can have the power to re-shape narratives over months, such as the hack-and-leak operation in 2016. Repeating this disinformation in a truth sandwich, with the goal of mitigating the impact, can be worthwhile.
  • As Stamos has shown, buried in the piles of cheap (b) disinformation around the security of the U.S. election are some basic, falsified claims that really have a chance of taking hold. That deserves focus over crazy claims that can be disproven by a candidate shaking out his ears for 5 seconds on TV.
  • As the U.S. gets closer to the election, and if the polls don’t tighten up, then there will be a cacophony of bullshit as certain people are confronted with the disconnect between beliefs core to their personal identity and reality. Social media is going to have the impossible task of not amplifying the craziness. It will lose that fight.

One of my biggest frustrations with the movie is the way it portrays disinformation as being all the same and rooted in algorithmic recommendation and ad targeting. While content recommendation can certainly help to spread misinformation to some extent, very little of this happens through paid ad targeting (for example: the U.S. Congressional review found that Russia spent under $200k in the 2016 election, orders of magnitude less than the $22 million spent by Hillary and $48 million spent by Trump).

Worse, the film fails to address that algorithmic content recommendation is not the whole story and not even, in fact, the worst part of it all. The film briefly mentions, for instance, that Facebook-owned WhatsApp has spread misinformation that inspired grotesque lynchings in India. The film doesn’t mention, however, that WhatsApp works almost nothing like Facebook. It’s a highly private, encrypted messaging service with no algorithmic interference, and it’s still fertile ground for false narratives.

Radicalization doesn’t just happen on Facebook and YouTube either. Many of the deadliest far-right killers were incubated on small forums: Christchurch mosque killer Brenton Tarrant on 8chan; Oregon mass shooter Chris Harper-Mercer on 4chan; Tree of Life Synagogue killer Robert Bowers on Gab; and Norwegian terrorist Anders Breivik on white supremacist sites including Stormfront, a 23-year-old hate site credited with inspiring scores of murders.

A note on 4chan: although the 4chan message board would become broadly reviled when it became associated with the alt-right, earlier it was viewed much more ambiguously, particularly when it was best known for spawning the ludic protest movement Anonymous. Gabriella Coleman, the anthropologist whose writings helped set the tone of the early coverage, described Anonymous in 2012 as “one of the most adroit and effective political operations of recent times.” In that same year a New York Times review noted that the passion, scale and international span of Anonymous were all special, and that their cybersabotage smackdowns could be quite cathartic, as could the conduit 4chan. These instances aside, there was never a full embrace of Anonymous by mainstream politicians and pundits. At the time, they seemingly knew better than to let their praise for street protesters, who just used the Internet to organize, bleed over into praise for more anarchic, fully online actors, who saw the Internet as the primary battlefield for waging unconventional warfare.

The sites I mentioned above are not primarily driven by algorithms or profit motives. Instead, they twist and exploit the open internet’s positive ability to connect like-minded people. When harmful content surfaces on them, it raises complex moderation questions for domain hosts and web infrastructure providers – a separate set of powerful companies that have completely different business models from Facebook.

NOTE: COVID misinformation has been virally spreading on WhatsApp since March and as of this past August they were still struggling to contain it.

This isn’t to let social networks off the hook. Nor is it an effort to make the problem feel so complicated that everyone just throws their hands up and walks away from it. But I’m shocked at how appealing so many people find the idea that social networks are uniquely responsible for all of society’s ills. This cartoon super-villain view of the world strikes me as a kind of mirror image of the right-wing conspiracy theories which hold that a cabal of elites is manipulating every world event in secret.

I can’t take seriously a film like The Social Dilemma, which seemingly wants to hold one company accountable for every change society has undergone since it was founded.

 

 

What is it that has been displaced and distorted in The Social Dilemma, to produce this moral-panic cinema? Well, capital, as Richard Seymour said in his review of the movie:

The documentary is very clear-sighted about aspects of the social industry and how it works. It is, as Harris says, “a totally new species of power.” The social industry doesn’t just monitor and try to manipulate us. The more our social lives are spent on these platforms, the more of our social life is programmed. 

Jaron Lanier, the soft-spoken granddaddy of virtual reality, speaks of the way the platforms introduce a “sneaky third person” between every pair of interlocutors, who is paying for the conversation to be manipulated. But one could go much further, and author Cathy O’Neil does when she says that the algorithms which regulate how we interact are just “opinions embedded in code.” Whose opinions? Largely, those of wealthy white men in northern California out to make a huge profit and reputation. That is an intensely important political issue, which the Left has been slow to grapple with. They have been too fawning.

The Social Dilemma is right to highlight the power that is at stake here. And when it draws attention, with palpable horror, to the exponential growth in computer processing power, it clearly apprehends that processing power is political power. However, it’s extraordinary that it doesn’t occur to anyone to think of it as class power. For what is being automated most efficiently in this cybernetic offensive is living labor.

For The Social Dilemma, the real political questions that arise out of such programmed reality have to do with the way in which the social industry platforms promote polarization and undermine consensus reality. Everyone, we are told, is working with a different set of facts. Guillaume Chaslot, a former Google software engineer, explains that the algorithms which he helped design, such as YouTube’s “up next” recommendation system, work best by polarizing people. There is something riveting about “extreme” content, sufficient to keep users hooked. But the film does not let Chaslot expound on this, as he has done in other presentations.

Harris points out that “fake news” reportedly spreads six times faster than the truth because “the truth is boring.” There follows the familiar social media horror stories about conspiracy theories, racist propaganda, Flat Earth ideologies, and rumors benefiting murderous dictators — all thriving on social media. And of course, Russia “destabilizing democracies.” It would have been a good spot to at least mention how AI is helping spread misinformation faster.

There is obviously some truth to all this, but it’s still just begging the question. For what the documentary really should have explained is what makes conspiracy theories and bullshit so addictive. If YouTube and Facebook seem to promote far-right infotainment, that may say more about the societies in which the social industry profits than it does about the algorithms per se. Mark Zuckerberg may be amoral enough to profit from Holocaust denial, but no one is arguing that he is actually trying to promote it.

Perhaps still more insidious is the claim that “polarization,” and disagreement over the facts, is a political problem. There are, visibly, forms of volatile and exhausting cultural polarization that are accelerated on social media, if not exactly caused by it. But people looking for nefarious attempts to manipulate the electorate are sometimes peering down the wrong end of the telescope. You can sow seeds all day and all night but nothing will take root unless the soil is fertile. Magical thinking and an avid appetite for conspiracy theories have not been generated by the Russians or the Chinese. Look into the mirror – the problem is us, the populace. If Putin throws a few seeds in the air and where they land a vast undergrowth of bushes, brambles and nettles immediately appears, well, the only thing that shows is how acutely he understands mass psychology in Western consumer society.

 

Solutions to the social media dilemma? The Social Dilemma doesn’t provide any realistic solutions, but then neither do I (see below). As one of the interview subjects says, “the genie is out of the bottle”. Suggestions put forth by the film include taxing data and legally limiting content recommendation algorithms, but as I’ve indicated above this doesn’t tackle the root of the problem. Telegram, a *totally* encrypted messaging app, was mentioned because technically it cannot read the contents of messages and stores no user data – and then we find out it is frequently used to transmit child pornography. But we don’t really know how frequently, because there is no interpretable data and therefore it is impossible to measure – which takes us full circle, because now we know that collecting data is an important part of being able to create a safer environment online. See the complexity?

Then we have Tim Kendall, the former Facebook employee and former Pinterest CEO (who, by the way, gets the second-most screen time in the film), who says “people need to reduce their screen time and get their kids to restrict their screen time”. This from a guy who is currently the CEO of a company … that tries to help you limit your phone usage. Anyone think he has, perhaps, ulterior motives to play up how “addictive” he made Facebook and Pinterest?

As the film correctly observes, humans evolved to live in small tribes, not global, interconnected communities where communication is instantaneous. There is no way back. Instead we are now forced to deal with the options: create *smart* regulation, and hold companies accountable without heavy-handed fear-mongering. An impossible task. There was lots of babble from several of the interviewees that “regulation has never been tried”.

Well, regulation-wise this same social-media commentariat was “clapping like a pack of penguins” (Antonio quote) when the European Union thunderously passed its General Data Protection Regulation (GDPR), and again when California passed the California Consumer Privacy Act (CCPA). As I have written numerous times, these regulations have done little beyond more deeply entrenching incumbents like Google and Facebook in their advertising monopolies, harming up-and-coming startups with onerous compliance burdens, killing the third-party data ecosystem that served as an alternative to the walled gardens of Facebook, and generally leaving the European tech and media scenes hamstrung, without any net consumer benefit. And Google simply changed its dominant programmatic ad operation to skirt the GDPR. Oh, there will be some fines, but zippo has changed in terms of actual privacy. The constantly shifting rules and application of the GDPR/CCPA have merely tilted the field to the benefit of one or another team of players … and racked up millions of dollars and euros in privacy lawyer and data privacy vendor fees.

As scores of analysts, media pundits (and armchair experts) have noted, part of the unreason around the pro-regulation, pro-antitrust, pro-data protection arguments that frame Facebook as some sort of dastardly conspiracy foisted on society by malevolent overlords is that what really happened is that Facebook iteratively converged on precisely what users wanted via relentless testing and user measurement. Thus, any limitation on that offering – I’m talking to you, TikTok, but really anyone else’s – is about “as effective and temporary as prohibiting some brand of ice cream: you’ll just have another ice-cream stand providing the same flavors in a hot second” (Antonio quote again).

And let’s face it. That pack of interviewees was *sexy*. During the past twenty years massive amounts of highly credible information on the dangers of climate change have been ignored because it was boring. Brilliant scientists delivering critically important messages about the sustainability of human life on this planet were never given airtime because they were not smooth talkers and could not compete with the sensationalism of what currently passes for the nightly news.

So do not worry about the cyber-utopians now going confessional. Pointless though their social-media militating may be, it will provide a cozy media career for those who ride the wave while it lasts. They’ll expiate a few iotas of guilt – and still be invited to the right brunches and playdates for their kids. They are already hitting the speakers’ bureaus with clips from their The Social Dilemma appearances to augment their profiles to convince corporations and NGOs to pay them thousands of dollars (plus business-class airfare) to hear the same spiel in person.

So after the buzz of The Social Dilemma dies down, the rest of us will still be stuck here with our brains wired to each other in fundamentally uncensorable ways. I simply must download that new Luddite app!

 


Old-fashioned business, media, and politics will continue to play a powerful role in shaping what we see online. Misinformation peddlers and propagandists – including world leaders, political parties, military figures, celebrities minted on film or television, etc. – who should be held accountable in their own right cannot be dismissed as inevitable byproducts of “the algorithm.” Similarly, web platforms aren’t just run by code. Senior Facebook leadership has intervened to protect right-wing misinformation from its standard moderation process.

I was bemused when Tristan Harris related his “epiphany” that Gmail’s email interface is addictive. Huh. Why does nobody raise an equally obvious issue: at many companies, answering email around the clock is encouraged or required. Here’s a crazy idea: how about asking companies to untether their employees from a 24-hour office, and tell them to turn off their notifications at a certain hour of the day?

And weirdly enough, in the film Harris lauds ride-hailing apps, perhaps the most ruthless, exploitative business built on opaque algorithms and manipulative interfaces. And he cites that as Silicon Valley’s “potential”.

The Social Dilemma discounts the notion of using social networks creatively or critically, sometimes to the point of condescension. But it’s easy to find people thoughtfully engaging with these apps, particularly amid the COVID-19 pandemic, which has turned screens into some of the only safe public spaces. Activists who are deeply critical of Facebook and Google have still built mutual aid networks and organized protests on those very platforms.

It’s also overly glib to say that all media have caused problems, and therefore social media is no different from TV or radio. The internet scales up, speeds up, and automates older media in a way that poses unique problems. But that comparison makes the old status quo look worse, not better. Talk radio, television, and books have pushed misinformation for years with little more than affectionate eye-rolling from the public. Cable channels dedicated to science, history, or “reality” are rife with pseudoscience, deception, and conspiracy. Modern social networks have simply magnified all the worst parts of existing culture. Fixing that can’t simply mean condemning Facebook. Instead, we have to treat it as one part of a much bigger problem. It’s a shame that The Social Dilemma did not give space to the real critical internet and media scholars like Safiya Noble, Sarah T. Roberts, and Siva Vaidhyanathan (just to name a few) who continue to write in detail and with nuance about how broader structural inequalities are reflected in and often amplified by the practices of big technology companies and what the true long-term effects will be.

In sum, this omission of experts, the lack of nuance and detail, and the simplistic presentation result in The Social Dilemma feeling like a missed opportunity. But I agree with Pranav Malhotra, who wrote the review for Slate:

On the plus side, the film will inform a wide, unknowing audience about issues like surveillance, persuasive design practices, and the spread of misinformation online. Yes, it is simplistic. And it avoids the issue of who gets to convey disinformation, and how it is framed. But perhaps amplifying the voices of those who had a seat at the table at creation will help.

Maybe. We face issues of frightening complexity that require us all to work together, if we want to have any hope of finding a lasting solution. But just skim the comments on any story that reported on or analysed this week’s U.S. House Judiciary Committee’s report on the monopoly power and practices of the world’s largest technology companies (I wrote about it, briefly, here). Calling those who don’t align perfectly with your opinion *idiots* or *assholes* or *unpatriotic low-lifes* is not going to get us there.

And, look, we are only at the beginning. The internet’s “aristocracy” arose with the commercialization of the web, starting about 20 years ago. To the hectic leaders of Silicon Valley, two decades seems like geologic time – but in the evolving social and political cultures of our species, two decades is the blink of an eye. We have not come close to absorbing the implications of these potent, purposefully disruptive technologies. But we now realise we killed ourselves. Our vulnerabilities arose from commerce – specifically, the tech-enabled exploitation of personal data by businesses – that we allowed to flourish. The algorithms of clickbait, echo chambers and adtech have their basis in B.F. Skinner’s rat-in-the-box experiments.

Let’s admit it. We gave up our data, our privacy, our control to Facebook and other sites willingly, and grew fat and happy and lazy. Nobody is going to rescue us from ourselves and our talismanic smartphones: not the European Union and its hamfisted regulations; not the Federal Trade Commission and its army of D.C. lawyers; not 3 billion Facebook users somehow abandoning their smartphones en masse. And we should definitely not expect salvation, or even a path to get there, from the self-anointed figureheads of media-savvy media condemnation.

Which is why that falsely upbeat ending to the film, propagated by those confessional cyber-utopians (still hanging on to their naïve belief in the emancipatory nature of online communication), was totally misplaced. I think a “solution” to social media manipulation will always escape us.

 

 
