The death of data privacy, and the birth of surveillance capitalism: how we got here [PART 1]

 

This is a very preliminary draft of the introductory chapter (presented in two parts, today and tomorrow) for a book I am writing that addresses the futility of data privacy, the divinity of data, the flimsy fabric of computation, technology’s unavoidable endgame – and our inability to cope with it.

Our institutions of governance, law and commerce mostly originated during the 18th-century Enlightenment. As we enter the third decade of the 21st century, linear thinking and logic (Cartesian and otherwise) are no longer suited to this connected and complex world. Yes, we have gone through many evolutions and revolutions, each different. We have been able to discern some patterns from the past – but they will not serve us today. Our shift to an electric-digital-networked age is unique. Distributed information and communication technologies have converged, creating a societal infrastructure like none we have experienced before. We are unprepared.

And in this digital age, we struggle (delusionally) to defend liberal values we developed in an analogue era. In these two posts I want to address the death of privacy and the infantile disorder of imagining there is data privacy – or any privacy – anymore. 

Also, as a research/background note, this marks the near-completion of my “consumption” regimen: 130+ books read over the past three years (most of them published, plus a number of works-in-progress from authors who were kind enough to share their drafts), plus (at last count) 722 published and unpublished magazine articles and white papers. Not to mention numerous conversations and interviews at the 45+ multi-faceted conferences and trade shows I have attended over these last three years, covering artificial intelligence, computer science, cyber security, journalism/media, legal technology, and mobile technology.

I will, in due course, publish a full, annotated bibliography. But in Part 2 I will include an annotated list of the 15 principal text sources that inform this two-part introduction, along with a list of conferences and events you should attend to fully understand “The Big Picture”: how this distributed information and communication technology infrastructure works, and what it has done to our societies.

 

3 February 2020 (Paris, France) – In the opening chapter of her book “The Age of Surveillance Capitalism”, Shoshana Zuboff relates the following story:

“The debate on privacy and law at the Federal Trade Commission was unusually heated that day. Tech industry executives argued that they were capable of regulating themselves and that government intervention would be costly and counterproductive. Civil libertarians warned that the companies’ data capabilities posed ‘an unprecedented threat to individual freedom.’ One of them observed, ‘We have to decide what human beings are in the electronic age. Are we just going to be chattel for commerce?’ A commissioner asked, ‘Where should we draw the line?’

“The tech executives argued ‘Digital was fast, and stragglers would be left behind.’ It’s not surprising that the regulators (and almost everybody else, it seemed) rushed to follow the bustling White Rabbit down his tunnel into a promised digital Wonderland where, like Alice, we fell prey to delusion. In Wonderland, we celebrated the new digital services as ‘free’, but did not see that the surveillance capitalists behind those services regard us as the free commodity.”

The line the commissioner referenced was never drawn, and the tech executives got their way.

The year was 1996. Congress would cement this “delusion” in a statute, Section 230 of the 1996 Communications Decency Act, absolving those companies of the obligations that adhere to “publishers” or even to “speakers.” As Zuboff notes, like our forebears who named the automobile the “horseless carriage” because they could not reckon with its true dimensions, we listened to these companies when they said “Look, it’s simple. Just regard these internet platforms as bulletin boards where anyone can pin a note.” We bought it. The tech companies developed the vernacular … and we bought it wholesale.

Starting places

 

Twenty-four years later the evidence is in. The fruit of that victory was a new economic logic that Zuboff called “surveillance capitalism” (she coined the term in 2014). She says:

“Its success depends upon one-way-mirror operations engineered for our ignorance and wrapped in a fog of misdirection, euphemism and mendacity. It rooted and flourished in the new spaces of the internet, once celebrated by Big Tech as “the world’s largest ungoverned space.” But power fills a void, and those once wild spaces are no longer ungoverned. Instead, they are owned and operated by private surveillance capital and governed by its iron laws.”

The rise of surveillance capitalism over the last two decades went largely unchallenged. As I noted above, we were told “digital was fast, and stragglers would be left behind”. So off we rushed down that rabbit hole. Digital services free? We thought that we search Google, but now we understand that Google searches us. We assumed that we use social media to connect, but we learned that connection is how social media uses us. These issues have been explored in depth by Zuboff, Jamie Bartlett, Julie Cohen, and many others – all of whom I will introduce you to in this series.

And there is a major point I wish to make. As all of these systems become more sophisticated and more ubiquitous – general data collection, embedded mobile device sensors, cameras, facial-recognition technology, etc. – we are experiencing what data analysts call a “phase transition” from collection-and-storage surveillance to mass automated real-time monitoring. More important still – and I will get into the details as this series develops – this is not just a change in scale: these tools also challenge how we understand the world and the people in it.

Plus, this degree of surveillance will not emerge simply because the technology is improving. As Jake Goldenfein (a law and technology scholar at Cornell University who covers emerging structures of governance in computational society) notes in a forthcoming book:

“Instead, the technologies that track and evaluate people gain legitimacy because of the influence of the political and commercial entities interested in making them work. Those actors have the power to make society believe in the value of the technology and the understandings of the world it generates, while at the same time building and selling the systems and apparatuses for that tracking, measurement, and computational analysis.”

But the big knife we have wielded to kill our own data privacy? Convenience. We think we want our data to be private, or that we can choose our degree of privacy. Zuboff and Jamie Bartlett both use the phrase “a reasonable quid pro quo”: we make a calculation that we will trade a bit of personal information for some valued service. For example, last year I wrote about Delta Air Lines’ biometric boarding pilot program (also noted by Zuboff and Matt Synder and a host of others). It is a brilliant example. Delta reported that of the nearly 25,000 customers who traveled through the terminal each week, 98 percent opted into the process. The process was explained to them:

“The facial recognition option will save an average of two seconds for each customer at boarding, or nine minutes when boarding a wide body aircraft. We will keep your photo and data for a period of time”.

That 98% consented. “Saving time” was more important than Delta keeping your photo in a database. When I spoke to a representative of U.S. Customs and Border Protection (CBP), she said CBP had been working with multiple airlines to implement biometric face scanners in domestic airports. She said: “It will streamline security. The goal is to replace the manual checking of passports nationwide. There are no privacy issues.”

Note: this is separate from the eye and fingerprint scanning done by CLEAR, a secure-identity company available at more than 60 airports, stadiums and other venues around the country. I signed up for CLEAR and used it all the time when I traveled in the U.S., both at airports and event venues.

Here’s how the process of facial scanning at the airports works: cameras take your photo, and the CBP’s Traveler Verification Service matches it against a photo the Department of Homeland Security already has of you – an image from sources like your passport or other travel documents.
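To make that matching step concrete, here is a minimal sketch of how gallery matching of this kind generally works. It is purely illustrative – the embedding model, threshold and identifiers are my own assumptions, not CBP’s actual Traveler Verification Service:

```python
from typing import Dict, Optional

import numpy as np

# Purely illustrative sketch of gallery matching; NOT CBP's actual system.
# Assumes a face-embedding model has already turned each photo into a
# fixed-length vector (the usual approach in face-recognition pipelines).

SIMILARITY_THRESHOLD = 0.6  # hypothetical cut-off for declaring a match


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def match_traveler(live_embedding: np.ndarray,
                   gallery: Dict[str, np.ndarray]) -> Optional[str]:
    """Compare a live camera capture against a pre-built flight gallery.

    `gallery` maps a passenger identifier (from the flight manifest) to the
    embedding of a photo the government already holds (passport, visa, etc.).
    Returns the best-matching passenger ID, or None if nothing clears the
    threshold.
    """
    best_id, best_score = None, SIMILARITY_THRESHOLD
    for passenger_id, stored_embedding in gallery.items():
        score = cosine_similarity(live_embedding, stored_embedding)
        if score > best_score:
            best_id, best_score = passenger_id, score
    return best_id
```

The design point to notice is that the match is made against a gallery assembled before you ever reach the gate – which is exactly what the next few points describe.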

And those photo databases? A few points:

1. In order to verify travelers’ identities quickly, photo galleries are pre-built from flight manifests. All airlines are building them, by the way; you consented. It is in your ticket purchase documents.

2. So once a face is scanned it can be checked against the stored photo of a passenger. According to the CBP representative I spoke with, CBP stores the photos of scanned U.S. citizens for no more than 12 hours post-verification, after which they are deleted.

3. Can you opt out? Yes. But even if you opt out of the facial recognition at the airport, your photo is still part of the gallery the airline has created prior to the flight.

4. So the bottom line is this: by consenting to the facial recognition you allow the government to create a digital identity for you, one it could use to track you without your further consent or knowledge. While it may not be using that power right now, there is no regulation preventing it from doing so. I have combed the CBP regulations and find no “guardrails” of any kind. The American Civil Liberties Union opposes the CBP’s facial recognition program for that reason. In a press release it noted: “Our concern is that your face will be used to track and monitor you everywhere you go. There are simply no guidelines as to what they can and cannot do with that data.”

So, does our love of convenience simply cancel out privacy? I had similar conversations in Lille, France last week at a major cybersecurity event: is cybersecurity incompatible with digital convenience? The cybersecurity issues will be addressed later in this series.

One issue I do have in all of this relates to the statement “we’ll learn from the past”. I read all the time that to understand our shift to a networked age we can always discern some patterns from the era that followed the advent of print.

No. Our challenges are unique. There are major differences between our networked age and the era that followed the advent of European printing. James Dewar wrote a long analysis for a RAND Corporation report on why these two ages differ, but I think Niall Ferguson sums it up quite nicely in these three points from his book The Square and the Tower:

1. First, and most obviously, our networking revolution is much faster and more geographically extensive than the wave of revolutions unleashed by the German printing press.

2. Secondly, the distributional consequences of our revolution are quite different from those of the early-modern revolution. Some people foresaw the giant networks that the Internet would make possible, despite their propaganda about the democratization of knowledge. But few foresaw that it could be so profoundly inegalitarian.

3. Thirdly, the printing press had the effect of disrupting religious life in Western Christendom before it disrupted anything else. By contrast, the Internet began by disrupting commerce; only very recently did it begin to disrupt politics and it has really only disrupted one religion, namely Islam.

Danny Hillis also provides some perspective. In a long-form essay on Medium, The Enlightenment is Dead, Long Live the Entanglement, he notes that in the Age of Enlightenment we learned that nature followed laws. By understanding these laws, we could predict and manipulate. We invented science. We learned to break the code of nature and, thus empowered, began to shape the world in pursuit of our own happiness. With our newfound knowledge of natural laws we orchestrated fantastic chains of cause and effect in our political, legal, and economic systems as well as in our mechanisms. He says:

Unlike the Enlightenment, where progress was analytic and came from taking things apart, progress in the Age of Entanglement is synthetic and comes from putting things together. Instead of classifying organisms, we construct them. Instead of discovering new worlds, we create them. And our process of creation is very different. Think of the canonical image of collaboration during the Enlightenment: fifty-five white men in powdered wigs sitting in a Philadelphia room, writing the rules of the American Constitution. Contrast that with an image of the global collaboration that constructed the Wikipedia, an interconnected document that is too large and too rapidly changing for any single contributor to even read.

As we become more entangled with our technologies, we also become more entangled with each other. Power (economic, physical, political, and social) has shifted from comprehensible hierarchies to far less intelligible networks. And we cannot fathom their extent. And virtually nothing is private.

It is a power shift, really. A power shift from the ownership of the means of production, which defined the politics of the 20th century, to the ownership of the production of meaning. As Zuboff notes:

Our digital century was to have been democracy’s Golden Age. It did not work out that way. It recalls a pre-Gutenberg era of extreme asymmetries of knowledge and the power that accrues to such knowledge, as the tech giants seize control of information and learning itself. They manipulate the economy, our society and even our lives with impunity, endangering not just individual privacy but democracy itself. Distracted by our delusions, we failed to notice: they effected a bloodless coup from above.

It is a clash we will see time and time again: those old, “analogue” rights … like privacy … clashing with a society driven into the social relationships of Zuboff’s digital one-way mirror: “The lesson is that privacy is public”.

Data? What “data”? Whose “data”?

Before I move on, a few words about “data”. At a recent technology conference I had a conversation with an attendee that mirrored one Jamie Bartlett relates in his book The People vs Tech:

Attendee: I do not get the big deal. What consumer among us doesn’t appreciate convenience? Google knows I am going on vacation next month. Google starts suggesting related news stories and hotels for us. Where’s the harm? Where is the violation? It is not as if my private actions were somehow made public for the world to see.

My response:

Me: OK, so if they sell these data to corporations that target you with higher prices for airline tickets, car rentals, and hotels in the places they now know you are planning to go … because they merge it with other demographic data they have on you … you still see no harm? And news alert: many of your private actions are actually pretty public for the world to see. [I then gave him a link to a tech paper that shows how web sites glean sensitive information in order to cut through a person’s virtual identity to his or her real identity. More on that later in this series.]

It also reminded me of the willy-nilly way we treat the word “data”. There is an increasing trend: people talking (and screaming) about policy and rules for “personal data”, shouting “Data! AI! IoT! 5G!” ad nauseam. I get tired of hearing:

– “You own your data”

– “Your data should be portable”

– “There should be better EU rules to control your data”

My problem is this: what is the “personal data” you want to control?

– Your power utility knows your power use. How are you going to control that?

– Your transport authority probably knows your journey patterns if you have a subscription card. How are you going to control that?

– Google knows your search history and much of your web use. There are more than 25,000 data brokers in a $100 billion+ industry that aggregates user data and sells it. GDPR and CCPA notwithstanding, how are you going to control that?

There are countless data sets – discrete sets of information. Let’s not use “data” as a universal thing. One of the great hand-wringing claims is that “Google has all the data” or “China has all the data”. There is no such thing as “all the data”, and “data” is not interchangeable. GE has telemetry “data” for aircraft engines; you cannot use it to build a better Chinese-to-German translation engine. You cannot use Uber’s booking data to make a better ad engine.

So spell out “personal data”. Is it your electricity loading levels? Is it the mileage on your car? Your LinkedIn network? Your medical records? Your OpenTable e-account?

More foundationally, what does Facebook or Google “owe” you, exactly, as far as your “data privacy” is concerned? It’s not as if Mark Zuckerberg snapped a photo of you and then monetized your image. You willfully used a service and generated data that wouldn’t otherwise exist. What you get in return is Facebook itself, for which you’ve not paid a nickel. Ditto Uber, which uses your data to optimize a tricky two-way market in riders and drivers so you have a car nearby when you open the app. At your convenience. Google likewise uses your searches (and resulting clicks) as a training set for its search algorithm. For your convenience. You’re an infinitesimally small part of a data cooperative whose benefits accrue to the very users that generated it.

 The regulatory state in the Information Age

A few words about “data” and the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), subjects of a longer chapter to come. “Data privacy” experts and consultants need to stop presenting the GDPR and the CCPA as a sales opportunity.

 

Granted: I know it is merely commerce for them. “Data privacy” experts and consultants have one goal: to sell, be it their services or their products. Their mantra is that you must comply with these acts and they will help you. What is happening out there in the real world is not material. Don’t want to get a big fine, now, do you?

The CCPA went into effect 1 January 2020, and nobody was ready – least of all the state of California. Draft regulations for enforcing the law are still being finalized at the state level; they are now due 1 July 2020, though state officials say they may not arrive until the fall of 2020.

Many called the CCPA an example of the “Brussels effect”, well known in old-economy industries like chemicals and cars: regulation spreads through market forces. Companies adopt the EU’s rules as the price of participating in the huge EU market, and then impose them across their global businesses to minimise the cost of running separate compliance regimes. The rules are sometimes codified by foreign governments or through international organisations, but not necessarily.

But it is not working out that way. If you thought the GDPR was a bumpy ride – there are an estimated 489 GDPR cases in progress, but courts are stymied by the obtuse regulations – the CCPA is even more complex. One U.S. law firm heavily invested in advising clients on the new law says (diplomatically): “Oh, it is still a work in progress. Give it a few years.” Yes, Facebook and Google are facing billion-dollar lawsuits over alleged violations of the GDPR, but as Max Schrems himself confided to me last year, “it will be years and years before those suits are closed”.

Meanwhile, on a bigger front, Europe’s antitrust campaign against Google faces its first big public test in the EU courts next week. Google is appealing the 2017 decision by antitrust chief Margrethe Vestager to fine it €2.42 billion for abusing its dominance as a search engine to favor its own comparison shopping service over rivals.

If the case goes Vestager’s way, it will strengthen her hand to take a tougher approach not only toward Google’s other specialized search services, including flights and restaurants, but also toward similar ventures by other tech giants, such as Facebook’s Marketplace or Apple Music. Conversely, the EU has a big problem if the judges in Luxembourg, who serve as the only check on the unrivaled powers of the EU’s antitrust czar, decide that she has been too bold. A victory for Google would be a major setback for Vestager’s Brussels reign, potentially driving her to make more use of her new powers to initiate legislation rather than focus on antitrust cases. My team will be on-site in Luxembourg for the decision, so I will reserve further comment until then.

The crux of the CCPA (and I will expand on these points in that subsequent post) is this:

1. If your company buys or sells data on at least 50,000 California residents each year, you have to disclose to those residents what you are doing with the data, and they can request that you not sell it.

2. Consumers can also request that companies bound by the CCPA delete all their personal data. And as The Wall Street Journal reported in a long series on the CCPA, websites with third-party tracking are supposed to add a “Do Not Sell My Personal Information” button that, if clicked, prohibits the site from sending data about the customer to any third parties, including advertisers.

3. Facebook is trying to set the tone. It already has tools to allow users to access and delete their information, wherever they live. The service has a CCPA page where California residents can request information about any of its products – WhatsApp, Instagram, Portal, Messenger Kids, and Facebook itself. As a result, Facebook sees itself as largely already complying with the CCPA. And as I noted in an earlier post, Facebook has a 3,000-person team focused just on the GDPR and the CCPA.

 4. Biggest administrative issues? It is not entirely clear what California is using as its definition of a “sale” of consumer information. And how is a company going to ensure it is deleting the right customer’s data without collecting more information to verify them?

And regulators are finally realising they do not grasp the converged distributed information and communication technologies at play, and so they are victims, as we all are, of “infoglut” – masses of continuously increasing information, so poorly catalogued or organized (or not organized at all) that it is almost impossible to navigate through it, search it, or draw any conclusion or meaning from it. This is a big piece of my book and it takes up two chapters. So herein just a few points – these are “chops” from a 27-page draft, so bear with me:

1. Regimes of economic regulation include anti-distortion rules – rules intended to ensure that flows of information about the goods, services, and capabilities on offer are accurate and unbiased. Some anti-distortion rules are information-forcing; rules in that category include those requiring disclosure of material information to consumers or investors. Other anti-distortion rules are information-blocking; examples include prohibitions on discrimination, false advertising, and insider trading.

2. The difficulty currently confronting regulators is that contemporary conditions of infoglut and pervasive intermediation disrupt traditional anti-distortion strategies. To achieve meaningful anti-distortion regulation under those conditions, a different regulatory toolkit is needed.

3. Regulators are realizing that the problem with infoglut is that it makes information-forcing rules easy to manipulate and information-blocking rules easy to evade.

4. The massively intermediated, platform-based environment promises solutions, offering network users tools and strategies for cutting through the clutter. At the same time, however, it enables information power to find new points of entry outside the reach of traditional anti-distortion regimes. Regulators are undercut on an almost daily basis.

5. What happens is that infoglut and pervasive intermediation impair the ability to conduct effective consumer protection regulation – especially in the realm of data privacy. The traditional regulatory focus on the “content of disclosure” is far too limited. The way that choices are presented also matters. Techniques for nudging consumer behavior become even more powerful in platform-based, massively intermediated environments, which incorporate “choice architectures” favoring the decisions that platform or application designers want their users to make.

All of this explains why EU regulators are so frustrated – “Why aren’t our massive monetary fines changing Big Tech behavior?!” Well, because they are applying analogue, 20th-century thinking to a 21st-century world.

It is why many EU regulators think the GDPR will be a failure and needs a “rewrite” (good luck with that). The GDPR does not touch the most valuable thing for Big Tech, something they lobbied mightily to preserve: the way “data” (information) is collected, aggregated and analysed to obtain knowledge that bears value. It is a value chain similar to raw material being processed to become a product.

But it is more nuanced than that. There is the behavioral/event data you generate (on Facebook, on Google, on Instagram, on the open web), there is identity data (personal information such as name, address, phone, etc.), and there is anonymous data (device IDs, cookies, etc.). What matters to people is that they control permissions regarding their identity. The GDPR (somewhat) addresses that, but not the “data totality”. Without identity data there is no bridge between the anonymous and the individual. Big Tech needs that bridge, so it has figured out how to work around the privacy regulations (GDPR, CCPA, etc.). These regulations rightly focus on the rights of the individual, but they have actually allowed more latitude for companies to re-identify and aggregate data about large groups of people.
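To show what that “bridge” looks like in practice, here is a minimal sketch. The datasets, field names and join key are entirely hypothetical – the point is only that one piece of identity data is enough to convert an “anonymous” behavioural trail into a named profile:

```python
# Illustrative sketch of the bridge between anonymous data and identity data.
# All records and field names here are made up.

anonymous_events = [
    {"device_id": "d-8841", "event": "searched flights PAR->NYC"},
    {"device_id": "d-8841", "event": "read article on divorce law"},
    {"device_id": "d-1203", "event": "priced diabetes test strips"},
]

# Identity data captured elsewhere, e.g. a single login or purchase that
# tied a device ID to a real person.
identity_table = {
    "d-8841": {"name": "J. Smith", "email": "jsmith@example.com"},
    "d-1203": {"name": "A. Jones", "email": "ajones@example.com"},
}


def re_identify(events, identities):
    """Attach a real identity to each 'anonymous' behavioural event."""
    profiles = {}
    for event in events:
        person = identities.get(event["device_id"])
        if person is None:
            continue  # no bridge available for this device
        profiles.setdefault(person["name"], []).append(event["event"])
    return profiles


print(re_identify(anonymous_events, identity_table))
# {'J. Smith': ['searched flights PAR->NYC', 'read article on divorce law'],
#  'A. Jones': ['priced diabetes test strips']}
```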

 The elephant(s) in the room

 

And what is not said by all of these “data privacy” experts and consultants? That privacy-invasive technology, and the complexity of these platforms, are spreading faster than lawmakers can keep up. This section is a “lightning round” due to space – just highlights of some of the more recent developments, each of which (and more) I will develop further in this series. So consider these indicative but by no means exhaustive:

1. Google has developed an ad system that can literally target one person, which could be the future of doxxing. It is now easy for anyone to disaggregate your data and use it against you. U.S. immigration officials are using it to track Mexicans planning to cross the border into the U.S. This is potentially far more invasive than facial recognition – all in the hands of just one company.

2. The FCC has now confirmed (well, it uses the word “apparently”) that U.S. telecom companies broke the law by outright selling customer location data. I wrote about this before. The controversy originated when it was learned that T-Mobile, Sprint, and AT&T had sold the real-time location of their wireless subscribers. Some of that information trickled down to bounty hunters and complete strangers for a worryingly small amount of money – $100 in some cases. FCC chairman Ajit Pai is between a rock and a hard place, and analysts question just how severely the FCC will penalize the mobile providers involved. Why? Because since becoming chairman of the FCC he has favored the telcos at every turn. So will there be something substantial as far as a penalty, or merely a wrist slap that leaves no lasting reminder for the companies that gave away some of the most sensitive data your phone can produce?

3. The Guardian and the New York Times have helpfully been providing “Privacy Policy” analyses of what data you share when you hit a web site. They dove into the privacy policy of one site, checked the “Third Party” sharing clause, and used tracking software to count … 577 companies with which the web site shared visitor data (and that web site had the lowest number of third-party sharing vendors in their investigation).
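I do not know exactly what tooling the reporters used, but the counting itself is straightforward. Here is a minimal sketch, assuming you export the network requests a page makes as a HAR file from your browser’s developer tools; the file name and first-party domain are hypothetical:

```python
import json
from urllib.parse import urlparse

# Minimal sketch: count the distinct third-party domains a page contacts.
# Assumes "requests.har" was exported from the browser's developer tools.

FIRST_PARTY = "example-news-site.com"  # hypothetical site under inspection


def registrable_domain(url: str) -> str:
    """Crude last-two-labels heuristic; a real tool would use the Public Suffix List."""
    host = urlparse(url).hostname or ""
    return ".".join(host.split(".")[-2:])


with open("requests.har", encoding="utf-8") as f:
    har = json.load(f)

third_parties = {
    registrable_domain(entry["request"]["url"])
    for entry in har["log"]["entries"]
    if FIRST_PARTY not in (urlparse(entry["request"]["url"]).hostname or "")
}

print(f"{len(third_parties)} distinct third-party domains contacted")
```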

And each of those 577 companies (helpfully) had a Privacy Policy of its own. Which meant you then had to see what each of them did with your data. So you would need to read an additional 577 privacy policies.

The lawyers jumped in and said it would take approximately 120 billable hours of reading to go through each of those privacy policies in detail and find out what you have agreed to. Rough calculation? 27,000 pages of reading.

Fun part: it would take about two months of your life to opt out of all 577 of these respective tracking cookies – especially the ones that say they provide “Precise Geographic Location Data”.

So consider these two points, because these privacy policies all seem to mirror each other:

a. The next time you cheerfully click the “OK” button on “I accept all of your cookie policies” you may be explicitly granting permission to a company that you had previously opted out from … and thereby restarting the collection of your information. One click undoes whatever privacy you think you had gained for yourself.

b. It also means that the GDPR is ineffective in the face of adtech, which is the worst sort of hydra. The GDPR and adtech make up three chapters in this series. They are the chapters most in demand and are in the hands of my publisher for a special magazine article, so bear with me. And remember: in the data brokerage world, location data is the more valuable commodity – it trumps facial recognition data every time.

4. Last week, the London Metropolitan Police announced that it will start using live facial recognition technology to identify criminal suspects in real time, in one of the largest experiments of its type outside of China.

5. Avast, Jumpshot, OneClick, etc. There are scores of programs that make millions of dollars following their users around the internet on behalf of clients like Microsoft, Google, McKinsey, Pepsi, Home Depot, and Yelp. They can track every search, every click, every purchase, every site. And they continue to grow.

6. Last week the Norwegian Consumer Council published a white paper demonstrating how the online advertising industry is behind comprehensive, illegal collection and indiscriminate use of personal data. It is a rather interesting report because it details how, as we use apps, hundreds of shadowy entities receive personal data about our interests, habits, and behaviour – and how these practices are out of control. It also shows that the extent of the tracking makes it impossible for us to make informed choices about how our personal data is collected, shared and used, and how it flies right around GDPR controls.

 

Tomorrow ….

 

In his prizewinning history of regulation, the historian Thomas McCraw delivered a warning: “Across the centuries regulators failed when they did not frame strategies appropriate to the particular industries they were regulating.” This was echoed last year by Tim Wu in his book The Curse of Bigness. And I am reminded of a quote from Thomas Jefferson – he was talking about the U.S. Constitution and why it must never be static, why it must change with the times:

Laws and institutions must go hand in hand with the progress of the human mind. As that becomes more developed, more enlightened, as new discoveries are made, new truths discovered and manners and opinions change, with the change of circumstances, institutions must advance also to keep pace with the times. We might as well require a man to wear still the coat which fitted him when a boy as civilized society to remain ever under the regimen of their barbarous ancestors.

Existing privacy and antitrust laws are vital, but neither will be wholly adequate to the new challenge of reversing “epistemic inequality”, a favorite phrase of many of the writers who form the bulk of my research. As Zuboff notes in her book, “if government really wants to act, it needs to interrupt the data supply chains. It means really fighting the forces of datafication.” She does not have much hope.

As Zuboff, Barry Steinhardt, Michael Levi, David Wall and many others have noted, American lawmakers have been reluctant to take on these challenges for many reasons. They point to the unwritten policy of “surveillance exceptionalism” forged in the aftermath of the Sept. 11 terrorist attacks, when the U.S. government’s concerns shifted from online privacy protections to security. The fledgling surveillance capabilities emerging from Silicon Valley appeared to hold great promise and were given free rein. And, of course, those companies saw the commercial value beyond national security. And nobody was concerned.

For me, I have never been satisfied with just having vague notions of how all these converged distributed information and communication technologies worked. I needed to do a deep dive, to understand. That should also apply to all “data privacy professionals”, as well as responsible citizens, consumers, and other professionals. You may not need to comprehend the precise details of how modern algorithms and these intermediated, platform-based environments work, but you need to know how to assess the “big picture”. You need to arm yourselves with a better, deeper, and more nuanced understanding of the phenomenon. I want to help you to do so with this series.

It is not easy, but it can be done. Just look at AI. It took time to develop a framework for deconstructing algorithmic systems, but we did it, identifying three fundamental components: (1) the underlying data on which they are trained, (2) the logic of the algorithms themselves, and (3) the ways in which users interact with the algorithms. Each of these components feeds into the others. We learned the intended and unintended consequences of algorithmic systems.

In Part 2 I will do a “mash-up” of the work of six authors and explain:

1. what we mean by “surveillance capitalism” (to use Zuboff’s phrase) and how we killed privacy, or at least how we so easily let it happen

2. a look at the evolution of Google as the first “surveillance king”, and how economies of scale, scope and action drive the harm

3. why current privacy law and legislation are insufficient to control surveillance or protect “data privacy”
