How algorithms REALLY created that corporate nightmare at United Airlines

“I was only following corporate algorithms”

Testimony to be given at a future war crimes trial
(a riff on the Nuremberg defense by John Robb, astro engineer extraordinaire)

21 April 2017 – I recently posted a 2-part series with my take on the United Airlines “fight club” saga, labelling it a prime example of the basest, ugliest form of tech-abetted, bottom-seeking capitalism – one concerned with prices and profits above all else, with little regard for quality of service, for friendliness, or even for the dignity of customers. That take drew on my many years in airline litigation work, and on the rigid algorithmic and authoritarian decision making that can create corporate disasters in an age dominated by social networking. But I did not focus on the “algorithmic chaos” itself.

But John Robb has done some “big thinking” on exactly that aspect, and his analysis has been featured in the Wall Street Journal, The New York Times, CNBC, etc. John is a pilot, an astronautical engineer by training, served in special operations during his time in the military, etc. An all-around brilliant guy.

Here’s how he summarized the algorithmic decision making that created the incident on United.

  • United employees board a full flight from Chicago to Louisville. A United flight crew headed to Louisville arrives at the gate at the last moment. A corporate scheduling algorithm decides that the deadheading flight crew has priority over fare-paying passengers and that four passengers need to be removed to make room for them (the flight wasn’t overbooked).
  • United asks for volunteers. A corporate financial algorithm authorizes gate employees to offer passengers up to $800 to take a later flight (offering a bigger incentive wasn’t an option). No passenger takes them up on that offer.
  • United now shifts to removing passengers from the flight non-voluntarily. To determine who gets removed from the aircraft, United runs a customer value algorithm. This algorithm calculates the value of each passenger based on frequent flyer status, the price of the ticket, connecting flights, etc. The customers with the lowest value to United are flagged for removal from the flight (it wasn’t a random selection).
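
United has never published any of these algorithms, so here is a minimal Python sketch of how a customer value ranking of the kind Robb describes might work. Every field name, weight, and tier below is an illustrative assumption, not United’s actual logic:

```python
from dataclasses import dataclass

# Hypothetical weights -- United has never published its real tiers.
STATUS_WEIGHT = {"none": 0, "silver": 200, "gold": 400, "global": 800}

@dataclass
class Passenger:
    name: str
    fare_paid: float      # price of the ticket
    status: str           # frequent flyer tier
    has_connection: bool  # stranding a connecting passenger costs more to fix

def customer_value(p: Passenger) -> float:
    """Score a passenger's value to the airline (higher = keep on board)."""
    score = p.fare_paid + STATUS_WEIGHT[p.status]
    if p.has_connection:
        score += 300  # assumed rebooking-cost penalty
    return score

def flag_for_removal(manifest: list[Passenger], seats_needed: int) -> list[Passenger]:
    """Deterministically flag the lowest-value passengers -- not a random draw."""
    return sorted(manifest, key=customer_value)[:seats_needed]

manifest = [
    Passenger("A", 350.0, "gold", True),
    Passenger("B", 89.0, "none", False),
    Passenger("C", 120.0, "silver", False),
    Passenger("D", 65.0, "none", True),
    Passenger("E", 410.0, "global", False),
]
print([p.name for p in flag_for_removal(manifest, 4)])  # ['B', 'C', 'D', 'A']
```

The point is the design, not the weights: once a function like this exists, “who gets removed” stops being a human judgment call and becomes a sort order.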

Now, here’s how authoritarian decision making (which is very common in modern air travel, by the way) made things worse. John says to “note how this type of decision making escalates the problem rapidly”.

  • The United flight crew approaches the four passengers identified by the corporate algorithm and tells them to deplane. Three of the passengers designated get off the flight as ordered. One refuses. [NOTE: I discussed this before, but since disobedience of instructions from the flight crew is not tolerated in a post-9/11 air travel world, the incident is escalated to the next level.]
  • United employees call the airport police to remove the passenger. The airport police arrive to remove the “unruly” (he disobeyed orders) passenger. The passenger disobeys the order from the airport police to deplane. Disobeying a police officer’s order rapidly escalates to violence. The police then remove the passenger by force (the video of this is shared on social media).
  • The CEO of United Airlines rapidly responds: “While I deeply regret this situation arose, I also emphatically stand behind all of you, and I want to commend you for continuing to go above and beyond…” In short, the CEO praises his employees for following the corporate algorithms and for not backing down when their authority to remove a passenger was questioned (which resulted in still more negative backlash on social networks).
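
To see why this ladder escalates so mechanically, here is a toy model of the process as code. The steps are taken from the bullets above; everything else is illustrative:

```python
# Each refusal hands the problem to the next, more forceful authority;
# there is no branch for human judgment, sweetened offers, or asking for help.
ESCALATION = [
    ("flight crew tells the passenger to deplane", "crew order refused"),
    ("airport police order the passenger off", "police order refused"),
    ("police remove the passenger by force", "video shared on social media"),
]

def escalate(complies_at_each_step: list[bool]) -> None:
    for (action, consequence), complies in zip(ESCALATION, complies_at_each_step):
        print(action)
        if complies:
            print("-> resolved")
            return
        print("->", consequence)

# The one passenger who refuses every order:
escalate([False, False, False])
```

Note the only transitions in the model are “comply” and “escalate” – exactly the rigidity John is describing.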

So what do you get? Oh, mercy. So predictable. Says John:

As you can see, United was designed to fail in a world connected by social networking and they are not alone. Let’s recap. United employees blindly followed the decision making of algorithms up to the point of telling seated passengers to deplane. The authoritarian decision making that followed was just as rigid and unyielding. Disobeying orders of the flight crew led to the police. Disobeying the police led to forced removal. Finally, the public failure of this process led United’s CEO to praise employees for their rigid adherence to algorithmic and authoritarian decision making. The entire process was inevitable. It’s also not a unique situation. We’re going to see much more of this in the future as algorithms and authoritarianism grow in America.

 

This is not to say all organizations function like this, but so many do. We see it every week in the stream of corporate disasters that play out online. In Part 2 of my United Airlines series I highlighted the shining positive exception … Southwest Airlines and its “hub social team” that has spokes into every element of the business. Southwest actually has human decision-making “escape valves” built into the algorithms used to dictate employee behavior. In other words, if an algorithmic process is going terribly wrong, it is ok for an employee to find a non-standard way to solve it or to ask for help. Southwest has a “rapid response team” that can swoop in electronically (usually via a smartphone) to coach onsite employees on ways to respond (and also to authorize extremely non-standard responses on the spot).
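
In code terms, that escape valve is a human-in-the-loop override wrapped around the algorithm’s output. This is a sketch of the pattern only – the names are hypothetical, not Southwest’s actual system:

```python
# Sketch of a human "escape valve": the algorithm proposes, but the
# employee can refuse to execute and escalate to coaching instead.

def rapid_response_team(situation: str) -> str:
    """Stand-in for the remote team that coaches onsite employees
    (and can authorize non-standard responses on the spot)."""
    return f"coaching requested: {situation}"

def handle_gate_conflict(algorithmic_action: str, employee_agrees: bool) -> str:
    if employee_agrees:
        return algorithmic_action  # normal path: follow the algorithm
    # Escape valve: stop the script and ask for a non-standard resolution
    # instead of escalating to force.
    return rapid_response_team(f"employee declined to execute: {algorithmic_action}")

print(handle_gate_conflict("remove passenger in seat 22B", employee_agrees=False))
```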

Those of us in the e-discovery world have a similar structure. A short time ago a new philosophy emerged in the e-discovery industry: view e-discovery as a science, something repeatable, predictable, and efficient, with higher quality results, rather than as an art recreated from scratch with every project. Underpinning this transformation was the emergence of new intelligent technology: predictive coding, smarter review tools, and financial tools for managing a portfolio of cases. This approach promised real results in controlling the costs of litigation.
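
For readers outside the industry, predictive coding at its core is just supervised text classification: attorneys label a seed set of documents, a model learns from those labels, and the unreviewed corpus is ranked by predicted relevance. A toy sketch (assuming scikit-learn is installed; real TAR workflows add sampling, validation rounds, and defensibility metrics):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Small attorney-reviewed seed set (contents are made up for illustration).
seed_docs = [
    "merger negotiations with acme corp",
    "quarterly cafeteria menu update",
    "acme due diligence financials",
    "office holiday party logistics",
]
labels = [1, 0, 1, 0]  # 1 = responsive, 0 = not responsive

vec = TfidfVectorizer()
model = LogisticRegression().fit(vec.fit_transform(seed_docs), labels)

# Rank unreviewed documents by predicted relevance, highest first.
unreviewed = ["draft acme merger term sheet", "parking garage closure notice"]
scores = model.predict_proba(vec.transform(unreviewed))[:, 1]
for doc, score in sorted(zip(unreviewed, scores), key=lambda t: -t[1]):
    print(f"{score:.2f}  {doc}")
```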

But it is far from perfect, and I have chronicled its many failures and false starts, most of them due to poor execution rather than the technology itself. In many cases the field is driven by people consulting and writing about the technology rather than doing it, people who are not in the real world where it is employed and who, like those United employees, blindly follow the decision making of the algorithms despite the result. Not everybody is blind. I have noted the brilliant exceptions.

These are incredibly complex systems, and I must, yet again, note Melanie Mitchell and her indispensable book Complexity: A Guided Tour, in which she observes that some complex systems (weather patterns, markets, animal population groups, airline management systems) turn out to be extremely sensitive to tiny variations in initial conditions, which we call, for lack of a better term, chaos. You can have a theoretically perfect model of the behavior of a system, and develop algorithms for those systems, but behavior remains unpredictable, even in principle, because of variations that are beyond our ability to measure. She nails it when she says:

The unpredictability and fundamental unknowability of many aspects of reality are familiar enough, particularly when it comes to human social interactions (meaning, among other things, the whole of politics and economics and social relationships), human beings being notoriously unpredictable creatures. Soldiers, entrepreneurs, and fashion designers all know that all of the best planning and research that can be done often goes up in a flash when actual events start to unfold. 
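
That sensitivity to initial conditions is easy to demonstrate with the logistic map, the standard toy model of chaos that Mitchell walks through in the book. Two trajectories starting one part in a billion apart end up nowhere near each other:

```python
# Sensitive dependence on initial conditions via the logistic map
# x' = r * x * (1 - x).
r = 4.0                  # parameter value deep in the chaotic regime
x, y = 0.2, 0.2 + 1e-9   # nearly identical starting points

for step in range(1, 51):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: x={x:.6f}  y={y:.6f}  gap={abs(x - y):.6f}")
```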

And let’s forget algorithms for the moment. The reality is that regulations, regulatory reforms, and economic incentives have interacted in ways that no one foresaw, or could foresee, producing results that no one wanted. Securitization (bundling and chopping up mortgages into financial instruments that could be easily traded among firms) was intended to distribute risk among investors and institutions, but it ended up concentrating that risk. Everything from public-school failures to advanced mathematics contributed to the housing bubble and meltdown. Or my favorite: the invention of photocopying, which led credit-rating agencies to switch from an “investor-pays” business model to an “issuer-pays” model once the easy replication of their reports made it more difficult to get paid for their work.

Or the protectionist measures taken by the United States against Japanese automakers, which ended up spurring those firms’ technological innovation (especially in smaller four-cylinder engines) while allowing domestic automakers to forgo improvements in quality and performance. That ultimately made Japanese cars more attractive to U.S. buyers, not less.

I predict that both the frequency and seriousness of AI and algorithmic failures and debacles will steadily increase as AIs become more capable. The failures of today’s narrow domain AIs are just the tip of the iceberg; once we develop some sort of general artificial intelligence capable of cross-domain performance, embarrassment will be the least of our concerns.

Which is why (coming back to our mundane world) companies are employing simple “fixes”: explicitly analyzing how their software can fail, providing a “human” safety mechanism for each possible failure, keeping a less “smart” backup product or service available … and having a communications plan in place to address the media in case of an embarrassing failure. (Hint: start with an immediate apology and fall on your sword.)
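
The first three of those fixes reduce, in code, to a familiar pattern: wrap the “smart” path so that any failure trips a fallback to a simpler, safer behavior and flags a human. A minimal sketch, with illustrative function names:

```python
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("failsafe")

def smart_rebooking(passenger_id: str) -> str:
    raise RuntimeError("optimizer crashed")  # simulate an algorithmic failure

def simple_backup(passenger_id: str) -> str:
    return f"offer {passenger_id} the next available flight"  # dumb but safe

def rebook(passenger_id: str) -> str:
    try:
        return smart_rebooking(passenger_id)
    except Exception as exc:
        # The failure mode was analyzed in advance: log it, flag a human,
        # and fall back to the less "smart" service.
        log.warning("smart path failed (%s); flagged for human review", exc)
        return simple_backup(passenger_id)

print(rebook("PAX-1234"))
```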
