Regulating AI: Europe’s artificial intelligence cacophony of confusion

 

5 December 2019 (Paris, France) – There’s little doubt that artificial intelligence has tremendous potential to do good. Just two examples to show its scope:

• This report on how it can help epilepsy patients predict seizures

• Thanks to AI technology, a Czech scientist has identified specific scenes in Shakespeare’s Henry VIII that bear the hallmarks of another author

But AI also has the potential to cause harm in ways that are difficult to imagine. We only need to look at China’s use of AI to rank and detain its citizens, or at European ideas on how to hold AI manufacturers responsible if their products go rogue. And so regulation rears its head.

ON AI RULES, A EUROPEAN CACOPHONY

The clock is ticking for Europe’s new Commission President Ursula von der Leyen to deliver on her promise to pass AI laws within her first 100 days in office. But the reality is that her Commission — both its new political leaders and its long-serving bureaucrats on the ground — remains divided over what those rules should look like, and whether they should come as area-specific guidelines or hard laws that apply across sectors.

One thing everyone agrees on is that 100 days are (far too) short to produce a comprehensive AI rulebook. EU officials have privately said the unofficial plan is to put out “something” when the deadline arrives, to live up to what one called von der Leyen’s “bloody promise.” Legislation tackling the truly important issues in AI will have to come later in the five-year mandate, next fall at the earliest, and even that seems optimistic. That sheds new light on comments made recently by Executive Vice-President Margrethe Vestager – an EU veteran familiar with the bloc’s previous work on AI – who will make the final call on the issue. She has hinted in recent days (watch a video interview with her here) that at the end of the 100 days, all the Commission might present is a “consultation paper.”

And remember: von der Leyen’s promise didn’t come out of the blue. It was an orchestrated move that followed her close ally Angela Merkel saying, on the sidelines of a G20 summit, that the EU should regulate AI with rules similar to the GDPR, Europe’s new privacy rules. The approach reflects the thinking of a group of high-ranking EU bureaucrats with close links to the German capital, who believe that by coming up with rules early, Europe will set a global standard for AI regulation. And so officials in the EU’s digital policy department have since been busy drafting pitches. Their ideas loosely follow a list of non-binding guidelines released by the EU’s group of AI experts earlier this year. But in some key respects, those guidelines clash with what Berlin’s experts recommended in their own document.

Here is the problem: the focus is on regulating “algorithmic systems” – which is a bureaucratic way of describing essentially everything that can be considered artificial intelligence these days. AI systems should be labeled according to a five-rank system depending on the risks they pose, the regulatory experts suggest. Systems ranked in categories 3 and 4 would have to fulfill tough transparency obligations; those labeled 5 would be banned outright. High-risk AI applications would generally have to be visibly labeled as such, the document says. And lawmakers should, once and for all, drop the controversial idea of granting some AI systems legal personality.
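Just to make the proposed tiering concrete, here is a toy sketch of what risk-keyed “horizontal requirements” could look like in practice. This is purely my own illustration, not anything drafted by the German experts or the Commission; the obligations attached to the lower tiers are assumptions, since the report only spells out transparency duties for categories 3 and 4 and an outright ban for category 5.

```python
# Purely illustrative sketch of a five-tier, risk-keyed obligation lookup.
# Tiers 3-4 (transparency, visible labeling) and 5 (banned) follow the report
# as described above; the treatment of tiers 1-2 is my assumption.
from typing import List

RISK_TIER_OBLIGATIONS = {
    1: [],                                                    # assumed: no specific duties
    2: ["basic documentation"],                               # assumed
    3: ["transparency obligations", "visible high-risk label"],
    4: ["transparency obligations", "visible high-risk label"],
    5: ["prohibited"],
}

def obligations_for(risk_tier: int) -> List[str]:
    """Return the horizontal requirements that would apply at a given risk tier."""
    if risk_tier not in RISK_TIER_OBLIGATIONS:
        raise ValueError(f"unknown risk tier: {risk_tier}")
    return RISK_TIER_OBLIGATIONS[risk_tier]

print(obligations_for(4))   # ['transparency obligations', 'visible high-risk label']
print(obligations_for(5))   # ['prohibited']
```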

But the bombshell is hidden in a section of the German report that recommends “regulating algorithmic systems with common horizontal requirements in European law.” In other words, lawmakers should come up with broad, overarching rules that spell out key principles any AI system has to follow, and that apply to public institutions and private corporations alike, across sectors. In a second step, the document adds, those rules should “be specified for sectors on the level of the EU and member states.”

Such catch-all “horizontal” rules are exactly what Big Tech and other industrial big shots have been lobbying against, as have AI experts who say “the regulators just do not get how this stuff works.” They also conflict with the recommendations of the Commission’s own AI expert group, which urged lawmakers in its policy guidelines to avoid unnecessarily prescriptive regulation.

THE AGE OF FACIAL RECOGNITION

The technology is spreading rapidly around the globe, despite dire warnings from privacy advocates. This week, Georgia became the fourth country to introduce facial recognition in its subway system: passengers can now pay and pass turnstiles with a face scan (click here for a very short clip on how that works). In an interview with Georgian officials about the system, which our Georgia contact translated for me, a media representative noted that facial recognition is only one option for passengers, who can also “still use traditional payment methods.” She added that only the data of registered users would be collected by the Bank of Georgia, the partner of the Metro Tiflis in the project, which will also make “sure that users’ data will be protected.” What went unanswered was who has access to that data.

Use cases in China, where similar technology has been in use for a while, offer a true glimpse into how the technology can be used to build up a surveillance state. In Beijing’s metro, authorities have now reportedly launched a “passenger credit system” that puts rule-abiding passengers on a “white list” with privileges, while those who are, for instance, filmed eating on the train get credit points deducted. China also just passed a new law that requires new mobile phone users to submit facial recognition scans, bringing millions more people under the purview of the technology.

Facial recognition is only one of China’s surveillance tools. Another was revealed when an international group of journalists published an investigation into China’s treatment of its Uighur minority. According to the report, Beijing runs an opaque, centralized system known as the “Integrated Joint Operation Platform” (IJOP) that it uses to collect data on its citizens, and which uses machine learning to recommend who should be put behind bars. But this is only the tip of the iceberg. The classified government bulletins that detail how IJOP works suggest in many places that algorithms are used to decide who will be detained. And what IJOP does in Xinjiang — the predictive policing — is only a small part of the platform’s overall functionality.

And let’s stifle the West’s “shock and dismay” over this: that system was built with the help of U.S. and European Big Tech, which still supports it. More on that next week, when I send out the 2019 edition of my “52 Things I Learned About Technology.”

HOW EU CONCERN OVER HUMAN RIGHTS ENTERED THE AI DEBATE

As concerns grow about AI’s potential to violate human rights, some legal scholars argue that an effective weapon to fight back could be right under our noses: international human rights law. They say it’s time to move from ethical debates about AI to a discussion of how human rights law can be applied to the technology, which would give communities around the world internationally binding rules with which to fend off violations. The idea isn’t completely new, but several debates at last week’s Internet Governance Forum (IGF) in Berlin suggest it’s now gathering support among policymakers.

AI technology has already been used to violate internationally recognized human rights — from the right to a fair trial and due process (see China) to fundamental rights to free information and privacy. But while “AI is generally owned and implemented by the powerful, its victims are rarely powerful,” digital rights activist Joe McNamee said at the IGF. More than a hundred papers on AI ethics from public and corporate expert panels have already explored this problem, yet their principles remain toothless because they lack any legal mechanism to hold the powerful to account. At the IGF there was talk of a “codified international regime for AI” that would set a standard to aspire to, but most participants called that a pipe dream. The most one could hope for, they said, was an instrument to “name and shame” governments and companies that violate these rights.

PRODUCT LIABILITY IN THE AGE OF AI

On a day-to-day level, courts around the world may soon have to decide who’s responsible when AI-powered products injure users. For that purpose, civil law traditionally follows “product liability” rules, which help consumers hold businesses responsible. But as more and more products use AI technology, a debate is raging over whether and how those rules need to be updated for this new reality. As I noted last month, a largely unnoticed expert panel of the European Commission’s justice and consumer policy department published a doozy of a report, entitled “Liability for Artificial Intelligence and other emerging technologies,” with recommendations for Europe’s liability regime. It covers just about everything: the specific characteristics of these technologies and their applications (complexity, modification through updates or self-learning during operation, limited predictability, vulnerability to cybersecurity threats), how victims could claim compensation, and so on.

All of this, of course, bothers manufacturers because it raises the prospect of high compliance costs, which explains their opposition. Many argue that, because most AI systems are built on machine learning, products continue to change shape after leaving the factory, raising the question of whether the manufacturer can still be held liable for damages those products cause. Even legislators are quietly admitting “wow, this stuff is complex!”, and EU lobbyists are out in full force to kill this.

AND JUST A FEW WORDS ON THE HYPE

As I have written numerous times before, the popular media has seen a steady stream of reporting about new AI developments – which are first over-hyped, then quietly forgotten – leading to an epidemic of misinformation. The media is often tempted to report each tiny new advance in a field, be it AI or nanotechnology, as a great triumph that will soon fundamentally alter our world. Occasionally, of course, new discoveries are underreported. The transistor did not make huge waves when it was first introduced, and few people initially appreciated the full potential of the Internet. But for every transistor and Internet, there are thousands or tens of thousands of minor results that are overreported, products and ideas that never materialize, purported advances like cold fusion that have never been replicated, and experiments that lead down blind alleys and don’t ultimately reshape the world, contrary to their enthusiastic initial billing.

Part of this, of course, is because the public loves stories of revolution and yawns at reports of minor incremental advances. But researchers are often complicit, because they too thrive on publicity, which can materially affect their funding and even their salaries. For the most part, both the media and a significant fraction of researchers are satisfied with a status quo in which there is a steady stream of results that are first over-hyped, then quietly forgotten.

Just two examples from the last several weeks that leading media outlets reported in fundamentally misleading ways:

• The Economist published an interview it conducted with OpenAI’s GPT-2 sentence/conversation generation system and misleadingly said that GPT-2’s answers were “unedited,” when in reality each published answer was selected from five options, filtered for coherence and humor. This led the public to think that conversational AI is much closer than it actually is. It got worse when a leading AI expert (Erik Brynjolfsson) tweeted that the interview was “impressive” and that “the answers are more coherent than those of many humans.”

But that was not what happened. In fact, the apparent “coherence” of the interview stemmed from (a) the enormous corpus of human writing the system drew from and (b) the filtering for coherence done by a human journalist. When he actually read the full study, Brynjolfsson had to issue a correction. But in keeping with the social media world we live in, the retweets of his original tweet far outnumbered those of his correction … something along the lines of 75:1. Just more evidence that triumphant but misleading news travels faster than sober news.
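To make concrete why “pick the best of five” changes the picture, here is a minimal sketch of best-of-N sampling, assuming the open-source Hugging Face transformers library and the public gpt2 checkpoint. It is my own illustration of the general technique, not the setup The Economist or OpenAI actually used:

```python
# Illustrative sketch of "best-of-N" sampling: generate several candidate
# answers and let a human pick the most coherent one to publish.
# Assumes the open-source Hugging Face `transformers` package and the public
# `gpt2` checkpoint; this is NOT the setup used for the Economist interview.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Q: Will artificial intelligence take over the world?\nA:"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sample five candidate continuations for the same question.
outputs = model.generate(
    input_ids,
    do_sample=True,
    top_k=50,
    max_length=input_ids.shape[1] + 40,
    num_return_sequences=5,
    pad_token_id=tokenizer.eos_token_id,
)

candidates = [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]
for i, text in enumerate(candidates):
    print(f"--- candidate {i} ---\n{text}\n")

# The "editing" step: a human reads all five and publishes only the best one.
# Presenting that single answer as "unedited" output overstates the model's
# typical coherence by roughly the selection factor.
chosen = candidates[0]  # in practice, chosen by a human judge, not index 0
print("PUBLISHED ANSWER:\n", chosen)
```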

• OpenAI (yes, again) reported that it had created a pair of neural networks that allowed a robot to learn to manipulate a custom-built Rubik’s cube, and then publicized the work with a somewhat misleading video and blog post that led many to think the system had learned the cognitive aspects of cube-solving (viz. which cube faces to turn when), when in fact it had not learned that part of the process.

I subscribe to the OpenAI blog and announcements, and I was amazed they had done this. So I dug into the full report. “Cube-solving,” as distinct from dexterity, was actually computed via a classical, symbol-manipulating cube-solving algorithm devised in 1992; it was innate, not learned. Also less than obvious from the widely circulated video was the fact that the cube was instrumented with Bluetooth sensors, and that even in the best case only 20 percent of fully scrambled cubes were solved. Media coverage tended to miss many of these nuances. The Washington Post, for example, reported in its first version of the story that “OpenAI’s researchers say they didn’t ‘explicitly program’ the machine to solve the puzzle,” which was at best unclear. The Post later had to issue a detailed correction, which was a doozy: “Correction: OpenAI did not create a pair of neural networks that allowed a robot to learn to manipulate Rubik’s cube. In fact, their research [was] on physical manipulation of a Rubik’s Cube using a robotic hand, not on solving the puzzle.” I suspect that the number of people who read the correction was small relative to those who read, and were misled by, the original story.
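To see where the line between “learned” and “innate” falls, here is a minimal sketch of that division of labor, assuming the open-source kociemba package, a Python implementation of Kociemba’s classical two-phase algorithm from 1992 (presumably the kind of symbolic solver referred to above). This is purely my illustration, not OpenAI’s code: the symbolic solver plans which faces to turn, while a separately trained neural policy would only execute those turns with the robotic hand.

```python
# Illustrative sketch: the "which face to turn when" part of cube-solving is a
# classical, symbolic problem. Assumes the open-source `kociemba` package
# (pip install kociemba), an implementation of Kociemba's two-phase algorithm;
# this is not OpenAI's code.
import kociemba

# Facelet string in URFDLB order. This particular state is the solved cube
# after a single clockwise turn of the top (U) face.
scrambled = (
    "UUUUUUUUU"   # Up face
    "BBBRRRRRR"   # Right face (top row came from Back)
    "RRRFFFFFF"   # Front face (top row came from Right)
    "DDDDDDDDD"   # Down face
    "FFFLLLLLL"   # Left face (top row came from Front)
    "LLLBBBBBB"   # Back face (top row came from Left)
)

# The symbolic planner: returns a move sequence such as "U'".
move_plan = kociemba.solve(scrambled)
print("Planned face turns:", move_plan)

# In OpenAI's demo, the learned component handled only the *physical* execution
# of moves like these with a robotic hand; the plan itself was never learned.
for move in move_plan.split():
    print(f"(hand executes {move})")  # stand-in for the dexterity policy
```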

Misinformation is not ubiquitous. Some researchers are forthright about limitations, and some news stories are reported accurately, by some venues, with an honest recognition of limits. But the overall tendency to interpret each incremental advance as revolutionary is widespread, because it fits a happy narrative of human triumph.

But the net consequences could, in the end, debilitate the field, paradoxically inducing an AI winter after initially helping stimulate public interest.
