1 July 2020 (Chania, Crete) – Technology is a bit like toothpaste. Once you squirt it out of the tube, it’s very difficult to get it back in there. That’s why artificial intelligence experts have warned for years that as AI matures, policymakers need to define limits for certain applications before they’re deployed — and that some technology should stay where it is, inside the tube. But I suspect that over the next few months we’ll see that decision-makers have not been listening … or are simply powerless to stop anything.

There is a lot to say about this topic, so I have zoomed across the recent material to outline the “state of play”: how researchers are trying to make their voices heard as policymakers write up laws for AI, how the U.S. might get ahead of Europe in banning facial recognition, and, to close, a few notes from a high-ranking EU official on how Big Tech has for years pushed the message that “nonbinding principles” are enough to regulate AI – and how that has changed.

Oh, and a Postscript with some bits and bobs.

RESEARCHERS RAISE THEIR VOICES, PART I

It stirred waves beyond academia when a group of AI researchers last week published a letter, now signed by over 2,400 experts, urging publishing house Springer Nature not to release a scientific paper.

What happened: U.S. researchers had announced (in a now-deleted press release) that they had developed software that could analyze an individual’s biometric features to predict how likely that person is to become a criminal. They also said that their paper would be published in an upcoming book series by Springer Nature, one of the world’s most prestigious academic publishers.

The larger picture: The research is an example of a highly controversial subfield of AI known as “affect recognition” — technology that scans parts of your body not (just) to identify you but to draw inferences about your innermost feelings. Such technology is being developed around the world, yet studies have shown that it lacks a sound scientific basis.

What’s more, the open letter warns, “the circulation of this work by a major publisher like Springer would represent a significant step towards the legitimation and application of repeatedly debunked, socially harmful research in the real world.” In a written response, Springer Nature said: “we acknowledge the concern regarding this paper and would like to clarify at no time was this accepted for publication.” The authors had submitted the paper for a forthcoming conference, the publisher added, but it had already been rejected in mid-June after peer review.

Birth of a coalition: The core team of researchers behind the open letter wants Springer Nature to commit to not publishing similar research in the future. At the same time, they announced that they will “engage in future conversations as the Coalition For Critical Technology” — not least because “this is an endemic and recurring problem,” as researcher Theodora Dryer put it.

US DEBATES FACIAL RECOGNITION BAN: It seems no coincidence that this initiative originated in the U.S., a country where weeks of anti-racism protests have raised unprecedented awareness of discrimination by biased technology and brought a debate over a facial recognition ban back onto the political agenda.

What’s happening: Democratic lawmakers in D.C. late last week introduced a bill that would make the use of facial recognition by federal law enforcement agencies illegal. The bill marks one of the most ambitious crackdowns on Capitol Hill to date, even if it has a very low chance of passing the Senate – and so far, it’s unclear whether even Democratic House leadership will be on board. I think the bill is more of a signaling exercise by progressive Democrats at this point.

Not just theory: The risk of AI discriminating against minorities is real and well documented. Last week, the New York Times reported, for instance, that flawed facial recognition technology has already led to a black man’s arrest for a crime he did not commit.

RESEARCHERS RAISE THEIR VOICES, PART II

Which brings us back to Europe, where a group of high-level AI experts last year released a (nonbinding) plan for the EU to become a world leader in “trustworthy AI” technology, designed to avoid such discrimination and other risks.

Now it’s up to the actual decision-makers in Brussels to live up to that promise – against mounting pressure from the industry to water down the upcoming rules. That’s the warning in another open letter released last week by a group of over 120 researchers, among them some of the world’s most influential AI scientists.

The backdrop: The EU wants to pass its first AI law early next year. In February, the European Commission, the bloc’s executive body, published a White Paper in which it spelled out its preferred options and asked for feedback. Over 1,200 replies came in, I’m told – many of them from tech companies or the associations representing them. In the coming months, officials will work through them and draw up the actual legislation.

But don’t let the industry trick you, the researchers warn in their open letter: “The Commission will undoubtedly receive detailed feedback from many corporations, industry groups, and think tanks representing their own and others’ interests, which in some cases involve weakening regulation and downplaying potential risks related to AI,” they write, adding, “we hope that the Commission will stand firm in doing neither.”

TOP EU OFFICIAL SETTLES SCORES WITH BIG TECH

Paul Nemitz, a principal advisor on justice policy at the Commission’s Directorate-General for Justice and Consumers, earlier this month published a book titled Prinzip Mensch (in English, roughly, “The Human Principle”) together with journalist Matthias Pfeffer.

NOTE: we were provided an electronic copy, which our machine-translation software converted to English.

Their book not only provides an insightful deep dive into what old-school thinkers like Norbert Wiener, Karl Popper or even Immanuel Kant can teach us about today’s technology; it also offers insights into how Big Tech has for years been trying to influence Europe’s AI rules. It’s fair to assume that many Silicon Valley executives won’t like it.

Why you should care: Nemitz knows first-hand how tech lobbying in Brussels works. He is, for example, one of the key architects of the GDPR. And it’s not the first time he has gone public to warn that new technologies could undermine core democratic values; he has spent years on Twitter bashing Big Tech. What’s written in the book “is based on my personal experience as an official in Brussels,” he told a group of journalists at its official launch.

Concentration of power: Across 425 pages, Nemitz and Pfeffer describe a world in which a handful of foreign companies wield unprecedented “digital power,” including in Europe: Google, Amazon, Facebook, Apple and Microsoft from the U.S. (which they call the “GAFAMs”) as well as Tencent, Alibaba and Baidu from China. These companies aren’t just amassing knowledge about the hundreds of millions of people who use their services and devices, Nemitz said: “They are also extremely rich and they do a lot with their money – which allows them to influence politics, science and journalism.” Correct. No surprise. But the detail the book provides is impressive.

Big Tech, everywhere: “Over and over again, we were told that we don’t need binding rules but that voluntary guidelines are enough,” Nemitz said. “There are many organizations here in Brussels that wrote reports arguing along similar lines because the GAFAMs and their money are members everywhere – that’s why we kept hearing the same arguments from all directions.”

The backdrop: For almost three years — all while the technology kept advancing — governments, corporations and researchers have debated who should get to write the rules for AI technology, and whether those rules should come as hard laws or nonbinding ethical guidelines. As a result, hundreds of sets of principles have been released.

But the wind has turned, Nemitz said: “In Brussels, we’ve departed from the idea of ethics and arrived at hard laws.” Now “people have come to understand that ethics alone won’t do much in an area heavily dominated by those powerful players and that legislation is necessary in all different kinds of areas,” he said, referring to the Commission led by President Ursula von der Leyen.

It’s rare to hear such frank words from a Commission official, particularly while microphones are still running – and it shows that in the fight over what will end up in the fine print of Europe’s upcoming AI rules, the gloves are coming off. Stay tuned.

POSTSCRIPT

BERLIN RELEASES FEEDBACK ON AI WHITE PAPER: The German government has released its feedback on the EU’s White Paper on Artificial Intelligence. Although the document is just one of over 1,200 submissions, it will likely be read closely — not just because it comes from the EU’s most populous country but also because Berlin takes over the EU’s powerful rotating Council presidency on July 1. And the document includes a section the tech industry might not be happy about.

More rules needed: In its White Paper from February, the EU suggested differentiating between two groups of AI applications: “high-risk” AI technology in “high-risk” sectors, which would need tough new laws, and other “low-risk” AI, for which existing rules are essentially enough. But that pitch does not go far enough, Berlin argues. Instead, more nuanced categories are needed, and any AI technology that poses high risks to users should be covered by tough new rules — even if it’s deployed in sectors considered “low-risk.”

PARLIAMENT SETS UP AI COMMITTEE: Elsewhere in Brussels, the European Parliament announced that it had set up a new special committee “on artificial intelligence in a digital age.” It will have 33 members.

Workplace surveillance: Accountancy firm PwC has come under fire for developing facial recognition software that allows firms to track employees’ absences from their computer screens, including for bathroom breaks.
