Hello, and welcome to today’s edition of “Finding new ways for a Tesla to kill you!” (paging William Gibson)

1 April 2019 (Brussels, Belgium) — Last week my Chief Technology Officer was out in San Francisco for EmTech Digital, the annual “all-things-AI” event sponsored by MIT Technology Review. The event covers a whirlwind of AI topics.

One thing Eric noted in his report was “adversarial machine learning”: a family of methods, some of which involve experimentally feeding input into an algorithm to reveal the information it was trained on, while others distort the input in a way that causes the system to “misbehave”.

He noted how researchers were using the Enron e-mail data set to demonstrate examples of adversarial-learning trickery. But he also mentioned a separate presentation with a video demo showing how a few innocuous-looking stickers could trick a self-driving car’s vision system into misreading a stop sign as a 45 mph speed-limit sign. You can read Eric’s full post here.

We received 210 emails on that post, many of them from e-discovery readers asking about the use of the Enron data set. But most readers asked about the automated-driving video. I am still working my way through the tech papers, so in brief …

Basically, someone finally got around to applying the “perturbation” concept to the machine learning behind Tesla’s “Autopilot” assisted cruise control. Without getting too far into the weeds: machine learning does not necessarily use the same cues that humans use to spot things in data (images, voice, etc.), so you can sometimes add something to the data that people either do not notice or cannot see at all (noise, seemingly irrelevant details), but that an ML system (especially a simple one) treats as very significant. This is “perturbation”.
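If you want a feel for why a change too small for a person to notice can still swing a model’s answer, here is a toy illustration (emphatically not Tesla’s actual system): a simple linear “classifier” with thousands of inputs adds up thousands of tiny nudges into one large shift in its output. All the numbers below are made up for the sketch.

```python
# Toy illustration of perturbation: many tiny, individually invisible
# changes add up to a big change in a linear model's score.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=10_000)   # the model's learned weights
x = rng.normal(size=10_000)   # an ordinary input (think: image pixels)

# Nudge each feature by a barely-perceptible 0.01, in whichever
# direction the corresponding weight points.
perturbation = 0.01 * np.sign(w)

print("score before perturbation:", w @ x)
print("score after perturbation: ", w @ (x + perturbation))
print("largest single-feature change:", np.abs(perturbation).max())
```

No single feature moves by more than 0.01, yet the overall score jumps by a large amount, which is the basic reason these attacks work.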

As proof of concept, researchers have added imperceptible background noise to a picture of a dog, with the result that an ML system then classifies it as a tree or a car.
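For the curious, the standard recipe for that kind of “invisible noise” attack is the fast gradient sign method (FGSM). The sketch below is a minimal, hedged version of the idea using a stock image classifier; the model choice, the file name `dog.jpg`, and the epsilon value are illustrative assumptions, not the setup from the presentation Eric saw.

```python
# Minimal FGSM sketch: nudge every pixel slightly in the direction that
# most increases the model's loss, and see whether the prediction flips.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet18(pretrained=True).eval()

preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])
img = preprocess(Image.open("dog.jpg")).unsqueeze(0)  # hypothetical input photo
img.requires_grad_(True)

# Get the model's current best guess, then compute the gradient of the
# loss for that guess with respect to the input pixels.
logits = model(img)
label = logits.argmax(dim=1)
loss = torch.nn.functional.cross_entropy(logits, label)
loss.backward()

# Add a tiny perturbation -- far too small for a person to notice.
epsilon = 0.005
adversarial = (img + epsilon * img.grad.sign()).clamp(0, 1)

print("original prediction: ", label.item())
print("perturbed prediction:", model(adversarial).argmax(dim=1).item())
```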

But now someone has worked out how to make a series of very small stickers you can put on the road, which people can barely see but which Autopilot interprets as a lane change, causing the car to swerve into oncoming traffic. You can read more in a detailed PDF by clicking here.

