Is AI coming for reporters’ jobs? Plus: is artificial intelligence based on a decades-old misunderstanding about what intelligence is?

Guest blogger:

Eric De Grasse 
Chief Technology Officer
Project Counsel Media

 

3 June 2020 (Paris, France) – A Guardian report over the weekend that Microsoft had decided to “sack journalists to replace them with robots” got people talking. More than two dozen journalists employed by British news agency PA Media will be fired following a decision by Microsoft “to stop employing humans to select, edit and curate news articles” for its MSN website and Edge browser, the paper wrote.

News about journalists losing their jobs is tough to swallow in the midst of a pandemic, galloping unemployment and civil unrest in the United States. The doomsday scenario painted by the Guardian article isn’t helping, warned Mattia Peretti, who manages the JournalismAI research project at the London School of Economics and Political Science:

“I felt that the article was most interested in proving the point that AI is stealing jobs. And the vocabulary used – that journalists’ work will soon be done by AI software rather than “robots” – will simply raise fears of a dystopian future where AI is taking over all jobs, although automating some tasks is a normal evolution”.

Terminology aside, it’s become clear that AI will change journalism as drastically as it will change many other industries. Last year Politico published a deeply researched piece that still provides a good starting point if you’d like to dig deeper.

The piece is behind the Politico paywall but I have put it on our Slideshare so you can read it: click here

It delves into automatically generated articles, templates, structured stories, natural language generation and the nuances of storytelling. Software is already pretty good at turning data into basic articles. But you can’t automate complex reporting or creative writing – and you won’t be able to do so in the foreseeable future. That’s what journalists should focus on, now more than ever.
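To make the “turning data into basic articles” point concrete, here is a minimal sketch of template-driven generation in Python. The company, figures and wording are invented for illustration; real newsroom systems of the kind the Politico piece describes are far more elaborate.

```python
# Minimal sketch of template-based article generation: structured data in,
# a basic news brief out. The fields and template are illustrative only.

def earnings_brief(company: str, quarter: str, revenue_m: float, prior_m: float) -> str:
    change = (revenue_m - prior_m) / prior_m * 100
    direction = "rose" if change >= 0 else "fell"
    return (
        f"{company} reported revenue of ${revenue_m:.1f} million for {quarter}, "
        f"which {direction} {abs(change):.1f}% from the prior quarter."
    )

print(earnings_brief("Acme Corp", "Q1 2020", 120.4, 110.0))
# -> Acme Corp reported revenue of $120.4 million for Q1 2020, which rose 9.5% ...
```

Filling a template like this is trivial; deciding what matters, verifying it and telling the story around it is not, which is exactly the work that resists automation.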

The Microsoft case illustrates the point: the journalists employed by PA Media did not report original stories, according to the Guardian, but mostly curated outside content. That kind of work is prone to automation, though it remains to be seen how effective computers are at the task. Either way, their plight should serve as a warning: to survive economically, journalists need to focus on the tasks at the core of the profession – investigating, building narratives and holding people in power accountable.

THE MESS WE’RE IN

 

What if the founding fathers of artificial intelligence got one thing completely wrong, and the consequences of that mistake are felt to this day? That’s the provocative question raised by computer scientist Stuart Russell, who co-wrote the most popular AI textbook of our time, in a lecture organized (virtually) by the Alan Turing Institute in London last week. It has been uploaded to YouTube (long, silent intro; the lecture actually starts at 5:10):

 

When pioneering work on AI first took off in the 1940s, philosophers and economists widely understood intelligence as the ability of humans to achieve objectives, he said. Scientists translated that notion to the emerging field of AI, encoding goals into machines. To this day, a lot of machine-learning technology is based on that principle: engineers and computer scientists define an objective for the computer. Then they essentially sit back and watch the machine learn and teach itself how to get there:

“But that standard model really is a big mistake, one that has led to some devastating effects.”
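To illustrate the “standard model” Russell is criticising, here is a toy Python sketch: the engineer fixes an objective up front, and the machine blindly searches for whatever scores best against it. The objective function below is an arbitrary stand-in, not anything from Russell’s lecture.

```python
import random

# Sketch of the "standard model": the engineer hard-codes an objective,
# and the machine simply searches for whatever maximises it.
# The objective here is a toy stand-in chosen for illustration.

def objective(x: float) -> float:
    return -(x - 3.0) ** 2          # fixed goal set by the engineer: get x close to 3

x = 0.0
for _ in range(10_000):             # blind trial-and-error improvement
    candidate = x + random.uniform(-0.1, 0.1)
    if objective(candidate) > objective(x):
        x = candidate

print(round(x, 2))                  # converges near 3.0: the machine "achieves the objective"
```

The machine never questions whether the objective was the right one; that, in Russell’s telling, is precisely the problem.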

Why it matters: Look at the algorithms responsible for selecting content on YouTube. The objective coded into them is to maximize the probability that users will click on the next video they’re shown. But, said Russell, “is it learning what people want? No. What it’s actually doing is modifying you to be more predictable [and] training you to like to click on things that it will then send you.”

The algorithm, in other words, isn’t really “learning” what people want to see, Russell argued. It is modifying people to be more predictable. (What happens when this goes terribly wrong is illustrated by Kevin Roose’s podcast series Rabbit Hole at the New York Times.)
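Here is a toy simulation of the feedback loop Russell describes – it bears no relation to YouTube’s actual system, and the two topics and numbers are invented – showing how an engine that only maximises predicted clicks keeps reinforcing whichever taste is easiest to predict.

```python
import random

# Toy caricature of the feedback loop: each click nudges the user's taste
# toward the recommended topic, making the next prediction easier.
# Nothing here reflects any real recommender system.

random.seed(0)
prefs = {"news": 0.5, "outrage": 0.5}    # user's initial interest in each topic

for step in range(1000):
    topic = max(prefs, key=prefs.get)    # recommend the most "clickable" topic
    clicked = random.random() < prefs[topic]
    if clicked:
        # the recommendation itself shifts the user's preference
        prefs[topic] = min(1.0, prefs[topic] + 0.01)

print(prefs)   # one topic saturates near 1.0; the user has become more predictable
```

The objective (“maximise clicks”) is met, but only by changing the user – which is Russell’s point about optimizing the wrong thing.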

What does that mean for potential AI laws currently being drafted around the world? Says Russell:

“It would be a good idea to install codes of practice and standards, such as identifying the objective that a system is designed to optimize, identifying the actions it can take, and the aspects of the world that those actions can affect. If a system interacts with users, it would also be good to ensure that it has no incentive to manipulate or modify the user’s beliefs, preferences, etc.”

At the same time, engineers should avoid “any use of reinforcement learning” – a subfield of machine learning where computers are given an objective and refine their behaviour by trial and error until they reach it – “whenever AI systems interact directly with users, since those algorithms optimize their objective by modifying the state of the world (i.e., the user’s mind).”
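Purely as an illustration of the disclosures Russell suggests – identify the objective, the available actions, and the aspects of the world those actions can affect – here is one hypothetical way such a declaration could be written down. The field names and example values are assumptions, not any existing standard.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a machine-readable "system declaration";
# the field names are illustrative, not an existing standard.

@dataclass
class SystemDeclaration:
    objective: str                       # what the system is designed to optimize
    actions: list[str]                   # the actions it can take
    affected_aspects: list[str] = field(default_factory=list)  # parts of the world those actions touch

decl = SystemDeclaration(
    objective="maximize probability of a click on the next recommended video",
    actions=["rank candidate videos", "select the next video shown"],
    affected_aspects=["user's watch history", "user's future preferences"],
)
print(decl.objective)
```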

Russell’s pitch: Not least because of the limitations described in his lecture, most of today’s AI systems will hit a wall in terms of what they’re able to do, he says. But it turns out there’s another area of AI … called “probabilistic programming” … which combines probability theory with lessons from the world of programming languages or logic:

“When you combine probability theory with one of those two things, you get the ability to write down models of the world in a language that is as expressive as possible. And in the future what we’ll see is probably some kind of merger between probabilistic programming and deep learning and this will fuel the next phase of growth in AI”.
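For a flavour of what Russell means, here is a deliberately crude sketch in plain Python rather than a real probabilistic programming language: the model is an ordinary program containing random choices, and inference (here, simple rejection sampling) asks which hypotheses are consistent with an observation. The coin-flip example and all numbers are illustrative.

```python
import random

# A probabilistic "program": a generative model written as ordinary code
# with random choices, plus crude rejection-sampling inference.

def model():
    biased = random.random() < 0.5             # prior: the coin is biased or fair
    p_heads = 0.8 if biased else 0.5
    heads = sum(random.random() < p_heads for _ in range(10))
    return biased, heads

# keep only the runs that match the observation (8 heads out of 10)
accepted = [biased for biased, heads in (model() for _ in range(100_000)) if heads == 8]
print(sum(accepted) / len(accepted))           # posterior P(biased | 8 heads), roughly 0.87
```

Real probabilistic programming systems replace the brute-force sampling above with far more efficient inference, which is where the combination with deep learning that Russell anticipates comes in.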

But Russell also acknowledged that the approach is still in its early days:

“We are doing the research needed to create scalable ‘safe’ technology. So it would be premature to pass laws requiring that systems be built according to design templates that are not yet ready.”

 

 

And speaking of regulation, let’s end on this note. Last week, U.S. tech giant Google submitted its 45-page feedback to the EU’s White Paper on AI. Click here for the document. Here are the key points:

• Don’t try to reinvent the wheel, please: That’s Google’s main message. Couched in diplomatic language, the document warns repeatedly against over-regulation. At the same time, it urges EU policymakers to adapt the bloc’s existing rules for high-risk AI in fields like health care, finance or transport rather than create a new set of oversight tools from scratch.

• The backdrop: The European Union is planning to release its first laws drafted specifically for AI early next year. In February, Brussels published a nonbinding “White Paper,” which spelled out preferred options and asked for feedback from the industry, civil society and governments. The document included a suggestion that high-risk AI technology should undergo rigorous testing and might even have to be retrained with fresh data before it can hit the EU’s internal market. Those so-called “impact assessments” have been a thorn in the side of the tech industry.

• Not a good idea, according to Google: “Creating a standalone assessment scheme for AI systems would risk duplicating review procedures that already govern many higher risk products … adding needless complexity,” the company warns, stressing that such “ex ante conformity assessment requirements as recommended by the White Paper strike the wrong balance.” Instead of regulating ahead of time what kind of data an AI system can be trained with, regulators should evaluate the technology by its outcomes, Google’s AI experts say.

• But if Brussels does go down that road, the EU should create less-regulated sandbox environments where firms can do research and test products in the early stages of development, the company adds. “If such pre-assessment testing is not permitted, it may result in organisations taking an unduly precautionary stance when considering investments in new products, which could hinder innovation,” the company writes, adding a warning that “this would significantly weaken Europe’s position vis-à-vis global competitors.”

