Artificial intelligence gets a yardstick (of sorts) … but the malicious use of AI looks more and more inevitable

If you would like to read a French version of this article, please send us an e-mail at [email protected]

If you would like the German version of this post, please send us an e-mail at [email protected]

If you would like a Dutch version of this post, please send us an e-mail at [email protected]


23 February 2018 (Barcelona, Spain) – Over the last four weeks my media team and I have covered four major events:

  • the International Cybersecurity Forum in Lille, France
  • Legalweek in New York, New York
  • the Artificial Intelligence Workshops at ETH in Zurich, Switzerland
  • the Munich Security Conference in Munich, Germany

It has been a cornucopia of information, talent and new contacts. We have video footage from Lille and Munich in production with detailed reviews of both events to come.

But for the moment we are prepping for the Mobile World Congress here in Barcelona – an event that will touch on all the technology fields we cover: artificial intelligence, cybersecurity, e-discovery, mobile/digital media, plus just about every facet of social media. Organizers expect 110,000+ attendees this year, with 700+ educational sessions and 3,300 vendors booked across the 260,000 square meters of exhibit hall space (roughly a 20-minute walk from the north end to the south end across the eight exhibit halls). Even with my 4-member crew, that is a lot to cover.

For this post, two items of note from the last four weeks …

Artificial intelligence gets a yardstick

There has been no standard yardstick for measuring artificial intelligence’s progress. But Stanford University’s Engineering Department thought “hold on; we can do that”. We stumbled across a few Stanford members at ETH and, as they explained it, AI has progressed so rapidly over the past two decades that they decided to create their own standard, the AI Index, which they hope will become THE comprehensive baseline for the state of artificial intelligence.

Eric and Mary Horvitz established the AI100 – the One Hundred Year Study on Artificial Intelligence, the project from which the index grew – in 2014, with the initial purpose of predicting AI’s effects on a 2030 urban environment. While working on the index, the team found a dramatic increase in AI startups and in AI’s ability to mimic human behavior. The AI Index is meant to be a measurement tool that charts progress and encourages conversation about the field’s potential, and it pulls information from many sources. They pointed us to a recent article in Stanford Engineering magazine entitled A New Artificial Intelligence Index Tracks The Emerging Field, which notes:

The AI Index tracks and measures at least 18 independent vectors in academia, industry, open-source software and public interest, plus technical assessments of progress toward what the authors call “human-level performance” in areas such as speech recognition, question-answering and computer vision – algorithms that can identify objects and activities in 2D images. Specific metrics in the index include evaluations of academic papers published, course enrollment, AI-related startups, job openings, search-term frequency and media mentions, among others.

Read the article; it’s rather interesting. While the AI Index pulls information from scientific and technology communities, it also draws on the likes of the Social Progress Index, the Middle East peace index and the Bangladesh empowerment index. So the AI Index will be a historical document … and it just might predict future trends in the field. My guess is that, so long as it is maintained, it will become integral to the industry.
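For the technically inclined, here is a minimal sketch – purely illustrative, with invented numbers, and NOT the AI Index’s actual data, code or methodology – of how an index like this can put disparate metrics (papers published, course enrollment, startups) on a common scale by normalizing each series to a base year:

```python
# Purely illustrative sketch with made-up numbers; not the AI Index's
# actual data or methodology. It shows one common technique for an
# index of this kind: normalize each metric to a base year so that
# very different quantities can be compared as growth multiples.

from dataclasses import dataclass
from typing import List

@dataclass
class MetricSeries:
    name: str
    years: List[int]     # observation years, earliest first
    values: List[float]  # raw counts for each year (hypothetical)

    def normalized(self) -> List[float]:
        """Divide every value by the base-year value, so each series starts at 1.0."""
        base = self.values[0]
        return [v / base for v in self.values]

# Hypothetical series, loosely inspired by the kinds of metrics the article lists
metrics = [
    MetricSeries("AI papers published", [2010, 2013, 2017], [10_000, 15_000, 30_000]),
    MetricSeries("Intro AI course enrollment", [2010, 2013, 2017], [500, 1_800, 5_500]),
    MetricSeries("AI startups founded", [2010, 2013, 2017], [70, 200, 800]),
]

for m in metrics:
    growth = m.normalized()[-1]
    print(f"{m.name}: {growth:.1f}x its {m.years[0]} level by {m.years[-1]}")
```

Normalizing to a base year is the standard trick that lets an index chart academic, industrial and public-interest signals on one graph, which is essentially what the Stanford team is doing at much larger scale.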

The inevitable malicious use of AI rolls on

A few years ago I earned media credentials for the World Economic Forum in Davos and the Munich Security Conference … a long process in both cases. Both events provide a tsunami of content, plus fascinating conversations … some of which you are not part of and simply overhear. Still, they are two of my “must attend” events each year. You’ve read my Davos report; my Munich report will be out over the weekend.

Artificial intelligence and cybersecurity dominated both events, but one report in particular caught the attention and imagination of the Munich participants: The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. It was officially published a few days ago but was already making the rounds last weekend.

It was written by 26 researchers from several organizations including OpenAI, Oxford and Cambridge universities, and the Electronic Frontier Foundation. It performs a valuable … if scary … service in flagging the threats from the abuse of powerful technology by rogue states, criminals and terrorists: drones using facial recognition technology to hunt down and kill victims; information being manipulated to distort the social media feeds of targeted individuals; cleaning robots being hacked to bomb VIPs.

Yes, the potentially harmful uses of AI are as vast as the human imagination. As one of my colleagues said, “well, they have pretty much written the entire next series of Black Mirror”. Where the report is less compelling is in offering solutions.

It is an interesting piece and worth the read, if only because much of the public concern about AI focuses on the threat of an emergent superintelligence and the mass extinction of our species. There is no doubt that how to “control” artificial general intelligence, as it is known, is a fascinating and worthwhile debate. But attend some of the more noteworthy AI conferences and you’ll hear most AI experts … the ones the AI community respects … say this is probably a problem for the second half of the 21st century.

But the crux of the report is compelling: we should already be worrying today about the abuse of relatively narrow AI, because human evil, incompetence and poor AI design will remain a far bigger threat for the foreseeable future than some omnipotent and omniscient Terminator-style Skynet.

Some reflections

Unlike any other major technological invention since at least the combustion engine, or even the movable-type printing press, computing and its network effects – now amplified by artificial intelligence – have transformed our world totally and will continue to do so. We may think that we control many aspects of this “information revolution” and its ramifications, but we do not.

Neil Postman (the American author, educator, media theorist and cultural critic) often said that advanced technology would reduce human history to a litany of unintended consequences and unforeseen side effects. That is obviously true in cyberspace, where the technology’s development has outstripped government’s ability to regulate and legislate around it. The speed of that development has had profound implications for national security, creating a problem that is driven from the bottom up.

Everything is ambiguous in cyberspace. Alexander Klimburg, in his recent book The Darkening Web, says we can’t even agree on how to spell the word itself, so it should be no surprise that we have difficulty defining it:

Spelling, as always, is a reflection of one’s preferences: those who spell “cyber space” as two words are implying that the domain is not an entirely separate or unique entity, just as writing “cyber security” as two words implies that it is just another form of security like “maritime security”, and not special at all. Those, like myself, who believe that cyberspace has unique identifiers that make it like a physical domain (like airspace or the seas) spell it as one word, just as the contentious term “cybersecurity” is given an in-depth treatment on account of its overall rejection among some parts of the civilian technical community.

Over the last few weeks two of my tenets have certainly been reinforced:

  • the activities of nation states and quasi/faux nation states are making cyberspace a domain of conflict, and therefore increasingly threatening the overall stability and security not only of the internet but also of our very societies.
  • each aspect of cybersecurity (say, cyber crime, intelligence, military issues, Internet governance, or national crisis management) operates in its own silo, belonging, for instance, to a specific government department or ministry. Each of these silos has its own technical realities, policy solutions, and even basic philosophies. Even if you become proficient in one area, it is likely that you will not have the time to acquire more than a rudimentary knowledge of the others. Your part of the “elephant” will dominate, and inevitably distort, how you see this beast.

I have also come to realize that the media mantra of power grids crashing and armies immobilized by cyberattacks is puny compared to the comprehensive psychological operations at work. An article in Time magazine (of all places) from 1995 … written in the Stone Age of the Internet, and now lost to history except in hard copy, found in a bookshop in (wait for it) Saint-Malo, France … made two points that still hold true today. The writer had an instinctive grasp of the two different shades of the debate:

  • On the one hand, the kinetic-effect “cyberwar” discourse, which involved hacking and bringing down critical infrastructure and the like
  • On the other, the psychological-effect “information war” discourse, which the writer … even back then … suggested could covertly shape marketing and political campaigns at a new and much larger scale, using propaganda and other means to subtly influence entire populations.

The writer grasped many of the existential issues this development presented, unconsciously paraphrasing the philosopher Marshall McLuhan when he said that “infowar may only refine the way modern warfare has shifted toward civilian targets.” And, most presciently, the author asserted that “an infowar arms race could be one the U.S. would lose because it is already so vulnerable to such attacks. Indeed, the cyber enhancements that the military is banking on… may be chinks in America’s armor.”

Indeed, as I have written many times before, the worst possible cyber event may not be that the lights go out … but that they never go out: that it pays for our adversaries to force us to slip into what Klimburg calls “an environment of Orwellian proportions”.

NOTE: We did video interviews with Klimburg and with John Frank, Vice President of EU Government Affairs for Microsoft; both addressed the whole “information war” discourse and states’ abiding interest in conflict, both physical and otherwise. Coming soon.

For me, the potential power of AI – running on quantum computing infrastructure that is yet to be deployed but will be immediately powerful, meshed with unimaginable data storage capacity – could be an even more immediate threat than the Internet turning into Klimburg’s darkening web. My vision remains the same: a slow but inexorable shift to darkness, a web of informational control. A long road, yes, but inevitable.
