Suggested sources to get a grip on artificial intelligence


1 September 2017 – There is a lot of misleading and even false information about artificial intelligence out there, ranging from appallingly bad journalism to overhyped marketing materials to quotes from misinformed celebrities.

AI is a complex topic moving at an overwhelming pace (even as someone working in the field, I find it impossible to keep up with everything that is happening). Beyond that, there are those who stand to profit off overhyping advances or drumming up fear.

I want to recommend several credible sources of accurate information: first, three books to get you started, and then some websites you can visit on a regular basis.

I have 57 books on AI in my library, but I chose these particular three because they are accessible to anyone, even if you aren’t a programmer or don’t work in tech. I will update this page from time to time:

THREE BOOKS YOU SHOULD START WITH 

The Control Revolution: Technological and Economic Origins of the Information Society
Author: James Beniger

If you have worked in any element of information services, or have studied the “Information Age” or Information Theory, you have probably read this book. Beniger’s premise was simple: to explain how we found ourselves living in an Information Society. He chronicles how the collection, processing, and communication of information came to play an increasingly important role in advanced industrial countries relative to the roles of matter and energy. It is a monumental book, tracing the origin of the Information Society to major economic and business crises of the past century. In the United States, applications of steam power in the early 1800s brought a dramatic rise in the speed, volume, and complexity of industrial processes, making them difficult to control. Scores of problems arose: fatal train wrecks, freight cars misplaced for months at a time, lost shipments, an inability to maintain high rates of inventory turnover. These problems led to the first “information systems” to monitor activity and processes. The Industrial Revolution, with its ballooning use of energy to drive material processes, required a corresponding growth in the exploitation of information … and its control: “the Control Revolution”.

The book is impressive not only for the breadth of its scholarship but also for the subtlety and force of its argument. Borrowing a blurb from the MIT Technology Review:

Between the 1840s and the 1920s came most of the important information-processing and communication technologies still in use today: telegraphy, modern bureaucracy, rotary power printing, the postage stamp, paper money, typewriter, telephone, punch-card processing, motion pictures, radio, and television. Beniger shows that more recent developments in microprocessors, computers, and telecommunications are only a smooth continuation of this Control Revolution. Along the way he touches on many fascinating topics: why breakfast was invented, how trademarks came to be worth more than the companies that own them, why some employees wear uniforms, and whether time zones will always be necessary.


A Mind at Play: How Claude Shannon Invented the Information Age
Authors: Jimmy Soni and Rob Goodman


Just published this year, it has become one of my favorite books. It is often said that the best biographies of scientists, like those of athletes, are all about the journey. We know the breakthrough will be made, the trophy won. The pleasure comes from reliving the moment the world changed. And to make the discovery of a dry theory into a page-turner … well, that presents a particular challenge. Especially if you include the occasional equation. But the authors did it.

In brief, the theory that put Claude Shannon in the history books was foundational for the information age. Shannon’s magnum opus, The Mathematical Theory of Communication, hardly sounds like a work for the masses, though Jimmy Soni and Rob Goodman say that its publication in the late 1940s was followed by a wave of popular interest – something that barely seems imaginable in our own times.

Shannon had an intense interest in mathematics, the field where he made his mark. But the qualities that defined him were a life-long fascination with the mechanical, married to a facility for abstraction. It fell to Shannon to come up with the ultimate abstraction of the information age, the ones and zeros of digital technology. Breakthroughs in mathematics, according to Soni and Goodman, are the preserve of the young. Shannon’s master’s thesis, produced at the age of 21, certainly bears that out. He was the first to see that logic could be reduced to a series of instructions processed by simple electrical components, represented by switches.
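To make the thesis idea concrete, here is a toy sketch of my own (not from the book): switches wired in series behave like logical AND, switches wired in parallel like logical OR, and from those two wirings you can build any logic function — the insight at the heart of Shannon’s 1937 thesis.

```python
# Toy model of Shannon's insight: Boolean logic as switch circuits.
# A closed switch passes current (True); an open switch blocks it (False).

def series(*switches):
    # Switches in series: current flows only if ALL are closed (logical AND).
    return all(switches)

def parallel(*switches):
    # Switches in parallel: current flows if ANY is closed (logical OR).
    return any(switches)

# Example: the classic two-switch "hallway light" circuit (XOR), built
# purely from series/parallel wiring of switches and their complements.
def hallway_light(a, b):
    return parallel(series(a, not b), series(not a, b))

for a in (False, True):
    for b in (False, True):
        print(a, b, "->", hallway_light(a, b))
```

The function names are mine; the point is only that arranging physical switches is equivalent to evaluating Boolean expressions, which is exactly the equivalence Shannon proved.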

Shannon’s main contribution – his masterful insight – was that information can be separated from both the meaning of a particular message and the medium over which it is transmitted. Any message – whether a moving image, a spoken sentence or a piece of text – could be reduced to a code expressed in binary digits. Borrowing from a review in the London Review of Books:

Converting information into binary code, as Shannon proposed, offered two practical answers. One was that code could be stripped to the barest minimum needed to communicate a specific meaning. This was the idea behind digital compression, the technique that makes it possible to send today’s digital video signals over limited bandwidth. The other insight was that information could be transmitted perfectly over any line, even in the face of interference. All it took was to enhance the digital code, adding redundant “signal” to make up for the loss in transmission.
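The redundancy idea can be illustrated with a minimal sketch (my own toy example, not Shannon’s construction): a 3-bit repetition code, which survives one flipped bit per codeword by majority vote.

```python
# Toy error-correcting code: repeat each bit three times, decode by majority
# vote. Practical codes (Hamming, Reed-Solomon, LDPC) are far more efficient,
# but the principle -- added redundancy defeats noise -- is Shannon's.

def encode(bits):
    # Each message bit becomes three identical transmitted bits.
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(coded):
    # Majority vote over each group of three recovers the original bit
    # even if one of the three was corrupted in transit.
    return [1 if sum(coded[i:i + 3]) >= 2 else 0
            for i in range(0, len(coded), 3)]

message = [1, 0, 1, 1]
sent = encode(message)
sent[4] ^= 1                      # noise flips one bit in transit
assert decode(sent) == message    # the message still comes through perfectly
```

The trade-off is the one Shannon quantified: this code triples the bandwidth used, and his theory shows how much redundancy is actually necessary for a given noise level.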

The only bone I will pick with Shannon … I know, the nerve! … is that he was writing at the same time major research into how the brain works was underway at several universities, and he helped popularize the metaphor of “the brain as a processor of information”, along with other computer terminology to describe the brain. Given that his work was followed by dramatic advances in both computer technology and brain research, and by an ambitious multidisciplinary effort to understand human intelligence, it is probably no surprise that the idea that humans are, like computers, information processors became firmly rooted. The information-processing metaphor came to dominate thinking about human intelligence, both on the street and in the sciences. That metaphor is incorrect … but I will reserve that for another post.


Machine Learning: The New AI (The MIT Press Essential Knowledge series)
Author: Ethem Alpaydin

This is a good read for those who want a quick sense of why so much attention has been given to machine learning in AI research and development. It is a “general reader” introduction that doesn’t go into the math or algorithms in detail (no decision trees, Bayesian logic, or even pseudocode), so it is perfect for the math averse. Ethem’s book offers a very concise overview of the subject, describing the evolution of AI, explaining important learning algorithms, and presenting example applications.

Ethem is a machine learning expert. He is a Professor in the Department of Computer Engineering at Bogaziçi University, Istanbul, and the author of the widely used textbook Introduction to Machine Learning (MIT Press), which I used at ETH Zurich in my artificial intelligence program. I met him last year in Oslo, Norway at the “Technology and the Human Future” conference which featured scores of experts who write about the dramatic proliferation of AI technology and information systems.

The following are a few web sources I have found very helpful:

General Interest

  • Tom Simonite’s posts for Wired magazine – thoroughly researched, crisp writing.
  • Jack Clark’s email newsletter, Import AI, provides highlights and summaries of a selection of AI news and research from the previous week. You can check out previous issues (or sign up) by clicking here. Jack Clark is Strategy & Communications Director at OpenAI.
  • Mariya Yao’s writing on Topbots and in Forbes Magazine. Mariya is CTO and head of R&D for Topbots, a strategy and research firm for applied artificial intelligence and machine learning. Fun fact: Mariya worked on the LIDAR system for the 2nd place winner in the DARPA grand challenge for autonomous vehicles.
  • Dave Gershgorn’s writing on Quartz.


Interactions between AI and society

  • Zeynep Tufekci, a professor at UNC-Chapel Hill, is an expert on the interactions between technology and society. She shares a lot of important ideas on Twitter, or you can read her New York Times op-eds here.
  • Kate Crawford is a professor at NYU, principal researcher at Microsoft, and co-founder of the AI Now Research institute, dedicated to studying the social impacts of AI. You can follow her on Twitter here.

Deconstructing Hype

I also want to highlight just a few great examples of AI researchers thoughtfully deconstructing the hype around some high-profile AI stories in the past few months, in an accessible way:

  • Denny Britz provides a balanced perspective on OpenAI’s bot that beat humans playing Defense of the Ancients, a popular computer game. Yes, this was significantly easier than beating a human champion at the game of Go. As Denny says, “we did not make sudden progress in AI because our algorithms are so smart – it worked because our researchers are smart about setting up the problem in just the right way to work around the limitations of current techniques”. It is a technically dense piece, I grant you, but quite instructive on “training time” and bots.
  • Stephen Merity gives a thoughtful deconstruction of how the DeepCoder story degraded in accuracy. This is a favorite of mine because it addresses the difficulty of covering hard (“tech thick”) subjects like AI. As Stephen points out in his piece: “I know a number of journalists and appreciate their work in communicating these advances to a wide audience. It’s a hard job to convey complex concepts and in many cases they’re not at fault for how it becomes warped by the broader community. Sadly, regardless of the exact way these research stories are warped, most AI and ML stories in the media will result in an audible groan from researchers”.
  • Jeremy Howard addresses the highly controversial research on whether deep learning can detect sexual orientation.

A brief note about Twitter

Twitter is quite useful for keeping up on machine learning news, and many people share surprisingly deep insights there that I often can’t find elsewhere. I was skeptical of Twitter before I started using it (now in my 10th year). It now occupies a useful and distinct niche for me, right next to LinkedIn. The hardest part is determining whom to follow on AI. I suggest you take a look at Jeremy Howard’s favorites (click here) and Rachel Thomas’s (click here), both of whom I follow. Whenever I read an article I like or hear a talk I like, I look up the author or speaker on Twitter and LinkedIn to see if I find their other posts and tweets interesting. If so, I follow them.
