Man is not Bayesian at all: musings on artificial intelligence

Garbage in, garbage out

23 July 2018 (Hania, Crete, Greece) – Over the weekend I finally had a chance to read through a stack of Kahneman-Tversky research and finish the Michael Lewis book The Undoing Project: A Friendship That Changed Our Minds, which tells the story of the psychologists Amos Tversky and Daniel Kahneman.

The inability of some intelligent people to learn the right lessons from the past isn’t surprising. In 1972, Kahneman-Tversky research showed: “In the evaluation of evidence, man is not Bayesian at all. Not Bayesian at all.” This means, simply, that people do not think probabilistically. Kahneman even won the 2002 Nobel Prize in Economics for these conclusions.
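For concreteness, here is the update that Bayes’ rule prescribes in the famous cab problem from their base-rate studies. The numbers are theirs; the Python below is only an illustrative sketch.

```python
# Cab problem setup: 85% of a city's cabs are Green, 15% are Blue. A witness
# says the hit-and-run cab was Blue, and tests show the witness identifies
# colors correctly 80% of the time. A Bayesian must weigh both facts.

prior_blue = 0.15             # base rate: P(cab is Blue)
p_say_blue_given_blue = 0.80  # witness accuracy
p_say_blue_given_green = 0.20

# Bayes' rule: P(Blue | witness says "Blue")
evidence = (p_say_blue_given_blue * prior_blue
            + p_say_blue_given_green * (1 - prior_blue))
posterior_blue = p_say_blue_given_blue * prior_blue / evidence

print(f"{posterior_blue:.2f}")  # 0.41
# Most subjects answer 0.80, ignoring the base rate entirely. That gap is
# what "man is not Bayesian at all" means in practice.
```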

Yet, in 2018, the field of AI insists on modeling human intelligence as probabilistic, operating like a Bayesian/Markovian network. The assumption is that probability will get machines to approximate, and eventually equal, human causal reasoning, language understanding and good decision-making. But failure is the inevitable outcome of assuming intelligence and language are probabilistic. Just to cite two recent examples from scores (a toy sketch of such a Markovian model follows them):

Google’s Fighting Hate And Trolls With A Dangerously Mindless AI 

Bias is everywhere: Tech companies just woke up to a big problem with their AI 
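To make concrete what a “Markovian network” model of language amounts to, here is a deliberately tiny bigram chain in Python. The corpus and names are invented for illustration; real systems are vastly larger, but the principle of stringing words together by conditional frequency, with no model of meaning, is the same.

```python
import random
from collections import defaultdict

# Toy sketch (invented corpus, illustrative only) of a bigram Markov chain,
# the simplest probabilistic model of language.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Estimate P(next word | current word) by raw frequency of observed bigrams.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def generate(start, length=8):
    """Sample a word chain; each step depends only on the previous word."""
    word, out = start, [start]
    for _ in range(length - 1):
        word = random.choice(transitions.get(word, corpus))
        out.append(word)
    return " ".join(out)

print(generate("the"))  # locally plausible word sequences, zero understanding
```

The output can look fluent at a glance, which is exactly the trap: conditional frequency mimics language without ever touching meaning.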

There are people in the Valley who don’t even begin to realize how unscientific and ignorant they are, or what blocks to human progress they have become. Prob+stats error rates keep being tweaked when the scientific truth is that humans do not think like bell curves and utility curves. The second article I noted above is particularly insightful: in answering the question of why some in the Valley are “getting it” now, after all this time, it posits that maybe the world is just becoming more tech-literate and conscious.

Google is probably the worst because the probabilistic approach is what Google knows best, and they are very aggressive in making their case: the “if a hammer is your only tool, everything looks like a nail” mentality. With their PR power, they are dragging all the young engineers into this kind of thinking. It is about defending the commitment of all data science people to their field. They are very good at capitalizing on their vast amounts of statistical data. And statistics is no longer a sexy word; deep learning sounds better.

And there is an analogy to another issue these “Stat Heroes” bring to their insistence that computers can replicate the brain. The most common utterance is that computers instantiate abstract objects and their relationships in physical objects and their motion, and that a computer can therefore simulate any physical process.

Well, not quite. We can get a computer to draw an N-dimensional object, say an elephant, and animate it (with a physics engine) to move in any direction we want, including rotating it and flipping it to a mirror image. We can even get the computer to “pattern recognize” other elephants that might have been drawn into its database memory. However, the computer still cannot (ok, yet) simulate the internal physical processes it feels and remembers about that object, or relate them to the internal physical processes it perceives about that object in isolation as well as in combination and comparison with other objects. That would be an example of the computer being quantum entangled with the information inside it: having a “machine consciousness” and a values assessment of what that information represents to the machine, independent of how the human user sees the information and independent of how other machines see it.
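To be clear about the half that really is trivial, here is a minimal sketch (NumPy, with made-up coordinates standing in for the “elephant”) of rotating a stored shape and flipping it to a mirror image. It is pure symbol pushing on coordinates; nothing in it gives the machine an internal sense of the object.

```python
import numpy as np

# Illustrative stand-in for an object model: a few 2-D vertices.
points = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 1.0], [0.5, 1.5]])

theta = np.pi / 4  # rotate 45 degrees
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
mirror = np.array([[-1.0, 0.0],  # reflect across the y-axis
                   [ 0.0, 1.0]])

rotated = points @ rotation.T   # rotate every vertex
mirrored = points @ mirror.T    # mirror-image every vertex

print(rotated)
print(mirrored)
```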

This becomes important simply in light of the articles on “fake news” and toxic comments, and of advertisers requesting more granular insight into which ads are shown and why. It is clear that the data sets and algorithms of Google, Facebook and other tech companies cannot understand consumers and language as may have been assumed. Yet the algorithms rule.

And I think that (granted, I have briefly stated a very nuanced problem) lies at the heart of the dystopian algorithmic future we all read about. So many complex technological systems now orchestrate many, if not most, of the consequential decisions in our lives. The always-learning, AI-powered technology (no matter how wrong) behind our search engines and newsfeeds quietly shapes and reshapes the information we discover and even how we perceive it, and mounting evidence suggests it might even be capable of influencing the outcome of our elections.
