10 things I learned about technology this past decade. And 52 things I learned in 2019 – tech and otherwise.

 

with many thanks to my media team who have made the year so successful and fulfilling:

Eric De Grasse, Chief Technology Officer
Chloe Demos, Media Content Manager/Media Analytics
Gregory Lanzenberg, Media/Video Operations Coordinator
Alexandra Dumont, Operations Manager of The Project Counsel Group
Angela Gambetta, Manager of E-Discovery Operations

with very special thanks to videographer extraordinaire Marco Vallini and his video crew,
plus my graphics designers Silvio Di Prospero and Catarina Conti

and … thanks to all my readers across North and South America + EMEA and Asia Pacific. I appreciate you taking the time to read (yet another) perspective on tech and other stuff from a guy who lives on multiple shores

* * * * * * * * * *

“Scientific topics receiving prominent play in newspapers and magazines over the past several years include molecular biology, artificial intelligence, artificial life, chaos theory, massive parallelism, neural nets, the inflationary universe, fractals, complex adaptive systems, superstrings, biodiversity, nanotechnology, the human genome, expert systems, punctuated equilibrium, cellular automata, fuzzy logic, space biospheres, the Gaia hypothesis, virtual reality, cyberspace, and teraflop machines. … Unlike previous intellectual pursuits, the achievements of this technical culture are not the marginal disputes of a quarrelsome mandarin class: they will affect the lives of everybody on the planet.”

20 December 2019 (Paris, France) – You might think that the above list of topics is part of a piece written last year, or perhaps this year. But you would be wrong. It was the central point in an essay published 28 years ago in The Los Angeles Times by John Brockman, a literary agent (best friends with the likes of Andy Warhol and Bob Dylan) and author specializing in scientific literature, most famous for founding the Edge Foundation, an organization that aims to bring together people working at the edge of a broad range of scientific and technical fields. [If you ever get invited to an Edge dinner or event, just go. You might find yourself sitting next to Richard Dawkins, Daniel Dennett, Jared Diamond, John Tooby, David Deutsch, Nicholas Carr, Alex Pentland, Nassim Nicholas Taleb, Martin Rees, A.C. Grayling, etc.]

Brockman certainly got it right. But he may not have been able to predict the dizzying pace. The tech news/announcement cycle moves so fast that by the time it takes to research, write, reflect, edit, and publish, it’s already yesterday’s news. And in 2019, yesterday’s news might as well be last year’s news.

Yes, the breathtaking advance of scientific discovery and technology has the unknown on the run. Despite what Steven Pinker says, we seem destined to live in an age of hate. We’re defined much more by what we reject than what we adore. In the U.S., extreme polarization has guaranteed that nearly 50 percent of the country can hate the other nearly 50 percent of the country - and the feeling is mutual.

Our technological tools make it ever easier to weaponize that antipathy. Facebook posts that generate intense emotion are much more likely to be shared than those that appeal to the cooler ends of our psyches. And we know: intense emotion for a human being usually means hate rather than love, though it shouldn’t. We have come to realize that it is difficult to remove by logic an idea not placed there by logic in the first place.

But over the decade we ever-so-slowly realised the only constant has been disruptive innovation, that infernal term used by the management author Clayton Christensen in 1995 to describe how incumbents are bypassed by technology upstarts. Rapid technological change was accompanied by globalisation and (until recently) lowering of barriers to trade. Innovators such as Facebook, Google and Uber sprinted around the world.

But … hold on! Marx and Engels wrote at a similarly protean time for enterprise, during the Victorian era of globalisation. They did not approve, but they recognised its revolutionary appeal:

The bourgeoisie, by the rapid improvement of all instruments of production, by the immensely facilitated means of communication, draws all, even the most barbarian, nations into civilisation.

And when I was at the Berkman Klein Center for Internet & Society at Harvard University in November, we discussed an even bigger theme: how over the past decade our culture has been steadily absorbed into the discussion of business. There are “metrics” for phenomena that cannot be metrically measured. Numerical values are assigned to things that cannot be captured by numbers. Economic concepts go rampaging through noneconomic realms: economists are our experts on happiness! Where wisdom once was, quantification will now be. Quantification is the most overwhelming influence upon our contemporary understanding … well, certainly American understanding … of everything. It is enabled by the idolatry of data, which has itself been enabled by the almost unimaginable data-generating capabilities of the new technology.

 

The distinction between knowledge and information is a thing of the past. And in today’s culture, there is no greater disgrace than to be a thing of the past. Beyond its impact upon culture, the new technology penetrates even deeper levels of identity and experience, to cognition and to consciousness. And even as technologism, which is not the same as technology, asserts itself over more and more precincts of human life, so too does scientism, which is not the same as science. As Virginia Eubanks noted last year in her book Automating Inequality:

The notion that the nonmaterial dimensions of life must be explained in terms of the material dimensions, and that nonscientific understandings must be translated into scientific understandings if they are to qualify as knowledge, is increasingly popular inside and outside the university, where the humanities are disparaged as soft and impractical and insufficiently new. The contrary insistence that the glories of art and thought are not evolutionary adaptations, or that the mind is not the brain, or that love is not just biology’s bait for sex, now amounts to a kind of heresy. So, too, does the view that the strongest defense of the humanities lies not in the appeal to their utility — that literature majors may find good jobs, that theaters may economically revitalize neighborhoods — but rather in the appeal to their defiantly nonutilitarian character, so that individuals can know more than how things work, and develop their powers of discernment and judgment, their competence in matters of truth and goodness and beauty, to equip themselves adequately for the choices and the crucibles of private and public life.

Or Carl Miller in his book The Death of the Gods:

No culture is philosophically monolithic, or promotes a single conception of the human. A culture is an internecine contest between alternative conceptions of the human.

We hear constant chatter about implementing algorithmic decision-making tools to enable social services or other high-stakes government decision-making so as to “increase efficiency or reduce the cost to taxpayers”. Which, of course, shall be “implemented ethically”. But it’s bullshit. As numerous presenters elucidated at the Conference and Workshop on Neural Information Processing Systems (NIPS), the mega-AI conference I have quoted from numerous times this past year, the “ethical AI train has left the station”. Noted one presenter:

In AI and ethics we face a similar issue as the age-old security dilemma: speed vs. security. These things and their commercial application are simply moving too fast. The same for AI. Same old story: new technology arrives on the scene, society is often forced to rethink previously unregulated behavior. This change often occurs after the fact, when we discover something is amiss. The speed at which this tech is rolled out to the public can make it hard for society to keep up. When you’re trying to build as big as possible or as fast as possible, it’s easy for folks who are more skeptical or concerned to have the issues they’re raising left by the wayside, not out of maliciousness but because, ‘Oh, we have to meet this ship date’.

And we are fighting governments and/or bad actors pushing self-interested agendas against the greater good. There is, in effect, a war going on that is more threatening than nuclear war. Look at the development of AI agents trained on something like OpenAI’s Universe platform, learning to navigate thousands of online web environments … and then being tuned to press an agenda. This could unleash a plague of intelligent bots and trolls onto the web in a way that could destroy the very notion of public opinion.

As far as AI bias goes, our current data sets always stand on the shoulders of older classifications. ImageNet, for example, draws its taxonomy from WordNet. If the older classifications are inherently biased, that bias carries into the current data sets, and the biases may compound.
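To make the “standing on the shoulders of older classifications” point concrete, here is a toy sketch in Python. The category names are invented for illustration; they are not ImageNet’s or WordNet’s actual labels.

```python
# Toy illustration: a dataset that inherits its label hierarchy from a legacy
# taxonomy also inherits whatever value judgments were baked into that taxonomy.
# All category names here are invented; they are not real ImageNet/WordNet labels.

legacy_taxonomy = {
    # parent category    -> child labels defined decades ago
    "person.respectable": ["doctor", "engineer", "teacher"],
    "person.undesirable": ["loiterer", "drifter"],   # a value-laden, biased split
}

def inherit_labels(taxonomy):
    """Flatten the legacy hierarchy into per-item labels for a 'new' dataset."""
    labels = {}
    for parent, children in taxonomy.items():
        for child in children:
            labels[child] = parent   # each child silently carries its parent's judgment
    return labels

new_dataset_labels = inherit_labels(legacy_taxonomy)

# Any model trained on these labels learns the old split as if it were ground truth,
# and systems built on top of that model compound the bias further downstream.
print(new_dataset_labels["drifter"])   # -> person.undesirable
```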

In the U.S., policymakers do not even “get” the real costs, whether we’re talking about judicial decision making (e.g., “risk assessment scoring”) or modeling who is at risk for homelessness. Algorithmic systems don’t simply cost money to implement. They cost money to maintain. They cost money to audit. They cost money to evolve with the domain they’re designed to serve. They cost money to train their users to use the data responsibly. Above all, they make visible the brutal pain points and root causes in existing systems that require an increase in services. Otherwise, all these systems are doing is helping divert taxpayer money from direct services to lining the pockets of for-profit entities under the illusion of helping people. Worse, they’re helping usher in a diversion of liability because, time and time again, those in powerful positions blame the algorithms.

Yes, yes, yes.  Any social scientist with a heart desperately wants to understand how to relieve inequality and create a more fair and equitable system. So of course there’s a desire to jump in and try to make sense of the data out there to make a difference in people’s lives. But to treat data analysis as a savior to a broken system is woefully naive.

My issue is that technology has allowed the extraordinary to become quotidian. And so I grasp for a semblance of hope for the future. We aren’t necessarily doomed, but we will be if we can’t see how we’ve allowed the absurd to flourish in ways both macro and micro.

Looking back on a decade: 10 things I learned

And we come to the close of a decade. As the decade began, there were reasons to be optimistic: America had elected its first black president, and despite a global recession just two years earlier, the world hadn’t cascaded into total financial collapse. Obamacare, for all its flaws, was passed, and then came the Iran deal and the Paris climate accords. Sure, there were danger signs:

• in the U.S., the anger of the tea party

• in Europe the stirrings of nationalism once again

• the slow hollowing out of legacy news media as the new “workrate” … a bonus-for-page-views structure, the logical outcome of ad-driven internet journalism … put print journalism into its downward spiral

• a troubling sense that somehow the bankers and finance guys “just got away with everything”

But then maybe the immediacy of social media gave some hope, at least if you listened to the chatter of the bright young kids in the San Francisco Bay Area trying to build a new kind of unmediated citizenship. Maybe everyday celebrity, post-gatekeeper, would change the world for the better. Some of that happened.

But we also ended up with the alt-right and Donald Trump, inequality, impeachment, and debilitating FOMO. And micro-targeted content. How did we get here? How did the Internet come to amplify every utterance or expressive act to a potentially planetary level? There’s a passage in Ernest Hemingway’s novel The Sun Also Rises which sums it up, although in a different context. A character named Mike is asked how he went bankrupt and he says:

“Two ways. Gradually, then suddenly.” 

The 2010-2019 decade saw the explosion and true impact of social media, for good and bad. Mostly bad. We realized its algorithmic amplification was simply optimized for outrage. It means favoring the extremes, the conspiracy theorists, the histrionic diatribes on all sides. It means fomenting mistrust, suspicion, and conflict everywhere. We’ve all seen it. We’ve all lived it. And some folks knew it way before we did:

I began to suspect that technology had become the masculine form of the word culture. If you pulled far enough back from the day to day debate over technology’s impact on society – far enough that Facebook’s destabilization of democracy, Amazon’s conquering of capitalism, and Google’s domination of our data flows start to blend into one broader, more cohesive picture – what does that picture communicate about the state of our culture, of our humanity today? We have clothed ourselves in newly discovered data, we have yoked ourselves to new algorithmic harnesses, and we are waking to the human costs of this new practice.

As I look back over this last decade, I find that so much of what really happened in the world of tech was not part of any forecast made at the start of the decade. In fact, in 2009, nobody could have predicted the following.

Yes, I learned much more than 10 things but in the interest of the “decade = 10” theme, my top “learns”:

1. Wow. Facebook and Twitter would have an impact on a presidential election? And that by the end of the decade Facebook, Twitter, and Google would be the focus of governments around the world over their management of customer data, privacy and security?

2. From a technology standpoint, the last decade was really all about the smartphone economy. Smartphones enabled brand new services like Uber, Lyft, food delivery services, etc. More importantly, smartphones became navigation tools, which allowed companies like Waze to be born, and the major smartphone vendors all built sophisticated GPS and mapping tools that became key drivers and services.

3. Mega learn: neoliberalism will not stop until it has reduced every human interaction to its simplest form, ready and able to be atomized, gamified, and rearranged for maximum profits.

4. Smartphones impacted every industry, but most especially the camera industry. Although Apple introduced the iPhone in 2007, it did not really take off until Apple began populating its App Store with iPhone apps in 2010. At that point, demand for iPhones really grew, and over the last decade, the cameras on smartphones became better and better. Now they are the primary way people take photos. The iPhone camera in 2007 was minimal at best, while the iPhone cameras today are verging on DSLR territory.

5. Streaming music, video and VOD, and their impact on the cable, cellular, music, and movie industries, continue apace. For the cable business, streaming has caused cord-cutting in droves and threatened the industry’s whole model. On the other hand, it has been a boon for the music industry and Hollywood, and the cellular industry has benefited greatly since its customers need more data for streaming services, bringing in revenues it did not have in the last decade.

6. Tesla, electric cars and the auto industry. Although Tesla started in 2003, the previous decade was more about R&D and setting up its supply chain and manufacturing. The Tesla Model S arrived in 2012, and over this last decade, demand for electric cars and electric hybrids has skyrocketed. In 2009, most bets were that Tesla would not survive. Now it is leading the entire auto industry into the era of electric cars. In the auto industry, we saw advances in back-up cameras, in-car navigation, automatic braking, and adaptive cruise control. In the next decade, automakers hope to bring us self-driving vehicles and vow to make their cars smarter and safer.

7. Smartphones as a ubiquitous pocket computer. In 2009, many were still on the fence about the importance and impact the iPhone and smartphones would have in the future. But by 2010, when Apple opened its App Store and began greater innovation on the iPhone design and functions, along with the introduction of smartphones from Samsung and others using Android, smartphones became the one digital device we all carry with us today. Its impact on our mobile computing capabilities and the way it is used to call rideshare services, order food delivery, make mobile payments, navigate to a designated location, etc. makes it the most valuable personal computer we have today.

8. Mobile payments. Enabled by smartphones (and in China today the way most people pay for just about anything they buy), they upended financial systems. And cryptocurrency, Bitcoin, and blockchain: they emerged mid-decade and picked up steam over the last five years, also affecting financial markets but obviously much more.

9. Digital extra-territoriality. Singapore has ordered Facebook to take down a post made by someone in Australia. In ten years we’ve gone from a world in which San Francisco ideas of free speech were applied globally by default to one in which all sorts of countries try to apply local laws to global platforms, in fields where there is often no consensus on what should be allowed. You say you want to regulate platforms? Whose regulation? Where? Oh, the regulation kerfuffle yet to come.

10. AI and ML. I left these for last. These technologies have been around for over 30 years but really only began to be used in broader applications and services over this last decade, all due to advances in computational power and numerical optimization routines. They have affected all they have touched: commerce/retail markets, finance, humanities, law, medicine, etc. More a bit later.

AND THE DECADE TO COME? 

Predictions are a fool’s game. Yes, I can see transformational technologies that will take us into the next decade and beyond but I certainly cannot predict their trajectories. To quote the great philosopher Donald Rumsfeld:

There are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don’t know we don’t know.

So I will note things that are developing and/or will drive the “known knowns”, the “known unknowns” and the “unknown unknowns” in the next decade:


An interesting development just this year, and much discussed at Cannes Lions: the next big social network is email. Newsletters are the new websites, and expect to see communities growing up around them in interesting new ways, led by companies like Substack. There has been a stunning rise of newsletters – and, I’d say, of subscription-based media generally – that will contribute to a new divide between those who see ads and those who pay to avoid them.

And taking it a step further, I have done what many others have done: set up a private channel in the cloud, via Linux, with access given to a set number of people whom I know or who have explicitly chosen to receive it. The post you are reading now goes to my “public list”, about 25,000 subscribers. But it is on my private channel that I feel most secure. Where I can be my most “real self”. Not that I hold much back in these public posts. But on my private channel I can share documents and links, etc. with select clients and friends. You cannot find my channel, and it cannot be indexed by Google or anybody else.
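My own setup is bespoke and I won’t detail it here, but the underlying idea is mundane: gate access with a shared token and tell well-behaved crawlers to stay away. A minimal sketch of that idea in Python (the token, port and content are placeholders, not my actual configuration):

```python
# Minimal sketch of a token-gated, non-indexed "private channel".
# The token, port, and content below are placeholders for illustration only.
from http.server import BaseHTTPRequestHandler, HTTPServer

ACCESS_TOKEN = "change-me"   # shared out-of-band with invited readers

class PrivateChannel(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/robots.txt":
            # Ask compliant crawlers to stay out entirely
            self._send(200, b"User-agent: *\nDisallow: /\n")
            return
        if self.headers.get("Authorization") != f"Bearer {ACCESS_TOKEN}":
            self._send(403, b"forbidden\n")   # no token, no content
            return
        self._send(200, b"private post for invited readers only\n")

    def _send(self, status, body):
        self.send_response(status)
        self.send_header("Content-Type", "text/plain")
        self.send_header("X-Robots-Tag", "noindex, nofollow")  # belt and braces
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8443), PrivateChannel).serve_forever()
```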

Of course, you’ll see the perverse angle immediately. Hundreds of alt-right organizations have set up Linux servers in the cloud, sending out Tweets and Facebook posts with links to give people access, via coded messages. People are setting up hundreds of web pages, all outside any monitoring or regulation. Hundreds of tiny pirate kingdoms. As Franklin Foer has explained in a series of articles in The Atlantic magazine, these are all spaces where depressurized conversation is possible because of their non-indexed, non-optimized, and non-gamified environments. And they still use Facebook and Twitter to distribute access, coded to avoid the filters.

It’s all called the “Dark Forest”. I will explain how that works when I continue my social media regulation series next year.

Deepfake apps have been going mainstream in the U.S. at a fantastic clip. Depending on how you think about that viral Snapchat aging filter, one arguably already has. But this year Ben Cunningham (ex-Facebook) made a riveting presentation at the Web Summit in Lisbon, Portugal (an event you really should attend; the networking opportunities are worth the cost) that showed all the incredibly sophisticated machine-learning-based video editing apps set to take off in 2020, many with features that will integrate with the coming Instagram camera.

And the real issue? It’s not just that you might make people believe that something that’s fake is real. But that you might make them believe that something that’s real is fake.

 


Over the past year I have read more and more about Splinternet, how the internet is quickly dividing into zones. There’s an American internet, a European internet, and a Sino-Russian-authoritarian internet, and they all appear to be rapidly pulling apart. Serious? It must be. An internal Facebook memo predicts that this trend will accelerate in the coming decade, “limiting the potential size of any one social network”. Poor Mark.

As I have noted before, the EU’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) had laudable goals but were just not the way to go for “data privacy”. Yes, I am still a cynic. There is no data privacy, and there never will be data privacy. Well, there might be – but it will cost you. It will be only for the rich.

This is now coming to the fore as the next big policy fight is over location data. With increasing attention being paid to the expanding surveillance networks created by our smartphones, the weakness of both the GDPR and the CCPA is becoming obvious. The “acquiring permissions” structure is unworkable given that multiple permissions, for both the type of use and the users of the data, are required for each app or instance of location targeting. And vendors have figured out how to avoid the whole “consent” issue by making the data appear “anonymous”. There is also a way to process location data within an app on a mobile device in such a way that any exported data is anonymous – but still marketable. We have a 75-page briefing memo in the works for one of our clients which lays this all out. We’ll publish an executive summary next month.
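The mechanics are simple, which is part of the problem. A rough sketch of the kind of on-device processing I mean; the grid size, salt rotation and field names are my own illustrative assumptions, not any vendor’s actual pipeline:

```python
# Sketch of on-device "anonymization" of a location ping before export:
# coarsen the coordinates to a grid cell and replace the device ID with a
# daily-rotating hash. The exported record looks anonymous yet remains
# marketable. Grid size, salt scheme and field names are illustrative only.
import hashlib
import time

def anonymize_ping(device_id: str, lat: float, lon: float, grid: float = 0.01):
    day_salt = time.strftime("%Y-%m-%d")          # pseudonym rotates daily
    pseudo_id = hashlib.sha256(f"{day_salt}:{device_id}".encode()).hexdigest()[:16]
    return {
        "pid": pseudo_id,                          # stable-for-a-day pseudonym, not the raw ID
        "cell_lat": round(lat // grid * grid, 4),  # snapped to a ~1 km grid cell
        "cell_lon": round(lon // grid * grid, 4),
        "ts_hour": int(time.time()) // 3600 * 3600,  # timestamp truncated to the hour
    }

print(anonymize_ping("device-12345", 48.8738, 2.2950))
```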

I think that 5G, with its ultra-high-speed wireless capabilities and broad reach, will be one of the most transformational technologies in the next decade. Higher speed wireless networks will enable a whole host of new types of applications and services, many we can’t even imagine today.

But don’t buy a 5G smartphone yet. 5G networks haven’t reached the level of maturity needed for them to make sense for users. I spend a lot of time in the TMT (telecommunications-media-technology) market. Despite the anticipation for the next generation of wireless networks, for the typical mobile customer, 5G isn’t living up to expectations. Its rollout has been plagued by fits and starts, and the sparse consumer landscape offers only a handful of exorbitantly priced devices. Meanwhile, actual 5G coverage is lacking in its geographical reach and network speeds are surprisingly, well, average. Millimeter wave coverage – the fastest variety – has been very limited. You have to be outdoors, and you have to have line-of-sight to the cell tower. If you’re indoors, you lose a lot of the millimeter wave signal. But those troubles were to be expected, and 5G adoption will probably take years.

We are producing a 45-minute video that will put all of this in perspective. Main point? It’s hard to say exactly what the new networks will bring. It’s going to be a while before consumers look at the 5G options out there and say, “Hey, I really need this! I’ll pay extra!”


AI is a transformative technology with the potential to impact almost every enterprise process, but in this still-nascent stage of adoption there are many open questions about how it should be implemented and for which use cases. Until now, many adopters were more focused on rolling out AI applications into production and less concerned with developing standards and procedures to ensure the technology is safe, manageable, robust, explainable and sustainable.

I have written extensively about AI over the last eight years. We are nowhere near where we need to be when it comes to the infrastructure for AI. This includes the silicon, the software, the connection between the edge device and the cloud, and the way data sets are managed and preserved so the computer can be trained. We’ll see if we sort this out in the next decade.

And finally …

I think the psychological impact of the Trump presidency will continue to drive an especially negative narrative about the social impacts of the Internet and social media. I do not see America recovering from those impacts and I do not see the media industry stabilising. ‪Politicians everywhere are aping the style of “alpha bully” Trump, and anyone who dares to claim a more nuanced middle ground is ignored and shunted to the sidelines. ‬

Bullies in popular culture are often portrayed as cowardly aggressors who prey on the weak, but fear the strong. Challenged by an authority figure, they’re more likely to bolt than stand their ground. We forgive bullying as a phase in adolescent life and assume that with maturity and wisdom, its worst impulses are tamed. These cultural wisdoms may have held true in the past. But we’ve entered a new era — today, trolling and trash-talking are prerequisites for success.

 

52 THINGS I LEARNED IN 2019

How I sort through a firehose of information

Science has spawned a proliferation of technology that has dramatically infiltrated all aspects of modern life. In many ways the world is becoming so dynamic and complex that technological capabilities are overwhelming human capabilities to optimally interact with and leverage those technologies. I think as we are hurled headlong into the frenetic pace (made worse by all this development in artificial intelligence) we suffer from illusions of understanding, a false sense of comprehension, failing to see the looming chasm between what our brain knows and what our mind is capable of accessing.

So every month (plus a good chunk of the summer) I make a concerted effort to step back and devote one weekend to a technology “big think” – take all those pieces, shards, ostraca, palimpsests, and the barrage of news clips from CNN, Facebook, LinkedIn, Twitter, and [“name-your-social-media-blaster”] screaming by me and determine what I think is key to know and internalize. Much of it I write about in way too long blog posts. But that’s how I internalize everything. We have created an environment which rewards simplicity and shortness, which punishes complexity and depth. I hate it. So as my regular readers know I try to be a “Big Picture” guy, being an opsimath at heart and believing everything in technology is related. The model I try to follow is like the British magazine tradition of a weekly diary – on the issue, but a little distant from it, personal as well as political, conversational more than formal.

And I have a lot of help. A large staff. We pursue a very eclectic conference schedule that provides us a holistic tech education. You can see my annual conference schedule by clicking here. Call it my personal “Theory of Everything”. Although if you are not careful you can find yourself going through a mental miasma with all this overwhelming tech.

Plus my Chief Technology Officer, Eric de Grasse, has built an AI program that uses the Factiva database (plus four other media databases) which allows me to monitor 1,000-1,500 primary source points every month … but feeds me and the staff just the relevant information we need, depending on what we are writing about and on which conferences my staff members have been assigned to cover.

Factiva aggregates content from both licensed and free resources, and then, depending on what level you want to pay, offers all types of search and alert functions: it plows through websites, blogs, images, videos, etc. so you have the ability to do a deep dive into pretty much any region or country in the world based on persons, trends, subject matter, etc.

Yes, a tsunami of material but compartmentalized and then distributed to the appropriate staff members for further reading and analysis. It also forms the source for this “52 things I learned …” piece every year.
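I won’t reproduce Eric’s system here (the Factiva integration is licensed and proprietary), but the routing idea underneath it is not exotic. A stripped-down sketch, with made-up beats and keywords, of scoring already-fetched articles against each staffer’s assignments:

```python
# Stripped-down sketch of routing fetched articles to staff by topic relevance.
# Beats, keywords and the threshold are made up; the real pipeline sits on top
# of licensed databases and is considerably more sophisticated.
BEATS = {
    "ai_ml":      {"neural", "machine learning", "training data", "nips"},
    "ediscovery": {"privilege", "custodian", "litigation", "document review"},
    "privacy":    {"gdpr", "ccpa", "location data", "consent"},
}

def route(article_text: str, threshold: int = 2):
    """Return the beats whose keyword hits meet the threshold."""
    text = article_text.lower()
    hits = {beat: sum(kw in text for kw in kws) for beat, kws in BEATS.items()}
    return [beat for beat, count in hits.items() if count >= threshold]

sample = "New GDPR guidance tightens consent rules for location data brokers."
print(route(sample))   # -> ['privacy']
```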

Layered on top of THAT structure is a technology we are beta testing that, using several machine-learning tricks, produces surprisingly coherent and accurate snippets of text from longer pieces. Not as good as a person reading the piece, but it hints at how condensing text will eventually become automated on an accurate basis.

Much of the technology I read about or see at conferences I also force myself “to do”, taking the old Spanish proverb to heart:

Because writing about technology is not “technical writing.” It is about framing a concept, about creating a narrative. Technology affects people both positively and negatively. You need to provide perspective. You need to actually “do” the technology.

And having a media company, attending all of these wondrous technology events, you meet people like Tom Whitwell, a “reformed journalist/lawyer” who is now a hardware designer who can actually demystify the “innovation process” (yes, that intolerable phrase). He provides brilliant basics and a clear vision to companies that want to really, really innovate. Like me, Tom receives a firehose of “content” (as we like to call writing, video and photography) which he reduces to his own “52 things I have learned” list at the end of each year. While our sources sometimes overlap (we attend many of the same events), we have different take-aways from these conferences.

So … onto the “52 Things” list. Please note they are in no particular order, some have long descriptions, etc.

The last 10 items … #42 to #52 … are my favorite books and favorite science photos for 2019.

In some cases we have multiple articles on the same topic so my staff pulled the most recent and/or authoritative one in our archive.

And note: we have subscriptions to 700+ magazines, news feeds, etc. If something is blocked by a paywall and you want a copy, email us at [email protected] and we’ll send you a copy. 

 

1. MIMICKING THE BRAIN

At way too many of these AI conferences there are overwhelming presentations of work-in-progress AI that attempts to mimic the brain. And oh, boy, that is so difficult. I thought of visual illusions such as the Necker Cube and the Penrose Impossible Triangle, which demonstrate that the “reality” we see consists of constrained models constructed in the brain. The Necker Cube’s two-dimensional pattern of lines on paper is compatible with two alternative constructions of a three-dimensional cube, and the brain adopts the two models in turn: the alternation is palpable and its frequency can even be measured. The Penrose Triangle’s lines on paper are incompatible with any real-world object.

These illusions tease the brain’s “model-construction software”, thereby revealing its existence. In the same way, the brain constructs in software the useful illusion of personal identity, an ‘I’ apparently residing just behind the eyes, an “agent” taking decisions by free will, a unitary personality, seeking goals and feeling emotions. Which is why AGI is not around the corner. The ineffable aspects of human personality and emotions remain mostly undecipherable by neuroscientists and programming such things is a very, very, long way off, at least 30-40 years. As one presenter noted at a conference:

The mind is an evolved system. As such, it is unclear whether it emerges from a unified principle (a physics-like theory of cognition), or is the result of a complicated assemblage of ad-hoc processes that just happen to work. It’s probably a bit of both. It is going to take a very, very long time to unbox that.

It’s why the Google Brain Team made a switch to meta-learning: teaching machines to learn to solve new problems without human ML expert intervention. But as a member of that Google team noted at DLD Tel Aviv:

There will not be a real AI winter, because AI is making real progress and delivering real value, at scale. But there will be an AGI winter, where AGI is no longer perceived to be around the corner. AGI talk is pure hype, and interest will wane. 

As I learned in my neuroscience program, the construction of personhood takes place progressively in early childhood, perhaps by the joining up of previously disparate fragments. Some psychological disorders are interpreted as “split personality”, failure of the fragments to unite. It’s not unreasonable speculation that the progressive growth of consciousness in the infant mirrors a similar progression over the longer timescale of evolution. Does a fish, say, have a rudimentary feeling of conscious personhood, on something like the same level as a human baby?

In a way, it is why Amazon and Facebook and Google have focused on voice. The ability to convey intent and emotions through voice seems like a universal constant that predates language. Ever noticed how the tone you’d use to ask a question, or give an order, is the same in essentially every language?

That’s why the Conference and Workshop on Neural Information Processing Systems is always so illuminating. There’s no practical path from superhuman performance in thousands of narrow vertical tasks executed by AI, ML, NN to the general intelligence and common sense of a toddler. It doesn’t work that way. ML is both overhyped and underrated. People overestimate the intelligence & generalization power of ML systems (ML as a magic wand). But they also underestimate how much can be achieved with relatively crude systems, when applied systematically (think ML as the steam power of our era, not the “electricity” metaphor).

Note: also last year I kept hearing “Deep learning est mort. Vive differentiable programming!” Deep Learning – due for a rebranding. Enter “differentiable programming”. Yeah, differentiable programming is little more than a rebranding of the modern collection of Deep Learning techniques, the same way Deep Learning was a rebranding of the modern incarnations of neural nets with more than two layers. But even “neural networks” is a sad misnomer. They’re neither neural nor even networks. They’re chains of differentiable, parameterized geometric functions, trained with gradient descent (with gradients obtained via the chain rule). In a way “differentiable programming” is a lot better, but to be honest that seems a lot more general than what we do in deep learning. Deep learning is a very, very small subset of what differentiable programming could be. But I think that generality is intentional. As Francis Callot (professor, Graduate School of Decision Sciences, University of Konstanz, who runs a workshop on information processing) noted: “Differentiable programming is an aspirational goal in which the building blocks used in deep learning have been modularized in a way that allows their flexible use in more general programming contexts.”
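To make “chains of differentiable, parameterized functions trained with gradient descent via the chain rule” concrete, here is a toy forward-mode autodiff in plain Python, using dual numbers to fit a single parameter. It is a sketch of the principle only, not of how any real framework is implemented.

```python
# Toy forward-mode automatic differentiation with dual numbers: each value
# carries (value, derivative) and the chain/product rules are applied as the
# operations compose. This illustrates the principle behind "differentiable
# programming"; real frameworks are vastly more general and efficient.
class Dual:
    def __init__(self, val, grad=0.0):
        self.val, self.grad = val, grad

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.grad + other.grad)

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.val * other.grad + self.grad * other.val)  # product rule

    __radd__, __rmul__ = __add__, __mul__

def loss(w, x, y):
    pred = w * x                 # a one-parameter "model": pred = w * x
    err = pred + (-1.0 * y)      # pred - y
    return err * err             # squared error

w = 0.0
for _ in range(50):              # plain gradient descent on the single weight w
    grad = loss(Dual(w, 1.0), 3.0, 6.0).grad   # d(loss)/dw via the chain rule
    w -= 0.01 * grad
print(round(w, 3))               # converges toward 2.0, since the data say y = 2x
```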

The key? Intelligence is necessarily a part of a broader system — a vision of intelligence as a “brain in a jar” that can be made arbitrarily intelligent independently of its situation is silly. A brain is just a piece of biological tissue; there is nothing intrinsically intelligent about it. Beyond your brain, your body and senses — your sensorimotor affordances — are a fundamental part of your mind. Your environment is a fundamental part of your mind. What would happen if we were to put a freshly-created human brain in the body of an octopus, and let it live at the bottom of the ocean? Would it even learn to use its eight-legged body? Would it survive past a few days? We cannot perform this experiment, but we do know that cognitive development in humans and animals is driven by hardcoded, innate dynamics.

Human culture is a fundamental part of your mind. These are, after all, where all of your thoughts come from. You cannot dissociate intelligence from the context in which it expresses itself. I could write about this for hours but I will end it here.

2. CHINA: SETTING THE STANDARDS FOR FACIAL RECOGNITION TECHNOLOGY

 


Last year when I was reporting from the Conference on Neural Information Processing Systems (NIPS), the largest conference for machine learning and AI, everybody’s focus vis-a-vis China was: the United States has long led the world in technology. But as artificial intelligence becomes increasingly widespread, U.S. engineers have quickly learned they need to contend with another global powerhouse – China. It is why Amazon, Apple, Google and others have set up AI research centres in China.

Now, something bigger. Chinese tech companies have ramped up efforts to set technical standards for facial recognition, raising concerns among business competitors, political observers and humanitarian advocates. China has long made a systematic effort to set international standards on data and hardware compatibility across brands so that the standards reflect how Chinese products already work — giving its domestic industries a leg up in engineering races.

There is a lot of information out there about this, including how their technology is seeping into the U.S. and Europe. I’ll have more to come on this subject. For the moment, here is a good summary: click here

 

3. COMPUTATIONAL PROPAGANDA AND THE END OF TRUTH

As I wrote last year when discussing AI and the art of photography, we can no longer trust what we see. As a photographer, I have always thought that pretty soon cameras will be able to automatically adjust highlights, shadows, contrast, cast and saturation based on human aesthetic preferences. Controlling light and shadow is the trickiest thing for a photographer. It’s why manual controls are needed for a pro’s unique art. But that change, that shift is happening at a faster pace than even I imagined.

And just like the technology for cyber attacks is now “off the shelf” so almost anyone can launch their own attack, the imaging machine learning algorithms are developed using easily accessible materials and open-source code that anyone with a working knowledge of deep learning algorithms could put together to create their own “truth”, their own “reality”.

Most interesting visual tool I saw? Project Poppetron, which allows someone to take a photo of a person, give them one of any number of stylized faces, and create an animated clip using the type they’ve chosen. These feats are possible because machine learning can now distinguish the parts of the face, and the difference between background and foreground, better than previous models. At Cannes Lions it was a big hit.

Oh, yes, the creation of “fake news”. No, fake news is not a new phenomenon but the online digital information ecosystem now has immense technological capability … “computational propaganda devices” in the trade … that has created a particularly fertile ground for sowing misinformation. Yes, I get it. Agenda setting is generally about power, specifically the power of the media to define the significance of information. But technology has stood this concept on its head. The extent to which social media can be easily exploited to manipulate public opinion thanks to the low cost of producing fraudulent websites and high volumes of software-controlled profiles or pages, known as social bots, is astounding. These fake accounts can post content and interact with each other and with legitimate users via social connections, just like real people – at a speed and duration unheard of. And because people tend to trust social contacts they can be manipulated into believing and spreading content produced in this way.

To make matters worse, echo chambers make it easy to tailor misinformation and target those who are most likely to believe it. The Russians are THE pros at this, far better than the alt-right movement. Moreover, amplification of content through social bots overloads our fact-checking capacity due to our finite attention, as well as our tendencies to attend to what appears popular and to trust information in a social setting. Last year I wrote about attempts by AI mavens to combat this assault via large-scale, systematic analysis of the spread of misinformation by social bots via two tools, the Hoaxy platform to track the online spread of claims and the Botometer machine learning algorithm to detect social bots. They detected hundreds of thousands of false and misleading articles spreading through millions of Twitter posts during and following the 2016 U.S. presidential campaign.

NOTE: there are also technologies such as RumorLens, TwitterTrails, FactWatcher and News Tracer. As I learned at the Munich Security Conference, these and similar technologies were used by the intelligence services to track Russian intelligence activities. These “bot-not-bot” classification systems use more than 1,000 statistical features built from available meta-data and from information extracted from the social interactions and linguistic content generated by accounts. They group their classification features into six main classes. Network features capture various dimensions of information diffusion patterns: the systems build networks based on retweets, mentions, and hashtag co-occurrence, and pull out statistical features such as degree distribution, clustering coefficient, and centrality measures. User features are based on Twitter meta-data and Facebook meta-data related to an account, including language, geographic locations, and account creation time.
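A toy version of the kind of feature extraction those classifiers perform, using only Python’s standard library. The accounts, edges and features are invented and far cruder than anything Botometer actually computes:

```python
# Toy bot-signal features from a retweet graph and account metadata.
# Accounts, edges and feature choices are invented; real classifiers use
# over a thousand features across network, user, timing and content classes.
from collections import defaultdict
from datetime import datetime

retweets = [("botA", "candidateX"), ("botB", "candidateX"),
            ("botC", "candidateX"), ("alice", "bob")]   # (retweeter, author)

accounts = {
    "botA":  {"created": "2016-09-01", "tweets": 22000, "followers": 12},
    "alice": {"created": "2010-03-14", "tweets": 3100,  "followers": 240},
}

def network_features(edges):
    """Crude 'network feature': in-degree per author, i.e. amplification received."""
    indegree = defaultdict(int)
    for _retweeter, author in edges:
        indegree[author] += 1
    return dict(indegree)

def user_features(meta, today="2019-12-20"):
    """Crude 'user features': account age in days and average tweets per day."""
    age_days = (datetime.fromisoformat(today) - datetime.fromisoformat(meta["created"])).days
    return {"age_days": age_days,
            "tweets_per_day": round(meta["tweets"] / max(age_days, 1), 1)}

print(network_features(retweets))        # candidateX amplified by three accounts
print(user_features(accounts["botA"]))   # young, hyperactive account: a bot signal
```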

At several conferences, I learned about companies that were paid millions of dollars to create armies of Twitter bots and Facebook bots that allowed campaigns, candidates, and supporters to achieve two key things during the 2016 Presidential election: 1) to manufacture consensus and 2) to democratize online propaganda. Social media bots manufactured consensus by artificially amplifying traffic around a political candidate or issue. Armies of bots built to follow, retweet, or like a candidate’s content make that candidate seem more legitimate, and more widely supported, than they actually are. This theoretically has the effect of galvanizing political support where this might not previously have happened. To put it simply: the illusion of online support for a candidate can spur actual support through a bandwagon effect.

I have a detailed post “in progress” to give you the nuts and bolts of how this was done, so let me close this section with two points:

  • Technology has allowed us to cease to be passive consumers, like TV viewers or radio listeners or even early internet users. Via platforms that range from Facebook and Instagram to Twitter and Weibo, we are all now information creators, collectors, and distributors. We can create inflammatory photos, inflammatory texts, inflammatory untruths. And of course, those messages that resonate can be endorsed, adapted, and instantly amplified.
  • We have no “common truth”, no “shared truth”. We have polarization and different versions of reality. There is, of course, a huge political and cultural cost for all of this “well-MY-media-says…”, what the French writer (and businessman) Jean-Louis Gassée calls “brouet de sorcières dont nous louche off ce qui convient à vos sentiments” (which, roughly translated, means “a witches’ brew from which we ladle off whatever fits our sentiments”). But I must leave that discussion for another day.

As far as “fixing” Facebook? Not going to happen. Facebook’s fundamental problem is not foreign interference, spam bots, trolls, or fame mongers. It’s the company’s core business model, and abandoning it is not an option. The only way to “fix” Facebook is to utterly rethink its advertising model. It’s this model which has created nearly all the toxic externalities that dear Mark is so “worried about”. It’s the honeypot which drives the economics of spambots and fake news, it’s the at-scale algorithmic enabler which attracts information warriors from competing nation states, and it’s the reason the platform has become a dopamine-driven engagement trap where time is often not well spent. To put it in Clintonese: It’s the advertising model, stupid.

4. THE WORLD OF VIRTUAL ASSISTANTS

At the Mobile World Congress last year, the ever-growing “conference in a conference” is the “living cities” demonstration area. Last year it covered an area the size of 3 football pitches. Living cities are what happens when you bring together sensors, actuators and intelligence to start to respond to the needs of citizens. When you go beyond just “smart” to bring some warmth and engagement.

This is certainly happening in the home. Tom Cheesewright (who writes about this and many other high-level tech subjects) summed it up very nicely for me last year when he described exactly where all of these conferences say these high-tech homes are going, using his own as an example:

There’s no real intelligence in my system — it’s entirely driven by events triggering certain messages. But even with this very simple technology, the house can start to engage with my needs and respond to them in a much more human way than it otherwise might. It can know that I usually like a certain temperature. That I like a certain playlist when I’m cooking, or the lights a certain way when watching a film. And it can tell me that it knows, in quite a natural fashion, and offer solutions to me at appropriate moments.

To truly fit my definition of a ‘living’ system, my house would need ‘real’ intelligence: perhaps predicting needs I hadn’t explicitly expressed. And it would need to be able to evolve its behaviours — and even its physical space — to better meet those needs. Sadly, I can’t 3D-print walls yet. But it’s easy to see that technology coming.

The most interesting part of the Consumer Electronics Show in Las Vegas (held every January; my media team is there now) is not the technology on show but the conversations that happen behind closed doors. Last year and this year are no exception. In the world of virtual assistants, Amazon and Google are leading the way on recruiting partners to help them distribute and activate their artificial intelligence platforms. After focusing on “smart home” technology last year, Amazon is now moving on to the car. It has struck deals with Panasonic, one of the largest players in in-car infotainment systems, Chinese electric car start-up Byton, and Toyota … which this year is showing off a futuristic, versatile autonomous vehicle called “e-Palette” that the ecommerce group plans to use for package deliveries.

Note: Amazon never has its own public presence on the CES show floor. It rents a sizeable private space (12 suites, 6 conference rooms) in Las Vegas’s Venetian hotel for meetings with partners. Google, by contrast, is unmissable to CES attendees: advertising for Assistant appears on giant digital billboards up and down the Las Vegas Strip, its branding is plastered inside and outside the city’s monorail train carriages, and for the first time in several years it has a yuuuuuuge booth at the convention centre.

5. ALGORITHMS THAT SUMMARIZE LENGTHY TEXT GET BETTER, AS DOES TRANSLATION SOFTWARE

Training software to accurately sum up information in documents could have great impact in many fields, such as medicine, law, and scientific research. And if you are a member of the e-discovery ecosystem, no, you will not see this technology at LegalTech or ILTA. But it is starting to become routine to rely on a machine to analyze and paraphrase articles, research papers, and other text for you. At two computational linguistics events I saw demos by MetaMind (recently acquired by Salesforce), developed by Richard Socher, who is a fairly prominent name in machine learning and natural-language processing. There are others out there, like Narrative Science and Maluuba (acquired last year by Microsoft).

All of these use several machine-learning tricks to produce surprisingly coherent and accurate snippets of text from longer pieces. And while it isn’t yet as good as a person, it hints at how condensing text could eventually become automated. Granted, the software is still a long way from matching a human’s ability to capture the essence of a document’s text, and some of the summaries it produces are sloppy and less coherent. Indeed, summarizing text perfectly would require genuine intelligence, including commonsense knowledge and a mastery of language.
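The systems above are abstractive and proprietary, but the naive baseline they improve on is easy to show, and it makes the gap obvious. A minimal extractive sketch that scores sentences by word frequency and keeps the top ones (nothing like Socher’s model, just the crude starting point):

```python
# Naive extractive summarization: score each sentence by the frequency of the
# words it contains, then keep the top-scoring sentences in their original order.
# This is the crude baseline; modern abstractive models go far beyond it.
import re
from collections import Counter

def summarize(text: str, n_sentences: int = 2) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(w for w in words if len(w) > 3)      # crude stop-word filter
    scored = [(sum(freq[w] for w in re.findall(r"[a-z']+", s.lower())), i, s)
              for i, s in enumerate(sentences)]
    top = sorted(sorted(scored, reverse=True)[:n_sentences], key=lambda t: t[1])
    return " ".join(s for _, _, s in top)

doc = ("The court reviewed the disclosure schedule. The schedule listed every "
       "custodian. Lunch was served at noon. Counsel disputed the custodian list.")
print(summarize(doc))   # keeps the two sentences about the schedule and custodians
```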

But parsing language remains one of the grand challenges of artificial intelligence and it’s a challenge with enormous commercial potential. Even limited linguistic intelligence – the ability to parse spoken or written queries, and to respond in more sophisticated and coherent ways – could transform personal computing. Last year our e-discovery review unit participated in a beta test of a system that learns from examples of good summaries, an approach called supervised learning, but also employs a kind of artificial attention to the text it is ingesting and outputting. This helps ensure that it doesn’t produce too many repetitive strands of text, a common problem with summarization algorithms.

The system experiments in order to generate summaries of its own using a process called “reinforcement learning”. Inspired by the way animals seem to learn, this involves providing positive feedback for actions that lead toward a particular objective. Reinforcement learning has been used to train computers to do impressive new things, like playing complex games or controlling robots and at the end of 2016 it seemed to be on everybody’s “breakthrough technologies in 2017” list. Those working on conversational interfaces are increasingly now looking at reinforcement learning as a way to improve their systems.

If you are a lawyer, would you trust a machine to summarize important documents for you? Did you trust predictive coding when it first hit the market? It is a work-in-progress.

In modern translation software, a computer scans many millions of translated texts to learn associations between phrases in different languages. Using these correspondences, it can then piece together translations of new strings of text. The computer doesn’t require any understanding of grammar or meaning; it just regurgitates words in whatever combination it calculates has the highest odds of being accurate. The result lacks the style and nuance of a skilled translator’s work but has considerable utility nonetheless. Although machine-learning algorithms have been around a long time, they require a vast number of examples to work reliably, which only became possible with the explosion of online data. As a Google engineer (he works in Google’s Speech Division) noted at DLD Tel Aviv:

When you go from 10,000 training examples to 10 billion training examples, it all starts to work. In machine learning, data trumps everything. And this is especially the case with language translation.
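A toy illustration of the phrase-association idea described above; the phrase table, probabilities and phrases are invented, and real systems learn millions of such associations rather than three:

```python
# Toy phrase-based translation: look up each source phrase in a "learned"
# phrase table and keep the candidate with the highest probability.
# The tiny table and its probabilities are invented for illustration.
PHRASE_TABLE = {
    "la maison": [("the house", 0.81), ("the home", 0.17)],
    "est":       [("is", 0.93), ("east", 0.04)],
    "bleue":     [("blue", 0.95), ("navy", 0.03)],
}

def translate(source_phrases):
    output = []
    for phrase in source_phrases:
        candidates = PHRASE_TABLE.get(phrase, [(phrase, 1.0)])   # pass unknowns through
        best, _prob = max(candidates, key=lambda c: c[1])        # highest-odds choice
        output.append(best)
    return " ".join(output)

print(translate(["la maison", "est", "bleue"]))   # -> "the house is blue"
```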

Here in Europe our e-discovery review team and our language translation service have been using Bitext (they are based in Madrid), which is a deep linguistic analysis platform, or DLAP. Unlike most next-generation multi-language text processing methods, Bitext has crafted a platform. Based on our analysis, plus an analysis done by ASHA, the Bitext system has accuracy in the 90 percent to 95 percent range. Most content processing systems today typically deliver metadata and rich indexing with accuracy in the 70 to 85 percent range. The company’s platform supports more than 50 languages at a lexical level and more than 20 at a syntactic level, and makes the company’s technology available for a wide range of applications including Big Data, artificial intelligence, social media analysis, text analytics, etc. It solves many complex language problems and integrates machine learning engines with linguistic features. These include segmentation and tokenization (word segmentation, frequency, and disambiguation), among others.

6. E-DISCOVERY FACES THE DEMON IT KNEW WAS COMING. PLUS ALL THAT $$$ FLOWING INTO THE INDUSTRY. 

As I noted above, the truth is on the run. And the e-discovery industry will battle it. Ralph Losey is right. The big problems in e-discovery have been solved, especially the biggest problem of them all, finding the needles of relevance in cosmic-sized haystacks of irrelevant noise. We can go to almost any legal technology vendor and law firm and they will know what is required to do e-discovery correctly. We have the software and attorney methods needed to find the relevant evidence we need, no matter what the volume of information we are dealing with.

But where are we with the new e-discovery demon? How do you discover what is not there but should be there? My e-discovery review team only works in EMEA (Europe, the Middle East and Africa). There, it is the Wild, Wild West. Nobody cares about “meet & confer”, or “proportionality”, or “legal holds”. Even U.S. companies that operate there. My e-discovery team does “hard discovery” (compliance and corporate internal investigations) and “extreme discovery” (document pulls and forensics in places like Côte d’Ivoire, Iran, Libya, etc.). Last year we did three internal investigations for Fortune 100 companies. Fraud, collusion with competitors, carving up markets, etc. All HIDDEN via intranets, Messenger (the preferred messaging system among corporates these days), and chat bots. Few emails … or very incomplete email threads … because the email threads had been “cleansed”. No existing e-discovery software can find what is not there. Google and several other AI vendors are working to develop solutions to these issues, but nobody in the present e-discovery ecosystem has the capacity … or the budget … to address the issue.

We sometimes stumble upon the “truth” because we often do a full review of thousands of docs. We build a narrative, then determine what is missing: where was the answer to the earlier email that begged for a response, where was the bid qualification in a cursory note, where was the collusion hinted at in an answer, etc. … in other words, what information was missing based on our holistic review of the corpus of data.

We saw this a few years ago in the Volkswagen braking litigation, and this year in all of the opioid litigation. Speaking as the cynic, I always assume the “Dark Side”. Every company knows that e-discovery technology can be/has been weaponised to hide the truth. Just look at the machinations lawyers pulled using it during the Toyota accelerator pedals/dangerous acceleration litigations. Or the more recent opioid litigations. The opioid litigation involves reviewing massive data sets. Specialized information can be lost in a sea of pharmaceutical data … and was lost and/or hidden, according to contract attorneys hired to work on the case. They sent our media team scores of cases where documents were hidden, either in priv logs or just simply cleansed. Yes, technology-assisted review solutions have superior categorization capabilities. But even that is not enough.

E-discovery? Well, the venture capital keeps pouring in. Robert Childress voices what many think, and probably nails the reason we see these high funding levels. We shot this at Legaltech earlier this year:

Whether the VCs see e-discovery as “big business” is hard to say. I have been a limited partner in a venture capital partnership for over 17 years, and a shareholder in a master limited partnership for about the same amount of time, and our funds invested in several e-discovery companies merely to diversify. The larger tech community sees that in the same regard. We started with blogging platforms and some content curators and then we gradually moved into E-Commerce, FinTech, and mobile payment providers. Over the last few years we have expanded into more maturing companies, joining late stage funding efforts, adding several cyber security companies plus several e-discovery vendors. I have seen some interesting financials over these 17+ years.

And if you really want to talk money, the big take-away is this: after a decade of low interest rates there is a lot of cheap money looking for “disruptor” returns – which in turn drives more aggressive burn rates. And failure.

We see it now in the security industry which is wallowing in venture capital funding. That easy money has translated into a welter of security solutions. At cyber security conferences, the “me-too” approach dominates. I see it in the e-discovery technology field, too. Everybody wants to be a Relativity, a Logikcull, a DISCO, a Symantec. Companies and law firms find themselves struggling to understand and differentiate and implement all of these solutions. And if you think Legaltech is a Tower of Babel, join me at the International Cybersecurity Forum.

7. ACROSS THE TECH WORLD, VISUALIZATION BECOMES A POTENT METHODOLOGY

 

In the last few years visualization has emerged as a potent methodology for exploring, explaining, and exhibiting the vast amounts of data, information and knowledge produced and captured every second. In 2017 I attended three conferences dedicated solely to data visualisation, and several more where it was a large component.

Researchers and practitioners in visual design, human-computer interaction, computer science, business and psychology aim to improve how knowledge is transferred among the various stakeholders, and to develop methods that reduce information overload and misuse. This is vital in the current knowledge economy, where knowledge has become crucial for gaining a competitive edge, especially in public policy, scientific research, technological innovation and decision-making processes.

These events are cool because they involve engineers, designers and psychologists. Engineers handle the information technology, designers bring visual communication and representation to the table, and psychologists study how humans learn and interpret visuals. The collaboration among the three is crucial for making sure that visualization actually succeeds in enhancing knowledge transfer.

And the questions raised are broad: what is gained and what is lost in the transition from data, through images, to insights? What are we taking for granted in the mechanization of statistical image making? And how can we better structure the roles of machines in the creation of visualization, and humans in its interpretation?

8. COMPUTERS HAVE TURNED GERRYMANDERING INTO A SCIENCE

 

THIS IS MORE FOR MY U.S. READERS. About as many Democrats live in Wisconsin as Republicans do. But you wouldn’t know it from the Wisconsin State Assembly, where Republicans hold 65 percent of the seats, a bigger majority than Republican legislators enjoy in conservative states like Texas and Kentucky. The United States Supreme Court is trying to understand how that happened. This past fall the justices heard oral arguments in a case called Gill v. Whitford, reviewing a three-judge panel’s determination that Wisconsin’s Republican-drawn district map is so flagrantly gerrymandered that it denies Wisconsinites their full right to vote. A long list of elected officials, representing both parties, filed briefs asking the justices to uphold the panel’s ruling. I was in Washington, D.C. that day and a long-time member of The Posse List (my legal recruitment company) finagled a seat for me.

This isn’t a politics story; it’s a technology story. Gerrymandering used to be an art, but advanced computation has made it a science. Wisconsin’s Republican legislators, after their victory in the census year of 2010, tried out map after map, tweak after tweak. They ran each potential map through computer algorithms that tested its performance in a wide range of political climates. The map they adopted is precisely engineered to assure Republican control in all but the most extreme circumstances. In a gerrymandered map, you concentrate opposing voters in a few districts where you lose big, and win the rest by modest margins. But it’s risky to count on a lot of close wins, which can easily flip to close losses. Justice Sandra Day O’Connor thought this risk meant the Supreme Court didn’t need to step in. In a 1986 case, she wrote that “there is good reason to think political gerrymandering is a self-limiting enterprise” since “an overambitious gerrymander can lead to disaster for the legislative majority.”

Well, back then, she may have been right. But today’s computing power has blown away the self-limiting nature of the enterprise, as it has with so many other limits. A technical paper introduced into the court record paints a startling picture of the way the Wisconsin district map protects Republicans from risk. Remember the Volkswagen scandal? Volkswagen installed software in its diesel cars to fool regulators into thinking the engines were meeting emissions standards. The software detected when it was being tested, and only then did it turn on the antipollution system. The Wisconsin district map is a similarly audacious piece of engineering. Fascinating stuff.
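A toy version of the “wide range of political climates” test described above is easy to write. This is not the consultants’ model or the expert paper’s analysis – just a uniform-swing sketch in Python showing how cheaply you can now check whether a map keeps its majority even when the statewide vote moves against you:

```python
import random

def seats_won(margins, swing):
    """Seats the map-drawing party holds after a uniform statewide swing.
    margins: the party's vote share in each district (0.0 - 1.0)."""
    return sum(1 for share in margins if share + swing > 0.5)

def stress_test(margins, trials=10_000, max_swing=0.06):
    """Apply thousands of random swings; report worst / average / best seat counts."""
    outcomes = [seats_won(margins, random.uniform(-max_swing, max_swing))
                for _ in range(trials)]
    return min(outcomes), sum(outcomes) / trials, max(outcomes)

# A hypothetical packed-and-cracked map: a few districts conceded by a lot,
# many won by comfortable-but-not-wasteful margins.
toy_map = [0.30, 0.32, 0.35] + [0.56] * 12
print(stress_test(toy_map))   # the seat count barely moves across climates
```

In a well-packed toy map the seat count barely budges across thousands of simulated climates, which is the kind of risk-proofing the court record describes.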

 

9. ASTRONOMERS ARE USING AI TO STUDY THE VAST UNIVERSE – REALLY, REALLY FAST

In one of my weekend jaunts to the Observatoire de Paris, the largest national research center for astronomy in France and a major study centre for space science in Europe, I attended a workshop on the use of AI in celestial study.

Note: the Observatoire is surpassed in Europe only by the Roque de los Muchachos Observatory in the Canary Islands, Spain. That observatory is located at an altitude of 2,396 metres (7,861 ft), which makes it one of the best locations for optical and infrared astronomy in the Northern Hemisphere. The observatory also holds the world’s largest single aperture optical telescope.

The next generation of powerful telescopes being developed will scan millions of stars and generate massive amounts of data that astronomers will be tasked with analyzing. That’s way too much data for people to sift through and model themselves – so astronomers are turning to AI to help them do it. The bottom line: algorithms have helped astronomers for a while, but recent advances in AI – especially image recognition and faster, more inexpensive computing power – mean the techniques can be used by more researchers.

There are new methods that didn’t exist 5 to 10 years ago, improving either speed or accuracy, which is why the number of astronomy papers on arxiv.org mentioning machine learning has increased five-fold over the past five years.

Without getting too geeky, a summary of how they are using AI:

1) To coordinate telescopes. The large telescopes that will survey the sky will be looking for transient events. Some of these events – like gamma ray bursts, which astronomers call “the birth announcements of black holes” – last less than a minute. In that time, they need to be detected, classified as real or bogus events (like an airplane flying past), and the most appropriate telescopes turned on them for further investigation. And we are talking about on the order of 50,000 transient events each night and hundreds of telescopes around the world working in concert. It has to be machine to machine.

2) To analyze data. Every 30 minutes for two years, NASA’s new Transiting Exoplanet Survey Satellite will send back full frame photos of almost half the sky, giving astronomers some 20 million stars to analyze. We are at the petabyte range of data volume.

3) To mine data. Most astronomy data is thrown away but some can hold deep physical information that we don’t know how to extract.

Most intriguing question: how do you write software to discover things that you don’t know how to describe? There are the usual classes of unusual events, but what about the ones we don’t even know about? How do you handle those? More to come on this topic.
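One common answer is unsupervised anomaly detection: score every detection by how isolated it is in feature space and have a human look at the strangest few. A minimal sketch with scikit-learn’s IsolationForest, using made-up light-curve features rather than a real survey feed:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Stand-in features for one night of detections:
# e.g. amplitude, period, rise time, colour, host-galaxy offset.
features = rng.normal(size=(50_000, 5))

detector = IsolationForest(n_estimators=200, contamination="auto", random_state=0)
detector.fit(features)

scores = detector.score_samples(features)   # lower score = more anomalous
strangest = np.argsort(scores)[:20]         # 20 events nobody has a label for yet
print(strangest)
```

The hard part, of course, is not the model but deciding which features to feed it – which is exactly the “things we don’t know how to describe” problem.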

10. THE BODY IS NOT A COMPUTER. LET’S STOP THINKING OF IT AS ONE 

Last year when Regina Dugan, the head of Facebook’s secretive hardware lab called Building 8 (and the former head of DARPA, by the way), announced at Facebook’s developer conference that Facebook planned to build a brain-computer interface to allow users to send their thoughts directly to the social network without a keyboard intermediary, it had all the Silicon Valley swagger of Facebook circa “move fast and break things.” It came just a few weeks on the heels of Tesla founder Elon Musk’s announcement that he was forming a new venture, Neuralink, to develop a brain implant capable of telepathy, among other things. “We are going to hack our brain’s operating system!! Humans are the next platform!!” crowed Silicon Valley.

Are we, though? No. It might be because I have just finished the coursework for my neuroscience program. But I am … well … fed up. Yes, agreed. Over the past decade, science has made some notable progress in using technology to defy the limits of the human form, from mind-controlled prosthetic limbs to a growing body of research indicating we may one day be able to slow the process of aging. Our bodies are the next big candidate for technological optimization, so it’s no wonder that tech industry bigwigs have recently begun expressing interest in them. A lot of interest.

But let’s take a look at the most “computational” part of the body, the brain. Our brains do not “store” memories as computers do, simply calling up a desired piece of information from a memory bank. If they did, you’d be able to effortlessly remember what you had for lunch yesterday, or exactly the way your high school boyfriend smiled. Nor do our brains process information like a computer. Our gray matter doesn’t actually have wires that you can simply plug-and-play to overwrite depression a la Eternal Sunshine. The body, too, is more than just a well-oiled piece of machinery. We have yet to identify a single biological mechanism for aging or fitness that any pill or diet can simply “hack.”

Research into some of these things is underway, but so far much of what it has uncovered is that the body and brain are incredibly complex. Scientists do hope, for example, that one day brain computer interfaces might help alleviate severe cases of mental illnesses like depression, and DARPA is currently funding a $65 million research effort aimed at using implanted electrodes to tackle some of the trickiest mental illnesses. After decades of research, it’s still unclear which areas of the brain even make the most sense to target for each illness.

The body is not a computer. It cannot be hacked, rewired, engineered or upgraded like one, and certainly not at the ruthless pace of a Silicon Valley startup. Let’s be clear about something. Human intelligence is 13.8 billion years old, since we were made from that Big Bang stardust. Our eukaryotic cells formed 2.7 billion years ago. Yes, the mitochondria that power our brain cells formed then, and mitochondrial DNA is inherited from our mothers. And, yes, computing is traceable to Babbage and Lovelace circa the 1840s. But talk to the real AI folks and “computer logic” is traceable to Aristotle circa 384-322 BC – so about 2,300 years. All of which means it is scientifically invalid to suppose that the human body and brain work on the {0, 1} logic of machines.

I blame Claude Shannon. I know, the nerve. The man at the birth of the computer age (the best book to read about him, which I profiled two years ago: A Mind at Play: How Claude Shannon Invented the Information Age). He was writing at the same time that major research was underway at several universities on how the brain worked, and he developed the metaphor of “the brain as a processor of information”, along with other computer terminology to describe the brain. Given that he wrote at a moment propelled by advances in both computer technology and brain research – a time that saw an ambitious multidisciplinary effort to understand human intelligence – it is probably no surprise that the idea that humans are, like computers, information processors became firmly rooted. The information processing metaphor of human intelligence has come to dominate thinking, both on the street and in the sciences. And, yes: I often fall into the trap of using it, too.

11. HOW DO YOU KNOW YOU ARE A SUCCESSFUL CRYPTOCURRENCY MINING COMPANY? WHEN YOU NEED A BOEING 747 TO SHIP YOUR GRAPHICS CARDS

Joo Long, a technology buddy from Zurich, passed this one along during a cryptocurrency workshop last spring. It seems a cryptocurrency mining company called Genesis Mining is growing so fast that it rents Boeing 747s to ship graphics cards to its Bitcoin mines in Iceland. Crypto miners – in particular those mining ethereum, the second largest cryptocurrency by market valuation behind bitcoin – have been in the crypto equivalent of a gold rush since early this year. They are racing to take advantage of ethereum’s exploding price by adding more processing power to their mines. Some of them are even resorting to leasing Boeing 747s to fly the increasingly scarce graphics processors from AMD and Nvidia directly to their ethereum mines so they can be plugged in to the network as quickly as possible. Joo: “Time is critical, very critical, in mining. They are renting entire airplanes, Boeing 747s, to ship on time. Anything else, like shipping by sea, loses so much opportunity. Up for grabs is a supply of roughly 36,000 units of new ether, the digital token associated with ethereum, per day. At the (then) current prices of around $200 per ether, that translates to $7.2 million worth of ether that miners compete for each day.”

 

12.  IT HAD TO HAPPEN: THE PAY-WITH-A-SELFIE PROJECT

 

Amazon, Apple and Google are all working on “pay-by-selfie” projects. Mobile payment systems are increasingly used to simplify the way in which money transfers and transactions can be performed. I use four mobile payment systems and I think they are brilliant. As I wrote last year after the Mobile World Congress, there are many more such systems to come.

The one that caught my attention, presented at the annual MIT Media Lab Emerging Tech event, is aimed at developing countries. The argument is that to achieve their full potential as economic boosters in developing countries, mobile payment systems need to rely on new metaphors suitable for the business models, lifestyle, and technology availability conditions of the targeted communities.

The Pay-with-a-Group-Selfie (PGS) project noted at MIT is funded by the Bill & Melinda Gates Foundation, and it has developed a micro-payment system that supports everyday small transactions by extending the reach of, rather than substituting for, existing payment frameworks. PGS is based on a simple gesture and a readily understandable metaphor. The gesture – taking a selfie – has become part of the lifestyle of mobile phone users worldwide, including non-technology-savvy ones. The metaphor likens computing two visual shares of the selfie to ripping a banknote in two, a technique used for decades for delayed payment in cash-only markets. PGS is designed to work with devices with limited computational power and when connectivity is patchy or not always available. Thanks to the visual cryptography techniques PGS uses to compute the shares, the original selfie can be recomposed simply by stacking the shares, preserving the analogy with re-joining the two parts of the banknote.
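The PGS implementation itself isn’t public in my notes, but the banknote metaphor maps onto the classic Naor-Shamir (2,2) visual-cryptography scheme, which is simple enough to sketch. Each pixel of a black-and-white selfie becomes a 2x2 block of subpixels; on its own each share is random noise, and physically stacking the two shares (a logical OR) reveals the image with no computation needed at redemption time. A rough Python/NumPy illustration, not the project’s actual code:

```python
import numpy as np

# The six 2x2 patterns with exactly two black (1) subpixels.
PATTERNS = [np.array(p) for p in (
    [[1, 0], [0, 1]], [[0, 1], [1, 0]],
    [[1, 1], [0, 0]], [[0, 0], [1, 1]],
    [[1, 0], [1, 0]], [[0, 1], [0, 1]],
)]

def make_shares(binary_img, rng=None):
    """Split a binary image (1 = black) into two noise-like shares."""
    rng = rng or np.random.default_rng()
    h, w = binary_img.shape
    s1 = np.zeros((2 * h, 2 * w), dtype=np.uint8)
    s2 = np.zeros_like(s1)
    for y in range(h):
        for x in range(w):
            pat = PATTERNS[rng.integers(len(PATTERNS))]
            s1[2*y:2*y+2, 2*x:2*x+2] = pat
            # White pixel: identical patterns (stack to half-black).
            # Black pixel: complementary patterns (stack to solid black).
            s2[2*y:2*y+2, 2*x:2*x+2] = pat if binary_img[y, x] == 0 else 1 - pat
    return s1, s2

def stack(s1, s2):
    """Overlaying the printed shares: a black subpixel on either share wins."""
    return np.maximum(s1, s2)
```

The appeal for low-end phones is obvious: generating the shares is a few integer operations per pixel, and recombining them needs no device at all.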

13. THE CONTINUING DEMOCRATIZATION OF TECHNOLOGY:
A USB-POWERED DNA SEQUENCER THE SIZE OF A MARS BAR

 

I often write about how the continuing democratization of technology has allowed bad actors to acquire off-the-shelf cyber attack tech and AI tech to foment misdeeds: the kit for cyber attacks is now “off the shelf”, and imaging machine learning algorithms can be built from easily accessible materials and open-source code by anyone with a working knowledge of deep learning. But there is also a good side that I often overlook.
The woman in the photograph is a scientist from the Museo delle Scienze (Museum of Science) in Trento, Italy. It is a research museum and you can sign up for tours of the research area. It has a massive conference center. It was “purpose built” as a modern science museum with a focus on alpine life and evolution. Global issues are also examined. It is visually stunning and has set aside old fashioned curating in favor of a more contemporary and interactive approach. Go to the rooftop for a view of the city and surrounding mountains.

The scientist is in Guinea using a genetic sequencer. That fact alone is astonishing: most sequencing machines are much too heavy and delicate to travel as checked baggage in the hold of a commercial airliner. More importantly, this scientist used this sequencer – actually one of three – to read the genomes of Ebola viruses from 14 patients within as little as 48 hours of samples being collected. That turnaround has never been available to epidemiologists in the field before, and could help them trace sources of infection as they try to stamp out the remnants of the epidemic. The European Mobile Laboratory Project, based in Hamburg, Germany, is providing substantial funding.

This is the democratization of sequencing. The unit … called the MinION … is a palm-sized gene sequencer made by UK-based Oxford Nanopore Technologies. The device is portable and cheap. It can read out relatively long stretches of genetic sequence, an ability increasingly in demand for understanding complex regions of genomes. And it plugs into the USB port of a laptop, displaying data on the screen as they are generated, rather than at the end of a run that can take days.

14. HA! YOU WANT HIGH TECH? YOU WANT “DISRUPTION”? YOU WANT “STATE-OF-THE-ART”?
TACO BELL SPENT TEN YEARS TRYING TO GET ROBOTS TO PICK UP CHEESE AND PUT IT ON TORTILLAS

Before I move on to my “Quick Fire Round”, I want to close out this section with a bit from my second favorite event of the year, Cannes Lions in France (the first being the International Journalism Festival (IJF) in Perugia, Italy).

Cannes Lions is a global event for those working in the creative communications, advertising and related fields. It is considered the largest gathering of worldwide advertising professionals, designers, digital innovators and marketers. The Lion awards are also the most coveted and well respected in the entire advertising and creative communications industry. It runs seven days, incorporating the awarding of the Lions in multiple categories, and is held every June. And like IJF in Perugia, Cannes Lions is where I learn new approaches to storytelling in the digital age, including the innovative technologies and methods being used to source stories and the new formats being used to visualize the stories being told.

Now, about that cheese. According to the Taco Bell advertising rep I spoke with, a fabulous taco with melty cheese in every single bite was something Taco Bell started dreaming about 10 years ago. After a decade-long journey of trial and error, that dream eventually became the Quesalupa, a taco served in a cheese-stuffed fried shell whose 2016 arrival was heralded by a Super Bowl ad featuring a cackling George Takei. Costing somewhere from $15 million to $20 million, it was Taco Bell’s most expensive ad campaign ever. And it paid off: the company sold out its run of 75 million Quesalupas during the product’s four-month limited release.

Such is the influence cheese wields over the American consumer. Americans eat 35 pounds of cheese per year on average – a record amount, more than double the quantity consumed in 1975. And yet that demand doesn’t come close to meeting U.S. supply: the cheese glut is so massive (1.3 billion pounds in cold storage as of 31 May 2017) that on two separate occasions, in August and October of last year, the federal government announced it would bail out dairy farmers by purchasing $20 million worth of surplus for distribution to food pantries.

Side note from the Taco Bell rep: “Here’s a little secret. If you use more cheese, you sell more pizza. In 1995 we worked with Pizza Hut on its Stuffed Crust pizza, which had cheese sticks baked into the edges. The gimmick was introduced with an ad starring a pizza-loving real estate baron you might know: Donald Trump. By year’s end we had increased Pizza Hut’s sales by $300 million, a 7 percent improvement on the previous year. We estimated that if every U.S. pizza maker added one extra ounce of cheese per pie, the industry would sell an additional 250 million pounds of cheese annually.”

Add to that a global drop in demand for dairy, plus technology that’s making cows more prolific, and you have the lowest milk prices since the Great Recession ended in 2009. Farmers poured out almost 50 million gallons of unsold milk last year—actually poured it out, into holes in the ground—according to U.S. Department of Agriculture data. In an August 2016 letter, the National Milk Producers Federation begged the USDA for a $150 million bailout.

That Taco Bell is developing its cheesiest products ever in the midst of an historic dairy oversupply is no accident. There exists a little-known, government-sponsored marketing group called Dairy Management Inc. (DMI), whose job it is to squeeze as much milk, cheese, butter, and yogurt as it can into food sold both at home and abroad. Until recently, the “Got Milk?” campaign was its highest-impact success story. But for the past eight years, the group has been the hidden hand guiding most of fast food’s dairy hits (“they are the Illuminati of cheese” said my source) including and especially the Quesalupa. In 2017 – EUREKA!! The finished product is mega-cheesy: with an entire ounce in the shell, the Quesalupa has about five times the cheese load of a basic Crunchy Taco. To produce the shells alone, Taco Bell had to buy 4.7 million pounds of cheese.

And as demand skyrocketed, Taco Bell had to develop a cheese filling that would stretch like taffy when heated, figure out how to mass-produce it, and invent proprietary machinery along the way.

Then it gets pretty funny. Or a little crazy. How to mass-produce the shells became a major problem. They hired four PhDs, two of whom … wait for it … had doctorates in cheese chemistry, who compared the chemical compositions of various cheese varieties, specifically the interplay between molecules of a protein called casein found in the space around milk fat (“think of casein as dairy glue that, at the right temperature and pH, gives cheese its pull by binding water and fat in a smooth matrix”).

Yes, Americans may buy less dairy from the grocery store than they used to, but they still like to eat it. After McDonald’s switched from margarine to real butter in its restaurants in September 2015 – a change this advertising company lobbied for – the company said Egg McMuffin sales for the quarter increased by double digits (“our job is getting cheese into foods that never had any, or to increase the cheese content in foods that do have it”).

And those robots? “We helped Taco Bell on the regulation and food promotion front. Then we helped them automate the manual assembly for the Quesalupa 2.0 [just like software!!] and that will open up a whole other level of scale. Those robots can pick up cheese and put it on tortillas at very, very high speeds, almost 20X the pace of a human worker.”

Ah, the advertising biz. And you thought it was just pop up ads.

15. PRIVACY, PEOPLE AND MARKETS

This is a subject that deserves (and will receive) fuller treatment, and it will be my lead blog post in 2020.

No one but the executives at data companies, it seems, is happy with the current state of information privacy. The dominant model—known as “notice and consent” or “privacy as control”—is economic. It takes as its conceptual core the idea that autonomous individuals should choose how much of their personal information they reveal to others. These individuals must then negotiate this disclosure on a contractual basis, trading information in return for some desired service.

This sounds good in theory. In practice, it has meant that online access to almost anything requires first accepting the terms of a byzantine and one-sided document that gives the service provider free and unfettered access to a lot of personal information, including (quite often) the right to sell it to others. Even if users understood these terms (they do not), and even if it would not be bad for the entire economy if they tried (it would), there is no opportunity for negotiation or effective control: major sites like Facebook and Google do not have significant market competitors, and foregoing access to social media or Internet searches is impractical.

There was so much ink spilled on this topic that it was tough to pick a good link, but this will get you started.

QUICK FIRE ROUND

The following are a few of the items that came my way last year, ending with my favorite book reads and photos of the year:

16. How AI will eat UI. Now here is something to think about. What if the UI is different for each of us because the AI picks up different things? Nobody’s phone would look the same, nobody’s phone would act the same. You wouldn’t be able to make sense of your best friend’s device. And yet it might happen.

17. There are so many of these cases that you just don’t know which to highlight. Privacy? HA! Your car is likely capturing hundreds of data points about you and your driving, and secretly sending them to the manufacturer. Oh, and you “consented”.

18. Unlike Facebook, the California DMV actually does sell your personal information – to pretty much anyone, for $50m a year. Break them up!

19. Why do Americans work more than anybody else? The internet enables workaholism. White-collar workers are caught up in an arms race. It’s mutually assured exhaustion. The internet creates an ever-present tether to the workplace that only makes the problem worse.

20. GPS signals are being spoofed in some areas of Moscow: “the fake signal, which seems to centre on the Kremlin, relocates anyone nearby to Vnukovo Airport, 32 km away. The scale of the problem did not become apparent until people began trying to play Pokemon Go.”

21. The most bizarre legal case I read all year. A Louisiana police officer is attempting to impose possibly ruinous tort damages on DeRay Mckesson, a national leader of Black Lives Matter. The officer – proceeding under the pseudonym Doe – claims that Mckesson owes him damages because he was injured in a protest outside the Baton Rouge Police Department on July 9, 2016. During that protest, someone in the crowd threw a hard object that injured the officer. Mckesson was present that night, but Doe doesn’t claim that Mckesson threw the object; instead, he claims – in defiance of Supreme Court precedent – that Mckesson owes him damages because the civil-rights leader did not prevent the nameless protester from throwing the object.

22. Despite years of scandal, Facebook maintains its unhealthy obsession with growth.

23. Each year humanity produces 1,000 times more transistors than grains of rice and wheat combined.

24. Emojis are starting to appear in evidence in court cases, and lawyers are worried: “When emoji symbols are strung together, we don’t have a reliable way of interpreting their meaning.” (In 2017, an Israeli judge had to decide if one emoji-filled message constituted a verbal contract).

27. Since the 1960s, British motorways have been deliberately designed by computer as series of long curves, rather than straight lines. This is done for both safety (less hypnotic) and aesthetic (“sculpture on an exciting, grand scale”) reasons.

28. The Vietnam draft lotteries created an enormous, ongoing science experiment. Decades later – yes, even today – researchers use the randomized data set these lotteries generated to study all sorts of phenomena. But that scientific treasure trove comes at the cost of the men who lived through the draft.

29. (This came in late last year after my 2018 edition was out. I found it funny). In the early 1980s AT&T asked McKinsey to estimate how many cell phones would be in use in the world at the turn of the century. They concluded that the total market would be about 900,000 units. This persuaded AT&T to pull out of the market. By 2000, there were 738 million people with cellphone subscriptions.

30. There is an increasingly popular tactic used to spread electoral disinformation: hundreds of automated local news sites with a shadowy political purpose.

31. Sometime in the 1990s, it seems the US forgot how to make a critical component of some nuclear warheads.

32. Spotify pays by the song. Two three-minute songs are twice as profitable as one six-minute song. So songs are getting shorter.

33. Fashion++ is a Facebook-funded computer vision project that looks at a photo of your outfit and suggests ‘minimal edits for outfit improvement’ like tucking in a shirt or removing an accessory.

34. Three million students at US schools don’t have the internet at home.

35. No babies born in Britain in 2016 were named Nigel.

36. At least half of the effort of most AI projects is devoted to data labelling – and that’s often done in rural Indian villages.

37. Worldwide, growth in the fragrance industry is lagging behind cosmetics and skincare products. Why? ‘You can’t smell a selfie’.

38. Some blind people can understand speech that is almost three times faster than the fastest speech sighted people can understand. They can use speech synthesisers set at 800 words per minute (conversational speech is 120–150 wpm). Research suggests that a section of the brain that normally responds to light is re-mapped in blind people to process sound.

40. Headphones are changing the sound of music. Artists have adapted to the listening experience by adjusting vocals and bass levels.

 

I try to average 2-3 book reads a month. Here are my favorites for the year:

41. My Cat Yugoslavia by Pajtim Statovci (translated from Finnish). The publishers should put a cover sticker that says, “This is so fucking weird.” An amazing novel that refuses to be put into any box. Is it a coming out story, a fable, a family novel, an immigrant tale, science fiction, romance? Why set limits? It’s all of them. It’s queer in every sense of the word. Statovci is such a talented writer. Gives me hope for the future of literature.

42. Sandworm: A New Era of Cyberwar and the Hunt for the Kremlin’s Most Dangerous Hackers by Andy Greenberg. In mid-October, a cybersecurity researcher in the Netherlands demonstrated, online, as a warning, the easy availability of the Internet protocol address and open, unsecured access points of the industrial control system — the ICS — of a wastewater treatment plant in Vermont. Industrial control systems may sound inconsequential, but as the investigative journalist Andy Greenberg illustrates persuasively in his book, they have become the preferred target of malicious actors aiming to undermine civil society. It has been THE cyber book of the year in my military intelligence community.

43. The Age of Surveillance Capitalism by Shoshana Zuboff. Pretty much the book-of-the-year for most of us. A masterwork of original thinking and research, it provides startling insights into the phenomenon she has named surveillance capitalism – insights that amazed even those of us steeped in this stuff. Her painstaking review of the global architecture of behavior modification, and what it has done to human nature in the twenty-first century, parallels the industrial capitalism that disfigured the natural world in the twentieth.

45. The Order of the Day by Eric Vuillard (translated from French). An exquisite book of vignettes dealing with the rise of fascism in Europe in the 1930s. The opening scene of Hitler meeting the great industrialists of Germany when the deal to get him to power was struck is chilling. The book never goes where I expect it to, always swerving and meandering, yet for a short book, it builds slowly and ever so frighteningly. The book is both quiet and angry, spare and full. A tour de force, a must-read for our current political climate.

46. The Death of the Gods: The New Global Power Grab by Carl Miller. Tech, both big and small, keeps smashing its way into our lives leaving at least some of us too punch-drunk to take stock and think about who or what exactly is driving this change, and what it’s doing to us. That’s even assuming we want to, given the lurking suspicion that what we’ll find there is even worse than we imagine. Luckily Carl Miller has dived into the murkiest depths of the net. The Gods in question are the structures and institutions that have governed and shaped our world for living memory, even centuries: democratic government; business; press and media; crime and policing. These are the structures in which “power” has resided, and through which it is exercised.

 

47. The Event Horizon Telescope collaboration unveiled this first direct image of a black hole and its event horizon in April. The team used eight radio observatories to capture the ring of light around the void, which is at the centre of the galaxy Messier 87.

 

48. French researchers carved a labyrinth of microfluidic chambers in a silicon wafer to mimic blood flows in circulatory networks. Biophysicist Benoît Charlot at the University of Montpellier, France, captured this image using a scanning electron microscope.

49. Stentors — or trumpet animalcules — are a group of single-celled freshwater protozoa. This image won second prize in the 2019 Nikon Small World Photomicrography Competition, and was captured at 40 times magnification by researcher Igor Siwanowicz at the Howard Hughes Medical Institute’s Janelia Research Campus in Ashburn, Virginia.

 

50. This aerial view of the sea ice in East Greenland was captured by photographer Florian Ledoux using a drone. Low levels of winter snow cover, heatwaves in the spring and a sunny summer all contributed to significant melting of Greenland’s ice sheet in 2019.

51. Most of you have seen this and know the back story via a post I wrote on how the technology works. In brief, this fluorescent visualization of a turtle embryo was the winner of the 2019 Nikon Small World Photomicrography Competition. Microscopists Teresa Zgoda and Teresa Kugler stitched together and stacked hundreds of stereomicroscope images of the roughly 2.5-centimetre-long embryo.

52. “Sleeping like a Weddell” was my favorite image from this year’s Wildlife Photographer of the Year competition, held by the Natural History Museum in London. After seeing thousands of heartbreaking images of climate change, destruction and devastation this year, discovering this picture of a Weddell seal (Leptonychotes weddellii) – an image that captured peace and innocence – was a breath of fresh air. It grounded me and reignited my passion for our natural world. And the perfect way to end this series.

* * * * * * * * * * * * * * * * *

 

I look forward to seeing a few of you on the conference circuit in 2020, although this coming year I am winding down. I normally cover 20 events in person; this year I will attend only 2-3, leaving the kids to do the heavy lifting. My focus is on finishing production on two documentaries, which I will release next year … and enjoying myself on and by the sea.

So less of this …

 

 … and more of this

 

 

Merry Xmas, Happy New Year, Happy “Whatever Your Holiday Is”

 

 
