The run-up to Mobile World Congress 2017: a short history of mobile

MWC 2017 opener – based on Wanderer Above the Sea of Fog, by Caspar David Friedrich (1818)

24 February 2017 –  It is mildly subversive and perhaps a little quaint when someone clings to their flip phone and refuses a smartphone. Refusing both kinds of phones is viewed as downright lunacy, especially if the person refusing was born after the mid-1970s.

And to be honest, when I retire to my home in Greece every summer for my “Greg time” I eschew my smartphone and use just a feature phone.  I do not want the omnipresent ability to communicate with anyone who is absent. Cellphones put their users constantly on call, constantly available, and as much as that can be liberating or convenient, it can also be an overwhelming burden. The burden comes in the form of feeling an obligation to individuals and events that are physically elsewhere. Anyone who has checked their phone during a face-to-face conversation understands the temptation.

And anyone who has been talking to someone who has checked their phone understands what is wrong with it.

NOTE: Nokia will be re-launching the 3310 at the Mobile World Congress next week. It was my first phone, and it is one of the most loved phones in history:

Nokia 3310

An insider told me it will sell for €59. Long before the iPhone came into the picture, Nokia ruled the mobile phone market – and one device that truly established its formidable position was the Nokia 3310. The legendary phone was popular for many reasons: it let you compose text messages beyond the SMS character limit, and it featured an 84 x 48 pixel monochrome screen, screen savers, a calculator, and games like Snake II. It could even store seven custom ringtones. That’s not all: the Nokia 3310 allowed users to replace the front and back covers to make the phone more personal than ever. Wow! That might not sound like a lot in today’s world, with 32GB of storage becoming standard, but in the year 2000 these capabilities made it a great phone. Plus, the Nokia 3310 was known for its durability and its ability to survive extreme conditions. It was an absolute beast, and it featured a battery that could last 55 hours on standby.

But in the day-to-day world, for most of us, it’s a different story.

Eric Schmidt recently admitted that he underestimated machine learning and artificial intelligence. Schmidt’s assessment back then was that artificial intelligence research faced tremendous obstacles that inhibited its progress. He “didn’t think it would scale,” he said of the machine learning tech.

And he said he also didn’t think it would “generalize,” meaning becoming more flexible and elastic, like the human mind, rather than remaining a specialized tool suited only to specific tasks. Schmidt had underestimated the power of simple algorithms to “emulate very complex things,” he said, while qualifying that “we’re still in the baby stages of doing conceptual learning.”

Google’s current CEO Sundar Pichai has described the world as having entered an “AI-first” era. The preceding phases were a focus on all things mobile – “mobile first”, then “mobile only” – and now it is “AI first”.

Pichai has said:

“the world is moving from the smartphone age into that AI-first era, in which Google products (and Apple products) will help people accomplish tasks in increasingly sophisticated, even anticipatory ways”

We see that in the Pixel phones and Google Home, for instance: the first devices with embedded support for Google Assistant, a rival to Apple’s Siri and Amazon’s Alexa that is designed not only to handle straightforward commands but also fuzzier requests such as “Play that Shakira song from Zootopia.” The Assistant is also designed to engage in relatively complex conversations related to tasks such as making vacation arrangements.

So how about a little history of mobile before we start the Barcelona festivities this weekend:

Before iPhone/after iPhone
 
I’ve been going to the Mobile World Congress, on and off, since 2008. It started in Cannes around 2001, when it was a tenth of its current size (there were 110,000 attendees last year).

In 2001 we were right at the top of both an internet bubble and a mobile bubble. There were enormous promises of data services to be delivered over “entire spectrums”. It was the era of brick-sized phones, and nothing much would happen until 2005 and the arrival of 3G phones that weren’t bricks.

Trivia: why did Nokia become the mobile powerhouse? Because of Finland’s frozen tundra. Industry in Finland (and Sweden) was expanding, and laying communication wire to connect factories and operations was impossible. A few creative engineers said “hold on – let’s go wireless!” I will save you the details. But to tell you how farsighted Nokia was, there is a 1984 memo by Nokia engineer Marita Viljanen that says “realize this cellular phone is versatile and we should not restrict its use to the car [as many of you may recall: an enormous battery in the trunk, and a brick of a phone] because it can easily fit in a briefcase … it’s only a matter of time before I can design one for the breast pocket”. How sad.

Today, one can date “mobile” to before iPhone and after iPhone. But the interesting thing, looking back, is that before the iPhone, it didn’t really feel like we were desperately in need of some catalytic event. We all recall our history professor at university who told us “people in the Middle Ages didn’t know they were living in the Middle Ages”. It wasn’t clear at all that we were waiting for a new class of device, with a new approach, that would transform the mobile internet from a segment of telco revenue into a near-universal experience that would become the main part of the internet itself.

Note: for my e-discovery/litigation readers we see it with predictive coding, as in “Before Peck/After Peck” and I will expand on that in a subsequent post about the pathetic event called “Legalweek” … or as I call it the “Beads-and-Baubles Show”.

In mobile, the division between “smart” and “feature” phones was not as clearly defined as it is today. But the ecosystems were so poorly developed, in hindsight, that I didn’t get much of a sense that I was losing something when I went from smartphone to featurephone and back to smartphone. The taxonomy had no sense that some of these were “smart” and some were not – it was all about the hardware features.

And, of course, in parallel, there were “PDAs”, which eventually merged with phones. And does anybody remember cellular radio and those $100 radio cards, just to connect your phone and get online (over GPRS, at narrowband speeds)? Today you could buy a half-way decent Android smartphone for the price of one of those cards.

As we now know, the iPod – and what it presaged – would change all of that. Yes, it did look like an expensive toy at the time, but it was signalling the future at a more structural level: a fundamental change in who could make phones and in who controlled what you could do on them.

At Barcelona last year, Ben Evans – mobile analyst extraordinaire – noted:

“I remember being at MWC in Cannes in 2004 and one of the most senior execs at Motorola was explaining how hard it was to put hard disks into phones, because people accepted an iPod breaking if they dropped it but not a phone. Meanwhile, Apple had already moved onto flash storage .. and more”

Yes, we all know how senior management across the ecosystem laughed at the iPhone, seeing only the ‘minimal’ parts of the MVP (in tech this stands for “Minimum Viable Product”) and not realizing that it reflected a fundamental shift in the tradeoffs that were possible and that consumers would choose.

And there is also a great lesson here about metrics … and how false they can be. Nokia and RIM seemed to be doing great, their “smart” products seemingly going up, up, up every year. Meanwhile, for all sorts of reasons (not relevant here), Nokia’s products – and indeed most of the best products on the market in Europe (let alone Japan) – were not available in the U.S.

Walter Isaacson, in his bio of Steve Jobs, notes that Jobs and his team hated their phones much more than Europeans did – because the phones available to them weren’t as good. Nokia and Blackberry said “oh, poo poo on you”, because sales grew strongly for over two years after the iPhone was first announced. And as Ben Evans has pointed out, that was how long it took for the iPhone to fix the issues in the MVP and get widespread distribution, and for the first Androids to start appearing.

Then, within a quarter of each other, sales went away at both companies.

It’s the old “Wile E. Coyote effect”: you’ve run off the cliff, but you’re not falling, and everything seems fine. But by the time you start falling, it’s too late.

Ben Evans turned me on to a piece by Michael Mace about the collapse of BlackBerry, looking into the problem of lagging indicators:

“the headline metrics tend to be the last ones to start slowing down, and that tends to happen only when it’s too late. So it can look as though you’re doing fine and that the people who said three years ago that there was a major strategic problem were wrong”

That is, using metrics that point up and to the right to refute a suggestion there is a major strategic problem can be very satisfying, but unless you’re very careful, you could be winning the wrong argument.

Switching metaphors (and again quoting Ben):

“Nokia and Blackberry were skating to where the puck was going to be, and felt nice and fast and in control, while Apple and Google were melting the ice rink and switching the game to water-skiing.”

 
Yes, all hindsight.  Back then if I (or any analyst) came out and said …

  • Hey, listen, chill. It’ll take a decade for devices that can do all this to become mass-market.
  • But then, all of the stuff in the concept videos and mock-ups and crazy imagineering and futurology will happen. All of it. For billions of people. Even David Bowie thinks so!!
  • But look: the device-makers won’t make much money at all. Well, except for Samsung. Oh, and that has-been computer company called Apple. Huh? Yeah, I know. The guys with the expensive MP3 player. And no expertise or IP in mobile. At all. But trust me on this!
  • Oh, and the telcos won’t do any of it. No, really. Because the portal model and the AOL model are going to fall apart. Yes.  The “You’ve got mail” folks.  Kaput.
… how many people would have been convinced?

But this is what happens. Today’s expensive toys – Fitbits, Bands, Pebbles, etc. – are going to be subsumed. Today’s emerging expensive toys – drones, home security systems, the higher-level IoT (smart scales, Eve Room, smart fridges, lots of other health gadgets) – are still in a fog.

In all of those cases it’s not at all clear how things will shake out, though one can make guesses. For example, drones: these seem to be the sort of thing where Amazon or Google could ultimately be the winner-takes-almost-all. They have the engineering, and they have the ability to interact with the inevitable laws that are coming – both to shape those laws and to meet their needs – in a way that the small drone companies do not.

The other cases are a lot more fluid.

And if you’re looking for a break-out tech company for 20 years from now, a company that doesn’t exist (or is tiny) today, I’d expect to see something medical because that’s a field that requires a level of seriousness that, once implemented for an initial device, can be redeployed over and over again much faster than it can be recreated and copied by others.

And now? Oh, it’s all about interfaces and friction

Yep. Mobile phones and then smartphones have been swallowing other products for a long time – everything from clocks to cameras to music players has been turned from hardware into an app. But that process also runs in reverse sometimes – you take part of a smartphone, wrap it in plastic and sell it as a new thing. This happened first in a very simple way, with companies riding on the smartphone supply chain to create new kinds of product with the components it produced, most obviously the GoPro. Now, though, there are a few more threads to think about.

Quoting Ben Thompson of the blog Stratechery:

First, sometimes we’re unbundling not just components but apps, and especially pieces of apps. We take an input or an output from an app on a phone and move it to a new context. 

 
So where a GoPro is an alternative to the smartphone camera, an Amazon Echo is taking a piece of the Amazon app and putting it next to you as you do the laundry. In doing so, it changes the context but also changes the friction. You could put down the laundry, find your phone, tap on the Amazon app and search for Tide, but then you’re doing the computer’s work for it – you’re going through a bunch of intermediate steps that have nothing to do with your need. 

 
Using Alexa, you effectively have a deep link directly to the task you want, with none of the friction or busywork of getting there.

This is what we in the biz call “removing friction”. We are removing or changing how we use power switches, buttons and batteries. You don’t turn an Echo or Google Home on or off, nor AirPods, a Chromecast or an Apple Watch. Most of these devices don’t have a power switch, and if they do you don’t normally use it. You don’t have to do anything to wake them up. They’re always just there, present and waiting for you. You say ‘Hey Google’, or you look at your Apple Watch, or you put the AirPods in your ear, and that’s it. You don’t have to tell them you want them.

Part of this is “ambient computing” or “ambient intelligence”, which refers to electronic environments that are sensitive and responsive to the presence of people. Ambient intelligence is a vision of the future of consumer electronics, telecommunications and computing that was originally developed in the late 1990s by Eli Zelkha and his team at Palo Alto Ventures for the time frame 2010-2020.

Note: Eli passed away last month.  He left a treasure trove of research material.

 
The point here is how many of these devices are driven by some form of AI. The obvious manifestation of that is in voice assistants, which don’t need a UI beyond a microphone and speaker and so theoretically can be completely ‘transparent’. But since in fact we do not have HAL 9000 – we do not have general, human-level AI – voice assistants can sometimes feel more like IVRs, or a command line: you can only ask certain things, but there’s no indication of what those things are. The mental load could end up higher rather than lower.

Note: this was where Apple went wrong with Siri – it led people to think that Siri could answer anything when it couldn’t. Conversely, part of Amazon’s success with Alexa, I think, is in communicating how narrow the scope really is.

And it will only get better. This past December I was at the biggest artificial-intelligence conference of the year … also in Barcelona; all the mega tech events are here and I will explain why next week in our continuing MWC coverage … and saw some incredible “reinforcement learning” demos. They explained why and how AlphaGo, a program developed by DeepMind, a subsidiary of Alphabet, mastered the impossibly complex board game Go and beat one of the best human players in the world in a high-profile match last year.

Last year I wrote a very long brief on reinforcement learning, but in short:

  • Reinforcement learning copies a very simple principle from nature.
  • The psychologist Edward Thorndike documented it more than 100 years ago. Thorndike placed cats inside boxes from which they could escape only by pressing a lever.
  • After a considerable amount of pacing around and meowing, the animals would eventually step on the lever by chance.
  • Once they learned to associate this behavior with the desired outcome, they escaped with increasing speed.
  • Some of the very earliest artificial-intelligence researchers believed that this process might be usefully reproduced in machines. In 1951, Marvin Minsky, a student at Harvard who would become one of the founding fathers of AI as a professor at MIT, built a machine that used a simple form of reinforcement learning to mimic a rat learning to navigate a maze.
  • Minsky’s Stochastic Neural Analogy Reinforcement Computer, or SNARC, consisted of dozens of tubes, motors, and clutches that simulated the behavior of 40 neurons and synapses.
  • As a simulated rat made its way out of a virtual maze, the strength of some synaptic connections would increase, thereby reinforcing the underlying behavior.
Reinforcement learning will soon inject greater intelligence into much more than games: self-driving cars, personalized medicine, text analytics, etc.
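If you want to see the principle in code, here is a minimal sketch of tabular Q-learning – the textbook form of reinforcement learning – in Python. The five-cell corridor below is purely my own toy illustration (nothing from the conference demos): the agent is rewarded only at the exit and, like Thorndike’s cats, escapes faster as the learned values propagate back from the reward.

```python
import random

# A minimal tabular Q-learning sketch (illustrative only): an agent in a
# five-cell corridor must learn to walk right to reach the "exit" at cell 4,
# the way Thorndike's cats learned to press the lever.
N_STATES = 5          # cells 0..4; the reward sits at cell 4
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q[state][action] = learned value of taking that action in that state
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit what we know, occasionally explore
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = 0 if Q[state][0] > Q[state][1] else 1
        next_state = max(0, min(N_STATES - 1, state + ACTIONS[a]))
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # The core update: nudge Q toward reward + discounted future value
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][a])
        state = next_state

# Values grow as cells get closer to the exit: the reward has flowed backwards
print([round(max(q), 2) for q in Q])
```

The whole trick is that single update line: good outcomes flow backwards through the table, strengthening the actions that led to them – which is, conceptually, what SNARC was doing with its tubes and clutches.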

And I saw it in the smartphone’s image sensor. Those sensors are becoming a universal input, and a universal sensor. Talking about “cameras” taking “photos” misses the point here: the sensor can capture something that looks like the prints you got with a 35mm camera, but what else? Using a smartphone camera just to take and send photos is like printing out emails – you’re using a new tool to fit into old forms.

 

In that light, simple toys like Snapchat’s lenses or stories are not so much fun little product features to copy as basic experiments in using the sensor and screen as a single unified input, and in creating quite new kinds of content. The emergence of machine-learning-based image recognition and reinforcement learning means that the image sensor can act as input in a more fundamental way – translation is now an imaging use case, for example, and so is math. Here it’s the phone that’s looking at the image, not the user. Lots more things will turn out to be “camera” use cases that aren’t obvious today: computers have always been able to read text, but they could never read images before.
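To make “the phone looking at the image” concrete, here is a minimal sketch of the idea – run a frame off the sensor through a pretrained image classifier and get back labels instead of a “photo”. This assumes the PyTorch/torchvision stack is installed; “photo.jpg” is a placeholder file name, and the model choice is mine, not anything any phone vendor actually ships.

```python
# A minimal sketch, assuming PyTorch + torchvision; "photo.jpg" is a placeholder.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(pretrained=True)   # ImageNet-pretrained classifier
model.eval()

# Standard ImageNet preprocessing: resize, crop, scale to [0,1], normalize
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("photo.jpg").convert("RGB")   # any frame off the sensor
batch = preprocess(img).unsqueeze(0)           # shape: (1, 3, 224, 224)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)

# Top five ImageNet class ids with confidences: labels, not a "photo"
top5 = torch.topk(probs, k=5)
print(top5.indices.tolist(), top5.values.tolist())
```

The same pipeline, with a different model bolted on the end, is what turns the sensor into a translator, a math solver, or a barcode reader – the image is input, not output.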

Ok, enough.  I have my video crew in Barcelona this year and we plan some more very nifty coverage of the Mobile World Congress for 2017.

Now?  For the crew, it’s tapas time!!

Tapas
