Apple’s new hardware is built for our visual future


22 April 2021 – For the last several years I have been writing about Apple and what it has been doing to build the visual future. I do not attend the annual Apple Worldwide Developers Conference because I am not a developer, but my CTO is an Apple developer and he attends. It is where new updates to the operating system (OS) for Apple’s devices are announced, along with all the new device features.

Essentially, Apple has been working towards three standard interfaces for getting access to content and functionality: voice-based (Siri), visual (widgets), and haptic/audible (notifications). Those interfaces, over time, will replace all the UI we currently use: no app icon, no app navigation, no search, no feed. Just the OS giving you access to what you need. In most cases, you won’t need to take your phone out of your pocket, instead interacting with content through your AirPods using Siri or your Apple Watch using widgets.

Apple is clearly working to make AR part of everyday life, using the new U1 chip inside iPhones and the Apple Watch to support connected AR experiences (and more).


As I have been banging on about ad nauseam, two decades into the new millennium it is pretty apparent that hardware without software and smarts is nothing more than a gimmick. Apple’s product launch event this week was a timely reminder of this new hardware reality. The new iMac’s video camera and the new iPad Pro’s front-facing camera are a big step toward our visual future. Ever since the pandemic began, and we started to live most of our lives on Zoom and its video conferencing cousins, we have complained about the poor quality of the video cameras inside our computers. Apple computers, especially.

We mostly put up with the poor quality, though some of us bought a gadget to connect a bigger and better camera to our computers. Or, if you are like me, you used the camera on the iPhone via the Camo app — after all, the iPhone camera stack is way better than any standalone streaming camera. Whatever the hack, we were all sick and tired of substandard front-facing cameras on our computers.

Throughout this pandemic experience, I have wondered why we don’t have computers with cameras that can use computer vision and depth perception to create a better video-calling experience. After all, the necessary technology building blocks are all here. Computer vision algorithms are progressing by the day. The computers in our pockets, bags, and homes are packed with computing power, machine learning chips, and GPUs. Tiny cameras and sensors — you know, the ones used in billions of smartphones — are now capable and cheap enough to be built into computers and, eventually, to create a better conferencing experience.

Zoom (or at least, working and interacting over video) culture is here to stay. Unlike the 1918 flu epidemic, when the telephone failed to live up to its potential because exchanges were staffed by operators who fell victim to the flu, the current pandemic has been made bearable by reliable video conferencing. The “always-on” servers are impervious to the kind of virus the rest of us are dealing with. It is not far-fetched to say that, without technology, the economic impact of the pandemic would have been far worse than what we have experienced.

Just as a taste of the convenience of streaming media got us all addicted to Spotify and Netflix, we are all going to be video-calling for a long time. We still underestimate the behavior modification the pandemic and video calling have brought about. Whether it is casual social contact, telemedicine, remote learning, or remote work, we have entered an era where communication will be increasingly visual.

Data from the pandemic makes this clear. Google Duo and Google Meet hosted over 1 trillion minutes of video calls globally in 2020. OpenVault, a company that studies US network traffic patterns, noted in a recent study that total upstream traffic increased 63 percent in 2020. A lot of that was due to the use of video conferencing apps. OpenVault data shows that a one-hour call on Zoom takes up between 360 megabytes and 1.2 gigabytes. Bandwidth consumption nearly doubled during business hours. This additional bandwidth presents new challenges and opportunities, and Google is already on the case, with its AI division using new codecs to compress audio and video into less space.
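
To put those figures in perspective, here is a quick back-of-envelope calculation, written as a tiny Swift snippet. Only the 360 MB to 1.2 GB per call-hour range comes from OpenVault; the call-hours per day and workdays per month are my own illustrative assumptions, not reported numbers.

```swift
// Back-of-envelope sketch using OpenVault's 360 MB to 1.2 GB per call-hour range.
// The call-hours and workdays below are assumptions, not reported figures.
let perHourGB = 0.36...1.2          // OpenVault's reported range per hour of Zoom
let callHoursPerDay = 4.0           // assumed hours on video calls per workday
let workdaysPerMonth = 21.0         // assumed working days in a month

let lowGB  = perHourGB.lowerBound * callHoursPerDay * workdaysPerMonth
let highGB = perHourGB.upperBound * callHoursPerDay * workdaysPerMonth
print("Roughly \(Int(lowGB)) to \(Int(highGB)) GB of extra traffic a month")
// Roughly 30 to 100 GB a month from video calls alone, which goes a long way
// toward explaining why business-hours bandwidth consumption nearly doubled.
```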

With so much of our work and life going visual, there is an opportunity for a superior experience. Apple has recognized that — and so have Amazon, Facebook, Google, and Microsoft, who have all launched dedicated video-calling devices.

The new iMac has a 1080p HD camera that taps into the image signal processor in the M1 chip and the Neural Engine, which boost image quality with better noise reduction, greater dynamic range, and improved auto exposure and white balance. The three-microphone array results in less feedback, and directional beamforming helps it ignore background noise and focus on a user’s voice. In short, Apple says conversations will sound natural and clear. The six speakers, too, use algorithms to enable spatial audio.
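
For readers curious what “directional beamforming” actually does, here is a minimal delay-and-sum sketch in Swift. It is purely illustrative and not Apple’s implementation: it assumes a simple linear microphone array, a fixed speed of sound, and already-synchronised mono buffers, and the type and function names are my own. In the iMac, this sort of processing presumably runs on dedicated audio hardware and the M1, not in application code.

```swift
import Foundation

/// A purely illustrative delay-and-sum beamformer, not Apple's implementation.
/// Assumes a linear microphone array, a fixed speed of sound, and mono sample
/// buffers that were captured on a common clock.
struct DelayAndSumBeamformer {
    let micPositions: [Double]      // mic offsets along the array axis, in metres
    let sampleRate: Double          // samples per second
    let speedOfSound = 343.0        // metres per second, at room temperature

    /// Steer the array toward `angle` (radians from broadside) and combine the
    /// per-microphone buffers into one output buffer. Sound arriving from the
    /// steered direction adds up coherently; off-axis noise partially cancels.
    func focus(buffers: [[Double]], angle: Double) -> [Double] {
        precondition(buffers.count == micPositions.count, "one buffer per mic")
        let length = buffers.map { $0.count }.min() ?? 0

        // Arrival-time offset of a plane wave at each mic, converted to samples.
        let delays = micPositions.map { position in
            Int((position * sin(angle) / speedOfSound * sampleRate).rounded())
        }
        let minDelay = delays.min() ?? 0

        var output = [Double](repeating: 0, count: length)
        for (mic, buffer) in buffers.enumerated() {
            let shift = delays[mic] - minDelay      // non-negative per-mic shift
            for i in 0..<length where i + shift < length {
                output[i] += buffer[i + shift]
            }
        }
        // Normalise so the steered direction keeps roughly unit gain.
        return output.map { $0 / Double(buffers.count) }
    }
}
```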

And a tech note: the new iPad Pro has a 12-megapixel camera for video calls, and the new iMac’s is 1080p, yet no video call service supports 4K and most are capped at 720p. So we can expect all the video call services to up their game.

Apple isn’t the first to marry video with artificial intelligence and machine learning. Google and Microsoft have been offering better features for a while. Google, for example, launched noise cancellation, background blur, and low-light mode, among many other features, on its video calling services last year. Apple, however, can take full advantage of its vertical integration. The new audio and video hardware will use the M1 chip’s components, such as the image signal processor, Neural Engine, and GPU, to their full potential.

The question is, where is video calling going in the future? We don’t have to look too far — the new iPad Pro gives us ample clues. The iPad Pro is now powered by the same M1 chip that powers the Mac devices. In addition to the TrueDepth camera system, the iPad Pro has a new ultra-wide front camera. Looking at the new iPad Pro’s camera rig, it is not difficult to see these same capabilities coming to your laptop or desktop, and most certainly to your iPhone. The visual sensors are much more capable than what we have experienced so far. It is all about using that power and artificial intelligence to create a better user experience.

Apple hopes to do that with a new iPad Pro feature called Center Stage. Not that I would recommend anything related to Facebook to even my worst enemy, but Center Stage reminds me of Facebook’s Portal device, which offered similar zooming capabilities and tracked people’s movements. From the press release:

“Center Stage uses the much larger field of view on the new front camera and the machine learning capabilities of M1 to recognize and keep users centered in the frame. As users move around, Center Stage automatically pans to keep them in the shot. When others join in, the camera detects them too, and smoothly zooms out to fit everyone into the view and make sure they are part of the conversation.”
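
Apple has not published how Center Stage is implemented, but the general recipe it describes — detect faces, compute a crop that contains everyone, and smooth the pans and zooms over time — is easy to sketch. Here is a hypothetical Swift example using the Vision framework’s face detector; the AutoFramer class, the padding, and the smoothing factor are my own assumptions, not Apple’s code.

```swift
import Vision
import CoreGraphics
import CoreVideo

/// A hypothetical sketch of Center Stage-style auto-framing, not Apple's
/// implementation: detect faces in each wide-angle frame, compute a crop
/// rectangle (in normalised coordinates) that contains them, and low-pass
/// filter that rectangle so the virtual camera pans and zooms smoothly.
final class AutoFramer {
    private let fullFrame = CGRect(x: 0, y: 0, width: 1, height: 1)
    private var currentCrop = CGRect(x: 0, y: 0, width: 1, height: 1)

    func cropRect(for frame: CVPixelBuffer) -> CGRect {
        // Run Vision's face-rectangle detector on the incoming frame.
        let request = VNDetectFaceRectanglesRequest()
        let handler = VNImageRequestHandler(cvPixelBuffer: frame, options: [:])
        try? handler.perform([request])
        let faces = (request.results as? [VNFaceObservation]) ?? []

        // No faces: slowly relax back to the full field of view.
        guard let first = faces.first else { return smoothed(toward: fullFrame) }

        // Union of all face bounding boxes, padded a little, clamped to the frame.
        let union = faces.reduce(first.boundingBox) { $0.union($1.boundingBox) }
        let target = union.insetBy(dx: -0.1, dy: -0.1).intersection(fullFrame)

        return smoothed(toward: target)
    }

    /// Exponential smoothing so the crop doesn't snap to every new detection.
    private func smoothed(toward target: CGRect, alpha: CGFloat = 0.1) -> CGRect {
        func lerp(_ a: CGFloat, _ b: CGFloat) -> CGFloat { a + (b - a) * alpha }
        currentCrop = CGRect(x: lerp(currentCrop.minX, target.minX),
                             y: lerp(currentCrop.minY, target.minY),
                             width: lerp(currentCrop.width, target.width),
                             height: lerp(currentCrop.height, target.height))
        return currentCrop
    }
}
```

In practice the returned rectangle would be applied to the camera feed (for example, by cropping and scaling each frame before it is handed to the video call app), which is what makes the wide field of view of the new front camera so useful: there is room to pan and zoom without losing resolution.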

Today, it is hard to tell the difference between my still-competent 2018 iPad Pro and the newest model. The 2021 model’s screen might be better. It might have a newer chip. It even has a more capable port for connecting peripherals.

But in the end, the only reason for me to upgrade is to use the new camera capabilities to have a more productive video calling experience. The underlying specs won’t matter as much — it is all about smarts and intelligence.

The other big Apple story this week is that the company intends to boost its ads business as the iPhone’s changes hurt Facebook. Its new privacy rules deal a huge blow to how the mobile advertising industry currently works, and show that Apple is actually doing a more effective job on data privacy than either the European Union’s General Data Protection Regulation or the California Privacy Rights Act.

But it is not as straightforward as that. Online advertising has boomed, with annual sales of $378bn, according to the market research group Insider Intelligence, one of my trusted sources on this sort of thing. Google and Facebook are the two biggest players in the market, but Tim Cook, Apple’s chief executive, has repeatedly attacked their business models as unsustainable because of how they accumulate large troves of data to target their ads.

If you take the time to plow through Apple’s 10-Ks, you can estimate that Apple currently earns around $2bn a year from search ads in the App Store, with 80 percent margins. Apple also sells ads in its Stocks and News apps.

A second advertising slot in the App Store is likely to appeal to advertisers after the iPhone’s privacy changes reduce the effectiveness of targeted ads. But there is more than money at stake. As I noted in my monograph about Apple last year, nearly a decade ago the App Store played a critical role in how consumers discovered new content. Apple used to be a kingmaker: if you got featured, your company valuation might increase by a hundred million dollars. But that effect eventually faded.

Apple now wishes to regain this level of control. If Apple cripples mobile advertising, the App Store becomes the primary discovery point for apps again: Apple decides how we use our iPhones, and Apple decides which apps are the most popular.

Anyway, that is fodder for another post. I still need to tackle FLoC (Google’s tracking plan for Chrome), and how the entire mobile advertising industry and “data privacy” (hahah) have been upended.
