The rise of machine intelligence at #codecon 2016

Machine learning, artificial intelligence, deep learning — or, as IBM's CEO insisted, cognitive computing — were front and center for nearly every Code Conference speaker.

At last week's Code Conference, Facebook COO Sheryl Sandberg outlined numerous efforts in AI across the site, in advertising and, of course, in virtual reality. (Photo: Asa Mathat)

What do you get when you mix a choice selection of the leaders of the most important companies, a deeply engaged tech-savvy audience, and the best interviewers around? You get Code Conference. This thirteenth (13!) installment of the unique conference proved as timely, revealing and interesting as ever.

While the conference has been through a few changes of ownership, some name changes and different venues (and seat configurations), the constants of Walt Mossberg and Kara Swisher — along with a deep roster of Recode writers supporting them — make it the place to hear (and, on video, see) how technology and business trends are created, used or handled by the important companies of our time. On a personal note, I've been lucky enough to attend every one of these — you can find me on the far side, taking notes and listening attentively.

Some years are remembered as launch years (like the iPod, or even Windows or Office releases!), and others are marked more by debates over disruption (such as net neutrality or music distribution). Rarely do we get to experience a year in which such a breadth of speakers collectively expresses both optimism and concrete execution plans for a single technology shift, as we saw this year. By any name, machine learning, artificial intelligence, deep learning — or, as IBM's CEO insisted, cognitive computing — were front and center for nearly every speaker.

Over the course of the previous 12 conferences, there have been themes that nearly everyone touched on, but I don't recall such uniform aggressiveness in staking a claim to a technology. Across the board, speakers went to great lengths to talk about how their customers are going to benefit from the use of intelligence technologies (let's call this "AI") in products and services.

Why is this not simply a waypoint on the way through the hype cycle? The short answer is that we, as customers, are already "using" intelligence every day on our smartphones. It is worth pausing to acknowledge that, somehow, over the course of the past year, AI has gone from passing reference through implementation to daily use.

Today’s use of AI is not hype, but reality.

Across the board

The most fascinating aspect of listening to the speakers respond to questions about how AI will or does play a role in their respective enterprises is how the technology spans devices, strategies and business models. While each of us might be familiar with a specific example, looking across the speakers paints a picture of an incredibly rapid and deep technology diffusion.

Amazon CEO Jeff Bezos spoke at great length about the role of AI in the company's breakout product, Echo. In many ways, Echo has come to symbolize the true potential of multiple technologies across voice commands, agents and machine learning, all packaged up in a super-simple consumer device. In addition, Bezos outlined how Alexa, the technology underpinning Echo, is both a customizable platform and an embeddable technology. Developers can build new "skills" for Alexa and contribute "learning" to offer new capabilities (and, as Mossberg noted, owners receive a weekly email detailing the latest skills added). Makers can embed Alexa in their own devices — one example mentioned was an alarm clock — and turn a mundane gadget into another AI-enabled endpoint.
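To make the "skills" idea concrete, here is a minimal sketch of what a custom Alexa skill handler can look like when hosted as an AWS Lambda function. The intent name and the spoken replies are hypothetical, and the interaction model configured in Amazon's developer console is omitted; the sketch only illustrates the request/response shape a skill works with.

# Minimal sketch of an Alexa custom-skill handler on AWS Lambda.
# The intent name ("GetCoffeeStatusIntent") and the replies are hypothetical.

def build_response(speech_text, end_session=True):
    # Wrap plain text in the JSON envelope Alexa expects back.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "shouldEndSession": end_session,
        },
    }

def lambda_handler(event, context):
    # Entry point invoked for each user utterance routed to this skill.
    request = event["request"]

    if request["type"] == "LaunchRequest":
        return build_response("Welcome. Ask me for a status update.", end_session=False)

    if request["type"] == "IntentRequest":
        if request["intent"]["name"] == "GetCoffeeStatusIntent":
            return build_response("The pot was brewed ten minutes ago.")

    return build_response("Sorry, I didn't understand that.")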

Ford CEO Mark Fields worked to convince the Code crowd that while Ford may not have deep in-house strength in AI or cloud, it is actively partnering (with Pivotal, for example) and bringing in the skills needed to make sure Ford vehicles participate in this technology wave. While Fields did not articulate a deep vision for AI, he used examples around maintenance and navigation to illustrate the company's commitment.

In an approach similar to Ford's, Cisco CEO Chuck Robbins talked about committing his company's thousands of software engineers to AI and incorporating it into Cisco's existing product lines. He spoke of using AI to bring better management and understanding to modern networks that will contain millions of endpoints of all sorts.

While spending a good deal of time differentiating his company from Amazon, eBay CEO Devin Wenig talked in detail about eBay's amazing work using AI to improve the shopping experience and to eliminate fraud. Wenig characterized AI as the way the company will deliver a highly personalized eBay, one that is curated and relevant. Fraud, he claimed (in a follow-up interview with Lauren Goode, from Recode's Vox Media sister site The Verge), has become a "meaningless" number through the use of technology.

IBM CEO Ginni Rometty traced IBM's longstanding efforts in AI, and outlined what amounts to a "bet the company" investment in the technology. She discussed scenarios from health care to education, from business IT to third-party developers, and from cloud to on-premises as ways IBM is working to contribute to and support the use of AI. Most interesting were the efforts IBM is making to provide open source or freely available solutions, which Rometty said ran counter to IBM's history but were essential to moving the company forward. While many of us think of IBM's Deep Blue or the "Jeopardy!" win as the build-up to Watson, IBM's history with AI goes back to the very start of the field, when the famed Watson Labs worked to pioneer the earliest ideas in translation, language processing, speech and handwriting recognition, and more. Even with so many challenges, one has to be impressed by Rometty's outlook on the technology and the depth of IBM's engagement.

Perhaps the most frequent use of AI any of us experiences is through Facebook on our mobile phones, and together onstage, Facebook COO Sheryl Sandberg and CTO Mike Schroepfer outlined numerous efforts in AI across the site, in advertising and, of course, in virtual reality. The depth of both the product features and the applied research going on in AI at Facebook is, I believe, not widely appreciated. For example, a few weeks ago Facebook contributed a dozen papers to ICLR, a major AI conference. As discussed below, the role each of us plays as a person (not a "user," according to both speakers) is a huge contributor to Facebook's ability to deliver AI-based features, like photo tagging, that we find so valuable.

Fresh from the I/O conference, Google CEO Sundar Pichai took us through the history of AI at Google. Clearly, no other company has such depth of effort or such broad use of AI in products over time. Without a doubt, AI has always defined Google, and it is only in the past year or so that this has become broadly understood. Photos, Inbox, search, advertising, the Assistant, self-driving cars — and the list continues — were all examples Pichai used to illustrate Google's ongoing commitment to AI. If AI is itself a platform, Google is most certainly the most invested and the best positioned to lead it.

Why this year?

The Code Conference has certainly seen disruptive shifts before: digital media, mobile devices and smartphones, to name a few. AI is proving to be a different kind of shift — not one that incumbents resist out of concern for their legacy businesses, but one that everyone is embracing and running toward.

Down the road, the interesting question will not be which companies used AI, but which companies made the most of AI in novel ways. There is a massive amount of inventing left to do in the field, because even with the rapid rise of the technology, things are still very much in their infancy. For example, Skype Translator, which Microsoft previewed at the Code Conference two years ago, is built using the most modern deep-learning techniques.


That AI is so mainstream today that dozens of CEOs can articulate company execution plans using the technology is directly attributable to four important changes in the technology landscape. I think it's worth reflecting on how these came together to make possible such a rapid move from lab to deployed feature in such a high-tech endeavor.

Raw compute power for models: While we love to talk about Moore's law as enabling so much, when it comes to AI, what matters is Moore's law applied to parallel architectures, not Intel's scalar ones. The application of ever more transistors to graphics processing units (GPUs) has been a key enabler of AI technology. Cloud architecture plays an incredibly important part in this because companies do not need to build out their own GPU data centers in order to train models; they can simply draw on on-demand scale.

Massive data capacity for training: Everyone has come to learn that more data is the surest way to better AI models, and you can never have too much (though it is easy to have too little). It is only recently that cloud architectures have become "routine" and "economical" at both retaining and accessing the quantities of data required for training. Facebook provides the easiest-to-visualize example: It trains its recognition of people on the more than 300 million photos uploaded each day (roughly half a petabyte). IBM's examples from radiology offer another view of just how important the evolution of storage is to AI.

Incredible availability of labeled data: As important as data is to training, data without "labels" isn't very helpful. This is where our own use of technology, along with the openness of the internet, plays an important part. It isn't just that we upload photos to Facebook; it's that we tag the people we know and, in doing so, train the image-recognition engine (a short sketch below illustrates why those labels matter). Likewise, it isn't enough for eBay to want to offer a customized store; it matters that we are signed in and purchasing items, which informs the customization engine. This is such an advance over the old world of click streams and guessing whether someone is a return visitor.

In addition to all of this, sensors in our phones offer motion and location data, enhancing everything we are doing. Clearly and obviously, there are privacy and security questions with all of this, but at the same time, never before have there been such personal benefits to each of us as we use services. The availability of data goes beyond that which I personally generate (and label), and includes data sets and APIs that are now available simply because of openness and cloud-based solutions (for example, economic and demographic data from governments) that can be incorporated as part of training models.
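To ground the labeled-data point, here is a minimal sketch, in plain Python and NumPy, of supervised training on (example, label) pairs. It is not Facebook's or eBay's actual system; the toy features and labels simply stand in for photos plus tags, or purchases plus signed-in accounts, and show that without the labels there is no loss to learn from.

# Minimal supervised-learning sketch: labels are what make training possible.
import numpy as np

rng = np.random.default_rng(0)

# Toy "labeled data": 2-D feature vectors with a 0/1 label.
# In the article's examples the features would come from photos or purchase
# histories, and the labels from our tags and sign-ins.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # the labels we "contribute"

# Logistic regression trained by plain gradient descent on the log loss.
w = np.zeros(2)
b = 0.0
learning_rate = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
    w -= learning_rate * (X.T @ (p - y)) / len(y)
    b -= learning_rate * np.mean(p - y)

# Without y there is nothing to compare predictions against, and hence
# nothing for the model to learn.
accuracy = np.mean(((X @ w + b) > 0) == (y == 1))
print(f"training accuracy: {accuracy:.2f}")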

Open implementations of technology underpinnings: The most fascinating aspect of the rise of AI solutions is that so much of the core technology has been developed in the open (often by the research arms of companies), or at least contributed in an open way relatively early in the technology's evolution. Google's TensorFlow, Facebook's Torch, IBM's SystemML and UC Berkeley's Caffe, along with data technologies such as Spark, are all openly available platform elements. This most certainly follows the same pattern as HTML/HTTP, which means the economics will be elsewhere in the system (in the data, training and models, of course).
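To illustrate how low the barrier has become, here is a minimal sketch using TensorFlow's Keras API, one of the openly available frameworks named above. The data is synthetic and the model is a toy; the point is only that defining, training and querying a small neural network now takes a handful of lines on anyone's laptop or cloud instance.

# Minimal sketch: define and train a tiny model with an open framework.
import numpy as np
import tensorflow as tf

# Synthetic stand-in data: 100 samples, 4 features, a 0/1 label.
X = np.random.rand(100, 4).astype("float32")
y = (X.sum(axis=1) > 2.0).astype("float32")

# A small feed-forward classifier.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=16, verbose=0)

# Predicted probabilities for the first three samples.
print(model.predict(X[:3], verbose=0))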

What’s next?

In the span of a short time, AI has made a leap, and has likely skipped over the trough of disillusionment. I am not saying that lightly.

While there is no doubt that some will be disappointed in what transpires over the next couple of years, there is also no doubt that such skepticism will be communicated through a vast number of writing and communication tools, all of them benefiting from AI. Nothing ever does everything everyone wants as soon as everyone would like.

It is clear, without qualification, that AI is a mainstream technology among the leading technology companies, and in the near future it will be an ingredient of almost every leading product and service.


Steven Sinofsky serves on boards of several Andreessen Horowitz investments, and is an investor and adviser to startups and an adviser at Box Inc. Reach him @stevesi.

This article originally appeared on Recode.net.