The New AI-on-board Applications: Who Will Build Them?

Cognitive computing faces a range of challenges as it moves out of computer science labs and pilot projects and into the mainstream. The media are full of articles on the promises and perils of the technology. We can read about a few real successes in the field, and a few spectacular misses, like the scandal surrounding IBM and the MD Anderson Cancer Center in Houston, TX. We can read about how the technology will take all of our jobs, or about how the impact of machines taking on human tasks will not be a big deal. In all the ruckus, much less attention has been directed to the gritty details of what we might call the cognitive sausage factory: how, and by whom, are these new AI systems – for better or for worse – going to be built?

Until we reach the point at which machines can design and implement new systems automatically – autonomously – we will depend on humans to do the heavy lifting of designing, preparing, curating, and implementing these new-generation systems.

Granted, we have already seen some truly fabulous new capabilities come to light: in image recognition and analysis, for example, where the reading of X-rays and complex medical scans, the interpretation of reconnaissance photos, or the identification of defects in semiconductor manufacturing fabs may well be fully automated in the near future. This breakthrough was in no small part the result of Google’s determination to make a big bet on the newly accessible technology of deep neural networks. The bet involved staffing up teams of hundreds of researchers, buying companies whose people were deeply knowledgeable about the strengths and weaknesses of the techniques, and investing in new hardware infrastructure to support analytics across big data loads.

But these seemingly overnight advances in image processing, and similar ones in language facilities like auto-translation, have not been accompanied by breakthroughs across the wider spectrum of applications. The intense interest in autonomous vehicles of all kinds, most dramatically self-driving cars, has captured the imagination of the public and the investment dollars of both technology and transport giants; yet even in that high-stakes field, the sense of a real breakthrough to practical adoption is missing. How, then, should we understand the roadblock to more general progress throughout industry?

The simple fact is that the problems AI systems address are gnarly and multi-faceted, and they require true innovation. As the definition proposed by the Cognitive Computing Consortium states: “cognitive computing makes a new class of problems computable.” These are often problems that have never been computerized – the last bastions of manual, often highly skilled labor. The world was rocked 40 years ago when computing came to the accountant’s spreadsheet. The relentless spread of intelligent automation will now impact our professional and consumer lives in ways that will be similarly hard to predict.

Where will the professionals come from who will create this innovation and spread these new applications far and wide? Clearly we don’t have enough of such people now, and there are several reasons why the shift to cognitive computing is not going as quickly as many have predicted.

The Boston Globe reports that MIT’s Intro to Machine Learning was among the most popular courses of 2017, with some 700 students signing up per semester, requiring four instructors and 15 teaching assistants. But while machine learning is the primary ingredient in the secret sauce that powers AI, it is “only” an engine, and the complexities of building the whole car around it are often daunting. For example, the current issue of the MIT Technology Review leads with a feature asking whether we are at the end of the “AI boom” rather than the beginning – based on the perception that current progress comes primarily from 30-year-old technology. The learning systems of today are heavily dependent on curated data and tend toward fragility, unpredictable changes in direction and output, and “black box” operations. There is a need for a new generation of more flexible intelligence.

The students at MIT will certainly be exposed to these issues, which are top of mind in the machine learning community. But by the end of the course, will they be masters of the emerging discipline of XAI, or explainable AI, which posits that AI programs should be able to explain their behavior to their human designers and users? The list of issues a neophyte AI developer or AI data scientist must address is daunting.
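To make the contrast between “black box” output and explainable output concrete, here is a minimal sketch. The loan-scoring scenario, feature names, and weights are all invented for illustration; real XAI techniques work on far more complex models, but the goal is the same: decompose a prediction into contributions a human can inspect.

```python
# Toy "explainable" scorer: a linear model whose prediction can be
# broken down into per-feature contributions. (All names and weights
# here are hypothetical, chosen only to illustrate the idea.)

WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}

def predict(applicant):
    """Return an overall score; a black-box system would stop here."""
    return sum(WEIGHTS[f] * v for f, v in applicant.items())

def explain(applicant):
    """Break the score into per-feature contributions -- the 'explanation' --
    sorted by how strongly each feature influenced the result."""
    contributions = {f: WEIGHTS[f] * v for f, v in applicant.items()}
    return sorted(contributions.items(), key=lambda kv: -abs(kv[1]))

applicant = {"income": 6.0, "debt": 2.0, "years_employed": 4.0}
print(predict(applicant))           # the bare number a black box emits
for feature, contribution in explain(applicant):
    print(f"{feature}: {contribution:+.2f}")
```

A linear model is trivially explainable this way; the hard research problem XAI tackles is producing comparable per-feature accounts for deep networks, whose internal weights do not map cleanly onto human-readable factors.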

So one reason that progress toward broad-based adoption of cognitive computing is slow across many industries is simply the “gap” – the shortfall in trained people. But another, perhaps equally important, reason is that the talent that has been coming into the market is very unevenly distributed.

The companies that have already bet their businesses on machine learning systems are hiring and acquiring as many talented people as they can find, in both research and applied technology. These are not only the top internet firms – Amazon, Google, Facebook – but also the likes of Microsoft, Apple, Intel, and Salesforce, and, beyond these giants, the many specialized consumer-facing companies like TripAdvisor, Netflix, Uber, and many more. What all of these firms share is that they are making extraordinary profits with this technology, and they can therefore afford to pay outsized wages to the scarce people they can find with the requisite skills. This raises a high barrier for most “normal” industrial firms, which are not benefiting as directly from the new attention-driven business models.

In many cases, firms face the kind of existential risk once experienced by the buggy-whip manufacturers. But in virtually all cases, finding strategies to partner for, acquire, or develop the talent to compete in cognitive computing is an imperative. Coming to an accurate understanding of the competencies and cognitive task profiles that lead teams to successful AI outcomes is becoming a foundation for the next generation of the business. Many of today’s executives will be held accountable for how well their firms navigate the nuances of the coming transition to machine intelligence across the spectrum of business functions.


About the Author:

Hadley Reynolds is Co-founder and Managing Director at the Cognitive Computing Consortium. He is a leading analyst of the search, content management, and knowledge management industries, researching, speaking, and writing on emerging trends in these technologies and their impact on business practice. He currently leads the publications program at the Consortium.