Where Are We With Cognitive Computing Today? Part 2.

I recently had the opportunity to attend a focus group, sponsored by SAS Institute, on cognitive computing adoption outside the US. The early adopters attending were proceeding with caution: they had the bruises from past new-technology experiments and didn't believe the hype around AI today. In each case, however, it was apparent that they had support from high-level management and that they were starting with a proof of concept, or several. We have heard this from other buyers. Several enterprises are working with more than one vendor, trying to compare dissimilar products with little in the way of best practices to guide them.

The first concern that emerged was that these systems are often a black box: it was not clear why they delivered the recommendations they did. Because business systems are traditionally data-based and deterministic rather than stochastic, this ambiguity appears to be unacceptable for some uses today. The buyers felt that they needed the evidence behind the results. Probabilistic systems, including search engines, have long struggled with this problem. Although information systems of all sorts deliver only what you ask for, not what you should have asked for, they are nevertheless seen as precise and complete. Managing expectations is a challenge for vendors and for IT managers.

Other attendees were concerned about the extraordinary amount of computing power commonly required for cognitive processing. Several mentioned the challenge of developing non-English applications, because most of the research has been done on English-based systems. Perhaps most intriguing, though, were the predictable what-if questions: Will we lose the institutional memory that originally trained the system? If so, and if the system breaks down, will we be able to fix it? Centralized systems are always a problem, they said. They must be up and running 24/7. They must be reliable. That's a challenge for any system, and cognitive systems' lack of a track record in production operations sets off risk alarms for many.

Finally, these cognitive experimenters pointed to interaction design as a great unknown, especially for non-IT, non-analyst business users who need access to data stores but won’t understand the system design behind the interface. Right now there are experiments, but no accepted best practices.

It is apparent that SAS is seizing on this trend toward cognitive computing. The announcement of SAS Viya™ at this conference, along with a variety of tools for both its loyal developer and analyst base and a wider business-user audience, positions the company nicely both as a partner with other cognitive and IoT platforms and as a potential competitor.

We will continue to track cognitive use cases and report on them. The field is evolving rapidly. Focus groups like this international one, and like the Cognitive Computing Consortium's online discussion forum, will enable experimenters to teach each other, perhaps mitigating mistakes that might otherwise be widespread.

About the Author:

Sue Feldman is Co-founder and Managing Director of the Cognitive Computing Consortium. As VP for Content Technologies at IDC, she developed and led research on search, text analytics, and unified access technologies and markets. Her most recent book, The Answer Machine, was published in 2012. Her current research is on use cases and guidelines for adopting cognitive computing to solve real-world problems.