
Big Data and Cognitive Computing – Part 3


In Parts 1 and 2 of our series on Big Data and Cognitive Computing, we observed that these two trends are often brought up in conversation, but that those conversations rarely clarify the important differences between them. Are we truly confident that we know which is which, and why?

We felt that finding a way to describe these two trends in simple terms, to differentiate between them, and to define their relationship to each other could help lower the level of hype and confusion in this active corner of the technology landscape. If we can bring that kind of clarity to the conversation, we can get on with the business of talking about cognitive computing in a much crisper and more intelligent manner than we have typically experienced to date.

We proposed that these terms operate on four important levels, or “meanings,” and that we need to get better at understanding and differentiating those meanings so we can be more accurate when we use the terms. The four levels are:

1) The mission or purpose of big data vs. that of cognitive computing
2) The foundation technologies of each
3) The functional description of what these trends and their technologies actually do for people
4) The symbolic level, where our public conversation has already transformed these terms into labels for various business strategies, worldviews, and hype campaigns

In Parts 1 and 2, we deconstructed the first two levels, mission and foundation technologies, seeking to identify the important pieces and offering a view of how they relate and how they diverge. We found that the mission of big data is best understood as the next generation of the traditional IT function of storing and organizing machine-based enterprise information, now extended to include different types of data handled in new ways. We also found that the rise of new, high-volume data sources (such as data from smartphone or tablet sessions, Internet of Things device logs, or social media sniffers) has led innovative technologists to devise new approaches to keep pace. Hadoop, Spark, data lakes, NoSQL, in-database processing, software-based storage: many new technology tools now in the market hardly existed 10 years ago, all designed specifically to address big data challenges.
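To make that concrete, here is a minimal sketch, assuming a PySpark environment, of the kind of workload these newer tools were built to make routine: aggregating semi-structured device logs sitting in a data lake. The storage path and field names are hypothetical placeholders, not a reference to any particular deployment.

```python
# Minimal sketch (PySpark assumed): roll up semi-structured IoT device logs from a data lake.
# The s3a:// paths and the device_id / event_type fields are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("device-log-rollup").getOrCreate()

# Read newline-delimited JSON logs straight from object storage; Spark infers the schema.
logs = spark.read.json("s3a://example-data-lake/iot/device-logs/")

# Count events per device and event type, then write the result back as Parquet.
rollup = (
    logs.groupBy("device_id", "event_type")
        .agg(F.count("*").alias("events"))
)
rollup.write.mode("overwrite").parquet("s3a://example-data-lake/curated/device-rollup/")
```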

On the cognitive computing side, we noted first that cognitive computing does not have to involve big data at all. In fact, many “smart” applications have achieved their value by narrowing the data they analyze rather than taking a “boil the ocean” approach across the largest possible data sets. What we saw was that the technology foundation of cognitive computing is not fundamentally about programming, processing, or storage paradigms, or about data flows and stream handling, but rather about the broad-ranging data analysis technologies addressing discovery, disambiguation, contextual understanding, inference, recommendation, probabilistic reasoning, and human/machine communication. Cognitive computing is based in analytics, but its value proposition is grounded in the ability to offer contextualized insights to a human decision-maker. Whether its data is big or small should be entirely transparent to the human user.
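As an illustration of that analytics-first footing, here is a minimal sketch, assuming scikit-learn and a handful of hypothetical training examples, of a probabilistic classifier that surfaces ranked, confidence-scored suggestions to a human reviewer. The point is not the algorithm but the shape of the output: a contextualized recommendation whose value does not depend on the data being big.

```python
# Minimal sketch (scikit-learn assumed): probabilistic text classification surfaced to a
# human as ranked, confidence-scored suggestions. Examples and labels are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

examples = [
    "customer cannot log in after password reset",
    "invoice total does not match the purchase order",
    "shipment arrived damaged and needs replacement",
]
labels = ["account_access", "billing", "fulfillment"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(examples, labels)

query = "we were charged twice on the latest invoice"
probabilities = model.predict_proba([query])[0]

# Present a ranked list of suggestions with confidence scores, not a bare answer.
for label, p in sorted(zip(model.classes_, probabilities), key=lambda pair: -pair[1]):
    print(f"{label}: {p:.2f}")
```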

We turn now to the important issues that arise on the levels of functional description and symbolic communication around the big data and cognitive computing trends.

On the level of function, confusion arises immediately from the many statements we read that appear to wrap big data and cognitive computing into a single phenomenon.

Consider, for example, this marketing statement from a software vendor promoting cognitive computing: “The … Cognitive Reasoning Platform is an artificial intelligence platform that allows organizations to capitalize on the power and potential of Big Data through advanced analytics and actionable insights that fundamentally inform organizations about the business, customers, and value chains in which they operate.” At the 50,000-foot level, we could agree with this characterization of cognitive computing’s potential, but linking it to the power of big data in the same sentence implies a symbiosis that does not in fact exist. It would be far more accurate to say that cognitive computing is one approach organizations can consider when they face the challenge of teasing important business insights out of their many diverse sources of data, big and small, separately or together.

Another source of confusion about the respective functions of these two trends arises from a phrase used with increasing frequency over the past two years: “big data analytics.” While big data analytics often refers simply to traditional kinds of structured data analytics now applied to large data volumes, it also often points to the emergence of analytics on the unstructured or semi-structured data encountered in the new data lakes or Hadoop clusters. Regardless of data format, the continuing priority of both the IT owners of these resources and the analysts attempting to get at them is to uncover actionable insights for the business. It is here that cognitive computing has entered the big data analytics picture, as a scanner and sense-maker for the diverse kinds of data in these stores.
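A short sketch, again assuming PySpark and hypothetical table and column names, may help separate the two senses of the phrase: the first query is familiar structured analytics simply run at larger scale, while the second touches free text, where a crude keyword flag stands in for the kind of sense-making a cognitive layer would actually provide.

```python
# Minimal sketch (PySpark assumed) of the two senses of "big data analytics".
# Paths and column names (region, amount, body) are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("two-senses-of-analytics").getOrCreate()

orders = spark.read.parquet("s3a://example-data-lake/curated/orders/")     # structured
tickets = spark.read.json("s3a://example-data-lake/raw/support-tickets/")  # semi-structured text

# Sense 1: traditional structured analytics, just at larger volume.
revenue_by_region = orders.groupBy("region").agg(F.sum("amount").alias("revenue"))

# Sense 2: analytics over free text. A regex flag is the crudest possible stand-in for the
# discovery, disambiguation, and inference a cognitive application would bring to this data.
flagged_tickets = tickets.withColumn(
    "mentions_outage", F.col("body").rlike("(?i)outage|downtime")
)
```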

The conversations around big data analytics often seem to ignore the reality that an application using cognitive computing approaches and technologies must be developed as a project distinct from, and largely independent of, the “big data-ness” of the analytics environment.

Another consideration generating confusion around the functions of the two trends arises from statements that conveniently ignore the relative maturity of each. Big data has well over a decade of development behind it and is already seeing the adoption of its second generation of tools and techniques. Cognitive computing, on the other hand, is in its earliest stages, with very few products even ready for the market and many more promises in the air than tangible results on the ground. When statements are put forward linking the two trends like peas in a pod, they rarely pause to consider that fifteen years from now, when cognitive computing reaches a stage of maturity comparable to that of today’s big data, the only certain thing is that we will be talking about both of these trends in very different, probably unrecognizable terms.


About the Author:

Hadley Reynolds is Co-founder and Managing Director at the Cognitive Computing Consortium. He is a leading analyst of the search, content management, and knowledge management industries, researching, speaking, and writing on emerging trends in these technologies and their impact on business practice. He currently leads the publications program at the Consortium.