We are arguably in the midst of a profound shift in computing, from programmatic to so-called cognitive, in which intelligent workflows are no longer simply encoded into machines but are instead discovered automatically by observing best practices. Many would say that smart systems and data mining have been around for a while, so what is so different now? The difference lies not so much in new algorithms as in new engineering approaches, invented out of the necessity of coping with the sheer size, richness and volatility of big data.
Traditionally, inference models, which draw conclusions from data and content, have been used predominantly by experts in a "data lab" environment for the sole purpose of discovering new lessons learned, or rules, that could then be leveraged effectively within business workflows, for example. Such models often relied on human knowledge because the observed data were insufficient to produce any meaningful rules on their own. In a relatively static and data-poor environment, that approach was in many ways the only viable option.
The above article was written by Guy Mounier; it appears in the September 2014 issue [Vol 23, Issue 8].