
Considering Bias & Cognitive Systems


When we examine the differences between humans and cognitive systems, we often point to the supposed lack of bias in automated systems versus the unavoidable biases that humans bring to judgment. But is that really true? The fact is that every time we select a training set, write an algorithm, or design a ranking scheme, we are building in our biases about how the world is ordered and what is more or less important in any category. We build in relationships when we build ontologies, which in turn are used to sort and categorize incoming data.

Make no mistake. Categorization is part of what makes us tick. Humans could not navigate their world if they did not establish knowledge frameworks. At a basic level, we classify objects or actions into two categories: “pay attention” and “ignore.” We couldn’t get out of bed in the morning if we had to work out anew, each time, the significance and properties of the floor, the closet, the door, or the pictures on the wall.

People make sense of their world by sorting events or objects into categories of things with similar attributes. This process is innate. As we learn, we use what we already know to predict how something new might be similar to what we have seen before. Agatha Christie’s Miss Marple is a classic example of extrapolating from experience.

But there are all-too-human pitfalls to categorization, because it can rigidly order our thinking and prevent us from seeing new trends and patterns. Categories shift and evolve according to the context within which they are used. I don’t notice the dandelions in the lawn unless I’m gardening; otherwise the lawn is just background. Categories may also mislead if we insist on mashing marginally related things into the same category because we have no other place to put them. Worse, they perpetuate stereotypes; as Amos Tversky noted, to get rid of stereotypes, we need to get rid of categories.

As I’ve listened to talks at various conferences this year, I’ve noticed that the issue of unintended bias in automated systems is increasingly under discussion. In the digital world, bias is already an issue in medical diagnosis, for instance. As we cross the divide from physical to digital, how do we determine what biases or expectations we might be building into self-driving cars or home health aide robots?

People design systems, so perhaps unintended bias is unavoidable. For many systems that need quick but not perfect answers, bias may even be helpful if the designer’s and the user’s world views are congruent. Product recommendation systems come to mind. But what if you are looking for the unexpected? In that case, forcing matches or relying on a single view may eliminate an “aha!” moment.

One of the effective design strategies we’ve seen is to incorporate multiple categorization and/or ranking systems with a voting algorithm and a flexible set of heuristics, so that it’s possible to examine more than one set of recommendations. In the spectrum from “good enough” to “elegant,” allowing a competition among multiple “takes” on what’s important and what’s-related-to-what appears to be our best bet for developing the kind of subtle understanding we need to select ideas that are both contextually sensitive and effective in solving the problem at hand.
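To make that idea concrete, here is a minimal Python sketch of one way such a competition might look: several hypothetical rankers each produce their own ordering, a simple Borda-count vote builds a consensus, and a small heuristic flags the items the rankers disagree about. The rankers, item names, and scoring rule are illustrative assumptions, not a description of any particular production system.

```python
# Minimal sketch: combine several hypothetical rankers with a Borda-count
# vote, while also surfacing items the rankers disagree on.
from collections import defaultdict

def borda_vote(rankings):
    """Each ranking is a list of item ids, best first.
    Returns items sorted by total Borda score (higher is better)."""
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, item in enumerate(ranking):
            scores[item] += n - position  # top item gets the most points
    return sorted(scores, key=scores.get, reverse=True)

def disagreements(rankings, top_k=3):
    """Items that appear in the top_k of at least one ranker but not all --
    candidates for the unexpected 'aha!' recommendations."""
    top_sets = [set(r[:top_k]) for r in rankings]
    return set.union(*top_sets) - set.intersection(*top_sets)

# Example: three rankers (say, collaborative, content-based, and popularity)
# ordering the same small catalog differently.
rankings = [
    ["espresso maker", "grinder", "kettle", "scale"],
    ["grinder", "scale", "espresso maker", "kettle"],
    ["kettle", "espresso maker", "scale", "grinder"],
]

print(borda_vote(rankings))        # consensus ordering across rankers
print(disagreements(rankings, 2))  # items the rankers disagree about
```

The point of keeping both outputs is the one made above: the vote gives a defensible “good enough” answer, while the disagreement set preserves the outliers that a single view would have filtered away.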


About the Author:

Sue Feldman is Co-founder and Managing Director at the Cognitive Computing Consortium. Previously, as VP for Content Technologies at IDC, she developed and led research on search, text analytics, and unified access technologies and markets. Her most recent book, The Answer Machine, was published in 2012. Her current research is on use cases and guidelines for adopting cognitive computing to solve real-world problems.