
Legal Issues: Can we depend on algorithms to make decisions?

Posted by: Sue Feldman

In the cognitive computing era, there are plenty of tough technical challenges. Their difficulty pales, however, compared to the social and legal issues these new technologies raise. Increasingly, we rely on algorithms to help us sort through the complex factors that go into a decision, often without knowing whether the algorithm itself is dependable. Articles by Julia Angwin, published August 1st in the New York Times and ProPublica, celebrate a decision by the Wisconsin Supreme Court to limit the influence of algorithmic recommendations in sentencing offenders. These algorithms predict the risk that an offender will commit a crime in the future, and based on their recommendations, an offender might face jail time or probation.

Check out:

http://www.nytimes.com/2016/08/01/opinion/make-algorithms-accountable.html?ref=opinion&_r=0

https://www.propublica.org/article/making-algorithms-accountable

There is no stuffing the algorithm genie back in the virtual bottle. The fact is that we need help making sense of the welter of data that showers us whenever we make a decision. From choosing a carpenter to treating a cancer patient, the human mind can’t take in every available data point quickly enough to reach the optimal decision in a reasonable amount of time. For the most part, that is not a problem. We don’t need to know everything to make an acceptable decision; there are plenty of good carpenters, restaurants, and books. Rarely are day-to-day decisions a matter of life or death. But sometimes they are. From self-driving cars to medical treatment, when lives are at stake, should we rely on algorithms alone?

Our society tends to rate the accuracy of computer results much more highly than that of human decisions. For some reason, we leave our skepticism behind when recommendations are digital. What has created this aura of infallibility? As a young researcher, I found that I could hand a client the same information, in the same words, as a computer printout and as a photocopy, and the printout would be more readily accepted. That believability bias hasn’t changed much since then. It’s time to develop a more mature approach to melding digital evaluations with human common sense. We need to ensure that the path to a digital recommendation is transparent and that the underlying data is reliable, so that we can judge the conclusions for ourselves. We also need to teach skepticism.

Computers and humans complement each other. Neither is perfect. Combined, human sense-making and algorithmic pattern detection make for a more complete (but still imperfect) understanding. Angwin says we must require “the right to examine and challenge the data used to make algorithmic decisions about us.” That’s a good first step.


About the Author:

Sue Feldman is Co-founder and Managing Director at the Cognitive Computing Consortium. As VP for Content Technologies at IDC, she developed and led research on search, text analytics, and unified access technologies and markets. Her most recent book, The Answer Machine, was published in 2012. Her current research is on use cases and guidelines for adopting cognitive computing to solve real-world problems.
