Dumping AI Advisors – Motivations?

Future of Interaction User Experience
Date: 13 June 2016

People dump AI advisors that give bad advice, yet forgive human advisors for the same mistakes

Research Summary

  • Researchers from the University of Wisconsin asked 160 college undergraduates to complete an unfamiliar task with guidance from either a human advisor or an “advanced computer system” advisor.
  • Halfway through (after 7 of 14 tasks), participants received faulty advice from their “advisor”.
  • At the start of the experiment, participants reported “equal trust” in both advisors, but after the AI advisor made an error, many abandoned it and ignored its subsequent advice.
  • After the error, AI consultations fell 25%, but human advisors saw only a 5% drop.
  • The findings suggest that workplace automation could be problematic if people lose trust in it after errors.
  • AI is on track to function as a predictive machine, not just a database, and it will be wrong far more often than our current expectations for automated systems allow.

Prahl and his co-author Lyn Van Swol are now trying to understand what psychological forces underlie this phenomenon. Participants in his study reported that they felt they held “more in common” with the human advisor than with the automated advisor, but exactly which characteristics people believe they share remains unclear. Previous research suggests that a shared sense of being imperfect, a willingness to self-correct after mistakes, and a desire to do well are all candidates. Prahl believes artificial intelligence can be programmed to exhibit some of these traits, and he plans to tackle this research next.



  • Is it really a lack of identifying with the AI that creates the problem?
  • Did participants subconsciously assume that AI advisors are programmed, and that after one mistake, they would not be able to rely on the information? (G.I.G.O.: garbage in, garbage out)
  • Also possible factors:
    • frustration with past experiences with technology/remembered poor experience design
    • perception that the AI advisor is less complex than a human being and therefore unable to make intuitive leaps
    • people have fallible memories, but are also far more plastic in recall, prediction and guessing than machines
    • Were the UI and the introduction of the AI advisor presented well enough to convey its predictive nature, as opposed to database-style, fact-retrieval computing? Should advice be presented as a “hunch” instead of fact?
    • Would users believe that the AI has hunches?
    • What does this say about AI advisors outside the workplace?
    • What about less-concrete advice than room scheduling (health advice, emotional advice)?


Sometimes I really miss research and academia 😛

Prahl, A., & Van Swol, L. M.: “The Computer Said I Should: How Does Receiving Advice From a Computer Differ From Receiving Advice From a Human?”

