AI-assisted decision-making shows great promise, but human biases get in the way
- Oct 7, 2024
Text originally published at: Smith School of Business
The future of decision-making has already arrived in banking, energy, legal counselling, health care, insurance, retail and other areas of the economy. In medical diagnostics, AI assistants analyze patient records and test results. In hospitals, they help determine the optimum number of nurses per shift. In banking, they detect fraudulent activity and assess risk. In website management, they help content moderators evaluate the credibility of social media posts.
So what does the experience with AI-assisted decision-making so far tell us about our working relationship with algorithmic colleagues? It tells us that there are significant tension points that need to be resolved, and that this is all just a precursor of massive changes to come. Consider these four questions.
Based on experience to date, do humans and AI assistants work well together?
The collaboration has not exactly been a smashing success. Evidence suggests that individuals working with AI assistants typically outperform individuals working alone. Still, their performance is usually inferior to that of AI systems making decisions without human supervision.
When all goes right, human decision-makers weigh their own insights when determining whether a recommendation from a computational model should be followed. Unfortunately, things rarely go right because we can’t seem to get out of our own way. We either accept random or faulty predictions without verifying whether the AI is correct, or we mistrust the AI model and ignore highly accurate recommendations.
Humans seem to struggle to detect algorithmic errors, says Tracy Jenkin, an associate professor at Smith School of Business and a faculty affiliate at the Vector Institute for Artificial Intelligence.
“What we’re finding is that, in some cases, there is algorithmic aversion where individuals just don’t want to adhere to the recommendations of those AI systems,” says Jenkin, who is studying human-AI collaboration with Smith colleague Anton Ovchinnikov. “They’d rather listen to the advice of humans or themselves even though the algorithms outperform humans.”
“On the other hand, there’s also algorithmic appreciation where individuals will just go along with the AI’s advice even though it might be wrong,” she says.
There are many reasons why people struggle with AI assistants. Previous studies identified 18 different factors ranging from lack of familiarity and human biases to demographics and personality.
We are predisposed to overestimate AI capabilities, for example, when our brains are taxed trying to solve complex tasks or when we lack self-confidence. (It doesn’t help when misguided managers, in the interest of encouraging AI use, denigrate human decision-making abilities.)
Alternatively, when we lack information on how well an AI model performs, we rely on our own spider sense or focus on irrelevant information.
On the bright side, humans seem to be better at calibrating trust in AI assistants when they are part of a team. Groups have been shown to have higher confidence when they overturn an AI model’s incorrect recommendations, and they appear to make fairer decisions.
Wouldn’t humans trust an AI recommendation if they were given the reasoning behind it?
“When it’s a favourable recommendation and [people] ask for an explanation,” says Jenkin, “they’re much more likely to actually adhere to the recommendation of a human adviser.” On the other hand, when it’s an unfavourable recommendation, such as an AI recommendation to an apartment owner to lower the rental price of a unit, people are more likely to adhere to the AI advice than a human adviser.
If people don’t respond to explainable AI as hoped, are there other promising ideas to improve AI-assisted decision-making?
A set of interventions known as “cognitive forcing functions” has the potential to get human decision-makers to engage with AI assistants more thoughtfully. For example, asking individuals to decide on an issue before seeing the AI’s recommendation can sidestep the anchoring bias that is triggered when they are presented with an AI recommendation first. Even delaying the presentation of an AI recommendation can lead to better outcomes.
Cognitive forcing interventions have been shown to reduce overreliance on AI significantly more than explainable AI approaches do.
Encouraging human decision-makers to consider second opinions may also be a winning strategy. One study examined what happens when second opinions, from human peers or from another AI model, are presented to a decision-maker alongside the original AI recommendation. When an investor, for example, is about to buy or sell a stock based on an AI model’s recommendation, would a second opinion from investors on an online discussion forum help them make a better investment decision?
What does the future of organizational decision-making look like beyond the next five years?
The trajectory of decision-making technology is moving well beyond AI-driven assistants.
Humans today still have an edge over AI decision systems in some areas. We are better than AI at noticing subtle patterns in unstructured data. And we have an easier time accessing insights across organizational boundaries: a decision-maker can visit a supplier’s site and pick up valuable intel, or talk to policymakers about political trends.
These advantages will last for another five years, some experts say. The tipping point will be when businesses en masse equip employees with wearable recording devices and cameras, with all that data fed into machine-learning algorithms.
Consider, too, advances in computational cognitive modelling, which may enable AI systems to know us better than we know ourselves. This approach allows researchers to model a person’s cognitive state and make predictions about their mental state, beliefs and knowledge.
At a certain point in the not so distant future, why bother keeping humans in the loop?