Behavioral New World
March 15, 2024
Mid-month bonus: Bias and AI, part 2
The issue of bias arises in several ways when we think about Artificial Intelligence (AI). First, many of you have read about “algorithmic bias.” In brief, this bias occurs because AI is trained on information that reflects human biases or on biased datasets, or because aspects of its human-programmed systems intentionally or unintentionally exhibit bias.
Second, in our newsletter of November 15, 2023 (link here), Scott Christianson and I discussed how human bias leads to inefficient “collaboration” with information provided by AI. He joins me again on this follow-up; you can find him here.
There is a third dimension to bias in the use of AI -- how we “prompt” AI, that is, how we phrase our inquiries. For example, “What is the average temperature in Hanoi in November?” is a straightforward, neutral prompt.
But now imagine: “Why will Candidate X win the general election in 2024?” The question is slanted, as it asks only for factors that might lead Candidate X to win. There is more than a whisper of “confirmation bias” in this prompt. As discussed in my May 2022 newsletter (link here), confirmation bias is the tendency we all have to be receptive to information that supports our views and to downplay information that contradicts them.
Now perhaps the person making this prompt merely wants reassurance that there are factors that could lead to a Candidate X victory. When my sister has a difficult situation to deal with, she has two sets of friends she can call on: those who will tell her what she wants to hear and those who will tell her the truth. A biased prompt is like calling the friends who will tell you what you want to hear -- which is fine, as long as you are aware of what you are doing.
But if the goal is to take an objective view of the situation, a prompt more like, “What factors influence the probability that Candidate X will win the general election?” is more appropriate.
“Prompt engineering” is a necessary skill when working with generative AI systems like ChatGPT. Just as with the computers of old, the “Garbage In, Garbage Out” adage applies in the ChatGPT era: entering a prompt that expresses your own biases will increase the likelihood of a biased response.
Besides doing a “bias check” when you are formulating a prompt, you should remember that these systems are conversational -- you can follow up with questions like “What are the arguments against this position?” or “What assumptions am I making in my prompt?” and find other ways to creatively argue against yourself!
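For readers who build tools around these systems, the “bias check” idea can even be automated in a rough way. Here is a minimal sketch in Python, assuming a simple keyword heuristic: the phrase list, the `check_prompt` helper, and the neutral-rephrasing template are all hypothetical illustrations, not part of any AI product.

```python
# Hypothetical sketch: flag prompts that presuppose a conclusion,
# and suggest an open-ended rephrasing instead.

# Phrases that tend to "lead the witness" (illustrative, not exhaustive)
LEADING_PHRASES = [
    "why will",
    "prove that",
    "confirm that",
    "explain why",
]

NEUTRAL_TEMPLATE = "What factors influence {topic}?"


def check_prompt(prompt: str) -> list[str]:
    """Return any leading phrases found in the prompt (case-insensitive)."""
    lowered = prompt.lower()
    return [phrase for phrase in LEADING_PHRASES if phrase in lowered]


def suggest_neutral(topic: str) -> str:
    """Rephrase a topic as an open-ended, both-sides question."""
    return NEUTRAL_TEMPLATE.format(topic=topic)


# The slanted prompt from the newsletter gets flagged...
print(check_prompt("Why will Candidate X win the general election in 2024?"))
# ...while the neutral Hanoi question does not.
print(check_prompt("What is the average temperature in Hanoi in November?"))
print(suggest_neutral("the probability that Candidate X will win"))
```

A real bias check would need far more nuance than string matching, of course -- the point is only that the neutral reformulation in the paragraph above (“What factors influence…”) can serve as a reusable template.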
It is still early days for generative AI technology. Will it be a way to check our biases, or will it just make them worse? Tell us what you think in the comments below.