OpenAI’s wildly popular ChatGPT artificial-intelligence service has shown a clear bias toward the Democratic Party and other liberal viewpoints, according to a recent study conducted by UK-based researchers.
Academics from the University of East Anglia tested ChatGPT by asking the chatbot to answer a series of political questions as if it were a Republican, a Democrat, or without a specified leaning. The responses were then compared and mapped according to where they fell on the political spectrum.
“We find robust evidence that ChatGPT presents a significant and systematic political bias toward the Democrats in the US, Lula in Brazil, and the Labour Party in the UK,” the researchers said, referring to the left-leaning Brazilian President Luiz Inácio Lula da Silva.
ChatGPT has already drawn sharp scrutiny for demonstrating political biases, such as refusing to write a story about Hunter Biden in the style of The New York Post while accepting a prompt to write one in the style of left-leaning CNN.
In March, the Manhattan Institute, a conservative think tank, published a damning report which found that ChatGPT is “more permissive of hateful comments made about conservatives than the exact same comments made about liberals.”
To reinforce their conclusions, the UK researchers asked ChatGPT each of the questions 100 times. Those responses were then put through “1,000 repetitions for each answer and impersonation” to account for the chatbot’s randomness and its propensity to “hallucinate,” or spit out false information.
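The basic design of the test lends itself to a simple illustration. The Python sketch below is not the researchers’ code; it assumes a hypothetical ask_chatbot() helper and an invented question list, and shows only the general pattern of repeating the same prompt under different impersonations and comparing the averaged answers.

```python
# Illustrative sketch only -- not the study's actual code.
# Assumes a hypothetical ask_chatbot(prompt) helper that sends a prompt to
# ChatGPT (e.g. via the OpenAI API) and returns a numeric agreement score
# parsed from the reply (1 = strongly disagree ... 4 = strongly agree).
from statistics import mean

PERSONAS = {
    "default": "Answer the following statement:",
    "democrat": "Answer the following statement as if you were a Democrat:",
    "republican": "Answer the following statement as if you were a Republican:",
}

QUESTIONS = [
    # Invented example item; the study used a standard political questionnaire.
    "The government should raise taxes on the wealthy.",
]

N_REPEATS = 100  # each question is asked many times to average out randomness


def collect_scores(ask_chatbot):
    """Gather repeated answers for every persona/question pair."""
    scores = {persona: [] for persona in PERSONAS}
    for persona, preamble in PERSONAS.items():
        for question in QUESTIONS:
            for _ in range(N_REPEATS):
                scores[persona].append(ask_chatbot(f"{preamble} {question}"))
    return scores


def compare(scores):
    """If the 'default' answers sit much closer to one persona than the other,
    that asymmetry is the kind of signal the researchers report."""
    for persona in ("democrat", "republican"):
        gap = abs(mean(scores["default"]) - mean(scores[persona]))
        print(f"mean distance from default to {persona}: {gap:.2f}")
```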
“These results translate into real concerns that ChatGPT, and [large language models] in general, can extend or even amplify the existing challenges involving political processes posed by the Internet and social media,” the researchers added.
The Post has reached out to OpenAI for comment.
The existence of bias is just one area of concern in the development of ChatGPT and other advanced AI tools. Detractors, including OpenAI’s own CEO Sam Altman, have warned that AI could cause chaos – or even the destruction of humanity – without proper guardrails in place.
OpenAI tried to deflect potential concerns about political bias in a lengthy February blog post, which detailed how the firm “pre-trains” and then “fine-tunes” the chatbot’s behavior with the assistance of human reviewers.
“Our guidelines are explicit that reviewers should not favor any political group,” the blog post said. “Biases that nevertheless may emerge from the process described above are bugs, not features.”