Gen AI Cleans Up Its Bias

Executives are seeing progress in generative AI’s “bias” problem.

Generative AI has been around for decades, yet for most of that time it barely drew a glance from business leaders.

That all changed on November 30, 2022, when OpenAI released ChatGPT, its now-ubiquitous natural language chatbot. Ever since, companies have cracked open their checkbooks and spent liberally on Gen AI tools.

The response has been nothing short of amazing.

According to Deloitte, 79% of 2,800 senior executives surveyed in a recent study expect Gen AI to “drive substantial organizational transformation” in less than three years.

“We’re in the early days of a major technological transformation with Gen AI beginning to drive a wave of innovation across industries,” says Joe Ucuzoglu, Deloitte Global CEO. “The speed, scale, and use cases of Gen AI are breathtaking. Business leaders are under an immense amount of pressure to act, while ensuring appropriate governance and risk mitigation guardrails are in place.”

Battling Bias

One fly in the ointment has been Gen AI’s penchant for bias and misinformation, but even those issues seem to be abating as the technology matures.

That’s the takeaway from a new study by Applause, a software testing and development company. The company surveyed 6,300 consumers, software developers and QA testers to see how they’re faring with Gen AI tools and applications.

The results are a mixed bag, but some progress is being made on the bias and accuracy fronts.

“Respondents thought chatbots are managing toxic and inaccurate responses better, but many have still experienced biased or inaccurate results and have data privacy concerns,” the study reported.

Among the survey’s findings:

  • Overall, 89% of respondents told Applause they were “concerned about providing private information to chatbots”.
  • 50% of the respondents have experienced biased responses, and 38% have seen examples of inaccurate responses.
  • 75% of respondents felt that chatbots are getting better at managing toxic or inaccurate responses.

“It’s clear from the survey that consumers are keen to use Gen AI chatbots, and some have even integrated it into their daily lives for tasks like research and search,” said Chris Sheehan, SVP strategic accounts and AI at Applause. “Chatbots are getting better at dealing with toxicity, bias and inaccuracy — however, concerns still remain.”

Sheehan says Gen AI users have an interesting way of testing chatbots for performance and accuracy.

“Not surprisingly, switching between chatbots to accomplish different tasks is common, while multimodal capabilities are now table stakes,” Sheehan said. “To gain further adoption, chatbots need to continue to train models on quality data in specific domains and thoroughly test across a diverse user base to drive down toxicity and inaccuracy.”

Just 19% of users surveyed said the chatbot understood their prompt and provided a helpful response every time, and that’s an ongoing headache for AI developers.

“Chatbot features users would like to see include better source attribution, more localized responses, support for more languages and deeper personalization,” the study noted. “Concerns still linger around data privacy, inaccurate responses and biased responses. However, respondents thought chatbots are managing toxic and inaccurate responses better.”
