By Grace Stanley

A new Cornell study revealed that Amazon’s AI shopping assistant, Rufus, gives vague or incorrect responses to users writing in some English dialects, such as African American English (AAE), especially when prompts contain typos.

The paper introduces a framework to evaluate chatbots for harms that occur when AI systems perform worse for users who speak or write in different dialects. The study has implications for the increasing number of online platforms that are incorporating chatbots based on large language models to provide services to users, the researchers said.

“Currently, chatbots may provide lower-quality responses to users who write in dialects. However, this doesn’t have to be the case,” said lead author Emma Harvey, a Ph.D. student at Cornell Tech. “If we train large language models to be robust to common dialectal features that exist outside of so-called Standard American English, we could see more equitable behavior.”

The research received a Best Paper Award at the ACM Conference on Fairness, Accountability, and Transparency (FAccT), held June 23-26. Co-authors are Rene F. Kizilcec, associate professor of information science in the Cornell Ann S. Bowers College of Computing and Information Science, and Allison Koenecke, assistant professor at Cornell Tech.

Read more at the Cornell Chronicle.

Grace Stanley is the staff writer-editor for Cornell Tech.