Mitigating Bias in Production

In November 2022, I presented at the NeurIPS Global South in AI workshop on how researchers can mitigate bias in general conversational AI.

Presentation Summary:

“It's no secret that conversational bots have had their fair share of publicity lately. From Tay to BlenderBot 3, and even models like GPT-3 whose outputs have been used as training data for conversational bots, these systems have been flagged for perpetuating bias. It's important that we take a holistic approach to combating this bias: not only training our models on better data, but also taking an operational approach to how we build, train, and deploy our models. Today, I will discuss how we can improve our ethical bot-building process by taking an operational approach, one that focuses not only on what data we use to build a model, but also on what built-in assumptions our models carry and how we can build post-production strategies to mitigate the bias of bots ‘in the wild’.”

Find a recording of the presentation here.

Find the presentation slides here.

Previous

Knowing Your Customer through Analytics and ML

Next

How Nvidia’s Megatron is Boosting Transformer Performance