Categories: Podcast Season 2

Building Transparency and Fighting Bias in AI with Ayodele Odubela

When it comes to AI, it’s garbage in, garbage out: a model is only as good as the data used to train it. In this episode of Utilizing AI, Ayodele Odubela joins Chris Grundemann and Stephen Foskett to discuss practical ways companies can eliminate bias in AI. Data scientists have to focus on achieving statistical parity to ensure that their training data sets are representative of the data the application will actually see. We consider the sociological implications of data modeling, using lending and policing as examples of biased data sets that can lead to errors in modeling. Rather than simply believing the answers, we must ask whether the data and the model are unbiased.
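To make the statistical parity idea concrete, here is a minimal sketch of one common way it is measured: the statistical parity difference, i.e. the gap in positive-prediction rates between two demographic groups. The function names and the loan-approval data below are hypothetical, chosen only to illustrate the metric, not taken from the episode.

```python
# Statistical parity: a model's positive-prediction rate should be
# similar across demographic groups. This sketch computes the
# statistical parity difference for hypothetical lending predictions.

def positive_rate(predictions):
    """Fraction of positive (1 = approved) predictions."""
    return sum(predictions) / len(predictions)

def statistical_parity_difference(preds_a, preds_b):
    """Gap in approval rates between two groups.
    A value near 0 suggests parity; large magnitudes flag potential bias."""
    return positive_rate(preds_a) - positive_rate(preds_b)

# Hypothetical loan-approval predictions (1 = approved) by group.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 6/8 = 75% approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 3/8 = 37.5% approved

print(statistical_parity_difference(group_a, group_b))  # 0.375
```

A gap this large (37.5 percentage points) would be a signal to audit the training data before trusting the model's answers, which is exactly the discipline the episode argues for.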

Guests and Hosts:

Ayodele Odubela of @CometML is an ML instructor, founder, and author. Connect with Ayodele on Twitter at @DataSciBae.

Chris Grundemann is the Managing Director at Grundemann Technology Solutions. You can connect with Chris on LinkedIn and on X/Twitter or visit his website to learn more.

Stephen Foskett is the Organizer of the Tech Field Day Event Series, part of The Futurum Group. Find Stephen’s writing at GestaltIT.com, on Twitter at @SFoskett, or on Mastodon at @[email protected].