As machine learning is increasingly used to market and sell products, we must consider how biases in models and data can impact society. Arizona State University Professor Katina Michael joins Frederic Van Haren and Stephen Foskett to discuss the many ways in which algorithms are skewed. Even a perfect model will produce biased answers when fed input data with inherent biases. How can we test and correct this? Awareness is important, but companies and governments should also take an active interest in detecting bias in models and data.
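As a minimal illustration of the kind of bias check discussed in the episode (not a method presented by the guest), one common starting point is to compare a model's positive-prediction rates across demographic groups. The data, group labels, and threshold below are purely hypothetical.

```python
# Minimal sketch: compare positive-prediction rates across groups to surface
# potential bias in a model's output. All values here are illustrative.

def demographic_parity_gap(predictions, groups):
    """Return the largest difference in positive-prediction rate between groups."""
    counts = {}
    for pred, group in zip(predictions, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + (1 if pred == 1 else 0))
    positive_rates = {g: pos / total for g, (total, pos) in counts.items()}
    return max(positive_rates.values()) - min(positive_rates.values())

# Example: a marketing model that extends an offer (1) or not (0).
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(preds, groups)
print(f"Demographic parity gap: {gap:.2f}")  # 0.75 vs 0.25 -> gap of 0.50
if gap > 0.1:  # the 0.1 threshold is an arbitrary illustration
    print("Large gap: investigate the training data and features for skew.")
```

Even a model with perfect accuracy on its training data will reproduce a gap like this if the underlying data is skewed, which is why testing on outputs, not just inputs, matters.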
Links:
“Algorithmic bias in machine learning-based marketing models”
Three Questions:
- Frederic: When will AI be able to reliably detect when a person is lying?
- Stephen: Is it possible to create a truly unbiased AI?
- Tom Hollingsworth of Gestalt IT: Can AI ever recognize that it is biased and learn how to overcome it?
Guests and Hosts
Katina Michael, Professor in the School for the Future of Innovation in Society and School of Computing and Augmented Intelligence at Arizona State University. Read her paper in the Journal of Business Research. You can learn more about her at KatinaMichael.com.
Frederic Van Haren is the CTO and Founder at HighFens Inc., Consultancy & Services. Connect with Frederic on LinkedIn or on X/Twitter and check out the HighFens website.
Stephen Foskett, Organizer of the Tech Field Day Event Series, part of The Futurum Group. Find Stephen’s writing at GestaltIT.com, on Twitter at @SFoskett, or on Mastodon at @[email protected].