Biases can creep into any data set, and they can cause trouble when that data is used to train an AI model. Alf Rehn, Professor of Innovation, Design, and Management at the University of Southern Denmark, joins Andy Thurai and Stephen Foskett to discuss the lessons he has learned about algorithmic bias through his work with the Velux Foundations' Algorithms, Data and Democracy project. Society directs artificial intelligence toward solving some problems while ignoring others, and this can create biases just as surely as data selection in model training. Can we ever truly eliminate bias? If not, how do we work against it? Can we keep the genie in the bottle even if we want to? And can machines ever make sound, ethical, subjective decisions?