Data is the most important component of AI implementation, yet most companies neglect data infrastructure and focus too heavily on ML models. In this episode of the Utilizing AI podcast, Melisa Tokmak of Scale AI joins Frederic Van Haren and Stephen Foskett to discuss the democratization of data infrastructure to support machine learning projects. Enterprises often lack a good understanding of their data, which can undermine the success of an AI project; this gap must be addressed before the project can proceed. Companies must also consider the quality of their data, beginning with a definition of the metrics that will properly assess the data foundation for their ML models.
- Frederic: Will we ever see a Hollywood-style “artificial mind” like Mr. Data or other characters?
- Stephen: How big can ML models get? Will today’s hundred-billion-parameter models look small tomorrow, or have we reached the limit?
- Alexandrine Royer: What do you think is one of the biggest ethical challenges that comes with AI that often goes under-discussed and should be more present in conversations surrounding the deployment of AI models?
Guests and Hosts