Data is the most important component of any AI implementation, yet most companies neglect data infrastructure and focus too heavily on the ML models. In this episode of the Utilizing AI podcast, Melisa Tokmak of Scale AI joins Frederic Van Haren and Stephen Foskett to discuss the democratization of data infrastructure to support machine learning projects. Enterprises often lack a clear understanding of their data, which can undermine the success of an AI project and must be addressed before the project can proceed. Companies must also consider the quality of their data, beginning by defining the metrics that will properly assess the data foundation for their ML models.
Three Questions
- Frederic: Will we ever see a Hollywood-style “artificial mind” like Mr. Data or other characters?
- Stephen: How big can ML models get? Will today's hundred-billion-parameter models look small tomorrow, or have we reached the limit?
- Alexandrine Royer: What do you think is one of the biggest ethical challenges that comes with AI that often goes under-discussed and should be more present in conversations surrounding the deployment of AI models?
Guests and Hosts
Melisa Tokmak, GM at Scale AI. Connect with Melisa on LinkedIn or on Twitter at @MelisaTokmak.
Frederic Van Haren is the Founder and CTO at HighFens Inc., Consultancy & Services. Connect with Frederic on LinkedIn or on X/Twitter, and check out the HighFens website.
Stephen Foskett, Organizer of the Tech Field Day Event Series, part of The Futurum Group. Find Stephen’s writing at GestaltIT.com, on Twitter at @SFoskett, or on Mastodon at @[email protected].