Enterprises are working to simplify how they deploy and manage systems that support AI applications. That’s what NVIDIA’s DGX architecture is designed to do, and it’s the focus of this episode. Frederic Van Haren and Stephen Foskett are joined by Tony Paikeday, Senior Director of AI Systems at NVIDIA, to discuss the tools needed to operationalize AI at scale. Although many NVIDIA DGX systems have been purchased by data scientists or directly by lines of business, it is also a solution that CIOs have embraced. The system includes NVIDIA GPUs, of course, but also CPUs, storage, and connectivity, all held together by software that makes it easy to use as a unified solution.

AI is a unique enterprise workload in that it demands high storage IOPS and low storage and network latency. Another challenge is balancing these needs so that performance scales linearly as more GPUs are added, which is why NVIDIA relies on NVLink and NVSwitch, as well as DPUs and InfiniBand, to connect its largest systems.
- How big can ML models get? Will today’s hundred-billion parameter model look small tomorrow or have we reached the limit?
- Will we ever see a Hollywood-style “artificial mind” like Mr. Data or other characters?
- Can you give an example where an AI algorithm went terribly wrong and gave a result that clearly wasn’t correct?
- *Question asked by Mike O’Malley of SenecaGlobal.
Guests and Hosts
Date: 9/21/2021
Tags: @TonyPaikeday, @nvidia, @SFoskett, @FredericVHaren