The edge isn’t the same thing to everyone: some talk about equipment for use outside the datacenter, while others talk about equipment that lives in someone else’s location. The difference between this far edge and near edge is the topic of this episode of Utilizing Edge, with Andrew Green and Alastair Cooke, Research Analysts at GigaOm, and Stephen Foskett. Andrew draws a line at 20 ms round-trip latency, the point at which a user feels that a resource is remote rather than local. From the perspective of an application or service, this limit requires a different approach to delivery. One approach is to distribute points of presence closer to users, including compute and storage, not just caching. This would entail deploying hundreds of points of presence around the world, and perhaps even more. Technologies like Kubernetes, serverless, and function-as-a-service are being used today, and these are being deployed even beyond service provider locations.
One of the main differentiators for edge computing is developing a scalable architecture that works everywhere, from deployment to support to updates. This episode of Utilizing Edge welcomes Dave Demlow of Scale Computing discussing the need for scalable architecture at the edge. Scale Computing discussed Zero-Touch Provisioning and Disposable Units of Compute at their Edge Field Day presentation, and we kick off the discussion with these concepts. We also consider the undifferentiated heavy lifting of cloud infrastructure and the tools for infrastructure as code and patch management in this different environment. Ultimately the differentiator is scale, and the key challenge for designing infrastructure for the edge is making sure it can be deployed and supported at hundreds or thousands of sites.
There is a long-standing gulf between developers and operations, let alone infrastructure, and this is made worse by the scale and limitations of edge computing. This episode of Utilizing Edge features Carl Moberg of Avassa discussing the application-first mindset of developers with Brian Chambers and Stephen Foskett. As we’ve been discussing, it’s critical to standardize infrastructure to make it supportable at the edge, yet we also must build platforms that are attractive to application owners.
Edge has many different stakeholders, applications, and needs, and this is especially true in distributed retail environments. This episode of Utilizing Edge features Simon Gamble of Mako Networks talking with Brian Chambers and Stephen Foskett about the complexities of technology at the retail edge. The key, according to Gamble, is segmentation of brands, franchisees, and technical applications. In some cases a single location might even include multiple separate companies or tenants under the same roof. Video, sensors, IoT, and location-based services are coming to retail locations as well, and some of these leverage outside service providers. Although sharing infrastructure is desirable, segmentation and security are key. Retail edge environments are increasingly complicated, but there are many ways to consolidate, converge, and standardize to make them practical to implement.
Edge environments were historically very specialized, but virtualization and cloud technology are enabling companies to deploy commodity platforms at the edge. This episode of Utilizing Edge features Raghu Vatte of ZEDEDA discussing this commoditization with Alastair Cooke and Stephen Foskett. Although the transition is still getting started, standard compute platforms are rapidly being adopted at edge locations, from warehouses to retail to industrial. Even if some specialized hardware is still needed, a unified platform can increasingly absorb a majority of applications at the edge. Another factor contributing to commoditization is the standardization of application requirements, with most applications now virtualized or containerized and standards developing for I/O and shared hardware resources.
Although the technology is roughly similar to datacenter or cloud, the unique challenges of edge computing require new approaches to storage, networking, orchestration, deployment, and more. This episode kicks off a new season of Utilizing Tech focused on edge computing, featuring Alastair Cooke and Brian Chambers as co-hosts along with Stephen Foskett.
Season 5 of Utilizing Tech focuses on edge computing, a whole new dimension for enterprise technology. As seen during our recent Edge Field Day event, there are new problems and new solutions for compute, networking, security, application orchestration, storage, and more!
We’re wrapping up this season of Utilizing Tech by asking the key question: Why would a system architect choose to utilize CXL in their designs? This episode of Utilizing CXL features Stephen Foskett, Nathan Bennett, and Craig Rodgers discussing the practical prospects and benefits of CXL.
This episode wraps up the season of Utilizing Tech with Stephen Foskett and Craig Rodgers discussing the evolution of CXL with Jim Pappas, Director of Technology Initiatives at Intel and Chairman of the CXL Consortium. No matter how good the technology is, it needs widespread industry support, backwards and forwards compatibility, and open cooperation, and that’s what made technologies like PCI, PCI Express, USB, and now CXL successful.
This episode of Utilizing Tech features Mark Orthodoxou, VP of Strategic Marketing for Datacenter Products at Rambus discussing with Stephen Foskett and Craig Rodgers the technology and standards required for mass adoption of CXL-attached memory. Rambus brings decades of experience and a breadth of technology to the deployment of memory in high-performance and highly-available systems. As a CXL Consortium member, Rambus is bringing this experience to CXL, enabling the technology across the ecosystem.