Perhaps no company is more important to the datacenter than VMware, but how are the company’s technologies applied at the edge? This episode of Utilizing Edge features Saadat Malik, VP and GM of Edge Computing at VMware, discussing the evolution of VMware at the edge with Brian Chambers and Stephen Foskett. The discussion highlights the differences between datacenter and edge environments in terms of both people and technology. Malik emphasizes the importance of outcome- and product-focused mindsets in edge environments, as well as the constraints posed by limited physical resources. VMware’s technologies in connectivity, storage, security, and management are showcased as key enablers of successful edge computing. The episode also touches on the growing significance of AI and machine learning at the edge and the need for standardized solutions to drive edge growth and transformation.
Datacenter IT is used to having tight control over infrastructure and applications, but this is challenging to maintain at the edge. This episode of Utilizing Edge features Pierluca Chiodelli of Dell Technologies discussing the modern edge application platform with Allyson Klein and Stephen Foskett. A typical edge environment features many different platforms, devices, and connections that must be deployed, managed, and controlled remotely. When looking at the modern edge, Chiodelli recognizes the different personas and needs and constructs a plan to achieve the required outcome at each location. Modern applications need specialized hardware and connectivity that must be supported, deployed, and managed.
Between the so-called last mile and first mile lies the middle mile, the realm of colocation and network service providers. This episode of Utilizing Tech features Roy Chua and Allyson Klein discussing the middle mile with Stephen Foskett. This middle area includes content delivery services like Varnish and Akamai, as well as companies like Cloudflare that are delivering content and compute there. The middle network includes providers like Equinix, Digital Realty, and Megaport, which provide connectivity to the cloud and service providers, the hyperscalers themselves, and some interesting networking startups like PacketFabric and Graphiant. We must also consider observability, with companies like cPacket and Kentik as well as companies like Cisco and Juniper Networks.
When it comes to edge computing, money is not limitless. Joining us for this episode of Utilizing Edge is Carlo Daffara of NodeWeaver, who discusses the unique economic challenges of edge with Alastair Cooke and Stephen Foskett. Cost is always a factor in technology decisions, but the impact of every decision is multiplied when designing edge infrastructure with hundreds or thousands of nodes. Total Cost of Ownership is a critical consideration, especially operations and deployment on-site at remote locations, and the expected duration of the deployment must also be taken into account. Part of the solution is designing a very compact and flexible system, but the system must also work with nearly any configuration, from virtual machines to Kubernetes. Another issue is the fact that technology will change over time, so the system must be adaptable to different hardware platforms. It is critical to consider not just the cost of hardware but also the cost of maintenance and long-term operation.
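The fleet-scale multiplication described above can be sketched with a simple model. This is a minimal illustration with made-up numbers, not vendor figures: the point is that a modest per-site saving in ongoing operations can outweigh a larger difference in hardware price once multiplied across a thousand sites and several years.

```python
# Hypothetical edge fleet TCO sketch -- all figures are illustrative only.
def fleet_tco(sites: int, hw_per_site: float, deploy_per_site: float,
              annual_ops_per_site: float, years: int) -> float:
    """Total cost of ownership across the whole fleet over its lifetime."""
    return sites * (hw_per_site + deploy_per_site + annual_ops_per_site * years)

# 1,000 sites over 5 years: option A has cheaper hardware, option B is
# cheaper to operate on-site. Operations dominate at this scale.
option_a = fleet_tco(1000, hw_per_site=5000, deploy_per_site=1000,
                     annual_ops_per_site=2000, years=5)
option_b = fleet_tco(1000, hw_per_site=6000, deploy_per_site=1000,
                     annual_ops_per_site=1500, years=5)
assert option_b < option_a  # the operationally cheaper option wins overall
```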
Although everyone wants high availability from IT systems, the cost to achieve it must be weighed against the benefits. This episode of Utilizing Edge focuses on HA solutions at the edge with Bruce Kornfeld of StorMagic, Alastair Cooke, and Stephen Foskett. Although it might be tempting to build the same infrastructure at the edge as in the data center, this can get very expensive. With multi-node server clusters and RAID storage, the risk of a so-called split brain means that three nodes, not just two, must be deployed in most cases. StorMagic addresses this issue in a novel way, with a remote node providing a quorum witness and reducing the need for on-site hardware. Edge infrastructure also relies on so-called hyperconverged systems, which use software to create advanced services on simple and inexpensive hardware.
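The split-brain problem above comes down to majority voting: with only two nodes, a network partition leaves each side with one vote and neither can safely continue. A minimal sketch, assuming a hypothetical cluster where each node (plus an optional lightweight remote witness) casts one vote, shows why a third voter resolves the tie:

```python
# Minimal quorum sketch -- a hypothetical majority-vote rule, not any
# vendor's actual implementation.
def has_quorum(votes_in_partition: int, total_voters: int) -> bool:
    """A partition may keep serving writes only with a strict majority."""
    return votes_in_partition > total_voters // 2

# Two data nodes alone: a partition splits votes 1/1, so neither side
# has a majority and both must stop to avoid split brain.
assert not has_quorum(1, 2)

# Add a remote witness as a third voter: the side that can still reach
# the witness holds 2 of 3 votes and continues; the isolated node stops.
assert has_quorum(2, 3)
assert not has_quorum(1, 3)
```

This is why a remote quorum witness lets an edge site run with two servers instead of three: the tiebreaking vote lives off-site instead of on extra local hardware.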
The edge isn’t the same thing to everyone: Some talk about equipment for use outside the datacenter, while others talk about equipment that lives in someone else’s location. The difference between this far edge and near edge is the topic of this episode of Utilizing Edge, with Andrew Green and Alastair Cooke, Research Analysts at GigaOm, and Stephen Foskett. Andrew draws a line at a 20 ms round trip, the point at which a user feels that a resource is remote rather than local. From the perspective of an application or service, this limit requires a different approach to delivery. One approach is to distribute points of presence around the world closer to users, including compute and storage, not just caching. This could entail deploying hundreds of points of presence, or perhaps even more. Technologies like Kubernetes, serverless, and function-as-a-service are being used today, and these are being deployed even beyond service provider locations.
One of the main differentiators for edge computing is developing a scalable architecture that works everywhere, from deployment to support to updates. This episode of Utilizing Edge welcomes Dave Demlow of Scale Computing discussing the need for scalable architecture at the edge. Scale Computing discussed Zero-Touch Provisioning and Disposable Units of Compute at their Edge Field Day presentation, and we kick off the discussion with these concepts. We also consider the undifferentiated heavy lifting of cloud infrastructure and the tools for infrastructure as code and patch management in this different environment. Ultimately the differentiator is scale, and the key challenge for designing infrastructure for the edge is making sure it can be deployed and supported at hundreds or thousands of sites.
There is a long-standing gulf between developers and operations, let alone infrastructure, and this is made worse by the scale and limitations of edge computing. This episode of Utilizing Edge features Carl Moberg of Avassa discussing the application-first mindset of developers with Brian Chambers and Stephen Foskett. As we’ve been discussing, it’s critical to standardize infrastructure to make it supportable at the edge, yet we must also build platforms that are attractive to application owners.
Edge has many different stakeholders, applications, and needs, and this is especially true in distributed retail environments. This episode of Utilizing Edge features Simon Gamble of Mako Networks talking with Brian Chambers and Stephen Foskett about the complexities of technology at the retail edge. The key, according to Gamble, is segmentation of brands, franchisees, and technical applications. In some cases a single location might even include multiple separate companies or tenants under the same roof. Video, sensors, IoT, and location-based services are coming to retail locations as well, and some of these leverage outside service providers. Although sharing infrastructure is desirable, segmentation and security are key. Retail edge environments are increasingly complicated, but there are many ways to consolidate, converge, and standardize to make them practical to implement.
Edge environments were historically very specialized, but virtualization and cloud technology are enabling companies to deploy commodity platforms at the edge. This episode of Utilizing Edge features Raghu Vatte of ZEDEDA discussing this commoditization with Alastair Cooke and Stephen Foskett. Although the transition is still getting started, standard compute platforms are rapidly being adopted at edge locations, from warehouses to retail to industrial. Even if some specialized hardware is still needed, a unified platform can increasingly absorb a majority of applications at the edge. Another factor contributing to commoditization is the standardization of application requirements, with most applications now virtualized or containerized and standards developing for I/O and shared hardware resources.