Designing a Scalable Edge Infrastructure with Scale Computing

One of the main differentiators for edge computing is developing a scalable architecture that works everywhere, from deployment to support to updates. This episode of Utilizing Edge welcomes Dave Demlow of Scale Computing discussing the need for scalable architecture at the edge. Scale Computing discussed Zero-Touch Provisioning and Disposable Units of Compute at their Edge Field Day presentation, and we kick off the discussion with these concepts. We also consider the undifferentiated heavy lifting of cloud infrastructure and the tools for infrastructure as code and patch management in this different environment. Ultimately the differentiator is scale, and the key challenge for designing infrastructure for the edge is making sure it can be deployed and supported at hundreds or thousands of sites.

Hosts and Guest:

Stephen Foskett, Organizer of the Tech Field Day Event Series, part of The Futurum Group. Find Stephen’s writing at GestaltIT.com, on Twitter at @SFoskett, or on Mastodon at @[email protected].

Brian Chambers, Technologist and Chief Architect at Chick-fil-A. Connect with Brian on LinkedIn and Twitter. Read his blog on Substack.

Dave Demlow, Vice President Of Product Strategy at Scale Computing. You can connect with Dave on LinkedIn or find out more on Scale Computing’s website.

Follow the podcast on Twitter at @UtilizingTech, on Mastodon at @[email protected], or watch the video version on the Gestalt IT YouTube channel.

Transcript:

Stephen Foskett: Welcome to Utilizing Tech, the podcast about emerging technology from Gestalt IT. This season of Utilizing Tech focuses on edge computing, which demands a new approach to compute, storage, networking and more. I’m your host, Stephen Foskett, organizer of Tech Field Day and publisher of Gestalt IT. Joining me today as my co-host is Brian Chambers.

Brian Chambers: Hello everyone, I’m Brian Chambers. I lead enterprise architecture at Chick-fil-A, and I write about tech at BrianChambers.substack.com with the Chamber of Tech Secrets. You can also find me on Twitter @BRICHAMB.

Stephen: So Brian, one of the defining characteristics of edge is scale, and scale means many things. Tell us first, you know, what do you think of as the constraints and the requirements for deploying at scale?

Brian: Yeah, well, there are lots of constraints when we think about deploying things at the edge, right? We’ve got limited human technicians, most likely. There are not a lot of people who can help us troubleshoot problems, or maybe even install sophisticated equipment and stand up architecture, so that’s one thing. We’ve obviously got unreliable connections, potentially due to being in a remote scenario or just not being able to invest in something resilient. So there are all kinds of constraints, and that’s just a few. So when I think about scaling at the edge, it’s not necessarily the way that we would talk about scaling in the cloud, where we think about horizontal autoscaling, or scaling up a machine size or something like that. I think what we’re actually talking about is the ability to deploy and manage and operate lots and lots of copies, lots and lots of footprints of whatever your thing looks like. Being able to do that, you know, 100 times, 1,000 times, 10,000 times or even more brings a whole new set of challenges, and it’s something fun that we can hopefully explore today.

Stephen: Yeah, that’s one of the things: coming from my background in data center IT, this is unlike anything we ever had to deal with before. I mean, if we had multiple locations it was two or three or, you know, maybe 10, right, if you’re a huge company, not 10,000, and that’s one of the craziest things. And because of this it really demands a completely different architecture, a completely different approach, as you’ve learned at Chick-fil-A and also as we heard from company after company at Edge Field Day. And so that’s why we invited Dave Demlow from Scale Computing, it’s right there in the name, to join us today for this episode. Dave, welcome, and thanks for being part of this.

Dave Demlow: Thanks very much for having me.

Stephen: So when I say scale, and I don’t mean the name of the company, when I say scale in terms of scaling at the edge, what do you think? I mean, how would you answer the same question?

Dave: Yeah, I mean, it’s all the things that Brian said. In many cases it is scaling across multiple locations, different connectivity options, but, you know, we also really want to encourage designing for scale for the new applications that we all know are coming to the edge. So not just building or buying a box or a software stack to deploy one thing, maybe just the hot new computer vision app, but, you know, what are those other things? What are the data requirements going to be? How would you address those without, you know, any rip-and-replace of the architecture, without having to roll a truck every time you need to scale your applications, and so forth? So I think it means a lot of things: a lot of sites, way different than the data center in just about every way imaginable, and those are the kinds of problems that we’re working really hard to solve.

Brian Chambers: So Dave, one of the things we talked about a second ago with constraints was thinking about some of the limitations of edge sites not having humans there, and there are some things that are maybe buzzwords to people, phrases like zero-touch provisioning. What do you think about that concept, and what does that mean to you when you think about zero touch at the edge? What does that solve for, and how does that work?

Dave: Sure, yeah. Obviously zero-touch provisioning is a big part of our solution, and we look at it as not just that initial step, how do you get the initial hardware infrastructure set up, how do you get the initial applications out there, but, as I was talking about earlier, how do you scale that out? You know, when you need more GPU resources, or we need additional storage, designing from day one how you can zero-touch deploy and provision new types of resources kind of the same way you would in the cloud. But it’s obviously not the cloud: some hardware eventually has to show up. Some hardware is eventually going to break, so how do you handle replacing parts of this edge cloud that have failed? Hopefully you designed the system to be resilient, to handle that failure, so that you’re not, you know, emergency flying a helicopter out to the oil rig, those kinds of scenarios. And also quickly provisioning everything from racking to patching to security. Those are essential across the edge with a zero-day type of exploit, making sure that you can zero-touch, quickly and rapidly, deploy those out across a large fleet. Again, completely different from the kinds of things you see in the data center, even with a lot of VM instances. That’s a much easier problem to solve than disconnected sites with, in many cases, different hardware configurations running different applications depending on the different kinds of sites, things like that. So a lot of different things to consider there.
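
As a rough illustration of the zero-touch provisioning flow Dave describes, here is a minimal sketch in Python. The fleet manager URL, endpoints, and payload fields are hypothetical placeholders, not Scale Computing’s actual API; a real implementation would also handle authentication, retries, and secure enrollment.

```python
# Minimal zero-touch provisioning sketch (illustrative only; the fleet manager URL,
# endpoints, and payload fields are hypothetical, not Scale Computing's actual API).
import requests

FLEET_MANAGER = "https://fleet.example.com/api/v1"  # hypothetical central endpoint

def apply_config(config: dict) -> None:
    # Placeholder for the real work: join the named cluster, configure networking,
    # and deploy the applications listed in the desired state.
    print(f"joining cluster {config['cluster_id']}")
    print(f"configuring network {config['network']}")
    for app in config.get("applications", []):
        print(f"deploying {app['name']}")

def provision(serial_number: str) -> None:
    # A freshly imaged node phones home and asks what it is supposed to become.
    resp = requests.get(f"{FLEET_MANAGER}/nodes/{serial_number}/desired-config", timeout=30)
    resp.raise_for_status()
    apply_config(resp.json())
    # Report back so operators see the site come up without anyone touching it.
    requests.post(f"{FLEET_MANAGER}/nodes/{serial_number}/status",
                  json={"state": "provisioned"}, timeout=30)

if __name__ == "__main__":
    provision("SERIAL-0001")
```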

Brian: One of the things from your presentation at Edge Field Day that I think stuck with a number of people was a phrase that came up related to this zero-touch provisioning idea: the term “disposable units of compute.” Can you tell us a little bit more about how that came up in the discussion and what it means?

Dave: Sure, yeah. It has been a design goal of our infrastructure, the Scale Computing software and hardware infrastructure, from day one that things are going to fail. So plan for anything to fail, and provide the appropriate resiliency in an automated fashion without relying on a cloud brain, much less a human, to interact with it. So everything within a Scale platform is designed to be resilient, disposable. Now, we do have some applications where it’s just a single node and you don’t necessarily have the requirements for a full cluster type of solution, which we obviously offer and which is kind of our flagship, but even there you can accommodate that and make it disposable. All right, we’re not going to have an on-premises infrastructure failure and a cloud failure at the same time, we hope, so there are cases where you can make it so we’ll fail back to the cloud, or we’ll have maybe a cloud-centric application where we’re getting some benefits of local processing for latency but can tolerate a little longer latency in an emergency, or vice versa. Failing back and forth could be another way of designing your applications, designing your infrastructure to be disposable, which is really key across this many sites when you’re talking about a large and wide deployment.

Stephen: I think one of the other aspects of this whole disposable units of compute, which sounds kind of like a product from some company but I can’t quite put my finger on it…

Dave: Yeah, yeah.

Stephen: …one of the things that also kind of goes hand in hand with that is that when you’re deploying so much infrastructure at so many locations, every choice you make that makes it just a little more expensive is magnified, and so you really have to use small, lower-cost, maybe lower-reliability systems, because they have to be disposable. And it’s almost like there’s a tipping point, where in the enterprise you’re basically deploying things that are supposed to be really, really solid. I mean, essentially we’re deploying little mini mainframes that are supposed to have all sorts of redundancy features and all sorts of high availability features, and we’ll pay huge amounts of money for reliable storage and redundant this and redundant that, and so on. At the edge, you know, you have to ask yourself the question: does it make sense to pay for this stuff, or should we just use the cheapest, most disposable stuff because we know we’re going to burn it up and chuck it at the end? And so again, I think that’s another aspect here that is very different at the edge than in the data center.

Dave: I would agree, and you have a lot of choices there. We have this discussion a lot of times with customers who acknowledge, hey, my application needs some resiliency, some data persistence and resiliency, some failover or something like that. They might come to us saying, well, the easiest way I can think of to do that is I’m just going to buy two of everything, one active, one passive, and if one goes away, you know, we’ll just fail over to the other one, without realizing, well, okay, you’re probably going to buy pretty expensive boxes to do that. You’re going to set up RAID arrays and things like that. What if you could distribute that across three smaller disposable things, or four or five or six disposable things, and then bolt on? And how do you scale out from a two-node failover cluster, you know? It’s really hard to go to that third node, or do you do pairs, you know, all these things that proliferate. So we have a lot of those kinds of discussions: do you want to get locked into “this is the way our stores all have to be, they all have to look like this,” this kind of hardware footprint, this kind of management stack, or do you want to design for something that can kind of plug and play different resources? Our solution lets you mix and match nodes, things that are storage heavy, things that are compute heavy, things that have GPUs resident, things like that, so that you know over time you can stay within our entire management framework, even zero-touch deploy whatever kind of resources you need, aggregate those into each edge site, and you don’t have to make all of those decisions up front. You also don’t have to pay for stuff that you think you might need two years down the line, which is another thing. We often see customers having to say, we think we’re going to scale to this, so we’re going to buy more CPU than we need, we’re going to buy more RAM than we need, and cross our fingers that, you know, A) we don’t waste it, and B) we didn’t estimate or guess wrong and just blow right through it.

Stephen: But each of those choices is going to have a lot of implications. Oh, we’re going to do 32 gigs instead of 16 gigs, well, you just spent a lot more money.

Dave: Yes exactly.

Brian: Another thing that I think gets multiplied by the number of footprints at the edge is the amount of work it takes to do any given thing manually, right? Like, no matter what you might have thought was possible to do manually in a data center environment, or even in the cloud, where we’re still trying to default to automation, the edge is just a whole other level, right? Being able to respond manually to incidents that occur is super challenging, and so is the idea of planning things manually. We talked a little bit about supporting things manually. Automation just becomes a really critical concept and a critical factor. How do you see that, Dave, when you think about being able to effectively scale the edge and being able to lay in appropriate automation to make that happen?

Dave: Yeah, really from day one we expose all of our functions via APIs that we, you know, use ourselves in testing, so they’re well battle-tested. You know, we’ve really put a lot of effort into both the automation that happens under the hood that nobody sees but is super valuable, like when there’s, you know, obviously a drive failure, a node failure, network outages, things like that, but also into things that are probably less visible, that you don’t think about, like, hey, we’re upgrading a Linux kernel here, we also probably need to update the BIOS on this physical device, or the firmware on this network card, things like that that can be automated. And especially, again, when you’re dealing across a large fleet, you really want consistency, because if you’ve got different drivers, different BIOS versions, things like that, all of a sudden it’s, why is this site always a little weirder than the other sites? So automating everything from the deployment, from what your infrastructure footprint looks like, to deploying applications, to patching the operating systems, to, you know, how do you deploy your containers and all that. And then how do you, again, react to the time-critical events like a zero-day security patch we need to get out, or, you know, a critical fix that we need to get out to a thousand sites quickly?
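
As a rough illustration of that kind of fleet-wide rollout, the sketch below pushes a fix in waves, a small canary group first, then progressively wider, halting if a site fails its check. The site names, the patch identifier, and the apply/verify helpers are hypothetical stand-ins, not any vendor’s actual tooling.

```python
# Illustrative staged rollout across many edge sites: patch a small canary wave first,
# verify health, then widen. Site names and the apply/verify helpers are hypothetical.
import time

def apply_patch(site: str, patch_id: str) -> bool:
    # Placeholder: in reality this would call a fleet manager API for the site.
    print(f"applying {patch_id} to {site}")
    return True

def site_healthy(site: str) -> bool:
    # Placeholder health check; a real check would look at telemetry from the site.
    return True

def staged_rollout(sites: list[str], patch_id: str, wave_sizes=(1, 10, 100)) -> None:
    remaining = list(sites)
    for size in wave_sizes:
        wave, remaining = remaining[:size], remaining[size:]
        if not wave:
            break
        for site in wave:
            if not apply_patch(site, patch_id) or not site_healthy(site):
                raise RuntimeError(f"rollout halted: {site} failed after {patch_id}")
        time.sleep(1)  # soak time between waves (shortened for the sketch)
    # Anything left after the defined waves gets the patch in one final pass.
    for site in remaining:
        apply_patch(site, patch_id)

staged_rollout([f"store-{i:04d}" for i in range(1, 251)], "zero-day-fix-001")
```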

Brian: Yeah, I was getting a vibe, Dave, that it was very similar to what we hear from the cloud providers who talk about undifferentiated heavy lifting, and maybe that’s really the question: when you think about the way that the edge works, is it possible to enable people who want to put things at the edge to focus on just their app, or, you know, just the core services that they want, or does the nature of the edge require that you get a lot closer to exactly what infrastructure you’re running on? Like, is there a model where you can be more focused and you don’t have to see the whole picture? What are your thoughts on that?

Dave: Well, if I’m understanding your question correctly, from the Scale perspective that’s a lot of what we want to do: let the DevOps team, let the application developers, not have to care about the details of, you know, the BIOS update that just came out for this box or that CPU generation, and handle that for them using automation, A) because it’s important and it’s got to be done quickly, and B) because it has to be done right, and employing humans to do it, and do it rapidly, is generally a recipe for disaster. So those are definitely the kinds of things that we try to just take care of and hide to a degree. And in the cloud you don’t even know what box you’re on or what the drivers are; you assume that those are all being handled for you. We want to provide that kind of experience where it’s desired, which I think is most cases in the distributed edge. If people didn’t have to worry about hardware, they’d rather not. But it’s a necessary evil, and we talk to a lot of software developers as well who are building these apps to go out to all of these edge locations. They don’t want to deal with the hardware, and so, yeah, we’re good at that. We’ve built a lot of IP around how you monitor specific hardware, how you test, how you qualify, how you update, you know, an entire software stack on top of very specific, fingerprinted hardware configurations in order to achieve reliability. I mean, in our case there’s no customer that’s running a configuration, from hardware to firmware to BIOS to software stack right up to their applications, which is what they care about, that we don’t have in our lab exactly. It’s not a guess, it’s not an HCL or, you know, check all these pieces yourself and hope that you’re close enough. We ensure that in our solution, and across this large number of sites that goes a long way toward ensuring consistency.

Brian: I can say that from a company that had to think about the whole thing ourselves, because we couldn’t find, at the time, any options that gave us the sort of building blocks that we wanted. I don’t think we knew that Scale actually did any of these things when we got started at Chick-fil-A. But it is a tremendous amount of work, and a lot of organizations probably don’t appreciate how much engineering goes into doing all that hardware work: thinking about images and all of the things just mentioned, BIOS, what operating system we’re going to use and how we’re going to keep it supported and patched over time. Like, you’re basically in the data center business if you’re going to do all this stuff, with thousands and thousands and thousands of data centers, and it’s just a tremendous amount of work to do it well. And again, when things happen and there are issues, you’ve just got lots of copies, not to mention the brickability factor of not wanting to destroy all the hardware that exists by doing something that makes it go offline so it all has to be replaced. There’s just a ton of work that goes into that, I think, and being able to find ways to offload as much of the “undifferentiated heavy lifting” as possible seems like a really good strategy for people who are more focused on presenting an application to someone that does a thing for them that adds value, as opposed to being really, really focused themselves on all the infrastructure.

Stephen: One of the things that occurs to me, though, is that a lot of the tools that you might use to do that sort of hardware management, whether it’s on the cloud side, where a lot of the infrastructure as code work has been done, or on the enterprise side, where there’s obviously a lot of out-of-band management, IPMI kind of stuff, or on the desktop side, where there’s a lot of, you know, patch management, things like that, a lot of these tools are designed for an environment that is not at all like the environment we’re talking about, and maybe not at all like the hardware that we’re talking about. In terms of practical experience, I guess for both of you, does it work? Is this stuff good? Is this stuff what you need, or do you need different tools, different technologies, because it just isn’t applicable to the edge?

Dave: I guess I’ll go. From our perspective, I think a lot of the tools can be used, for example Ansible. We put a lot of effort into taking our API and coming up with a fully declarative, idempotent Ansible collection that is actively maintained, actively expanded, and certified with Red Hat. The nice thing about Ansible is that not only can it configure, control, and declaratively manage the infrastructure, the Scale Computing infrastructure in our case, it can also extend to the applications themselves, there are plug-ins for everything, and even to the network infrastructure. So I think that’s an example of an existing tool that’s very, very widely supported, with a large ecosystem of vendors, that can be used across edge locations. You do have to use it a little bit differently, and that’s one of the things that we did. For example, you can use our Fleet Manager console, as well as our on-premises system, as your source of truth, so when you’re writing a playbook you don’t have to have an inventory of every VM across all the sites; you can orchestrate using us as the source of truth for your fleet, for all your sites, and then for the system locally as well. And we can actually go do things like launch those playbooks, and those are things we’ll be doing and adding more of over time: basically managing those kinds of jobs, those tasks, those playbook runs, for things like your applications, and even doing things like custom application monitoring, using tools like Ansible or other monitoring, to say not only is our infrastructure up, but my application is performing, and performing acceptably, and to gather that telemetry and observability data for applications that may not have it built in. If you’re using Kubernetes or things like that, you’ve probably got good tooling to do that. If you’re using some of the legacy stuff that we still see in manufacturing sites and retail sites, there’s some need to have a standardized way to get some of that data.
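
To make the “fleet manager as source of truth” idea concrete, here is a minimal sketch of an Ansible-style dynamic inventory script in Python. The fleet manager endpoint and response fields are hypothetical, not Scale Computing’s actual API; it just shows the general pattern of generating inventory from a central system rather than maintaining per-site host files.

```python
#!/usr/bin/env python3
# Hypothetical dynamic inventory sketch: build Ansible inventory from a central
# fleet manager instead of hand-maintained host files. The endpoint and fields
# below are illustrative only, not Scale Computing's actual API.
import json
import requests

FLEET_MANAGER = "https://fleet.example.com/api/v1"  # hypothetical endpoint

def build_inventory() -> dict:
    sites = requests.get(f"{FLEET_MANAGER}/sites", timeout=30).json()
    inventory = {"edge_sites": {"hosts": []}, "_meta": {"hostvars": {}}}
    for site in sites:
        name = site["name"]                      # e.g. "store-0042"
        inventory["edge_sites"]["hosts"].append(name)
        inventory["_meta"]["hostvars"][name] = {
            "ansible_host": site["management_ip"],
            "site_type": site.get("type", "standard"),  # e.g. pharmacy vs. standard
        }
    return inventory

if __name__ == "__main__":
    # Ansible calls inventory scripts with --list and expects JSON on stdout.
    print(json.dumps(build_inventory(), indent=2))
```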

Brian: I tend to agree. I think there’s a good deal of convergence in a lot of the tooling, but at the same time we probably have to acknowledge that the constraints that are different mean that certain paradigms are going to be different. Like we already talked about scale meaning a different thing: it doesn’t mean horizontal autoscaling, it means a number of footprints and a number of copies. Another thing that’s just different is that in a lot of environments, ours for example, we couldn’t put in anything to do PXE booting, and we didn’t have anywhere near the network control that we even have in the cloud, you know, through APIs and things like that. So I think there’s a lot of convergence, but I think there’s going to be tension with getting to full convergence because of some of the constraint challenges that are just unique to those edge environments. Is that something that you guys have seen, and is it consistent with what you think as well, Dave?

Dave: Yeah, and you know, we do see more on the monitoring side, customers that are using their existing monitoring tools, so we’re feeding, providing data, both from our infrastructure at the site level and in some cases even from the application, into, you know, some even fairly old-school kinds of monitoring stuff. But they have their NOCs, and they’re trying to extend the edge locations out into some of those environments in many cases, and doing it fairly successfully, I guess because they’re not having to deal with very specific hardware monitoring. Like, you know, a drive failure: we’re going to handle that, we’re going to correct it, and give them maybe one event that says, hey, you need to send a new drive out to the site next time you go. It’s not a fire drill, you don’t need to roll a truck today. We’ve taken care of the immediate crisis, your applications and your data are fine, but somebody needs to know that and work it into their ticketing process, their service desk, things like that. We’re even starting to see some integration happen there.

Stephen: And to what extent do you even have somebody there, remotely? I know this is another thing we’ve talked quite a lot about. So you’re talking about ticketing, you’re talking about truck rolls, and so on, but, I mean, you don’t really have anybody there to do that.

Dave: No, I’m referring to it all being central. I mean, the telemetry is coming from the site, it’s coming from the ship, like, hey, we’re in the middle of the ocean, a drive failed, the system took care of itself, the applications are fine. You need to request that a hard drive go to the dock or the port that the ship is showing up at in one or two weeks, those kinds of things. Or, hey, we’re having a capacity warning, we need to send, whatever, an extra node, configure it for zero-touch provisioning, so that when the ship gets into port they plug it in and it joins the cluster and resolves that resource need.

Stephen: Yeah, that’s an interesting differentiator, because in the case of, like, retail or a restaurant or something, somebody can go get an overnight package and plug it in, theoretically, probably, maybe, hopefully. But, you know, you mentioned there are a lot of edge environments that are not like that, where they absolutely cannot get replacement hardware and so on. And I imagine that there it might make sense to overbuy, overbuild, overprovision, because you just don’t know if you’re going to be able to have replacement nodes. You might as well deploy four, five, six nodes now.

Dave: Yeah, there’s definitely some math you could do to calculate your expected cost of sending a helicopter out versus, I’m just going to have a spare node. With us it can actually be part of the cluster, being consumed, so you’re still getting benefits from it, from kind of load balancing, but you’re designing for survivability, making sure you have enough resources that can handle the storage, handle the compute, even potentially with multiple failures.
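
As a back-of-the-envelope illustration of that math, the sketch below compares the expected annual cost of an emergency dispatch against simply carrying an extra node in the cluster. All of the figures and the simple one-failure model are invented purely for illustration.

```python
# Back-of-the-envelope comparison (all figures invented for illustration):
# carry a spare node in the cluster vs. emergency-dispatch a replacement on failure.

failure_prob_per_year = 0.15      # chance a node at this site fails in a given year
emergency_dispatch_cost = 25_000  # helicopter / boat / urgent technician visit
downtime_cost = 40_000            # lost production while waiting for the replacement
spare_node_cost_per_year = 4_000  # amortized cost of one extra node in the cluster

expected_cost_without_spare = failure_prob_per_year * (emergency_dispatch_cost + downtime_cost)
# With a spare already absorbed into the cluster, a failure degrades capacity but the
# replacement part can ride along on the next scheduled visit instead of a dispatch.
expected_cost_with_spare = spare_node_cost_per_year

print(f"expected annual cost without spare: ${expected_cost_without_spare:,.0f}")
print(f"expected annual cost with spare:    ${expected_cost_with_spare:,.0f}")
```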

Stephen: And I would think, too, that in that case you might want to actually deploy maybe more reliable, more high-end, more highly available hardware in some of those locations, or not? I mean, I think that’s part of the math too. You might look at it and be like, okay, we could deploy an HA system with redundant power supplies and, you know, RAID and all this kind of stuff, or maybe we just have an extra cheap node out there, you know?

Dave: Exactly, yeah, and we tend to see more of the latter, but yeah.

Brian: Yeah, that’s a paradigm difference, I think, for sure, which is that you’re going to think about your problems a little bit differently, not just because of the cost profile, but because of the complexity to manage. The more pieces you have, and the more sophisticated you get about what you put at the edge, the more complex the management is going to be. And so you either need something that’s really simple, or you need, I think, really great primitives to build on top of and really great automation, to make sure that if any of these weird things happen and there are failures, you’ve architected and designed for what the degraded state is going to look like. If you have a RAID device or something and it just blows up and is bricked, are you completely down, and was that a big deal because you’re highly dependent on it? So I think it’s interesting to weigh the trade-offs of a really lightweight, simple, fairly cheap, low-complexity architecture that employs these ideas of zero-touch and disposability and things like that, versus, we’re going to make it a little bit more like a small data center: we’re going to buy more enterprise-grade components, we’re going to put more resiliency into it and try to manage toward success that way. They’re both the edge, they’re just two different ways of thinking about it, and it probably really depends on your use case. If you’re out in the ocean or, you know, if you’re in a restaurant in a city, you’ll be doing different things, or maybe sometimes the same. So it’s very interesting.

Dave: Yeah, and the other thing is, it’s not always perfectly correlated, but in a lot of these cases where, yeah, you might make those trade-offs of, I need this extra level of resiliency, a lot of them also have space constraints, so form factor and power and cooling, and all of a sudden that either is not possible, or it’s like, wow, if I can get three or four or five of these little tiny Intel NUC units, or similar devices, and cluster them together for resiliency, you can accommodate those environments much more easily. So there’s almost a different class of hardware that comes into play, or at least into consideration, in a lot of these environments, due to space constraints, power, and cooling and whatnot.

Brian: Yeah. You hit on something important, I think, which is the environmentals. I’m not sure we’ve talked about that before, but they’re just different at the edge, and it depends what edge we’re talking about. I mean, there are data center-like edges, you know, the CDNs and the regional data centers and things like that, but when you get to this edge that’s closer to the user, closer to the business, the really remote edge, the more that looks different, the more you’re likely to have somebody unplug something in the office on accident. Not that that’s ever happened in our environment before. Or spill coffee on it, or it gets wet from a storm, or whatever the case may be. There’s heating and cooling, could be either one I suppose, moisture in the air, or other things like that. It’s all not clean and regulated and perfect at the edge, so it definitely gives you a new set of concerns, and that informs the way that you think about some of it. Would you agree with that, and is there anything you’d add?

Dave: Yeah, I would definitely agree. We see very unique environments, and some of our edge customers who started early on with kind of the obvious stuff are bringing us these new use cases, like, hey, can we have a cluster on a crane that moves around? There’s no way we can connect Ethernet to that, of course it’s wireless. Or factory floors where there are unique systems for all kinds of different manufacturing pods, because it’s better to just keep the data local there versus even sending it across the shop network during the day, during traffic. So yeah, you run into a lot of unique environmental stuff. The retail stores are very interesting, you know, where they tend to hoist up these little micro data centers somewhere in the stockroom, things like that. You’ve also got physical security, which we haven’t really touched on here, being able to secure those devices. It’s a lot easier to secure small form factor devices than data center class devices in a lot of cases, but it’s important that you consider the physical security as well.

Stephen: Yeah, I was talking to one company and they were talking about a very specific thing that had just never occurred to me: DC power. In retail or a restaurant or something, AC, of course, yes, you plug it into the wall, you’ve got a power strip, whatever. But they were talking about industrial IoT, and specifically about ships, and they were saying, yeah, it’s all DC power, they absolutely cannot use anything with an AC power adapter. Apparently there’s this Anderson Powerpole, which is like the special connector that they’re using. They’ll have basically a power system with battery backup that’ll power, like, a whole cluster together, and things like that, but it’s all DC. I just hadn’t thought about things like that. Again, you kind of look at it and you’re like, well, nothing in the data center... I mean, I guess there’s a DC movement in the data center, but that’s a topic for a different day. But nothing in the data center that comes off the rack is really ready for an environment where you need to plug it into, like, a 19-volt DC dedicated line or something like that. I mean, it’s just not a thing. And again, the cool thing about these little small form factor devices is that most of them actually do have external power bricks, and you can just chuck that and rig up DC wiring and so on.

Dave: And in some cases Power over Ethernet as well.

Stephen: That’s an interesting aspect to me, and I guess, because we talked about Wi-Fi, the flip side of that is the all-Power-over-Ethernet kind of environment, something in the ceiling, you know, where you don’t have power in the room. Yeah, just so many things to think about. But when you’re designing this stuff, I guess there’s also a trade-off between standardization and specialization, because maybe you’re going to take some extraordinary measure to make this system work in this particular environment, because all the other systems work that way and you want to make sure you’re deploying everything everywhere. I guess, how do you deal with situations like that, where you don’t have a uniform place to deploy it?

Dave: So we definitely see environments that have different needs, whether it’s legacy hardware, or an application running on legacy hardware that maybe they’re ready to replace. Oftentimes that’s where we’re just lifting and shifting: we’re going in and taking an old Windows virtual machine or an old Linux box, or in some cases an appliance, and moving it into a virtual machine on resilient, well-managed infrastructure. The other thing we do see, and I think I mentioned this earlier, is that our clusters let you mix and match node types. I mentioned we have all different kinds of capacities, performance, CPU-heavy, storage-heavy, so while we talk about standardizing, you can have different kinds of sites. We see this commonly in retail: well, this store also has a pharmacy, so we have this extra application, this extra resource we need. Oh, this site is one of our customer experience stores, so we’ve got all sorts of GPUs and digital signage, and we have nodes that are tailored for those kinds of applications, but they can still benefit from all the same management, the same zero-touch provisioning, the same fleet management with our console. That’s kind of what we try to do. And the same automation, so you can have automation blueprints, templatize, you know, your pharmacy stores and the different sets of applications and resources they need, to plan for and handle and design for those kinds of differences as well.

Brian: Well, this has been really great. I think we hit on a number of really interesting challenges and interesting solutions in thinking about how we scale the edge. Maybe to wrap up with one more question, and I’ll ask it to both of you guys, you can both answer: when you think about people, companies, that are looking to deploy things to the edge, for whatever reason, what do you think is the biggest challenge they’re going to face, and what advice would you give them?

Dave: I guess I’ll jump in first. I think the biggest challenge is planning for not just the one proof of concept, you know, what you can do in a lab, but planning from day one how you’re going to get this out to however many stores, however many different kinds of infrastructure platforms you need, as part of your POC. And so we want customers who are looking at our stuff, for example, to experience our fleet management, to go through zero-touch provisioning, to use automation in their POC. Don’t just say, we’re going to go do it manually once and then go back and figure out the automation later. We think that’s really an important thing to think about, to design for, to understand, and, yeah, we’re proud of the tools that we provide to do that, so we want to help people try that experience and simulate a large-scale deployment even in their POC. That’s probably my recommendation: just make it as real-world as possible. Doing one local POC is great, but you definitely need to go the next step and think about the fleet-level deployment.

Stephen: Yeah, and that really goes, I think, to the topic of this whole discussion, which is that scale really changes everything. It’s one of those things where just because it works here doesn’t mean it’s going to work everywhere, and doesn’t mean it’s going to work forever, and so you really have to make sure that you’re going to build something that will actually work hands-off, that it’s going to be automated, you know. We talked about that as well in previous discussions. It just seems like that’s the big thing, the big mistake people are going to make: they’re going to build something that works on my desk and say, okay, this is what we’re going to go for, and then not be able to support it long-term out there. Brian, I’m going to throw it at you too. Tell us what your experience is.

Brian: Yeah, I think this is really good. I think thinking about how you’re going to scale your solution from day one is probably critical to success. I mean, it’s all kinds of things: how are you going to get it to the sites? How are you going to manage it? Who’s going to do the things, if there are any human tasks? You probably shouldn’t do that, you should probably lean on automation. But really think about how this is all going to work when all of it is in place, not just how do I get it to work in one copy. Some of the biggest challenges you’re going to find are in managing the constraints of your environment and then multiplying that by X, whatever your number of copies is. So I think designing and starting with a really good foundation is obviously going to set companies that are thinking about doing this up for success. That would be my recommendation: think about scaling now, even before you’re sure that you’re ever going to have to.

Stephen: I think we said the word scale more times than average.

Dave: Did we pay for that?

Stephen: No, this isn’t sponsored, man. Listeners, send us a dollar for every time we said scale, that’d be great. Thank you so much, it’s been a great conversation, and again, scale is the differentiating factor here when it comes to the edge, at least for this conversation. Dave, where can we connect with you, where can we continue this conversation with you, and do you have anything going on that you want to pitch?

Dave: Sure, so definitely check out ScaleComputing.com, and personally I’m @DavidDemlow on Twitter. Feel free to reach out, and you can also find me on LinkedIn. But yeah, we’re definitely moving forward with edge development. We recently announced our Fleet Manager product, and we definitely want everybody to take a look at that and become a development partner. We work with a lot of different companies, a lot of different use cases, so, you know, we actually understand that there are differences in environments. Come work with us, and if there’s something that’s a little bit different, or a different hardware platform that you need, we can design an edge solution that is built to scale with your environment, your applications, and your needs.

Stephen: Great and Brian how about you? What’s new?

Brian: Yeah, I continue to write about the edge, among other things, at my Substack. It’s called the Chamber of Tech Secrets, fun name, at BrianChambers.substack.com, and you can also just follow me and see what’s going on on Twitter as well, @BRICHAMB, where I’m usually getting into some interesting tech discussions. We’ve talked a lot about the role of the architect lately, which has been a fun one as well, so go check that stuff out. I appreciate it.

Stephen: Excellent, yeah, and I’ve got to say, the Substack is a lot of fun because it’s not just sort of, here, I’m writing about this, I’m writing about that. You kind of have a message in each of these posts, which I really enjoy, and you’ve got kind of a thing going on there, so it’s working, it’s working for me.

Brian: Thanks so much.

Stephen: Yeah, and thank you, Brian, thank you, Dave. As for me, of course, you can find me here every Monday on Utilizing Tech. You can also find me every Tuesday, or most Tuesdays, on the On-Premise IT podcast, which is available on a pod catcher near you, as well as every Wednesday on the Gestalt IT Rundown, which is our news show. Of course you can find me on the socials at SFoskett on Twitter and Mastodon and more. Thank you so much for listening to Utilizing Edge, part of the Utilizing Tech podcast series. If you enjoyed this discussion, please do subscribe, and we would love to hear from you as well, so give us a rating, give us a review wherever you can. This podcast is brought to you by GestaltIT.com, your home for IT coverage from across the enterprise. For show notes and more episodes, head to our special dedicated site, utilizingtech.com, or find us on the social media sites Twitter and Mastodon at Utilizing Tech. Thanks for listening, and we’ll see you next week.