There is a long-standing gulf between developers and operations, let alone infrastructure, and this is made worse by the scale and limitations of edge computing. This episode of Utilizing Edge features Carl Moberg of Avassa discussing the application-first mindset of developers with Brian Chambers and Stephen Foskett. As we’ve been discussing, it’s critical to standardize infrastructure to make it supportable at the edge, yet we also must build platforms that are attractive to application owners.
Hosts and Guest:
Stephen Foskett: Welcome to Utilizing Tech, the podcast about emerging technology from Gestalt IT. This season of Utilizing Tech focuses on edge computing, which demands a new approach to compute, storage, networking, and more. I’m your host, Stephen Foskett, organizer of Tech Field Day and publisher of Gestalt IT. Joining me today as my co-host is Brian Chambers.
Brian Chambers: Hello everyone, I’m Brian Chambers. I am the leader of the enterprise architecture practice at Chick-fil-A. I also do a lot of writing about technology, and specifically about edge, on my Substack, Chamber of Tech Secrets, which you can find at BrianChambers.Substack.com.
Stephen: So Brian, you and I have been in IT a little bit here, and I think that both of us have experienced that there’s a big gap between developers and infrastructure. Of course we’re trying to close that; this whole DevOps approach in cloud application development is trying to attack that, but honestly, there’s still a gulf, there’s still a void between developers and infrastructure, let alone operations. Is it worse in edge infrastructure, do you think, than in the data center and cloud?
Brian: That’s a really good question. I think it is in a lot of ways. I think there is a convergence happening with the tooling that we’re used to seeing in the cloud, where infrastructure has become, you know, more API-enabled, which is a construct developers are familiar with. But one of the challenges at the edge that I’ve observed is that sometimes the tools aren’t really built for the type of scale, in terms of the number of footprints, that you may have in an edge environment. Thinking about a couple of regions in hyperscale cloud is very different than 2,000, 5,000, 10,000 different edge sites. So that can break some of the tooling, whether it’s from a user experience perspective, or, another thing I see break a lot is just commercial licensing models, where they haven’t contemplated that type of scale and it’s just a non-starter for organizations. So I think a lot of the tooling is going to end up converging and being very similar, but there’s definitely still some challenges and some friction out there in that world, and I’d say it’s still easier to find things that make sense in the cloud than it is at the edge.
Stephen: Yeah, and this is one of the conversations that we had at Edge Field Day in February, when we were talking about how applications are deployed in the cloud versus at the edge. We were talking about Kubernetes specifically, and containerization, and the person we were talking with about this was Carl Moberg from Avassa. So because of that, we decided to invite Carl on the show today to join the conversation. Welcome to Utilizing Edge, Carl.
Carl Moberg: Hey Stephen, hi Brian, thanks for having me. I’m Carl Moberg. I am the CTO over here at Avassa, and it’s funny that you would invite me just when you’re talking about my favorite topic, so I’m liking the setup here. Lots of thoughts and observations going, so I’m looking forward to this conversation.
Stephen: Yeah, it’s funny how that works out, isn’t it?
Carl: Crazy, crazy.
Stephen: Well, it’s good to have you on here, Carl. Now, I’ve known you for a long time. You’re definitely a lot of fun in your presentations, but one of the things that really caught my attention from Edge Field Day, with almost every presentation, was that it’s not about the technology so much as how you’re using the technology. I guess that’s how it is everywhere, but it certainly seems to be that way with edge. We talked about that last week on Utilizing Edge, and here we are again with kind of the same idea. So first off, I guess, let’s try to kick the conversation off by talking about developers. Who are these people? What do they want? What do they want from me? Who are they?
Carl: Well, I think, you know, in the same vein, we’re trying to count them at the office. We know we’re talking to a developer, or actually some sort of application-centric personality, when they say something along the lines of: please, I just want to run my applications. And the flipside of that is: please don’t provide too leaky of an abstraction in terms of infrastructure. I don’t need to see the entirety of the proverbial sausage-making factory, you know, in the infrastructure itself. Please provide me something that has my application workloads in focus. And they come in different shapes and forms, of course, but that’s something we always look for: if there’s that kind of sentiment, like, I care mostly about my applications, I trust others to run the infrastructure for me, then that’s someone I would probably put in the application bucket. That’s why I think maybe the word developer is at times a bit sketchy, because they don’t necessarily have to be living, you know, part of their working day inside of the code or something, but they definitely have an application-first mindset. So I don’t know whether we want to start inventing terminologies here, but at times I feel like “developers” might be a little distracting, I would rather call them something else. I don’t know, man, we have an opening for a new TLA here: application-centric engineers, ACEs. I don’t know, but again, applications first and foremost, and they are happy when they see less of the naked infrastructure and the sausage making.
Stephen: Yeah, I think that’s really the key, and the idea of application-first or application-centric is important to me, because too often, especially in IT infrastructure communities, application-last is the approach that they bring, essentially. I don’t know who’s going to use this, I don’t know what they’re going to use it for, I have no idea how valuable or important it is. All I know is that these are my SLAs, these are my SLOs for this particular platform that I’ve created, and I’m going to meet those as well as I can. And whatever happens with it, whoever is using it, if anyone, is completely fine with me. And in a way, I kind of hear that, and I’m not trying to criticize you, Brian, but I kind of hear that sometimes when we’re talking with people about deploying infrastructure at the edge, because it’s so important to deploy, basically, sort of a blank slate at the edge that you can run different things on, whether it’s networking or compute. And so in a way, you kind of have to not care what’s being run on it. It has to support whatever it needs to support, whatever comes down the line. But at the same time, once that’s running, you need to make sure that it’s really optimized for those users, and you have to make sure that whatever you’re building is attractive, so that the developers and the application owners are going to want to make their home on that infrastructure, right? Because if it’s the wrong thing, then they’re just gonna buy something else and deploy it out there, and then it breaks the whole goal of standardization, right? So having this sort of application-first mindset, I think that’s important, but I don’t understand how we’re gonna get there.
Carl: So the way I tend to think about it, and I think this has been an interesting exercise over the last couple of years as, you know, the edge market kind of comes to happen: an interesting task when I see new solutions or new software is to try and figure out, what is the abstraction here? If I look at the world through this product, what do I see, right? I don’t want to make this too fluffy or too hippie-like, but of course I’d like to see programs or platforms that have the application as the central object, right, that have that as the anchoring point in terms of abstractions, and then have operations on applications. I think there’s a lot of systems out there for edge computing, for example, that have the infrastructure as the first-class object at the center of the abstraction. That’s all good and well for the platform and IT teams, but I think what we should be looking for are systems that have applications as the central abstraction, because what we’ll find is that if you have that, it will harmonize, and it will integrate really easily with the rest of the application team’s tooling, because let me tell you, they have the application as the central object: the central thing to deploy, the central thing to update, the central thing to monitor, the central thing to drill into in terms of observability. So as vague as it sounds, I think looking at abstractions, and really trying to make abstractions about applications and build supporting infrastructure for that, is kind of key to making this something easily digestible or easily consumable by the application side of things. I don’t know, Brian, maybe you can make it a little more tangible. That was a little abstract, maybe.
Brian: Perhaps, let me ask you a question. So when you think about that person you have in mind, the software engineer, the developer, or just the application-centric person that wants to get something done, and they’re going to use the edge because they have a business reason for it, right? It’s not because it’s the easiest place or the most fun place, but there’s some reason: they want the latency, or the tolerance for disconnect, or whatever the case may be. Given that, what are some of the things we think about in being friendly to those people? What do you think is important? Like, what are some of those components of a solution that you’re thinking about that would make that experience actually work, that would be developer-centric or developer-friendly in nature?
Carl: Yeah, I think a really simple kind of litmus test is this: go into whatever UI or command line interface, or maybe a REST API, and look at the data structures. Hopefully, what you’re seeing in the systems that are built to be enjoyed by application teams is that they start, again, with the application. So you can ask the system: show me what applications are actually running on this system right now, and for each of these applications, show me where they are running. Now, this is in stark contrast to saying: show me all my 2,000 locations, and for each location, show me each application that’s running. See what I mean? So you start with the application. Here’s an application, where is it running? Not: here’s a million locations, and I need to go into each and every one of them to keep track of how many applications are running. So that’s kind of the basics, right? And after that comes: okay, so what is the health status of each of these replicas? Show me that rolling up into an application view, because I want to understand not the fact that, you know, 10 out of my 2,000 locations are out of pocket, but which applications that actually impacts. And of course I can go on, right? How are they performing? Can I drill into a subset of them? So instead of going the route through, here is my infrastructure, can I see which applications are running in each, have the application as the central object, because that will fit nicely into your release orchestration, it will fit nicely into your monitoring, it will fit nicely into the “oh my God, I actually have to upgrade” part of this. And it will fit very nicely into when someone calls the application team and says, we’re gonna take these three locations offline tomorrow, what’s gonna be the impact of that? Well, that’s easy, I can see here that these are the two applications that I need to do something about.
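Carl’s litmus test can be made concrete with a small sketch. This is purely illustrative (the names, data, and functions are invented for the example, not any vendor’s actual API): the same deployment state answers both “where is this app running?” and “which apps does this outage hit?” when the application is the anchoring object.

```python
# Illustrative application-first inventory: the application is the key,
# sites are just an attribute of where its replicas happen to run.

# Deployment state: application -> set of sites where replicas run
deployments = {
    "pos-terminal": {"store-001", "store-002", "store-003"},
    "menu-board":   {"store-001", "store-003"},
}

def where_is_running(app):
    """Application-first query: start from the app, list its sites."""
    return sorted(deployments.get(app, set()))

def impact_of_outage(down_sites):
    """Which applications are affected if these sites go offline?"""
    return sorted(
        app for app, sites in deployments.items()
        if sites & set(down_sites)
    )

print(where_is_running("menu-board"))   # -> ['store-001', 'store-003']
print(impact_of_outage({"store-002"}))  # -> ['pos-terminal']
```

Note the contrast: the site-first version of `impact_of_outage` would require walking every location and inspecting its workloads, which is exactly the inversion Carl is arguing against.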
So again, I’m sorry that I’m repeating myself, but I think this is a very, very fundamental issue here: start with the application as the managed object, as the lifecycle object.
Brian: Which is a bit of a change from what we sometimes see happen, which is very infrastructure-centric solutions with a thin layer of applications on top. I think what I hear you saying is: start with the basic construct of the application, and then figure out how you understand the infrastructure it is living on, or maybe only to a minimal degree, like maybe you don’t need much understanding of the infrastructure. Let’s be app-first, with infrastructure supporting it, not infrastructure-first with apps that happen to be running on it. Which I think is a really important distinction, so I think that’s super cool.
Carl: And I mean, that is also the entry point to such an interesting conversation, which I think we all should be having, and we’re kind of having it piecemeal: what are some of the useful aspects of the infrastructure that could actually enrich the application person’s worldview, you know? What is it that we should leak from the infrastructure to the application teams? Things like, you know, the canonical example these days: is there a GPU in this site? Because some of these applications simply won’t run without a GPU. Is there a camera attached? You know, is this labeled to be, I don’t know, a Chick-fil-A restaurant of size gigantic, and it’s getting so big that it needs an additional-size cluster or something? So, kind of what you were saying there, that’s when it starts to get interesting. What parts of the infrastructure configuration, or behavior, or whatever you wanna call it, actually make sense to the application, so we can make informed placement decisions on it, right? And that’s where they meet, and, you know, again, I just find it fascinating. What is it that application teams are interested in knowing about the infrastructure? I’ll take the first jab here: they’re probably not very interested in what type of service mesh is running in each site, but they may be interested in the kind of hardware equipment that’s attached to the hosts that these applications are running on.
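The placement decision Carl describes is essentially label matching: sites advertise capabilities (GPU present, camera attached, site size), applications declare requirements, and the scheduler intersects the two. Here is a minimal sketch under assumed names and data; real systems (Kubernetes nodeSelector, for instance) follow the same shape with richer expressions.

```python
# Illustrative label-based placement: sites carry capability labels,
# applications declare the labels they require, and only matching
# sites are candidates for deployment.

sites = [
    {"name": "store-001", "labels": {"gpu": "true",  "camera": "true"}},
    {"name": "store-002", "labels": {"gpu": "false", "camera": "true"}},
    {"name": "store-003", "labels": {"gpu": "true",  "camera": "false"}},
]

def eligible_sites(required_labels, sites):
    """Return names of sites whose labels satisfy every requirement."""
    return [
        s["name"] for s in sites
        if all(s["labels"].get(k) == v for k, v in required_labels.items())
    ]

# An inference app that needs both a GPU and an attached camera:
print(eligible_sites({"gpu": "true", "camera": "true"}, sites))
# -> ['store-001']
```

The design point is that the application never names hosts or sites directly; it only states what it needs, which keeps the abstraction application-centric while still leaking the useful bits of infrastructure.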
Stephen: Yeah, that’s a good point, because to me, I still feel like there’s this fundamental push and pull between standardization and being application-centric and application-focused. In other words, to play the devil’s advocate here: I am developing a standard stack to deploy across my 2,000 retail stores. I am not going to have bespoke, you know, custom offerings for each individual application. That’s what was wrong at the edge in decades past. Instead, what I’m going to have is a standard environment that either runs virtual machines or containers, and that runs them in a standard way. And as you say, Carl, there are certainly options that can be given, so for example, as you said, GPU yes/no, right? Maybe even, hopefully not to the extent that Amazon does it in the cloud, but some sort of offerings in terms of, I want a big one with a GPU, or I want a little one that doesn’t need much memory, or something like that. I imagine that there could be things like that. But at the end of the day, is it really possible to create application-specific infrastructure, and yet have it be supportable across 1,000 locations with no IT admin? I’m going to say no, right? It has to be standardized. So I know there’s this sort of push and pull here, but can we overcome that by having some sort of a standard language that we use to describe the infrastructure in a way that’s attractive to application owners? Because like you said, we can’t fall back on IT BS and be like, oh, what is the Intel, you know, XYZPDQ processor, and it’s got the PZY, you know, smart flash component. No, no, no. But, you know, there’s got to be some way that we’re describing what the application owner needs in a way that fits with a standard approach to deploying that.
Carl: Oh yes, and it’s not for lack of the IT industry trying, to be very honest. I mean, if you stick to Linux land for now, there are things like SMBIOS and DMI, and there are actually server-level things like Redfish. There’s something called udev that lives in the Linux kernel, and they’re all kind of from different eras, with different structures, with different original intents. And it’ll be interesting to see whether we as the IT industry as a whole, of course, the royal we, will ever get our ducks aligned or in a row enough that we believe we need a single standard for it. This is actually, it’s rough being a CTO and not having a formed opinion, right? So I’m going to be really burying myself here to say I don’t actually have a very strong bet on where this is going, whether we’re just gonna build stuff on top of the existing stuff in a very pragmatic fashion. That’s certainly how we do it: we happen to love data models, so we wrapped a data model around, you know, the subsystems that I just told you about, SMBIOS, DMI, and udev. The good thing about that is that it allows you to do things like label matching on it. There’s a slew of really challenging things that are overlapping, you know, the semantics are a little tough to understand. I’m sure a lot of application-centric people will be underwhelmed, or not impressed at all, by the lack of structure and such. But maybe I should try to form an opinion about this, actually. It feels like a weakness, but I don’t have one. Maybe, Brian, do you have any ideas of how we should… should we structure that kind of data? Is that worth doing at all?
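To make the SMBIOS/DMI point tangible: on most Linux machines the firmware-reported hardware identity is exposed as plain files under `/sys/class/dmi/id/`, and wrapping it in a structure that label-matching logic can consume is a few lines. This is a minimal, hedged sketch (the field list is a small subset, and availability varies by platform), not Avassa’s actual data model.

```python
# Read a few SMBIOS/DMI fields from Linux sysfs into a dict that
# placement or labeling logic could consume. Fields that a platform
# does not expose are simply skipped.

from pathlib import Path

DMI_FIELDS = ("sys_vendor", "product_name", "board_name")

def read_dmi(base="/sys/class/dmi/id"):
    """Return available DMI identity fields as a {field: value} dict."""
    info = {}
    for field in DMI_FIELDS:
        try:
            info[field] = (Path(base) / field).read_text().strip()
        except OSError:
            pass  # field not exposed here; skip rather than fail
    return info

print(read_dmi())  # e.g. {'sys_vendor': 'LENOVO', ...} on a Linux host
```

This also illustrates Carl’s complaint: the values are free-form vendor strings with no common schema, so anything structured (a data model, a taxonomy of labels) has to be layered on top.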
Brian: Yeah, I don’t know that I have a strong opinion. I mean, keeping kind of in the same theme of the topic we are talking about, with friendliness to developers, and kind of trying to use cloud as a parallel in my mind: there are times where some of those details matter, when those things matter to a specific application, right? Like, sometimes you might care what sort of host you’re running on because, let’s say, you’re training the Databricks Dolly LLM or something like that, and you need, like, a really big instance, and you care. But in a lot of cases, people are simply not that concerned about that, and what they do want to know is: is there some reasonable abstraction I can use that lets me focus on building the app and understanding if it’s working well, and if it’s safe and secure? But then I want to, like, do the handoff via API. I can do that in the cloud, but can I do it via API at the edge? Can I do it via a construct like, you know, Kubernetes, or HashiCorp Nomad, or something, you know, different container runtimes, whatever the case may be? I think that’s where the energy is going to go, and there’s going to have to be people who care about the infrastructure, right? We’ve pretended with cloud that nobody cares about infrastructure, that serverless is a thing and that we just run our stuff, but somebody cares, because they have to run the thing behind the scenes, so it still matters. I just think it’s going to be a decreasing amount that it matters, that it surfaces up to the level of someone who’s building an app, with GPUs probably being one of the exceptions for a while, and that being the thing that came up in the LLM example I just mentioned as well. So that’s kind of my thought. How does that resonate with you?
Carl: It resonates, and that’s the danger, because it gets kind of long, all over the place. There’s a position to take where you say, look, we’re going to be prudent with the infrastructure. We’re not going to need or require that kind of detailed mapping, right, because we actually know what kind of equipment we have. So maybe there is, as usual, a boring middle ground, Stephen, that makes no one upset in the least, but everybody kind of mildly happy, where, like you said, there are a couple of things that we could probably use labeling for: the presence of some external device and the type of it, the presence of some superpowers, you know, on the board level, including things like a GPU, and then the rest we can probably leave to humans. But you guys know that there are only three things, or is it two things, that are tough in computer science, right? Actually two, isn’t it? It’s naming, that’s what we’re talking about now, and the second thing that everybody forgets about is cache invalidation, but no one really cares because it’s not a very cool subject. But naming is tough. I mean, something I’ve seen already, and I’m saying already because, again, you guys are the forerunners here to some extent, Brian: I’ve seen a couple of other instances of ambitious edge container environments, and one of the things that people lose control over really fast, funny enough, is namespaces and label spaces, because it is really, really hard to be the librarian of a useful set of taxonomy, you know, across a sprawling infrastructure, that’s also meaningful. But maybe that’s what we do first: we allow humans to do it, and we’re careful with the automatic or more infrastructure-derived matching patterns, and that’s probably fine for now.
Stephen: It’s interesting, because I feel like in a way this whole conversation dances around the fundamental limitations of the edge, and I keep kind of throwing this ball in there: it’s, you know, all well and good to have custom environments, but we just can’t do that in a supportable way, in a maintainable way. The fundamental aspect of edge to me is the limitations, more than anything, and I think that’s one of the aspects that gets in here, whether it is in terms of customization and supportability, but also just in terms of, you know, what kind of hardware can we deploy out here. Because, yeah, as you said, if we wanna say yes, we can have an application that needs a tensor processor in order to do some kind of AI app, like an inferencing app, then that means you have to roll that out everywhere, and it has to be universally available everywhere, and that’s a much, much bigger question than it is to say, I have one application running in the cloud and I’ll run it on an Amazon GPU-enabled instance and I’m good. And it’s the same with all these things. One of the things I think we’ve heard all throughout the discussion here at Utilizing Edge, but also at Edge Field Day and in other places, is this whole, I don’t know, sword of Damocles above your head of, yes, but you’ve got to make sure that it’s not using too much memory, too much storage, too much CPU, too much special hardware. It’s all about the limitations, and in a way, I think that the descriptions we’re kind of getting around here, about how to describe the application’s needs, also work both ways, in order to say: look, application owner, we’ve got to make sure that it fits within this envelope. So what is that envelope, and how do we describe that, and how do we work with application owners in order to create an application-friendly environment within the context of these limitations? Another tiny, tiny topic here.
Carl: I think maybe one angle to take this on is that, and I’ve written a little bit about this actually on our website at Avassa.IO, something we call the second application problem. I’ve had a number of conversations with people planning for, let’s call it, their first kind of modern endeavor into edge. Maybe they’ve had edge computing before, meaning computers in many locations, but now they’re thinking about the design of their first, let’s say, general application infrastructure. You know, it’s us, so it’s generally container applications. And to my absolute horror, they size this after the first application. They don’t think very much further down the line in terms of spend than their first application, which is frustrating, let’s just say it, it’s just frustrating. Because, you know, and I’m sure Brian can even speak to this, there’s probably nothing more time-consuming, and, you know, generally resource-consuming, than rolling out new hardware infrastructure to hundreds, if not thousands in your case, of locations, right? So you would think that they would pony up for some future-proof infrastructure that would cover the first, let’s say, 20 applications. But I’ve seen a horrendous amount of conversations where it’s like, you know, we’re gonna go for the smallest possible thing. I can feel myself getting closer to jumping up and down on Raspberry Pis here, I’m not gonna do that. We’re just gonna build the smallest possible infrastructure for the first application, maybe the first one and a half applications. And they’re doing themselves such a disservice when they do that. It is crazy, and the way it usually ends up, and I have maybe half a handful of examples, is that this is the way some of these application people get to know what the out-of-memory killer is in Linux.
I don’t know if you guys have heard of that, but Linux has a very interesting behavior when you start to run out of memory, and it’s not something you want to get operationally knowledgeable about. So I think the first thing we should talk about is: please size your edge infrastructure for at least two to three generations of applications, for at least two or three handfuls of containers. If you don’t do that, there’s just no way an application team, or application teams in general, will be particularly happy for particularly long. And I keep coming back to, I feel like I’m talking about Brian’s expertise here, so I’m hoping he can help. Brian, how did you think about this when you sized your install base, and did you have hard-and-fast thoughts about what size of infrastructure you need for application teams to be able to live on it?
Carl: I mean, of course. Here’s what’s been spinning in my mind right now. I don’t want to be too much of a ping-pong player, as much as I love table tennis, that is, but would you say, and this is going to lead back to my observations here, would you say that over-investing was actually part of your list of what not to do? You mention, I guess, two things. I’m sure you were thinking about financial over-investment, and you also mentioned physical space, and maybe other things. Were both of those important, or was financial more important than floor space, or what was the concern here? Or, maybe this is a great hard-hitting question for a program like this: was there a future where that kind of project would have sunk, and you would have declared it failed, and you would have to write off the expense? Was that actually in some sort of planning horizon here?
Brian: Yeah, so a couple of questions there. I’ll do my best; correct me if I’m missing any. I’ll answer the last one first. So I think the possibility of a failure at the edge was completely possible in the early days: that this didn’t make sense, that it wasn’t actually possible to operate it effectively at the scale of, you know, 2,500, now 2,800 restaurants, and on and on. So we certainly contemplated that as a possibility, and with that reality in mind, we wanted to make an investment that was big enough, again, that it would enable us to have some room to accommodate more if we succeeded, but it wouldn’t be, like, you know, a complete and total disaster if it failed. We actually made our investment over about a two-and-a-half-year period, or probably a two-year period, when it came to rolling that solution out across our restaurants, and I think that’s something people will need to think about. Do they need to be everywhere on day one, or is there a period of time to learn with a little bit less risk and then figure out what works? And then, when I think about what we did: if we had had 10x the demand we expected, the size of the hardware investment that we chose to make was not so big that we couldn’t have done one of two things: refreshed it quicker than anticipated, like two years instead of three or four years, or gone plus-one. We started with three nodes. We do have physical space, and probably could’ve found a way, with some dancing, to go to four or five if we truly had to. And so to me, when I think about scale, that’s kind of what I’m thinking about: there’s pod scaling in Kubernetes, or container scaling, or Wasm scaling, or whatever else you wanna call it at the application level, but the infrastructure scaling, you know, is not dynamic and on the fly.
It requires procurement and shipping devices out to the edge now, but it is still possible, given you have the physical space for it, the network capacity for it, etc., etc. So that’s what I would say is the way we thought about it, and sort of the way I would answer that question about scale.
Carl: Yeah, no. So thank you, and you said half of the words that I was thinking about saying. I think there are at least two layers; like you said, there’s a physical layer… I mean, I focus mostly, and I’m thinking mostly about, let’s call it the platform-style edge, as opposed to the very heterogeneous, you know, one-piece-of-hardware-per-application kind of vertical, a little more exotic, or maybe some old-school edge. So of course, good practice is to build it, I guess, the way that you mentioned: make it easy to N-plus-one. Make it easy to N-plus-one physically. Make it easy to N-plus-one in terms of the clustering mechanisms that you have, the scheduling mechanisms that you have. And I think also, and I know you guys are running, I guess, a single administrative domain, but we have seen a lot of users who want multi-tenancy on the resource level, and of course, that’s almost a next level of scaling: how do we actually assign more or less? How do we scale out existing tenants? Or add new tenants? Or maybe even contract certain tenants? And I guess the long-term planning, and I would love to hear if you’re doing any of that, is like: how do I track resource consumption so that we can reliably go back to the team and say, guys, in three and a half months there’s a fourth node coming, or a fifth node coming? Do you guys do any of that kind of forward planning based on resource consumption? Is that a thing yet, or is it still a manual enough kind of analysis labor here?
Brian: Yeah, in our world the quick answer would be yes, and we factor in both. Like, are we getting to a point where we’re using all the capacity we have, considering failure conditions too, right? Like, we can’t use a hundred percent of all our nodes, because that means if one of them fails, stuff is no longer running. So we factor that in from the usage perspective, and then we’re thinking about that timing along with our refresh cycles for the hardware that are, you know, already planned, and that’s a good sizing input for us into how much we need to go up in our next iteration. That’s how we’re thinking about it.
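Brian’s sizing rule can be sketched as a back-of-the-envelope calculation: usable capacity is what survives a tolerated number of node failures, and you plan expansion when usage crosses a headroom threshold. The numbers, the reserve-one-node rule, and the 80% threshold below are illustrative assumptions, not Chick-fil-A’s actual figures.

```python
# Illustrative N-1 capacity sizing: never schedule against 100% of the
# cluster, because the workload must still fit if a node fails.

def usable_capacity(nodes, per_node_units, tolerated_failures=1):
    """Capacity you can safely schedule while surviving N node failures."""
    surviving = max(nodes - tolerated_failures, 0)
    return surviving * per_node_units

def needs_expansion(current_usage, nodes, per_node_units, headroom=0.8):
    """Flag when usage crosses a headroom fraction of safe capacity."""
    return current_usage > headroom * usable_capacity(nodes, per_node_units)

# Three nodes of 16 "units" each, tolerating one failure -> 32 usable.
print(usable_capacity(3, 16))      # -> 32
print(needs_expansion(28, 3, 16))  # -> True: time to plan node four
```

Tracking `current_usage` over time against this threshold is exactly the forward planning Carl asks about: it turns "add a node someday" into "a fourth node is coming in three and a half months."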
Stephen: And I’m cognizant, too, that one of the things we’ve heard from other people in the space is that there are really some hard limits here, especially in terms of things that, I guess, data center people and cloud people are not used to dealing with, for example, the number of switch ports. Adding one more node sounds easy until you realize that it means the switch isn’t big enough for one more node, and then I need a completely different switch at every site, which means we need to replace that everywhere, and there are these weird cascades like that. And maybe it’s possible to add a fourth node but not a fifth node, or, you know, something like that. So there are all sorts of strange constraints and limitations at the edge that people aren’t used to.
Brian: I just want to say real fast, that may sound silly to some people, but that is our exact reality in our world: we have limited port capacity on the switches available to us, and we have to factor that in as another constraint to consider. So that is one hundred percent reality when you’ve done this in the real world. You do have weird constraints that you just do not think about in the cloud anymore.
Stephen: Absolutely. And of course, obviously there are power and space requirements, but generally you can find another electrical outlet or stick another thing in there. But in many cases you can’t get much bigger, and also some solutions don’t lay out nicely and don’t scale nicely, and that’s another constraint. So I think honestly, for a lot of people, the answer to scaling is going to be deploying a bigger one next time. We actually are going a little bit long here, but I hate to cut this off because we’re really warming up on this topic. That being said, given all of this, what’s the answer? How do we build a developer-friendly and application-centric infrastructure when we’re subject to all of these constraints and all of these requirements, while making it supportable and making something that can be rolled out everywhere? And Carl, that’s kind of on you, you’re the guest.
Carl: Yeah, sure. Well, what we tried to do was to really look at a distributed infrastructure with an application-centric worldview, and I know this is one of your favorite topics: we started from the API down, right? So we looked at what the abstraction should look like, what the API should look like, and then we picked the components all the way down to the constituent nodes. And one of the many things we realized was that there are a number of things we could just easily reuse, things like the structure for describing what an application looks like. There’s an insane amount around that in terms of Swarm and Kubernetes charts and all kinds of things you can look at. Really, things that would have been painful and stupid to reinvent, like the fact that applications have a name and a version, or that there’s something called a service, which is the set of containers that need to be scheduled on the same node. All these things are already there, no need to reinvent them. So okay, that holds true, let’s see if we can reuse that here. And I like poking at my pet peeves here: this, to me, for example, takes out the whole idea of an edge-specific marketplace. I don’t like that. The marketplace for containerized applications, in every application-centric person’s worldview, is called a registry and a repository. There’s no need to create another app store or marketplace for that. Actually, that’s probably a disservice, so build on what’s already there and what kind of works. The second thing, and that’s where it starts to get interesting, is that if we are to allow application-centric people some sort of self-service experience, then we have to give them the reins in some way, shape, or form to describe to a system under which circumstances, or where, this application should run.
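The application structure Carl describes, a name, a version, and services made up of containers that are scheduled onto the same node, can be illustrated with a minimal, hypothetical descriptor. The field names and image names here are invented for illustration; real systems like Docker Compose or Kubernetes each have their own schema:

```python
# A hypothetical application descriptor: name, version, and services,
# where each service is a set of containers co-scheduled on one node.
app = {
    "name": "menu-board",
    "version": "2.1.0",
    "services": [
        {
            "name": "frontend",
            "containers": [
                {"name": "web", "image": "registry.example.com/menu-board/web:2.1.0"},
                {"name": "cache", "image": "redis:7"},
            ],
        },
    ],
}

# Images are pulled from an ordinary container registry -- no edge-specific
# "app store" needed, which is Carl's point about reusing what already works.
for service in app["services"]:
    for container in service["containers"]:
        print(container["image"])
```

The point of the sketch is how little is new here: name, version, service, and container image are the same vocabulary developers already use everywhere else.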
Of course, at the fundamental heart of it, the scope of where applications can run needs to be managed by the platform or IT team, but within that scope the application teams have control, I think. That’s the second part: allow them to define the applications, and allow them to describe to a system under which circumstances, or where, do I need to run this. And that’s where we touch on what we touched on before, things like only if there’s a GPU, or only in Sweden because we love Sweden so much, or only in Chick-fil-A restaurants that are sized for more than 100 chairs, or something like that, but as a configured aspect of it. That one is new, and I am so very interested to see where that kind of description will end up. Is that something you should put in your GitOps? Is that something you should manage in a separate application orchestration manifest? Is it something you should version? I don’t know yet, but I think those two abstractions, what is an application and where should it run, should be at the heart of the conversation, and I truly think that’s a great starting point. And again, that’s just for the lifecycle we’ve been talking about; observability and manageability may be a different thing, because it seems to be even more undecided whether we should eventually look to the application teams to actually monitor the lifecycle or the health, and be almost on call for that, or whether that is better left to the IT operations teams. But again, letting them describe an application and where to run it, and making those beautiful, easy-to-use, easy-to-understand, opinionated abstractions, I think that could be a game changer for this industry.
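Carl's "where should this run" abstraction is often expressed as label-based placement constraints evaluated against site metadata. A minimal sketch, with site names, labels, and the matching function all invented for illustration:

```python
def matching_sites(sites, constraints):
    """Return the names of sites whose labels satisfy every placement
    constraint. Constraints are simple equality predicates on labels,
    e.g. {"gpu": True, "country": "SE"}."""
    return [
        site["name"]
        for site in sites
        if all(site["labels"].get(key) == value for key, value in constraints.items())
    ]

# A hypothetical fleet of edge sites, labeled by the platform team:
sites = [
    {"name": "store-001", "labels": {"gpu": True, "country": "SE", "seats": 120}},
    {"name": "store-002", "labels": {"gpu": False, "country": "SE", "seats": 80}},
    {"name": "store-003", "labels": {"gpu": True, "country": "US", "seats": 150}},
]

# "Only run where there's a GPU and the site is in Sweden":
print(matching_sites(sites, {"gpu": True, "country": "SE"}))  # ['store-001']
```

The division of labor Carl describes falls out naturally: the platform team owns the labels (the scope), while the application team owns the constraints, whether those live in GitOps or a separate manifest.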
Stephen: Yeah, I have to agree, because in order to make this stuff supportable, it has to be abstracted, it has to be well described, and it has to be standardized, yet customized enough to be attractive to developers, and I think that’s really what it comes down to. So unfortunately we went a little long, but I’ve got to cut it off. Carl, I really appreciate you joining us here today. If you’re interested in this topic and want to continue it, I recommend checking out the Avassa presentations from Edge Field Day. Just Google or Bing or use your favorite search engine to find Avassa and Edge Field Day, and you’ll hear a lot more discussion about this. But Carl, where can we connect with you? What are you doing lately? Where can people continue this conversation?
Carl: Well, if you’re on the event circuit, you can find me at the Edge Computing Expo in Santa Clara, in the coming week I believe it is. I’m going to be all caffeined up in the booth, ready to demo. We’re going to have some exciting hardware with us as well, to show the power of edge computing in many ways, shapes, and forms. So that’s a good one. Otherwise, I’m always writing down my thinking at Avassa.IO under resources; we have a little bit of a blog there. And I’m on Twitter at CMoberg, and you can always find me alongside yourself, Stephen, on Mastodon these days.
Stephen: Yeah, right on, go Mastodon. How about you, Brian? What’s going on with you?
Brian: Yeah, this was a great conversation, thanks Carl. You can find me, as I mentioned at the beginning, at Chamber of Tech Secrets on Substack, that’s BrianChambers.substack.com. I’m writing once a week there, and I touch on all kinds of topics that are relevant from an enterprise technology perspective, cloud and edge being the big ones, so I would love to have you follow me there. I’m also on Twitter at BRICHAMB, part of my name, so you can connect with me there, or on LinkedIn if you search Brian Chambers; I’ve cornered that market, so you can find me.
Stephen: Excellent, we’ve got to get you over on the Mastodons, Brian. As for me, you can find me as SFoskett on the social media networks, including, yes, Mastodon. You’ll also find me here with our weekly Gestalt IT podcast, the On-Premise IT podcast, as well as our weekly Gestalt IT News Rundown, which you can find on YouTube or your favorite podcast application. I do want to call out that during this conversation, for those of you listening on audio, I held up this bad boy. I have a great many Intel NUCs, probably not as many as Brian, but I just happened to have one here to wave around, and we mentioned that during the show.
Brian: Let me know when you have 8000.
Stephen: Yeah, I don’t have 8000. I think there would be complaints about that; I think I only have 30, but there you go. Well, thanks everybody for joining us, and thank you as well for listening. I hope you enjoyed this. This is Utilizing Edge, season five of the Utilizing Tech podcast series. If you enjoyed this discussion, we would love a rating, and we would also love a message from you; you can send us an email at [email protected]. Of course, you can also connect with us on the socials at UtilizingTech on Twitter and Mastodon. This podcast is brought to you by GestaltIT.com, your home for IT coverage from across the enterprise, but we have our own special website for this, so go to utilizingtech.com and you’ll find all the episodes of this season as well as our previous seasons, focusing on CXL and on artificial intelligence. Thanks for listening, thanks for being part of this, and we’ll see you all next week.