Many of us in software development know of, and have used, HashiCorp projects. How do they stay innovative in a rapidly changing ecosystem? I speak with Ray (Consul) and Chang (Nomad) to find out more.

Transcript

Chris Ward 0:03
Welcome to another Chinchilla Squeaks. And this time, I actually have two people from a company I'm quite excited to speak with, and have been wanting to speak with for some time. I have this feeling I might have actually spoken to someone from HashiCorp before, but I'm not even completely sure. It might have been some time ago as well, something Vagrant related. So it was possibly a little while ago. But today I have Chang and Raymond, and I know you both deal with two different products or projects. So let's maybe just start with quick introductions of what project you work on at HashiCorp.

Chang 0:44
Awesome. And thank you, Chris, for having me here. This is Chang. I do product marketing for one of the HashiCorp products, called Nomad.

Chris Ward 0:52
Okay. And Raymond, who is having some video issues, but we can hear him, so go ahead, Raymond.

Ray 0:57
Yeah, thanks, Chris. Good morning, good afternoon, or good evening, everyone. My name is Raymond. I do product marketing for Consul, one of several HashiCorp tools.

Chris Ward 1:08
Okay, and let's maybe just very quickly start with a little overview of HashiCorp as a whole. It's a company that has been around for a little while, which probably isn't actually that long, but in the tech space is a reasonably long period of time. So where did the company start? And what are the aims of the company? I think this feeds nicely into the products we're here to talk about.

Chang 1:36
Sure. So our company does cloud infrastructure automation, providing infrastructure automation tools. The company was founded back in 2012. The goal for our company is really helping organizations, regardless of scale, to move their workloads into a multi-cloud environment in a consistent way, and to do that through all the critical layers in the infrastructure, including Terraform for provisioning, Vault for dynamic secrets and zero trust security, Consul for service networking, and Nomad for orchestration.

Chris Ward 2:16
And so, Consul and Nomad, how old are they? I think my journey with HashiCorp started with Vagrant, which, back in the day, shall we say in the days of LAMP stacks and all those sorts of things, was a groundbreaking tool to enable people to have Linux machines on their local desktop. Vagrant is probably a little less used these days, but I think that's where my journey began, and Consul I've definitely come across more recently as well. So what's the age of those two projects? And I guess, why create them?

Ray 3:01
Yeah, maybe I'll start on that one. So Consul has been around since 2014. It's one of our oldest tools, and one of the most heavily downloaded tools out there as part of the HashiCorp suite. Initially, it was used as more of a KV store for Vault, as well as more of a service discovery tool. As customers make their journey to cloud native environments, they need some way of tracking the locations of different services and being able to determine the health of those services. So that was a key use case for Consul for a number of years. And then over time, we added new capabilities to Consul to make it a full-fledged service mesh. That was about a year and a half ago. So a lot of practitioners and customers use Consul for several major use cases: one, service discovery; two, service mesh; and then three, and this is something we released a couple of months ago, Consul-Terraform-Sync, which is a binary to do networking infrastructure updates. So think of Consul CTS as a sort of Day 2 way to automate your networking devices: you use Terraform, infrastructure as code, to provision your networking devices, and now you can use Consul CTS to automate the day-to-day activities with the various networking devices.
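
For anyone following along, here is a minimal, hypothetical sketch of what registering a service with a local Consul agent can look like, covering the discovery, health check, and mesh (sidecar) pieces Ray mentions. The service name, port, and check endpoint are illustrative assumptions, not anything from the conversation.

```hcl
# Illustrative Consul service registration (HCL loaded by a local Consul agent).
# The service name, port, and health endpoint are hypothetical.
service {
  name = "web"
  port = 8080

  # Health check the agent runs, so only healthy instances are discoverable.
  check {
    http     = "http://localhost:8080/health"
    interval = "10s"
    timeout  = "2s"
  }

  # Opt the service into the service mesh by requesting an Envoy sidecar.
  connect {
    sidecar_service {}
  }
}
```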

Chris Ward 4:30
And Nomad is a new one to me. I read the web page and I'm instantly thinking: how does this compare to Kubernetes? So what is Nomad, and why create a, quote unquote, competitor, or, I don't know, challenger, to the large elephant in the room?

Chang 4:53
Sure, yeah. I'm happy you asked that question. So Nomad is relatively the youngest of the core products that HashiCorp offers. I think it was born in 2015, probably just a little bit after Kubernetes was created. The initial goal for Nomad is really about addressing a different persona. It's about the developer community: helping them to accelerate their velocity, increase their release frequency, and just ease the application delivery workflow for them. An orchestrator is really the tool in the middle to decouple the workflow between developers and operators. Let operators just focus on fleet management and infrastructure capacity, and allow developers to own the application lifecycle by themselves, give them autonomy, and allow them to directly consume the orchestrator in a reliable and responsible way. So that's the initial goal for Nomad. I think there are a lot of differences between Nomad and Kubernetes. I would say there are probably two unique strengths that Nomad has that attract people to adopting or using it. The first is just the operational simplicity. Nomad is a single binary, as are most of the HashiCorp products; it's only around 50 megabytes the last time I checked, and it can be installed either as a server or as a client. So it's a really lightweight layer, and HashiCorp products also have this kind of Unix philosophy, meaning when you design an orchestrator, it should only focus on scheduling and orchestration. Compare that to Kubernetes, where you can do everything with Kubernetes. So that's why customers, especially small or medium-sized teams or organizations, or ones that don't have the budget or a fully staffed, dedicated team to manage their orchestrator platform on a daily basis, choose Nomad as a much simpler alternative to Kubernetes. The second point, or the second strength, for Nomad is that it's not just focused on containers. It also supports legacy applications, like Windows or Java applications, or even a single standalone application binary. It has this...

Chris Ward 7:17
A Vagrant virtual machine?

Chang 7:21
Could be, because it has this flexible task driver, which extends the workloads beyond containers to a lot of legacy applications. It allows a lot of brownfield projects, or organizations that have legacy applications, to have their own place and bring all these modern orchestration benefits and capabilities to their legacy applications without the need to rewrite all of them. So that's a unique advantage that helps organizations have a smoother or faster migration journey.
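
As a quick aside, a hedged sketch of the kind of job file Chang is describing: one Nomad job mixing a containerised task with a plain Java application via different task drivers. The job, group, and task names, the image, and the artifact URL are all hypothetical.

```hcl
# Hypothetical Nomad job mixing task drivers: a Docker container and a legacy
# Java application scheduled side by side. All names here are illustrative.
job "mixed-workloads" {
  datacenters = ["dc1"]

  group "web" {
    task "frontend" {
      driver = "docker" # containerised workload
      config {
        image = "nginx:1.21"
      }
    }
  }

  group "legacy" {
    task "billing" {
      driver = "java" # runs a plain JAR, no container image needed

      # Fetch the JAR into the task's local/ directory before starting it.
      artifact {
        source = "https://example.com/billing.jar"
      }

      config {
        jar_path    = "local/billing.jar"
        jvm_options = ["-Xmx256m"]
      }
    }
  }
}
```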

Chris Ward 8:02
Okay. And because we have two topics at once here, I'm going to try and maintain the same pace with each one. So I'm guessing then that an obvious Consul comparison, if I'm correct, is things like etcd. Is that a common comparison to Consul? Because, oh yeah, Consul is also service discovery, as well as?

Ray 8:25
I think that's right, yeah. So initially, it would be comparable to a number of more service discovery types of tools out there. Over time, as we added new features to make it a viable service mesh, we've often been compared to other, more cloud native service meshes out there, like Istio or Linkerd. Of course, our value proposition is to be able to support, as Chang talked about, other types of runtime platforms and environments; it's not just relegated to cloud native environments, right? So whether you're using it on-prem, or even in bare metal server environments, or across multiple cloud environments. If you think about the journey from on-prem to cloud, it differs for each organization. And so, no matter what type of runtime platform you're using, we provide a way for users to stitch together their applications across these environments.

Chris Ward 9:21
So it's interesting, because these days, I would guess the most common open source tool people use is probably Terraform. I think that's the one I hear the most, and that's a very broad, open, ecosystem-type tool. But in some ways, especially with Nomad, it almost feels like you're creating your own little alternative world over here. And I suppose the first question would be, how will you learn from the mistakes, or the lack of success, of things like Docker Swarm and Apache Mesos? They're still there, but they're not as successful as Kubernetes is. And also, how do you maintain, I guess, interoperability with the wider ecosystem without, you know, shooting yourself in the foot in terms of potential customers and things like that? How are you going to walk that line?

Chang 10:21
That's a great question. So to answer your first question, what we learned from the previous incumbents, like Mesos or Docker Swarm: I think, one, in the orchestration market, this is not a case of one tool fits all. And the majority of customers, and the market, have both matured rapidly in the past one or two years. People have already done a lot of evaluation of what they want, with the experience gained through this orchestration adoption or container adoption journey. I think people realize that Kubernetes cannot, by itself, address all the use cases they have, right? So the lesson we learned is: the reason Nomad exists, or people are organically adopting Nomad, is because Nomad is different from Kubernetes. We address the white spaces, or some underserved areas, that Kubernetes couldn't address. For example, on-prem deployment of vanilla Kubernetes is still a known challenge, whether it's hit or miss, or edge deployments where people prefer, or have to have, a lightweight or small...

Chris Ward 11:28
footprint alternative to Kubernetes to achieve that. So yeah, yeah.

Chang 11:33
Yeah. And also, for example, if organizations want to have a mixed type of applications, and really want one single, unified application and deployment workflow, Nomad supports both existing and new applications. Also, a lot of organizations just want to get their containers up and running; they only focus on their mature production workloads or customer-facing workloads, with less dependency on the ecosystem. They really value the core scheduling and orchestration capability that Nomad provides, with less dependency on other third-party tools, and then Nomad is a perfect fit, right? They don't have to worry about the whole ecosystem; they can just focus on what Nomad is really good at, which is scheduling. And that naturally ties to the second part of your question, the ecosystem, just as Terraform has become the standard at the provisioning layer: how do we deal with the ecosystem? I think the first thing is to acknowledge that we will never be able to catch up with, or keep pace with, Kubernetes. A lot of users and organizations choose Kubernetes because of its thriving ecosystem, right? For example, if they want to do big data, serverless, or AI, they can easily consume third-party open platforms directly on top of Kubernetes, and we will never be able to catch up with this. So our approach to the ecosystem is that we pick the leading vendors in the most critical ecosystem components, like monitoring, like CI/CD, where we work with CircleCI or GitLab. So we choose one or two leading vendors in the critical ecosystem components and provide a prescriptive approach, or path, to the ecosystem, instead of trying to cover the whole ocean, which is impossible for us.

Chris Ward 13:35
A nice analogy with the ocean, because that's the whole Kubernetes and Docker world, isn't it? So, one thing that HashiCorp has usually done quite well, I think, as far as I can tell, is this kind of difference between what you offer for free versus what you offer paid. Consul is definitely open source; Nomad, I can't entirely tell if there's also an open source version or not, but we'll come to that. So what are the two projects offering in the different flavors? Let's start with Consul, I guess, as that's the one I know is definitely open source, and then switch to Nomad. What are the different offerings you have, open source versus commercial?

Ray 14:22
Yeah, I would describe it like this: there's the open source version, which many practitioners enjoy, and we have a huge, robust set of features for service mesh and service discovery for OSS users. Then we actually have two types of commercial offerings. We have what we call the self-managed offering, which can be run in any customer environment. But we also went GA a few months ago with the HashiCorp Cloud Platform, HCP, which is a managed service offering that takes away a lot of the burden for operators in terms of: how do I manage all the infrastructure, how do I stand up the infrastructure, how do I do things like backups and upgrades? All of that is handled by our dedicated SRE team here at HashiCorp, and we provide it as a service. So those are the two commercial offerings. In terms of the enterprise types of capabilities: let's say you're a large customer and you have many different teams. The way that this adoption journey works is you start off with a handful of practitioners, they're free to use our OSS tools, and everything works great. But then you have a bigger team, you may have multiple teams; in fact, you may have a central IT group providing a shared service across many different types of teams. So you need certain features and capabilities around compliance and governance, even around performance, scalability, and reliability, as you start to deploy Consul across many different environments. That's where the enterprise offering provides value: for those environments where you have many teams and certain enterprise requirements around scalability and resiliency of the different applications, things like that, they would use the enterprise offering.

Chris Ward 16:14
Okay, and so with Nomad, does Nomad have an open source offering, or is it just the enterprise offering? Ah, no, I see that it does. Yes.

Chang 16:30
All HashiCorp products have an open source offering; not every product has an enterprise one. It's the other way around. Yeah.

Chris Ward 16:39
Okay. And so I'm guessing that the main benefits there would be, not self-hosted, but you host, you manage, multi-cloud, those kinds of things.

Chang 16:55
So Nomad currently doesn't have a cloud offering. That means it still has to be a customer self-hosted or self-managed platform. Our enterprise offering is focused on two parts. Part one is about when organizations are past Day Zero and Day One, and are really focusing on Day Two, right: scaling, driving more efficiency, and growing their deployment. We offer capabilities to support advanced Day 2 operations, for example dynamic application sizing, which helps organizations optimize their application resource consumption to its most efficient level based on their own strategy. Resource over-provisioning for applications is a hidden cost brought by orchestrators, and I think it only gets worse as more organizations adopt this self-serve approach and allow developers to define resource consumption by themselves; it tends to lead to over-provisioning. That's an example of how Day 2 capabilities drive higher efficiency and reduce costs. Another set is, just like Ray described, when organizations expand their Nomad footprint, supporting multiple teams or more people organized into smaller groups inside, they want more control over compliance and more governance around it. We provide governance and policy types of features to help them have better control, either over access or over resources.

Chris Ward 18:27
Okay, now, I actually received a comment and a question. The comment is agreeing with you: yes, on-premise Kubernetes is a horror. The other is a question, but unfortunately it's mostly a terrible question, so I think we could broaden it. The question was: is there a benefit to using Terraform with an on-prem managed cluster? As far as I've seen, it's only really worth it if you're in AWS or any cloud. Maybe we can abstract that question to Consul and Nomad, and maybe the question could be: with Consul and with Nomad, do you lose or gain features on-premise versus in cloud offerings? And obviously, I realize that on-premise can mean many different things, which is probably one of the bigger problems. But yeah, are there any features that you obviously gain or lose in those sorts of installation options? How about we start with Raymond and Consul?

Ray 19:31
Yeah, so for Consul, the same issue exists whether you're on-prem or in AWS or any cloud, which is: I have some applications I've spun up, maybe across any number of virtual machines, which can be on-prem, and I need to figure out a way to stitch those applications together in a way that doesn't involve new code changes. For developers, you want to be able to make sure there's some sort of abstraction layer that prevents all of this stuff from being written in code in the application. And so whether it's in the cloud or on-prem, it's the same types of networking services and networking features that are necessary to be able to do it reliably, at scale, things like that. So yeah, I would say it's the same benefit from a Consul perspective.

Chris Ward 20:22
And with Nomad, I guess, you're probably more relevant to this question, because you're probably orchestrating these different environments together.

Chang 20:30
Yeah, I think actually one of Nomad's biggest strengths is that it brings ease of use to on-prem, making it as easy to use as in the cloud for a lot of organizations. It also means, for developers, it's almost transparent to them whether they're deploying on-prem or migrating that workload to a cloud environment. For operators, it's a small reconfiguration, and it's almost transparent to developers. That's why we see some customers even leverage that for cloud bursting type of things, or migrate workloads from on-prem to cloud almost in real time, in a no-downtime way.

Chris Ward 21:15
There must be some limitations there, surely? I mean, yeah, as I say, on-prem, and maybe I'm being ridiculous here, but on-prem could be anything from some person's old Java server sitting in a corner to a proper kind of server setup. I don't know, maybe those sorts of people don't come to tools like Nomad, but I don't know.

Chang 21:36
So I think one area right now is, for example, auto scaling, right? Nomad launched the autoscaler last year, in 2020. Right now we support the autoscaler integration with all the major clouds and their auto scaling groups, like AWS ASGs or Google managed instance groups, but we don't have the same kind of out-of-the-box offering for on-prem. I think eventually we'll get there, but right now, for a cloud deployment, it's probably easier to scale your deployment.
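
As an aside for readers, the application-side half of this looks roughly like the sketch below: a hypothetical scaling block inside a Nomad job group, which the separately run Nomad Autoscaler evaluates. The check name, APM source, query, and thresholds are illustrative assumptions; cluster scaling against cloud auto scaling groups uses separate, plugin-specific policies not shown here.

```hcl
# Hypothetical horizontal scaling policy for a Nomad task group. The Nomad
# Autoscaler (a separate agent) reads this and adjusts the group's count.
job "web" {
  datacenters = ["dc1"]

  group "web" {
    count = 2

    scaling {
      enabled = true
      min     = 1
      max     = 10

      policy {
        cooldown = "1m"

        # Aim to keep average CPU around 70% by scaling the group in or out.
        check "avg-cpu" {
          source = "nomad-apm" # assumed APM plugin; Prometheus also works
          query  = "avg_cpu"

          strategy "target-value" {
            target = 70
          }
        }
      }
    }

    task "frontend" {
      driver = "docker"
      config {
        image = "nginx:1.21"
      }
    }
  }
}
```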

Chris Ward 22:10
And so, you've just come out of HashiConf Europe. What were the new features that you both announced? Raymond?

Ray 22:22
Yeah, maybe from a Consul perspective. We introduced, and it's in public beta right now but will be GA very shortly, our latest software release, Consul 1.10. One of the main features is something we call transparent proxy, which allows updates to be made to the service mesh, whenever you add new services to the mesh and things like that, without any added modifications to the application. That's a big deal in terms of preventing unnecessary modifications in code, and allowing the service mesh to handle all the different interactions of services. All of that is transparently intercepted within the Envoy proxy, but again, the core benefit is to prevent all these added modifications from being in the application. So that's one of the big features. The other big feature is streaming. Streaming is something we introduced in 1.9 as a way to improve what we call blocking queries, which are sort of a queue of requests to Consul that come in for the different services that get added to the mesh. The big benefit there is overall resource efficiency: reducing CPU utilization and increasing performance. We also released, about a month or two ago, our scalability benchmark. That was done with 10,000 Consul nodes and 172,000 services, all updating service entries within one second. So it was really a proof point around the scalability and performance of Consul when used in a very large environment. So those are the two big features in 1.10.
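
As a rough illustration of the kind of configuration involved: a mesh-wide Consul config entry that defaults sidecar proxies to transparent mode. Exact field support varies by platform and version, so treat this as a hedged sketch rather than a definitive reference.

```hcl
# Hypothetical proxy-defaults config entry, applied with `consul config write`.
# Defaulting the proxy mode to "transparent" lets Envoy intercept traffic
# without the application code addressing upstreams explicitly.
Kind = "proxy-defaults"
Name = "global"
Mode = "transparent"
```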

Chris Ward 24:10
It's actually interesting, I remember doing those sorts of experiments years ago, it was like 2015 or something, when you would see how far services could scale, and you'd just launch a ridiculous number of nodes and nothing could really cope; everything would come crashing to a halt. There was usually too much network traffic. I can't honestly remember what we used for service discovery. And Nomad 1.1, was that it? What were your main announcements?

Chang 24:45
Oh, Nomad 1.1 was shipped just before HashiConf. So Nomad 1.1 is really about enhancing our core strengths: improving the quality of life for operators, but also enabling greater, more flexible scheduling capabilities. For example, one or two key features, like memory oversubscription, drive higher cluster efficiency and help operators accommodate spiky workloads, where sometimes the workload starts high and then drops significantly as it goes into a steady state. Without memory oversubscription, you really have to provision for the worst-case scenario, and then you leave a lot of idle resources that could have been consumed by other applications. We're now extending this memory oversubscription not only to Docker containers, but also to Java applications and other existing or legacy applications. Another improvement in flexible scheduling capabilities is reserved CPU cores. These are particularly important and critical for latency-sensitive applications, which don't want multiple applications sharing the same CPU; for example, online gaming. Roblox is actually one of the companies that can really benefit from these types of features. So you can pin your application to an isolated, dedicated CPU core to guarantee your application's performance.
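
To make those two Nomad 1.1 features concrete, here is a hedged, hypothetical job fragment showing memory oversubscription (a baseline memory reservation plus a memory_max burst limit) and a reserved CPU core. The names, image, and values are illustrative, and oversubscription also has to be enabled on the cluster itself.

```hcl
# Hypothetical Nomad 1.1 job showing memory oversubscription and a reserved core.
job "api" {
  datacenters = ["dc1"]

  group "api" {
    task "api" {
      driver = "docker"
      config {
        image = "example/api:1.0" # illustrative image
      }

      resources {
        memory     = 256  # MB reserved for scheduling
        memory_max = 1024 # MB the task may burst to when the node has spare memory
        cores      = 1    # pin the task to one dedicated CPU core instead of sharing MHz
      }
    }
  }
}
```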

Chris Ward 26:24
And so what's next for your two projects? But also, what's next for HashiCorp? What's the next piece of the puzzle that the company is looking to fill, if you can mention it? Let's start with Raymond first. What's next?

Ray 26:43
Well, there's been a huge effort around cloud delivery, in the form of the managed service offering. We have that with HCP today for Consul and Vault, and of course we have it with Terraform as well, Terraform Cloud. But with Consul, what we're anticipating is more and more customers moving to a managed service model. Today the offering is specific to a single-region environment, and so one of the big features we're going to come out with shortly, in a month or two, is something that would allow for multi-region support. This is for environments where you need to start stitching together multiple regions to do your service networking, and that's going to be a heavily anticipated feature, especially for larger customers. The other thing we announced, as part of a tech preview, is support for ECS with Fargate. That was something we worked on with AWS; it is in tech preview today, but we will be GAing it in a few months as well. So that's a heavily anticipated capability for those ECS customers.

Chris Ward 27:54
Before we switch to Chang, I actually had one other question come up. The problem with sometimes being semi-live is that people come in at different times, and I don't think we explicitly mentioned this, so it's probably worth mentioning: can you use the service mesh aspect of Consul in collaboration with other service meshes? Or is it best just using it on its own?

Ray 28:21
There is an initiative out there called the SMI specification, for interoperating multiple different meshes. It's not something we see widely used today. So to answer the specific question: today, it's either/or, right? So typically you will use either Consul as a control plane, or Istio.

Chris Ward 28:42
And if anyone is interested, I have seemingly interviewed all of the major service meshes in the past three weeks. You can go back and look on the podcast feed or the YouTube channel and find interviews with Linkerd, Istio, Consul, and someone else in there somewhere. Okay, and Chang, what's coming next for Nomad?

Chang 29:05
Yeah, I think for Nomad, it's really about doubling down on our unique strengths, simplicity and flexibility, and really maturing a lot of the shiny big features we delivered last year, like CSI and CNI. Nomad won't implement the whole spec, but we follow customers' needs, listening to what they want, to make sure we have the best integration with their top choices of storage solutions or networking solutions. And auto scaling, both vertically and horizontally, driving higher efficiency and improving scalability. And your last question, about what's the next piece for HashiCorp overall, I think that's a perfect question, and it also puts Nomad and Consul into context. Nomad and Consul are really infrastructure tools, but they're also eventually consumed by developers. Especially orchestration, which allows developers to deploy their applications in a self-serve manner, and service mesh capabilities, which allow developers to control traffic shaping, observability, and measurement, all these extended functionalities. It's all about how we optimize the developer experience and help to build this new application delivery automation. And recently, we launched a new open source project called Waypoint. Our vision for Waypoint is that eventually it's the interface between developers and operators. Think about it: today, an organization has Nomad deployed, some different team has Kubernetes deployed, and some other team, maybe a different line of business, decides to use serverless functions and Lambda. So eventually there will be a lot of runtime platforms, there will be a lot of cloud endpoints, and they still have legacy, right? So when we talk about application delivery, each runtime or each cloud endpoint has a different build, deploy, and release workflow; it diverges there. They may have a similar code-and-test experience, but it diverges when you're talking about deployment. So Waypoint is really about having a consistent, unified way to describe build, deploy, and release, regardless of whether you're working with Java,

Chris Ward 31:31
Or Terraform as well, I guess.

Chang 31:33
Oh, that's a great analogy. It's like how we bring the Terraform experience to the deployment and application delivery workflow. Yeah, you just speak one language, but you support all the underlying technology.
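
As a final aside for readers, a hedged sketch of what that single language looks like in practice: a hypothetical waypoint.hcl, where the project name, app name, and plugin choices are illustrative only. Swapping the deploy plugin (for example, to Kubernetes) keeps the same build, deploy, and release workflow.

```hcl
# Hypothetical waypoint.hcl: one vocabulary for build, deploy, and release,
# whatever the underlying platform. Names and plugins are illustrative.
project = "storefront"

app "web" {
  build {
    use "docker" {} # build a container image from the app's Dockerfile
  }

  deploy {
    use "nomad" {} # or "kubernetes"; the workflow stays the same
  }
}
```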

Chris Ward 31:45
Cool. I feel like I would love to talk to you more, but unfortunately we're going to have to wrap up, especially as you mentioned one of my favorite topics at the moment, developer experience. I know HashiCorp has always been big into that, and I think you have your own principles and things like that, from memory. So anyone who's interested and hasn't heard of you before, which is probably not too many of the people who would listen to or watch this: it's hashicorp.com, you can find the product pages, all available with open source versions to play with, and if you want to take it from there, then go ahead. But yeah, Chang and Raymond, thank you very much for joining me. I've always wanted to get some people from HashiCorp on, so it's good to finally have you. And yeah, I look forward to seeing what you do and what the company does next. Thanks very much for having us. Thank you. Thank you.