
kubernetes anyone?

2022-05-04 by: Ed King
From: Ed King 
------------------------------------------------------
so while I'm on, ahem, vacation, I'm taking the time to study some stuff
that I've been wanting to study... kubernetes being one of them

Been watching YouTube tutorials, and I'm at the point in the tutorial where
I set up a minikube lab

I was just wondering... is anyone on chugalug actually using this stuff
in PRODUCTION, or is it just hype?

===============================================================
From: Dan Danese
------------------------------------------------------
My company has some products that use kubernetes. Machine learning and AI stuff.

Sent from my iPhone

===============================================================
From: Tia Lobo
------------------------------------------------------
My company has customers using k8s in PRODUCTION. But k8s is about where
"the cloud" was 10+ years ago (both the tech and adoption).

My company, NetApp, makes tools for managing storage for applications
running in k8s. This is the one I work on: https://cloud.netapp.com/astra

Our customers, who I am not at liberty to name but several Fortune 50
companies are on the list, do use k8s in PRODUCTION.

Personally, I am always surprised this stuff works at all. It seems like a
bajillion loosely coupled applications, most of which are not stateful, and
k8s is constantly spinning up new containers to provide HA.

On the other hand, containers provide Just Enough Abstraction(c) to allow
the physical layer to be "whatever". So our tools work on Azure, GCP, AWS
and private. I think this is the first level of abstraction that allows
applications to be written once and run on all the public clouds (and
private). I suspect we'll see more of it.

But I don't see any companies porting all of their applications to this
stack. It's usually greenfield projects that are being spun up on timelines
that just aren't practical for more "traditional" stacks (like VMs or
physical hardware). That is, except for the companies, like mine, that are
doubling-down on this tech.

-Erica
-=--=---=----=----=---=--=-=--=---=----=---=--=-=-
Erica Wolf (she/her)
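P.S. To make "written once and run on all the public clouds" concrete,
here's a minimal sketch - the kubeconfig context names are made up,
substitute your own clusters:

    # same manifest, three clouds - only the kubeconfig context changes
    kubectl config use-context my-aks-cluster   # Azure (hypothetical name)
    kubectl apply -f app.yaml
    kubectl config use-context my-gke-cluster   # GCP (hypothetical name)
    kubectl apply -f app.yaml
    kubectl config use-context my-eks-cluster   # AWS (hypothetical name)
    kubectl apply -f app.yaml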

===============================================================
From: Mark Huguet
------------------------------------------------------
I've been working with Kubernetes for about 4 years now, and as a
Kubernetes/DevSecOps SME for the last 2 years. There are 3 places where it
is a huge value add (and they are worth keeping in mind as you learn).

1) Developers love being able to define the container and send it your way
without having to plan for the system it's being deployed on (once they
understand how containers work).

2) Fault tolerance and self-healing orchestration sound like black magic to
most IT managers. I have demonstrated forced failures without downtime only
to have managers question the validity of the demo (but once they wrap
their heads around it, they love it). There's a sketch at the bottom of
this message.

3) Lack of scalability is the most common failure mode of failed cloud
migration strategies. Per CPU cycle, the cloud is almost always more
expensive than on-prem. Cost savings come from the ability to pay only per
CPU cycle consumed. Kubernetes' built-in scalability makes cost saving in
the cloud seamless (for properly architected workloads).

Now for the bad news: there are very few opportunities for administering
Kubernetes without having to do development work. At bare minimum, most
organizations will want Infrastructure as Code, but realistically someone
needs to teach the developers how to code, build, and package for
Kubernetes. Often that expectation falls on the person with the most
knowledge of the tech, whether they are a dev or not. Therefore coaching
devs has been a significant component of my Kubernetes work.
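Here's a minimal sketch of point 2 that you can run in that minikube lab.
Nothing here is from a real production config; the names and image are
placeholders:

    # a Deployment that keeps 3 replicas alive and restarts unhealthy ones
    kubectl apply -f - <<EOF
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: demo
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: demo
      template:
        metadata:
          labels:
            app: demo
        spec:
          containers:
          - name: web
            image: nginx:1.21      # placeholder image
            livenessProbe:         # kubelet restarts the container when this fails
              httpGet:
                path: /
                port: 80
    EOF

    # force a failure: delete the pods and watch the Deployment replace them
    kubectl delete pod -l app=demo --wait=false
    kubectl get pods -l app=demo -w

The forced-failure demo is just those last two commands: the Deployment
notices the pod count dropped and spins up replacements, usually before the
manager is done asking whether the demo is rigged.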

===============================================================
From: "Mike (meuon) Harrison"
------------------------------------------------------
I've played with K8s and other container systems. My personal opinion is
that at small scale, it's better to write code and build systems that work
well without it.

At large scale: it's awesome and incredible, and you build entire
ecosystems to use it well with serious people that know what they are
doing, from both the DevOps and programming perspective. You'd make a great
one of them people.

===============================================================
From: Lisa Harrison Ridley
------------------------------------------------------
It is not hype, we run our entire client hosting on managed K8s on Google
Cloud. We have roughly 100 or so clients with dev, staging and production
sites, all managed with horizontal scaling on K8s.

Sent from my iPhone
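P.S. For anyone following along in the tutorial: once a Deployment exists,
the horizontal scaling piece can be a one-liner. A sketch - it assumes the
cluster runs a metrics server, and the deployment name is a placeholder:

    # scale the 'demo' Deployment between 2 and 10 replicas, targeting 70% CPU
    kubectl autoscale deployment demo --cpu-percent=70 --min=2 --max=10
    kubectl get hpa demo    # watch current vs. target utilization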

===============================================================
From: flushy@flushy.net
------------------------------------------------------
This is an old thread, but it's been on my mind since I saw it a few weeks
ago. I just didn't have the time nor motivation to write something that
would fully capture my feelings on it.

Disclaimer: I work for Red Hat in tech sales - one of my core products is
about containers and kubernetes.

Post-Disclaimer: I'm snarky, and I don't always spout the market-speak,
sales-speak, nor tech-speak.

TL;DR version: kubernetes, containers, whatever - none of it matters if you
have Supply Chain Discipline, and these technologies make it easier (for
some degrees of easier) to invest in that discipline.

# Long version:

So let me just start with (b/c Mike liked it so much): I'm not a container
cultist. I truly believe in using the right tool for the job. I also
believe in advising my customers on where technology is headed, where the
talent pool is deepest, and where their industry is investing.

## Words matter

Some industries are adopting containers more than others. Now, I'm making a
distinction here. I'm deliberately mentioning containers and not
kubernetes. Why? Well, because in my mind, kubernetes is a tool - it's
actually being used as an umbrella term for a lot of tools, essentially a
suite of tools that "do stuff" with containers. Kubernetes is the Xerox of
container WORKLOADS.

Oooh.. another term. Why? Well, containers are just categories of
workloads. Ok.. so we have kubernetes, containers, and workloads. Why would
those terms matter? Well.. because business.. aka the folks that PAY for
stuff, DON'T CARE about kubernetes or containers - but they care about
workloads. (To be pedantic, they care about revenue, but business functions
manifest as workloads - which result in revenue or whatever else is
$important.)

Anyways, the gap between what the business wants and what's cool tech is
what my day-to-day job normally consists of. We geeks tend to fall into the
trap of "this is cool tech, so we should use it," and then a boss looks at
it and says, "but why?" The geek says "b/c it's cool!" Boss: "no."

## So what's a container? (folks that know this should skip down)

As my colleague Scott McCarty (@fatherlinux on twitter) has said (for OCI
containers):

- it's a fancy file (container file)
- pulled from a fancy file server (container registry)
- to run a fancy process (the container itself)

There's some details missing, but basically, the guts of a runtime
container are:

- a chroot filesystem
- cgroups to restrict visibility of /proc, users, memory, and networks
- SELinux or AppArmor for security
- user namespaces - so you can run stuff as root w/o actually being root

All that is wrapped in an API, so you can run it with a command pointing to
a container file, which is essentially an archive of files with metadata.
That's about it. Seems simple!
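If you want to poke at those guts yourself, here's a rough sketch using
stock util-linux and podman. It hand-waves the cgroup and SELinux pieces,
and the paths are made up:

    # grab a container file's guts as a plain directory (the "fancy file")
    mkdir rootfs
    podman export $(podman create docker.io/library/alpine) | tar -xf - -C rootfs

    # run a shell in fresh namespaces, chrooted into that filesystem -
    # roughly the "fancy process", minus the cgroups and SELinux parts
    sudo unshare --mount --uts --ipc --net --pid --fork \
        chroot rootfs /bin/sh

That's (almost) all a container runtime is doing when it starts a container
- plus the cgroup limits, the security labels, and a much nicer API.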
## So what's kubernetes? (again, skip down if you know)

It's easy to think that containers didn't exist prior to Kubernetes. That's
wrong. Containers existed before Docker, too. Docker and Kubernetes wrapped
the technologies above into APIs. Then folks got together and created a
group to ensure those standards were open (the Open Container Initiative).

Google started kubernetes as a project to run Docker containers (container
files and processes started by docker, which was made by Docker, Inc).
Eventually, as part of OCI, you have multiple groups contributing to
kubernetes so that it can ORCHESTRATE containerized workloads as part of a
specification (the OCI spec).

In layman's terms: kubernetes orchestrates containerized workloads. What
does that even mean? Well, kubernetes essentially provides a framework that
you (generalized you *waves hands*) can follow to apply rules,
requirements, and resources to the containerized workloads that you want to
run. This matters a lot if you want to run A BUNCH of them.

Some of you mainframe folks might say, "hey.. that sounds REALLY familiar."
And it is. It's a job server on steroids - but with an ecosystem that folks
out of college actually want to work on. (/snark)

## So should I use kubernetes?

Yes! And.. also no. What's crucial here is to define what value kubernetes
brings to your business. Remember I said that kubernetes is being used as
an umbrella term to define a suite of tools? Yeah... that suite of tools.
There's a lot of cool partnerships and tech stacks and plugins that
integrate with kubernetes. Some of this is under the hood - stuff you don't
see, but it's there. There are security plugins,
authentication/authorization, scanners, development, testing, CI/CD,
compliance, mesh networking, and custom resources (think building custom
$things that your business can buy or build yourself that alter the way
kubernetes runs and behaves).

I'm simplifying a lot here, which might anger some of the container
cultists, but I think most get the idea. There is a rich and complex
ecosystem that is essentially being labeled "kubernetes." And it's also
bespoke. WHAT? Yes, every public cloud and private cloud vendor has their
own recipe for pieces of this ecosystem. Some stuff you can mix and match..
other stuff.. yeah.. don't mix and match. Move from one vendor solution to
another? Ooops.. gotta use their $widget.

You can do it yourself though! It's all open source. But now we're talking
about Supply Chain Discipline. When Google pieces together their kubernetes
stack, they manage the versions of the APIs, the x509 certs between the
different internal pieces, the availability and robustness of the services,
and maintain their own "supported" matrix of interoperability between the
different projects that make up the umbrella of the "kubernetes platform",
plus the upgrade paths, testing, and rollback procedures of the platform
(and infrastructure). Sounds easy? heh.. just kidding.

Note: all those complexities I mentioned are things my customers have
experienced in various (DIY) stages of their journey.

## So what about this business value?

Ask yourself: what is my business hoping to achieve by adopting kubernetes,
or even just containerizing our workloads? Some examples that might be
important to you (or not):

### faster time to market for features and capabilities

- via faster code delivery - aka dev -> qa -> prod can be really fast. You
  can even automate some of these processes
- easier-to-maintain app deliverables: less human error, more automated
  testing, tools to do "version rollouts"

### less customer disruptions

- via fail-fast testing, monitoring, and automated rollbacks
- via A/B testing and versioning - testing versions w/ some customers and
  not others
- rolling % deployments using blue/green deployments - roll out ver 2.0 w/
  a sliding scale of % at a time w/ defined "these look good" metrics
  (sketch below)
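To hang a little YAML on that last bullet: a rolling % deployment is just a
few lines of Deployment spec. A sketch - the app name, registry, and
numbers are all made up:

    kubectl apply -f - <<EOF
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: storefront              # hypothetical app
    spec:
      replicas: 10
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxSurge: 25%             # roll out ver 2.0 a slice at a time
          maxUnavailable: 0         # never dip below full capacity
      selector:
        matchLabels:
          app: storefront
      template:
        metadata:
          labels:
            app: storefront
        spec:
          containers:
          - name: web
            image: registry.example.com/storefront:1.0
            resources:              # requests/limits let the scheduler pack workloads
              requests:
                cpu: 100m
                memory: 128Mi
              limits:
                memory: 256Mi
    EOF

    # push ver 2.0, watch it roll, and back out if the "these look good"
    # metrics say otherwise
    kubectl set image deployment/storefront web=registry.example.com/storefront:2.0
    kubectl rollout status deployment/storefront
    kubectl rollout undo deployment/storefront   # only if it looks bad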
### better resource utilization - aka SAVE MONEY

- via auto-scaling and auto-shrinking - only deploy workloads on the
  resources you need
- via batch jobs that use idle capacity
- via higher density of workloads compared to physical and virtual:
  containers are lighter than a VM - they can contain a fraction of the OS
  libs, or none at all. You can fine-tune memory and CPU to be very
  specific

### reduce security risks

- immutable containers - even if "hacked", it's easy to remediate
- decoupled workloads mean bad behavior of one endpoint doesn't take down
  another
- etc... (I got bored coming up with examples)

Other example summaries of value:

### wrangle company costs and reduce sprawl

### attract new talent and keep existing folks

### ease migration and consumption of technical debt

(If anyone is interested in some of the examples I didn't fill in above,
message me, and you, too, can have your own personal 7-page monolog on this
stuff.)

## Supply Chain Discipline - why does this even matter?

At its core, kubernetes, the ecosystem around it, the public / private
clouds, and vendors such as myself have built processes and frameworks
around various investments that can be summed up as Supply Chain
Discipline. We're trying to make it easier, so you as a business can do
your business stuff without having to spin important cycles on tech.

- People
- Processes
- Technology

Supply Chain Discipline is the ideology around wrangling the above list. As
a vendor, we can't do anything about your people, but we do have some
strong opinions on processes and the underlying technology. We can't solve
everything, but we tried to build a good starting point.

I was a developer and admin for 20 years. Supply Chain Discipline is a
thing. Always has been. Virtual machines need to be upgraded. Java code
needs to be maintained. In both of those instances, those technologies
promised nirvana: move to any physical host, run that jar anywhere! You're
free! But the promises fell short. We sell an embarrassing amount of
Extended Life Support (ELS) for software that has a ~13 year lifecycle.
I've had customers beg us to extend it, even offering blank checks.

The promise of Java was to run that jar anywhere, but in practice, you
can't. It will /probably/ work, but unless I test it, I can't be for sure
everything will work. I can't bet my job that it will work until I test it.
There are enough differences between jdk5 and 9, and even between 8 and 9,
that there is no "contract" that you can run it everywhere. There is a
non-zero probability that something won't work as expected. Maybe that
something is important to you - or maybe not. Let's roll the dice!

It's even worse with dynamically linked binaries. I can't move a single
dynamic binary from one Linux to another. It's very apparent when things
don't work. Worse than that is static binaries, because it's not easily
apparent when or why one won't work. I can move those binaries from Linux
to Linux and it works - except when it doesn't. Deprecated calls,
incompatible types. With any increased complexity, and the farther you get
from the target system and version, the less guarantee you have it'll run.

A few years ago, I found an old backup that had a static build of a prog I
wrote around 2006 on gentoo (I can't be for sure what version of gentoo it
was - a 2.6 kernel, I know) to test a directory for changes using inotify.
It didn't run on modern fedora. I recompiled it, with a little pain, and it
worked.

Wait... guarantee? Well, for the vast majority you have no guarantee.
You have to test it yourself. You have to have discipline.

Eventually, we're going to see virtual machines have issues as well.. as
virt devices become deprecated, and the newer drivers aren't inside those
VMs, we'll see those VMs fail to start. Those nasty, decades-old VMs that
nobody wants to touch. Gotta keep them on that old, almost-out-of-support
hypervisor. It'll happen.. maybe in 5-10 years.

Yet, some folks keep saying that containers don't have these problems. Just
because folks created a cool storage format with metadata, using chroot,
cgroups, SELinux or apparmor, and user namespaces - all those supply chain
discipline problems went away? The problems are still there. The app
technology is just cool right now. Everyone wants it and uses it. It's
being touched. It's being maintained. Tested. There's a discipline around
it.

Well... for most folks on the container journey. Some just fire and forget.
They don't test after the 1st deployment. Aka: "My dockerfile got built,
and it's deployed now. Why would I ever need to test it again?" Except when
it doesn't work.

I hear a lot of hand waving: it'll just work. Containers can run anywhere,
anytime, and anything. Immutable means it's stable. But without testing,
without supply chain discipline, you can't bet your job on those
statements. Lots of folks seem willing to bet other folks' jobs on those
statements, though. If you want a guarantee, you need the discipline.

This is the core of my message: if you're willing to deeply invest in
supply chain discipline, then you don't need kubernetes or containers, or
any of that.

At a small scale, that discipline is pretty easy.

At a large scale, you might want to consider some of these technologies
(maybe even a partnership with a vendor) to help you with some of the
discipline, so you can concentrate more on what parts matter the most to
your business.

--b

===============================================================
From: Ed King
------------------------------------------------------
Flushy, the next time you are in Chatt, I'd like to buy you a tasty burger.
It will take me a while to read and digest your long response, but I am
impressed that you took the time to write such a detailed response! Wow

===============================================================
From: mike@geeklabs.com
------------------------------------------------------
> don't need kubernetes or containers, or any of that.
>
> At a small scale, that discipline is pretty easy.
>
> At a large scale, you might want to consider some of these technologies
> (maybe even a partnership with a vendor) to help you with some of the
> discipline, so you can concentrate more on what parts matter the most to
> your business.

I think your version of small scale and mine differ.. Mine is very small.
Micro/nano in comparison?

Bluntly: epic post and perspective. You should do that for a living ;)

===============================================================
From: Deidre Holtsclaw
------------------------------------------------------
Wow. We use Kubernetes (and a metric crap-ton of other open source stuff).
Yes, in Production. And I'd say our scale is rather large. Vendor support
is very important to us, including Red Hat and others.

I'd love to expound about some of it, but… can't without going through a
lot of paperwork to get permission.

Awesome post. Thanks!

===============================================================
From: Tia Lobo
------------------------------------------------------
"Vendor support is very important to us"

This is NetApp's entire business plan now. And "keep shilling flash-based
enterprise storage", because Wall Street still judges NetApp as an
Enterprise Storage company (like EMC and HPE).

Every time I look at "kubectl get pods" on one of our k8s deployments (we
have hundreds of test deployments in addition to test deployments in AWS,
GCP and Azure), I am amazed it works! It does work, and it handles scale so
much better than a monolithic stack (like LAMP). And I'm not talking about
scaling from 10,000 to 1M web page hits, I'm talking about "We currently
have 1M simultaneous web users. We want to push out this new app that uses
the same microservices. We expect to add 10M simultaneous users to this
loose confederation of services."

Kubernetes marks the first time in my 35+ year career that a technological
"disruption" has both left me wondering WTF? and surprised me because it
actually works.

What has really impressed me is that NetApp's C-level execs recognized this
change and did a hard pivot in this direction. (I don't think we'd be
selling Enterprise Storage at all any more if it weren't for the pandemic
causing a lot of big companies to need to refresh their storage for VDI
support for remote workers... and ransomware... Having your workforce use
VDI with scheduled snapshots makes ransomware almost a non-issue.)

-=--=---=----=----=---=--=-=--=---=----=---=--=-=-
Erica Wolf (she/her)

===============================================================
From: flushy@flushy.net
------------------------------------------------------
Yeah, and as I peel back the layers, it's not really about support. I don't
want a customer to call me for support, because I want the product to be so
well documented, so well engineered and tested, with a clearly defined
matrix of interoperability and upgrades, clear APIs and ABIs, and a
lifecycle I can depend on - that it never breaks.

And my hope is that if it does break, then it's due to physical issues or
procedures that weren't apparent - and we understand the product so
intimately that we can discover the underlying issue quickly and without
doubt.

So, it's not really about support - that's a secondary goal in my mind. I
don't want to call support. I want it put together so well that I don't
ever need to call support.

That's my wish for my customers, at least.

--b

===============================================================
From: flushy@flushy.net
------------------------------------------------------
Glad you enjoyed it. Like I said, it's been rattling around in my head for
a while.

I have a treatise on cryptocurrency as well.. but that's for another time..

--b

===============================================================
From: Ed King
------------------------------------------------------
my grandma knows more about cryptocurrency than I do. I'd still enjoy
reading your thoughts about it, even if I don't understand it

> I have a treatise on cryptocurrency as well.. but that's for another
> time..