
containers vs virtual machines vs bare metal

2022-09-26 by: flushy@flushy.net
From: flushy@flushy.net
------------------------------------------------------

Been quiet here. Thought I'd get some conversations started.

What are you seeing in your world regarding where workloads are running?

I'll start.

I'm seeing very little bare metal except for some very specific,
mostly edge situations. Still lots of VMs by far, but container
growth is accelerating quickly.

Examples:

Bare metal:

Several railroads I covered have something like 250k edge or "remote
server" deployments of Linux systems. Then there are other companies
that have deployed something like 50k kiosk-type Linux systems for
end user interaction.

That physical footprint isn't growing. It has a 10-20 year lifecycle,
and in a lot of cases it's very expensive to replace (send a guy in a
truck to a shed in the wilderness, two hours away from the nearest
city).

Virtual machines:

Virtual machines are still where I see the bulk of installations -
even for brick and mortar stores, but with small datacenters in the
stores (1-3 servers running VMs). Also seeing a lot of customers
running lots of VMs on-prem and in the cloud. Just regular VMs.

Containers:

I'm seeing a lot of growth in containers. As VMs retire, those workloads
are going to containers - or apps are being replaced with apps built
to run in containers. Not everything, sure, but the growth of
containerized apps is very apparent, as is where my customers are
investing their money.

My observations:

Containers force commonality and standardization. If I build an app and
deploy it 10,000 times from those containers, I know they are all the
same. Some might have different versions, but I can easily see that, and
I can easily replace them. I know they all share the same configuration
and development lifecycle.
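
A minimal sketch of what that looks like in practice - everything
below (hostnames, image names, digests) is made up, but the point is
that each container deployment reduces to one comparable identifier:

    # Sketch only: "fleet" stands in for whatever your orchestrator or
    # registry actually reports (e.g. pod image digests). Hypothetical data.
    from collections import defaultdict

    fleet = {
        "store-0001": "registry.example.com/pos-app@sha256:aaaa",
        "store-0002": "registry.example.com/pos-app@sha256:aaaa",
        "store-0003": "registry.example.com/pos-app@sha256:bbbb",  # stale
    }

    by_image = defaultdict(list)
    for host, image in fleet.items():
        by_image[image].append(host)

    for image, hosts in sorted(by_image.items(), key=lambda kv: -len(kv[1])):
        print(f"{len(hosts):>6} hosts  {image}")
    # Anything not on the majority image is a redeploy, not a debugging session.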

With VMs, it's much harder to know that. If I created a Standard
Operating Environment (SOE) and enforced it from the beginning - then
sure. But that's not a requirement to deploy a VM, so most orgs
didn't. If I deploy that same app across 10k VMs using automation and
an SOE, sure - but if it grew over time, one by one, manually, with
long-lived VMs, then it's really hard to know whether all those VMs
are even configured the same.
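
For contrast, a rough sketch of what answering the same question
looks like on long-lived VMs after the fact (hostnames and the config
path are hypothetical, and it assumes SSH key access is already set
up):

    # Sketch only: made-up inventory and config path. Without an SOE you
    # end up writing something like this after the fact, instead of
    # knowing the answer by design.
    import subprocess

    hosts = ["app-vm-01", "app-vm-02", "app-vm-03"]   # hypothetical inventory
    conf = "/etc/myapp/app.conf"                      # hypothetical config file

    for host in hosts:
        result = subprocess.run(
            ["ssh", host, "sha256sum", conf],
            capture_output=True, text=True, check=False,
        )
        print(f"{host}: {result.stdout.strip() or result.stderr.strip()}")
    # Even if the hashes match, packages, kernels, sysctls, cron jobs, etc.
    # are still unknowns - which is the point.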

Anyways - I think that's probably one driving factor. Agility is 
probably another. I see kubernetes and containers as being a technology 
that is driving behavior and policies around standardization - which 
*could have* been done with other traditional means (VMs, bare metal), 
but in those cases it was purely *optional*.

--b



===============================================================
From: Joe Freeman
------------------------------------------------------

What I'm seeing in the service provider space is a drive towards
moving NetOps towards a CI/CD model, based on containers.

I was at Nokia's SReXperts conference a couple of weeks ago and a
large portion of the breakouts and workshops were around that move.
Not to mention it was a significant part of the keynote speech as
well.

They are pushing docker based containers right now. They've also open
sourced some of their internal code (containerlab.dev and the Robot
framework) over the last few years.

Most of their EMS/NMS applications are moving to containers as well.
Adding a feature module is essentially just adding a container and a
license key to the installation.

===============================================================
From: Dave Brockman
------------------------------------------------------

What I'm seeing with clients who do not have devs on-staff, which is
currently 100% of my clients: bare metal is only deployed by vendors
supplying black boxes, or vendors supplying hardware (think niche
roles that require access to specific hardware interfaces, like a TV
station). In some cases, those black box vendors are now shipping
ESXi and giving management access to the host, but not the VMs.

Otherwise, it's mostly like what you see, a single hardware server
running 2-5 VMs. None of my clients use containers, and the one
application I use that is containerized continues to be a source of
strife and a huge PITA when it comes time to update. That would be
UNMS/UISP/WhatTheFuckEver Ubiquiti calls it this week, to centrally
manage Edgerouters, etc. not part of the Unifi line.

With Gratitude,

Dave Brockman
Senior Network Engineer
Gig City Cloud, LLC

===============================================================
From: Lee Walker
------------------------------------------------------

I see 100% containers. But I use 100% containers, so it could be that
I'm biased.

I develop in Docker, test with Docker, deploy on Kubernetes (with
Docker). I have a couple of bare metal Digital Ocean instances, but
only because I've not got around to deploying Docker on them yet.....

Containers are the future (right now). For Web Dev anyway.
(Kubernetes is OVERKILL for Web Dev. But my clients use it... so.....)

** But use what works for you. **

--
Lee Walker
Principal Engineer
404-405-1194
l.s.walker (Skype)
www.codejourneymen.com
Code Journeymen LLC
1028 Signal Mountain Road Suite #103, Chattanooga TN, 37405

===============================================================
From: Stephen Kraus
------------------------------------------------------

One thing that is clear: everything will continue to have use cases.
Containers are not a panacea. VMs and even bare metal will continue
to have use cases well into the mid century.

The reality is that a lot of this is glitz and glitter around new
toys. I've seen plenty of cases where containers fail to hold up to
what a VM could do, or VMs underutilized where a container will do
better.

===============================================================
From: Michael Harrison
------------------------------------------------------

"Adding a feature module is essentially just adding a container and a
license key to the installation."

Might be a reason vendors (people selling stuff) really like them.

I live in a strange world, bare metal or in a hosted VM (mostly
Linode). I contend that if the dev's code can't be installed on an
OS, without essentially inheriting a copy of the dev systems, maybe
you shouldn't run that in production. I understand some things have
special needs, OS plus things that don't play well with other
installs, but so far, that's just another VM for me.

I've also learned to write code, scripts and such that work across
Linux versions and hardware (Deb 10/11, Ubuntu's, Armbian... on
Intel/AMD64 and ARM), with the exception of some compiled C code that
has to be specific to architecture and OS, and a couple of shell
scripts that find and use specific locations of executables.

The exception being ecosystems where the "containers" (K8s, Docker,
etc..) are actively maintained by people that know what they are
doing. Flushy's convinced me that they exist, ;) but way above my
needs/pay grade. Infosec/DevOps Twitter is loaded with bad examples.
But I am sure there are good ones as well.

So far, I haven't needed containers. Yet. Yet.

===============================================================
From: Tia Lobo
------------------------------------------------------

My company, Netapp, has been making a hard pivot away from selling
physical storage and towards providing data management for
kubernetes. We made a brief stopover in HCI land (hardware for VMs).

Working in the guts of the beast, I am surprised that any of it works
at all. But we are adding customers.

We tried (and will try again) to provide a software storage solution
(SDS). But the current thinking in Enterprise IT is that they buy
storage because it fits a need right now and don't want it to change
until they are ready to bring in the forklift.

Right now I think we get customers mostly because we actually test
our product and are willing to share test plans and test reports with
the customer.

-Erica
-=--=---=----=----=---=--=-=--=---=----=---=--=-=-
Erica Wolf (she/her)

===============================================================
From: Michael Harrison
------------------------------------------------------

"I am surprised that any of it works at all"

Hmm, that sounds familiar. Laughing.. But it works!

"..mostly because we actually test our product and are willing to
share test plans and test reports with the customer."

I often argue that the support - access to the people, or at least
their end product, that actually can make it work - is what people
and companies actually pay for. Test results are a
symptom/perception/trapping of that support.