
Should K8s run on VMs?

This morning I saw the VMware Twitter account tweet a link to this blog post from Chip Zoller. It’s about why it’s better to run Kubernetes on VMs as opposed to running it on bare metal.

I think the post is a good read, so before you proceed please check out Chip’s post first. That being said, I think there are some arguments in there that are not 100% valid. I’ll give you the TL;DR and then go through each point in Chip’s post.

TL;DR

If you’re running k8s on bare metal you’ll miss out on some infra-level features you are used to when running on vSphere. My point is that there are alternatives, and that in some cases you don’t need those features at all (like HA) because cloud native apps handle those things in the application layer.

The only valid point I see is that VMs still provide far better isolation than containers.

Let’s dive in

No cloud provider, no storage/network options

This statement is somewhat true, but only if you refuse to go outside your vSphere comfort zone. Of course, if you run on bare metal you can’t automatically deploy an Amazon ELB. But the same is true if you run on vSphere: you’ll need to add NSX-T to get that same functionality. And guess what… NSX-T runs fine on physical machines, not just in vSphere environments.

For storage you could easily use NFS or Ceph or ScaleIO or whatever works for you. Saying that vSphere already provides the storage for you is not true: you’ll have to maintain the storage solution vSphere is using anyway, even if it’s vSAN. So imho it does not matter whether somebody in your company is maintaining storage for vSphere or for bare-metal Kubernetes. Obviously, if you run on a public cloud you don’t have to worry about that.
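
To make that concrete, a bare-metal cluster can consume that storage through regular Kubernetes objects. Here is a minimal sketch of an NFS-backed PersistentVolume; the server address and export path are made-up placeholders, not a recommendation:

```yaml
# Sketch: NFS-backed PersistentVolume for a bare-metal cluster
# (server address and export path are hypothetical placeholders)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-example
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.0.0.10
    path: /exports/k8s
```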

No fleet management, node scale. Automating bare metal very difficult.

Well… ever looked at packet.net? Apart from that, this statement is very true, especially for on-prem hardware.

Hardware issues, firmware, drivers, tuning. Linux distro compatibility.

The main takeaway being this quote: “On virtual machines, the precise physical details are abstracted away from you as the hypervisor presents a set of consistent, virtualized resources”. But eehh… your vSphere layer has to run somewhere, right? So somebody still has to manage the firmware and all that; in a public cloud that’s someone else’s problem, but with on-prem hardware it is still your company’s problem. Of course there are solutions out there that make life easier (ahem… VxRail), but why couldn’t a company (Dell?) come up with a similar offering for bare-metal k8s?

No built-in monitoring for bare metal, not too many solutions out there for that.

This might be a valid point, although I’m pretty sure Prometheus can monitor most stuff. Just slap the node exporter on all your machines and you’re done. You’ll probably do that in a virtual environment as well anyway, so there is no added “cost” here.
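
As a rough sketch of what that looks like, assuming node_exporter runs on its default port 9100 and the hostnames are placeholders, the Prometheus scrape config is just a few lines:

```yaml
# Sketch: prometheus.yml snippet scraping node_exporter on two bare-metal workers
# (hostnames are placeholders)
scrape_configs:
  - job_name: node
    static_configs:
      - targets:
          - worker-01:9100
          - worker-02:9100
```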

Procurement process and installation. Heterogeneous hardware.

VMs do not run on thin air. They run on hardware. So given the same amount of hardware resources for a bare-metal vs. VM environment, the installation and procurement process should not be any different.

Persistent storage on bare metal way more difficult.

This point is redundant. See first point.

Config management

Well… which config management tool do you use for your VMs? It’s not much different than that, I’d think. Puppet, Chef and Ansible work great with bare metal. Only if you’re talking about BOSH and maybe Terraform might you have a point. There is a solution out there to manage physical machines with BOSH, but not many people are using it. Still possible though.
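
To illustrate: a bare-metal worker is just another host in your inventory. A minimal, hypothetical Ansible playbook (the group name and package choices are assumptions) could look like this:

```yaml
# Hypothetical Ansible playbook preparing bare-metal k8s workers
# (inventory group and package choice are assumptions)
- hosts: k8s_workers
  become: true
  tasks:
    - name: Install a container runtime
      apt:
        name: docker.io
        state: present
        update_cache: yes
    - name: Disable swap, as required by the kubelet
      command: swapoff -a
```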

Patching process on hardware, must kill workloads

If you patch your K8s worker VMs you must kill workloads as well! So by using vSphere VMs the only thing you’re avoiding is having to shut down workers just for firmware upgrades.
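
In both cases the pattern is the same: drain the node and let the scheduler reschedule the pods. A PodDisruptionBudget keeps enough replicas alive while that happens; here is a minimal sketch with a hypothetical app label:

```yaml
# Sketch: keep at least 2 replicas of a hypothetical app available
# while worker nodes are drained for patching
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: my-app
```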

Security isolation and multi-tenancy. Blast radius.

This! This is the real reason to run on VMs. I can’t think of any better reason. Another way to achieve the same would be to spin up a bare-metal cluster for each tenant, but that would lead to a lot of resource waste and management overhead.

No dynamic resource utilization (DRS)

100% true. But I would be curious to see what actually happens with load distribution in a bare-metal cluster. If your apps scale horizontally you’ll end up distributing the load equally among the worker nodes, so you might not need DRS.
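
To be clear about what I mean by scaling horizontally: a HorizontalPodAutoscaler adjusts the replica count and the scheduler spreads those replicas over the workers. A minimal sketch, with a hypothetical Deployment name and example thresholds:

```yaml
# Sketch: scale a hypothetical Deployment between 2 and 10 replicas
# based on average CPU utilization
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70
```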

No failure recovery of hardware.

Yes, it takes about 5 minutes for k8s to detect a lost node. But that should not matter, because you’re supposed to run cloud native apps on k8s. This means that availability is taken care of in the application layer, not in the infra layer. The whole point of cloud native is to assume the infra is unreliable and can fail at any moment.
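
For what it’s worth, those detection and eviction timings are tunable on the controller manager. With kubeadm that could look roughly like the sketch below; the values are purely illustrative, not recommendations:

```yaml
# Sketch: tuning node failure detection and pod eviction via kubeadm
# (the values are illustrative examples, not recommendations)
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
controllerManager:
  extraArgs:
    node-monitor-grace-period: "40s"
    pod-eviction-timeout: "2m"
```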

No extra availability features.

Huh? K8s has a ton of availability features. You can configure your pods not to run on the same node or in the same AZ; there are (anti-)affinity rules and taints and tolerations. I do agree that there is probably no orchestration layer that will bring back a physically failed node. But again, you could use BOSH for that; it’s probably not the easiest thing in the world but definitely possible.
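
For example, a pod anti-affinity rule that keeps replicas of the same (hypothetical) app off the same node looks roughly like this:

```yaml
# Sketch: spread replicas of a hypothetical app across different nodes
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: my-app
              topologyKey: kubernetes.io/hostname
      containers:
        - name: app
          image: nginx:1.17   # placeholder image
```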

Availability of snapshots.

You could have your hardware boot from SAN and then make SAN based snapshots.

Kubernetes opinions

I have recently been working with and learning about Kubernetes quite a lot. I have also spent a lot of time with Cloud Foundry and BOSH over the last few years, so I can’t help but compare these different products.

Let me start off by saying that BOSH, Kubernetes and Cloud Foundry complement each other in terms of functionality. I’ll probably write a separate post on that. I want to use this post to go into the way these products are built and used: namely, opinionated vs. non-opinionated.

Opinionated

BOSH and Cloud Foundry are both heavily opinionated products/platforms. This means that if you want to run an application on CF you’ll have to adhere to those opinions. More specifically: you’ll need to build a 12-factor app.

If you deploy Cloud Foundry you’ll always use the same log aggregator, and until recently you always used the same container runtime and scheduler. Also, authentication is always done by the same product and the permission system is built in and always the same. This means that it doesn’t matter on which CF platform you run your app; it will always look very similar.

Same goes for BOSH. If you want to deploy software using BOSH you need to put it into a BOSH release, which is a predefined way of deploying software. BOSH has opinions on where to put your scripts, where to put your binaries and config files, and how to start and monitor the application. If you want stuff done differently you’ll have to look for another product.

The CF ecosystem also has an opinion on how you operate the platform. The role of operator and user is always clearly defined.

Non-Opinionated

I see Kubernetes as a non-opinionated solution. Most of the components are pluggable, you are forced to bring your own container runtime, and for the networking solution it’s even worse. Also, you can run whatever you like on top of k8s as long as it comes as an OCI image. But the latter is hardly an opinion, since an image can literally contain anything.

There is also no clear distinction between operators and users of the software. It seems like every project I do that involves k8s ends up in very long discussions about where the platform operator’s responsibilities stop and where the platform user’s responsibilities start. The tooling doesn’t make this any easier. For example: you use the same CLI tool to both update an app and remove a worker node, and there are barely any usable predefined roles in the system.

You get an opinion and you get an opinion and….

The problem with a platform that in and of itself does not have an opinion is that the people using it will start making up their own opinions. Then they fight each other about it. Here are a few examples:

  • Kubernetes is intended to run 12 factor workloads, don’t run stateful applications on it
  • Kubernetes on its own isn’t friendly for running 12-factor micro-services. You need extra magic to make that happen
  • It’s fine to run databases on k8s
  • Managing state on k8s is horrible
  • Each dev team should have its own k8s cluster
  • Use namespaces, PSPs and something like OPA to build a secure multi-tenant cluster
  • You won’t need backups, just redeploy the cluster, re-run the deploy pipelines and you’re fine
  • Here is a backup tool because you need backups

What’s your opinion? Leave a comment or hit me up on Twitter.

My Opinion

So all this has upsides and downsides. In my opinion :). It is great that everybody can build a platform to their liking and use it as they see fit. It is also great that there is a huge ecosystem around this (Keeps a lot of people in a job).

At the same time it means you have to build a platform out of Kubernetes; just deploying k8s itself is not enough. So now you have to form your own opinion, or even worse, your (enterprise) organization has to form its opinion. Then you have to build and maintain the platform. I’m sure a lot of readers will now respond with “you don’t run your own k8s, you just use one of the cloud offerings”. That is certainly a good point, but even then you’ll end up in discussions around what you should run on k8s, who should have access, and so on.

In the end I prefer a platform that is clearly opinionated. It just saves time and makes life easier. You’d think that this implies I do not like Kubernetes and you might be right. I think Kubernetes is definitely cool tech and it may have a bright future. Just not as an application platform.

Having said that, the application platform might run on top of some non-opinionated infrastructure. But that’s for another post.

VMware Cloud Native Master Specialist Exam

I recently passed the VMware Cloud Native Master Specialist Exam. The what? Yeah… let me explain. By passing this exam you’ll earn a badge called “Cloud Native Master Specialist”. And according to VMware:

Earning this badge will certify that the successful candidate has the knowledge and skills necessary to architect a Kubernetes-based platform supported by complimentary technologies from the Cloud Native ecosystem for continuous delivery of applications.

So you’ll get a badge, not a “certification”. Not sure what the difference is, though… The most important reason why I needed to pass the exam is that VMware requires a certain number of people with said badge in a company seeking to earn the Cloud Native Master Service Competency. Read more about MSC here. Since there is not a lot of information about the exam out there, I’ll give a quick overview of it here.

Exam Pre-reqs

The only pre-req for this exam is the CKA certification. I posted about the CKA exam earlier. You don’t need a VCP certification, which most other VMware exams require, nor do you need any formal training.

Exam Format

The exam is a multiple-choice exam. I think I got 67 questions; I’m not sure if that’s always the same. You have 100 minutes to answer all of them.

Knowledge needed

All exam topics are listed in this blueprint PDF. To summarize, you’ll need the following skills to pass the exam:

  • Be comfortable with k8s. Since you passed the CKA exam before doing this one you’ll probably be fine. It won’t hurt to also look into the CKAD training materials or even pass the CKAD exam. In summary, make sure you know about: deployments, pods, replicaSets, priorityClasses, networkPolicies, affinity, RBAC, auditPolicies and all that good stuff.
  • As mentioned in the blueprint you need some knowledge about a few other products as well:
    • Velero. This is a backup solution for k8s. Watching this video is probably sufficient to pass the exam. Playing around with it yourself will definitely help memorize details better.
    • Sonobuoy. This is a tool to run conformance tests on a k8s cluster. Watch this video for more details. That will probably give you enough info to pass the exam. But again, I recommend trying it out for yourself if you have never used it.
    • OpenPolicyAgent. OPA gives you policy-based control over, eehhh… basically everything, if you want it to. For the exam you need to know how it can be used with k8s.
  • Some very basic AWS knowledge can come in handy. And I mean really basic: if you know what regions, AZs and VPCs are, you’ll probably be fine.
  • You don’t need any “traditional” VMware knowledge at the moment. I say “traditional” because 2 out of the 3 tools mentioned above are now part of VMware’s portfolio as well.
  • Know about flannel, its limitations and alternatives.
  • Take some time to learn about podSecurityPolicies if you haven’t done so already. Start with this video; there is also a minimal example sketched right after this list.
  • Docker build best practices
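
Since podSecurityPolicies show up in the blueprint, here is the minimal sketch mentioned above: a PSP that simply disallows privileged containers. You would still need RBAC bindings to make it effective; the name and allowed volume types are just examples.

```yaml
# Sketch: a minimal PodSecurityPolicy that blocks privileged containers
# (name and allowed volume list are illustrative)
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted-example
spec:
  privileged: false
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
    - configMap
    - secret
    - emptyDir
    - persistentVolumeClaim
```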

VMware Photon Platform 1.2 released

Yesterday VMware silently released a new version of its open-source cloud native platform. VMware Photon Platform 1.2 is available for download on GitHub now. You can find the details of the new release in the release notes. Below are the highlights of the new release.

What’s new?

  • Photon Controller now supports ESXi 6.5 Patch 201701001. Support for ESXi 6.0 is dropped.
  • Photon Platform now comes with Lightwave 1.2, which supports authenticating using Windows session credentials, given you’re using the CLI from a Windows box.
  • The platform now supports Kubernetes 1.6 and also supports persistent volumes for Kubernetes
  • NSX-T support is improved
  • Resource tickets have been replaced with quotas which can be increased and decreased. This is a big improvement in my opinion. The previous release wouldn’t let you change resource allocation which was a definite blocker for production use.
  • The API is now versioned, which means the API URL now starts with /v1/

What’s broken?

  • Lightwave 1.2 is incompatible with earlier versions
  • ESXi 6.0 is no longer supported
  • The API is incompatible with previous API versions. But the good news is that it’s now versioned, so this was the last time they broke the API (hopefully).

Update 20-04-2017: some updates taken from the GitHub issues

  • The HA Lightwave setup is no longer supported. It will be back in 1.2.1.
  • Version 1.1.1 didn’t create any flavors at installation, but 1.2 seems to create duplicate flavors.