How to set up your first Kubernetes environment on Windows

You’ve crushed the whole containers thing—it was much easier than you anticipated, and you’ve updated your resume. Now it’s time to move into the spotlight, walk the red carpet, and own the whole Kubernetes game. In this blog post, we’ll get our Kubernetes environment up and running on Windows 10, spin up an image in a container, and drop the mic on our way out the door—headed to Coderland.

Windows? Oh, yeah!

Just because containers run on Linux doesn’t mean Windows developers should be left out of the picture. Quite the contrary: Given that .NET runs in containers now, Windows devs are full first-class citizens of the containers and Kubernetes community. In this article, we’ll apply a bit of PowerShell magic, and your Windows PC will be all ready for you to start learning about and using Kubernetes.



The parts

Unlike a certain brand of kitchen cabinets I recently purchased, whose instructions were only diagrams of a stick figure with some numbers and arrows, we’ll lay out the items and steps with diagrams and words. We need:

  1. A way to run containers.
  2. Kubernetes.
  3. The Kubernetes command-line tool, kubectl. We can argue over how to pronounce it later.
  4. The Red Hat OpenShift command-line tool, oc.
  5. An image to run as a test.

A way to run containers

We need some sort of environment to run containers. Options include Minikube, Minishift, and a Red Hat OpenShift cluster (running on, say, AWS). We’ll keep things simple and forward-looking by choosing Minishift: when we later want to move from Kubernetes to OpenShift, we’ll already have everything we need in place. This setup runs on our local machine yet gives us the power of Kubernetes (and OpenShift, by the way) without spending any money. That’s always nice.

So let’s install Minishift on Windows. For starters, if you’re not using Chocolatey, stop everything and install it. You will absolutely love it, I promise.

From that point on, it’s way too easy:

choco install -y minishift

Prove it

The command minishift version should print the installed version number, confirming that Minishift is ready.

Installing Kubernetes

Bonus! It’s included in Minishift. Wow, that was easy.

Installing kubectl

The Kubernetes command-line tool, kubectl, is a breeze to install on Windows:

choco install -y kubernetes-cli

(If that fails, there is a more in-depth explanation on the kubectl installation page.)

Installing oc

To install oc, the OpenShift command-line tool, visit the CLI installation page and follow the directions. Basically, you download it and make sure it’s in your system’s path.

Fire it up

Time to get your local cluster up and running. It’s quite simple; at the command line, use the following command:

minishift start

Note: If, at any time, you want to start fresh with Minishift, use the commands minishift stop and minishift delete --force --clear-cache.

When that finishes, we need a few commands to get “attached,” if you will, to our cluster. We’re going to cheat here and use some OpenShift commands as shortcuts. If we didn’t use them, we’d have to alter our Kubernetes configuration, create a user, and grant access, so the shortcuts save a ton of steps. If you want to be a purist and use only kubectl, you can follow the blog post “Logging Into a Kubernetes Cluster With Kubectl.”

$MinishiftIP = minishift ip
oc login "${MinishiftIP}:8443" -u admin -p admin
oc new-project test

An image to run

Finally, it’s a good idea to run a very basic image as a Kubernetes pod to test your setup. To do this, we’ll run an image in a pod, create a route to it (i.e., create a URI), and then run curl to make sure it’s all working as expected.

Use this command to spin up a pod:

kubectl run locationms --image=<image> --port=8080

Replace <image> with the reference of the image you want to run; kubectl run requires the --image flag. This will pull the image down from its repository to your system and run it using Kubernetes.

A little more detail: This creates a deployment named locationms, retrieves the image, starts the image in a container, and uses port 8080 to route to it. Note that the deployment name and the name of the image do not need to match. This is an area where you want to put some management thought into place. In other words, this is a great opportunity to make things really confusing if you’re not thoughtful. Don’t ask how I know this.
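For reference, the kubectl run command above generates a Deployment roughly like the following sketch. This is an illustration, not the exact manifest; the image reference is a placeholder, and the run=locationms label is the convention kubectl run uses for the name you supply:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: locationms          # the deployment name passed to `kubectl run`
  labels:
    run: locationms
spec:
  replicas: 1
  selector:
    matchLabels:
      run: locationms       # kubectl run labels pods with run=<name>
  template:
    metadata:
      labels:
        run: locationms
    spec:
      containers:
      - name: locationms
        image: <image>      # placeholder: the image reference you want to run
        ports:
        - containerPort: 8080   # from --port=8080
```

Seeing the generated Deployment makes it clear why the deployment name and the image name are independent of each other: they live in different fields.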

Note that waiting for this pod to get up and running might take a few minutes, depending on your machine’s performance. When done on a server or high-performance PC, it takes about a minute or so. My MacBook Air with the i5 processor takes about four minutes. You can check on it by running kubectl get pods.

When the pod is up and running, you cannot access it directly from your command line, because it’s running “inside” your Kubernetes cluster. Kubernetes, however, is smart and provides a proxy to your pods, because there may be several containers running the same application (a pod of containers), all reachable at the same URI. When you run kubectl get pods, you can see your locationms pod.

Reaching the application

There are two aspects, if you will, to the proxy that Kubernetes has created. One aspect is the proxy itself. The other aspect is the public face of the proxy, which allows you to access your pods.

Testing the proxy and its access to your pod is a two-step process. Not to worry; this gets much better and much easier later. For now, we must start the proxy and then access the pod through it. You’ll need two terminal windows to do this.

In the first terminal window, run the following command:

kubectl proxy

The proxy is now running. This ties up the command line (i.e., it runs interactively), so you need a second terminal window to run a command that reaches it. The endpoint that leads to our application is /api/v1. The format we want is /api/v1/namespaces/{our namespace}/pods/{pod name}/proxy/.

The {our namespace}, in our particular instance, is test.

The pod name can be found, again, by running kubectl get pods.
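Putting those format pieces together can be sketched in a bash-style shell (Git Bash or WSL on Windows). The namespace and pod name below are placeholders; substitute the real pod name from kubectl get pods:

```shell
# Assemble the kubectl-proxy URL for a pod.
# NAMESPACE and POD_NAME are placeholders for this sketch.
NAMESPACE=test
POD_NAME=locationms-foo
PROXY_URL="http://localhost:8001/api/v1/namespaces/${NAMESPACE}/pods/${POD_NAME}/proxy/"
echo "$PROXY_URL"
```

The same string interpolation works in PowerShell with "$" variables if you prefer to stay in one shell.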

Put those pieces together and you can reach our application from your second terminal (remember: kubectl proxy is still running in our first terminal window):

curl http://localhost:8001/api/v1/namespaces/test/pods/locationms-foo/proxy/

Let’s have some fun

While we’re here, let’s see the locationms app in action by passing in an IP address. You can get your machine’s public IP address from a service such as icanhazip.com:

$IP = (curl https://icanhazip.com).Content.Trim()
Then, using that IP address, we can run our application as in this example:

(curl http://localhost:8001/api/v1/namespaces/test/pods/locationms-7b4fb849cc-8lmwg/proxy/<your-IP-address>).Content | ConvertFrom-Json

Of course, the actual pod name will be different. The output is the application’s response, parsed from JSON into a PowerShell object.

(Cool side note: icanhazip.com is owned and run by @majorhayden. We worked together at Rackspace, and now we work together at Red Hat.)

By the way…

That application you’re running in a container? It’s a .NET Core application, written in C#. Just more proof that .NET developers can take over the microservices world.

Wait, there’s more

Although you now have a Kubernetes cluster running on your local machine, there’s still a lot more to know and do. For instance, there must be an easier way to get to your application than running kubectl proxy in a second terminal. There must be a way to run more than one container, or more than one application. There must be a way to update your code while it’s running—a “rolling update” as it’s known.

And there is. We’ll cover all this as the series continues.

Also read

How to set up your first Kubernetes environment on macOS

To learn more, visit our Linux containers or microservices pages.

Join Red Hat Developer and get access to handy cheat sheets, free books, and product downloads that can help you with your microservices and container application development.