Getting Started with Kubernetes / Docker on Fedora

*******

EDIT

This entry is out of date; I have moved the instructions to the Google Kubernetes GitHub repo.

END EDIT

*******

These are my notes on how to get started evaluating a Fedora / Docker / kubernetes environment.  I’m going to start with two hosts.  Both will run Fedora rawhide.  The goal is to stand up both hosts with kubernetes / Docker and use kubernetes to orchestrate the deployment of a couple of simple applications.  Derek Carr has already put together a great tutorial on getting a kubernetes environment up using vagrant.  However, that process is quite automated and I need to set it all up from scratch.

Install Fedora rawhide using the instructions from here.  I just downloaded the boot.iso file and used KVM to deploy the Fedora rawhide hosts.  My host names are fed{1,2}.

The kubernetes package provides four services: apiserver, controller, kubelet, proxy.  These services are managed by systemd unit files.  We will break the services up between the hosts.  The first host, fed1, will be the kubernetes master.  This host will run the apiserver and controller.  The remaining host, fed2, will be a minion and run the kubelet, proxy, and Docker.

This is all changing rapidly, so if you walk through this and see any errors or something that needs to be updated, please let me know via comments below.

So let’s get started.


Hosts:
fed1 = 10.x.x.241
fed2 = 10.x.x.240

Versions (Check the kubernetes / etcd version after installing the packages):

       
# cat /etc/redhat-release
Fedora release 22 (Rawhide)

# rpm -q etcd kubernetes
etcd-0.4.5-11.fc22.x86_64
kubernetes-0-0.0.8.gitc78206d.fc22.x86_64
       

1. Enable the copr repo on both hosts.  Colin Walters has already built the appropriate etcd / kubernetes packages for rawhide.  You can see the copr repo here.

       
# yum -y install dnf dnf-plugins-core
# dnf copr enable walters/atomic-next
# yum repolist walters-atomic-next/x86_64
Loaded plugins: langpacks
repo id                          repo name                                                                     status
walters-atomic-next/x86_64       Copr repo for atomic-next owned by walters                                    37
repolist: 37
       

2.  Install kubernetes on both hosts, fed{1,2}.  This will also pull in etcd.

       
# yum -y install kubernetes
       

3.  Pick a host and explore the packages.

       
# rpm -qi kubernetes
# rpm -qc kubernetes
# rpm -ql kubernetes
# rpm -ql etcd
# rpm -qi etcd
       

4.  Configure fed1.

Export the etcd and kube master variables so the services know where to go.

       
# export KUBE_ETCD_SERVERS=10.x.x.241
# export KUBE_MASTER=10.x.x.241
       
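
One caveat before editing the unit files: systemd does not pick up variables exported in your login shell, so the $KUBE_ETCD_SERVERS and $KUBE_MASTER references below will only expand if the variables are handed to the unit itself (for example with Environment= lines or an EnvironmentFile=).  Here is a minimal sketch, assuming a systemd drop-in is acceptable for your setup; the paths and values simply mirror this post, so adjust to taste.

# Hedged sketch: give the controller-manager unit the variables it references.
mkdir -p /etc/systemd/system/kubernetes-controller-manager.service.d
cat > /etc/systemd/system/kubernetes-controller-manager.service.d/env.conf <<'EOF'
[Service]
Environment=KUBE_ETCD_SERVERS=10.x.x.241
Environment=KUBE_MASTER=10.x.x.241
EOF
systemctl daemon-reload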

These are my service files for the apiserver, etcd, and controller-manager.  They have been changed from what was distributed with the package.

Make a copy first. Then review what I have here.

       
# cp /usr/lib/systemd/system/kubernetes-apiserver.service{,.orig}

# cp /usr/lib/systemd/system/kubernetes-controller-manager.service{,.orig}

# cp /usr/lib/systemd/system/etcd.service{,.orig}
       
       
# cat /usr/lib/systemd/system/kubernetes-apiserver.service
[Unit]
Description=Kubernetes API Server

[Service]
ExecStart=/usr/bin/kubernetes-apiserver --logtostderr=true -etcd_servers=http://localhost:4001 -address=127.0.0.1 -port=8080 -machines=10.x.x.240

[Install]
WantedBy=multi-user.target


# cat /usr/lib/systemd/system/kubernetes-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager

[Service]
ExecStart=/usr/bin/kubernetes-controller-manager --logtostderr=true --etcd_servers=$KUBE_ETCD_SERVERS --master=$KUBE_MASTER

[Install]
WantedBy=multi-user.target


# cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=simple
# etcd logs to the journal directly, suppress double logging
StandardOutput=null
WorkingDirectory=/var/lib/etcd
ExecStart=/usr/bin/etcd

[Install]
WantedBy=multi-user.target
       

Start the appropriate services on fed1.

       
# systemctl daemon-reload

# systemctl restart etcd
# systemctl status etcd
# systemctl enable etcd

# systemctl restart kubernetes-apiserver.service
# systemctl status kubernetes-apiserver.service
# systemctl enable kubernetes-apiserver.service

# systemctl restart kubernetes-controller-manager
# systemctl status kubernetes-controller-manager
# systemctl enable kubernetes-controller-manager
       
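
Before moving on, here is a quick sanity check of my own (not from the package docs) to confirm all three units actually came up:

# Each of these should print "active"
systemctl is-active etcd kubernetes-apiserver kubernetes-controller-manager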

Test etcd on the master (fed1) and make sure it’s working.

       
curl -L http://127.0.0.1:4001/v2/keys/mykey -XPUT -d value="this is awesome"
curl -L http://127.0.0.1:4001/v2/keys/mykey
curl -L http://127.0.0.1:4001/version
       

I got those examples from the CoreOS github page.
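
If you would rather script the check than eyeball the JSON, here is a small sketch that writes a key and verifies the value comes back.  The exact response fields vary by etcd version, so it just greps for the value; port 4001 is assumed to be the client port, as above.

# Hedged sketch: write a key to etcd and confirm it can be read back.
curl -s -L http://127.0.0.1:4001/v2/keys/mykey -XPUT -d value="this is awesome" > /dev/null
curl -s -L http://127.0.0.1:4001/v2/keys/mykey | grep -q "this is awesome" \
  && echo "etcd roundtrip OK" || echo "etcd roundtrip FAILED"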

Open up the ports for etcd and the kubernetes API server on the master (fed1).

       
# iptables -I INPUT -p tcp --dport 4001 -j ACCEPT
# iptables -I INPUT -p tcp --dport 8080 -j ACCEPT
       
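
Keep in mind that these iptables rules will not survive a reboot.  How to persist them depends on whether the host is running firewalld or the plain iptables service, so treat the following as a hedged suggestion and pick whichever matches your install:

# If firewalld is running:
firewall-cmd --permanent --add-port=4001/tcp
firewall-cmd --permanent --add-port=8080/tcp
firewall-cmd --reload

# If the iptables service (iptables-services package) is in use instead:
iptables-save > /etc/sysconfig/iptables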

Take a look at what ports the services are running on.

       
# netstat -tulnp
       
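
On the master you should see etcd listening on 4001 (client) and 7001 (peer), and the apiserver on 8080.  A quick filter, assuming ss from iproute is available:

ss -tlnp | grep -E ':(4001|7001|8080)\b'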

5.  Configure fed2.

These are my service files.  They have been changed from what was distributed with the package.

Make a copy first, then review what I have here.

       
# cp /usr/lib/systemd/system/kubernetes-kubelet.service{,.orig}
# cp /usr/lib/systemd/system/kubernetes-proxy.service{,.orig}
       
       

# cat /usr/lib/systemd/system/kubernetes-kubelet.service
[Unit]
Description=Kubernetes Kubelet

[Service]
ExecStart=/usr/bin/kubernetes-kubelet --logtostderr=true -etcd_servers=http://10.x.x.241:4001 -address=10.x.x.240 -hostname_override=10.x.x.240

[Install]
WantedBy=multi-user.target


# cat /usr/lib/systemd/system/kubernetes-proxy.service
[Unit]
Description=Kubernetes Proxy

[Service]
ExecStart=/usr/bin/kubernetes-proxy --logtostderr=true -etcd_servers=http://10.x.x.241:4001

[Install]
WantedBy=multi-user.target
       

Start the appropriate services on fed2.

       
# systemctl daemon-reload

# systemctl enable kubernetes-proxy.service
# systemctl restart kubernetes-proxy.service
# systemctl status kubernetes-proxy.service

# systemctl enable kubernetes-kubelet.service
# systemctl restart kubernetes-kubelet.service
# systemctl status kubernetes-kubelet.service

# systemctl restart docker
# systemctl status docker
# systemctl enable docker
       

Take a look at what ports the services are running on.

       
# netstat -tulnp
       

Open up the port for the kubernetes kubelet server on the minion (fed2).

       
# iptables -I INPUT -p tcp --dport 10250 -j ACCEPT
       
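
From the master (fed1) you can confirm that the kubelet port on fed2 is actually reachable before deploying anything.  This is just my own quick check using bash's /dev/tcp feature, not something from the kubernetes docs:

# Run on fed1; 10.x.x.240 is the minion.
timeout 2 bash -c 'cat < /dev/null > /dev/tcp/10.x.x.240/10250' && echo "10250 open" || echo "10250 closed"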

Now the two servers are set up to kick off a sample application.  In this case, we’ll deploy a web server to fed2.  Start off by making a file in root’s home directory on fed1 called apache.json that looks like this:

       
# cat apache.json
{
  "id": "apache",
  "desiredState": {
    "manifest": {
      "version": "v1beta1",
      "id": "apache-1",
      "containers": [{
        "name": "master",
        "image": "fedora/apache",
        "ports": [{
          "containerPort": 80,
          "hostPort": 80
        }]
      }]
    }
  },
  "labels": {
    "name": "apache"
  }
}
       

This JSON file describes the attributes of the application environment.  For example, it gives it an “id”, “name”, “ports”, and “image”.  Since the fedora/apache image doesn’t exist in our environment yet, it will be pulled down automatically as part of the deployment process.  I have seen errors, though, where kubernetes was looking for a cached image.  In that case, a manual “docker pull fedora/apache” seemed to resolve it.
For more information about which options can go in the schema, check out the docs on the kubernetes github page.
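
Before handing the file to kubecfg, it is cheap to confirm the JSON is well-formed, since a stray comma produces confusing errors later.  Any JSON validator works; python's json.tool module is usually already installed:

# Prints the parsed JSON on success, or a parse error on failure
python -m json.tool apache.json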

Now, deploy the fedora/apache image via the apache.json file.

       
# /usr/bin/kubernetes-kubecfg -c apache.json create pods

       

You can monitor progress of the operations with these commands:
On the master (fed1) –

       
# journalctl -f -xn -u kubernetes-apiserver -u kubernetes-controller-manager -u etcd

       

On the minion (fed2) –

       
# journalctl -f -xn -u kubernetes-kubelet.service -u kubernetes-proxy -u docker

       

This is what a successful result should look like:

       
# /usr/bin/kubernetes-kubecfg -c apache.json create pods
I0730 15:13:48.535653 27880 request.go:220] Waiting for completion of /operations/8
I0730 15:14:08.538052 27880 request.go:220] Waiting for completion of /operations/8
I0730 15:14:28.539936 27880 request.go:220] Waiting for completion of /operations/8
I0730 15:14:48.542192 27880 request.go:220] Waiting for completion of /operations/8
I0730 15:15:08.543649 27880 request.go:220] Waiting for completion of /operations/8
I0730 15:15:28.545475 27880 request.go:220] Waiting for completion of /operations/8
I0730 15:15:48.547008 27880 request.go:220] Waiting for completion of /operations/8
I0730 15:16:08.548512 27880 request.go:220] Waiting for completion of /operations/8
Name                Image(s)            Host                Labels
----------          ----------          ----------          ----------
apache              fedora/apache       /                   name=apache


       

After the pod is deployed, you can also list the pod.

       
# /usr/bin/kubernetes-kubecfg list pods
Name                Image(s)            Host                Labels
----------          ----------          ----------          ----------
apache              fedora/apache       10.x.x.240/      name=apache
redis-master-2      dockerfile/redis    10.x.x.240/      name=redis-master
       

You can get even more information about the pod like this.

       
# /usr/bin/kubernetes-kubecfg -json get pods/apache
       

Finally, on the minion (fed2), check that the service is available, running, and functioning.

       
# docker images | grep fedora
fedora/apache       latest                6927a389deb6        10 weeks ago        450.6 MB

# docker ps -l
CONTAINER ID        IMAGE                  COMMAND             CREATED             STATUS              PORTS               NAMES
d5871fc9af31        fedora/apache:latest   /run-apache.sh      9 minutes ago       Up 9 minutes                            k8s--master--apache--8d060183

# curl http://localhost
Apache
       
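
You can also verify the container from the master, since the pod maps hostPort 80 on fed2.  Note that we only opened port 10250 on the minion above, so port 80 needs to be allowed first (hedged example; run the first command on fed2 and the second on fed1):

# On fed2: allow inbound HTTP
iptables -I INPUT -p tcp --dport 80 -j ACCEPT

# On fed1: hit the web server through the minion's host port
curl http://10.x.x.240/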

To delete the pod (and its container):

       
/usr/bin/kubernetes-kubecfg -h http://127.0.0.1:8080 delete /pods/apache
       
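
After the delete, the pod should drop out of the list and the container should stop on the minion.  A quick check on fed1 and fed2 respectively:

# On fed1: apache should no longer be listed
/usr/bin/kubernetes-kubecfg list pods

# On fed2: no running fedora/apache container
docker ps | grep fedora/apache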

That’s it.

Of course this just scratches the surface. I recommend you head off to the kubernetes github page and follow the guestbook example.  It’s a bit more complicated but should expose you to more functionality.

You can play around with other Fedora images by building from the Fedora Dockerfiles.  Check them out on GitHub.



Comments:

  1. I am getting the following error while trying to create the pod.

    [root@kubernetes-master ~]# /usr/bin/kubernetes-kubecfg -c web.json create pods
    F0804 18:32:18.708939 01892 kubecfg.go:185] Got request error: Post http://localhost:8080/api/v1beta1/pods?labels=: dial tcp [::1]:8080: connection refused

    I have verified that the apiserver is up and listening on 8080; any ideas on what could be wrong?

    1. Hey Brad,

      Sorry for the delay getting back to you.

      I didn’t run into the issue that you are reporting, but I was able to reproduce it on a newer version of kubernetes. I was able to repro it by setting the KUBE_API_ADDRESS incorrectly.

      We need to make sure you are setting the correct API address for the kubernetes-apiserver.service. Please pass:

      --address=127.0.0.1

      To the kubernetes-apiserver daemon in the unit file and restart the service.

      If that doesn’t work, hop on #atomic on freenode and let’s chat. I’m scollier on freenode.

  2. I was using https://copr-fe.cloud.fedoraproject.org/coprs/maxamillion/epel7-kubernetes/repo/epel-7/maxamillion-epel7-kubernetes-epel-7.repo and the package version is kubernetes-0-0.0.11.gitc78206d.el7.centos.x86_64, but I can't get Kubernetes to work. Here is my (package's default) version of /etc/sysconfig/kubernetes:
    — snip —
    KUBE_MASTER="127.0.0.1:8080"
    KUBE_ETCD_SERVERS="http://127.0.0.1:4001"
    KUBE_LOGTOSTDERR="true"

    # API SERVER
    # Items specific to kubernetes-apiserver.service
    KUBE_API_ADDRESS="127.0.0.1"
    KUBE_API_PORT="8080"
    KUBE_API_MACHINES="127.0.0.1"

    # KUBELET
    # Items specific to kubernetes-kubelet.service
    KUBE_KUBELET_ADDRESS="127.0.0.1"
    KUBE_KUBELET_PORT="10250"
    KUBE_KUBELET_HOSTNAME_OVERRIDE="127.0.0.1"
    — snip —

    When firing up the services, there are some errors in the logs (by the way, /tmp/proxy_config is missing as well):
    systemctl daemon-reload
    systemctl restart docker.service
    systemctl restart etcd.service
    systemctl restart kubernetes-apiserver.service
    systemctl restart kubernetes-controller-manager
    systemctl restart kubernetes-kubelet.service
    touch /tmp/proxy_config
    systemctl restart kubernetes-proxy.service

    [root@kube1 ~]# systemctl status docker.service -l
    docker.service - Docker Application Container Engine
    Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled)
    Active: active (running) since Mon 2014-08-04 01:14:15 GST; 9s ago
    Docs: http://docs.docker.io
    Main PID: 19325 (docker)
    CGroup: /system.slice/docker.service
    └─19325 /usr/bin/docker -d --selinux-enabled

    Aug 04 01:14:15 kube1.node.int docker[19325]: [b262f268.init_networkdriver()] creating new bridge for docker0
    Aug 04 01:14:15 kube1.node.int docker[19325]: [b262f268.init_networkdriver()] getting iface addr
    Aug 04 01:14:15 kube1.node.int docker[19325]: [b262f268] -job init_networkdriver() = OK (0)
    Aug 04 01:14:15 kube1.node.int docker[19325]: Loading containers: : done.
    Aug 04 01:14:15 kube1.node.int docker[19325]: [b262f268.initserver()] Creating pidfile
    Aug 04 01:14:15 kube1.node.int docker[19325]: [b262f268.initserver()] Setting up signal traps
    Aug 04 01:14:15 kube1.node.int docker[19325]: [b262f268] -job initserver() = OK (0)
    Aug 04 01:14:15 kube1.node.int docker[19325]: [b262f268] +job acceptconnections()
    Aug 04 01:14:15 kube1.node.int docker[19325]: [b262f268] -job acceptconnections() = OK (0)
    Aug 04 01:14:15 kube1.node.int systemd[1]: Started Docker Application Container Engine.
    [root@kube1 ~]# systemctl status etcd.service -l
    etcd.service - Etcd Server
    Loaded: loaded (/usr/lib/systemd/system/etcd.service; disabled)
    Active: active (running) since Mon 2014-08-04 01:14:15 GST; 9s ago
    Main PID: 19402 (etcd)
    CGroup: /system.slice/etcd.service
    └─19402 /usr/bin/etcd

    Aug 04 01:14:15 kube1.node.int systemd[1]: Started Etcd Server.
    Aug 04 01:14:16 kube1.node.int etcd[19402]: Using the directory kube1.node.int.etcd as the etcd curation directory because a directory was not specified.
    Aug 04 01:14:16 kube1.node.int etcd[19402]: kube1.node.int is starting a new cluster
    Aug 04 01:14:16 kube1.node.int etcd[19402]: etcd server [name kube1.node.int, listen on :4001, advertised url http://127.0.0.1:4001]
    Aug 04 01:14:16 kube1.node.int etcd[19402]: peer server [name kube1.node.int, listen on :7001, advertised url http://127.0.0.1:7001]
    Aug 04 01:14:16 kube1.node.int etcd[19402]: kube1.node.int starting in peer mode
    Aug 04 01:14:16 kube1.node.int etcd[19402]: kube1.node.int: state changed from 'initialized' to 'follower'.
    Aug 04 01:14:16 kube1.node.int etcd[19402]: kube1.node.int: state changed from 'follower' to 'leader'.
    Aug 04 01:14:16 kube1.node.int etcd[19402]: kube1.node.int: leader changed from '' to 'kube1.node.int'.
    [root@kube1 ~]# systemctl status kubernetes-apiserver.service -l
    kubernetes-apiserver.service - Kubernetes API Server
    Loaded: loaded (/usr/lib/systemd/system/kubernetes-apiserver.service; disabled)
    Active: active (running) since Mon 2014-08-04 01:14:16 GST; 9s ago
    Main PID: 19407 (kubernetes-apis)
    CGroup: /system.slice/kubernetes-apiserver.service
    └─19407 /usr/bin/kubernetes-apiserver --logtostderr=true --etcd_servers=http://127.0.0.1:4001 --address=127.0.0.1 --port=8080 --machines=127.0.0.1

    Aug 04 01:14:16 kube1.node.int systemd[1]: Started Kubernetes API Server.
    Aug 04 01:14:16 kube1.node.int kubernetes-apiserver[19407]: I0804 01:14:16.075877 19407 apiserver.go:70] No cloud provider specified.
    Aug 04 01:14:16 kube1.node.int kubernetes-apiserver[19407]: E0804 01:14:16.081077 19407 pod_cache.go:80] Error synchronizing container list: &etcd.EtcdError{ErrorCode:501, Message:"All the given peers are not reachable", Cause:"Tried to connect to each peer twice and failed", Index:0x0}
    [root@kube1 ~]# systemctl status kubernetes-controller-manager -l
    kubernetes-controller-manager.service - Kubernetes Controller Manager
    Loaded: loaded (/usr/lib/systemd/system/kubernetes-controller-manager.service; disabled)
    Active: active (running) since Mon 2014-08-04 01:14:16 GST; 9s ago
    Main PID: 19414 (kubernetes-cont)
    CGroup: /system.slice/kubernetes-controller-manager.service
    └─19414 /usr/bin/kubernetes-controller-manager --logtostderr=true --etcd_servers=http://127.0.0.1:4001 --master=127.0.0.1:8080

    Aug 04 01:14:16 kube1.node.int systemd[1]: Starting Kubernetes Controller Manager…
    Aug 04 01:14:16 kube1.node.int systemd[1]: Started Kubernetes Controller Manager.
    Aug 04 01:14:16 kube1.node.int kubernetes-controller-manager[19414]: I0804 01:14:16.117011 19414 logs.go:39] etcd DEBUG: watch [/registry/controllers http://127.0.0.1:4001] [%!s(MISSING)]
    Aug 04 01:14:16 kube1.node.int kubernetes-controller-manager[19414]: I0804 01:14:16.117667 19414 logs.go:39] etcd DEBUG: get [/registry/controllers http://127.0.0.1:4001] [%!s(MISSING)]
    Aug 04 01:14:16 kube1.node.int kubernetes-controller-manager[19414]: I0804 01:14:16.118024 19414 logs.go:39] etcd DEBUG: [Connecting to etcd: attempt 1 for keys/registry/controllers?consistent=true&recursive=true&wait=true]
    Aug 04 01:14:16 kube1.node.int kubernetes-controller-manager[19414]: I0804 01:14:16.118058 19414 logs.go:39] etcd DEBUG: [send.request.to http://127.0.0.1:4001/v2/keys/registry/controllers?consistent=true&recursive=true&wait=true | method GET]
    Aug 04 01:14:16 kube1.node.int kubernetes-controller-manager[19414]: I0804 01:14:16.147603 19414 logs.go:39] etcd DEBUG: [recv.response.from http://127.0.0.1:4001/v2/keys/registry/controllers?consistent=true&recursive=true&wait=true]
    [root@kube1 ~]# systemctl status kubernetes-kubelet.service -l
    kubernetes-kubelet.service - Kubernetes Kubelet
    Loaded: loaded (/usr/lib/systemd/system/kubernetes-kubelet.service; disabled)
    Active: active (running) since Mon 2014-08-04 01:14:16 GST; 9s ago
    Main PID: 19422 (kubernetes-kube)
    CGroup: /system.slice/kubernetes-kubelet.service
    └─19422 /usr/bin/kubernetes-kubelet --logtostderr=true --etcd_servers=http://127.0.0.1:4001 --address=127.0.0.1 --port=10250 --hostname_override=127.0.0.1

    Aug 04 01:14:16 kube1.node.int kubernetes-kubelet[19422]: I0804 01:14:16.152371 19422 logs.go:39] etcd DEBUG: get [registry/hosts/127.0.0.1/kubelet http://127.0.0.1:4001] [%!s(MISSING)]
    Aug 04 01:14:16 kube1.node.int kubernetes-kubelet[19422]: I0804 01:14:16.152404 19422 logs.go:39] etcd DEBUG: [Connecting to etcd: attempt 1 for keys/registry/hosts/127.0.0.1/kubelet?consistent=true&recursive=false&sorted=true]
    Aug 04 01:14:16 kube1.node.int kubernetes-kubelet[19422]: I0804 01:14:16.152415 19422 logs.go:39] etcd DEBUG: [send.request.to http://127.0.0.1:4001/v2/keys/registry/hosts/127.0.0.1/kubelet?consistent=true&recursive=false&sorted=true | method GET]
    Aug 04 01:14:16 kube1.node.int kubernetes-kubelet[19422]: I0804 01:14:16.155365 19422 logs.go:39] etcd DEBUG: [recv.response.from http://127.0.0.1:4001/v2/keys/registry/hosts/127.0.0.1/kubelet?consistent=true&recursive=false&sorted=true]
    Aug 04 01:14:16 kube1.node.int kubernetes-kubelet[19422]: I0804 01:14:16.155544 19422 logs.go:39] etcd DEBUG: [recv.success. http://127.0.0.1:4001/v2/keys/registry/hosts/127.0.0.1/kubelet?consistent=true&recursive=false&sorted=true]
    Aug 04 01:14:16 kube1.node.int kubernetes-kubelet[19422]: I0804 01:14:16.155798 19422 logs.go:39] etcd DEBUG: watch [registry/hosts/127.0.0.1/kubelet http://127.0.0.1:4001] [%!s(MISSING)]
    Aug 04 01:14:16 kube1.node.int kubernetes-kubelet[19422]: I0804 01:14:16.155838 19422 logs.go:39] etcd DEBUG: get [registry/hosts/127.0.0.1/kubelet http://127.0.0.1:4001] [%!s(MISSING)]
    Aug 04 01:14:16 kube1.node.int kubernetes-kubelet[19422]: I0804 01:14:16.155907 19422 logs.go:39] etcd DEBUG: [Connecting to etcd: attempt 1 for keys/registry/hosts/127.0.0.1/kubelet?consistent=true&recursive=true&wait=true]
    Aug 04 01:14:16 kube1.node.int kubernetes-kubelet[19422]: I0804 01:14:16.156042 19422 logs.go:39] etcd DEBUG: [send.request.to http://127.0.0.1:4001/v2/keys/registry/hosts/127.0.0.1/kubelet?consistent=true&recursive=true&wait=true | method GET]
    Aug 04 01:14:16 kube1.node.int kubernetes-kubelet[19422]: I0804 01:14:16.158951 19422 logs.go:39] etcd DEBUG: [recv.response.from http://127.0.0.1:4001/v2/keys/registry/hosts/127.0.0.1/kubelet?consistent=true&recursive=true&wait=true]
    [root@kube1 ~]# systemctl status kubernetes-proxy.service -l
    kubernetes-proxy.service - Kubernetes Proxy
    Loaded: loaded (/usr/lib/systemd/system/kubernetes-proxy.service; disabled)
    Active: active (running) since Mon 2014-08-04 01:14:19 GST; 7s ago
    Main PID: 19432 (kubernetes-prox)
    CGroup: /system.slice/kubernetes-proxy.service
    └─19432 /usr/bin/kubernetes-proxy --logtostderr=true --etcd_servers=http://127.0.0.1:4001

    Aug 04 01:14:23 kube1.node.int kubernetes-proxy[19432]: I0804 01:14:23.788233 19432 logs.go:39] etcd DEBUG: [recv.success. http://127.0.0.1:4001/v2/keys/registry/services/specs?consistent=true&recursive=false&sorted=true]
    Aug 04 01:14:23 kube1.node.int kubernetes-proxy[19432]: E0804 01:14:23.788271 19432 etcd.go:119] Failed to get the key registry/services: 100: Key not found (/registry) [2]
    Aug 04 01:14:23 kube1.node.int kubernetes-proxy[19432]: E0804 01:14:23.788291 19432 etcd.go:79] Failed to get any services: 100: Key not found (/registry) [2]
    Aug 04 01:14:25 kube1.node.int kubernetes-proxy[19432]: I0804 01:14:25.789958 19432 logs.go:39] etcd DEBUG: get [registry/services/specs http://127.0.0.1:4001] [%!s(MISSING)]
    Aug 04 01:14:25 kube1.node.int kubernetes-proxy[19432]: I0804 01:14:25.790042 19432 logs.go:39] etcd DEBUG: [Connecting to etcd: attempt 1 for keys/registry/services/specs?consistent=true&recursive=false&sorted=true]
    Aug 04 01:14:25 kube1.node.int kubernetes-proxy[19432]: I0804 01:14:25.790055 19432 logs.go:39] etcd DEBUG: [send.request.to http://127.0.0.1:4001/v2/keys/registry/services/specs?consistent=true&recursive=false&sorted=true | method GET]
    Aug 04 01:14:25 kube1.node.int kubernetes-proxy[19432]: I0804 01:14:25.796640 19432 logs.go:39] etcd DEBUG: [recv.response.from http://127.0.0.1:4001/v2/keys/registry/services/specs?consistent=true&recursive=false&sorted=true]
    Aug 04 01:14:25 kube1.node.int kubernetes-proxy[19432]: I0804 01:14:25.796715 19432 logs.go:39] etcd DEBUG: [recv.success. http://127.0.0.1:4001/v2/keys/registry/services/specs?consistent=true&recursive=false&sorted=true]
    Aug 04 01:14:25 kube1.node.int kubernetes-proxy[19432]: E0804 01:14:25.796756 19432 etcd.go:119] Failed to get the key registry/services: 100: Key not found (/registry) [2]
    Aug 04 01:14:25 kube1.node.int kubernetes-proxy[19432]: E0804 01:14:25.796775 19432 etcd.go:79] Failed to get any services: 100: Key not found (/registry) [2]

    Repeating log block with some warnings:
    — snip —
    Aug 4 01:16:36 kube1 kubernetes-proxy: I0804 01:16:36.292406 19432 logs.go:39] etcd DEBUG: get [registry/services/specs http://127.0.0.1:4001] [%!s(MISSING)]
    Aug 4 01:16:36 kube1 kubernetes-proxy: I0804 01:16:36.292490 19432 logs.go:39] etcd DEBUG: [Connecting to etcd: attempt 1 for keys/registry/services/specs?consistent=true&recursive=false&sorted=true]
    Aug 4 01:16:36 kube1 kubernetes-proxy: I0804 01:16:36.292503 19432 logs.go:39] etcd DEBUG: [send.request.to http://127.0.0.1:4001/v2/keys/registry/services/specs?consistent=true&recursive=false&sorted=true | method GET]
    Aug 4 01:16:36 kube1 kubernetes-proxy: I0804 01:16:36.296257 19432 logs.go:39] etcd DEBUG: [recv.response.from http://127.0.0.1:4001/v2/keys/registry/services/specs?consistent=true&recursive=false&sorted=true]
    Aug 4 01:16:36 kube1 kubernetes-proxy: I0804 01:16:36.296316 19432 logs.go:39] etcd DEBUG: [recv.success. http://127.0.0.1:4001/v2/keys/registry/services/specs?consistent=true&recursive=false&sorted=true]
    Aug 4 01:16:36 kube1 kubernetes-proxy: E0804 01:16:36.296348 19432 etcd.go:119] Failed to get the key registry/services: 100: Key not found (/registry) [2]
    Aug 4 01:16:36 kube1 kubernetes-proxy: E0804 01:16:36.296363 19432 etcd.go:79] Failed to get any services: 100: Key not found (/registry) [2]
    — snip —

    When creating the apache pod as per this blog post's example, the request keeps running indefinitely.  The pod does show up in the pods list, however.

    It seems to me that some keys/entries are missing from etcd, like:
    [root@kube1 ~]# curl -L http://127.0.0.1:4001/v2/keys/registry/controllers
    {"errorCode":100,"message":"Key not found","cause":"/registry","index":2}

    [root@kube1 ~]# curl -L http://127.0.0.1:4001/v2/keys/registry/services/specs
    {"errorCode":100,"message":"Key not found","cause":"/registry","index":2}

    [root@kube1 ~]# curl -L http://127.0.0.1:4001/v2/machines
    http://127.0.0.1:4001

    What could be the problem here?

  3. Hello,
    I also faced the issues mentioned in the comment above.  Define the port number of the running minion kubelet service in the master's "kubernetes-apiserver.service" file, and also define the port number in the minion's "kubelet.service" file.  After doing this, restart all the services, and container deployment from the master will work.

    Good Tutorial

  4. Great article, and my system is mostly working.  I can successfully deploy one pod (the apache image), but if I add a second address to the apiserver configuration on the master, it is ignored.  If I swap the addresses so the apiserver has only the new address and try a deploy, it works.  Basically, the deploy will only work on one pod/VM at any one time; if two addresses are set, only one is ever used.

    /usr/bin/kube-apiserver --logtostderr=true --etcd_servers=http://localhost:4001 --address=127.0.0.1 --port=8080 --machines=192.168.201.241,192.168.201.242 --minion_port=10250

    In my example above, .241 works on its own and so does .242; if I deploy with the above config, only .242 gets the image.

    I have looked at the CoreOS redis example, and it seems the only thing required to add more pods to the apiserver is to extend the --address parameter with more comma-delimited values.  It is not working, though.

    Any ideas? you can also reach me internally, thanks in advance.

  5. For what it’s worth: in this article, you modified the systemd unit files installed by the packages. Best practice is to *not* modify those files; instead, make a copy in /etc/systemd/system and modify that copy. Files in /etc/systemd/system override those in /usr/lib/systemd/system. This allows administrators to easily customize local units without worrying about losing changes as the result of a package upgrade.

    1. Hey Lars, that's correct.  I had another post about this here: http://www.colliernotes.com/2014/07/getting-started-with-kubernetes-docker.html and I was notified of the mistake and made the changes you are talking about the same day it was released.  At the time I couldn't edit this blog to match; I believe I can now.  At any rate, there have been quite a few changes to the way we package kubernetes now, and I need to totally overhaul the article.  I'll try to release a new one ASAP.  Thanks for pinging me on this.
