Getting Started with Kubernetes / Docker on Fedora

July 31, 2014
Scott Collier
Related topics:
Containers
Related products:
Red Hat OpenShift

    *******

    EDIT

    This entry is out of date; I have moved the instructions to the Google Kubernetes GitHub repo.

    END EDIT

    *******

    These are my notes on how to get started evaluating a Fedora / Docker / kubernetes environment.  I'm going to start with two hosts, both running Fedora rawhide.  The goal is to stand up both hosts with kubernetes / Docker and use kubernetes to orchestrate the deployment of a couple of simple applications.  Derek Carr has already put together a great tutorial on getting a kubernetes environment up using Vagrant.  However, that process is quite automated, and here I want to set everything up from scratch.

    Install Fedora rawhide using the instructions from here.  I just downloaded the boot.iso file and used KVM to deploy the Fedora rawhide hosts.  My host names are fed{1,2}.

    The kubernetes package provides four services: apiserver, controller-manager, kubelet, and proxy.  These services are managed by systemd unit files.  We will split the services between the hosts.  The first host, fed1, will be the kubernetes master and will run the apiserver and controller-manager.  The remaining host, fed2, will be the minion and will run the kubelet, proxy, and docker.
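
    A quick way to double-check which unit files each package ships (just a sanity check, using the same rpm queries as step 3 below) is to list them straight from the RPMs:

    # rpm -ql kubernetes | grep '\.service$'
    # rpm -ql etcd | grep '\.service$'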

    This is all changing rapidly, so if you walk through this and see any errors or something that needs to be updated, please let me know via comments below.

    So let's get started.


    Hosts:
    fed1 = 10.x.x.241
    fed2 = 10.x.x.240

    Versions (Check the kubernetes / etcd version after installing the packages):

           
    # cat /etc/redhat-release
    Fedora release 22 (Rawhide)
    
    # rpm -q etcd kubernetes
    etcd-0.4.5-11.fc22.x86_64
    kubernetes-0-0.0.8.gitc78206d.fc22.x86_64
           
    

    1. Enable the Copr repo on all hosts.  Colin Walters has already built the appropriate etcd / kubernetes packages for rawhide.  You can see the Copr repo here.

           
    # yum -y install dnf dnf-plugins-core
    # dnf copr enable walters/atomic-next
    # yum repolist walters-atomic-next/x86_64
    Loaded plugins: langpacks
    repo id                          repo name                                                                     status
    walters-atomic-next/x86_64       Copr repo for atomic-next owned by walters                                    37
    repolist: 37
           
    

    2.  Install kubernetes on all hosts - fed{1,2}.  This will also pull in etcd.

           
    # yum -y install kubernetes
           
    

    3.  Pick a host and explore the packages.

           
    # rpm -qi kubernetes
    # rpm -qc kubernetes
    # rpm -ql kubernetes
    # rpm -ql etcd
    # rpm -qi etcd
           
    

    4.  Configure fed1.

    Export the etcd and kube master variables so the services know where to go.

           
    # export KUBE_ETCD_SERVERS=10.x.x.241
    # export KUBE_MASTER=10.x.x.241
           
    

    These are my service files for the apiserver, etcd, and controller-manager.  They have been changed from what was distributed with the package.

    Make a copy first. Then review what I have here.

           
    # cp /usr/lib/systemd/system/kubernetes-apiserver.service{,.orig}
    
    # cp /usr/lib/systemd/system/kubernetes-controller-manager.service{,.orig}
    
    # cp /usr/lib/systemd/system/etcd.service{,.orig}
           
    
           
    # cat /usr/lib/systemd/system/kubernetes-apiserver.service
    [Unit]
    Description=Kubernetes API Server
    
    [Service]
    ExecStart=/usr/bin/kubernetes-apiserver --logtostderr=true -etcd_servers=http://localhost:4001 -address=127.0.0.1 -port=8080 -machines=10.x.x.240
    
    [Install]
    WantedBy=multi-user.target
    
    
    # cat /usr/lib/systemd/system/kubernetes-controller-manager.service
    [Unit]
    Description=Kubernetes Controller Manager
    
    [Service]
    ExecStart=/usr/bin/kubernetes-controller-manager --logtostderr=true --etcd_servers=$KUBE_ETCD_SERVERS --master=$KUBE_MASTER
    
    [Install]
    WantedBy=multi-user.target
    
    
    # cat /usr/lib/systemd/system/etcd.service
    [Unit]
    Description=Etcd Server
    After=network.target
    
    [Service]
    Type=simple
    # etc logs to the journal directly, suppress double logging
    StandardOutput=null
    WorkingDirectory=/var/lib/etcd
    ExecStart=/usr/bin/etcd
    
    [Install]
    WantedBy=multi-user.target
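
    One thing to watch for: variables exported in your shell (like KUBE_ETCD_SERVERS and KUBE_MASTER above) are not visible to services started by systemd, so the $KUBE_ETCD_SERVERS and $KUBE_MASTER references in the controller-manager unit will expand to nothing unless the service itself defines them.  A minimal sketch of one way to handle that, using a standard systemd drop-in (the file name and values here are just an example):

    # mkdir -p /etc/systemd/system/kubernetes-controller-manager.service.d
    # cat /etc/systemd/system/kubernetes-controller-manager.service.d/env.conf
    [Service]
    # Define the variables referenced in ExecStart for the service itself
    Environment=KUBE_ETCD_SERVERS=10.x.x.241
    Environment=KUBE_MASTER=10.x.x.241

    # systemctl daemon-reload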
           
    

    Start the appropriate services on fed1.

           
    # systemctl daemon-reload
    
    # systemctl restart etcd
    # systemctl status etcd
    # systemctl enable etcd
    
    # systemctl restart kubernetes-apiserver.service
    # systemctl status kubernetes-apiserver.service
    # systemctl enable kubernetes-apiserver.service
    
    # systemctl restart kubernetes-controller-manager
    # systemctl status kubernetes-controller-manager
    # systemctl enable kubernetes-controller-manager
           
    

    Test etcd on the master (fed1) and make sure it's working.

           
    curl -L http://127.0.0.1:4001/v2/keys/mykey -XPUT -d value="this is awesome"
    curl -L http://127.0.0.1:4001/v2/keys/mykey
    curl -L http://127.0.0.1:4001/version
           
    

    I got those examples from the CoreOS GitHub page.
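
    Both the PUT and the GET should come back with a small JSON document containing the key and its value.  Once that round-trips, you can optionally clean up the test key (an extra step, not in the original notes) with a DELETE against the same v2 endpoint:

    curl -L http://127.0.0.1:4001/v2/keys/mykey -XDELETE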

    Open up the ports for etcd and the kubernetes API server on the master (fed1).

           
    # iptables -I INPUT -p tcp --dport 4001 -j ACCEPT
    # iptables -I INPUT -p tcp --dport 8080 -j ACCEPT
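
    Keep in mind that rules added with iptables -I do not survive a reboot.  If your hosts are running firewalld (the Fedora default), a sketch of the persistent equivalent would be the following; the same idea applies to the kubelet port (10250) opened on fed2 later:

    # firewall-cmd --permanent --add-port=4001/tcp
    # firewall-cmd --permanent --add-port=8080/tcp
    # firewall-cmd --reload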
           
    

    Take a look at what ports the services are running on.

           
    # netstat -tulnp
           
    

    5. Configure fed2.

    These are my service files.  They have been changed from what was distributed with the package.

    Make a copy first, then review what I have here.

           
    # cp /usr/lib/systemd/system/kubernetes-kubelet.service{,.orig}
    # cp /usr/lib/systemd/system/kubernetes-proxy.service{,.orig}
           
    
           
    
    # cat /usr/lib/systemd/system/kubernetes-kubelet.service
    [Unit]
    Description=Kubernetes Kubelet
    
    [Service]
    ExecStart=/usr/bin/kubernetes-kubelet --logtostderr=true -etcd_servers=http://10.x.x.241:4001 -address=10.x.x.240 -hostname_override=10.x.x.240
    
    [Install]
    WantedBy=multi-user.target
    
    
    # cat /usr/lib/systemd/system/kubernetes-proxy.service
    [Unit]
    Description=Kubernetes Proxy
    
    [Service]
    ExecStart=/usr/bin/kubernetes-proxy --logtostderr=true -etcd_servers=http://10.x.x.241:4001
    
    [Install]
    WantedBy=multi-user.target
           
    

    Start the appropriate services on fed2.

           
    # systemctl daemon-reload
    
    # systemctl enable kubernetes-proxy.service
    # systemctl restart kubernetes-proxy.service
    # systemctl status kubernetes-proxy.service
    
    # systemctl enable kubernetes-kubelet.service
    # systemctl restart kubernetes-kubelet.service
    # systemctl status kubernetes-kubelet.service
    
    # systemctl restart docker
    # systemctl status docker
    # systemctl enable docker
           
    

    Take a look at what ports the services are running on.

           
    # netstat -tulnp
           
    

    Open up the port for the kubernetes kubelet server on the minion (fed2).

           
    # iptables -I INPUT -p tcp --dport 10250 -j ACCEPT
           
    

    Now the two servers are set up to kick off a sample application.  In this case, we'll deploy a web server to fed2.  Start off by making a file in root's home directory on fed1 called apache.json that looks like this:

           
    # cat apache.json
    {
      "id": "apache",
      "desiredState": {
        "manifest": {
          "version": "v1beta1",
          "id": "apache-1",
          "containers": [{
            "name": "master",
            "image": "fedora/apache",
            "ports": [{
              "containerPort": 80,
              "hostPort": 80
            }]
          }]
        }
      },
      "labels": {
        "name": "apache"
      }
    }
           
    

    This json file describes the attributes of the application environment.  For example, it gives the pod an "id", "name", "ports", and "image".  Since the fedora/apache image doesn't exist in our environment yet, it will be pulled down automatically as part of the deployment process.  I have seen errors, though, where kubernetes was looking for a cached image; in that case a manual "docker pull fedora/apache" seemed to resolve it (see the sketch below).
    For more information about which options can go in the schema, check out the docs on the kubernetes GitHub page.
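
    If you do hit the cached-image error, pre-pulling the image on the minion (fed2) before creating the pod is a simple workaround:

    # docker pull fedora/apache
    # docker images | grep fedora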

    Now, deploy the fedora/apache image via the apache.json file.

           
    # /usr/bin/kubernetes-kubecfg -c apache.json create pods
    
           
    

    You can monitor progress of the operations with these commands:
    On the master (fed1) -

           
    # journalctl -f -xn -u kubernetes-apiserver -u kubernetes-controller-manager -u etcd
    
           
    

    On the minion (fed2) -

           
    # journalctl -f -xn -u kubernetes-kubelet.service -u kubernetes-proxy -u docker
    
           
    

    This is what a successful result should look like:

           
    # /usr/bin/kubernetes-kubecfg -c apache.json create pods
    I0730 15:13:48.535653 27880 request.go:220] Waiting for completion of /operations/8
    I0730 15:14:08.538052 27880 request.go:220] Waiting for completion of /operations/8
    I0730 15:14:28.539936 27880 request.go:220] Waiting for completion of /operations/8
    I0730 15:14:48.542192 27880 request.go:220] Waiting for completion of /operations/8
    I0730 15:15:08.543649 27880 request.go:220] Waiting for completion of /operations/8
    I0730 15:15:28.545475 27880 request.go:220] Waiting for completion of /operations/8
    I0730 15:15:48.547008 27880 request.go:220] Waiting for completion of /operations/8
    I0730 15:16:08.548512 27880 request.go:220] Waiting for completion of /operations/8
    Name                Image(s)            Host                Labels
    ----------          ----------          ----------          ----------
    apache              fedora/apache       /                   name=apache
    
    
           
    

    After the pod is deployed, you can also list the pod.

           
    # /usr/bin/kubernetes-kubecfg list pods
    Name                Image(s)            Host                Labels
    ----------          ----------          ----------          ----------
    apache              fedora/apache       10.x.x.240/      name=apache
    redis-master-2      dockerfile/redis    10.x.x.240/      name=redis-master
           
    

    You can get even more information about the pod like this.

           
    # /usr/bin/kubernetes-kubecfg -json get pods/apache
           
    

    Finally, on the minion (fed2), check that the service is available, running, and functioning.

           
    # docker images | grep fedora
    fedora/apache       latest                6927a389deb6        10 weeks ago        450.6 MB
    
    # docker ps -l
    CONTAINER ID        IMAGE                  COMMAND             CREATED             STATUS              PORTS               NAMES
    d5871fc9af31        fedora/apache:latest   /run-apache.sh      9 minutes ago       Up 9 minutes                            k8s--master--apache--8d060183
    
    # curl http://localhost
    Apache
           
    

    To delete the pod (this also removes its container):

           
    /usr/bin/kubernetes-kubecfg -h http://127.0.0.1:8080 delete /pods/apache
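
    To confirm the pod is gone, list the pods again; the apache entry should no longer show up:

    # /usr/bin/kubernetes-kubecfg list pods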
           
    

    That's it.

    Of course this just scratches the surface.  I recommend you head over to the kubernetes GitHub page and follow the guestbook example.  It's a bit more complicated but should expose you to more functionality.

    You can play around with other Fedora images by building from the Fedora Dockerfiles on GitHub.

    Last updated: February 5, 2024
