
Bring your Kubernetes workloads to the edge

November 22, 2021
Dejan Bosanac
Related topics: Edge computing, Kubernetes
Related products: Red Hat OpenShift


    Although cloud-based applications continue to grow, some use cases require moving workloads out of cloud data centers. The reason is usually to keep your computing power closer to the users, the source of data, or other things you want to control. Instead of running these workloads as separate entities, you might want to create uniform systems, extending clouds to the edge. This technique is known as edge computing.

    The past few years have seen a proliferation of edge computing infrastructure. Today you have a wealth of options, from running containers directly in container runtimes (such as Podman), to joining nodes to Kubernetes clusters, to running whole lightweight Kubernetes distributions on edge nodes.

    As infrastructure becomes widely accessible, developers need to think through the edge computing journey. An important question in this arena is: How do we build our workloads for this new world? This article discusses the current state of tools for managing containers at the edge, including what WebAssembly (also known as Wasm) offers in this domain and what to expect from the field of edge computing in the near future.

    Managing edge workloads in Kubernetes

    Before we dive into technical details, let's quickly recap what's special about edge workloads. This is the foundation for further technical discussions and solutions.

    If we start with the premise that the edge is an extension of the cloud, not independent on-site infrastructure, we can better understand the emphasis on using existing cloud-native tools to develop, deploy, and manage edge workloads. Ideally, we want to create a continuum between devices, edge, and cloud, and treat them as a single system. We want our experience to be as similar as possible when developing the different parts of that system.

    But workloads running in the cloud and on the edge encounter different environments. We need to account for those differences in order to build successful systems. Here are some of the most commonly recognized traits of edge workloads:

    • Mixed architectures: While running in the cloud assumes a uniform hardware architecture among the nodes running a workload, you can expect more variety at the edge. A common example is to target ARM platforms, such as the Raspberry Pi, in production while developing your workloads on x86 Linux or macOS workstations.
    • Access to peripherals: Workloads running in the cloud rarely need access to specialized hardware on the nodes, with the exception of using GPUs for machine learning. In contrast, one of the main points of edge workloads is to interact with their environments. Although it's not always the case, you commonly want access to sensors, actuators, cameras, and other devices connecting over short-range protocols such as Bluetooth. You need to take additional care to provide your services with this kind of access.
    • Resource consumption: While the whole premise of the cloud is painless horizontal resource scaling, that is not the case on the edge. Even the best scenarios offer very limited clusters that you can't easily autoscale. So, you need to pay extra attention to the compute resources available to your edge workload. Things such as CPU and memory usage, network availability, and bandwidth play very important roles when choosing technologies and practices for edge workloads.
    • Size: This trait also fits under resource consumption, but it's important enough to stand on its own. Because network bandwidth and storage on the nodes can be limited, you must account for the distribution size of edge workloads.

    Containers at the edge

    Having all of this in mind, let's review the current state of running containers at the edge.

    How to build images for different targets

    We will start with building images for a targeted architecture. There are multiple ways to do such builds; the most obvious is to use actual hardware or virtual machines of the target architecture to build the images. Although this works, it can be slow and impractical to fully automate in the actual build process.

    Luckily, most of the image-building tools we use today support a target architecture that's different from the system doing the build. The following example shows both Docker buildx and Buildah building images for different targets:

    $ docker buildx build --platform linux/arm -t quay.io/dejanb/drogue-dht-py -f Dockerfile --push .
    $ buildah bud --tag quay.io/dejanb/drogue-dht-py --override-arch arm64 -f Dockerfile .
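
    If you need a single image reference that works across several architectures, buildx can also build and push a multi-platform manifest list in one invocation. This is a sketch, assuming your base image is available for every listed platform; the image name is reused from the example above:

    $ docker buildx build --platform linux/amd64,linux/arm64,linux/arm \
      -t quay.io/dejanb/drogue-dht-py -f Dockerfile --push .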

    How to run containers at the edge

    Now that we have our images, let's see how to run them. Again, our two usual suspects, Podman and Docker, provide everything required for a workload to access its peripherals. Consider the following example:

    $ podman run --privileged --rm -ti --device=/dev/gpiochip0 \
      -e ENDPOINT=https://http.sandbox.drogue.cloud/v1/foo \
      -e APP_ID=dejanb \
      -e DEVICE_ID=pi \
      -e DEVICE_PASSWORD=foobar \
      -e GEOLOCATION="{\"lat\": \"44.8166\", \"lon\": \"20.4721\"}" \
    quay.io/dejanb/drogue-dht-py:latest

    Two parts of this command are worth highlighting: to access a peripheral device, you need to run the container in privileged mode and pass the path of the device you want to access (here, /dev/gpiochip0). You can find the whole example used in this discussion in my GitHub repository.

    Podman, used in this example, has more tricks to help you manage your workloads at the edge. First, this very simple example runs only a single container, but Podman allows you to pack multiple containers into a single pod. This is very useful on its own, but it also makes it easier to deploy those pods on Kubernetes clusters if needed.

    Additionally, Podman's excellent integration with systemd and its ability to auto-update (and roll back) containers make it an ideal lightweight platform for running containers at the edge. For more complex use cases, you could integrate the application into existing or new Kubernetes clusters. But there's nothing stopping you from starting simply with Podman and moving to more complex scenarios as you need them.
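
    Here is a minimal sketch of that systemd workflow, assuming the image from the earlier example and a hypothetical container name (the environment flags from the earlier run are omitted for brevity). The auto-update timer periodically checks the registry for newer images and restarts the unit when one is found:

    $ podman create --name dht-sensor \
      --label io.containers.autoupdate=registry \
      quay.io/dejanb/drogue-dht-py:latest
    $ podman generate systemd --new --name dht-sensor \
      > ~/.config/systemd/user/dht-sensor.service
    $ podman rm dht-sensor  # the --new unit recreates the container on start
    $ systemctl --user daemon-reload
    $ systemctl --user enable --now dht-sensor.service
    $ systemctl --user enable --now podman-auto-update.timer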

    WebAssembly and Wasi

    Until recently, OCI (Open Container Initiative) containers were the only game in town when it came to running containers in the cloud. Naturally, that limitation carried over to the edge as well. Lately, however, more attention is being paid to WebAssembly (or Wasm) as an alternative approach.

    So, what is WebAssembly? It's an open standard that defines a small, portable virtual machine. It was originally designed for embedding high-performance native code into web pages, but it has since found its way outside of web browsers. For example, Wasi provides a system interface that allows you to run WebAssembly binaries on servers.

    The rest of this article discusses the current state of running Wasm payloads in the cloud and how the format can play a role on the edge in the future.

    WebAssembly in the cloud (with Rust)

    Let's see how we can create WebAssembly workloads and run them in the cloud. For this example, we'll use a simple Rust program that sends and receives CloudEvents using HTTP.

    Although this particular example uses Rust, most of today's popular programming languages can be compiled to WebAssembly. Even so, Rust is interesting in this environment because it provides a modern language that is memory safe and can match C/C++ in terms of performance and binary size. Both are important traits in edge computing workloads.
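
    To give a flavor of what such a program looks like, here is a minimal sketch that builds a CloudEvent with the Rust CloudEvents SDK. The event type, source, and payload below are hypothetical, and the builder API may differ slightly between SDK versions:

    use cloudevents::{EventBuilder, EventBuilderV10};
    use serde_json::json;

    fn main() {
        // Build a CloudEvent carrying a hypothetical sensor reading.
        let event = EventBuilderV10::new()
            .id("0001")
            .ty("io.example.sensor.reading")
            .source("drogue://dejanb/pi")
            .data("application/json", json!({ "temperature": 23.5 }))
            .build()
            .expect("failed to build CloudEvent");

        // In the real program, this event is sent and received over HTTP.
        println!("{:?}", event);
    }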

    Lucky for us, Rust comes with great out-of-the-box support for compiling programs for multiple target architectures. In that sense, Wasm is just another target. So, executing the following command adds a new target to the system:

    $ rustup target add wasm32-wasi
    

    Then, with this next command, you can easily build a Wasm binary:

    $ cargo build --target wasm32-wasi --release
    

    If you check the binary, you can see that its size is just around 3MB, which is a great start.
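
    One way to check the size (the path comes from the cargo build above; the exact number will vary with your dependencies):

    $ ls -lh target/wasm32-wasi/release/ce-wasi-example.wasm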

    Now we just need an environment in which to run this binary. Because we are planning to run it on the server side, Wasi is the main candidate. Wasmtime is an appropriate runtime for the job because it supports Wasi binaries.
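
    If you have Wasmtime installed, trying the binary locally before involving any cluster is a one-liner:

    $ wasmtime target/wasm32-wasi/release/ce-wasi-example.wasm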

    Running Kubernetes nodes on WebAssembly

    So, how can we bring all this to the cloud? Krustlet is a project that implements a kubelet (the Kubernetes node agent) that can run Wasm and Wasi payloads. Krustlet actually uses Wasmtime to run the binaries. But before we can get those binaries to Krustlet, we need to package them as OCI container images and store them in a container registry. The wasm-to-oci project lets us do that:

    $ wasm-to-oci push target/wasm32-wasi/release/ce-wasi-example.wasm ghcr.io/dejanb/ce-wasi-example:latest
    

    The important thing to note here is that our OCI image is small (very similar in size to the binary on which it is based), which is one criterion I mentioned as important for edge use cases. The other thing to note is that not all container registries accept this kind of image today, but that should change soon.

    Create a pod and schedule it

    With a container image ready, you can create a pod and schedule it in your cluster:

    apiVersion: v1
    kind: Pod
    metadata:
      name: ce-wasi-example
      labels:
        app: ce-wasi-example
      annotations:
        alpha.wasi.krustlet.dev/allowed-domains: '["https://postman-echo.com"]'
        alpha.wasi.krustlet.dev/max-concurrent-requests: "42"
    spec:
      automountServiceAccountToken: false
      containers:
        - image: ghcr.io/dejanb/ce-wasi-example:latest
          imagePullPolicy: Always
          name: ce-wasi-example
          env:
            - name: RUST_LOG
              value: info
            - name: RUST_BACKTRACE
              value: "1"
            - name: ECHO_SERVICE_URL
              value: "https://postman-echo.com/post"
      tolerations:
        - key: "node.kubernetes.io/network-unavailable"
          operator: "Exists"
          effect: "NoSchedule"
        - key: "kubernetes.io/arch"
          operator: "Equal"
          value: "wasm32-wasi"
          effect: "NoExecute"
        - key: "kubernetes.io/arch"
          operator: "Equal"
          value: "wasm32-wasi"
          effect: "NoSchedule"

    The special parts of this configuration are the pod's tolerations, which mark the workload as targeting the wasm32-wasi architecture and allow the cluster to schedule the container on the Krustlet node (Krustlet taints its node so that ordinary containers are not scheduled there).
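
    Assuming the manifest above is saved as ce-wasi-example.yaml (a hypothetical filename), applying and inspecting it follows the ordinary Kubernetes workflow:

    $ kubectl apply -f ce-wasi-example.yaml
    $ kubectl get pod ce-wasi-example -o wide
    $ kubectl logs ce-wasi-example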

    And that's it: We've gone from Wasm to the cloud in a few simple steps. You can find the whole demo covered in this section in my GitHub repository.

    Is WebAssembly ready to use?

    You might be wondering why we would want to run WebAssembly in the cloud. How does it compare to containers? And is it useful in the context of edge computing? Let's consider these questions next.

    WebAssembly versus containers

    We'll start with how WebAssembly improves on OCI containers. Wasm was designed to be high-performing and portable, so it should be no surprise that it really shines in those areas. Remember that the OCI image we created from our demo binary was only about 3MB? That's a huge difference compared to typical container images, which usually weigh in at hundreds of megabytes. There are many ways to minimize the size of your container images, but in any case, they will probably be an order of magnitude bigger than comparable Wasm binaries (and the corresponding images).

    With its compact size, great runtime performance, and fast startup, it's no wonder that Wasm has been employed in a lot of applications as embedded logic for existing systems, such as filters, plugins, and so on. But here we are trying to figure out how to build whole applications using WebAssembly. This brings us back to Wasi and whether it's ready to replace containers.

    Wasi is in early development

    Containers rely on the underlying kernel for the sandboxing and security of the process. In contrast, Wasi was designed with its own system interface to provide that protection. This means that the layer between your services and the operating system has to be implemented according to Wasi's new interface specifications, so runtimes such as Wasmtime will only gradually catch up with containers.

    If we go back to our example, you might notice that we weren't ambitious: we didn't try anything complicated, such as reading sensors. We merely made HTTP calls and got them working nicely with the Rust CloudEvents SDK. Moreover, even that connection relied on an experimental HTTP library.

    This does not mean that Wasi lacks potential, just that it's in early development. The framework still needs proper support for networking, threading, device interfaces, and the like before it's really ready for the most common edge use cases.

    Finally, you can take virtually any code you have and run it as a container. Wasi, even when fully developed, will probably impose some restrictions on the libraries and features you can use. How all that will look remains to be seen and will probably differ between ecosystems.

    Conclusion

    This article started by discussing special considerations for edge workloads and how container environments deal with some of them. We also saw what WebAssembly brings to the table. Although WebAssembly is quite new, with the amount of buzz around it these days, I expect we'll see very interesting developments in the near future. This area will be good to watch and contribute to, and the developer experience for edge computing will keep improving. I hope I've managed to spark your interest in this topic and that you will find more areas of edge computing to explore.

    Last updated: September 20, 2023

