AArch64

IoT edge development and deployment with containers through OpenShift: Part 2


In the first part of this series, we saw how effective a platform as a service (PaaS) such as Red Hat OpenShift is for developing IoT edge applications and distributing them to remote sites, thanks to containers and Red Hat Ansible Automation technologies.

Usually, we think of IoT applications as something designed specifically for low-power devices with limited capabilities. IoT devices might also use a different CPU architecture or platform. For this reason, we tend to use completely different technologies for IoT application development than for services that run in a data center.

In part two, we explore some techniques that allow you to build and test containers for alternate architectures, such as ARM64, on an x86_64 host. The goal we are working toward is to enable you to use the same language, framework, and development tools for code that runs in your data center or all the way out to IoT edge devices. In this article, I’ll show how to build and run an AArch64 container image on an x86_64 host and then how to build a Raspberry Pi 3 (RPi3) image to run it on physical hardware, using Fedora and Podman.
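
To give a flavor of that cross-architecture workflow before you read the full article, here is a minimal sketch, assuming an x86_64 Fedora host with Podman and the qemu-user-static package installed (which registers binfmt_misc handlers so aarch64 binaries run under emulation). The image tag and the local Containerfile are illustrative assumptions, not details taken from the article:

    #!/usr/bin/env python3
    """Minimal sketch: build and run an AArch64 (arm64) container image on an
    x86_64 Fedora host with Podman. Assumes qemu-user-static is installed so
    binfmt_misc can transparently emulate aarch64 binaries, and that a
    Containerfile exists in the current directory."""
    import subprocess

    IMAGE = "localhost/iot-edge-demo:aarch64"  # hypothetical image tag

    def run(cmd):
        """Echo and execute a command, raising if it fails."""
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Build the image for the arm64 architecture from the local Containerfile.
    run(["podman", "build", "--arch", "arm64", "-t", IMAGE, "."])

    # Run the freshly built image; under qemu emulation, `uname -m` should
    # report aarch64 even though the host is x86_64.
    run(["podman", "run", "--rm", IMAGE, "uname", "-m"])

The same image that passes this local test can then be pushed to a registry and pulled onto the edge hardware, which is the workflow the article walks through.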

Continue reading “IoT edge development and deployment with containers through OpenShift: Part 2”

The ARM Arc Part 3



This week heralded the announcement of Red Hat Enterprise Linux Server for ARM Development Preview 7.1, the next milestone in Red Hat’s exploration of the potential of ARM servers. There is a lot in a name, and this one is a mouthful.

The Linux kernel is famous: it is the namesake of the complete operating system, but it does not exist on its own. A complete OS runs on hardware, starts out in firmware, and loads the kernel, which in turn loads a software and service initialization system, all of which require function libraries, all of which were built with compiler tools that do the magic conversion from human-readable source code to machine-readable binaries. When ARM designed the AArch64 architecture, they also had to provide ports and specifications for the firmware, the kernel, the libraries, the compiler, and so on. Hundreds of packages were affected. Not only did they need to provide ports; those ports needed to be designed and written correctly, in a style acceptable to each of the communities, whose coding standards are frequently rigorous, distinct, and strictly enforced. To top it all off, this work needed to be done before the actual hardware existed, which meant writing software simulators to check all the work and extensive documentation to empower community collaboration.

Continue reading “The ARM Arc Part 3”


The ARM Arc Part 2

This is a continuation of The ARM Arc Part 1, published in July.

It all started in 2012, when the Fedora ARM community decided to move from the legacy ARMv5 software floating-point ABI to the new ARMv7 hard-float ABI. The move meant better-performing code, native atomic operations, threading support, and other modern OS features becoming available to ARM software developers on a general-purpose OS. Doing the work required a way to bootstrap a new architecture, which is notoriously difficult due to Fedora’s inter-dependent package structure. When building package A depends on having package B, building package B depends on having package A, and you have 15,000 such packages, it poses an epic challenge.

Continue reading “The ARM Arc Part 2”


The importance of standardization to emerging 64-bit ARM servers

At Red Hat, we have been involved in the creation of many of the core industry standards that will be used in building future 64-bit ARM-powered servers. Over the past few years, we have assisted in the creation of such standards as the Server Base System Architecture (SBSA), the UEFI 2.4 and ACPI 5.1 bindings for the 64-bit ARM architecture, and other standards and specifications that have yet to be announced. I believe that each of these standards forms an essential component in the creation of a general-purpose computing platform suited to widespread enterprise adoption, as opposed to an embedded or appliance-like device that is tailored to one specific purpose (but for which the operating system and platform are welded together). Such general-purpose platforms are important because customers and end users have many expectations around interoperability and system behavior, shaped by decades of working with highly reliable (and highly standardized) platforms. And while it is important to foster true innovation, gratuitous differentiation only serves to harm everyone involved. It might be fun to build an embedded appliance for a specific application, but using that approach in designing a server is a surefire way to ensure a lack of commercial success.

Continue reading “The importance of standardization to emerging 64-bit ARM servers”
