Edge computing continues to gain momentum as more companies increase their investments in edge, even if many are only dipping their toes in with small-scale pilot deployments. Emerging use cases like the Internet of Things (IoT), augmented and virtual reality (AR/VR), robotics, and telecommunications network functions are often cited as key drivers for moving computing to the edge. Traditional enterprises are also looking at edge computing to better support their remote offices, retail locations, manufacturing plants, and more. At the network edge, service providers can deploy an entirely new class of services that take advantage of their proximity to customers.
In this article, we consider edge computing from the perspective of application developers. The developer perspective is vital because the applications being developed today—leveraging emerging technologies like artificial intelligence and machine learning (AI/ML)—reveal new opportunities to deliver services and optimize costs.
Note: For more about why companies are increasingly looking at edge computing, see my blog post We're headed to edge computing.
The term edge computing has been used to describe everything from actions performed by tiny IoT devices to datacenter-like infrastructure. At the conceptual level, edge computing refers to the idea of bringing computing closer to where it's consumed or closer to the sources of data.
Although the underlying infrastructure is a key enabler, the benefits of edge computing are realized through the applications. If done right, edge applications can enable new experiences across a range of industries:
- Healthcare: Doctors and nurses can advance patient care by integrating live data from patient fitness trackers, medical equipment, and environmental conditions into medical records.
- Smart infrastructure: Cities can leverage real-time data from roadside sensors and cameras to improve traffic flow by synchronizing traffic lights and reducing or increasing traffic lanes; improve traffic safety by detecting wrong-way drivers and dynamically updating the speed limit; and optimize shipping-port utilization by monitoring the loading and unloading of cargo ships.
- Autonomous driving: Self-driving cars can use real-time data to safely navigate a range of driving conditions.
- Industry 4.0: Managers on the factory floor can use AI/ML analytics to improve equipment utilization and maintenance.
- Far edge services: Service providers can use their proximity to customers to offer low-latency (sub-1ms), high-bandwidth, and location-based services for use cases like AR/VR or virtual desktop infrastructure (VDI).
Technology best practices for edge development
Edge computing gives companies the flexibility and simplicity of cloud computing for a distributed pool of resources across a large number of locations. In the context of IoT, edge computing's approach to application development differs from the embedded systems of the past. Embedded applications required heavily customized operating systems that were dependent on the underlying hardware, so developers working on them needed a deep understanding of the hardware and interfaces their applications used. Embedded development tools also lacked the flexibility and capabilities we see in the tools used for edge computing today.
Consider these technology best practices for edge development:
- Consistent tooling: Developers need to be able to use the same tools regardless of where the application is deployed. That way, creating edge applications requires no special skills beyond those needed for non-edge applications. As an example of edge tooling, Red Hat CodeReady Workspaces, built on Eclipse Che, provides a Kubernetes-native development solution with an in-browser IDE. This tooling supports rapid application development that can be easily deployed at the edge or in the cloud.
- Open APIs: Well-defined and open APIs make it possible to access real-time data programmatically so that businesses can offer new services (and classes of services) that were previously impossible. Developers use open APIs to create standards-based solutions that can access data without concern for the underlying hardware interfaces.
- Accelerated application development: Edge architectures are still evolving, but the design decisions made today will have a lasting impact on future capabilities. Instead of offerings purpose-built for the edge, which limits developer agility, it is better to invest in technologies that can work anywhere—cloud, on-premises, and at the edge. Containers, Kubernetes, and lightweight application services are all examples of technologies that accelerate application development from cloud to edge.
- Containerization: Most new applications are built as containers because containerized applications are easy to deploy and manage at scale. Containers are an especially good fit for edge application requirements of modularity, segregation, and immutability. Applications will need to be deployed on many different edge tiers, each with their own unique resource characteristics. Combined with microservices, containers representing function instances can be scaled up or down depending on changing resources and other conditions.
Note: For more about how resource requirements vary between edge and cloud computing, see No more illusions of infinite capacity.
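To make the open-APIs point concrete, here is a minimal sketch of how a developer might consume real-time data from a sensor endpoint without touching any hardware interface. The payload shape and field names (`readings`, `device`, `temperature_c`) are illustrative assumptions, not a real API:

```python
import json

def parse_readings(payload: str) -> dict:
    """Convert a raw JSON payload from a (hypothetical) sensor API into a
    device-id -> Celsius reading map, skipping malformed entries."""
    result = {}
    for entry in json.loads(payload).get("readings", []):
        device = entry.get("device")
        value = entry.get("temperature_c")
        if device is None or not isinstance(value, (int, float)):
            continue  # ignore entries missing a device id or a numeric value
        result[device] = float(value)
    return result

# A sample payload in the shape such an API might return (hypothetical).
sample = json.dumps({
    "readings": [
        {"device": "sensor-01", "temperature_c": 21.5},
        {"device": "sensor-02", "temperature_c": "n/a"},  # malformed entry
    ]
})
print(parse_readings(sample))  # {'sensor-01': 21.5}
```

Because the code depends only on the documented JSON contract, the same logic works whether the data comes from a roadside camera gateway or a factory-floor PLC adapter.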
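The scale-up/scale-down behavior described for containerized function instances can be sketched as a toy sizing function. The queue-depth metric, per-replica capacity, and replica limits below are illustrative assumptions, not the logic of any real autoscaler such as Kubernetes' HorizontalPodAutoscaler:

```python
def desired_replicas(queue_depth: int,
                     per_replica_capacity: int = 100,
                     min_replicas: int = 1,
                     max_replicas: int = 10) -> int:
    """Return the replica count needed to drain the work queue,
    clamped to the edge node's limits (illustrative sketch only)."""
    needed = -(-queue_depth // per_replica_capacity)  # ceiling division
    return max(min_replicas, min(max_replicas, needed))

# A backlog of 350 items at 100 items per replica needs 4 replicas;
# an idle queue still keeps the configured minimum running.
print(desired_replicas(350), desired_replicas(0))
```

On a resource-constrained edge tier, `max_replicas` would simply be set lower than in the cloud, while the application code stays the same.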
Additional technology considerations
For developers, it is important to know that there will not be an either/or choice between edge computing and centralized computing. As edge computing gains greater adoption in the marketplace, the best solutions will often encompass a combination of the two. In such a hybrid computing model, centralized computing will be used for compute-intensive workloads, data aggregation and storage, AI/ML, coordinating operations across geographies, and traditional back-end processing. Edge computing, on the other hand, can help solve problems at the source, in near real-time. Distributed architectures will allow us to place applications at any tier from cloud to edge, wherever it makes the most sense.
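One hedged illustration of such a placement decision: a toy function that picks a tier from a workload's latency budget. The tier names and the round-trip latency thresholds are assumptions for illustration only, not measurements:

```python
def place_workload(latency_budget_ms: float,
                   edge_latency_ms: float = 5.0,
                   cloud_latency_ms: float = 60.0) -> str:
    """Pick the cheapest tier whose assumed round-trip latency still
    fits the workload's budget (illustrative thresholds only)."""
    if latency_budget_ms >= cloud_latency_ms:
        return "cloud"   # latency-tolerant: aggregation, training, back end
    if latency_budget_ms >= edge_latency_ms:
        return "edge"    # near real-time: local inference, control loops
    return "device"      # hard real-time: must run at the data source

# Analytics can tolerate 500 ms; an AR overlay needs ~10 ms.
print(place_workload(500), place_workload(10))
```

In practice such decisions weigh bandwidth, data gravity, and cost alongside latency, but the point stands: the same containerized application can land at whichever tier fits.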
Monolithic edge solutions that require custom tooling and don't integrate with the overall IT infrastructure could cause major pain when edge computing achieves mass deployment. Open source is an obvious choice for providing flexibility, while also future-proofing our current investments in edge computing.
As computing moves increasingly to the edge, the benefits of the edge will be realized on the backs of applications. Instead of treating edge computing as a separate computing paradigm, a preferable approach is a hybrid computing model that combines the best of centralized and edge computing. Developers building edge applications should leverage modern application development principles: consistent tooling (regardless of where the application is deployed), open APIs, and highly modular yet scalable technologies such as containers, Kubernetes, and lightweight application services. Open source provides flexibility, while also future-proofing current investments in edge computing.

Last updated: July 14, 2020