Kubernetes

Subsecond deployment and startup of Apache Camel applications

The integration space is constantly changing. Many open source projects and closed source technologies did not withstand the test of time and have disappeared from middleware stacks for good. After a decade, however, Apache Camel is still here and becoming even stronger for the next decade of integration. In this article, I’ll provide some history of Camel and then describe two changes coming to Apache Camel now (and later to Red Hat Fuse) and why they are important for developers. I call these changes subsecond deployment and subsecond startup of Camel applications.

Continue reading “Subsecond deployment and startup of Apache Camel applications”

Use the Kubernetes Python client from your running Red Hat OpenShift pods

Red Hat OpenShift is part of the Cloud Native Computing Foundation (CNCF) Certified Kubernetes program, ensuring portability and interoperability for your container workloads. This also lets you use Kubernetes tools, such as kubectl, to interact with an OpenShift cluster, and you can rest assured that all the APIs you know and love are right there at your fingertips.

The Kubernetes Python client is another great tool for interacting with an OpenShift cluster, allowing you to perform actions on Kubernetes resources with Python code. It also has applications within a cluster. We can configure a Python application running on OpenShift to consume the OpenShift API, and list and create resources. We could then create containerized batch jobs from the running application, or a custom service monitor, for example. It sounds a bit like “OpenShift inception,” using the OpenShift API from services created using the OpenShift API.

In this article, we’ll create a Flask application running on OpenShift. This application will use the Kubernetes Python client to interact with the OpenShift API, list other pods in the project, and display them back to the user.
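To make that concrete, here is a minimal sketch (not the article’s exact code) of such a Flask endpoint: it loads the in-cluster configuration provided by the pod’s service account and uses the Kubernetes Python client to list the pods in the current project. The route path, port, and response fields are illustrative assumptions, and the service account needs permission to list pods.

```python
# Minimal sketch: list pods in the current project from inside an OpenShift pod.
# Assumes the flask and kubernetes packages are installed and the pod's service
# account is allowed to list pods (for example, via the "view" role).
from pathlib import Path

from flask import Flask, jsonify
from kubernetes import client, config

app = Flask(__name__)

# Inside a pod, the API server address and credentials come from the mounted
# service account token, so no kubeconfig file is needed.
config.load_incluster_config()

# The current project/namespace is also available from the service account mount.
NAMESPACE = Path(
    "/var/run/secrets/kubernetes.io/serviceaccount/namespace"
).read_text().strip()


@app.route("/pods")
def list_pods():
    """Return the name and phase of each pod in the current project."""
    v1 = client.CoreV1Api()
    pods = v1.list_namespaced_pod(namespace=NAMESPACE)
    return jsonify(pods=[
        {"name": p.metadata.name, "phase": p.status.phase}
        for p in pods.items
    ])


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

When developing outside the cluster, you could swap config.load_incluster_config() for config.load_kube_config() to reuse your local login session instead of the mounted service account token.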

Continue reading “Use the Kubernetes Python client from your running Red Hat OpenShift pods”

Guru Night at Red Hat Summit: Hands-on experience with serverless computing

Millions of developers worldwide want to learn more about serverless computing. If you’re one of the lucky thousands attending Red Hat Summit in Boston May 7-9, you can gain hands-on experience with the help of Burr Sutter and the Red Hat Developer team.

Guru Night is a BYOL (bring your own laptop) event taking place Wednesday, May 8 from 5:00 p.m. to 8:00 p.m. at the Boston Convention and Exhibition Center in ML2 East-258AB. (Doubtless there will be a map to show you where or what ML2 East etc. is; we have no idea.) Head to the signup page and fill out your details now.

TL;DR: Beer and pizza will be served.

We felt compelled to point that out. But read on.

Continue reading “Guru Night at Red Hat Summit: Hands-on experience with serverless computing”

Build and deploy an API with Camel K on Red Hat OpenShift

With the growing number of APIs and microservices, the time available to create and integrate them keeps shrinking. That’s why we need an integration framework with tooling to quickly build an API and support the full API life cycle. Camel K lets you build and deploy your API on Kubernetes or Red Hat OpenShift in less than a second. Unbelievable, isn’t it?

For those who are not familiar with it, Camel K is a subproject of Apache Camel whose goal is a lightweight runtime for running integration code directly on cloud platforms like Kubernetes and Red Hat OpenShift. It was inspired by serverless principles and will also target Knative shortly. The article by Nicola Ferraro gives a good introduction.

In this article, I’ll show how to build an API with Camel K. We will start by designing the API in Apicurio Studio, which is based on the OpenAPI standard, and then provide the resulting OpenAPI document to Camel K to implement the API and deploy it to Red Hat OpenShift.

Continue reading “Build and deploy an API with Camel K on Red Hat OpenShift”

Build your Kubernetes armory with Minikube, Kail, and Kubens

Kubernetes has become the de facto development platform for building cloud-native applications. As developers, we want to be productive from the word go, or, shall we say, from the word code. But to be productive, we must be armed with the right set of tools. In this article, I will look at three important tools that should become part of your Kubernetes tool chest, or armory.

Continue reading “Build your Kubernetes armory with Minikube, Kail, and Kubens”

Introduction to Kubernetes: From container to containers

After you’ve been introduced to Linux containers and have run a simple application, the next step seems obvious: get multiple containers running together to form an entire system. Although there are multiple solutions, the clear winner is Kubernetes. In this article, we’ll look at how Kubernetes facilitates running multiple containers in a system.

Continue reading “Introduction to Kubernetes: From container to containers”

How to set up your first Kubernetes environment on Windows

You’ve crushed the whole containers thing—it was much easier than you anticipated, and you’ve updated your resume. Now it’s time to move into the spotlight, walk the red carpet, and own the whole Kubernetes game. In this blog post, we’ll get our Kubernetes environment up and running on Windows 10, spin up an image in a container, and drop the mic on our way out the door—headed to Coderland.

Continue reading “How to set up your first Kubernetes environment on Windows”

How to set up your first Kubernetes environment on macOS

By following my previous article in this series, you’ve crushed the whole containers thing. It was much easier than you anticipated, and you’ve updated your resume. Now it’s time to move into the spotlight, walk the red carpet, and own the whole Kubernetes game. In this blog post, we’ll get our Kubernetes environment up and running on macOS, spin up an image in a container, and head to Coderland.

Continue reading “How to set up your first Kubernetes environment on macOS”

Eclipse Wild Web Developer adds a powerful YAML editor with built-in Kubernetes support

YAML Ain’t Markup Language (YAML) has grown increasingly popular over the past few years. It is a human-readable, text-based format for specifying configuration information, and it is used by many platforms, such as Kubernetes and Red Hat OpenShift.

Eclipse Wild Web Developer is an Eclipse IDE extension that provides a rich development experience for editing typical web and configuration files. According to the project page, “Eclipse Wild Web Developer relies on existing mainstream and maintained components to provide the language smartness, over popular configuration files like TextMate and protocols like Language Server Protocol or Debug Adapter Protocol.”

Recently, the YAML Language Server was integrated into Eclipse Wild Web Developer. This feature-rich language server implementation also powers editors such as VS Code, Eclipse Che, and Atom. The integration brings the features the language server supports (validation, autocompletion, hover support, and document outlining) to the Eclipse Generic Editor, making it much easier to write and maintain YAML files.

Continue reading “Eclipse Wild Web Developer adds a powerful YAML editor with built-in Kubernetes support”

Quarkus: Why compile to native?

Quarkus is Kubernetes native, and to accomplish that we’ve spent a lot of time working across a number of different areas, such as the Java Virtual Machine (JVM) and various framework optimizations. And there’s much more work still to be done. One area that has piqued the interest of the developer community is Quarkus’s comprehensive and seamless approach to generating an operating-system-specific (that is, native) executable from your Java code, just as you would with languages like C and C++. We believe these native executables will typically be used at the end of the build-test-deploy cycle.

Although native compilation is important, as we’ll discuss later, Quarkus works really well with vanilla OpenJDK HotSpot, thanks to the significant performance improvements we’ve made to the entire stack. The native executable aspect of Quarkus is optional; if you don’t want it or your applications don’t need it, you can ignore it. In fact, even when you are using native images, Quarkus still relies heavily on OpenJDK. The well-received dev mode delivers near-instantaneous change-test cycles thanks to HotSpot’s rich dynamic code execution capabilities. Additionally, GraalVM internally uses OpenJDK’s class library and HotSpot to produce a native image.

Still, the question remains: Why have native compilation at all if the other optimizations are so good? That’s what we’ll look at more closely here.

Continue reading “Quarkus: Why compile to native?”
