OKD: Renaming of OpenShift Origin with 3.10 release

[This article from the Red Hat OpenShift blog, written by Diane Mueller-Klingspor, is reposted here on the Red Hat Developers blog.]

When we released OpenShift Origin as the open source upstream project for Red Hat OpenShift back in April 2012, we had little inkling of the phenomenal trajectory of cloud-native technology that was to come. With all the work that has gone into the Kubernetes-based core platform (OpenShift 3), from the initial OpenShift Origin 1.0 release in June 2015 to last week's release of Red Hat OpenShift 3.10, we've seen the rise of Kubernetes and containers form the basis of the cloud-native landscape. We collaborated in the incubation and maturation of dozens of new cloud-native projects and contributed to a myriad of upstream projects, expanding the universe of tools and platforms in a way we could only have dreamed about just three years ago.

So it’s time for a new logo, a new website, and a new name for our open source project. We are changing the name of our open source project to better represent who we are today, and who we’ll be tomorrow—the Origin community distribution of Kubernetes that powers Red Hat OpenShift.

Continue reading “OKD: Renaming of OpenShift Origin with 3.10 release”

July 19th DevNation Live: Container pipeline master: Continuous integration + continuous delivery with Jenkins

Join us for the next online DevNation Live on Thursday, July 19th at 12pm EDT for Container pipeline master: Continuous integration + continuous delivery with Jenkins, presented by Siamak Sadeghianfar, principal technical product marketing manager for Red Hat OpenShift.

In this session, we'll take a detailed look at how you can build a super slick, automated continuous integration and continuous delivery (CI/CD) Jenkins pipeline that delivers your application payloads onto the enterprise Kubernetes platform, Red Hat OpenShift. You'll see how zero-downtime deployment patterns can be part of your release process when you are using a container platform based on Kubernetes.

Automating your build, test, and deployment processes can improve reliability and reduce the need for rollbacks. However, we’ll show you how rollbacks can be handled too.

Register now and join the live presentation at 12pm EDT, Thursday, July 19th.

Session Agenda:

Continue reading “July 19th DevNation Live: Container pipeline master: Continuous integration + continuous delivery with Jenkins”

How to call the OpenShift REST API from C#

When you want to automate tasks for builds and deployments with Red Hat OpenShift, you might want to take advantage of the OpenShift REST API. In scripts, you can use the oc CLI command, which talks to the REST APIs. However, there are times when it is more convenient to do this directly from your C# code without having to invoke an external program. This is the value of having an infrastructure platform that is exposed as services with an open API.

If you want to call the API from your C# code, you have to create a request object, call the API, and parse the response object. The upstream project, OpenShift Origin, provides a Swagger 2.0 specification, and you can generate a client library for each programming language. Of course, C# is supported. This isn't a new approach; Kubernetes has a repository that is generated by Swagger Codegen.

For C#, we can use Microsoft Visual Studio to generate a C# client library for a REST API. In this article, I’ll walk you through the process of generating the library from the definition.
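
Before generating a full client library, it can help to see what a raw call looks like. The following is a minimal sketch using HttpClient; the cluster URL, the token, and the /oapi/v1/projects path are illustrative assumptions (the exact path depends on your OpenShift version), not code from this article.

    // Minimal sketch: calling the OpenShift REST API directly with HttpClient.
    // The cluster URL, API path, and token are placeholders for illustration.
    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Threading.Tasks;

    class OpenShiftApiExample
    {
        static async Task Main()
        {
            using (var client = new HttpClient())
            {
                // Base address of the OpenShift master (placeholder).
                client.BaseAddress = new Uri("https://openshift.example.com:8443");

                // Authenticate with a bearer token, e.g. the output of `oc whoami -t`.
                client.DefaultRequestHeaders.Authorization =
                    new AuthenticationHeaderValue("Bearer", "<your-token>");

                // List projects; adjust the path for your cluster version.
                HttpResponseMessage response = await client.GetAsync("/oapi/v1/projects");
                response.EnsureSuccessStatusCode();

                // The response body is JSON; parse it with your preferred JSON
                // library, or use the Swagger-generated client described above.
                string json = await response.Content.ReadAsStringAsync();
                Console.WriteLine(json);
            }
        }
    }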

Continue reading “How to call the OpenShift REST API from C#”

A Beginner’s Guide to Kubernetes (PodCTL Podcast #38)

If you aren’t following the OpenShift Blog, you might not be aware of the PodCTL podcast. It’s a free weekly tech podcast covering containers, Kubernetes, and OpenShift, hosted by Red Hat’s Brian Gracely (@bgracely) and Tyler Britten (@vmtyler). I’m reposting this episode here on the Red Hat Developer Blog because I think their realization is spot on: while early adopters might be deep into Kubernetes, many people are just starting and could benefit from some insights.

Original Introduction from blog.openshift.com:

The Kubernetes community now has 10 releases (2.5 years) of software and experience. We just finished KubeCon Copenhagen, OpenShift Commons Gathering, and Red Hat Summit, and we heard lots of companies talk about their deployments and journeys. But many of them took a while (12–18 months) to get to where they are today. This feels like the “early adopters,” and we’re beginning to get to the “crossing the chasm” part of the market. So we thought we’d discuss some of the basics, lessons learned, and other things people could use to “fast-track” what they need to be successful with Kubernetes.

The podcast will always be available on the Red Hat OpenShift blog (search: #PodCTL), as well as via RSS feeds, iTunes, Google Play, Stitcher, TuneIn, and all your favorite podcast players.

Continue reading “A Beginner’s Guide to Kubernetes (PodCTL Podcast #38)”

Using OpenShift to deploy .NET Core applications

Containers are the new way of deploying applications. They provide an efficient mechanism to deploy self-contained applications in a portable way across clouds and OS distributions. In this blog post we’ll look at what OpenShift brings for .NET Core specifically.

Kubernetes and OpenShift

Kubernetes is the de facto orchestrator for managing containerized applications. Google open-sourced Kubernetes in 2014, and Red Hat was one of the first companies to work with Google on Kubernetes. Red Hat is the second-leading contributor to the Kubernetes upstream project.

OpenShift is an open-source DevOps platform that is built on top of Kubernetes. It integrates directly with your application’s source code, which enables continuous integration/continuous deployment (CI/CD) workflows. Tools are available to scale and monitor your applications. The OpenShift Catalog makes it easy to set up middleware and databases. OpenShift comes with comprehensive documentation for installing and managing the platform. It can run on-premises and on public clouds such as AWS, GCP, and Azure.
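
To make this concrete, here is a minimal sketch of the kind of ASP.NET Core 2.x application you might deploy to OpenShift; it is an illustrative example, not code from the post.

    // A minimal ASP.NET Core 2.x web app, the kind you might deploy to OpenShift.
    using Microsoft.AspNetCore;
    using Microsoft.AspNetCore.Builder;
    using Microsoft.AspNetCore.Hosting;
    using Microsoft.AspNetCore.Http;

    public class Program
    {
        public static void Main(string[] args)
        {
            WebHost.CreateDefaultBuilder(args)
                // Answer every HTTP request with a short message.
                .Configure(app => app.Run(ctx =>
                    ctx.Response.WriteAsync("Hello from .NET Core on OpenShift")))
                .Build()
                .Run();
        }
    }

An app like this can be built into an image with OpenShift's source-to-image (s2i) workflow or a Dockerfile and then deployed like any other container.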

Continue reading “Using OpenShift to deploy .NET Core applications”

Why Kubernetes is The New Application Server

Have you ever wondered why you are deploying your multi-platform applications using containers? Is it just a matter of “following the hype”? In this article, I’m going to ask some provocative questions to make my case for Why Kubernetes is the new application server.

You might have noticed that the majority of languages are interpreted and use “runtimes” to execute your source code. In theory, most Node.js, Python, and Ruby code can be easily moved from one platform (Windows, Mac, Linux) to another. Java applications go even further: compiled Java classes are turned into bytecode, capable of running anywhere that has a JVM (Java Virtual Machine).

The Java ecosystem provides a standard format to distribute all the Java classes that are part of the same application. You can package these classes as a JAR (Java Archive), WAR (Web Archive), or EAR (Enterprise Archive) that contains the front end, back end, and embedded libraries. So I ask you: Why do you use containers to distribute your Java application? Isn’t it already supposed to be easily portable between environments?

Continue reading “Why Kubernetes is The New Application Server”

Eclipse Che 6.6 Release Notes

[This article is cross-posted from the Eclipse Che Blog.]

Eclipse Che 6.6 is here! Since the release of Che 6.0, the community has added a number of new capabilities:

  • Kubernetes support: Run Che on Kubernetes and deploy it using Helm.
  • Hot server updates: Upgrade Che with zero downtime.
  • C/C++ support: ClangD Language Server was added.
  • Camel LS support: Apache Camel Language Server Protocol (LSP) support was added.
  • Eclipse Java Development Tools (JDT) Language Server (LS): Extended LS capabilities were added for Eclipse Che.
  • Faster workspace loading: Images are pulled in parallel with the new UI.

Quick Start

Che is a cloud IDE and containerized workspace server. You can get started with Che by using the following links:

Continue reading “Eclipse Che 6.6 Release Notes”

Next DevNation Live: Your Journey to a Serverless World—An Introduction to Serverless, June 7th, 12pm EDT

Join us for the next online DevNation Live on June 7th at 12pm EDT for Your Journey to a Serverless World—An Introduction to Serverless, presented by Kamesh Sampath and hosted by Burr Sutter.  Serverless computing is an emerging architecture that represents a shift in the way developers build and deliver software systems. By removing application infrastructure concerns, development and deployment are simplified, allowing developers to focus on writing code that delivers value.  Additionally, operational costs can be reduced by only consuming resources when needed to respond to application events.

In this session, we’ll learn what serverless is and what it means to a developer. Then, we’ll quickly deploy a serverless platform using Apache OpenWhisk on Kubernetes. Using this platform, we’ll demystify which Java™ programming model you should use in a serverless environment. And finally, we’ll look at tools that can make your serverless journey quick, easy, and productive.

Watch the recorded session and view the slides.

Session Agenda

Continue reading “Next DevNation Live: Your Journey to a Serverless World—An Introduction to Serverless, June 7th, 12pm EDT”

Next DevNation Live: Serverless and Servicefull Applications: Where Microservices Complements Serverless, May 17th, 12pm EDT

Join us for the next online DevNation Live on May 17th at 12pm EDT for Serverless and Servicefull Applications: Where Microservices Complements Serverless, hosted by Burr Sutter. Serverless is a misnomer. Your future cloud-native applications will consist of both microservices and functions, wrapped in Linux containers, but in many cases you, the developer, will be able to ignore the operational aspects of managing the infrastructure and even much of the runtime stack.

In this technical session, we will start by using Apache OpenWhisk, a Functions-as-a-Service (FaaS) engine, deployed on Kubernetes and Red Hat OpenShift to explore how you can complement cloud-native Java applications (microservices) with serverless functions. Next, we’ll open up a serverless web application architecture and deploy an API gateway into the FaaS platform to examine the microservices talking to the serverless functions. We finish with a look at how event sinks and event sources map in the serverless world.

Watch the recorded session and view the slides.

Session Agenda

Continue reading “Next DevNation Live: Serverless and Servicefull Applications: Where Microservices Complements Serverless, May 17th, 12pm EDT”

Moscone West graced with the Shadowman logo

Red Hat Summit: Containers, Microservices, and Serverless Computing

You’re in an IT department. How does the rest of the organization see you? As a valuable asset whose code and APIs make a difference in the marketplace, or as a necessary evil that should be trimmed as much as possible? Containers, microservices, and serverless computing can make you more responsive, flexible, and competitive, which in turn makes your organization more effective. And that puts you solidly in the asset column.

After sprinting through the streets of San Francisco from the stage of the opening keynote at Red Hat Summit 2018 (replay available here), Burr Sutter hosted a packed house in Moscone South to talk about these technologies. Containers are widely accepted (see the announcement from Red Hat and Microsoft for an example), microservices are increasingly popular as an approach to modernizing monolithic applications, and serverless computing is emerging as an important new programming model.

Continue reading “Red Hat Summit: Containers, Microservices, and Serverless Computing”
