At Red Hat Summit 2018, Red Hat’s John Osborne and Microsoft’s Harold Wong gave a talk: Developing .NET Core Applications on Red Hat OpenShift.
.NET Core 1.0 availability for Linux was announced two years ago, but many developers still have questions about the differences between .NET Framework and .NET Core, so the session started with an overview of the differences. In a nutshell, .NET Framework is the set of APIs and libraries that Windows developers have used for years; it is pretty heavily tied to Microsoft Windows and the Windows GUI APIs. .NET Core, on the other hand, is the cross-platform set of APIs for building applications that can run on Linux, macOS, or mobile devices via Xamarin. .NET Core 2.0 was released last August; see Don Schenck's article.
One of the key questions is when to use one versus the other. Here's the summary Harold Wong presented:
.NET Framework:
- is best for Windows desktop applications, including Windows Forms and Windows Presentation Foundation (WPF) applications
- supports ASP.NET Web Forms
- supports the Windows Communication Foundation (WCF) framework
- provides Windows-specific APIs
- has full support for VB.NET and F#
- is installed system-wide, so there's only a single version per system
- has features that are lacking in .NET Core, including some third-party NuGet packages and the APICompat library
.NET Core:
- is for applications intended to run cross-platform
- is designed to address the needs of cloud-based applications
- can be developed on Linux but target other platforms such as macOS
- has the best performance at scale
- can run multiple versions side by side
- is ideal for CLI-centric environments
.NET Core is the platform to use when you want to write an application or service to run in a container. While it's possible to run .NET Framework apps in a Windows container, this often isn't desirable, especially on cloud platforms. The rough metric Harold gave: a Linux container starts out at around 30 MB, while a Windows container currently starts out at around 480 MB. Adding in all the libraries of the .NET Framework and the app itself would likely add another gigabyte, so a minimal app would weigh in at roughly 1.5 GB. A .NET Core app running in a Linux container can be much smaller and lighter weight, on the order of 100–200 MB.
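To illustrate how that smaller footprint is achieved in practice, a multi-stage Dockerfile for a .NET Core app might look like the sketch below. It assumes Microsoft's published `microsoft/dotnet` images of that era; the assembly name `MyApp.dll` is a placeholder.

```dockerfile
# Build stage: use the SDK image to restore and publish the app
FROM microsoft/dotnet:2.0-sdk AS build
WORKDIR /app
COPY . .
RUN dotnet publish -c Release -o out

# Runtime stage: the runtime-only image keeps the final image small
FROM microsoft/dotnet:2.0-runtime
WORKDIR /app
COPY --from=build /app/out .
# "MyApp.dll" is a placeholder for your published assembly
ENTRYPOINT ["dotnet", "MyApp.dll"]
```

Because only the second stage becomes the shipped image, the SDK, build tools, and intermediate artifacts never reach production.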
As the number of available APIs in .NET Core has increased, it is getting easier to port from .NET Framework to .NET Core. However, Harold raised the point that just because you can port something doesn't mean you should. For many applications, a refactoring coupled with a move from a monolith to microservices makes more sense.
A particularly sweet spot for .NET Core is ASP.NET applications. ASP.NET is considered one of the fastest MVC frameworks, and it is an easy environment in which to create RESTful web services. Even an ASP.NET application originally built on .NET Framework is usually very portable to .NET Core. This gives Windows developers a great path for scaling ASP.NET applications: use .NET Core and deploy to the cloud on Red Hat OpenShift.
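As a sketch of why ASP.NET lends itself to RESTful services, a minimal ASP.NET Core controller (2.x-style attribute routing) could look like the following; the route, controller, and action names are illustrative, not from the talk:

```csharp
using Microsoft.AspNetCore.Mvc;

// Minimal RESTful controller sketch (names are illustrative).
// Attribute routing maps this class to /api/greetings.
[Route("api/[controller]")]
public class GreetingsController : Controller
{
    // GET /api/greetings/world
    [HttpGet("{name}")]
    public IActionResult Get(string name)
    {
        // Anonymous objects are serialized to JSON automatically
        return Ok(new { message = $"Hello, {name}" });
    }
}
```

The same controller code compiles against .NET Framework or .NET Core, which is part of what makes these apps so portable.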
For the demo portion of the talk, John Osborne showed building and deploying .NET Core 2.0 applications on OpenShift. He described two models for development. One is a Containers-as-a-Service (CaaS) approach, where containers are built in the local development environment and then deployed to run on OpenShift/Kubernetes. The other is the Platform-as-a-Service (PaaS) model, where containers are built on OpenShift using Source-to-Image (S2I) builds, which can be part of a CI/CD pipeline kicked off from git commits via GitHub webhooks. The advantage of this is that builds can be started automatically when any component of the stack changes.
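The PaaS/S2I path can be sketched with the `oc` CLI. This assumes a cluster with the .NET Core 2.0 builder image stream installed; the project and app names are placeholders, and the source repository is Red Hat's s2i-dotnetcore-ex sample:

```shell
# Create a project and build the app from source with the .NET Core 2.0 S2I builder
oc new-project dotnet-demo
oc new-app dotnet:2.0~https://github.com/redhat-developer/s2i-dotnetcore-ex#dotnetcore-2.0 \
    --context-dir=app --name=dotnet-example

# Expose the service and follow the build that the source import triggered
oc expose svc/dotnet-example
oc logs -f bc/dotnet-example
```

A GitHub webhook pointed at the resulting build config is what turns this into the commit-triggered CI/CD flow described above.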
If you choose to go the CaaS route and build containers locally, John recommended metaparticle.io, which can take some of the hard work out of building distributed applications.
For many organizations, the advantage of deploying on OpenShift is the ability to avoid creating application code that is tightly coupled to the APIs and services offered by a single cloud infrastructure platform. Through the Kubernetes functionality in OpenShift, configuration information for service location and secure credentials storage can be factored out of the application to be picked up from the runtime environment. This allows the same containers to be used across environments, because they aren't tainted with environment-specific configuration details.
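Factoring configuration out of the image can be sketched in the pod template of a deployment; the secret, config map, and key names below are all placeholders, while `ConnectionStrings__DefaultConnection` follows the standard ASP.NET Core convention for overriding configuration via environment variables:

```yaml
# Pod template fragment: the container reads its settings from the
# environment rather than baking them into the image (names are placeholders)
containers:
- name: dotnet-example
  image: dotnet-example:latest
  env:
  - name: ConnectionStrings__DefaultConnection
    valueFrom:
      secretKeyRef:
        name: mssql-binding        # secret created from the service binding
        key: connection-string
  - name: ASPNETCORE_ENVIRONMENT
    valueFrom:
      configMapKeyRef:
        name: app-config
        key: environment
```

Because the image contains no environment-specific values, the identical container can be promoted from dev to staging to production unchanged.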
In the demo, John showed the deployment of an application that used a Microsoft SQL Server instance found via the Azure Service Broker, which was exposed through OpenShift. The application got its database binding information through Kubernetes config and secrets, which were created through the OpenShift web console without having to create and edit YAML files.
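What the web console does here can also be sketched with the CLI; the secret name, connection-string fields, and deployment config name below are placeholders, not the ones from the demo:

```shell
# Store the database binding as a secret (field values are placeholders)
oc create secret generic mssql-binding \
    --from-literal=connection-string='Server=<host>;Database=<db>;User Id=<user>;Password=<pw>;'

# Inject the secret into the deployment's environment
oc set env dc/dotnet-example --from=secret/mssql-binding
```

Either way, the binding lives in the cluster rather than in the image or the source tree.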
A few other points worth noting from the presentation:
- .NET Core 3.0 was just announced a few days ago. Additional APIs are being added to .NET Core that will make it easier to move .NET Framework code over to .NET Core.
- Microsoft's VS Code—which runs on Windows, macOS, and Linux—has become the preferred development environment for those who are building .NET Core applications.
- .NET Core is key for building cross-platform applications that run from Windows to mobile to Linux containers, and it can be lightweight enough for comfortably deploying and—most importantly—scaling on cloud platforms.
- OpenShift can use the Azure Service Broker to let you consume cloud services such as Microsoft SQL Server in the cloud as a service without having to package it and manage it yourself.
- John Osborne is the co-author of a new book, OpenShift in Action. A limited number of advance copies will be available at Red Hat Summit.
Last updated: June 6, 2023