.NET 5.0 now available for Red Hat Enterprise Linux and Red Hat OpenShift

We’re excited to announce the general availability of .NET 5.0 on Red Hat Enterprise Linux 7, Red Hat Enterprise Linux 8, and Red Hat OpenShift Container Platform.

What’s new

.NET 5.0 is the successor to .NET Core 3.1, and it supersedes .NET Framework as the preferred target platform for building Windows Forms and WPF applications.

.NET 5.0 includes the new language versions C# 9 and F# 5.0. Significant performance improvements were made to the base libraries, the GC, and the JIT. The ASP.NET Core framework also brings many new features and improvements, such as better gRPC and Blazor performance and OpenAPI enabled by default.
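For a quick taste of C# 9, here is a minimal sketch (an illustration, not taken from the release announcement) showing two of the headline features, top-level statements and positional records:

using System;

// Top-level statements: no Main boilerplate required.
Person ada = new("Ada", "Lovelace");   // C# 9 target-typed new expression
Console.WriteLine(ada);                // the record's generated ToString prints the members

// Positional record: equality and ToString are generated by the compiler.
record Person(string First, string Last);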

Install .NET 5.0

.NET 5.0 can be installed on RHEL 7 with the usual command:

# yum install rh-dotnet50
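Because RHEL 7 delivers .NET as a software collection, enable the collection before using it. A sketch, assuming the standard SCL workflow described later in this digest:

$ scl enable rh-dotnet50 bash
$ dotnet --version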

On RHEL 8, enter:

# dnf install dotnet-sdk-5.0

The .NET 5.0 SDK and runtime container images are available from the Red Hat Container registry. You can use the container images as standalone images and with OpenShift:

$ podman run --rm registry.redhat.io/ubi8/dotnet-50 dotnet --version
5.0.100

Support

.NET 5.0 is a current release. It is scheduled for support until February 2022, three months after the .NET 6.0 release in November 2021. .NET 6.0 will be a long-term support (LTS) release, supported for three years. The previous LTS releases, .NET Core 2.1 and .NET Core 3.1, are supported until August 21, 2021 and December 13, 2022, respectively.

Visit the .NET overview page to learn more about using .NET on Red Hat Enterprise Linux and OpenShift.

Using Microsoft SQL Server on Red Hat OpenShift

In this article, you’ll learn how to deploy Microsoft SQL Server 2019 on Red Hat OpenShift. We’ll then use SQL Server from an ASP.NET Core application that is also deployed on OpenShift. Next, I’ll show you how to connect to SQL Server while working on the application from your local development machine. And finally, we’ll connect to the server using Azure Data Studio.

Note that I am using Red Hat CodeReady Containers to run OpenShift 4.3 locally on my development machine.

Deploying Microsoft SQL Server

To start, log in to your OpenShift cluster using the oc login command. Create a new project by entering:

$ oc new-project mssqldemo

Use the following template to deploy the Red Hat Enterprise Linux (RHEL)-based SQL Server image:

$ oc create -f https://raw.githubusercontent.com/redhat-developer/s2i-dotnetcore-persistent-ex/dotnetcore-3.1-mssql/openshift/mssql2019.json
template.template.openshift.io/mssql2019 created
$ oc process --parameters mssql2019
NAME                DESCRIPTION                                                                  GENERATOR           VALUE
NAME                The name assigned to all of the frontend objects defined in this template.                       mssql
SA_PASSWORD                                                                                      expression          aA1[a-zA-Z0-9]{8}
ACCEPT_EULA         'Y' to accept the EULA (https://go.microsoft.com/fwlink/?linkid=857698).
MSSQL_PID           Set to 'Developer'/'Express'/'Standard'/'Enterprise'/'EnterpriseCore'.                           Developer
VOLUME_CAPACITY     Volume space available for data, e.g. 512Mi, 8Gi                                                 512Mi

For this deployment, you can retain the default parameters. Accept the end-user license agreement (EULA) as follows:

$ oc new-app --template=mssql2019 -p ACCEPT_EULA=Y
--> Deploying template "mssqldemo/mssql2019" to project mssqldemo

 	Microsoft SQL Server 2019
 	---------
 	Relational database management system developed by Microsoft.

 	* With parameters:
    	* Name=mssql
    	* Administrator Password=aA1qxWYb8ME # generated
    	* Accept the End-User Licensing Agreement=Y
    	* Product ID or Edition=Developer
    	* Persistent Volume Capacity=512Mi

--> Creating resources ...
	secret "mssql-secret" created
	service "mssql" created
	deploymentconfig.apps.openshift.io "mssql" created
	persistentvolumeclaim "mssql-pvc" created
--> Success
	Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
 	'oc expose svc/mssql'
	Run 'oc status' to view your app.

In addition to deploying SQL Server in a container, the template creates a secret (mssql-secret), which stores the administrator password. It also creates a persistent volume claim (mssql-pvc) for storage. Note that the secret includes the SQL Server service name, which facilitates binding to SQL Server later.
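If you want to verify what the template created, you can inspect the secret without printing its values. A sketch (the exact key names are an assumption, inferred from the MSSQL_ prefix used with oc set env later in this article):

$ oc describe secret mssql-secret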

You can use the oc status command or the OpenShift web console to monitor the deployment’s progress.

Using SQL Server from .NET Core on OpenShift

For this demo, we’ll use the s2i-dotnetcore-persistent-ex example application. This is a create, read, update, and delete (CRUD) application. The dotnetcore-3.1-mssql branch has support for an in-memory, PostgreSQL, or SQL Server back end.

You can configure the application with environment variables to support the back end that you choose. We’re using the MSSQL_SA_PASSWORD and MSSQL_SERVICE_NAME environment variables for SQL Server. Here are the relevant code snippets:

// Detect that we should use a SQL Server backend:
string saPassword = Configuration.GetValue<string>("MSSQL_SA_PASSWORD");
if (saPassword != null)
{
    dbProvider = DbProvider.Mssql;
}
...
// Determine the connection string:
case DbProvider.Mssql:
{
    string server = Configuration["MSSQL_SERVICE_NAME"] ?? "localhost";
    string password = Configuration["MSSQL_SA_PASSWORD"];
    string user = "sa";
    string dbName = "myContacts";
    connectionString = $@"Server={server};Database={dbName};User Id={user};Password={password};";
}
...
// Configure EF Core to use SQL Server:
case DbProvider.Mssql:
    Logger.LogInformation("Using Mssql database");
    services.AddDbContext<MssqlDbContext>(options =>
                options.UseSqlServer(connectionString));

The application that we want to deploy requires .NET Core 3.1. Let’s find out whether this version is available on the OpenShift cluster:

$ oc get is -n openshift dotnet
NAME      IMAGE REPOSITORY                                                           TAGS                 UPDATED
dotnet    default-route-openshift-image-registry.apps-crc.testing/openshift/dotnet   3.0,latest,2.2,2.1   2 months ago

.NET Core 3.1 is not listed, but we can add it by importing the required Universal Base Image (UBI) 8-based images:

# note: only needed when .NET Core 3.1 is not available
$ oc create -f https://raw.githubusercontent.com/redhat-developer/s2i-dotnetcore/master/dotnet_imagestreams_rhel8.json
imagestream.image.openshift.io/dotnet created
imagestream.image.openshift.io/dotnet-runtime created

Now, we’re ready to deploy the application:

$ oc new-app dotnet:3.1~https://github.com/redhat-developer/s2i-dotnetcore-persistent-ex#dotnetcore-3.1-mssql --context-dir app
--> Found image 45eae59 (28 hours old) in image stream "mssqldemo/dotnet" under tag "3.1" for "dotnet:3.1"

	.NET Core 3.1
	-------------
	Platform for building and running .NET Core 3.1 applications

	Tags: builder, .net, dotnet, dotnetcore, dotnet-31

	* A source build using source code from https://github.com/redhat-developer/s2i-dotnetcore-persistent-ex#dotnetcore-3.1-mssql will be created
  	* The resulting image will be pushed to image stream tag "s2i-dotnetcore-persistent-ex:latest"
  	* Use 'start-build' to trigger a new build
	* This image will be deployed in deployment config "s2i-dotnetcore-persistent-ex"
	* Port 8080/tcp will be load balanced by service "s2i-dotnetcore-persistent-ex"
  	* Other containers can access this service through the hostname "s2i-dotnetcore-persistent-ex"

--> Creating resources ...
	imagestream.image.openshift.io "s2i-dotnetcore-persistent-ex" created
	buildconfig.build.openshift.io "s2i-dotnetcore-persistent-ex" created
	deploymentconfig.apps.openshift.io "s2i-dotnetcore-persistent-ex" created
	service "s2i-dotnetcore-persistent-ex" created
--> Success
	Build scheduled, use 'oc logs -f bc/s2i-dotnetcore-persistent-ex' to track its progress.
	Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
 	'oc expose svc/s2i-dotnetcore-persistent-ex'
	Run 'oc status' to view your app.

Use the oc status command or the OpenShift web console to monitor the deployment’s progress. Once the application is deployed, expose it externally and capture the URL:

$ oc expose service s2i-dotnetcore-persistent-ex
route.route.openshift.io/s2i-dotnetcore-persistent-ex exposed
$ oc get route s2i-dotnetcore-persistent-ex
NAME                       	HOST/PORT                                             	PATH  	SERVICES                   	PORT   	TERMINATION   WILDCARD
s2i-dotnetcore-persistent-ex   s2i-dotnetcore-persistent-ex-mssqldemo.apps-crc.testing         	s2i-dotnetcore-persistent-ex   8080-tcp             	None

When browsing to the URL, note that the application is running from an in-memory database.

Adding contacts

Next, we’ll add a few contacts, as shown in Figure 1.

Figure 1: Adding contacts to the application’s in-memory database.

We’ll use the oc set env command to configure the application to connect to SQL Server, adding the data from mssql-secret to the application’s deployment configuration:

$ oc set env --from=secret/mssql-secret dc/s2i-dotnetcore-persistent-ex --prefix=MSSQL_
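To double-check which variables were injected, you can list the environment on the deployment configuration. A sketch:

$ oc set env dc/s2i-dotnetcore-persistent-ex --list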

The oc set env command triggers a redeployment, and the application connects to the Microsoft SQL Server instance running on OpenShift. Now you can create, remove, and update contacts in the database. Figure 2 shows a list of contacts.

Figure 2: Contacts in the Microsoft SQL Server database.

Connecting from a local .NET application

It is sometimes useful to connect to SQL Server on OpenShift from a .NET application that is running on your development machine. I’ll show you how to do that next.

First, let’s get the application source code:

$ git clone https://github.com/redhat-developer/s2i-dotnetcore-persistent-ex
$ cd s2i-dotnetcore-persistent-ex
$ git checkout dotnetcore-3.1-mssql
$ cd app

Use the oc get pod command to identify the SQL Server pod. Then, enter the oc port-forward command to expose SQL Server on the local machine:

$ oc get pod | grep mssql | grep Running
mssql-1-288cm                           1/1       Running     0          34m
$ oc port-forward mssql-1-288cm 1433:1433
Forwarding from 127.0.0.1:1433 -> 1433
Forwarding from [::1]:1433 -> 1433

To connect the application to the database, we set the MSSQL_SA_PASSWORD environment variable. The password was printed when we deployed the SQL database. If you missed it, you can Base64-decode it from the oc get secret mssql-secret -o yaml output.
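For example, a sketch that decodes the password directly, assuming the secret key is named SA_PASSWORD (the MSSQL_ prefix is added by oc set env):

$ oc get secret mssql-secret -o jsonpath='{.data.SA_PASSWORD}' | base64 -d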

Let’s run the application with the environment variable set:

$ MSSQL_SA_PASSWORD=aA1qxWYb8ME dotnet run
info: RazorPagesContacts.Startup[0]
      Using Mssql database
info: Microsoft.EntityFrameworkCore.Infrastructure[10403]
      Entity Framework Core 3.1.0 initialized 'MssqlDbContext' using provider 'Microsoft.EntityFrameworkCore.SqlServer' with options: None
...
info: Microsoft.EntityFrameworkCore.Migrations[20405]
      No migrations were applied. The database is already up to date.
info: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[0]
      User profile is available. Using '/home/redhat-developer/.aspnet/DataProtection-Keys' as key repository; keys will not be encrypted at rest.
Hosting environment: Production
Content root path: /tmp/s2i-dotnetcore-persistent-ex/app
Now listening on: http://localhost:5000
Now listening on: https://localhost:5001
Application started. Press Ctrl+C to shut down.

Browse to the localhost web server to display the contacts that you added earlier.

Note that this demo runs the application from the command line. You can also set the environment variable as part of your IDE’s debug configuration and launch the application from the IDE.

Managing SQL Server

On a Windows desktop, you can manage SQL Server as you always have, with SQL Server Management Studio. On a Linux or Mac desktop, you can use Azure Data Studio; see Microsoft’s documentation for installation instructions.

To connect to SQL Server, you need to set up port forwarding, as we did in the previous section. Next, you can open Azure Data Studio and add a connection for the localhost user sa and the password from the mssql-secret, as shown in Figure 3.

Figure 3: Adding a connection to localhost with the user sa and the password from mssql-secret.

After connecting, you can perform operations from Azure Data Studio. For example, you could execute an SQL query against the Customer database, as shown in Figure 4.

Figure 4: Executing an SQL query against the Customer database.

Conclusion

In this article, you learned how to deploy Microsoft SQL Server on Red Hat OpenShift. I showed you how to use SQL Server from an ASP.NET Core application running on OpenShift and a .NET application running on your development machine. You also saw how to use Azure Data Studio to connect to the SQL Server database on OpenShift. You can try this on your development machine with CodeReady Containers.

Set up continuous integration for .NET Core with OpenShift Pipelines

Have you ever wanted to set up continuous integration (CI) for .NET Core in a cloud-native way, but you didn’t know where to start? This article provides an overview, examples, and suggestions for developers who want to get started setting up a functioning cloud-native CI system for .NET Core.

We will use the new Red Hat OpenShift Pipelines feature, which is based on the open source Tekton project, to implement .NET Core CI. OpenShift Pipelines provide a cloud-native way to define a pipeline that builds, tests, deploys, and rolls out your applications in a continuous integration workflow.

In this article, you will learn how to:

  1. Set up a simple .NET Core application.
  2. Install OpenShift Pipelines on Red Hat OpenShift.
  3. Create a simple pipeline manually.
  4. Create a Source-to-Image (S2I)-based pipeline.

Continue reading “Set up continuous integration for .NET Core with OpenShift Pipelines”

Using OpenAPI with .NET Core

In this article, we’ll look at using OpenAPI with .NET Core. OpenAPI is a specification for describing RESTful APIs. First, I’ll show you how to use OpenAPI to describe the APIs provided by an ASP.NET Core service. Then, we’ll use the API description to generate a strongly-typed client to use the web service with C#.

Writing OpenAPI descriptions

Developers use the OpenAPI specification to describe RESTful APIs. We can then use OpenAPI descriptions to generate a strongly-typed client library that is capable of accessing the APIs.

Note: Swagger is sometimes used synonymously with OpenAPI. It refers to a widely used toolset for working with the OpenAPI specification.

Continue reading “Using OpenAPI with .NET Core”

Monitoring .NET Core applications on Kubernetes

Prometheus is an open source monitoring solution that collects metrics from the system and its applications. As a developer, you can query these metrics, use them to create alerts, and use them as a source for dashboards. One example is using Prometheus metrics with Grafana.

In this article, I show you how to use Prometheus to monitor a .NET Core application running on Kubernetes. Note that installation instructions are not included with the article. I do include a reference for using the Prometheus Operator to create and configure Prometheus on Kubernetes.

Note: Learn more about Prometheus’ support for monitoring Kubernetes and containerized applications deployed on OpenShift.

Open source monitoring with Prometheus

Prometheus organizes data as time series, which are useful for tracking how a numeric value changes over time. Prometheus uses time series to track the following:

  • Counters: Values that can only increment, like the number of requests handled.
  • Gauges: Values that can go up and down, like memory used.
  • Histograms: Values that are counted in a number of buckets, like response time.

A single metric (like HTTP response time) corresponds to multiple time series, each with a unique set of labels. Thanks to these labels, you can filter queries for specific criteria, such as the HTTP response time for a particular URL.
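For example, here is a PromQL sketch using label names that appear later in this article; it computes the average response time over five minutes for one controller/action pair:

  rate(http_request_duration_seconds_sum{controller="Home",action="Index"}[5m])
/ rate(http_request_duration_seconds_count{controller="Home",action="Index"}[5m])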

Deploying Prometheus

You can use the Prometheus Operator to create and configure Prometheus on Kubernetes. To set up this example, I started by creating a project with a user that has monitoring permissions. I followed the steps described in Monitoring your own services to create the project and user.

Exposing metrics from .NET Core

We’ll use the prometheus-net library to expose metrics from .NET Core. This library includes a package for monitoring .NET Core, and a separate package for monitoring ASP.NET Core. The ASP.NET Core monitoring package includes additional metrics related to the web server.

As described in the prometheus-net README, we need to include the prometheus-net.AspNetCore package:

<ItemGroup>
  <PackageReference Include="prometheus-net.AspNetCore" Version="3.5.0" />
</ItemGroup>

Next, we’ll add an endpoint that Prometheus will use to retrieve the metrics:

app.UseEndpoints(endpoints =>
{
  // ...
  endpoints.MapMetrics();
});

Finally, we enable capturing the HTTP metrics:

public void Configure(IApplicationBuilder app, ...)
{
  // ...
  app.UseRouting();
  app.UseHttpMetrics();
  // ...
}

We’ll deploy this application on Red Hat OpenShift, and make it accessible from outside the cluster:

$ oc new-app dotnet:3.1~https://github.com/redhat-developer/s2i-dotnetcore-ex#dotnetcore-3.1-monitor --context-dir app
$ oc expose service s2i-dotnetcore-ex

Now that our application is up and running, we can have a look at the HTTP endpoint that is used by Prometheus at the /metrics path. Notice the different gauges, counters, and histograms exposed by the ASP.NET Core application:

# HELP process_private_memory_bytes Process private memory size
# TYPE process_private_memory_bytes gauge
process_private_memory_bytes 383516672
# HELP process_working_set_bytes Process working set
# TYPE process_working_set_bytes gauge
process_working_set_bytes 229879808
# HELP http_requests_in_progress The number of requests currently in progress in the ASP.NET Core pipeline. One series without controller/action label values counts all in-progress requests, with separate series existing for each controller-action pair.
# TYPE http_requests_in_progress gauge
http_requests_in_progress{method="GET",controller="",action=""} 1
http_requests_in_progress{method="POST",controller="Home",action="Index"} 0
http_requests_in_progress{method="GET",controller="Home",action="Index"} 0
# HELP http_requests_received_total Provides the count of HTTP requests that have been processed by the ASP.NET Core pipeline.
# TYPE http_requests_received_total counter
http_requests_received_total{code="200",method="POST",controller="Home",action="Index"} 1
http_requests_received_total{code="200",method="GET",controller="Home",action="Index"} 1288
http_requests_received_total{code="200",method="GET",controller="",action=""} 4944
# HELP http_request_duration_seconds The duration of HTTP requests processed by an ASP.NET Core application.
# TYPE http_request_duration_seconds histogram
http_request_duration_seconds_sum{code="200",method="GET",controller="Home",action="Index"} 0.5861144999999994
http_request_duration_seconds_count{code="200",method="GET",controller="Home",action="Index"} 1288
http_request_duration_seconds_bucket{code="200",method="GET",controller="Home",action="Index",le="0.001"} 1262
...
http_request_duration_seconds_bucket{code="200",method="GET",controller="Home",action="Index",le="+Inf"} 1288
http_request_duration_seconds_sum{code="200",method="GET",controller="",action=""} 8.691159999999982
http_request_duration_seconds_count{code="200",method="GET",controller="",action=""} 4944
...

You can see metrics for memory, like the process_working_set_bytes gauge. You can also see http_request_duration_seconds, which exposes a histogram of request durations. The http_request_duration_seconds metric has time series per code, method, controller, and action, which lets us filter based on those labels. The histogram data is in the http_request_duration_seconds_bucket metric, which defines buckets using the le (less than or equal) label. The histogram also includes a *_count and a *_sum metric.

Monitoring the .NET application

Now, we need to configure metrics collection for the .NET application. We do this on OpenShift by adding a PodMonitor or ServiceMonitor configuration to the namespace. The OpenShift Prometheus Operator picks up these resources and configures monitoring.

Next, let’s look at the service we’ve deployed. We’ll use this information to configure the ServiceMonitor:

$ oc get service s2i-dotnetcore-ex -o yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    openshift.io/generated-by: OpenShiftNewApp
  creationTimestamp: 2020-05-08T11:48:02Z
  labels:
    app: s2i-dotnetcore-ex
  name: s2i-dotnetcore-ex
  namespace: demoproject
  resourceVersion: "22076"
  selfLink: /api/v1/namespaces/demoproject/services/s2i-dotnetcore-ex
  uid: 2aa94ebe-2384-4544-bcbe-b8283bd2db60
spec:
  clusterIP: 172.30.35.187
  ports:
  - name: 8080-tcp
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: s2i-dotnetcore-ex
    deploymentconfig: s2i-dotnetcore-ex
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

Configure the ServiceMonitor

We’ll add a ServiceMonitor that matches the service configuration shown above: the app: s2i-dotnetcore-ex label, the 8080-tcp port name, and the demoproject namespace.

First, we create an example-app-service-monitor.yaml file with the following content:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    k8s-app: prometheus-example-monitor
  name: prometheus-example-monitor
  namespace: demoproject
spec:
  endpoints:
  - interval: 30s
    port: 8080-tcp
    scheme: http
  selector:
    matchLabels:
      app: s2i-dotnetcore-ex

Note that the configuration includes the interval for monitoring, which in this case is set to 30s.

All that’s left to do is add the monitor:

$ oc apply -f example-app-service-monitor.yaml

Querying Prometheus

Now that Prometheus is monitoring our application, we can look at the metrics we’ve collected. To start, open the OpenShift web user interface (UI) and go to the Advanced > Metrics page. On this page, we can execute Prometheus queries. PromQL, the Prometheus query language, offers a simple, expressive way to query the time series that Prometheus has collected.

As an example, we’ll use a query for calculating the 99% quantile response time of the .NET application service:

histogram_quantile(0.99, sum by(le) (rate(http_request_duration_seconds_bucket[5m])))*1000

Figure 1 shows the response-time graph generated by Prometheus.

Figure 1: A graph of the .NET application service’s 99% quantile response time.

PromQL queries

In case you’re not familiar with PromQL, let’s dissect this query. We’re using the http_request_duration_seconds_bucket metric from the http_request_duration_seconds histogram that we saw under the /metrics path.

Because these values are ever-incrementing counters, we apply the rate operation over a five-minute window, which gives the per-second increase averaged over the last five minutes. The http_request_duration_seconds_bucket metric is split into a number of time series (per code, method, and so on). We don’t care about these individual series, so we sum them up, adding the by (le) clause to preserve the separate buckets that make up the histogram. Finally, we use histogram_quantile to obtain the 99% quantile value and multiply by 1000 to convert seconds to milliseconds.
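Building the query up step by step, each line below is a valid query on its own:

# 1. Per-second rate of each bucket counter, averaged over five minutes:
rate(http_request_duration_seconds_bucket[5m])

# 2. Sum away all labels except the bucket boundary (le):
sum by (le) (rate(http_request_duration_seconds_bucket[5m]))

# 3. Estimate the 99% quantile and convert seconds to milliseconds:
histogram_quantile(0.99, sum by (le) (rate(http_request_duration_seconds_bucket[5m]))) * 1000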

Conclusion

In this article, you’ve learned about using Prometheus to monitor .NET Core applications that are deployed on Kubernetes. If you wanted to continue with the example, you could use the metrics collected by Prometheus to generate alerts and view them in one or more Grafana dashboards. If you’re curious, check out OpenShift 4.3’s support for accessing Prometheus, the Alerting UI, and Grafana via the web console.

How to fix .NET Core’s ‘Unable to obtain lock file access’ error on Red Hat OpenShift

Well, it finally happened. Despite the added assurances of working with containers and Kubernetes, the old “It works on my machine” scenario reared its ugly head in my .NET Core (C#) code. The image that I created worked fine on my local PC—a Fedora 32 machine—but it crashed when I tried running it in my Red Hat OpenShift cluster.

The error was “Unable to obtain lock file access on /tmp/NuGetScratch.” Let’s take a quick look at what happened, and then I’ll explain how I fixed it.

Identity issues

After a lot of web searching and a discussion with a Red Hat .NET Core engineer, I discovered the underlying problem. It turns out that within a container, the identity used to initially run the program (using the dotnet run command) must be the same for subsequent runs.

The problem might be easy to understand, but what’s the solution?

Continue reading “How to fix .NET Core’s ‘Unable to obtain lock file access’ error on Red Hat OpenShift”

Profile-guided optimization in Clang: Dealing with modified sources

Profile-guided optimization (PGO) is a now-common compiler technique for improving the quality of compiled code. In PGO (sometimes pronounced “pogo”), a first version of the binary is used to collect a profile, through instrumentation or sampling, and that profile is then used to guide the compilation process.

Profile-guided optimization can help developers make better decisions, for instance, concerning inlining or block ordering. In some cases, it can also lead to using obsolete profile information to guide compilation. For reasons that I will explain, this feature can benefit large projects. It also puts the burden on the compiler implementation to detect and handle inconsistencies.

This article focuses on how the Clang compiler implements PGO, and specifically, how it instruments binaries. We will look at what happens when Clang instruments source code during the compilation step to collect profile information during execution. Then, I’ll introduce a real-world bug that demonstrates the pitfalls of the current approach to PGO.

Continue reading “Profile-guided optimization in Clang: Dealing with modified sources”

C# 8 asynchronous streams

.NET Core 3.1 (December 2019) includes support for C# 8, a new major version of the C# programming language. In this series of articles, we’ll look at the new features in .NET’s main programming language. This first article, in particular, looks at asynchronous streams. This feature makes it easy to create and consume asynchronous enumerables, so before getting into the new feature, you first need to understand the IEnumerable interface.

Note: C# 8 can be used with the .NET Core 3.1 SDK, which is available on Red Hat Enterprise Linux, Fedora, Windows, macOS, and on other Linux distributions.


A brief history of IEnumerable

The classic IEnumerable<T> has been around since .NET Framework 2.0 (2005). This interface provides us with a type-safe way to iterate over any collection.

The iteration is based on the IEnumerator<T> type:
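The article continues with the definition; as a simplified sketch (the real declarations live in System.Collections.Generic and also involve the non-generic IEnumerator base interface), the two interfaces look roughly like this:

// Simplified sketch of the iteration interfaces.
public interface IEnumerable<T>
{
    IEnumerator<T> GetEnumerator();
}

public interface IEnumerator<T> : IDisposable
{
    T Current { get; }     // the element at the current position
    bool MoveNext();       // advance; returns false when the sequence ends
}

A foreach loop compiles down to calls to GetEnumerator, MoveNext, and Current.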

Continue reading “C# 8 asynchronous streams”

.NET Core on Red Hat platforms

In this article, we look at the various ways .NET Core is made available on Red Hat platforms. We start with an overview of the available platforms, and then show how to install .NET Core on each of them.

Platform overview

Let’s start with the overview. If you are familiar with these platforms already, you can skip to the sections that cover specific platforms.

Operating systems

From an operating system point of view, we’ll look at four distributions:

  • Fedora is a community maintained distribution that moves fast. It is a great option for developers who want access to the latest development tools and the latest kernel.
  • Red Hat Enterprise Linux (RHEL) is a long-term support (LTS) distribution by Red Hat that is based on Fedora.
  • CentOS is a community maintained downstream rebuild of Red Hat Enterprise Linux.
  • CentOS Stream is a rolling preview distribution of future Red Hat Enterprise Linux versions.

All of these distributions are Free as in Freedom, so the software is available and open for change. Fedora and CentOS also have no cost (Free as in Beer). Red Hat Enterprise Linux requires a support subscription from Red Hat; for development purposes, however, you can use a no-cost Red Hat Developer subscription to download Red Hat Enterprise Linux, as well as other Red Hat products and tools.

Virtual machines

Most public cloud vendors support creating Red Hat Enterprise Linux- and CentOS-based virtual machines (VMs). The VMs based on RHEL are supported by Red Hat, and the VMs based on CentOS are supported by the CentOS community.

Containers

With container technology, operating systems are now also packaged into container images. These images contain packages from the OS, but not the kernel. Red Hat provides Red Hat Enterprise Linux 7- and RHEL 8-based images. Red Hat Enterprise Linux 8-based images are based on the Universal Base Image (UBI). For Red Hat Enterprise Linux 7, some images are provided as UBI images and others as non-UBI images. The RHEL 7 non-UBI images are hosted on registry.redhat.io and require subscription credentials to be pulled. The UBI images are hosted on registry.access.redhat.com and require no subscription for download. These images may be used in production without a Red Hat subscription, on any platform. However, when the UBI images run on a Red Hat Enterprise Linux platform, subscribers get support from Red Hat. The CentOS community also provides images that are hosted on registry.centos.org.
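For example, you can pull a UBI-based image anonymously (using the .NET Core 2.1 image name shown later in this article):

$ podman pull registry.access.redhat.com/ubi8/dotnet-21:2.1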

OpenShift

For running container-based applications, Red Hat provides Red Hat OpenShift, which is based on Kubernetes. Kubernetes provides the core functionality for scheduling containers across multiple machines. OpenShift packages Kubernetes in a form that is usable, deployable, and maintainable. Red Hat provides a supported version of OpenShift that can be deployed on-premises and in public clouds (e.g., Azure, AWS, and GCP). Red Hat OpenShift is based on the open source upstream OKD project. To run OpenShift on your development machine, you can use Red Hat CodeReady Containers (CRC) for OpenShift 4.x, and the Red Hat Container Development Kit (CDK) for OpenShift 3.x.

.NET Core on each platform

Now, let’s take a look at how to install .NET Core onto each Red Hat platform.

.NET Core on Fedora

.NET Core on Fedora is built by the Fedora .NET Special Interest Group (SIG). Because .NET Core doesn’t meet the Fedora packaging guidelines, it is distributed from a separate repository.

To install .NET Core on Fedora, you need to enable the Copr repository and install the SDK package:

$ sudo dnf copr enable @dotnet-sig/dotnet
$ sudo dnf install dotnet-sdk-2.1

When possible, the SIG also builds preview versions of .NET Core. These preview versions are distributed from a separate repository:

$ sudo dnf copr enable @dotnet-sig/dotnet-preview
$ sudo dnf install dotnet-sdk-3.1

For more information about running .NET Core on Fedora, see the Fedora .NET documentation.

.NET Core on Red Hat Enterprise Linux 7 and CentOS 7

On Red Hat Enterprise Linux 7 and CentOS 7, .NET Core versions are packaged as separate software collections (SCLs). This method of packaging allows .NET Core to include libraries that are newer than those provided by the base OS.

To install .NET Core on Red Hat Enterprise Linux, the machine needs to be registered with the Red Hat subscription management system. Depending on the OS flavor (e.g., Server, Workstation, or HPC Compute node), you need to enable the proper .NET Core repository:

$ sudo subscription-manager repos --enable=rhel-7-server-dotnet-rpms
$ sudo subscription-manager repos --enable=rhel-7-workstation-dotnet-rpms
$ sudo subscription-manager repos --enable=rhel-7-hpc-node-dotnet-rpms

Next, the SCL tooling needs to be installed:

$ sudo yum install scl-utils

Now we can install a specific version of .NET Core:

$ sudo yum install rh-dotnet21 -y

To use the .NET Core SCL, we first need to enable it. For example, we can run bash in the SCL by running:

$ scl enable rh-dotnet21 bash

For more information about running .NET Core on Red Hat Enterprise Linux 7, see the .NET Core Getting Started Guides.

.NET Core on Red Hat Enterprise Linux 8/CentOS 8

On the newer RHEL 8, .NET Core is included in the AppStream repositories, which are enabled by default. A version of .NET Core can be installed by running:

$ sudo dnf install dotnet-sdk-2.1

For more information about running .NET Core on Red Hat Enterprise Linux 8, see the .NET Core Getting Started Guides.

.NET Core in containers

Two images are provided for each .NET Core version: a runtime image that contains everything necessary to run a .NET Core application, and an SDK image that contains the runtime plus the tooling needed to build applications.

Table 1 shows the names of the images for .NET Core 2.1. For other versions, just change the version numbers in the name.

Table 1: .NET Core images for version 2.1.
Base image   SDK image                                             Runtime image
rhel7        registry.redhat.io/dotnet/dotnet-21-rhel7:2.1         registry.redhat.io/dotnet/dotnet-21-runtime-rhel7:2.1
centos7      registry.centos.org/dotnet/dotnet-21-centos7:latest   registry.centos.org/dotnet/dotnet-21-runtime-centos7:latest
ubi8         registry.access.redhat.com/ubi8/dotnet-21:2.1         registry.access.redhat.com/ubi8/dotnet-21-runtime:2.1

Note: Depending on the version, an image might not (yet) be available for a specific base. For example, CentOS images are made available by the CentOS community after Red Hat Enterprise Linux images are created by Red Hat.

The following example prints out the SDK version that is available in the .NET Core 2.1 ubi8 image:

$ podman run registry.access.redhat.com/ubi8/dotnet-21:2.1 dotnet --version
2.1.509

For more information about the .NET Core images, see the s2i-dotnetcore GitHub repo, and the .NET Core Getting Started Guides.

.NET Core on OpenShift

The .NET Core images provide an environment for running and building .NET Core applications. They are also compatible with OpenShift’s source-to-image (S2I) build strategy, which means that the OpenShift container platform can build a .NET Core application directly from source.

.NET images are imported in OpenShift using the OpenShift CLI client (oc) with an image stream definition file from the s2i-dotnetcore GitHub repo. The file differs depending on the base image, as shown in Table 2.

Table 2: .NET Core image stream definitions by base image.
Base image   Image stream definition
rhel7        https://raw.githubusercontent.com/redhat-developer/s2i-dotnetcore/master/dotnet_imagestreams.json
centos7      https://raw.githubusercontent.com/redhat-developer/s2i-dotnetcore/master/dotnet_imagestreams_centos.json
ubi8         https://raw.githubusercontent.com/redhat-developer/s2i-dotnetcore/master/dotnet_imagestreams_rhel8.json

Retrieving the Red Hat Enterprise Linux 7 images requires authentication. Registry Authentication describes how you can set up the pull secret.

The image streams can be imported using oc. For example, to import the ubi8-based images, run:

$ oc create -f https://raw.githubusercontent.com/redhat-developer/s2i-dotnetcore/master/dotnet_imagestreams_rhel8.json

Note: If there are already image streams present for .NET Core, you must use replace instead of create.
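For example, to replace the ubi8-based image streams with the definition URL from above:

$ oc replace -f https://raw.githubusercontent.com/redhat-developer/s2i-dotnetcore/master/dotnet_imagestreams_rhel8.json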

The s2i-dotnetcore repository includes a script for installing these image streams on Windows, Linux, and macOS. See Installing for more information. This script can also create the pull secret needed for pulling the Red Hat Enterprise Linux 7 images.

Once the image streams are installed, OpenShift can directly build and deploy an application from a Git repo, for example:

$ oc new-app dotnet:3.1~https://github.com/redhat-developer/s2i-dotnetcore-ex#dotnetcore-3.1 --context-dir=app

For more information about using .NET Core on OpenShift, see the corresponding chapter in the .NET Core Getting Started Guides.

Conclusion

In this article, you got an overview of the Red Hat platforms that support .NET Core, and how to install .NET Core on each of them.

Tracing .NET Core applications

In this article, we’ll look at different ways of collecting and inspecting events from the .NET Core runtime and base class library (BCL).

EventListener

The EventListener class allows us to receive events from the running application. Let’s learn how to use it with an example application. Our application performs an HTTP GET and prints the length of the received response.
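As a minimal sketch of the pattern (the article’s full example is behind the link; the event source name below is an assumption for illustration):

using System;
using System.Diagnostics.Tracing;

// Minimal EventListener sketch: subscribe to an EventSource by name
// and print every event it raises.
class ConsoleEventListener : EventListener
{
    protected override void OnEventSourceCreated(EventSource source)
    {
        // "Microsoft-System-Net-Http" is assumed here; substitute the
        // source whose events you want to observe.
        if (source.Name == "Microsoft-System-Net-Http")
        {
            EnableEvents(source, EventLevel.Informational, EventKeywords.All);
        }
    }

    protected override void OnEventWritten(EventWrittenEventArgs eventData)
    {
        Console.WriteLine($"{eventData.EventSource.Name}: {eventData.EventName}");
    }
}

Instantiate the listener (for example, using var listener = new ConsoleEventListener();) before the code you want to trace.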

Continue reading “Tracing .NET Core applications”
