Ask a Red Hatter to help migrate a monolithic legacy application, and they will likely ask, "Why not move to microservices?" While modernization is often the ideal path, it frequently conflicts with tight timelines and technical constraints. Rehosting—also known as lift and shift—is a migration pattern that moves an application without significantly changing the source code. This approach works well as a first step to prove viability on a new platform, such as Red Hat OpenShift, before making a full commitment.
There are many benefits to modernizing .NET applications as well. In this article, we'll deploy a .NET application to OpenShift using the lift-and-shift pattern and identify key considerations. I selected a sample e-commerce website that uses a services-based architecture coded in .NET Aspire 9. Check out the source code in this Git repository.
Creating the Containerfile
I start the containerization process by creating a Containerfile to build the application image using a multi-stage build. In this approach, I use a Software Development Kit (SDK) image, such as the Red Hat .NET 9.0 SDK image. This image contains all the tooling needed to build the container. A common practice involves compiling the application to a smaller, lightweight runtime image, like the Red Hat .NET 9.0 Runtime image. However, to adhere to the lift-and-shift philosophy of minimal code changes, I opted to both compile and run the application using the SDK image.
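For contrast, a conventional multi-stage build publishes with the SDK image and then copies only the output into the runtime image. Here is a minimal sketch of that pattern — the project name and paths are illustrative, and this is not the approach used in this article:

```dockerfile
# Build stage: compile and publish with the SDK image
FROM registry.access.redhat.com/ubi8/dotnet-90:9.0 AS build
WORKDIR /opt/app-root/src
COPY --chown=1001:0 . .
RUN dotnet publish MyApp.csproj -c Release -o /opt/app-root/publish

# Runtime stage: copy only the published output into the lighter runtime image
FROM registry.access.redhat.com/ubi8/dotnet-90-runtime:9.0
WORKDIR /opt/app-root
COPY --from=build /opt/app-root/publish .
ENTRYPOINT ["dotnet", "MyApp.dll"]
```

The result is a smaller final image that contains no build tooling, at the cost of restructuring how the application is launched.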
Let's take a look at the Containerfile itself. First, I declare the desired starting image:
```dockerfile
FROM registry.access.redhat.com/ubi8/dotnet-90:9.0 AS build
```

Next, I set the SSL_CERT_DIR variable to allow OpenSSL to trust my base image's certificates directory. I also set the ASPIRE_CONTAINER_RUNTIME variable to instruct .NET Aspire to use Podman instead of Docker within my image, as outlined in the Microsoft documentation:
```dockerfile
ENV SSL_CERT_DIR=$HOME/.aspnet/dev-certs/trust:/etc/pki/tls/certs \
    ASPIRE_CONTAINER_RUNTIME=podman
```

I then set the working directory for the source code to /opt/app-root/src, as this directory is readable and writable by the non-root user:

```dockerfile
WORKDIR /opt/app-root/src
```

As a security best practice, application containers running on OpenShift should run as a non-root user. However, my monolithic application itself launches containers. Because I wrap it inside an OpenShift container (also known as the Podman-in-Podman approach), I must ensure that the outer container runs rootless Podman and maps additional UIDs and GIDs so it can interact with multiple user namespaces. I also set file capabilities on the newuidmap and newgidmap binaries to elevate privileges for the extra users and groups:
```dockerfile
USER root
# Use rootless podman to launch app processes
RUN microdnf install openssl podman -y && \
    usermod --add-subuids 100000-165535 default && \
    usermod --add-subgids 100000-165535 default && \
    setcap cap_setuid+eip /usr/bin/newuidmap && \
    setcap cap_setgid+eip /usr/bin/newgidmap && \
    chmod -R g=u /etc/subuid /etc/subgid
```

To force my application image to run as a non-root user, I switch from root to user 1001. I then copy the application code to the source directory:
```dockerfile
USER 1001
COPY --chown=1001:0 / /opt/app-root/src
```

Next, I need to trust the ASP.NET Core development certificate and, finally, build the application:
```dockerfile
RUN dotnet dev-certs https --trust && \
    dotnet build src/eShop.AppHost/eShop.AppHost.csproj -c Release
```

The port declaration here is unnecessary because OpenShift defines the exposed ports through the application's service. However, I declare the port as a best practice to document the author's intent and show users which port to configure:
```dockerfile
EXPOSE 19888
```

A typical multi-stage build would swap out the SDK image for a lightweight runtime image. However, upon inspecting the base .NET project file, I found nested references to packages and other project files. Because I am using a lift-and-shift approach, I kept the nested references intact. I run the application from the project definition rather than a compiled DLL, so I decided to use the SDK image as the runtime image as well. My Containerfile's default executable looks as follows:
```dockerfile
ENTRYPOINT ["bash", "-c", "dotnet run -c Release --project src/eShop.AppHost/eShop.AppHost.csproj --no-build"]
```

As a side note, when I attempted to build the Containerfile using the podman build command, I received an error stating that the requested SDK version must be 9.0.200, as specified in the original application's global.json file. The Red Hat .NET SDK image provides version 9.0.110, so I updated global.json for compatibility with the lower version.
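For illustration, the updated global.json might pin a version the Red Hat SDK image can satisfy — the exact change in the repository may differ:

```json
{
  "sdk": {
    "version": "9.0.110",
    "rollForward": "latestFeature"
  }
}
```

With rollForward set to latestFeature, the build accepts the pinned version or a later feature band installed in the image, which avoids hard failures on minor SDK version drift.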
The complete Containerfile can be referenced below:
```dockerfile
FROM registry.access.redhat.com/ubi8/dotnet-90:9.0 AS build
ENV SSL_CERT_DIR=$HOME/.aspnet/dev-certs/trust:/etc/pki/tls/certs \
    ASPIRE_CONTAINER_RUNTIME=podman
WORKDIR /opt/app-root/src
USER root
# Use rootless podman to launch app processes
RUN microdnf install openssl podman -y && \
    usermod --add-subuids 100000-165535 default && \
    usermod --add-subgids 100000-165535 default && \
    setcap cap_setuid+eip /usr/bin/newuidmap && \
    setcap cap_setgid+eip /usr/bin/newgidmap && \
    chmod -R g=u /etc/subuid /etc/subgid
USER 1001
COPY --chown=1001:0 / /opt/app-root/src
# Trust the ASP.NET Core HTTPS dev cert and build the .NET project
RUN dotnet dev-certs https --trust && \
    dotnet build src/eShop.AppHost/eShop.AppHost.csproj -c Release
EXPOSE 19888
# Building the nested project file creates several .dll binaries in respective folders. I opted for a
# lift & shift approach to run the project file, but a preferred approach would be to refactor as microservices and
# use a multistage build to execute a single binary with the .NET runtime image per container
ENTRYPOINT ["bash", "-c", "dotnet run -c Release --project src/eShop.AppHost/eShop.AppHost.csproj --no-build"]
```

Building and deploying the application
After completing the Containerfile, I create a set of basic OpenShift objects to handle the build and deployment. You can find them in the manifests directory in the repository. For more details on these objects, refer to the official OpenShift documentation.
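As a sketch of what one of these objects might look like — the object name and repository URL below are illustrative, not necessarily the repository's exact definitions — a BuildConfig using the Docker strategy can build the image directly from the Containerfile:

```yaml
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: eshop
spec:
  source:
    type: Git
    git:
      uri: https://github.com/example/eshop-aspire.git  # illustrative URL
  strategy:
    type: Docker
    dockerStrategy:
      dockerfilePath: Containerfile
  output:
    to:
      kind: ImageStreamTag
      name: eshop:latest
```

The dockerStrategy's dockerfilePath is what lets OpenShift pick up a file named Containerfile instead of the default Dockerfile.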
Note that the application provides its own self-signed certificate, so I define the route as a passthrough route. Also, the application must trust the default .NET developer certificate. Because this certificate is trusted per user, the application must run using a static user ID (declared in the image as 1001). It also requires container capabilities to run Podman within Podman. The default restricted-v2 SecurityContextConstraint (SCC) is insufficient, so I opt to use the nonroot SCC to meet these requirements.
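For reference, a passthrough route hands the TLS connection straight to the pod so the application can present its own certificate. A minimal sketch — the service name and target port are assumptions:

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: eshop
spec:
  to:
    kind: Service
    name: eshop        # assumed service name
  port:
    targetPort: 8443   # assumed TLS port on the pod
  tls:
    termination: passthrough
```

With passthrough termination, the router never decrypts traffic, so the self-signed certificate inside the container is the one clients see.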
Finally, I need to expose the application endpoints to make them accessible from outside the container. I change the launchSettings.json configuration from localhost to 0.0.0.0 (a wildcard * would have also sufficed).
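The change amounts to editing the applicationUrl entries in launchSettings.json. The profile name and port below are illustrative:

```json
{
  "profiles": {
    "https": {
      "commandName": "Project",
      "applicationUrl": "https://0.0.0.0:19888"
    }
  }
}
```

Binding to 0.0.0.0 makes Kestrel listen on all interfaces, which is required for traffic arriving through the pod's network rather than the loopback device.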
Because I have already defined my Kubernetes objects, I simply need to deploy the contents of the manifests directory to the cluster. I can do this with two simple commands:

```shell
oc new-project eshop
oc apply -f manifests/
```

After the objects are created, the deployment pod will fail with a CrashLoopBackOff status until the build pod completes the image build. After a few minutes, the application will be healthy and accessible through the route.
Conclusion
This blog demonstrates a quick and (relatively) painless migration strategy for a .NET application onto OpenShift. Now that the application is on the platform, I can take advantage of all the benefits this move has to offer.
While out of scope for this blog, the following next steps are worth exploring:
- Set up a CI/CD pipeline to simplify or even automate the build and deployment process.
- Configure the monitoring stack to track metrics and send alerts.
- Design a network policy to limit access and reduce security risk.
Finally, as mentioned in the introduction, lift-and-shift is not meant to replace a more robust modernization effort. I recommend exploring a microservices architecture once the basic deployment is complete.