
How to run a Red Hat-powered local AI audio transcription

March 25, 2026
Seth Kenlon
Related topics:
Artificial intelligence, Linux, Open source, Python, Security
Related products:
Red Hat AI Inference Server, Red Hat AI, Red Hat Enterprise Linux

    In my opinion, one of the best use cases of AI is audio transcription. As a wordsmith by nature, I'm frequently disappointed by generative AI, but I find AI inference extremely useful. I consider it the missing component between the input you provide and the input you actually mean to provide. This is useful for speech recordings, where background noise, microphone dynamics, or poor compression can distort words. AI inference can infer the most probable meaning of what is otherwise difficult to hear.

    However, audio transcription as a service presents a potential privacy risk since it requires sending your audio file to an external server. I will demonstrate how, with just a few Python and Git commands, you can easily run a local audio transcription application, powered by an open source model from Red Hat AI. Once installed, you can use it without an Internet connection because it's entirely local, and your audio never leaves your computer.

    How to set up and run the application

    Open a terminal, and follow along.

    First, install uv. The uv application is a Python package and project manager, similar to pip but with many additional features.

    Download the install script as follows:

    $ curl -LsSf https://astral.sh/uv/install.sh -O

    Review the script and then run it:

    $ bash ./install.sh

    Create a virtual environment

    Create a Python virtual environment for your work, then activate it:

    $ uv venv --seed whisper-example
    $ source whisper-example/bin/activate

    Next, install Whisper. The project is published on PyPI as openai-whisper:

    $ uv pip install openai-whisper

    Install the HuggingFace command-line tool, hf, to make it easy to obtain new AI models:

    $ uv tool install hf

    Download the model

    Red Hat has tested and validated the RedHatAI/whisper-large-v3-turbo-FP8-dynamic model for performance and accuracy. This is one of the models you can run on the Red Hat AI Inference Server, which provides a supported open source solution that allows you to deploy your AI models on a variety of hardware and AI accelerators to match your specific infrastructure needs.

    You can download the model from its HuggingFace repository using the hf tool:

    $ hf download RedHatAI/whisper-large-v3-turbo-FP8-dynamic

    The model size is about 1 GB. When the download is complete, you will receive the location of the model as follows:

    Download complete: : 0.00B [00:00, ?B/s]
    /home/tux/.cache/huggingface/hub/models--RedHatAI--whisper-large-v3-turbo-FP8-dynamic/snapshots/e72a6dca29d039a5c9ea13e622e496ca61e85c34

    Take note of the model location for the next step.
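    If you'd rather not copy the long snapshot path by hand, you can resolve it programmatically. Here is a minimal Python sketch, assuming the default Hugging Face cache layout under ~/.cache/huggingface/hub (the helper name latest_snapshot is hypothetical, not part of any library):

```python
from pathlib import Path
from typing import Optional


def latest_snapshot(repo_id: str,
                    cache: Path = Path.home() / ".cache" / "huggingface" / "hub") -> Optional[Path]:
    """Return the most recently modified snapshot directory for a model, or None."""
    # The Hugging Face cache stores repos as models--<org>--<name>/snapshots/<revision>/
    repo_dir = cache / ("models--" + repo_id.replace("/", "--")) / "snapshots"
    if not repo_dir.is_dir():
        return None
    snapshots = sorted(repo_dir.iterdir(), key=lambda p: p.stat().st_mtime)
    return snapshots[-1] if snapshots else None


print(latest_snapshot("RedHatAI/whisper-large-v3-turbo-FP8-dynamic"))
```

    If more than one revision has been downloaded, this picks the newest one; it prints None when the model has not been downloaded yet.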

    Transcribe the audio

    Now that you have installed Whisper and downloaded a Red Hat AI model, you can transcribe an audio file. Keep in mind that you must activate your Python virtual environment before using this installation of Whisper. Unless you've closed your terminal window during the install and setup process, the virtual environment is still active.

    Assuming you have an audio recording called example.flac, you can transcribe it by providing Whisper with the base path to the Red Hat model and the path to the audio file:

    $ whisper --model_dir ~/.cache/huggingface/hub/models--RedHatAI--whisper-large-v3-turbo-FP8-dynamic ~/example.flac

    Example output:

    Detecting language using up to the first 30 seconds. Use `--language` to specify the language
    Detected language: English
    
    [00:00.000 --> 00:10.200] This is a test of the Whisper and Red Hat AI combination running on my Red Hat Enterprise Linux laptop.

    Next steps

    Open source AI allows you to keep your data and computing local. Using familiar tools on Red Hat Enterprise Linux and Fedora Linux, you can implement your own in-house audio transcription service. If you're a Python programmer, you can even use the Red Hat models with your applications. 
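    For example, the openai-whisper package also exposes a Python API, so a script can transcribe audio without shelling out to the CLI. Here is a minimal sketch, assuming openai-whisper is installed in the active virtual environment and that ~/example.flac exists; note that whisper.load_model() loads models by name, so this example uses the built-in turbo (large-v3-turbo) model rather than the Red Hat FP8 variant, which is typically served through vLLM or the Red Hat AI Inference Server instead:

```python
from pathlib import Path

import whisper  # Python API of the openai-whisper package

# "turbo" is openai-whisper's alias for the large-v3-turbo checkpoint;
# it is downloaded on first use and cached locally.
model = whisper.load_model("turbo")

# transcribe() accepts a file path and returns a dict
# containing the full "text" plus timestamped "segments".
result = model.transcribe(str(Path.home() / "example.flac"))
print(result["text"].strip())
```

    Like the CLI, this runs entirely on your machine; the audio file is never sent to an external service.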

    Visit Red Hat AI on HuggingFace for more information about the available models. Check out the Red Hat AI Inference Server page to learn how you can deploy AI-powered applications.
