Introduction to supervised fine-tuning dataset formats

August 18, 2025
Fynn Schmitt-Ulms
Related topics: Artificial intelligence
Related products: Red Hat AI, Red Hat Enterprise Linux AI


    Large language models (LLMs) are conventionally trained in two phases. In the first, pre-training, the model sees gigantic corpora of unlabeled text from the internet and is trained to predict the next token. In the second, post-training, the model sees data that resembles the behavior of a chatbot and learns to predict subsequent tokens that align with that behavior. Supervised fine-tuning (SFT) is the most common post-training technique used to improve the alignment of LLMs after pre-training.

    SFT uses datasets of formatted prompt-response pairs or simulated conversations to familiarize the model with chat/assistant-style interactions. You can format these datasets in many ways for various purposes, which we will explore in this article.

    Unfortunately, there isn't much consistency in input formats. Some standards have emerged (particularly for chat conversation datasets), but data keys often have different names, and datasets may include extra fields that may or may not be relevant to the text generation task. Nevertheless, we can group these dataset formats into general buckets. Before doing so, it is useful to understand how these formats are used to produce model training samples.

    What the model sees

    As with pre-training, the model loss during SFT is simply a measure of next-token prediction error against the ground-truth next token. Therefore, our model inputs must consist of flat tokenized sequences and their corresponding labels, which are usually the same sequence with some tokens masked out. If our dataset consists of prompt-response pairs, we need to format them into concatenated strings, typically using a template with special tokens to denote which sections are part of the user prompt versus the chatbot response.
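    The masking step can be sketched in a few lines of Python. The token IDs below are made up for illustration, and the -100 mask value follows the common convention of PyTorch's CrossEntropyLoss ignore_index; only the response tokens contribute to the loss:

```python
# Build one training sample from a tokenized prompt/response pair.
# Token IDs here are invented for illustration.
prompt_ids = [101, 7, 42]     # tokens of the templated user prompt
response_ids = [9, 15, 2]     # tokens of the assistant response

input_ids = prompt_ids + response_ids
# Mask prompt positions so only response tokens contribute to the loss
# (-100 is the conventional ignore index in PyTorch's CrossEntropyLoss).
labels = [-100] * len(prompt_ids) + response_ids

print(input_ids)  # [101, 7, 42, 9, 15, 2]
print(labels)     # [-100, -100, -100, 9, 15, 2]
```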

    This is where the Jinja templating library comes in. It defines a templating language that lets developers create chat/prompt templates and later fill them with inputs. A chat template looks something like this:

    {% for message in messages %}
      {% if message['role'] == 'user' %}
        {{ '<|user|>\n' + message['content'] + eos_token }}
      {% elif message['role'] == 'system' %}
        {{ '<|system|>\n' + message['content'] + eos_token }}
      {% elif message['role'] == 'assistant' %}
        {{ '<|assistant|>\n'  + message['content'] + eos_token }}
      {% endif %}
    {% endfor %}
    {% if add_generation_prompt %}
      {{ '<|assistant|>' }}
    {% endif %}

    Applying this template to the following data:

    [
      {
        "role": "system",
        "content": "You are a helpful assistant who answers questions respectfully and honestly."
      },
      {
        "role": "user",
        "content": "How are language model inputs formatted?"
      }
    ]

    The template produces the following formatted sequence:

    <|system|>
    You are a helpful assistant who answers questions respectfully and honestly.</s>
    <|user|>
    How are language model inputs formatted?</s>
    <|assistant|>

    If you'd like to verify this yourself or play around with these templates, you can visit this playground. If you are fine-tuning a model that has already been instruct/chat tuned, this template likely already exists and is available through the model's tokenizer. For example, the "chat_template" entry at the bottom of granite-3.2-8b-instruct's tokenizer_config.json defines the template for that instruct model. When the template exists and is accessible, it is important to reuse it for further training so that the model sees only a single, consistent template format.
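    You can also reproduce the rendering above locally with the jinja2 Python package (assuming it is installed; it is the same library Hugging Face tokenizers use to apply chat templates). The template here is the one shown earlier, collapsed onto one line with explicit newlines so the output matches exactly:

```python
from jinja2 import Template

# The chat template from above, written as one string so no stray
# whitespace from source-code indentation leaks into the output.
CHAT_TEMPLATE = (
    "{% for message in messages %}"
    "{% if message['role'] == 'user' %}"
    "{{ '<|user|>\n' + message['content'] + eos_token + '\n' }}"
    "{% elif message['role'] == 'system' %}"
    "{{ '<|system|>\n' + message['content'] + eos_token + '\n' }}"
    "{% elif message['role'] == 'assistant' %}"
    "{{ '<|assistant|>\n' + message['content'] + eos_token + '\n' }}"
    "{% endif %}"
    "{% endfor %}"
    "{% if add_generation_prompt %}{{ '<|assistant|>' }}{% endif %}"
)

messages = [
    {"role": "system",
     "content": "You are a helpful assistant who answers questions respectfully and honestly."},
    {"role": "user",
     "content": "How are language model inputs formatted?"},
]

rendered = Template(CHAT_TEMPLATE).render(
    messages=messages, eos_token="</s>", add_generation_prompt=True
)
print(rendered)
```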

    For a prompt-response style dataset, the template may be simpler, perhaps only wrapping the prompt in "start prompt" and "end prompt" special tokens and appending the response.

    After templating, the final data transformation step is to split the templated sequence into tokens, which are mapped to their corresponding token indices. These indices are then used to select the token embedding (typically a floating-point vector) for each token.
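    As a toy illustration of that last step (the vocabulary, the whitespace splitting, and the two-dimensional embeddings below are all made up; real tokenizers use subword algorithms and much larger learned embedding tables):

```python
# Made-up vocabulary mapping token strings to indices.
vocab = {"<|user|>": 0, "<|assistant|>": 1, "Hello": 2, "world": 3, "</s>": 4}
# One tiny embedding vector per index; real models learn vectors with
# hundreds or thousands of dimensions.
embeddings = {0: [0.1, 0.0], 1: [0.0, 0.1], 2: [0.5, 0.2],
              3: [0.3, 0.7], 4: [0.0, 0.0]}

text = "<|user|> Hello world </s>"
tokens = text.split()                    # naive whitespace "tokenizer"
token_ids = [vocab[t] for t in tokens]   # [0, 2, 3, 4]
inputs = [embeddings[i] for i in token_ids]
```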

    What we see

    Starting at the file level, SFT datasets are often stored in JSON or JSONL files. Some datasets may also be compressed or stored in Parquet files, an efficient column-based storage format.
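    Reading a JSONL file takes only a few lines, since each line is an independent JSON object. In this sketch a StringIO stands in for an open file; with a real dataset you would pass an open file handle instead (the contents below are made up):

```python
import io
import json

def read_jsonl(lines):
    # One JSON object per line; skip blank lines.
    return [json.loads(line) for line in lines if line.strip()]

# StringIO stands in for an open file handle on a real .jsonl dataset.
data = io.StringIO(
    '{"prompt": "What is SFT?", "response": "Supervised fine-tuning."}\n'
    '{"prompt": "Hi", "response": "Hello!"}\n'
)
rows = read_jsonl(data)
print(len(rows))  # 2
```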

    Within these files, there are a few general types of dataset formats:

    1. Chat formats: Entries are lists of dicts containing "content" and "role" values forming a conversation.

    2. Instruct formats: Entries consist of "prompt" and "response" pairs. In some cases, the "prompt" is actually made up of two values "instruction" and "input."

    3. Text only: You can think of these as datasets where the prompt/response/chat has already been processed into the template; entries consist of a single formatted string with pre-established separators between the parts of the conversation.

    Examples of each type are shown below. For each of these, however, there are many similar datasets that use different key names, carry extra metadata, or, in some cases, include extra data fields with additional context for the prompt.

    Take a look at these examples of various chat formats:

    OpenAI:

    {
        "messages": [
            {
                "role": "system" or "user" or "assistant",
                "content": "...",
            },
            ...
        ]
    },
    ...

    ShareGPT:

    {
        "conversations": [
            {
                "from": "system" or "human" or "gpt",
                "value": "...",
            },
            ...
        ]
    },
    ...
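    Because these chat schemas differ mainly in key names, converting between them is mechanical. Here is a sketch that normalizes a ShareGPT-style entry to the OpenAI-style format (the role mapping is an assumption based on the examples above):

```python
# Map ShareGPT speaker names to OpenAI-style roles.
ROLE_MAP = {"system": "system", "human": "user", "gpt": "assistant"}

def sharegpt_to_openai(entry):
    # Rename "conversations"/"from"/"value" to "messages"/"role"/"content".
    return {
        "messages": [
            {"role": ROLE_MAP[turn["from"]], "content": turn["value"]}
            for turn in entry["conversations"]
        ]
    }

sample = {"conversations": [{"from": "human", "value": "Hi!"},
                            {"from": "gpt", "value": "Hello."}]}
converted = sharegpt_to_openai(sample)
```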

    Likewise, here are a few examples of instruct formats:

    Alpaca:

    [
        {
            "instruction": "...",
            "input": "..." or an empty string "",
            "output": "...",
        },
        ...
    ]

    Prompt-response:

    {
        "prompt": "...",
        "response": "...",
    },
    ...

    The templates for text datasets can vary substantially, but the important thing is that the entries have already been pre-processed and use special tokens to separate the different components.

    For example, with start and end prompt tokens:

    {
        "text": "<INST> ... </INST>: ...",
    },

    Or with different tokens to indicate different speakers:

    {
        "text": "<human> ... <bot>: ...",
    },
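    A prompt-response entry can be flattened into this kind of text format with a one-line template (the <human>/<bot> separators follow the example above; any consistent set of special tokens would do):

```python
def to_text(entry):
    # Flatten a prompt/response pair into a single pre-templated string.
    return {"text": f"<human> {entry['prompt']} <bot>: {entry['response']}"}

row = to_text({"prompt": "What is SFT?", "response": "Supervised fine-tuning."})
print(row["text"])  # <human> What is SFT? <bot>: Supervised fine-tuning.
```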

    Final thoughts

    While there is no single dataset format in use, the AI community has begun to establish a few standards, particularly for chat-style datasets. This standardization makes it easier to train different model-dataset combinations. At Red Hat, we use Jinja templating and standardized dataset formats in our InstructLab training library to ensure that models are always fine-tuned with consistent prompting templates. To learn more about LLM post-training at Red Hat, visit InstructLab.
