
How to use LLMs in Java with LangChain4j and Quarkus

Creating an AI-powered blog analyzer

February 7, 2024
Helber Belmiro
Related topics:
Artificial intelligence, Integration, Quarkus
Related products:
Red Hat build of Quarkus

    In the ever-evolving landscape of artificial intelligence (AI), large language models (LLMs) have emerged as a game-changer, transforming how we interact with and derive insights from textual data. As a Java developer, diving into the world of AI might seem intimidating, but it doesn't have to be! This tutorial is your gateway to harnessing the power of LLMs through the integration of LangChain4j and Quarkus.

    Exploring the capabilities of LangChain4j and Quarkus

    LangChain4j, a Java library specializing in natural language processing (NLP), will play a crucial role in this tutorial. Using LangChain4j with Quarkus, a cloud-native, container-first framework, we will develop a tool proficient in analyzing and summarizing the content of blog posts. 

    The objective here is not just to create a tool, but rather to provide Java developers with the skills to seamlessly integrate LLMs, comprehend the nuances of AI, and refine their skill set through practical application.

    Prerequisites

    Before diving into the tutorial, make sure you have the following prerequisites in place:

    • OpenAI account: Ensure that you have an active OpenAI account.
    • API Key: Generate an API key from your OpenAI account. This key is essential for accessing OpenAI's services and will be used throughout the tutorial.
    • Credits: Confirm that your OpenAI account has sufficient credits to cover the usage of language models.

    Create the application

    To create a Quarkus application, run the following Maven command:

    mvn io.quarkus.platform:quarkus-maven-plugin:3.6.6:create \
        -DprojectGroupId=com.hbelmiro.demos \
        -DprojectArtifactId=intelligent-java-blog-reader \
        -Dextensions='resteasy-reactive'
    cd intelligent-java-blog-reader

    With your application created, you can start writing the code, beginning with the class that reads the blog post.

    Parse the HTML

    You need to create a class that reads the HTML content from a URL the user will specify. You can do that with Jsoup, an HTML parser included in Quarkus.

    To make the process faster and avoid unnecessary charges, send OpenAI only the HTML element that you know contains the blog content. In this example, prepare your application to read the Red Hat blog, where the content is inside a div element with the rh-push-content-main class. So, create a class named WebCrawler that gets the first HTML element of class rh-push-content-main and returns its HTML.

    package com.hbelmiro.demos.intelligentjavablogreader;
    
    import jakarta.enterprise.context.ApplicationScoped;
    import org.jsoup.Jsoup;
    import org.jsoup.nodes.Document;
    
    import java.io.IOException;
    import java.io.UncheckedIOException;
    
    @ApplicationScoped
    class WebCrawler {
    
        String crawl(String url) {
            Document doc;
            try {
                doc = Jsoup.connect(url).get();
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
    
            return doc.body().getElementsByClass("rh-push-content-main").first().html();
        }
    }

    Now that you have the HTML where the blog content is, you can send it to OpenAI.

    Send the blog content to OpenAI

    LLMs work like a chat: you send a message (also known as a prompt) in natural language, and the model answers in natural language as well, or in whatever format you ask it to use.

    So, you'll need to tell the model what you want it to do with the HTML you'll send. You can send something like:

    You are an assistant that receives the body of an HTML page and sums up the article on that page. Add key takeaways to the end of the sum-up. Here's the HTML:
    {html}
    That's it. You can sum up the article and add key takeaways to the end of the sum up.

    Info alert:

    The messages/prompts used in this application were tested with the gpt-3.5-turbo model. Using a different model may require changes to the messages/prompts, since different models can have different interpretations and responses.

    However, there's a limit on the number of characters you can send in each message to the LLM. The model used in this example handles around 2,000 characters per prompt well, so you won't be able to send everything in a single message. You'll have to break the HTML into pieces of 2,000 characters each, send each piece in its own message, and finally ask the model to sum up the article for you.

    To do that, first prepare the LLM:

    You are an assistant that receives the body of an HTML page and sums up the article on that page. Add key takeaways to the end of the sum-up.
    The body will be sent in parts in the next requests. Don't return anything.

    Send each part of the HTML:

    Here's the next part of the body page:
    {html}.
    Wait for the next parts. Don't answer anything else.

    After sending all the parts, ask the model to sum up the article:

    That's it. You can sum up the article and add key takeaways to the end of the sum up.

    That's how your application will interact with the LLM. To do that, you'll use a library called LangChain4j. So, add the following dependency to your pom.xml:

    <dependency>
        <groupId>io.quarkiverse.langchain4j</groupId>
        <artifactId>quarkus-langchain4j-openai</artifactId>
        <version>0.6.3</version>
    </dependency>

    Then, create the service to interact with OpenAI. The service will contain three methods: one to prepare the model, one to send the HTML, and one to sum up the article:

    package com.hbelmiro.demos.intelligentjavablogreader;
    
    import dev.langchain4j.service.SystemMessage;
    import dev.langchain4j.service.UserMessage;
    import io.quarkiverse.langchain4j.RegisterAiService;
    
    @RegisterAiService
    public interface BlogReaderService {
    
        @SystemMessage("You are an assistant that receives the body of an HTML page and sums up the article on that page. Add key takeaways to the end of the sum-up.")
        @UserMessage("""
                    The body will be sent in parts in the next requests. Don't return anything.
                """)
        String prepare();
    
        @UserMessage("""
                    Here's the next part of the body page:
                    ```html
                    {html}
                    ```
                    Wait for the next parts. Don't answer anything else.
                """)
        String sendBody(String html);
    
        @UserMessage("""
                    That's it. You can sum up the article and add key takeaways to the end of the sum up.
                """)
        String sumUp();
    }

    Once you've created the AI service, you need to break the HTML into small pieces. Create the following class, which splits the text and returns a List with all the pieces:

    package com.hbelmiro.demos.intelligentjavablogreader;
    
    import jakarta.enterprise.context.ApplicationScoped;
    
    import java.util.ArrayList;
    import java.util.List;
    
    @ApplicationScoped
    class RequestSplitter {
    
        private static final int MAX_CHARACTERS = 2000;
    
        List<String> split(String text) {
            List<String> pieces = new ArrayList<>();
    
            if (text != null && !text.isEmpty() && MAX_CHARACTERS > 0) {
                int length = text.length();
    
                if (length <= MAX_CHARACTERS) {
                    return List.of(text);
                }
    
                int startIndex = 0;
                int endIndex = MAX_CHARACTERS;
    
                while (startIndex < length) {
                    String piece = text.substring(startIndex, endIndex);
                    pieces.add(piece);
                    startIndex = endIndex;
                    endIndex = Math.min(startIndex + MAX_CHARACTERS, length);
                }
            }
    
            return pieces;
        }
    }
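    As a quick sanity check of the chunking behavior, here is a standalone sketch (a hypothetical SplitDemo class, reimplementing the same loop outside the Quarkus application) showing how a text longer than the limit is divided:

    ```java
    import java.util.ArrayList;
    import java.util.List;

    // Standalone sketch of the same chunking loop used by RequestSplitter.
    public class SplitDemo {

        static List<String> split(String text, int maxChars) {
            List<String> pieces = new ArrayList<>();
            // Walk the string in fixed-size steps, clamping the final piece
            // to the end of the text.
            for (int start = 0; start < text.length(); start += maxChars) {
                pieces.add(text.substring(start, Math.min(start + maxChars, text.length())));
            }
            return pieces;
        }

        public static void main(String[] args) {
            List<String> pieces = split("a".repeat(4500), 2000);
            System.out.println(pieces.size());          // prints 3 (2000 + 2000 + 500)
            System.out.println(pieces.get(2).length()); // prints 500
        }
    }
    ```

    A 4,500-character text yields two full 2,000-character pieces plus a 500-character remainder, which is exactly what the loop in RequestSplitter produces.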

    Now you have everything you need to process the user's request. So, create the controller:

    package com.hbelmiro.demos.intelligentjavablogreader;
    
    import jakarta.inject.Inject;
    import jakarta.ws.rs.POST;
    import jakarta.ws.rs.Path;
    import jakarta.ws.rs.Produces;
    import jakarta.ws.rs.core.MediaType;
    import org.slf4j.Logger;
    
    import java.util.List;
    
    @Path("/")
    public class BlogReaderResource {
    
        private static final Logger LOGGER = org.slf4j.LoggerFactory.getLogger(BlogReaderResource.class);
    
        private final BlogReaderService blogReaderService;
    
        private final WebCrawler webCrawler;
    
        private final RequestSplitter requestSplitter;
    
        @Inject
        public BlogReaderResource(BlogReaderService blogReaderService, WebCrawler webCrawler, RequestSplitter requestSplitter) {
            this.blogReaderService = blogReaderService;
            this.webCrawler = webCrawler;
            this.requestSplitter = requestSplitter;
        }
    
        @Path("/read")
        @POST
        @Produces(MediaType.TEXT_PLAIN)
        public String read(String url) {
            // Read the HTML from the specified URL
            String content = webCrawler.crawl(url);
    
            LOGGER.info("\uD83D\uDD1C Preparing analysis of {}", url);
    
            // Prepare the model
            blogReaderService.prepare();
    
            // Split the HTML into small pieces
            List<String> split = requestSplitter.split(content);
    
            // Send each piece of HTML to the LLM
            for (int i = 0; i < split.size(); i++) {
                blogReaderService.sendBody(split.get(i));
                LOGGER.info("\uD83E\uDDD0 Analyzing article... Part {} out of {}.", (i + 1), split.size());
            }
    
            LOGGER.info("\uD83D\uDCDD Preparing response...");
    
            // Ask the model to sum up the article
            String sumUp = blogReaderService.sumUp();
    
            LOGGER.info("✅ Response for {} ready", url);
    
            // Return the result to the user
            return sumUp;
        }
    }

    Now you only need to configure your application.

    Configure the application

    You should set the OpenAI API key and configure timeouts to prevent errors, since each interaction with the model takes a few seconds.

    Add the following properties to your application.properties file:

    quarkus.http.read-timeout=120s
    quarkus.langchain4j.openai.timeout=1m
    # This application was tested with gpt-3.5-turbo. Using a different model may require changes to the prompts.
    quarkus.langchain4j.openai.chat-model.model-name=gpt-3.5-turbo
    quarkus.langchain4j.openai.api-key=<YOUR_API_KEY>

    Info alert: Note

    Set your API Key in the application.properties file only for testing purposes. For production environments, use something more secure like a Kubernetes Secret or an environment variable.
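    For example, Quarkus can read any configuration property from an environment variable using the standard MicroProfile Config naming convention (uppercase, with non-alphanumeric characters replaced by underscores), so the key can stay out of the file entirely. A minimal sketch, assuming a Unix-like shell:

    ```shell
    # quarkus.langchain4j.openai.api-key maps to this environment variable name
    export QUARKUS_LANGCHAIN4J_OPENAI_API_KEY=<YOUR_API_KEY>
    mvn quarkus:dev
    ```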

    Now you're ready to run your application.

    Run the application

    Start your application:

    mvn quarkus:dev

    With your application up and running, send the following request to analyze the post at https://www.redhat.com/en/blog/the-power-of-ai-is-open:

    curl -X 'POST' \
      'http://localhost:8080/read' \
      -d 'https://www.redhat.com/en/blog/the-power-of-ai-is-open'

    You should see an output similar to:

    Summary:
    The article emphasizes the significance of artificial intelligence (AI) in today's world and how enterprises can no longer ignore its potential. It discusses the various applications of AI, such as chatbots, financial fraud detection, and patient diagnostics. The article emphasizes the importance of operationalizing AI use cases and leveraging existing tools and processes to drive agility and efficiency. It also highlights the need to adhere to security, regulatory, compliance, and governance standards when implementing AI.
    
    Red Hat is introduced as a company that integrates open-source technologies with AI to help organizations solve problems effectively and quickly. Red Hat offers platforms to develop and deploy AI at scale, increasing efficiency and productivity. The article concludes by stating that Red Hat's enterprise-ready AI solutions make it possible to apply AI to everyday business.
    
    Key Takeaways:
    1. AI is a significant technology that enterprises cannot ignore.
    2. Operationalizing AI use cases and leveraging existing tools and processes is crucial for success.
    3. Adhering to security, regulatory, compliance, and governance standards is essential in AI implementation.
    4. Red Hat integrates open-source technologies with AI to solve problems effectively and quickly.
    5. Red Hat provides platforms for developing and deploying AI at scale, increasing efficiency and productivity.
    6. Red Hat's enterprise-ready AI solutions enable the application of AI to everyday business.
    
    These key takeaways highlight the importance of AI in driving organizational success and the role Red Hat plays in enabling AI implementation.

    Conclusion

    That's it! You just created a Java application that uses artificial intelligence. To go further with the LangChain4j Quarkus extension, read its documentation. You can use it to create more complex applications and to work with model providers other than OpenAI, such as Hugging Face and Ollama.
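    For instance, switching providers is largely a matter of swapping the extension dependency in your pom.xml (the artifact name below comes from the Quarkiverse LangChain4j project; check its documentation for the current coordinates and the provider-specific configuration properties):

    ```xml
    <dependency>
        <groupId>io.quarkiverse.langchain4j</groupId>
        <artifactId>quarkus-langchain4j-hugging-face</artifactId>
        <version>0.6.3</version>
    </dependency>
    ```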

    You can find the source code of the application you created on GitHub.

    Last updated: January 15, 2025
