Exploring an insurance use case with AI and Node.js
A summary of all the Node.js AI posts on the Parasol application.
Discover how NVIDIA MIG technology on Red Hat OpenShift AI enhances GPU resource utilization.
Want to use your personal or organizational data in AI workflows, but it's stuck in PDFs and other document formats? Docling is here to help. It's an open-source tool from IBM Research that converts files like PDFs and DocX into easy-to-use Markdown and JSON while keeping everything structured. In this video, join developer advocate Cedric Clyburn to see how it works. We'll walk through a demo using LlamaIndex for a question-answering app and share some interesting details and benchmarks. Let's dig in and see how Docling can make working with your data much easier for RAG, fine-tuning models, and more.
The rise of large language models (LLMs) has opened up exciting possibilities for developers looking to build intelligent applications. However, the process of adapting these models to specific use cases can be difficult, requiring deep expertise and substantial resources. In this talk, we'll introduce you to InstructLab, an open-source project that aims to make LLM tuning accessible to developers and data scientists of all skill levels, on consumer-grade hardware. We'll explore how InstructLab's innovative approach combines collaborative knowledge curation, efficient data generation, and instruction training to enable developers to refine foundation models for specific use cases. Through a live demonstration, you'll learn how IBM Research has partnered with Red Hat to simplify the process of enhancing LLMs with new knowledge and skills for targeted applications. Join us to explore how InstructLab is making LLM tuning more accessible, empowering developers to harness the power of AI in their projects.
Welcome back to Red Hat Dan on Tech, where Senior Distinguished Engineer Dan Walsh dives deep on all things technical, from his expertise in container technologies with tools like Podman and Buildah, to runtimes, Kubernetes, AI, and SELinux! In this episode, Senior Principal Software Engineer Sergio Lopez Pascual joins for a deep dive into Libkrun and Krunkit, how to get the most out of VM technology with containers, and much more!
Learn how to run distributed AI training on Red Hat OpenShift using RoCE with
Learn how to build a ModelCar container image and deploy it with OpenShift AI.
Let's take a look at how to effectively integrate generative AI into an existing application through the InstructLab project, an open-source methodology and community that makes LLM tuning accessible to all! Learn about the project and how InstructLab can help train a model on domain-specific skills and knowledge, then see how Podman's AI Lab allows developers to easily set up an environment for model serving and AI-enabled application development.
The Konveyor community has developed "Konveyor AI" (Kai), a tool that uses Generative AI to accelerate application modernization. Kai integrates large language models with static code analysis to facilitate code modifications within a developer's IDE, helping transition to technologies like Quarkus efficiently. This video provides a short introduction and demo showcasing the migration of the Java EE "coolstore" application to Quarkus using Konveyor AI.
Welcome back to Red Hat Dan on Tech, where Senior Distinguished Engineer Dan Walsh dives deep on all things technical, from his expertise in container technologies with tools like Podman and Buildah, to runtimes, Kubernetes, AI, and SELinux! Let's talk about tips & tricks when writing SELinux policies, and how you can use containers to your advantage! This weekly series will bring in guests from around the industry to highlight innovation and things you should know, and new episodes will be released right here, on the Red Hat Developer channel, each and every Wednesday at 9am EST! Stay tuned, and see you in the next episode!
Welcome to the new Red Hat Dan on Tech, where Senior Distinguished Engineer Dan Walsh dives deep on all things technical, from his expertise in container technologies with tools like Podman and Buildah, to runtimes, Kubernetes, AI, and SELinux! This weekly series will bring in guests from around the industry to highlight innovation and things you should know, and new episodes will be released right here, on the Red Hat Developer channel, each and every Wednesday at 9am EST! Stay tuned, and see you in the next episode!
Kickstart your generative AI application development journey with Podman AI Lab, an open-source extension for Podman Desktop to build applications with LLMs in a local environment. Podman AI Lab helps make AI more accessible and approachable, providing recipes for example use cases with generative AI, curated models sourced from Hugging Face, model serving with integrated code snippets, and a playground environment to test and adjust model performance. Learn more on Red Hat Developer https://developers.redhat.com/product... and download Podman Desktop today to get started!
Welcome back to Red Hat Dan on Tech, where Senior Distinguished Engineer Dan Walsh dives deep on all things technical, from his expertise in container technologies with tools like Podman and Buildah, to runtimes, Kubernetes, AI, and SELinux! Let's talk about Podman and containers when it comes to Systemd, and how technologies like Quadlet abstract the complexities of running containers under Systemd, featuring Principal Software Engineer Ygal Blum.
Let's take a look at how you can get started working with generative AI in your application development process using open-source tools like Podman AI Lab (https://podman-desktop.io/extensions/...) to help build and serve applications with LLMs, InstructLab (https://instructlab.ai) to fine-tune models locally from your machine, and OpenShift AI (https://developers.redhat.com/product...) to handle the operationalizing of building and serving AI on an OpenShift cluster.
Model Context Protocol (MCP) is a protocol that allows integration between
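As a minimal sketch of the idea: MCP messages are framed as JSON-RPC 2.0, so a client request can be built with nothing but the standard library. The helper name below is hypothetical, and `tools/list` is used here as an illustrative method name rather than a full treatment of the protocol.

```python
import json

def make_mcp_request(request_id, method, params=None):
    """Build a JSON-RPC 2.0 message of the kind an MCP client
    sends to a server (illustrative sketch, not a full client)."""
    msg = {"jsonrpc": "2.0", "id": request_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

# Example: ask a server which tools it exposes.
request = make_mcp_request(1, "tools/list")
print(request)
```

A real client would send this over the transport the server speaks (stdio or HTTP) and correlate the response by `id`; the sketch only shows the message shape.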
The Red Hat Node.js Team shares their review of 2024 and what lies ahead for
Integrating large language models into applications is an important skill for
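Many locally served models (including those served from tools like Podman AI Lab) expose an OpenAI-compatible HTTP endpoint, so integration often starts with building a chat-completions request. The sketch below constructs such a request with only the standard library; the base URL, model name, and helper function are placeholder assumptions, and the request is built but not sent.

```python
import json
import urllib.request

def build_chat_request(base_url, model, prompt):
    """Construct (but do not send) an OpenAI-style chat-completions
    request for a locally served model. URL and model are placeholders."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("http://localhost:8000", "my-local-model", "Hello!")
print(req.full_url)
```

Sending the request with `urllib.request.urlopen(req)` would return a JSON body whose completion text lives under `choices[0].message.content` in the OpenAI-style response shape.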
Headed to DeveloperWeek? Visit the Red Hat Developer booth on-site to speak to our expert technologists.
Explore the benefits of open source AI models and tools and learn how Red Hat OpenShift AI helps you build innovative AI-based applications in this e-book.
This tutorial demonstrates how to use Jupyter Notebooks within Red Hat OpenShift
LLM fine-tuning is the process of adjusting a pre-trained large language model
Learn the basics of Kubernetes, Ansible, AI, and more with these popular learning paths, and get hands-on experience using the no-cost Developer Sandbox.
This year's top articles on AI include an introduction to GPU programming, a guide to integrating AI code assistants, and the KServe open source project.
In our previous blog post, we introduced the RamaLama project, a bold initiative