Cedric Clyburn (@cedricclyburn), Senior Developer Advocate at Red Hat, is an enthusiastic software technologist with a background in Kubernetes, DevOps, and container tools. He has experience speaking at and organizing conferences including DevNexus, WeAreDevelopers, The Linux Foundation, KCD NYC, and more. Cedric loves all things open source and works to make developers' lives easier! Based out of New York.
Want to use your personal or organizational data in AI workflows, but it's stuck in PDFs and other document formats? Docling is here to help. It's an open-source tool from IBM Research that converts files like PDFs and DOCX into easy-to-use Markdown and JSON while keeping everything structured. In this video, join developer advocate Cedric Clyburn to see how it works. We'll walk through a demo using LlamaIndex for a question-answering app and share some interesting details and benchmarks. Let's dig in and see how Docling can make working with your data so much easier for RAG, fine-tuning models, and more.
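For a taste of what the demo covers, here's a minimal sketch of the Docling conversion step. It assumes the `docling` package is installed, and the file path is a placeholder; for the question-answering portion, the resulting Markdown (or Docling's reader integration) can then be fed into a LlamaIndex pipeline.

```python
def convert_to_markdown(path: str) -> str:
    """Convert a document (PDF, DOCX, ...) to structured Markdown with Docling."""
    # Lazy import so the function can be defined without docling present.
    from docling.document_converter import DocumentConverter

    converter = DocumentConverter()
    result = converter.convert(path)  # parses layout, tables, reading order
    return result.document.export_to_markdown()

# Example (requires docling installed and a real file):
# print(convert_to_markdown("report.pdf")[:500])
```

The Markdown output keeps headings, tables, and reading order intact, which is exactly what makes it useful as chunk-ready input for RAG.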
The rise of large language models (LLMs) has opened up exciting possibilities for developers looking to build intelligent applications. However, the process of adapting these models to specific use cases can be difficult, requiring deep expertise and substantial resources. In this talk, we'll introduce you to InstructLab, an open-source project that aims to make LLM tuning accessible to developers and data scientists of all skill levels, on consumer-grade hardware. We'll explore how InstructLab's innovative approach combines collaborative knowledge curation, efficient data generation, and instruction training to enable developers to refine foundation models for specific use cases. Through a live demonstration, you'll learn how IBM Research has partnered with Red Hat to simplify the process of enhancing LLMs with new knowledge and skills for targeted applications. Join us to explore how InstructLab is making LLM tuning more accessible, empowering developers to harness the power of AI in their projects.
Let's take a look at how to effectively integrate generative AI into an existing application through the InstructLab project, an open-source methodology and community that makes LLM tuning accessible to all! Learn about the project and how InstructLab can help train a model on domain-specific skills and knowledge, then how Podman AI Lab allows developers to easily set up an environment for model serving and AI-enabled application development.
Kickstart your generative AI application development journey with Podman AI Lab, an open-source extension for Podman Desktop to build applications with LLMs in a local environment. Podman AI Lab helps make AI more accessible and approachable, providing recipes for example use cases with generative AI, curated models sourced from Hugging Face, model serving with integrated code snippets, and a playground environment to test and adjust model performance. Learn more on Red Hat Developer https://developers.redhat.com/product... and download Podman Desktop today to get started!
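Once Podman AI Lab is serving a model locally, it exposes an OpenAI-compatible endpoint, and the extension's integrated code snippets show the exact URL to use. As a hedged sketch (the port and model name below are placeholders; copy the real values from the AI Lab UI), a chat request can be sent with nothing but the Python standard library:

```python
def ask(prompt: str, base_url: str = "http://localhost:8000/v1") -> str:
    """Send a chat prompt to a locally served model via its
    OpenAI-compatible API. The base_url and model name are
    assumptions -- use the snippet Podman AI Lab generates for you."""
    import json
    import urllib.request

    payload = json.dumps({
        "model": "local-model",  # placeholder; AI Lab shows the served model's name
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Example (requires a model running in Podman AI Lab):
# print(ask("Summarize what Podman Desktop does."))
```

Because the endpoint follows the OpenAI wire format, the same application code works unchanged when you later point it at a remote model service.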
Let's take a look at how you can get started working with generative AI in your application development process using open-source tools like Podman AI Lab (https://podman-desktop.io/extensions/...) to help build and serve applications with LLMs, InstructLab (https://instructlab.ai) to fine-tune models locally from your machine, and OpenShift AI (https://developers.redhat.com/product...) to operationalize building and serving AI on an OpenShift cluster.
Podman Desktop is a free, open-source tool that lets developers work with containers and Kubernetes from their local environment. It offers an easy-to-use dashboard to interact with and manage containers, images, pods, and more, powered by the Podman container engine. For developers and IT operations, Podman Desktop streamlines the container development lifecycle and bridges to Kubernetes environments for simplified development and testing of containerized applications.
The rapid advancement of generative artificial intelligence (gen AI) has unlocked incredible opportunities. However, customizing and iterating on large language models (LLMs) remains a complex and resource-intensive process. Training and enhancing models often involves creating multiple forks, which can lead to fragmentation and hinder collaboration.