Oleg Silkin

Oleg Silkin's contributions

Red Hat AI
Article

Unsloth and Training Hub: Lightning-fast LoRA and QLoRA fine-tuning

Aditi Saluja +2

Learn how to fine-tune large language models in enterprise environments with Training Hub, an open source library for LLM post-training. Discover the benefits of LoRA and QLoRA using Unsloth, including reduced VRAM requirements and faster training times.

Article

Granite, LIMO, and small LLM reasoning

Akash Srivastava +8

An attempt to reproduce R1-like reasoning in small LLMs: the LIMO dataset proved ineffective for Llama and Granite, while synthetic data generation shows promise, though fine-tuning remains tricky.

Article

How particle filtering makes small LLMs think big

Akash Srivastava +8

An update on reproducing R1-like reasoning in small LLMs: Granite models show big gains with particle filtering, outperforming GPT-4o on benchmarks.

Article

GPU enablement on MicroShift

Oleg Silkin

MicroShift is a low-footprint alternative to OpenShift. Learn how to enable GPU support on MicroShift so your workloads can take advantage of GPU computing power.