Rich Naszcyniec
Rich Naszcyniec's contributions
Article
Vibes, specs, skills, and agents: The four pillars of AI coding
Rich Naszcyniec
Explore the four pillars of AI coding: vibes, specs, skills, and agents, and learn how they can improve coding quality and reduce the encoding/decoding gap. Discover the benefits of a spec-driven approach and the importance of modular specs and skills in keeping the four pillars working in harmony.
Article
How spec-driven development improves AI coding quality
Rich Naszcyniec
Learn how to implement spec coding, a structured approach to AI-assisted development that combines human expertise with AI efficiency.
Article
Integrate vLLM inference on macOS/iOS with Alamofire and Apple Foundation
Rich Naszcyniec
Learn how to establish communication with vLLM using Apple Foundation and Alamofire for low-level HTTP interactions in macOS and iOS applications.
Article
Integrate vLLM inference on macOS/iOS using OpenAI APIs
Rich Naszcyniec
Discover how to communicate with vLLM using the OpenAI spec as implemented by the SwiftOpenAI and MacPaw/OpenAI open source projects.
Article
Integrate vLLM inference on macOS/iOS with Llama Stack APIs
Rich Naszcyniec
Learn to build a chatbot that leverages vLLM for generative AI inference. This guide provides source code and steps to connect to a Llama Stack server using the Llama Stack Swift SDK.
Article
How to integrate vLLM inference into your macOS and iOS apps
Rich Naszcyniec
vLLM empowers macOS and iOS developers to build powerful AI-driven applications by providing a robust and optimized engine for running large language models.