<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>responsible-ai.blog</title>
    <description>Research, infrastructure, and hard-won lessons at the intersection of AI security, responsible AI, and agentic systems.</description>
    <link>https://responsible-ai.blog/</link>
    <item>
      <title>Building a Home AI Inference Node, Part 1: WSL, Ollama, and the Windows Problems Nobody Mentions</title>
      <link>https://responsible-ai.blog/blog/home-ai-node-part-1/</link>
      <guid isPermaLink="true">https://responsible-ai.blog/blog/home-ai-node-part-1/</guid>
      <description>What I actually ran into turning a Windows desktop with an RTX 3060 Ti into a persistent local AI server accessible from my MacBook over LAN, and why the hardest problems had nothing to do with AI.</description>
      <pubDate>Sun, 10 May 2026 00:00:00 GMT</pubDate>
    </item>
    <item>
      <title>Building a Home AI Inference Node, Part 2: Adding llama.cpp and Going From Chat Server to Research Workstation</title>
      <link>https://responsible-ai.blog/blog/home-ai-node-part-2/</link>
      <guid isPermaLink="true">https://responsible-ai.blog/blog/home-ai-node-part-2/</guid>
      <description>Part 1 made the node stable. Part 2 makes it useful for research — adding llama.cpp as a second runtime, GGUF model control, and the OpenAI-compatible API that ties it together.</description>
      <pubDate>Tue, 12 May 2026 00:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>