Blog
-
Building a Home AI Inference Node, Part 2: Adding llama.cpp and Going From Chat Server to Research Workstation
Part 1 made the node stable. Part 2 makes it useful for research: adding llama.cpp as a second runtime, GGUF model control, and the OpenAI-compatible API that ties it together.
-
Building a Home AI Inference Node, Part 1: WSL, Ollama, and the Windows Problems Nobody Mentions
What I actually ran into turning a Windows desktop with an RTX 3060 Ti into a persistent local AI server accessible from my MacBook over LAN, and why the hardest problems had nothing to do with AI.