Research & infrastructure
Responsible AI, from the inside out.
Hard-won lessons at the intersection of AI security, adversarial evaluation, and agentic systems — written by a practitioner for practitioners.
Latest
- Building a Home AI Inference Node, Part 2: Adding llama.cpp and Going From Chat Server to Research Workstation
  Part 1 made the node stable. Part 2 makes it useful for research — adding llama.cpp as a second runtime, GGUF model control, and the OpenAI-compatible API that ties it together.
- Building a Home AI Inference Node, Part 1: WSL, Ollama, and the Windows Problems Nobody Mentions
  What I actually ran into while turning a Windows desktop with an RTX 3060 Ti into a persistent local AI server accessible from my MacBook over LAN — and why the hardest problems had nothing to do with AI.
Stay in the loop
New posts on AI security, responsible AI evaluation, and agentic systems — no noise, no cadence pressure. When there's something worth reading, it lands in your inbox.