Local LLMs on macOS in 2026: when they're worth the GPU
Local LLMs got dramatically better in 2025-2026. They're competitive with frontier APIs for some workflows, but not all. Here's the honest picture.
7 posts
AI dev workflows put unusual demands on file management — parallel agents, generated artifacts, fast iteration. Here are the seven file managers worth your attention in 2026, ranked by fit.
Both tools edit files. They have very different opinions about which files, how many at once, and where their working memory lives. Here's the honest comparison from someone who runs both daily.
OpenAI's Codex CLI doesn't get the attention Claude Code or Cursor does, but it's surprisingly capable for terminal-native workflows. The honest review.
Claude skills are reusable agent capabilities. They're powerful — but writing one for the wrong workflow is wasted effort. Here's the practical guide.
Cursor's Composer and Agent mode look similar but optimize for different work. Composer is for in-flow edits; Agent is for delegated multi-step tasks. The decision tree.
Model Context Protocol matured in 2025-2026. Here are the seven MCP servers that earn their setup cost for AI dev workflows.