
How We Use AI Agents to Manage Infrastructure

We run AI agents in production for CI monitoring, code review, content, and project management. Here's what works, what doesn't, and what we've learned.

Most companies talking about AI agents are still in the demo phase. We've been running them in production since early 2026. Not as experiments — as team members with defined roles, real responsibilities, and output that ships. Here's an honest look at what works, what doesn't, and what we've learned.

The Team

Our engineering team includes AI agents alongside humans. Vigil monitors every CI pipeline and catches failures before they reach production. Ryan writes application code across all our platform apps. Sage builds and maintains the website you're reading right now — content, SEO, translations. Shuttle tracks project progress and flags blockers.

These aren't chatbots answering support tickets. They're specialised agents connected to our actual tools through MCP — the Model Context Protocol — which gives them direct access to Git, databases, deployment pipelines, and project management systems.

Why MCP Changes Everything

The reason AI agents work for us is MCP. Instead of copy-pasting context into a chat window, each agent has native access to the tools it needs. Vigil can read CI logs, trigger reruns, and diagnose failures without anyone forwarding a link. Sage can create pages, set SEO metadata, and publish content without anyone touching a CMS dashboard.

This isn't prompt engineering. It's systems integration. The agents are effective because they can take actions, not just generate text.
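To make "take actions, not just generate text" concrete, here is a minimal, self-contained sketch of the pattern an MCP server follows: advertise named tools with descriptions, then dispatch an agent's calls to them. This is not the MCP SDK and none of the tool names below come from our stack; it only illustrates the shape of the integration.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str
    handler: Callable[..., str]

class ToolServer:
    """Minimal stand-in for an MCP server: advertises tools, dispatches calls."""

    def __init__(self) -> None:
        self._tools: dict[str, Tool] = {}

    def register(self, name: str, description: str):
        """Decorator that registers a function as a callable tool."""
        def deco(fn):
            self._tools[name] = Tool(name, description, fn)
            return fn
        return deco

    def list_tools(self) -> list[tuple[str, str]]:
        # The agent reads this list to learn what it is allowed to do.
        return [(t.name, t.description) for t in self._tools.values()]

    def call(self, name: str, **kwargs) -> str:
        # The agent invokes a tool by name with keyword arguments.
        return self._tools[name].handler(**kwargs)

# A CI-flavoured server with two illustrative tools an agent like Vigil might see.
ci = ToolServer()

@ci.register("get_pipeline_status", "Return the status of a CI pipeline run")
def get_pipeline_status(pipeline_id: str) -> str:
    # In production this would query the CI system's API; stubbed here.
    return f"pipeline {pipeline_id}: failed"

@ci.register("rerun_pipeline", "Trigger a rerun of a failed pipeline")
def rerun_pipeline(pipeline_id: str) -> str:
    return f"pipeline {pipeline_id}: rerun queued"
```

The point of the pattern is that the tool list is the contract: the agent can only act through what the server exposes, which is also what makes the scoping discussed below enforceable.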

Where Agents Shine

Agents excel at repetitive, well-defined tasks. CI monitoring is a perfect example: check whether the build passed, diagnose the failure if it didn't, alert the right person. Content updates are another: translate a page into Danish, set meta descriptions, verify publishing stages. These are tasks a human could do but shouldn't have to, because the pattern is predictable and the cost of a mistake is low.
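As a rough illustration of that triage pattern (a hypothetical helper, not Vigil's actual code), the decision logic might look like:

```python
def triage_build(build: dict) -> str:
    """Decide what to do with a finished CI build: ok, retry, or alert."""
    if build["status"] == "passed":
        return "ok"
    log = build.get("log", "").lower()
    # Known, low-risk failure patterns are retried automatically.
    if "timeout" in log or "connection reset" in log:
        return "retry"
    # Genuine test failures go to the commit author.
    if "assert" in log or "test failed" in log:
        return "alert:author"
    # Anything unrecognised is escalated to a human on call.
    return "alert:oncall"
```

For example, `triage_build({"status": "failed", "log": "connection reset by peer"})` returns `"retry"`, while a failure the rules don't recognise escalates to a human instead of being retried blindly.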

The productivity gain is real. Work that used to wait for a human to context-switch now happens continuously in the background.

Where They Don't

Agents are bad at ambiguity. When a CI failure has an obvious cause, Vigil nails it. When it's a subtle race condition that only manifests under load, a human needs to step in. Agents are also bad at taste — they can write competent copy, but they need a human to decide whether the tone is right for the audience.

We've also learned that agents need guardrails. An agent with unrestricted access to production systems is a liability, not an asset. Every agent in our team has scoped permissions, review checkpoints, and a human who owns its output.
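A minimal sketch of that scoping idea, assuming a simple allowlist (the agent names match the post, but the action names and API are invented for illustration):

```python
# Each agent gets an explicit allowlist of actions; anything outside it
# is refused before execution. Action names here are invented examples.
AGENT_SCOPES = {
    "vigil": {"ci:read_logs", "ci:rerun", "alerts:send"},
    "sage": {"cms:create_page", "cms:set_meta", "cms:publish"},
}

def authorize(agent: str, action: str) -> bool:
    """Return True only if the action is in the agent's allowlist."""
    return action in AGENT_SCOPES.get(agent, set())

def perform(agent: str, action: str) -> str:
    if not authorize(agent, action):
        raise PermissionError(f"{agent} is not allowed to perform {action}")
    # ... dispatch to the real tool here ...
    return f"{agent}: {action} executed"
```

The allowlist only bounds what an agent can attempt; review checkpoints and a human owner for each agent's output sit on top of it.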

The Business Case

The AI team is why we can offer the level of service we do at the price we charge. A small team with AI agents can cover more ground than a traditional team of five — not because the AI is smarter, but because it doesn't sleep, doesn't forget to check the monitoring dashboard, and doesn't need to context-switch.

The Bottom Line

AI agents aren't going to replace sysadmins or developers. But they're very good at handling the routine work that keeps infrastructure running while humans focus on the hard problems.

Curious how this works in practice?

Every interaction with My Server Guy is backed by this team. Try a health check or get in touch — you'll see the agents at work.

Get in touch