Build Your First NetBox AI Agent: Workshop Recap

By Kris Beevers

Last week, I hosted a live workshop with a couple hundred participants from around the NetBox community in which we dug into the emerging world of AI-driven NetBox Agents. We got hands-on, reviewing the code of the NetBox MCP server and walking through a small “compliance” agent. We then built a completely new agent live – just by changing the system prompt in our compliance agent.

This post distills that session into a guide you can follow at your own pace. We also captured some of the key themes from the lively discussion with the live audience, answering common questions about building NetBox Agents in the FAQ at the end of this post.

You can also register to watch the recording of the live Build Your First NetBox AI Agent workshop here.

Repos we used:

  • The NetBox MCP server
  • The netbox-agent-compliance example agent

Why NetBox + Agents? (The 60-second mental model)

  • Agents are conceptually simple: a loop around an LLM that (a) reasons, (b) calls tools, (c) looks at results, and (d) repeats until done.
  • NetBox is the semantic map (knowledge graph / ontology) of your infrastructure. Agents need context; NetBox provides it in a normalized, widely understood model (devices, interfaces, IPAM, racks, etc.).
  • MCP (Model Context Protocol) gives LLMs a normalized way to use tools – think “APIs for LLMs.” Our NetBox MCP server exposes a small set of tools so agents can interact with NetBox safely and consistently to gather and reason about infrastructure context.

In short: LLM ↔ Agent Loop ↔ MCP Tools ↔ NetBox. The agent’s “intelligence” comes from the LLM, its “hands” are MCP tools, and NetBox is the map it navigates.
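To make that loop concrete, here is a minimal sketch in Python. This is not the workshop code – `call_llm` and the tool stubs are hypothetical placeholders – but it shows the shape of the loop that every agent in this post follows.

```python
# Illustrative agent loop (not the workshop code). `call_llm` and the tool
# bodies are hypothetical stubs; in a real agent the tools come from the
# NetBox MCP server and the LLM call goes to your model provider.

def call_llm(messages: list[dict]) -> dict:
    """Hypothetical LLM call: returns either a final answer or a tool request."""
    raise NotImplementedError

TOOLS = {
    # Read-only tools provided by the NetBox MCP server (more on these below).
    "get_objects": lambda object_type, filters: ...,
    "get_object": lambda object_type, id: ...,
    "get_changelog": lambda filters: ...,
}

def run_agent(task: str, max_steps: int = 20) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_llm(messages)                       # (a) reason
        if reply.get("tool") is None:
            return reply["content"]                      # (d) done
        result = TOOLS[reply["tool"]](**reply["args"])   # (b) call a tool
        messages.append({"role": "tool", "content": str(result)})  # (c) look at results
    return "Stopped: exceeded max_steps without finishing."
```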

NetBox Agents needn’t be complex – we’ll dig into a very simple example agent shortly that, despite its simplicity, is valuable for real-world needs. At NetBox Labs, in addition to supporting the emerging NetBox Agent ecosystem, we’ve been experimenting with and building production-grade agents: NetBox Operator, our experimental product for agentic AI operations, and NetBox Copilot, our AI assistant for NetBox, available today in public preview.

What’s in the NetBox MCP Server?

The NetBox MCP server was first released in early 2025 with the goal of enabling experimentation across the community. To keep that experimentation safe, it exposes a simple set of read-only tools:

  • get_objects(object_type, filters) – fetch lists of objects
  • get_object(object_type, id) – fetch a single object
  • get_changelog(filters) – fetch recent changes

These tools let an LLM pull exactly what it needs, using the intrinsic knowledge that large foundation models have of NetBox and its data models, without drowning the LLM’s limited context window in a giant OpenAPI spec or tools for every object type. Good error messages – whether generated by the MCP server or passed through from NetBox – help LLMs self-correct (this matters a lot in practice).
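If you want to see what those tool calls look like outside of an agent, here is a rough sketch using the MCP Python SDK to start the server over stdio and call `get_objects`. The launch command, arguments, and filter values are assumptions – follow the NetBox MCP README for the exact invocation.

```python
# Sketch: calling the NetBox MCP server's read-only tools directly with the
# MCP Python SDK. The command/args used to launch the server are placeholders;
# check the NetBox MCP README for the real invocation.
import asyncio
import os

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server = StdioServerParameters(
    command="uv",               # placeholder launch command
    args=["run", "server.py"],  # placeholder entry point
    env={
        "NETBOX_URL": os.environ["NETBOX_URL"],
        "NETBOX_TOKEN": os.environ["NETBOX_TOKEN"],
    },
)

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Fetch a filtered list of objects, e.g. active devices at one site.
            result = await session.call_tool(
                "get_objects",
                {"object_type": "devices",
                 "filters": {"site": "your-site-slug", "status": "active"}},
            )
            print(result.content)

asyncio.run(main())
```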

Our approach has been to watch and learn from the community’s engagement with NetBox MCP, and from our other agentic work – especially NetBox Copilot. We’re now preparing a number of improvements based on how the community has used and extended NetBox MCP, and on the refined tools in NetBox Copilot, which make more effective use of filters, field projection, pagination, and other strategies for minimizing context explosion – especially important in large infrastructure environments with lots of data in NetBox.

The Example Agent: Natural-Language Compliance Checks

In our workshop, we reviewed a reference agent (netbox-agent-compliance) that is deliberately lightweight. It:

  • Accepts a natural-language rule like “all devices should follow a common naming convention.”
  • Lets the model decide how to interpret that rule against your NetBox data.
  • Uses the MCP server tools to fetch needed objects.
  • Produces a Markdown PASS/FAIL report with supporting details.

The example agent leverages OpenAI’s Agents SDK for the core agent loop; LiteLLM for provider abstraction so you can easily try out different LLMs; and the NetBox MCP server to connect the agent with context in NetBox. The agent’s entire “behavior” is driven by the system prompt, which demonstrates how easy it is to build your own NetBox Agents: change just the prompt, and you can turn the same code into a “docs completeness checker,” “VLAN hygiene checker,” “rack capacity snapshot,” etc., without touching the loop, the CLI, or the MCP client.
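The broad wiring looks roughly like the sketch below. This is a simplified illustration rather than the repo’s actual code: the model string, server launch command, and prompt import are placeholders, and the real agent also restricts which MCP tools may be called.

```python
# Simplified sketch of the three pieces: the OpenAI Agents SDK loop, LiteLLM
# for model abstraction, and the NetBox MCP server as the tool source.
# Names, paths, and the server launch command are placeholders -- see the
# example repo for the real thing.
import asyncio
import os

from agents import Agent, Runner
from agents.extensions.models.litellm_model import LitellmModel
from agents.mcp import MCPServerStdio

from prompts import SYSTEM_PROMPT  # the agent's entire "behavior" lives here

async def main() -> None:
    # Launch the NetBox MCP server as a subprocess and expose its tools.
    async with MCPServerStdio(
        params={
            "command": "uv",                # placeholder launch command
            "args": ["run", "server.py"],   # placeholder entry point
            "env": {
                "NETBOX_URL": os.environ["NETBOX_URL"],
                "NETBOX_TOKEN": os.environ["NETBOX_TOKEN"],
            },
        },
    ) as netbox_mcp:
        agent = Agent(
            name="compliance-agent",
            instructions=SYSTEM_PROMPT,
            # LiteLLM lets you swap providers by changing this one string.
            model=LitellmModel(model="openai/gpt-4.1",
                               api_key=os.environ["OPENAI_API_KEY"]),
            mcp_servers=[netbox_mcp],
        )
        result = await Runner.run(
            agent, "Check: all devices should follow a common naming convention."
        )
        print(result.final_output)  # Markdown PASS/FAIL report

asyncio.run(main())
```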

Hands-On Tutorial: Run & Tweak the Agent

You’ll need:

  • Access to a NetBox instance with some data. NetBox Cloud’s free plan is a quick path if you don’t have one – visit netboxlabs.com and click “Sign up for free”
  • NetBox API token with read privileges
  • An LLM provider key (OpenAI, Anthropic, etc.)
  • The NetBox MCP server working locally (see its README)
  • uv (or your preferred Python environment tool)

1) Set up the MCP server

Follow the README in the NetBox MCP repo to make sure it runs locally. You’ll need to provide a couple of environment variables:

  • NETBOX_URL (e.g., https://demo.netbox.dev/)
  • NETBOX_TOKEN (your API token)

You may want to try out the MCP server with an MCP client like Claude Desktop or Cursor – it can be very useful to chat directly with your NetBox data. (This step is optional: our agent will start up the MCP server on its own.)

2) Clone & Set Up the Compliance Agent

Follow the README in the compliance agent repo to clone it and set up its environment. The agent launches the MCP server for you and whitelists which tools it can call for safety.

3) Run the Agent to Check a Compliance Rule

You’ll see the agent loop run: it tries a strategy, calls MCP tools, self-corrects if needed, and finally prints a Markdown report (PASS/FAIL + details). Try some different compliance rules, and study the agent’s code, which is full of embedded commentary. You’ll find it’s very simple: mostly setting up the agent loop and MCP server, with everything “interesting” driven by the system prompt in prompts.py.

4) Make It Your Agent (Prompt-Only Change)

Open the prompt file (prompts.py in the agent repo).

Replace the content with your desired behavior. For example, during the workshop we used a prompt that turned the agent into a Documentation Completeness Checker, auditing devices to ensure required metadata fields are well documented.
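That exact prompt isn’t reproduced here, but a sketch in the same spirit looks like this (the `SYSTEM_PROMPT` name and the required-field list are assumptions, borrowing the fields from the “Docs Complete” idea later in this post):

```python
# prompts.py -- illustrative sketch only, not the exact workshop prompt.
SYSTEM_PROMPT = """
You are a documentation completeness auditor for NetBox.

Audit every device and check that these fields are populated: serial number,
asset tag, role, platform, and primary IP. Use the available tools
(get_objects, get_object) to fetch devices; use filters and pagination to
keep result sets small.

Produce a Markdown report containing:
- a table of devices with any missing fields,
- a completeness score per device and overall,
- a final PASS/FAIL verdict with a short explanation.
"""
```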

Now run the same command as before (with the above prompt, the CLI-specified “rule” doesn’t matter).

That’s it. Same code, new behavior. Of course, you can go further: rename the agent’s command, align the rest of the code with the agent’s purpose, adjust command-line parameters as needed, and so on. You can also craft more complex prompts that take multiple parameters or implement richer behavior – there’s a lot you can accomplish with thoughtful prompting, a simple agent loop, and the tools provided by NetBox MCP. Or pull in tools from other MCP servers to connect your agents with more of your systems. The best way to learn is by experimenting – go wild!

A Few Prompt-Driven Agent Ideas (you can build these in minutes)

With the simple approach we just used, you can build all kinds of NetBox Agents. Here are a few ideas to get you started in your experimentation:

  1. IPAM Health Snapshot
    • Find overlapping prefixes; flag prefixes >80% utilized; list unused IPs by VRF/site.
    • Output: short Markdown summary + actionable list.
  2. “Docs Complete” Device Checklist (see above example)
    • Find devices missing: serial, asset tag, role, platform, primary IP.
    • Output: table with a completeness score.
  3. Orphaned Interfaces & Dead Ends
    • Surface active interfaces without cables/LAG membership; empty console/front/back ports.
    • Output: device-grouped findings and cleanup tips.
  4. VLAN Hygiene
    • VLANs defined but unused (no member interfaces, no SVIs/IPs); duplicates across sites.
    • Output: list of safe-to-retire candidates.
  5. Rack Capacity at a Glance
    • RU utilization per rack; flag >80% full and show empty slot pockets.
    • Output: table + “placement tip” for the next 2U/4U device.
  6. Role/Platform Sanity Drift
    • Flag mismatches (e.g., role=switch but zero L2 ports; role=router but no L3 IPs).
    • Output: list with one-line fix suggestions.

Each of these NetBox Agent examples is simple enough to craft using the existing compliance agent code — just by swapping the system prompt.

Did you build something interesting? Clean it up, publish it on GitHub, and let us know! We’ll help amplify your NetBox Agents within the broader community so we can all learn from each other.

Production-Readiness: What Changes Later?

The workshop agent is educational, not production-grade. If you move toward production, you’ll want to mature your agent. Here is some of what goes into building a production-grade NetBox Agent, like NetBox Copilot:

  • Evals: treat them like tests for non-deterministic systems. Track traces, expected/acceptable outputs, and regressions as you change prompts/models (see the sketch after this list).
  • Observability: capture tool-call logs, error rates, timeouts, retry counts, token usage.
  • Guardrails: whitelist tools and scopes; restrict writes; least-privilege tokens.
  • Context Management: implement context compression, memory, and other context management techniques.
  • Orchestration: if you deploy many small agents, you’ll want job queues, scaling policies, and lifecycle controls.
  • Model Portability: the example uses a thin abstraction (LiteLLM) so you can try OpenAI, Anthropic, local models, etc.
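For example, a first pass at evals can be as simple as a table of rules with known outcomes, run on a schedule and diffed whenever you change the prompt or model. The `run_compliance_agent` helper below is hypothetical; swap in however you invoke your agent.

```python
# Minimal eval sketch: pin a handful of rules whose outcomes you know, run the
# agent, and fail loudly on regressions. `run_compliance_agent` is a
# hypothetical wrapper around however you invoke your agent.
EVAL_CASES = [
    {"rule": "all devices should follow a common naming convention", "expect": "PASS"},
    {"rule": "every device must have a primary IP assigned", "expect": "FAIL"},  # known gap in your data
]

def run_compliance_agent(rule: str) -> tuple[str, str]:
    """Hypothetical: runs the agent and returns (verdict, markdown_report)."""
    raise NotImplementedError

def run_evals() -> None:
    failures = []
    for case in EVAL_CASES:
        verdict, report = run_compliance_agent(case["rule"])
        if verdict != case["expect"]:
            failures.append({"rule": case["rule"], "expected": case["expect"], "got": verdict})
    if failures:
        raise SystemExit(f"Eval regressions: {failures}")
    print(f"All {len(EVAL_CASES)} eval cases passed.")
```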

Where We’re Headed with NetBox Agents

We’re thrilled with the increasing activity we’re seeing in the community around the NetBox MCP server and NetBox Agents. We believe NetBox Agents will ultimately become an integral part of the NetBox ecosystem. Our strategy is simple:

  1. Support the community — we’ll deepen our investment in NetBox MCP, deliver more education and example agents, and amplify the great work on AI and Agents that is happening across the NetBox community.
  2. Make building NetBox Agents easy — best practices around agentic AI continue to evolve at a breakneck pace. We’re on top of it, and we’ll continue to prioritize tooling for implementing effective agents with NetBox.
  3. Build sophisticated Agents — our work on NetBox Operator and NetBox Copilot is just a start. Building powerful Agents at NetBox Labs will continue to result in great new features and products for our customers and community, while also informing our investments and support of the broader NetBox Agent ecosystem.

We want to hear from you on your experimentation with NetBox Agents and the NetBox MCP server. Try the above tutorial: clone the compliance agent, adjust it to deliver new behaviors and outcomes, and share. Join us on Slack and tell us what worked (and didn’t). Agents don’t need to be heavyweight to be useful. With NetBox as your semantic map and MCP tools as your safe interface, you can build something genuinely helpful in minutes — and get smarter from there.


Frequently Asked NetBox Agent Questions

We were thrilled by the active conversation that played out with the live audience in our workshop last week. There were a ton of great questions, and we often hear similar questions in conversations across the community. Here are some of the most common themes:

Accuracy & Model Behavior

How accurate are NetBox Agents on complex tasks? How much hallucination should we expect? Modern LLMs are much better than a year ago, but they remain non-deterministic. Expect occasional misreads or brittle steps – especially with fuzzy prompts or large scopes. Accuracy improves with tighter prompts, smaller result sets (filters/pagination), and better error messages from tools (we’re working on improvements to the NetBox MCP server now with exactly that in mind, based on our approaches in NetBox Copilot).

Are LLMs “trained to understand” the NetBox semantic map? NetBox is open source; its docs and schema are widely available. Foundation models generally “know” core NetBox concepts (devices, interfaces, prefixes, VLANs, etc.). We couple the intrinsic knowledge of large models about NetBox with careful instruction in MCP tools to help LLMs best work with NetBox data.

Are you fine-tuning models for NetBox? Not for the examples we’ve worked through. We rely on system prompts + tools + context. Fine-tuning may make sense for specific, repetitive tasks later, but prompt-and-tools already gets you very far for interactive agents.

Will the LLM secretly call APIs beyond MCP? No – agents like our example or like NetBox Copilot only have access to the tools they’re provided. In our example, we whitelist read-only MCP tools. The model can “reason,” but it cannot escape the tool sandbox.

Does the model compute internally or use external tools? Basic arithmetic and reasoning are done “in-model.” Data fetching, filtering, and structure come from MCP tools (NetBox API behind the scenes). If you want external computation, expose it as another MCP tool.
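For instance, exposing a small calculation as an extra MCP tool can look like this sketch using FastMCP from the MCP Python SDK (the server and tool names are made up for illustration):

```python
# Sketch: exposing external computation as an MCP tool with FastMCP.
# The server and tool names here are illustrative, not part of NetBox MCP.
from ipaddress import ip_network

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("netbox-helper-tools")

@mcp.tool()
def prefix_utilization(prefix: str, used_addresses: int) -> float:
    """Return percent utilization of a prefix given a count of used addresses."""
    total = ip_network(prefix, strict=False).num_addresses
    return round(100 * used_addresses / total, 2)

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio so any MCP client or agent can call it
```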

What about memory? Can agents get better over time? The demo agent does not persist memory; it’s stateless by design. Production agents often add short-term traces and long-term memory stores (e.g., summaries of prior runs, cached object lookups) to improve stability and reduce redundant calls.

Integration & Interoperability

Can I integrate NetBox Agents with chat UIs or multi-agent systems (A2A)? Yes, you certainly can build agents that integrate in these ways. NetBox Copilot is a little different for now, as it’s specifically meant to be an interactive agent embedded directly in any NetBox, and it doesn’t support A2A – but other NetBox Agents built with NetBox MCP can easily be integrated into multi-agent setups. (If you build something like this, let us know – we’d love to feature it!)

Can NetBox Agents built with NetBox MCP make changes to my NetBox data? Our public NetBox MCP server is read-only today on purpose, but there are several community forks you can try that experiment with write operations. We’ll be adding write tools – very carefully, as we want to ensure safety connecting NetBox Agents with critical infrastructure data.

Can I use any LLM, including local models (e.g., Mistral)? Yes. The example uses LiteLLM so you can point at OpenAI, Anthropic, or dozens of other model providers including local models through Ollama or other mechanisms.
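With the example agent, switching providers is typically just a different model string – for instance, pointing LiteLLM at a local Ollama model (a sketch, assuming Ollama is running locally on its default port with the model pulled):

```python
# Swap the cloud model for a local one served by Ollama (sketch; assumes
# Ollama is running locally and the mistral model has been pulled).
from agents.extensions.models.litellm_model import LitellmModel

local_model = LitellmModel(model="ollama/mistral")  # no API key needed locally
```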

Architecture & Design

Why a NetBox Agent at all – why not call APIs directly? MCP packages common API patterns into concise tools with good error messages and filtering, which helps LLMs stay inside context limits and self-correct. It’s also much easier to maintain and evolve a tool interface than to constantly prompt an LLM with a massive OpenAPI schema.

Is the NetBox MCP server open source? Yes. See the repo linked at the top. Contributions and forks are welcome.

How will MCP support Custom Objects / plugins? The direction is to keep NetBox MCP extensible. We’ve not specifically focused on supporting custom object types and plugin models yet, but we do expect to do so.

Same VM as NetBox or separate? Either works. For workshops, keeping them separate is clean. In production, pick what aligns with your security and scaling model.

Does the Agent parse output itself or hand raw results to the LLM? The example lets the LLM parse raw JSON into Markdown. Production patterns often add structured outputs (JSON schemas) and validation.
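A common way to do that is to have the agent emit JSON and validate it against a schema before rendering, for example with Pydantic (a sketch; the field names are illustrative, not a NetBox or MCP standard):

```python
# Sketch: validate the agent's findings against a schema before trusting them.
from pydantic import BaseModel

class Finding(BaseModel):
    object_type: str
    object_id: int
    passed: bool
    detail: str

class ComplianceReport(BaseModel):
    rule: str
    passed: bool
    findings: list[Finding]

# report = ComplianceReport.model_validate_json(agent_output_json)
```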

Are NetBox Agents an alternative to NetBox scripts? They’re complementary. Use scripts for deterministic automation. Use NetBox Agents when the task benefits from reasoning, exploration, or loosely defined goals.

Security & Privacy

Will NetBox Agents leak my data to external services? You’re in control. An agent only sends what you configure to the LLM endpoint you choose (including local/private models). For production agents supported by NetBox Labs, like NetBox Copilot, we offer a variety of data governance options – get in touch to discuss.