Proxmox VE Integration FAQ and Troubleshooting
This document provides answers to commonly asked questions about the NetBox Proxmox VE integration.
General Questions
Q: Does the integration support multiple Proxmox VE clusters?
A: Yes. To ingest data from multiple Proxmox VE clusters, run separate agent instances with different configurations, each pointing to a different PROXMOX_HOST. Each instance will create its own cluster in NetBox.
Q: Can NetBox push configurations to Proxmox VE? A: No. The integration is data-in only, synchronizing from Proxmox VE into NetBox.
Q: Is the integration free to use? A: The Proxmox VE integration is available with commercial editions of NetBox.
Q: Do I have to buy NetBox Assurance to use the NetBox Proxmox VE integration? A: Yes. The integration extracts and transforms the latest state from Proxmox VE and sends it to NetBox Assurance, where users control how it is ingested into NetBox.
Q: Which versions of NetBox are supported? A: 4.2.3 and above.
Q: Which versions of Proxmox VE are supported? A: Proxmox VE 7.x and later with API token support.
Q: Does the integration sync Proxmox VE templates?
A: No. VM and LXC templates (`template == 1`) are always skipped and will not appear in NetBox.
Technical Questions
Q: What authentication methods does the integration support?
A: The integration uses API token authentication exclusively. You must create a Proxmox VE API token (user + token name + token secret) and provide these via the `PROXMOX_USER`, `PROXMOX_TOKEN_NAME`, and `PROXMOX_TOKEN_VALUE` config parameters. Username/password authentication is not supported.
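As a concrete illustration, the three config parameters map onto the standard Proxmox `PVEAPIToken` Authorization header. The helper below is a sketch (the function name and the user/token values are placeholders, not the integration's actual code):

```python
# Illustrative sketch of Proxmox VE API token authentication.
# The Authorization header format is the documented Proxmox scheme:
#   PVEAPIToken=<user>@<realm>!<token name>=<token secret>

def build_auth_header(user: str, token_name: str, token_value: str) -> dict:
    """Build the Authorization header from the integration's config parameters."""
    return {"Authorization": f"PVEAPIToken={user}!{token_name}={token_value}"}

headers = build_auth_header("svc-netbox@pam", "netbox-sync", "secret-uuid")
# A GET against the cluster resources endpoint would then look like:
#   requests.get("https://pve.example.com:8006/api2/json/cluster/resources",
#                headers=headers, timeout=30)
```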
Q: What permissions does the Proxmox VE API token need? A: The API token needs read-only access to the following resources:
- Cluster status and resources (`/cluster/resources`, `/cluster/status`)
- Node information (`/nodes`)
- Node network configuration (`/nodes/{node}/network`)
- VM/container configurations (`/nodes/{node}/qemu/{vmid}/config`, `/nodes/{node}/lxc/{vmid}/config`)
- QEMU guest agent (optional, for accurate IP discovery on VMs)
The built-in `PVEAuditor` role assigned at the `/` path provides sufficient read-only access.
Q: Why are my QEMU VM IP addresses not showing up in NetBox? A: QEMU VM IP addresses are retrieved from the QEMU guest agent. The guest agent must be:
- Installed inside the VM (e.g., the `qemu-guest-agent` package on Linux)
- Running and accessible (not blocked by firewall rules inside the VM)
- Enabled in the VM configuration in Proxmox VE (Agent checkbox enabled)
If the guest agent is not available, no IPs will be discovered for that VM. LXC container IPs are always available from the container configuration and do not require a guest agent.
Q: How are LXC container IP addresses discovered?
A: LXC container IPs are parsed directly from the container's network configuration (the `ip=` and `ip6=` fields in each `net*` config entry). Values of `dhcp`, `manual`, and `auto` are skipped. Static IPs configured in the container are always discovered regardless of whether a guest agent is installed.
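The parsing rule can be sketched in Python. This is a hypothetical re-implementation of the behaviour described above, not the integration's actual code:

```python
def parse_lxc_ips(net_value: str) -> list:
    """Extract static IPs from an LXC net* config string (e.g. the value of net0)."""
    skipped = {"dhcp", "manual", "auto"}  # non-static values are ignored
    ips = []
    for part in net_value.split(","):
        key, _, value = part.partition("=")
        if key in ("ip", "ip6") and value and value.lower() not in skipped:
            ips.append(value)
    return ips

parse_lxc_ips("name=eth0,bridge=vmbr0,ip=192.168.10.5/24,ip6=auto")
# → ["192.168.10.5/24"]  (ip6=auto is skipped)
```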
Q: What is the cluster name and how is it determined? A: The cluster name is resolved in this order:
1. The `CLUSTER_NAME` config parameter (if set) — takes priority
2. The cluster name from the Proxmox `/cluster/status` API
3. The value of `PROXMOX_HOST`, as a fallback, if no cluster name can be determined
For standalone Proxmox VE nodes that are not part of a named cluster, the fallback to `PROXMOX_HOST` ensures a cluster is still created.
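The resolution order amounts to a simple fallback chain, sketched here with illustrative names:

```python
def resolve_cluster_name(cluster_name_cfg, api_cluster_name, proxmox_host):
    """CLUSTER_NAME config > /cluster/status name > PROXMOX_HOST fallback."""
    return cluster_name_cfg or api_cluster_name or proxmox_host

resolve_cluster_name(None, "prod-cluster", "pve.example.com")  # → "prod-cluster"
resolve_cluster_name(None, None, "pve.example.com")            # → "pve.example.com"
```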
Q: How are Proxmox node platforms determined?
A: Node platforms are derived from the `pveversion` field returned by the Proxmox node status API. The version string has the format `pve-manager/X.Y.Z/hash`; the integration creates a platform named "Proxmox VE X.Y" (major.minor only). Nodes running different PVE versions will therefore get different platforms.
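The derivation can be sketched as follows (hypothetical helper, matching the format described above):

```python
def platform_from_pveversion(pveversion: str) -> str:
    """Map 'pve-manager/8.2.4/abc123' to 'Proxmox VE 8.2' (major.minor only)."""
    version = pveversion.split("/")[1]     # "8.2.4"
    major, minor = version.split(".")[:2]  # keep only major.minor
    return f"Proxmox VE {major}.{minor}"

platform_from_pveversion("pve-manager/8.2.4/deadbeef")  # → "Proxmox VE 8.2"
```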
Q: How are VM/LXC platforms determined? A: VM platforms are resolved in this order:
1. QEMU guest agent OS info (preferred) — if the guest agent is installed and running, the OS `name` and `version-id` are used to create a specific platform (e.g., "Ubuntu 24.04")
2. `ostype` config field — the VM's configured `ostype` (e.g., `l26` → "Linux", `win10` → "Windows 10/2016/2019") is used as a fallback

LXC containers always use the `ostype` field from their configuration.
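A sketch of this fallback logic (the mapping table shows only the two examples given above; the integration's full `ostype` table is larger, and the helper name is illustrative):

```python
OSTYPE_NAMES = {  # illustrative subset of the ostype mapping
    "l26": "Linux",
    "win10": "Windows 10/2016/2019",
}

def platform_for_qemu_vm(agent_os_info, ostype):
    """Prefer guest agent OS info; fall back to the ostype config field."""
    if agent_os_info:  # e.g. {"name": "Ubuntu", "version-id": "24.04"}
        return f"{agent_os_info['name']} {agent_os_info['version-id']}"
    return OSTYPE_NAMES.get(ostype, ostype)

platform_for_qemu_vm({"name": "Ubuntu", "version-id": "24.04"}, "l26")  # → "Ubuntu 24.04"
platform_for_qemu_vm(None, "win10")  # → "Windows 10/2016/2019"
```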
Q: How are VMs assigned to NetBox sites? A: VM site assignment follows node site assignment. The node's site is determined by:
- `NODE_SITE_MAP` — if the node name matches an entry, that site is used
- `SITE_NAME` config parameter — the default site used for all nodes not in the site map
VMs and containers inherit the site of the node they run on.
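In effect, site assignment is a lookup with a default. The helper below is illustrative; only the parameter names are taken from this document:

```python
def site_for_vm(node_name, node_site_map, site_name):
    """A VM's site is its node's site: NODE_SITE_MAP entry, else SITE_NAME."""
    return node_site_map.get(node_name, site_name)

site_for_vm("pve2", {"pve1": "London", "pve2": "Amsterdam"}, "Default-DC")  # → "Amsterdam"
site_for_vm("pve9", {"pve1": "London", "pve2": "Amsterdam"}, "Default-DC")  # → "Default-DC"
```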
Q: What are the VLANs in NetBox sourced from? A: VLANs are collected from two sources:
- Node network interfaces of type `vlan` with a `vlanid` field (physical VLAN interfaces on hypervisor nodes)
- VM/LXC interface VLAN tags — the `tag=` field in QEMU net config or the `tag` field in LXC net config (the untagged VLAN for the VM's network port)
VLANs are named "VLAN {vid}" and scoped to the site of the node they were discovered on.
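Extracting a VM's VLAN tag from a QEMU net config string can be sketched as (hypothetical helper):

```python
def vlan_from_qemu_net(net_value: str):
    """Return the VLAN id from a QEMU net config string, or None if untagged."""
    for part in net_value.split(","):
        key, _, value = part.partition("=")
        if key == "tag":
            return int(value)
    return None

vlan_from_qemu_net("virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0,tag=120")  # → 120
# The resulting NetBox VLAN would then be named "VLAN 120"
```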
Q: Are VRFs created for network views or any other Proxmox concept? A: No. The integration does not create VRF objects.
Q: What custom fields are required in NetBox? A: The integration creates 7 custom fields during bootstrap:
- `proxmox_vmid` (VirtualMachine — Text) — Proxmox VM ID
- `proxmox_node` (VirtualMachine — Text) — Proxmox node the VM runs on
- `proxmox_ha_state` (VirtualMachine — Text) — HA state from the Proxmox HA manager
- `proxmox_cpu_count` (Device — Integer) — CPU core count of the node
- `proxmox_memory_mb` (Device — Integer) — physical memory of the node in MB
- `proxmox_vm_type` (VirtualMachine — Text) — virtualisation type (`qemu` or `lxc`)
- `proxmox_storage_pool` (VirtualDisk — Text) — storage pool the disk is allocated from
Q: How often should the integration run? A: The recommended schedule depends on how frequently your Proxmox VE environment changes. Common schedules include:
- Every 30 minutes for dynamic environments
- Every hour for moderately active clusters
- Every 6 hours or daily for stable environments
Q: Can I limit which nodes are synchronized?
A: Yes. Use the `scope.nodes` configuration to specify a list of node names. Only VMs, interfaces, and IPs from those nodes will be synchronized. Use `["*"]` for all nodes (the default).
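A minimal sketch of what this might look like in a YAML policy config — the surrounding file structure is assumed here; only the `scope.nodes` key is taken from this document:

```yaml
scope:
  nodes: ["pve1", "pve3"]   # synchronize only these nodes
  # nodes: ["*"]            # default: all nodes
```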
Q: What Proxmox VE disk types are synchronized as Virtual Disks?
A: For QEMU VMs, `scsi*`, `virtio*`, `ide*`, and `sata*` config keys are synchronized as Virtual Disks. Cloud-init drives, CD-ROMs (`media=cdrom`), and entries with the value `none` are excluded. For LXC containers, `rootfs` and mount points (`mp0`, `mp1`, ...) are synchronized.
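The filtering rules for QEMU VMs can be sketched as a predicate. This is illustrative, not the integration's actual code; the cloud-init check assumes Proxmox's usual `vm-<id>-cloudinit` volume naming:

```python
import re

QEMU_DISK_KEY = re.compile(r"^(scsi|virtio|ide|sata)\d+$")

def is_synced_qemu_disk(key: str, value: str) -> bool:
    """True if a QEMU config entry should appear as a NetBox Virtual Disk."""
    if not QEMU_DISK_KEY.match(key):
        return False               # only scsi*/virtio*/ide*/sata* keys count
    if value == "none" or "media=cdrom" in value:
        return False               # empty drives and CD-ROMs are excluded
    if "cloudinit" in value:
        return False               # cloud-init drives are excluded
    return True

is_synced_qemu_disk("scsi0", "local-lvm:vm-100-disk-0,size=32G")  # → True
is_synced_qemu_disk("ide2", "local:iso/debian.iso,media=cdrom")   # → False
```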
Troubleshooting
Connection Issues
Problem: Connection to Proxmox VE API fails with authentication errors Solution:
- Verify `PROXMOX_USER`, `PROXMOX_TOKEN_NAME`, and `PROXMOX_TOKEN_VALUE` are correct
- Ensure the API token has not expired or been revoked in Proxmox VE
- Check that `PROXMOX_USER` matches the token owner (format: `user@realm`, e.g., `root@pam`)
- Verify the Proxmox VE host is accessible on port 8006 from your agent host
- If using SSL, verify certificate validity, or set `PROXMOX_SSL_VERIFY: false` for testing only
Problem: Authentication fails with "401 Unauthorized" Solution:
- Verify the token name and secret are correctly set in your `.env` file or environment
- Check that the API token has not been deleted in Proxmox VE
- Ensure `PROXMOX_USER` matches the realm — the format must be `user@realm` (e.g., `root@pam`)
Problem: Authentication fails with "403 Forbidden" Solution:
- Check that the API token has read permissions on the relevant resources
- Assign the `PVEAuditor` role to the token at the `/` path in Proxmox VE
- If "Privilege Separation" is enabled on the token, ensure the token has explicit role assignments
Problem: Connection timeout errors Solution:
- Verify network connectivity between the agent host and the Proxmox VE host on port 8006
- Check firewall rules allow HTTPS traffic from the agent to Proxmox VE
- Increase the `PROXMOX_API_TIMEOUT` config parameter (default: 30 seconds)
- Consider reducing the scope using `scope.nodes` to limit the number of API calls
Common Error Messages
"PROXMOX_HOST must be provided in policy config"
- Add `PROXMOX_HOST` to your policy configuration
"PROXMOX_TOKEN_NAME and PROXMOX_TOKEN_VALUE must be provided in policy config"
- Add both `PROXMOX_TOKEN_NAME` and `PROXMOX_TOKEN_VALUE` to your policy configuration
- These are required for all non-bootstrap runs
"Authentication error (401)"
- Token credentials are invalid — verify `PROXMOX_USER`, `PROXMOX_TOKEN_NAME`, and `PROXMOX_TOKEN_VALUE`
"Authentication error (403)"
- Token lacks required permissions — assign the `PVEAuditor` role at minimum
Data Synchronization Issues
Problem: No IP addresses on QEMU VMs Solution:
- Check that the QEMU guest agent is installed inside the VM (the `qemu-guest-agent` package)
- Verify the guest agent service is running inside the VM
- Ensure the "Guest Agent" option is enabled in the VM's Options tab in Proxmox VE
- Check that the VM is running — IP discovery requires the VM to be active
Problem: Some nodes or VMs are missing Solution:
- Check the `scope.nodes` configuration — if set, only listed nodes are processed
- Verify the agent can reach the Proxmox API for all nodes
- Check agent logs for warnings about specific nodes or VMs
- Templates are always skipped — verify the missing items are not marked as templates
Problem: Virtual disks are missing Solution:
- The integration only discovers data disks — cloud-init drives, CD-ROMs, and `none` entries are excluded
- For QEMU VMs, only `scsi*`, `virtio*`, `ide*`, and `sata*` keys are processed
- For LXC containers, only `rootfs` and `mp*` keys are processed
- Check agent logs for any disk parsing errors
Problem: VLANs are not appearing in NetBox Solution:
- Node VLANs require network interfaces of type `vlan` with a `vlanid` field in the Proxmox node network config
- VM VLANs require a `tag=` parameter in the VM interface config (QEMU) or a `tag` field (LXC)
- Check that VLANs exist in the Proxmox VE configuration — they are not discovered independently of the network config
Problem: Wrong cluster name in NetBox Solution:
- Use the `CLUSTER_NAME` config parameter to set an explicit cluster name
- If `CLUSTER_NAME` is not set, the cluster name is auto-detected from the Proxmox API or falls back to `PROXMOX_HOST`
- Standalone Proxmox nodes not in a named cluster will use `PROXMOX_HOST` as the cluster name
Problem: All nodes appear in the same NetBox site Solution:
- Use `NODE_SITE_MAP` to assign different sites to individual nodes: `"pve1:London,pve2:Amsterdam"`
- `SITE_NAME` is used as the default for any node not explicitly listed in `NODE_SITE_MAP`
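The `"pve1:London,pve2:Amsterdam"` map format can be parsed as shown below (hypothetical helper, inferred from the format string in this document):

```python
def parse_node_site_map(raw: str) -> dict:
    """Parse 'pve1:London,pve2:Amsterdam' into {'pve1': 'London', ...}."""
    return dict(item.split(":", 1) for item in raw.split(",") if item.strip())

parse_node_site_map("pve1:London,pve2:Amsterdam")
# → {"pve1": "London", "pve2": "Amsterdam"}
```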
Performance Issues
Problem: Integration takes a very long time to complete Solution:
- Limit synchronization to specific nodes using `scope.nodes`
- Check network latency between the agent and the Proxmox VE API
- The integration fetches individual VM/container configs for every non-template item — large inventories naturally take longer
- Guest agent queries (for QEMU IP discovery) add latency per running VM; these are skipped if the guest agent is not responding
Bootstrap Issues
Problem: Bootstrap mode fails to create custom fields Solution:
- Ensure the Diode credentials have permission to create custom fields in NetBox
- Check NetBox Assurance for any errors during custom field creation
- Verify the deviations are applied after bootstrap completes
- Review agent logs for specific error messages
Problem: Can't find deviations after bootstrap Solution:
- Ensure you stopped the agent after seeing `executed successfully`
- Navigate to NetBox Assurance → Deviation Types
- Look for deviations related to Custom Fields, Tags, and Cluster Type
- Note: the unique host tag (`proxmox-{hostname}`) is NOT created in bootstrap mode — it is only created during regular operation
- If no deviations appear, check agent logs for errors and verify `BOOTSTRAP: true` was set