
Installing Custom NetBox Plugins

This guide explains how to install custom NetBox plugins in NetBox Enterprise. While NetBox Enterprise comes with certified and community plugins built in, you can extend functionality by adding custom plugins through wheelhouse archives.

Overview

Custom plugins are distributed as Python wheel files bundled into a tarball called a wheelhouse. The NetBox Enterprise operator supports three wheelhouse sources:

| Feature                | Media directory                             | PVC                                      | S3                                    |
|------------------------|---------------------------------------------|------------------------------------------|---------------------------------------|
| Best for               | Quick setup, single-node, 1.x upgrades      | Development, testing, single-node        | Production, multi-node, HA            |
| CRD wheelhouse config  | Not required                                | Required                                 | Required                              |
| Persistence            | Survives restarts (stored on media volume)  | Survives restarts via persistent volume  | Survives restarts via object storage  |
| Distribution           | Already on the pod filesystem               | Mounted directly to pods                 | Downloaded by each pod on startup     |
| Upload method          | kubectl cp to running pod                   | kubectl cp during maintenance mode       | S3 CLI upload                         |
| Updates                | Copy new file, restart pods                 | Enter maintenance mode, copy new file    | Upload to S3, restart pods            |

At startup, an entrypoint wrapper extracts the tarball and installs packages to /opt/netbox/venv/lib/python3.12/user-packages, which is already on sys.path via a .pth file in the base image. On each restart, the wheelhouse is re-applied to a fresh Python environment.
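The effect is roughly equivalent to the following commands. This is an illustrative sketch, not the actual entrypoint script: the --target path comes from this guide, but the remaining pip flags are assumptions about how such an install would typically be done.

```shell
# Illustrative sketch of the startup installation (not the real entrypoint).
tar xzf wheelhouse.tar.gz -C /tmp
pip install \
  --no-index \
  --find-links /tmp/wheelhouse \
  --target /opt/netbox/venv/lib/python3.12/user-packages \
  --constraint /tmp/wheelhouse/constraints.txt \
  --requirement /tmp/wheelhouse/requirements.txt
```

Because --no-index is used in this sketch, every package named in requirements.txt must already be present as a wheel in the wheelhouse.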

Key concepts:

  • Wheelhouse archive: A .tar.gz containing a wheelhouse/ directory with .whl files, a requirements.txt manifest, and a constraints.txt file
  • Constraints file: Version pins for packages already in the base image, preventing accidental upgrades of shared dependencies
  • CRD configuration: Wheelhouse sources (PVC or S3) are configured in the NetBoxEnterprise custom resource spec. For the media directory method, no wheelhouse configuration is needed — only customPythonConfig is required to enable plugins.

Installation process:

  1. Build a wheelhouse containing your plugins and dependencies
  2. Upload the wheelhouse tarball (media directory, PVC, or S3)
  3. Configure the wheelhouse source in the CRD (PVC and S3 only)
  4. Enable plugins via customPythonConfig in the CRD
  5. NetBox pods install plugins automatically on startup

Prerequisites

Before installing custom plugins, ensure you have:

  • kubectl CLI configured with access to your NetBox Enterprise cluster
  • Python 3.12 development environment (matching NetBox Enterprise)
  • pip package manager (pip 21.0 or later)
  • Network access to PyPI or your private package repository

Additional for S3 source:

  • S3-compatible storage with a bucket provisioned
  • S3 CLI tool (aws CLI or s3cmd) installed and configured
  • S3 credentials with write access to the target bucket

For building custom modules with compiled extensions:

  • Ubuntu 24.04 LTS with Python 3.12
  • Build tools: gcc, python3-dev, build-essential
  • Architecture matching your cluster nodes (see Architecture matters)

Building the wheelhouse

Step 1: Create a working directory

mkdir /tmp/wheelhouse

Step 2: Obtain the constraints file

The constraints.txt file pins versions of packages already in the NetBox Enterprise base image. This prevents pip from pulling newer versions of shared dependencies that could cause conflicts.

You can extract it from a running pod or from the base image:

NETBOX_POD=$(kubectl get pods -A \
-l com.netboxlabs.netbox-enterprise/custom-plugins-upload=true \
-o jsonpath='{.items[0].metadata.name}')
NETBOX_NS=$(kubectl get pods -A \
-l com.netboxlabs.netbox-enterprise/custom-plugins-upload=true \
-o jsonpath='{.items[0].metadata.namespace}')

kubectl cp "${NETBOX_NS}/${NETBOX_POD}:/opt/netbox/constraints.txt" \
/tmp/wheelhouse/constraints.txt

Step 3: Create a requirements file

Create /tmp/wheelhouse/requirements.txt listing each plugin you want to install. Use exact version pins (==) to ensure consistent deployments:

my-netbox-plugin==1.0.0
another-plugin==2.3.1

For details on the format, see the pip requirements file documentation.

Step 4: Download plugin wheels

Use pip download to populate the wheelhouse with plugin wheels and their dependencies. You must specify the correct platform and Python version to match the NetBox Enterprise container:

pip download \
--only-binary=:all: \
--platform manylinux_2_17_x86_64 \
--python-version 3.12 \
--implementation cp \
-d /tmp/wheelhouse \
-c /tmp/wheelhouse/constraints.txt \
-r /tmp/wheelhouse/requirements.txt

Replace manylinux_2_17_x86_64 with manylinux_2_17_aarch64 for ARM64 clusters. See Architecture matters for details.

Custom modules

If you have custom Python modules not available on PyPI, build wheel files from them first using pip wheel:

pip wheel \
--prefer-binary \
--wheel-dir /tmp/wheelhouse \
--constraint /tmp/wheelhouse/constraints.txt \
/path/to/your/python-module

Then add the package name to your requirements.txt.

After downloading, you should see .whl files in /tmp/wheelhouse/ for each package and its dependencies.

Step 5: Package the archive

tar czf /tmp/wheelhouse.tar.gz -C /tmp wheelhouse/

Verify the archive contents:

tar tzf /tmp/wheelhouse.tar.gz | head -20

The archive should contain:

  • wheelhouse/*.whl - wheel files for your plugins and dependencies
  • wheelhouse/requirements.txt - your requirements manifest
  • wheelhouse/constraints.txt - base image version pins
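As a sanity check, a small shell function (illustrative, not part of the product) can confirm the manifest files are present at the expected paths inside the archive:

```shell
# Verify the archive has the wheelhouse/ layout described above.
# Returns nonzero if either manifest file is missing.
check_wheelhouse() {
  archive="$1"
  tar tzf "$archive" | grep -q '^wheelhouse/requirements.txt$' &&
    tar tzf "$archive" | grep -q '^wheelhouse/constraints.txt$'
}
```

For example, `check_wheelhouse /tmp/wheelhouse.tar.gz && echo "layout OK"` confirms the structure before uploading.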

Architecture matters

Platform-specific wheels (compiled C extensions like pydantic-core) must match the cluster node architecture. Pure-Python wheels (py3-none-any) work on any architecture.
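You can tell the two apart from the filename alone. Wheel filenames follow the standard {name}-{version}-{python}-{abi}-{platform}.whl convention, so a small helper (not part of NetBox Enterprise, shown only to illustrate the naming scheme) can classify the files in your wheelhouse:

```python
def wheel_platform_tag(filename: str) -> str:
    """Return the platform tag from a wheel filename,
    e.g. 'manylinux_2_17_x86_64' or 'any'."""
    stem = filename.removesuffix(".whl")
    # The platform tag is always the last dash-separated component.
    return stem.split("-")[-1]

def is_pure_python(filename: str) -> bool:
    """Pure-Python wheels carry the 'any' platform tag and run anywhere."""
    return wheel_platform_tag(filename) == "any"
```

For example, my_plugin-1.0.0-py3-none-any.whl is pure Python, while pydantic_core-2.16.3-cp312-cp312-manylinux_2_17_x86_64.whl only runs on x86_64.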

Check your cluster's node architecture:

kubectl get nodes -o jsonpath='{.items[0].status.nodeInfo.architecture}'

Use the matching platform tag in the pip download command:

  • x86_64 clusters: --platform manylinux_2_17_x86_64
  • ARM64 clusters: --platform manylinux_2_17_aarch64

Uploading the wheelhouse

Method 1: Media directory (simplest)

The media directory method is the simplest way to install custom plugins. You copy the wheelhouse tarball directly to a running NetBox pod's media directory — no CRD wheelhouse configuration is needed. The entrypoint automatically checks for a wheelhouse at this path on every startup.

Step 1: Find a NetBox pod

NETBOX_POD=$(kubectl get pods -A \
-l com.netboxlabs.netbox-enterprise/custom-plugins-upload=true \
-o jsonpath='{.items[0].metadata.name}')
NETBOX_NS=$(kubectl get pods -A \
-l com.netboxlabs.netbox-enterprise/custom-plugins-upload=true \
-o jsonpath='{.items[0].metadata.namespace}')

Step 2: Copy the wheelhouse to the media directory

kubectl cp /tmp/wheelhouse.tar.gz \
"${NETBOX_NS}/${NETBOX_POD}:/opt/netbox/netbox/media/wheelhouse.tar.gz"

Step 3: Enable plugins

Add your plugins to the custom Python configuration — no wheelhouse configuration is needed.

Option A: Admin Console (recommended)

  1. Navigate to the Config tab in the Admin Console
  2. Enable the Show Advanced Settings checkbox
  3. In the NetBox Python Configuration Overrides field, add your plugin configuration:
     PLUGINS = ['my_plugin']
     PLUGINS_CONFIG = {
         'my_plugin': {
             'setting': 'value',
         },
     }
  4. Click Save config and deploy the new version

For more details on this screen, see Enabling Built-in Plugins.

Option B: CRD patch

spec:
  netbox:
    config:
      customPythonConfig: |
        PLUGINS = ['my_plugin']
        PLUGINS_CONFIG = {
            'my_plugin': {
                'setting': 'value',
            },
        }

Step 4: Restart NetBox pods

kubectl rollout restart deployment -n <namespace> <name>-netbox
kubectl rollout restart deployment -n <namespace> <name>-netbox-worker

Multi-node deployments

The media directory is local to each pod. In multi-node deployments where pods don't share a media volume, use the S3 method instead to ensure all pods receive the wheelhouse.

Method 2: PVC

The PVC method uses a Kubernetes PersistentVolumeClaim to store the wheelhouse. You upload the tarball during maintenance mode, when the operator creates a dedicated maintenance pod with the PVC mounted read-write.

Step 1: Configure the PVC and enable maintenance mode

Apply both settings in a single patch:

kubectl patch netboxenterprise -n <namespace> <name> --type merge -p '{
  "spec": {
    "maintenanceMode": true,
    "netbox": {
      "config": {
        "plugins": {
          "wheelhouse": {
            "pvc": {
              "claimName": "<name>-wheelhouse",
              "path": "wheelhouse.tar.gz",
              "create": true,
              "size": "1Gi"
            }
          }
        }
      }
    }
  }
}'

When create is true, the operator creates and manages the PVC automatically.

Step 2: Wait for the maintenance pod, then copy the wheelhouse

kubectl wait --for=condition=Ready pod \
-n <namespace> -l maintenance-mode=true --timeout=120s

MAINT_POD=$(kubectl get pod -n <namespace> \
-l maintenance-mode=true \
-o jsonpath='{.items[0].metadata.name}')

kubectl cp /tmp/wheelhouse.tar.gz \
<namespace>/$MAINT_POD:/wheelhouse-source/wheelhouse.tar.gz

Step 3: Disable maintenance mode

kubectl patch netboxenterprise -n <namespace> <name> --type merge -p '{
  "spec": {
    "maintenanceMode": false
  }
}'

The operator starts NetBox normally. The entrypoint extracts and installs packages from the wheelhouse, then Django loads the plugins.

Method 3: S3

The S3 method stores the wheelhouse in an S3-compatible bucket. Each NetBox pod downloads and installs from S3 on startup, so all pods stay consistent without manual copying.

Step 1: Configure the S3 source in the CRD

spec:
  netbox:
    config:
      plugins:
        wheelhouse:
          s3:
            bucket: my-bucket
            key: media/wheelhouse.tar.gz
            region: us-east-1
            endpoint: https://s3.example.com  # optional, for S3-compatible stores
            credentialsSecret:
              name: s3-credentials
              accessKeyId: AWS_ACCESS_KEY_ID
              secretAccessKey: AWS_SECRET_ACCESS_KEY
            verifySsl: true

Step 2: Upload the wheelhouse to S3

aws s3 cp /tmp/wheelhouse.tar.gz s3://my-bucket/media/wheelhouse.tar.gz

For S3-compatible storage (MinIO, DigitalOcean Spaces, etc.), add --endpoint-url:

aws s3 cp /tmp/wheelhouse.tar.gz \
s3://my-bucket/media/wheelhouse.tar.gz \
--endpoint-url=https://s3.example.com

Step 3: Verify the upload

aws s3 ls s3://my-bucket/media/wheelhouse.tar.gz

Step 4: Restart NetBox pods

Restarting triggers the download and installation. Both the web and worker deployments run the entrypoint wrapper and need the wheelhouse:

kubectl rollout restart deployment -n <namespace> <name>-netbox
kubectl rollout restart deployment -n <namespace> <name>-netbox-worker

Enabling plugins

Plugin configuration is mounted into the NetBox container and evaluated at startup. You can configure it through the Admin Console or by patching the CRD directly.

Admin Console:

  1. Navigate to the Config tab in the Admin Console
  2. Enable the Show Advanced Settings checkbox
  3. In the NetBox Python Configuration Overrides field, add your plugin configuration:
     PLUGINS = ['my_plugin']
     PLUGINS_CONFIG = {
         'my_plugin': {
             'setting': 'value',
         },
     }
  4. Click Save config and deploy the new version

For more details, see Enabling Built-in Plugins.

CRD patch:

spec:
  netbox:
    config:
      customPythonConfig: |
        PLUGINS = ['my_plugin']
        PLUGINS_CONFIG = {
            'my_plugin': {
                'setting': 'value',
            },
        }

For details on available plugin settings, see the NetBox plugin configuration documentation.

Plugin load order

The customPythonConfig is mounted as zzz_80_user_extra.py. The operator's own zzz_99_default_plugins.py imports PLUGINS from it, filters out managed plugins (branching, changes, diode, etc.), then appends the built-in enterprise plugins after your custom ones.
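The merge behavior can be sketched as follows. This is an illustrative model only, not the operator's actual code, and the plugin names used below are placeholders:

```python
def merge_plugins(user_plugins, managed, builtins):
    """Model of the load order described above: managed plugins are
    filtered out of the user's PLUGINS list, then the built-in enterprise
    plugins are appended after the remaining custom plugins."""
    custom = [p for p in user_plugins if p not in managed]
    # Skip built-ins the user already listed to avoid duplicates.
    return custom + [p for p in builtins if p not in custom]
```

The practical consequence is that your custom plugins always load before the built-in enterprise plugins, regardless of how you order them in customPythonConfig.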

Startup sequence

When a NetBox pod starts, the following sequence runs:

  1. Init container (init-dirs) creates runtime directories
  2. Entrypoint wrapper checks for a wheelhouse in this order: S3 (downloads), PVC (copies), media directory (uses in place). If found, it extracts the tarball, runs pip install --target, then runs collectstatic
  3. docker-entrypoint.sh waits for the database, runs migrations, creates the superuser
  4. launch-netbox.sh starts NGINX Unit, which spawns Python workers loading Django and your plugins
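The source-selection order in step 2 can be sketched as a small function. This is illustrative only; the variable names are assumptions, not the entrypoint's actual environment variables:

```shell
# Returns which wheelhouse source would be used, following the priority
# described above: S3 first, then PVC, then the media directory.
pick_wheelhouse_source() {
  if [ -n "$S3_BUCKET" ]; then
    echo "s3"      # downloaded from object storage
  elif [ -n "$PVC_CLAIM" ]; then
    echo "pvc"     # copied from the mounted volume
  elif [ -f "${MEDIA_DIR:-/opt/netbox/netbox/media}/wheelhouse.tar.gz" ]; then
    echo "media"   # used in place on the media volume
  else
    echo "none"
  fi
}
```

Because the check is ordered, configuring an S3 source takes precedence even if a tarball also exists on the media volume.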

Verification

After pods restart, verify the installation:

# Check entrypoint logs for plugin installation output
kubectl logs -n <namespace> <pod> | grep -E 'wheelhouse|plugin'

# Verify the plugin is in the PLUGINS list
kubectl exec -n <namespace> <pod> -- \
python3 -c "from netbox.configuration import PLUGINS; print(PLUGINS)"

# Check pod health
kubectl get pod -n <namespace> <pod> \
-o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

You can also log in to NetBox and navigate to /plugins/ to confirm the plugin appears in the installed plugins list.

Best practices

Version pinning

Always use exact version specifications (==) in your requirements.txt:

netbox-plugin-example==1.2.3

Avoid unpinned or range specifications (e.g., >=1.0.0) to prevent unexpected updates.

Testing before production

  1. Test your wheelhouse in a development environment first
  2. Check NetBox logs after deployment to confirm plugins loaded without errors
  3. Ensure plugin database migrations complete successfully
  4. Validate plugin functionality before promoting to production

Security considerations

  • Only install plugins from trusted sources
  • Review all dependencies for security vulnerabilities
  • Keep plugins updated with security patches
  • Configure plugin permissions following the least-privilege principle

Wheelhouse management

  • Store requirements.txt and constraints.txt in version control
  • Keep previous wheelhouse versions for rollback capability
  • Document which plugins are included and their versions

Troubleshooting

Wheelhouse creation issues

Problem: pip download fails with "No matching distribution found"

  • Verify the plugin name and version exist on PyPI
  • Check that the platform tag matches your cluster architecture
  • For custom modules, ensure wheel files were built successfully
  • Try running without --only-binary=:all: if pure-Python packages lack prebuilt wheels on PyPI

Problem: Constraints conflict errors during wheelhouse creation

  • Ensure you obtained constraints.txt from the correct NetBox Enterprise version
  • Check if plugin dependencies conflict with NetBox's pinned dependencies
  • Consider using --no-deps for problematic packages, then manually adding compatible dependency versions
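For example, a hedged sketch of the --no-deps approach, reusing the placeholder plugin name from earlier in this guide:

```shell
# Download only the plugin wheel itself, skipping its dependency tree.
# Its dependencies must then be pinned manually in requirements.txt at
# versions compatible with constraints.txt.
pip download \
  --only-binary=:all: \
  --platform manylinux_2_17_x86_64 \
  --python-version 3.12 \
  --implementation cp \
  --no-deps \
  -d /tmp/wheelhouse \
  my-netbox-plugin==1.0.0
```

After this, run the normal pip download from Step 4 for the manually pinned dependencies so they land in the same wheelhouse.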

Upload and deployment issues

Problem: Plugins don't install using the media directory method

  • Verify the wheelhouse was copied to the correct path: /opt/netbox/netbox/media/wheelhouse.tar.gz
  • Check that the tarball is valid: kubectl exec -n <namespace> <pod> -- tar tzf /opt/netbox/netbox/media/wheelhouse.tar.gz | head -5
  • Ensure you restarted both netbox and netbox-worker deployments after copying the file
  • Check pod logs for extraction errors: kubectl logs -n <namespace> <pod> | grep wheelhouse

Problem: Maintenance pod does not start (PVC method)

  • Verify the netboxenterprise resource has maintenanceMode: true
  • Check operator logs for errors creating the PVC or maintenance pod
  • Ensure the PVC size is large enough for your wheelhouse

Problem: S3 download fails on pod startup

  • Check pod logs for S3-related errors: kubectl logs -n <namespace> <pod>
  • Verify the bucket, key, region, and endpoint in the CRD match your S3 configuration
  • Confirm the credentialsSecret exists and contains valid credentials
  • If using a custom endpoint, verify verifySsl is set correctly

Problem: Plugins don't appear after restart

  • Check NetBox logs for plugin loading errors or import tracebacks
  • Verify PLUGINS = [...] in customPythonConfig includes your plugin name
  • Confirm the tarball structure is correct: tar tzf wheelhouse.tar.gz should show wheelhouse/ at the root
  • Ensure the wheelhouse was uploaded to the correct location (PVC path or S3 key)

Plugin runtime issues

Problem: Plugin loaded but features don't work

  • Check that database migrations ran successfully during startup
  • Verify PLUGINS_CONFIG settings match the plugin's documented requirements
  • Confirm plugin dependencies are compatible with the NetBox version in your release

Problem: Performance degradation after plugin installation

  • Review plugin documentation for performance tuning options
  • Check if the plugin adds database queries to frequently accessed pages
  • Disable unnecessary plugin features via PLUGINS_CONFIG

Upgrades

When upgrading to a new NetBox Enterprise version that includes a different NetBox release, you will likely need to rebuild the wheelhouse to match changed dependencies.

Upgrade process

  1. Deploy the new NetBox Enterprise version with maintenance mode enabled
  2. Download the updated constraints.txt from the new pods
  3. Rebuild the wheelhouse using the new constraints file
  4. Test the new wheelhouse in a development environment
  5. Upload the rebuilt wheelhouse via PVC or S3
  6. Disable maintenance mode to start NetBox with the updated plugins

Managing plugin compatibility

  • Before upgrading, verify your plugins support the new NetBox version
  • Check NetBox and plugin release notes for breaking changes
  • Always test plugin migrations in a non-production environment first
  • Keep the previous wheelhouse available for quick rollback

Additional resources