# Air-Gapped Appliance

## Overview
The NetBox Enterprise Air-Gapped Appliance is a pre-built virtual machine image containing a fully self-contained NetBox Enterprise installation. It is designed for environments where the host has no internet access at all -- everything needed to run NetBox Enterprise is bundled into the VM image itself.
Under the hood, the appliance runs the same Embedded Cluster deployment as a standard NetBox Enterprise install. The difference is that the installer, container images, and Kubernetes components are all pre-loaded into the VM during the build process, so no internet connectivity is required at boot or runtime.
## How it differs from a standard install

| | Standard Embedded Cluster | Air-Gapped Appliance |
|---|---|---|
| Delivery | Download installer via curl | Pre-built OVA (VMware) or QCOW2 (KVM) image |
| Internet required | Yes, during install and for updates | No -- fully offline |
| OS setup | You prepare a Linux host | Ubuntu 24.04 LTS pre-configured in the VM |
| Network config | You configure before install | Provided via vSphere OVF properties or interactive prompts |
| License | Entered in Admin Console | Provided at VM deployment time (base64-encoded) |
| Database | Built-in or external PostgreSQL | External PostgreSQL required |
| Upgrades | Admin Console checks for updates | New VM image with backup/restore (see Upgrading) |
## What's included
The appliance VM ships with:
- Ubuntu 24.04 LTS (x86_64, EFI boot)
- NetBox Enterprise Embedded Cluster installer (air-gap bundle)
- All required container images and Helm charts
- Kubernetes cluster components (k0s)
- PostgreSQL client tools (v17) for troubleshooting
- K9s (Kubernetes TUI) for cluster inspection
- Pre-configured firewall (iptables)
- Diagnostic utilities (jq, curl, net-tools, dnsutils, netcat)
## VM specifications
The appliance OVA is pre-configured with hardware settings that match NetBox Enterprise production requirements:
| Resource | Value |
|---|---|
| vCPU | 8 |
| Memory | 24 GB |
| Disk | 100 GB |
| Network adapter | VMXNET3 |
| Firmware | EFI with Secure Boot |
| vSphere compatibility | vSphere 7.0+ (VMX-18) |
These values match the production hardware requirements for Embedded Cluster. Reducing them may cause instability or installation failure.
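As a quick sanity check after deployment, you can confirm from inside the guest that the VM actually received these resources. These are standard Linux utilities; exact output varies by environment:

```shell
# vCPU count -- should report 8 on the appliance
nproc

# Total memory -- should report roughly 24 GB on the appliance
free -g | awk '/^Mem:/ {print $2 " GB RAM"}'

# Root filesystem size
df -h / | tail -1
```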
## Installation

### Prerequisites
Before deploying the appliance, have the following ready:
- OVA image received from NetBox Labs
- vSphere 7.0+ environment with sufficient resources
- Static IP address allocated for the appliance
- Network details: subnet CIDR, default gateway, DNS server(s), NTP server(s)
- External PostgreSQL database reachable from the appliance's network (the appliance does not use the built-in database)
- License file: Base64-encoded license YAML provided by NetBox Labs
- SSH public key: for netbox user access to the VM
The IP address and hostname are set at first boot and cannot be changed afterward. This is a Kubernetes limitation. Make sure your network configuration is final before deploying.
### Step 1: Deploy the OVA
- In vSphere, select Deploy OVF Template and point it to the OVA file.
- Walk through the deployment wizard, selecting your target datacenter, cluster, datastore, and network.
### Step 2: Configure OVF properties
During OVA deployment (or afterward via Edit Settings > vApp Options), fill in the following properties:
| Property | Description | Example |
|---|---|---|
| IP Address | Static IPv4 address | 192.168.1.100 |
| Subnet CIDR | Subnet prefix length | 24 |
| Default Gateway | Gateway IP address | 192.168.1.1 |
| DNS Servers | Comma-separated, up to 3 | 192.168.1.1,8.8.8.8 |
| NTP Servers | Comma-separated, up to 2 | 0.pool.ntp.org,1.pool.ntp.org |
| License Key | Base64-encoded license YAML (no newlines) | (provided by NetBox Labs) |
| SSH Public Key | SSH public key for the netbox user | ssh-rsa AAAAB3... |
If you received a license.yaml file, encode it with:

```shell
base64 -w 0 < license.yaml
```

Use the output (a single line with no newlines) as the License Key value.
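To sanity-check the encoding before pasting it into the OVF property, you can verify that it round-trips. The snippet below uses a stand-in file, since the real license.yaml comes from NetBox Labs:

```shell
# Create a stand-in for the real license file (illustrative content)
printf 'license: example\n' > license.yaml

# Encode as a single line with no wrapping
base64 -w 0 < license.yaml > license.b64

# Decoding should reproduce the original file exactly
base64 -d < license.b64 | diff - license.yaml && echo "round-trip OK"
```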
### Step 3: Power on and bootstrap
- Power on the VM.
- On first boot, the appliance reads the OVF properties via VMware Tools and automatically:
  - Configures the network interface (static IP via Netplan)
  - Sets DNS and NTP servers
  - Installs the SSH public key for the netbox user
  - Decodes and writes the license file
  - Generates SSH host keys
- The bootstrap runs once and will not re-run on subsequent boots.
### Step 4: Install NetBox Enterprise
After bootstrap completes, SSH into the appliance and run the Embedded Cluster installer in air-gap mode:

```shell
ssh netbox@<appliance-ip>
```

The installer bundle is located in /opt/nbe/. Extract it and run the installer:

```shell
cd /opt/nbe
tar -xzf nbe-*.tar.gz
sudo ./netbox-enterprise install --airgap --license /opt/nbe/license.yaml
```
This starts the Embedded Cluster installation. When it completes, it will print a URL and password for the Admin Console (port 30000).
### Step 5: Configure via Admin Console

- Open your browser to https://<appliance-ip>:30000.
- Log in with the password provided by the installer.
- Configure NetBox Enterprise settings through the Admin Console, the same as a standard Embedded Cluster install. You will need to configure your external PostgreSQL database connection during this step.
## Network and firewall
The appliance ships with a restrictive iptables firewall. Only the following ports are open by default:
| Port | Protocol | Purpose |
|---|---|---|
| 22 | TCP | SSH (management) |
| 80 | TCP | HTTP (redirects to HTTPS) |
| 443 | TCP | HTTPS (NetBox web UI and Diode gRPC) |
| 30000 | TCP | Admin Console (KOTS) |
ICMP (ping) and established/related connections are also allowed. All other inbound traffic is dropped.
### Firewall considerations
If you have an external firewall or security group in front of the appliance, allow these ports from the appropriate sources:
- Ports 80 and 443 -- from users who need to access the NetBox web interface. Port 443 also serves Diode gRPC traffic for data ingestion.
- Port 30000 -- from administrators who need to access the Admin Console. Consider restricting this to management networks only.
- Port 22 -- from administrators who need SSH access for troubleshooting.
If you use Diode for automated data ingestion, Diode agents connect to the appliance on port 443 using gRPC. Ensure your firewall allows this traffic from wherever your Diode agents run.
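A quick way to confirm these ports are reachable from a given client network is a plain TCP connect test. The loop below is a sketch using bash's built-in /dev/tcp redirection (so it needs no extra tools beyond coreutils' timeout); APPLIANCE_IP is a placeholder for your appliance's address:

```shell
# TCP reachability check for the appliance's open ports.
# APPLIANCE_IP is a placeholder -- set it to your appliance's address.
APPLIANCE_IP=${APPLIANCE_IP:-192.168.1.100}
for port in 22 80 443 30000; do
  if timeout 2 bash -c "exec 3<>/dev/tcp/$APPLIANCE_IP/$port" 2>/dev/null; then
    echo "port $port reachable"
  else
    echo "port $port unreachable"
  fi
done
```

A "reachable" result means a TCP handshake completed; "unreachable" can mean the port is filtered by an intermediate firewall, not just closed on the appliance.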
## SSH access

The appliance creates a netbox user with passwordless sudo. Access it using the SSH key you provided during deployment:

```shell
ssh netbox@<appliance-ip>
```

Password-based SSH authentication is not available. If you need to change the SSH key after deployment, update /opt/nbe/.ssh/authorized_keys on the VM directly (via the vSphere console, for example).
## Upgrading
Air-gapped appliance upgrades work differently from standard Embedded Cluster upgrades. Because the VM has no internet access, you cannot use the Admin Console's built-in update mechanism. Instead, upgrades follow a backup-and-restore workflow using a new appliance image.
The NetBox Labs Customer Success team will guide you through the upgrade process. Contact them before starting an upgrade to confirm the target version and get the new appliance image.
### Upgrade procedure
#### 1. Get the new appliance image
Download the new OVA from NetBox Labs. This image contains the updated version of NetBox Enterprise with all container images and components pre-loaded.
#### 2. Back up the existing appliance
Perform a manual backup of your current appliance's data before doing anything else. Follow the manual backup instructions to back up:
- Media, scripts, and reports -- runtime files stored in NetBox volumes
- Diode and Hydra secrets -- OAuth credentials (NetBox 1.10+)
- Admin Console configuration -- so you don't have to re-enter all your settings on the new appliance
Copy these backup files off the appliance (via scp or similar) and store them safely.
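For example, you might bundle the backup artifacts into a single dated archive before copying it off the VM. The file names below are illustrative stand-ins for whatever your backup steps actually produce:

```shell
# Stand-in files representing the backup artifacts (illustrative only)
touch netbox-media.tar.gz diode-secrets.yaml kots-config.yaml

# Bundle everything into one dated archive
tar -czf "nbe-backup-$(date +%Y%m%d).tar.gz" \
  netbox-media.tar.gz diode-secrets.yaml kots-config.yaml

# List the archive contents to confirm what was captured
tar -tzf nbe-backup-*.tar.gz

# Then copy the archive off the appliance, for example:
#   scp nbe-backup-*.tar.gz admin@backup-host:/backups/
```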
Since the appliance uses an external PostgreSQL database, back up your database using your own database backup procedures. The database itself does not need to be migrated as part of the appliance upgrade -- it remains on your external server.
#### 3. Deploy the new appliance
Deploy the new OVA following the same installation steps as the original. Use the same network configuration (IP address, gateway, DNS, etc.) so that existing clients and bookmarks continue to work.
#### 4. Install NetBox Enterprise on the new appliance
SSH into the new appliance and run the Embedded Cluster installer, just as you did during the initial setup. When prompted, bring it up in maintenance mode so you can restore data before NetBox starts serving traffic.
#### 5. Restore your data

With the new appliance in maintenance mode, follow the manual restore instructions:
- Apply your saved Admin Console configuration to reconnect to your external PostgreSQL database and restore your settings
- Media, scripts, and reports
- Diode and Hydra secrets
Then disable maintenance mode via the Admin Console to bring NetBox back online.
#### 6. Verify and decommission
Once the new appliance is confirmed working:
- Verify NetBox is accessible and your data is intact.
- Test SSO, Diode connectivity, and any other integrations.
- Power off and remove the old appliance VM.
## Troubleshooting

### Checking bootstrap status
If the VM doesn't come up as expected after first boot, check the bootstrap log:

```shell
# Via vSphere console (if SSH isn't reachable yet)
journalctl -u nbe-bootstrap.service
```
The bootstrap writes a flag file at /opt/nbe/.bootstrap-complete when it finishes successfully. If this file doesn't exist, the bootstrap didn't complete.
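A quick check for that flag (run this on the appliance; on any other machine the file will simply be absent):

```shell
# Check whether the first-boot bootstrap finished successfully
if [ -f /opt/nbe/.bootstrap-complete ]; then
  echo "bootstrap complete"
else
  echo "bootstrap incomplete -- inspect journalctl -u nbe-bootstrap.service"
fi
```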
### Verifying network configuration

```shell
# Check the applied Netplan config
cat /etc/netplan/01-custom-network.yaml

# Check interface status
ip addr show

# Test DNS
dig netboxlabs.com @<your-dns-server>
```
### Checking the Embedded Cluster

After the installer runs, enter the cluster shell before running any kubectl commands:

```shell
# Enter the Embedded Cluster shell (sets up KUBECONFIG)
/var/lib/embedded-cluster/bin/netbox-enterprise shell

# Check cluster status
kubectl get nodes

# Check pod status
kubectl get pods -A

# Use K9s for interactive inspection
k9s

# Exit when done
exit
```
For more details on cluster access and troubleshooting, see Advanced Tools and Troubleshooting.