Air-Gapped Appliance

Overview

The NetBox Enterprise Air-Gapped Appliance is a pre-built virtual machine image containing a fully self-contained NetBox Enterprise installation. It is designed for environments where the host has no internet access at all -- everything needed to run NetBox Enterprise is bundled into the VM image itself.

Under the hood, the appliance runs the same Embedded Cluster deployment as a standard NetBox Enterprise install. The difference is that the installer, container images, and Kubernetes components are all pre-loaded into the VM during the build process, so no internet connectivity is required at boot or runtime.

How it differs from a standard install

| | Standard Embedded Cluster | Air-Gapped Appliance |
|---|---|---|
| Delivery | Download installer via curl | Pre-built OVA (VMware) or QCOW2 (KVM) image |
| Internet required | Yes, during install and for updates | No -- fully offline |
| OS setup | You prepare a Linux host | Ubuntu 24.04 LTS pre-configured in the VM |
| Network config | You configure before install | Provided via vSphere OVF properties or interactive prompts |
| License | Entered in Admin Console | Provided at VM deployment time (base64-encoded) |
| Database | Built-in or external PostgreSQL | External PostgreSQL required |
| Upgrades | Admin Console checks for updates | New VM image with backup/restore (see Upgrading) |

What's included

The appliance VM ships with:

  • Ubuntu 24.04 LTS (x86_64, EFI boot)
  • NetBox Enterprise Embedded Cluster installer (air-gap bundle)
  • All required container images and Helm charts
  • Kubernetes cluster components (k0s)
  • PostgreSQL client tools (v17) for troubleshooting
  • K9s (Kubernetes TUI) for cluster inspection
  • Pre-configured firewall (iptables)
  • Diagnostic utilities (jq, curl, net-tools, dnsutils, netcat)

VM specifications

The appliance OVA is pre-configured with hardware settings that match NetBox Enterprise production requirements:

| Resource | Value |
|---|---|
| vCPU | 8 |
| Memory | 24 GB |
| Disk | 100 GB |
| Network adapter | VMXNET3 |
| Firmware | EFI with Secure Boot |
| vSphere compatibility | vSphere 7.0+ (VMX-18) |

Do not reduce these resources

These values match the production hardware requirements for Embedded Cluster. Reducing them may cause instability or installation failure.

Installation

Prerequisites

Before deploying the appliance, have the following ready:

  • OVA image received from NetBox Labs
  • vSphere 7.0+ environment with sufficient resources
  • Static IP address allocated for the appliance
  • Network details: subnet CIDR, default gateway, DNS server(s), NTP server(s)
  • External PostgreSQL database reachable from the appliance's network (the appliance does not use the built-in database)
  • License file: Base64-encoded license YAML provided by NetBox Labs
  • SSH public key: For netbox user access to the VM

IP address cannot change after deployment

The IP address and hostname are set at first boot and cannot be changed afterward. This is a Kubernetes limitation. Make sure your network configuration is final before deploying.

Step 1: Deploy the OVA

  1. In vSphere, select Deploy OVF Template and point it to the OVA file.
  2. Walk through the deployment wizard, selecting your target datacenter, cluster, datastore, and network.

Step 2: Configure OVF properties

During OVA deployment (or afterward via Edit Settings > vApp Options), fill in the following properties:

| Property | Description | Example |
|---|---|---|
| IP Address | Static IPv4 address | 192.168.1.100 |
| Subnet CIDR | Subnet prefix length | 24 |
| Default Gateway | Gateway IP address | 192.168.1.1 |
| DNS Servers | Comma-separated, up to 3 | 192.168.1.1,8.8.8.8 |
| NTP Servers | Comma-separated, up to 2 | 0.pool.ntp.org,1.pool.ntp.org |
| License Key | Base64-encoded license YAML (no newlines) | (provided by NetBox Labs) |
| SSH Public Key | SSH public key for the netbox user | ssh-rsa AAAAB3... |

Base64-encoding your license

If you received a license.yaml file, encode it with:

base64 -w 0 < license.yaml

Use the output (a single line with no newlines) as the License Key value.
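
Before pasting the value into the OVF property, it is worth confirming that the encoded string decodes back to your original file. A minimal sketch (the license.yaml contents below are a stand-in for your real license):

```shell
# Create a stand-in license file for illustration -- use your real license.yaml
printf 'license: example\n' > license.yaml

# Encode to a single line with no wrapping (-w 0)
ENCODED=$(base64 -w 0 < license.yaml)

# Round-trip check: decoding must reproduce the original file byte-for-byte
echo "$ENCODED" | base64 -d | diff - license.yaml && echo "round-trip OK"
```

If diff reports any difference, re-encode the file; a wrapped or truncated value is the most common cause of a license that fails to load.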

Step 3: Power on and bootstrap

  1. Power on the VM.
  2. On first boot, the appliance reads the OVF properties via VMware Tools and automatically:
    • Configures the network interface (static IP via Netplan)
    • Sets DNS and NTP servers
    • Installs the SSH public key for the netbox user
    • Decodes and writes the license file
    • Generates SSH host keys
  3. The bootstrap runs once and will not re-run on subsequent boots.
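
For reference, the generated Netplan file looks roughly like the sketch below. The interface name, filename, and addresses here are illustrative; the actual file on your appliance reflects the OVF values you supplied:

```yaml
# /etc/netplan/01-custom-network.yaml (illustrative values)
network:
  version: 2
  ethernets:
    ens192:              # interface name may differ on your VM
      dhcp4: false
      addresses:
        - 192.168.1.100/24
      routes:
        - to: default
          via: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1, 8.8.8.8]
```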

Step 4: Install NetBox Enterprise

After bootstrap completes, SSH into the appliance and run the Embedded Cluster installer in air-gap mode:

ssh netbox@<appliance-ip>

The installer bundle is located in /opt/nbe/. Extract it and run the installer:

cd /opt/nbe
tar -xzf nbe-*.tar.gz
sudo ./netbox-enterprise install --airgap --license /opt/nbe/license.yaml

This starts the Embedded Cluster installation. When it completes, it will print a URL and password for the Admin Console (port 30000).

Step 5: Configure via Admin Console

  1. Open your browser to https://<appliance-ip>:30000.
  2. Log in with the password provided by the installer.
  3. Configure NetBox Enterprise settings through the Admin Console, the same as a standard Embedded Cluster install. You will need to configure your external PostgreSQL database connection during this step.

Network and firewall

The appliance ships with a restrictive iptables firewall. Only the following ports are open by default:

| Port | Protocol | Purpose |
|---|---|---|
| 22 | TCP | SSH (management) |
| 80 | TCP | HTTP (redirects to HTTPS) |
| 443 | TCP | HTTPS (NetBox web UI and Diode gRPC) |
| 30000 | TCP | Admin Console (KOTS) |

ICMP (ping) and established/related connections are also allowed. All other inbound traffic is dropped.
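
In iptables terms, the default policy is roughly equivalent to the following sketch (the exact rules and chain layout on the appliance may differ):

```
# Illustrative INPUT policy, iptables-save style -- not the literal ruleset
-P INPUT DROP
-A INPUT -i lo -j ACCEPT
-A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -p tcp --dport 22 -j ACCEPT
-A INPUT -p tcp --dport 80 -j ACCEPT
-A INPUT -p tcp --dport 443 -j ACCEPT
-A INPUT -p tcp --dport 30000 -j ACCEPT
```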

Firewall considerations

If you have an external firewall or security group in front of the appliance, allow these ports from the appropriate sources:

  • Ports 80 and 443 -- from users who need to access the NetBox web interface. Port 443 also serves Diode gRPC traffic for data ingestion.
  • Port 30000 -- from administrators who need to access the Admin Console. Consider restricting this to management networks only.
  • Port 22 -- from administrators who need SSH access for troubleshooting.

Diode agent connectivity

If you use Diode for automated data ingestion, Diode agents connect to the appliance on port 443 using gRPC. Ensure your firewall allows this traffic from wherever your Diode agents run.
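
A quick way to verify reachability from an agent host is a plain TCP probe; 192.0.2.10 below is a documentation placeholder, so substitute your appliance's address:

```shell
# Probe TCP port 443 on the appliance from a Diode agent host.
# 192.0.2.10 is a placeholder -- use your appliance IP.
APPLIANCE_IP=192.0.2.10
if timeout 3 bash -c "cat < /dev/null > /dev/tcp/$APPLIANCE_IP/443" 2>/dev/null; then
    echo "port 443 reachable"
else
    echo "port 443 NOT reachable -- check firewalls along the path"
fi
```

Note that this only tests the TCP path; a successful connect does not prove gRPC/TLS works end to end, but a failure points squarely at the network path or an intermediate firewall.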

SSH access

The appliance creates a netbox user with passwordless sudo. Access it using the SSH key you provided during deployment:

ssh netbox@<appliance-ip>

Password-based SSH authentication is not available. If you need to change the SSH key after deployment, update /opt/nbe/.ssh/authorized_keys on the VM directly (via the vSphere console, for example).

Upgrading

Air-gapped appliance upgrades work differently from standard Embedded Cluster upgrades. Because the VM has no internet access, you cannot use the Admin Console's built-in update mechanism. Instead, upgrades follow a backup-and-restore workflow using a new appliance image.

Customer Success assistance

The NetBox Labs Customer Success team will guide you through the upgrade process. Contact them before starting an upgrade to confirm the target version and get the new appliance image.

Upgrade procedure

1. Get the new appliance image

Download the new OVA from NetBox Labs. This image contains the updated version of NetBox Enterprise with all container images and components pre-loaded.

2. Back up the existing appliance

Perform a manual backup of your current appliance's data before doing anything else. Follow the manual backup instructions to back up:

  • Media, scripts, and reports -- runtime files stored in NetBox volumes
  • Diode and Hydra secrets -- OAuth credentials (NetBox 1.10+)
  • Admin Console configuration -- so you don't have to re-enter all your settings on the new appliance

Copy these backup files off the appliance (via scp or similar) and store them safely.

Database backups

Since the appliance uses an external PostgreSQL database, back up your database using your own database backup procedures. The database itself does not need to be migrated as part of the appliance upgrade -- it remains on your external server.
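
As one possible approach, a logical backup with pg_dump might look like the sketch below. The host, user, and database name are assumptions -- substitute your own values:

```shell
# Illustrative logical backup of the external NetBox database.
# DB_HOST, DB_USER, and DB_NAME are assumed values -- use your own.
DB_HOST=db.example.com
DB_USER=netbox
DB_NAME=netbox

if command -v pg_dump >/dev/null 2>&1; then
    # Custom format (-Fc) supports selective and parallel restore via pg_restore
    pg_dump -h "$DB_HOST" -U "$DB_USER" -Fc "$DB_NAME" \
        > "netbox-$(date +%F).dump" \
        || echo "pg_dump failed -- check connectivity and credentials"
else
    echo "pg_dump not found -- install postgresql-client first"
fi
```

The appliance's bundled PostgreSQL client tools can run this directly on the VM, but running it from your usual database backup host keeps the dump off the appliance you are about to replace.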

3. Deploy the new appliance

Deploy the new OVA following the same installation steps as the original. Use the same network configuration (IP address, gateway, DNS, etc.) so that existing clients and bookmarks continue to work.

4. Install NetBox Enterprise on the new appliance

SSH into the new appliance and run the Embedded Cluster installer, just as you did during the initial setup. When prompted, bring it up in maintenance mode so you can restore data before NetBox starts serving traffic.

5. Restore your data

With the new appliance in maintenance mode, follow the manual restore instructions to restore:

  1. Apply your saved Admin Console configuration to reconnect to your external PostgreSQL database and restore your settings
  2. Restore media, scripts, and reports
  3. Restore Diode and Hydra secrets

Then disable maintenance mode via the Admin Console to bring NetBox back online.

6. Verify and decommission

Once the new appliance is confirmed working:

  1. Verify NetBox is accessible and your data is intact.
  2. Test SSO, Diode connectivity, and any other integrations.
  3. Power off and remove the old appliance VM.

Troubleshooting

Checking bootstrap status

If the VM doesn't come up as expected after first boot, check the bootstrap log:

# Via vSphere console (if SSH isn't reachable yet)
journalctl -u nbe-bootstrap.service

The bootstrap writes a flag file at /opt/nbe/.bootstrap-complete when it finishes successfully. If this file doesn't exist, the bootstrap didn't complete.
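
A one-liner check based on that flag (run on the appliance; the path is the one described above):

```shell
# Report bootstrap status based on the completion flag file
FLAG=/opt/nbe/.bootstrap-complete
if [ -f "$FLAG" ]; then
    echo "bootstrap complete"
else
    echo "bootstrap incomplete -- inspect: journalctl -u nbe-bootstrap.service"
fi
```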

Verifying network configuration

# Check the applied Netplan config
cat /etc/netplan/01-custom-network.yaml

# Check interface status
ip addr show

# Test DNS
dig netboxlabs.com @<your-dns-server>

Checking the Embedded Cluster

After the installer runs, enter the cluster shell before running any kubectl commands:

# Enter the Embedded Cluster shell (sets up KUBECONFIG)
/var/lib/embedded-cluster/bin/netbox-enterprise shell

# Check cluster status
kubectl get nodes

# Check pod status
kubectl get pods -A

# Use K9s for interactive inspection
k9s

# Exit when done
exit

For more details on cluster access and troubleshooting, see Advanced Tools and Troubleshooting.