How to install the Nimbra Edge NVIDIA prerequisites #

If you plan to run transcoding workloads on Nimbra Edge appliances using NVIDIA GPUs for acceleration, you will need to prepare the node with the necessary software prerequisites.

Prerequisites vary depending on how Nimbra Edge will be installed on this node. Ensure you are following the correct instructions:

Docker-based appliances installed with the connectit installer require

  • NVIDIA drivers
  • NVIDIA Container Toolkit (Docker)

Kubernetes video nodes require

  • NVIDIA drivers
  • NVIDIA Container Toolkit (containerd for Kubernetes)
  • NVIDIA k8s-device-plugin

Check existing state #

Many cloud providers and OS installers preinstall NVIDIA drivers if a GPU is detected. These steps will help you verify if a GPU is present and if the drivers are already installed.

Follow the section matching your installation type. If some steps are already completed, you can skip them.

Docker-based appliances #

Ensure the GPU is detected by the system:

lspci | grep -i nvidia

Example output

00:08.0 VGA compatible controller: NVIDIA Corporation Device 2d30 (rev a1)
00:08.1 Audio device: NVIDIA Corporation Device 22eb (rev a1)

Check if NVIDIA drivers are installed

nvidia-smi

Example output

+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 580.95.05              Driver Version: 580.95.05      CUDA Version: 13.0     |
+-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA RTX PRO 2000 Blac...    Off |   00000000:00:08.0 Off |                  Off |
| 30%   58C    P0             19W /   70W |     622MiB /  16311MiB |      5%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+

Check if NVIDIA Container Toolkit is installed and configured

sudo docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi

Example output

Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
20043066d3d5: Pull complete 
06808451f0d6: Download complete 
Digest: sha256:c35e29c9450151419d9448b0fd75374fec4fff364a27f176fb458d472dfc9e54
Status: Downloaded newer image for ubuntu:latest
Mon Dec 15 12:36:54 2025       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 580.95.05              Driver Version: 580.95.05      CUDA Version: 13.0     |
+-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA RTX PRO 2000 Blac...    Off |   00000000:00:08.0 Off |                  Off |
| 30%   58C    P0             19W /   70W |     622MiB /  16311MiB |      5%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+

Kubernetes video nodes #

Run the same lspci and nvidia-smi checks shown above to confirm the GPU is detected and the drivers are installed.

Check if NVIDIA Container Toolkit and k8s-device-plugin are installed and configured

kubectl get nodes -o json | jq '.items[].status.allocatable'

Example output

"nvidia.com/gpu": "1"
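To check allocatable GPUs across the whole cluster at once, a jq filter like the following can sum the `nvidia.com/gpu` resource over all nodes. This filter is an illustrative sketch, not part of the Nimbra Edge tooling; it assumes kubectl and jq are available.

```shell
# jq filter that sums allocatable NVIDIA GPUs across all nodes.
# Nodes without the resource contribute 0.
gpu_sum_filter='[.items[].status.allocatable["nvidia.com/gpu"] // "0" | tonumber] | add'

# On a live cluster (requires kubectl and jq):
#   kubectl get nodes -o json | jq "$gpu_sum_filter"

# Demonstration against a sample node list (one GPU node, one CPU-only node):
echo '{"items":[{"status":{"allocatable":{"nvidia.com/gpu":"1"}}},{"status":{"allocatable":{"cpu":"4"}}}]}' \
  | jq "$gpu_sum_filter"
```

A result of 0 (or null) means the device plugin is not advertising any GPUs to the scheduler.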

Install the NVIDIA drivers #

Refer to the NVIDIA Driver Installation Guide for your specific operating system.

Verifying the installation #

After the installation, ensure the drivers are installed correctly by running the following command.

nvidia-smi

Example output

+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 580.95.05              Driver Version: 580.95.05      CUDA Version: 13.0     |
+-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA RTX PRO 2000 Blac...    Off |   00000000:00:08.0 Off |                  Off |
| 30%   58C    P0             19W /   70W |     622MiB /  16311MiB |      5%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+

The Nimbra Edge connectit installer bases its decisions on the NVIDIA devices available under /dev, which you can verify by running the following command:

ls -1 /dev | grep -i nvidia

If the drivers are installed correctly, you should see output similar to this:

nvidia-caps
nvidia-modeset
nvidia-uvm
nvidia-uvm-tools
nvidia0
nvidiactl

The Nimbra Edge connectit installer requires access to /dev/nvidia-uvm, /dev/nvidia-uvm-tools, /dev/nvidiactl, as well as the actual GPU devices, enumerated as /dev/nvidia0, /dev/nvidia1, etc.
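The requirement above can be checked in one pass with a small helper. This is a sketch, not part of the connectit installer; the function name and its directory argument are introduced here for illustration, and only the first GPU device (nvidia0) is checked.

```shell
# check_nvidia_devices: verify the device nodes the connectit installer needs
# exist under the given directory (defaults to /dev).
check_nvidia_devices() {
  dir="${1:-/dev}"
  missing=""
  for node in nvidia-uvm nvidia-uvm-tools nvidiactl nvidia0; do
    [ -e "$dir/$node" ] || missing="$missing $node"
  done
  if [ -z "$missing" ]; then
    echo "ok: all required NVIDIA device nodes present in $dir"
  else
    echo "missing in $dir:$missing"
  fi
}

check_nvidia_devices /dev
```

If the helper reports missing nodes, recheck the driver installation before continuing; multi-GPU systems will additionally expose /dev/nvidia1, /dev/nvidia2, and so on.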

Install the NVIDIA Container Toolkit #

The NVIDIA Container Toolkit allows containers running on the system to request access to the GPU.

To set it up, first follow the NVIDIA Container Toolkit installation steps.

Then follow the configuration instructions for your container runtime: Docker for connectit-based appliances, or containerd for Kubernetes video nodes.
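If the nvidia-ctk CLI was installed along with the toolkit, the runtime configuration step typically looks like the following (these commands come from the NVIDIA Container Toolkit documentation; verify them against the version you installed):

```shell
# Docker-based appliances: register the NVIDIA runtime with Docker and restart it.
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

# Kubernetes video nodes: configure containerd instead, then restart it.
sudo nvidia-ctk runtime configure --runtime=containerd
sudo systemctl restart containerd
```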

Verifying the installation #

You should now be able to run the same check as earlier, this time inside a container. The following command should produce the same nvidia-smi output as before:

sudo docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi

If you are installing on Kubernetes, continue to the k8s-device-plugin instructions.

Install the NVIDIA k8s-device-plugin (Kubernetes only) #

Follow the k8s-device-plugin installation steps.

Verifying the installation #

After the installation, you can verify that the plugin is running correctly by running the NVIDIA example workload.
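As a sketch of what such a test workload can look like, the pod below requests one GPU and runs NVIDIA's CUDA vector-add sample. The image tag is illustrative; replace it with the sample image referenced in the k8s-device-plugin documentation for your plugin version.

```shell
# Launch a minimal one-GPU test pod (image tag is an example, not authoritative).
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test
spec:
  restartPolicy: Never
  containers:
    - name: cuda-vectoradd
      image: nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda11.7.1-ubuntu20.04
      resources:
        limits:
          nvidia.com/gpu: 1
EOF

# Once the pod completes, inspect its logs; the vector-add sample reports
# success in its output. Clean up with: kubectl delete pod gpu-test
kubectl logs gpu-test
```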

Before installing Nimbra Edge (Kubernetes only) #

If you are using the NVIDIA gpu-operator, you’ll need to disable CDI as it conflicts with how Nimbra Edge accesses GPU resources. The following command will disable CDI in your cluster:

kubectl patch clusterpolicies.nvidia.com cluster-policy --type=merge -p '{"spec":{"cdi":{"enabled":false}}}'
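After patching, you can confirm the change took effect by reading the field back. This check is a suggestion, not from the Nimbra Edge documentation; it assumes the gpu-operator's ClusterPolicy is named cluster-policy, as in the patch command above.

```shell
# Read back the CDI setting from the gpu-operator ClusterPolicy;
# an empty result or "false" means CDI is disabled.
kubectl get clusterpolicies.nvidia.com cluster-policy -o jsonpath='{.spec.cdi.enabled}'
```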

While installing Nimbra Edge #

The connectit installer will automatically detect the presence of NVIDIA GPUs.

./connectit install video [...]

If any are found, the installer will display a message similar to this:

Detected Nvidia transcoding accelerator(s)

When installing or upgrading the Edge installation, use the --transcode-accelerator nvidia flag to enable NVIDIA GPU support.

After installing Nimbra Edge #

Once you have installed Nimbra Edge, the video appliances will report the presence of GPUs.

Accelerator identification #

In the Appliance page for your appliance, the specific model of accelerator will be listed under Metadata. If it is not present, delete the appliance and reinstall it to ensure it is registered correctly.

(Screenshot: appliance accelerator information shown in the network manager)

Metrics #

  1. Navigate to the Appliance page and select the desired appliance.
  2. Click Appliance Metrics.
  3. Under Details - Transcode accelerator, you will find the accelerator type and its key metrics, if a GPU is present.