How to install the NVIDIA Kubernetes device plugin
The NVIDIA Kubernetes device plugin is a DaemonSet that automatically detects NVIDIA GPUs in a Kubernetes cluster and exposes them to running containers.
Prerequisites
- Kubernetes cluster with NVIDIA GPU-enabled nodes
- NVIDIA drivers and container runtime installed on the nodes
To check whether NVIDIA GPU resources are already available, query your cluster with:
kubectl get nodes -o json | jq '.items[].status.allocatable'
If NVIDIA GPU resources are available, the output contains a section like the following, which indicates that GPU resources are present on the node and how many are allocatable to workloads inside Kubernetes:
"nvidia.com/gpu": "1"
Public cloud
Most cloud providers preinstall the NVIDIA k8s-device-plugin on their Kubernetes clusters when GPU nodes are used. If kubectl does not report any GPU resources, but the nodes have GPUs, you may need to install the plugin manually.
Installation with kubectl
The sources of the device plugin are published by NVIDIA on GitHub. The following command installs the DaemonSet on your cluster:
kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/<version>/deployments/static/nvidia-device-plugin.yml
Replace <version> with the latest release, e.g. v0.18.0.
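Applying the manifest creates a DaemonSet in the `kube-system` namespace. A quick way to confirm that the plugin pods are scheduled on the GPU nodes is shown below; the DaemonSet name and pod label are taken from the static manifest and may differ between releases, so adjust them if your cluster shows different names:

```shell
# Show the device plugin DaemonSet; DESIRED/READY should match the number
# of GPU nodes in the cluster.
kubectl get daemonset nvidia-device-plugin-daemonset -n kube-system

# List the plugin pods and the nodes they run on (label is an assumption
# based on the static manifest).
kubectl get pods -n kube-system -l name=nvidia-device-plugin-ds -o wide
```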
Verification
After the installation, you can verify that the plugin is working correctly by running an NVIDIA example workload.
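A minimal test pod, adapted from NVIDIA's examples, requests one GPU and prints the driver and GPU information with `nvidia-smi`. The image tag below is illustrative; any CUDA base image available to your cluster that ships `nvidia-smi` will do:

```shell
# Launch a one-shot pod that requests one GPU via the device plugin.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test
spec:
  restartPolicy: Never
  containers:
    - name: cuda
      image: nvidia/cuda:12.4.1-base-ubuntu22.04   # illustrative tag
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1   # one GPU allocated by the device plugin
EOF

# Once the pod has completed, the GPU table should appear in its logs:
kubectl logs gpu-test
```

If the pod stays in `Pending`, no node could satisfy the `nvidia.com/gpu` request, which usually means the plugin is not running on the GPU nodes.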
When Nimbra Edge is installed
Once you have installed Nimbra Edge on your cluster, the video appliances will report the presence of GPUs.
- Navigate to the Appliance page and select the desired appliance.
- Click Appliance Metrics.
- Under Details - Transcode accelerator, you will find the GPU metrics for the appliance; if an accelerator is present, its type and key metrics are shown.