GPU passthrough in VMWare

Using a GPU reduces the load on the CPUs and RAM. For ZoneMinder, /dev/shm usage is down by about 50% and load is about 30% of what it was before deploying the NVidia GTX 1050 Ti.

==Nvidia GPU in VMware==

The versions shown here matter: before certain updates it did not work at all, crashing the host, and the VM would not start. The versions listed are the latest available as of 6 August 2020.

* Host: Dell T320, 1-socket Xeon E5-2407 2.2 GHz CPU, BIOS 2.9.0
* VMware: ESXi 6.5.0 patch level 16576891
* GPU: MSI GeForce GTX 1050 Ti (this card does not require any host BIOS settings changing, nor Memory Mapped I/O settings on the VM)
* Cameras: Four Reolink RLC-520, encoding at 2048 x 1536, 10 fps, High H.264 profile
* VM: Ubuntu 20.04 LTS server with no extras. Four vCPUs, 6 GB RAM, 30 GB root and EFI, 300 GB XFS for /var

===ESXi host===

SSH into the host and edit /etc/vmware/passthru.map, changing the word bridge to link in the NVIDIA entry. This avoids a PSOD <ref>https://www.reddit.com/r/vmware/comments/f3xsgj/nvidia_gpu_esx_65_dell_t320_pci_passthrough_crash/ - Reddit post - NVIDIA GPU / ESX 6.5 / DELL T320 / PCI Pass-through crash on shutdown</ref> on the host when the VM with the GPU passed through to it is restarted.

<syntaxhighlight lang="text">
# NVIDIA
10de  ffff  link   false
</syntaxhighlight>
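If you want to cross-check that 10de really is your card's vendor ID before editing, esxcli can list the host's PCI devices (a sanity check only; the exact output format varies between ESXi builds):

<syntaxhighlight lang="shell-session">
# esxcli hardware pci list | grep -i -A 4 nvidia
</syntaxhighlight>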

Enable passthrough for the GPU on the host using the DirectPath I/O mechanism <ref>https://blogs.vmware.com/apps/2018/09/using-gpus-with-virtual-machines-on-vsphere-part-2-vmdirectpath-i-o.html - Using GPUs with Virtual Machines on vSphere – Part 2: VMDirectPath I/O</ref>, reboot the host, then connect both devices to the VM. The card presents an audio device as well as the video device itself.

===Ubuntu VM===

The VM must use EFI, so install with the Ubuntu server installer rather than the minimal installer, which does not work with EFI boot. Set the VM guest OS type to Ubuntu Linux (64-bit).
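For reference, the EFI choice corresponds to this entry in the VM's .vmx file (normally set through the VM's Boot Options rather than by hand):

<syntaxhighlight lang="ini">
firmware = "efi"
</syntaxhighlight>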

In the VM's Advanced settings, set the following flag to FALSE. This stops the hypervisor from identifying itself to the guest, which avoids a problem where the GPU fails to initialise properly:

<syntaxhighlight lang="ini">
hypervisor.cpuid.v0 = FALSE
</syntaxhighlight>
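One way to confirm the flag took effect: the hypervisor flag in /proc/cpuinfo comes from the same CPUID leaf this setting hides, so after the change this should print nothing inside the guest:

<syntaxhighlight lang="shell-session">
# grep hypervisor /proc/cpuinfo
</syntaxhighlight>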

===Nvidia drivers and CUDA===

These instructions stay within the drivers provided by Ubuntu LTS. NVidia, as upstream, also provides drivers; these will be newer but may break something.

Use this command to decide which driver to install:

<syntaxhighlight lang="shell-session">
# ubuntu-drivers devices
</syntaxhighlight>
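If ubuntu-drivers lists nothing, first confirm the guest can see the passed-through card on its PCI bus at all (the device name shown will vary with your card):

<syntaxhighlight lang="shell-session">
# lspci -nn | grep -i nvidia
</syntaxhighlight>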

Install the "headless" version of the driver and reboot:

<syntaxhighlight lang="shell-session">
# apt install nvidia-headless-440
</syntaxhighlight>
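Note that the headless package does not include the nvidia-smi tool used below; on Ubuntu that ships in a separate utilities package matching the driver series:

<syntaxhighlight lang="shell-session">
# apt install nvidia-utils-440
</syntaxhighlight>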

Run this to confirm it is working after rebooting:

<syntaxhighlight lang="shell-session">
# nvidia-smi
</syntaxhighlight>

If you just need decoding, e.g. for ZoneMinder, this package provides ''libnvcuvid.so'':

<syntaxhighlight lang="shell-session">
# apt install libnvidia-decode-440
</syntaxhighlight>
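To confirm the decode library landed where the dynamic linker can find it:

<syntaxhighlight lang="shell-session">
# ldconfig -p | grep nvcuvid
</syntaxhighlight>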

===Testing===

Check ffmpeg has cuda support:

<syntaxhighlight lang="shell-session">
# ffmpeg -hwaccels
ffmpeg version 4.2.4-1ubuntu0.1 Copyright (c) 2000-2020 the FFmpeg developers
...
Hardware acceleration methods:
vdpau
cuda
vaapi
drm
opencl
cuvid
</syntaxhighlight>
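You can also ask this ffmpeg build for its CUVID/NVDEC decoders; the h264_cuvid entry is the one these H.264 cameras would exercise, if the build includes it:

<syntaxhighlight lang="shell-session">
# ffmpeg -decoders | grep cuvid
</syntaxhighlight>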

There should be no error messages relating to libraries when you run something like this, which streams from a camera to /dev/null and uses CUDA:

<syntaxhighlight lang="shell-session">
# ffmpeg -hwaccel cuda -i "rtmp://HOSTNAME_OR_IP/bcs/channel0_main.bcs?channel=0&stream=0&user=admin&password=PASSWORD" -an -f rawvideo -y /dev/null
</syntaxhighlight>


In another console, you could run nvidia-smi and see a process using the GPU.
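watch (from Ubuntu's procps package) is a convenient way to keep that view refreshing while the stream runs:

<syntaxhighlight lang="shell-session">
# watch -n 1 nvidia-smi
</syntaxhighlight>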

==References==

<references />