The NVIDIA Linux GPU Driver contains several kernel modules: nvidia.ko, nvidia-modeset.ko, nvidia-uvm.ko, nvidia-drm.ko, and nvidia-peermem.ko.
Two "flavors" of these kernel modules are provided:
Proprietary. This is the flavor that NVIDIA has historically shipped.
Open, i.e. source-published, kernel modules that are dual licensed MIT/GPLv2. With every driver release, the source code to the open kernel modules is published on https://github.com/NVIDIA/open-gpu-kernel-modules and a tarball is provided on https://download.nvidia.com/XFree86/.
The proprietary flavor supports the Maxwell, Pascal, Volta, Turing, and later GPU architectures.
The open flavor of kernel modules supports Turing and later GPUs. The open kernel modules cannot support GPUs before Turing, because the open kernel modules depend on the GPU System Processor (GSP) first introduced in Turing.
Most features of the Linux GPU driver are supported with the open flavor of kernel modules, including CUDA, Vulkan, OpenGL, OptiX, and X11. G-Sync with desktop GPUs is supported. Suspend, Hibernate, and Resume power management is supported, as is Run Time D3 (RTD3) on Ampere and later GPUs.
However, in the current release, some display and graphics features (notably SLI, and G-Sync on notebooks) and NVIDIA virtual GPU (vGPU) are not yet supported. These features will be added in upcoming driver releases.
Use of the open kernel modules on GeForce and Workstation GPUs should be considered alpha-quality in this release due to the missing features listed above. To enable use of the open kernel modules on GeForce and Workstation GPUs, set the "NVreg_OpenRmEnableUnsupportedGpus" nvidia.ko kernel module parameter to 1. E.g.,
modprobe nvidia NVreg_OpenRmEnableUnsupportedGpus=1
or, in an /etc/modprobe.d/ configuration file:
options nvidia NVreg_OpenRmEnableUnsupportedGpus=1
The need for this kernel module parameter will be removed in a future release once performance and functionality in the open kernel modules matures and meets or exceeds that of the proprietary kernel modules.
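Once nvidia.ko is loaded, you can confirm whether the parameter took effect. On recent drivers, registry parameters appear (without the "NVreg_" prefix) in /proc/driver/nvidia/params; verify the path and format on your system. A sketch, using a sample of that file so it runs without a GPU:

```shell
#!/bin/sh
# Sample /proc/driver/nvidia/params content; on a real system use:
#   params="$(cat /proc/driver/nvidia/params)"
params='ModifyDeviceFiles: 1
OpenRmEnableUnsupportedGpus: 1'

# Check whether the open kernel modules were enabled for GeForce and
# Workstation GPUs via NVreg_OpenRmEnableUnsupportedGpus=1.
if printf '%s\n' "$params" | grep -q '^OpenRmEnableUnsupportedGpus: 1$'; then
    echo "open kernel modules enabled for GeForce/Workstation GPUs"
else
    echo "parameter not set"
fi
```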
Though the kernel modules in the two flavors are different, they are based on the same underlying source code. The two flavors are mutually exclusive: they cannot be used within the kernel at the same time, and they should not be installed on the filesystem at the same time.
The user-space components of the NVIDIA Linux GPU driver are identical and behave in the same way, regardless of which flavor of kernel module is used.
Because the two flavors of kernel modules are mutually exclusive, you need to choose which to use at install time. This can be selected with the "--kernel-module-build-directory" .run file option, or its short form "-m". Use "-m=kernel" to install the proprietary flavor of kernel modules (the default). Use "-m=kernel-open" to install the open flavor of kernel modules.
E.g.,
sh ./NVIDIA-Linux-[...].run -m=kernel-open
As a convenience, the open kernel modules distributed in the .run file are pre-compiled.
Advanced users, who want to instead build the open kernel modules from source, should do the following:
Uninstall any existing driver with `nvidia-uninstall`.
Install from the .run file with the "--no-kernel-modules" option, to install everything except the kernel modules.
Fetch, build, and install the open kernel module source from https://github.com/NVIDIA/open-gpu-kernel-modules. See https://github.com/NVIDIA/open-gpu-kernel-modules/blob/main/README.md for details on building.
Note that the version of the open kernel module source fetched from https://github.com/NVIDIA/open-gpu-kernel-modules must match the version of the .run file.
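To keep the two versions in sync, the driver version can be extracted from the .run filename and used when fetching the source (release tags in the GitHub repository are named by driver version; verify this against the repository before relying on it). The parse_version helper below is purely illustrative, not an NVIDIA tool:

```shell
#!/bin/sh
# Illustrative helper: NVIDIA-Linux-<arch>-<version>.run -> <version>
parse_version() {
    basename "$1" .run | sed 's/^NVIDIA-Linux-[^-]*-//'
}

ver="$(parse_version NVIDIA-Linux-x86_64-550.54.14.run)"  # example filename
echo "$ver"

# The matching source could then be fetched and built, e.g.:
#   git clone --branch "$ver" --depth 1 \
#       https://github.com/NVIDIA/open-gpu-kernel-modules.git
#   cd open-gpu-kernel-modules && make modules -j"$(nproc)"
#   sudo make modules_install && sudo depmod
# (see the repository README for the authoritative build instructions)
```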
You can determine which flavor of kernel modules is installed either by running `modinfo` or by looking at /proc/driver/nvidia/version.
E.g., the proprietary flavor will report:
# modinfo nvidia | grep license
license:        NVIDIA
# cat /proc/driver/nvidia/version
NVRM version: NVIDIA UNIX x86_64 Kernel Module  [...]
The open flavor will report:
# modinfo nvidia | grep license
license:        Dual MIT/GPL
# cat /proc/driver/nvidia/version
NVRM version: NVIDIA UNIX Open Kernel Module for x86_64  [...]
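The license check lends itself to scripting. A sketch, where flavor_from_license is an illustrative helper (not an NVIDIA tool) and the sample string stands in for `modinfo -F license nvidia` on a live system:

```shell
#!/bin/sh
# Illustrative helper: map the nvidia.ko license string to a flavor name.
flavor_from_license() {
    case "$1" in
        "Dual MIT/GPL") echo open ;;
        "NVIDIA")       echo proprietary ;;
        *)              echo unknown ;;
    esac
}

# On a real system:
#   lic="$(modinfo -F license nvidia)"
lic="Dual MIT/GPL"          # sample value for demonstration
flavor_from_license "$lic"  # prints "open"
```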