The NVIDIA Accelerated Linux Graphics Driver consists of the following components (filenames in parentheses are the full names of the components after installation; "x.y.z" denotes the current version, and in these cases appropriate symlinks are created during installation):
An X driver (/usr/X11R6/lib/modules/drivers/nvidia_drv.so); this driver is needed by the X server to use your NVIDIA hardware.
A GLX extension module for X (/usr/X11R6/lib/modules/extensions/libglx.so.x.y.z); this module is used by the X server to provide server-side GLX support.
An X module for wrapped software rendering (/usr/X11R6/lib/modules/libnvidia-wfb.so.x.y.z and optionally, /usr/X11R6/lib/modules/libwfb.so); this module is used by the X driver to perform software rendering on GeForce 8 series GPUs. If libwfb.so already exists, nvidia-installer will not overwrite it. Otherwise, it will create a symbolic link from libwfb.so to libnvidia-wfb.so.x.y.z.
An OpenGL library (/usr/lib/libGL.so.x.y.z); this library provides the API entry points for all OpenGL and GLX function calls. It is linked to at run-time by OpenGL applications.
An OpenGL core library (/usr/lib/libnvidia-glcore.so.x.y.z); this library is implicitly used by libGL and by libglx. It contains the core accelerated 3D functionality. You should not explicitly load it in your X config file -- that is taken care of by libglx.
Three VDPAU (Video Decode and Presentation API for Unix-like systems) libraries: The top-level wrapper (/usr/X11R6/lib/libvdpau.so.x.y.z), a debug trace library (/usr/X11R6/lib/libvdpau_trace.so.x.y.z), and the NVIDIA implementation (/usr/X11R6/lib/libvdpau_nvidia.so.x.y.z). See Appendix G, VDPAU Support for details.
Two CUDA libraries (/usr/lib/libcuda.so.x.y.z, /usr/lib/libcuda.la); these libraries provide runtime support for CUDA (high-performance computing on the GPU) applications.
Two OpenCL libraries (/usr/lib/libOpenCL.so.1.0.0, /usr/lib/libnvidia-opencl.so.x.y.z); the former is a vendor-independent Installable Client Driver (ICD) loader, and the latter is the NVIDIA Vendor ICD. A config file, /usr/lib/vendors/nvidia.icd, is also installed to advertise the NVIDIA Vendor ICD to the ICD Loader (see the example after this list).
A kernel module (/lib/modules/`uname -r`/kernel/drivers/video/nvidia.ko); this kernel module provides low-level access to your NVIDIA hardware for all of the above components. It is generally loaded into the kernel when the X server is started, and is used by the X driver and OpenGL. nvidia.ko consists of two pieces: the binary-only core, and a kernel interface that must be compiled specifically for your kernel version. Because the Linux kernel, unlike the X server, does not have a consistent binary interface, it is important that this kernel interface be matched to the version of the kernel you are using. This can be accomplished either by compiling the kernel interface yourself, or by using precompiled binaries provided for the kernels shipped with some of the more common Linux distributions. A quick way to verify the match is shown in the example after this list.
NVIDIA frontend module (/lib/modules/`uname -r`/kernel/drivers/video/nvidia-frontend.ko); this kernel module is built when the --multiple-kernel-modules parameter is passed to nvidia-installer (see “How can I minimize software overhead when driving many GPUs in a single system?”) or when the NV_BUILD_MODULE_INSTANCES variable is passed to make when building the NVIDIA kernel module (see “How can I build multiple NVIDIA kernel modules?”). The number of module instances can range from 1 to 8. This module registers the nvidia[0-7].ko modules and redirects operations to the individual module instances. Example invocations are shown after this list.
Multiple kernel modules (/lib/modules/`uname -r`/kernel/drivers/video/nvidia[0-7].ko); these modules are built under the same conditions as nvidia-frontend.ko above, and the number of module instances can likewise range from 1 to 8. Each of these modules is similar to nvidia.ko in terms of functionality.
NVIDIA Unified Memory kernel module (/lib/modules/`uname -r`/kernel/drivers/video/nvidia-uvm.ko); this kernel module provides functionality for sharing memory between the CPU and GPU in CUDA programs. It is generally loaded into the kernel when a CUDA program is started, and is used by the CUDA driver. Unified Memory is incompatible with multiple kernel modules.
The nvidia-tls libraries (/usr/lib/libnvidia-tls.so.x.y.z and /usr/lib/tls/libnvidia-tls.so.x.y.z); these files provide thread local storage support for the NVIDIA OpenGL libraries (libGL, libnvidia-glcore, and libglx). Each nvidia-tls library provides support for a particular thread local storage model (such as ELF TLS), and the one appropriate for your system will be loaded at run time.
The nvidia-ml library (/usr/lib/libnvidia-ml.so.x.y.z); The NVIDIA Management Library provides a monitoring and management API. See Chapter 25, The NVIDIA Management Library for more information.
The application nvidia-installer (/usr/bin/nvidia-installer) is NVIDIA's tool for installing and updating NVIDIA drivers. See Chapter 4, Installing the NVIDIA Driver for a more thorough description.
The application nvidia-xconfig (/usr/bin/nvidia-xconfig) is NVIDIA's tool for manipulating X server configuration files. See Chapter 6, Configuring X for the NVIDIA Driver for more information.
The application nvidia-settings (/usr/bin/nvidia-settings) is NVIDIA's tool for dynamic configuration while the X server is running. See Chapter 23, Using the nvidia-settings Utility for more information.
The application nvidia-smi (/usr/bin/nvidia-smi) is the NVIDIA System Management Interface for management and monitoring functionality. See Chapter 24, Using the nvidia-smi Utility for more information.
The application nvidia-debugdump (/usr/bin/nvidia-debugdump) is NVIDIA's tool for collecting internal GPU state. It is normally invoked by the nvidia-bug-report.sh (/usr/bin/nvidia-bug-report.sh) script. See Chapter 26, Using the nvidia-debugdump Utility for more information.
The daemon nvidia-persistenced (/usr/bin/nvidia-persistenced) is the NVIDIA Persistence Daemon for allowing the NVIDIA kernel module to maintain persistent state when no other NVIDIA driver components are running. See Chapter 27, Using the nvidia-persistenced Utility for more information.
The NVCUVID library (/usr/lib/libnvcuvid.so.x.y.z); The NVIDIA CUDA Video Decoder (NVCUVID) library provides an interface to hardware video decoding capabilities on NVIDIA GPUs with CUDA.
The NvEncodeAPI library (/usr/lib/libnvidia-encode.so.x.y.z); The NVENC Video Encoding library provides an interface to video encoder hardware on supported NVIDIA GPUs.
The NvIFROpenGL library (/usr/lib/libnvidia-ifr.so.x.y.z); The NVIDIA OpenGL-based Inband Frame Readback library provides an interface to capture and optionally encode an OpenGL framebuffer. NvIFROpenGL is a private API that is only available to approved partners for use in remote graphics scenarios. Please contact NVIDIA at GRIDteam@nvidia.com for more information.
The NvFBC library (/usr/lib/libnvidia-fbc.so.x.y.z); The NVIDIA Framebuffer Capture library provides an interface to capture and optionally encode the framebuffer of an X server screen. NvFBC is a private API that is only available to approved partners for use in remote graphics scenarios. Please contact NVIDIA at GRIDteam@nvidia.com for more information.
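As an illustration of the OpenCL ICD mechanism described above, the nvidia.icd config file simply names the vendor library that the ICD loader should open at run time. A minimal check; the exact contents may vary between driver releases:

    % cat /usr/lib/vendors/nvidia.icd
    libnvidia-opencl.so.1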
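To verify that the kernel interface of nvidia.ko was built for the kernel you are currently running, you can compare the module's "vermagic" string against the output of `uname -r`. A minimal sketch; the version strings shown are only illustrative:

    % uname -r
    3.10.0-123.el7.x86_64
    % modinfo nvidia | grep vermagic
    vermagic:       3.10.0-123.el7.x86_64 SMP mod_unload modversions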
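The two ways of enabling multiple kernel modules described above can be invoked roughly as follows. This is a sketch only: the .run filename, the instance count, and the exact make invocation depend on your driver package and build setup:

    # Via the installer:
    % sh ./NVIDIA-Linux-x86_64-x.y.z.run --multiple-kernel-modules

    # Via make, when building the NVIDIA kernel module by hand:
    % make NV_BUILD_MODULE_INSTANCES=4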
Problems will arise if applications use the wrong version of a library. This can be the case if there are either old libGL libraries or stale symlinks left lying around. If you think there may be something awry in your installation, check that the following files are in place (these are all the files of the NVIDIA Accelerated Linux Graphics Driver, as well as their symlinks):
    /usr/lib/xorg/modules/drivers/nvidia_drv.so
    /usr/lib/xorg/modules/libwfb.so (if your X server is new enough), or
    /usr/lib/xorg/modules/libnvidia-wfb.so and
    /usr/lib/xorg/modules/libwfb.so -> libnvidia-wfb.so
    /usr/lib/xorg/modules/extensions/libglx.so.x.y.z
    /usr/lib/xorg/modules/extensions/libglx.so -> libglx.so.x.y.z

    (the above may also be in /usr/lib/modules or /usr/X11R6/lib/modules)

    /usr/lib/libGL.so.x.y.z
    /usr/lib/libGL.so.1 -> libGL.so.x.y.z
    /usr/lib/libGL.so -> libGL.so.1
    /usr/lib/libnvidia-glcore.so.x.y.z
    /usr/lib/libcuda.so.x.y.z
    /usr/lib/libcuda.so -> libcuda.so.x.y.z

    /lib/modules/`uname -r`/video/nvidia.{o,ko}, or
    /lib/modules/`uname -r`/kernel/drivers/video/nvidia.{o,ko}
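To spot-check these files and their symlink chains without walking the list by hand, `ls -l` and `ldconfig -p` are convenient (adjust the paths to wherever the libraries are installed on your distribution):

    % ls -l /usr/lib/libGL.so*
    % ldconfig -p | grep libGL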
If there are other libraries whose "soname" conflicts with that of the NVIDIA libraries, ldconfig may create the wrong symlinks. It is recommended that you manually remove or rename conflicting libraries (be sure to rename clashing libraries to something that ldconfig will not look at -- we have found that prepending "XXX" to a library name generally does the trick), rerun 'ldconfig', and check that the correct symlinks were made. An example of a library that often creates conflicts is "/usr/X11R6/lib/libGL.so*".
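For example, to move a conflicting libGL out of ldconfig's view using the "XXX" renaming suggested above, then rebuild and verify the symlinks (the exact filename of the conflicting library will vary from system to system):

    % mv /usr/X11R6/lib/libGL.so.1.2 /usr/X11R6/lib/XXXlibGL.so.1.2
    % ldconfig
    % ls -l /usr/lib/libGL.so.1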
If the libraries appear to be correct, then verify that the application is using the correct libraries. For example, to check that the application /usr/X11R6/bin/glxgears is using the NVIDIA libraries, run:
    % ldd /usr/X11R6/bin/glxgears
        linux-gate.so.1 => (0xffffe000)
        libGL.so.1 => /usr/lib/libGL.so.1 (0xb7ed1000)
        libXext.so.6 => /usr/X11R6/lib/libXext.so.6 (0xb7ec0000)
        libX11.so.6 => /usr/X11R6/lib/libX11.so.6 (0xb7de0000)
        libpthread.so.0 => /lib/tls/libpthread.so.0 (0x00946000)
        libm.so.6 => /lib/tls/libm.so.6 (0x0075d000)
        libc.so.6 => /lib/tls/libc.so.6 (0x00631000)
        libnvidia-tls.so.343.13 => /usr/lib/tls/libnvidia-tls.so.343.13 (0xb7ddd000)
        libnvidia-glcore.so.343.13 => /usr/lib/libnvidia-glcore.so.343.13 (0xb5d1f000)
        libdl.so.2 => /lib/libdl.so.2 (0x00782000)
        /lib/ld-linux.so.2 (0x00614000)
Check the file being used for libGL -- if it is something other than the NVIDIA library, then you will need to either remove the library that is getting in the way or adjust your ld search path using the LD_LIBRARY_PATH environment variable. You may want to consult the man pages for ldconfig and ldd.
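As a quick check of the LD_LIBRARY_PATH approach, you can rerun ldd with the path override in place and confirm that the NVIDIA libGL is now chosen (the directory shown is illustrative; use whichever directory holds the NVIDIA libraries on your system):

    % env LD_LIBRARY_PATH=/usr/lib ldd /usr/X11R6/bin/glxgears | grep libGL
        libGL.so.1 => /usr/lib/libGL.so.1 (0xb7ed1000)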