The NVIDIA Accelerated Linux Graphics Driver consists of the following components (filenames in parentheses are the full names of the components after installation). Some paths may be different on different systems (e.g., X modules may be installed in /usr/X11R6/ rather than /usr/lib/xorg/).
An X driver (/usr/lib/xorg/modules/drivers/nvidia_drv.so); this driver is needed by the X server to use your NVIDIA hardware.
A GLX extension module for X (/usr/lib/xorg/modules/extensions/libglx.so.352.21); this module is used by the X server to provide server-side GLX support.
An X module for wrapped software rendering (/usr/lib/xorg/modules/libnvidia-wfb.so.352.21 and, optionally, /usr/lib/xorg/modules/libwfb.so); this module is used by the X driver to perform software rendering on GeForce 8 series GPUs. If libwfb.so already exists, nvidia-installer will not overwrite it. Otherwise, it will create a symbolic link from libwfb.so to libnvidia-wfb.so.352.21.
Graphics libraries (/usr/lib/libGL.so.352.21, /usr/lib/libEGL.so.352.21, /usr/lib/libGLESv1_CM.so.352.21, and /usr/lib/libGLESv2.so.352.21); these libraries provide the API entry points for all OpenGL, OpenGL ES, GLX, and EGL function calls. They are loaded at run-time by applications.
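To see which libGL the dynamic linker will actually resolve, you can query the ldconfig cache; this is a quick sanity check, and the exact output format and paths will depend on your distribution:

% ldconfig -p | grep libGL.so
        libGL.so.1 (libc6) => /usr/lib/libGL.so.1
        libGL.so (libc6) => /usr/lib/libGL.so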
Various libraries that are used internally by other driver components. These include /usr/lib/libnvidia-cfg.so.352.21, /usr/lib/libnvidia-compiler.so.352.21, /usr/lib/libnvidia-eglcore.so.352.21, /usr/lib/libnvidia-glcore.so.352.21, and /usr/lib/libnvidia-glsi.so.352.21.
Three VDPAU (Video Decode and Presentation API for Unix-like systems) libraries: the top-level wrapper (/usr/lib/libvdpau.so.352.21), a debug trace library (/usr/lib/libvdpau_trace.so.352.21), and the NVIDIA implementation (/usr/lib/vdpau/libvdpau_nvidia.so.352.21). See Appendix G, VDPAU Support for details. Source code for the wrapper and trace libraries is available at http://github.com/aaronp24/libvdpau/releases.
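If you need to see the VDPAU calls an application makes, the debug trace library is normally enabled through environment variables interpreted by the wrapper rather than by linking against it directly; Appendix G documents the supported variables. A minimal sketch, using VDPAU_TRACE and an arbitrary VDPAU-capable player as the example application:

% VDPAU_TRACE=1 mplayer -vo vdpau video.mkv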
The CUDA library (/usr/lib/libcuda.so.352.21), which provides runtime support for CUDA (high-performance computing on the GPU) applications.
Two OpenCL libraries (/usr/lib/libOpenCL.so.1.0.0 and /usr/lib/libnvidia-opencl.so.352.21); the former is a vendor-independent Installable Client Driver (ICD) loader, and the latter is the NVIDIA Vendor ICD. A config file, /etc/OpenCL/vendors/nvidia.icd, is also installed to advertise the NVIDIA Vendor ICD to the ICD loader.
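As a quick check that the NVIDIA Vendor ICD is advertised, you can inspect the config file; it normally contains nothing more than the name of the NVIDIA OpenCL library for the ICD loader to open (exact contents may vary between driver releases):

% cat /etc/OpenCL/vendors/nvidia.icd
libnvidia-opencl.so.1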
The nvidia-cuda-mps-control and nvidia-cuda-mps-server applications, which allow MPI processes to run concurrently on a single GPU.
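A minimal sketch of starting the MPS control daemon before launching MPI processes (the -d option runs it in the background; see the MPS documentation for the pipe and log directory environment variables and for the list of supported GPUs):

% nvidia-cuda-mps-control -d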
A kernel module (/lib/modules/`uname -r`/kernel/drivers/video/nvidia.ko); this kernel module provides low-level access to your NVIDIA hardware for all of the above components. It is generally loaded into the kernel when the X server is started, and is used by the X driver and OpenGL. nvidia.ko consists of two pieces: the binary-only core, and a kernel interface that must be compiled specifically for your kernel version. Note that the Linux kernel does not have a consistent binary interface like the X server, so it is important that this kernel interface be matched with the version of the kernel that you are using. This can be accomplished either by compiling it yourself, or by using precompiled binaries provided for the kernels shipped with some of the more common Linux distributions.
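You can check whether the kernel module is currently loaded with lsmod; if the command prints nothing, the module has not been loaded yet:

% lsmod | grep nvidia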
NVIDIA frontend module (/lib/modules/`uname -r`/kernel/drivers/video/nvidia-frontend.ko); this kernel module is built when the --multiple-kernel-modules parameter is passed to nvidia-installer (see “How can I minimize software overhead when driving many GPUs in a single system?”) or when the NV_BUILD_MODULE_INSTANCES variable is passed to make when building the NVIDIA kernel module (see “How can I build multiple NVIDIA kernel modules?”). The number of module instances can be between 1 and 8. This module registers the nvidia[0-7].ko modules and redirects operations to the individual nvidia[0-7].ko modules.
Multiple kernel modules (/lib/modules/`uname -r`/kernel/drivers/video/nvidia[0-7].ko); these modules are built when the --multiple-kernel-modules parameter is passed to nvidia-installer (see “How can I minimize software overhead when driving many GPUs in a single system?”) or when the NV_BUILD_MODULE_INSTANCES variable is passed to make when building the NVIDIA kernel module (see “How can I build multiple NVIDIA kernel modules?”). The number of module instances can be between 1 and 8. Each of these modules is similar to nvidia.ko in terms of functionality.
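As an illustrative sketch of the second approach (the exact make target and installer option syntax are described in the FAQ entries referenced above), the variable is passed on the make command line when building the kernel interface from the kernel/ directory of the extracted .run package:

% cd kernel
% make NV_BUILD_MODULE_INSTANCES=4 module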
NVIDIA Unified Memory kernel module (/lib/modules/`uname -r`/kernel/drivers/video/nvidia-uvm.ko); this kernel module provides functionality for sharing memory between the CPU and GPU in CUDA programs. It is generally loaded into the kernel when a CUDA program is started, and is used by the CUDA driver on supported platforms. Unified Memory is incompatible with multiple kernel modules.
The nvidia-tls libraries (/usr/lib/libnvidia-tls.so.352.21 and /usr/lib/tls/libnvidia-tls.so.352.21); these files provide thread local storage support for the NVIDIA OpenGL libraries (libGL, libnvidia-glcore, and libglx). Each nvidia-tls library provides support for a particular thread local storage model (such as ELF TLS), and the one appropriate for your system will be loaded at run time.
The nvidia-ml library (/usr/lib/libnvidia-ml.so.352.21); the NVIDIA Management Library provides a monitoring and management API. See Chapter 25, The NVIDIA Management Library for more information.
The application nvidia-installer (/usr/bin/nvidia-installer) is NVIDIA's tool for installing and updating NVIDIA drivers. See Chapter 4, Installing the NVIDIA Driver for a more thorough description. Source code is available at ftp://download.nvidia.com/XFree86/nvidia-installer/.
The application nvidia-modprobe (/usr/bin/nvidia-modprobe) is installed as setuid root and is used to load the NVIDIA kernel module and create the /dev/nvidia* device nodes by processes (such as CUDA applications) that don't run with sufficient privileges to do those things themselves. Source code is available at ftp://download.nvidia.com/XFree86/nvidia-modprobe/.
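Once the kernel module is loaded and the device nodes have been created, they can be listed directly; the exact set of nodes depends on how many GPUs are installed and which NVIDIA kernel modules are loaded (the output below is only illustrative):

% ls /dev/nvidia*
/dev/nvidia0  /dev/nvidiactl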
The application nvidia-xconfig (/usr/bin/nvidia-xconfig) is NVIDIA's tool for manipulating X server configuration files. See Chapter 6, Configuring X for the NVIDIA Driver for more information. Source code is available at ftp://download.nvidia.com/XFree86/nvidia-xconfig/.
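For example, running it as root with no options updates (or creates) the X configuration file so that the nvidia driver is used; an existing configuration file is normally backed up first (see Chapter 6 for the full option list):

# nvidia-xconfig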
The application nvidia-settings (/usr/bin/nvidia-settings) is NVIDIA's tool for dynamic configuration while the X server is running. See Chapter 23, Using the nvidia-settings Utility for more information.
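Attributes can also be queried and assigned from the command line without starting the graphical interface; for example, to list all attributes (see Chapter 23 and the nvidia-settings(1) man page for the attribute names):

% nvidia-settings -q all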
The libnvidia-gtk libraries (/usr/lib/libnvidia-gtk2.so.352.21 and, on some platforms, /usr/lib/libnvidia-gtk3.so.352.21); these libraries are required to provide the nvidia-settings user interface. Source code is available at ftp://download.nvidia.com/XFree86/nvidia-settings/.
The application nvidia-smi (/usr/bin/nvidia-smi) is the NVIDIA System Management Interface for management and monitoring functionality. See Chapter 24, Using the nvidia-smi Utility for more information.
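For example, running it with no arguments prints a summary of all detected GPUs, and the -q option prints a detailed query (see Chapter 24 for the complete set of options):

% nvidia-smi
% nvidia-smi -q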
The application nvidia-debugdump (/usr/bin/nvidia-debugdump) is NVIDIA's tool for collecting internal GPU state. It is normally invoked by the nvidia-bug-report.sh (/usr/bin/nvidia-bug-report.sh) script. See Chapter 26, Using the nvidia-debugdump Utility for more information.
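When reporting a problem to NVIDIA, it is usually sufficient to run the bug report script as root; it writes a compressed log (nvidia-bug-report.log.gz) in the current directory:

# nvidia-bug-report.sh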
The daemon nvidia-persistenced (/usr/bin/nvidia-persistenced) is the NVIDIA Persistence Daemon, which allows the NVIDIA kernel module to maintain persistent state when no other NVIDIA driver components are running. See Chapter 27, Using the nvidia-persistenced Utility for more information. Source code is available at ftp://download.nvidia.com/XFree86/nvidia-persistenced/.
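The daemon is typically started at boot by an init script or service unit provided by your distribution, but it can also be launched manually as root; options such as --user are described in Chapter 27:

# nvidia-persistenced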
The NVCUVID library (/usr/lib/libnvcuvid.so.352.21); the NVIDIA CUDA Video Decoder (NVCUVID) library provides an interface to hardware video decoding capabilities on NVIDIA GPUs with CUDA.
The NvEncodeAPI library (/usr/lib/libnvidia-encode.so.352.21); the NVENC Video Encoding library provides an interface to video encoder hardware on supported NVIDIA GPUs.
The NvIFROpenGL library (/usr/lib/libnvidia-ifr.so.352.21); the NVIDIA OpenGL-based Inband Frame Readback library provides an interface to capture and optionally encode an OpenGL framebuffer. NvIFROpenGL is a private API that is only available to approved partners for use in remote graphics scenarios. Please contact NVIDIA at GRIDteam@nvidia.com for more information.
The NvFBC library (/usr/lib/libnvidia-fbc.so.352.21); the NVIDIA Framebuffer Capture library provides an interface to capture and optionally encode the framebuffer of an X server screen. NvFBC is a private API that is only available to approved partners for use in remote graphics scenarios. Please contact NVIDIA at GRIDteam@nvidia.com for more information.
An X driver configuration file (/usr/share/X11/xorg.conf.d/nvidia-drm-outputclass.conf); if the X server is sufficiently new, this file will be installed to configure the X server to load the nvidia_drv.so driver automatically if it is started after the NVIDIA kernel module is loaded. This feature is supported in X.Org xserver 1.16 and higher when running on Linux kernel 3.9 or higher with CONFIG_DRM enabled.
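To check whether your system meets these requirements, you can inspect the running kernel version and its configuration; the location of the kernel config file varies by distribution (many ship it as /boot/config-<version>, others expose it as /proc/config.gz):

% uname -r
% grep CONFIG_DRM= /boot/config-`uname -r`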
Predefined application profile keys and documentation for those keys can be found in the following files in the directory /usr/share/nvidia/: nvidia-application-profiles-352.21-rc and nvidia-application-profiles-352.21-key-documentation. See Appendix J, Application Profiles for more information.
Problems will arise if applications use the wrong version of a library. This can be the case if there are either old libGL libraries or stale symlinks left lying around. If you think there may be something awry in your installation, check that the following files are in place (these are all the files of the NVIDIA Accelerated Linux Graphics Driver, as well as their symlinks):
/usr/lib/xorg/modules/drivers/nvidia_drv.so

/usr/lib/xorg/modules/libwfb.so (if your X server is new enough), or
/usr/lib/xorg/modules/libnvidia-wfb.so and
/usr/lib/xorg/modules/libwfb.so -> libnvidia-wfb.so

/usr/lib/xorg/modules/extensions/libglx.so.352.21
/usr/lib/xorg/modules/extensions/libglx.so -> libglx.so.352.21

(the above may also be in /usr/lib/modules or /usr/X11R6/lib/modules)

/usr/lib/libGL.so.352.21
/usr/lib/libGL.so.1 -> libGL.so.352.21
/usr/lib/libGL.so -> libGL.so.1

/usr/lib/libnvidia-glcore.so.352.21

/usr/lib/libcuda.so.352.21
/usr/lib/libcuda.so -> libcuda.so.352.21

/lib/modules/`uname -r`/video/nvidia.{o,ko}, or
/lib/modules/`uname -r`/kernel/drivers/video/nvidia.{o,ko}
If there are other libraries whose "soname" conflicts with that of the NVIDIA libraries, ldconfig may create the wrong symlinks. It is recommended that you manually remove or rename conflicting libraries (be sure to rename clashing libraries to something that ldconfig will not look at -- we have found that prepending "XXX" to a library name generally does the trick), rerun 'ldconfig', and check that the correct symlinks were made. An example of a library that often creates conflicts is "/usr/lib/mesa/libGL.so*".
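A sketch of that procedure, using the Mesa libGL mentioned above as the example conflict (substitute whatever conflicting path exists on your system), followed by a check of the resulting symlink:

# mv /usr/lib/mesa/libGL.so.1 /usr/lib/mesa/XXXlibGL.so.1
# ldconfig
# ldconfig -p | grep libGL.so.1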
If the libraries appear to be correct, then verify that the application is using the correct libraries. For example, to check that the application /usr/bin/glxgears is using the NVIDIA libraries, run:
% ldd /usr/bin/glxgears
        linux-gate.so.1 =>  (0xffffe000)
        libGL.so.1 => /usr/lib/libGL.so.1 (0xb7ed1000)
        libXext.so.6 => /usr/lib/libXext.so.6 (0xb7ec0000)
        libX11.so.6 => /usr/lib/libX11.so.6 (0xb7de0000)
        libpthread.so.0 => /lib/tls/libpthread.so.0 (0x00946000)
        libm.so.6 => /lib/tls/libm.so.6 (0x0075d000)
        libc.so.6 => /lib/tls/libc.so.6 (0x00631000)
        libnvidia-tls.so.352.21 => /usr/lib/tls/libnvidia-tls.so.352.21 (0xb7ddd000)
        libnvidia-glcore.so.352.21 => /usr/lib/libnvidia-glcore.so.352.21 (0xb5d1f000)
        libdl.so.2 => /lib/libdl.so.2 (0x00782000)
        /lib/ld-linux.so.2 (0x00614000)
Check the files being used for libGL -- if it is something other than the NVIDIA library, then you will need to either remove the library that is getting in the way or adjust your ld search path using the LD_LIBRARY_PATH environment variable. You may want to consult the man pages for ldconfig and ldd.
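For instance, to prefer a particular directory for a single invocation (the directory shown is only an example; use the one where the NVIDIA libraries were installed on your system):

% LD_LIBRARY_PATH=/usr/lib glxgears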