Known Issues

The following problems still exist in this release and are in the process of being resolved.
If close to 4GB or more of system memory (RAM) is installed in a 32-bit Solaris system, and if some of this memory is mapped to physical addresses above 4GB, then the OpenGL immediate mode performance of NVIDIA Quadro GPUs in that system may be lower than on a similar system in which no memory is mapped above 4GB.
Similarly, Quadro OpenGL immediate mode performance may also be lower on 32-bit and 64-bit Solaris Express (SunOS 5.11) systems on which support for large user pages (page sizes of 2MB or 4MB) has been enabled. This is normally the case when more than 1GB of RAM is installed.
For best immediate mode OpenGL performance, it is recommended to use a 64-bit system and to disable support for large pages. The latter can be achieved by adding the following line to the /etc/system configuration file (a reboot is required for changes to /etc/system to take effect):

set auto_lpg_disable=1
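To check which page sizes your system supports, you can run the standard Solaris pagesize utility:

pagesize -a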
If you are using a notebook, see "Known Notebook Issues" in Chapter 17, Configuring a Notebook.
When FSAA is enabled (the __GL_FSAA_MODE environment variable is set to a value that enables FSAA and a multisample visual is chosen), the rendering may be corrupted when resizing the window.
When a multithreaded OpenGL application exits, it is possible for libGL's DSO finalizer (also known as the destructor, or "_fini") to be called while other threads are executing OpenGL code. The finalizer needs to free resources allocated by libGL. This can cause problems for threads that are still using these resources. Setting the environment variable "__GL_NO_DSO_FINALIZER" to "1" will work around this problem by forcing libGL's finalizer to leave its resources in place. These resources will still be reclaimed by the operating system when the process exits. Note that the finalizer is also executed as part of dlclose(3), so if you have an application that dlopens(3) and dlcloses(3) libGL repeatedly, "__GL_NO_DSO_FINALIZER" will cause libGL to leak resources until the process exits. Using this option can improve stability in some multithreaded applications, including Java3D applications.
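For example, the variable can be set for a single invocation of an application from the shell; the application name used here is only a placeholder:

__GL_NO_DSO_FINALIZER=1 ./my-threaded-gl-app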
XVideo will not work correctly when Composite is enabled unless using X.Org 7.1 or later. See Chapter 20, Using the X Composite Extension.
X servers prior to version 1.5.0 have a limitation in the number of visuals that can be available when Xinerama is enabled. Specifically, visuals with ID values over 255 will cause the server to corrupt memory, leading to incorrect behavior or crashes. In some configurations where many GLX features are enabled at once, the number of GLX visuals will exceed this limit. To avoid a crash, the NVIDIA X driver will discard visuals above the limit. To see which visuals are being discarded, run the X server with the -logverbose 6 option and then check the X server log file.
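If you start the X server manually with startx, for example, the option can be passed through to the server as follows (display managers provide their own mechanisms for passing server options):

startx -- -logverbose 6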
Some versions of the X.Org X server (1.5.0 and higher) have a bug that causes X to fail with an error similar to the following when there is more than one GPU in the computer:
(!!) More than one possible primary device found
(II) Primary Device is:
(EE) No devices detected.

Fatal server error:
no screens found
You can work around this problem by specifying the bus ID of the device you wish to use; an example Device section appears below. For more details, please search the xorg.conf manual page for "BusID". You can configure the X server with an X screen on each NVIDIA GPU by running:
nvidia-xconfig --enable-all-gpus
Please see Bugzilla bug #18321 for more details on this X server problem.
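As an illustration of the BusID workaround, a minimal Device section in xorg.conf might look like the following; the bus ID shown ("PCI:1:0:0") is only an example, and the correct value for your GPU is reported in the X server log file:

Section "Device"
    Identifier "Device0"
    Driver     "nvidia"
    BusID      "PCI:1:0:0"
EndSection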
Problems that Will Not Be Fixed

This section describes problems that will not be fixed. Usually, the source of the problem is beyond the control of NVIDIA. Following is the list of problems:
This motherboard uses a LinFinity regulator on the 3.3 V rail that is rated for only 5 A, less than the 6 A required by the AGP specification. When diagnostics or applications are running, the temperature of the regulator rises, causing the voltage supplied to the NVIDIA GPU to drop as low as 2.2 V. Under these circumstances, the regulator cannot supply the current on the 3.3 V rail that the NVIDIA GPU requires.
This problem does not occur when the graphics card has a switching regulator or when an external power supply is connected to the 3.3 V rail.
On Athlon motherboards with the VIA KX133 or 694X chip set, such as the ASUS K7V motherboard, NVIDIA drivers default to AGP 2x mode to work around insufficient drive strength on one of the signals.
AGP 1x transfers are used on Athlon motherboards with the Irongate chipset to work around a problem with signal integrity.
On ALi1541 and ALi1647 chipsets, NVIDIA drivers disable AGP to work around timing issues and signal integrity issues. See Chapter 7, Common Problems for more information on ALi chipsets.
Version 1.8 of the NV-CONTROL X Extension introduced target types for setting and querying attributes as well as receiving event notification on targets. Targets are objects like X Screens, GPUs and G-Sync devices. Previously, all attributes were described relative to an X Screen. These new bits of information (target type and target id) were packed in a non-compatible way in the protocol stream such that addressing X Screen 1 or higher would generate an X protocol error when mixing NV-CONTROL client and server versions.
This packing problem has been fixed in the NV-CONTROL 1.10 protocol, making it possible for the older (1.7 and prior) clients to communicate with NV-CONTROL 1.10 servers. Furthermore, the NV-CONTROL 1.10 client library has been updated to accommodate the target protocol packing bug when communicating with a 1.8 or 1.9 NV-CONTROL server. This means that the NV-CONTROL 1.10 client library should be able to communicate with any version of the NV-CONTROL server.
NVIDIA recommends that NV-CONTROL client applications relink with version 1.10 or later of the NV-CONTROL client library (libXNVCtrl.a, in the nvidia-settings-1.0.tar.gz tarball). The version of the client library can be determined by checking the NV_CONTROL_MAJOR and NV_CONTROL_MINOR definitions in the accompanying nv_control.h.
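For example, assuming the nv_control.h header has been extracted from the nvidia-settings-1.0.tar.gz tarball into the current directory, the protocol version can be checked with:

grep -E "NV_CONTROL_(MAJOR|MINOR)" nv_control.h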
The only web-released NVIDIA Solaris driver affected by this problem (i.e., the only driver to use version 1.8 or 1.9 of the NV-CONTROL X extension) is 1.0-8756.
For some models of CPU, the CPU throttling technology may affect not only CPU core frequency, but also memory frequency/bandwidth. On systems using integrated graphics, any reduction in memory bandwidth will affect the GPU as well as the CPU. This can negatively affect applications that use significant memory bandwidth, such as video decoding using VDPAU, or certain OpenGL operations. This may cause such applications to run with lower performance than desired.
To work around this problem, NVIDIA recommends configuring your CPU throttling implementation to avoid reducing memory bandwidth. This may be as simple as setting a certain minimum frequency for the CPU.
Depending on your operating system and/or distribution, this may involve writing to a configuration file in the /sys or /proc filesystem, or to another system configuration file. Please read, or search the Internet for, documentation regarding CPU throttling on your operating system.
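As an illustration only, on a system that exposes the Linux cpufreq interface under /sys, a minimum CPU frequency could be set along the following lines; the path and the frequency value (in kHz) are examples and vary from system to system:

echo 2000000 > /sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq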
If VDPAU gives the VDP_STATUS_NO_IMPLEMENTATION error message on a GPU which was labeled or specified as supporting PureVideo or PureVideo HD, one possible reason is a hardware defect. After ruling out any other software problems, NVIDIA recommends returning the GPU to the manufacturer for a replacement.
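One way to see which VDPAU features the driver actually reports for your GPU, assuming the third-party vdpauinfo utility is available on your system, is to run:

vdpauinfo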
Some applications have bugs that are triggered when the extension string is longer than a certain size. As more features are added to the driver, the length of this string increases and can trigger these sorts of bugs.
You can limit the extensions listed in the OpenGL extension string to those that appeared in a particular version of the driver by setting the __GL_ExtensionStringVersion environment variable to that version number. For example,
__GL_ExtensionStringVersion=17700 quake3
will run Quake 3 with the extension string that appeared in the 177.* driver series. Limiting the size of the extension string can work around this sort of application bug.