Hardware Accelerated NVidia and Intel Graphics Together in Debian

In my last post on this topic, Various Official Approaches: All the Ways Multiseat Didn't Work for Us, I came to the conclusion that all approaches using a single graphics card are hacks that are not robust. The best way around that is to use two xservers, each running on a separate graphics card. So, I wondered if there was a BIOS hack that could circumvent the problem stopping me from executing my original plan of using both the Intel 3000 graphics on the CPU and the NVidia card. While researching that, I discovered that the Shuttle manual was wrong: if the CPU, chipset, and external video card all support DisplayPort, both internal and external graphics can run as long as the internal Intel is set as the primary adaptor in the computer BIOS settings! The H67 chipset, the i7 CPU with Intel 3000 graphics, and the NVidia GeForce 550Ti all support DisplayPort. We are in luck, therefore, and can use Gdm to launch two xservers as described in the somewhat dated how-to by Bob Smith.

For our software setup, we have downgraded Gdm-3 to the Gdm 2.20 series, the last version of Gdm to support multiple X servers; our specific version is 2.20.11-4. It is not clear to me why the latest and greatest Gdm-3 has lost the ability to support multiple X servers. We are using the latest squeeze-backports xorg, version 1:7.6+8~bpo60+1, with xserver-xorg-core version 2:1.10.4-1~bpo60+1. We are using the latest proprietary NVidia driver, version 295.59; neither the open source nv driver nor the latest NVidia driver supported in standard Debian (270+) provides a sufficient CUDA level for Mathematica, so I do not use any Debian NVidia packages. We are using squeeze-backports kernel 3.2.20-1~bpo60+1; the 3.2-series kernel was required to support the Hauppauge USB tuner. We have the standard Mesa OpenGL packages installed, but the NVidia driver installer overwrites the open source Glx driver, and that is the critical problem: with just the standard Glx driver, the NVidia card has no hardware acceleration, and with the NVidia drivers installed, the Intel has no hardware acceleration.
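One housekeeping note: to keep a later upgrade from pulling Gdm back up to the 3.x series, the downgraded package can be held. A minimal sketch, assuming the package is simply named gdm as it is in squeeze:

echo "gdm hold" | dpkg --set-selections  # pin gdm at the 2.20 series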

The typical single-user way I use the NVidia driver is to download the latest proprietary version from NVidia, put it in the root user's home directory, and chmod u+x to make it executable. In a terminal/tty as root, I stop my graphical login with /etc/init.d/gdm stop. I then run ./NVIDIA-Linux-x86_64-295.59.run and answer the questions required to build a kernel module and install the drivers. I do not let the installer touch my xorg.conf. With that done, I start Gdm with /etc/init.d/gdm start, go back to the tty with Ctrl-Alt-F1, log out as root, and return to my graphical session with Ctrl-Alt-F7 (or -F8, depending). All works great until I get a new kernel. Then, on the next boot, Gdm fails to start X and tells me to check the error logs. I already know what is wrong: there is no NVidia kernel module for the new kernel yet. I use Ctrl-Alt-F1, log in as root, where the latest driver still sits, and repeat the above instructions. This method won't work when I want to use both the Intel and NVidia graphics cards with hardware acceleration, because the installer checks for conflicting software and drivers and removes or overwrites anything it finds offensive. Thus, the Intel card loses hardware acceleration.
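For reference, here is that whole routine condensed into the commands I run from the tty as root (driver version as above, sitting in the root home directory):

/etc/init.d/gdm stop
chmod u+x /root/NVIDIA-Linux-x86_64-295.59.run
/root/NVIDIA-Linux-x86_64-295.59.run
/etc/init.d/gdm start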

One really nice thing about relatively recent versions of the Linux xserver is that no configuration file is needed. I have struggled over the years to morph my xorg.conf as we bought newer monitors and newer computers. Upon realizing that we could change the BIOS to use the Intel graphics, I renamed my xorg.conf so that it wouldn't be seen, moved the monitor plugs between the graphics cards, and rebooted. The computer started in X right away. In older days, that would have been considered a miracle, and it would also have been sufficient. Now I want more from my computers, specifically multiseat with hardware acceleration on both seats, so we are going to need not one xorg.conf but two: xorg_intel.conf and xorg_nvidia.conf.

My first approach was to follow Smith's multiseat how-to, except that I broke xorg.conf into two separate files, one for each graphics card, each containing the Device section describing the respective card (in /etc/gdm/gdm.conf, I use the X option -config xorg_intel.conf to start the first xserver and -config xorg_nvidia.conf to start the second; more on this in a later post). In addition to Smith's standard way of configuring mouse and keyboard settings, I needed to set up a second keyboard description in xorg_nvidia.conf to send multimedia commands from the Logitech keyboard to my login on the ASUS monitor. I hooked up the new ASUS HDMI monitor to the Intel 3000 GPU and the old Acer monitor to the NVidia. My logic is that Brenda will be using the ASUS monitor because it is an IPS LCD with great color rendition; I will therefore primarily be using the Acer monitor, but I want access to the CUDA-capable NVidia GPU. With that, I had two different logins on two different monitors. When logged in, even the multimedia keyboard worked! This multiseat approach also results in much lower energy consumption: for some reason, having two monitors connected to the NVidia card keeps it in a higher power state, while driving one monitor from each card does not.
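For anyone following along, the Device sections look roughly like this; this is a sketch, not my exact files, and the BusID values below are hypothetical (the real ones can be read from the output of lspci). In xorg_intel.conf:

Section "Device"
    Identifier "IntelCard"
    Driver "intel"
    BusID "PCI:0:2:0"    # hypothetical; check lspci for your system
EndSection

and in xorg_nvidia.conf:

Section "Device"
    Identifier "NVidiaCard"
    Driver "nvidia"
    BusID "PCI:1:0:0"    # hypothetical; check lspci for your system
EndSection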

On the video side there were still problems: the Intel seat had no hardware acceleration at all. The Intel driver can use XvMC, which is better than nothing. Using that acceleration method requires a configuration file pointing to the XvMC library; see "man intel", which states: "User should provide absolute path to libIntelXvMC.so in XvMCConfig file." My /etc/X11/XvMCConfig file now contains the line:

/usr/lib/libIntelXvMC.so.1
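As root, the file can be created in one line (path exactly as above; adjust if your libIntelXvMC.so lives elsewhere):

echo /usr/lib/libIntelXvMC.so.1 > /etc/X11/XvMCConfig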

I really wanted Glx graphics, but there was still a conflict with the NVidia Glx libraries. I found a web post showing how to put the NVidia libraries into a custom directory. That got NVidia isolated, or so I thought. Then I reinstalled all sorts of xorg packages (glx-alternative-mesa, xserver-xorg-core, libgl1-mesa-glx, and libgl1-mesa-dri) to restore the files that the original NVidia installer had deleted. It took more reinstalling than I expected, but soon Glxgears (a great program for testing Glx functionality; Glxinfo will show more detail) was running on the Intel graphics card. I later discovered that Glx was no longer working on the NVidia side: I hadn't fully isolated the NVidia stuff. The libglx was not isolated, as could be seen from the errors in the X logs.

The web post method used CLI options to the NVidia installer, and I wondered if there were more. Running "./NVIDIA-Linux-x86_64-295.59.run --help" lists the common options, one of which is "-A". That option lists all sorts of "advanced" commands, some of which were used in the web post, including an option to specify a Glx directory. Adding the Glx option, I got the NVidia GL libraries isolated into their own directory, but X couldn't find it, so Glx graphics were still broken on NVidia.
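For reference, the two invocations; the -A listing is long, so paging it helps:

./NVIDIA-Linux-x86_64-295.59.run --help
./NVIDIA-Linux-x86_64-295.59.run -A | less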

A ModulePath statement in xorg_nvidia.conf fixed that, and then the log files showed that both the Intel and NVidia servers were using the correct libglx, but the Glx graphics still weren't working. The command 'ldd /usr/bin/glxgears' showed that, for the NVidia user, applications were linking against the wrong libGL, and that was why they were failing. The user environment also had to be changed. The final successful approach required eight steps.

1. As root, install the NVidia driver with the options below (assuming you have already downloaded the driver and made it executable). You may need to ensure the NVidia xserver is shut down so that the nvidia kernel module can be removed and reloaded; running from a tty (as described above) or from a remote ssh login is recommended:

./NVIDIA-Linux-x86_64-295.59.run --accept-license --no-backup --no-x-check --no-questions --ui=none --no-distro-scripts --utility-prefix=/nvidia --installer-prefix=/nvidia --opengl-prefix=/nvidia --opengl-libdir=glx

2. For some reason, even with the above options, NVidia still puts its Glx driver in the default xorg modules directory, so we next need to make a libglx.so link in /nvidia/glx for the NVidia xserver to use:

cd /nvidia/glx
ln -s /usr/lib/xorg/modules/extensions/libglx.so.295.59 libglx.so

3. As root, create the file /etc/ld.so.conf.d/nvidia.conf with two lines pointing to the new library directories:

/nvidia/lib
/nvidia/glx

4. Run this command as root to make the above paths active:

ldconfig

5. Use the ModulePath option to point the NVidia xserver at the correct Glx. Add this to /etc/X11/xorg_nvidia.conf, either inside the existing Files section or as a new section:

Section "Files"
    ModulePath "/nvidia/glx,/nvidia/lib,/usr/lib/xorg/modules"
EndSection

6. Set up the user's LD_LIBRARY_PATH environment by adding the following lines to ~/.profile, to ~/.bashrc, and, if you are using it, to your user ~/.xsession file before xscreensaver starts (alternatively, this can be set universally in the /etc/profile and /etc/bash.bashrc configuration files):

if [ `echo $DISPLAY | grep -c ":1"` -eq 1 ]; then
    export LD_LIBRARY_PATH=/nvidia/glx
fi

7. Reinstall the xorg packages providing Glx so that the damage from the earlier NVidia installs is corrected (specifically, to replace the annoying link at /usr/lib/xorg/modules/extensions/libglx.so, which points to NVidia's Glx, with a real xorg library):

apt-get install --reinstall glx-alternative-mesa xserver-xorg-core libgl1-mesa-glx libgl1-mesa-dri

8. Restart Gdm:

/etc/init.d/gdm restart

Now if you look into the xorg log for :0, you should see no references to NVidia, just to xorg libraries. In the :1 log, you should see that all the NVidia pieces loaded with no errors. You now have hardware accelerated graphics on both the Intel and NVidia graphics cards. If you upgrade to a new kernel, you will need to repeat steps 1, 2, 7, and 8 to make everything work again.
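A quick way to check from a terminal, assuming the usual log locations for displays :0 and :1:

grep -i nvidia /var/log/Xorg.0.log
grep '(EE)' /var/log/Xorg.1.log
ldd /usr/bin/glxgears | grep libGL

The first grep should print nothing, the second should show no load errors, and ldd, run from the NVidia seat, should resolve libGL under /nvidia/glx. Since the kernel-upgrade dance repeats steps 1, 2, 7, and 8, it is worth scripting. A minimal sketch; the script name and the assumption that the installer sits in /root are mine, so adjust to taste:

#!/bin/sh
# nvidia-redo.sh: repeat steps 1, 2, 7, and 8 after a kernel upgrade
# Assumes the driver installer sits in /root (hypothetical location)
set -e
/etc/init.d/gdm stop
cd /root
./NVIDIA-Linux-x86_64-295.59.run --accept-license --no-backup --no-x-check \
    --no-questions --ui=none --no-distro-scripts --utility-prefix=/nvidia \
    --installer-prefix=/nvidia --opengl-prefix=/nvidia --opengl-libdir=glx
ln -sf /usr/lib/xorg/modules/extensions/libglx.so.295.59 /nvidia/glx/libglx.so
apt-get install --reinstall -y glx-alternative-mesa xserver-xorg-core \
    libgl1-mesa-glx libgl1-mesa-dri
/etc/init.d/gdm restart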

At this point in my multiseat saga, I was really excited. This was a major breakthrough for me. I didn't know it yet, but there were some other serious potholes ahead! Stay tuned to see how it all works out…
