Bugzilla – Bug 118372
VUL-0: wrong permissions on /dev/nvidia*
Last modified: 2009-10-13 23:09:47 UTC
After update 9.3 -> 10.0 the permissions for the nvidia devs look like this:

theano:~ # l /dev/nvidia*
crw-rw-rw- 1 root root 195,   0 2005-09-22 10:05 /dev/nvidia0
crw-rw-rw- 1 root root 195, 255 2005-09-22 10:05 /dev/nvidiactl

I'm quite sure this is not intended. Let me know if you need further information.
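Whether any device nodes are world-writable can be checked without root. A minimal sketch; the helper name check_world_writable is made up for illustration and is not part of the report:

```shell
#!/bin/sh
# Hypothetical helper (not from the report): list device nodes whose
# mode grants write access to "other" users -- the symptom shown above.
# Read-only, so it is safe to run as an ordinary user.
check_world_writable() {
    for f in "$@"; do
        [ -e "$f" ] || continue
        mode=$(stat -c '%a' "$f")
        # the last octal digit carries the "other" bits;
        # 2, 3, 6 and 7 all include the write bit
        case $mode in
            *[2367]) echo "$f is world-writable (mode $mode)" ;;
        esac
    done
}

check_world_writable /dev/nvidia*
```

On the system described above this would flag both /dev/nvidia0 and /dev/nvidiactl (mode 666).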
What is used: static devs or dynamic udev/devfs
How can I find this out?
Default is udev nowadays.
Correct.
Does this issue still exist?
Well, since nothing has changed till now, yes, the issue persists and exists. If you need more information from my side please tell me what exactly you want and I will try to provide it.
What does:

find /sys -name "nvidia0"

print? And what does:

udevtest /class/<whatever>/nvidia0 <whatever>

print? Here is an example for a joystick:

find /sys -name "js*"
/sys/class/input/js0

udevtest /class/input/js0 input
udev_rules_get_name: rule applied, 'js0' becomes 'input/js0'
create_node: creating device node '/dev/input/js0', mode = '0644', uid = '0'
theano:~ # find /sys -name "nvidia0"
theano:~ # (no output)

theano:~ # find /sys -name "nvidia*"
/sys/module/nvidia
/sys/bus/pci/drivers/nvidia

theano:~ # udevtest /sys/module/nvidia module
main: looking at device '/module/nvidia' from subsystem 'module'
main: opened class_dev->name='nvidia'
main: only char and block devices with a dev-file are supported by this test program

theano:~ # udevtest /sys/bus/pci/drivers/nvidia bus
main: looking at device '/bus/pci/drivers/nvidia' from subsystem 'bus'
main: opened class_dev->name='nvidia'
main: only char and block devices with a dev-file are supported by this test program
Ok, these nodes don't come from udev. Please find out which init-script creates them and try to add the right permission settings to the init script.
Sorry, but I'm not the one to fix this. I'm just the reporter of this update problem. Please set needinfo to someone in R&D. Thanks.
No idea who creates the nodes; it's not udev. I expect an init-script or similar. Reassigning to default.
Ahmmm ... should I use a crystal ball? There is NO such script.
Don't know if it's a script or not. But for sure something happens to the nodes during boot. The last boot was on exactly the date/time the nodes show with ls -l.
You're using the nvidia driver, which creates these nodes. Where's the problem?
The 666 permissions are the problem, and I consider this a high security risk. You should forward this information to the people developing the driver to let them know that this is nonsense. IMO it's our responsibility to take care of this, since we offer the possibility to install the driver automatically via YOU without any warning that the system is then insecure because of the carelessness of some (NVIDIA) developers. However, you surely will know the right thing to do here ;-)
Andy, is it possible to configure ownership and permissions of /dev/nvidia* devices for the driver? BTW, specifying ownership with /etc/udev/static_devices.txt is not possible either. Hope that this gets fixed in the future.
From the FAQ of the NVIDIA driver README (ftp://download.nvidia.com/XFree86/Linux-x86/1.0-7676/README.txt):

Q. How and when are the NVIDIA device files created?

A. Depending on the target system's configuration, the NVIDIA device files used to be created in one of three different ways:
- at installation time, using mknod
- at module load time, via devfs (Linux device file system)
- at module load time, via hotplug/udev

With current NVIDIA driver releases, device files are created or modified by the X driver when the X server is started. By default, the NVIDIA driver will attempt to create device files with the following attributes:

  UID:  0    - 'root'
  GID:  0    - 'root'
  Mode: 0666 - 'rw-rw-rw-'

Existing device files are changed if their attributes don't match these defaults. If you wish for the NVIDIA driver to create the device files with different attributes, you can specify them with the "NVreg_DeviceFileUID" (user), "NVreg_DeviceFileGID" (group) and "NVreg_DeviceFileMode" NVIDIA Linux kernel module parameters. For example, the NVIDIA driver can be instructed to create device files with UID=0 (root), GID=44 (video) and Mode=0660 by passing the following module parameters to the NVIDIA Linux kernel module:

  NVreg_DeviceFileUID=0
  NVreg_DeviceFileGID=44
  NVreg_DeviceFileMode=0660

The "NVreg_ModifyDeviceFiles" NVIDIA kernel module parameter will disable dynamic device file management, if set to 0.

I hope that helps,
- Andy
Oops. Didn't know that this is configurable and even documented. :-) We'll need:

  NVreg_DeviceFileUID=0
  NVreg_DeviceFileGID=33
  NVreg_DeviceFileMode=0660

So an entry

  options nvidia \
    NVreg_DeviceFileUID=0 NVreg_DeviceFileGID=33 NVreg_DeviceFileMode=0660

in /etc/modprobe.conf should help.
Something like /etc/modprobe.d/nvidia should be used instead. I'll add this to package "tiny-nvidia-installer" for 10.1.
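A sketch of what such a file could contain, using the values from the comments above (GID 33 being the "video" group on this distribution; on other systems the group id may differ):

```
# /etc/modprobe.d/nvidia (sketch; values taken from this thread)
# Make the X driver create /dev/nvidia* as root:video with mode 0660
# instead of world-writable 0666.
options nvidia NVreg_DeviceFileUID=0 NVreg_DeviceFileGID=33 NVreg_DeviceFileMode=0660
```

After adding the file, the module has to be unloaded, any stale /dev/nvidia* nodes removed, and the module reloaded so the driver recreates the nodes with the new attributes.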
fixed for 10.1
updates released
On my SL 9.3 x86 I got the same problem:

spiral:~ # l /dev/nvidia*
crw-rw-rw- 1 root root  195,   0 Jul 28 16:52 /dev/nvidia0
crw-rw---- 1 root video 195,   0 Mar 19  2005 /dev/nvidia00
crw-rw---- 1 root video 195,   1 Mar 19  2005 /dev/nvidia01
crw-rw---- 1 root video 195,   2 Mar 19  2005 /dev/nvidia02
crw-rw---- 1 root video 195,   3 Mar 19  2005 /dev/nvidia03
crw-rw-rw- 1 root root  195,   1 Jul 28 16:52 /dev/nvidia1
crw-rw-rw- 1 root root  195,   2 Jul 28 16:52 /dev/nvidia2
crw-rw-rw- 1 root root  195,   3 Jul 28 16:52 /dev/nvidia3
crw-rw-rw- 1 root root  195,   4 Jul 28 16:52 /dev/nvidia4
crw-rw-rw- 1 root root  195,   5 Jul 28 16:52 /dev/nvidia5
crw-rw-rw- 1 root root  195,   6 Jul 28 16:52 /dev/nvidia6
crw-rw-rw- 1 root root  195,   7 Jul 28 16:52 /dev/nvidia7
crw-rw-rw- 1 root root  195, 255 Jul 28 16:52 /dev/nvidiactl
Which nvidia driver is this? I don't think this can happen with the driver version we provide via YOU (1.0-7167).
NVRM: loading NVIDIA Linux x86 NVIDIA Kernel Module 1.0-7167 Fri Feb 25 09:08:22 PST 2005
it is the same
Strange, so does it help to add a file to /etc/modprobe.d with the contents of comment #18? Make sure to unload the nvidia kernel module first and remove the devices with root.root ownership.
It's my workstation, I'll test it later. On another 9.3 workstation it's the same.
I put "options nvidia NVreg_DeviceFileUID=0 NVreg_DeviceFileGID=33 NVreg_DeviceFileMode=0660" in /etc/modprobe.conf, removed the device files and rebooted. Result: unable to open display :0
I fiddled around a bit; the screen stays blank with 0660 root.video and the modprobe.conf entry.
The 1.0-7167 driver does not know these options yet, since at that time dynamic udev was not widely used. Probably the kernel module can no longer be loaded with these options applied. I have no idea who created the "root.root 660" nvidia devices; this nvidia driver version is likely not the reason. I therefore close this bug report again. In case you want to reopen it, it would help a *lot* to provide access to a freshly installed system with the nvidia driver YOU patch applied. It's always time-consuming for me to reinstall SUSE 9.3 on one of my test machines.
I have a SuSE 10 that cannot load the module for legacy cards anymore (7174). I tried to add the devices to static_devices.txt as mentioned in the SuSE HOWTO for nvidia; it did not help. Every time I try to insert the module I get:

nvidia: module not supported by Novell, setting U taint flag.
nvidia: Unknown parameter `NVreg_DeviceFileUID'

I have to use this driver and cannot use the most recent one, because I got a GeForce2. What can I do to get rid of that option that seems to prevent all users of legacy cards from using the nvidia binary driver? I have all YOU patches installed, but not the NVIDIA one, as that works only for > GeForce2.
I've updated my HOWTO now. Please consult it again.
I think that the current "solution" is neither user-friendly nor satisfying.

1. Not everybody can read English and thus consult the HOWTO.
2. Even if they can read English, most people do not know what "legacy cards" are and will not scroll down to the bottom of the document to discover by chance that their card is one.

There should be a way to not force people to remove some file from /etc, and the HOWTO should at least put GeForce2 and the other types in brackets in the TOC, so that people might spot it by chance. Furthermore, people might have to run modules-update.dep.

It might be useless to mention, but why not make two scripts available via YOU instead of one: one for legacy cards and one for recent ones? Users could easily distinguish them by their description. In 9.3 I have tons of useless YOU entries with OO translations and other things. If there is room for those, a second script for legacy cards should have room too and would be useful as a side effect. Users would not have to care about anything like this bug, or about putting some lines into static_devices.txt, as it would be handled by the script.
Thanks. I've added "(GeForce2 and older)" to the TOC. It's not an option for us to support several NVIDIA driver versions.
SuSE does not support any nvidia driver, as it cannot solve any bugs within the driver anyway. All it can do is forward bug reports, and all it does is supply a script to download and install the module. As I stated before, multiple versions of OO were supported for 9.3 and the translation packages spammed YOU. Sparing users all this hassle would just mean another script and one more entry.

But anyway, I put the lines into static_devices.txt and it still did not work. So is it me, or the HOWTO, that got it wrong? Does the HOWTO work for you? I booted and X could not start. However, after running makedevices.sh from the nvidia driver package it did work. If I am not the only one for whom the static-devices approach does not solve the problem, it might be a good idea to add the following hints to the HOWTO:

1. Extract the nvidia package with --extract-only.
2. Copy makedevices.sh from the created NVIDIA...-dir/usr/src/ to /usr/local.
3. Add a line to your boot.local: /usr/local/makedevices.sh

However IMHO, all this extracting, copying and editing is not really an option for normal users and a very high price to pay in terms of user-friendliness just to keep the one-driver-version policy. I do not understand why nvidia has dropped legacy cards from its driver, but I also do not understand why SuSE forces users to do all this just because one does not want to add another script.
I have to correct myself. Step 2 should read: Copy makedevices.sh from the created NVIDIA...-dir/usr/src/nv to /usr/local. The "nv" was missing after /usr/src/.
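The corrected steps could be scripted roughly as follows. install_makedevices and its parameters are hypothetical stand-ins (not from the HOWTO), kept as arguments so the logic can be exercised outside a real system; on a real box src would be the extracted NVIDIA directory, dest would be /usr/local, and bootlocal would be /etc/init.d/boot.local:

```shell
#!/bin/sh
# Hypothetical sketch of the workaround steps above (names made up).
# Copies makedevices.sh out of the extracted driver tree and registers
# it in boot.local, avoiding duplicate entries on repeated runs.
install_makedevices() {
    src=$1        # extracted NVIDIA package directory
    dest=$2       # e.g. /usr/local
    bootlocal=$3  # e.g. /etc/init.d/boot.local
    cp "$src/usr/src/nv/makedevices.sh" "$dest/" || return 1
    # append the boot.local line only if it is not already there
    grep -qx "$dest/makedevices.sh" "$bootlocal" 2>/dev/null ||
        echo "$dest/makedevices.sh" >> "$bootlocal"
}
```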
1) It's still not correct that providing a second nvidia download script would be enough.
   a) We would need to add a second prebuilt nvidia kernel interface to our kernel package.
   b) The users would not choose the correct download script, no matter how you describe the patch. Users usually even install the nvidia patch if they don't own an nvidia board at all!

2) There was an error in the HOWTO about the device nodes, which has been fixed. I'm sorry. This information should definitely help:

-nvidia  c 195 0 666
-nvidia  c 195 1 666
-nvidia  c 195 2 666
-nvidia  c 195 3 666
-nvidia  c 195 4 666
-nvidia  c 195 5 666
-nvidia  c 195 6 666
-nvidia  c 195 7 666
+nvidia0 c 195 0 666
+nvidia1 c 195 1 666
+nvidia2 c 195 2 666
+nvidia3 c 195 3 666
+nvidia4 c 195 4 666
+nvidia5 c 195 5 666
+nvidia6 c 195 6 666
+nvidia7 c 195 7 666
 nvidiactl c 195 255 666
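For comparison, the corrected entries correspond to the following mknod calls (major 195, minors 0-7 plus 255 for nvidiactl). The helper gen_nvidia_mknod is made up for illustration; it prints the commands rather than executing them so the sketch can be run without root:

```shell
#!/bin/sh
# Hypothetical sketch: print the mknod commands equivalent to the
# corrected static_devices.txt entries above.
gen_nvidia_mknod() {
    for minor in 0 1 2 3 4 5 6 7; do
        echo "mknod -m 666 /dev/nvidia${minor} c 195 ${minor}"
    done
    echo "mknod -m 666 /dev/nvidiactl c 195 255"
}

gen_nvidia_mknod
```

Mode 666 matches the HOWTO entries; per the earlier discussion in this report, 0660 with group "video" would be the more secure choice.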
It still does not work without the help of makedevices.sh. I'll attach my static_devices.txt for you to compare the lines.
Created attachment 57286 [details] static_devices.txt from /etc/udev. Entries from the nvidia HOWTO are at the bottom of the file.
The entries look correct. The devices should exist after a reboot.
It does not work, because nvidiactl is not created; the others are present in /dev after booting.
Created attachment 57542 [details] right after booting
Created attachment 57543 [details] after running makedevices.sh
Is the device created on your system? I still have to use the makedevices.sh.
Closest match: CVE-2007-3532
CVE-2007-3532: CVSS v2 Base Score: 7.2 (AV:L/AC:L/Au:N/C:C/I:C/A:C)