Bugzilla – Bug 231171
xorg.conf is set up with too restrictive rights on DRI access
Last modified: 2008-09-25 05:49:04 UTC
In xorg.conf:

  Section "DRI"
    Group "video"
    Mode 0660
  EndSection

This is ok with local users: they are included in the group "video" automatically. When you have users authenticated via an external mechanism like LDAP, the users are not made part of the video group automatically. This leads to problems for those users when they then need to use 3D graphics. My suggestion is that the limitation should either be
- removed,
- loosened up (0666), or
- LDAP and SAMBA authentication should automatically generate a new group and update the xorg.conf file with this group instead (video_ldap?).
This problem leads to frustration and troubleshooting in networks where the user is not authenticated locally. Regards Birger
This is a security issue.
I remember that we tried to add /dev/dri to /etc/logindevperm in the past, but this was a bad idea, since it means that all processes that access this device are killed when the user's session is terminated. Unfortunately this includes the Xserver, which resulted in undefined behaviour after the first user logged out, mostly machine freezes. :-( BTW, /etc/logindevperm no longer exists. I don't know in which way it has been replaced. The same problem applies to /dev/nvidia*.
perhaps a resmgr ACL style solution might come in handy.
Maybe, but I don't know anything about it. So any help/suggestion would be appreciated. :-)
I guess the login manager would have to set the device rights / ACLs. At server startup the allowed user is only known if the user started the Xserver himself (no login manager). Ok, how should that be done? Using resmgr? Natively? Device owner or ACLs? Where should the needed devices be configured?
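A minimal sketch of the "device owner" variant mentioned above, assuming the login manager simply hands the device node to the logged-in user at login and returns it to root at logout. The device path and function names are illustrative, not taken from any existing resmgr code:

```python
import os
import pwd
import stat

def grant_device(path, username):
    """Hand a device node (e.g. /dev/dri/card0) to the logged-in user.
    Sketch of the 'device owner' approach; an ACL-based approach would
    instead add the user without changing the owner."""
    uid = pwd.getpwnam(username).pw_uid
    os.chown(path, uid, -1)  # -1 leaves the group (e.g. 'video') unchanged
    # 0660: owner and group get read/write, others get nothing
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP | stat.S_IWGRP)

def revoke_device(path):
    """Give the device back to root when the session ends (needs root)."""
    os.chown(path, 0, -1)
```

The drawback compared to ACLs is that only one user at a time can own the device, which matters for fast user switching.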
ludwig can help here. he will be back thursday.
Hal needs to know about those devices first if you want to trigger permission changes upon user login. For that the devices need to properly appear in /sys, I don't know if that is the case already. Reassigning to hal maintainer.
I hope it's ok for you to remain in Cc, Ludwig.
Please provide the output of lshal and also the output of hald (in /var/log/messages) started with: --daemon=yes --verbose=yes --use-syslog @sndirsch: Is there a corresponding device in sysfs for /dev/dri and /dev/nvidia* ?
> @sndirsch: Is there a corresponding device in sysfs for /dev/dri
> and /dev/nvidia* ?
Probably, but I'm not sure. For nvidia you can verify this on machine "shannon". Matthias, can you help with DRI?
I cannot find any for nvidia. And I don't really have a clue about how dri drivers behave here right now.
Any news on this one, Danny? All the required information for NVIDIA can be found on machine "shannon", for DRI on any notebook with Intel graphics chipset when DRI is enabled.
Egbert, JFYI. Since Matthias or I am in Cc of this bug report, or the reporter himself, it might be interesting for you as well.
Any news for this issue?
Ludwig, we just discussed about this issue. :-)
Ok, any update here? Beta1 is coming closer, and I guess 10.3 is affected, too?
Sure, 10.3 is affected, too.
Will anything be done about this problem for 10.3? It seems a bit forgotten? How is this handled in the SLES line?
> Will anything be done about this problem for 10.3?
No. :-(
> It seems a bit forgotten?
Yes. :-(
> How is this handled in the SLES line?
Same problem. :-(
This will be much more visible when users start using OpenGL accelerated desktops.
Another victim.
Just to clarify: we are speaking about the drm subsystem (e.g. on Intel: /dev/dri/card0)? Could someone please attach the part of /var/log/messages written if HAL gets started with --daemon=yes --verbose=yes --use-syslog ?
> Just to clarify: we are speaking about the drm subsystem (e.g. on intel:
> /dev/dri/card0)?
Yes.
Created attachment 183703 [details] messages Results in /var/log/messages of /usr/sbin/hald --daemon=yes --verbose=yes --use-syslog Feel free to investigate on shannon (intel onboard gfx machine).
Created attachment 183705 [details] messages
Thx. I already tested an Intel machine, and there it should be easy to add the drm stuff. I forgot to say in comment #22 that I need this from an NVIDIA system, sorry.
Created attachment 183809 [details] nvidia.msg output on nvidia driver system. Not sure if this output really helps.
Whatever I try to attach to this bug report, Bugzilla tells me "The attachment you are attempting to access has been removed." when I try to look at it afterwards. :-(
hm, all attachments are now removed. Maybe a Bugzilla bug. @Stefan: IMO you should report it to the Bugzilla ppl. Btw., could you send me the log via email? THX
done.
Danny, could you check this on f199. It's a NVIDIA driver based machine. It's a test machine. Do whatever you want with it. :-)
Okay, I now have a log and took a look at sysfs. As long as NVIDIA (I assume the same for ATI) doesn't add any dri/drm device to sysfs, as e.g. Intel does, I can't fix it in HAL for these machines. It's easy to fix for Intel, but it needs approval from upstream before I add it. Is there any way to get NVIDIA to fix their kernel driver to add a /sys/class/drm/* entry/device and to emit the related uevent?
NVIDIA does not use DRI at all; fglrx does. I'm not sure which information you need. Permissions of /dev/nvidia* need to be set appropriately. These are the nvidia files I could find in /dev, /proc and /sys:

/dev/nvidia0
/dev/nvidiactl
/proc/irq/16/nvidia
/proc/driver/nvidia
/proc/driver/nvidia/registry
/proc/driver/nvidia/version
/proc/driver/nvidia/warnings
/proc/driver/nvidia/warnings/README
/proc/driver/nvidia/cards
/proc/driver/nvidia/cards/0
/sys/module/nvidia
/sys/module/nvidia/drivers
/sys/module/nvidia/drivers/pci:nvidia
/sys/module/nvidia/sections
/sys/module/nvidia/sections/.strtab
/sys/module/nvidia/sections/.symtab
/sys/module/nvidia/sections/.bss
/sys/module/nvidia/sections/.gnu.linkonce.this_module
/sys/module/nvidia/sections/.data
/sys/module/nvidia/sections/__versions
/sys/module/nvidia/sections/__ksymtab_strings
/sys/module/nvidia/sections/__kcrctab
/sys/module/nvidia/sections/__ksymtab
/sys/module/nvidia/sections/__param
/sys/module/nvidia/sections/.altinstructions
/sys/module/nvidia/sections/.smp_locks
/sys/module/nvidia/sections/.rodata.str1.1
/sys/module/nvidia/sections/.parainstructions
/sys/module/nvidia/sections/.rodata
/sys/module/nvidia/sections/.altinstr_replacement
/sys/module/nvidia/sections/.init.text
/sys/module/nvidia/sections/.exit.text
/sys/module/nvidia/sections/.text
/sys/module/nvidia/refcnt
/sys/module/nvidia/initstate
/sys/module/nvidia/holders
/sys/module/i2c_core/holders/nvidia
/sys/module/agpgart/holders/nvidia
/sys/bus/pci/drivers/nvidia
/sys/bus/pci/drivers/nvidia/new_id
/sys/bus/pci/drivers/nvidia/bind
/sys/bus/pci/drivers/nvidia/unbind
/sys/bus/pci/drivers/nvidia/uevent
/sys/bus/pci/drivers/nvidia/module
/sys/bus/pci/drivers/nvidia/0000:01:00.0
(In reply to comment #31 from Stefan Dirsch)
> Danny, could you check this on f199. It's a NVIDIA driver based machine.
> It's a test machine. Do whatever you want with it. :-)
Well, currently I need it for my daily work, since my other machine needs a BIOS/mainboard update before it can be rebooted again. :-(
The point is that the driver sends absolutely no uevents when /dev/nvidia* gets created. The module and driver events are useless (there is no way to assign them to the /dev device); HAL needs an event for the device itself. NVIDIA should fix their driver to send a uevent on device creation.
Andy (Ritger) @ NVIDIA, would it be possible to send such an uevent during device creation?
I'm happy to investigate sending a uevent if/when the NVIDIA driver creates a device file. FWIW, here is the flow from the NVIDIA X driver perspective:

  ScreenInit()
    ...
    |--> xf86LoadKernelModule("nvidia")
    ...
    |--> if /dev/nvidia* doesn't exist, then mknod, chmod, and chown

The NVIDIA driver README has a little more detail:

-----
Q. How and when are the NVIDIA device files created?

A. Depending on the target system's configuration, the NVIDIA device files used to be created in one of three different ways:

   o at installation time, using mknod
   o at module load time, via devfs (Linux device file system)
   o at module load time, via hotplug/udev

With current NVIDIA driver releases, device files are created or modified by the X driver when the X server is started. By default, the NVIDIA driver will attempt to create device files with the following attributes:

   UID:  0    - 'root'
   GID:  0    - 'root'
   Mode: 0666 - 'rw-rw-rw-'

Existing device files are changed if their attributes don't match these defaults. If you want the NVIDIA driver to create the device files with different attributes, you can specify them with the "NVreg_DeviceFileUID" (user), "NVreg_DeviceFileGID" (group) and "NVreg_DeviceFileMode" NVIDIA Linux kernel module parameters. For example, the NVIDIA driver can be instructed to create device files with UID=0 (root), GID=44 (video) and Mode=0660 by passing the following module parameters to the NVIDIA Linux kernel module:

   NVreg_DeviceFileUID=0
   NVreg_DeviceFileGID=44
   NVreg_DeviceFileMode=0660

The "NVreg_ModifyDeviceFiles" NVIDIA kernel module parameter will disable dynamic device file management, if set to 0.
-----

Note that the NVIDIA driver used to create the device files from the NVIDIA kernel module, but there were sequencing problems with hotplug/udev that made that unworkable. I have to admit, though, that I'm not very familiar with the current state of uevent/HAL.
Could you please spell out for me what exactly you'd like the NVIDIA driver to do, what calls the NVIDIA driver needs to make to do that, and how the NVIDIA driver should detect on what systems/distributions it should do this? Thanks, - Andy
Thanks, Andy! We're currently using

  NVreg_DeviceFileUID=0
  NVreg_DeviceFileGID=33   (33=video on our systems)
  NVreg_DeviceFileMode=0660

for security reasons. The plan is to set the appropriate permissions for the user who is currently logged in. I'm clueless about HAL, but I think Danny can help you here.
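For reference, module parameters like these are usually made persistent via a modprobe options file, so they take effect on every module load. The file name below is a common convention, not mandated:

```
# /etc/modprobe.d/nvidia.conf -- apply the restrictive device file attributes
options nvidia NVreg_DeviceFileUID=0 NVreg_DeviceFileGID=33 NVreg_DeviceFileMode=0660
```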
Do I understand it correctly: the NVIDIA X module/driver creates /dev/nvidia*, not the kernel module? Why is this? Why doesn't the kernel driver create the device (and a useful sysfs entry for the device, not only for the driver and module), as AFAIK e.g. Intel or ATI do? The ATI driver, for example, sends an event when the device gets set up in the kernel (note: this is not udev output, but from HAL, which gets the info from udev):

  action=add
  subsys=drm
  sysfs_path=/sys/class/drm/card0
  dev=/dev/card0
  parent_dev=0x080abd90
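The sysfs entry Danny is asking for is what makes user-space discovery possible at all: each drm class device appears as /sys/class/drm/card*, with a 'dev' attribute holding the major:minor of the corresponding /dev node. A small sketch of that discovery logic (the sysfs root is a parameter only so the logic can be exercised outside a real /sys; this is an approximation of what udev/HAL do, not their actual code):

```python
import os

def find_drm_devices(sysfs_root="/sys"):
    """List drm class devices the way udev/HAL discover them:
    every <sysfs_root>/class/drm/card* entry with a 'dev' file
    carries the major:minor of the matching /dev node."""
    drm_dir = os.path.join(sysfs_root, "class", "drm")
    devices = []
    if not os.path.isdir(drm_dir):
        # e.g. nvidia/fglrx: no drm class entries exist at all,
        # which is exactly why HAL cannot see those devices
        return devices
    for name in sorted(os.listdir(drm_dir)):
        dev_file = os.path.join(drm_dir, name, "dev")
        if os.path.isfile(dev_file):
            with open(dev_file) as f:
                major, minor = f.read().strip().split(":")
            devices.append((name, int(major), int(minor)))
    return devices
```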
Hi Danny,

Yes, you understand the current state of the NVIDIA driver correctly. There is a lot of history here. Stefan may have additional records on some of this, as he and Greg K-H and I exchanged some correspondence on this back in 2005 (I've pasted some of that below). I've not tracked sysfs/udev development recently, but back in 2005 device file management in the NVIDIA driver was quite difficult:

- large maintenance burden for NVIDIA, trying to interface properly with the varying device file creation schemes across all the various distributions
- devfs/udev interfaces in the Linux kernel changed frequently and often required driver updates
- there were plans to make the interfaces for hotplug/udev sysfs support exported GPL-only
- with various kernel versions, device file creation through udev had substantial latency

It is entirely possible that the device file creation interfaces exposed in recent kernels have matured, stabilized, and become standardized across Linux distributions since we last looked at this seriously. However, NVIDIA's current solution of creating the device files from user space has been quite robust over the past few years and required minimal maintenance work for us. We can definitely investigate making use of newer interfaces if there are strong benefits, but I would need help to understand what the interfaces are, and to understand when the new interfaces should be used.

I've included some past exchanges between Stefan, Greg, and me. The end result strongly encouraged NVIDIA to create the device files from user space.

Thanks,
- Andy

----
Stefan Dirsch:

Andy, I wonder whether the nvidia video driver is udev-aware. Could you please comment on this? Greg thinks so: "In looking at the latest nvidia source tarball I could find, they already have code that calls class_simple_* which creates the entries needed in sysfs for udev to create the device nodes. So I don't think you have to do anything special, if you already have their latest release."
Additional Comment #1 From Andy Ritger 2005-03-04 16:11 MST [reply]

Yes, the NVIDIA driver is udev-aware. I should point out, though, some problems we encountered with udev:

- it appears that device files do not get created until sometime after the kernel module is loaded
- the kernel module is supposed to be autoloaded when the device files are opened

From our experience in other distros, this led to bootstrapping problems due to this circular dependency. The 716x driver you have breaks this circular dependency by explicitly loading the kernel module from the X driver during X initialization. Our makedevices.sh script that runs during installation also creates device files in /etc/udev/devices/, which I believe get propagated to /dev/ by the OS, but I am not sure how configuration or distribution specific that is. All in all, NVIDIA's experience with udev has not been impressive. Hopefully that is just due to bugs in the initial udev deployments we've seen. Thanks.

Additional Comment #2 From Greg Kroah-Hartman 2005-03-04 16:43 MST [reply]

Feedback from nvidia to the udev developers about the issues they have with it would be greatly appreciated. They can be contacted on the linux-hotplug-devel mailing list, or by direct email to the address in the README file in the udev releases. Otherwise, such issues will remain unknown, and hence, unfixed by the udev developers :)

Additional Comment #3 From Andy Ritger 2005-04-06 08:18 MST [reply]

Setting resolution to WorksForMe; not sure if that is the most appropriate setting.

Additional Comment #4 From Stefan Dirsch 2005-04-06 08:21 MST [reply]

Perfectly ok for me. Up to now I didn't hear of any udev related problems with the nvidia driver. We can reopen it again in case there are any. I would like to thank you for commenting on this.

Additional Comment #5 From Andy Ritger 2005-04-06 12:33 MST [reply]

Based on recent developments, though, I wonder why this question was asked.
If the class_simple interface is not going to be available to the NVIDIA driver in future kernels, what good does it do for the NVIDIA kernel module to be udev-aware?

Additional Comment #6 From Stefan Dirsch 2005-04-06 13:07 MST [reply]

I asked this question since we somewhat switched to udev with SUSE 9.3. I can't comment on the technical details (class_simple interface) since I'm not a kernel developer. Hopefully Greg can ...

Additional Comment #7 From Greg Kroah-Hartman 2005-04-06 13:27 MST [reply]

The in-kernel APIs have changed again, yes. But that code hasn't made it into mainline, and will not until some unspecified time after 2.6.12 is released. If nvidia's license isn't compatible with them, there's nothing I can do about it, sorry.

Additional Comment #8 From Andy Ritger 2005-04-06 14:28 MST [reply]

Thanks, Greg. All my previous experience with SuSE/Novell has been so positive, I'm disappointed to see that attitude taken. Stefan, to (re)answer your original question: NVIDIA is aware of udev, in that we are aware of its existence and attempt to use it when possible. However, apparently we will not be permitted to use udev in the future.

Additional Comment #9 From Greg Kroah-Hartman 2005-04-06 14:47 MST [reply]

I'm sorry, but this is not a SuSE/Novell attitude at all. It is the "linux kernel developer" hat that I must wear. I have held people off from marking the class_simple() code as GPL-only for much longer than I ever expected to be able to; it was only a matter of time. Sorry for any trouble this might have caused, and good luck with maintaining a kernel driver outside of the kernel tree, I know it is quite difficult and time-consuming. Legally, I have no idea how you are getting away with doing what you have done so far, but hey, that's why I'm not a lawyer :)

Additional Comment #10 From Andy Ritger 2005-04-06 17:14 MST [reply]

Understood, thanks.
Stefan: I guess just be aware that the NVIDIA driver may not be able to utilize udev in the future. We're investigating other solutions.

Additional Comment #11 From Greg Kroah-Hartman 2005-04-07 03:21 MST [reply]

You might want to look at how vmware handles a udev system, without using sysfs or udev at all. They create their nodes statically in their startup script, which is a valid way to handle this.

Additional Comment #12 From Stefan Dirsch 2005-04-07 03:31 MST [reply]

Thanks for the hint, Greg. I think this is something I should take care of.

Additional Comment #13 From Stefan Dirsch 2005-04-07 09:01 MST [reply]

Andy, this is now being discussed internally here at SuSE. We'll let you know about the results ASAP.

------
Greg K-H:

class_simple is now gone in the latest -mm releases. I've reworked the core class code to work much like class_simple used to work, and fixed up a number of issues along the way. The other class APIs will be removed so that the new class functions are all that is left. This will take a while, but is a good thing as the new functions are much easier to use and understand, and, more importantly, almost impossible to use incorrectly. This code will not go into mainline until after 2.6.12 is released. The new functions are marked EXPORT_SYMBOL_GPL(), so code like nvidia and vmware can't use them. The entire driver core and sysfs was marked this way a while ago (September 2004 to be exact), but the class_simple code was not marked as such because people complained that there was no warning (and they didn't like their closed source modules breaking.) They have had such warning now for a while, and vmware now no longer uses these functions at all. As the class_simple functions were tiny wrappers around the core class code, people argued a lot with me that they were circumventing the GPL markings of the driver code, and I agreed.
nvidia can modify their startup scripts to just statically create the device nodes, just like vmware does, to have them work properly on a udev aware distro (in fact, that's what they used to do a while ago...) That will solve the issue for them of not being able to export any device major:minor number information in sysfs for udev to pick up on. I hope this helps clear up any confusion, and provides a solution for what nvidia can do to work with future releases from us.
@Stefan: I added the drm subsystem to HAL. This should fix it for Intel and ATI. Could you test HAL for Factory/STABLE from: http://download.opensuse.org/repositories/home:/dkukawka:/hal-beta/openSUSE_Factory/ There should now be a drm device (info.category/info.capability=drm). @Ludwig: You should now be able to extend hal-resmgr.
Thanks, Danny. What exactly should I test?
(In reply to comment #43 from Stefan Dirsch)
> Thanks, Danny. What exactly should I test?
Check whether you get a device in lshal with property 'info.capability' and value 'drm' on Intel and ATI machines, and whether the value of linux.device_file is what you need. About the nvidia case: we need to discuss this in person next week.
Looks good on Intel after installing your packages from http://download.opensuse.org/repositories/home:/dkukawka:/hal-beta/openSUSE_Factory/ and running "rchal restart".

# lshal
[...]
udi = '/org/freedesktop/Hal/devices/pci_8086_2982_drm_i915_card0'
  drm.dri_library = 'i915'  (string)
  drm.version = 'drm 1.1.0 20060810'  (string)
  info.capabilities = {'drm'} (string list)
  info.category = 'drm'  (string)
  info.parent = '/org/freedesktop/Hal/devices/pci_8086_2982'  (string)
  info.product = 'Direct Rendering Manager Device'  (string)
  info.udi = '/org/freedesktop/Hal/devices/pci_8086_2982_drm_i915_card0'  (string)
  info.vendor = 'Intel Corporation'  (string)
  linux.device_file = '/dev/dri/card0'  (string)
  linux.hotplug_type = 2  (0x2)  (int)
  linux.subsystem = 'drm'  (string)
  linux.sysfs_path = '/sys/class/drm/card0'  (string)

I didn't try this with the "fglrx" driver yet.
There's no 'info.capability' with value 'drm' when using the fglrx driver, and no 'linux.device_file' with value '/dev/dri/card0'. I'm not sure if this is related to the fglrx kernel module being linked statically against drm objects (no drm module is loaded), or to similar GPL issues for ATI as for NVIDIA. Log in to d22 in case you want to investigate this issue.
It's the same problem as with NVIDIA. It works for me with the radeon driver.
Submitted a new hal to STABLE/Factory and a backport to SLES10-SP2. This should cover all drivers that use the drm subsystem. For NVidia/Intel we currently have no solution, and we can't fix it in HAL until there is a device in sysfs. I'm reassigning this to Ludwig to change resmgr to set the correct permissions on these files.
Danny means nvidia/fglrx drivers here, not intel. intel should work fine.
New hal-resmgr already checked in.

The proprietary drivers could probably create fake entries in hal (e.g. via hal-device).
Thanks. Could you explain the fake entries in hal in more detail, please?
Like this: https://bugzilla.novell.com/show_bug.cgi?id=235059#c14 I can't judge whether that's a practicable way to do it, just an idea.
But if I understand it correctly for this we would need a patched hald as well.
Danny, would you accept the patch in Bug #235059#c14?
No.
1) We don't want to add every device in sysfs to HAL (also because most of them provide absolutely no useful info). This would fill HAL with completely useless information.
2) IMO it would not fix the problem of _this_ bug; there is simply no info for NVIDIA/ATI in sysfs (which would e.g. tell us the related /dev entry).
Ok. Then we need to give up with regard to ATI/NVIDIA for now.
With fake entries I was referring to the workaround using 'hal-device' to create objects in hal that don't exist in sysfs. Of course the patch to add all devices doesn't fix the problem with ATI/NVIDIA.
*** Bug 429590 has been marked as a duplicate of this bug. ***