Bug 231171 - xorg.conf is set up with too restrictive rights on DRI access
Summary: xorg.conf is set up with too restrictive rights on DRI access
Status: RESOLVED FIXED
Duplicates: 429590
Alias: None
Product: openSUSE 10.3
Classification: openSUSE
Component: Security
Version: RC 1
Hardware: i586 Linux
Priority: P5 - None  Severity: Major
Target Milestone: ---
Assignee: Stefan Dirsch
QA Contact: E-mail List
URL:
Whiteboard:
Keywords:
Depends on:
Blocks:
 
Reported: 2006-12-30 16:37 UTC by Birger Kollstrand
Modified: 2008-09-25 05:49 UTC
11 users

See Also:
Found By: Customer
Services Priority:
Business Priority:
Blocker: ---
Marketing QA Status: ---
IT Deployment: ---


Attachments
messages (deleted)
2007-11-16 15:01 UTC, Stefan Dirsch
Details
messages (deleted)
2007-11-16 15:03 UTC, Stefan Dirsch
Details
nvidia.msg (deleted)
2007-11-17 20:11 UTC, Stefan Dirsch
Details

Description Birger Kollstrand 2006-12-30 16:37:54 UTC
In the xorg.conf :
Section "DRI"
    Group      "video"
    Mode       0660
EndSection

This is ok with local users. The users are included in the group "video" automatically.
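
Whether a given account actually has that access can be checked directly (a minimal sketch; assumes the standard group name "video" from the snippet above):

```shell
# Is the current account in the group that the DRI section grants access to?
if id -nG | grep -qw video; then
    echo "DRI access: yes (member of video)"
else
    echo "DRI access: no (not in video group)"
fi
```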

When users are authenticated via an external mechanism like LDAP, they are not part of the video group automatically. This leads to problems when they then need to use 3D graphics.

My suggestion is that the limitation should either be
- removed,
- loosened (0666), or
- handled by having LDAP and SAMBA authentication automatically generate a new group and update the xorg.conf file with this group instead (video_ldap?).
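
For reference, the loosened variant from the second bullet would look like this (a sketch of the obvious edit; the trade-off is that every local account then gets DRI access, which is the security concern raised below):

```
Section "DRI"
    Group      "video"
    Mode       0666
EndSection
```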

This problem leads to frustration and troubleshooting in networks where the user is not authenticated locally.

Regards Birger
Comment 1 Stefan Dirsch 2006-12-30 17:12:41 UTC
This is a security issue.
Comment 2 Stefan Dirsch 2006-12-30 17:21:01 UTC
I remember that we tried to add /dev/dri to /etc/logindevperm in the past, but this was a bad idea, since it means that all processes which access this device are killed when the user's session is terminated. Unfortunately this includes the Xserver, which resulted in undefined behaviour after the first user logged out, mostly machine freezes. :-(

BTW, /etc/logindevperm no longer exists. I don't know in which way it has been replaced.

The same problem applies to /dev/nvidia*.
Comment 3 Marcus Meissner 2007-01-05 15:04:39 UTC
Perhaps a resmgr ACL-style solution might come in handy.
Comment 4 Stefan Dirsch 2007-01-05 15:10:40 UTC
Maybe, but I don't know anything about it. So any help/suggestion would be appreciated. :-)
Comment 5 Matthias Hopf 2007-01-05 18:11:11 UTC
I guess the login manager would have to set the device rights / ACLs. At server startup the allowed user is only known if the user started the Xserver himself (no login manager).

Ok, how should that be done? Using resmgr? Natively? Device owner or ACLs? Where to configure the needed devices?
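
The "device owner" variant could take roughly this shape, run by the login manager at session start (a sketch; the helper name is illustrative, and a real run needs root, so the demonstration uses a scratch file instead of /dev/dri/card0):

```shell
# Hand a device node to the session owner and shut everyone else out.
grant_device() {
    dev=$1; user=$2
    chown "$user" "$dev"   # make the logged-in user the owner
    chmod 0600 "$dev"      # owner-only access
}

# Demonstrate on a scratch file; on a real system this would be
# /dev/dri/card0 (and /dev/nvidia*) instead.
scratch=$(mktemp)
grant_device "$scratch" "$(id -un)"
stat -c '%a' "$scratch"   # → 600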
Comment 6 Marcus Meissner 2007-01-09 12:45:24 UTC
Ludwig can help here. He will be back Thursday.
Comment 7 Ludwig Nussel 2007-01-11 09:35:36 UTC
Hal needs to know about those devices first if you want to trigger permission changes upon user login. For that the devices need to properly appear in /sys, I don't know if that is the case already. Reassigning to hal maintainer.
Comment 8 Stefan Dirsch 2007-01-11 09:51:01 UTC
I hope it's ok for you to remain in Cc, Ludwig.
Comment 9 Danny Al-Gaaf 2007-01-11 10:22:46 UTC
Please provide output of lshal and also the output of hald (in /var/log/messages) started with:
 --daemon=yes --verbose=yes --use-syslog

@sndirsch: Is there a corresponding device in sysfs for /dev/dri and /dev/nvidia* ?
Comment 10 Stefan Dirsch 2007-01-11 11:07:46 UTC
> @sndirsch: Is there a corresponding device in sysfs for /dev/dri 
> and /dev/nvidia* ?
Probably, but I'm not sure. For nvidia you can verify this on machine "shannon". Matthias, can you help with DRI?
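
A quick way to answer this without guessing is to look directly (hedged sketch; the nvidia path is what one would expect from the driver name, not confirmed):

```shell
# Does sysfs expose anything HAL could latch onto?
for p in /sys/class/drm /sys/bus/pci/drivers/nvidia; do
    if [ -e "$p" ]; then
        echo "present: $p"
    else
        echo "missing: $p"
    fi
done
```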
Comment 11 Matthias Hopf 2007-01-17 12:39:59 UTC
I cannot find any for nvidia. And I don't really have a clue about how dri drivers behave here right now.
Comment 12 Stefan Dirsch 2007-03-13 02:54:00 UTC
Any news on this one, Danny? All the required information for NVIDIA can be found on machine "shannon", for DRI on any notebook with Intel graphics chipset when DRI is enabled.
Comment 13 Stefan Dirsch 2007-05-12 10:28:08 UTC
Egbert, JFYI. Since Matthias or I are in Cc of this bug report or reported it ourselves, it might be interesting for you as well.
Comment 14 Thomas Biege 2007-05-21 12:04:44 UTC
Any news for this issue?
Comment 15 Stefan Dirsch 2007-07-04 20:32:46 UTC
Ludwig, we just discussed this issue. :-)
Comment 16 Stefan Behlert 2007-08-01 09:56:00 UTC
Ok, any update here? Beta1 is coming closer, and I guess 10.3 is affected, too?
Comment 17 Stefan Dirsch 2007-08-11 09:28:58 UTC
Sure, 10.3 is affected, too.
Comment 18 Birger Kollstrand 2007-09-24 11:45:06 UTC
Will anything be done about this problem for 10.3? It seems a bit forgotten.

How is this handled in the SLES line?
Comment 19 Stefan Dirsch 2007-09-24 12:36:04 UTC
>Will anything be done about this problem for 10.3?
No. :-(

>It seems a bit forgotten?
Yes. :-(

>How is this handled in the SLES line?
Same problem. :-(
Comment 20 Birger Kollstrand 2007-09-24 12:40:34 UTC
This will be much more visible when users start using OpenGL accelerated desktops.
Comment 21 Stefan Dirsch 2007-10-10 21:20:44 UTC
Another victim.
Comment 22 Danny Al-Gaaf 2007-11-16 14:42:40 UTC
Only to clarify: are we talking about the drm subsystem (e.g. on Intel: /dev/dri/card0)?

Could someone please attach the relevant part of /var/log/messages when HAL is started with --daemon=yes --verbose=yes --use-syslog ?
Comment 23 Stefan Dirsch 2007-11-16 14:59:53 UTC
> Only to clarify: are we talking about the drm subsystem (e.g. on Intel:
> /dev/dri/card0)?
Yes.
Comment 24 Stefan Dirsch 2007-11-16 15:01:59 UTC
Created attachment 183703 [details]
messages

Results in /var/log/messages of 

  /usr/sbin/hald --daemon=yes --verbose=yes --use-syslog

Feel free to investigate on shannon (intel onboard gfx machine).
Comment 25 Stefan Dirsch 2007-11-16 15:03:12 UTC
Created attachment 183705 [details]
messages
Comment 26 Danny Al-Gaaf 2007-11-17 18:55:54 UTC
Thanks. I already tested an Intel machine, and there it should be easy to add the drm stuff. I forgot to say in comment #22 that I need this from an NVIDIA system, sorry.
Comment 27 Stefan Dirsch 2007-11-17 20:11:25 UTC
Created attachment 183809 [details]
nvidia.msg

Output on an nvidia driver system. Not sure if this output really helps.
Comment 28 Stefan Dirsch 2007-11-17 20:15:29 UTC
Whatever I try to attach to this bug report, Bugzilla tells me

  "The attachment you are attempting to access has been removed."

when trying to look at it afterwards. :-(
Comment 29 Danny Al-Gaaf 2007-11-17 23:18:48 UTC
Hm, all attachments are now removed. Maybe a Bugzilla bug.
 
@Stefan: IMO you should report it to the Bugzilla people. Btw., could you send me the log via e-mail? Thanks.
Comment 30 Stefan Dirsch 2007-11-17 23:24:05 UTC
done.
Comment 31 Stefan Dirsch 2007-11-19 12:15:23 UTC
Danny, could you check this on f199? It's an NVIDIA-driver-based machine. It's a test machine. Do whatever you want with it. :-)
Comment 32 Danny Al-Gaaf 2007-11-19 13:11:18 UTC
Okay, I now have a log and have taken a look at sysfs. As long as NVIDIA (and I assume the same for ATI) doesn't add any dri/drm device to sysfs as e.g. Intel does, I can't fix it in HAL for these machines. It's easy to fix for Intel, but I need approval from upstream before adding it.

Is there any way Nvidia could fix their kernel driver to add a /sys/class/drm/* entry/device and emit the related uevent?
Comment 33 Stefan Dirsch 2007-11-19 13:45:43 UTC
NVIDIA does not use DRI at all; fglrx does. I'm not sure which information you need. The permissions of /dev/nvidia* need to be set appropriately.

These are the nvidia files I could find in /dev, /proc and /sys:

/dev/nvidia0
/dev/nvidiactl

/proc/irq/16/nvidia
/proc/driver/nvidia
/proc/driver/nvidia/registry
/proc/driver/nvidia/version
/proc/driver/nvidia/warnings
/proc/driver/nvidia/warnings/README
/proc/driver/nvidia/cards
/proc/driver/nvidia/cards/0

/sys/module/nvidia
/sys/module/nvidia/drivers
/sys/module/nvidia/drivers/pci:nvidia
/sys/module/nvidia/sections
/sys/module/nvidia/sections/.strtab
/sys/module/nvidia/sections/.symtab
/sys/module/nvidia/sections/.bss
/sys/module/nvidia/sections/.gnu.linkonce.this_module
/sys/module/nvidia/sections/.data
/sys/module/nvidia/sections/__versions
/sys/module/nvidia/sections/__ksymtab_strings
/sys/module/nvidia/sections/__kcrctab
/sys/module/nvidia/sections/__ksymtab
/sys/module/nvidia/sections/__param
/sys/module/nvidia/sections/.altinstructions
/sys/module/nvidia/sections/.smp_locks
/sys/module/nvidia/sections/.rodata.str1.1
/sys/module/nvidia/sections/.parainstructions
/sys/module/nvidia/sections/.rodata
/sys/module/nvidia/sections/.altinstr_replacement
/sys/module/nvidia/sections/.init.text
/sys/module/nvidia/sections/.exit.text
/sys/module/nvidia/sections/.text
/sys/module/nvidia/refcnt
/sys/module/nvidia/initstate
/sys/module/nvidia/holders
/sys/module/i2c_core/holders/nvidia
/sys/module/agpgart/holders/nvidia
/sys/bus/pci/drivers/nvidia
/sys/bus/pci/drivers/nvidia/new_id
/sys/bus/pci/drivers/nvidia/bind
/sys/bus/pci/drivers/nvidia/unbind
/sys/bus/pci/drivers/nvidia/uevent
/sys/bus/pci/drivers/nvidia/module
/sys/bus/pci/drivers/nvidia/0000:01:00.0
Comment 34 Stefan Dirsch 2007-11-19 22:36:19 UTC
(In reply to comment #31 from Stefan Dirsch)
> Danny, could you check this on f199. It's a NVIDIA driver based machine.
> It's a test machine. Do whatever you want with it. :-)
Well, currently I need it for my daily work since my other machine needs a BIOS/mainboard update before it can be rebooted again. :-(

Comment 35 Danny Al-Gaaf 2007-11-21 09:09:43 UTC
The point is that the driver sends absolutely no uevents when /dev/nvidia* gets created. The module and driver events are useless (there is no way to map them to the /dev device); HAL needs an event for the device itself. Nvidia should fix their driver to send a uevent on device creation.
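
With a driver that does emit events, the missing piece is directly observable (a sketch using udevadm from current udev releases; guarded because the tool, or netlink access, may be absent):

```shell
# Watch kernel uevents for the drm subsystem for a couple of seconds.
# With the nvidia driver, device creation would show nothing here --
# that is exactly the problem described above.
if command -v udevadm >/dev/null 2>&1; then
    timeout 2 udevadm monitor --kernel --subsystem-match=drm || true
else
    echo "udevadm not available here"
fi
```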
Comment 36 Stefan Dirsch 2007-11-21 09:34:46 UTC
Andy (Ritger) @ NVIDIA, would it be possible to send such an uevent during device creation?
Comment 37 andy ritger 2007-11-22 00:23:56 UTC
I'm happy to investigate sending a uevent if/when the NVIDIA driver
creates a device file.

FWIW, here is the flow from the NVIDIA X driver perspective:

    ScreenInit()
       ...
        |--> xf86LoadKernelModule("nvidia")
       ...
        |--> if /dev/nvidia* doesn't exist, then mknod, chmod, and chown

The NVIDIA driver README has a little more detail:

-----

Q. How and when are the NVIDIA device files created?

A. Depending on the target system's configuration, the NVIDIA device files
   used to be created in one of three different ways:

      o at installation time, using mknod

      o at module load time, via devfs (Linux device file system)

      o at module load time, via hotplug/udev

   With current NVIDIA driver releases, device files are created or modified
   by the X driver when the X server is started.

   By default, the NVIDIA driver will attempt to create device files with the
   following attributes:

         UID:  0     - 'root'
         GID:  0     - 'root'
         Mode: 0666  - 'rw-rw-rw-'

   Existing device files are changed if their attributes don't match these
   defaults. If you want the NVIDIA driver to create the device files with
   different attributes, you can specify them with the "NVreg_DeviceFileUID"
   (user), "NVreg_DeviceFileGID" (group) and "NVreg_DeviceFileMode" NVIDIA
   Linux kernel module parameters.

   For example, the NVIDIA driver can be instructed to create device files
   with UID=0 (root), GID=44 (video) and Mode=0660 by passing the following
   module parameters to the NVIDIA Linux kernel module:

         NVreg_DeviceFileUID=0
         NVreg_DeviceFileGID=44
         NVreg_DeviceFileMode=0660

   The "NVreg_ModifyDeviceFiles" NVIDIA kernel module parameter will disable
   dynamic device file management, if set to 0.

-----

Note that the NVIDIA driver used to create the device files from the
NVIDIA kernel module, but there were sequencing problems with hotplug/udev
that made that unworkable.

I have to admit, though, that I'm not very familiar with the current
state of uevent/HAL.  Could you please spell out for me what exactly
you'd like the NVIDIA driver to do, what calls the NVIDIA driver needs
to make to do that, and how the NVIDIA driver should detect on what
systems/distributions it should do this?

Thanks,
- Andy
Comment 38 Stefan Dirsch 2007-11-22 06:26:21 UTC
Thanks, Andy!

We're currently using

 NVreg_DeviceFileUID=0 
 NVreg_DeviceFileGID=33      (33=video on our systems)
 NVreg_DeviceFileMode=0660

for security reasons. The plan is to set the appropriate permissions for the user who is currently logged in. I'm clueless about HAL, but I think Danny
can help you here.
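
On openSUSE those settings would typically live in a modprobe configuration file; a hypothetical sketch (the file name and path are illustrative):

```
# /etc/modprobe.d/50-nvidia.conf (illustrative name)
options nvidia NVreg_DeviceFileUID=0 NVreg_DeviceFileGID=33 NVreg_DeviceFileMode=0660
```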

Comment 39 Danny Al-Gaaf 2007-11-22 18:56:12 UTC
Do I understand it correctly: the Nvidia X module/driver creates /dev/nvidia*, and not the kernel module? Why? Why doesn't the kernel driver create the device (and a useful sysfs entry for the device, not only for the driver and module), as AFAIK e.g. Intel or ATI do?

The ATI driver, for example, sends an event when the device is set up in the kernel (note: this is not udev output, but from HAL, which gets the info from udev):

action=add 
subsys=drm 
sysfs_path=/sys/class/drm/card0 
dev=/dev/card0 
parent_dev=0x080abd90
Comment 41 andy ritger 2007-11-23 21:39:59 UTC
Hi Danny,

Yes, you understand the current state of the NVIDIA driver correctly.

There is a lot of history here.  Stefan may have additional records
on some of this, as he and Greg K-H and I exchanged some correspondence
on this back in 2005 (I've pasted some of that below).

I've not tracked sysfs/udev development recently, but back in 2005 device
file management in the NVIDIA driver was quite difficult:

    - large maintenance burden for NVIDIA, trying to interface properly
      with the varying device file creation schemes across all the various
      distributions

    - devfs/udev interfaces in the Linux kernel changed frequently and
      often required driver updates

    - there were plans to make the interfaces for hotplug/udev sysfs support
      exported GPL-only

    - with various kernel versions, device file creation through udev had
      substantial latency

It is entirely possible that the device file creation interfaces exposed
in recent kernels have matured, stabilized, and become standardized across
Linux distributions since we last looked at this seriously.

However, NVIDIA's current solution of creating the device files from
user space has been quite robust over the past few years and required
minimal maintenance work for us.  We can definitely investigate making
use of newer interfaces if there are strong benefits, but I would need
help to understand what the interfaces are, and to understand when the
new interfaces should be used.

I've included some past exchanges between Stefan, Greg, and me.  The end result
strongly encouraged NVIDIA to create the device files from user space.

Thanks,
- Andy


----


 Stefan Dirsch: 
Andy, I wonder whether the nvidia video driver is udev-aware. Could you please  
comment on this?  
  
Greg thinks so:  
"In looking at the latest nvidia source tarball I could find, they  
already have code that calls class_simple_* which creates the entries  
needed in sysfs for udev to create the device nodes.  So I don't think  
you have to do anything special, if you already have their latest  
release." 
 
Additional Comment   #1 From Andy Ritger 2005-03-04 16:11 MST [reply] 
 
Yes, the NVIDIA driver is udev-aware. 
 
I should point out, though, some problems we encountered with udev: 
 
- it appears that device files do not get created until sometime after the 
kernel module is loaded 
 
- the kernel module is supposed to be autoloaded when the device files are opened 
 
From our experience in other distros, this led to bootstrapping problems due to 
this circular dependency.  The 716x driver you have breaks this circular 
dependency by explicitly loading the kernel module from the X driver during X 
initialization.  Our makedevices.sh script that runs during installation also 
creates device files in /etc/udev/devices/, which I believe get propagated to 
/dev/ by the OS, but I am not sure how configuration or distribution specific 
that is. 
 
All in all, NVIDIA's experience with udev has not been impressive.  Hopefully 
that is just due to bugs in the initial udev deployments we've seen. 
 
Thanks. 
 
Additional Comment   #2 From Greg Kroah-Hartman 2005-03-04 16:43 MST [reply] 
 
Feedback from nvidia, to the udev developers about the issues they have with it, 
would be greatly appreciated.  They can be contacted on the linux-hotplug-devel 
mailing list, or by direct email to the address in the README file in the udev 
releases. 
 
Otherwise, such issues will remain unknown, and hence, unfixed by the udev 
developers :) 
 
Additional Comment   #3 From Andy Ritger 2005-04-06 08:18 MST [reply] 
 
Setting resolution to WorksForMe; not sure if that is the most appropriate setting. 
 
Additional Comment   #4 From Stefan Dirsch 2005-04-06 08:21 MST [reply] 
 
Perfectly ok for me. Up to now I haven't heard of any udev-related problems with 
the nvidia driver. We can reopen it in case there are any. I would like 
to thank you for commenting on this. 
 
Additional Comment   #5 From Andy Ritger 2005-04-06 12:33 MST [reply] 
 
Based on recent developments, though, I wonder why this question was asked.  If 
the class_simple interface is not going to be available to the NVIDIA driver in 
future kernels, what good does it do for the NVIDIA kernel module to be udev aware? 
 
Additional Comment   #6 From Stefan Dirsch 2005-04-06 13:07 MST [reply] 
 
I asked this question since we somewhat switched to udev with SUSE 9.3. I  
can't comment on the technical details (class_simple interface) since I'm  
not a kernel developer. Hopefully Greg can ...  
 
Additional Comment   #7 From Greg Kroah-Hartman 2005-04-06 13:27 MST [reply] 
 
The in-kernel apis have changed again, yes.  But that code hasn't made it into 
mainline, and will not until some unspecified time after 2.6.12 is released. 
If nvidia's license isn't compatible with them, there's nothing I can do about 
it, sorry. 
 
Additional Comment   #8 From Andy Ritger 2005-04-06 14:28 MST [reply] 
 
Thanks, Greg. 
 
All my previous experience with SuSE/Novell has been so positive, I'm 
disappointed to see that attitude taken. 
 
Stefan, to (re)answer your original question: NVIDIA is aware of udev, in that 
we are aware of its existence and attempt to use it when possible.  However, 
apparently we will not be permitted to use udev in the future. 
 
Additional Comment   #9 From Greg Kroah-Hartman 2005-04-06 14:47 MST [reply] 
 
I'm sorry, but this is not a SuSE/Novell attitude at all.  It is the 
"linux kernel developer" hat that I must wear.  I have held people off from 
marking the class_simple() code as GPL-only for much longer than I ever 
expected to be possible; it was only a matter of time. 
 
Sorry for any trouble this might have caused, and good luck with maintaining 
a kernel driver outside of the kernel tree, I know it is quite difficult and 
time-consuming. 
 
Legally, I have no idea how you are getting away with doing what you have done 
so far, but hey, that's why I'm not a lawyer :) 
 
Additional Comment   #10 From Andy Ritger 2005-04-06 17:14 MST [reply] 
 
Understood, thanks. 
 
Stefan: I guess just be aware that the NVIDIA driver may not be able to utilize 
udev in the future.  We're investigating other solutions. 
 
Additional Comment   #11 From Greg Kroah-Hartman 2005-04-07 03:21 MST [reply] 
 
You might want to look at how vmware handles a udev system, without using 
sysfs or udev at all.  They create their nodes statically in their startup script 
which is a valid way to handle this. 
 
Additional Comment   #12 From Stefan Dirsch 2005-04-07 03:31 MST [reply] 
 
Thanks for the hint, Greg. I think this is something I should take care of. 
 
Additional Comment   #13 From Stefan Dirsch 2005-04-07 09:01 MST [reply] 
 
Andy, this is now discussed internally here at SuSE. We'll let you know about 
the results ASAP. 

------

Greg K-H:

 class_simple is now gone in the latest -mm  
releases.  I've reworked the core class code to 
work much like class_simple used to work, and 
fixed up a number of issues along the way.  The 
other class apis will be removed so that the new 
class functions are all that is left.  This will 
take a while, but is a good thing as the new  
functions are much easier to use, understand, and 
more importantly, almost impossible to use 
incorrectly.  This code will not go into mainline  
until after 2.6.12 is released. 
 
The new functions are marked EXPORT_SYMBOL_GPL(),  
so code like nvidia and vmware can't use them. 
The entire driver core and sysfs was marked this 
way a while ago (September 2004 to be exact), but 
the class_simple code was not marked as such 
because people complained that there was no 
warning (and they didn't like their closed source  
modules breaking.) They have had such warning now 
for a while, and vmware now no longer uses these 
functions at all.  
 
As the class_simple functions were tiny wrappers  
around the core class code, people argued a lot 
with me that they were circumventing the GPL 
markings of the driver code, and I agreed. 
 
nvidia can modify their startup scripts to just  
statically create the device nodes, just like 
vmware does, to have them work properly on a udev 
aware distro (in fact, that's what they used to do 
a while ago...) That will solve the issue for them 
of not being able to export any device major:minor 
number information in sysfs for udev to pick up 
on. 
 
I hope this helps clear up any confusion, and  
provides a solution for what nvidia can do to work 
with future releases from us. 
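
The static-creation route Greg describes would look roughly like this in a startup script (a sketch; major 195 and minor 255 for nvidiactl are the conventional NVIDIA device numbers, assumed here, and mknod needs root, so failures are tolerated when run unprivileged):

```shell
# Create the NVIDIA device nodes statically, vmware-style.
make_node() {
    name=$1; minor=$2
    # Skip if the node already exists; ignore failure without root.
    [ -e "/dev/$name" ] || mknod -m 0660 "/dev/$name" c 195 "$minor" 2>/dev/null || true
}
make_node nvidia0   0
make_node nvidiactl 255
```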
Comment 42 Danny Al-Gaaf 2007-11-24 15:10:08 UTC
@Stefan: I added the drm subsystem to HAL. This should fix it for Intel and ATI. Could you test HAL for Factory/STABLE from: 

http://download.opensuse.org/repositories/home:/dkukawka:/hal-beta/openSUSE_Factory/

There should now be a drm device (info.category/info.capability=drm). 

@Ludwig: You should now be able to extend hal-resmgr.
Comment 43 Stefan Dirsch 2007-11-24 15:15:58 UTC
Thanks, Danny. What exactly should I test?
Comment 44 Danny Al-Gaaf 2007-11-24 18:19:51 UTC
(In reply to comment #43 from Stefan Dirsch)
> Thanks, Danny. What exactly should I test?

Whether you get a device in lshal with property 'info.capability' and value 'drm' on Intel and ATI machines, and whether the value of linux.device_file is what you need.

About the nvidia case: We need to discuss this next week personally.

Comment 45 Stefan Dirsch 2007-11-24 20:11:32 UTC
Looks good on Intel after installing your packages from

http://download.opensuse.org/repositories/home:/dkukawka:/hal-beta/openSUSE_Factory/

and running "rchal restart".

# lshal
[...]
udi = '/org/freedesktop/Hal/devices/pci_8086_2982_drm_i915_card0'
  drm.dri_library = 'i915'  (string)
  drm.version = 'drm 1.1.0 20060810'  (string)
  info.capabilities = {'drm'} (string list)
  info.category = 'drm'  (string)
  info.parent = '/org/freedesktop/Hal/devices/pci_8086_2982'  (string)
  info.product = 'Direct Rendering Manager Device'  (string)
  info.udi = '/org/freedesktop/Hal/devices/pci_8086_2982_drm_i915_card0'  (string)
  info.vendor = 'Intel Corporation'  (string)
  linux.device_file = '/dev/dri/card0'  (string)
  linux.hotplug_type = 2  (0x2)  (int)
  linux.subsystem = 'drm'  (string)
  linux.sysfs_path = '/sys/class/drm/card0'  (string)

I didn't try this with "fglrx" driver yet.
Comment 46 Stefan Dirsch 2007-11-26 21:20:39 UTC
There's no 'info.capability' with value 'drm' when using the fglrx driver, and no 'linux.device_file' with value '/dev/dri/card0'. Not sure if this is related to the fglrx kernel module being linked statically against the drm objects (no drm module is loaded) or to GPL issues similar to those NVIDIA has. Log in to d22 in case you want to investigate this issue.
Comment 47 Danny Al-Gaaf 2007-11-26 21:37:05 UTC
It's the same problem as with Nvidia. It works for me with the radeon driver. 
Comment 48 Danny Al-Gaaf 2007-11-27 14:46:06 UTC
Submitted a new hal to STABLE/Factory and a backport to SLES10-SP2. This should cover all drivers that use the drm subsystem. For NVidia/Intel we currently have no solution, and we can't fix it in HAL until there are devices in sysfs.

I reassign it to Ludwig to change resmgr to set the correct permissions on these files.
Comment 49 Stefan Dirsch 2007-11-27 14:49:38 UTC
Danny means nvidia/fglrx drivers here, not intel. intel should work fine.
Comment 50 Ludwig Nussel 2007-11-27 14:52:35 UTC
new hal-resmgr already checked in

The proprietary drivers could probably create fake entries in hal (e.g. via hal-device).
Comment 51 Stefan Dirsch 2007-11-27 15:01:22 UTC
Thanks. Could you explain the fake entries in hal in more detail, please?
Comment 52 Ludwig Nussel 2007-11-27 15:12:10 UTC
Like this:
https://bugzilla.novell.com/show_bug.cgi?id=235059#c14

I can't judge whether that's a practicable way to do it, just an idea.
Comment 53 Stefan Dirsch 2007-11-27 16:22:08 UTC
But if I understand it correctly, for this we would need a patched hald as well.
Comment 54 Stefan Dirsch 2007-11-27 16:29:30 UTC
Danny, would you accept the patch from Bug #235059#c14?
Comment 55 Danny Al-Gaaf 2007-11-27 17:00:25 UTC
No. 1) We don't want to add every device in sysfs to HAL (also because most of them provide absolutely no useful info). This would fill HAL with completely useless information. 2) IMO it would not fix the problem of _this_ bug; there is simply no info for NVIDIA/ATI in sysfs (which would e.g. tell us the related /dev entry).
Comment 56 Stefan Dirsch 2007-11-27 17:17:00 UTC
Ok. Then we have to give up with regard to ATI/NVIDIA for now.
Comment 57 Ludwig Nussel 2007-11-28 08:36:28 UTC
With fake entries I was referring to the workaround using 'hal-device' to create objects in hal that don't exist in sysfs. Of course the patch to add all devices doesn't fix the problem with ATI/NVIDIA.
Comment 58 Ludwig Nussel 2008-09-25 05:49:04 UTC
*** Bug 429590 has been marked as a duplicate of this bug. ***