Bugzilla – Bug 104350
scsi errorhandler from usb storage device not removed
Last modified: 2006-02-22 16:59:52 UTC
I added and removed a USB hard disk several times. The [scsi_eh_*] kernel thread is not fully removed in this case: every time I plug the USB storage device in again, another [scsi_eh_*] appears. Because of this I can't unload the sd_mod module, since its use count is 4 or more. Also a problem: one partition of the hard disk is xfs, and after removal several xfs-related processes are left over.
----------------------
root      6040  0.0  0.0      0     0 ?   S    13:32   0:00 [kjournald]
root      6052  0.0  0.0      0     0 ?   S<   13:32   0:00 [xfslogd/0]
root      6054  0.0  0.0      0     0 ?   S<   13:32   0:00 [xfsdatad/0]
root      6055  0.0  0.0      0     0 ?   S    13:32   0:00 [xfsbufd]
root      6158  0.0  0.0      0     0 ?   S    13:41   0:00 [scsi_eh_1]
root      6217  0.0  0.0      0     0 ?   S    13:41   0:00 [kjournald]
----------------------
and from another machine:
root     17423  0.0  0.0      0     0 ?   S    11:41   0:00 [scsi_eh_2]
root     17490  0.0  0.0      0     0 ?   S<   11:41   0:00 [xfslogd/0]
root     17492  0.0  0.0      0     0 ?   S<   11:41   0:00 [xfsdatad/0]
root     17493  0.0  0.0      0     0 ?   S    11:41   0:00 [xfsbufd]
root     17495  0.0  0.0      0     0 ?   S    11:41   0:00 [xfssyncd]
root     17608  0.0  0.0      0     0 ?   S    11:41   0:00 [scsi_eh_6]
root     17671  0.0  0.0      0     0 ?   S    11:41   0:00 [kjournald]
Not being able to unload the sd_mod module does not qualify as a critical bug.
Sorry ... the problem with sd_mod is solved: because I added and removed the device quickly several times, not all partitions had been unmounted. But the problem with the [scsi_eh_*] and xfs processes still exists, even after I unmount the related partitions.
Can we close this report? Can you reproduce the problem on beta4?
Can't reproduce this at the moment. I'll close the bug and reopen it if it happens again.
During tests with FireWire and USB using a hard disk with 4 partitions, the same thing happened again. The device was removed and all mountpoints were unmounted, but there are now again two [scsi_eh_*] threads left.
Chris, can someone from your team look into this?
Jens, does this sound like a known bug?
I asked about that on IRC once, but it also happens with USB, right?
----------------------
<olaf> jejb: sbp2 drives will get a new scsi_eh_N after each replugging. Should I be concerned?
<olaf> do you know if that's fixed in your trees, or by some patches?
<jejb> olaf, I presume they remove and re-add the host?
<olaf> I can't remember how it behaved in older kernels
<jejb> olaf, if they remove and re-add the host then it's expected
<olaf> how large can N grow?
<jejb> boundless
<olaf> ok, guess they will fix it someday
<hch> well, 32 bits
<hch> and then it starts from 0 and causes bad trouble
<hch> we should maybe switch it to an idr allocator to reallocate them as soon as the old host has been completely released
<jejb> hch, yes, I was wondering that ... it would stop Ben Collins whining too ...
<olaf> I think the physical connector will die before I fill a uint
----------------------
Does the leftover kernel thread cause any problems?
Since the last question has gone unanswered for more than 5 months, I do not assume that the xfs processes cause any problems. This seems to be normal: I tested mounting and unmounting xfs with the current 10.1 kernel and also with the newer xfs code. Two xfs processes stay around, but repeated mounting/unmounting works without problems, and the processes are cleaned up when the xfs module is unloaded. Whether the xfs threads stay or are removed is not a function of the usb-storage driver, so I'd say it also has nothing to do with unplugging a USB disk (if it is unmounted beforehand). Just in case, the very latest kernels can be found at ftp://ftp.suse.com/pub/projects/kernel/kotd/