Bugzilla – Bug 1215633
gvfs-udisks2-volume-monitor crashes, causing delays in applications
Last modified: 2024-07-02 05:24:55 UTC
Gimp opens normally. I can make whatever edits I choose and it proceeds normally. If I use the "File" drop-down menu and select "New" or "Export", the primary Gimp window becomes completely unresponsive and the child window does not open for 25 seconds or longer. Once the child window appears, all is well. Gimp only does this on the first use of the "File" menu; subsequent uses of it are normal.

If I open Gimp from a terminal, I get the following error a split second before Gimp returns to normal behavior:

Error creating proxy: Error calling StartServiceByName for org.gtk.vfs.UDisks2VolumeMonitor: Timeout was reached (g-io-error-quark, 24)

`journalctl --user --follow` gives:

Sep 23 08:33:29 RYZEN9 plasmashell[2490]: Could not find the Plasmoid for Plasma::FrameSvgItem(0x556f296ad430) QQmlContext(0x556f28f5cb10) QUrl("file:///usr/share/plasma/plasmoids/org.kde.plasma.notifications/contents/ui/global/Globals.qml")
Sep 23 08:33:29 RYZEN9 plasmashell[2490]: Could not find the Plasmoid for Plasma::FrameSvgItem(0x556f296ad430) QQmlContext(0x556f28f5cb10) QUrl("file:///usr/share/plasma/plasmoids/org.kde.plasma.notifications/contents/ui/global/Globals.qml")
Sep 23 08:33:32 RYZEN9 dbus-daemon[2318]: [session uid=1000 pid=2318] Activating via systemd: service name='org.gtk.vfs.UDisks2VolumeMonitor' unit='gvfs-udisks2-volume-monitor.service' requested by ':1.83' (uid=1000 pid=5385 comm="gimp")
Sep 23 08:33:32 RYZEN9 systemd[2285]: Starting Virtual filesystem service - disk device monitor...
Sep 23 08:33:32 RYZEN9 kernel: gvfs-udisks2-vo[5642]: segfault at 0 ip 00007f214376f84d sp 00007ffc9fe99d68 error 4 in libc.so.6[7f2143626000+16c000] likely on CPU 8 (core 10, socket 0)
Sep 23 08:33:32 RYZEN9 kernel: Code: 90 90 90 90 90 90 90 90 90 90 90 f3 0f 1e fa 66 90 89 f8 48 89 fa c5 f9 ef c0 25 ff 0f 00 00 3d e0 0f 00 00 0f 87 b3 01 00 00 <c5> fd 76 0f c5 fd d7 c1 85 c0 0f 84 a3 00 00 00 f3 0f bc c0 c1 e8
Sep 23 08:33:32 RYZEN9 systemd[1]: Started Process Core Dump (PID 5664/UID 0).
Sep 23 08:33:32 RYZEN9 systemd-coredump[5665]:

I reported this on the openSUSE forums and was referred here to file a bug report.
I should add that Gimp and all gvfs packages are from the standard Tumbleweed repos. The gvfs-backend version is 1.52.1-1.1 and Gimp is 2.10.34-5.2.
gvfs-udisks2-volume-monitor is crashing for some reason on your system.

What does `systemctl --user status gvfs-udisks2-volume-monitor` return?
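If the unit turns out to be in a failed state, restarting it by hand should remove the start-up delay until the next crash (a workaround, not a fix):

    systemctl --user restart gvfs-udisks2-volume-monitor.service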
(In reply to Christophe Marin from comment #2)
> gvfs-udisks2-volume-monitor is crashing for some reason on your system.
>
> What does `systemctl --user status gvfs-udisks2-volume-monitor` return?

Preferably executed after restarting your user session.
(In reply to Christophe Marin from comment #3)
> What does `systemctl --user status gvfs-udisks2-volume-monitor` return?
>
> Preferably executed after restarting your user session.

Apparently you're on to something. Here is the output:

systemctl --user status gvfs-udisks2-volume-monitor
× gvfs-udisks2-volume-monitor.service - Virtual filesystem service - disk device monitor
     Loaded: loaded (/usr/lib/systemd/user/gvfs-udisks2-volume-monitor.service; static)
     Active: failed (Result: core-dump) since Sat 2023-09-23 21:21:08 CDT; 2min 47s ago
    Process: 3883 ExecStart=/usr/libexec/gvfs/gvfs-udisks2-volume-monitor (code=dumped, signal=SEGV)
   Main PID: 3883 (code=dumped, signal=SEGV)
        CPU: 201ms
Apparently this bug also affects Handbrake. I opened it and it took approximately 25 seconds for the primary window to open.
nullptr deref in gvfs, reassigning.
Could you please upload the core file here? Also, I used to have the same problem on poweroff. Do you have the `x-gvfs-show` flag set on some mount points in your fstab?
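For reference, a hypothetical fstab entry carrying that flag would look like this (the device and mount point here are made up):

    /dev/disk/by-uuid/0000-0000  /mnt/data  ext4  defaults,x-gvfs-show  0  2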
First, there is no mention of the entry you inquired about in my fstab. Only hard drives, but several are LUKS encrypted. The actual core dump trace is quite long. I hope it is useful:

Process 5209 (gvfs-udisks2-vo) of user 1000 dumped core.

Stack trace of thread 5209:
#0  0x00007fbdb156f84d __wcslen_avx2 (libc.so.6 + 0x16f84d)
#1  0x00007fbdb14c7b4c __wcsxfrm_l (libc.so.6 + 0xc7b4c)
#2  0x00007fbdb19ca23a g_utf8_collate_key (libglib-2.0.so.0 + 0x9623a)
#3  0x00007fbdb1bb6e0a n/a (libgio-2.0.so.0 + 0xd8e0a)
#4  0x00007fbdb1bb70e5 n/a (libgio-2.0.so.0 + 0xd90e5)
#5  0x00007fbdb1bb83a4 g_content_type_guess_for_tree (libgio-2.0.so.0 + 0xda3a4)
#6  0x0000559bff1f32c9 n/a (gvfs-udisks2-volume-monitor + 0x142c9)
#7  0x0000559bff1ff914 n/a (gvfs-udisks2-volume-monitor + 0x20914)
#8  0x0000559bff1eddd3 n/a (gvfs-udisks2-volume-monitor + 0xedd3)
#9  0x00007fbdb14281b0 __libc_start_call_main (libc.so.6 + 0x281b0)
#10 0x00007fbdb1428279 __libc_start_main@@GLIBC_2.34 (libc.so.6 + 0x28279)
#11 0x0000559bff1ede85 n/a (gvfs-udisks2-volume-monitor + 0xee85)

Stack trace of thread 5210:
#0  0x00007fbdb151616d syscall (libc.so.6 + 0x11616d)
#1  0x00007fbdb19ebcb0 g_cond_wait (libglib-2.0.so.0 + 0xb7cb0)
#2  0x00007fbdb195c02b n/a (libglib-2.0.so.0 + 0x2802b)
#3  0x00007fbdb19bea92 n/a (libglib-2.0.so.0 + 0x8aa92)
#4  0x00007fbdb19be44e n/a (libglib-2.0.so.0 + 0x8a44e)
#5  0x00007fbdb148ff64 start_thread (libc.so.6 + 0x8ff64)
#6  0x00007fbdb151847c __clone3 (libc.so.6 + 0x11847c)

Stack trace of thread 5211:
#0  0x00007fbdb1509d2f __poll (libc.so.6 + 0x109d2f)
#1  0x00007fbdb1991abf n/a (libglib-2.0.so.0 + 0x5dabf)
#2  0x00007fbdb19921cc g_main_context_iteration (libglib-2.0.so.0 + 0x5e1cc)
#3  0x00007fbdb1992211 n/a (libglib-2.0.so.0 + 0x5e211)
#4  0x00007fbdb19be44e n/a (libglib-2.0.so.0 + 0x8a44e)
#5  0x00007fbdb148ff64 start_thread (libc.so.6 + 0x8ff64)
#6  0x00007fbdb151847c __clone3 (libc.so.6 + 0x11847c)

Stack trace of thread 5221:
#0  0x00007fbdb1509d2f __poll (libc.so.6 + 0x109d2f)
#1  0x00007fbdb1991abf n/a (libglib-2.0.so.0 + 0x5dabf)
#2  0x00007fbdb19921cc g_main_context_iteration (libglib-2.0.so.0 + 0x5e1cc)
#3  0x00007fbdaf2cf97d n/a (libdconfsettings.so + 0x697d)
#4  0x00007fbdb19be44e n/a (libglib-2.0.so.0 + 0x8a44e)
#5  0x00007fbdb148ff64 start_thread (libc.so.6 + 0x8ff64)
#6  0x00007fbdb151847c __clone3 (libc.so.6 + 0x11847c)

Stack trace of thread 5229:
#0  0x00007fbdb151616d syscall (libc.so.6 + 0x11616d)
#1  0x00007fbdb19ebe5c g_cond_wait_until (libglib-2.0.so.0 + 0xb7e5c)
#2  0x00007fbdb195c003 n/a (libglib-2.0.so.0 + 0x28003)
#3  0x00007fbdb19bee0a n/a (libglib-2.0.so.0 + 0x8ae0a)
#4  0x00007fbdb19be44e n/a (libglib-2.0.so.0 + 0x8a44e)
#5  0x00007fbdb148ff64 start_thread (libc.so.6 + 0x8ff64)
#6  0x00007fbdb151847c __clone3 (libc.so.6 + 0x11847c)

Stack trace of thread 5226:
#0  0x00007fbdb151616d syscall (libc.so.6 + 0x11616d)
#1  0x00007fbdb19ebe5c g_cond_wait_until (libglib-2.0.so.0 + 0xb7e5c)
#2  0x00007fbdb195c003 n/a (libglib-2.0.so.0 + 0x28003)
#3  0x00007fbdb19bee0a n/a (libglib-2.0.so.0 + 0x8ae0a)
#4  0x00007fbdb19be44e n/a (libglib-2.0.so.0 + 0x8a44e)
#5  0x00007fbdb148ff64 start_thread (libc.so.6 + 0x8ff64)
#6  0x00007fbdb151847c __clone3 (libc.so.6 + 0x11847c)

Stack trace of thread 5230:
#0  0x00007fbdb151616d syscall (libc.so.6 + 0x11616d)
#1  0x00007fbdb19ebe5c g_cond_wait_until (libglib-2.0.so.0 + 0xb7e5c)
#2  0x00007fbdb195c003 n/a (libglib-2.0.so.0 + 0x28003)
#3  0x00007fbdb19bee0a n/a (libglib-2.0.so.0 + 0x8ae0a)
#4  0x00007fbdb19be44e n/a (libglib-2.0.so.0 + 0x8a44e)
#5  0x00007fbdb148ff64 start_thread (libc.so.6 + 0x8ff64)
#6  0x00007fbdb151847c __clone3 (libc.so.6 + 0x11847c)

Stack trace of thread 5222:
#0  0x00007fbdb151616d syscall (libc.so.6 + 0x11616d)
#1  0x00007fbdb19ebe5c g_cond_wait_until (libglib-2.0.so.0 + 0xb7e5c)
#2  0x00007fbdb195c003 n/a (libglib-2.0.so.0 + 0x28003)
#3  0x00007fbdb19bee0a n/a (libglib-2.0.so.0 + 0x8ae0a)
#4  0x00007fbdb19be44e n/a (libglib-2.0.so.0 + 0x8a44e)
#5  0x00007fbdb148ff64 start_thread (libc.so.6 + 0x8ff64)
#6  0x00007fbdb151847c __clone3 (libc.so.6 + 0x11847c)

Stack trace of thread 5228:
#0  0x00007fbdb151616d syscall (libc.so.6 + 0x11616d)
#1  0x00007fbdb19ebe5c g_cond_wait_until (libglib-2.0.so.0 + 0xb7e5c)
#2  0x00007fbdb195c003 n/a (libglib-2.0.so.0 + 0x28003)
#3  0x00007fbdb19bee0a n/a (libglib-2.0.so.0 + 0x8ae0a)
#4  0x00007fbdb19be44e n/a (libglib-2.0.so.0 + 0x8a44e)
#5  0x00007fbdb148ff64 start_thread (libc.so.6 + 0x8ff64)
#6  0x00007fbdb151847c __clone3 (libc.so.6 + 0x11847c)

Stack trace of thread 5227:
#0  0x00007fbdb151616d syscall (libc.so.6 + 0x11616d)
#1  0x00007fbdb19ebe5c g_cond_wait_until (libglib-2.0.so.0 + 0xb7e5c)
#2  0x00007fbdb195c003 n/a (libglib-2.0.so.0 + 0x28003)
#3  0x00007fbdb19bee0a n/a (libglib-2.0.so.0 + 0x8ae0a)
#4  0x00007fbdb19be44e n/a (libglib-2.0.so.0 + 0x8a44e)
#5  0x00007fbdb148ff64 start_thread (libc.so.6 + 0x8ff64)
#6  0x00007fbdb151847c __clone3 (libc.so.6 + 0x11847c)

Stack trace of thread 5223:
#0  0x00007fbdb151616d syscall (libc.so.6 + 0x11616d)
#1  0x00007fbdb19ebe5c g_cond_wait_until (libglib-2.0.so.0 + 0xb7e5c)
#2  0x00007fbdb195c003 n/a (libglib-2.0.so.0 + 0x28003)
#3  0x00007fbdb19bee0a n/a (libglib-2.0.so.0 + 0x8ae0a)
#4  0x00007fbdb19be44e n/a (libglib-2.0.so.0 + 0x8a44e)
#5  0x00007fbdb148ff64 start_thread (libc.so.6 + 0x8ff64)
#6  0x00007fbdb151847c __clone3 (libc.so.6 + 0x11847c)

Stack trace of thread 5212:
#0  0x00007fbdb1509d2f __poll (libc.so.6 + 0x109d2f)
#1  0x00007fbdb1991abf n/a (libglib-2.0.so.0 + 0x5dabf)
#2  0x00007fbdb19923ef g_main_loop_run (libglib-2.0.so.0 + 0x5e3ef)
#3  0x00007fbdb1c038e6 n/a (libgio-2.0.so.0 + 0x1258e6)
#4  0x00007fbdb19be44e n/a (libglib-2.0.so.0 + 0x8a44e)
#5  0x00007fbdb148ff64 start_thread (libc.so.6 + 0x8ff64)
#6  0x00007fbdb151847c __clone3 (libc.so.6 + 0x11847c)

Stack trace of thread 5225:
#0  0x00007fbdb151616d syscall (libc.so.6 + 0x11616d)
#1  0x00007fbdb19ebe5c g_cond_wait_until (libglib-2.0.so.0 + 0xb7e5c)
#2  0x00007fbdb195c003 n/a (libglib-2.0.so.0 + 0x28003)
#3  0x00007fbdb19bee0a n/a (libglib-2.0.so.0 + 0x8ae0a)
#4  0x00007fbdb19be44e n/a (libglib-2.0.so.0 + 0x8a44e)
#5  0x00007fbdb148ff64 start_thread (libc.so.6 + 0x8ff64)
#6  0x00007fbdb151847c __clone3 (libc.so.6 + 0x11847c)

Stack trace of thread 5224:
#0  0x00007fbdb151616d syscall (libc.so.6 + 0x11616d)
#1  0x00007fbdb19ebe5c g_cond_wait_until (libglib-2.0.so.0 + 0xb7e5c)
#2  0x00007fbdb195c003 n/a (libglib-2.0.so.0 + 0x28003)
#3  0x00007fbdb19bee0a n/a (libglib-2.0.so.0 + 0x8ae0a)
#4  0x00007fbdb19be44e n/a (libglib-2.0.so.0 + 0x8a44e)
#5  0x00007fbdb148ff64 start_thread (libc.so.6 + 0x8ff64)
#6  0x00007fbdb151847c __clone3 (libc.so.6 + 0x11847c)

ELF object binary architecture: AMD x86-64

Sep 25 07:10:02 RYZEN9 systemd[2277]: gvfs-udisks2-volume-monitor.service: Main process exited, code=dumped, status=11/SEGV
Sep 25 07:10:02 RYZEN9 systemd[2277]: gvfs-udisks2-volume-monitor.service: Failed with result 'core-dump'.
Sep 25 07:10:02 RYZEN9 systemd[1]: systemd-coredump@1-5231-0.service: Deactivated successfully.
Sep 25 07:10:02 RYZEN9 systemd[2277]: Failed to start Virtual filesystem service - disk device monitor.
Sep 25 07:10:33 RYZEN9 kwin_x11[2436]: kwin_core: XCB error: 152 (BadDamage), sequence: 21447, resource id: 9450343, major code: 143 (DAMAGE), minor code: 3 (Subtract)
Sep 25 07:10:36 RYZEN9 systemd[2277]: app-org.kde.konsole-86b7e785a75c406a8a0246e429a34674.scope: Consumed 3.456s CPU time.
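For anyone trying to narrow this down: frames #5 and #6 show the monitor calling g_content_type_guess_for_tree() on a volume root and crashing while building a collate key in libc. A minimal sketch that drives the same GIO entry point against an arbitrary directory follows; the assumption (untested) is that this standalone call reproduces the crash when pointed at the affected mount, which would identify the triggering volume:

    /* repro.c - a sketch, not a confirmed reproducer. */
    #include <gio/gio.h>

    int
    main (int argc, char **argv)
    {
      GFile *root;
      gchar **types;
      gsize i;

      if (argc < 2)
        {
          g_printerr ("usage: %s <mount-root>\n", argv[0]);
          return 1;
        }

      root = g_file_new_for_path (argv[1]);
      /* Same GIO call as frame #5 of the trace; it collates file names
       * internally, which is where the NULL read in __wcslen_avx2 occurs. */
      types = g_content_type_guess_for_tree (root);
      if (types != NULL)
        {
          for (i = 0; types[i] != NULL; i++)
            g_print ("%s\n", types[i]);
          g_strfreev (types);
        }
      g_object_unref (root);
      return 0;
    }

Build with something like `gcc repro.c $(pkg-config --cflags --libs gio-2.0) -o repro` and run it against the root of each mounted volume in turn.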
Could you try `coredumpctl list` to see whether there is a core file for the process, and attach it here? Thanks.
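If the list is long, `coredumpctl` also accepts a match argument; a path containing `/` is matched against the executable, e.g.:

    coredumpctl list /usr/libexec/gvfs/gvfs-udisks2-volume-monitor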
Created attachment 869765 [details]
output of `coredumpctl list` in "txt" format

Posting per the request in comment #9.
Sorry for the late reply, could you please try `coredumpctl -o gvfs-udisks2-volume-monitor-coredump dump /usr/libexec/gvfs/gvfs-udisks2-volume-monitor`, and upload the `gvfs-udisks2-volume-monitor-coredump` file here?
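Alternatively, if the matching debuginfo packages are installed, a symbolic backtrace taken directly from the journal's core file might already be enough (a sketch; the exact debuginfo package names vary by distribution):

    coredumpctl gdb /usr/libexec/gvfs/gvfs-udisks2-volume-monitor
    (gdb) bt full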
(In reply to Alynx Zhou from comment #11)
> Sorry for the late reply, could you please try `coredumpctl -o
> gvfs-udisks2-volume-monitor-coredump dump
> /usr/libexec/gvfs/gvfs-udisks2-volume-monitor`, and upload the
> `gvfs-udisks2-volume-monitor-coredump` file here?

Actually, the bug seems to have been fixed by a recent update. I am now getting output of a different type, which is no doubt a different problem. Gimp seems to work, but I get the same lines repeating as soon as I open the "Export As" window:

(gimp:5307): GLib-GIO-CRITICAL **: 05:37:00.899: file ../gio/gfileinfo.c: line 1633 (g_file_info_get_is_hidden): should not be reached
(gimp:5307): GLib-GIO-CRITICAL **: 05:37:00.899: GFileInfo created without standard::is-backup
(gimp:5307): GLib-GIO-CRITICAL **: 05:37:00.899: file ../gio/gfileinfo.c: line 1655 (g_file_info_get_is_backup): should not be reached
(gimp:5307): GLib-GIO-CRITICAL **: 05:37:00.899: GFileInfo created without standard::is-hidden

The file does seem to export correctly. I am not sure where to report this, or whether it is even serious enough to report, as Gimp does seem to work correctly. If I hadn't opened Gimp from a command prompt, I would never have noticed the issue.
I should add that the gvfs issue seems to have been fixed for all of the programs that displayed the problem: Handbrake (initial opening of the app), Gimp (opening a dialog window), and Firefox (opening the print dialog).
> (gimp:5307): GLib-GIO-CRITICAL **: 05:37:00.899: file ../gio/gfileinfo.c: line 1633 (g_file_info_get_is_hidden): should not be reached
> (gimp:5307): GLib-GIO-CRITICAL **: 05:37:00.899: GFileInfo created without standard::is-backup
> (gimp:5307): GLib-GIO-CRITICAL **: 05:37:00.899: file ../gio/gfileinfo.c: line 1655 (g_file_info_get_is_backup): should not be reached
> (gimp:5307): GLib-GIO-CRITICAL **: 05:37:00.899: GFileInfo created without standard::is-hidden

I also get those, but I think they are harmless currently.

Did you have some special mount points, like Google Drive or others? Maybe that is the reason for the crash.
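For context, those CRITICALs fire when a caller reads an attribute the GFileInfo was never populated with. A minimal sketch of the guard a caller can use to avoid them (the helper name here is hypothetical):

    #include <gio/gio.h>

    /* Hypothetical helper: only read standard::is-hidden if the attribute
     * was actually filled in, which avoids the GLib-GIO-CRITICAL. */
    static gboolean
    file_info_is_hidden_safe (GFileInfo *info)
    {
      if (g_file_info_has_attribute (info, G_FILE_ATTRIBUTE_STANDARD_IS_HIDDEN))
        return g_file_info_get_is_hidden (info);
      return FALSE; /* attribute missing: assume not hidden */
    }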
(In reply to Alynx Zhou from comment #14)
> I also get those, but I think they are harmless currently.
>
> Did you have some special mount points, like Google Drive or others? Maybe
> that is the reason for the crash.

I agree that the new messages seem to be harmless and do not appear to affect function. I have never had mount points that were not directly connected to my computer. All are physical drives that are always connected. Any external connections are done via Samba and are manually initiated, as well as rarely used.
Could we close this if no crash is happening now? The GFileInfo output seems related to this bug: https://gitlab.gnome.org/GNOME/gimp/-/issues/9994, and should be fixed upstream.