Bug 1212918 - The visible session of the guest OS crashes when watching a stream
Summary: The visible session of the guest OS crashes when watching a stream
Status: RESOLVED NORESPONSE
Alias: None
Product: openSUSE Tumbleweed
Classification: openSUSE
Component: KVM
Version: Current
Hardware: x86-64  OS: openSUSE Tumbleweed
Priority: P5 - None  Severity: Normal
Target Milestone: ---
Assignee: E-mail List
QA Contact: E-mail List
URL:
Whiteboard:
Keywords:
Depends on:
Blocks:
 
Reported: 2023-07-02 16:29 UTC by Stakanov Schufter
Modified: 2023-08-02 16:57 UTC
CC List: 4 users

See Also:
Found By: ---
Services Priority:
Business Priority:
Blocker: ---
Marketing QA Status: ---
IT Deployment: ---
dfaggioli: needinfo? (carnold)
carnold: needinfo? (stakanov)


Attachments
last one (not the current) (1023.61 KB, application/x-troff-man)
2023-07-05 19:45 UTC, Stakanov Schufter
Details
log.3 (1023.91 KB, application/x-troff-man)
2023-07-05 19:46 UTC, Stakanov Schufter
Details
the most recent 01 (sorry I overlooked) (1022.54 KB, application/x-troff-man)
2023-07-05 19:49 UTC, Stakanov Schufter
Details

Description Stakanov Schufter 2023-07-02 16:29:13 UTC
Open a guest inside KVM and put it in full screen. 
Watch a stream.
The whole guest window vanishes (crashes); however, the guest OS itself is still running in the background and can be reopened. It is only its display output that crashes. 
This started with today's Mesa update, but I am not sure whether Mesa is related. 

What I get in the journal is as follows (and it is the same every time):

lug 02 18:17:11 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
lug 02 18:17:11 localhost systemd[1]: Finished Cleanup of Temporary Directories.
lug 02 18:17:11 localhost systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
lug 02 18:17:11 localhost systemd-tmpfiles[8659]: /usr/lib/tmpfiles.d/suse.conf:10: Duplicate line for path "/run/lock", ignoring.
lug 02 18:17:11 localhost systemd[1]: Starting Cleanup of Temporary Directories...
lug 02 18:15:50 localhost plasmashell[2797]: file:///usr/lib64/qt5/qml/org/kde/plasma/core/private/DefaultToolTip.qml:69:13: QML Label: Binding loop detected for property "verticalAlignment"
lug 02 18:15:25 localhost kded5[2763]: Service  ":1.182" unregistered
lug 02 18:15:25 localhost kded5[2763]: Registering ":1.182/StatusNotifierItem" to system tray
lug 02 18:15:25 localhost virt-manager[8565]: Warning no automount-inhibiting implementation available
lug 02 18:15:23 localhost virtqemud[2218]: Failed to connect socket to '/var/run/libvirt/virtnodedevd-sock': File o directory non esistente
lug 02 18:15:23 localhost virtqemud[2218]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock': File o directory non esistente
lug 02 18:15:23 localhost virtqemud[2218]: Failed to connect socket to '/var/run/libvirt/virtnodedevd-sock': File o directory non esistente
lug 02 18:15:23 localhost virtqemud[2218]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock': File o directory non esistente
lug 02 18:15:23 localhost virtqemud[2218]: Failed to connect socket to '/var/run/libvirt/virtnodedevd-sock': File o directory non esistente
lug 02 18:15:23 localhost virtqemud[2218]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock': File o directory non esistente
lug 02 18:15:23 localhost kded5[2763]: Registering ":1.179/org/ayatana/NotificationItem/virt_manager" to system tray
lug 02 18:15:23 localhost virt-manager[8565]: AT-SPI: Error retrieving accessibility bus address: org.freedesktop.DBus.Error.ServiceUnknown: The name org.a11y.Bus was not provided by any .service files
lug 02 18:15:23 localhost systemd[2604]: Started Gestore macchine virtuali.
lug 02 18:15:22 localhost plasmashell[2797]: QString::arg: 2 argument(s) missing in virt-manager
lug 02 18:15:22 localhost plasmashell[2797]: kf.service.services: KApplicationTrader: mimeType "x-scheme-handler/file" not found
lug 02 18:14:42 localhost akonadi_maildir_resource[3685]: "Item query returned empty result set"
lug 02 18:14:42 localhost akonadiserver[3090]: org.kde.pim.akonadiserver: Handler exception when handling command FetchItems on connection akonadi_maildir_resource_0 (0x564d430fc990) : Item query returned empty result set
lug 02 18:14:42 localhost akonadiserver[3090]: org.kde.pim.akonadiserver: Handler exception when handling command FetchItems on connection MailFilter Kernel ETM (0x564d431045b0) : Item query returned empty result set
lug 02 18:14:42 localhost akonadiserver[3090]: org.kde.pim.akonadiserver: Handler exception when handling command FetchItems on connection KMail Kernel ETM (0x564d43106630) : Item query returned empty result set
lug 02 18:14:42 localhost kontact[2947]: "Item query returned empty result set"
lug 02 18:14:42 localhost akonadi_mailfilter_agent[3701]: "Item query returned empty result set"
lug 02 18:14:42 localhost akonadi_archivemail_agent[3679]: "Item query returned empty result set"
lug 02 18:14:42 localhost akonadiserver[3090]: org.kde.pim.akonadiserver: Handler exception when handling command FetchItems on connection Archive Mail Kernel ETM (0x564d43105080) : Item query returned empty result set
lug 02 18:13:28 localhost systemd[1]: snapperd.service: Consumed 2.642s CPU time.
lug 02 18:13:28 localhost systemd[1]: snapperd.service: Deactivated successfully.
lug 02 18:13:24 localhost plasmashell[2797]: error creating screencast "Could not find window id {825c6f80-9d42-4b68-a7b9-dafc15edb9eb}"
lug 02 18:13:24 localhost kded5[2763]: Service  ":1.175" unregistered
lug 02 18:13:24 localhost systemd[2604]: app-virt\x2dmanager-3351630286ab4fa08f01f137ea4a8683.scope: Consumed 3min 6.511s CPU time.
lug 02 18:13:24 localhost virtqemud[2218]: End of file while reading data: Errore di input/output
lug 02 18:13:24 localhost virt-manager[4380]: Error flushing display: Risorsa temporaneamente non disponibile

End of excerpt. 

When I open the manager, the OS is still running; a double click restores the window and the stream continues normally. It is hence the full-screen output that is troubled. As I do not yet understand what is happening, this output is all I can give you for the time being.
Comment 1 Stakanov Schufter 2023-07-02 17:04:57 UTC
Observation: the crash happens only in full screen. If you leave the guest session in a window, even one nearly the size of the monitor, it works and does not crash; but full screen plus a maximized guest screen = crash after about a minute (not immediate).
Comment 2 Stakanov Schufter 2023-07-02 17:11:49 UTC
Well, I was too optimistic, but it did run much longer before crashing. The journalctl output was somewhat more concise when it finally crashed again:


lug 02 19:08:29 localhost plasmashell[2797]: error creating screencast "Could not find window id {69985de2-e83e-4533-a140-0c8524e76af2}"
lug 02 19:08:29 localhost kded5[2763]: Service  ":1.194" unregistered
lug 02 19:08:29 localhost systemd[2604]: app-virt\x2dmanager-b5970512dde644eda853b1245fddce68.scope: Consumed 9min 20.744s CPU time.
lug 02 19:08:29 localhost virtqemud[2218]: End of file while reading data: Errore di input/output
lug 02 19:08:29 localhost virt-manager[10031]: Error flushing display: Risorsa temporaneamente non disponibile
Comment 3 Dario Faggioli 2023-07-03 16:53:44 UTC
Can you upload/post the content of /var/log/libvirt/qemu/<guestname>.log (on the host)?
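E.g., assuming the default libvirt log location (the path may differ on your setup), something like:

sudo tail -n 200 /var/log/libvirt/qemu/<guestname>.log

should show the most recent QEMU output for the guest.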
Comment 4 Dario Faggioli 2023-07-03 17:02:31 UTC
(In reply to Stakanov Schufter from comment #0)
> lug 02 18:13:28 localhost systemd[1]: snapperd.service: Consumed 2.642s CPU
> time.
> lug 02 18:13:28 localhost systemd[1]: snapperd.service: Deactivated
> successfully.
> lug 02 18:13:24 localhost plasmashell[2797]: error creating screencast
> "Could not find window id {825c6f80-9d42-4b68-a7b9-dafc15edb9eb}"
> lug 02 18:13:24 localhost kded5[2763]: Service  ":1.175" unregistered
> lug 02 18:13:24 localhost systemd[2604]:
> app-virt\x2dmanager-3351630286ab4fa08f01f137ea4a8683.scope: Consumed 3min
> 6.511s CPU time.
> lug 02 18:13:24 localhost virtqemud[2218]: End of file while reading data:
> Errore di input/output
> lug 02 18:13:24 localhost virt-manager[4380]: Error flushing display:
> Risorsa temporaneamente non disponibile
>
Mm... wait. These lines above (especially the one from virtqemud) seem to indicate that QEMU crashed. And yet, you say:

> When I open the manager, the OS is still running; a double click
> restores the window and the stream continues normally. It is hence the
> full-screen output that is troubled.
>
What does this mean, exactly? Like, what is it that actually happens?

So, you have the VM open in a virt-manager window, you start a stream (does "stream" mean, like, watching a video on YouTube, Netflix or something like that, inside the guest?) and then what?
The VM window disappears?

Assuming all the above is true, how do you make it appear again?
Do you double-click on the VM again, in the virt-manager manager window?
And what happens at this point? The VM screen shows up again with all the programs opened and doing what they were doing before it disappeared?

Sorry for asking so many questions... I'm just trying to understand what is actually crashing.

In the meanwhile, let's also ping Jim and Charles :-P
Comment 5 Dario Faggioli 2023-07-03 17:04:56 UTC
(In reply to Stakanov Schufter from comment #0)
> Open a guest inside KVM and put it in full screen. 
> Watch a stream.
> The whole guest window vanishes (crashes); however, the guest OS itself
> is still running in the background and can be reopened. It is only its
> display output that crashes. 
> This started with today's Mesa update, but I am not sure whether Mesa
> is related. 
> 
Mesa update where: on the host? On the guest? Or both?

Also, can you rollback to a previous snapshot and confirm that, by doing that, everything works again?
Comment 6 Dario Faggioli 2023-07-03 17:07:02 UTC
Oh, and last but not least, can we see the VM config file?

# virsh dumpxml <guestname>
Comment 7 Charles Arnold 2023-07-03 17:38:35 UTC
Another experiment to try is to view the guest using the virt-viewer program
in full screen mode. Does it also crash?
Comment 8 Stakanov Schufter 2023-07-03 18:17:05 UTC
(In reply to Dario Faggioli from comment #4)
> (In reply to Stakanov Schufter from comment #0)
> > lug 02 18:13:28 localhost systemd[1]: snapperd.service: Consumed 2.642s CPU
> > time.
> > lug 02 18:13:28 localhost systemd[1]: snapperd.service: Deactivated
> > successfully.
> > lug 02 18:13:24 localhost plasmashell[2797]: error creating screencast
> > "Could not find window id {825c6f80-9d42-4b68-a7b9-dafc15edb9eb}"
> > lug 02 18:13:24 localhost kded5[2763]: Service  ":1.175" unregistered
> > lug 02 18:13:24 localhost systemd[2604]:
> > app-virt\x2dmanager-3351630286ab4fa08f01f137ea4a8683.scope: Consumed 3min
> > 6.511s CPU time.
> > lug 02 18:13:24 localhost virtqemud[2218]: End of file while reading data:
> > Errore di input/output
> > lug 02 18:13:24 localhost virt-manager[4380]: Error flushing display:
> > Risorsa temporaneamente non disponibile
> >
> Mm... wait. These lines above (especially the one from virtqemud) seem
> to indicate that QEMU crashed. And yet, you say:
> 
> > When I open the manager, the OS is still running; a double click
> > restores the window and the stream continues normally. It is hence
> > the full-screen output that is troubled.
> >
> What does this mean, exactly? Like, what is it that actually happens?
> 
> So, you have the VM open in a virt-manager window, you start a stream
> (does "stream" mean, like, watching a video on YouTube, Netflix or
> something like that, inside the guest?) and then what?
> The VM window disappears?
> 
> Assuming all the above is true, how do you make it appear again?
> Do you double-click on the VM again, in the virt-manager manager window?
> And what happens at this point? The VM screen shows up again with all
> the programs opened and doing what they were doing before it disappeared?
> 
> Sorry for asking so many questions... I'm just trying to understand what is
> actually crashing.
> 
> In the meanwhile, let's also ping Jim and Charles :-P
You do well to ask many questions, since you need the info; no worries. 

The guest session is open in full screen, and the browser inside the maximized virtual session is also maximized. I am watching a stream. Suddenly the whole window "implodes": the full-screen image vanishes and the manager window is gone, as if you had just started the machine. You click on the virtual manager and expect the virtual machine to have crashed. 
But you find the session is still open, alive and kicking. To do the Lazarus thing you only have to click on it to open it full screen... and you find even the browser still running, the stream still buffering; playback went on with no problems of any kind. The guest just "hides" by closing back to the manager, so to say. 
So there is no data loss in the guest; you just have the annoyance of having to go back to the moment when the movie "vanished". Only to find that after a while it does Britney Spears (oops, I did it again!). 
I will try to give you the log output you asked for. 
I tried with the window not fully expanded and fully expanded; it did not really change anything. However, if the guest window is maximized but not in "full screen" mode, so that you still see the bar with the three dots, to be clear, it worked. 
(In reply to Dario Faggioli from comment #6)
> Oh, and last but not least, can we see the VM config file?
> 
> # virsh dumpxml <guestname>

error: failed to get domain 'tumbleweed'
but the machine is there and the name is right, so I am doing something wrong, I guess.
Comment 9 Stakanov Schufter 2023-07-03 18:18:32 UTC
(In reply to Charles Arnold from comment #7)
> Another experiment to try is to view the guest using the virt-viewer program
> in full screen mode. Does it also crash?

I was not aware of its existence; is this a standalone program from the repos?
Comment 10 Charles Arnold 2023-07-03 18:56:36 UTC
(In reply to Stakanov Schufter from comment #9)
> (In reply to Charles Arnold from comment #7)
> > Another experiment to try is to view the guest using the virt-viewer program
> > in full screen mode. Does it also crash?
> 
> I was not aware of its existence; is this a standalone program from the repos?

Yes, it is written in C, whereas virt-manager is a Python application. Both
run as libvirt clients. Use the following command to install it.

zypper in virt-viewer
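Then, for the full-screen test, something like the following should work (adjust the guest name, and the URI if you are not on the system connection):

virt-viewer --connect qemu:///system --full-screen <guestname>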
Comment 11 Dario Faggioli 2023-07-04 09:27:56 UTC
(In reply to Stakanov Schufter from comment #8)
> The guest session is open in full screen, and the browser inside the
> maximized virtual session is also maximized. I am watching a stream.
> Suddenly the whole window "implodes": the full-screen image vanishes
> and the manager window is gone, as if you had just started the machine.
> You click on the virtual manager and expect the virtual machine to have
> crashed. 
> But you find the session is still open, alive and kicking. To do the
> Lazarus thing you only have to click on it to open it full screen...
> and you find even the browser still running, the stream still
> buffering; playback went on with no problems of any kind. The guest
> just "hides" by closing back to the manager, so to say. 
> So there is no data loss in the guest; you just have the annoyance of
> having to go back to the moment when the movie "vanished". Only to find
> that after a while it does Britney Spears (oops, I did it again!). 
>
OK, so it definitely seems to be virt-manager crashing, not QEMU.

Try virt-viewer, as Charles said, and let's see how that goes.

> (In reply to Dario Faggioli from comment #6)
> > Oh, and last but not least, can we see the VM config file?
> > 
> > # virsh dumpxml <guestname>
> 
> error: failed to get domain 'tumbleweed'
> but the machine is there, the name is right So I am doing something wrong I
> guess
>
Can I see the output of:

virsh list --all
sudo virsh list --all
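For background: when run as a regular user, virsh connects to the per-user qemu:///session instance by default, while guests created through virt-manager's default connection usually live on qemu:///system. If the guest only shows up in the sudo variant, you can also point virsh at the system instance explicitly, e.g.:

virsh -c qemu:///system dumpxml <guestname>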
Comment 12 Stakanov Schufter 2023-07-04 12:44:57 UTC
OK, that helped (I have to set up a cheat sheet for KVM).

entropy@localhost:~> sudo virsh dumpxml opensusetumbleweed
<domain type='kvm'>
  <name>opensusetumbleweed</name>
  <uuid>b61ba116-e869-48cb-89ae-e5a1aa9f19e6</uuid>
  <metadata>
    <libosinfo:libosinfo xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0">
      <libosinfo:os id="http://opensuse.org/opensuse/tumbleweed"/>
    </libosinfo:libosinfo>
  </metadata>
  <memory unit='KiB'>8192000</memory>
  <currentMemory unit='KiB'>8192000</currentMemory>
  <vcpu placement='static'>4</vcpu>
  <os>
    <type arch='x86_64' machine='pc-q35-8.0'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <vmport state='off'/>
  </features>
  <cpu mode='host-passthrough' check='none' migratable='on'/>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <pm>
    <suspend-to-mem enabled='no'/>
    <suspend-to-disk enabled='no'/>
  </pm>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/opensusetumbleweed-clone.qcow2'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </disk>
    <controller type='usb' index='0' model='qemu-xhci' ports='15'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x10'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x11'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0x12'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0x13'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0x14'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
    </controller>
    <controller type='pci' index='6' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='6' port='0x15'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
    </controller>
    <controller type='pci' index='7' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='7' port='0x16'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
    </controller>
    <controller type='pci' index='8' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='8' port='0x17'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
    </controller>
    <controller type='pci' index='9' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='9' port='0x18'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='10' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='10' port='0x19'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
    </controller>
    <controller type='pci' index='11' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='11' port='0x1a'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
    </controller>
    <controller type='pci' index='12' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='12' port='0x1b'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
    </controller>
    <controller type='pci' index='13' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='13' port='0x1c'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
    </controller>
    <controller type='pci' index='14' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='14' port='0x1d'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
    </controller>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:88:7f:6e'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <channel type='spicevmc'>
      <target type='virtio' name='com.redhat.spice.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='2'/>
    </channel>
    <input type='tablet' bus='usb'>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='spice' autoport='yes'>
      <listen type='address'/>
      <image compression='off'/>
    </graphics>
    <sound model='ich9'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1b' function='0x0'/>
    </sound>
    <audio id='1' type='spice'/>
    <video>
      <model type='virtio' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
    </video>
    <redirdev bus='usb' type='spicevmc'>
      <address type='usb' bus='0' port='2'/>
    </redirdev>
    <redirdev bus='usb' type='spicevmc'>
      <address type='usb' bus='0' port='3'/>
    </redirdev>
    <watchdog model='itco' action='reset'/>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </memballoon>
    <rng model='virtio'>
      <backend model='random'>/dev/urandom</backend>
      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
    </rng>
  </devices>
</domain>


About the original request:
entropy@localhost:~> virsh list --all
 Id   Nome   Stato
--------------------

entropy@localhost:~> sudo virsh list --all
[sudo] password di root: 
Riprovare.
[sudo] password di root: 
 Id   Nome                 Stato
--------------------------------------
 -    Deepin21             terminato
 -    opensusetumbleweed   terminato
 -    win11                terminato



What I found out: yesterday it worked normally and did not crash; then it came to my mind that when the crash happened I was on Wayland, not on X11. 
I also have another bug open that presents only under Wayland, so tonight I will run the same show under Wayland, and if it crashes now, the issue is just "Wayland". I know that Wayland has a lot of issues; if it is related, do you wish me to close the bug as invalid?
Comment 13 James Fehlig 2023-07-05 14:02:45 UTC
(In reply to Dario Faggioli from comment #4)
> In the meanwhile, let's also ping Jim and Charles :-P

The only thing I would add is that systemd typically collects a coredump of crashing processes. You can view such coredumps with 'coredumpctl list'. See the coredumpctl man page for more info. If you do have coredumps of virt-related processes, please attach to the bug. Thanks!
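For example, to narrow the listing down to virt-manager (coredumpctl can match on the recorded process name):

coredumpctl list virt-manager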
Comment 14 Stakanov Schufter 2023-07-05 18:29:51 UTC
(In reply to James Fehlig from comment #13)
> (In reply to Dario Faggioli from comment #4)
> > In the meanwhile, let's also ping Jim and Charles :-P
> 
> The only thing I would add is that systemd typically collects a coredump of
> crashing processes. You can view such coredumps with 'coredumpctl list'. See
> the coredumpctl man page for more info. If you do have coredumps of
> virt-related processes, please attach to the bug. Thanks!

I do not see anything virt-related, but could
Mon 2023-07-03 23:14:30 CEST 10156 1000 1000 SIGSEGV present  /usr/bin/python3.11  45.1M
be of interest? 
That is the only "exotic" entry that I found. 

sudo coredumpctl list
[sudo] password di root: 
TIME                           PID  UID  GID SIG     COREFILE EXE                   SIZE
Fri 2023-06-23 12:56:31 CEST 12374 1002 1002 SIGSEGV missing  /usr/bin/okular          -
Fri 2023-06-23 12:56:31 CEST 12323 1002 1002 SIGSEGV missing  /usr/bin/dolphin         -
Fri 2023-06-23 19:20:17 CEST  2949 1000 1000 SIGSEGV missing  /usr/bin/kaffeine        -
Tue 2023-06-27 17:32:41 CEST  2922 1002 1002 SIGSEGV missing  /usr/bin/plasmashell     -
Thu 2023-06-29 22:14:54 CEST 14818 1000 1000 SIGSEGV missing  /usr/bin/python3.11      -
Fri 2023-06-30 16:13:22 CEST 22911 1000 1000 SIGSEGV missing  /usr/bin/dolphin         -
Mon 2023-07-03 23:14:30 CEST 10156 1000 1000 SIGSEGV present  /usr/bin/python3.11  45.1M
Comment 15 Charles Arnold 2023-07-05 18:55:41 UTC
(In reply to Stakanov Schufter from comment #14)
> (In reply to James Fehlig from comment #13)
> > (In reply to Dario Faggioli from comment #4)
> > > In the meanwhile, let's also ping Jim and Charles :-P
> > 
> > The only thing I would add is that systemd typically collects a coredump of
> > crashing processes. You can view such coredumps with 'coredumpctl list'. See
> > the coredumpctl man page for more info. If you do have coredumps of
> > virt-related processes, please attach to the bug. Thanks!
> 
> I do not see anything virt-related, but could
> Mon 2023-07-03 23:14:30 CEST 10156 1000 1000 SIGSEGV present 
> /usr/bin/python3.11  45.1M
> be of interest? 
> That is the only "exotic" entry that I found. 
> 
> sudo coredumpctl list
> [sudo] password di root: 
> TIME                           PID  UID  GID SIG     COREFILE EXE           
> SIZE
> Fri 2023-06-23 12:56:31 CEST 12374 1002 1002 SIGSEGV missing 
> /usr/bin/okular          -
> Fri 2023-06-23 12:56:31 CEST 12323 1002 1002 SIGSEGV missing 
> /usr/bin/dolphin         -
> Fri 2023-06-23 19:20:17 CEST  2949 1000 1000 SIGSEGV missing 
> /usr/bin/kaffeine        -
> Tue 2023-06-27 17:32:41 CEST  2922 1002 1002 SIGSEGV missing 
> /usr/bin/plasmashell     -
> Thu 2023-06-29 22:14:54 CEST 14818 1000 1000 SIGSEGV missing 
> /usr/bin/python3.11      -
> Fri 2023-06-30 16:13:22 CEST 22911 1000 1000 SIGSEGV missing 
> /usr/bin/dolphin         -
> Mon 2023-07-03 23:14:30 CEST 10156 1000 1000 SIGSEGV present 
> /usr/bin/python3.11  45.1M

This seems likely to be the problem affecting virt-manager. virt-manager is a
Python application, and it appears the crash is happening below virt-manager. Could you please attach ~/.cache/virt-manager/virt-manager.log?

Thanks
Comment 16 Stakanov Schufter 2023-07-05 19:45:16 UTC
Created attachment 868022 [details]
last one (not the current)

file:///home/entropy/.cache/virt-manager/virt-manager.log.2
file:///home/entropy/.cache/virt-manager/virt-manager.log.3
These will be the attachments (which I think correspond to the crashes).
Comment 17 Stakanov Schufter 2023-07-05 19:46:59 UTC
Created attachment 868023 [details]
log.3

one before, going back in time (in total there are also 04 and 05; please tell me if you need them too, in case these are not fruitful)
Comment 18 Stakanov Schufter 2023-07-05 19:49:12 UTC
Created attachment 868024 [details]
the most recent 01 (sorry I overlooked)

so this one is dated 07.03.
Comment 19 Charles Arnold 2023-07-05 20:07:05 UTC
There are no exceptions thrown in these logs. The only errors involve
connecting to various missing libvirt sockets, which I presume to be
resolved by a new enough version of Tumbleweed.
What is the version of your Tumbleweed? (cat /etc/os-release)
Comment 20 James Fehlig 2023-07-05 20:55:02 UTC
(In reply to Stakanov Schufter from comment #16)
> file:///home/entropy/.cache/virt-manager/virt-manager.log.2
> file:///home/entropy/.cache/virt-manager/virt-manager.log.3

Do you run virt-manager as root or as the 'entropy' user? The proper log file is /root/.cache/virt-manager/virt-manager.log if running virt-manager as root.
Comment 21 James Fehlig 2023-07-05 21:00:50 UTC
(In reply to Stakanov Schufter from comment #14)
> Thu 2023-06-29 22:14:54 CEST 14818 1000 1000 SIGSEGV missing 
> /usr/bin/python3.11      -
> Mon 2023-07-03 23:14:30 CEST 10156 1000 1000 SIGSEGV present 
> /usr/bin/python3.11  45.1M

'coredumpctl info' will show the command line for these processes, along with other useful info. E.g. 'sudo coredumpctl info 10156' and 'sudo coredumpctl info 14818'.
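And if gdb is installed, you can get a fuller, symbolized backtrace straight from a stored core, e.g.:

sudo coredumpctl debug 10156

which opens gdb on the core; typing 'bt' there prints the backtrace (installing the matching debuginfo packages improves the symbols). On older systemd versions the subcommand is 'coredumpctl gdb'.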
Comment 22 Stakanov Schufter 2023-07-05 21:04:57 UTC
I think I run it as entropy, but I am by no means sure. I gave entropy membership of the kvm and qemu groups, so I do not have to enter a root password when I virtualize, but that may just mean that I am effectively sudo without knowing it. 
How can I tell whether I am running virtualization as a user vs. as root?

Please excuse my ignorance and have patience.
Comment 23 Stakanov Schufter 2023-07-05 21:07:50 UTC
(In reply to James Fehlig from comment #21)
> (In reply to Stakanov Schufter from comment #14)
> > Thu 2023-06-29 22:14:54 CEST 14818 1000 1000 SIGSEGV missing 
> > /usr/bin/python3.11      -
> > Mon 2023-07-03 23:14:30 CEST 10156 1000 1000 SIGSEGV present 
> > /usr/bin/python3.11  45.1M
> 
> 'coredumpctl info' will show the command line for these processes, along
> with other useful info. E.g. 'sudo coredumpctl info 10156' and 'sudo
> coredumpctl info 14818'.

[sudo] password di root: 
           PID: 10156 (virt-manager)
           UID: 1000 (entropy)
           GID: 1000 (entropy)
        Signal: 11 (SEGV)
     Timestamp: Mon 2023-07-03 23:14:28 CEST (1 day 23h ago)
  Command Line: /usr/bin/python3 /usr/bin/virt-manager
    Executable: /usr/bin/python3.11
 Control Group: /user.slice/user-1000.slice/user@1000.service/app.slice/app-virt\x2dmanager-c7190c7c7664404eb5686b742b7e8fa8.scope
          Unit: user@1000.service
     User Unit: app-virt\x2dmanager-c7190c7c7664404eb5686b742b7e8fa8.scope
         Slice: user-1000.slice
     Owner UID: 1000 (entropy)
       Boot ID: f95d306a35b94ed199e610104643f35a
    Machine ID: e908fbf41e1546e696dc06951a998a0f
      Hostname: localhost
       Storage: /var/lib/systemd/coredump/core.virt-manager.1000.f95d306a35b94ed199e610104643f35a.10156.1688418868000000.zst (present)
  Size on Disk: 45.1M
       Message: Process 10156 (virt-manager) of user 1000 dumped core.
                
                Stack trace of thread 10156:
                #0  0x00007f8d69cbf365 g_type_check_instance_is_fundamentally_a (libgobject-2.0.so.0 + 0x3b365)
                #1  0x00007f8d69ca0729 g_object_unref (libgobject-2.0.so.0 + 0x1c729)
                #2  0x00007f8d35bdf8e3 n/a (libspice-client-glib-2.0.so.8 + 0x2f8e3)
                #3  0x00007f8d35bfcdd4 n/a (libspice-client-glib-2.0.so.8 + 0x4cdd4)
                #4  0x00007f8d69d3e41e n/a (libglib-2.0.so.0 + 0x5941e)
                #5  0x00007f8d69d428d8 g_main_context_dispatch (libglib-2.0.so.0 + 0x5d8d8)
                #6  0x00007f8d69d42ce8 n/a (libglib-2.0.so.0 + 0x5dce8)
                #7  0x00007f8d69d42d7c g_main_context_iteration (libglib-2.0.so.0 + 0x5dd7c)
                #8  0x00007f8d69ab583d g_application_run (libgio-2.0.so.0 + 0xe683d)
                #9  0x00007f8d6ab61962 n/a (libffi.so.8 + 0x7962)
                #10 0x00007f8d6ab5e2df n/a (libffi.so.8 + 0x42df)
                #11 0x00007f8d6ab60f26 ffi_call (libffi.so.8 + 0x6f26)
                #12 0x00007f8d69e4fef6 n/a (_gi.cpython-311-x86_64-linux-gnu.so + 0x23ef6)
                #13 0x00007f8d69e4e20c n/a (_gi.cpython-311-x86_64-linux-gnu.so + 0x2220c)
                #14 0x00007f8d6a7dcab1 _PyObject_Call (libpython3.11.so.1.0 + 0x1dcab1)
                #15 0x00007f8d6a7bd27f _PyEval_EvalFrameDefault (libpython3.11.so.1.0 + 0x1bd27f)
                #16 0x00007f8d6a7b552a n/a (libpython3.11.so.1.0 + 0x1b552a)
                #17 0x00007f8d6a83803f PyEval_EvalCode (libpython3.11.so.1.0 + 0x23803f)
                #18 0x00007f8d6a855a73 n/a (libpython3.11.so.1.0 + 0x255a73)
                #19 0x00007f8d6a8520ba n/a (libpython3.11.so.1.0 + 0x2520ba)
                #20 0x00007f8d6a867d32 n/a (libpython3.11.so.1.0 + 0x267d32)
                #21 0x00007f8d6a867814 _PyRun_SimpleFileObject (libpython3.11.so.1.0 + 0x267814)
                #22 0x00007f8d6a8672b4 _PyRun_AnyFileObject (libpython3.11.so.1.0 + 0x2672b4)
                #23 0x00007f8d6a860e78 Py_RunMain (libpython3.11.so.1.0 + 0x260e78)
                #24 0x00007f8d6a827fa7 Py_BytesMain (libpython3.11.so.1.0 + 0x227fa7)
                #25 0x00007f8d6a42abb0 __libc_start_call_main (libc.so.6 + 0x27bb0)
                #26 0x00007f8d6a42ac79 __libc_start_main@@GLIBC_2.34 (libc.so.6 + 0x27c79)
                #27 0x000055eba2f26085 _start (python3.11 + 0x1085)
                
                Stack trace of thread 31114:
                #0  0x00007f8d6a50a44f __poll (libc.so.6 + 0x10744f)
                #1  0x00007f8d35809a3f n/a (libusb-1.0.so.0 + 0xda3f)
                #2  0x00007f8d6a490c24 start_thread (libc.so.6 + 0x8dc24)
                #3  0x00007f8d6a518510 __clone3 (libc.so.6 + 0x115510)
                
                Stack trace of thread 10158:
                #0  0x00007f8d6a50a44f __poll (libc.so.6 + 0x10744f)
                #1  0x00007f8d69d42c5e n/a (libglib-2.0.so.0 + 0x5dc5e)
                #2  0x00007f8d69d42d7c g_main_context_iteration (libglib-2.0.so.0 + 0x5dd7c)
                #3  0x00007f8d69d42dc1 n/a (libglib-2.0.so.0 + 0x5ddc1)
                #4  0x00007f8d69d6ef0e n/a (libglib-2.0.so.0 + 0x89f0e)
                #5  0x00007f8d6a490c24 start_thread (libc.so.6 + 0x8dc24)
                #6  0x00007f8d6a518510 __clone3 (libc.so.6 + 0x115510)
                
                Stack trace of thread 10159:
                #0  0x00007f8d6a50a44f __poll (libc.so.6 + 0x10744f)
                #1  0x00007f8d69d42c5e n/a (libglib-2.0.so.0 + 0x5dc5e)
                #2  0x00007f8d69d42f9f g_main_loop_run (libglib-2.0.so.0 + 0x5df9f)
                #3  0x00007f8d69af18c6 n/a (libgio-2.0.so.0 + 0x1228c6)
                #4  0x00007f8d69d6ef0e n/a (libglib-2.0.so.0 + 0x89f0e)
                #5  0x00007f8d6a490c24 start_thread (libc.so.6 + 0x8dc24)
                #6  0x00007f8d6a518510 __clone3 (libc.so.6 + 0x115510)
                
                Stack trace of thread 10157:
                #0  0x00007f8d6a5103dd syscall (libc.so.6 + 0x10d3dd)
                #1  0x00007f8d69d9c35f g_cond_wait (libglib-2.0.so.0 + 0xb735f)
                #2  0x00007f8d69d0cf4b n/a (libglib-2.0.so.0 + 0x27f4b)
                #3  0x00007f8d69d6f552 n/a (libglib-2.0.so.0 + 0x8a552)
                #4  0x00007f8d69d6ef0e n/a (libglib-2.0.so.0 + 0x89f0e)
                #5  0x00007f8d6a490c24 start_thread (libc.so.6 + 0x8dc24)
                #6  0x00007f8d6a518510 __clone3 (libc.so.6 + 0x115510)
                
                Stack trace of thread 10170:
                #0  0x00007f8d6a48d1ce __futex_abstimed_wait_common (libc.so.6 + 0x8a1ce)
                #1  0x00007f8d6a4990e0 __new_sem_wait_slow64.constprop.0 (libc.so.6 + 0x960e0)
                #2  0x00007f8d6a79f437 PyThread_acquire_lock_timed (libpython3.11.so.1.0 + 0x19f437)
                #3  0x00007f8d6a84d9bf n/a (libpython3.11.so.1.0 + 0x24d9bf)
                #4  0x00007f8d6a84d3eb n/a (libpython3.11.so.1.0 + 0x24d3eb)
                #5  0x00007f8d6a7d2dd7 n/a (libpython3.11.so.1.0 + 0x1d2dd7)
                #6  0x00007f8d6a7c52c3 PyObject_Vectorcall (libpython3.11.so.1.0 + 0x1c52c3)
                #7  0x00007f8d6a7b93a3 _PyEval_EvalFrameDefault (libpython3.11.so.1.0 + 0x1b93a3)
                #8  0x00007f8d6a7b552a n/a (libpython3.11.so.1.0 + 0x1b552a)
                #9  0x00007f8d6a7ef440 n/a (libpython3.11.so.1.0 + 0x1ef440)
                #10 0x00007f8d6a7bd27f _PyEval_EvalFrameDefault (libpython3.11.so.1.0 + 0x1bd27f)
                #11 0x00007f8d6a7b552a n/a (libpython3.11.so.1.0 + 0x1b552a)
                #12 0x00007f8d6a7ef440 n/a (libpython3.11.so.1.0 + 0x1ef440)
                #13 0x00007f8d6a893b0c n/a (libpython3.11.so.1.0 + 0x293b0c)
                #14 0x00007f8d6a865764 n/a (libpython3.11.so.1.0 + 0x265764)
                #15 0x00007f8d6a490c24 start_thread (libc.so.6 + 0x8dc24)
                #16 0x00007f8d6a518510 __clone3 (libc.so.6 + 0x115510)
                
                Stack trace of thread 31483:
                #0  0x00007f8d6a50a44f __poll (libc.so.6 + 0x10744f)
                #1  0x00007f8d3b5c4e31 n/a (libpulse.so.0 + 0x33e31)
                #2  0x00007f8d3b5ae854 pa_mainloop_poll (libpulse.so.0 + 0x1d854)
                #3  0x00007f8d3b5b90d6 pa_mainloop_iterate (libpulse.so.0 + 0x280d6)
                #4  0x00007f8d3b5b9180 pa_mainloop_run (libpulse.so.0 + 0x28180)
                #5  0x00007f8d3b5c8dd9 n/a (libpulse.so.0 + 0x37dd9)
                #6  0x00007f8d3b56723f n/a (libpulsecommon-16.1.so + 0x5d23f)
                #7  0x00007f8d6a490c24 start_thread (libc.so.6 + 0x8dc24)
                #8  0x00007f8d6a518510 __clone3 (libc.so.6 + 0x115510)
                
                Stack trace of thread 10167:
                #0  0x00007f8d6a50a44f __poll (libc.so.6 + 0x10744f)
                #1  0x00007f8d69d42c5e n/a (libglib-2.0.so.0 + 0x5dc5e)
                #2  0x00007f8d69d42d7c g_main_context_iteration (libglib-2.0.so.0 + 0x5dd7c)
                #3  0x00007f8d659fe97d n/a (libdconfsettings.so + 0x697d)
                #4  0x00007f8d69d6ef0e n/a (libglib-2.0.so.0 + 0x89f0e)
                #5  0x00007f8d6a490c24 start_thread (libc.so.6 + 0x8dc24)
                #6  0x00007f8d6a518510 __clone3 (libc.so.6 + 0x115510)
                
                Stack trace of thread 31115:
                #0  0x00007f8d6a50a44f __poll (libc.so.6 + 0x10744f)
                #1  0x00007f8d3580cad0 n/a (libusb-1.0.so.0 + 0x10ad0)
                #2  0x00007f8d3580e7e0 libusb_handle_events_timeout_completed (libusb-1.0.so.0 + 0x127e0)
                #3  0x00007f8d3580e83a libusb_handle_events (libusb-1.0.so.0 + 0x1283a)
                #4  0x00007f8d35c10334 n/a (libspice-client-glib-2.0.so.8 + 0x60334)
                #5  0x00007f8d69d6ef0e n/a (libglib-2.0.so.0 + 0x89f0e)
                #6  0x00007f8d6a490c24 start_thread (libc.so.6 + 0x8dc24)
                #7  0x00007f8d6a518510 __clone3 (libc.so.6 + 0x115510)
                
                Stack trace of thread 10198:
                #0  0x00007f8d6a50a44f __poll (libc.so.6 + 0x10744f)
                #1  0x00007f8d3b5c4e31 n/a (libpulse.so.0 + 0x33e31)
                #2  0x00007f8d3b5ae854 pa_mainloop_poll (libpulse.so.0 + 0x1d854)
                #3  0x00007f8d3b5b90d6 pa_mainloop_iterate (libpulse.so.0 + 0x280d6)
                #4  0x00007f8d3b5b9180 pa_mainloop_run (libpulse.so.0 + 0x28180)
                #5  0x00007f8d3b5c8dd9 n/a (libpulse.so.0 + 0x37dd9)
                #6  0x00007f8d3b56723f n/a (libpulsecommon-16.1.so + 0x5d23f)
                #7  0x00007f8d6a490c24 start_thread (libc.so.6 + 0x8dc24)
                #8  0x00007f8d6a518510 __clone3 (libc.so.6 + 0x115510)
                
                Stack trace of thread 9146:
                #0  0x00007f8d6a50a44f __poll (libc.so.6 + 0x10744f)
                #1  0x00007f8d3580cad0 n/a (libusb-1.0.so.0 + 0x10ad0)
                #2  0x00007f8d3580e7e0 libusb_handle_events_timeout_completed (libusb-1.0.so.0 + 0x127e0)
                #3  0x00007f8d3580e83a libusb_handle_events (libusb-1.0.so.0 + 0x1283a)
                #4  0x00007f8d35c10334 n/a (libspice-client-glib-2.0.so.8 + 0x60334)
                #5  0x00007f8d69d6ef0e n/a (libglib-2.0.so.0 + 0x89f0e)
                #6  0x00007f8d6a490c24 start_thread (libc.so.6 + 0x8dc24)
                #7  0x00007f8d6a518510 __clone3 (libc.so.6 + 0x115510)
                
                Stack trace of thread 8526:
                #0  0x00007f8d6a50a44f __poll (libc.so.6 + 0x10744f)
                #1  0x00007f8d3580cad0 n/a (libusb-1.0.so.0 + 0x10ad0)
                #2  0x00007f8d3580e7e0 libusb_handle_events_timeout_completed (libusb-1.0.so.0 + 0x127e0)
                #3  0x00007f8d3580e83a libusb_handle_events (libusb-1.0.so.0 + 0x1283a)
                #4  0x00007f8d35c10334 n/a (libspice-client-glib-2.0.so.8 + 0x60334)
                #5  0x00007f8d69d6ef0e n/a (libglib-2.0.so.0 + 0x89f0e)
                #6  0x00007f8d6a490c24 start_thread (libc.so.6 + 0x8dc24)
                #7  0x00007f8d6a518510 __clone3 (libc.so.6 + 0x115510)
                
                Stack trace of thread 31487:
                #0  0x00007f8d6a50a44f __poll (libc.so.6 + 0x10744f)
                #1  0x00007f8d3b5c4e31 n/a (libpulse.so.0 + 0x33e31)
                #2  0x00007f8d3b5ae854 pa_mainloop_poll (libpulse.so.0 + 0x1d854)
                #3  0x00007f8d3b5b90d6 pa_mainloop_iterate (libpulse.so.0 + 0x280d6)
                #4  0x00007f8d3b5b9180 pa_mainloop_run (libpulse.so.0 + 0x28180)
                #5  0x00007f8d3b5c8dd9 n/a (libpulse.so.0 + 0x37dd9)
                #6  0x00007f8d3b56723f n/a (libpulsecommon-16.1.so + 0x5d23f)
                #7  0x00007f8d6a490c24 start_thread (libc.so.6 + 0x8dc24)
                #8  0x00007f8d6a518510 __clone3 (libc.so.6 + 0x115510)
                
                Stack trace of thread 9275:
                #0  0x00007f8d6a50a44f __poll (libc.so.6 + 0x10744f)
                #1  0x00007f8d3580cad0 n/a (libusb-1.0.so.0 + 0x10ad0)
                #2  0x00007f8d3580e7e0 libusb_handle_events_timeout_completed (libusb-1.0.so.0 + 0x127e0)
                #3  0x00007f8d3580e83a libusb_handle_events (libusb-1.0.so.0 + 0x1283a)
                #4  0x00007f8d35c10334 n/a (libspice-client-glib-2.0.so.8 + 0x60334)
                #5  0x00007f8d69d6ef0e n/a (libglib-2.0.so.0 + 0x89f0e)
                #6  0x00007f8d6a490c24 start_thread (libc.so.6 + 0x8dc24)
                #7  0x00007f8d6a518510 __clone3 (libc.so.6 + 0x115510)
                ELF object binary architecture: AMD x86-64
Comment 24 Stakanov Schufter 2023-07-05 21:10:22 UTC
entropy@localhost:~> sudo coredumpctl info 14818
           PID: 14818 (virt-manager)
           UID: 1000 (entropy)
           GID: 1000 (entropy)
        Signal: 11 (SEGV)
     Timestamp: Thu 2023-06-29 22:14:53 CEST (6 days ago)
  Command Line: /usr/bin/python3 /usr/bin/virt-manager
    Executable: /usr/bin/python3.11
 Control Group: /user.slice/user-1000.slice/user@1000.service/app.slice/app-virt\x2dmanager-d86fe80d287c4fe2a26f6423d1d82def.scope
          Unit: user@1000.service
     User Unit: app-virt\x2dmanager-d86fe80d287c4fe2a26f6423d1d82def.scope
         Slice: user-1000.slice
     Owner UID: 1000 (entropy)
       Boot ID: dd1cdeee3037458fade9da01b27a56c9
    Machine ID: e908fbf41e1546e696dc06951a998a0f
      Hostname: localhost
       Storage: /var/lib/systemd/coredump/core.virt-manager.1000.dd1cdeee3037458fade9da01b27a56c9.14818.1688069693000000.zst (missing)
       Message: Process 14818 (virt-manager) of user 1000 dumped core.
                
                Stack trace of thread 14818:
                #0  0x00007f9f1170d365 g_type_check_instance_is_fundamentally_a (libgobject-2.0.so.0 + 0x3b365)
                #1  0x00007f9f116ee729 g_object_unref (libgobject-2.0.so.0 + 0x1c729)
                #2  0x00007f9f0a01b8e3 n/a (libspice-client-glib-2.0.so.8 + 0x2f8e3)
                #3  0x00007f9f0a038dd4 n/a (libspice-client-glib-2.0.so.8 + 0x4cdd4)
                #4  0x00007f9f1178c41e n/a (libglib-2.0.so.0 + 0x5941e)
                #5  0x00007f9f117908d8 g_main_context_dispatch (libglib-2.0.so.0 + 0x5d8d8)
                #6  0x00007f9f11790ce8 n/a (libglib-2.0.so.0 + 0x5dce8)
                #7  0x00007f9f11790d7c g_main_context_iteration (libglib-2.0.so.0 + 0x5dd7c)
                #8  0x00007f9f1150383d g_application_run (libgio-2.0.so.0 + 0xe683d)
                #9  0x00007f9f11a0c962 n/a (libffi.so.8 + 0x7962)
                #10 0x00007f9f11a092df n/a (libffi.so.8 + 0x42df)
                #11 0x00007f9f11a0bf26 ffi_call (libffi.so.8 + 0x6f26)
                #12 0x00007f9f1189def6 n/a (_gi.cpython-311-x86_64-linux-gnu.so + 0x23ef6)
                #13 0x00007f9f1189c20c n/a (_gi.cpython-311-x86_64-linux-gnu.so + 0x2220c)
                #14 0x00007f9f121dd361 _PyObject_Call (libpython3.11.so.1.0 + 0x1dd361)
                #15 0x00007f9f121bd6da _PyEval_EvalFrameDefault (libpython3.11.so.1.0 + 0x1bd6da)
                #16 0x00007f9f121b56da n/a (libpython3.11.so.1.0 + 0x1b56da)
                #17 0x00007f9f122388df PyEval_EvalCode (libpython3.11.so.1.0 + 0x2388df)
                #18 0x00007f9f122562e3 n/a (libpython3.11.so.1.0 + 0x2562e3)
                #19 0x00007f9f1225292a n/a (libpython3.11.so.1.0 + 0x25292a)
                #20 0x00007f9f122685a2 n/a (libpython3.11.so.1.0 + 0x2685a2)
                #21 0x00007f9f12268084 _PyRun_SimpleFileObject (libpython3.11.so.1.0 + 0x268084)
                #22 0x00007f9f12267b24 _PyRun_AnyFileObject (libpython3.11.so.1.0 + 0x267b24)
                #23 0x00007f9f122616e8 Py_RunMain (libpython3.11.so.1.0 + 0x2616e8)
                #24 0x00007f9f12228877 Py_BytesMain (libpython3.11.so.1.0 + 0x228877)
                #25 0x00007f9f11e2abb0 __libc_start_call_main (libc.so.6 + 0x27bb0)
                #26 0x00007f9f11e2ac79 __libc_start_main@@GLIBC_2.34 (libc.so.6 + 0x27c79)
                #27 0x000055cc92a0c085 _start (python3.11 + 0x1085)
                
                Stack trace of thread 14819:
                #0  0x00007f9f11f103dd syscall (libc.so.6 + 0x10d3dd)
                #1  0x00007f9f117ea35f g_cond_wait (libglib-2.0.so.0 + 0xb735f)
                #2  0x00007f9f1175af4b n/a (libglib-2.0.so.0 + 0x27f4b)
                #3  0x00007f9f117bd552 n/a (libglib-2.0.so.0 + 0x8a552)
                #4  0x00007f9f117bcf0e n/a (libglib-2.0.so.0 + 0x89f0e)
                #5  0x00007f9f11e90c24 start_thread (libc.so.6 + 0x8dc24)
                #6  0x00007f9f11f18510 __clone3 (libc.so.6 + 0x115510)
                
                Stack trace of thread 14821:
                #0  0x00007f9f11f0a44f __poll (libc.so.6 + 0x10744f)
                #1  0x00007f9f11790c5e n/a (libglib-2.0.so.0 + 0x5dc5e)
                #2  0x00007f9f11790f9f g_main_loop_run (libglib-2.0.so.0 + 0x5df9f)
                #3  0x00007f9f1153f8c6 n/a (libgio-2.0.so.0 + 0x1228c6)
                #4  0x00007f9f117bcf0e n/a (libglib-2.0.so.0 + 0x89f0e)
                #5  0x00007f9f11e90c24 start_thread (libc.so.6 + 0x8dc24)
                #6  0x00007f9f11f18510 __clone3 (libc.so.6 + 0x115510)
                
                Stack trace of thread 14820:
                #0  0x00007f9f11f0a44f __poll (libc.so.6 + 0x10744f)
                #1  0x00007f9f11790c5e n/a (libglib-2.0.so.0 + 0x5dc5e)
                #2  0x00007f9f11790d7c g_main_context_iteration (libglib-2.0.so.0 + 0x5dd7c)
                #3  0x00007f9f11790dc1 n/a (libglib-2.0.so.0 + 0x5ddc1)
                #4  0x00007f9f117bcf0e n/a (libglib-2.0.so.0 + 0x89f0e)
                #5  0x00007f9f11e90c24 start_thread (libc.so.6 + 0x8dc24)
                #6  0x00007f9f11f18510 __clone3 (libc.so.6 + 0x115510)
                
                Stack trace of thread 14837:
                #0  0x00007f9f11f0a44f __poll (libc.so.6 + 0x10744f)
                #1  0x00007f9f0806fe31 n/a (libpulse.so.0 + 0x33e31)
                #2  0x00007f9f08059854 pa_mainloop_poll (libpulse.so.0 + 0x1d854)
                #3  0x00007f9f080640d6 pa_mainloop_iterate (libpulse.so.0 + 0x280d6)
                #4  0x00007f9f08064180 pa_mainloop_run (libpulse.so.0 + 0x28180)
                #5  0x00007f9f08073dd9 n/a (libpulse.so.0 + 0x37dd9)
                #6  0x00007f9eeb7d523f n/a (libpulsecommon-16.1.so + 0x5d23f)
                #7  0x00007f9f11e90c24 start_thread (libc.so.6 + 0x8dc24)
                #8  0x00007f9f11f18510 __clone3 (libc.so.6 + 0x115510)
                
                Stack trace of thread 14979:
                #0  0x00007f9f11f0a44f __poll (libc.so.6 + 0x10744f)
                #1  0x00007f9f0822fa3f n/a (libusb-1.0.so.0 + 0xda3f)
                #2  0x00007f9f11e90c24 start_thread (libc.so.6 + 0x8dc24)
                #3  0x00007f9f11f18510 __clone3 (libc.so.6 + 0x115510)
                
                Stack trace of thread 15084:
                #0  0x00007f9f11f0a44f __poll (libc.so.6 + 0x10744f)
                #1  0x00007f9f0806fe31 n/a (libpulse.so.0 + 0x33e31)
                #2  0x00007f9f08059854 pa_mainloop_poll (libpulse.so.0 + 0x1d854)
                #3  0x00007f9f080640d6 pa_mainloop_iterate (libpulse.so.0 + 0x280d6)
                #4  0x00007f9f08064180 pa_mainloop_run (libpulse.so.0 + 0x28180)
                #5  0x00007f9f08073dd9 n/a (libpulse.so.0 + 0x37dd9)
                #6  0x00007f9eeb7d523f n/a (libpulsecommon-16.1.so + 0x5d23f)
                #7  0x00007f9f11e90c24 start_thread (libc.so.6 + 0x8dc24)
                #8  0x00007f9f11f18510 __clone3 (libc.so.6 + 0x115510)
                
                Stack trace of thread 14822:
                #0  0x00007f9f11f0a44f __poll (libc.so.6 + 0x10744f)
                #1  0x00007f9f11790c5e n/a (libglib-2.0.so.0 + 0x5dc5e)
                #2  0x00007f9f11790d7c g_main_context_iteration (libglib-2.0.so.0 + 0x5dd7c)
                #3  0x00007f9f0ca4897d n/a (libdconfsettings.so + 0x697d)
                #4  0x00007f9f117bcf0e n/a (libglib-2.0.so.0 + 0x89f0e)
                #5  0x00007f9f11e90c24 start_thread (libc.so.6 + 0x8dc24)
                #6  0x00007f9f11f18510 __clone3 (libc.so.6 + 0x115510)
                
                Stack trace of thread 14980:
                #0  0x00007f9f11f0a44f __poll (libc.so.6 + 0x10744f)
                #1  0x00007f9f08232ad0 n/a (libusb-1.0.so.0 + 0x10ad0)
                #2  0x00007f9f082347e0 libusb_handle_events_timeout_completed (libusb-1.0.so.0 + 0x127e0)
                #3  0x00007f9f0823483a libusb_handle_events (libusb-1.0.so.0 + 0x1283a)
                #4  0x00007f9f0a04c334 n/a (libspice-client-glib-2.0.so.8 + 0x60334)
                #5  0x00007f9f117bcf0e n/a (libglib-2.0.so.0 + 0x89f0e)
                #6  0x00007f9f11e90c24 start_thread (libc.so.6 + 0x8dc24)
                #7  0x00007f9f11f18510 __clone3 (libc.so.6 + 0x115510)
                
                Stack trace of thread 14825:
                #0  0x00007f9f11e8d1ce __futex_abstimed_wait_common (libc.so.6 + 0x8a1ce)
                #1  0x00007f9f11e990e0 __new_sem_wait_slow64.constprop.0 (libc.so.6 + 0x960e0)
                #2  0x00007f9f1219f617 PyThread_acquire_lock_timed (libpython3.11.so.1.0 + 0x19f617)
                #3  0x00007f9f1224e23f n/a (libpython3.11.so.1.0 + 0x24e23f)
                #4  0x00007f9f1224dc6b n/a (libpython3.11.so.1.0 + 0x24dc6b)
                #5  0x00007f9f121d3657 n/a (libpython3.11.so.1.0 + 0x1d3657)
                #6  0x00007f9f121c5b03 PyObject_Vectorcall (libpython3.11.so.1.0 + 0x1c5b03)
                #7  0x00007f9f121b95ca _PyEval_EvalFrameDefault (libpython3.11.so.1.0 + 0x1b95ca)
                #8  0x00007f9f121b56da n/a (libpython3.11.so.1.0 + 0x1b56da)
                #9  0x00007f9f121efcf0 n/a (libpython3.11.so.1.0 + 0x1efcf0)
                #10 0x00007f9f121bd6da _PyEval_EvalFrameDefault (libpython3.11.so.1.0 + 0x1bd6da)
                #11 0x00007f9f121b56da n/a (libpython3.11.so.1.0 + 0x1b56da)
                #12 0x00007f9f121efcf0 n/a (libpython3.11.so.1.0 + 0x1efcf0)
                #13 0x00007f9f1229438c n/a (libpython3.11.so.1.0 + 0x29438c)
                #14 0x00007f9f12265fd4 n/a (libpython3.11.so.1.0 + 0x265fd4)
                #15 0x00007f9f11e90c24 start_thread (libc.so.6 + 0x8dc24)
                #16 0x00007f9f11f18510 __clone3 (libc.so.6 + 0x115510)
                
                Stack trace of thread 15088:
                #0  0x00007f9f11f0a44f __poll (libc.so.6 + 0x10744f)
                #1  0x00007f9f0806fe31 n/a (libpulse.so.0 + 0x33e31)
                #2  0x00007f9f08059854 pa_mainloop_poll (libpulse.so.0 + 0x1d854)
                #3  0x00007f9f080640d6 pa_mainloop_iterate (libpulse.so.0 + 0x280d6)
                #4  0x00007f9f08064180 pa_mainloop_run (libpulse.so.0 + 0x28180)
                #5  0x00007f9f08073dd9 n/a (libpulse.so.0 + 0x37dd9)
                #6  0x00007f9eeb7d523f n/a (libpulsecommon-16.1.so + 0x5d23f)
                #7  0x00007f9f11e90c24 start_thread (libc.so.6 + 0x8dc24)
                #8  0x00007f9f11f18510 __clone3 (libc.so.6 + 0x115510)
                ELF object binary architecture: AMD x86-64
Comment 25 James Fehlig 2023-07-05 21:18:27 UTC
(In reply to Stakanov Schufter from comment #22)
> How can I tell that I am running virtualization as user vs root. 

In virt-manager, you can right-click on the connection, select 'Details', then check the 'Libvirt URI'. 'qemu:///session' for user vs 'qemu:///system' for root.
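From a terminal, 'virsh uri' prints the default URI for the invoking user, which can help cross-check (though virt-manager may be configured with a different connection):

virsh uri
sudo virsh uri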
Comment 26 Stakanov Schufter 2023-07-05 21:25:31 UTC
(In reply to James Fehlig from comment #25)
> (In reply to Stakanov Schufter from comment #22)
> > How can I tell that I am running virtualization as user vs root. 
> 
> In virt-manager, you can right-click on the connection, select 'Details',
> then check the 'Libvirt URI'. 'qemu:///session' for user vs 'qemu:///system'
> for root.

O.K., thank you very much. 
So you got the wrong file from me; it is ///system, so I am root (and thank you for teaching me this). 
I will post the desired files here, but it will take some hours because I am "done" and need some sleep. Check back here in about 7 hours or so; I will post the root ones. 
Thanks for your understanding.
Comment 27 James Fehlig 2023-07-05 21:38:33 UTC
(In reply to Stakanov Schufter from comment #26)
> So you got the wrong file from me; it is ///system, so I am root (and
> thank you for teaching me this).

The more you've described, the more I think we have the right log file. IIUC, you run virt-manager as your user, but have a root-authenticated connection to qemu:///system. If so, we have the correct file. If instead you run virt-manager as root (e.g. 'sudo virt-manager'), then yes, we need /root/.cache/virt-manager/virt-manager.log.
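One quick way to confirm which user the virt-manager process itself runs as (the coredumps above suggest the process name is recorded as virt-manager):

ps -C virt-manager -o user=,pid=,cmd=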
Comment 28 Dario Faggioli 2023-07-19 13:56:04 UTC
So... no news on this for a while; is the issue still there?

We discussed this further, and we're thinking that virt-manager could just be the victim of some Python (or other) changes here. That does not mean we don't want to help fix things, but it's important for figuring out what to fix (if there is still anything that needs fixing).

E.g., one thing that was requested by Charles, in comment 7, and that I don't think has been tried yet, is to check virt-viewer. Did you manage to do that test? How did it go?
Comment 29 Dario Faggioli 2023-08-02 16:57:15 UTC
Long time, no update. The bug might have been due to a temporary Python issue, so I'm closing it. Do reopen it if it's still a problem.