Bug 1212195

Summary: libvirtd terminates when virt-manager connects to qemu:///system on a fresh tumbleweed installation
Product: [openSUSE] openSUSE Tumbleweed
Reporter: Hillwood Yang <hillwoodroc>
Component: Virtualization:Tools
Assignee: James Fehlig <jfehlig>
Status: RESOLVED FIXED
QA Contact: E-mail List <qa-bugs>
Severity: Critical
Priority: P5 - None
CC: 95kreaninw95, carnold, emiliano.langella, felix.niederwanger, hillwoodroc, hp.jansen, ioannis.bonatakis, jfehlig, manzek, oliver, opensuse_buildservice, tvarsis
Version: Current
Target Milestone: ---
Hardware: Other
OS: Other
Whiteboard:
Found By: ---
Services Priority:
Business Priority:
Blocker: ---
Marketing QA Status: ---
IT Deployment: ---
Attachments: virt-manager

Description Hillwood Yang 2023-06-10 10:25:02 UTC
Created attachment 867497 [details]
virt-manager

This occurs on a fresh Tumbleweed installation. libvirtd is running, but virt-manager cannot connect to qemu:///system. I cannot find any useful information via systemctl or journalctl.
Comment 1 Hillwood Yang 2023-06-10 12:26:27 UTC
libvirtd crashes when virt-manager connects to qemu:///system. Here is the gdb backtrace:

#0  0x00007ff16c1a844f in __GI___poll (fds=0x564d85dd08b0, nfds=8, timeout=119099)
    at ../sysdeps/unix/sysv/linux/poll.c:29
#1  0x00007ff16c516c5e in g_main_context_poll
    (priority=<optimized out>, n_fds=8, fds=0x564d85dd08b0, timeout=<optimized out>, context=0x564d85dd0180) at ../glib/gmain.c:4584
#2  g_main_context_iterate
    (context=context@entry=0x564d85dd0180, block=block@entry=1, dispatch=dispatch@entry=1, self=<optimized out>) at ../glib/gmain.c:4271
#3  0x00007ff16c516d7c in g_main_context_iteration (context=0x564d85dd0180, 
    context@entry=0x0, may_block=may_block@entry=1) at ../glib/gmain.c:4343
#4  0x00007ff16c6e2b10 in virEventGLibRunOnce () at ../src/util/vireventglib.c:515
#5  0x00007ff16c7f9142 in virNetDaemonRun (dmn=0x564d85d63d10 [virNetDaemon], dmn@entry=0x7)
    at ../src/rpc/virnetdaemon.c:838
#6  0x0000564d84fb53eb in main (argc=<optimized out>, argv=<optimized out>)
    at ../src/remote/remote_daemon.c:1213
Comment 2 James Fehlig 2023-06-11 14:19:38 UTC
(In reply to Hillwood Yang from comment #1)
> libvirtd crashes when virt-manager connects to qemu:///system. Here is the
> gdb backtrace:
> 
> #0  0x00007ff16c1a844f in __GI___poll (fds=0x564d85dd08b0, nfds=8,
> timeout=119099)
>     at ../sysdeps/unix/sysv/linux/poll.c:29
> #1  0x00007ff16c516c5e in g_main_context_poll
>     (priority=<optimized out>, n_fds=8, fds=0x564d85dd08b0,
> timeout=<optimized out>, context=0x564d85dd0180) at ../glib/gmain.c:4584
> #2  g_main_context_iterate
>     (context=context@entry=0x564d85dd0180, block=block@entry=1,
> dispatch=dispatch@entry=1, self=<optimized out>) at ../glib/gmain.c:4271
> #3  0x00007ff16c516d7c in g_main_context_iteration (context=0x564d85dd0180, 
>     context@entry=0x0, may_block=may_block@entry=1) at ../glib/gmain.c:4343
> #4  0x00007ff16c6e2b10 in virEventGLibRunOnce () at
> ../src/util/vireventglib.c:515
> #5  0x00007ff16c7f9142 in virNetDaemonRun (dmn=0x564d85d63d10
> [virNetDaemon], dmn@entry=0x7)
>     at ../src/rpc/virnetdaemon.c:838
> #6  0x0000564d84fb53eb in main (argc=<optimized out>, argv=<optimized out>)
>     at ../src/remote/remote_daemon.c:1213

That's not a crash. It's a backtrace of the main thread waiting in poll. If libvirtd crashed, systemd should have recorded a core dump. Do you see any with 'coredumpctl list' or in /var/lib/systemd/coredump/? If so, please attach the core file.
Comment 3 Hillwood Yang 2023-06-12 12:24:17 UTC
I don't see any useful information in coredumpctl or /var/lib/systemd/coredump:

localhost:/home/hillwood # coredumpctl
TIME                          PID  UID  GID SIG     COREFILE EXE                  SIZE
Sat 2023-06-10 20:51:49 CST 17426 1000 1000 SIGSEGV present  /usr/bin/zenity      6.1M
Sat 2023-06-10 21:49:53 CST  8907 1000 1000 SIGABRT present  /usr/bin/gjs-console 7.2M


But after I try to connect to qemu:///system with virt-manager, 'systemctl status libvirtd' outputs this:

Jun 12 20:11:29 localhost systemd[1]: Stopping Virtualization daemon...
Jun 12 20:11:29 localhost systemd[1]: libvirtd.service: Deactivated successfully.
Jun 12 20:11:29 localhost systemd[1]: Stopped Virtualization daemon.

It seems that the connection attempt triggers the exit of libvirtd.
Comment 4 James Fehlig 2023-06-12 14:16:43 UTC
So nothing virtualization related is crashing. Is libvirtd enabled? E.g. 'systemctl is-enabled libvirtd'? You said it was a fresh Tumbleweed installation. How did you install/enable the virtualization components? Did you select "KVM server" pattern at install time, or did you install libvirt, qemu, etc after installing Tumbleweed?

Along with answering the above questions, it might be helpful to attach the contents of /root/.cache/virt-manager/virt-manager.log and libvirtd log (e.g. 'journalctl -fu libvirtd') when attempting to connect.
Comment 5 Hillwood Yang 2023-06-12 14:42:17 UTC
Before connecting:

hillwood@localhost:~> sudo systemctl status libvirtd
● libvirtd.service - Virtualization daemon
     Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; preset: disabled)
     Active: active (running) since Mon 2023-06-12 22:27:15 CST; 1min 14s ago
TriggeredBy: ● libvirtd-admin.socket
             ● libvirtd.socket
             ● libvirtd-ro.socket
       Docs: man:libvirtd(8)
             https://libvirt.org
   Main PID: 3922 (libvirtd)
      Tasks: 20 (limit: 32768)
        CPU: 364ms
     CGroup: /system.slice/libvirtd.service
             └─3922 /usr/sbin/libvirtd --timeout 120

Jun 12 22:27:14 localhost systemd[1]: Starting Virtualization daemon...
Jun 12 22:27:15 localhost systemd[1]: Started Virtualization daemon.

After connecting:
hillwood@localhost:~> sudo systemctl status libvirtd
○ libvirtd.service - Virtualization daemon
     Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; preset: disabled)
     Active: inactive (dead) since Mon 2023-06-12 22:28:48 CST; 15s ago
   Duration: 1min 33.834s
TriggeredBy: ● libvirtd-admin.socket
             ● libvirtd.socket
             ● libvirtd-ro.socket
       Docs: man:libvirtd(8)
             https://libvirt.org
    Process: 3922 ExecStart=/usr/sbin/libvirtd $LIBVIRTD_ARGS (code=exited, status=0/SUCCESS)
   Main PID: 3922 (code=exited, status=0/SUCCESS)
        CPU: 375ms

Jun 12 22:27:14 localhost systemd[1]: Starting Virtualization daemon...
Jun 12 22:27:15 localhost systemd[1]: Started Virtualization daemon.
Jun 12 22:28:48 localhost systemd[1]: Stopping Virtualization daemon...
Jun 12 22:28:48 localhost systemd[1]: libvirtd.service: Deactivated successfully.
Jun 12 22:28:48 localhost systemd[1]: Stopped Virtualization daemon.

hillwood@localhost:~> sudo journalctl -fu libvirtd
Jun 12 22:34:49 localhost systemd[1]: Starting Virtualization daemon...
Jun 12 22:34:50 localhost systemd[1]: Started Virtualization daemon.
Jun 12 22:34:52 localhost systemd[1]: Stopping Virtualization daemon...
Jun 12 22:34:52 localhost systemd[1]: libvirtd.service: Deactivated successfully.
Jun 12 22:34:52 localhost systemd[1]: Stopped Virtualization daemon.
Jun 12 22:35:47 localhost systemd[1]: Starting Virtualization daemon...
Jun 12 22:35:48 localhost systemd[1]: Started Virtualization daemon.
Jun 12 22:35:49 localhost systemd[1]: Stopping Virtualization daemon...
Jun 12 22:35:49 localhost systemd[1]: libvirtd.service: Deactivated successfully.
Jun 12 22:35:49 localhost systemd[1]: Stopped Virtualization daemon.


I did not stop libvirtd, but it stopped after I connected to qemu:///system with virt-manager.

I installed Tumbleweed in KVM, and this issue reproduces.
Comment 6 James Fehlig 2023-06-12 15:39:14 UTC
(In reply to Hillwood Yang from comment #5)
> I installed Tumbleweed in KVM, and this issue reproduces.

Do you mean Tumbleweed is installed in a KVM VM?
Comment 7 Hillwood Yang 2023-06-13 01:22:47 UTC
(In reply to James Fehlig from comment #6)
> (In reply to Hillwood Yang from comment #5)
> > I installed Tumbleweed in KVM, and this issue reproduces.
> 
> Do you mean Tumbleweed is installed in a KVM VM?

Yes, I do.
Comment 8 James Fehlig 2023-06-13 13:57:43 UTC
(In reply to Hillwood Yang from comment #7)
> (In reply to James Fehlig from comment #6)
> > Do you mean Tumbleweed is installed in a KVM VM?
> 
> Yes, I do.

How did you install kvm/qemu+libvirt? Did you select the kvm_server or kvm_tools pattern? Or did you use the yast "Install hypervisor and tools" module? Or maybe installed manually with "zypper install ..."? I'd like to know how the libvirt-daemon package, which contains libvirtd, got installed. If using yast or the kvm_tools pattern to install kvm+libvirt on a fresh Tumbleweed install, the libvirt-daemon package (and hence libvirtd) should not be installed. virtqemud from the libvirt-daemon-driver-qemu package should be used instead. Think of it as a libvirtd specifically for qemu/kvm. You can enable it as usual with 'systemctl enable virtqemud.socket'.
Comment 9 James Fehlig 2023-06-13 20:56:30 UTC
After some experimenting I think I understand the problem. I installed a fresh TW KVM guest and within it used the yast "Install hypervisor and tools" to install "kvm_tools". The yast module installed all packages required for a libvirt+kvm host and enabled virtqemud.socket. The installation works fine with virsh. But when trying to start virt-manager, it complains that "The libvirtd service does not appear to be installed. Install and run the libvirtd service to manage virtualization on this host."

That's not correct in the modern times of libvirt modular daemons. It should connect to the per-hypervisor daemon, or perhaps use the default URI for local connections and allow the library to figure out the correct thing to connect to.

BTW, if one takes virt-manager's advice and installs the libvirt-daemon package and enables libvirtd, it will be terminated as the reporter observes, since virtqemud.service contains 'Conflicts=libvirtd.service'.
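That conflict can be confirmed on a live system with 'systemctl show -p Conflicts virtqemud.service'. As a self-contained illustration (the unit snippet below is a trimmed, hypothetical sample, not the full shipped unit file):

```shell
# Hypothetical, trimmed-down sample of the [Unit] section of
# virtqemud.service; on a real system, query the live unit instead with:
#   systemctl show -p Conflicts virtqemud.service
cat > /tmp/virtqemud.service.sample <<'EOF'
[Unit]
Description=Virtualization qemu daemon
Conflicts=libvirtd.service
EOF

# The Conflicts= directive is what makes systemd stop libvirtd when
# virtqemud starts (and vice versa):
grep '^Conflicts=' /tmp/virtqemud.service.sample
```

When two units conflict, starting one queues a stop job for the other, which matches the silent "Deactivated successfully" the reporter saw in comment 3.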
Comment 10 Hillwood Yang 2023-06-14 13:07:14 UTC
Based on your analysis, I came up with a workaround.

sudo systemctl disable virtqemud.service
sudo systemctl stop virtqemud.service virtqemud.socket virtqemud-ro.socket virtqemud-admin.socket
sudo systemctl enable libvirtd
sudo systemctl start libvirtd

I have tested this workaround and it is effective. virt-manager cannot work with virtqemud, so we have to do this if we use virt-manager to manage qemu:///system.
Comment 11 James Fehlig 2023-06-14 16:10:28 UTC
(In reply to Hillwood Yang from comment #10)
> Based on your analysis, I came up with a workaround.
> 
> sudo systemctl disable virtqemud.service
> sudo systemctl stop virtqemud.service virtqemud.socket virtqemud-ro.socket
> virtqemud-admin.socket
> sudo systemctl enable libvirtd
> sudo systemctl start libvirtd
> 
> I have tested this workaround and it is effective.

Yes, this is one workaround. The other would be to disable libvirtd and enable all the necessary modular daemons. E.g.

systemctl stop libvirtd
systemctl disable libvirtd
for drv in qemu network nodedev nwfilter secret storage
  do
    systemctl enable virt${drv}d.service
    systemctl enable virt${drv}d{,-ro,-admin}.socket
  done
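Note that the `virt${drv}d{,-ro,-admin}.socket` brace expansion requires bash. For reference, this sketch just prints the unit names the loop operates on, without touching systemd (plain POSIX sh, no brace expansion needed):

```shell
# Print the unit names the enable loop above would act on,
# without actually enabling anything.
for drv in qemu network nodedev nwfilter secret storage
  do
    echo "virt${drv}d.service"
    for suffix in "" -ro -admin
      do
        echo "virt${drv}d${suffix}.socket"
      done
  done
```

That's four units per driver (service plus three sockets), 24 in total.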

I know this is not very user friendly, so I'm investigating using systemd presets to enable all the necessary sockets during package installation.

> virt-manager cannot
> work with virtqemud. We have to do this if we use virt-manager to manage
> qemu:///system.

It can work with virtqemud with my above configuration and Charles' virt-manager fix. We'll see a comment in the bug when the fixed virt-manager is submitted to Factory.

(BTW: I no longer need info from you, so clearing the flag.)
Comment 12 James Fehlig 2023-06-14 16:14:00 UTC
Changing the summary to clarify libvirtd is terminated, not crashing.
Comment 13 OBSbugzilla Bot 2023-06-14 17:55:02 UTC
This is an autogenerated message for OBS integration:
This bug (1212195) was mentioned in
https://build.opensuse.org/request/show/1093153 Factory / virt-manager
Comment 14 Yiannis Bonatakis 2023-06-18 20:01:04 UTC
I had the same problem with a not-fresh TW install. And yes, the steps provided (see comment 11) seem to work for me as well.
Comment 15 Emiliano Langella 2023-06-18 20:06:29 UTC
I had the same issue, and comment#11 fixed it.
Comment 16 James Fehlig 2023-06-21 21:28:13 UTC
IMO there are essentially two aspects of this bug. One is virt-manager insisting on communicating with libvirtd, which is incorrect since it's fine and valid to run a modular daemon setup. Charles has fixed that issue and submitted an updated virt-manager to Factory (see #13).

The other aspect is that all of the modular daemons need to be enabled on a fresh TW install. In the old days of the monolithic daemon, it was already a stretch to require users to enable libvirtd.socket and virtlogd.socket. In the new world of modular daemons, it's unreasonable to expect users to enable the sockets of all these daemons. To avoid that, I'm considering the following change to the systemd-presets-branding-openSUSE package. Comments or suggestions welcome.

Index: default-openSUSE.preset
===================================================================
--- default-openSUSE.preset     (revision c7219f66e211bee1da3b31cafa3830cb)
+++ default-openSUSE.preset     (working copy)
@@ -10,3 +10,19 @@
 enable hylafax-usage.timer
 enable storeBackup.timer
 enable drkonqi-coredump-processor@.service
+enable virtqemud.socket
+enable virtqemud-ro.socket
+enable virtqemud-admin.socket
+enable virtxend.socket
+enable virtxend-ro.socket
+enable virtxend-admin.socket
+enable virtlogd.socket
+enable virtlogd-ro.socket
+enable virtnetworkd.socket
+enable virtnetworkd-ro.socket
+enable virtnodedevd.socket
+enable virtnodedevd-ro.socket
+enable virtsecretd.socket
+enable virtsecretd-ro.socket
+enable virtstoraged.socket
+enable virtstoraged-ro.socket
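For background, preset files use first-match-wins semantics: when 'systemctl preset <unit>' runs (typically from a package's %post scriptlet), the first line whose glob matches the unit name decides enable vs. disable. Below is a simplified shell re-implementation of that matching against a hypothetical sample preset file; it is a toy model, not what systemd actually executes:

```shell
# Toy model of systemd preset matching: scan lines top-down, the first
# pattern that matches the unit name wins.
cat > /tmp/sample.preset <<'EOF'
enable virtqemud.socket
enable virtqemud-ro.socket
disable *
EOF

unit=virtqemud.socket
while read -r action pattern; do
  # An unquoted $pattern in a case label acts as a shell glob, mirroring
  # the glob matching systemd performs on preset lines.
  case "$unit" in
    $pattern) echo "$unit: $action"; break ;;
  esac
done < /tmp/sample.preset
```

Assuming a trailing catch-all `disable *` rule (as distro preset sets commonly ship), any socket not listed explicitly falls through to disabled, which is why the diff above enumerates each one.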
Comment 17 Oliver Schwabedissen 2023-06-22 04:58:53 UTC
For me, comment #11 didn't work. I switched from VirtualBox to qemu/kvm 2 years ago and it had worked flawlessly since then. On Monday virt-manager stopped working for me (I was on snapshot 20230610 until Sunday because of problems with apparmor and snap).

I tried comment #11 yesterday, but this morning after starting the system, virt-manager again didn't connect. So I enabled and started libvirtd again, and now virt-manager was able to connect to qemu:///system.
Comment 18 James Fehlig 2023-06-22 16:45:52 UTC
(In reply to Oliver Schwabedissen from comment #17)
> I tried Comment #11 yesterday, but this morning after starting the system
> virt-manager again didn't connect. So I enabled and started libvirtd again
> and now virt-manager was able to connect to qemu:///system.

It sounds to me like you are getting hit by the systemd bug mentioned in comments 2 and 3 of bug#1212396. I verified it also affects the monolithic libvirtd. I suspect you'll eventually hit the problem with libvirtd. E.g. with no client activity and no VMs running, libvirtd should terminate after 120 seconds. Once it terminates, start virt-manager again and see if it still connects. Repeat if necessary, but in my testing I quickly hit the systemd issue and the service was not started.

If libvirtd is working for you, please try the modular daemons again as per #11 and let's debug your issue. Here's a better version of the setup:

systemctl stop libvirtd.service
systemctl disable libvirtd.service
systemctl stop libvirtd{,-ro,-admin}.socket
systemctl disable libvirtd{,-ro,-admin}.socket
for drv in qemu log network nodedev nwfilter secret storage
  do
    systemctl enable virt${drv}d.service
    systemctl enable virt${drv}d{,-ro,-admin}.socket
  done

If virt-manager is unable to connect, check which daemons got started. E.g., as root run 'ps aux | grep virt' and ensure at least virtqemud, virtnetworkd, and virtstoraged (all needed by virt-manager) have been started.
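That check can be scripted. The sketch below runs it against a captured process listing; the here-doc is a hypothetical stand-in sample (with virtnetworkd deliberately absent), and on a live host you would grep the output of `ps aux` directly instead:

```shell
# Hypothetical `ps aux` capture; note virtnetworkd is missing, which
# alone is enough to leave virt-manager stuck connecting.
cat > /tmp/ps.sample <<'EOF'
root  3816  0.0  0.1 1440816 34924 ?  Ssl 12:03 0:00 /usr/sbin/virtqemud --timeout 120
root  3848  0.0  0.1 1597680 44756 ?  Ssl 12:03 0:00 /usr/sbin/virtstoraged --timeout 120
EOF

# Daemons virt-manager needs for a qemu:///system connection:
for d in virtqemud virtnetworkd virtstoraged; do
  if grep -q "/usr/sbin/$d " /tmp/ps.sample; then
    echo "$d: running"
  else
    echo "$d: NOT running"
  fi
done
```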
Comment 19 Oliver Schwabedissen 2023-06-23 10:15:03 UTC
That's too many bugs at the same time ;-).

Yes, 'systemctl status libvirtd.service' shows '--timeout 120' for /usr/sbin/libvirtd. So now I understand why I have to restart libvirtd most of the time.

I switched to the modular daemons as described:

frodo:~ # systemctl stop libvirtd.service
Warning: Stopping libvirtd.service, but it can still be activated by:
  libvirtd-ro.socket
  libvirtd.socket
  libvirtd-admin.socket
frodo:~ # systemctl disable libvirtd.service
Removed "/etc/systemd/system/multi-user.target.wants/libvirtd.service".
Removed "/etc/systemd/system/sockets.target.wants/virtlockd.socket".
Removed "/etc/systemd/system/sockets.target.wants/virtlogd.socket".
Removed "/etc/systemd/system/sockets.target.wants/libvirtd.socket".
Removed "/etc/systemd/system/sockets.target.wants/libvirtd-ro.socket".
frodo:~ # systemctl stop libvirtd{,-ro,-admin}.socket
frodo:~ # systemctl disable libvirtd{,-ro,-admin}.socket
frodo:~ # for drv in qemu log network nodedev nwfilter secret storage
>   do
>     systemctl enable virt${drv}d.service
>     systemctl enable virt${drv}d{,-ro,-admin}.socket
>   done
Created symlink /etc/systemd/system/multi-user.target.wants/virtqemud.service → /usr/lib/systemd/system/virtqemud.service.
Created symlink /etc/systemd/system/sockets.target.wants/virtlogd.socket → /usr/lib/systemd/system/virtlogd.socket.
Created symlink /etc/systemd/system/sockets.target.wants/virtlockd.socket → /usr/lib/systemd/system/virtlockd.socket.
Created symlink /etc/systemd/system/sockets.target.wants/virtqemud.socket → /usr/lib/systemd/system/virtqemud.socket.
Created symlink /etc/systemd/system/sockets.target.wants/virtqemud-ro.socket → /usr/lib/systemd/system/virtqemud-ro.socket.
Created symlink /etc/systemd/system/sockets.target.wants/virtqemud-admin.socket → /usr/lib/systemd/system/virtqemud-admin.socket.
Failed to enable unit: Unit file virtlogd-ro.socket does not exist.
Created symlink /etc/systemd/system/multi-user.target.wants/virtnetworkd.service → /usr/lib/systemd/system/virtnetworkd.service.
Created symlink /etc/systemd/system/sockets.target.wants/virtnetworkd.socket → /usr/lib/systemd/system/virtnetworkd.socket.
Created symlink /etc/systemd/system/sockets.target.wants/virtnetworkd-ro.socket → /usr/lib/systemd/system/virtnetworkd-ro.socket.
Created symlink /etc/systemd/system/sockets.target.wants/virtnetworkd-admin.socket → /usr/lib/systemd/system/virtnetworkd-admin.socket.
Created symlink /etc/systemd/system/multi-user.target.wants/virtnodedevd.service → /usr/lib/systemd/system/virtnodedevd.service.
Created symlink /etc/systemd/system/sockets.target.wants/virtnodedevd.socket → /usr/lib/systemd/system/virtnodedevd.socket.
Created symlink /etc/systemd/system/sockets.target.wants/virtnodedevd-ro.socket → /usr/lib/systemd/system/virtnodedevd-ro.socket.
Created symlink /etc/systemd/system/sockets.target.wants/virtnodedevd-admin.socket → /usr/lib/systemd/system/virtnodedevd-admin.socket.
Created symlink /etc/systemd/system/multi-user.target.wants/virtnwfilterd.service → /usr/lib/systemd/system/virtnwfilterd.service.
Created symlink /etc/systemd/system/sockets.target.wants/virtnwfilterd.socket → /usr/lib/systemd/system/virtnwfilterd.socket.
Created symlink /etc/systemd/system/sockets.target.wants/virtnwfilterd-ro.socket → /usr/lib/systemd/system/virtnwfilterd-ro.socket.
Created symlink /etc/systemd/system/sockets.target.wants/virtnwfilterd-admin.socket → /usr/lib/systemd/system/virtnwfilterd-admin.socket.
Created symlink /etc/systemd/system/multi-user.target.wants/virtsecretd.service → /usr/lib/systemd/system/virtsecretd.service.
Created symlink /etc/systemd/system/sockets.target.wants/virtsecretd.socket → /usr/lib/systemd/system/virtsecretd.socket.
Created symlink /etc/systemd/system/sockets.target.wants/virtsecretd-ro.socket → /usr/lib/systemd/system/virtsecretd-ro.socket.
Created symlink /etc/systemd/system/sockets.target.wants/virtsecretd-admin.socket → /usr/lib/systemd/system/virtsecretd-admin.socket.
Created symlink /etc/systemd/system/multi-user.target.wants/virtstoraged.service → /usr/lib/systemd/system/virtstoraged.service.
Created symlink /etc/systemd/system/sockets.target.wants/virtstoraged.socket → /usr/lib/systemd/system/virtstoraged.socket.
Created symlink /etc/systemd/system/sockets.target.wants/virtstoraged-ro.socket → /usr/lib/systemd/system/virtstoraged-ro.socket.
Created symlink /etc/systemd/system/sockets.target.wants/virtstoraged-admin.socket → /usr/lib/systemd/system/virtstoraged-admin.socket.


virt-manager doesn't connect to qemu:///system.

When I try to connect, I get:

Unable to connect to libvirt qemu:///system.

Failed to create socket to '/var/run/libvirt/virtqemud-sock': No such file or directory

Libvirt URI is: qemu:///system

Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/connection.py", line 925, in _do_open
    self._backend.open(cb, data)
  File "/usr/share/virt-manager/virtinst/connection.py", line 171, in open
    conn = libvirt.openAuth(self._open_uri,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib64/python3.11/site-packages/libvirt.py", line 147, in openAuth
    raise libvirtError('virConnectOpenAuth() failed')
libvirt.libvirtError: Failed to create socket to '/var/run/libvirt/virtqemud-sock': No such file or directory

I then rebooted the system and tried 'ps aux' immediately afterwards:

frodo:~ # ps aux | grep [v]irt
root      1694  0.1  0.0 1438808 26488 ?       Ssl  12:00   0:00 /usr/sbin/virtnetworkd --timeout 120
root      1696  0.5  0.0 1587040 28024 ?       Ssl  12:00   0:00 /usr/sbin/virtnodedevd --timeout 120
root      1698  0.1  0.0 1442016 23520 ?       Ssl  12:00   0:00 /usr/sbin/virtnwfilterd --timeout 120
root      1699  0.1  0.0 1438720 26228 ?       Ssl  12:00   0:00 /usr/sbin/virtsecretd --timeout 120
root      1809  0.2  0.0 1440648 31820 ?       Ssl  12:00   0:00 /usr/sbin/virtqemud --timeout 120
root      1813  0.1  0.1 1597680 42264 ?       Ssl  12:00   0:00 /usr/sbin/virtstoraged --timeout 120
dnsmasq   1967  0.0  0.0  12072  2840 ?        S    12:00   0:00 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
root      1968  0.0  0.0  12044  1816 ?        S    12:00   0:00 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper


3 minutes later:
frodo:~ # ps aux | grep [v]irt
dnsmasq   1967  0.0  0.0  12072  2840 ?        S    12:00   0:00 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
root      1968  0.0  0.0  12044  1816 ?        S    12:00   0:00 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper

And as expected, virt-manager doesn't connect anymore, probably because all the modular daemons were also terminated after 2 minutes.

However, when I start virt-manager, virtqemud and virtstoraged are started again:
frodo:~ # ps aux | grep [v]irt
dnsmasq   1967  0.0  0.0  12072  2840 ?        S    12:00   0:00 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
root      1968  0.0  0.0  12044  1816 ?        S    12:00   0:00 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
oliver    3794  0.8  0.4 1342132 151864 ?      Ssl  12:03   0:01 /usr/bin/python3 /usr/bin/virt-manager
root      3816  0.0  0.1 1440816 34924 ?       Ssl  12:03   0:00 /usr/sbin/virtqemud --timeout 120
root      3848  0.0  0.1 1597680 44756 ?       Ssl  12:03   0:00 /usr/sbin/virtstoraged --timeout 120

It doesn't show virtnetworkd, but the service is running:
frodo:~ # systemctl status virtnetworkd.service
● virtnetworkd.service - Virtualization network daemon
     Loaded: loaded (/usr/lib/systemd/system/virtnetworkd.service; enabled; preset: disabled)
     Active: active (running) since Fri 2023-06-23 12:00:54 CEST; 7min ago
TriggeredBy: ● virtnetworkd-ro.socket
             ● virtnetworkd-admin.socket
             ● virtnetworkd.socket
       Docs: man:virtnetworkd(8)
             https://libvirt.org
    Process: 1694 ExecStart=/usr/sbin/virtnetworkd $VIRTNETWORKD_ARGS (code=exited, status=0/SUCCESS)
   Main PID: 1694 (code=exited, status=0/SUCCESS)
      Tasks: 2 (limit: 4915)
        CPU: 237ms
     CGroup: /system.slice/virtnetworkd.service
             ├─1967 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
             └─1968 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper

Jun 23 12:00:54 frodo dnsmasq[1967]: compile time options: IPv6 GNU-getopt DBus no-UBus i18n IDN2 DHCP DHCPv6 Lua TFTP conntrack ipset no-nftset auth cryptohash DNSSEC loop-detect inotify dumpfile
Jun 23 12:00:54 frodo dnsmasq-dhcp[1967]: DHCP, IP range 192.168.122.2 -- 192.168.122.254, lease time 1h
Jun 23 12:00:54 frodo dnsmasq-dhcp[1967]: DHCP, sockets bound exclusively to interface virbr0
Jun 23 12:00:54 frodo dnsmasq[1967]: reading /etc/resolv.conf
Jun 23 12:00:54 frodo dnsmasq[1967]: using nameserver 1.1.1.1#53
Jun 23 12:00:54 frodo dnsmasq[1967]: using nameserver 8.8.8.8#53
Jun 23 12:00:54 frodo dnsmasq[1967]: using nameserver 192.168.178.1#53
Jun 23 12:00:54 frodo dnsmasq[1967]: read /etc/hosts - 40 names
Jun 23 12:00:54 frodo dnsmasq[1967]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 names
Jun 23 12:00:54 frodo dnsmasq-dhcp[1967]: read /var/lib/libvirt/dnsmasq/default.hostsfile

Nevertheless, virt-manager hangs with "Connecting..."
Comment 20 James Fehlig 2023-06-23 14:20:16 UTC
(In reply to Oliver Schwabedissen from comment #19)
> Yes, 'systemctl status libvirtd.service' shows a '--timeout 120' for
> /usr/sbin/libvirtd. So now I understand, why I have to restart libvirtd most
> of the time.

You shouldn't need to restart services that are socket activated. systemd should activate the services when needed, e.g. when a client connects to the service's socket. But current TW has a systemd bug that doesn't always start the services. I have verified that the systemd in this request fixes the issue:

https://build.opensuse.org/request/show/1094372

Sorry, but you'll have to wait for that request to get accepted to Factory and subsequently available in a TW update. Or if you are brave, install the updated systemd from the source of the request

http://download.opensuse.org/repositories/Base:/System/standard/

[snip]

> However, when I start virt-manager virtqemud and virtstoraged are started
> again. 
> frodo:~ # ps aux | grep [v]irt
> dnsmasq   1967  0.0  0.0  12072  2840 ?        S    12:00   0:00
> /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf
> --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
> root      1968  0.0  0.0  12044  1816 ?        S    12:00   0:00
> /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf
> --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
> oliver    3794  0.8  0.4 1342132 151864 ?      Ssl  12:03   0:01
> /usr/bin/python3 /usr/bin/virt-manager
> root      3816  0.0  0.1 1440816 34924 ?       Ssl  12:03   0:00
> /usr/sbin/virtqemud --timeout 120
> root      3848  0.0  0.1 1597680 44756 ?       Ssl  12:03   0:00
> /usr/sbin/virtstoraged --timeout 120
> 
> It doesn't show virtnetworkd, but the service is running:
> frodo:~ # systemctl status virtnetworkd.service
> ● virtnetworkd.service - Virtualization network daemon
>      Loaded: loaded (/usr/lib/systemd/system/virtnetworkd.service; enabled;
> preset: disabled)
>      Active: active (running) since Fri 2023-06-23 12:00:54 CEST; 7min ago
> TriggeredBy: ● virtnetworkd-ro.socket
>              ● virtnetworkd-admin.socket
>              ● virtnetworkd.socket
>        Docs: man:virtnetworkd(8)
>              https://libvirt.org
>     Process: 1694 ExecStart=/usr/sbin/virtnetworkd $VIRTNETWORKD_ARGS
> (code=exited, status=0/SUCCESS)
>    Main PID: 1694 (code=exited, status=0/SUCCESS)
>       Tasks: 2 (limit: 4915)
>         CPU: 237ms
>      CGroup: /system.slice/virtnetworkd.service
>              ├─1967 /usr/sbin/dnsmasq
> --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro
> --dhcp-script=/usr/libexec/libvirt_leaseshelper
>              └─1968 /usr/sbin/dnsmasq
> --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro
> --dhcp-script=/usr/libexec/libvirt_leaseshelper

These are precisely the symptoms of the systemd bug: virtnetworkd doesn't actually get started. If you run 'systemctl start virtnetworkd.service' at this point, virt-manager will suddenly connect.

BTW, I'll take this bug and use it for the preset changes mentioned in #16.
Comment 21 Oliver Schwabedissen 2023-06-24 09:44:50 UTC
Thanks for the clarification.

I now have libvirtd completely disabled. However, 'systemctl start virtnetworkd.service' doesn't help. Instead I have to _stop_ it: 'systemctl stop virtnetworkd.service'.

When I then start virt-manager, it also starts virtnetworkd.service (and virtqemud and virtstoraged, if not running) and it can connect to qemu:///system.

Ok, so now I have to wait for apparmor 3.1.6 (currently locked on 3.1.4) and a new systemd...
Comment 22 Oliver Schwabedissen 2023-06-26 16:06:45 UTC
I'm now on snapshot 20230624 and after rebooting (and waiting at least half an hour) virt-manager connected without problems to qemu:///system.
Comment 23 Gus Fos 2023-06-26 17:15:47 UTC
I'm on the 24th snapshot, and I can connect just fine in virt-manager. But when I boot up a Windows 10 VM, it crashes/shuts down the libvirtd service and the VM is stuck at "Connecting to graphical console for guest". Is this related to this issue, or is this something else? I have tested the advice in this thread without success. It was working fine on earlier snapshots about 2 weeks ago when I last updated.
Comment 24 James Fehlig 2023-06-26 17:20:46 UTC
(In reply to Gus Fos from comment #23)
> I'm on 24th snapshot, and I can connect just fine in virtman. But when I
> boot up a Windows 10 VM, it crashes/shutdown the libvirtd service and VM is
> stuck at "Connecting to graphical console for guest". Is this related to
> this issue, or is this something else?

Definitely sounds like a different issue that should be investigated in a new bug report.
Comment 25 Oliver Schwabedissen 2023-06-27 05:06:56 UTC
(In reply to Gus Fos from comment #23)
> I'm on 24th snapshot, and I can connect just fine in virtman. But when I
> boot up a Windows 10 VM, it crashes/shutdown the libvirtd service and VM is
> stuck at "Connecting to graphical console for guest". Is this related to
> this issue, or is this something else? I have tested advises in this thread
> without success. It was working fine on earlier snapshots about 2 weeks ago
> when I last updated.

I have 2 Windows 10 VMs and one Ubuntu VM and they all run without problems.
Comment 26 Gus Fos 2023-06-27 13:39:48 UTC
(In reply to James Fehlig from comment #24)
> (In reply to Gus Fos from comment #23)
> > I'm on 24th snapshot, and I can connect just fine in virtman. But when I
> > boot up a Windows 10 VM, it crashes/shutdown the libvirtd service and VM is
> > stuck at "Connecting to graphical console for guest". Is this related to
> > this issue, or is this something else?
> 
> Definitely sounds like a different issue that should be investigated in a
> new bug report.

Tested this a lot now, going through everything I could come up with (moving back and forth between the monolithic and modular setups, reinstalling libvirt, etc.), but nothing got it working. I then tried booting kernel 6.2.12 and it worked directly. So this issue seems to be related to 6.3.9, and maybe some of the earlier 6.3.x versions as well. I could create a new bug report for this, but I don't have much info to provide. The logs pretty much only showed "Failed to initialize libnetcontrol.  Management of interface devices is disabled".
Comment 27 James Fehlig 2023-06-27 13:56:38 UTC
(In reply to Gus Fos from comment #26)
> Tested this a lot now, going through all things I could come up with, moving
> back and forth between monolithic and modular setup, reinstalling libvirt
> etc, but nothing got it working. I then tried to boot kernel 6.2.12 and it
> worked directly. So this issue seems to be related to 6.3.9 and maybe some
> of the earlier 6.3.x versions as well. I could create a new bug report for
> this, but I don't have much info to provide. Logs pretty much only showed
> "Failed to initialize libnetcontrol.  Management of interface devices is
> disabled".

If the issue is resolved by booting a different kernel, then we're looking in the wrong place. There's no use fiddling with libvirt daemons or looking through libvirt-related logs. We should be looking for kernel messages in syslog or anything suspicious in dmesg. Does the Windows VM use virtio drivers? I.e., does it have the VMDP drivers installed?
Comment 28 Gus Fos 2023-06-28 04:02:12 UTC
(In reply to James Fehlig from comment #27)
> (In reply to Gus Fos from comment #26)
> > Tested this a lot now, going through all things I could come up with, moving
> > back and forth between monolithic and modular setup, reinstalling libvirt
> > etc, but nothing got it working. I then tried to boot kernel 6.2.12 and it
> > worked directly. So this issue seems to be related to 6.3.9 and maybe some
> > of the earlier 6.3.x versions as well. I could create a new bug report for
> > this, but I don't have much info to provide. Logs pretty much only showed
> > "Failed to initialize libnetcontrol.  Management of interface devices is
> > disabled".
> 
> If the issue is resolved by booting a different kernel, then we're looking
> in the wrong place. No use fiddling with libvirt daemons or looking through
> libvirt related logs. We should be looking for kernel messages in syslog or
> anything suspicious in dmesg. Does the windows VM use virtio drivers? I.e.,
> does it have VMDP drivers installed?

Yes, it uses "Video Virtio" and "Display Spice". I installed the Spice agent inside the VM as well. But I'm not sure what the VMDP drivers are. When I googled it, it sounded like the VMware Toolkit or the open-vm-tools package from the repos? If so, I have none of those installed on the host or in the VM. Is that something I should add somewhere?
Comment 29 James Fehlig 2023-06-28 16:54:30 UTC
(In reply to Gus Fos from comment #28)
> Yes, it uses "Video Virtio" and "Display Spice". I installed the Spice agent
> inside the VM as well. But not sure what the VMDP drivers are. When I
> Googled it it sounded like VMware Toolkit or the open-vm-tools from repos?
> If so, I have none of those installed on the host or VM. Is that something
> that I should be add somewhere?

VMDP == Virtual Machine Driver Pack

VMDP contains drivers for Windows that support KVM and Xen virtual devices. Using virtual devices provides much better performance than emulated disks, network devices, etc. VMDP also provides functionality like graceful shutdown from the host; e.g., 'virsh shutdown win-vm-with-vmdp' will trigger Windows to shut down gracefully. Check the VMDP project page for more info and a link to download the community version:

https://github.com/SUSE/vmdp
Comment 30 James Fehlig 2023-07-07 20:22:22 UTC
FYI, submitted the following changes to systemd-presets-branding-openSUSE

https://build.opensuse.org/request/show/1097450
Comment 31 James Fehlig 2023-07-28 14:44:00 UTC
(In reply to James Fehlig from comment #30)
> FYI, submitted the following changes to systemd-presets-branding-openSUSE
> 
> https://build.opensuse.org/request/show/1097450

This has now made its way to Factory and fixes the second aspect of the bug I described in comment #16. I'll close the bug now. Please open new bugs for any issues not directly related to modular daemon connection issues.
Comment 35 Maintenance Automation 2024-02-27 08:30:01 UTC
SUSE-RU-2024:0629-1: An update that has four fixes can now be installed.

Category: recommended (moderate)
Bug References: 1212195, 1213790, 1219791, 1220012
Sources used:
SUSE Linux Enterprise Micro 5.5 (src): virt-manager-4.1.0-150500.3.6.1
Server Applications Module 15-SP5 (src): virt-manager-4.1.0-150500.3.6.1
openSUSE Leap 15.5 (src): virt-manager-4.1.0-150500.3.6.1, virt-manager-test-4.1.0-150500.3.6.1

NOTE: This line indicates an update has been released for the listed product(s). At times this might be only a partial fix. If you have questions please reach out to maintenance coordination.
Comment 36 Maintenance Automation 2024-07-12 16:31:15 UTC
SUSE-RU-2024:0629-2: An update that has four fixes can now be installed.

Category: recommended (moderate)
Bug References: 1212195, 1213790, 1219791, 1220012
Maintenance Incident: [SUSE:Maintenance:32669](https://smelt.suse.de/incident/32669/)
Sources used:
SUSE Linux Enterprise Micro 5.5 (src):
 virt-manager-4.1.0-150500.3.6.1

NOTE: This line indicates an update has been released for the listed product(s). At times this might be only a partial fix. If you have questions please reach out to maintenance coordination.