Bug 1215984

Summary: virt-manager silently fails to connect
Product: [openSUSE] openSUSE Distribution
Reporter: Michal Suchanek <msuchanek>
Component: Virtualization:Tools
Assignee: Charles Arnold <carnold>
Status: RESOLVED FIXED
QA Contact: E-mail List <qa-bugs>
Severity: Normal
Priority: P5 - None
CC: jfehlig, msuchanek
Version: Leap 15.5
Target Milestone: ---
Hardware: Other
OS: Other
Whiteboard:
Found By: ---
Services Priority:
Business Priority:
Blocker: ---
Marketing QA Status: ---
IT Deployment: ---

Description Michal Suchanek 2023-10-05 20:30:20 UTC
virt-manager fails to connect, and no error is reported in the UI.

Only the debug output shows the missing services.

virt-manager --debug
[Thu, 05 Oct 2023 22:27:55 virt-manager 17933] DEBUG (cli:205) Version 4.1.0 launched with command line: /usr/bin/virt-manager --debug
[Thu, 05 Oct 2023 22:27:55 virt-manager 17933] DEBUG (virtmanager:167) virt-manager version: 4.1.0
[Thu, 05 Oct 2023 22:27:55 virt-manager 17933] DEBUG (virtmanager:168) virtManager import: /usr/share/virt-manager/virtManager
[Thu, 05 Oct 2023 22:27:56 virt-manager 17933] DEBUG (virtmanager:208) PyGObject version: 3.42.2
[Thu, 05 Oct 2023 22:27:56 virt-manager 17933] DEBUG (virtmanager:212) GTK version: 3.24.34
[Thu, 05 Oct 2023 22:27:56 virt-manager 17933] DEBUG (systray:72) Imported AppIndicator3=<IntrospectionModule 'AppIndicator3' from '/usr/lib64/girepository-1.0/AppIndicator3-0.1.typelib'>
[Thu, 05 Oct 2023 22:27:56 virt-manager 17933] DEBUG (systray:74) AppIndicator3 is available, but didn't find any dbus watcher.
[Thu, 05 Oct 2023 22:27:56 virt-manager 17933] DEBUG (systray:464) Showing systray: False
[Thu, 05 Oct 2023 22:27:56 virt-manager 17933] DEBUG (inspection:206) python guestfs is not installed
[Thu, 05 Oct 2023 22:27:56 virt-manager 17933] DEBUG (engine:114) Loading stored URIs:
qemu:///system
[Thu, 05 Oct 2023 22:27:56 virt-manager 17933] DEBUG (engine:462) processing cli command uri= show_window=manager domain=
[Thu, 05 Oct 2023 22:27:56 virt-manager 17933] DEBUG (engine:464) No cli action requested, launching default window
[Thu, 05 Oct 2023 22:27:56 virt-manager 17933] DEBUG (manager:185) Showing manager
[Thu, 05 Oct 2023 22:27:56 virt-manager 17933] DEBUG (engine:316) window counter incremented to 1
[Thu, 05 Oct 2023 22:27:56 virt-manager 17933] DEBUG (engine:211) Initial gtkapplication activated
[Thu, 05 Oct 2023 22:27:56 virt-manager 17933] DEBUG (connection:485) conn=qemu:///system changed to state=Connecting
[Thu, 05 Oct 2023 22:27:56 virt-manager 17933] DEBUG (connection:906) Scheduling background open thread for qemu:///system
[Thu, 05 Oct 2023 22:27:57 virt-manager 17933] DEBUG (connection:131) libvirt URI versions library=9.0.0 driver=9.0.0 hypervisor=7.1.0
[Thu, 05 Oct 2023 22:27:57 virt-manager 17933] DEBUG (connection:109) Fetched capabilities for qemu:///system: <capabilities>

  <host>
    <uuid>af7d05fd-06bf-ed11-a129-218475400338</uuid>
    <cpu>
      <arch>x86_64</arch>
      <model>Broadwell-noTSX-IBRS</model>
      <vendor>Intel</vendor>
      <microcode version='1068'/>
      <signature family='6' model='154' stepping='3'/>
      <counter name='tsc' frequency='2112000000' scaling='yes'/>
      <topology sockets='1' dies='1' cores='16' threads='1'/>
      <maxphysaddr mode='emulate' bits='39'/>
      <feature name='vme'/>
      <feature name='ds'/>
      <feature name='acpi'/>
      <feature name='ss'/>
      <feature name='ht'/>
      <feature name='tm'/>
      <feature name='pbe'/>
      <feature name='dtes64'/>
      <feature name='monitor'/>
      <feature name='ds_cpl'/>
      <feature name='vmx'/>
      <feature name='smx'/>
      <feature name='est'/>
      <feature name='tm2'/>
      <feature name='xtpr'/>
      <feature name='pdcm'/>
      <feature name='osxsave'/>
      <feature name='f16c'/>
      <feature name='rdrand'/>
      <feature name='arat'/>
      <feature name='tsc_adjust'/>
      <feature name='clflushopt'/>
      <feature name='clwb'/>
      <feature name='intel-pt'/>
      <feature name='sha-ni'/>
      <feature name='umip'/>
      <feature name='pku'/>
      <feature name='ospke'/>
      <feature name='waitpkg'/>
      <feature name='gfni'/>
      <feature name='vaes'/>
      <feature name='vpclmulqdq'/>
      <feature name='rdpid'/>
      <feature name='movdiri'/>
      <feature name='movdir64b'/>
      <feature name='pks'/>
      <feature name='fsrm'/>
      <feature name='md-clear'/>
      <feature name='serialize'/>
      <feature name='arch-lbr'/>
      <feature name='stibp'/>
      <feature name='arch-capabilities'/>
      <feature name='core-capability'/>
      <feature name='ssbd'/>
      <feature name='avx-vnni'/>
      <feature name='xsaveopt'/>
      <feature name='xsavec'/>
      <feature name='xgetbv1'/>
      <feature name='xsaves'/>
      <feature name='pdpe1gb'/>
      <feature name='abm'/>
      <feature name='invtsc'/>
      <feature name='rdctl-no'/>
      <feature name='ibrs-all'/>
      <feature name='skip-l1dfl-vmentry'/>
      <feature name='mds-no'/>
      <feature name='pschange-mc-no'/>
      <feature name='taa-no'/>
      <pages unit='KiB' size='4'/>
      <pages unit='KiB' size='2048'/>
      <pages unit='KiB' size='1048576'/>
    </cpu>
    <power_management>
      <suspend_mem/>
    </power_management>
    <iommu support='yes'/>
    <migration_features>
      <live/>
      <uri_transports>
        <uri_transport>tcp</uri_transport>
        <uri_transport>rdma</uri_transport>
      </uri_transports>
    </migration_features>
    <topology>
      <cells num='1'>
        <cell id='0'>
          <memory unit='KiB'>65537512</memory>
          <pages unit='KiB' size='4'>16384378</pages>
          <pages unit='KiB' size='2048'>0</pages>
          <pages unit='KiB' size='1048576'>0</pages>
          <distances>
            <sibling id='0' value='10'/>
          </distances>
          <cpus num='16'>
            <cpu id='0' socket_id='0' die_id='0' core_id='0' siblings='0-1'/>
            <cpu id='1' socket_id='0' die_id='0' core_id='0' siblings='0-1'/>
            <cpu id='2' socket_id='0' die_id='0' core_id='4' siblings='2-3'/>
            <cpu id='3' socket_id='0' die_id='0' core_id='4' siblings='2-3'/>
            <cpu id='4' socket_id='0' die_id='0' core_id='8' siblings='4-5'/>
            <cpu id='5' socket_id='0' die_id='0' core_id='8' siblings='4-5'/>
            <cpu id='6' socket_id='0' die_id='0' core_id='12' siblings='6-7'/>
            <cpu id='7' socket_id='0' die_id='0' core_id='12' siblings='6-7'/>
            <cpu id='8' socket_id='0' die_id='0' core_id='16' siblings='8'/>
            <cpu id='9' socket_id='0' die_id='0' core_id='17' siblings='9'/>
            <cpu id='10' socket_id='0' die_id='0' core_id='18' siblings='10'/>
            <cpu id='11' socket_id='0' die_id='0' core_id='19' siblings='11'/>
            <cpu id='12' socket_id='0' die_id='0' core_id='20' siblings='12'/>
            <cpu id='13' socket_id='0' die_id='0' core_id='21' siblings='13'/>
            <cpu id='14' socket_id='0' die_id='0' core_id='22' siblings='14'/>
            <cpu id='15' socket_id='0' die_id='0' core_id='23' siblings='15'/>
          </cpus>
        </cell>
      </cells>
    </topology>
    <cache>
      <bank id='0' level='3' type='both' size='12' unit='MiB' cpus='0-15'/>
    </cache>
    <secmodel>
      <model>apparmor</model>
      <doi>0</doi>
    </secmodel>
    <secmodel>
      <model>dac</model>
      <doi>0</doi>
      <baselabel type='kvm'>+107:+107</baselabel>
      <baselabel type='qemu'>+107:+107</baselabel>
    </secmodel>
  </host>

  <guest>
    <os_type>hvm</os_type>
    <arch name='i686'>
      <wordsize>32</wordsize>
      <emulator>/usr/bin/qemu-system-i386</emulator>
      <machine maxCpus='255'>pc-i440fx-7.1</machine>
      <machine canonical='pc-i440fx-7.1' maxCpus='255'>pc</machine>
      <machine maxCpus='288'>pc-q35-5.2</machine>
      <machine maxCpus='255'>pc-i440fx-2.12</machine>
      <machine maxCpus='255'>pc-i440fx-2.0</machine>
      <machine maxCpus='1'>xenpv</machine>
      <machine maxCpus='255'>pc-i440fx-6.2</machine>
      <machine maxCpus='288'>pc-q35-4.2</machine>
      <machine maxCpus='255'>pc-i440fx-2.5</machine>
      <machine maxCpus='255'>pc-i440fx-4.2</machine>
      <machine maxCpus='255'>pc-i440fx-5.2</machine>
      <machine maxCpus='255' deprecated='yes'>pc-i440fx-1.5</machine>
      <machine maxCpus='255'>pc-q35-2.7</machine>
      <machine maxCpus='1024'>pc-q35-7.1</machine>
      <machine canonical='pc-q35-7.1' maxCpus='1024'>q35</machine>
      <machine maxCpus='255'>pc-i440fx-2.2</machine>
      <machine maxCpus='255'>pc-i440fx-2.7</machine>
      <machine maxCpus='288'>pc-q35-6.1</machine>
      <machine maxCpus='128'>xenfv-3.1</machine>
      <machine canonical='xenfv-3.1' maxCpus='128'>xenfv</machine>
      <machine maxCpus='255'>pc-q35-2.4</machine>
      <machine maxCpus='288'>pc-q35-2.10</machine>
      <machine maxCpus='288'>pc-q35-5.1</machine>
      <machine maxCpus='255' deprecated='yes'>pc-i440fx-1.7</machine>
      <machine maxCpus='288'>pc-q35-2.9</machine>
      <machine maxCpus='255'>pc-i440fx-2.11</machine>
      <machine maxCpus='288'>pc-q35-3.1</machine>
      <machine maxCpus='255'>pc-i440fx-6.1</machine>
      <machine maxCpus='288'>pc-q35-4.1</machine>
      <machine maxCpus='255'>pc-i440fx-2.4</machine>
      <machine maxCpus='255'>pc-i440fx-4.1</machine>
      <machine maxCpus='255'>pc-i440fx-5.1</machine>
      <machine maxCpus='255'>pc-i440fx-2.9</machine>
      <machine maxCpus='1'>isapc</machine>
      <machine maxCpus='255' deprecated='yes'>pc-i440fx-1.4</machine>
      <machine maxCpus='255'>pc-q35-2.6</machine>
      <machine maxCpus='255'>pc-i440fx-3.1</machine>
      <machine maxCpus='288'>pc-q35-2.12</machine>
      <machine maxCpus='288'>pc-q35-7.0</machine>
      <machine maxCpus='255'>pc-i440fx-2.1</machine>
      <machine maxCpus='288'>pc-q35-6.0</machine>
      <machine maxCpus='255'>pc-i440fx-2.6</machine>
      <machine maxCpus='288'>pc-q35-4.0.1</machine>
      <machine maxCpus='255'>pc-i440fx-7.0</machine>
      <machine maxCpus='255' deprecated='yes'>pc-i440fx-1.6</machine>
      <machine maxCpus='288'>pc-q35-5.0</machine>
      <machine maxCpus='288'>pc-q35-2.8</machine>
      <machine maxCpus='255'>pc-i440fx-2.10</machine>
      <machine maxCpus='288'>pc-q35-3.0</machine>
      <machine maxCpus='255'>pc-i440fx-6.0</machine>
      <machine maxCpus='288'>pc-q35-4.0</machine>
      <machine maxCpus='128'>xenfv-4.2</machine>
      <machine maxCpus='288'>microvm</machine>
      <machine maxCpus='255'>pc-i440fx-2.3</machine>
      <machine maxCpus='255'>pc-i440fx-4.0</machine>
      <machine maxCpus='255'>pc-i440fx-5.0</machine>
      <machine maxCpus='255'>pc-i440fx-2.8</machine>
      <machine maxCpus='288'>pc-q35-6.2</machine>
      <machine maxCpus='255'>pc-q35-2.5</machine>
      <machine maxCpus='255'>pc-i440fx-3.0</machine>
      <machine maxCpus='288'>pc-q35-2.11</machine>
      <domain type='qemu'/>
      <domain type='kvm'/>
    </arch>
    <features>
      <pae/>
      <nonpae/>
      <acpi default='on' toggle='yes'/>
      <apic default='on' toggle='no'/>
      <cpuselection/>
      <deviceboot/>
      <disksnapshot default='on' toggle='no'/>
    </features>
  </guest>

  <guest>
    <os_type>hvm</os_type>
    <arch name='ppc'>
      <wordsize>32</wordsize>
      <emulator>/usr/bin/qemu-system-ppc</emulator>
      <machine maxCpus='1'>g3beige</machine>
      <machine maxCpus='1'>virtex-ml507</machine>
      <machine maxCpus='1'>mac99</machine>
      <machine maxCpus='32'>ppce500</machine>
      <machine maxCpus='1'>pegasos2</machine>
      <machine maxCpus='1'>sam460ex</machine>
      <machine maxCpus='1'>bamboo</machine>
      <machine maxCpus='1'>40p</machine>
      <machine maxCpus='1'>ref405ep</machine>
      <machine maxCpus='15'>mpc8544ds</machine>
      <machine maxCpus='1' deprecated='yes'>taihu</machine>
      <domain type='qemu'/>
    </arch>
    <features>
      <cpuselection/>
      <deviceboot/>
      <disksnapshot default='on' toggle='no'/>
    </features>
  </guest>

  <guest>
    <os_type>hvm</os_type>
    <arch name='ppc64'>
      <wordsize>64</wordsize>
      <emulator>/usr/bin/qemu-system-ppc64</emulator>
      <machine maxCpus='2147483647'>pseries-7.1</machine>
      <machine canonical='pseries-7.1' maxCpus='2147483647'>pseries</machine>
      <machine maxCpus='2048'>powernv9</machine>
      <machine canonical='powernv9' maxCpus='2048'>powernv</machine>
      <machine maxCpus='1' deprecated='yes'>taihu</machine>
      <machine maxCpus='2147483647'>pseries-4.1</machine>
      <machine maxCpus='15'>mpc8544ds</machine>
      <machine maxCpus='2147483647'>pseries-6.1</machine>
      <machine maxCpus='2147483647'>pseries-2.5</machine>
      <machine maxCpus='2048'>powernv10</machine>
      <machine maxCpus='2147483647'>pseries-4.2</machine>
      <machine maxCpus='2147483647'>pseries-6.2</machine>
      <machine maxCpus='2147483647'>pseries-2.6</machine>
      <machine maxCpus='32'>ppce500</machine>
      <machine maxCpus='2147483647'>pseries-2.7</machine>
      <machine maxCpus='2147483647'>pseries-3.0</machine>
      <machine maxCpus='2147483647'>pseries-5.0</machine>
      <machine maxCpus='1'>40p</machine>
      <machine maxCpus='2147483647'>pseries-2.8</machine>
      <machine maxCpus='1'>pegasos2</machine>
      <machine maxCpus='2147483647'>pseries-3.1</machine>
      <machine maxCpus='2147483647'>pseries-5.1</machine>
      <machine maxCpus='2147483647'>pseries-2.9</machine>
      <machine maxCpus='1'>bamboo</machine>
      <machine maxCpus='1'>g3beige</machine>
      <machine maxCpus='2147483647'>pseries-5.2</machine>
      <machine maxCpus='2147483647'>pseries-2.12-sxxm</machine>
      <machine maxCpus='2147483647'>pseries-2.10</machine>
      <machine maxCpus='2147483647'>pseries-7.0</machine>
      <machine maxCpus='1'>virtex-ml507</machine>
      <machine maxCpus='2147483647'>pseries-2.11</machine>
      <machine maxCpus='2147483647'>pseries-2.1</machine>
      <machine maxCpus='2147483647'>pseries-2.12</machine>
      <machine maxCpus='2147483647'>pseries-2.2</machine>
      <machine maxCpus='1'>mac99</machine>
      <machine maxCpus='1'>sam460ex</machine>
      <machine maxCpus='1'>ref405ep</machine>
      <machine maxCpus='2147483647'>pseries-2.3</machine>
      <machine maxCpus='2048'>powernv8</machine>
      <machine maxCpus='2147483647'>pseries-4.0</machine>
      <machine maxCpus='2147483647'>pseries-6.0</machine>
      <machine maxCpus='2147483647'>pseries-2.4</machine>
      <domain type='qemu'/>
    </arch>
    <features>
      <cpuselection/>
      <deviceboot/>
      <disksnapshot default='on' toggle='no'/>
    </features>
  </guest>

  <guest>
    <os_type>hvm</os_type>
    <arch name='ppc64le'>
      <wordsize>64</wordsize>
      <emulator>/usr/bin/qemu-system-ppc64</emulator>
      <machine maxCpus='2147483647'>pseries-7.1</machine>
      <machine canonical='pseries-7.1' maxCpus='2147483647'>pseries</machine>
      <machine maxCpus='2048'>powernv9</machine>
      <machine canonical='powernv9' maxCpus='2048'>powernv</machine>
      <machine maxCpus='1' deprecated='yes'>taihu</machine>
      <machine maxCpus='2147483647'>pseries-4.1</machine>
      <machine maxCpus='15'>mpc8544ds</machine>
      <machine maxCpus='2147483647'>pseries-6.1</machine>
      <machine maxCpus='2147483647'>pseries-2.5</machine>
      <machine maxCpus='2048'>powernv10</machine>
      <machine maxCpus='2147483647'>pseries-4.2</machine>
      <machine maxCpus='2147483647'>pseries-6.2</machine>
      <machine maxCpus='2147483647'>pseries-2.6</machine>
      <machine maxCpus='32'>ppce500</machine>
      <machine maxCpus='2147483647'>pseries-2.7</machine>
      <machine maxCpus='2147483647'>pseries-3.0</machine>
      <machine maxCpus='2147483647'>pseries-5.0</machine>
      <machine maxCpus='1'>40p</machine>
      <machine maxCpus='2147483647'>pseries-2.8</machine>
      <machine maxCpus='1'>pegasos2</machine>
      <machine maxCpus='2147483647'>pseries-3.1</machine>
      <machine maxCpus='2147483647'>pseries-5.1</machine>
      <machine maxCpus='2147483647'>pseries-2.9</machine>
      <machine maxCpus='1'>bamboo</machine>
      <machine maxCpus='1'>g3beige</machine>
      <machine maxCpus='2147483647'>pseries-5.2</machine>
      <machine maxCpus='2147483647'>pseries-2.12-sxxm</machine>
      <machine maxCpus='2147483647'>pseries-2.10</machine>
      <machine maxCpus='2147483647'>pseries-7.0</machine>
      <machine maxCpus='1'>virtex-ml507</machine>
      <machine maxCpus='2147483647'>pseries-2.11</machine>
      <machine maxCpus='2147483647'>pseries-2.1</machine>
      <machine maxCpus='2147483647'>pseries-2.12</machine>
      <machine maxCpus='2147483647'>pseries-2.2</machine>
      <machine maxCpus='1'>mac99</machine>
      <machine maxCpus='1'>sam460ex</machine>
      <machine maxCpus='1'>ref405ep</machine>
      <machine maxCpus='2147483647'>pseries-2.3</machine>
      <machine maxCpus='2048'>powernv8</machine>
      <machine maxCpus='2147483647'>pseries-4.0</machine>
      <machine maxCpus='2147483647'>pseries-6.0</machine>
      <machine maxCpus='2147483647'>pseries-2.4</machine>
      <domain type='qemu'/>
    </arch>
    <features>
      <cpuselection/>
      <deviceboot/>
      <disksnapshot default='on' toggle='no'/>
    </features>
  </guest>

  <guest>
    <os_type>hvm</os_type>
    <arch name='x86_64'>
      <wordsize>64</wordsize>
      <emulator>/usr/bin/qemu-system-x86_64</emulator>
      <machine maxCpus='255'>pc-i440fx-7.1</machine>
      <machine canonical='pc-i440fx-7.1' maxCpus='255'>pc</machine>
      <machine maxCpus='288'>pc-q35-5.2</machine>
      <machine maxCpus='255'>pc-i440fx-2.12</machine>
      <machine maxCpus='255'>pc-i440fx-2.0</machine>
      <machine maxCpus='1'>xenpv</machine>
      <machine maxCpus='255'>pc-i440fx-6.2</machine>
      <machine maxCpus='288'>pc-q35-4.2</machine>
      <machine maxCpus='255'>pc-i440fx-2.5</machine>
      <machine maxCpus='255'>pc-i440fx-4.2</machine>
      <machine maxCpus='255'>pc-i440fx-5.2</machine>
      <machine maxCpus='255' deprecated='yes'>pc-i440fx-1.5</machine>
      <machine maxCpus='255'>pc-q35-2.7</machine>
      <machine maxCpus='1024'>pc-q35-7.1</machine>
      <machine canonical='pc-q35-7.1' maxCpus='1024'>q35</machine>
      <machine maxCpus='255'>pc-i440fx-2.2</machine>
      <machine maxCpus='255'>pc-i440fx-2.7</machine>
      <machine maxCpus='288'>pc-q35-6.1</machine>
      <machine maxCpus='128'>xenfv-3.1</machine>
      <machine canonical='xenfv-3.1' maxCpus='128'>xenfv</machine>
      <machine maxCpus='255'>pc-q35-2.4</machine>
      <machine maxCpus='288'>pc-q35-2.10</machine>
      <machine maxCpus='288'>pc-q35-5.1</machine>
      <machine maxCpus='255' deprecated='yes'>pc-i440fx-1.7</machine>
      <machine maxCpus='288'>pc-q35-2.9</machine>
      <machine maxCpus='255'>pc-i440fx-2.11</machine>
      <machine maxCpus='288'>pc-q35-3.1</machine>
      <machine maxCpus='255'>pc-i440fx-6.1</machine>
      <machine maxCpus='288'>pc-q35-4.1</machine>
      <machine maxCpus='255'>pc-i440fx-2.4</machine>
      <machine maxCpus='255'>pc-i440fx-4.1</machine>
      <machine maxCpus='255'>pc-i440fx-5.1</machine>
      <machine maxCpus='255'>pc-i440fx-2.9</machine>
      <machine maxCpus='1'>isapc</machine>
      <machine maxCpus='255' deprecated='yes'>pc-i440fx-1.4</machine>
      <machine maxCpus='255'>pc-q35-2.6</machine>
      <machine maxCpus='255'>pc-i440fx-3.1</machine>
      <machine maxCpus='288'>pc-q35-2.12</machine>
      <machine maxCpus='288'>pc-q35-7.0</machine>
      <machine maxCpus='255'>pc-i440fx-2.1</machine>
      <machine maxCpus='255'>pc-i440fx-2.6</machine>
      <machine maxCpus='288'>pc-q35-6.0</machine>
      <machine maxCpus='288'>pc-q35-4.0.1</machine>
      <machine maxCpus='255'>pc-i440fx-7.0</machine>
      <machine maxCpus='255' deprecated='yes'>pc-i440fx-1.6</machine>
      <machine maxCpus='288'>pc-q35-5.0</machine>
      <machine maxCpus='288'>pc-q35-2.8</machine>
      <machine maxCpus='255'>pc-i440fx-2.10</machine>
      <machine maxCpus='288'>pc-q35-3.0</machine>
      <machine maxCpus='255'>pc-i440fx-6.0</machine>
      <machine maxCpus='288'>pc-q35-4.0</machine>
      <machine maxCpus='128'>xenfv-4.2</machine>
      <machine maxCpus='288'>microvm</machine>
      <machine maxCpus='255'>pc-i440fx-2.3</machine>
      <machine maxCpus='255'>pc-i440fx-4.0</machine>
      <machine maxCpus='255'>pc-i440fx-5.0</machine>
      <machine maxCpus='255'>pc-i440fx-2.8</machine>
      <machine maxCpus='288'>pc-q35-6.2</machine>
      <machine maxCpus='255'>pc-q35-2.5</machine>
      <machine maxCpus='255'>pc-i440fx-3.0</machine>
      <machine maxCpus='288'>pc-q35-2.11</machine>
      <domain type='qemu'/>
      <domain type='kvm'/>
    </arch>
    <features>
      <acpi default='on' toggle='yes'/>
      <apic default='on' toggle='no'/>
      <cpuselection/>
      <deviceboot/>
      <disksnapshot default='on' toggle='no'/>
    </features>
  </guest>

</capabilities>

[Thu, 05 Oct 2023 22:27:57 virt-manager 17933] DEBUG (connection:741) Using domain events
[Thu, 05 Oct 2023 22:27:57 virt-manager 17933] DEBUG (connection:783) Error registering network events: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock': No such file or directory
[Thu, 05 Oct 2023 22:27:57 virt-manager 17933] DEBUG (connection:802) Error registering storage pool events: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock': No such file or directory
[Thu, 05 Oct 2023 22:27:57 virt-manager 17933] DEBUG (connection:827) Error registering node device events: Failed to connect socket to '/var/run/libvirt/virtnodedevd-sock': No such file or directory
[Thu, 05 Oct 2023 22:27:57 virt-manager 17933] DEBUG (pollhelpers:23) Unable to list all networks: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock': No such file or directory
[Thu, 05 Oct 2023 22:27:57 virt-manager 17933] DEBUG (engine:300) Error polling connection qemu:///system
Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/engine.py", line 294, in _handle_tick_queue
    conn.tick_from_engine(**kwargs)
  File "/usr/share/virt-manager/virtManager/connection.py", line 1319, in tick_from_engine
    self._tick(*args, **kwargs)
  File "/usr/share/virt-manager/virtManager/connection.py", line 1207, in _tick
    initial_poll, pollvm, pollnet, pollpool, pollnodedev)
  File "/usr/share/virt-manager/virtManager/connection.py", line 1140, in _poll
    new_pools = _process_objects("pools")
  File "/usr/share/virt-manager/virtManager/connection.py", line 1126, in _process_objects
    gone, new, master = pollcb(self._backend, keymap, cb)
  File "/usr/share/virt-manager/virtinst/pollhelpers.py", line 57, in fetch_pools
    objs = backend.listAllStoragePools()
  File "/usr/lib64/python3.6/site-packages/libvirt.py", line 6223, in listAllStoragePools
    raise libvirtError("virConnectListAllStoragePools() failed")
libvirt.libvirtError: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock': No such file or directory
[Thu, 05 Oct 2023 22:27:57 virt-manager 17933] DEBUG (connection:840) conn.close() uri=qemu:///system
[Thu, 05 Oct 2023 22:27:57 virt-manager 17933] DEBUG (connection:485) conn=qemu:///system changed to state=Disconnected
[Thu, 05 Oct 2023 22:28:01 virt-manager 17933] DEBUG (manager:196) Closing manager
[Thu, 05 Oct 2023 22:28:01 virt-manager 17933] DEBUG (engine:323) window counter decremented to 0
[Thu, 05 Oct 2023 22:28:01 virt-manager 17933] DEBUG (engine:343) No windows found, requesting app exit
[Thu, 05 Oct 2023 22:28:01 virt-manager 17933] DEBUG (engine:369) Exiting app normally.
Comment 1 Charles Arnold 2023-10-05 21:19:48 UTC
It looks like you are using the new modular libvirt daemons.
Could you please try the virt-manager packages from here?

https://download.opensuse.org/repositories/Virtualization/15.5/noarch/

You will need virt-manager-common, virt-manager, and virt-install

Thanks
Comment 2 Michal Suchanek 2023-10-06 10:30:51 UTC
Yes, looks like these fix the problem, thanks
Comment 3 Michal Suchanek 2023-11-29 20:15:59 UTC
And now the bug is back

rpm -qa | grep virt
libvirt-daemon-driver-network-9.0.0-150500.6.11.1.x86_64
libvirt-daemon-driver-qemu-9.0.0-150500.6.11.1.x86_64
libvirt-daemon-driver-storage-iscsi-direct-9.0.0-150500.6.11.1.x86_64
virtiofsd-1.6.1-Virt.150500.8.1.x86_64
virt-what-1.21-3.3.1.x86_64
virt-manager-common-4.1.0-Virt.150500.734.1.noarch
system-group-libvirt-20170617-150400.22.33.noarch
libvirt-daemon-driver-storage-rbd-9.0.0-150500.6.11.1.x86_64
libvirt-daemon-driver-secret-9.0.0-150500.6.11.1.x86_64
libvirt-daemon-driver-storage-mpath-9.0.0-150500.6.11.1.x86_64
qemu-hw-display-virtio-gpu-pci-8.1.0-Virt.150500.943.1.x86_64
libvirt-daemon-driver-interface-9.0.0-150500.6.11.1.x86_64
libvirt-daemon-driver-storage-9.0.0-150500.6.11.1.x86_64
virt-install-4.1.0-Virt.150500.734.1.noarch
libvirt-daemon-9.0.0-150500.6.11.1.x86_64
libvirt-daemon-driver-storage-scsi-9.0.0-150500.6.11.1.x86_64
libvirt-daemon-qemu-9.0.0-150500.6.11.1.x86_64
qemu-hw-display-virtio-gpu-8.1.0-Virt.150500.943.1.x86_64
libvirt-glib-1_0-0-4.0.0-150400.1.10.x86_64
typelib-1_0-LibvirtGLib-1_0-4.0.0-150400.1.10.x86_64
libvirt-daemon-driver-nodedev-9.0.0-150500.6.11.1.x86_64
libvirt-daemon-driver-storage-iscsi-9.0.0-150500.6.11.1.x86_64
libvirt-libs-9.0.0-150500.6.11.1.x86_64
libvirt-daemon-driver-storage-disk-9.0.0-150500.6.11.1.x86_64
virt-manager-4.1.0-Virt.150500.734.1.noarch
libvirt-daemon-driver-storage-logical-9.0.0-150500.6.11.1.x86_64
python3-libvirt-python-9.0.0-150500.1.4.x86_64
libvirt-daemon-driver-nwfilter-9.0.0-150500.6.11.1.x86_64
libvirt-client-9.0.0-150500.6.11.1.x86_64
virtme-0.1.1-bp155.4.7.noarch
libvirt-daemon-driver-storage-core-9.0.0-150500.6.11.1.x86_64
qemu-hw-display-virtio-vga-8.1.0-Virt.150500.943.1.x86_64
Comment 4 Charles Arnold 2023-11-29 21:14:23 UTC
(In reply to Michal Suchanek from comment #3)
> And now the bug is back

I assume you pulled the latest virt-manager from the location specified in
comment #1.

Please attach,
~/.cache/virt-manager/virt-manager.log

or if just using virt-install,
~/.cache/virt-manager/virt-install.log

Thanks
Comment 5 Charles Arnold 2023-11-29 22:59:15 UTC
(In reply to Charles Arnold from comment #4)
> (In reply to Michal Suchanek from comment #3)
> > And now the bug is back
> 
> I assume you pulled the latest virt-manager from the location specified in
> comment #1.
> 
> Please attach,
> ~/.cache/virt-manager/virt-manager.log
> 
> or if just using virt-install,
> ~/.cache/virt-manager/virt-install.log
> 
> Thanks

Is this the error you are seeing?


Unable to connect to libvirt qemu:///system.

Verify that an appropriate libvirt daemon is running.

Libvirt URI is: qemu:///system

Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/connection.py", line 926, in _do_open
    self._backend.open(cb, data)
  File "/usr/share/virt-manager/virtinst/connection.py", line 174, in open
    open_flags)
  File "/usr/lib64/python3.6/site-packages/libvirt.py", line 147, in openAuth
    raise libvirtError('virConnectOpenAuth() failed')
libvirt.libvirtError: Failed to connect socket to '/var/run/libvirt/virtqemud-sock': No such file or directory
Comment 6 Michal Suchanek 2023-11-30 09:34:44 UTC
It is not, but how am I supposed to know that there are three different daemons that need to be started for virt-manager to work?
Comment 7 Charles Arnold 2023-11-30 13:20:01 UTC
(In reply to Michal Suchanek from comment #6)
> It is not, but how am I supposed to know that there are three different daemons
> that need to be started for virt-manager to work?

Valid question.

Jim,

Just so you know, upstream has completely removed the checks in virt-manager
for seeing if libvirtd is installed.

From 775edfd5dc668c26ffbdf07f6404ca80d91c3a3a
"
    Nowadays with libvirt split daemons, libvirtd isn't required to
    be installed for a first run local connection to succeed, so we
    are needlessly blocking the app from 'just working' in many cases.
    Especially considering that many distros often have libvirt running
    out of the box due to gnome-boxes pulling it in.
    
    Drop the daemon checking entirely.
"

Should I add a hard requirement in the spec file for virt-manager to
include some of these other daemons?
Comment 8 Michal Suchanek 2023-11-30 14:12:49 UTC
And you may not need it running at all if you want to connect to some other virtualization service, either something other than KVM or some other host.

The problem is that when you want to connect to qemu and one of the three daemons required for that is not running, the connection silently fails, without any explanation whatsoever.
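The silent failure can at least be diagnosed by hand. A minimal sketch, not a shipped tool: the four daemon names are taken from the socket paths in the debug log above, and `SOCK_DIR` is a variable introduced here so the check can be pointed at a non-default socket directory.

```shell
#!/bin/sh
# List which libvirt daemon sockets exist. The four names come from the
# socket paths in the debug log above; a missing socket means the matching
# modular daemon (or its systemd socket unit) is not running.
SOCK_DIR="${SOCK_DIR:-/var/run/libvirt}"
missing=""
for d in virtqemud virtnetworkd virtstoraged virtnodedevd; do
  s="$SOCK_DIR/${d}-sock"
  if [ -S "$s" ]; then
    echo "present: $s"
  else
    echo "missing: $s"
    missing="$missing $d"
  fi
done
```

On a host where only some daemons were enabled (the situation in this bug), the "missing" lines name exactly the services that still need to be started.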
Comment 9 James Fehlig 2023-11-30 15:29:34 UTC
(In reply to Charles Arnold from comment #1)
> It looks like you are using the new modular libvirt daemons.

By default, the monolithic libvirtd is enabled in Leap 15.5. AFAIK, Michal has not actively switched to using modular daemons. Is the libvirtd socket unit enabled/started (systemctl status libvirtd.socket)? Does virsh work fine without modular daemons? E.g. network functionality can be checked with 'virsh net-list --all' and storage functionality with 'virsh pool-list --all'.
Comment 10 James Fehlig 2023-11-30 15:34:05 UTC
(In reply to Charles Arnold from comment #7)
> Just so you know, upstream has completely removed the checks in virt-manager
> for seeing if libvirtd is installed.

We'll need to ensure libvirtd socket is active on 15.5 and older, and modular daemon sockets are active on anything newer. Last I checked TW, all daemons were enabled/active.

> Should I add a hard requirement in the spec file for virt-manager to
> include some of these other daemons?

No. We want to ensure it's possible to install virt-manager on a client machine with no hypervisor.
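For reference, the "ensure the sockets are active" step above can be sketched as follows. This is a hypothetical helper, not an official tool: the modular unit names are assumed to match the socket paths in the debug log, and the commands are printed rather than executed so they can be reviewed before being run as root.

```shell
#!/bin/sh
# Print the systemctl commands that would activate the daemon sockets
# virt-manager needs. Assumption: modular unit names match the socket
# names in the debug log; on Leap 15.5 and older, the single
# libvirtd.socket is the monolithic equivalent. Pipe the output to a
# root shell to actually apply it.
modular="virtqemud.socket virtnetworkd.socket virtstoraged.socket virtnodedevd.socket"
monolithic="libvirtd.socket"
for u in $modular; do
  echo "systemctl enable --now $u"
done
echo "# or, monolithic: systemctl enable --now $monolithic"
```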
Comment 11 Michal Suchanek 2023-11-30 17:19:40 UTC
I don't know about modular vs monolithic daemons.

We ship with the virtualization services disabled (and they are not installed either unless the KVM host role is picked during installation).

I enabled a service that looked like one that would manage local VMs, but virt-manager would not work, and only the debug log shows that it is looking for two other services.

Like how is anybody ever supposed to get this running?
Comment 12 Charles Arnold 2023-11-30 17:31:21 UTC
There are two main ways.
The first is as you mentioned. The role is selected at host installation for
KVM and tools. This should get users a functioning setup.

The second is after host installation. Users may run 'Install Hypervisor and
Tools' from a GUI menu or 'yast2 virtualization' from the command line and
there select their desired virtualization pattern (KVM server, KVM Tools,
or both).

Other ways of enabling KVM virtualization on the host such as using zypper
and systemctl are possible but not recommended.
Comment 13 Michal Suchanek 2023-11-30 17:45:28 UTC
Ok, that's if people are running SLE/Leap.

How are they supposed to do this in general?

This is a failure of the upstream tool to show any diagnostics about the problem.
Comment 14 Michal Suchanek 2023-11-30 18:14:44 UTC
Also 'yast2 virtualization' is unusable: it wants to install *a lot* of stuff that is definitely not needed for a 'minimal kvm host', and it wants to create a bridge, which does not work for me because WiFi cannot be bridged.
Comment 15 James Fehlig 2023-12-01 00:15:41 UTC
(In reply to Michal Suchanek from comment #11)
> I don't know about modular vs monolithic daemons.

Yeah, and you really shouldn't have to. Unfortunately for Leap, the systemd presets do not include libvirtd. It is included in the SLE15 SP5 presets, and the TW presets contain all the required modular daemons. With the correct presets, the daemon(s) are started when the packages are installed.

> Like how is anybody ever supposed to get this running?

As Charles mentioned, you can install with roles, patterns, or the yast module. If those don't suit your needs, you can install a functional libvirt+kvm stack with

zypper install libvirt-daemon-qemu libvirt-client

On SLE15 SP5 or TW, that should get you a working setup with necessary daemons enabled and started. 'virsh list --all; virsh net-list --all; virsh pool-list --all' should all work without further configuration.

I tried installing those packages on a fresh Leap 15.5 install, and the virsh commands worked fine after I enabled/started libvirtd (systemctl enable libvirtd.service; systemctl start libvirtd.service). I then installed virt-manager, started it as a normal user, and it connected to the system daemon just fine after providing root passwd.

BTW, I'm using packages from the standard 15.5 update repos, nothing from the Virtualization project.
Comment 16 James Fehlig 2023-12-01 00:21:46 UTC
Hopefully my change to the SP6 presets will make its way to Leap 15.6, so users won't have to fuss with or care about these daemons. Perhaps I need to explicitly submit it to 15.6?

https://build.suse.de/package/rdiff/SUSE:SLE-15-SP6:GA/systemd-presets-branding-SLE?linkrev=base&rev=2
Comment 17 Charles Arnold 2024-06-14 20:10:40 UTC
(In reply to James Fehlig from comment #16)
> Hopefully my change to the SP6 presets will make its way to Leap 15.6, so
> users won't have to fuss with or care about these daemons. Perhaps I need to
> explicitly submit it to 15.6?
> 
> https://build.suse.de/package/rdiff/SUSE:SLE-15-SP6:GA/systemd-presets-
> branding-SLE?linkrev=base&rev=2

It should be automatically pushed to Leap 15.6 when submitted to SLE-15-SP6.
I think this is resolved now. Closing.