Bugzilla – Full Text Bug Listing
| Summary: | Snapshot 20240606 breaks libvirt NAT networks | | |
|---|---|---|---|
| Product: | [openSUSE] openSUSE Tumbleweed | Reporter: | Eyad Issa <eyadlorenzo> |
| Component: | Virtualization:Other | Assignee: | James Fehlig <jfehlig> |
| Status: | RESOLVED INVALID | QA Contact: | E-mail List <qa-bugs> |
| Severity: | Normal | | |
| Priority: | P5 - None | CC: | alexander.graul, aplanas, eyadlorenzo, santiago.zarate |
| Version: | Current | | |
| Target Milestone: | --- | | |
| Hardware: | x86-64 | | |
| OS: | Other | | |
| Whiteboard: | | | |
| Found By: | --- | Services Priority: | |
| Business Priority: | | Blocker: | --- |
| Marketing QA Status: | --- | IT Deployment: | --- |
| Attachments: | Output of virsh net-dumpxml default |
| | Output of virsh test-slem5 |
| | Output of nft list ruleset ip |
Description  Eyad Issa  2024-06-07 22:33:49 UTC

---

I'm unable to reproduce using 20240607. The VM is connected to the 'default' NAT network, obtains an IP address, and can reach external hosts. Nor do I see any of the journal messages you mention. To rule out AppArmor, can you stop it, unload all profiles, and see if your VM can then reach the external net? E.g.:

systemctl stop apparmor.service
aa-teardown

Can you also provide the configuration of your libvirt NATed network? E.g.:

virsh net-dumpxml <network-name>

---

I tried to upgrade to 20240609 and now it apparently works again. Sorry for the trouble!

---

I have the same issue. Existing and new VMs alike can't communicate with external hosts, e.g.:
localhost:~ # tracepath suse.com
1?: [LOCALHOST] pmtu 1500
1: 192.168.122.1 0.159ms
1: 192.168.122.1 0.077ms
2: no reply
3: no reply
The VM has the following network configuration:
localhost:~ # ip a
[ skipping lo device ]
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 52:54:00:2d:e6:f1 brd ff:ff:ff:ff:ff:ff
altname enp1s0
inet 192.168.122.40/24 brd 192.168.122.255 scope global dynamic noprefixroute eth0
valid_lft 2946sec preferred_lft 2946sec
inet6 fe80::5054:ff:fe2d:e6f1/64 scope link noprefixroute
valid_lft forever preferred_lft forever
localhost:~ # ip r
default via 192.168.122.1 dev eth0 proto dhcp src 192.168.122.40 metric 20100
192.168.122.0/24 dev eth0 proto kernel scope link src 192.168.122.40 metric 100
`virsh domifaddr test-slem5` reports the same from the host (running 20240609).
I'm running the modular daemons (libvirtd.service is not running) and will attach the output of `virsh dumpxml test-slem5` and `virsh net-dumpxml default`. I also tried the other NAT networks I have defined, with the same result (all packets are dropped; tracepath shows only one hop, to the hypervisor).
Please let me know if there is anything else I can share to track this down.
Created attachment 875420 [details]
Output of virsh net-dumpxml default

Created attachment 875421 [details]
Output of virsh test-slem5

---

I'm using a similar network and VM '<interface>' config, but don't see the problem. Do you have nftables installed? If so, what's the output of `nft list ruleset ip`? Also, are any iptables* packages installed?

---

I have these nftables / iptables packages installed:

➤ zypper se -i '/(nf|ip)tables.*/'
S | Name                 | Summary                                                                      | Type
--+----------------------+------------------------------------------------------------------------------+--------
i | iptables             | IP packet filter administration utilities                                    | package
i | iptables-backend-nft | Metapackage to make nft the default backend for iptables/arptables/ebtables  | package
i | libnftables1         | nftables firewalling command interface                                       | package
i | nftables             | Userspace utility to access the nf_tables packet filter                      | package
i | python311-nftables   | Python bindings for nftables                                                 | package

The `nft list ruleset ip` output is a bit long for a comment, I'll add it as an attachment.

Created attachment 875435 [details]
Output of nft list ruleset ip
(In reply to Alexander Graul from comment #7)
> i | iptables-backend-nft | Metapackage to make nft the default backend for
> iptables/arptables/ebtables | package

iptables-backend-nft creates two top-level nftables tables that are shared by all applications using iptables. You can see them in your `nft list ruleset ip` output, with a warning not to touch them :-). With libvirt's switch to using nftables directly, it now creates its own top-level table. And recall that in nftables, for a packet to get through it must be allowed by all top-level tables. Notice in your output that the 'ip filter' table has

type filter hook forward priority filter; policy drop;

This GitLab issue comment has more details: https://gitlab.com/libvirt/libvirt/-/issues/644#note_1940628451

The issue also describes why it worked prior to the switch to using nftables directly, and a workaround: in /etc/libvirt/network.conf, set

firewall_backend = "iptables"

and restart virtnetworkd (or libvirtd if you're using the monolithic daemon). Does this work for you?

FYI, this post has a nice description of the relationship between iptables, iptables-nft, and nftables: https://developers.redhat.com/blog/2020/08/18/iptables-the-two-variants-and-their-relationship-with-nftables#

---

Thank you, setting firewall_backend = "iptables" works! I'll read up on the linked issue and documentation, it's an area I know too little about.

---

I'm closing the bug as invalid again.
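The "allowed by all top-level tables" behavior can be sketched with a minimal ruleset. This is an illustration only, not an excerpt from the attached output: the table names `libvirt_demo` and `filter` and both chains are made up here, standing in for libvirt's own table and the one created by iptables-backend-nft.

```
# Two independent base chains registered at the same 'forward' hook.
# A forwarded packet traverses BOTH; an accept verdict in one chain
# only ends that chain's evaluation, while a drop anywhere is final.

table ip libvirt_demo {          # hypothetical stand-in for libvirt's table
    chain forward {
        type filter hook forward priority filter; policy accept;
        # libvirt would add accept rules for its bridge traffic here
    }
}

table ip filter {                # hypothetical stand-in for iptables-nft's table
    chain FORWARD {
        type filter hook forward priority filter; policy drop;
        # with no matching accept rule here, the packet is dropped
        # regardless of what the table above decided
    }
}
```

This is why tracepath stalls after the 192.168.122.1 hop: the VM's forwarded packets are accepted by libvirt's new nftables rules but still fall through to the iptables-nft table's `policy drop`, and the workaround (firewall_backend = "iptables") makes libvirt insert its rules into that shared table again instead of a separate one.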