Bug 1213811 - podman network unreachable after starting docker
Summary: podman network unreachable after starting docker
Status: RESOLVED WONTFIX
Alias: None
Product: PUBLIC SUSE Linux Enterprise Server 15 SP5
Classification: openSUSE
Component: Containers
Version: unspecified
Hardware: Other Other
Importance: P5 - None : Normal
Target Milestone: ---
Assignee: Danish Prakash
QA Contact:
URL: https://openqa.suse.de/tests/11704712...
Whiteboard:
Keywords:
Depends on:
Blocks:
 
Reported: 2023-07-31 09:47 UTC by Martin Loviska
Modified: 2024-02-21 08:13 UTC (History)
6 users

See Also:
Found By: openQA
Services Priority:
Business Priority:
Blocker: Yes
Marketing QA Status: ---
IT Deployment: ---


Attachments
tcpdump comparison (215.98 KB, image/png)
2023-07-31 13:08 UTC, Felix Niederwanger
sysctl -a output before and after restoring the podman network (100.00 KB, application/x-tar)
2023-08-03 11:50 UTC, Felix Niederwanger

Description Martin Loviska 2023-07-31 09:47:15 UTC
## Observation

openQA test in scenario sle-15-SP5-BCI-Updates-x86_64-openjdk-devel_17_on_SLES_15-SP5@64bit fails in
[_root_BCI-tests_all_podman](https://openqa.suse.de/tests/11704712/modules/_root_BCI-tests_all_podman/steps/7)

Running any BCI container on a SLE 15 SP5 host fails with DNS resolution errors, e.g. during `zypper dup`:

```
zypper -n dup --from SLE_BCI -l -d -D --no-allow-vendor-change --allow-downgrade --no-allow-arch-change
exit_status=106

stdout:
Refreshing service 'container-suseconnect-zypp'.
Warning: Skipping service 'container-suseconnect-zypp' because of the above error.
Retrieving repository 'SLE_BCI' metadata [.error]
Warning: Skipping repository 'SLE_BCI' because of the above error.
Loading repository data...
Reading installed packages...
Computing distribution upgrade...
Nothing to do.

stderr:
Problem retrieving the repository index file for service 'container-suseconnect-zypp':
[container-suseconnect-zypp|file:/usr/lib/zypp/plugins/services/container-suseconnect-zypp] 
Repository 'SLE_BCI' is invalid.
[SLE_BCI|https://updates.suse.com/SUSE/Products/SLE-BCI/15-SP5/x86_64/product/] Valid metadata not found at specified URL
History:
 - [|] Error trying to read from 'https://updates.suse.com/SUSE/Products/SLE-BCI/15-SP5/x86_64/product/'
 - Download (curl) error for 'https://updates.suse.com/SUSE/Products/SLE-BCI/15-SP5/x86_64/product/content':
   Error code: Connection failed
   Error message: Could not resolve host: updates.suse.com
Please check if the URIs defined for this repository are pointing to a valid repository.
Some of the repositories have not been refreshed because of an error.
```

Manual testing showed that the issue occurs only when podman is used as the container runtime.

#### packages

slirp4netns-1.2.0-150500.1.1.x86_64
podman-4.4.4-150500.1.4.x86_64
podman-cni-config-4.4.4-150500.1.4.noarch
cni-1.1.2-150500.1.20.x86_64
cni-plugins-1.1.1-150500.1.19.x86_64

#### Steps to reproduce

1) Failing with podman:
podman run -t registry.suse.de/suse/sle-15-sp5/update/cr/totest/images/bci/openjdk:17 zypper -n dup

```
Refreshing service 'container-suseconnect-zypp'.
Problem retrieving the repository index file for service 'container-suseconnect-zypp':
[container-suseconnect-zypp|file:/usr/lib/zypp/plugins/services/container-suseconnect-zypp] 
Warning: Skipping service 'container-suseconnect-zypp' because of the above error.
Retrieving repository 'SLE_BCI' metadata ...[error]
Repository 'SLE_BCI' is invalid.
[SLE_BCI|https://updates.suse.com/SUSE/Products/SLE-BCI/15-SP5/x86_64/product/] Valid metadata not found at specified URL
History:
 - [|] Error trying to read from 'https://updates.suse.com/SUSE/Products/SLE-BCI/15-SP5/x86_64/product/'
 - Download (curl) error for 'https://updates.suse.com/SUSE/Products/SLE-BCI/15-SP5/x86_64/product/content':
   Error code: Connection failed
   Error message: Could not resolve host: updates.suse.com
Please check if the URIs defined for this repository are pointing to a valid repository.
Warning: Skipping repository 'SLE_BCI' because of the above error.
Some of the repositories have not been refreshed because of an error.
Loading repository data...
Reading installed packages...
Warning: You are about to do a distribution upgrade with all enabled repositories. Make sure these repositories are compatible before you continue. See 'man zypper' for more information about this command.
Computing distribution upgrade...
Nothing to do.
```

2) Works with docker:
docker run -t registry.suse.de/suse/sle-15-sp5/update/cr/totest/images/bci/openjdk:17 zypper -n dup
```
Refreshing service 'container-suseconnect-zypp'.
Adding repository 'SLE-Module-Basesystem15-SP5-Debuginfo-Pool for sle-15-x86_64' ...[done]
Adding repository 'SLE-Module-Basesystem15-SP5-Debuginfo-Updates for sle-15-x86_64' ...[done]
Adding repository 'SLE-Module-Basesystem15-SP5-Pool for sle-15-x86_64' ...[done]
Adding repository 'SLE-Module-Basesystem15-SP5-Source-Pool for sle-15-x86_64' ...[done]
Adding repository 'SLE-Module-Basesystem15-SP5-Updates for sle-15-x86_64' ...[done]
Adding repository 'SLE-Module-Server-Applications15-SP5-Debuginfo-Pool for sle-15-x86_64' ...[done]
Adding repository 'SLE-Module-Server-Applications15-SP5-Debuginfo-Updates for sle-15-x86_64' ...[done]
Adding repository 'SLE-Module-Server-Applications15-SP5-Pool for sle-15-x86_64' ...[done]
Adding repository 'SLE-Module-Server-Applications15-SP5-Source-Pool for sle-15-x86_64' ...[done]
Adding repository 'SLE-Module-Server-Applications15-SP5-Updates for sle-15-x86_64' ...[done]
Adding repository 'SLE-Product-SLES15-SP5-Debuginfo-Pool for sle-15-x86_64' ...[done]
Adding repository 'SLE-Product-SLES15-SP5-Debuginfo-Updates for sle-15-x86_64' ...[done]
Adding repository 'SLE-Product-SLES15-SP5-Pool for sle-15-x86_64' ...[done]
Adding repository 'SLE-Product-SLES15-SP5-Source-Pool for sle-15-x86_64' ...[done]
Adding repository 'SLE-Product-SLES15-SP5-Updates for sle-15-x86_64' ...[done]
Retrieving repository 'SLE_BCI' metadata ...[done]
Building repository 'SLE_BCI' cache ...[done]
Building repository 'SLE-Module-Basesystem15-SP5-Pool for sle-15-x86_64' cache ...[done]
Retrieving repository 'SLE-Module-Basesystem15-SP5-Updates for sle-15-x86_64' metadata ...[done]
Building repository 'SLE-Module-Basesystem15-SP5-Updates for sle-15-x86_64' cache ...[done]
Building repository 'SLE-Module-Server-Applications15-SP5-Pool for sle-15-x86_64' cache ...[done]
Retrieving repository 'SLE-Module-Server-Applications15-SP5-Updates for sle-15-x86_64' metadata ...[done]
Building repository 'SLE-Module-Server-Applications15-SP5-Updates for sle-15-x86_64' cache ...[done]
Building repository 'SLE-Product-SLES15-SP5-Pool for sle-15-x86_64' cache ...[done]
Retrieving repository 'SLE-Product-SLES15-SP5-Updates for sle-15-x86_64' metadata ...[done]
Building repository 'SLE-Product-SLES15-SP5-Updates for sle-15-x86_64' cache ...[done]
Loading repository data...
Reading installed packages...
Warning: You are about to do a distribution upgrade with all enabled repositories. Make sure these repositories are compatible before you continue. See 'man zypper' for more information about this command.
Computing distribution upgrade...

The following package is going to be downgraded:
  libsigc-2_0-0

1 package to downgrade.
Overall download size: 53.6 KiB. Already cached: 0 B. After the operation, additional 24.0 B will be used.

Continue? [y/n/v/...? shows all options] (y): y
Retrieving: libsigc-2_0-0-2.10.7-150400.1.7.x86_64 (SLE-Module-Basesystem15-SP5-Pool for sle-15-x86_64) (1/1), 53.6 KiB
Retrieving: libsigc-2_0-0-2.10.7-150400.1.7.x86_64.rpm ...[done (32.0 KiB/s)]
Checking for file conflicts: ...[done]
(1/1) Installing: libsigc-2_0-0-2.10.7-150400.1.7.x86_64 ...[done]
```


## Test suite description
The base test suite is used for job templates defined in YAML documents. It has no settings of its own.


## Reproducible

Fails since (at least) Build [10.34_openjdk-17-devel-image](https://openqa.suse.de/tests/11696121)


## Expected result

Last good: (unknown) (or more recent)


## Further details

Always latest result in this scenario: [latest](https://openqa.suse.de/tests/latest?arch=x86_64&distri=sle&flavor=BCI-Updates&machine=64bit&test=openjdk-devel_17_on_SLES_15-SP5&version=15-SP5)
Comment 1 Dirk Mueller 2023-07-31 11:29:10 UTC
I think this is simply the dns flooding issue that we have in github ci as well. the issue seems to be that the 8.8.8.8/8.8.4.4 resolvers are not responding anymore if you ask for the same thing too quickly. 

the workaround in github ci is

https://github.com/SUSE/BCI-tests/blob/main/.github/workflows/ci.yaml#L301-L306
Comment 2 Dan Čermák 2023-07-31 12:08:20 UTC
(In reply to Dirk Mueller from comment #1)
> I think this is simply the dns flooding issue that we have in github ci as
> well. the issue seems to be that the 8.8.8.8/8.8.4.4 resolvers are not
> responding anymore if you ask for the same thing too quickly. 
> 
> the workaround in github ci is
> 
> https://github.com/SUSE/BCI-tests/blob/main/.github/workflows/ci.yaml#L301-
> L306

Only podman is affected and only on SLE 15 SP5, hence I think this might be a bug in podman/netavark/cni.
Comment 3 Felix Niederwanger 2023-07-31 13:08:30 UTC
Created attachment 868537 [details]
tcpdump comparison

This issue is weird.

I could not reproduce this in a 15-SP5 VM, nor could I reproduce it using the openQA qcow2 image on my laptop. However, it is consistently present within openQA, and even a job cloned to a different worker fails with the same symptoms: https://duck-norris.qe.suse.de/tests/13377

I'm using the same hard disk image on my laptop and it works nicely.

It's rather simple to reproduce the issue:

> podman run registry.opensuse.org/opensuse/tumbleweed curl https://suse.com

A tcpdump of the running container shows that the DNS requests go out to 10.0.2.3 (qemu), but no replies come back. When doing the same on my laptop using the openQA disk images, I get replies.

See the attached screenshot for a comparison of `tcpdump -n -i cni-podman0` output between the VM on my laptop (left) and the same VM in openQA (right).
Comment 4 Dirk Mueller 2023-07-31 13:55:25 UTC
(In reply to Felix Niederwanger from comment #3)

> tcpdump of the running container shows that the DNS requests go out to
> 10.0.2.3 (qemu) but no replies are coming back. When doing the same on my
> laptop using the openQA disk images, I get replies.

So then it needs to be traced on the qemu host. What is the /etc/resolv.conf setup on the host?
Comment 5 Felix Niederwanger 2023-07-31 14:33:48 UTC
resolv.conf

> nameserver 10.0.2.3
> nameserver fec0::3

You can see in my previous screenshot that on both hosts the DNS packet for 10.0.2.3 is being sent, but only on the local VM (not openQA) a response is being received.
Comment 6 Dirk Mueller 2023-07-31 15:13:28 UTC
(In reply to Felix Niederwanger from comment #5)

> resolv.conf
> > nameserver 10.0.2.3
> > nameserver fec0::3

That's the resolv.conf on the host that the VM is running on?

> You can see in my previous screenshot that on both hosts the DNS packet for
> 10.0.2.3 is being sent, but only on the local VM (not openQA) a response is
> being received.

I understand, but 10.0.2.3 is the magic IP for "qemu's internal DNS server", i.e. gethostbyname() being executed on the host on which qemu is running. So we need to look at the DNS setup there.
Comment 7 Felix Niederwanger 2023-08-03 11:47:38 UTC
I found a reproducer. On SLES 15-SP5 (Minimal VM), after installing docker and podman, and starting docker, the podman network does not work.

See the following terminal session:

> rofflkischte:~ # zypper in docker podman
> ...
> 
> The following 14 NEW packages are going to be installed:
>   catatonit cni cni-plugins conmon containerd docker fuse-overlayfs libcontainers-common libcontainers-sles-mounts libfuse3-3 libslirp0 podman runc slirp4netns
> 
> 14 new packages to install.
> ...
> rofflkischte:~ # systemctl start docker
> rofflkischte:~ # podman run --rm registry.opensuse.org/opensuse/tumbleweed curl https://opensuse.org/
> Trying to pull registry.opensuse.org/opensuse/tumbleweed:latest...
> ...
> WARN[0005] Path "/etc/SUSEConnect" from "/etc/containers/mounts.conf" doesn't exist, skipping
>   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
>                                  Dload  Upload   Total   Spent    Left  Speed
>   0     0    0     0    0     0      0      0 --:--:--  0:00:09 --:--:--     0curl: (6) Could not resolve host: opensuse.org

To resolve the issue, docker needs to be stopped and firewalld needs to be restarted. Restarting firewalld alone does not restore the podman network connectivity, as shown below (cropped to the relevant output):

> rofflkischte:~ # systemctl --no-pager restart firewalld
> rofflkischte:~ # podman run --rm registry.opensuse.org/opensuse/tumbleweed curl https://opensuse.org/
>   0     0    0     0    0     0      0      0 --:--:--  0:00:09 --:--:--     0curl: (6) Could not resolve host: opensuse.org
> rofflkischte:~ # systemctl stop docker
> rofflkischte:~ # podman run --rm registry.opensuse.org/opensuse/tumbleweed curl https://opensuse.org/
>   0     0    0     0    0     0      0      0 --:--:--  0:00:09 --:--:--     0curl: (6) Could not resolve host: opensuse.org
> rofflkischte:~ # systemctl --no-pager restart firewalld
> rofflkischte:~ # podman run --rm registry.opensuse.org/opensuse/tumbleweed curl https://opensuse.org/
>   0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0

Furthermore, the issue persists across a reboot. If docker and firewalld are enabled, the default podman network does not work after a reboot:

> # After reboot with enabled docker
> rofflkischte:~ # podman run --rm registry.opensuse.org/opensuse/tumbleweed curl https://opensuse.org/
>   0     0    0     0    0     0      0      0 --:--:--  0:00:09 --:--:--     0curl: (6) Could not resolve host: opensuse.org

Connectivity can be restored again by stopping docker and restarting firewalld.

Interestingly, once connectivity has been restored this way, docker can be started again and the connectivity remains intact:

> rofflkischte:~ # podman run --rm registry.opensuse.org/opensuse/tumbleweed curl https://opensuse.org/
>   0     0    0     0    0     0      0      0 --:--:--  0:00:09 --:--:--     0curl: (6) Could not resolve host: opensuse.org
> rofflkischte:~ # systemctl stop docker
> rofflkischte:~ # systemctl restart firewalld
> rofflkischte:~ # podman run --rm registry.opensuse.org/opensuse/tumbleweed curl https://opensuse.org/
>   0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
> rofflkischte:~ # systemctl start docker
> rofflkischte:~ # podman run --rm registry.opensuse.org/opensuse/tumbleweed curl https://opensuse.org/
>   0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
Comment 8 Felix Niederwanger 2023-08-03 11:50:11 UTC
Created attachment 868613 [details]
sysctl -a output before and after restoring the podman network

I also extracted the sysctl variables (`sysctl -a`) before and after the podman network was restored via `systemctl stop docker && systemctl restart firewalld`. There are some visible differences in the network configuration; I'm attaching a tar archive with them:

* sysctl-before.txt - broken podman network, BEFORE stopping docker & restarting firewalld
* sysctl-after.txt - working podman network, AFTER stopping docker & restarting firewalld
* sysctl-diff.txt - Diff between sysctl-before.txt and sysctl-after.txt
Comment 9 Danish Prakash 2023-08-09 12:06:53 UTC
Looks like this is related to [1]. For podman, could you try `podman network reload` after restarting firewalld?
For docker, I don't know of a command yet, but I think restarting the daemon would rewrite the iptables rules, which should bring the network back up.

As mentioned in [1], we're looking into providing the fix and also documenting the issue for podman. For docker, on the other hand, I've yet to explore the possible solutions or workarounds, if any, to see how we can proceed.

[1] - https://bugzilla.suse.com/show_bug.cgi?id=1214080
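For reference, the two suggested recovery paths boil down to the following sketch (assumed commands run as root; `podman network reload` is documented in podman-network-reload(1), while the docker side is only my guess that a daemon restart rewrites its iptables rules):

```
# For podman: re-create the firewall rules of all podman networks
podman network reload --all

# For docker: no dedicated command known; restarting the daemon
# is assumed to rewrite its iptables rules on startup
systemctl restart docker.service
```

Whether `podman network reload` actually restores connectivity in this scenario is tested in the comments below.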
Comment 10 Felix Niederwanger 2023-08-09 12:11:01 UTC
Thanks for the update. I'm working on our workaround now and will check whether your suggestion is also applicable to the docker case.
Comment 11 Felix Niederwanger 2023-08-09 12:17:20 UTC
(In reply to Danish Prakash from comment #9)
> Looks like this is related to [1]. For podman, could you try `podman network
> reload` after restarting firewalld? 

Nope, that doesn't seem to work, at least not on Tumbleweed:

> tumbleweed:~ # zypper in docker podman
> ...
> tumbleweed:~ # systemctl start docker
> tumbleweed:~ # podman run --rm registry.opensuse.org/opensuse/tumbleweed curl https://opensuse.org/
> ...
> curl: (6) Could not resolve host: opensuse.org
> tumbleweed:~ # podman network reload -a
> tumbleweed:~ # podman run --rm registry.opensuse.org/opensuse/tumbleweed curl https://opensuse.org/
> ...
> curl: (6) Could not resolve host: opensuse.org

I assume `podman network reload --all` should be the right approach?
Comment 12 Felix Niederwanger 2023-08-17 11:17:35 UTC
Ping?
Comment 13 Felix Niederwanger 2023-09-04 07:14:26 UTC
Ping?
Comment 14 Bruno Leon 2023-09-05 07:57:32 UTC
I cannot reproduce on TBLW:

9:54:29 root@tblw-3 ~# podman ps
CONTAINER ID  IMAGE       COMMAND     CREATED     STATUS      PORTS       NAMES
9:54:31 root@tblw-3 ~# docker ps
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
[1]    952 exit 1     docker ps
9:54:32 root@tblw-3 ~# systemctl start docker
9:54:38 root@tblw-3 ~# podman ps             
CONTAINER ID  IMAGE       COMMAND     CREATED     STATUS      PORTS       NAMES
9:54:40 root@tblw-3 ~# docker ps             
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
9:54:42 root@tblw-3 ~# podman run --rm registry.opensuse.org/opensuse/tumbleweed curl https://opensuse.org/
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
9:54:47 root@tblw-3 ~# docker run --rm registry.opensuse.org/opensuse/tumbleweed curl https://opensuse.org/ 
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
9:54:56 root@tblw-3 ~#
Comment 15 Felix Niederwanger 2023-09-05 14:21:37 UTC
Bruno, the reproducer you used appears correct to me. However, I can still reproduce the issue on a fresh Tumbleweed VM using the Minimal-VM image from get.opensuse.org. I used the Minimal-VM image because it's the easiest way to set up a TW VM.

> macflurry:~ # zypper in docker podman
> macflurry:~ # reboot
> macflurry:~ # systemctl start docker
> macflurry:~ # podman run --rm registry.opensuse.org/opensuse/tumbleweed curl https://opensuse.org/
> 
>   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
>                                  Dload  Upload   Total   Spent    Left  Speed
>   0     0    0     0    0     0      0      0 --:--:--  0:00:09 --:--:--     0curl: (6) Could not resolve host: opensuse.org
> ...
> macflurry:~ # podman network reload -a
> macflurry:~ # podman run --rm registry.opensuse.org/opensuse/tumbleweed curl https://opensuse.org/
>   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
>                                  Dload  Upload   Total   Spent    Left  Speed
>   0     0    0     0    0     0      0      0 --:--:--  0:00:09 --:--:--     0curl: (6) Could not resolve host: opensuse.org
> macflurry:~ # 

Here is the IP configuration of the VM. I have IPv4 and IPv6 enabled:

> macflurry:~ # ip a
> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>     inet 127.0.0.1/8 scope host lo
>        valid_lft forever preferred_lft forever
>     inet6 ::1/128 scope host noprefixroute 
>        valid_lft forever preferred_lft forever
> 2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
>     link/ether 52:54:00:d3:82:05 brd ff:ff:ff:ff:ff:ff
>     inet 192.168.122.248/24 brd 192.168.122.255 scope global dynamic noprefixroute enp1s0
>        valid_lft 3385sec preferred_lft 3385sec
>     inet6 fee8::3482:6b24:439a:9db5/64 scope site temporary dynamic 
>        valid_lft 604587sec preferred_lft 86095sec
>     inet6 fee8::15c2:7797:8b8e:a81e/64 scope site mngtmpaddr noprefixroute 
>        valid_lft forever preferred_lft forever
>     inet6 fe80::f9ef:b5c5:e547:3579/64 scope link noprefixroute 
>        valid_lft forever preferred_lft forever
> 3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
>     link/ether 02:42:72:2d:90:f4 brd ff:ff:ff:ff:ff:ff
>     inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
>        valid_lft forever preferred_lft forever
> 4: cni-podman0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
>     link/ether 6a:20:89:0d:97:24 brd ff:ff:ff:ff:ff:ff
>     inet 10.88.0.1/16 brd 10.88.255.255 scope global cni-podman0
>        valid_lft forever preferred_lft forever
>     inet6 fe80::6820:89ff:fe0d:9724/64 scope link proto kernel_ll 
>        valid_lft forever preferred_lft forever

Disabling IPv6 had no effect; the state is still broken.

Stopping firewalld resolves the issue for me:

> macflurry:~ # systemctl stop firewalld
> macflurry:~ # systemctl restart docker
> 
> # This will work now:
> macflurry:~ # podman run --rm registry.opensuse.org/opensuse/tumbleweed curl https://opensuse.org/
>   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
>                                  Dload  Upload   Total   Spent    Left  Speed
>   0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
> 
> macflurry:~ # systemctl start firewalld
> # will not work:
> macflurry:~ # podman run --rm registry.opensuse.org/opensuse/tumbleweed curl https://opensuse.org/
>   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
>                                  Dload  Upload   Total   Spent    Left  Speed
>   0     0    0     0    0     0      0      0 --:--:--  0:00:09 --:--:--     0curl: (6) Could not resolve host: opensuse.org
> macflurry:~ # systemctl stop firewalld
> # will work again:
> macflurry:~ # podman run --rm registry.opensuse.org/opensuse/tumbleweed curl https://opensuse.org/
>   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
>                                  Dload  Upload   Total   Spent    Left  Speed
>   0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
> macflurry:~ # 

Was firewalld enabled in your test runs?
Comment 16 Bruno Leon 2023-09-15 09:29:49 UTC
Hello,

I managed to reproduce this on a fresh Tumbleweed installed from the latest ISO.

After many trials I could not identify firewalld as the reason for this behavior, as the ruleset does not change whether or not docker is started.

A second reboot fixes the issue, but I dug a bit deeper, and simply resetting podman fixes the issue too.

I could not determine why, but once docker is started, podman packets no longer seem to be NATed on the output interface (I initially thought they did not reach it, but pinging it works, which proved me wrong).

Resetting podman with
> podman system reset --force
does fix it by recreating the network interface.

However, while debugging this on a new TBLW, the main difference from my previous machine was simply that my other setup uses netavark as the network backend.

I tried it on a fresh machine, forcing the usage of netavark as the podman network backend:
```
localhost:~ # podman run --rm registry.opensuse.org/opensuse/tumbleweed curl https://opensuse.org/
Trying to pull registry.opensuse.org/opensuse/tumbleweed:latest...
Getting image source signatures
Copying blob f09e6956248a done  
Copying config 3685de2eb5 done  
Writing manifest to image destination
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
localhost:~ # docker run --rm registry.opensuse.org/opensuse/tumbleweed curl https://opensuse.org/
docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?.
See 'docker run --help'.
localhost:~ # systemctl start docker.service 
localhost:~ # docker run --rm registry.opensuse.org/opensuse/tumbleweed curl https://opensuse.org/
Unable to find image 'registry.opensuse.org/opensuse/tumbleweed:latest' locally
latest: Pulling from opensuse/tumbleweed
f09e6956248a: Pull complete 
Digest: sha256:47b178b461471cceb6b157da228bb794e7a7d06913c459a27af978d939a38bbd
Status: Downloaded newer image for registry.opensuse.org/opensuse/tumbleweed:latest
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
localhost:~ # podman run --rm registry.opensuse.org/opensuse/tumbleweed curl https://opensuse.org/
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
localhost:~ # podman info --format {{.Host.NetworkBackend}}
netavark
localhost:~ # 
```

And it works out of the box.
So I would advise strongly recommending the usage of netavark, given that CNI is the legacy way of doing networking in podman.
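For anyone wanting to try this on an affected system, a sketch of how the backend can be forced to netavark via containers.conf (assumptions: netavark and aardvark-dns are installed, /etc/containers/containers.conf is the system-wide config location, and a `podman system reset` may be needed on a host that already has CNI-managed networks):

```
# /etc/containers/containers.conf
[network]
# Select netavark instead of the legacy CNI backend
network_backend = "netavark"
```

The active backend can then be verified with `podman info --format {{.Host.NetworkBackend}}`, as in the session above.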
Comment 17 Felix Niederwanger 2024-02-21 07:44:42 UTC
Hello Bruno. Since this is a rare corner case, which likely affects nobody, and even fewer people are willing to work on this issue, perhaps it's time to admit defeat and close this as a WONTFIX?
Comment 18 Dan Čermák 2024-02-21 08:13:14 UTC
(In reply to Felix Niederwanger from comment #17)
> Hello Bruno. Since this is a rare corner case, which likely affects nobody,
> and even fewer people are willing to work on this issue, perhaps it's time
> to admit defeat and close this as a WONTFIX?


It shall be, as you say