Bug 1215970

Summary: podman-compose will silently fail due to missing netavark dependency
Product: [openSUSE] openSUSE Tumbleweed
Reporter: samuel norbury <samuel>
Component: Containers
Assignee: Containers Team <containers-bugowner>
Status: RESOLVED WONTFIX
QA Contact: E-mail List <qa-bugs>
Severity: Normal
Priority: P5 - None
CC: dcermak
Version: Current
Target Milestone: ---
Hardware: aarch64
OS: openSUSE Tumbleweed
Whiteboard:
Found By: ---
Services Priority:
Business Priority:
Blocker: ---
Marketing QA Status: ---
IT Deployment: ---

Description samuel norbury 2023-10-05 12:16:53 UTC

I had a Raspberry Pi 4B running openSUSE Tumbleweed on which I installed podman a while ago, probably before netavark became the default networking backend. After recently updating podman to 4.x.x and installing podman-compose-python311, I could not get inter-container communication to work due to DNS issues. This turned out to be because podman-compose (the current version) passes flags that podman 4.x.x in CNI networking mode is not compatible with. Setting the networking backend to 'netavark' in the podman configuration then made podman unusable, because netavark is not installed alongside podman-compose.

Reproducible: Always

Steps to Reproduce:
1. Install podman (4.x.x) and podman-compose (any version)
2. Create a simple docker-compose.yml with two alpine containers
3. Try to reach one container from the other, e.g. with curl
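
For step 2, a minimal docker-compose.yml might look like the following (the service names `web` and `client` and the `sleep infinity` commands are illustrative, not taken from the report):

```yaml
version: "3"
services:
  web:
    image: alpine:latest
    command: sleep infinity
  client:
    image: alpine:latest
    command: sleep infinity
```

After `podman-compose up -d`, one would try to resolve the other service by name from inside a container, e.g. `podman-compose exec client ping -c1 web`. With the setup described here (podman 4.x.x still on the CNI backend plus current podman-compose), the name lookup is what fails.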


Actual Results:  
DNS resolution fails: the other container's name cannot be resolved.

Expected Results:  
The container name resolves and the connection is attempted; curl fails only because no server is listening in the other container.

This is almost a duplicate of https://bugzilla.opensuse.org/show_bug.cgi?id=1212282, but in my opinion netavark should be a dependency of podman-compose rather than of podman itself.
Comment 1 Dan Čermák 2023-10-24 08:18:10 UTC
I am afraid you are facing a general upgrade issue. If you installed podman before the switch to netavark, your setup uses CNI for networking, and podman will *continue* to use CNI even if netavark is installed. The only way to switch an existing podman setup is a full reset: there is no migration from CNI to netavark, only tearing everything down and setting it back up.

Hence requiring netavark will not solve the issue that you are facing.
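
For readers who land here with the same symptom, the upgrade path described above amounts to a full reset followed by an explicit backend switch. A sketch, assuming netavark and aardvark-dns are installed (the config file location and the `podman info` template field are my additions, not part of this report; the reset destroys all existing containers, images, volumes, and networks, so back up anything you need first):

```
# WARNING: removes ALL containers, images, volumes, and networks.
podman system reset

# Select the netavark backend explicitly in containers.conf
# (/etc/containers/containers.conf or ~/.config/containers/containers.conf):
#   [network]
#   network_backend = "netavark"

# Verify which backend podman is now using:
podman info --format '{{.Host.NetworkBackend}}'
```

If `podman info` still reports `cni` after the reset, the backend setting was not picked up from the expected config file.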