Bugzilla – Full Text Bug Listing
Summary: | offline upgrade aborted if snapper fails | ||
---|---|---|---|
Product: | [openSUSE] openSUSE Distribution | Reporter: | Olaf Hering <ohering> |
Component: | YaST2 | Assignee: | YaST Team <yast-internal> |
Status: | CONFIRMED --- | QA Contact: | Jiri Srain <jsrain> |
Severity: | Normal | ||
Priority: | P5 - None | CC: | ancor, aschnell, ohering |
Version: | Leap 15.1 | ||
Target Milestone: | --- | ||
Hardware: | Other | ||
OS: | Other | ||
URL: | https://trello.com/c/pMzijcRv | ||
See Also: | http://bugzilla.suse.com/show_bug.cgi?id=1160918 | ||
Whiteboard: | |||
Found By: | --- | Services Priority: | |
Business Priority: | | Blocker: | ---
Marketing QA Status: | --- | IT Deployment: | --- |
Description
Olaf Hering
2019-03-04 16:33:13 UTC
Comment 1
Stefan Hundhammer

This "installation-helper" call tries to create a pre-update snapshot. It shouldn't do that in the first place. That this fails because there is no .snapshots directory is only a consequence of that.

Now I wonder why it even wants to create that snapshot.

We definitely need the installation logs to debug this scenario. https://en.opensuse.org/openSUSE:Report_a_YaST_bug

Comment 4
Ancor Gonzalez Sosa

(In reply to Stefan Hundhammer from comment #1)
> This "installation-helper" call tries to create a pre-update snapshot. It
> shouldn't do that in the first place. That this fails because there is no
> .snapshots directory is only a consequence of that.
>
> Now I wonder why it even wants to create that snapshot.

From the attached logs:

Checking if Snapper is configured: "/usr/bin/snapper --no-dbus --root=%{root} list-configs | /usr/bin/grep "^root " >/dev/null" returned: {"exit"=>0, "stderr"=>"", "stdout"=>""}

Since the exit code of that command is zero, the installer concludes Snapper is configured and a snapshot can/must be performed.

Ancor Gonzalez Sosa

(In reply to Ancor Gonzalez Sosa from comment #4)
> Since the exit code of that command is zero, the installer concludes Snapper
> is configured and a snapshot can/must be performed.

Needless to say, %{root} is substituted by /mnt when the command is executed.

Olaf, since your system is so special, would you mind pasting the full output of this command?
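To make the weakness of that check concrete, here is a minimal reproduction of its logic. The `snapper` shell function below is a stub (my own, not part of the bug) that mimics the `list-configs` output seen later in this report; grep exits 0 as soon as any line starts with "root ", so the pipeline's exit code only proves a config is listed, not that it is usable.

```shell
# Stub standing in for "/usr/bin/snapper --no-dbus --root=/mnt list-configs";
# it merely prints the config table that snapper printed on the affected system.
snapper() { printf 'root | /\n'; }

# The installer's check: exit code of the pipeline decides whether a
# pre-update snapshot will be attempted.
snapper list-configs | grep "^root " >/dev/null
echo "exit=$?"
```

Because the stub lists a "root" config, this prints `exit=0` even though nothing guarantees the .snapshots subvolume exists.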
/usr/bin/snapper --no-dbus --root=/mnt list-configs

Executed in the installation media, after the system to upgrade has been mounted.

====

If that's too hard, I guess it would be enough to just execute this in any of the systems that are installed into that filesystem:

/usr/bin/snapper --no-dbus list-configs

====

Or alternatively do something like this with a rescue system:

...activate the volume...
mount -t btrfs /dev/sd240_crypt_lvm/sd240_btrfs /mnt
/usr/bin/snapper --no-dbus --root=/mnt list-configs

The first alternative would be the best, but whatever works for you... I guess you get the idea.

Comment 7
Olaf Hering

0:esprimo:~ # /usr/bin/snapper --no-dbus --root=/mnt list-configs
Konfiguration | Subvolumen
--------------+-----------
root          | /
0:esprimo:~ # cat /mnt/etc/snapper/configs/root
# subvolume to snapshot
SUBVOLUME="/"

# filesystem type
FSTYPE="btrfs"

# users and groups allowed to work with config
ALLOW_USERS=""
ALLOW_GROUPS=""

# start comparing pre- and post-snapshot in background after creating
# post-snapshot
BACKGROUND_COMPARISON="yes"

# run daily number cleanup
NUMBER_CLEANUP="yes"

# limit for number cleanup
NUMBER_MIN_AGE="1800"
NUMBER_LIMIT="50"

# create hourly snapshots
TIMELINE_CREATE="yes"

# cleanup hourly snapshots after some time
TIMELINE_CLEANUP="yes"

# limits for timeline cleanup
TIMELINE_MIN_AGE="1800"
TIMELINE_LIMIT_HOURLY="10"
TIMELINE_LIMIT_DAILY="10"
TIMELINE_LIMIT_MONTHLY="10"
TIMELINE_LIMIT_YEARLY="10"

# cleanup empty pre-post-pairs
EMPTY_PRE_POST_CLEANUP="yes"

# limits for empty pre-post-pair cleanup
EMPTY_PRE_POST_MIN_AGE="1800"
0:esprimo:~ # ls -l /mnt/etc/snapper/configs/root
-rw-r--r-- 1 root root 800 Mar 22 2016 /mnt/etc/snapper/configs/root

I'm sure I never used snapper; perhaps a config was included in earlier pkgs?

Comment 8
Arvin Schnell

(In reply to Ancor Gonzalez Sosa from comment #4)
> Since the exit code of that command is zero, the installer concludes Snapper
> is configured and a snapshot can/must be performed.

Which normally is fine.
But the 'list-configs' command does not verify that the configs are actually sound/available. Here that is not the case.

(In reply to Olaf Hering from comment #7)
> I'm sure I never used snapper; perhaps a config was included in earlier pkgs?

The config file is not included in an RPM. Likely it was generated during installation by YaST (using snapper).

Comment 9
Ancor Gonzalez Sosa

(In reply to Arvin Schnell from comment #8)
> But the 'list-configs' command does not verify that the configs are
> actually sound/available. Here that is not the case.

And how can YaST check that?

Arvin Schnell

(In reply to Ancor Gonzalez Sosa from comment #9)
> And how can YaST check that?

So far there is no snapper command to do that. Adding one might look trivial at first, but it likely is not. Apart from checking mount points, one would also have to check mount flags (e.g. read-only, which btrfs also sets during certain errors). ACLs and SELinux might make the checks even more complicated.

I suggest informing the user about the failure to create the snapshot and letting the user decide whether to continue or abort.

As a workaround, just delete the stale snapper configuration; the upgrade process should then not try to create a snapshot, and everything should work. For a better long-term solution, I have created a Trello card so that a more robust fix can be implemented as other tasks and priorities permit.
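One possible extra validation along the lines discussed above is sketched below. This is illustrative only, not an actual YaST or snapper check: the `snapper` function is a stub so the logic runs outside an installed system, and a temporary directory stands in for the mounted target at /mnt. The idea is to require that the .snapshots directory actually exists under the target root in addition to the config being listed.

```shell
# Stub: a "root" config is listed, as on the affected system.
snapper() { printf 'root | /\n'; }

# Stand-in for the mounted target (/mnt on a real upgrade); note that no
# .snapshots directory is created inside it, mirroring the bug's situation.
ROOT=$(mktemp -d)

# Defensive probe: config listed AND .snapshots present under the root.
if snapper --no-dbus --root="$ROOT" list-configs | grep -q '^root ' \
   && [ -d "$ROOT/.snapshots" ]; then
    echo "snapshot possible"
else
    echo "config listed but not usable"
fi
```

With the stub config present but no .snapshots directory, this prints "config listed but not usable", which is exactly the case the plain exit-code check misses. As Arvin notes, a complete check would be harder (mount flags, ACLs, SELinux), so this only narrows the gap.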
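The workaround amounts to moving the stale config file out of the way. The sketch below uses a throwaway directory so it can run anywhere; on the real system the file in question is /mnt/etc/snapper/configs/root, and the backup filename is my own choice.

```shell
# Throwaway directory standing in for the mounted target system.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/etc/snapper/configs"
echo 'SUBVOLUME="/"' > "$ROOT/etc/snapper/configs/root"

# Move the stale config aside (keeping a backup) so the upgrade's
# snapper check no longer finds a "root" configuration.
mv "$ROOT/etc/snapper/configs/root" "$ROOT/snapper-root-config.bak"

[ ! -e "$ROOT/etc/snapper/configs/root" ] && echo "config moved aside"
```

On a real SUSE system the "root" entry in SNAPPER_CONFIGS in /etc/sysconfig/snapper may also need to be removed for snapper to forget the config entirely; verify against your system before relying on this.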