Bug 1220912 - [Data loss] Btrfs filesystem self destructs with many snapshots due to faulty quota qgroups
Status: NEW
Alias: None
Product: openSUSE Tumbleweed
Classification: openSUSE
Component: Kernel:Filesystems
Version: Current
Hardware: x86-64 openSUSE Tumbleweed
Priority: P2 - High    Severity: Major
Target Milestone: ---
Assignee: Wenruo Qu
QA Contact: E-mail List
URL:
Whiteboard:
Keywords:
Depends on:
Blocks:
 
Reported: 2024-03-05 07:43 UTC by Pavin Joseph
Modified: 2024-03-07 11:06 UTC
CC: 3 users

See Also:
Found By: ---
Services Priority:
Business Priority:
Blocker: ---
Marketing QA Status: ---
IT Deployment: ---


Attachments
Btrfs check error in VM (171.62 KB, image/jpeg)
2024-03-05 10:51 UTC, Pavin Joseph

Description Pavin Joseph 2024-03-05 07:43:42 UTC
Hello everyone,

The OpenSuse installer enables quota/qgroups when creating a btrfs filesystem with snapshots enabled, possibly because snapper wants that to keep snapshot space utilization in check.

Unfortunately, this irrecoverably destroyed my btrfs filesystem, which would've caused data loss and a lot of grief had I not had backups.

More details on the troubleshooting process and identifying the root cause can be found in the forum thread [0].

The gist of it is:
1. Both my primary and secondary machines were impacted (running OpenSuse Tumbleweed Slowroll)
2. The primary machine had an NVMe SSD used for the root filesystem and a SATA SSD used for snapshot backups.
3. Only the btrfs root filesystem on the NVMe SSD was impacted; the btrfs filesystem on the SATA SSD (created by me previously while using LMDE), which housed all the snapshots from the root fs, wasn't impacted at all.
4. Running btrfs check from a rescue ISO showed many errors relating to quota/qgroups. As the errors on the secondary machine's btrfs fs were few, it could be fixed by disabling quota; the primary machine's FS was corrupted beyond repair.
5. The only difference between the btrfs fs on the NVMe SSD and the one on the SATA SSD was that the latter had quota disabled by default when I created it previously on LMDE, while the former had quota enabled by default.
6. I have now disabled btrfs quota on both my machines and that has prevented any more errors.
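For anyone hitting the same situation, step 6 boils down to a one-liner; here is a hedged sketch (the mount point and the guard checks are illustrative, not from this report):

```shell
#!/bin/sh
# Sketch: disable btrfs quota/qgroups on a mounted filesystem.
# TARGET is a placeholder; point it at the affected btrfs mount.
TARGET=${TARGET:-/mnt/target}

if [ "$(id -u)" -eq 0 ] && command -v btrfs >/dev/null 2>&1 \
   && [ "$(stat -f -c %T "$TARGET" 2>/dev/null)" = "btrfs" ]; then
    # Removes the quota tree and stops qgroup accounting.
    btrfs quota disable "$TARGET"
    status=done
else
    # Not root, or TARGET is not a mounted btrfs fs: do nothing.
    status=skipped
fi
echo "quota disable: $status"
```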

[0]: https://forums.opensuse.org/t/btrfs-filesystem-on-primary-machine-completely-destroyed-itself-secondary-in-the-process-of-dying/172862
Comment 1 Wenruo Qu 2024-03-05 08:32:34 UTC
Firstly, the more snapshots there are, the more overhead qgroups incur.
That's already known and well documented in the man pages.


Secondly, for TW kernels with transactional-update, btrfs automatically disables qgroup accounting when a new snapshot is created, because creating a snapshot and assigning it to a higher-level qgroup marks the qgroups inconsistent.

Thus, as long as snapper is not trying to rescan the qgroups, it would not cause any overhead.

The problem is that when snapper tries to get the number of bytes each snapshot is using, it needs a rescan to get accurate qgroup numbers.
With that many snapshots, the rescan is also hugely slowed down.
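The rescan described here corresponds to roughly the following; a sketch, with an illustrative mount point:

```shell
#!/bin/sh
# Sketch: the qgroup rescan needed before per-snapshot byte counts
# are accurate. MNT is a placeholder mount point.
MNT=${MNT:-/mnt/btrfs}

if [ "$(id -u)" -eq 0 ] && command -v btrfs >/dev/null 2>&1 \
   && [ "$(stat -f -c %T "$MNT" 2>/dev/null)" = "btrfs" ]; then
    # -w blocks until the rescan finishes; with many snapshots this
    # is the slow step described above.
    btrfs quota rescan -w "$MNT"
    # Per-qgroup referenced/exclusive byte counts, as snapper reads them.
    btrfs qgroup show "$MNT"
    status=ran
else
    status=skipped
fi
echo "rescan: $status"
```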

Thus I have already raised this problem with the snapper/transactional-update team: it's not a good idea to use qgroups with transactional-update, due to its snapshot-happy nature.

Thankfully, in the future we have simple quotas, which greatly reduce the qgroup overhead at the cost of a slightly less accurate accounting method.

Finally, your data loss claim is ambiguous at best; the only message you mentioned about a seemingly corrupted fs is:

> Feb 28 07:51:16 erlangen kernel: BTRFS error (device nvme0n1p2): couldn't find block (412207087616) (level 1) in tree (2405) with key (650 96 3847)

Which is an extent tree corruption, unrelated to qgroup at all.

And please provide the full "btrfs check --readonly" output for that corrupted fs for more details.

And even with that corruption, you can still mount the "corrupted" fs with "-o ro,rescue=all" to rescue all your files.
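A sketch of that rescue procedure (device, mount point, and backup destination are illustrative, and the script is guarded so it only runs when explicitly opted in; rescue=all needs a reasonably recent kernel):

```shell
#!/bin/sh
# Sketch: read-only best-effort mount of a damaged btrfs fs.
# DEV and MNT are placeholders, not the reporter's actual device.
DEV=${DEV:-/dev/nvme0n1p2}
MNT=${MNT:-/mnt/rescue}

if [ "${RUN_RESCUE:-0}" = "1" ] && [ "$(id -u)" -eq 0 ] && [ -b "$DEV" ]; then
    mkdir -p "$MNT"
    # rescue=all enables all best-effort recovery options for a
    # read-only mount.
    if mount -o ro,rescue=all "$DEV" "$MNT"; then
        # Copy everything out, e.g.:
        #   rsync -aHAX "$MNT"/ /path/to/backup/
        status=mounted
    else
        status=mount-failed
    fi
else
    # Opt-in flag not set, no such block device, or not root.
    status=skipped
fi
echo "rescue: $status"
```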
Comment 2 Pavin Joseph 2024-03-05 08:46:25 UTC
(In reply to Wenruo Qu from comment #1)
> Finally, your data loss claim is ambiguous at best; the only message you
> mentioned about a seemingly corrupted fs is:
> 
> > Feb 28 07:51:16 erlangen kernel: BTRFS error (device nvme0n1p2): couldn't find block (412207087616) (level 1) in tree (2405) with key (650 96 3847)
> 
> Which is an extent tree corruption, unrelated to qgroup at all.

This was provided by a different forum user responding to my post, I'm user "pavinjoseph".

The only thing I managed to capture from the completely failed btrfs filesystem was a console output:
https://forums.opensuse.org/uploads/default/original/2X/b/bca97b9489c255abf925d8b7399f332148a8c037.jpeg

After this, the filesystem failed to mount.
Running "btrfs check" from a rescue ISO showed many screenfuls of errors. I felt I had nothing to lose at that point, so I ran "btrfs check --repair" on it, and it failed to fix the issue.

Unfortunately, I don't have access to the "btrfs check" logs from the failed filesystem, as I created a new btrfs filesystem on the device. From the rescue ISO, I took a picture, as there were errors even after reinstalling OpenSuse, since quota is enabled by default:
https://forums.opensuse.org/uploads/default/original/2X/8/89ca83ab521945360a8d2f19b4f1dec26d778615.jpeg

Since then I have disabled quota and the errors are gone.

> And even with that corruption, you can still mount the "corrupted" fs with
> "-o ro,rescue=all" to rescue all your files.

Oh I did not know of this. I tried "btrfs check --repair" and it failed to fix the errors.

From my experience, the default of enabling quota/qgroups is dangerous, as it can result in the btrfs filesystem self-destructing; if btrfs itself is reporting errors with this feature, then clearly it's nowhere close to being production ready.
Comment 3 Pavin Joseph 2024-03-05 10:48:26 UTC
I was able to easily reproduce the bug in a VM (virt-manager, QEMU/KVM).
Note: the VM also had quota enabled by default. I ran "btrfs check" from a rescue ISO before doing anything, and it reported some quota issues, but the final verdict was no errors.

Then I performed the following steps and the errors popped up (attached image), though it didn't explain what was wrong:
1. Enable timeline snapshots for root.
2. Enable timeline snapshots for home, opt, root, var, usr-local, etc., then wait 1 hour for one timeline snap of each to be created.
3. Perform a transactional update (TU) and reboot using "sudo transactional-update cleanup reboot dup". Completed successfully with no errors.

Next I booted into rescue and "btrfs check" showed errors (attached image).
I don't think this has anything to do with TU. My host system (a laptop) is quite limited, so I can't leave the VM running for a couple of hours to create a timeline snap every hour, but I think the more snaps, the more errors, until the FS collapses under its own weight.
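The reproduction steps above would look roughly like this as commands (config names are illustrative, the transactional-update invocation is as given in the report, and the script is guarded behind an opt-in flag so it does nothing unless explicitly enabled):

```shell
#!/bin/sh
# Sketch of the VM reproduction steps; assumes snapper configs for
# root, home, etc. already exist (as on a default install).
if [ "${RUN_REPRO:-0}" = "1" ] && [ "$(id -u)" -eq 0 ] \
   && command -v snapper >/dev/null 2>&1; then
    # Steps 1-2: enable hourly timeline snapshots per config.
    for cfg in root home opt var usr-local; do
        snapper -c "$cfg" set-config TIMELINE_CREATE=yes
    done
    # (wait ~1 hour so each config gets a timeline snapshot)
    # Step 3: transactional update + reboot, as in the report.
    transactional-update cleanup reboot dup
    status=ran
else
    # Opt-in flag not set, not root, or snapper missing: do nothing.
    status=skipped
fi
# Afterwards, from a rescue ISO against the unmounted device:
#   btrfs check --readonly /dev/vda2   # device name illustrative
echo "repro: $status"
```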

Quota/qgroups in btrfs is broken and a ticking time-bomb 💥
Comment 4 Pavin Joseph 2024-03-05 10:51:18 UTC
Created attachment 873218 [details]
Btrfs check error in VM

Of course, the VM's storage is fine as per SMART. It's a SATA SSD.
smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.7.4-1-default] (SUSE RPM)
Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Device Model:     SPCC Solid State Disk
Serial Number:    AA000000000000001759
Firmware Version: W0201A0
User Capacity:    1,024,209,543,168 bytes [1.02 TB]
Sector Size:      512 bytes logical/physical
Rotation Rate:    Solid State Device
Form Factor:      2.5 inches
TRIM Command:     Available
Device is:        Not in smartctl database 7.3/5528
ATA Version is:   ACS-2 T13/2015-D revision 3
SATA Version is:  SATA 3.2, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Tue Mar  5 16:21:00 2024 IST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x00)	Offline data collection activity
					was never started.
					Auto Offline Data Collection: Disabled.
Self-test execution status:      (   0)	The previous self-test routine completed
					without error or no self-test has ever 
					been run.
Total time to complete Offline 
data collection: 		(  120) seconds.
Offline data collection
capabilities: 			 (0x11) SMART execute Offline immediate.
					No Auto Offline data collection support.
					Suspend Offline collection upon new
					command.
					No Offline surface scan supported.
					Self-test supported.
					No Conveyance Self-test supported.
					No Selective Self-test supported.
SMART capabilities:            (0x0002)	Does not save SMART data before
					entering power-saving mode.
					Supports SMART auto save timer.
Error logging capability:        (0x01)	Error logging supported.
					General Purpose Logging supported.
Short self-test routine 
recommended polling time: 	 (   2) minutes.
Extended self-test routine
recommended polling time: 	 (  10) minutes.
SCT capabilities: 	       (0x0001)	SCT Status supported.

SMART Attributes Data Structure revision number: 1
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x0032   100   100   050    Old_age   Always       -       0
  5 Reallocated_Sector_Ct   0x0032   100   100   050    Old_age   Always       -       0
  9 Power_On_Hours          0x0032   100   100   050    Old_age   Always       -       4077
 12 Power_Cycle_Count       0x0032   100   100   050    Old_age   Always       -       128
160 Unknown_Attribute       0x0032   100   100   050    Old_age   Always       -       0
161 Unknown_Attribute       0x0033   100   100   050    Pre-fail  Always       -       100
163 Unknown_Attribute       0x0032   100   100   050    Old_age   Always       -       6
164 Unknown_Attribute       0x0032   100   100   050    Old_age   Always       -       4657
165 Unknown_Attribute       0x0032   100   100   050    Old_age   Always       -       35
166 Unknown_Attribute       0x0032   100   100   050    Old_age   Always       -       3
167 Unknown_Attribute       0x0032   100   100   050    Old_age   Always       -       20
168 Unknown_Attribute       0x0032   100   100   050    Old_age   Always       -       3808
169 Unknown_Attribute       0x0032   100   100   050    Old_age   Always       -       100
175 Program_Fail_Count_Chip 0x0032   100   100   050    Old_age   Always       -       0
176 Erase_Fail_Count_Chip   0x0032   100   100   050    Old_age   Always       -       0
177 Wear_Leveling_Count     0x0032   100   100   050    Old_age   Always       -       0
178 Used_Rsvd_Blk_Cnt_Chip  0x0032   100   100   050    Old_age   Always       -       0
181 Program_Fail_Cnt_Total  0x0032   100   100   050    Old_age   Always       -       0
182 Erase_Fail_Count_Total  0x0032   100   100   050    Old_age   Always       -       0
192 Power-Off_Retract_Count 0x0032   100   100   050    Old_age   Always       -       56
194 Temperature_Celsius     0x0022   100   100   050    Old_age   Always       -       49
195 Hardware_ECC_Recovered  0x0032   100   100   050    Old_age   Always       -       0
196 Reallocated_Event_Count 0x0032   100   100   050    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   100   100   050    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0032   100   100   050    Old_age   Always       -       0
199 UDMA_CRC_Error_Count    0x0032   100   100   050    Old_age   Always       -       0
232 Available_Reservd_Space 0x0032   100   100   050    Old_age   Always       -       100
241 Total_LBAs_Written      0x0030   100   100   050    Old_age   Offline      -       94038
242 Total_LBAs_Read         0x0030   100   100   050    Old_age   Offline      -       41638
245 Unknown_Attribute       0x0032   100   100   050    Old_age   Always       -       136422

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed without error       00%      4066         -
# 2  Short offline       Completed without error       00%      4043         -
# 3  Short offline       Completed without error       00%      4023         -
# 4  Extended offline    Interrupted (host reset)      90%      4021         -
# 5  Short offline       Completed without error       00%      4000         -
# 6  Short offline       Completed without error       00%      3975         -
# 7  Short offline       Completed without error       00%      3951         -
# 8  Short offline       Completed without error       00%      3931         -
# 9  Short offline       Completed without error       00%      3908         -
#10  Short offline       Completed without error       00%      3892         -
#11  Short offline       Completed without error       00%      3865         -
#12  Short offline       Completed without error       00%      3835         -
#13  Short offline       Completed without error       00%      3818         -
#14  Short offline       Completed without error       00%      3794         -
#15  Short offline       Completed without error       00%      3763         -
#16  Short offline       Completed without error       00%      3746         -
#17  Short offline       Completed without error       00%      3715         -
#18  Short offline       Completed without error       00%      3690         -
#19  Short offline       Completed without error       00%      3668         -
#20  Short offline       Completed without error       00%      3643         -
#21  Short offline       Completed without error       00%      3620         -

Selective Self-tests/Logging not supported

The above only provides legacy SMART information - try 'smartctl -x' for more