Bug 1213361 - Tumbleweed loses RAID configuration after reboot
Summary: Tumbleweed loses RAID configuration after reboot
Status: RESOLVED FIXED
Alias: None
Product: openSUSE Tumbleweed
Classification: openSUSE
Component: Other
Version: Current
Hardware: Other openSUSE Tumbleweed
Priority: P5 - None  Severity: Normal
Target Milestone: ---
Assignee: E-mail List
QA Contact: E-mail List
Reported: 2023-07-15 20:33 UTC by t neo
Modified: 2024-05-17 20:50 UTC
1 user



Description t neo 2023-07-15 20:33:38 UTC
A freshly installed Tumbleweed loses my RAID configuration after a reboot of the system.

- Backup RAID array to separate disk
- Restart system to re-install
- 2 Disks are configured in RAID1 during installation (/dev/sda and /dev/sdb)
- Disk is formatted XFS with encryption
- Mount point is set to /data
- Boot system after installation
- cat /proc/mdstat reports the status of the RAID
- Transfer all data back to RAID device
- Install desired packages after fresh install
- Reboot
- cat: /proc/mdstat: No such file or directory
- The partitioner in YaST2 reports only 1 disk to be mounted as /data
- No RAID array is shown in YaST
- Adding a RAID array fails with “not enough suitable devices”

sudo lsblk
    NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
    sda 8:0 0 931.5G 0 disk
    sdb 8:16 0 931.5G 0 disk
    └─cr_data 254:1 0 931.5G 0 crypt /data

cat /etc/mdadm.conf
    DEVICE containers partitions
    ARRAY /dev/md0 UUID=14fbead6:3304801a:ead6cb97:07d808d0
    ARRAY /dev/md0 UUID=972654a6:fd4d9f5e:6a6f1176:2f47cad5

sudo mdadm -A -s
    mdadm: Devices UUID-14fbead6:3304801a:ead6cb97:07d808d0 and UUID-972654a6:fd4d9f5e:6a6f1176:2f47cad5 have the same name: /dev/md0
    mdadm: Duplicate MD device names in conf file were found.
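
The duplicate is likely a stale ARRAY entry left over from the pre-reinstall array. A minimal, hypothetical cleanup, assuming the first UUID (14fbead6:...) is the stale one (verify with mdadm --examine before touching the real file), shown here on a reconstructed copy rather than the live /etc/mdadm.conf:

```shell
# Recreate the reported mdadm.conf in /tmp for illustration.
cat > /tmp/mdadm.conf <<'EOF'
DEVICE containers partitions
ARRAY /dev/md0 UUID=14fbead6:3304801a:ead6cb97:07d808d0
ARRAY /dev/md0 UUID=972654a6:fd4d9f5e:6a6f1176:2f47cad5
EOF

# Drop the line with the (assumed) stale UUID so only one /dev/md0 remains.
grep -v 'UUID=14fbead6' /tmp/mdadm.conf > /tmp/mdadm.conf.fixed
cat /tmp/mdadm.conf.fixed
```

On the real system, `mdadm --detail --scan` prints ARRAY lines only for arrays that are actually running, which is a safer way to regenerate the file than hand-editing.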


- Booted a 15.5 live image; the RAID array is recognized
- Array was synced in the live image session and completed successfully
- Rebooted back to normal Tumbleweed installation
- Boot failed
- Boot dropped to maintenance terminal

systemctl status data.mount
    × data.mount - /data
    Loaded: loaded (/etc/fstab; generated)
    Active: failed (Result: exit-code) since Fri 2023-07-14 19:33:17 CST; 1min 10s ago
    Where: /data
    What: /dev/mapper/cr_data
    Docs: man:fstab(5)
    man:systemd-fstab-generator(8)
    CPU: 3ms

    Jul 14 19:33:17 localhost systemd[1]: Mounting /data…
    Jul 14 19:33:17 localhost mount[1082]: mount: /data: wrong fs type, bad option, bad superblock on /dev/mapper/cr_data, missing codepage or helper program, or other error.
    Jul 14 19:33:17 localhost mount[1082]: dmesg(1) may have more information after failed mount system call.
    Jul 14 19:33:17 localhost systemd[1]: data.mount: Mount process exited, code=exited, status=32/n/a
    Jul 14 19:33:17 localhost systemd[1]: data.mount: Failed with result 'exit-code'.
    Jul 14 19:33:17 localhost systemd[1]: Failed to mount /data.

- Ran xfs_repair on the md array
- xfs_repair repaired a node
- Reboot system
- RAID array is not recognized.

mdadm -D /dev/dm-0
    mdadm: /dev/dm-0 does not appear to be an md device
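
That error is expected: `mdadm -D` queries an assembled md device, and /dev/dm-0 is the device-mapper (crypt) node, not an md device. A hedged way to check whether the member disks still carry RAID superblocks (device names assumed from the report; these commands only read, they do not modify anything):

```shell
# Read the md superblock, if any, directly from each member disk.
mdadm --examine /dev/sda
mdadm --examine /dev/sdb

# Lists the arrays the kernel has actually assembled (its absence
# here is what caused the earlier "No such file or directory").
cat /proc/mdstat
```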

Do I now have RAID enabled or not? I think I do, since booting GParted shows the RAID array, as does a live image. In Tumbleweed, however, it currently appears that I don't have a RAID array, which is a bit concerning.
Comment 1 Arvin Schnell 2023-07-17 07:54:58 UTC
The output of lsblk clearly shows that the encryption device is using
/dev/sdb, i.e. one of the underlying disks of the RAID instead of the
RAID itself. Maybe the boot process started the devices (RAID, crypto)
in the wrong order.

Since you do not need the RAID as a boot device, you could use metadata
version 1.2 for the RAID. Unfortunately this cannot be created in YaST,
so you have to use other tools. See bug #1168914.

To start the devices correctly you could try the following commands:

- First deactivate encryption:
  umount /data
  cryptsetup close cr_data

- Then start RAID and activate encryption:

  mdadm --assemble --scan
  cryptsetup open --type luks /dev/md0 cr_data
  mount /dev/mapper/cr_data /data
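
If the array has to be recreated with metadata 1.2 outside YaST, as suggested above, a minimal sketch might look like the following (device and mapper names assumed from the report; `mdadm --create` and `cryptsetup luksFormat` are destructive, so only run them after the backup is verified):

```shell
# DESTRUCTIVE: wipes existing superblocks and data on the named disks.
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      --metadata=1.2 /dev/sda /dev/sdb

# Re-create the encrypted container on top of the array, then
# format and mount it as before.
cryptsetup luksFormat /dev/md0
cryptsetup open /dev/md0 cr_data
mkfs.xfs /dev/mapper/cr_data
mount /dev/mapper/cr_data /data
```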
Comment 2 Arvin Schnell 2023-07-17 09:49:05 UTC
Looks like a duplicate of bug #1213227.
Comment 3 t neo 2023-07-17 13:17:39 UTC
Thanks. The provided workaround works. Closing this to follow the referenced bug report.