Bugzilla – Bug 1194355
Unable to install openSUSE on HP Spectre laptop with Optane and SSD memory
Last modified: 2023-06-26 09:13:54 UTC
This bug report also applies to Leap 15.4 and Tumbleweed. I have an HP Spectre x360 Convertible 15t-eb000 laptop with Windows 10 already installed on it. I want to make this a dual-boot system with openSUSE as the second OS. When I tried to install openSUSE from a USB stick, I ran into a serious problem when the installation process reached the point where it wants to set up partitions on the disks. (My laptop has a 1 TB SSD with 32 GB of Optane memory, which acts as some sort of very fast cache.)

At this point I get the following error message:

  "An error was found in one of the devices in the system. The information displayed may not be accurate and the installation may fail if you continue."

The "Details..." button shows:

  "cannot delete MdContainer"

I can only choose "OK" to dismiss this error message. The partitioner shows only two disks, one for the large SSD and one for the smaller Optane memory. No partitions within these drives are shown, and any attempt to add or modify them results in an error message saying the device is busy. At this point the attempts to install Leap 15.3, Leap 15.4, and Tumbleweed fail and I have to give up.
The Windows 10 partitioner shows the following partitions:

  E:     260 MB     (EFI System Partition)
         515 MB     (Recovery Partition)
  C:     171.84 GB  (Windows NTFS)
         781.25 GB  Unallocated
  Total: 953.85 GB

You can follow a discussion about this problem on the two openSUSE mailing lists. One is on the users list, in a thread titled "Troubles installing OpenSuSE 15.3 on a HP Spectre laptop":
https://lists.opensuse.org/archives/list/users@lists.opensuse.org/thread/D3F4YSV6ARHNEVSMU6AE77JTYPB4XTR4/
The other is on the factory list, in a thread titled "Troubles installing OpenSUSE 15.3 on a HP Spectre laptop - Intel Optane memory":
https://lists.opensuse.org/archives/list/factory@lists.opensuse.org/thread/XWU4RH6JWKZCBQZBNWQROGVKMD6BLFDF/

I created a Live CD for Leap 15.3 and with it was able to capture what fdisk -l shows:

localhost:/home/linux # fdisk -l
Disk /dev/loop0: 821.1 MiB, 860946432 bytes, 1681536 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/loop1: 4.5 GiB, 4818206720 bytes, 9410560 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
The backup GPT table is not on the end of the device. This problem will be corrected by write.
Disk /dev/nvme0n1: 953.9 GiB, 1024209543168 bytes, 2000409264 sectors
Disk model: INTEL HBRPEKNX0203AH
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: B32172FB-2CEE-4E2C-B94B-46DAABE35591

Device             Start        End   Sectors   Size Type
/dev/nvme0n1p1      2048     534527    532480   260M EFI System
/dev/nvme0n1p2    534528     567295     32768    16M Microsoft reserved
/dev/nvme0n1p3    567296  360937471 360370176 171.9G Microsoft basic data
/dev/nvme0n1p4 1999337472 2000392191  1054720   515M Windows recovery environment

Disk /dev/nvme1n1: 27.3 GiB, 29260513280 bytes, 57149440 sectors
Disk model: INTEL HBRPEKNX0203AHO
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sda: 57.8 GiB, 62026416128 bytes, 121145344 sectors
Disk model: USB 3.1 FD
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xf727289d

Device     Boot   Start       End   Sectors   Size Id Type
/dev/sda1  *         64   1886035   1885972 920.9M cd unknown
/dev/sda2       1886036   1916755     30720    15M ef EFI (FAT-12/16/32)
/dev/sda3       1916928 121145343 119228416  56.9G 83 Linux

But any attempt to modify or add new partitions with the Live USB stick only gets my hands slapped with the error message saying the device is busy, and I am not allowed to make any changes. I am attaching some additional log files to help show what is going on during the installation process. I am also attaching the output from mdadm --examine /dev/nvme0n1 while using a Tumbleweed Live USB stick.
Created attachment 855018 [details] Log files and other info from installation attempts.

Apparently there is a size limit on what can be uploaded. The /var/log/Yast directory contained a large file, memsample.zcat, which I removed from the tarball in order to upload the rest of the logs and other files needed to help debug this bug. If the memsample.zcat file is also needed, I will try to compress and upload it separately; just ask.
The laptop has two SSDs:

nvme0n1: INTEL HBRPEKNX0203AH, 954G
nvme1n1: INTEL HBRPEKNX0203AHO, 27G (this is the Optane drive, I guess)

As discussed on the ML, the idea of this setup seems to be to use the Optane drive as an ultra-fast cache for the larger "normal" NVMe SSD.

> /sbin/mdadm --assemble --scan --config='/tmp/libstorage-odcSFq/mdadm.conf'
> mdadm: Container /dev/md/imsm0 has been assembled with 1 drive
> mdadm: (IMSM): Unsupported attributes : 3000000
> mdadm: Unsupported attributes in IMSM metadata. Arrays activation is blocked.
> mdadm: Cannot activate member /md127/0 in /dev/md/imsm0.

Similar messages follow. So mdadm assembles the "array", but refuses to activate it because it doesn't understand the attributes. Weirdly, the only device is classified as "spare".

> /sbin/mdadm --detail '/dev/md127' --export
> MD_LEVEL=container
> MD_DEVICES=1
> MD_METADATA=imsm
> MD_UUID=16845a7c:b68db0dd:097c5398:5fb3cf8a
> MD_DEVNAME=imsm0
> MD_DEVICE_dev_nvme0n1_ROLE=spare
> MD_DEVICE_dev_nvme0n1_DEV=/dev/nvme0n1

Then we have a second "array" with the Optane SSD as "spare".

> /sbin/mdadm --detail '/dev/md126' --export
> MD_LEVEL=container
> MD_DEVICES=1
> MD_METADATA=imsm
> MD_UUID=d99e5685:7d6dff89:00d3c4f6:74127040
> MD_DEVNAME=imsm1
> MD_DEVICE_dev_nvme1n1_ROLE=spare

So we have two "RAID arrays", each consisting of just a single disk. From the Linux side, the main problem is that the arrays can't be activated. But that may actually be a good thing: unless the cache device was fully synchronized with the main SSD under Windows (iow, the cache was empty/flushed), any activation of either array under Linux might have corrupted the storage.

Apparently Intel has extended its RST metadata to cover not only RAID configurations but also caching configurations. As I said on the ML, under Linux such a configuration would be handled by bcache or lvmcache, not MD. It would be tempting to assume that we could support this using an existing Linux-native solution such as bcache.
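The "spare"-only containers above are easy to recognize mechanically. As a purely illustrative sketch (the `parse_container` helper and the classification rule are my own, not part of mdadm or the installer), one could flag such a container from the `mdadm --detail --export` key=value output quoted above:

```shell
#!/bin/sh
# Classify an IMSM container from `mdadm --detail --export` style output.
# Reads key=value lines on stdin and flags the suspicious case seen in
# this report: a "container"-level IMSM array with one member as "spare".
parse_container() {
    level=""; devices=""; metadata=""; role=""
    while IFS='=' read -r key value; do
        case "$key" in
            MD_LEVEL)         level=$value ;;
            MD_DEVICES)       devices=$value ;;
            MD_METADATA)      metadata=$value ;;
            MD_DEVICE_*_ROLE) role=$value ;;
        esac
    done
    if [ "$level" = container ] && [ "$metadata" = imsm ] \
       && [ "$devices" = 1 ] && [ "$role" = spare ]; then
        echo "single-disk IMSM container, sole member is 'spare'"
    else
        echo "level=$level metadata=$metadata devices=$devices role=$role"
    fi
}

# Sample input copied from this report (md127); on a live system one would
# pipe `mdadm --detail /dev/md127 --export` instead of the here-document.
parse_container <<'EOF'
MD_LEVEL=container
MD_DEVICES=1
MD_METADATA=imsm
MD_UUID=16845a7c:b68db0dd:097c5398:5fb3cf8a
MD_DEVNAME=imsm0
MD_DEVICE_dev_nvme0n1_ROLE=spare
MD_DEVICE_dev_nvme0n1_DEV=/dev/nvme0n1
EOF
```

Feeding it the md126 output yields the same classification, which is exactly the situation described here: two containers, each with a single "spare" disk.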
But every caching solution requires a mapping between sectors on the large SSD and sectors in the cache. This mapping is saved in some form of metadata, and it's certainly not the same between bcache and Intel's RST solution as used under Windows. So data corruption would be almost certain.

It might be possible to reverse-engineer the format. Perhaps Intel would even be willing to provide information about it, I don't know. But AFAICS, nobody has attempted this yet. So I suppose this storage setup is simply unsupported under Linux at the present time. I suggest you complain to your hardware vendor, asking for a "Linux driver" for this RST setup. Even if that's nonsense, it's language that the support people are likely to understand.

Googling for 'mdadm: (IMSM): Unsupported attributes : 3000000' led me to this page:
https://askubuntu.com/questions/1204386/windows-10-wont-boot-after-dual-boot-installation-optane-volume
where someone under Ubuntu actually destroyed his Windows installation on a system like this. It seems that openSUSE handled this more gracefully than Ubuntu. The responses on that page give you a clue how to recover:

- download the "Intel® Optane™ Memory User Interface app" (https://downloadcenter.intel.com/product/99745/Intel-Optane-Memory)
- in that app, disable the Optane volume

Also have a look here:
> https://www.intel.com/content/dam/support/us/en/documents/memory-and-storage/optane-memory/intel-optane-memory-user-installation.pdf (§4.1)

In theory you shouldn't have to disable the Optane device completely, just the storage caching functionality. But I don't know what options the software offers.
Basically, there are four possibilities after disabling the caching:

1. the Optane memory will be added to your system as an NVDIMM, leaving you the option to configure it to your preferences
2. the Optane memory will be added to your system memory as (almost) regular RAM
3. the Optane memory will show up as additional storage on both Windows and Linux, to be used as a separate "disk"
4. the Optane memory will be invisible / unusable in the system

1) would be ideal from the Linux PoV, but I doubt it will be offered on consumer laptops. Also, it's questionable whether Windows would understand settings made with Linux-based tools. 4) is the worst case, obviously.

Please read the manuals carefully. It's possible that disabling this functionality will break your Windows installation, so that you need to reinstall Windows afterwards. After all, from the Windows PoV the storage stack for the main volume C: will have changed, and the RST metadata might change as well.

I'm adding Neil (our mdadm expert) and Coly (our expert for both bcache and NVDIMM/Optane) to this bug, in the hope that they may be able to add more insight.
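After disabling the caching in Windows, one quick way to check from a Live USB whether the containers are really gone is to look at /proc/mdstat, where mdadm marks spare members with an "(S)" suffix. A minimal sketch (the `check_mdstat` helper is mine, and the sample mdstat content below is my assumption of what the unactivated containers would look like, not output taken from this report):

```shell
#!/bin/sh
# Check an mdstat-style file for inactive arrays whose members are marked
# as spares ("(S)" suffix). Pass the file to inspect; on a real system
# this would be /proc/mdstat itself.
check_mdstat() {
    if grep -q 'inactive.*(S)' "$1"; then
        echo "spare-only container(s) still present - caching not fully disabled"
    else
        echo "no leftover containers found"
    fi
}

# Assumed sample of what /proc/mdstat might show with the containers present.
sample=$(mktemp)
cat > "$sample" <<'EOF'
Personalities :
md126 : inactive nvme1n1[0](S)
      1105 blocks super external:imsm
md127 : inactive nvme0n1[0](S)
      1105 blocks super external:imsm
unused devices: <none>
EOF

check_mdstat "$sample"
rm -f "$sample"
```

If the check still reports leftover containers after the Windows-side change, retrying the installer would likely hit the same "cannot delete MdContainer" error.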
At least it needs support from mdadm for the IMSM cache format. The format is not publicly released yet, and there is no announced plan for when Intel will support the cache format in mdadm. I will try to send a question to an Intel developer; hopefully there can be some response at least. Coly Li
(In reply to Coly Li from comment #5)
> At least it needs support from mdadm for the IMSM cache format. The format
> is not publicly released yet, and there is no announced plan for when Intel
> will support the cache format in mdadm.
>
> I will try to send a question to an Intel developer; hopefully there can be
> some response at least.
>
The response was that this private format won't be made public, and there is currently no plan to release it. So what we can do is very limited, and there is no way to support such a cache with existing open-source tools nowadays.
Indeed this is a requirement for new format support, and unfortunately we have no way to work it out so far. I am now closing this bug report because it is more of a feature request which we cannot implement. Thanks for the input and discussion. Coly Li
*** Bug 1212678 has been marked as a duplicate of this bug. ***