Bug 525237

Summary: mkinitrd fails if root is on LVM which is on RAID 0
Product: [openSUSE] openSUSE 11.2
Reporter: Piotrek Juzwiak <piotrek.juzwiak>
Component: Installation
Assignee: Xin Wei Hu <xwhu>
Status: VERIFIED FIXED
QA Contact: Jiri Srain <jsrain>
Severity: Critical
Priority: P1 - Urgent
CC: blackhole999, dutchguy69, forgotten_1GBkbCnI0A, forgotten_aFJloKvMbR, forgotten_gRveQ1K55E, hare, hawke, jplack, lnussel, mmarek, mvancura, rccj, wmerriam
Version: Factory
Flags: coolo: SHIP_STOPPER+
Target Milestone: ---
Hardware: All
OS: Other
Whiteboard:
Found By: ---
Services Priority:
Business Priority:
Blocker: ---
Marketing QA Status: ---
IT Deployment: ---
Attachments:
  Yast 2 Log
  Directory listing of MC listings of /boot and /grub showing that initrd and menu.lst were created after M8 was updated to RC1
  Disk layout with root on LVM on RAID and boot on RAID works with Build0334
  Partition Layout of test machine

Description Piotrek Juzwiak 2009-07-25 03:54:47 UTC
User-Agent:       Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.0) Gecko/20090623 SUSE/3.5.0-4.6 Firefox/3.5

I tried to install openSUSE M4 on an LVM which was created on a RAID 0 made of two disks. Installation is fine until the kernel gets installed; at that point there is a message that it failed at the 72nd line of some script.
When it restarts after the install, GRUB reports error 15 (file not found). I will try to reproduce it again in a VM (from my repartitioned openSUSE M4).

Reproducible: Didn't try

Steps to Reproduce:
1. Set / on an LVM which is on a RAID 0
2. Install
3. It fails to build the initrd
Actual Results:  
It fails to build the initrd.

Expected Results:  
It should build the initrd.
Comment 1 Piotrek Juzwiak 2009-07-25 04:11:49 UTC
Message when installing this way:

Device md0 not handled

Script /lib/mkinitrd/setup/72-block.sh failed!
Comment 2 Piotrek Juzwiak 2009-07-25 04:13:45 UTC
Is this bug at all important, or is this way of installing too "exotic"?
Comment 3 Piotrek Juzwiak 2009-07-25 04:28:13 UTC
By the way, to be more specific: /boot was a separate primary partition (ext2).
Comment 4 Forgotten User 7645792743 2009-08-11 22:52:06 UTC
I have a similar scenario.

/boot on /dev/md0, RAID-1

VG backed on /dev/md1, RAID-1
   swap             |
   /root            | LVMs on VG
   other ...        |

on OS 11.1, no problems installing/booting/upgrading, or exec of mkinitrd.

ran a zypper-based upgrade (today) from OS 11.1 -> Factory; it failed (as above) at kernel install.

'ignored' the failure, completed the install (no other errors), then attempted:

mkinitrd -v

 Kernel image:   /boot/vmlinuz-2.6.31-rc5-git3-2-default
 Initrd image:   /boot/initrd-2.6.31-rc5-git3-2-default
 Root device:	/dev/root (mounted on / as ext3)
 Device md1 not handled
 Script /lib/mkinitrd/setup/72-block.sh failed!

 Kernel image:   /boot/vmlinuz-2.6.31-rc5-git3-2-xen
 Initrd image:   /boot/initrd-2.6.31-rc5-git3-2-xen
 Root device:	/dev/root (mounted on / as ext3)
 Device md1 not handled
 Script /lib/mkinitrd/setup/72-block.sh failed!

fyi, saw similar issue on 11.1/Factory @,
 https://bugzilla.novell.com/show_bug.cgi?id=421379
Comment 5 Forgotten User 7645792743 2009-08-11 23:04:10 UTC
Since 'mkinitrd -v' doesn't seem to be any more verbose than just 'mkinitrd', here's the (relevant?) output with "set -x":

...
++ blockminor=1
+++ block_driver 9
+++ sed -n '/^Block devices:/{n;: n;s/^[ ]*9 \(.*\)/\1/p;n;b n}'
++ blockdriver=md
++ '[' '!' md ']'
++ '[' md = device-mapper ']'
++ false
+++ majorminor2blockdev 9 1
+++ local major=9 minor=1
+++ '[' '!' 1 ']'
+++ '[' 9 -lt 0 ']'
++++ cat /proc/partitions
++++ egrep '^[ ]*9[ ]*1 '
+++ local 'retval=   9     1  243987048 md1'
+++ echo /dev/md1
++ blkpart=/dev/md1
++ '[' /dev/md1 ']'
+++ echo md1
+++ sed 's./.!.g'
++ blkpart=md1
+++ echo md1
+++ sed 's/\([a-z]\)[0-9]*$/\1/;s/p$//'
++ blkdev=md
++ '[' -d /sys/block/md/md1 ']'
+++ update_list /dev/md1
+++ local elem=/dev/md1
+++ shift
+++ case " $@ " in
+++ echo ' /dev/md1'
++ blockpart_blockdev=' /dev/md1'
++ blockdev=' /dev/md1'
+ '[' 0 -ne 0 ']'
+ for setupfile in '$INITRD_PATH/setup/*.sh'
+ '[' -d /dev/shm/mkinitramfs.53DRvf/mnt ']'
+ cd /dev/shm/mkinitramfs.53DRvf/mnt
+ '[' '!' -d /lib/mkinitrd/setup/72-block.sh ']'
+ curscript=block.sh
+ source /lib/mkinitrd/setup/72-block.sh
++ '[' '' ']'
++ all_libata_modules_included=0
++ for bd in '$blockdev'
++ case $bd in
++ update_blockdev /dev/md1
++ local curblockdev=/dev/md1
++ '[' /dev/md1 ']'
++ '[' /dev/md1 ']'
++ blockmajor=-1
++ blockminor=-1
++ '[' -e /dev/md1 ']'
+++ devnumber /dev/md1
++++ ls -lL /dev/md1
+++ set -- brw-rw---- 1 root disk 9, 1 2009-03-17 16:21 /dev/md1
+++ mkdevn 9 1
+++ local major=9 minor=1
+++ echo 9437185
++ blockdevn=9437185
+++ devmajor 9437185
+++ local devn=9437185
+++ echo 9
++ blockmajor=9
++ '[' '!' 9 ']'
+++ devminor 9437185
+++ local devn=9437185
+++ echo 1
++ blockminor=1
+++ block_driver 9
+++ sed -n '/^Block devices:/{n;: n;s/^[ ]*9 \(.*\)/\1/p;n;b n}'
++ blockdriver=md
++ '[' '!' md ']'
++ '[' md = device-mapper ']'
++ false
++ get_devmodule md1 curmodule
++ local result=
+++ echo md1
+++ sed 's./.!.g'
++ local blkdev=md1
++ '[' '!' -d /sys/block/md1 ']'
++ case "$blkdev" in
++ '[' '!' -d /sys/block/md1/device ']'
++ echo 'Device md1 not handled'
Device md1 not handled
++ return 1
++ '[' 1 -eq 0 ']'
++ return 1
+ '[' 1 -ne 0 ']'
+ oops 1 'Script /lib/mkinitrd/setup/72-block.sh failed!'
+ echo 'Script /lib/mkinitrd/setup/72-block.sh failed!'
Script /lib/mkinitrd/setup/72-block.sh failed!
+ cleanup
+ rm -f /dev/shm/mkinitramfs.53DRvf/initrd /dev/shm/mkinitramfs.53DRvf/initrd.gz
+ '[' -d /dev/shm/mkinitramfs.53DRvf/mnt ']'
+ rm -rf /dev/shm/mkinitramfs.53DRvf/mnt
+ initrd_bins=()
+ exit_code=1
+ exit 1
+ exit_code=1
+ '[' -e /dev/shm/mkinitramfs.53DRvf/error ']'
+ (( i++ ))
+ (( 1<2  ))
+ echo
...
++ echo 'Device md1 not handled'
Device md1 not handled
++ return 1
++ '[' 1 -eq 0 ']'
++ return 1
+ '[' 1 -ne 0 ']'
+ oops 1 'Script /lib/mkinitrd/setup/72-block.sh failed!'
+ echo 'Script /lib/mkinitrd/setup/72-block.sh failed!'
Script /lib/mkinitrd/setup/72-block.sh failed!
+ cleanup
+ rm -f /dev/shm/mkinitramfs.53DRvf/initrd /dev/shm/mkinitramfs.53DRvf/initrd.gz
+ '[' -d /dev/shm/mkinitramfs.53DRvf/mnt ']'
+ rm -rf /dev/shm/mkinitramfs.53DRvf/mnt
+ initrd_bins=()
+ exit_code=1
+ exit 1
+ exit_code=1
+ '[' -e /dev/shm/mkinitramfs.53DRvf/error ']'
+ (( i++ ))
+ (( 2<2  ))
+ cleanup_finish
+ umount_proc
+ '[' '' ']'
+ mounted_proc=
+ '[' '' ']'
+ mounted_sys=
+ '[' '' ']'
+ mounted_usr=
+ '[' -d /dev/shm/mkinitramfs.53DRvf ']'
+ rm -rf /dev/shm/mkinitramfs.53DRvf
+ '[' '!' -x /sbin/update-bootloader ']'
+ '[' 1 -eq 0 ']'
+ exit 1
Comment 6 Forgotten User 7645792743 2009-08-11 23:07:05 UTC
changing severity, as the machine's unbootable in this state ...
Comment 7 Forgotten User 7645792743 2009-08-11 23:42:20 UTC
just fyi,

upgrading mkinitrd to,

  http://download.opensuse.org/repositories/home:/thomasbiege:/branches:/Base:/System/openSUSE_Factory/x86_64/mkinitrd-2.5.10-25.1.x86_64.rpm


does NOT solve the problem ...
Comment 8 Stephan Binner 2009-08-12 03:25:48 UTC
Please read http://en.opensuse.org/Bugs/Definitions#Blocker
Comment 9 Forgotten User 7645792743 2009-08-12 13:41:07 UTC
(In reply to comment #8)
> Please read http://en.opensuse.org/Bugs/Definitions#Blocker


I *DID* read it:

"Blocker

    * Prevents developers or testers from performing their jobs. Impacts the development process. "


we ARE doing 'development or testing' -- not just _of_ Factory, but of our own products/services.

this bug, and the inability to boot as a result, DOES prevent our folks from doing their jobs, and DOES impact the development process.

the *specific* examples given for 'Blocker' severity are,

    * Unable to login
    * Unable to perform certification tests
    * Unable to update system 

all three of which are true in this case.

if you have different definitions of the criteria, or suggest that they are only relevant for SUSE's own developers/testers, then that should be clearly stated in the docs you reference.  in the meantime ...
Comment 10 Torsten Duwe 2009-08-13 11:17:09 UTC
Mr. Pgnet (or Mr. Dev?):

What is impossible here is to have /boot on md, not installation per se, i.e. a workaround exists. This makes it IMO not suitable as a blocker. YMMV, but we have to work on this.

Yast folks: IIRC RAID-0 is not suited for /boot, unless it maps perfectly to a BIOS fake RAID. RAID-1 might work. LVM does not.

What strikes me from the log pasted above is the line
 md = device-mapper 

So please make boot setup/initrd creation consistent with the installation scenario offered initially.
Comment 11 Jozef Uhliarik 2009-08-13 12:45:03 UTC
First, please attach YaST logs.

Next, if you play with md RAID, it is only possible to boot from the MBR. There is a problem with kernel caching, and it is not possible to boot from a /boot (or "/") partition if it is on md RAID. I mean this scenario (which doesn't work):
* write generic boot code to the MBR
* set the boot flag on the /boot (or "/") partition on md RAID
* write GRUB stage1 to the /boot (or "/") partition on md RAID

I am sure that if you use only md RAID, YaST proposes booting from the MBR (i.e. writing GRUB to the MBR). I am not sure about LVM on md RAID, but I see the same problem as with md RAID alone. And finally, if you want to use LVM, you HAVE TO keep the /boot partition out of the LVM, and in this case out of the md RAID!
Comment 12 Piotrek Juzwiak 2009-08-13 13:34:11 UTC
(In reply to comment #11)
> The first please attach YaST logs.
> 
> Next if you play with md raid it is possible boot only from MBR. There is
> problem with kernel caching and it is not possible boot from /boot (or "/")
> partition if it is on md raid. I mean scenario (which doesn't work): 
> * write generic boot code to MBR
> * set boot flag on /boot (or "/") partition on md raid
> * write GRUB stage1 to /boot (or "/") partition on md raid
> 
> I am sure that if you use only md raid yast propose boot from MBR (resp. write
> GRUB to MBR) I am not sure about LVM on md raid but I see same problem like
> only with md raid and finally if you want to use LVM you HAVE TO separate /boot
> partition out of LVM and in this case out of md raid!

Tell me which logs and how I can get them. I ALWAYS have a separate /boot, since GRUB can't understand LVM.

What I did is: I created a RAID-0 stripe (software Linux kernel RAID) made of two partitions marked as RAID. Then, after YaST created that stripe, I created an LVM group on it. /boot was separate, as I said.
Comment 13 Piotrek Juzwiak 2009-08-13 13:49:59 UTC
Never mind, I will test both M4 and M5 with that scenario.
Comment 14 Forgotten User 7645792743 2009-08-13 14:23:10 UTC
"What is impossible here is to have /boot on md"?

as i said,

  /boot on /dev/md0, RAID-1
  VG backed on /dev/md1, RAID-1
     swap             |
     /root            | LVMs on VG
     other ...        |

works just fine on OS 11.1, and has been on production systems ... no workarounds required. Just follow the instructions at http://en.opensuse.org/How_to_install_SUSE_Linux_on_software_RAID.

as for the severity -- clearly a non-bootable system is not a concern to others. I happen to disagree ...

Piotrek, if you're interested in this scenario, my suggestion is 'back to 11.1', which is exactly what I'm doing ...
Comment 15 Piotrek Juzwiak 2009-08-13 14:29:48 UTC
Well, to be honest, you misinterpreted my bug report. I reported that the system fails to build the initrd when / is on an LVM which is on a RAID 0. From what I've heard, /boot on RAID 1 is fine, but not on RAID 0. My scenario involves creating a RAID-0 stripe with a separate /boot partition; on top of that RAID-0 stripe (which is not formatted, as normally suggested) is an LVM group in which I create the / (root) partition.
Comment 16 Piotrek Juzwiak 2009-08-13 15:44:43 UTC
Created attachment 312726 [details]
Yast 2 Log

Here is the YaST log, made on VirtualBox with the same setup as on my main machine (I guess it isn't hardware related, since it happened both on VirtualBox and on my main machine).
Comment 17 Jozef Uhliarik 2009-08-14 08:51:45 UTC
Piotrek, thanks for the YaST logs.

I know there is a bit of a misunderstanding about terminology; I am sorry for it. I will try to use a complete description of the boot settings instead of the checkbox names from y2-bootloader. ;-)

OK, let's start:

comment#12:
==========
* Please take as fact that GRUB doesn't know how to boot from LVM; you need a separate /boot partition outside the LVM (yast2-storage should warn you while creating the LVM that you need a separate /boot partition).

* Booting from RAID 0 is not possible if it is software RAID. GRUB works with physical partitions, not with software RAID partitions, and in this case the physical partitions cannot be used, because RAID 0 doesn't duplicate (mirror) data across the physical partitions but uses only part of each one. This means GRUB has no chance to read the content of the /boot directory from, for example, /dev/sda1 (one of the physical partitions of the RAID 0). But the basic problem is that GRUB has no code for using software RAID, so it doesn't know how to use software RAID partitions.
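[Editor's note: the striping point above can be illustrated with a tiny sketch. Everything here is made up for illustration -- the file, the 8-byte "chunk" size, and the disk0/disk1 files; real RAID 0 uses much larger chunks -- but it shows why no single member disk holds a readable copy of a striped /boot.]

```shell
# Simulate 2-disk RAID-0 striping with 8-byte chunks: alternate chunks of
# a file land on "disk0" and "disk1". Neither member alone holds the file.
printf 'AAAAAAAABBBBBBBBCCCCCCCCDDDDDDDD' > boot_file   # 4 chunks of 8 bytes
rm -f disk0 disk1
for i in 0 1 2 3; do
    # chunk i goes to disk (i mod 2)
    dd if=boot_file bs=8 skip=$i count=1 2>/dev/null >> disk$((i % 2))
done
cat disk0; echo   # AAAAAAAACCCCCCCC -- chunks B and D live on the other disk
cat disk1; echo   # BBBBBBBBDDDDDDDD
```

A bootloader that can only read one physical partition sees exactly the disk0 situation: half the chunks are simply absent.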

comment#14:
==========

I am sorry, I wrote it wrong. You can have the /boot partition on md RAID, or, if you don't have a separate /boot partition, you can have "/" (and thus the /boot directory) on md RAID -- but only RAID 1.

Next, it is mandatory to write GRUB into the MBR; the other scenarios, where "write generic boot code" to the MBR is used, don't work, as I wrote before (comment#11).

Yes, your link is valid, but for older openSUSE; somebody changed the kernel caching, and it no longer allows any boot scenario other than writing GRUB to the MBR. y2-bootloader proposes writing GRUB to the MBR during installation. Please don't ask me about the details; it is a kernel problem.

comment#15:
==========
I see a problem in Bugzilla with mkinitrd and LVM; maybe it is solved now. The bug will be reassigned to the maintainer of mkinitrd.

Finally:
========

I hope that it is now clear to everybody how GRUB works with md RAID and LVM.
Comment 18 Jozef Uhliarik 2009-08-14 08:53:12 UTC
Milan could you look at the problem please?
Comment 19 Piotrek Juzwiak 2009-08-14 09:02:54 UTC
Jozef, I am aware that GRUB can't boot from a /boot which is on RAID 0; thus this /boot was "separate" and NOT on the RAID stripe. It was a simple primary partition. I created three partitions: one primary for /boot and two for the RAID 0.
Comment 20 Milan VanĨura 2009-08-14 15:01:16 UTC
I'm sorry, I'm out for two weeks.
Comment 21 Forgotten User 1GBkbCnI0A 2009-09-11 08:12:25 UTC
Any news on that now?

I am fiddling with M7 right now: /dev/sda1 /boot (ext3), /dev/sda2 type fd, /dev/sdb1 (not used), /dev/sdb2 type fd.
Both fd partitions form a RAID 1. /dev/md0 is a member of the LVM VG "system"; the LV /dev/system/root is /.

The installation is not able to create an initrd.

Updating a working system from 11.1 to 11.2M7 with this configuration also renders the system unbootable.

Creating an initrd from the rescue system with 'mkinitrd -f md' also gives a non-bootable system, as the md is not started, and hence the LVM VG is not found and started.
Comment 22 Forgotten User gRveQ1K55E 2009-09-25 13:32:58 UTC
Hello, my scenario:

1)
/boot on md0 RAID 1: yes, it works!! I have used this system for several years now, and if you say it does not work with GRUB, that's not true. Admittedly, I started this years ago with LILO. It may be that it is not possible to install such a solution now, but for upgraded systems it is definitely a blocker, since it takes upgraded systems into an unbootable state.

2)
/ on lvm2 on md1

That is to say:
linux:/ # mkinitrd

Kernel image:   /boot/vmlinuz-2.6.31-rc9-7-default
Initrd image:   /boot/initrd-2.6.31-rc9-7-default
Root device:    /dev/vg00/root (mounted on / as xfs)
Device md1 not handled
Script /lib/mkinitrd/setup/72-block.sh failed!

Kernel image:   /boot/vmlinuz-2.6.31-rc9-7-trace
Initrd image:   /boot/initrd-2.6.31-rc9-7-trace
Root device:    /dev/vg00/root (mounted on / as xfs)
Device md1 not handled
Script /lib/mkinitrd/setup/72-block.sh failed!

I tried the workaround with rootdev that I found on the mailing list, but then the only thing that happens is:
linux:/ # mkinitrd -d /dev/md1

Kernel image:   /boot/vmlinuz-2.6.31-rc9-7-default
Initrd image:   /boot/initrd-2.6.31-rc9-7-default
node name not found
Could not find the filesystem type for root device /dev/md1

Currently available -d parameters are:
        Block devices   /dev/<device>
        NFS             <server>:<path>
        URL             <protocol>://<path>

OK, here is what I discovered:
linux:/ # diff /lib/mkinitrd/setup/62-dm.sh /lib/mkinitrd/setup/62-lvm2.sh
linux:/ #

linux:/ # ls -l /lib/mkinitrd/setup/62-dm.sh /lib/mkinitrd/setup/62-lvm2.sh
lrwxrwxrwx 1 root root 22 Sep 25 16:53 /lib/mkinitrd/setup/62-dm.sh -> ../scripts/setup-dm.sh
lrwxrwxrwx 1 root root 24 Sep 25 16:53 /lib/mkinitrd/setup/62-lvm2.sh -> ../scripts/setup-lvm2.sh
linux:/ #


I certainly think that this is an error. ../scripts/setup-lvm2.sh is just a copy of ../scripts/setup-dm.sh. Therefore md1 (in my case) is not handled as LVM2.

I also think that this is an error because of this:
http://www.novell.com/products/linuxpackages/opensuse/mkinitrd.html
848 Sep 19 14:28 /lib/mkinitrd/scripts/setup-dm.sh
1093 Sep 19 14:28 /lib/mkinitrd/scripts/setup-lvm2.sh

Sometime between 10.3 and now, someone seems to have copied ../scripts/setup-dm.sh over ../scripts/setup-lvm2.sh.

The other side is this:
http://www.novell.com/products/linuxpackages/opensuse11.1/mkinitrd.html

No such scripts are listed there.

Well, either:
a) it is back again, or
b) it's a leftover from old installations.

If it's back again, then it seems to be a packaging error.
If it is a leftover, then it seems to be one as well, because it was not deleted and is now interfering with the new package.

I booted into the openSUSE 11.2 Factory live filesystem, mounted all needed filesystems, copied /dev to /mnt/dev, mounted sys, devpts and proc, and did a chroot. That's my rescue basis at the moment.

regards,
anniyka
Comment 23 Forgotten User gRveQ1K55E 2009-09-25 14:52:54 UTC
Sorry, I just saw that this is a RAID 0 discussion; nonetheless, I think it's all related ...

linux:/ # rpm -qf /lib/mkinitrd/scripts/setup-dm.sh
device-mapper-1.02.31-7.6.x86_64
linux:/ # rpm -qf /lib/mkinitrd/scripts/setup-lvm2.sh
lvm2-2.02.45-8.1.x86_64
linux:/ # rpm -qf /lib/mkinitrd/scripts/setup-dm.sh
device-mapper-1.02.31-7.6.x86_64

Well ...

OK, I made an initrd by putting all needed (I think) modules, and more, into /etc/sysconfig/kernel:

INITRD_MODULES="processor thermal fan xfs raid456 async_xor async_memcpy async_tx raid6_pq xor sr_mod amd74xx pata_amd sata_nv ata_generic sg ide_core ide_gd_mod ide_cd_mod ide_pci_generic dm_mod edd raid1"

Then I put another case into /lib/mkinitrd/scripts/setup-block.sh; I added

            md*)
                echo dm_mod
                ;;

in the function get_devmodule() (easy to see where ;) )
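[Editor's note: for illustration, here is a minimal stand-in for such a case branch. The function below is hypothetical -- it is not the actual get_devmodule() from setup-block.sh -- and only shows how an added md*) arm changes the outcome from "Device ... not handled" to a module name.]

```shell
# Hypothetical stand-in for get_devmodule(): with an md*) branch present,
# md devices yield a module name instead of the "not handled" failure.
get_devmodule_sketch() {
    case "$1" in
        md*)
            echo dm_mod          # the workaround branch described above
            ;;
        *)
            echo "Device $1 not handled" >&2
            return 1
            ;;
    esac
}

get_devmodule_sketch md1    # prints: dm_mod
```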

Then I ran mkinitrd:
linux:/ # mkinitrd -d /dev/vg00/root

Kernel image:   /boot/vmlinuz-2.6.31-rc9-7-default
Initrd image:   /boot/initrd-2.6.31-rc9-7-default
Root device:    /dev/vg00/root (mounted on / as xfs)
WARNING: no dependencies for kernel module 'usbcore' found.
WARNING: no dependencies for kernel module 'ohci_hcd' found.
WARNING: no dependencies for kernel module 'uhci-hcd' found.
WARNING: no dependencies for kernel module 'ehci_hcd' found.
WARNING: no dependencies for kernel module 'usbhid' found.
Kernel Modules: thermal_sys processor thermal fan exportfs xfs xor async_tx async_memcpy async_xor raid6_pq raid456 cdrom sr_mod ide-core amd74xx pata_amd sata_nv ata_generic sg ide-gd_mod ide-cd_mod ide-pci-generic dm-mod edd raid1 dm-snapshot
Features:       dm lvm2 block usb resume.userspace resume.kernel
Bootsplash:     openSUSE (1280x1024)
27854 blocks

Kernel image:   /boot/vmlinuz-2.6.31-rc9-7-trace
Initrd image:   /boot/initrd-2.6.31-rc9-7-trace
Root device:    /dev/vg00/root (mounted on / as xfs)
WARNING: no dependencies for kernel module 'usbcore' found.
WARNING: no dependencies for kernel module 'ohci_hcd' found.
WARNING: no dependencies for kernel module 'uhci-hcd' found.
WARNING: no dependencies for kernel module 'ehci_hcd' found.
WARNING: no dependencies for kernel module 'usbhid' found.
Kernel Modules: thermal_sys processor thermal fan exportfs xfs xor async_tx async_memcpy async_xor raid6_pq raid456 cdrom sr_mod ide-core amd74xx pata_amd sata_nv ata_generic sg ide-gd_mod ide-cd_mod ide-pci-generic dm-mod edd raid1 dm-snapshot
Features:       dm lvm2 block usb resume.userspace resume.kernel
Bootsplash:     openSUSE (1280x1024)
28206 blocks
mdadm: cannot open /dev/md/1: No such file or directory
2009-09-25 18:49:28 WARNING: GRUB::GrubDev2UnixDev: No partition found for /dev/disk/by-id/ata-SAMSUNG_SP2014N_S088J10L405647 with 2.
2009-09-25 18:49:28 WARNING: GRUB::GrubDev2UnixDev: No partition found for /dev/disk/by-id/ata-SAMSUNG_SP2014N_S088J10L405647 with 2.
2009-09-25 18:49:28 WARNING: GRUB::GrubDev2UnixDev: No partition found for /dev/disk/by-id/ata-SAMSUNG_SP2014N_S088J10L405647 with 2.
2009-09-25 18:49:28 WARNING: GRUB::GrubDev2UnixDev: No partition found for /dev/disk/by-id/ata-SAMSUNG_SP2014N_S088J10L405647 with 2.
2009-09-25 18:49:28 WARNING: GRUB::GrubDev2UnixDev: No partition found for /dev/disk/by-id/ata-SAMSUNG_SP2014N_S088J10L405647 with 2.
linux:/ # ls /dev/disk/by-id/ata-SAMSUNG_SP2014N_S088J10L405647 -l
lrwxrwxrwx 1 root root 9 Sep 25 18:48 /dev/disk/by-id/ata-SAMSUNG_SP2014N_S088J10L405647 -> ../../sda
linux:/ # ls /dev/disk/by-id/../../sda
/dev/disk/by-id/../../sda
linux:/ # ls /dev/md*
/dev/md0   /dev/md11  /dev/md14  /dev/md17  /dev/md2   /dev/md22  /dev/md25  /dev/md28  /dev/md30  /dev/md5  /dev/md8
/dev/md1   /dev/md12  /dev/md15  /dev/md18  /dev/md20  /dev/md23  /dev/md26  /dev/md29  /dev/md31  /dev/md6  /dev/md9
/dev/md10  /dev/md13  /dev/md16  /dev/md19  /dev/md21  /dev/md24  /dev/md27  /dev/md3   /dev/md4   /dev/md7
linux:/ #

Nonetheless, I tried to boot and ran into the annoying fact that no LVM was started (so no root device) ... I think this is something that should be in /lib/mkinitrd/scripts/setup-lvm2.sh?

regards,
anniyka
Comment 24 Forgotten User gRveQ1K55E 2009-09-26 13:15:25 UTC
linux:~/cpio # find | grep -e dm -e lvm -e md | grep -v amd
./lib/udev/collect_lvm
./lib/udev/rules.d/64-md-raid.rules
./lib/udev/rules.d/64-lvm2.rules
./lib/udev/idedma.sh
./lib/modules/2.6.31-rc9-7-default/kernel/drivers/md
./lib/modules/2.6.31-rc9-7-default/kernel/drivers/md/raid6_pq.ko
./lib/modules/2.6.31-rc9-7-default/kernel/drivers/md/raid456.ko
./lib/modules/2.6.31-rc9-7-default/kernel/drivers/md/raid1.ko
./lib/modules/2.6.31-rc9-7-default/kernel/drivers/md/dm-snapshot.ko
./lib/modules/2.6.31-rc9-7-default/kernel/drivers/md/dm-mod.ko
./boot/03-lvm2.sh
./boot/03-dm.sh
./config/dm.sh
./config/lvm2.sh
./sbin/lvm
./sbin/udevadm
./sbin/dmsetup
linux:~/cpio # diff ./config/dm.sh ./config/lvm2.sh
linux:~/cpio #

Shouldn't there be something to set up MDs or LVs? Well, if the scripts (noted in the previous comment) are the same, maybe that's the reason? ...
Comment 25 Forgotten User 7645792743 2009-09-26 18:13:18 UTC
clearing queue
Comment 26 Forgotten User 1GBkbCnI0A 2009-09-28 09:49:31 UTC
I am not really sure why this bug has been closed, although there is no solution in sight.
Can someone please clarify?
Comment 27 Forgotten User gRveQ1K55E 2009-09-28 19:14:39 UTC
It's annoying ...
Comment 28 Bill Merriam 2009-09-29 01:02:35 UTC
I copied /lib/mkinitrd/scripts/setup-lvm2.sh and /lib/mkinitrd/scripts/boot-lvm2.sh from an 11.1 system to my 11.2M7 system and it fixed this problem. You need to send the bug to the maintainer of the lvm2 rpm.
Comment 29 Forgotten User gRveQ1K55E 2009-09-29 09:21:34 UTC
Tried this too, but it didn't work. It's always the same as in my last comment in Bug 505670.
Comment 30 Ray Sutton 2009-10-02 02:01:13 UTC
I'm having a similar problem; I tried copying setup-lvm2.sh/boot-lvm2.sh as outlined above - no joy!

My configuration is LVM2 on top of software RAID 1: /boot is /dev/md0 (raid1/ext3) and / is an LVM2 partition on top of /dev/md1 (raid1/PV).

I don't claim to understand the initrd process, but tracing linuxrc it looks as if this may be a sequencing problem. Looking at the run_all.sh script in the generated initrd, it executes:

02-start
03-dm
03-rtc
03-storage
04-udev
........
51-md
61-lvm2

udev times out waiting for the root device, but by then the md and lvm2 initialization hasn't occurred, so the root device is not visible. Also, the initial install of 11.2M7 failed during initrd creation in 72-block.sh, as md* isn't recognized as a block device.
Comment 32 Ray Sutton 2009-10-02 02:07:07 UTC
Also to answer Piotrek in comment #2, in my experience LVM on top of software raid (typically raid1) is a common install scenario.
Comment 33 Ray Sutton 2009-10-03 19:31:01 UTC
I just installed M8 on LVM2 over software RAID 1; the initrd problem is still encountered.
It reports errors creating the initrd (2 distinct errors):

1: cp cannot stat /etc/scsi_id.config

2: Script /lib/mkinitrd/setup/72-block.sh failed
   Device md1 not handled

This is identical to the error I got trying to install M7

Hardware config is core I7 12Gb ram 2x500Gb disk as a mirrored pair. 

partition 1 /dev/md0 ext4 for /boot 200mb
partition 2 /dev/md1 remainder of disk as physical volume
            /dev/sys/dom0 10G LVM2 partition ext4 for /
            /dev/sys/dom0swap 1G for swap

I fixed 1 by copying the file from 11.1.
I fixed 2, or at least bypassed the problem, by adding

md*)
   result="dm-mod raid1"
   ;;

to the case statement in the get_devmodule subroutine.

I regenerated the initrd; boot failed waiting for /dev/sys/dom0 to appear.

I believe the problem is that the LVM setup and udev run before the md setup has occurred, hence the volume group is not accessible.
Comment 34 Ray Sutton 2009-10-03 21:49:50 UTC
I was able to get the system to boot using the following procedure:

Unpack initrd to a directory
edit run_all.sh before the call to 03-lvm as follows: 

define (mknod) sda,sda1,sda2,sdb,sdb1,sdb2,md0,md1 manually
assemble raid devices md0/md1 manually
vgchange to bring the lvm devices online.

repack initrd & reboot.
Comment 35 Marco Bakker 2009-10-04 13:51:43 UTC
I'm having the same issue as described. See the 2 errors as defined by Ray Sutton. 

I'm trying to install 11.2 M8 as a new install using AutoYaST: /boot on md1 and / on LVM. The same install works fine with 11.1.
Comment 36 Stephan Kulow 2009-10-12 08:10:09 UTC
*** Bug 546022 has been marked as a duplicate of this bug. ***
Comment 37 Stephan Kulow 2009-10-12 08:11:19 UTC
*** Bug 540522 has been marked as a duplicate of this bug. ***
Comment 40 Forgotten User 1GBkbCnI0A 2009-10-12 09:29:08 UTC
I can confirm the solution from Petri Asikainen from Bug 540522, but the lvm2
package from 11.2M8 differs greatly from the one from factory - which Petri
used and modified. The factory lvm2 seems to be more like the one from 11.1.

Can we have the lvm2 from factory with the trivial fix for 11.2?
Comment 41 Xin Wei Hu 2009-10-12 09:30:53 UTC
(In reply to comment #40)
> I can confirm the solution from Petri Asikainen from Bug 540522, but the lvm2
> package from 11.2M8 differs greatly from the one from factory - which Petri
> used and modified. The factory lvm2 seems to be more like the one from 11.1.
> 
> Can we have the lvm2 from factory with the trivial fix for 11.2?

I've submitted the related patch to Factory already.
It is pending acceptance now.
Comment 42 Forgotten User gRveQ1K55E 2009-10-13 12:31:42 UTC
Ok, are these the packages?
Different errors now.

linux:/ # for i in mkinitrd lvm2; do rpm -q $i; rpm -qi $i | grep "Build Date"; done ; mkinitrd ; mkinitrd -A; ls -l /boot/init*
mkinitrd-2.5.10-3.5.x86_64
Release     : 3.5                           Build Date: Wed Oct  7 05:08:56 2009
lvm2-2.02.45-9.1.x86_64
Release     : 9.1                           Build Date: Sat Oct  3 06:42:58 2009

Kernel image:   /boot/vmlinuz-2.6.31-10-default
Initrd image:   /boot/initrd-2.6.31-10-default
Root device:    /dev/vg00/root (mounted on / as xfs)
mkdir: cannot create directory `etc/sysconfig': File exists
/lib/mkinitrd/setup/62-dm.sh: line 32: etc/sysconfig/kernel: Not a directory
mkdir: cannot create directory `/dev/shm/mkinitramfs.HpH1ov/mnt/etc/sysconfig': File exists
/lib/mkinitrd/setup/91-clock.sh: line 19: /dev/shm/mkinitramfs.HpH1ov/mnt/etc/sysconfig/clock: Not a directory
Script /lib/mkinitrd/setup/91-clock.sh failed!

Kernel image:   /boot/vmlinuz-2.6.31-10-trace
Initrd image:   /boot/initrd-2.6.31-10-trace
Root device:    /dev/vg00/root (mounted on / as xfs)
mkdir: cannot create directory `etc/sysconfig': File exists
/lib/mkinitrd/setup/62-dm.sh: line 32: etc/sysconfig/kernel: Not a directory
mkdir: cannot create directory `/dev/shm/mkinitramfs.HpH1ov/mnt/etc/sysconfig': File exists
/lib/mkinitrd/setup/91-clock.sh: line 19: /dev/shm/mkinitramfs.HpH1ov/mnt/etc/sysconfig/clock: Not a directory
Script /lib/mkinitrd/setup/91-clock.sh failed!

Kernel image:   /boot/vmlinuz-2.6.31-10-default
Initrd image:   /boot/initrd-2.6.31-10-default
Root device:    /dev/vg00/root (mounted on / as xfs)
mkdir: cannot create directory `etc/sysconfig': File exists
/lib/mkinitrd/setup/62-dm.sh: line 32: etc/sysconfig/kernel: Not a directory
mkdir: cannot create directory `/dev/shm/mkinitramfs.EvM2xq/mnt/etc/sysconfig': File exists
/lib/mkinitrd/setup/91-clock.sh: line 19: /dev/shm/mkinitramfs.EvM2xq/mnt/etc/sysconfig/clock: Not a directory
Script /lib/mkinitrd/setup/91-clock.sh failed!

Kernel image:   /boot/vmlinuz-2.6.31-10-trace
Initrd image:   /boot/initrd-2.6.31-10-trace
Root device:    /dev/vg00/root (mounted on / as xfs)
mkdir: cannot create directory `etc/sysconfig': File exists
/lib/mkinitrd/setup/62-dm.sh: line 32: etc/sysconfig/kernel: Not a directory
mkdir: cannot create directory `/dev/shm/mkinitramfs.EvM2xq/mnt/etc/sysconfig': File exists
/lib/mkinitrd/setup/91-clock.sh: line 19: /dev/shm/mkinitramfs.EvM2xq/mnt/etc/sysconfig/clock: Not a directory
Script /lib/mkinitrd/setup/91-clock.sh failed!
lrwxrwxrwx 1 root root 24 Oct 13 15:20 /boot/initrd -> initrd-2.6.31-10-default

mkinitrd -A is not working anymore.

linux:/ # rpm -qf `find /lib/mkinitrd/` | sort | uniq | grep -v "not owned by any package"
bootsplash-3.3-146.112.x86_64
cifs-mount-3.4.1-1.7.x86_64
cryptsetup-1.0.7-9.1.x86_64
device-mapper-1.02.31-9.1.x86_64
dmraid-1.0.0.rc15-8.1.x86_64
kpartx-0.4.8-43.2.x86_64
lvm2-2.02.45-9.1.x86_64
mdadm-3.0.2-1.1.x86_64
mkinitrd-2.5.10-3.5.x86_64
multipath-tools-0.4.8-43.2.x86_64
nfs-client-1.1.3-20.1.x86_64
splashy-0.3.13-2.23.x86_64
suspend-0.80.20081103-2.17.x86_64
sysvinit-2.86-214.1.x86_64

No modified scripts:
linux:/ # touch /lib/mkinitrd/setup/test~
linux:/ # rpm -qf `find /lib/mkinitrd/ | grep "~"` | sort | uniq
file /lib/mkinitrd/setup/test~ is not owned by any package
Comment 43 Forgotten User gRveQ1K55E 2009-10-13 12:48:25 UTC
Well:

linux:/lib/mkinitrd/setup # for i in  01-prepare.sh 01-splashy.sh 02-start.sh 03-udev.sh 03-usb.sh 11-storage.sh  21-luks.sh 31-lvm2.sh ; do grep -H sysconfig $i; done | grep -v "#"
02-start.sh:    . $root_dir/etc/sysconfig/kernel
02-start.sh:    . $root_dir/etc/sysconfig/kernel
31-lvm2.sh:     cp -a /etc/sysconfig/lvm $tmp_mnt/etc/sysconfig

Yet nothing creates a sysconfig directory first ...
Comment 44 Xin Wei Hu 2009-10-13 12:55:49 UTC
(In reply to comment #43)
> Well:
> 
> linux:/lib/mkinitrd/setup # for i in  01-prepare.sh 01-splashy.sh 02-start.sh
> 03-udev.sh 03-usb.sh 11-storage.sh  21-luks.sh 31-lvm2.sh ; do grep -H
> sysconfig $i; done | grep -v "#"
> 02-start.sh:    . $root_dir/etc/sysconfig/kernel
> 02-start.sh:    . $root_dir/etc/sysconfig/kernel
> 31-lvm2.sh:     cp -a /etc/sysconfig/lvm $tmp_mnt/etc/sysconfig
> 
> Without a directory sysconfig ...

That's not the updated package yet.
It was accepted 12 hours ago, but I don't know how long it takes to hit the downloadable area.

Thanks for testing.
Comment 45 Forgotten User gRveQ1K55E 2009-10-13 13:05:36 UTC
Well, you can check this patch if you still need it. With this I could build my initrd:


linux:/lib/mkinitrd/scripts # diff -cB setup-prepare.sh setup-prepare.sh~
*** setup-prepare.sh    Tue Oct 13 16:51:59 2009
--- setup-prepare.sh~   Tue Aug 11 11:59:50 2009
***************
*** 137,143 ****
  cp $INITRD_PATH/bin/linuxrc $linuxrc
  mkdir "$tmp_mnt/boot"

! mkdir -p $tmp_mnt/{sbin,bin,etc,dev,proc,sys,root,config,etc/sysconfig}

  mkdir -p -m 4777 $tmp_mnt/tmp

--- 137,143 ----
  cp $INITRD_PATH/bin/linuxrc $linuxrc
  mkdir "$tmp_mnt/boot"

! mkdir -p $tmp_mnt/{sbin,bin,etc,dev,proc,sys,root,config}

  mkdir -p -m 4777 $tmp_mnt/tmp
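The patch above boils down to adding `etc/sysconfig` to the brace expansion in setup-prepare.sh, so the directory exists in the initramfs skeleton before 62-dm.sh and 91-clock.sh try to write into it. A minimal sketch of the effect, using a throwaway temp directory in place of the real `$tmp_mnt` (bash brace expansion, as in the script itself):

```shell
# Sketch only: $tmp_mnt here is a scratch directory, not the real
# initramfs staging area used by mkinitrd.
tmp_mnt=$(mktemp -d)

# Patched form from the diff: etc/sysconfig is created along with the
# other skeleton directories, so later copies into it succeed.
mkdir -p $tmp_mnt/{sbin,bin,etc,dev,proc,sys,root,config,etc/sysconfig}

test -d "$tmp_mnt/etc/sysconfig" && echo "etc/sysconfig exists"

rm -rf "$tmp_mnt"
```

Without the extra `etc/sysconfig` entry, the directory only appears if some later script happens to create it, which is exactly what the failing 62-dm.sh and 91-clock.sh runs above were tripping over.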
Comment 46 Forgotten User gRveQ1K55E 2009-10-13 13:27:22 UTC
It booted :) The packages I mentioned above and the patch I sent did it.

Now I do have some other problems ;)
Comment 47 Stephan Kulow 2009-10-13 15:00:25 UTC
I just did a test installation of Build327 with LVM and mkinitrd works fine. I hope it's OK to close the bug.
Comment 48 Stephan Kulow 2009-10-13 15:02:44 UTC
*** Bug 544361 has been marked as a duplicate of this bug. ***
Comment 49 Richard Creighton 2009-10-14 02:05:15 UTC
I hope you do NOT. I had no problems with M8, did a factory zypper dup to what appears to be labelled RC1, and it would not boot... Fortunately, GRUB had kept the M8 entry and I could get it to boot from there.

It was waiting on MD0.

My system has a separate /boot on its own partition.

/ is on a software RAID md0 which is 20G and excludes /home and a couple of other directories like /srv.

/home and /srv are on an LVM across 4 drives, grabbing bits and pieces of otherwise unused space totalling about 700G. This works, and worked well in M8, with NO changes other than the upgrade mentioned; RC1 fails during boot waiting for md0 and hangs until I reset the machine. Remember, this is a separate PRIMARY partition for /boot: 2 identical drives with primary partitions dedicated to a RAID1 md0, and the rest of the drive partitions allocated to the LVM for /home etc. This worked fine from M5 through M8. Prior to that, I didn't use RAID or LVM, as I wasn't yet testing that aspect on that machine.
Comment 50 Piotrek Juzwiak 2009-10-14 09:13:22 UTC
But is your / on an LVM which sits on top of a RAID stripe? Or do you have ONLY a RAID stripe set with / directly on it?
Comment 51 Stephan Kulow 2009-10-14 13:12:07 UTC
We need more data than just Richard's setup. Is the new package working for everyone else? (Please remember to call mkinitrd explicitly if you do zypper dup.)
Comment 52 Forgotten User gRveQ1K55E 2009-10-14 13:29:33 UTC
The packages I mentioned in comment #42:

-----------
mkinitrd-2.5.10-3.5.x86_64
Release     : 3.5                           Build Date: Wed Oct  7 05:08:56
2009
lvm2-2.02.45-9.1.x86_64
Release     : 9.1                           Build Date: Sat Oct  3 06:42:58
2009
-----------

Together with my patch from comment #45 they did it for me, but your packages alone did not! So to answer your question: no, it did not work. If there is a newer package available now, I will try it this evening.
Comment 54 Richard Creighton 2009-10-14 16:27:21 UTC
What, if any, additional information would you need from me? I believe this bug came closest to describing the problem I experienced: it is not about LVM or RAID specifically, it is about mkinitrd when either or both of those are used to store all or part of the root of the file system. In my case, / is on a RAID 1, but /boot is on a normal primary partition, not part of either an LVM or a RAID. The directories needed for building a new module or load image, however, certainly reside on a RAID or LVM, or possibly both in my case, since /usr is on the LVM for instance. At the time things are being updated or built, that tree is (or should be) fully available, as the system hasn't been rebooted yet while the update scripts run. FWIW, it also fails when run from YaST as opposed to zypper dup. Previous updates using YaST ("update all with newer", only Factory enabled, from M8 to RC1) had not failed, and running zypper dup as a last resort to repair the failure had no positive effect.

I hope this does not become a pissing match over technicalities about RAID versions or LVM vs. RAID. It is a failure of mkinitrd when using one or both of those in some combination, and the cause needs to be identified. I will do my best to help identify it, but remember, I am NOT a technical person (anymore); I am a stroke victim who USED to be a systems engineer/programmer and hasn't programmed in 'modern' low-level languages in many years.
Comment 55 Richard Creighton 2009-10-14 16:36:16 UTC
I would like to add that I created a VM running under VirtualBox (Sun) without LVM or RAID, and the update from M8 to RC1 went perfectly; it even fixed a problem with VLC that had been plaguing all instances of 11.2 Factory I had previously tried, much to my great pleasure. So the RAID/LVM/mkinitrd problem is (to me) a major problem, but it seems isolated to those who use RAID/LVM as part of the container holding their root partition. This is probably because mkinitrd cannot get at everything it needs when it needs it?
Comment 56 Forgotten User gRveQ1K55E 2009-10-14 20:28:13 UTC
:-/ ...

Did zypper dup just a minute ago (well, it finished a minute ago):

anniys:~ # mkinitrd

Kernel image:   /boot/vmlinuz-2.6.31.3-1-default
Initrd image:   /boot/initrd-2.6.31.3-1-default
Root device:    /dev/vg00/root (mounted on / as xfs)
Resume device:  /dev/disk/by-id/ata-SAMSUNG_SP2014N_S088J10L405647-part1 (/dev/hda1)
setup-md.sh: md1 found multiple times
Kernel Modules: thermal_sys processor thermal fan exportfs xfs xor async_tx async_memcpy async_xor raid6_pq raid456 cdrom sr_mod ide-core amd74xx pata_amd sata_nv ata_generic sg ide-gd_mod ide-cd_mod ide-pci-generic dm-mod edd raid1 dm-snapshot pata_acpi raid0 linear
Features:       dm block usb md lvm2 resume.userspace resume.kernel
Bootsplash:     openSUSE (1280x1024)
30689 blocks

Kernel image:   /boot/vmlinuz-2.6.31.3-1-trace
Initrd image:   /boot/initrd-2.6.31.3-1-trace
Root device:    /dev/vg00/root (mounted on / as xfs)
Resume device:  /dev/disk/by-id/ata-SAMSUNG_SP2014N_S088J10L405647-part1 (/dev/hda1)
setup-md.sh: md1 found multiple times
Kernel Modules: thermal_sys processor thermal fan exportfs xfs xor async_tx async_memcpy async_xor raid6_pq raid456 cdrom sr_mod ide-core amd74xx pata_amd sata_nv ata_generic sg ide-gd_mod ide-cd_mod ide-pci-generic dm-mod edd raid1 dm-snapshot pata_acpi raid0 linear
Features:       dm block usb md lvm2 resume.userspace resume.kernel
Bootsplash:     openSUSE (1280x1024)
31048 blocks


What the heck is this:
setup-md.sh: md1 found multiple times

At least it built the initrd; I will see if it boots up ...
Comment 57 Richard Creighton 2009-10-14 22:10:48 UTC
Created attachment 322553 [details]
Directory listing of MC listings of /boot and /grub showing that initrd and menu.lst were created after M8 was updated to RC1

Despite the apparent correctness of the directory, once reboot occurred the system waited indefinitely for the MD device to appear, and answering either yes or no to "shall I default to /dev/mdX" resulted in an infinite wait. Entering Ctrl-C drops to a $ shell, and cd /var/log shows NO ENTRIES of any kind, e.g. no files, no messages, no logs. Same for any other directories I look in. There are 3 directories; /proc, /dev etc. are mounted. If I reboot to the M8 version, it boots normally with no problem and no reported errors, of course with a different kernel flavor, but it has no problems with the new versions of programs that end up running.
Comment 58 Ludwig Nussel 2009-10-15 09:10:06 UTC
Suppose the volume group of the root volume is called 'system'. What is the output of the following command on your system?
# vgs --noheadings --options pv_name system

For lvm on top of raid it's /dev/md0 on 11.1 and /dev/dm-0 on 11.2 AFAICT.
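Ludwig's observation can be checked from sysfs: every device-mapper node lists its parent block devices in a `slaves` directory, so a PV reported as /dev/dm-0 can still be traced back to the underlying md device. This is a hedged sketch, not code from mkinitrd; the `SYSFS` override is a hypothetical knob added here so the function can be exercised against a fake tree, and the device names are illustrative:

```shell
# List the parent block devices of a device node via sysfs. On a box
# where the root VG's PV shows up as /dev/dm-0 on top of software
# RAID, this would typically print something like "md0".
resolve_slaves() {
    local sysfs=${SYSFS:-/sys}
    local dev=${1##*/}                  # strip any /dev/ prefix
    ls "$sysfs/block/$dev/slaves" 2>/dev/null
}

# Example; prints nothing if the device does not exist on this machine.
resolve_slaves /dev/dm-0 || true
```

The output depends entirely on the local device stack, which is why the same `vgs` command gives /dev/md0 on 11.1 but /dev/dm-0 on 11.2.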
Comment 59 Ludwig Nussel 2009-10-15 12:10:37 UTC
Forget what I said, I can't reproduce without LUKS involved. Even a config with /boot on raid works. So the bug here seems to be fixed indeed. The message 'md1 found multiple times' is harmless.

Due to lack of better knowledge my advice for anyone still having problems would be to post a screenshot of the graph displayed by the yast2 partitioner. Also uncomment the line
echo "[$curscript] $blockdev"
around line 454 in /sbin/mkinitrd before posting the output of mkinitrd. That helps to understand what devices are considered by mkinitrd.
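A hedged one-liner version of that debugging step, demonstrated on a scratch file rather than the real /sbin/mkinitrd (back that up before editing it for real); the sed pattern matches the line exactly as quoted above:

```shell
# Demonstrate uncommenting the debug echo on a sample line; to apply
# it for real, run the same sed against a backed-up /sbin/mkinitrd.
f=$(mktemp)
printf '%s\n' '    #echo "[$curscript] $blockdev"' > "$f"
sed -i 's|^\([[:space:]]*\)#\(echo "\[\$curscript\] \$blockdev"\)|\1\2|' "$f"
cat "$f"    # now reads:     echo "[$curscript] $blockdev"
rm -f "$f"
```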
Comment 60 Ludwig Nussel 2009-10-15 12:14:00 UTC
Created attachment 322652 [details]
disk layout with root on lvm on raid and boot on raid works with Build0334
Comment 61 Forgotten User gRveQ1K55E 2009-10-15 15:24:29 UTC
anniys:~ # vgs --noheadings --options pv_name vg00
  /dev/md1

And yes, it boots now.
Comment 62 Ray Sutton 2009-10-16 01:06:43 UTC
Mostly works for me :-)

Configuration /boot on  /dev/md0 (raid1/ext4)
              /     on  /dev/sys/dom0 (ext4 on LVM2 on raid1)

RC1 (Build 334, x86_64) installed with no problem and rebooted fine after install into desktop mode, but rebooting into dom0 under Xen failed when switching to runlevel 5 (system locked).

Regenerated the initrd as a test; it reported the missing scsi_config file, but this didn't prevent the initrd from being recreated.

On reboot, desktop mode worked fine but xen-dom0 crapped out switching to runlevel 5; this time I was able to recover to runlevel 3.

The initrd problem is resolved for me, I just need to go beat up (gently of course) the x-windows or xen guys (probably the latter).

Thanks to everybody who had time to work this problem!
Comment 63 Richard Creighton 2009-10-16 04:28:00 UTC
Created attachment 322777 [details]
Partition Layout of test machine

Test machine layout showing /boot as physical partition, root on MD1 and most of the rest of the machine on an LVM
Comment 64 Richard Creighton 2009-10-16 06:13:33 UTC
My comment associated with the attachment in comment 63 didn't seem to take. I downloaded the RC1 ISO, created the DVD, and installed it as an update on the test machine using the configuration shown in the attachment above. It installed successfully, with the exception that GRUB did not update properly: it left the old, non-bootable RC1 marked as the default. It added the new kernel to the menu but did not mark it as the default, so when it rebooted it of course went into the old waiting game for md1, which never appeared. When I rebooted manually, I noticed the menu was not 'right' in that the default was the 2nd set of RC1 choices, so I manually selected the top RC1 default flavor and it booted. I then went into GRUB, set that first (top) menu item as the default, deleted the bottom two items from the menu (the old pae flavor), and rebooted from power-off just to be sure it would come up without intervention, and it did. I would say that aside from GRUB messing up, the initrd problem with LVM and RAID is fixed, and from my perspective at least this bug could be marked as closed.

Richard
Comment 65 Hawke Robinson 2009-10-16 20:22:49 UTC
This is with 11.2 Milestone 8.
Have tried fresh installs twice with same result. 

With 2 identical 320 GB hard drives in HP dv9000 laptop with LVM on top of RAID1 with /dev/system/home running LUKS.

/dev/sda1 = FAT 1GB
/dev/sda2 = NTFS 100MB (Windows 7 RTM auto-created for "system" requirements)
/dev/sda3 = NTFS 73.24GB (Windows 7 RTM)
/dev/sda4 = extended 223.74GB
/dev/sda5 = ext2 1GB /boot
/dev/sda6 = swap 1.25GB
/dev/sda7 = raid part 1 of md0 221.48GB


/dev/sdb1 = FAT 1GB
/dev/sdb2 = NTFS 100MB (Windows 7 RTM)
/dev/sdb3 = NTFS 73.24GB (Windows 7 RTM)
/dev/sdb4 = extended 223.74GB
/dev/sdb5 = ext2 1GB /boot2
/dev/sdb6 = swap 1.25GB
/dev/sdb7 = raid part 2 of md0 221.48GB

lvm = volume: system =   GB
/dev/system/root = ext4   GB
/dev/system/home = ext3 with crypto  GB


After installation is nearly complete, the final line of the error popup is:

"script /lib/mkinitrd/setup/72-block.sh failed"

Other than the more complex partitioning with LVM on top of RAID1, the installation is a default automatic software configuration (including the laptop package pattern, already selected by default), with the KDE desktop.
Comment 66 Hawke Robinson 2009-10-16 20:26:16 UTC
I see that 11.2 RC1 is now out. I am downloading that, and will see how that behaves with the same setup on the same system.
Comment 67 Hawke Robinson 2009-10-18 05:29:47 UTC
I have now tested it under RC1 with the same exact setup, and it worked just fine. LUKS-encrypted /home partition via LVM on top of RAID1, all working fine now; it went through the install fine. At least from what I can tell on my end, all fixed in RC1. So can we reclose?
Comment 68 Xin Wei Hu 2009-10-19 03:02:34 UTC
So I'm closing it again, as reports indicate that the bug is fixed in RC1.

Thank you all for participating in testing and resolving this issue!
Comment 69 Hawke Robinson 2009-11-02 18:33:51 UTC
And appears to remain fixed in RC2 (just tested it). ;-)