Bug 505670 - Unable to boot - LVM/mapper does not create devices
Summary: Unable to boot - LVM/mapper does not create devices
Status: RESOLVED FIXED
Duplicates: 508109 521367 530833
Alias: None
Product: openSUSE 11.2
Classification: openSUSE
Component: Kernel
Version: Milestone 7
Hardware: All Other
Priority: P2 - High  Severity: Critical with 10 votes
Target Milestone: ---
Assignee: Xin Wei Hu
QA Contact: E-mail List
URL:
Whiteboard:
Keywords:
Depends on:
Blocks:
 
Reported: 2009-05-20 14:53 UTC by Lukas Lipavsky
Modified: 2009-11-13 11:10 UTC
CC List: 17 users

See Also:
Found By: System Test
Services Priority:
Business Priority:
Blocker: ---
Marketing QA Status: ---
IT Deployment: ---
coolo: SHIP_STOPPER-


Attachments
screenshot (321.49 KB, image/jpeg)
2009-05-20 14:53 UTC, Lukas Lipavsky
Details
installed packages (34.56 KB, text/plain)
2009-05-20 15:02 UTC, Lukas Lipavsky
Details
hwinfo (67.72 KB, text/plain)
2009-05-20 15:02 UTC, Lukas Lipavsky
Details

Description Lukas Lipavsky 2009-05-20 14:53:00 UTC
Created attachment 293325 [details]
screenshot

I am using LVM for the / directory. After the last update from Factory (2009-05-19), I am unable to boot anymore. All messages I get are in the attached screenshot. (I wanted to attach serial console output as well, but the error occurs before the serial console gets initialized.)

messages (short):
mkdir: cannot create directory `/dev/mapper': File exists
mknod: `/dev/mapper/control': File exists
Creating device nodes with udev
udevd-event[425]: device node '/dev/mapper/control' already exists, link to '/dev/device-mapper' will not overwrite it
(...)
Waiting for device /dev/system/factory to appear: ..............Could not find /dev/system/factory.
Want me to fall back to /dev/system/factory? (Y/n)

Obviously, falling back to /dev/system/factory solves nothing.

After the message I got a minimal shell, but dmesg does not work :(

I'd be glad to provide any other information, just tell me how ;-)

(I can chroot to the installation and will provide hwinfo and list of packages in attachment)
Comment 1 Lukas Lipavsky 2009-05-20 15:02:19 UTC
Created attachment 293328 [details]
installed packages
Comment 2 Lukas Lipavsky 2009-05-20 15:02:54 UTC
Created attachment 293329 [details]
hwinfo
Comment 3 James Oakley 2009-05-22 12:24:10 UTC
I had the same problem. It looks like the issue is with the initrd scripts in the lvm2 RPM. I reverted to the one from 11.1 and rebuilt my initrds and I can boot now.
Comment 4 Lukas Lipavsky 2009-05-22 13:46:53 UTC
Seems to be a problem in the lvm2 package -> assigning to the lvm2 maintainer.
Comment 5 Xin Wei Hu 2009-05-25 08:53:35 UTC
An LVM2 issue indeed.

Thanks for the report. Also, please try the latest Factory build (just submitted; it may take days to be accepted and built).
Comment 6 Lukas Lipavsky 2009-05-25 11:44:01 UTC
I've just tried with lvm2 from home:xwhu:Factory and the problem is fixed there.
Comment 7 Piotrek Juzwiak 2009-05-30 10:22:17 UTC
*** Bug 508109 has been marked as a duplicate of this bug. ***
Comment 8 Mario Guzman 2009-05-30 20:45:17 UTC
In case this helps: the comment in bug 508109 implies the bug was also in Milestone 1. For me, Milestone 1 installed fine on my 2-disk system with no problem. Milestone 2, however, gave me the same messages mentioned in bug 508109 and is not installable.
Comment 9 Piotrek Juzwiak 2009-05-31 08:49:02 UTC
@8 Mario Guzman
It may be true, as all Factory images after Milestone 1 were also called Milestone 1, and because I didn't yet have LVM on the Milestone 1 release I can't say for sure that the M1 release (Build 0066) was booting successfully. Perhaps it has to do with the change to GCC 4.4?
Comment 10 James Oakley 2009-06-01 13:19:00 UTC
(In reply to comment #9)
> @8 Mario Guzman
> It may be true as all factory images after Milestone 1 were also called
> Milestone 1 and because i didn't yet have LVM on the Milestone 1 release i
> can't say for sure that the M1 release (Build 0066) was booting succesfully.
> Perhaps it has got to do with the change to GCC 4.4 ?

It occurred in Factory sometime between M1 and M2.

The problem is simply that newer builds of lvm2 have an initrd script that is missing the appropriate commands to set up the LVM.
Comment 11 Xin Wei Hu 2009-06-01 14:56:06 UTC
(In reply to comment #10)
> (In reply to comment #9)
> > @8 Mario Guzman
> > It may be true as all factory images after Milestone 1 were also called
> > Milestone 1 and because i didn't yet have LVM on the Milestone 1 release i
> > can't say for sure that the M1 release (Build 0066) was booting succesfully.
> > Perhaps it has got to do with the change to GCC 4.4 ?
> 
> It occurred in Factory sometime between M1 and M2.
> 
> The problem is simply that newer builds of lvm2 have an initrd script that is
> missing the appropriate commands to set up the LVM.

Indeed.
Actually the initrd script was simplified because udev can set up logical volumes automatically now. However, we forgot to include the udev rules for that.

The new version has been submitted for a while, but it seems to be too late for M2 ...

Thanks and happy testing ;)
Comment 12 Tomas Cech 2009-06-02 13:22:20 UTC
Hi,

I traced this issue to mkinitrd.

lvm2-2.02.39-8.13 is not affected
lvm2-2.02.45-4.1 is affected.

The main problem, from my point of view, is the lack of /sbin/lvm (and the vgscan and vgchange symlinks) in the initrd. lvm2-2.02.45-4.1 contains /lib/mkinitrd/scripts/setup-lvm2.sh and /lib/mkinitrd/scripts/boot-lvm2.sh, which no longer have all the #%programs, #%modules etc. lines. IIRC these lines are necessary.

Adding the mkinitrd maintainer to CC.
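[Editor's note: for reference, the layout comment 12 says is missing — /sbin/lvm as a multi-call binary with vgscan and vgchange as symlinks to it. A sketch recreating that shape in a scratch directory; the /tmp path and the stub script are illustrative stand-ins, not the real binary.]

```shell
# Recreate the expected initrd /sbin layout in /tmp/initrd-root for illustration.
root=/tmp/initrd-root
rm -rf "$root"
mkdir -p "$root/sbin"

# Stub standing in for the real /sbin/lvm multi-call binary.
printf '#!/bin/sh\necho "lvm stub: called as $0"\n' > "$root/sbin/lvm"
chmod +x "$root/sbin/lvm"

# vgscan and vgchange are just symlinks; lvm dispatches on argv[0].
ln -s lvm "$root/sbin/vgscan"
ln -s lvm "$root/sbin/vgchange"

ls -l "$root/sbin"
```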
Comment 13 Tomas Cech 2009-06-05 07:14:53 UTC
Raising priority for this bug since it is fatal for everyone with rootfs on LVM2.
Comment 14 Piotrek Juzwiak 2009-06-22 12:25:27 UTC
*** Bug 512251 has been marked as a duplicate of this bug. ***
Comment 15 Mario Guzman 2009-07-01 21:55:45 UTC
Just tried to install 11.2 Milestone 3 and it's a no-go with the same problem. No one with root on LVM will be able to test anything on M3.
Comment 16 Xin Wei Hu 2009-07-02 02:56:36 UTC
It has been more than a month, and the package hasn't made its way into Factory yet.
I'm really sorry about this.
And please try the lvm2 package in home:xwhu:Factory before it's finally accepted.

Thanks.
Comment 17 Piotrek Juzwiak 2009-07-02 14:41:42 UTC
Xin, I used your lvm2 RPM package: unpacked it, copied it over the non-working LVM installation, chrooted into the system and invoked mkinitrd_setup, and after that I could successfully boot :D
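[Editor's note: the workaround in comment 17 can be condensed into a small function. This is a hedged sketch only — the package file name and the /mnt mount point are illustrative assumptions — and it is only defined here, not run.]

```shell
# Sketch of the comment-17 workaround: unpack the fixed lvm2 RPM over the
# broken installation, then rebuild the initrd from inside a chroot.
# $1 = path to the lvm2 RPM, $2 = mount point of the broken system.
fix_lvm_initrd() {
    rpm="$1"
    root="$2"
    # rpm2cpio extracts the payload without touching the RPM database.
    rpm2cpio "$rpm" | ( cd "$root" && cpio -idm )
    # Regenerate the mkinitrd script links, then rebuild the initrd.
    chroot "$root" /bin/sh -c 'mkinitrd_setup && mkinitrd'
}
# Example invocation (do not run against a live system):
#   fix_lvm_initrd lvm2-2.02.45-*.x86_64.rpm /mnt
```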
Comment 18 Xin Wei Hu 2009-07-08 04:00:49 UTC
The new package has been accepted.
Mark it fixed.

Thanks and happy testing. ;)
Comment 19 Mario Guzman 2009-07-31 02:25:39 UTC
I am not able to test this fix since bug 525282 occurs sooner and I can't install 11.2 at all. I am just mentioning this in case anyone thinks the two could be related. There has been no response to bug 525282 so far. In any case, I can't verify that the original bug was fixed, since the new bug stops the install sooner. Odd that both are related to lvm/partitioning.
Comment 20 Piotrek Juzwiak 2009-07-31 03:29:33 UTC
I can confirm that it has been fixed. I successfully installed 11.2 M4 on / on LVM.
Comment 21 Mario Guzman 2009-08-08 18:45:20 UTC
FYI the problem in M4 that prevented me from getting to test this issue is resolved in M5 and this problem is now resolved for me as well. M5 install went to completion with no problems.
Comment 22 remo strotkamp 2009-09-14 10:27:44 UTC
Could it be that this one is back in M7? Just did an install from dvd and am getting the exact same errors as initially reported.
Comment 23 Brunno Prego 2009-09-16 16:44:30 UTC
I had the exact same problem with M7 in VirtualBox.
Comment 24 Brunno Prego 2009-09-16 16:46:06 UTC
The problem came back in M7.
Comment 25 Mario Guzman 2009-09-16 16:50:46 UTC
Interesting... I installed M7 fine this time, but there is a difference: I installed the 64-bit M7, whereas my prior installs were the 32-bit version. Brunno, please mention which version you are using (32 or 64).
Comment 26 Brunno Prego 2009-09-16 16:54:38 UTC
*** Bug 521367 has been marked as a duplicate of this bug. ***
Comment 27 Brunno Prego 2009-09-16 16:56:41 UTC
I'm using 64bit version (M7-Build0268)
Comment 28 Piotrek Juzwiak 2009-09-16 18:27:57 UTC
I can't reproduce this. I made a clean install of M7, /boot separate, and the rest on LVM and everything is fine. Boots with no problems. I'm using 64 bit.
Comment 29 Brunno Prego 2009-09-16 20:00:28 UTC
Piotrek,

What machine are you on? I'm using a VirtualBox virtual machine inside a Windows Vista host [ :( ].
The host hardware is an AMD Turion 64 X2, and the VM replicates this processor, but the other components are virtualized.
I had used the same VM to install SUSE 11.1 and succeeded, but M7 doesn't finish the install, with a similar error message.
Comment 30 Piotrek Juzwiak 2009-09-16 20:52:40 UTC
I installed 11.2 M7 on my main machine. As above, /boot is separate and the rest is LVM, / and /var on one lvm group and /home on the other lvm group. And it boots fine. I am not using any virtual machines.
Comment 31 Brunno Prego 2009-09-17 01:35:38 UTC
How are your partitions formatted? I made another try over an installed SUSE 11.1 inside a VirtualBox VM and it works. I noticed that the difference is that in 11.2 the partitions were formatted with ext4 and in 11.1 with ext3.
Comment 32 remo strotkamp 2009-09-17 06:40:24 UTC
My system has the following setup:

2 disks of different sizes:
/boot on raid1 (sda1 and sdb1) (ext4)
/ on raid1 (sda2 and sdb2) LVMed and ext4 
swap (sdb3)
currently unused sdb4 with LVM 

and I get the exact same error messages as the original poster, just with different LVM names of course:
mkdir: cannot create directory `/dev/mapper': File exists
mknod: `/dev/mapper/control': File exists

(...)
Waiting for device /dev/system/factory to appear: ..............Could not find
/dev/system/factory.
Want me to fall back to /dev/system/factory? (Y/n)


As a side note, during the original install something went wrong and my /boot didn't have the initrd, so it was utterly unbootable of course. I used rescue mode
and chroot to mkinitrd a new one. It always failed in 72-block.sh with a /dev/mdXYZ
unhandled error... (I made a little change to return raid1 in the md* cases)...
Comment 33 Piotrek Juzwiak 2009-09-17 07:02:48 UTC
You guys are confusing this bug with this one: https://bugzilla.novell.com/show_bug.cgi?id=525237

If you install the system on LVM WITHOUT any RAID it will install fine, BUT if you create the LVM group on RAID(x) then it WILL fail.

You should report this to the one I mentioned above.
Comment 34 Forgotten User gRveQ1K55E 2009-09-27 09:07:03 UTC
Well, I DID report there everything I stumbled over while tracing it, but it's closed now; the comment was:

  -------  Comment #25 From pgnet Dev  2009-09-26 12:13:18 MDT   (-) -------

clearing queue
Comment 35 Xin Wei Hu 2009-09-27 09:13:37 UTC
I'm watching both of these bugs anyway.
An updated version of lvm2 has already been submitted. Let's see if it works for you this time ;)
Comment 36 Forgotten User gRveQ1K55E 2009-09-27 10:52:48 UTC
Did a "zypper dup; zypper ve"

booted to live 11.2 kde
started installation (only to get VGs)

mounted disk system

linux:/ # mount
/dev/loop0 on / type defaults (rw,0)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
debugfs on /sys/kernel/debug type debugfs (rw)
udev on /dev type tmpfs (rw)
devpts on /dev/pts type devpts (rw,mode=0620,gid=5)
rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
/dev/md0 on /boot type ext2 (rw)
/dev/mapper/vg00-root on /mnt type xfs (rw)
/dev/mapper/vg00-admin on /mnt type reiserfs (rw)
/dev/mapper/vg00-root on /mnt type xfs (rw)
/dev/mapper/vg00-samba on /mnt/samba type xfs (rw)
/dev/mapper/vg00-space on /mnt/space type reiserfs (rw)
/dev/mapper/vg00-srv on /mnt/srv type xfs (rw)
/dev/mapper/vg00-tmp on /mnt/tmp type xfs (rw)
/dev/mapper/vg00-usr on /mnt/usr type xfs (rw)
/dev/mapper/vg00-usr_local on /mnt/usr/local type xfs (rw)
/dev/mapper/vg00-usr_share on /mnt/usr/share type xfs (rw)
/dev/mapper/vg00-usr_src on /mnt/usr/src type ext3 (rw)
/dev/mapper/vg00-var on /mnt/var type xfs (rw)
/dev/mapper/vg00-var_lib on /mnt/var/lib type xfs (rw)
/dev/mapper/vg00-var_log on /mnt/var/log type xfs (rw)
proc on /mnt/proc type proc (rw)
devpts on /mnt/dev/pts type devpts (rw)
sysfs on /mnt/sys type sysfs (rw)
/dev/md0 on /mnt/boot type ext2 (rw)

linux:/ # cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md127 : active raid5 sda3[0] sdd3[3] sdc3[2] sdb3[1]
      462117504 blocks level 5, 128k chunk, algorithm 2 [4/4] [UUUU]

md0 : active raid1 sda2[0] sdd2[3] sdc2[2] sdb2[1]
      104320 blocks [4/4] [UUUU]

unused devices: <none>


(I'm puzzled by md127 ... it should be just md1; it was previously ...)
chrooted in from live 11.2

updated system
 "zypper dup; zypper ve"

mkinitrd
2009-09-27 14:49:07 WARNING: GRUB::GrubDev2UnixDev: No partition found for /dev/disk/by-id/ata-SAMSUNG_SP2014N_S088J10L405647 with 2.
2009-09-27 14:49:07 WARNING: GRUB::GrubDev2UnixDev: No partition found for /dev/disk/by-id/ata-SAMSUNG_SP2014N_S088J10L405647 with 2.
2009-09-27 14:49:07 WARNING: GRUB::GrubDev2UnixDev: No partition found for /dev/disk/by-id/ata-SAMSUNG_SP2014N_S088J10L405647 with 2.
2009-09-27 14:49:07 WARNING: GRUB::GrubDev2UnixDev: No partition found for /dev/disk/by-id/ata-SAMSUNG_SP2014N_S088J10L405647 with 2.
2009-09-27 14:49:07 WARNING: GRUB::GrubDev2UnixDev: No partition found for /dev/disk/by-id/ata-SAMSUNG_SP2014N_S088J10L405647 with 2.

linux:~ # mkinitrd -d /dev/vg00/root
2009-09-27 14:51:27 WARNING: GRUB::GrubDev2UnixDev: No partition found for /dev/disk/by-id/ata-SAMSUNG_SP2014N_S088J10L405647 with 2.
2009-09-27 14:51:27 WARNING: GRUB::GrubDev2UnixDev: No partition found for /dev/disk/by-id/ata-SAMSUNG_SP2014N_S088J10L405647 with 2.
2009-09-27 14:51:27 WARNING: GRUB::GrubDev2UnixDev: No partition found for /dev/disk/by-id/ata-SAMSUNG_SP2014N_S088J10L405647 with 2.
2009-09-27 14:51:27 WARNING: GRUB::GrubDev2UnixDev: No partition found for /dev/disk/by-id/ata-SAMSUNG_SP2014N_S088J10L405647 with 2.
2009-09-27 14:51:27 WARNING: GRUB::GrubDev2UnixDev: No partition found for /dev/disk/by-id/ata-SAMSUNG_SP2014N_S088J10L405647 with 2.
linux:~ #


For me it works less than before now ...

Any suggestions?
Comment 37 Forgotten User gRveQ1K55E 2009-09-27 10:55:03 UTC
P.S.

linux:~ # ls /dev/disk/by-id/ata-SAMSUNG_SP2014N_S088J10L405647*
/dev/disk/by-id/ata-SAMSUNG_SP2014N_S088J10L405647        /dev/disk/by-id/ata-SAMSUNG_SP2014N_S088J10L405647-part3
/dev/disk/by-id/ata-SAMSUNG_SP2014N_S088J10L405647-part1  /dev/disk/by-id/ata-SAMSUNG_SP2014N_S088J10L405647-part4
/dev/disk/by-id/ata-SAMSUNG_SP2014N_S088J10L405647-part2
linux:~ #
Comment 38 Forgotten User gRveQ1K55E 2009-09-27 11:05:01 UTC
PPS:

[    1.618450] ata1.00: ATA-7: SAMSUNG SP2014N, VC100-41, max UDMA/100
[    1.659509] scsi 0:0:0:0: Direct-Access     ATA      SAMSUNG SP2014N  VC10 PQ: 0 ANSI: 5
[    1.659753] sd 0:0:0:0: [sda] 390721968 512-byte logical blocks: (200 GB/186 GiB)
[    1.659806] sd 0:0:0:0: [sda] Write Protect is off
[    1.659810] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
[    1.659838] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[    1.659982]  sda: sda1 sda2 sda3 sda4 < >


linux:/ # sfdisk -l /dev/sda

Disk /dev/sda: 24321 cylinders, 255 heads, 63 sectors/track
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
/dev/sda1          0+    261     262-   2104483+  82  Linux swap / Solaris
/dev/sda2   *    262     274      13     104422+  fd  Linux raid autodetect
/dev/sda3        275   19451   19177  154039252+  fd  Linux raid autodetect
/dev/sda4      19452   24320    4869   39110242+   f  W95 Ext'd (LBA)
linux:/ #
Comment 39 Forgotten User gRveQ1K55E 2009-09-29 09:04:05 UTC
/lib/mkinitrd/setup/72-block.sh

++ '[' '' ']'                                                   
++ '[' '' -a '!' '' ']'                                         
++ for bd in '$blockdev'                                        
++ case $bd in                                                  
++ update_blockdev /dev/md1                                     
++ local curblockdev=/dev/md1                                   
++ '[' /dev/md1 ']'                                             
++ '[' /dev/md1 ']'                                             
++ blockmajor=-1                                                
++ blockminor=-1                                                
++ '[' -e /dev/md1 ']'                                          
+++ devnumber /dev/md1                                          
++++ ls -lL /dev/md1                                            
+++ set -- brw-rw---- 1 root disk 9, 1 Sep 27 15:03 /dev/md1    
+++ mkdevn 9 1                                                  
+++ local major=9 minor=1                                       
+++ echo 9437185                                                
++ blockdevn=9437185                                            
+++ devmajor 9437185                                            
+++ local devn=9437185                                          
+++ echo 9                                                      
++ blockmajor=9                                                 
++ '[' '!' 9 ']'                                                
+++ devminor 9437185                                            
+++ local devn=9437185                                          
+++ echo 1                                                      
++ blockminor=1                                                 
+++ block_driver 9                                              
+++ sed -n '/^Block devices:/{n;: n;s/^[ ]*9 \(.*\)/\1/p;n;b n}'
++ blockdriver=md                                               
++ '[' '!' md ']'                                               
++ '[' md = device-mapper ']'                                   
++ false                                                        
+++ get_devmodule md1                                           
++++ echo md1                                                   
++++ sed 's./.!.g'                                              
+++ local blkdev=md1                                            
+++ '[' '!' -d /sys/block/md1 ']'                               
+++ case "$blkdev" in                                           
+++ '[' '!' -d /sys/block/md1/device ']'                        
+++ echo 'Device md1 not handled'                               
Device md1 not handled                                          
+++ return 1                                                    
++ curmodule=                                                   
++ '[' 1 -eq 0 ']'                                              
++ return 1                                                     
+ '[' 1 -ne 0 ']'                                               
+ oops 1 'Script /lib/mkinitrd/setup/72-block.sh failed!'       
+ echo 'Script /lib/mkinitrd/setup/72-block.sh failed!'         
Script /lib/mkinitrd/setup/72-block.sh failed!                  
+ cleanup                                                       
+ rm -f /dev/shm/mkinitramfs.t9UnLx/initrd /dev/shm/mkinitramfs.t9UnLx/initrd.gz
+ '[' -d /dev/shm/mkinitramfs.t9UnLx/mnt ']'                                    
+ rm -rf /dev/shm/mkinitramfs.t9UnLx/mnt                                        
+ initrd_bins=()                                                                
+ exit_code=1                                                                   
+ exit 1


Well:   +++ '[' '!' -d /sys/block/md1/device ']'
/sys/block/md1 does exist, but not /sys/block/md1/device

By the way, I updated to the newest packages in Factory just minutes ago.

linux:/ # rpm -qf `find /lib/mkinitrd/ | grep -v ~ ` | grep -v "not owned by any package" | sort | uniq
bootsplash-3.3-146.110.x86_64
cifs-mount-3.4.1-1.5.x86_64
cryptsetup-1.0.7-6.1.x86_64
device-mapper-1.02.31-7.6.x86_64
dmraid-1.0.0.rc15-7.2.x86_64
kpartx-0.4.8-43.2.x86_64
lvm2-2.02.45-8.1.x86_64
mdadm-3.0-21.4.x86_64
mkinitrd-2.5.10-3.3.x86_64
multipath-tools-0.4.8-43.2.x86_64
nfs-client-1.1.3-19.17.x86_64
splashy-0.3.13-2.21.x86_64
suspend-0.80.20081103-2.17.x86_64
sysvinit-2.86-213.3.x86_64
Comment 40 Forgotten User gRveQ1K55E 2009-09-29 10:57:31 UTC
It would be nice to know whether this is intentional or not ...

linux:/lib/mkinitrd/scripts # diff setup-lvm2.sh setup-dm.sh
linux:/lib/mkinitrd/scripts # rpm -qf setup-lvm2.sh setup-dm.sh
lvm2-2.02.45-8.1.x86_64
device-mapper-1.02.31-7.6.x86_64


This is on my production system:
mail:/lib/mkinitrd/scripts # rpm -qf  setup-lvm2.sh setup-dm.sh
lvm2-2.02.39-8.8
device-mapper-1.02.27-7.1

And it's different.

Doing a reboot now ...
Comment 41 Forgotten User gRveQ1K55E 2009-09-29 13:01:31 UTC
Just when you get a workaround and think you will get along ...

I had found a workaround for the initrd not building: just do "mkinitrd -A". Then it will not go through all the scripts to figure out which modules to use, but just takes every module it finds.

A little crude, but OK for a workaround on a test system.


Then I booted my system and guess what ... "waiting for /dev/vg00/root to appear ..."

I started the console and noticed, to my liking, that mdadm and vgchange were again in the initrd. So I tried to mount /dev/sda2 since it is one of my /boot devices in a RAID1.

It wouldn't mount, and I found out that it is not sda but hda (it's all SATA except this one drive, which is PATA).

OK, nonetheless I used mdadm with -A --scan and got my RAIDs built. Why md127 appears again this time instead of md1 is beyond me.

I used "vgchange -a y" and all LVs appeared, including /dev/vg00/root.

As I had all the older scripts from mkinitrd (the ones where lvm and dm are different) I copied the lvm boot script to /boot, unmounted /dev/hda2 and tried to boot by running "init".

Then the /dev/vg00 devices went away and I ended up where I started: "waiting for /dev/vg00/root to appear ..."

Any further ideas, or at least, which bug should I report this to now?

It seems to me that the whole initrd is broken as long as it has to deal with MD and LVM in some way. Perhaps it's even something in udev or the kernel, since we are back to /dev/hda now.
Comment 42 Stephan Kulow 2009-09-29 19:35:29 UTC
Many users complain about this, so let's make this a ship stopper.
Comment 43 Xin Wei Hu 2009-09-30 03:57:33 UTC
(In reply to comment #41)
> Ok nonetheless I used mdadm with -A --scan and got my RAIDs build. Why this
> time again the md127 instead of md1 appears is beyond me.
md127 is the auto-assigned device name here. So my best guess is that mkinitrd doesn't include the MD conf file in the initrd file.
> I used "vgchange -a y" and all LVs appear, including /dev/vg00/root.
> As I had all the older scripts from mkinitrd (the ones weher lvm and dm is
> different) I copied the lvm boot script to /boot, umounted /dev/hda2 and tried
> to boot by usning "init".
> Then the devices /dev/vg00 went away and I ended where I started: "waiting for
> /dev/vg00/root to appear ..."
> Any further notions, at least, which bug I should report this too now?
> Seems to me that the whole initrd is broken as long it hast to deal with MD and
> LVM in some way. Perhaps even something in udev or kernel since we are back to
> /dev/hda now.

Milan,
  Any idea about this ?
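[Editor's note: the guess in comment 43 matches mdadm behavior — without an mdadm.conf in the initrd, arrays are assembled under auto-assigned names counting down from md127. A sketch of generating the config; the temp path is illustrative (the real target is /etc/mdadm.conf), and the ARRAY lines that mdadm --detail --scan emits depend on the machine, so the step is guarded here.]

```shell
# Generate an mdadm.conf so assembled arrays keep their stable names
# (md0/md1) instead of auto-assigned ones such as md127.
conf=/tmp/mdadm.conf.example   # illustrative; the real target is /etc/mdadm.conf
echo 'DEVICE partitions' > "$conf"
# On a machine with arrays this appends ARRAY lines with UUIDs; it is a
# harmless no-op where mdadm (or root privilege) is absent.
if command -v mdadm >/dev/null 2>&1; then
    mdadm --detail --scan >> "$conf" 2>/dev/null || true
fi
cat "$conf"
```

After writing the real /etc/mdadm.conf, the initrd has to be rebuilt so the file is packed in.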
Comment 44 Forgotten User gRveQ1K55E 2009-09-30 05:02:23 UTC
That could be. I will check. Before, both MDs were renamed (md0 to md128) and there was no mdadm.conf. I didn't have a look last time. I just need a few hours.
Comment 45 Forgotten User gRveQ1K55E 2009-10-01 04:50:48 UTC
Still there is no mdadm.conf in /etc of the initrd.
Comment 46 patrick shanahan 2009-10-01 16:01:25 UTC
I have the same problem with boot failing because the lvm partition is not recognized. It started when updating (zdup) M7->M8.

I'm skirting the issue by commenting out the lvm entry in /etc/fstab before boot, reverting it afterwards and doing a manual mount.

But my lvm partition is only used to store photographs and is not essential to booting or operation other than the web display of my photo album.
Comment 47 Piotrek Juzwiak 2009-10-02 06:52:48 UTC
I cannot reproduce your problems, guys. I used M8 64-bit; I deleted every partition I had, including LVM groups and logical volumes. Then I created a separate /boot (normal partition, 0x83 Linux type), then a system LVM group and a home LVM group. On the system LVM group I created two logical volumes, / and /var, while on the home LVM group I made one logical volume, /home.

The system LVM group consists of two physical disks (whole-disk partitions of LVM type) and the home LVM group consists of a third physical disk.

I CAN NOT REPRODUCE IT, EVERYTHING WENT FINE.

Are you guys putting /boot on the LVM? IIRC it is not possible to have /boot on LVM and still be able to boot.
Comment 48 Milan Vančura 2009-10-02 09:43:32 UTC
As far as I can see, the problem is in MD array detection (in Anniyka's environment). We can see that even though there are just md0 and md127 devices (as mdstat in comment #36 says), 72-block.sh gets '/dev/md1' as the value of $blockdev. This could be the result of the lvm2.sh script (which asks lvdisplay about the devices used for lvm2) or of md.sh, which rewrites $blockdev. More probably lvdisplay is what shows the wrong device - check it by running it manually and, if I'm right, check the lvm2 configuration on your machine, Anniyka.

Even though I can't be 100% sure, I think this is a problem of one machine which is in an unknown state after many attempts to set it up.
Comment 49 Forgotten User gRveQ1K55E 2009-10-02 10:34:13 UTC
No, but /boot on md.

Well, as said before:   +++ '[' '!' -d /sys/block/md1/device ']'
/sys/block/md1 does exist, but not /sys/block/md1/device

There is a directory /sys/block/md1
There is no subdirectory device

(I'm on the system on which I CREATE the initrd now.)

Here I do have md0 and md1.

If I do a "mkinitrd -A" it works.


Ok, booting with that initrd:

In the initrd itself there is no mdadm.conf.
After booting the initrd there is an md0 and an md127.
The MDs are not activated.
Therefore no LVs.

After manual activation of the MDs and manual activation of the VGs/LVs, all devices are present in /dev (I can mount the devices manually).
After just doing "./init" to start the system, the VG disappears.
Comment 50 Forgotten User gRveQ1K55E 2009-10-02 10:41:51 UTC
By the way, the only adjustments I had made to track down the error were overwritten by the packages from "zypper dup". So it's as clean as possible.

Steps:

Booted from Live 11.2 KDE
Mounted the system from harddisk to /mnt
Copied /dev to /mnt/dev
Mounted proc,sys,devpts
chrooted into /mnt

Updated packages from factory-snapshot, later factory
Ran mkinitrd
Ran into errors
Tried several "set -x" runs and straces
Got a workaround: "mkinitrd -A"
Booted the system with this initrd
Ran into errors

Reported every step

Will check M8 today

Will report ;)
Comment 51 Forgotten User gRveQ1K55E 2009-10-02 10:47:40 UTC
I forgot:

Why do I have a /dev/hda in my initrd, while in 11.1 and live 11.2 I have only /dev/sda?
Comment 52 Ray Sutton 2009-10-03 19:06:49 UTC
I just installed M8 on LVM2 over software RAID1; the initrd problem was still encountered.
It reports errors creating the initrd (2 distinct errors):

1: cp cannot stat /etc/scsi_id.config

2: Script /lib/mkinitrd/setup/72-block.sh failed
   Device md1 not handled

This is identical to the error I got trying to install M7

Hardware config is a Core i7, 12 GB RAM, 2x500 GB disks as a mirrored pair.

partition 1 /dev/md0 ext4 for /boot 200mb
partition 2 /dev/md1 remainder of disk as physical volume
            /dev/sys/dom0 10G LVM2 partition ext4 for /
            /dev/sys/dom0swap 1G for swap

fixed 1 by copying the file from 11.1
fixed 2, or at least bypassed the problem, by adding

md*)
   result="dm-mod raid1"
   ;;

to the case statement in the get_devmodule subroutine

Regenerated the initrd; boot failed waiting for /dev/sys/dom0 to appear.

I believe the problem is that LVM setup and udev run before md setup has occurred,
hence the volume group is not accessible.
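[Editor's note: the ordering theory in comment 52 can be made concrete — inside the initrd, RAID assembly has to finish before LVM scanning, or the physical volumes backing the VG simply aren't there yet. A hedged sketch of the intended order; the function is illustrative, not the actual mkinitrd boot script, and it is only defined here, not run.]

```shell
# Intended initrd storage bring-up order; each step depends on the previous one.
boot_storage() {
    mdadm -A --scan      # 1. assemble MD arrays first (creates /dev/mdN)
    vgscan               # 2. then scan the md devices for PVs/VGs
    vgchange -a y        # 3. then activate LVs so /dev/<vg>/<lv> appears
}
# Running it is only meaningful inside a rescue/initrd environment.
```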
Comment 53 Piotrek Juzwiak 2009-10-03 19:12:04 UTC
Ray, your installation scenario should apply to this BUG http://bugzilla.novell.com/show_bug.cgi?id=525237

and not HERE.

This bug here applies if you install on LVM which is NOT on any RAID, in which case I encountered no problems at all in M8, creating completely new LVM partitions and LVM groups.

For your partition scenario go to the BUG above.
Comment 54 Mario Guzman 2009-10-03 19:24:30 UTC
In case this is of any value to someone: I had this problem with LVM (but NO RAID) as I mentioned above. It has NOT been a problem for me in M5 through M8 so far, with everything on LVM volumes (except /boot of course).
Comment 55 Ray Sutton 2009-10-03 19:29:18 UTC
Re Comment #53

Tilt - I was on the wrong tab in my browser - sorry
Comment 56 Forgotten User gRveQ1K55E 2009-10-04 08:49:36 UTC
Piotrek.

Well, I have the same scenario as Ray and just wrote here because:

a) Bug 525237 was closed for "cleaning queue":  Comment #25
b) The other bug is specific to /boot on RAID 0; we have RAID 1
c) Comment #34 and Comment #35
d) "Unable to boot - LVM/mapper does not create devices" is the headline, and that's exactly what happens
e) The error described in the Description is exactly the one we stumble over

Conclusion:
There are (or may be) several related bugs in initrd/mkinitrd.
Comment 57 patrick shanahan 2009-10-14 04:13:14 UTC
I still have the same problem I reported in comment #46 and can boot only by following the workaround I detailed, commenting out the /etc/fstab entry for my lvm partition....

Note that I do not have a separate boot partition and do not have the root system on lvm, only a data partition.

This is definitely a STOPPER!
Comment 58 patrick shanahan 2009-10-14 04:15:02 UTC
addendum:

this report is for   
  openSUSE 11.2 RC 1 (x86_64)
  VERSION = 11.2
Comment 59 Xin Wei Hu 2009-10-19 06:14:49 UTC
(In reply to comment #58)
> addendum:
> 
> this report is for   
>   openSUSE 11.2 RC 1 (x86_64)
>   VERSION = 11.2

Would you also attach the version number of the lvm2 package?
I assume this has been fixed in the latest update already, so it would also be
great if you could give the latest package a try.

Thanks.
Comment 60 Stephan Kulow 2009-10-28 11:51:54 UTC
With no people testing updates and no new dups coming in, I claim this is no longer a ship stopper.
Comment 61 patrick shanahan 2009-10-28 16:57:06 UTC
lvm2-2.02.45-10.1

installing lvm2-2.02.45-19.1

will advise shortly

ps, didn't see comment #59 until now
Comment 62 patrick shanahan 2009-10-28 17:18:13 UTC
On my system lvm2-2.02.45-19.1 solves my boot problems.

tks,
Comment 63 Xin Wei Hu 2009-11-13 07:53:29 UTC
It's fixed with the latest lvm2 package.
Closing it as fixed.

Thank you all for reporting and testing this.
Comment 64 Xin Wei Hu 2009-11-13 07:55:08 UTC
*** Bug 530833 has been marked as a duplicate of this bug. ***