Bug 174795 - Yast2 LVM lacks multi-TB support
Summary: Yast2 LVM lacks multi-TB support
Status: VERIFIED DUPLICATE of bug 127896
Alias: None
Product: SUSE LINUX 10.0
Classification: openSUSE
Component: YaST2
Version: RC 4
Hardware: x86 Linux
Priority: P5 - None    Severity: Normal
Target Milestone: ---
Assignee: Thomas Fehr
QA Contact: Klaus Kämpf
URL:
Whiteboard:
Keywords:
Depends on:
Blocks:
 
Reported: 2006-05-11 10:15 UTC by Jan Engelhardt
Modified: 2006-07-14 14:05 UTC
CC List: 1 user

See Also:
Found By: Beta-Customer
Services Priority:
Business Priority:
Blocker: ---
Marketing QA Status: ---
IT Deployment: ---


Attachments
/var/log/YaST2 directory (46.99 KB, application/x-bzip)
2006-05-15 11:48 UTC, Jan Engelhardt

Description Jan Engelhardt 2006-05-11 10:15:38 UTC
During the initial install... (80x25!)

When adding four 950 GB drives to an LVM volume group (default name "system"), the size counter mis-displays the total size in the dialog named "Logical Volume Manager: Physical Volume Setup".

After adding the 1st disk: "949.9 GB" (ok)
2nd disk: "1.8 TB" (ok)
3rd disk: "-1275928."
4th disk: "-303136.0"

The next dialog, "Logical Volume Manager: Logical Volumes", has it right, saying "Available size: 3.7 TB".
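The negative numbers above are consistent with the running total being kept in a signed 32-bit counter of kilobytes, which wraps around once the sum passes 2 TiB (2^31 KiB). This is an inference from the displayed values, not a confirmed reading of the YaST code; a minimal Python sketch of the suspected wraparound (assuming 950 GiB disks and kB units):

```python
# Sketch: reproduce the negative sizes, assuming YaST tracked the total
# in a signed 32-bit kilobyte counter (an assumption, not confirmed here).

def wrap_int32(n: int) -> int:
    """Interpret n as a two's-complement signed 32-bit integer."""
    n &= 0xFFFFFFFF
    return n - (1 << 32) if n >= (1 << 31) else n

DISK_KB = 950 * 1024 * 1024  # one ~950 GiB disk, in KiB

for disks in range(1, 5):
    total_kb = disks * DISK_KB
    shown_mb = wrap_int32(total_kb) / 1024  # what a kB counter displays, in MB
    # 1 and 2 disks stay below 2^31 KiB and display fine; 3 disks give
    # roughly -1275904 MB and 4 disks roughly -303104 MB, close to the
    # reported "-1275928." and "-303136.0" (the report's disks are 949.9 GB).
    print(disks, shown_mb)
```

With exactly 950 GiB per disk the wrapped values land within a few dozen MB of the figures in the report, which is why the 32-bit-kB hypothesis fits.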
Comment 1 Jan Engelhardt 2006-05-11 10:28:34 UTC
The second dialog does not have it right everywhere, though. After adding an LV for /, the table lists:

Device /dev/system/root
Mount /
Vol. Grp. system
Size -314572.0 MB

The "available size" counter above the table seems to remain correct. It displays 11.1 GB, but I suppose that's due to the usual last-cylinder rounding.

You can test this using VMware by creating big disks without allocating the host's disk space up front. That way the images stay small on the host even though the guest sees them as really big drives.

The next dialog, the "Expert Partitioner", also gets it wrong:
/dev/system      -303136.0 MB . LVM2 System
/dev/system/root -314572.0 MB F LV

There's also a dialog telling me:
"With your current setup, your SUSE LINUX installation will encounter problems when booting, because you have no "boot" partition and your "root" partition is an LVM logical volume. This does not work."
Why shouldn't this work? After all, RAID and LVM setup is (or should be) done in the initrd.

(I did then create a separate /boot partition alongside the LVM group.)

The next problem is that in the shell on tty2, the LV is not listed in `df`, and `df /mnt` shows tmpfs! This can't seriously work out.
Comment 2 Michael Gross 2006-05-11 12:14:24 UTC
Please attach /var/log/YaST2 as tarball.
Comment 3 Jan Engelhardt 2006-05-15 11:48:20 UTC
Created attachment 83429 [details]
/var/log/YaST2 directory
Comment 4 Thomas Fehr 2006-05-15 16:33:21 UTC
Hi Ihno, to be able to verify my fixes, I need access to such a system.
Comment 5 Jan Engelhardt 2006-05-15 16:36:51 UTC
Or use VMware 5 (if you have it) and create disks with the "allocate all disk
space now" option disabled. That way, a 950 GB disk only takes around 100 MB
on the host when empty (and around 800 MB when SUSE minimal is installed).
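The thin-provisioning trick is not VMware-specific. On plain Linux, a sparse file gives the same effect: a huge apparent disk that occupies almost no real space until written to (a generic sketch using standard coreutils, not tied to any particular hypervisor):

```shell
# Create a 950 GiB disk image as a sparse file: the apparent size is huge,
# but the filesystem only allocates blocks when data is actually written.
truncate -s 950G bigdisk.img

ls -lh bigdisk.img   # apparent size: 950G
du -h bigdisk.img    # actual allocation: close to zero
```

Such an image can then be attached to an emulator as a virtual drive to exercise multi-TB code paths on a small host.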
Comment 6 Thomas Fehr 2006-05-15 16:55:36 UTC
I have no VMware license.
Comment 7 Thomas Fehr 2006-05-15 17:15:07 UTC
Now I see that the bug is filed against 10.0. Is this a typo and you really
meant 10.1, or is this bug really against 10.0?
Comment 8 Jan Engelhardt 2006-05-15 18:56:47 UTC
Really 10.0. My 10.1 box has not arrived yet, but I already had purged the last beta ISOs. (So I could only test on 10.0.)
Comment 10 Thomas Fehr 2006-05-16 09:25:14 UTC
For SL 10.0 this is a known problem that is fixed in 10.1.

*** This bug has been marked as a duplicate of 127896 ***
Comment 11 Ihno Krumreich 2006-07-14 14:05:55 UTC
Closed.