Bugzilla – Bug 121960
Core dumps in VMs after update from SuSE 9.3 with XEN 2 to SuSE 10.0 with XEN 3
Last modified: 2006-02-16 16:22:06 UTC
After updating a XEN server from SuSE 9.3 to SuSE 10.0, various programs inside the existing SuSE 9.3 VMs generate a core dump. The major offenders are hwscan and YaST, which both crash. The result is an unusable virtual machine: it starts with its inability to configure a network card, and without YaST the system is hard to configure at all.

/sbin/hwscan --silent --boot --fast --isapnp --pci --block --floppy --mouse
Segmentation fault (core dumped)

(gdb) bt
#0  0x4003e94f in read_udevinfo () from /lib/libhd.so.10
#1  0x4004bf49 in hd_scan_int () from /lib/libhd.so.10
#2  0x40040023 in hd_scan () from /lib/libhd.so.10
#3  0x4004167c in hd_list () from /lib/libhd.so.10
#4  0x08048da4 in ?? ()
#5  0x0804c050 in ?? ()
#6  0x00000019 in ?? ()
#7  0x00000001 in ?? ()
#8  0x00000000 in ?? ()
#9  0x00000001 in ?? ()
#10 0x00001728 in ?? ()
#11 0x00000001 in ?? ()
#12 0x00000001 in ?? ()
#13 0xbfc9f764 in ?? ()
#14 0x00000000 in ?? ()
#15 0xbfc9f6c8 in ?? ()
#16 0x0804962e in ?? ()
#17 0x0804bb00 in optind ()
#18 0x08049fa7 in _IO_stdin_used ()
#19 0x0804b640 in ?? ()
#20 0x00000000 in ?? ()
#21 0x00000000 in ?? ()
#22 0xbfc9f764 in ?? ()
#23 0x00000000 in ?? ()
#24 0xbfc9f764 in ?? ()
#25 0x00000000 in ?? ()
#26 0x00000000 in ?? ()
#27 0xbfc9f718 in ?? ()
#28 0x08049b2e in ?? ()
#29 0x0804bb00 in optind ()
#30 0x40016ff4 in ?? () from /lib/ld-linux.so.2
#31 0x00000001 in ?? ()
#32 0x40017518 in ?? ()
#33 0x00000000 in ?? ()
#34 0x0804b574 in ?? ()
#35 0xbfc9f6f8 in ?? ()
#36 0x080489e1 in _init ()
Previous frame inner to this frame (corrupt stack?)

The system was a SuSE 9.3 32-bit minimal installation with XEN 2 on a Dell PowerEdge SC 1425 and was updated to SuSE 10.0 with XEN 3.
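For anyone trying to reproduce the backtrace above: core files are often disabled by default, so the limit has to be raised before triggering the crash. A minimal sketch using standard shell and gdb usage (these exact steps are not taken from the report):

```shell
# Enable core files in the domU shell, then reproduce the crash and load
# the resulting core in gdb to get a backtrace like the one above.
ulimit -c unlimited     # allow unlimited-size core files in this shell
ulimit -c               # confirm the new limit; should print: unlimited

# After hwscan segfaults and writes a core file:
#   gdb /sbin/hwscan core     # then type `bt` at the (gdb) prompt
```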
kurt, something for you?
Hmmm, does domain0 survive this? Does a 9.3 domU run stable except for the hwscan crash?
Both do survive - the domU is just kind of hard to administer ... no YaST, no YOU (YaST Online Update) ...
Hmmm, just tested with the latest 10.0 update packages on my machine:

root@g46: ~ [0]# /sbin/hwscan --silent --boot --fast --isapnp --pci --block --floppy --mouse
root@g46: ~ [0]# cat /etc/SuSE-release
SuSE Linux 9.3 (i586)
VERSION = 9.3
root@g46: ~ [0]# uname -a
Linux g46 2.6.13-15.7-xen #1 SMP Tue Nov 29 14:32:29 UTC 2005 i686 i686 i386 GNU/Linux
root@g46: ~ [0]#

So it does work here. Can you confirm it's fixed for you as well with the 10.0 updates?
(In reply to comment #4)
> Hmmm, just tested with the latest 10.0 update packages on my machine:
> [ worked for you ]

Sorry - but I can't confirm that it works here. I have the same problem as before:

skel:~ # /sbin/hwscan --silent --boot --fast --isapnp --pci --block --floppy --mouse
Segmentation fault
skel:~ # cat /etc/SuSE-release
SuSE Linux 9.3 (i586)
VERSION = 9.3
skel:~ # uname -a
Linux skel 2.6.13-15.7-xen #1 SMP Tue Nov 29 14:32:29 UTC 2005 i686 i686 i386 GNU/Linux
This doesn't sound like a supported configuration. SUSE 9.3 VMs just won't work (at least, not easily) on SUSE 10.0 with Xen 3.

If you don't upgrade the kernel (the one within the VM) to a Xen-3 compatible one (e.g., install the SUSE 10.0 kernel within the SUSE 9.3 VM), then it won't work, because the Xen 2->3 transition broke the ABI between the kernel and hypervisor. But if you do upgrade the kernel, we can't guarantee the 2.6.16-era kernel will work with the 2.6.13-era user-space tools in the VM (/sys has changed; /proc has changed; hotplug/udev have changed, etc.)

The only way to "fix" this is to maintain a Xen-3 compatible 2.6.13 kernel, but we aren't doing that. We're supporting SLES 9 (2.6.5) and SLES 10 / SL 10.0 (2.6.16) on Xen 3.
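For anyone who still needs the 9.3 userland to boot despite the crash, one stop-gap is to tolerate the segfault in whatever boot script invokes hwscan, so the rest of the init sequence continues. This is only an illustrative sketch - the wrapper function below is hypothetical, not a SUSE-provided script:

```shell
# tolerate_crash: run a command, but treat death-by-signal (exit status
# >= 128; a SIGSEGV typically yields 139) as non-fatal, so the calling
# boot script keeps going instead of aborting.
tolerate_crash() {
  "$@"
  rc=$?
  if [ "$rc" -ge 128 ]; then
    echo "warning: '$1' died with signal exit status $rc; continuing" >&2
    return 0
  fi
  return "$rc"
}

# Example, substituting for the failing call from the report:
#   tolerate_crash /sbin/hwscan --silent --boot --fast --isapnp --pci \
#       --block --floppy --mouse
```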