Bugzilla – Bug 119407
Cannot use disk image in XenU domain
Last modified: 2006-06-15 19:05:31 UTC
If I use a complete disk image in domainU (configured as disk = [ 'file:/var/lib/xen/images/domain3/hda,ioemu:hda,w' ]) I get this error message during boot of the domain:

...
Xen virtual console successfully installed as tty1
Event-channel device installed.
Neither TPM-BE Domain nor INIT domain!
xen_blk: Initialising virtual block device driver
xen_blk: Timeout connecting to device!
xen_net: Initialising virtual ethernet driver.
xen_net: Using grant tables.
xen_tpm_fr: Initialising the vTPM driver.
...

The image file exists and has non-zero size:

# ls -l /var/lib/xen/images/domain3/hda
-rw-r--r-- 1 root root 1073741825 Sep 22 11:01 /var/lib/xen/images/domain3/hda

As a result the virtual machine has no disk. This bug makes the Xen YaST module completely useless, because it uses a disk image for VM installation.
The "ioemu:" prefix is for machines with VT (Vanderpool, the hardware support for virtualization in Intel CPUs). Please drop "ioemu:" from the config and try again.
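For reference, a minimal sketch of the suggested change, reusing the disk line from the report above with only the "ioemu:" prefix removed (this assumes a paravirtualized, non-VT domain):

```python
# Original line from the report (HVM-style, needs VT hardware):
# disk = [ 'file:/var/lib/xen/images/domain3/hda,ioemu:hda,w' ]

# Suggested paravirtualized form, without the "ioemu:" prefix:
disk = [ 'file:/var/lib/xen/images/domain3/hda,hda,w' ]
```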
Please test.
Without the "ioemu" option the domain stops while detecting the disk. It prints:

Xen virtual console successfully installed as tty1
Event-channel device installed.
Neither TPM-BE Domain nor INIT domain!
xen_blk: Initialising virtual block device driver
Registering block device major 3
hda:

and then it hangs.
Seems to be fixed in current xen unstable tree.
Since Xen is called a technical preview in 10.0, this is not a blocker bug.
Hmm, looks like problems with hotplug. Should be fixed in latest xen update packages that should go out this week.
Can you please verify it works now?
I reported the same problem for Xen 2.0 in SL9.3, and it was fixed there. Maybe that patch can be reused?
What kernel/tools version is this? I don't have the "xen_blk: Initialising virtual block device driver" line in my log here. I'm running SL 10.0 with all your updates (=> xen-3.0_7608, kernel-xen-2.6.13-15.7); whole-disk virtual block devices work ok for me, no matter whether I name them xvd, hd or sd ...
*** Bug 144010 has been marked as a duplicate of this bug. ***
Still happens in Beta1 (xen-3.0_8513-1, kernel-xen-2.6.15_git12-6, i386-32bit), see the output in bug #144010 for more information.
Installed Beta2 today, I see it as well, investigating ...
Seems to be a problem in our kernel, the blkfront driver doesn't enter the "connected" state for some reason. Doesn't happen with a kernel built fresh from the "merge tree". Jan, any chance you have seen this before?
I've been looking into this for a bit, as I have two machines (a Dell and an AMD box, running i386 or x86_64) that hang at 'hda:' 9 times out of 10. I believe I've found the problem.

The blkfront driver calls xlvbd_add() before setting info->connected = BLKIF_STATE_CONNECTED and informing xenbus with xenbus_switch_state(). Calling xlvbd_add() results in an I/O request from efi_partition() before returning. That I/O request fails in blkif_queue_request() because info->connected is not set. read_cache_page(), which originated the I/O request, waits for the request to complete before returning, which never happens; thus xlvbd_add() never returns to let the driver set info->connected. HUNG!

Moving

info->connected = BLKIF_STATE_CONNECTED;
(void)xenbus_switch_state(info->xbdev, NULL, XenbusStateConnected);

before xlvbd_add() lets me bring up domU every time.
Thanks Ross, fix committed to CVS. Next kernel-of-the-day and beta3 should work ok.
Looks good in beta3. I opened the (duplicate) bug #144010, but I am reluctant (perhaps foolishly) to close this one, since I am not the reporter.