Bugzilla – Full Text Bug Listing
| Summary: | cannot burn DVD iso from NFS source | | |
|---|---|---|---|
| Product: | [openSUSE] SUSE Linux 10.1 | Reporter: | Forgotten User ZhJd0F0L3x <forgotten_ZhJd0F0L3x> |
| Component: | Basesystem | Assignee: | Vladimir Nadvornik <nadvornik> |
| Status: | RESOLVED FIXED | QA Contact: | E-mail List <qa-bugs> |
| Severity: | Major | | |
| Priority: | P5 - None | CC: | adrian.schroeter, aj, nfbrown |
| Version: | Beta 3 | | |
| Target Milestone: | --- | | |
| Hardware: | Other | | |
| OS: | Other | | |
| Whiteboard: | | | |
| Found By: | Component Test | Services Priority: | |
| Business Priority: | | Blocker: | --- |
| Marketing QA Status: | --- | IT Deployment: | --- |
| Bug Depends on: | | | |
| Bug Blocks: | 140732 | | |
| Attachments: | "strace growisofs....."; strace -f -o foo growisofs...; Proposed patch; Proposed patch to growisofs | | |
Description
Forgotten User ZhJd0F0L3x
2006-02-10 11:01:37 UTC
Created attachment 67550 [details]
"strace growisofs....."
What NFS server? Is it mounted as NFS v2? The error indicates that it's NFS v2, which cannot handle files larger than 2 GB. growisofs uses the correct calls according to your strace log.

What confuses me is that you say "copy the file to local dist". Did you copy it via NFS, or via other means?

(In reply to comment #2)
> What NFS server? Is it mounted as NFS v2?

machcd3, see the path; whatever autofs gave me.

> The error indicates that it's NFS v2 which cannot handle files larger than 2
> GB.
> growisofs uses the correct calls according to your strace log.
>
> What confuses me is that you say "copy the file to local dist". Did you copy
> it via NFS - or via other means?

IIRC it was via NFS. I saw the same error last week; I could burn with cdrecord-dvd directly from NFS. I am pretty sure you should be able to reproduce it easily. Danny will put the burner onto fix.suse.de, then you can test it there.

> or joe will do it :-)

Joe, please plug the DVD burner into fix.suse.de (hp nw8000 on my desk).

No need for that; I can do it myself since I now have all the details. Will test later...

Ok, can reproduce. No idea what goes wrong here; this is NFSv3. Henne, can somebody from your team look at this?

Let's assign to an NFS expert, since this only occurs with NFS.

*** Bug 156451 has been marked as a duplicate of this bug. ***

I don't see why this is failing. Using dd it works just fine, even with large reads of 0x2000000 bytes, which builtin_dd is doing. Please run strace with the -f option so I can see what the helper thread(s) are doing. Thanks!

Created attachment 75786 [details]
strace -f -o foo growisofs...
Ahhh... it's doing DIRECTIO, and the NFS code limits the size of these requests to 4096 pages, i.e. 16M. growisofs tries to do twice that.

So, what's the proper fix? Use 16 M only? Can we fix this in general? Btw., why does this only happen with large DVDs?

I'd first suggest to test this hypothesis. Please try running growisofs with

-use-the-force-luke=bufsize:16M

Concerning the proper fix, it's probably safe to just increase the limit in NFS directio. I've sent mail to the NFS list about this. Alternatively, we can change the default bufsize (on NFS) to 16M in growisofs, but that sounds like a hack to me. I don't know why that would happen on large DVDs only. Are you sure that is the case? I can reproduce it with smaller files as well. I think the "big file" error message somehow created the misperception that it was related to file size.

(In reply to comment #13)
> I'd first suggest to test this hypothesis. Please try running
> growisofs with
>
> -use-the-force-luke=bufsize:16M

Seems to work fine (not finished yet, but I have no reason to believe it won't).

> I don't know why that would happen on large DVDs only. Are you sure that
> is the case?

_I_ never said that :-). It is just that I usually do not burn less than some GB to DVD, but use CDs for that.

> I can reproduce it with smaller files as well.

I will try that once the first burn is ready.

```
root@fix:~> l /mounts/machcd2/iso/SUSE-Linux-10.1-DVD9-Build_831.torrent
-rw-r--r-- 1 root root 647997 2006-03-30 03:37 /mounts/machcd2/iso/SUSE-Linux-10.1-DVD9-Build_831.torrent
root@fix:~> growisofs -dvd-compat -Z /dev/sr0=/mounts/machcd2/iso/SUSE-Linux-10.1-DVD9-Build_831.torrent
Executing 'builtin_dd if=/mounts/machcd2/iso/SUSE-Linux-10.1-DVD9-Build_831.torrent of=/dev/sr0 obs=32k seek=0'
:-( write failed: File too large
```

So smaller files also do not work.

Created attachment 77012 [details]
Proposed patch
Neil, does this patch look right?

Not completely. I understand that the reason for the limit is that atomic_t has a limited range on some platforms, though I'm not sure if that is true any more. Certainly SPARC used to be 24-bit, but now uses the full 32. There is a ->count which is an atomic_t which counts bytes, so we need to limit the number of bytes to what atomic_t holds. However, I suspect that is now the same as what size_t holds, and as we automatically have that limit, there is no need to impose one. And I cannot guess where (1<<24) comes from.

But beyond that, I am worried about

```
array_size = (page_count * sizeof(struct page *));
*pages = kmalloc(array_size, GFP_KERNEL);
```

The current limit already allows array_size to get to 16384 (on 32-bit platforms), which requires an order-3 allocation. Allowing it to get bigger than that would seem to be inviting a kmalloc failure, and I think I would rather have a fixed limit that has to be worked around in user space than an appearance of working, but a high probability of failure if memory is at all tight or fragmented.

It is worth noting that the limit is gone in 2.6.17-rc1 due to the conversion of the atomic_t's to ints protected by a spinlock. I don't know if any thought was given to the size blowout of array_size.

Oops, I thought that atomic_t was counting pages. atomic_t is not the same width as size_t: on x86-64, the former is 32-bit and the latter 64-bit. But I agree, maybe it's better to make growisofs use smaller chunks when doing its directio reads, so we avoid kmalloc failures.

Created attachment 77743 [details]
Proposed patch to growisofs
Reassigning to Adrian as the package maintainer.

I submitted a package.

I still get the "File too large" error with RC1.

Vladimir takes over the maintainership for the package. Thanks a lot!

Jan, can you reproduce it on final 10.1 or anywhere? It seems fixed to me.

Seems fixed to me too.

OK, closing the bug.