Bug 149905

Summary: cannot burn DVD iso from NFS source
Product: [openSUSE] SUSE Linux 10.1
Reporter: Forgotten User ZhJd0F0L3x <forgotten_ZhJd0F0L3x>
Component: Basesystem
Assignee: Vladimir Nadvornik <nadvornik>
Status: RESOLVED FIXED
QA Contact: E-mail List <qa-bugs>
Severity: Major    
Priority: P5 - None
CC: adrian.schroeter, aj, nfbrown
Version: Beta 3   
Target Milestone: ---   
Hardware: Other   
OS: Other   
Whiteboard:
Found By: Component Test Services
Business Priority:
Blocker: ---
Marketing QA Status: ---
IT Deployment: ---
Bug Depends on:    
Bug Blocks: 140732    
Attachments:
- "strace growisofs....."
- strace -f -o foo growisofs...
- Proposed patch
- Proposed patch to growisofs

Description Forgotten User ZhJd0F0L3x 2006-02-10 11:01:37 UTC
I cannot burn a DVD ISO from an NFS source:

root@strolchi:/mounts/machcd3/iso> growisofs -dvd-compat -Z /dev/sr0=SLES-9-NLD-SP-3-XDVD-i386-RC4.iso
WARNING: /dev/sr0 already carries isofs!
About to execute 'builtin_dd if=SLES-9-NLD-SP-3-XDVD-i386-RC4.iso of=/dev/sr0 obs=32k seek=0'
:-( write failed: File too large

It works fine if I first copy the file to the local disk.
I will attach the strace output file.
Comment 1 Forgotten User ZhJd0F0L3x 2006-02-10 11:04:40 UTC
Created attachment 67550 [details]
"strace growisofs....."
Comment 2 Andreas Jaeger 2006-03-24 08:38:51 UTC
What NFS server?  Is it mounted as NFS v2?

The error indicates that it's NFS v2 which cannot handle files larger than 2 GB.
growisofs uses the correct calls according to your strace log.

What confuses me is that you say "copy the file to local dist".  Did you copy it via NFS - or via other means?
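As background for the NFSv2 hypothesis raised here (an editor-added side note, not part of the original report; the hypothesis is ruled out later in the thread, which shows NFSv3): NFSv2 describes file sizes and offsets with 32-bit fields, so in practice files of 2 GiB and beyond cannot be addressed, which would match the symptom for a DVD-sized ISO.

```python
# NFSv2 (RFC 1094) uses 32-bit file sizes and offsets; in practice the
# usable limit is 2 GiB, so a DVD-sized image cannot be fully addressed.
NFSV2_LIMIT = 2**31          # 2 GiB, the practical NFSv2 file-size ceiling
DVD_ISO = 4_700_000_000      # approximate size of a single-layer DVD image

assert NFSV2_LIMIT == 0x80000000
assert DVD_ISO > NFSV2_LIMIT  # the ISO exceeds what NFSv2 can address
```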
Comment 3 Forgotten User ZhJd0F0L3x 2006-03-24 08:45:21 UTC
(In reply to comment #2)
> What NFS server?  Is it mounted as NFS v2?

machcd3, see the path.
Whatever autofs gave me.

> The error indicates that it's NFS v2 which cannot handle files larger than 2
> GB.
> growisofs uses the correct calls according to your strace log.
> 
> What confuses me is that you say "copy the file to local dist".  Did you copy
> it via NFS - or via other means?

IIRC it was via NFS.

I saw the same error last week, but I could burn with cdrecord-dvd directly from NFS.

I am pretty sure you should be able to reproduce easily.

Danny will put the burner onto fix.suse.de, then you can test it there.
Comment 4 Forgotten User ZhJd0F0L3x 2006-03-24 09:00:12 UTC
Or Joe will do it :-)

Joe, please plug the DVD burner into fix.suse.de (hp nw8000 on my desk).
Comment 5 Andreas Jaeger 2006-03-24 09:10:13 UTC
No need for that - I can do it myself since I now have all the details, will test later...
Comment 6 Andreas Jaeger 2006-03-24 11:30:11 UTC
Ok, can reproduce.

No idea what goes wrong here - this is NFSv3.

Henne, can somebody from your team look at this?
Comment 7 Andreas Jaeger 2006-03-24 19:02:39 UTC
Let's assign to NFS expert since this only occurs with NFS.
Comment 8 Andreas Jaeger 2006-03-24 19:03:16 UTC
*** Bug 156451 has been marked as a duplicate of this bug. ***
Comment 9 Olaf Kirch 2006-03-30 13:10:26 UTC
I don't see why this is failing. Using dd it works just fine, even with the large reads of 0x2000000 bytes which builtin_dd is doing.

Please run strace with the -f option so I can see what the helper
thread(s) are doing. Thanks!
Comment 10 Forgotten User ZhJd0F0L3x 2006-03-30 14:24:32 UTC
Created attachment 75786 [details]
strace -f -o foo growisofs...
Comment 11 Olaf Kirch 2006-03-30 15:09:19 UTC
Ahhh... it's doing DIRECTIO, and the NFS code limits the
size of these requests to 4096 pages, i.e. 16M. Growisofs
tries to do twice that.
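A quick editor's check of the numbers in this comment (illustrative, assuming 4 KiB pages): the directio cap of 4096 pages works out to 16 MiB, while the 0x2000000-byte reads visible in the strace from comment 9 are exactly twice that.

```python
# Verify the arithmetic behind the diagnosis (assumes 4 KiB pages).
PAGE_SIZE = 4096
NFS_DIRECTIO_MAX_PAGES = 4096

cap = PAGE_SIZE * NFS_DIRECTIO_MAX_PAGES
request = 0x2000000  # read size seen in the strace (comment 9)

assert cap == 16 * 1024 * 1024  # the cap is 16 MiB
assert request == 2 * cap       # growisofs asks for twice the limit
```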
Comment 12 Andreas Jaeger 2006-03-30 15:23:18 UTC
So, what's the proper fix?  Use 16 M only? Can we fix this in general?

Btw. why does this only happen with large DVDs? 
Comment 13 Olaf Kirch 2006-03-30 15:41:16 UTC
I'd first suggest testing this hypothesis. Please try running
growisofs with

-use-the-force-luke=bufsize:16M

Concerning the proper fix, it's probably safe to just increase the
limit in nfs directio. I've sent mail to the NFS list about this.
Alternatively, we can change the default bufsize (on NFS) to 16M
in growisofs, but that sounds like a hack to me.

I don't know why that would happen on large DVDs only. Are you sure that
is the case? I can reproduce it with smaller files as well. I think the
"big file" error message somehow created the misperception that it was
related to file size.
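An editor's note on where that misperception likely comes from (my reading, not stated in the thread): the ":-( write failed: File too large" text is simply the strerror string for errno EFBIG, which the kernel appears to reuse here for an oversized direct I/O request rather than for the file's actual size.

```python
import errno
import os

# "File too large" is the standard message for EFBIG; on Linux the
# errno value is 27. The wording suggests a file-size problem even
# when EFBIG is returned for a different reason, such as an oversized
# direct I/O request.
assert errno.EFBIG == 27
assert os.strerror(errno.EFBIG) == "File too large"
```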
Comment 14 Forgotten User ZhJd0F0L3x 2006-03-30 16:04:01 UTC
(In reply to comment #13)
> I'd first suggest to test this hypothesis. Please try running
> growisofs with
> 
> -use-the-force-luke=bufsize:16M

Seems to work fine (not finished yet, but I have no reason to believe it won't).

> I don't know why that would happen on large DVDs only. Are you sure that
> is the case?

_I_ never said that :-). It is just that I usually do not burn less than some GB to DVD, but use CDs for that.

> I can reproduce it with smaller files as well.

I will try that once the first burn is ready.
Comment 15 Forgotten User ZhJd0F0L3x 2006-03-30 19:22:36 UTC
root@fix:~> l /mounts/machcd2/iso/SUSE-Linux-10.1-DVD9-Build_831.torrent
-rw-r--r-- 1 root root 647997 2006-03-30 03:37 /mounts/machcd2/iso/SUSE-Linux-10.1-DVD9-Build_831.torrent
root@fix:~> growisofs -dvd-compat  -Z /dev/sr0=/mounts/machcd2/iso/SUSE-Linux-10.1-DVD9-Build_831.torrent
Executing 'builtin_dd if=/mounts/machcd2/iso/SUSE-Linux-10.1-DVD9-Build_831.torrent of=/dev/sr0 obs=32k seek=0'
:-( write failed: File too large

So smaller files do not work either.

Comment 16 Olaf Kirch 2006-04-06 19:48:15 UTC
Created attachment 77012 [details]
Proposed patch
Comment 17 Olaf Kirch 2006-04-06 19:49:21 UTC
Neil, does this patch look right?
Comment 18 Neil Brown 2006-04-07 01:15:30 UTC
Not completely.

I understand that the reason for the limit is that atomic_t has a limited range on some platforms, though I'm not sure if that is true any more. Certainly SPARC used to be 24-bit, but now uses the full 32.

There is a ->count field, an atomic_t, which counts bytes, so we need to limit the number of bytes to what an atomic_t holds. However, I suspect that is now the same as what size_t holds, and since we automatically have that limit, there is no need to impose one.

And I cannot guess where (1<<24) comes from.

But beyond that, I am worried about 
 	array_size = (page_count * sizeof(struct page *));
 	*pages = kmalloc(array_size, GFP_KERNEL);
The current limit already allows array_size to get to 16384 (on 32-bit platforms), which requires an order-3 allocation.  Allowing it to get bigger than that would seem to be inviting a kmalloc failure, and I would rather have a fixed limit that has to be worked around in user space than an appearance of working with a high probability of failure if memory is at all tight or fragmented.

It is worth noting that the limit is gone in 2.6.17-rc1 due to the conversion
of the atomic_t's to ints protected by a spinlock.  I don't know if any thought was given to the size blow-up of array_size.
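To make the allocation concern concrete (editor's arithmetic, not from the thread): the kmalloc'ed array of `struct page *` pointers grows linearly with the request size, and kmalloc needs physically contiguous memory, so larger requests mean higher-order allocations that fail more easily under fragmentation.

```python
PAGE_SIZE = 4096

def page_array_bytes(request_bytes, ptr_size):
    """Size of the kmalloc'ed 'struct page *' array covering a request."""
    pages = (request_bytes + PAGE_SIZE - 1) // PAGE_SIZE
    return pages * ptr_size

# At the existing 16 MiB cap the array is already 16 KiB with 4-byte
# pointers (a multi-page, physically contiguous allocation):
assert page_array_bytes(16 << 20, 4) == 16 * 1024
# A doubled 32 MiB request with 8-byte pointers needs a 64 KiB array,
# which is far more likely to fail when memory is tight or fragmented:
assert page_array_bytes(32 << 20, 8) == 64 * 1024
```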
Comment 19 Olaf Kirch 2006-04-11 10:40:44 UTC
Oops, I thought that atomic_t was counting pages.

atomic_t is not the same width as size_t: on x86-64, the former is 32-bit
and the latter 64-bit.

But I agree, maybe it's better to make growisofs use smaller chunks
when doing its directio reads, so we avoid kmalloc failures.
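A minimal sketch of the chunking idea being proposed (hypothetical, in Python rather than growisofs's C): split each oversized direct I/O read into pieces that stay at or below the 16 MiB cap, so no single request exceeds what the kernel will allocate for.

```python
CHUNK = 16 << 20  # stay at or below the NFS directio cap (16 MiB)

def read_chunked(read_at, offset, length):
    """Read `length` bytes starting at `offset` in cap-sized pieces.

    `read_at(offset, size)` is a stand-in for a pread-style call.
    """
    parts = []
    done = 0
    while done < length:
        n = min(CHUNK, length - done)
        parts.append(read_at(offset + done, n))
        done += n
    return b"".join(parts)
```

Used against a file descriptor, `read_at` would be `lambda off, n: os.pread(fd, n, off)`; each individual request then fits within the directio limit.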
Comment 20 Olaf Kirch 2006-04-11 10:41:27 UTC
Created attachment 77743 [details]
Proposed patch to growisofs
Comment 21 Olaf Kirch 2006-04-11 10:45:07 UTC
Reassigning to Adrian as the package maintainer.
Comment 22 Adrian Schröter 2006-04-12 09:49:03 UTC
I submitted a package.
Comment 23 Jan Karjalainen 2006-04-15 11:26:47 UTC
I still get the "File too large" error with RC1.
Comment 25 Adrian Schröter 2006-11-24 14:36:14 UTC
Vladimir takes over the maintainership of the package.

Thanks a lot!
Comment 26 Vladimir Nadvornik 2007-02-22 16:45:22 UTC
Jan, can you reproduce it on final 10.1 or anywhere?
It seems fixed to me.
Comment 27 Jan Karjalainen 2007-02-22 20:11:29 UTC
Seems fixed to me too.
Comment 28 Vladimir Nadvornik 2007-02-23 08:52:58 UTC
OK, closing the bug.