Bugzilla – Bug 139376
compiler issue with OO.o
Last modified: 2005-12-21 14:19:10 UTC
Opening a simple somefile.odp in OOo gives a segmentation fault. This happens with all .odp presentations that I tried to open which were created with OOo under 9.3/10.0.
I agree - OOo after upgrade from A3 to A4 is extremely unstable, failing to open every 2nd document.
Petr - looks like the compiler issue ...
*** Bug 136871 has been marked as a duplicate of this bug. ***
It works when I use /usr/lib/ooo-2.0/program/libicuuc.so.26.0 compiled with gcc-4.0. It does not help to compile the module with -O0. It really looks like a miscompilation. I am going to investigate where it exactly happens. I will try to get a small test case. Be patient please.
Copying the compiler experts...
I have found it. A strange expression is used in OOo:

# define U_MAX_PTR(base) \
    ((void *)(((char *)(base)+0x7fffffff) > (char *)(base) \
        ? ((char *)(base)+0x7fffffff) \
        : (char *)-1))

It gives different results when compiled with gcc-4.0 and gcc-4.1. I'll attach a small test case which shows the problem; it also describes what this macro is for.

gcc-4.0:
--- cut ---
hope:/ # gcc --version
gcc (GCC) 4.0.2 20050901 (prerelease) (SUSE Linux)
Copyright (C) 2005 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

hope:/ # gcc test.c
hope:/ # ./a.out
p1=0xffffa0ec
p2=0xffffffff
--- cut ---

gcc-4.1:
--- cut ---
hope: # gcc --version
gcc (GCC) 4.1.0 20051129 (prerelease) (SUSE Linux)
Copyright (C) 2005 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

hope: # gcc test.c
hope: # ./a.out
p1=0xffffa0ec
p2=0x7fffa0eb
--- cut ---

Is this a bug in gcc?
Created attachment 61478 [details] a test case that shows the problem
This is a bug in gcc and I have a fix. The only thing is that you rely on pointers being unsigned and thus overflow being defined. Also from the description of the (interesting) macro I wonder what semantics this pointer returned from U_MAX_PTR should have (i.e., how is it used?). Consider ia64 or s390 where (char*)-1 is never a valid pointer. For the signed/unsigned issue the best would be to use (uintptr_t) instead of (char *).
Just to clarify, I want you to fix the OpenOffice code to not do such interesting but apparently useless stuff. Whether gcc will be fixed depends on some language lawyer opinion (because the std only talks about _integer_ overflows, and a pointer is not an integer, so pointer overflow could be considered always undefined). So, please use integer arithmetic or get rid of this completely.
i.e.

# define U_MAX_PTR(base) \
    ((void *)(((uintptr_t)(base)+0x7fffffff) > (uintptr_t)(base) \
        ? ((uintptr_t)(base)+0x7fffffff) \
        : (uintptr_t)-1))

This will fix it.
I found this comment that describes how the macro U_MAX_PTR is used:

--- cut ---
 * Maximum value of a (void*) - use to indicate the limit of an 'infinite' buffer.
 * In fact, buffer sizes must not exceed 2GB so that the difference between
 * the buffer limit and the buffer start can be expressed in an int32_t.
--- cut ---

It is used several times to initialize a variable that is used to check a buffer limit. For example:

--- cut ---
if(destCapacity==0) {
    destLimit=dest=0;
} else if(destCapacity==-1) {
    // Pin the limit to U_MAX_PTR if the "magic" destCapacity is used.
    destLimit=(char*)U_MAX_PTR(dest);
    // for NUL-termination, translate into highest int32_t
    destCapacity=0x7fffffff;
} else {
    destLimit=dest+destCapacity;
}
--- cut ---

In another place, the variable is initialized this way:

--- cut ---
char buffer[1024];
destLimit=buffer+sizeof(buffer);
--- cut ---

The limit is checked as follows. It is in a different function, so the value of the variable destLimit ends up in the variable targetLimit.

--- cut ---
const char* t = *target;

if (targetLimit < t || sourceLimit < *source) {
    *err = U_ILLEGAL_ARGUMENT_ERROR;
    return;
}

/*
 * Make sure that the target buffer size does not exceed the number range for int32_t
 * because some functions use the size rather than comparing pointers.
 * size_t is guaranteed to be unsigned.
 */
if((size_t)(targetLimit - t) > (size_t)0x7fffffff && targetLimit > t) {
    targetLimit = t + 0x7fffffff;
}
--- cut ---

They have a special hack for S390:

--- cut ---
# ifdef OS390
#   define U_MAX_PTR(base) ((void *)0x7fffffff)
# elif defined(OS400)
--- cut ---

They probably ignore the problems on ia64. You can look into the OOo sources in src680-m143/icu/download/icu-2.6.tar.gz for more details.

Your solution looks fine. Is it usable on all platforms without any change?

Note: I have to leave for some hours now. I'll check it more when I am back.
Yes, my solution will work on all platforms it worked on previously ;) (i.e. ignoring s390 and ia64). Btw. the code as it stands is invalid per 6.5.6/8 of the C standard, so closing as invalid.
Note that the following uses invoke undefined behavior with respect to the standard:

const char* t = *target;
if (targetLimit < t || sourceLimit < *source)
if((size_t)(targetLimit - t) > (size_t)0x7fffffff && targetLimit > t)

whenever the two pointers you compare or subtract do not point into the same memory object (or one element past its end). Basically, pointer arithmetic and comparison are only defined for pointers pointing into the same memory object (or one element past its end). All this checking should be done in the respective unsigned integer types.
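As a sketch of what "checking in unsigned integer types" could look like (the helper name is hypothetical and the variable names follow the quoted ICU snippet; this is not a committed fix):

```c
#include <stdint.h>

/* Hypothetical rewrite of the quoted limit check: both pointers are
 * converted to uintptr_t first, so the comparison and subtraction are
 * ordinary unsigned integer operations rather than pointer arithmetic
 * on potentially unrelated objects (which is undefined behavior). */
static const char *pin_limit(const char *t, const char *targetLimit)
{
    uintptr_t lo = (uintptr_t)t;
    uintptr_t hi = (uintptr_t)targetLimit;

    if (hi > lo && hi - lo > 0x7fffffff) {
        /* Pin the usable range to what fits in an int32_t. */
        hi = lo + 0x7fffffff;
    }
    return (const char *)hi;
}
```

A limit that already lies within 2GB of the start passes through unchanged; only an over-wide (or reversed) range is pinned.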