Bugzilla – Bug 643369
HARDLOCKLIMIT for new install is set to @256, not enough for OpenOffice
Last modified: 2010-10-14 09:33:21 UTC
User-Agent: Mozilla/5.0 (en-US; rv:1.9.1.2) Gecko/20090729 Firefox/3.5.2

Not a bug as such. However, with this setting OpenOffice runs poorly and tends to crash. I looked up this setting on a friend's system: same entry. After increasing HARDLOCKLIMIT to @512, OpenOffice performance was perfect. Tested with 10 (real) OO documents plus Gimp open; performance was great. I also had some screen freezes, which I attributed to running KDE 4.5.1 RC. So far no more problems. In addition, all other applications run faster with the increased HARDLOCKLIMIT.

Reproducible: Always

Steps to Reproduce:
1. Set HARDLOCKLIMIT to @256

This issue may explain a number of problems posted on the forum. If the "bug" (not really a bug) is critical, please make a judgement.
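As a quick sanity check on the affected system, the limit this setting controls can be inspected directly; `ulimit -Hl` reports the hard locked-memory limit in kilobytes, and the same value is visible per-process under /proc (a minimal sketch, assuming a Linux shell; the reported number depends on whether the powersave scripts have applied HARDLOCKLIMIT):

```shell
# Hard limit on locked memory, in KiB; with HARDLOCKLIMIT="@256" applied,
# this is expected to report 256.
ulimit -Hl

# The same limit as seen by the current process:
grep "locked memory" /proc/self/limits
```

Comparing this value before and after editing HARDLOCKLIMIT confirms whether the change actually took effect for the session running OpenOffice.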
With the new HARDLOCKLIMIT @1024 on my desktop (AMD Athlon(tm) 64 X2 Dual Core Processor 5200+, 8GB RAM, Gigabyte motherboard, ASUS NVIDIA EN210 graphics, RAID 1, SUSE 11.3, KDE 4.5 RC), and my laptop, which encountered the same problem, now set at @512 (ASUS laptop: Intel(R) Core2 Duo CPU T7300 2GHz, NVIDIA GeForce 8400 G, 2GB RAM, SUSE 11.3, KDE 4.5 RC), OO runs perfectly and the systems are faster. I did not encounter any more screen freezes.

Best regards,
Otto Hase
I am running Sun Java 1.6.0.u21-0.1.1; the problem occurs with OpenJDK too.
So ... this looks very much like a typical victim-of-the-second-start problem. Indeed, I find it very hard to believe that this setting:

## Type: string
## Default: 5
#
# Limit the size of the memory that a single process may lock in
# physical memory (thus preventing it to be swapped out).
# Hard limit: Can not be increased by non-root.
# This value corresponds to ulimit -Hl
# Parameter is in percent of physical memory (unless you prefix
# it by @, in which case it means kilobytes), 0 means no adjustment.
#
HARDLOCKLIMIT="@256"

is at all significant, particularly since OO.o will not try to lock anything in memory anyway. You can verify this by stracing and looking for 'mlock' syscalls [there are none]. So, unless there is some mechanism by which things should be faster, it looks to me like an aberration caused by noise in measurement.

Furthermore, if OO.o crashes, we want a stack trace, not a rumor of one: we can fix real crashes.

If you -really- want to measure such things, you need a repeatable setup: boot your machine from cold and execute soffice as part of the startup; then repeat that several (at least 4) times, both before and after your suggested change. There is some built-in logging in OO.o to help you with that:

export RTL_LOGFILE=/tmp/startup.nopid ; rm -f $RTL_LOGFILE ; export OOO_EXIT_POST_STARTUP=1 ; oowriter

That should give you a /tmp/startup.nopid file with timestamps in it for the startup. Bear in mind we need a statistically significant number of these, and low jitter, to make any meaningful comparison.

Of course, there is an outside chance that this setting has some amazing effect on the Linux VM. If so, we need to carefully isolate that, and make sure that this works without endangering the system by tweaking random settings unrelated to OO.o :-) [that is useful work to do]

Thanks for caring about OO.o startup time though! :-)
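For reference, the percent-vs-kilobytes convention described in the config comment above can be sketched as a small shell function (a hypothetical helper, not part of the powersave package; it only mirrors the documented mapping):

```shell
# Map a HARDLOCKLIMIT parameter to the kilobyte value that would be passed
# to ulimit -Hl: "@N" means N kilobytes, "0" means no adjustment, and a
# bare number means percent of physical memory.
hardlock_kb() {
    param="$1"
    case "$param" in
        @*) echo "${param#@}" ;;                    # literal kilobytes, e.g. "@256" -> 256
        0)  echo "no adjustment" ;;                 # 0 leaves the limit untouched
        *)  mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
            echo $(( mem_kb * param / 100 )) ;;     # percent of physical RAM
    esac
}

hardlock_kb "@256"   # -> 256
hardlock_kb "@512"   # -> 512
```

So "@256" pins the hard limit at a quarter megabyte regardless of RAM size, while the shipped default of "5" scales with the machine, which is why the same absolute value can feel very different across systems.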