Why does performance decrease as cache size increases?

Hi,
We have some very serious performance problems with our
database. I have been trying to help by tuning the cache size,
but the results are the opposite of what I expect.
To create new databases with my data set, it takes about
8200 seconds with a 32 Meg cache. Performance gets worse
as the cache size increases, even though the cache hit rate
improves!
I'd appreciate any insight as to why this is happening.
32 Meg does not seem like such a large cache that it would
strain some system limitation.
Here are some stats from db_stat -m:
Specified a 128 Meg cache size - test took 16076 seconds
160MB 1KB 900B Total cache size
1 Number of caches
1 Maximum number of caches
160MB 8KB Pool individual cache size
0 Maximum memory-mapped file size
0 Maximum open file descriptors
0 Maximum sequential buffer writes
0 Sleep after writing maximum sequential buffers
0 Requested pages mapped into the process' address space
34M Requested pages found in the cache (93%)
2405253 Requested pages not found in the cache
36084 Pages created in the cache
2400631 Pages read into the cache
9056561 Pages written from the cache to the backing file
2394135 Clean pages forced from the cache
2461 Dirty pages forced from the cache
0 Dirty pages written by trickle-sync thread
40048 Current total page count
40048 Current clean page count
0 Current dirty page count
16381 Number of hash buckets used for page location
39M Total number of times hash chains searched for a page (39021639)
11 The longest hash chain searched for a page
85M Total number of hash chain entries checked for page (85570570)
Specified a 64 Meg cache size - test took 10694 seconds
80MB 1KB 900B Total cache size
1 Number of caches
1 Maximum number of caches
80MB 8KB Pool individual cache size
0 Maximum memory-mapped file size
0 Maximum open file descriptors
0 Maximum sequential buffer writes
0 Sleep after writing maximum sequential buffers
0 Requested pages mapped into the process' address space
31M Requested pages found in the cache (83%)
6070891 Requested pages not found in the cache
36104 Pages created in the cache
6066249 Pages read into the cache
9063432 Pages written from the cache to the backing file
5963647 Clean pages forced from the cache
118611 Dirty pages forced from the cache
0 Dirty pages written by trickle-sync thread
20024 Current total page count
20024 Current clean page count
0 Current dirty page count
8191 Number of hash buckets used for page location
42M Total number of times hash chains searched for a page (42687277)
12 The longest hash chain searched for a page
98M Total number of hash chain entries checked for page (98696325)
Specified a 32 Meg cache size - test took 8231 seconds
40MB 1KB 900B Total cache size
1 Number of caches
1 Maximum number of caches
40MB 8KB Pool individual cache size
0 Maximum memory-mapped file size
0 Maximum open file descriptors
0 Maximum sequential buffer writes
0 Sleep after writing maximum sequential buffers
0 Requested pages mapped into the process' address space
26M Requested pages found in the cache (70%)
10M Requested pages not found in the cache (10812846)
35981 Pages created in the cache
10M Pages read into the cache (10808327)
9200273 Pages written from the cache to the backing file
9335574 Clean pages forced from the cache
1498651 Dirty pages forced from the cache
0 Dirty pages written by trickle-sync thread
10012 Current total page count
10012 Current clean page count
0 Current dirty page count
4099 Number of hash buckets used for page location
47M Total number of times hash chains searched for a page (47429232)
13 The longest hash chain searched for a page
118M Total number of hash chain entries checked for page (118218066)
vmstat says that a few minutes into the test, the box is
spending 80-90% of its time in iowait. That worsens as
the test continues.
System and test info follows
We have 10 databases (in 10 files) sharing a database
environment. We are using a hash table since we expect
data accesses to be pretty much random.
We are using the default cache type: a memory mapped file.
Using DB_PRIVATE did not improve performance.
The database environment created with these flags:
DB_CREATE | DB_THREAD | DB_INIT_CDB | DB_INIT_MPOOL
The databases are opened with only the DB_CREATE flag.
There is only one process accessing the db. In my tests,
only one thread accesses the db, doing only writes.
We do not use transactions.
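For context, here is a minimal sketch of how we open the environment and databases; the cache size, flags, and access method are as described above, while the home path and file name are placeholders, not our real ones:

    #include <db.h>

    int open_env_and_db(const char *home, DB_ENV **envp, DB **dbp)
    {
        DB_ENV *env = NULL;
        DB *db = NULL;
        int ret;

        if ((ret = db_env_create(&env, 0)) != 0)
            return ret;

        /* The cache size being tuned: 32 Meg in a single region. */
        if ((ret = env->set_cachesize(env, 0, 32 * 1024 * 1024, 1)) != 0)
            goto err;

        /* The environment flags listed above. */
        if ((ret = env->open(env, home,
                DB_CREATE | DB_THREAD | DB_INIT_CDB | DB_INIT_MPOOL, 0)) != 0)
            goto err;

        if ((ret = db_create(&db, env, 0)) != 0)
            goto err;

        /* Hash access method, since accesses are essentially random;
         * only DB_CREATE, as noted. */
        if ((ret = db->open(db, NULL, "data0.db", NULL,
                DB_HASH, DB_CREATE, 0)) != 0)
            goto err;

        *envp = env;
        *dbp = db;
        return 0;

    err:
        if (db != NULL)
            db->close(db, 0);
        env->close(env, 0);
        return ret;
    }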
My data set is about 550 Meg of plain ASCII text data.
13 million inserts and 2 million deletes. Key size is
32 bytes, data size is 4 bytes.
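The write path itself is nothing fancy; a stripped-down sketch of one insert and one delete (the key/value buffers here stand in for our real records):

    #include <string.h>
    #include <db.h>

    int put_record(DB *db, const char key_buf[32], const char val_buf[4])
    {
        DBT key, data;

        memset(&key, 0, sizeof(key));
        memset(&data, 0, sizeof(data));
        key.data = (void *)key_buf;
        key.size = 32;               /* 32-byte keys */
        data.data = (void *)val_buf;
        data.size = 4;               /* 4-byte values */

        /* Plain put, NULL transaction handle -- we do not use transactions. */
        return db->put(db, NULL, &key, &data, 0);
    }

    int del_record(DB *db, const char key_buf[32])
    {
        DBT key;

        memset(&key, 0, sizeof(key));
        key.data = (void *)key_buf;
        key.size = 32;

        return db->del(db, NULL, &key, 0);
    }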
BDB 4.6.21 on Linux.
1 Gig of RAM
Filesystem = ext3, page size = 4K
The test system is not doing anything else while I am
testing.

Surprising result: I tried closing the db handles with DB_NOSYNC and performance
got worse. Using a 32 Meg cache, it took about twice as long to run my test:
15800 seconds using DB->close(DB_NOSYNC) vs 8200 seconds using DB->close(0).
Here is some data from db_stat -m when using DB_NOSYNC:
40MB 1KB 900B Total cache size
1 Number of caches
1 Maximum number of caches
40MB 8KB Pool individual cache size
0 Maximum memory-mapped file size
0 Maximum open file descriptors
0 Maximum sequential buffer writes
0 Sleep after writing maximum sequential buffers
0 Requested pages mapped into the process' address space
26M Requested pages found in the cache (70%)
10M Requested pages not found in the cache (10811882)
44864 Pages created in the cache
10M Pages read into the cache (10798480)
7380761 Pages written from the cache to the backing file
3452500 Clean pages forced from the cache
7380761 Dirty pages forced from the cache
0 Dirty pages written by trickle-sync thread
10012 Current total page count
5001 Current clean page count
5011 Current dirty page count
4099 Number of hash buckets used for page location
47M Total number of times hash chains searched for a page (47428268)
13 The longest hash chain searched for a page
118M Total number of hash chain entries checked for page (118169805)
It looks like not flushing the cache regularly is forcing a lot more
dirty pages (and fewer clean pages) from the cache. Forcing a
dirty page out is slower than forcing a clean page out, of course.
Is this result reasonable?
I suppose I could try to sync less often than I have been, but more often
than never, to see if that makes any difference.
When I close or sync one db handle, I assume it flushes only that portion
of the dbenv's cache, not the entire cache, right? Is there an API I can
call that would sync the entire dbenv cache (besides closing the dbenv)?
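For concreteness, this is roughly what I have in mind; I'm assuming DB->sync() flushes a single database's dirty pages and that DB_ENV->memp_sync() with a NULL LSN writes out every modified page in the environment's cache (the interval is an arbitrary number I made up):

    #include <db.h>

    #define SYNC_INTERVAL 1000000   /* made-up: puts between flushes */

    void maybe_sync(DB *db, DB_ENV *env, long puts_done)
    {
        if (puts_done % SYNC_INTERVAL != 0)
            return;

        /* Flush only this database's dirty pages... */
        db->sync(db, 0);

        /* ...or flush the entire environment cache; a NULL LSN
         * means "write all modified pages". */
        env->memp_sync(env, NULL);
    }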
Are there any other suggestions?
Thanks,
Eric

Similar Messages

  • Why does opening and closing FPGA references increase Windows XP handles by 3?

    Hello.
    We run a test system here that uses TestStand to communicate with a number of LabVIEW VI modules, which in turn communicate with a PXI-7833R FPGA. Everything had been working fine, except that after running for a long time, LabVIEW would get an out-of-memory error. I discovered that after a particular VI was run 100,000 times, the number of handles shown in Windows XP Task Manager had grown to about 370,000. The number of threads and processes remains stable and normal. Closing LabVIEW removes all the excess handles.
    I tried an experiment: a standalone VI (i.e., no TestStand) which simply opens the FPGA VI reference and then closes the reference. This is repeated 4 times in the same VI. The number of handles in Windows XP Task Manager increased by 12 each time the VI was run, with no errors. This suggests that closing the FPGA reference might not be working.
    Why does this happen? Is there a way to avoid it? The LabVIEW version is 8.5.1.
    Appreciate any help here. Thanks.
    -Dave

    Could be related to this known issue:
    FPGA FIFO reset behavior: When you use an FPGA target emulator, FPGA FIFOs reset when the VI is stopped and then started again. When you use an FPGA target with Interactive Front Panel Communication, FPGA FIFOs do not reset when the FPGA VI is stopped and then started again. To reset the FIFO, right-click the FPGA target in the Project Explorer window and select Download from the shortcut menu. When you control an FPGA VI using Programmatic FPGA Interface Communication, use the Close FPGA VI Reference function with the Close and Reset shortcut menu option selected, or the Invoke Method function with the Reset method selected, to reset FPGA FIFOs.
    See the Knowledge Base.
    Also remember to load the FPGA Read/Write VIs dynamically from TestStand and dump them afterwards. A change to the data type causes the VI to need to recompile, so it can't stay in memory if you need different types of data.
    Jeff

  • Why does Acrobat 11 open cached or "ghost" versions of my PDFs?

    Here's my work flow...
    Save a PDF from an Illustrator doc/open PDF in Acrobat to sanitize/email PDF
    -get revision request-
    Revise same Illustrator file/save as a PDF/open PDF in Acrobat and it views as the original PDF!
    When I "quick look" on the desktop, the PDF displays the revised comp. So my question is: why does Acrobat 11 keep the cached image of the PDFs, and how can I prevent this? Quitting Acrobat fixes the issue once, but when I revise the same file a 3rd or 4th time, it shows the cached version every time.

    If you choose 'Preserve Illustrator editing' when exporting from AI, it actually embeds the entire AI file inside the PDF as a hidden attachment. When you open that PDF in Illustrator, it ignores the 'real' PDF content and opens the embedded AI object instead, so you never see the changes made in Acrobat.

  • Why does a Pages file explode in size?

    When I attach a 1.1 MB Pages file to a mail, why does the file grow to 14.3 MB? Even a normal blank Pages document of 63 KB becomes 9.5 MB when attached to an email?!
    What am I missing?

    There have to be graphics in there somewhere to make it that big.
    Do you have Captured pages (the master sections) with graphics in them, possibly repeating, in the document?
    If you like click on my blue name and email me the document and I'll look at it.
    Peter

  • Why does my Full HD .mp4 file increase 10 times in size when imported to iMovie as .mov?

    Is the conversion to .mov really necessary? Can I not work directly with the .mp4 files in the event library?

    Thanks for explaining what the import does.
    Yes, it is H.264.
    However, this is what I found:
    I copied the files in Finder so I did not have to go through the import conversion.
    After this I opened a new project, trimmed the clip, and added a theme, after which I could finalize the project.
    In a 2nd test I skipped finalizing the project and just exported the edited clip to a Full HD movie.
    It seems iMovie allows me to edit without converting to .mov, as you mention.

  • SQL Server 2005 performance decreases with DB size while SQL Server 2012 is fine

    Hi,
    We have a C# Windows service running that polls some files and inserts/updates some fields in a database.
    The service was tested on a local dev machine running SQL Server 2012, and performance was quite decent with any number of records. Later the service was moved to a test stage environment where SQL Server 2005 is installed. At that point the database was still empty and the service ran just fine, but after some 500k records were written, performance problems came to light. After some more tests we found out that, basically, database operation performance in SQL Server 2005 decreases in direct correlation with the database size. Here are some test results:
    Run#               1        2        3        4        5
    DB size (records)  520k     620k     720k     820k     920k

    SQL Server 2005
    TotalRunTime       25:25.1  32:25.4  38:27.3  42:50.5  43:51.8
    Get1               00:18.3  00:18.9  00:20.1  00:20.1  00:19.3
    Get2               01:13.4  01:17.9  01:21.0  01:21.2  01:17.5
    Get3               01:19.5  01:24.6  01:28.4  01:29.3  01:24.8
    Count1             00:19.9  00:18.7  00:17.9  00:18.7  00:19.1
    Count2             00:44.5  00:45.7  00:45.9  00:47.0  00:46.0
    Count3             00:21.7  00:21.7  00:21.7  00:22.3  00:22.3
    Count4             00:23.6  00:23.9  00:23.9  00:24.9  00:24.5
    Process1           03:10.6  03:15.4  03:14.7  03:21.5  03:19.6
    Process2           17:08.7  23:35.7  28:53.8  32:58.3  34:46.9
    Count5             00:02.3  00:02.3  00:02.3  00:02.3  00:02.1
    Count6             00:01.6  00:01.6  00:01.6  00:01.7  00:01.7
    Count7             00:01.9  00:01.9  00:01.7  00:02.0  00:02.0
    Process3           00:02.0  00:01.8  00:01.8  00:01.8  00:01.8

    SQL Server 2012
    TotalRunTime       12:51.6  13:38.7  13:20.4  13:38.0  12:38.8
    Get1               00:21.6  00:21.7  00:20.7  00:22.7  00:21.4
    Get2               01:38.3  01:37.2  01:31.6  01:39.2  01:37.3
    Get3               01:41.7  01:42.1  01:35.9  01:44.5  01:41.7
    Count1             00:20.3  00:19.9  00:19.9  00:21.5  00:17.3
    Count2             01:04.5  01:04.8  01:05.3  01:10.0  01:01.0
    Count3             00:24.5  00:24.1  00:23.7  00:26.0  00:21.7
    Count4             00:26.3  00:24.6  00:25.1  00:27.5  00:23.7
    Process1           03:52.3  03:57.7  03:59.4  04:21.2  03:41.4
    Process2           03:05.4  03:06.2  02:53.2  03:10.3  03:06.5
    Count5             00:02.8  00:02.7  00:02.6  00:02.8  00:02.7
    Count6             00:02.3  00:03.0  00:02.8  00:03.4  00:02.4
    Count7             00:02.5  00:02.9  00:02.8  00:03.4  00:02.5
    Process3           00:21.7  00:21.0  00:20.4  00:22.8  00:21.5
    One more thing: it's not the Process2 table that constantly grows in size but the Process1 table, which gets almost 100k records each run.
    After that, SQL Server 2005 was also installed on a dev machine just to test things, and we got exactly the same results. Both the SQL Server 2005 and 2012 instances are installed with default settings, with no changes at all. The same goes for the databases created for the service.
    So the question is: why are there such huge differences between the performance of SQL Server 2005 and 2012? Maybe there are some settings that are set by default in a SQL Server 2012 database that need to be set manually in 2005?
    What else can I try to test? The main problem is that the production SQL Server will be updated god-knows-when, and we can't just wait for that.
    Any suggestions/advice are more than welcome.

    ...One more thing is that it's not Process2 table that constantly grows in size but is
    Process1 table, that gets almost 100k records each run....
    Hi,
    It is not clear to me exactly what you are doing, but now we have a better understanding of ONE of your tables, and it is obvious that you will get worse results as the data gets bigger. Your table looks like a table built automatically by an ORM such as Entity Framework, and its DDL probably does not match your needs. For example, if your select query filters on a column other than [setID], then you have no index for it, and the server probably has to scan the entire table to find the records you need.
    A forum is a suitable place for general questions, less so for advice about a specific system (as I mentioned before, we are not familiar with your system). For example, the fact that you have no index except the one on the column [setID] can indicate a problem. Ultimately, optimizing the system will require investigating it more thoroughly (at which point a forum is no longer the appropriate place... but we're not there yet). Another point: we can now see that you have a [timestamp] column, which implies that you use this column as a filter when selecting data. If so, a better DDL might use a clustered index on this column and, if needed, a nonclustered index on [setID], if that index is needed at all.
    What is obvious is that the next step is to check whether this DDL fits your specific needs (as I mentioned before).
    After that, the next step is to understand what actions you perform on this table: (1) what is the query that becomes slow on a bigger data set, and (2) are you using an ORM (object-relational mapping, such as Entity Framework code-first), and if so, which one?

  • Why does Acrobat X Reduce File Size make significantly bigger files?

    Drawings at A0 size, probably from AutoCAD, each PDF ranging in size from 600 to 800KB. No encryption. No attachments. No embedded fonts. Reprinted them through Acrobat X Pro to A3, then combined them. Binder size was about 5MB, too large for my client's email system.
    Ran the binder through Reduce File Size with compatibility set to Version 7, and the binder blew out in size by 20%. Increased the version compatibility to Version 8 and it increased yet again; at Version 9 it stayed the same as 8.
    Optimised the same binder and it blew out yet again. Auditing showed no difference no matter what I deleted.

    If you have a file made from AutoCAD it is probably just line drawing. There is really no scope to reduce the size of these (except by taking away lines, which wouldn't be popular). I don't know why it gets bigger, but it's unlikely to get smaller, so I'd recommend finding a different way to deliver the file (many web sites exist for delivering large files).

  • Why Such a Huge Photoshop File Size Increase When Saving In CC from other versions?

    Hello,
    I have limited experience in Photoshop, and am working on a file with two other people. We are creating web comic pages. The line artist and colorist are both using CS6; I have CC. The file I receive from them is around 238MB. If I open it and make even the slightest edit, Photoshop will not save it, warning that the file size is too large (greater than 2GB!). How does this happen? Is it something in my preferences that causes a 10x increase in file size?
    Any help is appreciated.
    Thanks!

    Thanks for replying. I didn't have this problem in the beginning, and when I do the same in CS5 there is no problem.

  • Why does text in Smallest File Size PDF change point size and vertical scaling?

    If you have a document with 6 pt. type (no scaling) and Save As a Screen-Quality PDF, when you open the resulting PDF in Illustrator, the 6 pt. type has become 5.1 pt. type with vertical scaling of 117.65%! Normally, I would have no reason to open the PDF in Illustrator, but our clients are doing it. They want the smallest possible PDFs but still want to be able to check the point sizes for legal reasons. If you Save As a High Quality Print PDF, the type is fine, but that won't always work if there are a lot of images and the file size needs to be kept as small as possible.

    Sorry for 2 posts; technical problem with forums and images.
    Leading changed to 7.2, but as you know, when going to PDF, paragraphs are broken up into separate lines.

  • Why does my resolution (or font size, or whatever) suddenly grow or shrink for a given web site while browsing?

    I recently installed Firefox on a new Toshiba notebook, and have had one web site unexpectedly shrink, twice, and another one grow, as I'm browsing. When I leave and come back later, the particular site, but no others, is still affected. Am I inadvertently doing something to cause this?

    OK, so I've just learned that I can reset the size change using control-0 (zero). But size up or down is enabled by control-+ or control--, and I'm not pressing those -- it seems to be caused by something my fingers are doing on the cursor pad, since I'm not touching the keyboard as I'm browsing. Any thoughts as to what is causing this?

  • Why does 4k "Set to Frame Size" shrink on playback?

    When I add 4k video to a 1080 timeline and shrink (scale) it to 50% to make it fit, it fits perfectly when paused, but then plays back at half size within the 1080 program sequence window. I believe this is the preferred way to put 4k on a 1080 sequence, but I am not sure why playback shrinks the video further when I play it. Thanks for helping me understand what's going on here, or whether there is a better way to bring 4k into a 1080 sequence.

    PR CC 2014.1. I just tested some GH4 4K (3840x2160) in a 1920x1080 sequence, scaling the 4K to 50%. Program monitor settings 1/4 and Full are okay, but 1/2 gives me constant jumps from full frame to a small box in the monitor. (Same whether playback and pause settings are matched or not.)
    Removing the scale gives the same type of effect. Stepping through frame by frame, I see that most frames show as full frame, then "zoom" in to the regular condition (no scale should display the too-large 4K centered, right?).
    Generating previews results in relatively smooth playback in all combinations.
    Also, using "scale to frame" results in smooth motion even if not rendered.
    GH4 4K clips are 100Mbps recordings. I suspected my system (just my laptop) was not up to the task, but since unscaled footage is not the issue, I really don't know. I'll try to test this again on my PC at home tonight.

  • When using Preview, why does combining 2 PDFs (total 1MB) increase to 18MB?

    I have 2 PDFs, one 300KB, the other 700KB. I drag the second one into the first one to make one PDF. But the final PDF is 18MB? Why? Or better yet, is there a solution?
    I've tried this on 10.6.8 and on 10.9.
    Thanks.

    Disregard, I found my answer in a similar question I asked 4 months ago. Thanks to Apple Communities for keeping archives. https://discussions.apple.com/thread/5692295

  • Problems with increasing/decreasing cache size when live

    Hello,
    I have configured multiple environments which I compact sequentially, and to achieve this I allocate a bigger cache to the env currently being compacted, as follows:
    Initialization:
    DB_ENV->set_cachesize(gbytes, bytes, 1); // Initial cache size.
    DB_ENV->set_cache_max(gbytes, bytes); // Maximum size.
    While live, application decreases cache of current env when finished and then increases cache of next env using:
    DB_ENV->set_cachesize(gbytes, obytes, 0); // Decrease cache size of current env to initial size
    DB_ENV->set_cachesize(gbytes, obytes, 0); // Increase cache size of next env to max size.
    When I print statistics about the memory pool using DB_ENV->memp_stat, I can see that everything is going normally:
    memp_stat: env1 ncache= 8 cache_size=20973592 // env1 is current env
    memp_stat: env2 ncache= 1 cache_size=20973592
    and then after changing current env:
    memp_stat: env1 ncache= 1 cache_size=20973592
    memp_stat: env2 ncache= 8 cache_size=20973592 // env2 is now current env
    But the problem is that over time memory is leaked (as if the extra memory of each env were not freed), and I'm totally sure that the problem comes from this code.
    I'm running Berkeley DB 4.7.25 on FreeBSD.
    Maybe some leak was fixed in newer versions and you could suggest a patch to me? Or am I not using the API correctly?
    Thanks!

    Hi,
    Thanks for providing the information.
    Unfortunately, I don't remember the exact test case I was doing, so I did a new one with 32 envs.
    I set the following for each env:
    - Initial cache=512MB/32
    - Max=1GB
    Before open, I do:
    DBEnvironment->set_cachesize((u_int32_t)0, (u_int32_t)512*1024*1024/32, 1);
    DBEnvironment->set_cache_max(1*1024*1024*1024, 0);
    DBEnvironment->get_cache_max(&gbytes, &obytes); // gives gbytes=1 and obytes=0
    After open, I have the following:
    DBEnvironment->get_cache_max(&gbytes, &obytes); // gives gbytes=0 and obytes=9502720
    memp_stat: cache_size=18644992 cache_ncache=1
    So here the values returned by memp_stat are normal, but get_cache_max is strange. Then, after increasing the cache to the strange value returned by get_cache_max (gbytes=0, obytes=9502720), I have the following:
    DBEnvironment->get_cache_max(&gbytes, &obytes); // gives gbytes=0 and obytes=9502720
    memp_stat: outlinks cache_size=27328512 cache_ncache=54
    with cache_size being: ((ui64)sp->st_gbytes * GIGA + sp->st_bytes);.
    So the cache is actually increased...
    I tried to reproduce this case by opening 1 env as follows.
    //Before open
    DbEnv->set_cachesize(); 512MB, 1 cache
    DbEnv->set_cache_max; 1GB
    //After open
    DbEnv->get_cachesize; 512MB, 1cache
    DbEnv->get_cache_max; 1GB
    memp_stat: cache:512MB, ncache:1, cache_max:1GB
    //Decrease the cache size
    DbEnv->set_cachesize(); 9MB(9502720B), 1 cache
    DbEnv->get_cachesize; 512MB, 1cache
    DbEnv->get_cache_max; 1GB
    memp_stat: cache:512MB, ncache:1, cache_max:1GB
    All of this is expected, since when resizing the cache after the DbEnv is open, the size is rounded to the nearest multiple of the region size. Region size means the size of each region specified initially. Please refer to the BDB doc: http://docs.oracle.com/cd/E17076_02/html/api_reference/C/envset_cachesize.html. Here the region size is 512MB/1 cache = 512MB. And I don't think you can resize the cache smaller than 1 region.
    Since you are opening 32 envs at the same time, each with a 512MB cache and a 1GB maximum, whether an env can allocate as much as you specified for the cache when it is opened depends on the system. I guess the number 9502720 returned by get_cache_max after opening the env reflects what the system could actually grant for the cache, given the app's request, when the env was opened.
    And for the case listed at the beginning of the post:
    While live, application decreases cache of current env when finished and then increases cache of next env using:
    DB_ENV->set_cachesize(gbytes, obytes, 0); // Decrease cache size of current env to initial size
    DB_ENV->set_cachesize(gbytes, obytes, 0); // Increase cache size of next env to max size.
    When I print statistics about the memory pool using DB_ENV->memp_stat I can see that everyting is going normally:
    memp_stat: env1 ncache= 8 cache_size=20973592 // env1 is current env
    memp_stat: env2 ncache= 1 cache_size=20973592
    and then after changing current env:
    memp_stat: env1 ncache= 1 cache_size=20973592
    memp_stat: env2 ncache= 8 cache_size=20973592 // env2 is now current env
    When env1 is finishing soon, what numbers do you set in set_cachesize to decrease the cache, including the number of caches and the cache size?
    When decreasing the cache, I do:
    env->GetDbEnv()->set_cachesize((u_int32_t)0, (u_int32_t)20973592, 0);
    I mean, in all cases I simply set the cachesize to its original value (obtained after open through get_cachesize) when decreasing, and to its max value (obtained through get_cache_max; plus I do something like cacheMaxSize * 0.75 if < 500MB) when increasing.
    I can reproduce this case, and I think the result is expected. When using DbEnv->set_cachesize() to resize the cache after the env is opened, the ncache parameter is ignored. Please refer to the BDB doc here: http://docs.oracle.com/cd/E17076_02/html/api_reference/C/envset_cachesize.html . Hence I don't think you can decrease the cache size by setting the number of caches to 0.
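    To make the before-open vs. after-open behavior concrete, here is a minimal sketch; the home path and sizes are illustrative only, not taken from your app:

        #include <db.h>

        int resize_demo(void)
        {
            DB_ENV *env;
            u_int32_t gb, b;
            int nc, ret;

            if ((ret = db_env_create(&env, 0)) != 0)
                return ret;

            /* Before open: 512MB in 1 cache region, 1GB maximum.
             * Region size is therefore 512MB / 1 = 512MB. */
            env->set_cachesize(env, 0, 512 * 1024 * 1024, 1);
            env->set_cache_max(env, 1, 0);   /* gbytes=1, bytes=0 -> 1GB */

            if ((ret = env->open(env, "/tmp/env-demo",
                    DB_CREATE | DB_INIT_MPOOL, 0)) != 0) {
                env->close(env, 0);
                return ret;
            }

            /* After open: this asks for ~9MB, but the cache cannot go
             * below one region (512MB here), so the request is rounded,
             * and the ncache argument is ignored at this point. */
            env->set_cachesize(env, 0, 9502720, 0);

            env->get_cachesize(env, &gb, &b, &nc);
            /* gb/b still report a region-sized (512MB) cache. */

            env->close(env, 0);
            return 0;
        }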
    Hope it helps.

  • Why are my images positioned further down the page as size increases?

    Hello,
    I'm experiencing a problem with images that I'm a little baffled about.
    My image tag is listed as below and comes below text in other plain paragraph tags.
    <p><img id="p_image" height="411" width="312" source="{src}" styleName="image"/></p>
    The image is not appearing.  If I decrease the height to 50 it appears (obviously skewed though) and is positioned below the text.
    If I increase the height to 300 it appears but now the top of the image is further below the text.
    If I set the height to 411 it doesn't appear I assume because the combination of the image height as well as the increased space between the top of the image and the preceding paragraph puts it off the page.
    Why does the space below the paragraph increase as the image height increases?
    Images attached.
    Any help is appreciated.
    Thanks, peter

    I think the effect you are seeing is a result of how the lineHeight is set up. By default, the lineHeight will be calculated as 120% of the size of the largest thing on the line -- in your case, this is the inline graphic. This is designed to leave space between the line before and the line with the graphic. So it is taking the height of the graphic, multiplying by 120%, and making this the distance between the previous line and the line with the graphic. So the bigger the graphic, the bigger the space above the graphic. Normally this works well with text, but in this case you may want to get closer to "set solid". You can do this by setting the lineHeight to 100%. Or you may wish to leave a couple of percent for the descenders of the previous line. Or, another alternative that may work well for you if you really know exactly where you want the line set, you could measure the graphic, add on the number of extra pixels to leave for the descenders, and make the lineHeight a constant. So you could do something like this:
    <img lineHeight="104%" height="411"/>
    or this:
    <img lineHeight="423" height="411"/>
    Obviously you would still need to specify the source and any other parameters you want on the image.
    Note that if you are trying to fit a 200 pixel high graphic into a 200 pixel high container, you will hit the same problem -- in order to fit the graphic, the container will have to be slightly bigger than the graphic in order to fit the descenders on the last line. This is true even if the last line contains only the graphic, and no descenders (or text) at all.
    Hope this helps,
    - robin

  • Why does paintbrush size and tools change with zooming?

    Why does the paintbrush tool change size when zooming in and out? It's so annoying. Is there a way to fix this? BTW, I use version CS4. Does it happen with all the other tools? I tried them and it seems not, but I can't tell unless it's a big difference in brush size, as with the paintbrush tool. Thanks for any help!

    I was going to answer the same, Rob, but then I tried it and found that it doesn't do that. If I draw a stroke with the paintbrush and then zoom in, the brush remains the same size, and if I draw another stroke it is thinner than the first, and so on as you zoom in further.
    I think that is what the OP is asking about... since the paintbrush does not work with lines, the thickness of the stroke probably is not treated as a pixel-based property but more as a fill, so the thickness of the stroke does change on the stage relative to the coordinate system, getting thinner as you zoom in, but the tool/stroke itself stays the same size to the user -- meaning, if it looks like it's 1/4 inch thick when you hold a ruler to the screen, you will see it as 1/4" thick on your screen regardless of the zoom factor.
