BDB cachesize impact on disk space

Hello, I have a simple question: if you increase the BDB cache, does this increase the size of the database on disk in the same proportion?
For example, if the BDB cache is increased to the database's on-disk size + 10%, will the storage needs on disk also increase by 10% (or at all)?
Thanks!
Peter.

cachesize affects the space allocated in memory for database pages. The database on disk can grow depending on activity in the database.
thanks
mike
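For reference, here is a minimal sketch (using the Berkeley DB C API) of where the cache size is configured; the environment path and the 512MB figure are placeholders, not values from this thread. The cache is an in-memory allocation made when the environment is opened, so changing it does not by itself grow the files on disk.

#include <stdio.h>
#include <db.h>

int main(void)
{
    DB_ENV *env;
    int ret;

    if ((ret = db_env_create(&env, 0)) != 0) {
        fprintf(stderr, "db_env_create: %s\n", db_strerror(ret));
        return 1;
    }
    /* 512MB cache in a single region; this only reserves memory for
     * database pages, it does not change the size of the database files. */
    env->set_cachesize(env, 0, 512 * 1024 * 1024, 1);
    if ((ret = env->open(env, "/path/to/env", DB_CREATE | DB_INIT_MPOOL, 0)) != 0) {
        fprintf(stderr, "env->open: %s\n", db_strerror(ret));
        env->close(env, 0);
        return 1;
    }
    env->close(env, 0);
    return 0;
}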

Similar Messages

  • Poor performance of the BDB cache

    I'm experiencing incredibly poor performance of the BDB cache and wanted to share my experience, in case anybody has any suggestions.
    Overview
    Stone Steps maintains a fork of a web log analysis tool - the Webalizer (http://www.stonesteps.ca/projects/webalizer/). One of the problems with the Webalizer is that it maintains all data (i.e. URLs, search strings, IP addresses, etc.) in memory, which puts a cap on the maximum size of the data set that can be analyzed. Naturally, BDB was picked as the fastest database to maintain the analyzed data on disk and produce reports by querying the database. Unfortunately, once the database grows beyond the cache size, overall performance goes down the drain.
    Note that the version of SSW available for download does not support BDB in the way described below. I can make the source available for you, however, if you find your own large log files to analyze.
    The Database
    Stone Steps Webalizer (SSW) is a command-line utility and needs to preserve all intermediate data for the month on disk. The original approach was to use a plain-text file (webalizer.current, for those who know anything about SSW). The BDB database that replaced this plain text file consists of the following databases:
    sequences (maintains record IDs for all other tables)
    urls - primary database containing URL data: record ID (key), the URL itself, and grouped data such as the number of hits, transfer size, etc.
    urls.values - secondary database that contains a hash of the URL (key) and the record ID linking it to the primary database; this database is used for value lookups.
    urls.hits - secondary database that contains the number of hits for each URL (key) and the record ID to link it to the primary database; this database is used to order URLs in the report by the number of hits.
    The remaining databases are here just to indicate the database structure. They are the same in nature as the two described above. The legend is as follows: (s) will indicate a secondary database, (p) - primary database, (sf) - filtered secondary database (using DB_DONOTINDEX).
    urls.xfer (s), urls.entry (s), urls.exit (s), urls.groups.hits (sf), urls.groups.xfer (sf)
    hosts (p), hosts.values (s), hosts.hits (s), hosts.xfer (s), hosts.groups.hits (sf), hosts.groups.xfer (sf)
    downloads (p), downloads.values (s), downloads.xfer (s)
    agents (p), agents.values (s), agents.hits (s), agents.visits (s), agents.groups.visits (sf)
    referrers (p), referrers.values (s), referrers.hits (s), referrers.groups.hits (sf)
    search (p), search.values (s), search.hits (s)
    users (p), users.values (s), users.hits (s), users.groups.hits (sf)
    errors (p), errors.values (s), errors.hits (s)
    dhosts (p), dhosts.values (s)
    statuscodes (HTTP status codes)
    totals.daily (31 days)
    totals.hourly (24 hours)
    totals (one record)
    countries (a couple of hundred countries)
    system (one record)
    visits.active (active visits - variable length)
    downloads.active (active downloads - variable length)
    All these databases (49 of them) are maintained in a single file. Maintaining a single database file is a requirement, so that the entire database for the month can be renamed, backed up and used to produce reports on demand.
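    As a rough sketch of the single-file layout described above (the file name, record layout and offsets are invented for illustration; only the subdatabase names come from the list), each logical database is opened as a named subdatabase of one physical file, and a filtered secondary (sf) is built with an associate callback that returns DB_DONOTINDEX for records that should be skipped:

    #include <string.h>
    #include <db.h>

    /* Hypothetical secondary-key extractor for urls.hits: index by hit count,
     * but skip "group" records so the secondary stays filtered (sf). */
    static int hits_callback(DB *sec, const DBT *key, const DBT *data, DBT *result)
    {
        (void)sec; (void)key;
        if (((const char *)data->data)[0] == 'G')   /* made-up "group" marker */
            return DB_DONOTINDEX;
        memset(result, 0, sizeof(*result));
        result->data = (char *)data->data + 4;      /* made-up offset of the hit count */
        result->size = sizeof(unsigned int);
        return 0;
    }

    static int open_subdb(DB_ENV *env, DB **dbp, const char *subname, int dups)
    {
        int ret;
        if ((ret = db_create(dbp, env, 0)) != 0)
            return ret;
        if (dups)
            (*dbp)->set_flags(*dbp, DB_DUPSORT);
        /* Every subdatabase lives in the same physical file. */
        return (*dbp)->open(*dbp, NULL, "webalizer.db", subname, DB_BTREE, DB_CREATE, 0);
    }

    /* Usage sketch:
     *   DB *urls, *urls_hits;
     *   open_subdb(env, &urls, "urls", 0);
     *   open_subdb(env, &urls_hits, "urls.hits", 1);
     *   urls->associate(urls, NULL, urls_hits, hits_callback, 0);
     */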
    Database Size
    One of the sample Squid logs I received from a user contains 4.4M records and is about 800MB in size. The resulting database is 625MB in size. Note that there is no duplication of text data - only nodes and such values as hits and transfer sizes are duplicated. Each record also contains some small overhead (record version for upgrades, etc).
    Here are the sizes of the URL databases (other URL secondary databases are similar to urls.hits described below):
    urls (p):
    8192 Underlying database page size
    2031 Overflow key/data size
    1471636 Number of unique keys in the tree
    1471636 Number of data items in the tree
    193 Number of tree internal pages
    577738 Number of bytes free in tree internal pages (63% ff)
    55312 Number of tree leaf pages
    145M Number of bytes free in tree leaf pages (67% ff)
    2620 Number of tree overflow pages
    16M Number of bytes free in tree overflow pages (25% ff)
    urls.hits (s)
    8192 Underlying database page size
    2031 Overflow key/data size
    2 Number of levels in the tree
    823 Number of unique keys in the tree
    1471636 Number of data items in the tree
    31 Number of tree internal pages
    201970 Number of bytes free in tree internal pages (20% ff)
    45 Number of tree leaf pages
    243550 Number of bytes free in tree leaf pages (33% ff)
    2814 Number of tree duplicate pages
    8360024 Number of bytes free in tree duplicate pages (63% ff)
    0 Number of tree overflow pages
    The Testbed
    I'm running all these tests using the latest BDB (v4.6) built from source on a Win2K3 server (release build). The test machine is a 1.7GHz P4 with 1GB of RAM and an IDE hard drive. Not the fastest machine, but it was able to handle a log file like the one described above at a speed of 20K records/sec.
    BDB is configured in a single file in a BDB environment, using private memory, since only one process ever has access to the database.
    I ran a performance monitor while running SSW, capturing private bytes, disk read/write I/O, system cache size, etc.
    I also used a code profiler to analyze SSW and BDB performance.
    The Problem
    Small log files, such as 100MB, can be processed in no time - BDB handles them really well. However, once the entire BDB cache is filled up, the machine goes into some weird state and can sit in this state for hours and hours before completing the analysis.
    Another problem is that traversing large primary or secondary databases is a really slow and painful process. It is really not that much data!
    Overall, the 20K rec/sec quoted above drops down to 2K rec/sec. And that's all after most of the analysis has been done, while just trying to save the database.
    The Tests
    SSW runs in two modes, memory mode and database mode. In memory mode, all data is kept in memory in SSW's own hash tables and then saved to BDB at the end of each run.
    In memory mode, the entire BDB is dumped to disk at the end of the run. First, it runs fairly fast, until the BDB cache is filled up. Then writing (disk I/O) goes at a snail's pace, at about 3.5MB/sec, even though this disk can write at about 12-15MB/sec.
    Another problem is that the OS cache gets filled up, chewing through all available memory long before completion. In order to deal with this problem, I disabled the system cache using the DB_DIRECT_DB/LOG options. I could see the OS cache left alone, but once the BDB cache was filled up, processing speed all but stopped.
    Then I flipped options and used the DB_DSYNC_DB/LOG options to disable OS disk buffering. This improved overall performance, and even though the OS cache was filling up, it was being flushed as well; eventually, SSW finished processing this log, sporting 2K rec/sec. At least it finished, though - other combinations of these options led to never-ending tests.
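    For context, the DB_DIRECT_* and DB_DSYNC_* behaviour mentioned above is toggled through DB_ENV->set_flags on an environment handle before it is opened. A minimal sketch (the flag choice simply mirrors the two experiments described; in BDB releases after 4.6 the log flags moved to a separate log configuration call):

    #include <db.h>

    /* Apply the I/O flags from the experiments above to an environment that has
     * been created with db_env_create() but not yet opened. */
    static void set_io_flags(DB_ENV *env, int use_dsync)
    {
        if (use_dsync) {
            env->set_flags(env, DB_DSYNC_DB, 1);    /* synchronous database writes */
            env->set_flags(env, DB_DSYNC_LOG, 1);   /* synchronous log writes */
        } else {
            env->set_flags(env, DB_DIRECT_DB, 1);   /* bypass the OS buffer cache for db files */
            env->set_flags(env, DB_DIRECT_LOG, 1);  /* bypass the OS buffer cache for log files */
        }
    }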
    In the database mode, stale data is put into BDB after processing every N records (e.g. 300K rec). In this mode, BDB behaves similarly - until the cache is filled up, the performance is somewhat decent, but then the story repeats.
    Some of the other things I tried/observed:
    * I tried to experiment with the trickle option. In all honesty, I hoped that this would be the solution to my problems - trickle some, make sure it's on disk and then continue. Well, trickling was pretty much useless and didn't make any positive impact.
    * I disabled threading support, which gave me some performance boost during regular value lookups throughout the test run, but it didn't help either.
    * I experimented with page sizes, ranging from the default 8K to 64K. Using large pages helped a bit, but as soon as the BDB cache filled up, the story repeated.
    * The Db.put method, which was called 73557 times while profiling the save of the database at the end, took 281 seconds. Interestingly enough, this method called the Win32 ReadFile function 20000 times, which took 258 seconds. The majority of the Db.put time was wasted on looking up records that were being updated! These lookups seem to be the true problem here.
    * I tried libHoard - it usually provides better performance, even in a single-threaded process, but libHoard didn't help much in this case.

    I have been able to improve processing speed up to 6-8 times with these two techniques:
    1. A separate trickle thread was created that would periodically call DbEnv::memp_trickle. This works especially well on multicore machines, but also speeds things up a bit on single-CPU boxes. This alone improved speed from 2K rec/sec to about 4K rec/sec.
    2. Maintaining multiple secondary databases in real time proved to be the bottleneck. The code was changed to create the secondary databases at the end of the run (calling Db::associate with the DB_CREATE flag), right before the reports that use these secondary databases are generated. This improved speed from 4K rec/sec to 14K rec/sec.
    Hello Stone,
    I am facing a similar problem, and I too hope to resolve it with memp_trickle. I have these queries:
    1. What percentage of clean pages did you specify?
    2. At what interval was your thread calling memp_trickle?
    This would give me a rough idea about how to tune my app. I would really appreciate it if you could answer these queries.
    Regards,
    Nishith.
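    A minimal sketch of the trickle-thread idea described above (the 20% target and the 60-second interval are made-up numbers for illustration; tune them for your workload): a background thread periodically asks the memory pool to write dirty pages so that page allocation rarely has to wait for I/O.

    #include <pthread.h>
    #include <unistd.h>
    #include <stdio.h>
    #include <db.h>

    static volatile int trickle_run = 1;

    static void *trickle_thread(void *arg)
    {
        DB_ENV *env = arg;
        int nwrote;

        while (trickle_run) {
            /* Ask the mpool to keep at least 20% of the cache pages clean. */
            if (env->memp_trickle(env, 20, &nwrote) == 0 && nwrote > 0)
                fprintf(stderr, "trickle: wrote %d pages\n", nwrote);
            sleep(60);   /* made-up interval */
        }
        return NULL;
    }

    /* Usage sketch:
     *   pthread_t tid;
     *   pthread_create(&tid, NULL, trickle_thread, env);
     *   ...run the load...
     *   trickle_run = 0;
     *   pthread_join(tid, NULL);
     */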

  • DS5.2 (AIX) cachesize warning message

    Hi,
    I've been seeing this warning message in the error logs on some of our DS5.2 servers (running on AIX) when they are restarted:
    WARNING<21022> - Backend Database - conn=-1 op=-1 msgId=-1 -  cachesize 1000000000 too big
    It doesn't happen on the same server every time, and it doesn't seem to happen on all of our servers. The ones I have observed it on have varying amounts of memory, from 3GB to 16GB.
    I know there are individual and overall limits for cache size on AIX, but I believed the configuration was set up correctly to cater for this (the total of all caches for each server is 1.4GB, and all servers are configured in exactly the same way).
    Does anyone have any additional information about this warning over what I can find in the admin guide? Any help with why it would occur, and why it seems to happen randomly would be gratefully received!
    Thanks,
    Mark.

    Do you have an attribute nsslapd-dbncache set in the dse.ldif (backend configuration entry) ?
    The warning is just informational and happens in a pre-test to split the db cache into multiple chunks.
    If the warning is displayed, the server will continue but will completely delegate the creation of the dbcache to the Database library (which should allocate a single contiguous buffer, or use mmap).
    This should not have any major impact on the server behavior, unless there is truly not enough memory and DS will not start at all.
    Regards,
    Ludovic.

  • BDB v5.0.73 - EnvironmentFailureException: (JE 5.0.73) JAVA_ERROR

    Hi there!
    I'm using Java Berkeley DB (JE) as a caching tool between two applications (the backend application is quite slow, so we cache some requests). We are caching XML data (one request XML can be ~4MB); if the XML data exists we update the cache entry, and if it does not exist we insert a new entry. There can be many concurrent hits at the same time.
    Currently I have tested with JMeter, and with 10 threads all works fine, but if I increase to 20 threads the following error occurs:
    2013-05-14 15:31:15,914 [ERROR] CacheImpl - error occurred while trying to get data from cache.
    com.sleepycat.je.EnvironmentFailureException: (JE 5.0.73) JAVA_ERROR: Java Error occurred, recovery may not be possible. fetchTarget of 0x11/0x1d1d parent IN=8 IN class=com.sleepycat.je.tree.IN lastFullVersion=0x1b/0x4cd lastLoggedVersion=0x1b/0x4cd parent.getDirty()=false state=0 fetchTarget of 0x11/0x1d1d parent IN=8 IN class=com.sleepycat.je.tree.IN lastFullVersion=0x1b/0x4cd lastLoggedVersion=0x1b/0x4cd parent.getDirty()=false state=0 fetchTarget of 0x11/0x1d1d parent IN=8 IN class=com.sleepycat.je.tree.IN lastFullVersion=0x1b/0x4cd lastLoggedVersion=0x1b/0x4cd parent.getDirty()=false state=0 fetchTarget of 0x11/0x1d1d parent IN=8 IN class=com.sleepycat.je.tree.IN lastFullVersion=0x1b/0x4cd lastLoggedVersion=0x1b/0x4cd parent.getDirty()=false state=0 fetchTarget of 0x11/0x1d1d parent IN=8 IN class=com.sleepycat.je.tree.IN lastFullVersion=0x1b/0x4cd lastLoggedVersion=0x1b/0x4cd parent.getDirty()=false state=0 fetchTarget of 0x11/0x1d1d parent IN=8 IN class=com.sleepycat.je.tree.IN lastFullVersion=0x1b/0x4cd lastLoggedVersion=0x1b/0x4cd parent.getDirty()=false state=0 fetchTarget of 0x11/0x1d1d parent IN=8 IN class=com.sleepycat.je.tree.IN lastFullVersion=0x1b/0x4cd lastLoggedVersion=0x1b/0x4cd parent.getDirty()=false state=0 fetchTarget of 0x11/0x1d1d parent IN=8 IN class=com.sleepycat.je.tree.IN lastFullVersion=0x1b/0x4cd lastLoggedVersion=0x1b/0x4cd parent.getDirty()=false state=0 fetchTarget of 0x11/0x1d1d parent IN=8 IN class=com.sleepycat.je.tree.IN lastFullVersion=0x1b/0x4cd lastLoggedVersion=0x1b/0x4cd parent.getDirty()=false state=0 fetchTarget of 0x11/0x1d1d parent IN=8 IN class=com.sleepycat.je.tree.IN lastFullVersion=0x1b/0x4cd lastLoggedVersion=0x1b/0x4cd parent.getDirty()=false state=0 fetchTarget of 0x11/0x1d1d parent IN=8 IN class=com.sleepycat.je.tree.IN lastFullVersion=0x1b/0x4cd lastLoggedVersion=0x1b/0x4cd parent.getDirty()=false state=0 fetchTarget of 0x11/0x1d1d parent IN=8 IN class=com.sleepycat.je.tree.IN lastFullVersion=0x1b/0x4cd lastLoggedVersion=0x1b/0x4cd parent.getDirty()=false state=0
        at com.sleepycat.je.dbi.EnvironmentImpl.checkIfInvalid(EnvironmentImpl.java:1507)
        at com.sleepycat.je.Environment.checkEnv(Environment.java:2185)
        at com.sleepycat.je.Environment.beginTransactionInternal(Environment.java:1313)
        at com.sleepycat.je.Environment.beginTransaction(Environment.java:1284)
        at com.ebcont.redbull.bullchecker.cache.impl.CacheImpl.get(CacheImpl.java:157)
        at com.ebcont.redbull.bullchecker.handler.EndpointHandler.doPerform(EndpointHandler.java:132)
        at com.ebcont.redbull.bullchecker.WSCacheEndpointServlet.doPost(WSCacheEndpointServlet.java:86)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:637)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
        at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
        at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
        at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
        at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
        at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
        at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
        at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:861)
        at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:606)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
        at java.lang.Thread.run(Unknown Source)
    Caused by: java.lang.OutOfMemoryError: Java heap space
    2013-05-14 15:31:15,939 [ERROR] CacheImpl - error occurred while trying to get data from cache.
    (The same EnvironmentFailureException and stack trace as above is logged again.)
    After restarting the server, I get the following error while trying to get data from the cache:
    java.lang.OutOfMemoryError: Java heap space
        at com.sleepycat.je.log.LogUtils.readBytesNoLength(LogUtils.java:365)
        at com.sleepycat.je.tree.LN.readFromLog(LN.java:786)
        at com.sleepycat.je.log.entry.LNLogEntry.readBaseLNEntry(LNLogEntry.java:196)
        at com.sleepycat.je.log.entry.LNLogEntry.readEntry(LNLogEntry.java:130)
        at com.sleepycat.je.log.LogManager.getLogEntryFromLogSource(LogManager.java:1008)
        at com.sleepycat.je.log.LogManager.getLogEntry(LogManager.java:848)
        at com.sleepycat.je.log.LogManager.getLogEntryAllowInvisibleAtRecovery(LogManager.java:809)
        at com.sleepycat.je.tree.IN.fetchTarget(IN.java:1412)
        at com.sleepycat.je.tree.BIN.fetchTarget(BIN.java:1251)
        at com.sleepycat.je.dbi.CursorImpl.fetchCurrent(CursorImpl.java:2261)
        at com.sleepycat.je.dbi.CursorImpl.getCurrentAlreadyLatched(CursorImpl.java:1466)
        at com.sleepycat.je.dbi.CursorImpl.getNext(CursorImpl.java:1593)
        at com.sleepycat.je.Cursor.retrieveNextAllowPhantoms(Cursor.java:2924)
        at com.sleepycat.je.Cursor.retrieveNextNoDups(Cursor.java:2801)
        at com.sleepycat.je.Cursor.retrieveNext(Cursor.java:2775)
        at com.sleepycat.je.Cursor.getNextNoDup(Cursor.java:1244)
        at com.ebcont.redbull.bullchecker.cache.impl.BDBCacheImpl.getStoredKeys(BDBCacheImpl.java:244)
        at com.ebcont.redbull.bullchecker.CacheStatisticServlet.doPost(CacheStatisticServlet.java:108)
        at com.ebcont.redbull.bullchecker.CacheStatisticServlet.doGet(CacheStatisticServlet.java:74)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:617)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
        at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
        at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
        at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
        at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
        at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
        at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
        at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:861)
        at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:606)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
    my bdb configuration:
    environmentConfig.setReadOnly(false);
    databaseConfig.setReadOnly(false);
    environmentConfig.setAllowCreate(true);
    databaseConfig.setAllowCreate(true);
    environmentConfig.setTransactional(true);
    databaseConfig.setTransactional(true);
    environmentConfig.setCachePercent(60);
    environmentConfig.setLockTimeout(2000, TimeUnit.MILLISECONDS);
    environmentConfig.setCacheMode(CacheMode.DEFAULT);
    environment path: C:/tmp/berkeleydb
    Tomcat JVM parameters: Initial Memory Pool: 1024 MB
    Maximum Memory Pool: 2048 MB
    Server: Windows Server 2008
    Memory: 8 GB

    Hi,
    The stack trace shows an OOME error due to running out of heap space.
    Could you give details on the exact Java version you are using, on what OS, and what JVM options you are using, in particular the max heap size (-Xmx)?
    Also, what JE cache size do you use? (If you do not set either MAX_MEMORY or MAX_MEMORY_PERCENT, the JE cache size will default to 60% of the JVM max heap size.)
    You should look into the way you are using transactions, cursors, etc. It might be possible that you are using long-running transactions that accumulate a large number of locks, or you might be opening more and more transactions without closing/completing them (by aborting or committing them). Is any of this the case for your application? You can check the lock and transaction statistics using Environment.getStats() and Environment.getTransactionStats() respectively.
    Aside from properly ending/closing transactions and cursors, you should also examine your cache statistics to understand the memory profile. See the following documentation sections on this:
    http://docs.oracle.com/cd/E17277_02/html/GettingStartedGuide/cachesize.html
    http://www.oracle.com/technetwork/database/berkeleydb/je-faq-096044.html#HowcanIestimatemyapplicationsoptimalcachesize
    http://www.oracle.com/technetwork/database/berkeleydb/je-faq-096044.html#WhyshouldtheJEcachebelargeenoughtoholdtheBtreeinternalnodes
    Regards,
    Andrei

  • Shmget failure when using a large cachesize with DB_SYSTEM_MEM flag enabled

    This is probably due to me not understanding the requirements well enough. I am using BDB with the env flags given below (notably, DB_PRIVATE disabled and DB_SYSTEM_MEM enabled). I call set_shm_key with the required args. However, the app fails at runtime with the error below if I define a 1.5 GB cache size.
    It seems that the system is calling shmget with a value (117441140) that is beyond the kernel constraints (shmmax=33554432). The app runs well if I do NOT set a cachesize and let it default. This implies some relationship between cachesize and shared memory size. Is that correct? What am I doing wrong here? Any help is appreciated.
    Thanks,
    Usamah
    Error:
    =========================
    shmget: key: 117441140: unable to create shared system memory region: Invalid argument
    =========================
    Setup:
    ==========================
    envFlags = DB_INIT_CDB | DB_CREATE | DB_INIT_MPOOL | DB_THREAD | DB_SYSTEM_MEM;
    dbEnv->set_cachesize(dbEnv, 1, 500 * 1024 * 1024, 4);
    shMemSegId = ftok(errorFile, 7);
    dbEnv->set_shm_key(hornetDbAbstract->dbEnv, (key_t)shMemSegId);
    ===========================
    Kernel shared memory settings:
    ===========================
    [umalik@tamerlane buzzer]$ cat /proc/sys/kernel/shmall
    2097152
    [umalik@tamerlane buzzer]$ cat /proc/sys/kernel/shmmax
    33554432
    [umalik@tamerlane buzzer]$ cat /proc/sys/kernel/shmmni
    4096
    ============================
    System:
    ============================
    [umalik@tamerlane buzzer]$ cat /proc/version
    Linux version 2.6.17-1.2142_FC4smp ([email protected]) (gcc version 4.0.2 20051125 (Red Hat 4.0.2-8)) #1 SMP Tue Jul 11 22:57:02 EDT 2006
    [umalik@tamerlane buzzer]$ cat /proc/meminfo
    MemTotal: 1814900 kB
    MemFree: 157236 kB
    Buffers: 39484 kB
    Cached: 888456 kB
    SwapCached: 0 kB
    Active: 1222584 kB
    Inactive: 356452 kB
    HighTotal: 917312 kB
    HighFree: 48980 kB
    LowTotal: 897588 kB
    LowFree: 108256 kB
    SwapTotal: 1966072 kB
    SwapFree: 1965500 kB
    Dirty: 100 kB
    Writeback: 0 kB
    Mapped: 753796 kB
    Slab: 55552 kB
    CommitLimit: 2873520 kB
    Committed_AS: 1052532 kB
    PageTables: 8592 kB
    VmallocTotal: 116728 kB
    VmallocUsed: 4104 kB
    VmallocChunk: 108876 kB
    HugePages_Total: 0
    HugePages_Free: 0
    HugePages_Rsvd: 0
    Hugepagesize: 2048 kB
    bdb version: 4.5.20
    ============================

    Hi Usamah,
    The values that you presented indicate that you are running with the default settings for shared system memory. If you print out the key generated by the "ftok" call, you will observe that 117441140 should be the base segment ID (that is, the key generated with the "ftok" call) plus 2, meaning the "shmget" call failed when attempting to allocate the memory segments for the Memory Pool subsystem. You will need a total of 3 segment IDs; more information on this can be found here:
    http://www.oracle.com/technology/documentation/berkeley-db/db/api_c/env_set_shm_key.html
    It's possible that your system wasn't able to find and allocate contiguous memory, even though you specified that the 1.5GB cache should be broken into 4 portions.
    I suggest you try to reconfigure the shared system memory by doing the following:
    - edit /etc/sysctl.conf and add the following lines:
    kernel.shmmax = 536870912
    kernel.shmall = 536870912
    - run the following from the console:
    /sbin/sysctl -p
    - reboot your system, and run the test application again.
    Let us know if the issue gets resolved this way, and if you need further help.
    Regards,
    Andrei
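    For what it's worth, here is a hedged sketch of the relationship being discussed: with DB_SYSTEM_MEM each cache region becomes a System V shared memory segment, so the kernel's shmmax must be at least as large as the largest single region (roughly the cache size divided by the region count), and shmall must cover the total. The paths and the ftok id below are illustrative only; the cache numbers mirror the original post.

    #include <sys/types.h>
    #include <sys/ipc.h>
    #include <stdio.h>
    #include <db.h>

    int main(void)
    {
        DB_ENV *env;
        key_t shm_key;
        int ret;

        if ((ret = db_env_create(&env, 0)) != 0)
            return 1;
        /* 1.5GB cache split into 4 regions of roughly 384MB each; every
         * region has to fit within the kernel's shmmax limit. */
        env->set_cachesize(env, 1, 500 * 1024 * 1024, 4);
        shm_key = ftok("/path/to/env/DB_CONFIG", 7);    /* illustrative key source */
        env->set_shm_key(env, (long)shm_key);
        ret = env->open(env, "/path/to/env",
            DB_CREATE | DB_INIT_MPOOL | DB_INIT_CDB | DB_THREAD | DB_SYSTEM_MEM, 0);
        if (ret != 0) {
            fprintf(stderr, "env->open: %s\n", db_strerror(ret));
            env->close(env, 0);
            return 1;
        }
        env->close(env, 0);
        return 0;
    }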

  • Isolation level and performance impact?

    Hi
    I'm new to BDB JE and building some prototypes to evaluate it.
    Given a simple use case of storing the key/value pair <String, List<Event>>, mapping a user to his/her list of events, in the db: new events are added for the user, and this happens (although fairly rarely) concurrently.
    Using Serializable isolation will prevent any corruption to the list of events, since the events are effectively added serially to the user. I was wondering:
    1. if there are any lesser levels of isolation that would still be adequate
    2. using Serializable isolation, is there a performance impact on updating users non concurrently (ie there's no lock contention since for the majority of cases concurrent updates won't happen) vs the default isolation level?
    3. building on 2. is there performance impact (other than obtaining and releasing locks) on using transactions with X isolation during updates of existing entries if there are no lock contention (ie, no concurrent updates) vs not using transactions at all?
    Thanks!
    Peter

    Have you seen this section of the Getting Started Guide on isolation levels in JE? http://www.oracle.com/technology/documentation/berkeley-db/je/TransactionGettingStarted/isolation.html
    Our default is Repeatable Read, and that could be sufficient for your application depending on your access patterns, and the semantic sense of the items in your list. I think you're saying that the data portion of a record is the list of events itself. With RepeatableRead, you'll always see only committed data, and retrieving that record from a JE database will always return a consistent view of a given list. See http://www.oracle.com/technology/documentation/berkeley-db/je/TransactionGettingStarted/isolation.html#serializable for an explanation of what additional guarantee you get with Serializable.
    > 2. using Serializable isolation, is there a performance impact on updating users non concurrently (ie there's no lock contention since for the majority of cases concurrent updates won't happen) vs the default isolation level?
    Yes, there is an additional cost. When using Serializable isolation, additional locks are taken on adjacent data records. In addition to the cost of acquiring the lock (which would be low in a non-contention case), there may be additional I/O needed to fetch adjacent data records.
    > 3. building on 2. is there performance impact (other than obtaining and releasing locks) on using transactions with X isolation during updates of existing entries if there are no lock contention (ie, no concurrent updates) vs not using transactions at all?
    In (2) we compared the cost of Serializable to RepeatableRead. In (3), we're comparing the cost of non-transactional access to the default Repeatable Read transaction.
    Non-transactional access is always a bit cheaper, even if there is no lock contention. On top of the cost of acquiring the locks, transactional operations use more memory and disk space and execute some transaction setup and teardown code. If there are concurrent operations, even if there is no contention on a given lock, there could be some stress on the lock table latches and transaction tables. That said, if your application is I/O bound, the CPU differences between non-txnal and txnal operations become more of a secondary factor. If you're I/O bound, the memory and disk space overhead does matter, because the cache is used more efficiently with non-txnal operations.
    Regards,
    Linda
    Thanks!
    Peter

  • Possible to set CacheSize for the single-JVM version of the data cache?

    Hi -
    I'm using Kodo 2.3.2.
    Can I set the size for the data cache when I'm using a subclass of the single-JVM version of the
    data cache? Or is this only a feature for the distributed version?
    Here are more details...
    I'm using a subclass of LocalCache so I can display the cache contents.
    The kodo.properties contains these lines:
    com.solarmetric.kodo.DataCacheClass=com.siemens.financial.jdoprototype.app.TntLocalCache
    com.solarmetric.kodo.DataCacheProperties=CacheSize=10
    When my test program starts, it displays getConfiguration().getDataCacheProperties() and the value
    is displayed as "CacheSize=10".
    But when I load 25 objects by OID and display the contents of the cache, I see that all the objects
    have been cached (not just 10). Can you shed light on this?
    Thanks,
    Les

    The actual size of the cache is a bit more complex than just the CacheSize
    setting. The CacheSize is the number of hard references to maintain in the
    cache. So, the most-recently-used 10 elements will have hard refs to them,
    and the other 15 will be moved to a SoftValueCache. Soft references are not
    garbage-collected immediately, so you might see the cache size remain at
    25 until you run out of memory. (The JVM has a good deal of flexibility in
    how it implements soft references. The theory is that soft refs should stay
    around until absolutely necessary, but many JVMs treat them the same as
    weak refs.)
    Additionally, pinning objects into the cache has an impact on the cache
    size. Pinned objects do not count against the cache size. So, if you have
    15 pinned objects, the cache size could be 25 even if there are no soft
    references being maintained.
    -Patrick
    In article <aqrpqo$rl7$[email protected]>, Les Selecky wrote:
    [the original question is quoted in full]
    Patrick Linskey [email protected]
    SolarMetric Inc. http://www.solarmetric.com

  • BDB cache size settings

    Hi,
    I have one question about bdb cache.
    If I set je.maxMemory=1073741824 in je.properties to limit the BDB cache to 1GB, is it possible for the real size of the BDB cache to be larger than 1GB for a long period?
    Thanks,
    Yu

    To add to what Linda said, you are correct that it is not a good idea to make the JE cache size too large relative to the heap size. Some extra room is needed for three reasons:
    1) JE's calculations of memory size are approximate.
    2) Your application may use variable amounts of memory, in addition to JE.
    3) Java garbage collection will not perform well in some circumstances, if there is not enough free space.
    The last reason is a large variable. To find out how this impacts performance, and how much extra room you need, you'll really have to go through a Java GC tuning exercise using a system that is similar to your production machine, and do a lot of testing.
    I would certainly never use less than 10 or 20% free space, and with large heaps where GC is active, you will probably need more free space than that.
    --mark

  • Column addition to 1 TB table - impact?

    Hi,
    This is just an evaluation question. My application is going to do a FIX predefined thing all the time - that's why I am thinking about using BDB in the first place. However, I may need to "add" some columns to my 1 TB database later - after I've added 1 TB of data to it. Do you think that can have an adverse impact? What kind? Would you elaborate a bit? Speed is not that critical, but it should be acceptable - it's going to be used in a simulation. If adding a column to a 1 TB table "can" have a negative impact, then I hope I can "program" this "changeability" into the application so that my end-users can add the columns themselves, isn't it?
    The primary key of the table will remain the same.
    I am still reading the manual pages along with my work! as advised by Bogdan earlier! :)
    Many thanks in advance,
    Asif

    [the original question is quoted]
    Since BDB has nothing to do with the record layout, you have all the control.
    For instance, you could add a version number to all your records, and design
    your application code in such a way that when it reads an "old" record,
    it will calculate/derive a value for the new column (with whatever logic fits
    your application).
    When you then write that record back it will write the new version.
    That is, the record version update will only happen when you have
    to write to the record anyway. This approach requires basically no extra
    database activity.
    Karl
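    A sketch of the version-tagged record approach Karl describes (the struct layouts and field names are invented for illustration; BDB itself only ever sees opaque byte strings):

    #include <stdlib.h>
    #include <string.h>
    #include <db.h>

    /* Hypothetical record layouts: v1 lacks the new "column", v2 adds it. */
    struct rec_v1 { unsigned char version; double price; };
    struct rec_v2 { unsigned char version; double price; double new_column; };

    /* Read a record; if it is an old version, derive the new column so the
     * caller can write the upgraded record back the next time it has to
     * write the record anyway. */
    static int get_as_v2(DB *db, DB_TXN *txn, DBT *key, struct rec_v2 *out)
    {
        DBT data;
        int ret;

        memset(&data, 0, sizeof(data));
        data.flags = DB_DBT_MALLOC;
        if ((ret = db->get(db, txn, key, &data, 0)) != 0)
            return ret;

        if (((unsigned char *)data.data)[0] == 1) {     /* old-format record */
            struct rec_v1 old;
            memcpy(&old, data.data, sizeof(old));
            out->version = 2;
            out->price = old.price;
            out->new_column = 0.0;   /* whatever derivation fits the application */
        } else {
            memcpy(out, data.data, sizeof(*out));
        }
        free(data.data);
        return 0;
    }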

  • Impact if FIGL10 DSO is not staged before FIGL10 Cube

    Hello SDNers,
    What would the impact be if I don't stage 0FIGL_O10 (DSO) before 0FIGL_C10 (cube)?
    The structures would be similar - no additional fields in the cube.
    Even the SAP-delivered DSO and cube have similar fields.
    The datasource, 0FI_GL_10, is delta capable.
    Some background: after having enhanced the FIGL10 DSO with some fields which do exist in the datasource, I would need 19 keys to determine the unique records, i.e. to pull all records from PSA without any records being overwritten.
    However, the OSS note relevant to this datasource suggested taking the help of artificial keys (for the same DSO; here 1 artificial key will concatenate 4 keys).
    Data volumes will be low - about 200 thousand records a month.
    Would the SAP-suggested method (artificial keys) be a better bet?
    Please share your thoughts... any input would really help.

    Jr Roberto,
    You can do this. In BW system.
    1. Go to table RSOLTPSOURCE.
    2. Enter 0FI_GL_10 as datasource.
    3. Check the delta process field value, and look that value up in table RODELTAM.
    That will tell you what kind of data is coming from that extractor. Based on the delta type (after image, before image and after image, etc.), you can decide whether you need a DSO or not.
    -Saket

  • [bdb bug] Repeatedly opening and closing a db may cause a memory leak

    My test code is very simple:
    char *filename = "xxx.db";
    char *dbname = "xxx";
    for ( ; ; ) {
        DB *dbp;
        DB_TXN *txnp;
        db_create(&dbp, dbenvp, 0);
        dbenvp->txn_begin(dbenvp, NULL, &txnp, 0);
        ret = dbp->open(dbp, txnp, filename, dbname, DB_BTREE, DB_CREATE, 0);
        if (ret != 0) {
            printf("failed to open db: %s\n", db_strerror(ret));
            return 0;
        }
        txnp->commit(txnp, 0);
        dbp->close(dbp, DB_NOSYNC);
    }
    I ran my test program for a long time, opening and closing the db repeatedly, then used the ps command and found that the RSS increases slowly:
    ps -va
    PID TTY STAT TIME MAJFL TRS DRS RSS %MEM COMMAND
    1986 pts/0 S 0:00 466 588 4999 980 0.3 -bash
    2615 pts/0 R 0:01 588 2 5141 2500 0.9 ./test
    after a few minutes:
    ps -va
    PID TTY STAT TIME MAJFL TRS DRS RSS %MEM COMMAND
    1986 pts/0 S 0:00 473 588 4999 976 0.3 -bash
    2615 pts/0 R 30:02 689 2 156561 117892 46.2 ./test
    I had read BDB's source code before, so I tried to debug this for about a week and found something that looks like a bug:
    If you open a db with both a filename and a dbname, BDB opens one db handle for the master db and one db handle for the subdb. Both handles get a file id from an internal API called __dbreg_get_id; however, only the subdb's id is returned to BDB's log region by calling __dbreg_pop_id. This leads to an id leak when the db is opened and closed repeatedly; as a result, __dbreg_add_dbentry calls realloc repeatedly to enlarge the dbentry area, which seems to be the reason for the RSS increase.
    Is this not a bug?
    Sorry for my poor English :)

    I have tested my program using Oracle Berkeley DB releases 4.8.26 and 4.7.25 on Red Hat 9.0 (kernel 2.4.20-8smp on an i686) and AIX Version 5.
    The problem is easy to reproduce by calling the open method of the db handle with both filename and dbname specified, and then calling the close method.
    My program is very simple:
    #include <stdlib.h>
    #include <stdio.h>
    #include <sys/time.h>
    #include "db.h"

    int main(int argc, char *argv[])
    {
        int ret, count;
        DB_ENV *dbenvp;
        char *filename = "test.dbf";
        char *dbname = "test";

        db_env_create(&dbenvp, 0);
        dbenvp->open(dbenvp, "/home/bdb/code/test/env",
            DB_CREATE | DB_INIT_LOCK | DB_INIT_LOG | DB_INIT_TXN | DB_INIT_MPOOL, 0);
        for (count = 0; count < 10000000; count++) {
            DB *dbp;
            DB_TXN *txnp;
            db_create(&dbp, dbenvp, 0);
            dbenvp->txn_begin(dbenvp, NULL, &txnp, 0);
            ret = dbp->open(dbp, txnp, filename, dbname, DB_BTREE, DB_CREATE, 0);
            if (ret != 0) {
                printf("failed to open db: %s\n", db_strerror(ret));
                return 0;
            }
            txnp->commit(txnp, 0);
            dbp->close(dbp, DB_NOSYNC);
        }
        dbenvp->close(dbenvp, 0);
        return 0;
    }
    My DB_CONFIG is as follows:
    set_cachesize 0 20000 0
    set_flags db_auto_commit
    set_flags db_txn_nosync
    set_flags db_log_inmemory
    set_lk_detect db_lock_minlocks

  • Impact of Addition of New Value Fields in the existing Op. Concern-COPA

    Hi All,
    Want to know the steps of adding new value fields in the existing operating concern in COPA?
    What is the overall impact of addition of New Value fields in the running Operating Concern?
    How do we test the addition of new value fields?
    Is the addition of New Value fields to the running Operating Concern advisable?
    Your support and advice is highly anticipated and appreciated.
    Thanks & Regards
    9819528669
    Other details are as follows...
    VALUE FIELDS: Defines the structure of your Costs & Revenues. (Op. Concern: 120 Value Fields)
    1) The client requires three new value fields to be created. Value fields for:
    - Other Airport Charges - International
    - Cargo Commission Cost
    - Personal Cost (Direct)
    2) What are the steps involved in the creation of new value fields? The steps are:
    1) Before creating a new value field we need to check whether we can use already existing UNUSED value fields. (There are 62 value fields created for op concern 1000; in production the value fields TBUL1-L7 are "to be used", I assume - screen shot 1 provided. My predecessor has used value fields VV291, VV292, VV380, the original descriptions being TBUL3, TBUL4, and TBUL1. I believe he has changed the descriptions in the development client and created a transport request - ref. screen shot 2.)
    2) You can create a new value field through T-Code KEA6 (4-5 characters beginning with VV) - my predecessor has reused the value fields originally created; he has not created a new one, I believe. (Please give your comments.)
    3) Specify whether this field is for Currency or Quantity (currency is defined in the attributes of the op concern and quantity is defined by its own field - unit of measure) - my predecessor has configured the three value fields as Currency.
    4) Describe how the values in this field are aggregated over characteristics (SUM, LAS, AVG) - my predecessor has aggregated all three value fields as SUM and they are in Active status.
    5) After the value field is created you have to add the value field (active status only) to the operating concern by editing the data structure. (I guess this is done in the next step.)
    6) Assign the newly created value fields to the operating concern - T-Code KEA0. (In the development client the value fields are assigned to op concern 1000, refer to screen shot 3. In the production client those three value fields also exist (does that mean they are assigned? your comments please), but they have the original descriptions TBUL3, TBUL4, and TBUL1 - refer to screen shot 4.)
    7) After the data structure is defined you need to activate it (this creates the plan vs actual database) - go to the data structure tab and choose Activate. The status in the dev client is Active with the correct description, but in the production client it is Active with the OLD description. After addition of the value field you need to regenerate the operating concern, T-Code KEA0 - right?
    8) Are condition types assigned to value fields? Don't know - T-Code KE45. (I think this is NOT required in our case - please give your comments.)
    9) Define and assign valuation strategy - cost assigned to value fields - T-Code KE4U. (I think this is also NOT required in our case.)
    10) Define the PA Transfer Structure for Settlement - T-Code KEI1. (This step is crucial - I think I have to include the newly created value fields, but I am not aware how to do it, what the connectivity is, or what happens after this is done.)
    Note: My predecessor has created a Transport Request No# KEDK902167 for the value fields created by him.
    3) Has my predecessor performed all the steps for value field creation? How do I test and check that?
    4) If yes, do I need to perform additional configuration, or can I proceed to transport it to Production?
    5) What are the COPA Allocation Structure and PA Transfer Structure? Where and why are they used?
    6)     What are the different methods of cost and revenue allocations in SAP?
    7) I have checked the status of the value fields across clients. It is as follows:
       Value Field  Value Field For                      Description  Development  Quality  Production
    1  VV291        Other Airport Charges International  TBUL3        Exists       DNE      DNE
    2  VV292        Cargo Commission Cost                TBUL4        Exists       DNE      DNE
    3  VV380        Personal Cost - Direct               TBUL1        Exists       DNE      DNE
    #DNE: Does Not Exist (assumed, since the description given to the value field is not the same as in the development client.)

    Hi Sree,
    After creating the value field and saving, a transport request number appears. Copy that request number, go to SE01, select the request and transport it (click the truck symbol), and send a mail to the Basis team.
    Thank you!
    Best Regards,
    Pradheep.

  • Help needed in index creation and its impact on insertion of records

    Hi All,
    I have a situation like this, and the process involves 4 tables.
    Among the 4 tables, 2 tables have around 30 columns each, and the other 2 have 15 columns each.
    This process contains a validation and an insert procedure.
    I have already created some 8 indexes on one table, and an average of around 3 indexes on the other tables.
    Now the situation is that I have a select statement in the validation procedure which involves all 4 tables.
    When I run that select statement, it takes around 30 secs, and when checked, the plan shows around 21K of memory.
    Now I am in a situation where I need to create new indexes on all the tables for the purpose of this select statement.
    As for how often this select statement executes: for around 1000 inserts into the table, this select statement gets executed around 200 times, and the record count of these tables would be around one crore (10 million).
    Will the number of indexes created on a table impact insert statement performance, or can we create as many indexes as we want on a table? Which is the best practice?
    Please guide me on this!
    Regards,
    Shivakumar A

    Hi,
    index creation will most definitely impact your DML performance because when inserting into the table you'll have to update index entries as well. Typically it's a small overhead that is acceptable in most situations, but the only way to say definitively whether or not it is acceptable to you is by testing. Set up some tests, measure performance of some typical selects, updates and inserts with and without an index, and you will have some data to base your decision on.
    Best regards,
    Nikolay

  • Photoshop CS2 trial to full lic - impact on LR

    Hi,
    Here are what I have installed in this order:
    - Expired trial Photoshop CS2
    - Lightroom (CS2 already expired when I install LR)
    I am going to reinstall a fully licensed Photoshop CS2 soon; would this have an impact on my existing LR?
    1. Do I need to reinstall LR so that LR can use (or rather can 'see')
    2. Photoshop CS2 to open individual files?
    3. Would changes on the RAW images done by LR be reflected in CS2?
    4. I like LR's Library layout, do I really need to install Adobe Bridge?
    5. If I use CS2 to open files directly from LR (bypassing Bridge), what colour management is best for both worlds? ProPhoto RGB?
    I use a Mac by the way.
    Thanks.
    Thomas

    1. No, LR is not affected by CS2 licensing procedures. LR doesn't even use activation like CS2 does, which is very much appreciated.
    2, 3. You have to ensure that you use the latest Camera Raw plugin for CS2 (ACR 3.7). If you activate the automatic update of XMP metadata you will be able to see changes you made in LR in Bridge and CS2.
    4. You do not have to. But if you are using CS2 heavily besides LR, I personally would recommend Bridge.
    5. It makes absolutely no difference if you open a file directly in CS2 or via Bridge concerning color management. ProPhoto RGB is very well suited to deal with 16Bit raw photos and it is the internally used color space of LR. But please note that there is no best choice. It really depends on your workflow and output.

  • Can't view Impact and Lineage tab in Metadata report

    Post Author: mohideen_km
    CA Forum: Data Integration
    Hi guys,
    I have a problem with the Metadata Report.
    In Impact and Lineage Analysis, I can view only the Overview tab; I cannot view Impact and Lineage.
    Help me to figure this out.
    Mohideen

    Hi,
    Select the text column (varchar column) -> Treat Text as -> Custom Text Format -> remove @ (default) -> add [html]<script>sometext('@')</script> followed by your JavaScript.
    For example:
    [html]<script>buildGoogleChart('@')</script> <head> <script type="text/javascript"> function show_alert() { alert("Hello! I am an alert box!"); } </script> </head> <body> <input type="button" onclick="show_alert()" value="Show alert box" /> </body> </html>
    Hope this helps.
    cheers,
    Aravind
