Structure of OLAP cache

Hi friends,
Can anyone explain the structure of the OLAP cache?
Regards,
Rajesh

Hi Rajesh,
Please check this link, which explains the different types of cache:
http://help.sap.com/saphelp_bw320/helpdata/en/41/b987eb1443534ba78a793f4beed9d5/frameset.htm
For cache structure:
http://help.sap.com/saphelp_bw320/helpdata/en/41/b987eb1443534ba78a793f4beed9d5/frameset.htm
For OLAP cache monitor:
http://help.sap.com/saphelp_bw320/helpdata/en/41/b987eb1443534ba78a793f4beed9d5/frameset.htm
Hope this helps.
Please assign points if the links are useful.
Regards
CSM Reddy

Similar Messages

  • Document about buffer cache mechanism

    Hi gurus,
    I want some documentation about the buffer cache's internal mechanism, for example the internal structure of the buffer cache, how the buffer cache works, and the LRU mechanism. The more detailed the better.
    I have searched the Oracle Documentation Library with "buffer head" and "buffer descriptor" as keywords, but I did not find what I want.
    Many thanks in advance.

    Hi Kevin,
    I believe the information you are looking for is not published by Oracle and is available only to Oracle Support and other employees. You can refer to Note 62172.1 for the buffer cache mechanism. It mentions a note with "@" at the beginning, meaning the remark was meant to be unpublished:
    "@ See Note 104937.1 for a description "
    Maybe the above note has some description. Anyway, I suggest you go through Note 62172.1 to get a basic understanding, which should be sufficient for DBAs.
    Cheers
    Amit

  • Coherence and database backend updates

    Hi
    I am new to Coherence, and I like its features such as the replicated cache, cache-through, etc.
    My question is: if I am using Coherence with cache-through and partitioned caching, and the data is updated on the back end through an Oracle database stored procedure, how does the Coherence cache get the latest data changed by the stored procedure? Is there an event-driven mechanism to invalidate the cache so that it reloads the data, or is that not good practice in this scenario?
    Rgds
    Anil

    Hi Anil,
    it really depends on what you need to achieve.
    There is a very good wiki which describes most of the things you can do with Coherence at the url: http://wiki.tangosol.com/display/COH33UG/Coherence+3.3+Home
    However, since you have an existing database model which you want to retain (the data must still reside in the database), you might not be totally free in how you represent the data in Coherence, depending on your consistency requirements.
    The best feature of Coherence to significantly reduce the load on the database is the write-behind cache.
    Write-behind functionality allows you to coalesce multiple updates to the same DB row into a single update as data is written out only after a certain amount of time thereby combining the changes from multiple updates to a single one.
    It also allows ripe updates to multiple cached entries for which the primary copies reside in the same cache node to be written out in the same database operation (preferably in batch mode).
    Due to these behaviors write-behind has a profound effect on write-heavy applications.
    However, that mode of operation requires that any logic which needs to query the data-set consistently, and all operations which change the data-set, go through the cache, because the database is not guaranteed to be consistent. Therefore it might not be a good fit for you.
    Another approach: if you want to make your DB changes directly in the DB, you can simply cache data in whatever structures suit your access patterns in a read-through cache, and if there are any changes to the database you invalidate the entries which have become stale.
    The cache structures can be whatever you choose as appropriate to your logic: you can cache single entries, entire top-down object hierarchies, or query results keyed by the query parameters.
    The point is that you are free to choose the most appropriate structure for what to cache, as opposed to the caching features of other frameworks, which align the caching structures to their classes rather than to your needs.
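    As a minimal illustration of the read-through-plus-invalidation approach above (the cache name, the key type and the way the DB change is detected are all assumptions on my part; the CacheLoader itself would be wired up in the cache configuration), the invalidation side can be as simple as removing the stale entry:
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    public class StaleEntryInvalidator {
        // Call this whenever the application learns that a row changed in the
        // database (e.g. from a polling job or a DB notification mechanism).
        public static void invalidate(Long changedKey) {
            NamedCache orders = CacheFactory.getCache("orders"); // hypothetical cache name
            // Removing the entry is enough: the next get() goes through the
            // configured CacheLoader and re-reads the fresh row from Oracle.
            orders.remove(changedKey);
        }
        public static void main(String[] args) {
            invalidate(42L);
            // A subsequent read repopulates the entry from the database:
            System.out.println(CacheFactory.getCache("orders").get(42L));
        }
    }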
    Just keep in mind that without serious locking (which adversely affects both read and write performance), a change might occur to one or more entries between reading any two or more entries from the cache. This means that when using multiple entries from the cache, there might not be any transaction-set in the database which contains all the entries in the state in which you got them.
    So if you need any such guarantees, the data you need them on must reside in a single cache entry, and that cache entry must have been retrieved from the database with a transaction which provides those guarantees in the first place. (If you read data from the database with READ_COMMITTED isolation using multiple queries, you don't get that consistency even from the database, as some of the entries read by earlier operations in the transaction might have been overwritten by another transaction committing before your later reads.)
    There can be other approaches as well.
    It really all depends on your access patterns and without knowing more about that it is hard to suggest the correct solution.
    Best regards,
    Robert

  • Relinking missing photo/exclamation mark

    I have a photo that "can't be found" in LR and has one of those exclamation marks in the top right of the photo.
    I have clicked on it and pressed the Locate button on the pop-up message, and it comes up with a folder "iPod photo cache". Clicking on the folder, it has a series of empty folders numbered F00-F50, an Apple TV photo database and a photos database. I don't see my photo anywhere. I have clicked on a number of the folders and they all appear empty.
    I thought the idea was that the missing photos would appear, so can you tell me how to find and re-link it?
    Seeing that the folder is called iPod Photo Cache, I have plugged my iPod into the laptop, but that does not appear to make any difference; the exclamation mark is still present and the photo cannot be found.
    Hope there is a simple trick that I am missing here and that you can point it out to me.
    thanks
    john

    Missing photos don't just automatically appear when you choose the find-missing-photo option. You have to browse to the folder where you know the image is located and select the image; this shows Lightroom where the image is on your hard drive. The browser window that opens is showing you the folder structure for the preview cache; these previews are generated by Lightroom to facilitate the editing process. You need to use that dialogue box to navigate to the folder containing the real image and select the image. If you have a missing image, it's possible that you also have a missing folder. If you right-click on the missing folder you can choose to update the folder location, but again, you have to navigate to where the folder is located; Lightroom isn't going to find it automatically.

  • DBA_HIST_SQLSTAT reliability

    We are on 10.2.0.4.
    We execute an insert statement 30000 times to insert 30000 records.
    The statement is correctly captured into AWR and we can see it in DBA_HIST_SQLSTAT. The problem is that executions_delta column is far less than 30000.
    Can somebody explain to us why there is this gap between executions_delta and the real number of executions?

    AlleT wrote:
    I did not check v$sql while the statement was running, but how would the number of child cursors affect executions_delta?
    Initially it was just a collection of thoughts about why the figures might be misleading - the multiple sessions / multiple child cursors thing could give some indication of stats getting overwritten rather than being accumulated. (Multiple sessions would all have to be updating the same stats structure in the library cache - in your version of Oracle - so there might be an opportunity for "lost writes", for example; multiple child cursors could have some accidental side effect, like zeroing some of the stats as a new child cursor is created - it shouldn't happen, but I have seen some strange behaviour in execution stats in various older versions of Oracle when there were multiple child cursors, e.g. a set of child stats dropping to zero and then "acquiring" the stats from another child cursor.)
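    For what it's worth, a quick way to see whether the executions are being spread (or lost) across child cursors is to query V$SQL directly while the test is running. A minimal JDBC sketch - the connection details and the SQL_ID literal are placeholders, not values from your system:
    import java.sql.*;
    public class ChildCursorCheck {
        public static void main(String[] args) throws SQLException {
            try (Connection con = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//dbhost:1521/orcl", "scott", "tiger"); // placeholders
                 PreparedStatement ps = con.prepareStatement(
                     "select child_number, executions, parse_calls " +
                     "from v$sql where sql_id = ?")) {
                ps.setString(1, "0abcdefghijk1");   // placeholder SQL_ID
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        // One row per child cursor; the sum of executions here can be
                        // compared with executions_delta in DBA_HIST_SQLSTAT.
                        System.out.printf("child %d: %d executions, %d parses%n",
                            rs.getInt(1), rs.getLong(2), rs.getLong(3));
                    }
                }
            }
        }
    }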
    Was the Java program connecting to the instance through a single dedicated server, or could it have been connecting through a connection pool?
    Regards
    Jonathan Lewis

  • Cache directory structure

    I clear my Cache regularly (with EACH shutdown at minimum). I am aware of and use the Cache location setting "browser.cache.disk.parent_directory".
    Is there a setting to go back to the Cache directory structure of 3.6?
    I am not experiencing Cache access disk slowdown issues and I liked the single directory structure.

    No. That new structure was created to allow more and larger files to be cached, and to overcome the limitations of the cache used in older Firefox versions.
    *[https://bugzilla.mozilla.org/show_bug.cgi?id=597224 Bug 597224] - HTTP Cache: use directory tree to store cache files

  • Boot caches, missing file structure, external HDs

    I installed Leopard two days ago, along with buying a new 320GB external. I backed up 20 gigs of files to give my 160 gig internal HD some breathing room. I was showing my friend how to format an external using Disk Utility and accidentally started formatting my new external. I realized my error as soon as I hit the format HFS button and stopped it before it started formatting the disk.
    When I shut down later, Finder told me my boot caches needed to be updated. This took about 10 minutes. When I booted up later that day, my external drive had a different icon (one of the orange drives) and a different name. It said it didn't have any data on it, as if the directory information had been deleted.
    I need to be able to access the data that's still on the external. How can I do this?

    I've given up trying to recover that data. I guess I just fried the file structure information.

  • How to organise cache structure in distributed and fault tolerant way?

    Suppose I have two caches - cacheA that contains objects of type A and cacheB containing objects of type B; caches are distributed across several physical servers.
    B-objects should listen to updates of objects in cacheA.
    How should the objects be implemented? In particular, how should the subscription be organized? For example, if in JVM1 I create object B and subscribe it to object A, then launch JVM2 in the same cluster and then shut down JVM1, object B will be available in cacheB on JVM2 but will not receive any updates about object A anymore.
    How can this problem be solved?

    That does not answer my question, unfortunately. Let's say I have something that listens to all updates to all objects in a cache that is distributed across several physical servers. The requirements are:
    - whenever an object in the cache is changed the change is handled somehow
    - the code that handles the update should have read/write access to the cache
    - there must be one and only one notification of that kind
    - network traffic is minimized, i.e. there is no single dedicated JVM that listens to all updates, ideally each node in a cluster serves updates for its local data
    I'm thinking about setting up a listener on the backing map, but I'm not so sure.
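    For comparison, the simplest form of subscription is a cluster-wide MapListener registered against cacheA. This is only a sketch (the cache names and the handling code are assumptions), and every member that registers it receives every event, so it does not by itself give the partition-local, exactly-once behaviour listed above; a backing-map listener declared in the cache configuration, by contrast, fires only on the member that owns the partition:
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.MapEvent;
    import com.tangosol.util.MapListener;
    public class CacheASubscriber {
        public static void main(String[] args) {
            NamedCache cacheA = CacheFactory.getCache("cacheA");
            cacheA.addMapListener(new MapListener() {
                public void entryInserted(MapEvent e) { handle(e); }
                public void entryUpdated(MapEvent e)  { handle(e); }
                public void entryDeleted(MapEvent e)  { handle(e); }
                private void handle(MapEvent e) {
                    // Update the dependent B-objects in cacheB here.
                    System.out.println("A changed: " + e.getKey());
                }
            });
            // Keep the JVM alive so events keep arriving (sketch only).
        }
    }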

  • Changing the Firefox cache structure

    I have a problem. I want to see my cache in one directory. When I used Firefox 3.x, all the cache files were saved in one directory, like the IE Temporary Internet Files folder, but now they are saved in the cache directory and then in subdirectories 0, 1, 2, ... and then in subdirectories 1e, 6d, ...
    Please help me change this behaviour and save all the cache files in one main cache directory instead of distributing them across these subdirectories.
    My email is [email protected]

    * [https://bugzilla.mozilla.org/show_bug.cgi?id=597224 Bug 597224] - HTTP Cache: use directory tree to store cache files
    You can do a search for all files *.* in Windows Explorer (or another file manager) to see all the files in the cache in one list.

  • Error while accessing a library using content and structure

    I have a library with documents and folders inside it. When I open the library using Content and Structure, I get an error with a correlation ID. When I checked the logs with the correlation ID, I got the error message "View 'All Document' does not exist."
    'All Document' is the name of the default view on the library.
    When I open the library from View All Site Content, the library opens fine.
    Please help!

    Hello Victoria,
    Thanks for  the response.
    I have tried the troubleshooting steps given by you.
    Check if the issue occurs with other users. Use another user to access the library in Content and Structure and then compare the results. --
    I tried with different users, but no luck.
    Make sure that the user account with the issue has permission to view the All Documents view of the library. --
    Yes, the user account has the permission.
    Check if the issue occurs with other libraries in Content and Structure. If not, I recommend saving the library as a template including contents and then creating a new library based on this template. After that, use the new library instead of the old library. --
    No other library has this problem. I cannot save the library as a template including the contents, as it has many folders and files; the current size of the library is 786 MB.
    Clear the cache in the browser or use another browser to see if the issue still occurs. --
    Tried, but the issue persists.
    Best regards,
    Ratnesh

  • Dynamic Creation of Physical Data Server / Agent cache Refresh

    Scenario:
    I have a requirement to load data from an XML source to an Oracle DB. The XML source will change at run time, but the XSD of the XML will remain the same (so I don't have to change the logical data server, models, mappings, interfaces and scenarios - only the physical data server will change at runtime). I have created all the ODI artifacts using ODI Studio in my work repository, and I am using the ODI SDK to create the physical data server for the changed XML data source and then invoking the agent programmatically.
    Problem:
    The data is loaded from the XML source to the Oracle DB the first time, but it does not work from the second time onwards. If I restart the agent, it works again for one more run. I think that on the first run the agent keeps some sort of cache of the physical data server details, so whenever I change the data server something goes wrong, leading to the exception below. So I want to know whether there is any mechanism to handle dynamic data servers, or any way of clearing the agent cache, if such a cache exists.
    Caused By: org.apache.bsf.BSFException: exception from Jython:
    Traceback (most recent call last):
    File "<string>", line 41, in <module>
    AttributeError: 'NoneType' object has no attribute 'createStatement'
         at org.apache.bsf.engines.jython.JythonEngine.exec(JythonEngine.java:146)
         at com.sunopsis.dwg.codeinterpretor.SnpScriptingInterpretor.execInBSFEngine(SnpScriptingInterpretor.java:346)
         at com.sunopsis.dwg.codeinterpretor.SnpScriptingInterpretor.exec(SnpScriptingInterpretor.java:170)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.scripting(SnpSessTaskSql.java:2458)
         at oracle.odi.runtime.agent.execution.cmd.ScriptingExecutor.execute(ScriptingExecutor.java:48)
         at oracle.odi.runtime.agent.execution.cmd.ScriptingExecutor.execute(ScriptingExecutor.java:1)
         at oracle.odi.runtime.agent.execution.TaskExecutionHandler.handleTask(TaskExecutionHandler.java:50)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.processTask(SnpSessTaskSql.java:2906)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java:2609)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatAttachedTasks(SnpSessStep.java:540)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java:453)
         at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java:1740)
         at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java:1596)
         at oracle.odi.runtime.agent.processor.impl.StartScenRequestProcessor$2.doAction(StartScenRequestProcessor.java:582)
         at oracle.odi.core.persistence.dwgobject.DwgObjectTemplate.execute(DwgObjectTemplate.java:214)
         at oracle.odi.runtime.agent.processor.impl.StartScenRequestProcessor.doProcessStartScenTask(StartScenRequestProcessor.java:513)
         at oracle.odi.runtime.agent.processor.impl.StartScenRequestProcessor$StartScenTask.doExecute(StartScenRequestProcessor.java:1070)
         at oracle.odi.runtime.agent.processor.task.AgentTask.execute(AgentTask.java:123)
         at oracle.odi.runtime.agent.support.DefaultAgentTaskExecutor$1.run(DefaultAgentTaskExecutor.java:50)
         at org.springframework.core.task.SyncTaskExecutor.execute(SyncTaskExecutor.java:50)
         at oracle.odi.runtime.agent.support.DefaultAgentTaskExecutor.executeAgentTask(DefaultAgentTaskExecutor.java:41)
         at oracle.odi.runtime.agent.processor.TaskExecutorAgentRequestProcessor.doExecuteAgentTask(TaskExecutorAgentRequestProcessor.java:93)
         at oracle.odi.runtime.agent.processor.TaskExecutorAgentRequestProcessor.process(TaskExecutorAgentRequestProcessor.java:83)
         at oracle.odi.runtime.agent.support.DefaultRuntimeAgent.execute(DefaultRuntimeAgent.java:68)
         at oracle.odi.runtime.agent.servlet.AgentServlet.processRequest(AgentServlet.java:445)
         at oracle.odi.runtime.agent.servlet.AgentServlet.doPost(AgentServlet.java:394)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:821)
         at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:503)
         at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:389)
         at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
         at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
         at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:765)
         at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:417)
         at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
         at org.mortbay.jetty.Server.handle(Server.java:326)
         at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:534)
         at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:879)
         at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:747)
         at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:218)
         at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
         at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228)
         at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:520)

    Hi,
    If you want to load multiple files (with the same structure) through one connection, then in Topology create M.XSD for the M.XML file.
    Create three directories:
    RAW -- contains the file with its original name.
    PRO -- processing area where the files are moved one by one and renamed to M.XML.
    OUT -- once the file data has been loaded into the tables, move the file M.XML from PRO to OUT.
    Go to odiexperts to create a loop.
    Use OdiFileMove (to move and rename/mask) to move A.XML from RAW to PRO and rename it to M.XML.
    Use OdiFileMove to move M.XML to the OUT folder and then rename it back to A.XML.
    Use variables to store the file names and refresh them.
    "'NoneType' object has no attribute 'createStatement'": it seems that the structure of your file is different and you are trying to load different files into the same schema. If the structure is the same, then use the procedure "SYNCHRONIZE ALL" after every load...
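    Purely as an illustration of the RAW/PRO/OUT staging pattern described above (directory and file names are the ones from the reply; inside ODI the moves would be done with OdiFileMove in a loop, not with this code), the same flow in plain Java looks roughly like this:
    import java.io.IOException;
    import java.nio.file.*;
    public class XmlStagingSketch {
        public static void main(String[] args) throws IOException {
            Path raw = Paths.get("RAW");
            Path pro = Paths.get("PRO");
            Path out = Paths.get("OUT");
            try (DirectoryStream<Path> files = Files.newDirectoryStream(raw, "*.XML")) {
                for (Path source : files) {
                    // 1. Move the file into the processing area under the fixed name M.XML.
                    Path working = pro.resolve("M.XML");
                    Files.move(source, working, StandardCopyOption.REPLACE_EXISTING);
                    // 2. ...run the ODI scenario that loads M.XML here...
                    // 3. Move it to OUT and restore its original name.
                    Files.move(working, out.resolve(source.getFileName()),
                               StandardCopyOption.REPLACE_EXISTING);
                }
            }
        }
    }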
    Edited by: neeraj_singh on Feb 16, 2012 4:47 AM

  • Changes in ECC50 BAPI Structures

    Hi,
    I am working on an upgrade project (4.6C to ECC 5.0). I would like to know which BAPIs have changed - for example, structures added to a BAPI, fields deleted, or fields added to the structures.
    Please let me know if there is an SAP tool or a good way to find the differences.
    Thanks,
    Kiran.

    Ernest,
    Choose Tools --> Options --> Compiler.
    In the Data Service Retrieval combo box,
    choose "Do not cache data service, get latest".
    Log off from your model.
    Log on again and go to the context menu of your BAPI.
    Then choose Define Data Service; your import parameters should appear now.
    Regards,
    Ahmed Salah
    Please reward points if this is helpful

  • File name with symbols won't delete from trash.cache\trash\cache folder.

    I found this weird file name with symbols (squares and the like) in the trash.cache\trash\cache folder. I can't seem to delete it from Windows, I can't get at it from the DOS prompt, and Windows safe mode won't delete it either.
    Any suggestions as to what it is and how to get rid of it?
    At present I am trying a reinstall of Firefox and a virus scan.
    Thanks
    Peter

    I tried to do the instructions Adobe gave me but when
    I put in the disc that came with my mac and hold down
    C when it restarts it takes me to the screen to do a
    fresh install.
    At that point go to the Menu & select Disk Utility - I can't remember exactly which menu but you should be able to find it easily... there isn't too much there
    I went into Disk Utility through the Apps folder
    and for some reason the option to repair isn't
    highlighted and it won't let me click it. I tried to
    repair permissions/verify but it doesn't change
    anything. I looked at the info and it says the volume
    can be repaired, but it won't let me.
    You can't Repair the disk the system is currently running the OS from - That's why you have to boot from the Installer disk (or some other start-up disk). Repair Disk addresses directory structure issues - totally separate from what Repair Permissions does.
    HTH|:>)
    Bob J.

  • Poor performance of the BDB cache

    I'm experiencing incredibly poor performance of the BDB cache and wanted to share my experience, in case anybody has any suggestions.
    Overview
    Stone Steps maintains a fork of a web log analysis tool - the Webalizer (http://www.stonesteps.ca/projects/webalizer/). One of the problems with the Webalizer is that it maintains all data (i.e. URLs, search strings, IP addresses, etc.) in memory, which puts a cap on the maximum size of the data set that can be analyzed. Naturally, BDB was picked as the fastest database for maintaining the analyzed data set on disk and producing reports by querying the database. Unfortunately, once the database grows beyond the cache size, overall performance goes down the drain.
    Note that the version of SSW available for download does not support BDB in the way described below. I can make the source available for you, however, if you find your own large log files to analyze.
    The Database
    Stone Steps Webalizer (SSW) is a command-line utility and needs to preserve all intermediate data for the month on disk. The original approach was to use a plain-text file (webalizer.current, for those who know anything about SSW). The BDB database that replaced this plain text file consists of the following databases:
    sequences (maintains record IDs for all other tables)
    urls - primary database containing URL data: record ID (key), the URL itself, and grouped data such as the number of hits, transfer size, etc.
    urls.values - secondary database that contains a hash of the URL (key) and the record ID linking it to the primary database; this database is used for value lookups.
    urls.hits - secondary database that contains the number of hits for each URL (key) and the record ID to link it to the primary database; this database is used to order URLs in the report by the number of hits.
    The remaining databases are here just to indicate the database structure. They are the same in nature as the two described above. The legend is as follows: (s) will indicate a secondary database, (p) - primary database, (sf) - filtered secondary database (using DB_DONOTINDEX).
    urls.xfer (s), urls.entry (s), urls.exit (s), urls.groups.hits (sf), urls.groups.xfer (sf)
    hosts (p), hosts.values (s), hosts.hits (s), hosts.xfer (s), hosts.groups.hits (sf), hosts.groups.xfer (sf)
    downloads (p), downloads.values (s), downloads.xfer (s)
    agents (p), agents.values (s), agents.values (s), agents.hits (s), agents.visits (s), agents.groups.visits (sf)
    referrers (p), referrers.values (s), referrers.values (s), referrers.hits (s), referrers.groups.hits (sf)
    search (p), search.values (s), search.hits (s)
    users (p), users.values (s), users.hits (s), users.groups.hits (sf)
    errors (p), errors.values (s), errors.hits (s)
    dhosts (p), dhosts.values (s)
    statuscodes (HTTP status codes)
    totals.daily (31 days)
    totals.hourly (24 hours)
    totals (one record)
    countries (a couple of hundred countries)
    system (one record)
    visits.active (active visits - variable length)
    downloads.active (active downloads - variable length)
    All these databases (49 of them) are maintained in a single file. Maintaining a single database file is a requirement, so that the entire database for the month can be renamed, backed up and used to produce reports on demand.
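    To make the primary/secondary layout above concrete, here is a rough sketch of one primary ("urls") and one value-lookup secondary ("urls.values") written against the Berkeley DB Java Edition API. SSW itself uses the C++ API and keeps all databases in one file, so this is only an illustration of the structure, not of the actual code, and the key derivation shown is a stand-in:
    import com.sleepycat.je.*;
    public class UrlStoreSketch {
        public static void main(String[] args) throws Exception {
            EnvironmentConfig envCfg = new EnvironmentConfig();
            envCfg.setAllowCreate(true);
            Environment env = new Environment(new java.io.File("./ssw-env"), envCfg);
            DatabaseConfig dbCfg = new DatabaseConfig();
            dbCfg.setAllowCreate(true);
            Database urls = env.openDatabase(null, "urls", dbCfg);   // record ID -> URL data
            // Secondary keyed by a hash of the URL, pointing back at the record ID.
            SecondaryConfig secCfg = new SecondaryConfig();
            secCfg.setAllowCreate(true);
            secCfg.setSortedDuplicates(true);
            secCfg.setKeyCreator(new SecondaryKeyCreator() {
                public boolean createSecondaryKey(SecondaryDatabase sec, DatabaseEntry key,
                                                  DatabaseEntry data, DatabaseEntry result) {
                    // Derive the secondary key from the primary record; a real
                    // implementation would hash the URL field, not the raw bytes.
                    int h = java.util.Arrays.hashCode(data.getData());
                    result.setData(Integer.toString(h).getBytes());
                    return true;
                }
            });
            SecondaryDatabase urlsValues =
                env.openSecondaryDatabase(null, "urls.values", urls, secCfg);
            urlsValues.close();
            urls.close();
            env.close();
        }
    }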
    Database Size
    One of the sample Squid logs I received from a user contains 4.4M records and is about 800MB in size. The resulting database is 625MB in size. Note that there is no duplication of text data - only nodes and such values as hits and transfer sizes are duplicated. Each record also contains some small overhead (record version for upgrades, etc).
    Here are the sizes of the URL databases (other URL secondary databases are similar to urls.hits described below):
    urls (p):
    8192 Underlying database page size
    2031 Overflow key/data size
    1471636 Number of unique keys in the tree
    1471636 Number of data items in the tree
    193 Number of tree internal pages
    577738 Number of bytes free in tree internal pages (63% ff)
    55312 Number of tree leaf pages
    145M Number of bytes free in tree leaf pages (67% ff)
    2620 Number of tree overflow pages
    16M Number of bytes free in tree overflow pages (25% ff)
    urls.hits (s)
    8192 Underlying database page size
    2031 Overflow key/data size
    2 Number of levels in the tree
    823 Number of unique keys in the tree
    1471636 Number of data items in the tree
    31 Number of tree internal pages
    201970 Number of bytes free in tree internal pages (20% ff)
    45 Number of tree leaf pages
    243550 Number of bytes free in tree leaf pages (33% ff)
    2814 Number of tree duplicate pages
    8360024 Number of bytes free in tree duplicate pages (63% ff)
    0 Number of tree overflow pages
    The Testbed
    I'm running all these tests using the latest BDB (v4.6) built from source on a Win2K3 server (release build). The test machine is a 1.7GHz P4 with 1GB of RAM and an IDE hard drive. Not the fastest machine, but it was able to handle a log file like the one described above at a speed of 20K records/sec.
    BDB is configured in a single file in a BDB environment, using private memory, since only one process ever has access to the database.
    I ran a performance monitor while running SSW, capturing private bytes, disk read/write I/O, system cache size, etc.
    I also used a code profiler to analyze SSW and BDB performance.
    The Problem
    Small log files, such as 100MB, can be processed in no time - BDB handles them really well. However, once the entire BDB cache is filled up, the machine goes into some weird state and can sit in this state for hours and hours before completing the analysis.
    Another problem is that traversing large primary or secondary databases is a really slow and painful process. It is really not that much data!
    Overall, the 20K rec/sec quoted above drops to 2K rec/sec. And that is all after most of the analysis has been done, just trying to save the database.
    The Tests
    SSW runs in two modes, memory mode and database mode. In memory mode, all data is kept in memory in SSW's own hash tables and then saved to BDB at the end of each run.
    In memory mode, the entire BDB is dumped to disk at the end of the run. At first it runs fairly fast, until the BDB cache is filled up. Then writing (disk I/O) goes at a snail's pace, at about 3.5MB/sec, even though this disk can write at about 12-15MB/sec.
    Another problem is that the OS cache gets filled up, chewing through all available memory long before completion. In order to deal with this, I disabled the system cache using the DB_DIRECT_DB/LOG options. I could see the OS cache left alone, but once the BDB cache was filled up, processing speed all but stopped.
    Then I flipped the options and used the DB_DSYNC_DB/LOG options to disable OS disk buffering. This improved overall performance, and even though the OS cache was filling up, it was being flushed as well and, eventually, SSW finished processing this log, sporting 2K rec/sec. At least it finished, though - other combinations of these options led to never-ending tests.
    In the database mode, stale data is put into BDB after processing every N records (e.g. 300K rec). In this mode, BDB behaves similarly - until the cache is filled up, the performance is somewhat decent, but then the story repeats.
    Some of the other things I tried/observed:
    * I tried to experiment with the trickle option. In all honesty, I hoped that this would be the solution to my problems - trickle some, make sure it's on disk and then continue. Well, trickling was pretty much useless and didn't make any positive impact.
    * I disabled threading support, which gave me some performance boost during regular value lookups throughout the test run, but it didn't help either.
    * I experimented with page sizes, ranging from the default 8K to 64K. Using large pages helped a bit, but as soon as the BDB cache filled up, the story repeated.
    * The Db.put method, which was called 73557 times while profiling saving the database at the end, took 281 seconds. Interestingly enough, this method called ReadFile function (Win32) 20000 times, which took 258 seconds. The majority of the Db.put time was wasted on looking up records that were being updated! These lookups seem to be the true problem here.
    * I tried libHoard - it usually provides better performance, even in a single-threaded process, but libHoard didn't help much in this case.

    I have been able to improve processing speed up to 6-8 times with these two techniques:
    1. A separate trickle thread was created that periodically calls DbEnv::memp_trickle. This works especially well on multi-core machines, but also speeds things up a bit on single-CPU boxes. This alone improved speed from 2K rec/sec to about 4K rec/sec.
    2. Maintaining multiple secondary databases in real time proved to be the bottleneck. The code was changed to create the secondary databases at the end of the run (calling Db::associate with the DB_CREATE flag), right before the reports that use them are generated. This improved speed from 4K rec/sec to 14K rec/sec.
    Hello Stone,
    I am facing a similar problem, and I too hope to resolve it with memp_trickle. I have these queries:
    1. What percentage of clean pages did you specify?
    2. At what interval was your thread calling memp_trickle?
    This would give me a rough idea of how to tune my app. I would really appreciate it if you could answer these queries.
    Regards,
    Nishith.
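    For reference, the trickle thread from technique 1 amounts to roughly the sketch below. It assumes (and this is an assumption, not something stated in the thread) that the core Berkeley DB Java binding, com.sleepycat.db, exposes memp_trickle as Environment.trickleCacheWrite; the percentage and interval are placeholders, and they are exactly the tuning values asked about above:
    import com.sleepycat.db.Environment;
    public class TrickleThread extends Thread {
        private final Environment env;
        private volatile boolean stop;
        public TrickleThread(Environment env) {
            this.env = env;
            setDaemon(true);
        }
        public void shutdown() {
            stop = true;
            interrupt();
        }
        @Override
        public void run() {
            while (!stop) {
                try {
                    // Keep roughly 20% of the cache pages clean so that application
                    // threads rarely have to flush dirty pages themselves.
                    env.trickleCacheWrite(20);   // placeholder percentage
                    Thread.sleep(5_000);         // placeholder interval
                } catch (InterruptedException ie) {
                    // fall through and re-check the stop flag
                } catch (Exception e) {
                    e.printStackTrace();
                    return;
                }
            }
        }
    }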

  • Questions on cache and general RSRT settings for plancube

    Hi,
    we would like to:
    1) set request status 1 in RSRT for our plan queries, in order to automatically refresh the query after executing a planning function (the problem we have now is that the results of a planning function are not automatically updated in the query; only when doing something else, like executing another function, saving, or checking locks, do the results become visible).
    2) activate the delta cache for our plan queries.
    We have read OSS Note 1136163 on RSRT settings. It says:
    Aggregation level "A" is implemented internally by the automatically created query "P/!!1P" (plan buffer query). This query acts like an InfoProvider. It reads the data of the database or provides the data from InfoProvider "P", it adds the data of the delta buffer "Dp" (or the delta buffer Dpi if P is a MultiProvider with several PartProviders Pi that can be planned) and transfers the data manager as data of the InfoProvider "A" of type "ALVL". The query "P/!!1P" can use aggregates and the cache; this is exactly like each normal query in "P". If "P" is a MultiProvider, it is useful to set PARTITIONMODE to "1".
                  For the query "P/!!1P" that is created automatically for an aggregation level or for all aggregation levels using the InfoProvider P, we recommend the following setting:
                  Read mode "H", request status "1", cache mode "1" or higher, delta cache "true" and SP grouping "1".
                  Furthermore, the selection to use the structure element (KIDSEL) should be "true".
                  The input-ready queries in "A" should not, and cannot, use a cache. The request status is irrelevant since queries in "A" are automatically set to current data. The delta buffer does not currently support hierarchy processing. Therefore, aggregation level "A" cannot completely support read mode "H". For input-ready queries at A:
                  Read mode "X", request status "0", cache mode "0".
                 The delta cache and SP grouping are not visible
    Problems we have:
    1) query P/!!1P (PCA_AGQF/!!1PCA_AGQF in our example) does not allow changing the request status (it is greyed out). It currently has the value 0 instead of 1. It also does not allow activating the delta cache flag. How can we change this? In RSDIPROP we have set the partition mode to 1 for the MultiProvider and activated the delta cache flag...
    2) can we use the cache / delta cache principle for our plan queries? If so, how do we ensure these settings remain activated in RSRT?
    regards
    Dries

    Hi,
    To change the cache settings for your cube.
    Open the cube in RSA1 and click on 'Change'.
    - click on the 'Environment' menu;
    - expand 'InfoProvider Properties'
      - select the option 'Change'.
    You will be able to set the cache mode for this provider.
    I don't think it is possible to use the cache for a MultiProvider.
    Regards,
    Amit
