OC4J JSP caching issues.

Hi All,
We have deployed a set of JSP reports using Oracle Reports 10g (9.0.4.0.0) and are noticing that the reports server appears to have cached prior versions of the reports on the server. We have restarted the OC4J instance several times, and have found that if we delete the report files from the deployment directory, we get a 404 error. Copying the new files back causes the old versions to load again. If we modify the files in that directory in some way and then save them, that appears to force the reports server to load the new files, but we are hoping there is a way to avoid doing this every time we want to deploy new copies of these files.
Any suggestions for what might cause this?
Thanks,
Allan

It is a standalone OC4J instance at the moment, as we are in UAT. I notice memory usage going up, and the instance has to be restarted every 4 hours. I used the memory profiler in JDeveloper and watched memory climb steadily even when nothing was being done with our applications. There are a lot of String objects in memory that keep accumulating, coming from methods similar to the ones I described above.

Similar Messages

  • Caching issue in jsp or servlet

    I am not sure where I should post the issue that I have. I have a J2EE web app (Struts) running on Apache Web Server + GlassFish + MySQL. My server has a multi-core CPU.
    The problem is that whenever I insert new data into a table or delete data from it, I don't see the change right away. Sometimes I see the change, sometimes I don't; it's very inconsistent.
    If the old data were only cached at the JSP level, I wouldn't expect to see the stale data in the log file, but I do see it even there.
    For example, I have a page that manages a user's folders. When I delete a certain folder from the JSP page, that folder is deleted from a DB table, but when I refresh the page, I still see the folder name that is not supposed to show up; or when I go to a different page and come back, I don't see it. The behavior is very inconsistent.
    If it were a browser caching issue, I don't think I should see the stale data in the log file, but I still see the folder name (which is supposed to be deleted) in the log file.
    I am including these lines in all of the included JSP pages.
    response.setHeader("Cache-Control","no-cache");
    response.setHeader("Pragma","no-cache");
    response.setDateHeader ("Expires", -1);
    Does anybody have any opinion about this?
    It's hard to debug and to describe the behavior.
    But it would be very helpful if someone who has had the same experience could explain it and tell me how to fix it.
    Thanks.

    caesarkim1 wrote:
    I am including these lines in all included jsp pages.
    response.setHeader("Cache-Control","no-cache");
    response.setHeader("Pragma","no-cache");
    response.setDateHeader ("Expires", -1);
    Instead of including these lines in every JSP, make a filter and add these lines to it.
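    A minimal sketch of such a filter, assuming a standard servlet web app (the class name NoCacheFilter and its mapping are made up for illustration):
    import java.io.IOException;
    import javax.servlet.*;
    import javax.servlet.http.HttpServletResponse;
    public class NoCacheFilter implements Filter {
        public void init(FilterConfig config) throws ServletException {
        }
        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            // Same headers as in the JSPs, applied once for every request the filter is mapped to.
            HttpServletResponse response = (HttpServletResponse) res;
            response.setHeader("Cache-Control", "no-cache");
            response.setHeader("Pragma", "no-cache");
            response.setDateHeader("Expires", -1);
            chain.doFilter(req, res);
        }
        public void destroy() {
        }
    }
    The filter would then be mapped in web.xml to the URL patterns that serve the JSPs, so the individual pages no longer need the scriptlet.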

  • EJB CMP remove create cache issue? DuplicateKeyException

    Hi, I have an EJB 2.1 application using CMP. Most things work fine, but if, within a transaction, I try to remove a bean and create a new one with the same primary key, I get a DuplicateKeyException:
    2007-11-16 09:25:31,963 ERROR [RMICallHandler-6] AdminGroupData_ConcreteSubClass147 - Error adding AccessRules:
    javax.ejb.DuplicateKeyException: Exception [EJB - 10007]: Exception creating bean of type [AccessRulesData]. Bean already exists.
    at oracle.toplink.internal.ejb.cmp.EJBExceptionFactory.duplicateKeyException(EJBExceptionFactory.java:195)
    I suspect that the remove call only removes the bean from the cache (until commit), but that the create call checks the database, or something like that?
    My code is as simple as the following:
    AdminPreferencesDataLocal apdata1 = adminpreferenceshome.findByPrimaryKey(certificatefingerprint);
    adminpreferenceshome.remove(certificatefingerprint);
    adminpreferenceshome.create(certificatefingerprint,newadminpreference);
    Is there some configuration I can set in toplink-ejb-jar.xml to fix this?
    I use OC4J 10.1.3.3.
    Cheers,
    Tomas

    The bean.remove() call was executed, but the SQL DELETE was not issued to the database as a result: OC4J manages all SQL statements, and they are optimized to be written to the database in one batch when the transaction commits. The OC4J EJB container is smart and prevents you from creating the same entity that is still cached in the transaction, because the SQL DELETE has not really been committed to the database when create is called.
    My guess is that the reason it works with IBM WAS is that WAS issues the SQL for each remove/create call right away instead of batching it as OC4J does, or that on WAS your remove and create were called in separate transactions.
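    To make the sequence concrete, here is the same snippet again with comments reflecting the explanation above (a sketch of the described behavior, not something verified against the TopLink internals):
    AdminPreferencesDataLocal apdata1 = adminpreferenceshome.findByPrimaryKey(certificatefingerprint);
    // remove() only marks the bean as removed in the transaction's cache;
    // the SQL DELETE is deferred until the transaction commits.
    adminpreferenceshome.remove(certificatefingerprint);
    // The container still sees an entity with this key in the current transaction,
    // so create() fails here with DuplicateKeyException.
    adminpreferenceshome.create(certificatefingerprint, newadminpreference);
    // Only at commit would the batched DELETE (and INSERT) actually reach the database.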

  • OC4J11 deployment problem - TLD Caching issue

    Hello,
    I'm having a problem deploying a JSF 1.2 web app. This web app uses third-party tag libraries which are packaged under "WEB-INF/lib". When I deploy this web app (as an EAR file), I get these errors, which seem to relate to TLD caching:
    2007-12-17 20:09:23.875 NOTIFICATION Binding PETMailWeb web-module for application PETMailWeb to site default-web-site under context root PETMailWeb
    2007-12-17 20:09:27.734 WARNING J2EE HTTP-00009 Error while building tldcache for application: PETMailWeb
    2007-12-17 20:09:30.593 NOTIFICATION Application Deployer for PETMailWeb FAILED.
    2007-12-17 20:09:30.593 NOTIFICATION Application UnDeployer for PETMailWeb START
    S.
    I looked in the Oracle® Containers for J2EE Support for JavaServer Pages Developer's Guide, 10g Release 3 (10.1.3) (http://download-west.oracle.com/docs/cd/B25221_04/web.1013/b14430.pdf), which says of the "jsp-cache-tlds" parameter in global-web-application.xml: "Set to on to search all directories within the Web application for TLD files. Any found will be added to the list of TLD files inherited from the global Web application".
    So I made the change, restarted my OC4J11 instance, and deployed my JSP app again. I'm still getting the same error.
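    For reference, the change amounts to something like the following in global-web-application.xml (a sketch only; it assumes the standard orion-web-app root element and omits everything else in the file):
    <orion-web-app jsp-cache-tlds="on">
        <!-- existing servlet and web-app settings left unchanged -->
    </orion-web-app>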
    Has anybody encountered this same issue ?

    Caching is turned on by default. If you want to modify your roles and have them take effect immediately then you should turn caching off. This is mentioned in the docs:
    http://download-uk.oracle.com/docs/cd/B25221_04/web.1013/b14429/configldap.htm#BEHBGDFE

  • Cache issue with the INDEXCOL function

    Hi Experts,
    We have a requirement as described below.
    First, we have some column prompts created in Java in an external portal page (not OBIEE internal prompts), such as Year, Month, and Day.
    Second, we have a report containing one dynamic column which shows the value of the lowest selected level.
    For example,
    when users multi-select values in Year and do not select Month or Day, the report will display as below:
    Dynamic--------Sales
    2010-------------1000
    2011-------------2000
    When users multi-select values in Year and Month, and do not select Day, the report will display as below:
    Dynamic--------Sales
    201001-----------100
    201002-----------300
    201101-----------500
    When users multi-select values in Year, Month, and Day, the report will display as below:
    Dynamic--------Sales
    20100101--------50
    20100202--------80
    20110101--------100
    So we have a method to meet this requirement, but we immediately ran into a new cache/update issue. More details below:
    We created a temp table which has one flag column:
    when users select the Year level, it records 0,
    when users select the Month level, it records 1,
    when users select the Day level, it records 2.
    Then we imported this temp table into the physical layer and created a session init block like the one below:
    select flag from temp
    Finally, we created a new INDEXCOL column in the BMM layer:
    INDEXCOL(VALUEOF(NQ_SESSION.FLAG), Year, Month, Day)
    and dragged it into the presentation layer.
    When users select values in Year, Month, or Day, the flag is recorded in the database. However, the dynamic column in the report does not change.
    I must click "Reload Server Metadata" first for it to show correctly (users will not do this).
    Is there a better way to solve this cache issue, or any other better method for meeting this requirement?
    Thanks very much.
    Note: I have already disabled caching on the temp table in the physical layer.

  • About cache issue when using push2phone and callerino

    I have the push2phone and callerinfo functions working well, but the issue is that when we push messages to the phone, the phone always first displays the message sent the previous time and only then displays the current message; it seems there is a cache issue, and the callerinfo function behaves the same way. I have no idea where the cache issue occurs. Also, how can I have the message displayed using the "tab" method on the phone? Please help! Thanks in advance.

    Sorry to bother you again; I was wrong in describing my problem. I found out that last time I sent the CiscoIPPhoneExecute XML to the phone, not the CiscoIPPhoneImageFile XML.
    The CiscoIPPhoneExecute XML looks like this:
    <CiscoIPPhoneExecute><ExecuteItem URL="http://192.168.1.2:8080/test/push2phone.jsp?action=get"/></CiscoIPPhoneExecute>
    And "http://192.168.1.2:8080/test/push2phone.jsp?action=get" returns the CiscoIPPhoneImageFile xml, like this:
    <CiscoIPPhoneImageFile>
        <Title>Image Title goes here</Title>
        <Prompt>Prompt text goes here</Prompt>
        <LocationX>0</LocationX>
        <LocationY>0</LocationY>
        <URL>http://192.168.1.2:8080/test/pngs/attention.png</URL>
    </CiscoIPPhoneImageFile>
    So when I send the CiscoIPPhoneExecute XML to a phone, the phone displays the file attention.png on the screen with the sound chime.raw; when I press the "exit" button, attention.png disappears from the screen. But if I then send the CiscoIPPhoneExecute XML whose "http://192.168.1.2:8080/test/push2phone.jsp?action=get" URL returns a CiscoIPPhoneImageFile containing a different PNG file, for example "warning.png", then "attention.png" appears for a short time while "warning.png" is downloading, and only then does "warning.png" appear. I have no idea why that happens. I don't want to see "attention.png" when I'm sending "warning.png". What can I do?
    I've tried sending the CiscoIPPhoneImageFile XML directly to a phone, and everything goes well with no "cache" problems, but I have no idea how to add the sound "chime.raw" that way. I want the PNG files to be sent like short messages, with sounds. Please help!
    Thanks a lot.

  • Directory Caching issue with Cisco Jabber client for Windows

    Hi ,
    I am facing a cache issue with the Cisco Jabber client for Windows. If I make any change involving modification or deletion of contacts in Active Directory/CallManager, it is not reflected in Jabber, because Jabber takes the contacts from the locally stored cache file on the Windows system.
    Every time, I have to remove the cache file to overcome this issue, and practically it's not possible to do that for all the Windows users. For example, if an employee leaves the company, I can still see his contact appear in the Cisco Jabber client. I have not seen this issue with Android/Apple iOS.
    Is there any automated way to remove the cache file?
    Here are the details of CUCM, Presence and Jabber:
    CUCM version: 9.1.x
    Presence          : 9.1.X
    Jabber              : 10.5 and 10.6

    Hello
    In our environment we had to install a dedicated Microsoft Certificate Authority "just for Cisco Jabber usage" to house the Network Device Enrollment Service.
    Our certificates for the CUPS were generated on this Certification Authority too.
    I discussed this certificate matter with my colleagues this afternoon, and nobody seems to remember how these certificates were deployed into the Enterprise Trust store for the users.
    But I think they asked all 400 users to accept the 3 certificates by answering "yes" to the popup instead of using a script deployed by GPO...
    I wish you success with that deployment and really hope you have a technical partner that *Knows* this subject.
    Our partner left us alone with that unfortunately.
    Florent
    EDIT: If the "Certutil script method" works, please let me know. This could be useful in our own deployment.

  • Sql query cache issue

    I am trying to view the log in Manage Sessions for the SQL query behind a report in Answers. I see that if we run the same report multiple times, the SQL query shows up only the first time; the second time I run it, it does not show up. If I build a brand-new report with different columns picked, it gives me the SQL. Where do I set the option to show the SQL query every time I run a report, even if it is the same report run multiple times? Is this a caching issue?

    It shouldn't... Have you unchecked the "Cache" option on the physical layer for this table? If you go to the Advanced tab, is the option "Bypass the Oracle BI cache" checked?

  • WebI cache issue.

    We have BO XI 3.1 PF 1.7 and SAP BW 7.0 SP 18
    We are facing a serious caching issue with WebI XI 3.1. What we observe is that it caches previously fetched data and does not recognize that the data has been updated in the BW InfoProvider (in our case we have a BEx query on a direct-update DSO).
    I have tried the following:
    1. I made caching inactive for that BEx query on the BW side.
    2. I have checked the caching settings in BO, but they have not helped me much in deactivating the cache in BO.
    On executing the BEx query in BW, I checked the RSRT cache monitor; the query result is not getting cached, and I get up-to-date data. The query works perfectly fine.
    I also observed events in the RSDDSTAT_OLAP table, and I could clearly see Event 9000. This shows me that the BEx query hits the database and brings data back.
    When I execute the BO report on top of the same BEx query, I do not get up-to-date data; it shows me previously fetched data and does not recognize that the data has been updated in BW.
    Now the question is where this data is getting cached, and how to deactivate that cache to make sure I get up-to-date data in WebI.
    Many thanks for your kind response.
    Regards
    Ashutosh D

    Hi Ashutosh,
    This would be stored in the C:\Program Files\Business Objects\BusinessObjects Enterprise 12.0\Data\servername_6400\storage folder.  Try renaming this folder and then testing again.  If this allows for updated records, then that proves that the Webi Processing Server is getting the cached data from this directory.
    There are also a few settings you can try to disable in the Webi Processing Server settings within the CMC:
    Disable cache Sharing (Checked)
    Enable Real-Time Caching (Unchecked)
    Enable Document Cache (Unchecked)
    Cache Timeout (1 min) - sets the cache to expire after 1 min.
    Try playing with these settings and see if that resolves the issue. 
    Thanks
    Jb

  • SharedObject and Caching issues

    Background:
         I am working on a project where I am using SharedObject to store data for a favorites list. The project contains 2 SWFs, one in AS 3.0 (using Flex 4.5) and the other an AS 2.0 Flash 8 project. Basically, the AS 2.0 project reads and writes data to the shared object and the AS 3.0 project just reads the data.
    Setup:
         OS: Windows 7
         Flash Player: 10.3     Browsers: IE 9, IE 8, Chrome 14, and some misc Firefox testing -- Chrome seems to be working the best.  
    Problems:
         First off, it has been extremely hard to get consistent results across different computers, so I would welcome any suggestion as to why that would be (other than different browsers, Flash Player versions, Flash Player settings/global settings, and OS versions), because I have been making sure all of that is consistent across devices for testing purposes.
         1. It appears that some computers/browers won't store the shared object at all even though everything to allow it is turned on.
         2. Some computers/browsers will store it, but won't update until I refresh the page
    General Questions:
         1. Is the storage of SharedObject data in a .sol file configured the same way in both AS 2.0 and AS 3.0, and if so, would this be a means of communicating between two SWFs running off the same domain?
         2. Is it possible for a browser to cache SharedObject data (the .sol file)?
         3. I have noticed that as new Flash Player releases come out (especially recently, i.e. 10.3), I have had to make a lot of repairs to the ActionScript 2 stuff. Is ActionScript 2 support slowly being phased out? Is anyone else experiencing the same problems?
    Possible Solutions/Findings:
         I am starting to believe that it has something to do with browser caching, since Chrome seems to perform the best here and Chrome has generally handled browser caching issues best.
    ** code is available if needed but since it was working before I don't think it is relevant.

    Hello,
    #1
    are you making an explicit call to the flush method immediately after writing data to the SharedObject reference?
    http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/net/SharedObject.html#flush()
    If not, you could have different data read by different browser sessions running at the same time (e.g. IE and Firefox both hosting your test Flash movies), as data is otherwise flushed only when the given Flash runtime session is about to terminate (e.g. the movie is unloaded) and in the other cases outlined in the documentation.
    #2
    the SharedObject is a reference to a binary file stored on the local machine by the Flash runtime (and the AIR runtime too). It is not web-fetched content, so there is nothing for the browser to cache:
    http://en.wikipedia.org/wiki/Local_Shared_Object
    regards,
    Peter

  • SecurityDomain and Caching Issues

    I am running into some caching issues when setting the securityDomain of an imported SWF to match the calling SWF file and I was curious if anyone had any ideas on how to get around this issue.
    Here is the scenario:
    A.swf is launched on Domain A, it sends a Loader request for B.swf on Domain B.
    B.swf will be updated frequently, so caching is disabled for it; A.swf, however, can be cached.
    B.swf makes some ExternalInterface calls, so it needs to be in the same securityDomain as A.swf; otherwise it receives Security Error #2006.
    The code I am using for this is fairly straightforward:
    ActionScript Code:
    var request:URLRequest = new URLRequest("http://domainB/B.swf");
    var loader:Loader = new Loader();
    var context:LoaderContext = new LoaderContext();
    context.securityDomain = SecurityDomain.currentDomain;
    loader.load(request, context);
    I believe B.swf is inheriting the caching setting of A.swf because it resides in the same securityDomain. If I make a small update to B.swf and refresh A.swf in a browser, it will not load the B.swf updates until I clear the cache.
    If I get rid of the securityDomain context on the load it will always update B.swf with the most current version, but I run into security issues with ExternalInterface.
    ActionScript Code:
    var request:URLRequest = new URLRequest("http://domainB/B.swf");
    var loader:Loader = new Loader();
    loader.load(request);
    I have tried appending random strings to the end of the URLRequest while using the securityDomain context, but it always uses a cached version of B.swf. I have also tried loading B.swf into a ByteArray and then using loadBytes on a Loader, but that didn't work either.
    Any ideas are appreciated!

  • Acrobat Connect Pro LMS 7.5 server cache issue - displaying old content

    Our Adobe Acrobat Connect Pro server is showing old Captivate-created content from about 4-6 weeks ago.
    I loaded 35+ sets of Captivate 3 (SCORM 1.2, HTML, zipped) content onto our Acrobat Connect Pro LMS about 6 weeks ago.
    I converted all training content from Captivate 3.0.1 to 4.0.1, 4 weeks ago by opening each file in Captivate 4 and following prompts to "Save As" new files with different file names.
    I reloaded all content 4 weeks ago, and again 2 weeks ago.
    I reloaded about half of all content again last week.
    End User Acceptance testing performed this week showed that most of the courses are showing old content, ranging from 2 to 6 weeks old.
    Attempted fixes and workarounds:
    Deleting content entirely and reloading from scratch - this will not work long term, as we lose usage data each time we reload completely new files.
    Contacted Adobe, provided times to track incidents of the issue.  We reached Tier 2 - who told us it was our problem and that everything appeared to be working fine from their side.
    New workaround - load new content and reattach course to new content.  This presents the same long term issue as the first workaround, but enables us to retain older versions of content in the system, should we need to revert or report on it.
    Gaining server side access is a bit challenging due to the hosting situation we have, so I am looking (ideally) for a solution that can be performed from the Administrator/Author Frontend.  However, I want to learn the real cause of the problem, wherever it might reside, so that it can be properly corrected and avoided in the future.  I am calling this a server cache issue, as it seems the server has somewhere retained unwanted old versions of content, preventing current content from being displayed to end users.  Viewing content as an end user = see old content.  Viewing content from the Content area (Author view) shows the current files, so I know they are on the server and are loading correctly, up to a point.
    I am preparing all content for another round of loading/reloading due to other issues and updates, so republishing and reloading all 35+ files into the LMS is unavoidable at this point.
    This issue is keeping our LMS from launching to several thousand users across the country, so any suggestions or helpful tips are much appreciated.

    I think I have isolated the source of this problem. It's the Pitstop Professional 9 plug in. I un-installed this, and everything opens quicker than greased lightning. I re-installed it and it's back to slowsville.
    Unfortunately Pitstop is essential to my workflow.
    Until recently I did my pre-press on a Mac G5 with Acrobat Pro 7 and Pitstop 6.5. I never had this problem with slow file opening, but it seems that the delays would occur when I used the plug-in with large, complex files. So it would open files as fast as you'd expect from an elderly machine, but starting to use Pitstop would result in a prolonged period of staring at a spinning beachball.
    I wonder, is there any way to stop the Pitstop plug-in from initializing until it is used, so that the plug-in stays inert until you select the tool from the menus?

  • IE Caching Issue while closing and opening a UI thrice or more

    Hi Everybody..
    Greetings of the day..
    I am working on a web application (I cannot disclose the name) where, at a certain level, a generated Word document contains a customized toolbar.
    It contains a print option which, when clicked, opens a UI that shows the default printer available (this is different from our normal printing).
    When I repeat opening and then closing this UI three or more times, the entire application hangs and I have to terminate it by killing the associated process from Task Manager.
    When this issue was analysed by our onsite team, they concluded that it is an IE caching issue.
    But to my knowledge, for any application that is SSL-secured, IE by default does not cache the content.
    What could the solution be, whether or not the application is SSL-secured?
    Kindly help asap.
    Thanks.

    Hi all
    Did anyone ever manage to fix this? I'm having this issue across multiple iPads. Even if I completely reset the iPad (erasing all content and settings), it does it again when restored.
    I've just got an iPad Air and that does it too. I even created a new Apple ID to see if that would work (trying to avoid any kind of caching across iCloud, etc.) and it does it straight away in those books. It's massively frustrating.
    I have noticed it only seems to do it for iBooks-formatted books, not ePubs.
    Does anyone have a resolution? It's quite embarrassing when you're doing a presentation about book features and the audience assumes you've got the wrong book each time. It's more annoying when, on closing the book, the wrong cover appears again, and it also takes you to that book's collection page rather than the one you were in.

  • Possible session caching issue in SSRS2014

    Using custom Forms Authentication, User A can sign into our own main ASP.NET MVC app (WIF cookie), then into SSRS (FormsAuth cookie), and all is well. Here is where things go bad: User A signs out of our main application (WIF cookie deleted), then signs back into our main application as User B, then back into SSRS. An SSRS report that displays User!UserID shows User A instead of the current User B. It's like there is either a session or cookie caching issue going on, but I'm not sure.
    1. What is the proper way to sign out of SSRS and prevent session caching?
    2. Do I need to worry about making my SSRS logon page non-cacheable?  If so, what is the recommended way of doing this?
    thanks
    scott

    Hi scott_m,
    According to your description, you use custom Forms Authentication in Reporting Services, and after user A signs out of the application and user B signs in, the SSRS built-in User value shows user A instead of user B.
    Based on my research, once we configure SSRS to use custom (Forms) authentication by deploying a custom security extension, we can log on to MS Report Manager (MSRM) using credentials from our custom security framework via a logon web page. But there is no built-in way to log out or to expire the authentication cookie, so we need to close the browser manually. As a workaround, we can add a logout button to the Report Manager that is using Forms Authentication, then use code to clear the cookie and redirect to the home page.
    In addition, if you extend Reporting Services to use Forms Authentication, it's better to use Secure Sockets Layer (SSL) for all communications with the report server to prevent malicious users from gaining access to another user's cookie. SSL enables clients and a report server to authenticate each other and ensures that no other computers can read the contents of communications between the two. All data sent from a client through an SSL connection is encrypted, so malicious users cannot intercept passwords or data sent to a report server.
    Here is a relevant thread you can reference:
    https://social.msdn.microsoft.com/Forums/sqlserver/en-US/5e33949d-7757-45d1-9c43-6dc3911c3ced/how-do-you-logout-of-report-manager
    For more information about Forms Authentication, please refer to the following document:
    https://technet.microsoft.com/en-us/library/aa902691%28v=sql.80%29.aspx?f=255&MSPPError=-2147217396
    If you have any more questions, please feel free to ask.
    Thanks,
    Wendy Fu
    TechNet Community Support

  • Parallel Caching Issue

    We have found issues with the parallel loading of the OLAP cache using reporting agent jobs, where entries are not populated correctly. We rely on the OLAP cache to deliver very high levels of concurrency during peak times.
    Once the main batch data loading has completed, we run some background reporting agent jobs to pre-populate the OLAP cache. Each job contains a web application template that holds one or more queries and processes 1500+ stores per run.
    We have different reporting agent jobs for the different web application templates, but we have discovered that if we run these jobs in parallel we do not get the full benefit of the OLAP cache compared to running them sequentially.
    If we run the jobs in parallel, then when we look in RSRCACHE for these queries the cache appears to have been populated correctly, but when we check RSDDSTATS for the query performance the following day, we can see that a large number of the stores still hit the database and did not benefit from the cache entries. Sometimes this can be as much as a 60% failure rate to hit the cache.
    If we run the same jobs sequentially and then check RSDDSTATS the following day, we can see a 100% success rate, with every store hitting the OLAP cache.
    Is anyone able to advise how we can resolve this parallel caching issue?

    Hi Ganesh,
    I am currently having similar trouble with TBB1, where I receive the error message below:
    TRL intialization for MM, FX, Derivatives, co. code WTRD, valn area 003 is not yet complete
    Message no. TPM_TRL052
    Are you familiar with this issue? Any help would be greatly appreciated! I hope that you can help.
