Data caching issue

Hi,
I am running a SAPUI5 application as a Hybrid Web Container (HWC) app using PhoneGap. I have three screens in my app.
1. Main screen with a button to create an object
2. Form to fill in
3. Second form to fill in and create the object
When the submit button is clicked on page 3, I redirect to page 1 (app.to("pageOne")). However, from there, when I click the button to create a new object again, the form still shows the previously filled values. Instead, they should be empty. I am guessing the values are being loaded from a cache, since the page is not being reloaded. I want to know how to resolve this issue.
How can I clear the cache? Or is there some other option? I want to make sure the form fields are all cleared every time I create a new object.
Thanks.
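
For reference, here is a minimal sketch of one way to get that behaviour, assuming the form fields are bound to a JSONModel and that page two is registered under the id "pageTwo" (the id, model name, and property names below are placeholders, not taken from the actual app):

// JavaScript (SAPUI5): reset the form model each time page two is shown.
var oFormModel = new sap.ui.model.JSONModel({ name: "", description: "" });
sap.ui.getCore().setModel(oFormModel, "form");

// sap.m.NavContainer fires onBeforeShow on the target page's event delegates
// on every navigation, so resetting the model here clears the old values.
sap.ui.getCore().byId("pageTwo").addEventDelegate({
    onBeforeShow: function () {
        oFormModel.setData({ name: "", description: "" });
    }
});

If the fields are not model-bound, calling setValue("") on each Input control inside the same onBeforeShow handler (or right before app.to("pageTwo")) gives the same reset; nothing actually needs to be cleared from a cache.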

Similar Messages

  • _WLS_ADMIN000000.DAT Caching Issue

    Hi all,
    I have a strange situation here. My intention is to point my application to a new database. My application is deployed as an .ear file in WebLogic 11g and reads database connection properties from a properties file placed on the classpath of the WebLogic server.
    Now, I kept the old database up, changed the properties file in WebLogic with the new database server details, and restarted the WebLogic server, and it worked. Then I reverted my changes to point my application back to the old database using the same procedure.
    Then I shut down the old database and tried doing the same again. While I was trying to start my application from the WebLogic console, it failed saying "Could not establish a Network Connection", even though the new database I was trying to point to was up and running.
    Then I found a file _WLS_<domain_name>_ADMIN000000.DAT in the path ".../weblogic/<domain_name>/servers/payments_admin/data/store/default" that was caching the old database details, so I removed the file and restarted the server.
    Now here are the inferences I can draw from this:
    1. While the old database is up and running, the application starts with those old cached details in the .DAT file and still points to the new server.
    2. When the old database is down, the application tries to connect to the old database and hence fails.
    3. So the application uses the .DAT file only while starting and then points to the new database.
    I just want to verify whether my findings are correct, or am I missing something here.
    Please suggest.
    Thanks in advance.
    Regards,
    Pritam

    Please do not post duplicates - _WLS_ADMIN000000.DAT Caching Issue

  • Dynamic Calc Issue - CalcLockBlock or Data Cache Setting

    We recently started seeing an issue with a Dynamic Calc scenario member in our UAT and DEV environments. When we try to reference the scenario member in Financial Reports, we get the following error:
    Error executing query: The data form grid is invalid. Verify that all members selected are in Essbase. Check log for details. com.hyperion.planning.HspException
    In SmartView, if I try to reference that scenario member, I get the following:
    The dynamic calc processor cannot allocate more than [10] blocks from the heap. Either the CalcLockBlock setting is too low or the data cache size setting is too low.
    The dynamic calcs worked fine in both environments until recently, and no changes were made to Essbase, so I am not sure why they stopped working.
    I tried setting the CalcLockBlock settings in the Essbase.cfg file and increasing the data cache size. When I increased the CalcLockBlock settings, I would get the same error.
    When I increased the data cache size, Financial Reporting would just sit there loading and never show the report. In SmartView, it would give me an error that it had timed out and to try increasing the NetRetry and NetDelay values.

    Thanks for the responses guys.
    NN:
    I tried doubling the Index Cache setting and the Data Cache setting, but it appears that when I do that, it crashes my Essbase app. I also tried adding DYNCALCCACHEMAXSIZE and QRYGOVEXECTIME to essbase.cfg (without the cache settings, since those crash it), and still no luck.
    John:
    I already had those values set on my client machine; I tried setting them on the server as well, but no luck.
    The exact error message I get after increasing the cache settings is "Essbase Error (1042017) Network error: The client or server timed out waiting to receive data using TCP/IP. Check network connections. Increase the olap.server.netConnectTry and/or olap.server.netDelay values in the essbase.properties. Restart the server and try again".
    From the app's essbase log:
    [Wed Jun 06 10:07:44 2012]Local/PropBud/Staffing/admin@Native Directory/4340/Error(1023040)
    msg from remote site [[Wed Jun 06 10:07:44 2012]CCM6000SR-HUESS/PropBud/NOIStmt/admin/Error(1042017) Network error: The client or server timed out waiting to receive data using TCP/IP. Check network connections. Increase the NetRetryCount and/or NetDelay values in the ESSBASE.CFG file. Update this file on both client and server. Restart the client and try again.]
    [Wed Jun 06 10:07:44 2012]Local/PropBud/Staffing/admin@Native Directory/4340/Error(1200467)
    Error executing formula for [Resident Days for CCPRD NOI]: status code [1042017] in function [@_XREF]
    [Wed Jun 06 10:07:44 2012]Local/PropBud/Staffing/admin@Native Directory/4340/Warning(1080014)
    Transaction [ 0x10013( 0x4fcf63b8.0xcaa30 ) ] aborted due to status [1042017].
    [Wed Jun 06 10:07:44 2012]Local/PropBud/NOIStmt/admin@Native Directory/4340/Info(1013091)
    Received Command [Process local xref/xwrite request] from user [admin@Native Directory]
    [Wed Jun 06 10:07:44 2012]Local/PropBud/NOIStmt/admin@Native Directory/4340/Info(1008108)
    Essbase Internal Logic Error [1060]
    [Wed Jun 06 10:07:44 2012]Local/PropBud/NOIStmt/admin@Native Directory/4340/Info(1008106)
    Exception error log [E:\Oracle\Middleware\user_projects\epmsystem2\diagnostics\logs\essbase\essbase_0\app\PropBud\NOIStmt\log00014.xcp] is being created...
    [Wed Jun 06 10:07:46 2012]Local/PropBud/NOIStmt/admin@Native Directory/4340/Info(1008107)
    Exception error log completed E:\Oracle\Middleware\user_projects\epmsystem2\diagnostics\logs\essbase\essbase_0\app\PropBud\NOIStmt\log00014.xcp please contact technical support and provide them with this file
    [Wed Jun 06 10:07:46 2012]Local/PropBud///4340/Info(1002089)
    RECEIVED ABNORMAL SHUTDOWN COMMAND - APPLICATION TERMINATING

  • Web-I cache issue

    We have BO XI 3.1 PF 1.7 and SAP BW 7.0 SP 18
    We are facing a serious caching issue with Web-I XI 3.1. What we observed is that it caches previously fetched data and does not recognize that data has been updated in the BW InfoProvider (in our case we have a BEx query on a direct-update DSO).
    I have tried:
    1. I made caching inactive for that BEx query on the BW side.
    2. I have checked the caching settings in BO, but they do not help me much in deactivating the cache in BO.
    On executing the BEx query in BW, I have checked the RSRT cache monitor; the query result is not getting cached, and I get up-to-date data. The query works perfectly fine.
    I also observed events in the RSDDSTAT_OLAP table and could clearly see Event 9000. This shows me that the BEx query hits the database and brings data back.
    When I execute the BO report on top of the same BEx query, I do not get up-to-date data; it shows me previously fetched data and does not recognize that data has been updated in BW.
    Now the question is where this data is getting cached, and how to make that inactive to make sure I do get up-to-date data in Web-I.
    Many thanks for your kind response.
    Regards
    Ashutosh D

    Hi Ashutosh,
    This would be stored in the C:\Program Files\Business Objects\BusinessObjects Enterprise 12.0\Data\servername_6400\storage folder.  Try renaming this folder and then testing again.  If this allows for updated records, then that proves that the Webi Processing Server is getting the cached data from this directory.
    There are also a few settings you can try to disable in the Webi Processing Server settings within the CMC:
    Disable cache Sharing (Checked)
    Enable Real-Time Caching (Unchecked)
    Enable Document Cache (Unchecked)
    Cache Timeout (1 min) - sets the cache to expire after 1 min.
    Try playing with these settings and see if that resolves the issue. 
    Thanks
    Jb

  • SharedObject and Caching issues

    Background:
     I am working on a project where I am using SharedObject to store data for a favorites list. The project contains 2 SWFs, one in AS 3.0 (using Flex 4.5) and the other an AS 2.0 Flash 8 project.  Basically, the AS 2.0 project reads and writes data to the shared object and the AS 3.0 project just reads the data.
    Setup:
         OS: Windows 7
         Flash Player: 10.3     Browsers: IE 9, IE 8, Chrome 14, and some misc Firefox testing -- Chrome seems to be working the best.  
    Problems:
         First off, it has been extremely hard to get consistent results across different computers, so if anyone has a suggestion as to why that would be (other than different browsers, different Flash Player versions, different Flash Player settings/global settings, and different OS versions), I would appreciate it, because I have been making sure all of that is consistent across devices for testing purposes.
         1. It appears that some computers/browsers won't store the shared object at all, even though everything needed to allow it is turned on.
         2. Some computers/browsers will store it, but won't update until I refresh the page
    General Questions:
         1. Is the storage of SharedObject data in a .sol file configured the same way in both AS 2.0 and AS 3.0, and if so, would this be a means of communicating between two SWFs running off the same domain?
         2. Is it possible for a browser to cache SharedObject data (the .sol file)?
         3. I have noticed that as new Flash Player releases come out (especially recently, i.e. 10.3), I have had to make a lot of repairs to the ActionScript 2 stuff. Is ActionScript 2 support slowly being phased out? Is anyone else experiencing the same problems?
    Possible Solutions/Findings:
         I am starting to believe that it has something to do with browser caching, since Chrome performs the best here and Chrome has generally handled browser caching issues best.
    ** code is available if needed but since it was working before I don't think it is relevant.

    Hello,
    #1
    Are you using an explicit call to the flush method immediately after you write data to the SharedObject reference?
    http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/net/SharedObject.html#flush()
    If not, you could have different data read by different browser sessions running at the same time (e.g. IE/Firefox both hosting your test Flash movies), as data is flushed only when the given Flash runtime session is about to be terminated (e.g. the movie is to be unloaded), and in other cases as outlined in the documentation.
    #2
    The SharedObject is a reference to a binary file stored on the local machine by the Flash runtime (the AIR runtime too). It is not web-fetched content, so there is nothing to be cached:
    http://en.wikipedia.org/wiki/Local_Shared_Object
    regards,
    Peter

  • SecurityDomain and Caching Issues

    I am running into some caching issues when setting the securityDomain of an imported SWF to match the calling SWF file and I was curious if anyone had any ideas on how to get around this issue.
    Here is the scenario:
    A.swf is launched on Domain A, it sends a Loader request for B.swf on Domain B.
    B.swf will be updated frequently so caching is disabled, A.swf however can be cached.
    B.swf has some ExternalInterface calls, so it requires being in the same securityDomain as A.swf; otherwise it will receive Security Error #2006.
    The code I am using for this is fairly straightforward:
    ActionScript Code:
    var request:URLRequest = new URLRequest("http://domainB/B.swf");
    var loader:Loader = new Loader();
    var context:LoaderContext = new LoaderContext();
    context.securityDomain = SecurityDomain.currentDomain;
    loader.load(request, context);
    I believe B.swf is inheriting the caching setting of A.swf because it is residing in the same securityDomain. If I make a small update to B.swf and refresh A.swf in a browser it will not load the B.swf updates until I clear cache.
    If I get rid of the securityDomain context on the load it will always update B.swf with the most current version, but I run into security issues with ExternalInterface.
    ActionScript Code:
    var request:URLRequest = new URLRequest("http://domainB/B.swf");
    var loader:Loader = new Loader();
    loader.load(request);
    I have tried appending random strings to the end of the URLRequest while using the securityDomain context, but it will always use a cached version of B.swf. I have also tried loading B.swf into a ByteArray and then using loadBytes on a Loader, but that didn't work either.
    Any ideas are appreciated!

  • Error in Starting Active Data Cache

    Hi All,
    I am trying to start the BAM services, all other services are getting started except 'Active Data Cache'.
    It errors out with the following message in the log.
    Could anyone look into this issue? Thanks in advance.
    Here is the log message:
    2009-01-16 15:56:56,538 [10688] ERROR - ActiveDataCache DPAPI was unable to decrypt data. CryptUnprotectData failed. Error -2146893813: Key not valid for use in specified state.
    2009-01-16 15:56:56,538 [10688] WARN - ActiveDataCache Exception occurred in method Startup
    Stack trace:
    at Oracle.BAM.Common.Security.DataProtector.DPAPI.Decrypt(Byte[] cipherTextBytes, Byte[] entropyBytes, String& description)
    at Oracle.BAM.Common.Security.DataProtector.DPAPI.Decrypt(String cipherText, String entropy, String& description)
    at Oracle.BAM.Common.Security.DataProtector.DPAPI.Decrypt(String cipherText)
    at Oracle.BAM.Common.Security.DataProtector.DataProtector.Decrypt(String strData)
    at Oracle.BAM.ActiveDataCache.Kernel.StorageEngine.Oracle.ConnectionStringDecrypter.Decrypt(String strEncrypted)
    at Oracle.BAM.ActiveDataCache.Kernel.StorageEngine.Oracle.OracleDataFactory.GetInstance()
    at Oracle.BAM.ActiveDataCache.Kernel.StorageEngine.Oracle.OracleStorageEngine.GetDataFactory()
    at Oracle.BAM.ActiveDataCache.Kernel.StorageEngine.Oracle.OracleStorageEngine.GetServerVersion()
    at Oracle.BAM.ActiveDataCache.Kernel.StorageEngine.Oracle.OracleStorageEngine.Startup(IDictionary oParameters)
    at Oracle.BAM.ActiveDataCache.Kernel.Server.DataStoreServer.Startup()
    2009-01-16 15:56:56,553 [10688] ERROR - ActiveDataCache The Oracle BAM Active Data Cache service failed to start. Oracle.BAM.ActiveDataCache.Common.Exceptions.CacheException: ADC Server exception in Startup(). ---> System.Exception: DPAPI was unable to decrypt data. CryptUnprotectData failed. Error -2146893813: Key not valid for use in specified state.
    at Oracle.BAM.Common.Security.DataProtector.DPAPI.Decrypt(Byte[] cipherTextBytes, Byte[] entropyBytes, String& description)
    at Oracle.BAM.Common.Security.DataProtector.DPAPI.Decrypt(String cipherText, String entropy, String& description)
    at Oracle.BAM.Common.Security.DataProtector.DPAPI.Decrypt(String cipherText)
    at Oracle.BAM.Common.Security.DataProtector.DataProtector.Decrypt(String strData)
    at Oracle.BAM.ActiveDataCache.Kernel.StorageEngine.Oracle.ConnectionStringDecrypter.Decrypt(String strEncrypted)
    at Oracle.BAM.ActiveDataCache.Kernel.StorageEngine.Oracle.OracleDataFactory.GetInstance()
    at Oracle.BAM.ActiveDataCache.Kernel.StorageEngine.Oracle.OracleStorageEngine.GetDataFactory()
    at Oracle.BAM.ActiveDataCache.Kernel.StorageEngine.Oracle.OracleStorageEngine.GetServerVersion()
    at Oracle.BAM.ActiveDataCache.Kernel.StorageEngine.Oracle.OracleStorageEngine.Startup(IDictionary oParameters)
    at Oracle.BAM.ActiveDataCache.Kernel.Server.DataStoreServer.Startup()
    --- End of inner exception stack trace ---
    at Oracle.BAM.ActiveDataCache.Kernel.Server.DataStoreServer.Startup()
    at Oracle.BAM.ActiveDataCache.Kernel.Server.Server.Startup()
    at Oracle.BAM.ActiveDataCache.Service.DataServer.Run()

    Hi.
    I encountered the same problem. In my case, the reason was that I created a new user for BAM (to be provided when installing BAM) but ran the installation under another user account. It seems like these users must be the same to make it work.
    Greetings,
    cor

  • Error starting Oracle BAM active data cache service

    Hi
    After installing BAM everything was working fine, but if I restart my system the Oracle BAM Active Data Cache service throws the following error:
    "The Oracle BAM Active Data Cache service on Local Computer started and then stopped. Some services stop automatically if they have no work to do, for example the Performance Logs and Alerts service."
    The database is running fine.
    Following is the ADC log file error:
    2007-12-07 17:19:29,640 [2928] ERROR - ActiveDataCache The Oracle BAM Active Data Cache service failed to start. Oracle.BAM.ActiveDataCache.Common.Exceptions.CacheException: ADC Server exception in Startup(). ---> System.DllNotFoundException: Unable to load DLL (OraOps10.dll).
    at Oracle.DataAccess.Client.OpsTrace.GetRegTraceInfo(UInt32& TrcLevel, UInt32& StmtCacheSize)
    at Oracle.DataAccess.Client.OraTrace.GetRegistryTraceInfo()
    at Oracle.DataAccess.Client.OracleConnection..ctor(String connectionString)
    at Oracle.DataAccess.Client.OracleConnection..ctor(String connectionString)
    at Oracle.BAM.ActiveDataCache.Kernel.StorageEngine.Oracle.OracleDataFactory.GetConnection()
    at Oracle.BAM.ActiveDataCache.Kernel.StorageEngine.Oracle.OracleStorageEngine.GetServerVersion()
    at Oracle.BAM.ActiveDataCache.Kernel.StorageEngine.Oracle.OracleStorageEngine.Startup(IDictionary oParameters)
    at Oracle.BAM.ActiveDataCache.Kernel.Server.DataStoreServer.Startup()
    --- End of inner exception stack trace ---
    at Oracle.BAM.ActiveDataCache.Kernel.Server.DataStoreServer.Startup()
    at Oracle.BAM.ActiveDataCache.Kernel.Server.Server.Startup()
    at Oracle.BAM.ActiveDataCache.Service.DataServer.Run()
    2007-12-07 17:24:45,250 [1524] ERROR - ActiveDataCache Unable to load DLL (OraOps10.dll).
    2007-12-07 17:24:45,265 [1524] WARN - ActiveDataCache Exception occurred in method Startup
    Please help me in resolving this issue. I am getting this issue every time.
    Thanks
    BS

    Make sure the path to the ODAC used by BAM (C:\OracleBAM\ClientForBAM\bin) is the first item in the system PATH
    environment variable. Restart your computer after fixing this.
    If that doesn't fix it, please check the Troubleshooting section in the BAM Install Guide.
    Regards, Stephen

  • Caching issue in jsp or servlet

    I am not sure where I should post the issue that I have. I have a J2EE web app (Struts) running on Apache Web Server + GlassFish + MySQL. My server has multiple cores.
    The problem is that whenever I insert new data into a table or delete data from it, I don't see the change right away. Sometimes I see the change, sometimes I don't. It's very inconsistent.
    If the old data were cached in the JSP, I shouldn't see the change in the log file, but I do see it even in the log file.
    For example, I have a page managing a user's folders. When I delete a certain folder name from the JSP page, this folder is deleted from a db table, but when I refresh the page, I still see the folder name that is not supposed to show up. Or when I go to a different page and come back to this page, I don't see it. The behavior is very inconsistent.
    If it's a browser caching issue, I don't think I should see it in the log file, but I still see the folder name (which is supposed to be deleted) in the log file.
    I am including these lines in all included JSP pages.
    response.setHeader("Cache-Control","no-cache");
    response.setHeader("Pragma","no-cache");
    response.setDateHeader ("Expires", -1);
    Does anybody have any opinion about this?
    It's hard to debug and to describe the behavior.
    But it would be very helpful if someone who has had the same experience could explain it and tell me how to fix it.
    Thanks.

    caesarkim1 wrote:
    I am including these lines in all included jsp pages.
    response.setHeader("Cache-Control","no-cache");
    response.setHeader("Pragma","no-cache");
    response.setDateHeader ("Expires", -1);
    Instead of including these lines in all JSPs, make a filter and add these lines to it.

  • How to check if table or index bound to data cache loaded in memory?

    In my system, there are many named data caches created, and certain objects are bound to those data caches.
    For example, I have a large table mytab which is bound to the data cache mycache.
    When I issue a SQL statement like select * from mytab where x=y, the first run is very slow. Run it again and again, and then it is very fast. This means the data has been loaded into the cache and is ready for the same query.
    Questions:
    1. How can I know whether the table mytab has been loaded into the data cache mycache?
    2. How can I load mytab into the data cache mycache one time, for all queries in the future?

    one way to monitor is:
    select CacheName,DBName,OwnerName,ObjectName,IndexID,sum(CachedKB) as "CachedKb"
    from master..monCachedObject
    group by CacheName,DBName,OwnerName,ObjectName,IndexID
    order by CacheName,DBName,OwnerName,ObjectName,IndexID
    But you should understand that caches have a finite size, and caches contain "pages" of objects including data pages, index pages, and LOB pages. Also, caches may have different pool sizes, and a page can be in only one cache pool. So, if you want a table and all of its indexes and text/image pages to be loaded into a dedicated cache, you need a cache large enough to fit all of those pages, and you need to decide which buffer pool you want them in (typically either the 1-page pool or the 8-page pool).
    Then, simply execute SQL (or dbcc) commands that access all of those pages in the manner you wish to find them in the cache.  For example, two statements, one that scans the table using 2k reads, and another that scans the index (mytab_ind1) using 2k reads.
    select count(*) from mytab plan '( i_scan mytab_cl mytab) ( prop mytab ( prefetch 2 ) ( lru ) )'
    select count(*) from mytab plan '( i_scan mytab_ind1 mytab) ( prop mytab ( prefetch 2 ) ( lru ) )'
    etc etc.
    I used count(*) to limit the result sets of the examples.

  • Acrobat Connect Pro LMS 7.5 server cache issue - displaying old content

    Our Adobe Acrobat Connect Pro server is showing old Captivate-created content from about 4-6 weeks ago.
    I loaded 35+ sets of Captivate 3 (SCORM 1.2, HTML, zipped) content onto our Acrobat Connect Pro LMS about 6 weeks ago.
    I converted all training content from Captivate 3.0.1 to 4.0.1, 4 weeks ago by opening each file in Captivate 4 and following prompts to "Save As" new files with different file names.
    I reloaded all content 4 weeks ago, and again 2 weeks ago.
    I reloaded about half of all content again last week.
    End-user acceptance testing performed this week showed that most of the courses are showing old content, ranging from 2 to 6 weeks old.
    Attempted fixes and workarounds:
    Deleting content entirely and reloading from scratch - this will not work long term, as we lose usage data each time we reload completely new files.
    Contacted Adobe, provided times to track incidents of the issue.  We reached Tier 2 - who told us it was our problem and that everything appeared to be working fine from their side.
    New workaround - load new content and reattach course to new content.  This presents the same long term issue as the first workaround, but enables us to retain older versions of content in the system, should we need to revert or report on it.
    Gaining server side access is a bit challenging due to the hosting situation we have, so I am looking (ideally) for a solution that can be performed from the Administrator/Author Frontend.  However, I want to learn the real cause of the problem, wherever it might reside, so that it can be properly corrected and avoided in the future.  I am calling this a server cache issue, as it seems the server has somewhere retained unwanted old versions of content, preventing current content from being displayed to end users.  Viewing content as an end user = see old content.  Viewing content from the Content area (Author view) shows the current files, so I know they are on the server and are loading correctly, up to a point.
    I am preparing all content for another round of loading/reloading due to other issues and updates, so republishing and reloading all 35+ files into the LMS is unavoidable at this point.
    This issue is keeping our LMS from launching to several thousand users across the country, so any suggestions or helpful tips are much appreciated.

    I think I have isolated the source of this problem. It's the Pitstop Professional 9 plug in. I un-installed this, and everything opens quicker than greased lightning. I re-installed it and it's back to slowsville.
    Unfortunately Pitstop is essential to my workflow.
    Until recently I did my pre-press on a Mac G5 with Acrobat Pro 7 and Pitstop 6.5. I never had this problem with slow file opening. But it seems that the delays would occur when I used the plug-in with large complex files.. So it would open files as fast as you'd expect from an elderly machine. But starting to use Pitstop would result in a prolonged period of staring at a spinning beachball.
    I wonder, is there any way to stop the Pitstop plug-in from initializing until it is used? So the plug-in stays inert until you select the tool from the menus.

  • Adapter Engine Cache Issue

    Hello All,
    In SXMB_MONI I got an error saying:
      <SAP:AdditionalText>Error while writing to secure store: ERROR_UNKNOWN: Error when updating buffer for Adapter Engine</SAP:AdditionalText>
      <SAP:ApplicationFaultMessage namespace="" />
      <SAP:Stack>Error while reading access data (URL, user, password) for the Adapter Engine</SAP:Stack>
    My analysis:
    1. In SXI_CACHE I could see 'Unable to refresh cache contents' and 'Error during last attempt to refresh cache'; also, I could not see any entry in the Adapter Engine Connection Data cache.
    2. When I execute the RFC connection 'INTEGRATION_DIRECTORY_HMI', it asks for a user ID and password.
    3. I tried resetting the password and doing a Java restart again, with the same error.
    Clarification needed:
    1. This looks like some authorization error, but where exactly do I maintain this?
    2. How does the Adapter Engine connection data get populated into the SXI_CACHE transaction?
    Could you guide me clearly on this issue?
    Regards.
    Sethu.

    Thanks Venkat,
    Now the Adapter Engine URL got registered, but my cache content is still not updating correctly; i.e. when I execute any cache refresh it loads for a long time and ends up with an error, and when I tried a cache refresh from the Administration tool --> Cache Overview --> ID, it also ended with an error.
    Method fault! Message : IBaseException:Connection to system DIRECTORY using application DIRECTORY lost. Detailed information: Error accessing "http://c1bfxr41.unix.marksandspencercate.com:8136/dir/hmi_cache_refresh_service/int?container=any" with user "PIISUSER". Response code is 401, response message is "Unauthorized" com.sap.aii.ib.server.abapcache.CacheRefreshException: Connection to system DIRECTORY using application DIRECTORY lost. Detailed information: Error accessing "http://c1bfxr41.unix.marksandspencercate.com:8136/dir/hmi_cache_refresh_service/int?container=any" with user "PIISUSER". Response code is 401, response message is "Unauthorized" at com.sap.aii.ibrun.server.valueMapping.cache.RefreshClient.refresh_VMCache(RefreshClient.java:60) at com.sap.aii.ibrun.server.valueMapping.cache.HmiMethod_Invalidate.refresh(HmiMethod_Invalidate.java:79) at com.sap.aii.ibrun.server.valueMapping.cache.HmiMethod_Invalidate.process(HmiMethod_Invalidate.java:40) at com.sap.aii.utilxi.hmis.server.HmisServiceImpl.invokeMethod(HmisServiceImpl.java:169) at
    But in SM59 the INTEGRATION_DIRECTORY_HMI destination uses the pisuper user ID, and in SLDAPICUST I also used pisuper only; I don't know where the conflicting PIISUSER is coming from. Any idea please?

  • Possible session caching issue in SSRS2014

    Using custom FormsAuth, User A can sign into our own main ASP.NET MVC app (WIF cookie), then into SSRS (FormsAuth cookie), and all is well. Here is where things go bad: User A signs out of our main application (WIF cookie deleted), then signs back into our main application as User B, and then back into SSRS. An SSRS report that displays User!UserID shows User A instead of the current User B. It's like there is either a session or cookie caching issue going on, but I'm not sure.
    1. What is the proper way to sign out of SSRS and prevent session caching?
    2. Do I need to worry about making my SSRS logon page non-cacheable?  If so, what is the recommended way of doing this?
    thanks
    scott

    Hi scott_m,
    According to your description, you used custom Forms authentication in Reporting Services; after user A signs out of the application and user B signs in, the SSRS built-in User field shows user A instead of user B.
    Based on my research, once we configure SSRS to use custom (Forms) authentication by deploying a custom security extension, we can log on to Report Manager using credentials from our custom security framework via a logon web page. But there is no way to log out or to expire the authentication cookie, so we need to close the browser manually. As a workaround, we can add a logout button to the Report Manager that is using Forms authentication, then use code to clear the cookie and redirect to the home page.
    In addition, if you extend Reporting Services to use Forms authentication, it is better to use Secure Sockets Layer (SSL) for all communications with the report server to prevent malicious users from gaining access to another user's cookie. SSL enables clients and a report server to authenticate each other and ensures that no other computers can read the contents of communications between the two computers. All data sent from a client through an SSL connection is encrypted so that malicious users cannot intercept passwords or data sent to a report server.
    Here is a relevant thread you can reference:
    https://social.msdn.microsoft.com/Forums/sqlserver/en-US/5e33949d-7757-45d1-9c43-6dc3911c3ced/how-do-you-logout-of-report-manager
    For more information about Forms Authentication, please refer to the following document:
    https://technet.microsoft.com/en-us/library/aa902691%28v=sql.80%29.aspx?f=255&MSPPError=-2147217396
    If you have any more questions, please feel free to ask.
    Thanks,
    Wendy Fu

  • Parallel Caching Issue

    We have found issues with the parallel loading of the OLAP cache using reporting agent jobs, where entries are not populated correctly. We rely on the OLAP cache for delivering very high levels of concurrency during peak times.
    Once the main batch data loading has been completed, we run background reporting agent jobs to pre-populate the OLAP cache. Each job contains a web application template that holds 1 or more queries and will process 1500+ stores per run.
    We have different reporting agent jobs for the different web application templates, but we have discovered that if we run these jobs in parallel we do not get the full benefit of the OLAP cache compared to running them sequentially.
    If we run the jobs in parallel, RSRCACHE appears to show that the cache has populated correctly for these queries, but when we check RSDDSTATS for query performance the following day, we can see that a large number of the stores still hit the database and did not benefit from the cache entries. Sometimes this can be as much as a 60% failure to hit the cache.
    If we run the same jobs sequentially and then check RSDDSTATS the following day, we can see a 100% success rate, with every store hitting the OLAP cache.
    Is anyone able to advise how we can resolve this parallel caching issue?

    Hi Ganesh,
    I am currently having similar trouble with TBB1, where I receive the error message below:
    TRL intialization for MM, FX, Derivatives, co. code WTRD, valn area 003 is not yet complete
    Message no. TPM_TRL052
    Are you familiar with this issue? Any help would be greatly appreciated! I hope that you can help.
