Database cache settings for 4.13 on NT 4

The database cache settings and their documentation have always been confusing.
I have about 35,000 entries in my directory. If I have enough available RAM,
should I try to put all the entries into RAM? The default setting is 1,000;
should I change it to 35,000? Will I notice any great speed increase?
And the second setting has to do with the index cache. Should I try to get that
into RAM also? My id2entry.db2 file is about 40 MB. Should I change the
default from 10 MB to 40 MB? Will there be a significant increase in speed?
Thanks.


Similar Messages

  • "In-Memory Database Cache" option for Oracle 10g Enterprise Edition

    Hi,
    In one of our applications, we are using TimesTen 5.1.24 and Oracle 9i
    databases (platform: Solaris 9).
    TimesTen holds application information which needs to be accessed quickly,
    and Oracle 9i is the master application database.
    Now we are looking at the option of migrating from Oracle 9i to Oracle 10g.
    While exploring the Oracle 10g features, I came across the "In-Memory
    Database Cache" option for Oracle Enterprise Edition, which made me think
    about using Oracle 10g Enterprise Edition with the "In-Memory Database
    Cache" option for our application.
    These are the advantages I can see in adopting that approach:
    1. Data reconciliation between Oracle and TimesTen is not required (i.e.
    data can be maintained only in Oracle tables and "In-Memory Database
    Cache" can be used for caching)
    2. Data maintenance is easy and gives a single view of the data
    I have the following queries regarding the above-mentioned solution:
    1. What is the difference between "TimesTen In-Memory Database" and
    "In-Memory Database Cache" in terms of features and licensing model?
    2. Is "In-Memory Database Cache" option integrated with Oracle 10g
    installable or a separate installable (i.e. TimesTen installable with only
    cache feature)?
    3. Is "In-Memory Database Cache" option same as that of "TimesTen Cache
    Connect to Oracle" option in TimesTen In-Memory Database?
    4. After integrating "In-Memory Database Cache" option with Oracle 10g, data
    access will happen only through Oracle sqlplus or OCI calls. Am I right here
    in making this statement?
    5. Is it possible to cache the result set of a join query in "In-Memory
    Database Cache"?
    In "Options and Packs" chapter in Oracle documentation
    (http://download.oracle.com/docs/cd/B19306_01/license.102/b14199/options.htm
    #CIHJJBGA), I encountered the following statement:
    "For the purposes of licensing Oracle In-Memory Database Cache, only the
    processors on which the TimesTen In-Memory Database component of the
    In-Memory Database Cache software is installed and/or running are counted
    for the purpose of determining the number of licenses required."
    We have servers with the following configuration. Is there a way to get the
    count of processors on which the Cache software could be installed and/or
    running? Please assist.
    Production box with 12 core 2 duo processors (24 cores)
    Pre-production box with 8 core 2 duo processors (16 cores)
    Development and test box with 2 single chip processors
    Development and test box with 4 single chip processors
    Development and test box with 6 single chip processors
    Thanks & Regards,
    Vijay

    Hi Vijay,
    regarding your questions:
    1. What is the difference between "TimesTen In-Memory Database" and
    "In-Memory Database Cache" in terms of features and licensing model?
    ==> The product has just been renamed and integrated better with the Oracle database - TimesTen == In-Memory Database Cache
    2. Is "In-Memory Database Cache" option integrated with Oracle 10g
    installable or a separate installable (i.e. TimesTen installable with only
    cache feature)?
    ==> Separate installation
    3. Is "In-Memory Database Cache" option same as that of "TimesTen Cache
    Connect to Oracle" option in TimesTen In-Memory Database?
    ==> Please have a look here: http://www.oracle.com/technology/products/timesten/quickstart/cc_qs_index.html
    This explains the differences.
    4. After integrating "In-Memory Database Cache" option with Oracle 10g, data
    access will happen only through Oracle sqlplus or OCI calls. Am I right here
    in making this statement?
    ==> Please see the documentation mentioned above
    5. Is it possible to cache the result set of a join query in "In-Memory
    Database Cache"?
    ==> Again ... ;-)
    Kind regards
    Mike
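
    A note on point 4: applications are not limited to sqlplus or OCI; the cached tables can also be reached through the TimesTen JDBC driver (and ODBC). Below is a minimal, illustrative Java sketch only - the DSN cache_dsn, the credentials and the Oracle table app.items are made-up placeholders, and the cache group DDL should be checked against the TimesTen documentation for your release before use.

      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.ResultSet;
      import java.sql.Statement;

      public class ImdbCacheSketch {
          public static void main(String[] args) throws Exception {
              // Load the TimesTen JDBC driver and open a direct (in-process) connection
              // to the cache datastore; "cache_dsn" is a hypothetical DSN.
              Class.forName("com.timesten.jdbc.TimesTenDriver");
              try (Connection con = DriverManager.getConnection(
                      "jdbc:timesten:direct:dsn=cache_dsn", "cacheadm", "secret");
                   Statement st = con.createStatement()) {

                  // Define a read-only cache group over a hypothetical Oracle table
                  // app.items and load it; TimesTen then auto-refreshes it from Oracle.
                  st.executeUpdate(
                      "CREATE READONLY CACHE GROUP ro_items "
                          + "AUTOREFRESH INTERVAL 5 MINUTES "
                          + "FROM app.items (id NUMBER NOT NULL PRIMARY KEY, price NUMBER)");
                  st.executeUpdate("LOAD CACHE GROUP ro_items COMMIT EVERY 256 ROWS");

                  // Reads are now served from the in-memory cache, not from Oracle.
                  try (ResultSet rs = st.executeQuery("SELECT id, price FROM app.items")) {
                      while (rs.next()) {
                          System.out.println(rs.getLong(1) + " -> " + rs.getDouble(2));
                      }
                  }
              }
          }
      }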

  • Need to know how to maintain cache settings for InfoProviders in one go

    Hi,
    We are trying to change the OLAP cache settings for our queries. Since queries inherit the properties of their InfoProvider, we want to change the setting at the InfoProvider level instead of for each query. However, we have around 1,100 InfoProviders in our BW system.
    Is there any way to change the cache settings for all the InfoProviders in one go?
    We would also like to know what the option is to activate all the queries at a time,
    and whether this will change the cache settings of the queries that already exist in the system.
    Thanks & Regards,
    Syeda Nausheen Sulthana
    SAP BASIS | AtoS

    Hi Nausheen,
    Capture the steps you perform when changing the InfoCube properties as a BDC recording.
    Then create an upload file with the technical names of all the cubes and execute the BDC program in an automated way.
    You can take help from an ABAP expert in your team.
    Let me know, if you need more details.
    Thanks,
    Krishnan

  • Ideal Cache Settings for "One Processor Mode"

    Hey there,
    I have an AMD dual-core 4200+ with 3 GB of RAM, on 32-bit XP.
    Anyways, AFX seems to work better for me in "single processor" mode.
    So, what would you say the ideal cache/memory settings are for my computer? Including the secret memory purging info?
    Thanks,
    james

    Mmh, there isn't really any secret to this. The default 60/120% combo is mostly considered to be the best compromise in most situations. AE's memory allocation is dynamic anyway... If it's of any importance to you, you might increase the average RAM usage to 80% or so, but unless you work with larger comps that benefit from that setting, you won't see much of a difference.
    Depending on the type of work, you can also go to the other extreme and lower the RAM usage, but instead use the disk cache. It will force AE to dump more frames to disk rather than holding them in RAM. This can be beneficial if you are using many precomps with effects or do color space conversions, e.g. working with 32bpc OpenEXR files in 16bpc or something.
    As for purging, you shouldn't overdo it. Emptying the memory every 5 or 10 frames is often enough. Only use lower values if your comp doesn't go through at all. The downside to purging is that AE needs to load everything from scratch after a purge, and depending on the amount of stuff, this can drastically increase render times. So it's not always wise to use this option, even more so if you are using temporal trickery with time-remapped comps or effects such as Echo or CC Force Motion Blur.
    Mylenium

  • Questions on cache and general RSRT settings for plancube

    Hi,
    we would like to:
    1) set request status 1 in RSRT for our plan queries, in order to automatically refresh the query after executing a planning function (the problem we have now is that the results of a planning function are not automatically updated in the query; only when doing something else, like executing another function, saving, or checking locks, do the results become visible).
    2) activate the delta cache for our plan queries
    we have read OSS note 1136163 on RSRT settings. It says:
    Aggregation level "A" is implemented internally by the automatically created query "P/!!1P" (plan buffer query). This query acts like an InfoProvider. It reads the data of the database or provides the data from InfoProvider "P", it adds the data of the delta buffer "Dp" (or the delta buffer Dpi if P is a MultiProvider with several PartProviders Pi that can be planned) and transfers the data manager as data of the InfoProvider "A" of type "ALVL". The query "P/!!1P" can use aggregates and the cache; this is exactly like each normal query in "P". If "P" is a MultiProvider, it is useful to set PARTITIONMODE to "1".
                  For the query "P/!!1P" that is created automatically for an aggregation level or for all aggregation levels using the InfoProvider P, we recommend the following setting:
                  Read mode "H", request status "1", cache mode "1" or higher, delta cache "true" and SP grouping "1".
                  Furthermore, the selection to use the structure element (KIDSEL) should be "true".
                  The input-ready queries in "A" should not, and cannot, use a cache. The request status is irrelevant since queries in "A" are automatically set to current data. The delta buffer does not currently support hierarchy processing. Therefore, aggregation level "A" cannot completely support read mode "H". For input-ready queries at A:
                  Read mode "X", request status "0", cache mode "0".
                 The delta cache and SP grouping are not visible
    Problems we have:
    1) query P/!!1P (PCA_AGQF/!!1PCA_AGQF in our example) does not allow changing the request status (greyed out). It now has value 0 instead of 1. It also does not allow activating the delta cache flag. How can we change this? In RSDIPROP we have set PARTITIONMODE to 1 for the MultiProvider and activated the delta cache flag...
    2) can we use the cache / delta cache principle for our plan queries? If so, how do we ensure these settings remain activated in RSRT?
    regards
    Dries

    Hi,
    To change the cache settings for your cube.
    Open the cube in RSA1 and click on 'Change'.
    - click on the 'Environment' menu;
    - expand 'InfoProvider Properties'
      - select the option 'Change'.
    You will be able to set the cache mode for this provider.
    I don't think it is possible to use the cache for a MultiProvider.
    Regards,
    Amit

  • Properly and accurately calculating application cache settings

    Hello everyone.
    We are running Hyperion Planning 11.1.2.1 and one of the data forms we have set up is quite heavy (it includes several DynamicCalc members with a considerable amount of scripting) and it fails to load from time to time, just saying "Cannot open dataform X. Check logs" and such.
    I have tried to increase the cache sizes in the databases of the Essbase application (right-click on each database > edit > properties > caches), as well as the buffer sizes and commit blocks.
    Little by little I have managed to improve performance by modifying the above-mentioned caches (it crashes less often), but I guess it's nuts to keep increasing the caches indefinitely to make sure it works.
    So, my question is: Is there a way to calculate the accurate cache settings for a given application?
    My current settings are:
    Buffer size: 200 KB
    Short buffer size: 200 KB
    Index cache setting: 30720 KB
    Data file cache setting: 61440 KB
    Data cache setting: 51200 KB
    Commit blocks: 30000
    Do you think these are accurate?
    Thanks a lot,
    G.S.Feliu

    You haven't really provided enough information to be honest; for example, are you running a 64-bit system?
    But that is rhetorical; as usual, the first port of call is the DBAG. I don't see why a link should be posted, you must have access to it already if you're administering a production system. It will point out things like the fact that the data file cache setting is only relevant if Direct I/O is being used, and that the index cache should be at least as big as the index file if memory allows.
    Commit blocks... is interesting; personally I have set it to 0 in some projects and seen some improvement, but as usual testing is the key.
    However, there is a performance tuning guide out there that you may find very useful:
    https://blogs.oracle.com/pa/entry/epm_11_1_2_epm1
    It focuses on the infrastructure a bit more. It's a bit complicated though, and I would thoroughly recommend recording a set of benchmarks before applying any of those changes, and applying said changes one by one, or you'll never know what is good and what is not so good.
    Learn from the pain others have already endured ;-)
    Good Luck
    Steve

  • Oracle TimesTen In-Memory Database VS Oracle In-Memory Database Cache

    Hi,
    What is the difference between Oracle TimesTen In-Memory Database and Oracle In-Memory Database Cache?
    On a 32-bit Windows OS I am not able to insert more than 500k rows with 150 columns (with combinations of CHAR, BINARY_DOUBLE, BINARY_FLOAT, TT_BIGINT, REAL, DECIMAL, NUMERIC etc.).
    [TimesTen][TimesTen 11.2.2.2.0 ODBC Driver][TimesTen]TT0802: Database permanent space exhausted -- file "blk.c", lineno 3450, procedure "sbBlkAlloc"
    I have set PermSize to 700 MB and TempSize to 100 MB.
    What is the maximum size we can give for PermSize, TempSize and LogBufMB on a 32-bit Windows OS?
    What is the maximum size we can give for PermSize, TempSize and LogBufMB on a 64-bit Windows OS?
    What is the maximum TimesTen configuration for 32-bit, i.e. what can I set for PermSize and TempSize?
    Thanks!

    They are the same product but they are licensed differently and the license limits what functionality you can use.
    TimesTen In-Memory Database is a product in its own right that allows you to use TimesTen as a standalone database and also allows replication.
    IMDB Cache is an Oracle DB Enterprise Edition option (i.e. it can only be licensed as an option to an Oracle DB EE license). This includes all the functionality of TImesTen In-Memory Database but adds in cache functionality (cache groups, cache grid etc.).
    32-bit O/S are in general a poor platform for trying to create an in-memory database of any significant size (32-bit O/S are very limited in memory addressing capability), and 32-bit Windows is the worst example. The hard-coded limit for total datastore size on a 32-bit O/S is 2 GB, but in reality you probably can't achieve that. On Windows the largest you can get is 1.1 GB, and most often less than that. If you need something more than about 0.5 GB on Windows then you really need to use 64-bit Windows and 64-bit TimesTen. There is no hard-coded upper limit to database size on 64-bit TimesTen; the limit is the amount of free physical memory (not virtual memory) in the machine. I have easily created a 12 GB database on a Win64 machine with 16 GB RAM. On 64-bit Unix machines we have live databases of over 1 TB...
    Chris
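
    To illustrate Chris's point about sizing: PermSize, TempSize and LogBufMB are first-connection attributes, normally defined in the DSN (sys.odbc.ini on Unix, the ODBC Data Source Administrator on Windows) and only applied when the datastore is created or loaded into memory. A hedged Java sketch follows; the DSN mydsn and the sizes are placeholders rather than recommendations, and passing the attributes in the connection URL should be verified against your TimesTen version.

      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.ResultSet;
      import java.sql.Statement;

      public class TimesTenSizingSketch {
          public static void main(String[] args) throws Exception {
              Class.forName("com.timesten.jdbc.TimesTenDriver");
              // First-connection attributes (values in MB); they take effect only when
              // the datastore is created or reloaded into RAM, not on every connect.
              String url = "jdbc:timesten:direct:dsn=mydsn;PermSize=700;TempSize=100;LogBufMB=256";
              try (Connection con = DriverManager.getConnection(url);
                   Statement st = con.createStatement();
                   // SYS.MONITOR reports how much permanent space is allocated and in use.
                   ResultSet rs = st.executeQuery(
                       "SELECT perm_allocated_size, perm_in_use_size FROM sys.monitor")) {
                  while (rs.next()) {
                      System.out.println("perm allocated KB = " + rs.getLong(1)
                              + ", perm in use KB = " + rs.getLong(2));
                  }
              }
          }
      }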

  • Database level settings

    Please advise on database-level settings for all our databases for the following items:
     Virtual log files (VLFs)
     Database file growth (autogrowth) settings
    - Also, what are the best practices around these items that we should follow for future new databases? What different things should we consider?
    - And what would we need to do before making these changes on current databases?
    Thanks,

    You can refer to the links below:
    https://www.simple-talk.com/sql/database-administration/sql-server-database-growth-and-autogrowth-settings/
    http://www.sqlskills.com/blogs/kimberly/8-steps-to-better-transaction-log-throughput/
    --Prashanth
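
    To make the two items concrete: the linked articles recommend fixed-size autogrowth increments (not percentages) and keeping the number of virtual log files (VLFs) under control. Below is a hedged Java sketch of how both could be checked and changed over JDBC; the server, database and file names (SalesDb, SalesDb_log) are placeholders, and any such change should be tested before it is applied to production databases.

      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.ResultSet;
      import java.sql.Statement;

      public class AutogrowthSketch {
          public static void main(String[] args) throws Exception {
              // Placeholder connection details for the Microsoft SQL Server JDBC driver.
              String url = "jdbc:sqlserver://dbhost:1433;databaseName=SalesDb;"
                         + "user=sa;password=secret;encrypt=false";
              try (Connection con = DriverManager.getConnection(url);
                   Statement st = con.createStatement()) {

                  // Prefer a fixed growth increment over percentage growth for the log file.
                  st.executeUpdate("ALTER DATABASE [SalesDb] MODIFY FILE "
                          + "(NAME = N'SalesDb_log', FILEGROWTH = 512MB)");

                  // DBCC LOGINFO returns one row per virtual log file (VLF).
                  int vlfCount = 0;
                  try (ResultSet rs = st.executeQuery("DBCC LOGINFO")) {
                      while (rs.next()) {
                          vlfCount++;
                      }
                  }
                  System.out.println("VLF count for SalesDb: " + vlfCount);
              }
          }
      }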

  • BW cache settings and WebI

    Hello,
    we're trying to tune the BW cache for WebI queries, and we'd like to change the default cache settings for a number of queries, including ones used for BO universes.
    I'd like to know if these changes will be picked up and used by WebI, or if it is not sensitive to such changes (Cache Mode and Persistence Mode).
    Thanks in advance for your help.

    Hello Pablo,
    I haven't tried it yet, but the OLAP cache should be used by WebI when it accesses the queries. You can verify that it does in transaction RSRCACHE by looking at the last access of a cached query. You could also look at the BI stats (if you have installed them and turned them on, of course).
    Regarding the cache settings, I would personally recommend using the persistent cache (across app servers if you have more than one) into a cluster table (or a transparent table if the data set is quite large). You can find more details here: http://help.sap.com/saphelp_nw70ehp1/helpdata/en/d9/31363dc992752de10000000a114084/frameset.htm
    You could also look into the MDX cache instead of the OLAP cache, but I haven't played with it yet so I'm not sure it would help with WebI. Has anyone else tried it?
    Hope it helps...

  • Value settings for database query

    I have a database set up for item prices. Currently, the
    database is set for currency and I have 1.99 and 2.99 entered in
    the two fields. However, the cfm page form displays them as 1.9900
    and 2.9900. Why is it adding the extra "00" at the end? The value
    in the page coding is integer. Should I have it set at something
    different? Also, how would I get a "$" to show before the price?
    Thanks,
    Dave

    It is probable that your database is adding the extra zeros as an
    indication of the precision the field contains for currency. Many
    people care about those fractions of a cent and would not want them
    rounded off.
    To display the data the way you desire, check out the DecimalFormat(),
    DollarFormat() and/or NumberFormat() functions.
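
    The same idea, shown outside ColdFusion for illustration only (this is plain Java, not the CFML DollarFormat()/NumberFormat() functions): the column keeps its stored precision, and the presentation layer decides how the value is displayed.

      import java.math.BigDecimal;
      import java.text.NumberFormat;
      import java.util.Locale;

      public class PriceDisplaySketch {
          public static void main(String[] args) {
              // The database keeps the full stored precision (e.g. 1.9900);
              // display formatting is applied only when the value is shown.
              BigDecimal stored = new BigDecimal("1.9900");
              NumberFormat currency = NumberFormat.getCurrencyInstance(Locale.US);
              System.out.println(currency.format(stored));  // prints $1.99
          }
      }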

  • Data Cache Settings

    Hello,
    I am getting a spreadsheet retrieval error very frequently, with Essbase Error 1130203. I went through the error interpretation and found that the data cache might be causing it. Here are the details of the cache settings on my
    databases currently. Can somebody please help me understand if they are right or need some changes:
    DataBase A:
    Data File Cache setting (KB) 32768
    Data cache setting (KB) 100000
    Block Size: (B) 143880
    Number of existing blocks: 7266
    Page file size: 40034304
    DataBase B:
    Data File Cache setting (KB) 32768
    Data cache setting (KB) 300000
    Block Size: (B) 91560
    Number of existing blocks: 1912190
    Page file sizes: 2147475456 and 500703056 (two page files)
    DataBase C:
    Data File Cache setting (KB) 300000
    Data cache setting (KB) 37500
    Block Size: (B) 23160
    Number of existing blocks: 26999863
    Page file size: 21 page files = 21 * 2 GB = 42 GB
    If this might not be the issue then please let me know what might be causing it?
    Thanks!
    Edited by: user4958421 on Dec 15, 2009 10:43 AM

    Hi,
    1. For error no. 1130203, here are the possible problems and solutions, straight from the documentation.
    Try any of the following suggestions to fix the problem. Once you fix the problem, check whether the database is corrupt.
    1. Check the physical memory on the server computer. In a Windows environment, 64 MB is the suggested minimum for one database. In a UNIX environment, 128 MB is the suggested minimum for one database. If the error keeps occurring, add more memory to the server computer.
    2. If you are on a UNIX computer, check the user limit profile.
    3. Check the block size of the database. If necessary, reduce the block size.
    4. Check the data cache and data file cache setting. If necessary, decrease the data cache and data file cache sizes.
    5. Make sure that the Analytic Services computer has enough resources. Consult the Analytic Services Installation Guide for a list of system requirements. If a resource-intensive application, such as a relational database, is running on the same computer, the resource-intensive application may be using the resources that Analytic Services needs
    Sandeep Reddy Enti
    HCC
    http://hyperionconsultancy.com/

  • Cache Settings

    Hello,
    I am getting a spreadsheet retrieval error very frequently, with Essbase Error 1130203. I went through the error interpretation and found that the data cache might be causing it. Here are the details of the cache settings on my
    databases currently. Can somebody please help me understand if they are right or need some changes:
    DataBase A:
    Data File Cache setting (KB) 32768
    Data cache setting (KB) 100000
    Block Size: (B) 143880
    Number of existing blocks: 7266
    Page file size: 40034304
    DataBase B:
    Data File Cache setting (KB) 32768
    Data cache setting (KB) 300000
    Block Size: (B) 91560
    Number of existing blocks: 1912190
    Page file sizes: 2147475456 and 500703056 (two page files)
    DataBase C:
    Data File Cache setting (KB) 300000
    Data cache setting (KB) 37500
    Block Size: (B) 23160
    Number of existing blocks: 26999863
    Page file size: 21 page files = 21 * 2 GB = 42 GB
    If this might not be the issue then please let me know what might be causing it?
    Thanks!

    You have posted the same question on the essbase forum > Data Cache Settings
    You are more likely to get more responses on the essbase forum to this question.
    From the docs
    Error - 1130203     
    Essbase is unable to allocate memory.
    Possible solutions
    Try any of these suggestions to fix the problem. After you fix the problem, determine whether the database is corrupt (see Checking for Database Corruption).
    Check the physical memory on the server computer. In a Windows environment, 64 MB is the suggested minimum for one database. In a UNIX environment, 128 MB is the suggested minimum for one database. If the error keeps occurring, add more memory to the server computer.
    If you are on a UNIX computer, check the user limit profile (see Checking the User Limit Profile).
    Check the block size of the database. If necessary, reduce the block size.
    Check the data cache and data file cache setting. If necessary, decrease the data cache and data file cache sizes.
    Ensure that the Essbase computer has enough resources. Consult the Oracle Hyperion Enterprise Performance Management System Installation Start Here for system requirements. If a resource-intensive application, such as a relational database, is running on the same computer, the resource-intensive application may be using the resources that Essbase needs.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Maintaining data of view objects in cache memory for repeated usage

    Hi,
         We are developing an application which has around 800 view objects that will be used as LOVs in different screens. Therefore, we decided to create a separate project for all such LOV view objects and keep them in shared scope so that the data can be made available across the application.
         The application also communicates with different database schemas based on the logged-in county. For a particular user, LOV view object LovView1 should get the data fetched from Schema1 whereas for user2, the same LovView1 should get the data from Schema2.
         For this, we have created a number of application modules, like AM1, AM2 etc., in the project, each one connected to a different database. A base application module has also been created, and all the county-specific AMs extend this base AM. All the LOV view object instances are included in this base AM so that they will be available in the county-specific AMs as well. The entire project is packaged as an ADF Library jar, and this base AM is used by other projects for mapping the LOVs by attaching the library.
         At runtime, whenever a particular viewobject is accessed, the findViewObject() method of the baseAM has been overridden and the logic is built in such a way to get the logged in user's county code from a session variable and based on the county, the corresponding AM is communicated with and the view object is returned.
         The view objects of the LOV project are used as LOVs as well as for some other backend processes. In such cases, the view object is obtained, the necessary filter conditions are appended to the view criteria, and it is executed to get the filtered rowset.
    Now, my questions are,
    1. Is it enough to create the jar for the LOVProject and access the view objects from the same baseAM across the application?
    2. I wish to keep all the data in cache memory to avoid repeated DB hits for all the LOV view objects. How can this be achieved? To be more precise, consider two users, user1 and user2, logging into the application with different counties. When user1 accesses a LOV view object for the first time, the data needs to be fetched from the DB, kept in application-scoped cache memory, and the rowset returned. On subsequent calls to the view object, the data needs to be retrieved from the cache and not from the DB. When user2 also accesses the same LOV view object, the same logic as explained for user1 should apply (a sketch of this idea follows below). How can I achieve this? Actually, my doubt is: when user2 accesses it, will the data pertaining to user1 remain available in the cache? If not, how do I make it retain the data in the cache?
    3. I also wish to append a particular where condition to a viewobject irrespective of other considerations like logged in county, existing view criteria etc.. How can I do this? A separate thread for this requirement has been posted in the forum Including additional where clause conditions to view criteria dynamically
    Kindly give me your suggestions.
    Thanks in advance.
    Regards.
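
    Editorial sketch for point 2 above (plain Java to illustrate the idea, not the ADF shared-application-module API; all names are invented): an application-scoped cache keyed by county and view object name means the rows fetched for user1's county stay cached for every later user of that county, while user2's county gets its own independent entry.

      import java.util.List;
      import java.util.concurrent.ConcurrentHashMap;
      import java.util.concurrent.ConcurrentMap;
      import java.util.function.Function;

      // Application-scoped LOV cache, keyed by "<county>|<viewObjectName>".
      public class LovCache<R> {
          private final ConcurrentMap<String, List<R>> cache = new ConcurrentHashMap<>();
          private final Function<String, List<R>> loader;  // fetches rows from the DB for a key

          public LovCache(Function<String, List<R>> loader) {
              this.loader = loader;
          }

          public List<R> getRows(String county, String viewObjectName) {
              // The database is hit only on the first access for a given county/VO pair;
              // later calls (by any user of that county) are served from memory.
              return cache.computeIfAbsent(county + "|" + viewObjectName, loader);
          }

          public void invalidate(String county, String viewObjectName) {
              // Drop one entry so the next access fetches fresh data.
              cache.remove(county + "|" + viewObjectName);
          }
      }

    In ADF itself, shared application modules serve a similar purpose for lookup data; the sketch only shows why data cached for one county remains visible to subsequent users of the same county.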


  • How to use JNDI datasource instead of database connection settings JDev 10g

    Hi,
    In order to use a different database in each environment, we are currently not able to use the JNDI datasource configuration settings; we always need to reconfigure the database connection settings in JDeveloper, changing the database connectivity settings for each environment separately. We need a solution for making the database connectivity uniform by using a JNDI datasource name for all environments, with the connection settings maintained through the application server console rather than by changing the database adapter configuration in JDeveloper.
    Please provide the update at the earliest. Your help is greatly appreciated. Thanks in advance..

    What are you not clear on?
    What you need to do is get your developers to conform to a database naming standard, as stated above. So if you have an Oracle database that is for eBusiness Suite, you get all developers to create a DB connection in JDev called, for example, ora_ebs.
    When the developer creates a DB adapter, this will create a JNDI name of eis/DB/ora_ebs. When the BPEL project is deployed, it looks for the JNDI name in the oc4j-ra.xml file to find its connection details. If they don't exist, it uses the developer's connection details. The issue with this is that they generally always point to the development DB. It is best practice for the developers to remove the mcf settings in the DB adapter WSDL; this way, if the JNDI name has not been configured, it will fail.
    So when you migrate from dev-test-prod what you have is the JNDI name eis/DB/ora_ebs. The dev points to the dev instance of ebs, test points to the test instance and so on. This means that you don't need to adjust any code in the BPEL projects.
    cheers
    James
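
    To make the "logical JNDI name instead of hard-coded connection" idea concrete, here is a generic, hedged Java sketch (standard JNDI/JDBC only, not the BPEL DB adapter itself; the name jdbc/oraEbsDS is a placeholder, and the code is meant to run inside the application server where the JNDI context is available): the code knows only the logical name, and the server console maps that name to the dev, test or prod database.

      import java.sql.Connection;
      import javax.naming.InitialContext;
      import javax.sql.DataSource;

      public class JndiLookupSketch {
          public static Connection openConnection() throws Exception {
              // Look up the datasource by its logical JNDI name; the mapping to an
              // actual database is maintained in the application server, so the
              // deployed code never changes between environments.
              InitialContext ctx = new InitialContext();
              DataSource ds = (DataSource) ctx.lookup("jdbc/oraEbsDS");  // placeholder name
              return ds.getConnection();
          }
      }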

  • Cache Refresh for Adapter Engine is not working

    Hi,
    our adapter engine is not working. In the RWB there are no channels visible, and the button "Test Cache Connectivity" leads to a yellow icon for the adapter engine with the error text: "Attempt to fetch cache data from Integration Directory not yet started or still in process".
    The error reason is at <i>http://<j2ee host>:<port>/CPACache/history.jsp</i> available: <i>com.sap.aii.af.service.cpa.impl.exception.CPADirectoryCacheException: Couldn't open Directory URL (http://pixd1.digoff.no:55200/dir/hmi_cache_refresh_service/ext?method=CacheRefresh&mode=C&consumer=af.xd1.dat-dof-xi), due to: HTTP 403: Forbidden</i>
    The cache refresh request from the adapter engine to the directory gets a 403 error. What is strange: there is no problem executing that request in a browser; I get the right response, no fault.
    I searched for notes and other threads; I could only find SAP note 751856, but it is valid for XI 3.0 and we are on PI 7.0.
    I checked out:
    - no user blocked
    - all system users have standard authorisations
    - reimport of SAP BASIS SWC
    - SICF services for XI active
    - destination INTEGRATION_DIRECTORY_HMI working
    Any idea how I can get the adapter engine working?
    Regards,
    Udo

    Hi,
    There are two ways to display the content of the CPA cache refresh.
    1. The Cache-Update Content XML document can be stored in the Adapter Framework database (J2EE schema) for further analysis. You have to set the property in the J2EE Visual Administrator tool. To do so, proceed as follows:
    => J2EE service SAP XI AF CPA Cache
    => set property trackCacheUpdateXML = true
    => You can now display the cache update XML with the alias CPACache
    => View Cache Update History
    => View Cache Update XML
    2. The content of the cache refresh from the Integration Directory into the CPA cache can be written to an XML file. Here you have to set the property in the J2EE Visual Administrator tool:
    => J2EE service SAP XI AF CPA Cache
    => set property cacheUpdateDebugFile = true
    => the CPA cache refresh XML file will be in the server(0) directory; search for the file named in the property cacheUpdateDebugFile.delta if you intend to see the delta refresh, and the one named in the property cacheUpdateDebugFile.full if you intend to see the full cache refresh content
    Note: After your analysis, do not forget to reset the property values to false.
    You can delete the XML cache-refresh table: alias CPACache
    => View Cache Update History
    => Delete History
    Also check out the user and its permissions.
    regards
    Aashish Sinha
    PS : Reward points if Helpful
