Metadata Caching - Performance Improvement

Hi all,
Could you please give me suggestions to improve the speed of metadata caching?
Currently, I'm retrieving the metadata from the database and writing it to an XML file. Connection pooling has helped improve performance, but further improvement is required.
Also, is there a way to determine optimum values for MaxIdle, MinIdle, and EvictionInterval?
I'd be grateful for any help. Thanks

To improve performance, first identify your bottlenecks.
Also, saving the cache data as XML can itself become a bottleneck, because accessing the cache then involves I/O operations and XML parsing.
Normally, the best thing to do is to keep the data you expect to be accessed frequently in memory. Also, make your algorithm keep only a small amount of data in the cache by regularly cleaning it up and removing old entries. A small cache means better search performance within the cache.
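As a rough illustration of both points, here is a minimal sketch, assuming Apache Commons DBCP for the pool (its BasicDataSource exposes maxIdle, minIdle, and timeBetweenEvictionRunsMillis properties corresponding to the MaxIdle/MinIdle/EvictionInterval names) and a size-bounded LRU map for the in-memory metadata cache. The class and all numbers below are illustrative starting points, not tuned recommendations; there is no universal optimum for the pool settings, so measure under your real load and adjust.

import java.util.LinkedHashMap;
import java.util.Map;
import org.apache.commons.dbcp2.BasicDataSource;

public class MetadataCacheSketch {

    // Pool tuning: start from your measured steady-state concurrency and
    // adjust while watching pool statistics; these values are placeholders.
    static BasicDataSource createPool() {
        BasicDataSource ds = new BasicDataSource();
        ds.setUrl("jdbc:...");                       // your JDBC URL here
        ds.setMaxIdle(8);                            // roughly steady-state concurrency
        ds.setMinIdle(2);                            // keep a few connections warm
        ds.setTimeBetweenEvictionRunsMillis(30_000); // how often the idle evictor runs
        return ds;
    }

    // A size-bounded LRU cache: lookups stay in memory, avoiding file I/O
    // and XML parsing entirely. Wrap with Collections.synchronizedMap(...)
    // if the cache is shared across threads.
    static <K, V> Map<K, V> lruCache(final int maxEntries) {
        return new LinkedHashMap<K, V>(16, 0.75f, true) { // true = access order
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxEntries; // drop least recently used entry
            }
        };
    }
}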

Similar Messages

  • Performance improvement in OBIEE 11.1.1.5

    Hi all,
    In OBIEE 11.1.1.5, reports take a long time to load. Kindly provide me some performance improvement guides.
    Thanks,
    Haree.

    Hi Haree,
    Steps to improve the performance.
    1. implement caching mechanism
    2. use aggregates
    3. use aggregate navigation
    4. limit the number of initialisation blocks
    5. turn off logging
    6. carry out calculations in database
    7. use materialized views if possible
    8. use database hints
    9. alter the NQSConfig.INI parameters
    Note: calculate all the aggregates in the repository itself and create a fast refresh for the MVs (materialized views).
    You can also schedule an iBot to run the report every hour or so, so that the report data is cached; when the user then runs the report, the BI Server extracts the data from the cache.
    This is the latest tuning guide for OBIEE 11g:
    http://blogs.oracle.com/pa/resource/Oracle_OBIEE_Tuning_Guide.pdf
    Report level:
    1. Enable the cache -- in NQSConfig.INI, change ENABLE from NO to YES (see the sketch after this list).
    2. Go to the Physical layer, right-click the table, choose Properties, and check Cacheable.
    3. Try to implement an aggregate mechanism.
    4. Create indexes/partitions at the database level.
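    For reference, the cache switch lives in the [CACHE] section of NQSConfig.INI. A minimal excerpt might look like the following (the path and sizes are illustrative, and on 11g these settings are normally managed through Enterprise Manager rather than edited by hand):

    [CACHE]
    # turn the BI Server query cache on
    ENABLE = YES;
    # where cache files are stored, and the per-path capacity
    DATA_STORAGE_PATHS = "/u01/cache" 500 MB;
    # upper bound on the number of cached result sets
    MAX_CACHE_ENTRIES = 1000;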
    There are multiple other ways to fine tune reports from OBIEE side itself:
    1) You can check the granularity of the measures in your reports and create level-based measures in the RPD using the OBIEE Aggregate Persistence Wizard.
    http://www.rittmanmead.com/2007/10/using-the-obiee-aggregate-persistence-wizard/
    Queries will then hit your aggregate tables rather than the detail tables.
    2) You can use cache seeding options, either with an iBot or with the nqcmd command-line utility (see the example after the links).
    http://www.artofbi.com/index.php/2010/03/obiee-ibots-obi-caching-strategy-with-seeding-cache/
    http://satyaobieesolutions.blogspot.in/2012/07/different-to-manage-cache-in-obiee-one.html
    OR
    http://hiteshbiblog.blogspot.com/2010/08/obiee-schedule-purge-and-re-build-of.html
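    As an example of the nqcmd route, a seeding job can simply replay a file of logical SQL against the BI Server after the nightly cache purge (the DSN, credentials, and file name below are placeholders):

    nqcmd -d AnalyticsWeb -u weblogic -p password -s seed_queries.sql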
    Using one of the above two methods, you can fine-tune your reports and reduce the query time.
    Also, to be on the safe side, take the physical SQL from the log and run it directly on the DB to see the time taken, and check the explain plan with the help of a DBA.
    Hope this helps.
    Thanks,
    Satya

  • Adaptive RFC... metadata cache.

    Just curious... is there a way to force a re-cache of the RFC metadata?
    Regards
    Pran

    Hi Pran,
    currently, the only way to "recache" or reinitialize the metadata cache is by restarting the NetWeaver J2EE Engine.
    As a matter of fact, whenever structural changes are made in a backend (such as adding fields to DDIC structures), the engine must be restarted. It is not necessary to reimport, rebuild, or redeploy, but it is necessary to restart.
    This will be improved in future releases.
    Ciao, Markus

  • Performance improvement from patching/upgrading WebLogic 5.1

    Hi,
    We have started looking at a lot of issues that are currently affecting the performance of our web site. One thing that has been proposed as a potential area for improvement would be to patch/upgrade our weblogic servers.
    We are currently running 5.1 patched at level 4 on hpux.
    I was wondering if anybody could provide some info or point me to some documentation that would confirm any actual gains we might get by patching our servers or upgrading them to 6.1.
    Thanks,
    Karl

    Hi Karl,
    "Karl Raphael" <[email protected]> wrote in message
    news:3f4d6858$[email protected]..
    We have started looking at a lot of issues that are currently affecting the performance of our web site. One thing that has been proposed as a potential area for improvement would be to patch/upgrade our weblogic servers.
    >
    We are currently running 5.1 patched at level 4 on hpux.
    I was wondering if anybody could provide some info or point me to some documentation that would confirm any actual gains we might get by patching our servers or upgrading them to 6.1.
    You should definitely consider upgrading to the newest 8.1.
    It offers a lot of performance improvements, including normal
    connection pools and prepared statement caches, CMP batching
    and many others. There is no reason for upgrading to 6.1
    when 8.1 is out.
    If such a long jump is not an option, at least you may upgrade
    to the latest SP for 5.1 - you will get caching for prepared
    statements - if your app is database-bound, it will definitely
    give a boost.
    Regards,
    Slava Imeshev

  • Tabular Model Performance Improvements

    Hi !
    We have built an inline tabular model which has a fact table and 2 dimension tables. The performance of the SSRS report is very slow, and this is a bottleneck in deciding on SSRS as the reporting tool.
    Can you help us with performance improvements for the inline tabular model?
    Regards,

    Hi Bhadri,
    As Sorna said, it is hard to give you detailed tips for improving the tabular model's performance given the limited information. Here is a useful link about performance tuning of tabular models in SQL Server 2012 Analysis Services; please refer to the
    link below.
    http://msdn.microsoft.com/en-us/library/dn393915.aspx
    If this is not what you want, please elaborate with more detailed information so that we can analyze further.
    Regards,
    Charlie Liao
    TechNet Community Support

  • WD Java RFC metadata cache invalidation without recycle JAVA

    Since we are in a high-availability production system, we cannot recycle our CE instance. There are times when the RFC metadata invalidation does not work, and the only thing that forces it is a recycle of the CE system.
    This is contrary to what SAP has publicized, so is there a process in NWA, or a single process (and NOT the server0 process), that we can refresh without stopping and starting the Java engine?
    Much like in Visual Administrator, is there a single service we can stop and start to ensure the engine stays up, running, and available?
    Thanks
    Weyland Yutani

    Hi Weyland ,
    did you check the guide?
    Metadata Cache Invalidation for Adaptive RFC Models
    http://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/10465350-b4f5-2910-61ba-a58282b3b6df
    There's a solution in this guide which explains how to avoid an engine restart in the case of ARFC.
    (I don't know your engine release, but if it is >= 711, then you may welcome this guide as well:
    Using an Adaptive RFC 2 Model in Web Dynpro Java
    http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/50f7192d-2808-2d10-189b-df3fa5f71abf )
    I hope this helps.
    Best Regards,
    Ervin

  • DS 5.2 P4 performance improvement

    We have +/- 300,000 users that regularly authenticate using our DS. The user OU is divided into ou=internal (20,000 uids) and ou=external (280,000 uids). Approximately 85-90 percent of the traffic happens on the internal OU. The question is: could I get any performance improvement by separating the internal branch into its own suffix/database? Would running two databases adversely affect performance instead? We see performance impacts when big searches are performed on the ou=external branch. Would the separation isolate the issue, or would those searches most likely affect the DS as a whole?
    Thanks for your help!
    Enrique.

    Thank you for the info. Are you a Sun guy - do you work for Sun?
    Yes I am. I'm the Architect for Directory Server Enterprise Edition 6.0. Previously I worked on all DS 5 releases (mostly on Replication).
    You are getting the Dukes!
    Thanks.
    Ludovic.

  • Performance improvement in a function module

    Hi All,
    I am using SAP version 6.0. I have a function module to retrieve POs; for just 10,000 records it is taking a long time.
    Can anyone suggest ways to improve the performance?
    Thanks in advance.

    Moderator message - Welcome to SCN.
    But
    Moderator message - Please see Please Read before Posting in the Performance and Tuning Forum before posting
    Just 10,000 records? The first rule in performance improvement is to reduce the amount of selected data. If you cannot do that, it's going to take time.
    I wouldn't bother with a BAPI for so many records. Write some custom code to get only the data you need.
    Tob

  • Please help me modify the query for performance improvement

    Hi,
    I have the below initialization
    DECLARE @Active bit = 1;
    DECLARE @id int;
    SELECT @Active=CASE WHEN id=@id and [Rank] ='Good' then 0 else 1 END  FROM dbo.Students
    I have to change this query so that the conditions id=@id and [Rank] ='Good' go into the WHERE clause of the query. In that case, how can I use a CASE statement to retrieve 1 or 0? Can you please help me modify this initialization?

    I don't understand your query... maybe something like below? Or provide us sample data and your expected output...
    SELECT *  FROM dbo.students
    where @Active=CASE
    WHEN id=@id and rank ='Good' then 0 else 1 END
    But I doubt you will see a performance improvement here.
    Do you have an index on id?
    If you are looking to get the data for @id with [Rank] = 'Good', then use the query below; make sure you have an index on the (id, Rank) combination (an example index statement follows the query).
    SELECT *  FROM dbo.students
    where  id=@id
    and rank ='Good' 
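    For completeness, the supporting index mentioned above could be created along these lines (the index name is illustrative):

    -- Index matching the id + [Rank] predicate; consider INCLUDE columns
    -- for whatever your SELECT actually returns, to make it covering.
    CREATE NONCLUSTERED INDEX IX_Students_Id_Rank
        ON dbo.Students (id, [Rank]);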

  • MV Refresh Performance Improvements in 11g

    Hi there,
    the 11g New Features Guide says, in section "1.4.1.8 Refresh Performance Improvements":
    "Refresh operations on materialized views are now faster with the following improvements:
    1. Refresh statement combinations (merge and delete)
    2. Removal of unnecessary refresh hint
    3. Index creation for UNION ALL MV
    4. PCT refresh possible for UNION ALL MV"
    While I understand (3.) and (4.) I don't quite understand (1.) and (2.). Has there been a change in the internal implementation of the refresh (from a single MERGE statement)? If yes, then which? Is there a Note or something in the knowledge base, about these enhancements in 11g? I couldn't find any.
    These considerations feed into our decision on whether or not to migrate to 11g...
    Thanks in advance.

    I am not quite sure what you mean. Do you mean perhaps that the MV logs work correctly when you perform MERGE statements with DELETE on the detail tables of the MV?
    And where is the performance improvement? What is the refresh hint?
    Though I am using MVs and MV logs at the moment, our app performs deletes and inserts in the background (no merges). The MV-log-based fast refresh scales very badly, which means that performance drops very quickly as the changed data set grows.
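    In that situation, one standard workaround (a plain Oracle API call, though whether it actually helps depends on your volumes and should be measured) is a non-atomic complete refresh, which truncates and reloads via direct-path inserts instead of replaying the MV log row by row; 'MY_MV' below is a placeholder name:

    -- Complete refresh; atomic_refresh => FALSE lets Oracle use TRUNCATE plus
    -- direct-path INSERT instead of DELETE plus conventional INSERT, which
    -- often scales better once the change set grows large.
    BEGIN
      DBMS_MVIEW.REFRESH(list => 'MY_MV', method => 'C', atomic_refresh => FALSE);
    END;
    /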

  • Clear Metadata Cache

    I had an error, listed below, and had to clear the metadata cache to fix it. I do not have the metadata cache activated under user options, so I am not sure why I had to clear the cache to get things working. Can someone explain why I had to do this and where the metadata cache is located?
    Error:
    Index was out of range. Must be non-negative and less than the size of
    the collection.
    Parameter name: startindex

    Hello Jeroen,
    There is no other possibility. If you adjusted and reimported an RFC model, you need to restart the J2EE server.
    Regards,
    Christophe

  • Why GN_INVOICE_CREATE has no performance improvement even in HANA landscape?

    Hi All,
    We have a pricing update program which is used to update the price for a material-customer combination (CMC). This update is done using the FM 'GN_INVOICE_CREATE'.
    The logic loops over customers; for each customer, the FM is called with all the materials valid for that customer.
    This process takes days (approx. 5 days) to execute and update 100 million CMC records.
    Hence we are planning to move to HANA for better performance.
    We built the same programs in the HANA landscape and executed them in both systems for 1 customer and 1,000 material combinations.
    Unfortunately, both systems gave the same runtime of around 27 seconds.
    This is very disappointing, considering the performance improvement we expected on the HANA landscape.
    Could anyone shed light on the areas where we are missing out and why no performance improvement was obtained?
    Also, are there any configuration changes to be made on the HANA landscape for better performance?
    The details regarding both the systems are as below.
    Suite on HANA:
    SAP_BASIS : 740
    SAP_APPL  : 617
    ECC
    SAP_BASIS : 731
    SAP_APPL  : 606
    [The original post included screenshots of the HANA and ECC system details.]
    Thanks & regards,
    Naseem

    Hi,
    just to fill in on Lars' already exhaustive comments:
    Migrating to HANA gives you lots of options to replace your own functionality (custom ABAP code) with HANA artifacts - views or SQLScript procedures. This is where you can really gain on performance. Expecting ABAP code to automatically run faster on HANA may be unrealistic, since it depends on what the code does and how well it "translates" to a HANA environment. The key to really minimizing run time is to replace DB calls with specific HANA views or procedures, and then call these from your code.
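    To make that concrete, a pushdown typically looks like a small SQLScript procedure that the ABAP layer calls once per customer instead of looping itself; the procedure, table, and column names below are purely illustrative, not from your system:

    -- Hypothetical SQLScript procedure: returns all valid prices for one
    -- customer in a single set-based call, instead of an ABAP loop over materials.
    CREATE PROCEDURE get_customer_prices (
        IN  iv_customer NVARCHAR(10),
        OUT et_prices   TABLE (material NVARCHAR(18), price DECIMAL(15,2))
    )
    LANGUAGE SQLSCRIPT READS SQL DATA AS
    BEGIN
      et_prices = SELECT material, price
                    FROM pricing
                   WHERE customer = :iv_customer;
    END;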
    I wrote a blog on this; you might find it useful as a general introduction:
    A practical example of ABAP on HANA optimization
    When it comes to SAP standard code, like the FM you mention, it is true that SAP is migrating some of this functionality to HANA-optimized versions, but this doesn't mean everything will be optimized in one go. This particular FM is probably not among those initially selected for "HANAification", so you basically have to either create your own functionality (which might not be advisable, as it could violate data integrity) or just be patient.
    But again, the beauty of HANA lies in the brand new options for developers to utilize the new ways of pushing code down to the DB server. Check out the recommendations from Lars and you'll find yourself embarking on a new and exciting journey!
    Also - as a good starting point - check out the HANA developer course on open.sap.com.
    Regards,
    Trond

  • Will there be a performance improvement with separate tables vs a single table with multiple partitions?

    Will there be a performance improvement with separate tables versus a single table with multiple partitions? Is it advisable to have separate tables rather than a single big table with partitions? Can we expect the same performance from a single big table with partitions? What is the recommended approach in HANA?

    Suren,
    first off a friendly reminder: SCN is a public forum and for you as an SAP employee there are multiple internal forums/communities/JAM groups available. You may want to consider this.
    Concerning your question:
    You didn't tell us what you want to do with your table or your set of tables.
    As tables are not only storage units but usually carry semantics - read: data stored in one table means something different from the same data in a different table - partitioned tables cannot simply be substituted by multiple tables.
    Looked at on a storage-technology level, table partitions are practically the same as tables. Each partition has its own delta store and can be loaded into and displaced from memory independently of the others.
    Generally speaking there shouldn't be too many performance differences between a partitioned table and multiple tables.
    However, when dealing with partitioned tables, the additional step of determining the partition to work on is always required. If computing the result of the partitioning function takes a major share of your total runtime (which is unlikely), then partitioned tables could have a negative performance impact.
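    For reference, the partitioned alternative is plain SQL DDL; a minimal sketch (the table and column names are invented for illustration):

    -- One logical table split into 4 hash partitions; each partition behaves
    -- like an independent column-store unit with its own delta store.
    CREATE COLUMN TABLE sales_facts (
        id     BIGINT,
        amount DECIMAL(15,2)
    )
    PARTITION BY HASH (id) PARTITIONS 4;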
    Having said this: as with all performance related questions, to get a conclusive answer you need to measure the times required for both alternatives.
    - Lars

  • How to install the metadata cache invalidation tool

    Hi,
    I have a problem using JCo destinations for Adaptive RFC models.
    I went through the PDF below; it mentions the metadata cache invalidation tool, but that tool is not installed in the portal:
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/10465350-b4f5-2910-61ba-a58282b3b6df
    Please let me know how to install the metadata cache invalidation tool.
    Points will be rewarded for helpful answers.

    I installed the metadata cache invalidation tool using SDM, and when I try to run the application from the Content Administrator it throws an exception:
    The currently executed application, or one of the components it depends on, has been compiled against class file versions that are different from the ones that are available at runtime.
    If the exception message indicates, that the modified class is part of the Web Dynpro Runtime (package com.sap.tc.webdynpro.) then the running Web Dynpro Runtime is of a version that is not compatible with the Web Dynpro Designtime (Developer Studio or Component Build Server) which has been used to build + compile the application.
    Note: the above hints are only a guess. They are automatically derived from the exception that occurred and therefore can't be guaranteed to address the original problem in all cases.
    Please let me know how to resolve this issue.
    Points will be awarded for sure.
    Bala

  • How to install metadata cache invalidation tool

    Hi,
    I have a problem using JCo destinations for Adaptive RFC models.
    I went through the PDF below; it mentions the metadata cache invalidation tool, but that tool is not installed in the portal:
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/10465350-b4f5-2910-61ba-a58282b3b6df
    Please let me know how to install the metadata cache invalidation tool.
    Points will be rewarded for helpful answers.

    This tool is already installed on your system.
    1) Navigate to http://<hostname>:<port>/index.html
    2) Click on "Webdynpro"
    3) Click on "Web Dynpro Console"
    4) Log in as an administrator (any user with admin privileges is OK).
    5) Notice the last entry in the list.
