Data and Index lookup Cache

Hi All,
We are facing a performance issue when executing the task WC_SIL_APInvoiceDistributionFact; it takes 1:20:13 to complete the load.
The issue is with the lookup on table W_EMPLOYEE_D, which takes around 35 minutes, so we increased the data and index cache size for that lookup table. Now the W_EMPLOYEE_D lookup takes 7 minutes and the load duration is around 28 minutes.
Calculation used for the data and index cache:
Index cache = count of rows + 16
Data cache = (count of rows + 16) * 2
Since the data volume will not be the same in all instances, do we need to change the cache size for each instance? And will there be any other impact if we set this cache size?
Can you please help us on how to proceed further?
Thanks,
Shalini Peddibhotla

Hi Shalini,
In that case, the cache will be built and used as long as the number of rows sourced from the lookup table fits in the defined size. If it exceeds the maximum size, the rows will not be stored in the cache, and every mapping using the lookup will wait for the query generated by the lookup to return its result set.
So basically, you will potentially hit the same problem you are trying to solve.
Regards,
Marco Siliakus

Similar Messages

  • Moving data and index to different drive letter

    A follow-up to my previous question... is the same directory structure necessary on the new volume, or can the page and index files be placed under the root directory?

    No - Essbase will automatically create the same Essbase\Apps\App\Db structure on the target volume.
    Regards,
    Jade Cole
    Senior Business Intelligence Consultant
    Clarity
    [email protected]

  • XSLT and Java lookup cache

    Hi,
    I'm trying the "Easy RFC lookup from XSLT mappings using a Java helper class" article and I'm getting a weird problem.
    The result of the RFC lookup called inside the Java class is kept in a kind of cache, and I always get the same result independent of the parameters I use in the following calls.
    Only after calling a Complete Cache Refresh (SXI_CACHE) do I get a new result from the lookup.
    If I call it via the Interface Mapping Test option it runs fine. However, when I call it from my scenario (SOAP Adapter Sender), the first result of the lookup is returned until a forced cache refresh.
    Any ideas?
    Thank you,
    Fabiano.

    Hello Fabiano,
    I had the same problem you had.
    The main problem is that with the example code the request variable is created as a NodeList object. In XSLT a variable is a kind of constant and can't be changed. As the request object is empty after the first request, the program fails at the following line:
    Source source = new DOMSource(request.item(0));
    So I've created a workaround for this problem.
    In the call of the template I pass the request as a parameter object:
    <xsl:with-param name="req">
    <rfc:PLM_EXPLORE_BILL_OF_MATERIAL xmlns:rfc="urn:sap-com:document:sap:rfc:functions">
      <APPLICATION>Z001</APPLICATION>
      <FLAG_NEW_EXPLOSION>X</FLAG_NEW_EXPLOSION>
      <MATERIALNUMBER><xsl:value-of select="value"/></MATERIALNUMBER>
      <PLANT>FSD0</PLANT>
      <VALIDFROM><xsl:value-of select="//Recordset/Row[name='DTM-031']/value"/></VALIDFROM>
      <BOMITEM_DATA/>
    </rfc:PLM_EXPLORE_BILL_OF_MATERIAL>
    </xsl:with-param>
    With this change the request will be provided as a String object and not as a NodeList object.
    Afterwards, RfcLookup.java has to be changed to the following:
    package com.franke.mappings;

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.PrintWriter;
    import java.io.StringWriter;
    import java.util.Map;
    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.transform.Source;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.dom.DOMSource;
    import javax.xml.transform.stream.StreamResult;
    import org.w3c.dom.Document;
    import org.w3c.dom.Node;
    import org.w3c.dom.NodeList;
    import com.sap.aii.mapping.lookup.Channel;
    import com.sap.aii.mapping.api.StreamTransformationConstants;
    import com.sap.aii.mapping.api.AbstractTrace;
    import com.sap.aii.mapping.lookup.RfcAccessor;
    import com.sap.aii.mapping.lookup.LookupService;
    import com.sap.aii.mapping.lookup.XmlPayload;

    /**
     * @author Thorsten Nordholm Søbirk, AppliCon A/S
     * Helper class for using the XI Lookup API with XSLT mappings for calling RFCs.
     * The class is generic in that it can be used to call any remote-enabled
     * function module in R/3. Generation of the XML request document and parsing of
     * the XML response is left to the stylesheet, where this can be done in a very
     * natural manner.
     * TD: Changed the class so that the request is sent as a String, because of an
     * IndexOutOfBound exception when sending multiple requests in one XSLT mapping.
     */
    public class RfcLookup {

        /**
         * Execute RFC lookup.
         * @param request RFC request - TD: changed to String
         * @param service name of service
         * @param channelName name of communication channel
         * @param inputParam mapping parameters
         * @return Node containing RFC response
         */
        public static Node execute(String request,
                String service,
                String channelName,
                Map inputParam) {
            AbstractTrace trace = (AbstractTrace) inputParam.get(StreamTransformationConstants.MAPPING_TRACE);
            Node responseNode = null;
            try {
                // Get channel and accessor
                Channel channel = LookupService.getChannel(service, channelName);
                RfcAccessor accessor = LookupService.getRfcAccessor(channel);

                // Serialise request NodeList - TD: not needed any more, as the request is a String
                /*
                TransformerFactory factory = TransformerFactory.newInstance();
                Transformer transformer = factory.newTransformer();
                Source source = new DOMSource(request.item(0));
                ByteArrayOutputStream baos = new ByteArrayOutputStream();
                StreamResult streamResult = new StreamResult(baos);
                transformer.transform(source, streamResult);
                */

                // TD: Add XML header and remove line feeds from the request string
                request = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>" + request.replaceAll("[\r\n]+", "");

                // TD: Get a byte array from the request String to send afterwards
                byte[] requestBytes = request.getBytes();

                // TD: Not used any more, as the request is a String
                // byte[] requestBytes = baos.toByteArray();

                trace.addDebugMessage("RFC Request: " + new String(requestBytes));

                // Create input stream representing the function module request message
                InputStream inputStream = new ByteArrayInputStream(requestBytes);

                // Create XmlPayload
                XmlPayload requestPayload = LookupService.getXmlPayload(inputStream);

                // Execute lookup
                XmlPayload responsePayload = accessor.call(requestPayload);
                InputStream responseStream = responsePayload.getContent();
                TeeInputStream tee = new TeeInputStream(responseStream);

                // Create DOM tree for response
                DocumentBuilder docBuilder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
                Document document = docBuilder.parse(tee);
                trace.addDebugMessage("RFC Response: " + tee.getStringContent());
                responseNode = document.getFirstChild();
            } catch (Throwable t) {
                StringWriter sw = new StringWriter();
                t.printStackTrace(new PrintWriter(sw));
                trace.addWarning(sw.toString());
            }
            return responseNode;
        }

        /**
         * Helper class which collects stream input while reading.
         */
        static class TeeInputStream extends InputStream {
            private ByteArrayOutputStream baos;
            private InputStream wrappedInputStream;

            TeeInputStream(InputStream inputStream) {
                baos = new ByteArrayOutputStream();
                wrappedInputStream = inputStream;
            }

            /**
             * @return stream content as String
             */
            String getStringContent() {
                return baos.toString();
            }

            /* (non-Javadoc)
             * @see java.io.InputStream#read()
             */
            public int read() throws IOException {
                int r = wrappedInputStream.read();
                baos.write(r);
                return r;
            }
        }
    }
    Then you need to compile and upload this class and it should work.
    I hope that this helps you.
    Best regards
    Till

  • File Properties (Meta Data) and Indexing

    Meta Edit™
    Find what you need, when you need it.
    If you manage a large network or portal, chances are that there are thousands of documents floating around that are difficult to locate due to poor file property information.
    With MetaEdit™, it’s easy to gain the control you seek and create rich document repositories you can count on.
    At its core, MetaEdit™ is a simple and easy-to-use software solution that allows users to quickly search, index, modify and maintain the META properties of ALL your files, such as Author, Company Name, etc…
    It’s EASY! Simply set a base folder to scan, then let MetaEdit™ do the work of gathering all the related meta information for you to efficiently manage and edit.
    Why wait? Maximize the value of your information today.
    For more detailed information and a FREE demo, visit http://www.spheric.ca/MetaEdit.aspx

    File -> Library -> Import playlist and select the old .XML file.

  • File Properties (Meta Data) and Indexing for your Portal

    Acath?
    (1) Why would you need to change the color in HTML when you can specify it in Catalyst (Artboard Size & Color)?
    (2) I think the question was about specifying an "Image"... you can't in Catalyst, but is there any plan for beta 3?
    (3) Can you put a transparent Artboard in Catalyst and specify a background image in the HTML?
    Thanks
    Roger

  • Import dumpfile with separate tablespaces for table and index

    Hi,
    We have a schema whose tables are stored in one tablespace and whose indexes are stored in a different tablespace. We have taken a full schema export and now want to import it into another schema. I know that when the tablespace names differ we use the REMAP_TABLESPACE clause of the impdp command, but what about the separate tablespaces for tables and indexes? How would Oracle handle this?
    Regards,
    Abbasi

    Hi,
    I hope you created the same tablespace structure on the target side; if not, you have to use the remap_tablespace option to specify the different tablespaces, and Oracle will take care of placing the data and indexes. Also, if an index is moved from one tablespace to another you have to rebuild it, and gather statistics only after the rebuild; otherwise you
    might face some performance issues.
    The better option is to keep the same tablespace structure in the source and target environments.
    Best regards,
    Rafi.
    http://rafioracledba.blogspot.com
    Edited by: Rafi (Oracle DBA) on May 9, 2011 7:07 AM
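    As an illustration of the remap approach mentioned above, an impdp call that remaps both the table and the index tablespaces might look like the following (the schema and tablespace names here are assumptions for the example, not values taken from this thread):
    impdp system/password schemas=SCOTT directory=DATA_PUMP_DIR dumpfile=scott_full.dmp remap_schema=SCOTT:SCOTT_NEW remap_tablespace=USER_DATA:NEW_DATA remap_tablespace=INDEX_DATA:NEW_INDX
    Each remap_tablespace pair covers one source tablespace, so the table tablespace and the index tablespace are remapped independently.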

  • Import data and indexes into different tablespaces

    Assume that
    I have tablespaces "user_data" and "index_data".
    If I export the entire schema
    and
    import it into another database, all the data and indexes are stored in one tablespace, but they are in different tablespaces in the source.
    What should I do to import the indexes into a different tablespace?
    Oracle 10g

    In Import, there's a parameter "indexfile".
    imp file=abc.dmp show=y indexfile=abc.sql full=y
    will generate a SQL file with the metadata definitions, i.e. the table creation scripts and index creation scripts.
    Modify the script to change the tablespace names for the indexes, then execute them.
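    For example, one of the index statements in the generated abc.sql could be edited so that its tablespace clause points at the index tablespace (the index, table and column names below are made up for illustration; the tablespace name is the one from the question):
    CREATE INDEX "SCOTT"."EMP_NAME_IX" ON "SCOTT"."EMP" ("ENAME") TABLESPACE "INDEX_DATA";
    After editing, run the script against the target database so the indexes are created in index_data.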

  • Is it possible to cache an index and not cache the table?

    Can someone point me to the syntax? I have been messing with it and can't get the cache command on the indexes to work. I don't want to cache the table, just the index blocks.

    I have joins between tables with denormalized data, joining on non-unique columns. The indexes I am using have high clustering factors, and I have no way of solving this right now.
    In performance tests the queries use a lot of physical IO and take a long time to return. If I run them a second time they still use a lot of logical IO, but return quickly. I have enough CPU to handle the logical IO, and I need to speed up the queries.
    I don't have enough memory to cache the tables' data involved, but I do have enough to cache the indexes. When I run a 10046 trace, virtually all of the work is done in the index searches, so I was hoping to cache the indexes in order to speed up the queries.
    Again, I can't solve the data issues and I am not concerned about the high logical IO since there is limited concurrency and I have plenty of CPU.
    I guess my only other option is to find out which table in the join would benefit most from caching and cache that table, since these are big tables and I can really only cache one of them.
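    A minimal sketch of one possible approach, not mentioned in this thread, so treat it as an assumption to verify for your version: size a KEEP buffer pool and assign the join indexes to it, leaving the tables in the default pool.
    ALTER SYSTEM SET db_keep_cache_size = 2G;
    ALTER INDEX my_schema.my_join_ix STORAGE (BUFFER_POOL KEEP);
    The schema, index name and pool size are placeholders; index blocks read through these indexes are then cached in the KEEP pool while the table blocks continue to age out of the default buffer cache.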

  • Some music files do not show up in google play music app library.  I did clear cache/data and restarted phone.  The music is stored on the SD card.  Most of the music in the library is in the same folder on the sd card.  I can play the song from file mana

    some music files do not show up in google play music app library.  I did clear cache/data and restarted phone.  The music is stored on the SD card.  Most of the music in the library is in the same folder on the sd card.  I can play the song from file manager, but it still is not in the music library in play music.

    Cyndi6858, help is here! We'd be happy to help figure this out. Just to be sure though, the Droid Maxx should not have an SD card. Is this the Droid Razr Maxx? How did you add the music to the device? Are you able to see the files and folders located on the SD card or device when plugged in?
    Thanks,
    MichelleH_VZW
    Follow us on Twitter @VZWSupport

  • Separate table and index data in RAC database

    Hi Experts,
    Our database is an Oracle 11g RAC database, and I need your expertise on this:
    Do we need to keep table and index data in two different tablespaces from a performance perspective in a RAC database too?
    Please share your practical experience. Thanks in advance.
    Regards
    Richard

    g777 wrote:
    In my opinion, if there is striping implemented then performance shouldn't degrade even if the index and table blocks are in one tablespace.
    Exactly. Striping at the tablespace level is NOT a good idea, as a tablespace is a logical storage device; it is very difficult to stripe comprehensively/correctly at that level, if not impossible.
    Striping is a function of the actual storage system and needs to happen at the physical level - a proper RAID0 implementation.
    So the question about multiple tablespaces for a performance increase should not be about striping, but about issues such as data management, block sizes, transportable tablespaces and so on.
    Thus my question (to the OP): what performance problems are expected, and are they relevant to the number of tablespaces?

  • Data movement from R/3 DB to APO DB and to Live cache

    Dear APO Experts,
    I have a few questions on liveCache and how it works. I understand that liveCache is a memory-resident database. Below are my questions.
    1) What sort of data moves from the R/3 DB to the APO DB and then to the liveCache DB?
    2) When data moves from the APO DB to liveCache, for how long does that data stay in liveCache?
    3) How does data get pulled into liveCache from the APO DB?
    4) Why do we require liveCache logs if data never gets committed to hard disk?
    5) Do we ever add a datafile to the liveCache DB?
    Any info that you provide on this will be really helpful to me.
    Thanks,
    Chetan

    Hello Chetan,
    As you know, it's the MaxDB/liveCache forum.
    What is the version of your system?
    There is documentation available at:
       SAP liveCache technology
    < Please review the document "What is SAP liveCache technology?" >
    SAP documents at service.sap.com/scm -> Technology:
    "liveCache overview" and "Integration overview"
    For SAP liveCache documentation also see SAP Note 767598.
    Go to the SAP link Best Practices for Solution Management: mySAP SCM at SAP liveCache technology ...
    and review the document "Manage APO Core Interface in SAP APO (3.x) / mySAP SCM (4.x/5.0)".
    1. As you saw in the reference documents, the data is transferred from the connected R/3 system to APO.
    Transactional data can be uploaded to the liveCache if you have downloaded it to the APO database cluster tables first. These procedure steps run during a system upgrade (for example, from SCM 5.0 to SCM 7.0) - report /SAPAPO/OM_LC_UPGRADE_70 steps. The same applies if you want to migrate the
    liveCache to another operating system or convert your system to Unicode,
    and therefore want to back up the liveCache data first so that you
    can reload it into the liveCache afterwards - see SAP Note 632357.
    2. Could you elaborate on this question? Could you give examples?
    3. "And how does data get pulled into liveCache from the APO DB"
        The data is not pulled from the APO DB into liveCache.
        When you change APO data in the system, the LCA procedures are called from ABAP. The objects stored in the class containers in liveCache can be accessed and manipulated only via LCA routines. The registration of the LCA routines is done automatically when the liveCache is started by LC10. The LCA procedures in the liveCache are written in C++
    and shipped to customers as binary LCA libraries (LCA build) together with the liveCache.
    < See more details in SAP Note 824489, or 1278897 as of SCM 7.0 >
    4. See the SAP notes:
                   869267     FAQ: SAP MaxDB LOG area
                 1377148     FAQ: SAP MaxDB backup/recovery
    5. You could run the Quick Sizer and estimate how much data you are planning to have in liveCache.
         The amount of data can also increase as new data is created in APO by users,
         or for other reasons.
         In general you will add a data volume to solve or prevent a DB_FULL issue;
         see "17. What do I do if the data area is full?" in SAP Note 846890.
    Thank you and best regards, Natalia Khlopina

  • How to import data into one tablespace and indexes into another tablespace

    I have imported a dump from a database in Oracle 10g as:
    c:> imp userid=system/password full=y file=d:\ful.dmp log=d:\full.log
    Now I want to import the table data into tablespace datatb and the indexes into tablespace indextb. How can I do this job?
    Thanks

    After importing the database you can move the indexes to the other tablespace by rebuilding them:
    c:>sqlplus /nolog
    SQL> conn /as sysdba
    connected
    SQL> spool c:\indx_rbld.log
    SQL> select 'alter index '||owner||'.'||index_name||' rebuild online parallel tablespace <tablespace_name> nologging;' from dba_indexes where owner = '<USERNAME>';
    SQL> spool off
    SQL> @c:\indx_rbld.log
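    For reference, a line in the spooled rebuild script would look something like this (the owner, index and tablespace names are only examples):
    alter index SCOTT.EMP_NAME_IX rebuild online parallel tablespace INDEXTB nologging;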
    Hope the following link will help you:
    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:901906930328
    Message was edited by:
    Santosh Kumar

  • Importing tables into a data tablespace and indexes into an index tablespace

    Hi
    I want to import data into a new schema, storing the tables in a data tablespace and the indexes in an index tablespace. Can anyone tell me how this is possible?

    I want to import data into a new schema, storing the tables in a data tablespace and the indexes in an index tablespace. Can anyone tell me how this is possible?
    imp userid=user/passwd show=y indexfile=import.sql indexes=n full=y
    imp userid=user/passwd show=y indexfile=import2.sql full=y
    Edit import.sql and import2.sql to modify the tables' tablespace and the indexes' tablespace.
    Execute the import.sql script in the database; this will create the tables in their respective tablespaces.
    imp userid=user/passwd full=y ignore=y indexes=n constraints=y - to import just the data, since the tables have already been created.
    imp userid=user/passwd full=y ignore=y rows=n - to import just the indexes, since the tables and data have already been imported.

  • Force Crawler in Maint Mode to Ignore Last Modified Data and Re-Index All

    We are running SES 10.1.8.4 with the Siebel 7.8 data source patches 8533402 and 8624308 to index documents (PDF, XLS, DOC, PPT) via RMI and Decompression Tool in Siebel. This data source is virtually identical to the database data source.
    The crawler will not "fail" if the actual indexing of the attachment file itself fails. For example, if RMI is down on the Siebel server and the crawler runs, SES will index the document metadata returned in the SQL but not the document. If the client then starts the RMI utility and re-runs the crawler, the document that was not indexed will not be re-indexed because the data source's last crawl time is used against the data sources last modified date attribute.
    Here is the pseudo SQL for the initial crawler query in maintenance mode....
    SELECT .... FROM MY_VIEW WHERE LASTMODIFIEDATE > TIME OF LAST DATA SOURCE CRAWL
    We have found a way to update the data source's last crawl time (DS_LAST_CRAWL) in the EQ$_DATA_SOURCE table. For now, we simply are using SYSDATE - 1.
    This will allow the missed document to be returned in the initial query.
    However, it's the crawler's document-level last modified date check within the crawler that is preventing the re-index. Since the document's last modified date is the same as what SES has, SES skips over it thinking it has not changed. True, it has not changed but it was missed during the last re-crawl and we want it to be indexed again.
    The client does NOT want to perform a full re-index because of the anticipated volume of attachments.
    Is there any way to tell the crawler to ignore the document-level last modified date and re-index everything returned from the initial query? Perhaps one of the other columns in the EQ$_DATA_SOURCE (DS_STATUS, DS_CRAWLING_MODE) holds the key?
    Thanks!

    I think it might be possible to set ENQUEUE_STATUS = 'Y' in the table EQ_TEST.EQ$URL. You may need to call eq_adm.use_instance(1) before doing so.
    However I'm not sure of the full implications of doing this - DON'T DO IT ON A PRODUCTION SYSTEM without carefully testing on a development system.
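    For illustration only, the statements being discussed might look roughly like this (the predicates are placeholders rather than tested values, and as noted above this should only be tried on a development system):
    SQL> exec eq_adm.use_instance(1)
    SQL> update eq$_data_source set ds_last_crawl = sysdate - 1 where <predicate identifying your data source>;
    SQL> update eq$url set enqueue_status = 'Y' where <predicate identifying the documents to re-crawl>;
    SQL> commit;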

  • Why are date and time strings lost when indexing an array in a for loop

    Hi, 
    I have an application where I'm reformatting data from a spreadsheet for graphical display on a LabVIEW dashboard.
    The original spreadsheet has date and time values in separate columns, and I'm merging them and converting them to a timestamp value. But something's not working. Does anyone know why the string value is lost when the array is indexed in this for loop?
    Attachments:
    failed array index.png (142 KB)

    Can you attach your VI with some typical values? (e.g. create an indicator on the 2D array, run your VI, then turn the indicator (now containing data) into a diagram constant). Place the FOR loop related code and that diagram constant into a new VI and attach it here.
    How many times does the FOR loop run? Could it be that the last element of each 1D array is an empty string? (unless you put a wait inside the FOR loop, you'll never see the other elements in the probe)
    LabVIEW Champion. Do more with less code and in less time.
