Loading performance - help needed

Hi all,
I have a query regarding loading performance. These are the ways we can improve it:
1. Load master data before transaction data
2. Packet sizing
3. Multiple initializations
4. Deleting indexes
Can anybody help with the rest of the procedures to improve loading performance?
thanks & regards
KK

Hi,
When loading transaction data for the first time, you can activate number range buffering for your dimensions as well:
Go to SE37 and execute function module RSD_CUBE_GET to get the number ranges of the dimensions:
I_INFOCUBE: <yourcube techname>
OBJVERS: A
I_BYPASS_BUFFER: X
I_WITH_ATR_NAV
Go to the far right of the return table E_T_DIME and note the NOBJECT value (BIDx) for the number range of each of your DIMs.
Go to transaction SNRO, enter your BIDx number range object in change mode, choose Edit / Set-up buffering / Main memory, and enter something around 50,000 as the number of numbers to buffer.
Doing this for dimensions loaded with a high number of records will definitely boost your performance.
Do NOT buffer the data package dimension in any case!
You can do the same for master data... (BIMx)
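If you prefer to script this lookup instead of stepping through SE37, a minimal ABAP sketch could look like the one below. It only uses the function module and parameter names listed above; the line type of E_T_DIME (RSD_S_DIME here) and its field names are assumptions you should verify in SE37:

" Sketch: list each dimension of an InfoCube with its number range
" object (BIDx) so you know what to buffer in SNRO.
DATA: lt_dime TYPE STANDARD TABLE OF rsd_s_dime, " assumed line type of E_T_DIME
      ls_dime TYPE rsd_s_dime.

CALL FUNCTION 'RSD_CUBE_GET'
  EXPORTING
    i_infocube      = 'YOURCUBE'  " <yourcube techname>
    objvers         = 'A'         " active version (parameter names as shown above)
    i_bypass_buffer = 'X'
  IMPORTING
    e_t_dime        = lt_dime.

LOOP AT lt_dime INTO ls_dime.
  " NOBJECT holds the BIDx number range object to buffer in SNRO
  WRITE: / ls_dime-dimension, ls_dime-nobject.
ENDLOOP.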
hope this helps...
Olivier.

Similar Messages

  • QUERY PERFORMANCE AND DATA LOADING PERFORMANCE ISSUES

    What are the query performance issues we need to take care of? Please explain and let me know the T-codes. It's urgent.
    What are the data loading performance issues we need to take care of? Please explain and let me know the T-codes. It's urgent.
    Will reward full points.
    Regards
    Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows this option.
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8)Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9)Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much of a help when accessing data. In this case it is better to create secondary indexes with selection fields on the associated table using the ABAP Dictionary to improve selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time. (A short ABAP sketch of this buffered-lookup pattern follows this list.)
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
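    To make tip 11 concrete, here is a minimal, hypothetical sketch of the buffered-lookup pattern inside a transfer/update rule routine. The table MARA and the fields MATNR/MATKL are illustrative only, and DATA_PACKAGE is assumed to carry a MATNR field and a MATKL field to fill:

    " Anti-pattern: a SELECT SINGLE inside the loop hits the database once per record.
    " Better: read all needed rows once, then do the lookups in memory.
    TYPES: BEGIN OF ty_mat,
             matnr TYPE mara-matnr,
             matkl TYPE mara-matkl,
           END OF ty_mat.
    DATA: lt_mat TYPE STANDARD TABLE OF ty_mat,
          ls_mat TYPE ty_mat.
    FIELD-SYMBOLS: <fs> LIKE LINE OF DATA_PACKAGE.

    IF DATA_PACKAGE[] IS NOT INITIAL.
      " One database round trip for the whole data package
      SELECT matnr matkl FROM mara
        INTO TABLE lt_mat
        FOR ALL ENTRIES IN DATA_PACKAGE
        WHERE matnr = DATA_PACKAGE-matnr.
      SORT lt_mat BY matnr.
    ENDIF.

    LOOP AT DATA_PACKAGE ASSIGNING <fs>.
      " In-memory binary search instead of a SELECT SINGLE per record
      READ TABLE lt_mat INTO ls_mat
        WITH KEY matnr = <fs>-matnr BINARY SEARCH.
      IF sy-subrc = 0.
        <fs>-matkl = ls_mat-matkl.
      ENDIF.
    ENDLOOP.

    The gain comes from doing one database round trip per data package instead of one per record.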
    Hope it Helps
    Chetan
    @CP..

  • FORMS CRASHES (FRM-92101) ON AS 10.1.2.0.2 DURING LOAD PERFORMANCE TESTING

    Hiya
    We have been doing load performance testing using the testing tool QALoad on our Forms 10g application. After about 56 virtual users (sessions) have logged in to our application, if a new user tries to log in, Forms crashes. As soon as we encounter the FRM-92101 error, no new Forms sessions are able to start.
    The load testing software starts up each process very quickly, about every 10 seconds.
    The very first form that appears is the login form of our application, so we get the FRM-92101 error message before the login screen appears.
    However, users who have already logged in to our application are able to carry on with their tasks.
    We are using Application Server 10g 10.1.2.0.2. I have checked the status of the Application Server through the Oracle Enterprise Manager Console. The OC4J instance is up and running. Also, the server's configuration is pretty good: it is running on 2 CPUs (AMD Opteron 3GHz) and has 32GB of memory. The memory used by those 56 sessions is less than 3GB.
    The Application Server is running on Microsoft Windows Server 2003 64-bit Enterprise Edition.
    Any help will be much appreciated.
    Cheers
    Mayur

    Hi Shekhawat
    In Windows Registry go to
    HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Session Manager\SubSystems
    In the right-hand panel you will find a String Value named Windows. Double-click it, and in the pop-up window you will see a string similar to the following one:
    %SystemRoot%\system32\csrss.exe ObjectDirectory=\Windows SharedSection=1024,20480,768 Windows=On SubSystemType=Windows ServerDll=basesrv,1 ServerDll=winsrv:UserServerDllInitialization,3 ServerDll=winsrv:ConServerDllInitialization,2 ProfileControl=Off MaxRequestThreads=16
    Now if you read it carefully in the above string, you will find this parameter
    SharedSection=1024,20480,768
    Here SharedSection specifies the system and desktop heaps using the following format:
    SharedSection=xxxx,yyyy,zzzz
    The default values are 1024,3072,512
    All the values are in Kilobytes (KB)
    xxxx = System-wide Heapsize. There is no need to modify this value.
    yyyy = IO Desktop Heapsize. This is the heap for memory objects in the IO Desktop.
    zzzz = Non-IO Desktop Heapsize. This is the heap for memory objects in the Non-IO Desktop.
    On our server the values were as follows :
    1024,20480,768
    We changed the size of the Non-IO desktop heap from 768 to 5112. With 5112 KB we managed to test our application for up to 495 virtual users.
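    In other words, after the change the Windows string on our server reads exactly as before except for the last SharedSection value: SharedSection=1024,20480,5112.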
    Cheers
    Mayur

  • Data Load scenario- help needed

    Hi gurus,
    Please help me with the below scenarios for the LO mechanism:
    Rec 1 entered in R3 between T1 and T2 (during R3 setup table population). When does Rec1 get into BW?
    Rec 2 entered in R3 between T2 and T3 (after setup table population but before initialization loads). When does Rec 2 get into BW?
    Rec 3 entered in R3 between T3 and T4 (during initial loads in BW). When does Rec 3 get into BW?
    Rec 4 entered in R3 after initialization is completed after T4. When does Rec 4 get into BW?
    Timeline: T1 --(Rec 1)-- T2 --(Rec 2)-- T3 --(Rec 3)-- T4 --(Rec 4)-->
    T1 – T2 – Setup Tables populated
    T3 – T4 – Initial Loads performed
    T4 onwards – Delta Loads performed

    Hi,
    Scenario 1: Rec 1 entered in R3 between T1 and T2 (during R3 setup table population). When does Rec 1 get into BW?
    During setup table filling the V3 jobs are descheduled, and you need to lock all users (no user can post documents).
    Scenario 2: Rec 2 entered in R3 between T2 and T3 (after setup table population but before initialization loads). When does Rec 2 get into BW?
    Here the setup table has just been filled but the init has not yet been performed, so the V3 jobs are still descheduled; the postings that have happened sit in the application tables.
    Scenario 3: Rec 3 entered in R3 between T3 and T4 (during initial loads in BW). When does Rec 3 get into BW?
    Here the init is running; it is downtime for R/3 due to the heavy load processing to BW, so the V3 jobs are still descheduled and the postings that have happened still sit in the application tables.
    Scenario 4: Rec 4 entered in R3 after initialization is completed after T4. When does Rec 4 get into BW?
    After the init we schedule the V3 jobs again, so all the postings (posted earlier or being posted now) are transferred from the application tables to the delta queue, depending on the update mode (direct/queued/unserialized).
    Hope this helps you!
    cheers,
    Swapna.G
    Message was edited by:
            swapna gollakota

  • Data Load performance in BI7.0

    Hi,
    I have a generic question regarding BI7.0.
    From the perspective of data load performance what are the features  that BI7.0 has compared to earlier versions.
    Thanks in advance,,
    Rama Murthy

    Hi,
    In BI, the entry layer is the PSA, and it is mandatory to maintain the PSA when loading data into BI from any source. In BI the PSA can be made typed or untyped.
    The InfoPackage functionality is reduced: it loads data only up to the PSA.
    The DTP uploads the data between BI objects within BI. Transformations replace the update rules and transfer rules.
    DTPs and transformations remove the data mart interface between BI objects.
    If no transformation of the data is required, you can load data directly to the target without maintaining an InfoSource.
    All these properties are available because of the new concept and new object type DataSource, i.e. the BI DataSource (object type RSDS).
    Depending on the situation the InfoSource may or may not be mandatory, but the PSA is always mandatory in BI; the rest is the same as in 3.x.
    Hope this helps in solving your problem.
    Regards
    Ramakrishna Kamurthy

  • SCD 2 load performance with 60 million records

    Hey guys!
    I'm wondering what would be the load performance for a type 2 SCD mapping based on the framework presented in the transformation guide (page A1-A20). The dimension has the following characteristics:
    60 million records
    50 columns (including 17 to be tracked for changes)
    Has anyone come across a similar case?
    Mark or Igor- Is there any benchmark available on SCD 2 for large dimensions?
    Any help would be greatly appreciated.
    Thanks,
    Rene

    Rene,
    It's really very difficult to guesstimate the loading time for a similar configuration. Too many parameters are missing, especially hardware. We are in the process of setting up some real benchmarks later this year - maybe you can give us some interesting scenarios.
    On the other hand, 50-60 million records is not that many these days... so I personally would consider anything more than several hours (on half-decent hardware) as too long.
    Regards:
    Igor

  • In SAP BW, Errors related to authorizations,loading,performance,locking

    Hi gurus,
    Can anyone send the documents regarding the errors in SAP BW related to authorizations, loading, performance, locking and retraction? Please send them ASAP.

    Hi Sudheer,
    Check this link, which explains the document on authorization:
    <removed link farm>
    Hope it helps you.
    Regards
    chandra sekhar
    Edited by: Siegfried Szameitat on Nov 3, 2008 11:01 AM
    posting link farms is against the rules.

  • Unable to Load Performance Pack / No muxer in java.library.path

    When I start weblogic (version 7.0, running under jdk 1.4.1 on Win XP),
    I'm getting the following error:
    <Sep 30, 2002 9:03:02 AM CDT> <Error> <socket> <000433> <Unable to load performance pack, using Java I/O instead.
    java.lang.UnsatisfiedLinkError: no muxer in java.library.path
    at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1403)
    at java.lang.Runtime.loadLibrary0(Runtime.java:788)
    at java.lang.System.loadLibrary(System.java:832)
    at weblogic.socket.PosixSocketMuxer.<init>(PosixSocketMuxer.java:179)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:274)
    at java.lang.Class.newInstance0(Class.java:306)
    at java.lang.Class.newInstance(Class.java:259)
    at weblogic.socket.SocketMuxer.makeTheMuxer(SocketMuxer.java:54)
    at weblogic.socket.SocketMuxer.getMuxer(SocketMuxer.java:37)
    at weblogic.t3.srvr.ListenThread.run(ListenThread.java:199)
    >
    Can anyone tell me what this error means, and if there is a fix or
    workaround? Any help will be appreciated!
    Thanks,
    Tim Perrigo

    This happens with WebLogic every time the Windows version changes: under JDK 1.3.1 the system property "os.name" is "Windows 2000" when running on XP, and under 1.4 it is "Windows XP". Since the Windows names are hardcoded somewhere in WebLogic, and it doesn't know anything about Windows XP, it thinks it is running on Unix and attempts to load the POSIX performance pack (as you can see in the exception stack trace).
    You can fix this by adding -Dos.name="windows 2000" to the command line.
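    For example (a sketch only; the exact JVM options in your start script will differ): java -Dos.name="windows 2000" ... weblogic.Server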
    "Tim Perrigo" <[email protected]> wrote in message
    news:[email protected]..
    When I start weblogic (version 7.0, running under jdk 1.4.1 on Win XP),
    I'm getting the following error:
    <Sep 30, 2002 9:03:02 AM CDT> <Error> <socket> <000433> <Unable to load
    performance pack, using Java I/O instead.
    java.lang.UnsatisfiedLinkError: no muxer in java.library.path
    at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1403)
    at java.lang.Runtime.loadLibrary0(Runtime.java:788)
    at java.lang.System.loadLibrary(System.java:832)
    at weblogic.socket.PosixSocketMuxer.<init>
    (PosixSocketMuxer.java:179)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
    Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance
    (NativeConstruct
    orAccessorImpl.java:39)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance
    (DelegatingC
    onstructorAccessorImpl.java:27)
    at java.lang.reflect.Constructor.newInstance
    (Constructor.java:274)
    at java.lang.Class.newInstance0(Class.java:306)
    at java.lang.Class.newInstance(Class.java:259)
    at weblogic.socket.SocketMuxer.makeTheMuxer(SocketMuxer.java:54)
    at weblogic.socket.SocketMuxer.getMuxer(SocketMuxer.java:37)
    at weblogic.t3.srvr.ListenThread.run(ListenThread.java:199)
    >
    Can anyone tell me what this error means, and if there is a fix or
    workaround? Any help will be appreciated!
    Thanks,
    Tim Perrigo--
    Dimitri

  • Unable to load performance pack, using Java I/O on WL60, sp2

    Dear friends,
    I am seeking help from you. When we start WL60 SP2 on Sun Solaris 5.6, we got
    the following exception:
    <Jul 31, 2001 5:39:53 PM EDT> <Error> <Performance Pack> <Unable to load performance pack, using Java I/O.
    java.lang.UnsatisfiedLinkError: getFdLimit
    at weblogic.socket.PosixSocketMuxer.getFdLimit(Native Method)
    at weblogic.socket.PosixSocketMuxer.<init>(PosixSocketMuxer.java:104)
    at java.lang.Class.newInstance0(Native Method)
    at java.lang.Class.newInstance(Class.java:237)
    at weblogic.socket.SocketMuxer.makeTheMuxer(SocketMuxer.java:128)
    at weblogic.socket.SocketMuxer.getMuxer(SocketMuxer.java:83)
    at weblogic.t3.srvr.ListenThread.run(ListenThread.java:224)
    >
    However, the server itself started, and our applications run OK
    (at least so far). But this exception appears every time on some user accounts.
    I was wondering what causes this exception. Some user accounts on the same machine
    don't have this problem.
    I am also wondering if it will cause a performance problem when the traffic is high.
    We already applied the patches.
    Any hints and suggestions are welcome.
    Thanks in advance.
    -Ju

    Dear Deyan,
    Thanks for your help. We do have $WEBLOGIC_HOME/lib/solaris in LD_LIBRARY_PATH,
    which is set when running ". setEnv.sh" before startWebLogic.sh.
    We failed on one patch: 105210-27, for some reason.
    The strange thing is: on the same machine, all WL60 instances running under user
    accounts under /users/developers/ have no such error. But it happens under some
    accounts, like accounts under /export/home/, etc. /users/developers is mounted from
    another physical machine.
    -Ju
    "Deyan D. Bektchiev" <[email protected]> wrote:
    You should have the $WEBLOGIC_HOME/lib/solaris directory in your LD_LIBRARY_PATH so that the server can load the performance pack (which is a shared library called libmuxer.so).
    If it is present, then do an ldd libmuxer.so and you will see if any libraries that it depends on are missing.
    Also make sure you have all of the required patches for Solaris 2.6 installed.
    --dejan

  • Two issues: activation of transfer rules and data load performance

    hi,
    I have two problems I face very often and would like to get some more info on that topics:
    1. Transfer rules activation. I have just finished transporting my cubes, ETL, etc. to the productive system and started filling the cubes with data. Very often during data load it turns out that the transfer rules need to be activated, even though I transported them active and (I think) did not change anything after the transport. Then I again create transfer rule transports on dev, transport the changes to prod, and have to execute the data load again.
    It is very annoying. What do you suggest doing about this problem? Activating all transfer rules again before executing the process chain?
    2. Differences between dev and prod systems in data load time.
    On the dev system (a copy of production made about 8 months ago) I checked how long it takes to extract data from the source system: it was about 0.5 h for 50,000 records. But when I executed the load on production it was 2 h for 200,000 records, so it was twice as slow as dev!
    I thought it would be at least as fast as the dev system. What can influence data load performance and how can I predict it?
    Regards,
    Andrzej

    Aksik
    1. How frequently does this activation problem occur? If it is a one-time issue, replicate the DataSource and activate the transfer structure. (In general, as you know, activation of the transfer structure should happen automatically after transport of the object.)
    2. One reason for the difference in time is environmental: as you know, in a production system many jobs run at the same time, so system performance is obviously slower compared to the dev system. In your case both systems are performing equally: in dev, 50,000 records took half an hour; in production, 200,000 records took 2 hrs. There are more records in the production system, so it took proportionally longer. If it is really causing problems then you have to do some performance activities.
    Hope this helps
    Thanks
    Sat

  • How to improve the load performance while using Datasources for the Invoice

    HI All,
    How can I improve the load performance while using the DataSources for the invoice? Actually my invoice load (approx. 0.4 M records) is taking a very long time, nearly 16 to 18 hrs, to update data from R/3 to 0ASA_DS01.
    If I load through a flat file, it loads within ~20 min for the same amount of data.
    Please suggest how to improve the load performance.
    PS: I have done the InfoPackage settings as per the OSS note.
    Regards
    Srininivasarao.Namburi.

    Hi Srinivas,
    Please refer to my blog posting /people/divyesh.jain/blog/2010/07/20/package-size-in-spend-performance-management-extraction, which gives the details about the package size setting for extractors. I am sure that will be helpful in your case.
    Thanks,
    Divyesh
    Edited by: Divyesh Jain on Jul 20, 2010 8:47 PM

  • How to improve query & loading performance.

    Hi All,
    How to improve query & loading performance.
    Thanks in advance.
    Rgrds
    shoba

    Hi Shoba
    There are a lot of things you can do to improve query and loading performance.
    Please refer to OSS Note 557870: Frequently asked questions on query performance.
    also refer to
    weblogs:
    /people/prakash.darji/blog/2006/01/27/query-creation-checklist
    /people/prakash.darji/blog/2006/01/26/query-optimization
    performance docs on query
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/c8c4d794-0501-0010-a693-918a17e663cc
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/064fed90-0201-0010-13ae-b16fa4dab695
    These are the FAQ points from the OSS note on query performance:
    1. What kind of tools are available to monitor the overall Query Performance?
    1. BW Statistics
    2. BW Workload Analysis in ST03N (Use Export Mode!)
    3. Content of Table RSDDSTAT
    2. Do I have to do something to enable such tools?
    Yes, you need to turn on the BW Statistics:
    RSA1, choose Tools -> BW statistics for InfoCubes
    (Choose OLAP and WHM for your relevant Cubes)
    3. What kind of tools is available to analyze a specific query in detail?
    1. Transaction RSRT
    2. Transaction RSRTRACE
    4. Do I have an overall query performance problem?
    i. Use ST03N -> BW System load values to recognize the problem. Use the number given in the table 'Reporting - InfoCubes: Share of total time (s)' to check if one of the columns %OLAP, %DB, %Frontend shows a high number for all InfoCubes.
    ii. You need to run ST03N in expert mode to get these values.
    5. What can I do if the database proportion is high for all queries?
    Check:
    1. If the database statistic strategy is set up properly for your DB platform (above all for the BW specific tables)
    2. If database parameter set up accords with SAP Notes and SAP Services (EarlyWatch)
    3. If Buffers, I/O, CPU, memory on the database server are exhausted?
    4. If Cube compression is used regularly
    5. If Database partitioning is used (not available on all DB platforms)
    6. What can I do if the OLAP proportion is high for all queries?
    Check:
    1. If the CPUs on the application server are exhausted
    2. If the SAP R/3 memory set up is done properly (use TX ST02 to find bottlenecks)
    3. If the read mode of the queries is unfavourable (RSRREPDIR, RSDDSTAT, Customizing default)
    7. What can I do if the client proportion is high for all queries?
    Check whether most of your clients are connected via a WAN connection and the amount of data which is transferred is rather high.
    8. Where can I get specific runtime information for one query?
    1. Again you can use ST03N -> BW System Load
    2. Depending on the time frame you select, you get historical data or current data.
    3. To get to a specific query you need to drill down using the InfoCube name
    4. Use Aggregation Query to get more runtime information about a single query. Use tab All data to get to the details. (DB, OLAP, and Frontend time, plus Select/ Transferred records, plus number of cells and formats)
    9. What kind of query performance problems can I recognize using ST03N
    values for a specific query?
    (Use Details to get the runtime segments)
    1. High Database Runtime
    2. High OLAP Runtime
    3. High Frontend Runtime
    10. What can I do if a query has a high database runtime?
    1. Check if an aggregate is suitable (use All data to get values "selected records to transferred records", a high number here would be an indicator for query performance improvement using an aggregate)
    2. Check if the database statistics are up to date for the Cube/Aggregate; use TX RSRV output (use the database check for statistics and indexes).
    3. Check if the read mode of the query is unfavourable - Recommended (H)
    11. What can I do if a query has a high OLAP runtime?
    1. Check if a high number of cells is transferred to the OLAP processor (use "All data" to get the value "No. of Cells").
    2. Use RSRT technical Information to check if any extra OLAP-processing is necessary (Stock Query, Exception Aggregation, Calc. before Aggregation, Virtual Char. Key Figures, Attributes in Calculated Key Figs, Time-dependent Currency Translation) together with a high number of records transferred.
    3. Check if a user exit Usage is involved in the OLAP runtime?
    4. Check if large hierarchies are used and the entry hierarchy level is as deep as possible. This limits the levels of the hierarchy that must be processed. Use SE16 on the inclusion tables and use the List of Value feature on the column successor and predecessor to see which entry level of the hierarchy is used.
    5. Check if a proper index on the inclusion table exists.
    12. What can I do if a query has a high frontend runtime?
    1. Check if a very high number of cells and formatting are transferred to the Frontend (use "All data" to get value "No. of Cells") which cause high network and frontend (processing) runtime.
    2. Check if the frontend PCs are within the recommendations (RAM, CPU MHz).
    3. Check if the bandwidth for WAN connection is sufficient
    and the some threads:
    how can i increse query performance other than creating aggregates
    How to improve query performance ?
    Query performance - bench marking
    may be helpful
    Regards
    C.S.Ramesh
    [email protected]

  • How to improve loading performance

    Hi,
       How can I improve loading performance?

    Hi Prasanth,
    You have to take a few measures to optimize load performance. A few are listed below.
    -> Consider packet sizing
    -> Delete indexes
    -> Table partitioning
    -> Data model
    -> Load sequencing
    -> Parallel processing
    Go through the link
    Business Intelligence Performance Tuning [original link is broken]
    http://help.sap.com/saphelp_nw2004s/helpdata/en/06/b5f8926ba22b45bc9eaa589f1c835b/frameset.htm
    Hope it helps you and suffices.
    Cheers
    SRS

  • Load performance Write-Optimized DSO

    Dear all,
    I'm looking for some practical tips concerning improving load performance from the PSA to a write-optimized DSO (41M records) via a DTP.
    All parameters that could be tweaked have been checked (e.g. packet size, batch jobs, uniqueness of data flag, etc.) and optimized.
    However this load stays extremely slow (init load from source to BW is faster).
    The BW system runs on SP18.
    Please share all your tips our recommendations for us to solve this major issue.
    Your help is much appreciated!
    Thanks
    JvB

    Hi JvB,
    Here are some options I can think of and let you know if I remember something:
    1) Increase the number of parallel processes in the DTP, i.e. DTP -> Goto -> Settings for Batch Monitor -> Number of parallel processes, to 4 or 5.
    2) Note 409641 - Examples of packet size dependency on ROIDOCPRMS
    The general formula for the data transfer is:
    packet size = MAXSIZE * 1000 / transfer structure size,
    but not more than MAXLINES;
    i.e. if MAXLINES is smaller than the result of the formula, MAXLINES rows per packet are transferred into BW.
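    A quick worked example (illustrative numbers only): with MAXSIZE = 20,000 KB and a transfer structure of 1,000 bytes per record, the formula gives 20,000 * 1000 / 1,000 = 20,000 records per packet; if MAXLINES were 15,000, only 15,000 records per packet would be transferred.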
    3) Check if there are any locks or deadlocks in ST04, check System Analysis SM21 (it will show you all the details), and check for short dumps in ST22; there may be some.
    4) Check and analyze the DSO in RSRV. If there are any issues, repair them.
    5) If you are loading huge volumes of data, split the records by filters in the DTP, e.g. plant, calendar year or calendar month selections.
    Also try partitioning, creating indexes/secondary indexes, or archiving old data.
    Check these links for further reference:
    /people/martin.mouilpadeti/blog/2007/08/24/sap-netweaver-70-bi-new-datastore-write-optimized-dso
    /message/2987899#2987899 [original link is broken]
    Good Luck.
    Regards
    Satish Arra
    Edited by: Satish Arra on Feb 7, 2009 9:28 PM

  • BI Statistics Highly Aggregated cube 0TCT_CA1 poor load performance

    BI Statistics Highly Aggregated cube 0TCT_CA1 load from DataSource 0TCT_DSA1 has very poor load performance.
    In our DEV BW it ran 8 min for 12,000 records. We have even worse performance in the test box.
    Initial loads then run very long, since DataSource 0TCT_DSA1 does not allow us to load by calendar month.
    If you have seen this issue, please let me know.
    Jay Roble
    General Mills

    Compressing the cube would not help, since the cube is empty and we are trying to load 90 days of history.
    The source table has an index on the timestamp field that the extractor uses in its delta loads.
    The loads run very slowly, even with the index dropped and no PSA.
    We know that in production there will be approx. 400,000 rows loaded and 14,400 added with daily delta loads, due to aggregation.
    So we are seeing slow delta loads in our QA testing.
    Not sure why the extractor can't just deliver the 14K aggregated rows vs. 400K.
    Note: DS 0TCT_DSA1 has no selection criteria when doing initial full loads.
