QUERY PERFORMANCE AND DATA LOADING PERFORMANCE ISSUES

What are the query performance issues we need to take care of? Please explain and let me know the transaction codes. It's urgent.
What are the data loading performance issues we need to take care of? Please explain and let me know the transaction codes. It's urgent.
Will reward full points.
REGARDS
GURU

BW Back end
Some Tips -
1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows this option.
5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
8)Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
You can upload data from a data target (InfoCube or ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
9)Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, the primary index is not much help when accessing data. In this case it is better to create secondary indexes with the selection fields on the associated table using the ABAP Dictionary, to improve selection performance.
10)Analyze upload times to the PSA and identify long-running uploads. When you extract data using the PSA method, data is written into PSA tables in the BW system. If your data volume is on the order of tens of millions of records, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations (see the first sketch after this list). When you use buffers or array operations, the system reads data from the database tables and stores it in memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of transformation-library formulas reduces performance; since these transformations are not compiled in advance, they are interpreted at run-time.
12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
13)Drop the indexes before uploading high-volume data into InfoCubes and regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling (see the second sketch after this list). You can also drop indexes manually in the Manage InfoCube screen in the Administrator Workbench.
14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
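To illustrate tip 11, here is a minimal, hedged sketch of replacing per-record single selects in a BW 3.x update-rule start routine with one array fetch and a buffered read. The lookup table ZMAT_ATTR and the fields MATNR, MATKL, MATERIAL and MATL_GROUP are hypothetical names used only for illustration; adapt them to your own objects.

  " Hedged sketch for tip 11: buffer the lookup data once per data package
  " instead of issuing one SELECT SINGLE per record.
  " ZMAT_ATTR, MATNR, MATKL, MATERIAL and MATL_GROUP are assumed names.
  TYPES: BEGIN OF TY_ATTR,
           MATNR(18) TYPE C,
           MATKL(9)  TYPE C,
         END OF TY_ATTR.
  DATA: LT_ATTR TYPE STANDARD TABLE OF TY_ATTR WITH HEADER LINE.

  " One array fetch for the whole data package
  IF NOT DATA_PACKAGE[] IS INITIAL.
    SELECT MATNR MATKL FROM ZMAT_ATTR
           INTO TABLE LT_ATTR
           FOR ALL ENTRIES IN DATA_PACKAGE
           WHERE MATNR = DATA_PACKAGE-MATERIAL.
    SORT LT_ATTR BY MATNR.
  ENDIF.

  LOOP AT DATA_PACKAGE.
    " Binary-search read from the memory buffer; no database access in the loop
    READ TABLE LT_ATTR WITH KEY MATNR = DATA_PACKAGE-MATERIAL
         BINARY SEARCH.
    IF SY-SUBRC = 0.
      DATA_PACKAGE-MATL_GROUP = LT_ATTR-MATKL.
      MODIFY DATA_PACKAGE.
    ENDIF.
  ENDLOOP.

This way the database is hit once per data package instead of once per record, and the READ ... BINARY SEARCH then works entirely in memory.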
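And to script tip 13 outside of the InfoPackage settings, index dropping and rebuilding can also be done in ABAP. This is only a hedged sketch: it assumes the function modules RSDU_INFOCUBE_INDEXES_DROP and RSDU_INFOCUBE_INDEXES_REPAIR with an I_INFOCUBE parameter are available in your release (verify them in SE37 first), and ZSALES01 is a made-up InfoCube name.

  " Hedged sketch for tip 13: drop the InfoCube's secondary indexes before a
  " mass load and rebuild them afterwards. Check the function modules and the
  " I_INFOCUBE parameter in SE37 for your BW release; ZSALES01 is hypothetical.
  DATA: LV_CUBE(30) TYPE C VALUE 'ZSALES01'.

  CALL FUNCTION 'RSDU_INFOCUBE_INDEXES_DROP'
    EXPORTING
      I_INFOCUBE = LV_CUBE.

  " ... trigger the data load here, e.g. via the InfoPackage or a process chain ...

  CALL FUNCTION 'RSDU_INFOCUBE_INDEXES_REPAIR'
    EXPORTING
      I_INFOCUBE = LV_CUBE.

In a process chain the same effect is normally achieved with the standard index deletion and index generation process types, so the ABAP route is only needed for custom scheduling programs.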
Hope it Helps
Chetan
@CP..

Similar Messages

  • Two issues: activation of transfer rules and data load performance

    hi,
    I have two problems I face very often and would like to get some more info on those topics:
    1. Transfer rules activation. I just finished transporting my cubes, ETL etc. to the productive system and started filling the cubes with data. Very often during data load it turns out that the transfer rules need to be activated, even though I transported them active and (I think) did not change anything after the transport. Then I again create transfer rule transports on dev, transport the changes to prod and have to execute the data load again.
    It is very annoying. What do you suggest doing about this problem? Activate all transfer rules again before executing the process chain?
    2. Differences between dev and prod systems in data load time.
    On the dev system (a copy of production made about 8 months ago) I checked how long it takes to extract data from the source system, and it was about 0.5 h for 50,000 records; but when I executed the load on production it was 2 h for 200,000 records, so it seemed much slower than dev!
    I thought it would be at least as fast as the dev system. What can influence data load performance and how can I predict it?
    Regards,
    Andrzej

    Aksik
    1. How frequently does this activation problem occur? If it is a one-time issue, replicate the DataSource and activate the transfer structure (but in general, as you know, activation of the transfer structure should happen automatically after transport of the object).
    2. One reason for the difference in time is environmental: as you know, in the production system many jobs run at the same time, so system performance will obviously be slower compared to the dev system. In your case both systems are actually performing equally. You said the dev system took half an hour for 50,000 records and production took 2 hours for 200,000 records, so there are more records in production and it therefore took longer. If it is really causing a problem, then you have to do some performance tuning activities.
    Hope this helps
    Thanks
    Sat

  • To improve data load performance

    Hi,
    The data is getting loaded into the cube. There are no routines in the update rules or transfer rules; direct mapping is done to the InfoObjects.
    But there is an ABAP routine written for 0CALDAY in the InfoPackage. Other than the code below, there is no ABAP code written anywhere. For 77 lakh (7.7 million) records it is taking more than 10 hours to load. Any possible solutions for improving the data load performance?
      DATA: L_IDX LIKE SY-TABIX.
      DATA: ZDATE LIKE SY-DATUM.
      DATA: ZDD(2) TYPE N.
      " Locate the existing CALDAY selection row in the range table
      READ TABLE L_T_RANGE WITH KEY
           FIELDNAME = 'CALDAY'.
      L_IDX = SY-TABIX.
    * +1 month: low end of the interval = first day of next month
      ZDATE = SY-DATUM.
      IF ZDATE+4(2) = '12'.
        ZDATE+0(4) = ZDATE+0(4) + 1.
        ZDATE+4(2) = '01'.
        ZDATE+6(2) = '01'.
        L_T_RANGE-LOW = ZDATE.
      ELSE.
        ZDATE+4(2) = ZDATE+4(2) + 1.
        ZDATE+6(2) = '01'.
        L_T_RANGE-LOW = ZDATE.
      ENDIF.
    * +3 months: high end of the interval = last day of the month three months ahead
      ZDATE = SY-DATUM.
      IF ZDATE+4(2) >= '10'.
        ZDATE+0(4) = ZDATE+0(4) + 1.
        ZDATE+4(2) = ZDATE+4(2) + 3 - 12.
        ZDATE+6(2) = '01'.
      ELSE.
        ZDATE+4(2) = ZDATE+4(2) + 3.
        ZDATE+6(2) = '01'.
      ENDIF.
      " Determine the number of days in the target month to get its last day
      CALL FUNCTION 'FIMA_END_OF_MONTH_DETERMINE'
        EXPORTING
          I_DATE                   = ZDATE
        IMPORTING
          E_DAYS_OF_MONTH          = ZDD.
      ZDATE+6(2) = ZDD.
      L_T_RANGE-HIGH = ZDATE.
      " Restrict 0CALDAY to the interval between LOW and HIGH
      L_T_RANGE-SIGN = 'I'.
      L_T_RANGE-OPTION = 'BT'.
      MODIFY L_T_RANGE INDEX L_IDX.
      P_SUBRC = 0.
    Thanks,
    rani

    I don't think this filter routine is causing the issue.
    Please implement general performance improvement methods.
    FAQ - The Future of SAP NetWeaver Business Intelligence in the Light of the NetWeaver BI & Business Objects Roadmap

  • How do we improve master data load performance

    Hi Experts,
    Could you please tell me how we identify master data load performance problems and what can be done to improve master data load performance?
    Thanks in Advance.
    Nitya

    Hi,
    - Alpha conversion is defined at the InfoObject level for objects with data type CHAR.
    A characteristic in SAP NetWeaver BI can use a conversion routine such as the ALPHA routine. A conversion routine converts data that a user enters (in the so-called external format) into an internal format before it is stored in the database.
    The most important conversion routine, due to its common use, is the ALPHA routine, which converts purely numeric user input like '4711' into '004711' (assuming that the characteristic value is 6 characters long). If a value is not purely numeric, like '4711A', it is left unchanged. (A minimal sketch follows after this list.)
    We have found that in customer systems there are quite often characteristics using a conversion routine like ALPHA that have values in the database which are not in internal format, e.g. one might find '4711' instead of '004711', or even '04711' or ' 4711' (with a leading space).
    This can result in data inconsistencies, also for query selection; i.e. if you select '4711', this is converted into '004711', so '04711' will not be selected.
    - The check for referential integrity occurs for transaction data and master data if they are flexibly updated. You determine the valid InfoObject values.
    - SID generation is a must when loading transaction data with respect to master data, in order to read the master data at the BEx level.
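    As a minimal sketch of what the ALPHA routine does, the standard function module CONVERSION_EXIT_ALPHA_INPUT can be called directly (the variable names here are just for illustration):

      " Illustration only: external '4711' becomes internal '004711' for a
      " 6-character field; non-numeric input would be left unchanged.
      DATA: LV_EXT(6) TYPE C VALUE '4711',
            LV_INT(6) TYPE C.

      CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
        EXPORTING
          INPUT  = LV_EXT
        IMPORTING
          OUTPUT = LV_INT.
      " LV_INT now holds '004711'. Records loaded with unconverted values such
      " as '4711' or '04711' are exactly what causes the inconsistencies above.

    Correcting the load so that values always arrive in internal format avoids the selection problems described in the ALPHA point above.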
    Regards,
    rvc

  • Data Load performance in BI7.0

    Hi,
    I have a generic question regarding BI 7.0.
    From the perspective of data load performance, what features does BI 7.0 have compared to earlier versions?
    Thanks in advance,,
    Rama Murthy

    Hi,
    In BI, the entry layer is the PSA, and it is mandatory to maintain the PSA when bringing data into BI from any source. Here (in BI) it is possible to have the PSA as typed or untyped.
    The InfoPackage functionality is reduced; it only loads data up to the PSA.
    The DTP loads the data between BI objects within BI. Transformations replace the update rules and transfer rules.
    DTP and transformations remove the data mart interface between BI objects.
    If no transformation of the data is required, you can load data directly to the target without maintaining an InfoSource.
    All these properties are available because of the new DataSource concept, i.e. the BI DataSource (object type RSDS).
    Depending on the situation, the InfoSource may or may not be mandatory, but the PSA is mandatory in BI; the rest is the same as in 3.x.
    Hope this helps in solving your problem
    Regards
    Ramakrishna Kamurthy

  • Performance Tuning Data Load for ASO cube

    Hi,
    Can anyone help with how to fine-tune data loads on an ASO cube?
    We have an ASO cube which loads around 110 million records from a total of 20 data files.
    18 of the data files have 4 million records each and the last two have around 18 million records each.
    On average, loading 4 million records takes 130 seconds.
    The data file has 157 data columns representing the period dimension.
    With a BSO cube, sorting the data file normally helps, but with ASO it does not seem to have
    any impact. Any suggestions on how to improve the data load performance for the ASO cube?
    Thanks,
    Lian

    Yes TimG it sure looks identical - except for the last BSO reference.
    Well, never mind, as long as those that count remember where the words come from.
    To the Original Poster and to 960127 (come on create a profile already will you?):
    The sort order WILL matter IF you are using a compression dimension. In this case the compression dimension acts just like a BSO Dense dimension. If you load part of it in one record then when the next record comes along it has to be added to the already existing part. The ASO "load buffer" is really a file named <dbname.dat> that is built in your temp tablespace.
    The most recent x records that can fit in the ASO cache are still retained on the disk drive in the cache. So if the record is still there it will not have to be reread from the disk drive. So you could (instead of sorting) create an ASO cache as large as your final dat file. Then the record would already still be on the disk.
    BUT WAIT BEFORE YOU GO RAISING YOUR ASO CACHE. All operating systems use memory-mapped IO, so even if a record is not in the ASO cache it will likely still be in "Standby" memory (the dark blue memory as seen in Resource Monitor). This holds until the system runs out of "Free" memory (light blue in Resource Monitor).
    So in conclusion if your system still has Free memory there is no need (in a data load) to increase your ASO cache. And if you are out of Free memory then all you will do is slow down the other applications running on your system by increasing ASO Cache during a data load - so don't do it.
    Finally, if you have enough memory so that the entire data file fits in StandBY + Free memory then don't bother to sort it first. But if you do not have enough then sort it.
    Of course you have 20 data files so I hope that you do not have compression members spread out amongst these files!!!
    Finally, you did not say if you were using parallel load threads. If you need to have 20 files, read up on having parallel load buffers and parallel load scripts. That will make it faster.
    But if you do not really need 20 files and just broke them up to load parallel then create one single file and raise your DLTHREADSPREPARE and DLTHREADSWRITE settings. Heck these will help even if you do go parallel and really help if you don't but still keep 20 separate files.

  • Check data load performance for DSO

    Hi,
    Please, could anyone provide the details on how to check the data load performance for a particular DSO?
    For example, how much time it took to load a particular number of records (e.g. 200,000) into the DSO from the R/3 system. The DSO data flow is on BW 3.x.
    Thanks,
    Manjunatha.

    Hi Manju,
    You can use BW Statistics and its standard content.
    Regards,
    Rambabu

  • Data Load Performance Tuning

    Hi All,
    My requirement is to extract transaction data from an external MS SQL Server into BI 7.0. The UDC connection is successfully established and the DataSources are available, but I have to wait a long time just to check the data in the preview tab of the DataSource. If the DataSource takes this much time just to display data in preview mode, we are worried about the data load to the DSO or cube. Has anyone faced this problem? Are there any performance settings I am missing? We are on BI 7.0 on a Unix OS; answers will be appreciated.
    Thanks,
    Eric.

    Hi Sriee,
    How can we restrict the data in the BI DataSource preview? I cannot see any filter button or any option to restrict data in the preview tab. I also tried restricting the number of records in the preview to 5. Can you please mail me ([email protected]) the steps to improve data load performance with UDC?
    Thanks,
    Eric.

  • Data load performance using InfoSet vs. view

    Hi Guru,
    I am performing a generic extraction in which I am loading data to a cube, but my DataSource is based on an InfoSet in R/3.
    It is taking 30 minutes to load 10,00,000 (ten lakh, i.e. one million) records; ideally it should take 10 to 15 minutes, right?
    Can anybody suggest where I need to check to increase performance, or shall I create the DataSource over a view and try to load data? Will that help with data load performance?
    thanks,
    ganesh.

    hi Ganesh,
    Primary index ->
    When you create a database table in the ABAP Dictionary, you must specify the combination of fields that enables an entry within the table to be clearly identified. These key fields must be specified at the top of the table field list and defined as key fields. A minimum of 1 and a maximum of 16 key fields can be defined.
    When the table is activated, an index formed from all key fields is created on the database (with Oracle, Informix, DB2), in addition to the table itself. This index is called the primary index. The primary index is unique by definition.
    In addition to the primary index you can define one or more secondary indexes for a table in the ABAP Dictionary, and create them on the database. Secondary indexes can be unique or non-unique.
    If you dispatch an SQL statement from an ABAP program to the database, the database searches for the requested data records either in the database table itself (full table scan) or by using an index (index unique scan or index range scan). If all requested fields are found in the index using an index scan, the table records do not need to be accessed at all.
    The index records are stored in the index tree and sorted according to the index fields. This enables accelerated access via the index. The table records in the table blocks are not sorted.
    An index should not consist of too many fields. Having a few very selective fields increases the chance of reusability and reduces the chance of the database optimizer selecting an unsuitable access path.
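    As a hypothetical illustration (the table ZSALES_DOC and the fields BUKRS and GJAHR are made-up names), a selection on non-key fields like the one below triggers a full table scan unless a secondary index on those fields is created in SE11:

      " ZSALES_DOC is assumed to have only MANDT and DOCNR as key fields.
      " Without a secondary index on (BUKRS, GJAHR) this SELECT scans the
      " whole table; with one, the database can use an index range scan.
      DATA: LT_DOCS TYPE STANDARD TABLE OF ZSALES_DOC.

      SELECT * FROM ZSALES_DOC
             INTO TABLE LT_DOCS
             WHERE BUKRS = '1000'
               AND GJAHR = '2010'.

    The same reasoning applies to an InfoSet-based DataSource: if the extraction selects on fields that are not covered by an index on the underlying tables, the read times grow with the table size.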
    To create an index ->
    You have to use transaction SE11 in the dev system.
    Enter the database table name and choose
    Display -> Indexes -> Create.
    Enter the index name.
    Choose "Maintain logon language".
    Enter a short description and the index fields.
    Then save, create the request to transport the index to QA and PRD, and activate.
    Hope this helps,
    VA

  • EIS Member and Data Load-Getting OS Error-Please help!

    Please help! I have created an OLAP model and then created a metaoutline.
    Then I went ahead to do the member and data load. I logged into my server and started the member and data load.
    It then gives me the following error:
    SELECT /*+ */ .. FROM <my_view_name>
    OS Error: No such file or directory. IS Error: Member load terminated with error.
    The load terminated with errors.
    Thanks in advance for any replies.

    Thanks all! The error has been resolved.
    I just had to create a directory in the Integration Services folder: $ISHOME/loadinfo
    The loadinfo folder was missing.
    Prathap,
    Is that view available at that time? The query is generated automatically.
    As for which data source and which version of Hyperion: the data source is Oracle 10g and the Hyperion version is 9.3.

  • How can I add dimensions and load data into Planning applications?

    Please let me know how I can add dimensions and load data into a Planning application without doing it manually.

    You can use tools like ODI, DIM or HAL to load metadata and data into Planning applications.
    The data load can be done at the Essbase end using a rules file, but metadata changes should flow from Planning to Essbase through one of the above-mentioned tools. There are also many other ways to achieve the same.
    - Krish

  • Training on CalcScripts, Reporting Scripts, MaxL and Data Loading

    Hi All
    I am new to this forum. I am looking for someone who can train me on topics like CalcScripts, Reporting Scripts, MaxL and data loading.
    I am willing to pay for your time. Please let me know.
    Thanks

    Hi Friend,
    As you seem to be new to Essbase, you should first learn what Essbase and OLAP are and the difference between dense and sparse, and then use the Essbase Technical Reference for more detail.
    After that go through
    https://blogs.oracle.com/HyperionPlanning/ and start exploring calc scripts, MaxL etc.
    and all of this is free, free, free.
    Thanks,
    Avneet

  • Automate EIS dim builds and data loads

    I want to automate the dimension builds and data loads from my ETL tool (DTS). I have not been able to find anything about scripting EIS automation in the documentation. Is there any?

    What you can do is go into the EIS metaoutline and create a member load and a data load script. Do this by selecting the Outline menu item, then select Member Load. Click Next; on this screen, select only "Save load script". Click the "Save scripts" button to give it a name, then click Finish. Repeat for the data load script. (If you are using ASO cubes, you must use separate scripts; you can't do both in one script.) Then create a batch file to run the member load and data loads. In DTS, use an Execute Process task to run the batch file.

  • ERPi Data load mapping Issue

    Hi,
    We are facing an issue with ERPi data load mappings. The mapping file (a txt file) has 36k records, and whenever we try to load the mappings it takes a very long time, nearly 1 hour 30 minutes. We want to reduce that time. Is there any way to reduce the data load mapping time?
    Hyperion version: 11.1.2.2.300
    Please help, thanks in advance!!
    Thanks.

    Has anyone faced the same kind of issue?

  • Poor Data Load Performance Issue - BP Default Addr (0BP_DEF_ADDR_ATTR)

    Hello Experts:
    We are having a significant performance issue with the Business Partner Default Address extractor (0BP_DEF_ADDRESS_ATTR).  Our extract is exceeding 20 hours for about 2 million BP records.  This was loading the data from R/3 to BI -- Full Load to PSA only. 
    We are currently on BI 3.5 with a PI_BASIS level of SAPKIPYJ7E on the R/3 system. 
    We have applied the following notes from later support packs in hopes of resolving the problem, as well as doubling our data packet MAXSIZE. Both changes had a positive effect on the data load, but not enough to get the extract done in an acceptable time.
    These are the notes we have applied:
    From Support Pack SAPKIPYJ7F
    Note 1107061     0BP_DEF_ADDRESS_ATTR delivers incorrect Address validities
    Note 1121137     0BP_DEF_ADDRESS_ATTR Returns less records - Extraction RSA3
    From Support Pack SAPKIPYJ7H
    Note 1129755     0BP_DEF_ADDRESS_ATTR Performance Problems
    Note 1156467     BUPTDTRANSMIT not Updating Delta queue for Address Changes
    And the correction noted in:
    SAP Note 1146037 - 0BP_DEF_ADDRESS_ATTR Performance Problems
    We have also executed re-orgs on the ADRC and BUT0* tables and verified the appropriate indexes are in place.  However, the data load is still taking many hours.  My expectations were that the 2M BP address records would load in an hour or less; seems reasonable to me.
    If anyone has additional ideas, I would much appreciate it. 
    Thanks.
    Brian

