Improve data load performance using ABAP code

Hi all,
I want to improve my load performance using ABAP code. How do I do this? If I write ABAP code in SE38, how can I call it on the BW side? Sample code for improving load performance would be useful. Please guide me.

There are several points that can improve performance of your ABAP code:
1. Avoid using the SELECT...ENDSELECT construct; use SELECT ... INTO TABLE instead (see the sketch after this list).
2. Use WHERE clause in your SELECT statement to restrict the volume of data retrieved.
3. Use FOR ALL ENTRIES in your SELECT statement to retrieve the matching records at one shot.
4. Avoid using nested SELECTs and SELECT statements within LOOPs.
5. Avoid using INTO CORRESPONDING FIELDS OF. Instead use INTO TABLE.
6. Avoid using SELECT * and select only the required fields from the table.
7. Avoid Executing a SELECT multiple times in the program.
8. Avoid nested loops when working with large internal tables.
9. Whenever using READ TABLE, use the BINARY SEARCH addition to speed up the search.
10. Use FIELD-SYMBOLS instead of a work area when there are more than 200 entries in an internal table where some fields are being manipulated.
11. Use MOVE with individual variable/field moves instead of MOVE-CORRESPONDING.
12. Use CASE instead of IF/ENDIF whenever possible.
13. The runtime analysis transaction SE30 can be used to measure the application performance.
14. Transaction code ST05 can be used to analyse the SQL trace and measure the performance of the SELECT statements of the program.
15. Start routines can be used when transformation is needed in the data package level. Field/individual routines can be used for a simple formula or calculation. End routines are used when you wish to populate data not present in the source but present in the target.
16. Always use a WHERE clause for DELETE statement. To delete records for multiple values, use SELECT-OPTIONS.
17. Always use IS INITIAL instead of comparing to a literal, because the initial value depends on the data type: space ('') for a character field but 0 for an integer.
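To tie several of these points together, here is a minimal, hedged sketch (the standard sales document tables VBAK/VBAP are used purely as an example; adapt the tables, fields, and date literal to your own scenario):

* Points 1/2/6: one array fetch with a WHERE clause, only the needed fields
TYPES: BEGIN OF ty_vbak,
         vbeln TYPE vbak-vbeln,
         erdat TYPE vbak-erdat,
       END OF ty_vbak,
       BEGIN OF ty_vbap,
         vbeln TYPE vbap-vbeln,
         posnr TYPE vbap-posnr,
         matnr TYPE vbap-matnr,
       END OF ty_vbap.
DATA: lt_vbak TYPE STANDARD TABLE OF ty_vbak,
      lt_vbap TYPE STANDARD TABLE OF ty_vbap,
      ls_vbak TYPE ty_vbak,
      ls_vbap TYPE ty_vbap.
FIELD-SYMBOLS <fs_vbap> TYPE ty_vbap.

SELECT vbeln erdat
  FROM vbak
  INTO TABLE lt_vbak
  WHERE erdat >= '20100101'.

* Point 3: fetch all matching items in one shot, but guard against an
* empty driver table, otherwise FOR ALL ENTRIES selects everything
IF lt_vbak IS NOT INITIAL.
  SELECT vbeln posnr matnr
    FROM vbap
    INTO TABLE lt_vbap
    FOR ALL ENTRIES IN lt_vbak
    WHERE vbeln = lt_vbak-vbeln.
ENDIF.

* Point 9: BINARY SEARCH requires the table to be sorted by the search key
SORT lt_vbap BY vbeln.
LOOP AT lt_vbak INTO ls_vbak.
  READ TABLE lt_vbap INTO ls_vbap
       WITH KEY vbeln = ls_vbak-vbeln BINARY SEARCH.
  IF sy-subrc = 0.
    " process the first matching item here
  ENDIF.
ENDLOOP.

* Point 10: change lines in place via a field symbol, no MODIFY needed
LOOP AT lt_vbap ASSIGNING <fs_vbap>.
  CLEAR <fs_vbap>-matnr.
ENDLOOP.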
Hope it helps.

Similar Messages

  • To improve data load performance

    Hi,
    The data is getting loaded into the cube. Here there are no routines in update rules and transfer rules. Direct mapping is done to the infoobjects.
    But there is an ABAP routine written for 0CALDAY in the InfoPackage. Other than the code below, there is no ABAP code written anywhere. For 77 lakh (7.7 million) records it is taking more than 10 hrs to load. Are there any possible solutions for improving the data load performance?
      DATA: L_IDX LIKE SY-TABIX.
      DATA: ZDATE LIKE SY-DATUM.
      DATA: ZDD(2) TYPE N.
      READ TABLE L_T_RANGE WITH KEY
           FIELDNAME = 'CALDAY'.
      L_IDX = SY-TABIX.
    *+1 month
      ZDATE = SY-DATUM.
      IF ZDATE+4(2) = '12'.
        ZDATE(4) = ZDATE(4) + 1.
        ZDATE+4(2) = '01'.
        ZDATE+6(2) = '01'.
        L_T_RANGE-LOW = ZDATE.
      ELSE.
        ZDATE+4(2) = ZDATE+4(2) + 1.
        ZDATE+6(2) = '01'.
        L_T_RANGE-LOW = ZDATE.
      ENDIF.
    *+3 months
      ZDATE = SY-DATUM.
      IF ZDATE+4(2) >= '10'.
        ZDATE(4) = ZDATE(4) + 1.
        ZDATE+4(2) = ZDATE+4(2) + 3 - 12.
        ZDATE+6(2) = '01'.
      ELSE.
        ZDATE+4(2) = ZDATE+4(2) + 3.
        ZDATE+6(2) = '01'.
      ENDIF.
      CALL FUNCTION 'FIMA_END_OF_MONTH_DETERMINE'
        EXPORTING
          I_DATE                   = ZDATE
        IMPORTING
          E_DAYS_OF_MONTH          = ZDD.
      ZDATE+6(2) = ZDD.
      L_T_RANGE-HIGH = ZDATE.
      L_T_RANGE-SIGN = 'I'.
      L_T_RANGE-OPTION = 'BT'.
      MODIFY L_T_RANGE INDEX L_IDX.
      P_SUBRC = 0.
    Thanks,
    rani

    I don't think this filter routine is causing the issue.
    Please implement general performance improvement methods.
    FAQ - The Future of SAP NetWeaver Business Intelligence in the Light of the NetWeaver BI&Business Objects Roadmap

  • Data load performance using infoset Vs View.

    Hi Guru,
    I am performing a generic extraction, loading data to a cube, but my DataSource is based on an InfoSet in R/3.
    It is taking 30 min to load 10,00,000 (ten lakh) records; ideally it should take 10 to 15 min, right?
    Can anybody suggest where I need to check to increase performance? Or shall I create the DataSource over a view instead; will that help data load performance?
    thanks,
    ganesh.

    hi Ganesh,
    Primary Index ->
    When you create a database table in the ABAP Dictionary, you must specify the combination of fields that enables an entry within the table to be uniquely identified. These key fields must be specified at the top of the table field list and defined as key fields. A minimum of 1 and a maximum of 16 key fields can be defined.
    When the table is activated, an index formed from all key fields is created on the database (with Oracle, Informix, DB2), in addition to the table itself. This index is called the primary index. The primary index is unique by definition.
    In addition to the primary index you can define one or more secondary indexes for a table in the ABAP Dictionary, and create them on the database. Secondary indexes can be unique or non-unique.
    If you dispatch an SQL statement from an ABAP program to the database, the database searches for the requested data records either in the database table itself (full table scan) or by using an index (index unique scan or index range scan). If all requested fields are found in the index during an index scan, the table records do not need to be accessed at all.
    The index records are stored in the index tree, sorted by the index fields. This enables accelerated access using the index. The table records in the table blocks, by contrast, are not sorted.
    An index should not consist of too many fields. Having a few very selective fields increases the chance of reusability and reduces the chance of the database optimizer selecting an unsuitable access path.
    To create Index ->
    You have to use transaction SE11 in the development system.
    Enter the database table name and choose
    Display -> Indexes -> Create.
    Enter the index name.
    Choose 'Maintain logon language'.
    Enter a short description and the index fields.
    Then save, create the request to transport the index to QA and PRD, and activate.
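    To illustrate, a hedged sketch of a SELECT that can exploit such a secondary index (the table ZSALES and its index over KUNNR/ERDAT are hypothetical):

    * Assumption: ZSALES has a secondary index on (KUNNR, ERDAT); the WHERE
    * clause matches the index fields, enabling an index range scan.
    TYPES: BEGIN OF ty_result,
             vbeln TYPE vbeln,
             kunnr TYPE kunnr,
             erdat TYPE erdat,
           END OF ty_result.
    DATA lt_result TYPE STANDARD TABLE OF ty_result.

    SELECT vbeln kunnr erdat
      FROM zsales
      INTO TABLE lt_result
      WHERE kunnr = '0000004711'
        AND erdat >= '20100101'.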
    Hope this helps,
    VA

  • Create new survey data for opportunity using abap code

    Hello to all,
    I have one requirement.
    I need to create survey data for a given opportunity ID using ABAP code, or copy survey data from one opportunity to another opportunity using ABAP.
    any ideas..
    many thanks
    Babu

    Hello Babu,
    You can do this through the CRM_ORDER_MAINTAIN FM, this is the generic FM to edit any transaction in CRM.
    The FM contains an internal table IT_SURVEY that must be filled. Actually, this table will only contain a reference to the (completed) survey; it doesn't store the data of the survey itself.
    For some example coding, take a look at the SrvTSurvey.do controller. If you want to see what data is stored in the IT_SURVEY table, create a transaction and attach a completed survey to it (you can easily do it in the standard views of the webclient) and take a look at the survey structure in the CRM_ORDER_READ FM/Report.
    Let me know if you need more guidance,
    Reward points if useful,
    Regards,
    Joost

  • How to activate data in ODS using ABAP code

    Hi
    Question like in subject.
    THX
    Adam

    Hi,
    You can use program RSDG_ODSO_ACTIVATE. During activation, the M version will be read.
    There are two other options besides the activation:
    Check only >> this only checks the consistency of the ODS; no activation is executed.
    Clean up DDIC structures >> this deletes old generated tables, which were only necessary in previous releases.
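    If you need to trigger the activation from your own ABAP rather than running the program in SE38, a minimal sketch (the program's own selection screen is used for input; check its parameter names in SE38 before supplying values with the WITH addition):

    * Call the standard activation program from custom code.
    SUBMIT rsdg_odso_activate VIA SELECTION-SCREEN AND RETURN.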
    Also check this link, it is very useful:
    https://www.sdn.sap.com/irj/scn/wiki?path=/pages/viewpage.action%3fpageid=35458
    Regards,
    Debjani

  • QUERY PERFORMANCE AND DATA LOADING PERFORMANCE ISSUES

    What query performance issues do we need to take care of? Please explain and let me know the transaction codes.
    What data loading performance issues do we need to take care of? Please explain and let me know the transaction codes.
    Will reward full points.
    Regards,
    Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows this option.
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8)Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube or ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9)Building secondary indexes on the tables for the selection fields optimizes these tables for reading and reduces extraction time. If your selection fields are not key fields on the table, the primary index is not much help when accessing data. In this case it is better to create secondary indexes with the selection fields on the associated table, using the ABAP Dictionary, to improve selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes, and regenerate them after the upload (see the sketch after this list). Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
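    As an illustration of point 13, a hedged sketch of dropping and rebuilding InfoCube indexes around a large load from ABAP (the cube name is a placeholder, and the function module names should be verified in your release; the same effect can be achieved via InfoPackage or process chain settings):

    * Drop the secondary indexes of the cube before the load ...
    CALL FUNCTION 'RSDU_INFOCUBE_INDEXES_DROP'
      EXPORTING
        i_infocube = 'ZMYCUBE'.

    * ... run the data load here ...

    * ... and rebuild the indexes afterwards.
    CALL FUNCTION 'RSDU_INFOCUBE_INDEXES_REPAIR'
      EXPORTING
        i_infocube = 'ZMYCUBE'.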
    Hope it Helps
    Chetan
    @CP..

  • Data load performance statistics on package level

    Hello,
    table RSMONIPTAB shows me the data packages which belong to a specific request.
    Unfortunately this table contains only timestamp information. Is there a possibility
    to calculate how many seconds each data package required for processing?
    Or can I join some other table, e.g. via column MESS_ID, which provides information
    about how long the processing took?
    Regards,
    Mark

    You can check the job log of that particular background job for the number of data packets it processed and the time taken to complete the job.
    Data packet processing time may vary for each data load, depending upon the ABAP code that exists in the transfer/update routines and start routines. A rough duration per package can also be derived from the timestamps, as sketched below.
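    A hedged sketch of such a calculation (lv_start/lv_end stand for two timestamps read from RSMONIPTAB; check the actual timestamp column names in SE11):

    * Difference in seconds between two UTC timestamps.
    DATA: lv_start TYPE timestamp,
          lv_end   TYPE timestamp,
          lv_secs  TYPE tzntstmpl.

    lv_secs = cl_abap_tstmp=>subtract( tstmp1 = lv_end
                                       tstmp2 = lv_start ).
    WRITE: / 'Seconds per package:', lv_secs.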
    -DU

  • Data Load Performance Tuning

    Hi All,
    My requirement is to extract transaction data from an external MS SQL Server into BI 7.0. The UDC connection was successfully established and the DataSources are available, but I have to wait a long time to see data in the preview tab of the DataSource. If the DataSource takes this much time just to display data in preview mode, we are worried about the data load to the DSO or cube. Has anyone faced this problem; is there a performance setting I am missing? We are on BI 7.0 and on a Unix OS; answers will be appreciated.
    Thanks,
    Eric.

    Hi Sriee,
    How can we restrict the data in the BI DataSource preview? I cannot see any filter button or any option to restrict data in the preview tab. I also tried restricting the number of records in the preview to 5. Can you please mail me ([email protected]) the steps to improve data load performance in UDC?
    Thanks,
    Eric.

  • How to transfer data in change log table of dso to z-table using abap code

    Hi, can you please explain how to transfer data from the change log table of a DSO to a Z-table using ABAP code, without using the function module approach?

    ** PROGRAM NAME:   ZBW_DELTA_TO_GSTAR **
    report ZBW_DELTA_TO_GSTAR no standard page heading
                                     line-size 120
                                     line-count 75
                                     message-id ZBW_MSG_CLS.
    tables:   ZGIV_DLTA_EBV_BB,
              ZGIV_DLTA_EM2_BL,
              ZGIV_DLTA_EM2_BK.
    * Selection Screen Definitions
    SELECTION-SCREEN: BEGIN OF BLOCK INNER WITH FRAME TITLE TEXT-001.
    SELECTION-SCREEN: SKIP 1.
    PARAMETERS:       EBVBB RADIOBUTTON GROUP ROLL,
                      EM2BL RADIOBUTTON GROUP ROLL,
                      EM2BK RADIOBUTTON GROUP ROLL.
    SELECTION-SCREEN: END OF BLOCK INNER.
    Data:  WS_UPDATE_FLAG  Type C,
           UCounter(9)      Type N,
           ICounter(9)      Type N.
    DATA:  T_ZGIV_DLTA_EBV_BB Type Standard Table of ZGIV_DLTA_EBV_BB,
           s_ZGIV_DLTA_EBV_BB LIKE line of T_ZGIV_DLTA_EBV_BB.
    DATA:  T_ZGIV_DLTA_EM2_BK Type Standard Table of ZGIV_DLTA_EM2_BK,
           s_ZGIV_DLTA_EM2_BK LIKE line of T_ZGIV_DLTA_EM2_BK.
    DATA:  T_ZGIV_DLTA_EM2_BL Type Standard Table of ZGIV_DLTA_EM2_BL,
           s_ZGIV_DLTA_EM2_BL LIKE line of T_ZGIV_DLTA_EM2_BL.
    * Standard Internal Tables
    data: begin of i_AEPSD_O0140 occurs 0.
            include structure /BIC/AEPSD_O0140.
    data: end of i_AEPSD_O0140.
    data: begin of i_AEPSD_O0240 occurs 0.
            include structure /BIC/AEPSD_O0240.
    data: end of i_AEPSD_O0240.
    data: begin of i_AEPSD_O0340 occurs 0.
            include structure /BIC/AEPSD_O0340.
    data: end of i_AEPSD_O0340.
    data: begin of i_GIV_DLTA_EBV_BB occurs 0.
            include structure ZGIV_DLTA_EBV_BB.
    data: end of i_GIV_DLTA_EBV_BB.
    data: begin of i_GIV_DLTA_EM2_BK occurs 0.
            include structure ZGIV_DLTA_EM2_BK.
    data: end of i_GIV_DLTA_EM2_BK.
    data: begin of i_GIV_DLTA_EM2_BL occurs 0.
            include structure ZGIV_DLTA_EM2_BL.
    data: end of i_GIV_DLTA_EM2_BL.
    * Miscellaneous Program Variables and Constants
    * TOP-OF-PAGE
    top-of-page.
    * START-OF-SELECTION
    start-of-selection.
      Clear: i_GIV_DLTA_EBV_BB,
             i_GIV_DLTA_EM2_BK,
             i_GIV_DLTA_EM2_BL,
             UCounter, ICounter.
      IF EBVBB = 'X'.
        PERFORM 100_EXTRACT_EBV_BB_DELTA_RECS.
      ELSEIF EM2BK = 'X'.
        PERFORM 100_EXTRACT_EM2_BK_DELTA_RECS.
      ELSE.
        PERFORM 100_EXTRACT_EM2_BL_DELTA_RECS.
      ENDIF.
    * FORM 100_EXTRACT_EBV_BB_DELTA_RECS
    FORM 100_EXTRACT_EBV_BB_DELTA_RECS.
      Refresh:   i_AEPSD_O0140,
                 i_GIV_DLTA_EBV_BB.
      Clear:      UCounter, ICounter, s_ZGIV_DLTA_EBV_BB .
      Select * From /BIC/AEPSD_O0140
        Into TABLE i_AEPSD_O0140.
      IF SY-Subrc = 0.
        LOOP AT i_AEPSD_O0140.
          MOVE-CORRESPONDING i_AEPSD_O0140 TO s_ZGIV_DLTA_EBV_BB.
          MOVE SY-DATUM to s_ZGIV_DLTA_EBV_BB-create_dt.
          INSERT ZGIV_DLTA_EBV_BB FROM s_ZGIV_DLTA_EBV_BB.
          IF SY-Subrc = 0.
            ICounter = ICounter + 1.
          ELSE.
            UPDATE ZGIV_DLTA_EBV_BB FROM  s_ZGIV_DLTA_EBV_BB.
            IF SY-Subrc = 0.
              UCounter = UCounter + 1.
            ELSE.
              Message E067 with SY-DATUM ' ' SY-UZEIT ' '.
            ENDIF.
          ENDIF.
        ENDLOOP.
      ENDIF.
    ENDFORM.                    "100_EXTRACT_EBV_BB_DELTA_RECS
    * FORM 100_EXTRACT_EM2_BK_DELTA_RECS
    FORM 100_EXTRACT_EM2_BK_DELTA_RECS.
    Refresh:   i_AEPSD_O0240,
               i_GIV_DLTA_EM2_BK.
      Clear:      UCounter, ICounter, s_ZGIV_DLTA_EM2_BK .
      Select * From /BIC/AEPSD_O0240
        Into TABLE i_AEPSD_O0240.
      IF SY-Subrc = 0.
        LOOP AT i_AEPSD_O0240.
          MOVE-CORRESPONDING i_AEPSD_O0240 TO s_ZGIV_DLTA_EM2_BK.
          MOVE SY-DATUM to s_ZGIV_DLTA_EM2_BK-create_dt.
            INSERT ZGIV_DLTA_EM2_BK FROM s_ZGIV_DLTA_EM2_BK.
          IF SY-Subrc = 0.
            ICounter = ICounter + 1.
          ELSE.
            UPDATE ZGIV_DLTA_EM2_BK FROM  s_ZGIV_DLTA_EM2_BK.
            IF SY-Subrc = 0.
              UCounter = UCounter + 1.
            ELSE.
              Message E067 with SY-DATUM ' ' SY-UZEIT ' '.
            ENDIF.
          ENDIF.
        ENDLOOP.
      ENDIF.
    ENDFORM.                    "100_EXTRACT_EM2_BK_DELTA_RECS
    * FORM 100_EXTRACT_EM2_BL_DELTA_RECS
    FORM 100_EXTRACT_EM2_BL_DELTA_RECS.
    Refresh:   i_AEPSD_O0340,
               i_GIV_DLTA_EM2_BL.
      Clear:      UCounter, ICounter, s_ZGIV_DLTA_EM2_BL .
      Select * From /BIC/AEPSD_O0340
        Into TABLE i_AEPSD_O0340.
      IF SY-Subrc = 0.
        LOOP AT i_AEPSD_O0340.
          MOVE-CORRESPONDING i_AEPSD_O0340 TO s_ZGIV_DLTA_EM2_BL.
          MOVE SY-DATUM to s_ZGIV_DLTA_EM2_BL-create_dt.
            INSERT ZGIV_DLTA_EM2_BL FROM s_ZGIV_DLTA_EM2_BL.
          IF SY-Subrc = 0.
            ICounter = ICounter + 1.
          ELSE.
            UPDATE ZGIV_DLTA_EM2_BL FROM  s_ZGIV_DLTA_EM2_BL.
            IF SY-Subrc = 0.
              UCounter = UCounter + 1.
            ELSE.
              Message E067 with SY-DATUM ' ' SY-UZEIT ' '.
            ENDIF.
          ENDIF.
        ENDLOOP.
      ENDIF.
    ENDFORM.                    "100_EXTRACT_EM2_BL_DELTA_RECS
    * END-OF-SELECTION
    end-of-selection.
      perform D1000_REPORT_DATA.
    * D1000_REPORT_DATA
    form D1000_REPORT_DATA.
    *Display the title of the program
      write: /25 SY-TITLE.
      skip.
    * Display the details of the user and time
      write: /1 'Executed by', 15 SY-UNAME, 30 'Date',
      38 SY-DATUM, 53 'Time', 60 SY-UZEIT.
      skip 2.
      write: /  'Delta Records have been extracted  ',
             /   'Updates : ', UCounter,
             /   'Inserts : ', ICounter.
      skip.
      skip 3.
      write: /20 'End of the report'.
    endform.                                           "D1000_REPORT_DATA
    Check it out; this may also help you.

  • How to improve the load performance while using Datasources for the Invoice

    HI All,
    How do I improve the load performance while using DataSources for invoices? My invoice load (approx. 0.4 M records) is taking a very long time, nearly 16 to 18 hrs, to update data from R/3 to 0ASA_DS01.
    If I load through a flat file, it loads within ~20 min for the same amount of data.
    Please suggest how to improve the load performance.
    PS: I have done the InfoPackage settings as per the OSS note.
    Regards,
    Srininivasarao.Namburi

    Hi Srinivas,
    Please refer to my blog posting /people/divyesh.jain/blog/2010/07/20/package-size-in-spend-performance-management-extraction which gives the details about the package size setting for extractors. I am sure that will be helpful in your case.
    Thanks,
    Divyesh

  • How do we improve master data load performance

    Hi Experts,
    Could you please tell me how to identify a master data load performance problem and what can be done to improve master data load performance?
    Thanks in Advance.
    Nitya

    Hi,
    -Alpha conversion is defined at infoobject level for objects with data type CHAR.
    A characteristic in SAP NetWeaver BI can use a conversion routine like the conversion routine called ALPHA. A conversion routine converts data that a user enters (in so called external format) to an internal format before it is stored on the data base.
    The most important conversion routine - due to its common use - is the ALPHA routine that converts purely numeric user input like '4711' into '004711' (assuming that the characteristic value is 6 characters long). If a value is not purely numeric like '4711A' it is left unchanged.
    We have found that in customers' systems there are quite often characteristics using a conversion routine like ALPHA that have values on the database which are not in internal format, e.g. one might find '4711' instead of '004711' on the database. It could even happen that there is also a value '04711', or ' 4711' (leading space).
    This possibly results in data inconsistencies, also for query selection; i.e. if you select '4711', this is converted into '004711', so '04711' won't be selected.
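    A small sketch of applying the ALPHA conversion in your own code, e.g. in a routine, so that values arrive in internal format (the 6-character length is just an example):

    DATA: lv_in  TYPE c LENGTH 6 VALUE '4711',
          lv_out TYPE c LENGTH 6.

    * Converts external to internal format: '4711' -> '004711'.
    * Non-numeric values such as '4711A' are left unchanged.
    CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
      EXPORTING
        input  = lv_in
      IMPORTING
        output = lv_out.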
    -The check for referential integrity occurs for transaction data and master data if they are flexibly updated. You determine the valid InfoObject values.
    - SID generation is a must when loading transaction data with respect to master data, in order to read the master data at BEx level.
    Regards,
    rvc

  • How to extract data from info cube into an internal table using ABAP code

    Hi,
    can anyone please suggest
    how to extract data from an InfoCube into an internal table using ABAP code, i.e. BAPIs or function modules?
    Thankx in advance
    regds
    AJAY

    Hi Dinesh,
    thanks for your reply,
    but I have already tried to use the function module.
    When I try to use the function module RSDRI_INFOPROV_READ
    I get an information message "ERROR GENERATION TEST FRAME".
    Can you please tell me what could be the problem?
    Bye
    AJAY
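    The "ERROR GENERATION TEST FRAME" message typically just means the function module cannot be executed from the SE37 test environment because of its generic table parameters; call it from a report instead. A hedged sketch along the lines of the standard demo program RSDRI_INFOPROV_READ_DEMO (the cube name, characteristic, key figure, and flat result structure are placeholders to adapt; verify the parameter names in SE37):

    TYPES: BEGIN OF ty_row,                  " flat structure matching the
             material TYPE /bi0/oimaterial,  " requested characteristic and
             amount   TYPE /bi0/oiamount,    " key figure aliases below
           END OF ty_row.

    DATA: lt_sfc   TYPE rsdri_th_sfc,        " characteristics to read
          ls_sfc   TYPE rsdri_s_sfc,
          lt_sfk   TYPE rsdri_th_sfk,        " key figures to read
          ls_sfk   TYPE rsdri_s_sfk,
          lt_data  TYPE STANDARD TABLE OF ty_row,
          lv_end   TYPE c LENGTH 1,
          lv_first TYPE c LENGTH 1 VALUE 'X'. " 'X' on the first call only

    ls_sfc-chanm    = '0MATERIAL'.
    ls_sfc-chaalias = 'MATERIAL'.
    INSERT ls_sfc INTO TABLE lt_sfc.

    ls_sfk-kyfnm    = '0AMOUNT'.
    ls_sfk-kyfalias = 'AMOUNT'.
    ls_sfk-aggr     = 'SUM'.
    INSERT ls_sfk INTO TABLE lt_sfk.

    WHILE lv_end IS INITIAL.
      CALL FUNCTION 'RSDRI_INFOPROV_READ'
        EXPORTING
          i_infoprov    = 'ZMYCUBE'
          i_th_sfc      = lt_sfc
          i_th_sfk      = lt_sfk
          i_packagesize = 10000
        IMPORTING
          e_t_data      = lt_data
          e_end_of_data = lv_end
        CHANGING
          c_first_call  = lv_first.
      " process lt_data package by package here
    ENDWHILE.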

  • Changing Field Label length for Data Element using ABAP code.

    Hi Experts,
    We have a scenario where we have to update the maximum length of the various Field Labels (Short, Medium, Long..) for a Data Element using ABAP code.
    Does anyone know how to do this ? Is there a Function Module available for this purpose ?
    Appreciate your valuable inputs.

    Hi,
    use the view
    DD03M.
    Give
    TABNAME as the table name,
    DDLANGUAGE = 'EN' / sy-langu,
    and order by POSITION (for correct sequencing).
    You will get the descriptions of the fields (short, medium, long, etc.). You will also get the length and other details.
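    A hedged sketch of reading the labels through this view (verify the exact column names of DD03M in SE11; SCRTEXT_S/M/L are the usual short/medium/long label fields, and MARA is just an example table):

    TYPES: BEGIN OF ty_label,
             position  TYPE dd03m-position,
             fieldname TYPE dd03m-fieldname,
             scrtext_s TYPE dd03m-scrtext_s,
             scrtext_m TYPE dd03m-scrtext_m,
             scrtext_l TYPE dd03m-scrtext_l,
           END OF ty_label.
    DATA lt_labels TYPE STANDARD TABLE OF ty_label.

    SELECT position fieldname scrtext_s scrtext_m scrtext_l
      FROM dd03m
      INTO TABLE lt_labels
      WHERE tabname    = 'MARA'
        AND ddlanguage = sy-langu
      ORDER BY position.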

  • How to Improve DSO loading performance

    Hello,
    I have a DSO with 3 InfoSources. This DSO is generic, meaning it is based on generic DataSources. Daily we have a full upload (last 2 months of data). Initially it took around 55 min to load the data, but nowadays it takes 2.5 hrs daily.
    Can you please tell me how I can improve the performance, in other words how I can reduce the time?
    Please give some solution or document to resolve this.
    amit

    Hi,
    Genearl tips you can try to improve the data load performance
    1. If they are full loads, see if you can make them delta loads.
    2. Check if there are complex routines/transformations being performed in any layer. In that case, see if you can optimize that code with the help of an ABAPer.
    3. Ensure that you are following the standard procedures in the chain like deleting Indices/secondary Indices before loading etc.
    4. Check whether the system processes are free when this load is running
    5. Try making the load as parallel as possible if the load is happening serially. Remove PSA if not needed.
    6. Go to manage ODS -> Activate -> Activate in parallel -> increase the number of processes from there. For direct access, try transaction RSODSO_SETTINGS.
    7. Remove Bex Reporting check box in ODS if not required.
    Check the data packet sizing and also the number range buffering and PSA partition size, and keep to the upload sequence: always load master data first, perform the change run, and then run the transaction data loads.
    Use InfoPackages with disjoint selection criteria to parallelize the data export.
    Complex database selections can be split to several less complex requests.
    Check this doc on BW data load perfomance optimization
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    BI Performance Tuning
    Business Intelligence Journal Improving Query Performance in Data Warehouses
    http://www.tdwi.org/Publications/BIJournal/display.aspx?ID=7891
    Achieving BI Query Performance Building Business Intelligence
    http://www.dmreview.com/issues/20051001/1038109-1.html
    SAP Business Intelligence Accelerator: A High-Performance Analytic Engine for SAP NetWeaver Business Intelligence
    http://www.sap.com/platform/netweaver/pdf/BWP_AR_IDC_BI_Accelerator.pdf
    BI Performance Audit
    http://www.xtivia.com/downloads/Xtivia_BIT_Performance%20Audit.pdf
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/10564d5c-cf00-2a10-7b87-c94e38267742
    ODS Query Performance  
    Thanks,
    JituK

  • Extractor Designing to improve the Load performance.

    Hi all,
    I am extracting data from the MM application; for this I am using the LO extractor 2LIS_02_ITM, which I have enhanced with 32 fields, and this is hampering my data load performance.
    Could you please let me know how I can improve the data load performance?
    Do I need to create separate generic extractors instead of enhancing the LO extractor?
    The DSO also has many fields in it. Should I split it into 2 and create a MultiProvider for reporting?
    Regards
    KK

    Hello,
    my suggestion would be to create another generic DataSource for the logical set of fields required in BI.
    Then you can load them separately to different DSOs and then to a single InfoCube, or to two InfoCubes with a MultiProvider to report on them.
    Further you can check the below links:
    Extraction-Enhancement-Performance problem
    Increase dataload performance
    Dataload Performance
    Performance Enhancement for Custom Data Extractor
    Regards,
    Dhanya
