Performance Issue on 0FI_GL_14 Extraction

Hi experts,
We have a client who wants to see the report on a daily basis.
Unfortunately, the report is built on data from the standard extractor 0FI_GL_14, which reads very large SAP tables such as FAGLFLEXA.
The current BW system (SAP BW 7.01) schedules a daily job for this extraction, and the load takes approximately 12 - 16 hours. The table really is huge: more than 300 million records reside in it.
As a result, the daily data cannot be reported in time, and we would like to improve extraction performance.
We are considering two options to resolve this matter.
Option 1: Develop a custom DataSource (an ABAP program) in SAP that extracts only the necessary data. However, since we still need to query data with a posting date up to two months back, the performance of this program might be just as bad or worse. A rough sketch of what we have in mind is shown at the end of this post.
Option 2: Create a secondary index on the table, as suggested by SAP Note 1531175, which our research indicates might help with this issue. However, since the table is so large, the functional users are concerned about the impact of creating the index: whether it will affect their existing reports, the performance of their transactions, the downtime required during index creation, and so on.
Do you have any suggestions or advice regarding this? We would like to go back to the client and recommend the better of the two solutions.
Thanks in advance.
PS: The attached figure shows the current configuration of the table in question.
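
For reference, here is a minimal sketch of option 1 as a function-module-based generic DataSource, following the standard RSAX_BIW_GET_DATA_SIMPLE template. The function name Z_BIW_GET_FAGLFLEXA, the extract structure ZOXFGL and the field list are placeholders only, not our final design:

FUNCTION z_biw_get_faglflexa.
* Interface as in the RSAX_BIW_GET_DATA_SIMPLE template, e.g.:
*   IMPORTING  i_requnr, i_dsource, i_maxsize, i_initflag ...
*   TABLES     i_t_select, i_t_fields, e_t_data STRUCTURE zoxfgl
*   EXCEPTIONS no_more_data, error_passed_to_mess_handler

  STATICS: s_counter TYPE sy-tabix,
           s_maxsize TYPE i,
           s_cursor  TYPE cursor.
  DATA:    l_budat_low TYPE faglflexa-budat.

  IF i_initflag = sbiwa_c_flag_on.   " constant from type pool SBIWA (part of the template)
*   Initialization call: only remember the package size.
    s_maxsize = i_maxsize.
    RETURN.
  ENDIF.

  IF s_counter = 0.
*   First data package: open a cursor that reads only the fields the
*   report needs and only postings of roughly the last two months.
*   (A production version would take the range from I_T_SELECT instead.)
    l_budat_low = sy-datum - 62.
    OPEN CURSOR WITH HOLD s_cursor FOR
      SELECT rldnr rbukrs ryear docnr docln racct budat hsl
        FROM faglflexa
        WHERE budat >= l_budat_low.
  ENDIF.

  FETCH NEXT CURSOR s_cursor
    APPENDING CORRESPONDING FIELDS OF TABLE e_t_data
    PACKAGE SIZE s_maxsize.
  IF sy-subrc <> 0.
    CLOSE CURSOR s_cursor.
    RAISE no_more_data.
  ENDIF.
  s_counter = s_counter + 1.

ENDFUNCTION.

Note that without a suitable index on the posting date, the WHERE clause above would still force a full scan of FAGLFLEXA, which is why option 2 may be needed in any case.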

Hi,
Adding a new index will not affect other reports directly.
It can affect other reports indirectly, though, because the database optimizer then has a new access path to consider, and a changed access decision can lead to slower performance.
I don't think that is a common case; it is more likely that the table statistics are outdated or insufficient, which is a database administration issue.
So make sure you rebuild the statistics for the tables on which you created the new index, and you should be fine.
Regards
Bill

Similar Messages

  • Generic extraction loading and performances issues

    hi,
    Can anyone give me the details about generic extraction loading as well as the related performance issues?
    advance thanks
    regards
    praveen

    Hi,
    When there is no suitable Business Content DataSource, we create a generic DataSource.
    A generic DataSource can extract data from a single table or from multiple tables.
    If the data is in a single table, we use a generic DataSource extracting from that table.
    If the data to be extracted is spread over multiple tables, the relation between the tables is one-to-one, and there is a common field, we create a view on these tables and build a generic DataSource extracting from the view.
    If you want to extract data from different tables and there is no common field, we create an InfoSet on these tables and build a generic DataSource extracting from the query.
    If you want to extract data from different tables and the relation is one-to-many or many-to-many, we create a generic DataSource based on a function module.
    When we extract via a function module, the code has to execute at runtime to bring the data to BW, so it degrades loading performance.
    regards,

  • Performance Issues with large XML (1-1.5MB) files

    Hi,
    I'm using an XML Schema based Object relational storage for my XML documents which are typically 1-1.5 MB in size and having serious performance issues with XPath Query.
    When I do XPath query against an element of SQLType varchar2, I get a good performance. But when I do a similar XPath query against an element of SQLType Collection (Varray of varchar2), I get a very ordinary performance.
    I have also created indexes on extract() and analyzed my XMLType table and indexes, but I have no performance gain. Also, I have tried all sorts of storage options available for Collections ie. Varray's, Nested Tables, IOT's, LOB's, Inline, etc... and all these gave me same bad performance.
    I even tried creating XMLType views based on XPath queries but the performance didn't improve much.
    I guess I'm running out of options and patience as well.;)
    I would appreciate any ideas/suggestions, please help.....
    Thanks;
    Ramakrishna Chinta

    Are you having similar symptoms as I am? http://discussions.apple.com/thread.jspa?threadID=2234792&tstart=0

  • Performance issues with 0CO_OM_WBS_1

    We use BW 3.5 and R/3 4.7 and encounter huge performance issues with 0CO_OM_WBS_1. We always have to do a full load of approximately 15 million records, even though there are on average only about 100k new records since the previous load. This takes a long time.
    Is there a way to delta-enable this datasource?

    Hi,
    This DS is not delta enabled and you can only do a full load.  For a delta enabled one, you need to use 0CO_OM_WBS_6.  This works as other Financials extractors, as it has a safety delta (configurable, default 2 hours, in table BWOM_SETTINGS).
    What you could do is use WBS_6 as a delta, and run full loads of WBS_1 only for shorter durations.
    As you must have an ODS for WBS_1 at the first stage, I would suggest doing a full load only for posting periods that are still open. This will reduce the data volume.
    You may also look at creating your own generic DataSource with delta, if you are clear on the tables and logic involved.
    cheers...

  • QUERY PERFORMANCE AND DATA LOADING PERFORMANCE ISSUES

    What are the query performance issues we need to take care of? Please explain and let me know the relevant transaction codes. This is urgent.
    What are the data-loading performance issues we need to take care of? Please explain and let me know the relevant transaction codes.
    Will reward full points.
    Regards
    Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option.
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8)Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube or ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9)Build secondary indexes on the tables for the selection fields. This optimizes the tables for reading and reduces extraction time. If your selection fields are not key fields of the table, the primary index does not help much when accessing the data; in this case it is better to create secondary indexes on the selection fields of the associated table using the ABAP Dictionary to improve selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations (see the sketch after this list). When you use buffers or array operations, the system reads the data from the database tables once and stores it in memory for manipulation, which improves performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out at run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
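    As an illustration of point 11, here is a hedged sketch of replacing a per-record SELECT SINGLE in a BW 3.x start routine with one array read into a hashed internal table. DATA_PACKAGE is the standard routine table; the lookup table MARA and the field names MATERIAL and MATL_GROUP are assumptions for illustration only.

    " Sketch only: one database round trip per data package instead of
    " one SELECT SINGLE per record.
    TYPES: BEGIN OF ty_mara,
             matnr TYPE matnr,
             matkl TYPE matkl,
           END OF ty_mara.
    DATA: lt_mara TYPE HASHED TABLE OF ty_mara WITH UNIQUE KEY matnr.
    FIELD-SYMBOLS: <fs_rec>  LIKE LINE OF data_package,
                   <fs_mara> TYPE ty_mara.

    IF NOT data_package[] IS INITIAL.
      SELECT matnr matkl FROM mara
        INTO TABLE lt_mara
        FOR ALL ENTRIES IN data_package
        WHERE matnr = data_package-material.     " illustrative field name
    ENDIF.

    LOOP AT data_package ASSIGNING <fs_rec>.
      READ TABLE lt_mara ASSIGNING <fs_mara>
           WITH TABLE KEY matnr = <fs_rec>-material.
      IF sy-subrc = 0.
        <fs_rec>-matl_group = <fs_mara>-matkl.   " illustrative field name
      ENDIF.
    ENDLOOP.

    The single SELECT ... FOR ALL ENTRIES plus an in-memory READ TABLE per record is usually dramatically faster than one SELECT SINGLE per record inside the routine.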
    Hope it Helps
    Chetan
    @CP..

  • Performance Issue - How to restrict the total output of the report.

    Hi Experts
    I need your advise to resolve one performance issue in my BI Publisher report.
    My report query extracts more than 80,000 records at once. Loading these records into the report template takes almost 14 to 15 hours. Unfortunately I cannot change my logic to add more filters to restrict the query output, as this is the requirement from the client.
    Is there any way I can restrict my report so that it extracts the first 1,000 records and, on pressing Next, extracts the next 1,000 records, and so on from the point where it left off last time?
    Kindly let me know if you have any solution for this.
    Thanks in advance.
    Regards
    Srikant

    Hi experts...
    Any update on this...

  • Performance issue while generating Query

    Hi BI Gurus.
    I am facing a performance issue while running a query on 0IC_C03.
    It has a (from & to) variable for generating the report for a particular time period.
    If the (from & to) variable is filled, the query runs for a long time and then ends with a run-time error.
    If the query is executed without the variable (which is optional), the data is read from the beginning up to the current date, and the same query takes less time to execute.
    The period then has to be selected manually using the "keep filter value" option. Please suggest how I can solve this error.
    Regards
    Ritika

    Hi Ritika,
    Welcome to SDN.
    You have to check the following runtime segments using transaction ST03N:
    High Database Runtime
    High OLAP Runtime
    High Frontend Runtime
    If it is high database runtime:
    - check the aggregates, or create aggregates on the cube; this should help.
    If it is high OLAP runtime:
    - check the user exits, if any.
    - check whether hierarchies are used and are read down to a deep level.
    If it is high frontend runtime:
    - check whether a very high number of cells and formattings is transferred to the frontend (use "All data" to get the value "No. of Cells"), which causes high network and frontend (processing) runtime.
    For the from and to date variables, create one more set, use it, and try again.
    Regs,
    VACHAN

  • How to get around a performance issue when dealing with a lot of data

    Hello All,
    This is an academic question really, I'm not sure what I'm going to do with my issue, but I have some options.  I was wondering if anyone would like to throw in their two cents on what they would do.
    I have a report; the users want to see all agreements and all conditions related to the updating of rebates and the affected invoices. From a technical perspective, ENT6038-KONV-KONP-KONA-KNA1 are the tables I have to hit. The problem is that when they retroactively update rebate conditions they can hit thousands of invoices, which blossoms out to thousands of conditions... you see the problem. I simply have too much data to grab, and it times out.
    I've tried everything around the code.  If you have a better way to get price conditions and agreement numbers off of thousands of invoices, please let me know what that is.
    I have a couple of options.
    1) Use shared memory to preload the data for the report.  This would work, but I'm not going to know what data is needed to be loaded until report run time. They put in a date. I simply can't preload everything. I don't like this option much. 
    2) Write a function module to do this work. When the user clicks on the button to get this particular data, it will launch the FM in background and e-mail them the results. As you know, the background job won't time out. So far this is my favored option.
    Any other ideas?
    Oh... nope, BI is not an option, we don't have it. I know, I'm not happy about it. We do have a data warehouse, but the prospect of working with that group makes me wince.

    My two cents - firstly, I totally agree with Derick that it's probably a good idea to go back to the business and have them justify their reporting requirement, and ask "whether any user can meaningfully process all those results in an aggregate". But having dealt with customers across industries over a long period of time, it would probably be a bit fanciful to expect them to change their requirements too much; in my experience they neither understand (too much) technology nor want to hear about technical limitations of a system. They want what they want, if possible yesterday!
    So, about dealing with performance issues within ABAP: I'm sure you are already using efficient programming techniques like hashed internal tables with unique keys and accessing table rows via field symbols, but what I would suggest is to look at using [Extracts|http://help.sap.com/saphelp_nw04/helpdata/en/9f/db9ed135c111d1829f0000e829fbfe/content.htm] (a small sketch follows below). I have had to deal with this a couple of times when handling massive amounts of data and found it very efficient in terms of performance. A good point to remember when using extracts, quoting from SAP Help: "The size of an extract dataset is, in principle, unlimited. Extracts larger than 500KB are stored in operating system files. The practical size of an extract is up to 2GB, as long as there is enough space in the filesystem."
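    A minimal, hedged sketch of an extract-based report skeleton is below; the table VBAK and the fields used are placeholders only and are not the actual rebate tables from this thread.

    REPORT z_extract_sketch.

    TABLES: vbak.
    SELECT-OPTIONS: s_audat FOR vbak-audat.

    FIELD-GROUPS: header, order_rec.

    START-OF-SELECTION.
      " The header fields define the sort key of the extract dataset
      INSERT vbak-kunnr INTO header.
      INSERT vbak-vbeln vbak-netwr INTO order_rec.

      SELECT kunnr vbeln netwr FROM vbak
             INTO CORRESPONDING FIELDS OF vbak
             WHERE audat IN s_audat.
        EXTRACT order_rec.   " appended to the extract; spills to disk beyond ~500 KB
      ENDSELECT.

      SORT.                  " sorts the extract by the header fields
      LOOP.                  " sequential read of the (possibly file-backed) extract
        AT NEW vbak-kunnr.
          WRITE: / 'Customer:', vbak-kunnr.
        ENDAT.
        WRITE: / vbak-vbeln, vbak-netwr.
      ENDLOOP.

    Because the extract spills to the file system instead of consuming application server memory, it avoids the memory pressure you get when buffering millions of rows in internal tables.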
    Hope this helps,
    Cheers,
    Sougata.

  • Performance Issue Problem

    " Read WBS elements together with the project description
    SELECT A~PSPHI
           A~PSPNR
           B~POST1
           A~POST1
      INTO TABLE T_PRPS
      FROM PRPS AS A
      INNER JOIN PROJ AS B
        ON A~PSPHI = B~PSPNR
      WHERE A~PSPHI IN S_PSPHI.
    The user will not always enter the project number, so this first SELECT can fetch almost the entire table, because S_PSPHI is a select-option.
    After getting the records from PRPS, I pass them to the AFPO table to fetch the network/order numbers.
    IF T_PRPS[] IS NOT INITIAL.
      SELECT AUFNR
        INTO TABLE T_AFPO
        FROM AFPO
        FOR ALL ENTRIES IN T_PRPS
        WHERE PROJN = T_PRPS-PSPNR
        %_HINTS ORACLE 'INDEX("AFPO" "AFPO~002")'.
    ENDIF.
    After getting the records from AFPO, I pass them to AUFK to fetch the order text and other data.
    IF T_AFPO[] IS NOT INITIAL.
      SELECT AUFNR
             PSPEL
             KTEXT
        INTO TABLE T_AUFK
        FROM AUFK
        FOR ALL ENTRIES IN T_AFPO
        WHERE AUFNR = T_AFPO-AUFNR.    " note: the statement needs this closing period
    ENDIF.
    The second SELECT query takes a long time.
    Originally the 2nd and 3rd SELECT queries were written as a JOIN, but that caused a performance problem, so I split the JOIN. After that I still had the same performance problem. Then I added the secondary index hint, and the runtime dropped to 17 minutes, but the user wants it to be faster. Could anybody please suggest a solution as soon as possible?
    My analysis is that the field in the WHERE condition is not a primary key field and the database table also holds a very large number of records.
    Thanks&Regards,
    R.P.Sastry

    It looks like the problem is entirely due to the large amount of data. You are extracting a large number of entries from PRPS and then using all of the results to get orders from AFPO via a non-unique secondary index. This will take time and there's not much you can do about it except run the report in the background (a rough sketch follows).
    Or enter the project number.
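    For illustration only, a minimal sketch of scheduling such a report as a background job; the report name Z_PROJECT_REPORT and the variant DAILY are placeholders.

    DATA: lv_jobname  TYPE tbtcjob-jobname VALUE 'Z_PROJECT_REPORT',
          lv_jobcount TYPE tbtcjob-jobcount.

    " Create the job ...
    CALL FUNCTION 'JOB_OPEN'
      EXPORTING
        jobname          = lv_jobname
      IMPORTING
        jobcount         = lv_jobcount
      EXCEPTIONS
        cant_create_job  = 1
        invalid_job_data = 2
        jobname_missing  = 3
        OTHERS           = 4.

    IF sy-subrc = 0.
      " ... add the report as a job step, using an existing variant ...
      SUBMIT z_project_report USING SELECTION-SET 'DAILY'
             VIA JOB lv_jobname NUMBER lv_jobcount
             AND RETURN.

      " ... and release the job for immediate start.
      CALL FUNCTION 'JOB_CLOSE'
        EXPORTING
          jobcount  = lv_jobcount
          jobname   = lv_jobname
          strtimmed = 'X'
        EXCEPTIONS
          OTHERS    = 1.
    ENDIF.

    A background job is not subject to the dialog time-out, so the long-running selection can finish and write its output to the spool or send it by mail.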
    Rob
    Edited by: Rob Burbank on Feb 20, 2008 9:27 AM
    Edited by: Rob Burbank on Feb 20, 2008 9:29 AM

  • E-Sourcing Performance Issues

    Hi ,
    In E-Sourcing I am trying to handle some performance issues. Is it possible to import dummy data into the application? How do I import it? Can anyone help me with this? It is a critical issue.
    Thanks in advance
    Deepika

    Hi Deepika,
    It is certainly possible to import dummy data into the application. There are several work book templates that are provided out of the box in the application. Refer to Enterprise, Company Deployment workbooks available on the RG link in the application.
    Each workbook provides all the configuration details. These workbooks contain sample data and outline many details, such as required and non-required fields.
    If transactional data needs to be loaded, you should extract OMA files from source system and then import them into the target system.
    Regards,
    Parankush

  • OBIEE  Performance Issues

    I am experiencing performance issues with queries generated by OBIEE. A query generated by OBIEE runs for 2+ hours. Looking at the generated SQL, the execution plan is not utilizing the indexes on the FACT table.
    We have dimension table linked to a partitioned FACT table. We have created local bitmap indexes on all dimension keys. The execution plan generated for the OBIEE generated SQL statement does not use indexes, it executes a FULL table scan on our FACT table which has approximately 260 million rows. When I extract out the SELECT portion retrieving the information from the tables, the execution plan changes and indexes are used. Does anyone know what would cause oracle not to execute the same execution plan for the OBIEE generated SQL?
    OBIEE generated SQL
    WITH SAWITH0
    AS ( SELECT SUM (T92891.DEBIT_AMOUNT) AS c1,
    SUM (T92891.CREDIT_AMOUNT) AS c2,
    T91932.COMPL_ACCOUNT_NBR AS c3,
    T92541.APPROP_SYMBOL AS c4,
    T92541.FUND_CODE AS c5,
    T91992.ACCOUNT_SERIES_NAME AS c6,
    T91932.ACCOUNT_NBR AS c7
    FROM DW_FRR.DIM_FUND_CODE_FISCAL_YEAR T92149,
    DW_ICE.DIM_FUND T92541,
    DW_FRR.DIM_ACCOUNT T91932,
    DW_FRR.DIM_ACCOUNT_SERIES T91992,
    DW_ICE.FACT_GL_TRANSACTION_DETAIL T92891
    WHERE (T91932.ACCOUNT_SID_PK = T92891.ACCOUNT_SID_FK
    AND T91932.ACCOUNT_SERIES_SID_FK =
    T91992.ACCOUNT_SERIES_SID_PK
    AND T92149.FUND_CODE_FISCAL_YEAR_SID_PK =
    T92891.FUND_CODE_FISCAL_YEAR_SID_FK
    AND T92541.FUND_SID_PK = T92891.FUND_SID_FK
    AND T92149.FISCAL_YEAR >= :"SYS_B_0")
    GROUP BY T91932.ACCOUNT_NBR,
    T91932.COMPL_ACCOUNT_NBR,
    T91992.ACCOUNT_SERIES_NAME,
    T92541.FUND_CODE,
    T92541.APPROP_SYMBOL),
    SAWITH1 AS (SELECT DISTINCT :"SYS_B_1" AS c1,
    D1.c3 AS c2,
    D1.c4 AS c3,
    D1.c5 AS c4,
    D1.c2 AS c5,
    D1.c1 AS c6,
    D1.c6 AS c7,
    D1.c7 AS c8
    FROM SAWITH0 D1)
    SELECT D1.c1 AS c1,
    D1.c2 AS c2,
    D1.c3 AS c3,
    D1.c4 AS c4,
    D1.c5 AS c5,
    D1.c6 AS c6
    FROM SAWITH1 D1
    ORDER BY c1,
    c3,
    c2,
    c4
    Execution PLan
    Plan
    SELECT STATEMENT ALL_ROWS Cost: 1 M
         29 PX COORDINATOR
              28 PX SEND QC (ORDER) PARALLEL_TO_SERIAL SYS.:TQ10005 :Q1005 Cost: 1 M Bytes: 1019 M Cardinality: 11 M
                   27 SORT GROUP BY PARALLEL_COMBINED_WITH_PARENT :Q1005 Cost: 1 M Bytes: 1019 M Cardinality: 11 M
                        26 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT :Q1005 Cost: 972 K Bytes: 1019 M Cardinality: 11 M
                             25 PX SEND RANGE PARALLEL_TO_PARALLEL SYS.:TQ10004 :Q1004 Cost: 972 K Bytes: 1019 M Cardinality: 11 M
                                  24 HASH JOIN PARALLEL_COMBINED_WITH_PARENT :Q1004 Cost: 972 K Bytes: 1019 M Cardinality: 11 M
                                       4 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT :Q1004 Cost: 2 Bytes: 3 K Cardinality: 179
                                            3 PX SEND BROADCAST PARALLEL_TO_PARALLEL SYS.:TQ10002 :Q1002 Cost: 2 Bytes: 3 K Cardinality: 179
                                                 2 PX BLOCK ITERATOR PARALLEL_COMBINED_WITH_CHILD :Q1002 Cost: 2 Bytes: 3 K Cardinality: 179
                                                      1 TABLE ACCESS FULL TABLE PARALLEL_COMBINED_WITH_PARENT DW_ICE.DIM_FUND :Q1002 Cost: 2 Bytes: 3 K Cardinality: 179
                                       23 HASH JOIN PARALLEL_COMBINED_WITH_PARENT :Q1004 Cost: 972 K Bytes: 843 M Cardinality: 11 M
                                            20 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT :Q1004 Cost: 9 Bytes: 54 K Cardinality: 962
                                                 19 PX SEND BROADCAST PARALLEL_TO_PARALLEL SYS.:TQ10003 :Q1003 Cost: 9 Bytes: 54 K Cardinality: 962
                                                      18 HASH JOIN PARALLEL_COMBINED_WITH_PARENT :Q1003 Cost: 9 Bytes: 54 K Cardinality: 962
                                                           15 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT :Q1003 Cost: 6 Bytes: 814 Cardinality: 22
                                                                14 PX SEND BROADCAST PARALLEL_TO_PARALLEL SYS.:TQ10001 :Q1001 Cost: 6 Bytes: 814 Cardinality: 22
                                                                     13 MERGE JOIN CARTESIAN PARALLEL_COMBINED_WITH_PARENT :Q1001 Cost: 6 Bytes: 814 Cardinality: 22
                                                                          9 BUFFER SORT PARALLEL_COMBINED_WITH_CHILD :Q1001
                                                                               8 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT :Q1001 Cost: 2 Bytes: 16 Cardinality: 2
                                                                                    7 PX SEND BROADCAST PARALLEL_FROM_SERIAL SYS.:TQ10000 Cost: 2 Bytes: 16 Cardinality: 2
                                                                                         6 TABLE ACCESS BY INDEX ROWID TABLE DW_FRR.DIM_FISCAL_YEAR Cost: 2 Bytes: 16 Cardinality: 2
                                                                                              5 INDEX RANGE SCAN INDEX (UNIQUE) DW_FRR.UNQ_DIM_FISCAL_YEAR_IDX Cost: 1 Cardinality: 2
                                                                          12 BUFFER SORT PARALLEL_COMBINED_WITH_PARENT :Q1001 Cost: 4 Bytes: 319 Cardinality: 11
                                                                               11 PX BLOCK ITERATOR PARALLEL_COMBINED_WITH_CHILD :Q1001 Cost: 2 Bytes: 319 Cardinality: 11
                                                                                    10 TABLE ACCESS FULL TABLE PARALLEL_COMBINED_WITH_PARENT DW_FRR.DIM_ACCOUNT_SERIES :Q1001 Cost: 2 Bytes: 319 Cardinality: 11
                                                           17 PX BLOCK ITERATOR PARALLEL_COMBINED_WITH_CHILD :Q1003 Cost: 2 Bytes: 10 K Cardinality: 481
                                                                16 TABLE ACCESS FULL TABLE PARALLEL_COMBINED_WITH_PARENT DW_FRR.DIM_ACCOUNT :Q1003 Cost: 2 Bytes: 10 K Cardinality: 481
                                            22 PX BLOCK ITERATOR PARALLEL_COMBINED_WITH_CHILD :Q1004 Cost: 971 K Bytes: 4 G Cardinality: 207 M Partition #: 28 Partitions accessed #1 - #12
                                                 21 TABLE ACCESS FULL TABLE PARALLEL_COMBINED_WITH_PARENT DW_ICE.FACT_GL_TRANSACTION_DETAIL :Q1004 Cost: 971 K Bytes: 4 G Cardinality: 207 M Partition #: 28 Partitions accessed #1 - #132
    Inner SQL Statement without the OBIEE wrap around SQL
    SELECT SUM (T92891.DEBIT_AMOUNT) AS c1,
    SUM (T92891.CREDIT_AMOUNT) AS c2,
    T91932.COMPL_ACCOUNT_NBR AS c3,
    T92541.APPROP_SYMBOL AS c4,
    T92541.FUND_CODE AS c5,
    T91992.ACCOUNT_SERIES_NAME AS c6,
    T91932.ACCOUNT_NBR AS c7
    FROM DW_FRR.DIM_FUND_CODE_FISCAL_YEAR T92149,
    DW_ICE.DIM_FUND T92541,
    DW_FRR.DIM_ACCOUNT T91932,
    DW_FRR.DIM_ACCOUNT_SERIES T91992,
    DW_ICE.FACT_GL_TRANSACTION_DETAIL T92891
    WHERE (T91932.ACCOUNT_SID_PK = T92891.ACCOUNT_SID_FK
    AND T91932.ACCOUNT_SERIES_SID_FK =
    T91992.ACCOUNT_SERIES_SID_PK
    AND T92149.FUND_CODE_FISCAL_YEAR_SID_PK =
    T92891.FUND_CODE_FISCAL_YEAR_SID_FK
    AND T92541.FUND_SID_PK = T92891.FUND_SID_FK
    AND T92149.FISCAL_YEAR >= :"SYS_B_0")
    GROUP BY T91932.ACCOUNT_NBR,
    T91932.COMPL_ACCOUNT_NBR,
    T91992.ACCOUNT_SERIES_NAME,
    T92541.FUND_CODE,
    T92541.APPROP_SYMBOL
    Execution Plan
    Plan
    SELECT STATEMENT ALL_ROWS Cost: 25 K Bytes: 79 M Cardinality: 728 K
         28 PX COORDINATOR
              27 PX SEND QC (RANDOM) PARALLEL_TO_SERIAL SYS.:TQ10002 :Q1002 Cost: 25 K Bytes: 79 M Cardinality: 728 K
                   26 HASH GROUP BY PARALLEL_COMBINED_WITH_PARENT :Q1002 Cost: 25 K Bytes: 79 M Cardinality: 728 K
                        25 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT :Q1002 Cost: 25 K Bytes: 79 M Cardinality: 728 K
                             24 PX SEND HASH PARALLEL_TO_PARALLEL SYS.:TQ10001 :Q1001 Cost: 25 K Bytes: 79 M Cardinality: 728 K
                                  23 HASH GROUP BY PARALLEL_COMBINED_WITH_PARENT :Q1001 Cost: 25 K Bytes: 79 M Cardinality: 728 K
                                       22 HASH JOIN PARALLEL_COMBINED_WITH_PARENT :Q1001 Cost: 12 K Bytes: 190 M Cardinality: 2 M
                                            4 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT :Q1001 Cost: 2 Bytes: 319 Cardinality: 11
                                                 3 PX SEND BROADCAST PARALLEL_TO_PARALLEL SYS.:TQ10000 :Q1000 Cost: 2 Bytes: 319 Cardinality: 11
                                                      2 PX BLOCK ITERATOR PARALLEL_COMBINED_WITH_CHILD :Q1000 Cost: 2 Bytes: 319 Cardinality: 11
                                                           1 TABLE ACCESS FULL TABLE PARALLEL_COMBINED_WITH_PARENT DW_FRR.DIM_ACCOUNT_SERIES :Q1000 Cost: 2 Bytes: 319 Cardinality: 11
                                            21 NESTED LOOPS PARALLEL_COMBINED_WITH_PARENT :Q1001 Cost: 12 K Bytes: 142 M Cardinality: 2 M
                                                 6 PX BLOCK ITERATOR PARALLEL_COMBINED_WITH_CHILD :Q1001
                                                      5 TABLE ACCESS FULL TABLE PARALLEL_COMBINED_WITH_PARENT DW_FRR.DIM_ACCOUNT :Q1001 Cost: 2 Bytes: 12 K Cardinality: 481
                                                 20 VIEW PUSHED PREDICATE VIEW PARALLEL_COMBINED_WITH_PARENT SYS.VW_GBC_17 :Q1001 Bytes: 660 Cardinality: 11
                                                      19 SORT GROUP BY PARALLEL_COMBINED_WITH_PARENT :Q1001 Cost: 10 K Bytes: 376 K Cardinality: 5 K
                                                           18 HASH JOIN PARALLEL_COMBINED_WITH_PARENT :Q1001 Cost: 10 K Bytes: 2 M Cardinality: 36 K
                                                                7 TABLE ACCESS FULL TABLE PARALLEL_COMBINED_WITH_PARENT DW_ICE.DIM_FUND :Q1001 Cost: 2 Bytes: 7 K Cardinality: 179
                                                                17 NESTED LOOPS PARALLEL_COMBINED_WITH_PARENT :Q1001
                                                                     15 NESTED LOOPS PARALLEL_COMBINED_WITH_PARENT :Q1001 Cost: 10 K Bytes: 1 M Cardinality: 36 K
                                                                          8 TABLE ACCESS FULL TABLE PARALLEL_COMBINED_WITH_PARENT DW_FRR.DIM_FISCAL_YEAR :Q1001 Cost: 2 Bytes: 16 Cardinality: 2
                                                                          14 PARTITION LIST ALL PARALLEL_COMBINED_WITH_PARENT :Q1001 Partition #: 22 Partitions accessed #1 - #11
                                                                               13 PARTITION LIST ALL PARALLEL_COMBINED_WITH_PARENT :Q1001 Partition #: 23 Partitions accessed #1 - #12
                                                                                    12 BITMAP CONVERSION TO ROWIDS PARALLEL_COMBINED_WITH_PARENT :Q1001
                                                                                         11 BITMAP AND PARALLEL_COMBINED_WITH_PARENT :Q1001
                                                                                              9 BITMAP INDEX SINGLE VALUE INDEX (BITMAP) PARALLEL_COMBINED_WITH_PARENT DW_ICE.FK_ACCOUNT_GLTRANS_IDX :Q1001 Partition #: 23 Partitions accessed #1 - #132
                                                                                              10 BITMAP INDEX SINGLE VALUE INDEX (BITMAP) PARALLEL_COMBINED_WITH_PARENT DW_ICE.FK_FUNDCODE_FY_GLTRANS_IDX :Q1001 Partition #: 23 Partitions accessed #1 - #132
                                                                     16 TABLE ACCESS BY LOCAL INDEX ROWID TABLE PARALLEL_COMBINED_WITH_PARENT DW_ICE.FACT_GL_TRANSACTION_DETAIL :Q1001 Cost: 10 K Bytes: 401 K Cardinality: 18 K Partition #: 23 Partitions accessed #1
    Any and all help would be greatly appreciated.

    Have you gathered statistics in the data warehouse recently? That's one reason the optimizer might choose the wrong execution plan.
    Is the schema a star schema? If so do you have the init.ora parameter 'STAR_TRANSFORMATION_ENABLED' set to yes in the DW? This can drastically affect performance.
    Please test any changes you make in a test system before applying them to live, as altering these settings can have unintended impacts.
    Thanks
    Robin

  • Performance issue: extractor:

    Hi,
    I have an issue with the performance of an extractor.
    The extractor is a generic one with a function module in its definition.
    The cause I found is the logic in the FM:
    the SELECT statement reads fields from a view joined with further tables, as shown below:
    FROM (  view                    AS p
                  INNER JOIN plko AS k ON p~plnty EQ k~plnty
                                     AND  p~plnnr EQ k~plnnr
                                     AND  p~plnal EQ k~plnal )
               LEFT JOIN crhd AS c ON p~arbid = c~objid
    Because of the multiple joins, and because one of the data sources is a view, there is a performance issue.
    Is there any way I can improve the performance? It is currently taking more than 6 hours.
    Raj

    Hi Karan,
    Sometimes an inner join can take more time; use separate internal tables with separate SELECT queries instead, along the lines of the sketch below.
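    A rough illustration of that suggestion, assuming the view delivers PLNTY/PLNNR/PLNAL/ARBID; the view name ZVIEW_TASKLIST and the structure are placeholders only.

    TYPES: BEGIN OF ty_view,
             plnty TYPE plko-plnty,
             plnnr TYPE plko-plnnr,
             plnal TYPE plko-plnal,
             arbid TYPE crhd-objid,
           END OF ty_view.
    DATA: lt_view TYPE STANDARD TABLE OF ty_view,
          lt_plko TYPE STANDARD TABLE OF plko,
          lt_crhd TYPE STANDARD TABLE OF crhd.

    " 1) Read the view on its own
    SELECT plnty plnnr plnal arbid
      FROM zview_tasklist
      INTO TABLE lt_view.

    IF lt_view IS NOT INITIAL.
      " 2) Read PLKO only for the task-list keys found in step 1
      SELECT * FROM plko INTO TABLE lt_plko
        FOR ALL ENTRIES IN lt_view
        WHERE plnty = lt_view-plnty
          AND plnnr = lt_view-plnnr
          AND plnal = lt_view-plnal.

      " 3) Read CRHD only for the work-center object IDs found in step 1
      SELECT * FROM crhd INTO TABLE lt_crhd
        FOR ALL ENTRIES IN lt_view
        WHERE objid = lt_view-arbid.
    ENDIF.

    Whether this is actually faster than the join depends on the data volumes and indexes, so it is worth comparing both variants with an SQL trace (ST05).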
    Thanks
    Viki

  • 11.2.0.0 upgrade- performance issue when commits

    We have upgraded our data warehouse system from 10.2.0.4 to 11.2.0.2. Since the upgrade, the ETL system has a performance issue. Extraction is fine, because we compared the explain plans before and after the upgrade. However, loading takes very long. The load commits after every 1,000 records, which is why we think it is slow. Even when we import data with the commit=y option it is very slow. Is there any known issue with this?
    Any help pl?
    Thanks

    We are on Solaris 10. The ETL job on 10.2.0.3 took 2 hours and now takes 11 hours.
    I believe the commit is the issue because:
    1. The source query (extraction) has the same explain plan.
    2. The source database is still on 10.2.0.3.
    3. The servers and network are the same for 10g and 11g.
    4. I have already experienced the commit=y performance issue during import.
    Thanks

  • Poor Data Load Performance Issue - BP Default Addr (0BP_DEF_ADDR_ATTR)

    Hello Experts:
    We are having a significant performance issue with the Business Partner Default Address extractor (0BP_DEF_ADDRESS_ATTR).  Our extract is exceeding 20 hours for about 2 million BP records.  This was loading the data from R/3 to BI -- Full Load to PSA only. 
    We are currently on BI 3.5 with a PI_BASIS level of SAPKIPYJ7E on the R/3 system. 
    We have applied the following notes from later support packs in hopes of resolving the problem, as well as doubling our data packet MAXSIZE.  Both changes did have a positive effect on the data load, but not enough to bring the extract down to an acceptable time. 
    These are the notes we have applied:
    From Support Pack SAPKIPYJ7F
    Note 1107061     0BP_DEF_ADDRESS_ATTR delivers incorrect Address validities
    Note 1121137     0BP_DEF_ADDRESS_ATTR Returns less records - Extraction RSA3
    From Support Pack SAPKIPYJ7H
    Note 1129755     0BP_DEF_ADDRESS_ATTR Performance Problems
    Note 1156467     BUPTDTRANSMIT not Updating Delta queue for Address Changes
    And the correction noted in:
    SAP Note 1146037 - 0BP_DEF_ADDRESS_ATTR Performance Problems
    We have also executed re-orgs on the ADRC and BUT0* tables and verified the appropriate indexes are in place.  However, the data load is still taking many hours.  My expectations were that the 2M BP address records would load in an hour or less; seems reasonable to me.
    If anyone has additional ideas, I would much appreciate it. 
    Thanks.
    Brian


  • Loading performance issues

    HI gurus
    Please can you help with loading issues? I am extracting data from a standard extractor in Purchasing, and for 3 lakh (300,000) records it is taking 18 hours. Can you please suggest how to address the loading performance issues?
    -KP

    Hi,
    Loading Performance:
    a) Always load and activate the master data before you load the transaction data so that the SIDs don't have to be created at the loading of the transaction data.
    b) Have the optimum packet size. If you have too small a packet size, the system writes messages to the monitor and those entries keep increasing over time and cause slow down. Try different packet sizes to arrive at the optimum number.
    c) Fine tune your data model. Make use of the line item dimension where possible.
    d) Make use of the load parallelization as much as you can.
    e) Check your CMOD code. If you have direct reads, change them to read all the data into internal table first and then do a binary search.
    f) Check code in your start routine, transfer rules and update rules. If you have BW Statistics cubes turned on, you can find out where most of the CPU time is spent and concentrate on that area first.
    g) Work with the basis folks and make sure the database parameters are optimized. If you search on OSS based on the database you are using, it provides recommendations.
    h) Set up your loads processes appropriately. Don't load all the data all the time unless you have to. (e.g) If your functionals say the historical fiscal years are not changed, then load only current FY and onwards.
    i) Set up your jobs to run when there is not much activity. If the system resources are already strained, your processes will have to wait for resources.
    j) For the initial loads only, always buffer the number ranges for SIDs and DIM ids
    Hareesh
