Report Performance Tuning: Array Fetch Size

Hi Folks,
Does anyone have an idea about the array fetch size in Crystal Reports XI connected to an Oracle database?
   I have two basic questions here:
           1. What is the default array fetch size in Crystal XI?
           2. How do I change this parameter?
Thanks in advance,
Mani

Please re-post to the Data Connectivity - Crystal Reports forum if this is still an issue, or purchase a case and have a dedicated support engineer work with you directly.

Similar Messages

  • Crystal Reports for Enterprise - SQL Command - Array Fetch Size

    We're attempting to migrate a report created in Crystal Reports 2013 Support Pack 3, Version 14.1.3.1257, to...
    Crystal Reports for Enterprise Version 14.1.3.1300, Build 2013 Support Pack 3 Patch 1.
    The originating report successfully connects and returns data from MySQL through the MySQL ODBC 3.51 Driver (3.51.30.00), using a SQL Command.
    When opening this report in CR for E and running it against the same database and the same version of the database driver, we get the following Crystal Reports error:
    Crystal Reports
    A problem was encountered
    The following error has occurred while trying to retrieve the data:
    Error on Fetch : Largevarchar and Largevarbinary data cannot retrieved as variable-length data if the array fetch size is not set to 1.
    Please check with your System Administrator that the data source is correctly configured.
    How would we check, and where would we adjust, the array fetch size within Crystal Reports for Enterprise?
    Thank you

    Hi Vitaly,
    I had not yet tried creating a new report based on the query within CR for E, but I did now at your request/suggestion. Yes, it fails as well with a brand new report, this time with the error: "Non Supported Datatype".
    I've pasted the SQL we are using below. I just tried running this exact query natively through phpMyAdmin, and also creating a brand new report with Crystal Reports 2013 against the same ODBC driver that I am trying to use within CR for E. In both cases the query was accepted and ran successfully.
    Driver info:
    MySQL ODBC 3.51 Driver -  Version 3.51.30.00
    I have also tried 2 newer versions of the driver:
    MySQL ODBC 5.3 ANSI - Version 5.03.02.00
    MySQL ODBC 5.3 Unicode - Version 5.03.02.00
    ...where I also configure each within the Information Design Tool so that the "Array Fetch Size"=  1
    I receive the same "Non Supported Datatype" error for these drivers as well.  The MySQL server version is:  5.1.73-0ubuntu0.10.04.1-log
    Protocol Version: 10
    MySQL charset:  UTF-8
    Here is the query:
    SELECT ft.seq_id, ft.ticket_date, ft.posttotal,
    wl.name as current_approval_level,
    TIMESTAMPDIFF(DAY,ft.ticket_date,NOW()) AS Days_Since_TicketDate,
    TIMESTAMPDIFF(DAY,ft.date_created,NOW()) AS Days_Since_Created,
    TIMESTAMPDIFF(DAY,wph.workflow_date,NOW()) AS Days_Waiting_Your_Approval,
    ft.internal_comments
    FROM ticket AS ft
    LEFT JOIN unit AS unit ON (ft.unit_id = unit.id)
    LEFT JOIN unit_district AS udist ON (udist.unit_id = unit.id AND (udist.effective_date IS NULL OR udist.effective_date = 0 OR udist.effective_date <= ft.ticket_date)
           AND (udist.expiry_date IS NULL OR udist.expiry_date = 0 OR udist.expiry_date >= ft.ticket_date))
    LEFT JOIN workflow_document_process_map AS wdpm ON (ft.id = wdpm.document_id AND wdpm.document_type_id = 1)
    LEFT JOIN workflow_process AS wp ON (wp.id = wdpm.workflow_process_id)
    LEFT JOIN workflow_rule AS wr ON (wp.workflow_schema_id = wr.workflow_schema_id AND wp.workflow_level_id = wr.workflow_level_id AND wr.workflow_activity_id = 2  AND (wr.effective_date IS NULL OR wr.effective_date = 0 OR wr.effective_date <= ft.ticket_date) AND (wr.expiry_date IS NULL OR wr.expiry_date = 0 OR wr.expiry_date >= ft.ticket_date))
    LEFT JOIN workflow_rule AS wr2 ON (wp.workflow_schema_id = wr2.workflow_schema_id AND wp.workflow_level_id = wr2.workflow_level_id AND wr2.workflow_activity_id = 4 AND (wr2.effective_date IS NULL OR wr2.effective_date = 0 OR wr2.effective_date <= ft.ticket_date) AND (wr2.expiry_date IS NULL OR wr2.expiry_date = 0 OR wr2.expiry_date >= ft.ticket_date))
    LEFT JOIN workflow_process_history AS wph ON (wph.workflow_process_id = wp.id)
    LEFT JOIN ticket_attachment AS ftattach ON (ft.id = ftattach.ticket_id AND ftattach.attachment_type_id = 2)
    LEFT JOIN person AS p ON (ft.supervisor_id = p.id)
    LEFT JOIN person AS p2 ON (ft.head_office_contact_id = p2.id AND (p2.effective_date IS NULL OR p2.effective_date = 0 OR p2.effective_date <= ft.ticket_date)
           AND (p2.expiry_date IS NULL OR p2.expiry_date = 0 OR p2.expiry_date >= ft.ticket_date))
    LEFT JOIN person AS p3 ON (ft.sales_person_id = p3.id)
    LEFT JOIN person_company_sales AS pcs ON (p2.id = pcs.client_person_id AND pcs.company_id = ft.client_company_id)
    LEFT JOIN person AS p4 ON (ft.created_by = p4.id)
    LEFT JOIN person AS p5 ON (ft.well_site_supervisor_id = p5.id)
    LEFT JOIN division AS d ON (ft.division_id = d.id)
    LEFT JOIN district AS dist1 ON (ft.district_id = dist1.id)
    LEFT JOIN district AS dist2 ON (udist.district_id = dist2.id)
    LEFT JOIN company AS c ON (ft.client_company_id = c.id)
    LEFT JOIN province AS prov ON (ft.job_province_id = prov.id)
    LEFT JOIN ticket_attachment AS ftattach2 ON (ft.id = ftattach2.ticket_id AND ftattach2.attachment_type_id != 2)
    LEFT JOIN locale_currency AS lc ON (lc.id = ft.locale_currency_id)
    LEFT JOIN operation_type AS ot ON (ot.id = ft.operation_type_id)
    LEFT JOIN incident AS i ON (i.ticket_id = ft.id)
    LEFT JOIN invoice_type AS it ON (it.id = ft.invoice_type_id)
    LEFT JOIN workflow_level AS wl ON (wl.id = wp.workflow_level_id)
    LEFT JOIN company AS dist1_company ON (dist1_company.id = dist1.internal_company_id)
    WHERE 1
    AND ( pcs.sales_person_id = 2113 AND pcs.effective_date <= ft.ticket_date AND ( pcs.expiry_date >= ft.ticket_date OR pcs.expiry_date = '0000-00-00 00:00:00' OR pcs.expiry_date IS NULL) )
    AND ft.closed_flag <> 1 AND (wr2.id IS NOT NULL OR wr2.id != 0)
    AND ft.commit_flag = 1
    AND ft.dispatch_status = 0
    AND dist1_company.country_id = 1
    AND wr2.role_id = 2
    GROUP BY ft.id
    ORDER BY ticket_date ASC, ft.id ASC
    Thank you for your help.
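    As a side note for anyone hitting the "Largevarchar ... array fetch size" error quoted above: a workaround that is sometimes suggested (an assumption here, since the thread does not confirm it resolved this particular case) is to cast the long text column to a bounded type inside the SQL Command, so the ODBC layer no longer reports it as a large variable-length field. A minimal sketch against the same ticket table, with the 1000-character limit chosen arbitrarily:
    -- Sketch only: cast the long text column to a bounded CHAR so the driver
    -- no longer treats it as a "large" variable-length type.
    SELECT ft.seq_id,
           ft.ticket_date,
           CAST(ft.internal_comments AS CHAR(1000)) AS internal_comments
    FROM ticket AS ft
    ORDER BY ft.ticket_date ASC;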

  • Performance Tuning of webi report BO4.0

    Hi,
    I have a report which has two data providers. In the 1st data provider I have 10 objects and in the 2nd data provider I have 3 objects. The issue is performance: the report takes 6 minutes to run.
    Please help me out with how we can increase the performance at query level.
    Thanks
    Siva

    Hi
    There are various levels you have to check to improve the performance of reports: it could be at the connection level, the query, the database, etc.
    Some of them you can try:
    1. Check the array fetch size in the relational connection parameters. Deactivating array fetch size can increase the efficiency of retrieving your data, but slows server performance.
    2. Set the options for the default list of values in the business layer and data foundation layer.
    Automatic refresh - If this is selected, the list of values is automatically refreshed each time the list is called. This can have an effect on performance each time the list of values is refreshed. You should disable this option if the list of values returns a large number of values.
    3. Set query stripping (Optimize query with Query Stripping in Web Intelligence - Business Intelligence (BusinessObjects) - SCN Wiki).
    4. Also check Performance Tuning Methods in BO.
    Regards,
    Raghava

  • How Can we improve the report performance..?

    Hi experts,
    I am learning BusinessObjects XI R2. Please let me know how we can improve report performance.
    Please give the answer in a detailed way.

    First find out why your report is performing slowly. Then fix it.
    That sounds silly, but there's really no single-path process for improving report performance. You might find issues with the report. With the network. With the universe. With the database. With the database design. With the query definition. With report variables. With the ETL. Once you figure out where the problem is, then you start fixing it. Fixing one problem may very well reveal another. I spent two years working on a project where we touched every single aspect of reporting (from data collection through ETL and all the way to report delivery) at some point or another.
    I feel like your question is a bit broad (meaning too generic) to address as you have phrased it. Even some of the suggestions already given...
    Array fetch size - this determines the number of rows fetched at a single pass. You really don't need to modify this unless your network is giving issues. I have seen folks suggest setting this to one (which results in a lot of network requests) or 500 (which results in fewer requests but they're much MUCH larger). Does either improve performance? They might, or they might make it worse. Without understanding how your network traffic is managed it's hard to say.
    Shortcut joins? Sure, they can help, as long as they are appropriate. [Many times they are not.|http://www.dagira.com/2010/05/27/everything-about-shortcut-joins/]
    And I could go on and on. The bottom line is that performance tuning doesn't typically fall into a "cookie cutter" approach. It would be better to have a specific question.
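    To put rough numbers on the trade-off described above (the row count and fetch sizes below are illustrative assumptions, not figures from this thread): the number of network round trips is simply the row count divided by the array fetch size, which you can sanity-check with a throwaway query.
    -- Hypothetical 500,000-row result set (on Oracle, append FROM dual)
    SELECT 500000 / 1   AS round_trips_at_fetch_size_1,
           500000 / 500 AS round_trips_at_fetch_size_500;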

  • Performance Degradation - High Fetches and Parses

    Hello,
    My analysis of a particular job trace file drew my attention towards:
    1) A high rate of parses instead of bind variable usage.
    2) High fetches and a low number of rows being processed.
    Please let me know how the performance degradation can be minimised. Perhaps the high number of SQL*Net client wait events is due to multiple fetches and transactions with the client.
    EXPLAIN PLAN FOR SELECT /*+ FIRST_ROWS (1)  */ * FROM  SAPNXP.INOB
    WHERE MANDT = :A0
    AND KLART = :A1
    AND OBTAB = :A2
    AND OBJEK LIKE :A3 AND ROWNUM <= :A4;
    call     count       cpu    elapsed       disk      query    current        rows
    Parse      119      0.00       0.00          0          0          0           0
    Execute    239      0.16       0.13          0          0          0           0
    Fetch      239   2069.31    2127.88          0   13738804          0           0
    total      597   2069.47    2128.01          0   13738804          0           0
    PLAN_TABLE_OUTPUT
    Plan hash value: 1235313998
    | Id  | Operation                    | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT             |        |     2 |   268 |     1   (0)| 00:00:01 |
    |*  1 |  COUNT STOPKEY               |        |       |       |            |          |
    |*  2 |   TABLE ACCESS BY INDEX ROWID| INOB   |     2 |   268 |     1   (0)| 00:00:01 |
    |*  3 |    INDEX SKIP SCAN           | INOB~2 |  7514 |       |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter(ROWNUM<=TO_NUMBER(:A4))
       2 - filter("OBJEK" LIKE :A3 AND "KLART"=:A1)
       3 - access("MANDT"=:A0 AND "OBTAB"=:A2)
           filter("OBTAB"=:A2)
    18 rows selected.
    SQL> SELECT INDEX_NAME,TABLE_NAME,COLUMN_NAME FROM DBA_IND_COLUMNS WHERE INDEX_OWNER='SAPNXP' AND INDEX_NAME='INOB~2';
    INDEX_NAME      TABLE_NAME                     COLUMN_NAME
    INOB~2          INOB                           MANDT
    INOB~2          INOB                           CLINT
    INOB~2          INOB                           OBTAB
    Is it possible to maximise the rows per fetch?
    call     count       cpu    elapsed       disk      query    current        rows
    Parse      163      0.03       0.00          0          0          0           0
    Execute    163      0.01       0.03          0          0          0           0
    Fetch   174899     55.26      59.14          0    1387649          0     4718932
    total   175225     55.30      59.19          0    1387649          0     4718932
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 27
    Rows     Row Source Operation
      28952  TABLE ACCESS BY INDEX ROWID EDIDC (cr=8505 pr=0 pw=0 time=202797 us)
      28952   INDEX RANGE SCAN EDIDC~1 (cr=1457 pr=0 pw=0 time=29112 us)(object id 202995)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                  174899        0.00          0.16
      SQL*Net more data to client                155767        0.01          5.69
      SQL*Net message from client                174899        0.11        208.21
      latch: cache buffers chains                     2        0.00          0.00
      latch free                                      4        0.00          0.00
    ********************************************************************************

    user4566776 wrote:
    My analysis on a particular job trace file drew my attention towards:
    1) High rate of Parses instead of Bind variables usage.
    But if you look at the text you are using bind variables.
    The first query is executed 239 times - which matches the 239 fetches. You cut off some of the useful information from the tkprof output, but the figures show that you're executing more than once per parse call. The time is CPU time spent using a bad execution plan to find no data -- this looks like a bad choice of index, possibly a side effect of the first_rows(1) hint.
    2) High fetches and poor number/ low number of rows being processedThe second query is doing a lot of fetches because in 163 executions it is fetching 4.7 million rows at roughly 25 rows per fetch. You might improve performance a little by increasing the array fetch size - but probably not by more than a factor of 2.
    You'll notice that even though you record 163 parse calls for the second statement the number of " Misses in library cache during parse" is zero - so the parse calls are pretty irrelevant, the cursor is being re-used.
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    "Science is more than a body of knowledge; it is a way of thinking"
    Carl Sagan
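    For readers wondering how the array fetch size mentioned in this reply is actually changed on the Oracle client side, here is a minimal SQL*Plus illustration (assuming the statements are re-run from SQL*Plus for testing; other clients and drivers expose an equivalent fetch or prefetch setting). SHOW ARRAYSIZE prints the default of 15 rows per fetch, and SET ARRAYSIZE raises it to 100 rows per round trip for the session:
    SQL> SHOW ARRAYSIZE
    arraysize 15
    SQL> SET ARRAYSIZE 100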

  • Performance tuning in ODI

    Hi,
    Can anyone help with how to do performance tuning in ODI? What is the procedure for performance tuning, and what needs to be checked? Is there any document on ODI performance tuning?
    Thanks,
    Gnanamanju

    Hi Gnanamanju,
    Let me try to contribute a little. ODI performance strategy says:
    1. It is recommended to run the agent on the target host rather than the source, in case your target is a remote host.
    2. Select appropriate KMs to load data. For example, if it is a simple file-to-Oracle transfer, it is recommended to use native DB utilities like SQL*Loader, external tables and so on (see the sketch after this reply).
    3. If your staging and target are on the same server, make sure that in the physical architecture you selected the correct schemas (Data Schema and Work Schema).
    4. Choose the right area of execution for joins and filters. If your source has more records, have the filter condition executed on the SOURCE rather than on staging.
    5. Increase the Array Fetch Size and Batch Fetch Size of your data server (so that the agent can fetch/insert the data accordingly).
    And more....
    Thanks,
    Guru
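    As a rough illustration of point 2 in the reply above, this is roughly what an Oracle external table over a flat file looks like. It is only a sketch: the directory object, file name and columns are made-up assumptions, not objects taken from this thread.
    -- Assumes a directory object exists, e.g.: CREATE DIRECTORY data_dir AS '/data/in';
    CREATE TABLE ext_customers (
      customer_id NUMBER,
      name        VARCHAR2(100)
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY data_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ','
      )
      LOCATION ('customers.csv')
    );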

  • Deski reports performance issue

    After an upgrade from BO XI Release 2 SP3 to BO XI Release 3.1 SP3, all the Deski reports are very slow when we try to refresh them using InfoView.
    For example, a specific Deski report needs two minutes from Desktop Intelligence and twenty (!) minutes from InfoView. All the migrated Deski reports show the same behavior. Do you have any idea/workaround?

    Hi,
    - Set the array fetch size to 500 and the array bind size to 32767 in the universe parameters.
    The following links will be helpful.
    http://www.forumtopics.com/busobj/viewtopic.php?t=142973&sid=0a48878553739783e77ca43ae06c5cdb
    Performance issue with Deski in three tier mode
    Regards,
    Vamsee

  • Invalid Fetch Size

    Hi,
    I am connecting to a Progress database from ODI 11.1.1.1.7 using jdbcodbcdriver. However, when I try to fetch data from the tables, I get the error below.
    java.sql.SQLException: Invalid Fetch Size
    Do I need to set the fetch size to a particular value? If so, where, and what should this value be set to?
    Thanks,
    Parag

    Hi Parag,
    In the physical topology, double-click on your data server. You should see Array Fetch Size and Batch Update Size. For Oracle RDBMS the default is 30, so you might try that.
    Regards,
    JeromeFr

  • Performance tuning of this report

    Hello friends, I am attaching my report. Please suggest performance tuning for this report so that it avoids nested loops. How can this be done without using nested ENDLOOPs?
    Please give me a reply urgently.
    REPORT  ZDEMO9          NO STANDARD PAGE HEADING
                            LINE-SIZE 250
                            LINE-COUNT 22(3).
*               TABLES DECLARATION                    *
    TABLES : MARA,              "general material data
             MAKT,              "material description
             MARC,              "plant data for material
             VBAP,              "sales document for item data
             EKKO,              "purchasing document header
             EKPO,              "purchasing document item
             KNA1.              "customer master details
*               INTERNAL TABLE DECLARATION             *
    DATA : BEGIN OF T_MARA OCCURS 0,
           MATNR LIKE MARA-MATNR,
           MTART LIKE MARA-MTART,
           MEINS LIKE MARA-MEINS,
           END OF T_MARA.
    DATA : BEGIN OF T_MAKT OCCURS 0,
           MATNR LIKE MAKT-MATNR,
           MAKTX LIKE MAKT-MAKTX,
           SPRAS LIKE MAKT-SPRAS,
           END OF T_MAKT.
    DATA : BEGIN OF T_MARC OCCURS 0,
           MATNR LIKE MARC-MATNR,
           WERKS LIKE MARC-WERKS,
           END OF T_MARC.
    DATA : BEGIN OF T_KNA1 OCCURS 0,
           KUNNR LIKE KNA1-KUNNR,
           NAME1 LIKE KNA1-NAME1,
           LAND1 LIKE KNA1-LAND1,
           END OF T_KNA1.
    DATA : BEGIN OF T_VBAP OCCURS 0,
           MATNR LIKE VBAP-MATNR,
           POSNR LIKE VBAP-POSNR,
           MATKL LIKE VBAP-MATKL,
           VBELN LIKE VBAP-VBELN,
           END OF T_VBAP.
    DATA : BEGIN OF T_EKPO OCCURS 0,
           EBELN LIKE EKPO-EBELN,
           EBELP LIKE EKPO-EBELP,
           BUKRS LIKE EKPO-BUKRS,
           WERKS LIKE EKPO-WERKS,
           LGORT LIKE EKPO-LGORT,
           MATNR LIKE EKPO-MATNR,
           MANDT LIKE EKPO-MANDT,
           END OF T_EKPO.
*                    FINAL INTERNAL TABLE                *
    DATA : BEGIN OF T_FINAL OCCURS 0,
           MATNR LIKE MARA-MATNR,
           MTART LIKE MARA-MTART,
           MEINS LIKE MARA-MEINS,
           WERKS LIKE MARC-WERKS,
           MAKTX LIKE MAKT-MAKTX,
           SPRAS LIKE MAKT-SPRAS,
           VBELN LIKE VBAP-VBELN,
           POSNR LIKE VBAP-POSNR,
           MATKL LIKE VBAP-MATKL,
           EBELN LIKE EKPO-EBELN,
           EBELP LIKE EKPO-EBELP,
           BUKRS LIKE EKPO-BUKRS,
           KUNNR LIKE KNA1-KUNNR,
           LAND1 LIKE KNA1-LAND1,
           NAME1 LIKE KNA1-NAME1,
           LGORT LIKE EKPO-LGORT,
           END OF T_FINAL.
*DATA: BEGIN OF V_matnr OCCURS 0,
*       matnr LIKE mara-matnr,
*     END OF t_matnr.
    data:
          a(32) type c.
    a = 'IBT000000000000000001000000000000000050'.
*                      SELECTION SCREEN                         *
    SELECTION-SCREEN BEGIN OF BLOCK B1 WITH FRAME TITLE TEXT-001.
    SELECT-OPTIONS : S_BUKRS FOR EKPO-BUKRS,
                     S_KUNNR FOR KNA1-KUNNR,
                     S_WERKS FOR MARC-WERKS,
                     S_MATNR FOR MARA-MATNR obligatory.
    SELECTION-SCREEN END OF BLOCK B1.
*                    START OF SELECTION                           *
    START-OF-SELECTION.
      SELECT MATNR mtart meins
              FROM MARA
              INTO CORRESPONDING FIELDS OF TABLE T_MARA
              WHERE MATNR IN S_MATNR.
      SELECT MATNR WERKS
              FROM MARC
              INTO CORRESPONDING FIELDS OF TABLE T_MARC
              FOR ALL ENTRIES IN T_MARA
              WHERE MATNR = T_MARA-MATNR
              and werks in s_werks.
      select  matnr maktx spras
            from makt
            into corresponding fields of table t_makt
            for all entries in t_mara
            where matnr = t_mara-matnr
            and spras = sy-langu.
      select matnr posnr matkl vbeln
             from vbap
             into corresponding fields of table t_vbap
             for all entries in t_mara
             where matnr = t_mara-matnr.
    select matnr werks bukrs ebeln ebelp lgort
             from ekpo
             into corresponding fields of table t_ekpo
             for all entries in t_mara
             where matnr = t_mara-matnr
             and werks in s_werks.
      LOOP AT T_MARA.
        MOVE T_MARA-matnr TO T_FINAL-matnr.
        move t_mara-mtart to t_final-mtart.
        move t_mara-meins to t_final-meins.
        loop at t_marc where matnr eq t_mara-matnr.
          move t_marc-werks to t_final-werks.
          loop at t_makt.
            move t_makt-maktx to t_final-maktx.
            move t_makt-spras to t_final-spras.
            loop at t_vbap.
              move t_vbap-posnr to t_final-posnr.
              move t_vbap-matkl to t_final-matkl.
              move t_vbap-vbeln to t_final-vbeln.
            loop at t_ekpo.
            move t_ekpo-bukrs to t_final-bukrs.
            move t_ekpo-ebeln to t_final-ebeln.
            move t_ekpo-ebelp to t_final-ebelp.
            move t_ekpo-lgort to t_final-lgort.
              append t_final.
            endloop.
          endloop.
        endloop.
      endloop.
      endloop.
  SELECT KUNNR LAND1 NAME1
  INTO CORRESPONDING FIELDS OF TABLE T_KNA1
  FROM KNA1.                       "KNA1 has no WERKS field, so no plant filter is possible here
      loop at t_kna1.
        move t_kna1-kunnr to t_final-kunnr.
        move t_kna1-name1 to t_final-name1.
        move t_kna1-land1 to t_final-land1.
        append t_final.
      endloop.
      "endloop.
      loop at t_final.
        write :   4 t_final-matnr,
                 20 t_final-mtart,
                 28 t_final-meins,
                 46 t_final-werks,
                 58 t_final-maktx,
                 71 t_final-spras,
                 78 t_final-posnr,
                100 t_final-matkl,
                115 t_final-vbeln,
                130 t_final-kunnr,
                142 t_final-name1,
                156 t_final-land1,
                168 t_final-bukrs,
                190 t_final-ebeln,
                205 t_final-ebelp,
                208 t_final-lgort.
      endloop.
*                 TOP-OF-PAGE                       *
    top-of-page.
      uline.
      write : /60 'G E N E R A L   D E T A I L S' COLOR 2 INVERSE OFF.
      ULINE.
      write :/ SY-VLINE,    'MATERIAL'       COLOR 4, "12 SY-VLINE,
            13 SY-VLINE,    'IND SECTOR',
            28 SY-VLINE,    'UNITS',
            43 SY-VLINE,    'PLANT',
            55 SY-VLINE,    'MAT DESC',
            68 SY-VLINE,    'LANGU',
            70 SY-VLINE,    'SALES DOC ITEM',
            95 SY-VLINE,    'MAT GROUP',
           110 SY-VLINE,    'SALES DOC',
           125 SY-VLINE,    'CUST ID',
           140 SY-VLINE,    'NAME',
           155 SY-VLINE,    'COUNTRY',
           165 sy-vline,    'company code',
           205 sy-vline,    'storage loc'.

    *& Report  YTESTCHA                                                    *
    REPORT ytestcha  NO STANDARD PAGE HEADING
    LINE-SIZE 250
     LINE-COUNT 22(3).
*    TABLES DECLARATION    *
    TABLES : mara, "general material data
    makt, "material description
    marc, "plant data for material
    vbap, "sales document for item data
    ekko, "purchasing document header
    ekpo, "purchasing document item
    kna1. "customer master details
*    INTERNAL TABLE DECLARATION    *
    *DECLARE TYPES FIRST AND THE INTERNAL TABLES
    *DONT USE MATNR LIKE MARA-MATNR ,INSTEAD USE MARA TYPE MATNR WHERE MATNR
    *IS THE DATA ELEMENT FOR FIELD MATNR.
    TYPES: BEGIN OF ty_mara,
             matnr TYPE matnr,
             mtart TYPE mtart,
             meins TYPE meins,
             kunnr TYPE wettb,
           END OF ty_mara.
    TYPES: BEGIN OF ty_makt,
            matnr TYPE matnr,
            maktx TYPE maktx,
            spras TYPE spras,
           END OF ty_makt.
    TYPES: BEGIN OF ty_marc,
            matnr TYPE matnr,
            werks TYPE werks_d,
            END OF ty_marc.
    TYPES : BEGIN OF ty_kna1,
              kunnr TYPE kunnr,
              name1 TYPE name1_gp,
              land1 TYPE land1_gp,
              END OF ty_kna1.
    TYPES: BEGIN OF ty_vbap,
             matnr TYPE matnr,
             posnr TYPE posnr_va,
             matkl TYPE matkl,
             vbeln TYPE vbeln_va,
             END OF ty_vbap.
    TYPES: BEGIN OF ty_ekpo,
             ebeln TYPE ebeln,
             ebelp TYPE ebelp,
             bukrs TYPE bukrs,
             werks TYPE werks_d,
             lgort TYPE lgort_d,
             matnr TYPE matnr,
             mandt TYPE mandt,
            END OF ty_ekpo.
    DATA:t_mara TYPE TABLE OF ty_mara WITH HEADER LINE,
         t_makt TYPE TABLE OF ty_makt WITH HEADER LINE,
         t_marc TYPE TABLE OF ty_marc WITH HEADER LINE,
         t_kna1 TYPE TABLE OF ty_kna1 WITH HEADER LINE,
         t_vbap TYPE TABLE OF ty_vbap WITH HEADER LINE,
         t_ekpo TYPE TABLE OF ty_ekpo WITH HEADER LINE.
*    FINAL INTERNAL TABLE    *
    TYPES: BEGIN OF ty_final,
         matnr TYPE matnr,
         mtart TYPE mtart,
         meins TYPE meins,
         werks TYPE werks_d,
         maktx TYPE maktx,
         spras TYPE spras,
         vbeln TYPE vbeln_va,
         posnr TYPE posnr_va,
         matkl TYPE matkl,
         ebeln TYPE ebeln,
         ebelp TYPE ebelp,
         bukrs TYPE bukrs,
         kunnr TYPE kunnr,
         land1 TYPE land1_gp,
         name1 TYPE name1_gp,
         lgort TYPE lgort_d,
         END OF ty_final.
    DATA : t_final TYPE TABLE OF ty_final WITH HEADER LINE.
*DATA: BEGIN OF V_matnr OCCURS 0,
*matnr LIKE mara-matnr,
*END OF t_matnr.
    DATA:
    a(32) TYPE c.
    a = 'IBT000000000000000001000000000000000050'.
*    SELECTION SCREEN    *
    SELECTION-SCREEN BEGIN OF BLOCK b1 WITH FRAME TITLE text-001.
    SELECT-OPTIONS : s_bukrs FOR ekpo-bukrs,
                     s_kunnr FOR kna1-kunnr,
                     s_werks FOR marc-werks,
                     s_matnr FOR mara-matnr OBLIGATORY.
    SELECTION-SCREEN END OF BLOCK b1.
*    START OF SELECTION    *
    START-OF-SELECTION.
    *USE SUBROUTINES
    *get data
      PERFORM get_data.
    *populate final table
      PERFORM populate_final_table.
    END-OF-SELECTION.
    *display output
      PERFORM display_output.
*    TOP-OF-PAGE    *
    TOP-OF-PAGE.
      ULINE.
      WRITE : /60 'G E N E R A L D E T A I L S' COLOR 2 INVERSE OFF.
      ULINE.
      WRITE :/ sy-vline, 'MATERIAL' COLOR 4, "12 SY-VLINE,
      13 sy-vline, 'IND SECTOR',
      28 sy-vline, 'UNITS',
      43 sy-vline, 'PLANT',
      55 sy-vline, 'MAT DESC',
      68 sy-vline, 'LANGU',
      70 sy-vline, 'SALES DOC ITEM',
      95 sy-vline, 'MAT GROUP',
      110 sy-vline, 'SALES DOC',
      125 sy-vline, 'CUST ID',
      140 sy-vline, 'NAME',
      155 sy-vline, 'COUNTRY',
      165 sy-vline, 'company code',
      205 sy-vline, 'storage loc'.
    *&      Form  GET_DATA
*       text
*  -->  p1        text
*  <--  p2        text
    *TRY TO CLEAR AND REFRESH TABLES BEFORE SELECT
    FORM get_data .
      CLEAR t_mara.
      REFRESH t_mara.
      SELECT matnr
             mtart
             meins
             kunnr
             FROM mara
             INTO TABLE t_mara
             WHERE matnr IN s_matnr.
      IF NOT t_mara[] IS INITIAL.
        CLEAR t_marc.
        REFRESH t_marc.
        SELECT matnr
               werks
               FROM marc
               INTO TABLE t_marc
               FOR ALL ENTRIES IN t_mara
               WHERE matnr = t_mara-matnr
                     AND werks IN s_werks.
        CLEAR t_makt.
        REFRESH t_makt.
        SELECT matnr
               maktx
               spras
               FROM makt
               INTO TABLE t_makt
               FOR ALL ENTRIES IN t_mara
               WHERE matnr = t_mara-matnr
               AND spras = sy-langu.
        CLEAR t_vbap.
        REFRESH t_vbap.
        SELECT matnr
               posnr
               matkl
               vbeln
               FROM vbap
               INTO TABLE t_vbap
               FOR ALL ENTRIES IN t_mara
               WHERE matnr = t_mara-matnr.
        CLEAR t_ekpo.
        REFRESH t_ekpo.
        SELECT ebeln
               ebelp
               bukrs
               werks
               lgort
               matnr
               mandt
               FROM ekpo
               INTO TABLE t_ekpo
               FOR ALL ENTRIES IN t_mara
               WHERE matnr = t_mara-matnr
               AND werks IN s_werks.
      ENDIF.
      CLEAR t_kna1.
      REFRESH t_kna1.
      SELECT kunnr
             land1
             name1
             INTO  TABLE t_kna1
          FROM kna1.             "KNA1 has no WERKS field, so no plant filter is possible here
    ENDFORM.                    " GET_DATA
    *&      Form  POPULATE_FINAL_TABLE
*       text
*  -->  p1        text
*  <--  p2        text
    FORM populate_final_table .
    *AVOID LOOPS AND TRY  TO USE READ
      CLEAR t_final.
      REFRESH t_final.
      LOOP AT t_mara.
        MOVE t_mara-matnr TO t_final-matnr.
        MOVE t_mara-mtart TO t_final-mtart.
        MOVE t_mara-meins TO t_final-meins.
        READ TABLE t_marc WITH KEY matnr = t_mara-matnr.
        MOVE t_marc-werks TO t_final-werks.
        READ TABLE t_makt WITH KEY matnr = t_mara-matnr.
        MOVE t_makt-maktx TO t_final-maktx.
        MOVE t_makt-spras TO t_final-spras.
        READ TABLE t_vbap WITH KEY matnr = t_mara-matnr.
        MOVE t_vbap-posnr TO t_final-posnr.
        MOVE t_vbap-matkl TO t_final-matkl.
        MOVE t_vbap-vbeln TO t_final-vbeln.
        READ TABLE t_ekpo WITH KEY matnr = t_mara-matnr.
        MOVE t_ekpo-bukrs TO t_final-bukrs.
        MOVE t_ekpo-ebeln TO t_final-ebeln.
        MOVE t_ekpo-ebelp TO t_final-ebelp.
        MOVE t_ekpo-lgort TO t_final-lgort.
        READ TABLE t_kna1 WITH KEY kunnr  = t_mara-kunnr.
        MOVE t_kna1-kunnr TO t_final-kunnr.
        MOVE t_kna1-name1 TO t_final-name1.
        MOVE t_kna1-land1 TO t_final-land1.
        APPEND t_final.
        CLEAR :t_final,t_mara,t_marc,t_makt,t_ekpo,t_vbap.
      ENDLOOP.
    ENDFORM.                    " POPULATE_FINAL_TABLE
    *&      Form  DISPLAY_OUTPUT
*       text
*  -->  p1        text
*  <--  p2        text
    FORM display_output .
      LOOP AT t_final.
        WRITE : 4 t_final-matnr,
        20 t_final-mtart,
        28 t_final-meins,
        46 t_final-werks,
        58 t_final-maktx,
        71 t_final-spras,
        78 t_final-posnr,
        100 t_final-matkl,
        115 t_final-vbeln,
        130 t_final-kunnr,
        142 t_final-name1,
        156 t_final-land1,
        168 t_final-bukrs,
        190 t_final-ebeln,
        205 t_final-ebelp,
        208 t_final-lgort.
      ENDLOOP.
    ENDFORM.                    " DISPLAY_OUTPUT

  • Report running for long time & performance tuning

    Hi All,
    (1). A WebI report is running for a long time. What are the steps I need to check for it?
    (2). Can you tell me about performance tuning in BO?
    Please help me.
    Thanks
    Kumar

    (1). A WebI report is running for a long time. What are the steps I need to check for it?
    The first step is to see if the problem lies in the query on the data source or in WebI itself. Depending on the data source there are different ways to extract the query and try to run it against the database. Which source does your report use?
    (2). Can you tell me about performance tuning in BO?
    I would recommend starting by reading the administrator's guide. There is a section about how to improve performance.
    Regards,
    Stratos

  • Need clear steps for doing performance tuning on SQL Server 2008 R2 (DB Engine, Reporting Services and Integration Services)

    We have to investigate a reporting solution where things are getting slow (it may be hardware, database design, or network matters).
    I have read a lot in MSDN and some books about performance tuning on SQL Server 2008 R2 (and others), but frankly, I feel a little lost in all that stuff.
    I'm looking for practical steps in order to do the tuning. Has someone got something like a recipe for that, a success story?
    My (brainstorm) methodology should follow these steps:
     Resource bottlenecks: CPU, memory, and I/O bottlenecks
     tempdb bottlenecks
     A slow-running user query: missing indexes, statistics, ...
     Use performance counters: there are many; can someone give us a list of the most important ones?
     How to do fine tuning of the SQL Server configuration
     SSRS, SSIS configuration?
    And then make the recommendations.
    Thanks
    "there is no Royal Road to Mathematics, in other words, that I have only a very small head and must live with it..."
    Edsger W. Dijkstra

    Hello,
    There is no clearly defined set of steps that can be categorized as step-by-step performance tuning. Your first goal is to find the cause, or drill down to the factor causing the slowness of SQL Server: it can be a poorly written query, missing indexes, outdated stats, a RAM crunch, a CPU crunch, and so on and so forth.
    I generally refer to the doc below for SQL Server tuning:
    http://technet.microsoft.com/en-us/library/dd672789(v=sql.100).aspx
    For SSIS tuning I refer to the docs below:
    http://technet.microsoft.com/library/Cc966529#ECAA
    http://msdn.microsoft.com/en-us/library/ms137622(v=sql.105).aspx
    When I face an issue I generally look at wait stats; wait stats give you an idea of what resource a query was waiting on.
    -- By Jonathan Kehayias
    SELECT TOP 10
    wait_type ,
    max_wait_time_ms wait_time_ms ,
    signal_wait_time_ms ,
    wait_time_ms - signal_wait_time_ms AS resource_wait_time_ms ,
    100.0 * wait_time_ms / SUM(wait_time_ms) OVER ( )
    AS percent_total_waits ,
    100.0 * signal_wait_time_ms / SUM(signal_wait_time_ms) OVER ( )
    AS percent_total_signal_waits ,
    100.0 * ( wait_time_ms - signal_wait_time_ms )
    / SUM(wait_time_ms) OVER ( ) AS percent_total_resource_waits
    FROM sys.dm_os_wait_stats
    WHERE wait_time_ms > 0 -- remove zero wait_time
    AND wait_type NOT IN -- filter out additional irrelevant waits
    ( 'SLEEP_TASK', 'BROKER_TASK_STOP', 'BROKER_TO_FLUSH',
    'SQLTRACE_BUFFER_FLUSH','CLR_AUTO_EVENT', 'CLR_MANUAL_EVENT',
    'LAZYWRITER_SLEEP', 'SLEEP_SYSTEMTASK', 'SLEEP_BPOOL_FLUSH',
    'BROKER_EVENTHANDLER', 'XE_DISPATCHER_WAIT', 'FT_IFTSHC_MUTEX',
    'CHECKPOINT_QUEUE', 'FT_IFTS_SCHEDULER_IDLE_WAIT',
    'BROKER_TRANSMITTER', 'FT_IFTSHC_MUTEX', 'KSOURCE_WAKEUP',
    'LAZYWRITER_SLEEP', 'LOGMGR_QUEUE', 'ONDEMAND_TASK_QUEUE',
    'REQUEST_FOR_DEADLOCK_SEARCH', 'XE_TIMER_EVENT', 'BAD_PAGE_PROCESS',
    'DBMIRROR_EVENTS_QUEUE', 'BROKER_RECEIVE_WAITFOR',
    'PREEMPTIVE_OS_GETPROCADDRESS', 'PREEMPTIVE_OS_AUTHENTICATIONOPS',
    'WAITFOR', 'DISPATCHER_QUEUE_SEMAPHORE', 'XE_DISPATCHER_JOIN',
    'RESOURCE_QUEUE' )
    ORDER BY wait_time_ms DESC
    Use the link below to analyze wait stats:
    http://www.sqlskills.com/blogs/paul/wait-statistics-or-please-tell-me-where-it-hurts/
    HTH
    PS: for reporting services you can post in SSRS forum

  • Report fetched by windows OS report "Performance by System"

    When we fetch the "Performance by System" report from the reporting pane under Windows Server operating system reporting, sometimes the data shown in the columns comes in yellow and red (which indicate warning and critical respectively).
    My question is whether the thresholds for these performance rules are defined in the perfmon collection rules or in the rules from which the alerts get generated.
    Appreciate your help.

    For the threshold of these performance rules, click on the alert and display its details; the details show the threshold for that alert.
    For more details, you can refer below link
    http://technet.microsoft.com/en-us/library/hh457556.aspx
    http://technet.microsoft.com/en-us/library/cc180267.aspx
    http://download.doubletake.com/_download/dt53/docs/RecoverNow/User%27s%20Guide/Content/SCOM.htm
    Mai Ali | My blog: Technical | Twitter:
    Mai Ali

  • Performance Tuning for OBIEE Reports

    Hi Experts,
    I had a requirement for which I ended up building a snowflake model in the Physical layer, i.e. one dimension table with three snowflake tables (materialized views).
    The key point is that the dimension table is used in most of the OOTB reports,
    so all the reports use the other three snowflake tables in their join conditions, due to which the reports take longer than ever, around 10 minutes.
    Can anyone suggest good performance tuning tips to tune the reports?
    I created some indexes on the materialized view columns and on the dimension table columns.
    I created the materialized views with cache enabled and they refresh only once in 24 hours, etc.
    Is there anything else I can do to improve performance, or do I have to consider re-designing the Physical layer without the snowflake?
    Please provide valuable suggestions and comments.
    Thank You
    Kumar

    Kumar,
    Most of the performance tuning should be done at the back end, so calculate all the aggregates in the repository itself and create a fast refresh for the MVs (a rough sketch follows below this reply). You can also do one more thing: schedule an iBot to run the report every hour or so, so that the report data gets cached and, when the user runs the report, the BI Server extracts the data from the cache.
    Hope that helps
    ~Srix
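    To make the fast-refresh suggestion above a little more concrete, here is a minimal Oracle sketch. The table and column names are invented for illustration and are not from this thread; note that fast refresh on commit also requires a materialized view log on the base table.
    -- Materialized view log needed for fast refresh of an aggregate MV
    CREATE MATERIALIZED VIEW LOG ON sales_fact
      WITH ROWID, SEQUENCE (product_id, amount)
      INCLUDING NEW VALUES;
    -- Aggregate MV that Oracle can fast-refresh as the fact table changes
    CREATE MATERIALIZED VIEW mv_sales_by_product
      BUILD IMMEDIATE
      REFRESH FAST ON COMMIT
    AS
      SELECT product_id,
             SUM(amount)   AS total_amount,
             COUNT(amount) AS cnt_amount,
             COUNT(*)      AS row_cnt
      FROM sales_fact
      GROUP BY product_id;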

  • Performance Tuning for Concurrent Reports

    Hi,
    Can you help me with performance tuning for concurrent reports/requests?
    It was running fine but suddenly started running slow.
    Request Name: Participation Process: Compensation program

    What is your application release?
    Please see if (Performance Issues With Participation Process: Compensation Workbench [ID 389979.1]) is applicable.
    To enable trace/debug, please see (FAQ: Common Tracing Techniques within the Oracle Applications 11i/R12 [ID 296559.1] -- 5. How does one enable trace for a concurrent program INCLUDING bind variables and waits?).
    Thanks,
    Hussein
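    For what it's worth, the "bind variables and waits" trace that the referenced note walks you through corresponds to Oracle's extended SQL trace at level 12. A minimal session-level illustration for quick testing from SQL*Plus (an assumption on my part; for the concurrent request itself, follow the steps in the note):
    ALTER SESSION SET tracefile_identifier = 'cwb_test';
    -- level 12 captures both bind variables and wait events
    ALTER SESSION SET events '10046 trace name context forever, level 12';
    -- ...run the slow statement here, then switch tracing off:
    ALTER SESSION SET events '10046 trace name context off';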

  • Report performance while creating report on BEx

    Hi all!
    I am creating a report on BOE 4.0 on top of a BEx connection as the source. I have developed reports on top of universes in the past, and I know that if we keep calculations on the reporting end it hampers report performance. Is this the same case with BEx? Following best practices, is it OK to say that we should keep all heavy calculations/aggregations in BEx or on the back end for better report performance?
    Can you guys please provide your opinion based on your experience and knowledge. Any feedback will help! Thanks.

    Hi,
    It is definitely best practice to delegate a maximum of CKFs to the cube where possible, and to put RKFs, and filters too, in the BEx query.
    Also, add default values to your variables (this will speed up generation of the BICS transient universe).
    Also, since Patch 2.10 we are seeing some significant performance improvements, reducing 'document initialization' and 'time to prompts' by up to 50% (steps such as these often took 1.5 minutes, even on sized systems).
    Also, make sure you have BW corrections like this implemented: 1593802 - Performance optimization when loading query views.
    In the BusinessObjects landscape - especially with BI 4.0 - it's all about sizing and tuning. Here is your bible, the 'sizing companion' guide: http://service.sap.com/~form/sapnet?_SHORTKEY=01100035870000738725&_OBJECT=011000358700000307202011E
    Pay particular attention to the BICSChunkSize registry settings,
    and also to the -Xmx JVM heap size for the Adaptive Processing Server that is running the DSL_Bridge service.
    Regards,
    H
