UNION ALL and UNION performance issue

Hi All,
I am trying to identify the records for which only a receive transaction has been done and further processing is pending. These transactions include all POs, RMAs, ISOs, etc.
I have to use UNION ALL in this case because, for RMA and ISO, the details I want cannot be gathered in a single query.
But the query is taking a lot of time: around 30 minutes with UNION ALL, versus 6 to 7 minutes with UNION.
To get all records I must use UNION ALL.
So kindly suggest a solution for this problem.
Thanks
Sachin
Query is given below...
SELECT /*+ FIRST_ROWS */ DECODE(rsl.SOURCE_DOCUMENT_CODE,'REQ',(SELECT org1.ORGANIZATION_NAME
                                                       FROM     org_organization_definitions org1
                                                       WHERE org1.ORGANIZATION_ID =
                                                       rsl.FROM_ORGANIZATION_ID)) Vendor_Name
,rsh.RECEIPT_NUM Receipt_Number
     ,TO_CHAR(rt3.TRANSACTION_DATE,'Mon-DD-YYYY HH24:MI:SS') Receipt_Date_and_Time
     ,msi.SEGMENT1 Part_Number
     ,msi.DESCRIPTION Part_Name
     ,rt3.QUANTITY Quantity
     ,rt3.UNIT_OF_MEASURE UOM
     ,NULL ASL_Status
     --for ISO no asl flag ASL Flag
     ,TO_CHAR(TRUNC((((86400*(SYSDATE-rt3.TRANSACTION_DATE))/60)/60)/24))|| ' Days ' || TO_CHAR(TRUNC(((86400*(SYSDATE-rt3.TRANSACTION_DATE))/60)/60)-24*(TRUNC((((86400*(SYSDATE-rt3.TRANSACTION_DATE))/60)/60)/24)))|| ' Hours' Days_and_hours_passed
     ,DECODE(
                    NVL(msi.max_minmax_quantity,0) ,
                    0 , 0 ,
                    (NVL(msi.max_minmax_quantity,0) -
                    NVL(inmohqd.onhand,0))
                         * 100
                         / NVL(msi.max_minmax_quantity,0)
                    ) gap_percent
FROM rcv_transactions rt3
     ,rcv_shipment_headers rsh
     ,rcv_shipment_lines rsl
     ,mtl_system_items msi
     ,org_organization_definitions org
     --,MTL_ONHAND_QUANTITIES_DETAIL moqhd
     ,(SELECT NVL(SUM(primary_transaction_quantity),0) onhand,INVENTORY_ITEM_ID item_id,ORGANIZATION_ID organization_id
     FROM      mtl_onhand_quantities_detail
     WHERE SUBINVENTORY_CODE NOT IN ('Wip_SF','Wip_Int','Reject','Scrap','FG Trading','FG')
     GROUP BY INVENTORY_ITEM_ID, ORGANIZATION_ID) inmohqd
WHERE inmohqd.item_id(+) = msi.INVENTORY_ITEM_ID
     AND inmohqd.organization_id(+) = msi.ORGANIZATION_ID
     --AND inmoqhd.SUBINVENTORY_CODE NOT IN  ('Wip_SF','Wip_Int','Reject','Scrap','FG Trading','FG')
     AND msi.INVENTORY_ITEM_ID = rsl.ITEM_ID
     AND rsh.SHIPMENT_HEADER_ID = rsl.SHIPMENT_HEADER_ID
     AND org.ORGANIZATION_ID = rt3.ORGANIZATION_ID
     AND msi.ORGANIZATION_ID = rt3.ORGANIZATION_ID
     AND rsh.SHIPMENT_HEADER_ID = rt3.SHIPMENT_HEADER_ID
     AND rsl.SHIPMENT_HEADER_ID = rt3.SHIPMENT_HEADER_ID
     AND rsl.SHIPMENT_LINE_ID = rt3.SHIPMENT_LINE_ID
     AND rt3.PO_HEADER_ID IS NULL
     AND TRUNC(rt3.TRANSACTION_DATE) <= TRUNC(p_tilldate)
     AND rsl.TO_ORGANIZATION_ID = p_organization_id
     AND rsh.ORGANIZATION_ID = p_organization_id
     AND CONCAT(TRIM(rt3.SHIPMENT_HEADER_ID),TRIM(rt3.SHIPMENT_LINE_ID)) IN
     (SELECT CONCAT(TRIM(rt1.SHIPMENT_HEADER_ID),TRIM(rt1.SHIPMENT_LINE_ID))
     FROM     rcv_transactions rt1
     WHERE NOT EXISTS(
     SELECT 1
          FROM     rcv_transactions rt2
          WHERE     rt2.TRANSACTION_TYPE <> 'RECEIVE'
                    AND rt1.SHIPMENT_HEADER_ID = rt2.SHIPMENT_HEADER_ID
                    AND rt1.SHIPMENT_LINE_ID = rt2.SHIPMENT_LINE_ID
                    AND rt2.ORGANIZATION_ID = p_organization_id))
UNION
SELECT /*+ FIRST_ROWS */ pv.VENDOR_NAME Vendor_Name
     ,rsh.RECEIPT_NUM Receipt_Number
     ,TO_CHAR(rt.TRANSACTION_DATE,'Mon-DD-YYYY HH24:MI:SS') Receipt_Date_and_Time
     ,msi.SEGMENT1 Part_Number
     ,msi.DESCRIPTION Part_Name
     ,rt.QUANTITY Quantity
     ,rt.UNIT_OF_MEASURE UOM
     --start 001
     ,NVL((SELECT DISTINCT DECODE (ASL_STATUS_ID,1,'New',2,'Approved','To be checked')
               FROM po_approved_supplier_list pasl
               WHERE pasl.item_id=rsl.ITEM_ID
                         AND pasl.VENDOR_ID(+) = pv.VENDOR_ID
                         AND pasl.VENDOR_SITE_ID(+) = pvs.VENDOR_SITE_ID),'No_data') ASL_Status
          --end 001
          ,TO_CHAR(TRUNC((((86400*(SYSDATE-rt.TRANSACTION_DATE))/60)/60)/24))|| ' Days ' || TO_CHAR(TRUNC(((86400*(SYSDATE-rt.TRANSACTION_DATE))/60)/60)-24*(TRUNC((((86400*(SYSDATE-rt.TRANSACTION_DATE))/60)/60)/24)))|| ' Hours' Days_and_hours_passed
          ,DECODE(
               NVL(msi.max_minmax_quantity,0) ,
          0 , 0 ,
          (NVL(msi.max_minmax_quantity,0) -
          NVL(inmohqd.onhand,0))
               * 100
               / NVL(msi.max_minmax_quantity,0)
          ) gap_percent
FROM rcv_transactions rt
     ,po_vendors pv
     ,po_vendor_sites_all pvs
     ,rcv_shipment_headers rsh
     ,rcv_shipment_lines rsl
     ,mtl_system_items msi
     ,org_organization_definitions org
     --,mtl_onhand_quantities_detail moqhd
     ,(SELECT NVL(SUM(primary_transaction_quantity),0) onhand,INVENTORY_ITEM_ID item_id,ORGANIZATION_ID organization_id
     FROM      mtl_onhand_quantities_detail
     WHERE SUBINVENTORY_CODE NOT IN ('Wip_SF','Wip_Int','Reject','Scrap','FG Trading','FG')
     GROUP BY INVENTORY_ITEM_ID, ORGANIZATION_ID) inmohqd
WHERE inmohqd.item_id(+) = msi.INVENTORY_ITEM_ID
     AND inmohqd.ORGANIZATION_ID(+) = msi.ORGANIZATION_ID
     --AND inmoqhd.SUBINVENTORY_CODE NOT IN  ('Wip_SF','Wip_Int','Reject','Scrap','FG Trading','FG')
     AND msi.INVENTORY_ITEM_ID = rsl.ITEM_ID
     AND rsh.SHIPMENT_HEADER_ID = rsl.SHIPMENT_HEADER_ID
     AND pv.VENDOR_ID = pvs.VENDOR_ID
     AND org.ORGANIZATION_ID = rt.ORGANIZATION_ID
     AND msi.ORGANIZATION_ID = rt.ORGANIZATION_ID
     AND pvs.VENDOR_SITE_ID = rt.VENDOR_SITE_ID
     AND pv.VENDOR_ID = rt.VENDOR_ID
     AND rsh.SHIPMENT_HEADER_ID = rt.SHIPMENT_HEADER_ID
     AND rsl.SHIPMENT_HEADER_ID = rt.SHIPMENT_HEADER_ID
     AND rsl.SHIPMENT_LINE_ID = rt.SHIPMENT_LINE_ID
     AND TRUNC(rt.TRANSACTION_DATE) <= TRUNC(p_tilldate)
     AND rsl.TO_ORGANIZATION_ID = p_organization_id
     AND CONCAT(TRIM(rt.SHIPMENT_HEADER_ID),TRIM(rt.SHIPMENT_LINE_ID)) IN
          (SELECT CONCAT(TRIM(rt1.SHIPMENT_HEADER_ID),TRIM(rt1.SHIPMENT_LINE_ID))
          FROM RCV_TRANSACTIONS rt1
          WHERE rt1.TRANSACTION_TYPE = 'RECEIVE'
               AND rt1.DESTINATION_TYPE_CODE = 'RECEIVING'
               AND rt1.PO_HEADER_ID IS NOT NULL
               AND NOT EXISTS(
               SELECT 1
                    FROM     RCV_TRANSACTIONS rt2
                    WHERE     rt2.SHIPMENT_HEADER_ID = rt1.SHIPMENT_HEADER_ID
                              AND rt2.SHIPMENT_LINE_ID = rt1.SHIPMENT_LINE_ID
                              AND rt2.TRANSACTION_TYPE <> 'RECEIVE'
     ))
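One thing worth checking in both branches is the CONCAT(TRIM(...)) IN (SELECT ... FROM rcv_transactions ...) filter: building a concatenated, trimmed key defeats any ordinary indexes on SHIPMENT_HEADER_ID and SHIPMENT_LINE_ID and forces an extra pass over RCV_TRANSACTIONS. A minimal sketch, assuming the intent is simply "no non-RECEIVE transaction exists for the same shipment line", of writing the first branch's filter as a correlated NOT EXISTS instead (a suggestion to test against the explain plan, not a verified fix):

-- replaces the CONCAT(TRIM(...)) IN (SELECT ...) block of the first branch
AND NOT EXISTS (
     SELECT 1
     FROM   rcv_transactions rt2
     WHERE  rt2.SHIPMENT_HEADER_ID = rt3.SHIPMENT_HEADER_ID
     AND    rt2.SHIPMENT_LINE_ID   = rt3.SHIPMENT_LINE_ID
     AND    rt2.TRANSACTION_TYPE  <> 'RECEIVE'
     AND    rt2.ORGANIZATION_ID    = p_organization_id)

The second branch can be rewritten the same way against rt, with its TRANSACTION_TYPE = 'RECEIVE', DESTINATION_TYPE_CODE = 'RECEIVING' and PO_HEADER_ID IS NOT NULL conditions moved onto the outer rt row.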

In this case, for the selected columns, all the data is the same for one of the RMAs that has more than one line, so UNION drops one of the records. However, the shipment line IDs of the two records are different, so adding that column to the select list solves the problem and there is no need to use UNION ALL after all. Still, UNION ALL should normally be faster than UNION, since it does not have to sort to eliminate duplicates, so why am I seeing the opposite here?
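A sketch of that select-list change (assuming the extra column is acceptable in the report output; rsl is already joined in both branches of the posted query):

     -- added to the select list of BOTH branches so the two RMA lines are no longer identical
     ,rsl.SHIPMENT_LINE_ID Shipment_Line_Id

With a genuinely distinguishing column in the output, UNION returns the same rows as UNION ALL; comparing the two execution plans (EXPLAIN PLAN FOR ... followed by SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY)) should show where the extra time goes in the UNION ALL version.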
Kindly suggest
Regards,
Sachin

Similar Messages

  • Using union all and rownum

    Hello again.
    Another question.
    Can I query with union all and stop it when I get N rows.
    For example:
    select 1 from dba_segments
    union all
    select 2 from dba_segments where
    union all
    select 3 from dba_segments where;
    and get the first 100 rows without running the whole query (not like this:)
    select * from (
    select 1 from dba_segments
    union all
    select 2 from dba_segments where
    union all
    select 3 from dba_segments where)
    where rownum < 100;
    I want the query will stop when there are 100 rows in the result set.
    thank you!

    You already posted your own answer. It just seems you don't want to use it.
    ROWNUM is NOT assigned until the rows are selected to be returned. So you need to wrap the three inner queries into a query that uses ROWNUM.
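    A minimal sketch of that wrapping (the dangling WHERE conditions above are simply omitted here); because of the stop-key on ROWNUM, Oracle normally stops fetching from the UNION ALL branches once 100 rows have been produced, so the inner query is not run to completion:
    select *
      from (select 1 as n from dba_segments
            union all
            select 2 as n from dba_segments
            union all
            select 3 as n from dba_segments)
     where rownum < 100;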

  • How to update this query and avoid performance issue?

    Hi, guys:
    I wonder how to update the following query to make it aware of weekend days. My boss wants the query to consider business days only. Below is just a portion of the query:
    select count(distinct cmv.invoicekey) total, '3' as type, 'VALID CALL DATE' as Category
    FROM cbwp_mv2 cmv
    where cmv.colresponse = 1
    And Trunc(cmv.Invdate) Between (Trunc(Sysdate)-1)-39 And (Trunc(Sysdate)-1)-37
    And Trunc(cmv.Whendate) Between cmv.Invdate+37 And cmv.Invdate+39
    CBWP_MV2 is a materialized view created to tune the query. This query is written for a data warehouse application, and CBWP_MV2 is refreshed every evening. My boss wants the conditions in the query to consider only business days. For example, if (Trunc(Sysdate)-1)-39 falls on a weekend, I need to move the start of the range to the next business day, and if (Trunc(Sysdate)-1)-37 falls on a weekend, I need to move the end of the range to the next business day, but I should always keep the range within 3 business days. If there is an overlap with a weekend, always push to later business days.
    Question: how do I implement this and avoid performance issues? I am afraid that if I use a function, it will greatly reduce performance. This view already contains more than 100K rows.
    thank you in advance!
    Sam
    Edited by: lxiscas on Dec 18, 2012 7:55 AM
    Edited by: lxiscas on Dec 18, 2012 7:56 AM

    You are already using a function, since you're using TRUNC on invdate and whendate.
    If you have indexes on those columns, then they will not be used because of the TRUNC.
    Consider omitting the TRUNC or testing with function-based indexes (a sketch follows at the end of this reply).
    Regarding business days:
    If you search this forum, you'll find lots of examples.
    Here's another 'golden oldie': http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:185012348071
    Regarding performance:
    Steps to take are explained from the links you find here: {message:id=9360003}
    Read them, they are more than worth it for now and future questions.
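    For the function-based index suggestion above, a minimal sketch (assuming the WHERE clause keeps the TRUNC(...) expressions exactly as posted; the index names are made up):
    -- the indexed expression must match the expression used in the WHERE clause
    CREATE INDEX cbwp_mv2_trunc_invdate_ix  ON cbwp_mv2 (TRUNC(invdate));
    CREATE INDEX cbwp_mv2_trunc_whendate_ix ON cbwp_mv2 (TRUNC(whendate));
    Note that the second predicate compares TRUNC(whendate) to an expression on invdate from the same row, so an index helps mainly with the invdate range; dropping the TRUNC and using plain range conditions, as suggested above, is the other option to test.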

  • What happened to PDF document 22040 – "PIX/ASA: Monitor and Troubleshoot Performance Issues"?

    Hi, does anyone know what happened to the following PDF note on Cisco's site? The PDF file contains only 1 page, compared to the original note in HTML format, which is a few pages long.
    If there is an alternative link for this document, please let me know. Thanks.
    Document ID: 22040
    PIX/ASA: Monitor and Troubleshoot Performance Issues
    http://www.cisco.com/image/gif/paws/22040/pixperformance.pdf <PDF Notes, but 1 page only?>
    http://www.cisco.com/en/US/products/hw/vpndevc/ps2030/products_tech_note09186a008009491c.shtml  < HTML Notes>

    Hi experts / marcin
    can any one of you let me know about my question related to VPN?
    Jayesh

  • Inconsistent SQL results when using View with UNION-ALL and table function

    Can any of you please execute the scripts below and check the output? In the table type variable, I am adding 4 distinct object IDs, whereas in the result I get only the row pertaining to the last ID in the table type variable. The same row is returned 4 times (4 = number of values in the table type).
    This scenario occurs in our product with a SQL statement of exactly the same pattern. I could simulate the issue with the sample script I have provided.
    Database version: 11.2.0.3 Enterprise Edition, Single node
    Thank you.
    CREATE TABLE TEMP_T1 AS SELECT * FROM ALL_OBJECTS;
    CREATE TABLE TEMP_T2 AS SELECT * FROM ALL_OBJECTS;
    UPDATE TEMP_T2 SET OBJECT_ID = OBJECT_ID * 37;
    CREATE UNIQUE INDEX TEMP_T1_U1 ON TEMP_T1(OBJECT_ID);
    CREATE UNIQUE INDEX TEMP_T2_U1 ON TEMP_T2(OBJECT_ID);
    CREATE OR REPLACE VIEW TEMP_T1T2_V AS
    SELECT * FROM TEMP_T1 UNION ALL SELECT * FROM TEMP_T2;
    CREATE OR REPLACE TYPE TEMP_OBJ_TYPE AS OBJECT (OBJ_ID NUMBER);
    CREATE OR REPLACE TYPE TEMP_OBJ_TAB_TYPE IS TABLE OF TEMP_OBJ_TYPE;
    SET SERVEROUTPUT ON;
    DECLARE
    TYPE TEMP_T1T2_V_ROW_TAB_TYPE IS TABLE OF TEMP_T1T2_V%ROWTYPE;
    TEMP_T1T2_V_ROW_TAB TEMP_T1T2_V_ROW_TAB_TYPE;
    TEMP_OBJ_TAB TEMP_OBJ_TAB_TYPE := TEMP_OBJ_TAB_TYPE();
    PROCEDURE ADD_TO_TEMP_OBJ_TAB(OBJ_ID IN NUMBER) IS
    BEGIN
    TEMP_OBJ_TAB.EXTEND;
    TEMP_OBJ_TAB(TEMP_OBJ_TAB.LAST) := TEMP_OBJ_TYPE(OBJ_ID);
    END;
    BEGIN
    ADD_TO_TEMP_OBJ_TAB(100);
    ADD_TO_TEMP_OBJ_TAB(116);
    ADD_TO_TEMP_OBJ_TAB(279);
    ADD_TO_TEMP_OBJ_TAB(364);
    DBMS_OUTPUT.PUT_LINE('=====================');
    FOR I IN TEMP_OBJ_TAB.FIRST..TEMP_OBJ_TAB.LAST
    LOOP
    DBMS_OUTPUT.PUT_LINE('OBJ_ID = '||TEMP_OBJ_TAB(I).OBJ_ID);
    END LOOP;
    DBMS_OUTPUT.PUT_LINE('---------------------');
    SELECT * BULK COLLECT INTO TEMP_T1T2_V_ROW_TAB
    FROM TEMP_T1T2_V VW
    WHERE ((VW.OBJECT_ID) IN (SELECT OBJ_ID
    FROM TABLE(CAST(TEMP_OBJ_TAB AS TEMP_OBJ_TAB_TYPE))));
    FOR I IN TEMP_OBJ_TAB.FIRST..TEMP_OBJ_TAB.LAST
    LOOP
    DBMS_OUTPUT.PUT_LINE('OBJ_ID = '||TEMP_OBJ_TAB(I).OBJ_ID);
    END LOOP;
    DBMS_OUTPUT.PUT_LINE('---------------------');
    IF TEMP_T1T2_V_ROW_TAB.COUNT > 0 THEN
    FOR I IN TEMP_T1T2_V_ROW_TAB.FIRST..TEMP_T1T2_V_ROW_TAB.LAST
    LOOP
    DBMS_OUTPUT.PUT_LINE(TEMP_T1T2_V_ROW_TAB(I).OBJECT_ID||' : '||TEMP_T1T2_V_ROW_TAB(I).OBJECT_NAME);
    END LOOP;
    ELSE
    DBMS_OUTPUT.PUT_LINE('NO ROWS RETURNED!');
    END IF;
    DBMS_OUTPUT.PUT_LINE('---------------------');
    END;
    /

    I can reproduce it:
    SQL*Plus: Release 11.2.0.3.0 Production on Tue Oct 30 14:05:39 2012
    Copyright (c) 1982, 2011, Oracle.  All rights reserved.
    Enter user-name: scott
    Enter password:
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    SQL> select  *
      2    from  v$version
      3  /
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    CORE    11.2.0.3.0      Production
    TNS for 64-bit Windows: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production
    SQL> CREATE TABLE TEMP_T1 AS SELECT * FROM ALL_OBJECTS;
    Table created.
    SQL>
    SQL> CREATE TABLE TEMP_T2 AS SELECT * FROM ALL_OBJECTS;
    Table created.
    SQL>
    SQL> UPDATE TEMP_T2 SET OBJECT_ID = OBJECT_ID * 37;
    72883 rows updated.
    SQL>
    SQL> CREATE UNIQUE INDEX TEMP_T1_U1 ON TEMP_T1(OBJECT_ID);
    Index created.
    SQL>
    SQL> CREATE UNIQUE INDEX TEMP_T2_U1 ON TEMP_T2(OBJECT_ID);
    Index created.
    SQL>
    SQL> CREATE OR REPLACE VIEW TEMP_T1T2_V AS
      2  SELECT * FROM TEMP_T1 UNION ALL SELECT * FROM TEMP_T2;
    View created.
    SQL>
    SQL> CREATE OR REPLACE TYPE TEMP_OBJ_TYPE AS OBJECT (OBJ_ID NUMBER)
      2  /
    Type created.
    SQL> CREATE OR REPLACE TYPE TEMP_OBJ_TAB_TYPE IS TABLE OF TEMP_OBJ_TYPE
      2  /
    Type created.
    SQL> SET SERVEROUTPUT ON;
    SQL>
    SQL> DECLARE
      2  TYPE TEMP_T1T2_V_ROW_TAB_TYPE IS TABLE OF TEMP_T1T2_V%ROWTYPE;
      3  TEMP_T1T2_V_ROW_TAB TEMP_T1T2_V_ROW_TAB_TYPE;
      4  TEMP_OBJ_TAB TEMP_OBJ_TAB_TYPE := TEMP_OBJ_TAB_TYPE();
      5  PROCEDURE ADD_TO_TEMP_OBJ_TAB(OBJ_ID IN NUMBER) IS
      6  BEGIN
      7  TEMP_OBJ_TAB.EXTEND;
      8  TEMP_OBJ_TAB(TEMP_OBJ_TAB.LAST) := TEMP_OBJ_TYPE(OBJ_ID);
      9  END;
    10  BEGIN
    11  ADD_TO_TEMP_OBJ_TAB(100);
    12  ADD_TO_TEMP_OBJ_TAB(116);
    13  ADD_TO_TEMP_OBJ_TAB(279);
    14  ADD_TO_TEMP_OBJ_TAB(364);
    15  DBMS_OUTPUT.PUT_LINE('=====================');
    16  FOR I IN TEMP_OBJ_TAB.FIRST..TEMP_OBJ_TAB.LAST
    17  LOOP
    18  DBMS_OUTPUT.PUT_LINE('OBJ_ID = '||TEMP_OBJ_TAB(I).OBJ_ID);
    19  END LOOP;
    20  DBMS_OUTPUT.PUT_LINE('---------------------');
    21  SELECT * BULK COLLECT INTO TEMP_T1T2_V_ROW_TAB
    22  FROM TEMP_T1T2_V VW
    23  WHERE ((VW.OBJECT_ID) IN (SELECT OBJ_ID
    24  FROM TABLE(CAST(TEMP_OBJ_TAB AS TEMP_OBJ_TAB_TYPE))));
    25  FOR I IN TEMP_OBJ_TAB.FIRST..TEMP_OBJ_TAB.LAST
    26  LOOP
    27  DBMS_OUTPUT.PUT_LINE('OBJ_ID = '||TEMP_OBJ_TAB(I).OBJ_ID);
    28  END LOOP;
    29  DBMS_OUTPUT.PUT_LINE('---------------------');
    30  IF TEMP_T1T2_V_ROW_TAB.COUNT > 0 THEN
    31  FOR I IN TEMP_T1T2_V_ROW_TAB.FIRST..TEMP_T1T2_V_ROW_TAB.LAST
    32  LOOP
    33  DBMS_OUTPUT.PUT_LINE(TEMP_T1T2_V_ROW_TAB(I).OBJECT_ID||' : '||TEMP_T1T2_V_ROW_TAB(I).OBJECT_NAME);
    34  END LOOP;
    35  ELSE
    36  DBMS_OUTPUT.PUT_LINE('NO ROWS RETURNED!');
    37  END IF;
    38  DBMS_OUTPUT.PUT_LINE('---------------------');
    39  END;
    40  /
    =====================
    OBJ_ID = 100
    OBJ_ID = 116
    OBJ_ID = 279
    OBJ_ID = 364
    OBJ_ID = 100
    OBJ_ID = 116
    OBJ_ID = 279
    OBJ_ID = 364
    364 : I_AUDIT
    364 : I_AUDIT
    364 : I_AUDIT
    364 : I_AUDIT
    PL/SQL procedure successfully completed.
    SQL> column object_name format a30
    SQL> select  object_id,
      2          object_name
      3    from  dba_objects
      4    where object_id in (100,116,279,364)
      5  /
    OBJECT_ID OBJECT_NAME
           100 ORA$BASE
           116 DUAL
           279 MAP_OBJECT
           364 I_AUDIT
    SQL>

    Works fine in:
    =====================
    OBJ_ID = 100
    OBJ_ID = 116
    OBJ_ID = 279
    OBJ_ID = 364
    OBJ_ID = 100
    OBJ_ID = 116
    OBJ_ID = 279
    OBJ_ID = 364
    100 : ORA$BASE
    116 : DUAL
    364 : SYSTEM_PRIVILEGE_MAP
    279 : MAP_OBJECT
    PL/SQL procedure successfully completed.
    SQL> select  object_id,
      2          object_name
      3    from  dba_objects
      4    where object_id in (100,116,279,364)
      5  /
    OBJECT_ID OBJECT_NAME
          100 ORA$BASE
          116 DUAL
          364 SYSTEM_PRIVILEGE_MAP
          279 MAP_OBJECT
    SQL> select  *
      2    from  v$version
      3  /
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    PL/SQL Release 11.2.0.1.0 - Production
    CORE    11.2.0.1.0      Production
    TNS for 32-bit Windows: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production
    SQL>

    SY.
    Edited by: Solomon Yakobson on Oct 30, 2012 2:14 PM

  • Query using Union All and CTEs is slow

    TypePatient
    [ednum] int NOT NULL,  PK
    [BackgroundID] int NOT NULL, FK
    [Patient_No] varchar(50) NULL, FK
    [Last_Name] varchar(30) NULL,
    [First_Name] varchar(30) NULL,
    [ADateTime] datetime NULL,
    Treat
    [ID] int NOT NULL, PK
    [Ednum] numeric(10, 0) NOT NULL, FK
    [Doctor] char(50) NULL,
    [Dr_ID] numeric(10, 0) NULL,
    background
    [ID] int NOT NULL, PK
    [Patient_No] varchar(50) NULL, FK
    [Last_Name] char(30) NULL,
    [First_Name] char(30) NULL,
    [DateofBirth] datetime NULL,
    pdiagnose
    [ID] int NOT NULL, PK
    [Ednum] int NOT NULL, FK
    [DSMNo] char(10) NULL,
    [DSMNoIndex] char(5) NULL,
    substance
    [ID] int NOT NULL, PK
    [Ednum] int NOT NULL, FK
    [Substance] varchar(120) NULL,
    DXCAT
    [id] int NULL, PK
    [dx_description] char(100) NULL,
    [dx_code] char(10) NULL,
    [dx_category_description] char(100) NULL,
    [diagnosis_category_code] char(10) NULL)
    Substance

    ID   | Ednum | Substance
    1    | 100   | Alcohol Dependence
    4    | 200   | Caffeine Dependence
    5    | 210   | Cigarettes

    dxcat

    id | dx_description | dx_code | dx_category_description | diagnosis_category_code
    10 | Tipsy          | zzz     | Alcohol                 | SA
    20 | Mellow         | ppp     | Mary Jane               | SA
    30 | Spacey         | fff     | LSD                     | SA
    50 | Smoker         | ggg     | Nicotine                | SA

    pdiagnose

    ID | Ednum | DSMNo | Diagnosis
    1  | 100   | zzz   | Alcohol
    2  | 100   | ddd   | Caffeine
    3  | 210   | ggg   | Smoker
    4  | 130   | ppp   | Mary Jane

    TypePatient

    ednum | Patient_No | Last_Name | First_Name | ADateTime
    100   | sssstttt   | Wolly     | Polly      | 12/4/2013
    130   | rrrrqqqq   | Jolly     | Molly      | 12/8/2013
    200   | bbbbcccc   | Wop       | Doo        | 12/12/2013
    210   | vvvvwww    | Jazz      | Razz       | 12/14/2013

    Treat

    ID   | Ednum | Doctor        | Dr_ID
    2500 | 100   | Welby, Marcus | 1000
    2550 | 200   | Welby, Marcus | 1000
    3000 | 210   | Welby, Marcus | 1000
    3050 | 130   | Welby, Marcus | 1000

    background

    ID | Patient_No | Last_Name | First_Name | DateofBirth
    2  | sssstttt   | Wolly     | Polly      | 8/6/1974
    3  | rrrrqqqq   | Jolly     | Molly      | 3/10/1987
    5  | bbbbcccc   | Wop       | Doo        | 8/12/1957
    6  | vvvvwww    | Jazz      | Razz       | 7/16/1995

    Desired output (empty columns = NULL; values as posted):

    Staff ID | Doctor        | Patient_No | Client Name  | Date of Service | Ednum | DX Code | DX Cat | DX Desc   | Substance
    1000     | Welby, Marcus | bbbcccc    | Wop, Doo     | 12/12/2013      | 200   |         |        |           | Caffeine Dependence
    1000     | Welby, Marcus | rrrqqq     | Jolly, Molly | 12/8/2013       | 130   | ppp     | SA     | Mary Jane |
    1000     | Welby, Marcus | sssttt     | Wolly, Polly | 12/4/2013       | 100   | zzz     | SA     | Alcohol   |
    1000     | Welby, Marcus | sssttt     | Wolly, Polly | 12/4/2013       | 100   | ddd     | SA     | LSD       |
    1000     | Welby, Marcus | sssttt     | Wolly, Polly | 12/4/2013       | 100   |         |        |           | Alcohol Dependence
    1000     | Welby, Marcus | vvvvwww    | Jazz, Razz   | 12/14/2013      | 210   | ggg     | SA     | Smoker    |
    1000     | Welby, Marcus | vvvvwww    | Jazz, Razz   | 12/14/2013      | 210   |         |        |           | Cigarettes
    A patient is assigned an ednum. There are two different menus for staff to enter
    diagnoses. Each menu stores the entries in a different table. The two tables are substance and pdiagnose. A patient’s diagnosis for a substance abuse can be entered in one table and not the other. 
    The number of entries for different substances for each patient can vary between the two tables. John Doe might be entered for alcohol and caffeine abuse in the pdiagnosis table and entered only for caffeine abuse in the substance table. They are only
    linked by the ednum which has nothing to do with the diagnosis/substance. The substance entered in one table is not linked to the substance entered in the other. A query will not put an entry for alcohol from the pdiagnosis table on the same row as an alcohol
    entry from the substance table except by chance. That is the reason for the way the query is written.
    The query accepts parameters for a Dr ID and a start and end date. It takes about 7 to 15 seconds to run. Hard coding the dates cuts it down to about a second.
    I might be able to select directly from the union all query instead of having it separate. But then I’m not sure about the order by clauses using aliases.
    Is there a way to rewrite the query to speed it up?
    I did not design the tables or come up with the process of entering diagnoses. It can’t be changed at this time.
    Please let me know if you notice any inconsistencies between the DDLs, data, and output. I did a lot of editing.
    Thanks for any suggestions.
    with cte_dxcat (Dr_ID, Doctor, Patient_No,Last_Name,
    First_Name, Adatetime,Ednum,
    dx_code,diagnosis_category_code,dx_description,substance,
    DateofBirth) as
    (Select distinct t.Dr_ID, t.Doctor, TP.Patient_No,TP.Last_Name,
    TP.First_Name, TP.Adatetime as 'Date of Service',TP.Ednum,
    DXCAT.dx_code,DXCAT.diagnosis_category_code,DXCAT.dx_description,
    null as 'substance',BG.DateofBirth
    From TypePatient TP
    inner join treat t on TP.ednum = t.Ednum
    inner join background BG on BG.Patient_No = TP.Patient_No
    inner join pdiagnose PD on TP.Ednum = PD.Ednum
    inner join Live_Knowledge.dbo.VA_DX_CAT_MAPPING DXCAT on DXCAT.dx_code = PD.DSMNo
    Where (TP.Adatetime >= convert(varchar(10), :ST, 121)+ ' 00:00:00.000'
    and TP.Adatetime <= convert(varchar(10), :SP, 121)+ ' 23:59:59.000')
    and DXCAT.diagnosis_category_code = 'SA'
    and t.Dr_ID =:DBLookupComboBox2),
    cte_substance (Dr_ID, Doctor, Patient_No,Last_Name,
    First_Name,Adatetime, Ednum,
    dx_code,diagnosis_category_code,dx_description,Substance,DateofBirth) as
    (Select distinct t.Dr_ID, t.Doctor, TP.Patient_No,TP.Last_Name,
    TP.First_Name, TP.Adatetime as 'Date of Service', TP.Ednum,
    null as 'dx_code',null as 'diagnosis_category_code',null as 'dx_description',s.Substance, BG.DateofBirth
    From TypePatient TP
    inner join treat t on TP.ednum = t.Ednum
    inner join background BG on BG.Patient_No = TP.Patient_No
    inner join pdiagnose PD on TP.Ednum = PD.Ednum
    inner join substance s on TP.Ednum = s.Ednum
    Where (TP.Adatetime >= convert(varchar(10), '12/1/2013', 121)+ ' 00:00:00.000'
    and TP.Adatetime <= convert(varchar(10), '12/31/2013', 121)+ ' 23:59:59.000')
    and t.Dr_ID =:DBLookupComboBox2),
    cte_all (Dr_ID, Doctor, Patient_No,Last_Name,
    First_Name,Adatetime, Ednum,
    dx_code,diagnosis_category_code,dx_description,Substance,DateofBirth) as
    (select cte_dxcat.Dr_ID as 'Staff ID', cte_dxcat.Doctor as 'Doctor',
    cte_dxcat.Patient_No as 'Patient_No',
    cte_dxcat.Last_Name as 'Last',cte_dxcat.First_Name as 'First',
    cte_dxcat.Adatetime as 'Date of Service',cte_dxcat.Ednum as 'Ednum',
    cte_dxcat.dx_code as 'DX Code',cte_dxcat.diagnosis_category_code as 'DX Category Code',
    cte_dxcat.dx_description as 'DX Description',
    cte_dxcat.substance as 'Substance',cte_dxcat.DateofBirth as 'DOB'
    from cte_dxcat
    union all
    select cte_substance.Dr_ID as 'Staff ID', cte_substance.Doctor as 'Doctor',
    cte_substance.Patient_No as 'Patient_No',
    cte_substance.Last_Name as 'Last',cte_substance.First_Name as 'First',
    cte_substance.Adatetime as 'Date of Service',cte_substance.Ednum as 'Ednum',
    cte_substance.dx_code as 'DX Code',cte_substance.diagnosis_category_code as 'DX Category Code',
    cte_substance.dx_description as 'DX Description',
    cte_substance.substance as 'Substance',cte_substance.DateofBirth as 'DOB'
    from cte_substance)
    select cte_all.Dr_ID as 'Staff ID', cte_all.Doctor as 'Doctor',
    cte_all.Patient_No as 'Patient_No',
    (cte_all.Last_Name + ', '+ cte_all.First_Name) as 'Client Name',
    cte_all.Adatetime as 'Date of Service',cte_all.Ednum as 'Ednum',
    cte_all.dx_code as 'DX Code',cte_all.diagnosis_category_code as 'DX Category Code',
    cte_all.dx_description as 'DX Description',
    cte_all.substance as 'Substance',
    CONVERT(char(10), cte_all.DateofBirth,101) as 'DOB'
    from cte_all
    order by cte_all.Patient_No,cte_all.Adatetime
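    On the question above about selecting directly from the UNION ALL instead of keeping a separate cte_all: in SQL Server, the ORDER BY written after the final branch of a UNION ALL applies to the combined result and can reference the output column names of the first SELECT, so the extra CTE is not needed just for ordering. A tiny self-contained illustration (not the posted query):
    -- ORDER BY after a UNION ALL sorts the whole result by the first branch's aliases
    select 2 as sort_col, 'b' as label
    union all
    select 1 as sort_col, 'a' as label
    order by sort_col;
    Applied to the posted query, the final statement could select from cte_dxcat UNION ALL cte_substance directly and end with ORDER BY Patient_No, Adatetime, provided both branches keep those column names.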

    Please post real DDL instead of your invented non-language, so that people do not have to guess what the keys, constraints, Declarative Referential Integrity, data types, etc. in your schema are. Learn how to follow ISO-11179 data element naming conventions
    and formatting rules. Your rude, non-SQL narrative is so far away from standards I cannot even use you as a bad example in a book.
    Temporal data should use ISO-8601 formats (we have to re-type the dialect you used!). Code should be in Standard SQL as much as possible and not a local dialect.
    This is minimal polite behavior on SQL forums. You posted a total mess! Do you really have patients without names? You really use zero to fifty characters for a patient_nbr? Give me an example. That is insane!
    Your disaster has more NULLs than entire major corporate systems. Since you cannot change it, can you quit? I am serious. I have been employed in IT since 1965, and can see a meltdown.
    I looked at this and I am not even going to try to help you; it is not worth it. I am sorry for you; you are in an environment where you cannot learn to do anything right.
    But you are still responsible for the rudeness of not posting DDL. 
    --CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice Data / Measurements and Standards in SQL SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking
    in Sets / Trees and Hierarchies in SQL

  • Oracle doc inconsistent on materialize view with union all and self joins

    First of all, I can't seem to create a materialized view containing self-joins AND union all. Is it possible?
    I checked Oracle 9i (my version: PL/SQL Release 9.2.0.4.0 - Production) documentation and I get different answers (or so it seems to me).
    First I saw this: "The COMPATIBILITY parameter must be set to 9.0 if the materialized aggregate view has inline views, outer joins, self joins or grouping sets and FAST REFRESH is specified during creation..."
    Did you see the part about 'self joins' in there? I did and I was pumped because that seems to say that you CAN have 'self joins' (and my compatibility is 9.2...)
    BUT
    In the very same document I also found "Oracle does not allow self-joins in materialized join views." (rage)
    You can see the document I am speaking of here: http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96520/mv.htm#574889
    Whenever I try to create the mview I get the following error:
    ORA-01446: cannot select ROWID from view with DISTINCT, GROUP BY, etc.


  • Materialized view with union all and fast refresh

    I have one view which is very slow. In this view we are joining many tables and many UNION ALL queries.
    Now I am planning to build a materialized view.
    Tell me how I can create the view with fast refresh on a UNION ALL query.
    Please help, it's urgent.
    Thanks
    Reena

    Refer to the Replication Manual for the create syntax and exceptions.
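    Beyond the manual, the key requirements for a fast-refreshable UNION ALL materialized view are materialized view logs WITH ROWID on each base table, a ROWID column in every branch, and a constant "marker" column that identifies the branch. A minimal sketch with made-up table and column names (t1, t2, id, val); check the exact restrictions for your release in the documentation:
    CREATE MATERIALIZED VIEW LOG ON t1 WITH ROWID;
    CREATE MATERIALIZED VIEW LOG ON t2 WITH ROWID;

    CREATE MATERIALIZED VIEW mv_t1_t2
      REFRESH FAST ON DEMAND
    AS
    SELECT 1 AS marker, t1.ROWID AS rid, t1.id, t1.val FROM t1
    UNION ALL
    SELECT 2 AS marker, t2.ROWID AS rid, t2.id, t2.val FROM t2;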

  • CAT4900M and NetApp - Performance issue

    Hi,
    I'm struggling with a performance issue between our two NetApp Fas3170-devices.
    The setup is quite simple: each NetApp is connected via two TenGig interfaces to a CAT4900M. The 4900Ms are also connected to each other via two TenGig interfaces. Each pair of connections is bundled into a Layer 2 etherchannel, configured as a dot1q trunk. Mode is set to 'ON' on both the 4900 and the NetApp. According to NetApp documentation, this configuration is supported. Across each etherchannel, VLANs 219 and 220 are allowed. Two partitions are configured on the NetApps, one active in our primary datacenter and the other in our secondary datacenter. Vlan219 and Vlan220 are configured for each of the two partitions, using HSRP for gateway redundancy.
    None of the interfaces or etherchannels show any signs of misconfiguration. All links are up and the etherchannels are working as expected, well, almost. Nothing indicates packet loss, CRC errors, input/output queue drops or anything that would impact performance. Jumbo frames are not configured, although this has been discussed.
    The problem is that we're unable to achieve satisfactory performance when, for instance, performing a volume copy between the two NetApp partitions. Even though we have a theoretical bandwidth of 20Gbps end-to-end, we never climb above 75-80 Mbytes/s of actual transfer rate between the two NetApps. So performance-wise, it almost looks as if we're "scaled" down to a 1Gig link. No QoS or other kind of rate limiting has been implemented on the 4900s, so from a network point of view, the NetApps can go full throttle. NetApp software has been updated and the configurations for both the NetApps and the 4900s have been reviewed by NetApp engineers and given a "clean bill of health".
    The configuration for the 4900->NetApp etherchannel/interfaces is as follows:
    interface TenGigabitEthernet1/5
    description *** Trunk NetAPP DC1 ***
    switchport trunk encapsulation dot1q
    switchport trunk allowed vlan 219,220
    switchport mode trunk
    udld port aggressive
    channel-group 2 mode on
    spanning-tree bpdufilter enable
    interface TenGigabitEthernet1/6
    description *** Trunk NetAPP DC1 ***
    switchport trunk encapsulation dot1q
    switchport trunk allowed vlan 219,220
    switchport mode trunk
    udld port aggressive
    channel-group 2 mode on
    spanning-tree bpdufilter enable
    interface Port-channel2
    description *** Trunk Etherchannel DC1 ***
    switchport
    switchport trunk encapsulation dot1q
    switchport trunk allowed vlan 219,220
    switchport mode trunk
    spanning-tree bpdufilter enable
    spanning-tree link-type point-to-point
    Configuration for 4900->4900 interfaces/etherchannel is as follows:
    interface TenGigabitEthernet1/1
    description *** Site-to-Site trunk ***
    switchport trunk encapsulation dot1q
    switchport trunk allowed vlan 10,219,220
    switchport mode trunk
    udld port aggressive
    channel-group 1 mode on
    interface TenGigabitEthernet1/2
    description *** Site-to-Site trunk ***
    switchport trunk encapsulation dot1q
    switchport trunk allowed vlan 10,219,220
    switchport mode trunk
    udld port aggressive
    channel-group 1 mode on
    interface Port-channel1
    description *** Site-to-Site trunk ***
    switchport
    switchport trunk encapsulation dot1q
    switchport trunk allowed vlan 10,219,220
    switchport mode trunk
    spanning-tree link-type point-to-point
    Vlan10 used for mngt-purpose.
    Does anyone have similar experiences or suggestions as to why we're having theese performanceissues?
    Thanks
    /Ulrich
    Message was edited by: UHansen1976

    Hi,
    Thanks for your reply.
    I take it that you mean baseline performance between the two NetApps. Well, that's really out of my hands, as another department is responsible for them. I'm not aware of any baseline performance figures, nor have I seen any benchmark tests or anything that could give me a hint.
    Just as you suggest, I've gone through the switch setup systematically, basically starting with the physical layer and working my way up. So far, I've found nothing that would indicate a physical problem. The switchport/etherchannel setup has been verified by my peers and also verified by NetApp against the configuration on the NetApps, as well as the various best-practice documentation available. Furthermore, I haven't seen any signs of packet drops, CRC errors, massive retransmissions or anything like that, neither on the switches nor on the NetApps.
    Recently we had a status meeting with our NetApp partner and it looks to me like they're pursuing the logical setup on the NetApps, as there are apparently a number of settings etc. that need adjustment. Also, we're waiting for NetApp tech support to comment on the traces, config dump etc. we've sent to them.
    /Ulrich

  • WEBUTIL - Does adding it to all forms cause performance issues?

    If I add the webutil library and object library to all forms in the system (as part of a standard template), despite the fact that most won't use it, will this cause any performance issues?
    Thanks in advance...

    The webutil user guide has a chapter on performance considerations. Have you looked at that?
    The number one point from that chapter is:
    1. Only WebUtil Enable Forms that actually need the functionality. Each form that is WebUtil enabled will generate a certain amount of network traffic and memory
    usage simply to instantiate the utility, even if you don’t use any WebUtil
    functionality.

  • Can't access "Network" and other performance issues

    Hi all,
    I'm facing a catch-22 and am not sure what to do to get myself and my mac out of this downward spiral. Any help you can offer would be greatly appreciated (fyi I am admittedly not great with macs and probably don't maintain my MacBook Pro well enough).
    I've had a serious slowdown in performance starting a couple weeks ago. the system moves at a glacial pace and most of the time is spent watching the spinning rainbow.
    To add to my issues, my ISP recently performed a system upgrade which requires me to change some settings under "Network" in my system preferences. Well, when I try to access "Network", the machine thinks for a while, then an error message appears telling me that the network preferences has shut down unexpectedly. When I hit "retry" the error message eventually reappears.
    So, because of the upgrade, I can't access the internet to download any repairs for the machine, and because of the problems with the machine, I can't modify the preferences to access the internet. Obviously a vicious cycle which someone of my expertise level is struggling to solve.
    Side notes
    - I've passed the 90-day phone support period so can't call for help.
    - I'm on my work computer now. Could I download some repair/diagnostic tool here, then run it on my machine at home?
    If anyone can throw me a lifeline I would be grateful!
    MacBook Pro 15"   Mac OS X (10.4.3)  

    OZ 99,
    For logic's sake, I'm going to take these out of order a bit:
    2) A "disk error" occurs when the "file system" (sometimes called the "disk directory") becomes damaged. This is data that is written to the HD, so yes, it could be considered a software error. Your file system is, basically, a map of your physical HD, and it indexes the location on the drive of all the other files. When it is damaged for whatever reason, your disk "forgets" where some amount of data lives. Because of this, the associated files become damaged, or "corrupt." If those files happen to be critical components of the OS, bad things can happen. At worst, the disk will become unmountable, and all of your files unrecoverable.
    1) Disk errors can be caused by several things. Sometimes, one or more "blocks" (let's call them physical locations on the disk) on your HD can become physically damaged. Whether this is because of a slight flaw in manufacturing, a scratch, magnetic particles that lose their "oomph," whatever, matters not. What is important is that some data is lost. Because the file system still believes there is data living in this location, it (the file system) is no longer reliable; it is damaged. While the initial loss of data could be considered hardware-related, the disk error is not. I'll come back to this.
    Another potential cause is some random error in the process of writing data to the disk. Again, this is a software problem, not a hardware problem. The most common cause for this occurs when your computer is shut down improperly, either a forced shutdown or a power loss. Journaling, which is the default for an OS X boot volume, goes a long way toward automatically fixing these types of disk errors, but it is not always a guarantee.
    If you have had your MBP for only a short period of time, it is not surprising that a disk error has occurred, and probably because of a bad block. Absolutely flawless drives are rare, and many computers ship with incipient disk errors. For this reason, many people like to format any new drive, even one in a new computer, right out of the box (I'll get to reasons why this is a good thing to do).
    3) Yes. Disk Utility can check or repair your file system. Any repairs must be made using Disk Utility while booted to the OS X install disk. Your HD can be "verified," however, while booted to the HD. Simply open Disk Utilty (in the "Utilities" folder), select your startup disk, then click "Verify" in the "First Aid" pane.
    4) Yes, you will have to reinstall all of your applications after formatting and reinstalling. Formatting erases everything on the volume or drive selected. Settings and data for all of those applications can be saved, however, then transferred back to the MBP after reinstalling OS X. Once the applications, themselves, have been reinstalled, you will be right back where you started. I can talk about making a comprehensive backup in another post, if you like.
    DISK UTILITY: In my first post, I recommended that you select your entire drive, then using the "Zero All Data" option. This process takes a considerably longer amount of time (as much as an hour and a half, depending on the size of your drive), but it has one big advantage. When this option is used on an entire physical drive (also called a "device"), it will scan for those pesky bad blocks, and "map out" any it finds. Since these bad spots on the disk will not be included in the new file system's list of "useable" locations, your chances of encountering another disk error in the near future is drastically reduced. So, even though a bad block could be considered a hardware error, management of them is handled by software.
    Scott

  • WHERE LIKE% and ASP Performance Issue

    Hi,
    I am facing an issue with my ASP application, which I use as a front-end web application to connect to a huge Oracle database.
    Basically I use my queries within the ASP pages; one of them uses WHERE ... LIKE on more than one column.
    Example: I have Col1 and Col2, and I have created the following indexes:
    Index1 on Col1, Index2 on Col2 and Index3 on (Col1, Col2).
    From the ASP page I have Field1 and Field2 and would like to use LIKE on both fields (Field1, Field2), but the process takes a long time to return results, not to mention the resources it consumes.
    My ASP query:
    sqlstr = "Select * From TABLE Where COL1 Like '"&field1&"%' And COL2 Like '"&field2&"%' ORDER BY Num ASC"
    Set Rs = Conn.Execute(Sqlstr)
    What can I use instead of this query to get the same result but much faster (optimized)?
    Thanks.

    If the ratio of rows returned is appropriate for index access, the Oracle optimizer should choose to use it, but for further comment:
    a. I couldn't see your query in the output you provided.
    b. I need to know the data distribution; what is the ratio of rows returned to the table's total rows with the literals you use? You can check it by taking a count of the columns you indexed with a GROUP BY query.
    c. I assume that your indexes are in VALID status, that you collected statistics with dbms_stats and cascaded them to the indexes, and, depending on the answer above, that your data is not skewed, which may create an extra need for histograms.
    d. I also assume the LIKE may start with '%', in which case Oracle does not use indexes and the Text option is what you need to read about, as advised; for another smart idea on making "like '%xxxx'" use an index in Oracle you may check - http://oracle-unix.blogspot.com/2007/07/performance-tuning-how-to-make-like.html
    After you supply the query with literals included and the data distribution, maybe as a last resort we need to force index access with a hint and compare the statistics provided by the timing and autotrace options of SQL*Plus.
    ps: Also you may produce a 10053 event trace to understand the optimizer decision - http://tonguc.wordpress.com/2007/01/20/optimizer-debug-trace-event-10053-trace-file/
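    For point (b) above, a minimal sketch of the distribution check (your_table, col1 and col2 are placeholders for the real table and the two indexed columns from the ASP query):
    -- how many rows share each (col1, col2) combination; very repetitive values
    -- make index access less attractive to the optimizer
    SELECT col1, col2, COUNT(*) AS cnt
    FROM   your_table
    GROUP  BY col1, col2
    ORDER  BY cnt DESC;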

  • Report preview and printing performance issues in CRXI R2

    Hello to all,
    We have successfully upgraded a corporate Web reporting site from CR8 to CRXI R2 Server SP3 VS2005 Asp2. Using managed reports, and native Oracle data access, performance has greatly improved. The CRXI site  displays the first page in half the time as CR8. These are often very large reports.
    The problem we are having is when you try to print, export, or just page thru a report in previewer. It takes as long to go to page 2, or to bring up the print dialog screen as it did for page 1 to display in the first place. This is drastically different from the performance on CR8. On the old site, when a report displayed, you could flip thru it like a Word document. Hardly any pause at all. Clicking on the printer icon brought up the print box immediately.
    Is there a way to tell the 'CrystalReportsViewer' to load all pages before showing the first page?
    If not, is anyone aware of a third party replacement for the CRV?
    Any help would be greatly appreciated.
    Joe Early

    Hello Joseph,
    What you're seeing is essentially expected behavior for a CR Server (BO Enterprise) based report.  When you try to page through a report or print it you're basically rerunning the report on each button click.
    To get around this behavior you can put your Report Object / InfoStore object into session and view, page, or print the session object from the viewer.
    You can review [Business Objects Note 1203389|https://bcp.wdf.sap.corp/sap/sapnotes/display/0001203389] for an example with the Crystal Reports .NET SDK. You'll want to add a check for the session on postback, etc., but the code should give you an idea of how to get started.
    Sincerely,
    Dan Kelleher

  • REGUH and REGUP Performance issues?

    Hi all,
    I am trying to build a program based on the select queries below. Will they create a performance problem in Production, as REGUP is a cluster table?
      SELECT
             LAUFD
             LAUFI
             ZBUKR
             LIFNR
             VBLNR
      FROM REGUH
      INTO TABLE TYT_REGUH
      WHERE  LAUFD IN S_LAUFD AND
             LAUFI IN S_LAUFI AND
            XVORL NE 'X' AND
            ZBUKR  IN S_ZBUKR.
    SELECT
                LAUFD
                LAUFI
                ZBUKR
                 LIFNR
                BUKRS
                 BELNR
                 GJAHR
    FROM REGUP
    INTO TABLE TYT_REGUP
    FOR ALL ENTRIES IN TYT_REGUH
    WHERE  LAUFD EQ TYT_REGUH-LAUFD AND
            LAUFI EQ TYT_REGUH-LAUFI AND
           XVORL NE 'X' AND
           LIFNR EQ TYT_REGUH-LIFNR AND
           VBLNR EQ TYT_REGUH-VBLNR AND
           ZBUKR EQ TYT_REGUH-ZBUKR.
    Thanks,
    Subba

    Hi Subba Krishna,
    As you said, it will certainly take a lot of time to fetch data from the cluster table REGUP. It will be better if you give all the primary key fields in the WHERE condition of the select statement.
    LAUFD     Date on Which the Program Is to Be Run
    LAUFI     Additional Identification
    XVORL     Indicator: Only Proposal Run?
    ZBUKR     Paying company code
    LIFNR     Account Number of Vendor or Creditor
    KUNNR     Customer Number 1
    EMPFG     Payee code
    VBLNR     Document Number of the Payment Document
    BUKRS     Company Code
    BELNR     Accounting Document Number
    GJAHR     Fiscal Year
    BUZEI     Number of Line Item Within Accounting Document
    These are all the Primary key fields of the table REGUP.
    One more thing: give the WHERE condition fields in the order of the REGUP table's key:
    LAUFD
    LAUFI
    XVORL
    ZBUKR
    LIFNR
    VBLNR
    Best regards,
    raam

  • Infoview Reports timing out, Freezing and other performance issues

    Hi all,
    New user here, and I've been tasked with a project to try to get to the bottom of a problem our users have when using InfoView on our network. Now, I have very little experience in this kind of thing, and this is as much a research-and-learn exercise as anything. I have highlighted below what one of my colleagues has sent me to research.
    You may have noticed reports timing out, freezing etc.
    We need to recognise what's causing it and any solutions we can apply.
    It may be down to memory issues on local machines, the cost of queries on the databases, or even the Java versions used when either editing or viewing Webi reports.
    Other considerations could be network issues - are the problems just site specific?
    • Identify reports that are causing problems. Is it down to the queries used, and can these be optimised to run faster? Our DBA can assist with this using his query profiler (Oracle). Or do reports that have many tabs appear to take longer to open/edit - is this memory or Java?
    • Research the web for known issues/fixes.
    • Raise topics on forums such as BOB and the SAP forum - explaining the above, can we find out what causes it, which Java versions we should be using, the recommended amount of memory (RAM), etc.
    At the moment I'm busy just getting a list of reports our users are having problems with, along with any error messages etc. Now, in order to get help from some gurus on here, I obviously need to supply more information on our setup, so if anyone can help, just tell me what information you need and I will get it for you.

    Hi,
    Here's my best suggestion: use a monitoring tool like the Remote Support Component: www.service.sap.com/remote-supportability
    You can use this utility to diagnose your system and all aspects of its latency.
    regards,
    H
