Parallel query tuning

Good day everybody, I'm working with Oracle 9.2.0.8 on AIX 5L.
I recently mounted the filesystems holding the datafiles and indexes (not the product) in direct I/O mode (mount -o dio /path_of_files),
but some parallel query executions don't get any benefit from the change... in fact they have been downgraded. Consider the following TKPROF extract versus the original timing.
The point is that direct I/O actually improves several queries, but those involving parallel execution are suffering this downgrade, which I think is related to PX Deq: Execute Reply.
Here are some related parameters, and below them the TKPROF extracts...
filesystemio_options     SETALL
parallel_adaptive_multi_user     TRUE
parallel_automatic_tuning     TRUE
parallel_execution_message_size          8192
parallel_max_servers          160
parallel_server_instances     1
parallel_threads_per_cpu     2
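The effective I/O settings can also be double-checked from the instance itself (just a verification query, independent of the traces below):

select name, value
  from v$parameter
 where name in ('filesystemio_options', 'disk_asynch_io');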
DIRECT IO
Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file sequential read                    105442        2.09       1490.70
  process startup                                17        0.02          0.43
  PX Deq: Join ACK                                2        0.01          0.01
  PX Deq: Parse Reply                             6        0.22          0.42
  PX Deq: Execute Reply                          507        1.95        797.61
  PX Deq Credit: need buffer                     58        0.51          2.40
  PX Deq Credit: send blkd                       50        1.09          5.16
  PX Deq: Table Q Normal                        337        0.55          1.96
  log file switch completion                      4        0.02          0.04
  log buffer space                               51        0.97          6.32
  log file sync                                   9        0.08          0.23
  PX Deq: Signal ACK                              5        0.04          0.06
  enqueue                                         5        0.00          0.00
  SQL*Net message to client                       1        0.00          0.00
  SQL*Net message from client                     1        0.07          0.07
ORIGINAL ONE
Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  process startup                                12        0.03          0.31
  PX Deq: Join ACK                                8        0.00          0.00
  PX Deq: Parse Reply                            12        0.16          0.19
  PX Deq: Execute Reply                          137        1.95        119.74
  db file sequential read                    104347        0.39        555.89
  PX Deq Credit: need buffer                      2        0.00          0.00
  log buffer space                               98        0.42          8.15
  log file switch completion                      4        0.38          0.49
  log file sync                                  27        0.62          2.61
  PX Deq: Table Q Normal                         23        0.00          0.00
  PX Deq: Signal ACK                              6        0.00          0.00
  enqueue                                         1        0.00          0.00
  SQL*Net message to client                       1        0.00          0.00
  SQL*Net message from client                     1        0.06          0.06

Thanks in advance.
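Reading the two traces side by side, the per-read averages tell most of the story (plain arithmetic on the figures above):

db file sequential read, DIRECT IO:  1490.70 s / 105442 waits  ~ 14.1 ms per read
db file sequential read, original :   555.89 s / 104347 waits  ~  5.3 ms per read

With the filesystem cache bypassed, each single-block read appears to pay the full disk latency, and the longer PX Deq: Execute Reply waits on the query coordinator largely reflect the slaves taking longer over those reads.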

First of all, thanks for your help...
Actually, there was one table that lacked the parallel flag... I fixed it, but
the PX Deq: Execute Reply is still the top wait event in the master TKPROF,
and I think your guesswork was accurate. Take a look...
Operation                    Object Name     Rows     Bytes     Cost     Object Node     In/Out     PStart     PStop
INSERT STATEMENT          Optimizer Mode=CHOOSE          3             5866                                          
  NESTED LOOPS                                   3       672       5866       P->S       QC (RANDOM)      
    HASH JOIN                                   3       444       5865       PCWP                        
      TABLE ACCESS BY INDEX ROWID     PRD982E.IMP_CONCEPTO     1       33       1       PCWC                        
        NESTED LOOPS                              15 K     1 M     5859       P->P       HASH             
          HASH JOIN                              74 K     4 M     2132       PCWP                        
            TABLE ACCESS BY INDEX ROWID     PRD982E.BDG_MOVIMIENTOS     74 K     3 M     1240       S->P       HASH             
              INDEX RANGE SCAN          PRD982E.SAP_BDG_IDX01     148 K           548                                          
            TABLE ACCESS FULL          PRD982E.SUMCON          1 M     13 M     892       P->P       HASH             
          INDEX RANGE SCAN          PRD982E.PK_IMP_CONCEPTO     1             1       PCWP                        
      TABLE ACCESS FULL               RELACIONES_ASIENTOS     40 K     1 M     6       P->P       HASH             
    TABLE ACCESS BY INDEX ROWID          PRD982E.RECIBOS          1       76       1       PCWP                        
      INDEX UNIQUE SCAN               PRD982E.PK_RECIBOS     1                   PCWP                        
call     count       cpu    elapsed       disk      query    current        rows
Parse        1      0.02       1.48          0          0          0           0
Execute      1     21.26     615.80      18451      28609     530507      399299
Fetch        0      0.00       0.00          0          0          0           0
total        2     21.28     617.29      18451      28609     530507      399299
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 934
Rows     Row Source Operation
      0  NESTED LOOPS
      0   HASH JOIN
      0    TABLE ACCESS BY INDEX ROWID IMP_CONCEPTO
      0     NESTED LOOPS
      0      HASH JOIN
      0       TABLE ACCESS BY INDEX ROWID BDG_MOVIMIENTOS
152295        INDEX RANGE SCAN SAP_BDG_IDX01 (object id 34358)
      0       TABLE ACCESS FULL SUMCON
      0      INDEX RANGE SCAN PK_IMP_CONCEPTO (object id 26842)
      0    TABLE ACCESS FULL SAP_RELACIONES_ASIENTOS_MV
      0   TABLE ACCESS BY INDEX ROWID RECIBOS
      0    INDEX UNIQUE SCAN PK_RECIBOS (object id 28131)
Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  process startup                                18        0.97          1.42
  PX Deq: Join ACK                                2        0.21          0.21
  PX Deq: Parse Reply                             7        0.07          0.15
  PX Deq: Execute Reply                         570        1.95        498.31
  db file sequential read                     18450        0.20         74.02
  log file switch completion                      3        0.19          0.23
  PX Deq: Table Q Normal                         31        0.08          0.13
  PX Deq: Signal ACK                              9        0.02          0.03
  enqueue                                         2        0.00          0.00
  log file sync                                   1        0.01          0.01
  SQL*Net message to client                       1        0.00          0.00
  SQL*Net message from client                     1        0.20          0.20
********************************************************************************
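For reference, a missing parallel flag on a table can be spotted and fixed along these lines (just a sketch: the owner and table names are simply ones visible in the plan above, and DEGREE 4 is an arbitrary example, not a recommendation):

select owner, table_name, degree, instances
  from dba_tables
 where owner = 'PRD982E'
   and table_name in ('SUMCON', 'BDG_MOVIMIENTOS', 'RECIBOS', 'IMP_CONCEPTO');

-- example only: give an explicit degree to any table that still shows DEGREE = 1
alter table PRD982E.SUMCON parallel (degree 4);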

Similar Messages

  • Parallel query using db links

    We use SPARC T2-based (CMT) servers for some of our Oracle 10g databases. Their performance is excellent when a query can be parallelized. But when we run queries (select sth from ...) from other Oracle databases over a dblink, we can't get parallel execution. Does anybody know why that is?

    Did you enable parallel DML on the local session? Do:
    alter session enable parallel dml;
    and then:
    insert /*+ parallel */ into my_table
    (branch, date, value1, value2)
    select 1, sysdate, number1, number2
    from huge_table@remote_database_db_link;
    One thing you can do if the remote query is not running in parallel is create a view on your remote database with a parallel hint in the view definition. Something like this:
    create view huge_table_v
    as select /*+ parallel */ number1, number2
    from huge_table;
    And then insert from that view@db_link. Having the hint on the other side of the db link helps tell the remote database optimizer how to run that query.

  • Query Processing Type in Reporting tools (Crystal, Webi Client, Deski etc.)

    Hi,
    In tools like Webi, Crystal or Deski, queries are executed serially. For instance, if we have 3 data providers in a report, the first data provider fetches data, then the second, and then the third. In Crystal, the main report fetches data first, followed by each subreport. Is there a way to tweak the tool settings and change this serial query execution to parallel, so that all the queries fetch data at the same time instead of waiting for the other data providers/subreports? When a report fetches data from multiple sources, parallel processing could improve report performance too. Is serial query execution a tool feature? Is there any reason behind not designing the tools with a parallel query execution option?
    Thanks
    Aravind

    Hi Aravind,
    The logic of the reports does not allow parallel SQL to be sent. Working down the page, the initial database is queried to get the data for the main report; once that data is retrieved, CR starts processing it from the report header down. When it reaches a section containing a subreport, it takes the linking parameter, passes it to the subreport, and then sends that query off to the database. It does this each time a new value is passed to the subreport.
    So logically, if the first SQL has not run, the values for the subreports cannot be generated. Unless you return all records, but that would not be efficient either.
    To optimize your report, create a stored procedure that runs server-side and returns one data set to the Designer. Servers are much more efficient at optimizing data collection than our various tools are.
    Thank you
    Don
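    As a rough illustration of the stored-procedure idea (Oracle syntax assumed; the procedure, table and column names here are made up, since the thread does not show the schema), the server-side object would simply open one combined result set and hand it back to the report:
    create or replace procedure get_report_data (p_result out sys_refcursor) as
    begin
      open p_result for
        select h.order_id, h.order_date, d.line_amount   -- whatever the main report and subreports need, joined once
          from orders_header h
          join orders_detail d
            on d.order_id = h.order_id;
    end;
    /
    Crystal (or Webi/Deski) then issues a single call to the procedure instead of one query per data provider or subreport.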

  • Need help in tuning the query

    Experts,
    Please help me tune this query:
    SELECT
    DECODE(WSH_DELIVERY_DETAILS.LOT_NUMBER,'5I27164/1U',1, TO_NUMBER(SUBSTR(WSH_DELIVERY_DETAILS.LOT_NUMBER, (LENGTH(WSH_DELIVERY_DETAILS.LOT_NUMBER) - INSTR(REVERSE(WSH_DELIVERY_DETAILS.LOT_NUMBER),'/') + 2), (INSTR(REVERSE(WSH_DELIVERY_DETAILS.LOT_NUMBER),'/')-1))) ) AS NO_OF_PLATES,
         P_FORM.DESCRIPTION FORMOFPRODUCT,
    HZ_PARTIES.PARTY_NAME CUSTOMER, HZ_CUST_ACCOUNTS.ACCOUNT_NUMBER ACCT_NUM,
         NVL(OE_ORDER_HEADERS_ALL.CUST_PO_NUMBER,'XXXX') CUST_PO,
         INITCAP(HZ_PARTIES.ADDRESS1) BILL_ADD1,
    INITCAP(HZ_PARTIES.ADDRESS2) BILL_ADD2,
    INITCAP(HZ_PARTIES.ADDRESS3) BILL_ADD3,
    INITCAP(HZ_PARTIES.ADDRESS4) BILL_ADD4,
         INITCAP(HZ_PARTIES.CITY) BILL_CITY,
    INITCAP(HZ_PARTIES.STATE) BILL_STATE,
    HZ_PARTIES.POSTAL_CODE BILL_PC,
         INITCAP(HZ_LOCATIONS.ADDRESS1) SHIP_ADD1,
    INITCAP(HZ_LOCATIONS.ADDRESS2) SHIP_ADD2,
    INITCAP(HZ_LOCATIONS.ADDRESS3) SHIP_ADD3,
    INITCAP(HZ_LOCATIONS.ADDRESS4) SHIP_ADD4,
         INITCAP(HZ_LOCATIONS.CITY) SHIP_CITY,
    INITCAP(HZ_LOCATIONS.STATE) SHIP_STATE,
    HZ_LOCATIONS.POSTAL_CODE SHIP_PC,
         OE_TRANSACTION_TYPES_TL.NAME CATEGORY ,
         OE_ORDER_HEADERS_ALL.ORDER_NUMBER,
    OE_ORDER_HEADERS_ALL.SOURCE_DOCUMENT_ID,
    OE_ORDER_HEADERS_ALL.HEADER_ID,
    OE_ORDER_HEADERS_ALL.FREIGHT_TERMS_CODE,
    /* to_char(oe_order_headers_all.ORDERED_DATE,'DD-MON-RR HH24:MI:SS') ORDER_DATE,*/
    to_char(sysdate,'DD/MM/RRRR HH24:MI:SS') ORDERED_DATE,
         ROWNUM,
    OE_ORDER_LINES_ALL.INVENTORY_ITEM_ID,
         OE_ORDER_LINES_ALL.ORDERED_ITEM,
         OE_ORDER_LINES_ALL.ORDERED_QUANTITY,
         WSH_DELIVERY_DETAILS.SHIPPED_QUANTITY+DECODE(WSH_DELIVERY_DETAILS.DELIVERY_DETAIL_ID,129881,.22,0) AS SHIPPED_QUANTITY1, (WSH_DELIVERY_DETAILS.SHIPPED_QUANTITY+DECODE(WSH_DELIVERY_DETAILS.DELIVERY_DETAIL_ID,129881,.22,0))*OE_ORDER_LINES_ALL.UNIT_SELLING_PRICE AS LINE_PRICE,
         OE_ORDER_LINES_ALL.UNIT_SELLING_PRICE,
         JA_IN_RG_I.FOR_HOME_USE_PAY_ED_VAL,
    ----      SUBSTR(JA_IN_RG_I.EXCISE_INVOICE_NUMBER,1,2)|| SUBSTR(JA_IN_RG_I.EXCISE_INVOICE_NUMBER,4,8) INVOICE_NO,
         JA_IN_RG_I.EXCISE_INVOICE_NUMBER INVOICE_NO,
    to_char(JA_IN_RG_I.CREATION_DATE,'DD/MM/RRRR') INVOICE_DATE1,
         to_char(JA_IN_RG_I.EXCISE_INVOICE_DATE,'DD-MON-RR HH24:MI:SS') INV_DATE,
         to_char(JA_IN_RG_I.CREATION_DATE,'DD/MM/RRRR HH24:MI:SS') INVOICE_DATE,
         JA_IN_RG_I.EXCISE_DUTY_RATE,
         JA_IN_RG_I.EXCISE_DUTY_AMOUNT,
         WSH_DELIVERY_DETAILS.LOT_NUMBER,
    WSH_TRIPS.CARRIER_ID CARRIER_ID,
    WSH_TRIPS.VEHICLE_NUM_PREFIX,
    WSH_TRIPS.SEAL_CODE,
    WSH_TRIPS.ROUTING_INSTRUCTIONS,
    WSH_TRIPS.OPERATOR, WSH_NEW_DELIVERIES. DELIVERY_ID,
    WSH_NEW_DELIVERIES. ADDITIONAL_SHIPMENT_INFO TRUCK_NO,
         WSH_TRIPS.MODE_OF_TRANSPORT VEHICLE_TYPE,
    WSH_TRIPS.ATTRIBUTE6||' '|| WSH_TRIPS.ATTRIBUTE7||' '|| WSH_TRIPS.ATTRIBUTE8 REMARKS,
         WSH_TRIPS.ATTRIBUTE1 LR_NO, WSH_TRIPS.ATTRIBUTE7,
         WSH_TRIPS.ATTRIBUTE2 LR_DATE,
         DECODE(WSH_TRIPS.ATTRIBUTE3, '', '', 'ARE NO '||WSH_TRIPS.ATTRIBUTE3) AS ARE_NO,
         DECODE(WSH_TRIPS.ATTRIBUTE6, '', '', WSH_TRIPS.ATTRIBUTE6) AS REMARK1,
         DECODE(WSH_TRIPS.ATTRIBUTE4, '', '', 'R.P. NO. ' ||WSH_TRIPS.ATTRIBUTE4) AS PERMIT,
         DECODE(WSH_TRIPS.ATTRIBUTE5, '', '', 'Export Under '||WSH_TRIPS.ATTRIBUTE5) AS EXPORT_UNDER,
         DECODE(WSH_NEW_DELIVERIES.PORT_OF_DISCHARGE,'','', 'Seal No : '||WSH_NEW_DELIVERIES.PORT_OF_DISCHARGE) AS SEAL_NO,
         DECODE(WSH_NEW_DELIVERIES.DESCRIPTION, '', '', 'Container No : ' ||WSH_NEW_DELIVERIES.DESCRIPTION) AS CONTAINER_NO,
         MTL_SYSTEM_ITEMS_B.ATTRIBUTE4 AS EXCISE_TARIFF_NO,
         --MTL_SYSTEM_ITEMS_B.DESCRIPTION,
         HZ_CUST_ACCOUNTS.ACCOUNT_NUMBER,
         JSW_LOT_PACKSLIP.PACKSLIP_NO PACKSLIP,
         PR_TYPE.DESCRIPTION ||' ,'||
         P_FORM.DESCRIPTION || ' ,'||
         PR_GRADE.DESCRIPTION ||' ,'||
         PR_QTY.DESCRIPTION || ' ,'||
         DECODE(SUBSTR(PR_STL.DESCRIPTION,1,3),'Not','', PR_STL.DESCRIPTION) DESCRIPTION ,
    HZ_CUST_ACCT_SITES_ALL.ATTRIBUTE7 S_CST_NO,
    HZ_CUST_ACCT_SITES_ALL.ATTRIBUTE6 S_LST_NO,
    HZ_CUST_ACCT_SITES_ALL.ATTRIBUTE5 S_ECC_NO,
    A.ATTRIBUTE7 B_CST_NO,
    A.ATTRIBUTE6 B_LST_NO,
    INTERFACECRM.JSW_MES_COMMON_TNGL_T.JSW_ADDMINUTES(WSH_TRIPS.VEHICLE_NUM_PREFIX) POSTFIX_NUM
    FROM -- jsw_vat_slno,
         MTL_PARAMETERS,
         HZ_PARTIES,
         HZ_CUST_ACCOUNTS,
         HZ_LOCATIONS,
         HZ_PARTY_SITES,
         HZ_CUST_ACCT_SITES_ALL,
         HZ_CUST_SITE_USES_ALL,
    HZ_CUST_ACCT_SITES_ALL A,
    HZ_CUST_SITE_USES_ALL B,
         OE_TRANSACTION_TYPES_TL,
         JA_IN_RG_I,
         WSH_DELIVERY_DETAILS,
         OE_ORDER_LINES_ALL,
         OE_ORDER_HEADERS_ALL,
         WSH_DELIVERY_ASSIGNMENTS,
         WSH_NEW_DELIVERIES,
         WSH_DELIVERY_LEGS,
         WSH_TRIP_STOPS,
         WSH_TRIPS,
         MTL_SYSTEM_ITEMS_B,
         JSW_LOT_PACKSLIP,
         IC_TRAN_PND,
                   JSW_ITM_SEARCH_PRD_TYP          PR_TYPE,
              JSW_ITM_SEARCH_PRD_FRM          P_FORM     ,
                   JSW_ITM_SEARCH_PRD_GRD          PR_GRADE,      
              JSW_ITM_SEARCH_QUALITY_LEVEL PR_QTY,      
                   JSW_ITEM_SEARCH_SLITTING      PR_STL
    WHERE /* jsw_vat_slno.excise_invoice_number=ja_in_rg_i.excise_invoice_number and jsw_vat_slno.organization_id = ja_in_rg_i.organization_id */
    HZ_PARTIES.PARTY_ID = HZ_CUST_ACCOUNTS.PARTY_ID
         AND HZ_CUST_ACCOUNTS.CUST_ACCOUNT_ID = OE_ORDER_HEADERS_ALL.SOLD_TO_ORG_ID
         AND OE_ORDER_HEADERS_ALL.SHIP_TO_ORG_ID = HZ_CUST_SITE_USES_ALL.SITE_USE_ID
         AND HZ_CUST_SITE_USES_ALL.CUST_ACCT_SITE_ID = HZ_CUST_ACCT_SITES_ALL.CUST_ACCT_SITE_ID
    AND OE_ORDER_HEADERS_ALL.INVOICE_TO_ORG_ID = B.SITE_USE_ID
         AND B.CUST_ACCT_SITE_ID                          = A.CUST_ACCT_SITE_ID
         AND HZ_CUST_ACCT_SITES_ALL.PARTY_SITE_ID      = HZ_PARTY_SITES.PARTY_SITE_ID
         AND HZ_PARTY_SITES.LOCATION_ID = HZ_LOCATIONS.LOCATION_ID
         AND OE_ORDER_LINES_ALL.SHIP_FROM_ORG_ID = MTL_PARAMETERS.ORGANIZATION_ID
         AND OE_ORDER_HEADERS_ALL.ORDER_TYPE_ID = OE_TRANSACTION_TYPES_TL.TRANSACTION_TYPE_ID
    and upper(oe_transaction_types_tl.description) not like '%SUP%'
         AND OE_ORDER_HEADERS_ALL.HEADER_ID = WSH_DELIVERY_DETAILS.SOURCE_HEADER_ID
         AND OE_ORDER_HEADERS_ALL.HEADER_ID = OE_ORDER_LINES_ALL.HEADER_ID
         AND OE_ORDER_LINES_ALL.LINE_ID = WSH_DELIVERY_DETAILS.SOURCE_LINE_ID
         AND TO_CHAR(WSH_DELIVERY_DETAILS.DELIVERY_DETAIL_ID ) = JA_IN_RG_I.REF_DOC_ID
         AND WSH_DELIVERY_DETAILS.DELIVERY_DETAIL_ID = WSH_DELIVERY_ASSIGNMENTS.DELIVERY_DETAIL_ID
         AND WSH_DELIVERY_ASSIGNMENTS.DELIVERY_ID = WSH_NEW_DELIVERIES.DELIVERY_ID
         AND WSH_NEW_DELIVERIES.DELIVERY_ID = WSH_DELIVERY_LEGS.DELIVERY_ID
         AND WSH_DELIVERY_LEGS.PICK_UP_STOP_ID = WSH_TRIP_STOPS.STOP_ID
         AND WSH_TRIP_STOPS.TRIP_ID                = WSH_TRIPS.TRIP_ID
    /* Input parameter can be either Invoice number or Delivery Id */
         AND (WSH_NEW_DELIVERIES.DELIVERY_ID = :DEL_ID OR JA_IN_RG_I.EXCISE_INVOICE_NUMBER = :INVOICE_NUMBER)
         AND NVL(WSH_DELIVERY_DETAILS.SHIPPED_QUANTITY,0) <>0
         AND     MTL_SYSTEM_ITEMS_B.ORGANIZATION_ID     = WSH_DELIVERY_DETAILS.ORGANIZATION_ID
         AND     MTL_SYSTEM_ITEMS_B.INVENTORY_ITEM_ID = WSH_DELIVERY_DETAILS.INVENTORY_ITEM_ID
         AND IC_TRAN_PND.LINE_DETAIL_ID               = WSH_DELIVERY_DETAILS.DELIVERY_DETAIL_ID
         AND     JSW_LOT_PACKSLIP.LOT_ID                    = IC_TRAN_PND.LOT_ID(+)
         AND     JSW_LOT_PACKSLIP.DELIVERY_ID          = WSH_NEW_DELIVERIES.DELIVERY_ID(+)
         AND      IC_TRAN_PND.LINE_ID                    = OE_ORDER_LINES_ALL.LINE_ID                                                  
         AND     IC_TRAN_PND.DELETE_MARK                = 0
         AND IC_TRAN_PND.COMPLETED_IND      = 1
         AND      IC_TRAN_PND.STAGED_IND                    =     1
         AND     P_FORM.CODE(+)                                   = substr(OE_ORDER_LINES_ALL.ORDERED_ITEM,3,2)
         AND     PR_TYPE.CODE                                   = substr(OE_ORDER_LINES_ALL.ORDERED_ITEM,1,2)
         AND     PR_TYPE.CODE                                   = P_FORM.PRD_TYPE_CODE
         AND     P_FORM.CODE                                   = substr(OE_ORDER_LINES_ALL.ORDERED_ITEM,3,2)
    AND     P_FORM.CODE                                   = PR_GRADE.PRD_FRM_CODE
         AND     PR_GRADE.CODE                                   = substr(OE_ORDER_LINES_ALL.ORDERED_ITEM,5,2)     
         AND     PR_GRADE.CODE                                   = PR_QTY.PRD_GRD_CODE
         AND     PR_QTY.CODE                                   = substr(OE_ORDER_LINES_ALL.ORDERED_ITEM,7,2)
         AND     PR_STL.PROD_CODE                              = PR_TYPE.CODE
         AND     PR_STL.QUALITY_LEVEL_CODE                    = PR_QTY.CODE
         AND     PR_STL.CODE                                   = substr(OE_ORDER_LINES_ALL.ORDERED_ITEM,9,2)
         AND rownum=1
    thanks,
    baskar.l

    Here is the explain plan
    PLAN_TABLE_OUTPUT
    | Id | Operation | Name | Rows | Bytes | Cost |
    | 0 | SELECT STATEMENT | | 1 | 920 | 12628 |
    | 1 | COUNT STOPKEY | | | | |
    | 2 | NESTED LOOPS | | 1 | 920 | 12628 |
    | 3 | NESTED LOOPS | | 1 | 728 | 12627 |
    | 4 | NESTED LOOPS | | 1 | 718 | 12626 |
    | 5 | NESTED LOOPS | | 1 | 684 | 12625 |
    | 6 | NESTED LOOPS | | 1 | 674 | 12624 |
    | 7 | NESTED LOOPS | | 1 | 665 | 12623 |
    | 8 | NESTED LOOPS | | 1 | 655 | 12622 |
    | 9 | NESTED LOOPS | | 1 | 532 | 12621 |
    | 10 | NESTED LOOPS | | 1 | 522 | 12620 |
    | 11 | FILTER | | | | |
    | 12 | NESTED LOOPS OUTER | | | | |
    | 13 | NESTED LOOPS | | 1 | 494 | 12617 |
    | 14 | NESTED LOOPS | | 1 | 477 | 12614 |
    | 15 | NESTED LOOPS | | 1 | 467 | 12611 |
    | 16 | NESTED LOOPS | | 1 | 382 | 12610 |
    | 17 | NESTED LOOPS | | 1 | 366 | 12609 |
    | 18 | NESTED LOOPS | | 1 | 321 | 12606 |
    | 19 | FILTER | | | | |
    | 20 | HASH JOIN OUTER | | | | |
    | 21 | TABLE ACCESS BY INDEX ROWID| IC_TRAN_PND | 1 | 22 | 3 |
    | 22 | NESTED LOOPS | | 1 | 292 | 12601 |
    | 23 | HASH JOIN | | 1 | 270 | 12598 |
    | 24 | NESTED LOOPS | | 4011 | 908K| 6272 |
    | 25 | HASH JOIN | | 4012 | 893K| 6272 |
    | 26 | TABLE ACCESS FULL | JSW_ITM_SEARCH_PRD_GRD | 33 | 759 | 2 |
    | 27 | HASH JOIN | | 20547 | 4113K| 6269 |
    | 28 | TABLE ACCESS FULL | JSW_ITM_SEARCH_QUALITY_LEVEL | 54 | 1296 | 2 |
    | 29 | HASH JOIN | | 24352 | 4304K| 6266 |
    | 30 | TABLE ACCESS FULL | JSW_ITM_SEARCH_PRD_TYP | 7 | 154 | 2 |
    | 31 | HASH JOIN | | 170K| 25M| 6260 |
    | 32 | TABLE ACCESS FULL | JSW_ITEM_SEARCH_SLITTING | 73 | 1971 | 2 |
    | 33 | HASH JOIN | | 44367 | 5719K| 6257 |
    | 34 | HASH JOIN | | 4469 | 410K| 367 |
    | 35 | TABLE ACCESS FULL| OE_TRANSACTION_TYPES_TL | 18 | 846 | 4 |
    | 36 | TABLE ACCESS FULL| OE_ORDER_HEADERS_ALL | 38240 | 1755K| 362 |
    | 37 | TABLE ACCESS FULL | OE_ORDER_LINES_ALL | 379K| 13M| 5761 |
    | 38 | INDEX UNIQUE SCAN | MTL_PARAMETERS_U1 | 1 | 4 | |
    | 39 | TABLE ACCESS FULL | WSH_DELIVERY_DETAILS | 579K| 21M| 5926 |
    | 40 | INDEX RANGE SCAN | IC_TRAN_PND_N1 | 2 | | 2 |
    | 41 | TABLE ACCESS FULL | JSW_ITM_SEARCH_PRD_FRM | 16 | 272 | 2 |
    | 42 | TABLE ACCESS BY INDEX ROWID | MTL_SYSTEM_ITEMS_B | 1 | 12 | 2 |
    | 43 | INDEX UNIQUE SCAN | MTL_SYSTEM_ITEMS_B_U1 | 1 | | 1 |
    | 44 | TABLE ACCESS BY INDEX ROWID | JA_IN_RG_I | 1 | 45 | 3 |
    | 45 | INDEX RANGE SCAN | JA_IN_RG_I_N3 | 1 | | 2 |
    | 46 | TABLE ACCESS BY INDEX ROWID | HZ_CUST_ACCOUNTS | 1 | 16 | 1 |
    | 47 | INDEX UNIQUE SCAN | HZ_CUST_ACCOUNTS_U1 | 1 | | |
    | 48 | TABLE ACCESS BY INDEX ROWID | HZ_PARTIES | 1 | 85 | 1 |
    | 49 | INDEX UNIQUE SCAN | HZ_PARTIES_U1 | 1 | | |
    | 50 | TABLE ACCESS BY INDEX ROWID | WSH_DELIVERY_ASSIGNMENTS | 1 | 10 | 3 |
    | 51 | INDEX RANGE SCAN | WSH_DELIVERY_ASSIGNMENTS_N3 | 1 | | 2 |
    | 52 | TABLE ACCESS BY INDEX ROWID | JSW_LOT_PACKSLIP | 1 | 17 | 3 |
    | 53 | INDEX RANGE SCAN | JSW_LOT_PACKSLIP_N1 | 1 | | 2 |
    | 54 | TABLE ACCESS BY INDEX ROWID | WSH_NEW_DELIVERIES | 1 | 18 | 1 |
    | 55 | INDEX UNIQUE SCAN | WSH_NEW_DELIVERIES_U1 | 1 | | |
    | 56 | TABLE ACCESS BY INDEX ROWID | WSH_DELIVERY_LEGS | 1 | 10 | 2 |
    | 57 | INDEX RANGE SCAN | WSH_DELIVERY_LEGS_N1 | 1 | | 1 |
    | 58 | TABLE ACCESS BY INDEX ROWID | WSH_TRIP_STOPS | 1 | 10 | 1 |
    | 59 | INDEX UNIQUE SCAN | WSH_TRIP_STOPS_U1 | 1 | | |
    | 60 | TABLE ACCESS BY INDEX ROWID | WSH_TRIPS | 1 | 123 | 1 |
    | 61 | INDEX UNIQUE SCAN | WSH_TRIPS_U1 | 1 | | |
    | 62 | TABLE ACCESS BY INDEX ROWID | HZ_CUST_SITE_USES_ALL | 1 | 10 | 1 |
    | 63 | INDEX UNIQUE SCAN | HZ_CUST_SITE_USES_U1 | 1 | | |
    | 64 | TABLE ACCESS BY INDEX ROWID | HZ_CUST_ACCT_SITES_ALL | 1 | 9 | 1 |
    | 65 | INDEX UNIQUE SCAN | HZ_CUST_ACCT_SITES_U1 | 1 | | |
    | 66 | TABLE ACCESS BY INDEX ROWID | HZ_CUST_SITE_USES_ALL | 1 | 10 | 1 |
    | 67 | INDEX UNIQUE SCAN | HZ_CUST_SITE_USES_U1 | 1 | | |
    | 68 | TABLE ACCESS BY INDEX ROWID | HZ_CUST_ACCT_SITES_ALL | 1 | 34 | 1 |
    | 69 | INDEX UNIQUE SCAN | HZ_CUST_ACCT_SITES_U1 | 1 | | |
    | 70 | TABLE ACCESS BY INDEX ROWID | HZ_PARTY_SITES | 1 | 10 | 1 |
    | 71 | INDEX UNIQUE SCAN | HZ_PARTY_SITES_U1 | 1 | | |
    | 72 | TABLE ACCESS BY INDEX ROWID | HZ_LOCATIONS | 1 | 192 | 1 |
    | 73 | INDEX UNIQUE SCAN | HZ_LOCATIONS_U1 | 1 | | |
    We need to tune this query to reduce its execution time, as our users are badly affected when running requests.
    thanks,
    baskar.l

  • Why does query do a full table scan?

    I have a simple select query that filters on the last 10 or 11 days of data from a table. In the first case it executes in 1 second. In the second case it is taking 15+ minutes and still not done.
    I can tell that the second query (11 days) is doing a full table scan.
    - Why is this happening? ... I guess some sort of threshold calculation???
    - Is there a way to prevent this? ... or encourage Oracle to play nice.
    I find it confusing from a front end/query perspective to get vastly different performance.
    Jason
    Oracle 10g
    Quest Toad 10.6
    CREATE TABLE delme10 AS
    SELECT *
    FROM ed_visits
    WHERE first_contact_dt >= TRUNC(SYSDATE-10,'D');
    Plan hash value: 915912709
    | Id  | Operation                    | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | CREATE TABLE STATEMENT       |                   |  4799 |  5534K|  4951   (1)| 00:01:00 |
    |   1 |  LOAD AS SELECT              | DELME10           |       |       |            |          |
    |   2 |   TABLE ACCESS BY INDEX ROWID| ED_VISITS         |  4799 |  5534K|  4796   (1)| 00:00:58 |
    |*  3 |    INDEX RANGE SCAN          | NDX_ED_VISITS_020 |  4799 |       |    15   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       3 - access("FIRST_CONTACT_DT">=TRUNC(SYSDATE@!-10,'fmd'))
    CREATE TABLE delme11 AS
    SELECT *
    FROM ed_visits
    WHERE first_contact_dt >= TRUNC(SYSDATE-11,'D');
    Plan hash value: 1113251513
    | Id  | Operation              | Name      | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | CREATE TABLE STATEMENT |           | 25157 |    28M| 14580   (1)| 00:02:55 |        |      |            |
    |   1 |  LOAD AS SELECT        | DELME11   |       |       |            |          |        |      |            |
    |   2 |   PX COORDINATOR       |           |       |       |            |          |        |      |            |
    |   3 |    PX SEND QC (RANDOM) | :TQ10000  | 25157 |    28M| 14530   (1)| 00:02:55 |  Q1,00 | P->S | QC (RAND)  |
    |   4 |     PX BLOCK ITERATOR  |           | 25157 |    28M| 14530   (1)| 00:02:55 |  Q1,00 | PCWC |            |
    |*  5 |      TABLE ACCESS FULL | ED_VISITS | 25157 |    28M| 14530   (1)| 00:02:55 |  Q1,00 | PCWP |            |
    Predicate Information (identified by operation id):
       5 - filter("FIRST_CONTACT_DT">=TRUNC(SYSDATE@!-11,'fmd'))

    Hi Jason,
    I think you're right about some kind of "threshold". You can verify the CBO costing with event 10053 enabled. There are many possible ways to change this behaviour. The most straightforward would probably be an INDEX hint, but you can also change some index-cost-related parameters, check histograms, decrease the degree of parallelism on the table, create a stored outline, etc.
    Lukasz
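    For completeness, the options listed above translate roughly into the following (a sketch: the index name is simply the one from the 10-day plan, and NOPARALLEL / the 10053 event are the standard syntax for the ideas mentioned, not a tested fix):
    -- force the index that the 10-day query used
    SELECT /*+ index(ed_visits NDX_ED_VISITS_020) */ *
    FROM ed_visits
    WHERE first_contact_dt >= TRUNC(SYSDATE-11,'D');
    -- lower the table's default degree so the optimizer stops discounting the parallel full scan
    ALTER TABLE ed_visits NOPARALLEL;
    -- dump the costing decision for the next hard parse
    ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';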

  • Oracle 11g RAC with partitioning and cross-instance parallel query problem

    I have set up a 300 GB TPC-H database in a 4-node RAC environment (8 CPUs per node, 16 GB memory, 2 GHz processors). The system is served by 2.5 terabytes of SSD for its I/O subsystem, managed by a combination of ASM and OCFS2.
    When I run a large parallel query (number 9 in the TPC-H query set) I get:
    ORA-00600: internal error code, arguments [kxfrGraDistNum3],[65535],[4]
    with all the other arguments blank. There were some reports of this in version 9, but it was supposedly fixed. Has anyone seen this behavior, or have a workaround?
    Mike

    Good idea! Why didn't I think of that? Oh yeah... I did. Unfortunately TMS CSI is an update-only partner-type CSI, so I cannot submit an SR. The ORA-600 lookup was how I found the old stuff, but it didn't have any 11g references. I hoped maybe someone in the community had encountered this and had a workaround. By the way, the query looks like this:
    select
    nation,
    o_year,
    sum(amount) as sum_profit
    from (
    select
    n_name as nation,
    extract(year from o_orderdate) as o_year,
    l_extendedprice * (1 - l_discount) - ps_supplycost * l_quantity as amount
    from
    h_part,
    h_supplier,
    h_lineitem,
    h_partsupp,
    h_order,
    h_nation
    where
    s_suppkey = l_suppkey
    and ps_suppkey = l_suppkey
    and ps_partkey = l_partkey
    and p_partkey = l_partkey
    and o_orderkey = l_orderkey
    and s_nationkey = n_nationkey
    and p_name like '%spring%'
    ) profit
    group by
    nation,
    o_year
    order by
    nation,
    o_year desc;
    The other 21 queries, all using the same tables and the same degrees of parallelism and cross-instance settings, executed OK.
    Mike

  • Can BI Server launch parallel query (with DOP > 1)?

    Hi guys,
    I have some performance problems with some reports. These reports are quite simple: no aggregation, no measures... just columns coming from different dimensions.
    The problem is that the size of the dimensions ranges from 1 million to 10 million records, and some dimensions are linked by "outer join" relationships. The physical query generated by the BI Server takes around 22 minutes, but if I modify the same query adding parallelism for the involved tables (e.g. select /*+ parallel(mytable 4) */ ...) I get the result in 5 minutes (1/4 of the time).
    So is there any way to force the BI Server to use a degree of parallelism (DOP) > 1 for all/some physical tables?
    What about using indexes (I didn't create any indexes yet; maybe I should start using them for primary/foreign keys)?
    Of course, other solutions to fix this issue are appreciated! :-)
    Thanks in advance,
    Nazza

    Nazza,
    Did you try out the HINT feature (Physical Table ->> General Tab ->> HINT) available in the physical layer? You can put a parallel hint there and the generated query will use it when it is fired at the database.
    Indexes are good to have, but do check with your DBA which index would be most cost-effective for your query.
    Cheers
    Dhrubo
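    To make the HINT-field suggestion concrete (my understanding, worth verifying against the Admin Tool help): the field takes the hint text itself, e.g.
    parallel(mytable 4)
    and the BI Server then emits it in the generated SQL as
    select /*+ parallel(mytable 4) */ ...
    which is the same rewrite you tested by hand. For the index question, these would just be ordinary b-tree indexes on the join columns between the large dimensions, created on the database side.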

  • Error while running a query - Input for variable 'Posting Period' is invalid

    Hi All,
    NOTE: This error is only cropping up when I input 12 in the posting period variable selection. If I put in any other value from 1-11 I am not getting any errors. Any ideas why this might be happening?
    I am getting the following error when I try and run a query - "Input for variable 'Posting Period (Single entry, mandatory)' is invalid" - On further clicking on this error the message displayed is as follows -
    Diagnosis
    Variable Posting Period (Single Value Entry, Mandatory) is used as a lower limit (X) and an upper limit () in an interval selection. This limit has the value #.
    System Response
    Procedure
    Enter a different value for variable Posting Period (Single Value Entry, Mandatory). If the value of the other limit is determined by another variable, you can change its value also.
    Procedure for System Administration

    OK.
    Well, if the variable is not used in any interval selection, then I would say "something happened to it".
    I would make a copy of the query and run it to check if I get the same problem with period 12.
       -> If not, something is wrong in the original query (you can proceed as below, if changes to original are permitted).
    If so, then try removing the variable completely from the query and hardcode restriction to 12.
       -> If problem still persists, I would have to do some thinking.
    If problem is gone, then add the variable again. Check.
       -> If problem is back, then the variable "is sick". Only quick thing to do, is to build an identical variable and use that one.
    If problem also happens with the new variable, then it's time to share this experience with someone else and consider raising an OSS.
    Good luck!
    Jacob
    P.S: what fisc year variant are you using?
    Edited by: Jacob Jansen on Jan 25, 2010 8:36 PM

  • Logical database in Ad Hoc Query

    Hello All,
    Can anyone tell me what the logical database is in Ad Hoc Query?

    Hi
    When you create a query, you have to select an InfoSet. An InfoSet can be considered a source from which data is populated in the query fields.
    InfoSets are created in transaction SQ02.
    There are four methods through which an InfoSet can become a source of data:
    1.  Table join (joining two or more tables from the Data Dictionary)
         Example: joining tables PA0001 and PA0006 on PERNR to get one resultant dataset
    2. Direct read of a basis table (like PA0001 as the source of data for the InfoSet)
    3. Logical database (a pre-written program by SAP that extracts data from clusters and tables, taking care of authorizations and validity periods)
    Example: logical databases PNP, PNPCE (Concurrent Employment), PCH (LDB for Personnel Development data)
    Custom logical DBs can be created in transaction SE36.
    4. Data retrieval by a program (custom code written by ABAP developers which collects and processes data). This program has a corresponding structure in the Data Dictionary, and the fields of this structure are used in the query.
    Reward Points, if helpful.
    Regards
    Waseem Imran

  • Query help on Goods Receipt Query with AP Invoice

    Looking for a little help on a query.  I would like to list all the goods receipts for a given date range and then display the AP Invoice information (if it has been copied to an AP Invoice).  I think my problem is in my WHERE clause; I plagiarized an SAP query that shows GR and AP from a PO as a start.  SBO 2005 SP01.  Any help would be greatly appreciated.  Thanks
    SELECT distinct 'GR',
    D0.DocStatus,
    D0.DocNum ,
    D0.DocDate,
    D0.DocDueDate,
    D0.DocTotal,
    'AP',
    I0.DocStatus,
    I0.DocNum ,
    I0.DocDate,
    I0.DocDueDate,
    I0.DocTotal,
    I0.PaidToDate
    FROM
    ((OPDN  D0 inner Join PDN1 D1 on D0.DocEntry = D1.DocEntry)
    full outer join
    (OPCH I0 inner join PCH1 I1 on I0.DocEntry = I1.DocEntry)
    on (I1.BaseType=20 AND D1.DocEntry = I1.BaseEntry AND D1.LineNum=I1.BaseLine))
    WHERE
    (D1.BaseType=22 AND D1.DocDate>='[%0]' AND D1.DocDate<='[%1]')
    OR (I1.BaseType=20 AND I1.BaseEntry IN
    (SELECT Distinct DocEntry
    FROM PDN1 WHERE BaseType=22 AND DocDate>='[%0]' AND DocDate<='[%1]'))

    Hi Dalen,
    I believe it is because of the condition
    (D1.BaseType=22 AND D1.DocDate>='%0' AND D1.DocDate<='%1')
    OR (I1.BaseType=20 AND I1.BaseEntry IN
    (SELECT Distinct DocEntry FROM PDN1 WHERE PDN1.BaseType=22 AND DocDate>='%0' AND DocDate<='%1'))
    Try changing it to
    D1.BaseType=22 OR D1.DocDate>='%0' AND D1.DocDate<='%1'
    PDN1.BaseType=22 OR DocDate>='%0' AND DocDate<='%1'))
    Let's see what the result would be. Let's have some fun with troubleshooting
    and see what the difference in the result is.
    Thank you
    Bishal

  • Can you check for data in one table or another but not both in one query?

    I have a situation where I need to link two tables together, but the data may be in another (archive) table, or different records may exist in both, and I want the latest record from either table:
    ACCOUNT
    AccountID     Name   
    123               John Doe
    124               Jane Donaldson           
    125               Harold Douglas    
    MARKETER_ACCOUNT
    Key     AccountID     Marketer    StartDate     EndDate
    1001     123               10526          8/3/2008     9/27/2009
    1017     123               10987          9/28/2009     12/31/4712    (high date ~ which means currently with this marketer)
    1023     124               10541          12/03/2010     12/31/4712
    ARCHIVE
    Key     AccountID     Marketer    StartDate     EndDate
    1015     124               10526          8/3/2008     12/02/2010
    1033     125               10987         01/01/2011     01/31/2012  
    So my query needs to return the following:
    123     John Doe                        10526     8/3/2008     9/27/2009
    124     Jane Donaldson             10541     12/03/2010     12/31/4712     (this is the later of the two records for this account between archive and marketer_account tables)
    125     Harold Douglas               10987          01/01/2011     01/31/2012     (he is only in archive, so get this record)
    I'm unsure how to proceed in one query.  Note that I am reading in possibly multiple accounts at a time and returning a collection back to .net
    open CURSOR_ACCT
              select AccountID
              from
                     ACCOUNT A,
                     MARKETER_ACCOUNT M,
                     ARCHIVE R
               where A.AccountID = nvl((select max(M.EndDate) from Marketer_account M2
                                                    where M2.AccountID = A.AccountID),
                                                      (select max(R.EndDate) from Archive R2
                                                    where R2.AccountID = A.AccountID)
                   and upper(A.Name) like parameter || '%'
    <can you do a NVL like this?   probably not...   I want to be able to get the MAX record for that account off the MarketerACcount table OR the max record for that account off the Archive table, but not both>
    (parameter could be "DO", so I return all names starting with DO...)

    If I understand your description correctly, I would assume that for John Doe we would expect the second row from marketer_account ("high date ~ which means currently with this marketer"). Here is a solution with analytic functions:
    drop table account;
    drop table marketer_account;
    drop table marketer_account_archive;
    create table account (
        id number
      , name varchar2(20)
    );
    insert into account values (123, 'John Doe');
    insert into account values (124, 'Jane Donaldson');
    insert into account values (125, 'Harold Douglas');
    create table marketer_account (
        key number
      , AccountId number
      , MktKey number
      , FromDt date
      , ToDate date
    );
    insert into marketer_account values (1001, 123, 10526, to_date('03.08.2008', 'dd.mm.yyyy'), to_date('27.09.2009', 'dd.mm.yyyy'));
    insert into marketer_account values (1017, 123, 10987, to_date('28.09.2009', 'dd.mm.yyyy'), to_date('31.12.4712', 'dd.mm.yyyy'));
    insert into marketer_account values (1023, 124, 10541, to_date('03.12.2010', 'dd.mm.yyyy'), to_date('31.12.4712', 'dd.mm.yyyy'));
    create table marketer_account_archive (
        key number
      , AccountId number
      , MktKey number
      , FromDt date
      , ToDate date
    );
    insert into marketer_account_archive values (1015, 124, 10526, to_date('03.08.2008', 'dd.mm.yyyy'), to_date('02.12.2010', 'dd.mm.yyyy'));
    insert into marketer_account_archive values (1033, 125, 10987, to_date('01.01.2011', 'dd.mm.yyyy'), to_date('31.01.2012', 'dd.mm.yyyy'));
    select key, AccountId, MktKey, FromDt, ToDate
         , max(FromDt) over(partition by AccountId) max_FromDt
      from marketer_account
    union all
    select key, AccountId, MktKey, FromDt, ToDate
         , max(FromDt) over(partition by AccountId) max_FromDt
      from marketer_account_archive;
    with
    basedata as (
    select key, AccountId, MktKey, FromDt, ToDate
      from marketer_account
    union all
    select key, AccountId, MktKey, FromDt, ToDate
      from marketer_account_archive
    ),
    basedata_with_max_intervals as (
    select key, AccountId, MktKey, FromDt, ToDate
         , row_number() over(partition by AccountId order by FromDt desc) FromDt_Rank
      from basedata
    ),
    filtered_basedata as (
    select key, AccountId, MktKey, FromDt, ToDate from basedata_with_max_intervals where FromDt_Rank = 1
    )
    select a.id
         , a.name
         , b.MktKey
         , b.FromDt
         , b.ToDate
      from account a
      join filtered_basedata b
        on (a.id = b.AccountId);
    ID NAME                     MKTKEY FROMDT     TODATE
    123 John Doe                  10987 28.09.2009 31.12.4712
    124 Jane Donaldson            10541 03.12.2010 31.12.4712
    125 Harold Douglas            10987 01.01.2011 31.01.2012
    If your tables are big it could be necessary to do the filtering (according to your condition) in an early step (the first CTE).
    Regards
    Martin
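    Following Martin's last remark, a version that pushes the name filter from the original cursor into the first CTE could look like this (a sketch; :name_prefix stands for the parameter in the original question):
    with
    basedata as (
    select key, AccountId, MktKey, FromDt, ToDate
      from marketer_account
     where AccountId in (select id from account where upper(name) like :name_prefix || '%')
    union all
    select key, AccountId, MktKey, FromDt, ToDate
      from marketer_account_archive
     where AccountId in (select id from account where upper(name) like :name_prefix || '%')
    ),
    basedata_with_max_intervals as (
    select key, AccountId, MktKey, FromDt, ToDate
         , row_number() over(partition by AccountId order by FromDt desc) FromDt_Rank
      from basedata
    )
    select a.id, a.name, b.MktKey, b.FromDt, b.ToDate
      from account a
      join basedata_with_max_intervals b
        on (a.id = b.AccountId)
     where b.FromDt_Rank = 1;
    This way only the rows for the accounts of interest are read and ranked.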

  • Query help : Query to get values SYSDATE-1 18:00 hrs to SYSDATE 08:00 hrs

    Hi Team
    I want the SQl query to get the data for the following comparison : -
    Order.Created is a date column, and I want to find all the values from (SYSDATE-1) 18:00 hours to SYSDATE 08:00 hours,
    i.e.
    (SYSDATE-1) 18:00:00 < Order.Created < SYSDATE 08:00:00.
    Regards

    Hi, Rohit,
    942281 wrote:
    If i want the data in the below way i.e.
    from (SYSDATE-1) 18:00 hours to SYSDATE 17:59 hours ---> (SYSDATE-1) 18:00:00 < Order.Created < SYSDATE 07:59:00.
    If you want to include rows from exactly 18:00:00 yesterday (but no earlier), and exclude rows from exactly 08:00:00 today (or later), then use:
    WHERE   ord_dtl.submit_dt  >= TRUNC (SYSDATE) - (6 / 24)
    AND     ord_dtl.submit_dt  <  TRUNC (SYSDATE) + (8 / 24)
    So can i use the below format : -
    ord_dtl.submit_dt BETWEEN trunc(sysdate)-(6/24) and trunc(sysdate)+(7.59/24). Please suggest.
    .59 hours is .59 * 60 * 60 = 2124 seconds (or .59 * 60 = 35.4 minutes), so the last time included in the range above is 07:35:24, not 07:59:59.
    If you really, really want to use BETWEEN (which includes both end points), then you could do it with date arithmentic:
    WHERE   ord_dtl.submit_dt  BETWEEN  TRUNC (SYSDATE) - (6 / 24)
                      AND         TRUNC (SYSDATE) + (8 / 24)
                                               - (1 / (24 * 60 * 60))but it would be simpler and less error prone to use INTERVALs, as Karthick suggested earlier:
    WHERE   ord_dtl.submit_dt  BETWEEN  TRUNC (SYSDATE) - INTERVAL '6' HOUR
                      AND         TRUNC (SYSDATE) + INTERVAL '8' HOUR
                                             - INTERVAL '1' SECOND
    Edited by: Frank Kulash on Apr 17, 2013 9:36 AM
    Edited by: Frank Kulash on Apr 17, 2013 11:56 AM
    Changed "- (8 /24)" to "+ (8 /24)" in first code fragment (after Blushadown, below)

  • Query help, subtract two query parts

    Hi,
    I am a beginner with PL/SQL and have a problem I couldn't solve:
    Table (op_list):
    Item     -     Amount -     Status
    Item1     -     10     -     in
    Item2     -     12     -     in
    Item3     -     7     -     in
    Item1     -     2     -     out
    Item2     -     3     -     out
    Item1     -     1     -     dmg
    Item3     -     3     -     out
    Item1     -     2     -     out
    Item2     -     5     -     out
    Item2     -     2     -     in
    Item3     -     1     -     exp
    I would like to get this result from the query (subtract the amounts of 'out/dmg/exp' from 'in'):
    Item - Amount left
    Item1     -     5
    Item2     -     6
    Item3 -     3
    I wrote code that returns the sum of all incoming items and the sum of all out/dmg/exp items, but couldn't work out how to subtract one part of the query from the other. Or maybe there is a better way. I am also worried about what happens if there is no 'out/dmg/exp', only 'in'.
    select item.name, sum(op_list.item_amount)
    from op_list
    inner join item
    on op_list.item = item.item_id
    where op_list.status = 'in'
    group by item.name
    union
    select item.name, sum(op_list.item_amount)
    from op_list
    inner join item
    on op_list.item = item.item_id
    where op_list.status = 'out'
    or op_list.status = 'dmg'
    or op_list.status = 'exp'
    group by item.name
    Return:
    Item1     -     10      [10 in]
    Item1     -     5     [2+1+2]
    Item2     -     14     [12+2]
    Item3     -     7
    Item3     -     4     [3+1]
    Thanks in advance

    Hi,
    We can also use simple inline views to get what we need.
    select a.item,a.amount-b.amount Balance from
    (select item,sum(amount) Amount from op_list
    where status = 'in'
    group by item) a,
    (select item,sum(amount) Amount from op_list
    where status in ('out','dmg','exp')
    group by item) b
    where
    a.item=b.item
    order by item;
    ITEM       BALANCE
    Item1                      5
    Item2                      6
    Item3                      3
    Regards,
    Prazy
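    Another way that also copes with items having only 'in' rows (the case the original poster was worried about) is a single pass with conditional aggregation, something along these lines:
    select item.name
         , sum(case when op_list.status = 'in'
                    then op_list.item_amount
                    else -op_list.item_amount
               end) amount_left
      from op_list
      inner join item
        on op_list.item = item.item_id
     where op_list.status in ('in', 'out', 'dmg', 'exp')
     group by item.name
     order by item.name;
    Items that only ever came 'in' simply keep their full amount, since there are no negative rows to offset them.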

  • Query help: query to return column that represents multiple rows

    I have a table with a name and location column. The same name can occur multiple times with any arbitrary location, i.e. duplicates are allowed.
    I need a query to find all names that occur in both of two separate locations.
    For example,
    bob usa
    bob mexico
    dot mexico
    dot europe
    hal usa
    hal europe
    sal usa
    sal mexico
    The query in question, if given the locations usa and mexico, would return bob and sal.
    Thanks for any help or advice,
    -=beeky

    How about this?
    SELECT  NAME
    FROM    <LOCATIONS_TABLE>
    WHERE   LOCATION IN ('usa','mexico')
    GROUP BY NAME
    HAVING COUNT(DISTINCT LOCATION) >= 2
    Results:
    SQL> WITH person_locations AS
      2  (
      3          SELECT 'bob' AS NAME, 'USA' AS LOCATION FROM DUAL UNION ALL
      4          SELECT 'bob' AS NAME, 'Mexico' AS LOCATION FROM DUAL UNION ALL
      5          SELECT 'dot' AS NAME, 'Mexico' AS LOCATION FROM DUAL UNION ALL
      6          SELECT 'dot' AS NAME, 'Europe' AS LOCATION FROM DUAL UNION ALL
      7          SELECT 'hal' AS NAME, 'USA' AS LOCATION FROM DUAL UNION ALL
      8          SELECT 'hal' AS NAME, 'Europe' AS LOCATION FROM DUAL UNION ALL
      9          SELECT 'sal' AS NAME, 'USA' AS LOCATION FROM DUAL UNION ALL
    10          SELECT 'sal' AS NAME, 'Mexico' AS LOCATION FROM DUAL
    11  )
    12  SELECT  NAME
    13  FROM    person_locations
    14  WHERE   LOCATION IN ('USA','Mexico')
    15  GROUP BY NAME
    16  HAVING COUNT(DISTINCT LOCATION) >= 2
    17  /
    NAM
    bob
    sal
    HTH!
    Edited by: Centinul on Oct 15, 2009 2:25 PM
    Added sample results.

  • QUERY HELP!!! trying to create a query

    I'm creating a summary report.
    I have a table with sale dates.
    For example, I have a table tab_1 and a column saleDate, such as:
    saleDate
    1923
    1936
    1945
    2003
    2005
    saleDate contains years, and there are some missing years where no sale
    was made.
    My report has to display years starting from the earliest year,
    so I have to create a query that starts with 1923,
    but the problem is that I have to include years that are not in the table.
    For example, I have to display the year 1924, which is not in the table,
    so that part of the report has to look like:
    1923 blah blah summary.........
    1924 "
    1925
    1926
    2005
    2006
    up to the current year (2006 may not be in the table, but I have to display it).
    I just need to know the query that can produce all the years starting from
    the earliest saleDate to the current year.
    thanks in advance

    Please write the query in the following form:
    SELECT a.year --- place any other columns from your table here
    FROM (SELECT (:start_num + rownum) year
    FROM all_tab_columns
    WHERE :start_num + rownum <= :end_num) a,
    tab_1 b
    WHERE a.year = b.saleDate(+);
    Note:
    1) If your start year and end year are 1923 and 2006, then input as below:
    :start_num = 1922
    :end_num = 2006
    2) Since some of the years (1924 etc.) may not be in your table, you may need to use NVL to print proper indicators.
    3) If you have more than one record in tab_1 for a particular year, then group them by year and then use them.
    Hope this helps.
    - Saumen.
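    If the table used for row generation might not have enough rows to cover the span of years, a CONNECT BY generator against DUAL is a common alternative (same outer-join idea, 10g-style row source, shown here only as a sketch):
    SELECT a.year --- place any other columns from your table here
    FROM (SELECT :start_num + LEVEL AS year
          FROM dual
          CONNECT BY LEVEL <= :end_num - :start_num) a,
         tab_1 b
    WHERE a.year = b.saleDate(+);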
