This SQL statement is always in Top Activity, with PX Deq Credit: send blkd
Hi gurus,
The following SQL statement is always among the Top Activity. I can see in the Enterprise Manager details that it suffers from PX Deq Credit: send blkd waits.
This is the statement:
SELECT S.Product, S.WH_CODE, S.RACK, S.BATCH, S.EXP_DATE, FLOOR(Qty_Beg) QtyBeg_B,
ROUND(f_convert_qty(S.PRODUCT, Qty_Beg-FLOOR(Qty_Beg), P.UOM_K ), 0) QtyBeg_K,
FLOOR(Qty_In) QtyIn_B, ROUND(f_convert_qty(S.PRODUCT, Qty_In-FLOOR(Qty_In), P.UOM_K), 0) QtyIn_K,
FLOOR(Qty_Out) QtyOut_B, ROUND(f_convert_qty(S.PRODUCT, Qty_Out-FLOOR(Qty_Out), P.UOM_K ), 0) QtyOut_K,
FLOOR(Qty_Adj) QtyAdj_B, ROUND(f_convert_qty(S.PRODUCT, Qty_Adj-FLOOR(Qty_Adj), P.UOM_K ), 0) QtyAdj_K,
FLOOR(Qty_End) QtyEnd_B, ROUND(f_convert_qty(S.PRODUCT, Qty_End-FLOOR(Qty_End), P.UOM_K ), 0) QtyEnd_K,
S.LOC_CODE
FROM V_STOCK_DETAIL S
JOIN PRODUCTS P ON P.PRODUCT = S.PRODUCT
WHERE S.Product = :pProduct AND S.WH_CODE = :pWhCode AND S.LOC_CODE = :pLocCode;
The statement is invoked by our front end (a web-based app) for a browse table displayed on a web page. The result set can range from 10 to 8,000 rows. It is used to display the current stock availability for a particular product in a particular warehouse. The stock availability itself is kept in a view: V_STOCK_DETAIL.
These are the parameters relevant to the optimizer:
SQL> show parameter user_dump_dest
user_dump_dest string /u01/app/oracle/admin/ITTDB/udump
SQL> show parameter optimizer
_optimizer_cost_based_transformation string OFF
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 10.2.0.3
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
SQL> show parameter db_file_multi
db_file_multiblock_read_count integer 16
SQL> show parameter db_block_size column sname format a20 column pname format a20
db_block_size integer 8192
Here is the output of EXPLAIN PLAN:
SQL> explain plan for
SELECT S.Product, S.WH_CODE, S.RACK, S.BATCH, S.EXP_DATE, FLOOR(Qty_Beg) QtyBeg_B,
ROUND(f_convert_qty(S.PRODUCT, Qty_Beg-FLOOR(Qty_Beg), P.UOM_K ), 0) QtyBeg_K,
FLOOR(Qty_In) QtyIn_B, ROUND(f_convert_qty(S.PRODUCT, Qty_In-FLOOR(Qty_In), P.UOM_K), 0) QtyIn_K,
FLOOR(Qty_Out) QtyOut_B, ROUND(f_convert_qty(S.PRODUCT, Qty_Out-FLOOR(Qty_Out), P.UOM_K ), 0) QtyOut_K,
FLOOR(Qty_Adj) QtyAdj_B, ROUND(f_convert_qty(S.PRODUCT, Qty_Adj-FLOOR(Qty_Adj), P.UOM_K ), 0) QtyAdj_K,
FLOOR(Qty_End) QtyEnd_B, ROUND(f_convert_qty(S.PRODUCT, Qty_End-FLOOR(Qty_End), P.UOM_K ), 0) QtyEnd_K,
S.LOC_CODE
FROM V_STOCK_DETAIL S
JOIN PRODUCTS P ON P.PRODUCT = S.PRODUCT
WHERE S.Product = :pProduct AND S.WH_CODE = :pWhCode AND S.LOC_CODE = :pLocCode
Explain complete.
Elapsed: 00:00:00.31
SQL> select * from table(dbms_xplan.display)
PLAN_TABLE_OUTPUT
Plan hash value: 3252950027
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | 1 | 169 | 6 (17)| 00:00:01 | | | |
| 1 | PX COORDINATOR | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10003 | 1 | 169 | 6 (17)| 00:00:01 | Q1,03 | P->S | QC (RAND) |
| 3 | HASH GROUP BY | | 1 | 169 | 6 (17)| 00:00:01 | Q1,03 | PCWP | |
| 4 | PX RECEIVE | | 1 | 169 | 6 (17)| 00:00:01 | Q1,03 | PCWP | |
| 5 | PX SEND HASH | :TQ10002 | 1 | 169 | 6 (17)| 00:00:01 | Q1,02 | P->P | HASH |
| 6 | HASH GROUP BY | | 1 | 169 | 6 (17)| 00:00:01 | Q1,02 | PCWP | |
| 7 | NESTED LOOPS OUTER | | 1 | 169 | 5 (0)| 00:00:01 | Q1,02 | PCWP | |
| 8 | MERGE JOIN CARTESIAN | | 1 | 119 | 4 (0)| 00:00:01 | Q1,02 | PCWP | |
| 9 | SORT JOIN | | | | | | Q1,02 | PCWP | |
| 10 | NESTED LOOPS | | 1 | 49 | 4 (0)| 00:00:01 | Q1,02 | PCWP | |
| 11 | BUFFER SORT | | | | | | Q1,02 | PCWC | |
| 12 | PX RECEIVE | | | | | | Q1,02 | PCWP | |
| 13 | PX SEND BROADCAST | :TQ10000 | | | | | | S->P | BROADCAST |
|* 14 | INDEX RANGE SCAN | PRODUCTS_IDX2 | 1 | 25 | 2 (0)| 00:00:01 | | | |
| 15 | PX BLOCK ITERATOR | | 1 | 24 | 2 (0)| 00:00:01 | Q1,02 | PCWC | |
|* 16 | MAT_VIEW ACCESS FULL | MV_CONVERT_UOM | 1 | 24 | 2 (0)| 00:00:01 | Q1,02 | PCWP | |
| 17 | BUFFER SORT | | 1 | 70 | 2 (0)| 00:00:01 | Q1,02 | PCWP | |
| 18 | BUFFER SORT | | | | | | Q1,02 | PCWC | |
| 19 | PX RECEIVE | | 1 | 70 | 4 (0)| 00:00:01 | Q1,02 | PCWP | |
| 20 | PX SEND BROADCAST | :TQ10001 | 1 | 70 | 4 (0)| 00:00:01 | | S->P | BROADCAST |
|* 21 | TABLE ACCESS BY INDEX ROWID| STOCK | 1 | 70 | 4 (0)| 00:00:01 | | | |
|* 22 | INDEX RANGE SCAN | STOCK_PK | 1 | | 2 (0)| 00:00:01 | | | |
|* 23 | TABLE ACCESS BY INDEX ROWID | MV_TRANS_STOCK | 1 | 50 | 3 (0)| 00:00:01 | Q1,02 | PCWP | |
|* 24 | INDEX RANGE SCAN | MV_TRANS_STOCK_IDX1 | 1 | | 2 (0)| 00:00:01 | Q1,02 | PCWP | |
Predicate Information (identified by operation id):
14 - access("P"."PRODUCT"=:PPRODUCT)
16 - filter("CON"."PRODUCT"=:PPRODUCT)
21 - filter("STOCK"."LOC_CODE"=:PLOCCODE)
22 - access("STOCK"."PRODUCT"=:PPRODUCT AND "STOCK"."WH_CODE"=:PWHCODE)
23 - filter("STS"(+)='N')
24 - access("PRODUCT"(+)=:PPRODUCT AND "WH_CODE"(+)=:PWHCODE AND "LOC_CODE"(+)=:PLOCCODE AND "RACK"(+)="STOCK"."RACK" AND "BATCH"(+)="STOCK"."BATCH" AND "EXP_DATE"(+)="STOCK"."EXP_DATE")
42 rows selected.
Elapsed: 00:00:00.06
Here is the output of SQL*Plus AUTOTRACE including the TIMING information:
SQL> SELECT S.Product, S.WH_CODE, S.RACK, S.BATCH, S.EXP_DATE, FLOOR(Qty_Beg) QtyBeg_B,
ROUND(f_convert_qty(S.PRODUCT, Qty_Beg-FLOOR(Qty_Beg), P.UOM_K ), 0) QtyBeg_K,
FLOOR(Qty_In) QtyIn_B, ROUND(f_convert_qty(S.PRODUCT, Qty_In-FLOOR(Qty_In), P.UOM_K), 0) QtyIn_K,
FLOOR(Qty_Out) QtyOut_B, ROUND(f_convert_qty(S.PRODUCT, Qty_Out-FLOOR(Qty_Out), P.UOM_K ), 0) QtyOut_K,
FLOOR(Qty_Adj) QtyAdj_B, ROUND(f_convert_qty(S.PRODUCT, Qty_Adj-FLOOR(Qty_Adj), P.UOM_K ), 0) QtyAdj_K,
FLOOR(Qty_End) QtyEnd_B, ROUND(f_convert_qty(S.PRODUCT, Qty_End-FLOOR(Qty_End), P.UOM_K ), 0) QtyEnd_K,
S.LOC_CODE
FROM V_STOCK_DETAIL S
JOIN PRODUCTS P ON P.PRODUCT = S.PRODUCT
WHERE S.Product = :pProduct AND S.WH_CODE = :pWhCode AND S.LOC_CODE = :pLocCode
Execution Plan
0 SELECT STATEMENT Optimizer Mode=ALL_ROWS 1 169 6
1 0 PX COORDINATOR
2 1 PX SEND QC (RANDOM) SYS.:TQ10003 1 169 6 :Q1003 P->S QC (RANDOM)
3 2 HASH GROUP BY 1 169 6 :Q1003 PCWP
4 3 PX RECEIVE 1 169 6 :Q1003 PCWP
5 4 PX SEND HASH SYS.:TQ10002 1 169 6 :Q1002 P->P HASH
6 5 HASH GROUP BY 1 169 6 :Q1002 PCWP
7 6 NESTED LOOPS OUTER 1 169 5 :Q1002 PCWP
8 7 MERGE JOIN CARTESIAN 1 119 4 :Q1002 PCWP
9 8 SORT JOIN :Q1002 PCWP
10 9 NESTED LOOPS 1 49 4 :Q1002 PCWP
11 10 BUFFER SORT :Q1002 PCWC
12 11 PX RECEIVE :Q1002 PCWP
13 12 PX SEND BROADCAST SYS.:TQ10000 S->P BROADCAST
14 13 INDEX RANGE SCAN ITT_NEW.PRODUCTS_IDX2 1 25 2
15 10 PX BLOCK ITERATOR 1 24 2 :Q1002 PCWC
16 15 MAT_VIEW ACCESS FULL ITT_NEW.MV_CONVERT_UOM 1 24 2 :Q1002 PCWP
17 8 BUFFER SORT 1 70 2 :Q1002 PCWP
18 17 BUFFER SORT :Q1002 PCWC
19 18 PX RECEIVE 1 70 4 :Q1002 PCWP
20 19 PX SEND BROADCAST SYS.:TQ10001 1 70 4 S->P BROADCAST
21 20 TABLE ACCESS BY INDEX ROWID ITT_NEW.STOCK 1 70 4
22 21 INDEX RANGE SCAN ITT_NEW.STOCK_PK 1 2
23 7 TABLE ACCESS BY INDEX ROWID ITT_NEW.MV_TRANS_STOCK 1 50 3 :Q1002 PCWP
24 23 INDEX RANGE SCAN ITT_NEW.MV_TRANS_STOCK_IDX1 1 2 :Q1002 PCWP
Statistics
570 recursive calls
0 physical write total IO requests
0 physical write total multi block requests
0 physical write total bytes
0 physical writes direct temporary tablespace
0 java session heap live size max
0 java session heap object count
0 java session heap object count max
0 java session heap collected count
0 java session heap collected bytes
83 rows processed
Elapsed: 00:00:03.24
SQL> disconnect
Commit complete
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
The TKPROF output for this statement looks like the following:
TKPROF: Release 10.2.0.3.0 - Production on Thu Apr 23 12:39:29 2009
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Trace file: ittdb_ora_9566_mytrace1.trc
Sort options: default
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
SELECT S.Product, S.WH_CODE, S.RACK, S.BATCH, S.EXP_DATE, FLOOR(Qty_Beg) QtyBeg_B,
ROUND(f_convert_qty(S.PRODUCT, Qty_Beg-FLOOR(Qty_Beg), P.UOM_K ), 0) QtyBeg_K,
FLOOR(Qty_In) QtyIn_B, ROUND(f_convert_qty(S.PRODUCT, Qty_In-FLOOR(Qty_In), P.UOM_K), 0) QtyIn_K,
FLOOR(Qty_Out) QtyOut_B, ROUND(f_convert_qty(S.PRODUCT, Qty_Out-FLOOR(Qty_Out), P.UOM_K ), 0) QtyOut_K,
FLOOR(Qty_Adj) QtyAdj_B, ROUND(f_convert_qty(S.PRODUCT, Qty_Adj-FLOOR(Qty_Adj), P.UOM_K ), 0) QtyAdj_K,
FLOOR(Qty_End) QtyEnd_B, ROUND(f_convert_qty(S.PRODUCT, Qty_End-FLOOR(Qty_End), P.UOM_K ), 0) QtyEnd_K,
S.LOC_CODE
FROM V_STOCK_DETAIL S
JOIN PRODUCTS P ON P.PRODUCT = S.PRODUCT
WHERE S.Product = :pProduct AND S.WH_CODE = :pWhCode AND S.LOC_CODE = :pLocCode
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.04 0.12 0 10 4 0
Fetch 43 0.05 2.02 0 73 0 83
total 45 0.10 2.15 0 83 4 83
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 164
Rows Row Source Operation
83 PX COORDINATOR (cr=83 pr=0 pw=0 time=2086576 us)
0 PX SEND QC (RANDOM) :TQ10003 (cr=0 pr=0 pw=0 time=0 us)
0 HASH GROUP BY (cr=0 pr=0 pw=0 time=0 us)
0 PX RECEIVE (cr=0 pr=0 pw=0 time=0 us)
0 PX SEND HASH :TQ10002 (cr=0 pr=0 pw=0 time=0 us)
0 HASH GROUP BY (cr=0 pr=0 pw=0 time=0 us)
0 NESTED LOOPS OUTER (cr=0 pr=0 pw=0 time=0 us)
0 MERGE JOIN CARTESIAN (cr=0 pr=0 pw=0 time=0 us)
0 SORT JOIN (cr=0 pr=0 pw=0 time=0 us)
0 NESTED LOOPS (cr=0 pr=0 pw=0 time=0 us)
0 BUFFER SORT (cr=0 pr=0 pw=0 time=0 us)
0 PX RECEIVE (cr=0 pr=0 pw=0 time=0 us)
0 PX SEND BROADCAST :TQ10000 (cr=0 pr=0 pw=0 time=0 us)
1 INDEX RANGE SCAN PRODUCTS_IDX2 (cr=2 pr=0 pw=0 time=62 us)(object id 135097)
0 PX BLOCK ITERATOR (cr=0 pr=0 pw=0 time=0 us)
0 MAT_VIEW ACCESS FULL MV_CONVERT_UOM (cr=0 pr=0 pw=0 time=0 us)
0 BUFFER SORT (cr=0 pr=0 pw=0 time=0 us)
0 BUFFER SORT (cr=0 pr=0 pw=0 time=0 us)
0 PX RECEIVE (cr=0 pr=0 pw=0 time=0 us)
0 PX SEND BROADCAST :TQ10001 (cr=0 pr=0 pw=0 time=0 us)
83 TABLE ACCESS BY INDEX ROWID STOCK (cr=78 pr=0 pw=0 time=1635 us)
83 INDEX RANGE SCAN STOCK_PK (cr=4 pr=0 pw=0 time=458 us)(object id 135252)
0 TABLE ACCESS BY INDEX ROWID MV_TRANS_STOCK (cr=0 pr=0 pw=0 time=0 us)
0 INDEX RANGE SCAN MV_TRANS_STOCK_IDX1 (cr=0 pr=0 pw=0 time=0 us)(object id 143537)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
PX Deq: Join ACK 17 0.00 0.00
PX qref latch 2 0.00 0.00
PX Deq Credit: send blkd 72 1.95 2.00
PX Deq: Parse Reply 26 0.01 0.01
SQL*Net message to client 43 0.00 0.00
PX Deq: Execute Reply 19 0.00 0.01
SQL*Net message from client 43 0.00 0.04
PX Deq: Signal ACK 12 0.00 0.00
enq: PS - contention 1 0.00 0.00
********************************************************************************
The DBMS_XPLAN.DISPLAY_CURSOR output:
SQL> select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'))
PLAN_TABLE_OUTPUT
SQL_ID 402b8st7vt6ku, child number 2
SELECT /*+ gather_plan_statistics */ S.Product, S.WH_CODE, S.RACK, S.BATCH, S.EXP_DATE, FLOOR(Qty_Beg) QtyBeg_B,
ROUND(f_convert_qty(S.PRODUCT, Qty_Beg-FLOOR(Qty_Beg), P.UOM_K ), 0) QtyBeg_K, FLOOR(Qty_In) QtyIn_B,
ROUND(f_convert_qty(S.PRODUCT, Qty_In-FLOOR(Qty_In), P.UOM_K), 0) QtyIn_K, FLOOR(Qty_Out) QtyOut_B,
ROUND(f_convert_qty(S.PRODUCT, Qty_Out-FLOOR(Qty_Out), P.UOM_K ), 0) QtyOut_K, FLOOR(Qty_Adj) QtyAdj_B,
ROUND(f_convert_qty(S.PRODUCT, Qty_Adj-FLOOR(Qty_Adj), P.UOM_K ), 0) QtyAdj_K, FLOOR(Qty_End) QtyEnd_B,
ROUND(f_convert_qty(S.PRODUCT, Qty_End-FLOOR(Qty_End), P.UOM_K ), 0) QtyEnd_K, S.LOC_CODE
FROM V_STOCK_DETAIL S JOIN PRODUCTS P ON P.PRODUCT = S.PRODUCT
WHERE S.Product = :pProduct AND S.WH_CODE = :pWhCode AND S.LOC_CODE = :pLocCode
Plan hash value: 3252950027
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
| 1 | PX COORDINATOR | | 1 | | 83 |00:00:02.25 | 83 | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10003 | 0 | 21 | 0 |00:00:00.01 | 0 | | | |
| 3 | HASH GROUP BY | | 0 | 21 | 0 |00:00:00.01 | 0 | | | |
| 4 | PX RECEIVE | | 0 | 21 | 0 |00:00:00.01 | 0 | | | |
| 5 | PX SEND HASH | :TQ10002 | 0 | 21 | 0 |00:00:00.01 | 0 | | | |
| 6 | HASH GROUP BY | | 0 | 21 | 0 |00:00:00.01 | 0 | | | |
| 7 | NESTED LOOPS OUTER | | 0 | 21 | 0 |00:00:00.01 | 0 | | | |
| 8 | MERGE JOIN CARTESIAN | | 0 | 21 | 0 |00:00:00.01 | 0 | | | |
| 9 | SORT JOIN | | 0 | | 0 |00:00:00.01 | 0 | 73728 | 73728 | |
| 10 | NESTED LOOPS | | 0 | 1 | 0 |00:00:00.01 | 0 | | | |
| 11 | BUFFER SORT | | 0 | | 0 |00:00:00.01 | 0 | 73728 | 73728 | |
| 12 | PX RECEIVE | | 0 | | 0 |00:00:00.01 | 0 | | | |
| 13 | PX SEND BROADCAST | :TQ10000 | 0 | | 0 |00:00:00.01 | 0 | | | |
|* 14 | INDEX RANGE SCAN | PRODUCTS_IDX2 | 1 | 1 | 1 |00:00:00.01 | 2 | | | |
| 15 | PX BLOCK ITERATOR | | 0 | 1 | 0 |00:00:00.01 | 0 | | | |
|* 16 | MAT_VIEW ACCESS FULL | MV_CONVERT_UOM | 0 | 1 | 0 |00:00:00.01 | 0 | | | |
| 17 | BUFFER SORT | | 0 | 21 | 0 |00:00:00.01 | 0 | 73728 | 73728 | |
| 18 | BUFFER SORT | | 0 | | 0 |00:00:00.01 | 0 | 73728 | 73728 | |
| 19 | PX RECEIVE | | 0 | 21 | 0 |00:00:00.01 | 0 | | | |
| 20 | PX SEND BROADCAST | :TQ10001 | 0 | 21 | 0 |00:00:00.01 | 0 | | | |
|* 21 | TABLE ACCESS BY INDEX ROWID| STOCK | 1 | 21 | 83 |00:00:00.01 | 78 | | | |
|* 22 | INDEX RANGE SCAN | STOCK_PK | 1 | 91 | 83 |00:00:00.01 | 4 | | | |
|* 23 | TABLE ACCESS BY INDEX ROWID | MV_TRANS_STOCK | 0 | 1 | 0 |00:00:00.01 | 0 | | | |
|* 24 | INDEX RANGE SCAN | MV_TRANS_STOCK_IDX1 | 0 | 1 | 0 |00:00:00.01 | 0 | | | |
Predicate Information (identified by operation id):
14 - access("P"."PRODUCT"=:PPRODUCT)
16 - access(:Z>=:Z AND :Z<=:Z)
filter("CON"."PRODUCT"=:PPRODUCT)
21 - filter("STOCK"."LOC_CODE"=:PLOCCODE)
22 - access("STOCK"."PRODUCT"=:PPRODUCT AND "STOCK"."WH_CODE"=:PWHCODE)
23 - filter("STS"='N')
24 - access("PRODUCT"=:PPRODUCT AND "WH_CODE"=:PWHCODE AND "LOC_CODE"=:PLOCCODE AND "RACK"="STOCK"."RACK" AND "BATCH"="STOCK"."BATCH" AND "EXP_DATE"="STOCK"."EXP_DATE")
53 rows selected.
Elapsed: 00:00:00.12
I'm looking forward to suggestions on how to improve the performance of this statement.
Thank you very much,
xtanto
xtanto wrote:
Hi sir,
How do I prevent the query from being executed in parallel?
Because, as you can see, I am not issuing any PARALLEL hints in the query.
Thank you,
xtanto
Kristanto,
there are a couple of points to consider:
1. Your SQL*Plus version seems to be outdated. Please use a SQL*Plus version that corresponds to your database version; the AUTOTRACE output, for example, looks odd.
2. I would suggest repeating your exercise using serial execution (the plan, the autotrace, the tracing). You can disable parallel queries by issuing this in your session:
ALTER SESSION DISABLE PARALLEL QUERY;
This way the output of the tools is much more meaningful. However, you might get a different execution plan, so the results might not be representative of your parallel execution.
3. The function calls might pose a problem. If they do, one possible damage limitation has been provided by hoek. Even better would be to replace the PL/SQL function with equivalent plain SQL. However, since you say it does not generate too many rows, it might not hurt too much here. You can check the impact of the functions by running a similar query that omits the function calls.
4. The parallel execution plan contains a MERGE JOIN CARTESIAN operation which could be an issue if the estimates of the optimizer are incorrect. If the serial execution still uses this operation the TKPROF and DBMS_XPLAN.DISPLAY_CURSOR output will reveal whether this is a problem or not.
5. The execution of the statement seems to take only 2-3 seconds in your tests. Is this in the right ballpark? If so, why is this statement considered problematic? How often does it get executed?
6. The statement uses bind variables, so you might have executions that use different execution plans depending on the bind values passed when the statement got optimized. You can use DBMS_XPLAN.DISPLAY_CURSOR with NULL as the "child_number" parameter, or DBMS_XPLAN.DISPLAY_AWR (if you have an AWR license), to check whether you have multiple execution plans for the statement. Please note that older child cursors might already have been aged out of the shared pool, so the AWR repository might be a more reliable source (but only if the statement has been sampled).
7. You have disabled cost based transformations: "_optimizer_cost_based_transformation" = OFF. Why?
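Points 2 and 6 above can be combined into a small test script. This is only a sketch: the bind values are made-up placeholders (replace them with a representative product/warehouse/location), and the SQL_ID is the one reported earlier in the thread.

```sql
-- Sketch only: serial re-test plus plan-history check (bind values are placeholders)
ALTER SESSION DISABLE PARALLEL QUERY;

VARIABLE pProduct VARCHAR2(30)
VARIABLE pWhCode  VARCHAR2(10)
VARIABLE pLocCode VARCHAR2(10)
EXEC :pProduct := 'P0001'; :pWhCode := 'WH01'; :pLocCode := 'L001';

-- run the original statement here with the /*+ gather_plan_statistics */ hint, then:
SELECT * FROM TABLE(dbms_xplan.display_cursor(NULL, NULL, 'ALLSTATS LAST'));

-- point 6: show all cached child cursors / plans for the statement
SELECT * FROM TABLE(dbms_xplan.display_cursor('402b8st7vt6ku', NULL, 'TYPICAL'));
```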
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/
Similar Messages
-
Top Time Event "PX Deq Credit: send blkd"
Hi,
I have this event on TOP of my Wait Events Statspack Report. Anyone know how to minimize the time of this event?? Increase the number os parallel_max_servers will help me???
Tks,
Paulo.
What's your Oracle version and OS?
Are you using 8i with OPS (Oracle Parallel Server) ? -
PX Deq Credit: send blkd At AWR "Top 5 Timed Events"
PX Deq Credit: send blkd At Top 5 Timed Events
Hi ,
Below are examples of "Top 5 Timed Events" in my Staging data warehouse database.
ALWAYS , at the most Top 5 Timed Events is the event : PX Deq Credit: send blkd.
Oracle says that it is an idle event, but since it is always at the top of my AWR reports and all the other events are far behind it, I have a feeling that it may indicate a problem.
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time Wait Class
PX Deq Credit: send blkd 3,152,038 255,152 81 95.6 Other
direct path read 224,839 4,046 18 1.5 User I/O
CPU time 3,217 1.2
direct path read temp 109,209 2,407 22 0.9 User I/O
db file scattered read 31,110 1,436 46 0.5 User I/O
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time Wait Class
PX Deq Credit: send blkd 6,846,579 16,359 2 50.4 Other
direct path read 101,363 5,348 53 16.5 User I/O
db file scattered read 105,377 4,991 47 15.4 User I/O
CPU time 3,795 11.7
direct path read temp 70,208 940 13 2.9 User I/O
Here is some more information:
It's a 500GB database on Linux Red Hat 4 with 8 CPUs and 16GB of memory.
It's based on an ASM file system.
From the spfile:
SQL> show parameter parallel
NAME_COL_PLUS_SHOW_PARAM VALUE_COL_PLUS_SHOW_PARAM
parallel_adaptive_multi_user TRUE
parallel_automatic_tuning FALSE
parallel_execution_message_size 4096
parallel_instance_group
parallel_max_servers 240
parallel_min_percent 0
parallel_min_servers 0
parallel_server FALSE
parallel_server_instances 1
parallel_threads_per_cpu 2
recovery_parallelism 0
Thanks.
Metalink Note:280939.1 said:
"Consider the use of different number for the DOP on your tables.
On large tables and their indexes use high degree like #CPU.
For smaller tables use DOP (#CPU)/2 as start value.
Question 1:
"On large tables" --> Does Metalink mean a table that is large in size (GB) or large in number of rows?
That's one of those vague things that people say without thinking that it could have different meanings. Most people assume that a table that is large in GB is also large in number of rows.
As far as PQ is concerned, I think that large numbers of rows may be more significant than large size, because (a) in multi-layer queries you pass rows around and (b) although the initial rows may be big, you might not need all the columns to run the query, so GB becomes less relevant once the data scan is complete.
As a strategy for keeping DOP on the tables, by the way, it sounds quite
good. The difficulty is in the fine-tuning.
Question 2:
I checked how many parallel operations had been downgraded and found that less than 4% had been downgraded. Do you think I still need to consider reducing the DOP?
Having lots of slaves means you are less likely to get downgrades. But it's the number of slaves active for a single query that introduces the dequeue waits - so yes, I think you do need to worry about the DOP. (Counter-intuitively, the few downgraded queries may have been performing better than the ones running at full DOP.)
The difficulty is this - do you need to choose a strategy, or do you just need to fix a couple of queries.
Strategy 1: set DOP to 1 on all tables and indexes, then hint all queries that you think need to run parallel, possibly identifying a few tables and indexes that could benefit from an explicit setting for DOP.
Strategy 2: set DOP to #CPUs on all very large tables and their indexes and #CPUs/2 on the less large tables and their indexes. Check for any queries that perform very badly and either hint different degrees, or fine-tune the degree on a few tables.
Strategy 3: leave parallelism at default, identify particularly badly performing queries and either put in hints for DOP, or use them to identify any tables that need specific settings for DOP.
Starting from scratch, I would want to adopt strategy 1.
Starting from where you are at present, I would spend a little time checking to see if I could get some clues from any extreme queries - i.e. following strategy 3; but if under a lot of time pressure and saw no improvement I would switch to strategy 2.
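As a rough illustration, strategy 1 might look like the following in SQL; the table, index, hint alias, and degree shown here are hypothetical examples, not objects from the poster's system:

```sql
-- Strategy 1 sketch: make serial execution the default ...
ALTER TABLE sales_fact PARALLEL 1;
ALTER INDEX sales_fact_pk PARALLEL 1;

-- ... and request parallelism explicitly only in the queries that need it
SELECT /*+ parallel(s, 8) */ COUNT(*)
FROM   sales_fact s
WHERE  s.sale_date >= DATE '2009-01-01';
```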
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk -
What's wrong with this SQL statement??
Hello all, I am trying to run the query below out of PerfSheet (Tanel Poder's) performance Excel chart, but I get the error below. The database is on 9.2.
What is wrong with this SQL statement?
http://blog.tanelpoder.com/2008/12/28/performance-visualization-made-easy-perfsheet-20-beta/
select * from (
with fsq as (
select /*+ materialize */
i.dbid
, i.instance_name
, i.instance_number
-- , trunc(s.snap_time, 'DD') DAY
-- , to_number(to_char(s.snap_time, 'HH24')) HOUR
-- -- , to_char(s.snap_time, 'MI') MINUTE
-- , 0 MINUTE
, trunc(
lag(s.snap_time, 1)
over(
partition by
v.dbid
, i.instance_name
, v.instance_number
, v.event
order by
s.snap_time
, 'HH24'
) SNAP_TIME
, v.event_type EVENT_TYPE
, v.event EVENT_NAME
, nvl(
decode(
greatest(
time_waited_micro,
nvl(
lag(time_waited_micro,1,0)
over(
partition by
v.dbid
, i.instance_name
, v.instance_number
, v.event
order by v.snap_id
, time_waited_micro
time_waited_micro,
time_waited_micro - lag(time_waited_micro,1,0)
over (
partition by
v.dbid
, i.instance_name
, v.instance_number
, v.event
order by v.snap_id
time_waited_micro
, time_waited_micro
) / 1000000 SECONDS_SPENT
, total_waits WAIT_COUNT
from
(select distinct dbid, instance_name, instance_number from stats$database_instance) i
, stats$snapshot s
, ( select
snap_id, dbid, instance_number, 'WAIT' event_type, event, time_waited_micro, total_waits
from
stats$system_event
where
event not in (select event from stats$idle_event)
union all
select
snap_id, dbid, instance_number,
case
when name in ('CPU used by this session', 'parse time cpu', 'recursive cpu usage') then 'CPU'
when name like 'OS % time' then 'OS'
else 'STAT'
end,
name , value, 1
from
stats$sysstat
-- where name in ('CPU used by this session', 'parse time cpu', 'recursive cpu usage')
-- or name like('OS % time')
-- or 1 = 2 -- this will be a bind variable controlling whether all stats need to be returned
) v
where
i.dbid = s.dbid
and i.dbid = v.dbid
and s.dbid = v.dbid
and s.snap_id = v.snap_id
and s.snap_time between '%FROM_DATE%' and '%TO_DATE%'
and i.instance_name = '%INSTANCE%'
select * from (
select
instance_name
, instance_number
, snap_time
, trunc(snap_time, 'DD') DAY
, to_char(snap_time, 'HH24') HOUR
, to_char(snap_time, 'MI') MINUTE
, event_type
, event_name
, seconds_spent
, wait_count
, ratio_to_report(seconds_spent) over (
-- partition by (to_char(day, 'YYYYMMDD')||to_char(hour,'09')||to_char(minute, '09'))
partition by (snap_time)
) ratio
from fsq
where
snap_time is not null -- lag(s.snap_time, 1) function above will leave time NULL for first snapshot
-- to_char(day, 'YYYYMMDD')||to_char(hour,'09')||to_char(minute, '09')
-- > ( select min(to_char(day, 'YYYYMMDD')||to_char(hour,'09')||to_char(minute, '09')) from fsq)
where ratio > 0
order by
instance_name
, instance_number
, day
, hour
, minute
, event_type
, seconds_spent desc
, wait_count desc
Error at line 6
ORA-00604: error occurred at recursive SQL level 1
ORA-00972: identifier is too long
Hi Alex,
Subquery factoring, a.k.a. the with-clause, should be possible on 9.2:
http://download.oracle.com/docs/cd/B10501_01/server.920/a96540/statements_103a.htm#2075888
(used it myself as well on 9.2)
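For what it's worth, here is a minimal subquery-factoring statement that should run on 9.2; it only touches DUAL, nothing schema-specific:

```sql
WITH t AS (SELECT 1 AS n FROM dual)
SELECT n FROM t;
```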
@OP
I recall having problems myself using PL/SQL Developer and trying to get the with clause to work on 9.2 some years ago.
A workaround might be to create a view based on the query.
Also, your error message is "ORA-00972: identifier is too long"...
http://download.oracle.com/docs/cd/B19306_01/server.102/b14219/e900.htm#sthref419
Can't test things currently, no 9.2 available at the moment, but perhaps tomorrow I'll have a chance. -
Hi
Please can you help me create this SQL statement?
I have two tables: table1 consists of data (AAA, BBB), and table2 consists of data (AAA).
There is a relation between table1 and table2, for example (Invoice_id).
My question is: how can I create an SQL statement to display only the data in table1 (BBB) that is not available in table2?
Thanks and best regards.
Khaled
Try this:
SQL> ed
Wrote file afiedt.buf
1 with t as (select 'AAA' col1,1 inv_id from dual
2 UNION select 'BBB',1 from dual)
3 , s as (select 'AAA' col1,1 inv_id from dual)
4 SELECT col1,inv_id FROM t
5* WHERE NOT EXISTS (SELECT 1 FROM s WHERE inv_id = t.inv_id AND col1 = t.col1)
SQL> /
COL INV_ID
BBB 1
SQL> -
What's wrong with this SQL Statement?
I hope somebody can help explain to me what is wrong with the following SQL statement in my recordset. It does not return an error, but it will only filter records on the first variable listed, 'varFirstName%'. If I try to use any other variable on my search form, for example LastName, it returns all records. Why is it doing this?
Here is the SQL statement:
SELECT *
FROM [Sysco Food Show Contacts]
WHERE FirstName LIKE 'varFirstName%' AND LastName LIKE
'varLastName%' AND OrganizationName LIKE 'varOrganizationName%' AND
Address LIKE 'varAddress%' AND City LIKE 'varCity%' AND State LIKE
'varState' AND PostalCode LIKE 'varPostalCode%'
The variables are defined as below:
Name Default Value Run-Time Value
varFirstName % Request.Form("FirstName")
varLastName % Request.Form("LastName")
...and such with all variables defined the same way.
Any help would be much appreciated. I am pulling my hair out
trying to make this Search Form work.
Thanks, mparsons2000
PLEASE IGNORE THIS QUESTION!
There was nothing wrong with the statement. I had made another STUPID mistake! -
Can someone help me correct this SQL statement in a JSP page?
I've been getting java.sql.SQLException: Incorrect syntax error for one of my nested SQL statements. I can't seem to find similar examples online, so I reckon someone here could help; I'd really appreciate it.
As I'm putting the nested SQL in a JSP page, it has to be built with lots of quotes, which is very confusing when the statements are nested.
heres the sql statement without those "" that i want to use:
select top 5 * from(
select top+"'"+offset+"'"+" * from prod where cat=" +"'" cat "'"+"
)order by prodID desc
When I put this in my JSP page, I had to add quotes, and it became:
String sql = "select top 5 * from("+"select top"+"'"+offset+"'"+" * from prod where cat=" +"'" +cat+ "'"+")order by prodID desc";cat=" +"'" cat "'"+")order by prodID desc";
All those quotes are confusing me to no end, so I can't figure out what the correct syntax should be. The error says the syntax error is near the offset.
If offset is, say, 10, and cat is, say, "new", then it looks like you're going to produce the SQL:
select top 5 * from(
select top '10' * from prod where cat='new'
)order by prodID desc
That looks exactly like incorrect syntax to me... top almost certainly can't handle a string literal as its operand... you almost certainly would want "top 10" instead of "top '10'"...
If you use PreparedStatement, you don't have to remember what you quote and what you don't and you can have your SQL in a single static final string to boot... -
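A sketch of what that could look like for the query above. This is a fragment, not a complete class: it assumes `import java.sql.*;` and that a JDBC `Connection` is obtained elsewhere, and `prod`, `cat`, and `prodID` are simply the names used in the original post. Since `TOP` generally cannot take a bind parameter, the offset is parsed to an int and concatenated, while the category value is bound:

```java
// Sketch (place inside your class): parameterised version of the nested query.
static ResultSet topFiveForCategory(Connection connection, String cat,
                                    String offsetParam) throws SQLException {
    int offset = Integer.parseInt(offsetParam);   // fail fast on non-numeric input
    String sql = "SELECT TOP 5 * FROM ("
               + "SELECT TOP " + offset + " * FROM prod WHERE cat = ?"
               + ") ORDER BY prodID DESC";
    PreparedStatement ps = connection.prepareStatement(sql);
    ps.setString(1, cat);    // the category value is bound, never concatenated
    return ps.executeQuery();
}
```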
Need help on how to code this SQL statement! (one key has leading zeros)
Good day, everyone!
First of all, I apologize if this isn't the best forum. I thought of putting it in the SAP Oracle database forum, but the messages there seemed to be geared outside of ABAP SELECTs and programming. Here's my question:
I would like to join the tables FMIFIIT and AUFK. The INNER JOIN will be done between FMIFIIT's MEASURE (Funded Program) field, which is char(24), and AUFK's AUFNR (Order Number) field, which is char(12).
The problem I'm having is this: all of the values in AUFNR are preceded by two zeros. For example, if I have a MEASURE value of '5200000017', the corresponding value in AUFNR is '005200000017'. Because I have my SQL statement coded to just match the two fields, I obviously get no records returned, I assume because of those leading zeros.
Unfortunately, I don't have a lot of experience coding SQL, so I'm not sure how to resolve this.
Please help! As always, I will award points to ALL helpful responses!
Thanks!!
Dave
Dave Packard wrote:
> Good day, everyone!
> I would like to join the tables FMIFIIT and AUFK. The INNER JOIN will be done between FMIFIIT's MEASURE (Funded Program) field, which is char(24), and AUFK's AUFNR (Order Number) field, which is char(12).
>
> The problem I'm having is this: all of the values in AUFNR are preceded by two zeros. For example, if I have a MEASURE value of '5200000017', the corresponding value in AUFNR is '005200000017'. Because I have my SQL statement coded to just match the two fields, I obviously get no records returned, I assume because of those leading zeros.
> Dave
You can't do a join like this in SAP's Open SQL. You could do it in real SQL, i.e. EXEC SQL ... ENDEXEC, by using SUBSTR to strip off the leading zeros from AUFNR, but this would not be a good idea because (a) modifying a column in the WHERE clause will stop any index on that column being used, and (b) using real SQL rather than Open SQL is really not something that should be encouraged, for database portability reasons etc.
Forget about a database join and do it in two stages: get your AUFK data into an itab, strip off the leading zeros, and then use FOR ALL ENTRIES (FAE) to get the FMIFIIT data (or do it the other way round).
I do hope you've got an index on your FMIFIIT MEASURE field (we don't have one here); otherwise your SELECT could be slow if the table holds a lot of data. -
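The two-stage approach above might look roughly like this in ABAP. This is a sketch under assumptions: the AUTYP restriction is an invented example of "your order selection", and you should check the actual type and length of FMIFIIT-MEASURE in your system:

```abap
* Sketch: two-step replacement for the impossible join (selection criteria assumed)
DATA: lt_aufnr   TYPE STANDARD TABLE OF aufk-aufnr,
      lt_keys    TYPE STANDARD TABLE OF fmifiit-measure,
      lv_measure TYPE fmifiit-measure,
      lt_fmifiit TYPE STANDARD TABLE OF fmifiit.

* 1. Get the orders you are interested in
SELECT aufnr FROM aufk INTO TABLE lt_aufnr
  WHERE autyp = '01'.                           " example restriction, adapt as needed

* 2. Strip the leading zeros so AUFNR matches MEASURE
LOOP AT lt_aufnr INTO lv_measure.
  SHIFT lv_measure LEFT DELETING LEADING '0'.   " '005200000017' -> '5200000017'
  APPEND lv_measure TO lt_keys.
ENDLOOP.

* 3. FOR ALL ENTRIES instead of a join (guard against an empty driver table!)
IF lt_keys IS NOT INITIAL.
  SELECT * FROM fmifiit INTO TABLE lt_fmifiit
    FOR ALL ENTRIES IN lt_keys
    WHERE measure = lt_keys-table_line.
ENDIF.
```

Note the IS NOT INITIAL guard: FOR ALL ENTRIES with an empty driver table would select every row of FMIFIIT.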
Why is this SQL statement wrong?
When I try to update my database using the following statement:
String query = "UPDATE Flights" +
"SET AircraftType ='" + inputPanel.type.getText() +
"', EnterPoint = '" + inputPanel.point.getText() +
"',GroundSpeed = " + Integer.parseInt(inputPanel.speed.getText()) +
",Altitude =" + Integer.parseInt(inputPanel.altitude.getText()) +
"WHERE FlightNumber =" + inputPanel.flight.getText();
//where speed and altitude are ints
it always says: Syntax error in UPDATE statement.
I cannot understand why, because I think this is a standard SQL statement.
Please help me!
Thanks in advance
This part is causing you problems:
"WHERE FlightNumber =" + inputPanel.flight.getText();
if FlightNumber is an integer then you should convert the text to an int:
"WHERE FlightNumber =" + Integer.parseInt(inputPanel.flight.getText());
or if it is a String then:
"WHERE FlightNumber ='" + inputPanel.flight.getText()+ "'";
Jamie -
Is this SQL Statement possible?
String PreStatement="Insert into Vehicle (Threshold) VALUES (?) AND where ID='" + id + "'" ;
Prepared=connection.prepareStatement(PreStatement);
Prepared.setDouble(1, percent);
Prepared.executeUpdate();
I'm trying to add a percentage value to that one column where the ID in the database is equal to my id variable. If I do this
PreStatement="Insert into Vehicle (Threshold) VALUES (?); AND where ID='" + id + "'" ;
Prepared=connection.prepareStatement(PreStatement);
Prepared.setDouble(1, percent);
Prepared.executeUpdate();
It does add the percentage value to the database, but not in the right row. Does anyone know what I'm doing wrong?
Ok, when I add a new entry I default the threshold to 0.0, and I can see that in my database, so I do what you said:
statement.executeQuery("Update Vehicle SET VALUE = '" + percent + " where ID= '" + id + "'" );
I run my program and get the following:
com.mckoi.database.jdbc.MSQLException: Encountered "1" at line 1, column 45.
Was expecting one of:
<EOF>
"where" ...
"limit" ...
"=" ...
"==" ...
">" ...
"<" ...
">=" ...
"<=" ...
<NOTEQ> ...
"is" ...
"like" ...
"not" ...
"and" ...
"or" ...
"+" ...
"||" ...
"regex" ...
<REGEX_LITERAL> ...
"in" ...
"between" ...
at com.mckoi.database.jdbcserver.AbstractJDBCDatabaseInterface.handleExecuteThrowable(AbstractJDBCDatabaseInterface.java:265)
at com.mckoi.database.jdbcserver.AbstractJDBCDatabaseInterface.execQuery(AbstractJDBCDatabaseInterface.java:479)
at com.mckoi.database.jdbcserver.JDBCDatabaseInterface.execQuery(JDBCDatabaseInterface.java:251)
at com.mckoi.database.jdbc.MConnection.executeQuery(MConnection.java:453)
at com.mckoi.database.jdbc.MConnection.executeQueries(MConnection.java:436)
at com.mckoi.database.jdbc.MStatement.executeQueries(MStatement.java:193)
at com.mckoi.database.jdbc.MStatement.executeQuery(MStatement.java:167)
at com.mckoi.database.jdbc.MStatement.executeQuery(MStatement.java:222)
at gov.sandia.emc.ema.bdrs.Database.computePercentage(Database.java:981)
at gov.sandia.emc.ema.bdrs.Database.search(Database.java:936)
at gov.sandia.emc.ema.bdrs.CombinedInterface$5.actionPerformed(CombinedInterface.java:610)
(snip - Swing/AWT event-dispatch frames)
CAUSE: com.mckoi.database.sql.ParseException: Encountered "1" at line 1, column 45.
(snip - same expected-token list and stack trace as above)
I know there's something wrong with my SQL Statement, what do I need to add? -
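Two things trip the parser in that last statement: the literal after SET VALUE = ' never closes its quote before the where, and executeQuery is used for DML. Since the row already exists (Threshold defaults to 0.0 on insert), this should be an UPDATE rather than an INSERT. A hedged sketch with a PreparedStatement (the column name Threshold and a string-typed ID are assumptions based on the earlier attempts):

```java
public class VehicleThreshold {
    // UPDATE the existing row rather than INSERTing a new one; placeholders
    // avoid the unbalanced quotes that caused the parse error.
    static String buildUpdateSql() {
        return "UPDATE Vehicle SET Threshold = ? WHERE ID = ?";
    }

    public static void main(String[] args) {
        System.out.println(buildUpdateSql());
        // With the real connection from the post (assumed):
        // PreparedStatement ps = connection.prepareStatement(buildUpdateSql());
        // ps.setDouble(1, percent);
        // ps.setString(2, id);
        // ps.executeUpdate();   // executeUpdate, not executeQuery, for DML
    }
}
```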
How to rewrite this SQL statement
I have tableA and tableB as below. The following query gets MAX(process_date) during the month of January from the two tables, tableA and tableB, with different criteria.
I want to expand the query to return MAX(process_date) for BOTH the current month (txn_date in all of Jan 2013) and the prior month (txn_date in all of Dec 2012).
Is it possible? If so, how can I modify this query to return MAX(process_date) for the current and prior months in a single query?
SELECT MAX(process_date) AS curr_month_amount FROM
( SELECT process_date AS process_date,
         amount1 AS amount
  FROM tableA
  WHERE id = 1 AND
        process_date = (SELECT MAX(process_date)
                        FROM tableA
                        WHERE id = 1 AND txn_date BETWEEN TO_DATE('01-JAN-2013','dd-mon-yyyy') AND TO_DATE('31-JAN-2013','dd-mon-yyyy'))
        AND amount1 = 0
  UNION
  SELECT MAX(process_date) AS process_date,
         0 AS amount
  FROM tableB
  WHERE id = 1 AND txn_code = 'B' AND txn_date BETWEEN TO_DATE('01-JAN-2013','dd-mon-yyyy') AND TO_DATE('31-JAN-2013','dd-mon-yyyy') )
Future state of the SQL:
A single SQL statement that a) looks at txn_date between 1/1/2013 - 1/31/2013 to return max(process_date) for the current month, and
b) looks at txn_date between 12/1/2012 - 12/31/2012 to return max(process_date) for the prior month.
NOTE: I want to pass the current_month_end date 1/31/2013 to this SQL so it will calculate for the current and prior months.
Expected output:
For id=1 in the modified query,
prior month max(process_date) should be 12/19/2012
current month max(process_date) should be 1/15/2013
For id=5 in the modified query,
prior month max(process_date) should be NULL
current month max(process_date) should be 1/16/2013
SQL to create tableA and tableB, with insert statements:
CREATE TABLE tableA
( id NUMBER,
  process_date DATE,
  amount1 NUMBER,
  txn_code VARCHAR2(1),
  txn_date DATE
);
CREATE TABLE tableB
( id NUMBER,
  process_date DATE,
  amount2 NUMBER,
  txn_code VARCHAR2(1),
  txn_date DATE
);
INSERT INTO tableA
( id, process_date, amount1, txn_code, txn_date )
values
( 1, to_date('01/12/2013','mm/dd/yyyy'), 500, 'A', to_date('01/15/2013','mm/dd/yyyy') );
INSERT INTO tableA
( id, process_date, amount1, txn_code, txn_date )
values
( 1, to_date('01/13/2013','mm/dd/yyyy'), 100, 'A', to_date('01/14/2013','mm/dd/yyyy'));
INSERT INTO tableA
( id, process_date, amount1, txn_code , txn_date)
values
( 1, to_date('01/14/2013','mm/dd/yyyy'), 0, 'A', to_date('01/15/2013','mm/dd/yyyy'));
INSERT INTO tableB
( id, process_date, amount2, txn_code, txn_date)
values
( 1, to_date('01/15/2013','mm/dd/yyyy'), 0, 'B', to_date('01/31/2013','mm/dd/yyyy'));
INSERT INTO tableA
( id, process_date, amount1, txn_code, txn_date )
values
( 1, to_date('12/01/2012','mm/dd/yyyy'), 500, 'A', to_date('12/31/2012','mm/dd/yyyy') );
INSERT INTO tableA
( id, process_date, amount1, txn_code , txn_date)
values
( 1, to_date('12/23/2012','mm/dd/yyyy'), 100, 'A', to_date('12/14/2012','mm/dd/yyyy'));
INSERT INTO tableA
( id, process_date, amount1, txn_code, txn_date )
values
( 1, to_date('12/19/2012','mm/dd/yyyy'), 0, 'A', to_date('12/15/2012','mm/dd/yyyy'));
INSERT INTO tableB
( id, process_date, amount2, txn_code, txn_date)
values
( 1, to_date('12/15/2012','mm/dd/yyyy'), 0, 'C', to_date('12/31/2012','mm/dd/yyyy'));
INSERT INTO tableA
( id, process_date, amount1, txn_code, txn_date )
values
( 5, to_date('01/11/2013','mm/dd/yyyy'), 500, 'A', to_date('01/09/2013','mm/dd/yyyy') );
INSERT INTO tableA
( id, process_date, amount1, txn_code, txn_date )
values
(5, to_date('01/12/2013','mm/dd/yyyy'), 0, 'A', to_date('01/19/2013','mm/dd/yyyy'));
INSERT INTO tableA
( id, process_date, amount1, txn_code, txn_date )
values
( 5, to_date('01/15/2013','mm/dd/yyyy'), 10 , 'A', to_date('01/09/2013','mm/dd/yyyy'));
INSERT INTO tableB
( id, process_date, amount2, txn_code, txn_date)
values
( 5, to_date('01/16/2013','mm/dd/yyyy'), 1, 'B', to_date('01/09/2013','mm/dd/yyyy'));
INSERT INTO tableA
( id, process_date, amount1, txn_code, txn_date )
values
( 5, to_date('12/11/2012','mm/dd/yyyy'), 500, 'A', to_date('12/09/2012','mm/dd/yyyy') );
INSERT INTO tableA
( id, process_date, amount1, txn_code, txn_date )
values
(5, to_date('12/12/2012','mm/dd/yyyy'), 0, 'A', to_date('12/19/2012','mm/dd/yyyy'));
INSERT INTO tableA
( id, process_date, amount1, txn_code, txn_date )
values
( 5, to_date('12/15/2012','mm/dd/yyyy'), 10 , 'A', to_date('12/09/2012','mm/dd/yyyy'));
INSERT INTO tableB
( id, process_date, amount2, txn_code, txn_date)
values
( 5, to_date('12/16/2012','mm/dd/yyyy'), 1, 'C', to_date('12/09/2012','mm/dd/yyyy'));
commit;
Maybe
SELECT id,
       month,
       MAX(process_date) AS curr_month_amount
FROM (SELECT id,
             process_date AS process_date,
             amount1 AS amount,
             TO_CHAR(txn_date,'mm') AS month
      FROM tableA
      WHERE (process_date,TO_CHAR(txn_date,'mm')) in
            (SELECT MAX(process_date),TO_CHAR(txn_date,'mm')
             FROM tableA
             WHERE txn_date between ADD_MONTHS(TO_DATE('01-JAN-2013','dd-mon-yyyy'),-1)
                                AND TO_DATE('31-JAN-2013','dd-mon-yyyy')
               AND amount1 = 0
             GROUP BY TO_CHAR(txn_date,'mm'))
        AND amount1 = 0
      UNION
      SELECT id,
             MAX(process_date) AS process_date,
             0 AS amount,
             TO_CHAR(txn_date,'mm') AS month
      FROM tableB
      WHERE txn_code = 'B'
        AND txn_date between ADD_MONTHS(TO_DATE('01-JAN-2013','dd-mon-yyyy'),-1)
                         AND TO_DATE('31-JAN-2013','dd-mon-yyyy')
      GROUP BY id,TO_CHAR(txn_date,'mm'))
GROUP BY id,month;
ID    MONTH    CURR_MONTH_AMOUNT
5     01       01/16/2013
1     12       12/19/2012
1     01       01/15/2013
Regards
Etbin -
Did I misuse 'DISTINCT' in this SQL statement?
I have a table TI_ORDER that contains only the following data:
O_ID E_CODE M_SEQ B_SEQ
VARCHAR NUMBER NUMBER NUMBER
CE013 1 1 1
CE013 1 2 1
CE013 1 3 1
CE013 1 4 1
CE013 1 5 1
CE013 1 6 1
CE013 1 6 2
CE013 1 7 1
CE013 1 8 1
CE013 1 8 2
CE013 1 9 1
CE013 1 10 1
CE013 1 10 2
CE013 2 1 1
CE013 2 2 1
CE013 2 3 1
CE013 2 4 1
CE013 2 5 1
CE013 2 6 1
CE013 2 6 2
CE013 2 7 1
CE013 2 8 1
CE013 2 8 2
CE013 2 9 1
CE013 2 10 1
CE013 2 10 2
If I execute this SQL:
==============================================
SELECT a.o_id, a.e_code,
COUNT(a.o_id) OVER (PARTITION BY a.o_id, a.e_code) AS cnt
FROM ( SELECT DISTINCT o_id, e_code, m_seq FROM ti_order ) a
WHERE a.o_id = 'CE013'
==============================================
It will show:
==============================================
CE013 1 10
CE013 1 10
CE013 1 10
CE013 1 10
CE013 1 10
CE013 1 10
CE013 1 10
CE013 1 10
CE013 1 10
CE013 1 10
CE013 2 10
CE013 2 10
CE013 2 10
CE013 2 10
CE013 2 10
CE013 2 10
CE013 2 10
CE013 2 10
CE013 2 10
CE013 2 10
=============================================
If I add 'DISTINCT' to previous SQL statement:
============================================
SELECT DISTINCT a.o_id, a.e_code,
COUNT(a.o_id) OVER (PARTITION BY a.o_id, a.e_code) AS cnt
FROM ( SELECT DISTINCT o_id, e_code, m_seq FROM ti_order ) a
WHERE a.o_id = 'CE013'
============================================
It displays:
============================================
CE013 1 13
CE013 2 13
============================================
Why does it not show the following output, as I want?
============================================
CE013 1 10
CE013 2 10
============================================
Looks like you have stumbled across a bug. The output below indicates that 9i (example 2) gets the correct answer here while 8i (example 1) does not. You ARE on 8i, right?
In any case, there is no need to use analytic functions here, good old COUNT is fine to get the correct answer (example 3).
--------------------------- example 1 -----------------------------
Oracle8i Enterprise Edition Release 8.1.7.4.0 - Production
With the Partitioning option
JServer Release 8.1.7.4.0 - Production
SQL> CREATE TABLE table_name (
2 o_id VARCHAR2 (5),
3 e_code NUMBER (2),
4 m_seq NUMBER (2),
5 b_seq NUMBER (2));
Table created.
SQL> INSERT INTO TABLE_NAME VALUES ('CE013','1','1','1');
1 row created.
SQL> INSERT INTO TABLE_NAME VALUES ('CE013','1','2','1');
(snip)
SQL> INSERT INTO TABLE_NAME VALUES ('CE013','2','10','2');
1 row created.
SQL> SELECT DISTINCT a.o_id, a.e_code,
2 COUNT (a.o_id) OVER (PARTITION BY a.o_id, a.e_code) AS cnt
3 FROM (SELECT DISTINCT o_id, e_code, m_seq
4 FROM table_name) a
5 WHERE a.o_id = 'CE013';
O_ID E_CODE CNT
CE013 1 13
CE013 2 13
SQL>
--------------------------- example 2 -----------------------------
Oracle9i Enterprise Edition Release 9.2.0.4.0 - 64bit Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.4.0 - Production
SQL> CREATE TABLE table_name (
2 o_id VARCHAR2 (5),
3 e_code NUMBER (2),
4 m_seq NUMBER (2),
5 b_seq NUMBER (2));
SQL> INSERT INTO TABLE_NAME VALUES ('CE013','1','1','1');
1 row created.
SQL> INSERT INTO TABLE_NAME VALUES ('CE013','1','2','1');
1 row created.
(snip)
SQL> INSERT INTO TABLE_NAME VALUES ('CE013','2','10','2');
1 row created.
SQL> SELECT DISTINCT a.o_id, a.e_code,
2 COUNT (a.o_id) OVER (PARTITION BY a.o_id, a.e_code) AS cnt
3 FROM (SELECT DISTINCT o_id, e_code, m_seq
4 FROM table_name) a
5 WHERE a.o_id = 'CE013';
O_ID E_CODE CNT
CE013 1 10
CE013 2 10
SQL>
--------------------------- example 3 -----------------------------
Oracle8i Enterprise Edition Release 8.1.7.4.0 - Production
With the Partitioning option
JServer Release 8.1.7.4.0 - Production
SQL> SELECT a.o_id, a.e_code, COUNT (*)
2 FROM (SELECT DISTINCT o_id, e_code, m_seq
3 FROM table_name) a
4 WHERE a.o_id = 'CE013'
5 GROUP BY a.o_id, a.e_code;
O_ID E_CODE COUNT(*)
CE013 1 10
CE013 2 10
SQL>
Padders -
Sql statement for a table name with a space in between
Hi,
I just noticed that one of my tables for Access consists of two words. It is called "CURRENT CPL". How would I put this table name into an SQL statement? When I did what I normally do, it only reads the CURRENT and thinks that's the table name.
Thanks
Feng
> I just noticed that one of my tables for Access consists of two words. It is called "CURRENT CPL". How would I put this table name into an SQL statement? When I did what I normally do, it only reads the CURRENT and thinks that's the table name.
That is called a quoted identifier. The SQL (not Java) for this would look like this:
select "my field" from "CURRENT CPL"
The double quote is the correct character to use. Note that quoted identifiers are case-sensitive; normal SQL is not. -
BPC 7 MS: Which SQL statements are created for writing with input schedule?
Hi,
I wanted to know which SQL statements are created and executed if a user submits values using an Excel input schedule to an application.
When I check the corresponding MS SQL Server log files, I see that data is read from the three partitions belonging to the application and put into a temporary table, but I can't find anything about writing back to the application (presumably the WB partition...) in the log.
There are some cryptic entries in the log file as well, but they are not human-readable... are there any BPC log files that could tell me which SQL statements are created and executed to write back the new values to the application? Thanks!
Hi,
As far as I know, when a user sends a data entry from an Excel schedule, it will be written to the WB table of the application (for each application, you have 3 data tables: WB, Fact2 and Fact).
I presume that the SQL statement may be an INSERT or UPDATE statement.
Technically, the update is done by the send governor service (hosted on your BPC application server).
There is no log that will show you the SQL Statement besides a SQL trace that you have to setup in SQL Server 2005 Manager Studio.
btw, the data are written to the relational database but are read from the OLAP cube. The OLAP cube is split into 3 partitions (ROLAP on the WB table, MOLAP on fact and fact2), which means that every new entry in WB will be automatically "updated" in the cube.
Some DM packages can directly write data in fact2 table. In this case you need to reprocess the cube to get it loaded. -
What is this SQL statement doing?
Hi ABAPers,
Can somebody help me understand what the below SQL statement is doing?
SELECT * FROM /BIC/ADSO00 AS tbl1
INNER JOIN /BIC/PMASTER AS tbl2
ON tbl1~field1 = tbl2~field1
AND tbl1~field2 = tbl2~field2
INTO CORRESPONDING FIELDS OF TABLE gt_itab
FOR ALL ENTRIES IN SOURCE_PACKAGE
WHERE tbl1~field3 = SOURCE_PACKAGE-field3
AND tbl1~field4 BETWEEN lv_minper AND lv_maxper
AND tbl1~field5 = SOURCE_PACKAGE-field5
AND tbl1~field6 = '0100'
AND tbl2~OBJVERS = 'A'.
thanks in advance !!
Bharath S
Hi Bharath,
tbl1 is your /BIC/ADSO00 table,
tbl2 is your /BIC/PMASTER second table.
It selects all rows from /BIC/ADSO00 and the matching rows from /BIC/PMASTER, using field1 and field2 as the join key (unique across the two tables), and moves the result into the internal table gt_itab (internal tables are program structures for holding values, similar to DB tables, that persist until program execution ends). The selection applies only to entries in SOURCE_PACKAGE, another internal table.
Conditions for the selections are
tbl1~field3 = SOURCE_PACKAGE-field3
AND tbl1~field4 BETWEEN lv_minper AND lv_maxper
AND tbl1~field5 = SOURCE_PACKAGE-field5
AND tbl1~field6 = '0100'
AND tbl2~OBJVERS = 'A'.
Hope it will be helpful.
Regards,
Kannan