PX COORDINATOR FORCED SERIAL
So, I have oodles and oodles of parallel slaves available in one of my databases. However, on Monday, one of the support guys came to me and said 'this query used to take three hours, but now it's taking more than 24 hours before we have to kill it'.
Unfortunately, the last time it ran was last month, and I don't have explain plans/AWR stats going back that far, so I can't see what it was doing.
However, when I look at it, the query has multiple PARALLEL hints, and it hints tables that already have a PARALLEL degree set. It should be running in parallel, but it is not.
When I see the explain plan, I notice the following right at the top:
INSERT STATEMENT
LOAD AS SELECT
COUNT STOPKEY
PX COORDINATOR FORCED SERIAL
PX SEND QC (RANDOM)
COUNT STOPKEY
HASH JOIN OUTER
PX RECEIVE
Blah, blah, blah
What's the purpose of the PX COORDINATOR FORCED SERIAL? The stats for the objects look OK, and last month the query ran fine as expected. There have been no changes to the underlying table structure. I suspect the fact that it's forcing the query to execute in serial is what's causing the performance issue.
Anyone ever seen this? This is in 10.2.0.4
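One quick sanity check worth making before digging into the plan is whether parallel operations are being downgraded at all. A sketch using dictionary views that exist in 10.2 (the exact statistic names may vary slightly by release, so verify against yours):

```sql
-- System-wide: are parallel operations being downgraded to serial?
SELECT name, value
  FROM v$sysstat
 WHERE name LIKE 'Parallel operations%';

-- Per-session view of PX slave usage after running the statement:
SELECT statistic, last_query, session_total
  FROM v$pq_sesstat;
```

If the "downgraded to serial" counters are climbing, the problem is slave availability rather than the plan itself; FORCED SERIAL in the plan, by contrast, means the optimizer decided up front not to use slaves.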
I found a couple of interesting articles on this subject:
http://www.oaktable.net/category/tags/px-coordinator-forced-serial
http://oracle-randolf.blogspot.com/2011/03/px-coordinator-forced-serial-operation.html
http://oracledoug.com/serendipity/index.php?/archives/788-PX-Issues-Continued.html
In your actual SQL, do you have any user-defined functions? If yes, are they parallel-enabled?
Any database logon trigger? If yes, was it changed in the last n days?
HTH -- Mark D Powell --
PS - If spatial is involved then the following OTN thread may be of interest:
Parallel "CONTAINS" query and the "FORCED SERIAL" exec plan
Edited by: Mark D Powell on Aug 10, 2011 8:20 AM
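To illustrate Mark's first point: a user-defined function called from a parallel query needs to be declared PARALLEL_ENABLE (and must not depend on session state), otherwise the coordinator can force the whole statement serial. A minimal sketch; the function name and body here are made up:

```sql
CREATE OR REPLACE FUNCTION f_demo (p_val NUMBER)
  RETURN NUMBER
  PARALLEL_ENABLE   -- without this, calling the function can serialize a PX plan
  DETERMINISTIC
IS
BEGIN
  RETURN p_val * 2;
END;
/
```

If the query calls a function that lost its PARALLEL_ENABLE clause in a recent redeploy, that alone would explain a sudden PX COORDINATOR FORCED SERIAL.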
Similar Messages
-
I am using parallel calculation where possible. I have some scripts that Essbase is forcing serial calculation on, and I can't figure out why. I checked the documentation for the criteria that will force serial calculation, and none of the criteria apply. The message given is 'Formula on member [456-01-010-81-2001] forces calculation to execute in serial mode'. There is no formula on that member. It is not dependent on any other member (nor do any of the other members have dependencies). There are no dynamic members involved in the calculation. I have identical calculation scripts (just with different accounts calculating) that calculate in parallel with no problem. Any help would be greatly appreciated.

FIX(&CurrentYear, "Final", "Actual", &CurrentMonth, "PostAlloc", @LEVMBRS(Customer,0), @REMOVE(@RELATIVE("AF_Less_SoldBus",0), @LIST(@UDA(Product,"Inactive_Product"), @RELATIVE("Total_C&W",0))))
  "456-01-010-81-2001" = "456-01-010-81-2001"->"GLAlloc"->"SKU-Alloc"->"All Customers" * (Syst->"DP - Fixed Exp Cases" / Syst->"DP - Fixed Exp Cases"->"AF_Less_Sold_and_C&W"->"CustomerTotal");
ENDFIX
John mentions that you can set it in the cfg file and shows you the setting specs, but if you just want to do it for a single application and have it in the default calc, you also need to know how to set the default calc. I find the easiest way is to create a regular calc script with what I want the default to be, then validate it. Out of the box, the default calculation is nothing more than a CALC ALL; statement, but it can be anything you want. Once you have created the calc script, the following link describes the different methods for setting it:
http://docs.oracle.com/cd/E17236_01/epm.1112/esb_dbag/dcaintro.html#dcaintro507
If you want to do it from EAS, look at http://docs.oracle.com/cd/E17236_01/epm.1112/eas_help/defcalc.html
It is important to know that if you change the calculation script you used to create the default calculation, it does not change the default calc itself; you would have to reset it. -
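For completeness, setting and running the default calculation can also be scripted. A hedged MaxL sketch, assuming the syntax of recent Essbase releases; the application and database names are placeholders:

```
/* Set the default calculation string for the database
   (appname.dbname are placeholders). */
alter database appname.dbname set default calculation 'CALC ALL;';

/* Run whatever the default calculation currently is. */
execute calculation default on database appname.dbname;
```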
Hi all,
I'm having some performance problems. I generated an AWR report covering a day and saw the following:
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time Wait Class
CPU time 50,318 41.7
db file sequential read 6,688,472 32,711 5 27.1 User I/O
Backup: sbtwrite2 1,068,309 7,903 7 6.6 Administra
db file scattered read 1,012,065 6,999 7 5.8 User I/O
PX Deq Credit: send blkd 231,401 4,989 22 4.1 Other
Operating System Statistics DB/Inst: CAPDB14P/capdb14p1 Snaps: 15710-15778
Statistic Total
AVG_BUSY_TIME 3,221,704
AVG_IDLE_TIME 4,923,831
AVG_IOWAIT_TIME 2,302,776
AVG_SYS_TIME 537,429
AVG_USER_TIME 2,682,900
BUSY_TIME 6,446,121
IDLE_TIME 9,850,381
IOWAIT_TIME 4,608,322
SYS_TIME 1,077,598
USER_TIME 5,368,523
LOAD 0
OS_CPU_WAIT_TIME 1,999,898,469,700
RSRC_MGR_CPU_WAIT_TIME 0
VM_IN_BYTES 12,201,893,888
VM_OUT_BYTES 476,655,616
PHYSICAL_MEMORY_BYTES 8,568,512,512
NUM_CPUS 2
NUM_CPU_SOCKETS 2
I think that we are having CPU problems here !!
All my memory caches are good, 99% hit.
Anybody agree with me???
Tks,
Paulo

I have problems on some queries that have another wait event related to RAC.
"gc cr multi block request" is taking a lot of time on some queries. These queries run very fast on another database that isn't a RAC database.
Example:
1 - Both tables have the same number of rows!
2 - Both tables and indexes are analyzed using the same tool (DBMS_STATS).
####RAC DATABASE####
SELECT 1 from dual
WHERE NOT EXISTS (SELECT 1
FROM mensalidade a
WHERE data_vencimento >= CHAR_TO_DATE('20070201'));
----Explain
SELECT STATEMENT, GOAL = ALL_ROWS 4 1
FILTER
FAST DUAL 2 1
PX COORDINATOR FORCED SERIAL
PX SEND QC (RANDOM) SYS :TQ10000 2 1 7
PX BLOCK ITERATOR 2 1 7
INDEX FAST FULL SCAN BRCAPDB2 IMENSALIDADE1 2 1 7
----It takes more than 500 seconds to run
####STANDALONE DATABASE####
SELECT 1 from dual
WHERE NOT EXISTS (SELECT 1
FROM mensalidade a
WHERE data_vencimento >= CHAR_TO_DATE('20070201'));
----Explain
SELECT STATEMENT, GOAL = ALL_ROWS 4 1
FILTER
FAST DUAL 2 1
PX COORDINATOR FORCED SERIAL
PX SEND QC (RANDOM) SYS :TQ10000 2 2 16
PX BLOCK ITERATOR 2 2 16
TABLE ACCESS FULL BRCAPDB2 MENSALIDADE 2 2 16
----It takes 0.1 seconds to run -
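Since both plans go through a serialized PX coordinator but choose different access paths (index fast full scan vs full table scan), one thing worth comparing between the two databases is the declared degree of parallelism on the objects. A sketch, with the owner and table name taken from the plans above:

```sql
-- A DEGREE other than 1 makes the optimizer consider parallel plans
-- even without hints; compare this on the RAC and standalone databases.
SELECT table_name, degree, instances
  FROM dba_tables
 WHERE owner = 'BRCAPDB2'
   AND table_name = 'MENSALIDADE';

SELECT index_name, degree
  FROM dba_indexes
 WHERE owner = 'BRCAPDB2'
   AND table_name = 'MENSALIDADE';
```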
Just Question about Explain Plan.
Q-1. Is it possible to trace another session's current query without altering that session? If so, how can I do it? I cannot do it with v$session because it will be a different session id, and I don't want to alter the session.
Q-2. I have a SQL statement that gives a different EXECUTION PLAN on 9iR2 and 10g. Is that possible?
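On Q-1: from 10g onward you can enable SQL trace for another session without logging into it, using the DBMS_MONITOR package. A sketch; the sid/serial# values and the username are placeholders you would look up yourself:

```sql
-- Find the target session first (username is an example):
SELECT sid, serial# FROM v$session WHERE username = 'SCOTT';

-- Enable trace for that session from your own session:
BEGIN
  DBMS_MONITOR.session_trace_enable(session_id => 123,
                                    serial_num => 456,
                                    waits      => TRUE,
                                    binds      => FALSE);
END;
/

-- Turn it off when done:
BEGIN
  DBMS_MONITOR.session_trace_disable(session_id => 123, serial_num => 456);
END;
/
```

On Q-2: yes, different versions can and do produce different plans; the optimizer code and default parameters change between 9iR2 and 10g.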
Can anyone explain this please?

Thank you smoradi and william for your response.
Sorry for the long post.
I tried to post this before as well but did not reach a conclusion. These are the final results I got, so can anyone help me?
Following is the information.
I have the TKPROF result; can anyone explain what it means and where the wait time is?
I got the explain plan as below; can anyone explain it to me, please? I am still having problems solving this.
Can anyone explain how to improve the performance?
1. SS_SKU_STORE_WEEK is the biggest table, around 12 million rows.
2. SS_SKU_STORE: 50 thousand rows.
3. ITEM: 44 thousand rows.
4. The rest of the tables are small.
SQL> SELECT /*+ index( ss_sku_store SS_SKU_ST_PK ) */
2 TO_NUMBER( ss_sku_store.sku || ss_sku_store.store_num) rowkey,
3 ss_sku_store.sku psku,
4 ' ' || INITCAP( item.descrip ) description,
5 dept_id,
6 TO_CHAR( dept_id ) || '.' || TO_CHAR( sub_dept_id ) subdepartment,
7 TO_CHAR( dept_id ) || '.' || TO_CHAR( sub_dept_id ) ||'.'|| TO_CHAR( class_id ) class,
8 NVL( vendor_id, -1 ),
9 NVL( buyer_num, -1),
10 NVL( TRIM(pattern_cd), -1),
11 DECODE(Color_Cd, 0, -1, NVL( Color_Cd, -1)) Color_Cd,
12 NVL( size_cd, -1),
13 -1 list_id,
14 ss_sku.sku skuattribute,
15 ss_sku_store.store_num pstore,
16 INITCAP( store.name ) location,
17 store.state,
18 NVL( INITCAP( regional_vp), :cUNASSIGNED) regional_vp,
19 NVL( INITCAP( regional_merch_mgr), :cUNASSIGNED) regional_merch_mgr,
20 NVL( INITCAP( district_mgr), :cUNASSIGNED) district_mgr,
21 NVL( INITCAP( area_mgr), :cUNASSIGNED) area_mgr,
22 NVL( sq_footage, -1),
23 SUBSTR( '000' || fashion_attribute.seq, -3, 3 ) || NVL( store.fashion_att_cd, '' ) fashion_att_cd,
24 SUBSTR( '000' || cust_profile.seq, -3, 3 ) || NVL( store.cust_type_cd, '' ) cust_type_cd,
25 NVL( section_count, -1) section_count,
26 '000' corp_vol_grp_cd,
27 0 storegroup,
28 store.store_num storeattribute,
29 0 storesort,
30 submit_status,
31 DECODE( current_user, :pUserID, shipment_schedules.check_staged_sku(ss_sku_store.sku,-1), 1) lockedflag,
32 '' aggregatedrowkey,
33 '' AttributeDescription,
34 starting_on_hand onhand
35 FROM cust_profile,
36 fashion_attribute,
37 ( SELECT vendor_id,
38 sku
39 FROM item_vendor
40 WHERE sku IN ( SELECT sku
41 FROM ss_session_sku
42 WHERE user_key = :pUserKey)
43 AND primary_flag = :cTRUE ) primaryvendor,
44 ( SELECT SKU,
45 DECODE( status, 0, 0, DECODE( status, 1 ,0 ,1) ) AS submit_status
46 FROM ( SELECT /*+ full(ss_session_sku) use_nl(ss_session_sku,ss_sku_store_week) index(ss_sku_store_week SS_SKU_STR_WK_SKU )*/
47 SS_SKU_Store_Week.SKU,
48 MAX( NVL( ssk_week_status, 0 ) ) AS status
49 FROM ss_session_sku,
50 ss_sku_store_week
51 WHERE user_key = :pUserKey
52 AND ss_sku_store_week.sku = ss_session_sku.sku
53 GROUP BY ss_sku_store_week.sku )
54 ) sku_status,
55 ss_sku,
56 store,
57 ss_sku_store,
58 item
59 WHERE sku_status.sku = item.sku
60 AND sku_status.sku = ss_sku.sku
61 AND sku_status.sku = ss_sku_store.sku
62 AND sku_status.sku = primaryvendor.sku
63 AND sku_status.sku = sku_status.sku
64 AND ss_sku_store.store_num = store.store_num
65 AND store.cust_type_cd = cust_profile.cust_type_cd(+)
66 AND store.fashion_att_cd = fashion_attribute.fashion_att_cd(+)
67 ORDER BY ss_sku_store.sku,
68 ss_sku_store.store_num;
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=531 Card=3203 Bytes=948088)
1 0 SORT (GROUP BY) (Cost=531 Card=3203 Bytes=948088)
2 1 HASH JOIN (RIGHT OUTER) (Cost=323 Card=3203 Bytes=948088)
3 2 TABLE ACCESS (FULL) OF 'CUST_PROFILE' (TABLE) (Cost=2 Card=16 Bytes=304)
4 2 HASH JOIN (RIGHT OUTER) (Cost=321 Card=3203 Bytes=887231)
5 4 TABLE ACCESS (FULL) OF 'FASHION_ATTRIBUTE' (TABLE) (Cost=2 Card=9 Bytes=162)
6 4 HASH JOIN (Cost=318 Card=3203 Bytes=829577)
7 6 TABLE ACCESS (FULL) OF 'STORE' (TABLE) (Cost=15 Card=1289 Bytes=105698)
8 6 TABLE ACCESS (BY LOCAL INDEX ROWID) OF 'SS_SKU_STORE' (TABLE) (Cost=3 Card=707 Bytes=17675)
9 8 NESTED LOOPS (Cost=302 Card=3203 Bytes=566931)
10 9 HASH JOIN (Cost=287 Card=5 Bytes=760)
11 10 NESTED LOOPS (Cost=284 Card=5 Bytes=655)
12 11 NESTED LOOPS (Cost=273 Card=5 Bytes=270)
13 12 NESTED LOOPS (Cost=252 Card=86 Bytes=3784)
14 13 NESTED LOOPS (Cost=17 Card=13 Bytes=468)
15 14 SORT (UNIQUE) (Cost=1 Card=13 Bytes=130)
16 15 INDEX (RANGE SCAN) OF 'SS_SESS_SKU_PK' (INDEX (UNIQUE)) (Cost=1 Card=13 Bytes=130)
17 14 TABLE ACCESS (BY INDEX ROWID) OF 'ITEM_VENDOR' (TABLE) (Cost=3 Card=1 Bytes=26)
18 17 INDEX (RANGE SCAN) OF 'ITEM_VENDOR_ITEM_FK_IDX' (INDEX) (Cost=2 Card=1)
19 13 PARTITION HASH (ITERATOR) (Cost=65 Card=7 Bytes=56)
20 19 TABLE ACCESS (BY LOCAL INDEX ROWID)OF 'SS_SKU_STORE_WEEK' (TABLE) (Cost=65 Card=7 Bytes=56)
21 20 INDEX (RANGE SCAN) OF 'SS_SKU_STR_WK_SKU' (INDEX) (Cost=14 Card=6427)
22 12 TABLE ACCESS (FULL) OF 'SS_SESSION_SKU'(TABLE) (Cost=0 Card=1 Bytes=10)
23 11 TABLE ACCESS (BY INDEX ROWID) OF 'ITEM' (TABLE) (Cost=2 Card=1 Bytes=77)
24 23 INDEX (UNIQUE SCAN) OF 'ITEM_PK' (INDEX (UNIQUE)) (Cost=1 Card=1)
25 10 TABLE ACCESS (FULL) OF 'SS_SKU' (TABLE) (Cost=3 Card=343 Bytes=7203)
26 9 PARTITION HASH (ITERATOR) (Cost=1 Card=211)
27 26 INDEX (RANGE SCAN) OF 'SS_SKU_ST_PK' (INDEX(UNIQUE)) (Cost=1 Card=211)
EXPLAIN PLAN AND DATA FROM TKPROF
call count cpu elapsed disk query current rows
Parse 2 0.00 0.01 0 0 0 0
Execute 2 1.35 1.30 0 0 0 0
Fetch 5 0.14 0.16 5 2497 0 111
total 9 1.49 1.49 5 2497 0 111
Misses in library cache during parse: 2
Misses in library cache during execute: 2
Optimizer mode: ALL_ROWS
Parsing user id: 30 (MDSEADMIN)
Rows Row Source Operation
0 PX COORDINATOR FORCED SERIAL (cr=797 pr=2 pw=0 time=42745 us)
0 PX SEND QC (ORDER) :TQ10005 (cr=797 pr=2 pw=0 time=42711 us)
0 SORT ORDER BY (cr=797 pr=2 pw=0 time=42701 us)
0 PX RECEIVE (cr=797 pr=2 pw=0 time=42627 us)
0 PX SEND RANGE :TQ10004 (cr=797 pr=2 pw=0 time=42617 us)
0 BUFFER SORT (cr=797 pr=2 pw=0 time=42609 us)
0 NESTED LOOPS OUTER (cr=797 pr=2 pw=0 time=42532 us)
0 NESTED LOOPS OUTER (cr=797 pr=2 pw=0 time=42520 us)
0 NESTED LOOPS (cr=797 pr=2 pw=0 time=42510 us)
0 NESTED LOOPS (cr=797 pr=2 pw=0 time=42502 us)
0 NESTED LOOPS (cr=797 pr=2 pw=0 time=42495 us)
0 NESTED LOOPS (cr=797 pr=2 pw=0 time=42488 us)
0 HASH JOIN (cr=797 pr=2 pw=0 time=42480 us)
1 BUFFER SORT (cr=5 pr=1 pw=0 time=13357 us)
1 PX RECEIVE (cr=5 pr=1 pw=0 time=13300 us)
1 PX SEND HASH :TQ10001 (cr=5 pr=1 pw=0 time=13291 us)
1 TABLE ACCESS BY INDEX ROWID ITEM_VENDOR (cr=5 pr=1 pw=0 time=13280 us)
3 NESTED LOOPS (cr=4 pr=0 pw=0 time=423 us)
1 SORT UNIQUE (cr=1 pr=0 pw=0 time=189 us)
1 INDEX RANGE SCAN SS_SESS_SKU_PK (cr=1 pr=0 pw=0 time=86 us)(object id 25279)
1 INDEX RANGE SCAN ITEM_VENDOR_ITEM_FK_IDX (cr=3 pr=0 pw=0 time=53 us)(object id 24079)
0 PX RECEIVE (cr=792 pr=1 pw=0 time=28530 us)
0 PX SEND HASH :TQ10003 (cr=792 pr=1 pw=0 time=28524 us)
0 VIEW (cr=792 pr=1 pw=0 time=28517 us)
0 HASH GROUP BY (cr=792 pr=1 pw=0 time=28509 us)
0 PX RECEIVE (cr=792 pr=1 pw=0 time=28295 us)
0 PX SEND HASH :TQ10002 (cr=792 pr=1 pw=0 time=28290 us)
0 NESTED LOOPS (cr=792 pr=1 pw=0 time=28284 us)
1 BUFFER SORT (cr=1 pr=0 pw=0 time=139 us)
1 PX RECEIVE (cr=1 pr=0 pw=0 time=45 us)
1 PX SEND BROADCAST :TQ10000 (cr=1 pr=0 pw=0 time=40 us)
1 INDEX RANGE SCAN SS_SESS_SKU_PK (cr=1 pr=0 pw=0 time=34 us)(object id 25279)
0 PX BLOCK ITERATOR PARTITION: KEY KEY (cr=791 pr=1 pw=0 time=28136 us)
0 TABLE ACCESS FULL SS_SKU_STORE_WEEK PARTITION: KEY KEY (cr=791 pr=1 pw=0 time=28084 us)
0 TABLE ACCESS BY INDEX ROWID ITEM (cr=0 pr=0 pw=0 time=0 us)
0 INDEX UNIQUE SCAN ITEM_PK (cr=0 pr=0 pw=0 time=0 us)(object id 24055)
0 TABLE ACCESS BY INDEX ROWID SS_SKU (cr=0 pr=0 pw=0 time=0 us)
0 INDEX UNIQUE SCAN SS_SKU_PK (cr=0 pr=0 pw=0 time=0 us)(object id 25300)
0 PARTITION HASH ITERATOR PARTITION: KEY KEY (cr=0 pr=0 pw=0 time=0 us)
0 TABLE ACCESS BY LOCAL INDEX ROWID SS_SKU_STORE PARTITION: KEY KEY (cr=0 pr=0 pw=0 time=0 us)
0 INDEX RANGE SCAN SS_SKU_ST_PK PARTITION: KEY KEY (cr=0 pr=0 pw=0 time=0 us)(object id 25547)
0 TABLE ACCESS BY INDEX ROWID STORE (cr=0 pr=0 pw=0 time=0 us)
0 INDEX UNIQUE SCAN STORE_PK (cr=0 pr=0 pw=0 time=0 us)(object id 25586)
0 TABLE ACCESS BY INDEX ROWID FASHION_ATTRIBUTE (cr=0 pr=0 pw=0 time=0 us)
0 INDEX UNIQUE SCAN FASHION_ATTRIBUTE_PK (cr=0 pr=0 pw=0 time=0 us)(object id 24035)
0 TABLE ACCESS BY INDEX ROWID CUST_PROFILE (cr=0 pr=0 pw=0 time=0 us)
0 INDEX UNIQUE SCAN CUST_PROFILE_PK (cr=0 pr=0 pw=0 time=0 us)(object id 24036)
ALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 43 0.10 0.11 0 1 0 0
Execute 861 0.38 0.45 357 622 46 621
Fetch 845 0.21 0.21 128 2626 2 892
total 1749 0.69 0.78 485 3249 48 1513
Misses in library cache during parse: 21
Misses in library cache during execute: 16
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 60 0.00 0.00
SQL*Net message from client 39 0.00 0.16
db file scattered read 82 0.01 0.05
db file sequential read 185 0.01 0.05
log file sync 1 0.00 0.00
191 user SQL statements in session.
23 internal SQL statements in session.
214 SQL statements in session.
37 statements EXPLAINed in this session.

Thank you for your help in advance.
Message was edited by:
devmiral -
Hi all
I have a query with the following EXECUTION plan
description
SELECT STATEMENT, GOAL = CHOOSE 116432 23506 2468130
PX COORDINATOR FORCED SERIAL
PX SEND QC (RANDOM) SYS :TQ10008 116432 23506 2468130
FILTER
HASH GROUP BY 116432 23506 2468130
PX RECEIVE 116430 470114 49361970
PX SEND HASH SYS :TQ10007 116430 470114 49361970
HASH JOIN BUFFERED 116430 470114 49361970
BUFFER SORT
PX RECEIVE 87 5065 35455
PX SEND BROADCAST SYS :TQ10003 87 5065 35455
TABLE ACCESS FULL AOSWN PRODUCTS 87 5065 35455
HASH JOIN 116342 470114 46071172
BUFFER SORT
PX RECEIVE 3961 621349 8077537
PX SYS :TQ10004 3961 621349 8077537
TABLE ACCESS FULL AOSWN CONTRACTS 3961 621349 8077537
PX RECEIVE 112380 468906 39857010
PX SEND HASH SYS :TQ10006 112380 468906 39857010
HASH JOIN 112380 468906 39857010
BUFFER SORT
PX RECEIVE 2 38 228
PX SYS :TQ10000 2 38 228
TABLE ACCESS FULL AOSWN TAX_SCHEME_RATES 2 38 228
HASH JOIN 1 12378 468906 37043574
BUFFER SORT
PX RECEIVE 87 5065 55715
PX SYS :TQ10001 87 5065 55715
TABLE ACCESS FULL AOSWN PRODUCTS 87 5065 55715
NESTED LOOPS 112290 513303 34904604
HASH JOIN 76624 513332 29773256
PX RECEIVE 18807 3345998 93687944
PX SEND HASH SYS :TQ10005 18807 3345998 93687944
PX BLOCK ITERATOR 18807 3345998 93687944
TABLE ACCESS FULL AOSWN INSTALMENTS 18807 3345998 93687944
BUFFER SORT
PX RECEIVE 56852 10266630 307998900
PX SYS :TQ10002 56852 10266630 307998900
TABLE ACCESS FULL AOSWN TAX_DUE 56852 10266630 307998900
TABLE ACCESS BY INDEX ROWID AOSWN MEMBER_PRODUCTS 2 1 10
INDEX UNIQUE SCAN AOSWN MEPR_PK 1 1

but when tracking the actual execution there are no parallel processes spawned. Any idea what to track?
BR,
Florin

Check this link.
http://blogs.oracle.com/datawarehousing/entry/parallel_execution_precedence_of_hints
Regards
Raj -
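The precedence that Raj's link describes boils down to: an explicit degree in a PARALLEL hint overrides the degree stored in the data dictionary for that object, while a hint with no degree falls back to the dictionary setting. A sketch; the table name is a placeholder:

```sql
-- No degree given: uses the DOP declared on the table (dictionary DEGREE).
SELECT /*+ PARALLEL(t) */ COUNT(*) FROM some_table t;

-- Explicit degree: overrides whatever the dictionary says for this statement.
SELECT /*+ PARALLEL(t 8) */ COUNT(*) FROM some_table t;
```

If no rule grants a parallel plan (or the optimizer costs the serial plan cheaper), the PX row sources stay in the plan but run under PX COORDINATOR FORCED SERIAL, which matches what Florin observed: no slaves are spawned.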
What can I do to make this query run faster
Hi All,
The query below is taking a long time. Is there anything I can do to shorten its run time?
SELECT C.FOLIO_NO,
       C.CO_TRANS_NO TRANS_NO,
       to_char(C.CREATED_DATE, 'dd/mm/yyyy') DOC_DATE,
       DECODE(PP.NAME, NULL, D.EMP_NAME, PP.NAME) LODGED_BY,
       decode(sf_fetch_datechange(c.co_trans_no, C.CO_TRANS_ID), Null, '-',
              sf_fetch_datechange(c.co_trans_no, C.CO_TRANS_ID)) DATE_CHANGE,
       P.RECEIPT_NO,
       decode(c.co_trans_id, 'A020',
              (select nvl(base_trans_id, co_trans_id)
                 from co_form5a_trans f
                where f.co_trans_no = c.co_trans_no),
              c.co_trans_id) TRANS_ID,
       (case
          when decode(c.co_trans_id, 'A020',
                      (select nvl(base_trans_id, co_trans_id)
                         from co_form5a_trans f
                        where f.co_trans_no = c.co_trans_no),
                      c.co_trans_id) = 'AR20' then 1
          when decode(c.co_trans_id, 'A020',
                      (select nvl(base_trans_id, co_trans_id)
                         from co_form5a_trans f
                        where f.co_trans_no = c.co_trans_no),
                      c.co_trans_id) = 'AR03' then 2
        end) TRANS_TYPE
  FROM CO_TRANS_MASTER C,
       PAYMENT_DETAIL P,
       PEOPLE_PROFILE PP,
       SC_AGENT_EMP D,
       M_CAA_TRANS E
 WHERE '1' <> TRIM(UPPER('S0750070Z'))
   AND (C.CO_TRANS_ID in TRIM(UPPER('AR20'))
        OR C.CO_TRANS_ID in TRIM(UPPER('AR03'))
        OR c.co_trans_id IN TRIM(UPPER('A020')))
   AND C.CO_TRANS_NO = P.TRANS_NO
   AND (C.VOID_IND = 'N' or C.VOID_IND is Null)
   AND C.CREATED_BY = PP.PP_ID(+)
   AND C.PROF_NO = D.PROF_NO(+)
   AND C.CREATED_BY = D.EMP_ID(+)
   AND TRIM(UPPER(C.CO_NO)) = TRIM(UPPER('200101586W'))
   AND c.co_trans_id = e.trans_id(+)
 ORDER BY FOLIO_NO;
SQL>
SQL> show parameter user_dump_dest
NAME TYPE VALUE
user_dump_dest string /u01/app/oracle/diag/rdbms/ebi
zfile/EBIZFILE1/trace
SQL> show parameter optimizer
NAME TYPE VALUE
optimizer_capture_sql_plan_baselines boolean FALSE
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 11.2.0.2
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
optimizer_use_invisible_indexes boolean FALSE
optimizer_use_pending_statistics boolean FALSE
optimizer_use_sql_plan_baselines boolean TRUE
SQL> show parameter db_file_multi
NAME TYPE VALUE
db_file_multiblock_read_count integer 128
SQL> show parameter db_block_size
NAME TYPE VALUE
db_block_size integer 8192
SQL> show parameter cursor_sharing
NAME TYPE VALUE
cursor_sharing string EXACT
SQL>
SQL> column sname format a20
SQL> column pname format a20
SQL> column pval2 format a20
SQL>
SQL> select
2 sname, pname, pval1, pval2
3 from
4 sys.aux_stats$;
SNAME PNAME PVAL1 PVAL2
SYSSTATS_INFO STATUS COMPLETED
SYSSTATS_INFO DSTART 09-11-2010 14:25
SYSSTATS_INFO DSTOP 09-11-2010 14:25
SYSSTATS_INFO FLAGS 1
SYSSTATS_MAIN CPUSPEEDNW 739.734748
SYSSTATS_MAIN IOSEEKTIM 10
SYSSTATS_MAIN IOTFRSPEED 4096
SYSSTATS_MAIN SREADTIM
SYSSTATS_MAIN MREADTIM
SYSSTATS_MAIN CPUSPEED
SYSSTATS_MAIN MBRC
SYSSTATS_MAIN MAXTHR
SYSSTATS_MAIN SLAVETHR
13 rows selected.
Elapsed: 00:00:00.06
SQL>
SQL> explain plan for
2 SELECT C.FOLIO_NO, C.CO_TRANS_NO TRANS_NO, to_char(C.CREATED_DATE, 'dd/mm/yyyy') DOC_DATE, DECODE(PP.NAME, NULL, D.EMP_NAME, PP.NAME) LODGED_BY, decode(sf_fetch_datechange(c.co_trans_no, C.CO_TRANS_ID), Null, '-', sf_fetch_datechange(c.co_trans_no, C.CO_TRANS_ID)) DATE_CHANGE, P.RECEIPT_NO, decode(c.co_trans_id,'A020',(select nvl(base_trans_id,co_trans_id) from co_form5a_trans f where f.co_trans_no=c.co_trans_no),c.co_trans_id) TRANS_ID,(case when decode(c.co_trans_id,'A020',(select nvl(base_trans_id,co_trans_id) from co_form5a_trans f where f.co_trans_no=c.co_trans_no),c.co_trans_id)='AR20' then 1 when decode(c.co_trans_id,'A020',(select nvl(base_trans_id,co_trans_id) from co_form5a_trans f where f.co_trans_no=c.co_trans_no),c.co_trans_id)='AR03' then 2 end) TRANS_TYPE FROM CO_TRANS_MASTER C, PAYMENT_DETAIL P, PEOPLE_PROFILE PP, SC_AGENT_EMP D, M_CAA_TRANS E where '1' <> TRIM(UPPER('S0750070Z')) and (C.CO_TRANS_ID in TRIM(UPPER('AR20')) OR C.CO_TRANS_ID in TRIM(UPPER('AR03'))OR c.co_trans_id IN TRIM (UPPER ('A020')))and C.CO_TRANS_NO = P.TRANS_NO and (C.VOID_IND = 'N' or C.VOID_IND is Null) and C.CREATED_BY = PP.PP_ID(+) and C.PROF_NO = D.PROF_NO(+) and C.CREATED_BY = D.EMP_ID (+) and TRIM(UPPER(C.CO_NO)) = TRIM(UPPER('200101586W')) and c.co_trans_id = e.trans_id (+) order by FOLIO_NO;
Explained.
Elapsed: 00:00:00.09
SQL>
SQL> set pagesize 1000;
SQL> set linesize 170;
SQL> @/u01/app/oracle/product/11.2.0/rdbms/admin/utlxpls.sql
SQL> Rem
SQL> Rem $Header: utlxpls.sql 26-feb-2002.19:49:37 bdagevil Exp $
SQL> Rem
SQL> Rem utlxpls.sql
SQL> Rem
SQL> Rem Copyright (c) 1998, 2002, Oracle Corporation. All rights reserved.
SQL> Rem
SQL> Rem NAME
SQL> Rem utlxpls.sql - UTiLity eXPLain Serial plans
SQL> Rem
SQL> Rem DESCRIPTION
SQL> Rem script utility to display the explain plan of the last explain plan
SQL> Rem command. Do not display information related to Parallel Query
SQL> Rem
SQL> Rem NOTES
SQL> Rem Assume that the PLAN_TABLE table has been created. The script
SQL> Rem utlxplan.sql should be used to create that table
SQL> Rem
SQL> Rem With SQL*plus, it is recomended to set linesize and pagesize before
SQL> Rem running this script. For example:
SQL> Rem set linesize 100
SQL> Rem set pagesize 0
SQL> Rem
SQL> Rem MODIFIED (MM/DD/YY)
SQL> Rem bdagevil 02/26/02 - cast arguments
SQL> Rem bdagevil 01/23/02 - rewrite with new dbms_xplan package
SQL> Rem bdagevil 04/05/01 - include CPU cost
SQL> Rem bdagevil 02/27/01 - increase Name column
SQL> Rem jihuang 06/14/00 - change order by to order siblings by.
SQL> Rem jihuang 05/10/00 - include plan info for recursive SQL in LE row source
SQL> Rem bdagevil 01/05/00 - add order-by to make it deterministic
SQL> Rem kquinn 06/28/99 - 901272: Add missing semicolon
SQL> Rem bdagevil 05/07/98 - Explain plan script for serial plans
SQL> Rem bdagevil 05/07/98 - Created
SQL> Rem
SQL>
SQL> set markup html preformat on
SQL>
SQL> Rem
SQL> Rem Use the display table function from the dbms_xplan package to display the last
SQL> Rem explain plan. Force serial option for backward compatibility
SQL> Rem
SQL> select plan_table_output from table(dbms_xplan.display('plan_table',null,'serial'));
PLAN_TABLE_OUTPUT
Plan hash value: 2520189693
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 592 | 85248 | 16573 (1)| 00:03:19 |
| 1 | TABLE ACCESS BY INDEX ROWID | CO_FORM5A_TRANS | 1 | 20 | 2 (0)| 00:00:01 |
|* 2 | INDEX UNIQUE SCAN | SYS_C0059692 | 1 | | 1 (0)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID | CO_FORM5A_TRANS | 1 | 20 | 2 (0)| 00:00:01 |
|* 4 | INDEX UNIQUE SCAN | SYS_C0059692 | 1 | | 1 (0)| 00:00:01 |
| 5 | TABLE ACCESS BY INDEX ROWID | CO_FORM5A_TRANS | 1 | 20 | 2 (0)| 00:00:01 |
|* 6 | INDEX UNIQUE SCAN | SYS_C0059692 | 1 | | 1 (0)| 00:00:01 |
| 7 | SORT ORDER BY | | 592 | 85248 | 16573 (1)| 00:03:19 |
| 8 | NESTED LOOPS | | | | | |
| 9 | NESTED LOOPS | | 592 | 85248 | 16572 (1)| 00:03:19 |
| 10 | NESTED LOOPS OUTER | | 477 | 54855 | 15329 (1)| 00:03:04 |
| 11 | NESTED LOOPS OUTER | | 477 | 41499 | 14374 (1)| 00:02:53 |
| 12 | INLIST ITERATOR | | | | | |
|* 13 | TABLE ACCESS BY INDEX ROWID| CO_TRANS_MASTER | 477 | 22896 | 14367 (1)| 00:02:53 |
|* 14 | INDEX RANGE SCAN | IDX_CO_TRANS_ID | 67751 | | 150 (1)| 00:00:02 |
| 15 | TABLE ACCESS BY INDEX ROWID | SC_AGENT_EMP | 1 | 39 | 1 (0)| 00:00:01 |
|* 16 | INDEX UNIQUE SCAN | PK_SC_AGENT_EMP | 1 | | 0 (0)| 00:00:01 |
| 17 | TABLE ACCESS BY INDEX ROWID | PEOPLE_PROFILE | 1 | 28 | 2 (0)| 00:00:01 |
|* 18 | INDEX UNIQUE SCAN | SYS_C0063100 | 1 | | 1 (0)| 00:00:01 |
|* 19 | INDEX RANGE SCAN | IDX_PAY_DETAIL_TRANS_NO | 1 | | 2 (0)| 00:00:01 |
| 20 | TABLE ACCESS BY INDEX ROWID | PAYMENT_DETAIL | 1 | 29 | 3 (0)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("F"."CO_TRANS_NO"=:B1)
4 - access("F"."CO_TRANS_NO"=:B1)
6 - access("F"."CO_TRANS_NO"=:B1)
13 - filter(TRIM(UPPER("SYS_ALIAS_3"."CO_NO"))='200101586W' AND ("SYS_ALIAS_3"."VOID_IND" IS NULL
OR "SYS_ALIAS_3"."VOID_IND"='N'))
14 - access("SYS_ALIAS_3"."CO_TRANS_ID"='A020' OR "SYS_ALIAS_3"."CO_TRANS_ID"='AR03' OR
"SYS_ALIAS_3"."CO_TRANS_ID"='AR20')
16 - access("SYS_ALIAS_3"."PROF_NO"="D"."PROF_NO"(+) AND
"SYS_ALIAS_3"."CREATED_BY"="D"."EMP_ID"(+))
18 - access("SYS_ALIAS_3"."CREATED_BY"="PP"."PP_ID"(+))
19 - access("SYS_ALIAS_3"."CO_TRANS_NO"="P"."TRANS_NO")
42 rows selected.
Elapsed: 00:00:00.53
SQL>
SQL>
SQL>
SQL> rollback;
Rollback complete.
Elapsed: 00:00:00.01
SQL>
SQL> rem Set the ARRAYSIZE according to your application
SQL> set autotrace traceonly arraysize 100
SQL>
SQL> alter session set tracefile_identifier = 'mytrace1';
Session altered.
Elapsed: 00:00:00.00
SQL>
SQL> rem if you're using bind variables
SQL> rem define them here
SQL>
SQL> rem variable b_var1 number
SQL> rem variable b_var2 varchar2(20)
SQL>
SQL> rem and initialize them
SQL>
SQL> rem exec :b_var1 := 1
SQL> rem exec :b_var2 := 'DIAG'
SQL> set pagesize 1000;
SQL> set linesize 170;
SQL> alter session set events '10046 trace name context forever, level 8';
Session altered.
Elapsed: 00:00:00.01
SQL> SELECT C.FOLIO_NO, C.CO_TRANS_NO TRANS_NO, to_char(C.CREATED_DATE, 'dd/mm/yyyy') DOC_DATE, DECODE(PP.NAME, NULL, D.EMP_NAME, PP.NAME) LODGED_BY, decode(sf_fetch_datechange(c.co_trans_no, C.CO_TRANS_ID), Null, '-', sf_fetch_datechange(c.co_trans_no, C.CO_TRANS_ID)) DATE_CHANGE, P.RECEIPT_NO, decode(c.co_trans_id,'A020',(select nvl(base_trans_id,co_trans_id) from co_form5a_trans f where f.co_trans_no=c.co_trans_no),c.co_trans_id) TRANS_ID,(case when decode(c.co_trans_id,'A020',(select nvl(base_trans_id,co_trans_id) from co_form5a_trans f where f.co_trans_no=c.co_trans_no),c.co_trans_id)='AR20' then 1 when decode(c.co_trans_id,'A020',(select nvl(base_trans_id,co_trans_id) from co_form5a_trans f where f.co_trans_no=c.co_trans_no),c.co_trans_id)='AR03' then 2 end) TRANS_TYPE FROM CO_TRANS_MASTER C, PAYMENT_DETAIL P, PEOPLE_PROFILE PP, SC_AGENT_EMP D, M_CAA_TRANS E where '1' <> TRIM(UPPER('S0750070Z')) and (C.CO_TRANS_ID in TRIM(UPPER('AR20')) OR C.CO_TRANS_ID in TRIM(UPPER('AR03'))OR c.co_trans_id IN TRIM (UPPER ('A020')))and C.CO_TRANS_NO = P.TRANS_NO and (C.VOID_IND = 'N' or C.VOID_IND is Null) and C.CREATED_BY = PP.PP_ID(+) and C.PROF_NO = D.PROF_NO(+) and C.CREATED_BY = D.EMP_ID (+) and TRIM(UPPER(C.CO_NO)) = TRIM(UPPER('200101586W')) and c.co_trans_id = e.trans_id (+) order by FOLIO_NO;
10 rows selected.
Elapsed: 00:03:42.27
Execution Plan
Plan hash value: 2520189693
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 592 | 85248 | 16573 (1)| 00:03:19 |
| 1 | TABLE ACCESS BY INDEX ROWID | CO_FORM5A_TRANS | 1 | 20 | 2 (0)| 00:00:01 |
|* 2 | INDEX UNIQUE SCAN | SYS_C0059692 | 1 | | 1 (0)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID | CO_FORM5A_TRANS | 1 | 20 | 2 (0)| 00:00:01 |
|* 4 | INDEX UNIQUE SCAN | SYS_C0059692 | 1 | | 1 (0)| 00:00:01 |
| 5 | TABLE ACCESS BY INDEX ROWID | CO_FORM5A_TRANS | 1 | 20 | 2 (0)| 00:00:01 |
|* 6 | INDEX UNIQUE SCAN | SYS_C0059692 | 1 | | 1 (0)| 00:00:01 |
| 7 | SORT ORDER BY | | 592 | 85248 | 16573 (1)| 00:03:19 |
| 8 | NESTED LOOPS | | | | | |
| 9 | NESTED LOOPS | | 592 | 85248 | 16572 (1)| 00:03:19 |
| 10 | NESTED LOOPS OUTER | | 477 | 54855 | 15329 (1)| 00:03:04 |
| 11 | NESTED LOOPS OUTER | | 477 | 41499 | 14374 (1)| 00:02:53 |
| 12 | INLIST ITERATOR | | | | | |
|* 13 | TABLE ACCESS BY INDEX ROWID| CO_TRANS_MASTER | 477 | 22896 | 14367 (1)| 00:02:53 |
|* 14 | INDEX RANGE SCAN | IDX_CO_TRANS_ID | 67751 | | 150 (1)| 00:00:02 |
| 15 | TABLE ACCESS BY INDEX ROWID | SC_AGENT_EMP | 1 | 39 | 1 (0)| 00:00:01 |
|* 16 | INDEX UNIQUE SCAN | PK_SC_AGENT_EMP | 1 | | 0 (0)| 00:00:01 |
| 17 | TABLE ACCESS BY INDEX ROWID | PEOPLE_PROFILE | 1 | 28 | 2 (0)| 00:00:01 |
|* 18 | INDEX UNIQUE SCAN | SYS_C0063100 | 1 | | 1 (0)| 00:00:01 |
|* 19 | INDEX RANGE SCAN | IDX_PAY_DETAIL_TRANS_NO | 1 | | 2 (0)| 00:00:01 |
| 20 | TABLE ACCESS BY INDEX ROWID | PAYMENT_DETAIL | 1 | 29 | 3 (0)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("F"."CO_TRANS_NO"=:B1)
4 - access("F"."CO_TRANS_NO"=:B1)
6 - access("F"."CO_TRANS_NO"=:B1)
13 - filter(TRIM(UPPER("SYS_ALIAS_3"."CO_NO"))='200101586W' AND ("SYS_ALIAS_3"."VOID_IND" IS NULL
OR "SYS_ALIAS_3"."VOID_IND"='N'))
14 - access("SYS_ALIAS_3"."CO_TRANS_ID"='A020' OR "SYS_ALIAS_3"."CO_TRANS_ID"='AR03' OR
"SYS_ALIAS_3"."CO_TRANS_ID"='AR20')
16 - access("SYS_ALIAS_3"."PROF_NO"="D"."PROF_NO"(+) AND
"SYS_ALIAS_3"."CREATED_BY"="D"."EMP_ID"(+))
18 - access("SYS_ALIAS_3"."CREATED_BY"="PP"."PP_ID"(+))
19 - access("SYS_ALIAS_3"."CO_TRANS_NO"="P"."TRANS_NO")
Statistics
51 recursive calls
0 db block gets
651812 consistent gets
92202 physical reads
0 redo size
1594 bytes sent via SQL*Net to client
524 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
10 rows processed
SQL>
SQL> disconnect
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
SQL>

Thanks in advance!

Hi Raj,
I have given the output below as you requested....
SQL> select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));
PLAN_TABLE_OUTPUT
SQL_ID 0taz7ckjm41yv, child number 1
SELECT C.FOLIO_NO, C.CO_TRANS_NO TRANS_NO, to_char(C.CREATED_DATE,
'dd/mm/yyyy') DOC_DATE, DECODE(PP.NAME, NULL, D.EMP_NAME, PP.NAME)
LODGED_BY, decode(sf_fetch_datechange(c.co_trans_no, C.CO_TRANS_ID),
Null, '-', sf_fetch_datechange(c.co_trans_no, C.CO_TRANS_ID))
DATE_CHANGE, P.RECEIPT_NO, decode(c.co_trans_id,'A020',(select
nvl(base_trans_id,co_trans_id) from co_form5a_trans f where
f.co_trans_no=c.co_trans_no),c.co_trans_id) TRANS_ID,(case when
decode(c.co_trans_id,'A020',(select nvl(base_trans_id,co_trans_id) from
co_form5a_trans f where f.co_trans_no=c.co_trans_no),c.co_trans_id)='AR2
0' then 1 when decode(c.co_trans_id,'A020',(select
nvl(base_trans_id,co_trans_id) from co_form5a_trans f where
f.co_trans_no=c.co_trans_no),c.co_trans_id)='AR03' then 2 end)
TRANS_TYPE FROM CO_TRANS_MASTER C, PAYMENT_DETAIL P, PEOPLE_PROFILE PP,
SC_AGENT_EMP D, M_CAA_TRANS E where '1' <> TRIM(UPPER('S0750070Z')) and
(C.CO_TRANS_ID in TRIM(UPPER('AR20')) OR C.CO_TRANS_ID in
TRIM(UPPER('AR03'))OR c.co
Plan hash value: 4175354585
| Id | Operation | Name | E-Rows | OMem | 1Mem | Used-Mem |
| 0 | SELECT STATEMENT | | | | | |
| 1 | TABLE ACCESS BY INDEX ROWID | CO_FORM5A_TRANS | 1 | | | |
|* 2 | INDEX UNIQUE SCAN | SYS_C0059692 | 1 | | | |
| 3 | TABLE ACCESS BY INDEX ROWID | CO_FORM5A_TRANS | 1 | | | |
|* 4 | INDEX UNIQUE SCAN | SYS_C0059692 | 1 | | | |
| 5 | TABLE ACCESS BY INDEX ROWID | CO_FORM5A_TRANS | 1 | | | |
|* 6 | INDEX UNIQUE SCAN | SYS_C0059692 | 1 | | | |
| 7 | SORT ORDER BY | | 12 | 2048 | 2048 | 2048 (0)|
| 8 | NESTED LOOPS | | | | | |
| 9 | NESTED LOOPS | | 12 | | | |
| 10 | NESTED LOOPS OUTER | | 10 | | | |
| 11 | NESTED LOOPS OUTER | | 10 | | | |
|* 12 | TABLE ACCESS FULL | CO_TRANS_MASTER | 10 | | | |
| 13 | TABLE ACCESS BY INDEX ROWID| SC_AGENT_EMP | 1 | | | |
|* 14 | INDEX UNIQUE SCAN | PK_SC_AGENT_EMP | 1 | | | |
| 15 | TABLE ACCESS BY INDEX ROWID | PEOPLE_PROFILE | 1 | | | |
|* 16 | INDEX UNIQUE SCAN | SYS_C0063100 | 1 | | | |
|* 17 | INDEX RANGE SCAN | IDX_PAY_DETAIL_TRANS_NO | 1 | | | |
| 18 | TABLE ACCESS BY INDEX ROWID | PAYMENT_DETAIL | 1 | | | |
Predicate Information (identified by operation id):
2 - access("F"."CO_TRANS_NO"=:B1)
4 - access("F"."CO_TRANS_NO"=:B1)
6 - access("F"."CO_TRANS_NO"=:B1)
12 - filter((INTERNAL_FUNCTION("SYS_ALIAS_3"."CO_TRANS_ID") AND
TRIM(UPPER("SYS_ALIAS_3"."CO_NO"))='200101586W' AND ("SYS_ALIAS_3"."VOID_IND" IS NULL OR
"SYS_ALIAS_3"."VOID_IND"='N')))
14 - access("SYS_ALIAS_3"."PROF_NO"="D"."PROF_NO" AND "SYS_ALIAS_3"."CREATED_BY"="D"."EMP_ID")
16 - access("SYS_ALIAS_3"."CREATED_BY"="PP"."PP_ID")
17 - access("SYS_ALIAS_3"."CO_TRANS_NO"="P"."TRANS_NO")
Note
- cardinality feedback used for this statement
- Warning: basic plan statistics not available. These are only collected when:
* hint 'gather_plan_statistics' is used for the statement or
* parameter 'statistics_level' is set to 'ALL', at session or system level
65 rows selected. -
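The warning in the Note section means the plan above shows estimates only (E-Rows) with no actuals. A minimal sketch of how to collect runtime row counts, using one of the tables from the plan above purely as an illustration:

```sql
-- Run the statement once with the hint so runtime statistics are collected
SELECT /*+ gather_plan_statistics */ COUNT(*)
FROM   co_trans_master
WHERE  void_ind = 'N';

-- Then display estimated (E-Rows) next to actual (A-Rows) cardinalities
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
```

Alternatively, ALTER SESSION SET statistics_level = ALL achieves the same without editing the query, at some overhead for every statement in the session.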
[./solutions/atonx.sql]
REM
REM script ATONX.SQL
REM =====================================
SET AUTOTRACE ON EXPLAIN
[./solutions/saved_settings.sql]
set appinfo OFF
set appinfo "SQL*Plus"
set arraysize 15
set autocommit OFF
set autoprint OFF
set autorecovery OFF
set autotrace OFF
set blockterminator "."
set cmdsep OFF
set colsep " "
set compatibility NATIVE
set concat "."
set copycommit 0
set copytypecheck ON
set define "&"
set describe DEPTH 1 LINENUM OFF INDENT ON
set echo OFF
set editfile "afiedt.buf"
set embedded OFF
set escape OFF
set feedback ON
set flagger OFF
set flush ON
set heading ON
set headsep "|"
set linesize 80
set logsource ""
set long 80
set longchunksize 80
set markup HTML OFF HEAD "<style type='text/css'> body {font:10pt Arial,Helvetica,sans-serif; color:black; background:White;} p {font:10pt Arial,Helvetica,sans-serif; color:black; background:White;} table,tr,td {font:10pt Arial,Helvetica,sans-serif; color:Black; background:#f7f7e7; padding:0px 0px 0px 0px; margin:0px 0px 0px 0px;} th {font:bold 10pt Arial,Helvetica,sans-serif; color:#336699; background:#cccc99; padding:0px 0px 0px 0px;} h1 {font:16pt Arial,Helvetica,Geneva,sans-serif; color:#336699; background-color:White; border-bottom:1px solid #cccc99; margin-top:0pt; margin-bottom:0pt; padding:0px 0px 0px 0px;} h2 {font:bold 10pt Arial,Helvetica,Geneva,sans-serif; color:#336699; background-color:White; margin-top:4pt; margin-bottom:0pt;} a {font:9pt Arial,Helvetica,sans-serif; color:#663300; background:#ffffff; margin-top:0pt; margin-bottom:0pt; vertical-align:top;}</style><title>SQL*Plus Report</title>" BODY "" TABLE "border='1' width='90%' align='center' summary='Script output'" SPOOL OFF ENTMAP ON PRE ON
set newpage 1
set null ""
set numformat ""
set numwidth 10
set pagesize 14
set pause OFF
set recsep WRAP
set recsepchar " "
set serveroutput OFF
set shiftinout invisible
set showmode OFF
set sqlblanklines OFF
set sqlcase MIXED
set sqlcontinue "> "
set sqlnumber ON
set sqlpluscompatibility 8.1.7
set sqlprefix "#"
set sqlprompt "SQL> "
set sqlterminator ";"
set suffix "sql"
set tab ON
set termout OFF
set time OFF
set timing OFF
set trimout ON
set trimspool OFF
set underline "-"
set verify ON
set wrap ON
[./solutions/sol_06_04d.sql]
-- this script requires the sql id from the previous script to be substituted
SELECT PLAN_TABLE_OUTPUT
FROM TABLE (DBMS_XPLAN.DISPLAY_AWR('your sql id here'));
[./solutions/rpsqlarea.sql]
set feedback off
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('your sql_id here'));
set feedback on
[./solutions/sqlid2.sql]
SELECT SQL_ID, SQL_TEXT FROM V$SQL
WHERE SQL_TEXT LIKE '%REPORT%' ;
[./solutions/schemastats.sql]
SELECT last_analyzed analyzed, sample_size, monitoring,
table_name
FROM user_tables;
[./solutions/allrows.sql]
REM
REM script ALLROWS.SQL
REM =====================================
alter session set optimizer_mode = all_rows;
[./solutions/aton.sql]
REM
REM script ATON.SQL
REM =====================================
SET AUTOTRACE ON
[./solutions/li.sql]
REM script LI.SQL (list indexes)
REM wildcards in table_name allowed,
REM and a '%' is appended by default
REM ======================================
set termout off
store set sqlplus_settings replace
save buffer.sql replace
set verify off autotrace off
set feedback off termout on
break on table_name skip 1 on index_type
col table_name format a25
col index_name format a30
col index_type format a20
accept table_name -
prompt 'List indexes on table : '
SELECT ui.table_name
, decode(ui.index_type
,'NORMAL', ui.uniqueness
,ui.index_type) AS index_type
, ui.index_name
FROM user_indexes ui
WHERE ui.table_name LIKE upper('&table_name.%')
ORDER BY ui.table_name
, ui.uniqueness desc;
get buffer.sql nolist
@sqlplus_settings
set termout on
[./solutions/utlxplp.sql]
Rem
Rem $Header: utlxplp.sql 23-jan-2002.08:55:23 bdagevil Exp $
Rem
Rem utlxplp.sql
Rem
Rem Copyright (c) 1998, 2002, Oracle Corporation. All rights reserved.
Rem
Rem NAME
Rem utlxplp.sql - UTiLity eXPLain Parallel plans
Rem
Rem DESCRIPTION
Rem script utility to display the explain plan of the last explain plan
Rem command. Display also Parallel Query information if the plan happens to
Rem run parallel
Rem
Rem NOTES
Rem Assume that the table PLAN_TABLE has been created. The script
Rem utlxplan.sql should be used to create that table
Rem
Rem With SQL*Plus, it is recommended to set linesize and pagesize before
Rem running this script. For example:
Rem set linesize 130
Rem set pagesize 0
Rem
Rem MODIFIED (MM/DD/YY)
Rem bdagevil 01/23/02 - rewrite with new dbms_xplan package
Rem bdagevil 04/05/01 - include CPU cost
Rem bdagevil 02/27/01 - increase Name column
Rem jihuang 06/14/00 - change order by to order siblings by.
Rem jihuang 05/10/00 - include plan info for recursive SQL in LE row source
Rem bdagevil 01/05/00 - make deterministic with order-by
Rem bdagevil 05/07/98 - Explain plan script for parallel plans
Rem bdagevil 05/07/98 - Created
Rem
set markup html preformat on
Rem
Rem Use the display table function from the dbms_xplan package to display the last
Rem explain plan. Use default mode which will display only relevant information
Rem
select * from table(dbms_xplan.display());
[./solutions/cbinp.sql]
REM Oracle10g SQL Tuning Workshop
REM script CBI.SQL (create bitmap index)
REM prompts for input; index name generated
REM =======================================
accept TABLE_NAME prompt " on which table : "
accept COLUMN_NAME prompt " on which column: "
set termout off
store set saved_settings replace
set heading off feedback off verify off
set autotrace off termout on
column dummy new_value index_name
SELECT 'creating index'
,      SUBSTR( SUBSTR('&table_name',1,4)||'_' ||
               TRANSLATE(REPLACE('&column_name', ' ', ''), ',', '_')
             , 1, 25
             )||'_idx' dummy
FROM dual;
CREATE BITMAP INDEX &index_name ON &TABLE_NAME(&COLUMN_NAME)
NOLOGGING COMPUTE STATISTICS;
@saved_settings
set termout on
undef INDEX_NAME
undef TABLE_NAME
undef COLUMN_NAME
[./solutions/dump.sql]
SELECT *
FROM v$parameter
WHERE name LIKE '%dump%';
[./solutions/utlxplan.sql]
rem
rem $Header: utlxplan.sql 29-oct-2001.20:28:58 mzait Exp $ xplainpl.sql
rem
Rem Copyright (c) 1988, 2001, Oracle Corporation. All rights reserved.
Rem NAME
REM UTLXPLAN.SQL
Rem FUNCTION
Rem NOTES
Rem MODIFIED
Rem mzait 10/26/01 - add keys and filter predicates to the plan table
Rem ddas 05/05/00 - increase length of options column
Rem ddas 04/17/00 - add CPU, I/O cost, temp_space columns
Rem mzait 02/19/98 - add distribution method column
Rem ddas 05/17/96 - change search_columns to number
Rem achaudhr 07/23/95 - PTI: Add columns partition_{start, stop, id}
Rem glumpkin 08/25/94 - new optimizer fields
Rem jcohen 11/05/93 - merge changes from branch 1.1.710.1 - 9/24
Rem jcohen 09/24/93 - #163783 add optimizer column
Rem glumpkin 10/25/92 - Renamed from XPLAINPL.SQL
Rem jcohen 05/22/92 - #79645 - set node width to 128 (M_XDBI in gendef)
Rem rlim 04/29/91 - change char to varchar2
Rem Peeler 10/19/88 - Creation
Rem
Rem This is the format for the table that is used by the EXPLAIN PLAN
Rem statement. The explain statement requires the presence of this
Rem table in order to store the descriptions of the row sources.
create table PLAN_TABLE (
statement_id varchar2(30),
timestamp date,
remarks varchar2(80),
operation varchar2(30),
options varchar2(255),
object_node varchar2(128),
object_owner varchar2(30),
object_name varchar2(30),
object_instance numeric,
object_type varchar2(30),
optimizer varchar2(255),
search_columns number,
id numeric,
parent_id numeric,
position numeric,
cost numeric,
cardinality numeric,
bytes numeric,
other_tag varchar2(255),
partition_start varchar2(255),
partition_stop varchar2(255),
partition_id numeric,
other long,
distribution varchar2(30),
cpu_cost numeric,
io_cost numeric,
temp_space numeric,
access_predicates varchar2(4000),
filter_predicates varchar2(4000));
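Once PLAN_TABLE exists, a typical round trip with it looks like this (the query is illustrative only; utlxpls.sql and utlxplp.sql below are wrappers around the same DISPLAY call):

```sql
EXPLAIN PLAN FOR
  SELECT * FROM user_tables WHERE table_name = 'PLAN_TABLE';

SELECT plan_table_output
FROM   TABLE(DBMS_XPLAN.DISPLAY('plan_table', NULL, 'serial'));
```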
[./solutions/indstats.sql]
accept table_name -
prompt 'on which table : '
SELECT index_name name, num_rows n_r,
last_analyzed l_a, distinct_keys d_k,
leaf_blocks l_b, avg_leaf_blocks_per_key a_l, join_index j_i
FROM user_indexes
WHERE table_name = upper('&table_name');
undef table_name
[./solutions/test.sql]
declare
x number;
begin
for i in 1..10000 loop
select count(*) into x from customers;
end loop;
end;
/
[./solutions/rp.sql]
set feedback off
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
set feedback on
[./solutions/sol_08_02b.sql]
ALTER SESSION SET SQL_TRACE = TRUE;
[./solutions/trace.sql]
ALTER SESSION SET SQL_TRACE = TRUE;
[./solutions/doit.sql]
DROP INDEX SALES_CH_BIX;
DROP INDEX SALES_CUST_BIX;
DROP INDEX SALES_PROD_BIX;
[./solutions/ci.sql]
REM SQL Tuning Workshop
REM script CI.SQL (create index)
REM prompts for input; index name generated
REM =======================================
accept TABLE_NAME prompt " on which table : "
accept COLUMN_NAME prompt " on which column(s): "
set termout off
store set saved_settings replace
set heading off feedback off autotrace off
set verify off termout on
column dummy new_value index_name
SELECT 'creating index'
,      SUBSTR( SUBSTR('&table_name',1,4)||'_' ||
               TRANSLATE(REPLACE('&column_name', ' ', ''), ',', '_')
             , 1, 25
             )||'_idx' dummy
FROM dual;
CREATE INDEX &index_name
ON &table_name(&column_name)
NOLOGGING COMPUTE STATISTICS;
@saved_settings
set termout on
undef INDEX_NAME
undef TABLE_NAME
undef COLUMN_NAME
[./solutions/sol_06_04c.sql]
exec dbms_workload_repository.create_snapshot('ALL');
[./solutions/sol_06_04a.sql]
column sql_text format a25
SELECT SQL_ID, SQL_TEXT FROM V$SQL
WHERE SQL_TEXT LIKE '%REPORT%' ;
[./solutions/login.sql]
REM ======================================
REM COL[UMN] commands
REM ======================================
col dummy new_value index_name
col name format a32
col segment_name format a20
col table_name format a20
col column_name format a20
col index_name format a30
col index_type format a10
col constraint_name format a20
col num_distinct format 999999
col update_comment format a20 word
-- for the SHOW SGA/PARAMETER commands:
col name_col_plus_show_sga format a24
col name_col_plus_show_param format a40 -
heading name
col value_col_plus_show_param format a35 -
heading value
-- for the AUTOTRACE setting:
col id_plus_exp format 90 head i
col parent_id_plus_exp format 90 head p
col plan_plus_exp format a80
col other_plus_exp format a44
col other_tag_plus_exp format a29
col object_node_plus_exp format a8
REM ======================================
REM SET commands
REM ======================================
set describe depth 2
set echo off
set editfile D:\Tmp\buffer.sql
set feedback 40
set linesize 120
set long 999
set numwidth 8
set pagesize 36
set pause "[Enter]..." pause off
set tab off
set trimout on
set trimspool on
set verify off
set wrap on
REM ======================================
REM DEFINE commands
REM ======================================
def 1=employees
def table_name=employees
def column_name=first_name
def buckets=1
def sc=';'
REM ======================================
REM miscellaneous
REM ======================================
[./solutions/sqlid.sql]
SELECT SQL_ID, SQL_TEXT FROM V$SQL
WHERE SQL_TEXT LIKE '%/* my%' ;
[./solutions/hist1.sql]
SELECT * FROM products WHERE prod_status LIKE 'available, on stock'
[./solutions/sol_08_02.sql]
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'User12';
[./solutions/utlxrw.sql]
Rem
Rem $Header: utlxrw.sql 29-apr-2005.08:22:09 mthiyaga Exp $
Rem
Rem utlxrw.sql
Rem
Rem Copyright (c) 2000, 2005, Oracle. All rights reserved.
Rem
Rem NAME
Rem utlxrw.sql - Create the output table for EXPLAIN_REWRITE
Rem
Rem DESCRIPTION
Rem Outputs of the EXPLAIN_REWRITE goes into the table created
Rem by utlxrw.sql (called REWRITE_TABLE). So utlxrw must be
Rem invoked before any EXPLAIN_REWRITE tests.
Rem
Rem NOTES
Rem If user specifies a different name in EXPLAIN_REWRITE, then
Rem it should have been already created before calling EXPLAIN_REWRITE.
Rem
Rem MODIFIED (MM/DD/YY)
Rem mthiyaga 04/29/05 - Remove unnecessary comment
Rem mthiyaga 06/08/04 - Add rewritten_txt field
Rem mthiyaga 10/10/02 - Add extra columns
Rem mthiyaga 09/27/00 - Create EXPLAIN_REWRITE output table
Rem mthiyaga 09/27/00 - Created
Rem
Rem
CREATE TABLE REWRITE_TABLE(
statement_id VARCHAR2(30), -- id for the query
mv_owner VARCHAR2(30), -- owner of the MV
mv_name VARCHAR2(30), -- name of the MV
sequence INTEGER, -- sequence no of the error msg
query VARCHAR2(2000),-- user query
query_block_no INTEGER, -- block no of the current subquery
rewritten_txt VARCHAR2(2000),-- rewritten query
message VARCHAR2(512), -- EXPLAIN_REWRITE error msg
pass VARCHAR2(3), -- rewrite pass no
mv_in_msg VARCHAR2(30), -- MV in current message
measure_in_msg VARCHAR2(30), -- Measure in current message
join_back_tbl VARCHAR2(30), -- Join back table in current msg
join_back_col VARCHAR2(30), -- Join back column in current msg
original_cost INTEGER, -- Cost of original query
rewritten_cost INTEGER, -- Cost of rewritten query
flags INTEGER, -- associated flags
reserved1 INTEGER, -- currently not used
reserved2 VARCHAR2(10)); -- currently not used
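With REWRITE_TABLE in place, EXPLAIN_REWRITE can be invoked along these lines (the materialized view name and query are hypothetical, not from the thread):

```sql
BEGIN
  DBMS_MVIEW.EXPLAIN_REWRITE(
    query        => 'SELECT prod_id, SUM(amount_sold) FROM sales GROUP BY prod_id',
    mv           => 'SALES_MV',     -- hypothetical MV name
    statement_id => 'rw_test_1');
END;
/
-- The messages explain why rewrite did or did not occur
SELECT message FROM rewrite_table WHERE statement_id = 'rw_test_1';
```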
[./solutions/nm.sql]
ALTER INDEX &indexname NOMONITORING USAGE;
[./solutions/attox.sql]
REM
REM script ATTOX.SQL
REM =====================================
set autotrace traceonly explain
[./solutions/create_tab.sql]
DROP TABLE test_sales;
DROP TABLE test_promotions;
DROP TABLE test_customers;
DROP TABLE test_countries;
CREATE table test_sales as select * from sales;
CREATE TABLE test_promotions AS SELECT * FROM promotions;
CREATE INDEX t_promo_id_idx ON TEST_PROMOTIONS(promo_id);
ALTER TABLE test_promotions MODIFY promo_id PRIMARY KEY USING INDEX t_promo_id_idx;
CREATE TABLE test_customers AS SELECT * FROM customers;
CREATE INDEX t_cust_id_idx ON TEST_CUSTOMERS(cust_id);
ALTER TABLE test_customers MODIFY cust_id PRIMARY KEY USING INDEX t_cust_id_idx;
CREATE TABLE test_countries AS SELECT * FROM countries;
CREATE INDEX t_country_id_idx ON TEST_COUNTRIES(country_id);
ALTER TABLE test_countries MODIFY country_id PRIMARY KEY USING INDEX t_country_id_idx;
UPDATE test_customers SET cust_credit_limit = 1000 WHERE ROWNUM <= 15000;
[./solutions/cui.sql]
REM SQL Tuning Workshop
REM script CUI.SQL (create unique index)
REM prompts for input; index name generated
REM =======================================
accept TABLE_NAME prompt " on which table : "
accept COLUMN_NAME prompt " on which column(s): "
set termout off
store set saved_settings replace
set heading off feedback off verify off
set autotrace off termout on
column dummy new_value index_name
SELECT 'creating unique index'
,      SUBSTR('ui_&TABLE_NAME._' ||
              TRANSLATE(REPLACE('&COLUMN_NAME', ' ', ''), ',', '_')
             , 1, 30) dummy
FROM dual;
CREATE UNIQUE INDEX &INDEX_NAME ON &TABLE_NAME(&COLUMN_NAME);
@saved_settings
set termout on
undef INDEX_NAME
undef TABLE_NAME
undef COLUMN_NAME
[./solutions/advisor_cache_setup.sql]
set echo on
alter system flush shared_pool;
grant advisor to sh;
connect sh/sh;
SELECT c.cust_last_name, sum(s.amount_sold) AS dollars,
sum(s.quantity_sold) as quantity
FROM sales s , customers c, products p
WHERE c.cust_id = s.cust_id
AND s.prod_id = p.prod_id
AND c.cust_state_province IN ('Dublin','Galway')
GROUP BY c.cust_last_name;
SELECT c.cust_id, SUM(amount_sold) AS dollar_sales
FROM sales s, customers c WHERE s.cust_id= c.cust_id GROUP BY c.cust_id;
select sum(unit_cost) from costs group by prod_id;
[./solutions/utlxmv.sql]
Rem
Rem $Header: utlxmv.sql 16-feb-2001.13:03:32 nshodhan Exp $
Rem
Rem utlxmv.sql
Rem
Rem Copyright (c) Oracle Corporation 2000. All Rights Reserved.
Rem
Rem NAME
Rem utlxmv.sql - UTiLity for eXplain MV
Rem
Rem DESCRIPTION
Rem The utility script creates the MV_CAPABILITIES_TABLE that is
Rem used by the DBMS_MVIEW.EXPLAIN_MVIEW() API.
Rem
Rem NOTES
Rem
Rem MODIFIED (MM/DD/YY)
Rem nshodhan 02/16/01 - Bug#1647071: replace mv with mview
Rem raavudai 11/28/00 - Fix comment.
Rem twtong 12/01/00 - fix for sql*plus
Rem twtong 09/13/00 - modify mv_capabilities_tabe
Rem twtong 08/18/00 - change create table to upper case
Rem jraitto 06/12/00 - add RELATED_NUM and MSGNO columns
Rem jraitto 05/09/00 - Explain_MV table
Rem jraitto 05/09/00 - Created
Rem
CREATE TABLE MV_CAPABILITIES_TABLE
(STATEMENT_ID VARCHAR(30), -- Client-supplied unique statement identifier
MVOWNER VARCHAR(30), -- NULL for SELECT based EXPLAIN_MVIEW
MVNAME VARCHAR(30), -- NULL for SELECT based EXPLAIN_MVIEW
CAPABILITY_NAME VARCHAR(30), -- A descriptive name of the particular
-- capability:
-- REWRITE
-- Can do at least full text match
-- rewrite
-- REWRITE_PARTIAL_TEXT_MATCH
-- Can do at least full and partial
-- text match rewrite
-- REWRITE_GENERAL
-- Can do all forms of rewrite
-- REFRESH
-- Can do at least complete refresh
-- REFRESH_FROM_LOG_AFTER_INSERT
-- Can do fast refresh from an mv log
-- or change capture table at least
-- when update operations are
-- restricted to INSERT
-- REFRESH_FROM_LOG_AFTER_ANY
-- can do fast refresh from an mv log
-- or change capture table after any
-- combination of updates
-- PCT
-- Can do Enhanced Update Tracking on
-- the table named in the RELATED_NAME
-- column. EUT is needed for fast
-- refresh after partitioned
-- maintenance operations on the table
-- named in the RELATED_NAME column
-- and to do non-stale tolerated
-- rewrite when the mv is partially
-- stale with respect to the table
-- named in the RELATED_NAME column.
-- EUT can also sometimes enable fast
-- refresh of updates to the table
-- named in the RELATED_NAME column
-- when fast refresh from an mv log
-- or change capture table is not
-- possible.
POSSIBLE CHARACTER(1), -- T = capability is possible
-- F = capability is not possible
RELATED_TEXT VARCHAR(2000),-- Owner.table.column, alias name, etc.
-- related to this message. The
-- specific meaning of this column
-- depends on the MSGNO column. See
-- the documentation for
-- DBMS_MVIEW.EXPLAIN_MVIEW() for details
RELATED_NUM NUMBER, -- When there is a numeric value
-- associated with a row, it goes here.
-- The specific meaning of this column
-- depends on the MSGNO column. See
-- the documentation for
-- DBMS_MVIEW.EXPLAIN_MVIEW() for details
MSGNO INTEGER, -- When available, QSM message #
-- explaining why not possible or more
-- details when enabled.
MSGTXT VARCHAR(2000),-- Text associated with MSGNO.
SEQ NUMBER);
-- Useful in ORDER BY clause when
-- selecting from this table.
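The table above is populated by DBMS_MVIEW.EXPLAIN_MVIEW; a minimal sketch of how it is used (the MV name is hypothetical):

```sql
BEGIN
  DBMS_MVIEW.EXPLAIN_MVIEW(mv      => 'SALES_MV',   -- hypothetical MV name
                           stmt_id => 'mv_test_1');
END;
/
-- One row per capability, with T/F and an explanatory message
SELECT capability_name, possible, msgtxt
FROM   mv_capabilities_table
WHERE  statement_id = 'mv_test_1'
ORDER  BY seq;
```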
[./solutions/di.sql]
DROP INDEX &index_name;
[./solutions/hist2.sql]
SELECT * FROM products WHERE prod_status = 'obsolete'
[./solutions/sol_06_04b.sql]
-- this script requires the sql_id that you got from the previous step
SELECT SQL_ID, SQL_TEXT FROM dba_hist_sqltext WHERE sql_id = 'your sql_id here';
[./solutions/tabstats.sql]
accept table_name -
prompt 'on which table : '
SELECT last_analyzed analyzed, sample_size, monitoring,
table_name
FROM user_tables
WHERE table_name = upper('&table_name');
undef TABLE_NAME
[./solutions/rewrite.sql]
ALTER SESSION SET QUERY_REWRITE_ENABLED = true;
[./solutions/atto.sql]
REM
REM script ATTO.SQL
REM =====================================
set autotrace traceonly
[./solutions/flush.sql]
--this script flushes the shared pool
alter system flush shared_pool;
[./solutions/atoff.sql]
REM
REM script ATOFF.SQL
REM =====================================
SET AUTOTRACE OFF
[./solutions/cbi.sql]
REM Oracle10g SQL Tuning Workshop
REM script CBI.SQL (create bitmap index)
REM prompts for input; index name generated
REM =======================================
accept TABLE_NAME prompt " on which table : "
accept COLUMN_NAME prompt " on which column: "
set termout off
store set saved_settings replace
set heading off feedback off verify off
set autotrace off termout on
column dummy new_value index_name
SELECT 'creating index'
,      SUBSTR( SUBSTR('&table_name',1,4)||'_' ||
               TRANSLATE(REPLACE('&column_name', ' ', ''), ',', '_')
             , 1, 25
             )||'_idx' dummy
FROM dual;
CREATE bitmap index &INDEX_NAME on &TABLE_NAME(&COLUMN_NAME)
LOCAL NOLOGGING COMPUTE STATISTICS;
@saved_settings
set termout on
undef INDEX_NAME
undef TABLE_NAME
undef COLUMN_NAME
[./solutions/buffer.sql]
SELECT c.cust_last_name, c.cust_year_of_birth
, co.country_name
FROM customers c
JOIN countries co
USING (country_id)
[./solutions/sol_08_04.sql]
ALTER SESSION SET SQL_TRACE = false;
[./solutions/sqlplus_settings.sql]
set appinfo OFF
set appinfo "SQL*Plus"
set arraysize 15
set autocommit OFF
set autoprint OFF
set autorecovery OFF
set autotrace TRACEONLY EXPLAIN STATISTICS
set blockterminator "."
set cmdsep OFF
set colsep " "
set compatibility NATIVE
set concat "."
set copycommit 0
set copytypecheck ON
set define "&"
set describe DEPTH 1 LINENUM OFF INDENT ON
set echo OFF
set editfile "afiedt.buf"
set embedded OFF
set escape OFF
set feedback 6
set flagger OFF
set flush ON
set heading ON
set headsep "|"
set linesize 80
set logsource ""
set long 80
set longchunksize 80
set markup HTML OFF HEAD "<style type='text/css'> body {font:10pt Arial,Helvetica,sans-serif; color:black; background:White;} p {font:10pt Arial,Helvetica,sans-serif; color:black; background:White;} table,tr,td {font:10pt Arial,Helvetica,sans-serif; color:Black; background:#f7f7e7; padding:0px 0px 0px 0px; margin:0px 0px 0px 0px;} th {font:bold 10pt Arial,Helvetica,sans-serif; color:#336699; background:#cccc99; padding:0px 0px 0px 0px;} h1 {font:16pt Arial,Helvetica,Geneva,sans-serif; color:#336699; background-color:White; border-bottom:1px solid #cccc99; margin-top:0pt; margin-bottom:0pt; padding:0px 0px 0px 0px;} h2 {font:bold 10pt Arial,Helvetica,Geneva,sans-serif; color:#336699; background-color:White; margin-top:4pt; margin-bottom:0pt;} a {font:9pt Arial,Helvetica,sans-serif; color:#663300; background:#ffffff; margin-top:0pt; margin-bottom:0pt; vertical-align:top;}</style><title>SQL*Plus Report</title>" BODY "" TABLE "border='1' width='90%' align='center' summary='Script output'" SPOOL OFF ENTMAP ON PRE OFF
set newpage 1
set null ""
set numformat ""
set numwidth 10
set pagesize 14
set pause OFF
set recsep WRAP
set recsepchar " "
set serveroutput OFF
set shiftinout invisible
set showmode OFF
set sqlblanklines OFF
set sqlcase MIXED
set sqlcontinue "> "
set sqlnumber ON
set sqlpluscompatibility 8.1.7
set sqlprefix "#"
set sqlprompt "SQL> "
set sqlterminator ";"
set suffix "sql"
set tab ON
set termout OFF
set time OFF
set timing OFF
set trimout ON
set trimspool OFF
set underline "-"
set verify ON
set wrap ON
[./solutions/sol_07_01.sql]
SELECT owner, job_name,enabled
FROM DBA_SCHEDULER_JOBS
WHERE JOB_NAME = 'GATHER_STATS_JOB';
[./solutions/colhist.sql]
SELECT column_name, num_distinct, num_buckets, histogram
FROM USER_TAB_COL_STATISTICS
WHERE histogram <> 'NONE';
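Histograms such as those reported by the query above are created through METHOD_OPT; a sketch against the PRODUCTS table used elsewhere in these scripts (column name and bucket count are illustrative):

```sql
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => USER,
    tabname    => 'PRODUCTS',
    method_opt => 'FOR COLUMNS prod_status SIZE 254');  -- request a histogram
END;
/
```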
[./solutions/rpawr.sql]
set feedback off
SELECT PLAN_TABLE_OUTPUT
FROM TABLE (DBMS_XPLAN.DISPLAY_AWR('&sqlid'));
set feedback on
[./solutions/im.sql]
ALTER INDEX &indexname MONITORING USAGE;
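After enabling monitoring and running the workload, usage can be checked roughly like this (the index name is a placeholder):

```sql
ALTER INDEX my_index MONITORING USAGE;   -- placeholder index name
-- ... run the workload, then check whether the index was used:
SELECT index_name, monitoring, used, start_monitoring
FROM   v$object_usage
WHERE  index_name = 'MY_INDEX';
```

Note that v$object_usage only reports indexes owned by the current schema.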
[./solutions/utlxpls.sql]
Rem
Rem $Header: utlxpls.sql 26-feb-2002.19:49:37 bdagevil Exp $
Rem
Rem utlxpls.sql
Rem
Rem Copyright (c) 1998, 2002, Oracle Corporation. All rights reserved.
Rem
Rem NAME
Rem utlxpls.sql - UTiLity eXPLain Serial plans
Rem
Rem DESCRIPTION
Rem script utility to display the explain plan of the last explain plan
Rem command. Do not display information related to Parallel Query
Rem
Rem NOTES
Rem Assume that the PLAN_TABLE table has been created. The script
Rem utlxplan.sql should be used to create that table
Rem
Rem With SQL*Plus, it is recommended to set linesize and pagesize before
Rem running this script. For example:
Rem set linesize 100
Rem set pagesize 0
Rem
Rem MODIFIED (MM/DD/YY)
Rem bdagevil 02/26/02 - cast arguments
Rem bdagevil 01/23/02 - rewrite with new dbms_xplan package
Rem bdagevil 04/05/01 - include CPU cost
Rem bdagevil 02/27/01 - increase Name column
Rem jihuang 06/14/00 - change order by to order siblings by.
Rem jihuang 05/10/00 - include plan info for recursive SQL in LE row source
Rem bdagevil 01/05/00 - add order-by to make it deterministic
Rem kquinn 06/28/99 - 901272: Add missing semicolon
Rem bdagevil 05/07/98 - Explain plan script for serial plans
Rem bdagevil 05/07/98 - Created
Rem
set markup html preformat on
Rem
Rem Use the display table function from the dbms_xplan package to display the last
Rem explain plan. Force serial option for backward compatibility
Rem
select plan_table_output from table(dbms_xplan.display('plan_table',null,'serial'));
[./solutions/dai.sql]
REM script DAI.SQL (drop all indexes)
REM prompts for a table name; % is appended
REM does not touch indexes associated with constraints
REM ==================================================
accept table_name -
prompt 'on which table : '
set termout off
store set sqlplus_settings replace
save buffer.sql replace
set heading off verify off autotrace off feedback off
spool doit.sql
SELECT 'drop index '||i.index_name||';'
FROM user_indexes i
WHERE i.table_name LIKE UPPER('&table_name.%')
AND NOT EXISTS
(SELECT 'x'
FROM user_constraints c
WHERE c.index_name = i.index_name
AND c.table_name = i.table_name
AND c.status = 'ENABLED');
spool off
@doit
get buffer.sql nolist
@sqlplus_settings
set termout on
[./solutions/setupenv.sql]
connect system/oracle
GRANT DBA TO sh;
GRANT CREATE ANY OUTLINE TO sh;
GRANT ADVISOR TO sh;
GRANT CREATE ANY VIEW TO sh;
EXECUTE DBMS_
What an insane topic. Where's your question?
I recommend you start over with a smart question and only the relevant code lines.
Check this link: [How To Ask Questions The Smart Way|http://www.catb.org/~esr/faqs/smart-questions.html]. -
Max Degree of Parallelism, Timeouts and Deadlocks
We have been trying to track down some timeout issues with our SQL Server. Our SQL Server runs a AccPac Account System, an Internal Intranet site and some SharePoint Content DBs.
We know that some of the queries are complex and take some time, though the timeouts seem to happen randomly and not always on the same pages/queries. IIS and the SQL timeouts are set to 60 and 120 seconds.
Looking at some of the SQL Server settings, I noticed that MAXDOP was set to 1. Before doing extensive research on it and looking through the BOL, another DBA and I changed it to 0.
Server is a Hyper-V VM with:
Processors:4
NUMA nodes:2
Sockets: 2
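For reference, the MAXDOP change described in this thread corresponds to an sp_configure sequence along these lines (0 lets SQL Server use all available schedulers; the box had been pinned at 1):

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 0;
RECONFIGURE;
```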
Interesting thing happened. Our timeouts seem to have disappeared: from several a day, we are now at one every few days. But now the issue we are having is that our deadlocks have gone through the roof, from one or two every few days to
8+ a day!
We have been changing our SELECT statements to include WITH (NOLOCK) so they do not compete with the UPDATE statements they usually fall victim to. The deadlocks and timeouts do not seem to be related to any of the SharePoint Content DBs. All of the deadlocks
are with our Intranet site when it communicates with the AccPac DB, or the Internet site on its own.
Any suggestions on where I should focus my energy when benchmarking and tuning the server?
Thank you,
Scott -
Thank you all for your replies.
The server had 30GB of RAM and then we bumped it up to 40GB at the same time we changed the MAXDOP to 0.
It was set to 1 because, if it isn't, MS won't support your SharePoint installation. This is from the Setup Guide for SharePoint on SQL Server; MAXDOP = 1 is a must in their book for official support. It always forces serial plans because, to be honest,
the SharePoint queries are extremely terrible.
I understand this, though I would guess that the install of SharePoint didn't actually set MAXDOP = 1 during the install? We basically have two SharePoint sites on the server. One has about 10 users and the other maybe 20. The sites are
not used very much either, so I didn't think there would be too much impact.
Though now the issue we are having is that our Deadlocks have gone through the roof.
You probably didn't get this before (though they probably still happened) because the executions were forced serially and you dodged many-a-bullet because of this artificial speed bump. Deadlocks are application based, pure and simple.
We typically do not alter the accounting system's DB contents directly; we only peer into the database to present information to the user. We looked at READ_COMMITTED_SNAPSHOT, but since that is a database-level setting rather than something applied
to individual queries, we do not want to alter the Accounting DB, as we could not know the potential ramifications.
A typical deadlock occurs when the accounting system is creating or modifying an order's master record so no one else can modify it; instead of a row lock, it locks the whole table. This is out of our control. When we then do a SELECT against
the same table from the Intranet site, we get a deadlock unless we use WITH (NOLOCK). The data we get is not super critical. The only potential issue is that an uncommitted transaction from the accounting system could be adding
multiple rows to an order, and when we SELECT the data we might miss a line item or two.
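As a sketch, the dirty-read trade-off described above looks like this (the table name is taken from the deadlock graph below; the aggregate is illustrative only):

```sql
-- NOLOCK reads uncommitted rows: totals may include half-entered orders
-- or miss line items mid-update, in exchange for taking no shared locks
SELECT t.sono, SUM(t.price * t.qtyord) AS order_total
FROM   PRODATA01..sotran t WITH (NOLOCK)
GROUP  BY t.sono;
```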
We have been changing our Select statements to include WITH (NOLOCK) so they do not compete with the UPDATE statements they usually fall victim to.
This really isn't going to get very far to be honest. Are you deadlocking on the same rows? It seems to be the order of operations taken by either the queries or the logic used in them. Without deadlock information there really is nothing to diagnose, but
that definitely sounds like the same resource being used in multiple places when it probably doesn't need to be.
This is one of the typical deadlocks that we get. Intranet Site is getting Totals for Orders while the accounting system is in the process of setting its internal record Lock on an Order.
<EVENT_INSTANCE>
<EventType>DEADLOCK_GRAPH</EventType>
<PostTime>2014-05-12T15:26:09.447</PostTime>
<SPID>23</SPID>
<TextData>
<deadlock-list>
<deadlock victim="process2f848b048">
<process-list>
<process id="process2f848b048" taskpriority="0" logused="0" waitresource="OBJECT: 12:644249400:0 " waittime="1295" ownerId="247639995" transactionname="SELECT" lasttranstarted="2014-05-12T15:26:08.150" XDES="0x69d1d3620" lockMode="IS" schedulerid="2" kpid="2856" status="suspended" spid="184" sbid="0" ecid="0" priority="0" trancount="0" lastbatchstarted="2014-05-12T15:26:08.150" lastbatchcompleted="2014-05-12T15:26:08.150" lastattention="2014-05-12T14:50:52.280" clientapp=".Net SqlClient Data Provider" hostname="VSVR-WWW-INT12" hostpid="15060" loginname="SFA" isolationlevel="read committed (2)" xactid="247639995" currentdb="7" lockTimeout="4294967295" clientoption1="673185824" clientoption2="128056">
<executionStack>
<frame procname="SFA.dbo.SAGE_SO_order_total_no_history_credit" line="20" stmtstart="1190" stmtend="5542" sqlhandle="0x030007004c642a7a7c17d000e9a100000100000000000000">
SELECT SUM(t.price * t.qtyord) AS On_Order_Total
FROM PRODATA01..somast m
INNER JOIN PRODATA01..sotran t ON m.sono = t.sono
INNER JOIN SFA..item i ON i.our_part_number COLLATE DATABASE_DEFAULT = t.item COLLATE DATABASE_DEFAULT
INNER JOIN SFA..supplier s ON s.supplier_key = i.supplier_key
INNER JOIN SFA..customer c ON c.abbreviation COLLATE DATABASE_DEFAULT = m.custno COLLATE DATABASE_DEFAULT
INNER JOIN SFA..sales_order_ownership soo ON soo.so_id_col = m.id_col
LEFT JOIN PRODATA01..potran p ON p.id_col = t.po_id_col
LEFT JOIN SFA..alloc_inv a ON a.sono COLLATE DATABASE_DEFAULT = t.sono COLLATE DATABASE_DEFAULT AND a.tranlineno = t.tranlineno
WHERE c.is_visible = 1 AND m.sostat NOT IN ('V','X') AND m.sotype IN ('C','','O')
AND t.sostat NOT IN ('V','X') AND t.sotype IN ('C','','O')
--AND t.rqdate BETWEEN @start_ordate AND @end_ordate
AND UPPER(LEFT(t.item,4)) <> 'SHIP' AND t.item NOT LIKE '[_]%'
AND ((SUBSTRING(m.ornum,2,1) = 'A' AND p.expdate <= @end_ordate) OR (t.rqdate <= @en </frame>
</executionStack>
<inputbuf>
Proc [Database Id = 7 Object Id = 2049598540] </inputbuf>
</process>
<process id="process51df0c2c8" taskpriority="0" logused="28364" waitresource="OBJECT: 12:1369823992:0 " waittime="1032" ownerId="247639856" transactionname="user_transaction" lasttranstarted="2014-05-12T15:26:07.940" XDES="0xf8b5b620" lockMode="X" schedulerid="1" kpid="7640" status="suspended" spid="292" sbid="0" ecid="0" priority="0" trancount="2" lastbatchstarted="2014-05-12T15:26:08.410" lastbatchcompleted="2014-05-12T15:26:08.410" clientapp="Sage Pro ERP version 7.5" hostname="VSVR-DESKTOP" hostpid="15892" loginname="AISSCH" isolationlevel="read uncommitted (1)" xactid="247639856" currentdb="12" lockTimeout="4294967295" clientoption1="536870944" clientoption2="128056">
<executionStack>
<frame procname="adhoc" line="1" stmtstart="22" sqlhandle="0x02000000304ac5350b86da8b9422b389413bf23015ac25d0">
UPDATE PRODATA01..SOTRAN WITH (TABLOCK HOLDLOCK) SET lckuser = lckuser WHERE id_col = @P1 </frame>
<frame procname="unknown" line="1" sqlhandle="0x000000000000000000000000000000000000000000000000">
unknown </frame>
</executionStack>
<inputbuf>
(@P1 float)UPDATE PRODATA01..SOTRAN WITH (TABLOCK HOLDLOCK) SET lckuser = lckuser WHERE id_col = @P1 </inputbuf>
</process>
</process-list>
<resource-list>
<objectlock lockPartition="0" objid="644249400" subresource="FULL" dbid="12" objectname="PRODATA01.dbo.somast" id="lock18b5e3900" mode="X" associatedObjectId="644249400">
<owner-list>
<owner id="process51df0c2c8" mode="X" />
</owner-list>
<waiter-list>
<waiter id="process2f848b048" mode="IS" requestType="wait" />
</waiter-list>
</objectlock>
<objectlock lockPartition="0" objid="1369823992" subresource="FULL" dbid="12" objectname="PRODATA01.dbo.sotran" id="lock2ce1c4680" mode="SIX" associatedObjectId="1369823992">
<owner-list>
<owner id="process2f848b048" mode="IS" />
</owner-list>
<waiter-list>
<waiter id="process51df0c2c8" mode="X" requestType="convert" />
</waiter-list>
</objectlock>
</resource-list>
</deadlock>
</deadlock-list>
</TextData>
<TransactionID />
<LoginName>sa</LoginName>
<StartTime>2014-05-12T15:26:09.447</StartTime>
<ServerName>VSVR-SQL</ServerName>
<LoginSid>AQ==</LoginSid>
<EventSequence>2335848</EventSequence>
<IsSystem>1</IsSystem>
<SessionLoginName />
</EVENT_INSTANCE>
I'd (in parallel) look at why parallel plans are being chosen. Not that parallel plans are a bad thing, but is the cost of the execution so high that parallelism is chosen all of the time?
How can I determine the cost of different statements? The Current Cost threshold value is 5.
The last place I would set my effort is on the Dev team. Internal Intranet queries should not take 60 to 120 seconds. That's just asking for issues that you already have. If some larger functionality with that is needed, do it on the back end as
aggregation over a certain time period and use that new static data. Recompute as needed. This is especially true if your deadlocks are happening on these resources (chances are, it is).
We are working on the long queries, trying to break them up. We thought about back-end processing of the data so it's available for users when they need it, but some of the pages that take time are not accessed that often; if we gathered the data every 10 minutes in the background, it would be computed many more times per day than it would be on demand.
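A middle ground between the two options above (periodic background refresh vs. pure on-demand) is a lazily computed aggregate with a time-to-live: nothing runs unless a request arrives, and repeat requests within the TTL reuse the cached value. This is only a sketch; the function and field names are made up.

```python
import time

# Lazy "compute on demand, then reuse" aggregate: the expensive query
# runs only when a request arrives AND the cached value is stale.
# All names here are illustrative, not from the actual application.

class CachedAggregate:
    def __init__(self, compute, ttl_seconds):
        self.compute = compute
        self.ttl = ttl_seconds
        self.value = None
        self.computed_at = None

    def get(self, now=None):
        now = time.monotonic() if now is None else now
        if self.computed_at is None or now - self.computed_at > self.ttl:
            self.value = self.compute()   # hit the database only when stale
            self.computed_at = now
        return self.value

calls = 0
def expensive_order_totals():
    global calls
    calls += 1
    return {"on_order_total": 12345.67}

cache = CachedAggregate(expensive_order_totals, ttl_seconds=600)
cache.get(now=0.0)
cache.get(now=100.0)    # within TTL: served from cache, no recompute
cache.get(now=700.0)    # stale: recomputed
print(calls)
```

Rarely visited pages then cost nothing between visits, while hot pages stop hammering the locked tables on every request.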
Thank you all again! -
Trying to decode this from USB Prober re. old joystick device
I'm troubleshooting a USB issue with an old joystick device. I can't find explanations to the various fields in USB Prober, i.e. what are the consequences of specific values/labels. I would like to verify that this device will not work with OS Mavericks or if possible find a workaround.
The device shows up on USB Prober, but it will not show up in "Joystick and Gamepad Tester" for Mac and will not work in FG v3.0 for Mac.
The device does work under a virtual WinXP on my iMac, but the application (FG v3.0) under the virtual machine is running very slow.
I suspect the "Not Captive" could be the problem, but I would appreciate help to simply understand what the various outputs from USB Prober mean for this device, what you think causes it not to work, and of course whether there is any way in your opinion to make it work.
From USB Prober:
Full Speed device @ 5 (0xFD130000): ............................................. Composite device: "Wingman Force"
Port Information: 0x0018
Not Captive
External Device
Connected
Enabled
Device Descriptor
Descriptor Version Number: 0x0100
Device Class: 0 (Composite)
Device Subclass: 0
Device Protocol: 0
Device MaxPacketSize: 8
Device VendorID/ProductID: 0x046D/0xC281 (Logitech Inc.)
Device Version Number: 0x0100
Number of Configurations: 1
Manufacturer String: 1 "Logitech"
Product String: 2 "Wingman Force"
Serial Number String: 0 (none)
Configuration Descriptor
Length (and contents): 32
Number of Interfaces: 1
Configuration Value: 1
Attributes: 0x40 (self-powered)
MaxPower: 0 ma
Interface #0 - Vendor-specific
Alternate Setting 0
Number of Endpoints 2
Interface Class: 255 (Vendor-specific)
Interface Subclass: 255 (Vendor-specific)
Interface Protocol: 255
Endpoint 0x82 - Interrupt Input
Address: 0x82 (IN)
Attributes: 0x03 (Interrupt no synchronization data endpoint)
Max Packet Size: 16
Polling Interval: 8 ms
Endpoint 0x01 - Interrupt Output
Address: 0x01 (OUT)
Attributes: 0x03 (Interrupt no synchronization data endpoint)
Max Packet Size: 32
Polling Interval: 4 ms
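For reference, the fields USB Prober prints above map directly onto the standard 18-byte, little-endian USB device descriptor. The sketch below rebuilds that descriptor from the values shown (bcdUSB 0x0100, class 0, max packet 8, VID 0x046D, PID 0xC281) and parses it back; it illustrates the layout only, not what Prober does internally.

```python
import struct

# The standard USB device descriptor is a fixed 18-byte, little-endian
# structure. Bytes here are reconstructed from the Prober output above,
# purely for illustration of how those fields are laid out.

FMT = "<BBHBBBBHHHBBBB"   # 18 bytes total

raw = struct.pack(
    FMT,
    18,       # bLength
    1,        # bDescriptorType (DEVICE)
    0x0100,   # bcdUSB   -> "Descriptor Version Number: 0x0100"
    0,        # bDeviceClass -> 0 (Composite)
    0,        # bDeviceSubClass
    0,        # bDeviceProtocol
    8,        # bMaxPacketSize0 -> "Device MaxPacketSize: 8"
    0x046D,   # idVendor  (Logitech)
    0xC281,   # idProduct (Wingman Force)
    0x0100,   # bcdDevice -> "Device Version Number: 0x0100"
    1,        # iManufacturer -> string index 1 "Logitech"
    2,        # iProduct      -> string index 2 "Wingman Force"
    0,        # iSerialNumber -> 0 (none)
    1,        # bNumConfigurations
)

fields = struct.unpack(FMT, raw)
vid, pid = fields[7], fields[8]
print(f"{vid:#06x}/{pid:#06x}")   # 0x046d/0xc281
```

The key point for this device is bDeviceClass 0 with a vendor-specific (255) interface: there is no HID class for the OS to bind a generic driver to, which matches Logitech's explanation below.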
From Logitech support forum:
Re: Logitech Wingman Force & Windows 7 64-bit
Options
06-25-2012 11:21 AM
SO:
The Wingman Force and Formula Force are not USB HID products
They do not use the common drivers that HID products can
They are serial products that can use a special USB adapter and communicate to the PC in very nonstandard ways
Their feedback effects HAVE to be prewritten to the memory of the device and executed by calling to the memory space of the effect. Every USB Force Feedback device we made after these two devices can accept on-the-fly effects sent to them from the computer and does not require preloading to the hardware.
The memory of the device was quite good at the time of the build, but now and with what game developers are doing, it's very limited
To extend the life of the product, we implemented ever-more complex tricks to get the device to transparently work as long as possible, but modern game effects continued to put more demand on the hardware.
With 64-bit addressing, our tricks were completely broken. Considering the limitations of the hardware on such a fundamental level, we did not continue development. To unify the codebase, all support was taken out for both 5.x builds, and if you want to continue using this product, use LGS 4.6 on a Windows x86 machine.
Our LGS 4.6 code is not prepared for public release and I do not think we will ever release it openly.
Update: Here is more information from USB Prober IORegistry: 5: Wingman Force@fd130000 <class IOUSBDevice>
Message was edited by: renapple

As long as you have not reloaded much (or any) new information to the iPod (or the PCs), the following links will direct you to software that has a chance of recovering some or all of your music files. Most are not shareware or freeware, so you may have to spend a little money.....
This PodSalvage software will only work with Macs running OS X. See the PodSalvage page at SubRosaSoft.com: http://www.subrosasoft.com/MacSoftware/index.php?mainpage=product_info&productsid=2
This link will give you several 'Erased iPod' recovery resources for both Mac and Windows users:
http://forums.ipodlounge.com/showthread.php?s=&threadid=45619
Make sure you read through many of the posts on the iLounge – some folks had good results, others had poor results. There seems to be a split camp on the effectiveness of recovery programs.
Either way, strongly consider a backup strategy after you recover/replace your music: http://discussions.apple.com/click.jspa?searchID=210939&messageID=1215125
http://www.recover4all.com/
http://www.pcinspector.de/file_recovery/UK/welcome.htm
http://www.handyrecovery.com/index.shtml
http://www.stellarinfo.com/mac-data-recovery.htm
http://www.binarybiz.com/vlab/mac.php
http://www.yamipod.com/main/modules/home/ -
NON-transactional session bean access entity bean
We are currently profiling our product using the Borland OptimizeIt tool, and we found some interesting issues. Due to our design, we have many session beans which are non-transactional, and these session beans access entity beans for read-only operations such as getWeight and getRate. Since they are read only, there is no need for the transaction commit overhead, which really takes time, as the profile shows. I know WebLogic supports read-only entity beans, but it seems that only benefits the ejbLoad call; my test program shows that WebLogic still creates a local transaction even when I specified transaction-not-supported, and Transaction.commit() is always called in postInvoke(). From the profile, for a single method call such as getRate(), 80% of the time is spent in postInvoke(). Any suggestion on this? BTW, most of our entity beans use Exclusive lock; that's the reason we use non-transactional session beans, to avoid a dead-lock problem.
Thanks

Slava,
Thanks for the link, actually I read it before, and following is what I extracted
it from the doc:
<weblogic-doc>
Do not set db-is-shared to "false" if you set the entity bean's concurrency
strategy to the "Database" option. If you do, WebLogic Server will ignore the
db-is-shared setting.
</weblogic-doc>
Thanks
"Slava Imeshev" <[email protected]> wrote:
Hi Jinsong,
You may want to read this to get more detailed explanation
on db-is-shared (cache-between-transactions for 7.0):
http://e-docs.bea.com/wls/docs61/ejb/EJB_environment.html#1127563
Let me know if you have any questions.
Regards,
Slava Imeshev
"Jinsong HU" <[email protected]> wrote in message
news:[email protected]...
Thanks.
But it's still not clear to me on the db-is-shared setting: if I specified the entity lock as database lock, I assumed db-is-shared is useless, because for each new transaction the entity bean will reload data anyway. Correct me if I am wrong.
Jinsong
"Slava Imeshev" <[email protected]> wrote:
Jinsong,
See my answers inline.
"Jinsong Hu" <[email protected]> wrote in message
news:[email protected]...
Hi Slava,
Thanks for your reply. Actually, I agree with you: we need to review our db schema and separate the business logic to avoid db locks. I cannot say, guys, we need to change this and that, since it's a big application, developed since the EJB 1.0 spec; I think they are afraid to make such a big change.

Total rewrite is the worst thing that can happen to an app. The better approach would be identifying the most critical piece and doing surgery on it.
Following are the questions in my mind:
(1) I think there should be many companies using WebLogic Server to develop large enterprise applications; I am just wondering what the main transaction/lock mechanism used is. Is transactional session / database lock / db-is-shared entity the dominant one? It seems that if you specify database lock, then db-is-shared should be true, right?

I can't say for the whole community; in my experience the standard usage pattern is session facades calling entity EJBs with the Required TX attribute, plus plain transacted JDBC calls for bulk reads or inserts. And basically it's not true: one will need db-is-shared only if there are changes to the database done from outside of the app server.
(2) For an RO bean, if I set read-idle-timeout to 0, it should only load once at first use, right?

I assume read-timeout-seconds was meant. That's right, but if an application constantly reads new RO data, RO beans will be constantly dropped from the cache and new ones will be loaded. You may want to look at the server console to see if there's a lot of passivation for RO beans.
(3) On the clustering part, has anyone used it in a real enterprise application? My concern: since database lock is the only way to choose, what about the effect of ejbLoad on performance? Since most transactions are short-lived, if high-volume transactions are in process, I am just scared to death about the ejbLoad overhead.

ejbLoad is a part of the bean's lifecycle; how would you be scared of it? If ejbLoads take too much time, it could be a good idea to profile the SQL used. The right index optimization can make a huge difference. Also you may want to consider using CMP beans to let WebLogic take care of load optimization.
(4) If using optimistic lock, all the ejbStores need to do a version check or timestamp check, right? What about this overhead?

As for optimistic concurrency, it performs quite well, as you can use lighter isolation levels.
HTH,
Slava Imeshev
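The version/timestamp check asked about in (4) can be sketched in a few lines, independent of any EJB container: each store compares the version captured at load time with the current one, and only the first writer wins. Class and field names here are illustrative, not WebLogic APIs.

```python
# Minimal sketch of an optimistic version check: the cost is one extra
# compare at store time instead of a lock held for the whole read.
# Names are illustrative; this is not a WebLogic API.

class StaleWriteError(Exception):
    pass

class VersionedRow:
    def __init__(self, data):
        self.data = data
        self.version = 0

    def load(self):
        # The "ejbLoad" analogue: hand back the data plus its version.
        return self.data.copy(), self.version

    def store(self, new_data, expected_version):
        # The "ejbStore" analogue: refuse the write if someone else
        # committed since this caller read the row.
        if self.version != expected_version:
            raise StaleWriteError("row changed since it was read")
        self.data = new_data
        self.version += 1

row = VersionedRow({"rate": 1.0})
data, v = row.load()
row.store({"rate": 2.0}, v)          # first writer wins
try:
    row.store({"rate": 3.0}, v)      # second writer used a stale version
except StaleWriteError:
    print("retry with a fresh read")
```

The loser simply re-reads and retries, which is usually cheaper under load than serializing all access with an exclusive lock.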
"Jinsong Hu" <[email protected]> wrote in message
news:[email protected]...
We are using Exclusive Lock for the entity beans because we do not want to load data in each new transaction. If we use Database lock, that means we dedicate data access calls to the database; if a database deadlock happens, it's hard to detect, while using Exclusive lock we can detect the deadlock at the container level.

The problem is, using Exclusive concurrency mode you serialize access to the data represented by the bean. This approach has a negative effect on the ability of the application to process concurrent requests. As a result the app may have performance problems under load.
Actually, at the beginning, we did use database lock and transactional session beans, but the database deadlocks and frequent ejbLoads really killed us, so we decided to move to Exclusive lock, and to avoid deadlock we changed some session beans to non-transactional.

The fact that you had database deadlocking issues tells that the application logic / database schema may need some review. Normally, to avoid deadlocking, it's good to group the database operations mixing in updates and inserts into one place, so that the db locking sequence is not spread in time. Moving to forced serialized data access just hides design/implementation problems.

Also, making session beans non-transactional makes the container create short-living transactions for each call to entity bean methods. It's a costly process and it puts additional load on both the container and the database.
We could use ReadOnly lock for some entity beans, but since WebLogic Server will always create a local transaction for an entity bean, and we found transaction commit is expensive, I am arguing why we need to create a container-level transaction for a read-only bean.

First, read-only beans still need to load data. Also, you may see RO beans constantly loading data if db-is-shared is set to true. Another reason can be that RO semantics are not applicable to the data presented by the RO bean (for instance, you have a reporting engine that constantly produces "RO" data, while the application consuming that data retrieves only new data and never asks for "old" data). RO beans are good when there is relatively stable data accessed repeatedly for read-only access.
You may want to tell us more about your app, we may be of help.
Regards,
Slava Imeshev
I will post the performance data; let's see how costly transaction.commit is.
"Cameron Purdy" <[email protected]> wrote:
I am worried that you have made some decisions based on an improper understanding of what WebLogic is doing.
First, you say "non transactional", but from your description you should have those marked as tx REQUIRED to avoid multiple transactions (since non-transactional just means that the database operation becomes its own little transaction).
Second, you say you are using exclusive lock, which you should only use if you are absolutely sure that you need it (and note that it does not work in a cluster).
Peace,
Cameron Purdy
Tangosol, Inc.
http://www.tangosol.com/coherence.jsp
Tangosol Coherence: Clustered Replicated Cache for Weblogic
"Jinsong Hu" <[email protected]> wrote in message
news:[email protected]...
> -
Unable to open Mitutoyo Digimatic Instrument Reader using Mit. Mux
Hi all,
I was trying to open this example "Mitutoyo Digimatic Instrument Reader using Mit. Mux"
http://zone.ni.com/devzone/cda/epd/p/id/4546
but had no luck.
When i open Miutoyo Event Handler2.vi
It prompts for the following file location:
programmatically writting a 2D array to a multicolumn listbox.vi
I was able to find in here
http://sine.ni.com/devzone/cda/epd/p/id/1108
global 2.vi
Mit Force Serial Write and Read.vi
Can anyone kindly help me to get these 2 files?
Thanks very much

Hi Ray,
It shows at http://zone.ni.com/devzone/cda/epd/p/id/4546 that Mr. Kenny Kreitzer submitted this example.
You could probably mail the address shown on that page: [email protected]
Regards, Kate -
Code stopped working once in sequence
We are controlling a LabSmith HV power supply in LabView using drivers from the manufacturer. We are trying to make injections by alternating between a floating and an applied voltage on channel D. In an earlier design we achieved this using a case structure and elapsed time VIs. However, our previous code was not serial and caused problems. We have transferred the code to a sequence to force serial commands, and in general this is working much better. We are having one problem though. We would like to have the option of manually injecting (by floating the voltage on Channel D) or having these injections occur with a regular period. Both methods worked in our earlier design (v07) using a case structure and elapsed time VIs. The same structure, however, is not working in our new version (v10). The single injection, when the upper case structure in the last frame is true, works great. However, when this is false, the program appears to execute the false case of the lower left case structure almost all the time, and only very briefly and regularly flips to the true command. We aren't sure why this code, which worked in our last version, does not work now, and we would be very grateful for feedback. Thank you!
Attachments:
Gated timed injections version 10.vi 150 KB
Gated timed injections version 07.vi 70 KB

Duplicate:
http://forums.ni.com/t5/Instrument-Control-GPIB-Serial/VISA-Hex-0xBFFF0015-Timeout-expired-before-op... -
Decoding USB Prober output on iMac
iPhoto '08, OS X Mavericks (10.9.2), iPhone 4 32 GB and iPhone 4 8 GB

Thank you for replying Rudegar
I just got this further information from USB Prober:
bcdDevice 256 (0x100)
bDeviceClass 0 (0x0)
bDeviceProtocol 0 (0x0)
bDeviceSubClass 0 (0x0)
bMaxPacketSize0 8 (0x8)
bNumConfigurations 1 (0x1)
Bus Power Available 250 (0xfa)
Device Speed 1 (0x1)
idProduct 49793 (0xc281)
idVendor 1133 (0x46d)
iManufacturer 1 (0x1)
IOCFPlugInTypes
9dc7b780-9ec0-11d4-a54f-000a27052861 IOUSBFamily.kext/Contents/PlugIns/IOUSBLib.bundle
IOGeneralInterest IOCommand is not serializable
IOUserClientClass IOUSBDeviceUserClientV2
iProduct 2 (0x2)
iSerialNumber 0 (0x0)
locationID -49086464 (0xfd130000)
Low Power Displayed No
PortNum 3 (0x3)
Requested Power 0 (0x0)
sessionID 3548107075914 (0x1ba9714a1ba9714a)
USB Address 5 (0x5)
USB Product Name Wingman Force
USB Vendor Name Logitech
Can you explain to me what it is that makes this device unserviceable in Mac OS?
Besides the virtual machine, where there is a WinXP driver, there is also a driver for it in Linux. I do not know if the open-source Linux driver can be used for anything in this respect. Do you know anything about that? -
Please provide advice on tuning this query
I have the 10gR2. the SGA and PGA info as:
sql> @showpga
NAME VALUE
session uga memory 921,776
session uga memory max 1,518,856
session pga memory 1,569,048
session pga memory max 1,896,728
sum 5,906,408
sql>
sql> @vsga
NAME VALUE
Database Buffers 2,248,146,944
Fixed Size 2,242,736
Redo Buffers 10,813,440
Variable Size 2,033,764,176
sum 4,294,967,296
The tables' info in the query:
Tables info
num_rows table_name last_analyzed
54470 PA_PROJECTS_ALL 08-FEB-09
2104470 PA_TASKS 08-FEB-09
5420270 PA_RESOURCE_ASSIGNMENTS 08-FEB-09
119610 PA_BUDGET_VERSIONS 08-FEB-09
The query shown below ran more than 2 hours to return 1,263,880 records.
I ran it as:
01:25:10 sql>> set autotrace trace
01:25:22 sql>> SELECT
01:25:32 2 'PRJ_'||UPPER(P.SEGMENT1),
01:25:32 3 'PRJ_'||UPPER(P.SEGMENT1)||'_TSK_'||UPPER(T.TASK_NUMBER),
01:25:32 4 UPPER('ACTIVITY '||P.SEGMENT1||', '||T.TASK_NUMBER||' - '||T.DESCRIPTION),
01:25:32 5 UPPER(P.SEGMENT1||', '||T.TASK_NUMBER||' - '||T.DESCRIPTION),
01:25:32 6 UPPER(P.SEGMENT1||', '||T.TASK_NUMBER)
01:25:32 7 FROM PA_PROJECTS_ALL P
01:25:32 8 , PA_TASKS T
01:25:32 9 , PA_RESOURCE_ASSIGNMENTS A
01:25:32 10 , PA_BUDGET_VERSIONS B
01:25:32 11 WHERE P.PROJECT_ID = T.PROJECT_ID
01:25:32 12 AND T.TASK_ID <> T.PARENT_TASK_ID
01:25:32 13 AND T.PARENT_TASK_ID IS NOT NULL
01:25:32 14 AND P.PROJECT_ID = B.PROJECT_ID
01:25:32 15 AND P.PROJECT_ID = A.PROJECT_ID
01:25:32 16 AND T.TASK_ID = A.TASK_ID
01:25:32 17 AND B.BUDGET_VERSION_ID = A.BUDGET_VERSION_ID
01:25:32 18 AND B.BUDGET_STATUS_CODE = 'B'
01:25:32 19 AND B.BUDGET_TYPE_CODE = 'Current'
01:25:32 20 AND B.CURRENT_FLAG = 'Y'
01:25:32 21 /
1263880 rows selected.
Execution Plan
| Id | Operation | Name | Rows | Bytes | Cost |
| 0 | SELECT STATEMENT | | 1 | 106 | 25304 |
| 1 | NESTED LOOPS | | 1 | 106 | 25304 |
| 2 | HASH JOIN | | 12 | 636 | 25280 |
| 3 | HASH JOIN | | 9968 | 350K| 3579 |
| 4 | TABLE ACCESS FULL | PA_BUDGET_VERSIONS | 9968 | 223K| 3109 |
| 5 | VIEW | index$_join$_001 | 54470 | 691K| 469 |
| 6 | HASH JOIN | | | | |
| 7 | INDEX FAST FULL SCAN | PA_PROJECTS_U1 | 54470 | 691K| 145 |
| 8 | INDEX FAST FULL SCAN | PA_PROJECTS_U2 | 54470 | 691K| 321 |
| 9 | INDEX FAST FULL SCAN | PA_RESOURCE_ASSIGNMENTS_U2 | 5420K| 87M| 21615 |
| 10 | TABLE ACCESS BY INDEX ROWID| PA_TASKS | 1 | 53 | 2 |
| 11 | INDEX UNIQUE SCAN | PA_TASKS_U1 | 1 | | 1 |
Statistics
1 recursive calls
0 db block gets
4668610 consistent gets
460575 physical reads
10220 redo size
77725800 bytes sent via SQL*Net to client
884947 bytes received via SQL*Net from client
126389 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1263880 rows processed
04:02:44 sql>>
It ran about 2.5 hrs.
Then I tried to force the hash join, since we have a huge SGA and PGA.
sql>> set time on
02:31:59 sql>> set autotrace trace
02:32:28 sql>>
02:32:28 sql>> SELECT /*+ use_hash(p t) */
02:32:41 2 'PRJ_'||UPPER(P.SEGMENT1),
02:32:41 3 'PRJ_'||UPPER(P.SEGMENT1)||'_TSK_'||UPPER(T.TASK_NUMBER),
02:32:41 4 UPPER('ACTIVITY '||P.SEGMENT1||', '||T.TASK_NUMBER||' - '||T.DESCRIPTION),
02:32:41 5 UPPER(P.SEGMENT1||', '||T.TASK_NUMBER||' - '||T.DESCRIPTION),
02:32:42 6 UPPER(P.SEGMENT1||', '||T.TASK_NUMBER)
02:32:42 7 FROM PA_PROJECTS_ALL P
02:32:42 8 , PA_TASKS T
02:32:42 9 , PA_RESOURCE_ASSIGNMENTS A
02:32:42 10 , PA_BUDGET_VERSIONS B
02:32:42 11 WHERE P.PROJECT_ID = T.PROJECT_ID
02:32:42 12 AND T.TASK_ID <> T.PARENT_TASK_ID
02:32:42 13 AND T.PARENT_TASK_ID IS NOT NULL
02:32:42 14 AND P.PROJECT_ID = B.PROJECT_ID
02:32:42 15 AND P.PROJECT_ID = A.PROJECT_ID
02:32:42 16 AND T.TASK_ID = A.TASK_ID
02:32:42 17 AND B.BUDGET_VERSION_ID = A.BUDGET_VERSION_ID
02:32:42 18 AND B.BUDGET_STATUS_CODE = 'B'
02:32:42 19 AND B.BUDGET_TYPE_CODE = 'Current'
02:32:42 20 AND B.CURRENT_FLAG = 'Y'
02:32:42 21 /
1263880 rows selected.
Execution Plan
| Id | Operation | Name | Rows | Bytes | Cost |
| 0 | SELECT STATEMENT | | 1 | 106 | 42350 |
| 1 | HASH JOIN | | 1 | 106 | 42350 |
| 2 | HASH JOIN | | 8 | 424 | 25280 |
| 3 | HASH JOIN | | 9968 | 350K| 3579 |
| 4 | TABLE ACCESS FULL | PA_BUDGET_VERSIONS | 9968 | 223K| 3109 |
| 5 | VIEW | index$_join$_001 | 54470 | 691K| 469 |
| 6 | HASH JOIN | | | | |
| 7 | INDEX FAST FULL SCAN| PA_PROJECTS_U1 | 54470 | 691K| 145 |
| 8 | INDEX FAST FULL SCAN| PA_PROJECTS_U2 | 54470 | 691K| 321 |
| 9 | INDEX FAST FULL SCAN | PA_RESOURCE_ASSIGNMENTS_U2 | 5420K| 87M| 21615 |
| 10 | TABLE ACCESS FULL | PA_TASKS | 1837K| 92M| 17041 |
Statistics
1 recursive calls
0 db block gets
535322 consistent gets
355917 physical reads
772 redo size
79117543 bytes sent via SQL*Net to client
884948 bytes received via SQL*Net from client
126389 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1263880 rows processed
04:48:07 sql>>
It still ran 2 hrs.
Based on the info presented to you, I would like to know your advice on how to make the improvement.
TIA

I have the 10gR2.
SQL> select * from v$version;
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for Solaris: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
SQL>
the SGA and PGA info as:
sql> @showpga
NAME VALUE
session uga memory 921,776
session uga memory max 1,518,856
session pga memory 1,569,048
session pga memory max 1,896,728
sum 5,906,408
sql>
sql> @vsga
NAME VALUE
Database Buffers 2,248,146,944
Fixed Size 2,242,736
Redo Buffers 10,813,440
Variable Size 2,033,764,176
sum 4,294,967,296
The tables' info in the query:
Tables info
num_rows table_name last_analyzed
54470 PA_PROJECTS_ALL 08-FEB-09
2104470 PA_TASKS 08-FEB-09
5420270 PA_RESOURCE_ASSIGNMENTS 08-FEB-09
119610 PA_BUDGET_VERSIONS 08-FEB-09
The query shown below ran more than 2 hours to return 1,263,880 records.
A) I ran it as:
01:25:10 sql>> set autotrace trace
01:25:22 sql>> SELECT
01:25:32 2 'PRJ_'||UPPER(P.SEGMENT1),
01:25:32 3 'PRJ_'||UPPER(P.SEGMENT1)||'_TSK_'||UPPER(T.TASK_NUMBER),
01:25:32 4 UPPER('ACTIVITY '||P.SEGMENT1||', '||T.TASK_NUMBER||' - '||T.DESCRIPTION),
01:25:32 5 UPPER(P.SEGMENT1||', '||T.TASK_NUMBER||' - '||T.DESCRIPTION),
01:25:32 6 UPPER(P.SEGMENT1||', '||T.TASK_NUMBER)
01:25:32 7 FROM PA_PROJECTS_ALL P
01:25:32 8 , PA_TASKS T
01:25:32 9 , PA_RESOURCE_ASSIGNMENTS A
01:25:32 10 , PA_BUDGET_VERSIONS B
01:25:32 11 WHERE P.PROJECT_ID = T.PROJECT_ID
01:25:32 12 AND T.TASK_ID <> T.PARENT_TASK_ID
01:25:32 13 AND T.PARENT_TASK_ID IS NOT NULL
01:25:32 14 AND P.PROJECT_ID = B.PROJECT_ID
01:25:32 15 AND P.PROJECT_ID = A.PROJECT_ID
01:25:32 16 AND T.TASK_ID = A.TASK_ID
01:25:32 17 AND B.BUDGET_VERSION_ID = A.BUDGET_VERSION_ID
01:25:32 18 AND B.BUDGET_STATUS_CODE = 'B'
01:25:32 19 AND B.BUDGET_TYPE_CODE = 'Current'
01:25:32 20 AND B.CURRENT_FLAG = 'Y'
01:25:32 21 /
1263880 rows selected.
set markup html preformat on
Rem
Rem Use the display table function from the dbms_xplan package to display the last
Rem explain plan. Force serial option for backward compatibility
Rem
set linesize 152
set pagesize 0
select plan_table_output from table(dbms_xplan.display('plan_table',null,'serial'));
Execution Plan
| Id | Operation | Name | Rows | Bytes | Cost |
| 0 | SELECT STATEMENT | | 1 | 106 | 25304 |
| 1 | NESTED LOOPS | | 1 | 106 | 25304 |
| 2 | HASH JOIN | | 12 | 636 | 25280 |
| 3 | HASH JOIN | | 9968 | 350K| 3579 |
| 4 | TABLE ACCESS FULL | PA_BUDGET_VERSIONS | 9968 | 223K| 3109 |
| 5 | VIEW | index$_join$_001 | 54470 | 691K| 469 |
| 6 | HASH JOIN | | | | |
| 7 | INDEX FAST FULL SCAN | PA_PROJECTS_U1 | 54470 | 691K| 145 |
| 8 | INDEX FAST FULL SCAN | PA_PROJECTS_U2 | 54470 | 691K| 321 |
| 9 | INDEX FAST FULL SCAN | PA_RESOURCE_ASSIGNMENTS_U2 | 5420K| 87M| 21615 |
| 10 | TABLE ACCESS BY INDEX ROWID| PA_TASKS | 1 | 53 | 2 |
| 11 | INDEX UNIQUE SCAN | PA_TASKS_U1 | 1 | | 1 |
Statistics
1 recursive calls
0 db block gets
4668610 consistent gets
460575 physical reads
10220 redo size
77725800 bytes sent via SQL*Net to client
884947 bytes received via SQL*Net from client
126389 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1263880 rows processed
04:02:44 sql>>
It ran for about 2.5 hours.
B)
Then I tried to force the hash join, since we have a huge SGA and PGA.
sql>> set time on
02:31:59 sql>> set autotrace trace
02:32:28 sql>>
02:32:28 sql>> SELECT /*+ use_hash(p t) */
02:32:41 2 'PRJ_'||UPPER(P.SEGMENT1),
02:32:41 3 'PRJ_'||UPPER(P.SEGMENT1)||'_TSK_'||UPPER(T.TASK_NUMBER),
02:32:41 4 UPPER('ACTIVITY '||P.SEGMENT1||', '||T.TASK_NUMBER||' - '||T.DESCRIPTION),
02:32:41 5 UPPER(P.SEGMENT1||', '||T.TASK_NUMBER||' - '||T.DESCRIPTION),
02:32:42 6 UPPER(P.SEGMENT1||', '||T.TASK_NUMBER)
02:32:42 7 FROM PA_PROJECTS_ALL P
02:32:42 8 , PA_TASKS T
02:32:42 9 , PA_RESOURCE_ASSIGNMENTS A
02:32:42 10 , PA_BUDGET_VERSIONS B
02:32:42 11 WHERE P.PROJECT_ID = T.PROJECT_ID
02:32:42 12 AND T.TASK_ID <> T.PARENT_TASK_ID
02:32:42 13 AND T.PARENT_TASK_ID IS NOT NULL
02:32:42 14 AND P.PROJECT_ID = B.PROJECT_ID
02:32:42 15 AND P.PROJECT_ID = A.PROJECT_ID
02:32:42 16 AND T.TASK_ID = A.TASK_ID
02:32:42 17 AND B.BUDGET_VERSION_ID = A.BUDGET_VERSION_ID
02:32:42 18 AND B.BUDGET_STATUS_CODE = 'B'
02:32:42 19 AND B.BUDGET_TYPE_CODE = 'Current'
02:32:42 20 AND B.CURRENT_FLAG = 'Y'
02:32:42 21 /
1263880 rows selected.
set markup html preformat on
Rem
Rem Use the display table function from the dbms_xplan package to display the last
Rem explain plan. Force serial option for backward compatibility
Rem
set linesize 152
set pagesize 0
select plan_table_output from table(dbms_xplan.display('plan_table',null,'serial'));
Execution Plan
| Id | Operation | Name | Rows | Bytes | Cost |
| 0 | SELECT STATEMENT | | 1 | 106 | 42350 |
| 1 | HASH JOIN | | 1 | 106 | 42350 |
| 2 | HASH JOIN | | 8 | 424 | 25280 |
| 3 | HASH JOIN | | 9968 | 350K| 3579 |
| 4 | TABLE ACCESS FULL | PA_BUDGET_VERSIONS | 9968 | 223K| 3109 |
| 5 | VIEW | index$_join$_001 | 54470 | 691K| 469 |
| 6 | HASH JOIN | | | | |
| 7 | INDEX FAST FULL SCAN| PA_PROJECTS_U1 | 54470 | 691K| 145 |
| 8 | INDEX FAST FULL SCAN| PA_PROJECTS_U2 | 54470 | 691K| 321 |
| 9 | INDEX FAST FULL SCAN | PA_RESOURCE_ASSIGNMENTS_U2 | 5420K| 87M| 21615 |
| 10 | TABLE ACCESS FULL | PA_TASKS | 1837K| 92M| 17041 |
Statistics
1 recursive calls
0 db block gets
535322 consistent gets
355917 physical reads
772 redo size
79117543 bytes sent via SQL*Net to client
884948 bytes received via SQL*Net from client
126389 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1263880 rows processed
04:48:07 sql>>
It still ran for about 2 hours.
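One thing both runs have in common: 126,389 SQL*Net roundtrips to ship 1,263,880 rows, so part of the elapsed time is pure client fetch traffic rather than database work. A quick back-of-the-envelope in Python (illustrative only; the numbers are taken from the posted autotrace statistics, and the arraysize inference is my assumption, not something shown in the session):

```python
# Illustrative arithmetic on the autotrace figures above (run B); the
# inputs come from the posted statistics, the fetch-size inference is a guess.
rows = 1_263_880        # "1263880 rows processed"
roundtrips = 126_389    # "SQL*Net roundtrips to/from client"

rows_per_trip = rows / roundtrips
print(f"rows fetched per roundtrip: {rows_per_trip:.1f}")   # ~10

# SQL*Plus fetches 'arraysize' rows per roundtrip (default 15); ~10 rows
# per trip suggests a small fetch size is in effect. Raising it, e.g.
# SET ARRAYSIZE 500, would shrink the roundtrip count (and the network
# latency paid per trip) roughly 50-fold.
est_trips_at_500 = -(-rows // 500)   # ceiling division
print(f"estimated roundtrips at arraysize 500: {est_trips_at_500}")
```

If the client really is fetching ~10 rows at a time, a larger arraysize is a cheap experiment before touching the plan itself.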
Based on the information presented, I would like your advice on how to improve this query.
TIA -
MEASURING FORCE ON BALANCE USING AN ELECTRONIC BALANCE FROM THE SERIAL PORT
Hi,
I am trying to measure FORCE using an electronic BALANCE from the serial port. My measurements are strange! The BALANCE sometimes gives a ZERO reading! Is it because of the sample rate, baud rate, etc.? I have attached the readings concerned.
Attachments:
databalance.doc 144 KB
I think your problem is due to the way you read the weight:
1/ ask the balance to send the data
2/ monitor the byte count at the serial port until it is constant
3/ read the bytes received
4/ convert to number
During step 2, you compare the byte count at the serial port with the previous value. So far, you have been very lucky to be able to read anything at all: that comparison always gives ZERO, since the first reading occurs BEFORE the balance has had a chance to send anything, which means your loop stops immediately (if you are not convinced, just add an indicator to display the loop index). However, since you added a wait (0.8 s) before step 3, the balance has had some time to send something by then. You should not have read the byte count at the serial port again at that point, but doing so unwittingly corrects the previous error, so you end up reading most of the received data...
You should modify your algorithm completely. Usually, a balance sends the weight as a string with a terminator (CR or LF). Accordingly, the algorithm should be:
1/ ask the balance to send the data
2/ read the serial port, concatenating the received chars until a terminator char is received or a timeout has occurred
3/ convert to number
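A minimal sketch of that read-until-terminator loop in Python (illustrative only; the original is presumably LabVIEW, and `FakePort` is a hypothetical stand-in for the real serial connection):

```python
# Sketch of the corrected algorithm: keep reading single bytes and
# concatenating them until a terminator (CR or LF) arrives or a timeout
# elapses, THEN convert the assembled string to a number.
import time

def read_weight(port, timeout_s=2.0):
    """Read chars from a serial-like object until a terminator or timeout."""
    buf = bytearray()
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        ch = port.read(1)          # returns b"" while nothing is available yet
        if not ch:
            continue
        if ch in (b"\r", b"\n"):   # terminator reached: frame is complete
            break
        buf += ch
    # A balance typically sends something like "   123.45 g"; keep only the
    # sign, digits and decimal point before converting.
    text = buf.decode("ascii", errors="ignore")
    digits = "".join(c for c in text if c.isdigit() or c in "+-.")
    return float(digits) if digits else None

# Hypothetical stand-in for the serial port: replays a canned response.
class FakePort:
    def __init__(self, data):
        self.data = bytearray(data)
    def read(self, n=1):
        if not self.data:
            return b""
        out = bytes(self.data[:n])
        del self.data[:n]
        return out

print(read_weight(FakePort(b"   123.45 g\r\n")))   # 123.45
```

The key difference from the original approach is that nothing is converted until a complete, terminator-delimited frame has been assembled, so a slow balance yields a late reading rather than a ZERO one.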
CC
Chilly Charly (aka CC)
E-List Master - Kudos glutton - Press the yellow button on the left...