How to tune COLLECTION ITERATOR CONSTRUCTOR FETCH
Hi Gurus
I ran EXPLAIN PLAN for a SQL statement.
The plan shows a "COLLECTION ITERATOR CONSTRUCTOR FETCH" step that accounts for most of the time and cost. Please share your tips or tricks to reduce the time or cost, even a little.
Thanks
Agreed, the CARDINALITY hint is not safe to use as it's undocumented.
But Tom says :
one of the few 'safe' undocumented things to use.
Because its use will not lead to data corruption, wrong answers, or unpredictable outcomes. If it works, it will influence the query plan; if it doesn't, it won't. That is all; it is rather 'safe' in that respect.
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:2233040800346569775
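As a hedged sketch only (the collection function name get_ids is invented for illustration): the hint tells the optimizer roughly how many rows the collection will return, since it otherwise falls back on a block-size-based default guess.

```sql
-- Hypothetical function name (get_ids); the value 10 is an estimate
-- you supply, not something the optimizer verifies.
SELECT /*+ CARDINALITY(t 10) */ t.column_value
FROM   TABLE(get_ids()) t;
```

If the estimate is close to reality, the join order and method chosen around the collection step usually improve; if the hint is ignored, nothing changes.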
Cheers,
Manik.
Similar Messages
-
COLLECTION ITERATOR PICKLER FETCH along with XMLSEQUENCEFROMXMLTYPE
Hi All,
We have Oracle database 10.2.0.4 on solaris 10.
I found some XML queries that are consuming CPU and memory heavily; below is the execution plan for one of these XML SQL statements.
PLAN_TABLE_OUTPUT
SQL_ID gzsfqp1mkfk8t, child number 0
SELECT B.PACKET_ID FROM CM_PACKET_ALT_KEY B, CM_ALT_KEY_TYPE C, TABLE (XMLSEQUENCE (EXTRACT (:B1 ,
'/AlternateKeys/AlternateKey'))) T WHERE B.ALT_KEY_TYPE_ID = C.ALT_KEY_TYPE_ID AND C.ALT_KEY_TYPE_NAME = EXTRACTVALUE
(VALUE (T), '/AlternateKey/@keyType') AND B.ALT_KEY_VALUE = EXTRACTVALUE (VALUE (T), '/AlternateKey') AND NVL
(B.CHILD_BROKER_CODE, '6209870F57C254D6E04400306E4A78B0') = NVL (EXTRACTVALUE (VALUE (T), '/AlternateKey/@broker'),
'6209870F57C254D6E04400306E4A78B0')
Plan hash value: 855909818
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
| 0 | SELECT STATEMENT | | | | 16864 (100)| | | |
|* 1 | HASH JOIN | | 45 | 3240 | 16864 (2)| 00:03:23 | | |
| 2 | TABLE ACCESS FULL | CM_ALT_KEY_TYPE | 5 | 130 | 6 (0)| 00:00:01 | | |
|* 3 | HASH JOIN | | 227 | 10442 | 16858 (2)| 00:03:23 | | |
| 4 | COLLECTION ITERATOR PICKLER FETCH| XMLSEQUENCEFROMXMLTYPE | | | | | | |
| 5 | PARTITION HASH ALL | | 10M| 447M| 16758 (2)| 00:03:22 | 1 | 16 |
| 6 | TABLE ACCESS FULL | CM_PACKET_ALT_KEY | 10M| 447M| 16758 (2)| 00:03:22 | 1 | 16 |
PLAN_TABLE_OUTPUT
Predicate Information (identified by operation id):
1 - access("B"."ALT_KEY_TYPE_ID"="C"."ALT_KEY_TYPE_ID" AND
"C"."ALT_KEY_TYPE_NAME"=SYS_OP_C2C(EXTRACTVALUE(VALUE(KOKBF$),'/AlternateKey/@keyType')))
3 - access("B"."ALT_KEY_VALUE"=EXTRACTVALUE(VALUE(KOKBF$),'/AlternateKey') AND
NVL("B"."CHILD_BROKER_CODE",'6209870F57C254D6E04400306E4A78B0')=NVL(EXTRACTVALUE(VALUE(KOKBF$),'/AlternateKey/@broker'
),'6209870F57C254D6E04400306E4A78B0'))
Seems due to:
1. COLLECTION ITERATOR PICKLER FETCH along with XMLSEQUENCEFROMXMLTYPE, which I think is due to the usage of TABLE(XMLSEQUENCE()).
2. Conversion taking place via the SYS_OP_C2C function, as shown in the Predicate Information.
3. The table is not using the XMLTYPE datatype to store XML.
4. Wildcards have been used (/AlternateKey/@keyType).
Could anyone please help me tune this query, as I know very little about XML DB?
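A hedged sketch, untested against this schema: on 10gR2 the TABLE(XMLSEQUENCE(EXTRACT(...))) pattern can often be rewritten with XMLTABLE, which projects the XML values as typed relational columns once instead of repeating EXTRACTVALUE calls (column names and sizes below are assumptions):

```sql
-- Sketch only: VARCHAR2 lengths are guesses; adjust to the real data.
SELECT b.packet_id
FROM   cm_packet_alt_key b,
       cm_alt_key_type c,
       XMLTABLE('/AlternateKeys/AlternateKey'
                PASSING :b1
                COLUMNS key_type VARCHAR2(100) PATH '@keyType',
                        key_val  VARCHAR2(400) PATH 'text()',
                        broker   VARCHAR2(100) PATH '@broker') t
WHERE  b.alt_key_type_id = c.alt_key_type_id
AND    c.alt_key_type_name = t.key_type
AND    b.alt_key_value     = t.key_val
AND    NVL(b.child_broker_code, '6209870F57C254D6E04400306E4A78B0') =
       NVL(t.broker,            '6209870F57C254D6E04400306E4A78B0');
```

Declaring key_type as VARCHAR2 may also remove the implicit SYS_OP_C2C character-set conversion visible in the predicate section.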
Including one more SQL which also consumes huge CPU and memory; these tables also do not have any column of XMLTYPE datatype.
SELECT /*+ INDEX(e) */ XMLAGG(XMLELEMENT ( "TaggingCategory", XMLATTRIBUTES (G.TAG_CATEGORY_CODE AS
"categoryType"), XMLELEMENT ("TaggingValue", XMLATTRIBUTES (C.IS_PRIMARY AS "primary", H.ORIGIN_CODE AS
"origin"), XMLAGG (XMLCONCAT (XMLELEMENT ("Value", XMLATTRIBUTES (F.TAG_LIST_CODE AS "listType"),
E.TAG_VALUE), CASE WHEN LEVEL = 1 THEN :B4 ELSE NULL END))) )) FROM TABLE (CAST (:B1 AS
T_TAG_MAP_HIERARCHY_TAB)) A, TABLE (CAST (:B2 AS T_ENUM_TAG_TAB)) C, REM_TAG_VALUE E, REM_TAG_LIST F,
REM_TAG_CATEGORY G, CM_ORIGIN H WHERE E.TAG_VALUE_ID = C.TAG_VALUE_ID AND F.TAG_LIST_ID = E.TAG_LIST_ID
AND G.TAGGING_CATEGORY_ID = F.TAGGING_CATEGORY_ID AND H.ORIGIN_ID = C.ORIGIN_ID AND C.ENUM_TAG_ID =
A.MAPPED_ENUM_TAG_ID GROUP BY G.TAG_CATEGORY_CODE, C.IS_PRIMARY, H.ORIGIN_CODE START WITH
A.MAPPED_ENUM_TAG_ID = HEXTORAW (:B3 ) CONNECT BY PRIOR A.MAPPED_ENUM_TAG_ID = A.ENUM_TAG_ID
Plan hash value: 2393257319
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | | | 16455 (100)| |
| 1 | SORT AGGREGATE | | 1 | 185 | 16455 (1)| 00:03:18 |
| 2 | SORT GROUP BY | | 1 | 185 | 16455 (1)| 00:03:18 |
|* 3 | CONNECT BY WITH FILTERING | | | | | |
|* 4 | FILTER | | | | | |
| 5 | COUNT | | | | | |
|* 6 | HASH JOIN | | 667K| 117M| 16413 (1)| 00:03:17 |
| 7 | COLLECTION ITERATOR PICKLER FETCH | | | | | |
|* 8 | HASH JOIN | | 8168 | 1459K| 16384 (1)| 00:03:17 |
| 9 | TABLE ACCESS FULL | REM_TAG_CATEGORY | 25 | 950 | 5 (0)| 00:00:01 |
|* 10 | HASH JOIN | | 8168 | 1156K| 16378 (1)| 00:03:17 |
| 11 | TABLE ACCESS FULL | REM_TAG_LIST | 117 | 7137 | 5 (0)| 00:00:01 |
| 12 | NESTED LOOPS | | 8168 | 670K| 16373 (1)| 00:03:17 |
| 13 | MERGE JOIN | | 8168 | 215K| 27 (4)| 00:00:01 |
| 14 | TABLE ACCESS BY INDEX ROWID | CM_ORIGIN | 2 | 50 | 2 (0)| 00:00:01 |
| 15 | INDEX FULL SCAN | PK_CM_ORIGIN | 2 | | 1 (0)| 00:00:01 |
|* 16 | SORT JOIN | | 8168 | 16336 | 25 (4)| 00:00:01 |
| 17 | COLLECTION ITERATOR PICKLER FETCH| | | | | |
| 18 | TABLE ACCESS BY INDEX ROWID | REM_TAG_VALUE | 1 | 57 | 2 (0)| 00:00:01 |
|* 19 | INDEX UNIQUE SCAN | PK_REM_TAG_VALUE | 1 | | 1 (0)| 00:00:01 |
|* 20 | HASH JOIN | | | | | |
| 21 | CONNECT BY PUMP | | | | | |
| 22 | COUNT | | | | | |
|* 23 | HASH JOIN | | 667K| 117M| 16413 (1)| 00:03:17 |
| 24 | COLLECTION ITERATOR PICKLER FETCH | | | | | |
|* 25 | HASH JOIN | | 8168 | 1459K| 16384 (1)| 00:03:17 |
| 26 | TABLE ACCESS FULL | REM_TAG_CATEGORY | 25 | 950 | 5 (0)| 00:00:01 |
|* 27 | HASH JOIN | | 8168 | 1156K| 16378 (1)| 00:03:17 |
| 28 | TABLE ACCESS FULL | REM_TAG_LIST | 117 | 7137 | 5 (0)| 00:00:01 |
| 29 | NESTED LOOPS | | 8168 | 670K| 16373 (1)| 00:03:17 |
| 30 | MERGE JOIN | | 8168 | 215K| 27 (4)| 00:00:01 |
| 31 | TABLE ACCESS BY INDEX ROWID | CM_ORIGIN | 2 | 50 | 2 (0)| 00:00:01 |
| 32 | INDEX FULL SCAN | PK_CM_ORIGIN | 2 | | 1 (0)| 00:00:01 |
|* 33 | SORT JOIN | | 8168 | 16336 | 25 (4)| 00:00:01 |
| 34 | COLLECTION ITERATOR PICKLER FETCH| | | | | |
| 35 | TABLE ACCESS BY INDEX ROWID | REM_TAG_VALUE | 1 | 57 | 2 (0)| 00:00:01 |
|* 36 | INDEX UNIQUE SCAN | PK_REM_TAG_VALUE | 1 | | 1 (0)| 00:00:01 |
Predicate Information (identified by operation id):
3 - access(SYS_OP_ATG(VALUE(KOKBF$),1,2,2)=PRIOR NULL)
4 - filter(SYS_OP_ATG(VALUE(KOKBF$),2,3,2)=HEXTORAW(:B3))
6 - access(SYS_OP_ATG(VALUE(KOKBF$),1,2,2)=SYS_OP_ATG(VALUE(KOKBF$),2,3,2))
8 - access("G"."TAGGING_CATEGORY_ID"="F"."TAGGING_CATEGORY_ID")
10 - access("F"."TAG_LIST_ID"="E"."TAG_LIST_ID")
16 - access("H"."ORIGIN_ID"=SYS_OP_ATG(VALUE(KOKBF$),3,4,2))
filter("H"."ORIGIN_ID"=SYS_OP_ATG(VALUE(KOKBF$),3,4,2))
19 - access("E"."TAG_VALUE_ID"=SYS_OP_ATG(VALUE(KOKBF$),7,8,2))
20 - access(SYS_OP_ATG(VALUE(KOKBF$),1,2,2)=PRIOR NULL)
23 - access(SYS_OP_ATG(VALUE(KOKBF$),1,2,2)=SYS_OP_ATG(VALUE(KOKBF$),2,3,2))
25 - access("G"."TAGGING_CATEGORY_ID"="F"."TAGGING_CATEGORY_ID")
27 - access("F"."TAG_LIST_ID"="E"."TAG_LIST_ID")
33 - access("H"."ORIGIN_ID"=SYS_OP_ATG(VALUE(KOKBF$),3,4,2))
filter("H"."ORIGIN_ID"=SYS_OP_ATG(VALUE(KOKBF$),3,4,2))
36 - access("E"."TAG_VALUE_ID"=SYS_OP_ATG(VALUE(KOKBF$),7,8,2))
-Yasser
Edited by: YasserRACDBA on Feb 24, 2010 8:30 PM
Added one more SQL.
Looking at the second query, it too has a lot of bind variables. Can you find out the types and values of each bind? Also, I'm suspicious about the use of XMLCONCAT. Can you find out why the developer is using it?
SELECT /*+ INDEX(e) */ XMLAGG
XMLELEMENT
"TaggingCategory",
XMLATTRIBUTES (G.TAG_CATEGORY_CODE AS "categoryType"),
XMLELEMENT
"TaggingValue",
XMLATTRIBUTES (C.IS_PRIMARY AS "primary", H.ORIGIN_CODE AS "origin"),
XMLAGG
XMLCONCAT
XMLELEMENT
"Value",
XMLATTRIBUTES (F.TAG_LIST_CODE AS "listType"),
E.TAG_VALUE
CASE WHEN LEVEL = 1
THEN :B4
ELSE NULL
END
FROM TABLE (CAST (:B1 AS T_TAG_MAP_HIERARCHY_TAB)) A,
TABLE (CAST (:B2 AS T_ENUM_TAG_TAB)) C,
REM_TAG_VALUE E,
REM_TAG_LIST F,
REM_TAG_CATEGORY G,
CM_ORIGIN H
WHERE E.TAG_VALUE_ID = C.TAG_VALUE_ID
AND F.TAG_LIST_ID = E.TAG_LIST_ID
AND G.TAGGING_CATEGORY_ID = F.TAGGING_CATEGORY_ID
AND H.ORIGIN_ID = C.ORIGIN_ID
AND C.ENUM_TAG_ID = A.MAPPED_ENUM_TAG_ID
GROUP BY G.TAG_CATEGORY_CODE, C.IS_PRIMARY, H.ORIGIN_CODE
START WITH A.MAPPED_ENUM_TAG_ID = HEXTORAW (:B3 )
CONNECT BY PRIOR A.MAPPED_ENUM_TAG_ID = A.ENUM_TAG_ID
Edited by: mdrake on Feb 24, 2010 8:11 AM
-
How to tune a query which contains "Bulk Collect Into" clause
I want to tune the below query:
SELECT customer_master_num,
product_nam
BULK COLLECT INTO t_cont09_rec
FROM TB_CMA009_SUPRA_RE_AGNT_CONT
WHERE re_agent_customer_master_num = p_63cust_master_num
AND customer_master_num = p_63board_master_num
AND cancellation_dt IS NULL
AND NVL (is_training_key_flg, 'N') = 'N';
This contains "Bulk Collect Into" clause.
TYPE cont09cur IS RECORD (
customer_master_num TB_CMA009_SUPRA_RE_AGNT_CONT.customer_master_num%TYPE,
product_nam TB_CMA009_SUPRA_RE_AGNT_CONT.product_nam%TYPE);
t_cont09_rec cont09_rec;
"t_cont09_rec" This is of Record Type
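A hedged aside, not from the original thread: BULK COLLECT needs a collection target rather than a single record, so a declaration matching the record above would normally look something like this sketch (names follow the posting; the WHERE clause is abbreviated):

```sql
-- Sketch only: BULK COLLECT fills a collection, so the target
-- variable must be a TABLE OF the record type, not the record itself.
DECLARE
  TYPE cont09_rec IS RECORD (
    customer_master_num TB_CMA009_SUPRA_RE_AGNT_CONT.customer_master_num%TYPE,
    product_nam         TB_CMA009_SUPRA_RE_AGNT_CONT.product_nam%TYPE);
  TYPE cont09_tab IS TABLE OF cont09_rec;
  t_cont09_rec cont09_tab;
BEGIN
  SELECT customer_master_num, product_nam
  BULK COLLECT INTO t_cont09_rec
  FROM   TB_CMA009_SUPRA_RE_AGNT_CONT
  WHERE  cancellation_dt IS NULL;
END;
```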
Please help me out with how to tune this one.
[url http://forums.oracle.com/forums/thread.jspa?threadID=501834&tstart=0]When your query takes too long ...
Also, don't get too distracted by the PL/SQL BULK COLLECT INTO construction. If it takes time, then there is a more than 99% chance that the time is spent in the SQL.
Regards,
Rob. -
How do I download my iTunes collection from my computer onto my iPhone? I only seem to be able to download the ones I have purchased from iTunes, and not the ones I have uploaded from my own collection. I am new to all this and need some advice, please.
You will not be able to download those through iTunes on the phone, if that is what you are trying to do. You will need to sync with your computer and iTunes to get them onto it.
http://support.apple.com/kb/HT1386 -
Report cannot be rendered (com/sun/java/util/collections/Iterator)
Hello all
I'm new both to Weblogic Server and BI Publisher.
A few days ago I thought I had managed to install BI Publisher on top of WebLogic. It turns out that isn't true, because I am not able to view any report, be it a sample or a newly created one.
Platform: Windows 2003 32-bit
Weblogic version: 10.3.3.0
BI Publisher version: 10.1.3.4.1 (doesn't work both w/ and w/o the latest patchset 9791839)
And now to the problem. Whenever I try to view a report, I get an error message stating "The report cannot be rendered because of an error, please contact the administrator". Being both the user and the administrator, I am forced to press the "Error Detail" link, upon which the only thing that pops below is "com/sun/java/util/collections/Iterator" (in red).
The same non-verbose error message also appears when running in debug mode. The WebLogic logs are free of warnings, errors, etc.
As for the Weblogic Server, it claims that the xmlpserver application has been deployed and started successfully.
It seems to me that the BI Publisher application is trying to use a Java class that doesn't exist (com.sun.java.util.collections.Iterator). Of course I have no clue how to prove that, because I do not have the source code for this app.
Oracle support is hardly able to understand the problem, so I thought maybe one of you could give me some answer.
Any Ideas?
Jonathan
By the way, I deployed the app under Oracle Reports' cluster. Don't know whether it matters.
-
How to tune the following procedure?
create or replace procedure sample(verror_msg in out varchar2,
vbrn_num in tb_branches.brn_num%type) is
ltext1 varchar2(500);
ltext2 varchar2(500);
ltable_name varchar2(50);
lcolumn_name varchar2(50);
ldata_type varchar2(50);
lold_rcn_num number;
lnew_rcn_num number;
lvalue varchar2(50);
lunit_type char(1);
lsql_stmt1 varchar2(500);
lstring varchar2(500);
lcol varchar2(10);
lstart_time VARCHAR2(100);
lend_time VARCHAR2(100);
lcommit VARCHAR2(10) := 'COMMIT;';
lfile_handle1 utl_file.file_type;
lfile_handle2 utl_file.file_type;
lfile_handle3 utl_file.file_type;
lfile_handle4 utl_file.file_type;
lfile_name1 VARCHAR2(50) := 'RCN_UPDATE_STMTS_' || vbrn_num || '.SQL';
lfile_name2 VARCHAR2(50) := 'RCNSUCCESS_' || vbrn_num || '.TXT';
lfile_name3 VARCHAR2(50) := 'RCNFAIL_' || vbrn_num || '.TXT';
lfile_name4 VARCHAR2(50) := 'RCNERROR_' || vbrn_num || '.TXT';
ldirectory_name VARCHAR2(100);
ldirectory_path VARCHAR2(100);
lspool_on VARCHAR2(100);
lspool_off VARCHAR2(100);
TYPE ref_cur IS REF CURSOR;
cur_tab_cols ref_cur;
cursor c1 is
SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE
FROM USER_TAB_COLS
WHERE TABLE_NAME NOT LIKE 'TB_CONV%'
and TABLE_NAME LIKE 'TB_%'
AND COLUMN_NAME LIKE '%RCN%'
UNION
SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE
FROM USER_TAB_COLS
WHERE TABLE_NAME in ('TB_UNITCODES', 'TB_HIST_UNITCODES')
AND COLUMN_NAME = 'UNIT_CODE'
order by table_name;
BEGIN
verror_msg := nvl(verror_msg, 0);
begin
SELECT DISTINCT directory_path, directory_name
INTO ldirectory_path, ldirectory_name
FROM tb_conv_path
WHERE brn_num = vbrn_num;
EXCEPTION
WHEN NO_DATA_FOUND THEN
SP_CO_RAISEERROR('00SY00402', 'F', 'T');
WHEN others THEN
SP_CO_RAISEERROR('00SY00401X', 'F', 'T');
END;
lfile_handle1 := utl_file.fopen(ldirectory_name, lfile_name1, 'W', 32767);
lfile_handle2 := utl_file.fopen(ldirectory_name, lfile_name2, 'W', 32767);
lfile_handle3 := utl_file.fopen(ldirectory_name, lfile_name3, 'W', 32767);
lfile_handle4 := utl_file.fopen(ldirectory_name, lfile_name4, 'W', 32767);
SELECT 'SPOOL ' || ldirectory_path || '/LOG_' || lfile_name1
INTO lspool_on
FROM dual;
utl_file.put_line(lfile_handle1, lspool_on);
utl_file.new_line(lfile_handle1, 1);
select 'EXEC SP_CONV_START_TIMELOG(' || '''' || 'LOG_' || lfile_name1 || '''' || ',' ||
vbrn_num || ');'
into lstart_time
from dual;
UTL_FILE.PUT_LINE(lfile_handle1, lstart_time);
UTL_FILE.NEW_LINE(lfile_handle1, 1);
open C1;
loop
Fetch C1
into ltable_name, lcolumn_name, ldata_type;
Exit When C1%notFound;
lsql_stmt1 := 'select column_name from user_tab_columns where table_name =' || '''' ||
ltable_name || '''' ||
' AND column_name in (''BRN_NUM'',''BRANCH'',''BRANCH_NUMBER'')';
begin
execute immediate lsql_stmt1
into lcol;
exception
when no_data_found then
lcol := null;
end;
if lcol is not null then
if ltable_name in ('TB_UNITCODES', 'TB_HIST_UNITCODES') then
ltext2 := 'select distinct ' || lcolumn_name || ' from ' ||
ltable_name ||
' a, (select distinct new_rcn_num col from tb_conv_rcn_mapping where brn_num = ' ||
vbrn_num || ') b where a.' || lcolumn_name ||
' = b.col(+) and b.col is null and a.' || lcolumn_name ||
' is not null and ' || lcol || ' = ' || vbrn_num ||
' and a.unit_type=''9''';
else
ltext2 := 'select distinct ' || lcolumn_name || ' from ' ||
ltable_name ||
' a, (select distinct new_rcn_num col from tb_conv_rcn_mapping where brn_num = ' ||
vbrn_num || ') b where a.' || lcolumn_name ||
' = b.col(+) and b.col is null and a.' || lcolumn_name ||
' is not null and ' || lcol || ' = ' || vbrn_num;
end if;
OPEN cur_tab_cols FOR ltext2;
loop
fetch cur_tab_cols
into lvalue;
exit when cur_tab_cols%notfound;
begin
-- IF VBRN_NUM IN (21, 6, 7, 8) THEN -- Commented during NAP HK SIT cycle1
SELECT DISTINCT NEW_RCN_NUM, OLD_RCN_NUM
INTO LNEW_RCN_NUM, LOLD_RCN_NUM
FROM TB_CONV_RCN_MAPPING
WHERE OLD_RCN_NUM = LVALUE
AND BRN_NUM = VBRN_NUM;
/* ELSE
SELECT DISTINCT NEW_RCN_NUM, OLD_RCN_NUM
INTO LNEW_RCN_NUM, LOLD_RCN_NUM
FROM TB_CONV_RCN_MAPPING
WHERE OLD_RCN_NUM = LVALUE
AND NEW_RCN_NUM NOT LIKE '40%'
AND NEW_RCN_NUM NOT LIKE '41%'
AND NEW_RCN_NUM NOT LIKE '42%'
AND NEW_RCN_NUM NOT LIKE '65%'
AND BRN_NUM = VBRN_NUM;
END IF; */ -- Commented during NAP HK SIT cycle1
if ldata_type = 'NUMBER' then
if ltable_name in ('TB_UNITCODES', 'TB_HIST_UNITCODES') and
lcolumn_name = 'UNIT_CODE' then
begin
select distinct unit_type
into lunit_type
from TB_UNITCODES
where lcol = vbrn_num
and unit_code = lvalue
and unit_type = '9';
exception
when no_data_found then
lunit_type := null;
end;
if lunit_type is not null then
ltext1 := 'update ' || ltable_name || ' set ' ||
lcolumn_name || ' = ' || lnew_rcn_num ||
' where ' || lcolumn_name || ' = ' ||
lold_rcn_num || ' and ' || lcol || ' = ' ||
vbrn_num || ' and unit_type = ' || '''9''' || ';';
utl_file.put_line(lfile_handle1, ltext1);
utl_file.new_line(lfile_handle1, 0);
utl_file.put_line(lfile_handle1, lcommit);
utl_file.put_line(lfile_handle2,
ltable_name || ' - ' || lcolumn_name ||
' - ' || lold_rcn_num || ' - ' ||
lnew_rcn_num || ' - ' || vbrn_num);
utl_file.new_line(lfile_handle2, 0);
end if;
else
ltext1 := 'update ' || ltable_name || ' set ' || lcolumn_name ||
' = ' || lnew_rcn_num || ' where ' || lcolumn_name ||
' = ' || lold_rcn_num || ' and ' || lcol || ' = ' ||
vbrn_num || ';';
utl_file.put_line(lfile_handle1, ltext1);
utl_file.new_line(lfile_handle1, 0);
utl_file.put_line(lfile_handle1, lcommit);
utl_file.new_line(lfile_handle1, 0);
utl_file.put_line(lfile_handle2,
ltable_name || ' - ' || lcolumn_name ||
' - ' || lold_rcn_num || ' - ' ||
lnew_rcn_num || ' - ' || vbrn_num);
utl_file.new_line(lfile_handle2, 0);
end if;
else
if ltable_name in ('TB_UNITCODES', 'TB_HIST_UNITCODES') and
lcolumn_name = 'UNIT_CODE' then
begin
lstring := 'select distinct unit_type from ' || ltable_name ||
' where ' || lcol || ' = ' || vbrn_num ||
' and ' || lcolumn_name || ' = ' || '''' ||
lvalue || '''' || ' and unit_type = ' || '''9''';
execute immediate lstring
into lunit_type;
exception
when no_data_found then
lunit_type := null;
end;
if lunit_type is not null then
ltext1 := 'update ' || ltable_name || ' set ' ||
lcolumn_name || ' = ' || '''' || lnew_rcn_num || '''' ||
' where ' || lcolumn_name || ' = ' || '''' ||
lold_rcn_num || '''' || ' and ' || lcol || ' = ' ||
vbrn_num || ' and unit_type = ' || '''9''' || ';';
utl_file.put_line(lfile_handle1, ltext1);
utl_file.new_line(lfile_handle1, 0);
utl_file.put_line(lfile_handle1, lcommit);
utl_file.new_line(lfile_handle1, 0);
utl_file.put_line(lfile_handle2,
ltable_name || ' - ' || lcolumn_name ||
' - ' || lold_rcn_num || ' - ' ||
lnew_rcn_num || ' - ' || vbrn_num);
utl_file.new_line(lfile_handle2, 0);
end if;
else
ltext1 := 'update ' || ltable_name || ' set ' || lcolumn_name ||
' = ' || '''' || lnew_rcn_num || '''' || ' where ' ||
lcolumn_name || ' = ' || '''' || lold_rcn_num || '''' ||
' and ' || lcol || ' = ' || vbrn_num || ';';
utl_file.put_line(lfile_handle1, ltext1);
utl_file.new_line(lfile_handle1, 0);
utl_file.put_line(lfile_handle1, lcommit);
utl_file.new_line(lfile_handle1, 0);
utl_file.put_line(lfile_handle2,
ltable_name || ' - ' || lcolumn_name ||
' - ' || lold_rcn_num || ' - ' ||
lnew_rcn_num || ' - ' || vbrn_num);
utl_file.new_line(lfile_handle2, 0);
end if;
end if;
exception
When NO_DATA_FOUND THEN
utl_file.put_line(lfile_handle3,
ltable_name || ' - ' || lcolumn_name || ' - ' ||
lvalue || ' - ' || 'NO MAPPING FOUND' ||
' - ' || vbrn_num);
utl_file.new_line(lfile_handle3, 0);
when others then
utl_file.put_line(lfile_handle4,
ltable_name || ' - ' || lcolumn_name || ' - ' ||
lvalue || ' - ' || SQLERRM || ' - ' ||
vbrn_num);
utl_file.new_line(lfile_handle4, 0);
end;
end loop;
ELSE
ltext2 := 'select distinct ' || lcolumn_name || ' from ' ||
ltable_name ||
' a, (select distinct new_rcn_num col from tb_conv_rcn_mapping where brn_num = ' ||
vbrn_num || ') b where a.' || lcolumn_name ||
' = b.col(+) and b.col is null and a.' || lcolumn_name ||
' is not null';
OPEN cur_tab_cols FOR ltext2;
loop
fetch cur_tab_cols
into lvalue;
exit when cur_tab_cols%notfound;
begin
-- IF VBRN_NUM IN (21, 6, 7, 8) THEN -- Commented during NAP HK SIT cycle1
SELECT DISTINCT NEW_RCN_NUM, OLD_RCN_NUM
INTO LNEW_RCN_NUM, LOLD_RCN_NUM
FROM TB_CONV_RCN_MAPPING
WHERE OLD_RCN_NUM = LVALUE
AND BRN_NUM = VBRN_NUM;
/* ELSE
SELECT DISTINCT NEW_RCN_NUM, OLD_RCN_NUM
INTO LNEW_RCN_NUM, LOLD_RCN_NUM
FROM TB_CONV_RCN_MAPPING
WHERE OLD_RCN_NUM = LVALUE
AND NEW_RCN_NUM NOT LIKE '40%'
AND NEW_RCN_NUM NOT LIKE '41%'
AND NEW_RCN_NUM NOT LIKE '42%'
AND NEW_RCN_NUM NOT LIKE '65%'
AND BRN_NUM = VBRN_NUM;
END IF; */ -- Commented during NAP HK SIT cycle1
if ldata_type = 'NUMBER' then
ltext1 := 'update ' || ltable_name || ' set ' || lcolumn_name ||
' = ' || lnew_rcn_num || ' where ' || lcolumn_name ||
' = ' || lold_rcn_num || ';';
utl_file.put_line(lfile_handle1, ltext1);
utl_file.new_line(lfile_handle1, 0);
utl_file.put_line(lfile_handle1, lcommit);
utl_file.new_line(lfile_handle1, 0);
utl_file.put_line(lfile_handle2,
ltable_name || ' - ' || lcolumn_name || ' - ' ||
lold_rcn_num || ' - ' || lnew_rcn_num ||
' - ' || vbrn_num);
utl_file.new_line(lfile_handle2, 0);
else
ltext1 := 'update ' || ltable_name || ' set ' || lcolumn_name ||
' = ' || '''' || lnew_rcn_num || '''' || ' where ' ||
lcolumn_name || ' = ' || '''' || lold_rcn_num || '''' || ';';
utl_file.put_line(lfile_handle1, ltext1);
utl_file.new_line(lfile_handle1, 0);
utl_file.put_line(lfile_handle1, lcommit);
utl_file.new_line(lfile_handle1, 0);
utl_file.put_line(lfile_handle2,
ltable_name || ' - ' || lcolumn_name || ' - ' ||
lold_rcn_num || ' - ' || lnew_rcn_num ||
' - ' || vbrn_num);
utl_file.new_line(lfile_handle2, 0);
end if;
exception
When NO_DATA_FOUND THEN
utl_file.put_line(lfile_handle3,
ltable_name || ' - ' || lcolumn_name || ' - ' ||
lvalue || ' - ' || 'NO MAPPING FOUND' ||
' - ' || vbrn_num);
utl_file.new_line(lfile_handle3, 0);
when others then
utl_file.put_line(lfile_handle4,
ltable_name || ' - ' || lcolumn_name || ' - ' ||
lvalue || ' - ' || SQLERRM || ' - ' ||
vbrn_num);
utl_file.new_line(lfile_handle4, 0);
end;
end loop;
end if;
end loop;
close c1;
utl_file.new_line(lfile_handle1, 1);
select 'EXEC SP_CONV_END_TIMELOG(' || '''' || 'LOG_' || lfile_name1 || '''' || ',' ||
vbrn_num || ');'
into lend_time
from dual;
UTL_FILE.PUT_LINE(lfile_handle1, lend_time);
UTL_FILE.NEW_LINE(lfile_handle1, 1);
SELECT 'SPOOL OFF;' INTO lspool_off FROM dual;
utl_file.put_line(lfile_handle1, lspool_off);
utl_file.new_line(lfile_handle1, 1);
utl_file.fclose(lfile_handle1);
utl_file.fclose(lfile_handle2);
utl_file.fclose(lfile_handle3);
utl_file.fclose(lfile_handle4);
exception
when others then
verror_msg := sqlcode || ' ~ ' || sqlerrm;
utl_file.put_line(lfile_handle4,
ltable_name || ' - ' || lcolumn_name || ' - ' ||
lvalue || ' - ' || SQLERRM || ' - ' || vbrn_num);
utl_file.new_line(lfile_handle4, 0);
utl_file.new_line(lfile_handle4, 0);
utl_file.fclose(lfile_handle1);
utl_file.fclose(lfile_handle2);
utl_file.fclose(lfile_handle3);
utl_file.fclose(lfile_handle4);
end sample;
duplicate:
how to tune the following procedure? -
Dear gurus
While executing VA05
Giving material number
and date range from 01.04.2010 to 01.04.2010
sales Organization 1000
distribution Channel 10
It takes more than an hour to run. Why is that so, and how can it be tuned?
please help
Regards
Saad Nisar
Hi Saad,
This is a standard program (SAPMV75A), right? The normal tools in SAP for performance analysis are:
Run time analysis transaction SE30
This transaction gives all the analysis of an ABAP program with respect to the database and the non-database processing.
SQL Trace transaction ST05
The trace list has many lines that are not related to the SELECT statement in the ABAP program. This is because the execution of any ABAP program requires additional administrative SQL calls. To restrict the list output, use the filter options of the trace list.
The trace list contains different SQL statements simultaneously related to the one SELECT statement in the ABAP program. This is because the R/3 Database Interface, a sophisticated component of the R/3 Application Server, maps every Open SQL statement to one or a series of physical database calls and brings it to execution. This mapping, crucial to R/3's performance, depends on the particular call and database system. For example, the SELECT-ENDSELECT loop on the SPFLI table in our test program is mapped to a sequence of PREPARE-OPEN-FETCH physical calls in an Oracle environment.
The WHERE clause in the trace list's SQL statement is different from the WHERE clause in the ABAP statement. This is because in an R/3 system, a client is a self-contained unit with separate master records and its own set of table data (in commercial, organizational, and technical terms). With ABAP, every Open SQL statement automatically executes within the correct client environment. For this reason, a condition with the actual client code is added to every WHERE clause if a client field is a component of the searched table.
To see a statement's execution plan, just position the cursor on the PREPARE statement and choose Explain SQL. A detailed explanation of the execution plan depends on the database system in use.
Kindly please take an ABAP'ers help.
Regards,
Ram Pedarla -
How to find the number of fetched lines from select statement
Hi Experts,
Can you tell me how to find the number of rows fetched by a SELECT statement?
And one more thing: can you tell me how to check whether a written SELECT statement is correct or not?
Thanks in advance
santoshHi,
Look at the system field sy-dbcnt. After an Open SQL statement it contains the number of records processed by that statement (sy-tabix, by contrast, is the current row index of an internal table operation).
For example:
DATA: itab LIKE mara OCCURS 0 WITH HEADER LINE.
SELECT * FROM mara INTO TABLE itab.
WRITE: sy-dbcnt.
This will give you the number of entries that were selected.
I am not sure what you mean by the second question. If you can let me know what you need then we might have a solution.
Hope this helps,
Sudhi
Message was edited by:
Sudhindra Chandrashekar -
How to inherit super class constructor in the sub class
I have a class A and a class B.
class B extends A {
// if I use super I can access the superclass's variables and methods
// but how do I inherit the superclass constructor?
}
You cannot inherit constructors. You need to define all the ones you need in the subclass. You can then call the corresponding superclass constructor, e.g.:
public B() {
    super();
}
public B(String name) {
    super(name);
}
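To make that concrete, here is a minimal self-contained sketch (class and field names are invented for illustration): the subclass declares its own constructors, and each one chains to the matching superclass constructor with super(...).

```java
// A superclass with two constructors; subclasses cannot inherit them,
// they can only chain to them via super(...).
class A {
    private final String name;

    A() {
        this.name = "default"; // no-arg constructor sets a default
    }

    A(String name) {
        this.name = name;      // overloaded constructor takes a value
    }

    String getName() {
        return name;
    }
}

// B must redeclare every constructor it wants to expose.
class B extends A {
    B() {
        super();       // explicit chain to A()
    }

    B(String name) {
        super(name);   // forwards the argument to A(String)
    }
}
```

Note that if a subclass constructor does not call super(...) explicitly, the compiler inserts a call to the superclass no-arg constructor; if the superclass has no no-arg constructor, the code will not compile without an explicit chain.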
Query takes a long time (41 seconds) to run; how to tune it?
My query is simple, as follows... I don't know how to tune it.
cur_memb_count (p_as_of_date IN date)
is
select
count(ip.individual_id) membercount,
--lpad(re.region_id,2,'0')||lpad('000',3,'0')||lpad( pb.plan_cd,3,'0') group_id,
substr(pb.plan_cd,1,1)||lpad(re.region_id, 2,'0')||'0000' group_id,
ipp.legal_entity_id,
bus.gl_bus_unit_a,
bus.lob,
loc.gl_loc_nbr_a,
prod.gl_product_cd_a,
prod.gl_fin_argmt_a,pb.plan_type_id ,pb.plan_cd
from
plan pb ,region_map re , state_plan_billing spb,
insured_plan_profile ipp , insured_plan ip ,
ods.oods_gl_bus_unit bus, ods.oods_gl_loc_nbr loc,
ods.oods_gl_product_line prod,
household h,
employer_household eh
where
ipp.residence_st_plan_billing_id = spb.state_plan_billing_id
and ipp.insured_plan_id = ip.insured_plan_id
and ip.plan_cd = pb.plan_cd
and pb.plan_cd=spb.plan_cd
-- and pb.plan_type_id = loc.lob
and spb.state_cd = re.state_cd
and p_as_of_date between ip.insured_plan_effective_date and
nvl(ip.insured_plan_termination_date,'31-dec-9999')
and ip.insured_plan_effective_date <>
nvl(ip.insured_plan_termination_date,'31-dec-9999')
-- the condition below is necessary, but there is not enough data to test;
-- when uncommented it will only give a few records.
-- Try testing it just by uncommenting it.
--and p_as_of_date between re.region_map_start_date and re.region_map_stop_date
and loc.lob=prod.lob and loc.lob=bus.lob(+) and loc.company_cd=bus.company_cd(+)
and p_as_of_date between pb.plan_start_date and pb.plan_stop_date
and p_as_of_date between ipp.ins_plan_profile_start_date and
ipp.ins_plan_profile_stop_date
-- and lpad(re.region_id,2,'0')||lpad('000',3,'0')||lpad(pb.plan_cd,3,'0')
= loc.group_id
and substr(pb.plan_cd,1,
1)||lpad(re.region_id,2,'0')||nvl(employee_id,'0000') =loc.group_id
and p_household_id_param = h.household_id
and h.household_id = eh.employer_household_id
and p_date_param between eh.emp_hhold_start_date and eh.emp_hhold_stop_date
and insplan.individual_id=housmemb.individual_id(+)
and eh.delete_ind = 'N'
group by
--lpad(re.region_id,2,'0')||lpad('000',3,'0')||lpad(pb.plan_cd,3,'0'),
substr(pb.plan_cd ,1,1)||lpad(re.region_id,2, '0')||nvl(employee_id,'0000'),
If there are many full table scans on big tables, consider creating indexes. Or if there are many index reads, consider forcing full table scans :)
Ah, I just love these tuning questions. "My query is slow. Please make it go fast." Sure, put on these red shoes, click your heels three times and make a wish. Alas, tuning is rather more complicated than that: more of a science than a voodoo ritual. We would like to help, but we need more data and some concrete figures. Otherwise we're just guessing.
So, first off, please read the Performance Tuning Guide. Apply some of its techniques. If you still don't understand why your query is running slow, come back to us with table descriptions, volumetrics, indexes, explain plans, stats, timings and tkprof output.
Good luck, APC -
How To import collections from Bridge cs6 To Bridge CC
How To import collection from Bridge cs6 To Bridge CC??
thx
How To import collection from Bridge cs6 To Bridge CC??
A collection is a bunch of aliases pointing to the original files in their original locations. If you have upgraded to CC on the same machine, you can use the same collections; on a different machine they will possibly not find the correct locations again.
On a Mac the path for the saved collections is: User / Library / Application Support / Adobe / Bridge CS6 / Collections. In here are the saved collections, showing the names you gave them. You can copy these collection files to the same location for Bridge CC.
If you have a different machine, it is best to first select the collection and add a keyword with the name of the collection to all the files in that particular collection, so that you can use the Find command to easily recreate the collection on your new machine.
Maybe someone with more knowledge of Windows can show you the correct path but it should be something similar as on a Mac. -
How to get collection of embeded fonts using plugins in acrobat reader?
I am trying to create a plugin for Acrobat Reader to display the embedded fonts of the opened document (and do some business logic afterwards), but I'm not sure which method returns the collection of embedded fonts.
Please give me an example code snippet of how to get the collection of embedded fonts of the opened document.
Thanks
PDDocEnumFonts() will enumerate the fonts in a document for you. You will have to look at each font to determine whether it's embedded or not.
Don't forget that to build a plugin for Reader, you need a license from Adobe. -
How to use collect statement for below
data : begin of itab,
n(3) type c,
n1 type n,
k(5) type c,
end of itab.
select n n1 from zteest into table itab.
*internal table has
n n1 k
gar 100 uji
hae 90 iou
gar 90 uji
hae 87 iou
I want
gar 190
hae 177
How can I use the COLLECT statement, given that n1 is type n?
let me know..
Thanks
Try this:
DATA : BEGIN OF itab OCCURS 0,
n(3) TYPE c,
n1(3) TYPE p DECIMALS 2,
k(5) TYPE c,
END OF itab.
itab-n = 'gar'.
itab-n1 = 100.
itab-k = 'uji'.
COLLECT itab .CLEAR itab.
itab-n = 'hae'.
itab-n1 = 90.
itab-k = 'iou'.
COLLECT itab .CLEAR itab.
itab-n = 'gar'.
itab-n1 = 90.
itab-k = 'uji'.
COLLECT itab .CLEAR itab.
itab-n = 'hae'.
itab-n1 = 87.
itab-k = 'iou'.
COLLECT itab .CLEAR itab. -
How do I collect all the SMS messages from my iPhone onto my Mac?
How do I collect all the SMS messages from my iPhone onto my Mac, besides these payment programs? And I have no previous backup that I can get them from.
This is amazing! It picks out everything I need from the iPhone. Just what I've been looking for for a long time, and I had not found anything I liked before. So I started to see if I could pick things out one at a time ... Definitely worth the money. Thank you Julian!
PS. I fully support the payment thing, but there's a lot of legal free programs as well, so I just wondered =)
How to Tune the Transactions/ Z - reports /Progr..of High response time
Dear friends,
in the <b>ST03</b> workload analysis menu there are some Z-reports, transactions, and programs that are continuously noticed to be taking the <b>max. response time</b> (and mostly >90% of that time is DB time).
how to tune the above situation ??
Thank you.
Siva,
You can start with some thing like:
ST04 -> Detail Analysis -> SQL Request (look at top disk reads and buffer get SQL statements)
For the top SQL statements identified you'd want to look at the explain plan to determine if the SQL statements is:
1) inefficient
2) are your DB stats up to date on the tables (note up to date stats does not always means they are the best)
3) if there are better indexes available, if not would a more suitable index help?
4) if there are many slow disk reads, is there an I/O issue?
etc...
While you're in ST04 make sure your buffers are sized adequately.
Also make sure your Oracle parameters are set according to this OSS note.
Note 830576 - Parameter recommendations for Oracle 10g