UPDATE proc taking HUGE TIME
Hi,
Our Oracle UPDATE procedure is taking over 10 hours to update 130,000 records:
/**********************CODE***************************/
PROCEDURE Update_SP IS
  -- v_c, wk_comm, wk_end, err_num and err_msg are presumably declared elsewhere
  CURSOR c1 IS
    SELECT tim.c_col, mp.t_n
    FROM   Materialized_VW tim, MP_Table mp
    WHERE  tim.R_id = mp.R_id
    AND    tim.P_id = mp.P_id
    AND    tim.t_id = mp.t_id
    AND    mp.t_date BETWEEN wk_comm AND wk_end;
BEGIN
  FOR i IN c1 LOOP
    IF v_c = 100000 THEN
      v_c := 0;
      COMMIT;  -- intermediate commit every 100,000 rows
    END IF;
    v_c := v_c + 1;
    UPDATE MP_Table mp
    SET    c_col = i.c_col
    WHERE  mp.t_n = i.t_n;
  END LOOP;
  COMMIT;
EXCEPTION
  WHEN OTHERS THEN
    ROLLBACK;
    err_num := SQLCODE;
    err_msg := SUBSTR(SQLERRM, 1, 100);
END Update_SP;
/**********************CODE***************************/
Materialized_VW :- It has 4 SEPARATE indexes on the columns R_id, P_id, t_id, c_col
MP_Table :- It has 4 SEPARATE indexes on the columns R_id, P_id, t_id, t_n
The explain plan when the number of records is large:
SELECT STATEMENT ALL_ROWS  Cost: 17,542  Bytes: 67  Cardinality: 1
  3 HASH JOIN  Cost: 17,542  Bytes: 67  Cardinality: 1
    1 TABLE ACCESS FULL MP_TABLE  Cost: 14  Bytes: 111,645  Cardinality: 4,135
    2 TABLE ACCESS FULL MATERIALIZED_VW  Cost: 16,957  Bytes: 178,668,800  Cardinality: 4,466,720
The explain plan when the number of records is small:
SELECT STATEMENT ALL_ROWS  Cost: 2,228  Bytes: 67  Cardinality: 1
  6 NESTED LOOPS  Cost: 2,228  Bytes: 67  Cardinality: 1
    1 TABLE ACCESS FULL MP_TABLE  Cost: 3  Bytes: 12,015  Cardinality: 445
    5 TABLE ACCESS BY INDEX ROWID MATERIALIZED_VW  Cost: 2,228  Bytes: 40  Cardinality: 1
      4 AND-EQUAL
        2 INDEX RANGE SCAN NON-UNIQUE MATERIALIZED_VW_INDX1
        3 INDEX RANGE SCAN NON-UNIQUE MATERIALIZED_VW_INDX2
This intermittent flip in the execution plan causes the procedure to take a huge amount of time whenever the number of records is high. Ten hours is far too long for any UPDATE, especially when the row count is only a six-digit number.
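(One thing we have not yet ruled out is stale optimizer statistics; a plan that flips with data volume often points there. Refreshing them would look roughly like this sketch, with object names as posted and the owner assumed to be the current schema:)

```sql
-- Sketch: refresh statistics on both objects so the optimizer sees
-- current cardinalities. Owner assumed to be the current schema.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'MP_TABLE',        cascade => TRUE);
  DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'MATERIALIZED_VW', cascade => TRUE);
END;
/
```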
But we cannot use a direct UPDATE either, as that would result in Oracle exceptions.
Please suggest ways of reducing the time, or any other method of doing the above.
Also, is there any way to establish a consistent plan that takes less time?
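In case it helps anyone suggest an approach: the set-based equivalent of the loop would be a single MERGE along these lines. This is a sketch only; it assumes t_n uniquely identifies rows in MP_Table (if it does not, a direct correlated UPDATE raises errors, which may be the exception we hit) and that wk_comm/wk_end are in scope as in the cursor:

```sql
-- Sketch only: assumes t_n is unique in MP_Table and that wk_comm /
-- wk_end are variables in scope, as in the cursor above.
MERGE INTO MP_Table mp
USING (SELECT tim.c_col, m.t_n
       FROM   Materialized_VW tim, MP_Table m
       WHERE  tim.R_id = m.R_id
       AND    tim.P_id = m.P_id
       AND    tim.t_id = m.t_id
       AND    m.t_date BETWEEN wk_comm AND wk_end) src
ON (mp.t_n = src.t_n)
WHEN MATCHED THEN
  UPDATE SET mp.c_col = src.c_col;
```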
Thanks
Arnab
Hi BluShadow,
I followed up on your example, extending it to bulk processing.
I have tested insert and update operations.
Here are the insert results:
SQL> CREATE TABLE mytable (x number, z varchar2(5));
Table created.
SQL> DECLARE
       v_sysdate DATE;
       v_insert  NUMBER;
       TYPE t_nt_x IS TABLE OF NUMBER;
       TYPE t_nt_z IS TABLE OF VARCHAR2(5);
       v_nt_x t_nt_x;
       v_nt_z t_nt_z;
       CURSOR c1 IS SELECT rownum AS x, 'test1' AS z FROM dual CONNECT BY ROWNUM <= 1000000;
     BEGIN
       -- Single insert
       v_insert := 0;
       EXECUTE IMMEDIATE 'TRUNCATE TABLE mytable';
       v_sysdate := SYSDATE;
       INSERT INTO mytable (x,z) SELECT rownum, 'test1' FROM dual CONNECT BY ROWNUM <= 1000000;
       v_insert := SQL%ROWCOUNT;
       COMMIT;
       DBMS_OUTPUT.PUT_LINE('Single insert--> Row Inserted: '||v_insert||' Time Taken: '||ROUND(((SYSDATE-v_sysdate)*(24*60*60)),0));

       -- Multi insert
       v_insert := 0;
       EXECUTE IMMEDIATE 'TRUNCATE TABLE mytable';
       v_sysdate := SYSDATE;
       FOR i IN 1..1000000 LOOP
         INSERT INTO mytable (x,z) VALUES (i,'test1');
         v_insert := v_insert + SQL%ROWCOUNT;
       END LOOP;
       COMMIT;
       DBMS_OUTPUT.PUT_LINE('Multi insert--> Row Inserted: '||v_insert||' Time Taken: '||ROUND(((SYSDATE-v_sysdate)*(24*60*60)),0));

       -- Multi insert using bulk
       v_insert := 0;
       EXECUTE IMMEDIATE 'TRUNCATE TABLE mytable';
       v_sysdate := SYSDATE;
       OPEN c1;
       LOOP
         FETCH c1 BULK COLLECT INTO v_nt_x, v_nt_z LIMIT 100000;
         EXIT WHEN c1%NOTFOUND;
         FORALL i IN 1..v_nt_x.COUNT
           INSERT INTO mytable (x,z) VALUES (v_nt_x(i), v_nt_z(i));
         v_insert := v_insert + SQL%ROWCOUNT;
       END LOOP;
       COMMIT;
       DBMS_OUTPUT.PUT_LINE('Multi insert using bulk--> Row Inserted: '||v_insert||' Time Taken: '||ROUND(((SYSDATE-v_sysdate)*(24*60*60)),0));
     END;
     /
Single insert--> Row Inserted: 1000000 Time Taken: 3
Multi insert--> Row Inserted: 1000000 Time Taken: 62
Multi insert using bulk--> Row Inserted: 1000000 Time Taken: 10
PL/SQL procedure successfully completed.

And here are the update results:
SQL> DECLARE
       v_sysdate DATE;
       v_update  NUMBER;
       TYPE t_nt_x IS TABLE OF ROWID;
       TYPE t_nt_z IS TABLE OF VARCHAR2(5);
       v_nt_x t_nt_x;
       v_nt_z t_nt_z;
       CURSOR c1 IS SELECT rowid AS ri, 'test4' AS z FROM mytable;
     BEGIN
       -- Single update
       v_update := 0;
       v_sysdate := SYSDATE;
       UPDATE mytable SET z = 'test2';
       v_update := SQL%ROWCOUNT;
       COMMIT;
       DBMS_OUTPUT.PUT_LINE('Single update--> Row Updated: '||v_update||' Time Taken: '||ROUND(((SYSDATE-v_sysdate)*(24*60*60)),0));

       -- Multi update
       v_update := 0;
       v_sysdate := SYSDATE;
       FOR rec IN (SELECT rowid AS ri FROM mytable) LOOP
         UPDATE mytable SET z = 'test3' WHERE rowid = rec.ri;
         v_update := v_update + SQL%ROWCOUNT;
       END LOOP;
       COMMIT;
       DBMS_OUTPUT.PUT_LINE('Multi update--> Row Updated: '||v_update||' Time Taken: '||ROUND(((SYSDATE-v_sysdate)*(24*60*60)),0));

       -- Multi update using bulk
       v_update := 0;
       v_sysdate := SYSDATE;
       OPEN c1;
       LOOP
         FETCH c1 BULK COLLECT INTO v_nt_x, v_nt_z LIMIT 100000;
         EXIT WHEN c1%NOTFOUND;
         FORALL i IN 1..v_nt_x.COUNT
           UPDATE mytable SET z = v_nt_z(i) WHERE rowid = v_nt_x(i);
         v_update := v_update + SQL%ROWCOUNT;
       END LOOP;
       COMMIT;
       DBMS_OUTPUT.PUT_LINE('Multi update using bulk--> Row Updated: '||v_update||' Time Taken: '||ROUND(((SYSDATE-v_sysdate)*(24*60*60)),0));
     END;
     /
Single update--> Row Updated: 1000000 Time Taken: 39
Multi update--> Row Updated: 1000000 Time Taken: 60
Multi update using bulk--> Row Updated: 1000000 Time Taken: 32
PL/SQL procedure successfully completed.

The single statement still has the better performance, but with bulk processing the cursor performance has improved dramatically
(in the update case the bulk processing is even slightly better than the single statement).
I guess that with bulk processing the switching between the SQL and PL/SQL engines is much reduced.
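One caveat worth flagging on the skeleton above: EXIT WHEN c1%NOTFOUND only happens to work here because 1,000,000 divides evenly by the LIMIT of 100,000. With a partial final fetch, %NOTFOUND is TRUE and that batch would be skipped before the FORALL runs. A safer skeleton (a sketch, reusing the same names):

```sql
-- Sketch: same loop, but testing the collection count instead of
-- %NOTFOUND, so a partial final fetch is still processed.
OPEN c1;
LOOP
  FETCH c1 BULK COLLECT INTO v_nt_x, v_nt_z LIMIT 100000;
  EXIT WHEN v_nt_x.COUNT = 0;
  FORALL i IN 1 .. v_nt_x.COUNT
    INSERT INTO mytable (x, z) VALUES (v_nt_x(i), v_nt_z(i));
  v_insert := v_insert + SQL%ROWCOUNT;
END LOOP;
CLOSE c1;  -- the test blocks above also never CLOSE the cursor
```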
It would be interesting to test it with more rows; I might do that tomorrow.
Just thought it would be interesting to share the results with you guys.
Cheers,
Davide
Similar Messages
-
Central confirmation is taking a huge time for a particular user in SRM
Hi Gurus.
I am facing an issue in the production system. For some users the central confirmation is taking a huge amount of time; around 10 users have reported the issue so far, and it takes 10 times longer than usual. Any suggestions would be a great help.

Hi Prabhakar,
As Konstantin rightly mentioned, kindly check those BAdI implementations, especially BBP_WF_LIST. In addition, please check whether you are getting any dump such as the one below:
TSV_TNEW_PAGE_ALLOC_FAILED
Best Regards,
Bharathi -
Taking huge time to fetch data from CDHDR
Hi Experts,
Counting the entries in the CDHDR table takes a huge amount of time and throws a TIME_OUT dump.
I believe this table holds more than a million entries. Is there an alternative way to find out the number of entries?
We are selecting from CDHDR with the following conditions:
Objclass = classify
Udate = 'X' date
Utime = 'X' (even with a selection of 1 minute)
We also tried to index the UDATE field, but that takes a huge time as well (more than 6 hours, and incomplete).
Can you suggest an alternative way to count the entries?
Regards,
VS

Hello,
In SE16, on the initial selection screen, you can run your selection in the background and create a spool request:
SE16 > contents, enter the selection criteria, then Program > Execute in Background.
Best regards,
Peter -
Hi
update TBLMCUSTOMERUSAGEDETAIL set UNUSEDVALUE=0 where CUSTOMERUSAGEDETAILID is not null;
This table has only 440 records, but the statement takes a long time and does not complete. What could be the reason?
Your early response is appreciated.
Thanks in advance
Edited by: user647572 on Sep 13, 2010 4:13 AM

Steps to generate a 10046 trace:
SQL> alter session set max_dump_file_size=unlimited;
SQL> alter session set timed_statistics=true;
SQL> alter session set events '10046 trace name context forever, level 12';
SQL> -- execute the query
SQL> -- if it completes, just EXIT
A trace file will be generated in your user_dump_dest.
Format this file with TKPROF:
$ tkprof <output_from_10046> <new_file_name> sys=no explain=<username>/<password>
(username/password of the user who executed the query)
Upload all the trace files to Metalink.
1) Check what the query is waiting on.
2) Where is most of the time spent: parse, execute, or fetch?
This is how you can do the analysis.
Kind Regards,
Rakesh Jayappa
Edited by: Rakesh jayappa on Sep 13, 2010 11:52 PM -
Update statement taking long time
Hi All,
The following query is taking too much time to update 100,000 records.
Please tell me how the query time can be reduced.
DECLARE
CURSOR cur IS
SELECT c.account_id FROM crm_statement_fulfilled_sid c;
BEGIN
FOR i IN cur LOOP
UPDATE crm_statement_fulfilled_sid a
SET a.rewards_expired = (SELECT abs(nvl(SUM(t.VALUE), 0))
FROM crm_reward_txns t
WHERE txn_date BETWEEN
(SELECT (decode(MAX(csf.procg_date) + 1,
NULL,
to_date('01-May-2011',
'DD-MM-YYYY'),
MAX(csf.procg_date) + 1))
FROM crm_statement_fulfilled csf
WHERE csf.procg_date < '01-aug-2012'
AND account_id = i.account_id) /*points_start_date_in*/
AND '01-aug-2012'
AND reason = 'RE'
AND t.account_id = i.account_id);
END LOOP;
END;Any help?
Sid

Hoek wrote:
Second: you're doing row-by-row (= slow-by-slow) processing.
See if you can rewrite your code into a single SQL UPDATE statement that will process an entire set at once.

Well, to be nitpicking: the OP already does a single big update. He just does it again and again, once for each account id, each time completely overwriting the values from the previous account id. -
WebIntelligence - Refresh taking huge time
Hi Team,
I have an issue in BI Launchpad. I created a WEBI report using BEx as a source and enabled the "refresh on open" option. The issue is that when the user opens the report, it takes a lot of time (approx. 30 minutes) to display the prompt screen, and we have only one prompt variable.
But when I create the same report in Analysis for OLAP, it runs fast in BI Launchpad.
Is there any option to resolve this issue? Please do the needful.
Awaiting your reply,
Thanks in advance,
Krishna.

Hi Mahesh,
Please go through this once.
Add 2 new registry entries (string keys) under:
For the 32-bit Rich Client: [HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\SAP BusinessObjects\Suite XI 4.0\default\WebIntelligence\Calculator]
For the 64-bit Web Intelligence: [HKEY_LOCAL_MACHINE\SOFTWARE\SAP BusinessObjects\Suite XI 4.0\default\WebIntelligence\Calculator]
"BICSLOVChunkSize"="4000"
"BICSResultChunkSize"="100000"
BICSLOVChunkSize is defined as a string. It chunks the list of values into groups of at most the given size. The bigger the number, the more likely the retrieval of the list of values is to crash; consider a value between 100 and 5000. If you have 1000 values in your database and set BICSLOVChunkSize to 200, you will see a hierarchical view of 5 groups, each containing 200 values.
BICSResultChunkSize is defined as a string. It limits the number of results that can be retrieved as a list of values. If a list of values contains 1 million entries and BICSResultChunkSize is set to 1000, only the first 1000 rows will be retrieved, so you will get a partial result set for the list of values.
Regards,
Krishna.K -
Update script taking long time....
Dear all,
I have to update more than 2 million records in a table, but the update below ran for a whole day and still did not finish. Is there any way to improve this update statement?
update customer_table p
set P.ic_no = fn_change_iden_no_dirct(p.v_ic_no, p.ic_code)
where P.ic_no is not null
" fn_change_iden_no_dirct" is a finction which returns the correct ic_no?
BELOW IS THE COSTING OF THE ABOVE UPDATE SCRIPT
ID PID Operation Name Rows Bytes Cost CPU Cost IO Cost
0 UPDATE STATEMENT 2292K 34M 1192 1192
1 0 UPDATE CUSTOMER_TABLE
2 1 TABLE ACCESS FULL CUSTOMER_TABLE 2292K 34M 1192 1192
.....Thank youOne reason the update is slow is because Oracle will do two switches between the SQL engine and the pl/sql engine for each row that it has to process. Such a switch usually is very slow. In 11g you could profit from funcion result cache. But the best way would be not to use a user-defined pL/sql function.
I just noticed that you will do 4 context switches because inside of your function you do a not needed SELECT to get the length.
First thing I would do is replace

BEGIN
  select length(trim(PI_INSTR))
    into lv_length from dual;
  ...

with

BEGIN
  lv_length := length(trim(PI_INSTR));
  ...

Here is a version where I tried to copy the logic from your function into the SQL statement. Also note that I tried to update only those rows that require a change.
starting point
update customer_table p
set P.ic_no = fn_change_iden_no_dirct(p.v_ic_no, p.ic_code)
where P.ic_no is not null
and P.ic_no != fn_change_iden_no_dirct(p.v_ic_no, p.ic_code)
changed logic
update customer_table p
set P.ic_no =
decode(p.ic_code
/* NEW IC part */
,'NEWIC', case when length(trim(p.v_ic_no)) = 12
then substr(trim(p.v_ic_no),1,6)||'15'||'999'||substr(trim(p.v_ic_no),12,1)
when length(trim(p.v_ic_no)) > 12
then substr(trim(p.v_ic_no),1,6)||'15'||'999'||substr(trim(p.v_ic_no),-(length(trim(p.v_ic_no)) -11))
when length(trim(p.v_ic_no)) between 8 and 11
then substr(trim(p.v_ic_no),1,6)||'15'||'999'
else p.v_ic_no
end
/* IC part */
,'UIC', case when length(trim(p.v_ic_no)) > 4
then substr(trim(p.v_ic_no),1,1)|| substr(trim(p.v_ic_no),4,1)||substr(trim(p.v_ic_no),3,1)||substr(trim(p.v_ic_no),2,1) ||substr(trim(p.v_ic_no),- (length(trim(p.v_ic_no)) -4))
else p.v_ic_no
end
/* else part */
,p.v_ic_no)
where P.ic_no is not null
and (p.ic_code in ('NEWIC','UIC') or p.v_ic_no != P.ic_no)
;

Edited by: Sven W. on Jul 5, 2010 1:35 PM -
Oracle 11g: Oracle insert/update operation is taking more time.
Hello All,
In Oracle 11g (Windows 2008 32 bit environment) we are facing following issue.
1) We are inserting/updating data in some tables (4-5 tables), firing queries at a very high rate.
2) After some time (say 15 days under the same load), the insert/update operations seem to take more and more time.
Query 1: How can we confirm that Oracle is actually taking more time in the insert/update operations?
Query 2: How can we rectify the problem?
We are having multithread environment.
Thanks
With Regards
Hemant.

Liron Amitzi wrote:
Hi Nicolas,
Just a short explanation:
If you have a table with 1 column (let's say a number), the table is empty, and you have an index on the column: when you insert a row, the value of the column is inserted into the index. Inserting 1 value into an index holding 10 values is fast; inserting 1 value into an index holding 1 million values takes longer.
My second example: take the same table, insert 10 rows, and delete the previous 10. There are always 10 rows in the table, so the index should stay small, but that is not what happens. If I insert values 1-10, then delete 1-10 and insert 11-20, then delete 11-20 and insert 21-30, and so on, then, because the index is sorted, the slots where 1-10 were stored are left empty. Oracle will not fill them up, so the index becomes larger and larger as I insert more rows (even though I delete the old ones).
The solution here is simply to rebuild the index once in a while.
Hope it is clear.
Liron Amitzi
Senior DBA consultant
[www.dbsnaps.com]
[www.orbiumsoftware.com]

Hmmm, index space not reused? Index rebuild once in a while? That is what I understood from your previous post, but nothing is less sure.
This is a misconception of how indexes work.
I would suggest reading the following interesting doc; it has a lot of nice examples (including index space reuse) to aid understanding, and in conclusion:
http://richardfoote.files.wordpress.com/2007/12/index-internals-rebuilding-the-truth.pdf
"+Index Rebuild Summary+
+•*The vast majority of indexes do not require rebuilding*+
+•Oracle B-tree indexes can become “unbalanced” and need to be rebuilt is a myth+
+•*Deleted space in an index is “deadwood” and over time requires the index to be rebuilt is a myth*+
+•If an index reaches “x” number of levels, it becomes inefficient and requires the index to be rebuilt is a myth+
+•If an index has a poor clustering factor, the index needs to be rebuilt is a myth+
+•To improve performance, indexes need to be regularly rebuilt is a myth+"
Good reading,
Nicolas. -
Loading of data is taking much time.
Data is coming from an Oracle DB into a cube, and it is taking a huge amount of time. After 1 day the load is still in yellow state, but it gives a short dump MESSAGE_TYPE_X.
In the error analysis it says:
Diagnosis: The information available is not sufficient for analysis. You must determine whether further IDocs exist in SAP BW that have not yet been processed, and could deliver additional information.
Further analysis: Check the SAP BW ALE inbox. You can check the IDocs using the Wizard or the "All IDocs" tabstrip.
Current status: No selection information arrived from the source system.
I have checked the system log and found the same error. Moreover, the RFC connection is OK.
Please suggest.

Rajib,
What I mean is: load to the PSA (only PSA in the InfoPackage) and do not check the box "update subsequently...".
When all data is in the PSA, load it to the cube (manually, by clicking the button in the monitor's "Status" tab).
This means that the system has more resources available to do just one step.
Udo -
Hi guys,
The data lookup in the ODS is taking a huge amount of time. It was working fine a week back; the ODS has around 200 million records.
Thanks,
Your help will be greatly appreciated.

For two records, yes, it should not take that much time.
Check the system performance as well: sometimes Basis might be working on backup activities, which can also hurt system performance.
So check whether this is happening to everyone in your project, and also check the content directly from the ODS content instead of the ListCube, for comparison.
Also check how heavily loaded your system is.
Thanks..
Hope this helps -
Update ztable is taking long time
Hi All,
I ran 5 jobs with the same program at the same time, but when we check the DB trace, ZS01 is taking a long time, as shown below, even though ZS01 holds only a small amount of data.
In the DB trace below, the fetch on ZS01 took 2,315,485 (microseconds in the trace, i.e. about 2.3 seconds). How can this be reduced?
HH:MM:SS.MS  Duration   Program   ObjectName  Op.    Curs  Array  Rec  RC  Conn
2:36:15 AM   2,315,485  SAPLZS01  ZS01        FETCH  294   1      1    0   R/3
The code is shown below; you can also check it in program SAPLZS01, include LZS01F01.
FORM UPDATE_ZS01.
  IF ZS02-STATUS = '3'.
    IF Z_ZS02_STATUS = '3'. "previous status is ERROR
      EXIT.
    ELSE.
      SELECT SINGLE FOR UPDATE * FROM ZS01
        WHERE PROC_NUM = ZS02-PROC_NUM.
      CHECK SY-SUBRC = 0.
      ADD ZS02-MF_AMT TO ZS01-ERR_AMT.
      ADD 1 TO ZS01-ERR_INVOI.
      UPDATE ZS01.
    ENDIF.
  ENDIF.
ENDFORM.
My question is: why does updating the Z table take such a long time, and how can the update be made faster?
Thanks in advance,
Regards,
Suni

Try the code like this:
DATA: wa_zs01 TYPE zs01.

FORM UPDATE_ZS01.
  IF ZS02-STATUS = '3'.
    IF Z_ZS02_STATUS = '3'. "previous status is ERROR
      EXIT.
    ELSE.
      SELECT SINGLE FOR UPDATE * FROM ZS01
        WHERE PROC_NUM = ZS02-PROC_NUM.
      " change: work on a work area instead of the table work area
      CHECK SY-SUBRC = 0.
      ADD ZS02-MF_AMT TO wa_zs01-ERR_AMT.
      ADD 1 TO wa_zs01-ERR_INVOI.
      UPDATE ZS01 FROM wa_zs01.
    ENDIF.
  ENDIF.
ENDFORM.
Also, I think this SELECT on ZS01 is inside the ZS02 SELECT statement, which might also slow the process down.
For database access, always fetch the data into a work area / internal table and work with that.
Accessing the database like this, or with SELECT ... ENDSELECT, is inefficient programming. -
Hi, I have updated my Mac from 10.5 to 10.5.8 successfully, but the install takes a long time and the status bar does not show any progress.

If I remember correctly, one of the updates could hang after applying. That is fixed by a restart, but installing a combo update over the top of others rarely does any harm, and it may be the best thing to do.
-
Update query in sql taking more time
Hi
I am running an update query which is taking a long time. Any help to make it run faster?
update arm538e_tmp t
set t.qtr5 =(select (sum(nvl(m.net_sales_value,0))/1000) from mnthly_sales_actvty m
where m.vndr#=t.vndr#
and m.cust_type_cd=t.cust_type
and m.cust_type_cd<>13
and m.yymm between 201301 and 201303
group by m.vndr#,m.cust_type_cd;
Help will be appreciated.
Thank you.

This is the Reports forum. Ask this in the SQL and PL/SQL forum.
-
Update query which taking more time
Hi
I am running an update query which is taking a long time. Any help to make it run faster?
update arm538e_tmp t
set t.qtr5 =(select (sum(nvl(m.net_sales_value,0))/1000) from mnthly_sales_actvty m
where m.vndr#=t.vndr#
and m.cust_type_cd=t.cust_type
and m.cust_type_cd<>13
and m.yymm between 201301 and 201303
group by m.vndr#,m.cust_type_cd;
Help will be appreciated.
Thank you.
Edited by: 960991 on Apr 16, 2013 7:11 AM

960991 wrote:
Hi
I am running an update query which takeing more time any help to run this fast.
update arm538e_tmp t
set t.qtr5 =(select (sum(nvl(m.net_sales_value,0))/1000) from mnthly_sales_actvty m
where m.vndr#=t.vndr#
and m.cust_type_cd=t.cust_type
and m.cust_type_cd<>13
and m.yymm between 201301 and 201303
group by m.vndr#,m.cust_type_cd;
help will be appreciable
thank you

Updates with subqueries can be slow. Get an execution plan for the update to see what the SQL is doing.
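For comparison, a set-based MERGE form of this update might look like the sketch below. It assumes that (vndr#, cust_type) identifies the rows in arm538e_tmp and that the unbalanced parenthesis in the posted SQL was a paste accident:

```sql
-- Sketch of a MERGE rewrite (assumes (vndr#, cust_type) identifies
-- the rows to update in arm538e_tmp).
MERGE INTO arm538e_tmp t
USING (SELECT m.vndr#, m.cust_type_cd,
              SUM(NVL(m.net_sales_value, 0)) / 1000 AS qtr5
       FROM   mnthly_sales_actvty m
       WHERE  m.cust_type_cd <> 13
       AND    m.yymm BETWEEN 201301 AND 201303
       GROUP  BY m.vndr#, m.cust_type_cd) s
ON (t.vndr# = s.vndr# AND t.cust_type = s.cust_type_cd)
WHEN MATCHED THEN
  UPDATE SET t.qtr5 = s.qtr5;
```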
Some things to look at ...
1. Are you sure you posted the right SQL? I could not "balance" the parentheses: 4 "(" but only 3 ")".
2. The unnecessary "(" ")" around "sum" in the subquery are confusing.
3. Updates with subqueries can be slow. The t.qtr5 value seems to evaluate to a constant; you might improve performance by computing the value beforehand and using a variable instead of the subquery.
4. The subquery appears to be correlated - good! Make sure the subquery is properly indexed if it reads < 20% of the rows in the table (this figure depends on the version of Oracle).
5. Is t.qtr5 part of an index? It is a bad idea to update indexed columns. -
Update the Z table taking Max time.
Dear Friends,
I am updating a single field in a Z table. I do it in a loop, giving all the primary key fields in the WHERE condition, and it updates the records properly.
But the runtime grows when there are more records (say 3000+ records). I am now trying to do the update in a single statement, but SY-SUBRC returns 4 and the records are not updated in the Z table.
Please let me know how to update the Z table in a single statement.
Please find the code below:
current code:
UPDATE zXX CLIENT SPECIFIED
  SET zyy = it_finaldata-zyy
  WHERE mandt = sy-mandt
    AND matnr = it_finaldata-matnr
    AND matkl = it_finaldata-matkl.
This is within a loop and works fine, but takes more than 30 minutes.
Now I have moved all the data into another internal table (the internal table has the same structure as the Z table) and I am using the statement below, but it is not working:
UPDATE zXX FROM TABLE it_savedata.
Please help me.
Thanks,
Sridhar

Hi,
If you use UPDATE, the line must already exist in the DB table; otherwise SY-SUBRC will be 4. You can use the MODIFY statement instead. The advantage is that if a line exists it will modify that particular line, otherwise it will create a new line in the DB table.
MODIFY dtab FROM TABLE itab.
Regards
Sathar