Help on table update - millions of rows
Hi,
I am trying to do the following, however the process is taking a lot of time. Can someone help me find the best way to do it?
qtemp1 - 500,000 rows
qtemp2 - 50 Million rows
UPDATE qtemp2 qt
SET product =
  (SELECT qtemp.product_cd
   FROM qtemp1 qtemp
   WHERE qt.quote_id = qtemp.quote_id)
WHERE processed_ind = 'P';
I have created indexes on product, product_cd and quote_id on both the tables.
Thank you
There are two basic I/O read operations that need to be done to find the required rows.
1. In QTEMP1 find row for a specific QUOTE_ID.
2. In QTEMP2 find all rows where PROCESSED_IND is equal to 'P'.
For every row in (2), the I/O in (1) is executed. So, if there are 10 million rows result for (2), then (1) will be executed 10 million times.
So you want QTEMP1 to be optimised for access via QUOTE_ID - at best it should be using a unique index.
Access on QTEMP2 is more complex. I assume that the process indicator is a column with low cardinality. In addition, being a process status indicator, it is likely a candidate for being changed via UPDATE statements - in which case it is a very poor candidate for either a B+ tree index or a bitmap index.
Even if indexed, a large number of rows may be of process type 'P' - in which case the CBO will rightly decide not to waste I/O on the index, but instead spend all the I/O on a faster full table scan.
In this case, (2) will result in all 50 million rows being read - and for each row that has a 'P' process indicator, (1) being called.
Any way you look at this, it is a major processing request for the database to perform. It involves a lot of I/O and can involve a huge number of nested SQL calls to QTEMP1... so this is obviously going to be a slow process. The majority of elapsed processing time will be spent waiting for I/O from disks.
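One set-based alternative worth testing (a sketch only, assuming an Oracle-style MERGE is available and QUOTE_ID is unique in QTEMP1) replaces the millions of nested single-row lookups with one join that the optimizer can run as a hash join:

```sql
MERGE INTO qtemp2 qt
USING qtemp1 qtemp
  ON (qt.quote_id = qtemp.quote_id)
WHEN MATCHED THEN
  UPDATE SET qt.product = qtemp.product_cd
  WHERE qt.processed_ind = 'P';   -- only touch rows still flagged 'P'
```

The I/O cost of scanning both tables once is usually far lower than 10 million indexed probes.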
Similar Messages
-
Dear all,
DB : 10.2.0.4.
OS : Solaris 5.10
I have a partitioned table with millions of rows and am updating a field like:
update test set amount=amount*10 where update='Y';
When I run this query it generates many archive logs, as it doesn't commit the transaction anywhere.
Please give me an idea how I can commit this transaction after every 2000 rows, so that archive log generation will not be so high.
Please guide
Kai

There's not a lot you can do about the amount of redo you generate (unless perhaps you make the table unrecoverable, and that might not help much either).
It's possible that if the column being updated is in an index, dropping the index during the update and recreating it afterwards might help, but that could land you in more trouble.
One area of concern is the amount of undo space for the large transaction; this could even be exceeded and your statement might fail, and that might be a reason for splitting it into smaller transactions.
Certainly there is no point in splitting down to 2000-record chunks; I'd want to aim much higher than that.
If you feel you want to divide it, the records may contain a field that could be used, e.g. create_date; or if you are able to work partition by partition, that might help.
If archive log management is the problem then speaking to the DBA should help.
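If you do split it by a date field, a rough sketch of that pattern (all names are placeholders; the original's `update='Y'` column is renamed `upd_flag` here because UPDATE is a reserved word):

```sql
BEGIN
  FOR r IN (SELECT DISTINCT TRUNC(create_date, 'MM') AS mth
            FROM test
            WHERE upd_flag = 'Y') LOOP
    UPDATE test
       SET amount = amount * 10
     WHERE upd_flag = 'Y'
       AND TRUNC(create_date, 'MM') = r.mth;
    COMMIT;   -- one transaction per month-sized chunk
  END LOOP;
END;
/
```

Each chunk commits separately, capping the undo held at any one time, though the total redo generated stays roughly the same.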
Hope these thoughts help - but you are responsible for any actions you take, regards - bigdelboy -
Alternatives to using update to flag rows of data
Hello
I'm looking into finding ways of improving the speed of a process, primarily by avoiding the use of an update statement to flag a row in a table.
The problem with flagging the row is that the table has millions of rows in it.
The logic is a bit like this,
Set flag_field to 'F' for all rows in table. (e.g. 104 million rows updated)
for a set of rules loop around this
Get all rows that satisfy criteria for a processing rule. (200000 rows found)
Process the rows collected data (200000 rows processed)
Set flag_field to 'T' for those rows processed (200000 rows updated)
end loop
Once a row in the table has been processed it shouldn't be collected as part of any criteria of another rule further down the list, hence it needs to be flagged so that it doesn't get picked up again.
With there being millions of rows in the table and rules sometimes processing only 200k rows, I've learnt recently that this will create a lot of undo to be written and will thus take a long time. (Any thoughts on that?)
Can anyone suggest anything other than using an update statement to flag each row to avoid it being processed again?
Appreciate the help.

With regard to speeding up the process, the answer is "it depends". It all hinges on exactly what you mean by "Process the rows collected data". What does the processing involve?

The processing involved is very straightforward.
The data in these large tables stays unchanged. For the sake of this example, I'll call this table orig_data_table; the rules are set by users who have their own tables built (for this example we'll call it target_data_table).
the rules are there to take rows from the orig_data_table and insert them into the target_data_table.
e.g.:
update orig_data_table set processed='F';

Rule #1 states: select * from orig_data_table where feature_1 = 310; this applies to 3000 rows in orig_data_table.
These 3000 rows get inserted into target_data_table.
The final step is to flag these 3000 rows in orig_data_table so that they are not used by any rule following rule #1: update orig_data_table set processed='T' where feature_1 = 310;
commit;

Rule #2 states: select * from orig_data_table where feature_1=310 and destination='Asia' and orig_data_table.processed='F' - so it won't pick up the 3000 rows that were processed as part of rule #1.
Once rule #2 has got the rows from orig_data_table (e.g. 400000 rows), those get inserted into target_data_table, followed by them being flagged to avoid being retrieved again:
update orig_data_table set processed='T' where destination='Asia';
commit;

Continue on to rule #3...
- Is the process some kind of transformation of the ~200,000 selected rows which could possibly be achieved in SQL alone, or is it an extremely complex transformation which cannot be done in pure SQL?

It's not at all complex. As I say, the data in orig_data_table is unchanged bar the processed field, which is initially set to 'F' for all rows and then set to 'T' after each rule for those rows fulfilling the criteria of that rule.

- Does the FLAG_FIELD exist purely for the use of this process or is it referred to by other procedures?

The flag_field is only for this purpose and not used elsewhere.

Having said that, as a first step to simply avoid the update of the flag field, I would suggest that you use bulk processing and include a table of booleans to act as the indicator for whether a particular row has been processed or not.

Could you elaborate a bit more on this table of booleans for me? It sounds like an interesting approach for me to test...
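One possible shape for that "table of booleans" idea - a rough sketch only, assuming the working set fits in memory (in practice you would BULK COLLECT with a LIMIT and likely keep only keys, not whole rows); names follow the example above:

```sql
DECLARE
  TYPE t_rows IS TABLE OF orig_data_table%ROWTYPE;
  TYPE t_done IS TABLE OF BOOLEAN INDEX BY PLS_INTEGER;
  l_rows t_rows;
  l_done t_done;
BEGIN
  SELECT * BULK COLLECT INTO l_rows FROM orig_data_table;
  FOR i IN 1 .. l_rows.COUNT LOOP
    l_done(i) := FALSE;                 -- in-memory replacement for SET processed = 'F'
  END LOOP;
  -- rule #1: feature_1 = 310
  FOR i IN 1 .. l_rows.COUNT LOOP
    IF NOT l_done(i) AND l_rows(i).feature_1 = 310 THEN
      INSERT INTO target_data_table VALUES l_rows(i);
      l_done(i) := TRUE;                -- flagged in memory: no undo generated
    END IF;
  END LOOP;
  COMMIT;
  -- later rules test l_done(i) the same way instead of the processed column
END;
/
```

The point is that the 104-million-row flag UPDATE (and its undo) disappears entirely; only the inserts hit the database.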
many thanks again
Sandip -
How to write a cursor to check every row of a table which has millions of rows
Hello everyone.
I need help, please... Below is the script (sample data); you can run it directly in SQL Server Management Studio.
Here we need to update the PPTA_Status column in the Donation table. There will be 3 statuses: A1, A2 and Q.
We need to update the PPTA_status of January donations only, and we need to write a cursor. As this is sample data we have only some donations (rows), but the real table has millions of rows; we need to check every row.
If I run the cursor for January, the cursor should take every row, row by row, all the rows of January.
We have donations in the don_sample table; I need to check the test_results in the result_sample table for those donations and update the PPTA_status column.
We need to check all the donations of January one by one. For every donation, we need to check the 2 previous donations, in the following way:
To find the previous donations of a donation, first look up the donor of that donation; then we can find the previous donations of that donor. Like this we need to check the 2 previous donations.
If there are 2 previous donations and both have test results, we need to update the PPTA_STATUS column of this donation as 'Q'.
If the 2 previous donation_numbers have test_code values (9, 10, 11) in the result_sample table, then those donations have results.
The BWX72 donor in the sample data I gave is an example of the above scenario.
For the donation we are checking, if it has only 1 previous donation and that donation has a result in the result_sample table, then set this donation's status to A2 (after checking the result of this donation also).
The ZBW24 donor in the sample data I gave is an example of the above scenario.
For the donation we are checking, if it has only 1 previous donation and that donation does NOT have a result in the result_sample table, then set this donation's status to A1 (after checking the result of this donation also).
The PGH56 donor in the sample data I gave is an example of the above scenario.
Like this we need to check all the donations in the don_sample table; it has millions of rows every month.
We need to join don_sample and result_sample by donation_number, and check the test_code column for results.
-- creating table
CREATE TABLE [dbo].[DON_SAMPLE](
[donation_number] [varchar](15) NOT NULL,
[donation_date] [datetime] NULL,
[donor_number] [varchar](12) NULL,
[ppta_status] [varchar](5) NULL,
[first_time_donation] [bit] NULL,
[days_since_last_donation] [int] NULL
) ON [PRIMARY]
--inserting values
Insert into [dbo].[DON_SAMPLE] ([donation_number],[donation_date],[donor_number],[ppta_status],[first_time_donation],[days_since_last_donation])
Select '27567167','2013-12-11 00:00:00.000','BWX72','A',1,0
Union ALL
Select '36543897','2014-12-26 00:00:00.000','BWX72','A',0,32
Union ALL
Select '47536542','2014-01-07 00:00:00.000','BWX72','A',0,120
Union ALL
Select '54312654','2014-12-09 00:00:00.000','JPZ41','A',1,0
Union ALL
Select '73276321','2014-12-17 00:00:00.000','JPZ41','A',0,64
Union ALL
Select '83642176','2014-01-15 00:00:00.000','JPZ41','A',0,45
Union ALL
Select '94527541','2014-12-11 00:00:00.000','ZBW24','A',0,120
Union ALL
Select '63497874','2014-01-13 00:00:00.000','ZBW24','A',1,0
Union ALL
Select '95786348','2014-12-17 00:00:00.000','PGH56','A',1,0
Union ALL
Select '87234156','2014-01-27 00:00:00.000','PGH56','A',1,0
--- creating table
CREATE TABLE [dbo].[RESULT_SAMPLE](
[test_result_id] [int] IDENTITY(1,1) NOT NULL,
[donation_number] [varchar](15) NOT NULL,
[donation_date] [datetime] NULL,
[test_code] [varchar](5) NULL,
[test_result_date] [datetime] NULL,
[test_result] [varchar](50) NULL,
[donor_number] [varchar](12) NULL
) ON [PRIMARY]
---SET IDENTITY_INSERT dbo.[RESULT_SAMPLE] ON
---- inserting values
Insert into [dbo].RESULT_SAMPLE( [test_result_id], [donation_number], [donation_date], [test_code], [test_result_date], [test_result], [donor_number])
Select 278453,'27567167','2013-12-11 00:00:00.000','0009','2014-01-20 00:00:00.000','N','BWX72'
Union ALL
Select 278454,'27567167','2013-12-11 00:00:00.000','0010','2014-01-20 00:00:00.000','NEG','BWX72'
Union ALL
Select 278455,'27567167','2013-12-11 00:00:00.000','0011','2014-01-20 00:00:00.000','N','BWX72'
Union ALL
Select 387653,'36543897','2014-12-26 00:00:00.000','0009','2014-01-24 00:00:00.000','N','BWX72'
Union ALL
Select 387654,'36543897','2014-12-26 00:00:00.000','0081','2014-01-24 00:00:00.000','NEG','BWX72'
Union ALL
Select 387655,'36543897','2014-12-26 00:00:00.000','0082','2014-01-24 00:00:00.000','N','BWX72'
UNION ALL
Select 378245,'73276321','2014-12-17 00:00:00.000','0009','2014-01-30 00:00:00.000','N','JPZ41'
Union ALL
Select 378246,'73276321','2014-12-17 00:00:00.000','0010','2014-01-30 00:00:00.000','NEG','JPZ41'
Union ALL
Select 378247,'73276321','2014-12-17 00:00:00.000','0011','2014-01-30 00:00:00.000','NEG','JPZ41'
UNION ALL
Select 561234,'83642176','2014-01-15 00:00:00.000','0081','2014-01-19 00:00:00.000','N','JPZ41'
Union ALL
Select 561235,'83642176','2014-01-15 00:00:00.000','0082','2014-01-19 00:00:00.000','NEG','JPZ41'
Union ALL
Select 561236,'83642176','2014-01-15 00:00:00.000','0083','2014-01-19 00:00:00.000','NEG','JPZ41'
Union ALL
Select 457834,'94527541','2014-12-11 00:00:00.000','0009','2014-01-30 00:00:00.000','N','ZBW24'
Union ALL
Select 457835,'94527541','2014-12-11 00:00:00.000','0010','2014-01-30 00:00:00.000','NEG','ZBW24'
Union ALL
Select 457836,'94527541','2014-12-11 00:00:00.000','0011','2014-01-30 00:00:00.000','NEG','ZBW24'
Union ALL
Select 587345,'63497874','2014-01-13 00:00:00.000','0009','2014-01-29 00:00:00.000','N','ZBW24'
Union ALL
Select 587346,'63497874','2014-01-13 00:00:00.000','0010','2014-01-29 00:00:00.000','NEG','ZBW24'
Union ALL
Select 587347,'63497874','2014-01-13 00:00:00.000','0011','2014-01-29 00:00:00.000','NEG','ZBW24'
Union ALL
Select 524876,'87234156','2014-01-27 00:00:00.000','0081','2014-02-03 00:00:00.000','N','PGH56'
Union ALL
Select 524877,'87234156','2014-01-27 00:00:00.000','0082','2014-02-03 00:00:00.000','N','PGH56'
Union ALL
Select 524878,'87234156','2014-01-27 00:00:00.000','0083','2014-02-03 00:00:00.000','N','PGH56'
select * from DON_SAMPLE
order by donor_number
select * from RESULT_SAMPLE
order by donor_number

You didn't mention the version of SQL Server. It's important, because SQL Server 2012 makes the job much easier (and will also run much faster, by dodging a self join). (As Kalman said, the OVER clause contributes to this answer.)
Both approaches below avoid needing the cursor at all. (There was part of your explanation I didn't understand fully, but I think these suggestions work regardless.)
Here's a SQL 2012 answer, using LAG() to look up the previous 1 and 2 donation codes by donor. (EDIT: I overlooked a couple of things in this post; please refer to my follow-up post for the final/fixed answer. I'm leaving this post with my overlooked items, for posterity.)
With Results_Interim as
(
Select *
, count('x') over(partition by donor_number) as Ct_Donations
, Lag(test_code, 1) over(partition by donor_number order by donation_date) as PrevDon1
, Lag(test_code, 2) over(partition by donor_number order by donation_date) as PrevDon2
from RESULT_SAMPLE
)
Select *
, case when PrevDon1 in (9, 10, 11) and PrevDon2 in (9, 10, 11) then 'Q'
when PrevDon1 in (9, 10, 11) then 'A2'
when PrevDon1 is not null then 'A1'
End as NEWSTATUS
from Results_Interim
Where Test_result_Date >= '2014-01' and Test_result_Date < '2014-02'
Order by Donor_Number, donation_date
And a SQL 2005 or greater version, not using SQL 2012 new features
With Results_Temp as
(
Select *
, count('x') over(partition by donor_number) as Ct_Donations
, Row_Number() over(partition by donor_number order by donation_date) as RN_Donor
from RESULT_SAMPLE
)
, Results_Interim as
(
Select R1.*, P1.test_code as PrevDon1, P2.Test_Code as PrevDon2
From Results_Temp R1
left join Results_Temp P1 on P1.Donor_Number = R1.Donor_Number and P1.Rn_Donor = R1.RN_Donor - 1
left join Results_Temp P2 on P2.Donor_Number = R1.Donor_Number and P2.Rn_Donor = R1.RN_Donor - 2
)
Select *
, case when PrevDon1 in (9, 10, 11) and PrevDon2 in (9, 10, 11) then 'Q'
when PrevDon1 in (9, 10, 11) then 'A2'
when PrevDon1 is not null then 'A1'
End as NEWSTATUS
from Results_Interim
Where Test_result_Date >= '2014-01' and Test_result_Date < '2014-02'
Order by Donor_Number, donation_date -
Loading millions of rows using SQL*loader to a table with constraints
I have a table with constraints and I need to load millions of rows in it using SQL*Loader.
What is the best way to do this, means what SQL*Loader options to use, for getting the best loading performance and how to deal with constraints?
Regards

- Check if your table has check constraints (like column NOT NULL). If you trust the data in the file you have to load, you can disable these constraints and re-enable them after the load.
- Check if you can modify the table and place it in NOLOGGING mode (it generates less redo, but only under some conditions).
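For reference, a hedged sketch of a direct-path load setup (file, table and column names are all placeholders):

```
-- load.ctl: run as  sqlldr userid=... control=load.ctl
OPTIONS (DIRECT=TRUE, ROWS=50000)
LOAD DATA
INFILE 'data.dat'
APPEND INTO TABLE big_table
FIELDS TERMINATED BY ','
(col1, col2, col3)
```

Direct path bypasses the buffer cache and, combined with NOLOGGING, skips most redo. Note that NOT NULL and unique constraints are still enforced during a direct path load, while CHECK and foreign key constraints are disabled for its duration.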
Hope it helps
Rui Madaleno -
Hi!
What's the best way to update 65 million rows? I don't have free space to create a new table.
Thanks
André

declare
  v_records number;
begin
  LOOP
    UPDATE table_name
    SET col_name = value
    WHERE condition          -- the condition must exclude rows already updated
    AND ROWNUM <= 10000;
    v_records := SQL%ROWCOUNT;
    COMMIT;                  -- commit each batch to keep undo small
    EXIT WHEN v_records = 0;
  END LOOP;
END;

Cheers
Sarma. -
Enhance a SQL query with update of millions of rows
Hi ,
I have this query developed to update around 200 million rows on my production system. I did my best, but please share your recommendations/concerns to make it more efficient.
DECLARE @ORIGINAL_ID AS BIGINT
SELECT FID001 INTO #Temp001_
FROM INBA004 WHERE RS_DATE>='1999-01-01'
AND RS_DATE<'2014-01-01' AND CLR_f1st='SSLM'
and FID001 >=12345671
WHILE (SELECT COUNT(*) FROM #Temp001_ ) <>0
BEGIN
SELECT TOP 1 @ORIGINAL_ID=FID001 FROM #Temp001_ ORDER BY FID001
PRINT CAST (@ORIGINAL_ID AS VARCHAR(100))+' STARTED'
SELECT DISTINCT FID001
INTO #OUT_FID001
FROM OUTTR009 WHERE TRANSACTION_ID IN (SELECT TRANSACTION_ID FROM
INTR00100 WHERE FID001 = @ORIGINAL_ID)
UPDATE A SET RCV_Date=B.TIME_STAMP
FROM OUTTR009 A INNER JOIN INTR00100 B
ON A.TRANSACTION_ID=B.TRANSACTION_ID
WHERE A.FID001 IN (SELECT FID001 FROM #OUT_FID001)
AND B.FID001=@ORIGINAL_ID
UPDATE A SET Sending_Date=B.TIME_STAMP
FROM INTR00100 A INNER JOIN OUTTR009 B
ON A.TRANSACTION_ID=B.TRANSACTION_ID
WHERE A.FID001=@ORIGINAL_ID
AND B.FID001 IN (SELECT FID001 FROM #OUT_FID001)
DELETE FROM #Temp001_ WHERE FID001=@ORIGINAL_ID
DROP TABLE #OUT_FID001
PRINT CAST (@ORIGINAL_ID AS VARCHAR(100))+' FINISHED'
END

DECLARE @x INT
SET @x = 1
WHILE @x < 44000000 -- set appropriately to the maximum ID
BEGIN
UPDATE Table SET a = c+d where ID BETWEEN @x AND @x + 10000
SET @x = @x + 10000
END
Make sure that the ID column has a clustered index (CI) on it.
Best Regards,Uri Dimant SQL Server MVP,
http://sqlblog.com/blogs/uri_dimant/
MS SQL optimization: MS SQL Development and Optimization
MS SQL Consulting:
Large scale of database and data cleansing
Remote DBA Services:
Improves MS SQL Database Performance
SQL Server Integration Services:
Business Intelligence -
Need help with the update queries - joining three tables -
We have three tables, shown below, each with millions of rows, and we need some updates here:
T_CHECK->TOTAL_AMT_PAID should be equal to
aim00.t_chck_clm_xref->amt_paid + aim01.t_chck_clm_xref->amt_paid;
Some CHECK_SAK values exist in aim00.t_chck_clm_xref and some exist in aim01.t_chck_clm_xref;
We tried to update using the queries within PL/SQL shown below.
Is there a way to make these more efficient?
SQL> desc aim.t_check;
Name Null? Type
CHECK_SAK NOT NULL NUMBER(9)
TOTAL_AMT_PAID NOT NULL NUMBER(11,2)
DTE_ISSUE NOT NULL NUMBER(8)
SQL> desc aim00.t_chck_clm_xref
Name Null? Type
CHECK_SAK NOT NULL NUMBER(9)
AMT_PAID NOT NULL NUMBER(10,2)
SQL> desc aim01.t_chck_clm_xref
Name Null? Type
CHECK_SAK NOT NULL NUMBER(9)
AMT_PAID NOT NULL NUMBER(10,2)
create or replace PROCEDURE CHECKSUPDATE IS
cursor my_cursor is
select /*+ DRIVING_SITE(t_check) INDEX(t_check) */
       tot, mid, aim.t_check.total_amt_paid
from (select sum(aim01.t_chck_clm_xref.amt_paid) tot,
             aim01.t_chck_clm_xref.check_sak mid
      from aim01.t_chck_clm_xref
      where not exists
        (select 'x' from aim00.t_chck_clm_xref
         where aim01.t_chck_clm_xref.check_sak = aim00.t_chck_clm_xref.check_sak)
      group by aim01.t_chck_clm_xref.check_sak) TABLE1,
     aim.t_check
where aim.t_check.check_sak = table1.mid
and aim.t_check.total_amt_paid <> tot;
my_count NUMBER;
BEGIN
my_count:=0;
for my_pos in my_cursor loop
update aim.t_check a set total_amt_paid=my_pos.tot
where a.check_sak=my_pos.mid;
my_count:=my_count+1;
if (mod(my_count,1000)=0) THEN
commit;
end if;
end loop;
commit;
END CHECKSUPDATE;
SQL> desc t_check;
Name Null? Type
CHECK_SAK NUMBER(9)
TOTAL_AMT_PAID NUMBER(11,2)
DTE_ISSUE NUMBER(8)
SQL> desc t_chck_clm_xref
Name Null? Type
CHECK_SAK NUMBER(9)
AMT_PAID NUMBER(10,2)
SQL> desc t_test
Name Null? Type
CHECK_SAK NUMBER(9)
AMT_PAID NUMBER(10,2)
select check_sak, sum(amt_paid)
from (
select check_sak, amt_paid from t_chck_clm_xref
union all
select check_sak, amt_paid from t_test
)
group by check_sak;

and use this query in an UPDATE statement or a MERGE statement, as Tubby suggested, against the T_CHECK table.
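A hedged sketch of the MERGE variant (using the T_TEST stand-in table from the descs above; the WHERE on the UPDATE clause skips rows already correct, which saves redo):

```sql
MERGE INTO t_check c
USING (
  SELECT check_sak, SUM(amt_paid) tot
  FROM (
    SELECT check_sak, amt_paid FROM t_chck_clm_xref
    UNION ALL
    SELECT check_sak, amt_paid FROM t_test
  )
  GROUP BY check_sak
) x
ON (c.check_sak = x.check_sak)
WHEN MATCHED THEN
  UPDATE SET c.total_amt_paid = x.tot
  WHERE c.total_amt_paid <> x.tot;   -- only touch rows whose total actually changed
```

This replaces the row-by-row cursor loop (and its periodic commits) with one set-based statement.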
isotope -
How to Find Source Table Updated Rows using Tsql Script
Hi Folks,
I have 2 tables, a Source table and a Staging table. Yesterday I imported 24 million records from the Source table into the Staging table. These tables contain approximately 42 columns, and ID is the unique column.
Since then, some of the rows in the Source table may have been updated. May I know which rows were updated, comparing both tables, using a T-SQL query? (Any of the 42 columns might have been updated.)
Usually new rows also appear in the source table; I want only the rows which were updated, not the new rows.
Thanks in advance.

SELECT Source.*, Stage.*
FROM Source
FULL OUTER JOIN
Stage
ON Source.c1 = Stage.c1
AND Source.c2 = Stage.c2
AND Source.cn = Stage.cn
WHERE Source.key IS NULL
OR Stage.key IS NULL;
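Since the requirement is updated rows only (not new ones), a variant worth considering restricts to IDs present in both tables and lets EXCEPT do the NULL-safe 42-column comparison, instead of spelling out `<>` per column. A sketch, assuming both tables share the same column list and `ID` is the unique key as stated:

```sql
SELECT s.*
FROM Source AS s
JOIN Stage  AS st ON st.ID = s.ID              -- present in both: not a new row
WHERE EXISTS (SELECT s.* EXCEPT SELECT st.*);  -- differs in at least one column
```

The EXISTS/EXCEPT idiom treats NULL = NULL as equal, which a chain of `OR s.c1 <> st.c1 ...` predicates would miss.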
Best Regards, Uri Dimant, SQL Server MVP
How to update a single row of data table
How can we update a single row of the data table by clicking a button in the same row?
Thanks in advance.

Hi!
What do you mean by 'update'? Get fresh data from the DB, or change data and commit it to the DB?
If commit, try to read here:
http://developers.sun.com/jscreator/learning/tutorials/2/inserts_updates_deletes.html
Thanks,
Roman. -
How to update all the rows of table using stored procedures
Hi,
I want to update all the rows of a table in a specific column.
sp_a male
sp_b female
sp_c male
sp_d female
in the above table, the gender of all the rows has to be interchanged.

Sir, the table is like this: detail(name varchar(10), gender varchar(10))
Where the details are like this:
Name    Gender
sp_a    male
sp_b    female
sp_c    male
sp_d    female
I want to create a stored procedure which automatically updates gender from male to female and female to male for all the rows, i.e., all the rows are updated in the gender column by just running a stored procedure.
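A hedged sketch of such a procedure (T-SQL assumed from the context; the procedure name is illustrative). A single CASE expression swaps the values set-based, with no cursor needed:

```sql
CREATE PROCEDURE swap_gender
AS
BEGIN
    UPDATE detail
       SET gender = CASE gender
                      WHEN 'male'   THEN 'female'
                      WHEN 'female' THEN 'male'
                      ELSE gender          -- leave any other value untouched
                    END;
END;
```

Run it with `EXEC swap_gender;` - every row is evaluated and swapped in one statement.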
So after execution of the stored proc the above table looks like:
Name    Gender
sp_a    female
sp_b    male
sp_c    female
sp_d    male -
How to update a column which has 35 millions of rows?
Hi everyone
need help
I am updating a column which has 35 million records. When I start the update, it runs for more than 60 minutes and does not finish.
I need to null that column A, and then update column A with the values of another column B.
Thanks
Kumar

Three common causes for big updates that are slow:
blocking. You can check whether the UPDATE is waiting on another connection to release locks using sp_who2
improperly sized log file. A big update may use a lot of log space. If your log file is too small and you have auto-grow enabled (which is the default setting), it will grow whenever needed, but this takes a lot of extra time. You can easily see this when you view the file size of the log file
if the column you are updating is indexed, the engine may also use tempdb a lot, so tempdb may grow a lot as well, which would take a lot of additional time. This is also easy to check by inspecting tempdb's file size
Gert-Jan -
Hi experts,
I still can't figure out how Oracle handles multiple updates to the same row. For instance, I have 3 update statements:
update supplier set supp_type = 'k' where supp_code = '1';
update supplier set supp_type = 'j' where supp_code = '1';
update supplier set supp_type = 'm' where supp_code = '1';
I keep getting the final result as supp_type = 'k' where it should actually be 'm'; yet when I execute the mapping it shows 3 update operations, which baffled me as to how Oracle handles simultaneous updates to the same row. I even tried disabling parallel DML on the table object, but am unsure whether this actually helps. I tried putting a sorter operator, and then a key lookup operator after the sorter, in my mapping to compare the supp_code field in the sorter with the target table's supp_code field to retrieve the relevant row to update; but instead of 3 update operations, it now updates supp_type in all my records to NULL. Can anyone explain how I should go about dealing with this?

Hi experts,
I just took a look at the code section generated for the key lookup operator named SUPPLIER_WH_SURRKEY01, and I feel something is wrong with the generated code. I have pasted the code section of the key lookup operator below.
ORDER BY
"SUPPLIER_CV"."RSID$" ASC ) *"SORTER" ON ( ( ( "SUPPLIER_WH_SURRKEY01"."EXPIRATION_DATE" = "SPIDERWEB2"."GET_EXPIRATI_0_EXPIRATI" ) ) AND ( ( "SUPPLIER_WH_SURRKEY01"."SUPPCODE" = "SORTER"."SUPP_CODE$1" ) ) )*
WHERE
( ( "SUPPLIER_WH_SURRKEY01"."SUPPKEY" IS NULL ) OR ( "SUPPLIER_WH_SURRKEY01"."SUPPKEY" = "SUPPLIER_WH_SURRKEY01"."SUPPKEY" ) );
Can anyone explain the code in bold? I have no clue what it means. Furthermore, the bold code looks similar to what I expected to find in the WHERE clause, except that instead of "SUPPLIER_WH_SURRKEY01"."EXPIRATION_DATE" = "SPIDERWEB2"."GET_EXPIRATI_0_EXPIRATI", I expected to find "SUPPLIER_WH_SURRKEY01"."EXPIRATION_DATE" = '31-dec-4000', because my key lookup operator checks against a constant with the value '31-dec-4000'. The constant's name is CONSTANT itself, while my mapping's name is SPIDERWEB2 (I'm not too sure why the generated code refers to my mapping name instead of my constant).
Edited by: user8915380 on 17-Mar-2010 00:52 -
Auto update of Ztable whenever BSID or BSAD tables are updated
Hi experts
I want my Ztable get updated automatically when ever a record is created or updated in BSID or BSAD tables.
Here clear requirement
Generally, using Company code & Allocation number (18 char), my programs access the BSID & BSAD tables; this takes a very long time to execute, almost more than 30 minutes (data in the millions).
Step 1.
I created a new Ztable with limited fields Company Code, Customer, Document, Allocation Number and Posting date.
Step 2.
Before look into BSID or BSAD my program searches Ztable for Customer number & Document number using Allocation field and Company code.
Step 3.
Once we get the Customer & Document numbers, accessing the BSID & BSAD tables is very easy (now taking less than 1 minute).
I created a new program to update the Ztable every day, but BSID and BSAD are live tables, so I want my Ztable updated immediately when any entry is posted in BSID or BSAD.
Please help me
Satya
Singapore

You need to check the procedure (transaction code) from which the data gets updated into these tables.
For example, when we create a material from MM01, the data gets updated in the corresponding table, i.e. EKPO.
In the same way you need to find the process, and then you can use BTEs (Business Transaction Events) for that process. BTEs are only for the FI module, and these tables are also related to FI. -
Updating billions of rows from historical data
Hi,
we have a task of updating billions of rows in some tables (one at a time) for some past DATEs.
Could you please suggest the best approach to follow in such cases, where simple UPDATE statements would take who knows how much time.
The scenario is something like this..
TEST table.
col1 col2 col3
Now we added one more column col4.
Now all the records in this table need to be updated with col3=col4 values for past data (past in the sense that, by the time this is executed in the production DB, sysdate-1 will be past data).
I thought of using a FORALL kind of clause, but I still don't know whether that is going to be completely useful or not.
I am using Oracle 10g Release 2.
Please give me your expert opinions on this scenario so that the script to update such voluminous data can execute successfully with better performance.
Thanks,
Aashish

Hi Mohamed,
thanks for your help.
However, in my case it's not possible to drop and recreate the table only to update a single column with values from another column.
I am trying to do some POC for this using bulk update, but I am not able to get it working.
I did something like this to check with 50,00,000 records:
create table test
( col1 varchar2(100),
col2 varchar2(100));
inserted 50,00,000 records into the col1 column of this table. Now when I tried to do something like:
declare
CURSOR s_cur IS
SELECT col1
FROM test;
TYPE fetch_array IS TABLE OF s_cur%ROWTYPE;
s_array fetch_array;
begin
OPEN s_cur;
LOOP
FETCH s_cur BULK COLLECT INTO s_array LIMIT 10000;
FORALL i IN 1..s_array.COUNT
UPDATE TEST
SET col2 = s_array(i); -- dont know if this is correct
EXIT WHEN s_cur%NOTFOUND;
END LOOP;
CLOSE s_cur;
COMMIT;
END;

It gives me some error. Can you please correct me on this?
Still I am looking for better way...
Rgds,
Aashish
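For reference, a corrected sketch of the block above, under the assumption that the intent is to copy col1 into col2 row by row. The original fails because `s_array(i)` is a whole record rather than a field, and the UPDATE has no WHERE clause, so each iteration would rewrite the entire table. Fetching the ROWID lets each update target only its own row (names as in the POC):

```sql
declare
  cursor s_cur is
    select rowid rid, col1 from test;
  type fetch_array is table of s_cur%rowtype;
  s_array fetch_array;
begin
  open s_cur;
  loop
    fetch s_cur bulk collect into s_array limit 10000;
    forall i in 1 .. s_array.count
      update test
         set col2 = s_array(i).col1    -- field of the record, not the record itself
       where rowid = s_array(i).rid;   -- update only the fetched row
    commit;
    exit when s_cur%notfound;
  end loop;
  close s_cur;
end;
/
```

That said, for a straight column copy a single `UPDATE test SET col2 = col1` (possibly chunked as discussed above) is usually simpler and faster than any PL/SQL loop.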