Updating 30 million recs
Hi friends,
I need some advice and help updating as many as 30 million records.
This is the design:
I have a staging table X with 20 million recs and a final table Y with 30 million records.
Every month I need to do a full copy from X to Y, i.e. take all 20 million records and update or insert them into table Y. Every day I do an incremental copy, which updates a few thousand records. I use a MERGE statement to do this, on version 10.2.
My problem is that the full copy takes more than a day, and sometimes never completes. I tried a bulk collect followed by a merge with LIMIT 1000, but that takes 23 hours to complete.
Could someone please advise on the best approach? Note that my final table has 30 million records with all indexes in place.
Thanks for the help
This is just a thought; you may have to give it a try and see if the approach below works:
1. Rename the final table Y to Y_BAK -- these two steps should only take a couple of seconds to execute
2. Rename the staging table X to Y
After steps 1 and 2, you need not worry about "updates" from the staging table to the final table.
- At this point, rows which are present in the old final table Y_BAK but missing from the former staging table need to be re-populated into the new final table Y.
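The two renames in steps 1 and 2 are simple DDL (a sketch; it assumes both tables live in the same schema, and note that grants, synonyms and other dependent objects do not automatically follow the new name):

```sql
ALTER TABLE y RENAME TO y_bak;  -- step 1: preserve the old final table
ALTER TABLE x RENAME TO y;      -- step 2: the staging table becomes the final table
```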
3. Write a small script to do this (to handle ONLY the INSERTs).
You may want to try disabling the constraints before running the INSERT below and re-enabling them once the INSERT completes successfully.
DECLARE
  -- pk_col is a placeholder for Y's real key column; this also assumes
  -- X and Y share the same column layout, as implied by the full copy
  CURSOR c_missing IS
    SELECT b.*
    FROM y_bak b
    WHERE NOT EXISTS (SELECT 1 FROM y WHERE y.pk_col = b.pk_col);
  TYPE t_rows IS TABLE OF y_bak%ROWTYPE INDEX BY PLS_INTEGER;
  l_rows t_rows;
BEGIN
  OPEN c_missing;
  LOOP
    FETCH c_missing BULK COLLECT INTO l_rows LIMIT 1000;
    EXIT WHEN l_rows.COUNT = 0;
    FORALL i IN 1 .. l_rows.COUNT
      INSERT INTO y VALUES l_rows(i);
    COMMIT;
  END LOOP;
  CLOSE c_missing;
END;
/
Shailender Mehta
Similar Messages
-
Update Millions of records in a table
Hi ,
My DB is 10.2.0
This is a very generic question, hence I am not adding the Plan.
I have to update a column in a table with the value of the sum of another column from the same table.
The table has 19 million records.
UPDATE transactions a
SET rest_amount = NVL((SELECT SUM(trans_amount)
                       FROM transactions b
                       WHERE b.trans_date = a.trans_date
                         AND trans_ind < 2), trans_amount)
WHERE trans_grp < 6999 AND trans_ind < 2;
This query takes 10 hours to run. There are indexes on trans_date and trans_grp. There is no index on the rest_amount column.
As per tom in
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:6407993912330
He suggested disabling all indexes and constraints and then re-enabling them. Is that applicable only to INSERTs, or to all DML?
Only during an insert will the constraints be validated and the indexes refreshed with the new row. But in my case I am updating a column which is not part of any index. Should I go with Tom's suggestion?
The table has 19 million records.
How many rows (that's rows) are actually updated?
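One way to cut the runtime is to avoid executing the correlated subquery once per row: aggregate once per trans_date and MERGE the result back. This is only a sketch; it assumes trans_date is never NULL for the affected rows, and unlike the original statement it leaves rest_amount untouched where no aggregate row matches:

```sql
MERGE INTO transactions t
USING (SELECT trans_date, SUM(trans_amount) AS sum_amt
       FROM transactions
       WHERE trans_ind < 2
       GROUP BY trans_date) s
ON (t.trans_date = s.trans_date)
WHEN MATCHED THEN UPDATE
  SET t.rest_amount = s.sum_amt
  WHERE t.trans_grp < 6999
    AND t.trans_ind < 2;
```

The aggregation then runs once as a single full scan instead of 19 million index probes.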
-
Help on table update - millions of rows
Hi,
I am trying to do the following, however the process is taking a lot of time. Can someone tell me the best way to do it?
qtemp1 - 500,000 rows
qtemp2 - 50 Million rows
UPDATE qtemp2 qt
SET product = (SELECT product_cd
               FROM qtemp1 qtemp
               WHERE qt.quote_id = qtemp.quote_id)
WHERE processed_ind = 'P';
I have created indexes on product, product_cd and quote_id on both the tables.
Thank you
There are two basic I/O read operations that need to be done to find the required rows.
1. In QTEMP1 find row for a specific QUOTE_ID.
2. In QTEMP2 find all rows where PROCESSED_IND is equal to 'P'.
For every row in (2), the I/O in (1) is executed. So, if there are 10 million rows result for (2), then (1) will be executed 10 million times.
So you want QTEMP1 to be optimised for access via QUOTE_ID - at best it should be using a unique index.
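Given such a unique index, the correlated UPDATE can also be expressed as a single MERGE, which lets the optimizer drive the join instead of probing QTEMP1 row by row. A sketch only; it assumes quote_id is unique in QTEMP1 (otherwise the MERGE raises ORA-30926):

```sql
MERGE INTO qtemp2 qt
USING qtemp1 q1
ON (qt.quote_id = q1.quote_id)
WHEN MATCHED THEN UPDATE
  SET qt.product = q1.product_cd
  WHERE qt.processed_ind = 'P';
```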
Access on QTEMP2 is more complex. I assume that the process indicator is a column with low cardinality. In addition, being a process status indicator, it is likely a candidate for being changed via UPDATE statements - in which case it is a very poor candidate for either a B+ tree index or a bitmap index.
Even if indexed, a large number of rows may be of process type 'P' - in which case the CBO will rightly decide not to waste I/O on the index, but instead spend all the I/O instead on a faster full table scan.
In this case, (2) will result in all 50 million rows to be read - and for each row that has a 'P' process indicator, calling (1).
Any way you look at this, it is a major processing request for the database to perform. It involves a lot of I/O and can involve a huge number of nested SQL calls to QTEMP1... so this is obviously going to be a slow-performing process. The majority of elapsed processing time will be spent waiting for I/O from disks. -
How to Update millions or records in a table
I got a table which contains millions of records.
I want to update and commit every so often, say every 10,000 records. I don't want to do it in one stroke, as I may end up with rollback segment issue(s). Any suggestions please!
Thanks in advance
Group your updates.
1.) Look for a good grouping criterion in your table; an index on it is recommended.
2.) Create a PL/SQL cursor with the grouping criterion in the WHERE clause.
cursor cur_updt (p_crit_id number) is
select * from large_table
where crit_id > p_crit_id;
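With such a criterion in place, a commit-in-batches loop might look like the sketch below. The status column and its predicate are hypothetical; whatever predicate you use must exclude rows that have already been updated, or the loop never terminates:

```sql
DECLARE
  l_updated PLS_INTEGER;
BEGIN
  LOOP
    UPDATE large_table
    SET status = 'DONE'          -- hypothetical update
    WHERE status <> 'DONE'       -- must exclude already-processed rows
      AND ROWNUM <= 10000;       -- batch size
    l_updated := SQL%ROWCOUNT;
    COMMIT;                      -- commit every batch of 10,000
    EXIT WHEN l_updated = 0;
  END LOOP;
END;
/
```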
3.) Now you can commit in a serial loop over all your updates. -
Dear all,
DB : 10.2.0.4.
OS : Solaris 5.10
I have a partitioned table with millions of rows and am updating a field like:
update test set amount = amount*10 where update = 'Y';
When I run this query it generates many archive logs, as it doesn't commit the transaction anywhere.
Please give me an idea how I can commit this transaction after every 2000 rows, so that archive log generation will not be so heavy.
Please guide
Kai
There's not a lot you can do about the amount of redo you generate (unless perhaps you make the table unrecoverable, and that might not help much either).
It's possible that if the column being updated is in an index, dropping that index during the update and recreating it afterwards might help, but that could land you in more trouble.
One area of concern is the amount of undo space for the large transaction; this could even be exceeded and your statement might fail,
and that might be a reason for splitting it into smaller transactions.
Certainly there is no point in splitting down to 2000-record chunks; I'd want to aim much higher than that.
If you feel you want to divide it, the records may contain a field that could be used, e.g. create_date; or if you are able to work partition by partition, that might help.
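Partition by partition, that division might look like the following sketch; the partition name and the flag column name are placeholders for the real ones:

```sql
-- repeat per partition, committing between partitions
UPDATE test PARTITION (p_2010_q1)   -- placeholder partition name
SET amount = amount * 10
WHERE update_flag = 'Y';            -- placeholder for the actual flag column
COMMIT;
```

Each statement then touches only one partition's worth of rows and undo.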
If archive log management is the problem then speaking to the DBA should help.
Hope these thoughts help - but you are responsible for any actions you take, regards - bigdelboy -
How to update the millions of records in oracle database?
How to update the millions of records in oracle database?
The table has constraints and indexes. How do I do this mass update? A normal update takes several hours.
LostWorld wrote:
How to update the millions of records in oracle database?
The table has constraints and indexes; how to do this mass update? A normal update takes several hours.
Please refer to Tom Kyte's answer to your question:
[How to Update millions or records in a table|http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:6407993912330]
Kamran Agayev A. (10g OCP)
http://kamranagayev.wordpress.com
[Step by Step install Oracle on Linux and Automate the installation using Shell Script |http://kamranagayev.wordpress.com/2009/05/01/step-by-step-installing-oracle-database-10g-release-2-on-linux-centos-and-automate-the-installation-using-linux-shell-script/] -
iTunes won't open at all since incomplete update
Tried to install the latest update and rec'd an error code. Ever since then (2 days ago) iTunes won't open at all. I tried to install again via the Apple website. Still won't open. Should I uninstall and reinstall? Will my library remain stored?
Try the following:
1. Go to the Microsoft website to fix install and uninstall problems. Click "Run now" from Fix it to remove all iTunes & related installer files:
http://support.microsoft.com/mats/Program_Install_and_Uninstall
Be aware that Windows Installer CleanUp Utility will not remove the actual program from your computer. However, it will remove the installation files so that you can start the installation, upgrade, or uninstall over.
2. You should remove all instances of iTunes and the rest of the components listed below;
it may be necessary to remove all traces of iTunes, QuickTime, and related software components from your computer before reinstalling iTunes.
Use the Control Panel to uninstall iTunes and related software components in the following order:
iTunes
QuickTime
Apple Software Update
Apple Mobile Device Support
Bonjour
Apple Application Support (iTunes 9 or later)
Follow the instructions from Apple article listed here: http://support.apple.com/kb/HT1923 to remove all components
3. Reboot your computer. Next, download iTunes from here: http://www.apple.com/itunes/download/ and install from scratch -
Execute SQL Task - UPDATE or Data Flow Data Conversion
Good Morning folks,
I want to know which is faster for converting a data type.
I want to convert nvarchar(255) to datetime2,
using t-sql (execute sql task)
UPDATE TBL
SET FIELD1 = CAST(FIELD1 AS DATETIME2)
GO
or data conversion(data flow)
Thanks.
Itz Shailesh, my T-SQL has only one UPDATE, not many UPDATEs... so it's one batch, not 2, 3, 4... So it's only one update, e.g. update table set field1 = cast(field1 as datetime2), field2 = cast(field2 as datetime2), not: update table set field1 = cast(field1 as datetime2) go update table set field2 = cast(field2 as datetime2) go... understand?
Yeah, I understand that you have to update just one field. What I am saying is that if you have to update millions of rows, you should update the rows in batches (let's say batches of 50k). This way you only touch 50k rows at a time, not all the rows in the table.
I see that your row count is smaller; however, I would still prefer the data conversion transformation over an UPDATE statement.
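The batched approach might be sketched like this in T-SQL. Assumptions: the conversion writes into a separate DATETIME2 column (here called FIELD1_DT2, a hypothetical name) so that each pass can skip rows already converted, and TRY_CAST requires SQL Server 2012 or later:

```sql
DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    UPDATE TOP (50000) TBL                       -- one batch of 50k rows
    SET FIELD1_DT2 = TRY_CAST(FIELD1 AS DATETIME2)
    WHERE FIELD1_DT2 IS NULL                     -- skip rows already converted
      AND FIELD1 IS NOT NULL;
    SET @rows = @@ROWCOUNT;                      -- stop when nothing left
END
```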
If this post answers your query, please click "Mark As Answer" or "Vote as Helpful". -
Please see my thread
http://www.adobe.com/cfusion/webforums/forum/messageview.cfm?forumid=1&catid=3&threadid=1396256&enterthread=y
Since none of us are mind readers, some code would probably be a start.
quote:
logically when I update a rec on the user table field("role")
it should update the role on the 'role' tb, right?
Why would that be the case unless you
specifically code it to do so? Are you assuming some sort of
automatic cascading update?
Phil -
ORA-01456 : may not perform insert/delete/update operation
When I use following stored procedure with crystal reports, following error occurs.
ORA-01456 : may not perform insert/delete/update operation inside a READ ONLY transaction
Kindly help me on this, please.
My stored procedure is as under:-
create or replace
PROCEDURE PROC_FIFO
(CV IN OUT TB_DATA.CV_TYPE,FDATE1 DATE, FDATE2 DATE,
MSHOLD_CODE IN NUMBER,SHARE_ACCNO IN VARCHAR)
IS
--DECLARE VARIABLES
V_QTY NUMBER(10):=0;
V_RATE NUMBER(10,2):=0;
V_AMOUNT NUMBER(12,2):=0;
V_DATE DATE:=NULL;
--DECLARE CURSORS
CURSOR P1 IS
SELECT *
FROM FIFO
WHERE SHARE_TYPE IN ('P','B','R')
ORDER BY VOUCHER_DATE,
CASE WHEN SHARE_TYPE='P' THEN 1
ELSE
CASE WHEN SHARE_TYPE='R' THEN 2
ELSE
CASE WHEN SHARE_TYPE='B' THEN 3
END
END
END,
TRANS_NO;
RECP P1%ROWTYPE;
CURSOR S1 IS
SELECT * FROM FIFO
WHERE SHARE_TYPE='S'
AND TRANS_COST IS NULL
ORDER BY VOUCHER_DATE,TRANS_NO;
RECS S1%ROWTYPE;
--BEGIN QUERIES
BEGIN
DELETE FROM FIFO;
--OPENING BALANCES
INSERT INTO FIFO
(VOUCHER_NO,VOUCHER_TYPE,VOUCHER_DATE,TRANS_QTY,TRANS_AMT,TRANS_RATE,
SHOLD_CODE,SHARE_TYPE,ACC_NO,SHARE_CODE,TRANS_NO)
SELECT TO_CHAR(FDATE1,'YYYYMM')||'001' VOUCHER_NO,'OP' VOUCHER_TYPE,
FDATE1-1 VOUCHER_DATE,
SUM(
CASE WHEN
--((SHARE_TYPE ='S' AND DTAG='Y')
SHARE_TYPE IN ('P','B','R','S') THEN
TRANS_QTY
ELSE
0
END
) TRANS_QTY,
SUM(TRANS_AMT),
NVL(CASE WHEN SUM(TRANS_AMT)<>0
AND
SUM(
CASE WHEN SHARE_TYPE IN ('P','B','R','S') THEN
TRANS_QTY
ELSE
0
END
)<>0 THEN
SUM(TRANS_AMT)/
SUM(
CASE WHEN SHARE_TYPE IN ('P','B','R','S') THEN
TRANS_QTY
ELSE
0
END
) END,0) TRANS_RATE,
MSHOLD_CODE SHOLD_CODE,'P' SHARE_TYPE,SHARE_ACCNO ACC_NO,
SHARE_CODE,0 TRANS_NO
FROM TRANS
WHERE ACC_NO=SHARE_ACCNO
AND SHOLD_CODE= MSHOLD_CODE
AND VOUCHER_DATE<FDATE1
--AND
--(SHARE_TYPE='S' AND DTAG='Y')
--OR SHARE_TYPE IN ('P','R','B'))
group by TO_CHAR(FDATE1,'YYYYMM')||'001', MSHOLD_CODE,SHARE_ACCNO, SHARE_CODE;
COMMIT;
--TRANSACTIONS BETWEEND DATES
INSERT INTO FIFO
(TRANS_NO,VOUCHER_NO,VOUCHER_TYPE,
VOUCHER_DATE,TRANS_QTY,
TRANS_RATE,TRANS_AMT,SHOLD_CODE,SHARE_CODE,ACC_NO,
DTAG,TRANS_COST,SHARE_TYPE)
SELECT TRANS_NO,VOUCHER_NO,VOUCHER_TYPE,
VOUCHER_DATE,TRANS_QTY,
CASE WHEN SHARE_TYPE='S' THEN
NVL(TRANS_RATE-COMM_PER_SHARE,0)
ELSE
NVL(TRANS_RATE+COMM_PER_SHARE,0)
END
,TRANS_AMT,SHOLD_CODE,SHARE_CODE,ACC_NO,
DTAG,NULL TRANS_COST,SHARE_TYPE
FROM TRANS
WHERE ACC_NO=SHARE_ACCNO
AND SHOLD_CODE= MSHOLD_CODE
AND VOUCHER_DATE BETWEEN FDATE1 AND FDATE2
AND
((SHARE_TYPE='S' AND DTAG='Y')
OR SHARE_TYPE IN ('P','R','B'));
COMMIT;
--PURCHASE CURSOR
IF P1%ISOPEN THEN
CLOSE P1;
END IF;
OPEN P1;
LOOP
FETCH P1 INTO RECP;
EXIT WHEN P1%NOTFOUND;
V_QTY:=RECP.TRANS_QTY;
V_RATE:=RECP.TRANS_RATE;
V_DATE:=RECP.VOUCHER_DATE;
dbms_output.put_line('V_RATE OPENING:'||V_RATE);
dbms_output.put_line('OP.QTY2:'||V_QTY);
--SALES CURSOR
IF S1%ISOPEN THEN
CLOSE S1;
END IF;
OPEN S1;
LOOP
FETCH S1 INTO RECS;
EXIT WHEN S1%NOTFOUND;
dbms_output.put_line('OP.QTY:'||V_QTY);
dbms_output.put_line('SOLD:'||RECS.TRANS_QTY);
dbms_output.put_line('TRANS_NO:'||RECS.TRANS_NO);
dbms_output.put_line('TRANS_NO:'||RECS.TRANS_NO);
IF ABS(RECS.TRANS_QTY)<=V_QTY
AND V_QTY<>0
AND RECS.TRANS_COST IS NULL THEN
--IF RECS.TRANS_COST IS NULL THEN
--dbms_output.put_line('SOLD:'||RECS.TRANS_QTY);
--dbms_output.put_line('BAL1:'||V_QTY);
UPDATE FIFO
SET TRANS_COST=V_RATE,
PUR_DATE=V_DATE
WHERE TRANS_NO=RECS.TRANS_NO
AND TRANS_COST IS NULL;
COMMIT;
dbms_output.put_line('UPDATE TRANS_NO:'||RECS.TRANS_NO);
dbms_output.put_line('OP.QTY:'||V_QTY);
dbms_output.put_line('SOLD:'||RECS.TRANS_QTY);
dbms_output.put_line('TRANS_NO:'||RECS.TRANS_NO);
dbms_output.put_line('BAL2:'||TO_CHAR(RECS.TRANS_QTY+V_QTY));
END IF;
IF ABS(RECS.TRANS_QTY)>ABS(V_QTY)
AND V_QTY<>0 AND RECS.TRANS_COST IS NULL THEN
UPDATE FIFO
SET
TRANS_QTY=-V_QTY,
TRANS_COST=V_RATE,
TRANS_AMT=ROUND(V_QTY*V_RATE,2),
PUR_DATE=V_DATE
WHERE TRANS_NO=RECS.TRANS_NO;
--AND TRANS_COST IS NULL;
COMMIT;
dbms_output.put_line('UPDATING 100000:'||TO_CHAR(V_QTY));
dbms_output.put_line('UPDATING 100000 TRANS_NO:'||TO_CHAR(RECS.TRANS_NO));
INSERT INTO FIFO
(TRANS_NO,VOUCHER_NO,VOUCHER_TYPE,
VOUCHER_DATE,TRANS_QTY,
TRANS_RATE,TRANS_AMT,SHOLD_CODE,SHARE_CODE,ACC_NO,
DTAG,TRANS_COST,SHARE_TYPE,PUR_DATE)
VALUES
(MCL.NEXTVAL,RECS.VOUCHER_NO,RECS.VOUCHER_TYPE,
RECS.VOUCHER_DATE,RECS.TRANS_QTY+V_QTY,
RECS.TRANS_RATE,(RECS.TRANS_QTY+V_QTY)*RECS.TRANS_RATE,RECS.SHOLD_CODE,
RECS.SHARE_CODE,RECS.ACC_NO,
RECS.DTAG,NULL,'S',V_DATE);
dbms_output.put_line('INSERTED RECS.QTY:'||TO_CHAR(RECS.TRANS_QTY));
dbms_output.put_line('INSERTED QTY:'||TO_CHAR(RECS.TRANS_QTY+V_QTY));
dbms_output.put_line('INSERTED V_QTY:'||TO_CHAR(V_QTY));
dbms_output.put_line('INSERTED RATE:'||TO_CHAR(V_RATE));
COMMIT;
V_QTY:=0;
V_RATE:=0;
EXIT;
END IF;
IF V_QTY>0 THEN
V_QTY:=V_QTY+RECS.TRANS_QTY;
ELSE
V_QTY:=0;
V_RATE:=0;
EXIT;
END IF;
--dbms_output.put_line('BAL3:'||V_QTY);
END LOOP;
V_QTY:=0;
V_RATE:=0;
END LOOP;
CLOSE S1;
CLOSE P1;
OPEN CV FOR
SELECT TRANS_NO,VOUCHER_NO,VOUCHER_TYPE,
VOUCHER_DATE,TRANS_QTY,
TRANS_RATE,TRANS_AMT,SHOLD_CODE,B.SHARE_CODE,B.ACC_NO,
DTAG,TRANS_COST,SHARE_TYPE, B.SHARE_NAME,A.PUR_DATE
FROM FIFO A, SHARES B
WHERE A.SHARE_CODE=B.SHARE_CODE
--AND A.SHARE_TYPE IS NOT NULL
ORDER BY VOUCHER_DATE,SHARE_TYPE,TRANS_NO;
END PROC_FIFO;
Thanks and Regards,
Luqman
Copy from TODOEXPERTOS.COM:
Problem Description
When running a RAM build you get the following error as seen in the RAM build
log file:
14:52:50 2> Updating warehouse tables with build information...
Process Terminated In Error:
[Oracle][ODBC][Ora]ORA-01456: may not perform insert/delete/update operation inside a READ ONLY transaction
(SIGENG02) ([Oracle][ODBC][Ora]ORA-01456: may not perform insert/delete/update operation inside a READ ONLY transaction
) Please contact the administrator of your Oracle Express Server application.
Solution Description
Here are the following suggestions to try out:
1. You may want to use OCI instead of ODBC for your RAM build, provided you
are running an Oracle database. This is set up through the RAA (Relational Access
Administrator) maintenance procedure.
Also make sure your tnsnames.ora file is set up correctly, in either the net80/admin
or network/admin directory, to point to the correct instance of Oracle.
2. Commit or rollback the current transaction, then retry running your
RAM build. Seems like one or more of your lookup or fact tables have a
read-only lock on them. This occurs if you modify or add some records to your
lookup or fact tables but forget to issue a commit or rollback. You need to do
this through SQL+.
3. You may need to check what permissions have been given to the relational user.
The error could be a permissions issue.
You must give the 'connect' permission or role to the RAM/relational user. You may
also try giving the 'dba' and 'resource' privileges/roles to this user as a test. In order to
keep it simple, make sure all your lookup, fact and wh_ tables are created on
a single new tablespace. Create a new user with the above privileges as the
owner of the tablespace, as well as the owner of the lookup, fact and wh_
tables, in order to see if this is a permissions issue.
In this particular case, the problem was resolved by using oci instead of odbc,
as explained in suggestion# 1. -
OIM 11g R2 - Bulk Catalog Update
Hi,
We have a requirement to write a custom scheduler which updates the entries in the catalog table. We have done it using the CatalogService API, but we might need to update millions of records during each run. Is there any recommended way to do this? Thanks.
It should support that. Bulk Load loads the data directly into the database table using SQL*Loader. So as long as you have the UDF column in the USR table and you have specified it in the CSV file, I believe it should work.
-
Droid Eris 2.1 Update - How long it takes?
I bought 2 Eris phones with Android 1.5 and was told that within 2 weeks they would be upgraded over the air to Android 2.1.
But after 3 weeks they have not sent any email/update to my phones. I have seen so many old comments about the Eris 2.1 upgrade, but when I call Verizon (a few times) all of the customer service reps seem surprised and keep asking the same questions and trying the same method; one of them even rooted my phone, but even then I do not get any over-the-air upgrade. On my 3rd call one of them told me to talk to HTC, but the HTC people keep giving us the runaround. They are taking advantage of the convoluted mess of having three companies involved: Google, HTC and Verizon. No one is saying how long it takes to get the update or when we will get it. According to HTC they have updated a million phones and still have a few million to go, but if HTC cannot support this, why can't they release the software to Verizon or put it on their website? I am guessing they do not want another million customers, if this is the kind of service they give. My issue is that we bought our phones from Verizon and pay money to Verizon, so why am I needing to call HTC? Because of HTC, Verizon is losing more customers than they will lose to the iPhone. Has anyone had a similar experience?
I totally agree. Do NOT update to 2.1 yet; wait and see if these updates they're pushing out fix all the bugs. From the time I downloaded 2.1 to my Eris (which, till that point, I loved; I had purchased it less than a month before 2.1), I have had nothing but problems: horribly laggy dialing, force closings on my texting, being unable to access my speed dial... the list goes on. Although, surprisingly, not the silent-call problem so much; only minor incidences of that. I finally got so frustrated that I de-activated the Eris and went back to using my LG enV, and am waiting till I hear whether this new update fixes all the problems before I reactivate the Eris, which is currently, in effect, an expensive paperweight. I wish I had never updated from 1.5.
If this update doesn't work, I'm going back to Verizon and finding some way to get out of my contract. I'm a 12-year customer of theirs, but I'm ready to walk if I can't use this phone; that's how unhappy I am with it after 2.1. So think hard before upgrading your OS!
-
I was doing an update but suddenly the system went down
I did an update statement (or I was in edit mode and then ran it), and now the system is just dead.
What is the problem? Please help me!
Is your session hanging or is the entire database down? Can you create new database sessions? If not, what error are you receiving? If it is just your session that is hung, how large is the UPDATE statement you submitted (how many rows will it update)? It is quite possible for an UPDATE statement that updates millions of rows to take hours to complete.
Justin
Distributed Database Consulting, Inc.
http://www.ddbcinc.com/askDDBC -
Extracting data into two ODS at same time
Hi experts,
I am on BW 3.5, and was wondering if the scenario below is possible.
We are extracting 6.5 million records from our R/3 system into ODS 1.
But the records should be split up and go into either ODS 1 or ODS 2 depending on their company code.
I know this can be done by creating two DataSources and using two different modules to decide what goes where, but this would require two extractions, one per module.
I want to know if there is a way this can be done with one extraction,
e.g. the extraction would be to ODS 1, and in the update rule's start routine we would decide whether the record belongs in ODS 1 or ODS 2;
if it is ODS 2, can we write from this update rule to another ODS?
Thanks,
Shane.
Hiya Santhosh,
currently I have created two update rules, one to each ODS. The problem is that the load to ODS 1 is 6.5 million + records, while the load to ODS 2 is 20,000 records.
The extraction from R/3 takes about 4+ hours, so I would have to perform the extraction twice to populate them both, which is a waste of time for the 20,000 records.
Is it possible to get all the records in with one extract and then dictate which ODS it populates, without creating a third ODS to initially populate?
Thanks,
Shane. -
Max number of records to hold in explicit cursor
Hi Everyone,
What is the maximum number of records that can be held in
an explicit cursor for manipulation? I need to process millions of records.
Can I hold them in cursors, or should I use a temp table to hold those records and
do the fixes with volume control?
Thanks
Hi Kishore, sorry for the delayed response.
Table1
prim_oid sec_oid rel_oid
pp101 cp102 101
pp101 cp103 101
pp102 cp104 101
pp102 cp105 101
Table2
ID p_oid b_oid rel_oid
1 pp101 -51 102
2 pp102 -51 102
3 cp102 52 102
4 cp103 53 102
5 cp104 54 102
6 cp105 54 102
From table1 I get the parent and child recs based on rel_oid=101,
the prim_oid and sec_oid are related to another col in table2 again
with a rel_oid. I need to get all the prim_oid that are linked to -ive b_oid
in table2 whose child sec_oid are linked with +ive b_oid.
In the above case, parent pp101 linked to 2 child cp102 & cp103 and
pp102 linked to 2 child cp104 & cp105. Both pp101 and pp102 are linked
to -ive b_oid (table2), but the children of these parents are linked to +ive b_oids.
But pp101's children are linked to 2 diff b_oid and pp102's childrend are linked
to same b_oid. For my requirement I can only update b_oid of pp102 with that
of its children b_oid whereas cannot update pp101's b_oid as it children are
linked to diff b_oid's.
I've a sql that will return prim_oid, b_oid, sec_oid, b_oid as a record as below
1 pp101 -51 3 cp102 52
1 pp101 -51 4 cp103 53
2 pp102 -51 5 cp104 54
2 pp102 -51 6 cp105 54
With a cursor SQL that returns records as above, it would be difficult to process
distinct parents and distinct children. So I have a cursor that returns only the parent
records, as below:
1 pp101 -51
2 pp102 -51
and then for each parent I get the distinct child b_oid; if I get only one child
b_oid I update the parent, else I don't. But the problem is that table2 has 8 million parent records
with links to a -ve b_oid, but the children of only 2 million records link to only one distinct
b_oid.
If I include volume control in the cursor SQL, chances are all returned rows might be like
pp101, for which no update is required, so I should not have volume control in the
cursor SQL, which will now return all 8 million records (my assumption).
Is there any other feasible solution? Thanks
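Instead of driving a cursor over all 8 million parents, the whole rule ("update a parent's b_oid only when its children resolve to exactly one distinct b_oid") can be pushed into a single set-based UPDATE. A sketch only; the table and column names follow the sample data above, and the rel_oid constants 101/102 are assumptions taken from it:

```sql
UPDATE table2 p
SET p.b_oid = (SELECT MIN(c.b_oid)              -- the single shared child b_oid
               FROM table1 r
               JOIN table2 c ON c.p_oid = r.sec_oid
                            AND c.rel_oid = 102
               WHERE r.prim_oid = p.p_oid
                 AND r.rel_oid = 101)
WHERE p.rel_oid = 102
  AND p.b_oid < 0                               -- only parents with a negative b_oid
  AND (SELECT COUNT(DISTINCT c.b_oid)           -- children must share one b_oid
       FROM table1 r
       JOIN table2 c ON c.p_oid = r.sec_oid
                    AND c.rel_oid = 102
       WHERE r.prim_oid = p.p_oid
         AND r.rel_oid = 101) = 1;
```

On the sample data this would update pp102's row (children cp104/cp105 both map to 54) and skip pp101 (children map to 52 and 53).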