Updating billions of rows from historical data
Hi,
we have a task of updating billions of rows in some tables (one at a time) for some past dates.
Could you please suggest the best approach to follow in such cases, where a simple UPDATE statement would take who knows how long?
The scenario is something like this:
TEST table.
col1 col2 col3
Now we added one more column col4.
Now all the records in this table need to be updated so that col4 gets the col3 value for past data (past in the sense that, at the time this is executed in the production DB, sysdate-1 will be past data).
I thought of using a FORALL kind of clause, but I still don't know whether that will be completely useful or not.
I am using Oracle 10g Release 2.
Please give me your expert opinions on this scenario, so that the script to update such voluminous data can execute successfully with good performance.
Thanks,
Aashish
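One approach often suggested for this kind of one-off backfill is direct parallel DML rather than row-by-row PL/SQL. This is a sketch only: it assumes the table is named TEST, that col4 should be filled from col3, that a maintenance window allows one large transaction, and the parallel degree of 8 is purely illustrative.

```sql
-- Sketch: back-fill the new column in a single parallel DML statement.
ALTER SESSION ENABLE PARALLEL DML;

UPDATE /*+ PARALLEL(t 8) */ test t
SET    t.col4 = t.col3
WHERE  t.col4 IS NULL;   -- restrict to rows not yet back-filled

COMMIT;
```

If space allows, a CREATE TABLE AS SELECT of the whole table with col4 computed, followed by index rebuilds and a rename, is often faster still, since it avoids generating undo for billions of updated rows.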
Hi Mohamed,
thanks for your help.
However, in my case it's not possible to drop and recreate the table just to update a single column with values from another column.
I am trying to do a POC for this using a bulk update, but I am not able to get it working.
I did something like this to test with 5,000,000 records.
create table test
( col1 varchar2(100),
col2 varchar2(100));
inserted 5,000,000 records into the col1 column of this table. Then, when I tried to do something like
declare
  CURSOR s_cur IS
    SELECT col1
    FROM test;
  TYPE fetch_array IS TABLE OF s_cur%ROWTYPE;
  s_array fetch_array;
begin
  OPEN s_cur;
  LOOP
    FETCH s_cur BULK COLLECT INTO s_array LIMIT 10000;
    FORALL i IN 1..s_array.COUNT
      UPDATE TEST
      SET col2 = s_array(i); -- don't know if this is correct
    EXIT WHEN s_cur%NOTFOUND;
  END LOOP;
  CLOSE s_cur;
  COMMIT;
END;
It gives me some error.
Can you please correct me in this?
Still I am looking for better way...
Rgds,
Aashish
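For reference, a version of this POC that compiles on 10g Release 2: the posted block fails because FORALL cannot reference a field of a record in 10g (that restriction was lifted in 11g), and the UPDATE has no WHERE clause, so each batch would update every row. A common fix, sketched here against the TEST table above, is to fetch the ROWID and col1 into two separate collections and update by ROWID:

```sql
DECLARE
  CURSOR s_cur IS
    SELECT rowid AS rid, col1
    FROM   test;
  TYPE rid_tab  IS TABLE OF ROWID;
  TYPE col1_tab IS TABLE OF test.col1%TYPE;
  l_rids  rid_tab;
  l_col1s col1_tab;
BEGIN
  OPEN s_cur;
  LOOP
    FETCH s_cur BULK COLLECT INTO l_rids, l_col1s LIMIT 10000;
    EXIT WHEN l_rids.COUNT = 0;

    FORALL i IN 1 .. l_rids.COUNT
      UPDATE test
      SET    col2  = l_col1s(i)
      WHERE  rowid = l_rids(i);   -- exactly one row per fetched ROWID

    COMMIT;  -- per-batch commit; move after the loop for one transaction
  END LOOP;
  CLOSE s_cur;
END;
/
```

Note that committing inside the loop while the cursor is open risks ORA-01555 on a busy system; for a genuinely huge table, the single parallel UPDATE is usually simpler and faster.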
Similar Messages
-
Delete a row from a data object using an AQ-driven EMS
Hello, technetwork.
I need to show, in my BAM dashboard, the contents of a table that changes frequently (inserts, updates and deletes) in ORDBMS 10g.
What I have done is:
- Create two different queue tables, queues.
- Create two triggers: one for "AFTER insert or UPDATE" and the other for "BEFORE DELETE".
- Create two EMS, one configured to Upsert operations and the other to Delete.
- The Upsert EMS works, but the Delete EMS does not.
- Both EMS Metrics say that they work fine and commit the messages I send to the ADC.
- Testing showed records are populated and updated in the ADC but never deleted.
There is a detailed use case for "Creating an EMS Against Oracle Streams AQ JMS Provider" in the Fusion Middleware Developer's Guide for SOA Suite, but it deals only with upserts. I am using the latest versions of SOA Suite and WebLogic. Official support has no information either.
I hope writing a web service client isn't the only way to delete a row from a data object. Am I missing something? Any help will be much appreciated.
My EMS config:
Initial Context Factory: weblogic.jndi.WLInitialContextFactory.
JNDI Service Provider URL: t3://<spam>:80.
Topic/Queue Connection Factory Name: jms/BAMAQTopicCF.
Topic/Queue Name: jms/ProdGlobalBAMD_flx.
JNDI Username: .
JNDI Password: .
JMS Message Type: TextMessage.
Durable Subscriber Name (Optional): BAM_ProdGlobalBAMD.
Message Selector (Optional): .
Data Object Name: /bam/ProdGlobalBAM_flx.
Operation: Delete.
Batching: No.
Transaction: No.
Start when BAM Server starts: No.
JMS Username (Optional): .
JMS Password (Optional): .
XML Formatting
Pre-Processing
Message Specification
Message Element Name: row
Column Value
Element Tag
Attribute
Source to Data Object Field Mapping
Key Tag name Data Object Field
. BARCODE. BarCode.
Added my EMS config
Ram_J2EE_JSF wrote:
How to accomplish this using JavaScript?
Using JavaScript? Well, you know, JavaScript runs on the client side and operates on the HTML DOM tree only. The JSF code is completely irrelevant. Open your JSF page in your favourite web browser and view the generated HTML source. Then just base your JavaScript function on it. -
Abap logic not fetching multiple rows from master data table
Hi
I just noticed that my logic fetches only one row from the master data table.
ProdHier table:
PRODHIERARCHY   Level
1000            1
1000011000      2
10000110003333  3
10000110004444  3
10000110005555  3
The logic only fetches one row of level 3; I would like to fetch all level 3 rows.
DATA: ITAB type table of /BI0/PPROD_HIER,
wa like line of ITAB.
Select * from /BI0/PPROD_HIER INTO wa where /BIC/ZPRODHTAS = 3.
IF wa-PROD_HIER(10) = SOURCE_FIELDS-PRODH2.
RESULT = wa-PROD_HIER.
ELSEIF wa-PROD_HIER(5) = SOURCE_FIELDS-PRODH1.
RESULT = wa-PROD_HIER.
ENDIF.
ENDSELECT.
thanks
Hi,
I have implemented the logic in an end routine and it still reads only the first row.
I am loading only PRODH1 and PRODH2, but now I want to get all values of PRODH3 from the master data table.
The first 5 characters are PRODH1 and the first 10 characters belong to PRODH2.
Whenever PRODH2 = 1000011000 in the source, I should get the following values:
10000110001110
10000110001120
10000110001130
I have multiple rows of 1000011000 so my result should be
1000011000 10000110001110
1000011000 10000110001120
1000011000 10000110001130
DATA: ITAB type table of /BI0/PPROD_HIER,
wa like line of ITAB.
data rp type _ty_s_TG_1.
Select * from /BI0/PPROD_HIER INTO table itab where /BIC/ZPRODHTAS = 3.
LOOP AT RESULT_PACKAGE INTO rp.
read table itab into wa with key PROD_HIER(5) = rp-PRODH1.
IF sy-subrc EQ 0.
rp-PRODH3 = wa-PROD_HIER.
ELSE.
read table itab into wa with key PROD_HIER(10) = rp-PRODH2.
IF sy-subrc EQ 0.
rp-PRODH3 = wa-PROD_HIER.
ENDIF.
ENDIF.
MODIFY RESULT_PACKAGE FROM rp.
ENDLOOP.
Edited by: Bhat Vaidya on Sep 10, 2010 11:27 AM
Edited by: Bhat Vaidya on Sep 10, 2010 11:37 AM -
Sqlldr - can you skip last row from a data file
Hi
I need to skip the last line of the data file when I load the data into the tables. Is it possible to do this using sqlldr, and if yes, how?
Also, the first row in the data file, which has a single column, needs to be loaded into all the rows of the table. How can I do that?
Example
020051213
1088110 0047245 A 000000000000GB 00000496546700
1088110 0719210 A 000000000000GB 00001643982200
1088110 0727871 A 000000000000GB 00000083008900
9010163
The first line needs to go into all the rows, and the last row needs to be skipped. The rest can be loaded normally. I think this would take multiple loads. Can anybody please help.
TIA
Here I am sending a sample control file which uses the WHEN clause in SQL*Loader:
load data
infile 'load_4.dat'
discardfile 'load_4.dsc'
insert
into table sql_loader_4_a
when field_2 = 'Fruit'
(
field_1 position(1) char(8),
field_2 position(9) char(5)
)
into table sql_loader_4_b
when field_2 = 'City'
(
field_1 position(1) char(8),
field_2 position(9) char(5)
)
-
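Applied to the original question, the same WHEN idea can skip the trailer record. This is a sketch only: it assumes the record type is encoded in column 1 of the sample data ('0' = header, '1' = detail, '9' = trailer), and the field names and positions are illustrative, not taken from the actual file layout.

```sql
-- Sketch: load only detail records; header and trailer are discarded.
LOAD DATA
INFILE 'data.dat'
INTO TABLE detail_rows
WHEN (1:1) = '1'
(
  serial_no  POSITION(1:7)   CHAR,
  item_no    POSITION(9:15)  CHAR,
  status     POSITION(17:17) CHAR
)
```

Copying the header date into every row is awkward in pure SQL*Loader; one common route is to load the header record into its own one-row staging table with a second `WHEN (1:1) = '0'` clause, then run an UPDATE (or join at query time) after the load.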
SQL query - select one row from each date group
Hi,
I have data as follows.
Visit_Date Visit_type Consultant
05/09/2009 G name1
05/09/2009 G name2
05/09/2009 G name3
06/09/2009 I name4
07/09/2009 G name5
07/09/2009 G name6
How to select data as follows
05/09/2009 G name1
06/09/2009 G name4
07/09/2009 G name5
i.e. one row for every visit_date.
Thanks,
MK Nathan
Edited by: k_murali on Oct 7, 2009 10:44 PM
Are you after this (one row per date per visit_type)?
with dd as (select to_date('05/09/2009','MM/DD/YYYY') Visit_Date, 'G' Visit_type, 'name1' Consultant from dual
union all
select to_date('05/09/2009','MM/DD/YYYY') Visit_Date, 'G' Visit_type, 'name2' Consultant from dual
union all
select to_date('05/09/2009','MM/DD/YYYY') Visit_Date, 'G' Visit_type, 'name3' Consultant from dual
union all
select to_date('06/09/2009','MM/DD/YYYY') Visit_Date, 'G' Visit_type, 'name4' Consultant from dual
union all
select to_date('07/09/2009','MM/DD/YYYY') Visit_Date, 'G' Visit_type, 'name5' Consultant from dual
union all
select to_date('07/09/2009','MM/DD/YYYY') Visit_Date, 'G' Visit_type, 'name6' Consultant from dual
union all
select to_date('07/09/2009','MM/DD/YYYY') Visit_Date, 'F' Visit_type, 'name7' Consultant from dual)
select trunc(visit_date) visit_date, visit_type, min(consultant)
from dd
group by trunc(visit_date), visit_type
order by trunc(visit_date);
VISIT_DAT V MIN(C
09/MAY/09 G name1
09/JUN/09 G name4
09/JUL/09 G name5
09/JUL/09 F name7
Or are you after only one row per date?
with dd as (select to_date('05/09/2009','MM/DD/YYYY') Visit_Date, 'G' Visit_type, 'name1' Consultant from dual
union all
select to_date('05/09/2009','MM/DD/YYYY') Visit_Date, 'G' Visit_type, 'name2' Consultant from dual
union all
select to_date('05/09/2009','MM/DD/YYYY') Visit_Date, 'G' Visit_type, 'name3' Consultant from dual
union all
select to_date('06/09/2009','MM/DD/YYYY') Visit_Date, 'G' Visit_type, 'name4' Consultant from dual
union all
select to_date('07/09/2009','MM/DD/YYYY') Visit_Date, 'G' Visit_type, 'name5' Consultant from dual
union all
select to_date('07/09/2009','MM/DD/YYYY') Visit_Date, 'G' Visit_type, 'name6' Consultant from dual
union all
select to_date('07/09/2009','MM/DD/YYYY') Visit_Date, 'F' Visit_type, 'name7' Consultant from dual)
select trunc(visit_date) visit_date, min(visit_type) visit_type, min(consultant)
from dd
group by trunc(visit_date)
order by trunc(visit_date);
VISIT_DAT V MIN(C
09/MAY/09 G name1
09/JUN/09 G name4
09/JUL/09 F name5 -
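An analytic alternative to the GROUP BY approach, assuming the rows live in a table called visits (a hypothetical name for the poster's table), keeps one whole row per date rather than mixing MIN() values taken from different rows:

```sql
-- Sketch: pick the first row per visit date, whole-row semantics.
SELECT visit_date, visit_type, consultant
FROM  (SELECT v.*,
              ROW_NUMBER() OVER (PARTITION BY TRUNC(visit_date)
                                 ORDER BY visit_type, consultant) rn
       FROM   visits v)
WHERE rn = 1;
```

The ORDER BY inside the window defines which row counts as "first"; change it to match whatever tie-breaking rule the requirement actually has.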
How to get two rows from this data?
SQL Gurus,
I need to summarize the following data into two rows (two rows based on the example data below, but it could be any number of rows if there are more breaks in the consecutive numbers).
DETAIL_ID FM_SERIAL_NUMBER TO_SERIAL_NUMBER
63009 11 11
63009 12 12
63009 13 13
63009 14 14
63009 15 15
63009 16 16
63009 17 17
63009 18 18
63009 19 19
63009 20 20
63009 228 228
I need to get two rows: one showing 11-20 (because the numbers 11 to 20 are consecutive),
and the other row showing 228 - 228.
Any help is appreciated
Regards,
Srini
The example I gave had some issues.
Here is an updated version of the code,
provided your detail_id, f_serial_no and t_serial_no are numbers.
Thanks to the example provided by Karthick_Arp
link: generating one order
WITH t AS
(SELECT 63009 a,
level b ,
level c
FROM dual CONNECT BY level < 10
UNION ALL
SELECT 63009 , 228,228 FROM dual
UNION ALL
SELECT 63009 , 229,229 FROM dual
UNION ALL
SELECT 63009 , 238,238 FROM dual
UNION ALL
SELECT 63009,239,239 FROM dual
UNION ALL
SELECT 630010,223,223 FROM dual
UNION ALL
SELECT 630010,224,224 FROM dual
union all
SELECT 63009,232,232 FROM dual
)
, t1 as (
select a, b, c, decode(b-nvl(lag(b) over (partition by a order by b),1),1,0,b) d from t)
,t2 as (
select a, b, c,d
from (select row_number() over(order by b) rno, a,b,c,d
from t1) t
model
return updated rows
dimension by (rno)
measures (a, b, c,d)
rules update
(
d[any] = case when d[cv()] = 0 then nvl(d[cv()-1],0) else d[cv()] end
)
)
select a,min(b),max(b) from t2
group by a,d
Output:
63009 1 9
630010 223 224
63009 232 232
63009 228 229
63009 238 239
Alvinder
Edited by: alvinder on Feb 20, 2009 9:28 AM -
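A shorter way to get the same islands of consecutive serial numbers, without the MODEL clause, is the "Tabibitosan" trick: subtracting ROW_NUMBER() from the serial number yields a constant within each consecutive run, which can then be grouped on. This is a sketch assuming the data sits in a hypothetical table called serials with the columns shown in the question.

```sql
-- Sketch: gaps-and-islands via serial_number - row_number().
SELECT detail_id,
       MIN(fm_serial_number) AS fm_serial_number,
       MAX(to_serial_number) AS to_serial_number
FROM  (SELECT s.*,
              fm_serial_number
                - ROW_NUMBER() OVER (PARTITION BY detail_id
                                     ORDER BY fm_serial_number) AS grp
       FROM   serials s)
GROUP BY detail_id, grp
ORDER BY detail_id, fm_serial_number;
```

For the sample data (11-20 and 228), rows 11-20 all get the same grp value and collapse to one 11-20 row, while 228 forms its own group.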
Updating applet without backup from previous data applet
Dear all,
I want to update my applet from the terminal. I would follow these steps:
1) take a backup of the current applet data.
2) delete the old applet and install the new one.
3) restore all of the backup data into the new applet.
Is there any way to update the applet without taking a data backup from it?
Thanks a lot.
Yes, you can separate your data into a different applet that exposes a shareable interface to access the data. You then have your business logic in a main applet that can be deleted and reloaded independently of the data applet. You need to be able to get a reference to the data applet when you reload your main applet, but when you do, you have access to all of the existing data. You cannot delete and reload the data applet though, as that would remove the data as well.
Other than that the answer is no.
Cheers,
Shane -
How to update a single row of data table
How can we update a single row of the data table by clicking the button in the same row?
Thanks in advance.
Hi!
What do you mean by 'update'? Get fresh data from the DB, or change data and commit it to the DB?
If commit, try to read here:
http://developers.sun.com/jscreator/learning/tutorials/2/inserts_updates_deletes.html
Thanks,
Roman. -
Unable to delete rows from Target.
Hello everyone,
I am unable to delete rows from the target data store. Here is what I have done.
Source Oracle 10g - staging 11g - Target Oracle 11g
I have implemented consistent set CDC on the data model in staging, added 2 tables to CDC and turned on the journals. Tables A and B are joined via column E (the primary key of table A). Table A is the master table; table B is the child table (it holds the foreign key). The target columns consist of all the columns of both table A and table B.
Following is what I am able to do and not to do
ABLE TO DO: If data is inserted into both or either of the journalized tables, I can successfully load it into the target by performing the following steps: 1. extend the consistency window at model level; 2. lock the subscriber; 3. run the interface with either source table marked as journalized data only; 4. unlock the subscriber and purge the journal.
ABLE TO DO: If data is updated in either of the journalized tables, along with the steps mentioned above I can execute two interfaces: in one interface, table A is marked as journalized data only and joined with table B; in the second interface, table B is marked as journalized data only and joined to table A.
NOT ABLE TO DO: If data is deleted from one or both tables, it shows up as journalized data in JV$D<tablename>, marked as D with the date and subscriber name, but when I run the interfaces (extending the window, locking the subscriber, executing both interfaces, unlocking the subscriber, purging the journals), no change takes place in the target. After the unlock-subscriber step, the journalized data is removed from the JV$D view. Please let me know what I am doing wrong here. How can rows deleted from the source also be deleted from the TARGET?
NOTE: In the flow table, SYNC_JRNL_DELETES is YES.
In the model, under the journalized tables tab, the tables are in the following order: table A followed by table B.
Thanks in advance
Greenwich
Sorry, I still do not get it. When you say "It's a legacy app", are you talking about the VB.NET app?
If so, then I repeat myself :-) Why not connect to the SQL Server directly?
* Even if you need information from several databases (for example Access + SQL Server), in most cases it is much better to connect directly and fetch each piece of information into the app. Then, in your app, you can combine the information and analyse it.
Access app is the legacy app. -
Multiple Selection from a data block
Hello,
I have a data block returning the names and surnames of employees. Can I select multiple rows from that data block?
Thank you
You could put a checkbox on the row and, if it is ticked, interpret this as a selected row.
Sometimes it is useful to store this type of selection in some kind of structure, like a record group or an index-by table, and process those rows rather than read the data block. -
How to delete a row from AbstractTableModel
My table is using an object array to populate the data:
static Object[][] data = new Object[LineCount()][5];
So when I delete a row, should I delete it from the model or from the object array?
Rgds
When you populate your table with a two-dimensional array, the table actually creates a DefaultTableModel (which extends AbstractTableModel). The DefaultTableModel has methods which allow you to add/remove rows from the data model. Read the DefaultTableModel API for more information.
-
Alternatives to using update to flag rows of data
Hello
I'm looking into ways of improving the speed of a process, primarily by avoiding the use of an UPDATE statement to flag rows in a table.
The problem with flagging the rows is that the table has millions of rows in it.
The logic is a bit like this:
Set flag_field to 'F' for all rows in the table (e.g. 104 million rows updated).
For a set of rules, loop around this:
Get all rows that satisfy the criteria of a processing rule (200000 rows found).
Process the collected rows' data (200000 rows processed).
Set flag_field to 'T' for the rows processed (200000 rows updated).
End loop.
Once a row in the table has been processed, it shouldn't be collected by the criteria of any rule further down the list; hence it needs to be flagged so that it doesn't get picked up again.
With there being millions of rows in the table, and some rules processing only 200k rows, I've learnt recently that this will create a lot of undo and will thus take a long time. (Any thoughts on that?)
Can anyone suggest anything other than using an UPDATE statement to flag each row, to avoid it being processed again?
Appreciate the help.
With regard to speeding up the process, the answer is "it depends". It all hinges on exactly what you mean by "process the rows' collected data". What does the processing involve?
The processing involved is very straightforward.
The data in these large tables stays unchanged. For the sake of this example I'll call the big table orig_data_table; the rules are set by users who have their own tables built (for this example we'll call it target_data_table).
The rules are there to take rows from orig_data_table and insert them into target_data_table.
e.g.
update orig_data_table set processed='F';
Rule #1 states: select * from orig_data_table where feature_1 = 310;
This applies to 3000 rows in orig_data_table.
These 3000 rows get inserted into target_data_table.
The final step is to flag these 3000 rows in orig_data_table so that they are not used by any rule following rule #1:
update orig_data_table set processed='T' where feature_1 = 310;
commit;
Rule #2 states: select * from orig_data_table where feature_1=310 and destination='Asia' and orig_data_table.processed = 'F';
so it won't pick up the 3000 rows that were processed as part of rule #1.
Once rule #2 has got its rows from orig_data_table (e.g. 400000 rows),
those get inserted into target_data_table, followed by them being flagged to avoid being retrieved again:
update orig_data_table set processed='T' where destination='Asia';
commit;
Continue on to rule #3...
- Is the process some kind of transformation of the ~200,000 selected rows which could possibly be achieved in SQL alone, or is it an extremely complex transformation which cannot be done in pure SQL?
It's not at all complex. As I say, the data in orig_data_table is unchanged, bar the processed field, which is initially set to 'F' for all rows and then set to 'T' after each rule for the rows fulfilling that rule's criteria.
- Does the FLAG_FIELD exist purely for the use of this process, or is it referred to by other procedures?
The flag_field is only for this purpose and is not used elsewhere.
Having said that, as a first step to simply avoid the update of the flag field, I would suggest that you use bulk processing and include a table of booleans to act as the indicator of whether a particular row has been processed or not.
Could you elaborate a bit more on this table of booleans for me? It sounds like an interesting approach for me to test...
many thanks again
Sandip -
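To make the "table of booleans" suggestion concrete, here is a sketch only: the rule predicates (feature_1 = 310, destination = 'Asia') are taken from the example above, the insert bodies are left as placeholder comments, and the associative array is keyed by the ROWID string so the processed flag lives in session memory instead of in an updated column:

```sql
DECLARE
  TYPE t_done IS TABLE OF BOOLEAN INDEX BY VARCHAR2(18);
  l_done t_done;            -- in-memory "processed" flags, keyed by ROWID
  l_key  VARCHAR2(18);
BEGIN
  -- Rules are applied in priority order; a row handled by an earlier
  -- rule is skipped by later ones via the map, so no flag-column
  -- UPDATE (and none of its undo) is ever generated.
  FOR r IN (SELECT rowid AS rid, feature_1, destination
            FROM   orig_data_table) LOOP
    l_key := ROWIDTOCHAR(r.rid);
    IF NOT l_done.EXISTS(l_key) THEN
      IF r.feature_1 = 310 THEN            -- rule #1 (example predicate)
        NULL; -- INSERT INTO target_data_table ... ;
        l_done(l_key) := TRUE;
      ELSIF r.destination = 'Asia' THEN    -- rule #2 (example predicate)
        NULL; -- INSERT INTO target_data_table ... ;
        l_done(l_key) := TRUE;
      END IF;
    END IF;
  END LOOP;
END;
/
```

Be aware that holding an entry per processed row for a 104-million-row table can consume a great deal of PGA; if that becomes a problem, ordering the rules so each is expressed as an INSERT ... SELECT with NOT EXISTS against target_data_table avoids both the flag column and the in-memory map.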
Multiple row update of a table from another one
I'm trying to do a multiple-row update with data from a different table, but it's giving me an error.
update inv.mtl_system_items_b
set attribute1 = t2.description,
last_update_date = SYSDATE,
last_updated_by = 3606
from inv.mtl_system_items_b t2
inner join TMP.TMP_INACTIVAR_PRODUCTOS t1
on t2.segment2 = t1.inventory_item_id;
Edited by: user8986013 on 21-may-2010 14:15
Hi,
Whenever you have a question involving an error message, post the complete error message, including the line number. Don't you think that might help people solve your problem?
Whenever you have any question, post a little sample data (CREATE TABLE and INSERT statements) and the results you want from that data. (In the case of a DML statement, like UPDATE, the INSERT statements show the tables before the change, and the results are the contents of the changed table after it.)
Review the syntax of the UPDATE statement in the SQL Language manual:
http://download.oracle.com/docs/cd/B28359_01/server.111/b28286/statements_10008.htm#sthref9598
MERGE may be more efficient and easier to use than UPDATE. -
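The UPDATE ... FROM form in the question is SQL Server syntax and is not valid in Oracle, which is why it errors. In Oracle the usual rewrite is a MERGE. This is a sketch under an assumption the question leaves ambiguous: that the new description comes from TMP_INACTIVAR_PRODUCTOS (the posted statement references t2.description, which is the target table itself).

```sql
-- Sketch: MERGE instead of SQL Server-style UPDATE ... FROM.
MERGE INTO inv.mtl_system_items_b t2
USING tmp.tmp_inactivar_productos t1
ON (t2.segment2 = t1.inventory_item_id)
WHEN MATCHED THEN UPDATE SET
  t2.attribute1       = t1.description,
  t2.last_update_date = SYSDATE,
  t2.last_updated_by  = 3606;
```

If TMP_INACTIVAR_PRODUCTOS has no description column, a correlated UPDATE with an EXISTS filter on inventory_item_id would be the other standard form.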
How to clear rows from iterator and re-fetch fresh data from the table ?
Hi,
I am using JDev 11.1.1.2.0
I have generated JPA Service Facade and by using it, I have created Data Control which finally I have dragged & dropped on my .jsff file.
In viewObject, there is a method clearCache() to clear the viewObject data.
The Iterator also has a clear() method, but when it is invoked, the ADF framework throws a StackOverflow error.
So, I want to clear my iterator before calling executeQuery() method.
How Can I clear it ?
Because in my case, if I run the executeQuery() method on the DCIteratorBinding, it does not get updated with the latest values from the DB table.
So I want to clear all the rows from the iterator and then call the executeQuery() method to fetch the latest data from the DB tables.
I have also tried the piece of code below to refresh the iterator, but the iterator still gets updated with two identical rows, while in the DB the data is correct.
FacesContext fctx = FacesContext.getCurrentInstance();
ValueBinding dcb =
fctx.getApplication().createValueBinding("#{bindings}");
DCBindingContainer iteratorbindings =
(DCBindingContainer)dcb.getValue(fctx);
DCIteratorBinding dciter =
iteratorbindings.findIteratorBinding(<iteratorname>);
dciter.releaseData();
dciter.executeQuery();
dciter.refresh(DCIteratorBinding.RANGESIZE_UNLIMITED);
regards,
devang
Hi,
Have you tried dragging and dropping an "Execute" operation from the Data Control Palette onto your refresh or query button?
We are using JPA/ EJB Session Beans and that works for us.
regards,
pino -
Polling Strategy to get back inserted/updated rows from BPEL
Hi,
Can anybody share code to get back the newly inserted/updated rows from a database table, using the DbAdapter polling strategy, from a BPEL process?
Hi user559009,
Could you please elaborate a bit more on what it is you need?
If you create a Partner link that polls a table and then create a receive activity that uses the partner link the variable in the receive activity will contain the data from the table.
Regards Pete