Performance across millions of records
Hi,
I have millions of records in the database. I need to retrieve these records from multiple master data tables, validate them, and post the error messages in some format. Please let me know a way to complete the process within 15 minutes and without a short dump. I am really expecting excellent performance.
Hi,
I would go for a different concept - in other words: forget it. Let's say you have 2 million records ('millions' wasn't very specific; it could be much more). 15 minutes (the usual time-out already comes after 10 minutes!) is 900 seconds. Divide this by 2 million -> 0.45 milliseconds per entry.
In this time you want to select the entry and perform a check. I doubt this will be possible - you might select the entries in this time, you might make a loop, maybe one READ TABLE - but all together (avoiding RAM problems, having only index accesses, gathering error messages...) there is small chance.
I guess you will rather spend a lot of time and won't succeed - or you have fewer entries to test than you said in the first place.
Of course I cannot estimate the exact runtime - even if you had given the exact requirement - but just make some tests with very small numbers and see for yourself whether you can come close to the time per entry you need.
Regards,
Christian
Similar Messages
-
0FI_AR_4 Initialization - millions of records
Hi,
We are planning to initialize 0FI_AR_4 datasource for which there are millions of records available in Source system.
While checking in the Quality system we realised that just a single fiscal period takes hours to extract, and in the Production system we have data for the last 4 years (about 40 million records).
The trace results (ST05) show that most of the time is spent fetching data from the BKPF_BSID / BKPF_BSAD view.
I can see an index available on tables BSID/BSAD - Index 5, "Index for BW extraction" - which has not yet been created on the database.
This index has 2 fields - BUKRS & CPUDT.
I am not sure whether this index will help in extracting data.
What can be done to improve the performance of this extraction so that the initialization of 0FI_AR_4 can be completed in optimum time?
I would appreciate your inputs, experts.
Regards,
Vikram.
We are planning to change the existing FI_AR line item load from current-fiscal-year full to delta. As of now, FI_AR_4 is loaded full from R/3 for certain company codes and fiscal year/period 2013001 - 2013012. Now the business wants historical data, and going forward the extractor should bring only changes (delta).
We would like to perform the below steps:
1. Initialization without data transfer on comp_code and fiscal year/period 1998001 - 9999012
2. Repair full loads for all the historical data, fiscal year/period wise (1998001-1998012, 1999001-1999012, ... current year 2013001 - 2013011), up to PSA
3. Load these to the DSO
4. Activate the requests
5. Now do a delta load from R/3 to BW till PSA for the new selection 1998001-9999012
6. Load till the DSO
7. Activate the load
Please let me know if the above steps will bring in all the data for FI_AR_4 line items, and whether any data will be missing once I do the delta load after the repair full loads.
Thanks -
Need help / advice: managing millions of records daily - please help me :)
Hi all,
I have only 2 years of experience as an Oracle DBA. I need advice from the experts :)
To begin: the company I work for has decided to save about 40 million records daily in our Oracle database, in a single (user) table. These records are to be imported daily from CSV or XML feeds into that one table.
This is a project that needs:
- A performance study
- A study of what is required in terms of hardware
As a leader in the market, Oracle is the only DBMS that could support this volume of data, but what is Oracle's limit in this case? Can Oracle support and manage 40 million records daily, and keep doing so for many years? We need all the data in this table; we cannot assume that after some period the history is no longer needed. We must keep all data, without purging the history, for many years. You can imagine: 40 million records daily, for many years!
Then we need to consolidate from this table different views (or maybe materialized views) for each department and business inside the company - another project that needs study!
My questions are (using Oracle Database 10g Enterprise Edition Release 10.2.0.1.0):
1- Can Oracle support and properly manage 40 million records daily, and for many years?
2- Performance: which solutions or techniques could I use to improve the performance of:
- Daily loading of 40 million records from CSV or XML files?
- Daily consolidation / management of different views / materialized views over this big table?
3- What is required in terms of hardware? Features / technologies (maybe clusters...)?
Hope the experts can help and advise me! Thank you very much for your attention :)
1- Can Oracle support and perfectly manage 40 million records daily, and for many years?
Yes.
2- Performance: which solutions or techniques could I use to improve the performance?
Send me your email, and I can send you a performance tuning methodology PDF.
You can see my email on my profile.
- Daily loading of 40 million records from CSV or XML files?
Direct path load.
- Daily consolidation / managing different views / materialized views from this big table?
You can use table partitions, one partition for each day.
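A sketch of the one-partition-per-day idea (all object names here are invented for illustration; on the poster's 10.2 release the daily partitions must be pre-created or added by a scheduled job, since interval partitioning only arrived in 11g):

```sql
-- Hypothetical daily feed table, range-partitioned by load date
CREATE TABLE daily_feed (
  record_id NUMBER,
  load_date DATE,
  payload   VARCHAR2(4000)
)
PARTITION BY RANGE (load_date) (
  PARTITION p20110217 VALUES LESS THAN (TO_DATE('18-02-2011','DD-MM-YYYY')),
  PARTITION p20110218 VALUES LESS THAN (TO_DATE('19-02-2011','DD-MM-YYYY'))
);

-- Direct path load of the day's CSV, assuming an external table
-- (daily_feed_ext) has been defined over the file
INSERT /*+ APPEND */ INTO daily_feed
SELECT record_id, load_date, payload
FROM daily_feed_ext;
COMMIT;
```

Old partitions can later be moved to cheaper storage or compressed individually, which is what makes the keep-everything-for-years requirement manageable.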
Regards,
Francisco Munoz Alvarez -
Database table with potentially millions of records
Hello,
We want to keep track of users' transaction history from the performance database. The workload statistics contain the user transaction history information; however, since the workload performance statistics are intended for temporary purposes and data in these tables is deleted every few months, we lose all the users' historical records.
We want to keep track of the following in a table that we can query later:
User ID - Length 12
Transaction - Length 20
Date - Length 8
With over 20,000 end users in production, this can translate into thousands of records inserted into this table daily.
What is the best way to store this type of information? Is there a specific table type designed for storing massive quantities of data? Also, over time (a few years) this table can grow to millions or hundreds of millions of records. How can we manage that in terms of performance and storage space?
If anyone has worked with database tables with very large numbers of records and would like to share your experiences, please let us know how we could/should structure this function in our environment.
Best Regards.
Hi SS,
Alternatively, you can use a cluster table. For more help, refer to the F1 help on the "IMPORT TO / EXPORT FROM DATABASE" statements.
Or you can store the data as a file on the application server using the "OPEN DATASET, TRANSFER, CLOSE DATASET" statements.
You can also choose to archive data older than some definite date.
You can also mix these alternatives for recent and archived data.
*--Serdar [ BC ] -
Best way to insert millions of records into the table
Hi,
From a performance point of view, I am looking for suggestions on the best way to insert millions of records into a table.
Please also guide me on how to implement it in an easy way with good performance.
Thanks,
Orahar.
Orahar wrote:
It's distributed data: N clients and N transactions fetching data from the database based on different conditions and inserting into another transaction table, like a batch process.
Sounds contradictory.
If the source data is already in the database, it is centralised.
In that case you ideally do not want the overhead of shipping that data to a client, the client processing it, and the client shipping the results back to the database to be stored (inserted).
It is much faster and more scalable for the client to instruct the database (via a stored proc or package) what to do, and for that code (running on the database) to process the data.
For a stored proc, the same principle applies. It is faster for it to instruct the SQL engine what to do (via an INSERT..SELECT statement) than to pull the data from the SQL engine using a cursor fetch loop and then push that data back to the SQL engine using an INSERT statement.
An INSERT..SELECT can also be done as a direct path insert. This introduces some limitations, but is faster than a normal insert.
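A minimal sketch of that direct path variant (table and column names are invented; note that after a direct path insert the same session cannot query the table again until it commits):

```sql
-- INSERT..SELECT as a direct path insert via the APPEND hint
INSERT /*+ APPEND */ INTO target_txn (id, amount, status)
SELECT s.id, s.amount, 'PROCESSED'
FROM   source_txn s
WHERE  s.batch_date = TRUNC(SYSDATE);
COMMIT;  -- required before the session can read target_txn again
```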
If the data processing is too complex for an INSERT..SELECT, then pulling the data into PL/SQL, processing it there, and pushing it back into the database is the next best option. This should be done using bulk processing though in order to optimise the data transfer process between the PL/SQL and SQL engines.
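That bulk-processing option might be sketched like this (again with invented names; the LIMIT clause keeps PGA memory bounded):

```sql
DECLARE
  TYPE t_ids  IS TABLE OF source_txn.id%TYPE;
  TYPE t_amts IS TABLE OF source_txn.amount%TYPE;
  l_ids  t_ids;
  l_amts t_amts;
  CURSOR c_src IS SELECT id, amount FROM source_txn;
BEGIN
  OPEN c_src;
  LOOP
    -- Fetch in batches; LIMIT bounds memory use
    FETCH c_src BULK COLLECT INTO l_ids, l_amts LIMIT 1000;
    EXIT WHEN l_ids.COUNT = 0;
    -- (complex per-row processing would go here)
    -- One context switch to the SQL engine per batch
    FORALL i IN 1 .. l_ids.COUNT
      INSERT INTO target_txn (id, amount) VALUES (l_ids(i), l_amts(i));
  END LOOP;
  CLOSE c_src;
  COMMIT;
END;
/
```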
Other performance considerations are the constraints on the insert table, the triggers, the indexes and so on. Make sure that data integrity is guaranteed (e.g. via PKs and FKs) and optimal (e.g. FK columns should be indexed on the referencing table). Using triggers - well, that may not be the best approach (for example, using a trigger to assign a sequence value when it can be done faster in the insert SQL itself). Personally, I avoid triggers - I rather have that code residing in a PL/SQL API for manipulating data in that table.
The type of table also plays a role. Make sure that the decision about the table structure, hashed, indexed, partitioned, etc, is the optimal one for the data structure that is to reside in that table. -
Very slow sorting of 48 million records
Hi All,
I am working on a project to count inbound and outbound calls at a telecom company.
We get 12 million call records every day, i.e. Calling_number_From, Call_start_time, Calling_number_to and Call_end_time.
We then split these records into 4 records each using UNION ALL, which means we have 48 million records to process, and then we ORDER BY call_time.
This ORDER BY takes hours to run. Please advise on ideas to improve performance.
Table has Parallel_degree 10
We are on Oracle 10G
We split each call into four records, i.e.:
Each call will have incoming number and outgoing number.
Incoming call
Main_number Column_Call time Count_calls
999 Call_start_time +1
999 Call_end_time -1
Outgoing Call
Main_number Column_Call time Count_calls
888 Call_start_time +1
888 Call_end_time -1
Then we sort Column_call_time in ascending order and check the maximum simultaneous incoming, outgoing, and total active calls for each Main_number in one hour. That is the reason we need the sort.
Do you guys know any other algorithm to do the same?
Is there any way to sort 48 million rows faster?
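The +1/-1 trick described above is essentially a sweep-line count, which on a simplified table could be written as (names invented):

```sql
-- Each call contributes +1 at its start and -1 at its end; a running
-- SUM ordered by event time yields the concurrent-call count.
WITH events AS (
  SELECT main_number, call_start_time AS event_time, +1 AS delta FROM calls
  UNION ALL
  SELECT main_number, call_end_time AS event_time, -1 AS delta FROM calls
)
SELECT main_number,
       event_time,
       SUM(delta) OVER (PARTITION BY main_number
                        ORDER BY event_time, delta DESC) AS active_calls
FROM events;
```

Ordering the analytic window by the event time itself (rather than by ROWNUM, which is nondeterministic) also removes the need for a separate ORDER BY whose only purpose is to feed the window functions.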
Below is the query.
SELECT did_qry.PART_TS,
did_qry.P_NUMBER ,
TO_CHAR(did_qry.call_time,'HH24')
||':00-'
||TO_CHAR(DID_QRY.CALL_TIME,'HH24')
||':59' HOUR_RANGE,
FLAG,
HOUR_CHANGE,
DECODE(HOUR_CHANGE,'HC',DID_QRY.ACTIVE_CALLS+1,DECODE(DID_QRY.ACTIVE_CALLS,0,1,DID_QRY.ACTIVE_CALLS)) ACTIVE_CALLS,
DECODE(HOUR_CHANGE,'HC',DID_QRY.IO_CALLS_CNT+1,DECODE(DID_QRY.IO_CALLS_CNT,0,1,DID_QRY.IO_CALLS_CNT)) io_calls
FROM
(SELECT PART_TS,
P_NUMBER,
did,
call_time,
flag ,
hour_change,
SUM(act) over ( partition BY P_NUMBER order by rownum ) active_calls ,
SUM(io_calls) over ( partition BY P_NUMBER,flag order by rownum ) io_calls_cnt
FROM
(select TRUNC(H.PART_TS) PART_TS,
TPILOT.P_NUM P_NUMBER,
h.orig_num did,
h.Call_start_ts call_time,
'IN' flag,
1 act,
1 io_calls,
'NA' hour_change
from CALL_REC H,
DISCONN_CD DCODE,
P_DID TPILOT
where ( (H.PART_TS >=to_date('17-02-2011 23:59:59','DD-MM-YYYY HH24:MI:SS')
and H.PART_TS <to_date('19-02-2011 00:00:00','DD-MM-YYYY HH24:MI:SS'))
AND DCODE.EFF_START_DT <= H.PART_TS
AND DCODE.EFF_END_DT > H.PART_TS
AND dcode.CDR_C_CDE =h.A_I_ID
AND dcode.CDR_B_CDE =h.R_C_ID
AND dcode.AB_DIS_IND ='N'
AND RECORD_TYP_ID ='00000000'
AND tpilot.EFF_START_DT <= h.PART_TS
AND tpilot.EFF_END_DT > h.PART_TS
and TPILOT.D_NUM =H.TERM_NUM
UNION ALL
select TRUNC(H.PART_TS) PART_TS,
tpilot.P_NUM P_NUMBER,
h.term_num did,
h.PART_TS call_time,
'IN' flag,
-1 act,
-1 io_calls,
DECODE(greatest(TO_CHAR(h.Call_start_ts,'HH12'),TO_CHAR(h.PART_TS,'HH12')), least(TO_CHAR(h.Call_start_ts,'HH12'),TO_CHAR(h.PART_TS,'HH12')),'NC',DECODE(greatest(TO_CHAR(h.Call_start_ts,'HH12'),TO_CHAR(h.PART_TS,'HH12')), TO_CHAR(h.PART_TS,'HH12'),'HC','NC')) hour_change
from CALL_REC H,
DISCONN_CD DCODE,
P_DID tpilot
where H.PART_TS >=to_date('17-02-2011 23:59:59','DD-MM-YYYY HH24:MI:SS')
and H.PART_TS <to_date('19-02-2011 00:00:00','DD-MM-YYYY HH24:MI:SS')
AND DCODE.EFF_START_DT <= h.PART_TS
AND DCODE.EFF_END_DT > h.PART_TS
AND dcode.CDR_C_CDE =h.A_I_ID
AND dcode.CDR_B_CDE =h.R_C_ID
AND dcode.AB_DIS_IND ='N'
AND RECORD_TYP_ID ='00000000'
and TPILOT.EFF_START_DT <= H.PART_TS
and TPILOT.EFF_END_DT > H.PART_TS
and TPILOT.D_NUM =H.TERM_NUM
UNION ALL
SELECT TRUNC(H.PART_TS) PART_TS,
pilot.P_NUM P_NUMBER,
h.orig_num did,
h.Call_start_ts call_time,
'OUT' flag,
1 act,
1 io_calls,
'NA' hour_change
FROM CALL_REC H,
DISCONN_CD DCODE,
P_DID PILOT
where H.PART_TS >=to_date('17-02-2011 23:59:59','DD-MM-YYYY HH24:MI:SS')
and H.PART_TS <to_date('19-02-2011 00:00:00','DD-MM-YYYY HH24:MI:SS')
AND DCODE.EFF_START_DT <= H.PART_TS
AND DCODE.EFF_END_DT > H.PART_TS
AND dcode.CDR_C_CDE =h.A_I_ID
AND dcode.CDR_B_CDE =h.R_C_ID
AND dcode.AB_DIS_IND ='N'
AND RECORD_TYP_ID ='00000000'
AND pilot.EFF_START_DT <= h.PART_TS
and PILOT.EFF_END_DT > H.PART_TS
and PILOT.D_NUM =H.ORIG_NUM
UNION ALL
SELECT TRUNC(h.PART_TS) PART_TS,
pilot.P_NUM P_NUMBER,
h.term_num did,
h.PART_TS call_time,
'OUT' flag,
-1 act,
-1 io_calls,
DECODE(greatest(TO_CHAR(h.Call_start_ts,'HH12'),TO_CHAR(h.PART_TS,'HH12')), least(TO_CHAR(h.Call_start_ts,'HH12'),TO_CHAR(h.PART_TS,'HH12')),'NC',DECODE(greatest(TO_CHAR(h.Call_start_ts,'HH12'),TO_CHAR(h.PART_TS,'HH12')), TO_CHAR(h.PART_TS,'HH12'),'HC','NC')) hour_change
FROM CALL_REC H,
DISCONN_CD DCODE,
P_DID pilot
WHERE H.PART_TS >=to_date('17-02-2011 23:59:59','DD-MM-YYYY HH24:MI:SS')
and H.PART_TS <to_date('19-02-2011 00:00:00','DD-MM-YYYY HH24:MI:SS')
AND DCODE.EFF_START_DT <= h.PART_TS
AND DCODE.EFF_END_DT > h.PART_TS
AND dcode.CDR_C_CDE =h.A_I_ID
AND dcode.CDR_B_CDE =h.R_C_ID
AND dcode.AB_DIS_IND ='N'
AND RECORD_TYP_ID ='00000000'
AND pilot.EFF_START_DT <= h.PART_TS
AND pilot.EFF_END_DT > h.PART_TS
AND pilot.D_NUM =h.orig_num
ORDER BY 2,4,6 ASC
) DID_QRY
)
Explain Plan
Plan hash value: 616103529
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | Pstart| Pstop | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | 204M| 12G| | 759K (1)| 02:31:49 | | | | | |
| 1 | WINDOW SORT | | 204M| 12G| 33G| 759K (1)| 02:31:49 | | | | | |
| 2 | WINDOW SORT | | 204M| 12G| 33G| 759K (1)| 02:31:49 | | | | | |
| 3 | COUNT | | | | | | | | | | | |
| 4 | PX COORDINATOR | | | | | | | | | | | |
| 5 | PX SEND QC (ORDER) | :TQ10005 | 204M| 12G| | 5919K(100)| 19:44:00 | | | Q1,05 | P->S | QC (ORDER) |
| 6 | VIEW | | 204M| 12G| | 5919K(100)| 19:44:00 | | | Q1,05 | PCWP | |
| 7 | SORT ORDER BY | | 204M| 22G| 55G| 22449 (76)| 00:04:30 | | | Q1,05 | PCWP | |
| 8 | PX RECEIVE | | | | | | | | | Q1,05 | PCWP | |
| 9 | PX SEND RANGE | :TQ10004 | | | | | | | | Q1,04 | P->P | RANGE |
| 10 | BUFFER SORT | | 204M| 12G| | | | | | Q1,04 | PCWP | |
| 11 | UNION-ALL | | | | | | | | | Q1,04 | PCWP | |
|* 12 | HASH JOIN | | 51M| 6052M| | 5612 (4)| 00:01:08 | | | Q1,04 | PCWP | |
| 13 | BUFFER SORT | | | | | | | | | Q1,04 | PCWC | |
| 14 | PX RECEIVE | | 13 | 754 | | 5 (0)| 00:00:01 | | | Q1,04 | PCWP | |
| 15 | PX SEND BROADCAST | :TQ10000 | 13 | 754 | | 5 (0)| 00:00:01 | | | | S->P | BROADCAST |
| 16 | MERGE JOIN CARTESIAN| | 13 | 754 | | 5 (0)| 00:00:01 | | | | | |
| 17 | INDEX FULL SCAN | IDX_PK_PBX_PILOT_DID | 2 | 68 | | 1 (0)| 00:00:01 | | | | | |
| 18 | BUFFER SORT | | 7 | 168 | | 4 (0)| 00:00:01 | | | | | |
|* 19 | TABLE ACCESS FULL | VOIP_ABNORM_DISCONN_CD | 7 | 168 | | 2 (0)| 00:00:01 | | | | | |
| 20 | PX BLOCK ITERATOR | | 7874K| 495M| | 5546 (3)| 00:01:07 | 1 | 3 | Q1,04 | PCWC | |
|* 21 | TABLE ACCESS FULL | HIQ_EVENT_T | 7874K| 495M| | 5546 (3)| 00:01:07 | 1 | 3 | Q1,04 | PCWP | |
|* 22 | HASH JOIN | | 51M| 5516M| | 5612 (4)| 00:01:08 | | | Q1,04 | PCWP | |
| 23 | BUFFER SORT | | | | | | | | | Q1,04 | PCWC | |
| 24 | PX RECEIVE | | 13 | 754 | | 5 (0)| 00:00:01 | | | Q1,04 | PCWP | |
| 25 | PX SEND BROADCAST | :TQ10001 | 13 | 754 | | 5 (0)| 00:00:01 | | | | S->P | BROADCAST |
| 26 | MERGE JOIN CARTESIAN| | 13 | 754 | | 5 (0)| 00:00:01 | | | | | |
| 27 | INDEX FULL SCAN | IDX_PK_PBX_PILOT_DID | 2 | 68 | | 1 (0)| 00:00:01 | | | | | |
| 28 | BUFFER SORT | | 7 | 168 | | 4 (0)| 00:00:01 | | | | | |
|* 29 | TABLE ACCESS FULL | VOIP_ABNORM_DISCONN_CD | 7 | 168 | | 2 (0)| 00:00:01 | | | | | |
| 30 | PX BLOCK ITERATOR | | 7874K| 413M| | 5546 (3)| 00:01:07 | 1 | 3 | Q1,04 | PCWC | |
|* 31 | TABLE ACCESS FULL | HIQ_EVENT_T | 7874K| 413M| | 5546 (3)| 00:01:07 | 1 | 3 | Q1,04 | PCWP | |
|* 32 | HASH JOIN | | 51M| 5516M| | 5612 (4)| 00:01:08 | | | Q1,04 | PCWP | |
| 33 | BUFFER SORT | | | | | | | | | Q1,04 | PCWC | |
| 34 | PX RECEIVE | | 13 | 754 | | 5 (0)| 00:00:01 | | | Q1,04 | PCWP | |
| 35 | PX SEND BROADCAST | :TQ10002 | 13 | 754 | | 5 (0)| 00:00:01 | | | | S->P | BROADCAST |
| 36 | MERGE JOIN CARTESIAN| | 13 | 754 | | 5 (0)| 00:00:01 | | | | | |
| 37 | INDEX FULL SCAN | IDX_PK_PBX_PILOT_DID | 2 | 68 | | 1 (0)| 00:00:01 | | | | | |
| 38 | BUFFER SORT | | 7 | 168 | | 4 (0)| 00:00:01 | | | | | |
|* 39 | TABLE ACCESS FULL | VOIP_ABNORM_DISCONN_CD | 7 | 168 | | 2 (0)| 00:00:01 | | | | | |
| 40 | PX BLOCK ITERATOR | | 7874K| 413M| | 5546 (3)| 00:01:07 | 1 | 3 | Q1,04 | PCWC | |
|* 41 | TABLE ACCESS FULL | HIQ_EVENT_T | 7874K| 413M| | 5546 (3)| 00:01:07 | 1 | 3 | Q1,04 | PCWP | |
|* 42 | HASH JOIN | | 51M| 6052M| | 5612 (4)| 00:01:08 | | | Q1,04 | PCWP | |
| 43 | BUFFER SORT | | | | | | | | | Q1,04 | PCWC | |
| 44 | PX RECEIVE | | 13 | 754 | | 5 (0)| 00:00:01 | | | Q1,04 | PCWP | |
| 45 | PX SEND BROADCAST | :TQ10003 | 13 | 754 | | 5 (0)| 00:00:01 | | | | S->P | BROADCAST |
| 46 | MERGE JOIN CARTESIAN| | 13 | 754 | | 5 (0)| 00:00:01 | | | | | |
| 47 | INDEX FULL SCAN | IDX_PK_PBX_PILOT_DID | 2 | 68 | | 1 (0)| 00:00:01 | | | | | |
| 48 | BUFFER SORT | | 7 | 168 | | 4 (0)| 00:00:01 | | | | | |
|* 49 | TABLE ACCESS FULL | VOIP_ABNORM_DISCONN_CD | 7 | 168 | | 2 (0)| 00:00:01 | | | | | |
| 50 | PX BLOCK ITERATOR | | 7874K| 495M| | 5546 (3)| 00:01:07 | 1 | 3 | Q1,04 | PCWC | |
|* 51 | TABLE ACCESS FULL | HIQ_EVENT_T | 7874K| 495M| | 5546 (3)| 00:01:07 | 1 | 3 | Q1,04 | PCWP | |
Predicate Information (identified by operation id):
12 - access("H"."ATTEMPT_INDICATOR_ID"=TO_NUMBER("DCODE"."CDR_COLUMN_18_CODE") AND "DCODE"."CDR_COLUMN_19_CODE"="H"."RELEASE_CAUSE_ID" AND
"TPILOT"."DID_NUM"="H"."TERM_NUM")
filter("DCODE"."EFF_START_DT"<="H"."RETENTION_TS" AND "DCODE"."EFF_END_DT">"H"."RETENTION_TS" AND "TPILOT"."EFF_START_DT"<="H"."RETENTION_TS" AND
"TPILOT"."EFF_END_DT">"H"."RETENTION_TS")
19 - filter("DCODE"."ABNORM_DISCONN_IND"='N')
21 - filter("HIQ_RECORD_TYPE_ID"='00000000' AND ("H"."RETENTION_TS">=TO_DATE('2009-01-01 23:59:59', 'yyyy-mm-dd hh24:mi:ss') AND
"H"."RETENTION_TS"<TO_DATE('2011-02-14 23:59:59', 'yyyy-mm-dd hh24:mi:ss') OR "H"."CALL_RLSE_TS">=TIMESTAMP'2009-01-01 23:59:59' AND
"H"."RETENTION_TS">=TO_DATE('2008-12-31 23:59:59', 'yyyy-mm-dd hh24:mi:ss')))
22 - access("H"."ATTEMPT_INDICATOR_ID"=TO_NUMBER("DCODE"."CDR_COLUMN_18_CODE") AND "DCODE"."CDR_COLUMN_19_CODE"="H"."RELEASE_CAUSE_ID" AND
"TPILOT"."DID_NUM"="H"."TERM_NUM")
filter("DCODE"."EFF_START_DT"<="H"."RETENTION_TS" AND "DCODE"."EFF_END_DT">"H"."RETENTION_TS" AND "TPILOT"."EFF_START_DT"<="H"."RETENTION_TS" AND
"TPILOT"."EFF_END_DT">"H"."RETENTION_TS")
29 - filter("DCODE"."ABNORM_DISCONN_IND"='N')
31 - filter("HIQ_RECORD_TYPE_ID"='00000000' AND ("H"."RETENTION_TS">=TO_DATE('2009-01-01 23:59:59', 'yyyy-mm-dd hh24:mi:ss') AND
"H"."RETENTION_TS"<TO_DATE('2011-02-14 23:59:59', 'yyyy-mm-dd hh24:mi:ss') OR "H"."CALL_RLSE_TS">=TIMESTAMP'2009-01-01 23:59:59' AND
"H"."RETENTION_TS">=TO_DATE('2008-12-31 23:59:59', 'yyyy-mm-dd hh24:mi:ss')))
32 - access("H"."ATTEMPT_INDICATOR_ID"=TO_NUMBER("DCODE"."CDR_COLUMN_18_CODE") AND "DCODE"."CDR_COLUMN_19_CODE"="H"."RELEASE_CAUSE_ID" AND
"PILOT"."DID_NUM"="H"."ORIG_NUM")
filter("DCODE"."EFF_START_DT"<="H"."RETENTION_TS" AND "DCODE"."EFF_END_DT">"H"."RETENTION_TS" AND "PILOT"."EFF_START_DT"<="H"."RETENTION_TS" AND
"PILOT"."EFF_END_DT">"H"."RETENTION_TS")
39 - filter("DCODE"."ABNORM_DISCONN_IND"='N')
41 - filter("HIQ_RECORD_TYPE_ID"='00000000' AND ("H"."RETENTION_TS">=TO_DATE('2009-01-01 23:59:59', 'yyyy-mm-dd hh24:mi:ss') AND
"H"."RETENTION_TS"<TO_DATE('2011-02-14 23:59:59', 'yyyy-mm-dd hh24:mi:ss') OR "H"."CALL_RLSE_TS">=TIMESTAMP'2009-01-01 23:59:59' AND
"H"."RETENTION_TS">=TO_DATE('2008-12-31 23:59:59', 'yyyy-mm-dd hh24:mi:ss')))
42 - access("H"."ATTEMPT_INDICATOR_ID"=TO_NUMBER("DCODE"."CDR_COLUMN_18_CODE") AND "DCODE"."CDR_COLUMN_19_CODE"="H"."RELEASE_CAUSE_ID" AND
"PILOT"."DID_NUM"="H"."ORIG_NUM")
filter("DCODE"."EFF_START_DT"<="H"."RETENTION_TS" AND "DCODE"."EFF_END_DT">"H"."RETENTION_TS" AND "PILOT"."EFF_START_DT"<="H"."RETENTION_TS" AND
"PILOT"."EFF_END_DT">"H"."RETENTION_TS")
49 - filter("DCODE"."ABNORM_DISCONN_IND"='N')
51 - filter("HIQ_RECORD_TYPE_ID"='00000000' AND ("H"."RETENTION_TS">=TO_DATE('2009-01-01 23:59:59', 'yyyy-mm-dd hh24:mi:ss') AND
"H"."RETENTION_TS"<TO_DATE('2011-02-14 23:59:59', 'yyyy-mm-dd hh24:mi:ss') OR "H"."CALL_RLSE_TS">=TIMESTAMP'2009-01-01 23:59:59' AND
"H"."RETENTION_TS">=TO_DATE('2008-12-31 23:59:59', 'yyyy-mm-dd hh24:mi:ss'))) -
Hi!
I need to copy millions of records (call detail records) from an old table to a new table in a new database.
I created the following procedure, but it has serious performance problems: it copies only 200,000 records/hour. Could you suggest another process? The table is partitioned by day and has an index on column TID.
PROCEDURE sendCdrs(
inOperador IN VARCHAR2,
inOperacao IN VARCHAR2,
outError OUT VARCHAR2,
outErrorNbr OUT VARCHAR2,
outNbrCDRs OUT NUMBER)
IS
CURSOR myCdrs (inDay NUMBER) IS
SELECT *
FROM CDRV4_SCP_DSCP_CAMEL
WHERE day = inDay;
rowCdr CDRV4_SCP_DSCP_CAMEL%ROWTYPE;
vToday PLS_INTEGER := to_number(to_char(SYSDATE, 'ddd'));
vNbrMinimumDays PLS_INTEGER;
vDiffDays PLS_INTEGER;
vCommit PLS_INTEGER;
BEGIN
outErrorNbr := '501';
outNbrCDRs := 0;
SELECT nbrcommit, nbrminimumdays, diffdays
INTO vCommit, vNbrMinimumDays, vDiffDays
FROM SMP_CENTRAL_XDRS
WHERE rownum = 1;
outErrorNbr := '505';
IF vCommit IS NULL OR vNbrMinimumDays IS NULL OR vDiffDays IS NULL THEN
outError := '910 Error in conf table';
RETURN;
END IF;
outErrorNbr := '510';
IF ((vToday - vDiffDays) < (vToday - vNbrMinimumDays)) THEN
OPEN myCdrs (vToday);
LOOP
FETCH myCdrs INTO rowCdr;
EXIT WHEN myCdrs%NOTFOUND;
--Insert the CDRs into the destination
INSERT INTO CDRV4_SCP_DSCP_CAMEL@DBL_SMP_STRESS_CORE
VALUES rowCdr;
DELETE FROM CDRV4_SCP_DSCP_CAMEL
WHERE tid = rowCdr.tid;
outNbrCDRs := outNbrCDRs + 1;
--Commit every vCommit records
IF MOD(outNbrCDRs, vCommit) = 0 THEN
COMMIT;
END IF;
END LOOP;
CLOSE myCdrs;
--Commit the remaining changes when outNbrCDRs < vCommit
COMMIT;
END IF;
outErrorNbr := RET_OK;
outError := RET_OK;
EXCEPTION
WHEN OTHERS THEN
outError := '910 DB error';
ROLLBACK;
IF myCdrs%ISOPEN THEN
CLOSE myCdrs;
END IF;
END sendCdrs;
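For comparison, the row-by-row loop above can usually be replaced by a set-based pair of statements over the same objects, sketched here (whether one transaction of this size is acceptable depends on your undo configuration; note that the APPEND hint would not give a direct path insert into a remote table anyway):

```sql
-- Move one day's rows with two set-based statements instead of a loop
INSERT INTO CDRV4_SCP_DSCP_CAMEL@DBL_SMP_STRESS_CORE
SELECT * FROM CDRV4_SCP_DSCP_CAMEL
WHERE  day = :inDay;

DELETE FROM CDRV4_SCP_DSCP_CAMEL
WHERE  day = :inDay;

COMMIT;
```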
Thanks a lot
André
Hi,
Why not just set up a materialized view and have the database do the copying for you? Just set it up to refresh at the end of every day. -
How can I read millions of records and write them as a *.csv file
I have to return some set of column values (based on the current date) from the database (could be millions of records). DBMS_OUTPUT can accommodate only 20,000 records. (I am retrieving through a procedure using a cursor.)
I should write these values to a file with the extension .csv (comma-separated file). I thought of using UTL_FILE, but I heard there is some restriction on the number of records even with UTL_FILE.
If so, what is the restriction? Is there any other way I can achieve it (BLOB or CLOB?)?
Please help me in solving this problem.
I have to write to the .csv file the values from the cursor, which I have concatenated with ","; currently the procedure returns the values to the screen (using DBMS_OUTPUT, temporarily). I have to redirect the output to the .csv file,
and the .csv file should be in some physical directory, from which I have to upload (ftp) the file to the website.
Please help me out.
Jimmy,
Make sure that utl_file is properly installed, make sure that the utl_file_dir parameter is set in the init.ora file and that the database has been re-started so that it will take effect, make sure that you have sufficient privileges granted directly, not through roles, including privileges to the file and directory that you are trying to write to, add the exception block below to your procedure to narrow down the source of the exception, then test again. If you still get an error, please post a cut and paste of the exact code that you run and any messages that you received.
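For reference, the CSV-writing loop itself can stay very small; a minimal sketch (here '/tmp' stands for a directory listed in utl_file_dir, and the table and columns are invented):

```sql
DECLARE
  l_file UTL_FILE.FILE_TYPE;
BEGIN
  -- Open with an explicit max line size (the default is 1024 bytes)
  l_file := UTL_FILE.FOPEN('/tmp', 'extract.csv', 'w', 32767);
  FOR r IN (SELECT col1, col2, col3 FROM some_table) LOOP
    -- One comma-separated line per row
    UTL_FILE.PUT_LINE(l_file, r.col1 || ',' || r.col2 || ',' || r.col3);
  END LOOP;
  UTL_FILE.FCLOSE(l_file);
END;
/
```

There is no fixed limit on the number of rows written; the practical constraints are the max_linesize passed to FOPEN and filesystem space.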
exception
when utl_file.invalid_path then
raise_application_error(-20001, 'INVALID_PATH: File location or filename was invalid.');
when utl_file.invalid_mode then
raise_application_error(-20002, 'INVALID_MODE: The open_mode parameter in FOPEN was invalid.');
when utl_file.invalid_filehandle then
raise_application_error(-20003, 'INVALID_FILEHANDLE: The file handle was invalid.');
when utl_file.invalid_operation then
raise_application_error(-20004, 'INVALID_OPERATION: The file could not be opened or operated on as requested.');
when utl_file.read_error then
raise_application_error(-20005, 'READ_ERROR: An operating system error occurred during the read operation.');
when utl_file.write_error then
raise_application_error(-20006, 'WRITE_ERROR: An operating system error occurred during the write operation.');
when utl_file.internal_error then
raise_application_error(-20007, 'INTERNAL_ERROR: An unspecified error in PL/SQL.'); -
Getting error Unable to perform transaction on the record.
Hi,
My requirement is to implement a custom attachment and to store the data in a custom LOB table.
My custom table structure is similar to that of the standard FND_LOBS table, and I have inserted the data through an EO-based VO.
Structure of custom table
CREATE TABLE XXAPL.XXAPL_LOBS (
ATTACHMENT_ID NUMBER NOT NULL,
FILE_NAME VARCHAR2(256 BYTE),
FILE_CONTENT_TYPE VARCHAR2(256 BYTE) NOT NULL,
FILE_DATA BLOB,
UPLOAD_DATE DATE,
EXPIRATION_DATE DATE,
PROGRAM_NAME VARCHAR2(32 BYTE),
PROGRAM_TAG VARCHAR2(32 BYTE),
LANGUAGE VARCHAR2(4 BYTE) DEFAULT ( userenv ( 'LANG') ),
ORACLE_CHARSET VARCHAR2(30 BYTE) DEFAULT ( substr ( userenv ( 'LANGUAGE') , instr ( userenv ( 'LANGUAGE') , '.') +1 ) ),
FILE_FORMAT VARCHAR2(10 BYTE) NOT NULL
);
I have created a simple MessageFileUpload item and a Submit button on my custom page and written the below code in the CO:
Process Request Code:
if (!pageContext.isBackNavigationFired(false))
{
  TransactionUnitHelper.startTransactionUnit(pageContext, "AttachmentCreateTxn");
  if (!pageContext.isFormSubmission())
  {
    System.out.println("In ProcessRequest of AplAttachmentCO");
    am.invokeMethod("initAplAttachment");
  }
}
else
{
  if (!TransactionUnitHelper.isTransactionUnitInProgress(pageContext, "AttachmentCreateTxn", true))
  {
    OADialogPage dialogPage = new OADialogPage(NAVIGATION_ERROR);
    pageContext.redirectToDialogPage(dialogPage);
  }
}
ProcessFormRequest Code:
if (pageContext.getParameter("Upload") != null)
{
  DataObject fileUploadData = (DataObject) pageContext.getNamedDataObject("FileItem");
  String strFileName = null;
  strFileName = pageContext.getParameter("FileItem");
  if (strFileName == null || "".equals(strFileName))
    throw new OAException("Please select a File for upload");
  fileName = strFileName;
  contentType = (String) fileUploadData.selectValue(null, "UPLOAD_FILE_MIME_TYPE");
  BlobDomain uploadedByteStream = (BlobDomain) fileUploadData.selectValue(null, fileName);
  String strItemDescr = pageContext.getParameter("ItemDesc");
  OAFormValueBean bean = (OAFormValueBean) webBean.findIndexedChildRecursive("AttachmentId");
  String strAttachId = (String) bean.getValue(pageContext);
  System.out.println("Attachment Id:" + strAttachId);
  int aInt = Integer.parseInt(strAttachId);
  Number numAttachId = new Number(aInt);
  Serializable[] methodParams = { fileName, contentType, uploadedByteStream, strItemDescr, numAttachId };
  Class[] methodParamTypes = { fileName.getClass(), contentType.getClass(), uploadedByteStream.getClass(), strItemDescr.getClass(), numAttachId.getClass() };
  am.invokeMethod("setUploadFileRowData", methodParams, methodParamTypes);
  am.invokeMethod("apply");
  System.out.println("Records committed in lobs table");
}
if (pageContext.getParameter("AddAnother") != null)
{
  pageContext.forwardImmediatelyToCurrentPage(null,
    true, // retain AM
    OAWebBeanConstants.ADD_BREAD_CRUMB_YES);
}
if (pageContext.getParameter("cancel") != null)
{
  am.invokeMethod("rollbackShipment");
  TransactionUnitHelper.endTransactionUnit(pageContext, "AttachmentCreateTxn");
}
Code in AM:
public void apply()
{
  getTransaction().commit();
}

public void initAplAttachment()
{
  OAViewObject lobsvo = (OAViewObject) getAplLobsAttachVO1();
  if (!lobsvo.isPreparedForExecution())
    lobsvo.executeQuery();
  Row row = lobsvo.createRow();
  lobsvo.insertRow(row);
  row.setNewRowState(Row.STATUS_INITIALIZED);
}

public void setUploadFileRowData(String fName, String fContentType, BlobDomain fileData, String fItemDescr, Number fAttachId)
{
  AplLobsAttachVOImpl VOImpl = (AplLobsAttachVOImpl) getAplLobsAttachVO1();
  System.out.println("In setUploadFileRowData method");
  System.out.println("In setUploadFileRowData method fAttachId: " + fAttachId);
  System.out.println("In setUploadFileRowData method fName: " + fName);
  System.out.println("In setUploadFileRowData method fContentType: " + fContentType);
  RowSetIterator rowIter = VOImpl.createRowSetIterator("rowIter");
  while (rowIter.hasNext())
  {
    AplLobsAttachVORowImpl viewRow = (AplLobsAttachVORowImpl) rowIter.next();
    viewRow.setFileContentType(fContentType);
    viewRow.setFileData(fileData);
    viewRow.setFileFormat("IGNORE");
    viewRow.setFileName(fName);
  }
  rowIter.closeRowSetIterator();
  System.out.println("setting on fndlobs done");
}
The attachment id is a sequence-generated number, and its defaulting logic is written in the EO:
public void create(AttributeList attributeList)
{
  super.create(attributeList);
  OADBTransaction transaction = getOADBTransaction();
  Number attachmentId = transaction.getSequenceValue("xxapl_po_ship_attch_s");
  setAttachmentId(attachmentId);
}

public void setAttachmentId(Number value)
{
  System.out.println("In ShipmentsEOImpl value::" + value);
  if (getAttachmentId() != null)
  {
    System.out.println("In AplLobsAttachEOImpl AttachmentId::" + (Number) getAttachmentId());
    throw new OAAttrValException(OAException.TYP_ENTITY_OBJECT,
      getEntityDef().getFullName(), // EO name
      getPrimaryKey(), // EO PK
      "AttachmentId", // Attribute Name
      value, // Attribute value
      "AK", // Message product short name
      "FWK_TBX_T_EMP_ID_NO_UPDATE"); // Message name
  }
  if (value != null)
  {
    // Attachment ID must be unique. To verify this, you must check both the
    // entity cache and the database. In this case, it's appropriate
    // to use findByPrimaryKey() because you're unlikely to get a match,
    // and are therefore unlikely to pull a bunch of large objects into memory.
    // Note that findByPrimaryKey() is guaranteed to check all AplLobsAttachment.
    // First it checks the entity cache, then it checks the database.
    OADBTransaction transaction = getOADBTransaction();
    Object[] attachmentKey = { value };
    EntityDefImpl attachDefinition = AplLobsAttachEOImpl.getDefinitionObject();
    AplLobsAttachEOImpl attachment =
      (AplLobsAttachEOImpl) attachDefinition.findByPrimaryKey(transaction, new Key(attachmentKey));
    if (attachment != null)
    {
      throw new OAAttrValException(OAException.TYP_ENTITY_OBJECT,
        getEntityDef().getFullName(), // EO name
        getPrimaryKey(), // EO PK
        "AttachmentId", // Attribute Name
        value, // Attribute value
        "AK", // Message product short name
        "FWK_TBX_T_EMP_ID_UNIQUE"); // Message name
    }
  }
  setAttributeInternal(ATTACHMENTID, value);
}
Issue faced:
When I run the page for the first time, data gets inserted into the custom table perfectly on clicking the upload button,
but when I click the Add Another button on the same page (which basically redirects to the same upload page and increments the attachment id by 1),
I get the error below:
Error
Unable to perform transaction on the record.
Cause: The record contains stale data. The record has been modified by another user.
Action: Cancel the transaction and re-query the record to get the new data.
I have spent the entire day trying to resolve this issue, but no luck.
Any help on this will be appreciated; let me know if I am going wrong anywhere.
Thanks and Regards,
Avinash
Hi,
After inserting the values, please re-execute the VO query.
Also, try redirecting to the page with no AM retention.
Thanks,
Gaurav -
Performance problem with selecting records from BSEG and KONV
Hi,
I am having a performance problem while selecting records from the BSEG and KONV tables. As these two tables contain a large amount of data, the selects take a lot of time. Can anyone help me improve the performance? Thanks in advance.
Regards,
Prashant
Hi,
Here are some steps used to improve performance:
1. Avoid the SELECT...ENDSELECT construct; use SELECT ... INTO TABLE instead.
2. Use a WHERE clause in your SELECT statement to restrict the volume of data retrieved.
3. Design your query to use as many index fields as possible, from left to right, in your WHERE clause.
4. Use FOR ALL ENTRIES in your SELECT statement to retrieve all matching records in one shot.
5. Avoid nested SELECT statements and SELECTs within LOOPs.
6. Avoid INTO CORRESPONDING FIELDS OF TABLE; use INTO TABLE instead.
7. Avoid SELECT * and select only the required fields from the table.
8. Avoid nested loops when working with large internal tables.
9. Use LOOP ... ASSIGNING instead of INTO for table types with large work areas.
10. When in doubt, call transaction SE30, study the examples, and check your code.
11. Whenever using READ TABLE, use the BINARY SEARCH addition to speed up the search, and be sure to sort the internal table before the binary search. As a general rule of thumb, if you are sure the internal table has fewer than about 200 entries, you need not SORT and use BINARY SEARCH, since they are then an overhead rather than a gain.
12. Use CHECK instead of IF/ENDIF whenever possible.
13. Use CASE instead of IF/ENDIF whenever possible.
14. Use MOVE with individual variable/field moves instead of MOVE-CORRESPONDING; it creates more coding but is more efficient. -
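Tip 11 above only works if the internal table is sorted first: a BINARY SEARCH on an unsorted table silently returns wrong rows. The idea is language-neutral; here is a small sketch in Python (not ABAP; the key values are made up for illustration) using the stdlib bisect module to mimic the READ TABLE ... BINARY SEARCH lookup:

```python
import bisect

# A sorted "internal table" of keys; in ABAP this would be SORT itab BY key.
keys = sorted([41, 7, 23, 5, 19])

def read_table_binary(keys, value):
    """Mimic READ TABLE ... BINARY SEARCH: return the index, or None if absent."""
    i = bisect.bisect_left(keys, value)
    if i < len(keys) and keys[i] == value:
        return i
    return None

print(read_table_binary(keys, 23))  # found at index 3 of [5, 7, 19, 23, 41]
print(read_table_binary(keys, 8))   # not found -> None
```

Each lookup costs O(log n) comparisons instead of a full scan, which is why the SORT is worth paying once for a large table.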
Having millions of records in a table, how can we reduce the execution time?
We have developed a report whose monthly background job runs for eighteen hours, because the tables contain millions of records and the report also uses loops. Could you please help me read the records a million at a time, with parallel execution, to reduce the time?
Moderator message - Welcome to SCN.
Please search the forums before asking a question.
Also, please read "The Forum Rules of Engagement" before posting! HOT NEWS!! and How to post code in SCN, and some things NOT to do... and [Asking Good Questions in the Forums to get Good Answers|/people/rob.burbank/blog/2010/05/12/asking-good-questions-in-the-forums-to-get-good-answers] before posting again.
Thread locked.
Rob -
How to update millions of records in a table
I have a table which contains millions of records.
I want to update and commit after every so many records (say 10,000 records). I
don't want to do it in one stroke, as I may end up with rollback segment issue(s). Any
suggestions please!
Thanks in Advance
Group your updates:
1.) Look for a good grouping criterion in your table; an index on it is recommended.
2.) Create a PL/SQL cursor with the grouping criterion in the WHERE clause:
cursor cur_updt (p_crit_id number) is
select * from large_table
where crit_id > p_crit_id;
3.) Now you can commit all your updates in a serial loop. -
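The three steps above can be sketched end to end. Below is a minimal Python sketch using an in-memory SQLite database instead of Oracle (the table name, column names, row count, and batch size are all made up for illustration): it walks the indexed grouping criterion upward in fixed-size ranges and commits after each batch, so no single transaction accumulates millions of undo entries.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE large_table (crit_id INTEGER PRIMARY KEY, val INTEGER)")
conn.executemany("INSERT INTO large_table VALUES (?, ?)",
                 [(i, 0) for i in range(1, 101)])  # 100 stand-in rows
conn.commit()

BATCH = 25   # commit every 25 rows; tune this to your rollback/undo capacity
last_id = 0
while True:
    # Update one batch, walking the indexed grouping criterion upward.
    cur = conn.execute(
        "UPDATE large_table SET val = val + 1 "
        "WHERE crit_id > ? AND crit_id <= ?", (last_id, last_id + BATCH))
    conn.commit()          # release locks/undo after every batch
    if cur.rowcount == 0:  # no rows left beyond last_id -> done
        break
    last_id += BATCH

updated = conn.execute(
    "SELECT COUNT(*) FROM large_table WHERE val = 1").fetchone()[0]
print(updated)  # all 100 rows updated, 25 per commit
```

In real Oracle code the loop body would be the cursor fetch plus UPDATE from the steps above; the point is only the shape of the loop: bounded batch, commit, advance the key.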
What's the best way to delete 2.4 million records from a table?
We have two tables: a production table, and a temp table whose data we want to insert into the production table. The temp table has 2.5 million records, while the production table has billions. We simply want to drop the records that already exist in the production table and then insert the remaining records from temp into production.
Can anyone guide what's the best way to do this?
Thanks,
Waheed.
Waheed Azhar wrote:
The production table is live and data is appended to it on a random basis. If I insert data from the temp table into the prod table, a PK violation exception occurs, because a record already exists in the prod table that we are going to insert from temp.
If you really just want to insert the records and don't want to update the matching ones and you're already on 10g you could use the "DML error logging" facility of the INSERT command, which would log all failed records but succeeds for the remaining ones.
You can create a suitable exception table using the DBMS_ERRLOG.CREATE_ERROR_LOG procedure and then use the "LOG ERRORS INTO" clause of the INSERT command. Note that you can't use the "direct-path" insert mode (APPEND hint) if you expect to encounter UNIQUE constraint violations, because these can't be logged and would cause the direct-path insert to fail. Since this is a "live" table, you probably don't want to use a direct-path insert anyway.
See the manuals for more information: http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_9014.htm#BGBEIACB
Sample taken from 10g manuals:
CREATE TABLE raises (emp_id NUMBER, sal NUMBER
CONSTRAINT check_sal CHECK(sal > 8000));
EXECUTE DBMS_ERRLOG.CREATE_ERROR_LOG('raises', 'errlog');
INSERT INTO raises
SELECT employee_id, salary*1.1 FROM employees
WHERE commission_pct > .2
LOG ERRORS INTO errlog ('my_bad') REJECT LIMIT 10;
SELECT ORA_ERR_MESG$, ORA_ERR_TAG$, emp_id, sal FROM errlog;
ORA_ERR_MESG$                   ORA_ERR_TAG$   EMP_ID   SAL
ORA-02290: check constraint     my_bad         161      7700
(HR.SYS_C004266) violated
If the number of rows in the temp table is not too large and you have a suitable index on the large table for the lookup, you could also try to use a NOT EXISTS clause in the INSERT command:
INSERT INTO <large_table>
SELECT ...
FROM TEMP A
WHERE NOT EXISTS (
SELECT NULL
FROM <large_table> B
WHERE B.<lookup> = A.<key>
);
But you need to check the execution plan, because a hash join using a full table scan on the <large_table> is probably something you want to avoid.
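The NOT EXISTS pattern above can be tried in miniature. Here is a small Python sketch using the stdlib sqlite3 module (the table and column names are made up for illustration); it inserts only the staged rows whose key is not already present in the production table, so no PK violation can occur:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE prod (id INTEGER PRIMARY KEY, val TEXT);
CREATE TABLE temp_stage (id INTEGER, val TEXT);
INSERT INTO prod VALUES (1, 'a'), (2, 'b');
INSERT INTO temp_stage VALUES (2, 'b'), (3, 'c'), (4, 'd');
""")

# Insert only the staged rows whose key is not already in prod,
# mirroring the NOT EXISTS pattern from the reply above.
conn.execute("""
INSERT INTO prod (id, val)
SELECT a.id, a.val
FROM temp_stage a
WHERE NOT EXISTS (SELECT NULL FROM prod b WHERE b.id = a.id)
""")
conn.commit()

rows = [r[0] for r in conn.execute("SELECT id FROM prod ORDER BY id")]
print(rows)  # id 2 was skipped as a duplicate; 3 and 4 were inserted
```

On real billion-row tables the crucial part is, as the reply says, the execution plan: the inner lookup must hit an index on the large table's key, not a full scan.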
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/ -
Team,
In our project, we have a requirement of data migration. We have following scenario and I really appreciate any suggestion from you all on implementation part of it.
Scenario:
We have millions of records to be migrated to the destination SQL database after some transformation.
The source SQL Server is on premises, in the partner's domain, and the destination server is in Azure.
Can you please suggest what would be the best approach to do so?
thanks,
Bishnu
Bishnupriya Pradhan
You can use SSIS itself for this.
Have batch logic which identifies data batches within the source, and then include data flow tasks to do the data transfer to Azure. The batch size should be chosen based on available buffer memory, the number of parallel tasks executing, etc.
You can use an ODBC or ADO.NET connection to connect to Azure.
http://visakhm.blogspot.in/2013/09/connecting-to-azure-instance-using-ssis.html
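The batch-identification step itself is independent of the tool. As a hypothetical sketch (in Python, not SSIS; the key bounds and batch size are made up for illustration), this is the kind of (low, high] key-range list a batching loop would hand to per-batch data flow tasks:

```python
def make_batches(min_key, max_key, batch_size):
    """Yield (low, high] key ranges that together cover [min_key, max_key]."""
    low = min_key - 1
    while low < max_key:
        high = min(low + batch_size, max_key)
        yield (low, high)
        low = high

# E.g. source keys 1..10 in batches of 4 -> three ranges, last one partial.
batches = list(make_batches(1, 10, 4))
print(batches)
```

Each range then becomes one "WHERE key > low AND key <= high" extract, sized to fit buffer memory, and independent ranges can run in parallel.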
Please Mark This As Answer if it solved your issue
Please Vote This As Helpful if it helps to solve your issue
Visakh
My Wiki User Page
My MSDN Page
My Personal Blog
My Facebook Page -
WebInterfaces for Millions of records - Transactional InfoCube
Hi Gerd,
Could you please suggest which one I should use when I'm dealing with millions of records, i.e. a large amount of data
(displaying data from planning folders, or the Web Interface Builder)?
Right now I'm using the Web Interface Builder for planning where the user is allowed to enter values, for millions of records, e.g. revenue forecast planning on sales orders.
Thanks in advance,
Thanks for your time,
Saritha.
Hello Saritha,
Well - technically there is no big difference whether you are using Web interfaces or planning folders. All data has to be selected from the data base, processed by the BPS, the information has to be transmitted to the PC and displayed there. So both front ends should have roughly the same speed.
Sorry, but one question: is it really necessary to work with millions of data records online? The philosophy of the BPS is that you should limit the number of records you use online as much as possible; it should be an amount the user can actually handle online, i.e. manually working with every record (which is probably not possible when handling 1 million records). If a large number of records must be calculated or manipulated, this should be done in a batch job, i.e. a planning sequence that runs in the background. This prevents the system from terminating the operation due to a long runtime (the usual time until a time-out occurs for an online transaction is about 20 minutes) and also gives you more options to control memory use or to parallelize processes (see note 645454).
Best regards,
Gerd Schoeffl
NetWeaver RIG BI