Reduce time consumption of a bulk insert statement
Hi All,
I am trying to insert a large number of records into a Global Temporary Table (GTT) using a query like the one below. The SQL statement is generated as a string and executed as dynamic SQL.
INSERT INTO
my_table ( col1, col2, col3, col4)
SELECT c1, c2, c3, c4
FROM tab1 t1,
tab2 t2,
tab3 t3
WHERE <<dynamically generated where clause>>
UNION ALL
SELECT c1, c2, c3, c4
FROM tab4 t4,
tab5 t5,
tab6 t6
WHERE <<dynamically generated where clause>>
UNION ALL
SELECT c1, c2, c3, c4
FROM tab4 t4,
tab5 t5,
tab6 t6
WHERE <<dynamically generated where clause>>
UNION ALL
SELECT c1, c2, c3, c4
FROM tab4 t4,
tab5 t5,
tab6 t6
WHERE <<dynamically generated where clause>>
The problem is that it takes a considerable amount of time (25-30 seconds) to write the resulting data set into the GTT (my_table). I have checked the SELECT statement above (without the INSERT), and it returns the result set in a few seconds, so I assume it is the INSERT that takes the time. (The SELECT statement returns around 75,000+ records.) The GTT consists of 8 columns, and 6 of those make up the primary key.
Are there any other mechanisms that I can use to efficiently insert a large amount of data into the table? I really appreciate all of your comments and suggestions.
Best Regards,
Nipuna
Hi,
I'm not sure about your query speed :) Don't hold it against me, but sometimes users just run the SELECT statement and wait until the first rows are returned (e.g. using TOAD, which fetches the first 500 rows by default) and are convinced that this is the response time for the whole set. You have to navigate to the last record to see the real response time. But probably this is not the case here. When you insert a large set of data, consider moving such a statement from Forms to the database (you avoid the data round trip between Forms and the DB). If your insert still takes too much time, then try tracing the session that runs your statement and (if possible) monitor your database resource usage. If you're not able to use dbms_trace, you can just look at the V$SESSION views. The following two views can be helpful to investigate the most time-consuming waits and can help you understand the reason for the slowness.
select * from v$session_wait;
select * from v$session_wait_history;
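If the waits point at the INSERT itself, one thing worth testing is a direct-path insert. This is a hedged sketch using the table and column names from the original post; whether it helps depends on your release and on the 6-column primary key, whose index still has to be maintained during the load.

```sql
-- Sketch: direct-path insert into the GTT (names from the original post).
-- The APPEND hint loads above the high-water mark, which can reduce the
-- undo/redo cost of the insert itself. The primary-key index is still
-- maintained, so if most of the 25-30 seconds is index maintenance this
-- will not help much.
INSERT /*+ APPEND */ INTO my_table (col1, col2, col3, col4)
SELECT c1, c2, c3, c4
  FROM tab1 t1, tab2 t2, tab3 t3
 WHERE <<dynamically generated where clause>>;
COMMIT;  -- direct-path inserted rows cannot be read until the commit
```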
Similar Messages
-
ROWCOUNT in BULK Insert Statement
Hi,
I'm using a BULK INSERT statement in a PL/SQL procedure, and after execution of the SQL statement I need to capture the row count.
Same is the case for UPDATE.
The Example code is as mentioned below:
INSERT INTO TBL1
(SELECT VAL1,VAL2 FROM TBL2)
No. of rows inserted needs to be retrieved after execution of this SQL.
Please let me know if there is any way to do it.
Thanks.

SQL> create table emp as select * from scott.emp where 1 = 0 ;
Table created.
SQL> set serveroutput on
SQL> begin
2 insert into emp select * from scott.emp ;
3 dbms_output.put_line('Count='||SQL%RowCount) ;
4 end ;
5 /
Count=14
PL/SQL procedure successfully completed.
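The example above covers a plain INSERT ... SELECT. If the "bulk insert" is a FORALL statement instead, per-iteration counts are available through the SQL%BULK_ROWCOUNT attribute, while SQL%ROWCOUNT still gives the grand total. A hedged sketch (the collection name is illustrative; tbl1/tbl2/val1 follow the question):

```sql
DECLARE
  TYPE t_vals IS TABLE OF tbl2.val1%TYPE;  -- tbl2 as in the question
  l_vals t_vals;
BEGIN
  SELECT val1 BULK COLLECT INTO l_vals FROM tbl2;
  FORALL i IN 1 .. l_vals.COUNT
    INSERT INTO tbl1 (val1) VALUES (l_vals(i));
  -- grand total, as with a single INSERT:
  dbms_output.put_line('Total rows = ' || SQL%ROWCOUNT);
  -- rows affected by each FORALL iteration:
  FOR i IN 1 .. l_vals.COUNT LOOP
    dbms_output.put_line('Iteration ' || i || ': ' || SQL%BULK_ROWCOUNT(i));
  END LOOP;
END;
/
```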
-
How to change Bulk Insert statement from MS SQL to Oracle
Hi All,
Good day, I would like to bulk insert the content of a file into Oracle db. May I know how to change the below MS SQL syntax to Oracle syntax?
Statement statement = objConnection.createStatement();
statement.execute("BULK INSERT [TBL_MERCHANT] FROM '" + MERCHANT_FILE_DIR + "' WITH ( FIELDTERMINATOR = '~~', ROWTERMINATOR = '##' )");
Thanks in advance.
cs.

The Oracle SQL*Loader utility allows you to insert data from a flat file into database tables.
Go to the SQL*Loader links at the following URL to learn more about this utility:
http://otn.oracle.com/docs/products/oracle9i/doc_library/release2/server.920/a96652/toc.htm
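For the specific terminators in the question ('~~' between fields, '##' between rows), a SQL*Loader control file along these lines should work. The data file name and column list are placeholders, not from the thread; the "str" clause sets '##' as the record terminator:

```
-- merchant.ctl (sketch; file name and column names are hypothetical)
LOAD DATA
INFILE 'merchant.dat' "str '##'"
INTO TABLE tbl_merchant
FIELDS TERMINATED BY '~~'
(merchant_id, merchant_name, merchant_city)
```

It would then be invoked from the command line, e.g. sqlldr userid=... control=merchant.ctl, rather than through a JDBC Statement.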
Chandar -
Pro*C compilation error - bulk insert statement
Guys, I am screwed up!
There's this Pro*C application that I am working on. I made some changes in some of the functions and when I compile using Pro*C pre-compiler (Version 9), it throws an error in one of the modules that I have not even touched.
Following is the error:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Pro*C/C++: Release 9.0.1.1.1 - Production on Fri May 25 20:35:30 2007
(c) Copyright 2001 Oracle Corporation. All rights reserved.
System default option values taken from: C:\oracle\ora90\precomp\admin\pcscfg.cfg
Error at line 1512, column 1 in file D:\Siemens\GABS-R\Source\GSM-R_Rating\RTLRating\multimainrate_supp1_cug.pc
EXEC SQL FOR :record_roaming
1
PLS-S-00382, expression is of wrong type
Error at line 1512, column 1 in file D:\Siemens\GABS-R\Source\GSM-R_Rating\RTLRating\multimainrate_supp1_cug.pc
EXEC SQL FOR :record_roaming
1
PLS-S-00000, SQL Statement ignored
Semantic error at line 1512, column 1, file D:\Siemens\GABS-R\Source\GSM-R_Rating\RTLRating\multimainrate_supp1_cug.pc:
EXEC SQL FOR :record_roaming
1
PCC-S-02346, PL/SQL found semantic errors
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
The location it is referring to is:
EXEC SQL FOR :record_roaming
INSERT /*+ PARALLEL(GT_RATED_ROAMING_CALLS) */
INTO GT_RATED_ROAMING_CALLS
(GF_SEQ_NO,
GF_CALL_PRODUCT,
GF_CDR_TYPE,
GF_REC_NO,
GF_REC_STAT,
GF_INTER_REC_NO,
GF_SS_REC_NO,
GF_SS_CODE,
GF_CALLING_NUM,
GF_CALLED_NUM,
GF_CALLING_IMSI,
GF_CALLED_IMSI,
GF_CALLED_NUM_TON,
GF_DIALLED_DIGITS,
GF_LAC,
GF_CELL,
GF_IN_CKTGRP,
GF_IN_CKTID,
GF_OUT_CKTGRP,
GF_OUT_CKTID,
GF_BS_TYPE,
GF_BS_CODE,
GF_DURATION,
GF_THREAD_ID,
GF_CARRIER,
GF_AIR_TIM_CHARGE,
GF_TOLL_CHARGE,
GF_TOLL_UNITS,
GF_ROAM_SURCHARGE,
GF_ROAMING_CHARGES,
GF_ADDITIONAL_CHARGES,
GF_CUST_ID,
GF_CONT_ID,
GF_BILL_PRD,
GF_STATUS,
GF_ERROR_CODE,
GF_RECON_STAT_LANDLINE,
GF_RECON_STAT_ROAMING,
GF_CALLING_IMEI,
GF_CALLED_IMEI,
GF_MSC_ID,
GF_ACTION_CODE,
GF_MSRN,
GF_LONG_DISTANCE_AIR_CHRG,
gf_home_zone_surcharge,
GF_SWITCH_CODE,
GF_BILL_DATE,
GF_CITY_CODE,
GF_BILL_FREQ,
GF_CALL_START_DATE,
GF_CALL_END_DATE,
GF_SMS_TEXT,
GF_TIME_STAMP_AVAILABLE,
GF_TYPE_OF_CALL,
GF_PEAK_OFFPEAK,
GF_PROCESS_DATE,
GF_MATCH_DATA,
GF_ORIG_ZONE_CODE,
GF_DEST_ZONE_CODE,
GF_PRICE_PLAN_CODE)
values
(:gt_rated_roaming_calls,sysdate,:roaming_details_match_data,:l_gf_rcd_zone,:l_gf_price_plan);
Please note that the host variables match exactly the number of fields in the INTO clause (anyway, the error in such a case is different).
Please save my soul!
Sanchit

Hi, check Metalink Note 451413.1 Pro*C Build Fails With Error PCC-02014 on File /usr/include/standards.h
I applied workaround #2, and it worked.
<Moderator edit - deleted MOS Doc content - please do NOT post contents of MOS Docs - this is a violation of your Support agreement> -
ODBC, bulk inserts and dynamic SQL
I am writing an application, running on Windows NT 4 and using the Oracle ODBC driver (8.01.05.00), that inserts many rows at a time (10000+) into an Oracle 8i database.
At present, I am using a stored procedure to insert each row into the database. The stored procedure uses dynamic SQL because I can only determine the table and field names at run time.
Due to the large number of records, it tends to take a while to perform all the inserts. I have tried a number of solutions, such as using batches of SQL statements (e.g. "INSERT...;INSERT...;INSERT..."), but the Oracle ODBC driver only seems to act on the first statement in the batch.
I have also considered using the FORALL statement and the SQL*Loader utility.
My problem with FORALL is that I'm not sure it works on dynamic SQL statements, and even if it did, how do I pass an array of statements to the stored procedure?
I ruled out SQL*Loader because I could not find a way to invoke it from an ODBC statement. Secondly, it requires the spawning of a new process.
What I am really after is something similar to the SQL Server (forgive me!) BULK INSERT statement, where you can simply create an input file with all the records you want to insert and pass it along in an ODBC statement such as "BULK INSERT <filename>".
Any ideas??
Hi,
I faced this same situation years ago (Oracle 7.2!) and had the following alternatives.
1) Use a 3rd party tool such as Sagent or CA Info pump (very pricey $$$)
2) Use VisualC++ and OCI to hook into the array insert routines (there are examples of these in the Oracle Home).
3) Use SQL*Loader (the best performance, but no real control of what's happening).
I ended up using (2) and used the Rogue Wave dbtools.h++ library to speed up the development.
These days, I would also suggest you take a look at Perl on NT (www.activestate.com) and the DBlib modules at www.perl.org. I believe they will also do bulk loading.
Your problem is that your program is using Oracle ODBC, when you should be using Oracle OCI for best performance.
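On the FORALL question raised above: in releases from Oracle 9i onward, FORALL can drive a dynamic statement through EXECUTE IMMEDIATE, so the table name can still be decided at run time while the rows are bound as a collection in one batch. A hedged sketch with hypothetical names (DBMS_ASSERT guards the concatenated identifier on 10g and later; omit it on older releases):

```sql
CREATE OR REPLACE PROCEDURE bulk_ins (
  p_table IN VARCHAR2,              -- table name known only at run time
  p_vals  IN sys.odcivarchar2list   -- one column, for illustration
) AS
BEGIN
  -- One round trip per batch instead of one per row
  FORALL i IN 1 .. p_vals.COUNT
    EXECUTE IMMEDIATE
      'INSERT INTO ' || dbms_assert.simple_sql_name(p_table) ||
      ' (c1) VALUES (:v)'
    USING p_vals(i);
END;
/
```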
-
Using of bulk insert create problem
How do I use a bulk insert statement from my JSP so that the file contents are loaded into the database?
I've tried this one:
bulk insert test from test1.txt with (fieldterminator=':')
but it gives an error:
"You don't have permission to use the bulk statement."
Could anyone help me get rid of this problem?
Thanks in advance.

Most likely the person responsible for permissions in your database could help you out there. If you're here looking for a way to execute commands that you aren't authorized to execute, you are wasting your time.
-
[Forum FAQ] How to use multiple field terminators in BULK INSERT or BCP command line
Introduction
Some people want to know if we can have multiple field terminators in BULK INSERT or BCP commands, and how to implement multiple field terminators in BULK INSERT or BCP commands.
Solution
For character data fields, optional terminating characters allow you to mark the end of each field in a data file with a field terminator, as well as the end of each row with a row terminator. If a terminator character occurs within the data, it is interpreted as a terminator, not as data, and the data after that character is interpreted as belonging to the next field or record. I have done a test; if you want to use BULK INSERT or BCP commands with multiple field terminators, you can refer to the following commands.
In Windows command line,
bcp <Databasename.schema.tablename> out "<path>" -c -t -r -T
For example, you can export data from the Department table with bcp command and use the comma and colon (,:) as one field terminator.
bcp AdventureWorks.HumanResources.Department out C:\myDepartment.txt -c -t ,: -r \n -T
The txt file as follows:
However, if you try to bcp using multiple separate field terminators, as in the following command, it will still use only the last terminator defined, by default.
bcp AdventureWorks.HumanResources.Department in C:\myDepartment.txt -c -t , -r \n -t: -T
The txt file as follows:
Note also that extra field terminators imply extra fields. With the comma-separated line below,
column1,,column2,,,column3
you might expect only 3 fields (column1, column2 and column3), but after testing there are in fact 6 fields here. That is the significance of a field terminator (the comma in this case).
Meanwhile, using BULK INSERT to import the data of the data file into the SQL table, if you specify terminator for BULK import, you can only set multiple characters as one terminator in the BULK INSERT statement.
USE <testdatabase>;
GO
BULK INSERT <your table> FROM '<Path>'
WITH (
DATAFILETYPE = 'char | native | widechar | widenative',
FIELDTERMINATOR = 'field_terminator'
);
For example, using BULK INSERT to import the data of C:\myDepartment.txt data file into the DepartmentTest table, the field terminator (,:) must be declared in the statement.
In SQL Server Management Studio Query Editor:
BULK INSERT AdventureWorks.HumanResources.DepartmentTest FROM 'C:\myDepartment.txt'
WITH (
DATAFILETYPE = 'char',
FIELDTERMINATOR = ',:'
);
The new table then contains rows like the following:
We cannot declare multiple field terminators (, and :) in the query statement; with the following format, a duplicate-option error will occur.
In SQL Server Management Studio Query Editor:
BULK INSERT AdventureWorks.HumanResources.DepartmentTest FROM 'C:\myDepartment.txt'
WITH (
DATAFILETYPE = 'char',
FIELDTERMINATOR = ',',
FIELDTERMINATOR = ':'
);
However, if you want to use a data file with fewer or more fields than the table, you can handle this by setting the extra field length to 0 for fewer fields, or by omitting or skipping the extra fields during the bulk copy procedure.
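Since FIELDTERMINATOR cannot be repeated in the WITH clause, the supported way to give each field its own terminator is a bcp format file, referenced via FORMATFILE in BULK INSERT or -f with bcp. A hedged non-XML format file sketch for a hypothetical three-column character table where field 1 ends with ',' and field 2 with ':':

```
9.0
3
1  SQLCHAR  0  100  ","     1  col1  SQL_Latin1_General_CP1_CI_AS
2  SQLCHAR  0  100  ":"     2  col2  SQL_Latin1_General_CP1_CI_AS
3  SQLCHAR  0  100  "\r\n"  3  col3  SQL_Latin1_General_CP1_CI_AS
```

It could then be used as, e.g., BULK INSERT dbo.MyTable FROM 'C:\data.txt' WITH (FORMATFILE = 'C:\data.fmt'); — table and paths are placeholders here.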
More Information
For more information about field terminators, you can review the following articles:
http://technet.microsoft.com/en-us/library/aa196735(v=sql.80).aspx
http://social.technet.microsoft.com/Forums/en-US/d2fa4b1e-3bd4-4379-bc30-389202a99ae2/multiple-field-terminators-in-bulk-insert-or-bcp?forum=sqlgetsta
http://technet.microsoft.com/en-us/library/ms191485.aspx
http://technet.microsoft.com/en-us/library/aa173858(v=sql.80).aspx
http://technet.microsoft.com/en-us/library/aa173842(v=sql.80).aspx
Applies to
SQL Server 2012
SQL Server 2008R2
SQL Server 2005
SQL Server 2000
Please click to vote if the post helps you. This can be beneficial to other community members reading the thread.

Thanks,
Is this a supported scenario, or does it use unsupported features?
For example, can we call exec [ReportServer].dbo.AddEvent @EventType='TimedSubscription', @EventData='b64ce7ec-d598-45cd-bbc2-ea202e0c129d'
in a supported way?
Thanks! Josh -
Multiple field terminators in BULK INSERT or BCP
Can I have multiple field terminators in BULK INSERT or BCP commands, i.e. define more than one for a file?
Please provide an example.

Hi stellios,
For character data fields, optional terminating characters allow you to mark the end of each field in a data file with a field terminator and the end of each row with a row terminator. If a terminator character occurs within the data, it is interpreted as a terminator, not as data, and the data after that character is interpreted as belonging to the next field or record. I did a test; if you want to use BULK INSERT or BCP commands with multiple field terminators, you can refer to the following commands.
In Windows command line:
bcp AdventureWorks.HumanResources.Department in C:\myDepartment.txt -c -t ,: -r \n -T
---you can use the two characters as one terminator; this works well
--if you try to bcp using multiple separate field terminators, as in the following command, it will still use only the last terminator defined by default
bcp AdventureWorks.HumanResources.Department in C:\myDepartment.txt -c -t , -r \n -t: -T
If you specify a terminator for BULK import, you can only set one terminator per field in the BULK INSERT statement.
USE <testdatabase>;
GO
BULK INSERT <your table> FROM '<Path>'
WITH (
DATAFILETYPE = 'char',
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
);
GO
For more information about field terminators, you can review the following articles:
http://technet.microsoft.com/en-us/library/aa196735(v=sql.80).aspx
http://technet.microsoft.com/en-us/library/ms191485.aspx
Regards,
Sofiya Li
TechNet Community Support -
FORALL bulk insert ..strange behaviour
Hi all..
I have the following problem..
I use a FORALL bulk INSERT statement to insert a set of values using a collection that has only one row. The thing is, I get an 'ORA-01400: cannot insert NULL into <schema>.<table>.<column>' error message, even though the row has been inserted into the table!
Any ideas why this is happening? Here is the sample code.
The strange thing is that the cursor returns 1 row and the array also gets 1 row.
FUNCTION MAIN() RETURN BOOLEAN IS
-- This cursor retrieves all necessary values from CRD table to be inserted into PDCS_DEFERRED_RELATIONSHIP table
CURSOR mycursor IS
SELECT key1,
key2,
column1,
date1,
date2,
txn_date
FROM mytable pc
WHERE
-- create an array and a type for the scancrd cursor
type t_arraysample IS TABLE OF mycursor%ROWTYPE;
myarrayofvalues t_arraysample;
TYPE t_target IS TABLE OF mytable%ROWTYPE;
la_target t_target := t_target();
BEGIN
OPEN mycursor;
FETCH mycursor BULK COLLECT
INTO myarrayofvalues
LIMIT 1000;
myarrayofvalues.extend(1000);
FOR x IN 1 .. myarrayofvalues.COUNT
LOOP
-- fetch variables into arrays
gn_index := gn_index + 1;
la_target(gn_index).key1 := myarrayofvalues(x).key1;
la_target(gn_index).key2 := myarrayofvalues(x).key2;
la_target(gn_index).column1 := myarrayofvalues(x).column1;
la_target(gn_index).date1 := myarrayofvalues(x).date1;
la_target(gn_index).date2 := myarrayofvalues(x).date2;
la_target(gn_index).txn_date := myarrayofvalues(x).txn_date;
END LOOP;
-- call function to insert/update TABLE
IF NOT MyFunction(la_target) THEN
ROLLBACK;
RAISE genericError;
ELSE COMMIT;
END IF;
CLOSE mycursor;
END;
FUNCTION MyFunction(t_crd IN t_arraysample) RETURN BOOLEAN IS
BEGIN
FORALL x IN t_crd.FIRST .. t_crd.LAST
INSERT INTO mytable
VALUES t_crd(x);
END; -
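A likely cause of the ORA-01400 in the sample above: after the BULK COLLECT fetches the single row, myarrayofvalues.extend(1000) appends 1,000 empty (all-NULL) elements, so the copy loop and the FORALL also process NULL rows. The real row is inserted first, then a NULL row raises the error. A hedged sketch of the core pattern without the extra EXTEND (names as in the post):

```sql
OPEN mycursor;
FETCH mycursor BULK COLLECT INTO myarrayofvalues LIMIT 1000;
CLOSE mycursor;
-- Do NOT extend here: BULK COLLECT has already sized the collection
FOR x IN 1 .. myarrayofvalues.COUNT LOOP
  la_target.EXTEND;                 -- grow the target one row at a time
  la_target(la_target.COUNT).key1 := myarrayofvalues(x).key1;
  -- ... copy the remaining columns the same way ...
END LOOP;
FORALL x IN 1 .. la_target.COUNT
  INSERT INTO mytable VALUES la_target(x);
```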
Number of rows inserted is different in bulk insert using select statement
I am facing a problem with a bulk insert using a SELECT statement.
My sql statement is like below.
strQuery :='INSERT INTO TAB3
(SELECT t1.c1,t2.c2
FROM TAB1 t1, TAB2 t2
WHERE t1.c1 = t2.c1
AND t1.c3 between 10 and 15 AND)' ....... some other conditions.
EXECUTE IMMEDIATE strQuery ;
These SQL statements are inside a procedure. And this procedure is called from C#.
The number of rows returned by the "SELECT" query is 70.
On the very first call of this procedure, the number of rows inserted using strQuery is *70*.
But on the next call (in the same transaction) of the procedure, the number of rows inserted is only *50*.
And further, if we repeat calling this procedure, it will sometimes insert 70 rows, sometimes 50, etc. It is showing some inconsistency.
On my initial analysis it was found that the default optimizer mode is "ALL_ROWS". When I changed the optimizer mode to "rule", this issue does not occur.
Has anybody faced this kind of issue?
Can anyone tell me what the reason for this issue might be? Is there any other workaround for it?
I am using Oracle 10g R2 version.
Edited by: user13339527 on Jun 29, 2010 3:55 AM
Edited by: user13339527 on Jun 29, 2010 3:56 AM

You most likely have concurrent transactions on the database:
>
By default, Oracle Database permits concurrently running transactions to modify, add, or delete rows in the same table, and in the same data block. Changes made by one transaction are not seen by another concurrent transaction until the transaction that made the changes commits.
>
If you want to make sure that the same query always retrieves the same rows in a given transaction you need to use transaction isolation level serializable instead of read committed which is the default in Oracle.
Please read http://download.oracle.com/docs/cd/E11882_01/appdev.112/e10471/adfns_sqlproc.htm#ADFNS00204.
You can try to run your test with:
set transaction isolation level serializable;

If the problem is not solved, you need to search possible Oracle bugs on My Oracle Support with keywords
like:
wrong results 10.2

Edited by: P. Forstmann on June 29, 2010 13:46 -
Insert statement taking more time
Hi,
The insert happens very slowly after SQL*Loader runs in my program. Please find below the workflow of my program.
1) SQLLDR is called; it inserts around 400,000 (4 lakh) records into the 'TEMP' table using a direct path load. Response time is good here.
2) After SQLLDR has finished its job, my procedure is called. There, every cursor statement works fine, but when it comes to the 'INSERT' statement it takes almost 40 minutes.
3) The insert statement is like this:
INSERT /*+ append */ INTO HISTORY_TABLE(<COLUMN1>,<COLUMN2>,..etc) SELECT <COLUMN1>,<COLUMN2>,..etc FROM TEMP_TABLE;
4) It selects from the temp table the records that were inserted during sqlldr (direct=true), before the procedure call.
5) I checked the explain plan for the INSERT statement; it shows conventional path loading:
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
| 0 | INSERT STATEMENT | | 409K| 143M| 6752 (2)|
| 1 | LOAD AS SELECT | HISTORY_TABLE | | | |
| 2 | TABLE ACCESS FULL | TEMP_TABLE | 409K| 143M| 6752 (2)|
6) Since I have no WHERE condition in my insert statement, it goes for a full table scan.
Kindly advise how to improve its performance.
My DB is Oracle 11g R2 (11.2.0.3.0).
OS: Windows Server 2008 R2.
Tkprof for the session:
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 1 98.10 1860.58 347770 74736 1711253 407077
Fetch 0 0.00 0.00 0 0 0 0
total 1 98.10 1860.58 347770 74736 1711253 407077
Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 84 (recursive depth: 1)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 256304 6.61 1299.01
direct path read temp 273 3.47 14.99
log buffer space 22 0.75 3.84
log file switch completion 7 19.48 30.70
log file switch (checkpoint incomplete) 16 8.12 17.15
db file parallel read 2 0.07 0.09
log file switch (private strand flush incomplete)
3 0.32 0.74
buffer busy waits 4 0.00 0.00
undo segment extension 2 0.00 0.00
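The wait profile above is dominated by db file sequential read (~1,299 s of single-block reads), which during an INSERT /*+ APPEND */ ... SELECT is typically maintenance of the target table's indexes rather than the insert itself; HISTORY_TABLE has several indexes, per the DDL later in this thread. One option worth testing in a maintenance window is to make the indexes unusable for the load and rebuild them afterwards (with skip_unusable_indexes = TRUE, the default, queries simply ignore an unusable index until the rebuild). A hedged sketch, using one of the index names from the DDL:

```sql
-- Sketch: skip index maintenance during the bulk load, rebuild after.
ALTER INDEX idx_allswt_pan UNUSABLE;     -- repeat for each index on the table
INSERT /*+ APPEND */ INTO history_table
SELECT * FROM temp_table;
COMMIT;
ALTER INDEX idx_allswt_pan REBUILD;      -- repeat for each index on the table
```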
SQL> show parameter optimizer
NAME TYPE VALUE
optimizer_capture_sql_plan_baselines boolean FALSE
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 11.2.0.3
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
optimizer_use_invisible_indexes boolean FALSE
optimizer_use_pending_statistics boolean FALSE
optimizer_use_sql_plan_baselines boolean TRUE
SQL> show parameter db_file_multi
NAME TYPE VALUE
db_file_multiblock_read_count integer 128
SQL> show parameter db_block_size
NAME TYPE VALUE
db_block_size integer 8192
SQL> show parameter cursor_sharing
NAME TYPE VALUE
cursor_sharing string EXACT
SQL>
SQL> l
1 select
2* sname,pname,pval1,pval2 from sys.aux_stats$
SQL> /
SNAME PNAME PVAL1 PVAL2
SYSSTATS_INFO STATUS COMPLETED
SYSSTATS_INFO DSTART 11-03-2011 06:38
SYSSTATS_INFO DSTOP 11-03-2011 06:38
SYSSTATS_INFO FLAGS 1
SYSSTATS_MAIN CPUSPEEDNW 1720.20725
SYSSTATS_MAIN IOSEEKTIM 10
SYSSTATS_MAIN IOTFRSPEED 4096
SYSSTATS_MAIN SREADTIM
SYSSTATS_MAIN MREADTIM
SYSSTATS_MAIN CPUSPEED
SYSSTATS_MAIN MBRC
SNAME PNAME PVAL1 PVAL2
SYSSTATS_MAIN MAXTHR
SYSSTATS_MAIN SLAVETHR
Thanks
Faiz

Hi,
Below I append both table definitions. Please check:
CREATE TABLE HISTORY_TABLE
( DAT_TIM VARCHAR2(19 BYTE),
REC_TYP VARCHAR2(2 BYTE),
AUTH_PPD VARCHAR2(4 BYTE),
LN_TERM VARCHAR2(4 BYTE),
FIID_TERM VARCHAR2(4 BYTE),
TERM_ID VARCHAR2(16 BYTE),
LN_ISSUER VARCHAR2(4 BYTE),
FIID_ISSUER VARCHAR2(20 BYTE),
PAN VARCHAR2(19 BYTE),
MBR_NUM VARCHAR2(3 BYTE),
BRCH_ID VARCHAR2(4 BYTE),
REGN_ID VARCHAR2(4 BYTE),
USER_FLD1X VARCHAR2(2 BYTE),
TYP_CDE VARCHAR2(2 BYTE),
TYP VARCHAR2(4 BYTE),
RTE_STAT VARCHAR2(2 BYTE),
ORIGINATOR CHAR(1 BYTE),
RESPONDER CHAR(1 BYTE),
ENTRY_TIM VARCHAR2(19 BYTE),
EXIT_TIM VARCHAR2(19 BYTE),
RE_ENTRY_TIM VARCHAR2(19 BYTE),
TRAN_DAT VARCHAR2(6 BYTE),
TRAN_TIM VARCHAR2(8 BYTE),
POST_DAT VARCHAR2(6 BYTE),
ACQ_ICHG_SETL_DAT VARCHAR2(6 BYTE),
ISS_ICHG_SETL_DAT VARCHAR2(6 BYTE),
SEQ_NUM VARCHAR2(12 BYTE),
TERM_TYP VARCHAR2(2 BYTE),
TIM_OFST VARCHAR2(5 BYTE),
ACQ_INST_ID_NUM VARCHAR2(11 BYTE),
RCV_INST_ID_NUM VARCHAR2(11 BYTE),
T_CDE VARCHAR2(2 BYTE),
T_FROM VARCHAR2(2 BYTE),
T_TO VARCHAR2(2 BYTE),
FROM_ACCT VARCHAR2(24 BYTE),
USER_FLD1 VARCHAR2(1 BYTE),
TO_ACCT VARCHAR2(19 BYTE),
MULT_ACCT VARCHAR2(1 BYTE),
AMT1 VARCHAR2(19 BYTE),
AMT2 VARCHAR2(19 BYTE),
AMT3 VARCHAR2(19 BYTE),
DEP_BAL_CR VARCHAR2(10 BYTE),
DEP_TYP VARCHAR2(1 BYTE),
RESP_BYTE1 VARCHAR2(3 BYTE),
RESP_BYTE2 VARCHAR2(3 BYTE),
TERM_NAME_LOC VARCHAR2(25 BYTE),
TERM_OWNER_NAME VARCHAR2(40 BYTE),
TERM_CITY VARCHAR2(13 BYTE),
TERM_ST_X VARCHAR2(3 BYTE),
TERM_CNTRY_X VARCHAR2(2 BYTE),
OSEQ_NUM VARCHAR2(12 BYTE),
OTRAN_DAT VARCHAR2(4 BYTE),
OTRAN_TIM VARCHAR2(8 BYTE),
B24_POST_DAT VARCHAR2(4 BYTE),
ORIG_CRNCY_CDE VARCHAR2(3 BYTE),
AUTH_CRNCY_CDE VARCHAR2(3 BYTE),
AUTH_CONV_RATE VARCHAR2(8 BYTE),
SETL_CRNCY_CDE VARCHAR2(3 BYTE),
SETL_CONV_RATE VARCHAR2(8 BYTE),
CONV_DAT_TIM VARCHAR2(19 BYTE),
RVSL_RSN VARCHAR2(2 BYTE),
PIN_OFST VARCHAR2(16 BYTE),
SHRG_GRP VARCHAR2(1 BYTE),
DEST_ORDER VARCHAR2(1 BYTE),
AUTH_ID_RESP VARCHAR2(6 BYTE),
IMP_IND VARCHAR2(1 BYTE),
AVAIL_IMP VARCHAR2(2 BYTE),
LEDG_IMP VARCHAR2(2 BYTE),
HLD_AMT_IMP VARCHAR2(2 BYTE),
CAF_REFR_IND VARCHAR2(1 BYTE),
USER_FLD3 VARCHAR2(1 BYTE),
DEP_SETL_IMP_FLG VARCHAR2(1 BYTE),
ADJ_SETL_IMP_FLG VARCHAR2(1 BYTE),
PBF1 VARCHAR2(1 BYTE),
PBF2 VARCHAR2(1 BYTE),
PBF3 VARCHAR2(1 BYTE),
PBF4 VARCHAR2(1 BYTE),
USER_FLD4 VARCHAR2(16 BYTE),
FRWD_INST_ID_NUM VARCHAR2(11 BYTE),
CRD_ACCPT_ID_NUM VARCHAR2(40 BYTE),
CRD_ISS_ID_NUM VARCHAR2(11 BYTE),
USER_FLD6 VARCHAR2(1 BYTE),
FILE_NAME VARCHAR2(100 BYTE),
ERR_FLAG CHAR(1 BYTE),
AMT2_ACTUAL VARCHAR2(20 BYTE),
ID_COL NUMBER(23,0),
RVSL_FLAG CHAR(1 BYTE),
SWRE_ID VARCHAR2(20 BYTE),
GAC_ID VARCHAR2(20 BYTE),
INS_USER NUMBER(5,0),
PART_CODE NUMBER(3,0),
ISS_RECON NUMBER(1,0),
ACQ_RECON NUMBER(1,0),
CROSS_BRANCH CHAR(1 BYTE),
CONSORTIUM_CODE NUMBER(3,0),
FROM_HOST VARCHAR2(1 BYTE),
FROM_HOST_ACQ VARCHAR2(1 BYTE),
AUDIT_NUM VARCHAR2(12 BYTE),
CAPTURE_CODE VARCHAR2(1 BYTE),
RESP_DAT_TIME VARCHAR2(19 BYTE),
PAN_SEQ_NUM NUMBER(1,0),
SERVICE_CODE VARCHAR2(3 BYTE),
ISS_BIN VARCHAR2(6 BYTE),
POS_DATA VARCHAR2(12 BYTE),
SECURITY_DATA VARCHAR2(8 BYTE),
CASHBACK_AMT VARCHAR2(15 BYTE),
REPLACEMENT_AMOUNT VARCHAR2(15 BYTE),
SETTL_AMT VARCHAR2(16 BYTE),
TRAN_FEE VARCHAR2(15 BYTE),
SETL_FEE VARCHAR2(15 BYTE),
MERC_CODE VARCHAR2(4 BYTE),
NTWORK_DATA VARCHAR2(12 BYTE),
PRIVATE_DATA_C_100 VARCHAR2(100 BYTE),
PAYMENT_INFO VARCHAR2(50 BYTE),
SURCHARGE_FEE VARCHAR2(15 BYTE),
SURC_BILL_AMT VARCHAR2(19 BYTE),
PROCESSING_CODE VARCHAR2(7 BYTE),
FRWD_CNTRY_CODE VARCHAR2(3 BYTE),
NTWORK_CODE VARCHAR2(2 BYTE),
FUNCTION_CODE VARCHAR2(3 BYTE),
REASON_CODE VARCHAR2(4 BYTE),
FEES VARCHAR2(10 BYTE),
SUR_CHARGE VARCHAR2(10 BYTE),
MESSAGE_TYPE VARCHAR2(4 BYTE),
APPROVE_STATUS VARCHAR2(10 BYTE),
CPS_TRAN_ID VARCHAR2(20 BYTE),
BANKTYPE_CODE VARCHAR2(5 BYTE),
RRB_BIN_CODE VARCHAR2(7 BYTE),
T_TYPE VARCHAR2(5 BYTE),
FEE_CRNCY VARCHAR2(25 BYTE),
FEE_INDICATOR VARCHAR2(1 BYTE),
FEE_TYPE VARCHAR2(84 BYTE),
CREDIT_BANK_CODE VARCHAR2(11 BYTE),
CREDIT_BR_CODE VARCHAR2(8 BYTE),
DEBIT_BANK_CODE VARCHAR2(11 BYTE),
ACTION_CODE VARCHAR2(3 BYTE),
ZEROS VARCHAR2(6 BYTE),
OTRACE_AUDIT_NO VARCHAR2(6 BYTE),
OFEE_TYPE1 VARCHAR2(2 BYTE),
OFEE_CRNCY1 VARCHAR2(3 BYTE),
OFEE_AMNT VARCHAR2(16 BYTE),
OFEE_INDICATOR VARCHAR2(1 BYTE),
SPACES VARCHAR2(2 BYTE),
ACQ_BANK_CODE VARCHAR2(6 BYTE),
CASHAT_POS VARCHAR2(16 BYTE),
LTS_STATUS VARCHAR2(30 BYTE),
APP_CODE VARCHAR2(6 BYTE),
CARD_ACCEPTID VARCHAR2(15 BYTE),
RESPONSE_CODE VARCHAR2(15 BYTE),
RES_RECVD_HOST VARCHAR2(1 BYTE),
DEVICE_ID VARCHAR2(16 BYTE),
RECORD_TYPE VARCHAR2(1 BYTE),
RRT_DEBIT_BANK_CODE VARCHAR2(11 BYTE),
ORIG_TYP VARCHAR2(6 BYTE),
DEVICE_TYPE VARCHAR2(3 BYTE),
ORGTRAN_CODE VARCHAR2(4 BYTE),
BILL_CRNCY VARCHAR2(3 BYTE),
BILL_AMNT VARCHAR2(19 BYTE),
OTRAN_AMNT VARCHAR2(19 BYTE),
INS_DATE DATE,
RECON_FLAG VARCHAR2(2 BYTE),
TRAN_ID VARCHAR2(25 BYTE)
) SEGMENT CREATION IMMEDIATE
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
NOCOMPRESS LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE TBS_TST ;
CREATE INDEX CMS_ALL_SWT_DATA_INDEX ON HISTORY_TABLE (TYP, DEVICE_ID, FROM_ACCT, AMT1, NTWORK_CODE, ISS_BIN, POST_DAT, RESP_BYTE1, DEVICE_TYPE, T_CDE, PRIVATE_DATA_C_100, REASON_CODE)
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE TBS_IDX ;
CREATE INDEX CMS_ALL_SWT_DATA_INDEX1 ON HISTORY_TABLE (TO_NUMBER(AMT1))
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE TBS_IDX ;
CREATE INDEX CMS_ALL_SWT_DATA_INDEX_FN ON HISTORY_TABLE (SUBSTR(DEVICE_ID,4))
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE TBS_IDX ;
CREATE INDEX CMS_ALL_SWT_DATA_INDEX_IDX ON HISTORY_TABLE (T_CDE, PRIVATE_DATA_C_100, ISS_BIN)
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE TBS_IDX ;
CREATE INDEX CMS_ALL_SWT_DATA_INDEX_TST ON HISTORY_TABLE (TYP, POST_DAT, RESP_BYTE1, DEVICE_TYPE, AMT1)
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE TBS_IDX ;
CREATE INDEX CMS_ALL_SWT_DATA_INDEX_TST1 ON HISTORY_TABLE (DEVICE_TYPE, TO_NUMBER(AMT1))
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE TBS_IDX ;
CREATE INDEX IDX_ALLSWT_PAN ON HISTORY_TABLE (PAN)
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE TBS_IDX ;
Table Definition for TEMP Table:
CREATE TABLE REC_TLF_TEMP
( DAT_TIM VARCHAR2(19 BYTE),
REC_TYP VARCHAR2(2 BYTE),
AUTH_PPD VARCHAR2(4 BYTE),
LN_TERM VARCHAR2(4 BYTE),
FIID_TERM VARCHAR2(4 BYTE),
TERM_ID VARCHAR2(16 BYTE),
LN_ISSUER VARCHAR2(4 BYTE),
FIID_ISSUER VARCHAR2(20 BYTE),
PAN VARCHAR2(19 BYTE),
MBR_NUM VARCHAR2(3 BYTE),
BRCH_ID VARCHAR2(4 BYTE),
REGN_ID VARCHAR2(4 BYTE),
USER_FLD1X VARCHAR2(2 BYTE),
TYP_CDE VARCHAR2(2 BYTE),
TYP VARCHAR2(4 BYTE),
RTE_STAT VARCHAR2(2 BYTE),
ORIGINATOR CHAR(1 BYTE),
RESPONDER CHAR(1 BYTE),
ENTRY_TIM VARCHAR2(19 BYTE),
EXIT_TIM VARCHAR2(19 BYTE),
RE_ENTRY_TIM VARCHAR2(19 BYTE),
TRAN_DAT VARCHAR2(6 BYTE),
TRAN_TIM VARCHAR2(8 BYTE),
POST_DAT VARCHAR2(6 BYTE),
ACQ_ICHG_SETL_DAT VARCHAR2(6 BYTE),
ISS_ICHG_SETL_DAT VARCHAR2(6 BYTE),
SEQ_NUM VARCHAR2(12 BYTE),
TERM_TYP VARCHAR2(2 BYTE),
TIM_OFST VARCHAR2(5 BYTE),
ACQ_INST_ID_NUM VARCHAR2(11 BYTE),
RCV_INST_ID_NUM VARCHAR2(11 BYTE),
T_CDE VARCHAR2(2 BYTE),
T_FROM VARCHAR2(2 BYTE),
T_TO VARCHAR2(2 BYTE),
FROM_ACCT VARCHAR2(24 BYTE),
USER_FLD1 VARCHAR2(1 BYTE),
TO_ACCT VARCHAR2(19 BYTE),
MULT_ACCT VARCHAR2(1 BYTE),
AMT1 VARCHAR2(19 BYTE),
AMT2 VARCHAR2(19 BYTE),
AMT3 VARCHAR2(19 BYTE),
DEP_BAL_CR VARCHAR2(10 BYTE),
DEP_TYP VARCHAR2(1 BYTE),
RESP_BYTE1 VARCHAR2(3 BYTE),
RESP_BYTE2 VARCHAR2(3 BYTE),
TERM_NAME_LOC VARCHAR2(25 BYTE),
TERM_OWNER_NAME VARCHAR2(40 BYTE),
TERM_CITY VARCHAR2(13 BYTE),
TERM_ST_X VARCHAR2(3 BYTE),
TERM_CNTRY_X VARCHAR2(2 BYTE),
OSEQ_NUM VARCHAR2(12 BYTE),
OTRAN_DAT VARCHAR2(4 BYTE),
OTRAN_TIM VARCHAR2(8 BYTE),
B24_POST_DAT VARCHAR2(4 BYTE),
ORIG_CRNCY_CDE VARCHAR2(3 BYTE),
AUTH_CRNCY_CDE VARCHAR2(3 BYTE),
AUTH_CONV_RATE VARCHAR2(8 BYTE),
SETL_CRNCY_CDE VARCHAR2(3 BYTE),
SETL_CONV_RATE VARCHAR2(8 BYTE),
CONV_DAT_TIM VARCHAR2(19 BYTE),
RVSL_RSN VARCHAR2(2 BYTE),
PIN_OFST VARCHAR2(16 BYTE),
SHRG_GRP VARCHAR2(1 BYTE),
DEST_ORDER VARCHAR2(1 BYTE),
AUTH_ID_RESP VARCHAR2(6 BYTE),
IMP_IND VARCHAR2(1 BYTE),
AVAIL_IMP VARCHAR2(2 BYTE),
LEDG_IMP VARCHAR2(2 BYTE),
HLD_AMT_IMP VARCHAR2(2 BYTE),
CAF_REFR_IND VARCHAR2(1 BYTE),
USER_FLD3 VARCHAR2(1 BYTE),
DEP_SETL_IMP_FLG VARCHAR2(1 BYTE),
ADJ_SETL_IMP_FLG VARCHAR2(1 BYTE),
PBF1 VARCHAR2(1 BYTE),
PBF2 VARCHAR2(1 BYTE),
PBF3 VARCHAR2(1 BYTE),
PBF4 VARCHAR2(1 BYTE),
USER_FLD4 VARCHAR2(16 BYTE),
FRWD_INST_ID_NUM VARCHAR2(11 BYTE),
CRD_ACCPT_ID_NUM VARCHAR2(40 BYTE),
CRD_ISS_ID_NUM VARCHAR2(11 BYTE),
USER_FLD6 VARCHAR2(1 BYTE),
FILE_NAME VARCHAR2(100 BYTE),
ERR_FLAG CHAR(1 BYTE),
AMT2_ACTUAL VARCHAR2(20 BYTE),
ID_COL NUMBER(23,0),
RVSL_FLAG CHAR(1 BYTE),
SWRE_ID VARCHAR2(20 BYTE),
GAC_ID VARCHAR2(20 BYTE),
INS_USER NUMBER(5,0),
PART_CODE NUMBER(3,0),
ISS_RECON NUMBER(1,0),
ACQ_RECON NUMBER(1,0),
CROSS_BRANCH CHAR(1 BYTE),
CONSORTIUM_CODE NUMBER(3,0),
FROM_HOST VARCHAR2(1 BYTE),
FROM_HOST_ACQ VARCHAR2(1 BYTE),
AUDIT_NUM VARCHAR2(12 BYTE),
CAPTURE_CODE VARCHAR2(1 BYTE),
RESP_DAT_TIME VARCHAR2(19 BYTE),
PAN_SEQ_NUM NUMBER(1,0),
SERVICE_CODE VARCHAR2(3 BYTE),
ISS_BIN VARCHAR2(6 BYTE),
POS_DATA VARCHAR2(12 BYTE),
SECURITY_DATA VARCHAR2(8 BYTE),
CASHBACK_AMT VARCHAR2(15 BYTE),
REPLACEMENT_AMOUNT VARCHAR2(15 BYTE),
SETTL_AMT VARCHAR2(16 BYTE),
TRAN_FEE VARCHAR2(15 BYTE),
SETL_FEE VARCHAR2(15 BYTE),
MERC_CODE VARCHAR2(4 BYTE),
NTWORK_DATA VARCHAR2(12 BYTE),
PRIVATE_DATA_C_100 VARCHAR2(100 BYTE),
PAYMENT_INFO VARCHAR2(50 BYTE),
SURCHARGE_FEE VARCHAR2(15 BYTE),
SURC_BILL_AMT VARCHAR2(19 BYTE),
PROCESSING_CODE VARCHAR2(7 BYTE),
FRWD_CNTRY_CODE VARCHAR2(3 BYTE),
NTWORK_CODE VARCHAR2(2 BYTE),
FUNCTION_CODE VARCHAR2(3 BYTE),
REASON_CODE VARCHAR2(4 BYTE),
FEES VARCHAR2(10 BYTE),
SUR_CHARGE VARCHAR2(10 BYTE),
MESSAGE_TYPE VARCHAR2(4 BYTE),
APPROVE_STATUS VARCHAR2(10 BYTE),
CPS_TRAN_ID VARCHAR2(20 BYTE),
BANKTYPE_CODE VARCHAR2(5 BYTE),
RRB_BIN_CODE VARCHAR2(7 BYTE),
T_TYPE VARCHAR2(5 BYTE),
FEE_CRNCY VARCHAR2(25 BYTE),
FEE_INDICATOR VARCHAR2(1 BYTE),
FEE_TYPE VARCHAR2(84 BYTE),
CREDIT_BANK_CODE VARCHAR2(11 BYTE),
CREDIT_BR_CODE VARCHAR2(8 BYTE),
DEBIT_BANK_CODE VARCHAR2(11 BYTE),
ACTION_CODE VARCHAR2(3 BYTE),
ZEROS VARCHAR2(6 BYTE),
OTRACE_AUDIT_NO VARCHAR2(6 BYTE),
OFEE_TYPE1 VARCHAR2(2 BYTE),
OFEE_CRNCY1 VARCHAR2(3 BYTE),
OFEE_AMNT VARCHAR2(16 BYTE),
OFEE_INDICATOR VARCHAR2(1 BYTE),
SPACES VARCHAR2(2 BYTE),
ACQ_BANK_CODE VARCHAR2(6 BYTE),
CASHAT_POS VARCHAR2(16 BYTE),
LTS_STATUS VARCHAR2(30 BYTE),
APP_CODE VARCHAR2(6 BYTE),
CARD_ACCEPTID VARCHAR2(15 BYTE),
RESPONSE_CODE VARCHAR2(15 BYTE),
RES_RECVD_HOST VARCHAR2(1 BYTE),
DEVICE_ID VARCHAR2(16 BYTE),
RECORD_TYPE VARCHAR2(1 BYTE),
RRT_DEBIT_BANK_CODE VARCHAR2(11 BYTE),
ORIG_TYP VARCHAR2(6 BYTE),
DEVICE_TYPE VARCHAR2(3 BYTE),
ORGTRAN_CODE VARCHAR2(4 BYTE),
BILL_CRNCY VARCHAR2(3 BYTE),
TRAN_ID VARCHAR2(25 BYTE),
RECON_FLAG VARCHAR2(2 BYTE),
BILL_AMNT VARCHAR2(19 BYTE),
OTRAN_AMNT VARCHAR2(19 BYTE),
INS_DATE DATE
) SEGMENT CREATION IMMEDIATE
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
NOCOMPRESS LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE TBS_TST ;
CREATE INDEX CMS_TLF_TEMP_INDEX3 ON REC_TLF_TEMP (TO_NUMBER(PAN))
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE TBS_IDX ;
CREATE INDEX CMS_TLF_TEMP_INDEX4 ON REC_TLF_TEMP (TO_NUMBER(AUDIT_NUM))
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE TBS_IDX ;
CREATE INDEX IDX_REC_TEMP_CDE ON REC_TLF_TEMP (T_CDE)
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE TBS_TST ;
CREATE INDEX IDX_TLF_DEVICEID ON REC_TLF_TEMP (DEVICE_ID)
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE TBS_IDX ;
CREATE INDEX IDX_TLF_PRIVATE ON REC_TLF_TEMP (PRIVATE_DATA_C_100)
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE TBS_TST ;
CREATE INDEX IDX_TLF_TEMP_BYTE1 ON REC_TLF_TEMP (PRIVATE_DATA_C_100, ISS_BIN, NTWORK_CODE)
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE TBS_TST ;
CREATE INDEX IDX_TLF_TEMP_TST ON REC_TLF_TEMP (RESP_BYTE1)
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE TBS_IDX ;
CREATE INDEX IND_TLFTEMP_RVSLFLG ON REC_TLF_TEMP (RVSL_FLAG)
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE TBS_IDX ;
CREATE INDEX IND_TLFTEMP_TRMID_TCD ON REC_TLF_TEMP (TERM_ID, T_CDE)
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE TBS_IDX ;
CREATE INDEX IND_TLFTMP_NWCDE ON REC_TLF_TEMP (NTWORK_CODE)
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE TBS_IDX ;
CREATE INDEX REC_TLF_TEMP_INDEX ON REC_TLF_TEMP (PAN, SEQ_NUM, AUDIT_NUM)
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE TBS_IDX ;
CREATE INDEX TEMPREVLSET ON REC_TLF_TEMP (SEQ_NUM, TYP, REC_TYP, PAN)
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE TBS_IDX ;
That shows there are no triggers on either table. I can also confirm that no WHERE condition is being used in the SQL.
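For the record, both checks can be reproduced from the data dictionary; a minimal sketch (run as the owning schema):

```sql
-- Triggers on the two tables (expect no rows)
SELECT table_name, trigger_name, status
FROM   user_triggers
WHERE  table_name IN ('HISTORY_TABLE', 'REC_TLF_TEMP');

-- Indexes that every single-row insert must maintain
SELECT table_name, index_name, uniqueness
FROM   user_indexes
WHERE  table_name IN ('HISTORY_TABLE', 'REC_TLF_TEMP')
ORDER  BY table_name, index_name;
```

Note that the DDL above defines a dozen secondary indexes on REC_TLF_TEMP, including function-based ones such as TO_NUMBER(PAN); maintaining all of them row by row is often the dominant cost of a bulk insert into such a table.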
Thanks
Faiz -
Insert Statement taking longer time
Hi,
One of the insert statements inside my procedure is taking more than 10 seconds to complete.
Scenario:
1. There are three tables per invoice, e.g. invoice header, invoice item and invoice attribute.
2. These tables are loaded with data in the above order, and nothing is committed until all the data for an invoice has been inserted successfully.
e.g
a. The invoice header gets inserted first (one row).
b. Invoice items second (in a loop, more than one row).
c. For every invoice item, an invoice attribute is inserted.
(The invoice attribute table has an FK column referencing a PK column in the invoice item table.)
The problem is that inserting data into the invoice attribute table takes more than 10 seconds; if the FK mentioned above is disabled, the insert statement runs fine.
Note: the invoice item table has 8.4 million records.
My assumption:
While inserting into the invoice attribute table, the insert statement validates the FK against the invoice item table; since the new invoice item id (the PK column in the invoice item table) is not yet committed, the insert takes time to validate the FK against the 8.4 million records.
Please suggest a solution.
Thanks in advance
NaveenPeri
I'll try a blind shot: the FK is likely missing an index, so each insert on the invoice attribute table performs a full table scan.
By default an FK doesn't have an index. You must create one manually.
If this turns out to be a misfire, then please post a plan with statistics for the attribute insertion.
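To make the suggestion concrete, a sketch under assumed names (INVOICE_ATTRIBUTE, INVOICE_ITEM and the column INVOICE_ITEM_ID are placeholders, since the actual DDL was not posted): first list the referential constraints on the child table, then index the FK column.

```sql
-- Which columns on the child table carry referential (FK) constraints?
SELECT c.constraint_name, cc.column_name
FROM   user_constraints  c
JOIN   user_cons_columns cc
  ON   cc.constraint_name = c.constraint_name
WHERE  c.table_name      = 'INVOICE_ATTRIBUTE'   -- assumed table name
AND    c.constraint_type = 'R';

-- Index the FK column: Oracle does not create an index for an FK
-- automatically, and an unindexed FK can force full scans and
-- table-level locks on the child whenever the parent key is touched.
CREATE INDEX idx_invoice_attr_item_fk
    ON invoice_attribute (invoice_item_id);      -- assumed column name
```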
Message was edited by:
76°® -
Insert statement taking time on oracle 10g
Hi,
My procedure is taking time in the following statement after the database was upgraded from Oracle 9i to Oracle 10g.
I am using Oracle version 10.2.0.4.0.
cust_item is a materialized view, and it is refreshed inside the procedure.
The index is dropped before inserting data into the cust_item_tbl table, and the index is recreated after the insert.
There are almost 600,000 (6 lakh) records in the MV to be inserted into the table.
In 9i the insert statement below takes 1 hour, while in 10g it takes 2.5 hours.
EXECUTE IMMEDIATE 'ALTER SESSION ENABLE PARALLEL QUERY';
EXECUTE IMMEDIATE 'ALTER SESSION ENABLE PARALLEL DML';
INSERT INTO /*+ APPEND PARALLEL */ cust_item_tbl NOLOGGING
(SELECT /*+ PARALLEL */
ctry_code, co_code, srce_loc_nbr, srce_loc_type_code,
cust_nbr, item_nbr, lu_eff_dt,
0, 0, 0, lu_end_dt,
bus_seg_code, 0, rt_nbr, 0, '', 0, '', SYSDATE, '', SYSDATE,
'', 0, ' ',
case
when cust_nbr in (select distinct cust_nbr from aml.log_t where CTRY_CODE = p_country_code and co_code = p_company_code)
THEN
case
when trunc(sysdate) NOT BETWEEN trunc(lu_eff_dt) AND trunc(lu_end_dt)
then NVL((select cases_per_pallet from cust_item c where c.ctry_code = a.ctry_code and c.co_code = a.co_code
and c.cust_nbr = a.cust_nbr and c.GTIN_CO_PREFX = a.GTIN_CO_PREFX and c.GTIN_ITEM_REF_NBR = a.GTIN_ITEM_REF_NBR
and c.GTIN_CK_DIGIT = a.GTIN_CK_DIGIT and trunc(sysdate) BETWEEN trunc(c.lu_eff_dt) AND trunc(c.lu_end_dt) and rownum = 1),
a.cases_per_pallet)
else cases_per_pallet
end
else cases_per_pallet
END cases_per_pallet,
cases_per_layer
FROM cust_item a
WHERE a.ctry_code = p_country_code ---- variable passed by the procedure
AND a.co_code = p_company_code ---- variable passed by the procedure
AND a.ROWID =
(SELECT MAX (b.ROWID)
FROM cust_item b
WHERE b.ctry_code = a.ctry_code
AND b.co_code = a.co_code
AND b.ctry_code = p_country_code ---- variable passed by the procedure
AND b.co_code = p_company_code ---- variable passed by the procedure
AND b.srce_loc_nbr = a.srce_loc_nbr
AND b.srce_loc_type_code = a.srce_loc_type_code
AND b.cust_nbr = a.cust_nbr
AND b.item_nbr = a.item_nbr
AND b.lu_eff_dt = a.lu_eff_dt));
Explain plan from Oracle 10g:
Plan
INSERT STATEMENT CHOOSECost: 133,310 Bytes: 248 Cardinality: 1
5 FILTER
4 HASH GROUP BY Cost: 133,310 Bytes: 248 Cardinality: 1
3 HASH JOIN Cost: 132,424 Bytes: 1,273,090,640 Cardinality: 5,133,430
1 INDEX FAST FULL SCAN INDEX MFIPROCESS.INDX_TEMP_CUST_AUTH_PERF_MV Cost: 10,026 Bytes: 554,410,440 Cardinality: 5,133,430
2 MAT_VIEW ACCESS FULL MAT_VIEW MFIPROCESS.TEMP_CUST_AUTH_PERF_MV Cost: 24,570 Bytes: 718,680,200 Cardinality: 5,133,430
Can you please look into the issue?
Thanks.
According to the execution plan you posted, parallelism is not taking place: no parallel operations are listed.
Check the hint syntax. In particular, "PARALLEL" does not look right.
Running queries in parallel can help performance, hurt it, or do nothing. In your case a parallel index scan on MFIPROCESS.INDX_TEMP_CUST_AUTH_PERF_MV using the PARALLEL_INDEX hint, plus the PARALLEL hint on the table MFIPROCESS.TEMP_CUST_AUTH_PERF_MV, might help; something like (untested):
select /*+ PARALLEL_INDEX(INDX_TEMP_CUST_AUTH_PERF_MV) PARALLEL(TEMP_CUST_AUTH_PERF_MV) */
Is query rewrite causing the MVs to be read? If so, hinting the query will be tricky.
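Two points worth noting about the posted statement: a hint must come immediately after the INSERT keyword (written after INTO, as posted, it is ignored), and NOLOGGING is not a hint, so where it appears it is parsed as a table alias. A hedged sketch of the corrected shape (degree 4 and the column list are illustrative only):

```sql
ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ APPEND PARALLEL(cust_item_tbl, 4) */ INTO cust_item_tbl
SELECT /*+ PARALLEL(a, 4) */
       ctry_code, co_code, srce_loc_nbr  /* ... remaining columns ... */
FROM   cust_item a
WHERE  a.ctry_code = p_country_code
AND    a.co_code   = p_company_code;

-- A direct-path (APPEND) insert must be committed before the
-- session can query the table again.
COMMIT;
```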
Bulk Load question for an insert statement.
I'm looking to convert the following statement into a FORALL statement using BULK COLLECT, and I need some guidance.
Should I put the SELECT statement into a cursor and then load the cursor values into a variable of a defined nested table type?
INSERT INTO TEMP_ASSOC_CURRENT_WEEK_IDS
SELECT aor.associate_office_record_id ,
sched.get_assoc_sched_rotation_week(aor.associate_office_record_id, v_weekType.start_date) week_id
FROM ASSOCIATE_OFFICE_RECORDS aor
WHERE aor.OFFICE_ID = v_office_id
AND (
(aor.lt_assoc_stage_result_id in (4,8)
AND v_officeWeekType.start_date >= trunc(aor.schedule_start_date)
OR aor.lt_assoc_stage_result_id in (1, 2)
));
I see people are reading this, so for the insanely curious, here's how I did it.
Type AOR_REC is RECORD(
associate_office_record_id dbms_sql.number_table,
week_id dbms_sql.number_table); --RJS.***Setting up Type for use with Bulk Collect FORALL statements.
v_a_rec AOR_REC; -- RJS. *** defining variable of defined Type to use with Bulk Collect FORALL statements.
CURSOR cur_aor_ids -- RJS *** Cursor for BULK COLLECT.
IS
SELECT aor.associate_office_record_id associate_office_record_id,
sched.get_assoc_sched_rotation_week(aor.associate_office_record_id, v_weekType.start_date) week_id
FROM ASSOCIATE_OFFICE_RECORDS aor
WHERE aor.OFFICE_ID = v_office_id
AND (
(aor.lt_assoc_stage_result_id in (4,8)
AND v_officeWeekType.start_date >= trunc(aor.schedule_start_date)
OR aor.lt_assoc_stage_result_id in (1, 2)
))
FOR UPDATE NOWAIT;
BEGIN
BEGIN
OPEN cur_aor_ids;
LOOP
FETCH cur_aor_ids BULK COLLECT into
v_a_rec.associate_office_record_id, v_a_rec.week_id; --RJS. *** Bulk Load your cursor data into a buffer to do the Delete all at once.
FORALL i IN 1..v_a_rec.associate_office_record_id.COUNT SAVE EXCEPTIONS
INSERT INTO TEMP_ASSOC_CURRENT_WEEK_IDS
(associate_office_record_id,week_id)
VALUES
(v_a_rec.associate_office_record_id(i), v_a_rec.week_id(i)); --RJS. *** Single FORALL BULK INSERT statement.
EXIT WHEN cur_aor_ids%NOTFOUND;
END LOOP;
CLOSE cur_aor_ids;
EXCEPTION
WHEN OTHERS THEN
dbms_output.put_line('ERROR ENCOUNTERED IS SQLCODE = '|| SQLCODE ||' AND SQLERRM = ' || SQLERRM);
dbms_output.put_line('Number of INSERT statements that failed: ' || SQL%BULK_EXCEPTIONS.COUNT);
End;
Easy right?
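For anyone adapting the pattern above, a tightened sketch of the same idea: bound the memory use with LIMIT (which also means testing the collection's COUNT rather than %NOTFOUND, or the last partial batch is skipped), and catch ORA-24381, the specific error SAVE EXCEPTIONS raises, instead of WHEN OTHERS. Variable names follow the post above; v_office_id and v_weekType come from the enclosing procedure, and the stage/result predicates are omitted for brevity.

```sql
DECLARE
    bulk_errors EXCEPTION;
    PRAGMA EXCEPTION_INIT(bulk_errors, -24381);  -- raised by SAVE EXCEPTIONS
    TYPE t_num_tab IS TABLE OF NUMBER;
    v_ids   t_num_tab;
    v_weeks t_num_tab;
    CURSOR cur_aor_ids IS
        SELECT aor.associate_office_record_id,
               sched.get_assoc_sched_rotation_week(
                   aor.associate_office_record_id, v_weekType.start_date)
        FROM   associate_office_records aor
        WHERE  aor.office_id = v_office_id;
        -- stage/result predicates from the original post omitted for brevity
BEGIN
    OPEN cur_aor_ids;
    LOOP
        FETCH cur_aor_ids BULK COLLECT INTO v_ids, v_weeks LIMIT 1000;
        EXIT WHEN v_ids.COUNT = 0;  -- test the collection, not %NOTFOUND
        FORALL i IN 1 .. v_ids.COUNT SAVE EXCEPTIONS
            INSERT INTO temp_assoc_current_week_ids
                   (associate_office_record_id, week_id)
            VALUES (v_ids(i), v_weeks(i));
    END LOOP;
    CLOSE cur_aor_ids;
EXCEPTION
    WHEN bulk_errors THEN
        FOR j IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
            dbms_output.put_line('Row ' || SQL%BULK_EXCEPTIONS(j).ERROR_INDEX
                || ' failed: '
                || SQLERRM(-SQL%BULK_EXCEPTIONS(j).ERROR_CODE));
        END LOOP;
END;
/
```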