ORA-30036: During Bulk Insert
Hi gurus,
While loading data from a staging table to a dimension table we are facing the error below:
ORA-30036: unable to extend segment by 8 in undo tablespace 'UNDOGTAMAC'
We are using the script below:
INSERT /*+ APPEND */ INTO table_dim (SELECT * FROM TEMP_tab);
COMMIT;
The TEMP_tab table contains around 20,000,000 rows.
Can we use SQL*Loader to achieve this?
Please advise.
Thanks in advance
Edited by: user12084499 on Oct 4, 2010 12:14 AM
user12084499 wrote:
While loading data from a staging table to a dimension table we are facing the error below:
ORA-30036: unable to extend segment by 8 in undo tablespace 'UNDOGTAMAC'
We are using the script below:
INSERT /*+ APPEND */ INTO table_dim (SELECT * FROM TEMP_tab);
COMMIT;
The TEMP_tab table contains around 20,000,000 rows.
You can do this as a custom parallel-enabled process. You can create a procedure as follows:
create or replace procedure InsertRowRange( fromRow rowid, toRow rowid ) is
begin
  insert /*+ append */ into table_dim
  select * from temp_tab where rowid between fromRow and toRow;
  commit; -- a direct-path insert must be committed before the next one can touch the table
end;
You now need to split TEMP_TAB into multiple rowid ranges - let's say 20 ranges. Instead of starting 20 parallel copies of the procedure, each with a unique rowid range, you schedule it as 20 serialised processes, so each transaction generates only a fraction of the total undo.
The only requirement is that TEMP_TAB remains unchanged for the duration of the serialised processing, as updating, deleting, or inserting rows in it will invalidate the rowid ranges.
You can also potentially use the primary key (e.g. date based pk) of the source table to govern what ranges of rows to insert per processing step (e.g. only a single day's rows).
The main thing to stay away from (because of poor design and poor performance) is fetching from a cursor loop, inserting rows, and then committing every x number of rows. This is a horrible and non-scalable approach.
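On 11g Release 2 and later, the rowid-range split described above can be automated with the DBMS_PARALLEL_EXECUTE package. A hedged sketch (the task name and chunk size are assumptions, and it presumes an InsertRowRange-style procedure that commits each chunk):

```sql
-- Sketch only: split TEMP_TAB into rowid chunks and run them one at a time,
-- so each transaction generates only a fraction of the total undo.
begin
  dbms_parallel_execute.create_task('LOAD_DIM');           -- task name is an assumption
  dbms_parallel_execute.create_chunks_by_rowid(
      task_name   => 'LOAD_DIM',
      table_owner => user,
      table_name  => 'TEMP_TAB',
      by_row      => true,
      chunk_size  => 1000000 );                            -- ~20 chunks for 20M rows
  dbms_parallel_execute.run_task(
      task_name      => 'LOAD_DIM',
      sql_stmt       => 'begin InsertRowRange(:start_id, :end_id); end;',
      language_flag  => dbms_sql.native,
      parallel_level => 1 );                               -- 1 = serialised execution
  dbms_parallel_execute.drop_task('LOAD_DIM');
end;
/
```

A side benefit is that USER_PARALLEL_EXECUTE_CHUNKS records which ranges succeeded, so a failed range can be retried without redoing the whole load.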
Similar Messages
-
I am using v11.2 with the new Jena adapter.
I am trying to upload data from a bunch of N-Triples files to the triple store via the bulk load interface in the Jena adapter, a.k.a. bulk append. The code does something like this:
while(moreFiles exist)
readFilesToMemory;
bulkLoadToDatabase using the options "MBV_JOIN_HINT=USE_HASH PARALLEL=4"
Loading the first set of triples goes well. But when I try to load the second set of triples, I get the exception below.
Some thoughts:
1) I don't think this is a data problem, because I uploaded all the data during an earlier test, and when I upload the same data into an empty database it works fine.
2) I saw some earlier posts with a similar error, but none of them seem to be using the Jena adapter.
3) The model also has an OWL Prime entailment in incremental mode.
4) I am not sure if this is relevant, but before I ran the current test I mistakenly launched multiple Java processes that bulk loaded the data. Of course I killed all the processes and dropped the sem_models and the backing RDF tables they were uploading to.
EXCEPTION
java.sql.SQLException: ORA-06502: PL/SQL: numeric or value error: character string buffer too small
ORA-06512: at "MDSYS.SDO_RDF_INTERNAL", line 3164
ORA-06512: at "MDSYS.SDO_RDF_INTERNAL", line 4244
ORA-06512: at "MDSYS.SDO_RDF", line 276
ORA-06512: at "MDSYS.RDF_APIS", line 693
ORA-06512: at line 1
at oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:70)
at oracle.jdbc.driver.DatabaseError.newSQLException(DatabaseError.java:131)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:204)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:455)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:413)
at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:1034)
at oracle.jdbc.driver.T4CCallableStatement.doOall8(T4CCallableStatement.java:191)
at oracle.jdbc.driver.T4CCallableStatement.executeForRows(T4CCallableStatement.java:950)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1222)
at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3387)
at oracle.jdbc.driver.OraclePreparedStatement.execute(OraclePreparedStatement.java:3488)
at oracle.jdbc.driver.OracleCallableStatement.execute(OracleCallableStatement.java:3840)
at oracle.jdbc.driver.OraclePreparedStatementWrapper.execute(OraclePreparedStatementWrapper.java:1086)
at oracle.spatial.rdf.client.jena.Oracle.executeCall(Oracle.java:689)
at oracle.spatial.rdf.client.jena.OracleBulkUpdateHandler.addInBulk(OracleBulkUpdateHandler.java:740)
at oracle.spatial.rdf.client.jena.OracleBulkUpdateHandler.addInBulk(OracleBulkUpdateHandler.java:463)
at oracleuploadtest.OracleUploader.loadModelToDatabase(OracleUploader.java:84)
at oracleuploadtest.RunOracleUploadTest.main(RunOracleUploadTest.java:81)
thanks!
Ram.
The addInBulk method needs to be called twice to trigger the bug. Here is a test case that passes only while the bug is present! (It is there to remind me to remove the workaround code when the fix gets through to my code.)
@Test
public void testThatOracleBulkBugIsNotYetFixed() throws SQLException {
    char nm[] = new char[22 - TestDataUtils.getUserID().length() - TestOracleHelper.ORACLE_USER.length()];
    Arrays.fill(nm, 'A');
    TestOracleHelper helper = new TestOracleHelper(new String(nm)); // actual name is TestDataUtils.getUserID() + "_" + nm
    GraphOracleSem og = helper.createGraph();
    Node n = RDF.value.asNode();
    Triple triples[] = new Triple[] { new Triple(n, n, n) };
    try {
        og.getBulkUpdateHandler().addInBulk(triples, null);
        // Oracle bug hits on second call:
        og.getBulkUpdateHandler().addInBulk(triples, null);
    } catch (SQLException e) {
        if (e.getErrorCode() == 6502) {
            return; // we have a work-around for this expected error
        }
        throw e; // some other problem
    }
    Assert.fail("It seems that an Oracle update (has the ora jar been updated?) resolves a silly bug - please modify BulkLoaderExportMode");
}
Jeremy -
When bulk inserting with BindArrayOfStruct when a row fails with ORA-02291 all remaining valid rows fail with the same error.
If the invalid row is removed from the array the other rows are inserted.
If a row fails with another error all remaining rows are inserted.
Oracle9i Enterprise Edition Release 9.2.0.5.0 - Production
Are you executing the statement in OCI_BATCH_ERRORS mode? If so, after statement execution, the statement handle attribute OCI_ATTR_ROW_COUNT should indicate the number of rows successfully inserted. The statement handle attribute OCI_ATTR_NUM_DML_ERRORS should indicate the number of rows that encountered an error. You can then loop through the errors to determine which row(s) had errors. Chapter 4 of the OCI Programmer's Guide has an example of how this is done.
If you are executing the statement in OCI_DEFAULT mode, then execution should stop when the first error is encountered. The statement handle attribute OCI_ATTR_ROW_COUNT will indicate the number of rows successfully inserted so you can deduce the problematic row.
If this is not what you are seeing, then perhaps you can include a snippet of your code. -
Why is an index-organized table (IOT) so slow during a bulk/initial insert?
Tested in 11.1.0.7.0 RAC on RHEL 5 with ASM and 16KB block size.
The table is not wide: the PK contains 4 columns and the leading 2 are compressed because they have relatively low cardinality; 2 other columns are included; the table contains another 4 audit columns; an overflow tablespace is defined.
Created 2 tables, one is IOT, the other is a normal heap-organized table with "COMPRESS FOR ALL OPERATIONS". Both tables have been range partitioned by the first column into 8 partitions, and DOP is set to 8.
Initial load volume is about 160M rows. Direct Path insert is used with parallel degree 8.
After initial load, create PK for the 4 columns with the leading 2 compressed on the normal table. The IOT occupied about 7GB storage; the normal table occupied 9GB storage (avg_row_len = 80 bytes) and the PK occupied 5.8GB storage.
The storage saving of IOT is significant, but it took about 60 minutes to load the IOT, while it only took 10 minutes to load the heap-organized table and then 6 minutes to create the PK. Overall, the bulk insert for IOT is about 4 times slower than the equivalent heap-organized table.
I have ordered the 4 columns in the PK for the best compression ratio (lower cardinality comes first) and only compress the most repetitive leading columns (this matches Oracle's recommendation in index_stats after VALIDATE STRUCTURE); partitioning is used to reduce contention, the parallel degree is ample, /*+ append */ is used for the insert, and the ASM system is backed by a high-end SAN with plenty of I/O bandwidth.
So it seems that such table is good candidate for IOT and I've tried a few tricks to get the best out of IOT, but the insert performance is quite disappointing. Please advise me if I missed anything, or you have some tips to share.
Thanks a lot.
CREATE TABLE IOT_IS_SLOW (
GROUP_ID NUMBER(2) NOT NULL,
BATCH_ID NUMBER(4) NOT NULL,
KEY1 NUMBER(10) NOT NULL,
KEY2 NUMBER(10) NOT NULL,
STATUS_ID NUMBER(2) NOT NULL,
VERSION NUMBER(10),
SRC_LAST_UPDATED DATE,
SRC_CREATION_DATE DATE,
DW_LAST_UPDATED DATE,
DW_CREATION_DATE DATE,
CONSTRAINT PK_IOT_IS_SLOW
PRIMARY KEY (GROUP_ID, BATCH_ID, KEY1, KEY2)
)
ORGANIZATION INDEX COMPRESS 2
INCLUDING VERSION
NOLOGGING
PCTFREE 20
OVERFLOW
PARALLEL ( DEGREE 8 )
PARTITION BY RANGE (GROUP_ID) (
PARTITION P01 VALUES LESS THAN (2),
PARTITION P02 VALUES LESS THAN (3),
PARTITION P03 VALUES LESS THAN (4),
PARTITION P04 VALUES LESS THAN (5),
PARTITION P05 VALUES LESS THAN (6),
PARTITION P06 VALUES LESS THAN (7),
PARTITION P07 VALUES LESS THAN (8),
PARTITION P08 VALUES LESS THAN (MAXVALUE)
);
Even if /*+ APPEND */ is ignored for an IOT, it is too slow, isn't it?
David_Aldridge wrote:
oftengo wrote:
>
Direct-path INSERT into a single partition of an index-organized table (IOT), or into a partitioned IOT with only one partition, will be done serially, even if the IOT was created in parallel mode or you specify the APPEND or APPEND_VALUES hint. However, direct-path INSERT operations into a partitioned IOT will honor parallel mode as long as the partition-extended name is not used and the IOT has more than one partition.
>
http://download.oracle.com/docs/cd/E11882_01/server.112/e17118/statements_9014.htm
Hmmm, that's very interesting. I'm still a bit cynical though -- in order for direct path to work on an index organized table by appending blocks I would think that some extra conditions would have to be satisfied:
* the table would have to be empty, or the lowest-sorting row of the new data would have to be higher than the highest-sorting row of the existing data
* the data would have to be sorted
... that sort of thing. Maybe I'm suffering a failure of imagination though.
Could be. From a Tanel Poder post:
>
The “direct path loader” (KCBL) module is used for performing direct path IO in Oracle, such as direct path segment scans and reading/writing spilled over workareas in temporary tablespace. Direct path IO is used whenever you see “direct path read/write*” wait events reported in your session. This means that IOs aren’t done from/to buffer cache, but from/to PGA directly, bypassing the buffer cache.
This KCBL module tries to dynamically scale up the number of asynch IO descriptors (AIO descriptors are the OS kernel structures, which keep track of asynch IO requests) to match the number of direct path IO slots a process uses. In other words, if the PGA workarea and/or spilled-over hash area in temp tablespace gets larger, Oracle also scales up the number of direct IO slots. Direct IO slots are PGA memory structures helping to do direct IO between files and PGA.
>
So I'm reading into this that somehow these temp segments handle it, perhaps because with parallelism you have to be able to deal anyway. I speculate the data is inserted past the high water mark, then any ordering issues left can be resolved before moving the high water mark(s). Maybe examining where segments wind up in the data files can show how this works.
>
I can't find anything in the documentation that speaks to this, so I wonder whether the docs are really talking about a form of conventional path parallel insert into an IOT and not true direct path inserts.
One way to check, I think, would be to get the wait events for the insert and see whether the writes are direct. -
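That check can be run with a query like the following, sketched here under the assumption that you know (or bind in) the SID of the session doing the insert:

```sql
-- Hedged sketch: if the insert is truly direct path, the write waits show up
-- as "direct path write" events rather than buffer-cache writes.
select event, total_waits, time_waited
from   v$session_event
where  sid = :loading_sid          -- SID of the loading session (assumption)
and    event like '%direct path%'
order  by time_waited desc;
```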
Doubt regarding ORA-30036: unable to extend segment by 8 in undo tablespace
I am using 11g Release 1 Database .
I have to analyze the performance of two tables of different designs which serve the same purpose and come up with the design which is efficient .
SQL> desc staging_dict
Name Null? Type
SNO NUMBER
CODE_FRAGMENTS CLOB
CODE_FRAGMENTS_U CLOB
CODE_FRAGMENTS_D CLOB
CODE_FRAGMENTS_DO CLOB
SQL> desc staging_dict1
Name Null? Type
SNO NUMBER
CODE_FRAGMENTS CLOB
CODE_FRAGMENTS_UD CLOB
CODE_TYPE VARCHAR2(5 CHAR)
Initially I tried inserting a few thousand records into both tables. Then I did some conversion on one column and populated the result into another column of the same table, so I updated the table in bulk mode, committing every thousand records.
I have an undo tablespace of 2G with undo_retention=900; retention guarantee is not set for the undo tablespace.
When I tried the conversion and update on the first table (STAGING_DICT), it took more time for around 2,500 records compared to the other table, and when I increased the number of records to 10,000 it threw an error:
ORA-30036: unable to extend segment by 8 in undo tablespace 'UNDOTBS'
But I didn't come across this error when I tried the conversion and update on the other table (STAGING_DICT1) for the same 2,500 records, and it was also 10 times faster.
My doubt is: does the ORA-30036 error occur because Oracle is saving the undo image of all four CLOB columns, even though I am doing the conversion on one column and updating another (only two columns are used in the update, and only one is affected by it)?
Also, how is it more effective to have fewer CLOB columns plus one VARCHAR2 column that differentiates the code-fragment type (as in STAGING_DICT1), than to have more CLOB columns (as in STAGING_DICT)?
Don't you think the error the OP reported is kind of weird?
Because as you said, Oracle stores "undo" of lob in user tablespace not undo tablespace if the lob is stored out-of-line.
1. If the size of the LOB was small, a small amount of undo would be stored in the undo tablespace, and the OP wouldn't have an undo tablespace shortage problem. (How does a small LOB flood the undo tablespace?)
2. If the size of the LOB was big, the OP would get an ORA-01555 error on the user tablespace, not an undo tablespace shortage error.
So, I think there are two theories that can explain this abnormality.
1. The OP hit a bug causing massive undo generation.
2. The OP is using SecureFiles, which is a new 11g feature.
Oracle's documentation says that undo for SecureFile LOBs is stored in the undo tablespace, not the user tablespace.
But unfortunately, I'm not sure about this because I didn't try it myself. -
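Whether theory 2 applies can be checked directly: on 11g, USER_LOBS reports whether each LOB column is a SecureFile (table names taken from the post; a sketch, run as the owning user):

```sql
-- YES in the SECUREFILE column means the LOB is a SecureFile,
-- whose undo handling differs from BasicFile LOBs.
select table_name, column_name, securefile, in_row, tablespace_name
from   user_lobs
where  table_name in ('STAGING_DICT', 'STAGING_DICT1');
```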
[Forum FAQ] How to use multiple field terminators in BULK INSERT or BCP command line
Introduction
Some people want to know if we can have multiple field terminators in BULK INSERT or BCP commands, and how to implement multiple field terminators in BULK INSERT or BCP commands.
Solution
For character data fields, optional terminating characters allow you to mark the end of each field in a data file with a field terminator, as well as the end of each row with a row terminator. If a terminator character occurs within the data, it is interpreted as a terminator, not as data, and the data after that character belongs to the next field or record. I have done a test; if you use BULK INSERT or BCP commands with multiple field terminators, you can refer to the following commands.
In Windows command line,
bcp <Databasename.schema.tablename> out "<path>" -c -t -r -T
For example, you can export data from the Department table with bcp command and use the comma and colon (,:) as one field terminator.
bcp AdventureWorks.HumanResources.Department out C:\myDepartment.txt -c -t ,: -r \n -T
The txt file as follows:
However, if you try to specify multiple field terminators by repeating the -t switch, as in the following command, bcp will simply use the last terminator defined:
bcp AdventureWorks.HumanResources.Department in C:\myDepartment.txt -c -t , -r \n -t: -T
The txt file as follows:
Note that multiple field terminators do not mean multiple fields. If you use the comma-separated format below:
column1,,column2,,,column3
you might expect only 3 fields (column1, column2, and column3). In fact, after testing, there are 6 fields here: every comma marks the end of a field, so the empty fields count too. That is the significance of a field terminator (a comma in this case).
Meanwhile, when using BULK INSERT to import the data file into a SQL table, if you specify a terminator for the BULK import, you can only set multiple characters as one single terminator in the BULK INSERT statement.
USE <testdatabase>;
GO
BULK INSERT <your table> FROM '<path>'
WITH (
DATAFILETYPE = 'char | native | widechar | widenative',
FIELDTERMINATOR = '<field_terminator>'
);
For example, using BULK INSERT to import the data of C:\myDepartment.txt data file into the DepartmentTest table, the field terminator (,:) must be declared in the statement.
In SQL Server Management Studio Query Editor:
BULK INSERT AdventureWorks.HumanResources.DepartmentTest FROM 'C:\myDepartment.txt'
WITH (
DATAFILETYPE = 'char',
FIELDTERMINATOR = ',:'
);
The new table contains like as follows:
We cannot declare multiple FIELDTERMINATOR options (, and :) in the query statement, as in the following format; a duplicate-option error will occur.
In SQL Server Management Studio Query Editor:
BULK INSERT AdventureWorks.HumanResources.DepartmentTest FROM 'C:\myDepartment.txt'
WITH (
DATAFILETYPE = 'char',
FIELDTERMINATOR = ',',
FIELDTERMINATOR = ':'
);
However, if you want to use a data file with fewer or more fields than the table, you can handle it by setting the extra field length to 0 for fewer fields, or by omitting or skipping the extra fields during the bulk copy procedure.
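If genuinely different terminators per field are needed (rather than one multi-character terminator), a non-XML bcp format file assigns a terminator to each field individually. A hedged sketch for a hypothetical three-column table (9.0 is the format version for SQL Server 2005; adjust it to your release, and the column names are assumptions):

```
9.0
3
1   SQLCHAR   0   100   ","      1   col1   ""
2   SQLCHAR   0   100   ":"      2   col2   ""
3   SQLCHAR   0   100   "\r\n"   3   col3   ""
```

The file would then be passed to bcp with the -f switch, e.g. bcp <db.schema.table> in C:\myDepartment.txt -f C:\myDepartment.fmt -T (file names are assumptions).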
More Information
For more information about field terminators, you can review the following articles.
http://technet.microsoft.com/en-us/library/aa196735(v=sql.80).aspx
http://social.technet.microsoft.com/Forums/en-US/d2fa4b1e-3bd4-4379-bc30-389202a99ae2/multiple-field-terminators-in-bulk-insert-or-bcp?forum=sqlgetsta
http://technet.microsoft.com/en-us/library/ms191485.aspx
http://technet.microsoft.com/en-us/library/aa173858(v=sql.80).aspx
http://technet.microsoft.com/en-us/library/aa173842(v=sql.80).aspx
Applies to
SQL Server 2012
SQL Server 2008R2
SQL Server 2005
SQL Server 2000
Please click to vote if the post helps you. This can be beneficial to other community members reading the thread.
Thanks,
Is this a supported scenario, or does it use unsupported features?
For example, can we call exec [ReportServer].dbo.AddEvent @EventType='TimedSubscription', @EventData='b64ce7ec-d598-45cd-bbc2-ea202e0c129d'
in a supported way?
Thanks! Josh -
BULK INSERT into View w/ Instead Of Trigger - DML ERROR LOGGING Issue
Oracle 10.2.0.4
I cannot figure out why I cannot get bulk insert errors to aggregate and allow the insert to continue when bulk inserting into a view with an Instead of Trigger. Whether I use LOG ERRORS clause or I use SQL%BULK_EXCEPTIONS, the insert works until it hits the first exception and then exits.
Here's what I'm doing:
1. I'm bulk inserting into a view with an Instead Of trigger on it that performs the actual update on the underlying table. This table is a child table with a foreign key constraint to a reference table containing the primary key. In the Instead Of trigger, it attempts to insert a record into the child table and I get the following exception: ORA-02291: integrity constraint (FK_TEST_TABLE) violated - parent key not found, which is expected, but the error should be logged in the error table and the rest of the inserts should complete. Instead, the bulk insert exits.
2. If I change this to bulk insert into the underlying table directly, it works, all errors get put into the error logging table and the insert completes all non-exception records.
Here's the "test" procedure I created to test my scenario:
View: V_TEST_TABLE
Underlying Table: TEST_TABLE
PROCEDURE BulkTest
IS
TYPE remDataType IS TABLE of v_TEST_TABLE%ROWTYPE INDEX BY BINARY_INTEGER;
varRemData remDataType;
begin
select /*+ DRIVING_SITE(r) */ *
BULK COLLECT INTO varRemData
from TEST_TABLE@REMOTE_LINK r
where effectiveday < to_date('06/16/2012 04','mm/dd/yyyy hh24')
and terminationday > to_date('06/14/2012 04','mm/dd/yyyy hh24');
BEGIN
FORALL idx IN varRemData.FIRST .. varRemData.LAST
INSERT INTO v_TEST_TABLE VALUES varRemData(idx) LOG ERRORS INTO dbcompare.ERR$_TEST_TABLE ('INSERT') REJECT LIMIT UNLIMITED;
EXCEPTION WHEN others THEN
DBMS_OUTPUT.put_line('ErrorCode: '||SQLCODE);
END;
COMMIT;
end;
I've reviewed Oracle's documentation on both DML logging tools and neither has any restrictions (at least that I can see) that would prevent this from working correctly.
Any help would be appreciated....
Thanks,
Steve
Thanks. Obviously this is my first post; I'm desperate to figure out why this won't work.
The code I sent is only a test proc to troubleshoot the issue; the version with the debug statement is only there to capture the insert failing and not aggregating the errors. That won't be in the real proc.
Thanks,
Steve -
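Since LOG ERRORS is evidently not kicking in on the Instead Of trigger path, one workaround worth trying (a sketch only, reusing the names from the test proc and not verified against this exact schema) is to let FORALL aggregate the failures itself with SAVE EXCEPTIONS:

```sql
-- Hypothetical replacement for the inner block of BulkTest.
begin
  forall idx in varRemData.FIRST .. varRemData.LAST save exceptions
    insert into v_TEST_TABLE values varRemData(idx);
exception
  when others then
    if sqlcode = -24381 then  -- ORA-24381: error(s) in array DML
      for i in 1 .. sql%bulk_exceptions.count loop
        dbms_output.put_line('Row ' || sql%bulk_exceptions(i).error_index ||
                             ' failed: ORA-' || sql%bulk_exceptions(i).error_code);
      end loop;
    else
      raise;  -- something other than array-DML errors
    end if;
end;
```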
What is the counter part for sql server BULK insert in Oracle.
I have gone through an existing thread for the same question Bulk Insert CSV file in Oracle 9i But here it is suggested to read from a file into and external table and then to create my own table reading from external table.
But I want to read directly from a file to an existing table in my database. How do I achieve this?
Thank you,
Praveen.
Refer to
http://download.oracle.com/docs/cd/B14117_01/server.101/b10825/ldr_modes.htm#i1008078
"During a direct path load through SQL*Loader, some constraints and all database triggers are disabled. With the conventional path load method, arrays of rows are inserted with standard SQL INSERT statements; integrity constraints and insert triggers are automatically applied.
After the rows are loaded and indexes rebuilt, any triggers that were disabled are automatically reenabled. The log file lists all triggers that were disabled for the load. There should not be any errors reenabling triggers." -
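A minimal SQL*Loader setup for loading a CSV file straight into an existing table might look like this (the file, table, and column names are assumptions):

```
-- load.ctl
LOAD DATA
INFILE 'data.csv'
APPEND INTO TABLE my_existing_table
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(col1, col2, col3)
```

Run it with sqlldr userid=scott/tiger control=load.ctl direct=true; direct=true selects the direct path load described in the quote above, while conventional path is the default.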
HOW to USE 'LONG RAW' in Bulk Insertion using OCI
Hi,
I need to do bulk insertion of LONG RAW data into a table using the OCI.
In the OCIBindByPos API, what should I specify for the field size (value_sz)? As different records can have different lengths for this LONG RAW data, what value should I provide?
Thanks,
Tuhin
sword OCIBindByPos ( OCIStmt *stmtp,
OCIBind **bindpp,
OCIError *errhp,
ub4 position,
dvoid *valuep,
sb4 value_sz,
ub2 dty,
dvoid *indp,
ub2 *alenp,
ub2 *rcodep,
ub4 maxarr_len,
ub4 *curelep,
ub4 mode );
ORA-00997: illegal use of LONG datatype
Cause: A value of datatype LONG was used in a function or in a DISTINCT, WHERE, CONNECT BY, GROUP BY, or ORDER BY clause. A LONG value can only be used in a SELECT clause.
Action: Remove the LONG value from the function or clause
Are you using the column anywhere else but in the SELECT? -
Hello,
I'm trying to bulk insert data into Planning and I get an error. As far as I can tell, it seems related to bad settings for the decimal separator. I'm from Italy, where the default decimal separator is ",". As far as I can check, everything is set to English, including the Oracle client (... sqlldr).
As you can see in datafile.log, I get an Oracle ORA-01722 error.
Any ideas?
Thank you in advance,
Daniele
S.O: windows2008
FDM Version: 11.1.2.1
REPOSITORY: Oracle 11.2.0.1.0 - 64bit
Here are log files:
admin.log:
** Begin FDM Runtime Error Log Entry [2011-09-07 16:50:18] **
ERROR:
Code............................................. 4003
Description...................................... Oracle (SQL-Loader) data load failed, please see processing log for details!
Procedure........................................ clsImpProcessMgr.fLoadAndProcessFile
Component........................................ upsWObjectsDM
Version.......................................... 1112
Thread........................................... 4040
IDENTIFICATION:
User............................................. admin
Computer Name.................................... IPEREPMA00
App Name......................................... FDMAuchan
Client App....................................... WebClient
CONNECTION:
Provider......................................... ORAOLEDB.ORACLE
Data Server......................................
Database Name.................................... HYPTEST
Trusted Connect.................................. False
Connect Status.. Connection Open
GLOBALS:
Location......................................... LOADACTSAP
Location ID...................................... 750
Location Seg..................................... 4
Category......................................... SAP
Category ID...................................... 13
Period........................................... Jan - 2011
Period ID........................................ 1/31/2011
POV Local........................................ False
Language......................................... 1033
User Level....................................... 1
All Partitions................................... True
Is Auditor....................................... False
datafile.log
2011-09-07 16:50:16
User ID........... admin
Location.......... LOADACTSAP
Source File....... D:\Oracle\Middleware\EPMSystem11R1\products\FinancialDataQuality\SharedComponents\APPS\FDMAuchan\Inbox\LOADACTSAP\SAP_1101_Prova.txt
Processing Codes:
BLANK............. Line is blank or empty.
ESD............... Excluded String Detected, SKIP Field value was found.
NN................ Non-Numeric, Amount field contains non numeric characters.
RFM............... Required Field Missing.
TC................ Type Conversion, Amount field could be converted to a number.
ZP................ Zero Suppress, Amount field contains a 0 value and zero suppress is ON.
Create Output File Start: [2011-09-07 16:50:16]
[Blank] -
Excluded Record Count..............0
Blank Record Count.................1
Total Records Bypassed.............1
Valid Records......................477
Total Records Processed............478
Begin Oracle (SQL-Loader) Process (477): [2011-09-07 16:50:16]
Oracle (SQL-Loader) Log File Contents:
SQL*Loader: Release 11.2.0.1.0 - Production on Mer Set 7 16:50:16 2011
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Control File: C:\Users\INSTAL~1\AppData\Local\Temp\tWadmin154840402517.ctl
Character Set UTF16 specified for all input.
Using character length semantics.
First primary datafile C:\Users\INSTAL~1\AppData\Local\Temp\tWadmin154840402517.tmp has a
little endian byte order mark in it.
Data File: C:\Users\INSTAL~1\AppData\Local\Temp\tWadmin154840402517.tmp
Bad File: C:\Users\INSTAL~1\AppData\Local\Temp\tWadmin154840402517.bad
Discard File: none specified
(Allow all discards)
Number to load: ALL
Number to skip: 0
Errors allowed: 50
Continuation: none specified
Path used: Direct
Load is UNRECOVERABLE; invalidation redo is produced.
Table TWADMIN154840402517, loaded from every logical record.
Insert option in effect for this table: APPEND
TRAILING NULLCOLS option in effect
Column Name Position Len Term Encl Datatype
DATAKEY SEQUENCE (1, 1)
PARTITIONKEY FIRST * CHARACTER
Terminator string : '~|'
CATKEY NEXT * CHARACTER
Terminator string : '~|'
PERIODKEY NEXT * DATE YYYYMMDD
Terminator string : '~|'
DATAVIEW NEXT * CHARACTER
Terminator string : '~|'
ACCOUNT NEXT * CHARACTER
Terminator string : '~|'
ENTITY NEXT * CHARACTER
Terminator string : '~|'
ICP NEXT * CHARACTER
Terminator string : '~|'
UD1 NEXT * CHARACTER
Terminator string : '~|'
UD2 NEXT * CHARACTER
Terminator string : '~|'
UD3 NEXT * CHARACTER
Terminator string : '~|'
UD4 NEXT * CHARACTER
Terminator string : '~|'
UD5 NEXT * CHARACTER
Terminator string : '~|'
UD6 NEXT * CHARACTER
Terminator string : '~|'
UD7 NEXT * CHARACTER
Terminator string : '~|'
UD8 NEXT * CHARACTER
Terminator string : '~|'
UD9 NEXT * CHARACTER
Terminator string : '~|'
UD10 NEXT * CHARACTER
Terminator string : '~|'
UD11 NEXT * CHARACTER
Terminator string : '~|'
UD12 NEXT * CHARACTER
Terminator string : '~|'
UD13 NEXT * CHARACTER
Terminator string : '~|'
UD14 NEXT * CHARACTER
Terminator string : '~|'
UD15 NEXT * CHARACTER
Terminator string : '~|'
UD16 NEXT * CHARACTER
Terminator string : '~|'
UD17 NEXT * CHARACTER
Terminator string : '~|'
UD18 NEXT * CHARACTER
Terminator string : '~|'
UD19 NEXT * CHARACTER
Terminator string : '~|'
UD20 NEXT * CHARACTER
Terminator string : '~|'
DESC1 NEXT * CHARACTER
Terminator string : '~|'
DESC2 NEXT * CHARACTER
Terminator string : '~|'
ATTR1 NEXT * CHARACTER
Terminator string : '~|'
ATTR2 NEXT * CHARACTER
Terminator string : '~|'
ATTR3 NEXT * CHARACTER
Terminator string : '~|'
ATTR4 NEXT * CHARACTER
Terminator string : '~|'
ATTR5 NEXT * CHARACTER
Terminator string : '~|'
ATTR6 NEXT * CHARACTER
Terminator string : '~|'
ATTR7 NEXT * CHARACTER
Terminator string : '~|'
ATTR8 NEXT * CHARACTER
Terminator string : '~|'
ATTR9 NEXT * CHARACTER
Terminator string : '~|'
ATTR10 NEXT * CHARACTER
Terminator string : '~|'
ATTR11 NEXT * CHARACTER
Terminator string : '~|'
ATTR12 NEXT * CHARACTER
Terminator string : '~|'
ATTR13 NEXT * CHARACTER
Terminator string : '~|'
ATTR14 NEXT * CHARACTER
Terminator string : '~|'
MEMOKEY NEXT * CHARACTER
Terminator string : '~|'
AMOUNT NEXT * CHARACTER
Terminator string : '~|'
CALCACCTTYPE CONSTANT
Value is '9'
CHANGESIGN CONSTANT
Value is '0'
AMOUNTX CONSTANT
Value is '0'
ACCOUNTR CONSTANT
Value is '0'
ACCOUNTF CONSTANT
Value is '0'
ENTITYR CONSTANT
Value is '0'
ENTITYF CONSTANT
Value is '0'
ICPR CONSTANT
Value is '0'
ICPF CONSTANT
Value is '0'
UD1R CONSTANT
Value is '0'
UD1F CONSTANT
Value is '0'
UD2R CONSTANT
Value is '0'
UD2F CONSTANT
Value is '0'
UD3R CONSTANT
Value is '0'
UD3F CONSTANT
Value is '0'
UD4R CONSTANT
Value is '0'
UD4F CONSTANT
Value is '0'
UD5R CONSTANT
Value is '0'
UD5F CONSTANT
Value is '0'
UD6R CONSTANT
Value is '0'
UD6F CONSTANT
Value is '0'
UD7R CONSTANT
Value is '0'
UD7F CONSTANT
Value is '0'
UD8R CONSTANT
Value is '0'
UD8F CONSTANT
Value is '0'
UD9R CONSTANT
Value is '0'
UD9F CONSTANT
Value is '0'
UD10R CONSTANT
Value is '0'
UD10F CONSTANT
Value is '0'
UD11R CONSTANT
Value is '0'
UD11F CONSTANT
Value is '0'
UD12R CONSTANT
Value is '0'
UD12F CONSTANT
Value is '0'
UD13R CONSTANT
Value is '0'
UD13F CONSTANT
Value is '0'
UD14R CONSTANT
Value is '0'
UD14F CONSTANT
Value is '0'
UD15R CONSTANT
Value is '0'
UD15F CONSTANT
Value is '0'
UD16R CONSTANT
Value is '0'
UD16F CONSTANT
Value is '0'
UD17R CONSTANT
Value is '0'
UD17F CONSTANT
Value is '0'
UD18R CONSTANT
Value is '0'
UD18F CONSTANT
Value is '0'
UD19R CONSTANT
Value is '0'
UD19F CONSTANT
Value is '0'
UD20R CONSTANT
Value is '0'
UD20F CONSTANT
Value is '0'
ARCHIVEID CONSTANT
Value is '774'
HASMEMOITEM CONSTANT
Value is '0'
STATICDATAKEY CONSTANT
Value is '0'
Referential Integrity Constraint/Trigger Information:
NULL, UNIQUE, and PRIMARY KEY constraints are unaffected.
Trigger FDMAUCHAN."TWADMIN154840402517_AK" was disabled before the load.
Record 1: Rejected - Error on table TWADMIN154840402517, column AMOUNT.
ORA-01722: invalid number
(Records 2 through 51 were rejected with the same ORA-01722: invalid number error on column AMOUNT.)
FDMAUCHAN."TWADMIN154840402517_AK" was re-enabled.
MAXIMUM ERROR COUNT EXCEEDED - Above statistics reflect partial run.
Table TWADMIN154840402517:
0 Rows successfully loaded.
51 Rows not loaded due to data errors.
0 Rows not loaded because all WHEN clauses were failed.
0 Rows not loaded because all fields were null.
Bind array size not used in direct path.
Column array rows : 3000
Stream buffer bytes: 256000
Read buffer bytes: 18576000
Total logical records skipped: 0
Total logical records rejected: 51
Total logical records discarded: 0
Total stream buffers loaded by SQL*Loader main thread: 1
Total stream buffers loaded by SQL*Loader load thread: 0
Run began on Wed Sep 07 16:50:16 2011
Run ended on Wed Sep 07 16:50:17 2011
Elapsed time was: 00:00:00.56
CPU time was: 00:00:00.24
Oracle (SQL-Loader) Bad File Contents:
750~|13~|20110131~|YTD~|701306~|1854~|~|153~|SAP~|DEF_ATT~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|0~|100.00
750~|13~|20110131~|YTD~|711200~|0355~|~|190~|SAP~|DEF_ATT~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|0~|100.00
750~|13~|20110131~|YTD~|711260~|1822~|~|119~|SAP~|DEF_ATT~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|0~|100.00
750~|13~|20110131~|YTD~|711260~|1858~|~|142~|SAP~|DEF_ATT~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|0~|100.00
750~|13~|20110131~|YTD~|711261~|3601~|~|160~|SAP~|DEF_ATT~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|0~|100.00
750~|13~|20110131~|YTD~|711302~|5308~|~|999~|SAP~|DEF_ATT~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|0~|100.00
750~|13~|20110131~|YTD~|711383~|1748~|~|010~|SAP~|DEF_ATT~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|0~|100.00
750~|13~|20110131~|YTD~|711383~|1857~|~|010~|SAP~|DEF_ATT~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|0~|100.00
750~|13~|20110131~|YTD~|711458~|1608~|~|191~|SAP~|DEF_ATT~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|0~|100.00
750~|13~|20110131~|YTD~|711468~|0355~|~|181~|SAP~|DEF_ATT~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|0~|100.00
750~|13~|20110131~|YTD~|711468~|0392~|~|181~|SAP~|DEF_ATT~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|0~|100.00
750~|13~|20110131~|YTD~|711468~|1565~|~|181~|SAP~|DEF_ATT~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|0~|100.00
750~|13~|20110131~|YTD~|711468~|1784~|~|181~|SAP~|DEF_ATT~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|0~|100.00
750~|13~|20110131~|YTD~|711468~|1823~|~|181~|SAP~|DEF_ATT~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|0~|100.00
750~|13~|20110131~|YTD~|711468~|1848~|~|181~|SAP~|DEF_ATT~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|0~|100.00
750~|13~|20110131~|YTD~|711468~|1853~|~|181~|SAP~|DEF_ATT~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|0~|100.00
750~|13~|20110131~|YTD~|711605~|2644~|~|555~|SAP~|DEF_ATT~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|0~|100.00
750~|13~|20110131~|YTD~|711606~|3601~|~|555~|SAP~|DEF_ATT~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|0~|100.00
750~|13~|20110131~|YTD~|711641~|2389~|~|999~|SAP~|DEF_ATT~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|0~|100.00
750~|13~|20110131~|YTD~|711655~|2649~|~|116~|SAP~|DEF_ATT~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|0~|100.00
750~|13~|20110131~|YTD~|721330~|1161~|~|155~|SAP~|DEF_ATT~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|0~|100.00
750~|13~|20110131~|YTD~|721330~|1852~|~|010~|SAP~|DEF_ATT~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|0~|100.00
750~|13~|20110131~|YTD~|721330~|3539~|~|999~|SAP~|DEF_ATT~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|0~|100.00
750~|13~|20110131~|YTD~|721330~|5158~|~|999~|SAP~|DEF_ATT~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|0~|100.00
750~|13~|20110131~|YTD~|721340~|3473~|~|190~|SAP~|DEF_ATT~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|0~|100.00
750~|13~|20110131~|YTD~|721341~|4263~|~|999~|SAP~|DEF_ATT~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|0~|100.00
750~|13~|20110131~|YTD~|721341~|5617~|~|999~|SAP~|DEF_ATT~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|0~|100.00
750~|13~|20110131~|YTD~|730001~|3898~|~|010~|SAP~|DEF_ATT~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|0~|100.00
750~|13~|20110131~|YTD~|730003~|0355~|~|192~|SAP~|DEF_ATT~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|0~|100.00
750~|13~|20110131~|YTD~|730003~|0392~|~|140~|SAP~|DEF_ATT~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|0~|100.00
750~|13~|20110131~|YTD~|730003~|1565~|~|142~|SAP~|DEF_ATT~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|0~|100.00
750~|13~|20110131~|YTD~|730003~|1795~|~|134~|SAP~|DEF_ATT~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|0~|100.00
750~|13~|20110131~|YTD~|730003~|1848~|~|126~|SAP~|DEF_ATT~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|0~|100.00
750~|13~|20110131~|YTD~|732000~|1823~|~|130~|SAP~|DEF_ATT~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|0~|100.00
750~|13~|20110131~|YTD~|734001~|3601~|~|151~|SAP~|DEF_ATT~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|0~|100.00
750~|13~|20110131~|YTD~|761202~|1748~|~|180~|SAP~|DEF_ATT~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|0~|100.00
750~|13~|20110131~|YTD~|761205~|5309~|~|181~|SAP~|DEF_ATT~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|0~|100.00
750~|13~|20110131~|YTD~|761206~|1608~|~|181~|SAP~|DEF_ATT~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|0~|100.00
750~|13~|20110131~|YTD~|761206~|1823~|~|181~|SAP~|DEF_ATT~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|0~|100.00
750~|13~|20110131~|YTD~|761210~|1857~|~|181~|SAP~|DEF_ATT~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|0~|100.00
750~|13~|20110131~|YTD~|761210~|7347~|~|181~|SAP~|DEF_ATT~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|0~|100.00
750~|13~|20110131~|YTD~|H11468~|1854~|~|181~|SAP~|DEF_ATT~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|0~|100.00
750~|13~|20110131~|YTD~|H11468~|2644~|~|181~|SAP~|DEF_ATT~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|0~|100.00
750~|13~|20110131~|YTD~|H11468~|5309~|~|181~|SAP~|DEF_ATT~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|0~|100.00
750~|13~|20110131~|YTD~|H11473~|1784~|~|191~|SAP~|DEF_ATT~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|0~|100.00
750~|13~|20110131~|YTD~|H11605~|2644~|~|151~|SAP~|DEF_ATT~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|0~|100.00
750~|13~|20110131~|YTD~|H21330~|1161~|~|123~|SAP~|DEF_ATT~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|0~|100.00
750~|13~|20110131~|YTD~|H21330~|1161~|~|156~|SAP~|DEF_ATT~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|0~|100.00
750~|13~|20110131~|YTD~|H21340~|3898~|~|154~|SAP~|DEF_ATT~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|0~|100.00
750~|13~|20110131~|YTD~|H30000~|1854~|~|177~|SAP~|DEF_ATT~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|0~|100.00
750~|13~|20110131~|YTD~|H30000~|1854~|~|188~|SAP~|DEF_ATT~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|~|0~|100.00
[RDMS Bulk Load Error Begin]
Message: (4003) - Oracle (SQL-Loader) data load failed, please see processing log for details!
See Bulk Load File: C:\Users\INSTAL~1\AppData\Local\Temp\tWadmin154840402517.tmp
[RDMS Bulk Load Error End]
Hi Daniele,
Can I know the steps to resolve this issue?
I am also facing the same issue. Below is the error message.
[RDMS Bulk Load Error Begin]
Message: (4003) - Oracle (SQL-Loader) data load failed, please see processing log for details!
See Bulk Load File: C:\temp\tWthinad824889961810.tmp
[RDMS Bulk Load Error End]
I am using Essbase as the target adapter. Your reply will be very helpful to me.
Thanks
Dhinesh Kumar T -
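For anyone hitting the same ORA-01722 with an Italian-locale log like the one above: a common cause is a decimal-separator mismatch. If the database session's NLS_NUMERIC_CHARACTERS expects a comma as the decimal separator, an AMOUNT value such as 100.00 is rejected as an invalid number. One hedged fix, sketched below for the SQL*Loader control file (the format mask and terminator are assumptions taken from the log; this is not a confirmed diagnosis of this particular load):

```sql
-- SQL*Loader control-file fragment (sketch): convert AMOUNT explicitly,
-- forcing '.' as the decimal separator regardless of the session NLS settings.
-- The format mask '999999999999.99' is an assumption; widen it as needed.
AMOUNT CHAR TERMINATED BY '~|'
       "to_number(:AMOUNT, '999999999999.99', 'NLS_NUMERIC_CHARACTERS = ''.,''')"
```

Alternatively, setting NLS_NUMERIC_CHARACTERS appropriately in the environment that invokes SQL*Loader may achieve the same effect without touching the control file.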
ORA-30036 when modifying column in large table
Oracle version 10.2.0.4
Afternoon everyone!
I have a large table that I need to modify a column in - increasing from CHAR(3) to CHAR(6)
On altering the table I'm getting an ORA-30036: unable to extend segment by 8 in undo tablespace
Increasing undo tbs size isn't really an option, and I don't really want to go copying the table elsewhere either - again due to space limitations.
Is there a way to avoid this undo exhaustion? Will disabling logging for this table solve my issue? Or is there another way similar to the 'checkpoint' clause you can use when dropping columns?
Many thanks!
Adam M
Just in case nothing better appears and you can't increase the UNDO ...
1. Create a new table with the correct datatype
2. Insert data from the old table into the new table, in batches if necessary. Divide the data by key values if possible, or process it in rowid ranges.
3. Make sure dependent objects are created for the new table
4. Drop the old table
5. Rename the new table to the old table's name -
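The steps above can be sketched roughly as follows (all table and column names are placeholders; a direct-path insert writes above the high-water mark and so generates very little undo for the table data, though behavior should be verified on your own version):

```sql
-- Sketch of the copy-and-swap approach; big_table/some_col are placeholders.
CREATE TABLE big_table_new NOLOGGING
AS SELECT * FROM big_table WHERE 1 = 0;               -- empty copy of the structure

ALTER TABLE big_table_new MODIFY (some_col CHAR(6));  -- widen the column up front

INSERT /*+ APPEND */ INTO big_table_new               -- direct path: minimal undo
SELECT * FROM big_table;
COMMIT;                                               -- required before querying big_table_new

-- Recreate indexes, constraints, grants and triggers on big_table_new here.

DROP TABLE big_table;
RENAME big_table_new TO big_table;
```

Creating indexes after the load, rather than before, also keeps undo and redo for the copy to a minimum.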
FORALL bulk insert ..strange behaviour
Hi all..
I have the following problem..
I use a FORALL bulk INSERT statement to insert a set of values using a collection that has only one row. The thing is, I get an 'ORA-01400: cannot insert NULL into <schema>.<table>.<column>' error message, whereas the row has been inserted into the table!
Any ideas why this is happening?
Here is the sample code. The strange thing is that the cursor has 1 row and the array also gets 1 row:
FUNCTION MAIN() RETURN BOOLEAN IS
-- This cursor retrieves all necessary values from CRD table to be inserted into PDCS_DEFERRED_RELATIONSHIP table
CURSOR mycursor IS
SELECT key1,
key2,
column1,
date1,
date2,
txn_date
FROM mytable pc
WHERE
-- create an array and a type for the scancrd cursor
type t_arraysample IS TABLE OF mycursor%ROWTYPE;
myarrayofvalues t_arraysample;
TYPE t_target IS TABLE OF mytable%ROWTYPE;
la_target t_target := t_target();
BEGIN
OPEN mycursor;
FETCH mycursor BULK COLLECT
INTO myarrayofvalues
LIMIT 1000;
myarrayofvalues.extend(1000);
FOR x IN 1 .. myarrayofvalues.COUNT
LOOP
-- fetch variables into arrays
gn_index := gn_index + 1;
la_target(gn_index).key1 := myarrayofvalues(x).key1;
la_target(gn_index).key2 := myarrayofvalues(x).key2;
la_target(gn_index).column1 := myarrayofvalues(x).column1;
la_target(gn_index).date1 := myarrayofvalues(x).date1;
la_target(gn_index).date2 := myarrayofvalues(x).date2;
la_target(gn_index).txn_date := myarrayofvalues(x).txn_date;
END LOOP;
-- call function to insert/update TABLE
IF NOT MyFunction(la_target) THEN
ROLLBACK;
RAISE genericError;
ELSE COMMIT;
END IF;
CLOSE mycursor;
END IF;
FUNCTION MyFunction(t_crd IN t_arraysample) return boolean;
DECLARE
BEGIN
FORALL x IN la_target.FIRST..la_target.LAST
INSERT INTO mytable
VALUES la_target(x);
END IF; -
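A likely culprit in the code above is the myarrayofvalues.extend(1000) issued after the bulk fetch: EXTEND appends 1000 empty elements to the collection, so the copy loop and the later FORALL end up processing rows whose mandatory columns are NULL, which would explain the ORA-01400. A minimal sketch of the pattern without the intermediate copy (mysource, mytable, and the column list are placeholders, and mytable is assumed to have exactly the six columns selected):

```sql
-- Minimal FORALL sketch; names are placeholders.
DECLARE
  CURSOR c_src IS
    SELECT key1, key2, column1, date1, date2, txn_date
    FROM   mysource;

  TYPE t_rows IS TABLE OF c_src%ROWTYPE;
  l_rows t_rows;
BEGIN
  OPEN c_src;
  LOOP
    FETCH c_src BULK COLLECT INTO l_rows LIMIT 1000;
    EXIT WHEN l_rows.COUNT = 0;   -- no EXTEND: the fetch sizes the collection

    -- Record-based insert avoids referencing individual record fields inside
    -- FORALL, which older releases reject with PLS-00436.
    FORALL i IN 1 .. l_rows.COUNT
      INSERT INTO mytable VALUES l_rows(i);

    COMMIT;   -- or commit once after the loop
  END LOOP;
  CLOSE c_src;
END;
/
```

The key point is that the collection passed to FORALL should contain exactly the rows fetched, with no manual EXTEND in between.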
I have tried to enable bulk insert on a location. SQL* Loader is installed and appears to work, but I get an error when it tries to load the temp table into my Oracle database. Here's the output from the load file log:
2011-05-12-10:43:09
User ID........... xxx
Location.......... xxx
Source File....... xxx
Processing Codes:
BLANK............. Line is blank or empty.
ESD............... Excluded String Detected, SKIP Field value was found.
NN................ Non-Numeric, Amount field contains non numeric characters.
RFM............... Required Field Missing.
TC................ Type Conversion, Amount field could be converted to a number.
ZP................ Zero Suppress, Amount field contains a 0 value and zero suppress is ON.
Create Output File Start: [2011-05-12-10:43:10]
Begin Oracle (SQL-Loader) Process (42386): [2011-05-12-10:43:17]
Oracle (SQL-Loader) Log File Contents:
SQL*Loader: Release 10.2.0.1.0 - Production on Thu May 12 10:43:17 2011
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Control File: C:\WINDOWS\TEMP\tWrobert358876967263.ctl
Character Set UTF16 specified for all input.
Using character length semantics.
First primary datafile C:\WINDOWS\TEMP\tWrobert358876967263.tmp has a
little endian byte order mark in it.
Data File: C:\WINDOWS\TEMP\tWrobert358876967263.tmp
Bad File: C:\WINDOWS\TEMP\tWrobert358876967263.bad
Discard File: none specified
(Allow all discards)
Number to load: ALL
Number to skip: 0
Errors allowed: 50
Continuation: none specified
Path used: Direct
Load is UNRECOVERABLE; invalidation redo is produced.
Table TWROBERT358876967263, loaded from every logical record.
Insert option in effect for this table: APPEND
TRAILING NULLCOLS option in effect
Column Name Position Len Term Encl Datatype
DATAKEY SEQUENCE (1, 1)
PARTITIONKEY FIRST * CHARACTER
Terminator string : '~|'
CATKEY NEXT * CHARACTER
Terminator string : '~|'
PERIODKEY NEXT * DATE YYYYMMDD
Terminator string : '~|'
DATAVIEW NEXT * CHARACTER
Terminator string : '~|'
ACCOUNT NEXT * CHARACTER
Terminator string : '~|'
ENTITY NEXT * CHARACTER
Terminator string : '~|'
ICP NEXT * CHARACTER
Terminator string : '~|'
UD1 NEXT * CHARACTER
Terminator string : '~|'
UD2 NEXT * CHARACTER
Terminator string : '~|'
UD3 NEXT * CHARACTER
Terminator string : '~|'
UD4 NEXT * CHARACTER
Terminator string : '~|'
UD5 NEXT * CHARACTER
Terminator string : '~|'
UD6 NEXT * CHARACTER
Terminator string : '~|'
UD7 NEXT * CHARACTER
Terminator string : '~|'
UD8 NEXT * CHARACTER
Terminator string : '~|'
UD9 NEXT * CHARACTER
Terminator string : '~|'
UD10 NEXT * CHARACTER
Terminator string : '~|'
UD11 NEXT * CHARACTER
Terminator string : '~|'
UD12 NEXT * CHARACTER
Terminator string : '~|'
UD13 NEXT * CHARACTER
Terminator string : '~|'
UD14 NEXT * CHARACTER
Terminator string : '~|'
UD15 NEXT * CHARACTER
Terminator string : '~|'
UD16 NEXT * CHARACTER
Terminator string : '~|'
UD17 NEXT * CHARACTER
Terminator string : '~|'
UD18 NEXT * CHARACTER
Terminator string : '~|'
UD19 NEXT * CHARACTER
Terminator string : '~|'
UD20 NEXT * CHARACTER
Terminator string : '~|'
DESC1 NEXT * CHARACTER
Terminator string : '~|'
DESC2 NEXT * CHARACTER
Terminator string : '~|'
ATTR1 NEXT * CHARACTER
Terminator string : '~|'
ATTR2 NEXT * CHARACTER
Terminator string : '~|'
ATTR3 NEXT * CHARACTER
Terminator string : '~|'
ATTR4 NEXT * CHARACTER
Terminator string : '~|'
ATTR5 NEXT * CHARACTER
Terminator string : '~|'
ATTR6 NEXT * CHARACTER
Terminator string : '~|'
ATTR7 NEXT * CHARACTER
Terminator string : '~|'
ATTR8 NEXT * CHARACTER
Terminator string : '~|'
ATTR9 NEXT * CHARACTER
Terminator string : '~|'
ATTR10 NEXT * CHARACTER
Terminator string : '~|'
ATTR11 NEXT * CHARACTER
Terminator string : '~|'
ATTR12 NEXT * CHARACTER
Terminator string : '~|'
ATTR13 NEXT * CHARACTER
Terminator string : '~|'
ATTR14 NEXT * CHARACTER
Terminator string : '~|'
MEMOKEY NEXT * CHARACTER
Terminator string : '~|'
AMOUNT NEXT * CHARACTER
Terminator string : '~|'
CALCACCTTYPE CONSTANT
Value is '9'
CHANGESIGN CONSTANT
Value is '0'
AMOUNTX CONSTANT
Value is '0'
ACCOUNTR CONSTANT
Value is '0'
ACCOUNTF CONSTANT
Value is '0'
ENTITYR CONSTANT
Value is '0'
ENTITYF CONSTANT
Value is '0'
ICPR CONSTANT
Value is '0'
ICPF CONSTANT
Value is '0'
UD1R CONSTANT
Value is '0'
UD1F CONSTANT
Value is '0'
UD2R CONSTANT
Value is '0'
UD2F CONSTANT
Value is '0'
UD3R CONSTANT
Value is '0'
UD3F CONSTANT
Value is '0'
UD4R CONSTANT
Value is '0'
UD4F CONSTANT
Value is '0'
UD5R CONSTANT
Value is '0'
UD5F CONSTANT
Value is '0'
UD6R CONSTANT
Value is '0'
UD6F CONSTANT
Value is '0'
UD7R CONSTANT
Value is '0'
UD7F CONSTANT
Value is '0'
UD8R CONSTANT
Value is '0'
UD8F CONSTANT
Value is '0'
UD9R CONSTANT
Value is '0'
UD9F CONSTANT
Value is '0'
UD10R CONSTANT
Value is '0'
UD10F CONSTANT
Value is '0'
UD11R CONSTANT
Value is '0'
UD11F CONSTANT
Value is '0'
UD12R CONSTANT
Value is '0'
UD12F CONSTANT
Value is '0'
UD13R CONSTANT
Value is '0'
UD13F CONSTANT
Value is '0'
UD14R CONSTANT
Value is '0'
UD14F CONSTANT
Value is '0'
UD15R CONSTANT
Value is '0'
UD15F CONSTANT
Value is '0'
UD16R CONSTANT
Value is '0'
UD16F CONSTANT
Value is '0'
UD17R CONSTANT
Value is '0'
UD17F CONSTANT
Value is '0'
UD18R CONSTANT
Value is '0'
UD18F CONSTANT
Value is '0'
UD19R CONSTANT
Value is '0'
UD19F CONSTANT
Value is '0'
UD20R CONSTANT
Value is '0'
UD20F CONSTANT
Value is '0'
ARCHIVEID CONSTANT
Value is '2260'
HASMEMOITEM CONSTANT
Value is '0'
STATICDATAKEY CONSTANT
Value is '0'
Referential Integrity Constraint/Trigger Information:
NULL, UNIQUE, and PRIMARY KEY constraints are unaffected.
Trigger FDMAPP2."TWROBERT358876967263_AK" was disabled before the load.
SQL*Loader-951: Error calling once/load initialization
ORA-00942: table or view does not exist
Processing Complete... [2011-05-12-10:43:18]
Hi rjgideon,
At the very end it says "ORA-00942: table or view does not exist". Looks like SQL*Loader is trying to insert data into a table that doesn't exist.
Take a look at the control file: C:\WINDOWS\TEMP\tWrobert358876967263.ctl
You might find a hint in there.
Regards,
Matt -
How can I debug a Bulk Insert error?
I'm loading a bunch of files into SQL Server. All work fine, but one keeps erroring out on me. The files have different dates and other financial metrics, but their structure and field names should be exactly the same. Nevertheless, one keeps conking out and throwing this error.
Msg 4832, Level 16, State 1, Line 1
Bulk load: An unexpected end of file was encountered in the data file.
Msg 7399, Level 16, State 1, Line 1
The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error.
Msg 7330, Level 16, State 2, Line 1
Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".
The ROWTERMINATOR should be CRLF, and when you look at it in Notepad++ that's what it looks like, but it must be something else, because I keep getting errors here. I tried the good old: ROWTERMINATOR='0x0a'
That works on all files but one, so there's something funky going on here, and I need to see what SQL Server is really doing.
Is there some way to print out a log, or look at a log somewhere?
Thanks!!
Knowledge is the only thing that I can give you, and still retain, and we are both better off for it.
The first thing to try is to see if BCP likes the file. BCP and BULK INSERT adhere to the same spec, but they are different implementations, so there are subtle differences.
There is an ERRORFILE option, but it helps more when there is bad data.
You can also use the BATCHSIZE option to see how many records in the file it swallows, before things go bad. FIRSTROW and LASTROW can also help.
All in all, it can be quite tedious to find that single row where things are different - and where BULK INSERT loses sync entirely. Keep in mind that it reads fields one by one, and if there is one field terminator too few on a line, it will consume the line feed at the end of the line as data.
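Those options can be combined into something like the following sketch (the path, table name, and terminators are placeholders; ERRORFILE writes the offending rows to a separate file so they can be inspected directly):

```sql
-- Hypothetical example: narrow down the bad row in the one failing file.
BULK INSERT dbo.MyStagingTable
FROM 'C:\loads\problem_file.txt'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR   = '\n',                         -- or '0x0a' for bare line feeds
    FIRSTROW        = 2,                            -- skip a header row, if any
    BATCHSIZE       = 1000,                         -- committed batches show how far the load got
    MAXERRORS       = 50,
    ERRORFILE       = 'C:\loads\problem_file.err'   -- offending rows land here
);
```

Tightening FIRSTROW and LASTROW around the suspect batch then lets you bisect down to the single row that breaks the format.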
Erland Sommarskog, SQL Server MVP, [email protected] -
ODBC, bulk inserts and dynamic SQL
I am writing an application running on Windows NT 4, using the Oracle ODBC driver (8.01.05.00), that inserts many rows at a time (10000+) into an Oracle 8i database.
At present, I am using a stored procedure to insert each row into the database. The stored procedure uses dynamic SQL because I can only determine the table and field names at run time.
Due to the large number of records, it tends to take a while to perform all the inserts. I have tried a number of solutions, such as using batches of SQL statements (e.g. "INSERT...;INSERT...;INSERT..."), but the Oracle ODBC driver only seems to act on the first statement in the batch.
I have also considered using the FORALL statement and the SQL*Loader utility.
My problem with FORALL is that I'm not sure it works on dynamic SQL statements, and even if it did, how would I pass an array of statements to the stored procedure?
I ruled out SQL*Loader because I could not find a way to invoke it from an ODBC statement. Secondly, it requires the spawning of a new process.
What I am really after is something similar to the SQL Server (forgive me!) BULK INSERT statement, where you can simply create an input file with all the records you want to insert and pass it along in an ODBC statement such as "BULK INSERT <filename>".
Any ideas??
Hi,
I faced this same situation years ago (Oracle 7.2!) and had the following alternatives.
1) Use a 3rd party tool such as Sagent or CA Info pump (very pricey $$$)
2) Use VisualC++ and OCI to hook into the array insert routines (there are examples of these in the Oracle Home).
3) Use SQL*Loader (the best performance, but no real control of what's happening).
I ended up using (2) and used the Rogue Wave dbtools.h++ library to speed up the development.
These days, I would also suggest you take a look at Perl on NT (www.activestate.com) and the DBlib modules at www.perl.org. I believe they will also do bulk loading.
Your problem is that your program is using Oracle ODBC, when you should be using Oracle OCI for best performance.