SQL*Loader and PARALLEL
Hello,
I'm trying to use SQL*Loader with the parallel option.
I have loaded data with and without PARALLEL, but it takes the same time; I don't see any better performance with parallel.
In the call to sqlldr I'm adding PARALLEL=TRUE. That is the only change I have made.
Do I have to run any ALTER TABLE on my Oracle table?
Any advice will be greatly appreciated.
Nauj
The PARALLEL=TRUE parameter will not, by itself, result in any performance gain at all. It is really just a flag that tells SQL*Loader to allow multiple sessions to perform direct loads at the same time. It is up to you to actually spawn multiple SQL*Loader sessions on different portions of the data (e.g. you might divide your source data into multiple files and run distinct SQL*Loader sessions on each file concurrently).
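The approach above can be sketched as a script. Everything here is hypothetical (file name, credentials, control file), and the stand-in data is generated so the sketch runs without an Oracle client; the sqlldr commands are echoed rather than executed.

```shell
#!/bin/sh
# Sketch only: PARALLEL=TRUE merely *allows* concurrent direct loads;
# the speedup comes from actually running several sqlldr sessions at once.
set -e
N=4
INFILE=employees.dat                # hypothetical source file
seq 1 1000 > "$INFILE"              # stand-in data so the sketch is runnable
TOTAL=$(wc -l < "$INFILE")
LINES=$(( (TOTAL + N - 1) / N ))
split -l "$LINES" "$INFILE" chunk_  # produces chunk_aa, chunk_ab, ...
for f in chunk_*; do
  # In a real run, drop the echo, background each session with '&',
  # and 'wait' for all of them to finish.
  echo "sqlldr scott/tiger CONTROL=emp.ctl DATA=$f DIRECT=TRUE PARALLEL=TRUE"
done
```

Each session then performs its own direct load into the same table, which is exactly what the PARALLEL=TRUE flag permits.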
Similar Messages
-
SQL*Loader-971: parallel load option not allowed when loading lob columns
Hi,
I am trying to load a table, which has a VARRAY column, using DIRECT=TRUE and PARALLEL=TRUE through
SQL*Loader 10.2.0.4.0
OS: Sun Solaris 10 SPARC 64-bit,
Database: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0
The following error was received:
SQL*Loader-971: parallel load option not allowed when loading lob columns
Please help me to resolve..
Thanks and regards,
Anji

http://tinyurl.com/yhxdhnt -
SQL Loader and Insert Into Performance Difference
Hello All,
I'm in a situation where I need to measure the performance difference between SQL*Loader and INSERT INTO. Say there are 10,000 records in a flat file and I want to load them into a staging table.
I know that if I use PL/SQL UTL_FILE to do this job, performance will degrade (don't ask me why I'm going for UTL_FILE instead of SQL*Loader). But I don't know by how much. Can anybody tell me the performance difference in % (e.g. a 20% decrease) in the case of 10,000 records?
Thanks,
Kannan.

Kannan B wrote:
Do not confuse the topic; as I told you, I'm not going to use external tables. This post is to discuss the performance difference between SQL*Loader and a simple insert statement.

I don't think people are confusing the topic.
External tables are a superior means of reading a file, as they don't require any command-line calls or external control files to be set up. All that is needed is a single external table definition, created in a similar way to any other table (just with the additional external table information, obviously). It also eliminates the need for a 'staging' table on the database to load the data into, as the data can be queried as needed directly from the file; and if the file changes, so does the data seen through the external table, automatically, without the need to re-run any SQL*Loader process.
Who told you not to use External Tables? Do they know what they are talking about? Can they give a valid reason why external tables are not to be used?
IMO, if you're considering SQL*Loader, you should be considering External tables as a better alternative. -
SQL*Loader and External Table
Hi,
Can anyone tell me the difference between SQL*Loader and external tables?
Under what conditions can we use SQL*Loader versus an external table?
Thanx

External tables are accessible from SQL, which generally simplifies life if the data files are physically located on the database server, since you don't have to coordinate a call to an external SQL*Loader script with other PL/SQL processing. Under the covers, external tables are normally just invoking SQL*Loader.
SQL*Loader is more appropriate if the data files are on a different server, or if it is easier to call an executable rather than calling PL/SQL (i.e. if you have a batch file that runs on a server other than the database server and wants to FTP a data file from an FTP server and then load the data into Oracle).
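As a minimal sketch of the first case (all names here are hypothetical: the directory object, file and columns are made up for illustration), an external table exposes a CSV on the database server directly to SQL, with no sqlldr call:

```
-- Sketch: a CSV exposed as a table and queried directly
CREATE TABLE emp_ext (
  empno  NUMBER,
  ename  VARCHAR2(30)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir          -- directory object on the DB server
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('emp.csv')
);

-- no sqlldr call needed; use it from SQL or PL/SQL like any table
SELECT * FROM emp_ext;
```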
Justin -
Help in calling SQL*Loader and an Oracle procedure in a script
Hi Gurus,
please help me in writing a Unix script which will call SQL*Loader and also an Oracle procedure.
I wrote a script as follows:
#!/bin/sh
clear
#export ORACLE_SID='HOBS2'
sqlldr USERID=load/ps94mfo16 CONTROL=test_nica.ctl LOG=test_nica.log
retcode=`echo $?`
case "$retcode" in
0) echo "SQL*Loader execution successful" ;;
1) echo "SQL*Loader execution exited with EX_FAIL, see logfile" ;;
2) echo "SQL*Loader execution exited with EX_WARN, see logfile" ;;
3) echo "SQL*Loader execution encountered a fatal error" ;;
*) echo "unknown return code";;
esac
sqlplus USERID=load/ps94mfo16 << EOF
EXEC DO_TEST_SHELL_SCRIPT
EOF
It is loading the data into an Oracle table,
but the procedure is not executed.
Any valuable suggestion is highly appreciated.
Cheers

multiple duplicate threads:
to call an oracle procedure and sql loader in an unix script
Re: Can some one help he sql loader issue. -
HELP: SQL*Loader and REF Column
Hello,
I have already posted about this; I really need help and can't get any further with it.
I have the following problem. I have 2 tables which I created the following way:
CREATE TYPE gemark_schluessel_t AS OBJECT(
gemark_id NUMBER(8),
gemark_schl NUMBER(4),
gemark_name VARCHAR2(45)
);
CREATE TABLE gemark_schluessel_tab OF gemark_schluessel_t(
constraint pk_gemark PRIMARY KEY(gemark_id)
);
CREATE TYPE flurstueck_t AS OBJECT(
flst_id NUMBER(8),
flst_nr_zaehler NUMBER(4),
flst_nr_nenner NUMBER(4),
zusatz VARCHAR2(2),
flur_nr NUMBER(2),
gemark_schluessel REF gemark_schluessel_t,
flaeche SDO_GEOMETRY
);
CREATE TABLE flurstuecke_tab OF flurstueck_t(
constraint pk_flst PRIMARY KEY(flst_id),
constraint uq_flst UNIQUE(flst_nr_zaehler,flst_nr_nenner,zusatz,flur_nr),
flst_nr_zaehler NOT NULL,
flur_nr NOT NULL,
gemark_schluessel REFERENCES gemark_schluessel_tab
);
Now I have data in the gemark_schluessel_tab which looks like this (a sample):
1 101 Borna
2 102 Draisdorf
Now I want to load data into my flurstuecke_tab with SQL*Loader, and there I have problems with my REF column gemark_schluessel.
One data record looks like this in my file (it is without geometry):
1|97|7||1|1|
When I load my data record, it does not work. The reference (the system-generated OID) should be taken from gemark_schluessel_tab.
LOAD DATA
INFILE *
TRUNCATE
CONTINUEIF NEXT(1:1) = '#'
INTO TABLE FLURSTUECKE_TAB
FIELDS TERMINATED BY '|'
TRAILING NULLCOLS (
flst_id,
flst_nr_zaehler,
flst_nr_nenner,
zusatz,
flur_nr,
gemark_schluessel REF(CONSTANT 'GEMARK_SCHLUESSEL_TAB',GEMARK_ID),
gemark_id FILLER
)
BEGINDATA
1|97|7||1|1|
Is there an error I made?
Thanks in advance
Tig

multiple duplicate threads:
to call an oracle procedure and sql loader in an unix script
Re: Can some one help he sql loader issue. -
Problem: load PDF or similar files (stored on the operating system) into an Oracle table using SQL*Loader,
and then unload the files back from the Oracle table to their previous format.
I've used the SQL*Loader "sqlldr" command as:
" sqlldr scott/[email protected] control=c:\sqlldr\control.ctl log=c:\any.txt "
Control file is written as :
LOAD DATA
INFILE 'c:\sqlldr\r_sqlldr.txt'
REPLACE
INTO table r_sqlldr
Fields terminated by ','
(
id sequence (max,1) ,
fname char(20),
data LOBFILE(fname) terminated by EOF )
It loads the files (PDF, image and more...) that are mentioned in the file r_sqlldr.txt into the Oracle table r_sqlldr.
The text file (used as source) is written as:
c:\kalam.pdf,
c:\CTSlogo1.bmp
c:\any1.txt
After this load I used UTL_FILE to unload the data, and wrote a procedure like:
CREATE OR REPLACE PROCEDURE R_UTL AS
l_file UTL_FILE.FILE_TYPE;
l_buffer RAW(32767);
l_amount BINARY_INTEGER ;
l_pos INTEGER := 1;
l_blob BLOB;
l_blob_len INTEGER;
BEGIN
SELECT data
INTO l_blob
FROM r_sqlldr
where id= 1;
l_blob_len := DBMS_LOB.GETLENGTH(l_blob);
DBMS_OUTPUT.PUT_LINE('blob length : ' || l_blob_len);
IF (l_blob_len < 32767) THEN
l_amount :=l_blob_len;
ELSE
l_amount := 32767;
END IF;
DBMS_LOB.OPEN(l_blob, DBMS_LOB.LOB_READONLY);
l_file := UTL_FILE.FOPEN('DBDIR1','Kalam_out.pdf','w', 32767);
DBMS_OUTPUT.PUT_LINE('File opened');
WHILE l_pos < l_blob_len LOOP
DBMS_LOB.READ (l_blob, l_amount, l_pos, l_buffer);
DBMS_OUTPUT.PUT_LINE('Blob read');
l_pos := l_pos + l_amount;
UTL_FILE.PUT_RAW(l_file, l_buffer, TRUE);
DBMS_OUTPUT.PUT_LINE('writing to file');
UTL_FILE.FFLUSH(l_file);
UTL_FILE.NEW_LINE(l_file);
END LOOP;
UTL_FILE.FFLUSH(l_file);
UTL_FILE.FCLOSE(l_file);
DBMS_OUTPUT.PUT_LINE('File closed');
DBMS_LOB.CLOSE(l_blob);
EXCEPTION
WHEN OTHERS THEN
IF UTL_FILE.IS_OPEN(l_file) THEN
UTL_FILE.FCLOSE(l_file);
END IF;
DBMS_OUTPUT.PUT_LINE('Its working at last');
END R_UTL;
This unloads data from the r_sqlldr table (BLOBs) to files on the operating system.
-> The same procedure, with minor changes, is used to unload other similar files like images and text files.
In the above example: loading: the 3 files 1) Kalam.pdf 2) CTSlogo1.bmp 3) any1.txt are loaded into the Oracle table r_sqlldr's 3 rows respectively:
file names into the fname column and the corresponding data into the data (BLOB) column.
Unload: then these files are written back to their previous format on the operating system using the UTL_FILE feature of Oracle.
So the PROBLEM IS: the actual capacity (size) of these files gets unloaded back, but with decreased quality. The PDF file doesn't even display its data: the size is almost equal to the source file, but the data is lost when I open it.
And for images: the images are getting loaded and unloaded, but with the colors changed...
Also, features of Oracle (like FFLUSH) have been used, but it never worked.
ANY SUGGESTIONS OR ALTERNATE SOLUTIONS TO LOAD AND UNLOAD PDFs THROUGH ORACLE ARE REQUESTED.
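One common cause of exactly this symptom, offered here as an assumption (the full reply this thread received is not quoted): the output file is opened in text mode ('w'), and UTL_FILE.NEW_LINE injects newline bytes into the binary stream, which corrupts PDFs and shifts image colors. A minimal sketch of the write loop using byte mode ('wb') and no NEW_LINE, reusing the table and directory names from the procedure above:

```sql
-- Sketch: only the marked lines differ from the original procedure
DECLARE
  l_file     UTL_FILE.FILE_TYPE;
  l_buffer   RAW(32767);
  l_amount   BINARY_INTEGER;
  l_pos      INTEGER := 1;
  l_blob     BLOB;
  l_blob_len INTEGER;
BEGIN
  SELECT data INTO l_blob FROM r_sqlldr WHERE id = 1;
  l_blob_len := DBMS_LOB.GETLENGTH(l_blob);
  -- 'wb' (write byte mode), not 'w': text mode mangles binary content
  l_file := UTL_FILE.FOPEN('DBDIR1', 'Kalam_out.pdf', 'wb', 32767);
  WHILE l_pos <= l_blob_len LOOP
    l_amount := 32767;
    DBMS_LOB.READ(l_blob, l_amount, l_pos, l_buffer);
    UTL_FILE.PUT_RAW(l_file, l_buffer, TRUE);
    l_pos := l_pos + l_amount;
    -- no UTL_FILE.NEW_LINE here: it would inject newline bytes
  END LOOP;
  UTL_FILE.FCLOSE(l_file);
END;
/
```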
Thanks Justin, for the quick response...
Well, I am loading data into a BLOB only, and using SQL*Loader.
I've never used dbms_lob.loadfromfile to do the loads;
I opened a file on the network and then used dbms_lob.read and
UTL_FILE.PUT_RAW to read and write data into the target file.
Actually, my process is working fine with text files, but not with PDFs and images.
And your question of "Is the data the proper length after reading it in?" - I'm not getting what you are asking, but regarding data length I think there is no problem, except that the source PDF length is 90.4 KB and the target is 90.8 KB.
That's it...
So I request you to add some more help, or should I provide some more details? -
Is there any difference between Oracle 9i SQL*Loader and Oracle 10g SQL*Loader
Hi
Can anyone tell me whether there is any difference between the Oracle 9i SQL*Loader and the Oracle 10g SQL*Loader?
I am upgrading the 9i DB to 10g and want to run the 9i SQL*Loader control files against the upgraded 10g DB, so please let me know of any difference for which I would need to make modifications to the control files.
Thank you in advance
Adi

answered
-
Import and process large data with SQL*Loader and Java resource
Hello,
I have a project to import data from a text file on a schedule. It is a large amount of data: nearly 20,000 records per hour.
After that, we have to analyze the data and export the results into another database.
I have researched SQL*Loader and Java resources to do these tasks, but I have no experience with them.
I'm afraid that with the huge data volume, Oracle could slow down or the session in the Java resource application could time out.
Please give me some advice about the solution.
Thank you very much.

With the '?' mark I mean "How can I link this COL1 with the column in the csv file?"
Attilio -
Hi,
I want to insert 100,000 records daily into a table for the first month, and then in the next month these records are going to be replaced by new, updated records.
There might be a few additions and deletions among the previous records as well.
Actually it's consumer data, so there might be a few consumers who have withdrawn from the utility, and there will be some more consumers added to the database,
but almost 99% of the previous month's data has to be updated/replaced with the fresh month's data.
For instance, what I have in mind is that I will use SQL*Loader to load the data for the first month, and then I will delete the previous data using SQL*Plus and load the fresh month's data using SQL*Loader again.
1. Is this OK? Or is there some better solution?
2. I have heard of external files; are they feasible in my scenario?
3. I have planned to make scripts for SQL*Plus and SQL*Loader and use them in batch files (OS Windows 2003 Server, Oracle 9i database). Is there some better choice to make the whole procedure automatic?
looking for your suggestions
nadeem ameer

I would suggest you use external tables, since they are more flexible than
SQL*Loader and are a better option.
For using external tables:
1) You will have to create a directory first.
2) Generally creation of the directory is done by SYS; hence, after creating the directory,
read & write privileges on it have to be granted to the user.
3) Create the external table.
4) Now query the external table like a normal table (note that external tables are read-only; to insert, update or delete, load from the external table into a regular table).
You can get more information from
http://www.oracle-base.com/articles/9i/SQLNewFeatures9i.php#ExternalTables
CREATE DIRECTORY <directory_name> AS '<directory path where the file is present>';
GRANT READ, WRITE ON DIRECTORY <directory_name> TO <username>;
CREATE TABLE <table_name>
(<column names>)
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
DEFAULT DIRECTORY <directory_name>
ACCESS PARAMETERS
( RECORDS DELIMITED BY NEWLINE
FIELDS TERMINATED BY ','
)
LOCATION ('<filename>')
)
PARALLEL 5
REJECT LIMIT 200;
Hope this helps. -
"ORA-00054 Resource Busy Error" when running SQL*Loader in Parallel
Hi all,
Please help me with an issue. We are using DataStage, which uses SQL*Loader to load data into an Oracle table. SQL*Loader invokes 8 parallel sessions to insert into the table. When doing so, we are facing the following error intermittently:
SQL*Loader-951: Error calling once/load initialization
ORA-00604: error occurred at recursive SQL level 1
ORA-00054: resource busy and acquire with NOWAIT specified

Since the control file is generated automatically by DataStage, we cannot modify/change the options and test. The control file for the same is:
OPTIONS(DIRECT=TRUE, PARALLEL=TRUE, SKIP_INDEX_MAINTENANCE=YES)
LOAD DATA INFILE 'ora.2958.371909.fifo.1' "FIX 1358"
APPEND INTO TABLE X
(
x1 POSITION(1:8) DECIMAL(15,0) NULLIF (1:8) = X'0000000000000000',
x2 POSITION(9:16) DECIMAL(15,0) NULLIF (9:16) = X'0000000000000000',
x3 POSITION(17:20) INTEGER NULLIF (17:20) = X'80000000',
IDNTFR POSITION(21:40) NULLIF (21:40) = BLANKS,
IDNTFR_DTLS POSITION(41:240) NULLIF (41:240) = BLANKS,
FROM_DATE POSITION(241:259) DATE "YYYY-MM-DD HH24:MI:SS" NULLIF (241:259) = BLANKS,
TO_DATE POSITION(260:278) DATE "YYYY-MM-DD HH24:MI:SS" NULLIF (260:278) = BLANKS,
DATA_SOURCE_LKPCD POSITION(279:283) NULLIF (279:283) = BLANKS,
EFFECTIVE_DATE POSITION(284:302) DATE "YYYY-MM-DD HH24:MI:SS" NULLIF (284:302) = BLANKS,
REMARK POSITION(303:1302) NULLIF (303:1302) = BLANKS,
OPRTNL_FLAG POSITION(1303:1303) NULLIF (1303:1303) = BLANKS,
CREATED_BY POSITION(1304:1311) DECIMAL(15,0) NULLIF (1304:1311) = X'0000000000000000',
CREATED_DATE POSITION(1312:1330) DATE "YYYY-MM-DD HH24:MI:SS" NULLIF (1312:1330) = BLANKS,
MODIFIED_BY POSITION(1331:1338) DECIMAL(15,0) NULLIF (1331:1338) = X'0000000000000000',
MODIFIED_DATE POSITION(1339:1357) DATE "YYYY-MM-DD HH24:MI:SS" NULLIF (1339:1357) = BLANKS
)
- it occurs intermittently. When this job runs, no one will be accessing the database or the tables.
- When we do not run in parallel, we do not face the error, but it is very slow (obviously).

Just in case, I am also attaching the DataStage logs:
Item #: 466
Event ID: 1467
Timestamp: 2009-06-02 23:03:19
Type: Info
User Name: dsadm
Message: main_program: APT configuration file: /clu01/datastage/Ascential/DataStage/Configurations/default.apt
node "node1"
fastname "machine_name"
pools ""
resource disk "/clu01/datastage/Ascential/DataStage/Datasets" {pools ""}
resource scratchdisk "/clu01/datastage/Ascential/DataStage/Scratch" {pools ""}
node "node2"
fastname "machine_name"
pools ""
resource disk "/clu01/datastage/Ascential/DataStage/Datasets" {pools ""}
resource scratchdisk "/clu01/datastage/Ascential/DataStage/Scratch" {pools ""}
node "node3"
fastname "machine_name"
pools ""
resource disk "/clu01/datastage/Ascential/DataStage/Datasets" {pools ""}
resource scratchdisk "/clu01/datastage/Ascential/DataStage/Scratch" {pools ""}
node "node4"
fastname "machine_name"
pools ""
resource disk "/clu01/datastage/Ascential/DataStage/Datasets" {pools ""}
resource scratchdisk "/clu01/datastage/Ascential/DataStage/Scratch" {pools ""}
node "node5"
fastname "machine_name"
pools ""
resource disk "/clu01/datastage/Ascential/DataStage/Datasets" {pools ""}
resource scratchdisk "/clu01/datastage/Ascential/DataStage/Scratch" {pools ""}
node "node6"
fastname "machine_name"
pools ""
resource disk "/clu01/datastage/Ascential/DataStage/Datasets" {pools ""}
resource scratchdisk "/clu01/datastage/Ascential/DataStage/Scratch" {pools ""}
node "node7"
fastname "machine_name"
pools ""
resource disk "/clu01/datastage/Ascential/DataStage/Datasets" {pools ""}
resource scratchdisk "/clu01/datastage/Ascential/DataStage/Scratch" {pools ""}
node "node8"
fastname "machine_name"
pools ""
resource disk "/clu01/datastage/Ascential/DataStage/Datasets" {pools ""}
resource scratchdisk "/clu01/datastage/Ascential/DataStage/Scratch" {pools ""}
Item #: 467
Event ID: 1468
Timestamp: 2009-06-02 23:03:20
Type: Warning
User Name: dsadm
Message: main_program: Warning: the value of the PWD environment variable (/clu01/datastage/Ascential/DataStage/DSEngine) does not appear to be a synonym for the current working directory (/clu01/datastage/Ascential/DataStage/Projects/Production). The current working directory will be used, but if your ORCHESTRATE job does not start up correctly, you should set your PWD environment variable to a value that will work on all nodes of your system.
Item #: 468
Event ID: 1469
Timestamp: 2009-06-02 23:03:32
Type: Warning
User Name: dsadm
Message: Lkp_1: Input dataset 1 has a partitioning method other than entire specified; disabling memory sharing.
Item #: 469
Event ID: 1470
Timestamp: 2009-06-02 23:04:22
Type: Warning
User Name: dsadm
Message: Lkp_2: Input dataset 1 has a partitioning method other than entire specified; disabling memory sharing.
Item #: 470
Event ID: 1471
Timestamp: 2009-06-02 23:04:30
Type: Warning
User Name: dsadm
Message: Xfmer1: Input dataset 0 has a partitioning method other than entire specified; disabling memory sharing.
Item #: 471
Event ID: 1472
Timestamp: 2009-06-02 23:04:30
Type: Warning
User Name: dsadm
Message: Lkp_2: When checking operator: Operator of type "APT_LUTProcessOp": will partition despite the
preserve-partitioning flag on the data set on input port 0.
Item #: 472
Event ID: 1473
Timestamp: 2009-06-02 23:04:30
Type: Warning
User Name: dsadm
Message: SKey_1: When checking operator: A sequential operator cannot preserve the partitioning
of the parallel data set on input port 0.
Item #: 473
Event ID: 1474
Timestamp: 2009-06-02 23:04:30
Type: Warning
User Name: dsadm
Message: SKey_2: When checking operator: Operator of type "APT_GeneratorOperator": will partition despite the
preserve-partitioning flag on the data set on input port 0.
Item #: 474
Event ID: 1475
Timestamp: 2009-06-02 23:04:30
Type: Warning
User Name: dsadm
Message: buffer(1): When checking operator: Operator of type "APT_BufferOperator": will partition despite the
preserve-partitioning flag on the data set on input port 0.
Item #: 475
Event ID: 1476
Timestamp: 2009-06-02 23:04:30
Type: Info
User Name: dsadm
Message: Tgt_member: When checking operator: The -index rebuild option has been included; in order for this option to be
applicable and to work properly, the environment variable APT_ORACLE_LOAD_OPTIONS should contain the options
DIRECT and PARALLEL set to TRUE, and the option SKIP_INDEX_MAINTENANCE set to YES;
this variable has been set by the user to `OPTIONS(DIRECT=TRUE, PARALLEL=TRUE, SKIP_INDEX_MAINTENANCE=YES)'.
Item #: 476
Event ID: 1477
Timestamp: 2009-06-02 23:04:35
Type: Info
User Name: dsadm
Message: Tgt_member_idtfr: When checking operator: The -index rebuild option has been included; in order for this option to be
applicable and to work properly, the environment variable APT_ORACLE_LOAD_OPTIONS should contain the options
DIRECT and PARALLEL set to TRUE, and the option SKIP_INDEX_MAINTENANCE set to YES;
this variable has been set by the user to `OPTIONS(DIRECT=TRUE, PARALLEL=TRUE, SKIP_INDEX_MAINTENANCE=YES)'.
Item #: 477
Event ID: 1478
Timestamp: 2009-06-02 23:04:41
Type: Warning
User Name: dsadm
Message: Lkp_2,6: Ignoring duplicate entry at table record 1; no further warnings will be issued for this table
Item #: 478
Event ID: 1479
Timestamp: 2009-06-02 23:04:41
Type: Warning
User Name: dsadm
Message: Tgt_member_idtfr,0: SQL*Loader-951: Error calling once/load initialization
Item #: 479
Event ID: 1480
Timestamp: 2009-06-02 23:04:41
Type: Warning
User Name: dsadm
Message: Tgt_member_idtfr,0: ORA-00604: error occurred at recursive SQL level 1
Item #: 480
Event ID: 1481
Timestamp: 2009-06-02 23:04:41
Type: Warning
User Name: dsadm
Message: Tgt_member_idtfr,0: ORA-00054: resource busy and acquire with NOWAIT specified
Item #: 481
Event ID: 1482
Timestamp: 2009-06-02 23:04:41
Type: Warning
User Name: dsadm
Message: Tgt_member_idtfr,6: SQL*Loader-951: Error calling once/load initialization
Item #: 482
Event ID: 1483
Timestamp: 2009-06-02 23:04:41
Type: Warning
User Name: dsadm
Message: Tgt_member_idtfr,6: ORA-00604: error occurred at recursive SQL level 1
Item #: 483
Event ID: 1484
Timestamp: 2009-06-02 23:04:41
Type: Warning
User Name: dsadm
Message: Tgt_member_idtfr,6: ORA-00054: resource busy and acquire with NOWAIT specified
Item #: 484
Event ID: 1485
Timestamp: 2009-06-02 23:04:41
Type: Fatal
User Name: dsadm
Message: Tgt_member_idtfr,6: The call to sqlldr failed; the return code = 256;
please see the loader logfile: /clu01/datastage/Ascential/DataStage/Scratch/ora.23335.478434.6.log for details.
Item #: 485
Event ID: 1486
Timestamp: 2009-06-02 23:04:41
Type: Fatal
User Name: dsadm
Message: Tgt_member_idtfr,0: The call to sqlldr failed; the return code = 256;
please see the loader logfile: /clu01/datastage/Ascential/DataStage/Scratch/ora.23335.478434.0.log for details. -
SQL*Loader and DECODE function
Hi All,
I am loading data from data files into Oracle tables, and while loading the data using SQL*Loader the following requirement needs to be fulfilled:
1) If OQPR < 300, RB = $ 0-299, SC = "SC1"
2) If 300 < OQPR < 1200, RB = $ 300-1199, SC = "SC2"
3) If 1200 < OQPR < 3000, RB = $ 1200-2999, SC = "SC3"
4) If OQPR > 3000 USD, RB = > $3000, SC = "SC4"
Here OQPR is a field in the data file.
Can anyone suggest how we can handle this using the DECODE function? Triggers and PL/SQL functions are not to be used.
TIA.
Regards,
Ravi.

The following expression gives you different values for your different intervals and boundaries:
SIGN(:OQPR - 300) + SIGN(:OQPR - 1200) + SIGN(:OQPR - 3000) -
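Applied in the control file, that sum evaluates to -3, -1, 1 or 3 for the four intervals, which DECODE can then map to the SC values (RB would use the same pattern). A sketch, with the field names assumed from the post; the exact boundary values 300/1200/3000 make a SIGN term return 0, which the post leaves unspecified and would need DECODE entries of their own:

```
SC "DECODE(SIGN(:OQPR - 300) + SIGN(:OQPR - 1200) + SIGN(:OQPR - 3000),
          -3, 'SC1',
          -1, 'SC2',
           1, 'SC3',
           3, 'SC4')"
```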
Problem with SQL*Loader and different date formats in the same file
DB: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit
System: AIX 5.3.0.0
Hello,
I'm using SQL*Loader to import semi-colon separated values into a table. The files are delivered to us by a data provider who concatenates data from different sources and this results in us having different date formats within the same file. For example:
...;2010-12-31;22/11/1932;...
I load this data using the following lines in the control file:
EXECUTIONDATE1 TIMESTAMP NULLIF EXECUTIONDATE1=BLANKS "TO_DATE(:EXECUTIONDATE1, 'YYYY-MM-DD')",
DELDOB TIMESTAMP NULLIF DELDOB=BLANKS "TO_DATE(:DELDOB, 'DD/MM/YYYY')",
The relevant NLS parameters:
NLS_LANGUAGE=FRENCH
NLS_DATE_FORMAT=DD/MM/RR
NLS_DATE_LANGUAGE=FRENCH
If I load this file as-is, the values loaded into the table are 31 Dec 2010 and 22 Nov *2032*, even though the years are on 4 digits. If I change NLS_DATE_FORMAT to DD/MM/YYYY then the second date value is loaded correctly, but the first value is loaded as 31 Dec *2020*!!
How can I get both date values to load correctly?
Thanks!
Sylvain

This is very strange; after running a few tests I realized that if the year is 19XX then it will get loaded as 2019, and if it is 20XX then it will be 2020. I'm guessing it may have something to do with certain env variables that aren't set up properly, because I'm fairly sure my SQL*Loader control file is correct... I'll run more tests :-(
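A commonly suggested workaround (an assumption here, not confirmed in the thread): declare the fields as CHAR rather than TIMESTAMP, so the NLS date mask is never applied to the raw text and only the explicit TO_DATE masks run, one per field:

```
EXECUTIONDATE1 CHAR NULLIF EXECUTIONDATE1=BLANKS "TO_DATE(:EXECUTIONDATE1, 'YYYY-MM-DD')",
DELDOB         CHAR NULLIF DELDOB=BLANKS         "TO_DATE(:DELDOB, 'DD/MM/YYYY')",
```

This sidesteps the NLS_DATE_FORMAT setting entirely, which would explain the 2019/2020 behavior if a two-digit RR mask is being applied somewhere in the conversion chain.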
-
I'm setting up a SQL*Loader script and trying to use the DECODE function as referred to in 'Applying SQL Operators to Fields'. I'm getting the error message 'Token longer than max allowable length of 258 chars'. Is there a limit to the size of the DECODE statement within SQL*Loader, or is it better to use a table trigger to handle this on insert? I ran the DECODE statement as a SELECT through SQL*Plus and it works okay there. Oracle 8.0 Utilities shows an example of DECODE in Ch. 5, but Oracle 9i Utilities Ch. 6 does not. Has anyone done this, and what's the impact on the performance of the load if I can get it to work? See my example below:
LOAD DATA
INFILE 'e2e_prod_cust_profile.csv'
APPEND
INTO TABLE APPS.RA_CUSTOMER_PROFILES_INTERFACE
FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"'
(Insert_update_flag CHAR(1),
Orig_system_customer_ref CHAR(240),
customer_profile_class_name CHAR(30) NULLIF customer_profile_class=BLANKS
"decode(customer_profile_class_name,
'NORTHLAND Default','(MIA) Default',
'NORTHLAND Non Consolidated','(MIA) Non Cons',
'NORTHLAND Consolidated A','(MIA) Cons A',
'NORTHLAND Consolidated B','(MIA) Cons B',
'NORTHLAND Consolidated C','(MIA) Cons C',
'NORTHLAND Consolidated D','(MIA) Cons D',
'NORTHLAND Cons A NonZS','(MIA) Cons A NonZS',
'NORTHLAND Cons B NonZS','(MIA) Cons B NonZS',
'NORTHLAND Cons C NonZS','(MIA) Cons C NonZS',
'NORTHLAND Cons D NonZS','(MIA) Cons D NonZS',
'NORTHLAND International Billing','(MIA) International Billing',
customer_profile_class_name)",
credit_hold CHAR(1),
overall_credit_limit INTERGER EXTERNAL,
"e2e_cust_profile.ctl" 49 lines, 1855 characters
SQL*Loader-350: Syntax error at line 15.
Token longer than max allowable length of 258 chars
'NORTHLAND Consolidated D','(MIA) Cons D',
^

Your controlfile is incomplete and has some typos, but you could try something like:
create or replace function decode_profile_class_name (p_longname IN VARCHAR2)
return VARCHAR2
is
begin
CASE p_longname
WHEN 'NORTHLAND Default' THEN RETURN '(MIA) Default';
WHEN 'NORTHLAND Non Consolidated' THEN RETURN '(MIA) Non Cons';
WHEN 'NORTHLAND Consolidated A' THEN RETURN '(MIA) Cons A';
WHEN 'NORTHLAND Consolidated B' THEN RETURN '(MIA) Cons B';
WHEN 'NORTHLAND Consolidated C' THEN RETURN '(MIA) Cons C';
WHEN 'NORTHLAND Consolidated D' THEN RETURN '(MIA) Cons D';
WHEN 'NORTHLAND Cons A NonZS' THEN RETURN '(MIA) Cons A NonZS';
WHEN 'NORTHLAND Cons B NonZS' THEN RETURN '(MIA) Cons B NonZS';
WHEN 'NORTHLAND Cons C NonZS' THEN RETURN '(MIA) Cons C NonZS';
WHEN 'NORTHLAND Cons D NonZS' THEN RETURN '(MIA) Cons D NonZS';
WHEN 'NORTHLAND International Billing' THEN RETURN '(MIA) International Billing';
ELSE RETURN p_longname;
END CASE;
end;
LOAD DATA
INFILE 'e2e_prod_cust_profile.csv'
APPEND
INTO TABLE APPS.RA_CUSTOMER_PROFILES_INTERFACE
FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"'
(
Insert_update_flag CHAR(1),
Orig_system_customer_ref CHAR(240),
customer_profile_class_name CHAR(30) NULLIF customer_profile_class=BLANKS "decode_profile_class_name(:customer_profile_class_name)",
credit_hold CHAR(1),
overall_credit_limit INTEGER EXTERNAL
) -
SQL*Loader and HTMLDB_APPLICATION_FILES
Hello!
Can I use SQL*Loader to load data from a file stored in HTMLDB_APPLICATION_FILES as a BLOB into tables in the database? The files are always CSV in my case.
Best regards,
Tom

Hello Maxim!
Of course the files are stored in HTMLDB_APPLICATION_FILES as BLOBs, and of course I can do 'select blob_content from htmldb.....' - BUT I have to get the text from this BLOB, and copy certain words from the BLOB, NOT the BLOB itself.
Example: I have a file.csv. I upload it to HTMLDB_APPLICATION_FILES, so it is stored there as a BLOB. Now I want to copy the data, delimited by ';' (for example), to a table in the database.
Any ideas?
Best regards,
Tom