SQL*Loader-510
I am loading a file with some BLOBs. Most of the data seems to have loaded ok but I am now getting this error:
SQL*Loader-510: Physical record in data file
(c:\Sheets_2005.dat) is longer than the maximum(20971520)
The ctl file was auto-generated by Migration Workbench... I have added the options in:
options (BINDSIZE=20971520, READSIZE=20971520)
load data
infile 'c:\sheets_2005.dat' "str '<EORD>'"
append
into table SHEETS
fields terminated by '<EOFD>'
trailing nullcols
(REFNO,
SHEETNO,
DETAIL CHAR(100000000),
MESSAGE,
SIZE_)
Any ways around this error?
Thanks
Hello,
Can you tell me which plugin you are using?
Option #1
Cause: From the error message it appears that the data file has a physical record that is too long.
If that is the case, try changing the length of the column (the problem is most likely at the BLOB/CLOB column). Also try using CONCATENATE or CONTINUEIF, or break up the physical records.
Option #2
If you are using the SQL Server or Sybase plugin, this workaround may work:
Cause: The exported binary data may be a bit too big, so it needs to be converted to HEX format. The resulting HEX data can then be saved into a CLOB column.
The task is split into 4 sub tasks
1. CREATE A TABLESPACE TO HOLD ALL THE LOB DATA
--log into your system schema and create a tablespace
--Create a new tablespace for the CLOB and BLOB column
--You may resize this to fit your data.
--Remember that we save the data once as CLOB and then as BLOB
--create tablespace lob_tablespace datafile 'lob_tablespace' SIZE 1000M AUTOEXTEND ON NEXT 50M;
2. LOG INTO YOUR TABLE SCHEMA IN ORACLE
--Modify this script to fit your requirements
--START.SQL (this script will do the following tasks)
~~Modify your current schema so that it can accept HEX data
~~Modify your current schema so that it can hold that huge amount of data.
~~Modify the new tablespace to suit your requirements [can be estimated based on the size of the blobs/clobs and the number of rows]
~~Disable triggers, indexes & primary keys on tblfiles
3. DATA MOVE: The data move now involves moving the HEX data in the .dat files to a CLOB.
--The START.SQL script adds a new column to <tablename> called <blob_column>_CLOB. This is where the HEX values will be stored.
--MODIFY YOUR CONTROL FILE TO LOOK LIKE THIS
~~load data
~~infile '<tablename>.dat' "str '<er>'"
~~into table <tablename>
~~fields terminated by '<ec>'
~~trailing nullcols
~~(
~~ <blob_column>_CLOB CHAR(200000000)
~~)
The important part being "_CLOB" appended to your BLOB column name and the datatype set to CHAR(200000000)
--RUN sql_loader_script.bat
--log into your schema to check if the data was loaded successfully
--now you can see that the hex values were sent to the CLOB column
--SQL> select dbms_lob.getlength(<blob_column>),dbms_lob.getlength(<blob_column>_clob) from <tablename>;
4. LOG INTO YOUR SCHEMA
--Run FINISH.SQL. This script will do the following tasks:
~~Creates the procedure needed to perform the CLOB to BLOB transformation
~~Executes the procedure (this may take some time, e.g. when 500MB has to be converted to BLOB)
~~Alters the table back to its original form (removes the <blob_column>_clob)
~~Enables the triggers, indexes and primary keys
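For reference, here is a hypothetical sketch of the CLOB-to-BLOB conversion that FINISH.SQL presumably performs; the table and column names are placeholders for the `<tablename>`/`<blob_column>` placeholders above, not the actual script.

```sql
-- Hypothetical sketch of the CLOB-to-BLOB conversion step.
-- tablename, blob_column and blob_column_clob are placeholders.
DECLARE
  v_blob  BLOB;
  v_len   PLS_INTEGER;
  v_pos   PLS_INTEGER;
  v_chunk VARCHAR2(32766);
  v_raw   RAW(16383);
  c_step  CONSTANT PLS_INTEGER := 32766;  -- even: 2 hex chars = 1 byte
BEGIN
  FOR r IN (SELECT ROWID AS rid, blob_column_clob
              FROM tablename
             WHERE blob_column_clob IS NOT NULL) LOOP
    DBMS_LOB.CREATETEMPORARY(v_blob, TRUE);
    v_len := DBMS_LOB.GETLENGTH(r.blob_column_clob);
    v_pos := 1;
    WHILE v_pos <= v_len LOOP
      v_chunk := DBMS_LOB.SUBSTR(r.blob_column_clob, c_step, v_pos);
      v_raw   := HEXTORAW(v_chunk);                    -- hex text -> binary
      DBMS_LOB.WRITEAPPEND(v_blob, UTL_RAW.LENGTH(v_raw), v_raw);
      v_pos := v_pos + c_step;
    END LOOP;
    UPDATE tablename SET blob_column = v_blob WHERE ROWID = r.rid;
    DBMS_LOB.FREETEMPORARY(v_blob);
  END LOOP;
  COMMIT;
END;
/
```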
Good luck
Srinivas Nandavanam
Similar Messages
-
If I generate loader / Insert script from Raptor, it's not working for Clob columns.
I am getting error:
SQL*Loader-510: Physical record in data file (clob_table.ldr) is longer than the maximum(1048576)
What's the solution?
Regards,
Hi,
Has the file been somehow changed by copying it between Windows and Unix? Or was a file transfer done as binary rather than ASCII? That is the most common cause of your problem: the end-of-line carriage return characters have been changed so they are no longer \r\n. Could this have happened? Can you open the file in a good editor, or run an od command in Unix, to see what is actually present?
Regards,
Harry
http://dbaharrison.blogspot.co.uk/ -
Encounter SQL*Loader-510 error
When trying to load data from a flat file into Oracle (the size is around 6 million rows), the error below was thrown.
SQL*Loader-510: Physical record in data file (yadayadayada) is longer than the maximum(1048576)
SQL*Loader-2026: the load was aborted because SQL Loader cannot continue.
Anyone have an idea about this? Thanks.
Message was edited by:
user588299
Hi,
I think the error occurs if the target table column length is smaller than what you are sending from the control file.
There is also an option, 'reject unlimited', that you can use so that even if errors occur while loading the file through SQL*Loader it will skip that record and carry on with the next.
That way the load shouldn't get aborted.
You can also check the log file, which records these bad records.
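A minimal sketch of raising the error tolerance: note that in SQL*Loader itself the knob is the ERRORS option (REJECT LIMIT UNLIMITED is the external-table counterpart); the limit value and names below are illustrative only.

```sql
-- Sketch: keep loading past rejected records, up to an illustrative limit.
OPTIONS (ERRORS=100000)
LOAD DATA
INFILE 'data.dat'
APPEND
INTO TABLE target_table
FIELDS TERMINATED BY ','
(col1, col2)
```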
Thanks -
Sql loader unable to read from pipe
Hi All:
I'm using a named pipe along with Oracle SQL*Loader to load some 20 million rows into the database.
The source of the pipe is a Java application which writes to the pipe using a simple FileOutputStream.
It can be observed that Oracle SQL*Loader often has to wait for the Java application to produce enough data for loading.
The waiting is fine. However, Oracle SQL*Loader always exits after loading about 1 million rows, with output like:
SQL*Loader-501: Unable to read file (upipe.dat)
SQL*Loader-560: error reading file
SQL*Loader-2026: the load was aborted because SQL Loader cannot continue.
And in this case, the Java will throw IOException with information:
Exception in thread "main" java.io.IOException: Broken pipe
at java.io.FileOutputStream.writeBytes(Native Method)
at java.io.FileOutputStream.write(FileOutputStream.java:284)
It runs on Linux environment with 11g database.
Any idea why this happens?
Check
SQLLDR NOT LOADING ALL DATA IN DAT FILE : SQL*Loader-510/SQL*Loader-2026 [ID 741100.1] -
Sql Loader Data Import Error!!!
Hi,
I have to import a huge volume of records from a CSV (150MB) file into a table using SQL*Loader. The input file contains 3,286,909 records. I am using the following control file and command to run SQL*Loader.
Control File:
load data
infile Test.csv
into table TEST_LOAD
fields terminated by ',' optionally enclosed by '"'
(ID integer external,
PATH char)
*Command:* --------------
C:\CSVFiles\CSV>sqlldr system/tiger control=test.ctl log=test.log readsize=200000000 bindsize=200000000
After running the above command I am only able to import 1,215,717 records. Once SQL*Loader reaches this limit it stops without any error. Sometimes I get 'SQL*Loader-510: Physical record in data file (test.csv) is longer than the maximum(1048576)'.
Please help me to perform this import operation.
Thanks
http://www.morganslibrary.org/reference/externaltab.html
Example:
I have a file on my server in a folder c:\mydata called test.csv, which is a comma separated file...
1,"Fred",200
2,"Bob",300
3,"Jim",50
As sys user:
CREATE OR REPLACE DIRECTORY TEST_DIR AS 'c:\mydata';
GRANT READ, WRITE ON DIRECTORY TEST_DIR TO myuser;
Note: this creates a directory object pointing to a directory on the server; that directory must already exist on the server (the statement doesn't create the physical directory).
As myuser:
SQL> CREATE TABLE ext_test
2 (id NUMBER,
3 empname VARCHAR2(20),
4 rate NUMBER)
5 ORGANIZATION EXTERNAL
6 (TYPE ORACLE_LOADER
7 DEFAULT DIRECTORY TEST_DIR
8 ACCESS PARAMETERS
9 (RECORDS DELIMITED BY NEWLINE
10 FIELDS TERMINATED BY ","
11 OPTIONALLY ENCLOSED BY '"'
12 (id,
13 empname,
14 rate
15 )
16 )
17 LOCATION ('test.csv')
18 );
Table created.
SQL> select * from ext_test;
ID EMPNAME RATE
1 Fred 200
2 Bob 300
3 Jim 50
SQL>
-
Loading spatial data by sql *loader
hi there
i have a load_kat_opcina.ctl file from which i should load spatial data into my 10g db table.
load_kat_opcina.ctl is as shown below:
LOAD DATA
INFILE *
REPLACE
CONTINUEIF NEXT(1:1) = '#'
INTO TABLE KAT_OPCINA
FIELDS TERMINATED BY '|' OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(KO_MBR NULLIF KO_MBR=BLANKS,
KO_SIFRA NULLIF KO_SIFRA=BLANKS,
KO_NAZIV NULLIF KO_NAZIV=BLANKS,
KO_ID NULLIF KO_ID=BLANKS,
ID NULLIF ID=BLANKS,
is_null1 FILLER CHAR,
POVRSINA COLUMN OBJECT NULLIF is_null1='E'
( sdo_gtype INTEGER EXTERNAL,
sdo_srid INTEGER EXTERNAL NULLIF POVRSINA.sdo_srid=BLANKS,
SDO_POINT COLUMN OBJECT NULLIF is_null1='C'
( X INTEGER EXTERNAL,
Y INTEGER EXTERNAL,
Z INTEGER EXTERNAL NULLIF POVRSINA.SDO_POINT.Z=BLANKS),
SDO_ELEM_INFO VARRAY terminated by ';' NULLIF is_null1='P'
(SDO_ORDINATES INTEGER EXTERNAL),
SDO_ORDINATES VARRAY terminated by ':' NULLIF is_null1='P'
(SDO_ORDINATES INTEGER EXTERNAL)))
BEGINDATA
0|426|MARKU[EVEC|314717|6789094|
0|3131|VURNOVEC|16605787|6789097|
#C|2003|||||1|1005|3|1|2|1|169|......|5589490440|5082192250:
0|3034|\UR\EKOVEC|16225011|6789100|
0|35|^EHI|12297784|6789190|
#C|2003|||||1|1005|2|1|2|1|239|....|5574944600|5064714553:
0|221|ODRANSKI OBRE@|12441649|6789193|
0|353|TRPUCI|14071974|6789199|
i have deleted most of the data here to save space.
i call sql *loader from winxp command prompt as follows:
SQLLDR CONTROL=C:\temp\load_kat_opcina.ctl, USERID=username/pswrd@sid, LOG=logfile.log, BAD=baz.bad, DISCARD=toss.dsc
after executing the command, table 'kat_opcina' is not filled with data from this .ctl file.
the following is the content of the log file:
SQL*Loader: Release 10.2.0.1.0 - Production on Sri Svi 31 14:20:28 2006
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Control File: C:\TEMP\load_kat_opcina.ctl
Data File: C:\TEMP\load_kat_opcina.ctl
Bad File: C:\TEMP\baz.bad
Discard File: C:\TEMP\toss.dsc
(Allow all discards)
Number to load: ALL
Number to skip: 0
Errors allowed: 50
Bind array: 64 rows, maximum of 256000 bytes
Continuation: 1:1 = 0X23(character '#'), in next physical record
Path used: Conventional
Table KAT_OPCINA, loaded from every logical record.
Insert option in effect for this table: REPLACE
TRAILING NULLCOLS option in effect
Column Name Position Len Term Encl Datatype
KO_MBR FIRST * | O(") CHARACTER
NULL if KO_MBR = BLANKS
KO_SIFRA NEXT * | O(") CHARACTER
NULL if KO_SIFRA = BLANKS
KO_NAZIV NEXT * | O(") CHARACTER
NULL if KO_NAZIV = BLANKS
KO_ID NEXT * | O(") CHARACTER
NULL if KO_ID = BLANKS
ID NEXT * | O(") CHARACTER
NULL if ID = BLANKS
IS_NULL1 NEXT * | O(") CHARACTER
(FILLER FIELD)
POVRSINA DERIVED * COLUMN OBJECT
NULL if IS_NULL1 = 0X45(character 'E')
*** Fields in POVRSINA
SDO_GTYPE NEXT * | O(") CHARACTER
SDO_SRID NEXT * | O(") CHARACTER
NULL if POVRSINA.SDO_SRID = BLANKS
SDO_POINT DERIVED * COLUMN OBJECT
NULL if IS_NULL1 = 0X43(character 'C')
*** Fields in POVRSINA.SDO_POINT
X NEXT * | O(") CHARACTER
Y NEXT * | O(") CHARACTER
Z NEXT * | O(") CHARACTER
NULL if POVRSINA.SDO_POINT.Z = BLANKS
*** End of fields in POVRSINA.SDO_POINT
SDO_ELEM_INFO DERIVED * ; VARRAY
NULL if IS_NULL1 = 0X50(character 'P')
*** Fields in POVRSINA.SDO_ELEM_INFO
SDO_ORDINATES FIRST * | O(") CHARACTER
*** End of fields in POVRSINA.SDO_ELEM_INFO
SDO_ORDINATES DERIVED * : VARRAY
NULL if IS_NULL1 = 0X50(character 'P')
*** Fields in POVRSINA.SDO_ORDINATES
SDO_ORDINATES FIRST * | O(") CHARACTER
*** End of fields in POVRSINA.SDO_ORDINATES
*** End of fields in POVRSINA
Record 1: Rejected - Error on table KAT_OPCINA.
ORA-29875: failed in the execution of the ODCIINDEXINSERT routine
ORA-13365: layer SRID does not match geometry SRID
ORA-06512: at "MDSYS.SDO_INDEX_METHOD_10I", line 623
ORA-06512: at "MDSYS.SDO_INDEX_METHOD_10I", line 227
Record 2: Rejected - Error on table KAT_OPCINA.
ORA-29875: failed in the execution of the ODCIINDEXINSERT routine
ORA-13365: layer SRID does not match geometry SRID
ORA-06512: at "MDSYS.SDO_INDEX_METHOD_10I", line 623
ORA-06512: at "MDSYS.SDO_INDEX_METHOD_10I", line 227
Record 33: Rejected - Error on table KAT_OPCINA.
ORA-29875: failed in the execution of the ODCIINDEXINSERT routine
ORA-13365: layer SRID does not match geometry SRID
ORA-06512: at "MDSYS.SDO_INDEX_METHOD_10I", line 623
ORA-06512: at "MDSYS.SDO_INDEX_METHOD_10I", line 227
SQL*Loader-510: Physical record in data file (C:\TEMP\load_kat_opcina.ctl) is longer than the maximum(65536)
SQL*Loader-2026: the load was aborted because SQL Loader cannot continue.
Specify SKIP=33 when continuing the load.
Table KAT_OPCINA:
0 Rows successfully loaded.
33 Rows not loaded due to data errors.
0 Rows not loaded because all WHEN clauses were failed.
0 Rows not loaded because all fields were null.
Space allocated for bind array: 215168 bytes(64 rows)
Read buffer bytes: 1048576
Total logical records skipped: 0
Total logical records read: 33
Total logical records rejected: 33
Total logical records discarded: 0
Run began on Sri Svi 31 14:20:28 2006
Run ended on Sri Svi 31 14:20:32 2006
Elapsed time was: 00:00:04.51
CPU time was: 00:00:00.26
the error messages are all the same for record numbers 3-32.
so, i'd like to know what i am doing wrong that the table cannot be filled with data using sql*loader.
also, i would like to know if there's another way of loading data into the table from a .ctl file (maybe using some other tool).
appreciate any help
thanks
Hi,
You receive:
ORA-29875: failed in the execution of the ODCIINDEXINSERT routine
ORA-13365: layer SRID does not match geometry SRID
Have you created a spatial index for column POVRSINA? I guess yes, and you created it with a non-NULL SRID value. So, ORA-13365 means that you are trying to insert spatial data with an SRID that is not the same as the SRID defined in the spatial index.
Check index SRID and your data SRID, they must be the same. Or, you can disable spatial index.
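A sketch of those checks; the spatial index name below is a placeholder, and dropping/recreating the index is one common way to "disable" a domain index for a load.

```sql
-- Sketch of the suggested SRID checks (index name is a placeholder).
-- 1) SRID registered for the layer:
SELECT srid FROM user_sdo_geom_metadata
 WHERE table_name = 'KAT_OPCINA' AND column_name = 'POVRSINA';
-- 2) SRID carried by the geometries already loaded:
SELECT DISTINCT t.povrsina.sdo_srid FROM kat_opcina t;
-- 3) If they differ, fix one side, or drop the spatial index for the
--    load and recreate it afterwards:
DROP INDEX kat_opcina_sidx;
-- ... run SQL*Loader ...
CREATE INDEX kat_opcina_sidx ON kat_opcina(povrsina)
  INDEXTYPE IS MDSYS.SPATIAL_INDEX;
```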
Andrejus -
hi, i am using SQL*Loader to insert the data but it is giving the following problem:
SQL*Loader-510: Physical record in data file (D:\LucyData\18\trialv7.ctl) is longer than the maximum(65536)
SQL*Loader-2026: the load was aborted because SQL Loader cannot continue.
thanks in advance
Edited by: user13340372 on Feb 17, 2011 10:25 PM
Hi
Do you get some lines like the one below before the error message? If so, please check your data file; you probably have an inconsistent record (or records) in it.
Commit point reached - logical record count 59
Cheers
Kanchana. -
Multibyte character error... (SQL*Loader)
Hi,
I am getting an error while loading data via SQL*Loader:
"Multibyte character error." while loading data from flat files coming from a mainframe into Oracle 10g Rel2 with character set AL32UTF8.
here is my .ctl loader file
OPTIONS (ERRORS=9999, ROWS=500, BINDSIZE=65536, SILENT=(FEEDBACK) )
LOAD DATA
APPEND
INTO TABLE GLSTB270
( GLS27001_CONUMBER POSITION(0001:0011)
, GLS27002_STORE POSITION(0012:0012)
, GLS27003_STATUS POSITION(0013:0013)
, GLS27004_CUST_TYPE POSITION(0014:0014)
, GLS27005_EXTERN POSITION(0015:0025)
, GLS27006_ADD_DATE POSITION(0026:0039) DATE "yyyymmddhh24miss"
, GLS27007_EXTCUST POSITION(0040:0071)
, GLS27008_LPICKCHRO POSITION(0072:0073)
, GLS27009_LAST_ITEM POSITION(0074:0075)
, GLS27010_DLVADD1 POSITION(0076:0107)
, GLS27011_DLVADD2 POSITION(0108:0139)
, GLS27012_DLVADD3 POSITION(0140:0171)
, GLS27013_DLVPOSTAL POSITION(0172:0181)
, GLS27014_DLVCOUNTY POSITION(0182:0213)
, GLS27015_DLVCNTRY POSITION(0214:0215)
, GLS27016_SPECADD POSITION(0216:0216)
, GLS27017_GROUPING POSITION(0217:0217)
, GLS27018_CO_TYPE POSITION(0218:0218)
, GLS27019_QUOTATION POSITION(0219:0226) DATE "yyyymmdd"
NULLIF (GLS27019_QUOTATION = "00000000")
, GLS27020_USHIP POSITION(0227:0227)
, GLS27021_CONFIRM POSITION(0228:0228)
, GLS27022_UNUDEMAND POSITION(0229:0229)
, GLS27023_FREECHARG POSITION(0230:0230)
, GLS27024_CONF_DATE POSITION(0231:0238) DATE "yyyymmdd"
NULLIF (GLS27024_CONF_DATE = "00000000")
, GLS27025_CONTACT POSITION(0239:0270)
, GLS27026_LICENCE POSITION(0271:0290)
, GLS27027_WARRANT POSITION(0291:0291)
, GLS27028_WARR_AUTH POSITION(0292:0301)
, GLS27029_CURRENCY POSITION(0302:0304)
, GLS27030_FSE POSITION(0305:0310)
, GLS27031_CARRIER POSITION(0311:0320)
, GLS27032_MANPRICIN POSITION(0321:0321)
, GLS27033_ADD_USER POSITION(0322:0329)
, GLS27034_AUTO_INV POSITION(0330:0330)
, GLS27035_PRIFACT POSITION(0331:0338)
, GLS27036_CRELETTER POSITION(0339:0353)
, GLS27037_SHIPMENT POSITION(0354:0354)
, GLS27038_DIVISION POSITION(0355:0356)
, GLS27039_ACCREF POSITION(0357:0365)
, GLS27040_EXPENSE POSITION(0366:0366)
, GLS27041_ALREADY POSITION(0367:0367)
, GLS27042_SITE POSITION(0368:0375)
, GLS27043_SITE_DES POSITION(0376:0395)
, GLS27044_ADDTYPE POSITION(0396:0396)
, GLS27045_PROJECT POSITION(0397:0406)
, GLS27046_SITE_DOWN POSITION(0407:0407)
, GLS27047_QUOTATION POSITION(0408:0408)
, GLS27048_DELIVERY POSITION(0409:0428)
, GLS27049_CONSPERM POSITION(0429:0429)
, GLS27050_CHARACT POSITION(0430:0432)
, GLS27051_CONTRACT POSITION(0433:0434)
, GLS27052_FSE POSITION(0435:0435)
, GLS27053_SYSTEM POSITION(0436:0445)
, GLS27054_SYSTEM_D POSITION(0446:0465)
, GLS27055_JOBSTATUS POSITION(0466:0468)
, GLS27056_BO_L_CHRO POSITION(0469:0470)
, GLS27057_BUYER POSITION(0471:0480)
, GLS27058_SCREASON POSITION(0481:0481)
, GLS27059_L_M_DATE POSITION(0482:0495) DATE "yyyymmddhh24miss"
, GLS27061_L_M_USER POSITION(0496:0503)
, GLS27062_SCREEN POSITION(0504:0507)
, GLS27063_CUST_EXP POSITION(0508:0508)
, GLS270F1_GLS08001 POSITION(0509:0509)
, GLS270F2_GLS08002 POSITION(0510:0511)
, GLS270F3_GLS25001 POSITION(0512:0512)
, GLS270F4_GLS25002 POSITION(0513:0522)
)
and here is the .log file containing the error messages from SQL*Loader:
SQL*Loader: Release 9.2.0.8.0 - Production on Thu Apr 5 15:35:21 2007
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Control File: /opt/oracle/test/admin/glsdbt01/load/glstb270.ctl
Data File: /opt/oracle/test/admin/glsdbt01/download2/GLSTB270.ZZ.CRE
Bad File: /dblog02/glsdbt01/load/results/glsltb270zz.bad
Discard File: /dblog02/glsdbt01/load/results/glsltb270zz.dis
(Allow all discards)
Number to load: ALL
Number to skip: 0
Errors allowed: 9999
Continuation: none specified
Path used: Direct
Silent options: FEEDBACK
Table GLSTB270, loaded from every logical record.
Insert option in effect for this table: APPEND
Column Name Position Len Term Encl Datatype
GLS27001_CONUMBER 1:11 11 CHARACTER
GLS27002_STORE 12:12 1 CHARACTER
GLS27003_STATUS 13:13 1 CHARACTER
GLS27004_CUST_TYPE 14:14 1 CHARACTER
GLS27005_EXTERN 15:25 11 CHARACTER
GLS27006_ADD_DATE 26:39 14 DATE yyyymmddhh24miss
GLS27007_EXTCUST 40:71 32 CHARACTER
GLS27008_LPICKCHRO 72:73 2 CHARACTER
GLS27009_LAST_ITEM 74:75 2 CHARACTER
GLS27010_DLVADD1 76:107 32 CHARACTER
GLS27011_DLVADD2 108:139 32 CHARACTER
GLS27012_DLVADD3 140:171 32 CHARACTER
GLS27013_DLVPOSTAL 172:181 10 CHARACTER
GLS27014_DLVCOUNTY 182:213 32 CHARACTER
GLS27015_DLVCNTRY 214:215 2 CHARACTER
GLS27016_SPECADD 216:216 1 CHARACTER
GLS27017_GROUPING 217:217 1 CHARACTER
GLS27018_CO_TYPE 218:218 1 CHARACTER
GLS27019_QUOTATION 219:226 8 DATE yyyymmdd
NULL if GLS27019_QUOTATION = 0X3030303030303030(character '00000000')
GLS27020_USHIP 227:227 1 CHARACTER
GLS27021_CONFIRM 228:228 1 CHARACTER
GLS27022_UNUDEMAND 229:229 1 CHARACTER
GLS27023_FREECHARG 230:230 1 CHARACTER
GLS27024_CONF_DATE 231:238 8 DATE yyyymmdd
NULL if GLS27024_CONF_DATE = 0X3030303030303030(character '00000000')
GLS27025_CONTACT 239:270 32 CHARACTER
GLS27026_LICENCE 271:290 20 CHARACTER
GLS27027_WARRANT 291:291 1 CHARACTER
GLS27028_WARR_AUTH 292:301 10 CHARACTER
GLS27029_CURRENCY 302:304 3 CHARACTER
GLS27030_FSE 305:310 6 CHARACTER
GLS27031_CARRIER 311:320 10 CHARACTER
GLS27032_MANPRICIN 321:321 1 CHARACTER
GLS27033_ADD_USER 322:329 8 CHARACTER
GLS27034_AUTO_INV 330:330 1 CHARACTER
GLS27035_PRIFACT 331:338 8 CHARACTER
GLS27036_CRELETTER 339:353 15 CHARACTER
GLS27037_SHIPMENT 354:354 1 CHARACTER
GLS27038_DIVISION 355:356 2 CHARACTER
GLS27039_ACCREF 357:365 9 CHARACTER
GLS27040_EXPENSE 366:366 1 CHARACTER
GLS27041_ALREADY 367:367 1 CHARACTER
GLS27042_SITE 368:375 8 CHARACTER
GLS27043_SITE_DES 376:395 20 CHARACTER
GLS27044_ADDTYPE 396:396 1 CHARACTER
GLS27045_PROJECT 397:406 10 CHARACTER
GLS27046_SITE_DOWN 407:407 1 CHARACTER
GLS27047_QUOTATION 408:408 1 CHARACTER
GLS27048_DELIVERY 409:428 20 CHARACTER
GLS27049_CONSPERM 429:429 1 CHARACTER
GLS27050_CHARACT 430:432 3 CHARACTER
GLS27051_CONTRACT 433:434 2 CHARACTER
GLS27052_FSE 435:435 1 CHARACTER
GLS27053_SYSTEM 436:445 10 CHARACTER
GLS27054_SYSTEM_D 446:465 20 CHARACTER
GLS27055_JOBSTATUS 466:468 3 CHARACTER
GLS27056_BO_L_CHRO 469:470 2 CHARACTER
GLS27057_BUYER 471:480 10 CHARACTER
GLS27058_SCREASON 481:481 1 CHARACTER
GLS27059_L_M_DATE 482:495 14 DATE yyyymmddhh24miss
GLS27061_L_M_USER 496:503 8 CHARACTER
GLS27062_SCREEN 504:507 4 CHARACTER
GLS27063_CUST_EXP 508:508 1 CHARACTER
GLS270F1_GLS08001 509:509 1 CHARACTER
GLS270F2_GLS08002 510:511 2 CHARACTER
GLS270F3_GLS25001 512:512 1 CHARACTER
GLS270F4_GLS25002 513:522 10 CHARACTER
Record 20405: Rejected - Error on table GLSTB270, column GLS27043_SITE_DES.
Multibyte character error.
Record 20418: Rejected - Error on table GLSTB270, column GLS27043_SITE_DES.
Multibyte character error.
Record 20419: Rejected - Error on table GLSTB270, column GLS27043_SITE_DES.
Multibyte character error.
Record 20420: Rejected - Error on table GLSTB270, column GLS27043_SITE_DES.
Multibyte character error.
Record 20425: Rejected - Error on table GLSTB270, column GLS27043_SITE_DES.
Multibyte character error.
Record 20426: Rejected - Error on table GLSTB270, column GLS27043_SITE_DES.
Multibyte character error.
Record 20436: Rejected - Error on table GLSTB270, column GLS27043_SITE_DES.
Multibyte character error.
Record 20452: Rejected - Error on table GLSTB270, column GLS27043_SITE_DES.
Multibyte character error.
Record 20481: Rejected - Error on table GLSTB270, column GLS27043_SITE_DES.
Multibyte character error.
Record 20482: Rejected - Error on table GLSTB270, column GLS27043_SITE_DES.
Multibyte character error.
Record 20483: Rejected - Error on table GLSTB270, column GLS27043_SITE_DES.
Multibyte character error.
Record 20484: Rejected - Error on table GLSTB270, column GLS27043_SITE_DES.
Multibyte character error.
Record 20485: Rejected - Error on table GLSTB270, column GLS27043_SITE_DES.
Multibyte character error.
Record 20486: Rejected - Error on table GLSTB270, column GLS27043_SITE_DES.
Multibyte character error.
Record 20487: Rejected - Error on table GLSTB270, column GLS27043_SITE_DES.
Multibyte character error.
Record 20494: Rejected - Error on table GLSTB270, column GLS27043_SITE_DES.
Multibyte character error.
Record 20499: Rejected - Error on table GLSTB270, column GLS27043_SITE_DES.
Multibyte character error.
Record 20502: Rejected - Error on table GLSTB270, column GLS27043_SITE_DES.
Multibyte character error.
Record 20503: Rejected - Error on table GLSTB270, column GLS27043_SITE_DES.
Multibyte character error.
Can you please help?
thanks
Hi Werner,
on my linux desktop:
$ file test.dat
test.dat: UTF-8 Unicode text, with very long lines
my colleague is working on a windows system.
On both systems exactly the same error from SQL*Loader.
Btw, we tried with different numbers of special characters (German umlauts and the euro sign), and there is no way to load without the error
when there are too many special characters, or when the data is as long as the column length and special characters are included.
Regards
Michael -
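One hedged workaround for this kind of multibyte rejection is to declare the data file's character set (and, where your release supports it, character-length semantics) in the control file, so SQL*Loader converts the data rather than splitting multibyte characters against byte-sized columns. The character set named below is an assumption about the mainframe extract, not a known fact, and the fragment shows only one field from the thread's log.

```sql
-- Sketch only: the real source encoding must be confirmed, and
-- LENGTH SEMANTICS support depends on the SQL*Loader release.
LOAD DATA
CHARACTERSET WE8ISO8859P1   -- assumption: the extract's actual encoding
LENGTH SEMANTICS CHAR       -- size checks in characters, not bytes
APPEND
INTO TABLE GLSTB270
( GLS27043_SITE_DES POSITION(376:395) CHAR
)
```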
Product: ORACLE SERVER
Date written: 2002-04-25
Advanced SQL*LOADER
====================
PURPOSE
This note answers common SQL*Loader usage questions in Q&A form.
Explanation
1) How to load carriage returns, linefeeds and EOL characters
2) When using delimiters, can a specific field be skipped?
3) How to resolve the message "FIELD IN DATA FILE EXCEEDED MAXIMUM SPECIFIED LENGTH"
4) How are BLOB or raw data handled?
5) How is error LDR-510 handled?
6) How is an EBCDIC character-set data file loaded?
7) SQL*Loader-266 occurs even though the WE8EBCDIC500 character set is specified in the control file
8) How is decimal data loaded?
9) How are numbers with trailing signs loaded?
10) What is a zoned number?
11) What is a packed decimal number?
12) How is NULLIF used?
13) Can multiple NULLIFs be used for a single column?
14) How can a column that SQL*Loader treats as NULL be treated as blank spaces instead?
15) How is the WHEN clause used?
16) Can OR be used in a WHEN clause?
17) What if ',' is used as the delimiter and the data itself contains ','?
18) How can commits be made less frequent?
1) Loading carriage returns, linefeeds and EOL characters
SQL*Loader treats these as separators between physical records.
The options used to load them are:
- CONCATENATE
- CONTINUEIF
- file processing option: FIX (PRE: 1012555.6 PRS: 2060647.6)
- file processing option: VAR (PRE: 1011372.6 PRS: 2059405.6)
Example 1) using CONCATENATE
Control File: test.ctl
load data
infile 'test.dat'
truncate
concatenate (2)
into table test
(col1 char(2000))
Data File: 'test.dat'
This is a test \n
this is the second Line.
This is a new record\n
this is the second line of the second record
Result:
SQL> select * from test;
COL1
This is a test \nthis is the second Line.
This is a new record\nthis is the second line of the second record
Example 2) using CONTINUEIF
Control File: test.ctl
load data
infile test.dat
truncate
continueif this (1) = '*'
into table test
(col1 char(64000))
Data File: 'test.dat'
*11111111111
*22222222222
*33333333333
*44444444444
*55555555555
66666666666
*77777777777
*88888888888
999999999999
*aaaaaaaaaaa
*bbbbbbbbbbb
cccccccccccc
*ddddddddddd
*eeeeeeeeeee
*fffffffffff
*ggggggggggg
*hhhhhhhhhhh
Result:
SQL> select * from test;
COL1
11111111111222222222223333333333344444444444555555555556666666666
777777777778888888888899999999999
aaaaaaaaaaabbbbbbbbbbbccccccccccc
dddddddddddeeeeeeeeeeefffffffffffggggggggggghhhhhhhhhhh
2) When using delimiters, can a specific field be skipped?
: No, it cannot. (RTSS Bulletin: 103235.426)
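(Note that this answer reflects the note's era: in later SQL*Loader releases a FILLER field can skip a delimited column. A minimal sketch, with placeholder names:)

```sql
-- Sketch: skip a delimited column with FILLER (later releases).
LOAD DATA
INFILE 'test.dat'
TRUNCATE
INTO TABLE test
FIELDS TERMINATED BY ','
( col1,
  skipme FILLER,   -- read from the data file but never loaded
  col2
)
```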
3) You specified the proper length for every column but still get the following error message:
"FIELD IN DATA FILE EXCEEDED MAXIMUM SPECIFIED LENGTH"
: This happens when loading a CHAR/VARCHAR2/LONG field of more than 255 characters.
In this case the buffer size for the CHAR data type must be increased;
its default is 255.
Example)
load data
into table test
(col1 char(64000))
Here the buffer must be increased to 64K to load col1.
<Note> 64K is the maximum record length SQL*Loader can handle;
a single record cannot be loaded through multiple buffers,
so the maximum buffer is 64K.
If you need more than 64K:
a) split the data into several smaller chunks, or
b) SQL*Loader cannot be used.
4) How are BLOB or raw data handled?
: Use the RAW datatype and give it a length.
Example)
LOAD DATA
INFILE xx.dat "VAR"
REPLACE
INtO TABLE test
(BLOB raw (32767) )
xx.dat is the entire file to be loaded.
This option can be used to load bit-mapped files, or other file types,
into the database.
5) How is error LDR-510 handled?
: LDR-510 means a physical record in the data file exceeded the 64K maximum.
SQL*Loader requires physical records to be contiguous, and they are limited
to a maximum of 64K (a record cannot be loaded through multiple buffers).
This is the same situation as in 1) above, so split the physical record
into several logical records.
6) How is an EBCDIC character-set data file loaded?
: Specify the character set in the control file.
The most commonly used EBCDIC character set is WE8EBCDIC500.
Example)
load data
characterset we8ebcdic500
infile *
replace
into table for_load
(x)
begindata
B
Z
X
SQL> select * from for_load;
X
a
i
The same applies to ASCII.
7) If SQL*Loader-266 occurs even though the WE8EBCDIC500 character set is
specified in the control file:
Check that the ORA_NLS environment variable is set correctly.
(PRE: 1012552.6 PRS: 2060644.6)
8) How is decimal data loaded?
: The only way is to manipulate the data, i.e. transform it so that the
decimal point is inserted.
Example)
load data
infile *
truncate
into table test
(col1 integer external(5) ":col1/100")
begindata
12345
10000
24983
SQL> select * from test;
COL1
123.45
100
249.83
9) How are numbers with trailing signs loaded?
Example)
load data
infile *
truncate
into table loadnums
(col1 position(1:5),
col2 position(7:16) "to_number(:col2,'99,999.99MI')")
begindata
abcde 1,234.99-
abcde 11,234.34+
abcde 45.23
abcde 99,234.38-
abcde 23,234.23+
abcde 98,234.23+
SQL> select * from loadnums;
COL1 COL2
abcde -1234.99
abcde 11234.34
abcde -99234.38
abcde 23234.23
abcde 98234.23
<Note> In this case you will see the following error message in the log file:
Record 3: Rejected - Error on table LOADNUMS, column COL2.
ORA-01722: invalid number
This error message is expected: because the control file gives the number
datatype a mask, every number must match the mask.
Record 3 is rejected because it has no trailing sign.
10) What is a zoned number?
It is a string of decimal digits, one digit per byte, with the sign carried
in the last byte.
Example)
LOAD DATA
infile *
append
INTO TABLE test
(col1 position(1:3) zoned(3),
col2 position(4:6),
col3 position(7:8))
begindata
12J43323
43023423
SQL> select * from test;
COL1 COL2 COL3
-121 433 23
430 234 23
2 rows selected.
The following is a map of zoned values:
{ABCDEFGHI}JKLMNOPQR0123456789
++++++++++----------++++++++++
{ABCDEFGHI}JKLMNOPQR
01234567890123456789
11) What is a packed decimal number?
: The packed decimal format is a string of bytes, each holding two digits
(two nibbles).
The last byte holds one digit and the sign.
The sign nibble is usually 0x0a, 0x0b, ..., 0x0f: typically 0x0c/a/e/f for
positive and 0x0d/b for negative.
For example, the packed decimal representation of +123 is
[12] [3C], where [12] is the byte containing the nibbles 0x01 and 0x02,
and likewise for [3C].
The kernel checks the sign nibble in ttcp2n(); if it is not one of
0x0a, ..., 0x0f, an ORA-1488 error is raised.
<Note> BUG:296890:
This is not actually a bug; SQL*Loader does not support the UNSIGNED packed
decimals produced by COBOL. That is, when loading packed decimals SQL*Loader
checks that the last byte contains one digit and a sign.
12) How is NULLIF used?
Example)
load data
infile *
truncate
into table test
fields terminated by ','
(col1 date "mm/dd/yy" nullif col1='0', col2)
begindata
0,12345
11/11/95,12345
0,12345
11/11/95,12345
SQL> select * from test;
COL1 COL2
12345
11-NOV-95 12345
12345
11-NOV-95 12345
13) Can multiple NULLIFs be used for a single column?
: No. As a workaround, however:
Example)
load data
infile *
truncate
into table test
fields terminated by ',' optionally enclosed by '"'
(col1,col2 "decode(:col2,'X',NULL,'Y',NULL,'Z',NULL,:col2)",col3)
begindata
12345,"X",12345
12345,"A",12345
12345,"Y",12345
12345,"Z",12345
12345,"B",12345
SQLDBA> select * from test;
COL1 COL2 COL3
12345 12345
12345 A 12345
12345 12345
12345 12345
12345 B 12345
5 rows selected.
14) How can a column that SQL*Loader treats as NULL be treated as blank
spaces instead?
: Unless PRESERVE BLANKS is used, the loader treats blanks as NULL.
A workaround in this case:
Example)
load data
infile *
into table test
fields terminated by ','
(col1 position(1:5) integer external,
col2 position(15:20) char "nvl(:col2,' ')",
col3 position(25:30) integer external)
begindata
12345 rec1 12345
12345 23453
23333 rec3 29874
98273 98783
98723 rec5 234
SQL> select * from test;
COL1 COL2 COL3
12345 rec1 12345
12345 23453
23333 rec3 29874
98273 98783
98723 rec5 234
5 rows selected.
15) How is the WHEN clause used?
Example)
load data
infile *
truncate
into table t1
when col1 = '12345'
(col1 position(1:5) integer external, col2 position(7:12) char)
into table t2
when col1 = '54321'
(col1 position(1:5) integer external, col2 position(7:12) char)
begindata
12345 table1
54321 table2
99999 no tab
12345 table2
54321 table2
SQL> select * from t1;
COL1 COL2
12345 table1
12345 table2
SQL> select * from t2;
COL1 COL2
54321 table2
54321 table2
16) Can OR be used in a WHEN clause?
: No. As a workaround, however:
Example) If you want to insert data where (col1=12345 OR col1=54321)
AND col2='rowena':
load data
infile *
truncate
into table test
when col1 = '12345' and col2='rowena'
(col1 position(1:5) integer external, col2 position(7:12) char)
into table test
when col1 = '54321' and col2='rowena'
(col1 position(1:5) integer external, col2 position(7:12) char)
begindata
12345 rowena
43234 rowena
54321 rowena
(That is, you cannot use OR, but you can have two WHEN clauses against the same table.)
SQLDBA> select * from test;
COL1 COL2
12345 rowena
54321 rowena
2 rows selected.
17) What if ',' is used as the delimiter and the data itself contains ','?
: Specify it twice.
Example)
If the data to load is: col1, rowena, rowena, col3
then the data file should look like:
col1, rowena,, rowena, col3
select * from table:
COL1 COL2 COL3
col1 rowena, rowena col3
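As an alternative to doubling the delimiter, fields that may contain it can be enclosed; a minimal sketch with placeholder names:

```sql
-- Sketch: enclose fields that may contain the delimiter,
-- so no doubling is needed in the data.
LOAD DATA
INFILE *
TRUNCATE
INTO TABLE test
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(col1, col2, col3)
BEGINDATA
col1,"rowena, rowena",col3
```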
18) How can commits be made less frequent?
: Use the ROWS or BINDSIZE options.
Note, however, that the commit happens once per bind array: no matter how
many rows fit into one bind array, a commit is issued each time the bind
array (BINDSIZE) is filled, so a small BINDSIZE means frequent commits.
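A minimal sketch of the OPTIONS line for less frequent commits; the values and names are illustrative only:

```sql
-- Sketch: fewer commits by enlarging the bind array.
OPTIONS (ROWS=5000, BINDSIZE=10485760, READSIZE=10485760)
LOAD DATA
INFILE 'test.dat'
APPEND
INTO TABLE test
FIELDS TERMINATED BY ','
(col1, col2)
```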
Reference Document
------------------
hi,
please take a look at this document
http://www.petefinnigan.com/weblog/archives/00000020.htm
regards, -
How to load a default value into a column when using SQL*Loader
I'm trying to load from a flat file using SQL*Loader.
For one column I need to load a default value.
How do I go about this?
Hi!
Try this code:
LOAD DATA
INFILE 'sample.dat'
REPLACE
INTO TABLE emp
(empno POSITION(01:04) INTEGER EXTERNAL NULLIF empno=BLANKS,
ename POSITION(06:15) CHAR,
job POSITION(17:25) CHAR,
mgr POSITION(27:30) INTEGER EXTERNAL NULLIF mgr=BLANKS,
sal POSITION(32:39) DECIMAL EXTERNAL NULLIF sal=BLANKS,
comm POSITION(41:48) DECIMAL EXTERNAL DEFAULTIF comm=100,
deptno POSITION(50:51) INTEGER EXTERNAL NULLIF deptno=BLANKS,
hiredate SYSDATE
)
Hope this solves your problem.
Regards,
Satyaki De. -
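If the default you need is a fixed literal rather than SYSDATE, SQL*Loader's CONSTANT keyword covers that case; a minimal sketch of one field definition (the column name and value are assumptions):

```sql
-- Every loaded row gets 'N' in this column, regardless of the datafile contents.
status CONSTANT 'N'
```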
How can we tell if SQL*Loader is working on a TABLE?
We have a process that requires comparing batches with LDAP information. Instead of using an LDAP lookup tool, we get a nightly directory file, and import the two COLUMNs we want via SQL*Loader (REPLACE) into an IOT. Out of three cases, two just check the first COLUMN, and the third needs the second COLUMN as well.
We did not think of using External TABLEs, because we cannot store files on the DB server itself.
The question arises, what to do while the file is being imported. The file is just under 300M, so it takes a minute or so to replace all the data. We found SQL*Loader waits until a transaction is finished before starting, but a query against the TABLE only waits while it is actually importing the data. At the beginning of SQL*Loader's process, however, a query against the TABLE returns no rows.
The solution we are trying right now is, to have the process that starts SQL*Loader flip a flag in another TABLE denoting that it is unavailable. When it is done, it flips it back, and notes the date. Then, the process that queries the information, exits if the flag is currently 'N'.
The problem is: what if SQL*Loader starts in between the check of the flag and the query against the TABLE? How do we guarantee that the data is not still being imported?
I can think of three solutions:
1) LOCK the ldap information TABLE before checking the flag.
2) LOCK the record that the process starting SQL*Loader flips.
3) Add a clause to the query against the TABLE that checks that there are records in the TABLE (AND EXISTS (SELECT * FROM ldap_information)).
The problem with 3) is that the process has already tagged the batches (via a COLUMN). It could, technically, reset them afterwards, but that seems a bit backwards.
Just out of curiosity, are you aware that Oracle supplies a DBMS_LDAP package for pulling information from LDAP sources? It would obviously be relatively easy to have a single transaction that deletes the existing data, loads the new data via DBMS_LDAP, and commits, which would get around the problem you're having with SQL*Loader truncating the table.
You could also have SQL*Loader load the data into a staging table and then have a second process either MERGE the changes from the staging table into the real table (again in a transactionally consistent manner) or just delete and insert the data.
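A sketch of that staging-table approach (the table and column names here are assumptions):

```sql
-- SQL*Loader loads into ldap_staging; then, in one transaction,
-- bring ldap_information in sync so readers never see an empty table.
MERGE INTO ldap_information t
USING ldap_staging s
ON (t.ldap_key = s.ldap_key)
WHEN MATCHED THEN
  UPDATE SET t.ldap_value = s.ldap_value
WHEN NOT MATCHED THEN
  INSERT (t.ldap_key, t.ldap_value)
  VALUES (s.ldap_key, s.ldap_value);

-- Remove rows that disappeared from the directory file.
DELETE FROM ldap_information t
 WHERE NOT EXISTS (SELECT 1 FROM ldap_staging s WHERE s.ldap_key = t.ldap_key);

COMMIT;
```

Until the COMMIT, readers continue to see the previous night's data, which sidesteps the flag-and-lock coordination entirely.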
Justin -
Loading two tables at same time with SQL Loader
I have two tables I would like to populate from a file C:\my_data_file.txt.
Many of the columns I am loading into both tables, but there are a handful of columns I do not want. The first column I do not want for either table. My problem is how to direct SQL*Loader to go back to the first column and skip over it. I tried using POSITION(1) and FILLER for the first column while loading the second table, but I got the following error message:
SQL*Loader-350: Syntax error at line 65
Expecting "," or ")" found keyword Filler
col_a Poistion(1) FILLER INTEGER EXTERNAL
My control file looks like the following:
LOAD DATA
INFILE 'C:\my_data_file.txt'
BADFILE 'C:\my_data_file.txt'
DISCARDFILE 'C:\my_data_file.txt'
TRUNCATE INTO TABLE table_one
WHEN (specific conditions)
FIELDS TERMINATED BY ' '
TRAILING NULLCOLS
(col_a FILLER INTEGER EXTERNAL,
col_b INTEGER EXTERNAL,
col_g FILLER CHAR,
col_h CHAR,
col_date DATE "yyyy-mm-dd"
)
INTO TABLE table_two
WHEN (specific conditions)
FIELDS TERMINATED BY ' '
TRAILING NULLCOLS
(col_a POSITION(1) FILLER INTEGER EXTERNAL,
col_b INTEGER EXTERNAL,
col_g FILLER CHAR,
col_h CHAR,
col_date DATE "yyyy-mm-dd"
)
Try adapting this for your scenario.
tables for the test
create table test1 ( fld1 varchar2(20), fld2 integer, fld3 varchar2(20) );
create table test2 ( fld1 varchar2(20), fld2 integer, fld3 varchar2(20) );
control file
LOAD DATA
INFILE "test.txt"
INTO TABLE user.test1 TRUNCATE
WHEN RECID = '1'
FIELDS TERMINATED BY ' '
(recid filler integer external,
fld1 char,
fld2 integer external,
fld3 char
)
INTO TABLE user.test2 TRUNCATE
WHEN RECID <> '1'
FIELDS TERMINATED BY ' '
(recid filler position(1) integer external,
fld1 char,
fld2 integer external,
fld3 char
)
data for loading [test.txt]
1 AAAAA 11111 IIIII
2 BBBBB 22222 JJJJJ
1 CCCCC 33333 KKKKK
2 DDDDD 44444 LLLLL
1 EEEEE 55555 MMMMM
2 FFFFF 66666 NNNNN
1 GGGGG 77777 OOOOO
2 HHHHH 88888 PPPPP
HTH
RK -
Creating SQL-Loader script for more than one table at a time
Hi,
I am using OMWB 2.0.2.0.0 with Oracle 8.1.7 and Sybase 11.9.
It looks like I can create SQL-Loader scripts for all the tables
or for one table at a time. If I want to create SQL-Loader
scripts for 5-6 tables, I have to either create script for all
the tables and then delete the unwanted tables or create the
scripts for one table at a time and then merge them.
Is there a simple way to create migration scripts for more than
one but not all tables at a time?
Thanks,
Prashant Rane
No, there is no multi-select for creating SQL*Loader scripts.
You can either create them separately, or create them all and
then discard the ones you do not need. -
SQL Loader (Oracle 8.1.5 on Suse 6.3) Internal Error
Hi all,
I try to insert data with SQL Loader on Linux (Suse 6.3) and get the following message:
SQL*Loader-704: Internal error: ulmtsyn: OCIStmtExecute (tabhp) [-1073747572]
ORA-00942: table or view does not exist
The control file and data file did work on another platform.
Please help me!
Thanks,
Thies Mauker
Lee Bennett (guest) wrote:
: Hi
: I have successfully installed Oracle 8.1.5 Enterprise edition on
: SuSE 6.2 and applied the 8.1.5.0.1 patch set,
NO!
SuSE 6.2 has a patch file for Oracle made by their own developers.
Never use the Oracle 8.1.5.0.1 patch file; it doesn't work because
it is bugged.
Use the SuSE 6.2 Oracle patch set.
(I don't remember the web page where you can download it, but a
search for "oracle" on the SuSE homepage will lead you to it.)
-Stefano
-
Decode Not working in sql loader
I had a requirement of loading a flat file into a staging table using SQL*Loader. One of the columns in the flat file has the values FALSE or TRUE, and my requirement is to load 0 for FALSE and 1 for TRUE, which can be achieved by a simple DECODE function. I used DECODE and tried to load several times, but it did not work. What might be the problem?
LOAD DATA
INFILE 'sql_4ODS.txt'
BADFILE 'SQL_4ODS.badtxt'
APPEND
INTO TABLE members
FIELDS TERMINATED BY "|"
( Person_ID,
FNAME,
LNAME,
Contact,
status "decode(:status, 'TRUE', '1', 'FALSE', '0')"
)
I did try putting a TRIM as well as a SUBSTR, but that did not work either; the column just doesn't get any values in the output (just null or free space).
Any help would be great.
Hello user8937215.
Please provide a create table statement and a sample of data file contents. I would expect DECODE or CASE to work based on the information provided.
Cheers,
Luke
Please mark the answer as helpful or answered if it is so. If not, provide additional details.
Always try to provide create table and insert table statements to help the forum members help you better.
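As the reply above notes, a CASE expression works in a SQL*Loader column definition as well as DECODE; a minimal sketch of the status field rewritten with CASE (same column name as the posted control file):

```sql
status "case when :status = 'TRUE' then '1' else '0' end"
```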