Processing a .CSV file to Table
HI Friends,
Please let me know how to load a .CSV file into a table using the SQL*Loader concept.
No other technique is required. I know it can be done through various other processes, but I need it done in the above way.
Thanks in adv...!
tempdbs
www.dwforum.net
here:
http://forums.oracle.com/forums/search.jspa?threadID=&q=loading+csv+sql+loader&objID=c84&dateRange=lastyear&userID=&numResults=15
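For reference, a minimal SQL*Loader control file for a comma-separated file looks like the sketch below. The table name EMP, its columns, and the file names are placeholders for illustration, not taken from the original post:

```
-- emp.ctl (hypothetical table and file names)
LOAD DATA
INFILE 'emp.csv'
BADFILE 'emp.bad'
APPEND
INTO TABLE emp
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(emp_id, emp_name, department, salary)
```

You would then run it from the command line with something like: sqlldr userid=scott/tiger control=emp.ctl log=emp.log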
Similar Messages
-
How to load csv file to table using ssis
Hi,
I have loaded the CSV file into a table using SSIS. It is working fine, but one record is mismatched.
source data :
col1 col2 col3 col4
aa, bb, this is sample,data for the details, this is
column delimiter - comma
col3 contains a comma, so the part of col3 after the comma is loaded as col4.
I need the col3 value with the comma intact.
So what column delimiter should I use?
Regards,
Abdul Khadir
Hi Abdul,
Based on your description, you want to distinguish whether a comma is used as a column separator or as part of an address column in a text file, and then load the data from the text file into a SQL Server table.
As per my understanding, if you can change the column separator from a comma to another delimiter such as a semicolon (;), do that; then we can select Semicolon {;} as the Column delimiter for the Flat File Connection Manager. Alternatively, ensure all columns are enclosed in double quotes ("). Then we can set the double quote (") as the Text qualifier for the Flat File Connection Manager, and the commas will be loaded as part of the string fields.
If you can't have that done, then because computers don't know the context of the data, you would have to come up with rules that decide when a comma represents a delimiter and when it is just part of the text. I think a custom script component would be necessary to pre-process the data, identify where a comma is part of an address, and then treat that as one field.
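The text-qualifier behaviour described above is exactly what standard CSV parsers implement. A small Python sketch (illustrative only, not SSIS) shows a quoted field keeping its embedded comma:

```python
import csv
import io

# One row whose third field legitimately contains a comma; the field is
# protected by the text qualifier (double quotes).
raw = 'aa,bb,"this is sample,data for the details",dd\n'

# delimiter and quotechar mirror the Flat File Connection Manager settings
row = next(csv.reader(io.StringIO(raw), delimiter=',', quotechar='"'))
print(row)  # 4 fields; the quoted comma stays inside the third field
```

The same principle applies to any CSV consumer: once the qualifier is set, separator commas and data commas are unambiguous.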
Thanks,
Katherine Xiong
TechNet Community Support -
Stored Proc to create .csv file from table
Hi all,
I have a table with 4 columns and 10 rows. Now I need a stored proc to create a .csv file from the table data. Here are the scripts to create the table and insert the rows.
Create table emp(emp_id number(10), emp_name varchar2(20), department varchar2(20), salary number(10));
Insert into emp values ('1','AAAAA','SALES','10000');
Insert into emp values ('2','BBBBB','MARKETING','8000');
Insert into emp values ('3','CCCCC','SALES','12000');
Insert into emp values ('4','DDDDD','FINANCE','10000');
Insert into emp values ('5','EEEEE','SALES','11000');
Insert into emp values ('6','FFFFF','MANAGER','90000');
Insert into emp values ('7','GGGGG','SALES','12000');
Insert into emp values ('8','HHHHH','FINANCE','14000');
Insert into emp values ('9','IIIII','SALES','20000');
Insert into emp values ('10','JJJJJ','FINANCE','21000');
commit;
Now I need a stored proc to create a .csv file in my local location. Please let me know if you need any other details.
Some pointers:
http://www.oracle-base.com/articles/9i/GeneratingCSVFiles.php
http://tkyte.blogspot.com/2009/10/httpasktomoraclecomtkyteflat.html
also, doing a search on this forum or http://asktom.oracle.com will give you many clues.
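The linked examples generate the file inside the database with UTL_FILE; the row-to-CSV formatting they perform can be sketched language-neutrally. The Python below is purely an illustration of the output format, using the values from the INSERT statements above (only two rows shown):

```python
import csv
import io

# Rows as a stored procedure would fetch them from EMP
rows = [
    (1, 'AAAAA', 'SALES', 10000),
    (2, 'BBBBB', 'MARKETING', 8000),
]

buf = io.StringIO()
writer = csv.writer(buf, lineterminator='\n')
writer.writerow(['EMP_ID', 'EMP_NAME', 'DEPARTMENT', 'SALARY'])  # header row
writer.writerows(rows)

csv_text = buf.getvalue()
print(csv_text)
```

A PL/SQL version would do the same concatenation per fetched row and write each line with UTL_FILE.PUT_LINE, as the oracle-base article shows.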
> .csv file in my local location.
What is your 'local location'?
A client machine? The database server machine?
What database version are you using?
(the result of: select * from v$version; ) -
Uploading & Processing of CSV file fails in clustered env.
We have a csv file which is uploaded to the weblogic application server, written
to a temporary directory and then manipulated before being written to the database.
This process works correctly in a single server environment but fails in a clustered
environment.
The file gets uploaded to the server into the temporary directory without problem.
The processing starts. When running in a cluster the csv file is replicated
to a temporary directory on the secondary server as well as the primary.
The manipulation process is running but never finishes and the browser times out
with the following message:
Message from the NSAPI plugin:
No backend server available for connection: timed out after 30 seconds.
Build date/time: Jun 3 2002 12:27:28
The server which is loading the file and processing it writes this to the log:
<17-Jul-03 15:13:12 BST> <Warning> <HTTP> <WebAppServletContext(1883426,fulfilment,/fulfilment) One of the getParameter family of methods called after reading from the ServletInputStream, not merging post parameters>.
Anna Bancroft wrote:
> We have a csv file which is uploaded to the weblogic application server, written
> to a temporary directory and then manipulated before being written to the database.
> This process works correctly in a single server environment but fails in a clustered
> environment.
It doesn't make sense. Who is replicating the file? How long does it take to process the file? Which plugin are you using as a proxy to the cluster?
-- Prasad
-
Processing a CSV file in batching by FTP Adapter gives translation error
Hi All,
I have a CSV with 2000 records and want to process it in batches of 500.
When I don't use batching in the FTP adapter, everything goes fine.
But when I include batching in the adapter and try to process the same file, it gives:
<2009-09-09 12:09:15,997> <ERROR> <VDSTServices.collaxa.cube.translation> <NXSDTranslatorImpl::logError> translateFromNative Failed with exception = 50
<2009-09-09 12:09:15,997> <INFO> <VDSTServices.collaxa.cube.activation> <FTP Adapter::Inbound> Error while translating inbound file : VDST_CNP_EMR5110026_20090904_000000.csv
<2009-09-09 12:09:15,997> <INFO> <VDSTServices.collaxa.cube.activation> <FTP Adapter::Inbound>
ORABPEL-11100
Translation Failure.
[Line=27, Col=1] Translation from native failed. 50.
Check the error stack and fix the cause of the error. Contact oracle support if error is not fixable.
at oracle.tip.pc.services.translation.xlators.nxsd.NXSDTranslatorImpl.doTranslateFromNative(NXSDTranslatorImpl.java:754)
at oracle.tip.pc.services.translation.xlators.nxsd.NXSDTranslatorImpl.translateFromNative(NXSDTranslatorImpl.java:489)
at oracle.tip.adapter.file.inbound.ProcessWork.doTranslation(ProcessWork.java:748)
at oracle.tip.adapter.file.inbound.ProcessWork.processMessages(ProcessWork.java:336)
at oracle.tip.adapter.file.inbound.ProcessWork.run(ProcessWork.java:218)
at oracle.tip.adapter.fw.jca.work.WorkerJob.go(WorkerJob.java:51)
at oracle.tip.adapter.fw.common.ThreadPool.run(ThreadPool.java:280)
at java.lang.Thread.run(Thread.java:595)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 50
at oracle.tip.pc.services.translation.xlators.nxsd.ErrorList.addError(ErrorList.java:108)
at oracle.tip.pc.services.translation.xlators.nxsd.NXSDTranslatorImpl.terminateLoop(NXSDTranslatorImpl.java:1688)
at oracle.tip.pc.services.translation.xlators.nxsd.NXSDTranslatorImpl.parseNXSD(NXSDTranslatorImpl.java:1307)
at oracle.tip.pc.services.translation.xlators.nxsd.NXSDTranslatorImpl.parseNXSD(NXSDTranslatorImpl.java:1216)
at oracle.tip.pc.services.translation.xlators.nxsd.NXSDTranslatorImpl.parseNXSD(NXSDTranslatorImpl.java:1084)
at oracle.tip.pc.services.translation.xlators.nxsd.NXSDTranslatorImpl.doTranslateFromNative(NXSDTranslatorImpl.java:706)
... 7 more
I tried deleting rows 24-28 from the CSV and processing it again, and again got the same error [Line=27, Col=1], so it's not an issue with the CSV itself.
Please suggest.
Thanks
Can you publish or pass your CSV and the native schema you are using? Please also mention your SOA version. Thanks.
-
Dynamically creating oracle table with csv file as source
Hi,
We have a requirement to create a dynamic external table. Whenever the data or the number of columns changes in the CSV file, the table should be replaced with the current data and the current number of columns. As we have little experience in Oracle, please give us a clear solution. We have already tried some code, but we are getting errors. The code is given below.
thank you
We have executed this code after changing only the schema name and table name; everything else is the same.
Assume the following:
- Oracle User and Schema name is ALLEXPERTS
- Database name is EXPERTS
- The directory object is file_dir
- CSV file directory is /export/home/log
- The csv file name is ALLEXPERTS_CSV.log
- The table name is all_experts_tbl
1. Create a directory object in Oracle. The directory will point to the location where the file is located.
conn sys/{password}@EXPERTS as sysdba;
CREATE OR REPLACE DIRECTORY file_dir AS '/export/home/log';
2. Grant the directory privilege to the user
GRANT READ ON DIRECTORY file_dir TO ALLEXPERTS;
3. Create the table
Connect as ALLEXPERTS user
create table ALLEXPERTS.all_experts_tbl
(txt_line varchar2(512))
organization external
(type ORACLE_LOADER
default directory file_dir
access parameters (records delimited by newline
fields
(txt_line char(512)))
location ('ALLEXPERTS_CSV.log')
);
This will create a table that links the data to a file. Now you can treat this file as a regular table where you can use SELECT statement to retrieve the data.
PL/SQL to create the data (PSEUDO code)
CREATE OR REPLACE PROCEDURE new_proc IS
-- Set up the cursors
CURSOR c_main IS SELECT *
FROM allexperts.all_experts_tbl;
CURSOR c_first_row IS SELECT *
FROM allexperts.all_experts_tbl
WHERE ROWNUM = 1;
-- Declare variables
l_delimiter_count NUMBER;
l_temp_counter NUMBER := 1;
l_current_row VARCHAR2(100);
l_create_statement VARCHAR2(1000);
BEGIN
-- Get the first row: open c_first_row and fetch the data into l_current_row
OPEN c_first_row;
FETCH c_first_row INTO l_current_row;
CLOSE c_first_row;
-- Count the delimiters in l_current_row (REGEXP_COUNT is 11g+;
-- on 10g count with LENGTH/REPLACE instead)
l_delimiter_count := REGEXP_COUNT(l_current_row, ',');
-- Create the table with the right number of columns (delimiters + 1)
l_create_statement := 'CREATE TABLE csv_table (';
WHILE l_temp_counter <= l_delimiter_count + 1
LOOP
l_create_statement := l_create_statement || 'COL' || l_temp_counter || ' VARCHAR2(100)';
IF l_temp_counter <= l_delimiter_count THEN
l_create_statement := l_create_statement || ',';
END IF;
l_temp_counter := l_temp_counter + 1;
END LOOP;
l_create_statement := l_create_statement || ')';
EXECUTE IMMEDIATE l_create_statement;
-- Loop through c_main to parse all the rows and insert into the table
FOR rec IN c_main
LOOP
-- Parse each record and insert the data into the table created above
NULL; -- parsing left as pseudo code
END LOOP;
END new_proc;
The initial table creation is showing errors and the procedure is created with compilation errors.
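The column-count logic in the pseudo code (a header with N delimiters describes N + 1 fields) can be checked outside the database. This Python sketch mirrors the DDL string the procedure builds; the COLn names and csv_table are placeholders:

```python
def build_create_statement(header_line, table_name='csv_table', delimiter=','):
    # A header containing N delimiters describes N + 1 fields.
    n_cols = header_line.count(delimiter) + 1
    cols = ', '.join('COL%d VARCHAR2(100)' % i for i in range(1, n_cols + 1))
    return 'CREATE TABLE %s (%s)' % (table_name, cols)

print(build_create_statement('a,b,c'))
```

The same counting can be done in PL/SQL with REGEXP_COUNT on the first row of the external table.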
After executing the CREATE TABLE, I am getting the following errors:
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-00554: error encountered while parsing access parameters
KUP-01005: syntax error: found "identifier": expecting one of: "badfile,
byteordermark, characterset, column, data, delimited, discardfile,
disable_directory_link_check, exit, fields, fixed, load, logfile, language,
nodiscardfile, nobadfile, nologfile, date_cache, processing, readsize, string,
skip, territory, varia"
KUP-01008: the bad identifier was: deli
KUP-01007: at line 1 column 9
ORA-06512: at "SYS.ORACLE_LOADER", line 19 -
Error while processing csv file
Hi,
I get these messages while processing a CSV file to load its content into the database:
[SSIS.Pipeline] Warning: Warning: Could not open global shared memory to communicate with performance DLL;
data flow performance counters are not available. To resolve, run this package as an administrator, or on the system's console.
[SSIS.Pipeline] Warning: The output column "Copy of Column 13" (842) on output "Data Conversion Output" (798) and component
"Data Conversion" (796) is not subsequently used in the Data Flow task. Removing this unused output column can increase Data Flow task performance.
Can someone please help me with this?
With Regards
litu
Hello Litu,
Are you using an OLE DB Source and Destination?
Can you change the source and destination provider to ADO.NET and check if the issue still persists?
Mark this post as "Answered" if this addresses your question.
Regards,
Don Rohan [MSFT]
-
Setting required to process incomming Email with CSV file attached
Dear all,
We have a scenario wherein the supplier receives a PO confirmation CSV file from the download center as an email attachment. The supplier updates the file by confirming POs and replies back. The CSV file then needs to update the confirmed POs in SNC.
The SO50 settings are in place, but we are still not able to receive the email back in SNC. Any suggestions on what settings we need in SNC to receive and process the CSV file?
thanks,
mahesh
Hi,
You can use the upload center to process the changed data. Isn't that helpful?
Best Regards,
Harsha Gatt -
Replace String in csv file with file adapter content conversion
Hello experts,
I have a sender file channel to receive CSV files from an external server. I can process the CSV files, but I have a little problem. The separator in the CSV file is a comma, but sometimes a comma also appears in the data, where it is not a separator.
Example:
1234,20120123,ABCD,customer, which has a comma in it's name,...
5678,20120123,FGHI,customer without comma,...
My plan is to remove all commas when a blank follows.
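That removal rule (drop a comma whenever a blank follows) can be expressed as a regular expression. The sketch below is illustrative only, and is only safe if real separator commas are never followed by a space in the feed:

```python
import re

line = "1234,20120123,ABCD,customer, which has a comma in it's name,..."

# Delete any comma that is immediately followed by a blank; separator
# commas (never followed by a space here) are left untouched.
cleaned = re.sub(r',(?= )', '', line)
print(cleaned)
```

Whether such pre-processing belongs in a module or in the sending system is a separate design question; the enclosure-sign approach in the reply below avoids it entirely.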
My current content conversion looks like this:
Record.fieldNames field1,field2,...
Record.fieldSeparator ,
Record.endSeparator 'nl'
Is there something like Record.fieldReplaceString or something similar which I can use? Or is there perhaps a better way to do this?
I'd appreciate every help I can get here.
Best regards.
Oliver.
Hi Oliver,
I think I've got a solution for you, after all. All you need to do is have your customer name enclosed in quotes, like that:
1234,20120123,ABCD,"customer, which has a comma in it's name",...
Use the rest of the FCC configuration as usual. You might also want to have a look at this wiki entry for a configuration example:
http://wiki.sdn.sap.com/wiki/display/XI/FileContentConversion
Moreover, see the description for "NameA.enclosureSign" in the help document below:
http://help.sap.com/saphelp_nwpi71/helpdata/en/44/6830e67f2a6d12e10000000a1553f6/frameset.htm
Hope this helps,
Greg -
Csv file processing using external table
Dear All,
Our database is Oracle 10g R2 and the OS is Solaris.
We receive CSV files into a particular directory on the server each day.
The file format looks like:
H00,SOURCE_NAME,FILE_CREATED_DATE
RECORD_TYPE,EMP_ID,EMP_NAME,EMP_DOB(DDMMYYYY),EMP_HIRE_DATE(DDMMYYYY),EMP_LOCATION
T00,RECORD_COUNT
EMPLOYEE TABLE STRUCTURE
EMP_ID NOT NULL NUMBER ,
EMP_NAME NOT NULL VARCHAR2(10) ,
EMP_DOB DATE,
EMP_HIRE_DATE NOT NULL DATE,
EMP_LOCATION VARCHAR2(80)
Sample File:
H00,ABC,21092013
"R01",1,"EMP1","14021986","06072010","LOC1"
"R01",20000000000,"EMP2","14021-987","06072011",""
,***,"EMPPPPPPPPPPP3","14021988","060**012","LOC2"
"R01",4,4,"14021989","06072013",
T00,4
We need to validate each record, excluding header and trailer, for:
datatype, length, optionality, and other date validations such as EMP_HIRE_DATE cannot be earlier than EMP_DOB.
In case of any data errors we need to send a response file for the corresponding source file.
We have predefined error codes to be sent in the response file.
ERR001 EMP_ID can not be null
ERR002 EMP_ID exceeds 10 digits
ERR003 EMP_ID is not a number
ERR004 EMP_NAME has to be text
ERR005 EMP_NAME length can not exceed 10
ERR006 EMP_NAME can not be null
ERR007 EMP_DOB is not a date
ERR008 EMP_DOB is not in ddmmyyyy format
ERR009 EMP_HIRE_DATE is not a date
ERR010 EMP_HIRE_DATE is not in ddmmyyyy format
ERR011 EMP_HIRE_DATE can not be null
ERR012 EMP_LOCATION has to be text
ERR013 EMP_LOCATION length can not exceed 80
ERR014 EMP_HIRE_DATE can not be less than EMP_DOB
ERR015 Field missing in the record
ERR016 More number of fields than allowed
1. Do I need to create an external table before processing each file (EMP1.txt, EMP2.txt)?
2. How do I generate these error codes for the respective failure scenarios and log them into an exception table?
3. The response file needs to contain the entire record and a concatenation of all the error codes on the next line.
4. Which would be the better approach:
creating an external table with all CHAR(2000) fields and writing a select statement such as select * from ext_table where (emp_id is not null and length(emp_id) <= 10 and ...) to select only proper data;
or creating the external table with the same structure as the employee table and creating a bad file? If this is the preferred approach, how can I generate the custom error codes?
Could you please help me in achieving this!
Warm Regards,
Shankar.
You can do a one-time creation of an external table. After that, you can either overwrite the existing text file or alter the location. In the example below I have split your original sample file into two files, in order to demonstrate altering the location.
You can make the external table all VARCHAR2 fields, as large as they need to be to accommodate all possible data. If you then create a staging table and a rejects table of the same structure, you can create a trigger on the staging table such that, when you insert from the external table into the staging table, it inserts into either the employee table or the rejects table with the concatenated error list.
If you want this in a file, then you can just spool a select from the rejects table or use some other method such as utl_file. The following is a partial example. Another alternative would be to use SQL*Loader to load directly into the staging table without an external table.
SCOTT@orcl12c> HOST TYPE emp1.txt
H00,ABC,21092013
"R01",1,"EMP1","14021986","06072010","LOC1"
"R01",20000000000,"EMP2","14021-987","06072011",""
T00,4
SCOTT@orcl12c> HOST TYPE emp2.txt
H00,ABC,21092013
,***,"EMPPPPPPPPPPP3","14021988","060**012","LOC2"
"R01",4,4,"14021989","06072013",
T00,4
SCOTT@orcl12c> CREATE OR REPLACE DIRECTORY my_dir AS 'c:\my_oracle_files'
2 /
Directory created.
SCOTT@orcl12c> CREATE TABLE external_table
2 (record_type VARCHAR2(10),
3 emp_id VARCHAR2(11),
4 emp_name VARCHAR2(14),
5 emp_dob VARCHAR2(10),
6 emp_hire_date VARCHAR2(10),
7 emp_location VARCHAR2(80))
8 ORGANIZATION external
9 (TYPE oracle_loader
10 DEFAULT DIRECTORY my_dir
11 ACCESS PARAMETERS
12 (RECORDS DELIMITED BY NEWLINE
13 LOAD WHEN ((1: 3) != "H00" AND (1:3) != 'T00')
14 LOGFILE 'test.log'
15 FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"' LDRTRIM
16 MISSING FIELD VALUES ARE NULL
17 REJECT ROWS WITH ALL NULL FIELDS
18 (record_type, emp_id, emp_name, emp_dob, emp_hire_date, emp_location))
19 LOCATION ('emp1.txt'))
20 /
Table created.
SCOTT@orcl12c> CREATE TABLE staging
2 (record_type VARCHAR2(10),
3 emp_id VARCHAR2(11),
4 emp_name VARCHAR2(14),
5 emp_dob VARCHAR2(10),
6 emp_hire_date VARCHAR2(10),
7 emp_location VARCHAR2(80))
8 /
Table created.
SCOTT@orcl12c> CREATE TABLE employee
2 (emp_id NUMBER NOT NULL,
3 emp_name VARCHAR2(10) NOT NULL,
4 emp_dob DATE,
5 emp_hire_date DATE,
6 emp_location VARCHAR2(80))
7 /
Table created.
SCOTT@orcl12c> CREATE TABLE rejects
2 (record_type VARCHAR2(10),
3 emp_id VARCHAR2(11),
4 emp_name VARCHAR2(14),
5 emp_dob VARCHAR2(10),
6 emp_hire_date VARCHAR2(10),
7 emp_location VARCHAR2(80),
8 error_codes VARCHAR2(4000))
9 /
Table created.
SCOTT@orcl12c> CREATE OR REPLACE TRIGGER staging_air
2 AFTER INSERT ON staging
3 FOR EACH ROW
4 DECLARE
5 v_rejects NUMBER := 0;
6 v_error_codes VARCHAR2(4000);
7 v_num NUMBER;
8 v_dob DATE;
9 v_hire DATE;
10 BEGIN
11 IF :NEW.emp_id IS NULL THEN
12 v_rejects := v_rejects + 1;
13 v_error_codes := v_error_codes || ',' || 'ERR001';
14 ELSIF LENGTH (:NEW.emp_id) > 10 THEN
15 v_rejects := v_rejects + 1;
16 v_error_codes := v_error_codes || ',' || 'ERR002';
17 END IF;
18 BEGIN
19 v_num := TO_NUMBER (:NEW.emp_id);
20 EXCEPTION
21 WHEN value_error THEN
22 v_rejects := v_rejects + 1;
23 v_error_codes := v_error_codes || ',' || 'ERR003';
24 END;
25 IF :NEW.emp_name IS NULL THEN
26 v_rejects := v_rejects + 1;
27 v_error_codes := v_error_codes || ',' || 'ERR006';
28 ELSIF LENGTH (:NEW.emp_name) > 10 THEN
29 v_rejects := v_rejects + 1;
30 v_error_codes := v_error_codes || ',' || 'ERR005';
31 END IF;
32 BEGIN
33 v_dob := TO_DATE (:NEW.emp_dob, 'ddmmyyyy');
34 EXCEPTION
35 WHEN OTHERS THEN
36 v_rejects := v_rejects + 1;
37 v_error_codes := v_error_codes || ',' || 'ERR008';
38 END;
39 BEGIN
40 IF :NEW.emp_hire_date IS NULL THEN
41 v_rejects := v_rejects + 1;
42 v_error_codes := v_error_codes || ',' || 'ERR011';
43 ELSE
44 v_hire := TO_DATE (:NEW.emp_hire_date, 'ddmmyyyy');
45 END IF;
46 EXCEPTION
47 WHEN OTHERS THEN
48 v_rejects := v_rejects + 1;
49 v_error_codes := v_error_codes || ',' || 'ERR010';
50 END;
51 IF LENGTH (:NEW.emp_location) > 80 THEN
52 v_rejects := v_rejects + 1;
53 v_error_codes := v_error_codes || ',' || 'ERR013';
54 END IF;
55 IF v_hire IS NOT NULL AND v_dob IS NOT NULL AND v_hire < v_dob THEN
56 v_rejects := v_rejects + 1;
57 v_error_codes := v_error_codes || ',' || 'ERR014';
58 END IF;
59 IF :NEW.emp_id IS NULL OR :NEW.emp_name IS NULL OR :NEW.emp_dob IS NULL
60 OR :NEW.emp_hire_date IS NULL OR :NEW.emp_location IS NULL THEN
61 v_rejects := v_rejects + 1;
62 v_error_codes := v_error_codes || ',' || 'ERR015';
63 END IF;
64 IF v_rejects = 0 THEN
65 INSERT INTO employee (emp_id, emp_name, emp_dob, emp_hire_date, emp_location)
66 VALUES (:NEW.emp_id, :NEW.emp_name,
67 TO_DATE (:NEW.emp_dob, 'ddmmyyyy'), TO_DATE (:NEW.emp_hire_date, 'ddmmyyyy'),
68 :NEW.emp_location);
69 ELSE
70 v_error_codes := LTRIM (v_error_codes, ',');
71 INSERT INTO rejects
72 (record_type,
73 emp_id, emp_name, emp_dob, emp_hire_date, emp_location,
74 error_codes)
75 VALUES
76 (:NEW.record_type,
77 :NEW.emp_id, :NEW.emp_name, :NEW.emp_dob, :NEW.emp_hire_date, :NEW.emp_location,
78 v_error_codes);
79 END IF;
80 END staging_air;
81 /
Trigger created.
SCOTT@orcl12c> SHOW ERRORS
No errors.
SCOTT@orcl12c> INSERT INTO staging SELECT * FROM external_table
2 /
2 rows created.
SCOTT@orcl12c> ALTER TABLE external_table LOCATION ('emp2.txt')
2 /
Table altered.
SCOTT@orcl12c> INSERT INTO staging SELECT * FROM external_table
2 /
2 rows created.
SCOTT@orcl12c> SELECT * FROM employee
2 /
EMP_ID EMP_NAME EMP_DOB EMP_HIRE_DATE EMP_LOCATION
1 EMP1 Fri 14-Feb-1986 Tue 06-Jul-2010 LOC1
1 row selected.
SCOTT@orcl12c> COLUMN error_codes NEWLINE
SCOTT@orcl12c> SELECT * FROM rejects
2 /
RECORD_TYP EMP_ID EMP_NAME EMP_DOB EMP_HIRE_D EMP_LOCATION
ERROR_CODES
R01 20000000000 EMP2 14021-987 06072011
ERR002,ERR008,ERR015
*** EMPPPPPPPPPPP3 14021988 060**012 LOC2
ERR003,ERR005,ERR010
R01 4 4 14021989 06072013
ERR015
3 rows selected. -
Problem in converting table data into CSV file
Hi All,
In my process I need to convert my error table data into a CSV file. The data is converted to CSV by using the OdiSqlUnload function, but the column headers are not included. I use another procedure for adding the column headers, but I am getting the error below:
com.sunopsis.sql.SnpsMissingParametersException: Missing parameter string.find, string.find
SQL: import string import java.sql as sql import java.lang as lang import re sourceConnection = odiRef.getJDBCConnection("SRC") output_write=open('C:/Oracle/Middleware/Oracle_ODI2/oracledi/pro/PRO.txt','r+') myStmt = sourceConnection.createStatement() my_query = "select * FROM E$_LOCAL_F0911Z1" my_query=my_query.upper() if string.find(my_query, '*') > 0: myRs = myStmt.executeQuery(my_query) md=myRs.getMetaData() collect=[] i=1 while (i <= md.getColumnCount()): collect.append(md.getColumnName(i)) i += 1 header=','.join(map(string.strip, collect)) elif string.find(my_query,'||') > 0: header = my_query[7:string.find(my_query, 'FROM')].replace("||','||",',') else: header = my_query[7:string.find(my_query, 'FROM')] print header old=output_write.read() output_write.seek(0) output_write.write (header+'\n'+old) sourceConnection.close() output_write.close()
And I used the code below for converting:
import string
import java.sql as sql
import java.lang as lang
import re
sourceConnection = odiRef.getJDBCConnection("SRC")
output_write = open('C:/Oracle/Middleware/Oracle_ODI2/oracledi/pro/PRO.txt', 'r+')
myStmt = sourceConnection.createStatement()
my_query = "select * FROM E$_COMPANY"
my_query = my_query.upper()
if string.find(my_query, '*') > 0:
    myRs = myStmt.executeQuery(my_query)
    md = myRs.getMetaData()
    collect = []
    i = 1
    while (i <= md.getColumnCount()):
        collect.append(md.getColumnName(i))
        i += 1
    header = ','.join(map(string.strip, collect))
elif string.find(my_query, '||') > 0:
    header = my_query[7:string.find(my_query, 'FROM')].replace("||','||", ',')
else:
    header = my_query[7:string.find(my_query, 'FROM')]
print header
old = output_write.read()
output_write.seek(0)
output_write.write(header + '\n' + old)
sourceConnection.close()
output_write.close()
Can anyone help regarding this?
Edited by: 30021986 on Oct 1, 2012 6:04 PM
This may not be an option for you, but in a pinch you may want to consider outputting your data to an MS spreadsheet, then saving it as a CSV. It's somewhat of a cumbersome process, but it will get you by for now.
You will need to change your content type to application/vnd.ms-excel.
<% response.setContentType("application/vnd.ms-excel"); %> -
How to upload above20000 records csv files into oracle table,Oracle APEX3.2
Can anyone help me upload a CSV with more than 20,000 records using the APEX 3.2 upload process? I am using the regular upload process with a BLOB file:
SELECT blob_content,id,filename into v_blob_data,v_file_id,v_file_name
FROM apex_application_files
WHERE last_updated = (select max(last_updated)
from apex_application_files WHERE UPDATED_BY = :APP_USER)
AND id = (select max(id) from apex_application_files where updated_by = :APP_USER);
I tried to upload, but my page is timing out. My application works best up to 1,000 records; after that it times out. Each record takes about 2 seconds to store in the Oracle table, so 1,000 records take about 7 minutes, after which the APEX upload web page times out.
Please help me with source code on how to speed up the CSV upload process, or suggest a better approach with a source example.
Thanks,
Sant.
Edited by: 994152 on Mar 15, 2013 5:38 AM
See this posting:
Internet Explorer Cannot Display
There, I provided a couple of links on this particular issue. You need to change the timeout on your application server.
Denes Kubicek
http://deneskubicek.blogspot.com/
http://www.apress.com/9781430235125
http://apex.oracle.com/pls/apex/f?p=31517:1
http://www.amazon.de/Oracle-APEX-XE-Praxis/dp/3826655494
------------------------------------------------------------------- -
SQL server 2014 and VS 2013 - Dataflow task, read CSV file and insert data to SQL table
Hello everyone,
I was assigned a work item wherein I have a dataflow task in a For Each Loop container in the control flow of an SSIS package. This For Each Loop container reads the CSV files from the specified location one by one and populates a variable with the current file name. Note that the tables into which I would like to push the data from each CSV file also have the same names as the CSV files.
In the dataflow task I have a Flat File component as a source; this component uses the above variable to read the data of a particular file. Now here is my question: how can I move the data to the destination SQL table using the same variable name?
I've tried to set up the OLE DB destination component dynamically, but it executes well only the first time. It does not change the mappings as per the columns of the second CSV file. There are around 50 CSV files, each with a different set of columns. These files need to be migrated to SQL tables in the optimal way.
Does anybody know which is the best way to setup the Dataflow task for this requirement?
Also, I cannot use Bulk insert task here as we would like to keep a log of corrupted rows.
Any help would be much appreciated. It's very urgent.
Thanks, Ankit Shah | Inkey Solutions, India. | Microsoft Certified Business Management Solutions Professionals | http://ankit.inkeysolutions.com
The standard Data Flow Task supports only static metadata defined at design time. I would recommend you check the commercial COZYROC
Data Flow Task Plus. It is an extension of the standard Data Flow Task and it supports dynamic metadata at runtime. You can process all your input CSV files using a single Data Flow Task
Plus. No programming skills are required.
SSIS Tasks Components Scripts Services | http://www.cozyroc.com/ -
Program to upload csv file to internal table and insert into database table
Hi, I'm writing a program where I need to upload a CSV file into an internal table using GUI_UPLOAD, but I also need this program to insert the data into my custom database table using the SPLIT command. Does anybody have any samples to help? It's urgent!
Hi,
Check this sample; maybe it will give you a hint...
REPORT z_table_upload LINE-SIZE 255.
* Data
DATA: it_dd03p TYPE TABLE OF dd03p,
is_dd03p TYPE dd03p.
DATA: it_rdata TYPE TABLE OF text1024,
is_rdata TYPE text1024.
DATA: it_fields TYPE TABLE OF fieldname.
DATA: it_file TYPE REF TO data,
is_file TYPE REF TO data.
DATA: w_error TYPE text132.
* Macros
DEFINE write_error.
concatenate 'Error: table'
p_table
&1
&2
into w_error
separated by space.
condense w_error.
write: / w_error.
stop.
END-OF-DEFINITION.
* Field symbols
FIELD-SYMBOLS: <table> TYPE STANDARD TABLE,
<data> TYPE ANY,
<fs> TYPE ANY.
* Selection screen
SELECTION-SCREEN: BEGIN OF BLOCK b01 WITH FRAME TITLE text-b01.
PARAMETERS: p_file TYPE localfile DEFAULT 'C:\temp\' OBLIGATORY,
p_separ TYPE c DEFAULT ';' OBLIGATORY.
SELECTION-SCREEN: END OF BLOCK b01.
SELECTION-SCREEN: BEGIN OF BLOCK b02 WITH FRAME TITLE text-b02.
PARAMETERS: p_table TYPE tabname OBLIGATORY
MEMORY ID dtb
MATCHCODE OBJECT dd_dbtb_16.
SELECTION-SCREEN: END OF BLOCK b02.
SELECTION-SCREEN: BEGIN OF BLOCK b03 WITH FRAME TITLE text-b03.
PARAMETERS: p_create TYPE c AS CHECKBOX.
SELECTION-SCREEN: END OF BLOCK b03,
SKIP.
SELECTION-SCREEN: BEGIN OF BLOCK b04 WITH FRAME TITLE text-b04.
PARAMETERS: p_nodb RADIOBUTTON GROUP g1 DEFAULT 'X'
USER-COMMAND rg1,
p_save RADIOBUTTON GROUP g1,
p_dele RADIOBUTTON GROUP g1.
SELECTION-SCREEN: SKIP.
PARAMETERS: p_test TYPE c AS CHECKBOX,
p_list TYPE c AS CHECKBOX DEFAULT 'X'.
SELECTION-SCREEN: END OF BLOCK b04.
* At selection screen
AT SELECTION-SCREEN.
IF sy-ucomm = 'RG1'.
IF p_nodb IS INITIAL.
p_test = 'X'.
ENDIF.
ENDIF.
* At selection screen
AT SELECTION-SCREEN ON VALUE-REQUEST FOR p_file.
CALL FUNCTION 'F4_FILENAME'
EXPORTING
field_name = 'P_FILE'
IMPORTING
file_name = p_file.
* Start of selection
START-OF-SELECTION.
PERFORM f_table_definition USING p_table.
PERFORM f_upload_data USING p_file.
PERFORM f_prepare_table USING p_table.
PERFORM f_process_data.
IF p_nodb IS INITIAL.
PERFORM f_modify_table.
ENDIF.
IF p_list = 'X'.
PERFORM f_list_records.
ENDIF.
* End of selection
END-OF-SELECTION.
* FORM f_table_definition *
* --> VALUE(IN_TABLE) *
FORM f_table_definition USING value(in_table).
DATA: l_tname TYPE tabname,
l_state TYPE ddgotstate,
l_dd02v TYPE dd02v.
l_tname = in_table.
CALL FUNCTION 'DDIF_TABL_GET'
EXPORTING
name = l_tname
IMPORTING
gotstate = l_state
dd02v_wa = l_dd02v
TABLES
dd03p_tab = it_dd03p
EXCEPTIONS
illegal_input = 1
OTHERS = 2.
IF l_state NE 'A'.
write_error 'does not exist or is not active' space.
ENDIF.
IF l_dd02v-tabclass NE 'TRANSP' AND
l_dd02v-tabclass NE 'CLUSTER'.
write_error 'is type' l_dd02v-tabclass.
ENDIF.
ENDFORM.
* FORM f_prepare_table
*   --> VALUE(IN_TABLE)
FORM f_prepare_table USING value(in_table).
DATA: l_tname TYPE tabname,
lt_ftab TYPE lvc_t_fcat.
l_tname = in_table.
CALL FUNCTION 'LVC_FIELDCATALOG_MERGE'
EXPORTING
i_structure_name = l_tname
CHANGING
ct_fieldcat = lt_ftab
EXCEPTIONS
OTHERS = 1.
IF sy-subrc NE 0.
WRITE: / 'Error while building field catalog'.
STOP.
ENDIF.
CALL METHOD cl_alv_table_create=>create_dynamic_table
EXPORTING
it_fieldcatalog = lt_ftab
IMPORTING
ep_table = it_file.
ASSIGN it_file->* TO <table>.
CREATE DATA is_file LIKE LINE OF <table>.
ASSIGN is_file->* TO <data>.
ENDFORM.
* FORM f_upload_data
*   --> VALUE(IN_FILE)
FORM f_upload_data USING value(in_file).
DATA: l_file TYPE string,
l_ltext TYPE string.
DATA: l_lengt TYPE i,
l_field TYPE fieldname.
DATA: l_missk TYPE c.
l_file = in_file.
l_lengt = strlen( in_file ).
FORMAT INTENSIFIED ON.
WRITE: / 'Reading file', in_file(l_lengt).
CALL FUNCTION 'GUI_UPLOAD'
EXPORTING
filename = l_file
filetype = 'ASC'
TABLES
data_tab = it_rdata
EXCEPTIONS
OTHERS = 1.
IF sy-subrc <> 0.
WRITE: /3 'Error uploading', l_file.
STOP.
ENDIF.
* File not empty
DESCRIBE TABLE it_rdata LINES sy-tmaxl.
IF sy-tmaxl = 0.
WRITE: /3 'File', l_file, 'is empty'.
STOP.
ELSE.
WRITE: '-', sy-tmaxl, 'rows read'.
ENDIF.
* File header on first row
READ TABLE it_rdata INTO is_rdata INDEX 1.
l_ltext = is_rdata.
WHILE l_ltext CS p_separ.
SPLIT l_ltext AT p_separ INTO l_field l_ltext.
APPEND l_field TO it_fields.
ENDWHILE.
IF sy-subrc = 0.
l_field = l_ltext.
APPEND l_field TO it_fields.
ENDIF.
* Check all key fields are present
SKIP.
FORMAT RESET.
FORMAT COLOR COL_HEADING.
WRITE: /3 'Key fields'.
FORMAT RESET.
LOOP AT it_dd03p INTO is_dd03p WHERE NOT keyflag IS initial.
WRITE: /3 is_dd03p-fieldname.
READ TABLE it_fields WITH KEY table_line = is_dd03p-fieldname
TRANSPORTING NO FIELDS.
IF sy-subrc = 0.
FORMAT COLOR COL_POSITIVE.
WRITE: 'ok'.
FORMAT RESET.
ELSEIF is_dd03p-datatype NE 'CLNT'.
FORMAT COLOR COL_NEGATIVE.
WRITE: 'error'.
FORMAT RESET.
l_missk = 'X'.
ENDIF.
ENDLOOP.
* Log other fields
SKIP.
FORMAT COLOR COL_HEADING.
WRITE: /3 'Other fields'.
FORMAT RESET.
LOOP AT it_dd03p INTO is_dd03p WHERE keyflag IS initial.
WRITE: /3 is_dd03p-fieldname.
READ TABLE it_fields WITH KEY table_line = is_dd03p-fieldname
TRANSPORTING NO FIELDS.
IF sy-subrc = 0.
WRITE: 'X'.
ENDIF.
ENDLOOP.
* Missing key field
IF l_missk = 'X'.
SKIP.
WRITE: /3 'Missing key fields - no further processing'.
STOP.
ENDIF.
ENDFORM.
* FORM f_process_data
FORM f_process_data.
DATA: l_ltext TYPE string,
l_stext TYPE text40,
l_field TYPE fieldname,
l_datat TYPE c.
LOOP AT it_rdata INTO is_rdata FROM 2.
l_ltext = is_rdata.
LOOP AT it_fields INTO l_field.
ASSIGN COMPONENT l_field OF STRUCTURE <data> TO <fs>.
IF sy-subrc = 0.
* Field value comes from file, determine conversion
DESCRIBE FIELD <fs> TYPE l_datat.
CASE l_datat.
WHEN 'N'.
SPLIT l_ltext AT p_separ INTO l_stext l_ltext.
WRITE l_stext TO <fs> RIGHT-JUSTIFIED.
OVERLAY <fs> WITH '0000000000000000'. "max 16
WHEN 'P'.
SPLIT l_ltext AT p_separ INTO l_stext l_ltext.
TRANSLATE l_stext USING ',.'.
<fs> = l_stext.
WHEN 'F'.
SPLIT l_ltext AT p_separ INTO l_stext l_ltext.
TRANSLATE l_stext USING ',.'.
<fs> = l_stext.
WHEN 'D'.
SPLIT l_ltext AT p_separ INTO l_stext l_ltext.
TRANSLATE l_stext USING '/.-.'.
CALL FUNCTION 'CONVERT_DATE_TO_INTERNAL'
EXPORTING
date_external = l_stext
IMPORTING
date_internal = <fs>
EXCEPTIONS
OTHERS = 1.
WHEN 'T'.
* Bug fix: split off the next token first, as the other branches do
SPLIT l_ltext AT p_separ INTO l_stext l_ltext.
CALL FUNCTION 'CONVERT_TIME_INPUT'
EXPORTING
input = l_stext
IMPORTING
output = <fs>
EXCEPTIONS
OTHERS = 1.
WHEN OTHERS.
SPLIT l_ltext AT p_separ INTO <fs> l_ltext.
ENDCASE.
ELSE.
SHIFT l_ltext UP TO p_separ.
SHIFT l_ltext.
ENDIF.
ENDLOOP.
IF NOT <data> IS INITIAL.
LOOP AT it_dd03p INTO is_dd03p WHERE datatype = 'CLNT'.
* This field is the client (MANDT)
ASSIGN COMPONENT is_dd03p-fieldname OF STRUCTURE <data>
TO <fs>.
<fs> = sy-mandt.
ENDLOOP.
* Loop over all fields so is_dd03p is positioned for the audit-field checks
LOOP AT it_dd03p INTO is_dd03p.
IF p_create = 'X'.
IF is_dd03p-rollname = 'ERDAT'.
ASSIGN COMPONENT is_dd03p-fieldname OF STRUCTURE <data>
TO <fs>.
<fs> = sy-datum.
ENDIF.
IF is_dd03p-rollname = 'ERZET'.
ASSIGN COMPONENT is_dd03p-fieldname OF STRUCTURE <data>
TO <fs>.
<fs> = sy-uzeit.
ENDIF.
IF is_dd03p-rollname = 'ERNAM'.
ASSIGN COMPONENT is_dd03p-fieldname OF STRUCTURE <data>
TO <fs>.
<fs> = sy-uname.
ENDIF.
ENDIF.
IF is_dd03p-rollname = 'AEDAT'.
ASSIGN COMPONENT is_dd03p-fieldname OF STRUCTURE <data>
TO <fs>.
<fs> = sy-datum.
ENDIF.
IF is_dd03p-rollname = 'AETIM'.
ASSIGN COMPONENT is_dd03p-fieldname OF STRUCTURE <data>
TO <fs>.
<fs> = sy-uzeit.
ENDIF.
IF is_dd03p-rollname = 'AENAM'.
ASSIGN COMPONENT is_dd03p-fieldname OF STRUCTURE <data>
TO <fs>.
<fs> = sy-uname.
ENDIF.
ENDLOOP.
APPEND <data> TO <table>.
ENDIF.
ENDLOOP.
ENDFORM.
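The per-type CASE in f_process_data boils down to a handful of conversion rules: numeric-text (N) fields are right-justified and zero-padded, packed (P) and float (F) fields have their decimal comma translated to a dot, and dates are normalized to internal YYYYMMDD form. A rough language-neutral sketch in Python of these same rules (the field lengths and the DD.MM.YYYY input assumption are illustrative, not taken from the ABAP dictionary):

```python
def convert_field(raw: str, abap_type: str, length: int = 16) -> str:
    """Mimic the CASE in f_process_data for one CSV token."""
    if abap_type == "N":          # numeric text: right-justify, pad with zeros
        return raw.strip().rjust(length, "0")
    if abap_type in ("P", "F"):   # packed/float: decimal comma -> dot
        return raw.replace(",", ".")
    if abap_type == "D":          # date: normalize separators, then YYYYMMDD
        parts = raw.replace("/", ".").replace("-", ".").split(".")
        day, month, year = parts  # assumes DD.MM.YYYY input
        return f"{year}{month.zfill(2)}{day.zfill(2)}"
    return raw                    # character-like types: take as-is

print(convert_field("42", "N", 8))       # 00000042
print(convert_field("3,14", "P"))        # 3.14
print(convert_field("31/12/2024", "D"))  # 20241231
```

In the real program the conversion target is a field symbol assigned to a component of the dynamically created work area, and date conversion is delegated to CONVERT_DATE_TO_INTERNAL rather than done by hand.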
* FORM f_modify_table
FORM f_modify_table.
SKIP.
IF p_save = 'X'.
MODIFY (p_table) FROM TABLE <table>.
ELSEIF p_dele = 'X'.
DELETE (p_table) FROM TABLE <table>.
ELSE.
EXIT.
ENDIF.
IF sy-subrc EQ 0.
FORMAT COLOR COL_POSITIVE.
IF p_save = 'X'.
WRITE: /3 'Modify table OK'.
ELSE.
WRITE: /3 'Delete table OK'.
ENDIF.
FORMAT RESET.
IF p_test IS INITIAL.
COMMIT WORK.
ELSE.
ROLLBACK WORK.
WRITE: '- test only, no update'.
ENDIF.
ELSE.
FORMAT COLOR COL_NEGATIVE.
WRITE: /3 'Error while modifying table'.
FORMAT RESET.
ENDIF.
ENDFORM.
* FORM f_list_records
FORM f_list_records.
DATA: l_tleng TYPE i,
l_lasti TYPE i,
l_offst TYPE i.
* Output width
l_tleng = 1.
LOOP AT it_dd03p INTO is_dd03p.
l_tleng = l_tleng + is_dd03p-outputlen.
IF l_tleng LT sy-linsz.
l_lasti = sy-tabix.
l_tleng = l_tleng + 1.
ELSE.
l_tleng = l_tleng - is_dd03p-outputlen.
EXIT.
ENDIF.
ENDLOOP.
* Output header
SKIP.
FORMAT COLOR COL_HEADING.
WRITE: /3 'Contents'.
FORMAT RESET.
ULINE AT /3(l_tleng).
* Output records
LOOP AT <table> ASSIGNING <data>.
LOOP AT it_dd03p INTO is_dd03p FROM 1 TO l_lasti.
IF is_dd03p-position = 1.
WRITE: /3 sy-vline.
l_offst = 3.
ENDIF.
ASSIGN COMPONENT is_dd03p-fieldname OF STRUCTURE <data> TO <fs>.
l_offst = l_offst + 1.
IF is_dd03p-decimals LE 2.
WRITE: AT l_offst <fs>.
ELSE.
WRITE: AT l_offst <fs> DECIMALS 3.
ENDIF.
l_offst = l_offst + is_dd03p-outputlen.
WRITE: AT l_offst sy-vline.
ENDLOOP.
ENDLOOP.
* Output end
ULINE AT /3(l_tleng).
ENDFORM.
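The header handling in f_upload_data follows a simple pattern: split the first file row at the separator to collect the field names, then check that every key field of the target table (except the client field) appears among them. A minimal Python sketch of that logic, with invented column names for illustration:

```python
def parse_header(line: str, sep: str = ";") -> list[str]:
    """Split the header row into field names, keeping the last token."""
    return [f.strip() for f in line.split(sep) if f.strip()]

def missing_keys(header: list[str], key_fields: list[str]) -> list[str]:
    """Key fields the file does not supply (client column excluded)."""
    return [k for k in key_fields if k != "MANDT" and k not in header]

header = parse_header("MATNR;WERKS;MAKTX")
print(missing_keys(header, ["MANDT", "MATNR", "WERKS"]))  # []
print(missing_keys(header, ["MANDT", "MATNR", "LGORT"]))  # ['LGORT']
```

As in the ABAP program, a non-empty missing-keys result should stop processing before any database update is attempted.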
Regards,
Joy.
-
CSV file upload for database table creation
Hi there,
I'm in the process of making an application that can upload a CSV file and create a table from it. So far, my application uploads the CSV file into the database; my problem now is transferring the data from the CSV into a table. If there is a function that can do this, please let me know. I have tried everything I can think of, but in vain, and I would appreciate any assistance on how to go about this.
Kind regards,
Lusuntha
Hi Lusuntha,
Go to the forum search and type "upload within html db". There you will find the required information as well as the code; go through each topic in the search results.
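Outside HTML DB specifically, the general pattern is the same everywhere: read the header row, create a table whose columns match it, then bulk-insert the remaining rows. A hedged sketch using Python's csv and sqlite3 modules (the file content, table name, and all-TEXT column typing are illustrative assumptions):

```python
import csv
import io
import sqlite3

# Hypothetical CSV content; in practice this would be the uploaded file.
data = io.StringIO("id,name,city\n1,Alice,Oslo\n2,Bob,Lima\n")

rows = list(csv.reader(data))
header, body = rows[0], rows[1:]

conn = sqlite3.connect(":memory:")
# Build the table from the header row (all columns as TEXT for simplicity).
cols = ", ".join(f'"{c}" TEXT' for c in header)
conn.execute(f"CREATE TABLE uploaded ({cols})")
# Parameterized bulk insert of the data rows.
placeholders = ", ".join("?" for _ in header)
conn.executemany(f"INSERT INTO uploaded VALUES ({placeholders})", body)

print(conn.execute("SELECT COUNT(*) FROM uploaded").fetchone()[0])  # 2
```

Using csv.reader rather than a plain split also handles quoted fields that contain the delimiter, which a naive split would break on.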