Spooled data to .csv

Hi Guys,
Please can someone tell me if there is a way to spool two queries to a .csv file so that the output of the first query goes to one tab and the output of the other to a second tab in Excel? Both queries return different information.
Regards,
G

You could set up Excel to obtain its data from external data sources, set up that external data source as an ODBC connection to an Oracle database, and issue the queries through that, but then you're getting into the realms of Excel development, which is outside the scope of this forum.
If you've already done as someone else suggested above, and searched this forum, you will find recent posts where a couple of members have made available their database packages that allow you to create Excel XML files from within PL/SQL.
;)
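A .csv file is plain text and has no concept of worksheets, so a single spool file cannot give you two tabs. A common workaround is to spool each query to its own file and then pull both files into one workbook in Excel. A minimal SQL*Plus sketch, with hypothetical queries and file names:
set colsep ','
set pagesize 0
set trimspool on
set feedback off
spool sales_summary.csv
select region, sum(amount) from sales group by region;      -- hypothetical query 1
spool off
spool customer_list.csv
select customer_id, customer_name from customers;           -- hypothetical query 2
spool off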

Similar Messages

  • Issue with spooling data to csv file

    Hi,
    I am trying to spool table data to a csv file using the spool command at the command prompt, and it is spooling around a thousand records.
    I am able to see all the thousand records on the DOS prompt while the spooling is happening.
    Instead of that, is there any other way that will stop printing the output on the DOS prompt and spool it directly to the csv file?
    Please help me...
    Thanks,
    Ravi.

    user10403630 wrote:
    Thanks for all your suggestions...
    I have tried a sample script which has only SET TERMOUT OFF and it's working fine...
    but my script has something like the below:
    SET ESCAPE ON
    SET SERVEROUTPUT ON
    SET ECHO OFF
    SET HEADING ON
    SET COLSEP ','
    SET FEEDBACK OFF
    SET PAGESIZE 0
    SET LINESIZE 5000
    SET TRIMSPOOL OFF
    SET TERMOUT OFF
    This is not working...
    I believe something is trying to override the functionality of SET TERMOUT OFF in this case.
    Please help me....
    So go to the SQL*Plus reference manual and read up on each of the SET options you listed above.
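    One thing worth knowing about SET TERMOUT OFF: it only suppresses output for commands read from a script file; it has no effect on statements typed interactively. A minimal sketch, assuming the settings are saved in a script (here called extract.sql, with a hypothetical table and spool file) and run with @ or sqlplus -s:
    -- run from the OS prompt:  sqlplus -s user/password @extract.sql
    set termout off
    set echo off
    set feedback off
    set colsep ','
    set pagesize 0
    set linesize 5000
    set trimspool on
    spool output.csv
    select * from some_table;      -- hypothetical table
    spool off
    set termout on
    exit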

  • Spooling data in CSV Format

    How can I make a CSV format file through SQL?

    That would work only if the columns contain no commas or double quotes. In general, you would use something like:
    '"' || replace(col1,'"','""') || '","' || replace(col2,'"','""') ... || '","' || replace(coln,'"','""') || '"'
    Also, we'd need to set proper linesize and set trimspool on.
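    To make the quoting concrete, a minimal SQL*Plus sketch against the SCOTT.EMP demo table (swap in your own table and columns); every value is wrapped in double quotes and embedded double quotes are doubled:
    set linesize 32767
    set trimspool on
    set pagesize 0
    set feedback off
    spool emp.csv
    select '"' || replace(ename, '"', '""') || '","' ||
           replace(job,   '"', '""') || '","' ||
           to_char(sal)              || '"'
    from   emp;
    spool off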
    SY.

  • Problems With Data Alignment when spooling to a CSV file

    Dear members,
    I am spooling data to a csv file. My data contains 3 columns
    For example :
    col1 col2 col3
    USD,10000033020000000000000,-1144206.34
    The 2nd column is alphanumeric, it contains some rows which have only numbers and some which have numbers and alphabets.
    The 3rd column contains only numbers with positive or negative values.
    I am facing a problem with alignment. When I open the spooled csv file, I find that the 3rd column is aligned to the right.
    In the 2nd column, rows which have only numbers are right-justified and rows which have alphanumeric data are left-justified.
    I tried using the JUSTIFY option in SQL*Plus but it is still not working for me.
    Can anybody give an opinion on how to control the alignment in spooled csv files?
    Your response is highly appreciated.
    Here is my code :
    WHENEVER SQLERROR CONTINUE
    SET TIMING off
    set feedback off
    set heading off
    set termout OFF
    set pagesize 0
    set linesize 200
    set verify off
    set trimspool ON
    SET NEWPAGE NONE
    col to_char(glcd.segment1||glcd.segment2||glcd.segment3||glcd.segment4||glcd.segment5||glcd.segment6) ALIAS CONCATENATED_SEGMENTS
    col CONCATENATED_SEGMENTS justify left
    col to_char(decode(glbal.currency_code,glsob.currency_code,
    (begin_balance_dr - begin_balance_cr) + (period_net_dr -period_net_cr),
    (begin_balance_dr_beq - begin_balance_cr_beq) + (period_net_dr_beq -period_net_cr_beq))) alias Total_Functional_Currency
    col Total_Functional_Currency justify left
    COLUMN V_INSTANCE NEW_VALUE V_inst noprint
    select trim(lower(instance_name)) V_INSTANCE
    from v$instance;
    column clogname new_value logname
    select '/d01/oracle/'|| '&&V_inst' ||'out/outbound/KEMET_BALANCE_FILE_EXTRACT' clogname from dual;
    spool &&logname..csv
    SELECT glsob.currency_code ||','||
    to_char(glcd.segment1||glcd.segment2||glcd.segment3||glcd.segment4||glcd.segment5||glcd.segment6) ||','||
    to_char(decode(glbal.currency_code,glsob.currency_code,
    (begin_balance_dr - begin_balance_cr) + (period_net_dr -period_net_cr),
    (begin_balance_dr_beq - begin_balance_cr_beq) + (period_net_dr_beq -period_net_cr_beq)))
    from gl_balances glbal , gl_code_combinations glcd , gl_sets_of_books glsob
    where      period_name = '&1' /* Period Name */
    and      glbal.translated_flag IS NULL
    and      glbal.code_combination_id = glcd.code_combination_id
    and      glbal.set_of_books_id = glsob.set_of_books_id
    and      glbal.actual_flag = 'A'
    and      glsob.short_name in ('KEC-BOOKS' , 'KUE' , 'KEU','KEMS', 'KEAL' , 'KEAL-TW' , 'KEAL-SZ' , 'KEAM')
    and glcd.segment1 != '05'
    and decode(glbal.currency_code , glsob.currency_code , (begin_balance_dr - begin_balance_cr) + (period_net_dr -period_net_cr) ,
    (begin_balance_dr_beq - begin_balance_cr_beq) + (period_net_dr_beq -period_net_cr_beq)) != 0
    and glbal.template_id IS NULL
    ORDER BY glcd.segment1 || glcd.segment2 || glcd.segment3 || glcd.segment4 || glcd.segment5 || glcd.segment6;
    spool off
    SET TIMING on
    set termout on
    set feedback on
    set heading on
    set pagesize 35
    set linesize 100
    set echo on
    set verify on
    Thanks
    Sandeep

    I think you do not have to worry about your code if the plain text files created look OK when opened in Notepad. It is in Excel that you will need some adjustments. I'm not sure about this, but you might want to read about applying styles in Excel by going through its Help menu.

  • How to export data to .csv and send that file to folder using sql commands?

    Really hoping someone can provide some code sample or point me in the right direction (with examples using sql).
    I am using Oracle 11g with PL/SQL Developer. My requirement is that the data retrieval should be automatic: when I query data in an Oracle table, the data should be converted into a .csv file and that file should then be sent to a specific folder.
    Example: assume the student_sub1 table has all students' subject1 data. When I write a query like select * from student_sub1 where student=1, the student1 data should be converted into a .csv file and the .csv file should go to the student1 folder.
    Hence I am requesting you to suggest how to write SQL code to convert the data into a .csv file and to create a folder to send that file to, using Oracle.
    Your inputs will help me a lot.
    Thanks in advance!!!!
    Waiting for your replies.... Cheers

    981145 wrote:
    Really hoping someone can provide some code sample or point me in the right direction (with examples using sql).
    I am using Oracle 11g with PL/SQL Developer. My requirement is that the data retrieval should be automatic: when I query data in an Oracle table, the data should be converted into a .csv file and that file should then be sent to a specific folder.
    Example: assume the student_sub1 table has all students' subject1 data. When I write a query like select * from student_sub1 where student=1, the student1 data should be converted into a .csv file and the .csv file should go to the student1 folder.
    Hence I am requesting you to suggest how to write SQL code to convert the data into a .csv file and to create a folder to send that file to, using Oracle.
    Your inputs will help me a lot.
    Thanks in advance!!!!
    Waiting for your replies.... Cheers
    SET COLSEP ,
    spool student1.csv
    select * from student_sub1 where student=1;
    spool off
    How do I ask a question on the forums?
    SQL and PL/SQL FAQ
    Handle:     981145
    Status Level:     Newbie
    Registered:     Jan 10, 2013
    Total Posts:     25
    Total Questions:     10 (8 unresolved)
    I extend my condolences to you since you rarely get answers to your questions here.
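    For the second half of the question (getting the file into a specific server-side folder), a minimal PL/SQL sketch using UTL_FILE; the directory object, column names, and file name below are hypothetical and must match your environment:
    -- CREATE DIRECTORY student_dir AS '/u01/app/csv_out';   -- run once as a privileged user
    DECLARE
      l_file UTL_FILE.FILE_TYPE;
    BEGIN
      l_file := UTL_FILE.FOPEN('STUDENT_DIR', 'student1.csv', 'w', 32767);
      UTL_FILE.PUT_LINE(l_file, 'STUDENT,SUBJECT,MARKS');     -- header row
      FOR r IN (SELECT student, subject, marks                -- hypothetical columns
                FROM   student_sub1
                WHERE  student = 1)
      LOOP
        UTL_FILE.PUT_LINE(l_file, r.student || ',' || r.subject || ',' || r.marks);
      END LOOP;
      UTL_FILE.FCLOSE(l_file);
    EXCEPTION
      WHEN OTHERS THEN
        IF UTL_FILE.IS_OPEN(l_file) THEN
          UTL_FILE.FCLOSE(l_file);
        END IF;
        RAISE;
    END;
    /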

  • Problem in converting table data into CSV file

    Hi All,
    In my process I need to convert my error table data into a csv file. My data is converted to a csv file by using the OdiSqlUnload tool, but the column headers are not included, so I use another procedure for adding the column headers, but I am getting the below error ...
    com.sunopsis.sql.SnpsMissingParametersException: Missing parameter string.find, string.find
    SQL: import string import java.sql as sql import java.lang as lang import re sourceConnection = odiRef.getJDBCConnection("SRC") output_write=open('C:/Oracle/Middleware/Oracle_ODI2/oracledi/pro/PRO.txt','r+') myStmt = sourceConnection.createStatement() my_query = "select * FROM E$_LOCAL_F0911Z1" my_query=my_query.upper() if string.find(my_query, '*') > 0: myRs = myStmt.executeQuery(my_query) md=myRs.getMetaData() collect=[] i=1 while (i <= md.getColumnCount()): collect.append(md.getColumnName(i)) i += 1 header=','.join(map(string.strip, collect)) elif string.find(my_query,'||') > 0: header = my_query[7:string.find(my_query, 'FROM')].replace("||','||",',') else: header = my_query[7:string.find(my_query, 'FROM')] print header old=output_write.read() output_write.seek(0) output_write.write (header+'\n'+old) sourceConnection.close() output_write.close()
    And I used the below code for converting:
    import string
    import java.sql as sql
    import java.lang as lang
    import re
    sourceConnection = odiRef.getJDBCConnection("SRC")
    output_write = open('C:/Oracle/Middleware/Oracle_ODI2/oracledi/pro/PRO.txt','r+')
    myStmt = sourceConnection.createStatement()
    my_query = "select * FROM E$_COMPANY"
    my_query = my_query.upper()
    if string.find(my_query, '*') > 0:
        myRs = myStmt.executeQuery(my_query)
        md = myRs.getMetaData()
        collect = []
        i = 1
        while (i <= md.getColumnCount()):
            collect.append(md.getColumnName(i))
            i += 1
        header = ','.join(map(string.strip, collect))
    elif string.find(my_query, '||') > 0:
        header = my_query[7:string.find(my_query, 'FROM')].replace("||','||", ',')
    else:
        header = my_query[7:string.find(my_query, 'FROM')]
    print header
    old = output_write.read()
    output_write.seek(0)
    output_write.write(header + '\n' + old)
    sourceConnection.close()
    output_write.close()
    Can anyone help regarding this?
    Edited by: 30021986 on Oct 1, 2012 6:04 PM

    This may not be an option for you, but in a pinch you may want to consider outputting your data to an MS Excel spreadsheet and then saving it as a CSV. It's somewhat of a cumbersome process, but it will get you by for now.
    You will need to change your content type to application/vnd.ms-excel.
    <% response.setContentType("application/vnd.ms-excel"); %>

  • How to restrict spool data in the next line in pdf print

    Hi Experts,
    I have spool data which is going to be printed in PDF format.
    If a line is too long, then it is printed on the next line,
    but that next line is not starting after one tab space.
    So how do I remove that space?
    can anybody give me help.
    Regards,
    venkat

    Hi,
    The output is displayed in the pdf file in the given format:
    ex:-
    919   sub     00001      44445       testmater
             ial     final
    Here the test material value is not coming fully on the first line; it is cut and displayed across 2 lines.
    Regards,
    venkat

  • Loading data from .csv file into Oracle Table

    Hi,
    I have a requirement where I need to populate data from a .csv file into an Oracle table.
    Is there a mechanism I can follow to do this?
    Any help will be fruitful.
    Thanks and regards

    You can use SQL*Loader or external tables for your requirement.
    Missed Karthick's post ... already there :)
    Edited by: Rajneesh Kumar on Dec 4, 2008 10:54 AM
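    If the file sits on the database server, an external table keeps the whole load in SQL; a minimal sketch, where the directory path, table, and column names are hypothetical and must match the real file layout:
    create or replace directory data_dir as '/u01/app/data';

    create table employees_ext (
      emp_id    number,
      emp_name  varchar2(100),
      dept_name varchar2(100)
    )
    organization external (
      type oracle_loader
      default directory data_dir
      access parameters (
        records delimited by newline
        skip 1                                          -- skip the header row
        fields terminated by ',' optionally enclosed by '"'
      )
      location ('employees.csv')
    )
    reject limit unlimited;

    -- then load it like any other table
    insert into employees
    select emp_id, emp_name, dept_name from employees_ext;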

  • Loading data from .csv file into existing table

    Hi,
    I have taken a look at several threads which talk about loading data from .csv file into existing /new table. Also checked out Vikas's application regarding the same. I am trying to explain my requirement with an example.
    I have a .csv file and I want the data to be loaded into an existing table. The timesheet table columns are -
    timesheet_entry_id,time_worked,timesheet_date,project_key .
    The csv columns are :
    project,utilization,project_key,timesheet_category,employee,timesheet_date , hours_worked etc.
    What I need to know is whether, before the csv data is loaded into the timesheet table, there is any way of validating the project key (which is the primary key of the projects table) against the projects table. I need to perform similar validations on other columns, such as customer_id against the customers table. Basically the loading should be done only after validating that the data exists in the parent table. Has anyone done this kind of loading through the APEX data load utility? Or is there another method of accomplishing the same?
    Does Vikas's application do what the utility does (I am assuming that, the code being from 2005, the utility was not incorporated in APEX at that time)? Any helpful advice is greatly appreciated.
    Thanks,
    Anjali

    Hi Anjali,
    Take a look at these threads which might outline different ways to do it -
    File Browse, File Upload
    Loading CSV file using external table
    Loading a CSV file into a table
    You can create hidden items in the page to validate previous records before inserting data.
    Hope this helps,
    M Tajuddin
    http://tajuddin.whitepagesbd.com
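    Once the CSV rows are sitting in a staging table (loaded via the data load utility, SQL*Loader, or an external table), the parent-key validation can be done in plain SQL at insert time; a minimal sketch with hypothetical staging, sequence, and column names:
    insert into timesheet (timesheet_entry_id, time_worked, timesheet_date, project_key)
    select timesheet_seq.nextval,                        -- hypothetical sequence
           s.hours_worked,
           s.timesheet_date,
           s.project_key
    from   timesheet_stg s                               -- hypothetical staging table
    where  exists (select 1 from projects p
                   where  p.project_key = s.project_key);
    -- repeat the same EXISTS pattern for customers.customer_id and any other parent keys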

  • How to load the data from .csv file to oracle table???

    Hi,
    I am using Oracle 10g and PL/SQL Developer. Can anyone help me with how to load the data from a .csv file into an Oracle table? The table is already created with the required columns. The .csv file has about 10 lakh (1,000,000) records. Is it possible to load 10 lakh records? Can anyone please tell me how to proceed?
    Thanks in advance

    981145 wrote:
    Can you tell me more about SQL*Loader? How do I know whether that utility is available to me or not? I am using an Oracle 10g database and PL/SQL Developer.
    SQL*Loader is part of the Oracle client. If you have a developer installation you should normally have it on your client.
    The command is sqlldr. Type it and see if you have it installed.
    Have a look also at the FAQ link posted by Marwin.
    There are plenty of examples also on the web.
    Regards.
    Al
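    For a file of that size, SQL*Loader is usually the simplest route; a minimal sketch of a control file and command line, with hypothetical column names (student_sub1 comes from the question):
    -- students.ctl
    LOAD DATA
    INFILE 'students.csv'
    APPEND
    INTO TABLE student_sub1
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    TRAILING NULLCOLS
    ( student_id,
      subject_code,
      marks
    )
    -- invoke from the OS prompt; DIRECT=TRUE speeds up large loads:
    -- sqlldr userid=scott/tiger control=students.ctl log=students.log direct=true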

  • Error while running bulk load utility for account data with CSV file

    Hi All,
    I'm trying to run the bulk load utility for account data using a CSV file, but I'm getting the following error...
    ERROR ==> The number of CSV files provided as input does not match with the number of account tables.
    Thanks in advance........

    Please check your child table.
    http://docs.oracle.com/cd/E28389_01/doc.1111/e14309/bulkload.htm#CHDCGGDA
    -kuldeep

  • Error while load the data from CSV with CTL file..?

    Hi TOM,
    When I try to load data from a CSV file into this table,
    CTL File content:
    load data
    into table XXXX append
         Y_aca position char (3),
         x_date position date 'yyyy/mm/dd'
    NULLIF (x_date = ' '),
    X_aca position (* + 3) char (6)
    "case when :Y_aca = 'ABCDDD' and :XM_dt is null then
    decode(:X_aca,'AB','BA','CD',
    'DC','EF','FE','GH','HG',:X_aca)
    else :X_aca
    end as X_aca",
    Z_cdd position char (2),
         XM_dt position date 'yyyy/mm/dd'
    NULLIF XM_dt = ' ',
    When I try the above CTL file, I get the following error:
    SQL*Loader-281: Warning: ROWS parameter ignored in parallel mode.
    SQL*Loader-951: Error calling once/load initialization
    ORA-02373: Error parsing insert statement for table "XYZ"."XXXX".
    ORA-00917: missing comma

    Possible Solutions
    Make sure that the data source is valid.
    Is a member from each dimension specified correctly in the data source or rules file?
    Is the numeric data field at the end of the record? If not, move the numeric data field in the data source or move the numeric data field in the rules file.
    Are all members that might contain numbers (such as "100") enclosed in quotation marks in the data source?
    If you are using a header, is the header set up correctly? Remember that you can add missing dimension names to the header.
    Does the data source contain extra spaces or tabs?
    Has the updated outline been saved?
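    Coming back to the SQL*Loader error itself: ORA-00917 here usually means the generated INSERT's column list is malformed, because the field definitions are not wrapped in a single pair of parentheses with a comma after every field. A hedged sketch of the intended structure, using placeholder positions since the real start/end positions are not shown in the post:
    LOAD DATA
    INFILE 'input.dat'
    APPEND
    INTO TABLE xxxx
    ( y_aca   POSITION(1:3)   CHAR,
      x_date  POSITION(4:13)  DATE 'yyyy/mm/dd' NULLIF x_date = BLANKS,
      x_aca   POSITION(*+3)   CHAR(6)
              "case when :y_aca = 'ABCDDD' and :xm_dt is null
                    then decode(:x_aca,'AB','BA','CD','DC','EF','FE','GH','HG',:x_aca)
                    else :x_aca end",
      z_cdd   POSITION(*)     CHAR(2),
      xm_dt   POSITION(*)     DATE 'yyyy/mm/dd' NULLIF xm_dt = BLANKS
    )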

  • How to import data from CSV file with columns separated by semicolon?

    I am migrating a database from MS SQL 2008 to Oracle 11g.
    I export the data to CSV files from MS SQL and then try to import them into Oracle.
    Several tables go fine using the Import Data option in SQL Developer:
    standard CSV files with data separated by commas were imported, and
    char, date (with a format string), and integer data are imported via the import wizard without problems.
    The problems start when I try to import a table with non-integer numbers whose decimal part is separated by a comma instead of a dot.
    The comma is the standard separator for columns in a CSV file,
    so I must change the standard separator to a semicolon,
    but then the import wizard has problems recognizing the column data correctly because it only uses the standard CSV comma separator :-/
    In SQL Developer 1.5.3, under Tools -> Preferences -> Migration -> Data Move Options,
    I changed "End of Column Delimiter" to ; but it doesn't work.
    Is it possible to change the standard column separator for the Import Data wizard in SQL Developer 1.5.3?
    Or maybe someone knows how to import data in SQL Developer 1.5.3 from a CSV file whose column separator is set to semicolon?

    A new preference has been added to customize the import delimiter in the main code line. This should be available as part of a future release.

  • Data in CSV uploads successfully, but it is not visible after upload.

    Hi,
    I am using Apex 3.2 on Oracle 11g.
    This is an imported application for which I am making changes as per my requirements. As I am new to Apex and even SQL, I request forum members to help me with this.
    Please find below the old code for uploading data from CSV. It displays only 6 columns - Database Name, Server Name, Application Name, Application Provider, Critical, Remarks. This was successfully uploading all the data from CSV and that data was visible after upload.
    OLD CODE:
    --PLSQL code for uploading application details
    DECLARE
    v_blob_data      BLOB;
    v_blob_len      NUMBER;
    v_position      NUMBER;
    v_raw_chunk      RAW(10000);
    v_char           CHAR(1);
    c_chunk_len           NUMBER:= 1;
    v_line           VARCHAR2 (32767):= NULL;
    v_data_array      wwv_flow_global.vc_arr2;
    v_rows           NUMBER;
    v_count           NUMBER;
    v_dbid           NUMBER;
    v_serverid           NUMBER;
    v_sr_no          NUMBER:=1;
    v_last_char          varchar2(2);
    BEGIN
    -- Read data from wwv_flow_files
    SELECT blob_content INTO v_blob_data FROM wwv_flow_files
    WHERE last_updated = (SELECT MAX(last_updated) FROM wwv_flow_files WHERE UPDATED_BY = :APP_USER)
    AND id = (SELECT MAX(id) FROM wwv_flow_files WHERE updated_by = :APP_USER);
    v_blob_len := dbms_lob.getlength(v_blob_data);
    v_position := 1;
    -- For removing the first line
    WHILE ( v_position <= v_blob_len )
    LOOP
    v_raw_chunk := dbms_lob.substr(v_blob_data,c_chunk_len,v_position);
    v_char := chr(hex_to_decimal(rawtohex(v_raw_chunk)));
    v_position := v_position + c_chunk_len;
    -- When a whole line is retrieved
    IF v_char = CHR(10) THEN
    EXIT;
    END IF;
    END LOOP;
    -- Read and convert binary to char
    WHILE ( v_position <= v_blob_len )
    LOOP
    v_raw_chunk := dbms_lob.substr(v_blob_data,c_chunk_len,v_position);
    v_char := chr(hex_to_decimal(rawtohex(v_raw_chunk)));
    v_line := v_line || v_char;
    v_position := v_position + c_chunk_len;
    -- When a whole line is retrieved
    IF v_char = CHR(10) THEN
    --removing the new line character added in the end
    v_line := substr(v_line, 1, length(v_line)-2);
    --removing the double quotes
    v_line := REPLACE (v_line, '"', '');
    --checking the absense of data in the end
    v_last_char:= substr(v_line,length(v_line),1);
    IF v_last_char = CHR(44) THEN
         v_line :=v_line||'-';
    END IF;
    -- Convert each column separated by , into array of data
    v_data_array := wwv_flow_utilities.string_to_table (v_line, ',');
    -- Insert data into target tables
    SELECT SERVERID into v_serverid FROM REPOS_SERVERS WHERE SERVERNAME=v_data_array(2);
    SELECT DBID into v_dbid FROM REPOS_DATABASES WHERE DBNAME=v_data_array(1) AND SERVERID=v_serverid;
    --Checking whether the data already exist
    SELECT COUNT(APPID) INTO v_count FROM REPOS_APPLICATIONS WHERE DBID=v_dbid AND APPNAME=v_data_array(1);
    IF v_count = 0 THEN
    EXECUTE IMMEDIATE 'INSERT INTO
    REPOS_APPLICATIONS (APPID,APPNAME,APP_PROVIDER,DBID,SERVERID,CRITICAL,LAST_UPDATE_BY,LAST_UPDATE_DATE,REMARKS) VALUES(:1,:2,:3,:4,:5,:6,:7,:8,:9)'
    USING
    APP_ID_SEQ.NEXTVAL,
    v_data_array(3),
    v_data_array(4),
    v_dbid,
    v_serverid,
    v_data_array(5),
    v_data_array(6),
    v_data_array(7),
    v_data_array(8);
    END IF;
    -- Clearing out the previous line
    v_line := NULL;
    END IF;
    END LOOP;
    END;
    ==============================================================================================================================
    Please find below the new code (which I modified as per my requirements) for uploading data from CSV. It displays 17 columns - Hostname, IP Address, Env Type, Env Num, Env Name, Application, Application Component, Notes, Cluster , Load Balanced, Business User Access Mechanism for Application, Env Owner, Controlled Environment, SSO Enabled, ADSI / LDAP / External Directory Authentication, Disaster Recovery Solution in Place, Interfaces with other application.
    This is successfully uploading all the data from CSV, but the uploaded data is not visible in its respective tab.
    NEW CODE:
    --PLSQL code for uploading application details
    DECLARE
    v_blob_data      BLOB;
    v_blob_len      NUMBER;
    v_position      NUMBER;
    v_raw_chunk      RAW(10000);
    v_char           CHAR(1);
    c_chunk_len           NUMBER:= 1;
    v_line           VARCHAR2 (32767):= NULL;
    v_data_array      wwv_flow_global.vc_arr2;
    v_rows           NUMBER;
    v_count           NUMBER;
    v_dbid           NUMBER;
    v_serverid           NUMBER;
    v_sr_no          NUMBER:=1;
    v_last_char          varchar2(2);
    BEGIN
    -- Read data from wwv_flow_files
    SELECT blob_content INTO v_blob_data FROM wwv_flow_files
    WHERE last_updated = (SELECT MAX(last_updated) FROM wwv_flow_files WHERE UPDATED_BY = :APP_USER)
    AND id = (SELECT MAX(id) FROM wwv_flow_files WHERE updated_by = :APP_USER);
    v_blob_len := dbms_lob.getlength(v_blob_data);
    v_position := 1;
    -- For removing the first line
    WHILE ( v_position <= v_blob_len )
    LOOP
    v_raw_chunk := dbms_lob.substr(v_blob_data,c_chunk_len,v_position);
    v_char := chr(hex_to_decimal(rawtohex(v_raw_chunk)));
    v_position := v_position + c_chunk_len;
    -- When a whole line is retrieved
    IF v_char = CHR(10) THEN
    EXIT;
    END IF;
    END LOOP;
    -- Read and convert binary to char
    WHILE ( v_position <= v_blob_len )
    LOOP
    v_raw_chunk := dbms_lob.substr(v_blob_data,c_chunk_len,v_position);
    v_char := chr(hex_to_decimal(rawtohex(v_raw_chunk)));
    v_line := v_line || v_char;
    v_position := v_position + c_chunk_len;
    -- When a whole line is retrieved
    IF v_char = CHR(10) THEN
    --removing the new line character added in the end
    v_line := substr(v_line, 1, length(v_line)-2);
    --removing the double quotes
    v_line := REPLACE (v_line, '"', '');
    --checking the absense of data in the end
    v_last_char:= substr(v_line,length(v_line),1);
    IF v_last_char = CHR(44) THEN
         v_line :=v_line||'-';
    END IF;
    -- Convert each column separated by , into array of data
    v_data_array := wwv_flow_utilities.string_to_table (v_line, ',');
    -- Insert data into target tables
    --SELECT SERVERID into v_serverid FROM REPOS_SERVERS WHERE SERVERNAME=v_data_array(2);
    --SELECT DBID into v_dbid FROM REPOS_DATABASES WHERE DBNAME=v_data_array(1) AND SERVERID=v_serverid;
    --Checking whether the data already exist
    --SELECT COUNT(APPID) INTO v_count FROM REPOS_APPLICATIONS WHERE DBID=v_dbid AND APPNAME=v_data_array(1);
    IF v_count = 0 THEN
    EXECUTE IMMEDIATE 'INSERT INTO
    REPOS_APPLICATIONS (APPID,HOSTNAME,IPADDRESS,ENV_TYPE,ENV_NUM,ENV_NAME,APPLICATION,APPLICATION_COMPONENT,NOTES,CLSTR,LOAD_BALANCED,BUSINESS,ENV_OWNER,CONTROLLED,SSO_ENABLED,ADSI,DISASTER,INTERFACES) VALUES(:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,:14,:15,:16,:17,:18)'
    USING
    APP_ID_SEQ.NEXTVAL,
    v_data_array(1),
    v_data_array(2),
    v_data_array(3),
    v_data_array(4),
    v_data_array(5),
    v_data_array(6),
    v_data_array(7),
    v_data_array(8),
    v_data_array(9),
    v_data_array(10),
    v_data_array(11),
    v_data_array(12),
    v_data_array(13),
    v_data_array(14),
    v_data_array(15),
    v_data_array(16),
    v_data_array(17);
    END IF;
    -- Clearing out the previous line
    v_line := NULL;
    END IF;
    END LOOP;
    END;
    ============================================================================================================================
    FYI, the CREATE TABLE is as below:
    CREATE TABLE "REPOS_APPLICATIONS"
    (     "APPID" NUMBER,
         "APPNAME" VARCHAR2(50),
         "APP_PROVIDER" VARCHAR2(50),
         "DBID" NUMBER,
         "CRITICAL" VARCHAR2(3),
         "REMARKS" VARCHAR2(255),
         "LAST_UPDATE_DATE" TIMESTAMP (6) DEFAULT SYSDATE NOT NULL ENABLE,
         "LAST_UPDATE_BY" VARCHAR2(10),
         "SERVERID" NUMBER,
         "HOSTNAME" VARCHAR2(20),
         "IPADDRESS" VARCHAR2(16),
         "ENV_TYPE" VARCHAR2(20),
         "ENV_NUM" VARCHAR2(20),
         "ENV_NAME" VARCHAR2(50),
         "APPLICATION" VARCHAR2(50),
         "APPLICATION_COMPONENT" VARCHAR2(50),
         "NOTES" VARCHAR2(255),
         "CLSTR" VARCHAR2(20),
         "LOAD_BALANCED" VARCHAR2(20),
         "BUSINESS" VARCHAR2(255),
         "ENV_OWNER" VARCHAR2(20),
         "CONTROLLED" VARCHAR2(20),
         "SSO_ENABLED" VARCHAR2(20),
         "ADSI" VARCHAR2(20),
         "DISASTER" VARCHAR2(50),
         "INTERFACES" VARCHAR2(50),
         CONSTRAINT "REPOS_APPLICATIONS_PK" PRIMARY KEY ("APPID") ENABLE
    );
    ALTER TABLE "REPOS_APPLICATIONS" ADD CONSTRAINT "REPOS_APPLICATIONS_R01" FOREIGN KEY ("DBID")
         REFERENCES "REPOS_DATABASES" ("DBID") ENABLE;
    ALTER TABLE "REPOS_APPLICATIONS" ADD CONSTRAINT "REPOS_APPLICATIONS_R02" FOREIGN KEY ("SERVERID")
         REFERENCES "REPOS_SERVERS" ("SERVERID") ENABLE;
    ==============================================================================================================================
    It would be of great help if someone can help me to resolve this issue with uploading data from CSV.
    Thanks & Regards
    Sharath

    Hi,
    You can see the installed dictionaries and change between them by right-clicking and choosing '''Languages''' inside a live text box eg. the box you are in when replying or right-clicking on the '''Search''' box on the top right corner of this page and choosing '''Check Spelling'''.
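    One observation on the new code as posted (an assumption, not a confirmed fix): the SELECT COUNT(APPID) INTO v_count statement is commented out, so v_count stays NULL and IF v_count = 0 can never be true, which would keep rows out of REPOS_APPLICATIONS. A minimal sketch of the change if the duplicate check is intentionally dropped:
    -- in the DECLARE section, give v_count a default so the IF branch still fires
    v_count      NUMBER := 0;
    -- (or restore a COUNT check against the new key columns before the INSERT)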

  • How to saving data in csv file

    I have a problem with saving data in a csv file. I would like to save my data to look
    like this example:
    excel preview :
    A B C
    1 10 11 12
    2 13 14 15
    As we see, all values are in separate cells: A1=10, B1=11, C1=12 ...
    so I try :
    import java.io.FileWriter;
    import java.io.PrintWriter;

    String[][] values = new String[2][3];
    values[0][0] = "10";
    values[0][1] = "11";
    values[0][2] = "12";
    values[1][0] = "13";
    values[1][1] = "14";
    values[1][2] = "15";
    PrintWriter wy = new PrintWriter(new FileWriter("test.csv"));
    for (String[] row : values) {
        for (String col : row) {
            wy.print(col + "\t");
        }
        wy.println();
    }
    wy.close();
    but the csv file looks like:
    A1=10 11 12
    A2=13 14 15
    but B1-B2 and C1-C2 are empty.
    The second step is to use the Ostermiller library:
    OutputStream out;
    out = new FileOutputStream("temp.csv");
    CSVPrinter csvp = new CSVPrinter(out);
    String[][] values = new String[2][3];
    csvp.changeDelimiter('\t');
    values[0][0]="10";
    values[0][1]="11";
    values[0][2]="12";
    values[1][0]="13";
    values[1][1]="14";
    values[1][2]="15";
    csvp.println(values);
    but the result is also the same. Does anyone know how to resolve this problem?

    but I don't want to separate the values with a comma....
    I want to separate each value into a separate cell
    When you save the file, separate w/ comma. When Excel loads it, it will automagically put each value in its own cell.
    what is "w/"? when I separate my data with a comma, Excel doesn't put the values into separate cells
    "w/" is an abbreviation of "with". Post the code that doesn't work, along with the results you get. If it's not working with commas, you must be doing something else wrong.
