Taking table script backup in a text file

Hi:
One of my customers has Oracle 8 running on IBM AIX. I need to take a backup of their table scripts in the form of a text file. Could anyone please walk me through the
commands on IBM AIX for the above?
Thanks - Prabir.

What version of Oracle are they running? "Oracle 8" could refer to anything from 8.0.3 to 8.1.7.
What is a "table script backup"? The DDL to recreate their tables? Or something else?
Does it really need to be a human readable text file? Or would a binary export file be sufficient?
Justin
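For what it's worth: if what you need is the DDL to recreate the tables in a plain-text file, one low-tech option that works the same on AIX as anywhere else is to spool a rough script out of the data dictionary with SQL*Plus. The sketch below is only an illustration, not a complete DDL generator: it assumes you connect as the schema owner, covers a single table (EMP here), and ignores constraints, indexes and storage clauses. On Oracle 8 there is no DBMS_METADATA, so the other common route is exp with ROWS=N followed by imp with SHOW=Y or INDEXFILE to capture the statements in a log file.
rem Minimal sketch: spool a bare column list for one table to a text file.
rem The last column keeps a trailing comma; tidy it by hand or loop in PL/SQL.
set pagesize 0 linesize 200 feedback off heading off trimspool on
spool table_script.txt
prompt CREATE TABLE emp (
select '  ' || rpad(column_name, 31) || data_type ||
       decode(data_type,
              'VARCHAR2', '(' || data_length || ')',
              'CHAR',     '(' || data_length || ')',
              'NUMBER',   decode(data_precision, null, '',
                                 '(' || data_precision || ',' || nvl(data_scale, 0) || ')'),
              '') || ','
from   user_tab_columns
where  table_name = 'EMP'
order  by column_id;
prompt );
spool off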

Similar Messages

  • How to insert data into the mysql table by giving as a text file

    Hi,
    Does anyone know how to insert data into a MySQL table from a text file given as input in JSP? Please respond ASAP.
    Thanks:)

    You could at least try StringTokenizer to parse your text files, or download a text JDBC driver to parse them, for instance HXTT Text (www.hxtt.net) or StelsCSV (www.csv-jdbc.com).
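    If parsing in Java is not a hard requirement, MySQL itself can bulk-load a delimited text file with LOAD DATA INFILE, which a JSP/servlet can send as a single statement. A minimal sketch, assuming a comma-delimited file and a hypothetical table my_table with three columns (the path, table and column names are made up here):
    -- Hypothetical path, table and column names; adjust to your schema.
    -- LOCAL requires the MySQL server and driver to allow local infile.
    LOAD DATA LOCAL INFILE '/path/to/input.txt'
    INTO TABLE my_table
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\n'
    (col1, col2, col3);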

  • Multiple tables in GUI_DOWNLOAD in same text file

    Hi,
    Can anybody tell me how to pass multiple internal tables for download into the same text file? Also, each table's output should start with NEW LINE. Please help me implement this functionality.
    Thanks,
    Amol

    I'm not sure that I understand correctly, but:
    - if you mean that the first line before your table should be NEW LINE, then before appending your new internal table (itab2) to the consolidated one (itab), you could just do something like:
    itab-text = 'NEW LINE'.
    append itab.
    And then
    append lines of itab2 to itab.
    - if you mean that the first line of your new table should begin with NEW LINE, then it could be something like this:
    read table itab2 index 1.
    concatenate 'NEW LINE' itab2-text into itab2-text.
    modify itab2 transporting text index 1.
    append lines of itab2 to itab.
    Hope it helps,
    Regards,
    Sylvie

  • How to save the text from text area back to a text file?

    Hi all,
    Now I am doing a small panel which contains a JTextArea. This TextArea reads the content from a text file and then displays it.
    My idea is to allow the user to modify the content in the text area (I set Editable=true), then save it. After the user presses the save button, the content of the text file should be updated according to the user's modifications in the TextArea.
    I have done the displaying part using Java I/O (BufferedReader); now, does any expert here know how to write the TextArea content back to the text file itself? Please teach me, THANKS !!!

    Now I have managed to clear the error, but the problem is that when I click the 'save' button, this function clears out the entire text file !!!
    The code are as below:
    if (e.getSource() == vButtons[0]) {
        try {
            // Write the text area's current contents back out. (The original
            // version opened the FileWriter -- which truncates the file -- and
            // then wrote through a second, still-null BufferedWriter, so the
            // write failed and the freshly truncated file stayed empty.)
            String ky = tArea.getText();
            BufferedWriter buffWrite = new BufferedWriter(new FileWriter("Keyword.txt"));
            buffWrite.write(ky);
            buffWrite.flush();
            buffWrite.close();
        } catch (IOException ex) {
            ex.printStackTrace();
        }
    }
    if (e.getSource() == vButtons[1]) {
        new panel();
        dispose();
    }
    if (e.getSource() == vButtons[2]) {
        System.exit(0);
    }
    Please tell me how to save the TextArea to the text file, THANKS !!!

  • Collating PDFs with similar table structure into one excel / text file

    Hi, I'm using Adobe Acrobat X v10.1.1.  I have multiple PDFs which, as well as containing pictures, contain the same table of data in each one. The table is in the same format in each file, and I want to extract each table's data and collate it into one text file (it is a small table, 3 x 3). I explored the option of exporting FDF data and then using a 3rd-party tool to convert it all into an Excel file; however, the Forms option in my Tools menu won't expand. I am possibly making this more difficult for myself than is necessary. Has anyone attempted something similar before, and what approach would you recommend?
    Thanks for any help.
    Al.

    use logical database SDF, nodes ska1 and  skc1c
    A.

  • XML Column from table extract to Pipe Delimited Text File

    Hi,
    I have an XML column with large data in a table (source: SQL Server database).
    I was asked to extract the XML column to a .txt file using SSIS.
    Is it possible to extract an XML column with huge data to a text file?
    When I tried to select the XML column from the source table, I noticed that the column's property is taken as [DT_NTEXT]. I converted it to DT_TEXT, as ANSI does not support DT_NTEXT.
    The execution succeeded at first, but then failed due to truncation. So I am wondering: is there a way to get the XML column extracted to a pipe-delimited text file?
    Is it advisable to do this? Or is it even valid to export XML in a pipe-delimited file?
    Please kindly advise.
    thanks
    kodi

    Are you looking at shredding data within XML nodes and then importing it to text file or are you looking at exporting XML value as is? Also is SSIS a necessity?
    If not, you can simply use T-SQL along with bcp for this. Just use a query like:
    EXEC xp_cmdshell 'bcp "SELECT CAST(XMLColumn AS varchar(max)) AS Column FROM table" queryout <full file path> -c -S <ServerName> -T -t |'
    provided you use trusted connection (windows authentication)
    see
    http://visakhm.blogspot.in/2013/10/bcp-out-custom-format-data-to-flat-file.html
    If you want to shred the data use Xpath functions in the query as below
    http://visakhm.blogspot.in/2012/10/shred-data-as-well-as-metadata-from-xml.html
    Visakh
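    If the goal is to shred values out of the XML rather than export it whole, a query along these lines could feed the same bcp or SSIS flat-file step. This is only a sketch: the table name MyTable, the Id column and the node names /Orders/Order, OrderId and Amount are invented and would need replacing with the real structure of the XML column from the thread.
    -- Hypothetical names throughout; XMLColumn is the XML-typed column mentioned above.
    SELECT  t.Id,
            x.n.value('(OrderId)[1]', 'varchar(50)')   AS OrderId,
            x.n.value('(Amount)[1]',  'decimal(18,2)') AS Amount
    FROM    dbo.MyTable AS t
    CROSS APPLY t.XMLColumn.nodes('/Orders/Order') AS x(n);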

  • Script to autoflow multiple text files?

    A publisher I'm working with is being slowed down by a necessary change to their workflow. Maybe there's a script around to help? I've looked but can't find any, so I'm posting here.
    Old workflow: Designers would autoflow a single, large Word doc that contained all front matter (sometimes >10 diff. sections) and chapters and endnotes into a book template.
    New workflow: The Word doc is now broken up into 30-40 InCopy files, still destined for a single InDesign layout doc. (The InCopy files were created from a tagged Word doc by a different system, so they're all new to the InDesign user.)
    Is there a script that, after the designers load a Place cursor with the 30-40 files (in the correct order), will allow them to single-click on page 1, and InDesign will place one after the other of the files, autoflowing as necessary? They still need them to be placed as individual InCopy files (so concatenating first wouldn't help). I don't think there would ever be a case of 2 stories being on the same page.
    thanks,
    AM

    Hi AnneMarie,
    This script will allow you to select a bunch of files, and will attempt to place them, one by one, into your InDesign document. It will auto-flow them as needed (although watch out for permanent overset problems -- because the script does not test for that).
    I've tested this with a bunch of assets, but not with InCopy files because I don't have any. If they're placeable like Word documents, the script should run fine.
    Let us know if it all works properly for you.
    // BS"D
    // Multi-file auto-place
    // An InDesign Script by Ariel, (c) Id-Extras.com, 2014
    // This script will allow the user to select a bunch of placeable files (Word docs, etc.)
    // It will then attempt to place and auto-flow all the selected files.
    // The script will start from page 1 of the active document
    // and keep adding pages as needed.
    // It will add text frames as needed, within the margins of the page.
    // IMPORTANT: There is no error-checking for perpetual overflow!!!
    // So, if something you're trying to place cannot fit within the margins,
    // The script will continue adding pages to InDesign until it crashes.
    // To quit the script, press ESC.
    var myDoc = app.activeDocument,
        myFiles = File.openDialog("Select files to place...", undefined, true),
        i,
        currentPage = myDoc.pages[0],
        prevFrame,
        myFrame;
    for (i = 0; i < myFiles.length; i++){
        myFrame = addFrame(currentPage);
        myFile = myFiles[i];
        try{
            myFrame.place(myFile, false);
        }
        catch(e){
            alert("Unable to place file: "+myFile, "Multi-file auto-place");
            continue;
        }
        while (myFrame.overflows){
            currentPage = addPageAfter(currentPage);
            prevFrame = myFrame;
            myFrame = addFrame(currentPage);
            prevFrame.nextTextFrame = myFrame;
        }
        currentPage = addPageAfter(currentPage);
    }
    function addFrame(aPage){
        var pageMargins = aPage.marginPreferences,
            aFrame = aPage.textFrames.add(),
            areFacing = app.activeDocument.documentPreferences.facingPages,
            myTop = aPage.bounds[0]+pageMargins.top,
            myBottom = aPage.bounds[2]-pageMargins.bottom,
            myLeft = aPage.bounds[1]+pageMargins.left,
            myRight = aPage.bounds[3]-pageMargins.right;
        //When document.documentPreferences.facingPages == true,
        //"left" means inside; "right" means outside.
        if (areFacing && aPage.side == PageSideOptions.LEFT_HAND){
            myLeft = aPage.bounds[1]+pageMargins.right;
            myRight = aPage.bounds[3]-pageMargins.left;
        }
        aFrame.geometricBounds = [myTop, myLeft, myBottom, myRight];
        return aFrame;
    }
    function addPageAfter(aPage){
        return myDoc.pages.add(LocationOptions.AFTER, aPage);
    }

  • Load multiple records in 1 table from 1 line in text file w/sql loader

    hi guys,
    quick question, perhaps someone can help. searched around and didn't see this question asked before, and can't find any answer in SQL Loader faqs or docs.
    I know I can extract multiple logical records from a single physical record in the input file, and then use two INTO TABLE clauses to load the data into the table. See the Oracle 9i SQL*Loader control file reference, chapter 5, for what I am talking about.
    but my question follows:
    cust_id amount1_val amount1_qual amount2_val amount2_qual amount3_val amount3_qual
    123 1500.35 TA 230.34 VZ 3045.50 TW
    Basically I want to use one SQL*Loader statement to load these 3 records into 1 table. The issue for me is that I need to re-use the cust_id for all 3 records as the key, along with the qualifier code. The example in the Oracle docs only works for data where the logical records are completely separate -- no shared column values.
    I'm sure this is possible, perhaps using some :cust_id type parameter for the 2nd and 3rd records, or something, but I just don't have enough knowledge/experience with SQL*Loader to know what to do. I appreciate any help.
    wayne

    Hi wayne,
    I found an example of exactly what you were looking for in the SQL*Loader documentation. Please see if it is of some help to you.
    EXAMPLE
    The control file is ULCASE5.CTL.
    1234 BAKER 10 9999 101
    1234 JOKER 10 9999 777
    2664 YOUNG 20 2893 425
    5321 OTOOLE 10 9999 321
    2134 FARMER 20 4555 236
    2414 LITTLE 20 5634 236
    6542 LEE 10 4532 102
    2849 EDDS xx 4555
    4532 PERKINS 10 9999 40
    1244 HUNT 11 3452 665
    123 DOOLITTLE 12 9940
    1453 MACDONALD 25 5532
    In the above datafile
    Column1 - Empno
    Column2 - ENAME
    Column3 - Deptno.
    Column4 - MGR
    Column5 - Proj no.
    -- Loads EMP records from first 23 characters
    -- Creates and loads PROJ records for each PROJNO listed
    -- for each employee
    LOAD DATA
    INFILE 'ulcase5.dat'
    BADFILE 'ulcase5.bad'
    DISCARDFILE 'ulcase5.dsc'
    1) REPLACE
    2) INTO TABLE emp
    (empno POSITION(1:4) INTEGER EXTERNAL,
    ename POSITION(6:15) CHAR,
    deptno POSITION(17:18) CHAR,
    mgr POSITION(20:23) INTEGER EXTERNAL)
    2) INTO TABLE proj
    (empno POSITION(1:4) INTEGER EXTERNAL,
    3) projno POSITION(25:27) INTEGER EXTERNAL) -- 1st proj
    Notes:
    REPLACE specifies that if there is data in the tables to be loaded (EMP and PROJ), SQL*loader should delete the data before loading new rows.
    Multiple INTO clauses load two tables, EMP and PROJ. The same set of records is processed three times, using different combinations of columns each time to load table PROJ.
    Regards,
    Murali Mohan
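    Coming back to wayne's original layout: the same multiple INTO TABLE technique works even when the three logical records share cust_id, because every INTO TABLE clause can re-read cust_id with its own POSITION clause. A hedged sketch only: the table name is invented, and the byte positions are derived from the single sample row above, so they would need adjusting to the real fixed-width layout.
    LOAD DATA
    INFILE 'amounts.dat'
    REPLACE
    INTO TABLE cust_amounts
      (cust_id     POSITION(1:3)   INTEGER EXTERNAL,
       amount_val  POSITION(5:11)  DECIMAL EXTERNAL,
       amount_qual POSITION(13:14) CHAR)
    INTO TABLE cust_amounts
      (cust_id     POSITION(1:3)   INTEGER EXTERNAL,
       amount_val  POSITION(16:21) DECIMAL EXTERNAL,
       amount_qual POSITION(23:24) CHAR)
    INTO TABLE cust_amounts
      (cust_id     POSITION(1:3)   INTEGER EXTERNAL,
       amount_val  POSITION(26:32) DECIMAL EXTERNAL,
       amount_qual POSITION(34:35) CHAR)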

  • How to save changes back into binary text file

    I have a repository with a single binary field that contains an HTML file
    I have a pageflow with a form where users can edit content, including the contents of the HTML file.
    Can I just treat that like any other property:
    If the name of my binary field containing an HTML file is “file”
    Can I just treat it like any other field in the repository and do :
    <pre>
    try{
        properties = node.getProperties();
        properties[0] = new Property("title", new Value(form.title));
        properties[1] = new Property("description", new Value(form.description));
        properties[2] = new Property("file", new Value(form.contentText));
        properties[3] = new Property("author", new Value(form.author));
        node.setProperties(properties);
        String title = form.title;
        nodeId = node.getId();
        title = node.getProperty("title").getValue().toString();
    } catch(Exception e){
        e.printStackTrace();
        error = "error1";
    }
    RepositoryManager manager = null;
    try{
        manager = RepositoryManagerFactory.connect();
        NodeOps nodeOps = manager.getNodeOps();
        nodeOps.updateProperties(nodeId, properties);
    } catch(Exception e){
        error += "error2";
        e.printStackTrace();
    }
    </pre>
    Thanks much.


  • Sql/Plsql code to store data into a temporary table from a text file

    Dear all,
    I need to create a temporary table holding data from a text file. I am very new to data loading; could you please help me with how to read the text file into a temporary table?
    I have a text file like the one below:
    order* items : books Purchasing
    start date:
    8-11-09
    Notes: Books are selling from aug10 to aug 25
    Action performed*
    Time*
    Verified By*
    sold* out from shop, sold out date:_________
    +1.+
    physics _______ book sold to ravi
    +2.+
    social _______ book this is a good book
    sold to kiran
    aug10th
    ronald
    +3.+
    maths book to sal
    +4.+
    english book__________ this was a newbook
    to raj
    jak
    return* to shop, return date:____________
    +1.+
    maths book return by:_____________ Verify book
    aug11th
    john
    +2.+
    story book by:_________ checked
    aug14th
    Now I need to create a temporary table named books_order with 5 columns (order, Status, Action_Performed, Time, Verified_By), like below:
    Order               status     Action_Performed                         Time               Verified_By
    books Purchasing     sold          physics _______ book sold to ravi               _______          _________
    books Purchasing     sold          social _______ book this is a good book sold to kiran aug10th               ronald
    books Purchasing sold          maths book to sal                         _____               __________
    books Purchasing     sold          english book__________ this was a newbook to raj __________          jak
    books Purchasing return     maths book return by:_____________ Verify book      aug11th               john
    books Purchasing     return     story book by:_________ checked                aug14th               _________
    Thanks in advance.

    Hi,
    Thanks for your suggestions. I was able to get the data using utl_file.get_line, but I was not able to get the data when it is in the format below.
    I can read and store the data when it is all on one line, but I don't know how to read data like this:
    Book. Type Name Location Ownership Code
    Story SL hyd SS-HYD
    In this data I have to search for 'Book. Type' and then save the word 'Story' into the column 'Book_type'.
    Then I need to search for 'Name' and save 'SL' into the column 'Name'.
    Then I need to search for 'Location' and save 'hyd' into the column 'Location'.
    I was able to extract the data when it is in the format below using utl_file.get_line:
    Known Author: Unknown
    Less Selling Factors: Thunderstorms
    Reason: Unknown
    Can anyone explain how to solve the above?
    Below I explain the same problem in detail.
    I have a text file as below and a table with 12 columns. Now I need to insert this text file into the table story_books.
    CREATE TABLE story_books (
      book_id        NUMBER,
      Category       VARCHAR2(100 BYTE),
      Book_type      VARCHAR2(100 BYTE),
      Name           VARCHAR2(700 BYTE),
      Location       VARCHAR2(700 BYTE),
      Ownership_code VARCHAR2(700 BYTE),
      Author         VARCHAR2(700 BYTE),
      Less_Sel_fact  VARCHAR2(700 BYTE),
      Reason         VARCHAR2(700 BYTE),
      Buying         VARCHAR2(700 BYTE),
      Suspected_Book VARCHAR2(700 BYTE),
      Conditions     VARCHAR2(700 BYTE)
    );
    -------------------------text file---------------
    Books Out Table: Books
    Book. Type Name Location Ownership Code
    Story SL hyd SS-HYD
    Known Author: Unknown
    Less Selling Factors: Thunderstorms
    Reason: Unknown
    Buying (if applicable):
    Not Applicable
    Suspected Book:
    Unknown
    Conditions to increace sales:
    Advertisement in all areas
    Thanks in advance.
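    For the label-row-then-value-row layout ('Book. Type Name Location Ownership Code' followed by 'Story SL hyd SS-HYD'), one way is to read the next line whenever the label line is recognised and split it on whitespace. A minimal PL/SQL sketch, assuming Oracle 10g or later (for REGEXP_SUBSTR), a directory object named EXT_TABLES and a file named books.txt; it fills only four of the story_books columns:
    DECLARE
      l_file UTL_FILE.FILE_TYPE;
      l_line VARCHAR2(4000);
      l_vals VARCHAR2(4000);
    BEGIN
      l_file := UTL_FILE.FOPEN('EXT_TABLES', 'books.txt', 'r');
      LOOP
        BEGIN
          UTL_FILE.GET_LINE(l_file, l_line);
        EXCEPTION
          WHEN NO_DATA_FOUND THEN EXIT;  -- end of file
        END;
        IF l_line LIKE 'Book. Type%' THEN
          UTL_FILE.GET_LINE(l_file, l_vals);  -- the values are on the next line
          INSERT INTO story_books (book_type, name, location, ownership_code)
          VALUES (REGEXP_SUBSTR(l_vals, '\S+', 1, 1),
                  REGEXP_SUBSTR(l_vals, '\S+', 1, 2),
                  REGEXP_SUBSTR(l_vals, '\S+', 1, 3),
                  REGEXP_SUBSTR(l_vals, '\S+', 1, 4));
        END IF;
      END LOOP;
      UTL_FILE.FCLOSE(l_file);
      COMMIT;
    END;
    /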

  • Sql/Plsql code to export data into a temporary table from a text file

    Dear all,
    I need to create a temporary table holding data from a text file. I am very new to data loading; could you please help me with how to read the text file into a temporary table?
    I have a text file like the one below:
    order items : books Purchasing
    start date:
    8-11-09
    Notes: Books are selling from aug10 to aug 25
    Action performed
    Time
    Verified By
    sold out from shop, sold out date:_________
    1.
    physics _______ book sold to ravi
    2.
    social _______ book this is a good book
    sold to kiran
    aug10th
    ronald
    3.
    maths book to sal
    4.
    english book__________ this was a newbook
    to raj
    jak
    return to shop, return date:____________
    1.
    maths book return by:_____________ Verify book
    aug11th
    john
    2.
    story book by:_________ checked
    aug14th
    Now I need to create a temporary table named books_order with 5 columns (order, Status, Action_Performed, Time, Verified_By) and insert the data from this text file into it, like below:
    Order     status     Action_Performed     Time     Verified_By
    books Purchasing     sold     physics _______ book sold to ravi     _______     _________
    books Purchasing     sold     social _______ book this is a good book sold to kiran aug10th     ronald
    books Purchasing sold     maths book to sal     _____     __________
    books Purchasing     sold     english book__________ this was a newbook to raj __________     jak
    books Purchasing return     maths book return by:_____________ Verify book aug11th     john
    books Purchasing     return     story book by:_________ checked aug14th     _________
    Thanks in advance.

    Isn't school work marvelous?
    Create an external table.
    http://www.morganslibrary.org/reference/externaltab.html
    Getting the data into a temporary table may make sense in SQL Server ... but not in Oracle.
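    A minimal external-table sketch along those lines, assuming a directory object named EXT_TABLES and that each raw line of the file should arrive as one row to be parsed afterwards with SQL (the object names are made up; the field terminator just has to be a character that never occurs in the file):
    CREATE TABLE books_order_ext (
      line_text VARCHAR2(4000)
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY ext_tables
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY '|'
        MISSING FIELD VALUES ARE NULL
        (line_text CHAR(4000))
      )
      LOCATION ('books.txt')
    )
    REJECT LIMIT UNLIMITED;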

  • Read from text file vi won't read file...

    I am very new to LV programming, so I hope you forgive any stupid mistakes I am making. I am using Ver. 8.2 on an XP machine.
    I have a small program that stores small data sets in text files and can update them individually, or read and update them all sequentially, sending the data out a USB device. Currently I am just using two data sets, each in their own small text file. The delimiter is two commas ",,".
    The program works fine as written when run in the regular programming environment. I noticed, however, that as soon as I built it into a project, in the one function where it reads each file sequentially to update both files, the Read From Text File VI would return an empty data set, resulting in blank values being written back into the file. I read and rewrite the values back to the text file to place the one updated field (price) in its proper place. Each small text file is identified and named with a 4-digit number "ID". I built it twice and got the same result. I also built it into an installer, and unfortunately the bug travelled into the installation as well.
    Here is the overall program code in question:
    Here is the reading and parsing subvi:
    If you have any idea at all what could cause this I would really appreciate it!

    Hi Kiauma,
    Dennis beat me to it, but here goes my two cents:
    First of all, it's great to see that you're using error handling - that should make troubleshooting a lot easier.  By any chance, have you observed error 7 when you try to read your files and get an empty data set?  (You've probably seen that error before - it means the file wasn't found)
    If you're seeing that error, the issue probably has something to do with this:
    Relative paths differ in an executable.  This knowledge base document sums it up pretty well. To make matters more confusing, if you ever upgrade to LabVIEW 2009 the whole scheme changes.  Also, because an installer contains the executable, building the installer will always yield the same results.
    Lastly, instead of parsing each set of commas using the "match pattern" function, there's a function called "spreadsheet string to array" (also on the string palette) that does exactly what you're doing, except with one function:
    I hope this is helpful...
    Jim

  • Include heading in text file

    Hi Experts,
    I need your HELP to include the headings START_DATE, NUM_LOGS, MBYTES, RSIZE in my text file "redo_history.log" below.
    05-3-2009,36, 3600,100
    05-4-2009,191, 19100,100
    05-5-2009,56, 5600,100
    06-1-2009,220, 22000,100
    06-2-2009,245, 24500,100
    06-3-2009,217, 21700,100
    My desired output text file (redo_history.log) should be on below:
    START_DATE, NUM_LOGS,MBYTES,RSIZE
    05-3-2009,36, 3600,100
    05-4-2009,191, 19100,100
    05-5-2009,56, 5600,100
    06-1-2009,220, 22000,100
    06-2-2009,245, 24500,100
    06-3-2009,217, 21700,100
    I'm executing the following script to generate a text file named redo_history.log:
    select dump_csv('SELECT Start_Date,
    Num_Logs,
    to_char(Round(Num_Logs * (Vl.Bytes / (1024 * 1024)),2),''999999999'') AS Mbytes,
    Vl.Bytes / (1024*1024) AS RSize
    FROM (SELECT To_Char(Vlh.First_Time,''MM-W-YYYY'') AS Start_Date,
    COUNT(Vlh.Thread#) Num_Logs
    FROM V$log_History Vlh
    WHERE Vlh.First_Time > current_date - interval ''30'' day
    GROUP BY To_Char(Vlh.First_Time,''MM-W-YYYY'')) log_hist,
    ( select distinct bytes from V$log ) Vl
    ORDER BY Log_Hist.Start_Date',',','EXT_TABLES','redo_history.log')
    from dual;
    Please find below dump_csv.sql:
    CREATE OR REPLACE function dump_csv( p_query in varchar2,
    p_separator in varchar2
    default ',',
    P_DIR in varchar2 ,
    p_filename in varchar2 )
    return number
    AUTHID CURRENT_USER
    is
    l_output utl_file.file_type;
    l_theCursor integer default dbms_sql.open_cursor;
    l_columnValue varchar2(2000);
    l_status integer;
    l_colCnt number default 0;
    l_separator varchar2(10) default '';
    l_cnt number default 0;
    begin
    l_output := utl_file.fopen( P_DIR, p_filename, 'w' );
    dbms_sql.parse( l_theCursor, p_query, dbms_sql.native );
    for i in 1 .. 255 loop
    begin
    dbms_sql.define_column( l_theCursor, i,
    l_columnValue, 2000 );
    l_colCnt := i;
    exception
    when others then
    if ( sqlcode = -1007 ) then exit;
    else
    raise;
    end if;
    end;
    end loop;
    dbms_sql.define_column( l_theCursor, 1, l_columnValue,
    2000 );
    l_status := dbms_sql.execute(l_theCursor);
    loop
    exit when ( dbms_sql.fetch_rows(l_theCursor) <= 0 );
    l_separator := '';
    for i in 1 .. l_colCnt loop
    dbms_sql.column_value( l_theCursor, i,
    l_columnValue );
    utl_file.put( l_output, l_separator ||
    l_columnValue );
    l_separator := p_separator;
    end loop;
    utl_file.new_line( l_output );
    l_cnt := l_cnt+1;
    end loop;
    dbms_sql.close_cursor(l_theCursor);
    utl_file.fclose( l_output );
    return l_cnt;
    end dump_csv;
    Thanks in advance for you HELP.
    Regards,
    Eddie

    ow005731 wrote:
    No, I am not asking how to know the column name.
    I am questioning why go through all the hassles to determine the column names, while they are already known.
    When the user prepares the query to pass on to dump_csv():
    SELECT Start_Date,
    Num_Logs,
    to_char(Round(Num_Logs * (Vl.Bytes / (1024 * 1024)),2),''999999999'') AS Mbytes,
    Vl.Bytes / (1024*1024) AS RSize
    FROM ...
    he obviously specifies/knows the column names. (It's not like he was doing select * ...)
    But what if he was doing select *? And why hard-code column names for this query, and then have to change them if the query changes, when you can write it to be completely dynamic?
    CREATE OR REPLACE PROCEDURE run_query(p_sql IN VARCHAR2
                                         ,p_dir IN VARCHAR2
                                         ,p_header_file IN VARCHAR2
                                         ,p_data_file IN VARCHAR2 := NULL) IS
      v_finaltxt  VARCHAR2(4000);
      v_v_val     VARCHAR2(4000);
      v_n_val     NUMBER;
      v_d_val     DATE;
      v_ret       NUMBER;
      c           NUMBER;
      d           NUMBER;
      col_cnt     INTEGER;
      f           BOOLEAN;
      rec_tab     DBMS_SQL.DESC_TAB;
      col_num     NUMBER;
      v_fh        UTL_FILE.FILE_TYPE;
      v_samefile  BOOLEAN := (NVL(p_data_file,p_header_file) = p_header_file);
    BEGIN
      c := DBMS_SQL.OPEN_CURSOR;
      DBMS_SQL.PARSE(c, p_sql, DBMS_SQL.NATIVE);
      d := DBMS_SQL.EXECUTE(c);
      DBMS_SQL.DESCRIBE_COLUMNS(c, col_cnt, rec_tab);
      FOR j in 1..col_cnt
      LOOP
        CASE rec_tab(j).col_type
          WHEN 1 THEN DBMS_SQL.DEFINE_COLUMN(c,j,v_v_val,2000);
          WHEN 2 THEN DBMS_SQL.DEFINE_COLUMN(c,j,v_n_val);
          WHEN 12 THEN DBMS_SQL.DEFINE_COLUMN(c,j,v_d_val);
        ELSE
          DBMS_SQL.DEFINE_COLUMN(c,j,v_v_val,2000);
        END CASE;
      END LOOP;
      -- This part outputs the HEADER
      v_fh := UTL_FILE.FOPEN(upper(p_dir),p_header_file,'w',32767);
      FOR j in 1..col_cnt
      LOOP
        v_finaltxt := ltrim(v_finaltxt||','||lower(rec_tab(j).col_name),',');
      END LOOP;
      --  DBMS_OUTPUT.PUT_LINE(v_finaltxt);
      UTL_FILE.PUT_LINE(v_fh, v_finaltxt);
      IF NOT v_samefile THEN
        UTL_FILE.FCLOSE(v_fh);
      END IF;
      -- This part outputs the DATA
      IF NOT v_samefile THEN
        v_fh := UTL_FILE.FOPEN(upper(p_dir),p_data_file,'w',32767);
      END IF;
      LOOP
        v_ret := DBMS_SQL.FETCH_ROWS(c);
        EXIT WHEN v_ret = 0;
        v_finaltxt := NULL;
        FOR j in 1..col_cnt
        LOOP
          CASE rec_tab(j).col_type
            WHEN 1 THEN DBMS_SQL.COLUMN_VALUE(c,j,v_v_val);
                        v_finaltxt := ltrim(v_finaltxt||',"'||v_v_val||'"',',');
            WHEN 2 THEN DBMS_SQL.COLUMN_VALUE(c,j,v_n_val);
                        v_finaltxt := ltrim(v_finaltxt||','||v_n_val,',');
            WHEN 12 THEN DBMS_SQL.COLUMN_VALUE(c,j,v_d_val);
                        v_finaltxt := ltrim(v_finaltxt||','||to_char(v_d_val,'DD/MM/YYYY HH24:MI:SS'),',');
          ELSE
            v_finaltxt := ltrim(v_finaltxt||',"'||v_v_val||'"',',');
          END CASE;
        END LOOP;
      --  DBMS_OUTPUT.PUT_LINE(v_finaltxt);
        UTL_FILE.PUT_LINE(v_fh, v_finaltxt);
      END LOOP;
      UTL_FILE.FCLOSE(v_fh);
      DBMS_SQL.CLOSE_CURSOR(c);
    END;
    This allows the header row and the data to be written to separate files if required.
    e.g.
    SQL> exec run_query('select * from emp','TEST_DIR','output.txt');
    PL/SQL procedure successfully completed.
    Output.txt file contains:
    empno,ename,job,mgr,hiredate,sal,comm,deptno
    7369,"SMITH","CLERK",7902,17/12/1980 00:00:00,800,,20
    7499,"ALLEN","SALESMAN",7698,20/02/1981 00:00:00,1600,300,30
    7521,"WARD","SALESMAN",7698,22/02/1981 00:00:00,1250,500,30
    7566,"JONES","MANAGER",7839,02/04/1981 00:00:00,2975,,20
    7654,"MARTIN","SALESMAN",7698,28/09/1981 00:00:00,1250,1400,30
    7698,"BLAKE","MANAGER",7839,01/05/1981 00:00:00,2850,,30
    7782,"CLARK","MANAGER",7839,09/06/1981 00:00:00,2450,,10
    7788,"SCOTT","ANALYST",7566,19/04/1987 00:00:00,3000,,20
    7839,"KING","PRESIDENT",,17/11/1981 00:00:00,5000,,10
    7844,"TURNER","SALESMAN",7698,08/09/1981 00:00:00,1500,0,30
    7876,"ADAMS","CLERK",7788,23/05/1987 00:00:00,1100,,20
    7900,"JAMES","CLERK",7698,03/12/1981 00:00:00,950,,30
    7902,"FORD","ANALYST",7566,03/12/1981 00:00:00,3000,,20
    7934,"MILLER","CLERK",7782,23/01/1982 00:00:00,1300,,10

  • SQL Loader-How to insert -ve & date values from flat text file into coloumn

    Question: How to insert -ve & date values from a flat text file into columns in a table.
    Explanation: In the text file, the negative values are like -10201.30 or 15317.10- and the date values are in DDMMYYYY format (like 10052001 for 10th May, 2001).
    How do I load such values into database columns using SQL*Loader?
    Please guide.

    Question: How to insert -ve & date values from a flat text file into columns in a table.
    Explanation: In the text file, the negative values are like -10201.30 or 15317.10- and the date values are in DDMMYYYY format (like 10052001 for 10th May, 2001).
    How do I load such values into database columns using SQL*Loader?
    Please guide.
    Try something like:
    someDate    DATE 'DDMMYYYY'
    someNumber1      "TO_NUMBER ('s99999999.00')"
    someNumber2      "TO_NUMBER ('99999999.00s')"
    Good luck,
    Eric Kamradt
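    Spelled out as a control-file fragment, that idea would look roughly like the sketch below. It is only a sketch: the file, table and field names are invented; in the SQL string the field has to be referenced through its :bind variable; 'S' in the format mask accepts a leading sign and 'MI' a trailing minus sign.
    LOAD DATA
    INFILE 'data.txt'
    INTO TABLE target_table
    FIELDS TERMINATED BY ','
    ( doc_date     DATE 'DDMMYYYY',
      amount_lead  "TO_NUMBER(:amount_lead, 'S99999999.99')",
      amount_trail "TO_NUMBER(:amount_trail, '99999999.99MI')"
    )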

  • Export data into text file

    Hi all,
    I want to export table data into a delimited text file with SQL*Plus.
    [edit]
    Sorry, a non-delimited text file
    [edit]
    Example:
    CREATE TABLE delim (
      col_a VARCHAR2(20),
      col_b VARCHAR2(40)
    );
    value stored in
    col_a = FISH_1
    col_b = FISH_2
    spool x:\test_1.lst
    set feedback off;
    set HEADING off;
    set pagesize 0;
    set linesize 60;
    select col_a, col_b
    from delim;
    spool off;
    =>
    FISH_1
    FISH_2
    When I now do the same with
    set linesize 62;
    the result is like this
    =>
    FISH_1 FISH_2
    In the output of the second example there is a blank between col_a and col_b.
    I have to export the column data without this one blank between columns.
    Is there any way to do this?
    Thanks and cheers,
    ben
    Message was edited by:
    ben512

    Well, in your example there is one space between the two columns; anyway, you can see in my previous example that there is a SET COLSEP, and that shows what you want.
    But here is another example:
    Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
    SQL> create table test (
      2  col_a VARCHAR2(5),
      3  col_b VARCHAR2(3)
      4  );
    Table created.
    SQL>
    SQL> insert into test (col_a, col_b)
      2  values('abc', 'FA');
    1 row created.
    SQL>
    SQL> insert into test (col_a, col_b)
      2  values('def', 'KL');
    1 row created.
    SQL> insert into test values ('12345','123');
    1 row created.
    SQL> rem if you don't want to have a space between the two columns then
    SQL> set head off
    SQL> set colsep ""
    SQL> select col_a, col_b from test;
    abc  FA
    def  KL
    12345123
    SQL> rem if you want to have one space between the two columns then
    SQL> set colsep " "
    SQL> select col_a, col_b from test;
    abc   FA
    def   KL
    12345 123
    SQL>
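    If the padding space between the columns has to disappear completely, another option is to let SQL do the joining instead of SQL*Plus column formatting, i.e. concatenate the columns in the SELECT. A small sketch against ben512's delim table:
    set heading off feedback off pagesize 0 trimspool on
    spool x:\test_1.lst
    select col_a || col_b from delim;
    spool off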
