Spooling large data using UTL_FILE

Hi Everybody!
While spooling data out to a file using the UTL_FILE package, I am unable to spool one of the columns. The column 'source_where_clause_text' holds very large values; the row I am testing with is 2531 characters long.
The procedure raises no error, but the external table built over the file returns no data.
Following is the code.
CREATE OR REPLACE PROCEDURE transformation_utl_file AS
   CURSOR c1 IS
      SELECT transformation_nme, source_where_clause_text
        FROM utility.data_transformation
       WHERE transformation_nme = 'product_closing';
   v_fh UTL_FILE.file_type;
BEGIN
   -- 32767 is the maximum line size UTL_FILE allows
   v_fh := UTL_FILE.fopen('UTLFILELOAD', 'transformation_data.dat', 'w', 32000);
   FOR ci IN c1
   LOOP
      UTL_FILE.put_line( v_fh, ci.transformation_nme ||'~'|| ci.source_where_clause_text);
      -- UTL_FILE.put_line( v_fh, ci.system_id ||'~'||ci.system_nme ||'~'|| ci.system_desc ||'~'|| ci.date_stamp);
   END LOOP;
   UTL_FILE.fclose( v_fh );
EXCEPTION
   WHEN UTL_FILE.invalid_path THEN
      dbms_output.put_line('Invalid Path');
END;
/
select length(
'(select to_char(b.system_id) || to_date(a.period_start_date,''dd-mon-yyyy'') view_key, b.system_id, to_date(a.period_start_date,''dd-mon-yyyy'') period_start_date, to_date(a.period_end_date,''dd-mon-yyyy'') period_end_date, to_date(a.closing_date,''dd-mon-yyyy'') closing_date
from ((select decode(certification_type_code, ''A'', ''IDESK_PRODUCTS_PIPELINE'',''C'', ''IDESK_PRODUCTS_COMMITMENT_LINKAGE'') system_nme, to_char(to_date(''01'' || lpad(trim(to_char(certification_as_of_month_yr)),6,''0''),''ddmmyyyy''),''dd-mon-yyyy'') period_start_date, to_char(last_day(to_date(''12'' || lpad(trim(to_char(certification_as_of_month_yr)),6,''0''),''ddmmyyyy'')),''dd-mon-yyyy'') period_end_date, to_char(trunc(certification_datetime_stamp), ''dd-mon-yyyy'') closing_date
from odsupload.prod_monthly_certification where certification_type_code in (''A'',''C'')
minus
select trim(system_nme), to_char(period_start_date, ''dd-mon-yyyy''), to_char(period_end_date, ''dd-mon-yyyy''), to_char(closing_date, ''dd-mon-yyyy'') from utility.system_closing_status_v where system_nme in (''IDESK_PRODUCTS_PIPELINE'', ''IDESK_PRODUCTS_COMMITMENT_LINKAGE''))
union all
(select ''BMS Commitment Link'' system_nme, to_char(to_date(''01'' || lpad(trim(to_char(certification_as_of_month_yr)),6,''0''),''ddmmyyyy''),''dd-mon-yyyy'') period_start_date, to_char(last_day(to_date(''12'' || lpad(trim(to_char(certification_as_of_month_yr)),6,''0''),''ddmmyyyy'')),''dd-mon-yyyy'') period_end_date, to_char(trunc(certification_datetime_stamp), ''dd-mon-yyyy'') closing_date
from odsupload.prod_monthly_certification where certification_type_code = ''C''
minus
select trim(system_nme), to_char(period_start_date, ''dd-mon-yyyy''), to_char(period_end_date, ''dd-mon-yyyy''), to_char(closing_date, ''dd-mon-yyyy'') from utility.system_closing_status_v where system_nme = ''BMS Commitment Link'')
union all
(select ''BMS'' system_nme, to_char(to_date(''01'' || lpad(trim(to_char(certification_as_of_month_yr)),6,''0''),''ddmmyyyy''),''dd-mon-yyyy'') period_start_date, to_char(last_day(to_date(''12'' || lpad(trim(to_char(certification_as_of_month_yr)),6,''0''),''ddmmyyyy'')),''dd-mon-yyyy'') period_end_date, to_char(trunc(certification_datetime_stamp), ''dd-mon-yyyy'') closing_date
from odsupload.prod_monthly_certification where certification_type_code = ''A''
minus
select trim(system_nme), to_char(period_start_date, ''dd-mon-yyyy''), to_char(period_end_date, ''dd-mon-yyyy''), to_char(closing_date, ''dd-mon-yyyy'') from utility.system_closing_status_v where system_nme = ''BMS'')) a, utility.system_v b
where a.system_nme = b.system_nme)') length1
from dual;
--2531
begin
   SSUBRAMANIAN.transformation_utl_file;
end;
/
create table transformation_utl
(
   TRANSFORMATION_NME       VARCHAR2(40),
   SOURCE_WHERE_CLAUSE_TEXT VARCHAR2(4000)
)
ORGANIZATION external
(
   type oracle_loader
   default directory UTLFILELOAD
   ACCESS PARAMETERS
   (
      records delimited by newline CHARACTERSET US7ASCII
      BADFILE UTLFILELOAD:'transformation.bad'
      LOGFILE UTLFILELOAD:'transformation.log'
      fields TERMINATED by "~"
   )
   LOCATION ('transformation_data.dat')
) REJECT LIMIT UNLIMITED;

select * from transformation_utl;

After running the procedure, did you verify that the file 'transformation_data.dat' actually contains data? Open it and make sure it's correct. Maybe it has no data, and that's why the external table doesn't show anything.
Also, check the LOG and BAD files after selecting from the external table. They may contain errors (or all the data may be going to the BAD file because something in the definition is wrong).
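One definition issue worth checking (an assumption on my part, not something shown in your post): ORACLE_LOADER treats a field without an explicit datatype as CHAR(255), so a 2531-character source_where_clause_text would be rejected to the BAD file on every row. Declaring the field lengths in the access parameters avoids that, e.g.:
fields TERMINATED by "~"
(
   transformation_nme       CHAR(40),
   source_where_clause_text CHAR(4000)
)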

Similar Messages

  • Write data using utl_file

    Hi, I am using Oracle 10g.
    I am writing a job which runs every day and writes to a file.
    The first run has to put up to 700,000 records into the file; after that each run will be around 5,000.
    Please let me know whether what I am doing is correct. I have never used UTL_FILE before, and since this is my first time using it I really need help from you guys.
    CREATE OR REPLACE PROCEDURE pr_cpe_dashboard
    IS
       CURSOR c1
       IS
          SELECT /*+ parallel ( qdm 8) index(sdm INDX_RPRE_MART_SITE_QREV) index(odm INDX_RPRE_MART_ORD_QSITE) */
                    qdm.quote_id
                 || '||'
                 || qdm.quote_revision
                 || '||'
                 || qdm.quote_status
                 || '||'
                 || qdm.last_modified_date
                 || '||'
                 || qdm.billing_method
                 || '||'
                 || qdm.quote_total
                 || '||'
                 || CASE
                       WHEN sdm.project_number IS NULL
                          THEN 'NULL'
                       ELSE sdm.project_number
                    END
                 || '||'
                 || CASE
                       WHEN odm.order_number IS NULL
                          THEN 'NULL'
                       ELSE odm.order_number
                    END
                 || '||'
                 || CASE
                       WHEN odm.order_type IS NULL
                          THEN 'NULL'
                       ELSE odm.order_type
                    END
                 || '||'
                 || CASE
                       WHEN odm.release_timestamp IS NULL
                          THEN 'NULL'
                       ELSE TO_CHAR (odm.release_timestamp, 'mm/dd/yyyy hh:mi:ss')
                    END
                 || '||'
                 || CASE
                       WHEN sdm.account_name IS NULL
                          THEN 'NULL'
                       ELSE sdm.account_name
                    END
                 || '||'
                 || qdm.nasp_id
            FROM r_premisys_quote_detail_mart qdm,
                 r_premisys_site_detail_mart sdm,
                 r_premisys_order_detail_mart odm
           WHERE qdm.quote_id = sdm.quote_id(+)
             AND qdm.quote_revision = sdm.quote_revision(+)
             AND qdm.quote_id = odm.quote_id(+)
             AND qdm.quote_revision = odm.quote_revision(+)
             AND qdm.last_modified_date >= (SELECT last_used_date
                                              FROM job_audit_date);
       output_file   UTL_FILE.file_type;
       l_dir         VARCHAR2 (10)      := 'c:/orders';
       l_filename    VARCHAR2 (25)      := 'cpe.txt';
       TYPE t_array IS TABLE OF VARCHAR2 (4000);   -- collection type for the bulk collect below
       v_array       t_array            := t_array ();
    BEGIN
       output_file := UTL_FILE.fopen (l_dir, l_filename, 'W');
       OPEN c1;
       LOOP
          FETCH c1
          BULK COLLECT INTO v_array LIMIT 1000;
          FOR i IN 1 .. v_array.COUNT
          LOOP
             UTL_FILE.put_line (output_file, v_array (i));
          END LOOP;
          EXIT WHEN v_array.COUNT = 0;
       END LOOP;
       UTL_FILE.fclose (output_file);
    END pr_cpe_dashboard;

    A couple of things.
    This is wrong:
    l_dir VARCHAR2 (10) := 'c:/orders';
    It must be the name of a directory object, created via the CREATE DIRECTORY command. Directory objects are (database) aliases for physical paths. Like any other db object, a directory object provides a security layer that allows one to control access to it. You do not want any and all db sessions to be able to access the root drive on that server.
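    For example (the directory name and grantee are illustrative):
    CREATE OR REPLACE DIRECTORY orders_dir AS 'c:\orders';
    GRANT READ, WRITE ON DIRECTORY orders_dir TO app_user;
    The directory object name ('ORDERS_DIR') is then what gets passed to UTL_FILE.fopen.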
    Not necessary:
    v_array t_array := t_array ();
    You are calling the constructor to create an empty array. This is not needed, as the bulk collect does that for you. Simply define the variable; the bulk collect takes responsibility for initialising and populating it.
    Review:
    parallel ( qdm 8)
    Personally I dislike such hints, hints forcing indexes, and so on. The reason is that the developer is second-guessing the CBO. Yes, you may have the right values for the development db and its data set. It may even work for production for a while. But production is very seldom static. Process loads vary. Data volumes increase. Even the database model changes (e.g. new columns, new indexes, etc). H/w changes (e.g. more CPUs, more memory, etc).
    By second-guessing the CBO, the developer makes it very hard for the DBA and the CBO to properly manage performance and scalability on the server.
    Also, keep in mind that UTL_FILE (and PL/SQL code) is a serialised process (the only exception is specially crafted pipelined table functions). So despite requesting 8 parallel query (PQ) processes in the hint, a single PL/SQL process has to write that data to file. Be sure that you identify the appropriate bottleneck when dealing with I/O and wanting to use PQ to address it.

  • How to upload data using utl_file when you dont know the exact file name

    Hi
    I want to upload data from a flat file to a table.
    I don't know the exact file name, so I want to search for the file name, e.g. a search on 'test*' that will return all the files starting with 'test'.
    I want to upload data using these files.
    How can I do this using UTL_FILE?
    Regards
    Manish

    Thank you very much.
    The thing is, previously we were using SQL*Loader and a shell script for loading the data.
    Now I am creating a procedure (if possible without parameters).
    Is there any other way I can do that?
    Can I make a wild card search using UTL_FILE?
    Thanks and Regards
    Manish

  • If i encrypted a large data using rsa,what i can do

    If I encrypted large data using RSA, where the data size > 1024 bits (the RSA key size), and the type of the data is byte[], what can I do?

    You'll have to block it yourself, and encrypt each block on its own. On the decrypt side, your algorithm needs to expect a series of blocks, and it needs to decrypt them each and rebuild the original plaintext. It's not hard - but you're in for some tedious times with byte[].
    And you'll end up with something that runs at a snail's pace - and the security will be weaker! It's a very bad approach to encryption. I realize that you know that - but feel free to tell whoever is requiring this of you that what they're asking for is a bad idea. ;)
    Grant

  • Writing CLOB data using UTL_FILE to several files by settingmaxrows in loop

    Hey Gurus,
    I have a procedure that creates a CLOB document (in the form of a table in Oracle 9i). I now need to write this large CLOB data (some 270,000 rows) into several files, with a maximum of 1000 rows per file. So essentially there would be some sort of loop construct and substr process that creates a file after looping through 1000 rows and then continues the count and creates another file, until all 270 xml files are created. Simple enough, right... lol? Well, I've tried doing this and haven't gotten anywhere. My PL/SQL coding skills are too elementary, I'm guessing. I've only been doing this for about three months and could use the assistance of a more experienced person here.
    Here are the particulars...
    CLOB doc is a table in my Oracle 9i application named "XML_CLOB"
    Directory name to write output to "DIR_PATH"
    DIRECTORY PATH is "\app\cdg\cov"
    Regards,
    Chris

    the xmldata itself in the CLOB would look like this for one row.
    <macess_exp_import_export_file><file_header><file_information></file_information></file_header><items><documents><document><external_reference></external_reference><processing_instructions><update>Date of Service</update></processing_instructions><filing_instructions><folder_ids><folder id="UNKNOWN" folder_type_id="19"></folder></folder_ids></filing_instructions><document_header><document_type id="27"></document_type><document_id>CUE0000179</document_id><document_description>CUE0000179</document_description><document_date name="Date of Service">1900-01-01</document_date><document_properties></document_properties></document_header><document_data><internal_file><document_file_path>\\adam\GiftSystems\Huron\TIFS\066\CUE0000179.tif</document_file_path><document_extension>TIF</document_extension></internal_file></document_data></document></documents></items></macess_exp_import_export_file>
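
    A minimal sketch of the loop being described (DIR_PATH and the XML_CLOB table are from the post; the column name xmldata, the file naming and the 32K-per-line limit are assumptions):
    CREATE OR REPLACE PROCEDURE write_clob_chunks AS
       v_fh    UTL_FILE.file_type;
       v_rows  PLS_INTEGER := 0;
       v_files PLS_INTEGER := 0;
    BEGIN
       FOR r IN (SELECT xmldata FROM xml_clob)   -- assumed column name
       LOOP
          IF MOD(v_rows, 1000) = 0 THEN          -- start a new file every 1000 rows
             IF v_rows > 0 THEN
                UTL_FILE.fclose(v_fh);
             END IF;
             v_files := v_files + 1;
             v_fh := UTL_FILE.fopen('DIR_PATH', 'macess_' || v_files || '.xml', 'w', 32767);
          END IF;
          -- writes at most the first 32000 characters of each row's CLOB
          UTL_FILE.put_line(v_fh, DBMS_LOB.substr(r.xmldata, 32000, 1));
          v_rows := v_rows + 1;
       END LOOP;
       IF v_rows > 0 THEN
          UTL_FILE.fclose(v_fh);
       END IF;
    END;
    /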

  • How to delete large data using XML batch in chunk

    public void DeleteListItems(SPWeb web, SPList list)
    {
        RMPExceptionManager.LogErrorInFile("--------Delete List Items from : " + list + " starts--------", true);
        try
        {
            web.AllowUnsafeUpdates = true;
            StringBuilder builder = new StringBuilder();
            builder.Append("<?xml version=\"1.0\" encoding=\"UTF-8\"?><Batch>");
            string s = "<Method>" +
                       "<SetList Scope=\"Request\">" + list.ID + "</SetList>" +
                       "<SetVar Name=\"ID\">{0}</SetVar>" +
                       "<SetVar Name=\"Cmd\">Delete</SetVar>" +
                       "</Method>";
            foreach (SPListItem item in list.Items)
            {
                builder.Append(string.Format(s, item.ID.ToString()));
            }
            builder.Append("</Batch>");
            web.ProcessBatchData(builder.ToString());
            web.AllowUnsafeUpdates = false;
            RMPExceptionManager.LogErrorInFile("--------Delete List Items from : " + list + " ends--------", true);
        }
        catch (Exception ex)
        {
            RMPExceptionManager.LogErrorInFile("--------delete List Items exception--------", bIsLogEnabled);
            RMPExceptionManager.LogErrorInFile(ex.Message, bIsLogEnabled);
            RMPExceptionManager.LogErrorInFile(ex.Source, bIsLogEnabled);
            RMPExceptionManager.LogErrorInFile(ex.StackTrace, bIsLogEnabled);
            RMPExceptionManager.LogErrorInFile("-------------------------------------------------------", bIsLogEnabled);
        }
    }
    I am using the above code to delete the data from a list.
    Currently it does not delete properly, as it executes one huge xml batch.
    How can I change the above code to work in chunks of 1000, so that I can delete 25,000 records easily?

    I tried the below code but it is not working properly.
    The first time it enters the do loop it deletes the records one by one instead of using a batch of 1000.
    After some time (approx. 30 min) it starts executing batches of 1000, but it does not delete all the records: out of 10,000 it deletes hardly 7,000, then stops the delete process and jumps to the next step.
    I do not understand why it shows this behaviour. Is there some timeout in the xml batch? The code after the delete function still executes, though.
    web.AllowUnsafeUpdates = true;
    StringBuilder builder = new StringBuilder();
    builder.Append("<?xml version=\"1.0\" encoding=\"UTF-8\"?><Batch>");
    string s = "<Method>" +
               "<SetList Scope=\"Request\">" + list.ID + "</SetList>" +
               "<SetVar Name=\"ID\">{0}</SetVar>" +
               "<SetVar Name=\"Cmd\">Delete</SetVar>" +
               "</Method>";
    // Query to get the unprocessed items.
    SPQuery query = new SPQuery();
    query.RowLimit = 1000;
    query.Query = "<Where></Where>";
    do
    {
        SPListItemCollection remainingItems = list.GetItems(query);
        foreach (SPListItem item in remainingItems)
        {
            builder.Append(string.Format(s, item.ID.ToString()));
        }
        builder.Append("</Batch>");
        web.ProcessBatchData(builder.ToString());
        query.ListItemCollectionPosition = remainingItems.ListItemCollectionPosition;
        RMPExceptionManager.LogErrorInFile("--------Delete batch remaining : " + remainingItems.ListItemCollectionPosition + " --------", true);
    } while (query.ListItemCollectionPosition != null);
    any body got this type of issue?
    is there any problem in above code?
    is there any other way to do this deletion process in better way?

  • CSV file reading using UTL_FILE at run time

    Hi,
    I have to read CSV files using UTL_FILE,
    but the folder contains many CSV files
    and I don't know their names, so I have to discover the csv file names at run time.
    Please let me know how we should achieve this.
    Thanks

    Place the following in a shell script, say "list_my_files.ksh":
    ls -l > my_file_list.dat
    then run the shell script using dbms_scheduler:
    begin
    dbms_scheduler.create_program (program_name   => 'a_test_proc'
                                  ,program_type   => 'EXECUTABLE'
                                  ,program_action => '/home/bluefrog/list_my_files.ksh'
                                  ,number_of_arguments => 0
                                  ,enabled => true);
    end;
    /
    Then open "my_file_list.dat" using UTL_FILE, read all the file names and choose the one you require.
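    To actually run the program once it is created, a job that references it is needed; a minimal sketch (the job name is illustrative):
    begin
      dbms_scheduler.create_job (job_name     => 'run_list_my_files'
                                ,program_name => 'a_test_proc'
                                ,enabled      => true);
    end;
    /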
    P;

  • Can express vi handle large data

    Hello,
    I'm facing a problem handling large data using Express VIs. The input to each Express VI is a large data set of 2M samples of waveform, and I am using 4 such Express VIs, each with 2M samples, connected in parallel. To process this data the Express VIs take too much time compared to general VIs or subVIs. Can anybody give the reason why the processing takes so much time? As per my understanding, displaying large data in LabVIEW is not efficient, and since the Express VIs have an internal display in the form of the configure dialog box, I feel most of the processing time is spent plotting the data on the graph of the configure dialog box. If this is correct, is there any solution to overcome this?
    waiting for reply
    Thanks in advance

    Hi sayaf,
    I don't understand your reasoning for not using the "Open Front Panel"
    option to convert the Express VI to a standard VI. When converting the
    Express VI to a VI, you can save it with a new name and still use the
    Express VI in the same VI.
    By the way, have you heard about the NI LabVIEW Express VI Development Toolkit? That is the choice if you want to be able to create your own Express VIs.
    NB: Not all Express VIs can be edited with the toolkit - you should mainly use the toolkit to develop your own Express VIs.
    Have fun!
    - Philip Courtois, Thinkbot Solutions

  • Writing large xmltype data to UTL_FILE and setting max row per file

    Hey Gurus,
    I am trying to create a procedure (in Oracle 9i) that writes out xml data I have created into several xml files (the output would probably be too large for one xml file... I am doing this for 270,000 rows of data), setting the max rows to 1000 rows per file. I know one would have to create a looping construct to do this, but I am just not adept enough in PL/SQL to figure it out at the moment.
    So essentially there would be some sort of loop construct and substr process that creates a file after looping through 1000 rows and then continues the count and creates another file, until all 270 xml files are created. Simple enough, right... lol? Well, I've tried doing this and haven't gotten anywhere. My PL/SQL coding skills are too elementary, I'm guessing. I've only been doing this for about three months and could use the assistance of a more experienced person here.
    Here are the particulars...
    This is the xmltype view code that I used to create the xml data.
    select XMLELEMENT("macess_exp_import_export_file",
    XMLELEMENT("file_header",
    XMLELEMENT("file_information")),
    XMLELEMENT("items",
    XMLELEMENT("documents",
    (SELECT XMLAGG(XMLELEMENT("document",
    XMLELEMENT("external_reference"),
    XMLELEMENT("processing_instructions",
    XMLELEMENT("update", name)),
    XMLELEMENT("filing_instructions",
    XMLELEMENT("folder_ids",
    XMLELEMENT("folder",
    XMLATTRIBUTES(folder_id AS "id", folder_type_id AS "folder_type_id")))),
    XMLELEMENT("document_header",
    XMLELEMENT("document_type",
    XMLATTRIBUTES(document_type AS "id")),
    XMLELEMENT("document_id", document_id),
    XMLELEMENT("document_description", document_description),
    XMLELEMENT("document_date",
    XMLATTRIBUTES(name AS "name"), document_date),
    XMLELEMENT("document_properties")),
    XMLELEMENT("document_data",
    XMLELEMENT("internal_file",
    XMLELEMENT("document_file_path", document_file_path),
    XMLELEMENT("document_extension", document_extension)
    )))) from macess_import_base WHERE rownum < 270000)))) AS result
    from macess_import_base WHERE rownum < 270000;
    This is the Macess_Import_Base table that I am creating xml data from
    create table MACESS_IMPORT_BASE
    (
    MACESS_EXP_IMPORT_EXPORT_FILE VARCHAR2(100),
    FILE_HEADER VARCHAR2(20),
    ITEMS VARCHAR2(20),
    DOCUMENTS VARCHAR2(20),
    DOCUMENT VARCHAR2(20),
    EXTERNAL_REFERENCE VARCHAR2(20),
    PROCESSING_INSTRUCTIONS VARCHAR2(20),
    PATENT VARCHAR2(20),
    FILING_INSTRUCTIONS VARCHAR2(20),
    FOLDER_IDS VARCHAR2(20),
    FOLDER_ID VARCHAR2(20),
    FOLDER_TYPE_ID NUMBER(20),
    DOCUMENT_HEADER VARCHAR2(20),
    DOCUMENT_PROPERTIES VARCHAR2(20),
    DOCUMENT_DATA VARCHAR2(20),
    INTERNAL_FILE VARCHAR2(20),
    NAME VARCHAR2(20),
    DOCUMENT_TYPE VARCHAR2(40),
    DOCUMENT_ID VARCHAR2(64),
    DOCUMENT_DESCRIPTION VARCHAR2(200),
    DOCUMENT_DATE VARCHAR2(100),
    DOCUMENT_FILE_PATH VARCHAR2(200),
    DOCUMENT_EXTENSION VARCHAR2(200)
    );
    Directory name to write output to "DIR_PATH"
    DIRECTORY PATH is "\app\cdg\cov"
    Regards,
    Chris

    I also would like to use UTL_FILE to achieve this functionality in the procedure.

  • How do I use UTL_FILE to insert a large number of fields to a file?

    Hi
    I am trying to use UTL_FILE for the first time in a stored procedure. I need to run a complex query that selects 50 fields from various tables, and these need to be written as one line per row in the output file, for all rows. Is this possible? My procedure so far is like the following:
    CREATE OR REPLACE PROCEDURE PROC_TEST IS
    output_file UTL_FILE.FILE_TYPE;
    BEGIN
    FOR query in (SELECT FIELD1, FIELD2, ..........FIELD50
                    FROM TABLE A, TABLE B
                   WHERE A.ID = B.ID
                     ETC)
    LOOP
    UTL_FILE.PUT_LINE(output_file, <put all 50 fields for all records into file> );
    END LOOP;               
    UTL_FILE.FCLOSE (output_file);
    EXCEPTION
    WHEN NO_DATA_FOUND THEN
    NULL;
    WHEN OTHERS THEN
         UTL_FILE.FCLOSE_ALL;
    RAISE;
    END PROC_TEST;
    Do I need to define 'query' (after the FOR) anywhere? Also, please advise on how to put all of the fields into the file.
    Thanks
    GB
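
    A minimal sketch of that pattern (the directory object, file name and the sample columns are illustrative, not from the thread):
    CREATE OR REPLACE PROCEDURE proc_test IS
       output_file UTL_FILE.FILE_TYPE;
    BEGIN
       -- 'MY_DIR' is a hypothetical directory object created via CREATE DIRECTORY
       output_file := UTL_FILE.FOPEN('MY_DIR', 'output.dat', 'w');
       FOR rec IN (SELECT a.field1, a.field2, b.field50
                     FROM table_a a, table_b b
                    WHERE a.id = b.id)
       LOOP
          -- concatenate all selected columns, delimited, into one line per row
          UTL_FILE.PUT_LINE(output_file, rec.field1 || '|' || rec.field2 || '|' || rec.field50);
       END LOOP;
       UTL_FILE.FCLOSE(output_file);
    END proc_test;
    /
    The implicit cursor in the FOR loop means the loop variable needs no separate declaration; extend the concatenation to cover all 50 fields.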

    Thanks Steve,
    I have the UTL_FILE working fine now.
    I have other queries to run and conditions to apply in the same procedure, and I need to schedule it via Enterprise Manager, so using UTL_FILE in a procedure seemed the best option. I looked up Data Pump, but this seems to be an 11g feature, and we are still on 10g, therefore I will not be able to use it.
    Thanks for your help.
    GB

  • How to read a tab seperated data from a text file using utl_file

    Hi,
    How do I read tab-separated data from a text file using utl_file?
    I know that if we use UTL_FILE.get_line we can read the whole line... but I need to read the tab-separated values separately.
    Thanks in advance...
    Naveen

    Naveen Nishad wrote:
    How do I read tab-separated data from a text file using utl_file?
    I know that if we use UTL_FILE.get_line we can read the whole line... but I need to read the tab-separated values separately.
    If it's a text file then UTL_FILE will only allow you to read it a line at a time. It is then up to you to split that string up into its individual components (search for "split string" on this forum for methods).
    If the text file contains a standard structure on each line, i.e. it is a fixed delimited structure, then you could use external tables to read the data instead.
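
    For example, a sketch of splitting one line on tab characters with INSTR/SUBSTR (the sample line is illustrative):
    DECLARE
       v_line VARCHAR2(4000) := 'col1' || CHR(9) || 'col2' || CHR(9) || 'col3';
       v_pos  PLS_INTEGER;
    BEGIN
       LOOP
          v_pos := INSTR(v_line, CHR(9));          -- CHR(9) is the tab character
          EXIT WHEN v_pos = 0;
          DBMS_OUTPUT.put_line(SUBSTR(v_line, 1, v_pos - 1));
          v_line := SUBSTR(v_line, v_pos + 1);     -- drop the field just printed
       END LOOP;
       DBMS_OUTPUT.put_line(v_line);               -- the last field has no trailing tab
    END;
    /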

  • HOW TO READ DATA FROM A FILE AND INSERT INTO A TABLE USING UTL_FILE

    Hi..
    I have a file. I want to read the data from the file and load it into a table using utl_file.
    How can I do it?
    Any reply appreciated...

    Hi,
    This is not exactly your requirement, but you can try this:
    CREATE OR REPLACE DIRECTORY text_file AS 'D:\TEXT_FILE\';
    GRANT READ ON DIRECTORY text_file TO fah;
    GRANT WRITE ON DIRECTORY text_file TO fah;
    DROP TABLE load_a;
    CREATE TABLE load_a
    (a1 varchar2(20),
     a2 varchar2(200))
    ORGANIZATION EXTERNAL
    (TYPE ORACLE_LOADER
     DEFAULT DIRECTORY text_file
     ACCESS PARAMETERS
     (FIELDS TERMINATED BY ',')
     LOCATION ('data.txt')
    );
    select * from load_a;
    CREATE TABLE A AS select * from load_a;
    SELECT * FROM A;
    Regards
    Faheem Latif

  • Exporting data from text file to a table using utl_file

    Dear all,
    I have a text file as below, and I have a table having 12 columns. Now I need to insert this text file into the table story_books.
    CREATE TABLE story_books
    (
    book_id NUMBER,
    Category VARCHAR2(100 BYTE),
    Book_type VARCHAR2(100 BYTE),
    Name VARCHAR2(700 BYTE),
    Location VARCHAR2(700 BYTE),
    Ownership_code VARCHAR2(700 BYTE),
    Author VARCHAR2(700 BYTE),
    Less_Sel_fact VARCHAR2(700 BYTE),
    Reason VARCHAR2(700 BYTE),
    Buying VARCHAR2(700 BYTE),
    Suspected_Book VARCHAR2(700 BYTE),
    Conditions VARCHAR2(700 BYTE)
    );
    -------------------------text file---------------
    Books Out Table: Books
    Book. Type          Name          Location               Ownership Code
    Story               SL          hyd               SS-HYD
    Known Author:     Unknown               
    Less Selling Factors: Thunderstorms     
    Reason:     Unknown               
    Buying (if applicable):
    Not Applicable
    Suspected Book:
    Unknown
    Conditions to increace sales:
    Advertisement in all areas
    I was able to read the data and store it when it is all on the same line, but I don't know how to read the data below:
    Book. Type          Name          Location               Ownership Code
    Story               SL          hyd               SS-HYD
    In this data I have to search for 'Book. Type' and then save the word 'Story' into the column 'Book_type'.
    Then I need to search for 'Name' and save 'SL' into the column 'Name'.
    Then I need to search for 'Location' and save 'hyd' into the column 'Location'.
    I was able to extract the data when it is in the below format, using utl_file.get_line:
    Known Author:     Unknown               
    Less Selling Factors: Thunderstorms     
    Reason:     Unknown     
    Can anyone explain how to handle the above criteria?
    Thanks in advance.

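    A hedged sketch of one way to parse that header/data pair (the directory and file names are hypothetical; the rule of splitting the data line on runs of whitespace is an assumption, and the REGEXP functions require 10g or later):
    DECLARE
       v_fh     UTL_FILE.file_type;
       v_line   VARCHAR2(4000);
       v_fields VARCHAR2(4000);
    BEGIN
       v_fh := UTL_FILE.fopen('BOOKS_DIR', 'books.txt', 'r');
       LOOP
          UTL_FILE.get_line(v_fh, v_line);
          IF v_line LIKE 'Book. Type%' THEN
             UTL_FILE.get_line(v_fh, v_fields);    -- the data line under the header
             -- collapse each run of whitespace to a single '|' delimiter
             v_fields := REGEXP_REPLACE(TRIM(v_fields), '[[:space:]]+', '|');
             DBMS_OUTPUT.put_line('Book_type = ' || REGEXP_SUBSTR(v_fields, '[^|]+', 1, 1));
             DBMS_OUTPUT.put_line('Name      = ' || REGEXP_SUBSTR(v_fields, '[^|]+', 1, 2));
             DBMS_OUTPUT.put_line('Location  = ' || REGEXP_SUBSTR(v_fields, '[^|]+', 1, 3));
          END IF;
       END LOOP;
    EXCEPTION
       WHEN NO_DATA_FOUND THEN                     -- get_line raises this at end of file
          UTL_FILE.fclose(v_fh);
    END;
    /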

  • Running out of memory while using cursored stream with large data

    We are following the suggestions/recommendations for the cursored stream:
    CursoredStream cursor = null;
    try {
        Session session = getTransaction();
        int batchSize = 50;
        ReadAllQuery raq = getQuery();
        raq.useCursoredStream(batchSize, batchSize);
        int num = 0;
        ArrayList<Request> limitRequests = null;
        int totalLimitRequest = 0;
        cursor = (CursoredStream) session.executeQuery(raq);
        while (!cursor.atEnd()) {
            Request request = (Request) cursor.read();
            if (num == 0) {
                limitRequests = new ArrayList<Request>(batchSize);
            }
            limitRequests.add(request);
            totalLimitRequest++;
            num++;
            if (num >= batchSize) {
                log.warn("Migrating batch of " + batchSize + " Requests.");
                updateLimitRequestFillPriceForBatch(limitRequests);
                num = 0;
                cursor.releasePrevious();
            }
        }
        if (num > 0) {
            updateLimitRequestFillPriceForBatch(limitRequests);
        }
        cursor.close();
    } // catch/finally blocks were not included in the post
    We are committing every 50 records in the unit of work. If we set DontMaintainCache on the ReadAllQuery we get PrimaryKeyExceptions intermittently, and we do not see much difference in the IdentityMap size.
    Any suggestions/ideas for dealing with large data sets? Thanks

    Hi,
    If I use read-only classes with CursoredStream and execute the query within a UOW, should I be saving any memory?
    I had to use a UOW because when I use a Session to execute the query I get:
    6115: ISOLATED_QUERY_EXECUTED_ON_SERVER_SESSION
    Cause: An isolated query was executed on a server session: queries on isolated classes, or queries set to use exclusive connections, must not be executed on a ServerSession or in CMP outside of a transaction.
    I assume marking the descriptor as read-only will avoid registering in UOW, but I want to make sure that this is the case while using CursoredStream.
    We are running in OC4J(OAS10.1.3.4) with BeanManagedTransaction.
    Please suggest.
    Thanks
    -Raam
    Edited by: Raam on Apr 2, 2009 1:45 PM

  • Spool SQl data into text file using dynamic sql

    Hi,
    I am spooling output data into a text file using the command:
    select 'select t.mxname,bo.lxtype,t.mxrev'||chr(10)||'from mx_1234567'||chr(10)||
    'where <condition>';
    Here mxname varchar(128), lxtype varchar(128) and mxrev varchar(128) are all of varchar type. I want the output in the format
    e.g. Part|1211121313|A
    but due to the column widths the output I am getting contains blank spaces:
    "Part then blank spaces |1211121313 then blank spaces |A"
    How can I remove these spaces between the columns? I used set space 0 but it is not working.
    Thanks in advance.
    Your help will be appreciated.

    Hi Frank,
    I have seen your reply about the SET LINESIZE function, but I could not understand it.
    I am facing a similar kind of issue in my present project.
    I am trying to spool more than 50 columns from a table into a flat file. Because of the larger column lengths of a few columns, I am getting spaces. There are many columns with the same issue. I want to remove that space so that the data fits on one line in the .txt file without any wrapped text.
    Below is my sample query.sql. Please let me know the syntax. My mail id: [email protected]
    --Created : Sep 22,2008, Created By : Srinivasa Bojja
    --Export all Fulfillments
    --Scheduled daily after 1:00am and should complete before 3:30am
    WHENEVER SQLERROR EXIT SQL.SQLCODE
    SET LINESIZE 800
    SET WRAP OFF
    SET PAGESIZE 800
    SET FEEDBACK OFF
    SET HEADING ON
    SET ECHO OFF
    SET CONCAT OFF
    SET COLSEP '|'
    SET UNDERLINE OFF
    SPOOL C:\Fulfillment.txt;
    SELECT SRV.COMM_METHOD_CD AS Method,
    SRV.SR_NUM AS "Fulfillment Row_Id",
    CON.LAST_NAME AS "Filled By",
    SRV.SR_TITLE AS Notes,
    SRVXM.ATTRIB_04 AS "Form Description"
    FROM SIEBEL.S_SRV_REQ SRV,
    SIEBEL.S_SRV_REQ_XM SRVXM,
    SIEBEL.S_USER USR,
    SIEBEL.S_CONTACT CON
    WHERE SRV.ROW_ID = SRVXM.PAR_ROW_ID AND
    SRV.OWNER_EMP_ID = USR.ROW_ID AND
    CON.ROW_ID= SRV.CST_CON_ID;
    SPOOL OFF;
    EXIT;
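
    One way to get rid of the padding (a sketch, not tested against this schema): skip COLSEP and concatenate the columns into a single expression, so SQL*Plus has no per-column widths to pad, and trim the spooled lines. PAGESIZE 0 also suppresses the heading, which would otherwise be padded:
    SET TRIMSPOOL ON
    SET LINESIZE 800
    SET PAGESIZE 0
    SET FEEDBACK OFF
    SPOOL C:\Fulfillment.txt
    SELECT SRV.COMM_METHOD_CD || '|' ||
           SRV.SR_NUM         || '|' ||
           CON.LAST_NAME      || '|' ||
           SRV.SR_TITLE       || '|' ||
           SRVXM.ATTRIB_04
      FROM SIEBEL.S_SRV_REQ SRV,
           SIEBEL.S_SRV_REQ_XM SRVXM,
           SIEBEL.S_USER USR,
           SIEBEL.S_CONTACT CON
     WHERE SRV.ROW_ID = SRVXM.PAR_ROW_ID
       AND SRV.OWNER_EMP_ID = USR.ROW_ID
       AND CON.ROW_ID = SRV.CST_CON_ID;
    SPOOL OFF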
