HUGE amount of data in flat file every day to external system

Hello,
I have to develop several programs to export all table data to a flat file for an external system (e.g. the web).
I have some worries, such as whether it is feasible to export all of KNA1, which contains a lot of data, to a flat file using an extraction like:
SELECT * FROM KNA1 INTO TABLE TB_KNA1.
I need some advice about this kind of huge extraction.
For some tables I also have to extract only the data changes: new, changed and deleted records. To do this I thought of developing a program that every day extracts all data from MARA and saves the extraction in a custom table ZMARA with the same structure; the next day, when the program runs, it compares the new extraction with the old one in ZMARA to identify the changed, new and deleted records. Is this the right approach? Could it have performance problems? Do you know of other methods?
Thanks a lot!
Bye

You should not have a problem with this simple approach, transferring each row to the output file rather than reading all data into an internal table first:
DATA: wa_kna1 TYPE kna1,
      lv_file TYPE string.          " e.g. filled from a selection-screen parameter

OPEN DATASET lv_file FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.
SELECT * FROM kna1 INTO wa_kna1.
  TRANSFER wa_kna1 TO lv_file.      " one row at a time, constant memory
ENDSELECT.
CLOSE DATASET lv_file.
Thomas
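
On the delta question: the snapshot-compare approach works. Below is a minimal sketch of the comparison, assuming ZMARA is a snapshot table with the same structure and key (MATNR) as MARA; all names are illustrative. Note that for many master data tables SAP also records changes in change documents (CDHDR/CDPOS), which may spare you the full daily extract.

* Minimal sketch: classify records as new, changed or deleted by
* comparing today's extract against yesterday's snapshot.
DATA: lt_new TYPE SORTED TABLE OF mara WITH UNIQUE KEY matnr,
      lt_old TYPE SORTED TABLE OF mara WITH UNIQUE KEY matnr,
      wa_new TYPE mara,
      wa_old TYPE mara.

SELECT * FROM mara  INTO TABLE lt_new.
SELECT * FROM zmara INTO TABLE lt_old.   " assumes ZMARA mirrors MARA

LOOP AT lt_new INTO wa_new.
  READ TABLE lt_old INTO wa_old WITH TABLE KEY matnr = wa_new-matnr.
  IF sy-subrc <> 0.
    " new record -> write to the delta file
  ELSEIF wa_new <> wa_old.
    " changed record -> write to the delta file
  ENDIF.
ENDLOOP.

LOOP AT lt_old INTO wa_old.
  READ TABLE lt_new TRANSPORTING NO FIELDS
       WITH TABLE KEY matnr = wa_old-matnr.
  IF sy-subrc <> 0.
    " deleted record -> write to the delta file
  ENDIF.
ENDLOOP.

The sorted tables keep both lookups cheap, so the compare itself is fast; the main cost of this approach is holding two full copies of the table in memory.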

Similar Messages

  • Handling Huge Amount of data in Browser

    Some information regarding large data handling in a web browser: browser data is downloaded to the
    local machine's cache. So when the browser needs data downloaded on the order of MBs or GBs, how can we
    handle that?
    The requirement is as follows.
    A performance monitoring application collects performance data of a system every 30 seconds.
    The data collected is around 10 KB per interval, and it is logged to a database. If this application
    runs for one day, the statistical data size will be around 30 MB (28.8 MB). If it runs for one week, the data size will be
    210 MB. There is no limit on the number of days from the software's perspective.
    The user needs to see this statistical data in the browser. We are not sure whether transferring this huge amount of data to the
    browser in one go is feasible. The user should be able to get an overall picture of the logged data for a
    particular period and, if needed, should be able to drill down step by step to smaller ranges.
    For example, if the user queries for data between 10th Nov and 20th Nov, the user expects to get an overall idea of
    the 11 days of data. Note that it is not possible to show each 30-second sample when showing 11 days of data, so some logic
    has to be applied to present the 11 days of data in a reasonably acceptable form. The user can then select a
    particular date in the graph, and the data for that day alone should be shown at a better granularity than the overall
    graph.
    Note: The applet may not be a signed applet.

    How do you download gigabytes of data to a browser? The answer is simple. You don't. A data analysis package like the one you describe should run on the server and send the requested summary views to the browser.
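
    As a minimal sketch of that server-side roll-up (written in ABAP only for consistency with the rest of this thread; all types and names are illustrative): bucket the 30-second samples by hour and return only per-hour averages, letting the client request finer granularity for a single day on drill-down.

    * Minimal sketch: aggregate raw 30-second samples into one average
    * per hour before anything is sent to the client.
    TYPES: BEGIN OF t_sample,
             ts    TYPE c LENGTH 14,             " YYYYMMDDHHMMSS
             value TYPE p LENGTH 15 DECIMALS 2,
           END OF t_sample,
           BEGIN OF t_bucket,
             hour TYPE c LENGTH 10,              " YYYYMMDDHH
             avg  TYPE p LENGTH 15 DECIMALS 2,
             cnt  TYPE i,
           END OF t_bucket.

    DATA: lt_samples TYPE STANDARD TABLE OF t_sample,
          lt_buckets TYPE HASHED TABLE OF t_bucket WITH UNIQUE KEY hour,
          wa_sample  TYPE t_sample,
          wa_bucket  TYPE t_bucket.
    FIELD-SYMBOLS: <bucket> TYPE t_bucket.

    LOOP AT lt_samples INTO wa_sample.
      wa_bucket-hour = wa_sample-ts(10).         " truncate to the hour
      READ TABLE lt_buckets ASSIGNING <bucket>
           WITH TABLE KEY hour = wa_bucket-hour.
      IF sy-subrc <> 0.
        CLEAR: wa_bucket-avg, wa_bucket-cnt.
        INSERT wa_bucket INTO TABLE lt_buckets ASSIGNING <bucket>.
      ENDIF.
      " running average, so no second pass over the raw data is needed
      <bucket>-avg = ( <bucket>-avg * <bucket>-cnt + wa_sample-value )
                   / ( <bucket>-cnt + 1 ).
      <bucket>-cnt = <bucket>-cnt + 1.
    ENDLOOP.

    One day then compresses from about 2,880 samples to 24 points, and the same idea with a per-day bucket covers the 11-day overview.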

  • Loading data from flat file to ODS

    Hi Experts,
    I have developed a report in which a flat file is the DataSource and a DSO is the data target.
    I have a requirement to increment a counter value in the TVARVC table after every load completes.
    Can this be done through a process chain? I believe it can, by creating a variable.
    If it can be done through process chains, can any expert send me a sample example?
    Thanks in advance.
    Sreedhar.M

    Hi Sreedhar,
    We had a similar scenario in which we load data from a flat file whose name (of the format x_20081010.dat) is fetched from the low value of the TVARVC table. Once the load has executed successfully, a program increments the low value by one day, and the next day the file with that name is placed on the application server.
    You can create an ABAP program variant and attribute for that to achieve this. The sample code below searches for the date string; once your file has loaded successfully, you can write similar code to increment the counter value you are asking about:
    DATA: s_tvarvc   TYPE tvarvc,
          w_date     TYPE d,
          w_nextdate TYPE d.

    SEARCH s_tvarvc-low FOR '20'.
    IF sy-subrc NE 0.
      sy-msgty = 'E'.
      sy-msgid = 'ZWM1'.
      sy-msgno = '000'.
      sy-msgv1 = 'Unable to find DATE to increment'.
      RAISE unable_increment.   " exception of the enclosing function module
    ELSE.
      w_date = s_tvarvc-low+sy-fdpos(8).
      w_nextdate = w_date + 1.
      REPLACE w_date WITH w_nextdate INTO s_tvarvc-low.
      MODIFY tvarvc FROM s_tvarvc.
    ENDIF.
    Regards
    vamsi

  • Need a script to import the data from flat file

    Hi Friends,
    Does anyone have a script to import data from flat files into an Oracle database (Linux OS)? I have to automate the script to run every 30 minutes, check for flat files in the incoming directory, and process them without user interaction.
    Thanks.
    Srini

    Here is my init.ora file
    # $Header: init.ora 06-aug-98.10:24:40 atsukerm Exp $
    # Copyright (c) 1991, 1997, 1998 by Oracle Corporation
    # NAME
    # init.ora
    # FUNCTION
    # NOTES
    # MODIFIED
    # atsukerm 08/06/98 - fix for 8.1.
    # hpiao 06/05/97 - fix for 803
    # glavash 05/12/97 - add oracle_trace_enable comment
    # hpiao 04/22/97 - remove ifile=, events=, etc.
    # alingelb 09/19/94 - remove vms-specific stuff
    # dpawson 07/07/93 - add more comments regarded archive start
    # maporter 10/29/92 - Add vms_sga_use_gblpagfile=TRUE
    # jloaiza 03/07/92 - change ALPHA to BETA
    # danderso 02/26/92 - change db_block_cache_protect to dbblock_cache_p
    # ghallmar 02/03/92 - db_directory -> db_domain
    # maporter 01/12/92 - merge changes from branch 1.8.308.1
    # maporter 12/21/91 - bug 76493: Add control_files parameter
    # wbridge 12/03/91 - use of %c in archive format is discouraged
    # ghallmar 12/02/91 - add global_names=true, db_directory=us.acme.com
    # thayes 11/27/91 - Change default for cache_clone
    # jloaiza 08/13/91 - merge changes from branch 1.7.100.1
    # jloaiza 07/31/91 - add debug stuff
    # rlim 04/29/91 - removal of char_is_varchar2
    # Bridge 03/12/91 - log_allocation no longer exists
    # Wijaya 02/05/91 - remove obsolete parameters
    # Example INIT.ORA file
    # This file is provided by Oracle Corporation to help you customize
    # your RDBMS installation for your site. Important system parameters
    # are discussed, and example settings given.
    # Some parameter settings are generic to any size installation.
    # For parameters that require different values in different size
    # installations, three scenarios have been provided: SMALL, MEDIUM
    # and LARGE. Any parameter that needs to be tuned according to
    # installation size will have three settings, each one commented
    # according to installation size.
    # Use the following table to approximate the SGA size needed for the
    # three scenarios provided in this file:
    #                 ------- Installation/Database Size -------
    #                   SMALL        MEDIUM         LARGE
    #  Block   2K       4500K         6800K        17000K
    #  Size    4K       5500K         8800K        21000K
    # To set up a database that multiple instances will be using, place
    # all instance-specific parameters in one file, and then have all
    # of these files point to a master file using the IFILE command.
    # This way, when you change a public
    # parameter, it will automatically change on all instances. This is
    # necessary, since all instances must run with the same value for many
    # parameters. For example, if you choose to use private rollback segments,
    # these must be specified in different files, but since all gc_*
    # parameters must be the same on all instances, they should be in one file.
    # INSTRUCTIONS: Edit this file and the other INIT files it calls for
    # your site, either by using the values provided here or by providing
    # your own. Then place an IFILE= line into each instance-specific
    # INIT file that points at this file.
    # NOTE: Parameter values suggested in this file are based on conservative
    # estimates for computer memory availability. You should adjust values upward
    # for modern machines.
    # You may also consider using Database Configuration Assistant tool (DBCA)
    # to create INIT file and to size your initial set of tablespaces based
    # on the user input.
    # replace DEFAULT with your database name
    db_name=DEFAULT
    db_files = 80 # SMALL
    # db_files = 400 # MEDIUM
    # db_files = 1500 # LARGE
    db_file_multiblock_read_count = 8 # SMALL
    # db_file_multiblock_read_count = 16 # MEDIUM
    # db_file_multiblock_read_count = 32 # LARGE
    db_block_buffers = 100 # SMALL
    # db_block_buffers = 550 # MEDIUM
    # db_block_buffers = 3200 # LARGE
    shared_pool_size = 3500000 # SMALL
    # shared_pool_size = 5000000 # MEDIUM
    # shared_pool_size = 9000000 # LARGE
    log_checkpoint_interval = 10000
    processes = 50 # SMALL
    # processes = 100 # MEDIUM
    # processes = 200 # LARGE
    parallel_max_servers = 5 # SMALL
    # parallel_max_servers = 4 x (number of CPUs) # MEDIUM
    # parallel_max_servers = 4 x (number of CPUs) # LARGE
    log_buffer = 32768 # SMALL
    # log_buffer = 32768 # MEDIUM
    # log_buffer = 163840 # LARGE
    # audit_trail = true # if you want auditing
    # timed_statistics = true # if you want timed statistics
    max_dump_file_size = 10240 # limit trace file size to 5 Meg each
    # Uncommenting the line below will cause automatic archiving if archiving has
    # been enabled using ALTER DATABASE ARCHIVELOG.
    # log_archive_start = true
    # log_archive_dest = disk$rdbms:[oracle.archive]
    # log_archive_format = "T%TS%S.ARC"
    # If using private rollback segments, place lines of the following
    # form in each of your instance-specific init.ora files:
    # rollback_segments = (name1, name2)
    # If using public rollback segments, define how many
    # rollback segments each instance will pick up, using the formula
    # # of rollback segments = transactions / transactions_per_rollback_segment
    # In this example each instance will grab 40/5 = 8:
    # transactions = 40
    # transactions_per_rollback_segment = 5
    # Global Naming -- enforce that a dblink has same name as the db it connects to
    global_names = TRUE
    # Edit and uncomment the following line to provide the suffix that will be
    # appended to the db_name parameter (separated with a dot) and stored as the
    # global database name when a database is created. If your site uses
    # Internet Domain names for e-mail, then the part of your e-mail address after
    # the '@' is a good candidate for this parameter value.
    # db_domain = us.acme.com      # global database name is db_name.db_domain
    # FOR DEVELOPMENT ONLY, ALWAYS TRY TO USE SYSTEM BACKING STORE
    # vms_sga_use_gblpagfil = TRUE
    # FOR BETA RELEASE ONLY. Enable debugging modes. Note that these can
    # adversely affect performance. On some non-VMS ports the db_block_cache_*
    # debugging modes have a severe effect on performance.
    #_db_block_cache_protect = true # memory protect buffers
    #event = "10210 trace name context forever, level 2" # data block checking
    #event = "10211 trace name context forever, level 2" # index block checking
    #event = "10235 trace name context forever, level 1" # memory heap checking
    #event = "10049 trace name context forever, level 2" # memory protect cursors
    # define parallel server (multi-instance) parameters
    #ifile = ora_system:initps.ora
    # define two control files by default
    control_files = (ora_control1, ora_control2)
    # Uncomment the following line if you wish to enable the Oracle Trace product
    # to trace server activity. This enables scheduling of server collections
    # from the Oracle Enterprise Manager Console.
    # Also, if the oracle_trace_collection_name parameter is non-null,
    # every session will write to the named collection, as well as enabling you
    # to schedule future collections from the console.
    # oracle_trace_enable = TRUE
    # Uncomment the following line, if you want to use some of the new 8.1
    # features. Please remember that using them may require some downgrade
    # actions if you later decide to move back to 8.0.
    #compatible = 8.1.0
    Thanks.
    Srini

  • How can we transfer a huge amount of data from the database server to XML format

    Hi gurus,
    How can we transfer a huge amount of data from the database server to XML format?
    Regards,
    Subhasis.

    Create ABAP coding
    First we create the TYPES and DATA definitions for the internal table we want to fill with the XML data. I have declared the table "it_airplus" to match the structure of the XML file definition, because it is a long XML definition (see the XSD file in the sample ZIP container from airplus.com).
    * the declaration
    TYPES: BEGIN OF t_sum_vat_sum,
             a_rate(5),
             net_value(15),
             vat_value(15),
           END OF t_sum_vat_sum.
    TYPES: BEGIN OF t_sum_total_sale,
             a_currency(3),
             net_total(15),
             vat_total(15),
             vat_sum TYPE REF TO t_sum_vat_sum,
           END OF t_sum_total_sale.
    TYPES: BEGIN OF t_sum_total_bill,
             net_total(15),
             vat_total(15),
             vat_sum TYPE t_sum_vat_sum,
             add_ins_val(15),
             total_bill_amount(15),
           END OF t_sum_total_bill.
    TYPES: BEGIN OF t_ap_summary,
             a_num_inv_det(5),
             total_sale_values TYPE t_sum_total_sale,
             total_bill_values TYPE t_sum_total_bill,
           END OF t_ap_summary.
    * t_ap_head and t_ap_details are declared analogously (see the full report)
    TYPES: BEGIN OF t_ap,
             head    TYPE t_ap_head,
             details TYPE t_ap_details,
             summary TYPE t_ap_summary,
           END OF t_ap.
    DATA: it_airplus TYPE STANDARD TABLE OF t_ap.
    * call the transformation
    CALL TRANSFORMATION zfi_airplus
         SOURCE XML l_xml_x1
         RESULT xml_output = it_airplus.
    See the complete report: Read data from XML file via XSLT program
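    As a variation, if the XML already follows SAP's asXML format you can use the built-in identity transformation instead of a custom XSLT. A minimal sketch (that lv_xml holds the document as an xstring is an assumption):

    * Minimal sketch with the identity transformation ID: no custom
    * XSLT is needed when the XML is in asXML format; the binding name
    * (xml_output) must match the element name in the document.
    DATA: lv_xml TYPE xstring,
          lt_ap  TYPE STANDARD TABLE OF t_ap.

    CALL TRANSFORMATION id
         SOURCE XML lv_xml
         RESULT xml_output = lt_ap.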
    Create XSLT program
    There are two options to create an XSLT program:
    Tcode SE80 -> create/choose a package -> right-click on it | Create -> Others -> XSL Transformation
    Tcode XSLT_TOOL
    For a quick overview you can look at the SXSLTDEMO* programs.
    In this example we already use the three XSLT options explained later.
    As you can see, we define XSL and ASX (ABAP) tags to handle the ABAP and XML variables/tags. After "

  • Changes to write optimized DSO containing huge amount of data

    Hi Experts,
    We have appended two new fields to a DSO containing a huge amount of data (the new InfoObjects are amount and currency).
    We were able to make the changes in Development (with the DSO containing data). But when we tried to
    transport the changes to our QA system, the transport hangs. The transport triggers a job which
    fills up the logs, so we had to kill the job, which aborts the transport.
    Has anyone of you had the same experience? Do we need to empty the DSO so we can transport
    successfully? We really don't want to empty the DSOs, as it will take time to reload.
    Any help?
    Thank you very much for your help.
    Best regards,
    Rose

    Emptying the DSO should not be necessary, neither for a normal DSO nor for a write-optimized DSO.
    What is in the logs; some sort of conversion for all the records?
    Marco

  • Report in Excel format fails for huge amount of data with headers!!

    Hi All,
    I have developed an Oracle report which fetches up to 5000 records.
    The requirement is to fetch up to 100000 records.
    This report fetches the data if the headers are removed; if headers are included, it is not able to fetch the data.
    Has anyone faced this issue?
    Any idea how to fetch a huge amount of data with an Oracle report in Excel format?
    Thanks & Regards,
    KP.

    Hi Manikant,
    According to your description, performance is slow when displaying a huge amount of data with more than 3 measures in PowerPivot, so you want to know the hardware requirements for building a PowerPivot solution that can display a huge amount of data with more than 3 measures, right?
    PowerPivot benefits from multi-core processors, large memory and storage capacities, and a 64-bit operating system on the client computer.
    Based on my experience, large memory, multiple processors and even solid-state drives benefit PowerPivot performance. Here is a blog about memory considerations for PowerPivot for Excel for your reference.
    http://sqlblog.com/blogs/marco_russo/archive/2010/01/26/memory-considerations-about-powerpivot-for-excel.aspx
    Besides, you can identify which query is taking the time by using tracing; please refer to the link below.
    http://blogs.msdn.com/b/jtarquino/archive/2013/12/27/troubleshooting-slow-queries-in-excel-powerpivot.aspx
    Regards,
    Charlie Liao
    TechNet Community Support

  • How can I get data in flat file from Pool table and cluster table ?

    Hi,
    I am working on an archiving project. My requirement is to get data into a flat file from cluster tables and pool tables.
    Is there any tool available to download data into a flat file from pool tables and cluster tables?
    If a table name is given on the selection screen, its data should be downloaded into a flat file.
    Waiting for a quick response.
    Best Regards,
    Bansidhar

    Data cannot be retrieved directly from a cluster table, as the cluster results are stored
    under a cluster key (say, for example, a PCL key), and from that key we need to fetch the data.
    These clusters are not part of the PNP or PNPCE logical databases.
    For your info, kindly check
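
    A minimal sketch of the usual pattern for one cluster, using IMPORT ... FROM DATABASE rather than SELECT (the cluster area 'RU' and the PC207/PC200 structures below are illustrative; the concrete ones depend on the cluster you are archiving):

    * Minimal sketch: read one cluster record, then write it to a file.
    DATA: lt_rt  TYPE STANDARD TABLE OF pc207,  " payroll results table
          ls_key TYPE pc200.                    " cluster key

    " fill ls_key with the personnel number / sequence number, then:
    IMPORT rt TO lt_rt FROM DATABASE pcl2(ru) ID ls_key.
    IF sy-subrc = 0.
      " write lt_rt to the flat file with OPEN DATASET / TRANSFER
    ENDIF.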

  • How to upload data from flat file to DataStore object in BI 7.0

    Dear friends,
    Please tell me
    the step-by-step process to upload data from a flat file to a DataStore object in BI 7.0.
    <removed by moderator>
    please help me
    Thanks,
    D.prabhu
    Edited by: Siegfried Szameitat on Aug 17, 2011 11:40 AM

    Create a transformation with the DataSource as the source and the DSO as the target, and load.
    Ravi Thothadri

  • Data Source creation for Master Data from Flat File to BW

    Hi,
    I need to upload master data from a flat file. Can anybody tell me the steps, right from the creation of the DataSource up to loading into the master data InfoObject? Does anybody have a document?
    can any body have Document.
    Regards,
    Chakri.

    Hi,
    This is the procedure.
    1. Create the master data InfoObject, with or without attributes.
    2. Create an InfoSource
        a) with flexible update
             or
        b) with direct update.
    3. Create transfer rules, assign the names of the master data and attributes on the "Transfer rules" tab, and transfer them to the communication structure.
    4. Create the flat file with the same structure as the communication structure.
    5. If you chose direct update, create an InfoPackage, assign the name of the flat file, and schedule it.
    6. If you chose flexible update, create an update rule with the name of the InfoSource, then schedule it by creating an InfoPackage.
    Hope this helps. If you still have problems, let me know.
    Follow this link also.
    http://help.sap.com/saphelp_nw2004s/helpdata/en/b2/e50138fede083de10000009b38f8cf/frameset.htm
    Assign points if helpful.
    Vinod.

  • Data Transfer Process (several data packages due to huge amount of data)

    Hi,
    a)
    I have been uploading data from ERP via PSA, ODS and InfoCube.
    Due to the huge amount of data in ERP, BI splits the data into two data packages.
    When processing the data into the ODS, the system deletes a few records.
    This is not done in the "Filter" step but in "Transformation".
    General question: how can this be?
    b)
    As described in a), the data is split by BI into two data packages due to the amount of data.
    To avoid this behaviour I entered a few more selection criteria in the InfoPackage.
    As a result I upload the data several times, each time with different selection criteria in the InfoPackage.
    Finally I have the same data in the ODS as in a), but this time without data being deleted in the "Transformation" step.
    Question: what is the general behaviour of BI when splitting data into several data packages?
    BR,
    Thorsten

    Hi All,
    Thanks a million for your help.
    My conclusions from your answers are the following.
    a) Since the ODS is Standard, within the transformation no records are deleted; they are aggregated.
    b) Uploading a huge number of records is possible in two ways:
       b1) with selection criteria in the InfoPackage and several uploads
       b2) without selection criteria in the InfoPackage, and therefore an automatic split of the records into data packages
    c) Both ways should give the same result in the ODS.
    OK, thanks for that.
    So far I have only checked the data in the PSA. In the PSA the number of records is not equal for variants b1 and b2.
    I guess this is normal technical behaviour of BI.
    I am fine as long as the results in the ODS are the same for b1 and b2.
    Have a nice day.
    BR,
    Thorsten

  • How often in real-time projects do we extract data from flat files and process it

    I am going through the BODS Data Integrator and trying to understand the demand for ETL services that extract data from a flat file. Is that really important in real-time jobs?
    Thank you very much for the helpful info.

    Hi,
    As per the inputs given by you guys, I started loading data from flat files.
    I tried to load 28 files, of which 24 loaded successfully. For the other 4 I got these error messages:
    1) Error 'Enter period in the format __.YYYY...' at conversion exit CONVERSION_EXIT_PERI6_INPUT (field CALMONTH record 1, value DUMYTRA)
    Message no. RSDS012
    2)  a) Error 'The argument '1,008.00' cannot be interpreted as a number' on assignment field QUANT_B record 11714 value 1,008.00
    Message no. RSDS013
       b) Error 'The argument '1,110.00' cannot be interpreted as a number' on assignment field QUANT_B record 15374 value 1,110.00
    Message no. RSDS013
    3) a) Error 'The argument '1,140.00' cannot be interpreted as a number' on assignment field QUANT_B record 1647 value 1,140.00
    Message no. RSDS013
       b) Error 'The argument '2,028.00' cannot be interpreted as a number' on assignment field QUANT_B record 4625 value 2,028.00
    Message no. RSDS013
    4) Error 'The argument '1,151.00' cannot be interpreted as a number' on assignment field QUANT_B record 7808 value 1,151.00
    Message no. RSDS013
    I am unable to trace what exactly the error is.
    I checked these values in the files and they look correct.
    Can anybody please guide me on this issue?
    With Regards,
    Pradeep.B
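
    The RSDS013 messages suggest the quantity fields contain a thousands separator that the number conversion does not accept. A minimal sketch of cleaning such a field before the load (the separator convention and the field are assumptions):

    * Minimal sketch: strip the thousands separator so '1,008.00'
    * becomes '1008.00', which can then be moved to a numeric field
    * (assumes ',' is the thousands separator and '.' the decimal point).
    DATA: lv_raw   TYPE string VALUE '1,008.00',
          lv_quant TYPE p LENGTH 13 DECIMALS 3.

    REPLACE ALL OCCURRENCES OF ',' IN lv_raw WITH ''.
    lv_quant = lv_raw.   " now interpretable as a number

    Alternatively, check the thousands/decimal separator settings of the DataSource (or the user's profile) so the file format and the conversion agree.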

  • I have huge amount of data on a windows external drive and want to transfer to a Mac drive.  Does anyone know an easy way to do this?  I have almost 2TB of data to transfer.  Thanks.

    I have a huge amount of data (2 TB) on a Windows Fantom external drive and want to transfer it to a Mac drive.  Does anyone know an easy way to do this?  Thanks.  I have an iMac 3.5 GHz Intel Core i7.  I haven't bought a Mac external drive yet.

    Move your data to a new Mac - Apple Support

  • How to create the Export Data and Import Data using flat file interface

    Hi,
    Please let me know, based on the requirement below, how to export and import data using a flat file interface.
    Please provide the steps involved.
    BW/BI - Recovery Process for SNP data. 
    For each SNP InfoProvider,
    create:
    1) Export Data:
    1.a)  Create an export data source, InfoPackage, comm structure, etc. necessary to create an ASCII fixed length flat file on the XI
    ctnhsappdata\iface\SCPI063\Out folder for each SNP InfoProvider. 
    1.b)  All fields in each InfoProvider should be exported and included in the flat file. 
    1.c)  A process chain should be created for each InfoProvider with a start event. 
    1.d)  If the file exists on the target drive it should be overwritten. 
    1.e)  The exported data file name should include the InfoProvider technical name.
    1.f)  Include APO Planning Version, Date of Planning Run, APO Location, Calendar Year/Month, Material and BW Plant as selection criteria.
    2) Import Data:
    2.a) Create a flat file source system InfoPackage, comm structure, etc. necessary to import ASCII fixed length flat files from the XI
    ctnhsappdata\iface\SCPI063\Out folder for each SNP InfoProvider.
    2.b)  All fields for each InfoProvider should be mapped and imported from the flat file.
    2.c)  A process chain should be created for each InfoProvider with a start event. 
    2.d)  The file should be archived in the
    ctnhsappdata\iface\SCPI063\Archive directory.  Each file name should have the date appended in YYYYMMDD format.  Each file should be deleted from the \Out directory after it is archived. 
    Thanks in advance.
    Tyson
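
    For step 2.d), a minimal sketch of building the archive file name with the date appended in YYYYMMDD format (the file name is illustrative):

    * Minimal sketch: sy-datum is already in YYYYMMDD format, so
    * appending it satisfies the archive naming rule in 2.d).
    DATA: lv_out  TYPE string VALUE 'ZSNP_CUBE01.dat',   " hypothetical name
          lv_arch TYPE string.

    CONCATENATE lv_out '_' sy-datum INTO lv_arch.
    " e.g. 'ZSNP_CUBE01.dat_20081010'; copy the file to the Archive
    " directory under this name, then DELETE DATASET on the \Out copy.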

    Here's some info on working with plists:
    http://developer.apple.com/documentation/Cocoa/Conceptual/PropertyLists/Introduction/chapter1_section1.html
    They can be edited with any text editor. Xcode provides a graphical editor for them - make sure to use the .plist extension so Xcode will recognize it.

  • Reading data from flat file Using TEXT_IO

    Dear Gurus
    I already posted this question, but this time I need some other changes. Sorry for that.
    I am using Forms 10g and TEXT_IO for reading data from a flat file.
    My data looks like this:
    0|BP-V1|20100928|01|1|2430962.89|27|2430962.89|MUR|20100928120106
    9|2430962.89|000111111111|
    1|61304.88|000014104113|
    1|41961.73|000022096086|
    1|38475.65|000023640081|
    1|49749.34|000032133154|
    1|35572.46|000033093377|
    1|246671.01|000042148111|
    Here each column is separated by |. I want to read all the columns and do some validation.
    How can I do this?
    Initially my requirement was to read only 2 or 3 columns, so I did it like this:
    Procedure Pay_Simulator(lfile_type varchar2,lac_no varchar2,lcur varchar2,lno_item number,ltotal number,ldate date,lpay_purp varchar2,lfile_name varchar2)
    IS
    v_handle utl_file.file_type;
    v_filebuffer varchar2(500);
    line_0_date VARCHAR2 (10);
    line_0_Purp VARCHAR2 (10);
    line_0_count Number;
    line_0_sum number(12,2);
    line_0_ccy Varchar2(3);
    line_9_sum Number(12,2);
    line_9_Acc_no Varchar2(12);
    Line_1_Sum Number(12,2);
    Line_1_tot Number(15,2) := 0;
    Line_1_flag Number := 0;
    lval number;
    lacno varchar2(16);
    v_file varchar2(20);
    v_path varchar2(50);
    Begin
    v_file := mcb_simulator_pkg.GET_FILENAME(lfile_name); -- for the file name
    v_path := rtrim(regexp_substr( lfile_name , '.*\\' ),'\'); -- for the path
    -- v_path := SUBSTR (lfile_name,0, INSTR (lfile_name, '\', -1));
    v_handle := UTL_FILE.fopen (v_path, v_file, 'r');
    BEGIN -- read until end of file
    LOOP
    UTL_FILE.get_line (v_handle, v_filebuffer);
    IF SUBSTR (v_filebuffer, 0, 1) = '0' THEN
    SELECT line_0 INTO line_0_date
    FROM (SELECT LTRIM (REGEXP_SUBSTR (v_filebuffer, '[^|]+{1}', 1, LEVEL)) line_0, ROWNUM rn
    FROM DUAL
    CONNECT BY LEVEL <= LENGTH (REGEXP_REPLACE (v_filebuffer, '[^|]*')) + 1)
    WHERE rn = 3;
    SELECT line_0 INTO line_0_Purp
    FROM (SELECT LTRIM (REGEXP_SUBSTR (v_filebuffer, '[^|]+{1}', 1, LEVEL)) line_0, ROWNUM rn
    FROM DUAL
    CONNECT BY LEVEL <= LENGTH (REGEXP_REPLACE (v_filebuffer, '[^|]*')) + 1)
    WHERE rn = 4;
    SELECT line_0 INTO line_0_count
    FROM (SELECT LTRIM (REGEXP_SUBSTR (v_filebuffer, '[^|]+{1}', 1, LEVEL)) line_0, ROWNUM rn
    FROM DUAL
    CONNECT BY LEVEL <= LENGTH (REGEXP_REPLACE (v_filebuffer, '[^|]*')) + 1)
    WHERE rn = 7;
    SELECT line_0 INTO line_0_sum
    FROM (SELECT LTRIM (REGEXP_SUBSTR (v_filebuffer, '[^|]+{1}', 1, LEVEL)) line_0, ROWNUM rn
    FROM DUAL
    CONNECT BY LEVEL <= LENGTH (REGEXP_REPLACE (v_filebuffer, '[^|]*')) + 1)
    WHERE rn = 8;
    SELECT line_0 INTO line_0_ccy
    FROM (SELECT LTRIM (REGEXP_SUBSTR (v_filebuffer, '[^|]+{1}', 1, LEVEL)) line_0, ROWNUM rn
    FROM DUAL
    CONNECT BY LEVEL <= LENGTH (REGEXP_REPLACE (v_filebuffer, '[^|]*')) + 1)
    WHERE rn = 9;
    ELSIF SUBSTR (v_filebuffer, 0, 1) = '9' THEN
    SELECT line_9 INTO line_9_Acc_no
    FROM (SELECT LTRIM (REGEXP_SUBSTR (v_filebuffer, '[^|]+{1}', 1, LEVEL)) line_9, ROWNUM rn
    FROM DUAL
    CONNECT BY LEVEL <= LENGTH (REGEXP_REPLACE (v_filebuffer, '[^|]*')) + 1)
    WHERE rn = 3;
    SELECT line_9 INTO line_9_sum
    FROM (SELECT LTRIM (REGEXP_SUBSTR (v_filebuffer, '[^|]+{1}', 1, LEVEL)) line_9, ROWNUM rn
    FROM DUAL
    CONNECT BY LEVEL <= LENGTH (REGEXP_REPLACE (v_filebuffer, '[^|]*')) + 1)
    WHERE rn = 2;
    ELSIF SUBSTR (v_filebuffer, 0, 1) = '1' THEN
    line_1_flag := line_1_flag+1;
    SELECT line_1 INTO line_1_sum
    FROM (SELECT LTRIM (REGEXP_SUBSTR (v_filebuffer, '[^|]+{1}', 1, LEVEL)) line_1, ROWNUM rn
    FROM DUAL
    CONNECT BY LEVEL <= LENGTH (REGEXP_REPLACE (v_filebuffer, '[^|]*')) + 1)
    WHERE rn = 3;
    Line_1_tot := Line_1_tot + line_1_sum;
    END IF;
    END LOOP;
    EXCEPTION
      WHEN NO_DATA_FOUND THEN NULL; -- end of file reached
    END;
    DBMS_OUTPUT.put_line (Line_1_tot);
    DBMS_OUTPUT.PUT_LINE (Line_1_flag);
    UTL_FILE.fclose (v_handle);
    END;
    But now how can I do it? Should I use a SELECT statement like that for all the columns?

    Sorry for that.
    As per our requirement,
    I need to read the flat file, and it looks like this:
    *0|BP-V1|20100928|01|1|2430962.89|9|2430962.89|MUR|20100928120106*
    *9|2430962.89|000111111111|*
    *1|61304.88|000014104113|*
    *1|41961.73|000022096086|*
    *1|38475.65|000023640081|*
    *1|49749.34|000032133154|*
    *1|35572.46|000033093377|*
    *1|246671.01|000042148111|*
    *1|120737.25|000053101979|*
    *1|151898.79|000082139768|*
    *1|84182.34|000082485593|*
    I have to check the file:
    The validations are: the 1st line should start with 0, else it should raise an error and insert that error into a table.
    The same for the 2nd line: it should start with 9, else it should raise an error and insert that error into a table.
    Then the 3rd line should start with 1, else it should raise an error and insert that error into a table.
    After that I have to do a validation on the 1st line's 2nd column: it should be BP-V1, else raise an error and insert it into the table. Then I will check the 3rd column, which is 20100928; it should be in YYYYMMDD format, else the same thing: ERROR.
    Then I have different validations like this for all the columns.
    Then it checks the 2nd line's 3rd column. This is an account number: first I check that it is 12 characters, else ERROR. Then I check it against what the user entered in the form. For example, if the user entered 111111111, I compare it with the 000111111111 in the 2nd line; I have to add 000 before the account number the user entered.
    Then, for the lines starting with 1, I have to take the 2nd column of all those lines and sum them. After that I have to compare the sum with the value in the 6th column of the 1st line (starting with 0). It should be the same, else ERROR.
    In the same way I have to count all the lines starting with 1 and compare the count with the 7th column of the 1st line. It should be the same; in this file it should be 9.
    MY CODE IS :-
    Procedure Pay_Simulator(lfile_type varchar2,lac_no varchar2,lcur varchar2,lno_item number,ltotal number,ldate date,lpay_purp varchar2,lfile_name varchar2)
    IS
    v_handle TEXT_IO.file_type;
    v_filebuffer varchar2(500);
    line_0_date VARCHAR2 (10);
    line_0_Purp VARCHAR2 (10);
    line_0_count Number;
    line_0_sum number(12,2);
    line_0_ccy Varchar2(3);
    line_9_sum Number(12,2);
    line_9_Acc_no Varchar2(12);
    Line_1_Sum Number(12,2);
    Line_1_tot Number(15,2) := 0;
    Line_1_flag Number := 0;
    lval number;
    lacno varchar2(16);
    v_file varchar2(20);
    v_path varchar2(50);
    LC$String VARCHAR2(50) ;--:= 'one|two|three|four|five|six|seven' ;
    LC$Token VARCHAR2(100) ;
    i PLS_INTEGER := 2 ;
    lfirst_char number;
    lvalue Varchar2(100) ;
    Begin
    v_file := mcb_simulator_pkg.GET_FILENAME(lfile_name); -- for the file name
    v_path := rtrim(regexp_substr( lfile_name , '.*\\' ),'\'); -- for the path
    --v_path := SUBSTR (lfile_name,0, INSTR (lfile_name, '\', -1));
    Message(lfile_name);
    v_handle := TEXT_IO.fopen(lfile_name, 'r');
              BEGIN
                        LOOP
                        TEXT_IO.get_line (v_handle, v_filebuffer);
                        lfirst_char := Substr(v_filebuffer,0,1);
                        --Message('First Char '||lfirst_char); 
                                  IF lfirst_char = '0' Then
                                  Loop
                                  LC$Token := mcb_simulator_pkg.Split( v_filebuffer, i , '|') ;
                                  Message('VAL - '||LC$Token);
                                  lvalue := LC$Token;
                                  EXIT WHEN LC$Token IS NULL ;
    i := i + 1 ;
    End Loop;
                                  Else
                                       Insert into MU_SIMULATOR_output_ERR (load_no,ERR_CODE,ERR_DESC) values (9999,'0002','First line should always start with 0');
                                       Forms_DDL('Commit');
                                       raise form_Trigger_failure;
                                  End if ;
                        TEXT_IO.get_line (v_handle, v_filebuffer);
                        lfirst_char := Substr(v_filebuffer,0,1);
                        LC$Token := mcb_simulator_pkg.Split( v_filebuffer, i , '|') ;
                        --Message('Row '||LC$Token);
                             IF lfirst_char = '9' Then
                                  Null;
                             Else
                                  Insert into MU_SIMULATOR_output_ERR (load_no,ERR_CODE,ERR_DESC) values (8888,'0016','Second line should start with 9');
                                  Forms_DDL('Commit');
                                  raise form_Trigger_failure;
                             End IF;
                        LOOP
                        TEXT_IO.get_line (v_handle, v_filebuffer);
                        lfirst_char := Substr(v_filebuffer,0,1);
                        LC$Token := mcb_simulator_pkg.Split( v_filebuffer, i , '|') ;
                        --Message('Row '||LC$Token);
                                  IF lfirst_char = '1' Then
                                  Null;
                                  Else
                                       Insert into MU_SIMULATOR_output_ERR (load_no,ERR_CODE,ERR_DESC) values (7777,'0022','The third line onward should start with 1');
                                       Forms_DDL('Commit');
                                       raise form_Trigger_failure;
                                  End if;
                        END LOOP;
                        --END IF;
                        END LOOP;
              EXCEPTION
                   When No_Data_Found Then
              TEXT_IO.fclose (v_handle);
              END;
    Exception
         When Others Then
         Message('Other error');
    END;
    I am calling the SPLIT function you gave me as mcb_simulator_pkg.Split.

Maybe you are looking for

  • What is the best way of deploying a jsp and bc4j aplication

    Hi, I would like to know what is the best way of deploying a JSP and BC4J application in iAS 9i. Thanks in advance, rjc

  • Tracking button active state?

    Hi - I have three pages within a section of a site - when the user clicks on a text button to go there, I'd like the text link to show active. In this case, the text is black - when you rollover it becomes red. So, when you're on a page, I'd like the

  • DOMNotSupportedException in com.sun.java.browser.dom.* classes

    I'd like to enable an applet to modify the DOM in a browser window directly (ie. by using org.w3c.dom.* stuff). I'm using the latest 1.5. I'm using a small class that I've pasted below to attempt to facilitate this. It evidently works in MSIE, where

  • Problem downloading JDeveloper 10g production!

    Hi! It seems to me I can't download Oracle JDeveloper 10g production. When I click on a download link on the main JDeveloper page on otn.oracle.com, I'm redirected to a questionnaire with a download button at the bottom of the page. But when I click

  • HT201210 unable to sync music on phone

    I have restored my iPhone and when I go onto my iTunes to sync music, it keeps doing 5 steps then stops