iExpense APIs and Base Tables Needed

Hi All,
Which iExpense APIs are available to insert data from another database source into the Oracle standard tables using Oracle-provided APIs?

We have the following private APIs for expense report creation:
   AP_WEB_OA_MAINFLOW_PKG
   AP_EXPENSE_REPORT_HEADERS_PKG
   AP_WEB_DB_EXPRPT_PKG
   AP_WEB_DB_EXPLINE_PKG

Similar Messages

  • Attach user-defined tables and views that need to be added to the database to my add-on

    Hi there,
    I want to deploy an add-on, and there are user-defined tables and views that need to be added to the database.
    I need some advice on a couple of issues:
    1. Can I bundle the user-defined tables and views that need to be added to the database into my add-on?
    2. I wonder when is the proper time to add them: if I add these user-defined objects at install time, I may not have enough information to connect to the SQL Server.
    Thanks for any help.

    Hi Weerachai,
    Here's an example of how to create a user-defined table in code. My suggestion would be to check whether they exist when your add-on starts up and, if not, create the tables, fields and objects.
    'User Table
        Private Sub CreateTable(ByVal sTable As String, ByVal sDescription As String, ByVal oObjectType As SAPbobsCOM.BoUTBTableType)
            Dim oUserTablesMD As SAPbobsCOM.UserTablesMD
            Dim iResult As Long
            Dim sMsg As String
            oUserTablesMD = oCompany.GetBusinessObject(SAPbobsCOM.BoObjectTypes.oUserTables)
            If Not oUserTablesMD.GetByKey(sTable) Then
                oUserTablesMD.TableName = sTable
                oUserTablesMD.TableDescription = sDescription
                oUserTablesMD.TableType = oObjectType
                iResult = oUserTablesMD.Add()
                If iResult <> 0 Then
                    oCompany.GetLastError(iResult, sMsg)
                    MessageBox.Show("Error Creating Table: " & sTable & " Error: " & sMsg)
                End If
            End If
            System.Runtime.InteropServices.Marshal.ReleaseComObject(oUserTablesMD)
        End Sub
    'User Field
        Private Sub CreateField(ByVal sTable As String, ByVal sName As String, ByVal sDescription As String, _
                                ByVal iSize As Integer, ByVal aFieldType As SAPbobsCOM.BoFieldTypes, _
                                ByVal aSubType As SAPbobsCOM.BoFldSubTypes, ByVal sLink As String, _
                                ByVal bMandatory As SAPbobsCOM.BoYesNoEnum)
            Dim oUserFieldsMD As SAPbobsCOM.UserFieldsMD
            Dim oTable As SAPbobsCOM.UserTable
            Dim iResult As Long
            Dim sMsg As String
            Dim i As Integer
            Dim x As Integer
            Dim bFound As Boolean = False
            Dim oField As SAPbobsCOM.Field
            oTable = oCompany.UserTables.Item(sTable)
            For i = 0 To oTable.UserFields.Fields.Count - 1
                oField = oTable.UserFields.Fields.Item(i)
                'MessageBox.Show(oField.Name)
                If oField.Name = "U_" & sName Then
                    bFound = True
                End If
            Next
            System.Runtime.InteropServices.Marshal.ReleaseComObject(oTable)
            If Not bFound Then
                oUserFieldsMD = oCompany.GetBusinessObject(SAPbobsCOM.BoObjectTypes.oUserFields)
                oUserFieldsMD.TableName = "@" & sTable
                oUserFieldsMD.Name = sName
                oUserFieldsMD.Description = sDescription
                oUserFieldsMD.Type = aFieldType
                If aFieldType = SAPbobsCOM.BoFieldTypes.db_Alpha Or aFieldType = SAPbobsCOM.BoFieldTypes.db_Numeric Then
                    oUserFieldsMD.EditSize = iSize
                Else
                    oUserFieldsMD.SubType = aSubType
                    oUserFieldsMD.Mandatory = bMandatory
                End If
                oUserFieldsMD.LinkedTable = sLink
                iResult = oUserFieldsMD.Add()
                If iResult <> 0 Then
                    oCompany.GetLastError(iResult, sMsg)
                    MessageBox.Show("Error Creating Field: " & sTable & "." & sName & " Error: " & sMsg)
                End If
                System.Runtime.InteropServices.Marshal.ReleaseComObject(oUserFieldsMD)
            End If
        End Sub
    If you want to create a view, I think you would have to use the Recordset object. That way you don't have to log in to the database again.
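    As a rough illustration (not from the original post), the SQL passed to the Recordset's DoQuery method could simply be a CREATE VIEW statement over the user-defined table. The table @MYTABLE and the field U_Amount below are hypothetical; Code and Name are the standard columns of a B1 user table:
        -- Hypothetical example (SQL Server syntax); run once at add-on startup
        -- if the view does not already exist.
        CREATE VIEW MY_ADDON_VIEW AS
        SELECT T0.Code, T0.Name, T0.U_Amount
        FROM [@MYTABLE] T0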
    Hope it helps,
    Adele

  • Interface and Base tables for Receiving Transaction Processor Program

    Hi Everyone,
    My requirement is to move data from a staging table into the RCV interface tables and then run the Receiving Transaction Processor program. How will I know which interface tables I need to insert data into? And after I run the concurrent program, which base tables do I need to check to confirm the receipts?
    Please help!
    Thanks
    Sunny

    RCV_HEADERS_INTERFACE
    RCV_TRANSACTIONS_INTERFACE
    If you have serial numbers or lot numbers for the receipts, then you need to insert into
    RCV_SERIALS_INTERFACE
    RCV_LOTS_INTERFACE
    Once the transactions are processed, you will see records in
    RCV_SHIPMENT_HEADERS
    RCV_SHIPMENT_LINES
    For details on the interface, check irep.oracle.com
    See the thread "Import PO Receipts using custom conversion" for some scripts.
    And see http://www.oracleug.com/tables/purchasing/rcvheadersinterface and http://www.oracleug.com/tables/purchasing/rcvtrnsactionsinterface for details on the tables.
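    As a rough sketch only (trimmed column list, hypothetical bind values; verify the full set of required columns for your release on irep.oracle.com before building the conversion):
    INSERT INTO rcv_headers_interface
           (header_interface_id, group_id, processing_status_code,
            receipt_source_code, transaction_type, auto_transact_code,
            vendor_id, ship_to_organization_id, expected_receipt_date,
            validation_flag, creation_date, created_by, last_update_date, last_updated_by)
    VALUES (rcv_headers_interface_s.NEXTVAL, :group_id, 'PENDING',
            'VENDOR', 'NEW', 'DELIVER',
            :vendor_id, :ship_to_org_id, SYSDATE,
            'Y', SYSDATE, :user_id, SYSDATE, :user_id);
    INSERT INTO rcv_transactions_interface
           (interface_transaction_id, group_id, header_interface_id,
            transaction_type, transaction_date, processing_status_code,
            processing_mode_code, transaction_status_code,
            receipt_source_code, source_document_code,
            po_header_id, po_line_id, quantity, unit_of_measure,
            to_organization_id, validation_flag,
            creation_date, created_by, last_update_date, last_updated_by)
    VALUES (rcv_transactions_interface_s.NEXTVAL, :group_id, :header_interface_id,
            'RECEIVE', SYSDATE, 'PENDING',
            'BATCH', 'PENDING',
            'VENDOR', 'PO',
            :po_header_id, :po_line_id, :quantity, :uom_code,
            :ship_to_org_id, 'Y',
            SYSDATE, :user_id, SYSDATE, :user_id);
    -- :group_id is typically taken from rcv_interface_groups_s; after loading, submit the
    -- Receiving Transaction Processor and check RCV_SHIPMENT_HEADERS / RCV_SHIPMENT_LINES
    -- (and PO_INTERFACE_ERRORS for any failed rows).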
    Hope this answers your question,
    Sandeep Gandhi

  • Count difference b/w forecast interface table (success rec) and base table

    Hi all
    We did a forecast conversion.
    We get a mismatched count when comparing the number of successfully loaded records in the interface table (process_status = 5) with the number of records inserted into the base table.
    I used the queries below:
    1) To count the records successfully processed in MRP_FORECAST_INTERFACE:
    select count(*) from MRP_FORECAST_INTERFACE
    where TO_CHAR(last_update_date,'DD-MON-YYYY HH24:MI:SS') between '20-MAR-2012 06:08:05' and '20-MAR-2012 14:06:33'
    and process_status = 5;  -- shows 859022
    2) Records that hit the base table around the same time:
    SELECT count(*) FROM MRP_FORECAST_DATES where last_update_date > sysdate - 2;  -- 851451
    I referred to this document: http://docs.oracle.com/cd/A85683_01/acrobat/mrptrm.pdf
    It says that if a record has process_status = 5 in MRP_FORECAST_INTERFACE, the planning manager loaded it successfully into the base table.
    So why is the count different? Can anyone help me with this?
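    As an illustration of a like-for-like comparison (a sketch only, assuming both tables carry comparable last_update_date values for the conversion run), the same date window can be applied to both tables instead of mixing a fixed window with sysdate - 2:
    -- Interface rows marked as processed (status 5) within the run window.
    SELECT COUNT(*)
    FROM   mrp_forecast_interface
    WHERE  process_status = 5
    AND    last_update_date BETWEEN TO_DATE('20-MAR-2012 06:08:05','DD-MON-YYYY HH24:MI:SS')
                                AND TO_DATE('20-MAR-2012 14:06:33','DD-MON-YYYY HH24:MI:SS');
    -- Base-table rows updated within the same window.
    SELECT COUNT(*)
    FROM   mrp_forecast_dates
    WHERE  last_update_date BETWEEN TO_DATE('20-MAR-2012 06:08:05','DD-MON-YYYY HH24:MI:SS')
                                AND TO_DATE('20-MAR-2012 14:06:33','DD-MON-YYYY HH24:MI:SS');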

  • Does "Target table" and "Error Table" need to be in same DB as OWB's?

    We get the following error at the time of OWB execution: ORA-00600: internal error code, arguments: [opixrb-3], [1036], ORA-01036: illegal variable name/number.
    OWB seems to log the error rows in the error table successfully when (a) the target table and (b) its error table are in the same database as OWB, but it throws ORA-01036 when they are in a database remote from OWB's.
    Does anyone have any information about this? It would be of great help.

    Oracle has come back to us saying that
    "DML error logging feature is not supported for distributed DML."
    Bug 5698887 has more information about this.

  • How to Compare Data length of staging table with base table definition

    Hi,
    I have two tables: a staging table and a base table.
    I'm loading data from flat files into the staging table. As per the requirement, the structures of the staging table and the base table are different: every column in the staging table is 25% longer than in the base table, so the data can be dumped without errors. For example, if the CITY column is VARCHAR2(40) in the staging table, it is VARCHAR2(25) in the base table. Once the data is loaded into the staging table, I want to compare the actual data length of each column in the staging table with the base table definition (DATA_LENGTH for each column from ALL_TAB_COLUMNS), and if any value exceeds the base-table length I need to flag the corresponding staging row by updating a column called ERR_LENGTH.
    For this I'm using:
    cursor c1 is select length(a.id), length(a.name), ... from staging_table a;
    cursor c2(name varchar2) is select data_length from all_tab_columns where table_name = 'BASE_TABLE' and column_name = name;
    But the first query returns all the data at once, whereas with the second cursor I need to fetch each column one by one and then compare it with the first.
    Can anyone tell me how to get the desired results?
    Thanks,
    Mahender.

    This is a shot in the dark, but take a look at the example below:
    SQL> DROP TABLE STAGING;
    Table dropped.
    SQL> DROP TABLE BASE;
    Table dropped.
    SQL> CREATE TABLE STAGING
      2  (
      3          ID              NUMBER
      4  ,       A               VARCHAR2(40)
      5  ,       B               VARCHAR2(40)
      6  ,       ERR_LENGTH      VARCHAR2(1)
      7  );
    Table created.
    SQL> CREATE TABLE BASE
      2  (
      3          ID      NUMBER
      4  ,       A       VARCHAR2(25)
      5  ,       B       VARCHAR2(25)
      6  );
    Table created.
    SQL> INSERT INTO STAGING VALUES (1,RPAD('X',26,'X'),RPAD('X',25,'X'),NULL);
    1 row created.
    SQL> INSERT INTO STAGING VALUES (2,RPAD('X',25,'X'),RPAD('X',26,'X'),NULL);
    1 row created.
    SQL> INSERT INTO STAGING VALUES (3,RPAD('X',25,'X'),RPAD('X',25,'X'),NULL);
    1 row created.
    SQL> COMMIT;
    Commit complete.
    SQL> SELECT * FROM STAGING;
            ID A                                        B                                        E
             1 XXXXXXXXXXXXXXXXXXXXXXXXXX               XXXXXXXXXXXXXXXXXXXXXXXXX
             2 XXXXXXXXXXXXXXXXXXXXXXXXX                XXXXXXXXXXXXXXXXXXXXXXXXXX
             3 XXXXXXXXXXXXXXXXXXXXXXXXX                XXXXXXXXXXXXXXXXXXXXXXXXX
    SQL> UPDATE  STAGING ST
      2  SET     ERR_LENGTH = 'Y'
      3  WHERE   EXISTS
      4          (
      5                  WITH    columns_in_staging AS
      6                  (
      7                          /* Retrieve all the columns names for the staging table with the exception of the primary key column
      8                           * and order them alphabetically.
      9                           */
    10                          SELECT  COLUMN_NAME
    11                          ,       ROW_NUMBER() OVER (ORDER BY COLUMN_NAME) RN
    12                          FROM    ALL_TAB_COLUMNS
    13                          WHERE   TABLE_NAME='STAGING'
    14                          AND     COLUMN_NAME != 'ID'
    15                          ORDER BY 1
    16                  ),      staging_unpivot AS
    17                  (
    18                          /* Using the columns_in_staging above UNPIVOT the result set so you get a record for each COLUMN value
    19                           * for each record. The DECODE performs the unpivot and it works if the decode specifies the columns
    20                           * in the same order as the ROW_NUMBER() function in columns_in_staging
    21                           */
    22                          SELECT  ID
    23                          ,       COLUMN_NAME
    24                          ,       DECODE
    25                                  (
    26                                          RN
    27                                  ,       1,A
    28                                  ,       2,B
    29                                  )  AS VAL
    30                          FROM            STAGING
    31                          CROSS JOIN      COLUMNS_IN_STAGING
    32                  )
    33                  /*      Only return IDs for records that have at least one column value that exceeds the length. */
    34                  SELECT  ID
    35                  FROM
    36                  (
    37                          /* Join the unpivoted staging table to the ALL_TAB_COLUMNS table on the column names. Here we perform
    38                           * the check to see if there are any differences in the length if so set a flag.
    39                           */
    40                          SELECT  STAGING_UNPIVOT.ID
    41                          ,       (CASE WHEN ATC.DATA_LENGTH < LENGTH(STAGING_UNPIVOT.VAL) THEN 'Y' END) AS ERR_LENGTH_A
    42                          ,       (CASE WHEN ATC.DATA_LENGTH < LENGTH(STAGING_UNPIVOT.VAL) THEN 'Y' END) AS ERR_LENGTH_B
    43                          FROM    STAGING_UNPIVOT
    44                          JOIN    ALL_TAB_COLUMNS ATC     ON ATC.COLUMN_NAME = STAGING_UNPIVOT.COLUMN_NAME
    45                          WHERE   ATC.TABLE_NAME='BASE'
    46                  )       A
    47                  WHERE   COALESCE(ERR_LENGTH_A,ERR_LENGTH_B) IS NOT NULL
    48                  AND     ST.ID = A.ID
    49          )
    50  /
    2 rows updated.
    SQL> SELECT * FROM STAGING;
            ID A                                        B                                        E
             1 XXXXXXXXXXXXXXXXXXXXXXXXXX               XXXXXXXXXXXXXXXXXXXXXXXXX                Y
             2 XXXXXXXXXXXXXXXXXXXXXXXXX                XXXXXXXXXXXXXXXXXXXXXXXXXX               Y
             3 XXXXXXXXXXXXXXXXXXXXXXXXX                XXXXXXXXXXXXXXXXXXXXXXXXX
    Hopefully the comments make sense. If you have any questions, please let me know.
    This assumes the column names are the same between the staging and base tables. In addition as you add more columns to this table you'll have to add more CASE statements to check the length and update the COALESCE check as necessary.
    Thanks!

  • View's base table?

    How do I know a view's base tables?
    I can view the text from USER_SOURCE,
    but is there any dictionary table which shows the view name and its base table names?

    Hmm, just a note to mention that you may also use V$FIXED_VIEW_DEFINITION.
    For example:
    SQL> select view_definition
      2  from v$fixed_view_definition where view_name='V$LOG';
    VIEW_DEFINITION
    select GROUP#, THREAD#, SEQUENCE#, BYTES, MEMBERS, ARCHIVED, STATUS,
    FIRST_CHANGE#, FIRST_TIME from GV$LOG where inst_id = USERENV('Instance')
    SQL> select view_definition
      2  from v$fixed_view_definition where view_name='GV$LOG';
    VIEW_DEFINITION
    select le.inst_id, le.lenum, le.lethr, le.leseq, le.lesiz*le.lebsz, ledup,
    decode(bitand(le.leflg,1),0,'NO','YES'),
    decode(bitand(le.leflg,24), 8,'CURRENT', 16,'CLEARING', 24,'CLEARING_CURRENT',
           decode(sign(leseq),0,'UNUSED',
                  decode(sign((to_number(rt.rtckp_scn)-to_number(le.lenxs))*bitand(rt.rtsta,2)),-1,'ACTIVE','INACTIVE'))),
    to_number(le.lelos),
    to_date(le.lelot,'MM/DD/RR HH24:MI:SS','NLS_CALENDAR=Gregorian')
    from x$kccle le, x$kccrt rt
    where le.ledup!=0 and le.lethr=rt.rtnum and le.inst_id = rt.inst_id
    HTH
    Aman....

  • Staging area and base are in different schema/tablespace

    Can someone please give me the advantages and disadvantages of keeping staging tables and base tables in the same schema? For example, we have a staging area where we truncate the staging table daily, load it, and run the transformation process. Once that is done we move the staging data into the base tables. The base tables hold huge volumes, around 6 to 8 million rows, as no purging is done.
    I want to suggest to my team that we keep them in separate schemas, as I understand this would be good from an I/O point of view.
    Is there any other reason to keep staging and base tables in separate schemas/tablespaces?

    Hi,
    I definitely agree with the previous answers. You wrote about a staging area, transformations and so on, so I assume this is a data warehouse. Staging and base tables should be stored in different data files, ideally on different hard drives, because there can be a high load on the disk subsystem during the ETL tasks. Storing them in different schemas is a separate subject that is not related to disk performance.
    When it comes to the block size of the data files, there is an approach like this: for an OLTP system it is a good idea to set up smaller blocks, but for a data warehouse it is often recommended to use larger block sizes such as 16 KB or 32 KB, which helps when reading and accessing data blocks. A sketch of the tablespace setup follows below.
    Regards,
    Cuneyt
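    As a rough sketch of the separate-tablespace idea (all names and sizes are hypothetical, and a non-default block size requires the matching db_nK_cache_size buffer pool to be set first):
    -- Hypothetical names and sizes; run as a DBA.
    ALTER SYSTEM SET db_32k_cache_size = 256M;
    CREATE TABLESPACE stg_data DATAFILE '/u01/oradata/stg_data01.dbf' SIZE 10G;
    CREATE TABLESPACE dwh_data DATAFILE '/u02/oradata/dwh_data01.dbf' SIZE 50G BLOCKSIZE 32K;
    CREATE USER stg_owner IDENTIFIED BY changeme DEFAULT TABLESPACE stg_data;
    CREATE USER dwh_owner IDENTIFIED BY changeme DEFAULT TABLESPACE dwh_data;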

  • Need to know base table and base column name for Oracle view columns

    I am trying to load metadata for some views, and that requires me to know the final base table and column name corresponding to each view column.
    For example, if I have a view X_V defined as "select a, b from x_v2", and X_V2 is a view defined as "select a, b, c from x",
    I need to get the following information:
    View   View Col   Base Table   Base Col
    X_V    A          X            A
    X_V    B          X            B
    and so on.
    Is it possible to get this programmatically using any SYS schema tables or dependency tables?
    I tried an indirect approach where I lock tables using LOCK TABLE or FOR UPDATE OF, but I'm not sure I can go down to the level of individual view columns.
    Can you help me with this?

    Thanks. I was looking at some indirect approaches.
    I came up with this script, which does it faster than querying dependencies, but it only gives the tables, not the column mapping.
    declare
        -- Base tables currently locked by this session (via v$lock and sys.obj$).
        cursor bt(cp_sid number) is
            select u.name uname, o.name oname
            from   sys.obj$ o, sys.user$ u
            where  o.obj# in (select id1
                              from   v$lock
                              where  sid = cp_sid)
            and    u.user# = o.owner#;
        -- Current session id.
        cursor c_sid is
            select sid
            from   v$session
            where  audsid = userenv('sessionid');
        l_sid    number;
        l_view   varchar2(100) := 'PER_PEOPLE_V';
        l_schema varchar2(10)  := 'APPS';
    begin
        open c_sid;
        fetch c_sid into l_sid;
        close c_sid;
        dbms_output.put_line('SID: '||l_sid);
        -- Locking the view in row share mode also takes locks on its base tables.
        execute immediate 'lock table '||l_schema||'.'||l_view||' in row share mode nowait';
        for i in bt(l_sid) loop
            dbms_output.put_line(i.uname||'.'||i.oname);
            execute immediate 'alter table '||i.uname||'.'||i.oname||' disable table lock';
            execute immediate 'alter table '||i.uname||'.'||i.oname||' enable table lock';
        end loop;
    end;
    It basically uses the locks taken on the view to identify locks on the base tables.
    Just wondering, can we use FOR UPDATE OF statements to get down to the column level?

  • API to update an Oracle base table in Oracle Apps

    Hi,
    What is the API used to update a base table in Oracle Contracts?
    Please suggest.

    If you are on 11i, please check irep.oracle.com
    It lists all APIs available.
    IREP - Oracle Integration Repository: The Tool To Find Which API Is Supported and How To Use It ... (Doc ID 554986.1)
    For R12, see
    Oracle Integration Repository Documentation Resources Release 12 (Doc ID 396116.1)
    Note: 462586.1 - Where are the Oracle® Release 12 (R12) API Reference Guide?
    https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=462586.1
    Note: 458225.1 - Release 12 Integration Repository
    https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=458225.1
    Hope this helps,
    Sandeep Gandhi

  • PO Base Tables and Validation

    Hello All,
    I need to develop a conversion interface for open purchase orders.
    I need help identifying the base tables used for this.
    I also need to know which validations need to be done.
    I have identified a few:
    1.  PO number already exists in Oracle EBS
    2.  Vendor not defined in Oracle EBS
    3.  Currency code not defined in Oracle EBS
    4.  Bill-to location not defined
    5.  Ship-to location not defined
    6.  Payment terms not defined in Oracle EBS
    7.  Vendor site not defined
    8.  Buyer not defined in EBS
    9.  Ship-to org does not exist in EBS
    10. UOM code not defined in EBS
    11. Account code not valid
    Can someone please help based on their experience?
    Thanks

    Hi,
    The three important interface tables when you are converting POs are:
    PO_HEADERS_INTERFACE
    PO_LINES_INTERFACE
    PO_DISTRIBUTIONS_INTERFACE
    Normally, when you convert the data through the standard interface, all validations are taken care of by these interface tables. Import Standard Purchase Orders is the standard concurrent program provided by Oracle to load the data from the interface tables into the base tables: PO_HEADERS_ALL, PO_LINES_ALL, PO_LINE_LOCATIONS_ALL and PO_DISTRIBUTIONS_ALL. A rough sketch of the interface load follows below.
    You also need to take care of partially received and partially billed POs.
    Hope this helps,
    Raghav
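    As a sketch only (minimal column list, hypothetical bind values; check irep.oracle.com and the Import Standard Purchase Orders documentation for the full set of required columns in your release):
    INSERT INTO po_headers_interface
           (interface_header_id, batch_id, action, document_type_code,
            vendor_id, vendor_site_id, agent_id, currency_code, org_id,
            creation_date, created_by, last_update_date, last_updated_by)
    VALUES (po_headers_interface_s.NEXTVAL, :batch_id, 'ORIGINAL', 'STANDARD',
            :vendor_id, :vendor_site_id, :buyer_id, :currency_code, :org_id,
            SYSDATE, :user_id, SYSDATE, :user_id);
    INSERT INTO po_lines_interface
           (interface_line_id, interface_header_id, line_num, line_type,
            item_id, quantity, unit_price, unit_of_measure,
            ship_to_organization_id, ship_to_location_id, need_by_date,
            creation_date, created_by, last_update_date, last_updated_by)
    VALUES (po_lines_interface_s.NEXTVAL, :interface_header_id, 1, 'Goods',
            :item_id, :quantity, :unit_price, :uom,
            :ship_to_org_id, :ship_to_location_id, :need_by_date,
            SYSDATE, :user_id, SYSDATE, :user_id);
    -- PO_DISTRIBUTIONS_INTERFACE is populated similarly; then submit the
    -- "Import Standard Purchase Orders" concurrent program and check
    -- PO_INTERFACE_ERRORS for any rejected rows.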

  • Aggregation table - Different agg levels for base table and agg table

    Is it possible to have different aggregation levels for the base table and the aggregate table? Say SUM on a column in the AGG table and COUNT on the same column in the fact table.
    Example:
    Region, Day, Product, Sales Person and Customer are dimensions, and Call is a fact measure.
    FACT_TABLE has columns Region, Day, Product, Sales Person, Customer, Call.
    AGG_TABLE has columns Region, Month, Product, Call.
    We already have a logical table definition for the fact table, say FACT_CALL, with a logical column called "No of Customers".
    For the FACT_TABLE data source, the formula for the column is "Customer" and the aggregation rule is COUNT DISTINCT.
    But the agg table already has a calculated column, TOT_CUSTOMERS, which is calculated and aggregated in the ETL.
    If we map this to the logical column, we have to set the formula to TOT_CUSTOMERS and define the aggregation type as SUM, since it is at the Region, Month and Product level. But OBI does not allow us to do so.
    Is there a workaround for this? Please let us know.
    Regards
    Arun D

    The BI Server picks the table that satisfies a query through column mappings and content levels. You have set the column mapping to TOT_CUSTOMERS, which is right. When it comes to aggregation, since it is already precalculated in the ETL you want to set it to SUM, which I would say is not correct: set the aggregation to COUNT DISTINCT, the same as on the detailed fact, and set the content levels to Month in the date dimension and to the appropriate levels in Region and the other dimensions. The BI Server will then know how to aggregate the rows when it chooses the agg table.

  • Base Tables of Oracle Time And Labor

    What are the base tables or important tables of Oracle Time and Labor?

    Hi,
    Refer to the following thread, as it might help you:
    https://forums.oracle.com/thread/384562?start=0&tstart=0
    Please also refer to the following docs/notes:
    Oracle 11i and R12 Time And Labor (OTL) Timecard Configuration (Doc ID 304340.1)
    APIs in Oracle Time & Labor (Doc ID 216773.1)
    http://docs.oracle.com/cd/B34956_01/current/acrobat/120hxtig.pdf
    Thanks &
    Best Regards,

  • Need to know the base tables for the QM datasources

    Hello SAP Experts,
    I want to know the base tables for the below mentioned datasources which stores the transaction data.
    1. 2LIS_05_QE1: Inspection Characteristics Results
    2. 2LIS_05_QE2: Inspection Characteristics Results (Quan)
    3. 2LIS_05_QVUDN: Usage decision for Inspection
    Need your inputs.
    Thanks,
    Lasya.

    Hi Lasya,
    You can use help.sap.com for this.
    If you do not find the data source there, then:
    Go to the LBWE t-code in R/3; in front of the data source you have a maintenance option.
    Click on that and go inside it in display mode.
    On the left side you will see a generic pool of fields that can be included in the data source, and on the right side the fields already present in the data source.
    In front of every field there is a structure name like MCXXXX, where XXXX is the table name.
    So that field is sourced from that particular table.
    This is what I would do myself to answer your question.
    You can follow the same procedure to find the tables for any LO data source.
    This is accurate in 99% of cases; only in a very few cases, like purchasing, do we get one or two fields from different tables.
    Hope it helps.
    Thanks
    Ajeet

  • Table name as input; output all rows and columns using a procedure

    hi,
    Question: given a table name as input, I need to return all of its rows and columns using a procedure.
    thanks,
    To All

    An example of using the DBMS_SQL package to execute dynamic SQL (in this case to generate CSV data in a file)...
    As the SYS user:
    CREATE OR REPLACE DIRECTORY TEST_DIR AS '\tmp\myfiles'
    /
    GRANT READ, WRITE ON DIRECTORY TEST_DIR TO myuser
    /
    As myuser:
    CREATE OR REPLACE PROCEDURE run_query(p_sql IN VARCHAR2
                                         ,p_dir IN VARCHAR2
                                         ,p_header_file IN VARCHAR2
                                         ,p_data_file IN VARCHAR2 := NULL) IS
      v_finaltxt  VARCHAR2(4000);
      v_v_val     VARCHAR2(4000);
      v_n_val     NUMBER;
      v_d_val     DATE;
      v_ret       NUMBER;
      c           NUMBER;
      d           NUMBER;
      col_cnt     INTEGER;
      f           BOOLEAN;
      rec_tab     DBMS_SQL.DESC_TAB;
      col_num     NUMBER;
      v_fh        UTL_FILE.FILE_TYPE;
      v_samefile  BOOLEAN := (NVL(p_data_file,p_header_file) = p_header_file);
    BEGIN
      c := DBMS_SQL.OPEN_CURSOR;
      DBMS_SQL.PARSE(c, p_sql, DBMS_SQL.NATIVE);
      d := DBMS_SQL.EXECUTE(c);
      DBMS_SQL.DESCRIBE_COLUMNS(c, col_cnt, rec_tab);
      FOR j in 1..col_cnt
      LOOP
        CASE rec_tab(j).col_type
          WHEN 1 THEN DBMS_SQL.DEFINE_COLUMN(c,j,v_v_val,2000);
          WHEN 2 THEN DBMS_SQL.DEFINE_COLUMN(c,j,v_n_val);
          WHEN 12 THEN DBMS_SQL.DEFINE_COLUMN(c,j,v_d_val);
        ELSE
          DBMS_SQL.DEFINE_COLUMN(c,j,v_v_val,2000);
        END CASE;
      END LOOP;
      -- This part outputs the HEADER
      v_fh := UTL_FILE.FOPEN(upper(p_dir),p_header_file,'w',32767);
      FOR j in 1..col_cnt
      LOOP
        v_finaltxt := ltrim(v_finaltxt||','||lower(rec_tab(j).col_name),',');
      END LOOP;
      --  DBMS_OUTPUT.PUT_LINE(v_finaltxt);
      UTL_FILE.PUT_LINE(v_fh, v_finaltxt);
      IF NOT v_samefile THEN
        UTL_FILE.FCLOSE(v_fh);
      END IF;
      -- This part outputs the DATA
      IF NOT v_samefile THEN
        v_fh := UTL_FILE.FOPEN(upper(p_dir),p_data_file,'w',32767);
      END IF;
      LOOP
        v_ret := DBMS_SQL.FETCH_ROWS(c);
        EXIT WHEN v_ret = 0;
        v_finaltxt := NULL;
        FOR j in 1..col_cnt
        LOOP
          CASE rec_tab(j).col_type
            WHEN 1 THEN DBMS_SQL.COLUMN_VALUE(c,j,v_v_val);
                        v_finaltxt := ltrim(v_finaltxt||',"'||v_v_val||'"',',');
            WHEN 2 THEN DBMS_SQL.COLUMN_VALUE(c,j,v_n_val);
                        v_finaltxt := ltrim(v_finaltxt||','||v_n_val,',');
            WHEN 12 THEN DBMS_SQL.COLUMN_VALUE(c,j,v_d_val);
                        v_finaltxt := ltrim(v_finaltxt||','||to_char(v_d_val,'DD/MM/YYYY HH24:MI:SS'),',');
          ELSE
            v_finaltxt := ltrim(v_finaltxt||',"'||v_v_val||'"',',');
          END CASE;
        END LOOP;
      --  DBMS_OUTPUT.PUT_LINE(v_finaltxt);
        UTL_FILE.PUT_LINE(v_fh, v_finaltxt);
      END LOOP;
      UTL_FILE.FCLOSE(v_fh);
      DBMS_SQL.CLOSE_CURSOR(c);
    END;
    This allows the header row and the data to be written to separate files if required.
    e.g.
    SQL> exec run_query('select * from emp','TEST_DIR','output.txt');
    PL/SQL procedure successfully completed.
    Output.txt file contains:
    empno,ename,job,mgr,hiredate,sal,comm,deptno
    7369,"SMITH","CLERK",7902,17/12/1980 00:00:00,800,,20
    7499,"ALLEN","SALESMAN",7698,20/02/1981 00:00:00,1600,300,30
    7521,"WARD","SALESMAN",7698,22/02/1981 00:00:00,1250,500,30
    7566,"JONES","MANAGER",7839,02/04/1981 00:00:00,2975,,20
    7654,"MARTIN","SALESMAN",7698,28/09/1981 00:00:00,1250,1400,30
    7698,"BLAKE","MANAGER",7839,01/05/1981 00:00:00,2850,,30
    7782,"CLARK","MANAGER",7839,09/06/1981 00:00:00,2450,,10
    7788,"SCOTT","ANALYST",7566,19/04/1987 00:00:00,3000,,20
    7839,"KING","PRESIDENT",,17/11/1981 00:00:00,5000,,10
    7844,"TURNER","SALESMAN",7698,08/09/1981 00:00:00,1500,0,30
    7876,"ADAMS","CLERK",7788,23/05/1987 00:00:00,1100,,20
    7900,"JAMES","CLERK",7698,03/12/1981 00:00:00,950,,30
    7902,"FORD","ANALYST",7566,03/12/1981 00:00:00,3000,,20
    7934,"MILLER","CLERK",7782,23/01/1982 00:00:00,1300,,10The procedure allows for the header and data to go to seperate files if required. Just specifying the "header" filename will put the header and data in the one file.
    Adapt to output different datatypes and styles are required.
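    For example, to send the header and the data to two separate files (the file names here are just placeholders), supply the optional fourth parameter:
    SQL> exec run_query('select * from emp','TEST_DIR','header.txt','data.txt');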
