PL/SQL function: using parameter of type PL/SQL record?

Hi,
Is it possible to call the following function from within C#:
CREATE FUNCTION myfunc (par1 table%ROWTYPE) RETURN <sometype>
IS ...
Can ODP.Net handle rowtype records? I have found no hints in the manuals.
thanks,
stephan

Hello Stephan,
Can ODP.Net handle rowtype records?
ODP does not currently support %rowtype (or object) types.
PLSQLAssociativeArray is the only supported collection type at this time.
- Mark
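
A common workaround, sketched below rather than taken from Mark's reply, is to expose a thin wrapper that accepts the record's fields as scalar parameters (or as PL/SQL associative arrays) and assembles the %ROWTYPE record on the database side. The table MYTAB, its columns COL1 and COL2, and the VARCHAR2 return type are placeholders, not names from the original post:

-- Hypothetical wrapper: MYTAB, COL1, COL2 and the VARCHAR2 return type
-- are placeholders; adapt them to the real table and function signature.
CREATE OR REPLACE FUNCTION myfunc_wrap (
    p_col1 IN mytab.col1%TYPE,
    p_col2 IN mytab.col2%TYPE
) RETURN VARCHAR2
IS
    l_rec mytab%ROWTYPE;
BEGIN
    l_rec.col1 := p_col1;
    l_rec.col2 := p_col2;
    -- delegate to the original %ROWTYPE function
    RETURN myfunc(l_rec);
END myfunc_wrap;
/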

Similar Messages

  • Can window and aggregate SQL functions be used in Pro*C embedded SQL?

    It appears that the window functions such as dense_rank() over(partition by ... order by ...) are not available in Pro*C embedded SQL. Can somebody please confirm that that is really the case?
    Thanks
    Rawender Guron

    Please refer to this thread: "Is this forum also used for Pro*C questions?"
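    If the precompiler really does reject them, one commonly suggested workaround (a sketch, not something confirmed in that thread) is to hide the analytic function behind a view and reference the view from the embedded SQL. EMP, DEPTNO and SAL are placeholder names:

    -- Sketch only: the view does the analytic work, so the Pro*C static SQL
    -- only ever sees a plain SELECT against EMP_SAL_RANK.
    CREATE OR REPLACE VIEW emp_sal_rank AS
    SELECT empno,
           deptno,
           DENSE_RANK() OVER (PARTITION BY deptno ORDER BY sal DESC) AS sal_rank
    FROM   emp;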

  • Deploy result set of a PL/SQL function that returns ANYTABLE%TYPE

    How can I deploy the result set of a PL/SQL function that returns ANYTABLE%TYPE in SQL?

    Originally posted by angelrip:
    Hello everyone,
    I've come through the following problem:
    1.- I created a PL/SQL stored procedure which returns a REF CURSOR element; the definition looks like this:
    PACKAGE PKG_LISTADOS AS
    TYPE tuplas IS REF CURSOR;
    /* Procedures exported by the package */
    PROCEDURE inicializarModuloListados;
    FUNCTION recaudacionUltimoMes(medioPago DEF_MEDIO_PAGO.MEDIO_PAGO%TYPE)
    RETURN tuplas;
    2.- Now I would like to call the stored procedure and retrieve the PL/SQL cursor as a ResultSet Java Object. The code I wrote is this:
    Connection conn;
    XmlDocument paramDef;
    conn = poolMgr.getConnection(str_poolDBConnection);
    try {
        CallableStatement cstmt = conn.prepareCall("{?=call PKG_LISTADOS.recaudacionUltimoMes(?)}");
        cstmt.registerOutParameter(1, java.sql.Types.OTHER);
        cstmt.setString(2, "MONEDA");
        cstmt.executeQuery();
        ResultSet rs = (ResultSet) cstmt.getObject(1);
    } catch (SQLException sqlE) {
    }
    3.- However, I can't get it to work; every time I get the following error:
    SQL Error(17004), java.sql.SQLException: Non valid column type
    Can anyone help me with this? Thanks in advance,
    Miguel-Angel
    Do something like the following:
    cstmt = conn.prepareCall("{call customer_proc(?, ?)}");
    // Set the first parameter
    cstmt.setInt(1, 40);
    // Register to get the cursor parameter back from the procedure
    cstmt.registerOutParameter(2, OracleTypes.CURSOR);
    cstmt.execute();
    ResultSet cursor = ((OracleCallableStatement) cstmt).getCursor(2);
    while (cursor.next()) {
        System.out.println("CUSTOMER NAME: " + cursor.getString(1));
        System.out.println("CUSTOMER AGE: " + cursor.getInt(2));
    }
    cursor.close();

  • SQL report region source to call a pl/sql function using DB link

    Hi - I have a PL/SQL function fn_dbtype(id NUMBER) defined in database X. The function executes a couple of DML statements and returns a string (a SELECT query). I am able to call this function from SQL*Plus (connected to database X) as below, and it works fine:
    declare
    vSQL VARCHAR2(100);
    begin
    vSQL := fn_dbtype(1);
    end;
    The DML operations complete fine and vSQL now contains the SELECT query.
    In APEX:
    I am trying to create a SQL report in APEX using the "SQL Query (PL/SQL function body returning a SQL statement)" option. I am trying to figure out what to put in the region source so that the output of the SELECT query is displayed in the report.
    Moreover, APEX is hosted in a different database instance, so I would need to call this PL/SQL function over a DB link.
    Please let me know what I need to put in the region source to execute the PL/SQL function that returns the SELECT query, thereby displaying the query output in the report. Thanks.

    try something like this (the remote call syntax is function_name@dblink_name(args)):
    return fn_dbtype@dblink(1);
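
    Putting that together, the region source for a region of type "SQL Query (PL/SQL function body returning SQL query)" might look like the sketch below; MY_DBLINK is a placeholder for the actual database link name:

    DECLARE
        v_sql VARCHAR2(4000);
    BEGIN
        -- the remote function runs its DML and hands back the SELECT text
        v_sql := fn_dbtype@my_dblink(1);
        RETURN v_sql;
    END;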

  • Managing statistics for object collections used as table types in SQL

    Hi All,
    Is there a way to manage statistics for collections used as table types in SQL?
    Below is my test case
    Oracle Version :
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    CORE    11.2.0.3.0      Production
    TNS for IBM/AIX RISC System/6000: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production
    SQL>
    Original Query:
    SELECT
         9999,
         tbl_typ.FILE_ID,
         tf.FILE_NM ,
         tf.MIME_TYPE ,
         dbms_lob.getlength(tfd.FILE_DATA)
    FROM
         TG_FILE tf,
         TG_FILE_DATA tfd,
          (
               SELECT *
               FROM   TABLE(
                           SELECT CAST(TABLE_ESC_ATTACH(
                                       OBJ_ESC_ATTACH(9999, 99991, 'file1.png', NULL, NULL, NULL),
                                       OBJ_ESC_ATTACH(9999, 99992, 'file2.png', NULL, NULL, NULL))
                                  AS TABLE_ESC_ATTACH)
                           FROM   dual
                      )
          ) tbl_typ
    WHERE
         tf.FILE_ID     = tfd.FILE_ID
    AND tf.FILE_ID  = tbl_typ.FILE_ID
    AND tfd.FILE_ID = tbl_typ.FILE_ID;
    Elapsed: 00:00:02.90
    Execution Plan
    Plan hash value: 3970072279
    | Id  | Operation                                | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                         |              |     1 |   194 |  4567   (2)| 00:00:55 |
    |*  1 |  HASH JOIN                               |              |     1 |   194 |  4567   (2)| 00:00:55 |
    |*  2 |   HASH JOIN                              |              |  8168 |   287K|   695   (3)| 00:00:09 |
    |   3 |    VIEW                                  |              |  8168 |   103K|    29   (0)| 00:00:01 |
    |   4 |     COLLECTION ITERATOR CONSTRUCTOR FETCH|              |  8168 | 16336 |    29   (0)| 00:00:01 |
    |   5 |      FAST DUAL                           |              |     1 |       |     2   (0)| 00:00:01 |
    |   6 |    TABLE ACCESS FULL                     | TG_FILE      |   565K|    12M|   659   (2)| 00:00:08 |
    |   7 |   TABLE ACCESS FULL                      | TG_FILE_DATA |   852K|   128M|  3863   (1)| 00:00:47 |
    Predicate Information (identified by operation id):
       1 - access("TF"."FILE_ID"="TFD"."FILE_ID" AND "TFD"."FILE_ID"="TBL_TYP"."FILE_ID")
       2 - access("TF"."FILE_ID"="TBL_TYP"."FILE_ID")
    Statistics
              7  recursive calls
              0  db block gets
          16783  consistent gets
          16779  physical reads
              0  redo size
            916  bytes sent via SQL*Net to client
            524  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              2  rows processed
    Indexes are present in both tables (TG_FILE, TG_FILE_DATA) on column FILE_ID.
    select
         index_name,blevel,leaf_blocks,DISTINCT_KEYS,clustering_factor,num_rows,sample_size
    from
         all_indexes
    where table_name in ('TG_FILE','TG_FILE_DATA');
    INDEX_NAME                     BLEVEL LEAF_BLOCKS DISTINCT_KEYS CLUSTERING_FACTOR     NUM_ROWS SAMPLE_SIZE
    TG_FILE_PK                          2        2160        552842             21401       552842      285428
    TG_FILE_DATA_PK                     2        3544        852297             61437       852297      852297
    Ideally the view should have used a NESTED LOOP join to exploit the indexes, since only 2 rows come from the object collection.
    But the optimizer takes the default estimate of 8168 rows, leading to a HASH join between the tables and full table scans.
    So my question is: is there any way to change the statistics when using collections in SQL?
    I could use hints to force the indexes, but I am trying to avoid that for now. Also, the time shown in the explain plan is currently not accurate.
    Modified query with hints :
    SELECT    
        /*+ index(tf TG_FILE_PK ) index(tfd TG_FILE_DATA_PK) */
        9999,
        tbl_typ.FILE_ID,
        tf.FILE_NM ,
        tf.MIME_TYPE ,
        dbms_lob.getlength(tfd.FILE_DATA)
    FROM
        TG_FILE tf,
        TG_FILE_DATA tfd,
         (
              SELECT *
              FROM   TABLE(
                          SELECT CAST(TABLE_ESC_ATTACH(
                                      OBJ_ESC_ATTACH(9999, 99991, 'file1.png', NULL, NULL, NULL),
                                      OBJ_ESC_ATTACH(9999, 99992, 'file2.png', NULL, NULL, NULL))
                                 AS TABLE_ESC_ATTACH)
                          FROM   dual
                     )
         ) tbl_typ
    WHERE
        tf.FILE_ID     = tfd.FILE_ID
    AND tf.FILE_ID  = tbl_typ.FILE_ID
    AND tfd.FILE_ID = tbl_typ.FILE_ID;
    Elapsed: 00:00:00.01
    Execution Plan
    Plan hash value: 1670128954
    | Id  | Operation                                 | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                          |                 |     1 |   194 | 29978   (1)| 00:06:00 |
    |   1 |  NESTED LOOPS                             |                 |       |       |            |          |
    |   2 |   NESTED LOOPS                            |                 |     1 |   194 | 29978   (1)| 00:06:00 |
    |   3 |    NESTED LOOPS                           |                 |  8168 |  1363K| 16379   (1)| 00:03:17 |
    |   4 |     VIEW                                  |                 |  8168 |   103K|    29   (0)| 00:00:01 |
    |   5 |      COLLECTION ITERATOR CONSTRUCTOR FETCH|                 |  8168 | 16336 |    29   (0)| 00:00:01 |
    |   6 |       FAST DUAL                           |                 |     1 |       |     2   (0)| 00:00:01 |
    |   7 |     TABLE ACCESS BY INDEX ROWID           | TG_FILE_DATA    |     1 |   158 |     2   (0)| 00:00:01 |
    |*  8 |      INDEX UNIQUE SCAN                    | TG_FILE_DATA_PK |     1 |       |     1   (0)| 00:00:01 |
    |*  9 |    INDEX UNIQUE SCAN                      | TG_FILE_PK      |     1 |       |     1   (0)| 00:00:01 |
    |  10 |   TABLE ACCESS BY INDEX ROWID             | TG_FILE         |     1 |    23 |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       8 - access("TFD"."FILE_ID"="TBL_TYP"."FILE_ID")
       9 - access("TF"."FILE_ID"="TBL_TYP"."FILE_ID")
           filter("TF"."FILE_ID"="TFD"."FILE_ID")
    Statistics
              0  recursive calls
              0  db block gets
             16  consistent gets
              8  physical reads
              0  redo size
            916  bytes sent via SQL*Net to client
            524  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              2  rows processed
    Thanks,
    B

    Thanks Tubby,
    While searching I found that the CARDINALITY hint can be used to set statistics for a TABLE function.
    But I preferred not to mention it, as it is currently an undocumented hint. I now think I should have mentioned it in my first post.
    http://www.oracle-developer.net/display.php?id=427
    Going through that article, it mentions the following ways to set statistics:
    1) CARDINALITY (undocumented hint)
    2) OPT_ESTIMATE (undocumented hint)
    3) DYNAMIC_SAMPLING (documented hint)
    4) Extensible Optimizer
    I tried the different hints and they work as expected, i.e. CARDINALITY and OPT_ESTIMATE take the value set in the hint, but the DYNAMIC_SAMPLING hint provides the most accurate estimate of the rows (which is 2 in this particular case).
    With CARDINALITY hint
    SELECT /*+ cardinality(e, 5) */ *
    FROM   TABLE(
                SELECT CAST(TABLE_ESC_ATTACH(
                            OBJ_ESC_ATTACH(9999, 99991, 'file1.png', NULL, NULL, NULL),
                            OBJ_ESC_ATTACH(9999, 99992, 'file2.png', NULL, NULL, NULL))
                       AS TABLE_ESC_ATTACH)
                FROM   dual
           ) e;
    Elapsed: 00:00:00.00
    Execution Plan
    Plan hash value: 1467416936
    | Id  | Operation                             | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                      |      |     5 |    10 |    29   (0)| 00:00:01 |
    |   1 |  COLLECTION ITERATOR CONSTRUCTOR FETCH|      |     5 |    10 |    29   (0)| 00:00:01 |
    |   2 |   FAST DUAL                           |      |     1 |       |     2   (0)| 00:00:01 |
    With OPT_ESTIMATE hint
    SELECT /*+ opt_estimate(table, e, scale_rows=0.0006) */ *
    FROM   TABLE(
                SELECT CAST(TABLE_ESC_ATTACH(
                            OBJ_ESC_ATTACH(9999, 99991, 'file1.png', NULL, NULL, NULL),
                            OBJ_ESC_ATTACH(9999, 99992, 'file2.png', NULL, NULL, NULL))
                       AS TABLE_ESC_ATTACH)
                FROM   dual
           ) e;
    Execution Plan
    Plan hash value: 4043204977
    | Id  | Operation                              | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                       |      |     5 |   485 |    29   (0)| 00:00:01 |
    |   1 |  VIEW                                  |      |     5 |   485 |    29   (0)| 00:00:01 |
    |   2 |   COLLECTION ITERATOR CONSTRUCTOR FETCH|      |     5 |    10 |    29   (0)| 00:00:01 |
    |   3 |    FAST DUAL                           |      |     1 |       |     2   (0)| 00:00:01 |
    With DYNAMIC_SAMPLING hint
    SELECT /*+ dynamic_sampling(e, 5) */ *
    FROM   TABLE(
                SELECT CAST(TABLE_ESC_ATTACH(
                            OBJ_ESC_ATTACH(9999, 99991, 'file1.png', NULL, NULL, NULL),
                            OBJ_ESC_ATTACH(9999, 99992, 'file2.png', NULL, NULL, NULL))
                       AS TABLE_ESC_ATTACH)
                FROM   dual
           ) e;
    Elapsed: 00:00:00.00
    Execution Plan
    Plan hash value: 1467416936
    | Id  | Operation                             | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                      |      |     2 |     4 |    11   (0)| 00:00:01 |
    |   1 |  COLLECTION ITERATOR CONSTRUCTOR FETCH|      |     2 |     4 |    11   (0)| 00:00:01 |
    |   2 |   FAST DUAL                           |      |     1 |       |     2   (0)| 00:00:01 |
    Note
       - dynamic sampling used for this statement (level=2)
    I will be testing the last option, the Extensible Optimizer, and will post my findings here.
    I hope Oracle improves statistics gathering for collections used in SQL in future releases, rather than just using the default derived from the block size.
    By the way, do you know why it uses that block-size-based default? Is it because that is the smallest granular unit Oracle provides?
    Regards,
    B

  • Demonstrating PL/SQL Functions Using SQL Developer

    Good afternoon,
    I'm starting to write some PL/SQL functions to replace some of the SQL that I use most frequently.  A couple of very simple examples would be:
    create or replace function func_test (p_1 number) return number
    is
    x number;
    y number;
    begin
    x :=1;
    y :=2;
    return p_1 * x * y;
    end func_test;
    create or replace function func_test2 (p_1 varchar2) return varchar2
    is
    return_val varchar2(10);
    begin
    select p_1 into return_val from dual;
    return return_val;
    end func_test2;
    However, at my workplace I won't be granted CREATE FUNCTION privileges until I can demonstrate some examples, which is understandable.
    For the time being, without these privileges, is there a way I can build/test functions in principle locally using SQL Developer without the need to write the functions to our database? I.e. can I demonstrate the above in SQL Developer, but without wrapping in create or replace syntax?
    I hope this isn't too vague.
    Using Oracle 11gR2 (not logged in to workplace database at the moment for specific version no.)
    SQL Developer 3.4
    Thanks,
    TP

    sb92075 02-Nov-2013 19:12 (in response to TinyPenguin)
    populating test DB with data is a solvable problem.
    You don't need client data to test code (functions).
    You only need sample test data; which generally is less than a few dozen records per table.
    Absolutely, of course. Our client database is pretty messy though, and includes data prior to the implementation of more recent business rules that I need to take account of. Useful perspective though, thanks.
    rp0428 02-Nov-2013 19:14 (in response to TinyPenguin)
    Sure, but then I wouldn't have access to all the data in our client database to test functions under various circumstances.
    Huh? Why not? It's your database so what keeps you from creating a database link to your client database where all the data is?
    Also, I suppose it's not good practice to constantly write/replace/drop functions to/from a database when developing them? Better to test the function in principle and then write to the database?
    Huh? Why not? What you think a dev database is for if not for development?
    Based on your two posts so far in this thread it's understandable why they don't want to give you privileges yet. Those sample 'functions' you posted are NOT a good use for functions.
    In sql developer you can just create and save the queries you use most often. There is no need to create functions for that.
    But if you do need an anonymous function now and then just create one using sql*plus syntax:
    Our IT department are pretty sensitive about how they allow access, even to the dev environment. As you've identified, I'm not naturally a programmer, so being able to play around with the data and develop some representative examples of how we can simplify and devolve SQL reporting to more members of staff is useful to me. I just wrote those two functions quickly for the purpose of posting some sample data, which I thought would be helpful. Thanks for illustrating how to return their output using an anonymous block.
    FrankKulash 02-Nov-2013 19:13 (in response to TinyPenguin)
    Hi,
    The obvious solution is to get the privileges.  If your employer really wants you to do something, they need to give you the necessary privileges to do it.  It's silly for them to tell you to do something, but refuse to let you do it.
    Failing that, you can install Oracle on your own machine, as suggested above.  It's free and legitimate if you're only using it for learning and developing.  Oracle Express Edition is very easy to install.
    As a last resort, you can write functions and procedures that are local to an anonymous block, like this:
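    For instance, a local function inside an anonymous block might look like the sketch below (based on the func_test example above, not on Frank's original code):

    DECLARE
        -- local function: visible only inside this block, so no CREATE privilege is needed
        FUNCTION func_test (p_1 NUMBER) RETURN NUMBER
        IS
            x NUMBER := 1;
            y NUMBER := 2;
        BEGIN
            RETURN p_1 * x * y;
        END func_test;
    BEGIN
        DBMS_OUTPUT.PUT_LINE(func_test(5));   -- prints 10
    END;
    /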
    Thanks Frank. Yeah, I'm going to speak with our DBA next week about privileges. I've got XE and SQL Developer installed on my own computer (that's where I wrote those sample functions); I just wasn't sure how to call and return them from anonymous blocks, as you and rp have now shown, so I can develop at work as an interim solution.
    Thanks a lot All,
    TP.

  • Unable to retrieve the return value of a PL/SQL function using DB Adapter

    Dear Experts,
    I am using the DB Adapter in my BPEL process to invoke a PL/SQL function. I am able to send two input parameters to the function, but I don't know how to retrieve the return value from the function. Please advise.
    Thanks,
    Rajesh

    Yes I am returning a value from PL/SQL function.
    Please see the code segments below,
    FUNCTION "TD_INSERT" (a TDINIT_TYPE, stops TDDETAIL_TABLE )
    RETURN VARCHAR2
    AS
    td_no Number;
    td_id Number;
    stop TDDETAILFULL_TYPE;
    length number;
    BEGIN
    insert into TD_INIT values( ----passing all the values here --------- );
    select max(tdno) into td_no from TD_INIT ;
    length := stops.count;
    for i in 1.. length loop
    stop := stops(i);
    insert into TD_DETAIL_FULL values(
    td_no, ------- );
    end loop;
    commit;
    RETURN td_no;
    END;
    Thanks,
    Rajesh
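
    If the adapter wizard does not expose the function's return value directly, one common workaround (a sketch, not taken from this thread) is to wrap the function in a procedure so the result comes back through an OUT parameter, which the DB Adapter maps to an output element:

    -- Hypothetical wrapper around the TD_INSERT function shown above.
    CREATE OR REPLACE PROCEDURE td_insert_wrap (
        a       IN  TDINIT_TYPE,
        stops   IN  TDDETAIL_TABLE,
        p_td_no OUT VARCHAR2
    ) AS
    BEGIN
        p_td_no := td_insert(a, stops);
    END td_insert_wrap;
    /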

  • XML SQL functions causing internal errors in PL/SQL

    Still busy searching through Metalink, but have yet to find anything useful. Maybe someone can point me towards a URL or a doc ref that has some hints on how to solve the following:
    Oracle: 10.2.0.1.0 64bit
    O/S: HP-UX 11
    SQL> var l varchar2(4000)
    SQL>
    SQL>
    SQL> select XMLForest( 123 as "ID" ) as L from dual;
    L
    <ID>123</ID>
    SQL> begin
      2     select XMLForest( 123 as "ID" ) into :l from dual;
      3  end;
      4  /
            select XMLForest( 123 as "ID" ) into :l from dual;
    ERROR at line 2:
    ORA-06550: line 2, column 2:
    PLS-00801: internal error [*** ASSERT at file pdw4.c, line  793;
    Cannot coerce between type 43 and type 30; anon_c000000044b37bb0__AB[2, 2]]
    SQL> begin
      2     EXECUTE IMMEDIATE 'select XMLForest( 123 as "ID" ) from dual' into :l;
      3* end;
    SQL> /
    PL/SQL procedure successfully completed.
    SQL> print l
    L
    <ID>123</ID>

    Ah, found the problem. The function returns a type (XMLTYPE), and its result seems to be implicitly cast to a VARCHAR2 when you run a SELECT on it via the SQL engine.
    PL/SQL sees it as a type (which it is) and expects you to fetch it into a variable of the same type. It does not allow/perform the implicit conversion to VARCHAR2.
    But then SQL is not Turing Complete so I guess we'll have to let the SQL Engine slide on doing these implicit conversions... :-)
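
    One way to sidestep the implicit conversion, as a sketch, is to fetch into an XMLTYPE variable and convert explicitly with getStringVal():

    declare
        l_xml xmltype;
        l_str varchar2(4000);
    begin
        -- fetch into a variable of the type the function actually returns ...
        select xmlforest(123 as "ID") into l_xml from dual;
        -- ... then make the conversion to VARCHAR2 explicit
        l_str := l_xml.getstringval();
    end;
    /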

  • Execute SQL Task with Parameter - from DB2 to SQL Server

    I am pulling data from DB2 to SQL Server.
    1st Execute SQL task runs the following against DB2:
    SELECT TIMESTAMP(CHAR(CURRENT_DATE - (DAY(CURRENT_DATE)-1) DAYS - 1 MONTH)) as START_MONTH
    FROM SYSIBM.SYSDUMMY1;
    I'm storing it as a Result Set.
    Next I have a Data Flow Task.  This pulls data from DB2 where the date matches the parameter.
    FROM SCHEMA.TABLE t
    WHERE DATE_TIME_COLUMN= ?
    This works fine - I'm guessing because the parameter source is DB2 and the Data Flow source is DB2.
    The problem is, I want to remove existing data for the same date in the SQL table.  IE, if I'm pulling March 2014 data from DB2, I want to be sure there is no March 2014 data in the SQL table.  The main use is for re-runs.
    So, I added another Execute SQL task after the first one that assigns the variable, and before the Data Flow Task. This runs the following statement against SQL Server:
    DELETE FROM
    database.dbo.table
    WHERE DATE_TIME_FIELD >= ?
    The package fails at the new Execute SQL Task with the following error message:
    Error: 0xC002F210 at Execute SQL Task, Execute SQL Task: Executing the query "DELETE FROM
    SBO.dbo.tblHE_MSP_FEE_BUCKETS
    WHERE T..." failed with the following error: "Parameter name is unrecognized.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established
    correctly.
    Task failed: Execute SQL Task
    SSIS package "Load_MSP_Fee_Buckets_SQL.dtsx" finished: Success.
    The program '[14240] Load_MSP_Fee_Buckets_SQL.dtsx: DTS' has exited with code 0 (0x0).
    I am assuming this has something to do with the parameter source being DB2 while the task runs against SQL Server?
    Any suggestions on how to make this work??
    Thanks,
    -Al H

    "Parameter name is unrecognized" is the key - how is DB2 involved if the task runs against SQL Server?
    What I think is happening is that you are not using an OLE DB connection to SQL Server.
    Likewise, if it is an ADO.NET connection, then the query needs to be
    DELETE FROM
    database.dbo.table
    WHERE DATE_TIME_FIELD >= @MyDate
    Arthur My Blog

  • Calling function using user-defined type from query

    I am in a bit over my head on this, at least at 2:30 in the morning after several hours of intense development. I am working on a function to return financial aging figures by aging date range. I created a user-defined type Aging with a Currency member for each of the ranges:
     Public Type Aging
      Aging0 As Currency
      Aging30 As Currency
      Aging60 As Currency
      Aging90 As Currency
      Aging120 As Currency
     End Type
    I then created a function that accepts the account ID and the as-of date. I have all the rest of the code written to actually calculate everything, but I am having a fundamental problem figuring out how to call the function. To eliminate any of the complex mathematical elements for testing, I have reduced the function to a set of simple explicitly-set values below:
    Public Function GetAging(AccountID As Integer, AsOfDate As Date) As Aging
        GetAging.Aging0 = 0
        GetAging.Aging30 = 0
        GetAging.Aging60 = 0
        GetAging.Aging90 = 0
        GetAging.Aging120 = 0
    End Function
    This all works fine from the immediate window, where I can type this:
    ?GetAging(115,#2015-04-05#).Aging0
    and get the expected response of 0 (I have tested with other values for each member, so I know this part is working correctly).
    But can I and/or how do I call this from a query? I should be able, as a test, to pass in each account # and the current date, just to get the Aging0 element out of my function--something like this:
     SELECT
      AccountID,
      CustomerName,
      GetAging([AccountID],Date()).Aging0
     FROM
      Customer
     ORDER BY
      CustomerName
    I get "The expression you entered has an invalid .(dot) or ! operator or invalid parentheses. But if I remove the ".Aging0", I get this error: Undefined function 'GetAging' in expression. 
    What am I missing here?

    To expand on the answer of Hans, you could use an AgePoint parameter in the function like this:
    Public Function GetAging(AccountID As Integer, AsOfDate As Date, AgePoint As Integer) As Currency
        Dim Result As Aging
        ' do the calculation, filling Result
        Select Case AgePoint
            Case 0: GetAging = Result.Aging0
            Case 30: GetAging = Result.Aging30
            Case 60: GetAging = Result.Aging60
            Case 90: GetAging = Result.Aging90
            Case 120: GetAging = Result.Aging120
        End Select
    End Function
    then use it in the query like this:
    SELECT
    AccountID,
    CustomerName,
    GetAging([AccountID],Date(), 0),
    GetAging([AccountID],Date(), 30),
    GetAging([AccountID],Date(), 60),
    GetAging([AccountID],Date(), 90),
    GetAging([AccountID],Date(), 120)
    FROM
    Customer
    ORDER BY
    CustomerName
    However, this has the big drawback that the function will be called 5 times for each and every AccountID, so you will probably have bad performance.
    Since you pass the AccountID to the function, you could consider calculating the results for all AccountIDs in code and storing them in a table, then joining that table into the query.
    Matthias Kläy, Kläy Computing AG

  • Error while submitting a report using a parameter of type string

    Requirement: the selection screen has a parameter for the input file, with a checkbox below it. If the checkbox is checked, the program should read the file from the desktop and, based on that data, the report should execute in background mode.
    I have implemented it as follows: based on the checkbox and sy-batch, I read the file contents placed in AL11 and then submit the report, setting the path in the parameter. Accordingly, when sy-batch eq 'X' I read the file from AL11 using the path specified in the parameter.
    This works fine in Dev & Prd, but in the Quality system I am facing an error like "Wrong type passing parameters P_FILE".
    Message No. DB036.

    Possible causes:
    1. The file name should be in caps (upper/lower case conversion).
    2. When executing in background, the variant is not picked up.
    3. Collision of the file path across servers in a system.
    A system will have several application servers; for example, Quality can have servers such as sapqp1, sapqp2 and sapqp3.
    Here qp1, qp2 and qp3 are the Quality servers that can hold the Unix data (AL11). If your check is directed to qp2 and qp2 doesn't have the file path, it throws an error. We recently faced a similar issue; the problem was solved when the Basis team made the setting on all servers to access the path.
    Try these options.
    Br,
    vijay.

  • SQL Query using DATS data type

    Hello people,
    I have a table 'TableA' with a column 'xpto' that has DATS data type.
    A lot of lines in this table have this field empty.
    I have to create a SQL query to select those lines with this field not empty.
    Select * from TableA
    where xpto is not null
    It does not work.... The result is all the lines in the table.
    What is wrong?
    Thanks!

    Easiest way to do this is:
    constants l_xpto type tableA-xpto value is initial.
    Select * from TableA
    where xpto ne l_xpto.
    You can see the NULL/NOT NULL settings for the column through SE11
    Utilities -> Database Object -> Display.
    I tend to set the "Initial Value" flag when adding columns to tables, which creates the database column as NOT NULL with a default value (whatever SAP considers the initial value for that datatype). I think SAP does this by default, but I've had issues where it hasn't so I do this to ensure consistency
    Chris Bain

  • How to rewrite dynamic SQL built using IF clauses, as static SQL?

    Hi folks
    I have some Dynamic SQL that I would like to rewrite as static.
    The original PL/SQL code contains a number of IF clauses that I would normally replace with CASE statements in the static code, but I am wondering whether I'm doing it the best way: although it works, I am not that happy with it, for the reasons described below.
    [Please excuse any remaining syntax errors, I have tried to reduce them but we have no Oracle on our internet network and I can't CutnPaste from the dev network.]
    So...
    An example of what I would normally do.
    A silly example, I admit, but it's just for demo.
    Original Dynamic Code:
    PROCEDURE GET_EMP_NAMES(p_job_title  IN VARCHAR2 := NULL
                           ,p_car_class  IN NUMBER   := NULL
                           ,p_REF_CURSOR OUT CURVAR)
      /* Purpose: Provide names of all employees.  But if MANAGERS only is requested,
         then limit the result set to those employees with the car class specified.
         Fields in table emp: first_name, fam_name, car_class (NUMBER) */
    AS
      v_str VARCHAR2(1000);
    BEGIN
      v_str := 'select e.first_name, e.fam_name FROM emp e WHERE 1=1 ';
      IF p_job_title = 'MANAGER' and p_car_class IS NOT NULL THEN
          v_str :=  v_str || ' AND e.car_class = ' || p_car_class;
      END IF;
      OPEN p_REF_CURSOR FOR v_str;
    END;
    My usual rewrite:
    BEGIN
      OPEN p_REF_CURSOR FOR
        SELECT e.first_name, e.fam_name
          FROM emp e
         WHERE e.car_class =
                          CASE WHEN p_job_title = 'MANAGER' AND p_car_class IS NOT NULL
                                 THEN p_car_class
                               ELSE e.car_class
                           END;
    END;
    My problem with this is that some employees may have an e.car_class of NULL.
    In this case the WHERE clause will evaluate to NULL = NULL, and so they will not be included in the result set.
    So I usually defend with NVL() and some fictive constant for the NVL result, ie:
    AS
      k_dummy_val CONSTANT NUMBER(20) := 56394739465639473946;
    BEGIN
      OPEN p_REF_CURSOR FOR
        SELECT e.first_name, e.fam_name
          FROM emp e
         WHERE NVL(e.car_class,k_dummy_val) =
                        CASE WHEN p_job_title = 'MANAGER' AND p_car_class IS NOT NULL
                               THEN p_car_class
                             ELSE NVL(e.car_class,k_dummy_val)
                         END;
    END;
    But now I have had to decide what to use for k_dummy_val. I need to choose some fictive number that is not, and never will be, used as a car class. This is a problem, as I cannot know that.
    So this makes me think that my technique is incorrect.
    Do you have any suggestions?
    Many thanks
    Assaf

    Hi,
    Assaf wrote:
    ... if I were to do the following then it would be a mess, right? Brackets are needed and if I have many such conditions and make a mistake with the brackets, then... :(
    Brackets may be needed, but they're a good idea even if they are not needed.
    WHERE   p_job_title != 'MANAGER'
    OR      'X' || TO_CHAR (car_class) = 'X' || TO_CHAR (p_car_class)
    AND     p_yet_another_param != 'SOMETHING ELSE'
    OR      'X' || TO_CHAR (another_field) = 'X' || TO_CHAR (p_another_param)
    What do you think?
    Using parentheses, that would be:
    WHERE   (   p_job_title != 'MANAGER'
            OR  'X' || TO_CHAR (car_class) = 'X' || TO_CHAR (p_car_class)
            )
    AND     (   p_yet_another_param != 'SOMETHING ELSE'
            OR  'X' || TO_CHAR (another_field) = 'X' || TO_CHAR (p_another_param)
            )
    I find this clearer than CASE expressions, but I won't be maintaining your code.
    This is one place where implicit conversions won't hurt you. Both operands to || have to be strings. If you use a NUMBER as an operand to ||, then it will be implicitly converted to a string. Usually, it's better to make this clear by using TO_CHAR, and converting the NUMBER to a string explicitly, but in this case you might gain clarity by not using TO_CHAR.
    Put comments in your code where you think they'll help someone to read and understand it.
    WHERE
         -- p_car_class only applies to MANAGERs
         (   p_job_title != 'MANAGER'
         OR  'X' || car_class = 'X' || p_car_class
         )
    AND  (   p_yet_another_param != 'SOMETHING ELSE'
         OR  'X' || another_field = 'X' || p_another_param
         )
    ... I avoid dynamic whenever possible, leaving it for use only in situations where things such as Table Names are unknown at compile time. I find it complex and error prone, and especially difficult for another developer to support at a later date.
    I agree. I agree especially with being concerned about the developer who has to modify or debug the code in the future, regardless of whether it's you or another.
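
    For what it's worth, here is one more NULL-safe variant, sketched with the same EMP columns as the question, that needs neither a dummy constant nor string concatenation:

    OPEN p_ref_cursor FOR
        SELECT e.first_name, e.fam_name
        FROM   emp e
        WHERE  e.car_class = p_car_class           -- apply the filter ...
        OR     p_car_class IS NULL                 -- ... unless no car class was given
        OR     NVL(p_job_title, ' ') != 'MANAGER'; -- ... or MANAGERs were not requested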

  • How do I pass an array of structs to a C function using the dll flexible prototype adapter?

    What I want to do is pass into a C DLL function a variably sized array of structs of type TPS_Data. My code compiles, but when I run it in TestStand I get error -17001, Program Error: "Cannot allocate 0 size buffer Error in parameter 2, 'OpenFrdData'."
    I've allocated the Array of structs, and all of the information is there before I call my function, so is it my prototype? Or am I asking too much of the DLL Flexible Prototype Adapter to pass an Array of Structs?
    I can pass in a single struct of type TPS_Data and that works, but not an array.
    Here's the relevent code:
    typedef struct TPS_DATA
    {
        char Report_Number[256];
        char System_Name[256];
        char Open_Date[256];
        char UUT_Part_Number[256];
        char UUT_Serial_Number[256];
        char UUT_Name[256];
        char Open_Employee_Name[256];
        char Open_Employee_Number[256];
        char Close_Employee_Name[256];
        char Close_Employee_Number[256];
        char Close_Date[256];
    } TPS_Data;
    typedef struct TPS_DATA_ARRAY
    {
        TPS_Data DataRecord;
    } TPS_DataArray;
    long __declspec(dllexport) __stdcall OpenDialog (CAObjHandle Context, TPS_DataArray *TpsData[], const char *psFaultStr, char *sComments, const int nCount);

    OK,
    I can pass the data to the DLL function, using the following types:
    typedef struct StringArrayType
    {
        char string[10][256];
    } StringArray;
    typedef struct MultiStringArrayType
    {
        StringArray Record[10];
    } MultiStringArray;
    void __declspec(dllexport) __stdcall ATP_TestStructPassing(StringArray Strings)
    {
        return;
    }
    void __declspec(dllexport) __stdcall ATP_TestMultiStructPassing(MultiStringArray *Strings)
    {
        return;
    }
    But when the MultiStruct function exits, TestStand reports an error:
    -17501 "Unexpected Operating System Error" Source: 'TSAPI'
    There doesn't seem to be a way around this, and once the error occurs I have to force-quit TestStand. I've included the sequence file, and the DLL code can be compiled from the functions shown above.
    Any thoughts on how to get around this error would be greatly appreciated.
    Attachments:
    StructArrayPassing.seq 16 KB

  • Calling PL/SQL functions in SQL

    Hi,
    I had a question on the behaviour of PL/SQL functions when they are embedded in SQL queries.
    I have a simple PL/SQL function:
    function get_city_id
    (vCUSTOMER_GROUP_ID IN NUMBER)
    RETURN VARCHAR2 AS
    vIDs VARCHAR2(1000);
    begin
    vIDs := '2,3';
    RETURN vIDs;
    end;
    I use a SQL query to call this PL/SQL function:
    select equipment_id from equipment where equipment_type_id in (get_city_id(0));
    When I run this query I get this error:
    ERROR at line 1:
    ORA-01722: invalid number
    I understand that the SQL compiler is complaining that the get_city_id() function is not returning a number. But isn't the PL/SQL call supposed to be replaced with whatever string is returned from the function?
    This brings me to the second question: how can we return multiple values from a function to the SQL query when the function is called in a SQL query? In my example above, can the get_city_id() function return more than one city id, and if yes, how?
    Thanks for the help in advance
    regards
    Alankar

    How can we return multiple values from a function to the SQL query?
    Have it return a collection, e.g.
    CREATE OR REPLACE TYPE VARCHAR2_TT AS TABLE OF VARCHAR2(4000)
    CREATE OR REPLACE FUNCTION test_collection
        RETURN VARCHAR2_TT
    AS
    BEGIN
        RETURN VARCHAR2_TT(1,2,3);
    END;
    SELECT *
    FROM   employees
    WHERE  emp_id IN
           ( SELECT column_value
             FROM   TABLE(test_collection) );
