Distributed queries + pipelined table function

Hi friends,
Can I get better performance for distributed queries if I use a pipelined table function? I have my data distributed across three different databases.
Thanks,
somy

You will need to grant EXECUTE access on the pipelined table function to whatever users want it. When other users call this function, they may need to prefix the schema owner (i.e. <<owner>>.getValue('001') ) unless you've set up the appropriate synonym.
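For example, a minimal sketch (assuming the function is named getValue, owned by a schema called OWNER, and granted to a hypothetical report_user):

-- as the owning schema:
GRANT EXECUTE ON getValue TO report_user;
-- as report_user (or create a PUBLIC synonym instead):
CREATE SYNONYM getValue FOR owner.getValue;
-- the schema prefix is then no longer needed:
SELECT * FROM TABLE(getValue('001'));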
Justin
Distributed Database Consulting, Inc.
http://www.ddbcinc.com/askDDBC

Similar Messages

  • Performance issues with pipelined table functions

    I am testing pipelined table functions to be able to re-use the base_query function. Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something? The processor function is from "improving performance with pipelined table functions" (http://www.oracle-developer.net/display.php?id=429).
    Edit: The underlying query returns 500,000 rows in about 3 minutes. So there are no performance issues with the query itself.
    Many thanks in advance.
    CREATE OR REPLACE PACKAGE pipeline_example
    IS
       TYPE resultset_typ IS REF CURSOR;
       TYPE row_typ IS RECORD (colC VARCHAR2(200), colD VARCHAR2(200), colE VARCHAR2(200));
       TYPE table_typ IS TABLE OF row_typ;
       FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
          RETURN resultset_typ;
       c_default_limit   CONSTANT PLS_INTEGER := 100;  
       FUNCTION processor (
          p_source_data   IN resultset_typ,
          p_limit_size    IN PLS_INTEGER DEFAULT c_default_limit)
          RETURN table_typ
          PIPELINED
          PARALLEL_ENABLE(PARTITION p_source_data BY ANY);
       PROCEDURE with_pipeline (argA          IN     VARCHAR2,
                                argB          IN     VARCHAR2,
                                o_resultset      OUT resultset_typ);
       PROCEDURE no_pipeline (argA          IN     VARCHAR2,
                              argB          IN     VARCHAR2,
                              o_resultset      OUT resultset_typ);
    END pipeline_example;
    CREATE OR REPLACE PACKAGE BODY pipeline_example
    IS
       FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
          RETURN resultset_typ
       IS
          o_resultset   resultset_typ;
       BEGIN
          OPEN o_resultset FOR
             SELECT colC, colD, colE
               FROM some_table
              WHERE colA = ArgA AND colB = argB;
          RETURN o_resultset;
       END base_query;
       FUNCTION processor (
          p_source_data   IN resultset_typ,
          p_limit_size    IN PLS_INTEGER DEFAULT c_default_limit)
          RETURN table_typ
          PIPELINED
          PARALLEL_ENABLE(PARTITION p_source_data BY ANY)
       IS
          aa_source_data   table_typ;-- := table_typ ();
       BEGIN
          LOOP
             FETCH p_source_data
             BULK COLLECT INTO aa_source_data
             LIMIT p_limit_size;
             EXIT WHEN aa_source_data.COUNT = 0;
             /* Process the batch of (p_limit_size) records... */
             FOR i IN 1 .. aa_source_data.COUNT
             LOOP
                PIPE ROW (aa_source_data (i));
             END LOOP;
          END LOOP;
          CLOSE p_source_data;
          RETURN;
       END processor;
       PROCEDURE with_pipeline (argA          IN     VARCHAR2,
                                argB          IN     VARCHAR2,
                                o_resultset      OUT resultset_typ)
       IS
       BEGIN
          OPEN o_resultset FOR
               SELECT /*+ PARALLEL(t, 5) */ colC,
                       SUM (CASE WHEN colD > colE AND colE != '0' THEN colD / ColE END) de,
                       SUM (CASE WHEN colE > colD AND colD != '0' THEN colE / ColD END) ed,
                       SUM (CASE WHEN colD = colE AND colD != '0' THEN '1' END) de_one,
                       SUM (CASE WHEN colD = '0' OR colE = '0' THEN '0' END) de_zero
                 FROM TABLE (processor (base_query (argA, argB),100)) t
             GROUP BY colC
              ORDER BY colC;
       END with_pipeline;
       PROCEDURE no_pipeline (argA          IN     VARCHAR2,
                              argB          IN     VARCHAR2,
                              o_resultset      OUT resultset_typ)
       IS
       BEGIN
          OPEN o_resultset FOR
               SELECT colC,
                       SUM (CASE WHEN colD > colE AND colE  != '0' THEN colD / ColE END) de,
                       SUM (CASE WHEN colE > colD AND colD  != '0' THEN colE / ColD END) ed,
                      SUM (CASE WHEN colD = colE AND colD  != '0' THEN 1 END) de_one,
                      SUM (CASE WHEN colD = '0' OR colE = '0' THEN '0' END) de_zero
                 FROM (SELECT colC, colD, colE
                         FROM some_table
                        WHERE colA = ArgA AND colB = argB)
             GROUP BY colC
             ORDER BY colC;
       END no_pipeline;
    END pipeline_example;
    ALTER PACKAGE pipeline_example COMPILE;

    Earthlink wrote:
    Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something?
    Well, we're missing a lot here.
    Like:
    - a database version
    - how did you test
    - what data do you have, how is it distributed, indexed
    and so on.
    If you want to find out what's going on then use a TRACE with wait events.
    All necessary steps are explained in these threads:
    HOW TO: Post a SQL statement tuning request - template posting
    http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
    Another nice one is RUNSTATS:
    http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551378329289980701
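    For example, capturing a trace with wait events might look like this (a minimal sketch; assumes EXECUTE privilege on DBMS_MONITOR and access to the server's trace directory):

    exec dbms_monitor.session_trace_enable(waits => true, binds => true)
    -- run the with_pipeline and no_pipeline procedures under test here
    exec dbms_monitor.session_trace_disable
    -- then format the resulting trace file with tkprof to see where the time and waits go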

  • Query performance improvement using pipelined table function

    Hi,
    I have got two select queries. One is like
    select * from table
    and another is using a pipelined table function:
    select *
    from table(pipelined_function(cursor(select * from table)))
    Which query will return the result set faster?
    Please suggest methods for retrieving the dataset faster (using a pipelined table function) than a normal select query.
    rgds
    somy

    Compare the performance between these solutions:
    create table big as select * from all_objects;
    First test the performance of a normal select statement:
    begin
      for r in (select * from big) loop
       null;
      end loop;
    end;
    /
    Second, a pipelined function:
    create type rc_vars as object 
    (OWNER  VARCHAR2(30)
    ,OBJECT_NAME     VARCHAR2(30));
    create or replace type rc_vars_table as table of  rc_vars ;
    create or replace
    function rc_get_vars
    return rc_vars_table
    pipelined
    as
      cursor c_aobj
             is
             select owner, object_name
             from   big;
      l_aobj c_aobj%rowtype;
    begin
      for r_aobj in c_aobj loop
        pipe row(rc_vars(r_aobj.owner,r_aobj.object_name));
      end loop;
      return;
    end;
    /
    Test the performance of the pipelined function:
    begin
      for r in (select * from table(rc_get_vars)) loop
       null;
      end loop;
    end;
    /
    On my system the simple select statement is 20 times faster.
    Correction: It is 10 times faster, not 20.
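    If you want to reproduce the comparison, a simple timing harness might look like this (a sketch using dbms_utility.get_time, which ticks in hundredths of a second):

    declare
      t0 pls_integer;
    begin
      t0 := dbms_utility.get_time;
      for r in (select * from big) loop
        null;
      end loop;
      dbms_output.put_line('plain select: ' || (dbms_utility.get_time - t0) / 100 || ' sec');
      t0 := dbms_utility.get_time;
      for r in (select * from table(rc_get_vars)) loop
        null;
      end loop;
      dbms_output.put_line('pipelined   : ' || (dbms_utility.get_time - t0) / 100 || ' sec');
    end;
    /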

  • Using Pipeline Table functions with other tables

    I am on DB 11.2.0.2 and have sparingly used pipelined table functions, but am considering them for a project that has some fairly big (lots of rows) tables. In my tests, selecting from just the pipelined table performs pretty well (whether it is directly from the pipelined table or the view I created on top of it). Where I start to see some degradation is when I try to join the pipelined table view to other tables and add where conditions.
    ie:
    SELECT A.empno, A.ename, A.job, B.sal
    FROM EMP_VIEW A, EMP B
    WHERE A.empno = B.empno AND
          B.mgr = '7839'
    I have seen some articles and blogs that mention this as a cardinality issue, and offer some undocumented methods to try to combat it.
    Can someone please give me some advice or tips on this. Thanks!
    I have created a simple example using the emp table below to help illustrate what I am doing.
    DROP TYPE EMP_TYPE;
    DROP TYPE EMP_SEQ;
    CREATE OR REPLACE TYPE EMP_SEQ AS OBJECT
           ( EMPNO                                         NUMBER(10),
             ENAME                                         VARCHAR2(100),
             JOB                                           VARCHAR2(100));
    CREATE OR REPLACE TYPE EMP_TYPE AS TABLE OF EMP_SEQ;
    CREATE OR REPLACE FUNCTION get_emp return EMP_TYPE PIPELINED AS
    BEGIN
      FOR cur IN (SELECT
                    empno,
                    ename,
                    job
                   FROM emp)
             LOOP
               PIPE ROW(EMP_SEQ(cur.empno,
                                cur.ename,
                                cur.job));
             END LOOP;
             RETURN;
    END get_emp;
    create OR REPLACE view EMP_VIEW as select * from table(get_emp());
    SELECT A.empno, A.ename, A.job, B.sal
    FROM EMP_VIEW A, EMP B
    WHERE A.empno = B.empno AND
          B.mgr = '7839'

    I am on DB 11.2.0.2 and have sparingly used pipelined table functions but am considering them for a project that has some fairly big (lots of rows) tables
    Which begs the question: WHY? What PROBLEM are you trying to solve and what makes you think using pipelined table functions is the best way to solve that problem?
    The lack of information about cardinality is the likely root of the degradation you noticed as already mentioned.
    But that should be a red flag about pipelined functions in general. PIPELINED functions hide virtually ALL KNOWLEDGE about the result set that is produced; cardinality is just the tip of the iceberg. Those functions pretty much say 'here is a result set' without ANY information about the number of rows (cardinality), distinct values for any columns, nullability of any columns, constraints that might apply to any columns (foreign key, primary key) and so on.
    If you are going to hide all of that information from Oracle that would normally be used to help optimize queries and select the appropriate execution plan you need to have a VERY good reason.
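    For what it's worth, the 'undocumented methods' mentioned in the question usually mean the CARDINALITY hint, which supplies the optimizer with a row estimate for the table function. A sketch against the get_emp example below (the 14 is your own estimate of the rows the function returns, and as an undocumented hint its behaviour may change between releases):

    SELECT /*+ cardinality(t 14) */ *
    FROM   TABLE(get_emp()) t;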
    The use of PIPELINED functions should be reserved for those use cases where ordinary SQL and PL/SQL cannot get the job done. That is they are a 'special case' solution.
    The classic use case for those functions is the transform stage of ETL, where multiple pipelined functions are chained together: one function feeds its rows to the next function, which feeds its rows to another, and so on. Each of those 'chained' functions is roughly analogous to a full table scan of the data that often does not need to be joined to other data, except perhaps low-volume lookup tables where the data may even be cached.
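    Schematically, such a chain looks like this (a sketch; stage1, stage2, src and tgt are hypothetical names):

    INSERT /*+ APPEND */ INTO tgt
    SELECT *
    FROM   TABLE(stage2(CURSOR(
             SELECT * FROM TABLE(stage1(CURSOR(
               SELECT * FROM src))))));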
    I suggest that any exploratory or prototyping work you do use standard relational tables, until such point as you run into a problem whose solution might require PIPELINED functions.

  • Interactive report on view based on pipelined table function.

    Hi,
    I want to build an Interactive Report on a view.
    The view definition contains a select on a pipelined table function. I use context functionality to pass parameters to the pipelined table function.
    A plain select * from #my_view# in SQL*Plus results in 121 different rows.
    However, If I base my Interactive report on this view, I get 15 repeated rows (all the same).
    Is it possible to use pipelined table functionality on an Interactive report? I can't seem to get it working.
    If I use the following approach (http://rakeshjsr.blogspot.nl/2010/10/oracle-apex-interactive-report-based-on.html) I do get results, but I can't use this solution for a reason that's not relevant.
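    For reference, the context wiring being described is roughly this (a sketch; my_ctx, my_ctx_pkg and fn_pipeline are hypothetical names):

    CREATE OR REPLACE CONTEXT my_ctx USING my_ctx_pkg;
    CREATE OR REPLACE VIEW my_view AS
    SELECT * FROM TABLE(fn_pipeline(SYS_CONTEXT('my_ctx', 'item_name')));
    -- a page process sets the value before the IR query runs, e.g. by calling
    -- dbms_session.set_context('my_ctx', 'item_name', :P1_ITEM_NAME) from inside my_ctx_pkg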

    Hello,
    Is it possible to use pipelined table functionality on an Interactive report? I can't seem to get it working.
    I have used it in one instance and it works fine. However, I was passing the values to the pipelined function directly.
    IR query:
    SELECT * FROM TABLE(fn_pipeline(:P1_ITEM_NAME))
    Call the pipelined function from the IR query directly (instead of using the view).
    Try sending values to the pipelined function directly, in case the problem is with setting and getting values from the context.
    Regards,
    Hari

  • Pipeline Table Function returning a fraction of data

    My current project involves migrating an Oracle database to a new structure based on the new client application requirements. I would like to use pipelined table functions as it seems as though that would provide the best performance.
    The first table has about 65 fields, about 75% of which require some type of recoding for the new app. I have written a function for each transformation and have all of these functions stored in a package. If I do:
    create table new_table as select
    pkg_name.function1(old_field1),
    pkg_name.function2(old_field2),
    pkg_name.function3(old_field3),
    it runs with out any errors but takes about 3 1/2 hours. There are a little more than 10 million rows in the table.
    I wrote a function that is passed the old table as a cursor, runs all the functions for the transformations and then pipes the new row back to the insert statement that called the function. It is incredibly fast but only returns .025% of the data (about 50 rows out of my sample table of 200,000). It does not throw any errors.
    So I am trying to determine what is going on. Perhaps one of my functions has a bug. If there was, would it cause the row to be kicked out? There are 40 or so functions, so tracking this down has been a bit of a bear.
    Any advice as to how I might resolve this would be much appreciated.
    Thanks
    Dan

    I would like to use pipelined table functions as it seems as though that would provide the best performance
    Uh huh...
    it runs with out any errors but takes about 3 1/2 hours. There are a little more than 10 million rows in the table.
    Not the first time a lovely theory has been killed by an ugly fact. Did you do any benchmarks to see whether the pipelined functions did offer performance benefits over doing it some other way?
    From the context of your comments I think you are trying to populate a new table from a single old table. Is this the case? If so I would have thought a straightforward CTAS with normal functions would be more appropriate: pipelined functions are really meant for situations in which one input produces more than one output. Anyway, if we are to help you I think you need to give us more details about how this process works and post a sample transformation function.
    There are 40 or so functions so tracking this down has been a bit of a bear.
    The teaching is: we should code one function and get that working before moving on to the next one. Which might not seem like a helpful thing to say, but the best lesson is often "I'll do it differently next time".
    Cheers, APC
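    In other words, something along these lines (a sketch reusing the pkg_name functions from the question; old_table and the PARALLEL/NOLOGGING settings are assumptions):

    CREATE TABLE new_table PARALLEL NOLOGGING AS
    SELECT pkg_name.function1(old_field1) AS new_field1,
           pkg_name.function2(old_field2) AS new_field2,
           pkg_name.function3(old_field3) AS new_field3
    FROM   old_table;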

  • 10g: delay for collecting results from parallel pipelined table functions

    When parallel pipelined table functions are properly started and generate output records, there is a delay before the consuming main thread gathers these records.
    This delay is huge compared with the run-time of the worker threads.
    For my application it goes like this:
    main thread timing efforts to start worker and collect their results:
    [10:50:33-10:50:49]:JOMA: create (master): 015.93 sec (#66356 records, #4165/sec)
    worker threads:
    [10:50:34-10:50:39]:JOMA: create (slave) : 005.24 sec (#2449 EDRs, #467/sec, #0 errored / #6430 EBTMs, #1227/sec, #0 errored) - bulk #1 / sid #816
    [10:50:34-10:50:39]:JOMA: create (slave) : 005.56 sec (#2543 EDRs, #457/sec, #0 errored / #6792 EBTMs, #1221/sec, #0 errored) - bulk #1 / sid #718
    [10:50:34-10:50:39]:JOMA: create (slave) : 005.69 sec (#2610 EDRs, #459/sec, #0 errored / #6950 EBTMs, #1221/sec, #0 errored) - bulk #1 / sid #614
    [10:50:34-10:50:39]:JOMA: create (slave) : 005.55 sec (#2548 EDRs, #459/sec, #0 errored / #6744 EBTMs, #1216/sec, #0 errored) - bulk #1 / sid #590
    [10:50:34-10:50:39]:JOMA: create (slave) : 005.33 sec (#2461 EDRs, #462/sec, #0 errored / #6504 EBTMs, #1220/sec, #0 errored) - bulk #1 / sid #508
    You can see the worker threads are all started at the same time and terminate at the same time: 10:50:34-10:50:39.
    But the main thread, just invoking them and saving their results into a collection, finished at 10:50:49.
    Why does it need #10 sec more just to save the data?
    Here's a sample sqlplus script to demonstrate this:
    --------------------------- snip -------------------------------------------------------
    set serveroutput on;
    drop table perf_data;
    drop table test_table;
    drop table tmp_test_table;
    drop type ton_t;
    drop type test_list;
    drop type test_obj;
    create table perf_data (
         sid number,
         t1 timestamp with time zone,
         t2 timestamp with time zone,
         client varchar2(256)
    );
    create table test_table (
         a number(19,0),
         b timestamp with time zone,
         c varchar2(256)
    );
    create global temporary table tmp_test_table (
         a number(19,0),
         b timestamp with time zone,
         c varchar2(256)
    );
    create or replace type test_obj as object(
         a number(19,0),
         b timestamp with time zone,
         c varchar2(256)
    );
    create or replace type test_list as table of test_obj;
    create or replace type ton_t as table of number;
    create or replace package test_pkg
    as
         type test_rec is record (
              a number(19,0),
              b timestamp with time zone,
               c varchar2(256)
          );
         type test_tab is table of test_rec;
         type test_cur is ref cursor return test_rec;
         function TZDeltaToMilliseconds(
              t1 in timestamp with time zone,
              t2 in timestamp with time zone)
         return pls_integer;
         function TF(mycur test_cur)
    return test_list pipelined
    parallel_enable(partition mycur by hash(a));
    end;
    create or replace package body test_pkg
    as
          /*
           * Calculate timestamp with timezone difference
           * in milliseconds
           */
         function TZDeltaToMilliseconds(
              t1 in timestamp with time zone,
              t2 in timestamp with time zone)
         return pls_integer
         is
         begin
              return     (extract(hour from t2) - extract(hour from t1)) * 3600 * 1000
              +     (extract(minute from t2) - extract(minute from t1)) * 60 * 1000
              +     (extract(second from t2) - extract(second from t1)) * 1000;
         end TZDeltaToMilliseconds;
         function TF(mycur test_cur)
    return test_list pipelined
    parallel_enable(partition mycur by hash(a))
    is
              pragma autonomous_transaction;
              sid number;
              counter number(19,0) := 0;
              myrec test_rec;
              mytab test_tab;
              mytab2 test_list := test_list();
              t1 timestamp with time zone;
              t2 timestamp with time zone;
         begin
              t1 := systimestamp;
              select userenv('SID') into sid from dual;
              dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): enter');
              loop
                   fetch mycur into myRec;
                   exit when mycur%NOTFOUND;
                   mytab2.extend;
                   mytab2(mytab2.last) := test_obj(myRec.a, myRec.b, myRec.c);
              end loop;
              for i in mytab2.first..mytab2.last loop
                   -- attention: saves own SID in test_obj.a for indication to caller
                   --     how many sids have been involved
                   pipe row(test_obj(sid, mytab2(i).b, mytab2(i).c));
                   pipe row(test_obj(sid, mytab2(i).b, mytab2(i).c)); -- duplicate
                   pipe row(test_obj(sid, mytab2(i).b, mytab2(i).c)); -- duplicate once again
                   counter := counter + 1;
              end loop;
              t2 := systimestamp;
              insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'slave');
              commit;
              dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): exit, piped #' || counter || ' records');
         end;
    end;
    declare
         myList test_list := test_list();
         myList2 test_list := test_list();
         sids ton_t := ton_t();
         sid number;
         t1 timestamp with time zone;
         t2 timestamp with time zone;
    procedure LogPerfTable
    is
    type ton is table of number;
    type tot is table of timestamp with time zone;
              type clients_t is table of varchar2(256);
    sids ton;
    t1s tot;
    t2s tot;
              clients clients_t;
    deltaTime integer;
    btsPerSecond number(19,0);
    edrsPerSecond number(19,0);
    begin
    select sid, t1, t2, client bulk collect into sids, t1s, t2s, clients from perf_data order by client;
    if clients.count > 0 then
    for i in clients.FIRST .. clients.LAST loop
    deltaTime := test_pkg.TZDeltaToMilliseconds(t1s(i), t2s(i));
    if deltaTime = 0 then deltaTime := 1; end if;
    dbms_output.put_line(
    '[' || to_char(t1s(i), 'hh:mi:ss') ||
    '-' || to_char(t2s(i), 'hh:mi:ss') ||
    ']:' ||
    ' client ' || clients(i) || ' / sid #' || sids(i)
    );
    end loop;
    end if;
    end LogPerfTable;
    begin
         select userenv('SID') into sid from dual;
         for i in 1..200000 loop
              myList.extend; myList(myList.last) := test_obj(i, sysdate, to_char(i+2));
         end loop;
         -- save into the real table
         insert into test_table select * from table(cast (myList as test_list));
         -- save into the tmp table
         insert into tmp_test_table select * from table(cast (myList as test_list));
         dbms_output.put_line(chr(10) || '(1) copy ''mylist'' to ''mylist2'' by streaming via table function...');
         delete from perf_data;
         t1 := systimestamp;
         select /*+ first_rows */ test_obj(a, b, c) bulk collect into myList2
         from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,5) */ * from table(cast (myList as test_list)) tab)));
         t2 := systimestamp;
         insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'master');
         LogPerfTable;
         dbms_output.put_line('... saved #' || myList2.count || ' records');
         select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
         dbms_output.put_line(chr(10) || '(2) copy temporary ''tmp_test_table'' to ''mylist2'' by streaming via table function:');
         delete from perf_data;
         t1 := systimestamp;
         select /*+ first_rows */ test_obj(a, b, c) bulk collect into myList2
         from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,5) */ * from tmp_test_table tab)));
         t2 := systimestamp;
         insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'master');
         LogPerfTable;
         dbms_output.put_line('... saved #' || myList2.count || ' records');
         select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
         dbms_output.put_line(chr(10) || '(3) copy physical ''test_table'' to ''mylist2'' by streaming via table function:');
         delete from perf_data;
         t1 := systimestamp;
         select /*+ first_rows */ test_obj(a, b, c) bulk collect into myList2
         from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,5) */ * from test_table tab)));
         t2 := systimestamp;
         insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'master');
         LogPerfTable;
         dbms_output.put_line('... saved #' || myList2.count || ' records');
         select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
    end;
    --------------------------- snap -------------------------------------------------------
    best regards,
    Frank

    Hello
    I think the delay you are seeing is down to choosing the partitioning method as HASH. When you specify anything other than ANY, an additional buffer sort is included in the execution plan...
    create or replace package test_pkg
    as
    type test_rec is record (
    a number(19,0),
    b timestamp with time zone,
    c varchar2(256)
    );
    type test_tab is table of test_rec;
    type test_cur is ref cursor return test_rec;
    function TZDeltaToMilliseconds(
    t1 in timestamp with time zone,
    t2 in timestamp with time zone)
    return pls_integer;
    function TF(mycur test_cur)
    return test_list pipelined
    parallel_enable(partition mycur by hash(a));
    function TF_Any(mycur test_cur)
    return test_list pipelined
    parallel_enable(partition mycur by ANY);
    end;
    create or replace package body test_pkg
    as
    /*
     * Calculate timestamp with timezone difference
     * in milliseconds
     */
    function TZDeltaToMilliseconds(
    t1 in timestamp with time zone,
    t2 in timestamp with time zone)
    return pls_integer
    is
    begin
    return (extract(hour from t2) - extract(hour from t1)) * 3600 * 1000
    + (extract(minute from t2) - extract(minute from t1)) * 60 * 1000
    + (extract(second from t2) - extract(second from t1)) * 1000;
    end TZDeltaToMilliseconds;
      function TF(mycur test_cur)
      return test_list pipelined
      parallel_enable(partition mycur by hash(a))
      is
      pragma autonomous_transaction;
        sid number;
        counter number(19,0) := 0;
        myrec test_rec;
        t1 timestamp with time zone;
        t2 timestamp with time zone;
      begin
        t1 := systimestamp;
        select userenv('SID') into sid from dual;
        dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): enter');
      loop
        fetch mycur into myRec;
        exit when mycur%NOTFOUND;
        -- attention: saves own SID in test_obj.a for indication to caller
        -- how many sids have been involved
        pipe row(test_obj(sid, myRec.b, myRec.c));
        pipe row(test_obj(sid, myRec.b, myRec.c)); -- duplicate
        pipe row(test_obj(sid, myRec.b, myRec.c)); -- duplicate once again
        counter := counter + 1;
      end loop;
      t2 := systimestamp;
      insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'slave');
      commit;
      dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): exit, piped #' || counter || ' records');
      end;
      function TF_any(mycur test_cur)
      return test_list pipelined
      parallel_enable(partition mycur by ANY)
      is
      pragma autonomous_transaction;
        sid number;
        counter number(19,0) := 0;
        myrec test_rec;
        t1 timestamp with time zone;
        t2 timestamp with time zone;
      begin
        t1 := systimestamp;
        select userenv('SID') into sid from dual;
        dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): enter');
        loop
          fetch mycur into myRec;
          exit when mycur%NOTFOUND;
          -- attention: saves own SID in test_obj.a for indication to caller
          -- how many sids have been involved
          pipe row(test_obj(sid, myRec.b, myRec.c));
          pipe row(test_obj(sid, myRec.b, myRec.c)); -- duplicate
          pipe row(test_obj(sid, myRec.b, myRec.c)); -- duplicate once again
          counter := counter + 1;
        end loop;
        t2 := systimestamp;
        insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'slave');
        commit;
        dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): exit, piped #' || counter || ' records');
      end;
    end;
    explain plan for
    select /*+ first_rows */ test_obj(a, b, c)
    from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,5) */ * from test_table tab)));
    select * from table(dbms_xplan.display);
    Plan hash value: 1037943675
    | Id  | Operation                             | Name       | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | SELECT STATEMENT                      |            |  8168 |  3972K|    20   (0)| 00:00:01 |        |      |            |
    |   1 |  PX COORDINATOR                       |            |       |       |            |          |        |      |            |
    |   2 |   PX SEND QC (RANDOM)                 | :TQ10001   |  8168 |  3972K|    20   (0)| 00:00:01 |  Q1,01 | P->S | QC (RAND)  |
    |   3 |    BUFFER SORT                        |            |  8168 |  3972K|            |          |  Q1,01 | PCWP |            |
    |   4 |     VIEW                              |            |  8168 |  3972K|    20   (0)| 00:00:01 |  Q1,01 | PCWP |            |
    |   5 |      COLLECTION ITERATOR PICKLER FETCH| TF         |       |       |            |          |  Q1,01 | PCWP |            |
    |   6 |       PX RECEIVE                      |            |   931K|   140M|   136   (2)| 00:00:02 |  Q1,01 | PCWP |            |
    |   7 |        PX SEND HASH                   | :TQ10000   |   931K|   140M|   136   (2)| 00:00:02 |  Q1,00 | P->P | HASH       |
    |   8 |         PX BLOCK ITERATOR             |            |   931K|   140M|   136   (2)| 00:00:02 |  Q1,00 | PCWC |            |
    |   9 |          TABLE ACCESS FULL            | TEST_TABLE |   931K|   140M|   136   (2)| 00:00:02 |  Q1,00 | PCWP |            |
    Note
       - dynamic sampling used for this statement
    explain plan for
    select /*+ first_rows */ test_obj(a, b, c)
    from table(test_pkg.TF_Any(CURSOR(select /*+ parallel(tab,5) */ * from test_table tab)));
    select * from table(dbms_xplan.display);
    Plan hash value: 4097140875
    | Id  | Operation                            | Name       | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | SELECT STATEMENT                     |            |  8168 |  3972K|    20   (0)| 00:00:01 |        |      |            |
    |   1 |  PX COORDINATOR                      |            |       |       |            |          |        |      |            |
    |   2 |   PX SEND QC (RANDOM)                | :TQ10000   |  8168 |  3972K|    20   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
    |   3 |    VIEW                              |            |  8168 |  3972K|    20   (0)| 00:00:01 |  Q1,00 | PCWP |            |
    |   4 |     COLLECTION ITERATOR PICKLER FETCH| TF_ANY     |       |       |            |          |  Q1,00 | PCWP |            |
    |   5 |      PX BLOCK ITERATOR               |            |   931K|   140M|   136   (2)| 00:00:02 |  Q1,00 | PCWC |            |
    |   6 |       TABLE ACCESS FULL              | TEST_TABLE |   931K|   140M|   136   (2)| 00:00:02 |  Q1,00 | PCWP |            |
    Note
       - dynamic sampling used for this statement
    I posted about it here a few years ago and I more recently posted a question on AskTom. Unfortunately Tom was not able to find a technical reason for it to be there, so I'm still a little in the dark as to why it is needed. The original question I posted is here:
    Pipelined function partition by hash has extra sort#
    I ran your tests with HASH vs ANY and the results are in line with the observations above....
    declare
    myList test_list := test_list();
    myList2 test_list := test_list();
    sids ton_t := ton_t();
    sid number;
    t1 timestamp with time zone;
    t2 timestamp with time zone;
    procedure LogPerfTable
    is
    type ton is table of number;
    type tot is table of timestamp with time zone;
    type clients_t is table of varchar2(256);
    sids ton;
    t1s tot;
    t2s tot;
    clients clients_t;
    deltaTime integer;
    btsPerSecond number(19,0);
    edrsPerSecond number(19,0);
    begin
    select sid, t1, t2, client bulk collect into sids, t1s, t2s, clients from perf_data order by client;
    if clients.count > 0 then
    for i in clients.FIRST .. clients.LAST loop
    deltaTime := test_pkg.TZDeltaToMilliseconds(t1s(i), t2s(i));
    if deltaTime = 0 then deltaTime := 1; end if;
    dbms_output.put_line(
    '[' || to_char(t1s(i), 'hh:mi:ss') ||
    '-' || to_char(t2s(i), 'hh:mi:ss') ||
    ']:' ||
    ' client ' || clients(i) || ' / sid #' || sids(i)
    );
    end loop;
    end if;
    end LogPerfTable;
    begin
    select userenv('SID') into sid from dual;
    for i in 1..200000 loop
    myList.extend; myList(myList.last) := test_obj(i, sysdate, to_char(i+2));
    end loop;
    -- save into the real table
    insert into test_table select * from table(cast (myList as test_list));
    -- save into the tmp table
    insert into tmp_test_table select * from table(cast (myList as test_list));
    dbms_output.put_line(chr(10) || '(1) copy ''mylist'' to ''mylist2'' by streaming via table function...');
    delete from perf_data;
    t1 := systimestamp;
    select /*+ first_rows */ test_obj(a, b, c) bulk collect into myList2
    from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,5) */ * from table(cast (myList as test_list)) tab)));
    t2 := systimestamp;
    insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'master');
    LogPerfTable;
    dbms_output.put_line('... saved #' || myList2.count || ' records');
    select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
    dbms_output.put_line(chr(10) || '(2) copy temporary ''tmp_test_table'' to ''mylist2'' by streaming via table function:');
    delete from perf_data;
    t1 := systimestamp;
    select /*+ first_rows */ test_obj(a, b, c) bulk collect into myList2
    from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,5) */ * from tmp_test_table tab)));
    t2 := systimestamp;
    insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'master');
    LogPerfTable;
    dbms_output.put_line('... saved #' || myList2.count || ' records');
    select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
    dbms_output.put_line(chr(10) || '(3) copy physical ''test_table'' to ''mylist2'' by streaming via table function:');
    delete from perf_data;
    t1 := systimestamp;
    select /*+ first_rows */ test_obj(a, b, c) bulk collect into myList2
    from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,5) */ * from test_table tab)));
    t2 := systimestamp;
    insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'master');
    LogPerfTable;
    dbms_output.put_line('... saved #' || myList2.count || ' records');
    select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
    dbms_output.put_line(chr(10) || '(4) copy temporary ''tmp_test_table'' to ''mylist2'' by streaming via table function ANY:');
    delete from perf_data;
    t1 := systimestamp;
    select /*+ first_rows */ test_obj(a, b, c) bulk collect into myList2
    from table(test_pkg.TF_any(CURSOR(select /*+ parallel(tab,5) */ * from tmp_test_table tab)));
    t2 := systimestamp;
    insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'master');
    LogPerfTable;
    dbms_output.put_line('... saved #' || myList2.count || ' records');
    select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
    dbms_output.put_line(chr(10) || '(5) copy physical ''test_table'' to ''mylist2'' by streaming via table function using ANY:');
    delete from perf_data;
    t1 := systimestamp;
    select /*+ first_rows */ test_obj(a, b, c) bulk collect into myList2
    from table(test_pkg.TF_any(CURSOR(select /*+ parallel(tab,5) */ * from test_table tab)));
    t2 := systimestamp;
    insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'master');
    LogPerfTable;
    dbms_output.put_line('... saved #' || myList2.count || ' records');
    select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
    end;
    (1) copy 'mylist' to 'mylist2' by streaming via table function...
    test_pkg.TF( sid => '918' ): enter
    test_pkg.TF( sid => '918' ): exit, piped #200000 records
    [01:40:19-01:40:29]: client master / sid #918
    [01:40:19-01:40:29]: client slave / sid #918
    ... saved #600000 records
    (2) copy temporary 'tmp_test_table' to 'mylist2' by streaming via table function:
    [01:40:31-01:40:36]: client master / sid #918
    [01:40:31-01:40:32]: client slave / sid #659
    [01:40:31-01:40:32]: client slave / sid #880
    [01:40:31-01:40:32]: client slave / sid #1045
    [01:40:31-01:40:32]: client slave / sid #963
    [01:40:31-01:40:32]: client slave / sid #712
    ... saved #600000 records
    (3) copy physical 'test_table' to 'mylist2' by streaming via table function:
    [01:40:37-01:41:05]: client master / sid #918
    [01:40:37-01:40:42]: client slave / sid #738
    [01:40:37-01:40:42]: client slave / sid #568
    [01:40:37-01:40:42]: client slave / sid #618
    [01:40:37-01:40:42]: client slave / sid #659
    [01:40:37-01:40:42]: client slave / sid #963
    ... saved #3000000 records
    (4) copy temporary 'tmp_test_table' to 'mylist2' by streaming via table function ANY:
    [01:41:12-01:41:16]: client master / sid #918
    [01:41:12-01:41:16]: client slave / sid #712
    [01:41:12-01:41:16]: client slave / sid #1045
    [01:41:12-01:41:16]: client slave / sid #681
    [01:41:12-01:41:16]: client slave / sid #754
    [01:41:12-01:41:16]: client slave / sid #880
    ... saved #600000 records
    (5) copy physical 'test_table' to 'mylist2' by streaming via table function using ANY:
    [01:41:18-01:41:38]: client master / sid #918
    [01:41:18-01:41:38]: client slave / sid #681
    [01:41:18-01:41:38]: client slave / sid #712
    [01:41:18-01:41:38]: client slave / sid #754
    [01:41:18-01:41:37]: client slave / sid #880
    [01:41:18-01:41:38]: client slave / sid #1045
    ... saved #3000000 records
    HTH
    David

  • Parallel pipelined table function, autonomous_transaction to global tmp tab

    Hi,
    I am trying to speed up my parallel pipelined table function by switching from a PL/SQL collection to a global temporary table inside it.
    This requires PRAGMA AUTONOMOUS_TRANSACTION (and a commit), because inserting into a global temporary table (DML) within a select - for invoking the table function - is not allowed otherwise.
    As a consequence of the commit, the global temporary table must be created with ON COMMIT PRESERVE ROWS.
    Now:
    Inserts into the global temporary table are done - as indicated by sql%rowcount.
    But a select afterwards doesn't show any records anymore.
    Here is a program to demonstrate it:
    set serveroutput on;
    drop type TestTableOfNumber_t;
    create or replace type TestTableOfNumber_t is table of number;
    drop type TestStatusList;
    drop type TestStatusObj;
    create or replace type TestStatusObj as object(
         sid number,
         ctr1 number,
         ctr2 number,
         ctr3 number
    );
    create or replace type TestStatusList is table of TestStatusObj;
    drop table TestTmpTable;
    create global temporary table TestTmpTable (
         value     number
    ) on commit preserve rows;
    create or replace package test_pkg
    as
         type TestStatusRec is record (
              sid number,
              ctr1 number,
              ctr2 number,
               ctr3 number
          );
         type TestStatusTab is table of TestStatusRec;
         function FillTmpTable(id in varchar2)
         return TestStatusRec;
         FUNCTION ptf (p_cursor  IN  sys_refcursor)
         RETURN TestStatusList PIPELINED
         PARALLEL_ENABLE(PARTITION p_cursor BY any);
    end;
    create or replace package body test_pkg
    as
         function FillTmpTable(id in varchar2)
         return TestStatusRec
         is
              PRAGMA AUTONOMOUS_TRANSACTION;
              result TestStatusRec;
              sid number;
              type ton is table of number;
              tids TestTableOfNumber_t := TestTableOfNumber_t();
              records number := 0;
         begin
              select userenv('SID') into sid from dual;
              result.sid := sid;
              delete from TestTmpTable;
              for i in 1..100 loop
                   tids.extend;
                   tids(tids.last) := i;
              end loop;
              forall i in 1..tids.count
                   insert into TestTmpTable (value) values (tids(i));
              -- get number of records inserted
              records := sql%rowcount;
              result.ctr1 := records;
              -- retrieve again before commit
              select count(*) into records from TestTmpTable;
              result.ctr2 := records;
              commit;
              -- retrieve again after commit
              select count(*) into records from TestTmpTable;
              result.ctr3 := records;
              return result;
         end;
           FUNCTION ptf (p_cursor  IN  sys_refcursor)
         RETURN TestStatusList PIPELINED
         PARALLEL_ENABLE(PARTITION p_cursor BY any)
         IS
              rec test_pkg.TestStatusRec;
              value number;
              sid number;
              ctr integer := 0;
         BEGIN
              select userenv('SID') into sid from dual;
              rec := FillTmpTable('IN PTF');
              LOOP
                   FETCH p_cursor into value;
                   EXIT WHEN p_cursor%NOTFOUND;
                   ctr := ctr + 1;     
              END LOOP;
              -- as a result i am only interested in the results of FillTmpTable():
              PIPE ROW (TestStatusObj(rec.sid, rec.ctr1, rec.ctr2, rec.ctr3));
                  RETURN;
         END;
    end;
    declare
         tons TestTableOfNumber_t;
         counts TestTableOfNumber_t;
         status test_pkg.TestStatusRec;
         statusList test_pkg.TestStatusTab;
    begin
         status := test_pkg.FillTmpTable('MAIN');
         dbms_output.put_line('main thread:'
              || ' sid #' || status.sid
              || ' / #' || status.ctr1 || ' inserted '
              || ' / #' || status.ctr2 || ' before commit'
              || ' / #' || status.ctr3 || ' after commit');     
         select value bulk collect into tons from TestTmpTable;
         select * bulk collect into statusList from TABLE(test_pkg.ptf(CURSOR(select /*+ parallel(tab,2) */ value from TestTmpTable tab)));
         for i in 1..StatusList.count loop
              dbms_output.put_line('worker thread #' || i  || ':'
              || ' sid #' || statusList(i).sid
              || ' / #' || statusList(i).ctr1 || ' inserted '
              || ' / #' || statusList(i).ctr2 || ' before commit'
              || ' / #' || statusList(i).ctr3 || ' after commit');
         end loop;
    end;
    /
    The output is:
    main thread: sid #881 / #100 inserted  / #100 before commit / #100 after commit
    worker thread #1: sid #421 / #100 inserted  / #0 before commit / #0 after commit
    worker thread #2: sid #321 / #100 inserted  / #0 before commit / #0 after commit
    The 1st line is for the main thread invoking FillTmpTable().
    The next #2 lines are for the worker threads of the parallel pipelined table function for invoking the same FillTmpTable().
    For the main thread everything is as expected.
    But for the worker threads the logs for before commit and after commit both give #0 for the number of available records in the global temporary table.
    However, all indicate #100 for the SQL insert.
    regards,
    Frank

    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    PL/SQL Release 11.1.0.7.0 - Production
    CORE    11.1.0.7.0      Production
    TNS for Linux: Version 11.1.0.7.0 - Production
    NLSRTL Version 11.1.0.7.0 - Production
    SQL> set serveroutput on;
    SQL> drop type TestTableOfNumber;
    drop type TestTableOfNumber
    ERROR at line 1:
    ORA-04043: object TESTTABLEOFNUMBER does not exist
    SQL> /
    drop type TestTableOfNumber
    ERROR at line 1:
    ORA-04043: object TESTTABLEOFNUMBER does not exist
    SQL> 
    SQL> create or replace type TestTableOfNumber_t is table of number;
      2  /
    Type created.
    SQL> 
    SQL> drop type TestStatusObj;
    drop type TestStatusObj
    ERROR at line 1:
    ORA-04043: object TESTSTATUSOBJ does not exist
    SQL> /
    drop type TestStatusObj
    ERROR at line 1:
    ORA-04043: object TESTSTATUSOBJ does not exist
    SQL> 
    SQL> create or replace type TestStatusObj as object(
      2   sid number,
      3   ctr1 number,
      4   ctr2 number,
      5   ctr3 number
      6  );
      7  /
    Type created.
    SQL> 
    SQL> drop type TestStatusList;
    drop type TestStatusList
    ERROR at line 1:
    ORA-04043: object TESTSTATUSLIST does not exist
    SQL> /
    drop type TestStatusList
    ERROR at line 1:
    ORA-04043: object TESTSTATUSLIST does not exist
    SQL> 
    SQL> create or replace type TestStatusList is table of TestStatusObj;
      2  /
    Type created.
    SQL> 
    SQL> drop table TestTmpTable;
    drop table TestTmpTable
    ERROR at line 1:
    ORA-00942: table or view does not exist
    SQL> /
    drop table TestTmpTable
    ERROR at line 1:
    ORA-00942: table or view does not exist
    SQL> 
    SQL> create global temporary table TestTmpTable (
      2   value number
      3  ) on commit preserve rows;
    Table created.
    SQL> /
    create global temporary table TestTmpTable (
    ERROR at line 1:
    ORA-00955: name is already used by an existing object
    SQL> 
    SQL> create or replace package test_pkg
      2  as
      3  
      4   type TestStatusRec is record (
      5    sid number,
      6    ctr1 number,
      7    ctr2 number,
      8    ctr3 number
      9   );
    10  
    11   type TestStatusTab is table of TestStatusRec;
    12  
    13   function FillTmpTable(id in varchar2)
    14   return TestStatusRec;
    15  
    16   FUNCTION ptf (p_cursor  IN  sys_refcursor)
    17   RETURN TestStatusList PIPELINED
    18   PARALLEL_ENABLE(PARTITION p_cursor BY any);
    19  
    20  end;
    21  /
    Package created.
    SQL> 
    SQL> create or replace package body test_pkg
      2  as
      3  
      4   function FillTmpTable(id in varchar2)
      5   return TestStatusRec
      6   is
      7    PRAGMA AUTONOMOUS_TRANSACTION;
      8  
      9    result TestStatusRec;
    10  
    11    sid number;
    12  
    13    type ton is table of number;
    14    tids TestTableOfNumber_t := TestTableOfNumber_t();
    15  
    16    records number := 0;
    17   begin
    18    select userenv('SID') into sid from dual;
    19    result.sid := sid;
    20  
    21    delete from TestTmpTable;
    22  
    23    for i in 1..100 loop
    24     tids.extend;
    25     tids(tids.last) := i;
    26    end loop;
    27  
    28    forall i in 1..tids.count
    29     insert into TestTmpTable (value) values (tids(i));
    30  
    31    -- get number of records inserted
    32    records := sql%rowcount;
    33    result.ctr1 := records;
    34  
    35    -- retrieve again before commit
    36    select count(*) into records from TestTmpTable;
    37    result.ctr2 := records;
    38  
    39    commit;
    40  
    41    -- retrieve again after commit
    42    select count(*) into records from TestTmpTable;
    43    result.ctr3 := records;
    44  
    45    return result;
    46   end;
    47  
    48     FUNCTION ptf (p_cursor  IN  sys_refcursor)
    49   RETURN TestStatusList PIPELINED
    50   PARALLEL_ENABLE(PARTITION p_cursor BY any)
    51   IS
    52    rec test_pkg.TestStatusRec;
    53    value number;
    54    sid number;
    55    ctr integer := 0;
    56   BEGIN
    57    select userenv('SID') into sid from dual;
    58    rec := FillTmpTable('IN PTF');
    59    LOOP
    60     FETCH p_cursor into value;
    61     EXIT WHEN p_cursor%NOTFOUND;
    62     ctr := ctr + 1;
    63    END LOOP;
    64  
    65    -- as a result i am only interested in the results of FillTmpTable():
    66    PIPE ROW (TestStatusObj(rec.sid, rec.ctr1, rec.ctr2, rec.ctr3));
    67  
    68        RETURN;
    69   END;
    70  end;
    71  /
    Package body created.
    SQL> 
    SQL> declare
      2   tons TestTableOfNumber_t;
      3   counts TestTableOfNumber_t;
      4   status test_pkg.TestStatusRec;
      5   statusList test_pkg.TestStatusTab;
      6  begin
      7   status := test_pkg.FillTmpTable('MAIN');
      8   dbms_output.put_line('main thread:'
      9    || ' sid #' || status.sid
    10    || ' / #' || status.ctr1 || ' inserted '
    11    || ' / #' || status.ctr2 || ' before commit'
    12    || ' / #' || status.ctr3 || ' after commit');
    13  
    14   select value bulk collect into tons from TestTmpTable;
    15  
    16   select * bulk collect into statusList from TABLE(test_pkg.ptf(CURSOR(select /*+ parallel(tab,2
    ) */ value from TestTmpTable tab)));
    17  
    18   for i in 1..StatusList.count loop
    19    dbms_output.put_line('worker thread #' || i  || ':'
    20    || ' sid #' || statusList(i).sid
    21    || ' / #' || statusList(i).ctr1 || ' inserted '
    22    || ' / #' || statusList(i).ctr2 || ' before commit'
    23    || ' / #' || statusList(i).ctr3 || ' after commit');
    24   end loop;
    25  
    26  end;
    27  /
    main thread: sid #1023 / #100 inserted  / #100 before commit / #100 after commit
    worker thread #1: sid #1045 / #100 inserted  / #100 before commit / #100 after
    commit
    worker thread #2: sid #1019 / #100 inserted  / #100 before commit / #100 after
    commit
    PL/SQL procedure successfully completed.
    SQL>
    I am getting a different result.
    Regards
    Raj

  • Duplicate Rows In Oracle Pipelined Table Functions

    Hi fellow oracle users,
    I am trying to create an Oracle piplined table function that contains duplicate records. Whenever I try to pipe the same record twice, the duplicate record does not show up in the resulting pipelined table.
    Here's a sample piece of SQL:
    /* Type declarations */
    TYPE MY_RECORD IS RECORD(
    MY_NUM INTEGER
    );
    TYPE MY_TABLE IS TABLE OF MY_RECORD;
    /* Pipelined function declaration */
    FUNCTION MY_FUNCTION RETURN MY_TABLE PIPELINED IS
    V_RECORD MY_RECORD;
    BEGIN
    -- insert first record
    V_RECORD.MY_NUM := 1;
    PIPE ROW (V_RECORD);
    -- insert second duplicate record
    V_RECORD.MY_NUM := 1;
    PIPE ROW (V_RECORD);
    -- return piplined table
    RETURN;
    END;
    /* Statement to query pipelined function */
    SELECT * FROM TABLE( MY_FUNCTION ); -- for some reason this only returns one record instead of two
    I am trying to get the duplicate row to show up in the select statement. Any help would be greatly appreciated.

    Can you provide actual output from an SQL*Plus prompt trying this? I don't see the same behavior
    SQL> SELECT * FROM V$VERSION;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
    PL/SQL Release 10.2.0.4.0 - Production
    CORE    10.2.0.4.0      Production
    TNS for 64-bit Windows: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production
    SQL> CREATE TYPE MY_RECORD IS OBJECT(MY_NUM INTEGER);
      2  /
    Type created.
    SQL> CREATE TYPE MY_TABLE IS TABLE OF MY_RECORD;
      2  /
    Type created.
    SQL> CREATE OR REPLACE FUNCTION MY_FUNCTION
      2  RETURN MY_TABLE
      3  PIPELINED
      4          AS
      5                  V_RECORD        MY_RECORD;
      6          BEGIN
      7                  V_RECORD.MY_NUM := 1;
      8                  PIPE ROW(V_RECORD);
      9
    10                  V_RECORD.MY_NUM := 1;
    11                  PIPE ROW(V_RECORD);
    12
    13                  RETURN;
    14          END;
    15  /
    Function created.
    SQL> SELECT * FROM TABLE(MY_FUNCTION);
                  MY_NUM
                       1
                       1

  • Clob datatype with pipelined table function.

    Hi,
    I made two pipelined functions: one uses the VARCHAR2 data type and the other uses the CLOB data type.
    I am giving parameters to both of them; the first (VARCHAR2) one is working fine, but the other is not.
    I made a different object type for each of them: a CLOB object type for the second, and a VARCHAR2 type for the first.
    my first function is like
    TYPE "CSVOBJECTFORMAT"  AS OBJECT ( "S"   VARCHAR2(500));
    TYPE "CSVTABLETYPE" AS TABLE OF CSVOBJECTFORMAT;
    CREATE OR REPLACE FUNCTION "FN_PARSECSVSTRING" (p_list
        VARCHAR2, p_delim VARCHAR2:=' ') RETURN CsvTableType PIPELINED
        IS
    l_idx PLS_INTEGER;
    l_list VARCHAR2(32767) := p_list;
    l_value VARCHAR2(32767);
    BEGIN
    LOOP
    l_idx := INSTR(l_list, p_delim);
    IF l_idx > 0 THEN
    PIPE ROW(CsvObjectFormat(SUBSTR(l_list, 1, l_idx-1)));
    l_list := SUBSTR(l_list, l_idx+LENGTH(p_delim));
    ELSE
    PIPE ROW(CsvObjectFormat(l_list));
    EXIT;
    END IF;
    END LOOP;
    RETURN;
    END fn_ParseCSVString;
    And the output for this function is as follows, which is correct:
    SQL>   SELECT s  FROM  TABLE( CAST( fn_ParseCSVString('+588675,1~#588675^1^99^~2~16~115~99~SP5601~~~~~0~~', '~') as CsvTableType)) ;
    S
    +588675,1
    #588675^1^99^
    2
    16
    115
    99
    SP5601
    S
    0
    14 rows selected.
    SQL>
    my second function is like
    TYPE "CSVOBJECTFORMAT1"  AS OBJECT ( "S"   clob);
    TYPE "CSVTABLETYPE1" AS TABLE OF CSVOBJECTFORMAT1;
    CREATE OR REPLACE FUNCTION "FN_PARSECSVSTRING1" (p_list
        clob, p_delim VARCHAR2:=' ') RETURN CsvTableType1 PIPELINED
        IS
    l_idx PLS_INTEGER;
    l_list  clob := p_list;
    l_value VARCHAR2(32767);
    BEGIN
    dbms_output.put_line('hello');
      LOOP
      l_idx := INSTR(l_list, p_delim);
      IF l_idx > 0 THEN
    PIPE ROW(CsvObjectFormat1(substr(l_list, 1, l_idx-1)));
    l_list :=  dbms_lob.substr(l_list, l_idx+LENGTH(p_delim));
    ELSE
    PIPE ROW(CsvObjectFormat1(l_list));
    exit;
    END IF;
      END LOOP;
    RETURN;
    END fn_ParseCSVString1;
    SQL>   SELECT s  FROM  TABLE( CAST( fn_ParseCSVString1('+588675,1~#588675^1^99^~2~16~115~99~SP5601~~~~~0~~', '~') as CsvTableType1)) ;
    S
    +588675,1
    +588675,1
    +588675,1
    +588675,1
    +588675,1
    +588675,1
    +588675,1
    +588675,1
    +588675,1
    +588675,1
    +588675,1
    and it goes on until I use Ctrl+C to break it.
    Actually I want to make a function which can accept large values, so I am trying to change the first function. Thanks.

    RTFM DBMS_LOB.SUBSTR. Unlike the built-in function SUBSTR, the second parameter in DBMS_LOB.SUBSTR is length, not position. Also, PL/SQL fully supports CLOBs, so there is no need to use DBMS_LOB:
    SQL> CREATE OR REPLACE
      2  FUNCTION FN_PARSECSVSTRING1(p_list clob,
      3                              p_delim VARCHAR2:=' '
      4                             )
      5    RETURN CsvTableType1
      6    PIPELINED
      7    IS
      8        l_pos   PLS_INTEGER := 1;
      9        l_idx   PLS_INTEGER;
    10        l_value clob;
    11    BEGIN
    12        LOOP
    13          l_idx := INSTR(p_list, p_delim,l_pos);
    14          IF l_idx > 0
    15            THEN
    16              PIPE ROW(CsvObjectFormat1(substr(p_list,l_pos,l_idx-l_pos)));
    17              l_pos := l_idx+LENGTH(p_delim);
    18            ELSE
    19              PIPE ROW(CsvObjectFormat1(substr(p_list,l_pos)));
    20              RETURN;
    21          END IF;
    22        END LOOP;
    23        RETURN;
    24  END fn_ParseCSVString1;
    25  /
    Function created.
    SQL> SELECT rownum,s  FROM  TABLE( CAST( fn_ParseCSVString1('+588675,1~#588675^1^99^~2~16~115~99~SP5
    601~~~~~0~~', '~') as CsvTableType1)) ;
        ROWNUM S
             1 +588675,1
             2 #588675^1^99^
             3 2
             4 16
             5 115
             6 99
             7 SP5601
             8
             9
            10
            11
        ROWNUM S
            12 0
            13
            14
    14 rows selected.
    SQL> SY.
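As a quick way to remember the argument order, the two functions can be compared side by side (an illustrative query; the VARCHAR2 literal is implicitly converted to a CLOB for DBMS_LOB.SUBSTR):
-- SUBSTR(string, position, length) vs. DBMS_LOB.SUBSTR(lob, length, position)
SELECT SUBSTR('abcdef', 2, 3)          AS plain_substr,  -- 'bcd'
       DBMS_LOB.SUBSTR('abcdef', 2, 3) AS lob_substr     -- 'cd'
FROM   dual;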

  • Parallel not working in pipelined table function?

I've found this excellent article titled 'Oracle fast parallel data unload into ASCII file(s)' in this blog: http://jiri.wordpress.com/2009/03/18/oracle-fast-parallel-data-unload-into-ascii-files/
I have compiled the code and created the objects and the directory in my DB.
But when I execute:
SELECT *
FROM TABLE(
       DATA_UNLOAD(
         CURSOR(
           SELECT /*+ PARALLEL(A, 2, 1) */
                  TABLE_NAME || '|' ||
                  COLUMN_NAME || '|' ||
                  DATA_TYPE
           FROM MYTABLE A
         ),
         'SAMPLE_SPOOL.TXT',
         'DIR_USERS_JIRI',
         'Y',
         'Y' ) );
it is supposed to return 2 rows (because of parallel execution), but it just returns 1.
Do I have to do something special in order to make a parallel pipelined function work?
    Edited by: igorcb123 on 01-02-2011 01:58 PM
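Not from the original thread, but two quick sanity checks can show whether the function is really being run in parallel (a sketch; it assumes SELECT access to the v$ views):
-- Make sure the session allows parallel query at all:
ALTER SESSION ENABLE PARALLEL QUERY;
-- While the unload query is running, check from a second session how many
-- PX slaves are attached to each query coordinator:
SELECT qcsid, COUNT(*) AS px_slaves
FROM   v$px_session
WHERE  sid <> qcsid
GROUP  BY qcsid;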

IF and ELSE are used incorrectly in the function:
FUNCTION F_PS_HIGH_SCHOOL (OLD_HS CHAR)
   RETURN CHAR
IS
   NEW_HIGH_SCHOOL CHAR(11);
BEGIN
   IF OLD_HS = ' ' THEN
      NEW_HIGH_SCHOOL := ' ';
   ELSE -- Incorrect usage
      SELECT MC_AD_HS_NAME
        INTO NEW_HIGH_SCHOOL
        FROM XLAT_HIGH_SCHOOL_CODES_2
       WHERE SIS_HS_CODE = OLD_HS;
      RETURN NEW_HIGH_SCHOOL;
   END IF;
   RETURN NEW_HIGH_SCHOOL;
END F_PS_HIGH_SCHOOL;
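A tidier structure, offered as a sketch (a single RETURN at the end; note that, as in the original, the SELECT ... INTO raises NO_DATA_FOUND when the code is unknown):
FUNCTION F_PS_HIGH_SCHOOL (OLD_HS CHAR)
   RETURN CHAR
IS
   NEW_HIGH_SCHOOL CHAR(11) := ' ';
BEGIN
   IF OLD_HS <> ' ' THEN
      -- raises NO_DATA_FOUND for unknown codes, as the original does
      SELECT MC_AD_HS_NAME
        INTO NEW_HIGH_SCHOOL
        FROM XLAT_HIGH_SCHOOL_CODES_2
       WHERE SIS_HS_CODE = OLD_HS;
   END IF;
   RETURN NEW_HIGH_SCHOOL;
END F_PS_HIGH_SCHOOL;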

  • 10g: parallel pipelined table func - distributing DISTINCT data sets

Hi,
I want to distribute data records, selected from a cursor, via a parallel pipelined table function to multiple worker threads for processing and returning result records.
The tables I am selecting data from are partitioned and subpartitioned.
All tables share the same partitioning/subpartitioning scheme.
Each table has a column 'Subpartition_Key', which is hashed to a physical subpartition.
E.g. the Subpartition_Key ranges from 000...999, but we have only 10 physical subpartitions.
The selection of records is done partition-wise - one partition after another (in bulks).
The parallel running worker threads select more data from other tables for their processing (2nd-level select).
Now my goal is to distribute the initial records to the worker threads in such a way that they operate on distinct subpartitions - to decouple the access to resources (for the 2nd-level select).
But I cannot just use 'parallel_enable(partition curStage1 by hash(subpartition_key))' for the distribution:
hash(subpartition_key) (hashing A) does not match the hashing B used to assign the physical subpartition for the INSERT into the tables.
Even when I remodel hashing B, calculate some SubPartNo(subpartition_key) and use that for 'parallel_enable(partition curStage1 by hash(SubPartNo))', it doesn't work.
'parallel_enable(partition curStage1 by range(SubPartNo))' doesn't help either: the load distribution is unbalanced - some worker threads get data of one subpartition, some of multiple subpartitions, and some are idle.
How can I distribute the records to the worker threads according to a given subpartition scheme?
[Amendment: actually the hashing for the parallel_enable is counterproductive here - it would be better to have some 'parallel_enable(partition curStage1 by SubPartNo)'.]
- many thanks!
best regards,
Frank
Edited by: user8704911 on Jan 12, 2012 2:51 AM

    Hello
A couple of things to note. First, when you use partition by hash (or range) on 10gR2 and above, there is an additional BUFFER SORT operation compared with partition by ANY. For small data sets this is not necessarily an issue, but the temp space used by this stage can be significant for larger data sets, so be sure to check temp space usage for this process or you could run into problems later.
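For example, temp consumption can be watched while the statement is running (an illustrative query; it needs SELECT access to the v$ views, and with parallel execution the temp segments belong to the PX slave sessions):
-- Watch temp segment usage per session while the query runs:
SELECT s.sid, t.tablespace, t.segtype, t.blocks
FROM   v$tempseg_usage t, v$session s
WHERE  t.session_addr = s.saddr;
The extra BUFFER SORT step shows up in the plan for the hash-distributed version: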
    | Id  | Operation                             | Name     | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
    |   0 | SELECT STATEMENT                      |          |  8168 |  1722K|    24   (0)| 00:00:01 |       |       |        |      |            |
    |   1 |  PX COORDINATOR                       |          |       |       |            |          |       |       |        |      |            |
    |   2 |   PX SEND QC (RANDOM)                 | :TQ10001 |  8168 |  1722K|    24   (0)| 00:00:01 |       |       |  Q1,01 | P->S | QC (RAND)  |
    |   3 |****BUFFER SORT****                    |          |  8168 |  1722K|            |          |       |       |  Q1,01 | PCWP |            |
    |   4 |     VIEW                              |          |  8168 |  1722K|    24   (0)| 00:00:01 |       |       |  Q1,01 | PCWP |            |
    |   5 |      COLLECTION ITERATOR PICKLER FETCH| TF       |       |       |            |          |       |       |  Q1,01 | PCWP |            |
    |   6 |       PX RECEIVE                      |          |   100 |  4800 |     2   (0)| 00:00:01 |       |       |  Q1,01 | PCWP |            |
    |   7 |        PX SEND HASH                   | :TQ10000 |   100 |  4800 |     2   (0)| 00:00:01 |       |       |  Q1,00 | P->P | HASH       |
    |   8 |         PX BLOCK ITERATOR             |          |   100 |  4800 |     2   (0)| 00:00:01 |     1 |    10 |  Q1,00 | PCWC |            |
    |   9 |          TABLE ACCESS FULL            | TEST_TAB |   100 |  4800 |     2   (0)| 00:00:01 |     1 |    20 |  Q1,00 | PCWP |            |
-----------------------------------------------------------------------------------------------------------------------------------------------

It may be that in this case you can use clustering with partition by ANY to achieve your goal:
create or replace package test_pkg as
     type Test_Tab_Rec_t is record (
          Tracking_ID      number(19),
          Partition_Key    date,
          Subpartition_Key number(3),
          sid              number
     );
     type Test_Tab_Rec_Tab_t is table of Test_Tab_Rec_t;
     type Test_Tab_Rec_Hash_t is table of Test_Tab_Rec_t index by binary_integer;
     type Test_Tab_Rec_HashHash_t is table of Test_Tab_Rec_Hash_t index by binary_integer;
     type Cur_t is ref cursor return Test_Tab_Rec_t;
     procedure populate;
     procedure report;
     function tf(cur in Cur_t)
          return test_list pipelined
          parallel_enable(partition cur by hash(subpartition_key));
     function tf_any(cur in Cur_t)
          return test_list pipelined
          cluster cur by (Subpartition_Key)
          parallel_enable(partition cur by any);
end;
/
    create or replace package body test_pkg as
         procedure populate
         is
              Tracking_ID number(19) := 1;
              Partition_Key date := current_timestamp;
              Subpartition_Key number(3) := 1;
         begin
              dbms_output.put_line(chr(10) || 'populate data into Test_Tab...');
              for Subpartition_Key in 0..99
              loop
                   for ctr in 1..1
                   loop
                        insert into test_tab (tracking_id, partition_key, subpartition_key)
                        values (Tracking_ID, Partition_Key, Subpartition_Key);
                        Tracking_ID := Tracking_ID + 1;
                   end loop;
              end loop;
              dbms_output.put_line('...done (populate data into Test_Tab)');
         end;
         procedure report
         is
              recs Test_Tab_Rec_Tab_t;
         begin
              dbms_output.put_line(chr(10) || 'list data per partition/subpartition...');
              for item in (select partition_name, subpartition_name from user_tab_subpartitions where table_name='TEST_TAB' order by partition_name, subpartition_name)
              loop
                   dbms_output.put_line('partition/subpartition = '  || item.partition_name || '/' || item.subpartition_name || ':');
                   execute immediate 'select * from test_tab SUBPARTITION(' || item.subpartition_name || ')' bulk collect into recs;
                   if recs.count > 0
                   then
                        for i in recs.first..recs.last
                        loop
                             dbms_output.put_line('...' || recs(i).Tracking_ID || ', ' || recs(i).Partition_Key  || ', ' || recs(i).Subpartition_Key);
                        end loop;
                   end if;
              end loop;
              dbms_output.put_line('... done (list data per partition/subpartition)');
         end;
         function tf(cur in Cur_t)
         return test_list pipelined
         parallel_enable(partition cur by hash(subpartition_key))
         is
              sid number;
              input Test_Tab_Rec_t;
              output test_object;
         begin
              select userenv('SID') into sid from dual;
              loop
                   fetch cur into input;
                   exit when cur%notfound;
                   output := test_object(input.tracking_id, input.partition_key, input.subpartition_key,sid);
                   pipe row(output);
              end loop;
          return;
     end;
         function tf_any(cur in Cur_t)
         return test_list PIPELINED
        CLUSTER cur BY (Subpartition_Key)
         parallel_enable(partition cur by ANY)
         is
              sid number;
              input Test_Tab_Rec_t;
              output test_object;
         begin
              select userenv('SID') into sid from dual;
              loop
                   fetch cur into input;
                   exit when cur%notfound;
                   output := test_object(input.tracking_id, input.partition_key, input.subpartition_key,sid);
                   pipe row(output);
              end loop;
          return;
     end;
end;
/
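For completeness, the post doesn't show the supporting DDL; something along these lines would let the package compile (names and the partitioning layout are assumptions inferred from the plan rows and the SYS_SUBP* names above):
-- Hypothetical companion DDL for the example:
create table test_tab (
   tracking_id      number(19),
   partition_key    date,
   subpartition_key number(3)
)
partition by range (partition_key)
   subpartition by hash (subpartition_key) subpartitions 10
   (partition p_all values less than (maxvalue));

create or replace type test_object as object (
   tracking_id      number(19),
   partition_key    date,
   subpartition_key number(3),
   sid              number
);
/
create or replace type test_list as table of test_object;
/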
    XXXX> with parts as (
      2  select --+ materialize
      3      data_object_id,
      4      subobject_name
      5  FROM
      6      user_objects
      7  WHERE
      8      object_name = 'TEST_TAB'
      9  and
    10      object_type = 'TABLE SUBPARTITION'
    11  )
    12  SELECT
    13        COUNT(*),
    14        parts.subobject_name,
    15        target.sid
    16  FROM
    17        parts,
    18        test_tab tt,
    19        test_tab_part_hash target
    20  WHERE
    21        tt.tracking_id = target.tracking_id
    22  and
    23        parts.data_object_id = DBMS_MView.PMarker(tt.rowid)
    24  GROUP BY
    25        parts.subobject_name,
    26        target.sid
    27  ORDER BY
    28        target.sid,
    29        parts.subobject_name
    30  /
    XXXX> INSERT INTO test_tab_part_hash select * from table(test_pkg.tf(CURSOR(select * from test_tab)))
      2  /
    100 rows created.
    Elapsed: 00:00:00.14
    XXXX>
    XXXX> INSERT INTO test_tab_part_any_cluster select * from table(test_pkg.tf_any(CURSOR(select * from test_tab)))
      2  /
    100 rows created.
    --using partition by hash
    XXXX> with parts as (
      2  select --+ materialize
      3      data_object_id,
      4      subobject_name
      5  FROM
      6      user_objects
      7  WHERE
      8      object_name = 'TEST_TAB'
      9  and
    10      object_type = 'TABLE SUBPARTITION'
    11  )
    12  SELECT
    13        COUNT(*),
    14        parts.subobject_name,
    15        target.sid
    16  FROM
    17        parts,
    18        test_tab tt,
    19        test_tab_part_hash target
    20  WHERE
    21        tt.tracking_id = target.tracking_id
    22  and
    23        parts.data_object_id = DBMS_MView.PMarker(tt.rowid)
    24  GROUP BY
    25        parts.subobject_name,
    26        target.sid
    27  /
      COUNT(*) SUBOBJECT_NAME                        SID
             3 SYS_SUBP31                           1272
             1 SYS_SUBP32                           1272
             1 SYS_SUBP33                           1272
             3 SYS_SUBP34                           1272
             1 SYS_SUBP36                           1272
             1 SYS_SUBP37                           1272
             3 SYS_SUBP38                           1272
             1 SYS_SUBP39                           1272
             1 SYS_SUBP32                           1280
             2 SYS_SUBP33                           1280
             2 SYS_SUBP34                           1280
             1 SYS_SUBP35                           1280
             2 SYS_SUBP36                           1280
             1 SYS_SUBP37                           1280
             2 SYS_SUBP38                           1280
             1 SYS_SUBP40                           1280
             2 SYS_SUBP33                           1283
             2 SYS_SUBP34                           1283
             2 SYS_SUBP35                           1283
             2 SYS_SUBP36                           1283
             1 SYS_SUBP37                           1283
             1 SYS_SUBP38                           1283
             2 SYS_SUBP39                           1283
             1 SYS_SUBP40                           1283
             1 SYS_SUBP32                           1298
             1 SYS_SUBP34                           1298
             1 SYS_SUBP36                           1298
             2 SYS_SUBP37                           1298
             4 SYS_SUBP38                           1298
             2 SYS_SUBP40                           1298
             1 SYS_SUBP31                           1313
             1 SYS_SUBP33                           1313
             1 SYS_SUBP39                           1313
             1 SYS_SUBP40                           1313
             1 SYS_SUBP32                           1314
             1 SYS_SUBP35                           1314
             1 SYS_SUBP38                           1314
             1 SYS_SUBP40                           1314
             2 SYS_SUBP33                           1381
             1 SYS_SUBP34                           1381
             1 SYS_SUBP35                           1381
             3 SYS_SUBP36                           1381
             3 SYS_SUBP37                           1381
             1 SYS_SUBP38                           1381
             2 SYS_SUBP36                           1531
             1 SYS_SUBP37                           1531
             2 SYS_SUBP38                           1531
             1 SYS_SUBP39                           1531
             1 SYS_SUBP40                           1531
             2 SYS_SUBP33                           1566
             1 SYS_SUBP34                           1566
             1 SYS_SUBP35                           1566
             1 SYS_SUBP37                           1566
             1 SYS_SUBP38                           1566
             2 SYS_SUBP39                           1566
             3 SYS_SUBP40                           1566
             1 SYS_SUBP32                           1567
             3 SYS_SUBP33                           1567
             3 SYS_SUBP35                           1567
             3 SYS_SUBP36                           1567
             1 SYS_SUBP37                           1567
             2 SYS_SUBP38                           1567
    62 rows selected.
    --using partition by any cluster by subpartition_key
    Elapsed: 00:00:00.26
    XXXX> with parts as (
      2  select --+ materialize
      3      data_object_id,
      4      subobject_name
      5  FROM
      6      user_objects
      7  WHERE
      8      object_name = 'TEST_TAB'
      9  and
    10      object_type = 'TABLE SUBPARTITION'
    11  )
    12  SELECT
    13        COUNT(*),
    14        parts.subobject_name,
    15        target.sid
    16  FROM
    17        parts,
    18        test_tab tt,
    19        test_tab_part_any_cluster target
    20  WHERE
    21        tt.tracking_id = target.tracking_id
    22  and
    23        parts.data_object_id = DBMS_MView.PMarker(tt.rowid)
    24  GROUP BY
    25        parts.subobject_name,
    26        target.sid
    27  ORDER BY
    28        target.sid,
    29        parts.subobject_name
    30  /
      COUNT(*) SUBOBJECT_NAME                        SID
            11 SYS_SUBP37                           1253
            10 SYS_SUBP34                           1268
             4 SYS_SUBP31                           1289
            10 SYS_SUBP40                           1314
             7 SYS_SUBP39                           1367
             9 SYS_SUBP35                           1377
            14 SYS_SUBP36                           1531
             5 SYS_SUBP32                           1572
            13 SYS_SUBP33                           1577
            17 SYS_SUBP38                           1609
10 rows selected.

Bear in mind, though, that this does require a sort of the incoming dataset, but it does not require buffering of the output:
    PLAN_TABLE_OUTPUT
    Plan hash value: 2570087774
    | Id  | Operation                            | Name     | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
    |   0 | SELECT STATEMENT                     |          |  8168 |  1722K|    24   (0)| 00:00:01 |       |       |        |      |            |
    |   1 |  PX COORDINATOR                      |          |       |       |            |          |       |       |        |      |            |
    |   2 |   PX SEND QC (RANDOM)                | :TQ10000 |  8168 |  1722K|    24   (0)| 00:00:01 |       |       |  Q1,00 | P->S | QC (RAND)  |
    |   3 |    VIEW                              |          |  8168 |  1722K|    24   (0)| 00:00:01 |       |       |  Q1,00 | PCWP |            |
    |   4 |     COLLECTION ITERATOR PICKLER FETCH| TF_ANY   |       |       |            |          |       |       |  Q1,00 | PCWP |            |
    |   5 |      SORT ORDER BY                   |          |       |       |            |          |       |       |  Q1,00 | PCWP |            |
    |   6 |       PX BLOCK ITERATOR              |          |   100 |  4800 |     2   (0)| 00:00:01 |     1 |    10 |  Q1,00 | PCWC |            |
    |   7 |        TABLE ACCESS FULL             | TEST_TAB |   100 |  4800 |     2   (0)| 00:00:01 |     1 |    20 |  Q1,00 | PCWP |            |
----------------------------------------------------------------------------------------------------------------------------------------------
HTH
    David

  • 10g: parallel pipelined table func. using table(cast(SQL collect.))?

Hi,
I am trying to distribute SQL data objects - stored in a SQL TABLE OF <object-type> collection - to multiple (parallel) instances of a table function,
by passing a CURSOR(...) to the table function which selects from the TABLE OF storage via "select * from TABLE(CAST(<storage> as <storage-type>))".
But Oracle always uses only a single table function instance :-(
whatever hints I provide or settings I use for the parallel table function (parallel_enable ...).
Could it be that this is because my data is not globally available, but exists only in the main session?
Can someone confirm that it is not possible to start multiple parallel table function instances selecting from a SQL TABLE OF <object> storage?
Here's an example SQL*Plus program to show the issue:
    -------------------- snip ---------------------------------------------
    set serveroutput on;
    drop table test_table;
    drop type ton_t;
    drop type test_list;
    drop type test_obj;
create table test_table (
     a number(19,0),
     b timestamp with time zone,
     c varchar2(256)
);
create or replace type test_obj as object (
     a number(19,0),
     b timestamp with time zone,
     c varchar2(256)
);
/
create or replace type test_list as table of test_obj;
/
create or replace type ton_t as table of number;
/
create or replace package test_pkg
as
     type test_rec is record (
          a number(19,0),
          b timestamp with time zone,
          c varchar2(256)
     );
     type test_tab is table of test_rec;
     type test_cur is ref cursor return test_rec;
     function TF(mycur test_cur)
          return test_list pipelined
          parallel_enable(partition mycur by hash(a));
end;
/
    create or replace package body test_pkg
    as
         function TF(mycur test_cur)
    return test_list pipelined
    parallel_enable(partition mycur by hash(a))
    is
              sid number;
              counter number(19,0) := 0;
              myrec test_rec;
              mytab test_tab;
              mytab2 test_list := test_list();
         begin
              select userenv('SID') into sid from dual;
              dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): enter');
              loop
                   fetch mycur into myRec;
                   exit when mycur%NOTFOUND;
                   mytab2.extend;
                   mytab2(mytab2.last) := test_obj(myRec.a, myRec.b, myRec.c);
              end loop;
              for i in mytab2.first..mytab2.last loop
                   -- attention: saves own SID in test_obj.a for indication to caller
                   --     how many sids have been involved
                   pipe row(test_obj(sid, mytab2(i).b, mytab2(i).c));
                   counter := counter + 1;
              end loop;
              dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): exit, piped #' || counter || ' records');
          return;
     end;
end;
/
    declare
         myList test_list := test_list();
         myList2 test_list := test_list();
         sids ton_t := ton_t();
    begin
         for i in 1..10000 loop
              myList.extend; myList(myList.last) := test_obj(i, sysdate, to_char(i+2));
         end loop;
         -- save into the real table
         insert into test_table select * from table(cast (myList as test_list));
         dbms_output.put_line(chr(10) || 'copy ''mylist'' to ''mylist2'' by streaming via table function...');
         select test_obj(a, b, c) bulk collect into myList2
         from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,10) */ * from table(cast (myList as test_list)) tab)));
         dbms_output.put_line('... saved #' || myList2.count || ' records');
         select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
         dbms_output.put_line('worker thread''s sid list:');
         for i in sids.first..sids.last loop
              dbms_output.put_line('sid #' || sids(i));
         end loop;
         dbms_output.put_line(chr(10) || 'copy physical ''test_table'' to ''mylist2'' by streaming via table function:');
         select test_obj(a, b, c) bulk collect into myList2
         from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,10) */ * from test_table tab)));
         dbms_output.put_line('... saved #' || myList2.count || ' records');
         select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
         dbms_output.put_line('worker thread''s sid list:');
         for i in sids.first..sids.last loop
              dbms_output.put_line('sid #' || sids(i));
         end loop;
end;
/
    -------------------- snap ---------------------------------------------
    Here's the output:
    -------------------- snip ---------------------------------------------
    copy 'mylist' to 'mylist2' by streaming via table function...
    test_pkg.TF( sid => '98' ): enter
    test_pkg.TF( sid => '98' ): exit, piped #10000 records
    ... saved #10000 records
    worker thread's sid list:
    sid #98 -- ONLY A SINGLE SID HERE!
    copy physical 'test_table' to 'mylist2' by streaming via table function:
    ... saved #10000 records
    worker thread's sid list:
    sid #128 -- A LIST OF SIDS HERE!
    sid #141
    sid #85
    sid #125
    sid #254
    sid #101
    sid #124
    sid #109
    sid #142
    sid #92
    PL/SQL procedure successfully completed.
    -------------------- snap ---------------------------------------------
I posted it to the newsgroup comp.databases.oracle.server
(summary: "10g: parallel pipelined table functions with cursor selecting from table(cast(SQL collection)) doesn't work"),
but I didn't get a response.
There I also wrote some background information about my application:
    -------------------- snip ---------------------------------------------
My application does its data selection in two steps/stages.
A 1st select retrieves minimal context base data
- mainly to evaluate which driving data records are due.
A 2nd select then retrieves all the "real" data to process a context
(joining many more tables here, which I don't want to do for non-due records).
So it does the stage #1 select first, then the stage #2 select - based on the stage #1 results - next.
The first implementation of the application did the stage #1 select in the main session of the PL/SQL code.
For the stage #2 select there was a dispatch to multiple parallel table functions (in multiple worker sessions) for the "real work".
That worked.
However, there was a flaw:
between records from the stage #1 selection and records from the stage #2 selection there is a 1:n relation (via a key / foreign key relation).
That means for 1 resulting record from the stage #1 selection there are x records from the stage #2 selection.
That forced me to use "cluster curStage2 by (theKey)",
because the worker sessions need to evaluate the overall status for a context of 1 record from stage #1 plus its x records from stage #2
(so each needs to have the x stage #2 records together).
This then resulted in a delay in starting up the worker sessions (I didn't find a way to get rid of this).
So I wanted to shift the invocation of the worker sessions to the stage #1 selection.
Then I wouldn't need the "cluster curStage2 by (theKey)" anymore!
But: I also need to do an update of the primary driving data!
So the stage #1 select is a 'select ... for update ...',
and you can't use such a query in a CURSOR for a table function (which I can understand, and why it's not possible).
So I have to do my stage #1 selection in two steps:
1. 'select for update' in the main session, collecting the result in a SQL collection.
2. pass the collected data to the parallel table functions.
And for step 2 I recognized that it doesn't start multiple parallel table function instances.
As a work-around
- if it's just not possible to start multiple parallel pipelined table functions dispatching from 'select * from TABLE(CAST(... as ...))' -
I would need to select again on the base tables, driven by the SQL collection data.
But before I do so, I wanted to verify that it's really not possible.
Maybe I just miss a special Oracle hint or whatever you can get "out of another box" :-)
    -------------------- snap ---------------------------------------------
    - many thanks!
    rgds,
    Frank
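The output above already points at the cause: the cursor over TABLE(CAST(myList AS test_list)) is a serial row source (a collection pickler fetch), while the cursor over the physical table parallelizes. A common workaround - sketched here as a suggestion, not a confirmed answer from the thread - is to stage the collection rows in a real table (for example a global temporary table) before handing the cursor to the parallel function:
-- One-time DDL (names illustrative):
create global temporary table test_stage (
     a number(19,0),
     b timestamp with time zone,
     c varchar2(256)
) on commit delete rows;
-- Then, inside the anonymous block, instead of selecting from the
-- collection directly:
--   insert into test_stage select * from table(cast(myList as test_list));
--   select test_obj(a, b, c) bulk collect into myList2
--   from table(test_pkg.TF(CURSOR(
--          select /*+ parallel(tab,10) */ * from test_stage tab)));
-- (Whether a temporary table scan parallelizes depends on the version;
--  a permanent staging table behaves the same way.)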


  • How to define a view on a table function

I have defined a pipelined table function in a package and can return data from it using:
select * from table(mtreport.FN_GET_JOBS_FOR_PROJECT(?));

How can I define a view on this so that users can select from it without having to know to use the 'table' syntax?
select *
from   vw_jobs_for_project
where  id = ?

I've tried this, but it doesn't work:
    CREATE OR REPLACE VIEW vw_jobs_for_project (PROJECT_ID, NAME) AS
    SELECT     PROJECT_ID, NAME
    FROM     table(mtreport.FN_GET_JOBS_FOR_PROJECT(PROJECT_ID));
    ERROR at line 3:
    ORA-00904: "PROJECT_ID": invalid identifier

Views do not accept input parameters in the way that you want to use them. You can define a CONTEXT parameter, but users would need to set the context value before selecting from the view, so you'll still need to train them to do something new.
Example:
    create or replace context my_context using pkg_context;
    create or replace package pkg_context is
      procedure set_context(p_parameter in varchar2, p_value in varchar2);
      function get_context(p_parameter in varchar2) return varchar2;
    end;
    show errors
    create or replace package body pkg_context is
      CONTEXT_NAME constant all_context.namespace%type := 'MY_CONTEXT';
      procedure set_context(p_parameter in varchar2, p_value in varchar2) is
      begin
        dbms_session.set_context(CONTEXT_NAME, p_parameter, p_value);
      end;
      function get_context(p_parameter in varchar2) return varchar2 is
      begin
        return sys_context(CONTEXT_NAME, p_parameter);
      end;
    end;
    show errors
    create or replace view tree_view as
    select level lvl, chid
    from tree
    connect by prior parid=chid
    start with chid = sys_context('MY_CONTEXT','CHID');
    -- at runtime
    exec pkg_context.set_context('CHID', 14)
    select * from tree_view;
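Applying the same idea to the original question, a sketch (it assumes FN_GET_JOBS_FOR_PROJECT takes a numeric project id and reuses the pkg_context package above):
create or replace view vw_jobs_for_project as
select project_id, name
from   table(mtreport.fn_get_jobs_for_project(
             to_number(sys_context('MY_CONTEXT', 'PROJECT_ID'))));
-- at runtime
exec pkg_context.set_context('PROJECT_ID', 42)
select * from vw_jobs_for_project;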

  • How to debug "pipelined parallel enable table function" ?

Dear All,
Normally we can retrieve output from a "pipelined parallel enabled table function" by using a SQL statement, such as:
select output from table(pipelined_function(arg1));
Can anyone enlighten me on how to call this function purely from a PL/SQL statement?
The reason for this: a third-party developer has developed a complicated "pipelined parallel enabled table function", and this function is currently called from SQL (select output from table(pipelined_function(arg1));). I would like to debug this function, but I am unable to do so. So far I have:
1) compiled the function with the debug option
2) used Procedure Builder to debug; if I execute the above statement (select output from table(pipelined_function(arg1));) I believe Procedure Builder will not debug a function called from a SQL statement
3) tried to build a PL/SQL block, but don't know how to do it.
Basically, I want to debug this "pipelined parallel enabled table function" and don't know how to do it. Any example would be great!

user2302827 wrote:
Using dbms_output is fine but too tedious. I was looking for a PL/SQL procedure builder that can help to debug a statement like this:
select output from table(pipelined_function(arg1));
Any suggestion?

I don't know about any procedure builders.
There is a debugger you can use, though I'd have to think about how to invoke it for use with a pipelined function. The debugger is available in GUI tools like SQL*Developer and TOAD and can be used to walk through a program and set values.