10g: parallel pipelined table func - distributing DISTINCT data sets

Hi,
I want to distribute data records, selected from a cursor, via a parallel pipelined table function to multiple worker threads for processing and returning result records.
The tables I am selecting from are partitioned and subpartitioned.
All tables share the same partitioning/subpartitioning schema.
Each table has a column 'Subpartition_Key', which is hashed to a physical subpartition.
E.g. the Subpartition_Key ranges from 000...999, but we have only 10 physical subpartitions.
The records are selected partition-wise - one partition after another (in bulks).
The parallel worker threads select more data from other tables for their processing (a 2nd-level select).
Now my goal is to distribute the initial records to the worker threads in such a way that they operate on distinct subpartitions - to decouple the access to resources (for the 2nd-level select).
But I cannot simply use 'parallel_enable(partition curStage1 by hash(subpartition_key))' for the distribution:
hash(subpartition_key) (hashing A) does not match the hashing B used to assign the physical subpartition on INSERT into the tables.
Even when I remodel hashing B, calculate some SubPartNo(subpartition_key) and use that in 'parallel_enable(partition curStage1 by hash(SubPartNo))', it doesn't work.
'parallel_enable(partition curStage1 by range(SubPartNo))' doesn't help either: the load distribution is unbalanced - some worker threads get the data of one subpartition, some of multiple subpartitions, and some stay idle.
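(For illustration, a minimal sketch of what I mean by 'remodeling hashing B' - the function name is made up, and ORA_HASH is not guaranteed to reproduce Oracle's internal subpartition hashing, so treat this as a sketch of the attempt, not a solution:)

-- Hypothetical SubPartNo(): map the logical key (000...999) to one of the
-- 10 physical subpartitions. ORA_HASH(expr, max_bucket) returns a bucket
-- in 0..max_bucket, but whether it matches Oracle's internal subpartition
-- placement depends on the version and the subpartition count.
create or replace function SubPartNo(p_subpartition_key in number)
return number deterministic
is
begin
     return ora_hash(p_subpartition_key, 9);   -- buckets 0..9
end SubPartNo;
/
-- ...selected as an extra column in the driving cursor, so that it can be
-- referenced in parallel_enable(partition curStage1 by hash(SubPartNo)).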
How can I distribute the records to the worker threads according to a given subpartition schema?
[Amendment:
Actually, the hashing in parallel_enable is counterproductive here - it would be better to have something like 'parallel_enable(partition curStage1 by SubPartNo)'.]
- many thanks!
best regards,
Frank

Hello
A couple of things to note. First, when you use PARTITION BY HASH (or RANGE) on 10gR2 and above, there is an additional BUFFER SORT operation (highlighted at step 3 in the plan below) compared with PARTITION BY ANY. For small data sets this is not necessarily an issue, but the temp space used by this stage can be significant for larger data sets, so be sure to check the temp space usage of this process or you could run into problems later.
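(As a quick sketch of how to keep an eye on that while the load runs - assuming you have access to the v$ views; v$sort_usage is a public synonym for v$tempseg_usage:)

-- Approximate temp segment usage per session, in MB:
select s.sid,
       s.username,
       u.tablespace,
       u.segtype,
       round(u.blocks * p.value / 1024 / 1024) temp_mb
from   v$sort_usage u,
       v$session    s,
       (select value from v$parameter where name = 'db_block_size') p
where  u.session_addr = s.saddr
order  by temp_mb desc;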
-----------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name     | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |          |  8168 |  1722K|    24   (0)| 00:00:01 |       |       |        |      |            |
|   1 |  PX COORDINATOR                       |          |       |       |            |          |       |       |        |      |            |
|   2 |   PX SEND QC (RANDOM)                 | :TQ10001 |  8168 |  1722K|    24   (0)| 00:00:01 |       |       |  Q1,01 | P->S | QC (RAND)  |
|   3 |    BUFFER SORT                        |          |  8168 |  1722K|            |          |       |       |  Q1,01 | PCWP |            |
|   4 |     VIEW                              |          |  8168 |  1722K|    24   (0)| 00:00:01 |       |       |  Q1,01 | PCWP |            |
|   5 |      COLLECTION ITERATOR PICKLER FETCH| TF       |       |       |            |          |       |       |  Q1,01 | PCWP |            |
|   6 |       PX RECEIVE                      |          |   100 |  4800 |     2   (0)| 00:00:01 |       |       |  Q1,01 | PCWP |            |
|   7 |        PX SEND HASH                   | :TQ10000 |   100 |  4800 |     2   (0)| 00:00:01 |       |       |  Q1,00 | P->P | HASH       |
|   8 |         PX BLOCK ITERATOR             |          |   100 |  4800 |     2   (0)| 00:00:01 |     1 |    10 |  Q1,00 | PCWC |            |
|   9 |          TABLE ACCESS FULL            | TEST_TAB |   100 |  4800 |     2   (0)| 00:00:01 |     1 |    20 |  Q1,00 | PCWP |            |
-----------------------------------------------------------------------------------------------------------------------------------------------

Second, it may be that in this case you can use clustering with PARTITION BY ANY to achieve your goal...
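(For reference, the test case below relies on a few supporting objects that weren't shown in the thread. A reconstruction along these lines should work - the partitioning DDL and column layout are my assumptions, not the original:)

-- Reconstructed setup (assumption): a range/hash composite-partitioned source
-- table with 10 hash subpartitions per partition (2 x 10 = 20 subpartitions,
-- matching Pstop = 20 in the plans). A 'sid' column is included so that
-- 'select * from test_tab' matches Test_Tab_Rec_t below.
create table test_tab (
     tracking_id      number(19),
     partition_key    date,
     subpartition_key number(3),
     sid              number
)
partition by range (partition_key)
subpartition by hash (subpartition_key) subpartitions 10
(
     partition p1 values less than (date '2012-01-01'),
     partition p2 values less than (maxvalue)
);

create or replace type test_object as object (
     tracking_id      number(19),
     partition_key    date,
     subpartition_key number(3),
     sid              number
);
/
create or replace type test_list as table of test_object;
/
-- Plain (non-partitioned) target tables for the two INSERT tests:
create table test_tab_part_hash        (tracking_id number(19), partition_key date, subpartition_key number(3), sid number);
create table test_tab_part_any_cluster (tracking_id number(19), partition_key date, subpartition_key number(3), sid number);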
create or replace package test_pkg as
     type Test_Tab_Rec_t is record (
          Tracking_ID                 number(19),
          Partition_Key               date,
          Subpartition_Key            number(3),
          sid                         number
     );
     type Test_Tab_Rec_Tab_t is table of Test_Tab_Rec_t;
     type Test_Tab_Rec_Hash_t is table of Test_Tab_Rec_t index by binary_integer;
     type Test_Tab_Rec_HashHash_t is table of Test_Tab_Rec_Hash_t index by binary_integer;
     type Cur_t is ref cursor return Test_Tab_Rec_t;
     procedure populate;
     procedure report;
     function tf(cur in Cur_t)
     return test_list pipelined
     parallel_enable(partition cur by hash(subpartition_key));
     function tf_any(cur in Cur_t)
     return test_list PIPELINED
     CLUSTER cur BY (Subpartition_Key)
     parallel_enable(partition cur by ANY);
end;
/
create or replace package body test_pkg as
     procedure populate
     is
          Tracking_ID number(19) := 1;
          Partition_Key date := current_timestamp;
          Subpartition_Key number(3) := 1;
     begin
          dbms_output.put_line(chr(10) || 'populate data into Test_Tab...');
          for Subpartition_Key in 0..99
          loop
               for ctr in 1..1
               loop
                    insert into test_tab (tracking_id, partition_key, subpartition_key)
                    values (Tracking_ID, Partition_Key, Subpartition_Key);
                    Tracking_ID := Tracking_ID + 1;
               end loop;
          end loop;
          dbms_output.put_line('...done (populate data into Test_Tab)');
     end;
     procedure report
     is
          recs Test_Tab_Rec_Tab_t;
     begin
          dbms_output.put_line(chr(10) || 'list data per partition/subpartition...');
          for item in (select partition_name, subpartition_name from user_tab_subpartitions where table_name='TEST_TAB' order by partition_name, subpartition_name)
          loop
               dbms_output.put_line('partition/subpartition = '  || item.partition_name || '/' || item.subpartition_name || ':');
               execute immediate 'select * from test_tab SUBPARTITION(' || item.subpartition_name || ')' bulk collect into recs;
               if recs.count > 0
               then
                    for i in recs.first..recs.last
                    loop
                         dbms_output.put_line('...' || recs(i).Tracking_ID || ', ' || recs(i).Partition_Key  || ', ' || recs(i).Subpartition_Key);
                    end loop;
               end if;
          end loop;
          dbms_output.put_line('... done (list data per partition/subpartition)');
     end;
     function tf(cur in Cur_t)
     return test_list pipelined
     parallel_enable(partition cur by hash(subpartition_key))
     is
          sid number;
          input Test_Tab_Rec_t;
          output test_object;
     begin
          select userenv('SID') into sid from dual;
          loop
               fetch cur into input;
               exit when cur%notfound;
               output := test_object(input.tracking_id, input.partition_key, input.subpartition_key,sid);
               pipe row(output);
          end loop;
          return;
     end;
     function tf_any(cur in Cur_t)
     return test_list PIPELINED
     CLUSTER cur BY (Subpartition_Key)
     parallel_enable(partition cur by ANY)
     is
          sid number;
          input Test_Tab_Rec_t;
          output test_object;
     begin
          select userenv('SID') into sid from dual;
          loop
               fetch cur into input;
               exit when cur%notfound;
               output := test_object(input.tracking_id, input.partition_key, input.subpartition_key,sid);
               pipe row(output);
          end loop;
          return;
     end;
end;
/
XXXX> with parts as (
  2  select --+ materialize
  3      data_object_id,
  4      subobject_name
  5  FROM
  6      user_objects
  7  WHERE
  8      object_name = 'TEST_TAB'
  9  and
10      object_type = 'TABLE SUBPARTITION'
11  )
12  SELECT
13        COUNT(*),
14        parts.subobject_name,
15        target.sid
16  FROM
17        parts,
18        test_tab tt,
19        test_tab_part_hash target
20  WHERE
21        tt.tracking_id = target.tracking_id
22  and
23        parts.data_object_id = DBMS_MView.PMarker(tt.rowid)
24  GROUP BY
25        parts.subobject_name,
26        target.sid
27  ORDER BY
28        target.sid,
29        parts.subobject_name
30  /
XXXX> INSERT INTO test_tab_part_hash select * from table(test_pkg.tf(CURSOR(select * from test_tab)))
  2  /
100 rows created.
Elapsed: 00:00:00.14
XXXX>
XXXX> INSERT INTO test_tab_part_any_cluster select * from table(test_pkg.tf_any(CURSOR(select * from test_tab)))
  2  /
100 rows created.
--using partition by hash
XXXX> with parts as (
  2  select --+ materialize
  3      data_object_id,
  4      subobject_name
  5  FROM
  6      user_objects
  7  WHERE
  8      object_name = 'TEST_TAB'
  9  and
10      object_type = 'TABLE SUBPARTITION'
11  )
12  SELECT
13        COUNT(*),
14        parts.subobject_name,
15        target.sid
16  FROM
17        parts,
18        test_tab tt,
19        test_tab_part_hash target
20  WHERE
21        tt.tracking_id = target.tracking_id
22  and
23        parts.data_object_id = DBMS_MView.PMarker(tt.rowid)
24  GROUP BY
25        parts.subobject_name,
26        target.sid
27  /
  COUNT(*) SUBOBJECT_NAME                        SID
         3 SYS_SUBP31                           1272
         1 SYS_SUBP32                           1272
         1 SYS_SUBP33                           1272
         3 SYS_SUBP34                           1272
         1 SYS_SUBP36                           1272
         1 SYS_SUBP37                           1272
         3 SYS_SUBP38                           1272
         1 SYS_SUBP39                           1272
         1 SYS_SUBP32                           1280
         2 SYS_SUBP33                           1280
         2 SYS_SUBP34                           1280
         1 SYS_SUBP35                           1280
         2 SYS_SUBP36                           1280
         1 SYS_SUBP37                           1280
         2 SYS_SUBP38                           1280
         1 SYS_SUBP40                           1280
         2 SYS_SUBP33                           1283
         2 SYS_SUBP34                           1283
         2 SYS_SUBP35                           1283
         2 SYS_SUBP36                           1283
         1 SYS_SUBP37                           1283
         1 SYS_SUBP38                           1283
         2 SYS_SUBP39                           1283
         1 SYS_SUBP40                           1283
         1 SYS_SUBP32                           1298
         1 SYS_SUBP34                           1298
         1 SYS_SUBP36                           1298
         2 SYS_SUBP37                           1298
         4 SYS_SUBP38                           1298
         2 SYS_SUBP40                           1298
         1 SYS_SUBP31                           1313
         1 SYS_SUBP33                           1313
         1 SYS_SUBP39                           1313
         1 SYS_SUBP40                           1313
         1 SYS_SUBP32                           1314
         1 SYS_SUBP35                           1314
         1 SYS_SUBP38                           1314
         1 SYS_SUBP40                           1314
         2 SYS_SUBP33                           1381
         1 SYS_SUBP34                           1381
         1 SYS_SUBP35                           1381
         3 SYS_SUBP36                           1381
         3 SYS_SUBP37                           1381
         1 SYS_SUBP38                           1381
         2 SYS_SUBP36                           1531
         1 SYS_SUBP37                           1531
         2 SYS_SUBP38                           1531
         1 SYS_SUBP39                           1531
         1 SYS_SUBP40                           1531
         2 SYS_SUBP33                           1566
         1 SYS_SUBP34                           1566
         1 SYS_SUBP35                           1566
         1 SYS_SUBP37                           1566
         1 SYS_SUBP38                           1566
         2 SYS_SUBP39                           1566
         3 SYS_SUBP40                           1566
         1 SYS_SUBP32                           1567
         3 SYS_SUBP33                           1567
         3 SYS_SUBP35                           1567
         3 SYS_SUBP36                           1567
         1 SYS_SUBP37                           1567
         2 SYS_SUBP38                           1567
62 rows selected.
--using partition by any cluster by subpartition_key
Elapsed: 00:00:00.26
XXXX> with parts as (
  2  select --+ materialize
  3      data_object_id,
  4      subobject_name
  5  FROM
  6      user_objects
  7  WHERE
  8      object_name = 'TEST_TAB'
  9  and
10      object_type = 'TABLE SUBPARTITION'
11  )
12  SELECT
13        COUNT(*),
14        parts.subobject_name,
15        target.sid
16  FROM
17        parts,
18        test_tab tt,
19        test_tab_part_any_cluster target
20  WHERE
21        tt.tracking_id = target.tracking_id
22  and
23        parts.data_object_id = DBMS_MView.PMarker(tt.rowid)
24  GROUP BY
25        parts.subobject_name,
26        target.sid
27  ORDER BY
28        target.sid,
29        parts.subobject_name
30  /
  COUNT(*) SUBOBJECT_NAME                        SID
        11 SYS_SUBP37                           1253
        10 SYS_SUBP34                           1268
         4 SYS_SUBP31                           1289
        10 SYS_SUBP40                           1314
         7 SYS_SUBP39                           1367
         9 SYS_SUBP35                           1377
        14 SYS_SUBP36                           1531
         5 SYS_SUBP32                           1572
        13 SYS_SUBP33                           1577
        17 SYS_SUBP38                           1609
10 rows selected.

Bear in mind though that this does require a sort of the incoming data set, but it does not require buffering of the output...
PLAN_TABLE_OUTPUT
Plan hash value: 2570087774
----------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name     | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
----------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |          |  8168 |  1722K|    24   (0)| 00:00:01 |       |       |        |      |            |
|   1 |  PX COORDINATOR                      |          |       |       |            |          |       |       |        |      |            |
|   2 |   PX SEND QC (RANDOM)                | :TQ10000 |  8168 |  1722K|    24   (0)| 00:00:01 |       |       |  Q1,00 | P->S | QC (RAND)  |
|   3 |    VIEW                              |          |  8168 |  1722K|    24   (0)| 00:00:01 |       |       |  Q1,00 | PCWP |            |
|   4 |     COLLECTION ITERATOR PICKLER FETCH| TF_ANY   |       |       |            |          |       |       |  Q1,00 | PCWP |            |
|   5 |      SORT ORDER BY                   |          |       |       |            |          |       |       |  Q1,00 | PCWP |            |
|   6 |       PX BLOCK ITERATOR              |          |   100 |  4800 |     2   (0)| 00:00:01 |     1 |    10 |  Q1,00 | PCWC |            |
|   7 |        TABLE ACCESS FULL             | TEST_TAB |   100 |  4800 |     2   (0)| 00:00:01 |     1 |    20 |  Q1,00 | PCWP |            |
----------------------------------------------------------------------------------------------------------------------------------------------

HTH
David

Similar Messages

  • 10g: parallel pipelined table func. using table(cast(SQL collect.))?

    Hi,
    I am trying to distribute SQL data objects - stored in a SQL data type TABLE OF <object-Type> - to multiple (parallel) instances of a table function,
    by passing a CURSOR(...) to the table function, which selects from the TABLE OF storage via "select * from TABLE(CAST(<storage> as <storage-type>))".
    But Oracle always uses only a single table function instance :-(
    whatever hints I provide or settings I use for the parallel table function (parallel_enable ...).
    Could it be that this is because my data is not
    globally available, but exists only in the main thread?
    Can someone confirm that it's not possible to start multiple parallel table functions
    selecting from SQL data type TABLE OF <object> storages?
    Here's an example sqlplus program to show the issue:
    -------------------- snip ---------------------------------------------
    set serveroutput on;
    drop table test_table;
    drop type ton_t;
    drop type test_list;
    drop type test_obj;
    create table test_table (
         a number(19,0),
         b timestamp with time zone,
         c varchar2(256)
    );
    create or replace type test_obj as object(
         a number(19,0),
         b timestamp with time zone,
         c varchar2(256)
    );
    /
    create or replace type test_list as table of test_obj;
    /
    create or replace type ton_t as table of number;
    /
    create or replace package test_pkg
    as
         type test_rec is record (
              a number(19,0),
              b timestamp with time zone,
              c varchar2(256)
         );
         type test_tab is table of test_rec;
         type test_cur is ref cursor return test_rec;
         function TF(mycur test_cur)
         return test_list pipelined
         parallel_enable(partition mycur by hash(a));
    end;
    /
    create or replace package body test_pkg
    as
         function TF(mycur test_cur)
    return test_list pipelined
    parallel_enable(partition mycur by hash(a))
    is
              sid number;
              counter number(19,0) := 0;
              myrec test_rec;
              mytab test_tab;
              mytab2 test_list := test_list();
         begin
              select userenv('SID') into sid from dual;
              dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): enter');
              loop
                   fetch mycur into myRec;
                   exit when mycur%NOTFOUND;
                   mytab2.extend;
                   mytab2(mytab2.last) := test_obj(myRec.a, myRec.b, myRec.c);
              end loop;
              for i in mytab2.first..mytab2.last loop
                   -- attention: saves own SID in test_obj.a for indication to caller
                   --     how many sids have been involved
                   pipe row(test_obj(sid, mytab2(i).b, mytab2(i).c));
                   counter := counter + 1;
              end loop;
              dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): exit, piped #' || counter || ' records');
              return;
         end;
    end;
    /
    declare
         myList test_list := test_list();
         myList2 test_list := test_list();
         sids ton_t := ton_t();
    begin
         for i in 1..10000 loop
              myList.extend; myList(myList.last) := test_obj(i, sysdate, to_char(i+2));
         end loop;
         -- save into the real table
         insert into test_table select * from table(cast (myList as test_list));
         dbms_output.put_line(chr(10) || 'copy ''mylist'' to ''mylist2'' by streaming via table function...');
         select test_obj(a, b, c) bulk collect into myList2
         from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,10) */ * from table(cast (myList as test_list)) tab)));
         dbms_output.put_line('... saved #' || myList2.count || ' records');
         select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
         dbms_output.put_line('worker thread''s sid list:');
         for i in sids.first..sids.last loop
              dbms_output.put_line('sid #' || sids(i));
         end loop;
         dbms_output.put_line(chr(10) || 'copy physical ''test_table'' to ''mylist2'' by streaming via table function:');
         select test_obj(a, b, c) bulk collect into myList2
         from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,10) */ * from test_table tab)));
         dbms_output.put_line('... saved #' || myList2.count || ' records');
         select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
         dbms_output.put_line('worker thread''s sid list:');
         for i in sids.first..sids.last loop
              dbms_output.put_line('sid #' || sids(i));
         end loop;
    end;
    /
    -------------------- snap ---------------------------------------------
    Here's the output:
    -------------------- snip ---------------------------------------------
    copy 'mylist' to 'mylist2' by streaming via table function...
    test_pkg.TF( sid => '98' ): enter
    test_pkg.TF( sid => '98' ): exit, piped #10000 records
    ... saved #10000 records
    worker thread's sid list:
    sid #98 -- ONLY A SINGLE SID HERE!
    copy physical 'test_table' to 'mylist2' by streaming via table function:
    ... saved #10000 records
    worker thread's sid list:
    sid #128 -- A LIST OF SIDS HERE!
    sid #141
    sid #85
    sid #125
    sid #254
    sid #101
    sid #124
    sid #109
    sid #142
    sid #92
    PL/SQL procedure successfully completed.
    -------------------- snap ---------------------------------------------
    I posted this to the newsgroup comp.databases.oracle.server
    (summary: "10g: parallel pipelined table functions with cursor selecting from table(cast(SQL collection)) doesn't work"),
    but I didn't get a response.
    There I also wrote some background information about my application:
    -------------------- snip ---------------------------------------------
    My application has a 2-step/staged data selection.
    A 1st select for minimal context base data
    - mainly to evaluate which driving data records are due.
    And a 2nd select for all the "real" data to process a context
    (joining many more tables here, which I don't want to do for non-due records).
    So it does the stage #1 select first, then the stage #2 select - based on the stage #1 results - next.
    The first implementation of the application did the stage #1 select in the main session of the PL/SQL code.
    And for the stage #2 select, the "real work" was dispatched to multiple parallel table functions (in multiple worker sessions).
    That worked.
    However, there was a flaw:
    between records from the stage #1 selection and records from the stage #2 selection there is a 1:n relation (via a key / foreign key relation).
    That means: for one resulting record from the stage #1 selection, there are #x records from the stage #2 selection.
    This forced me to use "cluster curStage2 by (theKey)",
    because the worker sessions need to evaluate the overall status for a context of one record from stage #1 and #x records from stage #2
    (so each needs all #x records of stage #2 together).
    This then resulted in a delay in starting up the worker sessions (I didn't find a way to get rid of it).
    So I wanted to shift the invocation of the worker sessions to the stage #1 selection.
    Then I wouldn't need the "cluster curStage2 by (theKey)" anymore!
    But: I also need to do an update of the primary driving data!
    So the stage #1 select is a 'select ... for update ...'.
    And you can't use such a statement in a CURSOR for table functions (which I can understand).
    So I have to do my stage #1 selection in two steps:
    1. 'select for update' in the main session, collecting the result in a SQL collection.
    2. Pass the collected data to the parallel table functions.
    And for step 2 I recognized that it doesn't start up multiple parallel table function instances.
    As a workaround
    - if it's just not possible to start multiple parallel pipelined table functions dispatching from 'select * from TABLE(CAST(... as ...))' -
    I would need to select again from the base tables, driven by the SQL collection data.
    But before I do so, I wanted to verify that it's really not possible.
    Maybe I'm just missing a special Oracle hint or whatever else you can get "out of another box" :-)
    -------------------- snap ---------------------------------------------
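    (A possible workaround, sketched here for illustration only - 'stage1_gtt' is a made-up name: since selecting from a physical table does parallelize, the collected stage #1 data could be staged in a global temporary table and the parallel table function driven from that, instead of from TABLE(CAST(...)):)
    -------------------- snip ---------------------------------------------
    -- Hypothetical staging table for the stage #1 results:
    create global temporary table stage1_gtt (
         a number(19,0),
         b timestamp with time zone,
         c varchar2(256)
    ) on commit preserve rows;
    declare
         myList test_list := test_list();   -- filled by the 'select for update' step
    begin
         -- stage the collected records, then dispatch to the parallel table function
         insert into stage1_gtt select * from table(cast (myList as test_list));
         for rec in (select *
                     from table(test_pkg.TF(CURSOR(
                          select /*+ parallel(tab,10) */ * from stage1_gtt tab))))
         loop
              null;   -- process the piped rows here
         end loop;
    end;
    /
    -------------------- snap ---------------------------------------------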
    - many thanks!
    rgds,
    Frank


  • 10g: delay for collecting results from parallel pipelined table functions

    When parallel pipelined table functions are properly started and generate output records, there is a delay before the consuming main thread gathers these records.
    This delay is huge compared with the run time of the worker threads.
    For my application it goes like this:
    main thread timing (starting the workers and collecting their results):
    [10:50:33-*10:50:49*]:JOMA: create (master): 015.93 sec (#66356 records, #4165/sec)
    worker threads:
    [10:50:34-*10:50:39*]:JOMA: create (slave) : 005.24 sec (#2449 EDRs, #467/sec, #0 errored / #6430 EBTMs, #1227/sec, #0 errored) - bulk #1 / sid #816
    [10:50:34-*10:50:39*]:JOMA: create (slave) : 005.56 sec (#2543 EDRs, #457/sec, #0 errored / #6792 EBTMs, #1221/sec, #0 errored) - bulk #1 / sid #718
    [10:50:34-*10:50:39*]:JOMA: create (slave) : 005.69 sec (#2610 EDRs, #459/sec, #0 errored / #6950 EBTMs, #1221/sec, #0 errored) - bulk #1 / sid #614
    [10:50:34-*10:50:39*]:JOMA: create (slave) : 005.55 sec (#2548 EDRs, #459/sec, #0 errored / #6744 EBTMs, #1216/sec, #0 errored) - bulk #1 / sid #590
    [10:50:34-*10:50:39*]:JOMA: create (slave) : 005.33 sec (#2461 EDRs, #462/sec, #0 errored / #6504 EBTMs, #1220/sec, #0 errored) - bulk #1 / sid #508
    You can see the worker threads are all started at the same time and terminate at the same time: 10:50:34-10:50:*39*.
    But the main thread, which just invokes them and saves their results into a collection, did not finish until 10:50:*49*.
    Why does it need about 10 seconds more just to save the data?
    Here's a sample sqlplus script to demonstrate this:
    --------------------------- snip -------------------------------------------------------
    set serveroutput on;
    drop table perf_data;
    drop table test_table;
    drop table tmp_test_table;
    drop type ton_t;
    drop type test_list;
    drop type test_obj;
    create table perf_data (
         sid number,
         t1 timestamp with time zone,
         t2 timestamp with time zone,
         client varchar2(256)
    );
    create table test_table (
         a number(19,0),
         b timestamp with time zone,
         c varchar2(256)
    );
    create global temporary table tmp_test_table (
         a number(19,0),
         b timestamp with time zone,
         c varchar2(256)
    );
    create or replace type test_obj as object(
         a number(19,0),
         b timestamp with time zone,
         c varchar2(256)
    );
    /
    create or replace type test_list as table of test_obj;
    /
    create or replace type ton_t as table of number;
    /
    create or replace package test_pkg
    as
         type test_rec is record (
              a number(19,0),
              b timestamp with time zone,
              c varchar2(256)
         );
         type test_tab is table of test_rec;
         type test_cur is ref cursor return test_rec;
         function TZDeltaToMilliseconds(
              t1 in timestamp with time zone,
              t2 in timestamp with time zone)
         return pls_integer;
         function TF(mycur test_cur)
         return test_list pipelined
         parallel_enable(partition mycur by hash(a));
    end;
    /
    create or replace package body test_pkg
    as
         /*
          * Calculate timestamp with timezone difference
          * in milliseconds
          */
         function TZDeltaToMilliseconds(
              t1 in timestamp with time zone,
              t2 in timestamp with time zone)
         return pls_integer
         is
         begin
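               -- NB: only the hour/minute/second components are compared, so
               -- intervals spanning a day boundary are not handled correctly
               -- (acceptable here, since the measured runs last only seconds).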
              return     (extract(hour from t2) - extract(hour from t1)) * 3600 * 1000
              +     (extract(minute from t2) - extract(minute from t1)) * 60 * 1000
              +     (extract(second from t2) - extract(second from t1)) * 1000;
         end TZDeltaToMilliseconds;
         function TF(mycur test_cur)
    return test_list pipelined
    parallel_enable(partition mycur by hash(a))
    is
              pragma autonomous_transaction;
              sid number;
              counter number(19,0) := 0;
              myrec test_rec;
              mytab test_tab;
              mytab2 test_list := test_list();
              t1 timestamp with time zone;
              t2 timestamp with time zone;
         begin
              t1 := systimestamp;
              select userenv('SID') into sid from dual;
              dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): enter');
              loop
                   fetch mycur into myRec;
                   exit when mycur%NOTFOUND;
                   mytab2.extend;
                   mytab2(mytab2.last) := test_obj(myRec.a, myRec.b, myRec.c);
              end loop;
              for i in mytab2.first..mytab2.last loop
                   -- attention: saves own SID in test_obj.a for indication to caller
                   --     how many sids have been involved
                   pipe row(test_obj(sid, mytab2(i).b, mytab2(i).c));
                   pipe row(test_obj(sid, mytab2(i).b, mytab2(i).c)); -- duplicate
                   pipe row(test_obj(sid, mytab2(i).b, mytab2(i).c)); -- duplicate once again
                   counter := counter + 1;
              end loop;
              t2 := systimestamp;
              insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'slave');
              commit;
              dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): exit, piped #' || counter || ' records');
              return;
         end;
    end;
    /
    declare
         myList test_list := test_list();
         myList2 test_list := test_list();
         sids ton_t := ton_t();
         sid number;
         t1 timestamp with time zone;
         t2 timestamp with time zone;
         procedure LogPerfTable
         is
              type ton is table of number;
              type tot is table of timestamp with time zone;
              type clients_t is table of varchar2(256);
              sids ton;
              t1s tot;
              t2s tot;
              clients clients_t;
              deltaTime integer;
              btsPerSecond number(19,0);
              edrsPerSecond number(19,0);
         begin
              select sid, t1, t2, client bulk collect into sids, t1s, t2s, clients from perf_data order by client;
              if clients.count > 0 then
                   for i in clients.FIRST .. clients.LAST loop
                        deltaTime := test_pkg.TZDeltaToMilliseconds(t1s(i), t2s(i));
                        if deltaTime = 0 then deltaTime := 1; end if;
                        dbms_output.put_line(
                             '[' || to_char(t1s(i), 'hh:mi:ss') ||
                             '-' || to_char(t2s(i), 'hh:mi:ss') ||
                             ']:' ||
                             ' client ' || clients(i) || ' / sid #' || sids(i));
                   end loop;
              end if;
         end LogPerfTable;
    begin
         select userenv('SID') into sid from dual;
         for i in 1..200000 loop
              myList.extend; myList(myList.last) := test_obj(i, sysdate, to_char(i+2));
         end loop;
         -- save into the real table
         insert into test_table select * from table(cast (myList as test_list));
         -- save into the tmp table
         insert into tmp_test_table select * from table(cast (myList as test_list));
         dbms_output.put_line(chr(10) || '(1) copy ''mylist'' to ''mylist2'' by streaming via table function...');
         delete from perf_data;
         t1 := systimestamp;
         select /*+ first_rows */ test_obj(a, b, c) bulk collect into myList2
         from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,5) */ * from table(cast (myList as test_list)) tab)));
         t2 := systimestamp;
         insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'master');
         LogPerfTable;
         dbms_output.put_line('... saved #' || myList2.count || ' records');
         select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
         dbms_output.put_line(chr(10) || '(2) copy temporary ''tmp_test_table'' to ''mylist2'' by streaming via table function:');
         delete from perf_data;
         t1 := systimestamp;
         select /*+ first_rows */ test_obj(a, b, c) bulk collect into myList2
         from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,5) */ * from tmp_test_table tab)));
         t2 := systimestamp;
         insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'master');
         LogPerfTable;
         dbms_output.put_line('... saved #' || myList2.count || ' records');
         select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
         dbms_output.put_line(chr(10) || '(3) copy physical ''test_table'' to ''mylist2'' by streaming via table function:');
         delete from perf_data;
         t1 := systimestamp;
         select /*+ first_rows */ test_obj(a, b, c) bulk collect into myList2
         from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,5) */ * from test_table tab)));
         t2 := systimestamp;
         insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'master');
         LogPerfTable;
         dbms_output.put_line('... saved #' || myList2.count || ' records');
         select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
    end;
    /
    --------------------------- snap -------------------------------------------------------
    best regards,
    Frank

    Hello
    I think the delay you are seeing is down to choosing the partitioning method as HASH. When you specify anything other than ANY, an additional buffer sort is included in the execution plan...
    create or replace package test_pkg
    as
         type test_rec is record (
              a number(19,0),
              b timestamp with time zone,
              c varchar2(256)
         );
         type test_tab is table of test_rec;
         type test_cur is ref cursor return test_rec;
         function TZDeltaToMilliseconds(
              t1 in timestamp with time zone,
              t2 in timestamp with time zone)
         return pls_integer;
         function TF(mycur test_cur)
         return test_list pipelined
         parallel_enable(partition mycur by hash(a));
         function TF_Any(mycur test_cur)
         return test_list pipelined
         parallel_enable(partition mycur by ANY);
    end;
    /
    create or replace package body test_pkg
    as
         /*
          * Calculate timestamp with timezone difference
          * in milliseconds
          */
         function TZDeltaToMilliseconds(
              t1 in timestamp with time zone,
              t2 in timestamp with time zone)
         return pls_integer
         is
         begin
              return (extract(hour from t2) - extract(hour from t1)) * 3600 * 1000
                   + (extract(minute from t2) - extract(minute from t1)) * 60 * 1000
                   + (extract(second from t2) - extract(second from t1)) * 1000;
         end TZDeltaToMilliseconds;
      function TF(mycur test_cur)
      return test_list pipelined
      parallel_enable(partition mycur by hash(a))
      is
      pragma autonomous_transaction;
        sid number;
        counter number(19,0) := 0;
        myrec test_rec;
        t1 timestamp with time zone;
        t2 timestamp with time zone;
      begin
        t1 := systimestamp;
        select userenv('SID') into sid from dual;
        dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): enter');
      loop
        fetch mycur into myRec;
        exit when mycur%NOTFOUND;
        -- attention: saves own SID in test_obj.a for indication to caller
        -- how many sids have been involved
        pipe row(test_obj(sid, myRec.b, myRec.c));
        pipe row(test_obj(sid, myRec.b, myRec.c)); -- duplicate
        pipe row(test_obj(sid, myRec.b, myRec.c)); -- duplicate once again
        counter := counter + 1;
      end loop;
      t2 := systimestamp;
      insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'slave');
      commit;
      dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): exit, piped #' || counter || ' records');
      return;
      end;
      function TF_any(mycur test_cur)
      return test_list pipelined
      parallel_enable(partition mycur by ANY)
      is
      pragma autonomous_transaction;
        sid number;
        counter number(19,0) := 0;
        myrec test_rec;
        t1 timestamp with time zone;
        t2 timestamp with time zone;
      begin
        t1 := systimestamp;
        select userenv('SID') into sid from dual;
        dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): enter');
        loop
          fetch mycur into myRec;
          exit when mycur%NOTFOUND;
          -- attention: saves own SID in test_obj.a for indication to caller
          -- how many sids have been involved
          pipe row(test_obj(sid, myRec.b, myRec.c));
          pipe row(test_obj(sid, myRec.b, myRec.c)); -- duplicate
          pipe row(test_obj(sid, myRec.b, myRec.c)); -- duplicate once again
          counter := counter + 1;
        end loop;
        t2 := systimestamp;
        insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'slave');
        commit;
        dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): exit, piped #' || counter || ' records');
        return;
      end;
    end;
    /
    explain plan for
    select /*+ first_rows */ test_obj(a, b, c)
    from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,5) */ * from test_table tab)));
    select * from table(dbms_xplan.display);
    Plan hash value: 1037943675
    ---------------------------------------------------------------------------------------------------------------------------------
    | Id  | Operation                             | Name       | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    ---------------------------------------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT                      |            |  8168 |  3972K|    20   (0)| 00:00:01 |        |      |            |
    |   1 |  PX COORDINATOR                       |            |       |       |            |          |        |      |            |
    |   2 |   PX SEND QC (RANDOM)                 | :TQ10001   |  8168 |  3972K|    20   (0)| 00:00:01 |  Q1,01 | P->S | QC (RAND)  |
    |   3 |    BUFFER SORT                        |            |  8168 |  3972K|            |          |  Q1,01 | PCWP |            |
    |   4 |     VIEW                              |            |  8168 |  3972K|    20   (0)| 00:00:01 |  Q1,01 | PCWP |            |
    |   5 |      COLLECTION ITERATOR PICKLER FETCH| TF         |       |       |            |          |  Q1,01 | PCWP |            |
    |   6 |       PX RECEIVE                      |            |   931K|   140M|   136   (2)| 00:00:02 |  Q1,01 | PCWP |            |
    |   7 |        PX SEND HASH                   | :TQ10000   |   931K|   140M|   136   (2)| 00:00:02 |  Q1,00 | P->P | HASH       |
    |   8 |         PX BLOCK ITERATOR             |            |   931K|   140M|   136   (2)| 00:00:02 |  Q1,00 | PCWC |            |
    |   9 |          TABLE ACCESS FULL            | TEST_TABLE |   931K|   140M|   136   (2)| 00:00:02 |  Q1,00 | PCWP |            |
    ---------------------------------------------------------------------------------------------------------------------------------
    Note
       - dynamic sampling used for this statement
    explain plan for
    select /*+ first_rows */ test_obj(a, b, c)
    from table(test_pkg.TF_Any(CURSOR(select /*+ parallel(tab,5) */ * from test_table tab)));
    select * from table(dbms_xplan.display);
    Plan hash value: 4097140875
    ----------------------------------------------------------------------------------------------------------------------------------
    | Id  | Operation                            | Name       | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    ----------------------------------------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT                     |            |  8168 |  3972K|    20   (0)| 00:00:01 |        |      |            |
    |   1 |  PX COORDINATOR                      |            |       |       |            |          |        |      |            |
    |   2 |   PX SEND QC (RANDOM)                | :TQ10000   |  8168 |  3972K|    20   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
    |   3 |    VIEW                              |            |  8168 |  3972K|    20   (0)| 00:00:01 |  Q1,00 | PCWP |            |
    |   4 |     COLLECTION ITERATOR PICKLER FETCH| TF_ANY     |       |       |            |          |  Q1,00 | PCWP |            |
    |   5 |      PX BLOCK ITERATOR               |            |   931K|   140M|   136   (2)| 00:00:02 |  Q1,00 | PCWC |            |
    |   6 |       TABLE ACCESS FULL              | TEST_TABLE |   931K|   140M|   136   (2)| 00:00:02 |  Q1,00 | PCWP |            |
    ----------------------------------------------------------------------------------------------------------------------------------
    Note
       - dynamic sampling used for this statement

    I posted about it here a few years ago, and I more recently posted a question on AskTom. Unfortunately Tom was not able to find a technical reason for it to be there, so I'm still a little in the dark as to why it is needed. The original question I posted is here:
    Pipelined function partition by hash has extra sort#
    I ran your tests with HASH vs ANY and the results are in line with the observations above....
    declare
         myList test_list := test_list();
         myList2 test_list := test_list();
         sids ton_t := ton_t();
         sid number;
         t1 timestamp with time zone;
         t2 timestamp with time zone;
         procedure LogPerfTable
         is
              type ton is table of number;
              type tot is table of timestamp with time zone;
              type clients_t is table of varchar2(256);
              sids ton;
              t1s tot;
              t2s tot;
              clients clients_t;
              deltaTime integer;
              btsPerSecond number(19,0);
              edrsPerSecond number(19,0);
         begin
              select sid, t1, t2, client bulk collect into sids, t1s, t2s, clients from perf_data order by client;
              if clients.count > 0 then
                   for i in clients.FIRST .. clients.LAST loop
                        deltaTime := test_pkg.TZDeltaToMilliseconds(t1s(i), t2s(i));
                        if deltaTime = 0 then deltaTime := 1; end if;
                        dbms_output.put_line(
                             '[' || to_char(t1s(i), 'hh:mi:ss') ||
                             '-' || to_char(t2s(i), 'hh:mi:ss') ||
                             ']:' ||
                             ' client ' || clients(i) || ' / sid #' || sids(i));
                   end loop;
              end if;
         end LogPerfTable;
    begin
         select userenv('SID') into sid from dual;
         for i in 1..200000 loop
              myList.extend; myList(myList.last) := test_obj(i, sysdate, to_char(i+2));
         end loop;
         -- save into the real table
         insert into test_table select * from table(cast (myList as test_list));
         -- save into the tmp table
         insert into tmp_test_table select * from table(cast (myList as test_list));
         dbms_output.put_line(chr(10) || '(1) copy ''mylist'' to ''mylist2'' by streaming via table function...');
         delete from perf_data;
         t1 := systimestamp;
         select /*+ first_rows */ test_obj(a, b, c) bulk collect into myList2
         from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,5) */ * from table(cast (myList as test_list)) tab)));
         t2 := systimestamp;
         insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'master');
         LogPerfTable;
         dbms_output.put_line('... saved #' || myList2.count || ' records');
         select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
         dbms_output.put_line(chr(10) || '(2) copy temporary ''tmp_test_table'' to ''mylist2'' by streaming via table function:');
         delete from perf_data;
         t1 := systimestamp;
         select /*+ first_rows */ test_obj(a, b, c) bulk collect into myList2
         from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,5) */ * from tmp_test_table tab)));
         t2 := systimestamp;
         insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'master');
         LogPerfTable;
         dbms_output.put_line('... saved #' || myList2.count || ' records');
         select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
         dbms_output.put_line(chr(10) || '(3) copy physical ''test_table'' to ''mylist2'' by streaming via table function:');
         delete from perf_data;
         t1 := systimestamp;
         select /*+ first_rows */ test_obj(a, b, c) bulk collect into myList2
         from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,5) */ * from test_table tab)));
         t2 := systimestamp;
         insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'master');
         LogPerfTable;
         dbms_output.put_line('... saved #' || myList2.count || ' records');
         select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
         dbms_output.put_line(chr(10) || '(4) copy temporary ''tmp_test_table'' to ''mylist2'' by streaming via table function ANY:');
         delete from perf_data;
         t1 := systimestamp;
         select /*+ first_rows */ test_obj(a, b, c) bulk collect into myList2
         from table(test_pkg.TF_any(CURSOR(select /*+ parallel(tab,5) */ * from tmp_test_table tab)));
         t2 := systimestamp;
         insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'master');
         LogPerfTable;
         dbms_output.put_line('... saved #' || myList2.count || ' records');
         select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
         dbms_output.put_line(chr(10) || '(5) copy physical ''test_table'' to ''mylist2'' by streaming via table function using ANY:');
         delete from perf_data;
         t1 := systimestamp;
         select /*+ first_rows */ test_obj(a, b, c) bulk collect into myList2
         from table(test_pkg.TF_any(CURSOR(select /*+ parallel(tab,5) */ * from test_table tab)));
         t2 := systimestamp;
         insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'master');
         LogPerfTable;
         dbms_output.put_line('... saved #' || myList2.count || ' records');
         select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
    end;
    /
    (1) copy 'mylist' to 'mylist2' by streaming via table function...
    test_pkg.TF( sid => '918' ): enter
    test_pkg.TF( sid => '918' ): exit, piped #200000 records
    [01:40:19-01:40:29]: client master / sid #918
    [01:40:19-01:40:29]: client slave / sid #918
    ... saved #600000 records
    (2) copy temporary 'tmp_test_table' to 'mylist2' by streaming via table function:
    [01:40:31-01:40:36]: client master / sid #918
    [01:40:31-01:40:32]: client slave / sid #659
    [01:40:31-01:40:32]: client slave / sid #880
    [01:40:31-01:40:32]: client slave / sid #1045
    [01:40:31-01:40:32]: client slave / sid #963
    [01:40:31-01:40:32]: client slave / sid #712
    ... saved #600000 records
    (3) copy physical 'test_table' to 'mylist2' by streaming via table function:
    [01:40:37-01:41:05]: client master / sid #918
    [01:40:37-01:40:42]: client slave / sid #738
    [01:40:37-01:40:42]: client slave / sid #568
    [01:40:37-01:40:42]: client slave / sid #618
    [01:40:37-01:40:42]: client slave / sid #659
    [01:40:37-01:40:42]: client slave / sid #963
    ... saved #3000000 records
    (4) copy temporary 'tmp_test_table' to 'mylist2' by streaming via table function ANY:
    [01:41:12-01:41:16]: client master / sid #918
    [01:41:12-01:41:16]: client slave / sid #712
    [01:41:12-01:41:16]: client slave / sid #1045
    [01:41:12-01:41:16]: client slave / sid #681
    [01:41:12-01:41:16]: client slave / sid #754
    [01:41:12-01:41:16]: client slave / sid #880
    ... saved #600000 records
    (5) copy physical 'test_table' to 'mylist2' by streaming via table function using ANY:
    [01:41:18-01:41:38]: client master / sid #918
    [01:41:18-01:41:38]: client slave / sid #681
    [01:41:18-01:41:38]: client slave / sid #712
    [01:41:18-01:41:38]: client slave / sid #754
    [01:41:18-01:41:37]: client slave / sid #880
    [01:41:18-01:41:38]: client slave / sid #1045
    ... saved #3000000 records

    HTH
    David

  • Parallel pipelined table function, autonomous_transaction to global tmp tab

    Hi,
    I am trying to speed up my parallel pipelined table function by switching from a PL/SQL collection to a global temporary table inside it.
    This requires PRAGMA AUTONOMOUS_TRANSACTION (and a commit), because inserting into a global temporary table (DML)
    from within a select - which is how the table function is invoked - is not allowed otherwise.
    As a consequence of the commit, the global temporary table next requires ON COMMIT PRESERVE ROWS.
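    (For reference, a minimal illustration of that restriction - 'demo_dml' is a made-up name, and TestTmpTable is the temporary table created in the script below:)
    create or replace function demo_dml return number
    is
    begin
         insert into TestTmpTable (value) values (1);   -- DML inside a function
         return 1;
    end;
    /
    select demo_dml from dual;
    -- => ORA-14551: cannot perform a DML operation inside a query
    -- PRAGMA AUTONOMOUS_TRANSACTION avoids ORA-14551, but the autonomous
    -- transaction must commit before returning - and a commit purges an
    -- ON COMMIT DELETE ROWS temporary table, hence ON COMMIT PRESERVE ROWS.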
    Now:
    the inserts into the global temporary table are done - as indicated by sql%rowcount.
    But a select afterwards doesn't show any records anymore.
    Here is a program to demonstrate it:
    set serveroutput on;
    drop type TestTableOfNumber_t;
    create or replace type TestTableOfNumber_t is table of number;
    /
    drop type TestStatusList;
    drop type TestStatusObj;
    create or replace type TestStatusObj as object(
         sid number,
         ctr1 number,
         ctr2 number,
         ctr3 number
    );
    /
    create or replace type TestStatusList is table of TestStatusObj;
    /
    drop table TestTmpTable;
    create global temporary table TestTmpTable (
         value     number
    ) on commit preserve rows;
    create or replace package test_pkg
    as
         type TestStatusRec is record (
              sid number,
              ctr1 number,
              ctr2 number,
              ctr3 number
         );
         type TestStatusTab is table of TestStatusRec;
         function FillTmpTable(id in varchar2)
         return TestStatusRec;
         FUNCTION ptf (p_cursor  IN  sys_refcursor)
         RETURN TestStatusList PIPELINED
         PARALLEL_ENABLE(PARTITION p_cursor BY any);
    end;
    create or replace package body test_pkg
    as
         function FillTmpTable(id in varchar2)
         return TestStatusRec
         is
              PRAGMA AUTONOMOUS_TRANSACTION;
              result TestStatusRec;
              sid number;
              type ton is table of number;
              tids TestTableOfNumber_t := TestTableOfNumber_t();
              records number := 0;
         begin
              select userenv('SID') into sid from dual;
              result.sid := sid;
              delete from TestTmpTable;
              for i in 1..100 loop
                   tids.extend;
                   tids(tids.last) := i;
              end loop;
              forall i in 1..tids.count
                   insert into TestTmpTable (value) values (tids(i));
              -- get number of records inserted
              records := sql%rowcount;
              result.ctr1 := records;
              -- retrieve again before commit
              select count(*) into records from TestTmpTable;
              result.ctr2 := records;
              commit;
              -- retrieve again after commit
              select count(*) into records from TestTmpTable;
              result.ctr3 := records;
              return result;
         end;
           FUNCTION ptf (p_cursor  IN  sys_refcursor)
         RETURN TestStatusList PIPELINED
         PARALLEL_ENABLE(PARTITION p_cursor BY any)
         IS
              rec test_pkg.TestStatusRec;
              value number;
              sid number;
              ctr integer := 0;
         BEGIN
              select userenv('SID') into sid from dual;
              rec := FillTmpTable('IN PTF');
              LOOP
                   FETCH p_cursor into value;
                   EXIT WHEN p_cursor%NOTFOUND;
                   ctr := ctr + 1;     
              END LOOP;
              -- as a result i am only interested in the results of FillTmpTable():
              PIPE ROW (TestStatusObj(rec.sid, rec.ctr1, rec.ctr2, rec.ctr3));
                  RETURN;
         END;
    end;
    declare
         tons TestTableOfNumber_t;
         counts TestTableOfNumber_t;
         status test_pkg.TestStatusRec;
         statusList test_pkg.TestStatusTab;
    begin
         status := test_pkg.FillTmpTable('MAIN');
         dbms_output.put_line('main thread:'
              || ' sid #' || status.sid
              || ' / #' || status.ctr1 || ' inserted '
              || ' / #' || status.ctr2 || ' before commit'
              || ' / #' || status.ctr3 || ' after commit');     
         select value bulk collect into tons from TestTmpTable;
         select * bulk collect into statusList from TABLE(test_pkg.ptf(CURSOR(select /*+ parallel(tab,2) */ value from TestTmpTable tab)));
         for i in 1..StatusList.count loop
              dbms_output.put_line('worker thread #' || i  || ':'
              || ' sid #' || statusList(i).sid
              || ' / #' || statusList(i).ctr1 || ' inserted '
              || ' / #' || statusList(i).ctr2 || ' before commit'
              || ' / #' || statusList(i).ctr3 || ' after commit');
         end loop;
    end;
    /
    The output is:
    main thread: sid #881 / #100 inserted  / #100 before commit / #100 after commit
    worker thread #1: sid #421 / #100 inserted  / #0 before commit / #0 after commit
    worker thread #2: sid #321 / #100 inserted / #0 before commit / #0 after commit
    The 1st line is for the main thread invoking FillTmpTable().
    The next 2 lines are for the worker threads of the parallel pipelined table function invoking the same FillTmpTable().
    For the main thread everything is as expected.
    But for the worker threads the logs for before commit and after commit both give #0 for the number of available records in the global temporary table.
    However, all of them indicate #100 for the SQL insert.
    regards,
    Frank
    Edited by: user8704911 on Jul 7, 2011 10:13 AM
    Edited by: user8704911 on Jul 7, 2011 10:20 AM
    Edited by: user8704911 on Jul 7, 2011 10:27 AM

    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    PL/SQL Release 11.1.0.7.0 - Production
    CORE    11.1.0.7.0      Production
    TNS for Linux: Version 11.1.0.7.0 - Production
    NLSRTL Version 11.1.0.7.0 - Production
    SQL> set serveroutput on;
    SQL> drop type TestTableOfNumber;
    drop type TestTableOfNumber
    ERROR at line 1:
    ORA-04043: object TESTTABLEOFNUMBER does not exist
    SQL> /
    drop type TestTableOfNumber
    ERROR at line 1:
    ORA-04043: object TESTTABLEOFNUMBER does not exist
    SQL> 
    SQL> create or replace type TestTableOfNumber_t is table of number;
      2  /
    Type created.
    SQL> 
    SQL> drop type TestStatusObj;
    drop type TestStatusObj
    ERROR at line 1:
    ORA-04043: object TESTSTATUSOBJ does not exist
    SQL> /
    drop type TestStatusObj
    ERROR at line 1:
    ORA-04043: object TESTSTATUSOBJ does not exist
    SQL> 
    SQL> create or replace type TestStatusObj as object(
      2   sid number,
      3   ctr1 number,
      4   ctr2 number,
      5   ctr3 number
      6  );
      7  /
    Type created.
    SQL> 
    SQL> drop type TestStatusList;
    drop type TestStatusList
    ERROR at line 1:
    ORA-04043: object TESTSTATUSLIST does not exist
    SQL> /
    drop type TestStatusList
    ERROR at line 1:
    ORA-04043: object TESTSTATUSLIST does not exist
    SQL> 
    SQL> create or replace type TestStatusList is table of TestStatusObj;
      2  /
    Type created.
    SQL> 
    SQL> drop table TestTmpTable;
    drop table TestTmpTable
    ERROR at line 1:
    ORA-00942: table or view does not exist
    SQL> /
    drop table TestTmpTable
    ERROR at line 1:
    ORA-00942: table or view does not exist
    SQL> 
    SQL> create global temporary table TestTmpTable (
      2   value number
      3  ) on commit preserve rows;
    Table created.
    SQL> /
    create global temporary table TestTmpTable (
    ERROR at line 1:
    ORA-00955: name is already used by an existing object
    SQL> 
    SQL> create or replace package test_pkg
      2  as
      3  
      4   type TestStatusRec is record (
      5    sid number,
      6    ctr1 number,
      7    ctr2 number,
      8    ctr3 number
      9   );
    10  
    11   type TestStatusTab is table of TestStatusRec;
    12  
    13   function FillTmpTable(id in varchar2)
    14   return TestStatusRec;
    15  
    16   FUNCTION ptf (p_cursor  IN  sys_refcursor)
    17   RETURN TestStatusList PIPELINED
    18   PARALLEL_ENABLE(PARTITION p_cursor BY any);
    19  
    20  end;
    21  /
    Package created.
    SQL> 
    SQL> create or replace package body test_pkg
      2  as
      3  
      4   function FillTmpTable(id in varchar2)
      5   return TestStatusRec
      6   is
      7    PRAGMA AUTONOMOUS_TRANSACTION;
      8  
      9    result TestStatusRec;
    10  
    11    sid number;
    12  
    13    type ton is table of number;
    14    tids TestTableOfNumber_t := TestTableOfNumber_t();
    15  
    16    records number := 0;
    17   begin
    18    select userenv('SID') into sid from dual;
    19    result.sid := sid;
    20  
    21    delete from TestTmpTable;
    22  
    23    for i in 1..100 loop
    24     tids.extend;
    25     tids(tids.last) := i;
    26    end loop;
    27  
    28    forall i in 1..tids.count
    29     insert into TestTmpTable (value) values (tids(i));
    30  
    31    -- get number of records inserted
    32    records := sql%rowcount;
    33    result.ctr1 := records;
    34  
    35    -- retrieve again before commit
    36    select count(*) into records from TestTmpTable;
    37    result.ctr2 := records;
    38  
    39    commit;
    40  
    41    -- retrieve again after commit
    42    select count(*) into records from TestTmpTable;
    43    result.ctr3 := records;
    44  
    45    return result;
    46   end;
    47  
    48     FUNCTION ptf (p_cursor  IN  sys_refcursor)
    49   RETURN TestStatusList PIPELINED
    50   PARALLEL_ENABLE(PARTITION p_cursor BY any)
    51   IS
    52    rec test_pkg.TestStatusRec;
    53    value number;
    54    sid number;
    55    ctr integer := 0;
    56   BEGIN
    57    select userenv('SID') into sid from dual;
    58    rec := FillTmpTable('IN PTF');
    59    LOOP
    60     FETCH p_cursor into value;
    61     EXIT WHEN p_cursor%NOTFOUND;
    62     ctr := ctr + 1;
    63    END LOOP;
    64  
    65    -- as a result i am only interested in the results of FillTmpTable():
    66    PIPE ROW (TestStatusObj(rec.sid, rec.ctr1, rec.ctr2, rec.ctr3));
    67  
    68        RETURN;
    69   END;
    70  end;
    71  /
    Package body created.
    SQL> 
    SQL> declare
      2   tons TestTableOfNumber_t;
      3   counts TestTableOfNumber_t;
      4   status test_pkg.TestStatusRec;
      5   statusList test_pkg.TestStatusTab;
      6  begin
      7   status := test_pkg.FillTmpTable('MAIN');
      8   dbms_output.put_line('main thread:'
      9    || ' sid #' || status.sid
    10    || ' / #' || status.ctr1 || ' inserted '
    11    || ' / #' || status.ctr2 || ' before commit'
    12    || ' / #' || status.ctr3 || ' after commit');
    13  
    14   select value bulk collect into tons from TestTmpTable;
    15  
    16   select * bulk collect into statusList from TABLE(test_pkg.ptf(CURSOR(select /*+ parallel(tab,2
    ) */ value from TestTmpTable tab)));
    17  
    18   for i in 1..StatusList.count loop
    19    dbms_output.put_line('worker thread #' || i  || ':'
    20    || ' sid #' || statusList(i).sid
    21    || ' / #' || statusList(i).ctr1 || ' inserted '
    22    || ' / #' || statusList(i).ctr2 || ' before commit'
    23    || ' / #' || statusList(i).ctr3 || ' after commit');
    24   end loop;
    25  
    26  end;
    27  /
    main thread: sid #1023 / #100 inserted  / #100 before commit / #100 after commit
    worker thread #1: sid #1045 / #100 inserted  / #100 before commit / #100 after
    commit
    worker thread #2: sid #1019 / #100 inserted  / #100 before commit / #100 after
    commit
    PL/SQL procedure successfully completed.
    SQL>
    I am getting a different result.
    Regards
    Raj

  • Build a table based on XML data set with Spry

    Hi there,
    I'm new to Spry technology, so please forgive any basic questions.
    I'm trying to fill content in a table based on XML data set values, but nothing is shown :-(
    Here is my code... any suggestions? Please tell me where I'm wrong.
    Thank you in advance
    <script src="SpryAssets/xpath.js" type="text/javascript"></script>
    <script src="SpryAssets/SpryData.js" type="text/javascript"></script>
    <script type="text/javascript">
    var uscite = new Spry.Data.XMLDataSet("data/Calendario 2011.xml", "csv_data_records/record", {sortOnLoad: "Date", sortOrderOnLoad: "ascending"});
    uscite.setColumnType("Date", "date");
    uscite.setColumnType("km", "number");
    </script>
    <div class="RankContainer" id="UsciteDiv" spry:region="uscite" >
              <table width="100%" border="0" class="RankTable">
                <tr>
                  <th width="10%" scope="col" spry:sort="Date">Data</th>
                  <th width="20%" scope="col">Destinazione</th>
                  <th width="5%" scope="col">KM</th>
                  <th width="35%" scope="col">Percorso</th>
                  <th width="30%" scope="col">Breve</th>
                 <!-- <th width="15%" scope="col">Mappa</th>-->
                </tr>
                <tr>
                  <script type="text/javascript">
                    var rows = uscite.getData();
                    for (var i = 0; i < rows.length; i++)
                      if (rows[i]["Mappa"].startsWith("/"))
                        rowContent = "<td> si </td>";
                      else
                        rowContent = "<td> no </td>";
                    document.write("<td>{Date}</td>");
                    document.write("<td>"+rowContent+"</td>");
                    document.write("<td>{km}</td>");
                    document.write("<td>{Percorso}</td>");
                    document.write("<td>{Breve}</td>");
                  </script>
                </tr>
              </table>
           </div>

    Sure, this is how it should work (except that no anchor tag should be present for Destinazione, whereas Mappa has no real value in it):
    http://www.gsc-borsano.it/_Calendario%202011.html
    and this is the non working page
    http://www.gsc-borsano.it/_v2Calendario%202011.html
    Thanks

  • Excel table to update spry data set (?)

    Hello again!
    In my last post I asked for a scrolling text script. I have decided to use a Spry data set to update shipping info - the problem I am now running into is that I need my coworker(s) to be able to update this too. I of course understand rows and cells in HTML etc., but they don't. My goal is to have some way for them to update an Excel file, click export or whatever, and when the webpage I have created refreshes, the new data is there. I have tried to experiment with a macro in Excel that exports to an XML file but, man... yikes! I need help. Can anyone point me in the right direction? I have no problem updating this info for them, of course, but if I'm gone or something, I need an easy way for my coworkers to update this from Excel or the like.
    Thanks
    Rob

    Thanks V.
    I'm not sure that is going to work... I need to have multiple lines in one row -
    for example, in the 3rd line there - 122 and 123.
    I tried a sample CSV file too and it said it couldn't retrieve spry:repeat - or something.
    Thanks, I'll keep trying with the CSV file stuff, but so far it's not working.
    R

  • Joining three tables to get specific data set

    table1
    id
    seq
    dat
    table2
    id
    seq
    empid
    taxid
    Table3
    empid
    taxid
    I want to find records that have the same id, seq, and empid but a different taxid, common to all three of the tables.
    Please help me.

    Hi,
    There are a lot of different things you could mean.
    Here's how to do one of them:
    SELECT    t2.id, t2.seq, t2.empid
    FROM      table2 t2
    JOIN      table1 t1 ON  t2.id    = t1.id
                        AND t2.seq   = t1.seq
    JOIN      table3 t3 ON  t2.empid = t3.empid
                        AND t2.taxid = t3.taxid
    GROUP BY  t2.id, t2.seq, t2.empid
    HAVING    COUNT (DISTINCT t2.taxid) > 1
    ;
    It would help if you posted some sample data (CREATE TABLE and INSERT statements) from all 3 tables, and the results you want from that data.
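    For instance, hypothetical sample data matching the description above might look like this (table and column names are from the post; the values are made up):
    create table table1 (id number, seq number, dat date);
    create table table2 (id number, seq number, empid number, taxid number);
    create table table3 (empid number, taxid number);
    insert into table1 values (1, 1, sysdate);
    insert into table2 values (1, 1, 100, 10);
    insert into table2 values (1, 1, 100, 20);  -- same id/seq/empid, different taxid
    insert into table3 values (100, 10);
    insert into table3 values (100, 20);
    commit;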

  • Is a database table required for temporary interfaces with flat file data set source ?

    Folks,  this is the situation I have in ODI 11.1.1.7
    I have a temporary interface (yellow), called MJ_TEMP_INT, that pulls data from TWO data sets in the source into a temporary target (TEMP_TARG). The catch is that one data set pulls from a table whereas the other data set pulls from a flat file. A union is done on the data sets.
    I then create another interface, called MJ_INT, that uses the MJ_TEMP_INT as a source and the target is a real db. table called "REAL_TARGET"
    Two questions:
    When I execute my second interface (MJ_INT), I get a message "ORA-00942: table or view does not exist" because it is looking for a real db table TEMP_TARG. Why must I have one? Is it because I am pulling from a flat file?
    On my second interface (MJ_INT), when I look at the property sheet of my source interface MJ_TEMP_INT (yellow), the checkbox next to "Use temporary interface as Derived table" is DISABLED. Why? Is it also because my temporary interface is pulling from a flat file?
    I have attached a file that shows a screen shot of my ODI studio.
    By the way, IF my temporary interface source has only one data set, pulling from a db table into a temporary target table (say MJ_TEMP2_TARG), and I then use this temporary interface as a source for another real db target table (REAL2_TARGET), THEN everything works. ODI does not require me to have a real db table MJ_TEMP2_TARG, the checkbox for "Use temporary interface as Derived table" is NOT DISABLED, and my REAL2_TARGET table gets populated.
    Thank you in advance.
    M. Jamal.

    Thanks SH. I thought so. 
    I understand the reason to materialize the file in a staging area, but that almost defeats the purpose of having a temporary interface in this case if we have to save the data in a permanent db table first. I assume the db table sticks around and is not automatically dropped once the interface execution ends. If the db table sticks around, then I also must truncate it first before executing the temporary interface each time. Right?

  • Parallel not working in pipelined table function?

    I've found this excellent article titled 'Oracle fast parallel data unload into ASCII file(s)' in this blog: http://jiri.wordpress.com/2009/03/18/oracle-fast-parallel-data-unload-into-ascii-files/
    I have compiled the code and created the objects and the directory in my DB.
    But when I execute:
    SELECT *
    FROM TABLE(
       DATA_UNLOAD(
          CURSOR(
             SELECT /*+ PARALLEL(A, 2, 1) */
                    TABLE_NAME || '|' ||
                    COLUMN_NAME || '|' ||
                    DATA_TYPE
             FROM MYTABLE A
          ),
          'SAMPLE_SPOOL.TXT',
          'DIR_USERS_JIRI',
          'Y',
          'Y' ) );
    It is supposed to return 2 rows (because of parallel execution), but it just returns 1.
    Do I have to do something special in order to make the parallel pipelined function work?
    Edited by: igorcb123 on 01-02-2011 01:58 PM
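    As a side note, one quick way to check from the same session whether the last query was actually parallelised (a sketch; requires access to the V$ views):
    select statistic, last_query, session_total
    from   v$pq_sesstat
    where  statistic = 'Queries Parallelized';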

    IF and ELSE are wrongly used in the function:
    FUNCTION F_PS_HIGH_SCHOOL (OLD_HS CHAR)
       RETURN CHAR
    IS
       NEW_HIGH_SCHOOL CHAR(11);
    BEGIN
       IF OLD_HS = ' ' THEN
          NEW_HIGH_SCHOOL := ' ';
       ELSE  -- Incorrect usage
          SELECT MC_AD_HS_NAME
            INTO NEW_HIGH_SCHOOL
            FROM XLAT_HIGH_SCHOOL_CODES_2
           WHERE SIS_HS_CODE = OLD_HS;
          RETURN NEW_HIGH_SCHOOL;
       END IF;
       RETURN NEW_HIGH_SCHOOL;
    END F_PS_HIGH_SCHOOL;
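    Presumably the intended structure is a single RETURN after the IF, something like this sketch:
    FUNCTION F_PS_HIGH_SCHOOL (OLD_HS CHAR)
       RETURN CHAR
    IS
       NEW_HIGH_SCHOOL CHAR(11);
    BEGIN
       IF OLD_HS = ' ' THEN
          NEW_HIGH_SCHOOL := ' ';
       ELSE
          SELECT MC_AD_HS_NAME
            INTO NEW_HIGH_SCHOOL
            FROM XLAT_HIGH_SCHOOL_CODES_2
           WHERE SIS_HS_CODE = OLD_HS;
       END IF;
       RETURN NEW_HIGH_SCHOOL;  -- one exit point, reached in both branches
    END F_PS_HIGH_SCHOOL;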

  • Distributed queries+pipelined table function

    HI friends,
    Can I get better performance for distributed queries if I use a pipelined table function? I have got my data distributed across three different databases.
    thanx
    somy

    You will need to grant EXECUTE access on the pipelined table function to whatever users want it. When other users call this function, they may need to prefix the schema owner (i.e. <<owner>>.getValue('001') ) unless you've set up the appropriate synonym.
    What version of SQL*Plus do you have on the NT machine?
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC
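    In concrete terms, the grant and synonym Justin describes might look like this (owner_schema and app_user are placeholder names):
    grant execute on owner_schema.getValue to app_user;
    create public synonym getValue for owner_schema.getValue;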

  • Pipeline Table Function returning a fraction of data

    My current project involves migrating an Oracle database to a new structure based on the new client application requirements. I would like to use pipelined table functions as it seems as though that would provide the best performance.
    The first table has about 65 fields, about 75% of which require some type of recoding for the new app. I have written a function for each transformation and have all of these functions stored in a package. If I do:
    create table new_table as select
    pkg_name.function1(old_field1),
    pkg_name.function2(old_field2),
    pkg_name.function3(old_field3),
    ...
    It runs without any errors but takes about 3 1/2 hours. There are a little more than 10 million rows in the table.
    I wrote a function that is passed the old table as a cursor, runs all the functions for the transformations and then pipes the new row back to the insert statement that called the function. It is incredibly fast but only returns .025% of the data (about 50 rows out of my sample table of 200,000). It does not throw any errors.
    So I am trying to determine what is going on. Perhaps one of my functions has a bug. If one did, would that cause the row to be kicked out? There are 40 or so functions, so tracking this down has been a bit of a bear.
    Any advice as to how I might resolve this would be much appreciated.
    Thanks
    Dan

    "I would like to use pipelined table functions as it seems as though that would provide the best performance"
    Uh huh...
    "it runs without any errors but takes about 3 1/2 hours. There are a little more than 10 million rows in the table."
    Not the first time a lovely theory has been killed by an ugly fact. Did you do any benchmarks to see whether the pipelined functions did offer performance benefits over doing it some other way?
    From the context of your comments I think you are trying to populate a new table from a single old table. Is this the case? If so, I would have thought a straightforward CTAS with normal functions would be more appropriate: pipelined functions are really meant for situations in which one input produces more than one output. Anyway, if we are to help you I think you need to give us more details about how this process works and post a sample transformation function.
    "There are 40 or so functions so tracking this down has been a bit of a bear."
    The teaching is: we should code one function and get that working before moving on to the next one. Which might not seem like a helpful thing to say, but the best lesson is often "I'll do it differently next time".
    Cheers, APC
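    For reference, a minimal sketch of what such a transformation function often looks like (the collection type new_row_list, the constructor new_row_obj, the source old_table and the two pkg_name functions are hypothetical placeholders). One classic way to silently lose rows is exiting on %NOTFOUND straight after the fetch, which drops the final partial batch; the sketch exits only after piping it:
    FUNCTION transform (p_cur IN sys_refcursor)
       RETURN new_row_list PIPELINED
    IS
       TYPE src_tab IS TABLE OF old_table%ROWTYPE;
       l_rows src_tab;
    BEGIN
       LOOP
          FETCH p_cur BULK COLLECT INTO l_rows LIMIT 100;
          -- pipe the batch out BEFORE testing for exhaustion
          FOR i IN 1 .. l_rows.COUNT LOOP
             PIPE ROW (new_row_obj(pkg_name.function1(l_rows(i).old_field1),
                                   pkg_name.function2(l_rows(i).old_field2)));
          END LOOP;
          EXIT WHEN l_rows.COUNT < 100;
       END LOOP;
       CLOSE p_cur;
       RETURN;
    END transform;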

  • ODI: Way to select Distinct data while doing Join of tables at Source

    HI,
    Our requirement is to do a join on multiple tables, selecting distinct data from those tables at the source.
    But we are not able to see the Distinct box in the flow tab.
    Any thoughts to resolve our problem?
    Pratik

    You cannot apply the DISTINCT clause selectively.
    If you choose to opt for DISTINCT then Oracle will apply the distinct to all columns present in the select list.
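    For example (a generic sketch, not ODI-generated code):
    select distinct col1, col2, col3
    from   some_table;
    -- de-duplicates whole (col1, col2, col3) tuples, not a single column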
    So at your IKM level just click the distinct check box, run your interface and look at the query generated.
    See if your requirement is getting fulfilled or not.
    Thanks,
    Sutirtha

  • How to debug "pipelined parallel enable table function" ?

    Dear All,
    Normally we can retrieve output from a "pipelined parallel enable table function" by using a SQL statement, such as
    select output from table(pipelined_function(arg1));
    Can anyone enlighten me how to call this function purely from a PL/SQL statement?
    Reason for this: a third-party developer has developed a complicated "pipelined parallel enable table function", and this function is currently called from SQL (select output from table(pipelined_function(arg1));).
    I would like to debug this function, but have been unable to do so. So far I have:
    1) compiled the function with the debug option
    2) used Procedure Builder to debug; if I execute the above statement (select output from table(pipelined_function(arg1));) I believe Procedure Builder will not debug a function called from a SQL statement
    3) tried to build up a PL/SQL block, but don't know how to do it.
    Basically I want to debug this "pipelined parallel enable table function" and don't know how to do it; any example would be great!

    user2302827 wrote:
    Using dbms_output is fine but too tedious. I was looking for a PL/SQL procedure builder that can help to debug a statement like this:
    select output from table(pipelined_function(arg1));
    Any suggestion?
    I don't know about any procedure builders.
    There is a debugger you can use, though I'd have to think about how to invoke it to use with a pipelined function. The debugger is available in GUI tools like SQL*Developer and TOAD and can be used to walk through a program and set values.
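    One way to build the PL/SQL block asked about above - a minimal sketch, assuming the function is called pipelined_function, takes one argument, and its rows expose a single column named output - is to drive the TABLE() query from a cursor loop inside an anonymous block, then step into it with the debugger (note a PARALLEL_ENABLE function invoked this way runs serially, which is usually fine for debugging):
    declare
       cursor c is
          select output
          from   table(pipelined_function('arg1'));
    begin
       for r in c loop
          -- set a breakpoint here (or inside the function) and step in
          dbms_output.put_line(r.output);
       end loop;
    end;
    /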

  • Performance issues with pipelined table functions

    I am testing pipelined table functions to be able to re-use the base_query function. Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something? The processor function is from 'Improving performance with pipelined table functions' (http://www.oracle-developer.net/display.php?id=429).
    Edit: The underlying query returns 500,000 rows in about 3 minutes. So there are no performance issues with the query itself.
    Many thanks in advance.
    CREATE OR REPLACE PACKAGE pipeline_example
    IS
       TYPE resultset_typ IS REF CURSOR;
       TYPE row_typ IS RECORD (colC VARCHAR2(200), colD VARCHAR2(200), colE VARCHAR2(200));
       TYPE table_typ IS TABLE OF row_typ;
       FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
          RETURN resultset_typ;
       c_default_limit   CONSTANT PLS_INTEGER := 100;  
       FUNCTION processor (
          p_source_data   IN resultset_typ,
          p_limit_size    IN PLS_INTEGER DEFAULT c_default_limit)
          RETURN table_typ
          PIPELINED
          PARALLEL_ENABLE(PARTITION p_source_data BY ANY);
       PROCEDURE with_pipeline (argA          IN     VARCHAR2,
                                argB          IN     VARCHAR2,
                                o_resultset      OUT resultset_typ);
       PROCEDURE no_pipeline (argA          IN     VARCHAR2,
                              argB          IN     VARCHAR2,
                              o_resultset      OUT resultset_typ);
    END pipeline_example;
    CREATE OR REPLACE PACKAGE BODY pipeline_example
    IS
       FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
          RETURN resultset_typ
       IS
          o_resultset   resultset_typ;
       BEGIN
          OPEN o_resultset FOR
             SELECT colC, colD, colE
               FROM some_table
              WHERE colA = ArgA AND colB = argB;
          RETURN o_resultset;
       END base_query;
       FUNCTION processor (
          p_source_data   IN resultset_typ,
          p_limit_size    IN PLS_INTEGER DEFAULT c_default_limit)
          RETURN table_typ
          PIPELINED
          PARALLEL_ENABLE(PARTITION p_source_data BY ANY)
       IS
          aa_source_data   table_typ;-- := table_typ ();
       BEGIN
          LOOP
             FETCH p_source_data
             BULK COLLECT INTO aa_source_data
             LIMIT p_limit_size;
             EXIT WHEN aa_source_data.COUNT = 0;
             /* Process the batch of (p_limit_size) records... */
             FOR i IN 1 .. aa_source_data.COUNT
             LOOP
                PIPE ROW (aa_source_data (i));
             END LOOP;
          END LOOP;
          CLOSE p_source_data;
          RETURN;
       END processor;
       PROCEDURE with_pipeline (argA          IN     VARCHAR2,
                                argB          IN     VARCHAR2,
                                o_resultset      OUT resultset_typ)
       IS
       BEGIN
          OPEN o_resultset FOR
               SELECT /*+ PARALLEL(t, 5) */ colC,
                      SUM (CASE WHEN colD > colE AND colE != '0' THEN colD / ColE END)de,
                      SUM (CASE WHEN colE > colD AND colD != '0' THEN colE / ColD END)ed,
                      SUM (CASE WHEN colD = colE AND colD != '0' THEN '1' END) de_one,
                      SUM (CASE WHEN colD = '0' OR colE = '0' THEN '0' END) de_zero
                 FROM TABLE (processor (base_query (argA, argB),100)) t
             GROUP BY colC
          ORDER BY colC;
       END with_pipeline;
       PROCEDURE no_pipeline (argA          IN     VARCHAR2,
                              argB          IN     VARCHAR2,
                              o_resultset      OUT resultset_typ)
       IS
       BEGIN
          OPEN o_resultset FOR
               SELECT colC,
                      SUM (CASE WHEN colD > colE AND colE  != '0' THEN colD / ColE END)de,
                      SUM (CASE WHEN colE > colD AND colD  != '0' THEN colE / ColD END)ed,
                      SUM (CASE WHEN colD = colE AND colD  != '0' THEN 1 END) de_one,
                      SUM (CASE WHEN colD = '0' OR colE = '0' THEN '0' END) de_zero
                 FROM (SELECT colC, colD, colE
                         FROM some_table
                        WHERE colA = ArgA AND colB = argB)
             GROUP BY colC
             ORDER BY colC;
       END no_pipeline;
    END pipeline_example;
    ALTER PACKAGE pipeline_example COMPILE;
    Edited by: Earthlink on Nov 14, 2010 9:47 AM
    Edited by: Earthlink on Nov 14, 2010 11:31 AM
    Edited by: Earthlink on Nov 14, 2010 11:32 AM
    Edited by: Earthlink on Nov 20, 2010 12:04 PM
    Edited by: Earthlink on Nov 20, 2010 12:54 PM

    Earthlink wrote:
    "Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something?"
    Well, we're missing a lot here.
    Like:
    - a database version
    - how did you test
    - what data do you have, how is it distributed, indexed
    and so on.
    If you want to find out what's going on then use a TRACE with wait events.
    All necessary steps are explained in these threads:
    HOW TO: Post a SQL statement tuning request - template posting
    http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
    Another nice one is RUNSTATS:
    http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551378329289980701
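    For example, a 10046 trace with wait events (level 8) can be captured around the two test procedures - a sketch, assuming you can read the server trace directory and run tkprof:
    alter session set tracefile_identifier = 'pipeline_test';
    alter session set events '10046 trace name context forever, level 8';
    -- run the with_pipeline and no_pipeline tests here
    alter session set events '10046 trace name context off';
    -- then format the raw trace file with tkprof to compare the waits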

  • Using Pipeline Table functions with other tables

    I am on DB 11.2.0.2 and have sparingly used pipelined table functions, but am considering them for a project that has some fairly big (lots of rows) tables. In my tests, selecting from just the pipelined table performs pretty well (whether directly from the pipelined table or from the view I created on top of it). Where I start to see some degradation is when I try to join the pipelined table view to other tables and add where conditions.
    ie:
    SELECT A.empno, A.ename, A.job, B.sal
    FROM EMP_VIEW A, EMP B
    WHERE A.empno = B.empno AND
          B.mgr = '7839'
    I have seen some articles and blogs that mention this as a cardinality issue and offer some undocumented methods to try to combat it.
    Can someone please give me some advice or tips on this? Thanks!
    I have created a simple example using the emp table below to help illustrate what I am doing.
    DROP TYPE EMP_TYPE;
    DROP TYPE EMP_SEQ;
    CREATE OR REPLACE TYPE EMP_SEQ AS OBJECT
           ( EMPNO                                         NUMBER(10),
             ENAME                                         VARCHAR2(100),
             JOB                                           VARCHAR2(100));
    CREATE OR REPLACE TYPE EMP_TYPE AS TABLE OF EMP_SEQ;
    CREATE OR REPLACE FUNCTION get_emp return EMP_TYPE PIPELINED AS
    BEGIN
      FOR cur IN (SELECT
                    empno,
                    ename,
                    job
                   FROM emp)
              LOOP
               PIPE ROW(EMP_SEQ(cur.empno,
                                cur.ename,
                                cur.job));
             END LOOP;
             RETURN;
    END get_emp;
    create OR REPLACE view EMP_VIEW as select * from table(get_emp());
    SELECT A.empno, A.ename, A.job, B.sal
    FROM EMP_VIEW A, EMP B
    WHERE A.empno = B.empno AND
          B.mgr = '7839'
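    For reference, the undocumented workaround those blogs usually mean is the CARDINALITY hint, which feeds the optimizer an estimated row count for the table function - a sketch only, since the hint is undocumented and not guaranteed across versions (14 is the row count of the classic EMP table):
    SELECT /*+ CARDINALITY(a 14) */ a.empno, a.ename, a.job, b.sal
    FROM   TABLE(get_emp()) a, emp b
    WHERE  a.empno = b.empno
    AND    b.mgr = '7839';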

    I am on DB 11.2.0.2 and have sparingly used pipelined table functions but am considering it for a project that has some fairly big (lots of rows) sized tables
    Which begs the question: WHY? What PROBLEM are you trying to solve and what makes you think using pipelined table functions is the best way to solve that problem?
    The lack of information about cardinality is the likely root of the degradation you noticed as already mentioned.
    But that should be a red flag about pipelined functions in general. PIPELINED functions hide virtually ALL KNOWLEDGE about the result set that is produced; cardinality is just the tip of the iceberg. Those functions pretty much say 'here is a result set' without ANY information about the number of rows (cardinality), distinct values for any columns, nullability of any columns, constraints that might apply to any columns (foreign key, primary key) and so on.
    If you are going to hide all of that information from Oracle that would normally be used to help optimize queries and select the appropriate execution plan you need to have a VERY good reason.
    The use of PIPELINED functions should be reserved for those use cases where ordinary SQL and PL/SQL cannot get the job done. That is they are a 'special case' solution.
    The classic use case for those functions is the transform stage of ETL where multiple pipelined functions are chained together: one function feeds its rows to the next function, which feeds its rows to another, and so on. Each of those 'chained' functions is roughly analogous to a full table scan of the data that often does not need to be joined to other data, except perhaps low volume lookup tables where the data may even be cached.
    I suggest that any exploratory or prototyping work you do should use standard relational tables until you run into a problem whose solution might actually require PIPELINED functions.
