Table Functions error in manual
The user guide says:
create function f1(x number) return numset_t pipelined is
begin
  for i in 1..x loop
    pipe row(i);
  end loop;
  return;
end;
Note that the "end;" should be "end f1;". During unit testing this didn't cause a problem, but once we tested on a large amount of data we kept getting an ORA-600 (curse of Oracle).
Here was the exact error:
ORA-00600: internal error code, arguments: [kohdtf048], [], [], [], [], [], [],
Hope this helps someone in the future,
Tim
One clarification, please:
In production,
- using the syntax without the function name after "end" produces the error.
- using the syntax with the function name does not produce the error.
Is the above true?
As far as I know, the name of the function in the END syntax is OPTIONAL.
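For reference, both forms compile; the name after END is optional and serves only as documentation. A minimal sketch, assuming numset_t is the SQL-level collection type from the guide's example:

```sql
-- Assumed collection type, as in the user guide's example.
CREATE TYPE numset_t AS TABLE OF NUMBER;
/

-- Same function with the optional name repeated after END;
-- this is the form reported not to trigger the ORA-600.
CREATE FUNCTION f1(x NUMBER) RETURN numset_t PIPELINED IS
BEGIN
  FOR i IN 1 .. x LOOP
    PIPE ROW (i);
  END LOOP;
  RETURN;
END f1;
/
```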
Similar Messages
-
'setProgress is not a function' error while setting the progress of a progress bar manually
I want to set the value of a progress bar in an Accordion but I am encountering a 'setProgress is not a function' error. Any idea what's wrong with the following code?
Observation:
If I move the progress bar out of the Accordion then the error goes away and the progress bar appears fine.
I want to set the progress bar eventually to {repMonitor.currentItem.threatLevel}, but for now I am just testing with a hypothetical threat value, i.e. 60.
<mx:Accordion id="monAccordian" includeIn="Monitoring" x="10" y="10" width="554" height="242" change="monAccordianChange()" >
<mx:Repeater id="repMonitor" dataProvider="{monitoringArray}">
<mx:Canvas width="100%" height="100%" label="{repMonitor.currentItem.firstName+' '+ repMonitor.currentItem.lastName}" >
<mx:Image x="10" y="10" source="{repMonitor.currentItem.imageName}" width="175" height="118"/>
<s:Label x="200" y="14" text="Threat Level:"/>
<mx:ProgressBar x="200" y="30" mode="manual" label="" id="bar" width="200" creationComplete="bar.setProgress(60,100);" />
</mx:Canvas>
</mx:Repeater>
</mx:Accordion>
Okay, thanks.
On another forum I've been told that I need to use getRepeaterItem. How can I use it to set my progress bar so that the value of progress is taken from repMonitor.currentItem.threatLevel? -
How to use the Table Function defined in package in OWB?
Hi,
I defined a table function in a package. I am trying to use it in OWB via the Table Function operator, but I came to know that OWB R1 supports only standalone table functions.
Is there any other way to use a table function defined in a package? Just as we create synonyms for functions, is there another way to do this?
I tried creating a synonym and it was created, but it shows a compilation error. Finally I found that we can't create synonyms for functions that are defined inside packages.
Can anyone explain how to resolve this problem?
Thank you,
Regards
Gowtham Sen.
Hi Marcos,
Thank you for reply.
OWB R1 supports standalone table functions. What I mean is: a table function that is not included in any package is a standalone table function.
For example, say sample_tbl_fn is a table function defined as a standalone function; we call it as "sample_tbl_fn()".
Now say sample_pkg is a package, and a function is defined in that package.
Then we call that function as sample_pkg.functionname(); this is not a standalone function.
I hope you understand it: OWB supports standalone functions only.
Here I would like to know whether there is any other way to use the functions which are defined in a package. While I am trying to use those functions (which are defined in a package, giving the name as packagename.functionname), it throws the error "Invalid object name."
So, is there any other way to use the table functions which are defined in a package?
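One workaround sometimes used (a sketch with hypothetical names, assuming the packaged function is a regular, non-pipelined table function returning a SQL-level collection type) is a standalone wrapper that simply delegates to the packaged function, which the OWB operator can then see:

```sql
-- Hypothetical: sample_pkg.sample_tbl_fn returns the SQL-level
-- collection type sample_tbl_t. The standalone wrapper below just
-- forwards the call, so the OWB Table Function operator can bind to it.
CREATE OR REPLACE FUNCTION sample_tbl_fn_sa RETURN sample_tbl_t IS
BEGIN
  RETURN sample_pkg.sample_tbl_fn();
END sample_tbl_fn_sa;
/
```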
Thank you,
Regards,
Gowtham Sen. -
Table function on a collection in Dynamic SQL
Hello,
I am trying to create a refcursor by selecting from a collection using table function.
If I use the Select statement the query executes, but if I put the Select statement in a string
the collection variable does not get resolved. The reason I am putting it in a string is that the
WHERE clause will be passed a parameter. The code below is an anonymous block but will be changed to a
procedure once I get it to work.
I have tried many different ways but was unsuccessful.
Please see if anybody can assist; if what I am trying to achieve is not possible, please suggest an alternative.
The error I am getting is
ORA-00904: "V_ALARM_REC_TABLE": invalid identifier
ORA-06512: at line 50
Thanks.
Bimal
DECLARE
TYPE c_refcurtype IS REF CURSOR;
x c_refcurtype;
p_recordset c_refcurtype;
v_rec mc2_dev2.mc2_alarm_rec_type := mc2_dev2.mc2_alarm_rec_type(null,null,null,null,null,null,null,null,
null,null,null,null,null,null,null,null,
null,null,null,null,null,null,null);
v_alarm_rec_table mc2_dev2.mc2_alarm_rec_table := mc2_dev2.mc2_alarm_rec_table();
v_select varchar2(200) := 'select a.* from ';
v_table varchar2(200) := 'table(v_alarm_rec_table) a ';
v_where varchar2(200) := 'where a.alarm_rule_def_uid = 9';
v_query varchar2(32000);
BEGIN
MC2_ALARM.create_mc2_alarm(x, 1); --- ( X is a refcursor, which I will use to populate v_alarm_rec_table a (nested table collection)
LOOP
FETCH x INTO v_rec.record_cnt,
v_rec.rn,
v_rec.alarm_precision_order,
v_rec.alarm_rule_def_uid,
v_rec.alarm_type_def_uid,
v_rec.alarm_rule_scope_uid,
v_rec.trigger_tpl_master_uid,
v_rec.alarm_scope_def_uid,
v_rec.alarm_object_uid,
v_rec.error_type,
v_rec.all_error_codes,
v_rec.enabled,
v_rec.start_hour,
v_rec.end_hour,
v_rec.day_type,
v_rec.alarm_severity_def_uid,
v_rec.on_watch_duration,
v_rec.update_on_status_change,
v_rec.log_ind,
v_rec.email_to,
v_rec.email_from,
v_rec.send_email,
v_rec.stale_period;
EXIT WHEN x%NOTFOUND;
v_alarm_rec_table.extend;
v_alarm_rec_table(v_alarm_rec_table.last) := v_rec;
END LOOP;
CLOSE x;
v_query := v_select||v_table||v_where; -- ERROR OCCURS AT THIS LINE as it cannot resolve the TABLE name v_alarm_rec_table)
dbms_output.put_line('sql: '||v_query);
OPEN p_recordset FOR v_query;
LOOP
FETCH p_recordset INTO v_rec.record_cnt,
v_rec.rn,
v_rec.alarm_precision_order,
v_rec.alarm_rule_def_uid,
v_rec.alarm_type_def_uid,
v_rec.alarm_rule_scope_uid,
v_rec.trigger_tpl_master_uid,
v_rec.alarm_scope_def_uid,
v_rec.alarm_object_uid,
v_rec.error_type,
v_rec.all_error_codes,
v_rec.enabled,
v_rec.start_hour,
v_rec.end_hour,
v_rec.day_type,
v_rec.alarm_severity_def_uid,
v_rec.on_watch_duration,
v_rec.update_on_status_change,
v_rec.log_ind,
v_rec.email_to,
v_rec.email_from,
v_rec.send_email,
v_rec.stale_period;
EXIT WHEN p_recordset%NOTFOUND;
some dbms_output statements...
END LOOP;
END;
The error I am getting is
ORA-00904: "V_ALARM_REC_TABLE": invalid identifier
ORA-06512: at line 50
Thanks Timur/Solomon,
mc2_dev2 is the schema name.
mc2_alarm_rec_table is a SQL type.
Here are the scripts:
CREATE OR REPLACE TYPE MC2_DEV2.mc2_alarm_rec_type IS OBJECT
( record_cnt NUMBER,
rn number,
alarm_precision_order NUMBER(6),
alarm_rule_def_uid NUMBER(6),
alarm_type_def_uid NUMBER(6),
alarm_rule_scope_uid NUMBER(6),
trigger_tpl_master_uid NUMBER(6),
alarm_scope_def_uid NUMBER(6),
alarm_object_uid NUMBER(6),
error_type VARCHAR2(1),
all_error_codes VARCHAR2(1),
enabled VARCHAR2(1),
start_hour NUMBER(2),
end_hour NUMBER(2),
day_type NUMBER(2),
alarm_severity_def_uid NUMBER(6),
on_watch_duration NUMBER(6),
update_on_status_change VARCHAR2(1),
log_ind VARCHAR2(1),
email_to VARCHAR2(128),
email_from VARCHAR2(128),
send_email VARCHAR2(1),
stale_period NUMBER(6)
);
CREATE OR REPLACE TYPE MC2_DEV2.MC2_ALARM_REC_TABLE IS TABLE OF MC2_DEV2.mc2_alarm_rec_type;
If I populate the cursor with the following code:
OPEN p_recordset FOR
select a.* from table (v_alarm_rec_table) a where a.alarm_rule_def_uid = 9;
there is no issue; it works just fine.
But when I use
OPEN p_recordset FOR v_query; ---- where v_query := v_select||v_table||v_where;
the variable v_alarm_rec_table does not get resolved.
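That is expected: native dynamic SQL is parsed in SQL scope, where PL/SQL variable names are not visible. One way around it (a sketch, assuming mc2_alarm_rec_table is the SQL-level type created by the scripts above) is to pass the collection, and the filter value, as bind variables:

```sql
DECLARE
  p_recordset SYS_REFCURSOR;
  v_alarm_rec_table mc2_dev2.mc2_alarm_rec_table := mc2_dev2.mc2_alarm_rec_table();
  v_uid NUMBER := 9;  -- the parameter that will come from the caller
BEGIN
  -- ... populate v_alarm_rec_table as in the original fetch loop ...

  -- The collection and the filter value are bound, not concatenated,
  -- so they are resolved at execution time.
  OPEN p_recordset FOR
    'select a.* from table(:tab) a where a.alarm_rule_def_uid = :uid'
    USING v_alarm_rec_table, v_uid;
END;
/
```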
Regards,
Bimal -
Drop default constraint on a table function
I need to drop some default constraints that appear to be tied to table functions (and not actual tables). This means when I try the ALTER TABLE DROP CONSTRAINT command it fails with the error, "unable to drop constraint because object is not
a table" or something similar.
My question is: how do I drop a constraint on a table function?
I suggest you review the documentation for TVFs and how they are (and can be) used. The table returned by a TVF (and in this case I refer specifically to multistatement TVFs) is defined using a subset of the CREATE TABLE syntax. They can be
created with constraints of different types - not just defaults. Why? Because it suits the logic of the developer and (perhaps) because it assists the database engine or the logic that depends on the output of the function.
Below is one example that I used (written by Steve Kass) from a LONG time ago. Notice the primary key.
CREATE FUNCTION [dbo].[uf_sequence] (@N int)
RETURNS @T TABLE (
seq int not null primary key clustered
)
AS
/*
** 04/21/05.sbm - Bug #306. Initial version.
** Code provided by Steve Kass - MS .programming newsgroup
*/
BEGIN
DECLARE @place int
SET @place = 1
INSERT INTO @T (seq) VALUES (0)
WHILE @place <= @N/2 BEGIN
INSERT INTO @T (seq)
SELECT @place + Seq FROM @T
SET @place = @place + @place
END
INSERT INTO @T (seq)
SELECT @place + Seq FROM @T
WHERE Seq <= @N - @place
RETURN
END
go
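For reference, such a TVF is then queried like an ordinary table; tracing the loop shows it returns the integers 0 through @N:

```sql
-- Produces rows seq = 0, 1, ..., 10.
SELECT seq FROM dbo.uf_sequence(10) ORDER BY seq;
```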
For your particular case, the choice of a default constraint is likely due to the implementation of the logic in the function. Perhaps there are multiple insert statements and it was simpler/easier/more robust to use a default constraint rather than
repeatedly hard-code the value in each statement. By choosing a default constraint, the developer need only alter the constraint (once) if the value needs to be changed rather than finding and changing each statement that inserts or updates the table.
As you've already discerned, you can simply ignore any constraints that are defined on the tables returned by a TVF. -
How to use database look up table function in xsl mapping
Can anybody tell me how to use the database lookup table function when mapping XSL between 2 nodes?
I have an XML file coming in and depending on one of XML elements we need to decide which further path to take. But, using this XML element, we need to query database table, get metadata and accordingly take appropriate path. I have written lookup function which returns metadata value.
Now, the issue is how do I pass the XML element value as input to the lookup function? When I tried to drag it to the input node of the lookup function, it throws an error like "Maximum number of parameters exceeded".
Thanks,
If the lookup table is always going to remain the same (e.g. a character generator or something similar) you can place the values in a 2D array constant on your diagram, with the input value as one column and the equivalent as the other. When you need to perform the lookup you use an index array to return all the values in the "input column", search it using "search 1D array" and use the resulting index number to index the other column's data. If the values may change, then it would probably be best to load an array control with your equivalent values from a file.
P.M.
Putnam
Certified LabVIEW Developer
Senior Test Engineer
Currently using LV 6.1-LabVIEW 2012, RT8.5
LabVIEW Champion -
Invalid table name error ....
Hi,
I have written a function which takes a table name dynamically; if column emp_id is null for more than 0 records then 1 is returned, else 0.
My problem is that when I compile I am getting an "invalid table name" error.
Below is my function :
create or replace
FUNCTION f_table ( tab_name in varchar2 ) return number is
l_count number;
begin
select count(*) into l_count from tab_name where emp_id is null;
if l_count >0 then
return 1;
else
return 0;
end if;
end;
Please help ...
Thanks in advance.
Looks fine in intent, but a variable table name needs dynamic SQL; you could also use sign() for the last part:
CREATE OR REPLACE FUNCTION f_table (tab_name IN VARCHAR2)
RETURN NUMBER
IS
l_count NUMBER;
v_sql VARCHAR2 (2000);
BEGIN
v_sql := 'SELECT COUNT (*) FROM ' || tab_name || ' WHERE emp_id IS NULL';
EXECUTE IMMEDIATE v_sql
INTO l_count;
RETURN sign(l_count);
END;
And if you have large tables, you could consider not counting them all, and do something like this:
CREATE OR REPLACE FUNCTION f_table (tab_name IN VARCHAR2)
RETURN NUMBER
IS
l_count NUMBER;
v_sql VARCHAR2 (2000);
BEGIN
v_sql := 'SELECT COUNT (*) FROM ' || tab_name || ' WHERE emp_id IS NULL AND rownum = 1';
EXECUTE IMMEDIATE v_sql
INTO l_count;
RETURN l_count;
END;
Regards
Peter -
How to define a view on a table function
I have defined a pipelined table function in a package and can return data from it using:
select * from table(mtreport.FN_GET_JOBS_FOR_PROJECT(?));
How can I define a view on this so that users can select from it without having to know to use the 'table' syntax?
select *
from vw_jobs_for_project
where id = ?
I've tried this, but it doesn't work:
CREATE OR REPLACE VIEW vw_jobs_for_project (PROJECT_ID, NAME) AS
SELECT PROJECT_ID, NAME
FROM table(mtreport.FN_GET_JOBS_FOR_PROJECT(PROJECT_ID));
ERROR at line 3:
ORA-00904: "PROJECT_ID": invalid identifier
Views do not accept input parameters in the way that you want to use them. You can define a CONTEXT parameter, but the users would need to set the context value before selecting from the view, so you'll still need to train them to do something new.
example:
create or replace context my_context using pkg_context;
create or replace package pkg_context is
procedure set_context(p_parameter in varchar2, p_value in varchar2);
function get_context(p_parameter in varchar2) return varchar2;
end;
show errors
create or replace package body pkg_context is
CONTEXT_NAME constant all_context.namespace%type := 'MY_CONTEXT';
procedure set_context(p_parameter in varchar2, p_value in varchar2) is
begin
dbms_session.set_context(CONTEXT_NAME, p_parameter, p_value);
end;
function get_context(p_parameter in varchar2) return varchar2 is
begin
return sys_context(CONTEXT_NAME, p_parameter);
end;
end;
show errors
create or replace view tree_view as
select level lvl, chid
from tree
connect by prior parid=chid
start with chid = sys_context('MY_CONTEXT','CHID');
-- at runtime
exec pkg_context.set_context('CHID', 14)
select * from tree_view; -
Can I pass a table function parameter like this?
This works. Notice I am passing the required table function parameter using the declared variable.
DECLARE @Date DATE = '2014-02-21'
SELECT
h.*, i.SomeColumn
FROM SomeTable h
LEFT OUTER JOIN SomeTableFunction(@Date) I ON i.ID = h.ID
WHERE h.SomeDate = @Date
But I guess you can't do this?... because I'm getting an error saying h.SomeDate cannot be bound. Notice in this one, I am attempting to pass in the table function parameter from the SomeTable it is joined to by ID.
DECLARE @Date DATE = '2014-02-21'
SELECT
h.*, i.SomeColumn
FROM SomeTable h
LEFT OUTER JOIN SomeTableFunction(h.SomeDate) I ON i.ID = h.ID
WHERE h.SomeDate = @Date
Hi
No, you can't pass a table function parameter like that.
When you declare @Date, assign a value to it, and pass it as a parameter, the function returns a table which you can use in the join, as you did in the first code.
But when you pass a date from another table to generate the function's table, that date is not available to the function at that point in the FROM clause.
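That said, since SQL Server 2005 a column can be fed into a table-valued function row by row with CROSS/OUTER APPLY; a sketch using the post's placeholder names:

```sql
DECLARE @Date DATE = '2014-02-21';

SELECT h.*, i.SomeColumn
FROM SomeTable h
-- OUTER APPLY keeps rows of h even when the function returns nothing,
-- mirroring the LEFT OUTER JOIN in the original query.
OUTER APPLY (
    SELECT f.SomeColumn
    FROM SomeTableFunction(h.SomeDate) AS f
    WHERE f.ID = h.ID   -- the former ON clause moves inside the APPLY
) AS i
WHERE h.SomeDate = @Date;
```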
Ref :
http://www.codeproject.com/Articles/167399/Using-Table-Valued-Functions-in-SQL-Server
http://technet.microsoft.com/en-us/library/aa214485(v=sql.80).aspx
http://msdn.microsoft.com/en-us/library/ms186755.aspx
https://www.simple-talk.com/sql/t-sql-programming/sql-server-functions-the-basics/
http://www.sqlteam.com/article/intro-to-user-defined-functions-updated
Mark as answer if you find it useful.
Shridhar J Joshi
Thanks a lot -
HOW to pass page parameter into table function in HTMLDB
I created this object type and table function in the database.
create or replace TYPE date_flow_type
AS OBJECT (
time date,
max_time number,
avg_total NUMBER,
sum_total NUMBER,
max_total NUMBER,
change_rate number
);
create or replace TYPE date_flow_table_type AS TABLE OF date_flow_type;
create or replace function ret_date(p_date date default sysdate) return date_flow_table_type is
v_tbl1 date_flow_table_type :=date_flow_table_type();
begin
v_tbl1.extend;
v_tbl1(v_tbl1.last):=date_flow_type (p_date,1,1,1,1,1);
return v_tbl1;
end;
It works correctly in HTML DB when used in these ways:
SELECT TIME da,
max_time max_time,
sum_total total,
max_total max_total,
change_rate
FROM TABLE ( ret_icp_date_flow ) a;
SELECT TIME da,
max_time max_time,
sum_total total,
max_total max_total,
change_rate
FROM TABLE ( ret_icp_date_flow( sysdate-1 )) a;
but it returns the error
ORA-00904: "RET_ICP_DATE_FLOW": invalid identifier
when passing a page parameter into the table function:
SELECT TIME da,
max_time max_time,
sum_total total,
max_total max_total,
change_rate
FROM TABLE ( ret_icp_date_flow( to_date(:p1_date,'yyyy-mm-dd') )) a
and this SQL is correct when run in SQL*Plus.
Hi!
Thanks for your reply!
I have tried this solution but it doesn't work!
When I do getInitParameter in the init function, the servlet takes the default values...
Maybe I wrote something wrong?
Excuse me for my english,
Thanks -
Pipeline Table Function returning a fraction of data
My current project involves migrating an Oracle database to a new structure based on the new client application requirements. I would like to use pipelined table functions as it seems as though that would provide the best performance.
The first table has about 65 fields, about 75% of which require some type of recoding for the new app. I have written a function for each transformation and have all of these functions stored in a package. If I do:
create table new_table as select
pkg_name.function1(old_field1),
pkg_name.function2(old_field2),
pkg_name.function3(old_field3),
it runs without any errors but takes about 3 1/2 hours. There are a little more than 10 million rows in the table.
I wrote a function that is passed the old table as a cursor, runs all the functions for the transformations and then pipes the new row back to the insert statement that called the function. It is incredibly fast but only returns .025% of the data (about 50 rows out of my sample table of 200,000). It does not throw any errors.
So I am trying to determine what is going on. Perhaps one of my functions has a bug. If there were, would that cause the row to be kicked out? There are 40 or so functions, so tracking this down has been a bit of a bear.
Any advice as to how I might resolve this would be much appreciated.
Thanks
Dan.
"I would like to use pipelined table functions as it seems as though that would provide the best performance"
Uh huh...
"it runs without any errors but takes about 3 1/2 hours. There are a little more than 10 million rows in the table."
Not the first time a lovely theory has been killed by an ugly fact. Did you do any benchmarks to see whether the pipelined functions did offer performance benefits over doing it some other way?
From the context of your comments I think you are trying to populate a new table from a single old table. Is this the case? If so I would have thought a straightforward CTAS with normal functions would be more appropriate: pipelined functions are really meant for situations in which one input produces more than one output. Anyway, if we are to help you I think you need to give us more details about how this process works and post a sample transformation function.
"There are 40 or so functions so tracking this down has been a bit of a bear."
The teaching is: we should code one function and get that working before moving on to the next one. Which might not seem like a helpful thing to say, but the best lesson is often "I'll do it differently next time".
Cheers, APC -
Question about Table Function inside a Package
Hi … I am new to PL/SQL. I am trying to use a table function whose output depends on a value passed to it (I am using 10g). Everything works great, but the moment I add a parameter everything explodes. I am creating it in a package.
SQL that works:
CREATE OR REPLACE PACKAGE BODY financial_reports AS
FUNCTION Fund_Amount
RETURN financial_reports.Fund_Amount_Table
pipelined parallel_enable IS
cur_row financial_reports.Fund_Amount_Record;
BEGIN
FOR cur_row IN
(
SELECT
to_number(substr(bu5.usrdata, 1, 1)) As SECTION_ID
,to_number(substr(bu5.usrdata, 2, 1)) As SUB_SECTION_ID
,to_number(substr(bu5.usrdata, 4, 2)) AS LINE_NUMBER
,to_number(substr(bu5.usrdata, 7, 2)) As FUND_ID
,sum(be.amt) AS AMOUNT
FROM
linc.budgetdb_usr5@stjohnsfp bu5
JOIN linc.budgetdb_event@stjohnsfp be ON
bu5.keyvalue = be.acctno
WHERE
bu5.keyvalue like '__-__-__-____-____-_____'
AND bu5.usrdata like '__-__-__'
AND bu5.fieldnum = 1
AND bu5.ispecname = 'GLMST'
AND to_number(substr(bu5.usrdata, 7, 2)) = 1
GROUP BY
bu5.usrdata
ORDER BY
bu5.usrdata
)
LOOP
PIPE ROW(cur_row);
END LOOP;
END Fund_Amount;
END financial_reports;
SQL that does not work:
CREATE OR REPLACE PACKAGE BODY financial_reports AS
FUNCTION Fund_Amount (Fund_Id IN NUMBER)
RETURN financial_reports.Fund_Amount_Table
pipelined parallel_enable IS
cur_row financial_reports.Fund_Amount_Record;
fund_id_int NUMBER;
BEGIN
fund_id_int := Fund_Id;
FOR cur_row IN
(
SELECT
to_number(substr(bu5.usrdata, 1, 1)) As SECTION_ID
,to_number(substr(bu5.usrdata, 2, 1)) As SUB_SECTION_ID
,to_number(substr(bu5.usrdata, 4, 2)) AS LINE_NUMBER
,to_number(substr(bu5.usrdata, 7, 2)) As FUND_ID
,sum(be.amt) AS AMOUNT
FROM
linc.budgetdb_usr5@stjohnsfp bu5
JOIN linc.budgetdb_event@stjohnsfp be ON
bu5.keyvalue = be.acctno
WHERE
bu5.keyvalue like '__-__-__-____-____-_____'
AND bu5.usrdata like '__-__-__'
AND bu5.fieldnum = 1
AND bu5.ispecname = 'GLMST'
AND to_number(substr(bu5.usrdata, 7, 2)) = fund_id_int
GROUP BY
bu5.usrdata
ORDER BY
bu5.usrdata
)
LOOP
PIPE ROW(cur_row);
END LOOP;
END Fund_Amount;
END financial_reports;
Error … (This works without the parameter)
Error starting at line 43 in command:
select * from table(financial_reports.Fund_Amount(1) )
Error at Command Line:1 Column:14
Error report:
SQL Error: ORA-22905: cannot access rows from a non-nested table item
Any help would be greatly appreciated.
Try renaming your parameter so as not to confuse it with what you are using in your column "to_number(substr(bu5.usrdata, 7, 2)) AS FUND_ID":
CREATE OR REPLACE PACKAGE BODY financial_reports AS
FUNCTION Fund_Amount (pFund_Id IN NUMBER)
RETURN financial_reports.Fund_Amount_Table
pipelined parallel_enable IS
cur_row financial_reports.Fund_Amount_Record;
fund_id_int NUMBER;
BEGIN
fund_id_int := pFund_Id;
FOR cur_row IN ( SELECT to_number(substr(bu5.usrdata, 1, 1)) As SECTION_ID,
to_number(substr(bu5.usrdata, 2, 1)) As SUB_SECTION_ID,
to_number(substr(bu5.usrdata, 4, 2)) AS LINE_NUMBER,
to_number(substr(bu5.usrdata, 7, 2)) As FUND_ID,
sum(be.amt) AS AMOUNT
FROM linc.budgetdb_usr5@stjohnsfp bu5
JOIN linc.budgetdb_event@stjohnsfp be ON bu5.keyvalue = be.acctno
WHERE bu5.keyvalue like '__-__-__-____-____-_____'
AND bu5.usrdata like '__-__-__'
AND bu5.fieldnum = 1
AND bu5.ispecname = 'GLMST'
AND to_number(substr(bu5.usrdata, 7, 2)) = fund_id_int
GROUP BY bu5.usrdata
ORDER BY bu5.usrdata ) LOOP
PIPE ROW(cur_row);
END LOOP;
END Fund_Amount;
END financial_reports; -
Hi All,
I'm on Oracle 9i R2:
there is a package with a table function, and a JDBC client.
In client code :
Statement st = con.createStatement() ;
ResultSet rs = st.executeQuery("select * from table(pkg.func)") ;
while (rs.next())
System.out.println(rs.getString(1)) ;
- and that does not work ...
What is wrong ? :)
With best regards, Slava
Can you define "not work" here... Do you get an error? If so, what is the error number you are getting (ORA-xxxxx)?
Justin
Distributed Database Consulting, Inc.
http://www.ddbcinc.com/askDDBC -
Using table function with merge
I want to use a table function on a table type in a MERGE statement inside a procedure.
1 create or replace procedure fnd_proc as
2 cursor fnd_c is
select * from fnd_columns;
3 type test_t is table of fnd_columns%rowtype;
4 fnd_t test_t;
5 begin
6 merge into sample s using (select * from table (fnd_pkg1.get_records(cursor(select * from fnd_columns)))) f
7 on (s.application_id = f.application_id)
8 when matched then
9 update set last_update_date=sysdate
10 when not matched then
11 insert(APPLICATION_ID,TABLE_ID,COLUMN_ID) values(f.APPLICATION_ID,f.TABLE_ID,f.COLUMN_ID);
12 end;
create or replace package fnd_pkg1 as
type fnd_type is table of fnd_columns%rowtype;
function get_records(p_cursor IN SYS_REFCURSOR) return fnd_type;
end;
create or replace package body fnd_pkg1 as
function get_records(p_cursor IN SYS_REFCURSOR) return fnd_type is
fnd_data fnd_type;
begin
fetch p_cursor bulk collect into fnd_data;
return fnd_data;
end;
end;
/
When I compile the procedure fnd_proc I get the following error:
LINE/COL ERROR
6/11 PL/SQL: SQL Statement ignored
6/52 PL/SQL: ORA-22905: cannot access rows from a non-nested table
item
6/67 PLS-00642: local collection types not allowed in SQL statements
Let me know what has to be done.
michaels> CREATE TABLE fnd_columns (application_id ,table_id ,column_id ,last_update_date )
AS SELECT object_id,data_object_id,ROWNUM,created FROM all_objects
Table created.
michaels> CREATE TABLE SAMPLE (application_id INTEGER,table_id INTEGER,column_id INTEGER,last_update_date DATE)
Table created.
michaels> CREATE OR REPLACE TYPE fnd_obj AS OBJECT (
application_id INTEGER,
table_id INTEGER,
column_id INTEGER,
last_update_date DATE
)
Type created.
michaels> CREATE OR REPLACE TYPE fnd_type AS TABLE OF fnd_obj
Type created.
michaels> CREATE OR REPLACE PACKAGE fnd_pkg1
AS
FUNCTION get_records (p_cursor IN sys_refcursor)
RETURN fnd_type;
PROCEDURE fnd_proc;
END fnd_pkg1;
Package created.
michaels> CREATE OR REPLACE PACKAGE BODY fnd_pkg1
AS
FUNCTION get_records (p_cursor IN sys_refcursor)
RETURN fnd_type
IS
fnd_data fnd_type;
BEGIN
FETCH p_cursor
BULK COLLECT INTO fnd_data;
RETURN fnd_data;
END get_records;
PROCEDURE fnd_proc
AS
CURSOR fnd_c
IS
SELECT *
FROM fnd_columns;
TYPE test_t IS TABLE OF fnd_columns%ROWTYPE;
fnd_t test_t;
BEGIN
MERGE INTO SAMPLE s
USING (SELECT *
FROM TABLE
(fnd_pkg1.get_records
(CURSOR (SELECT fnd_obj (application_id,
table_id,
column_id,
last_update_date)
FROM fnd_columns)))) f
ON (s.application_id = f.application_id)
WHEN MATCHED THEN
UPDATE
SET last_update_date = SYSDATE
WHEN NOT MATCHED THEN
INSERT (application_id, table_id, column_id)
VALUES (f.application_id, f.table_id, f.column_id);
END fnd_proc;
END fnd_pkg1;
Package body created.
michaels> BEGIN
fnd_pkg1.fnd_proc;
END;
PL/SQL procedure successfully completed.
michaels> SELECT COUNT (*)
FROM SAMPLE
COUNT(*)
47469
Now I'd like to see the stats and the ferrari too ;-) -
10g: delay for collecting results from parallel pipelined table functions
When parallel pipelined table functions are properly started and generate output records, there is a delay before the consuming main thread gathers these records.
This delay is huge compared with the run-time of the worker threads.
For my application it goes like this:
main thread timing efforts to start worker and collect their results:
[10:50:33-*10:50:49*]:JOMA: create (master): 015.93 sec (#66356 records, #4165/sec)
worker threads:
[10:50:34-*10:50:39*]:JOMA: create (slave) : 005.24 sec (#2449 EDRs, #467/sec, #0 errored / #6430 EBTMs, #1227/sec, #0 errored) - bulk #1 / sid #816
[10:50:34-*10:50:39*]:JOMA: create (slave) : 005.56 sec (#2543 EDRs, #457/sec, #0 errored / #6792 EBTMs, #1221/sec, #0 errored) - bulk #1 / sid #718
[10:50:34-*10:50:39*]:JOMA: create (slave) : 005.69 sec (#2610 EDRs, #459/sec, #0 errored / #6950 EBTMs, #1221/sec, #0 errored) - bulk #1 / sid #614
[10:50:34-*10:50:39*]:JOMA: create (slave) : 005.55 sec (#2548 EDRs, #459/sec, #0 errored / #6744 EBTMs, #1216/sec, #0 errored) - bulk #1 / sid #590
[10:50:34-*10:50:39*]:JOMA: create (slave) : 005.33 sec (#2461 EDRs, #462/sec, #0 errored / #6504 EBTMs, #1220/sec, #0 errored) - bulk #1 / sid #508
You can see the worker threads all start at the same time and terminate at the same time: 10:50:34-10:50:*39*.
But the main thread, which just invokes them and saves their results into a collection, finished at 10:50:*49*.
Why does it need #10 sec more just to save the data?
Here's a sample sqlplus script to demonstrate this:
--------------------------- snip -------------------------------------------------------
set serveroutput on;
drop table perf_data;
drop table test_table;
drop table tmp_test_table;
drop type ton_t;
drop type test_list;
drop type test_obj;
create table perf_data
(
sid number,
t1 timestamp with time zone,
t2 timestamp with time zone,
client varchar2(256)
);
create table test_table
(
a number(19,0),
b timestamp with time zone,
c varchar2(256)
);
create global temporary table tmp_test_table
(
a number(19,0),
b timestamp with time zone,
c varchar2(256)
);
create or replace type test_obj as object(
a number(19,0),
b timestamp with time zone,
c varchar2(256)
);
create or replace type test_list as table of test_obj;
create or replace type ton_t as table of number;
create or replace package test_pkg
as
type test_rec is record (
a number(19,0),
b timestamp with time zone,
c varchar2(256)
);
type test_tab is table of test_rec;
type test_cur is ref cursor return test_rec;
function TZDeltaToMilliseconds(
t1 in timestamp with time zone,
t2 in timestamp with time zone)
return pls_integer;
function TF(mycur test_cur)
return test_list pipelined
parallel_enable(partition mycur by hash(a));
end;
create or replace package body test_pkg
as
/*
* Calculate timestamp with timezone difference
* in milliseconds
*/
function TZDeltaToMilliseconds(
t1 in timestamp with time zone,
t2 in timestamp with time zone)
return pls_integer
is
begin
return (extract(hour from t2) - extract(hour from t1)) * 3600 * 1000
+ (extract(minute from t2) - extract(minute from t1)) * 60 * 1000
+ (extract(second from t2) - extract(second from t1)) * 1000;
end TZDeltaToMilliseconds;
function TF(mycur test_cur)
return test_list pipelined
parallel_enable(partition mycur by hash(a))
is
pragma autonomous_transaction;
sid number;
counter number(19,0) := 0;
myrec test_rec;
mytab test_tab;
mytab2 test_list := test_list();
t1 timestamp with time zone;
t2 timestamp with time zone;
begin
t1 := systimestamp;
select userenv('SID') into sid from dual;
dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): enter');
loop
fetch mycur into myRec;
exit when mycur%NOTFOUND;
mytab2.extend;
mytab2(mytab2.last) := test_obj(myRec.a, myRec.b, myRec.c);
end loop;
for i in mytab2.first..mytab2.last loop
-- attention: saves own SID in test_obj.a for indication to caller
-- how many sids have been involved
pipe row(test_obj(sid, mytab2(i).b, mytab2(i).c));
pipe row(test_obj(sid, mytab2(i).b, mytab2(i).c)); -- duplicate
pipe row(test_obj(sid, mytab2(i).b, mytab2(i).c)); -- duplicate once again
counter := counter + 1;
end loop;
t2 := systimestamp;
insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'slave');
commit;
dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): exit, piped #' || counter || ' records');
end;
end;
declare
myList test_list := test_list();
myList2 test_list := test_list();
sids ton_t := ton_t();
sid number;
t1 timestamp with time zone;
t2 timestamp with time zone;
procedure LogPerfTable
is
type ton is table of number;
type tot is table of timestamp with time zone;
type clients_t is table of varchar2(256);
sids ton;
t1s tot;
t2s tot;
clients clients_t;
deltaTime integer;
btsPerSecond number(19,0);
edrsPerSecond number(19,0);
begin
select sid, t1, t2, client bulk collect into sids, t1s, t2s, clients from perf_data order by client;
if clients.count > 0 then
for i in clients.FIRST .. clients.LAST loop
deltaTime := test_pkg.TZDeltaToMilliseconds(t1s(i), t2s(i));
if deltaTime = 0 then deltaTime := 1; end if;
dbms_output.put_line(
'[' || to_char(t1s(i), 'hh:mi:ss') ||
'-' || to_char(t2s(i), 'hh:mi:ss') ||
']:' ||
' client ' || clients(i) || ' / sid #' || sids(i)
);
end loop;
end if;
end LogPerfTable;
begin
select userenv('SID') into sid from dual;
for i in 1..200000 loop
myList.extend; myList(myList.last) := test_obj(i, sysdate, to_char(i+2));
end loop;
-- save into the real table
insert into test_table select * from table(cast (myList as test_list));
-- save into the tmp table
insert into tmp_test_table select * from table(cast (myList as test_list));
dbms_output.put_line(chr(10) || '(1) copy ''mylist'' to ''mylist2'' by streaming via table function...');
delete from perf_data;
t1 := systimestamp;
select /*+ first_rows */ test_obj(a, b, c) bulk collect into myList2
from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,5) */ * from table(cast (myList as test_list)) tab)));
t2 := systimestamp;
insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'master');
LogPerfTable;
dbms_output.put_line('... saved #' || myList2.count || ' records');
select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
dbms_output.put_line(chr(10) || '(2) copy temporary ''tmp_test_table'' to ''mylist2'' by streaming via table function:');
delete from perf_data;
t1 := systimestamp;
select /*+ first_rows */ test_obj(a, b, c) bulk collect into myList2
from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,5) */ * from tmp_test_table tab)));
t2 := systimestamp;
insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'master');
LogPerfTable;
dbms_output.put_line('... saved #' || myList2.count || ' records');
select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
dbms_output.put_line(chr(10) || '(3) copy physical ''test_table'' to ''mylist2'' by streaming via table function:');
delete from perf_data;
t1 := systimestamp;
select /*+ first_rows */ test_obj(a, b, c) bulk collect into myList2
from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,5) */ * from test_table tab)));
t2 := systimestamp;
insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'master');
LogPerfTable;
dbms_output.put_line('... saved #' || myList2.count || ' records');
select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
end;
/
--------------------------- snap -------------------------------------------------------
best regards,
Frank

Hello
I think the delay you are seeing is down to choosing the partitioning method as HASH. When you specify anything other than ANY, an additional buffer sort is included in the execution plan...
create or replace package test_pkg
as
type test_rec is record (
a number(19,0),
b timestamp with time zone,
c varchar2(256)
);
type test_tab is table of test_rec;
type test_cur is ref cursor return test_rec;
function TZDeltaToMilliseconds(
t1 in timestamp with time zone,
t2 in timestamp with time zone)
return pls_integer;
function TF(mycur test_cur)
return test_list pipelined
parallel_enable(partition mycur by hash(a));
function TF_Any(mycur test_cur)
return test_list pipelined
parallel_enable(partition mycur by ANY);
end;
/
create or replace package body test_pkg
as
/*
* Calculate timestamp with timezone difference
* in milliseconds
*/
function TZDeltaToMilliseconds(
t1 in timestamp with time zone,
t2 in timestamp with time zone)
return pls_integer
is
begin
return (extract(hour from t2) - extract(hour from t1)) * 3600 * 1000
+ (extract(minute from t2) - extract(minute from t1)) * 60 * 1000
+ (extract(second from t2) - extract(second from t1)) * 1000;
end TZDeltaToMilliseconds;
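A side note on this helper: it differences the hour, minute and second fields of the two timestamps individually, so it only gives the right answer when both timestamps fall within the same day. A sketch of the safer approach, shown here in Python rather than PL/SQL (in PL/SQL the equivalent would be to subtract the timestamps first and extract from the resulting interval):

```python
from datetime import datetime, timezone

def tz_delta_ms(t1: datetime, t2: datetime) -> int:
    """Millisecond difference between two aware datetimes.

    Subtracting the datetimes first yields a timedelta, so day and
    timezone carry-over are handled, unlike differencing the hour,
    minute and second fields one by one.
    """
    return round((t2 - t1).total_seconds() * 1000)

# A two-minute window that crosses midnight: field-by-field
# subtraction would report a large negative delta here.
t1 = datetime(2024, 1, 1, 23, 59, 0, tzinfo=timezone.utc)
t2 = datetime(2024, 1, 2, 0, 1, 0, tzinfo=timezone.utc)
print(tz_delta_ms(t1, t2))  # → 120000
```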
function TF(mycur test_cur)
return test_list pipelined
parallel_enable(partition mycur by hash(a))
is
pragma autonomous_transaction;
sid number;
counter number(19,0) := 0;
myrec test_rec;
t1 timestamp with time zone;
t2 timestamp with time zone;
begin
t1 := systimestamp;
select userenv('SID') into sid from dual;
dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): enter');
loop
fetch mycur into myRec;
exit when mycur%NOTFOUND;
-- attention: saves own SID in test_obj.a for indication to caller
-- how many sids have been involved
pipe row(test_obj(sid, myRec.b, myRec.c));
pipe row(test_obj(sid, myRec.b, myRec.c)); -- duplicate
pipe row(test_obj(sid, myRec.b, myRec.c)); -- duplicate once again
counter := counter + 1;
end loop;
t2 := systimestamp;
insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'slave');
commit;
dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): exit, piped #' || counter || ' records');
end TF;
function TF_any(mycur test_cur)
return test_list pipelined
parallel_enable(partition mycur by ANY)
is
pragma autonomous_transaction;
sid number;
counter number(19,0) := 0;
myrec test_rec;
t1 timestamp with time zone;
t2 timestamp with time zone;
begin
t1 := systimestamp;
select userenv('SID') into sid from dual;
dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): enter');
loop
fetch mycur into myRec;
exit when mycur%NOTFOUND;
-- attention: saves own SID in test_obj.a for indication to caller
-- how many sids have been involved
pipe row(test_obj(sid, myRec.b, myRec.c));
pipe row(test_obj(sid, myRec.b, myRec.c)); -- duplicate
pipe row(test_obj(sid, myRec.b, myRec.c)); -- duplicate once again
counter := counter + 1;
end loop;
t2 := systimestamp;
insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'slave');
commit;
dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): exit, piped #' || counter || ' records');
end TF_any;
end;
/
explain plan for
select /*+ first_rows */ test_obj(a, b, c)
from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,5) */ * from test_table tab)));
select * from table(dbms_xplan.display);
Plan hash value: 1037943675
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | 8168 | 3972K| 20 (0)| 00:00:01 | | | |
| 1 | PX COORDINATOR | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10001 | 8168 | 3972K| 20 (0)| 00:00:01 | Q1,01 | P->S | QC (RAND) |
| 3 | BUFFER SORT | | 8168 | 3972K| | | Q1,01 | PCWP | |
| 4 | VIEW | | 8168 | 3972K| 20 (0)| 00:00:01 | Q1,01 | PCWP | |
| 5 | COLLECTION ITERATOR PICKLER FETCH| TF | | | | | Q1,01 | PCWP | |
| 6 | PX RECEIVE | | 931K| 140M| 136 (2)| 00:00:02 | Q1,01 | PCWP | |
| 7 | PX SEND HASH | :TQ10000 | 931K| 140M| 136 (2)| 00:00:02 | Q1,00 | P->P | HASH |
| 8 | PX BLOCK ITERATOR | | 931K| 140M| 136 (2)| 00:00:02 | Q1,00 | PCWC | |
| 9 | TABLE ACCESS FULL | TEST_TABLE | 931K| 140M| 136 (2)| 00:00:02 | Q1,00 | PCWP | |
Note
- dynamic sampling used for this statement
explain plan for
select /*+ first_rows */ test_obj(a, b, c)
from table(test_pkg.TF_Any(CURSOR(select /*+ parallel(tab,5) */ * from test_table tab)));
select * from table(dbms_xplan.display);
Plan hash value: 4097140875
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | 8168 | 3972K| 20 (0)| 00:00:01 | | | |
| 1 | PX COORDINATOR | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10000 | 8168 | 3972K| 20 (0)| 00:00:01 | Q1,00 | P->S | QC (RAND) |
| 3 | VIEW | | 8168 | 3972K| 20 (0)| 00:00:01 | Q1,00 | PCWP | |
| 4 | COLLECTION ITERATOR PICKLER FETCH| TF_ANY | | | | | Q1,00 | PCWP | |
| 5 | PX BLOCK ITERATOR | | 931K| 140M| 136 (2)| 00:00:02 | Q1,00 | PCWC | |
| 6 | TABLE ACCESS FULL | TEST_TABLE | 931K| 140M| 136 (2)| 00:00:02 | Q1,00 | PCWP | |
Note
- dynamic sampling used for this statement

I posted about it here a few years ago and I more recently posted a question on Asktom. Unfortunately Tom was not able to find a technical reason for it to be there so I'm still a little in the dark as to why it is needed. The original question I posted is here:
Pipelined function partition by hash has extra sort
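As a rough intuition for why HASH needs the extra step, here is a toy Python model of the two distribution methods. This is only an illustration of the routing constraint, not Oracle's actual PX row distribution:

```python
from collections import defaultdict
from itertools import cycle

def distribute_by_hash(rows, key, n_slaves):
    # PARTITION BY HASH: each row is owned by exactly one slave,
    # determined by its key, so the distributor must group rows per
    # slave before they can be consumed -- the grouping step that
    # surfaces as BUFFER SORT in the HASH plan above.
    buckets = defaultdict(list)
    for row in rows:
        buckets[hash(key(row)) % n_slaves].append(row)
    return buckets

def distribute_by_any(rows, n_slaves):
    # PARTITION BY ANY: any slave may take any row, so rows can be
    # dealt out as they arrive (round-robin here) with no grouping.
    buckets = defaultdict(list)
    slaves = cycle(range(n_slaves))
    for row in rows:
        buckets[next(slaves)].append(row)
    return buckets

rows = [(i, f"payload-{i}") for i in range(10)]
by_hash = distribute_by_hash(rows, key=lambda r: r[0], n_slaves=3)
by_any = distribute_by_any(rows, n_slaves=3)
# Under ANY the load balances regardless of key skew; under HASH it
# depends entirely on how the keys hash.
print(sorted(len(v) for v in by_any.values()))  # → [3, 3, 4]
```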
I ran your tests with HASH vs ANY and the results are in line with the observations above....
declare
myList test_list := test_list();
myList2 test_list := test_list();
sids ton_t := ton_t();
sid number;
t1 timestamp with time zone;
t2 timestamp with time zone;
procedure LogPerfTable
is
type ton is table of number;
type tot is table of timestamp with time zone;
type clients_t is table of varchar2(256);
sids ton;
t1s tot;
t2s tot;
clients clients_t;
deltaTime integer;
btsPerSecond number(19,0);
edrsPerSecond number(19,0);
begin
select sid, t1, t2, client bulk collect into sids, t1s, t2s, clients from perf_data order by client;
if clients.count > 0 then
for i in clients.FIRST .. clients.LAST loop
deltaTime := test_pkg.TZDeltaToMilliseconds(t1s(i), t2s(i));
if deltaTime = 0 then deltaTime := 1; end if;
dbms_output.put_line(
'[' || to_char(t1s(i), 'hh:mi:ss') ||
'-' || to_char(t2s(i), 'hh:mi:ss') ||
']:' ||
' client ' || clients(i) || ' / sid #' || sids(i));
end loop;
end if;
end LogPerfTable;
begin
select userenv('SID') into sid from dual;
for i in 1..200000 loop
myList.extend; myList(myList.last) := test_obj(i, sysdate, to_char(i+2));
end loop;
-- save into the real table
insert into test_table select * from table(cast (myList as test_list));
-- save into the tmp table
insert into tmp_test_table select * from table(cast (myList as test_list));
dbms_output.put_line(chr(10) || '(1) copy ''mylist'' to ''mylist2'' by streaming via table function...');
delete from perf_data;
t1 := systimestamp;
select /*+ first_rows */ test_obj(a, b, c) bulk collect into myList2
from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,5) */ * from table(cast (myList as test_list)) tab)));
t2 := systimestamp;
insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'master');
LogPerfTable;
dbms_output.put_line('... saved #' || myList2.count || ' records');
select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
dbms_output.put_line(chr(10) || '(2) copy temporary ''tmp_test_table'' to ''mylist2'' by streaming via table function:');
delete from perf_data;
t1 := systimestamp;
select /*+ first_rows */ test_obj(a, b, c) bulk collect into myList2
from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,5) */ * from tmp_test_table tab)));
t2 := systimestamp;
insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'master');
LogPerfTable;
dbms_output.put_line('... saved #' || myList2.count || ' records');
select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
dbms_output.put_line(chr(10) || '(3) copy physical ''test_table'' to ''mylist2'' by streaming via table function:');
delete from perf_data;
t1 := systimestamp;
select /*+ first_rows */ test_obj(a, b, c) bulk collect into myList2
from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,5) */ * from test_table tab)));
t2 := systimestamp;
insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'master');
LogPerfTable;
dbms_output.put_line('... saved #' || myList2.count || ' records');
select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
dbms_output.put_line(chr(10) || '(4) copy temporary ''tmp_test_table'' to ''mylist2'' by streaming via table function ANY:');
delete from perf_data;
t1 := systimestamp;
select /*+ first_rows */ test_obj(a, b, c) bulk collect into myList2
from table(test_pkg.TF_any(CURSOR(select /*+ parallel(tab,5) */ * from tmp_test_table tab)));
t2 := systimestamp;
insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'master');
LogPerfTable;
dbms_output.put_line('... saved #' || myList2.count || ' records');
select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
dbms_output.put_line(chr(10) || '(5) copy physical ''test_table'' to ''mylist2'' by streaming via table function using ANY:');
delete from perf_data;
t1 := systimestamp;
select /*+ first_rows */ test_obj(a, b, c) bulk collect into myList2
from table(test_pkg.TF_any(CURSOR(select /*+ parallel(tab,5) */ * from test_table tab)));
t2 := systimestamp;
insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'master');
LogPerfTable;
dbms_output.put_line('... saved #' || myList2.count || ' records');
select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
end;
/
(1) copy 'mylist' to 'mylist2' by streaming via table function...
test_pkg.TF( sid => '918' ): enter
test_pkg.TF( sid => '918' ): exit, piped #200000 records
[01:40:19-01:40:29]: client master / sid #918
[01:40:19-01:40:29]: client slave / sid #918
... saved #600000 records
(2) copy temporary 'tmp_test_table' to 'mylist2' by streaming via table function:
[01:40:31-01:40:36]: client master / sid #918
[01:40:31-01:40:32]: client slave / sid #659
[01:40:31-01:40:32]: client slave / sid #880
[01:40:31-01:40:32]: client slave / sid #1045
[01:40:31-01:40:32]: client slave / sid #963
[01:40:31-01:40:32]: client slave / sid #712
... saved #600000 records
(3) copy physical 'test_table' to 'mylist2' by streaming via table function:
[01:40:37-01:41:05]: client master / sid #918
[01:40:37-01:40:42]: client slave / sid #738
[01:40:37-01:40:42]: client slave / sid #568
[01:40:37-01:40:42]: client slave / sid #618
[01:40:37-01:40:42]: client slave / sid #659
[01:40:37-01:40:42]: client slave / sid #963
... saved #3000000 records
(4) copy temporary 'tmp_test_table' to 'mylist2' by streaming via table function ANY:
[01:41:12-01:41:16]: client master / sid #918
[01:41:12-01:41:16]: client slave / sid #712
[01:41:12-01:41:16]: client slave / sid #1045
[01:41:12-01:41:16]: client slave / sid #681
[01:41:12-01:41:16]: client slave / sid #754
[01:41:12-01:41:16]: client slave / sid #880
... saved #600000 records
(5) copy physical 'test_table' to 'mylist2' by streaming via table function using ANY:
[01:41:18-01:41:38]: client master / sid #918
[01:41:18-01:41:38]: client slave / sid #681
[01:41:18-01:41:38]: client slave / sid #712
[01:41:18-01:41:38]: client slave / sid #754
[01:41:18-01:41:37]: client slave / sid #880
[01:41:18-01:41:38]: client slave / sid #1045
... saved #3000000 records

HTH
David