Drop default constraint on a table function
I need to drop some default constraints that appear to be tied to table-valued functions rather than actual tables. When I try the ALTER TABLE ... DROP CONSTRAINT command, it fails with an error along the lines of "unable to drop constraint because object is not a table."
My question is: how do I drop a constraint on a table function?
I suggest you review the documentation for TVFs and how they are (and can be) used. The table returned by a TVF (and here I refer specifically to multistatement TVFs) is defined using a subset of the CREATE TABLE syntax. It can be created with constraints of various types, not just defaults. Why? Because it suits the logic of the developer and (perhaps) because it assists the database engine or the logic that depends on the output of the function.
Below is one example that I used (written by Steve Kass) from a LONG time ago. Notice the primary key.
CREATE FUNCTION [dbo].[uf_sequence] (@N int)
RETURNS @T TABLE (
seq int not null primary key clustered
)
AS
/*
** 04/21/05.sbm - Bug #306. Initial version.
** Code provided by Steve Kass - MS .programming newsgroup
*/
BEGIN
DECLARE @place int
SET @place = 1
INSERT INTO @T (seq) VALUES (0)
WHILE @place <= @N/2 BEGIN
INSERT INTO @T (seq)
SELECT @place + Seq FROM @T
SET @place = @place + @place
END
INSERT INTO @T (seq)
SELECT @place + Seq FROM @T
WHERE Seq <= @N - @place
RETURN
END
go
For your particular case, the choice of a default constraint is likely due to the implementation of the logic in the function. Perhaps there are multiple insert statements and it was simpler/easier/more robust to use a default constraint rather than
repeatedly hard-code the value in each statement. By choosing a default constraint, the developer need only alter the constraint (once) if the value needs to be changed rather than finding and changing each statement that inserts or updates the table.
As you have already discerned, you can simply ignore any constraints that are defined on the tables returned by a TVF; ALTER TABLE cannot touch them because they are part of the function's definition, not standalone table objects.
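Since such a constraint lives inside the function's definition, the only way to remove or change it is to redefine the function itself. A minimal sketch (the object names here are hypothetical, not from your system):

```sql
-- ALTER FUNCTION replaces the whole definition, including the RETURNS table,
-- so the default is "dropped" simply by omitting it from the new definition.
ALTER FUNCTION dbo.uf_example (@N int)
RETURNS @T TABLE (
    seq  int     NOT NULL,
    flag char(1) NOT NULL     -- previously declared with: DEFAULT ('Y')
)
AS
BEGIN
    -- values that relied on the default must now be supplied explicitly
    INSERT INTO @T (seq, flag) VALUES (@N, 'Y');
    RETURN;
END
```

Using ALTER FUNCTION rather than DROP/CREATE preserves any permissions already granted on the function.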
Similar Messages
-
How to drop all constraints on a table?
Oracle 11gR2
I tried this but no luck!
ALTER TABLE testDB.dbo.testTable1
DROP ALL CONSTRAINT
GO
You will never have any 'luck' trying to execute SQL Server statements on an Oracle database.
There is no single Oracle command to drop all constraints from a table. One workaround:
1. Create a new table using CTAS - CREATE TABLE newtable AS SELECT * FROM oldtable
2. Drop the original table - DROP TABLE oldtable
3. Rename the new table to the old name - RENAME newtable TO oldtable
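The three steps above, written out in Oracle SQL (table names are placeholders; note that CTAS carries over data and NOT NULL constraints but not other constraint types, and it also loses indexes, triggers, and grants):

```sql
CREATE TABLE emp_copy AS SELECT * FROM emp;   -- copies data, not most constraints

DROP TABLE emp;

RENAME emp_copy TO emp;

-- verify what (if anything) remains
SELECT constraint_name, constraint_type
FROM   user_constraints
WHERE  table_name = 'EMP';
```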
Constraints will be gone. -
How to solve ora-00054 error while drop the constraint
I am trying to drop a constraint on a table, but it gives the error below:
ORA-00054: resource busy and acquire with NOWAIT specified
Can you please tell me how to solve this problem? On my PC I am not using that table anywhere in the system.
ALTER TABLE EIIS_JBWSTOCK
DROP CONSTRAINT CHK_TRAN_JOB_TYPE;
this is my code for alter table constraint.
Thanks
You may find the <sid, serial#> of the blocking session and kill it:
SELECT c.owner,
c.object_name,
c.object_type,
b.SID,
b.serial#,
b.status,
b.osuser,
b.machine
FROM v$locked_object a, v$session b, dba_objects c
WHERE b.SID = a.session_id AND a.object_id = c.object_id; --You may add extra condition for your table.
ALTER SYSTEM KILL SESSION '<sid>,<serial#>'; -
Script will create database, 3 database objects and publish.
The error is due to the generated script for the conflict tables not stripping out default constraints that reference a UDF.
As you can see below, the failure is in the generated script for the conflict table.
The conflict table should be a bucket table that doesn't enforce data integrity.
Notice how the default constraints for the columns someint and somestring were stripped out of the generation logic; however, the default constraint that uses a UDF persists, and it reuses the same object name that was used on the production table (the bold line). This occurs whether I explicitly name the constraint or let the system generate the name for me, as in the example posted.
The only way I can see to get around this right now is to drop every default constraint in the system that uses a UDF, publish, and then add the constraints back, which leaves a window for invalid data and involves a lot of moving parts. This all worked in SQL 2000, 2005, 2008, and 2008 R2; it stopped working in SQL 2012 and still does not work in SQL 2014.
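The drop/publish/re-add workaround described above would look roughly like this (a sketch only, using the object names from the repro script below; the system-generated constraint name will differ on your system, and the gap between the DROP and the ADD is exactly the window where invalid data can get in):

```sql
-- 1. Drop the UDF-based default before creating the publication
ALTER TABLE dbo.repTable DROP CONSTRAINT [DF__repTable__id__117F9D94];

-- 2. Create the publication and generate the snapshot here
--    (sp_addmergepublication / sp_addmergearticle as in the repro script)

-- 3. Re-add the default afterwards, this time with an explicit name
ALTER TABLE dbo.repTable
    ADD CONSTRAINT df_repTable_id DEFAULT (dbo.repUDF()) FOR id;
```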
Error messages:
Message: There is already an object named 'DF__repTable__id__117F9D94' in the database.
Could not create constraint. See previous errors.
Command Text: CREATE TABLE [dbo].[MSmerge_conflict_MergeRepFailurePublication_repTable](
[id] [varchar](8) NULL CONSTRAINT [DF__repTable__id__117F9D94] DEFAULT ([dbo].[repUDF]()),
[somedata] [varchar](64) NULL,
[rowguid] [uniqueidentifier] ROWGUIDCOL NULL,
[someint] [int] NULL,
[somestring] [varchar](64) NULL
Parameters:
Stack: at Microsoft.SqlServer.Replication.AgentCore.ReMapSqlException(SqlException e, SqlCommand command)
at Microsoft.SqlServer.Replication.AgentCore.AgentExecuteNonQuery(SqlCommand command, Int32 queryTimeout)
at Microsoft.SqlServer.Replication.AgentCore.ExecuteDiscardResults(CommandSetupDelegate commandSetupDelegate, Int32 queryTimeout)
at Microsoft.SqlServer.Replication.Snapshot.YukonMergeConflictTableScriptingManager.ApplyBaseConflictTableScriptToPublisherIfNeeded(String strConflictScriptPath)
at Microsoft.SqlServer.Replication.Snapshot.BaseMergeConflictTableScriptingManager.DoConflictTableScriptingTransaction(SqlConnection connection)
at Microsoft.SqlServer.Replication.RetryableSqlServerTransactionManager.ExecuteTransaction(Boolean bLeaveTransactionOpen)
at Microsoft.SqlServer.Replication.Snapshot.BaseMergeConflictTableScriptingManager.DoConflictTableScripting()
at Microsoft.SqlServer.Replication.Snapshot.MergeSmoScriptingManager.GenerateTableArticleCftScript(Scripter scripter, BaseArticleWrapper articleWrapper, Table smoTable)
at Microsoft.SqlServer.Replication.Snapshot.MergeSmoScriptingManager.GenerateTableArticleScripts(ArticleScriptingBundle articleScriptingBundle)
at Microsoft.SqlServer.Replication.Snapshot.MergeSmoScriptingManager.GenerateArticleScripts(ArticleScriptingBundle articleScriptingBundle)
at Microsoft.SqlServer.Replication.Snapshot.SmoScriptingManager.GenerateObjectScripts(ArticleScriptingBundle articleScriptingBundle)
at Microsoft.SqlServer.Replication.Snapshot.SmoScriptingManager.DoScripting()
at Microsoft.SqlServer.Replication.Snapshot.SqlServerSnapshotProvider.DoScripting()
at Microsoft.SqlServer.Replication.Snapshot.MergeSnapshotProvider.DoScripting()
at Microsoft.SqlServer.Replication.Snapshot.SqlServerSnapshotProvider.GenerateSnapshot()
at Microsoft.SqlServer.Replication.SnapshotGenerationAgent.InternalRun()
at Microsoft.SqlServer.Replication.AgentCore.Run() (Source: MSSQLServer, Error number: 2714)
Get help: http://help/2714
Server COL-PCANINOW540\SQL2012, Level 16, State 0, Procedure , Line 1
Could not create constraint. See previous errors. (Source: MSSQLServer, Error number: 1750)
Get help: http://help/1750
Pauly C
USE [master]
GO
CREATE DATABASE [MergeRepFailure]
GO
ALTER DATABASE [MergeRepFailure] SET COMPATIBILITY_LEVEL = 110
GO
USE [MergeRepFailure]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
create view
[dbo].[repView] as select right(newid(),8) as id
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE FUNCTION [dbo].[repUDF]()
RETURNS varchar(8)
AS
BEGIN
declare @val varchar(8)
select top 1 @val = id from [repView]
return @val
END
GO
create table repTable (
id varchar(8) default([dbo].[repUDF]()),
somedata varchar(64) null,
rowguid uniqueidentifier ROWGUIDCOL default(newid()),
someint int default(1),
somestring varchar(64) default('somestringvalue')
)
GO
insert into reptable (somedata) values ('whatever1')
insert into reptable (somedata) values ('whatever2')
go
/*test to make sure function is working*/
select * from reptable
GO
/*Publish database*/
use [MergeRepFailure]
exec sp_replicationdboption @dbname = N'MergeRepFailure', @optname = N'merge publish', @value = N'true'
GO
declare @Descrip nvarchar(128)
select @Descrip = 'Merge publication of database ''MergeRepFailure'' from Publisher ''' + @@servername +'''.'
print @Descrip
-- Adding the merge publication
use [MergeRepFailure]
exec sp_addmergepublication @publication = N'MergeRepFailurePublication', @description = @Descrip,
@sync_mode = N'native', @retention = 14, @allow_push = N'true', @allow_pull = N'true', @allow_anonymous = N'true',
@enabled_for_internet = N'false', @snapshot_in_defaultfolder = N'true', @compress_snapshot = N'false', @ftp_port = 21,
@ftp_subdirectory = N'ftp', @ftp_login = N'anonymous', @allow_subscription_copy = N'false', @add_to_active_directory = N'false',
@dynamic_filters = N'false', @conflict_retention = 14, @keep_partition_changes = N'false', @allow_synctoalternate = N'false',
@max_concurrent_merge = 0, @max_concurrent_dynamic_snapshots = 0, @use_partition_groups = null, @publication_compatibility_level = N'100RTM',
@replicate_ddl = 1, @allow_subscriber_initiated_snapshot = N'false', @allow_web_synchronization = N'false', @allow_partition_realignment = N'true',
@retention_period_unit = N'days', @conflict_logging = N'both', @automatic_reinitialization_policy = 0
GO
exec sp_addpublication_snapshot @publication = N'MergeRepFailurePublication', @frequency_type = 4, @frequency_interval = 14, @frequency_relative_interval = 1,
@frequency_recurrence_factor = 0, @frequency_subday = 1, @frequency_subday_interval = 5, @active_start_time_of_day = 500, @active_end_time_of_day = 235959,
@active_start_date = 0, @active_end_date = 0, @job_login = null, @job_password = null, @publisher_security_mode = 1
use [MergeRepFailure]
exec sp_addmergearticle @publication = N'MergeRepFailurePublication', @article = N'repTable', @source_owner = N'dbo', @source_object = N'repTable', @type = N'table',
@description = null, @creation_script = null, @pre_creation_cmd = N'drop', @schema_option = 0x000000010C034FD1, @identityrangemanagementoption = N'manual',
@destination_owner = N'dbo', @force_reinit_subscription = 1, @column_tracking = N'false', @subset_filterclause = null, @vertical_partition = N'false',
@verify_resolver_signature = 1, @allow_interactive_resolver = N'false', @fast_multicol_updateproc = N'true', @check_permissions = 0, @subscriber_upload_options = 0,
@delete_tracking = N'true', @compensate_for_errors = N'false', @stream_blob_columns = N'false', @partition_options = 0
GO
use [MergeRepFailure]
exec sp_addmergearticle @publication = N'MergeRepFailurePublication', @article = N'repView', @source_owner = N'dbo', @source_object = N'repView',
@type = N'view schema only', @description = null, @creation_script = null, @pre_creation_cmd = N'drop', @schema_option = 0x0000000008000001,
@destination_owner = N'dbo', @destination_object = N'repView', @force_reinit_subscription = 1
GO
use [MergeRepFailure]
exec sp_addmergearticle @publication = N'MergeRepFailurePublication', @article = N'repUDF', @source_owner = N'dbo', @source_object = N'repUDF',
@type = N'func schema only', @description = null, @creation_script = null, @pre_creation_cmd = N'drop', @schema_option = 0x0000000008000001,
@destination_owner = N'dbo', @destination_object = N'repUDF', @force_reinit_subscription = 1
GO
More information: after running a Profiler trace, of the following two statements the one for the column whose default uses a UDF returns a row while the one for the other default does not. This might be the cause of the bug. Is the same logic that generates the object on the subscriber also used to generate the conflict table?
exec sp_executesql N'
select so.name, schema_name(so.schema_id)
from sys.sql_dependencies d
inner join sys.objects so
on d.referenced_major_id = so.object_id
where so.type in (''FN'', ''FS'', ''FT'', ''TF'', ''IF'')
and d.class in (0,1)
and d.referenced_major_id <> object_id(@base_table, ''U'')
and d.object_id = object_id(@constraint, ''D'')',N'@base_table nvarchar(517),@constraint nvarchar(517)',@base_table=N'[dbo].[repTable]',@constraint=N'[dbo].[DF__repTable__id__117F9D94]'
exec sp_executesql N'
select so.name, schema_name(so.schema_id)
from sys.sql_dependencies d
inner join sys.objects so
on d.referenced_major_id = so.object_id
where so.type in (''FN'', ''FS'', ''FT'', ''TF'', ''IF'')
and d.class in (0,1)
and d.referenced_major_id <> object_id(@base_table, ''U'')
and d.object_id = object_id(@constraint, ''D'')',N'@base_table nvarchar(517),@constraint nvarchar(517)',@base_table=N'[dbo].[repTable]',@constraint=N'[dbo].[DF__repTable__somein__1367E606]'
Pauly C -
Using Pipeline Table functions with other tables
I am on DB 11.2.0.2 and have sparingly used pipelined table functions, but I am considering them for a project that has some fairly big tables (lots of rows). In my tests, selecting from just the pipelined table performs pretty well (whether directly from the pipelined table or from the view I created on top of it). Where I start to see some degradation is when I try to join the pipelined table view to other tables and add WHERE conditions.
ie:
SELECT A.empno, A.ename, A.job, B.sal
FROM EMP_VIEW A, EMP B
WHERE A.empno = B.empno AND
B.mgr = '7839'
I have seen some articles and blogs that mention this as a cardinality issue and offer some undocumented methods to try to combat it.
Can someone please give me some advice or tips on this? Thanks!
I have created a simple example using the emp table below to help illustrate what I am doing.
DROP TYPE EMP_TYPE;
DROP TYPE EMP_SEQ;
CREATE OR REPLACE TYPE EMP_SEQ AS OBJECT
( EMPNO NUMBER(10),
ENAME VARCHAR2(100),
JOB VARCHAR2(100));
CREATE OR REPLACE TYPE EMP_TYPE AS TABLE OF EMP_SEQ;
CREATE OR REPLACE FUNCTION get_emp return EMP_TYPE PIPELINED AS
BEGIN
FOR cur IN (SELECT
empno,
ename,
job
FROM emp)
LOOP
PIPE ROW(EMP_SEQ(cur.empno,
cur.ename,
cur.job));
END LOOP;
RETURN;
END get_emp;
create OR REPLACE view EMP_VIEW as select * from table(get_emp());
SELECT A.empno, A.ename, A.job, B.sal
FROM EMP_VIEW A, EMP B
WHERE A.empno = B.empno AND
B.mgr = '7839'
I am on DB 11.2.0.2 and have sparingly used pipelined table functions but am considering it for a project that has some fairly big (lots of rows) sized tables
Which begs the question: WHY? What PROBLEM are you trying to solve and what makes you think using pipelined table functions is the best way to solve that problem?
The lack of information about cardinality is the likely root of the degradation you noticed as already mentioned.
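One of the undocumented workarounds alluded to in the question is the CARDINALITY hint, which at least gives the optimizer a row-count estimate for the table function (a sketch against the example objects above; 14 is just the classic EMP row count, and since the hint is undocumented and unsupported it should be tested carefully before anything relies on it):

```sql
SELECT /*+ CARDINALITY(a 14) */
       a.empno, a.ename, a.job, b.sal
FROM   TABLE(get_emp()) a,
       emp b
WHERE  a.empno = b.empno
AND    b.mgr   = 7839;
```

With a realistic estimate in place, the optimizer can at least choose a sensible join method and order instead of guessing at the function's output size.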
But that should be a red flag about pipelined functions in general. PIPELINED functions hide virtually ALL KNOWLEDGE about the result set that is produced; cardinality is just the tip of the iceberg. Those functions pretty much say 'here is a result set' without ANY information about the number of rows (cardinality), distinct values for any columns, nullability of any columns, constraints that might apply to any columns (foreign key, primary key) and so on.
If you are going to hide all of that information from Oracle that would normally be used to help optimize queries and select the appropriate execution plan you need to have a VERY good reason.
The use of PIPELINED functions should be reserved for those use cases where ordinary SQL and PL/SQL cannot get the job done. That is they are a 'special case' solution.
The classic use case for those functions is the transform stage of ETL, where multiple pipelined functions are chained together: one function feeds its rows to the next function, which feeds its rows to another, and so on. Each of those 'chained' functions is roughly analogous to a full table scan of the data that often does not need to be joined to other data, except perhaps low-volume lookup tables where the data may even be cached.
I suggest that any exploratory or prototyping work you do use standard relational tables, until you run into a problem whose solution might actually require PIPELINED functions. -
Performance issues with pipelined table functions
I am testing pipelined table functions to be able to re-use the base_query function. Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something? The processor function is from "Improving performance with pipelined table functions" (http://www.oracle-developer.net/display.php?id=429).
Edit: The underlying query returns 500,000 rows in about 3 minutes, so there are no performance issues with the query itself.
Many thanks in advance.
CREATE OR REPLACE PACKAGE pipeline_example
IS
TYPE resultset_typ IS REF CURSOR;
TYPE row_typ IS RECORD (colC VARCHAR2(200), colD VARCHAR2(200), colE VARCHAR2(200));
TYPE table_typ IS TABLE OF row_typ;
FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
RETURN resultset_typ;
c_default_limit CONSTANT PLS_INTEGER := 100;
FUNCTION processor (
p_source_data IN resultset_typ,
p_limit_size IN PLS_INTEGER DEFAULT c_default_limit)
RETURN table_typ
PIPELINED
PARALLEL_ENABLE(PARTITION p_source_data BY ANY);
PROCEDURE with_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ);
PROCEDURE no_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ);
END pipeline_example;
CREATE OR REPLACE PACKAGE BODY pipeline_example
IS
FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
RETURN resultset_typ
IS
o_resultset resultset_typ;
BEGIN
OPEN o_resultset FOR
SELECT colC, colD, colE
FROM some_table
WHERE colA = ArgA AND colB = argB;
RETURN o_resultset;
END base_query;
FUNCTION processor (
p_source_data IN resultset_typ,
p_limit_size IN PLS_INTEGER DEFAULT c_default_limit)
RETURN table_typ
PIPELINED
PARALLEL_ENABLE(PARTITION p_source_data BY ANY)
IS
aa_source_data table_typ;-- := table_typ ();
BEGIN
LOOP
FETCH p_source_data
BULK COLLECT INTO aa_source_data
LIMIT p_limit_size;
EXIT WHEN aa_source_data.COUNT = 0;
/* Process the batch of (p_limit_size) records... */
FOR i IN 1 .. aa_source_data.COUNT
LOOP
PIPE ROW (aa_source_data (i));
END LOOP;
END LOOP;
CLOSE p_source_data;
RETURN;
END processor;
PROCEDURE with_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ)
IS
BEGIN
OPEN o_resultset FOR
SELECT /*+ PARALLEL(t, 5) */ colC,
SUM (CASE WHEN colD > colE AND colE != '0' THEN colD / ColE END)de,
SUM (CASE WHEN colE > colD AND colD != '0' THEN colE / ColD END)ed,
SUM (CASE WHEN colD = colE AND colD != '0' THEN '1' END) de_one,
SUM (CASE WHEN colD = '0' OR colE = '0' THEN '0' END) de_zero
FROM TABLE (processor (base_query (argA, argB),100)) t
GROUP BY colC
ORDER BY colC;
END with_pipeline;
PROCEDURE no_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ)
IS
BEGIN
OPEN o_resultset FOR
SELECT colC,
SUM (CASE WHEN colD > colE AND colE != '0' THEN colD / ColE END)de,
SUM (CASE WHEN colE > colD AND colD != '0' THEN colE / ColD END)ed,
SUM (CASE WHEN colD = colE AND colD != '0' THEN 1 END) de_one,
SUM (CASE WHEN colD = '0' OR colE = '0' THEN '0' END) de_zero
FROM (SELECT colC, colD, colE
FROM some_table
WHERE colA = ArgA AND colB = argB)
GROUP BY colC
ORDER BY colC;
END no_pipeline;
END pipeline_example;
ALTER PACKAGE pipeline_example COMPILE;
Earthlink wrote:
Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something?
Well, we're missing a lot here.
Like:
- a database version
- how did you test
- what data do you have, how is it distributed, indexed
and so on.
If you want to find out what's going on, then use a TRACE with wait events.
All necessary steps are explained in these threads:
HOW TO: Post a SQL statement tuning request - template posting
http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
Another nice one is RUNSTATS:
http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551378329289980701 -
How to change the existing constraint for the table.
Hi, my table has a check constraint, but now I want to change the values in that constraint.
Is it possible to change the check constraint's value list without disabling or dropping the constraint? I don't want to change the constraint name either.
Below are my existing and proposed constraint syntaxes.
My existing constraint syntax is:
CONSTRAINT CONS_MRANTYPE
CHECK (MRAN_TYPE IN ('SP','JW','SD','SC','OT'))
My proposed constraint syntax is:
CONSTRAINT CONS_MRANTYPE
CHECK (MRAN_TYPE IN ('SP','JW','SD','SC','OT','JR'))
Thanks
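For reference: Oracle has no syntax to edit a CHECK condition in place, so keeping the constraint name means dropping and re-adding it (a sketch; the table name is hypothetical, since it isn't given in the question):

```sql
ALTER TABLE my_table DROP CONSTRAINT CONS_MRANTYPE;

ALTER TABLE my_table ADD CONSTRAINT CONS_MRANTYPE
    CHECK (MRAN_TYPE IN ('SP','JW','SD','SC','OT','JR'));
```

Existing rows are validated when the new constraint is added, so the swap is safe as long as no non-conforming data sneaks in between the two statements.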
Indra
Hi Indra,
this forum is for problems related to Oracle SQL Developer Data Modeler.
Philip -
Need some help with the Table Function Operator
I'm on OWB 10gR2 for Sun/Solaris 10 going against some 10gR2 DB's...
I've been searching up and down trying to figure out how to make OWB use a table function (TF) that joins with another table, allowing a column of the joined table to be a parameter into the TF. I can't seem to get it to work, although I am able to get this to work in regular SQL. Here's the setup:
-- Source Table:
DROP TABLE "ZZZ_ROOM_MASTER_EX";
CREATE TABLE "ZZZ_ROOM_MASTER_EX"
( "ID" NUMBER(8,0),
"ROOM_NUMBER" VARCHAR2(200),
"FEATURES" VARCHAR2(4000)
);
-- Example Data:
Insert into ZZZ_ROOM_MASTER_EX (ID,ROOM_NUMBER,FEATURES) values (1,'Room 1',null);
Insert into ZZZ_ROOM_MASTER_EX (ID,ROOM_NUMBER,FEATURES) values (2,'Room 2',null);
Insert into ZZZ_ROOM_MASTER_EX (ID,ROOM_NUMBER,FEATURES) values (3,'Room 3','1,1;2,3;');
Insert into ZZZ_ROOM_MASTER_EX (ID,ROOM_NUMBER,FEATURES) values (4,'Room 4','5,2;5,4;');
Insert into ZZZ_ROOM_MASTER_EX (ID,ROOM_NUMBER,FEATURES) values (5,'Room 5',' ');
-- Destination Table:
DROP TABLE "ZZZ_ROOM_FEATURES_EX";
CREATE TABLE "ZZZ_ROOM_FEATURES_EX"
( "ROOM_NUMBER" VARCHAR2(200),
"FEATUREID" NUMBER(8,0),
"QUANTITY" NUMBER(8,0)
);
-- Types for output table:
CREATE OR REPLACE TYPE FK_Row_EX AS OBJECT (
ID NUMBER(8,0),
QUANTITY NUMBER(8,0)
);
CREATE OR REPLACE TYPE FK_Table_EX AS TABLE OF FK_Row_EX;
-- Package Dec:
CREATE OR REPLACE
PACKAGE ZZZ_SANDBOX_EX IS
FUNCTION UNFK(inputString VARCHAR2) RETURN FK_Table_EX;
END ZZZ_SANDBOX_EX;
-- Package Body:
CREATE OR REPLACE
PACKAGE BODY ZZZ_SANDBOX_EX IS
FUNCTION UNFK(inputString VARCHAR2) RETURN FK_Table_EX
AS
RETURN_VALUE FK_Table_EX := FK_Table_EX();
i NUMBER(8,0) := 0;
BEGIN
-- TODO: Put some real code in here that will actually read the
-- input string, parse it out, and put data in to RETURN_VALUE
WHILE(i < 3) LOOP
RETURN_VALUE.EXTEND;
RETURN_VALUE(RETURN_VALUE.LAST) := FK_Row_EX(4, 5);
i := i + 1;
END LOOP;
RETURN RETURN_VALUE;
END UNFK;
END ZZZ_SANDBOX_EX;
I've got a source system built by lazy DBA's and app developers who decided to store foreign keys for many-to-many relationships as delimited structures in driving tables. I need to build a generic table function to parse this data and return it as an actual table. In my example code, I don't actually have the parsing part written yet (I need to see how many different formats the source system uses first) so I just threw in some stub code to generate a few rows of 4's and 5's to return.
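For when the parsing part does get written: if the formats all look like the sample data above ('featureid,quantity;featureid,quantity;'), a body along these lines might do it (a sketch under that assumption; it skips blank or whitespace-only inputs such as the ' ' in row 5):

```sql
CREATE OR REPLACE PACKAGE BODY ZZZ_SANDBOX_EX IS
  FUNCTION UNFK(inputString VARCHAR2) RETURN FK_Table_EX
  AS
    RETURN_VALUE FK_Table_EX := FK_Table_EX();
    pair         VARCHAR2(4000);
    i            PLS_INTEGER := 1;
  BEGIN
    LOOP
      -- i-th 'id,qty' pair between semicolons
      pair := REGEXP_SUBSTR(inputString, '[^;]+', 1, i);
      EXIT WHEN pair IS NULL;
      IF TRIM(pair) IS NOT NULL THEN
        RETURN_VALUE.EXTEND;
        RETURN_VALUE(RETURN_VALUE.LAST) := FK_Row_EX(
          TO_NUMBER(REGEXP_SUBSTR(pair, '[^,]+', 1, 1)),   -- feature id
          TO_NUMBER(REGEXP_SUBSTR(pair, '[^,]+', 1, 2)));  -- quantity
      END IF;
      i := i + 1;
    END LOOP;
    RETURN RETURN_VALUE;
  END UNFK;
END ZZZ_SANDBOX_EX;
```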
I can get the data from my source table to my destination table using the following SQL statement:
-- from source table joined with table function
INSERT INTO ZZZ_ROOM_FEATURES_EX(
ROOM_NUMBER,
FEATUREID,
QUANTITY)
SELECT
ZZZ_ROOM_MASTER_EX.ROOM_NUMBER,
UNFK.ID,
UNFK.QUANTITY
FROM
ZZZ_ROOM_MASTER_EX,
TABLE(ZZZ_SANDBOX_EX.UNFK(ZZZ_ROOM_MASTER_EX.FEATURES)) UNFK
Now, the big question is: how do I do this from OWB? I've tried several different variations of my function and settings in OWB to see if I can build a single SELECT statement that joins a regular table with a table function, but none of them seem to work; I end up with generated SQL that won't compile because it doesn't reference the source table correctly:
INSERT
/*+ APPEND PARALLEL("ZZZ_ROOM_FEATURES_EX") */
INTO
"ZZZ_ROOM_FEATURES_EX"
("ROOM_NUMBER",
"FEATUREID",
"QUANTITY")
(SELECT
"ZZZ_ROOM_MASTER_EX"."ROOM_NUMBER" "ROOM_NUMBER",
"INGRP2"."ID" "ID_1",
"INGRP2"."QUANTITY" "QUANTITY"
FROM
(SELECT
"UNFK"."ID" "ID",
"UNFK"."QUANTITY" "QUANTITY"
FROM
TABLE ( "ZZZ_SANDBOX_EX"."UNFK2" ("ZZZ_ROOM_MASTER_EX"."FEATURES")) "UNFK") "INGRP2",
"ZZZ_ROOM_MASTER_EX" "ZZZ_ROOM_MASTER_EX"
As you can see, it's trying to create a sub-query in the FROM clause--causing it to just ask for "ZZZ_ROOM_MASTER_EX"."FEATURES" as an input--which isn't available because it's outside of the sub-query!
Is this some kind of bug with the code generator, or am I doing something seriously wrong here? Any help will be greatly appreciated!
Hello Everybody!
Thank you for all your responses!
I had changed this work area into an internal table and changed the SELECT query. Please let me know if this causes any performance issues.
I had created a Z table with the following fields :
ZADS :
MANDT
VKORG
ABGRU.
I had written a select query as below.
I had removed the SELECT SINGLE and, instead of using the structure it_rej, I changed it into an internal table:
select vkorg abgru from ZADS into it_rej.
Earlier :
IT_REJ is a Work area:
DATA : BEGIN OF IT_REJ,
VKORG TYPE VBAK-VKORG,
ABGRU TYPE VBAP-ABGRU,
END OF IT_REJ.
Now :
DATA : BEGIN OF IT_REJ occurs 0,
VKORG TYPE VBAK-VKORG,
ABGRU TYPE VBAP-ABGRU,
END OF IT_REJ.
I guess this will fix the issue, correct?
Please suggest!
Regards,
Developer. -
Default constraints replicating when I didn't want them to
Using SQL Server 2008, I used a TSQL script to set up transactional replication, including sp_addarticle. I did not plan to replicate default values, but they replicated anyway. After seeing them on the subscriber, I generated the script for the
publication (using SSMS) to check the @schema_option value. It was 0x000000000803108F. Notice that 0x800 is not set. So why are default constraints replicating? That's my question.
As scripted by SSMS, after seeing the defaults show up on the subscriber:
exec sp_addarticle @publication = N'DBDistribution-GroupCharlie-Tables', @article = N'DistributionContract', @source_owner = N'dbo'
, @source_object = N'DistributionContract', @type = N'logbased', @description = N'', @creation_script = N'', @pre_creation_cmd = N'truncate'
, @schema_option = 0x000000000803108F
, @identityrangemanagementoption = N'none', @destination_table = N'DistributionContract', @destination_owner = N'dbo'
, @status = 24, @vertical_partition = N'false'
, @ins_cmd = N'CALL [dbo].[sp_MSins_dboDistributionContract]'
, @del_cmd = N'CALL [dbo].[sp_MSdel_dboDistributionContract]'
, @upd_cmd = N'SCALL [dbo].[sp_MSupd_dboDistributionContract]'
GO
Note: the table did not exist on the subscriber, so applying the snapshot created it. This query against the subscriber shows that all the publisher's constraints were created about 10 minutes after the table was created.
select t.name, t.create_date, df.name, df.create_date
from sys.default_constraints df
join sys.tables t on df.parent_object_id = t.object_id
where t.name = 'distributioncontract';
If these are unique constraints they will be replicated as part of the indexes - but it does not sound like that is the problem here. Can I see your script of the problem table?
-
How to pass a page parameter into a table function in HTML DB
I created this object and table function in database.
create or replace TYPE date_flow_type
AS OBJECT (
time date,
max_time number,
avg_total NUMBER,
sum_total NUMBER,
max_total NUMBER,
change_rate number
);
create or replace TYPE date_flow_table_type AS TABLE OF date_flow_type;
create or replace function ret_date(p_date date default sysdate) return date_flow_table_type is
v_tbl1 date_flow_table_type :=date_flow_table_type();
begin
v_tbl1.extend;
v_tbl1(v_tbl1.last):=date_flow_type (p_date,1,1,1,1,1);
return v_tbl1;
end;
and it works correctly in HTML DB when used in these ways:
SELECT TIME da,
max_time max_time,
sum_total total,
max_total max_total,
change_rate
FROM TABLE ( ret_icp_date_flow ) a;
SELECT TIME da,
max_time max_time,
sum_total total,
max_total max_total,
change_rate
FROM TABLE ( ret_icp_date_flow( sysdate-1 )) a;
but it returns the error
ORA-00904: "RET_ICP_DATE_FLOW": invalid identifier
when passing a page parameter into the table function:
SELECT TIME da,
max_time max_time,
sum_total total,
max_total max_total,
change_rate
FROM TABLE ( ret_icp_date_flow( to_date(:p1_date,'yyyy-mm-dd') )) a
and this sql is correct while running in sqlplus.
Hi!
Thanks for your reply!
I have tried this solution but it doesn't work!
When I do getInitParameter in the init function, the servlet takes the default values...
Maybe I have written something wrong?
Excuse me for my english,
Thanks -
Oracle 11g Table function returns no records on first call
Hello,
On a Oracle 11g R2 I've a table function ( PIPELINED ) returning rows selected from a table.
The first time the function is selected, in a session ( I've tried to disconnect and log in again ), it returns no rows.
I've tried to log the call using DBMS_OUTPUT and from what I see the select on the table function returns no rows and no output is printed. So I presume Oracle is not calling the function.
The same function on a similar environment ( same db versions, patches and database structure ) works fine. The second environment is a production environment so it has more memory and some other settings enabled.
Does anyone know of settings that can relate to this behaviour ?
Thanks in advance for the help.
Regards,
Stefano Muret
Thank you for answering so fast.
Here's the function code:
FUNCTION template_parameters (iTemplate IN TEMPLATE_RAW_DATA.TMPL_ID%TYPE := NULL)
RETURN table_type_tmpl_parameters PIPELINED
IS
li_exception INTEGER DEFAULT -20025;
POUT_PARM TABLE_TYPE_TMPL_PARAMETERS;
lt_parms table_type_tmpl_parms_raw;
sParmCheck VARCHAR2(4000);
iOccurrence INTEGER;
BEGIN
pOut_Parm := table_type_tmpl_parameters();
pOut_Parm.EXTEND;
select
tmpl_id
,tmpl_name
,replace(upper(trim(sql_out)),'[SCHEMA].')
,UPPER(TRIM(out_tmpl_parms))
bulk collect into lt_parms
from ref_templates
where tmpl_id = NVL(iTemplate,tmpl_id)
order by tmpl_id;
FOR k IN 1..lt_parms.COUNT
LOOP
pOut_Parm(1).tmpl_id := lt_parms(k).tmpl_id;
pOut_Parm(1).tmpl_name := lt_parms(k).tmpl_name;
FOR i IN 1..2
LOOP
IF i = 1 THEN
sParmCheck := lt_parms(k).sql_out;
ELSE
sParmCheck := lt_parms(k).sql_parms;
END IF;
iOccurrence := 1;
pOut_Parm(1).parameter_name := regexp_substr(sParmCheck,'\[[^\[]+\]',1,iOccurrence);
WHILE pOut_Parm(1).parameter_name IS NOT NULL
LOOP
PIPE ROW (pOut_Parm(1));
iOccurrence := iOccurrence + 1;
pOut_Parm(1).parameter_name := regexp_substr(sParmCheck,'\[[^\[]+\]',1,iOccurrence);
END LOOP;
END LOOP;
END LOOP;
RETURN;
EXCEPTION
WHEN OTHERS THEN
RAISE_APPLICATION_ERROR(li_exception,SUBSTR(SQLERRM,1,1000));
RETURN;
END template_parameters;
This function is part of a package.
The data on both environments is the same. -
How to add a column with a default value to a compressed table
Hi,
While trying to add a column with a default value to a compressed table, I am getting an error.
Even when I tried a NOCOMPRESS command on the table, it still gives an error that add/drop is not allowed on a compressed table.
Can anyone help me with this?
Thanks.
Aman wrote:
while trying to add column to compressed table with default value i am getting error.
This is clearly explained in the Oracle doc:
"You cannot add a column with a default value to a compressed table or to a partitioned table containing any compressed partition, unless you first disable compression for the table or partition."
http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_3001.htm#sthref5163
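A sketch of the usual workaround on 10g (table and index names here are illustrative): decompress the segment, add the column, then re-compress. Note that a plain ALTER TABLE ... NOCOMPRESS only affects future blocks, which is why it did not help above; MOVE actually rewrites the existing data.

```sql
-- my_tab / my_tab_ix are hypothetical names; MOVE rewrites the segment,
-- so expect locks and rebuild time on a large table.
ALTER TABLE my_tab MOVE NOCOMPRESS;   -- decompress existing rows, clear the compression flag
ALTER TABLE my_tab ADD (status VARCHAR2(10) DEFAULT 'NEW');
ALTER TABLE my_tab MOVE COMPRESS;     -- re-compress the data

-- MOVE leaves indexes UNUSABLE, so rebuild them afterwards:
ALTER INDEX my_tab_ix REBUILD;
```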
Nicolas. -
10g: delay for collecting results from parallel pipelined table functions
When parallel pipelined table functions are properly started and generate output records, there is a delay before the consuming main thread gathers these records.
This delay is huge compared with the run-time of the worker threads.
For my application it goes like this:
main thread timing efforts to start worker and collect their results:
[10:50:33-10:50:49]:JOMA: create (master): 015.93 sec (#66356 records, #4165/sec)
worker threads:
[10:50:34-10:50:39]:JOMA: create (slave) : 005.24 sec (#2449 EDRs, #467/sec, #0 errored / #6430 EBTMs, #1227/sec, #0 errored) - bulk #1 / sid #816
[10:50:34-10:50:39]:JOMA: create (slave) : 005.56 sec (#2543 EDRs, #457/sec, #0 errored / #6792 EBTMs, #1221/sec, #0 errored) - bulk #1 / sid #718
[10:50:34-10:50:39]:JOMA: create (slave) : 005.69 sec (#2610 EDRs, #459/sec, #0 errored / #6950 EBTMs, #1221/sec, #0 errored) - bulk #1 / sid #614
[10:50:34-10:50:39]:JOMA: create (slave) : 005.55 sec (#2548 EDRs, #459/sec, #0 errored / #6744 EBTMs, #1216/sec, #0 errored) - bulk #1 / sid #590
[10:50:34-10:50:39]:JOMA: create (slave) : 005.33 sec (#2461 EDRs, #462/sec, #0 errored / #6504 EBTMs, #1220/sec, #0 errored) - bulk #1 / sid #508
You can see that the worker threads are all started at the same time and terminate at the same time: 10:50:34-10:50:39.
But the main thread, which just invokes them and saves their results into a collection, finished at 10:50:49.
Why does it need about 10 seconds more just to save the data?
Here's a sample sqlplus script to demonstrate this:
--------------------------- snip -------------------------------------------------------
set serveroutput on;
drop table perf_data;
drop table test_table;
drop table tmp_test_table;
drop type ton_t;
drop type test_list;
drop type test_obj;
create table perf_data (
sid number,
t1 timestamp with time zone,
t2 timestamp with time zone,
client varchar2(256)
);
create table test_table (
a number(19,0),
b timestamp with time zone,
c varchar2(256)
);
create global temporary table tmp_test_table (
a number(19,0),
b timestamp with time zone,
c varchar2(256)
);
create or replace type test_obj as object(
a number(19,0),
b timestamp with time zone,
c varchar2(256)
);
create or replace type test_list as table of test_obj;
create or replace type ton_t as table of number;
create or replace package test_pkg
as
type test_rec is record (
a number(19,0),
b timestamp with time zone,
c varchar2(256)
);
type test_tab is table of test_rec;
type test_cur is ref cursor return test_rec;
function TZDeltaToMilliseconds(
t1 in timestamp with time zone,
t2 in timestamp with time zone)
return pls_integer;
function TF(mycur test_cur)
return test_list pipelined
parallel_enable(partition mycur by hash(a));
end;
create or replace package body test_pkg
as
/*
* Calculate timestamp with timezone difference
* in milliseconds
*/
function TZDeltaToMilliseconds(
t1 in timestamp with time zone,
t2 in timestamp with time zone)
return pls_integer
is
begin
return (extract(hour from t2) - extract(hour from t1)) * 3600 * 1000
+ (extract(minute from t2) - extract(minute from t1)) * 60 * 1000
+ (extract(second from t2) - extract(second from t1)) * 1000;
end TZDeltaToMilliseconds;
function TF(mycur test_cur)
return test_list pipelined
parallel_enable(partition mycur by hash(a))
is
pragma autonomous_transaction;
sid number;
counter number(19,0) := 0;
myrec test_rec;
mytab test_tab;
mytab2 test_list := test_list();
t1 timestamp with time zone;
t2 timestamp with time zone;
begin
t1 := systimestamp;
select userenv('SID') into sid from dual;
dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): enter');
loop
fetch mycur into myRec;
exit when mycur%NOTFOUND;
mytab2.extend;
mytab2(mytab2.last) := test_obj(myRec.a, myRec.b, myRec.c);
end loop;
for i in mytab2.first..mytab2.last loop
-- attention: saves own SID in test_obj.a for indication to caller
-- how many sids have been involved
pipe row(test_obj(sid, mytab2(i).b, mytab2(i).c));
pipe row(test_obj(sid, mytab2(i).b, mytab2(i).c)); -- duplicate
pipe row(test_obj(sid, mytab2(i).b, mytab2(i).c)); -- duplicate once again
counter := counter + 1;
end loop;
t2 := systimestamp;
insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'slave');
commit;
dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): exit, piped #' || counter || ' records');
end;
end;
declare
myList test_list := test_list();
myList2 test_list := test_list();
sids ton_t := ton_t();
sid number;
t1 timestamp with time zone;
t2 timestamp with time zone;
procedure LogPerfTable
is
type ton is table of number;
type tot is table of timestamp with time zone;
type clients_t is table of varchar2(256);
sids ton;
t1s tot;
t2s tot;
clients clients_t;
deltaTime integer;
btsPerSecond number(19,0);
edrsPerSecond number(19,0);
begin
select sid, t1, t2, client bulk collect into sids, t1s, t2s, clients from perf_data order by client;
if clients.count > 0 then
for i in clients.FIRST .. clients.LAST loop
deltaTime := test_pkg.TZDeltaToMilliseconds(t1s(i), t2s(i));
if deltaTime = 0 then deltaTime := 1; end if;
dbms_output.put_line(
'[' || to_char(t1s(i), 'hh:mi:ss') ||
'-' || to_char(t2s(i), 'hh:mi:ss') ||
']:' ||
' client ' || clients(i) || ' / sid #' || sids(i)
);
end loop;
end if;
end LogPerfTable;
begin
select userenv('SID') into sid from dual;
for i in 1..200000 loop
myList.extend; myList(myList.last) := test_obj(i, sysdate, to_char(i+2));
end loop;
-- save into the real table
insert into test_table select * from table(cast (myList as test_list));
-- save into the tmp table
insert into tmp_test_table select * from table(cast (myList as test_list));
dbms_output.put_line(chr(10) || '(1) copy ''mylist'' to ''mylist2'' by streaming via table function...');
delete from perf_data;
t1 := systimestamp;
select /*+ first_rows */ test_obj(a, b, c) bulk collect into myList2
from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,5) */ * from table(cast (myList as test_list)) tab)));
t2 := systimestamp;
insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'master');
LogPerfTable;
dbms_output.put_line('... saved #' || myList2.count || ' records');
select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
dbms_output.put_line(chr(10) || '(2) copy temporary ''tmp_test_table'' to ''mylist2'' by streaming via table function:');
delete from perf_data;
t1 := systimestamp;
select /*+ first_rows */ test_obj(a, b, c) bulk collect into myList2
from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,5) */ * from tmp_test_table tab)));
t2 := systimestamp;
insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'master');
LogPerfTable;
dbms_output.put_line('... saved #' || myList2.count || ' records');
select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
dbms_output.put_line(chr(10) || '(3) copy physical ''test_table'' to ''mylist2'' by streaming via table function:');
delete from perf_data;
t1 := systimestamp;
select /*+ first_rows */ test_obj(a, b, c) bulk collect into myList2
from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,5) */ * from test_table tab)));
t2 := systimestamp;
insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'master');
LogPerfTable;
dbms_output.put_line('... saved #' || myList2.count || ' records');
select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
end;
--------------------------- snap -------------------------------------------------------
best regards,
Frank
Hello
I think the delay you are seeing is down to choosing the partitioning method as HASH. When you specify anything other than ANY, an additional buffer sort is included in the execution plan...
create or replace package test_pkg
as
type test_rec is record (
a number(19,0),
b timestamp with time zone,
c varchar2(256)
);
type test_tab is table of test_rec;
type test_cur is ref cursor return test_rec;
function TZDeltaToMilliseconds(
t1 in timestamp with time zone,
t2 in timestamp with time zone)
return pls_integer;
function TF(mycur test_cur)
return test_list pipelined
parallel_enable(partition mycur by hash(a));
function TF_Any(mycur test_cur)
return test_list pipelined
parallel_enable(partition mycur by ANY);
end;
create or replace package body test_pkg
as
/*
* Calculate timestamp with timezone difference
* in milliseconds
*/
function TZDeltaToMilliseconds(
t1 in timestamp with time zone,
t2 in timestamp with time zone)
return pls_integer
is
begin
return (extract(hour from t2) - extract(hour from t1)) * 3600 * 1000
+ (extract(minute from t2) - extract(minute from t1)) * 60 * 1000
+ (extract(second from t2) - extract(second from t1)) * 1000;
end TZDeltaToMilliseconds;
function TF(mycur test_cur)
return test_list pipelined
parallel_enable(partition mycur by hash(a))
is
pragma autonomous_transaction;
sid number;
counter number(19,0) := 0;
myrec test_rec;
t1 timestamp with time zone;
t2 timestamp with time zone;
begin
t1 := systimestamp;
select userenv('SID') into sid from dual;
dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): enter');
loop
fetch mycur into myRec;
exit when mycur%NOTFOUND;
-- attention: saves own SID in test_obj.a for indication to caller
-- how many sids have been involved
pipe row(test_obj(sid, myRec.b, myRec.c));
pipe row(test_obj(sid, myRec.b, myRec.c)); -- duplicate
pipe row(test_obj(sid, myRec.b, myRec.c)); -- duplicate once again
counter := counter + 1;
end loop;
t2 := systimestamp;
insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'slave');
commit;
dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): exit, piped #' || counter || ' records');
end;
function TF_any(mycur test_cur)
return test_list pipelined
parallel_enable(partition mycur by ANY)
is
pragma autonomous_transaction;
sid number;
counter number(19,0) := 0;
myrec test_rec;
t1 timestamp with time zone;
t2 timestamp with time zone;
begin
t1 := systimestamp;
select userenv('SID') into sid from dual;
dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): enter');
loop
fetch mycur into myRec;
exit when mycur%NOTFOUND;
-- attention: saves own SID in test_obj.a for indication to caller
-- how many sids have been involved
pipe row(test_obj(sid, myRec.b, myRec.c));
pipe row(test_obj(sid, myRec.b, myRec.c)); -- duplicate
pipe row(test_obj(sid, myRec.b, myRec.c)); -- duplicate once again
counter := counter + 1;
end loop;
t2 := systimestamp;
insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'slave');
commit;
dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): exit, piped #' || counter || ' records');
end;
end;
explain plan for
select /*+ first_rows */ test_obj(a, b, c)
from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,5) */ * from test_table tab)));
select * from table(dbms_xplan.display);
Plan hash value: 1037943675
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | 8168 | 3972K| 20 (0)| 00:00:01 | | | |
| 1 | PX COORDINATOR | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10001 | 8168 | 3972K| 20 (0)| 00:00:01 | Q1,01 | P->S | QC (RAND) |
| 3 | BUFFER SORT | | 8168 | 3972K| | | Q1,01 | PCWP | |
| 4 | VIEW | | 8168 | 3972K| 20 (0)| 00:00:01 | Q1,01 | PCWP | |
| 5 | COLLECTION ITERATOR PICKLER FETCH| TF | | | | | Q1,01 | PCWP | |
| 6 | PX RECEIVE | | 931K| 140M| 136 (2)| 00:00:02 | Q1,01 | PCWP | |
| 7 | PX SEND HASH | :TQ10000 | 931K| 140M| 136 (2)| 00:00:02 | Q1,00 | P->P | HASH |
| 8 | PX BLOCK ITERATOR | | 931K| 140M| 136 (2)| 00:00:02 | Q1,00 | PCWC | |
| 9 | TABLE ACCESS FULL | TEST_TABLE | 931K| 140M| 136 (2)| 00:00:02 | Q1,00 | PCWP | |
Note
- dynamic sampling used for this statement
explain plan for
select /*+ first_rows */ test_obj(a, b, c)
from table(test_pkg.TF_Any(CURSOR(select /*+ parallel(tab,5) */ * from test_table tab)));
select * from table(dbms_xplan.display);
Plan hash value: 4097140875
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | 8168 | 3972K| 20 (0)| 00:00:01 | | | |
| 1 | PX COORDINATOR | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10000 | 8168 | 3972K| 20 (0)| 00:00:01 | Q1,00 | P->S | QC (RAND) |
| 3 | VIEW | | 8168 | 3972K| 20 (0)| 00:00:01 | Q1,00 | PCWP | |
| 4 | COLLECTION ITERATOR PICKLER FETCH| TF_ANY | | | | | Q1,00 | PCWP | |
| 5 | PX BLOCK ITERATOR | | 931K| 140M| 136 (2)| 00:00:02 | Q1,00 | PCWC | |
| 6 | TABLE ACCESS FULL | TEST_TABLE | 931K| 140M| 136 (2)| 00:00:02 | Q1,00 | PCWP | |
Note
- dynamic sampling used for this statement
I posted about this here a few years ago, and I more recently posted a question on AskTom. Unfortunately Tom was not able to find a technical reason for it to be there, so I'm still a little in the dark as to why it is needed. The original question I posted is here:
Pipelined function partition by hash has extra sort#
I ran your tests with HASH vs ANY and the results are in line with the observations above....
declare
myList test_list := test_list();
myList2 test_list := test_list();
sids ton_t := ton_t();
sid number;
t1 timestamp with time zone;
t2 timestamp with time zone;
procedure LogPerfTable
is
type ton is table of number;
type tot is table of timestamp with time zone;
type clients_t is table of varchar2(256);
sids ton;
t1s tot;
t2s tot;
clients clients_t;
deltaTime integer;
btsPerSecond number(19,0);
edrsPerSecond number(19,0);
begin
select sid, t1, t2, client bulk collect into sids, t1s, t2s, clients from perf_data order by client;
if clients.count > 0 then
for i in clients.FIRST .. clients.LAST loop
deltaTime := test_pkg.TZDeltaToMilliseconds(t1s(i), t2s(i));
if deltaTime = 0 then deltaTime := 1; end if;
dbms_output.put_line(
'[' || to_char(t1s(i), 'hh:mi:ss') ||
'-' || to_char(t2s(i), 'hh:mi:ss') ||
']:' ||
' client ' || clients(i) || ' / sid #' || sids(i)
);
end loop;
end if;
end LogPerfTable;
begin
select userenv('SID') into sid from dual;
for i in 1..200000 loop
myList.extend; myList(myList.last) := test_obj(i, sysdate, to_char(i+2));
end loop;
-- save into the real table
insert into test_table select * from table(cast (myList as test_list));
-- save into the tmp table
insert into tmp_test_table select * from table(cast (myList as test_list));
dbms_output.put_line(chr(10) || '(1) copy ''mylist'' to ''mylist2'' by streaming via table function...');
delete from perf_data;
t1 := systimestamp;
select /*+ first_rows */ test_obj(a, b, c) bulk collect into myList2
from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,5) */ * from table(cast (myList as test_list)) tab)));
t2 := systimestamp;
insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'master');
LogPerfTable;
dbms_output.put_line('... saved #' || myList2.count || ' records');
select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
dbms_output.put_line(chr(10) || '(2) copy temporary ''tmp_test_table'' to ''mylist2'' by streaming via table function:');
delete from perf_data;
t1 := systimestamp;
select /*+ first_rows */ test_obj(a, b, c) bulk collect into myList2
from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,5) */ * from tmp_test_table tab)));
t2 := systimestamp;
insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'master');
LogPerfTable;
dbms_output.put_line('... saved #' || myList2.count || ' records');
select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
dbms_output.put_line(chr(10) || '(3) copy physical ''test_table'' to ''mylist2'' by streaming via table function:');
delete from perf_data;
t1 := systimestamp;
select /*+ first_rows */ test_obj(a, b, c) bulk collect into myList2
from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,5) */ * from test_table tab)));
t2 := systimestamp;
insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'master');
LogPerfTable;
dbms_output.put_line('... saved #' || myList2.count || ' records');
select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
dbms_output.put_line(chr(10) || '(4) copy temporary ''tmp_test_table'' to ''mylist2'' by streaming via table function ANY:');
delete from perf_data;
t1 := systimestamp;
select /*+ first_rows */ test_obj(a, b, c) bulk collect into myList2
from table(test_pkg.TF_any(CURSOR(select /*+ parallel(tab,5) */ * from tmp_test_table tab)));
t2 := systimestamp;
insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'master');
LogPerfTable;
dbms_output.put_line('... saved #' || myList2.count || ' records');
select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
dbms_output.put_line(chr(10) || '(5) copy physical ''test_table'' to ''mylist2'' by streaming via table function using ANY:');
delete from perf_data;
t1 := systimestamp;
select /*+ first_rows */ test_obj(a, b, c) bulk collect into myList2
from table(test_pkg.TF_any(CURSOR(select /*+ parallel(tab,5) */ * from test_table tab)));
t2 := systimestamp;
insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'master');
LogPerfTable;
dbms_output.put_line('... saved #' || myList2.count || ' records');
select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
end;
(1) copy 'mylist' to 'mylist2' by streaming via table function...
test_pkg.TF( sid => '918' ): enter
test_pkg.TF( sid => '918' ): exit, piped #200000 records
[01:40:19-01:40:29]: client master / sid #918
[01:40:19-01:40:29]: client slave / sid #918
... saved #600000 records
(2) copy temporary 'tmp_test_table' to 'mylist2' by streaming via table function:
[01:40:31-01:40:36]: client master / sid #918
[01:40:31-01:40:32]: client slave / sid #659
[01:40:31-01:40:32]: client slave / sid #880
[01:40:31-01:40:32]: client slave / sid #1045
[01:40:31-01:40:32]: client slave / sid #963
[01:40:31-01:40:32]: client slave / sid #712
... saved #600000 records
(3) copy physical 'test_table' to 'mylist2' by streaming via table function:
[01:40:37-01:41:05]: client master / sid #918
[01:40:37-01:40:42]: client slave / sid #738
[01:40:37-01:40:42]: client slave / sid #568
[01:40:37-01:40:42]: client slave / sid #618
[01:40:37-01:40:42]: client slave / sid #659
[01:40:37-01:40:42]: client slave / sid #963
... saved #3000000 records
(4) copy temporary 'tmp_test_table' to 'mylist2' by streaming via table function ANY:
[01:41:12-01:41:16]: client master / sid #918
[01:41:12-01:41:16]: client slave / sid #712
[01:41:12-01:41:16]: client slave / sid #1045
[01:41:12-01:41:16]: client slave / sid #681
[01:41:12-01:41:16]: client slave / sid #754
[01:41:12-01:41:16]: client slave / sid #880
... saved #600000 records
(5) copy physical 'test_table' to 'mylist2' by streaming via table function using ANY:
[01:41:18-01:41:38]: client master / sid #918
[01:41:18-01:41:38]: client slave / sid #681
[01:41:18-01:41:38]: client slave / sid #712
[01:41:18-01:41:38]: client slave / sid #754
[01:41:18-01:41:37]: client slave / sid #880
[01:41:18-01:41:38]: client slave / sid #1045
... saved #3000000 records
HTH
David -
Parallel pipelined table function, autonomous_transaction to global tmp tab
Hi,
I am trying to speed up my parallel pipelined table function by switching from a PL/SQL collection to a global temporary table inside it.
This requires PRAGMA AUTONOMOUS_TRANSACTION (and a commit), because inserting into a global temporary table (DML)
within the select that invokes the table function is not allowed otherwise.
As a consequence of the commit, the global temporary table must be created with ON COMMIT PRESERVE ROWS.
Now:
Inserts into the global temporary table are done, as indicated by sql%rowcount.
But a select afterwards doesn't show any records anymore.
Here is a program to demonstrate it:
set serveroutput on;
drop type TestTableOfNumber_t;
create or replace type TestTableOfNumber_t is table of number;
drop type TestStatusList;
drop type TestStatusObj;
create or replace type TestStatusObj as object(
sid number,
ctr1 number,
ctr2 number,
ctr3 number
);
create or replace type TestStatusList is table of TestStatusObj;
drop table TestTmpTable;
create global temporary table TestTmpTable (
value number
) on commit preserve rows;
create or replace package test_pkg
as
type TestStatusRec is record (
sid number,
ctr1 number,
ctr2 number,
ctr3 number
);
type TestStatusTab is table of TestStatusRec;
function FillTmpTable(id in varchar2)
return TestStatusRec;
FUNCTION ptf (p_cursor IN sys_refcursor)
RETURN TestStatusList PIPELINED
PARALLEL_ENABLE(PARTITION p_cursor BY any);
end;
create or replace package body test_pkg
as
function FillTmpTable(id in varchar2)
return TestStatusRec
is
PRAGMA AUTONOMOUS_TRANSACTION;
result TestStatusRec;
sid number;
type ton is table of number;
tids TestTableOfNumber_t := TestTableOfNumber_t();
records number := 0;
begin
select userenv('SID') into sid from dual;
result.sid := sid;
delete from TestTmpTable;
for i in 1..100 loop
tids.extend;
tids(tids.last) := i;
end loop;
forall i in 1..tids.count
insert into TestTmpTable (value) values (tids(i));
-- get number of records inserted
records := sql%rowcount;
result.ctr1 := records;
-- retrieve again before commit
select count(*) into records from TestTmpTable;
result.ctr2 := records;
commit;
-- retrieve again after commit
select count(*) into records from TestTmpTable;
result.ctr3 := records;
return result;
end;
FUNCTION ptf (p_cursor IN sys_refcursor)
RETURN TestStatusList PIPELINED
PARALLEL_ENABLE(PARTITION p_cursor BY any)
IS
rec test_pkg.TestStatusRec;
value number;
sid number;
ctr integer := 0;
BEGIN
select userenv('SID') into sid from dual;
rec := FillTmpTable('IN PTF');
LOOP
FETCH p_cursor into value;
EXIT WHEN p_cursor%NOTFOUND;
ctr := ctr + 1;
END LOOP;
-- as a result i am only interested in the results of FillTmpTable():
PIPE ROW (TestStatusObj(rec.sid, rec.ctr1, rec.ctr2, rec.ctr3));
RETURN;
END;
end;
declare
tons TestTableOfNumber_t;
counts TestTableOfNumber_t;
status test_pkg.TestStatusRec;
statusList test_pkg.TestStatusTab;
begin
status := test_pkg.FillTmpTable('MAIN');
dbms_output.put_line('main thread:'
|| ' sid #' || status.sid
|| ' / #' || status.ctr1 || ' inserted '
|| ' / #' || status.ctr2 || ' before commit'
|| ' / #' || status.ctr3 || ' after commit');
select value bulk collect into tons from TestTmpTable;
select * bulk collect into statusList from TABLE(test_pkg.ptf(CURSOR(select /*+ parallel(tab,2) */ value from TestTmpTable tab)));
for i in 1..StatusList.count loop
dbms_output.put_line('worker thread #' || i || ':'
|| ' sid #' || statusList(i).sid
|| ' / #' || statusList(i).ctr1 || ' inserted '
|| ' / #' || statusList(i).ctr2 || ' before commit'
|| ' / #' || statusList(i).ctr3 || ' after commit');
end loop;
end;
/
The output is:
main thread: sid #881 / #100 inserted / #100 before commit / #100 after commit
worker thread #1: sid #421 / #100 inserted / #0 before commit / #0 after commit
worker thread #2: sid #321 / #100 inserted / #0 before commit / #0 after commit
The 1st line is for the main thread invoking FillTmpTable().
The next #2 lines are for the worker threads of the parallel pipelined table function for invoking the same FillTmpTable().
For the main thread everything is as expected.
But for the worker threads, the logs both before and after the commit give #0 for the number of available records in the global temporary table.
However, all of them indicate #100 rows for the SQL insert.
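A likely part of the explanation, though not spelled out above: parallel query slaves are separate database sessions, and rows in a global temporary table are visible only to the session that inserted them (ON COMMIT PRESERVE ROWS preserves them per session, not globally). So data loaded into the GTT by the main session can never be shared with the slaves this way, regardless of commits; the counts the slaves see of their own inserts apparently vary by version, as the 11.1 run later in the thread shows. A minimal sketch of the visibility rule, assuming two ordinary sessions and the TestTmpTable above:

```sql
-- Session 1:
INSERT INTO TestTmpTable (value) VALUES (1);
COMMIT;                             -- ON COMMIT PRESERVE ROWS: kept for this session only
SELECT COUNT(*) FROM TestTmpTable;  -- returns 1

-- Session 2 (e.g. a PX slave process):
SELECT COUNT(*) FROM TestTmpTable;  -- returns 0: GTT rows are private per session
```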
regards,
Frank
Edited by: user8704911 on Jul 7, 2011 10:13 AM
Edited by: user8704911 on Jul 7, 2011 10:20 AM
Edited by: user8704911 on Jul 7, 2011 10:27 AM
SQL> select * from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
PL/SQL Release 11.1.0.7.0 - Production
CORE 11.1.0.7.0 Production
TNS for Linux: Version 11.1.0.7.0 - Production
NLSRTL Version 11.1.0.7.0 - Production
SQL> set serveroutput on;
SQL> drop type TestTableOfNumber;
drop type TestTableOfNumber
ERROR at line 1:
ORA-04043: object TESTTABLEOFNUMBER does not exist
SQL> /
drop type TestTableOfNumber
ERROR at line 1:
ORA-04043: object TESTTABLEOFNUMBER does not exist
SQL>
SQL> create or replace type TestTableOfNumber_t is table of number;
2 /
Type created.
SQL>
SQL> drop type TestStatusObj;
drop type TestStatusObj
ERROR at line 1:
ORA-04043: object TESTSTATUSOBJ does not exist
SQL> /
drop type TestStatusObj
ERROR at line 1:
ORA-04043: object TESTSTATUSOBJ does not exist
SQL>
SQL> create or replace type TestStatusObj as object(
2 sid number,
3 ctr1 number,
4 ctr2 number,
5 ctr3 number
6 );
7 /
Type created.
SQL>
SQL> drop type TestStatusList;
drop type TestStatusList
ERROR at line 1:
ORA-04043: object TESTSTATUSLIST does not exist
SQL> /
drop type TestStatusList
ERROR at line 1:
ORA-04043: object TESTSTATUSLIST does not exist
SQL>
SQL> create or replace type TestStatusList is table of TestStatusObj;
2 /
Type created.
SQL>
SQL> drop table TestTmpTable;
drop table TestTmpTable
ERROR at line 1:
ORA-00942: table or view does not exist
SQL> /
drop table TestTmpTable
ERROR at line 1:
ORA-00942: table or view does not exist
SQL>
SQL> create global temporary table TestTmpTable (
2 value number
3 ) on commit preserve rows;
Table created.
SQL> /
create global temporary table TestTmpTable (
ERROR at line 1:
ORA-00955: name is already used by an existing object
SQL>
SQL> create or replace package test_pkg
2 as
3
4 type TestStatusRec is record (
5 sid number,
6 ctr1 number,
7 ctr2 number,
8 ctr3 number
9 );
10
11 type TestStatusTab is table of TestStatusRec;
12
13 function FillTmpTable(id in varchar2)
14 return TestStatusRec;
15
16 FUNCTION ptf (p_cursor IN sys_refcursor)
17 RETURN TestStatusList PIPELINED
18 PARALLEL_ENABLE(PARTITION p_cursor BY any);
19
20 end;
21 /
Package created.
SQL>
SQL> create or replace package body test_pkg
2 as
3
4 function FillTmpTable(id in varchar2)
5 return TestStatusRec
6 is
7 PRAGMA AUTONOMOUS_TRANSACTION;
8
9 result TestStatusRec;
10
11 sid number;
12
13 type ton is table of number;
14 tids TestTableOfNumber_t := TestTableOfNumber_t();
15
16 records number := 0;
17 begin
18 select userenv('SID') into sid from dual;
19 result.sid := sid;
20
21 delete from TestTmpTable;
22
23 for i in 1..100 loop
24 tids.extend;
25 tids(tids.last) := i;
26 end loop;
27
28 forall i in 1..tids.count
29 insert into TestTmpTable (value) values (tids(i));
30
31 -- get number of records inserted
32 records := sql%rowcount;
33 result.ctr1 := records;
34
35 -- retrieve again before commit
36 select count(*) into records from TestTmpTable;
37 result.ctr2 := records;
38
39 commit;
40
41 -- retrieve again after commit
42 select count(*) into records from TestTmpTable;
43 result.ctr3 := records;
44
45 return result;
46 end;
47
48 FUNCTION ptf (p_cursor IN sys_refcursor)
49 RETURN TestStatusList PIPELINED
50 PARALLEL_ENABLE(PARTITION p_cursor BY any)
51 IS
52 rec test_pkg.TestStatusRec;
53 value number;
54 sid number;
55 ctr integer := 0;
56 BEGIN
57 select userenv('SID') into sid from dual;
58 rec := FillTmpTable('IN PTF');
59 LOOP
60 FETCH p_cursor into value;
61 EXIT WHEN p_cursor%NOTFOUND;
62 ctr := ctr + 1;
63 END LOOP;
64
65 -- as a result i am only interested in the results of FillTmpTable():
66 PIPE ROW (TestStatusObj(rec.sid, rec.ctr1, rec.ctr2, rec.ctr3));
67
68 RETURN;
69 END;
70 end;
71 /
Package body created.
SQL>
SQL> declare
2 tons TestTableOfNumber_t;
3 counts TestTableOfNumber_t;
4 status test_pkg.TestStatusRec;
5 statusList test_pkg.TestStatusTab;
6 begin
7 status := test_pkg.FillTmpTable('MAIN');
8 dbms_output.put_line('main thread:'
9 || ' sid #' || status.sid
10 || ' / #' || status.ctr1 || ' inserted '
11 || ' / #' || status.ctr2 || ' before commit'
12 || ' / #' || status.ctr3 || ' after commit');
13
14 select value bulk collect into tons from TestTmpTable;
15
16 select * bulk collect into statusList from TABLE(test_pkg.ptf(CURSOR(select /*+ parallel(tab,2
) */ value from TestTmpTable tab)));
17
18 for i in 1..StatusList.count loop
19 dbms_output.put_line('worker thread #' || i || ':'
20 || ' sid #' || statusList(i).sid
21 || ' / #' || statusList(i).ctr1 || ' inserted '
22 || ' / #' || statusList(i).ctr2 || ' before commit'
23 || ' / #' || statusList(i).ctr3 || ' after commit');
24 end loop;
25
26 end;
27 /
main thread: sid #1023 / #100 inserted / #100 before commit / #100 after commit
worker thread #1: sid #1045 / #100 inserted / #100 before commit / #100 after
commit
worker thread #2: sid #1019 / #100 inserted / #100 before commit / #100 after
commit
PL/SQL procedure successfully completed.
SQL>
I am getting a different result.
Regards
Raj -
Table function giving crappy code.
Hi there,
I'm trying to implement a table function for in a simple mapping.
It has one input variable, and the table function uses this variable to return two values in an object type (a table of rows).
The problem is that the code the mapping generates does not work, because it tries to create a cursor.
Without the cursor the code works fine.
Funny thing is, I have done this before and that time it generated different code, without the cursor statement, and it worked.
What am I missing here?
See the code that's not working below.
Error message is: wrong number or types of arguments in call..
INSERT
/*+ APPEND PARALLEL(TEST, DEFAULT, DEFAULT) */
INTO
"TEST"
("MP_NAME",
"LP_DATE")
(SELECT
/*+ NO_MERGE */
"LAST_PROCESS_DATE".MP_NAME "MP_NAME",
"LAST_PROCESS_DATE".LP_DATE "LP_DATE"
FROM TABLE ( LAST_PROCESS_DATE (
CURSOR (SELECT
FULLPACKAGENAME."MP_NAME$1" "MP_NAME$0"
FROM DUAL ))) "LAST_PROCESS_DATE"
);
... you are right. You cannot write the code of a table function inside OWB. You must write the table function externally and create it in the database. In OWB you can only call the table function.
Regards,
Detlef
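A sketch of such a standalone table function created directly in the database; the types, the function body, and the column names are all illustrative (the real LAST_PROCESS_DATE logic is not shown in the thread). OWB would then only reference the function in the mapping:

```sql
CREATE OR REPLACE TYPE lp_row AS OBJECT (mp_name VARCHAR2(30), lp_date DATE);
/
CREATE OR REPLACE TYPE lp_tab AS TABLE OF lp_row;
/
CREATE OR REPLACE FUNCTION last_process_date (p_mp_name IN VARCHAR2)
  RETURN lp_tab PIPELINED
IS
BEGIN
  -- illustrative lookup; the real query depends on your audit tables
  FOR r IN (SELECT p_mp_name AS mp_name, SYSDATE AS lp_date FROM dual) LOOP
    PIPE ROW (lp_row(r.mp_name, r.lp_date));
  END LOOP;
  RETURN;
END;
/
-- usage from plain SQL, which is all the generated mapping code needs:
SELECT * FROM TABLE(last_process_date('MY_MAP'));
```

With a scalar input parameter like this, the generated code should not need the CURSOR(...) wrapper that was causing the "wrong number or types of arguments" error.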