Maximum No. of Partitions in a Table?
Hi,
What is the maximum number of partitions in a table?
Best Regards,
Naresh Kumar C.
It all depends on what "1K" means:
First option (more popular): 1K = 1024, so 1024K = 1024*1024.
Second option (less popular): 1K = 1000, so 1024K = 1024*1000.
http://en.wikipedia.org/wiki/Byte
Do you really need 1024K-1 partitions? (With either option, that is already a huge number of partitions.) Good luck with the maintenance tasks...
Nicolas.
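For reference, the two readings work out like this (a quick check anyone can run; the binary reading, 1048575, is the one that matches the documented limit):

```sql
SELECT 1024 * 1024 - 1 AS binary_limit,   -- 1K = 1024 -> 1048575
       1024 * 1000 - 1 AS decimal_limit   -- 1K = 1000 -> 1023999
FROM dual;
```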
Similar Messages
-
Maximum number of partitions allowed per table.
Interesting findings with interval partitioning:
SQL> SELECT *
2 FROM v$version
3 /
BANNER
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
CORE 11.2.0.3.0 Production
TNS for 64-bit Windows: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production
SQL> DROP TABLE tbl PURGE
2 /
Table dropped.
SQL> CREATE TABLE tbl(
2 id number(6),
3 dt date
4 )
5 PARTITION BY RANGE(dt)
6 INTERVAL (INTERVAL '1' DAY)
7 (
8 PARTITION p1 VALUES LESS THAN (date '-857-12-31')
9 )
10 /
Table created.
SQL> select partition_name,
2 high_value
3 from user_tab_partitions
4 where table_name = 'TBL'
5 /
PARTITION_NAME HIGH_VALUE
P1 TO_DATE('-0857-12-31 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'N
LS_CALENDAR=GREGORIAN')
SQL> INSERT
2 INTO tbl
3 VALUES(
4 1,
5 sysdate
6 )
7 /
1 row created.
SQL> DROP TABLE tbl PURGE
2 /
Table dropped.
SQL> CREATE TABLE tbl(
2 id number(6),
3 dt date
4 )
5 PARTITION BY RANGE(dt)
6 INTERVAL (INTERVAL '1' DAY)
7 (
8 PARTITION p1 VALUES LESS THAN (date '-858-01-01')
9 )
10 /
Table created.
SQL> select partition_name,
2 high_value
3 from user_tab_partitions
4 where table_name = 'TBL'
5 /
PARTITION_NAME HIGH_VALUE
P1 TO_DATE('-0858-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'N
LS_CALENDAR=GREGORIAN')
SQL> INSERT
2 INTO tbl
3 VALUES(
4 1,
5 sysdate
6 )
7 /
INTO tbl
ERROR at line 2:
ORA-14300: partitioning key maps to a partition outside maximum permitted number of partitions
SQL>
From Logical Database Limits:
Maximum number of partitions allowed per table or index: 1024K - 1
I always thought the limit applies to the number of actual, not potential, partitions. However, it looks like I was wrong, although it makes little sense to limit potential rather than actual partitions:
SQL> select trunc(sysdate) - date '-858-01-01',
2 1024 * 1024 - 1
3 from dual
4 /
TRUNC(SYSDATE)-DATE'-858-01-01' 1024*1024-1
1048661 1048575
SQL> select to_char(DATE'-858-01-01' + 1048575,'MM/DD/YYYY')
2 from dual
3 /
TO_CHAR(DA
11/17/2012
SQL>
So tomorrow the "magic" date should increase by one day. I'll test it. Even more interesting is whether tomorrow I will be able to insert a row that creates a new partition in table TBL.
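The behaviour above suggests Oracle numbers interval partitions by position relative to the initial partition's boundary, and rejects any key mapping beyond position 1024K - 1. A hypothetical sketch of that check (this is a model of the observed behaviour, not Oracle's actual code), using the daily interval from the test case:

```sql
-- Hypothetical model: position 1 is the initial range partition, and each
-- daily interval past its boundary adds one potential position.
DECLARE
  max_partitions     CONSTANT PLS_INTEGER := 1024 * 1024 - 1;  -- 1048575
  days_past_boundary PLS_INTEGER := trunc(sysdate) - date '-858-01-01';
BEGIN
  IF 1 + days_past_boundary > max_partitions THEN
    -- This is the situation where Oracle raises ORA-14300.
    raise_application_error(-20000,
      'key maps to a partition outside maximum permitted number of partitions');
  END IF;
END;
/
```

With the boundary at date '-858-01-01', sysdate was 1048661 days past it, which exceeds 1048575 and trips the check; moving the boundary forward to '-857-12-31' brings sysdate back under the limit, matching the two test cases above.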
SY.
rp0428 wrote:
>
The other argument is that Oracle has to be able to automatically create any partition required and it can only create 1024K - 1. So if you create yours with sysdate how could it create all of the others?
>
Not sure I follow. What is the purpose of counting potential partitions? Partition part# in sys.tabpart$ is not assigned based on potential partition position. If I issue DDL to create a new partition, regardless of interval/non-interval partitioning, Oracle has to check how many partitions the table has so far and raise the same (or a similar) exception if the partition I am asking to create is over the limit. And, in any case, knowing we can create all potential partitions at table-creation time doesn't mean I will not try to insert data outside the range, so there is absolutely no guarantee Oracle can automatically create any partition requested. Again, I don't understand why creating a non-interval partitioned table with a single initial partition gives a partition count of 1:
SQL> DROP TABLE tbl1 PURGE
2 /
Table dropped.
SQL> CREATE TABLE tbl1(
2 id number(6),
3 dt date
4 )
5 PARTITION BY RANGE(dt)
6 (
7 PARTITION p1 VALUES LESS THAN (date '-857-12-31')
8 )
9 /
Table created.
SQL> SELECT partition_count
2 FROM user_part_tables
3 WHERE table_name = 'TBL1'
4 /
PARTITION_COUNT
1
SQL>
And an interval partitioned table with the same single initial partition has a partition count of 1048575:
SQL> CREATE TABLE tbl1(
2 id number(6),
3 dt date
4 )
5 PARTITION BY RANGE(dt)
6 INTERVAL (INTERVAL '1' DAY)
7 (
8 PARTITION p1 VALUES LESS THAN (date '-857-12-31')
9 )
10 /
Table created.
SQL> SELECT partition_count
2 FROM user_part_tables
3 WHERE table_name = 'TBL1'
4 /
PARTITION_COUNT
1048575
SQL>
It would be interesting to find out what forces Oracle into potential-partition mode for interval partitioning.
SY. -
Problem in truncate/drop partitions in a table having nested table columns.
Hi,
I have a table that has 2 columns of type nested table. Now, in the purge process, when I try to truncate or drop a partition from this table, I get an error saying I can't do this (because the table has nested tables). Can anybody tell me how I can truncate/drop a partition from this table? If I change the column types from nested table to varray, will it help?
Also, is there any quick method of moving existing data from a nested table column to a varray column (having the same fields as the nested table)?
Thanks in advance.
>
I have a table that has 2 columns of type nested table. Now in the purge process, when I try to truncate or drop a partition from this table, I get error that I can't do this (because table has nested tables). Can anybody help me telling how I will be able to truncate/drop partition from this table?
>
Unfortunately you can't do those operations when a table has a nested table column. No truncate, no drop, no exchange partition at the partition level.
A nested table column is stored as a separate table and acts like a 'child' table with foreign keys to the 'parent' table. It is these 'foreign keys' that prevent the truncation (just like normal foreign keys prevent truncating partitions and must be disabled first), but there is no mechanism to 'disable' them.
Just one excellent example (there are many others) of why you should NOT use object columns at all.
>
IF I change column types from nested table to varray type, will it help?
>
Yes but I STRONGLY suggest you take this opportunity to change your data model to a standard relational one and put the 'child' (nested table) data into its own table with a foreign key to the parent. You can create a view on the two tables that can make data appear as if you have a nested table type if you want.
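As a rough sketch of that relational model (all names here are made up for illustration; SOME_NT_TYPE stands for whatever collection type you would declare for the view):

```sql
-- Hypothetical parent/child replacement for the nested table column.
CREATE TABLE parent_tab (
  id NUMBER PRIMARY KEY,
  dt DATE
);
CREATE TABLE child_tab (
  parent_id NUMBER NOT NULL REFERENCES parent_tab (id),
  item      VARCHAR2(64)
);
-- Optional view that re-nests the child rows, approximating the old shape.
CREATE OR REPLACE VIEW parent_nested_v AS
SELECT p.id,
       p.dt,
       CAST(MULTISET(SELECT c.item
                     FROM   child_tab c
                     WHERE  c.parent_id = p.id) AS some_nt_type) AS items
FROM   parent_tab p;
```

With this layout, partition-level TRUNCATE/DROP on the parent works as usual once the child rows for that partition's keys have been dealt with.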
Assuming that you are going to ignore the above advice just create a new VARRAY type and a table with that type as a column. Remember VARRAYs are defined with a maximum size. So the number of nested table records needs to be within the capacity of the VARRAY type for the data to fit.
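To verify the data will fit before choosing the VARRAY size, something like this should work (CARDINALITY returns the element count of a nested table; NESTED_COL and PARTITIONED_TABLE are the same illustrative names as in the code in this reply):

```sql
-- Largest collection currently stored in the nested table column;
-- the VARRAY limit must be at least this value.
SELECT MAX(CARDINALITY(nested_col)) AS max_elements
FROM   partitioned_table;
```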
>
Also, is there any short method of moving existing data from a nested table column to a varray column (having same fields as nested table)?
>
Sure - just CAST the nested table to the VARRAY type. Here is code for a VARRAY type and a new table that shows how to do it.
-- new array type
CREATE OR REPLACE TYPE ARRAY_T AS VARRAY(10) OF VARCHAR2(64);
-- new table using new array type - NOTE there is no nested table storage clause - arrays stored inline
CREATE TABLE partitioned_table_array
( ID_ INT,
  arra_col ARRAY_T )
PARTITION BY RANGE (ID_)
( PARTITION p1 VALUES LESS THAN (40)
, PARTITION p2 VALUES LESS THAN (80)
, PARTITION p3 VALUES LESS THAN (100)
);
-- insert the data from the original table converting the nested table data to the varray type
INSERT INTO PARTITIONED_TABLE_ARRAY
SELECT ID_, CAST(NESTED_COL AS ARRAY_T) FROM PARTITIONED_TABLE;
Naturally, since there is no more nested table storage, you can truncate or drop partitions in the above table:
alter table partitioned_table_array truncate partition p1;
alter table partitioned_table_array drop partition p1; -
ORA 01792 maximum number of columns in a table or view is 1000
Hello everyone, I wish to register a large XML schema document. I am using the command:
begin
dbms_xmlschema.registerschema(
schemaurl=>'xxxx',
schemadoc=>bfilename('XMLDIR','xxxxxx.xsd'),
csid=>nls_charset_id('AL32UTF8'));
end;
But the schema exceeds 1000 columns and it gives me the error message
ORA-01792: maximum number of columns in a table or view is 1000
Is there any way to solve the problem without editing the original schema document?
Thanks for your help.
First create this package:
create or replace package XDB_ANALYZE_XMLSCHEMA_10200
authid CURRENT_USER
as
function analyzeStorageModel(P_COMPLEX_TYPE_NAME VARCHAR2) return XMLTYPE;
function analyzeComplexType(COMPLEX_TYPE VARCHAR2) return XMLTYPE;
procedure renameCollectionTable (XMLTABLE varchar2, XPATH varchar2, COLLECTION_TABLE_PREFIX varchar2);
function printNestedTables(XML_TABLE varchar2) return XMLType;
function getComplexTypeElementList(P_SQLTYPE VARCHAR2, P_SQLSCHEMA VARCHAR2) return XDB.XDB$XMLTYPE_REF_LIST_T;
procedure scopeXMLReferences;
procedure indexXMLReferences(INDEX_NAME VARCHAR2);
function generateSchemaFromTable(P_TABLE_NAME varchar2, P_OWNER varchar2 default USER) return XMLTYPE;
function showSQLTypes(schemaFolder varchar2) return XMLType;
function generateCreateTableStatement(XML_TABLE_NAME varchar2, NEW_TABLE_NAME varchar2) return CLOB;
end XDB_ANALYZE_XMLSCHEMA_10200;
/
show errors
create or replace package body XDB_ANALYZE_XMLSCHEMA_10200
as
G_DEPTH_COUNT NUMBER(2) := 0;
TYPE BASETYPE_T is RECORD (
  SUBTYPE varchar2(128),
  SUBTYPE_OWNER varchar2(32),
  BASETYPE varchar2(128),
  BASETYPE_OWNER varchar2(32)
);
TYPE BASETYPE_LIST_T IS TABLE OF BASETYPE_T;
BASETYPE_LIST BASETYPE_LIST_T := BASETYPE_LIST_T();
function findStorageModel(P_TYPE_NAME VARCHAR2, P_TYPE_OWNER VARCHAR2, P_INCLUDE_SUBTYPES VARCHAR2 DEFAULT 'YES') return XMLType;
function getLocalAttributes(P_TYPE_NAME varchar2, P_TYPE_OWNER VARCHAR2) return XMLType;
function makeElement(P_NAME varchar2)
return xmltype
as
V_NAME varchar2(4000) := P_NAME;
begin
-- -- dbms_output.put_line('Processing : ' || P_NAME);
if (P_NAME LIKE '%$') then
V_NAME := SUBSTR(V_NAME,1,LENGTH(V_NAME) - 1);
end if;
if (P_NAME LIKE '%$%') then
V_NAME := REPLACE(V_NAME,'$','_0x22_');
end if;
return XMLTYPE( '<' || V_NAME || '/>');
end;
function getPathToRoot(SUBTYPE VARCHAR2, SUBTYPE_OWNER VARCHAR2)
return varchar2
as
TYPE_HIERARCHY varchar2(4000);
begin
SELECT sys_connect_by_path( OWNER || '.' || TYPE_NAME , '/')
INTO TYPE_HIERARCHY
FROM ALL_TYPES
WHERE TYPE_NAME = SUBTYPE
AND OWNER = SUBTYPE_OWNER
CONNECT BY SUPERTYPE_NAME = PRIOR TYPE_NAME
AND SUPERTYPE_OWNER = PRIOR OWNER
START WITH SUPERTYPE_NAME IS NULL
AND SUPERTYPE_OWNER IS NULL;
return TYPE_HIERARCHY;
end;
function expandSQLType(ATTR_NAME VARCHAR2, SUBTYPE VARCHAR2, SUBTYPE_OWNER VARCHAR2)
return XMLType
as
STORAGE_MODEL XMLTYPE;
ATTRIBUTES XMLTYPE;
EXTENDED_TYPE XMLTYPE;
ATTR_COUNT NUMBER := 0;
CURSOR FIND_EXTENDED_TYPES
is
select TYPE_NAME, OWNER
from ALL_TYPES
where SUPERTYPE_NAME = SUBTYPE
and SUPERTYPE_OWNER = SUBTYPE_OWNER;
begin
-- dbms_output.put_line('Processing SQLType : "' || SUBTYPE_OWNER || '.' || SUBTYPE || '".' );
STORAGE_MODEL := makeElement(ATTR_NAME);
select insertChildXML(STORAGE_MODEL,'/' || STORAGE_MODEL.getRootElement(),'@type',SUBTYPE)
into STORAGE_MODEL
from dual;
select insertChildXML(STORAGE_MODEL,'/' || STORAGE_MODEL.getRootElement(),'@typeOwner',SUBTYPE_OWNER)
into STORAGE_MODEL
from dual;
ATTRIBUTES := getLocalAttributes(SUBTYPE, SUBTYPE_OWNER);
ATTR_COUNT := ATTR_COUNT + ATTRIBUTES.extract('/' || ATTRIBUTES.getRootElement() || '/@columns').getNumberVal();
select appendChildXML(STORAGE_MODEL,'/' || STORAGE_MODEL.getRootElement(),ATTRIBUTES)
into STORAGE_MODEL
from DUAL;
for t in FIND_EXTENDED_TYPES loop
EXTENDED_TYPE := expandSQLType('ExtendedType',T.TYPE_NAME,T.OWNER);
ATTR_COUNT := ATTR_COUNT + EXTENDED_TYPE.extract('/' || EXTENDED_TYPE.getRootElement() || '/@columns').getNumberVal();
select appendChildXML(STORAGE_MODEL,'/' || STORAGE_MODEL.getRootElement(),EXTENDED_TYPE)
into STORAGE_MODEL
from DUAL;
end loop;
select insertChildXML(STORAGE_MODEL,'/' || STORAGE_MODEL.getRootElement(),'@columns',ATTR_COUNT)
into STORAGE_MODEL
from dual;
return STORAGE_MODEL;
end;
function getLocalAttributes(P_TYPE_NAME varchar2, P_TYPE_OWNER VARCHAR2)
return XMLType
as
V_ATTRIBUTE_COUNT NUMBER := 0;
V_TOTAL_ATTRIBUTES NUMBER := 0;
V_TEMP_RESULT NUMBER;
V_COLLECTION_TYPE varchar2(32);
V_COLLECTION_OWNER varchar2(32);
CURSOR FIND_CHILD_ATTRS
is
select ATTR_NAME, ATTR_TYPE_OWNER, ATTR_TYPE_NAME, INHERITED
from ALL_TYPE_ATTRS
where TYPE_NAME = P_TYPE_NAME
and OWNER = P_TYPE_OWNER
and INHERITED = 'NO'
order by ATTR_NO;
V_ATTR DBMS_XMLDOM.DOMATTR;
V_ATTRIBUTE_LIST XMLTYPE;
V_ATTRIBUTE_LIST_DOCUMENT DBMS_XMLDOM.DOMDOCUMENT;
V_ATTRIBUTE_LIST_ROOT DBMS_XMLDOM.DOMELEMENT;
V_ATTRIBUTE XMLTYPE;
V_ATTRIBUTE_DOCUMENT DBMS_XMLDOM.DOMDOCUMENT;
V_ATTRIBUTE_ROOT DBMS_XMLDOM.DOMELEMENT;
V_TYPE_DEFINITION XMLTYPE;
V_TYPE_DEFINITION_DOCUMENT DBMS_XMLDOM.DOMDOCUMENT;
V_TYPE_DEFINITION_ROOT DBMS_XMLDOM.DOMELEMENT;
begin
V_ATTRIBUTE_LIST := makeElement('Attributes');
V_ATTRIBUTE_LIST_DOCUMENT := DBMS_XMLDOM.NEWDOMDOCUMENT(V_ATTRIBUTE_LIST);
V_ATTRIBUTE_LIST_ROOT := DBMS_XMLDOM.GETDOCUMENTELEMENT(V_ATTRIBUTE_LIST_DOCUMENT);
for ATTR in FIND_CHILD_ATTRS loop
-- Finding Element / Attribute Name could be tricky. Use SQLName
V_ATTRIBUTE := makeElement(ATTR.ATTR_NAME);
V_ATTRIBUTE_DOCUMENT := DBMS_XMLDOM.NEWDOMDOCUMENT(V_ATTRIBUTE);
V_ATTRIBUTE_ROOT := DBMS_XMLDOM.GETDOCUMENTELEMENT(V_ATTRIBUTE_DOCUMENT);
begin
-- Check for Attributes based on collection types, With Nested Table storage each Collection will cost 2 columns.
select ELEM_TYPE_NAME, ELEM_TYPE_OWNER
into V_COLLECTION_TYPE, V_COLLECTION_OWNER
from ALL_COLL_TYPES
where TYPE_NAME = ATTR.ATTR_TYPE_NAME
and OWNER = ATTR.ATTR_TYPE_OWNER;
-- -- dbms_output.put_line('Adding "' || ATTR.ATTR_NAME || '". Collection of "' || ATTR.ATTR_TYPE_OWNER || '"."' || ATTR.ATTR_TYPE_NAME || '".');
-- Attribute is a Collection Type.
-- Collection will be managed as a NESTED TABLE
-- Each Collection cost 2 columns.
-- May want to count the number of columns in the NESTED_TABLE at a later date.
V_ATTR := DBMS_XMLDOM.CREATEATTRIBUTE(V_ATTRIBUTE_DOCUMENT,'SQLCollType');
DBMS_XMLDOM.SETVALUE(V_ATTR,ATTR.ATTR_TYPE_NAME);
V_ATTR := DBMS_XMLDOM.SETATTRIBUTENODE(V_ATTRIBUTE_ROOT,V_ATTR);
V_ATTR := DBMS_XMLDOM.CREATEATTRIBUTE(V_ATTRIBUTE_DOCUMENT,'SQLCollTypeOwner');
DBMS_XMLDOM.SETVALUE(V_ATTR,ATTR.ATTR_TYPE_OWNER);
V_ATTR := DBMS_XMLDOM.SETATTRIBUTENODE(V_ATTRIBUTE_ROOT,V_ATTR);
V_ATTR := DBMS_XMLDOM.CREATEATTRIBUTE(V_ATTRIBUTE_DOCUMENT,'SQLType');
DBMS_XMLDOM.SETVALUE(V_ATTR,V_COLLECTION_TYPE);
V_ATTR := DBMS_XMLDOM.SETATTRIBUTENODE(V_ATTRIBUTE_ROOT,V_ATTR);
V_ATTR := DBMS_XMLDOM.CREATEATTRIBUTE(V_ATTRIBUTE_DOCUMENT,'SQLTypeOwner');
DBMS_XMLDOM.SETVALUE(V_ATTR,V_COLLECTION_OWNER);
V_ATTR := DBMS_XMLDOM.SETATTRIBUTENODE(V_ATTRIBUTE_ROOT,V_ATTR);
V_ATTR := DBMS_XMLDOM.CREATEATTRIBUTE(V_ATTRIBUTE_DOCUMENT,'columns');
DBMS_XMLDOM.SETVALUE(V_ATTR,2);
V_ATTR := DBMS_XMLDOM.SETATTRIBUTENODE(V_ATTRIBUTE_ROOT,V_ATTR);
exception
when no_data_found then
-- Attribute is not a collection type.
begin
-- Check for Attributes based on non-scalar types.
select 1
into V_TEMP_RESULT
from ALL_TYPES
where TYPE_NAME = ATTR.ATTR_TYPE_NAME
and OWNER = ATTR.ATTR_TYPE_OWNER;
-- Attribute is based on a non-scalar type. Find the Storage Model for this type.
V_ATTR := DBMS_XMLDOM.CREATEATTRIBUTE(V_ATTRIBUTE_DOCUMENT,'SQLType');
DBMS_XMLDOM.SETVALUE(V_ATTR,ATTR.ATTR_TYPE_NAME);
V_ATTR := DBMS_XMLDOM.SETATTRIBUTENODE(V_ATTRIBUTE_ROOT,V_ATTR);
V_ATTR := DBMS_XMLDOM.CREATEATTRIBUTE(V_ATTRIBUTE_DOCUMENT,'SQLTypeOwner');
DBMS_XMLDOM.SETVALUE(V_ATTR,ATTR.ATTR_TYPE_OWNER);
V_ATTR := DBMS_XMLDOM.SETATTRIBUTENODE(V_ATTRIBUTE_ROOT,V_ATTR);
V_TYPE_DEFINITION := findStorageModel(ATTR.ATTR_TYPE_NAME, ATTR.ATTR_TYPE_OWNER, 'YES');
V_TYPE_DEFINITION_DOCUMENT := DBMS_XMLDOM.NEWDOMDOCUMENT(V_TYPE_DEFINITION);
V_TYPE_DEFINITION_ROOT := DBMS_XMLDOM.GETDOCUMENTELEMENT(V_TYPE_DEFINITION_DOCUMENT);
V_ATTRIBUTE_COUNT := DBMS_XMLDOM.GETATTRIBUTE(V_TYPE_DEFINITION_ROOT,'columns');
V_TYPE_DEFINITION_ROOT := DBMS_XMLDOM.MAKEELEMENT(DBMS_XMLDOM.IMPORTNODE(V_ATTRIBUTE_DOCUMENT,DBMS_XMLDOM.MAKENODE(V_TYPE_DEFINITION_ROOT),TRUE));
V_TYPE_DEFINITION_ROOT := DBMS_XMLDOM.MAKEELEMENT(DBMS_XMLDOM.APPENDCHILD(DBMS_XMLDOM.MAKENODE(V_ATTRIBUTE_ROOT),DBMS_XMLDOM.MAKENODE(V_TYPE_DEFINITION_ROOT)));
DBMS_XMLDOM.FREEDOCUMENT(V_TYPE_DEFINITION_DOCUMENT);
if (ATTR.ATTR_TYPE_NAME = 'XDB$ENUM_T' and ATTR.ATTR_TYPE_OWNER = 'XDB') then
-- The cost of a XDB$ENUM_T is 2 columns
V_ATTR := DBMS_XMLDOM.CREATEATTRIBUTE(V_ATTRIBUTE_DOCUMENT,'columns');
DBMS_XMLDOM.SETVALUE(V_ATTR,2);
V_ATTR := DBMS_XMLDOM.SETATTRIBUTENODE(V_ATTRIBUTE_ROOT,V_ATTR);
else
-- The cost of a non scalar Type is the number of attributes plus one for Type and one for the TYPEID.
V_ATTR := DBMS_XMLDOM.CREATEATTRIBUTE(V_ATTRIBUTE_DOCUMENT,'columns');
DBMS_XMLDOM.SETVALUE(V_ATTR,V_ATTRIBUTE_COUNT + 2);
V_ATTR := DBMS_XMLDOM.SETATTRIBUTENODE(V_ATTRIBUTE_ROOT,V_ATTR);
end if;
exception
when no_data_found then
-- Attribute is based on a scalar type
V_ATTR := DBMS_XMLDOM.CREATEATTRIBUTE(V_ATTRIBUTE_DOCUMENT,'SQLType');
DBMS_XMLDOM.SETVALUE(V_ATTR,ATTR.ATTR_TYPE_NAME);
V_ATTR := DBMS_XMLDOM.SETATTRIBUTENODE(V_ATTRIBUTE_ROOT,V_ATTR);
V_ATTR := DBMS_XMLDOM.CREATEATTRIBUTE(V_ATTRIBUTE_DOCUMENT,'columns');
DBMS_XMLDOM.SETVALUE(V_ATTR,1);
V_ATTR := DBMS_XMLDOM.SETATTRIBUTENODE(V_ATTRIBUTE_ROOT,V_ATTR);
end;
end;
V_TOTAL_ATTRIBUTES := V_TOTAL_ATTRIBUTES + DBMS_XMLDOM.GETATTRIBUTE(V_ATTRIBUTE_ROOT,'columns');
V_ATTRIBUTE_ROOT := DBMS_XMLDOM.MAKEELEMENT(DBMS_XMLDOM.IMPORTNODE(V_ATTRIBUTE_LIST_DOCUMENT,DBMS_XMLDOM.MAKENODE(V_ATTRIBUTE_ROOT),TRUE));
V_ATTRIBUTE_ROOT := DBMS_XMLDOM.MAKEELEMENT(DBMS_XMLDOM.APPENDCHILD(DBMS_XMLDOM.MAKENODE(V_ATTRIBUTE_LIST_ROOT),DBMS_XMLDOM.MAKENODE(V_ATTRIBUTE_ROOT)));
DBMS_XMLDOM.FREEDOCUMENT(V_ATTRIBUTE_DOCUMENT);
if (V_TOTAL_ATTRIBUTES > 25000) then
exit;
end if;
end loop;
V_ATTR := DBMS_XMLDOM.CREATEATTRIBUTE(V_ATTRIBUTE_LIST_DOCUMENT,'columns');
DBMS_XMLDOM.SETVALUE(V_ATTR,V_TOTAL_ATTRIBUTES);
V_ATTR := DBMS_XMLDOM.SETATTRIBUTENODE(V_ATTRIBUTE_LIST_ROOT,V_ATTR);
return V_ATTRIBUTE_LIST;
end;
function getSubTypes(P_TYPE_NAME VARCHAR2, P_TYPE_OWNER VARCHAR2)
return XMLType
as
CURSOR FIND_SUBTYPES
is
select TYPE_NAME, OWNER
from ALL_TYPES
where SUPERTYPE_NAME = P_TYPE_NAME
and SUPERTYPE_OWNER = P_TYPE_OWNER;
CURSOR FIND_SUBTYPE_HEIRARCHY
is
select LEVEL, TYPE_NAME, OWNER
from ALL_TYPES
where TYPE_NAME <> P_TYPE_NAME
and OWNER <> P_TYPE_OWNER
connect by SUPERTYPE_NAME = PRIOR TYPE_NAME
and SUPERTYPE_OWNER = PRIOR OWNER
start with TYPE_NAME = P_TYPE_NAME
and OWNER = P_TYPE_OWNER;
V_SUBTYPE_LIST XMLType;
V_SUBTYPE_LIST_DOCUMENT DBMS_XMLDOM.DOMDOCUMENT;
V_SUBTYPE_LIST_ROOT DBMS_XMLDOM.DOMELEMENT;
V_TYPE_DEFINITION XMLType;
V_TYPE_DEFINITION_DOCUMENT DBMS_XMLDOM.DOMDOCUMENT;
V_TYPE_DEFINITION_ROOT DBMS_XMLDOM.DOMELEMENT;
V_SUBTYPE_DEFINITIONS XMLType;
V_SUBTYPE_DEFINITIONS_DOCUMENT DBMS_XMLDOM.DOMDOCUMENT;
V_SUBTYPE_DEFINITIONS_ROOT DBMS_XMLDOM.DOMELEMENT;
V_ATTRIBUTE_LIST XMLType;
V_ATTRIBUTE_LIST_DOCUMENT DBMS_XMLDOM.DOMDOCUMENT;
V_ATTRIBUTE_LIST_ROOT DBMS_XMLDOM.DOMELEMENT;
V_SUBTYPES_EXIST BOOLEAN := FALSE;
V_TOTAL_columns number;
V_ATTRIBUTE_COUNT number;
V_ATTR DBMS_XMLDOM.DOMATTR;
V_COMPLEX_TYPE VARCHAR2(256);
begin
V_SUBTYPE_LIST := makeElement('SubTypeDefinitions');
V_SUBTYPE_LIST_DOCUMENT := DBMS_XMLDOM.NEWDOMDOCUMENT(V_SUBTYPE_LIST);
V_SUBTYPE_LIST_ROOT := DBMS_XMLDOM.GETDOCUMENTELEMENT(V_SUBTYPE_LIST_DOCUMENT);
V_TOTAL_columns := 0;
for t in FIND_SUBTYPES loop
V_SUBTYPES_EXIST := TRUE;
V_TYPE_DEFINITION := makeElement(t.TYPE_NAME);
V_TYPE_DEFINITION_DOCUMENT := DBMS_XMLDOM.NEWDOMDOCUMENT(V_TYPE_DEFINITION);
V_TYPE_DEFINITION_ROOT := DBMS_XMLDOM.GETDOCUMENTELEMENT(V_TYPE_DEFINITION_DOCUMENT);
V_ATTR := DBMS_XMLDOM.CREATEATTRIBUTE(V_TYPE_DEFINITION_DOCUMENT,'SQLTypeOwner');
DBMS_XMLDOM.SETVALUE(V_ATTR,t.OWNER);
V_ATTR := DBMS_XMLDOM.SETATTRIBUTENODE(V_TYPE_DEFINITION_ROOT,V_ATTR);
begin
select x.XMLDATA.NAME
into V_COMPLEX_TYPE
from XDB.XDB$COMPLEX_TYPE x
where x.XMLDATA.SQLTYPE = t.TYPE_NAME
and x.XMLDATA.SQLSCHEMA = t.OWNER;
V_ATTR := DBMS_XMLDOM.CREATEATTRIBUTE(V_TYPE_DEFINITION_DOCUMENT,'type');
DBMS_XMLDOM.SETVALUE(V_ATTR,V_COMPLEX_TYPE);
V_ATTR := DBMS_XMLDOM.SETATTRIBUTENODE(V_TYPE_DEFINITION_ROOT,V_ATTR);
-- Consider adding Schema URL Attribute
exception
when no_data_found then
null;
when others then
raise;
end;
V_ATTRIBUTE_LIST := getLocalAttributes(t.TYPE_NAME, t.OWNER);
V_ATTRIBUTE_LIST_DOCUMENT := DBMS_XMLDOM.NEWDOMDOCUMENT(V_ATTRIBUTE_LIST);
V_ATTRIBUTE_LIST_ROOT := DBMS_XMLDOM.GETDOCUMENTELEMENT(V_ATTRIBUTE_LIST_DOCUMENT);
V_ATTRIBUTE_COUNT := DBMS_XMLDOM.GETATTRIBUTE(V_ATTRIBUTE_LIST_ROOT,'columns');
V_ATTRIBUTE_LIST_ROOT := DBMS_XMLDOM.MAKEELEMENT(DBMS_XMLDOM.IMPORTNODE(V_TYPE_DEFINITION_DOCUMENT,DBMS_XMLDOM.MAKENODE(V_ATTRIBUTE_LIST_ROOT),TRUE));
V_ATTRIBUTE_LIST_ROOT := DBMS_XMLDOM.MAKEELEMENT(DBMS_XMLDOM.APPENDCHILD(DBMS_XMLDOM.MAKENODE(V_TYPE_DEFINITION_ROOT),DBMS_XMLDOM.MAKENODE(V_ATTRIBUTE_LIST_ROOT)));
DBMS_XMLDOM.FREEDOCUMENT(V_ATTRIBUTE_LIST_DOCUMENT);
V_SUBTYPE_DEFINITIONS := getSubTypes(t.TYPE_NAME,t.OWNER);
if (V_SUBTYPE_DEFINITIONS is not NULL) then
V_SUBTYPE_DEFINITIONS_DOCUMENT := DBMS_XMLDOM.NEWDOMDOCUMENT(V_SUBTYPE_DEFINITIONS);
V_SUBTYPE_DEFINITIONS_ROOT := DBMS_XMLDOM.GETDOCUMENTELEMENT(V_SUBTYPE_DEFINITIONS_DOCUMENT);
V_ATTRIBUTE_COUNT := V_ATTRIBUTE_COUNT + DBMS_XMLDOM.GETATTRIBUTE(V_SUBTYPE_DEFINITIONS_ROOT,'columns');
V_SUBTYPE_DEFINITIONS_ROOT := DBMS_XMLDOM.MAKEELEMENT(DBMS_XMLDOM.IMPORTNODE(V_TYPE_DEFINITION_DOCUMENT,DBMS_XMLDOM.MAKENODE(V_SUBTYPE_DEFINITIONS_ROOT),TRUE));
V_SUBTYPE_DEFINITIONS_ROOT := DBMS_XMLDOM.MAKEELEMENT(DBMS_XMLDOM.APPENDCHILD(DBMS_XMLDOM.MAKENODE(V_TYPE_DEFINITION_ROOT),DBMS_XMLDOM.MAKENODE(V_SUBTYPE_DEFINITIONS_ROOT)));
end if;
V_ATTR := DBMS_XMLDOM.CREATEATTRIBUTE(V_TYPE_DEFINITION_DOCUMENT,'columns');
DBMS_XMLDOM.SETVALUE(V_ATTR,V_ATTRIBUTE_COUNT);
V_ATTR := DBMS_XMLDOM.SETATTRIBUTENODE(V_TYPE_DEFINITION_ROOT,V_ATTR);
V_TYPE_DEFINITION_ROOT := DBMS_XMLDOM.MAKEELEMENT(DBMS_XMLDOM.IMPORTNODE(V_SUBTYPE_LIST_DOCUMENT,DBMS_XMLDOM.MAKENODE(V_TYPE_DEFINITION_ROOT),TRUE));
V_TYPE_DEFINITION_ROOT := DBMS_XMLDOM.MAKEELEMENT(DBMS_XMLDOM.APPENDCHILD(DBMS_XMLDOM.MAKENODE(V_SUBTYPE_LIST_ROOT),DBMS_XMLDOM.MAKENODE(V_TYPE_DEFINITION_ROOT)));
V_TOTAL_columns := V_TOTAL_columns + V_ATTRIBUTE_COUNT;
end loop;
if (V_SUBTYPES_EXIST) then
V_ATTR := DBMS_XMLDOM.CREATEATTRIBUTE(V_SUBTYPE_LIST_DOCUMENT,'columns');
DBMS_XMLDOM.SETVALUE(V_ATTR,V_TOTAL_columns);
V_ATTR := DBMS_XMLDOM.SETATTRIBUTENODE(V_SUBTYPE_LIST_ROOT,V_ATTR);
return V_SUBTYPE_LIST;
else
return NULL;
end if;
end;
function findSuperTypeModel(P_TYPE_NAME VARCHAR2, P_TYPE_OWNER VARCHAR2)
return XMLType
as
begin
-- dbms_output.put_line('Processing Super Type : "' || P_TYPE_OWNER || '"."' || P_TYPE_NAME || '"');
return findStorageModel(P_TYPE_NAME, P_TYPE_OWNER,'NO');
end;
function getStorageModel(P_TYPE_NAME VARCHAR2, P_TYPE_OWNER VARCHAR2, P_INCLUDE_SUBTYPES VARCHAR2 DEFAULT 'YES')
return XMLType
as
V_TYPE_DEFINITION XMLTYPE;
V_ATTRIBUTE_COUNT NUMBER := 0;
SUBTYPE_STORAGE_MODEL XMLTYPE;
V_SUPERTYPE_DEFINITION XMLTYPE;
V_SUPERTYPE_DOCUMENT DBMS_XMLDOM.DOMDOCUMENT;
V_SUPERTYPE_ROOT DBMS_XMLDOM.DOMELEMENT;
V_SUBTYPE_DEFINITION XMLTYPE;
V_SUBTYPE_DOCUMENT DBMS_XMLDOM.DOMDOCUMENT;
V_SUBTYPE_ROOT DBMS_XMLDOM.DOMELEMENT;
V_ATTRIBUTE_LIST XMLTYPE;
V_ATTRIBUTE_LIST_DOCUMENT DBMS_XMLDOM.DOMDOCUMENT;
V_ATTRIBUTE_LIST_ROOT DBMS_XMLDOM.DOMELEMENT;
cursor FIND_SUPERTYPE_HEIRARCHY
is
select TYPE_NAME, OWNER
from ALL_TYPES
where TYPE_NAME <> P_TYPE_NAME
and OWNER <> P_TYPE_OWNER
connect by TYPE_NAME = PRIOR SUPERTYPE_NAME
and OWNER = PRIOR SUPERTYPE_OWNER
start with TYPE_NAME = P_TYPE_NAME
and OWNER = P_TYPE_OWNER
order by LEVEL;
V_COMPLEX_TYPE varchar2(256);
V_SUPERTYPE_NAME varchar2(256);
v_SUPERTYPE_OWNER varchar2(256);
V_DOCUMENT DBMS_XMLDOM.DOMDOCUMENT;
V_ROOT DBMS_XMLDOM.DOMELEMENT;
V_ATTR DBMS_XMLDOM.DOMATTR;
begin
-- dbms_output.put_line('Generating Storage Model for : "' || P_TYPE_OWNER || '"."' || P_TYPE_NAME || '"');
V_TYPE_DEFINITION := makeElement(P_TYPE_NAME);
V_DOCUMENT := DBMS_XMLDOM.NEWDOMDOCUMENT(V_TYPE_DEFINITION);
V_ROOT := DBMS_XMLDOM.GETDOCUMENTELEMENT(V_DOCUMENT);
V_ATTR := DBMS_XMLDOM.CREATEATTRIBUTE(V_DOCUMENT,'SQLTypeOwner');
DBMS_XMLDOM.SETVALUE(V_ATTR,P_TYPE_OWNER);
V_ATTR := DBMS_XMLDOM.SETATTRIBUTENODE(V_ROOT,V_ATTR);
begin
select x.XMLDATA.NAME
into V_COMPLEX_TYPE
from XDB.XDB$COMPLEX_TYPE x
where x.XMLDATA.SQLTYPE = P_TYPE_NAME
and x.XMLDATA.SQLSCHEMA = P_TYPE_OWNER;
V_ATTR := DBMS_XMLDOM.CREATEATTRIBUTE(V_DOCUMENT,'type');
DBMS_XMLDOM.SETVALUE(V_ATTR,V_COMPLEX_TYPE);
V_ATTR := DBMS_XMLDOM.SETATTRIBUTENODE(V_ROOT,V_ATTR);
-- Consider adding Schema URL Attribute
exception
when no_data_found then
null;
when others then
raise;
end;
select SUPERTYPE_NAME, SUPERTYPE_OWNER
into V_SUPERTYPE_NAME, V_SUPERTYPE_OWNER
from ALL_TYPES
where TYPE_NAME = P_TYPE_NAME
and OWNER = P_TYPE_OWNER;
-- Process SuperType.
if (V_SUPERTYPE_NAME is not null) then
V_ATTR := DBMS_XMLDOM.CREATEATTRIBUTE(V_DOCUMENT,'SQLParentType');
DBMS_XMLDOM.SETVALUE(V_ATTR,V_SUPERTYPE_NAME);
V_ATTR := DBMS_XMLDOM.SETATTRIBUTENODE(V_ROOT,V_ATTR);
V_ATTR := DBMS_XMLDOM.CREATEATTRIBUTE(V_DOCUMENT,'SQLParentTypeOwner');
DBMS_XMLDOM.SETVALUE(V_ATTR,V_SUPERTYPE_OWNER);
V_ATTR := DBMS_XMLDOM.SETATTRIBUTENODE(V_ROOT,V_ATTR);
-- Find the Definition for the super type. Do not include the definition of it's subtypes.
V_SUPERTYPE_DEFINITION := findSuperTypeModel(V_SUPERTYPE_NAME, V_SUPERTYPE_OWNER);
-- -- dbms_output.put_line(dbms_lob.substr(V_SUPERTYPE_DEFINITION.getClobVal(),1000,1));
V_SUPERTYPE_DOCUMENT := DBMS_XMLDOM.NEWDOMDOCUMENT(V_SUPERTYPE_DEFINITION);
V_SUPERTYPE_ROOT := DBMS_XMLDOM.GETDOCUMENTELEMENT(V_SUPERTYPE_DOCUMENT);
V_ATTRIBUTE_COUNT := V_ATTRIBUTE_COUNT + DBMS_XMLDOM.GETATTRIBUTE(V_SUPERTYPE_ROOT,'columns');
V_SUPERTYPE_ROOT := DBMS_XMLDOM.MAKEELEMENT(DBMS_XMLDOM.IMPORTNODE(V_DOCUMENT,DBMS_XMLDOM.MAKENODE(V_SUPERTYPE_ROOT),TRUE));
V_SUPERTYPE_ROOT := DBMS_XMLDOM.MAKEELEMENT(DBMS_XMLDOM.APPENDCHILD(DBMS_XMLDOM.MAKENODE(V_ROOT),DBMS_XMLDOM.MAKENODE(V_SUPERTYPE_ROOT)));
DBMS_XMLDOM.FREEDOCUMENT(V_SUPERTYPE_DOCUMENT);
end if;
-- Process Attributes defined directly by the Type.
V_ATTRIBUTE_LIST := getLocalAttributes(P_TYPE_NAME, P_TYPE_OWNER);
V_ATTRIBUTE_LIST_DOCUMENT := DBMS_XMLDOM.NEWDOMDOCUMENT(V_ATTRIBUTE_LIST);
V_ATTRIBUTE_LIST_ROOT := DBMS_XMLDOM.GETDOCUMENTELEMENT(V_ATTRIBUTE_LIST_DOCUMENT);
V_ATTRIBUTE_COUNT := V_ATTRIBUTE_COUNT + DBMS_XMLDOM.GETATTRIBUTE(V_ATTRIBUTE_LIST_ROOT,'columns');
V_ATTRIBUTE_LIST_ROOT := DBMS_XMLDOM.MAKEELEMENT(DBMS_XMLDOM.IMPORTNODE(V_DOCUMENT,DBMS_XMLDOM.MAKENODE(V_ATTRIBUTE_LIST_ROOT),TRUE));
V_ATTRIBUTE_LIST_ROOT := DBMS_XMLDOM.MAKEELEMENT(DBMS_XMLDOM.APPENDCHILD(DBMS_XMLDOM.MAKENODE(V_ROOT),DBMS_XMLDOM.MAKENODE(V_ATTRIBUTE_LIST_ROOT)));
DBMS_XMLDOM.FREEDOCUMENT(V_ATTRIBUTE_LIST_DOCUMENT);
if (P_INCLUDE_SUBTYPES = 'YES') then
-- Process any Sub-Types...
V_SUBTYPE_DEFINITION := getSubTypes(P_TYPE_NAME, P_TYPE_OWNER);
if (V_SUBTYPE_DEFINITION is not null) then
V_SUBTYPE_DOCUMENT := DBMS_XMLDOM.NEWDOMDOCUMENT(V_SUBTYPE_DEFINITION);
V_SUBTYPE_ROOT := DBMS_XMLDOM.GETDOCUMENTELEMENT(V_SUBTYPE_DOCUMENT);
V_ATTRIBUTE_COUNT := V_ATTRIBUTE_COUNT + DBMS_XMLDOM.GETATTRIBUTE(V_SUBTYPE_ROOT,'columns');
V_SUBTYPE_ROOT := DBMS_XMLDOM.MAKEELEMENT(DBMS_XMLDOM.IMPORTNODE(V_DOCUMENT,DBMS_XMLDOM.MAKENODE(V_SUBTYPE_ROOT),TRUE));
V_SUBTYPE_ROOT := DBMS_XMLDOM.MAKEELEMENT(DBMS_XMLDOM.APPENDCHILD(DBMS_XMLDOM.MAKENODE(V_ROOT),DBMS_XMLDOM.MAKENODE(V_SUBTYPE_ROOT)));
DBMS_XMLDOM.FREEDOCUMENT(V_SUBTYPE_DOCUMENT);
end if;
end if;
V_ATTR := DBMS_XMLDOM.CREATEATTRIBUTE(V_DOCUMENT,'columns');
DBMS_XMLDOM.SETVALUE(V_ATTR,V_ATTRIBUTE_COUNT);
V_ATTR := DBMS_XMLDOM.SETATTRIBUTENODE(V_ROOT,V_ATTR);
-- Cache the type definition.
-- dbms_output.put_line('Cached Storage Model for "' || P_TYPE_OWNER || '.' || P_TYPE_NAME || '".');
insert into XDBPM.XDBPM_STORAGE_MODEL_CACHE (TYPE_NAME, TYPE_OWNER, EXTENDED_DEFINITION, STORAGE_MODEL) VALUES (P_TYPE_NAME, P_TYPE_OWNER, P_INCLUDE_SUBTYPES, V_TYPE_DEFINITION);
return V_TYPE_DEFINITION;
end;
function findStorageModel(P_TYPE_NAME VARCHAR2, P_TYPE_OWNER VARCHAR2, P_INCLUDE_SUBTYPES VARCHAR2 DEFAULT 'YES')
-- Find the Storage Model for the Base Type.
-- If the type is derived from another type we need the storage model of the Base Type
-- As storage models are calculated they are cached in the global temporary table XDBPM_STORAGE_MODEL_CACHE. This makes
-- the process much more efficient. A global temporary table is used to minimize memory usage.
return XMLType
as
V_STORAGE_MODEL XMLType;
V_STORAGE_MODEL_DOCUMENT DBMS_XMLDOM.DOMDOCUMENT;
V_STORAGE_MODEL_ROOT DBMS_XMLDOM.DOMELEMENT;
V_ATTRIBUTE_COUNT VARCHAR2(10);
begin
dbms_output.put_line('findStorageModel(' || G_DEPTH_COUNT || ') : Processing "' || P_TYPE_OWNER || '"."' || P_TYPE_NAME || '".' );
begin
SELECT STORAGE_MODEL
into V_STORAGE_MODEL
from XDBPM.XDBPM_STORAGE_MODEL_CACHE
where TYPE_NAME = P_TYPE_NAME
and TYPE_OWNER = P_TYPE_OWNER
and EXTENDED_DEFINITION = P_INCLUDE_SUBTYPES;
-- dbms_output.put_line('Resolved Storage Model from cache.');
exception
when no_data_found then
G_DEPTH_COUNT := G_DEPTH_COUNT + 1;
V_STORAGE_MODEL := getStorageModel(P_TYPE_NAME,P_TYPE_OWNER, P_INCLUDE_SUBTYPES);
G_DEPTH_COUNT := G_DEPTH_COUNT - 1;
when others then
raise;
end;
V_STORAGE_MODEL_DOCUMENT := DBMS_XMLDOM.NEWDOMDOCUMENT(V_STORAGE_MODEL);
V_STORAGE_MODEL_ROOT := DBMS_XMLDOM.GETDOCUMENTELEMENT(V_STORAGE_MODEL_DOCUMENT);
V_ATTRIBUTE_COUNT := DBMS_XMLDOM.GETATTRIBUTE(V_STORAGE_MODEL_ROOT,'columns');
dbms_output.put_line('findStorageModel : Attribute Count for "' || P_TYPE_OWNER || '"."' || P_TYPE_NAME || '" = ' || V_ATTRIBUTE_COUNT || '.' );
return V_STORAGE_MODEL;
end;
function analyzeStorageModel(P_COMPLEX_TYPE_NAME VARCHAR2, P_TYPE_NAME VARCHAR2, P_TYPE_OWNER VARCHAR2)
-- Generate a map showing the number of columns required to persist an instance of the SQL type.
return XMLType
as
V_STORAGE_MODEL XMLTYPE;
V_COUNT NUMBER := 0;
V_DOCUMENT DBMS_XMLDOM.DOMDOCUMENT;
V_ROOT DBMS_XMLDOM.DOMELEMENT;
V_ATTR DBMS_XMLDOM.DOMATTR;
V_MODEL DBMS_XMLDOM.DOMELEMENT;
V_TYPE_DEFINITION DBMS_XMLDOM.DOMELEMENT;
begin
V_STORAGE_MODEL := makeElement(P_COMPLEX_TYPE_NAME);
V_DOCUMENT := DBMS_XMLDOM.NEWDOMDOCUMENT(V_STORAGE_MODEL);
V_ROOT := DBMS_XMLDOM.GETDOCUMENTELEMENT(V_DOCUMENT);
V_ATTR := DBMS_XMLDOM.CREATEATTRIBUTE(V_DOCUMENT,'SQLType');
DBMS_XMLDOM.SETVALUE(V_ATTR,P_TYPE_NAME);
V_ATTR := DBMS_XMLDOM.SETATTRIBUTENODE(V_ROOT,V_ATTR);
V_ATTR := DBMS_XMLDOM.CREATEATTRIBUTE(V_DOCUMENT,'SQLTypeOwner');
DBMS_XMLDOM.SETVALUE(V_ATTR,P_TYPE_OWNER);
V_ATTR := DBMS_XMLDOM.SETATTRIBUTENODE(V_ROOT,V_ATTR);
V_TYPE_DEFINITION := DBMS_XMLDOM.GETDOCUMENTELEMENT(DBMS_XMLDOM.NEWDOMDOCUMENT(findStorageModel(P_TYPE_NAME, P_TYPE_OWNER, 'YES')));
V_TYPE_DEFINITION := DBMS_XMLDOM.MAKEELEMENT(DBMS_XMLDOM.IMPORTNODE(V_DOCUMENT,DBMS_XMLDOM.MAKENODE(V_TYPE_DEFINITION),TRUE));
V_TYPE_DEFINITION := DBMS_XMLDOM.MAKEELEMENT(DBMS_XMLDOM.APPENDCHILD(DBMS_XMLDOM.MAKENODE(V_ROOT),DBMS_XMLDOM.MAKENODE(V_TYPE_DEFINITION)));
V_ATTR := DBMS_XMLDOM.CREATEATTRIBUTE(V_DOCUMENT,'columns');
DBMS_XMLDOM.SETVALUE(V_ATTR,DBMS_XMLDOM.GETATTRIBUTE(V_TYPE_DEFINITION,'columns'));
V_ATTR := DBMS_XMLDOM.SETATTRIBUTENODE(V_ROOT,V_ATTR);
return V_STORAGE_MODEL;
end;
function analyzeStorageModel(P_COMPLEX_TYPE_NAME VARCHAR2)
return XMLTYPE
-- Generate a map showing the number of columns required to persist an instance of the complex type.
as
pragma autonomous_transaction;
V_SQLTYPE VARCHAR2(128);
V_SQLSCHEMA VARCHAR2(32);
V_RESULT XMLType;
begin
G_DEPTH_COUNT := 0;
select ct.XMLDATA.SQLTYPE, ct.XMLDATA.SQLSCHEMA
into V_SQLTYPE, V_SQLSCHEMA
from XDB.XDB$COMPLEX_TYPE ct, XDB.XDB$SCHEMA s
where ct.XMLDATA.NAME = P_COMPLEX_TYPE_NAME
and ref(s) = ct.XMLDATA.PARENT_SCHEMA
and s.XMLDATA.SCHEMA_OWNER = USER;
delete from XDBPM.XDBPM_STORAGE_MODEL_CACHE;
-- delete from XDBPM.XDBPM_STORAGE_MODEL;
V_RESULT := analyzeStorageModel(P_COMPLEX_TYPE_NAME,V_SQLTYPE,V_SQLSCHEMA);
COMMIT;
return V_RESULT;
exception
when no_data_found then
-- dbms_output.put_line('Unable to find SQLType mapping for complexType : "' || USER || '"."' || P_COMPLEX_TYPE_NAME || '".' );
return null;
when others then
raise;
end;
function analyzeSQLType(ATTR_NAME VARCHAR2, TARGET_TYPE_NAME VARCHAR2, TARGET_TYPE_OWNER VARCHAR2)
return XMLType
as
ROOT_NODE_NAME VARCHAR2(128);
ATTR_DETAIL XMLTYPE;
XPATH_EXPRESSION VARCHAR2(129);
CURSOR FIND_CHILD_ATTRS is
select ATTR_NAME, ATTR_TYPE_OWNER, ATTR_TYPE_NAME, INHERITED
from ALL_TYPE_ATTRS
where OWNER = TARGET_TYPE_OWNER
and TYPE_NAME = TARGET_TYPE_NAME
order by ATTR_NO;
CHILD_ATTR XMLTYPE;
ATTR_COUNT NUMBER := 0;
TEMP number;
COLLECTION_TYPE_NAME varchar2(256);
COLLECTION_TYPE_OWNER varchar2(256);
begin
-- -- dbms_output.put_line('Processing Attribute ' || ATTR_NAME || ' of ' || TARGET_TYPE_OWNER || '.' || TARGET_TYPE_NAME );
ATTR_DETAIL := makeElement(ATTR_NAME);
XPATH_EXPRESSION := '/' || ATTR_DETAIL.GETROOTELEMENT();
for ATTR in FIND_CHILD_ATTRS loop
begin
select ELEM_TYPE_NAME, ELEM_TYPE_OWNER
into COLLECTION_TYPE_NAME, COLLECTION_TYPE_OWNER
from ALL_COLL_TYPES
where TYPE_NAME = ATTR.ATTR_TYPE_NAME
and OWNER = ATTR.ATTR_TYPE_OWNER;
CHILD_ATTR := analyzeSQLType(ATTR.ATTR_NAME, COLLECTION_TYPE_NAME, COLLECTION_TYPE_OWNER );
ATTR_COUNT := ATTR_COUNT + CHILD_ATTR.extract('/' || CHILD_ATTR.GETROOTELEMENT() || '/@sqlAttrs').getNumberVal();
select appendChildXML(ATTR_DETAIL,XPATH_EXPRESSION,CHILD_ATTR)
into ATTR_DETAIL
from DUAL;
exception
when no_data_found then
begin
select 1
into TEMP
from ALL_TYPES
where TYPE_NAME = ATTR.ATTR_TYPE_NAME
and OWNER = ATTR.ATTR_TYPE_OWNER;
CHILD_ATTR := analyzeSQLType(ATTR.ATTR_NAME, ATTR.ATTR_TYPE_NAME, ATTR.ATTR_TYPE_OWNER );
ATTR_COUNT := ATTR_COUNT + CHILD_ATTR.extract('/' || CHILD_ATTR.GETROOTELEMENT() || '/@sqlAttrs').getNumberVal();
select appendChildXML(ATTR_DETAIL,XPATH_EXPRESSION,CHILD_ATTR)
into ATTR_DETAIL
from DUAL;
exception
when no_data_found then
ATTR_COUNT := ATTR_COUNT + 1;
end;
end;
end loop;
select insertChildXML(ATTR_DETAIL,XPATH_EXPRESSION,'@sqlAttrs',ATTR_COUNT)
into ATTR_DETAIL
from dual;
return ATTR_DETAIL;
end;
function analyzeComplexType(COMPLEX_TYPE VARCHAR2)
return XMLType
as
RESULT xmltype;
SQLTYPE varchar2(128);
SQLTYPE_OWNER varchar2(32);
begin
select SQLTYPE, SQLTYPE_OWNER
into SQLTYPE, SQLTYPE_OWNER
from USER_XML_SCHEMAS,
     xmlTable(
       xmlnamespaces(
         'http://www.w3.org/2001/XMLSchema' as "xsd",
         'http://xmlns.oracle.com/xdb' as "xdb"
       ),
       '/xsd:schema/xsd:complexType'
       passing Schema
       columns
         COMPLEX_TYPE_NAME varchar2(4000) path '@name',
         SQLTYPE           varchar2(128)  path '@xdb:SQLType',
         SQLTYPE_OWNER     varchar2(32)   path '@xdb:SQLSchema'
     )
where COMPLEX_TYPE_NAME = COMPLEX_TYPE;
result := analyzeSQLType(COMPLEX_TYPE,SQLTYPE,SQLTYPE_OWNER);
select insertChildXML(RESULT,'/' || COMPLEX_TYPE,'@SQLType',SQLTYPE)
into result
from dual;
return result;
end;
function showSQLTypes(schemaFolder varchar2) return XMLType
is
xmlSchema XMLTYPE;
begin
select xmlElement(
         "TypeList",
         xmlAgg(
           xmlElement(
             "Schema",
             xmlElement(
               "ResourceName",
               extractValue(res,'/Resource/DisplayName')
             ),
             xmlElement(
               "complexTypes",
               (select xmlAgg(
                         xmlElement(
                           "complexType",
                           xmlElement(
                             "name",
                             extractValue(value(XML),'/xsd:complexType/@name',XDB_NAMESPACES.XDBSCHEMA_PREFIXES)
                           ),
                           xmlElement(
                             "SQLType",
                             extractValue(value(XML),'/xsd:complexType/@xdb:SQLType',XDB_NAMESPACES.XDBSCHEMA_PREFIXES)
                           )
                         )
                       )
                  from table(
                         xmlsequence(
                           extract(
                             xdburitype(p.path).getXML(),
                             '/xsd:schema/xsd:complexType',
                             XDB_NAMESPACES.XDBSCHEMA_PREFIXES
                           )
                         )
                       ) xml
                 -- order by extractValue(value(XML),'/xsd:complexType/@name',XDB_NAMESPACES.XDBSCHEMA_PREFIXES)
               )
             )
           )
         )
       ).extract('/*')
  into xmlSchema
  from path_view p
 where under_path(res,schemaFolder) = 1
 order by extractValue(res,'/Resource/DisplayName');
return xmlSchema;
end;
procedure renameCollectionTable (XMLTABLE varchar2, XPATH varchar2, COLLECTION_TABLE_PREFIX varchar2)
as
SYSTEM_GENERATED_NAME varchar2(256);
COLLECTION_TABLE_NAME varchar2(256);
CLUSTERED_INDEX_NAME varchar2(256);
PARENT_INDEX_NAME varchar2(256);
RENAME_STATEMENT varchar2(4000);
begin
COLLECTION_TABLE_NAME := COLLECTION_TABLE_PREFIX || '_TABLE';
CLUSTERED_INDEX_NAME := COLLECTION_TABLE_PREFIX || '_DATA';
PARENT_INDEX_NAME := COLLECTION_TABLE_PREFIX || '_LIST';
select TABLE_NAME
into SYSTEM_GENERATED_NAME
from ALL_NESTED_TABLES
where PARENT_TABLE_NAME = XMLTABLE
and PARENT_TABLE_COLUMN = XPATH
and OWNER = USER;
RENAME_STATEMENT := 'alter table ' || USER || '."' || SYSTEM_GENERATED_NAME || '" rename to "' ||COLLECTION_TABLE_NAME || '"';
-- -- dbms_output.put_line(RENAME_STATEMENT);
execute immediate RENAME_STATEMENT;
begin
select INDEX_NAME
into SYSTEM_GENERATED_NAME
from ALL_INDEXES
where TABLE_NAME = COLLECTION_TABLE_NAME
and INDEX_TYPE = 'IOT - TOP'
and OWNER = USER;
RENAME_STATEMENT := 'alter index ' || USER || '."' || SYSTEM_GENERATED_NAME || '" rename to "' || CLUSTERED_INDEX_NAME || '"';
-- -- dbms_output.put_line(RENAME_STATEMENT);
execute immediate RENAME_STATEMENT;
exception
when NO_DATA_FOUND then
null;
end;
begin
select INDEX_NAME
into SYSTEM_GENERATED_NAME
from ALL_IND_columns
where COLUMN_NAME = XPATH
and TABLE_NAME = XMLTABLE
and TABLE_OWNER = USER;
RENAME_STATEMENT := 'alter index ' || USER || '."' || SYSTEM_GENERATED_NAME || '" rename to "' || PARENT_INDEX_NAME || '"';
-- -- dbms_output.put_line(RENAME_STATEMENT);
execute immediate RENAME_STATEMENT;
exception
when NO_DATA_FOUND then
null;
end;
end;
function processNestedTable(currentLevel in out number, currentNode in out XMLType, query SYS_REFCURSOR)
return XMLType
is
thisLevel number;
thisNode xmlType;
result xmlType;
begin
thisLevel := currentLevel;
thisNode := currentNode;
fetch query into currentLevel, currentNode;
if (query%NOTFOUND) then
currentLevel := -1;
end if;
while (currentLevel >= thisLevel) loop
-- Next Node is a decendant of sibling of this Node.
if (currentLevel > thisLevel) then
-- Next Node is a decendant of this Node.
result := processNestedTable(currentLevel, currentNode, query);
select xmlElement(
         "Collection",
         extract(thisNode,'/Collection/*'),
         xmlElement(
           "NestedCollections",
           result
         )
       )
  into thisNode
  from dual;
else
-- Next node is a sibling of this Node.
result := processNestedTable(currentLevel, currentNode, query);
select xmlconcat(thisNode,result) into thisNode from dual;
end if;
end loop;
-- Next Node is a sibling of some ancestor of this node.
return thisNode;
end;
function printNestedTables(XML_TABLE varchar2)
return XMLType
is
query SYS_REFCURSOR;
result XMLType;
rootLevel number := 0;
rootNode xmlType;
begin
open query for
select level, xmlElement(
                "Collection",
                xmlElement(
                  "CollectionId",
                  PARENT_TABLE_COLUMN
                )
              ) as XML
from USER_NESTED_TABLES
connect by PRIOR TABLE_NAME = PARENT_TABLE_NAME
start with PARENT_TABLE_NAME = XML_TABLE;
fetch query into rootLevel, rootNode;
result := processNestedTable(rootLevel, rootNode, query);
select xmlElement(
         "NestedTableStructure",
         result
       )
  into result
  from dual;
return result;
end;
function generateSchemaFromTable(P_TABLE_NAME varchar2, P_OWNER varchar2 default USER)
return XMLTYPE
as
xmlSchema XMLTYPE;
begin
select xmlElement(
         "xsd:schema",
         xmlAttributes(
           'http://www.w3.org/2001/XMLSchema' as "xmlns:xsd",
           'http://xmlns.oracle.com/xdb' as "xmlns:xdb"
         ),
         xmlElement(
           "xsd:element",
           xmlAttributes(
             'ROWSET' as "name",
             'rowset' as "type"
           )
         ),
         xmlElement(
           "xsd:complexType",
           xmlAttributes(
             'rowset' as "name"
           ),
           xmlElement(
             "xsd:sequence",
             xmlElement(
               "xsd:element",
               xmlAttributes(
                 'ROW' as "name",
                 table_name || '_T' as "type",
                 'unbounded' as "maxOccurs"
               )
             )
           )
         ),
         xmlElement(
           "xsd:complexType",
           xmlAttributes(
             table_name || '_T' as "name"
           ),
           xmlElement(
             "xsd:sequence",
             xmlAgg(ELEMENT order by INTERNAL_COLUMN_ID)
           )
         )
       )
  into xmlSchema
  from (select TABLE_NAME, INTERNAL_COLUMN_ID,
               case
                 when DATA_TYPE in ('VARCHAR2','CHAR') then
                   xmlElement(
                     "xsd:element",
                     xmlAttributes(
                       column_name as "name",
                       decode(NULLABLE, 'Y', 0, 1) as "minOccurs",
                       column_name as "xdb:SQLName",
                       DATA_TYPE as "xdb:SQLType"
                     ),
                     xmlElement(
                       "xsd:simpleType",
                       xmlElement(
                         "xsd:restriction",
                         xmlAttributes(
                           'xsd:string' as "base"
                         ),
                         xmlElement(
                           "xsd:maxLength",
                           xmlAttributes(
                             DATA_LENGTH as "value"
                           )
                         )
                       )
                     )
                   )
                 when DATA_TYPE = 'NUMBER' then
                   xmlElement(
                     "xsd:element",
                     xmlAttributes(
                       column_name as "name",
                       decode(NULLABLE, 'Y', 0, 1) as "minOccurs",
                       column_name as "xdb:SQLName",
                       DATA_TYPE as "xdb:SQLType"
                     ),
                     xmlElement(
                       "xsd:simpleType",
                       xmlElement(
                         "xsd:restriction",
                         xmlAttributes(
                           decode(DATA_SCALE, 0, 'xsd:integer', 'xsd:double') as "base"
                         ),
                         xmlElement(
                           "xsd:totalDigits",
                           xmlAttributes(
                             DATA_PRECISION as "value"
                           )
                         )
                       )
                     )
                   )
                 when DATA_TYPE = 'DATE' then
                   xmlElement(
                     "xsd:element",
                     xmlAttributes(
                       column_name as "name",
                       decode(NULLABLE, 'Y', 0, 1) as "minOccurs",
                       'xsd:date' as "type",
                       column_name as "xdb:SQLName",
                       DATA_TYPE as "xdb:SQLType"
                     )
                   )
                 when DATA_TYPE like 'TIMESTAMP%WITH TIME ZONE' then
                   xmlElement(
                     "xsd:element",
                     xmlAttributes(
                       column_name as "name",
                       decode(NULLABLE, 'Y', 0, 1) as "minOccurs",
                       'xsd:dateTime' as "type",
                       column_name as "xdb:SQLName",
                       DATA_TYPE as "xdb:SQLType"
                     )
                   )
                 else
                   xmlElement(
                     "xsd:element",
                     xmlAttributes(
                       column_name as "name",
                       decode(NULLABLE, 'Y', 0, 1) as "minOccurs",
                       'xsd:anySimpleType' as "type",
                       column_name as "xdb:SQLName",
                       DATA_TYPE as "xdb:SQLType"
                     )
                   )
               end ELEMENT
          from all_tab_cols c
         where c.TABLE_NAME = P_TABLE_NAME
           and c.OWNER = P_OWNER)
 group by TABLE_NAME;
return xmlSchema;
end;
function appendElementList(V_ELEMENT_LIST IN OUT XDB.XDB$XMLTYPE_REF_LIST_T, V_CHILD_ELEMENT_LIST XDB.XDB$XMLTYPE_REF_LIST_T) return XDB.XDB$XMLTYPE_REF_LIST_T;
function expandModel(P_MODEL XDB.XDB$MODEL_T) return XDB.XDB$XMLTYPE_REF_LIST_T;
function expandChoiceList(P_CHOICE_LIST XDB.XDB$XMLTYPE_REF_LIST_T) return XDB.XDB$XMLTYPE_REF_LIST_T;
function expandSequenceList(P_SEQUENCE_LIST XDB.XDB$XMLTYPE_REF_LIST_T) return XDB.XDB$XMLTYPE_REF_LIST_T;
function expandGroupList(P_GROUP_LIST XDB.XDB$XMLTYPE_REF_LIST_T) return XDB.XDB$XMLTYPE_REF_LIST_T;
function appendElementList(V_ELEMENT_LIST IN OUT XDB.XDB$XMLTYPE_REF_LIST_T, V_CHILD_ELEMENT_LIST XDB.XDB$XMLTYPE_REF_LIST_T)
return XDB.XDB$XMLTYPE_REF_LIST_T
as
begin
SELECT CAST(
         SET(
           CAST(V_ELEMENT_LIST as XDBPM.XMLTYPE_REF_TABLE_T)
           MULTISET UNION
           CAST(V_CHILD_ELEMENT_LIST as XDBPM.XMLTYPE_REF_TABLE_T)
         )
         as XDB.XDB$XMLTYPE_REF_LIST_T
       )
  into V_ELEMENT_LIST
  from DUAL;
return V_ELEMENT_LIST;
end;
function expandModel(P_MODEL XDB.XDB$MODEL_T)
return XDB.XDB$XMLTYPE_REF_LIST_T
as
V_ELEMENT_LIST XDB.XDB$XMLTYPE_REF_LIST_T;
V_CHILD_ELEMENT_LIST XDB.XDB$XMLTYPE_REF_LIST_T;
begin
V_ELEMENT_LIST := XDB.XDB$XMLTYPE_REF_LIST_T();
if P_MODEL.ELEMENTS is not null then
V_ELEMENT_LIST := P_MODEL.ELEMENTS;
end if;
if (P_MODEL.CHOICE_KIDS is not NULL) then
V_CHILD_ELEMENT_LIST := expandChoiceList(P_MODEL.CHOICE_KIDS);
V_ELEMENT_LIST := appendElementList(V_ELEMENT_LIST,V_CHILD_ELEMENT_LIST);
end if;
if (P_MODEL.SEQUENCE_KIDS is not NULL) then
V_CHILD_ELEMENT_LIST := expandSequenceList(P_MODEL.SEQUENCE_KIDS);
V_ELEMENT_LIST := appendElementList(V_ELEMENT_LIST,V_CHILD_ELEMENT_LIST);
end if;
-- Process ANYS
if (P_MODEL.GROUPS is not NULL) then
V_CHILD_ELEMENT_LIST := expandGroupList(P_MODEL.GROUPS);
V_ELEMENT_LIST := appendElementList(V_ELEMENT_LIST,V_CHILD_ELEMENT_LIST);
end if;
return V_ELEMENT_LIST;
end;
function expandChoiceList(P_CHOICE_LIST XDB.XDB$XMLTYPE_REF_LIST_T)
return XDB.XDB$XMLTYPE_REF_LIST_T
as
V_ELEMENT_LIST XDB.XDB$XMLTYPE_REF_LIST_T;
V_CHILD_ELEMENT_LIST XDB.XDB$XMLTYPE_REF_LIST_T;
cursor getChoices is
select c.XMLDATA MODEL
from XDB.XDB$CHOICE_MODEL c, TABLE(P_CHOICE_LIST) cl
where ref(c) = value(cl);
begin
V_ELEMENT_LIST := XDB.XDB$XMLTYPE_REF_LIST_T();
for c in getChoices loop
V_CHILD_ELEMENT_LIST := expandModel(c.MODEL);
V_ELEMENT_LIST := appendElementList(V_ELEMENT_LIST,V_CHILD_ELEMENT_LIST);
end loop;
return V_ELEMENT_LIST;
end;
function expandSequenceList(P_SEQUENCE_LIST XDB.XDB$XMLTYPE_REF_LIST_T)
return XDB.XDB$XMLTYPE_REF_LIST_T
as
V_ELEMENT_LIST XDB.XDB$XMLTYPE_REF_LIST_T;
V_CHILD_ELEMENT_LIST XDB.XDB$XMLTYPE_REF_LIST_T;
cursor getSequences is
select s.XMLDATA MODEL
from XDB.XDB$SEQUENCE_MODEL s, TABLE(P_SEQUENCE_LIST) sl
where ref(s) = value(sl);
begin
V_ELEMENT_LIST := XDB.XDB$XMLTYPE_REF_LIST_T();
for s in getSequences loop
V_CHILD_ELEMENT_LIST := expandModel(s.MODEL);
V_ELEMENT_LIST := appendElementList(V_ELEMENT_LIST,V_CHILD_ELEMENT_LIST);
end loop;
return V_ELEMENT_LIST;
end;
function expandGroupList(P_GROUP_LIST XDB.XDB$XMLTYPE_REF_LIST_T)
return XDB.XDB$XMLTYPE_REF_LIST_T
as
V_ELEMENT_LIST XDB.XDB$XMLTYPE_REF_LIST_T;
V_CHILD_ELEMENT_LIST XDB.XDB$XMLTYPE_REF_LIST_T; V_MODEL XDB.XDB$MODEL_T;
cursor getGroups is
SELECT CASE
-- Return The MODEL Definition for the CHOICE, ALL or SEQUENCE
WHEN gd.XMLDATA.ALL_KID is not NULL
THEN ( SELECT a.XMLDATA from XDB.XDB$ALL_MODEL a where ref(a) = gd.XMLDATA.ALL_KID)
WHEN gd.XMLDATA.SEQUENCE_KID is not NULL
THEN ( SELECT s.XMLDATA from XDB.XDB$SEQUENCE_MODEL s where ref(s) = gd.XMLDATA.SEQUENCE_KID)
WHEN gd.XMLDATA.CHOICE_KID is not NULL
THEN ( SELECT c.XMLDATA from XDB.XDB$CHOICE_MODEL c where ref(c) = gd.XMLDATA.CHOICE_KID)
END MODEL
FROM XDB.XDB$GROUP_DEF gd, XDB.XDB$GROUP_REF gr, TABLE(P_GROUP_LIST) gl
WHERE ref(gd) = gr.XMLDATA.GROUPREF_REF
and ref(gr) = value(gl);
begin
V_ELEMENT_LIST := XDB.XDB$XMLTYPE_REF_LIST_T();
for g in getGroups loop
V_CHILD_ELEMENT_LIST := expandModel(g.MODEL);
V_ELEMENT_LIST := appendElementList(V_ELEMENT_LIST,V_CHILD_ELEMENT_LIST);
end loop;
return V_ELEMENT_LIST;
end;
function getComplexTypeElementList(P_COMPLEX_TYPE_REF REF XMLTYPE)
return XDB.XDB$XMLTYPE_REF_LIST_T
as
V_MODEL XDB.XDB$MODEL_T;
V_BASE_TYPE REF XMLTYPE;
V_ELEMENT_LIST XDB.XDB$XMLTYPE_REF_LIST_T := XDB.XDB$XMLTYPE_REF_LIST_T();
begin
SELECT ct.XMLDATA.BASE_TYPE,
CASE
-- Return The MODEL Definition for the CHOICE, ALL or SEQUENCE
WHEN ct.XMLDATA.ALL_KID is not NULL
THEN ( SELECT a.XMLDATA from XDB.XDB$ALL_MODEL a where ref(a) = ct.XMLDATA.ALL_KID)
WHEN ct.XMLDATA.SEQUENCE_KID is not NULL
THEN ( SELECT s.XMLDATA from XDB.XDB$SEQUENCE_MODEL s where ref(s) = ct.XMLDATA.SEQUENCE_KID)
WHEN ct.XMLDATA.CHOICE_KID is not NULL
THEN ( SELECT c.XMLDATA from XDB.XDB$CHOICE_MODEL c where ref(c) = ct.XMLDATA.CHOICE_KID)
WHEN ct.XMLDATA.GROUP_KID is not NULL
-- COMPLEXTYPE is based on a GROUP.
THEN (
-- RETURN The CHOICE, ALL or SEQUENCE for GROUP
SELECT CASE
WHEN gd.XMLDATA.ALL_KID is not NULL
THEN ( SELECT a.XMLDATA from XDB.XDB$ALL_MODEL a where ref(a) = gd.XMLDATA.ALL_KID)
WHEN gd.XMLDATA.SEQUENCE_KID is not NULL
THEN ( SELECT s.XMLDATA from XDB.XDB$SEQUENCE_MODEL s where ref(s) = gd.XMLDATA.SEQUENCE_KID)
WHEN gd.XMLDATA.CHOICE_KID is not NULL
THEN ( SELECT c.XMLDATA from XDB.XDB$CHOICE_MODEL c where ref(c) = gd.XMLDATA.CHOICE_KID)
END
FROM XDB.XDB$GROUP_DEF gd, xdb.xdb$GROUP_REF gr
WHERE ref(gd) = gr.XMLDATA.GROUPREF_REF
and ref(gr) = ct.XMLDATA.GROUP_KID )
-- WHEN ct.XMLDATA.COMPLEXCONTENT.RESTRICTION.ALL_KID is not NULL
-- THEN ( SELECT a.XMLDATA from XDB.XDB$ALL_MODEL a where ref(a) = ct.XMLDATA.COMPLEXCONTENT.RESTRICTION.ALL_KID)
-- WHEN ct.XMLDATA.COMPLEXCONTENT.RESTRICTION.SEQUENCE_KID is not NULL
-- THEN ( SELECT s.XMLDATA from XDB.XDB$SEQUENCE_MODEL s where ref(s) = ct.XMLDATA.COMPLEXCONTENT.RESTRICTION.SEQUENCE_KID)
-- WHEN ct.XMLDATA.COMPLEXCONTENT.RESTRICTION.CHOICE_KID is not NULL
-- THEN ( SELECT c.XMLDATA from XDB.XDB$CHOICE_MODEL c where ref(c) = ct.XMLDATA.COMPLEXCONTENT.RESTRICTION.CHOICE_KID)
-- WHEN ct.XMLDATA.COMPLEXCONTENT.RESTRICTION.GROUP_KID is not NULL
-- -- COMPLEXTYPE is based on a GROUP.
-- THEN (
-- -- RETURN The CHOICE, ALL or SEQUENCE for GROUP
-- SELECT CASE
-- WHEN gd.XMLDATA.ALL_KID is not NULL
-- THEN ( SELECT a.XMLDATA from XDB.XDB$ALL_MODEL a where ref(a) = gd.XMLDATA.ALL_KID)
-- WHEN gd.XMLDATA.SEQUENCE_KID is not NULL
-- THEN ( SELECT s.XMLDATA from XDB.XDB$SEQUENCE_MODEL s where ref(s) = gd.XMLDATA.SEQUENCE_KID)
-- WHEN gd.XMLDATA.CHOICE_KID is not NULL
-- THEN ( SELECT c.XMLDATA from XDB.XDB$CHOICE_MODEL c where ref(c) = gd.XMLDATA.CHOICE_KID)
-- END
-- FROM XDB.XDB$GROUP_DEF gd, xdb.xdb$GROUP_REF gr
-- WHERE ref(gd) = gr.XMLDATA.GROUPREF_REF
-- and ref(gr) = ct.XMLDATA.COMPLEXCONTENT.RESTRICTION.GROUP_KID
WHEN ct.XMLDATA.COMPLEXCONTENT.EXTENSION.ALL_KID is not NULL
THEN ( SELECT a.XMLDATA from XDB.XDB$ALL_MODEL a where ref(a) = ct.XMLDATA.COMPLEXCONTENT.EXTENSION.ALL_KID)
WHEN ct.XMLDATA.COMPLEXCONTENT.EXTENSION.SEQUENCE_KID is not NULL
THEN ( SELECT s.XMLDATA from XDB.XDB$SEQUENCE_MODEL s where ref(s) = ct.XMLDATA.COMPLEXCONTENT.EXTENSION.SEQUENCE_KID)
WHEN ct.XMLDATA.COMPLEXCONTENT.EXTENSION.CHOICE_KID is not NULL
THEN ( SELECT c.XMLDATA from XDB.XDB$CHOICE_MODEL c where ref(c) = ct.XMLDATA.COMPLEXCONTENT.EXTENSION.CHOICE_KID)
WHEN ct.XMLDATA.COMPLEXCONTENT.EXTENSION.GROUP_KID is not NULL
-- COMPLEXTYPE is based on a GROUP.
THEN (
-- RETURN The CHOICE, ALL or SEQUENCE for GROUP
SELECT CASE
WHEN gd.XMLDATA.ALL_KID is not NULL
THEN ( SELECT a.XMLDATA from XDB.XDB$ALL_MODEL a where ref(a) = gd.XMLDATA.ALL_KID)
-
Maximum number of columns in a table or view is 1000
Post Author: TinaReifer
CA Forum: Formula
I am trying to create a direct link from Tririga / Oracle database into Crystal XI. The data I am attempting to pull is from the RETransaction (Escalations). The error I keep receiving is shown below. Has anyone run across this problem and found a solution that can be shared?
Very Appreciative for any feedback.
Tina Reifer
"Failed to retrieve data from the database.
Details: HY000:[Oracle][ODBC][Ora]ORA-01792: maximum number of columns in a table or view is 1000
[Database Vendor Code: 1792]"
Post Author: synapsevampire
CA Forum: Formula
Try posting your software, its version, and the type of connectivity you're using for Oracle.
Older versions of Crystal used a proprietary Oracle ODBC driver that came with Crystal, but you should switch away from using ODBC anyway (not sure why you elected to use it); it's slower and more problematic.
You'll see Oracle Server listed as a data source, that is generally the best connectivity to use.
Also, what version of Oracle is it, and what Oracle client are you using?
You might need your Oracle dbas assistance here.
Anyway, the error is not a Crystal error; it's being raised by the ODBC driver. And Oracle used to have a maximum of 1024 columns, I think it was... been a while...
-k -
I am curious to hear techniques for partitioning a fact table with OWB. I know HOW to set up the partitioning for the table, but what I am curious about is what type of partitioning everyone is suggesting. Take the following example: let's say we have a sales transaction fact table with dimensions of Date, Product, and Store. An immediate partitioning idea is to partition the table by month. But my curiosity arises in the method used to partition the fact table: there is no longer a true date field in the fact table to do range partitioning on, and hash partitioning will not distribute the records by month.
One example I found was to "code" the surrogate key in the date dimension so that it was created in the following manner "YYYYMMDD". Then you could use the range partitioning based on values of the key in the fact table less than 20040200 for Jan. 2004, less than 20040300 for Feb. 2004, and so on.
Is this a good idea?
Jason,
In general, obviously, query performance and scaleability benefit from partitioning. Rather than hitting the entire table upon retrieving data, you would only hit a part of the table. There are two main strategies to identify what partitioning strategy to choose:
1) Users always query specific parts of the data (e.g. data from a particular month), in which case it makes sense to size the partitions to match those parts. If your end users often query by month or compare data on a month-by-month basis, then partitioning by month may well be the right strategy.
2) Improve data loading speed by creating partitions. The database supports partition exchange loading, supported by Warehouse Builder as well, which enables you to swap a temporary table and a partition at once. In general, your load frequency then decides your partitioning strategy: if you load on a daily basis, perhaps you want daily partitions. Beware that for Warehouse Builder to use the partition exchange loading feature you will have to have a date field in the fact table, so you would change the time dimension.
In general, your suggestion for the generated surrogate key would work.
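As a hedged sketch of the surrogate-key approach described above (all table and column names here are hypothetical, not from the original post):

```sql
-- Hypothetical fact table partitioned by a YYYYMMDD-coded surrogate key.
-- Assumes the date dimension generates keys such as 20040115 for 15-Jan-2004.
CREATE TABLE sales_fact (
  date_key     NUMBER(8)  NOT NULL,   -- coded as YYYYMMDD
  product_key  NUMBER     NOT NULL,
  store_key    NUMBER     NOT NULL,
  sales_amount NUMBER(12,2)
)
PARTITION BY RANGE (date_key)
( PARTITION sales_200401 VALUES LESS THAN (20040200),
  PARTITION sales_200402 VALUES LESS THAN (20040300),
  PARTITION sales_200403 VALUES LESS THAN (20040400)
);
```

Queries that filter on date_key ranges can then be pruned to the matching monthly partitions, even though the partition key is a surrogate rather than a true DATE column.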
Thanks,
Mark. -
How can we modify the maximum no. of columns in pivot table ?
hi all,
How can we modify the maximum no. of columns in pivot table ?
do i need to change the nqconfig.ini or instanceconfig file or else?
thnx..
A little search on the forum :
In the instanceconfig.xml add a <PivotView> element under <ServerInstance> if one does
not exist already.
Within the <PivotView> element add an entry that looks like:
<MaxCells>nnnnnn</MaxCells>
where nnnnnn is your desired limit for the maximum total number of cells allowed
in a pivot.
Warning: an excessively large number will cause more memory consumption and
slower browser performance.
The details here :
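Assembled, the entry described above would sit in instanceconfig.xml like this (the 500000 value is only an illustrative limit, not a recommendation):

```xml
<ServerInstance>
  <!-- ... other existing settings ... -->
  <PivotView>
    <!-- maximum total number of cells allowed in a pivot view -->
    <MaxCells>500000</MaxCells>
  </PivotView>
</ServerInstance>
```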
Oracle BI EE (10.1.3.2): Maximum total number of cells in Pivot Table excee -
Diagnostic pack, Tuning pack are not in OEM 10g, Add partition to a table
Hi All,
I have 2 questions:
Q.1: In the Oracle 9i Oracle Enterprise Manager java console, we had the "Diagnostic Pack" and "Tuning Pack", which helped us see performance tuning info (session statistics such as CPU time, PGA memory, physical disk reads, etc.) and provided help for SQL tuning and performance improvements. But in 10g, the same product (Oracle Enterprise Manager java console) does not include these 2 packs, due to which we are unable to monitor and control performance tuning information/statistics. Now my question is, do we need to install these 2 packs separately in 10g? If yes, where can we get them? (I am sure in 9i these packs came with the OEM console and we didn't need to get them separately.)
Q.2: I have a partitioned table having 5 partitions based on range partitioning. Now our requirements have changed and we need to insert values beyond the 5th partition, so we need a 6th partition. Can someone tell me the command to add a new partition to the table? I tried "Alter table xxx add partition yyy ....", but it didn't work. If anyone can give me the correct syntax, it will be great.
Thanks in advance.
OP is talking about the java-based EM, not the web-based DBConsole. In fact he/she has to change to DBConsole, because the 10g java EM no longer supports these tuning features.
Alter table ... partition syntax depends on the kind of partitioning, see the documentation:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_3001.htm#i2131048
Werner -
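For the range-partitioned case asked about above, a sketch of the syntax (partition names and bounds are hypothetical; note that ADD PARTITION only works when the new bound lies above the current highest one, otherwise SPLIT PARTITION is needed):

```sql
-- Add a 6th range partition above the current high bound
ALTER TABLE xxx ADD PARTITION p6
  VALUES LESS THAN (TO_DATE('2011-01-01','YYYY-MM-DD'))
  TABLESPACE users;

-- If the table already has a MAXVALUE partition, split that instead:
ALTER TABLE xxx SPLIT PARTITION pmax
  AT (TO_DATE('2011-01-01','YYYY-MM-DD'))
  INTO (PARTITION p6, PARTITION pmax);
```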
Table partitioning (intervel partitioning) on existing tables in oracle 11g
Hi, I'm a newbie to table partitioning. I'm using 11g. I have a table of 32 GB (which has 22 million records) and I want to apply interval partitioning to it. I created an empty partitioned table with the same columns as the source table, took a dump of the source table, and imported it into the new partitioned table. Can you please suggest how to import the table dump into the new table? Also, is there any better way to do the same?
Hi,
imp user/password file=exp.dmp ignore=y
The ignore=y causes the import to skip the table creation and continues to load all rows.
On the other hand, you can insert data into the partitioned table with a subquery from the non-partitioned table such as follows;
insert into partitioned_table
select * from original_table;
Hope it helps, -
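A minimal sketch of the insert-select route suggested above, assuming a hypothetical interval-partitioned copy of the source table and a created_dt partition key (names are illustrative, not from the original post):

```sql
-- Empty interval-partitioned copy; Oracle creates daily partitions automatically
CREATE TABLE new_part_table
PARTITION BY RANGE (created_dt)
INTERVAL (NUMTODSINTERVAL(1,'DAY'))
( PARTITION p0 VALUES LESS THAN (DATE '2012-01-01') )
AS SELECT * FROM source_table WHERE 1 = 2;

-- Direct-path load from the existing non-partitioned table
INSERT /*+ APPEND */ INTO new_part_table
SELECT * FROM source_table;
COMMIT;
```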
How to create partition from one table to another?
Hi,
Can any one help me in this :
I have copied table from one schema to another by using the following command. The table is around 300 GB in size and the partitions are not copied. Now i got a request to create partitions in the destination table similar to the source table.
CREATE TABLE user2.table_name AS SELECT * FROM user1.table_name;
I am using an Oracle 9i database. This is very urgent; could anyone please suggest how I can create the partitions in the user2 table like the user1 table?
If you have TOAD / SQL Developer, get the partition DDL from the source table and execute it against the newly created table,
or you can use DBMS_METADATA:
SET LONG 10000
SELECT dbms_metadata.get_ddl('TABLE', 'TEST')
FROM DUAL;
Thanks,
<Moderator Edit - deleted link signature - pl see FAQ link on top right> -
Best approach to do Range partitioning on Huge tables.
Hi All,
I am working on 11gR2 oracle 3node RAC database. below are the db details.
SQL> select * from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
CORE 11.2.0.3.0 Production
TNS for Linux: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production
in my environment we have 10 big transaction tables (10 billion rows) and they are growing bigger and bigger. Now the management is planning to do a range partition based on a created_dt partition key column.
We tested this partitioning strategy with a few million records in another environment with the steps below.
1. CREATE TABLE TRANSACTION_N
PARTITION BY RANGE ("CREATED_DT")
( PARTITION DATA1 VALUES LESS THAN (TO_DATE(' 2012-08-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS') ) TABLESPACE &&TXN_TAB_PART1,
PARTITION DATA2 VALUES LESS THAN (TO_DATE(' 2012-09-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS') ) TABLESPACE &&TXN_TAB_PART2,
PARTITION DATA3 VALUES LESS THAN (TO_DATE(' 2012-10-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS') ) TABLESPACE &&TXN_TAB_PART3 )
as (select * from TRANSACTION where 1=2);
2. Exchange partitions to move data to the new partitioned table from the old one.
ALTER TABLE TRANSACTION_N
EXCHANGE PARTITION DATA1
WITH TABLE TRANSACTION
WITHOUT VALIDATION;
3. Create the required indexes (took almost 3.5 hrs with parallel 16).
4. Rename the tables and drop the old ones.
This took around 8 hrs for one table which has 70 million records, so for billions of records it will take more than 8 hrs. But the problem is we get only 2 to 3 hrs of down time in production to implement this change for all tables.
Can you please suggest the best approach to copy that much data from the existing table to the newly created partitioned table and create the required indexes.
Thanks,
Hari
>
Can you please suggest the best approach i can do, to copy that much big data from existing table to the newly created partitioned table and create required indexes.
>
Sorry to tell you, but that test and partitioning strategy is essentially useless and won't work for your entire table anyway. One reason is that if you use the WITHOUT VALIDATION clause you must ensure that the data being exchanged actually belongs to the partition you are putting it in. If it doesn't, you won't be able to re-enable or rebuild any primary key or unique constraints that exist on the table.
See Exchanging Partitions in the VLDB and Partitioning doc
http://docs.oracle.com/cd/E18283_01/server.112/e16541/part_admin002.htm#i1107555
>
When you specify WITHOUT VALIDATION for the exchange partition operation, this is normally a fast operation because it involves only data dictionary updates. However, if the table or partitioned table involved in the exchange operation has a primary key or unique constraint enabled, then the exchange operation is performed as if WITH VALIDATION were specified to maintain the integrity of the constraints.
If you specify WITHOUT VALIDATION, then you must ensure that the data to be exchanged belongs in the partition you exchange.
>
Comments below are limited to working with ONE table only.
ISSUE #1 - ALL data will have to be moved regardless of the approach used. This should be obvious since your current data is all in one segment but each partition of a partitioned table requires its own segment. So the nut of partitioning is splitting the existing data into multiple segments almost as if you were splitting it up and inserting it into multiple tables, one table for each partition.
ISSUE#2 - You likely cannot move that much data in the 2 to 3 hours window that you have available for down time even if all you had to do was copy the existing datafiles.
ISSUE#3 - Even if you can avoid issue #2 you likely cannot rebuild ALL of the required indexes in whatever remains of the outage windows after moving the data itself.
ISSUE#4 - Unless you have conducted full volume performance testing in another environment prior to doing this in production you are taking on a tremendous amount of risk.
ISSUE#5 - Unless you have fully documented the current, actual execution plans for your most critical queries in your existing system you will have great difficulty overcoming issue #4 since you won't have the requisite plan baseline to know if the new partitioning and indexing strategies are giving you the equivalent, or better, performance.
ISSUE#6 - Things can, and will, go wrong and cause delays no matter which approach you take.
So assuming you plan to take care of issues #4 and #5 you will probably have three viable alternatives:
1. use DBMS_REDEFINITION to do the partitioning on-line. See the Oracle docs and this example from oracle-base for more info.
Redefining Tables Online - http://docs.oracle.com/cd/B28359_01/server.111/b28310/tables007.htm
Partitioning an Existing Table using DBMS_REDEFINITION
http://www.oracle-base.com/articles/misc/partitioning-an-existing-table.php
2. do the partitioning offline and hope that you don't exceed your outage window. Recover by continuing to use the existing table.
3. do the partitioning offline but remove the oldest data to minimize the amount of data that has to be worked with.
You should review all of the tables to see if you can remove older data from the current system. If you can you could use online redefinition that ignores older data. Then afterwards you can extract this old data from the old table for archiving.
If the amount of old data is substantial you can extract the new data to a new partitioned table in parallel and not deal with the old data at all. -
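A hedged outline of alternative 1 above (online redefinition with DBMS_REDEFINITION); the schema, interim-table name, and partition bounds here are hypothetical placeholders:

```sql
-- Interim table with the desired partitioning, same shape as TRANSACTION
CREATE TABLE transaction_interim
PARTITION BY RANGE (created_dt)
( PARTITION data1 VALUES LESS THAN (DATE '2012-08-01'),
  PARTITION data2 VALUES LESS THAN (DATE '2012-09-01') )
AS SELECT * FROM transaction WHERE 1 = 2;

BEGIN
  -- start copying rows into the interim table while TRANSACTION stays online
  DBMS_REDEFINITION.START_REDEF_TABLE('HARI', 'TRANSACTION', 'TRANSACTION_INTERIM');
  -- build indexes, constraints, grants on the interim table here
  -- (DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS can automate much of this)
  DBMS_REDEFINITION.SYNC_INTERIM_TABLE('HARI', 'TRANSACTION', 'TRANSACTION_INTERIM');
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('HARI', 'TRANSACTION', 'TRANSACTION_INTERIM');
END;
/
```

Only the final FINISH_REDEF_TABLE step needs a short lock, which is what makes this approach fit a small outage window.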
Use multiple partitions on a table in query
Hi All,
Overview:-
I have a table - TRACK which is partitioned on weekly basis. Im using this table in one of my SQL queries in which I require to find a monthly count of some column data. The query looks like:-
Select count(*)
from Barcode B
inner join Track partition P(99) T
on B.item_barcode = T.item_barcode
where B.create_date between 20120202 and 20120209;
In the above query I am fetching the count for one week using the partition created on that table during that week.
Desired output:-
I want to fetch data between 01-Feb and 01-Mar and use the rest of the partitions for that table during the duration in the above query. The weekly partitions currently present for Track table are -
P(99) - 20120202
P(100) - 20120209
P(101) - 20120216
P(102) - 20120223
P(103) - 20120301
My question is: above I've used one partition successfully; now how can I use the other 4 partitions in the same query if I am finding the count for one month (i.e. from 20120201 to 20120301)?
Environment:-
Oracle version - Oracle 10g R2 (10.2.0.4)
Operating System - AIX version 5
Thanks.
Edited by: Sandyboy036 on Mar 12, 2012 10:47 AM
I'm with damorgan on this one, though I was lazy and only read it twice.
You've got a mix of everything in this one and none of it is correct.
1. If B.create_date is VARCHAR2 this is the wrong way to store dates.
2. All Track partitions are needed for one month if you only have the 5 partitions you listed so there is no point in mentioning any of them by name. So the answer to 'how can I use the other 4 partitions' is - don't; Oracle will use them anyway.
3. BETWEEN 01-Feb and 01-Mar - be aware that the BETWEEN operator includes both endpoints, so if you actually used it in your query the data would include March 1. -
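Putting the points above together: rather than naming partitions, write the one-month count with a plain date predicate and let partition pruning pick the weekly partitions automatically. This sketch assumes TRACK is partitioned on a real DATE column (called create_date here, an assumption - the original stores dates as numbers, which point 1 advises against); the half-open range also sidesteps the BETWEEN endpoint problem from point 3.

```sql
-- The optimizer prunes to the relevant weekly partitions from the
-- predicate alone; no PARTITION (...) clause is needed.
SELECT COUNT(*)
FROM   barcode b
       JOIN track t
         ON b.item_barcode = t.item_barcode
WHERE  t.create_date >= DATE '2012-02-01'
AND    t.create_date <  DATE '2012-03-01';
```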
Hi Gurus,
I have a question regarding partitioning the cube. When you partition the cube from the Extras menu, will it partition the F table, the E table, or both?
Second question: after partitioning, how will I know the newly created table names?
Thanks,
ANU
Hi Anu,
Partitioning: need and definition
An InfoCube contains a huge amount of data, and the table size of the cube grows steadily. When a query is executed on the cube, it has to scan the entire table to find the records, for example Sales in Jan 08.
Advantage of partitioning
If we partition the cube, smaller tables are formed, and report performance improves because the query hits only the relevant partition.
Steps for partitioning
1. To partition a cube, it must not contain data.
2. Partitioning can be done using the time characteristics 0CALMONTH or 0FISCPER (fiscal period).
Steps:
1. In change mode of the cube, open the Extras menu and choose the Partitioning option.
2. Select the 0CALMONTH time characteristic.
3. It will ask for the time period to partition over and the number of partitions; specify them.
4. Activate the cube.
In BI 7 we can repartition the cube even if it contains data:
select the cube, right-click, and choose Repartitioning. There you can
1. delete existing partitions,
2. create new ones, or
3. merge partitions.
Partitioning of the Cube
http://help.sap.com/saphelp_nw04s/helpdata/en/0a/cd6e3a30aac013e10000000a114084/frameset.htm
Partitioning Fields and Values
partition of Infocube
Partitioning of cube using Fiscal period
Infocube Partition
After Partition
You can find the partitions in the following tables in SE11:
E tables: /BIC/E* or /BIC/E(cube name)
Please also go through the following links
Partitioning of Cube
Partitioning
Partitioning of ODS object
/thread/733456 [original link is broken]
Hope this answers your question.
Assign points if helpful,
Thanks and regards
Bala -
How to create a partition on existing table?
Hey
Could someone please tell me how to create a partition on an existing table?
Could someone please tell me how to create a partition on an existing table?
You can't - that isn't possible. Unless a table is already partitioned you can NOT create another partition on it.
You must either redefine the table as a partitioned table (using the DBMS_REDEFINITION package) or create a new partitioned table and move the data to its new partitions.
The choice will depend on how much data the existing table has and whether you can do it offline. -
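If online redefinition is not an option, the create-and-swap route mentioned above can be sketched like this. The table name ORDERS, the partition bounds, and the rename step are all illustrative; indexes, constraints and grants must be recreated on the new table before the swap.

```sql
-- Build a partitioned copy of the existing (non-partitioned) table
CREATE TABLE orders_part
PARTITION BY RANGE (order_date) (
  PARTITION p2011 VALUES LESS THAN (DATE '2012-01-01'),
  PARTITION p2012 VALUES LESS THAN (DATE '2013-01-01'),
  PARTITION pmax  VALUES LESS THAN (MAXVALUE)
)
AS SELECT * FROM orders;

-- Recreate indexes, constraints and grants on ORDERS_PART here, then
-- swap the names (the old table is kept as a fallback):
ALTER TABLE orders RENAME TO orders_old;
ALTER TABLE orders_part RENAME TO orders;
```

Note this requires an outage window while the copy runs, and any DML against the old table during the copy is lost; that is exactly the trade-off the answer above describes.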
What is the maximum number of row for a table control in LabWindows/CVI ?
I use LabWindows/CVI 8.5.1 (MMI first developed in version 6.0).
In one of our many MMIs, a table control contains a list of aircraft parameters.
We can add as many parameters (rows) as we want, but above 40,000 we observe a crash of the LabWindows/CVI runtime.
Our client wants to increase the number of parameters (rows) up to 200,000!
So my questions are:
What is the real maximum number of rows for the table control?
Is this maximum number of rows different in the LabWindows 2010 version?
Or is there another solution?
Thanks
Greetings,
Can you clarify what you mean by "crash"? Is there an error message thrown? Is it possible that you've consumed all of the available memory?
Please let me know.
Thanks, James Duvall
Product Support Engineer
National Instruments