XML Query Performance on Nested Tables
I am running an XPath query on an XMLTYPE column, extracting rows from a nested table which is two levels down from the root element:
e.g.
select extractValue(x.xmldata,'/rootElement/VersionNumber') VerNumber
, etc...
, etc.
, extractValue(value(m),'/Member/Name/Ttl') Member_title
, to_char(extractValue(value(e),'/Event/EventDate'),'yyyy-mm-dd') Event_Date
, extractValue(value(e),'/Event/Type') Event_type
from xml_table x,
table(XMLSequence(extract(x.xmldata,'/rootElement/Member'))) m
,table(XMLSequence(extract(value(m),'/Member/Event'))) e
The query is taking around 7 minutes to process 2000 XML docs and the result returns 61,000 rows.
Statspack reports indicate that the main resource used is the LOBSEGMENT which supports the SYS_XDBPD$ column (system generated) in the Member nested table - result was over 100 million Logical Reads and Consistent Gets.
Is there any way to influence the Schema Registration/XML Table creation to result in the XDBPD$ column being stored as a Table or VARRAY - the underlying data type for XDBPD$ is:
XDB.XDB$RAW_LIST_T VARRAY(1000) OF RAW(2000)
OR
Should I spend time tuning access to the LOBSEGMENT?
Anyone tuned this area of XDB before??
Thanks
Dave
Do you need DOM Fidelity at all? DOM Fidelity is typically only needed in a document-oriented application, or where items like processing instructions and comments are present and need to be maintained.
Here are some notes from some training slides that may help you decide.
DOM Fidelity requires that XML DB track document-level metadata.
It creates overhead when inserting and retrieving XML, due to the extra processing as well as the additional storage requirements. DOM Fidelity can be turned on and off at the type level. Each SQL type corresponding to a complexType where DOM Fidelity is enabled includes a system binary attribute called SYS_XDBPD$. The SYS_XDBPD$ attribute is referred to as the Positional Descriptor (PD).
The PD is used to manage the instance-level metadata. The format of the PD is not documented. The PD is stored as an (in-line) LOB.
The PD is used to track
Comments and Processing Instructions
Location and Prefixes in Namespace declarations
Empty Vs Missing Nodes
Presence of Default Values
Use of Substitutable elements
Ordering of elements in all and choice models
XMLSchemaInstance attributes
xsi:nil, xsi:type
Mixed content
Text() nodes that are interspersed with elements
Disabling DOM Fidelity improves performance
Reduces storage requirements
Reduces elapsed time for insert and retrieval operations
DOM Fidelity is disabled on a Type by Type basis
Use the annotation xdb:maintainDOM="false" on the complexType definition
Specifying xdb:maintainDOM="false"
Removes the PD attribute from the SQL type.
Causes the information managed in the PD to be discarded when shredding the XML document
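As an illustration of where the annotation lives, here is a minimal schema fragment; the Member/Event names echo the original question, but the structure itself is assumed, not taken from the poster's schema:

```xml
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns:xdb="http://xmlns.oracle.com/xdb">
  <xs:element name="Member">
    <!-- maintainDOM="false" suppresses the SYS_XDBPD$ attribute for this type -->
    <xs:complexType xdb:maintainDOM="false">
      <xs:sequence>
        <xs:element name="Name" type="xs:string"/>
        <xs:element name="Event" maxOccurs="unbounded" type="xs:string"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>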
In general you can disable DOM Fidelity when
All information is represented as simple content
Elements with text() node children and/or attribute values
XML does not make use of mixed text
No Comments or Processing Instructions
Applications do not need to differentiate between empty and missing elements
Applications do not differentiate between default Vs missing values
Order of elements in repeating Sequence, All and Choice models is not meaningful
Typically, data-centric XML does not require DOM Fidelity, while document-centric XML does. With DOM Fidelity disabled, retrieved documents will still be valid per the XML Schema.
Order is preserved when repetition is specified at the element or complexType level.
Use the annotation xdb:maintainOrder="false" for a slight performance improvement in cases where order is not required.
Order is not preserved when repetition is specified at the model level, e.g. on a Choice, Sequence or All containing multiple child elements.
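For illustration, a sketch of the order annotation on a repeating element declaration (the element name is assumed, not from the original schema):

```xml
<xs:element name="Event" maxOccurs="unbounded"
            xmlns:xs="http://www.w3.org/2001/XMLSchema"
            xmlns:xdb="http://xmlns.oracle.com/xdb"
            xdb:maintainOrder="false"/>
```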
Take the following document
<?xml version="1.0" encoding="UTF-8"?>
<!--Sample XML file generated by XMLSPY v2004 rel. 4 U (http://www.xmlspy.com)-->
<dft:root xmlns:dft="domFidelityTest"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="domFidelityTest domFidelityExample">
<?SomePI SomeValue?>
<xxx:choice xmlns:xxx="domFidelityTest">
<!--SomeComment-->
<xxx:A>String</xxx:A>
<xxx:B>String</xxx:B>
<?AnotherPI SomeOtherValue?>
<xxx:C>String</xxx:C>
<xxx:A>String</xxx:A>
<!--Another Comment-->
<xxx:B>String</xxx:B>
<xxx:C xsi:nil="true"/>
</xxx:choice>
</dft:root>
With DOM Fidelity disabled it would come back as
<?xml version="1.0" encoding="WINDOWS-1252"?>
<!--Sample XML file generated by XMLSPY v2004 rel. 4 U (http://www.xmlspy.com)-->
<dft:root xmlns:dft="domFidelityTest" testAttr="NotInOriginal">
<dft:choice>
<dft:A>String</dft:A>
<dft:A>String</dft:A>
<dft:B>String</dft:B>
<dft:B>String</dft:B>
<dft:C>String</dft:C>
<dft:C/>
</dft:choice>
</dft:root>
In general you can disable DOM Fidelity and see reduced storage requirements and increased throughput...
Hope this helps...
Similar Messages
-
How to improve Query performance on large table in MS SQL Server 2008 R2
I have a table with 20 million records. What is the best option to improve query performance on this table. Is partitioning the table into filegroups is a best option or splitting the table into multiple smaller tables?
Hi bala197164,
First, I want to point out that both partitioning the table into filegroups and splitting the table into multiple smaller tables can improve query performance; they fit different situations. For example, suppose our table has one hundred columns and some columns are not directly related to the table's object (say there is a table named userinfo to store user information, and it has address_street, address_zip and address_province columns; we can create a new table named Address and add a foreign key in the userinfo table referencing the Address table). In this situation, by splitting the large table into smaller, individual tables, queries that access only a fraction of the data can run faster because there is less data to scan. Another situation is when the table's records can be grouped easily; for example, a column named year stores the product release date, and we can partition the table into filegroups to improve query performance. Usually we apply both methods together. Additionally, we can add indexes to the table to improve query performance. For more detail, please refer to the following documents:
Partitioning:
http://msdn.microsoft.com/en-us/library/ms178148.aspx
CREATE INDEX (Transact-SQL):
http://msdn.microsoft.com/en-us/library/ms188783.aspx
TechNet Subscriber Support
If you are a TechNet Subscription user and have any feedback on our support quality, please send your feedback here.
Allen Li
TechNet Community Support -
SELECT query performance : One big table Vs many small tables
Hello,
We are using BDB 11g with SQLITE support. I have a query about 'select' query performance when we have one huge table vs. multiple small tables.
Basically in our application, we need to run select query multiple times and today we have one huge table. Do you guys think breaking them into
multiple small tables will help ?
For test purposes we tried creating multiple tables but performance of 'select' query was more or less same. Would that be because all tables will map to only one database in backed with key/value pair and when we run lookup (select query) on small table or big table it wont make difference ?
Thanks.
Hello,
There is some information on this topic in the FAQ at:
http://www.oracle.com/technology/products/berkeley-db/faq/db_faq.html#9-63
If this does not address your question, please just let me know.
Thanks,
Sandra -
Sql:variable and XML query performance
Can someone help with sql:variable() in xml queries? It seems that when I attempt to reference variables with the sql:variable(...) function in an xpath function (exist or nodes) it comes up with a totally different query plan, possibly ignoring
my secondary indices like the ones for VALUE, PATH.
But if I replace sql:variable("@p_ObjectIdentifierForReference") with the literal (ie. "ord/p/ord0616.p") then it uses secondary indices more consistently.
Below you will see an unsuccessful attempt to get the query to "OPTIMIZE FOR" a specific literal value of @p_ObjectIdentifierForReference. But this doesn't work; it doesn't give me a plan using the secondary index I expect.
Ideally there would be a way to get the sql:variable(...) function to give the same query plan as a literal. Not sure why that isn't the default behavior.
DECLARE @p_ObjectIdentifierForReference varchar(500);
SET @p_ObjectIdentifierForReference = 'ord/p/ord0616.p';
WITH XMLNAMESPACES ('uri:schemas-progress-com:XREFD:0004' as D)
SELECT
    XREF_FileDataReference.XREF_FileData AS XrefFileData,
    InnerRowNode.value('/D:Reference[1]/D:File-num[1]', 'int') AS FileNumber,
    InnerRowNode.value('/D:Reference[1]/D:Line-num[1]', 'int') AS LineNumber
FROM (
    SELECT
        XREF.XREF_FileData.XREF_FileData,
        XREF.XREF_FileData.XREF_FileEntry,
        InnerRow.query('.') AS InnerRowNode
    FROM XREF.XREF_FileData
    OUTER APPLY DataXref.nodes('/D:Cross-reference/D:Source/D:Reference[@Object-identifier = sql:variable("@p_ObjectIdentifierForReference") and @Reference-type = "RUN"]') as T(InnerRow)
    WHERE DataXref.exist('/D:Cross-reference/D:Source/D:Reference[@Object-identifier = sql:variable("@p_ObjectIdentifierForReference") and @Reference-type = "RUN"]') = 1
) AS XREF_FileDataReference
INNER JOIN XREF.XREF_MemberBuilt ON XREF_MemberBuilt.XREF_FileData = XREF_FileDataReference.XREF_FileData
INNER JOIN XREF.XREF_FileEntry ON XREF_FileEntry.XREF_FileEntry = XREF_FileDataReference.XREF_FileEntry
WHERE XREF_MemberBuilt.XREF_ProjectBuilt = 69
OPTION( RECOMPILE, OPTIMIZE FOR (@p_ObjectIdentifierForReference = 'ord/p/ord0616.p') )
I tried to create a "repro" of your query so we can work on it and try and improve it, but I got the best results by just adding text() and [1] to it, eg
SELECT
XREF_FileDataReference.XREF_FileData AS XrefFileData,
InnerRowNode.value('(/D:Reference/D:File-num/text())[1]', 'int') AS FileNumber,
InnerRowNode.value('(/D:Reference/D:Line-num/text())[1]', 'int') AS LineNumber
FROM (
In my main repro, even with a large piece of xml with 100,000 elements, there still wasn't much difference between the queries:
USE tempdb
GO
IF NOT EXISTS ( SELECT * FROM sys.schemas WHERE name = 'XREF' )
EXEC( 'CREATE SCHEMA XREF' )
GO
IF OBJECT_ID('XREF.XREF_FileData') IS NOT NULL DROP TABLE XREF.XREF_FileData
CREATE TABLE XREF.XREF_FileData (
rowId INT IDENTITY,
DataXref XML,
XREF_FileData INT,
XREF_FileEntry INT,
CONSTRAINT PK_XREF_FileData PRIMARY KEY ( rowId )
)
GO
IF OBJECT_ID('XREF.XREF_MemberBuilt') IS NOT NULL DROP TABLE XREF.XREF_MemberBuilt
CREATE TABLE XREF.XREF_MemberBuilt (
XREF_ProjectBuilt INT,
XREF_FileData INT
)
GO
IF OBJECT_ID('XREF.XREF_FileEntry') IS NOT NULL DROP TABLE XREF.XREF_FileEntry
CREATE TABLE XREF.XREF_FileEntry (
XREF_FileEntry INT
)
GO
-- Create larger piece of xml for repro
;WITH XMLNAMESPACES ( DEFAULT 'uri:schemas-progress-com:XREFD:0004' ), cte AS (
    SELECT TOP 100000 ROW_NUMBER() OVER ( ORDER BY ( SELECT 1 ) ) rn
    FROM master.sys.columns c1
    CROSS JOIN master.sys.columns c2
    CROSS JOIN master.sys.columns c3
)
INSERT INTO XREF.XREF_FileData ( DataXref, XREF_FileData, XREF_FileEntry )
SELECT
    (
    SELECT
        CASE rn WHEN 9999 THEN 'ord/p/ord0616.p' ELSE CAST( rn AS VARCHAR(20) ) END AS "@Object-identifier",
        'RUN' AS "@Reference-type",
        (
        SELECT
            rn AS "File-num",
            rn * 10 AS "Line-num"
        FOR XML PATH(''), TYPE
        ) AS "*"
    FROM cte
    FOR XML PATH('Reference'), ROOT('Source'), TYPE
    ).query('<Cross-reference xmlns="uri:schemas-progress-com:XREFD:0004">{.}</Cross-reference>'), 1, 100
INSERT INTO XREF.XREF_FileEntry ( XREF_FileEntry )
VALUES ( 100 )
INSERT INTO XREF.XREF_MemberBuilt ( XREF_ProjectBuilt, XREF_FileData )
VALUES ( 69, 1 )
GO
--SELECT * FROM XREF.XREF_FileData
--SELECT * FROM XREF.XREF_FileEntry
--SELECT * FROM XREF.XREF_MemberBuilt
--GO
-- Add primary XML index
CREATE PRIMARY XML INDEX xidx_XREF_FileData ON XREF.XREF_FileData (DataXref)
GO
-- Add value, property and path xml indexes
CREATE XML INDEX xvalidx_XREF_FileData ON XREF.XREF_FileData (DataXref)
USING XML INDEX xidx_XREF_FileData FOR VALUE
CREATE XML INDEX xpthidx_XREF_FileData ON XREF.XREF_FileData (DataXref)
USING XML INDEX xidx_XREF_FileData FOR PATH
CREATE XML INDEX xprpidx_XREF_FileData ON XREF.XREF_FileData (DataXref)
USING XML INDEX xidx_XREF_FileData FOR PROPERTY
GO
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
GO
DECLARE @p_ObjectIdentifierForReference varchar(500);
SET @p_ObjectIdentifierForReference = 'ord/p/ord0616.p';
;WITH XMLNAMESPACES ('uri:schemas-progress-com:XREFD:0004' as D)
SELECT
XREF_FileDataReference.XREF_FileData AS XrefFileData,
InnerRowNode.value('/D:Reference[1]/D:File-num[1]', 'int') AS FileNumber,
InnerRowNode.value('/D:Reference[1]/D:Line-num[1]', 'int') AS LineNumber
FROM (
SELECT
XREF.XREF_FileData.XREF_FileData,
XREF.XREF_FileData.XREF_FileEntry,
InnerRow.query('.') AS InnerRowNode
FROM XREF.XREF_FileData
OUTER APPLY DataXref.nodes('/D:Cross-reference/D:Source/D:Reference[@Object-identifier = sql:variable("@p_ObjectIdentifierForReference") and @Reference-type = "RUN"]') as T(InnerRow)
WHERE DataXref.exist('/D:Cross-reference/D:Source/D:Reference[@Object-identifier = sql:variable("@p_ObjectIdentifierForReference") and @Reference-type = "RUN"]') = 1
) AS XREF_FileDataReference
INNER JOIN XREF.XREF_MemberBuilt ON XREF_MemberBuilt.XREF_FileData = XREF_FileDataReference.XREF_FileData
INNER JOIN XREF.XREF_FileEntry ON XREF_FileEntry.XREF_FileEntry = XREF_FileDataReference.XREF_FileEntry
WHERE XREF_MemberBuilt.XREF_ProjectBuilt = 69
OPTION( RECOMPILE, OPTIMIZE FOR (@p_ObjectIdentifierForReference = 'ord/p/ord0616.p') )
GO
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
GO
DECLARE @p_ObjectIdentifierForReference varchar(500);
SET @p_ObjectIdentifierForReference = 'ord/p/ord0616.p';
;WITH XMLNAMESPACES ('uri:schemas-progress-com:XREFD:0004' as D)
SELECT
XREF_FileDataReference.XREF_FileData AS XrefFileData,
InnerRowNode.value('(/D:Reference/D:File-num/text())[1]', 'int') AS FileNumber,
InnerRowNode.value('(/D:Reference/D:Line-num/text())[1]', 'int') AS LineNumber
FROM (
SELECT
XREF.XREF_FileData.XREF_FileData,
XREF.XREF_FileData.XREF_FileEntry,
InnerRow.query('.') AS InnerRowNode
FROM XREF.XREF_FileData
OUTER APPLY DataXref.nodes('/D:Cross-reference/D:Source/D:Reference[@Object-identifier = sql:variable("@p_ObjectIdentifierForReference") and @Reference-type = "RUN"]') as T(InnerRow)
WHERE DataXref.exist('/D:Cross-reference/D:Source/D:Reference[@Object-identifier = sql:variable("@p_ObjectIdentifierForReference") and @Reference-type = "RUN"]') = 1
) AS XREF_FileDataReference
INNER JOIN XREF.XREF_MemberBuilt ON XREF_MemberBuilt.XREF_FileData = XREF_FileDataReference.XREF_FileData
INNER JOIN XREF.XREF_FileEntry ON XREF_FileEntry.XREF_FileEntry = XREF_FileDataReference.XREF_FileEntry
WHERE XREF_MemberBuilt.XREF_ProjectBuilt = 69
OPTION( RECOMPILE, OPTIMIZE FOR (@p_ObjectIdentifierForReference = 'ord/p/ord0616.p') )
GO
So I guess I'm saying I cannot reproduce your problem on SQL 2008 R2 or SQL 2012. Does anything about this repro stand out as different from your situation?
Looking at your query I would say you might consider the following:
are you really seeing big differences in query duration?
pretty much ignore estimated plan costs for xml queries
consider breaking it up; eg carve off the xml then do the joins? If poor cardinality estimation is part of the problem this might help
Understand what PATH, PROPERTY and VALUE are for, then only create the ones you need
do you really have the range of queries that requires all three?
this is still a great article on xml indexes:
http://technet.microsoft.com/en-us/library/ms191497.aspx
What's performance like with the primary xml index only?
If performance is that important, consider materialising the columns permanently
I think the buffer_descriptors stuff is a distraction - mostly your cache is warm right?
plan forcing could be a last resort
Selective XML indexes in SQL 2012 onwards are great : ) much less storage required for example but much more specific -
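To make that last suggestion concrete, here is a sketch of a selective XML index over the repro table above, promoting the two predicated paths. The promoted-path names and SQL typing are illustrative, and the exact syntax (introduced in SQL Server 2012 SP1) should be checked against the documentation:

```sql
CREATE SELECTIVE XML INDEX sxi_XREF_FileData
ON XREF.XREF_FileData ( DataXref )
WITH XMLNAMESPACES ( 'uri:schemas-progress-com:XREFD:0004' AS D )
FOR (
    -- only these paths are indexed, so storage is far smaller than a primary XML index
    objId   = '/D:Cross-reference/D:Source/D:Reference/@Object-identifier' AS SQL VARCHAR(500),
    refType = '/D:Cross-reference/D:Source/D:Reference/@Reference-type'    AS SQL VARCHAR(50)
);
```

Note that a selective XML index only helps queries whose predicates hit the promoted paths; anything outside them falls back to the usual methods.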
XML Query filtering by child table column
Hello,
If anyone can help with this one... it would be nice. I need to make the output of an query to be in the format of XML, but the problem is that the initial filtering needs to be done in the child table.
Example:
CREATE TABLE PRIMARY(
ID NUMBER(19,0),
CODE_PRIMARY VARCHAR2(32));
CREATE TABLE SECONDARY(
ID NUMBER(19,0),
IDPRIMARY NUMBER(19,0),
CODE_SECONDARY VARCHAR2(32));
INSERT INTO PRIMARY(ID,CODE_PRIMARY)
VALUES (1,'A');
INSERT INTO PRIMARY(ID,CODE_PRIMARY)
VALUES (2,'B');
INSERT INTO SECONDARY(ID,IDPRIMARY,CODE_SECONDARY)
VALUES (1,1,'C');
INSERT INTO SECONDARY(ID,IDPRIMARY,CODE_SECONDARY)
VALUES (2,1,'D');
INSERT INTO SECONDARY(ID,IDPRIMARY,CODE_SECONDARY)
VALUES (3,2,'E');
Now what we need is to build an XML tree like the following, INNER JOINing the PRIMARY and SECONDARY tables with this condition in the WHERE clause: WHERE SECONDARY.CODE_SECONDARY IN ('C','D')
<result>
<record>
<id>1</id>
<code>A</code>
<childs>
<child>
<id>1</id>
<idprimary>1</idprimary>
<codesecondary>C</codesecondary>
</child>
<child>
<id>2</id>
<idprimary>1</idprimary>
<codesecondary>D</codesecondary>
</child>
</childs>
</record>
</result>
In this example only one record is returned, since we only have one record in the PRIMARY table that has a child with codesecondary = C or D. The idea is to get many records, but I think this is enough for the sake of the example, and the solution is the same.
Thanks in advance!
GM
Found the answer. Used the distinct keyword instead of grouping the output table columns. This way XMLAgg didn't break up the result:
SELECT
  XMLElement("Processos",
    XMLAgg(XMLElement("Processo",
      XMLForest(T.ID as "Id", T.CODIGO as "Codigo", T.DESCRICAO as "Descricao"),
      XMLElement("Funcionalidades",
        (SELECT XMLAgg(XMLElement("Funcionalidade", F2.ID))
           FROM TWBASEDB.LISTA_UNICA_FUNCIONALIDADE F2
          WHERE F2.ID_processo = T.ID
            AND F2.ACTIVIDADE IN ('1_ACTC1','1_ACTC2','1_ACTC3','2_ACTC1'))
      )
    ))
  )
FROM (
  SELECT DISTINCT P.ID, P.CODIGO, P.DESCRICAO
    FROM TWBASEDB.LISTA_UNICA_PROCESSOS P
   INNER JOIN TWBASEDB.LISTA_UNICA_FUNCIONALIDADE F ON P.ID = F.ID_PROCESSO
   WHERE ACTIVIDADE IN ('1_ACTC1','1_ACTC2','1_ACTC3','2_ACTC1')
   ORDER BY P.ID
) T -
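Applied back to the original PRIMARY/SECONDARY example, a sketch along the same lines might look like the query below. It is untested and assumes the tables exactly as declared in the question; note also that PRIMARY is a reserved word in Oracle, so that table name may need quoting in practice:

```sql
SELECT XMLElement("result",
         XMLAgg(
           XMLElement("record",
             XMLForest(T.ID AS "id", T.CODE_PRIMARY AS "code"),
             XMLElement("childs",
               -- correlated subquery builds the <child> list per parent row
               (SELECT XMLAgg(
                         XMLElement("child",
                           XMLForest(S.ID             AS "id",
                                     S.IDPRIMARY      AS "idprimary",
                                     S.CODE_SECONDARY AS "codesecondary")))
                  FROM SECONDARY S
                 WHERE S.IDPRIMARY = T.ID
                   AND S.CODE_SECONDARY IN ('C','D'))))))
FROM (SELECT DISTINCT P.ID, P.CODE_PRIMARY
        FROM PRIMARY P
       INNER JOIN SECONDARY S ON P.ID = S.IDPRIMARY
       WHERE S.CODE_SECONDARY IN ('C','D')) T;
```

The DISTINCT in the inline view plays the same role as in the posted answer: it keeps XMLAgg from emitting one record per join row.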
Query performance on same table with many DML operations
Hi all,
I am having one table with 100 rows of data. After that, i inserted, deleted, modified data so many times.
The select statement after the DML operations takes much more time compared with before the DML operations (there is not much difference in the data).
If i created same table again newly with same data and fire the same select statement, it is taking less time.
My question is, is there any command like compress or re-indexing or something like that to improve the performance without creating new table again.
Thanks in advance,
Pal
Try searching "rebuilding indexes" on http://asktom.oracle.com. You will get lots of hits and many lively discussions. Certainly Tom's opinion is that rebuilds are very rarely required.
As far as I know, Oracle has always re-used deleted rows in indexes as long as the new row belongs in that place in the index. The only situation I am aware of where deleted rows do not get re-used is where you have a monotonically increasing key (e.g. one generated by a sequence), and most, but not all, of the older rows are deleted over time.
For example, if you had a table like this, where seq_no is populated by a sequence and indexed:
seq_no NUMBER
processed_flag VARCHAR2(1)
trans_date DATE
and then did deletes like:
DELETE FROM t
WHERE processed_flag = 'Y' and
trans_date <= ADD_MONTHS(sysdate, -24);
that deleted 99% of the rows in the time period that were processed, leaving only a few. Then the index leaf blocks would be very sparsely populated (i.e. lots of deleted rows in them), but since the current seq_no values are much larger than those old ones remaining, the space could not be re-used. Any leaf block that had all of its rows deleted would be reused in another part of the index.
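If you do hit that sparse-leaf-block pattern, a coalesce is usually cheaper than a full rebuild because it only merges adjacent leaf blocks. A sketch, with an assumed index name:

```sql
-- Merge sparsely populated leaf blocks in place (cheap, no full re-create)
ALTER INDEX t_seq_no_idx COALESCE;

-- Or, if a rebuild really is warranted (ONLINE needs Enterprise Edition):
ALTER INDEX t_seq_no_idx REBUILD ONLINE;
```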
HTH
John -
XML Type - ExtractValue for Nested Tables
Hi all,
PROCEDURE TESTING
IS
v_xml CLOB := '<EMPLOYEE><RECORD><EMPID>12</EMPID><EMPNAME>ROCK</EMPNAME></RECORD><RECORD><EMPID>13</EMPID><EMPNAME>PETER</EMPNAME></RECORD><RECORD><EMPID>14</EMPID><EMPNAME>JOHN</EMPNAME></RECORD></EMPLOYEE>';
BEGIN
FOR i IN
(SELECT EXTRACTVALUE (indv_xml, 'RECORD/EMPID') eid, EXTRACTVALUE (indv_xml, 'RECORD/EMPNAME') ename
   FROM (SELECT VALUE (v) indv_xml
           FROM (SELECT XMLTYPE (v_xml) main_xml FROM DUAL),
                TABLE (XMLSEQUENCE (EXTRACT (main_xml, 'EMPLOYEE/RECORD'))) v))
LOOP
insert into employee values (i.eid, i.ename);
END LOOP;
END;
Now i have the xml like this:-
'<EMPLOYEE>
<EMPTYPE>
<TYPEID>121</TYPEID>
<ROW><EMPID>12</EMPID><EMPNAME>ROCK</EMPNAME></ROW>
<ROW><EMPID>13</EMPID><EMPNAME>PETER</EMPNAME></ROW>
</EMPTYPE>
<EMPTYPE>
<TYPEID>122</TYPEID>
<ROW><EMPID>14</EMPID><EMPNAME>JOHN</EMPNAME></ROW>
</EMPTYPE>
</EMPLOYEE>'
In this case, I have to populate two tables: first EMP_TYPE (typeid - PK) and then EMPLOYEE (EID, ENAME, TYPEID - FK)...
Pls provide me some sample code.
Thanks
Simbhu
Hi,
I have a similar requirement, where i need to extract a particular segment from an xml doc, but am getting errors:
The XML Doc is :
<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<soap:Body>
<Code39Response> <Code39Result>iVBORw0KGgoAAAANSUhEUgAAAC0AAAAeCAYAAAC49JeZAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAgY0hSTQAAeiYAAICEAAD6AAAAgOgAAHUwAADqYAAAOpgAABdwnLpRPAAAAO1JREFUWEftlgkOhTAIRPX+h3ZLaxCBMnWt0uQnml/Ikw5T+mFaXWtrhm5tda0BL8rQoCfFwN9jxXjzefaJZDnQk4B+Gbpfqoonx3+g52rkn1Rp67+8vybHGsuPiB8P8s5haaz2nCx3xbBynAZtadoL7c0hQtMj4888sVcCVA5Hc6jQaEcjx+6x15IcNz5dshrNBmugLbBLoHPjvEYe8BX4UAB+Vz8EumniFzDACFFpuGSVAd+qNB9oqE9Tu5P8W7pNpQGJe7w2RO3mI+uW0uACOlXNmnWi0nQWLukzNB2NaDRVyONT8qgcC24JGwGQIbYsvaAvwQAAAABJRU5ErkJggg==</Code39Result>
</Code39Response>
</soap:Body>
</soap:Envelope>
select XMLTYPE('<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<soap:Body>
<Code39Response> <Code39Result>iVBORw0KGgoAAAANSUhEUgAAAC0AAAAeCAYAAAC49JeZAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAgY0hSTQAAeiYAAICEAAD6AAAAgOgAAHUwAADqYAAAOpgAABdwnLpRPAAAAO1JREFUWEftlgkOhTAIRPX+h3ZLaxCBMnWt0uQnml/Ikw5T+mFaXWtrhm5tda0BL8rQoCfFwN9jxXjzefaJZDnQk4B+Gbpfqoonx3+g52rkn1Rp67+8vybHGsuPiB8P8s5haaz2nCx3xbBynAZtadoL7c0hQtMj4888sVcCVA5Hc6jQaEcjx+6x15IcNz5dshrNBmugLbBLoHPjvEYe8BX4UAB+Vz8EumniFzDACFFpuGSVAd+qNB9oqE9Tu5P8W7pNpQGJe7w2RO3mI+uW0uACOlXNmnWi0nQWLukzNB2NaDRVyONT8qgcC24JGwGQIbYsvaAvwQAAAABJRU5ErkJggg==</Code39Result>
</Code39Response>
</soap:Body>
</soap:Envelope>') from dual
does NOT give any error. BUT
SELECT VALUE(v) xml_data
FROM (select XMLTYPE('<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<soap:Body>
<Code39Response>
<Code39Result>iVBORw0KGgoAAAANSUhEUgAAAC0AAAAeCAYAAAC49JeZAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAgY0hSTQAAeiYAAICEAAD6AAAAgOgAAHUwAADqYAAAOpgAABdwnLpRPAAAAO1JREFUWEftlgkOhTAIRPX+h3ZLaxCBMnWt0uQnml/Ikw5T+mFaXWtrhm5tda0BL8rQoCfFwN9jxXjzefaJZDnQk4B+Gbpfqoonx3+g52rkn1Rp67+8vybHGsuPiB8P8s5haaz2nCx3xbBynAZtadoL7c0hQtMj4888sVcCVA5Hc6jQaEcjx+6x15IcNz5dshrNBmugLbBLoHPjvEYe8BX4UAB+Vz8EumniFzDACFFpuGSVAd+qNB9oqE9Tu5P8W7pNpQGJe7w2RO3mI+uW0uACOlXNmnWi0nQWLukzNB2NaDRVyONT8qgcC24JGwGQIbYsvaAvwQAAAABJRU5ErkJggg==</Code39Result>
</Code39Response>
</soap:Body>
</soap:Envelope>') main_xml from dual), TABLE(xmlsequence(extract(main_xml,'soap:Envelope/soap:Body/child::node()'))) v
Gives me the error below:
ORA-31011: XML Parsing Failed
ORA-19202: Error occurred in XML parsing
LPX-00601: Invalid token in '<soap:Envelope/soap:Body'
31011.00000 - "XML Parsing Failed"
Am i doing something wrong?
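For what it's worth, LPX-00601 here typically means the XPath uses namespace prefixes that extract() has not been told about; extract() accepts a namespace map as its third argument. A sketch against the same document (body elided):

```sql
SELECT VALUE(v) xml_data
FROM (SELECT XMLTYPE('...the SOAP document from above...') main_xml FROM dual),
     TABLE(XMLSEQUENCE(EXTRACT(main_xml,
            '/soap:Envelope/soap:Body/child::node()',
            -- third argument maps the soap: prefix used in the path
            'xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"'))) v;
```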
Thanks
Ashish -
Query Nested table (collection)
I try to query on a nested table (collection).
However, I get an ORA-600 error.
Is there any way to reuse the result of a query in the stored procedure?
SQL> DECLARE
2 TYPE T_PER_ID IS TABLE OF PERIOD.PER_ID%TYPE;
3 V_EXP_PER_ID T_PER_ID;
4 V_EXP_PER_ID2 T_PER_ID;
5 BEGIN
6 SELECT PER_ID BULK COLLECT INTO V_EXP_PER_ID FROM PERIOD;
7 SELECT * BULK COLLECT INTO V_EXP_PER_ID2 FROM TABLE(V_EXP_PER_ID);
8 END;
9 /
DECLARE
ERROR at line 1:
ORA-00600: internal error code, arguments: [15419], [severe error during PL/SQL
execution], [], [], [], [], [], []
ORA-06544: PL/SQL: internal error, arguments: [pfrrun.c:pfrbnd1()], [], [], [],
ORA-06553: PLS-801: internal error [0]
Oracle Version 8.1.6
-
Speed up table query performance
Hi,
I have a table with 200mil rows (approx 30G size). What are the multiple ways to improve query performance on this table? We have tried indexing, but doesn't help much. Partitioning - I am not sure how to use it, since there is no clear criteria for creating partitions. What other options could I use?
If this is in the wrong forum, please suggest appropriate forum.
Thanks in advance.
Assuming you have purchased the partitioning option from Oracle and are not violating the terms of your license agreement, then partitioning is the way to go.
For the concept docs go to http://tahiti.oracle.com
For demos of all of the various forms of partitioning go to Morgan's Library at www.psoug.org and look up partitioning.
New partitioning options have been added in 11gR1 so some of the demos may not work for you: Most will. -
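As a sketch of the range-partitioning flavour (table name, column and date boundaries are assumed here, not taken from the post):

```sql
CREATE TABLE big_table (
  id         NUMBER,
  trans_date DATE,
  payload    VARCHAR2(4000)
)
PARTITION BY RANGE (trans_date) (
  -- one partition per year; queries filtered on trans_date prune to one partition
  PARTITION p2008 VALUES LESS THAN (TO_DATE('2009-01-01','YYYY-MM-DD')),
  PARTITION p2009 VALUES LESS THAN (TO_DATE('2010-01-01','YYYY-MM-DD')),
  PARTITION pmax  VALUES LESS THAN (MAXVALUE)
);
```

Range on a date is only one option; if there is genuinely no natural range or list key, hash partitioning can still spread I/O, though it gives no partition pruning for range predicates.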
Nested Tables and Advanced Queues- Please Help.
How do i work with NestedTable type and Advanced Queue.
I have done the following
I have Oracle 8.1.7 enterprise edition.
create type myType as TABLE OF varchar(32);
create type myObject as OBJECT (
id int,
myt myType);
DECLARE
BEGIN
dbms_aqadm.create_queue_table(
queue_table => 'my_queue_table',
multiple_consumers => TRUE,
queue_payload_type => 'myObject',
compatible => '8.1.3');
END;
The Nested Table and Object are created successfully.
but the queue is not created.
I get the following message.
DECLARE
ERROR at line 1:
ORA-22913: must specify table name for nested table column or
attribute
ORA-06512: at "SYS.DBMS_AQADM_SYS", line 2012
ORA-06512: at "SYS.DBMS_AQADM", line 55
ORA-06512: at line 3
I know how to specify the nested table storage clause for
create table statement, but there is no provision for
it in the create_queue_table procedure.
Any help will be greately appriciated.
i have already created and tested aqs with simple data types,
also i have created simple tables with nested table type
elements.
but the combo of Nested tables and AQ is not working.
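One way around ORA-22913 that may be worth trying: create_queue_table has a storage_clause parameter, and the payload column of a queue table is named user_data, so the nested-table storage can be specified through it. A sketch using the types above (the exact clause should be verified against your version's DBMS_AQADM documentation):

```sql
BEGIN
  dbms_aqadm.create_queue_table(
    queue_table        => 'my_queue_table',
    multiple_consumers => TRUE,
    queue_payload_type => 'myObject',
    compatible         => '8.1.3',
    -- user_data is the payload column; myt is the nested-table attribute of myObject
    storage_clause     => 'NESTED TABLE user_data.myt STORE AS my_queue_nt');
END;
/
```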
thanks in advance. -
Nested Tables and Forms 9i-please help
Hi
i have the folowing example of a nested table:
CREATE TYPE CourseList AS TABLE OF VARCHAR2(10) -- define TABLE type
CREATE TYPE Student AS OBJECT ( -- create object
id_num INTEGER(4),
name VARCHAR2(25),
address VARCHAR2(35),
status CHAR(2),
courses CourseList) -- declare nested table as attribute
CREATE TYPE Course AS OBJECT (
course_no NUMBER(4),
title VARCHAR2(35),
credits NUMBER(1));
CREATE TYPE CourseList AS TABLE OF Course;
CREATE TABLE department (name VARCHAR2(20),
director VARCHAR2(20),
office VARCHAR2(20),
courses CourseList)
NESTED TABLE courses STORE AS courses_tab;
In Forms i have created a data block based on table department and for managing the nested table courses_tab i found out that i should use a stored procedure:
PROCEDURE procedure_nest (c_c out courses_tab%rowtype) IS
c_c courselist;
BEGIN
select c.name, c.director,office into :department.name, :department.director,
:department.office from department c , table
(select n.courses from department n where c.name =n.name and rownum<=1);
END;
compile error:
error 999 at line 1, column 35
implementation restriction (may be temporary): ADT or schema-level
collection not supported at client side with non-OCI mode.
i really would like to know what shall i do to use and manage the nested table in forms.
Thank you very much
Hi Francois. Thank you very much for your reply, but it seems that I still get errors. So let me tell you what I have done.
As you suggested me: i have done a block based on a sub-query for the nested-table:
'select courses from department where name= :department.name'.
In the master block(department) i have the when-new-record-instance trigger:
Declare
LC$Req varchar2(256);
Begin
LC$Req := '(select ns.courses from table
( select courses from department where name = ''' || :DEPARTMENT.name || ''' ) ns )';
Go_block('block11');
Clear_Block ;
Set_Block_Property( 'block11', QUERY_DATA_SOURCE_NAME, LC$Req ) ;
Execute_query ;
End ;
Now the errors i receive, this time in the runtime mode are:
- FRM-41380: Cannot set the blocks query data source
-FRM-41003: This function cannot be performed here.
Since it seems that you know how to work with the nested table i would really appreaciate your help. I am new in the nested table and if you could give an ex with my tables it would be great.
Thank you in advance. -
Nested tables.... Please help
Hi Friends,
I have some strange issues when I am trying to create a nested dynamic table with SAP DATA in Adobe Designer.
In the context, under the MAIN node
I had dragged the header table. Under the header table I had dragged the item table and had created a WHERE condition to link the HEADER and ITEM table.
Now my layout is pretty complicated..
I want to result to be as below:
header row.... with 0001 Customer name
item row with 0001 0010 price qty
0001 0020 price qty
0001 0030 price qty
header row... field4
(field4 from the header table has to be displayed after the item table has generated all the items and this process should repeat for all header data)
My design in layout is as below
subform1 for headerdata
( set as positioned and has set to 'Repeat form for each data item')
headerdata fields for subform1 (field1, field2, field3)
subform2 for itemdata
(set as flowed)
subform3 for itemdata
(set as positioned, 'Allow page breaks within content' and had set 'Repeat form for each data item')
itemdata fields in subform3
subform4 for headerrow field4
but when I activated and execute the form, the result was wrong.
The result should be like below...
|->field1, field2, field3
|--->field1 charge111
|--->field1 charge211
|-> field4
|->field2
|--->field2 charge122
|--->field2 charge222
|-> field4
but it is printed like below and field4 is overlapped with the item rows if the item rows are more...
DATA
|->field1 field2 field3
|--->field1 charge111 This inner table is overlapping the subform4 for each item
|--->field1 charge211
|
|->field1 field2 field3
|--->field1 charge121 This inner table is overlapping the subform4 for each item
|--->field1 charge221
2) If I have more item data, all the item rows overlap field4 (which I explained in my layout), and the item rows do not move to the next page but continue on the same page. In other words, if the page can fit 20 lines and my item rows number more than 20, only 20 lines are printed and the remaining items do not flow to the next page... they are invisible...
How can I solve this issue?
Please help me
Thanks in advance.
Jaffer Ali.S
Hi Francois. Thank you very much for your reply, but it seems that I still get errors. So let me tell you what I have done.
As you suggested, I created a block based on a sub-query for the nested table:
'select courses from department where name = :department.name'.
In the master block (department) I have the when-new-record-instance trigger:
Declare
LC$Req varchar2(256);
Begin
LC$Req := '(select ns.courses from table
( select courses from department where name = ''' || :DEPARTMENT.name || ''' ) ns )';
Go_block('block11');
Clear_Block ;
Set_Block_Property( 'block11', QUERY_DATA_SOURCE_NAME, LC$Req ) ;
Execute_query ;
End ;
The errors I now receive, this time at runtime, are:
- FRM-41380: Cannot set the blocks query data source
-FRM-41003: This function cannot be performed here.
Since it seems that you know how to work with nested tables, I would really appreciate your help. I am new to nested tables, and if you could give an example with my tables it would be great.
Thank you in advance. -
The best way to insert values in a Nested Table
Hi!
I want to insert the values from a SQL query into a nested table. What's the best way to do it?
In addition, the only way that I've found is running a query and, once I've got the result, inserting it into the nested table row by row. For instance:
FOR cur_row IN (SELECT id,nstreet from example Where id=3) LOOP
--here I'm inserting the values of the query in the nested table.
-- VarNestedTable is a Nested Table of Row_Type.
--Row_Type is an object with two fields:Id,Nstreet
VarNestedTable.extend;
VarNestedTable(VarNestedTable.Last) := Row_type(cur_row.id, cur_row.nstreet);
END LOOP;
How to Use Tables: Creating a Table Model.
very bad example:
class DataObject {
    String name, age, numberOfMonkeys;
}

class DOTableModel extends AbstractTableModel {
    static final String[] COLUMN_HEADER = { "Name", "Age", "Monkeys" };
    List<DataObject> dataObjects;

    public String getColumnName(int column) {
        return COLUMN_HEADER[column];
    }

    public int getRowCount() {
        return dataObjects.size();
    }

    public int getColumnCount() {
        return COLUMN_HEADER.length; // arrays have .length, not .size()
    }

    public Object getValueAt(int row, int column) {
        DataObject obj = dataObjects.get(row); // "do" is a reserved word in Java
        switch (column) {
            case 0: return obj.name;
            case 1: return obj.age;
            case 2: return obj.numberOfMonkeys;
            default: return null;
        }
    }
} -
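Following up on the table-model snippet in that thread: here is a self-contained, runnable sketch of the same idea, with a tiny headless check of the model's accessors. The sample rows ("Alice"/"Bob") and the class name TableModelDemo are invented for illustration; only DataObject/DOTableModel come from the post above.

```java
import java.util.Arrays;
import java.util.List;
import javax.swing.table.AbstractTableModel;

public class TableModelDemo {
    // Plain row holder, as in the thread above.
    static class DataObject {
        String name, age, numberOfMonkeys;
        DataObject(String n, String a, String m) { name = n; age = a; numberOfMonkeys = m; }
    }

    // Minimal read-only model backed by a List of DataObject rows.
    static class DOTableModel extends AbstractTableModel {
        static final String[] COLUMN_HEADER = { "Name", "Age", "Monkeys" };
        final List<DataObject> dataObjects;
        DOTableModel(List<DataObject> rows) { dataObjects = rows; }
        @Override public String getColumnName(int column) { return COLUMN_HEADER[column]; }
        @Override public int getRowCount() { return dataObjects.size(); }
        @Override public int getColumnCount() { return COLUMN_HEADER.length; }
        @Override public Object getValueAt(int row, int column) {
            DataObject d = dataObjects.get(row);
            switch (column) {
                case 0: return d.name;
                case 1: return d.age;
                default: return d.numberOfMonkeys;
            }
        }
    }

    public static void main(String[] args) {
        DOTableModel model = new DOTableModel(Arrays.asList(
            new DataObject("Alice", "30", "2"),
            new DataObject("Bob", "25", "5")));
        // A JTable would normally consume this model; here we just poke it directly.
        System.out.println(model.getRowCount() + " " + model.getColumnName(2) + " " + model.getValueAt(1, 2));
    }
}
```

In a real UI you would pass the model to `new JTable(model)`; the model itself needs no Swing event thread to be queried, which is what makes it easy to unit-test.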
Producing nested xml with querying attributes from a nested table
How can one produce nested xml querying columns from a nested table? Looking at the object documentation, I can readily unnest the tables. Using your examples in the book, unnesting is select po.pono, ..., l.* from purchaseorder_objtab po, table (po.lineitemlist_ntab) l where l.quantity = 2;
what if I don't want to unnest and don't want a cursor? I would like to produce nested XML.
Gail,
Although you can use XSU (XML-SQL Util) in 8.1.7, I would recommend that you upgrade to 9i for much better support (both in functionality and performance) of XML in the database. For example, in Oracle9i there are:
- a new datatype - XMLType for storing and retrieving XML documents
- DBMS_XMLGEN package and SYS_XMLGEN, SYS_XMLAGG functions for generating XML document from complex SQL queries
- A pipelined table function to break down large XML documents into rows.
You can check out some examples using SYS_XMLGEN and DBMS_XMLGEN for your specific needs at http://download-west.oracle.com/otndoc/oracle9i/901_doc/appdev.901/a88894/adx05xml.htm#1017141
Regards,
Geoff -
Sql query, from xml to nested table
Hello!
I have a DB table: my_table
It has 2 fields: file_id and file_data (a CLOB containing XML)
I need to write a query that returns info from the XML using nested tables (Oracle v.9)
The number of rows the query returns must equal the number of files (file_id)
Structure of XML:
<?xml version = "1.0" encoding = "utf-8"?>
<head>
<AAA v1="a" v2="b">
<BBB p1="1" p2="2"/>
<BBB p1="3" p2="4"/>
</AAA>
<AAA v1="c" v2="d">
<BBB p1="5" p2="6"/>
<BBB p1="7" p2="8"/>
<BBB p1="9" p2="0"/>
</AAA>
</head>
I have a query which works, but not optimally: each CLOB is scanned 3 times, and I want to scan it once!
SELECT an.file_id,
       CAST(MULTISET(SELECT extract(VALUE(val2),'//@v1').getStringVal() v1,
                            extract(VALUE(val2),'//@v2').getStringVal() v2,
                            CAST(MULTISET(SELECT extract(VALUE(val3),'//@p1').getStringVal() p1,
                                                 extract(VALUE(val3),'//@p2').getStringVal() p2
                                          FROM TABLE (XMLSEQUENCE(XMLTYPE(an.file_data).EXTRACT('/head/AAA/BBB[../@v1='||extract(VALUE(val2),'//@v1').getStringVal()||']'))) val3) AS T_VAL3) info
                     FROM TABLE (XMLSEQUENCE(XMLTYPE(an.file_data).EXTRACT('/head/AAA'))) val2) AS T_VAL2) head
FROM (SELECT olr.*
      FROM my_table olr,
           TABLE (XMLSEQUENCE(XMLTYPE(file_data).EXTRACT('/head'))) val1) an
PLEASE, help me to rewrite this query!
I assume you're using nested objects like these to hold the result?
create type t_val3_rec as object ( p1 number, p2 number );
create type t_val3 as table of t_val3_rec;
create type t_val2_rec as object ( v1 varchar2(30), v2 varchar2(30), c1 t_val3 );
create type t_val2 as table of t_val2_rec;
/
then this query should work:
SELECT t.file_id
     , CAST(
         MULTISET(
           SELECT extractvalue(value(x1), '/AAA/@v1')
                , extractvalue(value(x1), '/AAA/@v2')
                , CAST(
                    MULTISET(
                      SELECT extractvalue(value(x2), '/BBB/@p1')
                           , extractvalue(value(x2), '/BBB/@p2')
                      FROM TABLE(XMLSequence(extract(value(x1), '/AAA/BBB'))) x2
                    ) AS t_val3
                  )
           FROM TABLE(XMLSequence(extract(value(x), '/head/AAA'))) x1
         ) AS t_val2
       ) head
FROM my_table t
   , TABLE(XMLSequence(extract(xmltype(t.file_data), '/head'))) x
;
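For readers who need the same head/AAA/BBB nesting outside the database, the structure in the sample document can also be rebuilt on the client. A minimal sketch using the JDK's built-in StAX parser follows; the class name NestedXml and the record holder Aaa are made up for illustration, and only the element/attribute names (AAA, BBB, v1, v2, p1, p2) come from the sample XML above.

```java
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

public class NestedXml {
    // One AAA element: its two attributes plus the nested BBB attribute pairs.
    public static class Aaa {
        public final String v1, v2;
        public final List<String[]> bbb = new ArrayList<>();
        Aaa(String v1, String v2) { this.v1 = v1; this.v2 = v2; }
    }

    // Single pass over the document: each AAA opens a new record,
    // each BBB appends a (p1, p2) pair to the current record.
    public static List<Aaa> parse(String xml) throws Exception {
        XMLStreamReader r = XMLInputFactory.newInstance()
                .createXMLStreamReader(new StringReader(xml));
        List<Aaa> result = new ArrayList<>();
        Aaa current = null;
        while (r.hasNext()) {
            if (r.next() == XMLStreamConstants.START_ELEMENT) {
                if ("AAA".equals(r.getLocalName())) {
                    current = new Aaa(r.getAttributeValue(null, "v1"),
                                      r.getAttributeValue(null, "v2"));
                    result.add(current);
                } else if ("BBB".equals(r.getLocalName()) && current != null) {
                    current.bbb.add(new String[] {
                        r.getAttributeValue(null, "p1"),
                        r.getAttributeValue(null, "p2") });
                }
            }
        }
        return result;
    }

    public static void main(String[] args) throws Exception {
        String xml = "<head>"
            + "<AAA v1=\"a\" v2=\"b\"><BBB p1=\"1\" p2=\"2\"/><BBB p1=\"3\" p2=\"4\"/></AAA>"
            + "<AAA v1=\"c\" v2=\"d\"><BBB p1=\"5\" p2=\"6\"/><BBB p1=\"7\" p2=\"8\"/><BBB p1=\"9\" p2=\"0\"/></AAA>"
            + "</head>";
        List<Aaa> rows = parse(xml);
        System.out.println(rows.size() + " AAA rows; first has "
            + rows.get(0).bbb.size() + " BBB children");
    }
}
```

Like the CAST(MULTISET(...)) query, this makes exactly one pass over each document, which is the property the original poster was after.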