How to minimise performance degradation when querying a growing table during processing...?
Hi everyone
Let's say you have a PL/SQL routine that processes data from a source table. For each record, it checks whether a matching record exists in a header table (TableA); if one does, it uses it, otherwise it creates a new one. It then inserts associated detail records (into TableB) linked to the header record. So the process is:
Read record from source table
Check to see if matching header record exists in TableA (using indexed field)
If match found then store TXH_ID (PK in TableA)
If no match found then create new header record in TableA with new TXH_ID
Create detail record in TableB where TXD_TXH_ID (FK on TableB) = TXH_ID
If the header table (TableA) starts getting big (i.e. the process adds a few million records to it), presumably the stats on TableA will start to get stale, and therefore the query in step 2 will become more time-consuming?
If so, is there any way to rectify this? Would updating the stats at certain points in the process be effective?
Would it be any different if a MERGE was used to (conditionally) insert the header records into TableA? (i.e. would the stats still get stale?)
DB is 11gR2 and OS is Windows Server 2008.
Thanks
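One option worth testing for the staleness concern is to re-gather optimizer statistics at checkpoints during the run. A minimal sketch (the table name and parameter choices are illustrative, not from the original post):

```sql
-- Sketch: refresh stats on the header table partway through a long load,
-- e.g. after every few million inserts (table name is hypothetical).
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => USER,
    tabname          => 'TABLEA',
    cascade          => TRUE,                        -- refresh index stats too
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    no_invalidate    => FALSE);                      -- let dependent cursors re-parse
END;
/
```

Whether this is worth the cost depends on whether the step-2 lookup plan actually changes as the table grows.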
If the header table (Table A) starts getting big (i.e. the process adds a few million records to it), presumably the stats on TableA will start to get stale and therefore the query in step 2 will become more time consuming?
What do you mean by 'presumably the stats . .'?
In item #3 you said that TXH_ID is the primary key. That means only ONE value will EVER be found in the index, so there should be NO degradation when looking up that primary key value.
The plan you posted shows an index range scan. A range scan is NOT used to look up primary key values, since they must be unique (meaning there is NO RANGE).
So there should be NO impact due to the header table 'getting big'.
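That claim is easy to verify from a session; a quick sketch (table and bind names are assumed, not from the thread):

```sql
-- If the step-2 lookup really is on the primary key, the plan should show
-- INDEX UNIQUE SCAN, whose cost barely changes as the table grows.
EXPLAIN PLAN FOR
  SELECT txh_id FROM tablea WHERE txh_id = :lookup_id;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

If the lookup is on a non-unique indexed field rather than the PK, a range scan is expected, and stats freshness matters more.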
Similar Messages
-
How do I avoid ORA-01473 when querying hierarchically on tables with VPD predicates
My question is how to circumvent what seems to be a limitation in Oracle, if at all possible. Please read on.
When using VPD (Virtual Private Database) predicates on a table and performing a hierarchical query on that table, I get the following error message:
ORA-01473: cannot have subqueries in CONNECT BY CLAUSE
My query may look like the following:
SELECT FIELD
FROM TABLE
START WITH ID = 1
CONNECT BY PRIOR ID = PARENT
As my predicate contains a query in it self, I suspect that the implicit augmentation of the predicate results in a query that looks like:
SELECT FIELD
FROM TABLE
START WITH ID = 1
CONNECT BY PRIOR ID = PARENT
AND OWNER IN (SELECT OWNER FROM TABLE2 WHERE ...)
At least, when executing a query like the one above (with the explicit predicate), I get the identical error message.
So my question is:
Do you know of any way to force the predicate to augment itself onto the WHERE clause? I would be perfectly happy with a query that looks like:
SELECT FIELD
FROM TABLE
START WITH ID = 1
CONNECT BY PRIOR ID = PARENT
WHERE OWNER IN (SELECT OWNER FROM TABLE2 WHERE ...)
or do you know of any fix/patch/release of Oracle that allows you to include subqueries in the CONNECT BY clause and eliminates the error message?
The WHERE clause or AND clause applies to the line directly above it. Please see the examples of valid and invalid queries below, which differ only in the placement of the WHERE or AND clause. If this is not sufficient, please provide some sample data and desired output to clarify what you need.
-- valid:
SQL> SELECT empno,
2 mgr,
3 deptno
4 FROM emp
5 WHERE deptno IN
6 (SELECT deptno
7 FROM dept
8 WHERE dname = 'RESEARCH')
9 START WITH mgr = 7566
10 CONNECT BY PRIOR empno = mgr
11 /
EMPNO MGR DEPTNO
7788 7566 20
7876 7788 20
7902 7566 20
800 7902 20
-- invalid:
SQL>
SQL> SELECT empno,
2 mgr,
3 deptno
4 FROM emp
5 START WITH mgr = 7566
6 CONNECT BY PRIOR empno = mgr
7 WHERE deptno IN
8 (SELECT deptno
9 FROM dept
10 WHERE dname = 'RESEARCH')
11 /
WHERE deptno IN
ERROR at line 7:
ORA-00933: SQL command not properly ended
-- valid:
SQL>
SQL> SELECT empno,
2 mgr,
3 deptno
4 FROM emp
5 START WITH mgr = 7566
6 AND deptno IN
7 (SELECT deptno
8 FROM dept
9 WHERE dname = 'RESEARCH')
10 CONNECT BY PRIOR empno = mgr
11 /
EMPNO MGR DEPTNO
7788 7566 20
7876 7788 20
7902 7566 20
800 7902 20
-- invalid:
SQL>
SQL> SELECT empno,
2 mgr,
3 deptno
4 FROM emp
5 START WITH mgr = 7566
6 CONNECT BY PRIOR empno = mgr
7 AND deptno IN
8 (SELECT deptno
9 FROM dept
10 WHERE dname = 'RESEARCH')
11 /
FROM emp
ERROR at line 4:
ORA-01473: cannot have subqueries in CONNECT BY clause -
Performance degradation when using foreign keys
Hi,
I face drastic performance degradation when I add foreign keys to a table and perform inserts/updates on that table.
I have a row store table into which I need to insert around 150,000 records.
If the table has no foreign key references, it takes at most 5 seconds; but if the same table has references to other tables (in my case there are 3 references), the processing speed drops drastically to 2 minutes.
Is there any solution / best practice that can help me in gaining performance (processing speed) in this situation?
Thanks
S.Srivatsan
Hi Sri,
When you perform an insert into any database table that has foreign key relationships, it checks the corresponding parent tables to see whether the master data is available. If your table has 2 foreign key relationships, this happens twice per insert, so performance degrades accordingly. This is one of the reasons ECC doesn't establish foreign key relationships in the back-end database. The same applies not just to INSERT but to UPDATE and DELETE as well.
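If the data being loaded is known to be consistent, one common workaround is to disable the foreign keys for the bulk load and re-enable them afterwards. A generic SQL sketch (constraint and table names are made up, and the exact syntax depends on your database):

```sql
-- Sketch: bypass per-row FK checks during a bulk load.
-- Only safe if the loaded data is guaranteed to satisfy the constraints.
ALTER TABLE detail_tab DISABLE CONSTRAINT detail_tab_fk1;

-- ... bulk insert the ~150,000 rows here ...

-- Re-enable; NOVALIDATE skips re-checking existing rows (Oracle syntax).
ALTER TABLE detail_tab ENABLE NOVALIDATE CONSTRAINT detail_tab_fk1;
```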
Sreehari -
How to trigger a workflow when data inside a table changes
Hi
How to trigger a workflow when data inside a table changes?
We need to trigger a workflow when the STAT2 field value in the PA0000 table changes.
rgds
Chemmanz
Make use of business object BUS1065. In this business object you have an attribute, Status, which you can use. There are a number of events that will get triggered when the status is changed.
Thanks
Arghadip -
How can I find out when was a particular table last updated?
How can I find out when a particular table was last updated? I need to find out the usage of this table - when it was last updated, etc. Thanks in advance. The version I am using is Oracle 9i.
If you don't have any application level logging, and auditing is not enabled, there's not much hope.
You could, if you have archive logs available, go trawling through archive logs via logminer, but that's likely to prove painful and not very fruitful, unless you're very meticulous and patient...
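One more long shot: on 10g and later (not 9i, unfortunately), the ORA_ROWSCN pseudo-column can give a rough answer. A sketch, with the usual caveats (coarse block-level SCNs unless the table was created with ROWDEPENDENCIES, and SCN_TO_TIMESTAMP only maps relatively recent SCNs):

```sql
-- Approximate time of the most recent row change in the table
-- (table name is hypothetical).
-- Raises ORA-08181 if the SCN is too old for the mapping.
SELECT SCN_TO_TIMESTAMP(MAX(ORA_ROWSCN)) AS approx_last_update
FROM   my_table;
```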
-Mark -
How to get the field name of an internal table during runtime?
How to get the field name of an internal table during runtime?
Hi Sudhir,
Declare and use GET CURSOR FIELD in your program to get the field name of the internal table.
Example Code:
DATA: v_field(60).            " Insert this code.
GET CURSOR FIELD v_field.    " Insert this code.
CHECK v_field = 'ITAB-KUNNR'. " Insert this code. (or)
WRITE: v_field.
Regards,
Ramganesan K. -
How to improve performance of attached query
Hi,
How can I improve the performance of the below query? Please help. The explain plan is also attached -
SELECT Camp.Id,
rCam.AccountKey,
Camp.Id,
CamBilling.Cpm,
CamBilling.Cpc,
CamBilling.FlatRate,
Camp.CampaignKey,
Camp.AccountKey,
CamBilling.billoncontractedamount,
(SUM(rCam.Impressions) * 0.001 + SUM(rCam.Clickthrus)) AS GR,
rCam.AccountKey as AccountKey
FROM Campaign Camp, rCamSit rCam, CamBilling, Site xSite
WHERE Camp.AccountKey = rCam.AccountKey
AND Camp.AvCampaignKey = rCam.AvCampaignKey
AND Camp.AccountKey = CamBilling.AccountKey
AND Camp.CampaignKey = CamBilling.CampaignKey
AND rCam.AccountKey = xSite.AccountKey
AND rCam.AvSiteKey = xSite.AvSiteKey
AND rCam.RmWhen BETWEEN to_date('01-01-2009', 'DD-MM-YYYY') and
to_date('01-01-2011', 'DD-MM-YYYY')
GROUP By rCam.AccountKey,
Camp.Id,
CamBilling.Cpm,
CamBilling.Cpc,
CamBilling.FlatRate,
Camp.CampaignKey,
Camp.AccountKey,
CamBilling.billoncontractedamount
Explain Plan :-
Description Object_owner Object_name Cost Cardinality Bytes
SELECT STATEMENT, GOAL = ALL_ROWS 14 1 13
SORT AGGREGATE 1 13
VIEW GEMINI_REPORTING 14 1 13
HASH GROUP BY 14 1 103
NESTED LOOPS 13 1 103
HASH JOIN 12 1 85
TABLE ACCESS BY INDEX ROWID GEMINI_REPORTING RCAMSIT 2 4 100
NESTED LOOPS 9 5 325
HASH JOIN 7 1 40
SORT UNIQUE 2 1 18
TABLE ACCESS BY INDEX ROWID GEMINI_PRIMARY SITE 2 1 18
INDEX RANGE SCAN GEMINI_PRIMARY SITE_I0 1 1
TABLE ACCESS FULL GEMINI_PRIMARY SITE 3 27 594
INDEX RANGE SCAN GEMINI_REPORTING RCAMSIT_I 1 1 5
TABLE ACCESS FULL GEMINI_PRIMARY CAMPAIGN 3 127 2540
TABLE ACCESS BY INDEX ROWID GEMINI_PRIMARY CAMBILLING 1 1 18
INDEX UNIQUE SCAN GEMINI_PRIMARY CAMBILLING_U1 0 1
duplicate thread..
How to improve performance of attached query -
Performance Degradation when server added to Cluster
Hi,
I am having some performance issues with my weblogic cluster.
I am running 2 WLS 5.1 sp8 servers on Solaris 7, and 4 Apache 1.3.12
web servers using the Apache/WebLogic proxy.
The performance seems to be fine when only one server is running.
But when both servers are running the application slows to a crawl.
It seems to be VERY slow when hitting the database (only with
both servers running).
I also have the same exact application running in clustered mode
in my staging environment, and it has NO performance issues when
both servers are running.
My thoughts are that something is configured incorrectly, and
is causing the 2 servers in the cluster to have problems communicating.
Any ideas or thoughts would be greatly appreciated.
Thank you.
Hi,
According to your description, my understanding is that you want to run your custom code in the feature event receiver automatically, without re-activating the feature.
In the feature event receiver, the event needs to be triggered by activating or deactivating the feature, so there is no easy way to run code directly without re-activating the feature.
As a workaround, I suggest you create a scheduled task that reactivates the feature using a PowerShell command to run your custom code.
More information:
Activating and Deactivating Features with PowerShell:
http://sharepointgroup.wordpress.com/2012/05/04/activating-and-deactivating-features-with-powershell/
Running a SharePoint PowerShell script from Task Scheduler:
http://get-spscripts.com/2011/01/running-sharepoint-powershell-script.html
Best regards,
Zhengyu Guo
TechNet Community Support -
Performance problem with query on bkpf table
hi, good morning all,
I have a performance problem with the below query on the bkpf table.
SELECT bukrs
belnr
gjahr
FROM bkpf
INTO TABLE ist_bkpf_temp
WHERE budat IN s_budat.
Is there any possibility to improve the performance by using an index?
Please help me,
thanks in advance ,
regards ,
srinivas
hi,
If you can add bukrs as an input field, or if you have bukrs as part of any other internal table to filter the data, you can use:
for ex:
SELECT bukrs
belnr
gjahr
FROM bkpf
INTO TABLE ist_bkpf_temp
WHERE budat IN s_budat
and bukrs in s_bukrs.
or
SELECT bukrs
belnr
gjahr
FROM bkpf
INTO TABLE ist_bkpf_temp
for all entries in itab
WHERE budat IN s_budat
and bukrs = itab-bukrs.
Just see if it is possible to do any one of the above. It has to be verified against your requirement. -
How to get the non technical query name from table?
Hello,
The table RSZCOMPDIR gives me the list of queries. The field COMPID contains the technical query name.
But how can I get the non technical name of a query? Which table holds this information?
Thanks and Regards,
Sheetal
Hi Sheetal,
You can get this info from RSZELTTXT.
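For illustration, the two tables can be joined on the element UID; a sketch in plain SQL (the join condition and the language/version filters are assumptions based on the standard RSZ* table layouts):

```sql
-- Sketch: technical query name (COMPID) alongside its description (TXTLG).
SELECT d.compid, t.txtlg
FROM   rszcompdir d
JOIN   rszelttxt  t ON t.eltuid = d.compuid
WHERE  t.langu   = 'E'   -- text language
AND    d.objvers = 'A'   -- active version only
AND    t.objvers = 'A';
```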
Hope this helps... -
How to improve performance of select query when primary key is not referred
Hi,
There is a select query where we are unable to reference the primary key of the tables:
Since the below code references the vgbel and vgpos fields instead of vbeln and posnr, the performance is very slow.
select vbeln posnr into (wa-vbeln1, wa-posnr1)
from lips
where ( pstyv ne 'ZBAT'
and pstyv ne 'ZNLN' )
and vgbel = i_vbap-vbeln
and vgpos = i_vbap-posnr.
endselect.
Please let me know if you have some tips.
hi,
I suspect you are using the select statement inside a loop...endloop; move it outside the loop to improve the performance.
if not i_vbap[] is initial.
select vbeln posnr into table it_lips
from lips
for all entries in i_vbap
where ( pstyv ne 'ZBAT'
and pstyv ne 'ZNLN' )
and vgbel = i_vbap-vbeln
and vgpos = i_vbap-posnr.
endif. -
How to Improve performance issue when we are using BRM LDB
HI All,
I am facing a performance issue when I am retrieving the data from BKPF and the respective BSEG table. I see that for the fiscal period there are around 6 million (60 lakh) records, and populating the data from the table into the final internal table is taking a lot of time.
When I tried to use the BRM LDB with SAP Query/QuickViewer, it's the same issue.
Please suggest how I can resolve the performance issue.
Thanks in advance
Chakradhar
Moderator message - Please see Please Read before Posting in the Performance and Tuning Forum before posting - post locked
Rob -
How to improve Performance of a Query whcih is on a Vritual Cube
Hi All,
Please suggest some tips through which we can improve the performance of queries that were built on virtual cubes.
Thanks iin advance.
Regards,
Raj
Hi Raj,
How is your direct access datasource built? Is it a standard datasource, or a generic datasource on a view/table/function module? This strengthens my second point.
Suppose you built a virtual cube on a direct access datasource based on the AUFK table, with Order as the primary key (Order master data). When you use Order as a selection in a query built on this virtual cube, it retrieves the data faster than firing the query on other selections.
If your selections are different, you can possibly create a secondary index on the table with the selections used in the query.
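For the secondary-index suggestion, something along these lines (a sketch; the column list must match your query's actual selection fields, and these names are only examples):

```sql
-- Hypothetical secondary index on AUFK covering the query's selections.
CREATE INDEX aufk_sel_idx ON aufk (werks, auart);
```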
Regards
vamsi -
How to improve performance of a query that is based on an xmltype table
Dear Friends,
I have a query that is pulling records from an XMLType table with 9000 rows, and it is running very slowly.
I am using the XMLTABLE command to retrieve the rows. It is taking up to 30 minutes to finish.
Would you be able to suggest how I can make it faster? Thanks.
Below is the query.....
INSERT INTO temp_sap_po_receipt_history_t
(po_number, po_line_number, doc_year,
material_doc, material_doc_item, quantity, sap_ref_doc_no_long,
reference_doc, movement_type_code,
sap_ref_doc_no, posting_date, entry_date, entry_time, hist_type)
SELECT :pin_po_number po_number,
b.po_line_number, b.doc_year,
b.material_doc, b.material_doc_item, b.quantity, b.sap_ref_doc_no_long,
b.reference_doc, b.movement_type_code,
b.sap_ref_doc_no, to_date(b.posting_date,'rrrr-mm-dd'),
to_date(b.entry_date,'rrrr-mm-dd'), b.entry_time, b.hist_type
FROM temp_xml t,
XMLTABLE(XMLNAMESPACES('urn:sap-com:document:sap:rfc:functions' AS "n0"),
'/n0:BAPI_PO_GETDETAIL1Response/POHISTORY/item'
PASSING t.object_value
COLUMNS PO_LINE_NUMBER VARCHAR2(20) PATH 'PO_ITEM',
DOC_YEAR varchar2(4) PATH 'DOC_YEAR',
MATERIAL_DOC varchar2(30) PATH 'MAT_DOC',
MATERIAL_DOC_ITEM VARCHAR2(10) PATH 'MATDOC_ITEM',
QUANTITY NUMBER(20,6) PATH 'QUANTITY',
SAP_REF_DOC_NO_LONG VARCHAR2(20) PATH 'REF_DOC_NO_LONG',
REFERENCE_DOC VARCHAR2(20) PATH 'REF_DOC',
MOVEMENT_TYPE_CODE VARCHAR2(4) PATH 'MOVE_TYPE',
SAP_REF_DOC_NO VARCHAR2(20) PATH 'REF_DOC_NO',
POSTING_DATE VARCHAR2(10) PATH 'PSTNG_DATE',
ENTRY_DATE VARCHAR2(10) PATH 'ENTRY_DATE',
ENTRY_TIME VARCHAR2(8) PATH 'ENTRY_TIME',
HIST_TYPE VARCHAR2(5) PATH 'HIST_TYPE') b;
Based on response from mdrake on this thread:
Re: XML file processing into oracle
For large XML's, you can speed up the processing of XMLTABLE by using a registered schema...
declare
SCHEMAURL VARCHAR2(256) := 'http://xmlns.example.org/xsd/testcase.xsd';
XMLSCHEMA VARCHAR2(4000) := '<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xdb="http://xmlns.oracle.com/xdb" xdb:storeVarrayAsTable="true">
<xs:element name="cust_order" type="cust_orderType" xdb:defaultTable="CUST_ORDER_TBL"/>
<xs:complexType name="groupType" xdb:maintainDOM="false">
<xs:sequence>
<xs:element name="item" type="itemType" maxOccurs="unbounded"/>
</xs:sequence>
<xs:attribute name="id" type="xs:byte" use="required"/>
</xs:complexType>
<xs:complexType name="itemType" xdb:maintainDOM="false">
<xs:simpleContent>
<xs:extension base="xs:string">
<xs:attribute name="id" type="xs:short" use="required"/>
<xs:attribute name="name" type="xs:string" use="required"/>
</xs:extension>
</xs:simpleContent>
</xs:complexType>
<xs:complexType name="cust_orderType" xdb:maintainDOM="false">
<xs:sequence>
<xs:element name="group" type="groupType" maxOccurs="unbounded"/>
</xs:sequence>
<xs:attribute name="cust_id" type="xs:short" use="required"/>
</xs:complexType>
</xs:schema>';
INSTANCE CLOB :=
'<cust_order cust_id="12345">
<group id="1">
<item id="1" name="Standard Mouse">100</item>
<item id="2" name="Keyboard">100</item>
<item id="3" name="Memory Module 2Gb">200</item>
<item id="4" name="Processor 3Ghz">25</item>
<item id="5" name="Processor 2.4Ghz">75</item>
</group>
<group id="2">
<item id="1" name="Graphics Tablet">15</item>
<item id="2" name="Keyboard">15</item>
<item id="3" name="Memory Module 4Gb">15</item>
<item id="4" name="Processor Quad Core 2.8Ghz">15</item>
</group>
<group id="3">
<item id="1" name="Optical Mouse">5</item>
<item id="2" name="Ergo Keyboard">5</item>
<item id="3" name="Memory Module 2Gb">10</item>
<item id="4" name="Processor Dual Core 2.4Ghz">5</item>
<item id="5" name="Dual Output Graphics Card">5</item>
<item id="6" name="28inch LED Monitor">10</item>
<item id="7" name="Webcam">5</item>
<item id="8" name="A3 1200dpi Laser Printer">2</item>
</group>
</cust_order>';
begin
  dbms_xmlschema.registerSchema(
    schemaurl       => SCHEMAURL
   ,schemadoc       => XMLSCHEMA
   ,local           => TRUE
   ,genTypes        => TRUE
   ,genBean         => FALSE
   ,genTables       => TRUE
   ,ENABLEHIERARCHY => DBMS_XMLSCHEMA.ENABLE_HIERARCHY_NONE
  );
  execute immediate 'insert into CUST_ORDER_TBL values (XMLTYPE(:INSTANCE))' using INSTANCE;
end;
/
SQL> desc CUST_ORDER_TBL
Name Null? Type
TABLE of SYS.XMLTYPE(XMLSchema "http://xmlns.example.org/xsd/testcase.xsd" Element "cust_order") STORAGE Object-relational TYPE "cust_orderType222_T"
SQL> set autotrace on explain
SQL> set pages 60 lines 164 heading on
SQL> col cust_id format a8
SQL> select extract(object_value,'/cust_order/@cust_id') as cust_id
2 ,grp.id as group_id, itm.id as item_id, itm.inm as item_name, itm.qty as item_qty
3 from CUST_ORDER_TBL
4 ,XMLTABLE('/cust_order/group'
5 passing object_value
6 columns id number path '@id'
7 ,item xmltype path 'item'
8 ) grp
9 ,XMLTABLE('/item'
10 passing grp.item
11 columns id number path '@id'
12 ,inm varchar2(30) path '@name'
13 ,qty number path '.'
14 ) itm
15 /
CUST_ID GROUP_ID ITEM_ID ITEM_NAME ITEM_QTY
12345 1 1 Standard Mouse 100
12345 1 2 Keyboard 100
12345 1 3 Memory Module 2Gb 200
12345 1 4 Processor 3Ghz 25
12345 1 5 Processor 2.4Ghz 75
12345 2 1 Graphics Tablet 15
12345 2 2 Keyboard 15
12345 2 3 Memory Module 4Gb 15
12345 2 4 Processor Quad Core 2.8Ghz 15
12345 3 1 Optical Mouse 5
12345 3 2 Ergo Keyboard 5
12345 3 3 Memory Module 2Gb 10
12345 3 4 Processor Dual Core 2.4Ghz 5
12345 3 5 Dual Output Graphics Card 5
12345 3 6 28inch LED Monitor 10
12345 3 7 Webcam 5
12345 3 8 A3 1200dpi Laser Printer 2
17 rows selected.
Need at least 10.2.0.3 for performance, i.e. to avoid COLLECTION ITERATOR PICKLER FETCH in the execution plan...
On 10.2.0.1:
Execution Plan
Plan hash value: 3741473841
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 24504 | 89M| 873 (1)| 00:00:11 |
| 1 | NESTED LOOPS | | 24504 | 89M| 873 (1)| 00:00:11 |
| 2 | NESTED LOOPS | | 3 | 11460 | 805 (1)| 00:00:10 |
| 3 | TABLE ACCESS FULL | CUST_ORDER_TBL | 1 | 3777 | 3 (0)| 00:00:01 |
|* 4 | INDEX RANGE SCAN | SYS_IOT_TOP_774117 | 3 | 129 | 1 (0)| 00:00:01 |
| 5 | COLLECTION ITERATOR PICKLER FETCH| XMLSEQUENCEFROMXMLTYPE | | | | |
Predicate Information (identified by operation id):
4 - access("NESTED_TABLE_ID"="CUST_ORDER_TBL"."SYS_NC0000900010$")
filter("SYS_NC_TYPEID$" IS NOT NULL)
Note
- dynamic sampling used for this statement
On 10.2.0.3:
Execution Plan
Plan hash value: 1048233240
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 17 | 132K| 839 (0)| 00:00:11 |
| 1 | NESTED LOOPS | | 17 | 132K| 839 (0)| 00:00:11 |
| 2 | MERGE JOIN CARTESIAN | | 17 | 131K| 805 (0)| 00:00:10 |
| 3 | TABLE ACCESS FULL | CUST_ORDER_TBL | 1 | 3781 | 3 (0)| 00:00:01 |
| 4 | BUFFER SORT | | 17 | 70839 | 802 (0)| 00:00:10 |
|* 5 | INDEX FAST FULL SCAN| SYS_IOT_TOP_56154 | 17 | 70839 | 802 (0)| 00:00:10 |
|* 6 | INDEX UNIQUE SCAN | SYS_IOT_TOP_56152 | 1 | 43 | 2 (0)| 00:00:01 |
|* 7 | INDEX RANGE SCAN | SYS_C006701 | 1 | | 0 (0)| 00:00:01 |
Predicate Information (identified by operation id):
5 - filter("SYS_NC_TYPEID$" IS NOT NULL)
6 - access("SYS_NTpzENS1H/RwSSC7TVzvlqmQ=="."NESTED_TABLE_ID"="SYS_NTnN5b8Q+8Txi9V
w5Ysl6x9w=="."SYS_NC0000600007$")
filter("SYS_NC_TYPEID$" IS NOT NULL AND
"NESTED_TABLE_ID"="CUST_ORDER_TBL"."SYS_NC0000900010$")
7 - access("SYS_NTpzENS1H/RwSSC7TVzvlqmQ=="."NESTED_TABLE_ID"="SYS_NTnN5b8Q+8Txi9V
w5Ysl6x9w=="."SYS_NC0000600007$")
Note
- dynamic sampling used for this statement
-- CLEAN UP
DROP TABLE CUST_ORDER_TBL purge;
exec dbms_xmlschema.deleteschema('http://xmlns.example.org/xsd/testcase.xsd'); -
How to improve performance of my query
Hello Friends,
Good Morning.
I have the following query, which is never-ending. Can anyone throw some light on how to improve the performance of this query? This is the query generated in ODI (Oracle Data Integrator 11g).
The only thing I can put in this query is optimizer hints.
- issue resolved
Please advice .
Thanks / Kumar
Edited by: kumar73 on May 18, 2012 6:38 AM
Edited by: kumar73 on May 18, 2012 6:39 AM
Edited by: kumar73 on May 18, 2012 12:04 PM
The two DISTINCTs are redundant, as UNION already returns unique records; a set can't have duplicates.
Other than that the query is not formatted and unreadable, and you didn't provide a description of the tables involved.
Your strategy seems to be maximum help from this forum with minimum effort from yourself, other than hitting copy and paste.
Sybrand Bakker
Senior Oracle DBA