Performance of resource view using domain index XDBHI_IDX
Hello all,
We have a home-grown XML DB application for doing schema validation against incoming messages. The application issues several queries like the one below. I know it hard parses, which is very bad, but that's not where the real performance problem is right now. The problem is with the domain index XDBHI_IDX on the xdb$resource table. We use a very specific path that should return 0 or 1 rows, but as you can see the domain index produces an intermediate result of 48041 rows, so the index is not effective here. Statistics are up to date and the index has been rebuilt. Does anybody know what we can do with the domain index to make it do what it should do: identify the one XML document at that path?
SELECT 0
FROM
resource_view WHERE any_path=
'/home/app/incoming/ARCH_IN/2012/01/592174'
call count cpu elapsed disk query current rows
Parse 2 0.02 0.02 0 1041 0 0
Execute 2 0.00 0.00 0 0 0 0
Fetch 2 81.26 85.43 0 202556 576158 0
total 6 81.28 85.45 0 203597 576158 0
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 67 (app) (recursive depth: 3)
Rows Row Source Operation
0 TABLE ACCESS BY INDEX ROWID XDB$RESOURCE (cr=341531 pr=0 pw=0 time=60883725 us)
48041 DOMAIN INDEX XDBHI_IDX (cr=311030 pr=0 pw=0 time=1548774 us)
Regards,
Rob.
PS: Database version is 10.2.0.3.0
Hi odie and Marco,
Thanks for your tips. Using equals_path instead of "=" did the trick indeed. Here are some results from our test system to prove it.
Old query:
SQL> SELECT /*+ gather_plan_statistics */ 0
2 FROM
3 resource_view WHERE any_path=
4 '/home/app/incoming/ARCH_IN/2012/01/592174'
5 /
no rows selected
SQL> select *
2 from table(dbms_xplan.display_cursor(null,null,'allstats last'))
3 /
PLAN_TABLE_OUTPUT
SQL_ID 0cnfnzqc2k5bd, child number 0
SELECT /*+ gather_plan_statistics */ 0 FROM resource_view WHERE any_path=
'/home/app/incoming/ARCH_IN/2012/01/592174'
Plan hash value: 2859544236
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
|* 1 | TABLE ACCESS BY INDEX ROWID| XDB$RESOURCE | 1 | 1 | 0 |00:00:00.15 | 12104 |
|* 2 | DOMAIN INDEX | XDBHI_IDX | 1 | | 2590 |00:00:00.10 | 10576 |
Predicate Information (identified by operation id):
1 - filter("XDB"."ABSPATH"(9999)='/home/app/incoming/ARCH_IN/2012/01/592174')
2 - access("XDB"."UNDER_PATH"("P"."SYS_NC00033$",'/',9999)=1)
20 rows selected.

New query:
SQL> SELECT /*+ gather_plan_statistics */ 0
2 FROM
3 resource_view
4 where equals_path(res, '/home/app/incoming/ARCH_IN/2012/01/592174') = 1
5 /
no rows selected
SQL> select *
2 from table(dbms_xplan.display_cursor(null,null,'allstats last'))
3 /
PLAN_TABLE_OUTPUT
SQL_ID b8y3sra82yj7u, child number 0
SELECT /*+ gather_plan_statistics */ 0 FROM resource_view where
equals_path(res, '/home/app/incoming/ARCH_IN/2012/01/592174') = 1
Plan hash value: 3533888242
| Id | Operation | Name | Starts | A-Rows | A-Time | Buffers |
|* 1 | DOMAIN INDEX | XDBHI_IDX | 1 | 0 |00:00:00.01 | 2 |
Predicate Information (identified by operation id):
1 - access("XDB"."EQUALS_PATH"("P"."SYS_NC00033$",'/home/app/incoming
/ARCH_IN/2012/01/592174')=1)Regards,
Rob.
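For readers hitting the same problem: the path-based operators let the hierarchical index resolve the path directly, instead of computing the absolute path for every resource and filtering afterwards. A hedged sketch of both forms (same path as above; the UNDER_PATH query for subtree listings is my addition, not from the thread):

```sql
-- Exact match on one resource: the domain index resolves the path itself
SELECT 0
  FROM resource_view
 WHERE equals_path(res, '/home/app/incoming/ARCH_IN/2012/01/592174') = 1;

-- Listing a subtree works the same way with UNDER_PATH
SELECT any_path
  FROM resource_view
 WHERE under_path(res, '/home/app/incoming/ARCH_IN/2012/01') = 1;
```

The difference shown in the plans above is exactly this: `any_path = '...'` forces an ABSPATH computation and filter per row, while `equals_path` becomes an access predicate on XDBHI_IDX.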
Similar Messages
-
Domain index query takes 12 hours to execute
Hi Friends,
My query which uses domain index takes 12 hours to execute. Can you please help me tuning this query?
select /*+ NO_UNNEST ORDERED index_ffs(Term idx_recanon_term_ysm1) parallel_index(Term, idx_recanon_term_ysm1, 8) */ term.rowid
from cmpgn.recanon_search_terms,cmpgn.recanon_term_ysm Term
where cmpgn.recanon_search_terms.search_type=3 and
contains(Term.RAW_TERM_TEXT,cmpgn.recanon_search_terms.search_text) > 0 and
Term.pod_id=11
Thanks in advance.
Regards
Bala
First, your driving table is recanon_search_terms to get the required search terms; then you use them to query Oracle Text. What you are trying to do is get a subset of table Term first and then run every individual row of table Term against the search terms. This approach will take a very long time.
I'm not sure exactly what you want to do, but it looks like something I have worked on previously. As far as I can see, there is a table with controlled terms which are matched against other raw terms in a text document using Oracle Text. The issue is that you do a join in the CONTAINS clause without knowing the number of query expressions formed. It can be that 7 thousand individual queries run at once, which then use the indexes of the other predicates, do some sorting, etc., which would explain the long runtime. It will probably run out of memory, causing all sorts of issues.
As a quick fix, first try the following statement without hints.
Tell us how many rows you get, the distinct counts for pod_id and search_type and how long it takes.
CREATE TABLE test_table AS
select term.pod_id pod_id, cmpgn.recanon_search_terms.search_type search_type, term_primary_key, ....
from cmpgn.recanon_search_terms,cmpgn.recanon_term_ysm Term
where contains(Term.RAW_TERM_TEXT,cmpgn.recanon_search_terms.search_text) > 0
Create index test_idx on test_table(pod_id, search_type) and use test_table instead to get the results, just by providing pod_id and search_type without the join in the contains clause.
SELECT ..
FROM test_table
WHERE pod_id = X
AND search_type = Y
Maybe this approach is sufficient for your purpose; for sure, it will give you instant results. In that case a materialized view instead of the table could work for maintenance reasons; I had some issues with materialized views for the above scenario.
However, check the results very carefully. I would have some doubts that all rows in search_text form a valid query expression for Oracle Text. If search_text has just single tokens or phrases, wrapping curly brackets around them will probably resolve the issue.
Think about forming one query expression through a function call instead of a table join inside the CONTAINS clause. Sometimes running a set of individual queries is faster than one big query.
select term.rowid --, form_query_expression(3) query_expression
from cmpgn.recanon_term_ysm Term
where contains(Term.RAW_TERM_TEXT, form_query_expression(3)) > 0
The above function will form one valid Oracle Text query expression by using the table recanon_search_terms inside the function. This approach normally helps, at least in debugging and fine-tuning. Avoid using bind variables at first, in order to identify a highly skewed distribution of search_type.
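A minimal sketch of such a function, assuming the table layout implied by the thread (the OR-concatenation and curly-brace quoting are my assumptions, not the poster's code):

```sql
CREATE OR REPLACE FUNCTION form_query_expression (p_search_type IN NUMBER)
  RETURN VARCHAR2
IS
  v_expr VARCHAR2(32767);
BEGIN
  -- OR the individual search terms together; curly braces make Oracle Text
  -- treat each term as a literal token/phrase, escaping special characters
  FOR r IN (SELECT search_text
              FROM cmpgn.recanon_search_terms
             WHERE search_type = p_search_type)
  LOOP
    IF v_expr IS NOT NULL THEN
      v_expr := v_expr || ' OR ';
    END IF;
    v_expr := v_expr || '{' || r.search_text || '}';
  END LOOP;
  RETURN v_expr;
END;
/
```

Note that a CONTAINS query expression has a length limit, so a very large term set would still need to be batched into several calls.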
The other performance issue is the additional predicate pod_id = X; here the suggestion from radord works very well. If you want to get your hands dirty, have a look at user_datastore in the Oracle Text documentation; this will give you all the freedom you want. -
Spatial Query not using Spatial Index
Hi All,
I have a query which uses the SDO_WITHIN_DISTANCE operator, but is taking far too long to complete.
SELECT
RT.*,RD.RPD_NODE_ID, RD.RPD_XCOORD,RD.RPD_YCOORD
FROM
railplan_data RD
LEFT JOIN Walk_data_sets WDS ON RD.RPD_RPS_ID = WDS.WDS_RPS_ID
LEFT JOIN RWNet_Temp RT ON WDS.WDS_ID = RT.RW_WDS_ID
WHERE
WDS.wds_id = 441
AND
MDSYS.SDO_WITHIN_DISTANCE(RT.RW_GEOM,RD.RPD_GEOLOC,'DISTANCE=' || TO_CHAR(RT.RW_BUFFER) || ' UNIT=METER') = 'TRUE';
Upon generation of the explain plan I have realised that the spatial index is not being used in the query, but I can't for the life of me get the thing working
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 25841 | 99M| | 201 |
|* 1 | FILTER | | | | | |
| 2 | MERGE JOIN OUTER | | | | | |
|* 3 | HASH JOIN | | 12652 | 420K| 2968K| 185 |
| 4 | TABLE ACCESS FULL | RAILPLAN_DATA | 75910 | 2075K| | 60 |
| 5 | TABLE ACCESS BY INDEX ROWID| WALK_DATA_SETS | 1 | 6 | | 1 |
|* 6 | INDEX UNIQUE SCAN | WDS_PK | 1 | | | |
|* 7 | SORT JOIN | | 16 | 63760 | | 16 |
|* 8 | TABLE ACCESS FULL | RWNET_TEMP | 16 | 63760 | | 4 |
If anyone could help me out in figuring out why the spatial index is not being used, I would be most appreciative.
TIA
Dan
Hi all again,
Well I finally got an upgrade to Oracle 10 (yay!), so I am now trying to implement the SDO_JOIN method as per my earlier posts. In fact it is actually working, but I have a question. When I run an explain plan it does not show the use of any domain indexes which I would expect to see, but performs fine (1.07s) with just a few records (10 in 1 table, 15000 in the other), please see code and explain plan below:
SELECT
Distinct
RT.RW_ID, RD.RPD_NODE_ID,
RD.RPD_XCOORD,RD.RPD_YCOORD
FROM
RPD_TEMP_762 RD,
WALK_DATA_SETS WDS,
RWNET_TEMP RT,
TABLE
(SDO_JOIN
( 'RWNET_TEMP',
'RW_GEOM',
'RPD_TEMP_762',
'RPD_GEOLOC',
'distance= ' || TO_CHAR(RT.RW_BUFFER) || ' unit=meter')) SPATIAL_JOIN_RESULT
WHERE WDS.WDS_ID = RT.RW_WDS_ID
AND WDS.WDS_ID = 762
AND SPATIAL_JOIN_RESULT.ROWID1 = RT.ROWID
AND SPATIAL_JOIN_RESULT.ROWID2 = RD.ROWID
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
| 0 | SELECT STATEMENT | | 74 | 5994 | 21753 (1)|
| 1 | SORT UNIQUE | | 74 | 5994 | 21691 (1)|
|* 2 | HASH JOIN | | 1046K| 80M| 1859 (1)|
| 3 | NESTED LOOPS | | 6076 | 213K| 1824 (1)|
| 4 | NESTED LOOPS | | 74 | 2516 | 194 (1)|
|* 5 | INDEX UNIQUE SCAN | WDS_PK | 1 | 4 | 0 (0)|
|* 6 | TABLE ACCESS FULL | RWNET_TEMP | 74 | 2220 | 194 (1)|
|* 7 | COLLECTION ITERATOR PICKLER FETCH| SDO_JOIN | | | |
| 8 | TABLE ACCESS FULL | RPD_TEMP_762 | 17221 | 756K| 28 (0)|
------------------------------------------------------------------------------------------
When I try to add hints to force the use of spatial indexes, the performance of this query drops through the floor (it takes minutes/hours); the index hint is shown below:
/*+ ORDERED INDEX(RW rw_geom) INDEX(RD rpd_geoloc) */
My question is: is the first query using domain indexes, and if not, how do I get it to?
TIA
Dan -
Peformance tuning of query using bitmap indexes
Hello guys
I just have a quick question about tuning the performance of sql query using bitmap indexes..
Currently, there are 2 tables, date and fact. The fact table has about 1 billion rows and the date dim has about 1 million. These 2 tables are joined by 2 columns:
Date.Dateid = Fact.snapshot.dates and Date.companyid = fact.companynumber
I have query that needs to be run as the following:
Select dates.dayofweek, dates,dates, fact.opened_amount from dates, facts
where Date.Dateid = Fact.snapshot.dates and Date.companyid = fact.companynumber and dates.dayofweek = 'monday'.
Currently this query is running forever. I think it is the joining that takes a lot of time. I have created a bitmap index on the dayofweek column because it has few distinct values, but it didn't seem to speed up the performance.
I'd like to know what other indexes would be helpful. I am thinking of creating another one for companynumber since it also has few distinct values.
Currently the query is being generated by frontend tools like OBIEE, so I can't change the sql, nor can I purge data or create a smaller table; I have to work with what I have.
So please let me know your thoughts in terms of performance tunings.
Thanks
The explain plan is:
Row cost Bytes
Select statement optimizer 1 1
nested loops 1 1 299
partition list all 1 0 266
index full scan RD_T.PK_FACTS_SNPSH 1 0 266
TABLE ACCESS BY INDEX ROWID DATES_DIM 1 1 33
INDEX UNIQUE SCAN DATES_DIM_DATE 1 1
There are no changes nor wait states, but the query is taking 18 mins to return results. When it does, it returns 1 billion rows, which is the same number of rows as the fact table... (strange?)
That's not a bitmap plan. Plans using bitmaps should have steps indicating bitmap conversions; this plan is listing ordinary b-tree index access. The rows and bytes on the plan have to be incorrect for the volume of data you suggested. (1 row instead of 1B?????)
What version of the data base are you using?
What is your partition key?
Are the partitioned table indexes global or local? Is the partition key part of the join columns, and is it indexed?
Analyze the tables and all indexes (use dbms_stats) and see if the statistics get better. If that doesn't work try the dynamic sampling hint (there is some overhead for this) to get statistics at runtime.
I have seen stats like the ones you listed appear in 10g myself.
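A hedged sketch of the DBMS_STATS call suggested above (the schema and table names are placeholders; the granularity and cascade settings are my assumptions for a partitioned fact table):

```sql
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'APP',                          -- placeholder schema
    tabname          => 'FACTS',                        -- placeholder table
    granularity      => 'ALL',                          -- global + partition-level stats
    cascade          => TRUE,                           -- gather index statistics too
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
END;
/
```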
Edited by: riedelme on Oct 30, 2009 10:37 AM -
Ora-29886 feature not supported for domain indexes ??
Could anyone tell me the reason for the following error
ora-29886 feature not supported for domain indexes
What are domain indexes ..??
Thanks in advance ..
It would have been better if you had posted the statement that caused the error.
If you are using something like MERGE INTO, it is not supported with domain indexes. The workaround is to complete your insert with individual insert statements, or drop the domain index before the insert and recreate the index after the insert.
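The MERGE workaround described above might look like this (all table and column names are hypothetical):

```sql
-- Update the rows that already exist...
UPDATE docs d
   SET d.body = (SELECT s.body FROM staging s WHERE s.id = d.id)
 WHERE EXISTS (SELECT 1 FROM staging s WHERE s.id = d.id);

-- ...then insert the rows that don't, instead of a single MERGE,
-- so the domain index is maintained row by row
INSERT INTO docs (id, body)
SELECT s.id, s.body
  FROM staging s
 WHERE NOT EXISTS (SELECT 1 FROM docs d WHERE d.id = s.id);
```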
Domain indexes are built for specific applications (specific domain) like Oracle text, Oracle Spatial etc. So depending on what application you are running, you might be using domain indexes. You create domain indexes as you create b-tree indexes, but the difference is that you have to define the INDEXTYPE.
You can find domain indexes in DBA_SECONDARY_OBJECTS. Find the index on the table you are using, then check the definition of the index and see what it looks like. -
View the Resource Performance in Resource Center
Hello.
I use ProjectServer 2013.
I always check the resource's capacity and plan in Resource center.
But There is no information about resource performance in Resource Center.
Is it possible to view the all resource's performance(actual work) in Resource center ?
if possible, which parameter should be changed in server settings?
Have a good time
---- Yoon -----Hi,
after selecting a resource in Resource Center, use "Resource Assignments" in the ribbon. In Resource Assignments view, use "Timephase Data" (second button from left).
This will give you information about Actual Work.
Hope that helps?
Barbara
How to increase the performance of a domain index?
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
I am using a dynamic query which uses a domain index for a clob column. The table has 1021487 records. How can i increase the performance of a select query based on this table? This is the only part where my application gets slowed down.
The query is
SELECT :search_id, 100, :item_type, c.search_college, c.search_colname, TRUNC(dbms_random.value(900000000000, 999999999999)) rand, COUNT(1), c.college_rating, min(DECODE(c.search_postcode, NULL, 99999, NULL)) search_distance FROM w_search_text c WHERE c.search_course_type is not null
and (contains(c.search_text,:search_text,2) > 0)
AND c.search_item_type = :item_bind
GROUP BY c.search_college, c.search_colname, c.college_rating
ORDER BY c.search_college
here c.search_text is the domain indexed column?
Is there a way to tune it?
Ram
Here c.search_text is the clob column.
i am using 4 bind variables and they are search id : 123456
item_type : O
search_text : French
item_bind : Z
I tried explain plan and here are the results
Plan
SELECT STATEMENT ALL_ROWSCost: 5 Bytes: 38,280 Cardinality: 264
3 SORT GROUP BY Cost: 5 Bytes: 38,280 Cardinality: 264
2 TABLE ACCESS BY INDEX ROWID TABLE HOT_ADMIN.W_SEARCH_TEXT_LIVE2 Cost: 4 Bytes: 38,280 Cardinality: 264
1 DOMAIN INDEX INDEX (DOMAIN) HOT_ADMIN.W_SRCH_TXT_IDX2 Cost: 4
It's using the domain index, but the column is heavily loaded with data... so I think that may be the reason. Are there any HINTS in particular to tune domain indexes?
Ram -
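For what it's worth, a domain index can be named in an ordinary INDEX hint, and a FIRST_ROWS(n) hint is a common approach for paginated Text searches. A hedged sketch using the table and index names from the plan above (a hint alone won't fix a selectivity problem):

```sql
SELECT /*+ FIRST_ROWS(100) INDEX(c w_srch_txt_idx2) */
       c.search_college, c.search_colname, c.college_rating
  FROM w_search_text c
 WHERE contains(c.search_text, :search_text, 2) > 0
   AND c.search_item_type = :item_bind;
```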
Indexed views using indexes on base table
Hi all,
CREATE VIEW Sales.vOrders
WITH SCHEMABINDING
AS
SELECT SUM(UnitPrice*OrderQty*(1.00-UnitPriceDiscount)) AS Revenue,
OrderDate, ProductID, COUNT_BIG(*) AS COUNT
FROM Sales.SalesOrderDetail AS od, Sales.SalesOrderHeader AS o
WHERE od.SalesOrderID = o.SalesOrderID
GROUP BY OrderDate, ProductID;
GO
--Create an index on the view.
CREATE UNIQUE CLUSTERED INDEX IDX_V1
ON Sales.vOrders (OrderDate, ProductID);
GO
--This query can use the indexed view even though the view is
--not specified in the FROM clause.
SELECT SUM(UnitPrice*OrderQty*(1.00-UnitPriceDiscount)) AS Rev,
OrderDate, ProductID
FROM Sales.SalesOrderDetail AS od
JOIN Sales.SalesOrderHeader AS o ON od.SalesOrderID=o.SalesOrderID
AND ProductID BETWEEN 700 and 800
AND OrderDate >= CONVERT(datetime,'05/01/2002',101)
GROUP BY OrderDate, ProductID
ORDER BY Rev DESC;
In the above code block, Sales.SalesOrderDetail and Sales.SalesOrderHeader are base tables.
Say suppose there are some indexes on some of the columns of these base tables. Are these indexes used when we write a query in which indexed view is mentioned
in the from clause?
Thanks, Srikar
Since it's an indexed view, it won't use the indexes on the base tables when you use it in a query: an indexed view is persisted and exists as a physical object, so it doesn't require the definition to be substituted and the data to be retrieved from the base objects.
The indexes will come in handy while populating the indexed view.
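Conversely, on editions where the optimizer doesn't match indexed views automatically, the view's own clustered index can be forced with the NOEXPAND hint; a T-SQL sketch using the view from the question:

```sql
-- Read the view's clustered index directly, not the base tables
SELECT Revenue, OrderDate, ProductID
FROM Sales.vOrders WITH (NOEXPAND)
WHERE ProductID BETWEEN 700 AND 800
ORDER BY Revenue DESC;
```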
Visakh
Is it NOT possible to create indexes on views using SSMS?
I'm looking at:
http://www.mssqltips.com/sqlservertip/1610/sql-server-schema-binding-and-indexed-views/
The thing is I'm not able to find that index dialogue ANYWHERE. When I google I keep coming up with examples of how to create indexes using TSQL. Is there no way to create an index on a view with SSMS?
Using SQL Server 2008 R2.
Expand Views -> expand the view -> right-click Indexes -> New Index
-
Use CONTEXT index on mview or use mview's rewrite directly
Please let me explain what I mean in the subject. I have 9 tables. Each of these tables has about 40,000 rows, and one table has 2 million rows. Using the first approach, I can build a join-only materialized view on top of the nine tables' mview logs, then query those nine tables directly. The advantage of doing that is query rewrite.
For the second approach, I build a CONTEXT index on several columns of the mview, and then query through the CONTEXT index on the mview directly. This is pretty much like what Barbara did with CREATE INDEX book_idx ON book_search
[http]Indexing multiple columns of multiple tables using CTXCAT
but she used CTXCAT instead of CONTEXT index.
My question is: will the second approach be better than the first, and why? Unlike a basic join of several tables, which gives you predictable performance, I often feel that CONTEXT index performance is unpredictable when tables have more than several thousand rows (maybe I did something wrong, but I'm still looking for documentation regarding performance).
I would appreciate it if someone could show hints on the issue.
Message was edited by: qwe15933
The best method to find out what is best for any individual situation is to test and compare. In general, Oracle Text is best at searching unstructured data, such as large documents, and it has a lot of features that enable you to do different kinds of searches. If you only have structured data, with each column containing only one short thing, and you only need simple searches, then you probably don't need Text. There are also a few things that can be done to indexes and/or queries to optimize performance. It would help to have an idea of what your typical data is like and what sorts of searches you anticipate needing.
-
Hi all,
i'm using Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 on Windows.
I created this table
create table PERSISTENT_COMPOSITION (
COMPOSITION_ID NUMBER(19) not null,
XML_CONTENT SYS.XMLTYPE not null
)
and filled it with more or less 1,000,000 records (that is, 1,000,000 XML documents loaded into XML_CONTENT).
Then first of all i tested it with a simple query just like the following:
SELECT *
FROM PERSISTENT_COMPOSITION t
WHERE existsNode(t.xml_content, '/composition/archetype_details/archetype_id[value="openEHR-EHR-COMPOSITION.composition_test.v1"]') = 1;
obtaining the expected result: 50,000 records found.
Now, in order to improve query performance, I created a CTXXPATH index as follows:
CREATE INDEX IDX#COMP_CTXXPATH ON PERSISTENT_COMPOSITION(XML_CONTENT) INDEXTYPE IS CTXSYS.CTXXPATH;
Then I tested the new performance using exactly the same query shown above... and here comes the problem: the query returns NO RESULT! No record was found! I looked at the query execution plan and it uses the created index IDX#COMP_CTXXPATH... but no record could be found...
I thought it could be a matter of namespaces: in fact the loaded XML documents have an xmlns set, so I changed the query as follows:
SELECT *
FROM persistent_composition t
WHERE existsNode(t.xml_content,
'/composition/archetype_details/archetype_id[value="openEHR-EHR-COMPOSITION.composition_test.v1"]',
'xmlns="http://this.is.an.xmlns.url.org/v1"') = 1
and surprise: I obtained my 50,000 results just like before BUT, looking at the query execution plan, the IDX#COMP_CTXXPATH index HASN'T BEEN USED!!!
I really don't understand why using the IDX#COMP_CTXXPATH i get no result....can someone help me?
Thank you very much
P.S: i tried using ANALYZE (both on index and on table), CTX_DDL.sync_index and CTX_DDL.optimize_index but got no result..
Edited by: user11295548 on 29-Jun-2009 5.47
Besides following Mark's advice (and I could be mistaken regarding this in combination with domain indexes), you should NOT use ANALYZE anymore in an Oracle 10 environment. Instead use DBMS_STATS. It's more flexible.
http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_4005.htm#SQLRF01105
Note:
Do not use the COMPUTE and ESTIMATE clauses of ANALYZE to collect optimizer statistics.
These clauses are supported for backward compatibility.
Instead, use the DBMS_STATS package, which lets you collect statistics in parallel,
collect global statistics for partitioned objects, and fine tune your statistics collection
in other ways. The optimizer, which depends upon statistics, will eventually use only
statistics that have been collected by DBMS_STATS.
See PL/SQL Packages and Types Reference for more information on the
DBMS_STATS package. You must use the ANALYZE statement (rather than
DBMS_STATS) for statistics collection not related to the cost-based optimizer, such as:
- To use the VALIDATE or LIST CHAINED ROWS clauses
- To collect information on freelist blocks -
Why isn't the CBO using my indexes?
Why isn't Oracle using my indexes to join 2 big tables? I ran statistics last night before kicking off this job. I don't want to have to use hints.
STAGING_TXN_081 has Primary Key on VSYS_STAGE_ROW_ID
TRANSACTION has an index on STAGING_RECORD_ID
Record counts are as follows:
SQL> select count(*) from STAGING_TXN_081;
COUNT(*)
613071
1* select distinct count(staging_record_id) from transaction
SQL> /
COUNT(STAGING_RECORD_ID)
10,662,828
1* select distinct count(*) from transaction where staging_record_id is null
SQL> /
COUNT(*)
1,150,819
So 1,150,819 / 10,662,828 = approximately 10.8% of the rows are null.
This is the Query
select
st.*,
rlog.reject_code as rlogRejectCode
from
STAGING_TXN_081 st
join transaction t
on st.VSYS_STAGE_ROW_ID = t.STAGING_RECORD_ID
left outer join txn_reject_log rlog
on t.transaction_id = rlog.transaction_id
where
not exists
(select 1
from
rl_loyalty_txn_trans rl
join loyalty_txn lt
on lt.loyalty_txn_id=rl.loyalty_txn_id
where
rl.transaction_id=t.transaction_id)
order by st.VSYS_STAGE_ROW_ID, rlog.reject_code
Here is the execution plan.
Execution Plan
Plan hash value: 447420266
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 223 | | 144K (6)| 00:29:00 |
| 1 | SORT ORDER BY | | 1 | 223 | | 144K (6)| 00:29:00 |
| 2 | NESTED LOOPS OUTER | | 1 | 223 | | 144K (6)| 00:29:00 |
|* 3 | HASH JOIN ANTI | | 1 | 203 | 121M| 144K (6)| 00:29:00 |
| 4 | VIEW | | 613K| 114M| | 84466 (7)| 00:16:54 |
|* 5 | HASH JOIN | | 613K| 71M| 72M| 84466 (7)| 00:16:54 |
| 6 | TABLE ACCESS FULL | STAGING_TXN_081 | 613K| 65M| | 3237 (5)| 00:00:39 |
|* 7 | TABLE ACCESS FULL | TRANSACTION | 10M| 111M| | 65229 (7)| 00:13:03 |
| 8 | VIEW | VW_SQ_1 | 10M| 69M| | 44389 (6)| 00:08:53 |
|* 9 | HASH JOIN | | 10M| 197M| 192M| 44389 (6)| 00:08:53 |
| 10 | INDEX FAST FULL SCAN | PK_LOYALTY_TXN | 10M| 71M| | 7314 (7)| 00:01:28 |
| 11 | INDEX FAST FULL SCAN | PK_RL_LOYALTY_TXN_TRANS | 10M| 128M| | 13947 (4)| 00:
| 12 | TABLE ACCESS BY INDEX ROWID| TXN_REJECT_LOG | 1 | 20 | | 3 (0)| 00:00:01 |
|* 13 | INDEX RANGE SCAN | VIDX_309 | 1 | | | 2 (0)| 00:00:01 |
Predicate Information (identified by operation id):
3 - access("TRANSACTION_ID"="from$_subquery$_003"."TRANSACTION_ID")
5 - access("ST"."VSYS_STAGE_ROW_ID"="T"."STAGING_RECORD_ID")
7 - filter("T"."STAGING_RECORD_ID" IS NOT NULL)
9 - access("RL"."LOYALTY_TXN_ID"="LT"."LOYALTY_TXN_ID")
13 - access("T"."TRANSACTION_ID"="RLOG"."TRANSACTION_ID"(+))
Statistics
467 recursive calls
0 db block gets
324842 consistent gets
249318 physical reads
0 redo size
955 bytes sent via SQL*Net to client
240 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
0 rows processed
Hi,
Take these points in consideration
- STAGING_TXN_081 has to be full-table scanned, as there is NO filter on it
- if all the matching 613071 rows from the TRANSACTION table are in different blocks, it might be equivalent to reading the full TRANSACTION table (Oracle reads blocks, not rows)
- if we are reading all blocks from the TRANSACTION table anyway, why waste resources reading the index blocks
- How can we tell Oracle that the required TRANSACTION records are not located like 'each row in a different block'??? if that is really the case !!!
- Your 'not exists' clause: how many records from the TRANSACTION table will be filtered out because of it?
- If the 'not exists' takes out a huge number of rows, I would suggest creating an in-line view of the TRANSACTION table and the 'not exists' logic and then joining that with the STAGING*** table
- Try to minimise the number of rows taking part in these joins (I know, I am not telling you anything new here !!!)
Cheers -
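The in-line view suggestion could be sketched like this (names taken from the query above; whether it helps depends on how selective the NOT EXISTS really is):

```sql
SELECT st.*, rlog.reject_code AS rlogRejectCode
  FROM STAGING_TXN_081 st
  JOIN (SELECT t.transaction_id, t.staging_record_id
          FROM transaction t
         WHERE NOT EXISTS (SELECT 1
                             FROM rl_loyalty_txn_trans rl
                             JOIN loyalty_txn lt
                               ON lt.loyalty_txn_id = rl.loyalty_txn_id
                            WHERE rl.transaction_id = t.transaction_id)) t
    ON st.VSYS_STAGE_ROW_ID = t.staging_record_id
  LEFT OUTER JOIN txn_reject_log rlog
    ON t.transaction_id = rlog.transaction_id
 ORDER BY st.VSYS_STAGE_ROW_ID, rlog.reject_code;
```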
I am running explain plan for queries using a domain index, i.e. an Oracle Text 'contains' clause. The usage of the domain index appears in the plan OK, but I am interested in seeing the underlying access to the DR$...$I etc. tables and associated indexes belonging to ctxsys. Is it possible to see this access?
Basically I have all the objects created by the domain index in their own tablespace using a storage preference, but I am wondering if the sub-indexes should be moved out from the sub-tables for better performance? Any guidance is appreciated. -
Hi,
I've got a report done in discoverer desktop and I want to delivery it to the end users through the web.
I know that Discoverer Plus is for creating reports like Desktop, and Discoverer Viewer is for viewing reports, but Viewer doesn't give me response times as good as Plus does.
Is it normal that Plus is faster than Viewer by far? (Plus takes about 6 sec while the same drilldown in Viewer takes about 45 sec; I've noticed that running the query in Viewer takes about 5 secs and building the sheets about 40 sec.) What can I do to improve that?
Also, I have tuned the report: I created a fact table, all calculations are done on the table's load, it has indexes, and I checked the SQL inspector and it is in fact using the indexes.
Any tip will be really helpfull
Regards
Mario A Villamizar
System Analist
Bogota - Colombia
Hi Mario
No it is not right that Viewer should be that number of times slower than Plus. As I found out today, there is a bug in the 10.1.2 Viewer code that causes drills or crosstabs to take a really long time when compared to Desktop. As this poor performance was also there in 9.0.4 I can only assume that Oracle copied the offending code from one to the other.
At this moment in time there is nothing that you can tweak or set behind the scenes that will help performance. So long as your workbooks are efficient with minimum usage of crosstabs and page items this is all you can do until Oracle fix Viewer.
Oracle tell me that they have identified the issue with 10.1.2 and are in the process of getting a patch together. They also tell me that the patch will be released in a couple of months so I guess they still have some issues to resolve.
I have no idea at this stage whether a similar patch will be released for 9.0.4 although I cannot imagine that they will not do so.
Hope this information helps
Regards
Michael -
Which design is best from a performance point of view?
Hello
I'm writing a small system that needs to track changes to certain columns on 4 different tables. I'm using triggers on those columns to write the changes to a single "change register" table, which has 12 columns. Because the majority of tracked data is not shared between the tables, most of the columns will have null values. From a design point of view it is apparent that having 4 separate change register tables (one for each main table being tracked) would be better in terms of avoiding lots of null columns per row, but I was trying to trade this off against having a single table to see all changes made across the tracked tables.
From a performance point of view though, will there be any real difference whether there are 4 separate tables or 1 single register table? I'm only ever going to be inserting into the register table, and then reading back from it at a later date and there won't be any indexes on it. Someone I work with suggested that there would be more overhead on the redo logs if a single table was used rather than 4 separate tables.
Any help would be appreciated.
David
The volumes of data are going to be pretty small, maybe a couple of thousand records each day; it's an OLTP environment with 150 concurrent users max.
Consider also the growth of the data, and whether you'll move data constantly to a historical db or whether the same tables will hold an increasing number of records.
The point that my colleague raised was that multiple
inserts into a single table across multiple
transactions could cause a lot of redo contention,
but I can't see how inserting into one table from
multiple triggers would result in more redo
contention that inserting into multiple tables. The
updates that will fire the triggers are only ever
going to be single row updates, and won't normally
cause more than one trigger to fire within a single
transaction. Is this a fair assumption to make?
David
I agree with you. The only thing I would consider, instead of a redo problem, is the locking that could be generated when logs of different tables have to be put in a single table; I mean, if after inserting a log record you need to update it...
In that case, if 2 or more users have to update the same log record, you could have problems.