Interesting performance problem: query runs fast in TOAD and Reports 6i
Hi All,
I have a query which runs within 4 minutes in TOAD and also in Reports 6i, but when run through Applications it takes 3 to 4 hours to complete. This report fetches a huge amount of data; could that be the reason for the poor performance? I am unable to figure it out. I was able to avoid full table scans on the tables used, but I still have the problem.
Any suggestions please.
Thank you in advance.
Prathima
If you want to have a look at the query: this report gives a way to monitor the receipts entered for pay-on-receipt POs.
SELECT
hou.name "OPERATING_UNIT_NAME"
,glc.segment1 "UEC"
,glc.segment2 "DEPT"
,pov.vendor_name "VENDOR_NAME"
,msi.SEGMENT1 "ITEM_NUM"
,rcvs.receipt_num "RECEIPT_NUM"
,poh.segment1 "PO_NUMBER"
,pol.line_num "PO_LINE_NUM"
,por.RELEASE_NUM "RELEASE_NUMBER"
,poll.shipment_num "SHIPMENT_NUM"
,hrou.name "SHIP_TO_ORGANIZATION"
,trunc(rcv.transaction_date) "TRANSACTION_DATE"
,decode (transaction_type,'RECEIVE', 'ERS', 'RETURN TO VENDOR','RTS') "RECEIPT_TYPE"
,decode (rcv.transaction_type,'RECEIVE', 1, 'RETURN TO VENDOR', -1)* rcv.quantity "RECEIPT_QTY"
,rcv.po_unit_price "PO_UNIT_PRICE"
,decode (rcv.transaction_type,'RECEIVE', 1, 'RETURN TO VENDOR', -1)*rcv.quantity*po_unit_price "RECEIPT_AMOUNT"
,rcvs.packing_slip "PACKING_SLIP"
,poll.quantity "QUANTITY_ORDERED"
,poll.quantity_received "QUANTITY_RECEIVED"
,poll.quantity_accepted "QUANTITY_ACCEPTED"
,poll.quantity_rejected "QUANTITY_REJECTED"
,poll.quantity_billed "QUANTITY_BILLED"
,poll.quantity_cancelled "QUANTITY_CANCELLED"
,(poll.quantity_received - (poll.quantity - poll.quantity_cancelled)) "QUANTITY_OVER_RECEIVED"
,(poll.quantity_received - (poll.quantity - poll.quantity_cancelled))*po_unit_price "OVER_RECEIVED_AMOUNT"
,poh.currency_code "CURRENCY_CODE"
,perr.full_name "RECEIVER"
,perb.full_name "BUYER"
FROM
po.po_vendors pov
,po.po_headers_all poh
,po.po_lines_all pol
,po.po_line_locations_all poll
,po.po_distributions_all pod
,po.po_releases_all por
,hr.hr_all_organization_units hou
,hr.hr_all_organization_units hrou
,po.rcv_transactions rcv
,po.rcv_shipment_headers rcvs
,gl.gl_code_combinations glc
,hr.per_all_people_f perr
,hr.per_all_people_f perb
,inv.mtl_system_items_b msi
where
poh.org_id = hou.organization_id
and pov.vendor_id (+) = poh.vendor_id
and pod.po_header_id = poh.po_header_id
and pod.po_line_id = pol.po_line_id
and pod.line_location_id = poll.line_location_id
and poll.po_header_id = poh.po_header_id
and poll.po_line_id = pol.po_line_id
and pol.po_header_id = poh.po_header_id
and poh.pay_on_code like 'RECEIPT'
and pod.po_header_id = rcv.po_header_id
and pod.po_line_id = rcv.po_line_id
and pod.po_release_id = rcv.po_release_id
and pod.po_release_id = por.po_release_id
and por.po_header_id = poh.po_header_id
and hrou.organization_id = poll.ship_to_organization_id
and pod.line_location_id = rcv.po_line_location_id
and pod.po_distribution_id = rcv.po_distribution_id
and rcv.transaction_type in ('RECEIVE','RETURN TO VENDOR')
and rcv.shipment_header_id = rcvs.shipment_header_id (+)
and pod.code_combination_id = glc.code_combination_id
and rcvs.employee_id = perr.person_id
and por.agent_id = perb.person_id (+)
and perr.person_type_id = 1
and perb.person_type_id = 1
and msi.organization_id = 1 --poll.ship_to_organization_id
and msi.inventory_item_id = pol.item_id
and poh.type_lookup_code = 'BLANKET'
and hou.organization_id = nvl(:p_operating_unit,hou.organization_id)
and trunc(rcv.transaction_date) between :p_transaction_date_from and :p_transaction_date_to
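Two things are worth checking given the symptoms (fast in TOAD, slow through Applications): TOAD displays the first rows as soon as they arrive, while the concurrent program must fetch the entire result set; and TRUNC() on rcv.transaction_date prevents any index on that column from being used. A hedged sketch of the date predicate rewritten so an index could be considered (bind names as in the report; this is a suggestion, not the poster's code):

```sql
-- Instead of:
--   and trunc(rcv.transaction_date) between :p_transaction_date_from
--                                       and :p_transaction_date_to
-- compare the raw column against an adjusted range, so an index on
-- rcv.transaction_date remains usable:
and rcv.transaction_date >= :p_transaction_date_from
and rcv.transaction_date <  :p_transaction_date_to + 1
```

The two forms return the same rows when the bind values are whole dates (no time component).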
Message was edited by:
Prathima
Similar Messages
-
Performance problems when running PostgreSQL on ZFS and tomcat
Hi all,
I need help with some analysis and problem solution related to the below case.
The long story:
I'm running into some massive performance problems on two 8-way HP ProLiant DL385 G5 servers with 14 GB RAM and a ZFS storage pool in raidz configuration. The servers are running Solaris 10 x86 10/09.
The configuration of the two is pretty much the same, so the problem seems generic to the setup.
Within a non-global zone I'm running a Tomcat application (an institutional repository) connecting via localhost to a PostgreSQL database (the OS-provided version). The processor load is typically not very high, as seen below:
NPROC USERNAME SWAP RSS MEMORY TIME CPU
49 postgres 749M 669M 4,7% 7:14:38 13%
1 jboss 2519M 2536M 18% 50:36:40 5,9%
We are not 100% sure why we run into performance problems, but when it happens the application slows down and swaps out (according to below). When it settles, everything seems to return to normal. When the problem is acute, the application is totally unresponsive.
NPROC USERNAME SWAP RSS MEMORY TIME CPU
1 jboss 3104M 913M 6,4% 0:22:48 0,1%
#sar -g 5 5
SunOS vbn-back 5.10 Generic_142901-03 i86pc 05/28/2010
07:49:08 pgout/s ppgout/s pgfree/s pgscan/s %ufs_ipf
07:49:13 27.67 316.01 318.58 14854.15 0.00
07:49:18 61.58 664.75 668.51 43377.43 0.00
07:49:23 122.02 1214.09 1222.22 32618.65 0.00
07:49:28 121.19 1052.28 1065.94 5000.59 0.00
07:49:33 54.37 572.82 583.33 2553.77 0.00
Average 77.34 763.71 771.43 19680.67 0.00
Making more memory available to Tomcat seemed to worsen the problem, or at least didn't prove to have any positive effect.
My suspicion is currently focused on PostgreSQL. Turning off fsync boosted performance and made the problem appear less often.
An unofficial performance evaluation of the database with VACUUM ANALYZE took 19 minutes on the server and only 1 minute on a desktop PC. This is horrific when taking the hardware into consideration.
The short story:
I'm trying different steps but running out of ideas. We've read that the database block size and file system block size should match: PostgreSQL uses 8 KB and ZFS defaults to 128 KB. I didn't find much information on the matter, so if anyone can help, please recommend how to make this change.
Any other recommendations and ideas we could follow? We know from other installations that the above setup runs without a single problem on Linux on much smaller hardware without specific tuning. What makes Solaris in this configuration so darn slow?
Any help appreciated and I will try to provide additional information on request if needed
Thanks in advance,
Kasper
raidz isn't a good match for databases. Databases tend to require good write performance, for which mirroring works better.
Adding a pair of SSDs as a ZIL would probably also help, but chances are it's not an option for you.
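A minimal sketch of matching the dataset recordsize to PostgreSQL's 8 KB block size (the dataset name is hypothetical; note that recordsize only applies to files written after the change, so existing database files must be rewritten to benefit):

```shell
# Set an 8 KB recordsize on the dataset holding the PostgreSQL data directory
zfs set recordsize=8k tank/pgdata
zfs get recordsize tank/pgdata   # verify the new value

# Existing files keep their old record size; one way to rewrite them is a
# dump/restore cycle: dump while the database is up, stop PostgreSQL,
# re-initialize the data directory on the dataset, then restore the dump.
```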
You can change the record size with "zfs set recordsize=8k <dataset>".
It will only take effect for newly written data, not existing data. -
Hi Gurus,
We are using Reports 10g on a 10g Application Server on Solaris. We created a report on a table which has 10,000 rows. The report has 25 columns. When we run the query in TOAD, it took 12 seconds to fetch all 10,000 rows.
But when we run the report with DESTYPE=FILE and DESFORMAT=DELIMITEDDATA, it takes 5 to 8 minutes to open in Excel (we concatenate mimetype=vnd-msexcel to the end of the URL when DESTYPE=FILE). We removed the layout from the report because it was taking 10 to 15 minutes to run to screen with DESFORMAT=HTML/PDF (formatting the pages takes more time). We are wondering why the DELIMITEDDATA format takes so long, since it only runs the query.
Does RWSERVLET take more time writing the data to the physical file in the cache directory? Our cache size is 1 GB. We have 2 report servers clustered. Tracing is off.
Please advise me if there are any report server settings to boost the performance.
Thanks a lot,
Ram.
Duplicate of "Strange problem... Query runs faster, but report runs slow..." in the Reports forum.
[Thread closed] -
Performance problem when running a personalization rule
We have a serious performance problem when running a personalization rule.
The rule is defined like this:
Definition
Rule Type: Content
Content Type: LoadedData
Name: allAnnouncements
Description: all announcements of types: announcement, deal, new release,
tip of the day
If the user has the following characteristics:
And when:
Then display content based on:
(CONTENT.RessourceType == announcement) or (CONTENT.RessourceType == deal)
or (CONTENT.RessourceType == new release) or (CONTENT.RessourceType == tip
of the week)
and CONTENT.endDate > now
and CONTENT.startDate <= now
END---------------------------------
and is invoked in a JSP page like this:
<%String customQuery = "(CONTENT.language='en') && (CONTENT.Country='nl'
|| CONTENT.Country='*' ) && (!(CONTENT.excludeIds like '*#7#*')) &&
(CONTENT.userType ='retailer')"%>
<pz:contentselector
id="cdocs"
ruleSet="jdbc://com.beasys.commerce.axiom.reasoning.rules.RuleSheetDefinitio
nHome/b2boost"
rule="allAnnouncements"
sortBy="startDate DESC"
query="<%=customQuery%>"
contentHome="<%=ContentHelper.DEF_DOCUMENT_MANAGER_HOME%>" />
The customQuery is constructed at runtime from user information, and cannot
be constructed with rules
administration interface.
When I turn on debugging mode, I can see that the rule is parsed and a SQL
query is generated, with the correct parameters.
This is the generated query (with the substitutions):
select
WLCS_DOCUMENT.ID,
WLCS_DOCUMENT.DOCUMENT_SIZE,
WLCS_DOCUMENT.VERSION,
WLCS_DOCUMENT.AUTHOR,
WLCS_DOCUMENT.CREATION_DATE,
WLCS_DOCUMENT.LOCKED_BY,
WLCS_DOCUMENT.MODIFIED_DATE,
WLCS_DOCUMENT.MODIFIED_BY,
WLCS_DOCUMENT.DESCRIPTION,
WLCS_DOCUMENT.COMMENTS,
WLCS_DOCUMENT.MIME_TYPE
FROM
WLCS_DOCUMENT
WHERE
((((WLCS_DOCUMENT.ID IN (
SELECT
WLCS_DOCUMENT_METADATA.ID
FROM
WLCS_DOCUMENT_METADATA
WHERE
WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
AND WLCS_DOCUMENT_METADATA.NAME = 'RessourceType'
AND WLCS_DOCUMENT_METADATA.VALUE = 'announcement'
)) OR (WLCS_DOCUMENT.ID IN (
SELECT
WLCS_DOCUMENT_METADATA.ID
FROM
WLCS_DOCUMENT_METADATA
WHERE
WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
AND WLCS_DOCUMENT_METADATA.NAME = 'RessourceType'
AND WLCS_DOCUMENT_METADATA.VALUE = 'deal'
)) OR (WLCS_DOCUMENT.ID IN (
SELECT
WLCS_DOCUMENT_METADATA.ID
FROM
WLCS_DOCUMENT_METADATA
WHERE
WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
AND WLCS_DOCUMENT_METADATA.NAME = 'RessourceType'
AND WLCS_DOCUMENT_METADATA.VALUE = 'new release'
)) OR (WLCS_DOCUMENT.ID IN (
SELECT
WLCS_DOCUMENT_METADATA.ID
FROM
WLCS_DOCUMENT_METADATA
WHERE
WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
AND WLCS_DOCUMENT_METADATA.NAME = ''
AND WLCS_DOCUMENT_METADATA.VALUE = 'tip of the week'
)) OR (WLCS_DOCUMENT.ID IN (
SELECT
WLCS_DOCUMENT_METADATA.ID
FROM
WLCS_DOCUMENT_METADATA
WHERE
WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
AND WLCS_DOCUMENT_METADATA.NAME = 'RessourceType'
AND WLCS_DOCUMENT_METADATA.VALUE = 'press release'
AND (WLCS_DOCUMENT.ID IN (
SELECT
WLCS_DOCUMENT_METADATA.ID
FROM
WLCS_DOCUMENT_METADATA
WHERE
WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
AND WLCS_DOCUMENT_METADATA.NAME = 'endDate'
AND WLCS_DOCUMENT_METADATA.VALUE > '2001-10-22 15:53:14.768'
AND (WLCS_DOCUMENT.ID IN (
SELECT
WLCS_DOCUMENT_METADATA.ID
FROM
WLCS_DOCUMENT_METADATA
WHERE
WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
AND WLCS_DOCUMENT_METADATA.NAME = 'startDate'
AND WLCS_DOCUMENT_METADATA.VALUE <= '2001-10-22 15:53:14.768'
AND ((WLCS_DOCUMENT.ID IN (
SELECT
WLCS_DOCUMENT_METADATA.ID
FROM
WLCS_DOCUMENT_METADATA
WHERE
WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
AND WLCS_DOCUMENT_METADATA.NAME = 'language'
AND WLCS_DOCUMENT_METADATA.VALUE = 'en'
AND ((WLCS_DOCUMENT.ID IN (
SELECT
WLCS_DOCUMENT_METADATA.ID
FROM
WLCS_DOCUMENT_METADATA
WHERE
WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
AND WLCS_DOCUMENT_METADATA.NAME = 'Country'
AND WLCS_DOCUMENT_METADATA.VALUE = 'nl'
)) OR (WLCS_DOCUMENT.ID IN (
SELECT
WLCS_DOCUMENT_METADATA.ID
FROM
WLCS_DOCUMENT_METADATA
WHERE
WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
AND WLCS_DOCUMENT_METADATA.NAME = 'Country'
AND WLCS_DOCUMENT_METADATA.VALUE = '*'
AND NOT (WLCS_DOCUMENT.ID IN (
SELECT
WLCS_DOCUMENT_METADATA.ID
FROM
WLCS_DOCUMENT_METADATA
WHERE
WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
AND WLCS_DOCUMENT_METADATA.NAME = 'excludeIds'
AND WLCS_DOCUMENT_METADATA.VALUE LIKE '%#7#%' ESCAPE '\'
AND (WLCS_DOCUMENT.ID IN (
SELECT
WLCS_DOCUMENT_METADATA.ID
FROM
WLCS_DOCUMENT_METADATA
WHERE
WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
AND WLCS_DOCUMENT_METADATA.NAME = 'userType'
AND WLCS_DOCUMENT_METADATA.VALUE = 'retailer'
At this moment, the server makes the user wait more than 10 min for the
query to execute.
This is what I found out about the problem:
1)When I run the query on an Oracle SQL client (We are using Oracle 8.1.7.0)
, it takes 5-10 seconds.
2) If I remove the second term of (CONTENT.Country='nl' || CONTENT.Country='*') in the custom query, thus restricting to CONTENT.Country='nl', the performance is OK.
3)There are currently more or less 130 records in the DB that have
Country='*'
4)When I run the page on our QA server (solaris), which is at the same time
our Oracle server,
the response time is OK, but if I run it on our development server (W2K),
response time is ridiculously long.
5)The problem happens also if I add the term (CONTENT.Country='nl' ||
CONTENT.Country='*' )
to the rule definition, and I remove this part from the custom query.
Am I missing something? Am I using the personalization server correctly?
Is this performance difference between QA and DEV due to differences in the
OS?
Thank you,
Luis Muñiz
Luis,
I think you are working through Support on this one, so hopefully you are in good
shape.
For others who are seeing this same performance issue with the reference CM implementation,
there is a patch available via Support for the 3.2 and 3.5 releases that solves
this problem.
This issue is being tracked internally as CR060645 for WLPS 3.2 and CR055594 for
WLPS 3.5.
Regards,
PJL
"Luis Muniz" <[email protected]> wrote:
We have a serious performance problem when running a personalization
rule.
The rule is defined like this:
Definition
Rule Type: Content
Content Type: LoadedData
Name: allAnnouncements
Description: all announcements of types: announcement, deal, new release,
tip of the day
If the user has the following characteristics:
And when:
Then display content based on:
(CONTENT.RessourceType == announcement) or (CONTENT.RessourceType ==
deal)
or (CONTENT.RessourceType == new release) or (CONTENT.RessourceType ==
tip
of the week)
and CONTENT.endDate > now
and CONTENT.startDate <= now
END---------------------------------
and is invoked in a JSP page like this:
<%String customQuery = "(CONTENT.language='en') && (CONTENT.Country='nl'
|| CONTENT.Country='*' ) && (!(CONTENT.excludeIds like '*#7#*')) &&
(CONTENT.userType ='retailer')"%>
<pz:contentselector
id="cdocs"
ruleSet="jdbc://com.beasys.commerce.axiom.reasoning.rules.RuleSheetDefinitio
nHome/b2boost"
rule="allAnnouncements"
sortBy="startDate DESC"
query="<%=customQuery%>"
contentHome="<%=ContentHelper.DEF_DOCUMENT_MANAGER_HOME%>" />
The customQuery is constructed at runtime from user information, and
cannot
be constructed with rules
administration interface.
When I turn on debugging mode, I can see that the rule is parsed and
a SQL
query is generated, with the correct parameters.
This is the generated query (with the substitutions):
select
WLCS_DOCUMENT.ID,
WLCS_DOCUMENT.DOCUMENT_SIZE,
WLCS_DOCUMENT.VERSION,
WLCS_DOCUMENT.AUTHOR,
WLCS_DOCUMENT.CREATION_DATE,
WLCS_DOCUMENT.LOCKED_BY,
WLCS_DOCUMENT.MODIFIED_DATE,
WLCS_DOCUMENT.MODIFIED_BY,
WLCS_DOCUMENT.DESCRIPTION,
WLCS_DOCUMENT.COMMENTS,
WLCS_DOCUMENT.MIME_TYPE
FROM
WLCS_DOCUMENT
WHERE
((((WLCS_DOCUMENT.ID IN (
SELECT
WLCS_DOCUMENT_METADATA.ID
FROM
WLCS_DOCUMENT_METADATA
WHERE
WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
AND WLCS_DOCUMENT_METADATA.NAME = 'RessourceType'
AND WLCS_DOCUMENT_METADATA.VALUE = 'announcement'
)) OR (WLCS_DOCUMENT.ID IN (
SELECT
WLCS_DOCUMENT_METADATA.ID
FROM
WLCS_DOCUMENT_METADATA
WHERE
WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
AND WLCS_DOCUMENT_METADATA.NAME = 'RessourceType'
AND WLCS_DOCUMENT_METADATA.VALUE = 'deal'
)) OR (WLCS_DOCUMENT.ID IN (
SELECT
WLCS_DOCUMENT_METADATA.ID
FROM
WLCS_DOCUMENT_METADATA
WHERE
WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
AND WLCS_DOCUMENT_METADATA.NAME = 'RessourceType'
AND WLCS_DOCUMENT_METADATA.VALUE = 'new release'
)) OR (WLCS_DOCUMENT.ID IN (
SELECT
WLCS_DOCUMENT_METADATA.ID
FROM
WLCS_DOCUMENT_METADATA
WHERE
WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
AND WLCS_DOCUMENT_METADATA.NAME = ''
AND WLCS_DOCUMENT_METADATA.VALUE = 'tip of the week'
)) OR (WLCS_DOCUMENT.ID IN (
SELECT
WLCS_DOCUMENT_METADATA.ID
FROM
WLCS_DOCUMENT_METADATA
WHERE
WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
AND WLCS_DOCUMENT_METADATA.NAME = 'RessourceType'
AND WLCS_DOCUMENT_METADATA.VALUE = 'press release'
AND (WLCS_DOCUMENT.ID IN (
SELECT
WLCS_DOCUMENT_METADATA.ID
FROM
WLCS_DOCUMENT_METADATA
WHERE
WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
AND WLCS_DOCUMENT_METADATA.NAME = 'endDate'
AND WLCS_DOCUMENT_METADATA.VALUE > '2001-10-22 15:53:14.768'
AND (WLCS_DOCUMENT.ID IN (
SELECT
WLCS_DOCUMENT_METADATA.ID
FROM
WLCS_DOCUMENT_METADATA
WHERE
WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
AND WLCS_DOCUMENT_METADATA.NAME = 'startDate'
AND WLCS_DOCUMENT_METADATA.VALUE <= '2001-10-22 15:53:14.768'
AND ((WLCS_DOCUMENT.ID IN (
SELECT
WLCS_DOCUMENT_METADATA.ID
FROM
WLCS_DOCUMENT_METADATA
WHERE
WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
AND WLCS_DOCUMENT_METADATA.NAME = 'language'
AND WLCS_DOCUMENT_METADATA.VALUE = 'en'
AND ((WLCS_DOCUMENT.ID IN (
SELECT
WLCS_DOCUMENT_METADATA.ID
FROM
WLCS_DOCUMENT_METADATA
WHERE
WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
AND WLCS_DOCUMENT_METADATA.NAME = 'Country'
AND WLCS_DOCUMENT_METADATA.VALUE = 'nl'
)) OR (WLCS_DOCUMENT.ID IN (
SELECT
WLCS_DOCUMENT_METADATA.ID
FROM
WLCS_DOCUMENT_METADATA
WHERE
WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
AND WLCS_DOCUMENT_METADATA.NAME = 'Country'
AND WLCS_DOCUMENT_METADATA.VALUE = '*'
AND NOT (WLCS_DOCUMENT.ID IN (
SELECT
WLCS_DOCUMENT_METADATA.ID
FROM
WLCS_DOCUMENT_METADATA
WHERE
WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
AND WLCS_DOCUMENT_METADATA.NAME = 'excludeIds'
AND WLCS_DOCUMENT_METADATA.VALUE LIKE '%#7#%' ESCAPE '\'
AND (WLCS_DOCUMENT.ID IN (
SELECT
WLCS_DOCUMENT_METADATA.ID
FROM
WLCS_DOCUMENT_METADATA
WHERE
WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
AND WLCS_DOCUMENT_METADATA.NAME = 'userType'
AND WLCS_DOCUMENT_METADATA.VALUE = 'retailer'
At this moment, the server makes the user wait more than 10 min for the
query to execute.
This is what I found out about the problem:
1)When I run the query on an Oracle SQL client (We are using Oracle 8.1.7.0)
, it takes 5-10 seconds.
2)If I remove the second term of (CONTENT.Country='nl' ||
CONTENT.Country='*' ) in the custom query,
thus retricting to CONTENT.Country='nl', the performance is OK.
3)There are currently more or less 130 records in the DB that have
Country='*'
4)When I run the page on our QA server (solaris), which is at the same
time
our Oracle server,
the response time is OK, but if I run it on our development server (W2K),
response time is ridiculously long.
5)The problem happens also if I add the term (CONTENT.Country='nl' ||
CONTENT.Country='*' )
to the rule definition, and I remove this part from the custom query.
Am I missing something? Am I using the personalization server correctly?
Is this performance difference between QA and DEV due to differences
in the
OS?
Thank you,
Luis Muñiz -
Hello, I am having a problem with my iPhone 5: the battery runs out quickly. Another problem: if I turn on 3G it drains even faster; within 15 minutes the battery will be out. My final problem is that "No Service" appears a lot, especially when turning on Wi-Fi or 3G. Can you help?
Your battery is draining quickly because your cellular data connection is weak.
Is your phone carrier a supported carrier: Wireless carrier support and features for iPhone in the United States and Canada - Apple Support
For your no service issues: If you see No Service in the status bar of your iPhone or iPad - Apple Support -
What are the ways to make Query run fast?
Hi Experts,
When a query runs slow, we generally go for creating an aggregate. My doubt is: what else can be done to make a query run faster before creating an aggregate? What is the rule of thumb for creating an aggregate?
Regards,
ShreeemHi Shreem,
If you keep Query simple not complicate it with runtime calculations , it would be smooth. However as per business requirements we will have to go for it anyways mostly.
regarding aggregates:
Regarding aggregates:
Please do not use the standard proposal; it will give you hundreds of aggregates based on standard rules, which consumes a lot of space and adds to load times. If you have users already using the query and you are planning to tune it, then go to the statistics tables:
1. RSDDSTAT_OLAP - find the query with long runtimes and get the STEPUID
2. RSDDSTAT_DM
3. RSDDSTATAGGRDEF - use the STEPUID from above to see which aggregate is necessary for which cube.
Another way to check: find the highest-runtime users as in 1, look up the last-used bookmarks for this query by those users through RSZWBOOKMARK, check whether the times match, and create the aggregates as in 3 above.
You can also use transaction RSRT > Execute & Debug (display stats) to create generic aggregates to support navigation for new queries, and refine them later as above.
Hope it helps .
Thnks
Ram -
Does SQL Query run faster with/without Conditions....
Hi All, forgive my novice question.
I was just wondering: in general, if we run a SQL query on a single table, does the query run faster with multiple WHERE conditions, or without? What happens as the conditions increase? My table is a big one with 5 million rows and some bitmap indexes defined on it.
Thanks,
Kon
I think it's difficult to give a general rule, because too much depends on whether the columns are indexed, how table and index statistics are computed, the session or instance parameters the optimizer may use, the Oracle version, etc.
Message was edited by:
Pierre Forstmann -
Performance problem with selecting records from BSEG and KONV
Hi,
I am having a performance problem while selecting records from the BSEG and KONV tables. As these two tables hold a large amount of data, the selects take a lot of time. Can anyone help me improve the performance? Thanks in advance.
Regards,
Prashant
Hi,
Some steps used to improve performance:
1. Avoid using SELECT...ENDSELECT... construct and use SELECT ... INTO TABLE.
2. Use WHERE clause in your SELECT statement to restrict the volume of data retrieved.
3. Design your Query to Use as much index fields as possible from left to right in your WHERE statement
4. Use FOR ALL ENTRIES in your SELECT statement to retrieve the matching records at one shot.
5. Avoid using nested SELECT statement SELECT within LOOPs.
6. Avoid using INTO CORRESPONDING FIELDS OF TABLE. Instead use INTO TABLE.
7. Avoid using SELECT * and Select only the required fields from the table.
8. Avoid nested loops when working with large internal tables.
9. Use ASSIGN (field symbols) instead of INTO in LOOPs for table types with large work areas.
10. When in doubt call transaction SE30 and use the examples and check your code
11. Whenever using READ TABLE use BINARY SEARCH addition to speed up the search. Be sure to sort the internal table before binary search. This is a general thumb rule but typically if you are sure that the data in internal table is less than 200 entries you need not do SORT and use BINARY SEARCH since this is an overhead in performance.
12. Use "CHECK" instead of IF/ENDIF whenever possible.
13. Use "CASE" instead of IF/ENDIF whenever possible.
14. Use "MOVE" with individual variable/field moves instead of "MOVE-CORRESPONDING"; it creates more coding but is more efficient. -
Performance problems - query on the copy of a table is faster than the orig
Hi all,
I have SQL (SELECT) performance problems with a specific table (cost 7800) (Oracle 10.2.0.4.0, Linux).
So I copied the table 1:1 (same structure, same data, same indexes, etc.) in the same area (database, user, tablespace, etc.) and gathered the table stats with the same parameters. The same query on this copied table is faster, with a cost of just 3600.
Why on earth is the query on this new table faster?
I appreciate any idea.
Thank you!
Fibo
Edited by: user954903 on 13.01.2010 04:23
Could you please share more information or a link explaining the significance of the SHRINK clause?
If it is so useful and can reclaim unused space, why has my database architect not suggested it? :)
Any disadvantages?
It moves the high-water mark back to the lowest position, so full table scans work faster in some cases. It can also reduce the number of migrated rows and the number of used blocks.
The disadvantage is that it involves row movement, so operations that rely on ROWIDs are not safe while the shrink is running.
Guru's, please correct me if I'm mistaken.
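For reference, a minimal sketch of the shrink procedure discussed above (the table name is hypothetical; syntax as documented for Oracle 10g, and it requires an ASSM tablespace):

```sql
-- Row movement must be enabled first, since SHRINK relocates rows
ALTER TABLE big_table ENABLE ROW MOVEMENT;

-- COMPACT defragments the segment without moving the high-water mark,
-- so it can run while DML continues
ALTER TABLE big_table SHRINK SPACE COMPACT;

-- A second pass moves the high-water mark down and releases space
-- (takes a brief exclusive lock at the end)
ALTER TABLE big_table SHRINK SPACE;
```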
Edited by: Oleg Gorskin on 13.01.2010 5:50 -
Performance problem querying multiple CLOBS
We are running Oracle 8.1.6 Standard Edition on a Sun E420r, 2 x 450 MHz processors, 2 GB memory,
Solaris 7. I have created Oracle Text indexes on several columns in a large table, including VARCHAR2 and CLOB. I am simulating search engine queries where the user chooses to find matches on the exact phrase, all of the words (AND), or any of the words (OR). I am hitting performance problems when querying multiple CLOBs using OR, e.g.
select count(*) from articles
where contains (abstract , 'matter OR dark OR detection') > 0
or contains (subject , 'matter OR dark OR detection') > 0
Columns abstract and subject are CLOBs. However, this query works fine for AND;
select count(*) from articles
where contains (abstract , 'matter AND dark AND detection') > 0
or contains (subject , 'matter AND dark AND detection') > 0
The explain plan gives a cost of 2157 for OR and 14.3 for AND.
I realise that multiple contains are not a good thing, but the AND returns sub-second, and the OR is taking minutes! The indexes are created thus:
create index article_abstract_search on article(abstract)
INDEXTYPE IS ctxsys.context parameters ('STORAGE mystore memory 52428800');
The data and index tables are on separate tablespaces.
Can anyone suggest what is going on here, and any alternatives?
Many thanks,
Geoff Robinson
Thanks for your reply, Omar.
I have read the performance FAQ already, and it points out single CONTAINS clauses are preferred, but I need to check 2 columns. Also, I don't just want a count(*), I will need to select field values. As you can see from my 2 queries, the first has multiple CLOB columns using OR, and the second AND, with the second taking that much longer. Even with only a single CONTAINS, the cost estimate is 5 times more for OR than for AND.
Add an extra CONTAINS and it becomes 300 times more costly!
The root table is 3 million rows, the 2 token tables have 6.5 and 3 million rows respectively. All tables have been fully analyzed.
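One approach worth testing, not mentioned in the thread: combine both columns into a single Oracle Text index with a MULTI_COLUMN_DATASTORE preference, so one CONTAINS searches abstract and subject together. A hedged sketch (the preference and index names are hypothetical):

```sql
BEGIN
  -- Build a datastore that concatenates both columns into one document
  ctx_ddl.create_preference('article_multi', 'MULTI_COLUMN_DATASTORE');
  ctx_ddl.set_attribute('article_multi', 'COLUMNS', 'abstract, subject');
END;
/

-- The index is created on one column, but the datastore pulls in both
CREATE INDEX article_multi_search ON article(abstract)
  INDEXTYPE IS ctxsys.context
  PARAMETERS ('DATASTORE article_multi');

-- A single CONTAINS now covers both columns, avoiding the costly
-- OR of two separate CONTAINS clauses
SELECT COUNT(*) FROM article
 WHERE CONTAINS(abstract, 'matter OR dark OR detection') > 0;
```

The trade-off is that you can no longer tell which of the two columns matched without section searching.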
Regards
Geoff -
Any problems if run SQL Server 2005 and Essbase Server on 1 machine?
Hi, I would like to know if there are any problems in terms of performance when running SQL Server 2005 and Essbase Server on one machine.
Currently, my users are using Excel files (Lock & Send) to upload data to the Essbase Server, and it takes about 30 minutes per Excel file upload. Application Manager is installed on the client and server PCs for administration.
Now I need to implement a data warehouse and wish to install SQL Server 2005 on the machine my Essbase Server is currently running on. I will need to run simple SQL statements on SQL Server 2005, such as UPDATE, SELECT, INSERT, etc.
When it comes to performance, will my current Essbase Server be affected? What if two users access the Essbase Server and SQL Server at the same time? Will any data be lost in the midst of extracting data?
I hope someone can advise. Appreciate that. Thank you.
My test server is running both Essbase and SQL Server, but I think you have a bigger problem.
If a lock and send is taking 30 minutes, you are using the wrong technology to load Essbase. You should really consider doing the load using a data load rule. If you are moving enough data to require a half-hour load, it should probably be loaded from a text file, or even a spreadsheet, using a data load rule. Text file loads are a bit better than Excel, especially when there are formatting issues.
Lock and sends are fine for changing parameters and doing adjustments, but heavy-duty data loads really work a lot better with load rules.
-
Performance problems with SAP GUI 7.10 and BEx 3.5 Patch 400?
Hi everybody,
we installed SAP GUI 7.10 with BEx 3.5 Patch 400 and detected huge performance problems with this version in comparison to SAP GUI 6.40 with BEx 3.5 or BEx 7.0 Patch 800.
Does anybody detect the same problems?
Best regards,
Ulli
The most important question when you are talking about performance issues: which OS are you working on, and which Excel version?
ciao
Joke -
Performance Problems with MS IE 6.0 and EP 7.0 (2004s)
Hello,
we have a problem with the IE 6.0 web browser and EP 7.0 (2004s). When we open, for example, the theme editor in the EP system administration site, we must wait ~18 s for the site to open with MS IE 6.0. With Firefox 2.0 we can open the theme editor in ~1 s.
Has anybody had the same problem, or is this a known problem with IE 6.0?
We used httpwatch 3.2 for the network analysis. We can see that of the total 18 s, IE 6.0 spends 13 s opening the page emptyhover.html.
- Thanks in advance for a tip !
Best Regards,
Ralf
Hello Ameya and hello Shao,
we use version SP12 of NW 2004s. We have this problem with a base application of the portal: the theme editor.
We can see in the httpWatch 3.2 analysis tool that IE 6.0 loads many cached files from the client browser. Could that be the problem? I read in this forum about problems with the browser setting "Empty Temporary Internet Files folder when browser is closed".
Thank you for your helping.
Best Regards,
Ralf -
Performance problem: Query explain plan changes in pl/sql vs. literal args
I have a complex query with 5+ table joins on large (million+ row) tables. In its most simplified form, it's essentially
select * from largeTable large
join anotherLargeTable anothr on (anothr.id_2 = large.pk_id)
join...(other aux tables)
where large.pk_id between 123 and 456;
Its performance was excellent with literal arguments (1 sec per execution).
But, when I used pl/sql bind argument variables instead of 123 and 456 as literals, the explain plan changes drastically, and runs 10+ minutes.
Ex:
CREATE PROCEDURE runQuery(param1 INTEGER, param2 INTEGER){
CURSOR LT_CURSOR IS
select * from largeTable large
join anotherLargeTable anothr on (anothr.id_2 = large.pk_id)
join...(other aux tables)
where large.pk_id between param1 AND param2;
BEGIN
FOR aRecord IN LT_CURSOR
LOOP
(print timestamp...)
END LOOP;
END runQuery;
Rewriting the query 5 different ways was unfruitful. DB hints were also unfruitful in this particular case. largeTable.pk_id is an indexed field, as are all the other join fields.
Solution:
Lacking other options, I built a literal query by concatenating the argument values, then opened a cursor for that literal query.
Upside: it changed the explain plan to the only really fast option and ran in 1 second instead of 10+ minutes.
Downside: the query is not cached for future reuse. Perfectly fine for this query's purpose.
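That workaround can be sketched roughly as follows. The procedure name is hypothetical; it uses native dynamic SQL (OPEN ... FOR a concatenated string), which is only safe from injection here because the concatenated arguments are integers:

```sql
CREATE OR REPLACE PROCEDURE runQueryLiteral(param1 INTEGER, param2 INTEGER) IS
  lt_cursor SYS_REFCURSOR;
  aRecord   largeTable%ROWTYPE;
BEGIN
  -- Concatenate the argument values as literals so the optimizer
  -- sees the actual range instead of peeked/averaged bind estimates.
  OPEN lt_cursor FOR
       'select large.* from largeTable large'
    || ' join anotherLargeTable anothr on (anothr.id_2 = large.pk_id)'
    || ' where large.pk_id between ' || param1 || ' and ' || param2;
  LOOP
    FETCH lt_cursor INTO aRecord;
    EXIT WHEN lt_cursor%NOTFOUND;
    NULL;  -- process aRecord here (e.g. print a timestamp)
  END LOOP;
  CLOSE lt_cursor;
END runQueryLiteral;
/
```

Each distinct literal pair produces a new child cursor in the shared pool, which is the caching downside mentioned above.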
Other suggestions are welcome.

AmandaSoosai wrote:
Best wild guess based on what you've posted is a bind variable mismatch (your column is declared as a NUMBER data type and your bind variable is declared as a VARCHAR, for example). Unless you have histograms on the columns in question... which, if you're using bind variables, is usually a really bad idea.
A basic illustration of my guess
http://blogs.oracle.com/optimizer/entry/how_do_i_get_sql_executed_from_an_application_to_uses_the_same_execution_plan_i_get_from_sqlplus -
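The mismatch scenario described in that reply can be sketched in SQL*Plus (table and column names are hypothetical). When the bind's datatype differs from the column's, Oracle wraps one side in an implicit conversion, and that can change the plan the optimizer picks:

```sql
-- pk_id is a NUMBER column, but the binds are declared VARCHAR2:
VARIABLE lo VARCHAR2(10)
VARIABLE hi VARCHAR2(10)
EXEC :lo := '123'
EXEC :hi := '456'
-- Effectively optimized as: pk_id BETWEEN TO_NUMBER(:lo) AND TO_NUMBER(:hi)
select count(*) from largeTable where pk_id between :lo and :hi;

-- Declaring the binds with the column's datatype avoids the conversion:
VARIABLE lo2 NUMBER
VARIABLE hi2 NUMBER
EXEC :lo2 := 123
EXEC :hi2 := 456
select count(*) from largeTable where pk_id between :lo2 and :hi2;
```

Comparing the plans of the two variants (e.g. via DBMS_XPLAN.DISPLAY_CURSOR) would confirm or rule out this guess.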
Query runs faster when there is no statistics
Hi Gurus
I have an Oracle 10.2.0.1 instance and I'm seeing the following weird behavior; I'd appreciate any pointers or references to resolve this problem.
Thank you very much.
1. Run the query below 10 times continuously from SQL*Plus. Elapsed time is around 115 seconds (around 2 min) for each execution. Elapsed time is constant, no increase or decrease. The tables involved in the query have statistics gathered with 100% sampling.
2. Delete the statistics on the 2 tables involved in the query and flush the shared pool (alter system flush shared_pool).
3. Run the query 10 times. Elapsed time is less than 2 seconds for each execution. Elapsed time is constant, no increase or decrease.
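Step 2 can be done along these lines; the schema owner below is a placeholder:

```sql
BEGIN
  DBMS_STATS.DELETE_TABLE_STATS(ownname => 'APP_OWNER', tabname => 'TCTG_ITA_ITEM_ATTRIBUTES');
  DBMS_STATS.DELETE_TABLE_STATS(ownname => 'APP_OWNER', tabname => 'TCTG_ITM_ITEM');
END;
/
ALTER SYSTEM FLUSH SHARED_POOL;
```

Note that flushing the shared pool invalidates cached cursors instance-wide, so this is only something to do on a test system.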
The query (it is a generated query, so there is no option to modify it):
select count(distinct itm1.itm_id) FROM ita ita1, ita ita2, itm itm1, itm itm2, itm itm3
where itm1.itm_container_id = 2812
and itm1.itm_version_id <= 999999999
and itm1.itm_next_version_id >= 999999999
and itm2.itm_primary_key = 'RAYBESTOS'
and itm3.itm_primary_key = '1'
and ita1.ita_node_id = 3111
and itm2.itm_container_id = 2020
and ita1.ita_item_id = itm1.itm_id
and ita1.ita_version_id <= 999999999
and ita1.ita_next_version_id >= 999999999
and itm2.itm_id = ita1.ita_value_numeric
and itm2.itm_version_id <= 999999999
and itm2.itm_next_version_id >= 999999999
and ita2.ita_node_id = 3118
and itm3.itm_container_id = 2025
and ita2.ita_item_id = itm1.itm_id
and ita2.ita_version_id <= 999999999
and ita2.ita_next_version_id >= 999999999
and itm3.itm_id = ita2.ita_value_numeric
and itm3.itm_version_id <= 999999999
and itm3.itm_next_version_id >= 999999999;
The query uses dynamic sampling when there are no statistics.
The tkprof report shows a small difference in the execution plan between the two cases. When there are no statistics, there is a TABLE ACCESS BY INDEX ROWID step; that may be the reason for the faster response time.
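As an aside, dynamic sampling can also be requested explicitly with a statement-level hint, which is one way to check whether the sampled estimates are what produce the better plan even while statistics are in place (the level used here is an arbitrary assumption):

```sql
select /*+ dynamic_sampling(4) gather_plan_statistics */
       count(distinct itm1.itm_id)
from   ita ita1, ita ita2, itm itm1, itm itm2, itm itm3
where  itm1.itm_container_id = 2812
  -- ... remaining predicates exactly as in the original query ...
;
```

This only helps for diagnosis, though, since the real query is generated and cannot be modified.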
Rows Row Source Operation
1 SORT GROUP BY (cr=47235 pr=0 pw=0 time=919461 us)
758 TABLE ACCESS BY INDEX ROWID TCTG_ITA_ITEM_ATTRIBUTES (cr=47235 pr=0 pw=0 time=600473 us)
14163 NESTED LOOPS (cr=40652 pr=0 pw=0 time=299694 us)
7081 NESTED LOOPS (cr=25708 pr=0 pw=0 time=538463 us)
12771 NESTED LOOPS (cr=90 pr=0 pw=0 time=255699 us)
1 MERGE JOIN CARTESIAN (cr=6 pr=0 pw=0 time=271 us)
1 INDEX RANGE SCAN ICTG_ITM_1 (cr=3 pr=0 pw=0 time=74 us)(object id 105409)
1 BUFFER SORT (cr=3 pr=0 pw=0 time=112 us)
1 INDEX RANGE SCAN ICTG_ITM_1 (cr=3 pr=0 pw=0 time=43 us)(object id 105409)
12771 INDEX RANGE SCAN ICTG_ITA_1 (cr=84 pr=0 pw=0 time=102210 us)(object id 105399)
7081 INDEX RANGE SCAN ICTG_ITM_0 (cr=25618 pr=0 pw=0 time=363715 us)(object id 105408)
7081 INDEX RANGE SCAN ICTG_ITA_0 (cr=14944 pr=0 pw=0 time=239803 us)(object id 105398)

Hi Jonathan,
Thanks again for your response. Yes, you are correct. Most of the rows have all 9s, and a small percentage have something else - and there are only a small number of distinct values.
Here is the histogram info when statistics are present (some old data has been trimmed for this test).
TABLE_NAME                 COLUMN_NAME                  NUM_DISTINCT  NUM_BUCKETS  HISTOGRAM
TCTG_ITA_ITEM_ATTRIBUTES   ITA_COMPANY_ID                      2            2      FREQUENCY
TCTG_ITA_ITEM_ATTRIBUTES   ITA_CATALOG_ID                     62           62      FREQUENCY
TCTG_ITA_ITEM_ATTRIBUTES   ITA_ITEM_ID                    720867            1      NONE
TCTG_ITA_ITEM_ATTRIBUTES   ITA_NODE_ID                       118          118      FREQUENCY
TCTG_ITA_ITEM_ATTRIBUTES   ITA_VALUE_NUMERIC              587504          254      HEIGHT BALANCED
TCTG_ITA_ITEM_ATTRIBUTES   ITA_VALUE_STRING              1060930          254      HEIGHT BALANCED
TCTG_ITA_ITEM_ATTRIBUTES   ITA_VERSION_ID                     48           48      FREQUENCY
TCTG_ITA_ITEM_ATTRIBUTES   ITA_NEXT_VERSION_ID                 1            1      FREQUENCY
TCTG_ITA_ITEM_ATTRIBUTES   ITA_OCCURRENCE_ID               16250          254      HEIGHT BALANCED
TCTG_ITA_ITEM_ATTRIBUTES   ITA_VALUE_STRING_IGNORECASE   1498257          254      HEIGHT BALANCED
TCTG_ITM_ITEM              ITM_COMPANY_ID                      2            2      FREQUENCY
TCTG_ITM_ITEM              ITM_ID                         720867            1      NONE
TCTG_ITM_ITEM              ITM_CONTAINER_ID                   62           62      FREQUENCY
TCTG_ITM_ITEM              ITM_PRIMARY_KEY                531960            1      NONE
TCTG_ITM_ITEM              ITM_VERSION_ID                     48           48      FREQUENCY
TCTG_ITM_ITEM              ITM_NEXT_VERSION_ID                 1            1      FREQUENCY
TCTG_ITM_ITEM              ITM_STATUS                          3            1      NONE
TCTG_ITM_ITEM              ITM_COLLAB_INFO                     7            1      NONE
TCTG_ITM_ITEM              ITM_LAST_MODIFIED              717098            1      NONE

Display_cursor without statistics:
SQL> @disp-cursor
756
Elapsed: 00:00:03.81
SQL_ID d9q6j48ns19zv, child number 0
select /*+ gather_plan_statistics */ count(distinct itm1.itm_id) FROM ita ita1, ita ita2, itm itm1, itm itm2, itm itm3 where
itm1.itm_container_id = 2812 and itm1.itm_version_id <= 999999999 and itm1.itm_next_version_id >= 999999999 and
itm2.itm_primary_key = 'RAYBESTOS' and itm3.itm_primary_key = '1' and ita1.ita_node_id = 3111 and
itm2.itm_container_id = 2020 and ita1.ita_item_id = itm1.itm_id and ita1.ita_version_id <= 999999999 and
ita1.ita_next_version_id >= 999999999 and itm2.itm_id = ita1.ita_value_numeric and itm2.itm_version_id <= 999999999
and itm2.itm_next_version_id >= 999999999 and ita2.ita_node_id = 3118 and itm3.itm_container_id = 2025 and
ita2.ita_item_id = itm1.itm_id and ita2.ita_version_id <= 999999999 and ita2.ita_next_version_id >= 999999999 and
itm3.itm_id = ita2.ita_value_numeric and itm3.itm_version_id <= 999999999 and itm3.itm_next_version_id >= 999999999
Plan hash value: 2184662757
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
| 1 | SORT GROUP BY | | 1 | 1 | 1 |00:00:03.61 | 178K| 73728 | 73728 | |
|* 2 | TABLE ACCESS BY INDEX ROWID| TCTG_ITA_ITEM_ATTRIBUTES | 1 | 1 | 756 |00:00:00.96 | 178K| | | |
| 3 | NESTED LOOPS | | 1 | 1 | 69879 |00:00:01.27 | 145K| | | |
| 4 | NESTED LOOPS | | 1 | 1 | 34939 |00:00:02.66 | 71815 | | | |
| 5 | NESTED LOOPS | | 1 | 1 | 35695 |00:00:00.71 | 229 | | | |
| 6 | MERGE JOIN CARTESIAN | | 1 | 1 | 1 |00:00:00.01 | 6 | | | |
|* 7 | INDEX RANGE SCAN | ICTG_ITM_1 | 1 | 1 | 1 |00:00:00.01 | 3 | | | |
| 8 | BUFFER SORT | | 1 | 1 | 1 |00:00:00.01 | 3 | 73728 | 73728 | |
|* 9 | INDEX RANGE SCAN | ICTG_ITM_1 | 1 | 1 | 1 |00:00:00.01 | 3 | | | |
|* 10 | INDEX RANGE SCAN | ICTG_ITA_1 | 1 | 1 | 35695 |00:00:00.29 | 223 | | | |
|* 11 | INDEX RANGE SCAN | ICTG_ITM_0 | 35695 | 1 | 34939 |00:00:01.14 | 71586 | | | |
|* 12 | INDEX RANGE SCAN | ICTG_ITA_0 | 34939 | 1 | 34939 |00:00:01.20 | 73590 | | | |
Predicate Information (identified by operation id):
2 - filter("ITM3"."ITM_ID"="ITA2"."ITA_VALUE_NUMERIC")
7 - access("ITM2"."ITM_PRIMARY_KEY"='RAYBESTOS' AND "ITM2"."ITM_CONTAINER_ID"=2020 AND "ITM2"."ITM_NEXT_VERSION_ID">=999999999 AND
"ITM2"."ITM_VERSION_ID"<=999999999)
filter("ITM2"."ITM_VERSION_ID"<=999999999)
9 - access("ITM3"."ITM_PRIMARY_KEY"='1' AND "ITM3"."ITM_CONTAINER_ID"=2025 AND "ITM3"."ITM_NEXT_VERSION_ID">=999999999 AND
"ITM3"."ITM_VERSION_ID"<=999999999)
filter("ITM3"."ITM_VERSION_ID"<=999999999)
10 - access("ITA1"."ITA_NODE_ID"=3111 AND "ITM2"."ITM_ID"="ITA1"."ITA_VALUE_NUMERIC" AND "ITA1"."ITA_NEXT_VERSION_ID">=999999999
AND "ITA1"."ITA_VERSION_ID"<=999999999)
filter("ITA1"."ITA_VERSION_ID"<=999999999)
11 - access("ITA1"."ITA_ITEM_ID"="ITM1"."ITM_ID" AND "ITM1"."ITM_NEXT_VERSION_ID">=999999999 AND "ITM1"."ITM_CONTAINER_ID"=2812 AND
"ITM1"."ITM_VERSION_ID"<=999999999)
filter(("ITM1"."ITM_CONTAINER_ID"=2812 AND "ITM1"."ITM_VERSION_ID"<=999999999))
12 - access("ITA2"."ITA_ITEM_ID"="ITM1"."ITM_ID" AND "ITA2"."ITA_NEXT_VERSION_ID">=999999999 AND "ITA2"."ITA_NODE_ID"=3118 AND
"ITA2"."ITA_VERSION_ID"<=999999999)
filter(("ITA2"."ITA_NODE_ID"=3118 AND "ITA2"."ITA_VERSION_ID"<=999999999))
Note
- dynamic sampling used for this statement
54 rows selected.
Elapsed: 00:00:00.04

Display_cursor with statistics:
SQL> @disp-cursor
756
Elapsed: 00:01:57.53
SQL_ID d9q6j48ns19zv, child number 0
select /*+ gather_plan_statistics */ count(distinct itm1.itm_id) FROM ita ita1, ita ita2, itm itm1, itm itm2, itm itm3
where itm1.itm_container_id = 2812 and itm1.itm_version_id <= 999999999 and itm1.itm_next_version_id >= 999999999
and itm2.itm_primary_key = 'RAYBESTOS' and itm3.itm_primary_key = '1' and ita1.ita_node_id = 3111
and itm2.itm_container_id = 2020 and ita1.ita_item_id = itm1.itm_id and ita1.ita_version_id <= 999999999
and ita1.ita_next_version_id >= 999999999 and itm2.itm_id = ita1.ita_value_numeric and itm2.itm_version_id <= 999999999
and itm2.itm_next_version_id >= 999999999 and ita2.ita_node_id = 3118 and itm3.itm_container_id = 2025
and ita2.ita_item_id = itm1.itm_id and ita2.ita_version_id <= 999999999 and ita2.ita_next_version_id >= 999999999
and itm3.itm_id = ita2.ita_value_numeric and itm3.itm_version_id <= 999999999 and itm3.itm_next_version_id >= 999999999
Plan hash value: 332134648
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
| 1 | SORT GROUP BY | | 1 | 1 | 1 |00:01:54.75 | 3041K| 73728 | 73728 | |
| 2 | NESTED LOOPS | | 1 | 1 | 756 |00:00:30.95 | 3041K| | | |
| 3 | NESTED LOOPS | | 1 | 1 | 34939 |00:00:02.73 | 71818 | | | |
| 4 | NESTED LOOPS | | 1 | 1 | 35695 |00:00:00.75 | 230 | | | |
| 5 | MERGE JOIN CARTESIAN| | 1 | 1 | 1 |00:00:00.01 | 6 | | | |
|* 6 | INDEX RANGE SCAN | ICTG_ITM_1 | 1 | 1 | 1 |00:00:00.01 | 3 | | | |
| 7 | BUFFER SORT | | 1 | 1 | 1 |00:00:00.01 | 3 | 73728 | 73728 | |
|* 8 | INDEX RANGE SCAN | ICTG_ITM_1 | 1 | 1 | 1 |00:00:00.01 | 3 | | | |
|* 9 | INDEX RANGE SCAN | ICTG_ITA_1 | 1 | 1 | 35695 |00:00:00.32 | 224 | | | |
|* 10 | INDEX RANGE SCAN | ICTG_ITM_0 | 35695 | 1 | 34939 |00:00:01.19 | 71588 | | | |
|* 11 | INDEX RANGE SCAN | ICTG_ITA_1 | 34939 | 1 | 756 |00:01:52.76 | 2969K| | | |
Predicate Information (identified by operation id):
6 - access("ITM3"."ITM_PRIMARY_KEY"='1' AND "ITM3"."ITM_CONTAINER_ID"=2025 AND
"ITM3"."ITM_NEXT_VERSION_ID">=999999999 AND "ITM3"."ITM_VERSION_ID"<=999999999)
filter("ITM3"."ITM_VERSION_ID"<=999999999)
8 - access("ITM2"."ITM_PRIMARY_KEY"='RAYBESTOS' AND "ITM2"."ITM_CONTAINER_ID"=2020 AND
"ITM2"."ITM_NEXT_VERSION_ID">=999999999 AND "ITM2"."ITM_VERSION_ID"<=999999999)
filter("ITM2"."ITM_VERSION_ID"<=999999999)
9 - access("ITA1"."ITA_NODE_ID"=3111 AND "ITM2"."ITM_ID"="ITA1"."ITA_VALUE_NUMERIC" AND
"ITA1"."ITA_NEXT_VERSION_ID">=999999999 AND "ITA1"."ITA_VERSION_ID"<=999999999)
filter(("ITA1"."ITA_VALUE_NUMERIC" IS NOT NULL AND "ITA1"."ITA_VERSION_ID"<=999999999))
10 - access("ITA1"."ITA_ITEM_ID"="ITM1"."ITM_ID" AND "ITM1"."ITM_NEXT_VERSION_ID">=999999999 AND
"ITM1"."ITM_CONTAINER_ID"=2812 AND "ITM1"."ITM_VERSION_ID"<=999999999)
filter(("ITM1"."ITM_CONTAINER_ID"=2812 AND "ITM1"."ITM_VERSION_ID"<=999999999))
11 - access("ITA2"."ITA_NODE_ID"=3118 AND "ITM3"."ITM_ID"="ITA2"."ITA_VALUE_NUMERIC" AND
"ITA2"."ITA_NEXT_VERSION_ID">=999999999 AND "ITA2"."ITA_ITEM_ID"="ITM1"."ITM_ID" AND
"ITA2"."ITA_VERSION_ID"<=999999999)
filter(("ITA2"."ITA_VALUE_NUMERIC" IS NOT NULL AND "ITA2"."ITA_VERSION_ID"<=999999999 AND
"ITA2"."ITA_ITEM_ID"="ITM1"."ITM_ID"))
51 rows selected.
Elapsed: 00:00:00.28