Spatial Query Response Time
O/S - Sun Solaris
ver - Oracle 8.1.7
I am trying to improve the response time of the following query. Both tables contain polygons.
select a.data_id, a.GEOLOC
from information_data a, shape_data b
where a.info_id = 2
and b.shape_id = 271
and sdo_filter(a.GEOLOC, b.GEOLOC, 'querytype=window') = 'TRUE';
The response time with info_id not indexed is 9 seconds. When I index info_id, I get the following error. Why is indexing info_id causing a spatial index error? Also, other than manipulating the tiling level, is there anything else that could improve the response time?
ERROR at line 1:
ORA-29902: error in executing ODCIIndexStart() routine
ORA-13208: internal error while evaluating [window SRID does not match layer
SRID] operator
ORA-06512: at "MDSYS.SDO_INDEX_METHOD", line 84
ORA-06512: at line 1
Thanks,
Ravi.
Hello Ravi,
Both layers should have SDO_SRID values set in order for the index to work properly.
After you do that you might want to add an Oracle hint to the query:
select /*+ ordered */ a.data_id, a.GEOLOC
from shape_data b, information_data a
where a.info_id = 2 and b.shape_id = 271
and sdo_filter(a.GEOLOC,b.GEOLOC,'querytype=window')='TRUE' ;
Hope this helps,
Dan
Also, if only one or very few rows have a.info_id=2 then the function sdo_geom.relate
might also work quickly.
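As a sketch of the SRID fix Dan describes (the 8307 below is only a placeholder; use whatever coordinate system your data is actually in):

```sql
-- Inspect the SRID stored in each layer's geometries:
SELECT a.GEOLOC.SDO_SRID FROM information_data a WHERE ROWNUM = 1;
SELECT b.GEOLOC.SDO_SRID FROM shape_data b WHERE ROWNUM = 1;

-- If they differ (or one is NULL), set both layers to the same SRID,
-- update the SRID column in USER_SDO_GEOM_METADATA to match,
-- then drop and re-create the spatial indexes:
UPDATE shape_data b SET b.GEOLOC.SDO_SRID = 8307;
COMMIT;
```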
Similar Messages
-
Help required in optimizing the query response time
Hi,
I am working on an application which uses a JDBC thin client. My requirement is to select all the rows in one table and use the column values to select data in another table in another database.
The first table can have a maximum of 6 million rows, but the second table will have around 9,000 rows.
My first query returns within 30-40 milliseconds when the table has 200,000 rows. But when I iterate over the result set and query the second table, each query takes around 4 milliseconds.
The second query's selection criterion is to find the value in a range.
for example my_table ( varchar2 column1, varchar2 start_range, varchar2 end_range);
My first query returns a result which then will be used to select using the following query
select column1 from my_table where start_range < my_value and end_range> my_value;
I have created an index on start_range and end_range. This query is taking around 4 milliseconds, which I think is too much.
I am using a PreparedStatement for the second query loop.
Can some one suggest me how I can improve the query response time?
Regards,
Shyam

Try the code below.
Prerequisite: you should know how to pass ARRAY objects to Oracle and receive result sets in Java. There are thousands of samples available on the net.
I have written sample DB code for this interaction.
The procedure get_list takes an array input from Java and returns the record set back to Java. You can change the table names and the criteria.
Good luck.
DROP TYPE idlist;
CREATE OR REPLACE TYPE idlist AS TABLE OF NUMBER;
/
CREATE OR REPLACE PACKAGE mypkg1
AS
   PROCEDURE get_list (myval_list idlist, orefcur OUT SYS_REFCURSOR);
END mypkg1;
/
CREATE OR REPLACE PACKAGE BODY mypkg1
AS
   PROCEDURE get_list (myval_list idlist, orefcur OUT SYS_REFCURSOR)
   AS
   BEGIN
      DBMS_OUTPUT.put_line (myval_list.COUNT);
      -- COLUMN_VALUE is the pseudocolumn for a scalar collection
      -- unnested with TABLE():
      FOR x IN (SELECT object_name, object_id, myvalue
                  FROM user_objects a,
                       (SELECT COLUMN_VALUE myvalue
                          FROM TABLE (myval_list)) b
                 WHERE a.object_id < b.myvalue)
      LOOP
         DBMS_OUTPUT.put_line (x.object_name || ' - ' || x.object_id
                               || ' - ' || x.myvalue);
      END LOOP;
   END;
END mypkg1;
/
Testing the code above (make sure DBMS_OUTPUT is ON):
DECLARE
   a    idlist;
   refc SYS_REFCURSOR;
BEGIN
   SELECT x.nu
     BULK COLLECT INTO a
     FROM (SELECT 5000 nu FROM DUAL) x;
   mypkg1.get_list (a, refc);
END;
/
Vishal V. -
How to obtain the Query Response Time of a query?
Given the Average Length of Row of tables and the number of rows in each table,
is there a way we get the query response time of a query involving
those tables. Query includes joins as well.
For example, suppose there 3 tables t1, t2, t3. I wish to obtain the
time it takes for the following query:
Query
SELECT t1.col1, t2.col2
FROM t1, t2, t3
WHERE t1.col1 = t2.col2
AND t1.col2 IN ('a', 'c', 'd')
AND t2.col1 = t3.col2
AND t2.col1 = t1.col1 (+)
ORDER BY t1.col1
Given are:
Average Row Length of t1 = 200 bytes
Average Row Length of t2 = 100 bytes
Average Row Length of t3 = 500 bytes
No of rows in t1 = 100
No of rows in t2 = 1000
No of rows in t3 = 500
What is required is the 'query response time' for the said query.

I do not know how to do it myself. But if you are running Oracle 10g, I believe there is a new tool called the SQL Tuning Advisor which might be able to help.
Here are some links I found doing a google search, and it looks like it might meet your needs and even give you more information on how to improve your code.
http://www.databasejournal.com/features/oracle/article.php/3492521
http://www.databasejournal.com/features/oracle/article.php/3387011
http://www.oracle.com/technology/obe/obe10gdb/manage/perflab/perflab.htm
http://www.oracle.com/technology/pub/articles/10gdba/week18_10gdba.html
http://www.oracle-base.com/articles/10g/AutomaticSQLTuning10g.php
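As a concrete starting point, the 10g advisor can be driven from PL/SQL (a sketch; the SQL text and the task name 'demo_task' are placeholders):

```sql
DECLARE
  l_task VARCHAR2(30);
BEGIN
  -- Create and run a tuning task for the statement in question:
  l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(
              sql_text  => 'SELECT t1.col1, t2.col2 FROM t1, t2, t3 ...',
              task_name => 'demo_task');
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => 'demo_task');
END;
/
-- Read the advisor's findings and recommendations:
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('demo_task') FROM DUAL;
```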
Have fun reading:
"You can get help from teachers, but you are going to have to learn a lot by yourself, sitting alone in a room." - Dr. Seuss
Regards
Tim -
How to get query response time from ST03 via a script ?
Hello People,
I am trying to get the average query response time for BW queries with a script (for monitoring/historization).
I know that this data can be found manually in ST03N in the "BI workload" section.
However, I don't know how to get this statistic from a script.
My idea is to run a SQL query to get this information; here is the current state of my query:
select count(*) from sapbw.rsddstat_olap
where calday = 20140401
and (eventid = 3100 or eventid = 3010)
and steptp = 'BEX3'
The problem is that this query is not returning the same number of navigations as the number shown in ST03N.
Can you help me set the correct filters to get the same number of navigations as in ST03N?
Regards.

Hi Experts,
Do you have any ideas for this SQL query?
Regards. -
How to get Query response Time?
I am on BI 7.0. I ran some queries using the RSRT transaction. I want to find out how much time the queries took.
I went to
st03 -> expert mode -> BI system load-> select today / week/month according to the query runtime day
I do not see any InfoProviders. The query was on a cube, so why are no InfoProviders shown?
Does something have to be turned on for the InfoProvider to show up?
When I look in the RSDDSTAT_OLAP table, I do see many rows but cannot make sense of them. Is there some documentation on how to get the total query time from this table?
Is there any other way to get query response time?
Thanks a lot.

Hi,
Why not use RSRT? You can add the database statistics option in "Execute & Debug" and get all the runtime metrics of your query.
In transaction RSRT, enter the query name and press 'Execute + Debug', selecting 'Display Statistics Data'.
After executing, the query will return a list of the measured metrics.
The event id / text describes the steps (duration in seconds):
"OLAP: Read data" gives the SQL statements repsonse time (ok - because the SAP
application server acts as an Oracle client a little network traffic from the db server is included,
but as far as you not transferring zillions of rows it can be ignored)
But it gives you much more (i.e. if the OLAP cache gets used or not )...
In the "Aggreagate statistcs" you get all the infoproviders involved in that query.
bye
yk -
Rules of thumb for acceptable query response times?
Once I read a statement from SAP which said that users typically tolerate response times of up to 7 seconds, so you should strive to achieve this goal.
Unfortunately I cannot remember in which document or web page this statement was made. Does anybody happen to have a reference for this rule, or any other SAP rules about BW query response times?
Regards,
Mark

Share with us the situations where your query is running slow. How many rows are you returning via SQL commands?
On reports, the maximum row count being returned and/or the pagination scheme can have an impact on performance.
Jeff -
How to improve sql server query response time
I have a table that contains 60 million records with the following structure
1. SEQ_ID (Bigint),
2. SRM_CLIENT_ENTITIES_SEQ_ID (Bigint),
3. CUS_ENTITY_DATA_SEQ_ID (Bigint),
4. SRM_CLIENT_ENTITY_ATTRIBUTES_SEQ_ID (Bigint),
5. ATTRIBUTE_DATETIME (DateTime),
6. ATTRIBUTE_DECIMAL (Decimal(18,2)),
7. ATTRIBUTE_STRING (nvarchar(255)),
8. ATTRIBUTE_BOOLEAN (Char(1)),
9. SRM_CLIENTS_SEQ_ID (Bigint)
Clustered index with key SEQ_ID
Non unique non clustered index : I've following four composite indexes
a. SRM_CLIENTS_SEQ_ID, SRM_CLIENT_ENTITIES_SEQ_ID, SRM_CLIENT_ENTITY_ATTRIBUTES_SEQ_ID, ATTRIBUTE_DATETIME
b. SRM_CLIENTS_SEQ_ID, SRM_CLIENT_ENTITIES_SEQ_ID, SRM_CLIENT_ENTITY_ATTRIBUTES_SEQ_ID, ATTRIBUTE_DECIMAL
c. SRM_CLIENTS_SEQ_ID, SRM_CLIENT_ENTITIES_SEQ_ID, SRM_CLIENT_ENTITY_ATTRIBUTES_SEQ_ID, ATTRIBUTE_STRING
d. SRM_CLIENTS_SEQ_ID, SRM_CLIENT_ENTITIES_SEQ_ID, SRM_CLIENT_ENTITY_ATTRIBUTES_SEQ_ID, ATTRIBUTE_BOOLEAN
The problem is that when I execute a simple query over this table, it does not return the results in an acceptable time.
Query:
SELECT CUS_ENTITY_DATA_SEQ_ID FROM dbo.CUS_PIVOT_NON_UNIQUE_INDEXES WHERE SRM_CLIENT_ENTITY_ATTRIBUTES_SEQ_ID = 51986 AND ATTRIBUTE_DECIMAL = 4150196
Execution Time : 2 seconds
Thanks

Did you look at the execution plan?
The query may not be using any of the indexes. The clustered index is on SEQ_ID, and none of the nonclustered indexes start with SRM_CLIENT_ENTITY_ATTRIBUTES_SEQ_ID or ATTRIBUTE_DECIMAL.
The order of the columns in an index matters. Just for testing (if it is not a production environment), create an NCI with SRM_CLIENT_ENTITY_ATTRIBUTES_SEQ_ID and ATTRIBUTE_DECIMAL and check.
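As a sketch of that test index (the index name and the INCLUDE column are assumptions; INCLUDE simply makes the index covering for the sample query above):

```sql
CREATE NONCLUSTERED INDEX IX_CUS_PIVOT_ATTR_DEC
ON dbo.CUS_PIVOT_NON_UNIQUE_INDEXES
   (SRM_CLIENT_ENTITY_ATTRIBUTES_SEQ_ID, ATTRIBUTE_DECIMAL)
INCLUDE (CUS_ENTITY_DATA_SEQ_ID);
```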
Please use Marked as Answer if my post solved your problem and use
Vote As Helpful if a post was useful. -
Query response time takes more time when calling from package
SELECT
/* UTILITIES_PKG.GET_COUNTRY_CODE(E.EMP_ID,E.EMP_NO) COUNTRY_ID */
(SELECT DISTINCT IE.COUNTRY_ID
FROM DOCUMENT IE
WHERE IE.EMP_ID =E.EMP_ID
AND IE.EMP_NO = E.EMP_NO
AND IE.STATUS = 'OPEN' ) COUNTRY_ID
FROM EMPLOYEE E
CREATE OR REPLACE PACKAGE BODY UTILITIES_PKG AS
FUNCTION GET_COUNTRY_CODE(P_EMP_ID IN VARCHAR2, P_EMP_NO IN VARCHAR2)
RETURN VARCHAR2 IS
L_COUNTRY_ID VARCHAR2(25) := '';
BEGIN
SELECT DISTINCT IE.COUNTRY_ID
INTO L_COUNTRY_ID
FROM DOCUMENT IE
WHERE IE.EMP_ID = P_EMP_ID
AND IE.EMP_NO = P_EMP_NO
AND IE.STATUS = 'OPEN';
RETURN L_COUNTRY_ID;
EXCEPTION
WHEN OTHERS THEN
RETURN 'CONT';
END;
END UTILITIES_PKG;
When I run the above query it comes back in 1.2 seconds, but when I comment out the subquery and call the function from the package instead, it takes 9 seconds. The query returns more than 2000 records. I am not able to find out why it takes more time when called from the package.

You are getting a different plan when you run it as PL/SQL, most likely. Comment your statement:
SELECT /* your comment here */ ...
then find them in V$SQL and get the SQL IDs. You can then use DBMS_XPLAN.DISPLAY_CURSOR to see what is actually happening.
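A sketch of that workflow (the comment text is whatever marker you added to the statement):

```sql
-- Locate the tagged statement in the shared pool:
SELECT sql_id, child_number, sql_text
  FROM v$sql
 WHERE sql_text LIKE '%your comment here%';

-- Then display the plan actually used for that cursor:
SELECT *
  FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL, 'TYPICAL'));
```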
http://www.psoug.org/reference/dbms_xplan.html -
Needed some help to understand what may be the reason :
I have a query that shows different numbers even though the execution paths across two different database servers are the same. Both servers have all the tables, indexes, data, etc. exactly the same.
And my query is
select DISTINCT a.PART , a.PART_DESC from PARTS a, MIKE.PART_RATE b where a.state = 'GA' and (a.source='R')
and (a.state = b.state and a.part = b.part and b.business = 'Y')
ON DATABASE 1
Execution Plan
0 SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=1788 Card=630 Bytes=39060)
1 0 SORT (UNIQUE) (Cost=1788 Card=630 Bytes=39060)
2 1 NESTED LOOPS (Cost=1779 Card=630 Bytes=39060)
3 2 TABLE ACCESS (FULL) OF 'PART_RATE' (Cost=3 Card=592 Bytes=6512)
4 2 TABLE ACCESS (BY INDEX ROWID) OF 'PARTS' (Cost=3 Card=13840 Bytes=705840)
5 4 INDEX (RANGE SCAN) OF 'PARTS_X1' (NON-UNIQUE) (Cost=2 Card=13840)
ON DATABASE 2
Execution Plan
0 SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=6 Card=1 Bytes=198)
1 0 SORT (UNIQUE) (Cost=6 Card=1 Bytes=198)
2 1 NESTED LOOPS (Cost=4 Card=1 Bytes=198)
3 2 TABLE ACCESS (FULL) OF 'PART_RATE' (Cost=2 Card=1 Bytes=9)
4 2 TABLE ACCESS (BY INDEX ROWID) OF 'PARTS' (Cost=2 Card=48 Bytes=9072)
5 4 INDEX (RANGE SCAN) OF 'PARTS_X1' (NON-UNIQUE) (Cost=1 Card=48)
The same query on DATABASE1 is taking much longer. What might be the reason? Your help is appreciated.

If your tables are not analyzed and you provide a hint like FIRST_ROWS, Oracle will use the CBO to determine the query plan. If the tables have not been analyzed, the CBO will make up statistics, either using default values or doing dynamic sampling (depending on the version of Oracle and the initialization parameters). If Oracle has to do dynamic sampling, it may take significantly more time to parse the query, and the query plan may be significantly slower. I would be a bit surprised if the query plan on both systems was really identical, which is why I was asking how you found the query plan.
If you trace both sessions and run tkprof, you should see the breakdown in parse and execution time on both systems to see whether the slower system is spending the time parsing.
As I suggested originally, however, if you gather statistics on one system, you really ought to gather statistics on the other.
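A sketch of gathering statistics consistently on both systems (the owner/table names are from the question; the options shown are typical choices, not requirements):

```sql
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'MIKE',
    tabname          => 'PART_RATE',
    cascade          => TRUE,  -- gather index statistics too
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
END;
/
```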
Justin
Distributed Database Consulting, Inc.
http://www.ddbcinc.com/askDDBC -
Morning SQL gurus,
Running queries on APEX takes very long compared to running the script directly on the database.
I tried setting the optimizer mode to all_rows and used driving_site and index hints; any other suggestions? The scripts on APEX and the DB make use of a db link :-/
Regards,
Pirate
Edited by: Pirate on Jun 22, 2011 11:04 AM

This is the forum for issues with the SQL Developer tool. You will get better answers in the APEX forum or the SQL and PL/SQL forum.
-
OBIEE -10.1.3.4.1 - high physical and logical query response
Hi All,
I am facing a performance issue in OBIEE 10g. My report takes 2 minutes to come up, but when I fire the physical query directly in the database the data comes back in 2 seconds.
Below are the details from the log file. Here I observed that the response time for the physical and logical query is 109 seconds, roughly 2 minutes. Please provide some helpful pointers.
+++Administrator:370000:370015:----2013/01/22 07:28:04
-------------------- Execution Node: <<2650466>>, Close Row Count = 3332, Row Width = 26000 bytes
+++Administrator:370000:370015:----2013/01/22 07:28:04
-------------------- Execution Node: <<2650466>> DbGateway Exchange, Close Row Count = 3332, Row Width = 26000 bytes
+++Administrator:370000:370015:----2013/01/22 07:28:04
-------------------- Execution Node: <<2650466>> DbGateway Exchange, Close Row Count = 3332, Row Width = 26000 bytes
+++Administrator:370000:370015:----2013/01/22 07:28:04
-------------------- Query Status: Successful Completion
+++Administrator:370000:370015:----2013/01/22 07:28:05
-------------------- Rows 3332, bytes 86632000 retrieved from database query id: <<2650466>>
+++Administrator:370000:370015:----2013/01/22 07:28:05
-------------------- Physical query response time 109 (seconds), id <<2650466>>
+++Administrator:370000:370015:----2013/01/22 07:28:05
-------------------- Physical Query Summary Stats: Number of physical queries 1, Cumulative time 109, DB-connect time 0 (seconds)
+++Administrator:370000:370015:----2013/01/22 07:28:05
-------------------- Rows returned to Client 3332
+++Administrator:370000:370015:----2013/01/22 07:28:05
-------------------- Logical Query Summary Stats: Elapsed time 109, Response time 109, Compilation time 0 (seconds)

Did you run the SQL from a client on the OBIEE server or on your local machine? Does the physical SQL run in 2 seconds against the DB on the OBIEE server, but take 109 seconds when sent by the OBIEE server? Is that correct?
-
Fact Dimension Attribute causing extreme slow query response
I am using SSAS OLAP in SQL Server 2012. My server has 16 GB of RAM and runs only the OLAP engine - nothing related to the relational engine. My fact table has a column called Product Status, for which I have a fact dimension. The fact table has 18M rows, but Product Status has only 2-3 distinct values. When my users bring that attribute into their analysis, the response time is very slow - 10 to 15 minutes. Can anyone share their experience with how I can go about optimizing this?
I tried the Usage-Based Optimization wizard, but it suggests no aggregations.
Thanks!
GBM

Hi GBM,
For a cube of moderate to large size, partitions can greatly improve query performance, load performance, and ease of cube maintenance. Have you defined partitions in your cube? Here is a good article regarding Analysis Services query performance, "Top 10 Best Practices", for your reference:
http://technet.microsoft.com/en-us/library/cc966527.aspx
To troubleshoot an MDX query performance issue, we can create a trace to capture some events for further investigation. Please see the articles below on how to troubleshoot SSAS MDX query performance issues:
http://social.msdn.microsoft.com/Forums/sqlserver/en-US/f1f57e7b-eced-4009-b635-3ebb1d7fa5b0/how-do-i-troubleshoot-the-slow-mdx-query-performance
http://www.mssqltips.com/sqlservertip/2886/improving-sql-server-analysis-services-query-response-time-with-msmdsrv/
Hope this helps.
Regards,
Elvis Long
TechNet Community Support -
Response Time of a query in 2 different enviroment
Hi guys, Luca speaking; sorry for the badly written English.
The question is:
The same query runs on the same table - same definition, same number of rows, defined on the same kind of tablespace, and the tables are analyzed:
*) I have a query in Benchmark with good results in execution time, the execution plan is really good
*) in Production the execution plan is not so good, the response time isn't comparable (hours vs seconds)
#### The Execution Plans are different ####
#### The stats are the same ####
This is the table storico.FLUSSO_ASTCM_INC A, with these stats in Benchmark:
Owner: STORICO
Name: FLUSSO_ASTCM_INC
Tablespace: TBS_DATA
NumRows: 2861719
Blocks: 32025
EmptyBlocks: 0
AvgSpace: 0
ChainCnt: 0
AvgRowLen: 74
UserStats: NO
GlobalStats: YES
LastAnalyzed: 10/01/2006 15.53.43
SampleSize: 2861719
Monitoring: NO
Status: Normal, Successful Completion: 10/01/2006 16.26.05
In Production the stats are the same.
The other table is an external table.
The only difference that I have noticed so far is the tablespace the table is defined on:
Production
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 512K
Benchmark
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
I'm studying it at the moment.
What do I have to check to obtain the same execution plan (without changing the query)?
This is the query:
SELECT
'test query',
sysdate,
storico.tc_scarti_seq.NEXTVAL,
NULL, --ROW_ID
-- A.AZIONE,
'I',
A.CODE_PREF_TCN,
A.CODE_NUM_TCN,
'ADSL non presente su CRM' ,
-- a.AZIONE
'I'
|| ';' || a.CODE_PREF_TCN
|| ';' || a.CODE_NUM_TCN
|| ';' || a.DATA_ATVZ_CMM
|| ';' || a.CODE_PREF_DSR
|| ';' || a.CODE_NUM_TFN
|| ';' || a.DATA_CSSZ_CMM
|| ';' || a.TIPO_EVENTO
|| ';' || a.INVARIANTE_FONIA
|| ';' || a.CODE_TIPO_ADSL
|| ';' || a.TIPO_RICHIESTA_ATTIVAZIONE
|| ';' || a.TIPO_RICHIESTA_CESSAZIONE
|| ';' || a.ROW_ID_ATTIVAZIONE
|| ';' || a.ROW_ID_CESSAZIONE
FROM storico.FLUSSO_ASTCM_INC A
WHERE NOT EXISTS (SELECT 1 FROM storico.EXT_CRM_X_ADSL B
WHERE A.CODE_PREF_DSR = B.CODE_PREF_DSR
AND A.CODE_NUM_TFN = B.CODE_NUM_TFN
AND A.INVARIANTE_FONIA = B.INVARIANTE_FONIA
AND B.NOME_SERVIZIO NOT IN ('ADSL SMART AGGREGATORE','ADSL SMART TWIN','ALICE IMPRESA TWIN',
'SERVIZIO ADSL PER VIDEOLOTTERY','WI - FI') )
Result of set autotrace traceonly explain in Production (ESERCIZIO):
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=144985 Card=143086 B
1 0 SEQUENCE OF 'TC_SCARTI_SEQ'
2 1 FILTER
3 2 TABLE ACCESS (FULL) OF 'FLUSSO_ASTCM_INC' (Cost=1899 C
4 2 EXTERNAL TABLE ACCESS* (FULL) OF 'EXT_CRM_X_ADSL' (Cos :Q370300
4 PARALLEL_TO_SERIAL SELECT /*+ NO_EXPAND FULL(A1) */ A1."CODE_PR
Result of set autotrace traceonly explain in Benchmark:
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=3084 Card=2861719 By
tes=291895338)
1 0 SEQUENCE OF 'TC_SCARTI_SEQ'
2 1 HASH JOIN* (ANTI) (Cost=3084 Card=2861719 Bytes=29189533 :Q810002
8)
3 2 TABLE ACCESS* (FULL) OF 'FLUSSO_ASTCM_INC' (Cost=3082 :Q810000
Card=2861719 Bytes=183150016)
4 2 EXTERNAL TABLE ACCESS* (FULL) OF 'EXT_CRM_X_ADSL' (Cos :Q810001
t=2 Card=1 Bytes=38)
2 PARALLEL_TO_SERIAL SELECT /*+ ORDERED NO_EXPAND USE_HASH(A2) US
E_ANTI(A2) */ A1.C0,A1.C1,A1.C2,A1.C
3 PARALLEL_FROM_SERIAL
4 PARALLEL_TO_PARALLEL SELECT /*+ NO_EXPAND FULL(A1) */ A1."CODE_PR
EF_DSR" C0,A1."CODE_NUM_TFN" C1,A1."
The differences in the init.ora are in these parameters. Could they influence the optimizer, and is that why the execution plans are so different?
background_dump_dest
cpu_count
db_file_multiblock_read_count
db_files
db_32k_cache_size
dml_locks
enqueue_resources
event
fast_start_mttr_target
fast_start_parallel_rollback
hash_area_size
log_buffer
log_parallelism
max_rollback_segments
open_cursors
open_links
parallel_execution_message_size
parallel_max_servers
processes
query_rewrite_enabled
remote_login_passwordfile
session_cached_cursors
sessions
sga_max_size
shared_pool_reserved_size
sort_area_retained_size
sort_area_size
star_transformation_enabled
transactions
undo_retention
user_dump_dest
utl_file_dir
Please help me.
Thanks a lot, Luca.

Hi Luca,
Are the test and production systems nearly identical (same OS, same HW platform, same software version, same release)?
You're using external tables. Is the speed of these drives identical?
Have you analyzed the schema with the same statement? Could you send me the statement?
Do you have system statistics?
Have you tested the statement in an environment which is nearly like production (concurrent users etc.)?
Could you send me the top 5 wait events from the Statspack report?
Are the data in production and test identical? No data changed, no index dropped, no additional index? Are all tables and indexes analyzed?
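For the system statistics question, a sketch of the workload-capture API (the length of the capture window is up to you):

```sql
BEGIN
  DBMS_STATS.GATHER_SYSTEM_STATS(gathering_mode => 'START');
END;
/
-- ...let a representative workload run, then:
BEGIN
  DBMS_STATS.GATHER_SYSTEM_STATS(gathering_mode => 'STOP');
END;
/
```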
Regards
Marc -
Query Tuning - Response time Statistics collection
Our application is load tested for a period of 1 hour at peak load.
During this specific period, thousands of queries get executed in the database.
What we need is, for one particular query, say "select XYZ from ABC", within this span of 1 hour, statistics like:
Number of times Executed
Average Response time
Maximum response time
minimum response time
90th percentile response time (sorted in ascending order, the value at the 90th percentile)
All these statistics are possible if I can get all the response times for that particular query over that 1-hour period.
I tried using SQL trace and TKPROF but was unable to get all these statistics.
The application uses connection pooling, so connections are taken as and when needed.
Any thoughts on this?
Appreciate your help.

I don't think v$sqlarea can help me with the exact stats I need, but it certainly has a lot of other stats. By the way, there is no dictionary view called v$sqlstats.
There are other applications sharing the same database I am trying to capture stats for, so flushing the cache, which currently has 30K rows, is not a feasible solution.
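One approach that survives connection pooling, if the database is 10g or later (a sketch; the service and module names below are placeholders), is service-level tracing with DBMS_MONITOR, combining the resulting trace files with trcsess before running TKPROF:

```sql
-- Enable tracing for everything run under a given service/module:
BEGIN
  DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE(
    service_name => 'MYAPP_SVC',
    module_name  => 'MYMODULE',
    waits        => TRUE,
    binds        => FALSE);
END;
/
-- Later, combine the per-connection trace files and analyze:
--   trcsess output=combined.trc service=MYAPP_SVC *.trc
--   tkprof combined.trc report.txt
```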
Any more thoughts on this? -
Significant difference in response times for same query running on Windows client vs database server
I have a query which is taking a long time to return the results using the Oracle client.
When I run this query on our database server (Unix/Solaris) it completes in 80 seconds.
When I run the same query on a Windows client it completes in 47 minutes.
Ideally I would like to get a response time equivalent on the Windows client to what I get when running this on the database server.
In both cases the query plans are the same.
The query and plan is shown below :
{code}
SQL> explain plan
2 set statement_id = 'SLOW'
3 for
4 SELECT DISTINCT /*+ FIRST_ROWS(503) */ objecttype.id_object
5 FROM documents objecttype WHERE objecttype.id_type_definition = 'duotA9'
6 ;
Explained.
SQL> select * from table(dbms_xplan.display('PLAN_TABLE','SLOW','TYPICAL'));
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)|
| 0 | SELECT STATEMENT | | 2852K| 46M| | 69851 (1)|
| 1 | HASH UNIQUE | | 2852K| 46M| 153M| 69851 (1)|
|* 2 | TABLE ACCESS FULL| DOCUMENTS | 2852K| 46M| | 54063 (1)|
{code}
Are there any configuration changes that can be done on the Oracle client or database to improve the response time for the query when it is run from the client?
The version on the database server is 10.2.0.1.0
The version of the oracle client is also 10.2.0.1.0
I am happy to provide any further information if required.
Thank you in advance.

I have a query which is taking a long time to return the results using the Oracle client.
When I run this query on our database server (Unix/Solaris) it completes in 80 seconds.
When I run the same query on a Windows client it completes in 47 minutes.
There are NO queries that 'run' on a client. Queries ALWAYS run within the database server.
A client can choose when to FETCH query results. In SQL Developer (or Toad) I can choose to get 10 rows at a time. Until I choose to get the next set of 10 rows, NO rows will be returned from the server to the client; that query might NEVER complete.
You may get the same results depending on the client you are using. Post your question in a forum for whatever client you are using.
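If the client in question is SQL*Plus, one simple experiment is to enlarge the fetch array size so the ~2.8M rows need far fewer round trips (500 is an arbitrary value, not a recommendation):

```sql
SET ARRAYSIZE 500
SET TERMOUT OFF   -- suppresses row display when run from a script
SET TIMING ON
SELECT DISTINCT /*+ FIRST_ROWS(503) */ objecttype.id_object
  FROM documents objecttype
 WHERE objecttype.id_type_definition = 'duotA9';
```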