Select statement is slow
Hi Folks,
I thought maybe you could help me here -
Oracle version is 9.2.0.7
I am running a select statement against a table. There are no filter conditions
The statement is
SELECT * from Table1
This statement takes 500 sec to execute before I see any data. The table has around 1 million records.
There is no VPD on the table; there are no locks or latches on the table when the query is executed.
Other issues with the table are:
1. SQL*Loader takes 2-3 hours to load data
2. A simple delete of 1 record takes 1 hour (there are no constraints on this table).
I monitored the WAIT events: I see db file scattered read, and a lot of time is also spent on db file sequential read.
This happens in production. The server has 8 CPUs and I am the only user logged in.
These issues are not observed in UAT environment.
In production, workarea_size_policy is set to AUTO and db_cache_advice is ON.
The same parameters are set to MANUAL and READY, respectively, in UAT.
Any suggestions on why getting the first record using a straight SELECT statement would take 500 sec?
Thanks
Justin, here are the trace details:
The wait is on db file scattered read: 700+ sec.
The SELECT statement executed is
SELECT * from tmpfeedsettlement where rownum < 10
Used a 10046 event trace at level 12.
TKPROF: Release 9.2.0.1.0 - Production on Thu May 24 12:44:19 2007
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Trace file: v:\shakti\o01scb3_ora_14197.trc
Sort options: default
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
alter session set sql_trace=true
call     count     cpu  elapsed     disk    query  current  rows
-------  -----  ------  -------  -------  -------  -------  ----
Parse        0    0.00     0.00        0        0        0     0
Execute      1    0.00     0.00        0        0        0     0
Fetch        0    0.00     0.00        0        0        0     0
total        1    0.00     0.00        0        0        0     0
Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer goal: CHOOSE
Parsing user id: 296 (P468707)
alter session set events '10046 trace name context forever,level 12'
call     count     cpu  elapsed     disk    query  current  rows
-------  -----  ------  -------  -------  -------  -------  ----
Parse        1    0.00     0.00        0        0        0     0
Execute      1    0.00     0.00        0        0        0     0
Fetch        0    0.00     0.00        0        0        0     0
total        2    0.00     0.00        0        0        0     0
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 296 (P468707)
Elapsed times include waiting on following events:
Event waited on                          Times Waited  Max. Wait  Total Waited
---------------------------------------  ------------  ---------  ------------
SQL*Net message to client                           1       0.00          0.00
SQL*Net message from client                         1      22.45         22.45
select *
from
tmpfeedsettlement where rownum < 10
call     count     cpu  elapsed     disk    query  current  rows
-------  -----  ------  -------  -------  -------  -------  ----
Parse        1    0.01     0.00        0        0        0     0
Execute      1    0.00     0.00        0        0        0     0
Fetch        2  102.42   765.63   977140   977202        0     9
total        4  102.43   765.63   977140   977202        0     9
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 296 (P468707)
Rows Row Source Operation
9 COUNT STOPKEY
9 TABLE ACCESS FULL TMPFEEDSETTLEMENT
Rows Execution Plan
0 SELECT STATEMENT GOAL: CHOOSE
9 COUNT (STOPKEY)
9 TABLE ACCESS GOAL: ANALYZED (FULL) OF 'TMPFEEDSETTLEMENT'
Elapsed times include waiting on following events:
Event waited on                          Times Waited  Max. Wait  Total Waited
---------------------------------------  ------------  ---------  ------------
SQL*Net message to client                           2       0.00          0.00
SQL*Net more data to client                         2       0.00          0.00
db file scattered read                          61181       5.62        719.27
db file sequential read                             7       0.00          0.00
SQL*Net message from client                         2     336.11        336.12
alter session set sql_trace=false
call     count     cpu  elapsed     disk    query  current  rows
-------  -----  ------  -------  -------  -------  -------  ----
Parse        1    0.00     0.00        0        0        0     0
Execute      1    0.00     0.00        0        0        0     0
Fetch        0    0.00     0.00        0        0        0     0
total        2    0.00     0.00        0        0        0     0
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 296 (P468707)
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call     count     cpu  elapsed     disk    query  current  rows
-------  -----  ------  -------  -------  -------  -------  ----
Parse        3    0.01     0.00        0        0        0     0
Execute      4    0.00     0.00        0        0        0     0
Fetch        2  102.42   765.63   977140   977202        0     9
total        9  102.43   765.64   977140   977202        0     9
Misses in library cache during parse: 3
Misses in library cache during execute: 1
Elapsed times include waiting on following events:
Event waited on                          Times Waited  Max. Wait  Total Waited
---------------------------------------  ------------  ---------  ------------
SQL*Net message to client                           3       0.00          0.00
SQL*Net message from client                         3     336.11        358.57
SQL*Net more data to client                         2       0.00          0.00
db file scattered read                          61181       5.62        719.27
db file sequential read                             7       0.00          0.00
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call     count     cpu  elapsed     disk    query  current  rows
-------  -----  ------  -------  -------  -------  -------  ----
Parse        0    0.00     0.00        0        0        0     0
Execute      0    0.00     0.00        0        0        0     0
Fetch        0    0.00     0.00        0        0        0     0
total        0    0.00     0.00        0        0        0     0
Misses in library cache during parse: 0
4 user SQL statements in session.
0 internal SQL statements in session.
4 SQL statements in session.
1 statement EXPLAINed in this session.
Trace file: v:\shakti\o01scb3_ora_14197.trc
Trace file compatibility: 9.00.01
Sort options: default
1 session in tracefile.
4 user SQL statements in trace file.
0 internal SQL statements in trace file.
4 SQL statements in trace file.
4 unique SQL statements in trace file.
1 SQL statements EXPLAINed using schema:
P468707.prof$plan_table
Default table was used.
Table was created.
Table was dropped.
61246 lines in trace file.
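The fetch statistics above (977,140 physical block reads to return 9 rows under COUNT STOPKEY) are consistent with a table whose high water mark sits far above its live data, which would also explain the slow loads and the one-hour single-row delete. A hedged sketch of how one might confirm and fix this on 9i; it assumes statistics are current and that the segment rebuild can be done in a maintenance window:

```sql
-- Compare the blocks below the high water mark with the live row count.
-- Assumes statistics were gathered recently with DBMS_STATS.
SELECT num_rows, blocks, empty_blocks, avg_row_len
FROM   user_tables
WHERE  table_name = 'TMPFEEDSETTLEMENT';

-- If BLOCKS is vastly larger than the data needs, lower the high water mark:
-- either truncate and reload, or rebuild the segment in place.
-- TRUNCATE TABLE tmpfeedsettlement;
-- ALTER TABLE tmpfeedsettlement MOVE;  -- then rebuild any indexes, which go UNUSABLE
```

A full scan must read every block up to the high water mark regardless of how few rows survive, which is why even `rownum < 10` pays the full price.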
Similar Messages
-
Slow query results for simple select statement on Exadata
I have a table with 30+ million rows in it which I'm trying to develop a cube around. When the cube processes (SQL Analysis), it queries back 10k rows every 6 seconds or so. I ran the same query SQL Analysis runs to grab the data in Toad and exported the results, and the timing is the same: 10k rows every 6 seconds or so.
I ran an execution plan; it returns just this:

Plan
SELECT STATEMENT ALL_ROWS  Cost: 136,019  Bytes: 4,954,594,096  Cardinality: 33,935,576
  1 TABLE ACCESS STORAGE FULL TABLE DMSN.DS3R_FH_1XRTT_FA_LVL_KPI  Cost: 136,019  Bytes: 4,954,594,096  Cardinality: 33,935,576

I'm not sure if there is a setting in Oracle (I'm new to the Oracle environment) which can limit performance by connection or user, but if there is, what should I look for and how can I check it?
The Oracle version I'm using is 11.2.0.3.0 and the server is quite large as well (Exadata platform). I'm curious because I've seen SQL Server return 100k rows every 10 seconds before; I would assume an Exadata system should return rows a lot quicker. How can I check where the bottleneck is?
Edited by: k1ng87 on Apr 24, 2013 7:58 AM

k1ng87 wrote:
> I've noticed the same querying speed using Toad (export to CSV).
That's not really a good way to test performance. Doing that through Toad, you are getting the database to read the data from its disks (you don't have a choice in that), shifting bulk amounts of data over your network (that could be a considerable bottleneck), then letting Toad format the data into CSV (processing that adds a little bottleneck) and write it to another hard disk (more disk I/O = more bottleneck).
I don't know Exadata, but I imagine it doesn't quite incorporate all those bottlenecks.
> ...and during cube processing via SQL Analysis. How can I check to see if it's my network speed that's affecting it?
Speak to your technical/networking team, who should be able to trace network activity/packets and see what's happening in that respect.
> Is that even possible, as our system resides off site, so the traffic is going through multiple networks?
Ouch... yes, that could certainly be responsible.
> I don't think it's the network though, because when I run both at the same time, they both still query at about 10k rows every 6 seconds.
I don't think your performance measuring is accurate. What happens if you actually do the cube in Exadata rather than using Toad or SQL Analysis (which I assume is on your client machine?) -
Hi,
We are running a query with a big dynamic select statement from VB code using an ADO Command object. When the Execute method is called, the system hangs and control won't return to the application. It seems there is some limitation on query string length. Please tell us if there is any.
We are running Oracle 8.1.7 Server on Windows 2000 Server and connecting from a W2K Professional machine, ADO 2.6 and the Oracle OLEDB 8.1.7.1 OLEDB driver.
Sample code:
Dim rs As ADODB.Recordset
Dim cmd As ADODB.Command
Set cmd = New Command
With cmd
.CommandText = "..."  ' some text with more than 2500 characters
.CommandType = adCmdText
Set rs = .Execute
End With
When I debug using VB6, the system hangs at the .Execute line or returns a "method <somemethod> of <some class name> failed" error.
Any help is appreciated.
Thanks,
Anil

A stored procedure would only slow you down here if it was poorly written. I suspect you want to use the TRANSLATE function. I'm cutting and pasting examples from the documentation; a search at tahiti.oracle.com will give you all the info you'll need.
Examples
The following statement translates a license number. All letters 'ABC...Z' are translated to 'X' and all digits '012 . . . 9' are translated to '9':
SELECT TRANSLATE('2KRW229',
'0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ',
'9999999999XXXXXXXXXXXXXXXXXXXXXXXXXX') "License"
FROM DUAL;
License
9XXX999
The following statement returns a license number with the characters removed and the digits remaining:
SELECT TRANSLATE('2KRW229',
'0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ', '0123456789')
"Translate example"
FROM DUAL;
Translate example
2229
Also, LIKE '%<string>%' is going to be rather expensive simply because it has to compare the entire string and because it forces full table scans, rather than using indexes. You could speed this sort of query up by using interMedia Text (Oracle Text now in 9i). If you can eliminate one of the '%' options, you could also improve things.
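A minimal sketch of the interMedia/Oracle Text approach mentioned above, assuming the Text option (CTXSYS) is installed; the table and column names here are illustrative, not from the original question:

```sql
-- Build a Text index once, then query with CONTAINS instead of LIKE '%...%'.
CREATE INDEX doc_text_idx ON documents (body)
  INDEXTYPE IS CTXSYS.CONTEXT;

SELECT id
FROM   documents
WHERE  CONTAINS(body, 'license') > 0;  -- served by the Text index, no full scan
```

Note that a CONTEXT index is not maintained transactionally by default; it needs periodic synchronization (CTX_DDL.SYNC_INDEX), so it suits mostly-read data.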
My guess is that your stored procedure is inefficient and that's causing the problem-- 5k rows per table should be pretty trivial.
If you post your query over on the PL/SQL forum, there are better performance tuners than I who might have more hints for you. To get really good advice, though, you'll likely have to get at least the execution plan for this statement and may need to do some profiling to identify the problem areas.
Justin -
Performance Issue in select statements
The following statements take too much time to execute. Is there a better way to write these select statements? int_report_data is my final internal table.
select f~splant f~vplant f~rplant f~l1_sto p~l1_delivery p~l1_gr p~l2_sto p~l2_delivery p~err_msg
  into (dochdr-swerks, dochdr-vwerks, dochdr-rwerks, dochdr-l1sto, docitem-l1xblnr, docitem-l1gr, docitem-l2sto, docitem-l2xblnr, docitem-err_msg)
  from zdochdr as f inner join zdocitem as p on f~l1_sto = p~l1_sto
  where f~splant in s_werks
    and f~vplant in v_werks
    and f~rplant in r_werks
    and p~l1_delivery in l1_xblnr
    and p~l1_gr in l1_gr
    and p~l2_delivery in l2_xblnr.
move : dochdr-swerks to int_report_data-i_swerks,
dochdr-vwerks to int_report_data-i_vwerks,
dochdr-rwerks to int_report_data-i_rwerks,
dochdr-l1sto to int_report_data-i_l1sto,
docitem-l1xblnr to int_report_data-i_l1xblnr,
docitem-l1gr to int_report_data-i_l1gr,
docitem-l2sto to int_report_data-i_l2sto,
docitem-l2xblnr to int_report_data-i_l2xblnr,
docitem-err_msg to int_report_data-i_errmsg.
append int_report_data.
endselect.
Goods receipt
loop at int_report_data.
select single ebeln from ekbe into l2gr where ebeln = int_report_data-i_l2sto and bwart = '101' and bewtp = 'E' and vgabe = '1'.
if sy-subrc eq 0.
move l2gr to int_report_data-i_l2gr.
modify int_report_data.
endif.
endloop.
First billing document (I have to check fkart = 'ZRTY' for the second billing document.. how can I write that statement?)
select vbeln from vbfa into (tabvbfa-vbeln) where vbelv = int_report_data-i_l2xblnr or vbelv = int_report_data-i_l1xblnr.
select single vbeln from vbrk into tabvbrk-vbeln where vbeln = tabvbfa-vbeln and fkart = 'IV'.
if sy-subrc eq 0.
move tabvbrk-vbeln to int_report_data-i_l2vbeln.
modify int_report_data.
endif.
endselect.
Thanks in advance,
Yad

Hi!
Which of your selects is slow? Make a SQL-trace, check which select(s) is(are) slow.
For EKBE and VBFA you are selecting first key field - in general that is fast. If your z-tables are the problem, maybe an index might help.
Instead of looping and making a lot of select singles, one select 'for all entries' can help, too.
Please analyze further and give feedback.
Regards,
Christian -
Problem with Select Statements
Hi All,
I have a performance problem for my report because of the following statements.
How can i modify the select statements for improving the performance of the report.
DATA : shkzg1h LIKE bsad-shkzg,
shkzg1s LIKE bsad-shkzg,
shkzg2h LIKE bsad-shkzg,
shkzg2s LIKE bsad-shkzg,
shkzg1hu LIKE bsad-shkzg,
shkzg1su LIKE bsad-shkzg,
shkzg2hu LIKE bsad-shkzg,
shkzg2su LIKE bsad-shkzg,
kopbal1s LIKE bsad-dmbtr,
kopbal2s LIKE bsad-dmbtr,
kopbal1h LIKE bsad-dmbtr,
kopbal2h LIKE bsad-dmbtr,
kopbal1su LIKE bsad-dmbtr,
kopbal2su LIKE bsad-dmbtr,
kopbal1hu LIKE bsad-dmbtr,
kopbal2hu LIKE bsad-dmbtr.
*These statements are in LOOP.
SELECT shkzg SUM( dmbtr )
INTO (shkzg1s , kopbal1s)
FROM bsid
WHERE bukrs = ibukrs
AND kunnr = ktab-kunnr
AND budat < idate-low
AND shkzg = 'S'
AND umskz EQ ''
GROUP BY shkzg.
ENDSELECT.
SELECT shkzg SUM( dmbtr )
INTO (shkzg1su , kopbal1su)
FROM bsid
WHERE bukrs = ibukrs
AND kunnr = ktab-kunnr
AND budat < idate-low
AND shkzg = 'S'
AND umskz IN zspgl
GROUP BY shkzg.
ENDSELECT.
SELECT shkzg SUM( dmbtr )
INTO (shkzg1h , kopbal1h)
FROM bsid
WHERE bukrs = ibukrs
AND kunnr = ktab-kunnr
AND budat < idate-low
AND shkzg = 'H'
AND umskz EQ ''
GROUP BY shkzg.
ENDSELECT.
SELECT shkzg SUM( dmbtr )
INTO (shkzg1hu , kopbal1hu)
FROM bsid
WHERE bukrs = ibukrs
AND kunnr = ktab-kunnr
AND budat < idate-low
AND shkzg = 'H'
AND umskz IN zspgl
GROUP BY shkzg.
ENDSELECT.
SELECT shkzg SUM( dmbtr )
INTO (shkzg2s , kopbal2s)
FROM bsad
WHERE bukrs = ibukrs
AND kunnr = ktab-kunnr
AND budat < idate-low
AND shkzg = 'S'
AND umskz EQ ''
GROUP BY shkzg.
ENDSELECT.
SELECT shkzg SUM( dmbtr )
INTO (shkzg2su , kopbal2su)
FROM bsad
WHERE bukrs = ibukrs
AND kunnr = ktab-kunnr
AND budat < idate-low
AND shkzg = 'S'
AND umskz IN zspgl
GROUP BY shkzg.
ENDSELECT.
SELECT shkzg SUM( dmbtr )
INTO (shkzg2h , kopbal2h)
FROM bsad
WHERE bukrs = ibukrs
AND kunnr = ktab-kunnr
AND budat < idate-low
AND shkzg = 'H'
AND umskz EQ ''
GROUP BY shkzg.
ENDSELECT.
SELECT shkzg SUM( dmbtr )
INTO (shkzg2hu , kopbal2hu)
FROM bsad
WHERE bukrs = ibukrs
AND kunnr = ktab-kunnr
AND budat < idate-low
AND shkzg = 'H'
AND umskz IN zspgl
GROUP BY shkzg.
ENDSELECT.
Siegfried Boes wrote:
> Please stop writing answers if you understand nothing about database SELECTs!
> All above recommendations are pure nonsense!
>
> As always with such questions, you must do an analysis before you ask! The coding itself is perfectly o.k., a SELECT with an aggregate and a GROUP BY can not be changed into a SELECT SINGLE or whatever.
>
> But your SELECTs must be supported by indexes!
>
> Please run SQL Trace, and tell us the results:
>
> I see 8 statements, what is the duration and the number of records coming back for each statement?
> Maybe only one statement is slow.
>
> See
> SQL trace:
> /people/siegfried.boes/blog/2007/09/05/the-sql-trace-st05-150-quick-and-easy
>
>
> Siegfried
Nice point there, Siegfried. Instead of giving constructive suggestions, people here gave very bad advice about combining SELECT SINGLE with SUM and GROUP BY.
I hope the poster saw your reply before trying SELECT SINGLE and wondering why it raises an error.
Anyway, the most important thing is: how many loop iterations are expected around those select statements?
If you have thousands of iterations, you can expect poor performance.
So when doing the SQL trace, you should also look at how many times each select statement is called, not only at the performance of each individual statement.
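For what it's worth, on the database side the eight aggregates per customer collapse to one statement per table if the UMSKZ condition becomes a grouping expression. A hedged plain-SQL sketch (not ABAP), assuming the special-G/L list zspgl covers everything that is not blank; if it does not, keep the IN list in the CASE instead:

```sql
-- One pass over BSID returns all four BSID sums at once:
-- debit/credit (SHKZG) crossed with normal/special G/L (UMSKZ).
SELECT shkzg,
       CASE WHEN umskz = ' ' THEN 'NORMAL' ELSE 'SPECIAL' END AS umskz_class,
       SUM(dmbtr) AS total
FROM   bsid
WHERE  bukrs = :ibukrs
  AND  kunnr = :kunnr
  AND  budat < :date_low
GROUP BY shkzg,
         CASE WHEN umskz = ' ' THEN 'NORMAL' ELSE 'SPECIAL' END;
```

The same shape applied to BSAD halves the round trips again, on top of whatever an index on (bukrs, kunnr, budat) buys.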
Regards,
Abraham -
Multiple select statements in PL/SQL
Hi All
I am new to PL/SQL and my experience is in writing T-SQL. There we can write SQL like this to return 3 result sets:
SELECT empname FROM Employee
SELECT authname FROM Author
SELECT athname FROM sport
how can we write the same 3 statements in PL/SQL and attain the 3 resultsets.
I tried to implement the same using a PL/SQL anonymous block, but it didn't work.
DECLARE
P_RECORDSET OUT SYS_REFCURSOR
BEGIN
OPEN P_RECORDSET FOR
SELECT empname FROM Employee;
SELECT authname FROM Author;
SELECT athname FROM sport;
END;
can anybody show how it can be done.
Thanks in advance
George
Edited by: user6290570 on Sep 16, 2009 11:23 PM

george2009 wrote:
> No, I just want to select 3 result sets from 3 select statements, so that it is helpful to compare the result sets.
Compare? How? This is done using the SQL language. Not PL/SQL. Not Java. Not VB. Not anything else.
You would use these other language for flow control and certain forms of conditional logic - but the actual comparison of data sets is done in SQL.
Of course, that is if you do want to do it the most optimal way, that will perform well, and scale well.
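That said, if the client genuinely needs three separate result sets, the standard PL/SQL shape is one OUT ref cursor per query. A hedged sketch using the table and column names from the question (the anonymous block above fails because a bare SELECT in PL/SQL needs an INTO clause or a cursor, and OUT is only legal on subprogram parameters):

```sql
CREATE OR REPLACE PROCEDURE get_three_sets (
  p_emps    OUT SYS_REFCURSOR,
  p_authors OUT SYS_REFCURSOR,
  p_sports  OUT SYS_REFCURSOR
)
AS
BEGIN
  -- Each cursor is opened, not fetched; the caller consumes all three.
  OPEN p_emps    FOR SELECT empname  FROM Employee;
  OPEN p_authors FOR SELECT authname FROM Author;
  OPEN p_sports  FOR SELECT athname  FROM sport;
END get_three_sets;
/
```
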
SQL is not an I/O API layer - to be used to read() a record and write() a record as if the RDBMS is an ISAM file. That form of row-by-row and slow-by-slow processing dates back to the 80's when we used Cobol.. (or at least for those old farts like me that can actually remember coding in Cobol in the 80's ;-) ).
You want to design and code database applications that are fast, robust, and can scale? Then learn how to use SQL correctly. -
Auto retry of select statements on failure
I have an IBM Message Broker message flow that accesses the database to fetch some data. The following are the steps in the message flow (a similar pattern exists in other flows as well).
1) Parse the input message
2) Invoke a ESQL compute node that accesses the database. This uses DataDirect ODBC drivers to access the database.
3) process the data
4) Invoke an external Java class that also accesses the database. This Java class uses Spring/Hibernate and uses the Oracle UCP library.
Steps 2 and 4 access an Oracle database on which failover features are NOT enabled. Following is our observation.
If the database fails when executing step 2, then the message flow pauses until a valid connection is available and then proceeds with the execution, the point to note is that the message flow does not experience a failure. It simply pauses until it gets a connection and continues once it gets a connection.
If the database fails when executing step 4, the message flow gets an error.
What we want is for step 2 and 4 to execute the same way, meaning that we want the message flows to wait until a valid connection is available and then continue without any errors.
I feel that there is some feature in the DataDirect driver that causes step 2 to pause the message flow and prevents an error. We want the same behaviour in step 4 as well.
So, is there some way (via configuration or any other means) to get this behaviour using oracle UCP library.
One thing to note is that we are not in position to change the Java code since it has been developed by a third party.
To achieve this I have written a test and the following are details.
For this I have created a service with the following properties. Point to note is that we run the service on only one instance at a time, if it goes down then it is started on the second instance.
Service name: LDL_TEST02
Service is enabled
Server pool: CSAHEDA_LDL_TEST02
Cardinality: 1
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: true
Failover type: SELECT
Failover method: NONE
TAF failover retries: 180
TAF failover delay: 5
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: BASIC
Edition:
Preferred instances: CSAHEDA1
Available instances: CSAHEDA2
Following is in my tnsnames.ora file and I am using the oracle oci driver.
TESTA =
(DESCRIPTION =
(ENABLE = BROKEN)
(LOAD_BALANCE = off)
(FAILOVER = on)
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = vip.host1)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = vip.host2)(PORT = 1521)))
(CONNECT_DATA =
(SERVICE_NAME = LDL_TEST02)
(FAILOVER_MODE =
(BACKUP = TESTA2)
(TYPE = SELECT)
(METHOD = PRECONNECT)
(RETRIES = 120)
(DELAY = 5))))
This is how I run my test.
1)I have a simple test that simply loops through a list of select statements.
2)each time it takes a connection from the connection pool, executes the statement and returns the connection to the connection pool.
3)I keep this running for around 10 minutes.
4) Once the test has started, I bring down the service, wait for 3 minutes, and then start the service on the second instance.
5) what I expect is for the test to pause for 3 minutes and then get the correct value from executing the select statement. and continue without pause.
Following is the config for the connection pool.
<bean id="dd_Datasource" class="oracle.ucp.jdbc.PoolDataSourceFactory" factory-method="getPoolDataSource">
<property name="connectionFactoryClassName" value="oracle.jdbc.xa.client.OracleXADataSource"/>
<!-- property name="connectionFactoryClassName" value="oracle.jdbc.pool.OracleDataSource"/ -->
<!-- <property name="connectionFactoryClassName" value="sun.jdbc.odbc.ee.DataSource"/-->
<!-- <property name="URL" value="jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=vip.host1)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=vip.host2)(PORT=1521))(LOAD_BALANCE=yes)(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=LDL_TEST02)))" /> -->
<!-- <property name="URL" value="jdbc:oracle:thin:@(DESCRIPTION=(ENABLE=BROKEN)(ADDRESS=(PROTOCOL=TCP)(HOST=vip.host1)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=vip.host2)(PORT=1521))(LOAD_BALANCE=yes)(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=LDL_TEST02)))" />-->
<!-- property name="URL" value="jdbc:oracle:thin:@(DESCRIPTION=(LOAD_BALANCE=on)(FAILOVER=on)(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=vip.host1) (PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=vip.host2) (PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=LDL_TEST02)(FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 180)(DELAY = 5))))" /-->
<!-- <property name="URL" value="jdbc:oracle:oci:@(DESCRIPTION=(ENABLE=BROKEN)(ADDRESS_LIST=(LOAD_BALANCE=on)(FAILOVER=on)(ADDRESS=(PROTOCOL=TCP)(HOST=vip.host1) (PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=vip.host2) (PORT=1521)))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=LDL_TEST01)(FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 180)(DELAY = 5))))" /> -->
<property name="URL" value="jdbc:oracle:oci:/@TESTA" />
<!-- JDBC ODBC Bridge Driver -->
<!-- property name="URL" value="jdbc:odbc:LT_Test_DB"/-->
<!-- DataDirect URLs -->
<!-- property name="URL" value="datadirect:oracle://vip.host1:1521;SID=LDL_TEST01"/-->
<property name="user" value="user" />
<property name="password" value="user" />
<property name="connectionPoolName" value="dd_connectionpool" />
<property name="minPoolSize" value="2" />
<property name="maxPoolSize" value="4" />
<property name="initialPoolSize" value="2" />
<property name="connectionWaitTimeout" value="10000" />
<!-- Note:
The setSQLForValidateConnection property is not recommended when using an Oracle JDBC driver.
UCP for JDBC performs an internal ping when using an Oracle JDBC driver.
The mechanism is faster than executing an SQL statement and is overridden if this property is set.
Instead, set the setValidateConnectionOnBorrow property to true and do not
include the setSQLForValidateConnection property. -->
<property name="validateConnectionOnBorrow" value="true"/>
<!-- FCF stuff -->
<!-- property name="connectionCachingEnabled" value="true"/-->
<!-- property name="fastConnectionFailoverEnabled" value="true"/-->
<!-- <property name="ONSConfiguration" value="nodes=vip.host1:1521,vip.host2:1521"/>-->
</bean>
As you can see I have tried many combinations.
Sometimes I see the thread pause for 3 minutes without error and then continue.
Sometimes I start seeing 'oracle.ucp.UniversalConnectionPoolException: Cannot get Connection from Datasource' as soon as the service goes down.
Sometimes that first error only appears after 30 seconds (it is always the first one seen).
Can you guys shed any light on this? And how can I get the desired behaviour?
Edited by: user12181209 on 30-May-2012 02:22
reworded.
Edited by: user12181209 on May 30, 2012 6:28 AM
reworded the title for clarity.
Edited by: user12181209 on Jun 1, 2012 6:01 AM

Hi,
Yesterday, a similar question about a slow-running view was posted:
View is tooo slow....
You can read the different replies there; they may help you.
Nicolas. -
How to increment variable value in single select statement
Hi guys
In this select statement I have hard-coded a date value, but I need to use a variable instead and then increment that variable's value up to SYSDATE. I have tried using cursors and type tables, but they are very, very slow. Can any experienced folks give me a good hint on what I should use?
my query
select
start_dt,
end_dt,
hi_start_dt,
hi_end_dt,
ph_start_dt,
ph_end_dt,
h_start_date,
h_end_date,
g_code,
emp_det.ref,
u_code,
costing,
emp_nm,
emp_no
from
emp_det,
emp_ph_det,
emp_hi_det,
emp_h_det
where
emp_det.ref(+) = emp_ph_det.ref
and emp_hi_det.p_ref(+) = emp_ph_det.p_ref
and emp_h_det.ph_ref = emp_ph_det.ph_ref
and emp_h_det.ph_st_dt(+) = emp_hi_det.st_date
and to_date('01-MAR-2008') between i.start_dt and nvl(i.end_dt, to_date('01-MAR-2008') +1)
and to_date('01-MAR-2008') between i.hi_start_dt and nvl(i.hi_end_dt, to_date('01-MAR-2008') + 1)
and to_date('01-MAR-2008') between i.ph_start_dt and nvl(i.ph_end_dt, to_date('01-MAR-2008') + 1)
and to_date('01-MAR-2008') between i.h_start_date and nvl(i.h_end_date, to_date('01-MAR-2008') + 1)
or
(----emp has left this month
i.start_dt < i.emp_end_dt
and i.end_dt between add_months(to_date('01-MAR-2008'), -1) + 1 and to_date('01-MAR-2008')
and i.hi_start_dt < i.hi_end_dt
and i.hi between add_months(to_date('01-MAR-2008'), -1) + 1 and to_date('01-MAR-2008')
and i.ph_start_dt < i.ph_end_dt
and i.ph_end_dt between add_months(to_date('01-MAR-2008'), -1) + 1 and to_date('01-MAR-2008')
and i.h_start_date < i.h_end_date
and i.h_end_date between add_months(to_date('01-MAR-2008'), -1) + 1 and to_date('01-MAR-2008')

Hi Anurag
Thanks for the reply. Please find my sample data below; I am only showing data for one employee. I want to write a query where I query for a month, like March 2008, and find the employee records where that month falls between all the start and end dates: between start_dt and end_dt, h_start_date and h_end_date, hi_start_dt and hi_end_dt, and ph_start_dt and ph_end_dt. Where all the combinations are true, show me that record only; I don't want any other record.
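On the hard-coded date itself: rather than incrementing a variable in a cursor loop, the month series can be generated in SQL and joined to the main query in place of the '01-MAR-2008' literal. A hedged sketch of the row-generator trick:

```sql
-- Generates the first day of each month from March 2008 through the current month.
SELECT ADD_MONTHS(DATE '2008-03-01', LEVEL - 1) AS month_start
FROM   dual
CONNECT BY ADD_MONTHS(DATE '2008-03-01', LEVEL - 1) <= TRUNC(SYSDATE, 'MM');
```

Joining this inline view to the main query turns one-query-per-month into a single set-based statement.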
start_dt   end_dt     h_start_date  h_end_date  hi_start_dt  hi_end_dt  ph_start_dt  ph_end_dt
1-Sep-07 31-Dec-08 8-Feb-08 31-Aug-08 1-Sep-07 31-Dec-07 8-Feb-08 31-Dec-08
1-Sep-07 31-Dec-08 1-Sep-07 31-Dec-07 1-Sep-07 31-Dec-07 1-Sep-07 31-Dec-07
1-Sep-07 31-Dec-08 1-Sep-08 31-Dec-08 1-Sep-07 31-Dec-07 8-Feb-08 31-Dec-08
1-Sep-07 31-Dec-08 8-Feb-08 31-Aug-08 1-Aug-08 31-Aug-08 8-Feb-08 31-Dec-08
1-Sep-07 31-Dec-08 1-Sep-07 31-Dec-07 1-Aug-08 31-Aug-08 1-Sep-07 31-Dec-07
1-Sep-07 31-Dec-08 1-Sep-08 31-Dec-08 1-Aug-08 31-Aug-08 8-Feb-08 31-Dec-08
1-Sep-07 31-Dec-08 8-Feb-08 31-Aug-08 1-Oct-08 31-Dec-08 8-Feb-08 31-Dec-08
1-Sep-07 31-Dec-08 1-Sep-07 31-Dec-07 1-Oct-08 31-Dec-08 1-Sep-07 31-Dec-07
1-Sep-07 31-Dec-08 1-Sep-08 31-Dec-08 1-Oct-08 31-Dec-08 8-Feb-08 31-Dec-08
1-Sep-07 31-Dec-08 8-Feb-08 31-Aug-08 1-Sep-08 30-Sep-08 8-Feb-08 31-Dec-08
1-Sep-07 31-Dec-08 1-Sep-07 31-Dec-07 1-Sep-08 30-Sep-08 1-Sep-07 31-Dec-07
1-Sep-07 31-Dec-08 1-Sep-08 31-Dec-08 1-Sep-08 30-Sep-08 8-Feb-08 31-Dec-08
1-Sep-07 31-Dec-08 8-Feb-08 31-Aug-08 8-Feb-08 31-Jul-08 8-Feb-08 31-Dec-08
1-Sep-07 31-Dec-08 1-Sep-07 31-Dec-07 8-Feb-08 31-Jul-08 1-Sep-07 31-Dec-07
1-Sep-07 31-Dec-08 1-Sep-08 31-Dec-08 8-Feb-08 31-Jul-08 8-Feb-08 31-Dec-08 -
How to use bind variable in this select statement
Hi,
I have created this procedure where table name and fieldname is variable as they vary, therefore i passed them as parameter. This procedure will trim leading (.) if first five char is '.THE''. The procedure performs the required task. I want to make select statement with bind variable is there any possibility to use a bind variable in this select statement.
the procedure is given below:
create or replace procedure test(tablename in varchar2, fieldname IN varchar2)
authid current_user
is
type poicurtype is ref cursor;
poi_cur poicurtype;
sqlst varchar2(250);
THEVALUE NUMBER;
begin
sqlst:='SELECT EMPNO FROM '||TABLENAME||' WHERE SUBSTR('||FIELDNAME||',1,5)=''.THE ''';
DBMS_OUTPUT.PUT_LINE(SQLST);
OPEN POI_CUR FOR SQLST ;
LOOP
FETCH POI_CUR INTO THEVALUE;
EXIT WHEN POI_CUR%NOTFOUND;
DBMS_OUTPUT.PUT_LINE(THEVALUE);
SQLST:='UPDATE '||TABLENAME|| ' SET '||FIELDNAME||'=LTRIM('||FIELDNAME||',''.'')';
SQLST:=SQLST|| ' WHERE EMPNO=:X';
DBMS_OUTPUT.PUT_LINE(SQLST);
EXECUTE IMMEDIATE SQLST USING THEVALUE;
END LOOP;
COMMIT;
END TEST;
Best Regards,

So you want to amend each row individually? Is there some reason you're trying to make this procedure run as slow as possible?
create or replace procedure test (tablename in varchar2, fieldname in varchar2)
authid current_user
is
sqlst varchar2 (250);
thevalue number := 1234;
begin
sqlst := 'update ' || tablename || ' set ' || fieldname || '= ltrim(' || fieldname || ',''.'') where substr(' || fieldname
|| ',1,5) = ''.THE ''';
dbms_output.put_line (sqlst);
execute immediate sqlst;
end test;

will update every row that satisfies the criteria in a single statement. If there are 10 rows that start with '.THE ', then it will update 10 rows. -
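On the original bind-variable question: identifiers (table and column names) can never be bound, but literal values can. A hedged variant of the single-statement approach that binds the '.THE ' prefix instead of concatenating it:

```sql
-- Inside the procedure body: names are still concatenated, the value is bound.
sqlst := 'update ' || tablename || ' set ' || fieldname
      || ' = ltrim(' || fieldname || ', ''.'')'
      || ' where substr(' || fieldname || ', 1, 5) = :prefix';
EXECUTE IMMEDIATE sqlst USING '.THE ';
```

Binding the value lets the statement be reused across prefixes without re-parsing, which is the point of bind variables in dynamic SQL.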
Problem with SELECT statement. What is wrong with it?
Why is this query....
<cfquery datasource="manna_premier" name="kit_report">
SELECT Orders.ID,
SaleDate,
Orders.UserID,
Distributor,
DealerID,
Variable,
TerritoryManager,
US_Dealers.ID,
DealerName,
DealerAddress,
DealerCity,
DealerState,
DealerZIPCode,
(SELECT SUM(Quantity)
FROM ProductOrders PO
WHERE PO.OrderID = Orders.ID) as totalProducts,
FROM Orders, US_Dealers
WHERE US_Dealers.ID = DealerID AND SaleDate BETWEEN #CreateODBCDate(FORM.Start)# AND #CreateODBCDate(FORM.End)# AND Variable = '#Variable#'
</cfquery>
giving me this error message...
Error Executing Database Query.
[Macromedia][SequeLink JDBC Driver][ODBC Socket][Microsoft][ODBC Microsoft Access Driver] The SELECT statement includes a reserved word or an argument name that is misspelled or missing, or the punctuation is incorrect.
The error occurred in D:\Inetpub\mannapremier\kit_report2.cfm: line 20
18 : WHERE PO.OrderID = Orders.ID) as totalProducts,
19 : FROM Orders, US_Dealers
20 : WHERE US_Dealers.ID = DealerID AND SaleDate BETWEEN #CreateODBCDate(FORM.Start)# AND #CreateODBCDate(FORM.End)# AND Variable = '#Variable#'
21 : </cfquery>
22 :
SQLSTATE
42000
SQL
SELECT Orders.ID, SaleDate, Orders.UserID, Distributor, DealerID, Variable, TerritoryManager, US_Dealers.ID, DealerName, DealerAddress, DealerCity, DealerState, DealerZIPCode, (SELECT SUM(Quantity) FROM ProductOrders PO WHERE PO.OrderID = Orders.ID) as totalProducts, FROM Orders, US_Dealers WHERE US_Dealers.ID = DealerID AND SaleDate BETWEEN {d '2009-10-01'} AND {d '2009-10-31'} AND Variable = 'Chick Days pre-book'
VENDORERRORCODE
-3504
DATASOURCE
manna_premier
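For what it's worth, the Access driver's reserved-word/punctuation complaint here is most likely the trailing comma between "as totalProducts" and "FROM Orders" in the generated SQL above. A corrected sketch of the statement (the ODBC date escapes are kept as the driver rendered them):

```sql
SELECT Orders.ID, SaleDate, Orders.UserID, Distributor, DealerID, Variable,
       TerritoryManager, US_Dealers.ID, DealerName, DealerAddress, DealerCity,
       DealerState, DealerZIPCode,
       (SELECT SUM(Quantity)
        FROM   ProductOrders PO
        WHERE  PO.OrderID = Orders.ID) AS totalProducts  -- no comma before FROM
FROM   Orders, US_Dealers
WHERE  US_Dealers.ID = DealerID
  AND  SaleDate BETWEEN {d '2009-10-01'} AND {d '2009-10-31'}
  AND  Variable = 'Chick Days pre-book'
```
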
I copied it from a different template where it works without error...
<cfquery name="qZVPData" datasource="manna_premier">
SELECT UserID,
TMName,
UserZone,
(SELECT COUNT(*)
FROM Sales_Calls
WHERE Sales_Calls.UserID = u.UserID) as totalCalls,
(SELECT COUNT(*)
FROM Orders
WHERE Orders.UserID = u.UserID) as totalOrders,
(SELECT SUM(Quantity)
FROM ProductOrders PO
WHERE PO.UserID = u.UserID AND PO.NewExisting = 1) as newItems,
(SELECT SUM(NewExisting)
FROM ProductOrders PO_
WHERE PO_.UserID = u.UserID) as totalNew,
SUM(totalOrders)/(totalCalls) AS closePerc
FROM Users u
WHERE UserZone = 'Central'
GROUP BY UserZone, UserID, TMName
</cfquery>
What is the problem?
It's hard to say: what's your request timeout set to?
700-odd records is not much of a fetch for a decent DB, and I would not expect that to cause the problem. But then you're using Access, which doesn't fit the description of "decent DB" (or "fit for purpose" or "intended for purpose"), so I guess all bets are off on that one. If this query is slow when ONE request is asking for it, what is going to happen when it goes live and multiple requests are asking for it, along with all the other queries your site will want to run? Access is not designed for this. It will really struggle, and cause your site to run like a dog. One that died several weeks ago.
What else is on the template? I presume you're doing something with the query once you fetch it, so could it be that code that's running slowly? Have you taken any steps to isolate which part of the code is taking so long?
How does the query perform if you take the subquery out of the select line? Is there any other way of getting that data? That subquery will be running once for every row of the result set... not very nice.
Adam -
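For what it's worth, the Access error text ("the punctuation is incorrect") is consistent with the trailing comma left after `as totalProducts`, just before FROM. A minimal sketch (SQLite stand-ins for the Access tables; table names and data are illustrative, not the poster's real schema) showing both the syntax failure and Adam's join alternative to the per-row correlated subquery:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Orders (ID INTEGER PRIMARY KEY, DealerID INTEGER);
    CREATE TABLE ProductOrders (OrderID INTEGER, Quantity INTEGER);
    INSERT INTO Orders VALUES (1, 10), (2, 10);
    INSERT INTO ProductOrders VALUES (1, 5), (1, 3), (2, 7);
""")

# A trailing comma before FROM is a syntax error, like in the posted query:
syntax_failed = False
try:
    conn.execute("SELECT ID, (SELECT SUM(Quantity) FROM ProductOrders "
                 "WHERE OrderID = Orders.ID) AS totalProducts, FROM Orders")
except sqlite3.OperationalError as e:
    syntax_failed = True
    print("syntax error:", e)

# Without the comma it parses; a LEFT JOIN + GROUP BY also avoids running
# the correlated subquery once per result row, as Adam suggests:
rows = conn.execute("""
    SELECT o.ID, SUM(po.Quantity) AS totalProducts
    FROM Orders o LEFT JOIN ProductOrders po ON po.OrderID = o.ID
    GROUP BY o.ID
""").fetchall()
print(rows)
```

The same comma fix applies verbatim to the Access query; whether the join rewrite helps depends on how the Jet engine plans it.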
Button animation in selected state.
Can I have a video or animation playing on a button only in the selected state (and still in the other states)?
At the moment I can have a video looping in a normal menu, but this doesn't work in a layered menu?
I replied in the other thread you posted to.
Since layered menus are effectively a series of normal menus linked by auto-activating buttons, you can have a "selected" version of your menu that has each button animating by itself. Then when you navigate to that button, it can auto-activate and jump to the appropriate animated menu with the correct button animating.
As I said in the other thread, the drawback is that navigation from button to button is slow. -
Benchmarks on select statements
Hi,
I'm preparing a test on select statements to check which clause causes the biggest slowdown. I have prepared 3 select statements:
1) a select which transforms all columns in the source table with Oracle functions like: substr, rpad, decode, nvl, upper, mod, greatest, length, power, instr etc.
2) a select with a big where clause (about 7 lines) which reads from 2 tables
3) a select which has where, group by, having and order by clauses
I should have the first results today. But I am very interested in your experience on this subject. Which case causes the biggest slowdown?
Best.
Hi Tut,
what is your experience in this subject?
Every database is different, but in general:
1) select which transforms all columns in the source table with Oracle functions like: substr, rpad, decode, nvl, upper, mod, greatest, length, power, instr etc.
Very low overhead.
2) select with a big where clause (about 7 lines) which reads from 2 tables
Long time to parse, and sometimes Oracle does not get the cardinality right and joins the tables in the wrong order. To fix this issue:
1 - apply histograms. I have my notes here: http://www.dba-oracle.com/art_otn_cbo_p4.htm
2 - Use an ORDERED hint to enforce the best table join order: I have my notes here: http://www.dba-oracle.com/t_table_join_order.htm
3) select which has where, group by, having and order by clauses
Just make sure that you have a large enough PGA to avoid sorts to disk (sort_area_size, pga_aggregate_target).
Hope this answers your questions . . .
Donald K. Burleson
Oracle Press author
Author of "Oracle Tuning: The Definitive Reference":
http://www.dba-oracle.com/bp/s_oracle_tuning_book.htm -
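A rough way to sanity-check point 1 yourself is a micro-benchmark. This sketch uses SQLite and Python's `perf_counter` as a stand-in for Oracle, so the numbers only illustrate the shape of the comparison (per-row function calls versus the scan itself), not actual Oracle behaviour:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (c TEXT)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [("row%06d" % i,) for i in range(100000)])

def timed(sql):
    """Return elapsed seconds for executing and fetching the query."""
    t0 = time.perf_counter()
    conn.execute(sql).fetchall()
    return time.perf_counter() - t0

plain = timed("SELECT c FROM t")
funcs = timed("SELECT upper(substr(c, 1, 4)) || length(c) FROM t")
print("plain: %.3fs  with functions: %.3fs" % (plain, funcs))
```

On most runs the function-heavy variant adds only a modest fraction on top of the scan cost, matching the "very low overhead" observation, but measure on your own data before drawing conclusions.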
Performance required in select statement
hi gurus,
my select statement below is taking around 15 sec to execute, and I need your help in this regard.
select single * from tablename where no = itab-no.
Thanks,
vj
VIQMEL is not a table itself - it is a view of tables QMIH, QMEL and ILOA.
The field you are searching on (EQFNR) is a non-key field on table ILOA, and it is not an indexed field either.
This will be very slow depending on the size of the tables.
Is there any other data you can use to search by? perhaps it would be quicker to get extra key data for this view by reading some other tables first?
If not, you may need to add an index to ILOA for this field - I would check with SAP, as adding indexes to standard tables such as this one, which already appears to have several indexes, may have unforeseen impacts on standard processing. Generally it is all right, but an extra index for the system to maintain can sometimes cause problems. -
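The index point can be illustrated outside SAP. This sketch uses SQLite (a stand-in, with an invented `iloa` table; the real ILOA is a SAP standard table) to show the access path switching from a full table scan to an index search once an index on the search column exists:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE iloa (iloan INTEGER PRIMARY KEY, eqfnr TEXT)")

def plan():
    # The last column of EXPLAIN QUERY PLAN output describes the access path.
    return conn.execute(
        "EXPLAIN QUERY PLAN SELECT * FROM iloa WHERE eqfnr = ?", ("X",)
    ).fetchone()[3]

before = plan()  # typically a full table scan of iloa
conn.execute("CREATE INDEX idx_iloa_eqfnr ON iloa (eqfnr)")
after = plan()   # typically an index search using idx_iloa_eqfnr
print(before)
print(after)
```

The same principle applies to ILOA/EQFNR, with the caveats above about indexing SAP standard tables.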
Latency is very high when SELECT statements are running for LONG
We have a simple downstream Streams replication environment (archive logs are shipped from the source; CAPTURE & APPLY run on the destination DB).
Whenever there is a long running SELECT statement on TARGET the latency become very high.
SGA_MAX_SIZE = 8GB
STREAMS_POOL_SIZE=2GB
APPLY parallelism = 4
How can I resolve this issue?
Is the log file shipped but not acknowledged? -- NO
Is the log file not shipped? -- It is shipped
Is the log file acknowledged but not applied? -- Yes... But the Apply process was not stopped. It may be slow or waiting for something?
It is a 10g environment. I will run AWR... But what should I look for in AWR? -
Need suggestions on my select statement.
Hello experts,
I am having trouble with my select statement since it is running very slow. Normally, the itab it_vendor has records exceeding 7,000. So it loops 7,000 times, and I have 2 select statements inside, which adds to the performance slowdown. Here it is, guys:
*Select records from BSIK and BSAK based on itab it_vendor
LOOP AT it_vendor.
*Select records from BSIK
SELECT belnr lifnr budat buzei gjahr sgtxt dmbtr
shkzg saknr hkont zlspr FROM bsik
INTO TABLE it_bsak
FOR ALL ENTRIES IN it_vendor
WHERE bukrs EQ p_bukrs
AND budat LE p_keydt
AND hkont IN so_saknr
AND lifnr EQ it_vendor-lifnr
AND umsks EQ space
AND umskz EQ space.
*Select records from BSAK
SELECT belnr lifnr budat buzei gjahr sgtxt dmbtr
shkzg saknr hkont zlspr FROM bsak
APPENDING TABLE it_bsak
FOR ALL ENTRIES IN it_vendor
WHERE bukrs EQ p_bukrs
AND augdt GT p_keydt
AND budat LE p_keydt
AND hkont IN so_saknr
AND lifnr EQ it_vendor-lifnr
AND umsks EQ space
AND umskz EQ space.
ENDLOOP.
Since you are using the FOR ALL ENTRIES extension of the select statement, you should not have these inside the loop. What it is currently doing is getting all the records for all the vendors every time through the loop. The FOR ALL ENTRIES will get all the records for all of the vendors in one shot; there is no need to LOOP at the internal table. Make sure that you change your where clauses to how I have them below.
* Get the IT_VENDOR itab here
check not it_vendor[] is initial.
sort it_vendor ascending by lifnr.
*Select records from BSIK
SELECT belnr lifnr budat buzei gjahr sgtxt dmbtr
shkzg saknr hkont zlspr FROM bsik
INTO TABLE it_bsak
FOR ALL ENTRIES IN it_vendor
WHERE lifnr EQ it_vendor-lifnr
and bukrs EQ p_bukrs
AND budat LE p_keydt
AND hkont IN so_saknr
AND umsks EQ space
AND umskz EQ space.
*Select records from BSAK
SELECT belnr lifnr budat buzei gjahr sgtxt dmbtr
shkzg saknr hkont zlspr FROM bsak
APPENDING TABLE it_bsak
FOR ALL ENTRIES IN it_vendor
WHERE lifnr EQ it_vendor-lifnr
and bukrs EQ p_bukrs
AND augdt GT p_keydt
AND budat LE p_keydt
AND hkont IN so_saknr
AND umsks EQ space
AND umskz EQ space.
Regards,
Rich Heilman
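Rich's point (batch the lookup once instead of querying per driver row) translates to any database. A sketch in Python/SQLite, with illustrative table and column names rather than the real BSIK/BSAK schema, comparing the per-vendor loop against a single batched query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bsik (lifnr TEXT, belnr TEXT)")
conn.executemany("INSERT INTO bsik VALUES (?, ?)",
                 [("V1", "D1"), ("V2", "D2"), ("V3", "D3")])

vendors = ["V1", "V2"]  # stand-in for the it_vendor internal table

# Slow pattern: one query per vendor inside a loop (7,000 round trips
# in the original program).
per_loop = []
for lifnr in vendors:
    per_loop += conn.execute(
        "SELECT lifnr, belnr FROM bsik WHERE lifnr = ?", (lifnr,)).fetchall()

# Batched pattern: a single query for all vendors at once, which is what
# FOR ALL ENTRIES already does in ABAP.
placeholders = ",".join("?" * len(vendors))
batched = conn.execute(
    "SELECT lifnr, belnr FROM bsik WHERE lifnr IN (%s)" % placeholders,
    vendors).fetchall()

print(sorted(per_loop) == sorted(batched))  # same rows, one round trip
```

Also note Rich's check for an empty driver table: FOR ALL ENTRIES with an empty itab drops the condition and selects everything, which is why the `check not it_vendor[] is initial` guard matters.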