Database is slow
Hi,
we recently migrated our database from a single instance to a 2-node RAC, and since then we have observed performance degradation. The most frequently observed wait events on the database are buffer busy waits and db file sequential read. Another observation is that the library cache miss rate always stays above 60%.
It's a Documentum application.
The application uses only two tablespaces, one for data and another for indexes. Kindly suggest ways in which I can boost performance.
vamsi
Hello,
Well, moving from a single-instance install to RAC is about more than just high availability. You need to do some tuning, and I mean serious tuning.
There are huge amounts of resources out on the web about this. In short, here is what you are facing:
"The main way to reduce buffer busy waits is to reduce the total I/O on the system. This can be done by tuning the SQL to access rows with fewer block reads (i.e., by adding indexes). Even with a huge db_cache_size, we may still see buffer busy waits, and increasing the buffer size won't help.
The resolution of "buffer busy wait" events is one of the most confounding problems with Oracle. In an I/O-bound Oracle system, buffer busy waits are common, as evidenced by any system with read (sequential/scattered) waits in the top-five waits."
Hope this helps you get on your way. Check the database forums for more help, or have your local DBA tune the database. If you are running Enterprise Edition, you should have access to the performance tools, including the SQL Tuning Advisor and the Segment Advisor.
When you generate a snapshot report, check that you don't have any ITL waits, and see which segments and blocks the database is hot for. See which SQL statements are hot (meaning how many times each has been executed, and how many buffers it reads every time).
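As a starting point for finding the segments behind the buffer busy waits, a query along these lines against v$segment_statistics can help. This is only a sketch: the view requires 9i or later with segment-level statistics collection enabled, and the threshold is arbitrary.

```sql
-- Which segments accumulate the most buffer busy waits?
-- Cross-check the top rows against the hot SQL from the snapshot report.
select owner,
       object_name,
       object_type,
       value as buffer_busy_waits
from   v$segment_statistics
where  statistic_name = 'buffer busy waits'
and    value > 0
order  by value desc;
```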
Hope this helps.
Jan
Similar Messages
-
A user says your database is slow: how do you solve this, and how do you identify the reasons?
Blame the developers for that: a badly tuned query will reduce your performance. You may also blame the users, since a lot of concurrent users running inefficient SQL statements will strangle your system.
You may find further suggestions in your duplicated thread --> Some inter view Questions Please give prefect answer help me -
Database is slow due to indexes.
Hi,
Our database is slow, and we are trying to identify duplicate indexes on a table within a schema, where the index names may differ but the indexed columns are the same. They may not necessarily be in the same order. Please send me a query that lists duplicate indexes.
Thanks
Dear,
As Nicolas emphasized, you need facts and proof before claiming that your performance problem is due to indexes.
Anyway, when speaking about duplicate indexes, you first need to understand what a duplicate index is. Let me show you a simple example.
mhouri>drop table t1;
Table dropped.
mhouri>create table t1(a number, b number, c varchar2(10), d date, x number, y number);
Table created.
mhouri>create index t1_i1 on t1(a,b);
Index created.
I have created a simple table and added a simple composite index on (a,b).
Now, I will create a duplicate index
mhouri>create index t1_i2 on t1(a);
Index created.
Do you know why this index is considered a duplicate? Simply because the first index I created has its leading column = a, and hence the second index t1_i2 is covered by the first index t1_i1.
And what about the following index?
mhouri>create index t1_i3 on t1(b,a);
Index created.
Is it a duplicate index? The answer is no, it is not, because no index starts with the pair (b,a). However, if you intend to create an index, say t1_i4(b), do not do it, because it would be covered by the index t1_i3(b,a).
Finally, you can use a simple select such as the following one:
define m_table_name = &m_table
set verify off
set linesize 100
select substr(uc1.table_name,1,25) table_name
,substr(uc1.index_name,1,30) index_name
,substr(uc1.column_name,1,10) column_name
,uc1.column_position column_pos
from user_ind_columns uc1
where uc1.table_name = upper('&m_table_name')
order by
uc1.index_name
,uc1.column_position
;
This will give you a list of existing indexes for the input table, together with their columns and the positions of those columns. Based on your knowledge of what a duplicate index is, you can analyse and act accordingly.
The above select, when executed against our current table t1, gives the following picture:
mhouri>start c:\red-index.sql
Enter value for m_table: t1
TABLE_NAME INDEX_NAME COLUMN_NAM COLUMN_POS
T1 T1_I1 A 1
T1 T1_I1 B 2
T1 T1_I2 A 1
T1 T1_I3 B 1
T1 T1_I3 A 2
Here we can point out that two indexes, t1_i1 and t1_i2, start with the same column A. This is why the index t1_i2 is unnecessary and should not have been created in the first place.
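To spot such redundant leading columns automatically, a self-join on user_ind_columns can flag candidate pairs. This is only a sketch: it compares leading columns only, so a flagged pair still needs the manual prefix analysis described above before you drop anything.

```sql
-- Candidate duplicate indexes: pairs of indexes on the same table
-- that share the same leading column (column_position = 1).
select a.table_name,
       a.index_name  as index_a,
       b.index_name  as index_b,
       a.column_name as shared_leading_column
from   user_ind_columns a,
       user_ind_columns b
where  a.table_name      = b.table_name
and    a.index_name      < b.index_name   -- report each pair only once
and    a.column_position = 1
and    b.column_position = 1
and    a.column_name     = b.column_name;
```

Against the t1 example above, this should report the pair (T1_I1, T1_I2) sharing leading column A, while T1_I3 is not flagged.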
Hope this helps
Mohamed Houri -
Database very slow/hangs.
Hello,
My company's server has a database of size 400GB, distributed across 2 HDDs of 300GB each. Previously I faced a problem of the database performing very slowly. At that time the database was much smaller, and I simply moved the REDO LOG file to the other HDD using the rename command. That worked, and the database was working great again. But now the scenario is quite different from before, as given below:
Tablespace Name - LHSERP
No. of datafiles in LHSERP tablespace - 55 (distributed across the 2 HDDs)
Hard Disk Drive (HDD) in server - 2
Capacity of each HDD - 300GB
Now both HDDs contain datafiles from the LHSERP tablespace. I tried to move the redo log files to the other HDD, but to no avail: no improvement in performance. The database is still slow.
One more thing I need to make a point of: when I look at Task Manager on the server, it shows red and green graphs of CPU usage. Does that mean anything serious? Even the whole OS works quite slowly on the server. Everything, from opening My Computer, to logging into the user, to firing a query, is very slow. Should I try to sort out this problem in a different direction?
Can you suggest what to do next to improve database performance? If you have any more ideas, please let me know.
ORACLE DATABASE 10g
Windows Server 2008 64-Bit
Thanks in advance ....
Hi,
What is your AWR snapshot retention time? I normally set it to 30 days so that, in case of a problem, I can compare the current AWR with my historical AWRs. Now, can you take an AWR report from when your database was performing well, then the latest AWR, compare them, and see what the differences are? Has redo log generation increased? What are the top 5 wait events in the good AWR report versus the current bad one? What are the top SQLs (by elapsed time and CPU time) in the good and bad AWRs? What were the top segments in the bad and good AWRs?
Doing this will give you insight into the problematic area.
Can you check the ADDM report? Is Oracle recommending anything to look into?
What is the CPU usage? Even if Oracle is the top consumer, can you check the CPU usage for the past 24 hours? Is it touching 100%?
You should have OEM configured with the database. From OEM, can you check the host hard disk performance, and check the busy percentage of your hard disks for the past month to see if there was any increase in the disk busy rate?
Doing all of the above will certainly help you identify the problematic area.
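One way to do the good-versus-bad AWR comparison in a single pass, assuming you are on a release where the compare-periods script ships with the database (10gR2 and later), is the awrddrpt.sql script, which prompts for the two snapshot ranges:

```sql
-- AWR Compare Periods report: prompts for a "good" snapshot pair
-- and a "bad" snapshot pair, then produces a side-by-side diff.
-- Run as a DBA user from SQL*Plus; "?" expands to ORACLE_HOME.
@?/rdbms/admin/awrddrpt.sql
```

If the script is not present in your release, generating two plain AWR reports with awrrpt.sql and comparing the top-5 wait events and top SQL sections by hand achieves the same goal.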
Salman -
AWR - Database Performance Slow
If my whole database performance is slow,
will running an AWR report now include statistics for the current period, when the DB performance is slow?
The default AWR snapshot interval is 1 hour. So, with the default implementation, you will be able to create an AWR report for the period 10am to 11am. It will not reflect what happened, or why "slowness" occurred, at 10:45: the statistics in the AWR report are a summation/averaging of all the activity in the entire hour.
You could modify the snapshot interval (using dbms_workload_repository.modify_snapshot_settings) to have Oracle collect snapshots every 15 minutes. But that will apply only after the change has been made. So if you have a slowdown subsequently, you will be able to investigate it with the AWR report for that period; what has already been collected at hourly intervals cannot be refined any further.
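For reference, changing the interval (and, optionally, the retention) looks roughly like this; both parameters are in minutes, and the values shown (15-minute snapshots, 30-day retention) are only illustrative:

```sql
-- Collect AWR snapshots every 15 minutes and keep them for 30 days
-- (43200 minutes). AWR use requires a Diagnostics Pack license.
begin
  dbms_workload_repository.modify_snapshot_settings(
    interval  => 15,
    retention => 43200);
end;
/
-- Verify the new settings:
select snap_interval, retention from dba_hist_wr_control;
```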
Hemant K Chitale -
Hi to all,
My database performance has suddenly become slow. My PGA cache hit percentage stays at 96%.
I will list the findings:
Some tables have not been analyzed since Dec 2007; some tables were never analyzed.
(If those tables were analyzed, would performance improve in this scenario?)
PGA allocated is 400MB, but the maximum PGA allocated since the instance started (11 Nov 08) is only 95MB.
(I presume we have over-allocated the PGA. Can I reduce it to 200MB and increase the shared pool and buffer cache by 100MB each?)
Memory Configuration:
Buffer Cache: 504 MB
Shared Pool: 600 MB
Java Pool: 24MB
Large Pool: 24MB
SGA Max Size is: 1201.72 MB
PGA Aggregate is: 400 MB
My Database resided in Windows 2003 Server Standard Edition with 4GB of RAM.
Please give me suggestions.
Thanks and Regards,
Vijayaraghavan K
Vijayaraghavan Krishnan wrote:
My database performance has suddenly become slow. My PGA cache hit percentage stays at 96%.
Some tables have not been analyzed since Dec 2007; some tables were never analyzed.
PGA allocated is 400MB, but the maximum PGA allocated since the instance started (11 Nov 08) is only 95MB.
(I presume we have over-allocated the PGA. Can I reduce it to 200MB and increase the shared pool and buffer cache by 100MB each?)
You are in an awkward situation: your database is behaving badly, but it has been in an unhealthy state for a very long time, and any "simple" change you make to address the performance could have unpredictable side effects.
At this moment you have to think at two levels - tactical and strategic.
Tactical - is there anything you can do in the short term to address the immediate problem.
Strategic - what is the longer-term plan to sort out the state of the database.
Strategically, you should be heading for a database with correct indexing, representative data statistics, optimum resource allocation, minimum hacking in the parameter file, and (probably) implementation of "system statistics".
Tactically, you need to find out which queries (old or new) have suddenly introduced an extra work load, or whether there has been an increase in the number of end-users, or other tasks running on the machine.
For a quick and dirty approach, you could start by checking v$sql every few minutes for recent SQL that might be expensive; or run checks for SQL that has executed a very large number of times, used a lot of CPU, or done a lot of disk I/O or buffer gets.
You could also install Statspack and start taking snapshots hourly at level 7, then run off reports covering intervals when the system is slow. Again, a quick check would be to look at the "SQL ordered by .." sections of the report for the expensive SQL.
If you are lucky, there will be a few nasty SQL statements that you can identify as responsible for most of your resource usage; then you can decide what to do about them.
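The quick-and-dirty v$sql check might look something like this. The "top 20" cut-off and the ordering are arbitrary placeholders to adjust for your workload, and the query assumes 10g or later (on older releases, select hash_value instead of sql_id):

```sql
-- Recent, potentially expensive SQL: ranked by total buffer gets,
-- with per-execution cost to separate "heavy" from "frequent" SQL.
select *
from (
  select sql_id,
         executions,
         buffer_gets,
         disk_reads,
         round(buffer_gets / greatest(executions, 1)) as gets_per_exec,
         substr(sql_text, 1, 60) as sql_text
  from   v$sql
  order  by buffer_gets desc
)
where rownum <= 20;
```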
Regarding pga_aggregate_target: this is a value that is available for sharing across all processes. From the name you've used, I think you may be looking at a figure for a single specific process, so I wouldn't reduce the pga_aggregate_target just yet.
If you want to post a statspack report to the forum, we may be able to make a few further suggestions. (Use the "code" tags, in curly brackets { }, to make the report readable in a fixed font.)
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk
"The temptation to form premature theories upon insufficient data is the bane of our profession."
Sherlock Holmes (Sir Arthur Conan Doyle) in "The Valley of Fear". -
RMAN duplicate database suddenly slow
Hi Everyone,
I posted at wrong forum last time, sorry about that.
I used RMAN to duplicate a database to a different box on the same local network. Here is the scenario:
boxA: target database (PROD) -- 250G
database:Oracle 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
OS:Windows Server 2003 Enterprise x64 Edition
boxB: cloned database
database:Oracle 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
OS:Windows Server 2003 Enterprise x64 Edition
After preparing the necessary steps (listener, tnsnames, etc.), I ran the following script:
C:\>rman target sys/[email protected] auxiliary sys/[email protected] catalog rman/rman
RMAN> RUN
{
SET NEWNAME FOR DATAFILE 1 TO 'D:\oracle\undb\system01.dbf';
.... (some set newname omitted here)
SET NEWNAME FOR DATAFILE 15 TO 'D:\oracle\undb\RMAN01.ORA';
SET NEWNAME FOR TEMPFILE 1 TO 'D:\oracle\undb\TEMP01.ORA';
# manually allocate three auxiliary channels for disk
ALLOCATE AUXILIARY CHANNEL aux1 DEVICE TYPE DISK;
ALLOCATE AUXILIARY CHANNEL aux2 DEVICE TYPE DISK;
ALLOCATE AUXILIARY CHANNEL aux3 DEVICE TYPE DISK;
DUPLICATE TARGET DATABASE TO undb
LOGFILE
GROUP 1 ('D:\oracle\undb\redo01.log') SIZE 100M REUSE,
GROUP 2 ('D:\oracle\undb\redo02.log') SIZE 100M REUSE,
GROUP 3 ('D:\oracle\undb\redo03.log') SIZE 100M REUSE;
}
The database duplicated successfully last week.
On Monday, I created another database in boxA (testdb) and used a similar script (with just the necessary changes: database name, file names, etc.), and the duplication was also successful.
Yesterday, I tried to duplicate PROD again and did exactly the same thing as on Monday,
everything was very slow,
RMAN> report schema; took over 30 minutes,
RMAN> list backup; took another 30 minutes,
then I deleted testdb in boxA; nothing got better.
I have tried from boxA and also from boxB; things were similar.
(when test from boxB, I input: rman target sys/[email protected] auxiliary sys/[email protected] catalog rman/[email protected] )
when I input:
RMAN> report schema;
and checked from Enterprise Manager,
SELECT RECID , STAMP , THREAD# , SEQUENCE# , FIRST_CHANGE# LOW_SCN , FIRST_TIME LOW_TIME , NEXT_CHANGE# NEXT_SCN ,
RESETLOGS_CHANGE# , RESETLOGS_TIME FROM V$LOG_HISTORY WHERE RECID BETWEEN :1 AND :1 AND RESETLOGS_TIME IS NOT NULL
AND STAMP >= :1 ORDER BY RECID
that query ran for a long time. My V$LOG_HISTORY has 18688 rows, yet select count(*) from V$LOG_HISTORY took less than 1 second.
I am wondering what is the reason?
Can anybody give me a clue?
Thank you very much.
Thanks Alan,
Hardware spec:
duplicate database server CPU: 2x1995 Mhz ,RAM:4G, hard disk 500G
production server CPU: 2x2993 Mhz ,RAM:5G,hard disk 900G
The production server is better than the duplicate server. The production server runs only the Oracle 10g server; the duplicate server is brand new, and nothing is running on it except Windows and the Oracle software.
I noticed that during peak hours RMAN is really slow, no matter which box I run it from; during off-peak hours it is reasonable.
Is there anything I can do on the RMAN side?
Thanks again -
Logging large .ctl to citadel database is slow
Hi, All
I'm using LV 8.5 with DSC.
I'm logging my own .ctl-type shared variables to the Citadel database. This .ctl includes 52 double, string, and boolean data items. I have 200 shared variables of this type, and I record all of their information to the Citadel database. All operations (writing, reading, archiving, and deleting the database) are very slow, and they also consume a lot of CPU time. Are there any capacity limits on a Citadel database? What is the best structure for recording this kind of data to the database? I have tested the attached structure, but it seems to be very slow because I have to delete and re-create database trace references after the first database is full.
BR,
Attachments:
code_structure.jpg 51 KB
When you say "...to Log Data only when necessary...", I assume you are using Set Tag Attribute.vi to establish this behavior.
National Instruments recently added a feature (as a hot-fix) to the DSC Engine to be able to ignore timestamps coming from servers, to prevent logging values with "back-in-time" timestamps. Citadel is really sensitive to values going back in time (it logs a NaN), and therefore retrieval of Citadel data containing such back-in-time traces can act weird.
You can find more info from:
Why Do I See a Lot of NaN (Not-a-Number) In My Citadel Database When I Use the Set Tag Attribute.vi?
blic.nsf/websearch/B871D05A1A4742FA86256C70006BBE00?OpenDocument>How Do I Avoid Out-of-Synch (a.k.a....
The Hot-Fix can be found:
LabVIEW Datalogging and Supervisory Control Module Version 6.1 for Windows 2000/95/98/ME/NT/XP -- Fi...
I assume you ran into such a use case. It happened to me, too. I've created a small VI which analyzes traces for back-in-time (NaN, Not-a-Number) values. I assume the missing data in DIAdem are those Not-a-Numbers, a.k.a. breaks.
If you still encounter problems after applying the DSCEngine.ini setting UseServerTimestamps=false, you might contact a National Instruments Support Engineer.
Hope this helps
Roland
Attachments:
BackInTimeAnalyzer.llb 622 KB -
DSC 7.1 - Compact Database is SLOW
Another time DSC shows how slow it really is.
I need to compact my database since it is holding 5 months of data (configured to hold 90 days)
So I load MAX and ask it to compact the database. ERROR: "unspecified error".
OK, shut down the tag engine; I can live with no fresh data being logged for an hour.
Start the compact: oooh, 50% almost immediately.
OK, lots of activity on my SCSI RAID 0 (two Seagate 76GB 15krpm drives on an Adaptec dual-channel RAID board).
The computer is working away (1% in 15 minutes); CPU usage (dual 3.0GHz Xeon) is only 7%.
File manager shows files being added (70+).
Hmmm, finally done after 1 hour 17 minutes.
File manager still shows 5 months of data (was 15GB, now 14.7GB).
use MAX to see how much data is there
-LOTS of SCSI activity
-ta da MAX shows all the data is still there
-MAX shows the "lifespan" of the data equals 90 days
This may seem to be another DSC "rant" and it probably is. I just want the software to do what it is advertised to do. If Compact does not work "don't include a menu for it" - If Archive takes 18 hours to extract some data "fix it" - If the database is corrupt "TELL US! don't make us look at CPU Usage to find out when it finishes"
OK I am done with the rant.
How do I remove the expired data?
Hello,
To remove expired data, you can do destructive archiving. Unfortunately it is slow, and we are addressing this issue. I am sure you know how to do destructive archiving, but for the benefit of others who might come across this discussion, here it is.
1. In MAX, under Historical Database, select Citadel 5 Universe. This brings up all the databases on the right handside.
2. Right click on the database you want to archive and select Archive.
3. In the wizard, select the data you want to archive and hit next.
4. Select the destination database you want the data to be archived to and hit next.
5. In this final step, please make sure that you have selected the option to destroy the data after it has been archived.
Regards,
Arun V
National Instruments -
Hi all,
I am testing Data Guard on my laptop with WinXP; I am using Oracle version 9.2.
Data Guard is implemented and working fine.
The problem I am facing is a very slow response from the standby database: it takes a long time to start up, change recovery mode, and switch to read-only mode, whereas the primary database works fine.
Any idea what could be the reason.
Regards,
Asim
A couple of days earlier I implemented Data Guard on the same laptop; at that time the primary and standby databases worked fine with no performance (slowness) issues. Then I needed to reinstall Windows, and now that I have configured Data Guard again I face the stated problem (slow standby).
I have done some trial and error, and finally stopped all other databases and started only the standby in nomount mode, but the problem remains.
:-) I agree with your point; this is not normal behaviour.
Further, I think that to solve this problem I might reinstall Windows; probably this problem will then be solved and some new one will arise.
Regards, -
Database upgrade - slow query performance
Hi,
recently we upgraded our 8i database to 10g.
While we were testing our Forms application against the new 10g database, there was a very slow SQL statement that runs for several minutes, although it runs against the 8i database within seconds.
With SQL*Plus in 10g it sometimes runs fast and sometimes slow (see the execution plans below).
The sql-statement in detail:
SELECT name1, vornam, aboid, liefstat
FROM aktuellerabosatz
WHERE aboid = evitadba.get_evitaid ('0000002100')
"aktuellerabosatz" is a view on a table with about 3,000,000 records.
The function get_evitaid returns only the substring of the last 4 digits of the whole number.
Execution plan with slow response time:
12:05:31 EVITADBA-TSUN>SELECT name1, vornam, aboid, liefstat
12:05:35 2 FROM aktuellerabosatz
12:05:35 3 WHERE aboid = evitadba.get_evitaid ('0000002100');
NAME1 VORNAM ABOID L
RETHMANN ENTSORGUNGSWIRTSCHAFT 2100 A
1 row selected.
Elapsed: 00:00:55.07
Execution plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=4 Card=1 Bytes=38)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'EVTABO' (TABLE) (Cost=4
Card=1 Bytes=38)
2 1 INDEX (RANGE SCAN) OF 'EVIABO22' (INDEX (UNIQUE)) (Cost=
3 Card=1)
Statistics
100 recursive calls
0 db block gets
121353 consistent gets
121285 physical reads
0 redo size
613 bytes sent via SQL*Net to client
500 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
Execution plan with fast response time:
12:06:43 EVITADBA-TSUN>SELECT name1, vornam, aboid, liefstat
12:06:58 2 FROM aktuellerabosatz
12:06:58 3 WHERE aboid = evitadba.get_evitaid ('0000002100');
NAME1 VORNAM ABOID L
RETHMANN ENTSORGUNGSWIRTSCHAFT 2100 A
1 row selected.
Elapsed: 00:00:00.00
Execution plan
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=4 Card=1 Bytes=38)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'EVTABO' (TABLE) (Cost=4
Card=1 Bytes=38)
2 1 INDEX (RANGE SCAN) OF 'EVIABO22' (INDEX (UNIQUE)) (Cost=
3 Card=1)
Statistics
110 recursive calls
8 db block gets
49 consistent gets
0 physical reads
0 redo size
613 bytes sent via SQL*Net to client
500 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
In the fast response case, the consistent gets and physical reads are very small, but at other times they are very high, which (it seems) results in the slow performance.
What could be the reasons?
kind regards
Marco
The two execution plans above are both from 10g SQL sessions on the same database with the same user. We gather statistics for the database with the dbms_stats package; normally we use the all_rows option. The confusing thing is that the SQL statement sometimes runs fast and sometimes slow in a SQL*Plus session with the same execution plan; only the physical reads and consistent gets are extremely different.
If we rewrite the SQL statement to use the table evtabo with an additional where clause (taken from the view definition) instead of using the view, then it runs fast:
14:24:04 H00ZRETH-TSUN>SELECT name1, vornam, aboid, liefstat
14:24:14 2 FROM aktuellerabosatz
14:24:14 3 WHERE aboid = evitadba.get_evitaid ('0000000246');
No rows selected.
Elapsed: 00:00:43.07
Execution plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=27315 Card=1204986
Bytes=59044314)
1 0 VIEW OF 'EVTABO_V1' (VIEW) (Cost=27315 Card=1204986 Bytes=
59044314)
2 1 TABLE ACCESS (FULL) OF 'EVTABO' (TABLE) (Cost=27315 Card
=1204986 Bytes=45789468)
14:24:59 H00ZRETH-TSUN>SELECT name1, vornam, aboid, liefstat
14:25:26 2 FROM evtabo
14:25:26 3 WHERE aboid = evitadba.get_evitaid ('0000002100')
14:25:26 4 and gueltab <= TRUNC(sysdate) AND (gueltbs >=TRUNC(SYSDATE) OR gueltbs IS NULL);
NAME1 VORNAM ABOID L
RETHMANN ENTSORGUNGSWIRTSCHAFT 2100 A
1 row selected.
Elapsed: 00:00:00.00
Execution plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=4 Card=1 Bytes=38)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'EVTABO' (TABLE) (Cost=4
Card=1 Bytes=38)
2 1 INDEX (RANGE SCAN) OF 'EVIABO22' (INDEX (UNIQUE)) (Cost=
3 Card=1)
What could be the reason for the different performance in 8i and 10g?
Thanks
Marco -
Database becomes slow maybe due to "SUCCESS: diskgroup ORAARCH was dismount
Hello,
I got performance complaint from client side like
" when we are running the software GUI interface from our laptops we are noticing a problem when we add new users and do any type of device sort to the data base. The system slows down to a crawl and any other users on the system are unable to do any tasks. "
I checked database alert log, got
SUCCESS: diskgroup ORAARCH was mounted
SUCCESS: diskgroup ORAARCH was dismounted
SUCCESS: diskgroup ORAARCH was mounted
SUCCESS: diskgroup ORAARCH was dismounted
SUCCESS: diskgroup ORAARCH was mounted
SUCCESS: diskgroup ORAARCH was dismounted
Is the reason? any solution?
thel database is 10.2.0.4.0 on Linux
thank you
Edited by: ROY123 on Feb 23, 2012 1:09 PM
ROY123 wrote:
Hello,
I got performance complaint from client side like
" when we are running the software GUI interface from our laptops we are noticing a problem when we add new users and do any type of device sort to the data base. The system slows down to a crawl and any other users on the system are unable to do any tasks. "
I checked database alert log, got
SUCCESS: diskgroup ORAARCH was mounted
SUCCESS: diskgroup ORAARCH was dismounted
SUCCESS: diskgroup ORAARCH was mounted
SUCCESS: diskgroup ORAARCH was dismounted
SUCCESS: diskgroup ORAARCH was mounted
SUCCESS: diskgroup ORAARCH was dismounted
Is the reason? any solution?
thel database is 10.2.0.4.0 on Linux
This is just an indicator of how many times the diskgroup was mounted and dismounted. Mount and dismount events happen whenever requests are sent to the diskgroup and results are sent back to the database. So we can say that when nothing is happening on the database side, i.e. the database does not need I/O, the diskgroup will be idle and will show its status as dismounted in alert.log. Having said this, it is not a performance problem for you.
Rather, you should be checking AWR and OS stats to look for the performance problem.
Hope this helps -
Xml spry database, really slow in IE
Ok,
So I am trying to display an XML database listing and then, with buttons, display different sections of the database. It is a restaurant listing, and I am trying to show all the restaurants for different styles: Mexican, American, etc.
I've got it working fine, but I probably did it with too much code, because it takes almost 10 seconds to load in Internet Explorer, but only 2 seconds in Safari and about 7 seconds in Firefox. This is just way too long and I need to cut it down.
So here's the question: my code works fine as it is, but since I load the XML when the page loads and parse it to create buttons based on the styles of food available, can I reuse that same dataset and display different parts of it, rather than reloading and re-parsing the original XML for each style of food? Right now the XML is parsed based on a URL parameter, which is the city name coded as city_din.
Thanks for any help or advice. The code covers the loading of the initial XML and one of the styles; right now I have six styles, I think.
var params = Spry.Utils.getLocationParamsAsObject();
var xpath = "/root/dining";
if ((params.city_din))
xpath = "/root/dining[city_din = '"+params.city_din+"']";
var styles1 = new Spry.Data.XMLDataSet("dining.xml", xpath, {distinctOnLoad: true, distinctFieldsOnLoad:['type_din']});
</script>
<script type="text/javascript">
//mexican sort
var params = Spry.Utils.getLocationParamsAsObject();
var xpath = "/root/dining";
if ((params.city_din))
xpath = "/root/dining[city_din = '"+params.city_din+"' and type_din= '"+'Mexican'+"']";
var mexican = new Spry.Data.XMLDataSet("dining.xml", xpath);
Hello, thank you for your response...
Here is a link to the page with the code in question.
I had to upload it to our test server since it is not actually
a website it is part of an interactive kiosk so the content
will sit on the local hard drive of the unit.
I know there is a better way to write this code, but this was the only way I could figure out how to get it to work.
http://www.sunfunstayplay.com/test/sharedfiles/NewDining1.html?city_din=Bakersfield -
Slow down Database after upgrading to 10.2.0.5
Hi
I am having performance problems after upgrading to 10.2.0.5.
At the beginning I thought the problem was that the SGA was too small (initially 598M, now 1408M), but even after recreating the database with the new value the problem remains.
I am sending reports so that someone can give me an idea.
Thanks in advance!
DETAILED ADDM REPORT FOR TASK 'TASK_240' WITH ID 240
Analysis Period: 22-JUN-2011 from 08:34:06 to 16:00:13
Database ID/Instance: 2462860799/1
Database/Instance Names: DXT/DXT
Host Name: thoracle
Database Version: 10.2.0.5.0
Snapshot Range: from 71 to 78
Database Time: 6726 seconds
Average Database Load: .3 active sessions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
FINDING 1: 38% impact (2540 seconds)
SQL statements consuming significant database time were found.
RECOMMENDATION 1: SQL Tuning, 26% benefit (1763 seconds)
ACTION: Investigate the SQL statement with SQL_ID "30rku9qg2y30j" for
possible performance improvements.
RELEVANT OBJECT: SQL statement with SQL_ID 30rku9qg2y30j and
PLAN_HASH 2734400036
select a.owner, a.object_name, INSTR(a.object_type, :"SYS_B_00"),
:"SYS_B_01" from sys.all_objects a where a.object_type IN
(:"SYS_B_02",:"SYS_B_03") and a.status = :"SYS_B_04" and a.owner
like:"SYS_B_05"escape:"SYS_B_06" and a.object_name
like:"SYS_B_07"escape:"SYS_B_08" union all select c.owner,
c.synonym_name, INSTR(a.object_type, :"SYS_B_09"), :"SYS_B_10" from
sys.all_objects a, sys.all_synonyms c where c.table_owner = a.owner
and c.table_name = a.object_name and a.object_type IN
(:"SYS_B_11",:"SYS_B_12") and a.status = :"SYS_B_13" and c.owner
like:"SYS_B_14"escape:"SYS_B_15" and c.synonym_name
like:"SYS_B_16"escape:"SYS_B_17" union all select distinct b.owner,
CONCAT(b.package_name, :"SYS_B_18" || b.object_name),
min(b.position), max(b.overload) from sys.all_arguments b where
b.package_name IS NOT NULL and b.owner
like:"SYS_B_19"escape:"SYS_B_20" and b.package_name
like:"SYS_B_21"escape:"SYS_B_22" group by b.owner,
CONCAT(b.package_name, :"SYS_B_23" || b.object_name) union all select
distinct c.owner, CONCAT(c.synonym_name, :"SYS_B_24" ||
b.object_name), min(b.position), max(b.overload) from
sys.all_arguments b, sys.all_synonyms c where c.table_owner = b.owner
and c.table_name = b.package_name and b.package_name IS NOT NULL and
c.owner like:"SYS_B_25"escape:"SYS_B_26" and c.synonym_name
like:"SYS_B_27"escape:"SYS_B_28" group by c.owner,
CONCAT(c.synonym_name, :"SYS_B_29" || b.object_name) union all select
distinct c.owner, c.synonym_name, min(b.position), max(b.overload)
from sys.all_arguments b, sys.all_synonyms c where c.owner = b.owner
and c.table_owner=b.package_name and c.table_name=b.object_name and
c.owner like:"SYS_B_30"escape:"SYS_B_31" and c.synonym_name
like:"SYS_B_32"escape:"SYS_B_33" group by c.owner, c.synonym_name
RATIONALE: SQL statement with SQL_ID "30rku9qg2y30j" was executed 12270
times and had an average elapsed time of 0.036 seconds.
RATIONALE: Waiting for event "cursor: pin S wait on X" in wait class
"Concurrency" accounted for 7% of the database time spent in
processing the SQL statement with SQL_ID "30rku9qg2y30j".
RECOMMENDATION 2: SQL Tuning, 23% benefit (1550 seconds)
ACTION: Run SQL Tuning Advisor on the SQL statement with SQL_ID
"7yv1ba0c8y86t".
RELEVANT OBJECT: SQL statement with SQL_ID 7yv1ba0c8y86t and
PLAN_HASH 2684283631
Select WSTJ_.ROWID, WSTJ_.*, WMVD_.*
From THPR.STOJOU WSTJ_, THPR.SMVTD WMVD_ Where ((WMVD_.VCRTYP_0(+) =
WSTJ_.VCRTYP_0) AND (WMVD_.VCRNUM_0(+) = WSTJ_.VCRNUM_0) AND
(WMVD_.VCRLIN_0(+) = WSTJ_.VCRLIN_0))
And WMVD_.CCE2_0 = :1 And WSTJ_.IPTDAT_0 <= :2 And
Substr(WMVD_.ITMDES1_0,:"SYS_B_0",:"SYS_B_1") <> :3 And
Substr(WMVD_.ITMDES1_0,:"SYS_B_2",:"SYS_B_3") <> :4 And
Substr(WMVD_.ITMDES1_0,:"SYS_B_4",:"SYS_B_5") <> :5 And
((WSTJ_.TRSFAM_0 = :6) Or (WSTJ_.TRSFAM_0 = :7))
Order by WSTJ_.STOFCY_0,WSTJ_.UPDCOD_0,WSTJ_.ITMREF_0,WSTJ_.IPTDAT_0
Desc,WSTJ_.MVTSEQ_0,WSTJ_.MVTIND_0
RATIONALE: SQL statement with SQL_ID "7yv1ba0c8y86t" was executed 47
times and had an average elapsed time of 32 seconds.
RECOMMENDATION 3: SQL Tuning, 14% benefit (926 seconds)
ACTION: Use bigger fetch arrays while fetching results from the SELECT
statement with SQL_ID "7yv1ba0c8y86t".
RELEVANT OBJECT: SQL statement with SQL_ID 7yv1ba0c8y86t and
PLAN_HASH 2684283631
Select WSTJ_.ROWID, WSTJ_.*, WMVD_.*
From THPR.STOJOU WSTJ_, THPR.SMVTD WMVD_ Where ((WMVD_.VCRTYP_0(+) =
WSTJ_.VCRTYP_0) AND (WMVD_.VCRNUM_0(+) = WSTJ_.VCRNUM_0) AND
(WMVD_.VCRLIN_0(+) = WSTJ_.VCRLIN_0))
And WMVD_.CCE2_0 = :1 And WSTJ_.IPTDAT_0 <= :2 And
Substr(WMVD_.ITMDES1_0,:"SYS_B_0",:"SYS_B_1") <> :3 And
Substr(WMVD_.ITMDES1_0,:"SYS_B_2",:"SYS_B_3") <> :4 And
Substr(WMVD_.ITMDES1_0,:"SYS_B_4",:"SYS_B_5") <> :5 And
((WSTJ_.TRSFAM_0 = :6) Or (WSTJ_.TRSFAM_0 = :7))
Order by WSTJ_.STOFCY_0,WSTJ_.UPDCOD_0,WSTJ_.ITMREF_0,WSTJ_.IPTDAT_0
Desc,WSTJ_.MVTSEQ_0,WSTJ_.MVTIND_0
FINDING 2: 37% impact (2508 seconds)
Time spent on the CPU by the instance was responsible for a substantial part
of database time.
RECOMMENDATION 1: SQL Tuning, 26% benefit (1763 seconds)
ACTION: Investigate the SQL statement with SQL_ID "30rku9qg2y30j" for
possible performance improvements.
RELEVANT OBJECT: SQL statement with SQL_ID 30rku9qg2y30j and
PLAN_HASH 2734400036
select a.owner, a.object_name, INSTR(a.object_type, :"SYS_B_00"),
:"SYS_B_01" from sys.all_objects a where a.object_type IN
(:"SYS_B_02",:"SYS_B_03") and a.status = :"SYS_B_04" and a.owner
like:"SYS_B_05"escape:"SYS_B_06" and a.object_name
like:"SYS_B_07"escape:"SYS_B_08" union all select c.owner,
c.synonym_name, INSTR(a.object_type, :"SYS_B_09"), :"SYS_B_10" from
sys.all_objects a, sys.all_synonyms c where c.table_owner = a.owner
and c.table_name = a.object_name and a.object_type IN
(:"SYS_B_11",:"SYS_B_12") and a.status = :"SYS_B_13" and c.owner
like:"SYS_B_14"escape:"SYS_B_15" and c.synonym_name
like:"SYS_B_16"escape:"SYS_B_17" union all select distinct b.owner,
CONCAT(b.package_name, :"SYS_B_18" || b.object_name),
min(b.position), max(b.overload) from sys.all_arguments b where
b.package_name IS NOT NULL and b.owner
like:"SYS_B_19"escape:"SYS_B_20" and b.package_name
like:"SYS_B_21"escape:"SYS_B_22" group by b.owner,
CONCAT(b.package_name, :"SYS_B_23" || b.object_name) union all select
distinct c.owner, CONCAT(c.synonym_name, :"SYS_B_24" ||
b.object_name), min(b.position), max(b.overload) from
sys.all_arguments b, sys.all_synonyms c where c.table_owner = b.owner
and c.table_name = b.package_name and b.package_name IS NOT NULL and
c.owner like:"SYS_B_25"escape:"SYS_B_26" and c.synonym_name
like:"SYS_B_27"escape:"SYS_B_28" group by c.owner,
CONCAT(c.synonym_name, :"SYS_B_29" || b.object_name) union all select
distinct c.owner, c.synonym_name, min(b.position), max(b.overload)
from sys.all_arguments b, sys.all_synonyms c where c.owner = b.owner
and c.table_owner=b.package_name and c.table_name=b.object_name and
c.owner like:"SYS_B_30"escape:"SYS_B_31" and c.synonym_name
like:"SYS_B_32"escape:"SYS_B_33" group by c.owner, c.synonym_name
RATIONALE: SQL statement with SQL_ID "30rku9qg2y30j" was executed 12270
times and had an average elapsed time of 0.036 seconds.
RATIONALE: Waiting for event "cursor: pin S wait on X" in wait class
"Concurrency" accounted for 7% of the database time spent in
processing the SQL statement with SQL_ID "30rku9qg2y30j".
RATIONALE: Average CPU used per execution was 0.036 seconds.
RECOMMENDATION 2: SQL Tuning, 23% benefit (1550 seconds)
ACTION: Run SQL Tuning Advisor on the SQL statement with SQL_ID
"7yv1ba0c8y86t".
RELEVANT OBJECT: SQL statement with SQL_ID 7yv1ba0c8y86t and
PLAN_HASH 2684283631
Select WSTJ_.ROWID, WSTJ_.*, WMVD_.*
From THPR.STOJOU WSTJ_, THPR.SMVTD WMVD_ Where ((WMVD_.VCRTYP_0(+) =
WSTJ_.VCRTYP_0) AND (WMVD_.VCRNUM_0(+) = WSTJ_.VCRNUM_0) AND
(WMVD_.VCRLIN_0(+) = WSTJ_.VCRLIN_0))
And WMVD_.CCE2_0 = :1 And WSTJ_.IPTDAT_0 <= :2 And
Substr(WMVD_.ITMDES1_0,:"SYS_B_0",:"SYS_B_1") <> :3 And
Substr(WMVD_.ITMDES1_0,:"SYS_B_2",:"SYS_B_3") <> :4 And
Substr(WMVD_.ITMDES1_0,:"SYS_B_4",:"SYS_B_5") <> :5 And
((WSTJ_.TRSFAM_0 = :6) Or (WSTJ_.TRSFAM_0 = :7))
Order by WSTJ_.STOFCY_0,WSTJ_.UPDCOD_0,WSTJ_.ITMREF_0,WSTJ_.IPTDAT_0
Desc,WSTJ_.MVTSEQ_0,WSTJ_.MVTIND_0
RATIONALE: SQL statement with SQL_ID "7yv1ba0c8y86t" was executed 47
times and had an average elapsed time of 32 seconds.
RATIONALE: Average CPU used per execution was 32 seconds.
RECOMMENDATION 3: SQL Tuning, 5.8% benefit (390 seconds)
ACTION: Run SQL Tuning Advisor on the SQL statement with SQL_ID
"cbtd2nt52qn1c".
RELEVANT OBJECT: SQL statement with SQL_ID cbtd2nt52qn1c and
PLAN_HASH 2897530229
Select DAE_.ROWID, DAE_.*, HAE_.*
From THPR.GACCENTRYD DAE_, THPR.GACCENTRY HAE_ Where ((HAE_.TYP_0(+)
= DAE_.TYP_0) AND (HAE_.NUM_0(+) = DAE_.NUM_0))
And HAE_.CPY_0 = :1 And HAE_.ACCDAT_0 >= :2 And HAE_.ACCDAT_0 <= :3
And DAE_.ACC_0 = :4 And HAE_.FCY_0 >= :5 And HAE_.FCY_0 <= :6
Order by DAE_.BPR_0,DAE_.CUR_0,DAE_.ACC_0
RATIONALE: SQL statement with SQL_ID "cbtd2nt52qn1c" was executed 12980
times and had an average elapsed time of 0.03 seconds.
RATIONALE: Average CPU used per execution was 0.029 seconds.
RECOMMENDATION 4: SQL Tuning, 2.1% benefit (138 seconds)
ACTION: Run SQL Tuning Advisor on the SQL statement with SQL_ID
"33t7fszkr29gy".
RELEVANT OBJECT: SQL statement with SQL_ID 33t7fszkr29gy and
PLAN_HASH 2684283631
Select WSTJ_.ROWID, WSTJ_.*, WMVD_.*
From THPR.STOJOU WSTJ_, THPR.SMVTD WMVD_ Where ((WMVD_.VCRTYP_0(+) =
WSTJ_.VCRTYP_0) AND (WMVD_.VCRNUM_0(+) = WSTJ_.VCRNUM_0) AND
(WMVD_.VCRLIN_0(+) = WSTJ_.VCRLIN_0))
And WMVD_.CCE2_0 = :1 And WSTJ_.IPTDAT_0 <= :2 And
Substr(WMVD_.ITMDES1_0,:"SYS_B_0",:"SYS_B_1") <> :3 And
Substr(WMVD_.ITMDES1_0,:"SYS_B_2",:"SYS_B_3") <> :4 And
Substr(WMVD_.ITMDES1_0,:"SYS_B_4",:"SYS_B_5") <> :5 And
(((WSTJ_.TRSFAM_0 = :6) Or (WSTJ_.TRSFAM_0 = :7)))
Order by WSTJ_.STOFCY_0,WSTJ_.UPDCOD_0,WSTJ_.ITMREF_0,WSTJ_.IPTDAT_0
Desc,WSTJ_.MVTSEQ_0,WSTJ_.MVTIND_0
RATIONALE: SQL statement with SQL_ID "33t7fszkr29gy" was executed 1
times and had an average elapsed time of 136 seconds.
RATIONALE: Average CPU used per execution was 138 seconds.
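The several "Run SQL Tuning Advisor" actions above can be scripted rather than run from Enterprise Manager. A minimal sketch, assuming Enterprise Edition with the Tuning Pack and the ADVISOR privilege; the task name is illustrative, and the SQL_ID is taken from the report:

```sql
-- Sketch only: create, execute and report a tuning task for one SQL_ID
-- from the ADDM report. The task name 'tune_7yv1ba0c8y86t' is made up.
DECLARE
  l_task VARCHAR2(64);
BEGIN
  l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(
              sql_id    => '7yv1ba0c8y86t',
              task_name => 'tune_7yv1ba0c8y86t');
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => l_task);
END;
/
-- Read the advisor's findings and recommendations:
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('tune_7yv1ba0c8y86t') FROM dual;
```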
FINDING 3: 15% impact (1008 seconds)
SQL statements with the same text were not shared because of cursor
environment mismatch. This resulted in additional hard parses which were
consuming significant database time.
RECOMMENDATION 1: Application Analysis, 15% benefit (1008 seconds)
ACTION: Look for top reason for cursor environment mismatch in
V$SQL_SHARED_CURSOR.
ADDITIONAL INFORMATION:
Common causes of environment mismatch are session NLS settings, SQL
trace settings and optimizer parameters.
SYMPTOMS THAT LED TO THE FINDING:
SYMPTOM: Hard parsing of SQL statements was consuming significant
database time. (20% impact [1336 seconds])
SYMPTOM: Contention for latches related to the shared pool was
consuming significant database time. (2% impact [135
seconds])
INFO: Waits for "cursor: pin S wait on X" amounted to 1% of
database time.
SYMPTOM: Wait class "Concurrency" was consuming significant
database time. (2.3% impact [154 seconds])
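The V$SQL_SHARED_CURSOR check recommended above can be done per statement. Note the :"SYS_B_n" binds throughout the report, which indicate CURSOR_SHARING is not set to EXACT; that makes sharing mismatches more likely. A sketch, assuming one of the SQL_IDs from this report; the three mismatch columns shown are common examples from the view, not an exhaustive list:

```sql
-- Sketch only: count why child cursors of one statement were not shared.
SELECT sql_id,
       COUNT(*)                                     child_cursors,
       SUM(DECODE(optimizer_mismatch, 'Y', 1, 0))   optimizer_mismatch,
       SUM(DECODE(language_mismatch,  'Y', 1, 0))   language_mismatch,
       SUM(DECODE(bind_mismatch,      'Y', 1, 0))   bind_mismatch
FROM   v$sql_shared_cursor
WHERE  sql_id = '30rku9qg2y30j'
GROUP  BY sql_id;
```

A column with a high 'Y' count points at the environment difference to chase (NLS settings, optimizer parameters, and so on).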
FINDING 4: 8.5% impact (570 seconds)
Wait class "User I/O" was consuming significant database time.
NO RECOMMENDATIONS AVAILABLE
ADDITIONAL INFORMATION:
Waits for I/O to temporary tablespaces were not consuming significant
database time.
The throughput of the I/O subsystem was not significantly lower than
expected.
FINDING 5: 5.3% impact (355 seconds)
The SGA was inadequately sized, causing additional I/O or hard parses.
RECOMMENDATION 1: DB Configuration, 3.2% benefit (215 seconds)
ACTION: Increase the size of the SGA by setting the parameter
"sga_target" to 1740 M.
ADDITIONAL INFORMATION:
The value of parameter "sga_target" was "1392 M" during the analysis
period.
SYMPTOMS THAT LED TO THE FINDING:
SYMPTOM: Hard parsing of SQL statements was consuming significant
database time. (20% impact [1336 seconds])
SYMPTOM: Contention for latches related to the shared pool was
consuming significant database time. (2% impact [135
seconds])
INFO: Waits for "cursor: pin S wait on X" amounted to 1% of
database time.
SYMPTOM: Wait class "Concurrency" was consuming significant
database time. (2.3% impact [154 seconds])
SYMPTOM: Wait class "User I/O" was consuming significant database time.
(8.5% impact [570 seconds])
INFO: Waits for I/O to temporary tablespaces were not consuming
significant database time.
The throughput of the I/O subsystem was not significantly lower
than expected.
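The SGA resize recommended above is a one-line change, but a sketch only: it assumes an spfile (SCOPE=BOTH), that sga_max_size can accommodate the new value, and on RAC it must be applied to every instance:

```sql
-- Sketch of the ADDM recommendation: grow the SGA from 1392M to 1740M.
ALTER SYSTEM SET sga_target = 1740M SCOPE = BOTH;
```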
FINDING 6: 4.2% impact (281 seconds)
Cursors were getting invalidated due to DDL operations. This resulted in
additional hard parses which were consuming significant database time.
RECOMMENDATION 1: Application Analysis, 4.2% benefit (281 seconds)
ACTION: Investigate appropriateness of DDL operations.
SYMPTOMS THAT LED TO THE FINDING:
SYMPTOM: Hard parsing of SQL statements was consuming significant
database time. (20% impact [1336 seconds])
SYMPTOM: Contention for latches related to the shared pool was
consuming significant database time. (2% impact [135
seconds])
INFO: Waits for "cursor: pin S wait on X" amounted to 1% of
database time.
SYMPTOM: Wait class "Concurrency" was consuming significant
database time. (2.3% impact [154 seconds])
FINDING 7: 4% impact (266 seconds)
Waits on event "log file sync" while performing COMMIT and ROLLBACK operations
were consuming significant database time.
RECOMMENDATION 1: Host Configuration, 4% benefit (266 seconds)
ACTION: Investigate the possibility of improving the performance of I/O
to the online redo log files.
RATIONALE: The average size of writes to the online redo log files was
26 K and the average time per write was 2 milliseconds.
SYMPTOMS THAT LED TO THE FINDING:
SYMPTOM: Wait class "Commit" was consuming significant database time.
(4% impact [266 seconds])
FINDING 8: 2.9% impact (192 seconds)
Soft parsing of SQL statements was consuming significant database time.
RECOMMENDATION 1: Application Analysis, 2.9% benefit (192 seconds)
ACTION: Investigate application logic to keep open the frequently used
cursors. Note that cursors are closed by both cursor close calls and
session disconnects.
RECOMMENDATION 2: DB Configuration, 2.9% benefit (192 seconds)
ACTION: Consider increasing the maximum number of open cursors a session
can have by increasing the value of parameter "open_cursors".
ACTION: Consider increasing the session cursor cache size by increasing
the value of parameter "session_cached_cursors".
RATIONALE: The value of parameter "open_cursors" was "800" during the
analysis period.
RATIONALE: The value of parameter "session_cached_cursors" was "20"
during the analysis period.
SYMPTOMS THAT LED TO THE FINDING:
SYMPTOM: Contention for latches related to the shared pool was consuming
significant database time. (2% impact [135 seconds])
INFO: Waits for "cursor: pin S wait on X" amounted to 1% of database
time.
SYMPTOM: Wait class "Concurrency" was consuming significant database
time. (2.3% impact [154 seconds])
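RECOMMENDATION 2 above can be sketched as follows; the new values are illustrative starting points, not tuned figures, and an spfile is assumed. session_cached_cursors set this way only affects sessions started after the change:

```sql
-- Sketch only: raise the cursor limits flagged by the report
-- (open_cursors was 800, session_cached_cursors was 20).
ALTER SYSTEM SET open_cursors = 1600 SCOPE = BOTH;
ALTER SYSTEM SET session_cached_cursors = 100 SCOPE = SPFILE;
```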
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
ADDITIONAL INFORMATION
Wait class "Application" was not consuming significant database time.
Wait class "Configuration" was not consuming significant database time.
Wait class "Network" was not consuming significant database time.
Session connect and disconnect calls were not consuming significant database
time.
The database's maintenance windows were active during 100% of the analysis
period.
The analysis of I/O performance is based on the default assumption that the
average read time for one database block is 10000 micro-seconds.
An explanation of the terminology used in this report is available when you
run the report with the 'ALL' level of detail.

user12023161 wrote:
I have upgraded 10.2.0.3.0 to 10.2.0.5.0 and am facing the same issue. The database is slow in general after the upgrade compared to 10.2.0.3.0.

Try setting the OPTIMIZER_FEATURES_ENABLE parameter to 10.2.0.3.
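A sketch of that suggestion (the parameter is OPTIMIZER_FEATURES_ENABLE; SCOPE=BOTH assumes an spfile, and it can also be tried at session level first):

```sql
-- Sketch only: pin optimizer behavior to the pre-upgrade release
-- while the regression is investigated.
ALTER SYSTEM SET optimizer_features_enable = '10.2.0.3' SCOPE = BOTH;
```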
Refer to the following link:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams142.htm
-
Slow Problems with Oracle Forms 10g and Oracle Database 11g
Hi, I wonder whether there is a compatibility problem between Oracle Forms 10.1.2.0.2 (32-bit) and Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production. My application runs correctly against Oracle Database 10g, but when we migrated the database to Oracle Database 11g, slowness problems appeared.
Thanks.

We have the same issue with our custom forms and with some of the standard forms in EBSO. So far we have found that a form invoking a view causes ridiculous slowness when opening the form (40 minutes). Using a table access instead has shortened the open time significantly. At this time the Oracle DBAs at OOD have no clear idea why it is happening.
We are on an 11.1 database with 11.5 EBSO.