Database Performance task
Hi,
Who takes on the task of database performance tuning? Is it the BW consultant or someone else?
Thanks
JB
We are actually never involved in database performance tuning. The DBA always handles database tuning according to SAP Notes. We are always involved in loading and query performance issues.
Srinivas.D
Similar Messages
-
I have read all the other cases that relate to this error and cannot get this to work. Running SQL Server 2012 SP1 on Windows Server 2012 R2. Disk space and permissions are fine, but I get the error below when I try to use the Check Database Integrity task within my maintenance plan, on both system and user databases. I have researched this and fragmentation is not the issue. I'm lost at this point and would appreciate at least some steps to try. The databases are not "read only", as I have read this may contribute to the problem. All other maintenance tasks run fine.
Error message from SQL LOG
Check Database integrity on Local server connection
Databases: All system databases
Task start: 2014-01-13T11:00:04.
Task end: 2014-01-13T11:00:04.
Failed:(-1073548784) Executing the query "DBCC CHECKDB(N'master', NOINDEX)
" failed with the following error: "A database snapshot cannot be created because it failed to start.
A database snapshot cannot be created because it failed to start.
MODIFY FILE encountered operating system error 665(The requested operation could not be completed due to a file system limitation) while attempting to expand the physical file 'E:\SQLdata\MSSQL11.MSSQLSERVER\MSSQL\DATA\master.mdf:MSSQL_DBCC9'.
The database snapshot for online checks could not be created. Either the reason is given in a previous error or one of the underlying volumes does not support sparse files or alternate streams. Attempting to get exclusive access to run checks offline.
The database could not be exclusively locked to perform the operation.
Check statement aborted. The database could not be checked as a database snapshot could not be created and the database or table could not be locked. See Books Online for details of when this behavior is expected and what workarounds exist. Also see previous
errors for more details.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.
Error Message from Log File Viewer in SSMS:
Source: Check Database Integrity Task Executing query "USE [ReportServer] ".: 50% complete End Progress Error: 2014-01-13 11:31:54.92 Code: 0xC002F210
Source: Check Database Integrity Task Execute SQL Task Description: Executing the query "DBCC CHECKDB(N'ReportServer') WITH NO_INFOMSGS " failed with the following error: "A database snapshot cannot be created
because it failed to start. A database snapshot cannot be created because it failed to start. MODIFY FILE encountered operating system error 665(The requested operation could not be completed due to a file system limitation) while attempting to
expand the physical file 'E:\SQLdata\MSSQL11.MSSQLSERVER\MSSQL\DATA\ReportServer.mdf:MSSQL_DBCC9'. The database snapshot for online checks could not be created. Either the reason is given in a previous error or one of the underlying volumes does not
support sparse files or alternate streams. Attempting to get exclusive access to run checks offline. The database could not be exclusively locked to perform the operation. Check statement aborted. The database could not be checked as a database
snapshot could not be created and the database or table could not be locked. See Books Online for details of when this behavior is expected and what workarounds exist. Also see previous errors for more details.". Possible failure reasons: Problems with
the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly. End Error Progress: 2014-01-13 11:31:54.93 Source: Check Database Integrity Task
Executing query "USE [ReportServerTempDB] ".: 50% complete End Progress Error: 2014-01-13 11:31:55.02 Code: 0xC002F210 Source: Check Database Integrity Task Execute SQL Task
Description: Executing the query "DBCC CHECKDB(N'ReportServerTempDB') WITH NO_INFOM..." failed with the following error: "A database snapshot cannot be created because it failed to start. A database snapshot cannot be created because
it failed to start. MODIFY FILE encountered operating system error 665(The requested operation could not be completed due to a file system limitation) while attempting to expand the physical file 'E:\SQLdata\MSSQL11.MSSQLSERVER\MSSQL\DATA\ReportServerTempDB.mdf:MSSQL_DBCC9'.
The database snapshot for online checks could not be created. Either the reason is given in a previous error or one of the underlying volumes does not support sparse files or alternate streams. Attempting to get exclusive access to run checks offline.".
Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly. End Error Progress: 2014-01-13 11:31:55.02 Source:
Check Database Integrity Task Executing query "USE [AddressUpload] ".: 50% complete End Progress Error: 2014-01-13 11:31:55.13 Code: 0xC002F210 Source:
Check Database Integrity Task Execute SQL Task Description: Executing the query "DBCC CHECKDB(N'AddressUpload') WITH NO_INFOMSGS " failed with the following error: "A database snapshot cannot be created because
it failed to start. A database snapshot cannot be created because it failed to start. MODIFY FILE encountered operating system error 665(The requested operation could not be completed due to a file system limitation) while attempting to expand
the physical file 'E:\SQLData\MSSQL11.MSSQLSERVER\MSSQL\DATA\database1.mdf:MSSQL_DBCC9'. The database snapshot for online checks could not be created. Either th... The package execution fa... The step failed.
ReFS is NOT supported for use with SQL Server 2012. One such item, which you've stumbled upon, is that alternate streams and sparse files are not implemented in ReFS, and that is what causes these errors. You *could* force the CHECKDB to execute by using WITH TABLOCK, but that requires exclusive access to the database for the duration of the CHECKDB scan, and that's not something I would advise.
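For reference, a sketch of that lock-based workaround: DBCC CHECKDB's documented option is WITH TABLOCK, which makes it take locks instead of creating an internal snapshot (the snapshot is what requires sparse-file support). The database name below is illustrative; run this only in a maintenance window, since it blocks concurrent access:

```sql
-- Run integrity checks with locks instead of a database snapshot,
-- avoiding the sparse-file requirement that ReFS cannot satisfy.
-- Requires near-exclusive access for the duration of the scan.
DBCC CHECKDB (N'ReportServer') WITH TABLOCK, NO_INFOMSGS;
```

The longer-term fix, as noted above, is to move the data files to an NTFS volume rather than relying on this option.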
Sean Gallardy
-
Hi to all,
My database performance has suddenly become slow. My PGA cache hit percentage remains at 96%.
I will list the findings I found:
Some tables have not been analyzed since Dec 2007. Some tables were never analyzed.
(If the tables were analyzed, would performance improve in this scenario?)
PGA allocated is 400 MB, but the maximum PGA allocated since the instance started (11 Nov 08) is only 95 MB.
(I presume we have over-allocated the PGA. Can I reduce it to 200 MB and increase the shared pool and buffer cache by 100 MB each?)
Memory Configuration:
Buffer Cache: 504 MB
Shared Pool: 600 MB
Java Pool: 24MB
Large Pool: 24MB
SGA Max Size is: 1201.72 MB
PGA Aggregate is: 400 MB
My Database resided in Windows 2003 Server Standard Edition with 4GB of RAM.
Please give me suggestions.
Thanks and Regards,
Vijayaraghavan K
Vijayaraghavan Krishnan wrote:
My database performance has suddenly become slow. My PGA cache hit percentage remains at 96%.
Some tables have not been analyzed since Dec 2007. Some tables were never analyzed.
PGA allocated is 400 MB, but the maximum PGA allocated since the instance started (11 Nov 08) is only 95 MB.
(I presume we have over-allocated the PGA. Can I reduce it to 200 MB and increase the shared pool and buffer cache by 100 MB each?)
You are in an awkward situation: your database is behaving badly, but it has been in an unhealthy state for a very long time, and any "simple" change you make to address the performance could have unpredictable side effects.
At this moment you have to think at two levels - tactical and strategic.
Tactical - is there anything you can do in the short term to address the immediate problem.
Strategic - what is the longer-term plan to sort out the state of the database.
Strategically, you should be heading for a database with correct indexing, representative data statistics, optimum resource allocation, minimum hacking in the parameter file, and (probably) implementation of "system statistics".
Tactically, you need to find out which queries (old or new) have suddenly introduced an extra work load, or whether there has been an increase in the number of end-users, or other tasks running on the machine.
For a quick and dirty approach you could start by checking v$sql every few minutes for recent SQL that might be expensive, or run checks for SQL that has executed a very large number of times, or has used a lot of CPU, or has done a lot of disk I/O or buffer gets.
You could also install Statspack and start taking snapshots hourly at level 7, then run off reports covering intervals when the system is slow. Again, a quick check would be to look at the "SQL ordered by ..." sections of the report to find the expensive SQL.
If you are lucky, there will be a few nasty SQL statements that you can identify as responsible for most of your resource usage; then you can decide what to do about them.
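As a sketch of that quick-and-dirty v$sql check (column names assume 10g or later; the row limit is illustrative and thresholds will vary by system):

```sql
-- Top SQL by logical I/O since instance start; also worth sorting
-- by disk_reads, cpu_time, or executions depending on the symptom.
SELECT *
FROM ( SELECT sql_id, executions, buffer_gets, disk_reads, cpu_time
       FROM   v$sql
       ORDER  BY buffer_gets DESC )
WHERE  rownum <= 10;
```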
Regarding pga_aggregate_target: this is a value that is available for sharing across all processes; from the name you've used, I think you may be looking at a figure for a single specific process - so I wouldn't reduce the pga_aggregate_target just yet.
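A hedged way to check the instance-wide PGA figures (rather than a single process's) is v$pgastat, available in 9i and later:

```sql
-- Compare the configured PGA target with actual instance-wide usage.
SELECT name, ROUND(value / 1024 / 1024) AS mb
FROM   v$pgastat
WHERE  name IN ('aggregate PGA target parameter',
                'total PGA allocated',
                'maximum PGA allocated');
```

If "maximum PGA allocated" here really is far below the target across a representative workload period, that supports the over-allocation theory; a per-process figure does not.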
If you want to post a Statspack report to the forum, we may be able to make a few further suggestions. (Use the "code" tags, in curly brackets { }, to make the report readable in a fixed font.)
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk
"The temptation to form premature theories upon insufficient data is the bane of our profession."
Sherlock Holmes (Sir Arthur Conan Doyle) in "The Valley of Fear". -
Can archive log backup influence database performance?
Hi,
can an archive log backup generally influence database performance? I mean, can users see their queries run slowly during a backup of the archived redo logs?
Are you asking about backing up the archived redo logs via RMAN or directly to tape, or about the actual archive process where Oracle writes the online redo to disk?
-- comments on archive process
Normally the redo log archiving process should have no noticeable effect on database performance. About the only way for the process to have a noticeable performance impact while it is running is if you store all your online redo logs on the same physical disk. You would also want the backup to be on a different physical disk.
Check your alert log to make sure you do not have error messages related to being unable to switch redo logs and checkpoint incomplete messages. These would be an indication that your online redo logs are defined too small and you are trying to cycle around before Oracle has finished archiving the older logs.
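Besides the alert log, a quick sanity check on redo log sizing is to count log switches per hour from v$log_history (a sketch; consistently high counts suggest the online logs are too small):

```sql
-- Log switches per hour over the last day.
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
       COUNT(*)                               AS switches
FROM   v$log_history
WHERE  first_time > SYSDATE - 1
GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
ORDER  BY hour;
```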
-- comments on archived redo log backup
Archived redo logs should not be on the same disk as the database, so using RMAN or an OS task to back these files up should not impact Oracle, unless your server is itself near capacity and any additional task affects the server.
HTH -- Mark D Powell -- -
AWR - Database Performance Slow
If my whole database performance is slow, will running an AWR report include current-time statistics from when the DB performance is slow?
The default AWR snapshot interval is 1 hour. So, if you have the default implementation, you will be able to create an AWR report for the period 10am to 11am. It will not reflect what happened or why "slowness" occurred at 10:45. The statistics in the AWR report will be a summation/averaging of all the activity in the entire hour.
You could modify the snapshot interval (using dbms_workload_repository.modify_snapshot_settings) to have Oracle collect snapshots every 15 minutes. But that will apply only after the change has been made. So, if you have slowness subsequently, you will be able to investigate it with the AWR report for that period. But what has been collected in the past at hourly intervals cannot be refined any further.
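A sketch of that interval change (both arguments are in minutes; the retention value is illustrative):

```sql
-- Take AWR snapshots every 15 minutes and keep 8 days of history.
BEGIN
  dbms_workload_repository.modify_snapshot_settings(
    interval  => 15,
    retention => 8 * 24 * 60);
END;
/
```

Note that more frequent snapshots grow the AWR repository faster, so retention may need adjusting to match.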
Hemant K Chitale -
Database performance is very slow
Hi DBA's
Please help me out!
Application users are complaining that database performance is very slow. It's a 10g DB on an IBM AIX server.
If any changes are needed, please post as soon as possible.
Buffer Cache Hit Ratio 94.69
Chained Row Ratio 0
Database CPU Time Ratio 17.21
Database Wait Time Ratio 82.78
Dictionary Cache Hit Ratio 99.38
Execute Parse Ratio -25.6
Get Hit Ratio 70.62
Latch Hit Ratio 99.65
Library Cache Hit Ratio 99.43
Parse CPU to Elapsed Ratio 8.4
Pin Hit Ratio 81.6
Soft-Parse Ratio 94.29
=====================================
NAME TYPE VALUE
cursor_sharing string EXACT
cursor_space_for_time boolean FALSE
nls_currency string
nls_dual_currency string
nls_iso_currency string
open_cursors integer 600
optimizer_secure_view_merging boolean TRUE
session_cached_cursors integer 20
sql92_security boolean FALSE
===========================================================
lock_sga boolean FALSE
pre_page_sga boolean FALSE
sga_max_size big integer 4272M
sga_target big integer 4G
pga_aggregate_target big integer 2980M
Total RAM size is 8 GB.
SQL> select username,sid from v$session where username='WPCPRODUSR';
USERNAME SID
WPCPRODUSR 378
WPCPRODUSR 379
WPCPRODUSR 380
WPCPRODUSR 381
WPCPRODUSR 382
WPCPRODUSR 383
WPCPRODUSR 384
WPCPRODUSR 385
WPCPRODUSR 386
WPCPRODUSR 387
WPCPRODUSR 388
WPCPRODUSR 389
WPCPRODUSR 390
WPCPRODUSR 391
WPCPRODUSR 392
WPCPRODUSR 393
WPCPRODUSR 394
WPCPRODUSR 395
WPCPRODUSR 396
WPCPRODUSR 397
WPCPRODUSR 398
WPCPRODUSR 399
WPCPRODUSR 400
WPCPRODUSR 401
WPCPRODUSR 402
WPCPRODUSR 403
WPCPRODUSR 404
WPCPRODUSR 405
WPCPRODUSR 406
WPCPRODUSR 407
WPCPRODUSR 408
WPCPRODUSR 409
WPCPRODUSR 410
WPCPRODUSR 411
WPCPRODUSR 412
WPCPRODUSR 413
WPCPRODUSR 414
WPCPRODUSR 415
WPCPRODUSR 416
WPCPRODUSR 417
WPCPRODUSR 418
WPCPRODUSR 419
WPCPRODUSR 420
WPCPRODUSR 421
WPCPRODUSR 422
WPCPRODUSR 423
WPCPRODUSR 424
WPCPRODUSR 425
WPCPRODUSR 426
WPCPRODUSR 427
WPCPRODUSR 428
WPCPRODUSR 429
WPCPRODUSR 430
WPCPRODUSR 431
WPCPRODUSR 432
WPCPRODUSR 433
WPCPRODUSR 434
WPCPRODUSR 435
WPCPRODUSR 436
WPCPRODUSR 437
WPCPRODUSR 438
WPCPRODUSR 439
WPCPRODUSR 440
WPCPRODUSR 441
WPCPRODUSR 442
WPCPRODUSR 443
WPCPRODUSR 444
WPCPRODUSR 445
WPCPRODUSR 446
WPCPRODUSR 447
WPCPRODUSR 448
WPCPRODUSR 449
WPCPRODUSR 450
WPCPRODUSR 451
WPCPRODUSR 452
WPCPRODUSR 453
WPCPRODUSR 454
WPCPRODUSR 455
WPCPRODUSR 456
WPCPRODUSR 457
WPCPRODUSR 458
WPCPRODUSR 459
WPCPRODUSR 460
WPCPRODUSR 461
WPCPRODUSR 462
WPCPRODUSR 463
WPCPRODUSR 464
WPCPRODUSR 465
WPCPRODUSR 466
WPCPRODUSR 467
WPCPRODUSR 468
WPCPRODUSR 469
WPCPRODUSR 470
WPCPRODUSR 471
WPCPRODUSR 472
WPCPRODUSR 473
WPCPRODUSR 474
WPCPRODUSR 475
WPCPRODUSR 476
WPCPRODUSR 477
WPCPRODUSR 478
WPCPRODUSR 479
WPCPRODUSR 480
WPCPRODUSR 481
WPCPRODUSR 482
WPCPRODUSR 483
WPCPRODUSR 484
WPCPRODUSR 485
WPCPRODUSR 486
WPCPRODUSR 487
WPCPRODUSR 488
WPCPRODUSR 489
WPCPRODUSR 490
WPCPRODUSR 491
WPCPRODUSR 492
WPCPRODUSR 493
WPCPRODUSR 494
WPCPRODUSR 495
WPCPRODUSR 496
WPCPRODUSR 497
WPCPRODUSR 498
WPCPRODUSR 499
WPCPRODUSR 500
WPCPRODUSR 501
WPCPRODUSR 502
WPCPRODUSR 503
WPCPRODUSR 504
WPCPRODUSR 505
WPCPRODUSR 506
WPCPRODUSR 507
WPCPRODUSR 508
WPCPRODUSR 509
WPCPRODUSR 510
WPCPRODUSR 511
WPCPRODUSR 512
WPCPRODUSR 513
WPCPRODUSR 514
WPCPRODUSR 515
WPCPRODUSR 516
WPCPRODUSR 517
WPCPRODUSR 518
WPCPRODUSR 519
WPCPRODUSR 520
WPCPRODUSR 521
WPCPRODUSR 522
WPCPRODUSR 523
WPCPRODUSR 524
WPCPRODUSR 525
148 rows selected. -
Regarding Database Performance
Hi All,
I have installed *10gR2 on RHEL4* (4 GB RAM; disk space is sufficient). One application (Oracle UCM) is running on it; it contains Apache and a content server. After 2-3 weeks, developers were saying that opening the URL took a long time, so I gathered database statistics (and since then have been gathering DB stats daily using the scheduler). After that it was working fine, but after another week they are having the problem again. They are doing a lot of DML on the DB. I checked at the OS level using the top command, but the oracle user (the entire application is installed as oracle) is not consuming much memory. pga_aggregate_target is set to about 500M. The SGA (sga_max_size 950M) is auto-tuned. The DB is 8 GB in size. workarea_size_policy is AUTO.
Please suggest any solutions for improving database performance.
Thanks,
Manikandan.
daily gathering db stats using scheduler
Done by default on V10+.
Please suggest any solutions for improving database performance.
Ready, Fire, Aim!
Is any OS resource the bottleneck; CPU, RAM, IO, network?
During slow period what is reported by AWR?
Please read these:
When your query takes too long
When your query takes too long ...
How to Post a SQL statement tuning request
HOW TO: Post a SQL statement tuning request - template posting
Edited by: sb92075 on Jul 27, 2010 10:01 AM -
Database performance degradation issue
Hi,
We are having the database performance related problem.
Oracle database 8.1.7.0
when we run the statement:
SQL> select name,value from v$sysstat where name ='redo buffer allocation retries';
NAME VALUE
redo buffer allocation retries 2540
Here, the redo retries value shown above is too big; it should not be.
Currently we have log_buffer = 65536 bytes (64 KB).
Is it necessary to increase the size of log_buffer? Will increasing the size of log_buffer improve the database performance issue to some extent?
Also, regarding database buffer cache,
SQL> SELECT NAME, VALUE FROM V$SYSSTAT WHERE NAME IN ('db block gets', 'consistent gets', 'physical reads');
NAME VALUE
db block gets 4365099
consistent gets 1309280457
physical reads 103708616
From the above values, buffer cache hit ratio is 0.921052817
So, is it necessary to increase the size of database buffer cache ?
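For reference, the hit ratio quoted above follows the conventional formula 1 - physical reads / (db block gets + consistent gets), which can be computed in a single query against v$sysstat (a sketch):

```sql
-- Conventional buffer cache hit ratio; with the figures above this
-- evaluates to roughly 0.921.
SELECT 1 - phy.value / (db.value + con.value) AS buffer_cache_hit_ratio
FROM   v$sysstat phy, v$sysstat db, v$sysstat con
WHERE  phy.name = 'physical reads'
AND    db.name  = 'db block gets'
AND    con.name = 'consistent gets';
```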
With Regards
A log_buffer of 64 KB is likely too small. The default is 512 KB per CPU.
Increasing the log buffer will decrease the number of redo allocation retries.
You should set it to 512 KB or 1 MB.
Buffer Cache Hit Ratio is a Meaningless Indicator of the Performance of the System, as Connor McDonald has demonstrated on http://www.oracledba.co.uk
You'd better strive to reduce I/O.
Also, you will notice that you need very large amounts of memory to get very little improvement.
Personally I would probably do something if the BCHR were below 80 percent, but I know of situations where the problem is in the application and no value of db_block_buffers will be big enough.
Hth
Sybrand Bakker
Senior Oracle DBA -
Switching from PC to Mac mini, enough power to perform tasks?
Switching from PC to Mac mini, enough power to perform tasks?
Tasks: archiving slides; scans at 9600 DPI result in 2.5 GB files, and some of the scans are around 60 MB files. Also intend to edit video. Mostly working in Photoshop CS5.
Mini configuration:
i7 processor with discrete video memory
500GB HD (with external USB 3 storage)
4x1GB RAM and add another 4x1 later
PC Dell XPS 8300 configuration:
i7 processor
500GB HD (with external USB 3 storage)
12-16GB RAM
Don't know if the mini will be enough of a machine. It can only go up to 8 GB RAM, but I know the architecture is different. Also, does the mini run cool and quiet? This is a home environment.
PS: the operating system is Mac OS X; I don't know about the (10.7.2).
I have both the new Mini Core i5 and my existing Gateway Core 2 Quad PC, both running a 64-bit OS. If you're running a 32-bit OS on a PC, only the first 4 GB is recognized and utilized. The Mini is on Lion, the PC is running Vista 64, and I do digital media work both for a living and for fun. First things first:
You need at least 8 GB of RAM to start with the Mini if you're going to deal with HD video and work with CS5 in a 64-bit environment. 8 GB of RAM on the Mini just flies, and 16 GB is even better; either is noticeably faster than 4 GB, especially if you are working with 64-bit-compliant applications, as I am. My Mini came stock with only 2 GB of RAM, but a pair of Corsair Mac-certified RAM modules I bought from a PC store did the trick. According to OWC (macsales.com), the Mini can be upgraded to 16 GB of RAM, even though Apple's official stance is 8 GB.
Secondly, throw in an SSD (solid state drive); this will speed up boot and applications, plus act as page-memory space in case 8 GB of RAM isn't enough, which is already the case for me as I am using 64-bit applications to deal with RAW images and HD videos. But because SSD drives are fast, there isn't any lag at all on the Mac. I have a SandForce 6G SSD installed in my Mini as a second drive (the mid-2011 Mini has 2 drive bays inside).
With the PC, however, you need at least 8 GB of RAM, and even then my PC complains that it is running low on memory when I'm working with DxO Optics Pro 7 Elite on RAW files. So the Mac is more memory-efficient than the PC. My PC also has a pair of SandForce drives RAIDed in 0 mode to help with memory paging and overall performance. The beauty of the PC is that you have lots of internal bays where you can configure the drives in RAID mode, especially RAID 0. With the Mac Mini you have only a few choices; there is no option to add USB 3 storage unless you go the Sonnet Thunderbolt-to-ExpressCard/34 USB 3 route, which is severely limited by the ExpressCard/34 bus speed.
However, with Thunderbolt you get faster-than-USB-3 port speeds, and it is the future. You can buy the Pegasus RAID array for Thunderbolt. It is pricey, but worth the investment.
Last but not least: the Mini mostly runs cool if not pushed. If it is pushed, it runs warm enough to keep my coffee warm when placed on top of the aluminium case. At least it runs a lot quieter than my PC, with those fans spinning at insane speeds when rendering RAW and HD video (both CPU and GPU).
Hope this helps. -
Oracle 11g database performance tuning
How do I index an Oracle 11g database?
Thanks in advance.
Your question is like asking "Tell me how to fix a car." As you know, people spend years learning how to fix different problems with cars; the same applies to database performance tuning. There is no way to answer this question in one post. Please ask a more specific question in an appropriate (database-related) forum.
cheers -
Hi,
I am running Oracle 10g on Windows and I have:
SGA - 289406976
Fixed Size- 1248576
Variable Size - 96469696
Database Buffer - 184549376
Redo Buffer - 7139328
I am enclosing the init.ora file for better understanding:
# Cache and I/O
db_block_size=8192
db_file_multiblock_read_count=16
# Cursors and Library Cache
open_cursors=300
# Database Identification
db_domain=""
db_name=orcl
# Diagnostics and Statistics
background_dump_dest=D:\oracle\product\10.2.0/admin/orcl/bdump
core_dump_dest=D:\oracle\product\10.2.0/admin/orcl/cdump
user_dump_dest=D:\oracle\product\10.2.0/admin/orcl/udump
# File Configuration
control_files=("D:\oracle\product\10.2.0\oradata\orcl\control01.ctl", "D:\oracle\product\10.2.0\oradata\orcl\control02.ctl", "D:\oracle\product\10.2.0\oradata\orcl\control03.ctl")
db_recovery_file_dest=D:\oracle\product\10.2.0/flash_recovery_area
db_recovery_file_dest_size=2147483648
# Job Queues
job_queue_processes=10
# Miscellaneous
compatible=10.2.0.1.0
# Processes and Sessions
processes=150
# SGA Memory
sga_target=287309824
# Security and Auditing
audit_file_dest=D:\oracle\product\10.2.0/admin/orcl/adump
remote_login_passwordfile=EXCLUSIVE
# Shared Server
dispatchers="(PROTOCOL=TCP) (SERVICE=orclXDB)"
# Sort, Hash Joins, Bitmap Indexes
pga_aggregate_target=95420416
# System Managed Undo and Rollback Segments
undo_management=AUTO
undo_tablespace=UNDOTBS1
and the Total Physical Memory - 1037864
Available - 206124
Kindly explain why the database is running slow. Please tell me which parameters I should change in init.ora so that database performance increases.
Is only Oracle running slow?
Are some query running slow?
I think that you might not be able to increase performance by changing Oracle parameters alone.
What kind of programs and services are running on your Windows?
Are they disturbing <s>Oracle sleeping</s> Oracle running?
Please check them first.
Oops, I'm not a native speaker, so I may have misused some words.
Sorry.
Message was edited by:
ushitaki -
Can anyone help me? I just wanted to know which are the best and most precise sites on the internet where I can find info about Oracle 8i database performance monitoring, correction, best practices, and prevention. Many times our database performance goes down and we need to troubleshoot and correct the problems.
Thanks
sandeep
Hi sue..
please send the papers to me also..my mail id [email protected]
bye bye
subbu
The Oracle Performance and Tuning class is very good, at least it was when I took it around 2 years ago.
Oracle Performance Tuning by Mark Gurry and Peter Corrigan is good but is probably too deep for someone new to Oracle. (It's too deep for me most of the time, and I've worked with Oracle for 2.5 years now.)
I could also send you a paper that gives a fairly good explanation of the use of V$SESSION_WAIT, V$SESSION_EVENT, and V$SYSTEM_EVENT. V$SESSION_WAIT tells you which application is waiting and what wait event is occurring.
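As a minimal sketch of the kind of V$SESSION_WAIT query such a paper would describe (the idle-event filter is illustrative and incomplete):

```sql
-- What each session is currently waiting on, longest waits first.
SELECT sid, event, wait_time, seconds_in_wait, state
FROM   v$session_wait
WHERE  event NOT LIKE 'SQL*Net%'   -- skip common idle network waits
ORDER  BY seconds_in_wait DESC;
```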
Let me know if you want it. -
Database Performance Evaluation Benchmarking Tuning
Does anyone by chance have any articles or websites that deal with Oracle (or generic) database performance evaluation, benchmarking, and/or tuning for a STANDALONE PC database installation? I have Oracle 10g installed on my personal machine and I want to find information that will help me place the performance evaluation in a more professional context. Any links relating to this would be very appreciated.
regards,
John
Why do you expect there to be a difference in evaluating performance for a standalone PC database installation vs. a server database installation? Other than the fact that, presumably, you won't find network events in the top wait events, I'm hard-pressed to think of any differences.
You would evaluate performance and tune the system the same way on a standalone database as on a database on a server box-- you would figure out what operations are important, figure out how quickly those operations need to run, figure out which of the important operations are running slowly, figure out what those operations are spending time doing, and then figure out how to reduce the runtime. Of course, each of those steps can potentially be rather involved. There are plenty of articles and books on performance tuning-- Oracle has a few manuals, Jonathan Lewis's book on the cost based optimizer is excellent, Cary Millsap's optimization book is top flight, etc.
As for benchmarking, unless the intention is to run something like the TPC benchmark on your desktop, which would seem odd, your benchmark is generally closely tied to your application-- i.e. figuring out how quickly the system performs a particular business operation. Generic benchmarks like TPC tend not to be particularly useful in the real world because they are unlikely to mimic your real workload.
Justin -
Database Performance Monitoring
Hi,
I use oracle 11.2.0.2.0,IBM AIX 6.1 operating System.
My client/users are complaining that the application process is taking longer than usual, especially when they implement some modules in their applications.
When I closely monitored my production (LIVE) database at the time of implementation, I was unable to find any issues on the DB side. So what are all the possible areas to focus on in this situation?
I really think it is also possible that the issue is a network failure or slow bandwidth.
So what I really want to know is: is there any monitoring tool or trigger applicable/available for this scenario?
Looking for Helpful Answers..
Regards
Faiz
For information, here I post my actual scenario:
In only two out of 200 client branches, the application was taking longer than usual. So I enabled trace (TKPROF) for the corresponding sessions, and also generated and analyzed an AWR report during that particular time.
I found no issues on the database side. Later I came to know that the actual issue was on the network side (i.e., network speed was very poor).
So henceforth, if the problem persists again, I have been asked to make sure that the problem is not on the network before going on to check database performance.
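For reference, the per-session tracing mentioned above can be enabled with DBMS_MONITOR on 10g and later (the session identifiers below are illustrative; find real ones in v$session):

```sql
-- Enable extended SQL trace (including wait events) for one session,
-- then analyze the resulting trace file with tkprof.
BEGIN
  dbms_monitor.session_trace_enable(
    session_id => 123,
    serial_num => 456,
    waits      => TRUE,
    binds      => FALSE);
END;
/
-- When finished: dbms_monitor.session_trace_disable(123, 456);
```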
Is there any tool, monitoring script, or package available to verify that the actual problem is not network-related, before checking DB performance? -
Oracle : 10.2.0.3
OS : Linux 64 bit
Issue : Client complains of slow performance at 11:30 PM.
Checks done :
1. Ran AWR between 11 PM and 12 Noon.
CPUs : 4 SGA Size : 2,000M (100%) Buffer Cache : 1,584M (79.2%) Shared Pool 1,129M (56.4%)
ADDM suggest SGA_TARGET to increase from 2000MB to 2500MB.
2. top 5 events
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time Wait Class
db file scattered read 1,952,811 4,804 2 30.5 User I/O
CPU time 3,448 21.9
db file sequential read 149,712 1,921 13 12.2 User I/O
read by other session 293,022 877 3 5.6 User I/O
log file sync 9,920 157 16 1.0 Commit
-------------------------------------------------------------
3. Stats are up to date.
4. Index rebuild requirement is not there
SQL> SELECT name,height,lf_rows,del_lf_rows,(del_lf_rows/lf_rows)*100 as ratio FROM INDEX_STATS;
no rows selected
5. On average, 100 sessions connect to the database
6. Checked all logs for any disconnection details
7. Application is running from WebLogic
Questions : How can I certify whether performance is good or slow from the above observations? The statistics seem similar for the different periods where I ran the AWR report.
: Other than the above, as a DBA, what other checks can be done to monitor the performance?
It's difficult to use AWR or Statspack to "certify" that database performance is good. It just depends what "performance is good" means.
Most of the time it's application response time which is the right metric: database response time is only a part of application response time and AWR/Statspack cannot easily link database response time and application response time.
[11.2 Concepts Guide Principles of Application Design and Tuning| http://download.oracle.com/docs/cd/E11882_01/server.112/e10713/cncptdev.htm#CHDEHHIJ] says
>
Define clear performance goals and keep historical records of metrics
An important facet of development is determining exactly how the application is expected to perform and scale. For example, you should use metrics that include expected user load, transactions per second, acceptable response times, and so on. Good practice dictates that you maintain historical records of performance metrics. In this way, you can monitor performance proactively and reactively (see "Performance Diagnostics and Tuning").