Database Performance Checks

Oracle : 10.2.0.3
OS : Linux 64 bit
Issue : Slow performance around 11:30 PM reported by the client.
Checks done :
1. Ran an AWR report between 11 PM and 12 midnight.
CPUs: 4, SGA size: 2,000M (100%), buffer cache: 1,584M (79.2%), shared pool: 1,129M (56.4%)
ADDM suggests increasing SGA_TARGET from 2,000 MB to 2,500 MB.
2. Top 5 timed events:
Top 5 Timed Events                                         Avg %Total
~~~~~~~~~~~~~~~~~~                                        wait   Call
Event                                 Waits    Time (s)   (ms)   Time Wait Class
db file scattered read            1,952,811       4,804      2   30.5   User I/O
CPU time                                          3,448          21.9          
db file sequential read             149,712       1,921     13   12.2   User I/O
read by other session               293,022         877      3    5.6   User I/O
log file sync                         9,920         157     16    1.0     Commit
          -------------------------------------------------------------
3. Statistics are up to date.
4. No index rebuild appears to be required (see the note after this list):
SQL> SELECT name,height,lf_rows,del_lf_rows,(del_lf_rows/lf_rows)*100 as ratio FROM INDEX_STATS;

no rows selected

5. On average, about 100 sessions connect to the database.
6. Checked all logs for any disconnection details.
7. The application runs on WebLogic.
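
Note on check 4: INDEX_STATS holds at most one row, for the last index validated in the current session, so that query returns rows only after an ANALYZE INDEX ... VALIDATE STRUCTURE. A minimal sketch, with a placeholder index name:

-- Placeholder index name; VALIDATE STRUCTURE populates INDEX_STATS for this session only
ANALYZE INDEX scott.emp_pk VALIDATE STRUCTURE;

SELECT name, height, lf_rows, del_lf_rows,
       ROUND(del_lf_rows / NULLIF(lf_rows, 0) * 100, 2) AS ratio
FROM   index_stats;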
Questions: From the above observations, how can I certify whether performance is good or slow? The statistics look similar for the different periods over which I ran AWR reports.
As a DBA, what other checks can be done to monitor performance, apart from asking the users?

It's difficult to use AWR or Statspack to "certify" that database performance is good; it depends on what "good performance" means.
Most of the time, application response time is the right metric: database response time is only one part of application response time, and AWR/Statspack cannot easily link the two.
[11.2 Concepts Guide Principles of Application Design and Tuning|http://download.oracle.com/docs/cd/E11882_01/server.112/e10713/cncptdev.htm#CHDEHHIJ] says:
> Define clear performance goals and keep historical records of metrics
> An important facet of development is determining exactly how the application is expected to perform and scale. For example, you should use metrics that include expected user load, transactions per second, acceptable response times, and so on. Good practice dictates that you maintain historical records of performance metrics. In this way, you can monitor performance proactively and reactively (see "Performance Diagnostics and Tuning").
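
If you do want to compare the database's own view of load across the periods you mention, one minimal sketch (assuming AWR snapshots cover the windows of interest and the Diagnostics Pack is licensed) is to trend DB time per snapshot:

-- DB time is cumulative since instance startup (in microseconds), so take the
-- delta between consecutive snapshots; a negative value indicates a restart.
SELECT s.snap_id,
       s.end_interval_time,
       ROUND((t.value - LAG(t.value) OVER (ORDER BY s.snap_id)) / 1e6, 1) AS db_time_sec
FROM   dba_hist_snapshot s
       JOIN dba_hist_sys_time_model t
         ON  t.snap_id = s.snap_id
        AND  t.dbid = s.dbid
        AND  t.instance_number = s.instance_number
WHERE  t.stat_name = 'DB time'
ORDER  BY s.snap_id;

If the "slow" window shows roughly the same DB time and the same top events as other windows, the bottleneck is more likely outside the database.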

Similar Messages

  • Performance Check - ABAP and Database color bars

    Hello everyone,
    When I go for the performance check of my object, I see Database as a red bar and ABAP as a green bar.
    The smaller the difference between these two bars, the better.
    But what does the color mean? (Sometimes the smaller bar is red.)
    Thanks

    Here is the explanation (it depends on the % of total) in [sap library - SE30|http://help.sap.com/saphelp_nw2004s/helpdata/en/c6/617d2ae68c11d2b2ab080009b43351/frameset.htm]
    >
    shalaxy s wrote:
    > The smaller the difference between these two bars, the better.
    No, it depends entirely on what your program does!

  • Regarding Database Performance

    Hi All,
    I have installed *10gR2 on RHEL4 (4 GB RAM, enough disk space)*. One application (Oracle UCM) is running on it; it contains Apache and a content server. After 2-3 weeks, developers reported that opening URLs was taking a long time, so I gathered database statistics (and since then stats are gathered daily via the scheduler). After that it was working fine, but after another week they had the problem again. They do a lot of DML on the DB. I checked at the OS level using the top command, but the oracle user (the entire application is installed as oracle) is not consuming that much memory. pga_aggregate_target is set to about 500M, the SGA is auto-tuned (sga_max_size is 950M), the DB is about 8 GB in size, and workarea_size_policy is AUTO.
    Please suggest any solutions for improving database performance.
    Thanks,
    Manikandan.

    >daily gathering db stats using scheduler
    Done by default on 10g and later.
    >Please suggest any solutions for improving database performance.
    Ready, Fire, Aim!
    Is any OS resource the bottleneck; CPU, RAM, IO, network?
    During slow period what is reported by AWR?
    Please read these:
    When your query takes too long ...
    HOW TO: Post a SQL statement tuning request - template posting
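
    During the slow period, a quick look at what active sessions are waiting on needs nothing beyond v$session (a rough sketch, not a substitute for an AWR or Statspack report):

    -- Sample this a few times while the URLs are slow; a dominant non-idle
    -- event (or none at all) tells you where to look next.
    SELECT event, COUNT(*) AS sessions_waiting
    FROM   v$session
    WHERE  status = 'ACTIVE'
    AND    type = 'USER'
    AND    wait_class <> 'Idle'
    GROUP  BY event
    ORDER  BY sessions_waiting DESC;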
    Edited by: sb92075 on Jul 27, 2010 10:01 AM

  • Error: SNAP_ADT does not exist in the database - manual check required

    Hello Friends,
    We are performing an upgrade + migration to HANA (HDB) using the DMO option (Oracle to HANA).
    We are getting the error below in the phase MAIN_SHDCRE/SUBMOD_SHDDBCLONE/DBCLONE.
    The output of SE11 & SE14 is attached.
    Please help us with the manual check.
    Regards
    Sury

    Hi Sury,
    Regarding the error below:
    1EETGCLN Error: SNAP_ADT does not exist in the database -> manual check
    required
    1EETGCLN SNAP_ADT
    1EETGCLN Table does not exist
    Could you please try to activate the following table via SE11?
    -  SNAP_ADT
    Once it is activated, could you please repeat the phase and let us know the result?
    Thanks and Regards,
    James Wong
    Follow us:
    SAP System Upgrade & Update Troubleshooting Wiki Space.
    SAP Product Support Twitter  ( Hashtag: #NWUPGRADE)

  • Database Performance Problem

    Hi,
    I am running Oracle 10g on Windows and I have:
    SGA - 289406976
    Fixed Size- 1248576
    Variable Size - 96469696
    Database Buffer - 184549376
    Redo Buffer - 7139328
    I am enclosing the init.ora file for better understanding:
    # Cache and I/O
    db_block_size=8192
    db_file_multiblock_read_count=16
    # Cursors and Library Cache
    open_cursors=300
    # Database Identification
    db_domain=""
    db_name=orcl
    # Diagnostics and Statistics
    background_dump_dest=D:\oracle\product\10.2.0/admin/orcl/bdump
    core_dump_dest=D:\oracle\product\10.2.0/admin/orcl/cdump
    user_dump_dest=D:\oracle\product\10.2.0/admin/orcl/udump
    # File Configuration
    control_files=("D:\oracle\product\10.2.0\oradata\orcl\control01.ctl", "D:\oracle\product\10.2.0\oradata\orcl\control02.ctl", "D:\oracle\product\10.2.0\oradata\orcl\control03.ctl")
    db_recovery_file_dest=D:\oracle\product\10.2.0/flash_recovery_area
    db_recovery_file_dest_size=2147483648
    # Job Queues
    job_queue_processes=10
    # Miscellaneous
    compatible=10.2.0.1.0
    # Processes and Sessions
    processes=150
    # SGA Memory
    sga_target=287309824
    # Security and Auditing
    audit_file_dest=D:\oracle\product\10.2.0/admin/orcl/adump
    remote_login_passwordfile=EXCLUSIVE
    # Shared Server
    dispatchers="(PROTOCOL=TCP) (SERVICE=orclXDB)"
    # Sort, Hash Joins, Bitmap Indexes
    pga_aggregate_target=95420416
    # System Managed Undo and Rollback Segments
    undo_management=AUTO
    undo_tablespace=UNDOTBS1
    Total physical memory: 1,037,864
    Available: 206,124
    Kindly explain why the database is running slow. Which parameters should I change in the init.ora so that database performance improves?

    Is only Oracle running slow?
    Are some queries running slow?
    I think you might not be able to increase performance
    by changing only Oracle parameters.
    What kind of programs and services are running on your Windows server?
    Are they disturbing <s>Oracle sleeping</s> Oracle running?
    Please check them first.
    Oops, I'm not a native speaker, so I may have used some words incorrectly.
    Sorry.
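
    To answer "are some queries running slow?" from the database side, one minimal sketch (standard v$sql only; values are cumulative since instance startup) is to list the statements that have consumed the most elapsed time:

    SELECT *
    FROM  (SELECT sql_id,
                  executions,
                  ROUND(elapsed_time / 1e6, 1) AS elapsed_sec,
                  ROUND(elapsed_time / GREATEST(executions, 1) / 1e3, 1) AS avg_ms,
                  SUBSTR(sql_text, 1, 60) AS sql_text
           FROM   v$sql
           ORDER  BY elapsed_time DESC)
    WHERE rownum <= 10;

    If nothing here stands out, the slowness is more likely coming from the machine or the client side than from individual statements.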
    Message was edited by:
            ushitaki

  • Database Health Check Enquiry

    Hi all,
    Some enquiries regarding database health checks. I did some research on health checks and got overwhelmed by the information available out there.
    I don't have any report on hand at the moment, but I am starting to work on one. It will serve as a reference to understand whether there are any database performance issues, and as a report for management.
    I wish to check with you folks: what are the typical things I can look at on a daily basis to understand my database health status, e.g. buffer hit rate, database I/O, etc., especially those that may impact database performance? Or is there a good reference link where I can read up on such health checks?
    Thanks in advance for any input.
    Eugene

    Hi Eugene,
    Well, that's a pretty open question, and I guess you will get a lot of replies which I look forward to monitoring, as there should be some very interesting ones.
    Anyway, let me open with one point that I have found very useful in the past. As regards database performance, you can look as much as you like at the statistics, and a great deal of discretion is required in interpreting them, but the real test for me of how well a database is performing is the user (or application) perception. Are the responses from the database good enough to meet the users' expectations? Check the average response time, for example, and set guidelines for what is acceptable: very good response, very bad response, and so on.
    I use this as a guideline, so once the database is performing in the sense that the user is satisfied (or, better still, happy with the performance), we can gather statistics (from Oracle 10g onwards there are lots of built-in tools such as ADDM and AWR for gathering and storing database statistics) and create baselines. Once we have baselines for a normally performing system, as soon as problems are reported we can run a diagnostic tool like ADDM for that period, compare it against the baseline, and look for striking differences. From there we can start analysing individual numbers, buffer hit ratios, etc. to delve further.
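    As a minimal sketch of the baseline idea (snapshot IDs 100 and 110 are placeholders for a period when users were satisfied; AWR/ADDM require the Diagnostics Pack licence):

    BEGIN
      DBMS_WORKLOAD_REPOSITORY.create_baseline(
        start_snap_id => 100,               -- placeholder snapshot at the start of a "good" period
        end_snap_id   => 110,               -- placeholder snapshot at the end of that period
        baseline_name => 'normal_workload');
    END;
    /

    Later AWR or ADDM reports for a problem period can then be compared against this named baseline.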
    Hope this helps. I am sure lots of other people will chip in.
    Regards

  • Performance Check

    Hi All,
    I changed the ORACLE_HOME by updating oratab and .profile, and also made the corresponding changes in the listener.ora file.
    Earlier my database was pointing to 11.1.0.7.versionA and now it's pointing to 11.1.0.7.versionB.
    This was done in the Dev environment, and I can now query the database properly. I have to do this in all the other databases, but before that I want to ensure it is not hampering database performance.
    How can I check that? Are there any steps/methods to see how the database is performing?
    - Kk

    Thanks for your reply.
    Actually, that's what I wanted to know: will it hamper my database performance? :O
    Since no major activity is happening on CIDEV right now, I don't know how to test the performance. Any queries/methods available to check the performance of the database would be really helpful.
    - Kk
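
    One lightweight option, sketched here assuming only the standard dynamic performance views (no extra tooling), is to watch the instance-level response-time metrics in v$sysmetric while exercising the database on the new home, and compare the numbers with what the old home showed:

    SELECT metric_name, ROUND(value, 2) AS value, metric_unit
    FROM   v$sysmetric
    WHERE  metric_name IN ('Database CPU Time Ratio',
                           'Database Wait Time Ratio',
                           'SQL Service Response Time')
    AND    group_id = 2;   -- the 60-second rolling interval

    An AWR report taken over a comparable workload before and after the switch gives a more complete comparison.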

  • Database Performance Monitoring

    Hi,
    I use Oracle 11.2.0.2.0 on the IBM AIX 6.1 operating system.
    My client/users complain that application processing is taking longer than usual, especially when they implement some modules in their applications.
    When I closely monitor my production (LIVE) database at the time of implementation, I am unable to find any issues on the DB side. So what are all the possible areas to focus on in this situation?
    I also think it is possible that the issue is on the network side (a failure or slow bandwidth).
    So what I really want to know is: are there any monitoring tools or triggers applicable/available for this scenario?
    Looking for helpful answers.
    Regards
    Faiz

    For information, here is my actual scenario.
    In only two out of 200 client branches, the application was taking longer than usual.
    So I enabled trace (TKPROF) for the corresponding sessions, and I also generated and analyzed an AWR report covering that particular time.
    I found no issues on the database side. Later I came to know that the actual issue was on the network side (i.e. network speed was very poor).
    Henceforth, I have been asked that if the problem occurs again, I need to make sure it does not belong to the network before going on to check
    database performance.
    So is there any tool, monitoring script, or package available to make sure the actual problem is not network related before checking DB performance?
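
    As a rough sketch of a check that can help separate network-related waits from database work for the affected sessions (the machine filter 'BRANCH_PC%' is a placeholder for however the affected branch clients identify themselves):

    -- Time is in centiseconds; note that 'SQL*Net message from client' also
    -- includes client think time, so compare affected and unaffected branches
    -- rather than reading it as pure network latency.
    SELECT se.event, se.total_waits,
           ROUND(se.time_waited / 100, 1) AS seconds_waited
    FROM   v$session_event se
           JOIN v$session s ON s.sid = se.sid
    WHERE  s.machine LIKE 'BRANCH_PC%'
    AND    se.event LIKE 'SQL*Net%'
    ORDER  BY se.time_waited DESC;

    Outside the database, a ping/traceroute or an OS-level network trace from the affected branch is usually the quicker way to rule the network in or out.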

  • Database Performance Slow

    Hi to all,
    My database performance has suddenly become slow. My PGA cache hit percentage remains at 96%.
    I will list out the findings...
    Some tables have not been analyzed since Dec 2007. Some tables were never analyzed.
    (If the tables were analyzed, would performance improve in this scenario?)
    PGA allocated is 400 MB, but the maximum PGA allocated since the instance started (11 Nov 08) is 95 MB.
    (I presume we have over-allocated the PGA. Can I reduce it to 200 MB and increase the shared pool and buffer cache by 100 MB each?)
    Memory Configuration:
    Buffer Cache: 504 MB
    Shared Pool: 600 MB
    Java Pool: 24MB
    Large Pool: 24MB
    SGA Max Size is: 1201.72 MB
    PGA Aggregate is: 400 MB
    My database resides on Windows 2003 Server Standard Edition with 4 GB of RAM.
    Please give me suggestions.
    Thanks and Regards,
    Vijayaraghavan K

    Vijayaraghavan Krishnan wrote:
    My database performance has suddenly become slow. My PGA cache hit percentage remains at 96%.
    Some tables have not been analyzed since Dec 2007. Some tables were never analyzed.
    PGA allocated is 400 MB, but the maximum PGA allocated since the instance started (11 Nov 08) is 95 MB.
    (I presume we have over-allocated the PGA. Can I reduce it to 200 MB and increase the shared pool and buffer cache by 100 MB each?)
    You are in an awkward situation - your database is behaving badly, but it has been in an unhealthy state for a very long time, and any "simple" change you make to address the performance could have unpredictable side effects.
    At this moment you have to think at two levels - tactical and strategic.
    Tactical - is there anything you can do in the short term to address the immediate problem.
    Strategic - what is the longer-term plan to sort out the state of the database.
    Strategically, you should be heading for a database with correct indexing, representative data statistics, optimum resource allocation, minimum hacking in the parameter file, and (probably) implementation of "system statistics".
    Tactically, you need to find out which queries (old or new) have suddenly introduced an extra work load, or whether there has been an increase in the number of end-users, or other tasks running on the machine.
    For a quick and dirty approach you could start by checking v$sql every few minutes for recent SQL that might be expensive; or run checks for SQL that has executed a very large number of times, or has used a lot of CPU, or has done a lot of disk I/O or buffer gets.
    You could also install statspack and start taking snapshots hourly at level 7, then run off reports covering intervals when the system is slow - again, a quick check would be to look at the "SQL ordered by .." sections of the report to find the expensive SQL.
    If you are lucky, there will be a few nasty SQL statements that you can identify as responsible for most of your resource usage - then you can decide what to do about them.
    Regarding pga_aggregate_target: this is a value that is available for sharing across all processes; from the name you've used, I think you may be looking at a figure for a single specific process - so I wouldn't reduce the pga_aggregate_target just yet.
    If you want to post a statspack report to the forum, we may be able to make a few further suggestions. (Use the "code" tags - in curly brackets { } - to make the report readable in a fixed font.)
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    "The temptation to form premature theories upon insufficient data is the bane of our profession."
    Sherlock Holmes (Sir Arthur Conan Doyle) in "The Valley of Fear".
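
    For reference, a minimal sketch of the statspack suggestion above (it assumes statspack is already installed and you are connected as the PERFSTAT owner; the hourly schedule mirrors what Oracle's supplied spauto.sql script sets up):

    -- Take one level 7 snapshot now
    EXEC statspack.snap(i_snap_level => 7);

    -- Schedule a level 7 snapshot at the top of every hour
    VARIABLE jobno NUMBER
    BEGIN
      DBMS_JOB.submit(:jobno,
                      'statspack.snap(i_snap_level => 7);',
                      TRUNC(SYSDATE + 1/24, 'HH'),
                      'TRUNC(SYSDATE + 1/24, ''HH'')');
      COMMIT;
    END;
    /

    Reports are then produced with spreport.sql for the snapshot pair that brackets a slow interval.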

  • Database performance: avg. DB time is high

    My system is ECC6, MSSQL 2005 64 bit.
    I got an ERA report from my Solution Manager and it showed that we have a performance problem.
    Our performance overview is red because of the database performance.
    The avg. DB time is 2,768,890.3 ms, which is very high.
    Could you please help us with this issue?
    Leo

    Hi,
    There might be a number of reasons for high DB time.
    Check your I/O statistics, check your table structure, and create indexes if needed for efficient table access.
    You can schedule 'Optimize statistics' in DB13 to keep the statistics up to date, which helps the database find the best possible way to access your data.
    Analyse the DB buffers and adjust them to get optimal performance (this should be done by experienced DB admins).
    Check the below links....
    http://help.sap.com/saphelp_nw70/helpdata/EN/f2/31add7810c11d288ec0000e8200722/frameset.htm
    You can also take help from SAP to analyze your case.
    Regards,
    Yoganand.V

  • Database structure check

    Hello All,
    In the liveCache alert monitor for a production system I am getting a red alert for the node "Database Structure Check". The message is "No data consistency check in the last three months".
    Can anyone please let me know whether I can schedule "Check database structure" from the DB Planning Calendar (LC10) in the production system? Does it have any effect on system performance? If there are any prerequisite steps that need to be done before running "Check database structure", please let me know.
    The Live Cache version that is currently running is 7.6.02   BUILD 014-123-152-175.
    Thanks and Best Regards,
    Sanjay

    Hello Sanjay,
    You can use transaction DB13 or DB13C to plan the Check Data. There are also other possibilities. I think all your questions are answered in the FAQ note on the Check Data procedure.
    Please try this link
    https://websmp230.sap-ag.de/sap(bD1kZSZjPTAwMQ==)/bc/bsp/spn/sapnotes/index2.htm?numm=940420&nlang=&smpsrv=
    If it doesn't work, you can check SAP Note 940420 directly.
    Best regards,
    Oksana Alekseious

  • Oracle 10g Database Health Check!

    Can anyone guide me on the best way to perform a complete database health check?
    Thanks

    Metalink note:
    How to Perform a Healthcheck on the Database - 122669.1

  • Oracle database performance after server reboot

    Hi masters,
    This is not exactly a question, but a discussion. Some statements come from our client that after the weekly reboot of the system, Oracle database performance is low for some time and improves after a while (say 2 days).
    I think this is fairly obvious: at reboot Oracle flushes all the caches and temporary space, so it needs to re-parse SQL statements and perform some disk I/O; it therefore needs some time, and hence performance degrades.
    But at the same time, some people claim that after a reboot their database performance is better than normal for some days. It seems contradictory; that's why I am posting it here.
    What might be the reason behind this? The first case has a valid explanation in hard parsing, but what about the second case?
    Any clarification is highly appreciated.
    Thank you,
    Regards
    VD

    Vikrant,
    You should wait for some time, buddy; it's the weekend ;-).
    >after the weekly reboot of the system, Oracle database performance is low for some time and improves after a while (say 2 days)... at reboot Oracle flushes all the caches and temporary space, so it needs to re-parse SQL statements and perform some disk I/O
    I would start by saying that checking performance when the system has just started is the wrong approach. There is a lot of I/O, parsing, and calculation (related to memory allocation) happening, so there will be delays and poor performance at that time. Parsing is a very simple example; memory allocation is another. Oracle does not allocate at instance startup all of the memory assigned to the memory areas, only the bare minimum needed to start the instance; after startup it keeps allocating memory. So, sure enough, performance at and shortly after startup will differ from performance once the instance has been running for a while and workload information has started coming in.
    It is correct that Oracle loses all the caches with a reboot, since the instance lives in physical memory and a reboot flushes it, including the SGA allocated within it. The temporary tablespace, however, is not freed by a reboot: Oracle keeps the segments allocated even afterwards, which I find rather illogical, but that is how it works now, and it is one reason for larger temporary tablespaces.
    >but at the same time some people claim that after a reboot their database performance is better than normal for some days... what might be the reason behind this?
    This should not come as a surprise once we understand what might be happening. Assume you have undersized caches, for example the shared pool, which is very heavily used. If it is undersized and you are not using automatic memory management, you do not benefit from dynamic resizing of that area. Now, if you do lots of parsing, thanks to badly written queries, you eventually fill the shared pool to its maximum, leaving no room for newly hard-parsed cursors. If you cannot add more memory to it, the only option left is to flush the shared pool (much the same effect as rebooting the database) to make space for new cursors. Performance then looks better because cursors are no longer being flushed out immediately and stay in the shared pool until it fills up again; once it hits its limit, the apparent benefit disappears again. So there are always caveats attached to statements like "I rebuilt my index and it got better" or "I rebooted my server and my queries are much faster now". Most of the time such statements are based on what we have observed, without understanding what actually happened. So I would suggest listening to such statements, but not taking them as a rule of thumb to follow.
    Hope this all makes some sense for you and helps somewhat.
    Aman....
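
    As an illustrative sketch of how to observe the shared-pool behaviour described above (standard v$ views only, no extra licensing assumed):

    -- High or steadily climbing RELOADS for the SQL AREA suggests cursors are
    -- being aged out and re-parsed, the situation described in the reply above.
    SELECT namespace, gets, pins, reloads, invalidations
    FROM   v$librarycache
    WHERE  namespace IN ('SQL AREA', 'TABLE/PROCEDURE');

    -- How much of the shared pool is currently free
    SELECT pool, name, ROUND(bytes / 1024 / 1024, 1) AS mb
    FROM   v$sgastat
    WHERE  pool = 'shared pool'
    AND    name = 'free memory';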

  • Performance checking inside the source code

    Performance checking inside the source code: how do I check it?
    thanks and regards
    chandra sekhar

    I guess you are asking how to check it; here is the answer.
    SQL Trace transaction ST05
    The trace list has many lines that are not related to the SELECT statement in the ABAP program. This is because the execution of any ABAP program requires additional administrative SQL calls. To restrict the list output, use the filter introducing the trace list.
    The trace list contains different SQL statements simultaneously related to the one SELECT statement in the ABAP program. This is because the R/3 Database Interface - a sophisticated component of the R/3 Application Server - maps every Open SQL statement to one or a series of physical database calls and brings it to execution. This mapping, crucial to R/3's performance, depends on the particular call and database system. For example, the SELECT-ENDSELECT loop on the SPFLI table in our test program is mapped to a sequence PREPARE-OPEN-FETCH of physical calls in an Oracle environment.
    The WHERE clause in the trace list's SQL statement is different from the WHERE clause in the ABAP statement. This is because in an R/3 system, a client is a self-contained unit with separate master records and its own set of table data (in commercial, organizational, and technical terms). With ABAP, every Open SQL statement automatically executes within the correct client environment. For this reason, a condition with the actual client code is added to every WHERE clause if a client field is a component of the searched table.
    To see a statement's execution plan, just position the cursor on the PREPARE statement and choose Explain SQL. A detailed explanation of the execution plan depends on the database system in use.

  • Can archive log backup influence database performance?

    Hi,
    Can an archive log backup generally influence database performance? I mean: can users see their queries run slowly during a backup of the archived redo logs?

    Are you asking about backing up the archived redo logs (via RMAN or directly to tape), or about the actual archiving process, where Oracle copies the online redo to disk?
    -- comments on archive process
    Normally the redo log archiving process should have no noticeable effect on database performance. About the only way for the process to have a noticeable performance impact while it is running is if you store all your online redo logs on the same physical disk. You would also want the backup destination to be on a different physical disk.
    Check your alert log to make sure you do not have error messages related to being unable to switch redo logs and checkpoint incomplete messages. These would be an indication that your online redo logs are defined too small and you are trying to cycle around before Oracle has finished archiving the older logs.
    -- comments on archived redo log backup
    Archived redo logs should not be on the same disk as the database, so using RMAN or an OS task to back these files up should not impact Oracle unless your server is itself near capacity and any additional task affects it.
    HTH -- Mark D Powell --
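
    A minimal sketch of the log-switch check mentioned above (standard v$log_history only; the alert log still needs to be checked separately for "checkpoint not complete" messages):

    -- Log switches per hour; a consistently high rate alongside alert-log
    -- warnings suggests the online redo logs are too small.
    SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
           COUNT(*) AS log_switches
    FROM   v$log_history
    GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
    ORDER  BY hour;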
