STATSPACK vs AWR

Hello All,
I am using Oracle 11g R2.
I am new to STATSPACK and have a few questions:
Is it only a command-line tool, or can it be accessed through Oracle Enterprise Manager?
Can you provide details on how to generate snapshots and reports in STATSPACK?
Regards,

NB wrote:
Hello All,
I am using Oracle 11g R2.
I am new to STATSPACK and have a few questions:
Is it only a command-line tool, or can it be accessed through Oracle Enterprise Manager?
Can you provide details on how to generate snapshots and reports in STATSPACK?
Regards,

You don't need to take STATSPACK reports any more in 11gR2; use AWR instead. AWR is more advanced and covers almost everything you need to know about what is running in the instance.
Yes, you can generate an AWR report from OEM too.
From the SQL prompt you can run:
SQL> @?/rdbms/admin/awrrpt.sql
(specify the begin and end snapshot IDs)
You can have an HTML or text file as output, and it will be created in your current OS directory. Hope this helps.
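If you want to control the snapshot boundaries yourself, you can also take a manual AWR snapshot and list the available ones before running awrrpt.sql. A minimal sketch (requires a live database and the appropriate privileges):

```sql
-- Take a manual AWR snapshot, in addition to the hourly automatic ones
EXEC dbms_workload_repository.create_snapshot;

-- List recent snapshots so you know which begin/end IDs to give awrrpt.sql
SELECT snap_id, begin_interval_time, end_interval_time
FROM   dba_hist_snapshot
ORDER  BY snap_id;
```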

Similar Messages

  • Is statspack and awr report are same?

Hi All,
Are STATSPACK and AWR reports the same? And in which Oracle versions is each available?
    Thanks,
    Vishal

    I wouldn't expect AWR or statspack to be able to generate snapshots or to run reports while the database is in restricted mode. I've never tested it, but I wouldn't expect it to work.
    Restricted mode is not normally used to allow normal users to test a database. Normally, if you're upgrading a production database, you have already validated that the upgrade works correctly by upgrading and testing the dev, test, and potentially staging databases. So when it comes to upgrading prod, you do the upgrade, verify that the upgrade scripts didn't throw any errors, and open up the database for users to do some quick validation. If there is some fear that users logging in would "mess up the database", that implies that you don't have enough confidence in the upgrade to even think about upgrading prod.
    Occasionally, you'll have a standby database that gets upgraded and opened up and a select number of users given information on how to connect to the new database. Those users would verify the new database and then some sort of switchover would take place to move everyone from the old system to the new one. That generally requires a lot more work, though, because you have to replicate data during the parallel production phase.
    Justin

  • STATSPACK vs AWR - 10g DB

    Hi,
I am trying to rewrite a custom Oracle DB statistics report that used to run against PERFSTAT STATSPACK tables in 9i; it no longer runs in our 10g RAC because we now use AWR. Besides a lot of custom queries, we used STATSPACK.STAT_CHANGES to capture a long list of parameters into our custom tables, but I don't see such a procedure in 10g AWR. I checked the dbms_workload_repository package, but it doesn't help. Is there a replacement for that procedure in AWR to get those parameters?
Also, is there any document with the ER model of AWR, for better understanding, and a comparative study against STATSPACK to help us rewrite the existing programs?
    Appreciate any help
    cpa

All of the queries in the AWR report come from the DBA_HIST tables and are pulled together by the DBMS_SWRF_REPORT_INTERNAL package body.
AWR is not really different from STATSPACK; the main differences are the data collection and repository architecture (the AWR repository) and how that data is utilized (advisors and metrics).
For the AWR tables (DBA_HIST) there are counterpart tables in STATSPACK, so you can possibly achieve similar results:
    dba_hist_snapshot = STATS$SNAPSHOT
    dba_hist_osstat = STATS$OSSTAT
    dba_hist_sys_time_model = STATS$SYS_TIME_MODEL
    dba_hist_sysstat = STATS$SYSSTAT
Another issue you may hit when porting STATSPACK reports to AWR is the data-type difference between SNAP_TIME and END_INTERVAL_TIME:
SNAP_TIME is for STATSPACK and is based on the DATE data type.
END_INTERVAL_TIME is for AWR and is based on the TIMESTAMP data type.
    So my simple query to get the SNAP duration in minutes in Statspack is this:
    (s1.snap_time-s0.snap_time)*24*60 dur
    While on AWR I have to do this:
    round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
    + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60
    + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME)
    + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) dur
    You can see my example here http://karlarao.wordpress.com/2010/01/31/workload-characterization-using-dba_hist-tables-and-ksar/
    - Karl Arao
    karlarao.wordpress.com
    karlarao.tiddlyspot.com

  • Statspack Best Practices

    Hello Everyone:
Common sense tells me that (within reason) STATSPACK snapshots should be taken fairly frequently. I have a set of users who are challenging that notion, saying that STATSPACK is spiking the system and slowing them down, and so they want me to take snapshots only every 12 hours.
I remember seeing a document (I thought it was on MetaLink, but I'm not sure) that spoke of best practices for STATSPACK snapshots. My customers want to limit me to one snapshot every 12 hours, and I contend that I might as well not run it at all with that window.
Can someone point me to best-practice or other documentation that will support my contentions that:
1) STATSPACK is NOT a resource hog, and
2) twice a day is not going to provide meaningful data.
    Thanks,
    Mike
    P.S. If I'm all wet, and you know it, I'd like to see that documentation, too!

    Hi Mike,
"saying that Statspack is spiking the system and slowing them down"
I wrote both of the Oracle Press STATSPACK books and I've NEVER seen STATSPACK cause a burden. Remember, a "snapshot" is a simple dump of the X$ memory structures into tables; very fast . . .
"they want me to only take snapshots every 12 hours."
Why bother? STATSPACK and AWR reports are elapsed-time reports, and long-duration reports are seldom useful . . . .
    An important thing to remember is that even if statistics are gathered too frequently with STATSPACK, reporting can always be done on a larger time window. For example, if snapshots are at five-minute intervals and there is a report that takes 30 minutes to run, that report may or may not be slow during any given five-minute period.
    After looking at the five-minute windows, the DBA can decide to look at a 30-minute window and then run a report that spans six individual five-minute windows. The moral of the story is to err on the side of sampling too often rather than not often enough.
I have over 600 pages dedicated to STATSPACK and AWR analysis at the link below, if you want a super-detailed explanation:
    http://www.rampant-books.com/book_2005_1_awr_proactive_tuning.htm
I'm not as authoritative as the documentation, but even hourly snapshot intervals can cause a loss of performance detail.
    Ah, this Oracle Best Practices document may help:
    http://www.oracle.com/technology/products/manageability/database/pdf/ow05/PS_S998_273998_106-1_FIN_v1.pdf
"By default, every hour a snapshot of all workload and statistics information is taken and stored in the AWR. The data is retained for 7 days by default, and both the snapshot interval and retention settings are user-configurable."
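Those defaults can be changed through the dbms_workload_repository package. A sketch, to be run as a privileged user; the values chosen here (30 days of retention, 30-minute interval) are just examples:

```sql
-- Keep 30 days of AWR data and snapshot every 30 minutes
-- (both arguments are in minutes; requires DBA privileges)
BEGIN
  dbms_workload_repository.modify_snapshot_settings(
    retention => 43200,   -- 30 days * 24 hours * 60 minutes
    interval  => 30);
END;
/
```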
    Hope this helps. . .
    Donald K. Burleson
    Oracle Press author

  • STATSPACK ANALYSIS TOOL

Hi guys,
I need to analyze my STATSPACK report on my own. Is there a free tool for analyzing the report myself?
    TIA,

    Hi,
    You could take a look at a great set of statspack and AWR analysis tools here http://www.statsviewer.netfirms.com/
    Cheers

  • Statspack for slow database

    Guys,
This is the first time I have generated a STATSPACK report, as users were complaining of slow response from the database.
The report was taken while the long-running reports were being executed.
This is my STATSPACK report.

    Hi,
FYI, some tuning gurus are developing a free open-source tool to help analyze STATSPACK and AWR reports, which is helpful for beginners:
http://www.statspackanalyzer.com
Also, it may not be a good idea to publish your production SQL here (or anywhere else); I've seen DBAs get canned for disclosing proprietary details about their employer's database . . . .
Also, a short snapshot duration is not so bad, and I often take one-minute snaps when diagnosing a system. Anyhow, here are some observations from SP analyzer:
    Hope this helps. . . .
    Donald K. Burleson
    Author of "Oracle 9i High Performance Tuning with STATSPACK" by Oracle Press
    You have enabled system-level parallel query. This can influence the cost-based optimizer to favor full-table scans over index access. Consider using parallel hints instead, or invoking parallelism at the session level.
You may have an application issue causing excessive rollbacks, with 47.62% rollbacks per transaction. Because Oracle assumes a transaction will commit, the rollback process is very expensive and should be used only when necessary. You can identify the specific SQL and user sessions executing the rollbacks by querying the v$sesstat view.
Remember that some applications may automatically perform rollback operations (commit-then-rollback or rollback-then-exit) after each commit. If this is the case, speak with your application developers to find out whether there is a way to disable this. While these "empty rollbacks" do not incur a performance expense, they will cause this metric to appear very high.
    You have high latch free waits of 1.3 per transaction. The latch free wait occurs when the process is waiting for a latch held by another process. Check the later section for the specific latch waits. Latch free waits are usually due to SQL without bind variables, but buffer chains and redo generation can also cause them.
    You have 27,717.0 consistent gets examination per second. "Consistent gets - examination" is different than regular consistent gets. It is used to read undo blocks for consistent read purposes, but also for the first part of an index read and hash cluster I/O.
You have 203,044 table fetch continued row actions during this period. Migrated/chained rows always double the I/O for a row fetch, and "table fetch continued row" (chained-row fetch) happens when we fetch BLOB/CLOB columns (if avg_row_len > db_block_size), when we have tables with more than 255 columns, and when PCTFREE is too small. You may need to reorganize the affected tables with the dbms_redefinition utility and reset PCTFREE to prevent future row chaining.
    You have 1.1 long table full-table scans per second. This might indicate missing indexes, and you can run plan9i.sql to identify the specific tables and investigate the SQL to see if an index scan might result in faster execution. If your large table full table scans are legitimate, look at optimizing your db_file_multiblock_read_count parameter.
    You have high small table full-table scans, at 1.7 per second. Verify that your KEEP pool is sized properly to cache frequently referenced tables and indexes.
    You are not using your KEEP pool to cache frequently referenced tables and indexes. This may cause unnecessary I/O. When configured properly, the KEEP pool guarantees full caching of popular tables and indexes. Remember, an average buffer get is often 100 times faster than a disk read.
    Any table or index that consumes > 10% of the data buffer, or tables & indexes that have > 50% of their blocks residing in the data buffer should be cached into the KEEP pool. You can fully automate this process using scripts.

  • Explain statspack values for tablespace & file IO

    10.2.0.2 aix 5.2 64bit
In the Tablespace IO Stats and File IO Stats sections of STATSPACK and AWR reports, can someone help clear up some confusion I have with the values for Av Reads/s and Av Rd(ms)? I'll reference values from one of my reports over a 1-hour snapshot period, with the first three columns being Reads, Av Reads/s, and Av Rd(ms) respectively in both sections.
    For Tablespace IO I have the following.
    PRODGLDTAI
    466,879 130 3.9 1.0 8,443 2 0 0.0
    For File IO I have the following for each file within this tablespace.
    PRODGLDTAI /jdb10/oradata/jde/b7333/prodgldtai04.dbf
    113,530 32 2.6 1.0 1,302 0 0 0.0
    PRODGLDTAI /jdb14/oradata/jde/b7333/prodgldtai03.dbf
    107,878 30 1.6 1.0 1,898 1 0 0.0
    PRODGLDTAI /jdb5/oradata/jde/b7333/prodgldtai01.dbf
    114,234 32 5.8 1.0 2,834 1 0 0.0
    PRODGLDTAI /jdb5/oradata/jde/b7333/prodgldtai02.dbf
    131,237 36 5.2 1.0 2,409 1 0 0.0
From this I can calculate that there were on average 129.68 reads per second for the tablespace, which matches what is listed. But where does the Av Rd(ms) come from? If there are 1,000 milliseconds in a second and there were 130 reads per second, doesn't that work out to 7.6 ms per read?
What exactly is Av Rd(ms)? Is it how many milliseconds one read takes on average? I've read in the Oracle Performance Tuning Guide that it shouldn't be higher than 20. What exactly is this statistic? Also, we are currently looking at purchasing a SAN, and we were told that value shouldn't be above 10; is that just a matter of opinion? And would these values be fairly useless for tablespaces and datafiles that aren't very active over an hour-long period?

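For what it's worth, the two columns answer different questions, and neither is derived from the other: Av Reads/s is throughput (reads divided by elapsed seconds), while Av Rd(ms) is latency (total time spent reading divided by the number of reads; STATSPACK stores read time in centiseconds). A sketch using the tablespace numbers above, where the cumulative read-time figure is hypothetical, chosen to match the 3.9 ms shown:

```python
# Rough model of the Tablespace IO line above. phyrds comes from the
# report; readtim_cs (cumulative read time, centiseconds) is a
# hypothetical value chosen to reproduce the 3.9 ms in the report.
phyrds = 466_879          # total physical reads in the snap window
readtim_cs = 182_083      # hypothetical cumulative read time (centiseconds)
elapsed_s = 3600          # 1-hour snapshot window

# Av Rd(ms): average latency of one read = total read time / reads,
# converted from centiseconds to milliseconds.
av_rd_ms = round(readtim_cs * 10 / phyrds, 1)

# Av Reads/s: throughput = reads / elapsed seconds. The 1000/130 = 7.6 ms
# guess would only hold if the disks did exactly one read at a time,
# back to back, for the whole hour; concurrent and bursty I/O breaks
# that link between the two columns.
av_reads_per_s = round(phyrds / elapsed_s)

print(av_rd_ms, av_reads_per_s)   # 3.9 130
```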

  • STATSPACK in 10G

    Hi,
We are moving to 10g, and I was wondering whether STATSPACK is still the tool (as in 9i) for performance monitoring in 10g, or whether there is something else. How would I install this tool?
    Thanks
    Vissu

    Hello,
Yes, the STATSPACK utility is available in Oracle 10g and can be installed using the $ORACLE_HOME/rdbms/admin/spcreate.sql script. Furthermore, Oracle 10g introduced an automated replacement for STATSPACK: the Automatic Workload Repository (AWR) and the Automatic Database Diagnostic Monitor (ADDM).
    You could review a tool "Statspack Viewer Enterprise" that supports both Oracle10g STATSPACK and AWR. Get it at http://www.statsviewer.narod.ru/
    Cheers
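The basic STATSPACK workflow can be sketched as follows (run from SQL*Plus; spcreate.sql prompts for the PERFSTAT password and its default and temporary tablespaces):

```sql
-- Install the PERFSTAT schema and STATSPACK objects (connect as SYSDBA)
@?/rdbms/admin/spcreate.sql

-- Take two snapshots, some workload apart (connect as PERFSTAT)
EXEC statspack.snap;
-- ... let the workload you want to measure run ...
EXEC statspack.snap;

-- Generate a report; it prompts for the begin and end snapshot IDs
@?/rdbms/admin/spreport.sql
```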

  • Statspack-doubt about Module

    Hi,
I am analyzing a STATSPACK report. Every query is listed with a 'Module' tag, like:
    MIS_VFPRIME.exe
    Module: MIS_VFS.exe
    Module: VF_Rec_UnAdj.exe
    FC_TopUp.exe
    RUU-Utility
    Module: oracle@acdgcbsodb1b (TNS V1-V3)
    Module: C:\Documents and Settings\Administrator\Desktop\
    Module: f90runm@acdgcbsfas3a (TNS V1-V3)
    Module: oracle@PROJDBSDB2 (S003)
    Module: NPA-DPD-LPP PROCESS =
How do I identify where each query was executed from?
    regards,
    Mat

    Hi,
Thank you for the reply.
We are working with Oracle9i on RH Linux AS4.
"Since the SQL is hidden within the module, you likely cannot see the originating source within a STATSPACK or AWR report, sorry."
Then please explain: what is the meaning of the Module tag?
    MIS_VFPRIME.exe
    Module: MIS_VFS.exe
    Module: VF_Rec_UnAdj.exe
    FC_TopUp.exe
    RUU-Utility
    Module: oracle@acdgcbsodb1b (TNS V1-V3)
    Module: C:\Documents and Settings\Administrator\Desktop\
    Module: f90runm@acdgcbsfas3a (TNS V1-V3)
    Module: oracle@PROJDBSDB2 (S003)
    Module: NPA-DPD-LPP PROCESS =
    Regards,
    Mathew
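For context: the Module tag is simply the value of V$SESSION.MODULE for the session that ran the SQL. OCI programs set it to the executable name by default, and applications can set it explicitly through DBMS_APPLICATION_INFO, so it identifies the client program rather than the machine it ran on. A sketch (the action name below is made up for illustration; MIS_VFS.exe is taken from the report above):

```sql
-- An application labels its own sessions (the action name is hypothetical)
BEGIN
  dbms_application_info.set_module(
    module_name => 'MIS_VFS.exe',
    action_name => 'month-end load');
END;
/

-- Find which sessions and machines are running under that module
SELECT s.sid, s.username, s.machine, s.program, s.module
FROM   v$session s
WHERE  s.module = 'MIS_VFS.exe';
```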

  • [open] Performance of Database

    Hello Gurus
I have around five databases (Dev01, Dev02, Dev03, Dev04, Dev05) on my server.
To increase the performance of the Dev04 DB, I decreased the resources (sga_max_size, shared_pool_size, db_cache_size) of the other DBs and increased Dev04's resources to the maximum extent.
But I have observed that Dev04's performance is still poor.
Are there any suggestions or tricks for increasing or decreasing DB resources so it performs better?
    Thanks in advance

It would be a little tough to say what to do with the information you have provided. The most important missing piece is the database version (all four digits); a lot has changed between versions, so the what and the why can differ based on it.
You need to see what is actually happening on the Dev04 DB that makes it slow. The parameters, even when set generously, don't guarantee a performance boost. If there is a hard-parse problem in the statements themselves, it won't matter how much shared pool you allocate; it will eventually fill up. Changing parameters and expecting a performance gain is instance tuning, and it is highly unlikely to be of much use on its own.
You have mentioned high resource utilization in the DB. Did you check that the system itself is not choked by a resource crunch? Commands like sar, top, and vmstat can help you do this. The objective is to rule out anything at the OS level that could be causing it.
Once you are done with that, come back to the database and check which areas may have become contention points; for example, heavy parsing (hard parsing) of statements uses CPU very heavily, which can also cause a resource crunch. The best way to check is to generate a STATSPACK or AWR report, depending on your DB version. Generate one and post it here; then any advice about performance gains will be a little more accurate.
    HTH
    Aman....

  • How to avoid db file parallel read for nestloop?

After we upgraded to 11gR2, one job took more than twice as long as it did on 10g and 11gR1, with compatibility set to 10.2.0 and
on the same hardware (see the AWR summary below). My analysis suggests that the nested loop join is doing an index range scan on the inner table's index segment
and then using db file parallel read to fetch data from the table segment, and for reasons I don't know, the parallel read is very slow:
the average wait is more than 300 ms. How can I influence the optimizer to use db file sequential read to fetch data blocks from the inner table by tweaking
parameters? Thanks. YD
    Begin Snap: 13126 04-Mar-10 04:00:44 60 3.9
    End Snap: 13127 04-Mar-10 05:00:01 60 2.8
    Elapsed: 59.27 (mins)
    DB Time: 916.63 (mins)
    Report Summary
    Cache Sizes
    Begin End
    Buffer Cache: 4,112M 4,112M Std Block Size: 8K
    Shared Pool Size: 336M 336M Log Buffer: 37,808K
    Load Profile
    Per Second Per Transaction Per Exec Per Call
    DB Time(s): 15.5 13.1 0.01 0.01
    DB CPU(s): 3.8 3.2 0.00 0.00
    Redo size: 153,976.4 130,664.3
    Logical reads: 17,019.5 14,442.7
    Block changes: 848.6 720.1
    Physical reads: 4,149.0 3,520.9
    Physical writes: 16.0 13.6
    User calls: 1,544.7 1,310.9
    Parses: 386.2 327.7
    Hard parses: 0.1 0.1
    W/A MB processed: 1.8 1.5
    Logons: 0.0 0.0
    Executes: 1,110.9 942.7
    Rollbacks: 0.2 0.2
    Transactions: 1.2
    Instance Efficiency Percentages (Target 100%)
    Buffer Nowait %: 99.99 Redo NoWait %: 100.00
    Buffer Hit %: 75.62 In-memory Sort %: 100.00
    Library Hit %: 99.99 Soft Parse %: 99.96
    Execute to Parse %: 65.24 Latch Hit %: 99.95
    Parse CPU to Parse Elapsd %: 91.15 % Non-Parse CPU: 99.10
    Shared Pool Statistics
    Begin End
    Memory Usage %: 75.23 74.94
    % SQL with executions>1: 67.02 67.85
    % Memory for SQL w/exec>1: 71.13 72.64
    Top 5 Timed Foreground Events
    Event Waits Time(s) Avg wait (ms) % DB time Wait Class
    db file parallel read 106,008 34,368 324 62.49 User I/O
    DB CPU 13,558 24.65
    db file sequential read 1,474,891 9,468 6 17.21 User I/O
    log file sync 3,751 22 6 0.04 Commit
    SQL*Net message to client 4,170,572 18 0 0.03 Network

It's not possible to say much just by looking at the events. You must understand that STATSPACK and AWR aggregate the data and then show the results. There may well be other areas that need to be looked at, rather than focusing on just one event.
You have not mentioned any other information about the wait events, such as their timings. Please provide that too.
And if I understood your question correctly, you asked:
How do I avoid these wait events?
What may be the cause?
I'm afraid it's not possible to discuss each of these wait events here in complete detail, nor everything to do when you see them. Please read the Performance Tuning Guide, which describes these wait events and the corresponding actions.
    Please read and follow this link,
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/instance_tune.htm#i18202
    Aman....

  • WARNING: out of private memory [1]

    The customer is seeing this warning message in the alert log file and the trace file. There is no other ORA error codes. What should we check for this warning?. Any inputs appreciated.
    Regards
    Sathya

    Odd they aren't seeing 4030 or 600 errors. We definitely need more information, which versions and platforms and such. Just pulling rabbits out of my hat, I'd say they either hit some memory leak or are seriously off on kernel settings.
    Cutting and pasting the alert log may be helpful too.
    It could be some user session that does something odd, like recursively generating rows in memory, blowing past pga aggregate suggestions. Still, I'd expect some other errors. If you could get some pga statistics (or better, a statspack or AWR), that might inform.
I'd say check out bug 5947623 and try that pgamax_size setting (with Support involved, of course). My crystal ball says the customer is blowing way past any sensible PGA setting and getting an HP-UX error before an ORA-4030 can be generated. But then again, my advice is worth what you are paying for it, or perhaps less.

  • Upgrade 9i to 10g Performance Issue

    Hi All,
The DBA team recently upgraded the database from 9i to 10g (Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - 64-bit).
    There is a process as follows
    Mainframe --> Flat file --> (Informatica) --> Oracle DB --> (processing through SPs) --> Report
The whole process normally used to take 1.5-2 hours but is now taking 4-5 hours after the 9i to 10g upgrade.
I tried searching Google but could not find a detailed procedure for rooting out the cause. Below is a link describing a similar instance:
    http://jonathanlewis.wordpress.com/2007/02/02/10g-upgrade/
    Can someone suggest where to start digging to find out the reason behind the slowdown (if possible, please tell the detailed procedure or reference)?
Note that there was no other change besides the database upgrade. Also, for about 20 days after the upgrade the whole process took about 1 hour (a pleasant gain), but then it suddenly shot up to 4-5 hours and has stayed there since.
    Thanks.

    Without more information (Statspack reports, AWR reports, or at least query plans), it's going to be exceptionally hard to help you.
    There are dozens, more likely hundreds, of parameters which change between 9i and 10g. And one of which could cause an individual query to perform worse. Plus, obviously something changed after the process was running quickly for 20 days, so there is some change on your side that would need to be identified and accounted for. There is no way for anyone to guess which of the hundreds of parameter and behavioral changes might have contributed to the problem, nor for us to guess what might have changed after 20 days in your database.
If you can at least provide Statspack/AWR reports, we may be able to help you out with a general tuning exercise. If you can identify what changed between the time the process was running well and the time it stopped running well, we may be able to help there as well.
    Justin

  • TOP 10 things you should AVOID in a OLTP DB

So, what do you think?
Who wants to make a contribution?
I vote for flooding the server with very cold water, but what the hell do I know?
This post is serious; please give your expert opinion.
Cheers! And enjoy.

"This post is serious, please give your expert opinion."
OK, here are mine:
    http://www.dba-oracle.com/t_worst_practices.htm
    Inadequate Indexing - One of the top causes of excessive I/O during SQL execution is missing indexes, especially function-based indexes, and failure to tune the instance according to the SQL load is a major worst practice. It's no coincidence that the Oracle 10g SQLAccess advisor recommends missing indexes.
    Poor optimization of initialization parameters - The worst Oracle practice of all is undertaking to tune your SQL before these global parameters have been optimized to the workload.
Poor Schema Statistics Management - The Oracle worst practice (before 10g automatic statistics collection) was to re-analyze the schema on a schedule, forgetting that the purpose of re-analyzing schema statistics is to change your production execution plans. This worst practice has become so commonplace that it has been dubbed the "Monday Morning Surprise". Shops with strict production change-control procedures forget that analyzing the production schema can affect the execution of thousands of SQL statements.
Poor change control testing - This is the worst of the worst practices: an Oracle shop relying on a "test case proof" to preview how a database change will affect production behavior.
    No performance tracking - With STATSPACK (free) and AWR in Oracle10g, there is no excuse for not tracking your database performance. STATSPACK and AWR provide a great historical performance record and set the foundation for DBA predictive modeling.
    HTH . . . .
    Donald K. Burleson
    Oracle Press author

No. of Transactions in a Database

    hello,
    we are using oracle 10g on solaris 5.6.
How can I find out the number of transactions done on a database in one day, or the number of reads and writes on the datafiles in a day?
    Thanks for any help.

You can calculate the number of transactions that have run since the last startup of the database using v$sysstat statistic values, but Oracle does not store the transaction count except as part of STATSPACK and AWR snapshots. Depending on whether and how often you take snapshots (and whether you have a performance-pack license), you could potentially use data from those.
    set echo off
    -- SQL*Plus script to calculate Transactions Per Second for version 8+
    -- 20020513  Mark D Powell   New, cre as resp 2 metalink req fr ver 7 Query
    --  Version 7 Query:
    --  SELECT SUM(s.value/
    --  (86400*(SYSDATE - TO_DATE(i.VALUE,'J')))) "tps"
    --  FROM V$SYSSTAT s, V$INSTANCE i
    --  WHERE s.NAME in ('user commits','transaction rollbacks')
    --  AND i.KEY = 'STARTUP TIME - JULIAN'
select
  round(sum(s.value / (86400 * (SYSDATE - i.startup_time))), 3) "TPS"
from
  v$sysstat  s,
  v$instance i
where s.name in ('user commits', 'transaction rollbacks')
/

For file IO information, look at v$filestat.
    HTH -- Mark D Powell --
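A sketch of the v$filestat query Mark mentions. Note these counters are also cumulative since instance startup, so to get per-day figures you would take two samples a day apart and subtract:

```sql
-- Physical reads and writes per datafile since instance startup
SELECT d.name, f.phyrds, f.phywrts
FROM   v$filestat f,
       v$datafile d
WHERE  f.file# = d.file#
ORDER  BY f.phyrds DESC;
```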
