Determine database read/write statistics

From the Oracle documentation:
The DB_WRITER_PROCESSES parameter is useful for systems that modify data heavily. It specifies the initial number of database writer processes for an instance.
And from the "Deployment Guide for Oracle on Windows using Dell PowerEdge Servers.pdf" at http://www.oracle.com/technology/tech/windows/index.html
Regarding RAID levels, I have heard that for the disks where datafiles reside the following is true:
If I/O is <= 90% reads, then it is advisable to go for RAID 10. If I/O is > 90% reads, then RAID 5 could be considered.
I would like to know
1. How do we find out whether our database is "read heavy" or "write heavy"? Are there any scripts available, please?
2. In commercial environments, what sort of RAID levels are normally used for "read heavy" and "write heavy" databases?
Edited by: sandeshd on Oct 14, 2009 3:11 PM
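One minimal sketch of an answer to question 1 uses the cumulative counters in V$SYSSTAT. Note these are totals since instance startup, so they blend all workloads (daytime OLTP, nightly batch) into one number:

```sql
-- Sketch: rough read/write mix since instance startup, from V$SYSSTAT.
-- 'physical reads' and 'physical writes' are standard statistic names.
SELECT ROUND(100 * r.value / NULLIF(r.value + w.value, 0), 1) AS pct_reads,
       ROUND(100 * w.value / NULLIF(r.value + w.value, 0), 1) AS pct_writes
FROM   (SELECT value FROM v$sysstat WHERE name = 'physical reads')  r,
       (SELECT value FROM v$sysstat WHERE name = 'physical writes') w;
```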

We were in a similar situation some weeks ago. We decided to create a logoff trigger that saves the bytes sent/received for a specific schema; you can then work with this data by importing it into Excel or something similar.
DROP TABLESPACE BYTES_USUARIOS INCLUDING CONTENTS AND DATAFILES;
CREATE TABLESPACE BYTES_USUARIOS DATAFILE
'/oradata/oradata/ewok/bytes_usuarios.dbf' SIZE 1024M AUTOEXTEND ON NEXT 25M MAXSIZE UNLIMITED
LOGGING
ONLINE
PERMANENT
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
BLOCKSIZE 8K
SEGMENT SPACE MANAGEMENT MANUAL
FLASHBACK ON;
+++++++++++
CREATE USER B1
IDENTIFIED BY VALUES %password%
DEFAULT TABLESPACE BYTES_USUARIOS
TEMPORARY TABLESPACE TEMP
PROFILE MONITORING_PROFILE
ACCOUNT UNLOCK;
-- 1 Role for B1
GRANT CONNECT TO B1;
ALTER USER B1 DEFAULT ROLE NONE;
-- 2 System Privileges for B1
GRANT CREATE TABLE TO B1;
GRANT CREATE SESSION TO B1;
-- 1 Tablespace Quota for B1
ALTER USER B1 QUOTA UNLIMITED ON BYTES_USUARIOS;
++++++++++
CREATE TABLE b1.BYTES_USUARIOS (
USERNAME VARCHAR2(30 BYTE),
SID NUMBER,
SERIAL# NUMBER,
MACHINE VARCHAR2(64 BYTE),
LOGON_TIME DATE,
CLS VARCHAR2(53 BYTE),
NAME VARCHAR2(64 BYTE),
VALUE NUMBER
)
TABLESPACE BYTES_USUARIOS
PCTUSED 40
PCTFREE 10
INITRANS 1
MAXTRANS 255
STORAGE (
INITIAL 64K
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
FREELISTS 1
FREELIST GROUPS 1
BUFFER_POOL DEFAULT
)
LOGGING
NOCOMPRESS
NOCACHE
NOPARALLEL
MONITORING;
grant all on b1.bytes_usuarios to system;
++++++++++++++++
grant select on v_$mystat to system;
grant select on v_$session to system;
grant select on v_$statname to system;
DROP TRIGGER SYSTEM.TRG_LOGOFF;
CREATE OR REPLACE TRIGGER SYSTEM.TRG_LOGOFF
BEFORE LOGOFF
ON DATABASE
DECLARE
--VAR_CADENA VARCHAR(20);
begin
--VAR_CADENA := '%bytes%';
insert into b1.bytes_usuarios
select
ss.username,
ss.sid, ss.serial#, ss.machine, ss.logon_time,
decode (bitand(  1,class),  1,'User ', '') ||
decode (bitand(  2,class),  2,'Redo ', '') ||
decode (bitand(  4,class),  4,'Enqueue ', '') ||
decode (bitand(  8,class),  8,'Cache ', '') ||
decode (bitand( 16,class), 16,'Parallel Server ', '') ||
decode (bitand( 32,class), 32,'OS ', '') ||
decode (bitand( 64,class), 64,'SQL ', '') ||
decode (bitand(128,class),128,'Debug ', '') cls,
name, (value/1024/1024)
from sys.v_$statname m, sys.v_$mystat s, sys.v_$session ss
where m.statistic# = s.statistic#
and (name like '%bytes sent%' or name like '%bytes received%')
and ss.sid = (select distinct sid from sys.v_$mystat);
end;
/
++++++++++++
All bytes (total):
select username, name, sum(value)
from b1.bytes_usuarios
group by username, name
order by username, name;
Only bytes sent:
select username, name, sum(value)
from b1.bytes_usuarios
where name like '%sent%'
group by username, name
order by username, name;
Only bytes received:
select username, name, sum(value)
from b1.bytes_usuarios
where name like '%received%'
group by username, name
order by username, name;

Similar Messages

  • Database Read/Write Ratio

Hi All,
Could I use physical reads/physical writes per second from a monthly AWR report, or v$sysstat physical reads/physical writes, to calculate the database read/write percentage?
Or is there a different method to size the database's I/O workload?
    Best Regards

If I had to tune performance, I would take a time interval when performance degrades.
When you take the whole month you put everything in one sack (e.g. daily OLTP transactions and the nightly backup), so the statistics may be useless.
In summary, look at intervals between selected snapshots (e.g. 6am till 6pm).
If, however, you want to calculate for calculating's sake, then you may use v$sysstat, which contains statistics since DB startup.
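As a sketch of the interval-based approach, a query like the following compares the counters between two AWR snapshots (this assumes an AWR/Diagnostics Pack license; the snapshot IDs 100 and 110 are placeholders to be picked from DBA_HIST_SNAPSHOT):

```sql
-- Sketch: read/write deltas between two AWR snapshots.
-- Snap IDs 100 and 110 are placeholders; choose a window that matters,
-- e.g. 6am-6pm, rather than a whole month.
SELECT e.stat_name,
       e.value - b.value AS delta
FROM   dba_hist_sysstat b
JOIN   dba_hist_sysstat e
       ON  b.stat_name        = e.stat_name
       AND b.dbid             = e.dbid
       AND b.instance_number  = e.instance_number
WHERE  b.snap_id = 100
AND    e.snap_id = 110
AND    e.stat_name IN ('physical reads', 'physical writes');
```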

  • Open standby database read/write

    What's the syntax to open a standby database read/write?
    Any help will be appreciated.
    Thanks

Technically you don't "open" a standby database read/write.
You activate the standby database using the SQL statement ALTER DATABASE ACTIVATE STANDBY DATABASE.
This converts the standby database to a primary database, creates a new resetlogs branch, and opens the database. See Section 8.5 to learn how the standby database reacts to the new resetlogs branch.
A physical standby can only be opened read/write (while remaining a standby) in 11g with the active standby option.
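As a minimal sketch, the activation sequence looks like this (note it is one-way: afterwards the database is a primary and the standby must be recreated):

```sql
-- Sketch: convert a mounted physical standby into a read/write primary.
-- WARNING: irreversible; the database stops being a standby.
ALTER DATABASE ACTIVATE STANDBY DATABASE;
ALTER DATABASE OPEN;
```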

  • Metrics on database read/write/delete based on size of the table

    Hi,
Though we have many performance measurement tools, sometimes it is difficult for developers to trace and measure each read; some reads which look fine in the development environment turn out to be show-stoppers in quality environments.
I am trying to find out whether we can give a rough estimate of the ideal response time of an RFC based on the number of database fetches/writes in the RFC, assuming that the loops, internal table reads, etc. are optimized.
E.g.: if my RFC performs two reads and one insert, I would like to arrive at a figure, say 200 ms, as the ideal runtime of the RFC.
    I would like to base my calculations based on the following parameters:
    - Table Size
    - Key/Index used
    e.g.: For a FETCH operation
    Table Size | Key Used | Total Time
    Up to 1 G  | Primary  | 100 ms
    1 - 5 G    | Primary  | 200 ms
    Similarly for insert and delete..
    I have the following questions for the forum in relation to the above:
    - Is the above approach good enough for arriving at an
      approximate metric on the total response time of an RFC?
    - Are there any other alternatives, apart from using the
      standard SAP tools?
    - How are metrics decided for implementations with Java
      and .NET frontends?
    Thank you,
    Chaitanya

    Hi There
Do you mean the dba_segments table?
My boss wants to export 2 big tables and import them into a training environment; each table contains more than 2 million rows.
I want to know how big (bytes or megabytes) those two tables are on the hard drive, because we are going to run out of space on the same server. I am not sure whether the disk space can accommodate such a big export, so if I know how big those 2 tables are, I can decide how to do the export. For example: I have 200 MB left in my /home directory, which is the only place we can put the export; those 2 tables could be bigger than 400 MB even if I compress the export file.
    Hopefully this time it is clear.
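A minimal sketch for checking the on-disk size of the two tables before exporting (the table names and owner filter are placeholders to replace with your own):

```sql
-- Sketch: segment sizes in MB for the tables to be exported.
-- BIG_TABLE1/BIG_TABLE2 are placeholder names.
SELECT owner, segment_name, ROUND(bytes/1024/1024) AS size_mb
FROM   dba_segments
WHERE  segment_type = 'TABLE'
AND    segment_name IN ('BIG_TABLE1', 'BIG_TABLE2');
```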

  • Making sql server database read -write from read only

    hey guys
    i attached AdventureWorks in SQL Server 2008 and it shows as read-only,
    so please guide me on how to make it read/write, or remove the read-only tag from the database
    thanks in advance
    sujeet, software developer, kolkata

    Hi,
    Is there an error message while you attach (or restore) the database? If so, please provide it.
    If not, right-click on your database, choose Properties -> go to Options -> scroll to the end, then change the Read Only option to False.
    I hope this is helpful.
    Elmozamil Elamir
    MyBlog
    Please Mark it as Answered if it answered your question
    OR mark it as Helpful if it help you to solve your problem
    Elmozamil Elamir Hamid
    http://elmozamil.blogspot.com
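The same change can also be made in T-SQL instead of the Properties dialog (the database name AdventureWorks is assumed from the question; no other sessions may be using the database):

```sql
-- Sketch: clear the read-only flag on an attached database.
ALTER DATABASE AdventureWorks SET READ_WRITE;
```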

  • Access form ABAP to external MySQL-Database (read/write)

    Hello!
    We have an external MySQL DB (running on Linux). Now we need to read this database from our SAP system (running on Linux with an Oracle DB) to create a purchase order. After creating this order in our SAP system, we should update the dataset in the MySQL DB (to flag that creating the order was successful).
    How can we create the Connection to the MySQL-DB?
    Thank you.
    Best Regards
    Markus

    Hi Markus!
    Sorry for the delay, the day was well filled.
    For an example of ADBC, as Kennet said, you can use the program ADBC_DEMO.
    About RFC, I advise you to read up on SAP JCo (SAP Java Connector); this is a SAP middleware component that enables the development of SAP-compatible components and applications in Java. With this you can send what you want when interfacing with SAP.
    As I said in my last post, I would advise creating an RFC instead of using Native SQL. I'm not sure about the scenario you have to develop this solution for, but I believe it will be more secure.
    Regards.

  • Missing file read/write feature on Mac OS X?

    HI,
    i am missing the file read/write statistics on Mac OS X. First I thought I just needed to enable them in the template editor under "Advanced -> Java Application", but the two list items are simply not there. I've seen a screenshot somewhere from the JMC where the two options are present, but not for me. Can anyone tell me if this is a platform issue, or how to enable them?
    Thx
    Marc

    You didn't say which version you are using, but I think what you are seeing is that the file and socket events are not available in JDK 8, only in JDK 7. This is a known regression that will be fixed in the next update of JDK 8.

  • How to open a "manual" Physical standby database in read/write mode

    Hi,
    I am running Oracle Database 10g Release 10.2.0.3.0 - 64bit Production Standard Edition on Linux version 2.6.9-42.0.8.ELsmp ([email protected]) (gcc version 3.4.6 20060404 (Red Hat 3.4.6-3))
    I've created a physical standby database, but since I am running Standard Edition, I am not using the DataGuard features. I use the rsync utility to copy over the archivelogs to the standby database, and I apply them periodically to the standby database.
    The standby database is started this way :
    startup nomount pfile='/u01/oradata/orcl/initorcl.stdby';
    alter database mount standby database;
    Everything runs perfectly fine, I can do "alter database open read only" and then I can do selects into tables to confirm that everything is up to date.
    The thing is, if I shutdown immediate the database, then do just startup :
    shutdown immediate;
    startup;
    The database opens with no error messages, but is still in read-only mode...
    I read that the default behavior for a standby database is to open read-only, like I am experiencing, but I would like to know what is the right way to open it correctly in read-write mode (I understand that after that, my standby will not be standby anymore and that I will have to recreate my standby database).
    Thanks,
    Mat

    Hello,
    There are features which allow you to open a standby database in read/write mode, but as far as I know
    this needs Enterprise Edition.
    In Enterprise Edition you can use a logical standby database. Moreover, for a physical standby there is
    a way, using Flashback Database, to roll the database backward and avoid recreating
    the standby.
    In Standard Edition I'm afraid that you'll have to recreate your standby database.
    Best regards,
    Jean-Valentin

  • Clone Production Database and Convert into Read Write Mode

    Hi,
    Please help me for below question...
    How can I create a test database from a production database without transporting a backup of the production database to the test server, with the test database in a different directory structure, and convert it into read/write mode?
    Please find me a solution as early as possible...
    Thanks & Regards
    Akhil

    If you don't want to move the backup from prod to dev, you need to create an RMAN catalog and have access to it from the dev server. After that you will be able to duplicate your prod database to dev without moving the backup, and the database will be in read/write mode by default.
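A rough sketch of that approach (connect strings, the DUPLICATE target name, and the path conversion are all placeholders; FROM ACTIVE DATABASE needs 11g, while earlier versions require backup-based duplication via the shared catalog):

```sql
-- Sketch: duplicate prod to test without staging a backup on the test host.
-- Run in RMAN; names and paths below are placeholders.
CONNECT TARGET sys@prod
CONNECT AUXILIARY sys@test
DUPLICATE TARGET DATABASE TO testdb
  FROM ACTIVE DATABASE
  DB_FILE_NAME_CONVERT '/u01/oradata/prod/','/u02/oradata/test/';
```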

  • Online read + write from/to ms sql server database

    hi all,
    we're using R/3 4.6C. We want to connect to an MS SQL Server database and read/write data from an ABAP program.
    What's the best (and fastest) way to do this?
    joerg

    I know only DBCON (database multi-connect): see notes 178949 and 323151 for more details.
    Message was edited by: max bianchi

  • Can PL/SQL read/write from a database server to another server?

    hi,
    please advise.
    thanks

    What I mean is outputting a text file to other servers using PL/SQL through the UTL_FILE package.
    No. UTL_FILE reads/writes only on the server where your PL/SQL code is running.
    But maybe you could map a network drive. I haven't tried that.

  • Read/Write Rules: Generic Identifiers & FM Tags

    I have seen this question answered in one of the dev guides but I cannot find it again.
    My question is whether it is legal within the read/write rules to refer various generic identifiers to the same FM tag/element.
    For instance:
    element "body" is fm element "Paragraph";
    element "preface" is fm element "Paragraph";
    Thanks!
    [moved to FM Structured forum]

    Does every structured fm document need a prior XML document?
    No. It depends upon the application. In my case, I create user manuals using structured FrameMaker. Our writing team created an EDD that does what we want and need it to do. We do not export or save the structured FrameMaker files as XML; however, we could if the need arose, for example, to create HTML versions for viewing in a browser. This application works for our needs.
    I have one application in which I begin with XML. When I create a parts catalog, the part information is exported from a database as XML. The structure of this XML file is determined by the database; it does not match the structure design in my EDD. But I use an XSL transform to convert the parts XML into a structure that is valid with respect to my EDD. Again, it depends upon your application.
    Some people DO export their structured FrameMaker files to XML. They may want to do something else with the content that requires its being in XML. Or they may use the XML for storage because XML files are smaller than FrameMaker files. Then when they open the XML files in FrameMaker, they are imported into a clean template, which cleans out any overrides and any junk that may have accumulated in the FrameMaker files.
    If your goal is to convert unstructured FrameMaker files into structured files, then I suggest concentrating on developing the EDD that works for your needs. Export to XML can come later if you need it.
    Van

  • Windows Server 2012 - Hyper-V - iSCSI SAN - All Hyper-V Guests stops responding and extensive disk read/write

    We have a problem with one of our deployments of Windows Server 2012 Hyper-V with a 2 node cluster connected to a iSCSI SAN.
    Our setup:
    Hosts - Both run Windows Server 2012 Standard and are clustered.
    HP ProLiant G7, 24 GB RAM, 2 teamed NICs dedicated to Virtual Machines and Management, 2 teamed NICs dedicated to iSCSI storage. - This is the primary host and normally all VMs run on this host.
    HP ProLiant G5, 20 GB RAM, 1 NIC dedicated to Virtual Machines and Management, 2 teamed NICs dedicated to iSCSI storage. - This is the secondary host and is intended to be used in case of failure of the primary host.
    We have no antivirus on the hosts, and the scheduled ShadowCopy (previous versions of files) is switched off.
    iSCSI SAN:
    QNAP NAS TS-869 Pro, 8 INTEL SSDSA2CW160G3 160 GB in a RAID 5 with a hot spare. 2 teamed NICs.
    Switch:
    DLINK DGS-1210-16 - Both the network cards of the Hosts that are dedicated to the Storage and the Storage itself are connected to the same switch and nothing else is connected to this switch.
    Virtual Machines:
    3 Windows Server 2012 Standard - 1 DC, 1 FileServer, 1 Application Server.
    1 Windows Server 2008 Standard Exchange Server.
    All VMs are using dynamic disks (as recommended by Microsoft).
    Updates
    We have applied the most resent updates to the Hosts, WMs and iSCSI SAN about 3 weeks ago with no change in our problem and we continually update the setup.
    Normal operation
    Normally this setup works just fine, and we see no real difference in startup, file copy, and processing speed in LoB applications compared to a single host with 2 10,000 RPM disks. Normal network speed is 10-200 Mbit/s, but occasionally we see speeds up to 400 Mbit/s of combined read/write, for instance during file repair.
    Our Problem
    Our problem is that for some reason all of the VMs stop responding, or respond very slowly: you cannot, for instance, send CTRL-ALT-DEL to a VM in the Hyper-V console, or start Task Manager when already logged in.
    Symptoms (i.e. this happens, or does not happen, at the same time)
    If we look at Resource Monitor on the host, we often see an extensive read from a VHDX of one of the VMs (40-60 MByte/s) and a combined write to many files in \HarddiskVolume5\System Volume Information\{<someguid and no file extension>}.
    See image below.
    The combined network speed to the iSCSI SAN is about 500-600 Mbit/s.
    When this happens it is usually during and after a VSS ShadowCopy backup, but it has also happened during hours where no backup should be running (i.e. during the daytime, when the backup finished hours ago according to the log files). There are, however, no such extensive writes to the backup file that is created on an external hard drive, and this does not seem to happen during all backups (we have checked manually a few times, but it is hard to say, since this error does not seem to leave any traces in Event Viewer).
    We cannot find any indication that the VMs themselves detect any problem, and we see no increase in errors (for example storage-related errors) in the event log inside the VMs.
    The QNAP uses about 50% processing Power on all cores.
    We see no dropped packets on the switch.
    (I have split the image to save horizontal space).
    Unable to recreate the problem / find definitive trigger
    We have not succeeded in recreating the problem manually by, for instance, running chkdsk or defrag in VMs and hosts, copying and removing large files to VMs, or running CPU- and disk-intensive operations inside a VM (for instance scanning and repairing a database file).
    Questions
    Why do all VMs stop responding, and why are there such intensive reads/writes to the iSCSI SAN?
    Could it be anything in our setup that cannot handle all the read/write requests? For instance the iSCSI SAN, the hosts, etc?
    What can we do about this? Should we use Multipath I/O instead of NIC teaming to the SAN, limit bandwidth to the SAN, etc.?

    Hi,
    > All VMs are using dynamic disks (as recommended by Microsoft).
    If this is a testing environment, it’s okay, but if this a production environment, it’s not recommended. Fixed VHDs are recommended for production instead of dynamically expanding or differencing VHDs.
    Hyper-V: Dynamic virtual hard disks are not recommended for virtual machines that run server workloads in a production environment
    http://technet.microsoft.com/en-us/library/ee941151(v=WS.10).aspx
    > This is the primary host and normaly all VMs run on this host.
    According to your posting, we know that you have Cluster Shared Volumes in the Hyper-V cluster, but why not distribute your VMs across the two Hyper-V hosts?
    Use Cluster Shared Volumes in a Windows Server 2012 Failover Cluster
    http://technet.microsoft.com/en-us/library/jj612868.aspx
    > 2 teamed NIC dedicated to iSCSI storage.
    Use Microsoft Multipath I/O (MPIO) to manage multiple paths to iSCSI storage. Microsoft does not support teaming on network adapters that are used to connect to iSCSI-based storage devices. (At least it was not supported up to Windows Server 2008 R2. Although Windows Server 2012 has a built-in network teaming feature, I haven't found an article declaring that Windows Server 2012 network teaming supports iSCSI connections.)
    Understanding Requirements for Failover Clusters
    http://technet.microsoft.com/en-us/library/cc771404.aspx
    > I have seen using MPIO suggests using different subnets, is this a requirement for using MPIO
    > or is this just a way to make sure that you do not run out of IP adressess?
    What I found is: if possible, isolate the iSCSI and data networks that reside on the same switch infrastructure through the use of VLANs and separate subnets. Redundant network paths from the server to the storage system via MPIO will maximize availability and performance. Of course you can put these two NICs in separate subnets, but I don't think it is necessary.
    > Why should it be better to not have dedicated wireing for iSCSI and Management?
    It is recommended that the iSCSI SAN network be separated (logically or physically) from the data network workloads. This ‘best practice’ network configuration optimizes performance and reliability.
    Check that and modify cluster configuration, monitor it and give us feedback for further troubleshooting.
    For more information please refer to following MS articles:
    Volume Shadow Copy Service
    http://technet.microsoft.com/en-us/library/ee923636(WS.10).aspx
    Support for Multipath I/O (MPIO)
    http://technet.microsoft.com/en-us/library/cc770294.aspx
    Deployments and Tests in an iSCSI SAN
    http://technet.microsoft.com/en-US/library/bb649502(v=SQL.90).aspx
    Hope this helps!
    TechNet Subscriber Support
    Lawrence
    TechNet Community Support

  • Powershell use Connection String to query Database and write to Excel

    Right now I have a PowerShell script that uses ODBC to query a SQL Server 2008 / 2012 database and writes to Excel:
    $excel = New-Object -Com Excel.Application
    $excel.Visible = $True
    $wb = $Excel.Workbooks.Add()
    $ws = $wb.Worksheets.Item(1)
    $ws.name = "GUP Download Activity"
    $qt = $ws.QueryTables.Add("ODBC;DSN=$DSN;UID=$username;PWD=$password", $ws.Range("A1"), $SQL_Statement)
    if ($qt.Refresh()){
    $ws.Activate()
    $ws.Select()
    $excel.Rows.Item(1).HorizontalAlignment = $xlCenter
    $excel.Rows.Item(1).VerticalAlignment = $xlTop
    $excel.Rows.Item("1:1").Font.Name = "Calibri"
    $excel.Rows.Item("1:1").Font.Size = 11
    $excel.Rows.Item("1:1").Font.Bold = $true
    }
    $filename = "D:\Script\Reports\Status_$a.xlsx"
    if (test-path $filename ) { rm $filename }
    $wb.SaveAs($filename, $xlOpenXMLWorkbook) #save as an XML Workbook (xslx)
    $wb.Saved = $True #flag it as being saved
    $wb.Close() #close the document
    $Excel.Quit() #and the instance of Excel
    $wb = $Null #set all variables that point to Excel objects to null
    $ws = $Null #makes sure Excel deflates
    $Excel=$Null #let the air out
    I would like to use a connection string to query the database and write the results to Excel, i.e.
    $SQL_Statement = "SELECT ..."
    $conn = New-Object System.Data.SqlClient.SqlConnection
    $conn.ConnectionString = "Server=10.10.10.10;Initial Catalog=mydatabase;User Id=$username;Password=$password;"
    $conn.Open()
    $cmd = New-Object System.Data.SqlClient.SqlCommand($SQL_Statement,$conn)
    do{
    try{
    $rdr = $cmd.ExecuteReader()
    while ($rdr.read()){
    $sql_output += ,@($rdr.GetValue(0), $rdr.GetValue(1))
    }
    $transactionComplete = $true
    }
    catch{
    $transactionComplete = $false
    }
    }until ($transactionComplete)
    $conn.Close()
    How would I read the columns and data from $sql_output into an Excel worksheet? Where do I find these tutorials?

    Hi Q.P.Waverly,
    If you mean to export the data in $sql_output to an Excel document, please try to format the output with PSObject:
    $sql_output=@()
    do{
    try{
    $rdr = $cmd.ExecuteReader()
    while ($rdr.read()){
    $sql_output += New-Object PSObject -Property @{data1 = $rdr.GetValue(0); data2 = $rdr.GetValue(1)}
    }
    $transactionComplete = $true
    }
    catch{
    $transactionComplete = $false
    }
    }until ($transactionComplete)
    $conn.Close()
    Then please try to use the cmdlet "Export-Csv" to export the data to excel like:
    $sql_output | Export-Csv d:\data.csv
    Or you can export to worksheet like:
    $excel = New-Object -ComObject Excel.Application
    $excel.Visible = $true
    $workbook = $excel.Workbooks.Add()
    $sheet = $workbook.ActiveSheet
    $counter = 0
    $sql_output | ForEach-Object {
    $counter++
    $sheet.cells.Item($counter,1) = $_.data1
    $sheet.cells.Item($counter,2) = $_.data2
    }
    Refer to:
    PowerShell and Excel: Fast, Safe, and Reliable
    If there is anything else regarding this issue, please feel free to post back.
    Best Regards,
    Anna Wang

  • Help with utl_file (read/write file from local directory)

    Need help reading/writing a file on a local machine from PL/SQL using a 10.2 DB.
    I am trying to read/write a file in a local directory (laptop) without success.
    I have been able to read/write the database server directory, but can't write to a directory on the local machine.
    The utl_file_dir parameter has been set to * and the DB restarted, but I can't get it to work... Here's the PL/SQL statement:
    out_file := UTL_FILE.FOPEN ( 'C:\PLSQL', 'TEST.TXT', 'W', 32767);
    Whenever I run it, it continues to write to the C:\PLSQL dir on the database server. I have looked at the DIRECTORY object and created MY_DIR = C:\PLSQL, but it still writes to the DB server.
    Running 10.2 on a remote Windows server, running PL/SQL using SQL*Navigator.
    Thanks in advance for your help.

    I don't see how you expect the server to be able to see your laptop across the network, hack into it and start writing files. Even if it could, what if there is more than one laptop with a C: drive? How would it know which one to write to?
    Is there a shared drive on the server you can access via the laptop?
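For completeness, a minimal sketch of the server-side DIRECTORY approach (the path and directory name are placeholders; UTL_FILE always writes on the database server, so the path must exist there, and a shared/mapped drive is the usual workaround for reaching another machine):

```sql
-- Sketch: write a file via a DIRECTORY object. The path is on the DB server.
CREATE OR REPLACE DIRECTORY my_dir AS 'C:\PLSQL';

DECLARE
  out_file UTL_FILE.FILE_TYPE;
BEGIN
  out_file := UTL_FILE.FOPEN('MY_DIR', 'TEST.TXT', 'W', 32767);
  UTL_FILE.PUT_LINE(out_file, 'hello from the server');
  UTL_FILE.FCLOSE(out_file);
END;
/
```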
