Metrics on database read/write/delete based on size of the table

Hi,
Though we have many performance measurement tools, it is sometimes difficult for developers to trace and measure each read; some reads that look fine in the development environment turn out to be show-stoppers in quality environments.
I am trying to find out whether we can give a rough estimate of the ideal response time of an RFC based on the number of database fetches/writes it performs, assuming that the loops, internal table reads, etc. are already optimized.
e.g.: if my RFC performs two reads and one insert, I would like to arrive at a figure such as: 200 ms should be the ideal runtime of the RFC.
I would like to base my calculations on the following parameters:
- Table Size
- Key/Index used
e.g.: For a FETCH operation
    Table Size | Key Used | Total Time
    Up to 1 GB | Primary  | 100 ms
    1 - 5 GB   | Primary  | 200 ms
Similarly for insert and delete..
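A minimal sketch of the proposed lookup approach (the size bands, key names, and millisecond values below are hypothetical placeholders, not measured numbers):

```python
# Sketch of the proposed estimation approach. The bands and timings
# are illustrative assumptions, not benchmarks.
FETCH_MS = [
    (1, "primary", 100),   # tables up to 1 GB, primary key access
    (5, "primary", 200),   # tables 1-5 GB, primary key access
]
INSERT_MS = 50             # assumed flat cost per insert (placeholder)

def fetch_estimate(table_gb, key):
    """Return the estimated fetch time in ms for a table size/key pair."""
    for max_gb, k, ms in FETCH_MS:
        if table_gb <= max_gb and key == k:
            return ms
    raise ValueError("no band defined for this size/key combination")

def rfc_estimate(fetches, inserts):
    """fetches: list of (table_gb, key) tuples; inserts: number of inserts."""
    return sum(fetch_estimate(gb, k) for gb, k in fetches) + inserts * INSERT_MS

# The example from the post: two reads and one insert
print(rfc_estimate([(0.5, "primary"), (3, "primary")], 1))  # 350 under these placeholder numbers
```

The same table could of course be extended with bands for secondary indexes and full scans once real measurements exist.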
I have the following questions for the forum in relation to the above:
- Is the above approach good enough for arriving at an approximate metric on the total response time of an RFC?
- Are there any other alternatives, apart from using the standard SAP tools?
- How are metrics decided for implementations with Java and .NET frontends?
Thank you,
Chaitanya

Hi There
Do you mean the dba_segments view?
My boss wants to export 2 big tables and import them into the training environment; each table contains more than 2 million rows.
I want to know how big (in bytes or megabytes) those two tables are on the hard drive, because we are going to run out of space on the same server. I am not sure the disk space can accommodate such a big export, so if I know how big those 2 tables are, I can decide what to do for the export. For example: I have 200 MB left in my /home directory, which is the only place we can put the export, and those 2 tables could be bigger than 400 MB even after I compress the export file.
Hopefully this time it is clear.
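The actual on-disk size can be read from the BYTES column of dba_segments. As a rough cross-check, the space arithmetic can be sketched like this (average row size and compression ratio are assumptions):

```python
# Back-of-envelope space check for the export described above.
# Row size and compression ratio are illustrative assumptions.
rows = 2_000_000            # rows per table
avg_row_bytes = 150         # assumed average row size
tables = 2
compression_ratio = 0.4     # assumed: compressed export is ~40% of raw size
free_mb = 200               # free space on /home

raw_mb = rows * avg_row_bytes * tables / (1024 * 1024)
compressed_mb = raw_mb * compression_ratio
print(f"raw ~{raw_mb:.0f} MB, compressed ~{compressed_mb:.0f} MB, "
      f"fits in {free_mb} MB: {compressed_mb < free_mb}")
```

With these assumed numbers the compressed export would still not fit in 200 MB, which is exactly why checking dba_segments first is the right move.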

Similar Messages

  • Open standby database read/write

    What's the syntax to open a standby database read/write?
    Any help will be appreciated.
    Thanks

Technically it's not opening a standby database read/write.
Activate the standby database using the SQL statement ALTER DATABASE ACTIVATE STANDBY DATABASE.
This converts the standby database to a primary database, creates a new reset logs branch, and opens the database. See Section 8.5 to learn how the standby database reacts to the new reset logs branch.
A physical standby can only be opened read/write in 11g with the active standby (Active Data Guard) option.

  • Database Read/Write Ratio

Hi All,
Could I use physical reads/physical writes per second from a monthly AWR report, or v$sysstat physical reads/physical writes, in order to calculate the database read/write percentage?
Or is there a different method to understand the database's behavior when sizing the I/O workload?
    Best Regards

If I had to tune performance, I would take a time interval when performance degrades.
When you take the whole month you put everything in one sack (e.g. daily OLTP transactions and nightly backups), so the statistics may be useless.
Summarizing: look at intervals between selected snapshots (e.g. 6am till 6pm).
If however you want to calculate for calculation's sake, you may use v$sysstat, which contains statistics since the DB startup.
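For the snapshot-delta method, the percentage calculation itself is simple; a sketch with made-up counter values:

```python
# Read/write ratio from two v$sysstat-style snapshots (delta method).
# The counter values below are made-up illustrations, not real data.
start = {"physical reads": 1_000_000, "physical writes": 80_000}
end   = {"physical reads": 1_900_000, "physical writes": 180_000}

reads  = end["physical reads"]  - start["physical reads"]
writes = end["physical writes"] - start["physical writes"]
read_pct = 100.0 * reads / (reads + writes)
print(f"{read_pct:.1f}% reads in the interval")  # 90.0% reads
```

The point of taking deltas between two snapshots, rather than the raw v$sysstat values, is that the raw counters mix the interval you care about with everything since instance startup.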

  • I have a  macbook, an ipad, and an iphone with the Mail app on all three synced through icloud, but when I read and delete emails on my phone, the inbox on my ipad is not update, so I have to go through and mark as read or delete messages again. How can I

I have a MacBook, an iPad, and an iPhone with the Mail app on all three synced through iCloud, but when I read and delete emails on my phone, the inbox on my iPad is not updated, so I have to go through and mark as read or delete messages again. How can I sync my iPhone and iPad? (It seems like each is synced only to my laptop, so they don't synchronize until I go home and use my laptop.)

POP: Yahoo, AOL, Comcast/Time Warner/Road Runner.
IMAP: Google, Hotmail, and more, including iCloud.
If you want to use multiple devices, move to IMAP. I would even say "Exchange", but Google no longer supports free Exchange (since January of this year). So ironically iCloud or Hotmail would be my choices right now.
To find out more about what is happening, search Google for "difference between POP and IMAP".

  • How to read/write a binary file from/to a table with BLOB column

I have created a table with a column of data type BLOB.
I can read/write an IMAGE file from/to the column of the table using:
    READ_IMAGE_FILE
    WRITE_IMAGE_FILE
    How can I do the same for other binary files, e.g. aaaa.zip?

    There is a package procedure dbms_lob.readblobfromfile to read BLOB's from file.
    http://download-east.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_lob.htm#sthref3583
    To write a BLOB to file you can use a Java procedure (pre Oracle 9i R2) or utl_file.put_raw (there is no dbms_lob.writelobtofile).
    http://asktom.oracle.com/pls/ask/f?p=4950:8:1559124855641433424::NO::F4950_P8_DISPLAYID,F4950_P8_CRITERIA:6379798216275
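Outside the database, the same chunked pattern applies (utl_file.put_raw accepts at most 32 KB per call, so the BLOB has to be written piecewise); a sketch with placeholder file names:

```python
# Chunked binary copy, mirroring the ~32 KB per-call limit of
# utl_file.put_raw. File paths are placeholders.
CHUNK = 32 * 1024  # bytes per write call

def copy_binary(src_path, dst_path):
    """Copy any binary file (zip, image, ...) chunk by chunk."""
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while chunk := src.read(CHUNK):
            dst.write(chunk)
```

Inside the database the loop is the same idea: dbms_lob.read into a RAW buffer of up to 32 KB, then utl_file.put_raw, repeated until the full BLOB length is consumed.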

  • How do I resolve this error message? The iPhoto library is on a locked volume.  Reopen iPhoto when you have read/write access, or reopen iPhoto with the Option key held down to choose another library.

    How do I resolve this error message? The iPhoto library is on a locked volume.  Reopen iPhoto when you have read/write access, or reopen iPhoto with the Option key held down to choose another library.

    Hi j,
I don't know if this will work, but I'd try logging in to an admin account, going to your main Library (not the user Library), opening Application Support, selecting iPhoto, holding down the Command key and pressing I, clicking the lock in the lower left, entering the password, and making sure you have Read & Write privileges for System and Admin.

  • When I try to print sth I can´t choose my printer. Adobe Reader only shows a printer I used years ago.I aleady deleted and reinstalled the Reader and deleted all other printers from the computer.How can I add a new printer to Adobe Reader?Thanks for help!

When I try to print something I can't choose my printer. Adobe Reader only shows a printer I used years ago. I already deleted and reinstalled the Reader and deleted all other printers from the computer. How can I add a new printer to Adobe Reader? Thanks for the help!

    Hi,
    I would suggest you to uninstall Adobe Reader using the cleaner tool and then re-install the latest version.
    Adobe Cleaner Tool:- Download Adobe Reader and Acrobat Cleaner Tool - Adobe Labs.
    Latest version of Adobe Reader:- http://get.adobe.com/reader/
    If you still experience the same issue, please share the following information:-
    - Screenshot of Adobe Reader showing printer options
    - Screenshot of Microsoft Word showing printer options
    - Screenshot of control panel- Control Panel\All Control Panel Items\Devices and Printers
    Regards,
    Nakul 

  • This message shows up when I try to access my auxiliary iPhone library: "The iPhoto library is on a locked volume. Reopen iPhoto when you have read/write access, or reopen iPhoto with the Option key held down to choose another library."

    This message shows up when I try to access my auxiliary iPhone library: "The iPhoto library is on a locked volume. Reopen iPhoto when you have read/write access, or reopen iPhoto with the Option key held down to choose another library."
What did I do wrong?  I have been downloading all my photos into this same library since January with no problems.

    What version of Mac OS X?
    Click the black Apple icon on the top left of the screen and select About This Mac. The next screen will show the information.

  • How to speed up the deletion of 11million records from the table

    Hi,
How do I speed up the deletion of 11 million records from a table?
I need an expeditious reply. Please advise.
    Regards

Please try to understand the question. Well, it would help if you would answer some of the questions you have been asked, as your question is not complete and clear, and no matter how hard we try, we really need you to ask the question properly.
So, as previously asked:
    Which simply supports the idea that we need:
    1) better definition of the business purpose (why)
    2) oracle version
    3) operating system
    4) hardware configuration
    to give a moderately accurate answer.
    I would like to add
    5) How many rows in total in the table to begin with.
6) What is your delete statement?
7) Is this a one-time operation or will it happen regularly?
8) Can you use partitioning?
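One common answer to questions like this, once the details above are known, is deleting in batches with intermediate commits so undo and locking stay bounded. A self-contained sketch using SQLite as a stand-in (on Oracle the loop would instead repeat `DELETE ... WHERE ... AND ROWNUM <= :n` with a commit per batch; table and column names are illustrative):

```python
# Batched-delete sketch. SQLite stands in for Oracle here so the
# example is runnable; the table, column, and batch size are made up.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, expired INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(i, i % 2) for i in range(10_000)])

BATCH = 1_000
while True:
    cur = conn.execute(
        "DELETE FROM t WHERE rowid IN "
        "(SELECT rowid FROM t WHERE expired = 1 LIMIT ?)",
        (BATCH,),
    )
    conn.commit()                 # commit per batch keeps undo/locking small
    if cur.rowcount < BATCH:      # last (partial or empty) batch: done
        break

print(conn.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 5000 remain
```

Whether batching, CTAS-and-rename, or partition drop is fastest depends on the answers to questions 1-8 above, which is the responder's point.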

  • Making sql server database read -write from read only

    hey guys
I attached AdventureWorks in SQL Server 2008 and it is showing as read-only,
so please guide me on how to make it read-write, or how to remove the read-only tag from the database.
Thanks in advance,
Sujeet, software developer, Kolkata

    Hi,
Is there an error message while you attach (or restore) the database? If so, please provide it.
If not, right-click on your database, choose Properties -> go to Options -> scroll to the end, then change the Read Only option to False.
    I hope this is helpful.
    Elmozamil Elamir
    MyBlog
    Please Mark it as Answered if it answered your question
    OR mark it as Helpful if it help you to solve your problem
    Elmozamil Elamir Hamid
    http://elmozamil.blogspot.com

  • Berkeley DB needs too many write locks on specific size of the records

    Hi,
I put records into the Berkeley DB row by row in a single transaction, and I have discovered a significant increase in write locks at some specific sizes of data.
For example, when I put 1000 records where each record's data size is around 3500 bytes, the transaction uses 428 locks. With bigger or smaller data sizes the transaction needs fewer locks.
I put the statistics in a table:
    Record size | Locks needed by transaction
    ~1400       | 169
    ~3500       | 428
    ~4300       | 6
I think it is somehow related to the page size (16384) or the cache size (64 MB).
Could someone explain why a transaction needs so many write locks with data size ~3500 and fewer locks with data size ~4300?
Is there any way to avoid that rise in lock count? If not, I need to measure the maximum number of locks needed for transactions to complete successfully. Understanding the source of the issue would help me prepare the data that requires the largest number of locks when putting it into the database in one transaction.
    Thanks in advance.
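One plausible explanation, offered as an assumption rather than a definitive Berkeley DB analysis: items larger than roughly a quarter of the page size are typically stored on overflow pages rather than packed onto leaf pages, which changes how many pages (and therefore page locks) a bulk load touches:

```python
# Rough arithmetic behind the lock-count pattern above. The overflow
# threshold of page_size/4 is an assumption about BDB's default layout.
PAGE = 16384
OVERFLOW_THRESHOLD = PAGE // 4   # assumed: larger items go to overflow pages

for size in (1400, 3500, 4300):
    if size > OVERFLOW_THRESHOLD:
        print(size, "overflow item")
    else:
        print(size, "on-page,", PAGE // size, "records per page (ignoring overhead)")
```

Under this assumption, ~3500-byte records just squeeze onto leaf pages (4 per page, so many pages are dirtied and locked), while ~4300-byte records cross the threshold and are handled as overflow items, which matches the sharp drop in the table above. Verifying against the db_stat output and the configured page size would confirm or refute this.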

    Please delete this post and repost in the appropriate forum. Thank you.

  • Determine database read/write statistics

    From the following (in Oracle documentation)
    DB_WRITER_PROCESSES parameter is useful for systems that modify data heavily. It specifies the initial number of database writer processes for an instance.
    And from the "Deployment Guide for Oracle on Windows using Dell PowerEdge Servers.pdf" in http://www.oracle.com/technology/tech/windows/index.html
RAID LEVELS: I have heard that for the disks where datafiles reside, the following is true:
If I/O is <= 90% reads, it is advisable to go for RAID 10. If I/O is > 90% reads, RAID 5 could be considered.
    I would like to know
1. How do we find out whether our database is "read heavy" or "write heavy"? Are there any scripts available?
2. In commercial environments, what sort of RAID levels are normally used for "read heavy" and "write heavy" databases?
    Edited by: sandeshd on Oct 14, 2009 3:11 PM

We were in a similar situation some weeks ago; we decided to create a logoff trigger to save the bytes for a specific schema. You can then work with this data by importing it into Excel or something similar.
    DROP TABLESPACE BYTES_USUARIOS INCLUDING CONTENTS AND DATAFILES;
    CREATE TABLESPACE BYTES_USUARIOS DATAFILE
    '/oradata/oradata/ewok/bytes_usuarios.dbf' SIZE 1024M AUTOEXTEND ON NEXT 25M MAXSIZE UNLIMITED
    LOGGING
    ONLINE
    PERMANENT
    EXTENT MANAGEMENT LOCAL AUTOALLOCATE
    BLOCKSIZE 8K
    SEGMENT SPACE MANAGEMENT MANUAL
    FLASHBACK ON;
    +++++++++++
    CREATE USER B1
    IDENTIFIED BY VALUES %password%
    DEFAULT TABLESPACE BYTES_USUARIOS
    TEMPORARY TABLESPACE TEMP
    PROFILE MONITORING_PROFILE
    ACCOUNT UNLOCK;
    -- 1 Role for B1
    GRANT CONNECT TO B1;
    ALTER USER B1 DEFAULT ROLE NONE;
    -- 2 System Privileges for B1
    GRANT CREATE TABLE TO B1;
    GRANT CREATE SESSION TO B1;
    -- 1 Tablespace Quota for B1
    ALTER USER B1 QUOTA UNLIMITED ON BYTES_USUARIOS;
    ++++++++++
CREATE TABLE b1.BYTES_USUARIOS (
USERNAME VARCHAR2(30 BYTE),
SID NUMBER,
SERIAL# NUMBER,
MACHINE VARCHAR2(64 BYTE),
LOGON_TIME DATE,
CLS VARCHAR2(53 BYTE),
NAME VARCHAR2(64 BYTE),
VALUE NUMBER
)
TABLESPACE BYTES_USUARIOS
PCTUSED 40
PCTFREE 10
INITRANS 1
MAXTRANS 255
STORAGE (
INITIAL 64K
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
FREELISTS 1
FREELIST GROUPS 1
BUFFER_POOL DEFAULT
)
LOGGING
NOCOMPRESS
NOCACHE
NOPARALLEL
MONITORING;
    grant all on b1.bytes_usuarios to system;
    ++++++++++++++++
    grant select on v_$mystat to system;
    grant select on v_$session to system;
    grant select on v_$statname to system;
    DROP TRIGGER SYSTEM.TRG_LOGOFF;
    CREATE OR REPLACE TRIGGER SYSTEM.TRG_LOGOFF
    BEFORE LOGOFF
    ON DATABASE
    DECLARE
    --VAR_CADENA VARCHAR(20);
    begin
    --VAR_CADENA := "%bytes%";
    --execute immediate '       
insert into b1.bytes_usuarios
select
ss.username,
ss.sid, ss.serial#, ss.machine, ss.logon_time,
decode (bitand( 1,class), 1,'User ', '') ||
decode (bitand( 2,class), 2,'Redo ', '') ||
decode (bitand( 4,class), 4,'Enqueue ', '') ||
decode (bitand( 8,class), 8,'Cache ', '') ||
decode (bitand( 16,class), 16,'Parallel Server ', '') ||
decode (bitand( 32,class), 32,'OS ', '') ||
decode (bitand( 64,class), 64,'SQL ', '') ||
decode (bitand(128,class),128,'Debug ', '') cls,
name, (value/1024/1024)
from sys.v_$statname m, sys.v_$mystat s, sys.v_$session ss
where
m.statistic# = s.statistic#
and (name like '%bytes sent%' or name like '%bytes received%')
and ss.sid = (select distinct sid from sys.v_$mystat);
end;
    ++++++++++++
ALL:
select username, name, sum(value)
from b1.bytes_usuarios
group by username, name
order by username, name;
Only bytes sent:
select username, name, sum(value)
from b1.bytes_usuarios
where name like '%sent%'
group by username, name
order by username, name;
Only bytes received:
select username, name, sum(value)
from b1.bytes_usuarios
where name like '%received%'
group by username, name
order by username, name;

  • Access form ABAP to external MySQL-Database (read/write)

    Hello!
We have an external MySQL DB (running on Linux). We need to read this database from our SAP system (running on Linux with an Oracle DB) to create a purchase order. After creating this order in our SAP system, we should update the dataset in the MySQL DB (to record that order creation was successful).
How can we create the connection to the MySQL DB?
    Thank you.
    Best Regards
    Markus

    Hi Markus!
Sorry for the delay, the day was well filled.
For an example of ADBC, as Kennet said, you can use the program ADBC_DEMO.
About RFC: I advise you to read up on SAP JCo (SAP Java Connector). It is an SAP middleware component that enables the development of SAP-compatible components and applications in Java; with it you can send whatever you want when interfacing with SAP.
As I said in my last post above, I would advise creating an RFC instead of using Native SQL. I am not sure in which scenario you have to develop this solution, but I believe it will be more secure.
    Regards.

  • Help!  SAP-LSO - LSO-AE "A Read-Write Error Occurred when checking in the Object"

    Using version 14 of LSO-AE
    WBT publishes fine
    Error occurs during check-in to Master Repository
    It seems to be a 3rd party WBT created by the NTSB, so I only have the SCORM 1.2 package, no source files unfortunately.
    Thanks in advance for any advice that is offered!

Figured it out.  Not that this forum helped!  But here is the answer, should anyone else have a similar problem on check-in.  The standard size limit for the LSO Repository is 100 MB; trying to check in anything larger than this caused the above error.  Increasing the configuration setting fixed the issue and allowed successful course check-in without the above error.

  • Database design: add attribute to some records of the table.

    I have table T(ID, Name, Group, <some 10 columns more>), sample data:
    1, 'a', 'group1',...
    2, 'b', 'group2',...
    3, 'c', 'group1',...
    4, 'd', 'group1',...
5, 'e', 'group3',...
Column "T.Group" puts a record into some kind of logical group; some records belong to "group1" and some do not, as you see.
In such situations this business need comes up often:
"Add to the group1 rows of T one additional attribute/column called A."
Where should column A be defined in such situations, and are there aspects of the decision, such as the size of table T (in rows), that can change the design solution?
My understanding is that a new table S(T_ID, A) should be created for this business need. Column A should not be created in table T; that would be the wrong solution, because some T records don't need that attribute. I also understand that if table T is small in size then creating the new child table S is no problem, but if T were a very big table then it would be better to create column A in table T.
Can you agree/disagree with my understanding?

CharlesRoos wrote:
Should I then create a table called "T_details" where I would put all attributes that come with new business requirements and belong to certain T records? Or do you suggest adding the columns only and always to table T, even if some T records don't need the new attribute?
If every master record will have only one child record (a one-to-one relationship), then just add a new column; if there is a one-to-many relationship, then create a new (child) table.
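The child-table option discussed above can be sketched with SQLite (all table and column names are illustrative); a LEFT JOIN brings the optional attribute back for the rows that have it:

```python
# Optional-attribute child table, as discussed in the thread.
# SQLite is used so the sketch is self-contained; names are made up.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT, grp TEXT);
-- attribute A exists only for the rows that need it (here: group1 rows)
CREATE TABLE t_details (t_id INTEGER PRIMARY KEY REFERENCES t(id), a TEXT);
""")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)",
                 [(1, "a", "group1"), (2, "b", "group2"), (3, "c", "group1")])
conn.executemany("INSERT INTO t_details VALUES (?, ?)", [(1, "x"), (3, "y")])

# LEFT JOIN keeps rows without the optional attribute (a is NULL there)
rows = conn.execute(
    "SELECT t.id, t_details.a FROM t "
    "LEFT JOIN t_details ON t.id = t_details.t_id ORDER BY t.id"
).fetchall()
print(rows)  # [(1, 'x'), (2, None), (3, 'y')]
```

The PRIMARY KEY on t_details.t_id enforces the one-to-one shape the reply describes; for a one-to-many relationship it would become a plain indexed foreign key instead.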
