OLTP performance

Approximately how many seconds or milliseconds should a query with 4-5 table joins take to execute in an OLTP environment? (I'm checking performance in Oracle, and the query is called from a Java/XML application.) I want to make sure performance is acceptable when it is called from the application.
Thanks,
GM

There is no single answer to that question. It depends on how big the tables are, whether there are proper indexes for what you are trying to do, what the hardware is, and how many other users are on the database.
As a rough rule of thumb, though, an OLTP query like that should come back in less than 100 milliseconds.
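One simple way to get a baseline before involving the application (a sketch in SQL*Plus with hypothetical table names; timings measured from the Java side will also include network round trips and connection overhead):
    SET TIMING ON
    SET AUTOTRACE TRACEONLY STATISTICS   -- show elapsed time and I/O statistics, suppress the row output
    -- run the 4-5 table join here, for example:
    SELECT o.order_id, c.name, p.description
    FROM   orders o
    JOIN   customers   c ON c.customer_id = o.customer_id
    JOIN   order_items i ON i.order_id    = o.order_id
    JOIN   products    p ON p.product_id  = i.product_id
    WHERE  o.order_id = 12345;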

Similar Messages

  • SQL Server 2008R2 vs 2012 OLTP performance difference - log flushes size different

    Hi all,
    I'm doing some performance tests against two identical virtual machines (each VM has the same virtual resources and uses the same physical hardware).
    The first VM has Windows Server 2008 R2 and SQL Server 2008 R2 Standard Edition;
    the second VM has Windows Server 2012 R2 and SQL Server 2012 SP2 + CU1 Standard Edition.
    I'm using HammerDB (http://hammerora.sourceforge.net/) as the benchmark tool to simulate a TPC-C test.
    I've noticed a significant performance difference between SQL 2008 R2 and SQL 2012: 2008 R2 performs better. Let me explain what I've found:
    I use a third VM as the client where the HammerDB software is installed, and I run the test against the two SQL Servers (one server at a time); on SQL 2008 R2 I reach a higher number of transactions per minute.
    HammerDB creates a database on each database server (so the databases are identical except for the compatibility level), and then HammerDB executes a sequence of queries (insert/update) simulating the TPC-C standard; the sequence is identical on both servers.
    Using perfmon on the two servers I've found a very interesting thing:
    On the disk used by the HammerDB database's log (I use separate disks for data and log) I've monitored Avg. Disk Bytes/Write and noticed that SQL 2012 writes to the log in smaller packets (say, an average of 3 KB versus an average of 5 KB written by SQL 2008 R2).
    I've also checked the value of Log Flushes/sec on both servers and noticed that SQL 2012 does, on average, more log flushes per second, so more log flushes of fewer bytes each...
    I've searched for any documented difference in the way log buffers are flushed to disk between 2008 R2 and 2012 but found none.
    Can anyone point me in the right direction?

    Andrea,
    1) first of all, fn_dblog exposes a lot of fields that do not exist in SQL 2008 R2
    This is correct, though I can't elaborate, as I do not know how or why the changes were made.
    2) for the same DML or DDL, the number of log records generated is different
    I thought as much (but didn't know the workload).
    I would like to read up and study what these changes are! Do you have any useful links to internals docs?
    Unfortunately I cannot offer anything, as the function used is currently undocumented and there are no published papers or documentation by Microsoft on reading log records or on the why/how. I would assume this is all NDA information held by Microsoft.
    Sorry I can't be of more help, but you at least know that the different versions do have behavior changes.
    Sean Gallardy | Blog | Microsoft Certified Master
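    A minimal sketch of the kind of comparison discussed above, using the undocumented fn_dblog function mentioned in point 1 (run the identical DML workload on both versions in an otherwise quiet test database, then compare the counts):
    CHECKPOINT;                                    -- flush what can be flushed before measuring
    SELECT COUNT(*)                 AS log_records,
           SUM([Log Record Length]) AS log_bytes
    FROM   sys.fn_dblog(NULL, NULL);               -- undocumented; output columns may differ between versions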

  • Split the load between OLTP and Reporting

    Hi,
    my new client needs a way to split the load on a single database where OLTP and reporting applications are both running against the same tables.
    Environment description: Oracle 10g Release 2 on AIX.
    Within the OLTP application, 4 tables are heavily used, with several million DML operations per day (mostly INSERTs and UPDATEs). Those 4 tables share the same primary key and are constantly joined by the reporting tool.
    The business requirement is that the data being read by the reporting tool needs to be "near live" to "live".
    My first idea is to implement an MView log for each of the 4 OLTP tables (on independent physical drives), plus a FAST REFRESH ON COMMIT materialized view joining the 4 tables; this way the reporting tool can read "live" data from this new physical object (stored independently) instead of the actual OLTP tables.
    The reporting tool needs to perform some light DML on some of the other OLTP tables, so the new materialized view (joining the 4 tables) cannot be stored in a separate physical database.
    Any thoughts on this setup (or different architecture ideas)?
    Has anyone implemented a similar setup (with similarly large volumes)?
    Would FAST REFRESH ON COMMIT slow down the OLTP system too much?
    Thank you,
    Patrick
    www.renaps.com

    Hi Justin,
    I agree with you that the benefit we would gain from not having to join the reporting tables comes at a cost to transactional performance.
    On the other hand, by doing so we would also get rid of about half of the transactional indexes (indexes that were added to improve the performance of the reporting queries against the heavily used OLTP tables).
    I would think that the I/O performance lost by adding materialized view logs to the transactional tables would be largely offset by removing those additional indexes.
    Of course, this is only theory; I can't really confirm it until the actual tests.
    To answer your question, the main reasons the load needs to be split from the OLTP are:
    1) to improve reporting performance (since those tables are always joined together);
    2) to reduce the overhead on the OLTP system so the sporadic reporting transactions do not affect OLTP performance;
    3) keeping both the OLTP and reporting systems in the same database is, for now, a necessity, since the reporting system also initiates some DML operations on the OLTP tables.
    Please let me know your thoughts.
    Thanks!
    Patrick
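    For reference, a minimal sketch of the setup described above, with hypothetical table and column names (t1..t4 sharing the key column id); a fast-refresh-on-commit join MV has specific requirements, notably MV logs WITH ROWID and the ROWID of every base table in the select list:
    CREATE MATERIALIZED VIEW LOG ON t1 WITH ROWID;
    -- repeat the MV log for t2, t3 and t4
    CREATE MATERIALIZED VIEW mv_report
      BUILD IMMEDIATE
      REFRESH FAST ON COMMIT
    AS
    SELECT t1.ROWID r1, t2.ROWID r2, t3.ROWID r3, t4.ROWID r4,
           t1.id, t1.col_a, t2.col_b, t3.col_c, t4.col_d
    FROM   t1, t2, t3, t4
    WHERE  t1.id = t2.id
    AND    t1.id = t3.id
    AND    t1.id = t4.id;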

  • Exadata and OLTP

    hello experts,
    in our environment, OLTP databases (10g, 11g) run in single-instance mode and we are planning a feasibility analysis for moving to Exadata.
    1) as per Exadata-related articles, Exadata can provide better OLTP performance with the flash cache.
    If we can allocate enough SGA for the application workload, then what is the point of moving to Exadata?
    2) any other performance benefits for OLTP databases?
    3) since Exadata is pre-configured RAC, will it be a problem for non-RAC databases that have not been tested on RAC?
    In general, how can we conduct an effective feasibility analysis for moving non-RAC OLTP databases to Exadata?
    thanks,
    charles

    Hi,
    1. The flash cache is one of the advantages of Exadata; it speeds up SQL statement processing. Bear in mind that it works at the storage level and should not be compared directly with a non-Exadata machine.
    2. As far as I know, besides faster query elapsed times, we can also benefit from compression (Hybrid Columnar Compression, which is Exadata-specific),
    and because the storage is located inside the Exadata machine, the I/O component of your database performance also improves.
    3. You can have a single-node database on Exadata; just set the connection to use the physical IP directly, instead of the SCAN IP (11g) used for RAC.
    I think the best approach is to project the improvement and cost savings if you are going to migrate to Exadata. Assess the processing improvement you will gain, the storage used, and also the license cost. Usually, most shops use Exadata to consolidate their different physical DB boxes.
    br,
    mrak
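    As a small illustration of the Hybrid Columnar Compression mentioned in point 2 (a sketch with a hypothetical table; HCC requires 11.2 and Exadata or other supported Oracle storage):
    CREATE TABLE sales_archive
      COMPRESS FOR QUERY HIGH            -- hybrid columnar compression, warehouse level
    AS SELECT * FROM sales WHERE sale_date < DATE '2010-01-01';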

  • High water mark creates performance degradation

    Hi all,
    I need help...
    How do I find the high water mark for tables? How do I decide which tables need to be reorganized? I want a clear-cut idea on this issue.
    Please help me.
    Regards,
    Kiran

    You use online segment shrink to reclaim fragmented free space below the high water mark in an Oracle Database segment. The benefits of segment shrink are these:
    Compaction of data leads to better cache utilization, which in turn leads to better online transaction processing (OLTP) performance.
    The compacted data requires fewer blocks to be scanned in full table scans, which in turn leads to better decision support system (DSS) performance.
    Please visit here:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/schema.htm#ADMIN10161
    What were the objects and recommendations generated after the Segment Advisor job execution?
    SQL> SELECT af.task_name, ao.attr2 segname, ao.attr3 partition_name, ao.type, af.message
         FROM   dba_advisor_findings af, dba_advisor_objects ao
         WHERE  ao.task_id = af.task_id
         AND    ao.object_id = af.object_id;
    Which objects in my database can be reduced?
    SELECT tablespace_name, segment_name, segment_type, partition_name,
           recommendations, c1
    FROM   TABLE(dbms_space.asa_recommendations('FALSE', 'FALSE', 'FALSE'));
    where c1 is the command to be used. Before such operations, this has to be issued:
    ALTER TABLE ... ENABLE ROW MOVEMENT;
    Hope it helps.
    Adith
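    Putting the pieces together, a minimal sketch of the shrink itself for one table (hypothetical name my_table; CASCADE also shrinks the dependent indexes):
    ALTER TABLE my_table ENABLE ROW MOVEMENT;
    ALTER TABLE my_table SHRINK SPACE CASCADE;   -- compacts rows, lowers the high water mark, releases the space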

  • Remove duplicates in oracle table

    Hi,
    I want to remove duplicates from an Account table.
    It contains 2 columns, Account_id and Account_type.
    The values in the Account table are:
    Account_id Account_type
    1 GPR
    1 GPR
    1 GPR
    I want to keep only one entry and remove the other entries with Account_id = 1.
    Thanks,
    Petri

    Petri wrote:
    I want only one entry and remove other entry with Account_Id = 1
    Hi Petri,
    Depending on how important performance is for you, go for option 1 if performance is key; otherwise go for option 2. Option 3 is highly recommended if this is a one-time exercise.
    Option 1. For OLTP [performance is important]:
    DELETE FROM account_table
    WHERE ROWID IN
          (SELECT rid
           FROM   (SELECT ROWID rid,
                          ROW_NUMBER() OVER
                            (PARTITION BY account_id, account_type
                             ORDER BY account_id, account_type) rn
                   FROM   account_table)
           WHERE  rn > 1);
    Option 2. [If you are playing around]:
    DELETE FROM account_table
    WHERE ROWID NOT IN (SELECT MIN(ROWID)
                        FROM   account_table
                        GROUP  BY account_id, account_type);
    Option 3. [If you seriously want to make sure no more dirty data gets in]:
    a) Create a temporary table account_table_temp
    b) Use the below
    INSERT INTO account_table_temp
       (SELECT *
        FROM   account_table
        WHERE  ROWID IN (SELECT MIN(ROWID)
                         FROM   account_table
                         GROUP  BY account_id, account_type));
    c) Create a constraint so that you won't have duplicates in the future.
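    For example (a minimal sketch of step c, assuming the table and column names above):
    ALTER TABLE account_table
      ADD CONSTRAINT account_uq UNIQUE (account_id, account_type);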
    Thanks,

  • LIS Update Mode

    Hello Experts,
    I am working in BI 7.0.
    I have a question about 'Direct Delta'. I plan to use the 'Direct Delta' update mode for 2LIS_11_VAITM, 2LIS_13_VDITM and 2LIS_12_VCITM. There would be delta extractions of up to 10,000 documents for my customer. Am I right to use 'Direct Delta'?
    And when using 'Direct Delta', what happens with the setup tables? I mean, could you explain technically and step by step how to load data from R/3 while using 'Direct Delta'? Should I use setup tables?
    Thank you very much.
    Points waiting for you.

    Hi Muju,
    In the case of queued delta, LUWs are posted to the extractor queue; by scheduling the V3 job we move the documents from the extractor queue to the delta queue, and we extract the LUWs from the delta queue into SAP BW by running delta loads. Queued delta maintains an extractor log to handle LUWs that are missed.
    In the case of direct delta, LUWs are posted directly to the delta queue (RSA7), and we extract the LUWs from the delta queue into SAP BW by running delta loads. This degrades OLTP performance, because when LUWs are posted directly to the delta queue, the application is kept waiting until all the enhancement code has executed.
    Queued delta is better than direct delta for performance reasons.
    Hope it helps
    Srini

  • About Data Guard - Physical Standby Database

    Dear All,
    I have read many documents regarding Data Guard.
    I am about to set up Data Guard in our current environment but want to clarify a few things.
    I am confused about the difference between a physical and a logical standby database.
    What I have read from different documents is:
    -- Physical Standby Database (11g)
    1) It is the most efficient
    2) It can be either in mount or open state
    3) Select queries and reporting can be offloaded to it to improve performance of the primary database
    4) The schema on the primary and standby database is always the same
    -- Logical Standby Database
    1) You can create additional tables, indexes, etc.
    2) Always in open mode
    3) Select queries and reporting can be offloaded to it to improve performance of the primary database
    Now our scenario is that we have one server at the moment; the OS is Linux and the database is 11g. We want to set up another server in another country, also 11g on Linux, so that it acts as a standby/backup server. Schema and data are always the same. If the primary server becomes unavailable, the standby server takes over as the primary server (this has to be automated). The reason for unavailability could be anything: maintenance work on the primary server, or a network or hardware failure at the primary. The last and most important thing is that users from the country where we will set up the standby database will insert/update data on the primary server, BUT queries and reporting will be done from this newly created standby database.
    Kindly recommend the best Data Guard configuration for this scenario and kindly correct me where I am wrong.
    Thanks, Imran

    A logical standby has various limitations on things like data types. It's also a much more complex architecture, which makes it more likely that something will break periodically and require attention. Applying redo to a physical standby is code that has been around forever and is as close to bullet-proof as you'll get. And you would generally prefer to fail over to a physical standby: if you do things like create new objects in the logical standby, you may have to get rid of those objects during a failover to get acceptable OLTP performance.
    Justin
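    For completeness, a minimal sketch of the primary-side redo transport setting for a physical standby (hypothetical service and DB_UNIQUE_NAME values; automating the failover additionally requires the Data Guard broker with Fast-Start Failover and an observer):
    ALTER SYSTEM SET log_archive_dest_2 =
      'SERVICE=standby_db ASYNC VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE) DB_UNIQUE_NAME=standby_db'
      SCOPE=BOTH;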

  • BW OLAP

    Hi All,
    Can anyone advise what the differences are between operational systems data modeling and data warehouse data modeling?
    Any documents or guides on these?
    TQ
    Nathan

    Hi Nathan,
    Operational-level reporting refers to reporting capabilities in transactional (operational) systems like SAP R/3. These are also called OLTP systems. They are basically meant for recording transactions; for reporting purposes there are usually separate systems like SAP BW, which pick up the reporting-relevant data from sources like R/3.
    You can also report in the operational system, but then system resources will be divided between recording and reporting tasks. So, to make transaction recording easier and to improve OLTP performance, only basic reporting is done at the operational level and all other reporting is done in OLAP systems like BW.
    Hope this helps you resolve your doubt. Please assign points if you are satisfied with this reply.

  • Index maintenance at target

    When data replication happens at the target database (both source and target databases are Oracle) via GoldenGate for a specific table, will the corresponding indexes at the target also get updated?
    Is there a requirement that a similar index also exist on the source table to achieve this?

    GoldenGate uses standard DML operations to apply the data on the target system. Since it uses standard DML (via OCI) any indexes that exist on the target system are automatically updated by Oracle.
    You can have a totally different indexing scheme on the target than you do on the source. While creating a live reporting environment many customers add additional indexes on the target to help reports run faster, since OLTP performance is no longer a consideration.
    You can even have different keys on the source and target. In this case, there is some manual work, and you'll need to specify those keys in the parameter files using the KEYCOLS parameter.
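    For instance, a reporting-only index created on the target side (a sketch with hypothetical table and column names; nothing equivalent needs to exist on the source):
    CREATE INDEX orders_rpt_ix ON orders (customer_id, order_date);   -- target database only, to speed up reports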

  • ODS  vs  IC

    hi gurus,
    please tell me the difference between an ODS and an InfoCube.
    ThanQ.
    Regards,
    Ramesh.

    Hi,
    Differences between an ODS and a cube:
    1. An ODS has overwrite functionality, but a cube is additive only.
    2. An ODS is a two-dimensional data container, but a cube is multi-dimensional.
    3. In an ODS, characteristics are called key fields and key figures are called data fields, but in a cube there are only characteristics and key figures.
    4. In an ODS, data is stored in a flat, transparent table model, but in a cube data is stored in a star schema model.
    5. An ODS contains current data, but a cube contains historical data.
    6. An ODS is maintained in 3 different tables, but a cube is not.
    7. An ODS can extract from different DataSources, but a cube can't.
    8. In an ODS, data can't be summarized, but in a cube data can be summarized/aggregated.
    9. An ODS is not always advisable for reporting, but a cube is advisable for better granularity.
    Exactly: for reporting purposes, data is generally taken from the InfoCube for better reporting, i.e. at the right level of granularity.
    An ODS is not always advisable for reporting because, if you report directly on the ODS, OLTP performance degrades, since it holds the current data.
    So always load into the cube via the ODS for better reporting.
    Also go through these links for more details:
    Difference Between ODS and Infocube.
    Difference between InfoCube and ODS Object
    what's difference between ODS and Infocube's Delta update?
    ODS and Cube
    Thanks,
    Happy Life,
    Aravind

  • RAID0+1 vs RAID1

    I am building a write-intensive OLTP system and need to get the best performance and availability from the disks (a 12-disk system).
    All documentation I have read suggests RAID 0+1. However, recommendations also suggest putting the OS, indexes, rollback, redo logs, data, etc. on separate disks.
    Am I better off using RAID 1 and having a larger number of logical volumes, or using RAID 0+1 with a few volumes?
    Grateful for any suggestions on how best to arrange things.

    I've seen good results with both; however, I've only used the RAID 0+1 approach with very large databases (i.e. several hundred disks). The configuration I am thinking of used one set of 12 RAID 0+1 disks for system, redo, rollback, and temp, and used the remaining 50 sets of 12 RAID 0+1 disks for data and indexes. (Note: we did not bother to separate data from indexes; it was not worth the bother.)
    There is nothing wrong with using RAID 0 with Oracle databases (at least where performance is concerned). Just be sure to make your stripe width a multiple of the database block size, and make certain that database blocks are aligned with the RAID 0 stripes. (It's bad when a single I/O has to visit more than one disk.)
    If performance is your main goal (and you are really looking at high transaction rates), you probably do not want your online redo logs (or your archived redo logs) to sit on the same disk(s) as your data; separate these. This is also a highly recommended practice for data integrity: if you lose a single device that contains both data and redo, you can permanently lose committed transactions.
    I would personally want to consider some things other than raw OLTP performance, one of the main ones being backup/recovery. RAID 1 is reliable, but not infallible; how much data do you want to recover in the event of media failure, and how fast do you need to do it? A 12-disk RAID 0+1 set on 72 GB disks makes your "minimum unit of recovery" about 400 GB, so you'd better have fast tape drives! (And they had better not be network-attached!)
    Before you can really make a decision of this kind (it can be fundamental to your performance), you should really do some benchmarking. Then you can answer the question yourself. In the meantime, make certain that your benchmark includes more than just the OLTP aspect of the system (a common benchmarking error). Try also to benchmark backup/recovery, batch processing, and reporting. All of these are likely to have very different I/O characteristics from your OLTP workload, and in most cases they are equally important (if not nearly as glamorous).
    One thing to consider as you move forward is that disks are cheap. One of the cheaper ways to improve performance is to add more spindles. Maybe your database will fit on 12 disks, but that does not mean you should not use 36 or 48.

  • Datafile issue

    Hi
    we are using Oracle 10g for our database. Whenever my application runs, it deletes all records that are older than 24 hours. I get, on average, 3000 new records daily. After 24 hours they are deleted, but the datafile size keeps on increasing. I even used the command ALTER TABLE tablename DEALLOCATE UNUSED, but the space is not released. What is the reason? I want to resize the datafile.
    Please reply urgently.
    regards
    murali krishna

    You have to Shrink Database Segments
    Shrinking Database Segments Online
    You use online segment shrink to reclaim fragmented free space below the high water mark in an Oracle Database segment. The benefits of segment shrink are these:
    Compaction of data leads to better cache utilization, which in turn leads to better online transaction processing (OLTP) performance.
    The compacted data requires fewer blocks to be scanned in full table scans, which in turn leads to better decision support system (DSS) performance.
    Segment shrink is an online, in-place operation. DML operations and queries can be issued during the data movement phase of segment shrink. Concurrent DML operation are blocked for a short time at the end of the shrink operation, when the space is deallocated. Indexes are maintained during the shrink operation and remain usable after the operation is complete. Segment shrink does not require extra disk space to be allocated.
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/schema.htm#ADMIN10161
    Kamran Agayev A. (10g OCP)
    http://kamranagayev.wordpress.com
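    Once the segments have been shrunk (ALTER TABLE ... ENABLE ROW MOVEMENT followed by ALTER TABLE ... SHRINK SPACE, as described above), the datafile itself can be resized; a minimal sketch with a hypothetical file name and size:
    ALTER DATABASE DATAFILE '/u01/oradata/mydb/users01.dbf' RESIZE 500M;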

  • Anyway to shrink tablespace online?

    I have a 500 GB tablespace with 10 files that was full; after deleting and purging, it is now half free. I want to reduce the datafile sizes, but I cannot do "alter database datafile ... resize ..." because some objects sit at the end of the files.
    Is there any way to shrink the tablespace/datafiles online?
    Thanks!!

    Oh, sorry I thought they pointed you in the right direction.
    What about using the SHRINK SPACE command? Take a look at it here...
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28310/schema003.htm#ADMIN10161
    It seems to do what you want.
    You use online segment shrink to reclaim fragmented free space below the high water mark in an Oracle Database segment. The benefits of segment shrink are these:
    Compaction of data leads to better cache utilization, which in turn leads to better online transaction processing (OLTP) performance.
    The compacted data requires fewer blocks to be scanned in full table scans, which in turn leads to better decision support system (DSS) performance.
    Segment shrink is an online, in-place operation. DML operations and queries can be issued during the data movement phase of segment shrink. Concurrent DML operation are blocked for a short time at the end of the shrink operation, when the space is deallocated. Indexes are maintained during the shrink operation and remain usable after the operation is complete. Segment shrink does not require extra disk space to be allocated.
    Segment shrink reclaims unused space both above and below the high water mark. In contrast, space deallocation reclaims unused space only above the high water mark. In shrink operations, by default, the database compacts the segment, adjusts the high water mark, and releases the reclaimed space.
    Segment shrink requires that rows be moved to new locations. Therefore, you must first enable row movement in the object you want to shrink and disable any rowid-based triggers defined on the object. You enable row movement in a table with the ALTER TABLE ... ENABLE ROW MOVEMENT command.

  • Materialized View Logs on OLTP DB- Performance issues

    Hi All,
    We have a request to check what the performance impact would be of having materialized views (with FAST refresh every 5 and every 30 minutes).
    We have been using some APIs (I don't have full details of this job) to refresh tables in the reporting DB and want to switch to MVIEWS in the next release.
    The base tables for these MVs are in DB1, which has high DML activity.
    We are planning to create 7 MVs in a reporting DB pointing to the corresponding tables in DB1.
    I am setting up the environment with the required tables to test this, and I also want to hear your experiences implementing MViews with fast refresh pointing to a typical OLTP DB, as I am new to MVIEWS.
    How does this affect the performance of DML statements on the base tables?
    How often did you have to do a complete refresh because of invalid/outdated MView logs?
    Any other maintenance overheads?
    Any possible workarounds?
    Oracle Version: 9.2.0.8
    OS : HP-UX
    Thank you for sharing your experiences.

    Doing incremental refreshes every 5 minutes will add some amount of load to the OLTP system. Again, depending on your environment, that may or may not be significant.
    What factors can affect this? Among other things, the size of the materialized view logs that need to be read, which in turn depends on the number of transactions and the setup of the materialized views, the current load on the OLTP system, etc. If you're struggling with an I/O or CPU bottleneck at peak processing now, for example, having a dozen materialized view refresh processes running as well would probably generate noticeable delays in the system. If you have plenty of spare CPU and I/O bandwidth, the refreshes will be less noticeable.
    Is it the same in 10g R2 too? We are upgrading to it this coming October. Streams is far easier to deal with in 10.2.
    Justin
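    As a rough illustration of the kind of scheduled fast refresh being discussed (a sketch with a hypothetical MV name; on 9.2 you would submit an equivalent DBMS_JOB, while DBMS_SCHEDULER is available from 10g onwards):
    BEGIN
      DBMS_SCHEDULER.CREATE_JOB(
        job_name        => 'REFRESH_MV_REPORT',
        job_type        => 'PLSQL_BLOCK',
        job_action      => 'BEGIN DBMS_MVIEW.REFRESH(''MV_REPORT'', method => ''F''); END;',
        repeat_interval => 'FREQ=MINUTELY;INTERVAL=5',
        enabled         => TRUE);
    END;
    /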
