Questions about Real Application Testing(RAT)

Hi All,
We have a production database running on 10gR3 on a server with local drives, while we have an Oracle 11gR2 DB running on a server with NFS mounts (using S7310 - AmberRoad), i.e., faster and better storage.
We captured the load on 10gR3 and replayed it on 11gR2. We noticed the following:
(1) Replay is considerably slower even though the Oracle 11gR2 instance has faster storage. We suspect it may have something to do with the buffer cache / SGA, because there is nothing in the cache on the target (we didn't shut down the source 10g DB for the capture) – what should we do then?
(2) To make sure we could take advantage of the cache, we replayed the load a 2nd time right after the 1st replay and, to our surprise, everything ran. So we are wondering how that's possible, since we did not restore the DB (we don't want to wipe out the cache – a chicken-and-egg situation). Does Oracle roll back the changes after the replay?
(3) Do we have to restore the database on the target every time we do a replay? But if we do that, then we won't have anything in the SGA.
So we need your advice, and we would also like to know how everyone else is doing this testing.
Regards,
RJiv.

DB Replay's workload capture facility allows you to either start capture from a closed (mounted) database (capture starts upon opening the DB), or to begin capture mid-stream during normal activity. Starting capture on the production system from a closed database eliminates the divergence in performance resulting from a primed cache, as well as possible data divergence issues from open, partially-completed transactions at the time the capture started.
For many customers, it will clearly not be possible to close their database during peak periods (!!)
One way to address the cache priming issue is to start capture in production from a closed state during a low period of activity, and then allow capture to run through the peak period.
Another approach is to start capture mid-stream with the DB open and to run capture for a long period (long enough to stabilize the cache). When performing the replay, begin a new AWR snapshot after the cache has stabilized.
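If you drive capture through the API rather than Enterprise Manager, a minimal sketch looks like this (directory path, capture name, and duration are illustrative, not from your environment):

    -- Create a dedicated location for the capture files (best practice: keep it separate)
    CREATE OR REPLACE DIRECTORY rat_capture_dir AS '/u01/app/oracle/rat/capture';

    BEGIN
      -- Start the capture; duration is in seconds (here, a 2-hour window)
      DBMS_WORKLOAD_CAPTURE.START_CAPTURE(
        name     => 'peak_capture_01',
        dir      => 'RAT_CAPTURE_DIR',
        duration => 7200);
    END;
    /

    -- Stop it explicitly if you did not specify a duration
    BEGIN
      DBMS_WORKLOAD_CAPTURE.FINISH_CAPTURE;
    END;
    /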
Your question about running the replay again after the first replay is done is confusing. Of course you will not get meaningful data from that, since replay must begin from the capture start SCN. If you run replay twice in a row without reverting the database to the capture start SCN, it will be applying meaningless changes to a database in a state that is unlike that of the original. You will be testing the data-error code path instead of real performance.
It is typical to enable database flashback on the replay database so that it can be repeatedly reverted to the capture start SCN for testing under a variety of scenarios.
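A minimal sketch of that flashback setup (the restore point name is illustrative):

    -- On the replay database, before the first replay:
    CREATE RESTORE POINT before_replay GUARANTEE FLASHBACK DATABASE;

    -- After each replay run, revert and reopen:
    SHUTDOWN IMMEDIATE
    STARTUP MOUNT
    FLASHBACK DATABASE TO RESTORE POINT before_replay;
    ALTER DATABASE OPEN RESETLOGS;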
Regards,
Jeremiah Wilton
Blue Gecko, Inc.
http://www.bluegecko.net

Similar Messages

  • Real Application Testing (RAT); questions

    Hi,
    We are currently working on a mission involving RAT, both SPA and DB Replay.
    The customer raised some questions and I was wondering if anybody has more detailed information to share?
    FYI: in most cases we are using the API calls, as we don't have EM Grid Control 11g available, only Database Control.
    Q1: If the DB capture runs out of space, what would be the impact on the Prod database?
    My response would be: no impact on the database as the recorder stops
    Best Practice: create a separate location to store the recording
    Q2: What could be missed with RAT (SPA/DB Replay) --> gap analysis.
    The goal is to reduce as much as possible the time needed by the ADM (Development Team) to validate the database migration.
    The database team wants to be as independent as possible from the development team so that they can perform database upgrades more autonomously. They want to know what they could be missing if they rely on SPA and DB Replay, i.e., whether there are particular things that SPA and DB Replay would miss (and therefore not test) that could lead to unhappy developers after the database upgrade.
    Q3: Difference between DB Replay and SPA (when to use which one).
    Q4: Oracle recommends shutting down the database before recording the workload, but that may not always be possible (Web Banking). When and how should we do it then?
    My response was to start recording at the lowest-load time.
    Any other ideas to minimize the impact on a running instance?
    Q5: Do we need to filter out SYS/SYSTEM, RMAN, ... activities during capture?
    Q6: The capture report shows 5 user calls captured with errors, but I can't find any related information on what went wrong.
    What is this related to?
    Captured Workload Statistics
    •  'Value' represents the corresponding statistic aggregated across the entire captured database workload.
    •  '% Total' is the percentage of 'Value' over the corresponding system-wide aggregated total.
    Statistic Name                     Value    % Total
    DB time (secs)                     7.77     10.43
    Average Active Sessions            0.03
    User calls captured                1713     9.09
    User calls captured with Errors    5
    Session logins                     23       32.39
    Transactions                       33       3.74
    Q7: During the capture we can run an AWR export, but we also need to do an AWR import somewhere, and the RAT User Guide does not document how this is done.
    Can you comment on how this process works so that we can do proper reporting?

    Q1
    I have not had a case where the RAT capture files running out of space caused a database issue.
    Q2
    You will miss anything you filtered the capture on, and anything that errored during the capture.
    Capture will also not include the following, so be aware of:
    Direct Path Load (SQL Loader)
    Shared server (Oracle MTS)
    Oracle Streams & Advanced Replication Streams
    Non-PL/SQL based Advanced Queuing (AQ)
    Flashback queries
    OCI-based Object Navigation
    Non SQL-Based Object Access
    Distributed transactions, remote describe/commit operations (will be replayed as local transactions)
    Q3
    I typically use DB Replay for an overall workload perspective, SPA at the SQL level.
    For example, I will do a DB Replay, find from my reports the SQL that needs a further look, and use SPA from there.
    Or I will potentially use SPA to focus on the SQL itself and not concern myself with the whole workload until I am ready.
    Q4
    Note the SCN at which capture is started and have the replay database recovered to that SCN prior to any replay operation. No real need to shut down; I have done this many times with success.
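    A sketch of that recovery step, assuming flashback is enabled on the replay database (the SCN value is illustrative; the capture start SCN is shown in the workload capture report):

        -- Before each replay run, revert the replay database to the capture start SCN
        SHUTDOWN IMMEDIATE
        STARTUP MOUNT
        FLASHBACK DATABASE TO SCN 1234567;
        ALTER DATABASE OPEN RESETLOGS;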
    Q5
    Typically I will filter any Grid Control agent or other agent software activity out of the capture, for sure.
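    For example, a sketch of excluding an agent program from the capture (the filter name and program pattern are illustrative; filters must be added before the capture is started):

        BEGIN
          -- Exclude sessions whose program matches the agent executable
          DBMS_WORKLOAD_CAPTURE.ADD_FILTER(
            fname      => 'exclude_em_agent',
            fattribute => 'PROGRAM',
            fvalue     => 'emagent%');
        END;
        /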
    Q6
    Your Capture Report does not show anything?
    Q7
    You can extract and load AWR with the following packaged procedures:
    sys.dbms_swrf_internal.awr_extract
    sys.dbms_swrf_internal.awr_load
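    For the capture-specific AWR data there are also documented wrappers in DBMS_WORKLOAD_CAPTURE; a sketch (the capture ID and staging schema are illustrative, taken from DBA_WORKLOAD_CAPTURES on your system):

        -- On the capture system: export the AWR snapshots belonging to capture 42
        BEGIN
          DBMS_WORKLOAD_CAPTURE.EXPORT_AWR(capture_id => 42);
        END;
        /

        -- On the replay/test system, after copying and processing the capture directory:
        DECLARE
          l_new_dbid NUMBER;
        BEGIN
          l_new_dbid := DBMS_WORKLOAD_CAPTURE.IMPORT_AWR(
                          capture_id     => 42,
                          staging_schema => 'RAT_STAGE');  -- existing schema used as staging area
          DBMS_OUTPUT.PUT_LINE('Imported AWR data under DBID ' || l_new_dbid);
        END;
        /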

  • Need Help on Real Application Testing (RAT) installation and configurations

    Hi Folks,
    We are expecting an opportunity around a RAT implementation in the near future, and our team is trying to explore RAT and needs help with installation and configuration. I am looking for some RAT contacts; please help me.
    Thanks,
    Jay.

  • How to use "Oracle Real Application Testing"

    I am analyzing if/how my employer can make best use of "Oracle Real Application Testing". I therefore read various documentation and articles as well as the relevant sections in the "Oracle® Database PL/SQL Packages and Types Reference". However, there is one conceptual question which I could not yet answer from this documentation.
    From what I understood there are (at least) two possible approaches to capture and replay SQL workload:
    1) Capture the workload into a STS (SQL Tuning Set) using DBMS_SQLTUNE. Replay the workload before/after changes made to the system using DBMS_SQLPA. DBMS_SQLPA can be further used to compare the before/after execution and to generate comparison reports thereof.
    2) Capture the workload into OS files (and optionally into a SQL Tuning Set in addition to that) using DBMS_WORKLOAD_CAPTURE. Replay the workload using DBMS_WORKLOAD_REPLAY. DBMS_WORKLOAD_REPLAY can be further used to compare different replay runs and generate reports thereof.
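    For illustration, approach (1) might be driven roughly like this (the set, task and execution names are made up; exact parameters are in the Packages and Types Reference):

        DECLARE
          l_task VARCHAR2(64);
          l_exec VARCHAR2(64);
        BEGIN
          -- Capture a SQL workload into a SQL Tuning Set by polling the cursor cache
          DBMS_SQLTUNE.CREATE_SQLSET(sqlset_name => 'my_sts');
          DBMS_SQLTUNE.CAPTURE_CURSOR_CACHE_SQLSET(
            sqlset_name     => 'my_sts',
            time_limit      => 1800,   -- poll for 30 minutes
            repeat_interval => 300);   -- every 5 minutes

          -- Create an SPA task on the STS and run a "before change" trial
          l_task := DBMS_SQLPA.CREATE_ANALYSIS_TASK(sqlset_name => 'my_sts');
          l_exec := DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(
                      task_name      => l_task,
                      execution_type => 'TEST EXECUTE',
                      execution_name => 'before_change');

          -- After making the system change, run an "after change" trial and then a
          -- COMPARE PERFORMANCE execution; DBMS_SQLPA.REPORT_ANALYSIS_TASK builds the report.
        END;
        /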
    Now my questions are:
    (i) Is the overview I sketched above correct or did I misunderstand/miss important points?
    (ii) What is the conceptual difference between the two approaches depicted in (1) and (2) above / what is the intended use for each of them?
    (iii) What is the difference between the information that DBMS_WORKLOAD_CAPTURE writes in the corresponding OS files and the information that is stored in a STS? From what I understand, both store information related to SQL execution during a certain period, including the impact of each SQL on the entire workload. Are the two just different formats to store the data (within and outside of the database) or is there any different information stored in them?

    Here you have books related to that theme:
    http://otn.oracle.com/pls/db92/db92.docindex?remark=homepage
    Joel Pérez

  • Real Application Testing , hardware changes.

    hi ,
    I have always wondered how Real Application Testing would overcome one of the most common issues in testing. Here is the scenario.
    Let's say a customer is running Oracle 10.2.0.5 on a server with 16 procs, 24 cores and 150 GB RAM on his production box.
    Now he wants to test 11.2 on a test box with lower hardware specs, say 8 procs, 12 cores and 60 GB RAM.
    My doubt is: how will the replay process in RAT account for this change and compare the test with production?

    You must understand that RAT is really an umbrella term; the underlying approach depends on what you want to test and to what extent. Since you have mentioned a hardware change, the suggested way would be DB Replay: you capture the workload on the prod box, move it to the test box, and replay that workload under the changed hardware. DB Replay gives you a delta between the two runs, so you can measure the performance gain (or loss).
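    As a rough sketch of the replay side (the directory object, names and paths are illustrative):

        -- On the test system, with the capture files copied into the REPLAY_DIR directory object:
        BEGIN
          DBMS_WORKLOAD_REPLAY.PROCESS_CAPTURE(capture_dir => 'REPLAY_DIR');  -- one-time preprocessing
          DBMS_WORKLOAD_REPLAY.INITIALIZE_REPLAY(
            replay_name => 'hw_change_test',
            replay_dir  => 'REPLAY_DIR');
          DBMS_WORKLOAD_REPLAY.PREPARE_REPLAY;   -- defaults give a synchronized replay
        END;
        /

        -- Start one or more replay clients from the OS, then kick off the replay:
        --   $ wrc system/<password> mode=replay replaydir=/u01/rat/replay
        BEGIN
          DBMS_WORKLOAD_REPLAY.START_REPLAY;
        END;
        /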
    Have a read,
    http://docs.oracle.com/cd/E11882_01/server.112/e16540/dbr_capture.htm#CACCGCFB
    HTH
    Aman....

  • How to learn Real Application Testing

    How do I learn and test Real Application Testing and SQL Performance Analyzer?
    We only have Oracle 10g. Does this require 11g?
    Can I learn/test on 10g?

    Hi,
    RAT is an 11g feature. You can capture the load on a 10g database (10.2.0.4 or higher), but to replay it you will need an 11g database.
    Regards

  • Real Application Testing/Sql Performance Analyzer Docs in 10g....

    I believed that both the tools mentioned in the subject are a part of 11g and so are in the 11g docs.
    http://download.oracle.com/docs/cd/B28359_01/server.111/e12253/toc.htm
    But I was just looking at the 10g book listing and I saw the same book in the 10g documentation as well.
    http://download.oracle.com/docs/cd/B19306_01/server.102/e12024/toc.htm
    Now my best guess is that this is due to the backporting of the 11g tools to previous releases, meaning that we can do RAT or use SPA on pre-11g databases as well, so this doc book was added. Is that correct, or is there some other reason?
    Cheers
    Aman....

    Real Application Testing was introduced as a new feature in Oracle Database 11g. The documentation for Real Application Testing was released in 4 phases.
    In the initial phase, the usage of Real Application Testing was documented in the Performance Tuning Guide for release 11.1.
    In the second phase, the Real Application Testing User's Guide was released for release 10.2 to document certain backported functionalities. For Database Replay, the capability to capture a database workload was backported. For SQL Performance Analyzer, the capability to capture a SQL workload into a SQL tuning set was backported. These functionalities were backported to Oracle Database 10g Release 2 (version 10.2.0.4 or higher) to support customers who want to use Real Application Testing to test database upgrades from a previous version of Oracle Database to Oracle Database 11g.
    In the third phase, the Real Application Testing Addendum was released for release 11.1 to document the added functionality to read SQL trace files from Oracle Database 9i to construct a SQL tuning set that can be used as an input source for SQL Performance Analyzer in release 11.1.0.7.
    In the final phase, all available documentation for Real Application Testing contained in the above documents was consolidated into the Real Application Testing User's Guide for release 11.1. Going forward, this guide will contain all documentation of Real Application Testing and will be updated for each release.
    Regards,
    Immanuel

  • Real application testing

    Hi,
    OS: Windows 2003 and Red Hat 5 (two environments)
    DB: Oracle 11.1.6
    How do we check load balancing in Oracle Real Application Testing?
    For Ex:
    Let's assume that we have 5 users and 10 transactions/SQL statement executions. How will the system perform if we have a 20% increase in the current workload, or 10 users and 20 transactions...
    Currently we have a 2-node Oracle 10gR2 cluster on Windows 2003.
    Regards,
    Vinodh.N

    Hi,
    Thanks for reply.
    Can I check the workload gradually (5 users, then 10 users, and so on...) with Real Application Testing in Oracle 11g or Oracle 10.2.0.4?
    If not, how can I check the workload in Oracle 11g or Oracle 10g?
    Real Application Testing means only capturing the workload on the production DB and replaying it on the test DB. Is that correct?
    Edited by: user3266490 on Feb 18, 2010 4:31 PM

  • Real Application Testing 10g 64 bits?

    The 32-bit version is available here:
    http://www.oracle.com/technology/software/products/database/oracle10g/realapptesting.html
    But does anyone know when Real Application Testing for 10g 64-bit will be out?

    But does anyone know when Real Application Testing for 10g 64-bit will be out?
    Oracle corp ;-).
    Aman....

  • Newbie questions about the XE Application Builder

    Hi
    I find that Oracle provides something that we can use to build real database-driven applications.
    I have some questions about it:
    1- Does this application builder differ from the Oracle 10gR2 Application Builder? I mean, is it limited?
    2- Has anyone used it to build a real-world application that is running somewhere in a business?
    3- What code and technology is behind it? Does it use JSF and managed beans? How can I see the generated code?
    Thanks

    Hello,
    Here's the link to the Application Express forum: Oracle Application Express (APEX)
    You should also look at the Application Express OTN page, http://www.oracle.com/technology/products/database/application_express/index.html; there are how-tos, tutorials, and all sorts of good stuff.
    Carl

  • Question about managing application licenses together with netboot

    Hello
    So far I have no experience with a Mac OS X Server NetBoot configuration, but we plan to introduce one in a few weeks. We are therefore wondering how the management/assignment of application licenses works in a NetBoot configuration. Normally, in a local Mac OS X configuration, licenses are generally assigned per application (some are bound to a certain user name, others to the CPU ID) and are valid for all users who log in to the same computer. How, and at what point, are application licenses (for example MS Office 2004, Adobe Acrobat, etc.) assigned in a NetBoot environment?
    Many thanks in advance for any clarification and with best regards, André Müller

    Hi Andre,
    Is this a technical or a legal question? Either way it is an interesting question about a major grey area, and I wonder if anyone has the right answer.
    Boiling your question down to specifics, it could be...
    I have 15 individual Photoshop serials. I want to run Photoshop off a NetBoot image for 15 simultaneous users. Can I install one serial on the image and allow 15 users to open Photoshop simultaneously? What if I want to install the image on 20 machines, but only 15 users will ever use Photoshop simultaneously?
    There is a technical answer and a legal answer to the question. I don't know if there is a 'right' answer since multi-user installations of titles that are technically possible under NetBoot may not be allowed legally under the user license agreement you committed to on breaking the seal on the software. And these agreements are different for different countries.
    Here's the technical answer. Sometime before OS X Server 10.0, when Apple introduced NetBoot technology, developers were advised not to perform network scans checking for duplicate serial numbers when apps start up.
    In practice this means you can install a software title on your image, activate it with a serial number, and it will not conflict with other identical versions of the same application on other images on the network.
    FileMaker Pro is the only software I am aware of that doesn't play by these rules, although it no longer checks when the application starts, but when any FMPro network activity is started.
    Here's the legal answer. It depends on the title and the country you are in. The legality is defined under the EULAs for the titles concerned. I don't know if the legality has ever been tested in a court of law in any country in this context.
    The easy way to do the right thing for some titles is to purchase a multi-user serial number, i.e., a single number which will work for 5, 10, 20, 50... etc. users. For example, FileMaker has a volume licensing serial number which I believe can be purchased in increments of 10 users. However, not all titles offer this option, and it's difficult if you need 12 copies and have to buy 20.
    Sassafras software http://www.sassafras.com/ sells Keyserver, which may be an appropriate solution. Keyserver tracks installations and authenticates serial numbers to ensure no more than the licensed number of copies can be run at the same time.
    I don't think I've answered your question for you, but I hope to have expanded the context of the question for other suggestions.
    Good luck,
    b.

  • Question about evictor, application performance, and log growth

    Hello,
    I've got an application that uses two BDB environments: one with critical data and one with data that can be reconstructed, split that way just for the sake of backup convenience. For a while, I was getting OutOfMemoryErrors with increasing frequency. This caused the app to crash without closing the databases or the environment, until the recovery time reached around 30 minutes. After reading through many of the posts here and trying various workarounds, I got to a state where that stopped happening. I think what fixed it was decreasing the cache sizes.
    Then I was seeing what appeared to be evictor deadlocks. The application would hang for thirty minutes before a monitoring script that I've got detected that no work was being done and restarted the application. Another round of searching ensued, and I seem to have gotten it to a state where it doesn't run out of memory or deadlock anymore.
    After getting rid of all these issues and letting it run for four or five days, the size of the critical BDB environment was 550GB. Since it was almost filling the drive, I moved it aside and created a new BDB for the app to use. Now I'm trying to figure out what happened. It looks like the BDB contains about 15GB of actual data.
    It seems like everything starts out fine and then, somehow, it starts to collapse under the weight of itself. I'd like to figure out what type of abuse I may be guilty of that could cause this.
    Here's a little bit about my application:
    First (critical) BDB environment:
    - add only, no deletes
    - two databases
    - average data sizes are 1k and 11k respectively
    - average key sizes are around 80 and 16 bytes respectively
    - almost no locality of reference to access pattern in either
    - minimal lookups, mostly used as an archive
    - cache size is 1MB
    - 8 threads reading/writing, 40 threads reading only
    Second (reconstructable) BDB environment:
    - six databases
    - five databases are mostly add with purges of two-week-old data
    - one database has frequent adds and deletes
    - lots of lookups, things are likely to be looked up shortly after they're added
    - cache size is 64MB
    - 40 threads reading/writing
    On both environments, I've set je.evictor.forcedYield=true.
    I'm running the JVM with:
    -Xmx1800m
    -Dje.adler32.chunkSize=8192
    -XX:+UseMembar
    JVM version is:
    java version "1.6.0_01"
    Java(TM) SE Runtime Environment (build 1.6.0_01-b06)
    Java HotSpot(TM) Server VM (build 1.6.0_01-b06, mixed mode)
    BDB version is 3.2.23
    OS/Kernel is:
    Linux app2 2.6.9-42.0.10.ELsmp #1 SMP Fri Feb 16 17:17:21 EST 2007 i686 i686 i386 GNU/Linux
    So, my question is:
    What could cause the critical database (especially with no deletes) to grow to 550GB when there's only 15GB of data in it? It was my understanding that the cleaner should take care of any unused log file cruft. It seems like the log files grow faster as the performance gets worse. I have no idea which is the cause or the effect.
    And, I guess the real question is how can I prevent this in the future?
    Thanks,
    -Justin

    For the record, the problem was that the user's cache size was too small for the size of the database thereby creating a large number of records in the log when eviction of dirty records occurred. The cleaner was unable to keep up with the log load.
    Note to readers: eviction of dirty records causes records to be written to the log. JE has to evict to somewhere and the log is what it uses.
    Charles Lamb

  • Question about multiple application modules.

    Hello,
    Suppose I have 2 database schemas. For each schema I have 1 application module. All view objects are in the same project.
    For each application module I have an application definition.
    Is it possible to link between these two applications? Sometimes these database schemas link to each other, and you would like to jump from one application directly to the other.
    For example, you have orders located in one DB schema and customers in another DB schema.
    You would want to jump from the order directly to the customer details in the second schema.
    Is this possible? Or do I need to make nested application modules etc?
    I can find very little info about this; the JHS def guide does not offer much information about multiple application modules etc.
    -Anton

    Anton,
    Yes, you can jump around as you like.
    You just can't use view links between VO's based on different db schema tables.
    Nesting AM's doesn't help there, the DB connection of the top-level AM will be used.
    Why don't you create synonyms for the tables in the other schema?
    Steven Davelaar,
    JHeadstart Team.

  • Oracle Real Application Testing

    I'm capturing a 10.2.0.5 RAC workload and attempting to replay it on an 11.2.0.3 RAC. All the docs describe going from a standalone node to RAC and all the steps for doing so. However, I can't find anything outlining the process from RAC to RAC.
    Any help would be appreciated.
    Thanks

    user12006502 wrote:
    I'm capturing a 10.2.0.5 RAC workload and attempting to replay it on an 11.2.0.3 RAC. All the docs describe going from a standalone node to RAC and all the steps for doing so. However, I can't find anything outlining the process from RAC to RAC.
    Any help would be appreciated.
    Thanks
    It's in there, you just need to look around for it (and probably do a bit of reading).
    Things like
    http://docs.oracle.com/cd/E11882_01/server.112/e16540/dbr_capture.htm#CACICAAC
    Only one workload capture can be performed at any given time. If you have a Oracle Real Application Clusters (Oracle RAC) configuration, workload capture is performed for the entire database. To enable a clean state before starting to capture the workload, all the instances need to be restarted.
    and
    http://docs.oracle.com/cd/E11882_01/server.112/e16540/dbr_replay.htm#CHDBCADJ
    For Oracle Real Application Clusters (Oracle RAC) databases, you can map all connection strings to a load balancing connection string. This is especially useful if the number of nodes on the replay system is different from the capture system. Alternatively, if you want to direct workload to specific instances, you can use services or explicitly specify the instance identifier in the remapped connection strings.
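    For that remapping, a sketch (the connection ID and connect string are illustrative; DBA_WORKLOAD_CONNECTION_MAP lists the captured connections, and the call is made after INITIALIZE_REPLAY):

        BEGIN
          -- Point a captured connection at a load-balancing alias on the replay cluster
          DBMS_WORKLOAD_REPLAY.REMAP_CONNECTION(
            connection_id     => 1,
            replay_connection => 'rac-scan:1521/testsvc');
        END;
        /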
    Cheers,

  • Question about writing Junit test for SQLj application

    I just inherited a SQLj application without unit tests. After making a few enhancements, I plan to add unit tests to make it more solid. What I plan to do is invoke the SQLj translator dynamically to generate the Java source, then compile and invoke the resulting class dynamically in the JUnit test. Does anyone know how to call the SQLj translator as an API call?

    I guess I might add that the error I am getting is
    java.lang.NullPointerException
    I've got this now:
         public E top() throws EmptyStackException {
              if (numItems == 0)
                   throw new EmptyStackException("Stack is empty");
              return S[numItems--];
         }
    With the same test, as it would throw an EmptyStackException, not a full-stack one...
    But the test still doesn't pass.
