Performance testing doubt with XI.

Hi All,
How is performance testing done inside XI?
We need to propose to the testing team the different methods of testing the interfaces that run through XI.
Is there a document available that explains this?
Also, is there any other method of testing for XI?
Thanks.

Hi
How is performance testing done inside XI?
Performance testing can be done by passing a certain number of messages through and measuring how fast those messages are processed in the Integration Engine or BPE.
In the RWB, you can get the readings from "Performance Monitoring".
We need to propose to the testing team the different methods of testing the interfaces that run through XI.
Unit test: done by the developer.
End-to-end test: a functional test of your interface; a basic message is triggered from your sending system, and the final verification point should be on your receiving system.
Load test: a test that stresses the application server to identify bottlenecks:
              e.g. can my system process a single message with a size of 50 MB?
                   if I send 10,000 messages within a very short time period, is my system able to process them without error?
UAT (user acceptance test): conducted by business users, who execute all the business scenarios to see if your interface can handle them without error.
Hope it helps
Liang

Similar Messages

  • ECM performance testing with mercury loadrunner

    Hi all,
A team within our technical divisions is currently trying to record Mercury LoadRunner scripts to performance test the ECM standard functionality that we have implemented on our portal. They are having a lot of problems getting these to work; they are complaining about random errors being generated, HTTP 500 errors, and values that are not passed. Now, I realise this isn't a lot of information, and frankly I don't have much to give you myself as I'm not part of that team. But I was wondering if anyone has run performance tests on ECM with Mercury LoadRunner scripts and was successful. To be honest I don't see what the problem is with recording and running these scripts, but the pressure is on me to try and resolve the issues. They keep saying they've got errors when running their scripts, but I can't see anything in the logs.
Does anyone have any experience of performance testing ECM with LoadRunner?
    Thanks for your time.
    best regards,
    Dion

    Hi James,
thanks for your replies. I don't have access to the scripts as I'm not part of the team. I'm sitting on the SAP side trying to support them, but I can't understand what the problem would be. They claim to have issues with the parameters for the ECM in their scripts, and I was wondering if anyone else has had similar difficulties recording scripts for these components.
    regards,
    Dion

  • Database Performance Testing Tool

    Hi Gurus,
    Can anyone suggest me some Performance Testing Tools with respect to Database Environment?
    The Tools in the Open Source Environment would be preferable.
    Thanks in advance.
    ~Anup

    Hi Anup,
There's a tool called Orion, available on the OTN page, that's used to simulate database I/O activity. Try it!
    Regards,
    Jonathan Ferreira - Brazil
    http://www.ebs11i.com.br
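If the database is 11g or later, DBMS_RESOURCE_MANAGER.CALIBRATE_IO gives a similar in-database I/O test without an external tool. A minimal sketch, assuming 11g, the required privileges, and that asynchronous I/O and timed statistics are enabled (the disk count and latency values below are placeholders):
-- Sketch: measure storage I/O capacity from inside the database (11g+).
SET SERVEROUTPUT ON
DECLARE
  l_max_iops   PLS_INTEGER;
  l_max_mbps   PLS_INTEGER;
  l_actual_lat PLS_INTEGER;
BEGIN
  DBMS_RESOURCE_MANAGER.CALIBRATE_IO(
    num_physical_disks => 4,   -- placeholder: adjust to your spindle count
    max_latency        => 20,  -- acceptable latency in ms
    max_iops           => l_max_iops,
    max_mbps           => l_max_mbps,
    actual_latency     => l_actual_lat);
  DBMS_OUTPUT.PUT_LINE('max_iops = ' || l_max_iops);
  DBMS_OUTPUT.PUT_LINE('max_mbps = ' || l_max_mbps);
  DBMS_OUTPUT.PUT_LINE('actual_latency = ' || l_actual_lat);
END;
/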

  • Performance testing with PetStore

    I'm going to make some performance tests with WebLogic 7.0 and Oracle 8.1.7.
Sun's PetStore is a fairly basic web application, so I decided to use it as the test application. PetStore isn't the best J2EE application, but it suits this test just fine.
    Has anybody already made database creation clauses or scripts?
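For anyone starting from scratch, a hypothetical first step (schema name, password and tablespaces are placeholders, plain Oracle syntax) is to create a schema owner for the PetStore tables before running whatever table scripts you assemble:
-- Hypothetical sketch: create a schema owner for the PetStore tables.
CREATE USER petstore IDENTIFIED BY petstore
  DEFAULT TABLESPACE users
  TEMPORARY TABLESPACE temp
  QUOTA UNLIMITED ON users;
GRANT CREATE SESSION, CREATE TABLE, CREATE SEQUENCE TO petstore;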

          http://www.amazon.com/exec/obidos/ASIN/1904284000/qid%3D1051548860/sr%3D11-1/ref%3Dsr%5F11%5F1/002-8384620-0940857
          "Tony Glaccum" <[email protected]> wrote in message news:3ead5520$[email protected]...
          >
          > Tom Barnes, you mentioned a book "J2EE Performance Testing with BEA WebLogic Server"
          > by Peter Zadrozny, Philip Aston, Ted Osborne. Any idea when this was published
          > - I have searched Amazon plus a few book stores and cannot find it.
          >
          > Thanks

  • UI performance testing of pivot table

    Hi,
I was wondering if anyone could direct me to a tool that I can use to do performance testing on a pivot table. I am populating a pivot table (declaratively) with a data source of over 100,000 cells, and I need to record the browser rendering time of the pivot table using 50 or so parallel threads (requests). I tried running performance tests using JMeter, but that didn't help.
    This is what I tried so far with JMeter:
I deployed the application in the integrated WebLogic server and specified the URL to hit in JMeter ( http://127.0.0.1:7101/PivotTableSample-ViewController-context-root/faces/Sample ), and added a response assertion for the response code 200. Although I am able to hit the URL successfully, the response I get is a JavaScript with a message that says "This is the loopback script to process the url before the real page loads. It introduces a separate round trip". When I checked in Firebug, it looks like a redirect of some sort happens from this JavaScript to another URL (with some randomly generated parameters), which then returns the HTML response of the pivot table. I am unable to hit that URL directly, as I get a message saying "session expired". It seems a redirect happens from the first request, a session is created for that request, and then the redirect occurs.
I am able to check the browser rendering time of the pivot table in Firebug (Net tab), but that is only for a single request. I'd appreciate it if anyone could guide me on this.
    Thanks
    Naveen

I found the link below that explains the configuration of JMeter for performance testing of ADF applications (although I couldn't find a solution for measuring the browser rendering time for parallel threads).
    http://one-size-doesnt-fit-all.blogspot.com/2010/04/configuring-apache-jmeter-specifically.html
    Edited by: Naveen Ramanathan on Oct 3, 2010 10:24 AM

  • Can Web Performance Test work on AJAX or Javascript Project which will show only one URL for all the pages?

    Hi there,
I'm working on testing an AJAX and JavaScript project which has several pages, but all under the same URL. I need to test some attributes on the page and parameters passed by AJAX or JavaScript. Can Web Performance Test do what I want?
    Thanks,
    

    Hello,
    Thank you for your post.
A web performance test is used to test whether a server responds correctly and whether the response is consistent with what we expect, and to test response speed, stability, and scalability.
The Web Performance Test Recorder records both AJAX requests and requests that were submitted from JavaScript, but a web test does not execute JavaScript. I am afraid that you can't use a web test to test parameters passed by AJAX or JavaScript.
    Please see:
    Web Performance Test Engine Overview
    About JavaScript and ActiveX Controls in Web Performance Tests
    From the first link, “Client-side scripting that sets parameter values or results in additional HTTP requests, such as AJAX, does affect the load on the server and might require you to manually modify the Web Performance Test to simulate the scripting.”
If you want to execute functions typically performed by script in a web test, you need to accomplish them in a coded web performance test or a web performance test plug-in. Please see:
How to: Create a Coded Web Performance Test
    How to: Create a Web Performance Test Plug-In
I am not sure what the 'some attributes on the page' are. If you mean that you want to test those controls on the page, you can do a coded UI test, which can test that the user interface for an application functions correctly. A coded UI test performs actions on the user interface controls for an application and verifies that the correct controls are displayed with the correct values. You can refer to this article for detailed information about coded UI tests:
    Verifying Code by Using Coded User Interface Tests
    Best regards,
    Amanda Zhu [MSFT]
    MSDN Community Support | Feedback to us
    Develop and promote your apps in Windows Store
    Please remember to mark the replies as answers if they help and unmark them if they provide no help.

  • Some Thoughts On An OWB Performance/Testing Framework

    Hi all,
I've been giving some thought recently to how we could build a performance tuning and testing framework around Oracle Warehouse Builder. Specifically, I'm looking at ways in which we can use some of the performance tuning techniques described in Cary Millsap/Jeff Holt's book "Optimizing Oracle Performance" to profile and performance tune mappings and process flows, and to use some of the ideas put forward in Kent Graziano's Agile Methods in Data Warehousing paper http://www.rmoug.org/td2005pres/graziano.zip and Steven Feuerstein's utPLSQL project http://utplsql.sourceforge.net/ to provide an agile/test-driven way of developing mappings, process flows and modules. The aim of this is to ensure that the mappings we put together are as efficient as possible, work individually and together as expected, and are quick to develop and test.
At the moment, most people's experience of performance tuning OWB mappings is firstly to see if it runs set-based rather than row-based, then perhaps to extract the main SQL statement and run an explain plan on it, then check to make sure indexes etc. are being used OK. This involves a lot of manual work, doesn't factor in the data available from the wait interface, doesn't store the execution plans anywhere, and doesn't really scale out to encompass entire batches of mappings (process flows).
    For some background reading on Cary Millsap/Jeff Holt's approach to profiling and performance tuning, take a look at http://www.rittman.net/archives/000961.html and http://www.rittman.net/work_stuff/extended_sql_trace_and_tkprof.htm. Basically, this approach traces the SQL that is generated by a batch file (read: mapping) and generates a file that can be later used to replay the SQL commands used, the explain plans that relate to the SQL, details on what wait events occurred during execution, and provides at the end a profile listing that tells you where the majority of your time went during the batch. It's currently the "preferred" way of tuning applications as it focuses all the tuning effort on precisely the issues that are slowing your mappings down, rather than database-wide issues that might not be relevant to your mapping.
For some background information on agile methods, take a look at Kent Graziano's paper, this one on test-driven development http://c2.com/cgi/wiki?TestDrivenDevelopment , this one http://martinfowler.com/articles/evodb.html on agile database development, and the sourceforge project for utPLSQL http://utplsql.sourceforge.net/. What this is all about is having a development methodology that builds in quality but is flexible and responsive to changes in customer requirements. The benefit of using utPLSQL (or any unit testing framework) is that you can automatically check your altered mappings to see that they still return logically correct data, meaning that you can make changes to your data model and mappings whilst still being sure that they'll compile and run.
    Observations On The Current State of OWB Performance Tuning & Testing
    At present, when you build OWB mappings, there is no way (within the OWB GUI) to determine how "efficient" the mapping is. Often, when building the mapping against development data, the mapping executes quickly and yet when run against the full dataset, problems then occur. The mapping is built "in isolation" from its effect on the database and there is no handy tool for determining how efficient the SQL is.
OWB doesn't come with any methodology or testing framework, and so apart from checking that the mapping has run, and that the number of rows inserted/updated/deleted looks correct, there is nothing really to tell you whether there are any "logical" errors. Also, there is no OWB methodology for integration testing, unit testing, or any other sort of testing, and we need to put one in place. Note - OWB does come with auditing, error reporting and so on, but there's no framework for guiding the user through a regime of unit testing, integration testing, system testing and so on, which I would imagine more complete developer GUIs come with. Certainly there's no built-in ability to use testing frameworks such as utPLSQL, or a part of the application that lets you record whether a mapping has been tested, and changes the test status of mappings when you make changes to ones that they are dependent on.
OWB is effectively a code generator, and this code runs against the Oracle database just like any other SQL or PL/SQL code. There is a whole world of information and techniques out there for tuning SQL and PL/SQL, and one particular methodology that we quite like is the Cary Millsap/Jeff Holt "Extended SQL Trace" approach that uses Oracle diagnostic events to find out exactly what went on during the running of a batch of SQL commands. We've been pretty successful using this approach to tune customer applications and batch jobs, and we'd like to use this, together with the "Method R" performance profiling methodology detailed in the book "Optimizing Oracle Performance", as a way of tuning our generated mapping code.
Whilst we want to build performance and quality into our code, we also don't want to overburden developers with an unwieldy development approach, because what we know will happen is that after a short amount of time, it won't get used. Given that we want this framework to be used for all mappings, it's got to be easy to use, cause minimal overhead, and have results that are easy to interpret. If at all possible, we'd like to use some of the ideas from agile methodologies such as eXtreme Programming, SCRUM and so on to build in quality but minimise paperwork.
    We also recognise that there are quite a few settings that can be changed at a session and instance level, that can have an effect on the performance of a mapping. Some of these include initialisation parameters that can change the amount of memory assigned to the instance and the amount of memory subsequently assigned to caches, sort areas and the like, preferences that can be set so that indexes are preferred over table scans, and other such "tweaks" to the Oracle instance we're working with. For reference, the version of Oracle we're going to use to both run our code and store our data is Oracle 10g 10.1.0.3 Enterprise Edition, running on Sun Solaris 64-bit.
    Some initial thoughts on how this could be accomplished
- Put in place some method for automatically / easily generating explain plans for OWB mappings (issue: this is only relevant for mappings that are set-based, and what about pre- and post-mapping triggers?)
    - Put in place a method for starting and stopping an event 10046 extended SQL trace for a mapping
    - Put in place a way of detecting whether the explain plan / cost / timing for a mapping changes significantly
    - Put in place a way of tracing a collection of mappings, i.e. a process flow
    - The way of enabling tracing should either be built in by default, or easily added by the OWB developer. Ideally it should be simple to switch it on or off (perhaps levels of event 10046 tracing?)
    - Perhaps store trace results in a repository? reporting? exception reporting?
- At an instance level, come up with some stock recommendations for instance settings
    - identify the set of instance and session settings that are relevant for ETL jobs, and determine what effect changing them has on the ETL job
    - put in place a regime that records key instance indicators (STATSPACK / ASH) and allows reports to be run / exceptions to be reported
    - Incorporate any existing "performance best practices" for OWB development
- define a lightweight regime for unit testing (as per agile methodologies) and a way of automating it (utPLSQL? see the sketch after this list) and of recording the results so we can check the status of dependent mappings easily
- other ideas around testing?
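To make the utPLSQL item concrete, here is a minimal sketch of what a classic (Feuerstein-style) utPLSQL test package for a mapping might look like; the package, table and test names are hypothetical, not from any real project:
-- Sketch of a classic utPLSQL test package (hypothetical names).
-- The framework finds ut_setup / ut_teardown / ut_* procedures by convention.
CREATE OR REPLACE PACKAGE ut_load_sales AS
  PROCEDURE ut_setup;
  PROCEDURE ut_teardown;
  PROCEDURE ut_row_counts_match;
END ut_load_sales;
/
CREATE OR REPLACE PACKAGE BODY ut_load_sales AS
  PROCEDURE ut_setup IS
  BEGIN
    NULL; -- e.g. load a known source dataset, then execute the mapping
  END;
  PROCEDURE ut_teardown IS
  BEGIN
    NULL; -- e.g. remove the test rows again
  END;
  PROCEDURE ut_row_counts_match IS
  BEGIN
    -- assert that source and target row counts agree after the mapping ran
    utAssert.eqquery('target row count matches source',
                     'SELECT COUNT(*) FROM sales_target',
                     'SELECT COUNT(*) FROM sales_source');
  END;
END ut_load_sales;
/
-- Run it with: EXEC utPLSQL.test('load_sales');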
    Suggested Approach
- For mapping tracing and generation of explain plans, a pre- and post-mapping trigger that turns extended SQL trace on and off, places the trace file in a predetermined spot, formats the trace file and dumps the output to repository tables (see the trace sketch after this list).
    - For process flows, something that does the same at the start and end of the process. Issue - how might this conflict with mapping level tracing controls?
    - Within the mapping/process flow tracing repository, store the values of historic executions, have an exception report that tells you when a mapping execution time varies by a certain amount
    - get the standard set of preferred initialisation parameters for a DW, use these as the start point for the stock recommendations. Identify which ones have an effect on an ETL job.
    - identify the standard steps Oracle recommends for getting the best performance out of OWB (workstation RAM etc) - see OWB Performance Tips http://www.rittman.net/archives/001031.html and Optimizing Oracle Warehouse Builder Performance http://www.oracle.com/technology/products/warehouse/pdf/OWBPerformanceWP.pdf
    - Investigate what additional tuning options and advisers are available with 10g
    - Investigate the effect of system statistics & come up with recommendations.
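As a starting point for the trace on/off part of the above, a pre-mapping step could enable extended SQL trace and a post-mapping step disable it. A minimal sketch, assuming 10g's DBMS_MONITOR, that the mapping runs in the calling session, and that the trace file identifier below is a placeholder:
-- Pre-mapping step: switch on extended SQL trace (10046-style, with waits).
BEGIN
  EXECUTE IMMEDIATE
    'ALTER SESSION SET TRACEFILE_IDENTIFIER = ''OWB_MAP_TRACE''';
  DBMS_MONITOR.SESSION_TRACE_ENABLE(waits => TRUE, binds => FALSE);
END;
/
-- Post-mapping step: switch tracing off again.
BEGIN
  DBMS_MONITOR.SESSION_TRACE_DISABLE;
END;
/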
    Further reading / resources:
- "Diagnosing Performance Problems Using Extended Trace" Cary Millsap
    http://otn.oracle.com/oramag/oracle/04-jan/o14tech_perf.html
    - "Performance Tuning With STATSPACK" Connie Dialeris and Graham Wood
    http://www.oracle.com/oramag/oracle/00-sep/index.html?o50tun.html
    - "Performance Tuning with Statspack, Part II" Connie Dialeris and Graham Wood
    http://otn.oracle.com/deploy/performance/pdf/statspack_tuning_otn_new.pdf
    - "Analyzing a Statspack Report: A Guide to the Detail Pages" Connie Dialeris and Graham Wood
    http://www.oracle.com/oramag/oracle/00-nov/index.html?o60tun_ol.html
    - "Why Isn't Oracle Using My Index?!" Jonathan Lewis
    http://www.dbazine.com/jlewis12.shtml
    - "Performance Tuning Enhancements in Oracle Database 10g" Oracle-Base.com
    http://www.oracle-base.com/articles/10g/PerformanceTuningEnhancements10g.php
    - Introduction to Method R and Hotsos Profiler (Cary Millsap, free reg. required)
    http://www.hotsos.com/downloads/registered/00000029.pdf
    - Exploring the Oracle Database 10g Wait Interface (Robin Schumacher)
    http://otn.oracle.com/pub/articles/schumacher_10gwait.html
    - Article referencing an OWB forum posting
    http://www.rittman.net/archives/001031.html
    - How do I inspect error logs in Warehouse Builder? - OWB Exchange tip
    http://www.oracle.com/technology/products/warehouse/pdf/Cases/case10.pdf
    - What is the fastest way to load data from files? - OWB exchange tip
    http://www.oracle.com/technology/products/warehouse/pdf/Cases/case1.pdf
    - Optimizing Oracle Warehouse Builder Performance - Oracle White Paper
    http://www.oracle.com/technology/products/warehouse/pdf/OWBPerformanceWP.pdf
    - OWB Advanced ETL topics - including sections on operating modes, partition exchange loading
    http://www.oracle.com/technology/products/warehouse/selfserv_edu/advanced_ETL.html
    - Niall Litchfield's Simple Profiler (a creative commons-licensed trace file profiler, based on Oracle Trace Analyzer, that displays the response time profile through HTMLDB. Perhaps could be used as the basis for the repository/reporting part of the project)
    http://www.niall.litchfield.dial.pipex.com/SimpleProfiler/SimpleProfiler.html
- Welcome to the utPLSQL Project - a PL/SQL unit testing framework by Steven Feuerstein. Could be useful for automating the process of unit testing mappings.
    http://utplsql.sourceforge.net/
    Relevant postings from the OTN OWB Forum
    - Bulk Insert - Configuration Settings in OWB
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=291269&tstart=30&trange=15
    - Default Performance Parameters
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=213265&message=588419&q=706572666f726d616e6365#588419
    - Performance Improvements
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=270350&message=820365&q=706572666f726d616e6365#820365
    - Map Operator performance
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=238184&message=681817&q=706572666f726d616e6365#681817
    - Performance of mapping with FILTER
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=273221&message=830732&q=706572666f726d616e6365#830732
    - Poor mapping performance
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=275059&message=838812&q=706572666f726d616e6365#838812
    - Optimizing Mapping Performance With OWB
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=269552&message=815295&q=706572666f726d616e6365#815295
    - Performance of the OWB-Repository
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=66271&message=66271&q=706572666f726d616e6365#66271
    - One large JOIN or many small ones?
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=202784&message=553503&q=706572666f726d616e6365#553503
    - NATIVE PL SQL with OWB9i
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=270273&message=818390&q=706572666f726d616e6365#818390
    Next Steps
Although this is something that I'll be progressing with anyway, I'd appreciate any comment from existing OWB users as to how they currently perform performance tuning and testing. Whilst these are perhaps two distinct subject areas, they can be thought of as the core of an "OWB Best Practices" framework, and I'd be prepared to write the results up as a freely downloadable whitepaper. With this in mind, do you have any existing best practices for tuning or testing, have you tried using SQL trace and TKPROF to profile mappings and process flows, or have you used a unit testing framework such as utPLSQL to automatically test the set of mappings that make up your project?
    Any feedback, add it to this forum posting or send directly through to me at [email protected]. I'll report back on a proposed approach in due course.

    Hi Mark,
    interesting post, but I think you may be focusing on the trees, and losing sight of the forest.
    Coincidentally, I've been giving quite a lot of thought lately to some aspects of your post. They relate to some new stuff I'm doing. Maybe I'll be able to answer in more detail later, but I do have a few preliminary thoughts.
    1. 'How efficient is the generated code' is a perennial topic. There are still some people who believe that a code generator like OWB cannot be in the same league as hand-crafted SQL. I answered that question quite definitely: "We carefully timed execution of full-size runs of both the original code and the OWB versions. Take it from me, the code that OWB generates is every bit as fast as the very best hand-crafted and fully tuned code that an expert programmer can produce."
    The link is http://www.donnapkelly.pwp.blueyonder.co.uk/generated_code.htm
    That said, it still behooves the developer to have a solid understanding of what the generated code will actually do, such as how it will take advantage of indexes, and so on. If not, the developer can create such monstrosities as lookups into an un-indexed field (I've seen that).
2. The real issue is not how fast any particular generated mapping runs, but whether or not the system as a whole is fit for purpose. Most often, that means: does it fit within its batch update window? My technique is to dump the process flow into Microsoft Project, and then to add the timings for each process. That creates a critical path, and then I can visually inspect it for any bottleneck processes. I usually find that there are no more than one or two dogs. I'll concentrate on those, fix them, and re-do the flow timings. I would add this: the dogs I have seen, I have invariably replaced. They were just garbage; they did not need tuning at all, just scrapping.
    Gee, but this whole thing is minimum effort and real fast! I generally figure that it takes maybe a day or two (max) to soup up system performance to the point where it whizzes.
Fact is, I don't really care whether there are a lot of sub-optimal processes. All I really care about is performance of the system as a whole. This technique seems to work for me. 'Course, it depends on architecting the thing properly in the first place. Otherwise, no amount of tuning is going to help worth a darn.
    Conversely (re. my note about replacing dogs) I do not think I have ever tuned a piece of OWB-generated code. Never found a need to. Not once. Not ever.
That's not to say I do not recognise the value of playing with deployment configuration parameters. Obviously, I set auditing=none, and operating mode=set based, and sometimes I play with a couple of different target environments to fool around with partitioning, for example. Nonetheless, if it is not a switch or a knob inside OWB, I do not touch it. This is in line with my diktat that you shall use no other tool than OWB to develop data warehouses. (And that includes all documentation!) (OK, I'll accept MS Project.)
Finally, you raise the concept of a 'testing framework'. This is a major part of what I am working on at the moment. This is a tough one. Clearly, the developer must unit test each mapping in a design-model-deploy-execute cycle, paying attention to both functionality and performance. When the developer is satisfied, that mapping will be marked as 'done' in the project workbook. Mappings will form part of a stream, executed as a process flow. Each process flow will usually terminate in a dimension, a fact, or an aggregate. Each process flow will be tested as an integrated whole. There will be test strategies devised, and test cases constructed. There will finally be system tests, to verify the validity of the system as a production-grade whole (stuff like recovery/restart, late-arriving data, and so on).
For me, I use EDM (TM). That's the methodology I created (and trademarked) twenty years ago: Evolutionary Development Methodology (TM). This is a spiral methodology based around prototyping cycles within Stage cycles within Release cycles. For OWB, a Stage would consist (say) of a dimensional update. What I am trying to do now is to graft this onto a traditional waterfall methodology, and I am having the same difficulties I had when I tried to do it then.
    All suggestions on how to do that grafting gratefully received!
To sum up, I'm kinda at a loss as to why you want to go deep into OWB-generated code performance stuff. Jeepers, architect the thing right, and the code runs fast enough for anyone. I've worked on ultra-large OWB systems, including validating the largest data warehouse in the UK. I've never found any value in 'tuning' the code. What I'd like you to comment on is this: what will it buy you?
    Cheers,
    Donna
    http://www.donnapkelly.pwp.blueyonder.co.uk

  • Log file sync top event during performance test -av 36ms

    Hi,
During the performance test for our product before deployment into production, I see "log file sync" on top, with the avg wait (ms) being 36, which I feel is too high.
                                                               Avg
                                                              wait   % DB
    Event                                 Waits     Time(s)   (ms)   time Wait Class
    log file sync                       208,327       7,406     36   46.6 Commit
    direct path write                   646,833       3,604      6   22.7 User I/O
    DB CPU                                            1,599          10.1
    direct path read temp             1,321,596         619      0    3.9 User I/O
log buffer space                      4,161         558    134    3.5 Configurat
Although the testers are not complaining about the performance of the application, we DBAs are expected to be proactive about any bad signals from the DB.
    I am not able to figure out why "log file sync" is having such slow response.
    Below is the snapshot from the load profile.
                  Snap Id      Snap Time      Sessions Curs/Sess
    Begin Snap:    108127 16-May-13 20:15:22       105       6.5
      End Snap:    108140 16-May-13 23:30:29       156       8.9
       Elapsed:              195.11 (mins)
       DB Time:              265.09 (mins)
    Cache Sizes                       Begin        End
    ~~~~~~~~~~~                  ---------- ----------
                   Buffer Cache:     1,168M     1,136M  Std Block Size:         8K
               Shared Pool Size:     1,120M     1,168M      Log Buffer:    16,640K
    Load Profile              Per Second    Per Transaction   Per Exec   Per Call
    ~~~~~~~~~~~~         ---------------    --------------- ---------- ----------
          DB Time(s):                1.4                0.1       0.02       0.01
           DB CPU(s):                0.1                0.0       0.00       0.00
           Redo size:          607,512.1           33,092.1
       Logical reads:            3,900.4              212.5
       Block changes:            1,381.4               75.3
      Physical reads:              134.5                7.3
    Physical writes:              134.0                7.3
          User calls:              145.5                7.9
              Parses:               24.6                1.3
         Hard parses:                7.9                0.4
    W/A MB processed:          915,418.7           49,864.2
              Logons:                0.1                0.0
            Executes:               85.2                4.6
           Rollbacks:                0.0                0.0
    Transactions:               18.4
Some of the top background wait events:
Background Wait Events       DB/Inst: Snaps: 108127-108140
    -> ordered by wait time desc, waits desc (idle events last)
    -> Only events with Total Wait Time (s) >= .001 are shown
    -> %Timeouts: value of 0 indicates value was < .5%.  Value of null is truly 0
                                                                 Avg
                                            %Time Total Wait    wait    Waits   % bg
    Event                             Waits -outs   Time (s)    (ms)     /txn   time
    log file parallel write         208,563     0      2,528      12      1.0   66.4
    db file parallel write            4,264     0        785     184      0.0   20.6
    Backup: sbtbackup                     1     0        516  516177      0.0   13.6
    control file parallel writ        4,436     0         97      22      0.0    2.6
    log file sequential read          6,922     0         95      14      0.0    2.5
    Log archive I/O                   6,820     0         48       7      0.0    1.3
    os thread startup                   432     0         26      60      0.0     .7
    Backup: sbtclose2                     1     0         10   10094      0.0     .3
    db file sequential read           2,585     0          8       3      0.0     .2
    db file single write                560     0          3       6      0.0     .1
    log file sync                        28     0          1      53      0.0     .0
    control file sequential re       36,326     0          1       0      0.2     .0
    log file switch completion            4     0          1     207      0.0     .0
    buffer busy waits                     5     0          1     116      0.0     .0
    LGWR wait for redo copy             924     0          1       1      0.0     .0
    log file single write                56     0          1       9      0.0     .0
Backup: sbtinfo2                      1     0          1     500      0.0     .0
During a previous perf test, things didn't look this bad for "log file sync". A few sections from the comparison report (awrddrpt.sql):
    {code}
    Workload Comparison
    ~~~~~~~~~~~~~~~~~~~ 1st Per Sec 2nd Per Sec %Diff 1st Per Txn 2nd Per Txn %Diff
    DB time: 0.78 1.36 74.36 0.02 0.07 250.00
    CPU time: 0.18 0.14 -22.22 0.00 0.01 100.00
    Redo size: 573,678.11 607,512.05 5.90 15,101.84 33,092.08 119.13
    Logical reads: 4,374.04 3,900.38 -10.83 115.14 212.46 84.52
    Block changes: 1,593.38 1,381.41 -13.30 41.95 75.25 79.38
    Physical reads: 76.44 134.54 76.01 2.01 7.33 264.68
    Physical writes: 110.43 134.00 21.34 2.91 7.30 150.86
    User calls: 197.62 145.46 -26.39 5.20 7.92 52.31
    Parses: 7.28 24.55 237.23 0.19 1.34 605.26
    Hard parses: 0.00 7.88 100.00 0.00 0.43 100.00
    Sorts: 3.88 4.90 26.29 0.10 0.27 170.00
    Logons: 0.09 0.08 -11.11 0.00 0.00 0.00
    Executes: 126.69 85.19 -32.76 3.34 4.64 38.92
    Transactions: 37.99 18.36 -51.67
Top Timed Events, first test vs second test:
First test:
  Event                           Wait Class   Waits       Time(s)   Avg Time(ms)   %DB time
  SQL*Net more data from client   Network      2,133,486   1,270.7   0.6            61.24
  CPU time                        N/A                      487.1     N/A            23.48
  log file sync                   Commit       99,459      129.5     1.3            6.24
  log file parallel write         System I/O   100,732     126.6     1.3            6.10
  SQL*Net more data to client     Network      451,810     103.1     0.2            4.97
  -direct path write              User I/O     121,044     52.5      0.4            2.53
  -db file parallel write         System I/O   986         22.8      23.1           1.10
Second test:
  Event                           Wait Class   Waits       Time(s)   Avg Time(ms)   %DB time
  log file sync                   Commit       208,355     7,407.6   35.6           46.57
  direct path write               User I/O     646,849     3,604.7   5.6            22.66
  log file parallel write         System I/O   208,564     2,528.4   12.1           15.90
  CPU time                        N/A                      1,599.3   N/A            10.06
  db file parallel write          System I/O   4,264       784.7     184.0          4.93
  -SQL*Net more data from client  Network      7,407,435   279.7     0.0            1.76
  -SQL*Net more data to client    Network      2,714,916   64.6      0.0            0.41
    {code}
To sum it up:
1. Why is the I/O response taking such a hit during the new perf test? Please suggest.
2. Does the number of DB writers impact the "log file sync" wait event? We have only one DB writer, as the number of CPUs on the host is only 4.
    {code}
select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    PL/SQL Release 11.1.0.7.0 - Production
    CORE 11.1.0.7.0 Production
    TNS for HPUX: Version 11.1.0.7.0 - Production
    NLSRTL Version 11.1.0.7.0 - Production
    {code}
    Please let me know if you would like to see any other stats.
    Edited by: Kunwar on May 18, 2013 2:20 PM

    1. A snapshot interval of 3 hours always generates meaningless results
    Below are some details from the 1 hour interval AWR report.
    Platform                         CPUs Cores Sockets Memory(GB)
    HP-UX IA (64-bit)                   4     4       3      31.95
                  Snap Id      Snap Time      Sessions Curs/Sess
    Begin Snap:    108129 16-May-13 20:45:32       140       8.0
      End Snap:    108133 16-May-13 21:45:53       150       8.8
       Elapsed:               60.35 (mins)
       DB Time:              140.49 (mins)
    Cache Sizes                       Begin        End
    ~~~~~~~~~~~                  ---------- ----------
                   Buffer Cache:     1,168M     1,168M  Std Block Size:         8K
               Shared Pool Size:     1,120M     1,120M      Log Buffer:    16,640K
    Load Profile              Per Second    Per Transaction   Per Exec   Per Call
    ~~~~~~~~~~~~         ---------------    --------------- ---------- ----------
          DB Time(s):                2.3                0.1       0.03       0.01
           DB CPU(s):                0.1                0.0       0.00       0.00
           Redo size:          719,553.5           34,374.6
       Logical reads:            4,017.4              191.9
       Block changes:            1,521.1               72.7
      Physical reads:              136.9                6.5
    Physical writes:              158.3                7.6
          User calls:              167.0                8.0
              Parses:               25.8                1.2
         Hard parses:                8.9                0.4
    W/A MB processed:          406,220.0           19,406.0
              Logons:                0.1                0.0
            Executes:               88.4                4.2
           Rollbacks:                0.0                0.0
        Transactions:               20.9
    Top 5 Timed Foreground Events
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                                               Avg
                                                              wait   % DB
    Event                                 Waits     Time(s)   (ms)   time Wait Class
    log file sync                        73,761       6,740     91   80.0 Commit
    log buffer space                      3,581         541    151    6.4 Configurat
    DB CPU                                              348           4.1
    direct path write                   238,962         241      1    2.9 User I/O
    direct path read temp               487,874         174      0    2.1 User I/O
    Background Wait Events       DB/Inst: Snaps: 108129-108133
    -> ordered by wait time desc, waits desc (idle events last)
    -> Only events with Total Wait Time (s) >= .001 are shown
    -> %Timeouts: value of 0 indicates value was < .5%.  Value of null is truly 0
                                                                 Avg
                                            %Time Total Wait    wait    Waits   % bg
    Event                             Waits -outs   Time (s)    (ms)     /txn   time
    log file parallel write          61,049     0      1,891      31      0.8   87.8
    db file parallel write            1,590     0        251     158      0.0   11.6
    control file parallel writ        1,372     0         56      41      0.0    2.6
    log file sequential read          2,473     0         50      20      0.0    2.3
    Log archive I/O                   2,436     0         20       8      0.0     .9
    os thread startup                   135     0          8      60      0.0     .4
    db file sequential read             668     0          4       6      0.0     .2
    db file single write                200     0          2       9      0.0     .1
    log file sync                         8     0          1     152      0.0     .1
    log file single write                20     0          0      21      0.0     .0
    control file sequential re       11,218     0          0       0      0.1     .0
    buffer busy waits                     2     0          0     161      0.0     .0
    direct path write                     6     0          0      37      0.0     .0
    LGWR wait for redo copy             380     0          0       0      0.0     .0
    log buffer space                      1     0          0      89      0.0     .0
latch: cache buffers lru c            3     0          0       1      0.0     .0
2. The log file sync is a result of commit --> you are committing too often, maybe even every individual record.
Thanks for the explanation. Actually my question is WHY it is so slow (avg wait of 91 ms).
3. Your IO subsystem hosting the online redo log files can be a limiting factor. We don't know anything about your online redo log configuration.
Below is my redo log configuration.
        GROUP# STATUS  TYPE    MEMBER                                                       IS_
             1         ONLINE  /oradata/fs01/PERFDB1/redo_1a.log                           NO
             1         ONLINE  /oradata/fs02/PERFDB1/redo_1b.log                           NO
             2         ONLINE  /oradata/fs01/PERFDB1/redo_2a.log                           NO
             2         ONLINE  /oradata/fs02/PERFDB1/redo_2b.log                           NO
             3         ONLINE  /oradata/fs01/PERFDB1/redo_3a.log                           NO
             3         ONLINE  /oradata/fs02/PERFDB1/redo_3b.log                           NO
    6 rows selected.
    04:13:14 perf_monitor@PERFDB1> col FIRST_CHANGE# for 999999999999999999
04:13:26 perf_monitor@PERFDB1> select * from v$log;
        GROUP#    THREAD#  SEQUENCE#      BYTES    MEMBERS ARC STATUS                 FIRST_CHANGE# FIRST_TIME
             1          1      40689  524288000          2 YES INACTIVE              13026185905545 18-MAY-13 01:00
             2          1      40690  524288000          2 YES INACTIVE              13026185931010 18-MAY-13 03:32
         3          1      40691  524288000          2 NO  CURRENT               13026185933550 18-MAY-13 04:00
Edited by: Kunwar on May 18, 2013 2:46 PM
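One quick way to see whether those 91 ms averages come from a uniformly slow redo device or from a few very long outliers is the wait-event histogram; a small query sketch against the standard 10g/11g dynamic view:
-- Sketch: latency distribution of log file sync since instance startup.
SELECT wait_time_milli, wait_count
  FROM v$event_histogram
 WHERE event = 'log file sync'
 ORDER BY wait_time_milli;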

  • Crane Aerospace and electronics is looking for Test Engineers with LabVIEW experience - please disregard previous post.

    Here is the correct post:
    Are you detail-oriented, creative, and technically skilled at Engineering design and development?  Come to Crane Aerospace & Electronics and use your excellent Engineering skills to design, improve, and deliver the next generation of products in the aerospace and electronics Industry!
    We have a unique and exciting career opportunity for Engineer II, Test.
    You will be responsible for maximizing new product development and manufacturing performance through the creation and deployment of test strategies, tools, and plans.  Design and implement high performance hardware and software for test equipment.  Authoring test procedures and performing Qualification test activities.  Ensure high product quality.
    Responsibilities:
    Collaborate with customers and multi-disciplined engineers to establish/clarify test, qualification, verification and validation requirements.
    Write test plans, procedures, requirements and reports in a highly structured environment.
    Analyze, develop and deploy complex and high performance test hardware and software solutions for automated test equipment. 
    Design, develop, debug, validate & verify the fabrication of manual and automated test equipment at the circuit board and system level, and specify and procure COTS test equipment.
    Develop/maintain hardware documentation including block diagrams, schematics, BOMs, wiring diagrams and wiring lists, software documentation, and configuration control of initial release and updates. 
    Perform detailed calculations to establish test equipment specifications and design margins.
    Maintain existing test systems through bug fixes, improvements and modifications.
    Support the estimation of costs and schedules to develop or upgrade test platforms.
    To perform a number of the above responsibilities with limited supervision.
    Minimum Requirements:
    Experience: 2-5 years.  Previous work experience in aerospace, space or medical electronics industry preferred.
    Knowledge: Microprocessor / Microcontroller hardware and firmware design; Analog Circuit and power supply design; Digital Circuit Design including high-speed serial communication design; Firmware programming in c; Schematic Capture, PADS Logic preferred; Circuit Simulation; Fundamentals of magnetic proximity, temperature, and pressure sensing electronics; ESD; Familiarity with testing standards (MIL-810, MIL-704, and DO-160 preferred).  Basic laboratory test equipment; LabVIEW experience, certification preferred; Developing hardware per DO-254 and software per DO-178 preferred; Experience with Adobe FrameMaker, IBM Rational tools, TestStand, Microsoft Project preferred.
    Skills: Good interpersonal and communication skills (verbal and written)- effectively lead and/or participate in multifunctional teams in a dynamic work environment. Ability to manage multiple tasks, flexibility to switch between tasks and prioritize tasks. 
    Education/Certification: Bachelors Degree in electrical engineering, computer science, physics or related technical discipline.
    Eligibility Requirement: Must be a US Person (under ITAR rules) to be eligible.
    Working Conditions:
    Working conditions are normal for an office/manufacturing environment. Machinery operation requires the use of safety equipment to include but not limited to safety glasses, heel straps, and shop coats.
    Requires lifting 25 lbs
    Apply online today: http://ch.tbe.taleo.net/CH06/ats/careers/requisition.jsp?org=CRANEAE&cws=5&rid=3170
    Crane Aerospace & Electronics offers competitive salaries and outstanding opportunities for career growth and development.  Visit our website at CraneAE.com for more information on our company, benefits and great opportunities.
    In our efforts to maintain a safe and drug-free workplace, Crane Aerospace & Electronics requires that candidates complete a satisfactory background check and pass a drug screen prior to employment.  FAA sensitive positions require employees to participate in a random drug test pool.

How can you say you are hiring test engineers with LabVIEW, yet the job description doesn't even mention LabVIEW? All I see in there is CAD design.
    There are only two ways to tell somebody thanks: Kudos and Marked Solutions
    Unofficial Forum Rules and Guidelines

  • Need to install Visual C++ 2010 in MDT before performance testing.

    Hi all,
I'm in a situation in which I'm deploying a laptop over MDT, and the display driver is captured by MDT fine. However, upon initial boot after installing the OS, I come across this error when running WINSAT.exe: "The program can't start because MSVCR100.dll is missing from your computer. Try reinstalling the program to fix the problem." This isn't the first report I've seen of ATI drivers triggering this, but I'm stuck with it regardless.
    This is attributed to a component of Visual C++ 2010 not being installed. (It's not preinstalled in our capture, and I'm trying to avoid having to do that again.)
Being that we're a zero-touch organization when imaging, I need a way to remedy this. I'm currently trying to run the install, 'vcredist_x86.exe /q:a', from "Run a command line" (sourced on the server's C:\ drive) before the performance tests, but I can't find a proper place to put this command in the task sequence. Is this even a viable method? Is there a way to skip WINSAT.exe pre-boot? My initial tests with this aren't working.
    Any advice or pointers appreciated! 

    Hi,
Normally I think WinSAT is only relevant when deploying the image to your targeted computers (hardware), so why not integrate the Visual C++ components into your reference image?
At the customer I'm currently working for, we also put all our Visual C++ installations in the so-called Build image.
    I have provided a list with the install commands for the various programs.
    Visual C++ 2005 SP1 ATL Security Update x64
    msiexec /i "vcredist.msi" /qb
    Visual C++ 2005 SP1 ATL Security Update x86
    msiexec /i "vcredist.msi" /qb
    Visual C++ 2005 SP1 MFC Security Update x64
    msiexec /i "vcredist.msi" /qb
    Visual C++ 2005 SP1 MFC Security Update x86
    msiexec /i "vcredist.msi" /qb
    Visual C++ 2008 SP1 ATL Security Update x64
    install.exe /q
    Visual C++ 2008 SP1 ATL Security Update x86
    install.exe /q
    Visual C++ 2008 SP1 MFC Security Update x64
    install.exe /q
    Visual C++ 2008 SP1 MFC Security Update x86
    install.exe /q
    Visual_C_2005_SP1_x64_8_0_56336_EN_M1
    msiexec /i "vcredist.msi" /qb
    Visual_C_2005_SP1_x64_8_0_59192_EN_M1
    msiexec /i "vcredist.msi" /qb
    Visual_C_2005_SP1_x86_8_0_50727_42_EN_M1
    msiexec /i "vcredist.msi" /qb
    Visual_C_2005_SP1_x86_8_0_59193_EN_M1
    msiexec /i "vcredist.msi" /qb
    Visual_C_2008_SP1_x64_9_0_30729_17_EN_M1
    msiexec /i "vc_red.msi" /qb
    Visual_C_2008_SP1_x64_9_0_30729_4148_EN_M1
    msiexec /i "vc_red.msi" /qb
    Visual_C_2008_SP1_x86_9_0_210022_EN_M1
    install.exe /q
    Visual_C_2008_SP1_x86_9_0_30729_17_EN_M1
    msiexec /i "vc_red.msi" /qb
    Visual_C_2008_SP1_x86_9_0_30729_4148_EN_M1
    msiexec /i "vc_red.msi" /qb
    Visual_C_2010_x64_10_0_40219_EN_M1
    msiexec.exe /i "vc_red.msi" /qb
    Visual_C_2010_x86_10_0_40219_EN_M1
    msiexec.exe /i "vc_red.msi" /qb
Trying to install Visual C++ and then run WinSAT post OS installation will not work.
    If this post is helpful please click "Mark for answer", thanks! Kind regards

  • FORMS CRASHES (FRM-92101) ON AS 10.1.2.0.2 DURING LOAD PERFORMANCE TESTING

    Hiya
We have been doing load performance testing using the testing tool QALoad on our Forms 10g application. After about 56 virtual users (sessions) have logged into our application, if a new user tries to log in, Forms crashes. As soon as we encounter the FRM-92101 error, no more new Forms sessions are able to start.
The load testing software starts up each process very quickly, about every 10 seconds.
    The very first form that appears is the login form of our application. So before the login screen appears, we get FRM-92101 error message.
However, those users who have already logged into our application are able to carry on with their tasks.
We are using Application Server 10g 10.1.2.0.2. I have checked the status of the Application Server through the Oracle Enterprise Manager Console. The OC4J instance is up and running. Also, the server's configuration is pretty good: it is running on 2 CPUs (AMD Opteron 3 GHz) and has 32 GB of memory. The memory used by those 56 sessions is less than 3 GB.
The Application Server is running on Microsoft Windows Server 2003 64-bit Enterprise Edition.
    Any help will be much appreciated.
    Cheers
    Mayur

    Hi Shekhawat
    In Windows Registry go to
    HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Session Manager\SubSystems
    In the right hand side panel, you will find String Value as Windows. Now double click on it (Windows). In the pop up window you will see a string similar to the following one:
    %SystemRoot%\system32\csrss.exe ObjectDirectory=\Windows SharedSection=1024,20480,768 Windows=On SubSystemType=Windows ServerDll=basesrv,1 ServerDll=winsrv:UserServerDllInitialization,3 ServerDll=winsrv:ConServerDllInitialization,2 ProfileControl=Off MaxRequestThreads=16
Now, if you read the above string carefully, you will find this parameter:
    SharedSection=1024,20480,768
    Here SharedSection specifies the system and desktop heaps using the following format:
    SharedSection=xxxx,yyyy,zzzz
    The default values are 1024,3072,512
    All the values are in Kilobytes (KB)
    xxxx = System-wide Heapsize. There is no need to modify this value.
    yyyy = IO Desktop Heapsize. This is the heap for memory objects in the IO Desktop.
    zzzz = Non-IO Desktop Heapsize. This is the heap for memory objects in the Non-IO Desktop.
    On our server the values were as follows :
    1024,20480,768
We changed the size of the Non-IO desktop heapsize from 768 to 5112. With 5112 KB we managed to test our application with up to 495 virtual users.
    Cheers
    Mayur

  • SQL Developer Unit Testing - Validation with PL/SQL

    Hi,
I am trying to create unit tests using the SQL Developer UT framework.
But when I create a validation using the "User PL/SQL Code" option,
how can I check the value returned by l_count in the code snippet below?
-- Please raise an exception if the validation fails.
-- For example:
DECLARE
  l_count     NUMBER;
  wrong_count EXCEPTION;
BEGIN
  SELECT COUNT(*)
    INTO l_count
    FROM test_recon
   WHERE match_num = 99836936
     AND stg_status_flag <> 'E';
  IF l_count = 0 THEN
    RAISE wrong_count;
  END IF;
END;
Also, can someone please refer me to a few more demo examples (apart from the Oracle docs) of implementing good test cases with SQL Developer?
    I appreciate your help.
    Regards
    Dipali

Probably not the answer you're looking for, but back when I was playing around with the unit test stuff, I didn't have SYS privileges, and the DBAs were a little busy at the time to set up a repository for me. Rather than wait, I installed Oracle XE on my machine, created a small dev schema, and deployed unit tests to that. It's so much easier to perform quick proofs of concept when you have full control.
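While debugging a validation like the one above, it can also help to run the same logic standalone and print l_count before deciding whether to raise; a minimal sketch using the table and predicate from the original snippet:
-- Sketch: exercise the validation logic outside the UT framework.
SET SERVEROUTPUT ON
DECLARE
  l_count NUMBER;
BEGIN
  SELECT COUNT(*)
    INTO l_count
    FROM test_recon
   WHERE match_num = 99836936
     AND stg_status_flag <> 'E';
  DBMS_OUTPUT.PUT_LINE('l_count = ' || l_count);  -- inspect the value
  IF l_count = 0 THEN
    RAISE_APPLICATION_ERROR(-20001, 'Validation failed: no matching rows');
  END IF;
END;
/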

  • How to create  a test plan with specific transactions (or program)

    Hello,
    I'm a new user in Sol Man !
How do I create a test plan with specific transactions (or programs)?
In my Business Blueprint (SOLAR01) I've entered the names of my specific transactions in the 'transaction' tab and linked them.
In my test plan (STWB_2) those specific transactions don't appear for selection!
    Thanks in advance.
    Georges HUYNEN

Hi
In SOLAR01 you have defined the transactions, but you also have to assign the test case in SOLAR02, in the test cases tab.
When you do so, expand the business scenario node in the test plan generation of the STWB_2 transaction, and it will appear.
    Also visit my weblog
    /people/community.user/blog/2006/12/07/organize-and-perform-testing-using-solution-manager
    please reward points.

  • JDK Performance tests...interesting results...

In an effort to try and eke out as much performance from a Java application as possible, I decided to conduct a little experiment on various JDKs on Sun Solaris 8. What I found was very interesting, and I thought I would share this with the group.
I tested 2 main areas, class generation and number crunching. I wrote a little application that does a series of tests a multitude of times, timing each one and the overall run, and reporting the time. I tested the 1.2.2, 1.3.1, and 1.4.0 (both 32-bit and 64-bit) JDKs. Here is what I found...
    Class Generation
There has been this argument at the water cooler for some time that cloning an object is faster than creating a new one. I created two tests: one that creates 1000 objects 10000 times using the constructor; the other creates a single "default" object, then clones 1000 objects 10000 times using the Object.clone() method. The two methods are identical, except that the cloning requires a try/catch block around the clone() call and it creates the default object in its constructor (both use a home-grown class called DummyClass, which implements java.lang.Cloneable). All classes were compiled with JDK 1.3.1.
What I found is that cloning was about 26.16% slower than actually creating the object, finishing on average in 295822.133ms vs. 218438.8ms. The slowest performer overall was the JDK 1.4.0 64-bit, and the fastest was the 1.3.1 with a -server flag. Here is the chart:
    JDK                             Inst.                Clone
    1.3.1                           232874.667           280938.333
    1.3.1, -server                  190872.000           252238.000
    1.4.0                           206234.333           340177.667
    1.4.0 64-bit                    231025.333           302989.333
1.4.0 64-bit, -server           231188.667           302767.333
JDK Performance
In this test I pitted the 1.2.2, 1.3.1, and 1.4.0 against one another. This uses the same class generation test as above, plus a Fibonacci number test to perform a calculation-intensive test. I also tested the code compiled by different means. Here is the chart (it is a link, because the chart is pretty large):
    http://www.phuongphoto.com/jdk_tests/
Ironically, the 1.2.2 outperformed the other JDKs hands down. The only explanation I have is that the 1.2.2 is running in native threads, and I cannot figure out how to turn that off in the 1.2.2, nor turn it on in the other JDK versions.
Another interesting note is that the 64-bit 1.4.0 was outperformed in class creation, but did pretty well in raw calculations, almost matching the 1.2.2, even with its native threads. It also seemed to perform ever-so-slightly better without the -server switch, but all in all it didn't make much of a difference. The other JDKs all performed much better with the -server flag on.
    I am interested to find what everyone else thinks about this. In particular, can anyone instruct me on how to turn on native threads in the 1.3.1 and 1.4.0 so we can level the playing field? Also, I'd be interested to see some other numbers, if anyone else has any.
    Mike Bauer

    The reason the -server option didn't make a difference when you used the 64-bit mode is that the -server is implicit when you use 64-bit.
    In other words, if you are using java1.4, the options "-d64 -server" and plain "-d64" are the same thing.

  • Howto capture/replay workload (for performance testing purposes)

    Hi,
We have a customer that will buy new hardware for his Oracle database server. Because he is hesitating between 2 possible storage solutions and is not convinced that solution A will be significantly better than solution B, he wants a proof of concept.
    So this is what we will do:
    - Set up a test environment with hardware solution A and another one with hardware solution B.
- We will back up and restore his database on both test servers.
    - We will run a workload on both servers and monitor performance with AWR and ADDM
    - Compare performance
    I would like to:
    - Make a consistent backup of the production database
- Capture 24 hours of work on the customer's production database
    - Restore database on both test servers.
    - Replay the captured workload on both test servers.
    - Compare performance
    Does anyone know what tools I can use to do the capture/replay part?
    All suggestions are appreciated.
    Thank You,
    Pieter

I have been playing with LogMiner and auditing, but these don't solve the problem 100%...
Start LogMiner:
EXECUTE DBMS_LOGMNR.START_LOGMNR(-
STARTSCN => 404809, -
ENDSCN   => 404975, -
OPTIONS  => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + -
DBMS_LOGMNR.CONTINUOUS_MINE + -
DBMS_LOGMNR.COMMITTED_DATA_ONLY + -
DBMS_LOGMNR.NO_ROWID_IN_STMT);
Get the SQL:
SELECT SQL_REDO FROM V$LOGMNR_CONTENTS WHERE USERNAME != 'SYS'
AND SEG_OWNER IS NULL OR SEG_OWNER NOT IN ('SYS', 'SYSTEM', 'SYSMAN');
This works for insert/update/delete, but it can't capture selects...
Auditing:
AUDIT SELECT TABLE, INSERT TABLE, DELETE TABLE, EXECUTE PROCEDURE BY ACCESS;
SELECT SQL_TEXT FROM DBA_AUDIT_TRAIL;
These sql_text statements show the bind variables,
e.g.: SELECT TO_NUMBER(PARAMETER_VALUE) FROM MGMT_PARAMETERS WHERE PARAMETER_NAME = :B1
This is not executable; I need an executable result...
Does anyone have a better way to accomplish what I need?
Thank You.
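If the test boxes can run 11g, Oracle's Database Replay packages do exactly this capture/replay job. A hedged sketch of the main calls, assuming 11g is available (the directory path, capture name and duration below are placeholders):
-- On production: create a directory and capture ~24 hours of workload.
CREATE DIRECTORY capture_dir AS '/u01/app/oracle/capture';
BEGIN
  DBMS_WORKLOAD_CAPTURE.START_CAPTURE(
    name     => 'POC_CAPTURE',
    dir      => 'CAPTURE_DIR',
    duration => 86400);  -- seconds; NULL means run until FINISH_CAPTURE
END;
/
-- ...or stop it explicitly:
BEGIN
  DBMS_WORKLOAD_CAPTURE.FINISH_CAPTURE;
END;
/
-- On each test server: preprocess the capture files and prepare the replay.
BEGIN
  DBMS_WORKLOAD_REPLAY.PROCESS_CAPTURE(capture_dir => 'CAPTURE_DIR');
  DBMS_WORKLOAD_REPLAY.INITIALIZE_REPLAY(
    replay_name => 'POC_REPLAY',
    replay_dir  => 'CAPTURE_DIR');
  DBMS_WORKLOAD_REPLAY.PREPARE_REPLAY;
END;
/
-- Then start one or more wrc replay clients from the OS and call
-- DBMS_WORKLOAD_REPLAY.START_REPLAY to drive the workload.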
