Some Thoughts On An OWB Performance/Testing Framework

Hi all,
I've been giving some thought recently to how we could build a performance tuning and testing framework around Oracle Warehouse Builder. Specifically, I'm looking at ways in which we can use some of the performance tuning techniques described in Cary Millsap/Jeff Holt's book "Optimizing Oracle Performance" to profile and performance tune mappings and process flows, and to use some of the ideas put forward in Kent Graziano's Agile Methods in Data Warehousing paper http://www.rmoug.org/td2005pres/graziano.zip and Steven Feuerstein's utPLSQL project http://utplsql.sourceforge.net/ to provide an agile/test-driven way of developing mappings, process flows and modules. The aim of this is to ensure that the mappings we put together are as efficient as possible, work individually and together as expected, and are quick to develop and test.
At the moment, most people's experience of performance tuning OWB mappings is firstly to see if it runs set-based rather than row-based, then perhaps to extract the main SQL statement and run an explain plan on it, then check to make sure indexes etc. are being used ok. This involves a lot of manual work, doesn't factor in the data available from the wait interface, doesn't store the execution plans anywhere, and doesn't really scale out to encompass entire batches of mappings (process flows).
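To make the manual step above concrete, here is roughly what extracting and explaining a mapping's main statement looks like today. The table and column names are made up for illustration; only the EXPLAIN PLAN / DBMS_XPLAN mechanics are the point (DBMS_XPLAN is available from 9iR2 onwards, so it's fine for our 10.1.0.3 target):

```sql
-- Hypothetical example: capture and display the plan for a mapping's
-- main INSERT ... SELECT statement.
EXPLAIN PLAN SET STATEMENT_ID = 'MAP_SALES_FACT' FOR
  INSERT INTO sales_fact (day_key, product_key, amount)
  SELECT d.day_key, p.product_key, s.amount
  FROM   sales_staging s, day_dim d, product_dim p
  WHERE  s.sale_date  = d.calendar_date
  AND    s.product_cd = p.product_cd;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY('PLAN_TABLE', 'MAP_SALES_FACT'));
```

It's exactly this copy-paste-and-eyeball cycle, repeated per mapping, that doesn't scale and that the framework should automate.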
For some background reading on Cary Millsap/Jeff Holt's approach to profiling and performance tuning, take a look at http://www.rittman.net/archives/000961.html and http://www.rittman.net/work_stuff/extended_sql_trace_and_tkprof.htm. Basically, this approach traces the SQL that is generated by a batch file (read: mapping) and generates a file that can be later used to replay the SQL commands used, the explain plans that relate to the SQL, details on what wait events occurred during execution, and provides at the end a profile listing that tells you where the majority of your time went during the batch. It's currently the "preferred" way of tuning applications as it focuses all the tuning effort on precisely the issues that are slowing your mappings down, rather than database-wide issues that might not be relevant to your mapping.
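For reference, the raw mechanics of the Millsap/Holt approach are just a pair of session-level commands wrapped around the workload (here sketched against an imaginary mapping; the tracefile_identifier value is illustrative):

```sql
-- Level 8 captures wait events; level 12 adds bind variable values.
ALTER SESSION SET tracefile_identifier = 'MAP_SALES_FACT';
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

-- ... run the mapping here ...

ALTER SESSION SET EVENTS '10046 trace name context off';
```

The resulting trace file in user_dump_dest can then be summarised with something like `tkprof <tracefile> map_sales_fact.prf sys=no sort=prsela,exeela,fchela`, or fed to a profiler that produces the full response-time profile.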
For some background information on agile methods, take a look at Kent Graziano's paper, this one on test-driven development http://c2.com/cgi/wiki?TestDrivenDevelopment , this one http://martinfowler.com/articles/evodb.html on agile database development, and the sourceforge project for utPLSQL http://utplsql.sourceforge.net/. What this is all about is having a development methodology that builds in quality but is flexible and responsive to changes in customer requirements. The benefit of using utPLSQL (or any unit testing framework) is that you can automatically check your altered mappings to see that they still return logically correct data, meaning that you can make changes to your data model and mappings whilst being sure that everything still compiles, runs and returns the right results.
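As a flavour of what a mapping-level unit test might look like, here's a sketch following the utPLSQL v1 conventions (a test package named ut_<something> with ut_setup/ut_teardown and ut_-prefixed test procedures). The mapping, tables and assertion are all hypothetical; treat this as an illustration of the shape, not a tested harness:

```sql
-- Hypothetical unit test package for a mapping that loads SALES_FACT.
CREATE OR REPLACE PACKAGE ut_load_sales_fact AS
  PROCEDURE ut_setup;
  PROCEDURE ut_teardown;
  PROCEDURE ut_row_counts_match;
END ut_load_sales_fact;
/

CREATE OR REPLACE PACKAGE BODY ut_load_sales_fact AS
  PROCEDURE ut_setup IS
  BEGIN
    NULL;  -- seed known rows into the staging table here
  END;

  PROCEDURE ut_teardown IS
  BEGIN
    NULL;  -- remove the seeded rows here
  END;

  PROCEDURE ut_row_counts_match IS
    l_src NUMBER;
    l_tgt NUMBER;
  BEGIN
    SELECT COUNT(*) INTO l_src FROM sales_staging;
    SELECT COUNT(*) INTO l_tgt FROM sales_fact;
    utAssert.eq('all staged rows reached the fact table', l_tgt, l_src);
  END;
END ut_load_sales_fact;
/

-- Run with: exec utplsql.test('load_sales_fact')
```

The attraction is that a suite of these can be re-run automatically after every mapping change, which is exactly the regression safety net the agile approach depends on.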
Observations On The Current State of OWB Performance Tuning & Testing
At present, when you build OWB mappings, there is no way (within the OWB GUI) to determine how "efficient" the mapping is. Often, when building the mapping against development data, the mapping executes quickly and yet when run against the full dataset, problems then occur. The mapping is built "in isolation" from its effect on the database and there is no handy tool for determining how efficient the SQL is.
OWB doesn't come with any methodology or testing framework, and so apart from checking that the mapping has run, and that the number of rows inserted/updated/deleted looks correct, there is nothing really to tell you whether there are any "logical" errors. Also, there is no OWB methodology for integration testing, unit testing, or any other sort of testing, and we need to put one in place. Note - OWB does come with auditing, error reporting and so on, but there's no framework for guiding the user through a regime of unit testing, integration testing, system testing and so on, which I would imagine more complete developer GUIs come with. Certainly there's no built in ability to use testing frameworks such as utPLSQL, or a part of the application that lets you record whether a mapping has been tested, and changes the test status of mappings when you make changes to ones they depend on.
OWB is effectively a code generator, and this code runs against the Oracle database just like any other SQL or PL/SQL code. There is a whole world of information and techniques out there for tuning SQL and PL/SQL, and one particular methodology that we quite like is the Cary Millsap/Jeff Holt "Extended SQL Trace" approach that uses Oracle diagnostic events to find out exactly what went on during the running of a batch of SQL commands. We've been pretty successful using this approach to tune customer applications and batch jobs, and we'd like to use this, together with the "Method R" performance profiling methodology detailed in the book "Optimising Oracle Performance", as a way of tuning our generated mapping code.
Whilst we want to build performance and quality into our code, we also don't want to overburden developers with an unwieldy development approach, because what we know will happen is that after a short amount of time, it won't get used. Given that we want this framework to be used for all mappings, it's got to be easy to use, cause minimal overhead, and have results that are easy to interpret. If at all possible, we'd like to use some of the ideas from agile methodologies such as eXtreme Programming, SCRUM and so on to build in quality but minimise paperwork.
We also recognise that there are quite a few settings that can be changed at a session and instance level, that can have an effect on the performance of a mapping. Some of these include initialisation parameters that can change the amount of memory assigned to the instance and the amount of memory subsequently assigned to caches, sort areas and the like, preferences that can be set so that indexes are preferred over table scans, and other such "tweaks" to the Oracle instance we're working with. For reference, the version of Oracle we're going to use to both run our code and store our data is Oracle 10g 10.1.0.3 Enterprise Edition, running on Sun Solaris 64-bit.
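To give an idea of the sort of "tweaks" in question, these are a few session-level knobs we would expect to experiment with (parameter names are valid for 10.1; the values are purely illustrative, not recommendations):

```sql
-- Let Oracle manage sort/hash work areas out of the PGA automatically
ALTER SESSION SET workarea_size_policy = AUTO;

-- Allow parallel insert/update for large set-based mappings
ALTER SESSION ENABLE PARALLEL DML;

-- Bias the optimizer towards index access (illustrative value only)
ALTER SESSION SET optimizer_index_cost_adj = 50;
```

Part of the framework's job would be to measure which of these actually move the needle for a given ETL workload, rather than applying them as folklore.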
Some initial thoughts on how this could be accomplished
- Put in place some method for automatically / easily generating explain plans for OWB mappings (issue - this is only relevant for mappings that are set based, and what about pre- and post- mapping triggers)
- Put in place a method for starting and stopping an event 10046 extended SQL trace for a mapping
- Put in place a way of detecting whether the explain plan / cost / timing for a mapping changes significantly
- Put in place a way of tracing a collection of mappings, i.e. a process flow
- The way of enabling tracing should either be built in by default, or easily added by the OWB developer. Ideally it should be simple to switch it on or off (perhaps levels of event 10046 tracing?)
- Perhaps store trace results in a repository? reporting? exception reporting?
- At an instance level, come up with some stock recommendations for instance settings
- identify the set of instance and session settings that are relevant for ETL jobs, and determine what effect changing them has on the ETL job
- put in place a regime that records key instance indicators (STATSPACK / ASH) and allows reports to be run / exceptions to be reported
- Incorporate any existing "performance best practices" for OWB development
- define a lightweight regime for unit testing (as per agile methodologies) and a way of automating it (utPLSQL?) and of recording the results so we can check the status of dependent mappings easily
- Other ideas around testing?
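The on/off switch mentioned in the list above could be as small as a helper package that the mapping's pre- and post-processing hooks call, with the 10046 level as a parameter. This is entirely a sketch of one possible shape:

```sql
-- Hypothetical helper: wraps extended SQL trace around a mapping run.
CREATE OR REPLACE PACKAGE map_trace AS
  PROCEDURE start_trace(p_tag VARCHAR2, p_level PLS_INTEGER DEFAULT 8);
  PROCEDURE stop_trace;
END map_trace;
/

CREATE OR REPLACE PACKAGE BODY map_trace AS
  PROCEDURE start_trace(p_tag VARCHAR2, p_level PLS_INTEGER DEFAULT 8) IS
  BEGIN
    -- Tag the trace file so it can be found and loaded into the repository
    EXECUTE IMMEDIATE
      'ALTER SESSION SET tracefile_identifier = ''' || p_tag || '''';
    EXECUTE IMMEDIATE
      'ALTER SESSION SET EVENTS ''10046 trace name context forever, level '
      || p_level || '''';
  END;

  PROCEDURE stop_trace IS
  BEGIN
    EXECUTE IMMEDIATE
      'ALTER SESSION SET EVENTS ''10046 trace name context off''';
  END;
END map_trace;
/
```

Making the level a parameter gives us the "levels of tracing" idea for free: level 8 (waits only) for routine runs, level 12 (waits plus binds) when digging into a problem, and no call at all when tracing is switched off.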
Suggested Approach
- For mapping tracing and generation of explain plans, a pre- and post-mapping trigger that turns extended SQL trace on and off, places the trace file in a predetermined spot, formats the trace file and dumps the output to repository tables.
- For process flows, something that does the same at the start and end of the process. Issue - how might this conflict with mapping level tracing controls?
- Within the mapping/process flow tracing repository, store the values of historic executions, have an exception report that tells you when a mapping execution time varies by a certain amount
- get the standard set of preferred initialisation parameters for a DW, use these as the start point for the stock recommendations. Identify which ones have an effect on an ETL job.
- identify the standard steps Oracle recommends for getting the best performance out of OWB (workstation RAM etc) - see OWB Performance Tips http://www.rittman.net/archives/001031.html and Optimizing Oracle Warehouse Builder Performance http://www.oracle.com/technology/products/warehouse/pdf/OWBPerformanceWP.pdf
- Investigate what additional tuning options and advisers are available with 10g
- Investigate the effect of system statistics & come up with recommendations.
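To make the repository and exception-report items above a little more concrete, the core of it might be no more than a history table plus a variance query. Table, columns and the 25% threshold are all illustrative assumptions:

```sql
-- Sketch of the execution-history repository
CREATE TABLE map_exec_history (
  map_name     VARCHAR2(30),
  run_date     DATE,
  elapsed_secs NUMBER
);

-- Exception report: flag runs more than 25% slower than the mapping's average
SELECT h.map_name, h.run_date, h.elapsed_secs, a.avg_secs
FROM   map_exec_history h,
       (SELECT map_name, AVG(elapsed_secs) avg_secs
        FROM   map_exec_history
        GROUP  BY map_name) a
WHERE  h.map_name = a.map_name
AND    h.elapsed_secs > a.avg_secs * 1.25;
```

The same table could carry a plan hash or cost column so that plan changes, not just elapsed-time changes, trip the exception report.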
Further reading / resources:
- "Diagnosing Performance Problems Using Extended Trace" Cary Millsap
http://otn.oracle.com/oramag/oracle/04-jan/o14tech_perf.html
- "Performance Tuning With STATSPACK" Connie Dialeris and Graham Wood
http://www.oracle.com/oramag/oracle/00-sep/index.html?o50tun.html
- "Performance Tuning with Statspack, Part II" Connie Dialeris and Graham Wood
http://otn.oracle.com/deploy/performance/pdf/statspack_tuning_otn_new.pdf
- "Analyzing a Statspack Report: A Guide to the Detail Pages" Connie Dialeris and Graham Wood
http://www.oracle.com/oramag/oracle/00-nov/index.html?o60tun_ol.html
- "Why Isn't Oracle Using My Index?!" Jonathan Lewis
http://www.dbazine.com/jlewis12.shtml
- "Performance Tuning Enhancements in Oracle Database 10g" Oracle-Base.com
http://www.oracle-base.com/articles/10g/PerformanceTuningEnhancements10g.php
- Introduction to Method R and Hotsos Profiler (Cary Millsap, free reg. required)
http://www.hotsos.com/downloads/registered/00000029.pdf
- Exploring the Oracle Database 10g Wait Interface (Robin Schumacher)
http://otn.oracle.com/pub/articles/schumacher_10gwait.html
- Article referencing an OWB forum posting
http://www.rittman.net/archives/001031.html
- How do I inspect error logs in Warehouse Builder? - OWB Exchange tip
http://www.oracle.com/technology/products/warehouse/pdf/Cases/case10.pdf
- What is the fastest way to load data from files? - OWB exchange tip
http://www.oracle.com/technology/products/warehouse/pdf/Cases/case1.pdf
- Optimizing Oracle Warehouse Builder Performance - Oracle White Paper
http://www.oracle.com/technology/products/warehouse/pdf/OWBPerformanceWP.pdf
- OWB Advanced ETL topics - including sections on operating modes, partition exchange loading
http://www.oracle.com/technology/products/warehouse/selfserv_edu/advanced_ETL.html
- Niall Litchfield's Simple Profiler (a creative commons-licensed trace file profiler, based on Oracle Trace Analyzer, that displays the response time profile through HTMLDB. Perhaps could be used as the basis for the repository/reporting part of the project)
http://www.niall.litchfield.dial.pipex.com/SimpleProfiler/SimpleProfiler.html
- Welcome to the utPLSQL Project - a PL/SQL unit testing framework by Steven Feuerstein. Could be useful for automating the process of unit testing mappings.
http://utplsql.sourceforge.net/
Relevant postings from the OTN OWB Forum
- Bulk Insert - Configuration Settings in OWB
http://forums.oracle.com/forums/thread.jsp?forum=57&thread=291269&tstart=30&trange=15
- Default Performance Parameters
http://forums.oracle.com/forums/thread.jsp?forum=57&thread=213265&message=588419&q=706572666f726d616e6365#588419
- Performance Improvements
http://forums.oracle.com/forums/thread.jsp?forum=57&thread=270350&message=820365&q=706572666f726d616e6365#820365
- Map Operator performance
http://forums.oracle.com/forums/thread.jsp?forum=57&thread=238184&message=681817&q=706572666f726d616e6365#681817
- Performance of mapping with FILTER
http://forums.oracle.com/forums/thread.jsp?forum=57&thread=273221&message=830732&q=706572666f726d616e6365#830732
- Poor mapping performance
http://forums.oracle.com/forums/thread.jsp?forum=57&thread=275059&message=838812&q=706572666f726d616e6365#838812
- Optimizing Mapping Performance With OWB
http://forums.oracle.com/forums/thread.jsp?forum=57&thread=269552&message=815295&q=706572666f726d616e6365#815295
- Performance of the OWB-Repository
http://forums.oracle.com/forums/thread.jsp?forum=57&thread=66271&message=66271&q=706572666f726d616e6365#66271
- One large JOIN or many small ones?
http://forums.oracle.com/forums/thread.jsp?forum=57&thread=202784&message=553503&q=706572666f726d616e6365#553503
- NATIVE PL SQL with OWB9i
http://forums.oracle.com/forums/thread.jsp?forum=57&thread=270273&message=818390&q=706572666f726d616e6365#818390
Next Steps
Although this is something that I'll be progressing with anyway, I'd appreciate any comment from existing OWB users as to how they currently perform performance tuning and testing. Whilst these are perhaps two distinct subject areas, they can be thought of as the core of an "OWB Best Practices" framework and I'd be prepared to write the results up as a freely downloadable whitepaper. With this in mind, do you have existing best practices for tuning or testing, have you tried using SQL trace and TKPROF to profile mappings and process flows, or have you used a unit testing framework such as utPLSQL to automatically test the set of mappings that make up your project?
Any feedback, add it to this forum posting or send directly through to me at [email protected]. I'll report back on a proposed approach in due course.

Hi Mark,
interesting post, but I think you may be focusing on the trees, and losing sight of the forest.
Coincidentally, I've been giving quite a lot of thought lately to some aspects of your post. They relate to some new stuff I'm doing. Maybe I'll be able to answer in more detail later, but I do have a few preliminary thoughts.
1. 'How efficient is the generated code' is a perennial topic. There are still some people who believe that a code generator like OWB cannot be in the same league as hand-crafted SQL. I answered that question quite definitely: "We carefully timed execution of full-size runs of both the original code and the OWB versions. Take it from me, the code that OWB generates is every bit as fast as the very best hand-crafted and fully tuned code that an expert programmer can produce."
The link is http://www.donnapkelly.pwp.blueyonder.co.uk/generated_code.htm
That said, it still behooves the developer to have a solid understanding of what the generated code will actually do, such as how it will take advantage of indexes, and so on. If not, the developer can create such monstrosities as lookups into an un-indexed field (I've seen that).
2. The real issue is not how fast any particular generated mapping runs, but whether or not the system as a whole is fit for purpose. Most often, that means: does it fit within its batch update window? My technique is to dump the process flow into Microsoft Project, and then to add the timings for each process. That creates a Critical Path, and then I can visually inspect it for any bottleneck processes. I usually find that there are not more than one or two dogs. I'll concentrate on those, fix them, and re-do the flow timings. I would add this: the dogs I have seen, I have invariably replaced. They were just garbage. They did not need tuning at all - just scrapping.
Gee, but this whole thing is minimum effort and real fast! I generally figure that it takes maybe a day or two (max) to soup up system performance to the point where it whizzes.
Fact is, I don't really care whether there are a lot of sub-optimal processes. All I really care about is performance of the system as a whole. This technique seems to work for me. 'Course, it depends on architecting the thing properly in the first place. Otherwise, no amount of tuning is going to help worth a darn.
Conversely (re. my note about replacing dogs) I do not think I have ever tuned a piece of OWB-generated code. Never found a need to. Not once. Not ever.
That's not to say I do not recognise the value of playing with deployment configuration parameters. Obviously, I set auditing=none, and operating mode=set based, and sometimes, I play with a couple of different target environments to fool around with partitioning, for example. Nonetheless, if it is not a switch or a knob inside OWB, I do not touch it. This is in line with my diktat that you shall use no other tool than OWB to develop data warehouses. (And that includes all documentation!). (OK, I'll accept MS Project)
Finally, you raise the concept of a 'testing framework'. This is a major part of what I am working on at the moment. This is a tough one. Clearly, the developer must unit test each mapping in a design-model-deploy-execute cycle, paying attention to both functionality and performance. When the developer is satisfied, that mapping will be marked as 'done' in the project workbook. Mappings will form part of a stream, executed as a process flow. Each process flow will usually terminate in a dimension, a fact, or an aggregate. Each process flow will be tested as an integrated whole. There will be test strategies devised, and test cases constructed. There will finally be system tests, to verify the validity of the system as a production-grade whole. (stuff like recovery/restart, late-arriving data, and so on)
For me, I use EDM (TM). That's the methodology I created (and trademarked) twenty years ago: Evolutionary Development Methodology (TM). This is a spiral methodology based around prototyping cycles within Stage cycles within Release cycles. For OWB, a Stage would consist (say) of a Dimensional update. What I am trying to do now is to graft this within a traditional waterfall methodology, and I am having the same difficulties I had when I tried to do it then.
All suggestions on how to do that grafting gratefully received!
To sum up, I'm kinda at a loss as to why you want to go deep into OWB-generated code performance stuff. Jeepers, architect the thing right, and the code runs fast enough for anyone. I've worked on ultra-large OWB systems, including validating the largest data warehouse in the UK. I've never found any value in 'tuning' the code. What I'd like you to comment on is this: what will it buy you?
Cheers,
Donna
http://www.donnapkelly.pwp.blueyonder.co.uk

Similar Messages

  • Besides LoadRunner, any other SAP Performance Test tool?

    Hi, Dear All:
    I'm doing some planning for the SAP performance test. But our budget is very limited. It seems my company could not afford LR. LoadRunner took over 95% of this market. Beside LR, I heard Worksoft Certify is another option. Is there any other tool or open source tool?
    Thank you all so much in advance.

    Hello,
    If you are looking for simulating load using minimalist tools, a crude way to do this would be configuring Ecatt scripts and running them from multiple machines
    But I think it doesn't work for web based transactions.
    Regards,
    Siddhesh

  • [svn] 3229: Made some updates to the config test framework.

    Revision: 3229
    Author: [email protected]
    Date: 2008-09-16 12:15:34 -0700 (Tue, 16 Sep 2008)
    Log Message:
    Made some updates to the config test framework. This should be able to run on all the regression boxes now assuming I got the names of all the log files correct for the different app servers. After this checkin, I will update the regression scripts to start running the config framework tests under automation. This will be another antcall from the run.tests target in automation.xml which will run the tests and then load the results to the test results db.
    Modified Paths:
    blazeds/trunk/qa/apps/qa-regress/testsuites/config/build.xml
    blazeds/trunk/qa/apps/qa-regress/testsuites/config/tests/messagingService/DestinationWith NoChannelTest/error.txt
    blazeds/trunk/qa/apps/qa-regress/testsuites/config/tests/messagingService/DestinationWith NoIDTest/error.txt
    blazeds/trunk/qa/apps/qa-regress/testsuites/config/tests/messagingService/jms/InvalidAckn owledgeModeTest/error.txt
    blazeds/trunk/qa/apps/qa-regress/testsuites/config/tests/messagingService/jms/InvalidDeli veryModeTest/error.txt
    blazeds/trunk/qa/apps/qa-regress/testsuites/config/tests/messagingService/jms/InvalidDest inationTypeTest/error.txt
    blazeds/trunk/qa/apps/qa-regress/testsuites/config/tests/messagingService/jms/InvalidMess ageTypeTest/error.txt
    blazeds/trunk/qa/apps/qa-regress/testsuites/config/tests/messagingService/jms/NoConnectio nFactoryTest/error.txt
    blazeds/trunk/qa/apps/qa-regress/testsuites/config/tests/messagingService/jms/NoJNDINameT est/error.txt
    blazeds/trunk/qa/resources/frameworks/qa-frameworks.zip

    despite the workaround, it doesn't fix the real problem. It shouldn't be a huge deal for adobe to add support for multiple svn versions. Dreamweaver is the first tool i've used that works with svn that doesn't support several types of svn meta data. If they're going to claim that Dreamweaver supports svn is should actually support svn, the current version, not a version several years old. This should have been among the first patches released, or at least after snow leopard came out (and packaged with it the current version of svn).
    does anyone know if the code that handles meta data formatting is something that is human readable, or where it might be, or is it in compiled code.
    i signed up for the forums, for the sole purpose of being able to vent about this very frustrating and disappointing situation.

  • Any suggession on Tools for Performance testing of application designed for Blackberry phones.. which can get some details on CPU usage, memory, response time?? any help on the same is much appriciated.

    Hi Team,
    Any suggession on Tools for Performance testing of application designed for Blackberry phones.. which can get some details on CPU usage, memory, response time?? any help on the same is much appriciated.
    Thank You,
    Best Regards,
    neeraj

    Hi Team,
    Any suggession on Tools for Performance testing of application designed for Blackberry phones.. which can get some details on CPU usage, memory, response time?? any help on the same is much appriciated.
    Thank You,
    Best Regards,
    neeraj

  • New version of sapyto - SAP Penetration Testing Framework

    Hello list,
    I'm glad to let you know that a new version of sapyto, the SAP Penetration Testing Framework, is available.
    You can download it by accessing the following link: http://www.cybsec.com/EN/research/sapyto.php
    News in this version:
    This version is mainly a complete re-design of sapyto's core and architecture to support future releases. Some of the new features now available are:
    . Target configuration is now based on "connectors", which represent different ways to communicate with SAP services and components. This makes the
    framework extensible to handle new types of connections to SAP platforms.
    . Plugins are now divided in three categories:
         . Discovery: Try to discover new targets from the configured/already-discovered ones.
         . Audit: Perform some kind of vulnerability check over configured targets.
         . Exploit: Are used as proofs of concept for discovered vulnerabilities.
    . Exploit plugins now generate shells and/or sapytoAgent objects.
    . New plugins!: User account bruteforcing, client enumeration, SAProuter assessment, and more...
    . Plugin-developer interface drastically simplified and improved.
    . New command switches to allow the configuration of targets/scripts/output independently.
    . Installation process and general documentation improved.
    . Many (many) bugs fixed. :P
    Enjoy!
    Cheers,
    Mariano

    Hi Mariano,
    Thanks for the update.
    We implemented secinfo restrictions 5 years ago, but used a rather complicated approach. We did some tests today (the "local" setting works okay so far) and will continue tomorrow.
    We now use the HOST and USER-HOST set to "local" and let the application security deal with who-can-do-what and this works quite well; though we have encountered some external 3rd party server programs in some cases. It seems to be popular amongst the business folks and some of the products use the gateway monitor to comunicate with the SAP system to find out when it has completed processing.
    I think this is a design error, but they of course think otherwise
    What was interesting to note, was that we locked ourselves out of an unprotected system. We changed the gw/monitor from 2 to 1 in a test. This worked. But then the gwmon cannot be used to change it back to 2! To we tried RZ11, and experienced the same. So we changed it to 0 in a test, and then 1 was blocked as well. This appears to be implemented in the kernel, as even hobbling the application coding does not help. The parameter is only dynamic when decreasing the value and increasing the security.
    We had to restart the whole system for the instance profile to take effect again. Rather noisy and a few developers could take an additional 10 minute coffee break as a result
    We are testing this on 3 different releases with different config:
    - 4.6C (46D)
    - 6.40
    - 7.00
    The different config relates to:
    - gw/sec_info
    - gw/monitor
    - auth/rfc_authority_check
    Our intention behind this is to improve baseline security and harden some special systems further.
    Cheers,
    Julius

  • How to have continouse performance testing during development phase?

    I understand that for corporate projects there are always requirements like roughy how long it can take for certain process.
    Is there any rough guideline as how much time certain process will take?
    And is there anyway i can have something like JMeter that will do constant monitor of the performance as i start development?
    Can go down to method level, but should also be able to show total time taken for a certain module or action etc.
    I think it is somthing like continuous integration like cruise control..but is more for performance continouse evaluation..
    Any advice anyone

    Just a thought: how useful would continuous performance testing be? First off, I wouldn't have the main build include performance tests. What if the build fails on performance? It isn't necessarily something you'll fix quickly, so you could be stuck with a broken build for quite some time, which means either your devs won't be committing code, or they'll be comitting code on a broken build which kind-of negates the point of CI. So you'd have a nightly build for performance, or something. Then what? Someone comes in in the morning and sees the performance build failed, and fixes it? Hmmm, maybe your corporate culture is different, but we've got a nightly metrics build that sits broken for weeks on end before someone looks at it. As long as the master builds are OK, nobody cares. Given that performance problems might well take several weeks of dedicated time to fix, I reckon they're far more likely to be fixed as a result of failing acceptance tests, rather than the CI environment reporting them
    Just my opinions, of course

  • EA6500 Wireless Issues Performance Test and MTU setting

    Just changed from DSL to Comcast Internet with a 25Mbps download service.  Purchased the Comcast the Linksys cable modem to match the EA6500.  Last night when testing doing some Netflix HD streaming while downloading Direct TV HD movie noticed that performance was not streaming HD in fact it was as bad as the slow DSL.
    So tried the Smart Wifi Performance test several time and downloads were terrible 3.0 Mbps, however, when testing with other testers, speakeasy, speedtest performance was above 25 Mbps.  Called support and they had no answer.
    What is wrong with the speed test smart wifi?  Why was I not getting HD streaming speeds?  I shoulf have plenty of bandwidth.
    Also I can seem to find the right MTU setting.  Every packet amount I ping is working so I can not get a fragmented error.
    I thought this router was advertised as a HD video router.
    I have just about everything disabled including Media Priority.
    Comments, Ideas, help,
    Thank you

    Hi!
    To get the optimum HD streaming performance, you can try setting the following on the router's page :
     - disable or turn off  WMM Support  under Media Prioritization.
     - personalize the wireless settings, set different names on the 2.4 and 5 GHz networks.
     Let the streaming devices connect to the 5Ghz network.

  • UI performance testing of pivot table

    Hi,
    I was wondering if anyone could direct me to a tool that I can use to do performance testing on a pivot table. I am populating a pivot table(declaratively) with a data source of over 100,000 cells and I need to record the browser rendering time of the pivot table using 50 or so parallel threads(requests). I tried running performance tests using JMeter, but that didn't help.
    This is what I tried so far with JMeter:
    I deployed the application in the integratedweblogicserver and specify the Url to hit in JMeter ( http://127.0.0.1:7101/PivotTableSample-ViewController-context-root/faces/Sample) and added a response assertion for the response code 200. Although I am able to hit the url successfully, the response I get is a javascript with a message that says "This is the loopback script to process the url before the real page loads. It introduces a separate round trip". When I checked in firebug, it looks like request redirect of some sort happens from this javascript to another Url (with some randomly generated parameters) which then returns the html response of the pivot table. I am unable to hit that Url directly as I get a message saying "session expired". It looks like a redirect happens from the first request and then session is created for that request and a redirect occurs.
    I am able to check the browser rendering time of the pivot table in firebug (.net tab), but that is only for a single request. I'd appreciate it if anyone could guide me on this.
    Thanks
    Naveen

    I found the link below that explains configuration of JMeter for performance testing of ADF applications(Although I couldn't find a solution to figure out the browser rendering time for parallel threads).
    http://one-size-doesnt-fit-all.blogspot.com/2010/04/configuring-apache-jmeter-specifically.html
    Edited by: Naveen Ramanathan on Oct 3, 2010 10:24 AM

  • Performance testing of servlets / beans / jsp ?

    Hi. I'd like to performance test my applications, anyone have a clue on what software to use?
    I use Fort� for Java CE 3 as the IDE and TomCat 3.23 as the servlet / jsp container.
    Hopefully there are some opensource tools to use for this?
    Regards,
    Chris

    You can precompile JSP's, this removes the small hickup when they are requested the first time (making the server translate and compile them). Check the documentation of your specific web/application server on how to do this.
    Otherwise:
    - buy better hardware
    - use a better application server
    - make sure your network is properly configured (so packets don't get routed around the network four times before they reach their destination for example)
    - make sure your program logic doesn't create bottlenecks such as
    unnecessary HTTP requests, redundant loops, etc.
    - optimize your database access, use connection pooling
    - optimize your database queries. Create indexes, make sure the SQL queries themselves aren't doing unnecessary trips around the database, etc.

  • Can Web Performance Test work on AJAX or Javascript Project which will show only one URL for all the pages?

    Hi there,
    I'm working on testing an AJAX and JavaScript project which has several pages, all at the same URL. I need to test some attributes on the page, or parameters passed by AJAX or JavaScript. Can Web Performance Test get what I want?
    Thanks,
    

    Hello,
    Thank you for your post.
    A web performance test is used to test whether a server responds correctly and consistently with what we expect, and to test response speed, stability and scalability.
    The Web Performance Test Recorder records both AJAX requests and requests that were submitted from JavaScript, but a web test does not execute JavaScript. I am afraid you can't use a web test to test parameters passed by AJAX or JavaScript.
    Please see:
    Web Performance Test Engine Overview
    About JavaScript and ActiveX Controls in Web Performance Tests
    From the first link, “Client-side scripting that sets parameter values or results in additional HTTP requests, such as AJAX, does affect the load on the server and might require you to manually modify the Web Performance Test to simulate the scripting.”
    If you want to execute the function typically performed by script in web test, you need to accomplish it in coded web performance test or a web performance test plugin. Please see:
     How to: Create a Coded Web Performance Test
    How to: Create a Web Performance Test Plug-In
    I am not sure what 'some attribute on the page' means. If you mean that you want to test controls on the page, you can use a coded UI test, which can verify that the user interface for an application functions correctly. A coded UI test performs actions on the user interface controls of an application and verifies that the correct controls are displayed with the correct values. You can refer to this article for detailed information about coded UI tests:
    Verifying Code by Using Coded User Interface Tests
    Best regards,
    Amanda Zhu [MSFT]
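
    As an illustration of the quoted advice about manually simulating client-side scripting (the field name and helpers below are made up for the example, not part of any VS API): since the test engine will not run the script, you extract the value the script would have set from the previous response and attach it to the next request yourself.

    ```python
    import re
    from urllib.parse import urlencode

    def extract_hidden_field(html, name):
        """Return the value of an <input ... name="..." value="..."> field,
        or None if absent. (Naive regex; assumes name precedes value.)"""
        m = re.search(
            r'<input[^>]*name="%s"[^>]*value="([^"]*)"' % re.escape(name), html)
        return m.group(1) if m else None

    def build_next_request_body(previous_html, form_fields):
        """Merge a script-set token into the form data for the follow-up POST."""
        token = extract_hidden_field(previous_html, "__AJAX_TOKEN")
        if token is not None:
            form_fields = dict(form_fields, __AJAX_TOKEN=token)
        return urlencode(form_fields)
    ```

    This is the same pattern a coded web performance test or a web test plug-in would implement: capture the dynamic parameter, then replay the AJAX request with it.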

  • Log file sync top event during performance test -av 36ms

    Hi,
    During the performance test of our product before deployment into production, I see "log file sync" on top, with an avg wait (ms) of 36, which I feel is too high.
                                                               Avg
                                                              wait   % DB
    Event                                 Waits     Time(s)   (ms)   time Wait Class
    log file sync                       208,327       7,406     36   46.6 Commit
    direct path write                   646,833       3,604      6   22.7 User I/O
    DB CPU                                            1,599          10.1
    direct path read temp             1,321,596         619      0    3.9 User I/O
    log buffer space                      4,161         558    134    3.5 Configurat
    Although testers are not complaining about the performance of the application, we DBAs are expected to be proactive about any bad signals from the DB.
    I am not able to figure out why "log file sync" is having such slow response.
    Below is the snapshot from the load profile.
                  Snap Id      Snap Time      Sessions Curs/Sess
    Begin Snap:    108127 16-May-13 20:15:22       105       6.5
      End Snap:    108140 16-May-13 23:30:29       156       8.9
       Elapsed:              195.11 (mins)
       DB Time:              265.09 (mins)
    Cache Sizes                       Begin        End
    ~~~~~~~~~~~                  ---------- ----------
                   Buffer Cache:     1,168M     1,136M  Std Block Size:         8K
               Shared Pool Size:     1,120M     1,168M      Log Buffer:    16,640K
    Load Profile              Per Second    Per Transaction   Per Exec   Per Call
    ~~~~~~~~~~~~         ---------------    --------------- ---------- ----------
          DB Time(s):                1.4                0.1       0.02       0.01
           DB CPU(s):                0.1                0.0       0.00       0.00
           Redo size:          607,512.1           33,092.1
       Logical reads:            3,900.4              212.5
       Block changes:            1,381.4               75.3
      Physical reads:              134.5                7.3
    Physical writes:              134.0                7.3
          User calls:              145.5                7.9
              Parses:               24.6                1.3
         Hard parses:                7.9                0.4
    W/A MB processed:          915,418.7           49,864.2
              Logons:                0.1                0.0
            Executes:               85.2                4.6
           Rollbacks:                0.0                0.0
        Transactions:               18.4
    Some of the top background wait events:
    ^LBackground Wait Events       DB/Inst: Snaps: 108127-108140
    -> ordered by wait time desc, waits desc (idle events last)
    -> Only events with Total Wait Time (s) >= .001 are shown
    -> %Timeouts: value of 0 indicates value was < .5%.  Value of null is truly 0
                                                                 Avg
                                            %Time Total Wait    wait    Waits   % bg
    Event                             Waits -outs   Time (s)    (ms)     /txn   time
    log file parallel write         208,563     0      2,528      12      1.0   66.4
    db file parallel write            4,264     0        785     184      0.0   20.6
    Backup: sbtbackup                     1     0        516  516177      0.0   13.6
    control file parallel writ        4,436     0         97      22      0.0    2.6
    log file sequential read          6,922     0         95      14      0.0    2.5
    Log archive I/O                   6,820     0         48       7      0.0    1.3
    os thread startup                   432     0         26      60      0.0     .7
    Backup: sbtclose2                     1     0         10   10094      0.0     .3
    db file sequential read           2,585     0          8       3      0.0     .2
    db file single write                560     0          3       6      0.0     .1
    log file sync                        28     0          1      53      0.0     .0
    control file sequential re       36,326     0          1       0      0.2     .0
    log file switch completion            4     0          1     207      0.0     .0
    buffer busy waits                     5     0          1     116      0.0     .0
    LGWR wait for redo copy             924     0          1       1      0.0     .0
    log file single write                56     0          1       9      0.0     .0
    Backup: sbtinfo2                      1     0          1     500      0.0     .0
    During a previous perf test, things didn't look this bad for "log file sync". A few sections from the comparison report (awrddprt.sql):
    {code}
    Workload Comparison
    ~~~~~~~~~~~~~~~~~~~ 1st Per Sec 2nd Per Sec %Diff 1st Per Txn 2nd Per Txn %Diff
    DB time: 0.78 1.36 74.36 0.02 0.07 250.00
    CPU time: 0.18 0.14 -22.22 0.00 0.01 100.00
    Redo size: 573,678.11 607,512.05 5.90 15,101.84 33,092.08 119.13
    Logical reads: 4,374.04 3,900.38 -10.83 115.14 212.46 84.52
    Block changes: 1,593.38 1,381.41 -13.30 41.95 75.25 79.38
    Physical reads: 76.44 134.54 76.01 2.01 7.33 264.68
    Physical writes: 110.43 134.00 21.34 2.91 7.30 150.86
    User calls: 197.62 145.46 -26.39 5.20 7.92 52.31
    Parses: 7.28 24.55 237.23 0.19 1.34 605.26
    Hard parses: 0.00 7.88 100.00 0.00 0.43 100.00
    Sorts: 3.88 4.90 26.29 0.10 0.27 170.00
    Logons: 0.09 0.08 -11.11 0.00 0.00 0.00
    Executes: 126.69 85.19 -32.76 3.34 4.64 38.92
    Transactions: 37.99 18.36 -51.67
    Top timed events, first (1st) vs second (2nd) test:
    1st: Event                          Wait Class   Waits      Time(s) Avg(ms) %DB time
    2nd: Event                          Wait Class   Waits      Time(s) Avg(ms) %DB time
    1st: SQL*Net more data from client  Network      2,133,486  1,270.7     0.6    61.24
    2nd: log file sync                  Commit         208,355  7,407.6    35.6    46.57
    1st: CPU time                       N/A                       487.1     N/A    23.48
    2nd: direct path write              User I/O       646,849  3,604.7     5.6    22.66
    1st: log file sync                  Commit          99,459    129.5     1.3     6.24
    2nd: log file parallel write        System I/O     208,564  2,528.4    12.1    15.90
    1st: log file parallel write        System I/O     100,732    126.6     1.3     6.10
    2nd: CPU time                       N/A                     1,599.3     N/A    10.06
    1st: SQL*Net more data to client    Network        451,810    103.1     0.2     4.97
    2nd: db file parallel write         System I/O       4,264    784.7   184.0     4.93
    1st: -direct path write             User I/O       121,044     52.5     0.4     2.53
    2nd: -SQL*Net more data from client Network      7,407,435    279.7     0.0     1.76
    1st: -db file parallel write        System I/O         986     22.8    23.1     1.10
    2nd: -SQL*Net more data to client   Network      2,714,916     64.6     0.0     0.41
    {code}
    To sum it up:
    1. Why is the I/O response taking such a hit during the new perf test? Please suggest.
    2. Does the number of DB writers impact the "log file sync" wait event? We have only one DB writer, as the host has only 4 CPUs.
    {code}
    select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    PL/SQL Release 11.1.0.7.0 - Production
    CORE 11.1.0.7.0 Production
    TNS for HPUX: Version 11.1.0.7.0 - Production
    NLSRTL Version 11.1.0.7.0 - Production
    {code}
    Please let me know if you would like to see any other stats.
    Edited by: Kunwar on May 18, 2013 2:20 PM
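
    For reference (my addition, not part of the thread): the "Avg wait (ms)" column in these reports is just total wait time divided by wait count, which makes it easy to cross-check figures between snapshots.

    ```python
    def avg_wait_ms(waits, total_wait_s):
        """AWR-style average wait: total wait seconds per wait, in milliseconds."""
        return total_wait_s * 1000.0 / waits

    # log file sync in the 3-hour report: 7,406 s over 208,327 waits -> ~36 ms
    print(round(avg_wait_ms(208_327, 7_406)))   # 36
    ```

    The same arithmetic reproduces the 91 ms figure for log file sync in the 1-hour report below (6,740 s over 73,761 waits).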

    1. A snapshot interval of 3 hours always generates meaningless results
    Below are some details from the 1 hour interval AWR report.
    Platform                         CPUs Cores Sockets Memory(GB)
    HP-UX IA (64-bit)                   4     4       3      31.95
                  Snap Id      Snap Time      Sessions Curs/Sess
    Begin Snap:    108129 16-May-13 20:45:32       140       8.0
      End Snap:    108133 16-May-13 21:45:53       150       8.8
       Elapsed:               60.35 (mins)
       DB Time:              140.49 (mins)
    Cache Sizes                       Begin        End
    ~~~~~~~~~~~                  ---------- ----------
                   Buffer Cache:     1,168M     1,168M  Std Block Size:         8K
               Shared Pool Size:     1,120M     1,120M      Log Buffer:    16,640K
    Load Profile              Per Second    Per Transaction   Per Exec   Per Call
    ~~~~~~~~~~~~         ---------------    --------------- ---------- ----------
          DB Time(s):                2.3                0.1       0.03       0.01
           DB CPU(s):                0.1                0.0       0.00       0.00
           Redo size:          719,553.5           34,374.6
       Logical reads:            4,017.4              191.9
       Block changes:            1,521.1               72.7
      Physical reads:              136.9                6.5
    Physical writes:              158.3                7.6
          User calls:              167.0                8.0
              Parses:               25.8                1.2
         Hard parses:                8.9                0.4
    W/A MB processed:          406,220.0           19,406.0
              Logons:                0.1                0.0
            Executes:               88.4                4.2
           Rollbacks:                0.0                0.0
        Transactions:               20.9
    Top 5 Timed Foreground Events
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                                               Avg
                                                              wait   % DB
    Event                                 Waits     Time(s)   (ms)   time Wait Class
    log file sync                        73,761       6,740     91   80.0 Commit
    log buffer space                      3,581         541    151    6.4 Configurat
    DB CPU                                              348           4.1
    direct path write                   238,962         241      1    2.9 User I/O
    direct path read temp               487,874         174      0    2.1 User I/O
    Background Wait Events       DB/Inst: Snaps: 108129-108133
    -> ordered by wait time desc, waits desc (idle events last)
    -> Only events with Total Wait Time (s) >= .001 are shown
    -> %Timeouts: value of 0 indicates value was < .5%.  Value of null is truly 0
                                                                 Avg
                                            %Time Total Wait    wait    Waits   % bg
    Event                             Waits -outs   Time (s)    (ms)     /txn   time
    log file parallel write          61,049     0      1,891      31      0.8   87.8
    db file parallel write            1,590     0        251     158      0.0   11.6
    control file parallel writ        1,372     0         56      41      0.0    2.6
    log file sequential read          2,473     0         50      20      0.0    2.3
    Log archive I/O                   2,436     0         20       8      0.0     .9
    os thread startup                   135     0          8      60      0.0     .4
    db file sequential read             668     0          4       6      0.0     .2
    db file single write                200     0          2       9      0.0     .1
    log file sync                         8     0          1     152      0.0     .1
    log file single write                20     0          0      21      0.0     .0
    control file sequential re       11,218     0          0       0      0.1     .0
    buffer busy waits                     2     0          0     161      0.0     .0
    direct path write                     6     0          0      37      0.0     .0
    LGWR wait for redo copy             380     0          0       0      0.0     .0
    log buffer space                      1     0          0      89      0.0     .0
    latch: cache buffers lru c            3     0          0       1      0.0     .0

    2. The log file sync is a result of commit --> you are committing too often, maybe even every individual record.
    Thanks for the explanation. Actually my question is WHY it is so slow (avg wait of 91 ms).
    3. Your IO subsystem hosting the online redo log files can be a limiting factor. We don't know anything about your online redo log configuration.
    Below is my redo log configuration.
        GROUP# STATUS  TYPE    MEMBER                                                       IS_
             1         ONLINE  /oradata/fs01/PERFDB1/redo_1a.log                           NO
             1         ONLINE  /oradata/fs02/PERFDB1/redo_1b.log                           NO
             2         ONLINE  /oradata/fs01/PERFDB1/redo_2a.log                           NO
             2         ONLINE  /oradata/fs02/PERFDB1/redo_2b.log                           NO
             3         ONLINE  /oradata/fs01/PERFDB1/redo_3a.log                           NO
             3         ONLINE  /oradata/fs02/PERFDB1/redo_3b.log                           NO
    6 rows selected.
    04:13:14 perf_monitor@PERFDB1> col FIRST_CHANGE# for 999999999999999999
    04:13:26 perf_monitor@PERFDB1> select *from v$log;
        GROUP#    THREAD#  SEQUENCE#      BYTES    MEMBERS ARC STATUS                 FIRST_CHANGE# FIRST_TIME
             1          1      40689  524288000          2 YES INACTIVE              13026185905545 18-MAY-13 01:00
             2          1      40690  524288000          2 YES INACTIVE              13026185931010 18-MAY-13 03:32
         3          1      40691  524288000          2 NO  CURRENT               13026185933550 18-MAY-13 04:00
    Edited by: Kunwar on May 18, 2013 2:46 PM
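
    A small illustration of point 2 above (my addition, using SQLite purely as a stand-in for Oracle): each COMMIT makes the session wait for the redo flush, so committing per row during a bulk load multiplies "log file sync" waits, while batching commits removes most of them.

    ```python
    import sqlite3

    def load_rows(conn, rows, commit_every):
        """Insert rows, committing once per `commit_every` rows."""
        cur = conn.cursor()
        for i, row in enumerate(rows, start=1):
            cur.execute("INSERT INTO t(v) VALUES (?)", (row,))
            if i % commit_every == 0:
                conn.commit()          # each commit = one redo-flush wait
        conn.commit()                  # flush any remainder

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t(v INTEGER)")
    # commit_every=1 would mean one sync per row; 1000 means a single sync
    load_rows(conn, range(1000), commit_every=1000)
    ```

    The engine differs, but the pattern is the point: against Oracle, the per-row-commit version would show up exactly as a large "log file sync" count in AWR.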

  • How to setup the environment for doing the Performance Testing?

    Hi,
    We are planning to start performance testing for the iProcurement product; the maximum number of users we are going to simulate is 1000. For this simulation, what basic setup is needed for the application tier, database tier, etc.? Can anyone suggest the procedure for setting up the environment depending on the load?


  • Some problems in measuring system performance

    Dear all
    I'm new to the Solaris world. Recently my team has been doing some performance tests on the Solaris 10 platform, but I am puzzled by how Solaris measures CPU load.
    for example, I use command prstat -L -p <pid> to determine the CPU load of each threads for a process and get the result like:
    [zhf@SunOS@whale]/export/home/zhf/PCS_Rel/conf> prstat -L -p 12685
    PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/LWPID
    12685 zhf 58M 34M sleep 52 0 0:00:06 3.8% pcs/4
    12685 zhf 58M 34M sleep 42 0 0:00:05 3.7% pcs/6
    12685 zhf 58M 34M sleep 59 0 0:00:05 3.6% pcs/5
    12685 zhf 58M 34M sleep 59 0 0:00:02 1.4% pcs/8
    12685 zhf 58M 34M sleep 59 0 0:00:00 0.5% pcs/15
    12685 zhf 58M 34M sleep 59 0 0:00:00 0.2% pcs/16
    12685 zhf 58M 34M sleep 59 0 0:00:00 0.1% pcs/7
    12685 zhf 58M 34M sleep 59 0 0:00:01 0.1% pcs/1
    12685 zhf 58M 34M sleep 59 0 0:00:00 0.0% pcs/3
    12685 zhf 58M 34M sleep 59 0 0:00:00 0.0% pcs/2
    12685 zhf 58M 34M sleep 59 0 0:00:00 0.0% pcs/14
    12685 zhf 58M 34M sleep 59 0 0:00:00 0.0% pcs/13
    12685 zhf 58M 34M sleep 59 0 0:00:00 0.0% pcs/12
    12685 zhf 58M 34M sleep 59 0 0:00:00 0.0% pcs/11
    12685 zhf 58M 34M sleep 59 0 0:00:00 0.0% pcs/10
    12685 zhf 58M 34M sleep 59 0 0:00:00 0.0% pcs/9
    and prstat -mL -p <pid> to determine the microstate of each thread for a process. the example like:
    [zhf@SunOS@whale]/export/home/zhf/PCS_Rel/conf> prstat -mL -p 12685
    PID USERNAME USR SYS TRP TFL DFL LCK SLP LAT VCX ICX SCL SIG PROCESS/LWPID
    12685 zhf 28 0.4 0.0 0.0 0.0 72 0.0 0.0 377 15 762 0 pcs/4
    12685 zhf 24 0.3 0.0 0.0 0.0 75 0.0 0.0 332 16 666 0 pcs/6
    12685 zhf 21 0.3 0.0 0.0 0.0 78 0.0 0.0 290 8 584 0 pcs/5
    12685 zhf 4.8 0.6 0.0 0.0 0.0 95 0.0 0.0 501 4 4K 0 pcs/8
    12685 zhf 2.4 0.3 0.0 0.0 0.0 97 0.0 0.1 1K 3 2K 0 pcs/15
    12685 zhf 0.9 0.3 0.0 0.0 0.0 0.0 99 0.0 503 10 1K 0 pcs/16
    12685 zhf 0.3 0.2 0.0 0.0 0.0 0.0 99 0.0 501 0 1K 0 pcs/7
    12685 zhf 0.1 0.1 0.0 0.0 0.0 0.0 100 0.1 501 2 501 0 pcs/3
    12685 zhf 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 77 0 47 0 pcs/2
    12685 zhf 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 0 0 0 0 pcs/14
    12685 zhf 0.0 0.0 0.0 0.0 0.0 100 0.0 0.0 0 0 0 0 pcs/13
    12685 zhf 0.0 0.0 0.0 0.0 0.0 100 0.0 0.0 0 0 0 0 pcs/12
    12685 zhf 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 0 0 0 0 pcs/11
    12685 zhf 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 0 0 0 0 pcs/10
    12685 zhf 0.0 0.0 0.0 0.0 0.0 100 0.0 0.0 0 0 0 0 pcs/9
    12685 zhf 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 0 0 0 0 pcs/1
    Let's look at thread 4. I can see from the -L result that thread 4 occupies 3.8% CPU time. But the -mL result shows that thread 4's user part only takes 28% of its time, and 72% is spent waiting for locks.
    My question is: is the "waiting for locks" time also counted in the CPU load? That is to say, if the 3.8% CPU load includes lock time, does that mean the real processing time is 3.8% * 28%? Or, if the 3.8% CPU load does not include lock time, is 3.8% the real cost of this thread (which is the 28% of user processing)? I hope my explanation doesn't confuse you :)
    My colleagues have had many arguments about this, but no one can be sure, so I am asking the experts here.
    Many thanks in advance
    Cheers
    Shen
    Message was edited by:
    lishen

    #1. The first display you have (without the -m) is not an instantaneous display. The CPU figures are long-term averages, so they can lie considerably.
    Take an otherwise idle machine and run a CPU-intensive program. It will take 100% of one CPU immediately, but 'top' and 'prstat' will take many seconds to reflect that.
    #2. Whether 'waiting on a lock' takes CPU time probably depends on how it's waiting. Solaris has adaptive locks, so sometimes the wait will take CPU time and other times it sleeps. (Going to sleep and waking up again has an overhead associated with it. So if the lock is going to be released "quickly", it makes sense to just spin on the CPU and do nothing for a few cycles until the lock is released. However, if it's going to take "a while" for the lock to be released, it's better to yield the CPU and let other processes have it while we wait.)
    In most circumstances (and almost certainly in your example) the processes are sleeping while waiting for the lock. However, there might be other situations where that is not true.
    Darren
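
    To make the distinction concrete (this is my reading of the prstat microstate accounting, offered as an assumption rather than the thread's conclusion): the -m columns partition the thread's elapsed time, so they should sum to roughly 100%, and only USR+SYS(+TRP) is time actually on CPU; when the lock wait sleeps, LCK time burns no CPU.

    ```python
    def on_cpu_pct(usr, sys, trp=0.0, tfl=0.0, dfl=0.0,
                   lck=0.0, slp=0.0, lat=0.0):
        """On-CPU share of a thread's interval from prstat -m microstates."""
        total = usr + sys + trp + tfl + dfl + lck + slp + lat
        # sanity check: the microstates should partition ~100% of the interval
        assert abs(total - 100.0) < 2.0, "microstates should sum to ~100%"
        return usr + sys + trp

    # thread pcs/4 from the output above: 28 USR, 0.4 SYS, 72 LCK
    print(on_cpu_pct(28.0, 0.4, lck=72.0))   # 28.4
    ```

    So under this reading, thread 4 was on CPU for about 28.4% of the sampling interval, and the 3.8% figure from the plain -L display is a separate, long-term-averaged metric, as Darren's point #1 explains.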

  • ActiveX Control recording but not playing back in a VS 2012 Web Performance Test

    I am testing an application that loads an ActiveX control for entering some login information. While recording, this control works fine; I am able to enter information and it is recorded. However, on playback in the playback window I get the error "An add-on for this website failed to run. Check the security settings in Internet Options for potential conflicts."
    Windows 7 OS, 64-bit
    IE 8, recorded on the 32-bit version
    I see no obvious security conflicts. This runs fine when navigating through manually and recording. It is only during playback where this error occurs.

    Hi IndyJason,
    Thank you for posting in MSDN forum.
    As you said, you could not play back the ActiveX control successfully in a web performance test. ActiveX controls in a web application fall into three categories, depending on how they work at the HTTP level.
    Reference:
    https://msdn.microsoft.com/en-us/library/ms404678%28v=vs.110%29.aspx?f=255&MSPPError=-2147217396
    I found that this confusion may come from the browser preview in the web test results viewer. The Web Performance Test Results Viewer does not allow scripts or ActiveX controls to run, because the web performance test engine does not run them, and for security reasons.
    For more information, please refer to the following blog (see the part "Web Tests Can Succeed Even Though It Appears They Failed"):
    http://blogs.msdn.com/edglas/archive/2010/03/24/web-test-authoring-and-debugging-techniques-for-visual-studio-2010.aspx
    Best Regards,

  • OWB Performance Tuning

    Hi everybody,
    I searched for OWB performance tuning guidelines for OWB 11gR2.
    1) The posted link http://www.oracle.com/technology/products/warehouse/pdf/OWBPerformanceWP.pdf
    is not pulling up the desired white paper. It redirects to the Oracle OWB resource page, where I did not find any links related to performance tuning. Any idea?
    2) I reviewed https://blogs.oracle.com/warehousebuilder/entry/performance_tuning_mappings
    Performance tuning mappings By David Allan
    The links in the blog, (a) "There are reports in the utility exchange (see here)" and (b) "There is a viewlet describing some of this here",
    are not working. Could you post the working links?
    Regards
    Ram Iyer

    Hi Ram
    The blog links should be fixed now, let me know if not. The blog has been rehosted a zillion times and each time stuff is broken in the migration - sound familiar?
    Cheers
    David
