Performance parameters

Hello,
Is there any document describing all the parameters that can be used to improve query or load performance?
Best regards,
F C

Performance docs on query:
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/c8c4d794-0501-0010-a693-918a17e663cc
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/064fed90-0201-0010-13ae-b16fa4dab695
Data load performance:
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3a699d90-0201-0010-bc99-d5c0e3a2c87b
hope it helps.
Regards

Similar Messages

  • Network Performance Parameters

    Hi All,
    I need your advice on the following points:
    1. To measure network performance parameters (availability, packet drop, latency, jitter and throughput) through an NMS on Cisco devices, what commands need to be configured on the devices?
    2. If this is done through IP SLA commands, does it require SNMP RW community strings to be enabled instead of SNMP RO?
    3. What activity would be required at the NMS to fetch the data from the Cisco devices so as to generate performance reports from the NMS tool? Is it an SNMP walk? What impact will this activity have in the production environment when re-discovering the elements in the NMS?
    Please revert.

    If you are using CiscoWorks, you can use IPM to do this. You only need to configure SNMP RO and RW strings on the Cisco device.
    IPM does an SNMP set and get to retrieve the IP SLA information from the routers.
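    As a rough sketch of what that device-side configuration can look like (the target IP, probe number and community strings here are hypothetical, and the syntax varies by IOS release - older versions use `rtr` instead of `ip sla`), a single UDP jitter probe covers latency, jitter and packet loss in one operation:

```
! Define an IP SLA probe measuring latency, jitter and packet loss
ip sla 10
 udp-jitter 192.0.2.10 16384 codec g711alaw
 frequency 60
ip sla schedule 10 life forever start-time now
!
! Community strings for the NMS: RO is enough for polling results,
! RW is needed if the NMS creates probes itself via SNMP set (as IPM does)
snmp-server community public RO
snmp-server community private RW
```

    Availability can be covered with a separate `icmp-echo` operation scheduled the same way.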

  • Performance parameters WLS4.03 & Heartbeat Lost

    Hello,
    We are currently having performance problems with WLS 4.03:
    1. Heartbeat Lost
    Sometimes we get messages such as "Heartbeat/PublicKey Resend detected". In previous newsgroup postings, this was said to be due to a combination of JDK 1.1.7 + WLS 4.03 + the NT Performance Pack being enabled. However, after disabling the Performance Pack, the problems persist. Does anyone have experience with this?
    Now we have the Perf. pack enabled and performance seems to be slightly better than when disabled; even with the above messages.
    2. Parameters.
    As performance tuning on the production server on the one hand, and simulating many users on a test machine on the other, are both rather difficult, we do not know whether the current parameters are really suitable for our production machine:
    weblogic.system.executeThreadCount=45
    weblogic.jdbc.connectionPool.Pool_name=\
    url=jdbc:weblogic:oracle:DB_name,\
    driver=weblogic.jdbc.oci.Driver,\
    loginDelaySecs=1,\
    initialCapacity=5,\
    maxCapacity=20,\
    capacityIncrement=2,\
    refreshMinutes=10,\
    WLS 4.03 is running on a dual-CPU NT4.0 server and started with:
    java -ms512m -mx512m -noasyncgc -Dweblogic.system.home=e:\weblogic weblogic.Server
    DB server: Oracle 8.1.5 database on a separate Unix server.
    We are planning an upgrade to WLS5.1 & JDK1.2.2 but this will not be before spring 2001: in the meantime would anyone have any remarks on the above settings of our WLS4.03?
    Thanks & best regards,
    Lieven.
    (System Analyst - Atraxis Belgium)


  • RMAN performance parameters

    Hello,
    I have to back up big databases (more than 10 TB) with RMAN on Oracle 10gR2 in a NetWorker environment. I have to do full, incremental and archivelog backups.
    Could you please tell me which parameters have an impact on the performance and speed of the backup?
    I am thinking of the number of channels, FILESPERSET, ...
    Could you tell me the relevance of each RMAN performance parameter, with an example, for backup and also for restore?
    Thank you very much

    Simple example using a catalog stored on the test1 database.
    BACKUP
    rman <<EOF
    connect target rman/rman_oracledba@test2
    connect catalog rman/rman_oracledba@test1
    run { allocate channel d1 type disk;
    backup full tag backup_1 filesperset 2
    format '/data/oracle9/BACKUP/rman_BACKUP_%d_%t.%p.%s.%c.%n.%u.bus'
    database;
    }
    EOF
    RESTORE
    export ORACLE_SID=TEST2
    sqlplus /nolog <<EOF
    connect / as sysdba
    shutdown immediate
    startup mount
    exit
    EOF
    rman <<EOF
    connect target rman/rman_oracledba@test2
    connect catalog rman/rman_oracledba@test1
    run { allocate channel d1 type disk;
    restore database;
    recover database;
    alter database open resetlogs;
    }
    EOF
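    On the original question of performance parameters: the biggest levers are usually the number of allocated channels (degree of parallelism, ideally matching the number of tape drives or disk streams) and FILESPERSET (how many datafiles are multiplexed into each backup set). A hedged sketch only - the channel count and tag are illustrative, and in a NetWorker setup the channels would be `type 'sbt_tape'` rather than disk:

```
run {
  # four channels => four backup streams running in parallel
  allocate channel d1 type disk;
  allocate channel d2 type disk;
  allocate channel d3 type disk;
  allocate channel d4 type disk;
  backup
    incremental level 0
    filesperset 4          # at most 4 datafiles per backup set
    tag 'weekly_level0'
    database
    plus archivelog;
}
```

    The same channel count governs parallelism on restore: a backup taken with many small backup sets (low FILESPERSET) can be restored with more parallelism than one huge set.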

  • Bluetooth performance parameters in OS X

    I have Bluetooth connectivity to BlackBerry 8830 and 8330 devices, and performance is poor. When I connect the BlackBerry 8830 via USB and VZAccess Manager I get up to a 2 Mbit/s downlink speed; when I connect the same BlackBerry via Bluetooth I get a 100 Kbit/s downlink.
    Does anyone know of a way to increase the performance of the link by changing parameters on the command line in OS X, or perhaps on the BlackBerry?

    You can't expect Bluetooth to work as fast as USB - it's not designed to be...
    *USB 1.1:* 12 Mbits/second
    *USB 2.0:* 480 Mbits/second
    *Bluetooth 1.2:* 1 Mbit/second
    *Bluetooth 2.0:* 3 Mbits/second
    So, even the latest *Bluetooth 2.0 +EDR* standard is still a quarter of the speed of the old *USB 1.1*
    And *USB 2.0* is up to 160 times faster than *Bluetooth 2.0*
    Things never run at the fastest possible speed, so you're unlikely even to get near to the 'advertised' speeds listed above with these technologies.
    The speeds you're getting are quite normal for Bluetooth.
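    The ratios quoted above check out against the nominal rates; a quick sanity check (these are nominal signalling rates, not achievable throughput):

```python
# Nominal signalling rates from the post, in Mbit/s.
usb_1_1 = 12
usb_2_0 = 480
bt_1_2 = 1
bt_2_0 = 3  # Bluetooth 2.0 + EDR

# Bluetooth 2.0 + EDR runs at a quarter of USB 1.1's nominal rate...
ratio_bt_usb11 = bt_2_0 / usb_1_1    # 0.25
# ...and USB 2.0 is nominally 160x faster than Bluetooth 2.0.
ratio_usb20_bt = usb_2_0 / bt_2_0    # 160.0
print(ratio_bt_usb11, ratio_usb20_bt)
```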

  • Performance parameters - page load - adf pages

    I am developing a WebCenter Portal application. Most of its pages display ADF tables whose data comes from web services.
    The business has not given any numbers for the performance of the system, and I need to put numbers in the requirements catalog so the requirements can be measured later.
    We are in the development phase now; the services are not yet ready.
    I was thinking about how I can come up with these numbers, for example:
    a 'simple' page should load in 2 sec?
    a 'medium' page should load in 4 sec?
    a 'complex' page should load in 6 sec?
    How is this determined?
    Help appreciated.
    Thanks.

    Hi,
    You can use a utility called HttpWatch (http://www.httpwatch.com/) to measure page performance. You can also see which files are cached and which are not, etc.
    Based on that you can tweak your pages to meet the baselines.
    Hope it helps,
    Zeeshan
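    A scripted baseline can complement HttpWatch once the services exist. A minimal sketch - the URL and the budget figures are hypothetical, and this measures server response time only (no browser rendering or JavaScript), so treat it as a lower bound on perceived page load:

```python
import time
import urllib.request

# Hypothetical budgets matching the 'simple/medium/complex' buckets above.
PAGE_BUDGETS_SEC = {"simple": 2.0, "medium": 4.0, "complex": 6.0}

def average_seconds(action, runs=3):
    """Average wall-clock seconds over several calls of `action`."""
    total = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        action()
        total += time.perf_counter() - start
    return total / runs

def fetch(url):
    """Fetch a URL and discard the body (server response time only)."""
    with urllib.request.urlopen(url) as resp:
        resp.read()

# Example (needs network access):
#   t = average_seconds(lambda: fetch("https://portal.example.com/somePage"))
#   print("avg %.2fs against a %.1fs budget" % (t, PAGE_BUDGETS_SEC["medium"]))
```

    Averaging over several runs smooths out caching effects - the first request is usually the slowest.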

  • Performance parameters of the meter reading result entry

    Hi Guys,
    Can anyone explain the below parameters of meter reading result entry to me?
    1. "No Entry of Tech. MRs at Installation Outside Installation"
    If we enable this option, it should not be possible to enter meter readings at technical installation.
    But it is accepting them. How? Can anyone explain?
    2. Turbo boosting also.
    Thanks in advance.
    Regards,
    Oven
    Edited by: Richard oven on Feb 18, 2009 11:41 AM

    Hi Oven,
    I hope the following information is useful:
    No Entry of Tech. MRs at Installation Outside Installation ->
    As a rule, technical meter readings at installation are entered using the appropriate transactions (Full Installation, Technical Installation).
    In addition, it is possible to enter meter readings for technical installation before the installation occurs. This is done by uploading meter readings using IDoc ISU_MR_UPLOAD or BAPI. In exceptional cases you can also enter meter readings manually using single entry. The meter reading is then included once technical installation has been executed.
    If you select this field, it is not possible to import meter readings at installation via upload or single entry.
    Turbo Boosting ->
    Activates accelerated processing.
    Transactions and background jobs of meter reading result entry create a high database load. Excessive accesses to the database tables EABL, EABLG, V_EGER_H, ETDZ, EASTS and others affect system performance; activating this option improves performance.
    Kind Regards
    Olivia

  • Performance Analysis Parameters

    Hi Experts,
    I need to audit our whole SAP landscape: DEV, QAS and PRD.
    What are the performance parameters we can check?
    What are the recommended values for increasing performance?
    Please also advise on database backups.

    Hi,
    To configure EarlyWatch reports, you can check the URL below:
    http://service.sap.com/rkt-solman
    There are documents there on how to configure EarlyWatch reports.
    Cheers....,
    Raghu

  • Some Thoughts On An OWB Performance/Testing Framework

    Hi all,
    I've been giving some thought recently to how we could build a performance tuning and testing framework around Oracle Warehouse Builder. Specifically, I'm looking at ways in which we can use some of the performance tuning techniques described in Cary Millsap/Jeff Holt's book "Optimizing Oracle Performance" to profile and performance tune mappings and process flows, and to use some of the ideas put forward in Kent Graziano's Agile Methods in Data Warehousing paper http://www.rmoug.org/td2005pres/graziano.zip and Steven Feuerstein's utPLSQL project http://utplsql.sourceforge.net/ to provide an agile/test-driven way of developing mappings, process flows and modules. The aim of this is to ensure that the mappings we put together are as efficient as possible, work individually and together as expected, and are quick to develop and test.
    At the moment, most people's experience of performance tuning OWB mappings is firstly to see if it runs set-based rather than row-based, then perhaps to extract the main SQL statement and run an explain plan on it, then check to make sure indexes etc are being used ok. This involves a lot of manual work, doesn't factor in the data available from the wait interface, doesn't store the execution plans anywhere, and doesn't really scale out to encompass entire batches of mapping (process flows).
    For some background reading on Cary Millsap/Jeff Holt's approach to profiling and performance tuning, take a look at http://www.rittman.net/archives/000961.html and http://www.rittman.net/work_stuff/extended_sql_trace_and_tkprof.htm. Basically, this approach traces the SQL that is generated by a batch file (read: mapping) and generates a file that can be later used to replay the SQL commands used, the explain plans that relate to the SQL, details on what wait events occurred during execution, and provides at the end a profile listing that tells you where the majority of your time went during the batch. It's currently the "preferred" way of tuning applications as it focuses all the tuning effort on precisely the issues that are slowing your mappings down, rather than database-wide issues that might not be relevant to your mapping.
    For some background information on agile methods, take a look at Kent Graziano's paper, this one on test-driven development http://c2.com/cgi/wiki?TestDrivenDevelopment , this one http://martinfowler.com/articles/evodb.html on agile database development, and the sourceforge project for utPLSQL http://utplsql.sourceforge.net/. What this is all about is having a development methodology that builds in quality but is flexible and responsive to changes in customer requirements. The benefit of using utPLSQL (or any unit testing framework) is that you can automatically check your altered mappings to see that they still return logically correct data, meaning that you can make changes to your data model and mappings whilst still being sure that it'll still compile and run.
    Observations On The Current State of OWB Performance Tuning & Testing
    At present, when you build OWB mappings, there is no way (within the OWB GUI) to determine how "efficient" the mapping is. Often, when building the mapping against development data, the mapping executes quickly and yet when run against the full dataset, problems then occur. The mapping is built "in isolation" from its effect on the database and there is no handy tool for determining how efficient the SQL is.
    OWB doesn't come with any methodology or testing framework, and so apart from checking that the mapping has run, and that the number of rows inserted/updated/deleted looks correct, there is nothing really to tell you whether there are any "logical" errors. Also, there is no OWB methodology for integration testing, unit testing, or any other sort of testing, and we need to put one in place. Note - OWB does come with auditing, error reporting and so on, but there's no framework for guiding the user through a regime of unit testing, integration testing, system testing and so on, which I would imagine more complete developer GUIs come with. Certainly there's no built in ability to use testing frameworks such as utPLSQL, or a part of the application that lets you record whether a mapping has been tested, and changes the test status of mappings when you make changes to ones that they are dependent on.
    OWB is effectively a code generator, and this code runs against the Oracle database just like any other SQL or PL/SQL code. There is a whole world of information and techniques out there for tuning SQL and PL/SQL, and one particular methodology that we quite like is the Cary Millsap/Jeff Holt "Extended SQL Trace" approach that uses Oracle diagnostic events to find out exactly what went on during the running of a batch of SQL commands. We've been pretty successful using this approach to tune customer applications and batch jobs, and we'd like to use this, together with the "Method R" performance profiling methodology detailed in the book "Optimising Oracle Performance", as a way of tuning our generated mapping code.
    Whilst we want to build performance and quality into our code, we also don't want to overburden developers with an unwieldy development approach, because what we'll know will happen is that after a short amount of time, it won't get used. Given that we want this framework to be used for all mappings, it's got to be easy to use, cause minimal overhead, and have results that are easy to interpret. If at all possible, we'd like to use some of the ideas from agile methodologies such as eXtreme Programming, SCRUM and so on to build in quality but minimise paperwork.
    We also recognise that there are quite a few settings that can be changed at a session and instance level, that can have an effect on the performance of a mapping. Some of these include initialisation parameters that can change the amount of memory assigned to the instance and the amount of memory subsequently assigned to caches, sort areas and the like, preferences that can be set so that indexes are preferred over table scans, and other such "tweaks" to the Oracle instance we're working with. For reference, the version of Oracle we're going to use to both run our code and store our data is Oracle 10g 10.1.0.3 Enterprise Edition, running on Sun Solaris 64-bit.
    Some initial thoughts on how this could be accomplished
    - Put in place some method for automatically / easily generating explain plans for OWB mappings (issue - this is only relevant for mappings that are set based, and what about pre- and post- mapping triggers)
    - Put in place a method for starting and stopping an event 10046 extended SQL trace for a mapping
    - Put in place a way of detecting whether the explain plan / cost / timing for a mapping changes significantly
    - Put in place a way of tracing a collection of mappings, i.e. a process flow
    - The way of enabling tracing should either be built in by default, or easily added by the OWB developer. Ideally it should be simple to switch it on or off (perhaps levels of event 10046 tracing?)
    - Perhaps store trace results in a repository? reporting? exception reporting?
    - At an instance level, come up with some stock recommendations for instance settings
    - identify the set of instance and session settings that are relevant for ETL jobs, and determine what effect changing them has on the ETL job
    - put in place a regime that records key instance indicators (STATSPACK / ASH) and allows reports to be run / exceptions to be reported
    - Incorporate any existing "performance best practices" for OWB development
    - define a lightweight regime for unit testing (as per agile methodologies) and a way of automating it (utPLSQL?) and of recording the results so we can check the status of dependent mappings easily
    - Other ideas around testing?
    Suggested Approach
    - For mapping tracing and generation of explain plans, a pre- and post-mapping trigger that turns extended SQL trace on and off, places the trace file in a predetermined spot, formats the trace file and dumps the output to repository tables.
    - For process flows, something that does the same at the start and end of the process. Issue - how might this conflict with mapping level tracing controls?
    - Within the mapping/process flow tracing repository, store the values of historic executions, have an exception report that tells you when a mapping execution time varies by a certain amount
    - get the standard set of preferred initialisation parameters for a DW, use these as the start point for the stock recommendations. Identify which ones have an effect on an ETL job.
    - identify the standard steps Oracle recommends for getting the best performance out of OWB (workstation RAM etc) - see OWB Performance Tips http://www.rittman.net/archives/001031.html and Optimizing Oracle Warehouse Builder Performance http://www.oracle.com/technology/products/warehouse/pdf/OWBPerformanceWP.pdf
    - Investigate what additional tuning options and advisers are available with 10g
    - Investigate the effect of system statistics & come up with recommendations.
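    For the pre-/post-mapping trace hooks sketched above, the enabling commands themselves are simple. A hedged sketch (the tracefile identifier is illustrative; level 8 captures wait events without bind values, level 12 adds binds):

```sql
-- Pre-mapping hook: tag the trace file and switch on extended SQL trace.
ALTER SESSION SET tracefile_identifier = 'owb_map_load_sales';
ALTER SESSION SET events '10046 trace name context forever, level 8';

-- ... the generated mapping code executes here ...

-- Post-mapping hook: switch tracing off; the trace file in user_dump_dest
-- can then be profiled with TKPROF or a Method R style profiler.
ALTER SESSION SET events '10046 trace name context off';
```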
    Further reading / resources:
    - "Diagnosing Performance Problems Using Extended Trace", Cary Millsap
    http://otn.oracle.com/oramag/oracle/04-jan/o14tech_perf.html
    - "Performance Tuning With STATSPACK" Connie Dialeris and Graham Wood
    http://www.oracle.com/oramag/oracle/00-sep/index.html?o50tun.html
    - "Performance Tuning with Statspack, Part II" Connie Dialeris and Graham Wood
    http://otn.oracle.com/deploy/performance/pdf/statspack_tuning_otn_new.pdf
    - "Analyzing a Statspack Report: A Guide to the Detail Pages" Connie Dialeris and Graham Wood
    http://www.oracle.com/oramag/oracle/00-nov/index.html?o60tun_ol.html
    - "Why Isn't Oracle Using My Index?!" Jonathan Lewis
    http://www.dbazine.com/jlewis12.shtml
    - "Performance Tuning Enhancements in Oracle Database 10g" Oracle-Base.com
    http://www.oracle-base.com/articles/10g/PerformanceTuningEnhancements10g.php
    - Introduction to Method R and Hotsos Profiler (Cary Millsap, free reg. required)
    http://www.hotsos.com/downloads/registered/00000029.pdf
    - Exploring the Oracle Database 10g Wait Interface (Robin Schumacher)
    http://otn.oracle.com/pub/articles/schumacher_10gwait.html
    - Article referencing an OWB forum posting
    http://www.rittman.net/archives/001031.html
    - How do I inspect error logs in Warehouse Builder? - OWB Exchange tip
    http://www.oracle.com/technology/products/warehouse/pdf/Cases/case10.pdf
    - What is the fastest way to load data from files? - OWB exchange tip
    http://www.oracle.com/technology/products/warehouse/pdf/Cases/case1.pdf
    - Optimizing Oracle Warehouse Builder Performance - Oracle White Paper
    http://www.oracle.com/technology/products/warehouse/pdf/OWBPerformanceWP.pdf
    - OWB Advanced ETL topics - including sections on operating modes, partition exchange loading
    http://www.oracle.com/technology/products/warehouse/selfserv_edu/advanced_ETL.html
    - Niall Litchfield's Simple Profiler (a creative commons-licensed trace file profiler, based on Oracle Trace Analyzer, that displays the response time profile through HTMLDB. Perhaps could be used as the basis for the repository/reporting part of the project)
    http://www.niall.litchfield.dial.pipex.com/SimpleProfiler/SimpleProfiler.html
    - Welcome to the utPLSQL Project - a PL/SQL unit testing framework by Steven Feuerstein. Could be useful for automating the process of unit testing mappings.
    http://utplsql.sourceforge.net/
    Relevant postings from the OTN OWB Forum
    - Bulk Insert - Configuration Settings in OWB
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=291269&tstart=30&trange=15
    - Default Performance Parameters
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=213265&message=588419&q=706572666f726d616e6365#588419
    - Performance Improvements
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=270350&message=820365&q=706572666f726d616e6365#820365
    - Map Operator performance
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=238184&message=681817&q=706572666f726d616e6365#681817
    - Performance of mapping with FILTER
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=273221&message=830732&q=706572666f726d616e6365#830732
    - Poor mapping performance
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=275059&message=838812&q=706572666f726d616e6365#838812
    - Optimizing Mapping Performance With OWB
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=269552&message=815295&q=706572666f726d616e6365#815295
    - Performance of the OWB-Repository
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=66271&message=66271&q=706572666f726d616e6365#66271
    - One large JOIN or many small ones?
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=202784&message=553503&q=706572666f726d616e6365#553503
    - NATIVE PL SQL with OWB9i
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=270273&message=818390&q=706572666f726d616e6365#818390
    Next Steps
    Although this is something that I'll be progressing with anyway, I'd appreciate any comment from existing OWB users as to how they currently perform performance tuning and testing. Whilst these are perhaps two distinct subject areas, they can be thought of as the core of an "OWB Best Practices" framework and I'd be prepared to write the results up as a freely downloadable whitepaper. With this in mind, does anyone have an existing best practices for tuning or testing, have they tried using SQL trace and TKPROF to profile mappings and process flows, or have you used a unit testing framework such as utPLSQL to automatically test the set of mappings that make up your project?
    Any feedback, add it to this forum posting or send directly through to me at [email protected]. I'll report back on a proposed approach in due course.

    Hi Mark,
    interesting post, but I think you may be focusing on the trees, and losing sight of the forest.
    Coincidentally, I've been giving quite a lot of thought lately to some aspects of your post. They relate to some new stuff I'm doing. Maybe I'll be able to answer in more detail later, but I do have a few preliminary thoughts.
    1. 'How efficient is the generated code' is a perennial topic. There are still some people who believe that a code generator like OWB cannot be in the same league as hand-crafted SQL. I answered that question quite definitely: "We carefully timed execution of full-size runs of both the original code and the OWB versions. Take it from me, the code that OWB generates is every bit as fast as the very best hand-crafted and fully tuned code that an expert programmer can produce."
    The link is http://www.donnapkelly.pwp.blueyonder.co.uk/generated_code.htm
    That said, it still behooves the developer to have a solid understanding of what the generated code will actually do, such as how it will take advantage of indexes, and so on. If not, the developer can create such monstrosities as lookups into an un-indexed field (I've seen that).
    2. The real issue is not how fast any particular generated mapping runs, but whether or not the system as a whole is fit for purpose. Most often, that means: does it fit within its batch update window? My technique is to dump the process flow into Microsoft Project, and then to add the timings for each process. That creates a Critical Path, and then I can visually inspect it for any bottleneck processes. I usually find that there are not more than one or two dogs. I'll concentrate on those, fix them, and re-do the flow timings. I would add this: the dogs I have seen, I have invariably replaced. They were just garbage. They did not need tuning at all - just scrapping.
    Gee, but this whole thing is minimum effort and real fast! I generally figure that it takes maybe a day or two (max) to soup up system performance to the point where it whizzes.
    Fact is, I don't really care whether there are a lot of sub-optimal processes. All I really care about is performance of the system as a whole. This technique seems to work for me. 'Course, it depends on architecting the thing properly in the first place. Otherwise, no amount of tuning is going to help worth a darn.
    Conversely (re. my note about replacing dogs) I do not think I have ever tuned a piece of OWB-generated code. Never found a need to. Not once. Not ever.
    That's not to say I do not recognise the value of playing with deployment configuration parameters. Obviously, I set auditing=none, and operating mode=set based, and sometimes, I play with a couple of different target environments to fool around with partitioning, for example. Nonetheless, if it is not a switch or a knob inside OWB, I do not touch it. This is in line with my diktat that you shall use no other tool than OWB to develop data warehouses. (And that includes all documentation!). (OK, I'll accept MS Project)
    Finally, you raise the concept of a 'testing framework'. This is a major part of what I am working on at the moment. This is a tough one. Clearly, the developer must unit test each mapping in a design-model-deploy-execute cycle, paying attention to both functionality and performance. When the developer is satisfied, that mapping will be marked as 'done' in the project workbook. Mappings will form part of a stream, executed as a process flow. Each process flow will usually terminate in a dimension, a fact, or an aggregate. Each process flow will be tested as an integrated whole. There will be test strategies devised, and test cases constructed. There will finally be system tests, to verify the validity of the system as a production-grade whole. (stuff like recovery/restart, late-arriving data, and so on)
    For me, I use EDM (TM). That's the methodology I created (and trademarked) twenty years ago: Evolutionary Development Methodology (TM). This is a spiral methodology based around prototyping cycles within Stage cycles within Release cycles. For OWB, a Stage would consist (say) of a Dimensional update. What I am trying to do now is to graft this within a traditional waterfall methodology, and I am having the same difficulties I had when I tried to do it then.
    All suggestions on how to do that grafting gratefully received!
    To sum up, I'm kinda at a loss as to why you want to go deep into OWB-generated code performance stuff. Jeepers, architect the thing right, and the code runs fast enough for anyone. I've worked on ultra-large OWB systems, including validating the largest data warehouse in the UK. I've never found any value in 'tuning' the code. What I'd like you to comment on is this: what will it buy you?
    Cheers,
    Donna
    http://www.donnapkelly.pwp.blueyonder.co.uk

  • Calculating roundtime for iviews - Portal performance monitoring

    Dear Gurus,
    I have a bunch of iViews that need to be monitored. I am running these BI-integrated iViews and want to calculate the runtime for them. Currently the way I am doing it is: I enter the variable values for each of the iViews and run them, I switch on a stopwatch, and then wait until the results come up in the portal. The average runtime right now is around 8 minutes. This is not only because of the portal but also because of the time spent in the back end.
    Are there any performance parameters I can activate, or a log I can look into, to see the runtime for the iViews without having to manually track the time?
    Any help is appreciated, and generous points for useful answers.

    The method System.currentTimeMillis() returns the current time in milliseconds as a long.
    Define a long attribute in the component controller context and map it to the first and last views in the iView. Then, in the wdDoInit of the first view of the iView, set its value to System.currentTimeMillis(). At the end of the last view, use the formula
    long l = System.currentTimeMillis() - wdContext.currentContextElement().get<YourLongAttribute>();
    This will give the approximate time taken by the entire iView to execute.
    Divide it by 1000 to get the time in seconds, and then divide by 60 to convert it into minutes. Print it using your message manager.
    Below is the small snippet of code I used.
    // wdDoInit of the first view in the iView
    public void wdDoInit()
    {
      //@@begin wdDoInit()
      // Record the start time in the context attribute.
      wdContext.currentContextElement().setTime(System.currentTimeMillis());
      //@@end
    }
    // wdDoExit of the last view in the iView
    public void wdDoExit()
    {
      //@@begin wdDoExit()
      // Elapsed milliseconds since wdDoInit, converted to seconds for display.
      long l = System.currentTimeMillis() - wdContext.currentContextElement().getTime();
      float f = (float) l / 1000f;
      wdComponentAPI.getMessageManager().reportSuccess("Time taken by the iView: " + f + " seconds");
      //@@end
    }
    Note: wdDoExit() was the last method to be executed in my case; if you use something else, substitute it there.

  • FTP Adapter, B2b and SOA performance Test

    Hi All,
    We have a requirement to process large XML files ranging from 1 MB to 200 MB. Our flow is: the FTP adapter picks up the XML's repeatable nodes, BPEL transforms and loops through each XML node, and calls the B2B adapter in each loop. We are doing advanced EDI batching to aggregate all the nodes in one XML into one EDI. Files up to 7 MB are working fine with the FTP adapter fileraise property = 1 and a polling frequency of 300 s. Files of 14 MB are failing with a JTA transaction timeout (timeout set to 500 s) with the server running in PROD mode. We are using SOA Suite 11.1.1.7 and HIPAA 834 transactions. Is there a payload size limitation for the FTP adapter or SOA Suite? Do we need to follow a different approach to achieve our functionality? Do we need to set any performance parameters? Please share your thoughts.
    Thanks In Advance!!!

    Please do not post duplicates - FTP Adapter, B2b and SOA performance Test

  • Performance configuration PRD system SCM 5.1 Solaris 10

    Hi!
    I am trying to configure some performance parameters on a PRD system that is running on a Solaris 10 machine. The system has 40 CPUs and 40 GB RAM. SAP system = SAP SCM 5.10.
    I have given the SAP system 15 GB, Oracle 20 GB and the OS 5 GB.
    I am not sure which SAP instance parameters I should change.
    The system currently has mostly default instance parameter values because it is newly installed; the only parameters that I have changed are:
    em/initial_size_mb = 15GB
    PHYS_MEMSIZE = 4096
    Wp Dialog = 50
    Wp btc = 15
    wp enq = 1
    wp vb = 6
    wp vb2 = 2
    wp sp = 4
    I have read about Zero Administration Memory Management, but I have not found anything for UNIX/Solaris, and I cannot find the parameter
    "es/implementation". Is it a default value in the SAP kernel, and can you use Zero Administration Memory Management on UNIX?
    OK, to my question: which instance parameters should I change? Or should I keep the default values and change them afterwards?
    Thx for the Help
    S.T

    Yes, it seems that you are right about the CPU specification.
    This is the output from /usr/sbin/psrinfo -v:
    Status of virtual processor 8 as of: 07/17/2009 11:27:58
      on-line since 05/19/2009 18:01:10.
      The sparcv9 processor operates at 2400 MHz,
            and has a sparcv9 floating point processor.
    Status of virtual processor 9 as of: 07/17/2009 11:27:58
      on-line since 05/19/2009 18:01:12.
      The sparcv9 processor operates at 2400 MHz,
            and has a sparcv9 floating point processor.
    [... identical four-line blocks repeat for virtual processors 10-15, 56-63, 88-95, 112-119 and 144-151: 40 sparcv9 virtual processors in total, all on-line at 2400 MHz with a sparcv9 floating point processor ...]
    Thanks again for the help
    Total = 40 CPU
    Edited by: Stefan Törnqvist on Jul 17, 2009 11:46 AM
    Edited by: Stefan Törnqvist on Jul 17, 2009 11:48 AM
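    To cross-check a CPU total like the one above without counting the blocks by hand, the pasted `psrinfo -v` output can be summarized with a short script (a sketch; it simply counts the "Status of virtual processor" header lines, which appear once per virtual CPU):

    ```python
    def count_processors(psrinfo_output: str) -> int:
        """Count virtual processors in `psrinfo -v` output."""
        return sum(
            1
            for line in psrinfo_output.splitlines()
            if line.strip().startswith("Status of virtual processor")
        )

    # A two-processor excerpt of the output pasted above.
    sample = """Status of virtual processor 8 as of: 07/17/2009 11:27:58
      on-line since 05/19/2009 18:01:10.
    Status of virtual processor 9 as of: 07/17/2009 11:27:58
      on-line since 05/19/2009 18:01:12.
    """
    print(count_processors(sample))  # prints 2
    ```

    On the box itself, `psrinfo` without `-v` prints one line per virtual processor, which can be counted directly.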

  • TemplateConfigTool vs. Recommended Parameters

    Hello,
    We have implemented a NW '04 SPS15 landscape and used the Template Configuration Tool to apply the appropriate settings to the J2EE engine.  Just before the go-live checks I noticed that the Dispatcher and Server, Manager ==> ThreadManager settings are not what is recommended in the presentation "Fine Tune the Performance of your SAP Portal".
    For example,
    The presentation shows that the ThreadManager should be:
    InitialThreadCount = 100
    MaxThreadCount = 200
    MinThreadCount = 100
    Although the base installation parameters were slightly different from the presentation's listed "defaults", I now question the changes that were implemented when we ran the Template Configuration Tool with the Portal.zip configuration file...
    I have adjusted the ThreadManager settings to the recommended values in the presentation, but I am not sure if:
    a) I should keep those settings, or are they specifically set by the Tool?
    b) I should review the old tuning documentation and check what is recommended vs. current
    I cannot find other documentation on the tuning parameter requirements for J2EE running a portal.  Maybe I am missing something?  I was depending on the Tool for the recommended parameters.
    Any suggestions or ideas are welcomed.
    Judson

    Hi Paul,
    I will issue the points, but I want you to know that there are many differences between the configuration applied by the Template Configuration Tool and what SAP recommends in a go-live check. I am not sure if it's the "check" process/software or the Portal.zip file for the Template Config Tool, but they are not in sync.
    Mind you, they are not that far off. But again, this is a tool that one would eventually "trust", as the ZIP files are expected to be up to date with the best performance parameters... (I guess).
    But thanks very much anyway,
    An example: the ThreadManager settings (min/max/change) are set to 40 by the Tool, 100/200/100 (respectively) in the tuning presentation, and finally 20/20/20 by the go-live check... (confusing, but implemented)
    Kind regards,
    Judson
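    The min/max semantics behind those ThreadManager values are the same as in any bounded thread pool. As a neutral illustration only (plain java.util.concurrent, not the SAP J2EE engine's ThreadManager), the 100/200 pair from the presentation maps to core and maximum pool sizes:

    ```java
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class ThreadPoolDemo {
        public static void main(String[] args) {
            // Core threads ~ InitialThreadCount/MinThreadCount kept alive at all
            // times; maximum ~ MaxThreadCount, the hard upper bound under load.
            ThreadPoolExecutor pool = new ThreadPoolExecutor(
                    100, 200, 60L, TimeUnit.SECONDS,
                    new LinkedBlockingQueue<>(1000));
            System.out.println(pool.getCorePoolSize());    // prints 100
            System.out.println(pool.getMaximumPoolSize()); // prints 200
            pool.shutdown();
        }
    }
    ```

    Note that in this JDK pool, threads beyond the core count are only created once the work queue is full, which is one reason min/max tuning numbers from different sources can legitimately differ.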

  • Performance issues (Oracle 9i Solaris 9)

    Hi Guys,
    How do I tell if my database is performing at its optimum level? We seem to be having performance issues with one of our applications. They are saying it's the database, the network, etc.
    Thank you.

    Hi,
    In order to determine whether or not your database is having performance issues, you will need to install and execute Statspack. Statspack is a utility which provides information about the performance parameters of an Oracle database.
    If you are already using a Statspack report for performance analysis, post a snapshot of the report.
    Regards,
    Prosenjit Mukherjee.
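    For reference, the basic Statspack workflow uses standard scripts shipped under $ORACLE_HOME/rdbms/admin (a sketch; run the install as SYSDBA and the snapshots as the PERFSTAT user it creates):

    ```sql
    -- Install Statspack (creates the PERFSTAT schema) - run once as SYSDBA:
    -- @?/rdbms/admin/spcreate.sql

    -- Take two snapshots around the workload you want to measure (as PERFSTAT):
    EXEC statspack.snap;
    -- ... run the workload ...
    EXEC statspack.snap;

    -- Generate a report between the two snapshot IDs (prompts interactively):
    -- @?/rdbms/admin/spreport.sql
    ```

    The report compares the two snapshots, so it only reflects activity between them; take them while the problem workload is actually running.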

  • APS_PARAMS parameters missing in Demantra 7.3.1

    The performance parameters 'worksheet.full.load' and 'client.worksheet.calcSummaryExpressions' are not available in the APS_PARAMS table in Demantra version 7.3.1. The release notes don't mention any changes to these two parameters in 7.3.1. Have these parameters been deprecated?
    Thanks

    Loading some customer sample data into a new model (not the preseeded one, as the customer has neither EBS nor JDE).
    After the Build Model run I got an error in BM: Cannot insert NULL into LOC_LEVELS.ENGINE_PROFILES_ID.
    It never occurred to me in previous versions.
    What can I do? What is the difference between this version and the previous one?

    From the error message, it looks like you are trying to insert NULL values into one of the columns/fields which is mandatory and expects some value to be inserted.
    Thanks,
    Hussein
