Performance parameters WLS 4.03 & Heartbeat Lost

Hello,
We are currently having performance problems with WLS 4.03:
1. Heartbeat Lost
Sometimes we get messages such as "Heartbeat/PublicKey Resend detected". In previous newsgroup postings, this was said to be due to the combination of JDK 1.1.7 + WLS 4.03 + the NT Performance Pack being enabled. However, the problem persists even after disabling the Performance Pack. Does anyone have experience with this?
We now have the Performance Pack enabled again, and performance seems slightly better than with it disabled, even with the above messages.
2. Parameters.
As performance tuning on the production server on the one hand, and simulating many users on a test machine on the other, are both rather difficult, we do not know whether the current parameters are really suitable for our production machine:
weblogic.system.executeThreadCount=45
weblogic.jdbc.connectionPool.Pool_name=\
url=jdbc:weblogic:oracle:DB_name,\
driver=weblogic.jdbc.oci.Driver,\
loginDelaySecs=1,\
initialCapacity=5,\
maxCapacity=20,\
capacityIncrement=2,\
refreshMinutes=10
WLS 4.03 is running on a dual-CPU NT 4.0 server and is started with:
java -ms512m -mx512m -noasyncgc -Dweblogic.system.home=e:\weblogic weblogic.Server
DB server: an Oracle 8.1.5 database on a separate Unix server.
We are planning an upgrade to WLS 5.1 & JDK 1.2.2, but this will not happen before spring 2001. In the meantime, does anyone have any remarks on the above WLS 4.03 settings?
Thanks & best regards,
Lieven.
(System Analyst - Atraxis Belgium)


Similar Messages

  • Network Performance Parameters

    Hi All,
    I need your advice on the following points:
    1. To measure network performance parameters (availability, packet drop, latency, jitter, and throughput) through an NMS on Cisco devices, what commands need to be configured on the devices?
    2. If it is done through IP SLA commands, does it require the SNMP RW community to be enabled instead of just SNMP RO?
    3. What activity would be required at the NMS to fetch the data from the Cisco devices so as to generate performance reports from the NMS tool? Is it an SNMP walk? What impact will this activity have in the production environment when re-discovering the elements in the NMS?
    Please revert.

    If you are using CiscoWorks, you can use IPM to do this. You only need to configure the SNMP RO and RW strings on the Cisco device.
    IPM does an SNMP set and get to collect the IP SLA information from the routers.
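    As a rough device-side illustration (my own sketch, not from the reply above): the prerequisites amount to SNMP community strings plus an IP SLA probe. The community names and target address below are placeholders, and the exact syntax varies by IOS version (older releases use "rtr" instead of "ip sla"):
    ! hypothetical IOS sketch: SNMP strings plus one ICMP-echo probe
    snmp-server community myROstring RO
    snmp-server community myRWstring RW
    ip sla 10
     icmp-echo 192.0.2.1 source-interface GigabitEthernet0/1
     frequency 60
    ip sla schedule 10 life forever start-time now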

  • Performance Vows , Requests get lost ..

    I am performance tuning my JSP/Struts web application deployed on JBoss, but I have not had success yet.
    During performance/load testing of my web application with Apache JMeter:
    It is OK up to 1400 requests per minute.
    But if I increase the load, I get the following
    Issue: of 1800 requests sent to the application, only 1283 were processed; the rest of the requests were lost. I reviewed the application logs: they show that only 1283 requests were received, and those were processed successfully without any database errors or Java heap space errors.
    The hardware configuration (a Celeron 3.0 GHz with 1 GB RAM) can be an issue, but the requests should not be lost.
    Is there any way to tune for this load?
    Does it require programmatically pooling the requests first in a buffer and then processing them? That would dampen the response time, as this is handled by the container at present.
    The required target is a whopping 15000 requests per minute.
    Jasdeep

    Thanks for the response, BalusC.
    But how do I achieve this target on my development machine?
    It might be possible by buffering the requests first, so that requests are not lost, and then processing them.
    Is there any way to ensure my requests are never lost: if my server/application is busy, the requests are buffered somewhere and resent when the server/application becomes available?
    Jasdeep
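    One common first step for this symptom (a sketch of my own, not advice from this thread) is to raise the servlet container's worker thread pool and accept backlog, so bursts queue at the connector instead of being refused. Assuming JBoss's embedded Tomcat, the attributes below go on the HTTP Connector in server.xml; the values are purely illustrative and need load-testing on your own hardware:
    <!-- hypothetical server.xml tuning: more worker threads, deeper accept backlog -->
    <Connector port="8080"
               maxThreads="250"
               acceptCount="500"
               connectionTimeout="20000" />
    Requests beyond maxThreads wait in the acceptCount backlog; only when that backlog is also full are new connections refused, which matches the "lost requests" symptom seen under load.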

  • RMAN performance parameters

    Hello,
    I have to back up big databases (more than 10 TB) with RMAN in Oracle 10gR2 in a Networker environment. I have to do full, incremental, and archivelog backups.
    Could you tell me, please, which parameters have an impact on the performance and speed of the backup?
    I am thinking of the number of channels, FILESPERSET, ...
    Could you tell me the relevance of each RMAN performance parameter, with an example, for backup and also for restore?
    Thank you very much

    Simple example using catalog stored on test1 database
    BACKUP
    rman <<EOF
    connect target rman/rman_oracledba@test2
    connect catalog rman/rman_oracledba@test1
    run { allocate channel d1 type disk;
    backup full tag backup_1 filesperset 2
    format '/data/oracle9/BACKUP/rman_BACKUP_%d_%t.%p.%s.%c.%n.%u.bus'
    database;
    }
    EOF
    RESTORE
    export ORACLE_SID=TEST2
    sqlplus /nolog <<EOF
    connect / as sysdba
    shutdown immediate
    startup mount
    exit
    EOF
    rman <<EOF
    connect target rman/rman_oracledba@test2
    connect catalog rman/rman_oracledba@test1
    run { allocate channel d1 type disk;
    restore database;
    recover database;
    alter database open resetlogs;
    }
    EOF
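    To tie this back to the performance question (a sketch with illustrative values, not a tested recommendation): the biggest levers are usually the number of allocated channels (the degree of parallelism), FILESPERSET (how many files are multiplexed into each backup set), and throttles such as MAXPIECESIZE or RATE on the channel. A parallel incremental backup to tape might look like:
    run {
    allocate channel t1 type 'sbt_tape';
    allocate channel t2 type 'sbt_tape';
    backup incremental level 1 filesperset 4 database;
    release channel t1;
    release channel t2;
    }
    More channels help only while the tape drives, disks, and network can keep up, so increase the channel count in step with the number of available tape devices.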

  • Bluetooth performance parameters in OS X

    I have Bluetooth connectivity to a BlackBerry 8830 and 8330, and performance is poor. When I connect the BlackBerry 8830 via USB and VZAccess Manager, I get up to 2 Mb/s downlink speed. When I connect the same BlackBerry via Bluetooth, I get a 100 Kb/s downlink.
    Does anyone know of a way to increase the performance of the link by changing parameters on the command line in OS X, or perhaps on the BlackBerry?

    You can't expect Bluetooth to work as fast as USB - it's not designed to.
    *USB 1.1:* 12 Mbits/second
    *USB 2.0:* 480 Mbits/second
    *Bluetooth 1.2:* 1 Mbit/second
    *Bluetooth 2.0:* 3 Mbits/second
    So, even the latest *Bluetooth 2.0 +EDR* standard is still a quarter of the speed of the old *USB 1.1*
    And *USB 2.0* is up to 160 times faster than *Bluetooth 2.0*
    Things never run at the fastest possible speed, so you're unlikely even to get near to the 'advertised' speeds listed above with these technologies.
    The speeds you're getting are quite normal for Bluetooth.

  • Performance parameters

    Hello,
    Is there any document describing all the parameters that can be used to improve query or load performance?
    Best regards,
    F C

    Performance docs on queries:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/c8c4d794-0501-0010-a693-918a17e663cc
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/064fed90-0201-0010-13ae-b16fa4dab695
    Data load performance:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3a699d90-0201-0010-bc99-d5c0e3a2c87b
    hope it helps.
    Regards

  • Performance parameters - page load - adf pages

    I am developing a WebCenter Portal application. Most of its pages display ADF tables whose data comes from web services.
    The business has not given any numbers for system performance, and I need to put numbers in the requirements catalog so the requirements can be measured later.
    We are in the development phase now; the services are not yet ready.
    I was thinking about how I can come up with these numbers, for example:
    a 'simple' page should load in 2 sec?
    a 'medium' page should load in 4 sec?
    a 'complex' page should load in 6 sec?
    How is this determined?
    Help appreciated.
    Thanks.

    Hi,
    You can use a utility called HttpWatch (http://www.httpwatch.com/) to measure page performance. You can also see which files are cached and which are not, etc.
    Based on that, you can tweak your pages to meet the baselines.
    Hope it helps,
    Zeeshan
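    If a full tool is overkill for a quick baseline, the initial server response can also be timed with a few lines of Java (a sketch of my own; the URL is a placeholder). Note it measures only the first HTML response, not images, scripts, or client-side rendering, so treat it as a lower bound on page load time:
    // hypothetical fetch timer: measures the initial response only
    import java.io.InputStream;
    import java.net.URL;
    public class PageTimer {
        public static void main(String[] args) throws Exception {
            long start = System.currentTimeMillis();
            try (InputStream in = new URL("http://host:7101/portal/somePage").openStream()) {
                byte[] buf = new byte[8192];
                while (in.read(buf) != -1) { /* drain the response */ }
            }
            System.out.println("Fetched in " + (System.currentTimeMillis() - start) + " ms");
        }
    }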

  • Agent heartbeats lost when engine restarted

    When you restart the engine, all the agents on Linux servers lose their heartbeat connections. Is there an easier way to restart the heartbeat connections than running nsmagent-config on each agent server?

    blowder,
    It appears that in the past few days you have not received a response to your
    posting. That concerns us, and has triggered this automated reply.
    Has your problem been resolved? If not, you might try one of the following options:
    - Visit http://support.novell.com and search the knowledgebase and/or check all
    the other self support options and support programs available.
    - You could also try posting your message again. Make sure it is posted in the
    correct newsgroup. (http://forums.novell.com)
    Be sure to read the forum FAQ about what to expect in the way of responses:
    http://forums.novell.com/faq.php
    If this is a reply to a duplicate posting, please ignore and accept our apologies
    and rest assured we will issue a stern reprimand to our posting bot.
    Good luck!
    Your Novell Product Support Forums Team
    http://forums.novell.com/

  • Performance parameters of the meter reading result entry

    Hi Guys,
    Can anyone explain the below parameters in meter reading result entry?
    1. "No Entry of Tech. MRs at Installation Outside Installation"
    If we enable this option, it should not allow entering meter readings for a technical installation. But it is accepting them. How? Can anyone explain?
    2. Turbo booster, also.
    Thanks in advance.
    Regards,
    Oven
    Edited by: Richard oven on Feb 18, 2009 11:41 AM

    Hi Oven,
    I hope the following information is useful:
    No Entry of Tech. MRs at Installation Outside Installation ->
    As a rule, technical meter readings at installation are entered using the appropriate transactions (Full Installation, Technical Installation).
    In addition, it is possible to enter meter readings for technical installation before the installation occurs. This is done by uploading meter readings using IDoc ISU_MR_UPLOAD or BAPI. In exceptional cases you can also enter meter readings manually using single entry. The meter reading is then included once technical installation has been executed.
    If you select this field, it is not possible to import meter readings at installation via upload or single entry.
    Turbo Boosting ->
    Activates accelerated processing.
    Transactions and background jobs of meter reading result entry create a high database load. Excessive accesses to the database tables EABL, EABLG, V_EGER_H, ETDZ, EASTS and others affect system performance; activating this option improves that performance.
    Kind Regards
    Olivia

  • After performing software update, I lost all my data

    I upgraded to the most recent software; when it finished, I got a message to connect to iTunes. Everything was gone. My only option was to restore to factory settings. Even though I had the iPad set up to back up to iCloud, the only backup available was one from my computer from April. This really stinks. Why don't they warn you how dangerous it is to update your software?

    Setup as new and restore from iCloud backup.
    Restore from iCloud Backup
    1. Settings>General>Reset>Erase all content and settings
    2. Tap Erase
    3. You'll see Apple logo and progress bar
    4. Restore Completed. Your iPad was restored successfully. There are just a few more steps to follow and then you are done!
    5. Slide to set up
    6. Set language
    7. Set country
    8. Choose Wi-Fi network; enter Wi-Fi password
    9. (a) Use Location Service (b) Don't Use Location Service
    10. You'll be given 3 options (a) Set Up as New iPad (b) Restore from iCloud Backup (c) Restore from iTunes Backup
    11. Select Restore from iCloud Backup
    12. You'll be required to enter Apple ID and Password
    13. Agree to Terms and Conditions
    14. Select Backup file in iCloud
    15. You'll see progress of restore
    16. (a) Create a Passcode (b) Don't Add Passcode
    17. Welcome to iPad
    18. Get Started
    19. Restoring Apps and Media. Your settings have been restored. Connect to power to save battery while apps and media are downloading
    20. Restore Incomplete. Some items could not be downloaded from the Store. If they are on your computer, you can restore them by syncing with iTunes.

  • Performance Analysis Parameters

    Hi Experts,
    I need to audit our whole SAP system: DEV, QAS and PRD.
    What performance parameters can we check?
    What are the recommended values for increasing performance?
    Please also advise on database backup.

    Hi,
    To configure early watch reports, you can check the below url.
    http://service.sap.com/rkt-solman.
    There are documents on how to configure early watch reports.
    Cheers....,
    Raghu

  • Some Thoughts On An OWB Performance/Testing Framework

    Hi all,
    I've been giving some thought recently to how we could build a performance tuning and testing framework around Oracle Warehouse Builder. Specifically, I'm looking at ways in which we can use some of the performance tuning techniques described in Cary Millsap/Jeff Holt's book "Optimizing Oracle Performance" to profile and performance tune mappings and process flows, and to use some of the ideas put forward in Kent Graziano's Agile Methods in Data Warehousing paper http://www.rmoug.org/td2005pres/graziano.zip and Steven Feuerstein's utPLSQL project http://utplsql.sourceforge.net/ to provide an agile/test-driven way of developing mappings, process flows and modules. The aim of this is to ensure that the mappings we put together are as efficient as possible, work individually and together as expected, and are quick to develop and test.
    At the moment, most people's experience of performance tuning OWB mappings is firstly to see if a mapping runs set-based rather than row-based, then perhaps to extract the main SQL statement and run an explain plan on it, then check to make sure indexes etc. are being used correctly. This involves a lot of manual work, doesn't factor in the data available from the wait interface, doesn't store the execution plans anywhere, and doesn't really scale out to encompass entire batches of mappings (process flows).
    For some background reading on Cary Millsap/Jeff Holt's approach to profiling and performance tuning, take a look at http://www.rittman.net/archives/000961.html and http://www.rittman.net/work_stuff/extended_sql_trace_and_tkprof.htm. Basically, this approach traces the SQL that is generated by a batch file (read: mapping) and generates a file that can be later used to replay the SQL commands used, the explain plans that relate to the SQL, details on what wait events occurred during execution, and provides at the end a profile listing that tells you where the majority of your time went during the batch. It's currently the "preferred" way of tuning applications as it focuses all the tuning effort on precisely the issues that are slowing your mappings down, rather than database-wide issues that might not be relevant to your mapping.
    For some background information on agile methods, take a look at Kent Graziano's paper, this one on test-driven development http://c2.com/cgi/wiki?TestDrivenDevelopment , this one http://martinfowler.com/articles/evodb.html on agile database development, and the sourceforge project for utPLSQL http://utplsql.sourceforge.net/. What this is all about is having a development methodology that builds in quality but is flexible and responsive to changes in customer requirements. The benefit of using utPLSQL (or any unit testing framework) is that you can automatically check your altered mappings to see that they still return logically correct data, meaning that you can make changes to your data model and mappings whilst still being sure that it'll still compile and run.
    Observations On The Current State of OWB Performance Tuning & Testing
    At present, when you build OWB mappings, there is no way (within the OWB GUI) to determine how "efficient" the mapping is. Often, when building the mapping against development data, the mapping executes quickly and yet when run against the full dataset, problems then occur. The mapping is built "in isolation" from its effect on the database and there is no handy tool for determining how efficient the SQL is.
    OWB doesn't come with any methodology or testing framework, and so apart from checking that the mapping has run, and that the number of rows inserted/updated/deleted looks correct, there is nothing really to tell you whether there are any "logical" errors. Also, there is no OWB methodology for integration testing, unit testing, or any other sort of testing, and we need to put one in place. Note - OWB does come with auditing, error reporting and so on, but there's no framework for guiding the user through a regime of unit testing, integration testing, system testing and so on, which I would imagine more complete developer GUIs come with. Certainly there's no built-in ability to use testing frameworks such as utPLSQL, or a part of the application that lets you record whether a mapping has been tested, and changes the test status of mappings when you make changes to ones that they are dependent on.
    OWB is effectively a code generator, and this code runs against the Oracle database just like any other SQL or PL/SQL code. There is a whole world of information and techniques out there for tuning SQL and PL/SQL, and one particular methodology that we quite like is the Cary Millsap/Jeff Holt "Extended SQL Trace" approach that uses Oracle diagnostic events to find out exactly what went on during the running of a batch of SQL commands. We've been pretty successful using this approach to tune customer applications and batch jobs, and we'd like to use this, together with the "Method R" performance profiling methodology detailed in the book "Optimizing Oracle Performance", as a way of tuning our generated mapping code.
    Whilst we want to build performance and quality into our code, we also don't want to overburden developers with an unwieldy development approach, because what we know will happen is that after a short amount of time, it won't get used. Given that we want this framework to be used for all mappings, it's got to be easy to use, cause minimal overhead, and have results that are easy to interpret. If at all possible, we'd like to use some of the ideas from agile methodologies such as eXtreme Programming, SCRUM and so on to build in quality but minimise paperwork.
    We also recognise that there are quite a few settings that can be changed at a session and instance level, that can have an effect on the performance of a mapping. Some of these include initialisation parameters that can change the amount of memory assigned to the instance and the amount of memory subsequently assigned to caches, sort areas and the like, preferences that can be set so that indexes are preferred over table scans, and other such "tweaks" to the Oracle instance we're working with. For reference, the version of Oracle we're going to use to both run our code and store our data is Oracle 10g 10.1.0.3 Enterprise Edition, running on Sun Solaris 64-bit.
    Some initial thoughts on how this could be accomplished
    - Put in place some method for automatically / easily generating explain plans for OWB mappings (issue - this is only relevant for mappings that are set based, and what about pre- and post- mapping triggers)
    - Put in place a method for starting and stopping an event 10046 extended SQL trace for a mapping
    - Put in place a way of detecting whether the explain plan / cost / timing for a mapping changes significantly
    - Put in place a way of tracing a collection of mappings, i.e. a process flow
    - The way of enabling tracing should either be built in by default, or easily added by the OWB developer. Ideally it should be simple to switch it on or off (perhaps levels of event 10046 tracing?)
    - Perhaps store trace results in a repository? reporting? exception reporting?
    - At an instance level, come up with some stock recommendations for instance settings
    - identify the set of instance and session settings that are relevant for ETL jobs, and determine what effect changing them has on the ETL job
    - put in place a regime that records key instance indicators (STATSPACK / ASH) and allows reports to be run / exceptions to be reported
    - Incorporate any existing "performance best practices" for OWB development
    - define a lightweight regime for unit testing (as per agile methodologies) and a way of automating it (utPLSQL?) and of recording the results so we can check the status of dependent mappings easily
    - Other ideas around testing?
    Suggested Approach
    - For mapping tracing and generation of explain plans, a pre- and post-mapping trigger that turns extended SQL trace on and off, places the trace file in a predetermined spot, formats the trace file and dumps the output to repository tables (see the sketch after this list)
    - For process flows, something that does the same at the start and end of the process. Issue - how might this conflict with mapping level tracing controls?
    - Within the mapping/process flow tracing repository, store the values of historic executions, have an exception report that tells you when a mapping execution time varies by a certain amount
    - get the standard set of preferred initialisation parameters for a DW, use these as the start point for the stock recommendations. Identify which ones have an effect on an ETL job.
    - identify the standard steps Oracle recommends for getting the best performance out of OWB (workstation RAM etc) - see OWB Performance Tips http://www.rittman.net/archives/001031.html and Optimizing Oracle Warehouse Builder Performance http://www.oracle.com/technology/products/warehouse/pdf/OWBPerformanceWP.pdf
    - Investigate what additional tuning options and advisers are available with 10g
    - Investigate the effect of system statistics & come up with recommendations.
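    As a concrete starting point for the trace on/off item above, the event 10046 switches are just ALTER SESSION calls; a minimal sketch, assuming the pre-/post-mapping steps can call stored procedures (the procedure names are hypothetical):
    -- hypothetical pre-mapping procedure: switch on level-8 (waits) extended trace
    CREATE OR REPLACE PROCEDURE premap_trace_on IS
    BEGIN
      EXECUTE IMMEDIATE 'ALTER SESSION SET tracefile_identifier = ''owb_map''';
      EXECUTE IMMEDIATE 'ALTER SESSION SET events ''10046 trace name context forever, level 8''';
    END;
    /
    -- hypothetical post-mapping procedure: switch tracing off again
    CREATE OR REPLACE PROCEDURE postmap_trace_off IS
    BEGIN
      EXECUTE IMMEDIATE 'ALTER SESSION SET events ''10046 trace name context off''';
    END;
    /
    The resulting trace file can then be profiled with TKPROF or the Hotsos profiler, as per the Millsap/Holt references below.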
    Further reading / resources:
    - "Diagnosing Performance Problems Using Extended Trace" Cary Millsap
    http://otn.oracle.com/oramag/oracle/04-jan/o14tech_perf.html
    - "Performance Tuning With STATSPACK" Connie Dialeris and Graham Wood
    http://www.oracle.com/oramag/oracle/00-sep/index.html?o50tun.html
    - "Performance Tuning with Statspack, Part II" Connie Dialeris and Graham Wood
    http://otn.oracle.com/deploy/performance/pdf/statspack_tuning_otn_new.pdf
    - "Analyzing a Statspack Report: A Guide to the Detail Pages" Connie Dialeris and Graham Wood
    http://www.oracle.com/oramag/oracle/00-nov/index.html?o60tun_ol.html
    - "Why Isn't Oracle Using My Index?!" Jonathan Lewis
    http://www.dbazine.com/jlewis12.shtml
    - "Performance Tuning Enhancements in Oracle Database 10g" Oracle-Base.com
    http://www.oracle-base.com/articles/10g/PerformanceTuningEnhancements10g.php
    - Introduction to Method R and Hotsos Profiler (Cary Millsap, free reg. required)
    http://www.hotsos.com/downloads/registered/00000029.pdf
    - Exploring the Oracle Database 10g Wait Interface (Robin Schumacher)
    http://otn.oracle.com/pub/articles/schumacher_10gwait.html
    - Article referencing an OWB forum posting
    http://www.rittman.net/archives/001031.html
    - How do I inspect error logs in Warehouse Builder? - OWB Exchange tip
    http://www.oracle.com/technology/products/warehouse/pdf/Cases/case10.pdf
    - What is the fastest way to load data from files? - OWB exchange tip
    http://www.oracle.com/technology/products/warehouse/pdf/Cases/case1.pdf
    - Optimizing Oracle Warehouse Builder Performance - Oracle White Paper
    http://www.oracle.com/technology/products/warehouse/pdf/OWBPerformanceWP.pdf
    - OWB Advanced ETL topics - including sections on operating modes, partition exchange loading
    http://www.oracle.com/technology/products/warehouse/selfserv_edu/advanced_ETL.html
    - Niall Litchfield's Simple Profiler (a creative commons-licensed trace file profiler, based on Oracle Trace Analyzer, that displays the response time profile through HTMLDB. Perhaps could be used as the basis for the repository/reporting part of the project)
    http://www.niall.litchfield.dial.pipex.com/SimpleProfiler/SimpleProfiler.html
    - Welcome to the utPLSQL Project - a PL/SQL unit testing framework by Steven Feuerstein. Could be useful for automating the process of unit testing mappings.
    http://utplsql.sourceforge.net/
    Relevant postings from the OTN OWB Forum
    - Bulk Insert - Configuration Settings in OWB
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=291269&tstart=30&trange=15
    - Default Performance Parameters
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=213265&message=588419&q=706572666f726d616e6365#588419
    - Performance Improvements
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=270350&message=820365&q=706572666f726d616e6365#820365
    - Map Operator performance
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=238184&message=681817&q=706572666f726d616e6365#681817
    - Performance of mapping with FILTER
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=273221&message=830732&q=706572666f726d616e6365#830732
    - Poor mapping performance
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=275059&message=838812&q=706572666f726d616e6365#838812
    - Optimizing Mapping Performance With OWB
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=269552&message=815295&q=706572666f726d616e6365#815295
    - Performance of mapping with FILTER
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=273221&message=830732&q=706572666f726d616e6365#830732
    - Performance of the OWB-Repository
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=66271&message=66271&q=706572666f726d616e6365#66271
    - One large JOIN or many small ones?
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=202784&message=553503&q=706572666f726d616e6365#553503
    - NATIVE PL SQL with OWB9i
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=270273&message=818390&q=706572666f726d616e6365#818390
    Next Steps
    Although this is something that I'll be progressing with anyway, I'd appreciate any comments from existing OWB users as to how they currently perform performance tuning and testing. Whilst these are perhaps two distinct subject areas, they can be thought of as the core of an "OWB Best Practices" framework, and I'd be prepared to write the results up as a freely downloadable whitepaper. With this in mind, do you have any existing best practices for tuning or testing, have you tried using SQL trace and TKPROF to profile mappings and process flows, or have you used a unit testing framework such as utPLSQL to automatically test the set of mappings that make up your project?
    Any feedback, add it to this forum posting or send directly through to me at [email protected]. I'll report back on a proposed approach in due course.

    Hi Mark,
    interesting post, but I think you may be focusing on the trees, and losing sight of the forest.
    Coincidentally, I've been giving quite a lot of thought lately to some aspects of your post. They relate to some new stuff I'm doing. Maybe I'll be able to answer in more detail later, but I do have a few preliminary thoughts.
    1. 'How efficient is the generated code' is a perennial topic. There are still some people who believe that a code generator like OWB cannot be in the same league as hand-crafted SQL. I answered that question quite definitively: "We carefully timed execution of full-size runs of both the original code and the OWB versions. Take it from me, the code that OWB generates is every bit as fast as the very best hand-crafted and fully tuned code that an expert programmer can produce."
    The link is http://www.donnapkelly.pwp.blueyonder.co.uk/generated_code.htm
    That said, it still behooves the developer to have a solid understanding of what the generated code will actually do, such as how it will take advantage of indexes, and so on. If not, the developer can create such monstrosities as lookups into an un-indexed field (I've seen that).
    2. The real issue is not how fast any particular generated mapping runs, but whether or not the system as a whole is fit for purpose. Most often, that means: does it fit within its batch update window? My technique is to dump the process flow into Microsoft Project, and then to add the timings for each process. That creates a critical path, and then I can visually inspect it for any bottleneck processes. I usually find that there are no more than one or two dogs. I'll concentrate on those, fix them, and re-do the flow timings. I would add this: the dogs I have seen, I have invariably replaced. They were just garbage; they did not need tuning at all - just scrapping.
    Gee, but this whole thing is minimum effort and real fast! I generally figure that it takes maybe a day or two (max) to soup up system performance to the point where it whizzes.
    Fact is, I don't really care whether there are a lot of sub-optimal processes. All I really care about is the performance of the system as a whole. This technique seems to work for me. 'Course, it depends on architecting the thing properly in the first place. Otherwise, no amount of tuning is going to help worth a darn.
    Conversely (re. my note about replacing dogs) I do not think I have ever tuned a piece of OWB-generated code. Never found a need to. Not once. Not ever.
    That's not to say I do not recognise the value of playing with deployment configuration parameters. Obviously, I set auditing=none, and operating mode=set based, and sometimes I play with a couple of different target environments to fool around with partitioning, for example. Nonetheless, if it is not a switch or a knob inside OWB, I do not touch it. This is in line with my diktat that you shall use no other tool than OWB to develop data warehouses. (And that includes all documentation!) (OK, I'll accept MS Project.)
    Finally, you raise the concept of a 'testing framework'. This is a major part of what I am working on at the moment. This is a tough one. Clearly, the developer must unit test each mapping in a design-model-deploy-execute cycle, paying attention to both functionality and performance. When the developer is satisfied, that mapping will be marked as 'done' in the project workbook. Mappings will form part of a stream, executed as a process flow. Each process flow will usually terminate in a dimension, a fact, or an aggregate. Each process flow will be tested as an integrated whole. There will be test strategies devised, and test cases constructed. There will finally be system tests, to verify the validity of the system as a production-grade whole. (Stuff like recovery/restart, late-arriving data, and so on.)
    For me, I use EDM (TM). That's the methodology I created (and trademarked) twenty years ago: Evolutionary Development Methodology (TM). This is a spiral methodology based around prototyping cycles within Stage cycles within Release cycles. For OWB, a Stage would consist (say) of a dimensional update. What I am trying to do now is to graft this onto a traditional waterfall methodology, and I am having the same difficulties I had when I tried to do it then.
    All suggestions on how to do that grafting gratefully received!
    To sum up, I'm kinda at a loss as to why you want to go deep into OWB-generated code performance stuff. Jeepers, architect the thing right, and the code runs fast enough for anyone. I've worked on ultra-large OWB systems, including validating the largest data warehouse in the UK. I've never found any value in 'tuning' the code. What I'd like you to comment on is this: what will it buy you?
    Cheers,
    Donna
    http://www.donnapkelly.pwp.blueyonder.co.uk

  • Calculating roundtime for iviews - Portal performance monitoring

    Dear Gurus,
    I have a bunch of iViews that need to be monitored. I am running these BI-integrated iViews and want to calculate the round-trip time for them. Currently the way I am doing it is: I enter the variable values for each of the iViews and run them, start a stopwatch, and then wait till the results come up in the portal. The average run time right now is around 8 minutes. This is not only because of the portal but also because of the time lapse in the back end.
    Are there any performance parameters I can activate, or a log I can look into, to see the round-trip time for the iViews without having to manually track the time?
    Any help is appreciated, and generous points for useful answers.

    The method System.currentTimeMillis() returns the current time in milliseconds as type long.
    Define a long attribute in the component controller context and map it to the first and last views in the iView. Then, in the wdDoInit of the first view of the iView, set its value to System.currentTimeMillis(). At the end of the last view, use the formula
    long l = System.currentTimeMillis() - wdContext.currentContextElement().get<YourLongAttribute>();
    This will give the approximate time taken by the entire iView to execute.
    Divide it by 1000 to get the time in seconds, and divide that by 60 to convert it to minutes. Print it using your message manager.
    Below is a small snippet of the code I used.
    // Init of first view in the iView
    public void wdDoInit()
    {
       //@@begin wdDoInit()
       // record the start time when the first view initialises
       wdContext.currentContextElement().setTime(System.currentTimeMillis());
       //@@end
    }
    // doExit of last view in the iView
    public void wdDoExit()
    {
      //@@begin wdDoExit()
      // elapsed time = now minus the recorded start time
      long l = System.currentTimeMillis() - wdContext.currentContextElement().getTime();
      float f = (float) l / (float) 1000;
      wdComponentAPI.getMessageManager().reportSuccess("Time Taken by the IView " + f + " Seconds");
      //@@end
    }
    Note: wdDoExit() was the last method to get executed in my case. If you use something else, substitute it there.

  • FTP Adapter, B2b and SOA performance Test

    Hi All,
    We have a requirement to process large XML files ranging from 1 MB to 200 MB. Our flow is: the FTP Adapter picks up the XML's repeatable nodes; BPEL transforms and loops through each XML node and calls the B2B Adapter in each loop. We are doing advanced EDI batching to aggregate all the nodes in one XML into one EDI. Files up to 7 MB work fine with the FTP fileraise property = 1 and a polling frequency of 300s. Files of 14 MB fail with a JTA transaction timeout (timeout set to 500s), with the server running in PROD mode. We are using SOA Suite 11.1.1.7 and HIPAA 834 transactions. Is there a payload size limitation for the FTP Adapter or SOA Suite? Do we need to follow a different approach to achieve our functionality? Do we need to set any performance parameters? Please share your thoughts.
    Thanks In Advance!!!

    Please do not post duplicates - FTP Adapter, B2b and SOA performance Test

  • Exit plug url parameters without XSS encoding

    Hi,
    I fire an exit plug with POST_PARAMETERS = 'X', passing postParameters = 'sap-client=101&sap-user=5E224C57E11&sap-password=Asdf1234'. However, the special characters are encoded and not passed correctly to the Web Dynpro app.
    I got this behavior from CREATE_EXTERNAL_WINDOW with the USE_POST = abap_true setting. I am trying to hide the sap-user and sap-password from the URL in the same way when redirecting to a new Web Dynpro app, but remain in the current window.
    My question is: how can I prevent the automatic XSS encoding? I have turned off the login XSRF flag in SICF, but it has no effect on this.
    Alternatively, if somebody can tell me how I can do the same as CREATE_EXTERNAL_WINDOW but in the same window, that would also be fine.
    Your help is appreciated.
    Charlie

    >
    Frederic Wood wrote:
    > Thank you Frank!  This accomplished exactly what I asked.  Points awarded!
    >
    > However...
    >
    > What is bugging me now is the fact that the BSP app is processed on the lo_client->send( ) and my POSTed parameters are visible, but the BSP is not displayed.  Then, when I actually Fire my WD outbound plug to display the BSP, the BSP is processed and displayed, but the POSTed parameters have been lost... 
    >
    > Your thoughts?
    That is because there are two very different calls to the BSP application occurring. The solution that you said worked uses the HTTP client (browser, if you will) of the application server. Therefore it is the ABAP system itself that is calling the BSP application. That is why nothing is displayed on the client side - because the server itself is making the request and receiving the response. This approach is really only usable if you need to call a data service (REST-based or something like that), not if you actually need to display the response to the user.
    When you fire the navigation plug, it is the browser on the client machine that is making the request to the BSP application. This is completely separate from the request made from the application server in the previous step.
    If you can't use GET and URL parameters, then you should consider using server cookies. I don't think that the HTTP client class approach is going to get you what you want.
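    For reference, the server-side call pattern being described looks roughly like the following (a sketch of my own, not code from this thread; the URL and form field are placeholders, and the exact method names should be checked against your release):
    " hypothetical server-side POST via the application server's HTTP client
    DATA lo_client TYPE REF TO if_http_client.
    DATA lv_body   TYPE string.
    cl_http_client=>create_by_url(
      EXPORTING url    = 'http://host:8000/sap/bc/bsp/sap/myapp/start.htm'
      IMPORTING client = lo_client ).
    lo_client->request->set_method( if_http_request=>co_request_method_post ).
    lo_client->request->set_form_field( name = 'param1' value = 'value1' ).
    lo_client->send( ).
    lo_client->receive( ).
    " the response stays on the server - nothing reaches the user's browser
    lv_body = lo_client->response->get_cdata( ).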
