TimesTen vs. Oracle Response Time

Hi,
Are there any comparative figures between Oracle and TimesTen on response time for SELECT/INSERT/UPDATE/DELETE queries?
I would especially appreciate figures measured over JDBC calls.
I tried analyzing response times using one of our applications, which uses the JDBC thin client, against both TimesTen and Oracle.
The results were not very encouraging.
Let me know if I should share the results here.
Thanks,
Mangesh Malekar

Hi Guys,
Thanks for your feedback.
However, the application I used to test the response time has been running robustly for more than 5 years and has about 20 installations worldwide.
No code changes were made to this application to support the TimesTen database.
So far this application has been tested with Oracle and DB2 as the database.
We have achieved fantastic throughput with those databases, but it is now more or less saturated.
To improve further, we are trying the TimesTen database, as it is said to be faster.
The application loads the JDBC driver and the URL from a configuration file.
I simply supplied the TimesTen JDBC driver and the TimesTen URL to this application to get the response time results.
I firmly believe that Oracle would not have acquired TimesTen if it were not faster.
So there is definitely something TimesTen-specific that I need to incorporate into my application, which we are all missing.
Is there a test application built on JDBC that I can use to gauge the response time on both Oracle and TimesTen?
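To be concrete, below is a minimal sketch of the kind of JDBC timing harness I mean. It is only a sketch, not an official benchmark: the table name, URL, and iteration count are placeholders, and the same class can be pointed at either database just by changing the URL. (TimesTen is usually reported to show its largest gains with the direct-linked, in-process driver rather than a client/server connection.)
{code}
import java.sql.*;

// Minimal JDBC response-time probe: times N executions of one prepared statement.
// Usage: java ResponseTimeProbe <jdbc-url> <user> <password>
public class ResponseTimeProbe {
    public static void main(String[] args) throws Exception {
        int iterations = 10000;

        try (Connection con = DriverManager.getConnection(args[0], args[1], args[2]);
             PreparedStatement ps = con.prepareStatement(
                     "SELECT val FROM bench_table WHERE id = ?")) {   // placeholder table
            con.setAutoCommit(false);
            long start = System.nanoTime();
            for (int i = 0; i < iterations; i++) {
                ps.setInt(1, i % 1000);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        rs.getString(1);            // force the row to be fetched
                    }
                }
            }
            long elapsed = System.nanoTime() - start;
            System.out.printf("avg SELECT response time: %.1f us%n",
                              elapsed / 1000.0 / iterations);
        }
    }
}
{code}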
Thanks,
Mangesh Malekar

Similar Messages

  • Why is Oracle response time getting slower over time?

    Hi,
    I have a database that was initially very fast, with a response time of under 5 seconds for one of the queries.
    I have been using the database for the last 15 days, and now the same query takes 10 minutes. Many insert and delete operations are performed on the table the query runs against, but the number of records in the table has stayed constant at around 3 million since the first day.
    If I import the database into a new setup, the response time becomes very good again in the new setup.
    What could be causing the database to slow down over time?
    Thanks,
    Tuhin

    It all depends on several factors.
    Do your tables and indexes have up-to-date statistics?
    Initially there might have been a small amount of data; later the data volume may have increased and you may not have proper indexes.
    Your indexes may also have become fragmented due to heavy deletes and might need a reorg.
    My suggestion would be to look at the execution plans of the queries and see where the time is being spent.
    As others suggested, use EXPLAIN PLAN, event 10046 tracing, and tkprof (a small JDBC sketch follows below).
    Jaffar
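    For the JDBC angle of the original question, here is a hedged sketch of wrapping one suspect statement with a 10046 trace from the application's own session (it assumes the user has the ALTER SESSION privilege, and the query is a placeholder). The trace file appears in the server's user_dump_dest / diagnostic trace directory and can then be formatted with tkprof, e.g. tkprof <tracefile> report.txt sys=no sort=exeela.
    {code}
    import java.sql.*;

    public class TraceOneQuery {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(args[0], args[1], args[2]);
                 Statement stmt = con.createStatement()) {

                // Tag the trace file so it is easy to find, then switch on level 12
                // (binds + waits) SQL trace for this session only.
                stmt.execute("ALTER SESSION SET TRACEFILE_IDENTIFIER = 'slow_query'");
                stmt.execute("ALTER SESSION SET EVENTS '10046 trace name context forever, level 12'");

                try (ResultSet rs = stmt.executeQuery(
                        "SELECT COUNT(*) FROM my_slow_table")) {   // placeholder statement
                    while (rs.next()) {
                        System.out.println(rs.getLong(1));
                    }
                }

                stmt.execute("ALTER SESSION SET EVENTS '10046 trace name context off'");
            }
        }
    }
    {code}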

  • FOR ORACLE XML TEAM -- XML TOOLS BUG FIXING POLICY/RESPONSE TIME/RELEASE SCHEDULE

    Hi,
    The release of Oracle XML tools and utilities by the Oracle XML team is a significant milestone and is definitely appreciated. However, as part of a large organization, some factors need to be clarified before using these utilities in production software.
    I have noticed that XML parsers and other utilities are now shipping as part of other software such as JDeveloper. Following are my questions to the Oracle XML team; an urgent and prompt reply will be greatly appreciated.
    1. What is the XML tools support policy? Is it only OTN? Can we buy support? Are these utilities supported if large organizations have corporate server licenses? I have read at the XML site that the XDK is fully and freely supported by Oracle Worldwide Support. What does this mean?
    2. What is the release schedule for the XML tools?
    3. What is the response time?
    Once again, your help and prompt reply will be appreciated.
    Thanks.

    As you noted many of the Oracle XDK components are production. This means that if your company has an Oracle Server Support contract you will get the corresponding level of support for the production XDK components.
    If you don't have one, then OTN is your support resource. We will also have standalone support agreements in the future which you can purchase through the Oracle Store. The response time would be the same as for the server.
    There is not a specific release schedule for components on OTN as they have different development schedules.
    Oracle XML Team

  • Unable to capture the Citrix network response time using OATS Load testing.

    Unable to capture the Citrix network response time using OATS load testing. Here is the scenario: in our project, users log into the Citrix network, select the Hyperion application, and perform their transactions, and the client wants us to simulate the same scenario for load testing. We have scripted everything from the Citrix login through launching the Hyperion application. However, the time taken to launch the Hyperion application from the Citrix network has not been captured, whereas the Hyperion transaction times have been recorded. Can anyone help to resolve this issue ASAP?

    Hi keerthi,
    1. I have pasted the code for the first issue
    web.button(122,
        "/web:window[@index='0' or @title='Manage Network Targets - Oracle Communications Order and Service Management - Order and Service Management']/web:document[@index='0' or @name='1824fhkchs_6']/web:form[@id='pt1:_UISform1' or @name='pt1:_UISform1' or @index='0']/web:button[@id='pt1:MA:0:n1:1:pt1:qryId1::search' or @value='Search' or @index='3']")
       .click();
    adf.table(
        "/web:window[@index='0' or @title='Manage Network Targets - Oracle Communications Order and Service Management - Order and Service Management']/web:document[@index='0' or @name='1c9nk1ryzv_6']/web:ADFTable[@absoluteLocator='pt1:MA:n1:pt1:pnlcltn:resId1']")
       .columnSort("Ascending", "Name");

  • SAP GoLive : File System Response Times and Online Redologs design

    Hello,
    An SAP GoingLive verification session has just been performed on our SAP production environment.
    SAP ECC6
    Oracle 10.2.0.2
    Solaris 10
    As usual, we received database configuration instructions, but I'm a little bit skeptical about two of them :
    1/
    We have been told that our file system read response times "do not meet the standard requirements"
    The following datafile was considered to have too high an average read time per block:
    File name                                       Blocks read    Avg. read time (ms)    Total read time (ms)
    /oracle/PMA/sapdata5/sr3700_10/sr3700.data10    67534          23                     1553282
    I'm surprised that an average read time of 23 ms is considered a high value. What exactly are those "standard requirements"?
    2/
    We have been asked to increase the size of the online redo logs, which are already quite large (54 MB).
    Actually, we have BW loads that generate "Checkpoint not complete" messages every night.
    I've read in SAP note 79341 that:
    "The disadvantage of big redo log files is the lower checkpoint frequency and the longer time Oracle needs for an instance recovery."
    Frankly, I have problems understanding this sentence.
    Frequent checkpoints mean more redo log file switches, which means more archived redo log files generated, right?
    But how is it that frequent checkpoints should decrease the time necessary for recovery?
    Thank you.
    Any useful help would be appreciated.

    Hello
    >> I'm surprised that an average read time of 23 ms is considered a high value. What exactly are those "standard requirements"?
    The recommended ("standard") values are published at the end of SAP note #322896.
    23 ms really does seem a little high to me - for example, we see roughly 4 to 6 ms on our productive system (with SAN storage).
    >> Frequent checkpoints mean more redo log file switches, which means more archived redo log files generated, right?
    Correct.
    >> But how is it that frequent checkpoints should decrease the time necessary for recovery?
    A checkpoint occurs on every log switch (of the online redo log files). At a checkpoint event, the following three things happen in an Oracle database:
    Every dirty block in the buffer cache is written down to the datafiles
    The latest SCN is written (updated) into the datafile headers
    The latest SCN is also written to the controlfiles
    If your redo log files are larger, checkpoints do not happen as often, and the dirty buffers are not written down to the datafiles (unless free space is needed in the buffer cache). So if your instance crashes, you have to apply more redo to the datafiles to reach a consistent state (roll forward). If you have smaller redo log files, more log switches occur, so the SCNs in the datafile headers (and the corresponding data) are closer to the newest SCN - ergo the recovery is faster.
    In reality this is simplified, because Oracle implements algorithms to reduce the DBWR workload at checkpoint time.
    There are also several parameters (depending on the Oracle version) that keep the required recovery time within a target, for example FAST_START_MTTR_TARGET. (A small sketch for checking the log switch frequency follows below.)
    Regards
    Stefan
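    As a quick check of whether the 54 MB online redo logs are really too small for the nightly BW load, it can help to look at the log switch frequency; an often-cited rule of thumb is roughly one switch every 15-20 minutes under normal load. A hedged sketch querying V$LOG_HISTORY (any SQL client works just as well; the JDBC wrapper is only for illustration):
    {code}
    import java.sql.*;

    public class LogSwitchFrequency {
        public static void main(String[] args) throws Exception {
            // Log switches per hour over the last day; a burst during the BW load
            // window points to undersized online redo logs.
            String sql =
                "SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour, COUNT(*) AS switches "
              + "FROM v$log_history "
              + "WHERE first_time > SYSDATE - 1 "
              + "GROUP BY TO_CHAR(first_time, 'YYYY-MM-DD HH24') "
              + "ORDER BY 1";
            try (Connection con = DriverManager.getConnection(args[0], args[1], args[2]);
                 Statement stmt = con.createStatement();
                 ResultSet rs = stmt.executeQuery(sql)) {
                while (rs.next()) {
                    System.out.printf("%s  %d switches%n", rs.getString(1), rs.getInt(2));
                }
            }
        }
    }
    {code}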

  • Response time for Error Messages - Please Help

    Hi
    I have a Pro*C application talking to an Oracle database.
    The response time for a successful query is within desirable limits.
    But when there is an error condition (e.g. SQL error ORA-03113, or connection refused), it takes more than 9 minutes for the database to respond with the error code.
    This condition is observed with only one database while the others are working fine.
    What is the reason for this? Can’t it be reduced?
    Regards
    David

    Has anyone ever faced the same problem?
    Why delete it? Is that the only way to fix this problem?
    What do others do in such cases, or am I the only person in the world
    with this particular problem? Besides, I don't believe
    in solving the problem by removing the mentioned directory and
    reinstalling. Nevertheless I will try it and let you know about the result.
    bye
    sas

  • Help required in optimizing the query response time

    Hi,
    I am working on an application which uses the JDBC thin client. My requirement is to select all the rows in one table and use the column values to select data from another table in another database.
    The first table can have a maximum of 6 million rows, but the second table has only around 9000 rows.
    My first query returns within 30-40 milliseconds when the table has 200,000 rows. But when I iterate over the result set and query the second table, each query takes around 4 milliseconds.
    The second query's selection criterion is to find the value within a range.
    for example my_table ( varchar2 column1, varchar2 start_range, varchar2 end_range);
    My first query returns a result which then will be used to select using the following query
    select column1 from my_table where start_range < my_value and end_range> my_value;
    I have created an index on start_range and end_range. This query takes around 4 milliseconds, which I think is too much.
    I am using a PreparedStatement for the second query loop.
    Can someone suggest how I can improve the query response time?
    Regards,
    Shyam

    Try the code below.
    Prerequisite: you should know how to pass ARRAY objects to Oracle and receive result sets back in Java (a hedged JDBC sketch follows after the PL/SQL); there are plenty of samples available on the net.
    I have written sample database code for this interaction.
    Procedure get_list takes an array input from Java and returns the record set back to Java. You can change the table names and the criteria.
    Good luck.
    DROP TYPE idlist;
    CREATE OR REPLACE TYPE idlist AS TABLE OF NUMBER;
    CREATE OR REPLACE PACKAGE mypkg1
    AS
       PROCEDURE get_list (myval_list idlist, orefcur OUT sys_refcursor);
    END mypkg1;
    CREATE OR REPLACE PACKAGE BODY mypkg1
    AS
       PROCEDURE get_list (myval_list idlist, orefcur OUT sys_refcursor)
       AS
       BEGIN
          DBMS_OUTPUT.put_line (myval_list.COUNT);
          FOR x IN (SELECT a.object_name, a.object_id, b.COLUMN_VALUE myvalue
                      FROM user_objects a, TABLE (myval_list) b
                     WHERE a.object_id < b.COLUMN_VALUE)
          LOOP
             DBMS_OUTPUT.put_line (   x.object_name
                                   || ' - '
                                   || x.object_id
                                   || ' - '
                                   || x.myvalue);
          END LOOP;
          -- also hand the same rows back to the caller through the ref cursor
          OPEN orefcur FOR
             SELECT a.object_name, a.object_id, b.COLUMN_VALUE myvalue
               FROM user_objects a, TABLE (myval_list) b
              WHERE a.object_id < b.COLUMN_VALUE;
       END;
    END mypkg1;
    Testing the code above (make sure DBMS_OUTPUT is on, e.g. SET SERVEROUTPUT ON):
    DECLARE
       a      idlist;
       refc   sys_refcursor;
       c number;
    BEGIN
       SELECT x.nu
       BULK COLLECT INTO a
         FROM (SELECT 5000 nu
                 FROM DUAL) x;
       mypkg1.get_list (a, refc);
    END;
    Vishal V.
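    And a hedged sketch of the Java side (the class name and sample IDs are placeholders; createOracleArray needs a reasonably recent ojdbc driver, older drivers use the deprecated oracle.sql.ArrayDescriptor route). It builds the IDLIST collection, calls mypkg1.get_list and reads back the ref cursor opened by the procedure:
    {code}
    import java.sql.*;
    import oracle.jdbc.OracleConnection;
    import oracle.jdbc.OracleTypes;

    public class GetListCaller {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(args[0], args[1], args[2])) {
                // Build the IDLIST collection on the client side.
                Object[] ids = { 100, 2000, 30000 };
                Array idList = con.unwrap(OracleConnection.class)
                                  .createOracleArray("IDLIST", ids);

                try (CallableStatement cs = con.prepareCall("{call mypkg1.get_list(?, ?)}")) {
                    cs.setArray(1, idList);
                    cs.registerOutParameter(2, OracleTypes.CURSOR);
                    cs.execute();

                    // Read the ref cursor returned by the procedure.
                    try (ResultSet rs = (ResultSet) cs.getObject(2)) {
                        while (rs.next()) {
                            System.out.println(rs.getString(1) + " - "
                                    + rs.getLong(2) + " - " + rs.getLong(3));
                        }
                    }
                }
            }
        }
    }
    {code}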

  • Spatial Query Response Time

    O/S - Sun Solaris
    ver - Oracle 8.1.7
    I am trying to improve the response time of the following query. Both tables contain polygons.
    select a.data_id, a.GEOLOC from information_data a, shape_data b where a.info_id = 2 and b.shape_id = 271 and sdo_filter(a.GEOLOC,b.GEOLOC,'querytype=window')='TRUE'
    The response time with info_id not indexed is 9 seconds. When I index info_id, I get the following error. Why is indexing info_id causing a spatial index error ? Also, other than manipulating the tiling level, is there anything else that could improve the response time ?
    ERROR at line 1:
    ORA-29902: error in executing ODCIIndexStart() routine
    ORA-13208: internal error while evaluating [window SRID does not match layer
    SRID] operator
    ORA-06512: at "MDSYS.SDO_INDEX_METHOD", line 84
    ORA-06512: at line 1
    Thanks,
    Ravi.

    Hello Ravi,
    Both layers should have SDO_SRID values set in order for the index to work properly.
    After you do that you might want to add an Oracle hint to the query:
    select /*+ ordered */ a.data_id, a.GEOLOC
    from shape_data b, information_data a
    where a.info_id = 2 and b.shape_id = 271
    and sdo_filter(a.GEOLOC,b.GEOLOC,'querytype=window')='TRUE' ;
    Hope this helps,
    Dan
    Also, if only one or very few rows have a.info_id=2 then the function sdo_geom.relate
    might also work quickly.

  • How to tune transactions / Z-reports / programs with high response time

    Dear friends,
    In the ST03 workload analysis menu, some Z-reports, transactions, and programs are continuously noticed to be taking the maximum response time (and mostly >90% of that time is DB time).
    How can the above situation be tuned?
    Thank you.

    Siva,
    You can start with something like:
    ST04  -> Detail Analysis -> SQL Request (look at top disk reads and buffer get SQL statements)
    For the top SQL statements identified, you'd want to look at the explain plan to determine:
    1) whether the SQL statement is inefficient
    2) whether your DB stats are up to date on the tables (note that up-to-date stats do not always mean they are the best)
    3) whether better indexes are available; if not, would a more suitable index help?
    4) whether there are many slow disk reads, i.e. an I/O issue
    etc...
    While you're in ST04 make sure your buffers are sized adequately.
    Also make sure your Oracle parameters are set according to this OSS note.
    Note 830576 - Parameter recommendations for Oracle 10g

  • How to obtain the Query Response Time of a query?

    Given the average row length of the tables and the number of rows in each table,
    is there a way to estimate the query response time of a query involving
    those tables? The query includes joins as well.
    For example, suppose there 3 tables t1, t2, t3. I wish to obtain the
    time it takes for the following query:
    Query
    SELECT t1.col1, t2.col2
    FROM t1, t2, t3
    WHERE t1.col1 = t2.col2
    AND t1.col2 IN ('a', 'c', 'd')
    AND t2.col1 = t3.col2
    AND t2.col1 = t1.col1 (+)
    ORDER BY t1.col1
    Given are:
    Average Row Length of t1 = 200 bytes
    Average Row Length of t2 = 100 bytes
    Average Row Length of t3 = 500 bytes
    No of rows in t1 = 100
    No of rows in t2 = 1000
    No of rows in t3 = 500
    What is required is the 'query response time' for the said query.

    I do not know how to do it myself, but if you are running Oracle 10g, there is a new tool called SQL Tuning Advisor which might be able to help (a hedged JDBC sketch of driving it follows at the end of this message).
    Here are some links I found doing a google search, and it looks like it might meet your needs and even give you more information on how to improve your code.
    http://www.databasejournal.com/features/oracle/article.php/3492521
    http://www.databasejournal.com/features/oracle/article.php/3387011
    http://www.oracle.com/technology/obe/obe10gdb/manage/perflab/perflab.htm
    http://www.oracle.com/technology/pub/articles/10gdba/week18_10gdba.html
    http://www.oracle-base.com/articles/10g/AutomaticSQLTuning10g.php
    Have fun reading:
    You can get help from teachers, but you are going to have to learn a lot by yourself, sitting alone in a room ....Dr. Seuss
    Regards
    Tim
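    If it helps, here is a hedged sketch of driving the 10g SQL Tuning Advisor for one statement through JDBC with an anonymous PL/SQL block (the user needs the ADVISOR privilege; the SQL text and task name are placeholders based on the tables in the question):
    {code}
    import java.sql.*;

    public class TuneOneStatement {
        public static void main(String[] args) throws Exception {
            // Create, execute and report a tuning task for a single statement.
            String plsql =
                "DECLARE l_task VARCHAR2(64); "
              + "BEGIN "
              + "  l_task := DBMS_SQLTUNE.create_tuning_task("
              + "              sql_text  => 'SELECT t1.col1, t2.col2 FROM t1, t2 WHERE t1.col1 = t2.col2',"
              + "              task_name => 'resp_time_demo'); "
              + "  DBMS_SQLTUNE.execute_tuning_task(l_task); "
              + "  ? := DBMS_SQLTUNE.report_tuning_task(l_task); "
              + "END;";

            try (Connection con = DriverManager.getConnection(args[0], args[1], args[2]);
                 CallableStatement cs = con.prepareCall(plsql)) {
                cs.registerOutParameter(1, Types.CLOB);
                cs.execute();
                Clob report = cs.getClob(1);
                System.out.println(report.getSubString(1, (int) report.length()));
            }
        }
    }
    {code}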

  • Significant difference in response times for same query running on Windows client vs database server

    I have a query which is taking a long time to return the results using the Oracle client.
    When I run this query on our database server (Unix/Solaris) it completes in 80 seconds.
    When I run the same query on a Windows client it completes in 47 minutes.
    Ideally I would like to get a response time equivalent on the Windows client to what I get when running this on the database server.
    In both cases the query plans are the same.
    The query and plan is shown below :
    {code}
    SQL> explain plan
      2  set statement_id = 'SLOW'
      3  for
      4  SELECT DISTINCT /*+ FIRST_ROWS(503) */ objecttype.id_object
      5  FROM documents objecttype WHERE objecttype.id_type_definition = 'duotA9'
      6  ;
    Explained.
    SQL> select * from table(dbms_xplan.display('PLAN_TABLE','SLOW','TYPICAL'));
    PLAN_TABLE_OUTPUT
    | Id  | Operation          | Name      | Rows  | Bytes |TempSpc| Cost (%CPU)|
    |   0 | SELECT STATEMENT   |           |  2852K|    46M|       | 69851   (1)|
    |   1 |  HASH UNIQUE       |           |  2852K|    46M|   153M| 69851   (1)|
    |*  2 |   TABLE ACCESS FULL| DOCUMENTS |  2852K|    46M|       | 54063   (1)|
    {code}
    Are there any configuration changes that can be made on the Oracle client or database to improve the response times for the query when it is run from the client?
    The version on the database server is 10.2.0.1.0.
    The version of the Oracle client is also 10.2.0.1.0.
    I am happy to provide any further information if required.
    Thank you in advance.

    >> I have a query which is taking a long time to return the results using the Oracle client.
    >> When I run this query on our database server (Unix/Solaris) it completes in 80 seconds.
    >> When I run the same query on a Windows client it completes in 47 minutes.
    There are NO queries that 'run' on a client. Queries ALWAYS run within the database server.
    A client can choose when to FETCH query results. In SQL Developer (or TOAD) I can choose to get 10 rows at a time; until I choose to get the next set of 10 rows, no further rows are returned from the server to the client, and that query might never complete. (A JDBC fetch-size illustration follows at the end of this message.)
    You may get the same results depending on the client you are using. Post your question in a forum for whatever client you are using.
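    If the client in question is a JDBC application rather than an interactive tool, the number of network round trips is controlled by the fetch size (the Oracle JDBC default is 10 rows per round trip), which matters a lot when millions of rows travel over a WAN link. A hedged sketch based on the query from the original post:
    {code}
    import java.sql.*;

    public class FetchSizeDemo {
        static long fetchAllMillis(Connection con, int fetchSize) throws SQLException {
            long start = System.nanoTime();
            try (Statement stmt = con.createStatement()) {
                stmt.setFetchSize(fetchSize);   // rows transferred per network round trip
                try (ResultSet rs = stmt.executeQuery(
                        "SELECT DISTINCT id_object FROM documents"
                      + " WHERE id_type_definition = 'duotA9'")) {
                    int rows = 0;
                    while (rs.next()) {
                        rs.getString(1);
                        rows++;
                    }
                    System.out.println(rows + " rows");
                }
            }
            return (System.nanoTime() - start) / 1_000_000;
        }

        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(args[0], args[1], args[2])) {
                System.out.println("fetchSize=10  : " + fetchAllMillis(con, 10) + " ms");
                System.out.println("fetchSize=500 : " + fetchAllMillis(con, 500) + " ms");
            }
        }
    }
    {code}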

  • Response times select  on resource_view

    Hi
    I have an application that handles quite large volumes of documents. All documents are stored in XDB. I am now starting to see long response times; it was not a problem in the past but is becoming more acute. I use resource_view frequently to list files and folders. I know that you should use "equals_path" when searching resource_view for the best response times, but for various reasons there are a few places where I have to select on RESID. My question is: are there any easy ways to get better response times when selecting by RESID? See the example below: if I search on RESID it takes 4 seconds, but with "equals_path" it takes a few milliseconds. Is there any way to speed up the search on RESID?
    I'm running Oracle 11g.
    select any_path from resource_view where resid='9F124A513AAC9A44E040240A43227D33'; --4 sec
    select any_path from resource_view where equals_path(RES, '/public/infoportal/stapswe/BLIVATEST117_45100') = 1 --32 msec
    Lennart

    Hi,
    Try with HEXTORAW function :
    select any_path
    from resource_view
    where resid = hextoraw('9F124A513AAC9A44E040240A43227D33');
    The optimizer should then consider using an access path based on the index because the datatypes match.
    On the contrary, with no explicit conversion to the RAW datatype, the optimizer internally converts RESID to VARCHAR2 by applying a function on it, thus preventing the index from being used.
    See both explain plans for the details.
    SQL> explain plan for
      2  select * from resource_view where resid = '8BAE7B7BE7D14E07A13E73F6824648E3'
      3  ;
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 3007404872
    | Id  | Operation                   | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |              |     1 |   132 |     3   (0)| 00:00:01 |
    |*  1 |  TABLE ACCESS BY INDEX ROWID| XDB$RESOURCE |     1 |   132 |     3   (0)| 00:00:01 |
    |*  2 |   DOMAIN INDEX              | XDBHI_IDX    |       |       |            |          |
    Predicate Information (identified by operation id):
       1 - filter(RAWTOHEX("SYS_NC_OID$")='8BAE7B7BE7D14E07A13E73F6824648E3')
       2 - access("XDB"."UNDER_PATH"(SYS_MAKEXML('8758D485E6004793E034080020B242C6',734,
                  "XMLEXTRA","XMLDATA"),'/',9999)=1)
    16 rows selected.
    SQL> explain plan for
      2  select * from resource_view where resid = hextoraw('8BAE7B7BE7D14E07A13E73F6824648E3')
      3  ;
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 1655379850
    | Id  | Operation                        | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                 |              |     1 |   132 |     3   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS BY INDEX ROWID     | XDB$RESOURCE |     1 |   132 |     3   (0)| 00:00:01 |
    |   2 |   BITMAP CONVERSION TO ROWIDS    |              |       |       |            |          |
    |   3 |    BITMAP AND                    |              |       |       |            |          |
    |   4 |     BITMAP CONVERSION FROM ROWIDS|              |       |       |            |          |
    |*  5 |      INDEX RANGE SCAN            | SYS_C003123  |     1 |       |     0   (0)| 00:00:01 |
    |   6 |     BITMAP CONVERSION FROM ROWIDS|              |       |       |            |          |
    |   7 |      SORT ORDER BY               |              |       |       |            |          |
    |*  8 |       DOMAIN INDEX               | XDBHI_IDX    |     1 |       |            |          |
    Predicate Information (identified by operation id):
       5 - access("SYS_NC_OID$"=HEXTORAW('8BAE7B7BE7D14E07A13E73F6824648E3') )
       8 - access("XDB"."UNDER_PATH"(SYS_MAKEXML('8758D485E6004793E034080020B242C6',734,"XMLE
                  XTRA","XMLDATA"),'/',9999)=1)
    22 rows selected.

  • Faster response time of queries

    I have a query which joins a few tables with several thousand rows each. This query normally returns tens of thousands of rows, and the response time is almost 10 minutes, which is not acceptable for a web application.
    To speed it up, I just want Oracle to return only, say, the first 1000 rows.
    Changing the maximum rows returned parameter (APEX) to 1000 doesn't help at all. It seems the query executes in full and only then are the first 1000 rows of the result set sent.
    So my question is: is there a way to instruct Oracle to stop execution of the query once the first n rows are retrieved?
    I tried SELECT /* FIRST_ROWS(1000) */ .... but this doesn't help, and I wonder how it could, when TOAD treats this as a comment and doesn't change the optimizer mode - it is still ALL_ROWS.
    What am I doing wrong here? This is the first time I am trying to use the FIRST_ROWS hint - is there another, better way to speed up my query?

    Hi Bob, thanks for the response. rownum < n was the first thing I tried. One would think that if a query takes 5 minutes to execute and returns 50,000 rows, then after adding rownum < 5000 it shouldn't take more than a minute - but it takes pretty much the same time as without rownum < n. It seems as if rownum is determined for the whole result set and only then is the where condition applied.
    The tables actually have much more than a few thousand rows: one has close to 250,000 and a couple of other tables have over a million, and I don't see much that I can optimize. I think being able to quickly return only the first n rows must be a fairly common requirement for web applications dealing with large tables/views. (See the top-N sketch below.)
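    For reference, a hedged sketch of the classic Oracle top-N pattern: the ORDER BY goes inside an inline view and the ROWNUM filter outside, so the optimizer can stop after the first n rows. Note also that a hint only takes effect with the plus sign (/*+ FIRST_ROWS(n) */); without the plus it really is just a comment, which is what TOAD was showing. If the query still has to join or sort the whole data set before the first row can be produced, even this pattern cannot avoid that work. Table and column names below are placeholders.
    {code}
    import java.sql.*;

    public class FirstNRows {
        public static void main(String[] args) throws Exception {
            int n = 1000;
            // Top-N pattern: ORDER BY inside the inline view, ROWNUM filter outside.
            String sql =
                "SELECT * FROM ("
              + "  SELECT /*+ FIRST_ROWS(1000) */ t.*"
              + "  FROM big_view t"
              + "  ORDER BY t.created_date DESC"
              + ") WHERE ROWNUM <= ?";

            try (Connection con = DriverManager.getConnection(args[0], args[1], args[2]);
                 PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setInt(1, n);
                ps.setFetchSize(200);   // fewer round trips for the first page
                try (ResultSet rs = ps.executeQuery()) {
                    int rows = 0;
                    while (rs.next()) {
                        rows++;
                    }
                    System.out.println(rows + " rows fetched");
                }
            }
        }
    }
    {code}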

  • BCS high response time

    Hi Frns,
    We are currently struggling with high response times in our BCS production system; they range anywhere between 3,000 and 15,000 ms. I want to ask:
    1) Is there any project running where BCS has been deployed, and what response times are they recording?
    2) What is the ideal response time for BCS?
    3) We have also observed that the UCMON00 and UCWB_INT00 transactions/reports in dialog mode are responsible for such high response times; is there any alternative or solution in this area?
    Environment
    AIX :6.1
    SEM-BW/FINBASIS : 602 patch level 13.
    DB: ORACLE 11.2.0.2.0
    Please let me know your suggestions for bringing the response time down.
    Regards,
    Mridul Gupta

    Hi Mridul Gupta
    Could you please explain where you are seeing the high response times:
    1. Response time at UCMON logon depends on the consolidation unit hierarchy and the tasks maintained.
    2. Response time on transport depends on the master data and hierarchies maintained in BW and the quantity of data moving from Dev to Quality to Prod.
    3. Response time on process chains: check which process chain variant takes more time; this information can be found under UC_STAT0.
    As mentioned above, there are many different scenarios; also check with your Basis team to compare across your whole system landscape.
    Regards
    Rajesh SVN

  • OSB - Service Invocation instance response times

    Hi,
    In my research and discussions with the OSB vendor team, I found there is no product feature to gather statistics on per-invocation response times for an OSB service.
    My requirement is to gather the per-invocation response time of a service. I am contemplating a few ways of doing this:
    1. Java callouts before the start and after the end of the service (see the sketch at the end of this message).
    The downside of this approach: in my composite service (composing 10 business services) with challenging response time requirements, it might be an overhead to wrap each business service with Java callouts for measurement. Any thoughts?
    2. There is a report feature in OSB. How about using SNMP traps to report the start and end? I am wondering if this is any better than Java callouts, which might be a synchronous I/O operation.
    Do you folks see alternative approaches?
    TIA

    >> I think that generally it's not a good idea to modify production logic (code or configuration) to gather statistics. It may look simple, but there is still the possibility of an unexpected failure that would cause your service to fail, not to mention the complexity of such a step.
    I totally agree.
    >> This kind of data should be gathered from your infrastructure components. I know that OSB doesn't provide such a feature, but if you have your services published over HTTP, then you can always use some kind of proxy server. In our company, we use the feature-rich Apache HTTP server for many reasons; response time logging is one of them.
    Interesting, thanks. This approach might help gather stats on the proxy services; however, the business services composed inside the proxy may not get the stats.
    >> Another possibility is to use a specialized component. I think that OWSM can be useful. However, I don't have any experience with it and it could be overkill considering your needs. http://www.oracle.com/technology/products/webservices_manager/index.html
    We are looking into OWSM; as you rightly said, we want to keep it simple without OWSM.
    Thanks
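    On option 1, here is a hedged sketch of what such a Java callout pair could look like (the names are illustrative, not an OSB API). OSB Java callouts invoke static methods, so the start value would be carried between the two callouts in a message context variable, and the logging should be kept off the request path (async appender or JMS send) to limit overhead:
    {code}
    // Illustrative callout class: start() before the business service call,
    // stop() after it, with the start value carried in a context variable.
    public final class InvocationTimer {

        private InvocationTimer() {
        }

        public static long start() {
            return System.nanoTime();
        }

        public static void stop(long startNanos, String serviceName) {
            long elapsedMs = (System.nanoTime() - startNanos) / 1_000_000;
            // Replace with an async logger / JMS send in a real deployment.
            System.out.println(serviceName + " took " + elapsedMs + " ms");
        }
    }
    {code}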
