TABLE Bad Response Time

Hello Experts
I have a TABLE Element with the Properties
- selectionChangeBehaviour = auto
- selectionMode = multiNoLead
The user should select some entries and press a 'Continue' button.
Everything is running fine, but the response time when selecting a single row is very poor: about 3 seconds.
Can anybody help?
Thx in advance
Bodo

Hi Alex
the table has just 14 entries (4 columns)...
Is that too much?
Regards
Bodo

Similar Messages

  • Calling SAP Webservice from JAVA ME bad response time

Hello together,
I'm calling an SAP RFC as a web service from Java ME (NetBeans 6.8). I generated the stub classes with the Sun Wireless Toolkit. The RFC function stores entries in an SAP database table. The call of the web service, with transmitting the data and the database update in SAP, works fine, but I get the response message from SAP with a delay of 40 seconds.
Does anyone know why there is such a long delay in the response and how to fix it?

    hi,
is this reproducible, or was it just the first call to that service?
It often happens that when you call a web service for the first time, some of the required programs (be it your application programs or even the SOAP runtime itself) have not been compiled yet, so they are compiled during the web service call.
This leads to slow response times or even time-outs. The effect vanishes once all sources are compiled (i.e., depending on the complexity of your calls, after one to a few calls to that service).
So, if the slow response times persist, you should turn on debugging in SICF and see where the time is spent.
    my 2 cents,
    anton

  • Bad response time  for aRFC

On our R/3 ECC 6.0 production system we have one aRFC that is called by an external system about 200,000 times per day.
The typical response time for the RFC call is 35 ms, but sometimes the response time (the commit time) is much longer, up to 2 seconds.
The code in the RFC is very simple, as shown below:
    INSERT ztable FROM TABLE lt_internal_table.
    COMMIT WORK.
    CALL FUNCTION 'function_name'
      STARTING NEW TASK 'N'
      TABLES
        it_tab = lt_internal_table.
    Regards,
    ShiChunQing

Dear Randolf,
Now I can determine that the statement COMMIT WORK sometimes has a high cost.
I wrote a simple report, shown below; one high cost occurs every few minutes.
    REPORT ztest.

    DATA: t1     TYPE i,
          t2     TYPE i,
          t3     TYPE i,
          lt_tab TYPE STANDARD TABLE OF ytest_table.  " filled with test data before the loop (not shown in the post)

    DO 10 TIMES.
      GET RUN TIME FIELD t1.
      INSERT ytest_table FROM TABLE lt_tab.
      GET RUN TIME FIELD t2.
      t1 = t2 - t1.
      WRITE: / 'insert:', t1, sy-uzeit.
      COMMIT WORK.
      GET RUN TIME FIELD t3.
      t2 = t3 - t2.
      WRITE: 'commit:', t2, sy-uzeit.
    ENDDO.
    Run results:
    insert:        879  16:23:02 commit:      1,713  16:23:02
    insert:      1,607  16:23:05 commit:      4,434  16:23:05
    insert:       1,265  16:23:08 commit:      3,790  16:23:08
    insert:        648  16:23:11 commit:      1,195  16:23:11
    insert:        659  16:23:14 commit:    390,208  16:23:15
    insert:        640  16:23:18 commit:      1,032  16:23:18
    Best regards
    shichunqing

  • LYNC 2013 mobility clients disconnect and ARR bad response time

    Good day,
    I'm new to this, but have to implement Lync 2013 for our company.
    I have implemented Lync 2013 by following various instructions and all seems fine, apart from all mobile clients (internal/external, Android/iOS): they get disconnected after a few minutes. On Android the error is "unknown error" (very useful),
    and on iOS it's "unhandled alert type 302 E_Badgateway (E2-3-35)".
    What I have noticed is that in ARR, on the LYNCWEB server farm (under "Monitoring and Management"), the response time is huge (10,000+ ms), and when the errors come up on the clients I get a "failed request".
    I cannot think what may be wrong, but everything points to ARR.
    ARR is version 3 running on Windows Server 2012 R2.
    Any ideas? I've been stuck on this for a week now.
    Many thanks
    DD

    Hi,
    From your description, it seems to be an issue with IIS ARR.
    You can check the following:
    Check whether the DNS records and certificate are correct.
    Check whether ExposedWebURL is set to External.
    Try increasing the proxy time-out value from 200 to 1800 (seconds).
    More details:
    http://blogs.technet.com/b/nexthop/archive/2013/02/19/using-iis-arr-as-a-reverse-proxy-for-lync-server-2013.aspx
    Best Regards,
    Eason Huang
    TechNet Community Support

  • WebDynpro SSR / Browser Response Time

    Good morning,
    When we display a Web Dynpro view we get an unacceptable response time (almost 1 minute) and the CPU of the client computer rises to almost 100%.
    The view is composed of a menu on the left (which is an embedded view) and a main view, which consists of a group that contains a Table inside a ScrollContainer. So the view is not very complex.
    The table is mapped to a simple structure whose attributes are simple objects (string), and the maximum table record size is 100.
    Additionally, whenever any event takes place, either in the menu on the left or in the table itself, the response time remains at 1 minute, even though no business logic is executed.
    We have tried deleting the ScrollContainer and showing the table directly, but performance does not improve. We have also verified that there are no communication network problems.
    The performance of the client browser has been measured using the SSR parameter ("sap.session.ssr.showInfo=true"). A document with an image is attached; it shows that the browser response time is 45 seconds to display 1 MB of content (isn't that too much? Why does WD generate so much HTML?).
    SAP WAS 6.40, SP15
    Browser: Internet Explorer 6.0.2800.1106 SP1
    Thanks in advance,
    Eloy

    Hi Eloy,
    We also faced a similar problem in our project. When the page size reaches 0.5 MB+, the response becomes too slow.
    This is because Web Dynpro gets marshalled data from the backend and unmarshals it based on your screen design. So in your case, if you have 100 rows * 50 columns, it will unmarshal all of these records at the front end, i.e. on the client. Hence you see the CPU of the client reaching 100%.
    You have only a few options:
    1) Decrease the number of visible rows on the screen at a time, say a maximum of 10. If you have 40-50 columns, explore using tab strips with 12-15 columns in each tab.
    2) Increase the RAM and processing capabilities of your client PCs. We were lucky that our customer agreed to this and got P4 1 GB machines.
    Let's hope the performance is improved in future releases.
    Regards,
    Shubham

  • HTTP response time - From any SAP table.

    Hi,
    Can we get the HTTP response time from anywhere other than the SMICM → HTTP log?
    The reason I ask is that we cannot see the complete URL together with the response time in the SMICM log:
    date - "POST /sap/bc/webdynpro/sap/cprojects/?sap-contextid=......................................... HTTP/1.1" 200 325894 9555 h[-]
    As per the SAP help link below, the complete URL after the context ID is hidden for security reasons:
    http://help.sap.com/saphelp_crm70/helpdata/EN/48/442541e0804bb8e10000000a42189b/frameset.htm
    "For security reasons the following information is hidden (replaced with points) from the logging procedure"
    Question: can we get the complete URL together with the HTTP response time from any table? Any input is appreciated.
    Thanks,
    Venkat.

    Michael,
    First of all, thanks for the reply.
    In our environment we have cProjects implemented and have a lot of Web Dynpro requests to the ECC system.
    From the HTTP log (SMICM) I can see many Web Dynpro requests with response times above 30 minutes; I want to automate this monitoring by writing a CCMS data supplier. We already have the SAP documentation for writing a data supplier.
    Scenario: for example, a user performs a task in cProjects.
    We will have a GET Web Dynpro request and then a POST. From the GET we can tell what task the user is trying to do, but it is the POST request that takes most of the response time, and the POST URL is shown as … (points) after the context ID; as per SAP help this is done for security reasons.
    HTTP log from SMICM…
    [DATE] - "GET /sap/bc/webdynpro/sap/cprojects?sap-client=clnt&STARTVIEW=Tasks&OBJECT_TYPE=TTO&GUID=GUIDNUMBER
    [DATE] - "GET /sap/bc/webdynpro/sap/cprojects/~ucfLOADING?sap-contextid=....................
    [DATE] - "POST /sap/bc/webdynpro/sap/cprojects/?sap-contextid=........................[response time]
    I think SAP only masks the URL in the HTTP log file; the complete URL should still exist in some table.
    So I am checking whether we can find the complete POST URL with its response time in a table; then we can write an ABAP program to send the information to CCMS.
    Thanks,
    Venkat.

  • Table name for dialog response time without GUI in ST03?

    In SWNC_COLLECTOR_GET_AGGREGATES, cnt00x (x = 1, 2, 3, ..., 9) gives me the proportion of transaction steps with a response time between 0 s and the upper limit of the individual response time categories (including GUI time), broken down by task type. But where is the data without GUI time stored? Any ideas?

    Hello Surya
    here are screenshots from my test:
    ST03 for my instance before I have executed something on it, no dialog task type:
    here I have connected to the instance through SM51, still no dialog task type:
    and here I have connected directly through SAP Logon to this instance, now my actions on the instance are displayed as dialog task type:
    If your users connect through RFC to the instance, then all their actions are RFC task type actions. As soon as a user logs on directly to the instance, the dialog task type will appear in ST03.
    regards,
    Alwina

  • Response Time of a query in 2 different environments

    Hi guys, Luca speaking; sorry for the badly written English.
    The question is:
    The same query runs on the same table (same definition, same number of rows, defined on the same kind of tablespace; the tables are analyzed):
    *) In Benchmark the query executes quickly and the execution plan is really good.
    *) In Production the execution plan is not as good, and the response time isn't comparable (hours vs. seconds).
    #### The execution plans are different ####
    #### The stats are the same ####
    This is table storico.FLUSSO_ASTCM_INC (alias A), with these stats in Benchmark:
    chk Owner Name Partition Subpartition Tablespace NumRows Blocks EmptyBlocks AvgSpace ChainCnt AvgRowLen AvgSpaceFLBlocks NumFLBlocks UserStats GlobalStats LastAnalyzed SampleSize Monitoring Status
    True STORICO FLUSSO_ASTCM_INC TBS_DATA 2861719 32025 0 0 0 74 NO YES 10/01/2006 15.53.43 2861719 NO Normal, Successful Completion: 10/01/2006 16.26.05
    In Production the stats are the same.
    The other table is an external table.
    The only difference I have noticed so far is the tablespace each table is defined on:
    Production
    EXTENT MANAGEMENT LOCAL UNIFORM SIZE 512K
    Benchmark
    EXTENT MANAGEMENT LOCAL AUTOALLOCATE
    I'm still studying this at the moment.
    What do I have to check to obtain the same execution plan (without changing the query)?
    This is the query:
    SELECT
    'test query',
    sysdate,
    storico.tc_scarti_seq.NEXTVAL,
    NULL, --ROW_ID
    -- A.AZIONE,
    'I',
    A.CODE_PREF_TCN,
    A.CODE_NUM_TCN,
    'ADSL non presente su CRM' ,
    -- a.AZIONE
    'I'
    || ';' || a.CODE_PREF_TCN
    || ';' || a.CODE_NUM_TCN
    || ';' || a.DATA_ATVZ_CMM
    || ';' || a.CODE_PREF_DSR
    || ';' || a.CODE_NUM_TFN
    || ';' || a.DATA_CSSZ_CMM
    || ';' || a.TIPO_EVENTO
    || ';' || a.INVARIANTE_FONIA
    || ';' || a.CODE_TIPO_ADSL
    || ';' || a.TIPO_RICHIESTA_ATTIVAZIONE
    || ';' || a.TIPO_RICHIESTA_CESSAZIONE
    || ';' || a.ROW_ID_ATTIVAZIONE
    || ';' || a.ROW_ID_CESSAZIONE
    FROM storico.FLUSSO_ASTCM_INC A
    WHERE NOT EXISTS (SELECT 1 FROM storico.EXT_CRM_X_ADSL B
    WHERE A.CODE_PREF_DSR = B.CODE_PREF_DSR
    AND A.CODE_NUM_TFN = B.CODE_NUM_TFN
    AND A.INVARIANTE_FONIA = B.INVARIANTE_FONIA
    AND B.NOME_SERVIZIO NOT IN ('ADSL SMART AGGREGATORE','ADSL SMART TWIN','ALICE IMPRESA TWIN',
    'SERVIZIO ADSL PER VIDEOLOTTERY','WI - FI') )
    Output of SET AUTOTRACE TRACEONLY EXPLAIN in Production (ESERCIZIO):
    Execution Plan
    0 SELECT STATEMENT Optimizer=CHOOSE (Cost=144985 Card=143086 B
    1 0 SEQUENCE OF 'TC_SCARTI_SEQ'
    2 1 FILTER
    3 2 TABLE ACCESS (FULL) OF 'FLUSSO_ASTCM_INC' (Cost=1899 C
    4 2 EXTERNAL TABLE ACCESS* (FULL) OF 'EXT_CRM_X_ADSL' (Cos :Q370300
    4 PARALLEL_TO_SERIAL SELECT /*+ NO_EXPAND FULL(A1) */ A1."CODE_PR
    Output of SET AUTOTRACE TRACEONLY EXPLAIN in Benchmark:
    Execution Plan
    0 SELECT STATEMENT Optimizer=CHOOSE (Cost=3084 Card=2861719 Bytes=291895338)
    1 0 SEQUENCE OF 'TC_SCARTI_SEQ'
    2 1 HASH JOIN* (ANTI) (Cost=3084 Card=2861719 Bytes=291895338) :Q810002
    3 2 TABLE ACCESS* (FULL) OF 'FLUSSO_ASTCM_INC' (Cost=3082 Card=2861719 Bytes=183150016) :Q810000
    4 2 EXTERNAL TABLE ACCESS* (FULL) OF 'EXT_CRM_X_ADSL' (Cost=2 Card=1 Bytes=38) :Q810001
    2 PARALLEL_TO_SERIAL SELECT /*+ ORDERED NO_EXPAND USE_HASH(A2) USE_ANTI(A2) */ A1.C0,A1.C1,A1.C2,A1.C
    3 PARALLEL_FROM_SERIAL
    4 PARALLEL_TO_PARALLEL SELECT /*+ NO_EXPAND FULL(A1) */ A1."CODE_PREF_DSR" C0,A1."CODE_NUM_TFN" C1,A1."
    The differences in the init.ora are in the following parameters.
    Could they influence the optimizer so much that the execution plans are this different?
    background_dump_dest
    cpu_count
    db_file_multiblock_read_count
    db_files
    db_32k_cache_size
    dml_locks
    enqueue_resources
    event
    fast_start_mttr_target
    fast_start_parallel_rollback
    hash_area_size
    log_buffer
    log_parallelism
    max_rollback_segments
    open_cursors
    open_links
    parallel_execution_message_size
    parallel_max_servers
    processes
    query_rewrite_enabled
    remote_login_passwordfile
    session_cached_cursors
    sessions
    sga_max_size
    shared_pool_reserved_size
    sort_area_retained_size
    sort_area_size
    star_transformation_enabled
    transactions
    undo_retention
    user_dump_dest
    utl_file_dir
    Please help me.
    Thanks a lot, Luca

    Hi Luca,
    The test and production systems are nearly identical (same OS, same HW platform, same software version, same release).
    You're using external tables; is the speed of those drives identical?
    Have you analyzed the schema with the same statement? Could you send me the statement?
    Do you have system statistics?
    Have you tested the statement in an environment that is close to production (concurrent users etc.)?
    Could you send me the top 5 wait events from the Statspack report?
    Is the data in production and test identical? No data changed, no index dropped, no additional index? Are all tables and indexes analyzed?
    Regards
    Marc

  • CRM_DNO_MONITOR field: initial response time

    Hi,
    In CRM_DNO_MONITOR we are able to see all the data except the 'Initial Response Time' of a Service Desk message.
    Kindly let us know how to configure the system so that 'Initial Response Time' is displayed in CRM_DNO_MONITOR.
    Thank you!

    Hello Jerome,
    I understand better now, sorry. Unfortunately this value is not available in report CRM_DNO_SERVICE_MONITOR, because if you look at the structure used for the display table, no field was created for it.
    If this is something you need to implement, here is the procedure (a rough sketch follows below):
    - Do a structure append on CRMT_DNO_SERVICE_MONITOR to add the field Initial Response Time, e.g. ZINITRESPTIME.
    - Then use BAdI CRM_DNO_MONITOR to fill the Initial Response Time column for transaction CRM_DNO_MONITOR.
    In this BAdI you can use class CL_DSMOP_REP_CRM, method GET_FIRST_REACT_TIME, as done by SOLAR_EVAL.
    You should put a breakpoint in CL_DSMOP_REP_CRM->PREPARE_SIMPLE_OUTPUTLIST to see how the standard handles this. That will definitely help you or your developer.
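    As a rough sketch only: the BAdI method name CHANGE_OUTPUT and the parameter CT_OUTPUT below are placeholders (check the actual interface of BAdI CRM_DNO_MONITOR in SE18), and ZINITRESPTIME is the field created by the structure append above.
        METHOD if_ex_crm_dno_monitor~change_output.   " method and parameter names are placeholders
          " ct_output is assumed to be the display table typed on
          " CRMT_DNO_SERVICE_MONITOR including the appended field ZINITRESPTIME.
          FIELD-SYMBOLS <ls_line> TYPE crmt_dno_service_monitor.

          LOOP AT ct_output ASSIGNING <ls_line>.
            " Determine the initial response time of the message here, e.g. via
            " method GET_FIRST_REACT_TIME of class CL_DSMOP_REP_CRM (check its
            " signature in SE24 as described above), then move the result into
            " the appended field:
            " <ls_line>-zinitresptime = lv_first_react_time.
          ENDLOOP.
        ENDMETHOD.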
    Hope that helps,
    Regards,
    Khalil

  • Unable to capture the Citrix network response time using OATS Load testing.

    Unable to capture the Citrix network response time using OATS load testing. Here is the scenario: in our project, users log into the Citrix network, select the Hyperion application and perform the transaction, and the client wants us to simulate the same scenario for load testing. We have scripted the flow starting from the Citrix login and then launching the Hyperion application. But the time taken to launch the Hyperion application from the Citrix network has not been captured, whereas the Hyperion transaction times have been recorded. Can anyone help resolve this issue?

    Hi keerthi,
    1. I have pasted the code for the first issue
    web.button(122,
        "/web:window[@index='0' or @title='Manage Network Targets - Oracle Communications Order and Service Management - Order and Service Management']/web:document[@index='0' or @name='1824fhkchs_6']/web:form[@id='pt1:_UISform1' or @name='pt1:_UISform1' or @index='0']/web:button[@id='pt1:MA:0:n1:1:pt1:qryId1::search' or @value='Search' or @index='3']")
        .click();

    adf.table(
        "/web:window[@index='0' or @title='Manage Network Targets - Oracle Communications Order and Service Management - Order and Service Management']/web:document[@index='0' or @name='1c9nk1ryzv_6']/web:ADFTable[@absoluteLocator='pt1:MA:n1:pt1:pnlcltn:resId1']")
        .columnSort("Ascending", "Name");

  • SQL tune (High response time)

    Hi,
    I am running the following SQL, which is causing a high response time. Can you please help? The DBMS_SQLTUNE report is below.
    GENERAL INFORMATION SECTION
    Tuning Task Name : BFG_TUNING1
    Tuning Task Owner : ARADMIN
    Scope : COMPREHENSIVE
    Time Limit(seconds) : 60
    Completion Status : COMPLETED
    Started at : 01/28/2013 15:48:39
    Completed at : 01/28/2013 15:49:43
    Number of SQL Restructure Findings: 7
    Number of Errors : 1
    Schema Name: ARADMIN
    SQL ID : 2d61kbs9vpvp6
    SQL Text : SELECT /*+no_merge(chg)*/ chg.CHANGE_REFERENCE,
    chg.Customer_Name, chg.Customer_ID, chg.Contract_ID,
    chg.Change_Title, chg.Change_Type, chg.Change_Description,
    chg.Risk, chg.Impact, chg.Urgency, chg.Scheduled_Start_Date,
    chg.Scheduled_End_Date, chg.Scheduled_Start_Date_Int,
    chg.Scheduled_End_Date_Int, chg.Outage_Required,
    chg.Change_Status, chg.Change_Status_IM, chg.Reason_for_change,
    chg.Customer_Visible, chg.Change_Source,
    chg.Related_Ticket_Type, chg.Related_Ticket_ID,
    chg.Requested_By, chg.Requested_For, chg.Site_ID, chg.Site_Name,
    chg.Element_id, chg.Element_Type, chg.Element_Name,
    chg.Search_flag, chg.remedy_id, chg.Change_Manager,
    chg.Email_Manager, chg.Queue, a.customer as CUSTOMER_IM,
    a.contract as CONTRACT_IM, a.cid FROM exp_cm_cusid1 a, (sELECT *
    FROM EXP_BFG_CM_JOIN_V WHERE CUSTOMER_ID = 14187) chg WHERE
    a.bfg_con_id IS NULL AND a.bfg_cus_id = chg.customer_id AND
    NOT EXISTS (SELECT a.bfg_con_id FROM exp_cm_cusid1 a WHERE
    a.bfg_con_id IS NOT NULL AND a.bfg_cus_id = chg.customer_id
    AND a.bfg_con_id = chg.contract_id ) UNION SELECT
    /*+no_marge(chg)*/ chg.CHANGE_REFERENCE, chg.Customer_Name,
    chg.Customer_ID, chg.Contract_ID, chg.Change_Title,
    chg.Change_Type, chg.Change_Description, chg.Risk, chg.Impact,
    chg.Urgency, chg.Scheduled_Start_Date, chg.Scheduled_End_Date,
    chg.Scheduled_Start_Date_Int, chg.Scheduled_End_Date_Int,
    chg.Outage_Required, chg.Change_Status, chg.Change_Status_IM,
    chg.Reason_for_change, chg.Customer_Visible, chg.Change_Source,
    chg.Related_Ticket_Type, chg.Related_Ticket_ID,
    chg.Requested_By, chg.Requested_For, chg.Site_ID, chg.Site_Name,
    chg.Element_id, chg.Element_Type, chg.Element_Name,
    chg.Search_flag, chg.remedy_id, chg.Change_Manager,
    chg.Email_Manager, chg.Queue, a.customer as CUSTOMER_IM,
    a.contract as CONTRACT_IM, a.cid FROM exp_cm_cusid1 a, (sELECT *
    FROM EXP_BFG_CM_JOIN_V WHERE CUSTOMER_ID = 14187) chg WHERE
    a.bfg_cus_id = chg.customer_id AND a.bfg_con_id =
    chg.contract_id AND a.bfg_con_id IS NOT NULL
    FINDINGS SECTION (7 findings)
    1- Restructure SQL finding (see plan 1 in explain plans section)
    The predicate REGEXP_LIKE ("T100"."C536871160",'^[[:digit:]]+$') used at
    line ID 26 of the execution plan contains an expression on indexed column
    "C536871160". This expression prevents the optimizer from selecting indices
    on table "ARADMIN"."T100".
    Recommendation
    - Rewrite the predicate into an equivalent form to take advantage of
    indices. Alternatively, create a function-based index on the expression.
    Rationale
    The optimizer is unable to use an index if the predicate is an inequality
    condition or if there is an expression or an implicit data type conversion
    on the indexed column.
    2- Restructure SQL finding (see plan 1 in explain plans section)
    The predicate TO_NUMBER(TRIM("T100"."C536871160"))=:B1 used at line ID 26 of
    the execution plan contains an expression on indexed column "C536871160".
    This expression prevents the optimizer from selecting indices on table
    "ARADMIN"."T100".
    Recommendation
    - Rewrite the predicate into an equivalent form to take advantage of
    indices. Alternatively, create a function-based index on the expression.
    Rationale
    The optimizer is unable to use an index if the predicate is an inequality
    condition or if there is an expression or an implicit data type conversion
    on the indexed column.
    3- Restructure SQL finding (see plan 1 in explain plans section)
    The predicate REGEXP_LIKE ("T100"."C536871160",'^[[:digit:]]+$') used at
    line ID 10 of the execution plan contains an expression on indexed column
    "C536871160". This expression prevents the optimizer from selecting indices
    on table "ARADMIN"."T100".
    Recommendation
    - Rewrite the predicate into an equivalent form to take advantage of
    indices. Alternatively, create a function-based index on the expression.
    Rationale
    The optimizer is unable to use an index if the predicate is an inequality
    condition or if there is an expression or an implicit data type conversion
    on the indexed column.
    4- Restructure SQL finding (see plan 1 in explain plans section)
    The predicate TO_NUMBER(TRIM("T100"."C536871160"))=:B1 used at line ID 10 of
    the execution plan contains an expression on indexed column "C536871160".
    This expression prevents the optimizer from selecting indices on table
    "ARADMIN"."T100".
    Recommendation
    - Rewrite the predicate into an equivalent form to take advantage of
    indices. Alternatively, create a function-based index on the expression.
    Rationale
    The optimizer is unable to use an index if the predicate is an inequality
    condition or if there is an expression or an implicit data type conversion
    on the indexed column.
    5- Restructure SQL finding (see plan 1 in explain plans section)
    The predicate REGEXP_LIKE ("T100"."C536871160",'^[[:digit:]]+$') used at
    line ID 6 of the execution plan contains an expression on indexed column
    "C536871160". This expression prevents the optimizer from selecting indices
    on table "ARADMIN"."T100".
    Recommendation
    - Rewrite the predicate into an equivalent form to take advantage of
    indices. Alternatively, create a function-based index on the expression.
    Rationale
    The optimizer is unable to use an index if the predicate is an inequality
    condition or if there is an expression or an implicit data type conversion
    on the indexed column.
    6- Restructure SQL finding (see plan 1 in explain plans section)
    The predicate TO_NUMBER(TRIM("T100"."C536871160"))=:B1 used at line ID 6 of
    the execution plan contains an expression on indexed column "C536871160".
    This expression prevents the optimizer from selecting indices on table
    "ARADMIN"."T100".
    Recommendation
    - Rewrite the predicate into an equivalent form to take advantage of
    indices. Alternatively, create a function-based index on the expression.
    Rationale
    The optimizer is unable to use an index if the predicate is an inequality
    condition or if there is an expression or an implicit data type conversion
    on the indexed column.
    7- Restructure SQL finding (see plan 1 in explain plans section)
    An expensive "UNION" operation was found at line ID 1 of the execution plan.
    Recommendation
    - Consider using "UNION ALL" instead of "UNION", if duplicates are allowed
    or uniqueness is guaranteed.
    Rationale
    "UNION" is an expensive and blocking operation because it requires
    elimination of duplicate rows. "UNION ALL" is a cheaper alternative,
    assuming that duplicates are allowed or uniqueness is guaranteed.
    ERRORS SECTION
    - The current operation was interrupted because it timed out.
    EXPLAIN PLANS SECTION
    1- Original
    Plan hash value: 1047651452
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Inst |IN-OUT|
    | 0 | SELECT STATEMENT | | 2 | 28290 | 567 (37)| 00:00:07 | | |
    | 1 | SORT UNIQUE | | 2 | 28290 | 567 (37)| 00:00:07 | | |
    | 2 | UNION-ALL | | | | | | | |
    |* 3 | HASH JOIN RIGHT ANTI | | 1 | 14158 | 373 (5)| 00:00:05 | | |
    | 4 | VIEW | VW_SQ_1 | 1 | 26 | 179 (3)| 00:00:03 | | |
    | 5 | NESTED LOOPS | | 1 | 37 | 179 (3)| 00:00:03 | | |
    |* 6 | TABLE ACCESS FULL | T100 | 1 | 28 | 178 (3)| 00:00:03 | | |
    |* 7 | INDEX RANGE SCAN | I1451_536870913_1 | 1 | 9 | 1 (0)| 00:00:01 | | |
    | 8 | NESTED LOOPS | | 1 | 14132 | 193 (5)| 00:00:03 | | |
    |* 9 | HASH JOIN | | 1 | 14085 | 192 (5)| 00:00:03 | | |
    |* 10 | TABLE ACCESS FULL | T100 | 1 | 28 | 178 (3)| 00:00:03 | | |
    | 11 | VIEW | EXP_BFG_CM_JOIN_V | 3 | 42171 | 13 (24)| 00:00:01 | | |
    | 12 | UNION-ALL | | | | | | | |
    |* 13 | HASH JOIN | | 1 | 6389 | 5 (20)| 00:00:01 | | |
    | 14 | REMOTE | PROP_CHANGE_REQUEST_V | 1 | 5979 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
    | 15 | REMOTE | PROP_CHANGE_INVENTORY_V | 1 | 410 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
    | 16 | HASH UNIQUE | | 1 | 6052 | 6 (34)| 00:00:01 | | |
    |* 17 | HASH JOIN | | 1 | 6052 | 5 (20)| 00:00:01 | | |
    | 18 | REMOTE | PROP_CHANGE_REQUEST_V | 1 | 5979 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
    | 19 | REMOTE | PROP_CHANGE_INVENTORY_V | 1 | 73 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
    | 20 | HASH UNIQUE | | 1 | 5979 | 3 (34)| 00:00:01 | | |
    | 21 | REMOTE | PROP_CHANGE_REQUEST_V | 1 | 5979 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
    | 22 | TABLE ACCESS BY INDEX ROWID| T1451 | 1 | 47 | 1 (0)| 00:00:01 | | |
    |* 23 | INDEX RANGE SCAN | I1451_536870913_1 | 1 | | 1 (0)| 00:00:01 | | |
    | 24 | NESTED LOOPS | | 1 | 14132 | 193 (5)| 00:00:03 | | |
    |* 25 | HASH JOIN | | 1 | 14085 | 192 (5)| 00:00:03 | | |
    |* 26 | TABLE ACCESS FULL | T100 | 1 | 28 | 178 (3)| 00:00:03 | | |
    | 27 | VIEW | EXP_BFG_CM_JOIN_V | 3 | 42171 | 13 (24)| 00:00:01 | | |
    | 28 | UNION-ALL | | | | | | | |
    |* 29 | HASH JOIN | | 1 | 6389 | 5 (20)| 00:00:01 | | |
    | 30 | REMOTE | PROP_CHANGE_REQUEST_V | 1 | 5979 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
    | 31 | REMOTE | PROP_CHANGE_INVENTORY_V | 1 | 410 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
    | 32 | HASH UNIQUE | | 1 | 6052 | 6 (34)| 00:00:01 | | |
    |* 33 | HASH JOIN | | 1 | 6052 | 5 (20)| 00:00:01 | | |
    | 34 | REMOTE | PROP_CHANGE_REQUEST_V | 1 | 5979 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
    | 35 | REMOTE | PROP_CHANGE_INVENTORY_V | 1 | 73 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
    | 36 | HASH UNIQUE | | 1 | 5979 | 3 (34)| 00:00:01 | | |
    | 37 | REMOTE | PROP_CHANGE_REQUEST_V | 1 | 5979 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
    | 38 | TABLE ACCESS BY INDEX ROWID | T1451 | 1 | 47 | 1 (0)| 00:00:01 | | |
    |* 39 | INDEX RANGE SCAN | I1451_536870913_1 | 1 | | 1 (0)| 00:00:01 | | |
    Predicate Information (identified by operation id):
    3 - access("ITEM_0"="EXP_BFG_CM_JOIN_V"."CUSTOMER_ID" AND "ITEM_1"="EXP_BFG_CM_JOIN_V"."CONTRACT_ID")
    6 - filter("C536871050" LIKE '%FMS%' AND REGEXP_LIKE ("C536871160",'^[[:digit:]]+$') AND ("C536871088" IS NULL
    OR REGEXP_LIKE ("C536871088",'^[[:digit:]]+$')) AND TO_NUMBER(TRIM("C536871088")) IS NOT NULL AND
    TO_NUMBER(TRIM("C536871160"))=:SYS_B_0 AND "C536871160" IS NOT NULL AND "C536871050" IS NOT NULL AND "C7"=0)
    7 - access("C536870913"="C536870914")
    9 - access("EXP_BFG_CM_JOIN_V"."CUSTOMER_ID"=TO_NUMBER(TRIM("C536871160")))
    10 - filter("C536871050" LIKE '%FMS%' AND REGEXP_LIKE ("C536871160",'^[[:digit:]]+$') AND ("C536871088" IS NULL
    OR REGEXP_LIKE ("C536871088",'^[[:digit:]]+$')) AND TO_NUMBER(TRIM("C536871088")) IS NULL AND
    TO_NUMBER(TRIM("C536871160"))=:SYS_B_0 AND "C536871160" IS NOT NULL AND "C536871050" IS NOT NULL AND "C7"=0)
    13 - access("CHG"."PRP_CHG_REFERENCE"="INV"."PRP_CHG_REFERENCE")
    17 - access("CHG"."PRP_CHG_REFERENCE"="INV"."PRP_CHG_REFERENCE")
    23 - access("C536870913"="C536870914")
    25 - access("EXP_BFG_CM_JOIN_V"."CUSTOMER_ID"=TO_NUMBER(TRIM("C536871160")) AND
    "EXP_BFG_CM_JOIN_V"."CONTRACT_ID"=TO_NUMBER(TRIM("C536871088")))
    26 - filter("C536871050" LIKE '%FMS%' AND REGEXP_LIKE ("C536871160",'^[[:digit:]]+$') AND ("C536871088" IS NULL
    OR REGEXP_LIKE ("C536871088",'^[[:digit:]]+$')) AND TO_NUMBER(TRIM("C536871088")) IS NOT NULL AND
    TO_NUMBER(TRIM("C536871160"))=:SYS_B_1 AND "C536871160" IS NOT NULL AND "C536871050" IS NOT NULL AND "C7"=0)
    29 - access("CHG"."PRP_CHG_REFERENCE"="INV"."PRP_CHG_REFERENCE")
    33 - access("CHG"."PRP_CHG_REFERENCE"="INV"."PRP_CHG_REFERENCE")
    39 - access("C536870913"="C536870914")
    Remote SQL Information (identified by operation id):
    14 - SELECT "PRP_CHG_REFERENCE","CUS_ID","CUS_NAME","CNT_BFG_ID","PRP_TITLE","PRP_CHG_TYPE","PRP_DESCRIPTION","PR
    P_BTIGNITE_PRIORITY","PRP_CUSTOMER_PRIORITY","PRP_CHG_URGENCY","PRP_RESPONSE_REQUIRED_BY","PRP_REQUIRED_BY_DATE","P
    RP_CHG_OUTAGE_FLAG","PRP_CHG_STATUS","PRP_CHG_FOR_REASON","PRP_CHG_CUSTOMER_VISIBILITY","PRP_CHG_SOURCE_SYSTEM","PR
    P_RELATED_TICKET_TYPE","PRP_RELATED_TICKET_ID","CHANGE_INITIATOR","CHANGE_ORIGINATOR","CHANGE_MANAGER","QUEUE"
    FROM "PROP_OWNER2"."PROP_CHANGE_REQUEST_V" "CHG" WHERE "CUS_ID"=:1 (accessing 'ARS_BFG_DBLINK.WORLD' )
    15 - SELECT "PRP_CHG_REFERENCE","SIT_ID","SIT_NAME","ELEMENT_SUMMARY","PRODUCT_NAME" FROM
    "PROP_OWNER2"."PROP_CHANGE_INVENTORY_V" "INV" (accessing 'ARS_BFG_DBLINK.WORLD' )
    18 - SELECT "PRP_CHG_REFERENCE","CUS_ID","CUS_NAME","CNT_BFG_ID","PRP_TITLE","PRP_CHG_TYPE","PRP_DESCRIPTION","PR
    P_BTIGNITE_PRIORITY","PRP_CUSTOMER_PRIORITY","PRP_CHG_URGENCY","PRP_RESPONSE_REQUIRED_BY","PRP_REQUIRED_BY_DATE","P
    RP_CHG_OUTAGE_FLAG","PRP_CHG_STATUS","PRP_CHG_FOR_REASON","PRP_CHG_CUSTOMER_VISIBILITY","PRP_CHG_SOURCE_SYSTEM","PR
    P_RELATED_TICKET_TYPE","PRP_RELATED_TICKET_ID","CHANGE_INITIATOR","CHANGE_ORIGINATOR","CHANGE_MANAGER","QUEUE"
    FROM "PROP_OWNER2"."PROP_CHANGE_REQUEST_V" "CHG" WHERE "CUS_ID"=:1 (accessing 'ARS_BFG_DBLINK.WORLD' )
    19 - SELECT "PRP_CHG_REFERENCE","SIT_ID","SIT_NAME" FROM "PROP_OWNER2"."PROP_CHANGE_INVENTORY_V" "INV"
    (accessing 'ARS_BFG_DBLINK.WORLD' )
    21 - SELECT "PRP_CHG_REFERENCE","CUS_ID","CUS_NAME","CNT_BFG_ID","PRP_TITLE","PRP_CHG_TYPE","PRP_DESCRIPTION","PR
    P_BTIGNITE_PRIORITY","PRP_CUSTOMER_PRIORITY","PRP_CHG_URGENCY","PRP_RESPONSE_REQUIRED_BY","PRP_REQUIRED_BY_DATE","P
    RP_CHG_OUTAGE_FLAG","PRP_CHG_STATUS","PRP_CHG_FOR_REASON","PRP_CHG_CUSTOMER_VISIBILITY","PRP_CHG_SOURCE_SYSTEM","PR
    P_RELATED_TICKET_TYPE","PRP_RELATED_TICKET_ID","CHANGE_INITIATOR","CHANGE_ORIGINATOR","CHANGE_MANAGER","QUEUE"
    FROM "PROP_OWNER2"."PROP_CHANGE_REQUEST_V" "CHG" WHERE "CUS_ID"=:1 (accessing 'ARS_BFG_DBLINK.WORLD' )
    30 - SELECT "PRP_CHG_REFERENCE","CUS_ID","CUS_NAME","CNT_BFG_ID","PRP_TITLE","PRP_CHG_TYPE","PRP_DESCRIPTION","PR
    P_BTIGNITE_PRIORITY","PRP_CUSTOMER_PRIORITY","PRP_CHG_URGENCY","PRP_RESPONSE_REQUIRED_BY","PRP_REQUIRED_BY_DATE","P
    RP_CHG_OUTAGE_FLAG","PRP_CHG_STATUS","PRP_CHG_FOR_REASON","PRP_CHG_CUSTOMER_VISIBILITY","PRP_CHG_SOURCE_SYSTEM","PR
    P_RELATED_TICKET_TYPE","PRP_RELATED_TICKET_ID","CHANGE_INITIATOR","CHANGE_ORIGINATOR","CHANGE_MANAGER","QUEUE"
    FROM "PROP_OWNER2"."PROP_CHANGE_REQUEST_V" "CHG" WHERE "CUS_ID"=:1 (accessing 'ARS_BFG_DBLINK.WORLD' )
    31 - SELECT "PRP_CHG_REFERENCE","SIT_ID","SIT_NAME","ELEMENT_SUMMARY","PRODUCT_NAME" FROM
    "PROP_OWNER2"."PROP_CHANGE_INVENTORY_V" "INV" (accessing 'ARS_BFG_DBLINK.WORLD' )
    34 - SELECT "PRP_CHG_REFERENCE","CUS_ID","CUS_NAME","CNT_BFG_ID","PRP_TITLE","PRP_CHG_TYPE","PRP_DESCRIPTION","PR
    P_BTIGNITE_PRIORITY","PRP_CUSTOMER_PRIORITY","PRP_CHG_URGENCY","PRP_RESPONSE_REQUIRED_BY","PRP_REQUIRED_BY_DATE","P
    RP_CHG_OUTAGE_FLAG","PRP_CHG_STATUS","PRP_CHG_FOR_REASON","PRP_CHG_CUSTOMER_VISIBILITY","PRP_CHG_SOURCE_SYSTEM","PR
    P_RELATED_TICKET_TYPE","PRP_RELATED_TICKET_ID","CHANGE_INITIATOR","CHANGE_ORIGINATOR","CHANGE_MANAGER","QUEUE"
    FROM "PROP_OWNER2"."PROP_CHANGE_REQUEST_V" "CHG" WHERE "CUS_ID"=:1 (accessing 'ARS_BFG_DBLINK.WORLD' )
    35 - SELECT "PRP_CHG_REFERENCE","SIT_ID","SIT_NAME" FROM "PROP_OWNER2"."PROP_CHANGE_INVENTORY_V" "INV"
    (accessing 'ARS_BFG_DBLINK.WORLD' )
    37 - SELECT "PRP_CHG_REFERENCE","CUS_ID","CUS_NAME","CNT_BFG_ID","PRP_TITLE","PRP_CHG_TYPE","PRP_DESCRIPTION","PR
    P_BTIGNITE_PRIORITY","PRP_CUSTOMER_PRIORITY","PRP_CHG_URGENCY","PRP_RESPONSE_REQUIRED_BY","PRP_REQUIRED_BY_DATE","P
    RP_CHG_OUTAGE_FLAG","PRP_CHG_STATUS","PRP_CHG_FOR_REASON","PRP_CHG_CUSTOMER_VISIBILITY","PRP_CHG_SOURCE_SYSTEM","PR
    P_RELATED_TICKET_TYPE","PRP_RELATED_TICKET_ID","CHANGE_INITIATOR","CHANGE_ORIGINATOR","CHANGE_MANAGER","QUEUE"
    FROM "PROP_OWNER2"."PROP_CHANGE_REQUEST_V" "CHG" WHERE "CUS_ID"=:1 (accessing 'ARS_BFG_DBLINK.WORLD' )
    -------------------------------------------------------------------------------

    Please review the following threads:
    {message:id=9360002}
    {message:id=9360003}

  • Response time of a function module

    Hi Friends,
        I'm creating a custom program in which I call a BAPI that exists on another server.
       Now I want to record the response time of the BAPI after placing the request, and display the time for the
       corresponding record in the output.
      Is there any procedure to record the response time in the program? I'm not asking about the transactions where we can
      measure performance.
    Moderator message - please do not ask for or promise rewards.
    Thanks & Warm Regards
    Krishna
    Edited by: Rob Burbank on Oct 1, 2009 8:50 AM

    Hello,
    The correct method, as pointed out in previous posts, is with GET RUN TIME. Note that this returns time in microseconds, so you may want to scale this up to a larger unit.
    As to the usefulness: it is perfectly legitimate to include time measurements in your program as long as this has a clear purpose, e.g. comparing response times between different remote systems, identifying erratic response times, etc. In that case I would advise you to also include some other measurement, e.g. the amount of data processed (whether you can do this and how depends on the BAPI, e.g. you could use the number of lines in the returned internal tables as a metric). If your time measurement creates separate log/trace records, then it would also be a good idea to have the option to enable and disable the time measurement.
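    For illustration, here is a minimal sketch of such a measurement around a remote call; the function name 'BAPI_NAME', the destination 'RFC_DEST' and the missing parameter list are placeholders for the actual BAPI you call:
        DATA: lv_t1      TYPE i,
              lv_t2      TYPE i,
              lv_time_ms TYPE i.

        GET RUN TIME FIELD lv_t1.

        CALL FUNCTION 'BAPI_NAME' DESTINATION 'RFC_DEST'
          " EXPORTING / IMPORTING / TABLES ... as required by the BAPI
          EXCEPTIONS
            communication_failure = 1
            system_failure        = 2
            OTHERS                = 3.

        GET RUN TIME FIELD lv_t2.
        lv_time_ms = ( lv_t2 - lv_t1 ) / 1000.   " GET RUN TIME returns microseconds

        WRITE: / 'BAPI response time in ms:', lv_time_ms.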
    Regards,
    Mark

  • Report to calculate avg response time for a transaction using ST03.

    Hi ABAP gurus,
    I want to develop a report which calculates the average response time (ST03) for a transaction on an hourly basis.
    I have read many threads in which users post which tables/FMs to use to extract data such as dialog steps and total response time.
    I am sure many of you have created a report like this; I would appreciate it if you could share pseudo-code for it. Any help regarding this is highly appreciated.
    Cheers,
    Karan
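    A rough sketch of such a report, based on the workload collector function module SWNC_COLLECTOR_GET_AGGREGATES mentioned earlier on this page. The table parameter USERTCODE, the line type SWNCAGGUSERTCODE and its fields ENTRY_ID, COUNT and RESPTI are written from memory and should be verified in SE37/SE11; an hourly breakdown would additionally need the time-profile table of the same function module, which is not shown here:
        REPORT z_avg_resp_time.

        " Assumption: USERTCODE returns one aggregate record per user/transaction
        " with the number of dialog steps (COUNT) and total response time (RESPTI).
        DATA: lt_usertcode TYPE STANDARD TABLE OF swncaggusertcode,
              lv_avg       TYPE p DECIMALS 2.

        FIELD-SYMBOLS <ls_rec> TYPE swncaggusertcode.

        CALL FUNCTION 'SWNC_COLLECTOR_GET_AGGREGATES'
          EXPORTING
            component  = 'TOTAL'        " whole system, or a single instance name
            periodtype = 'D'            " daily aggregate
            periodstrt = sy-datum
          TABLES
            usertcode  = lt_usertcode.

        LOOP AT lt_usertcode ASSIGNING <ls_rec>.
          IF <ls_rec>-entry_id CS 'VA01' AND <ls_rec>-count > 0.   " transaction of interest
            lv_avg = <ls_rec>-respti / <ls_rec>-count.             " average response time per dialog step
            WRITE: / <ls_rec>-entry_id, lv_avg.
          ENDIF.
        ENDLOOP.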

    http://jakarta.apache.org/jmeter/

  • Help required in optimizing the query response time

    Hi,
    I am working on an application which uses a JDBC thin client. My requirement is to select all the rows in one table and use the column values to select data in another table in another database.
    The first table can have a maximum of 6 million rows, but the second table has only around 9,000 rows.
    My first query returns within 30-40 milliseconds when the table has 200,000 rows. But when I iterate over the result set and query the second table, each query takes around 4 milliseconds.
    The second query's selection criterion is a range lookup,
    for example my_table (varchar2 column1, varchar2 start_range, varchar2 end_range).
    My first query returns a result which is then used to select with the following query:
    select column1 from my_table where start_range < my_value and end_range> my_value;
    I have created an index on start_range and end_range. This query takes around 4 milliseconds, which I think is too much.
    I am using a PreparedStatement for the second query loop.
    Can someone suggest how I can improve the query response time?
    Regards,
    Shyam

    Try the code below.
    Prerequisite: you should know how to pass ARRAY objects to Oracle and receive result sets in Java; there are thousands of samples available on the net.
    I have written sample DB code for this interaction.
    Procedure get_list takes an array input from Java and returns the record set back to Java. You can change the table names and the criteria.
    Good luck.
    DROP TYPE idlist;
    CREATE OR REPLACE TYPE idlist AS TABLE OF NUMBER;
    CREATE OR REPLACE PACKAGE mypkg1
    AS
       PROCEDURE get_list (myval_list idlist, orefcur OUT sys_refcursor);
    END mypkg1;
    CREATE OR REPLACE PACKAGE BODY mypkg1
    AS
       PROCEDURE get_list (myval_list idlist, orefcur OUT sys_refcursor)
       AS
          ctr   NUMBER;
       BEGIN
          DBMS_OUTPUT.put_line (myval_list.COUNT);
          FOR x IN (SELECT object_name, object_id, myvalue
                      FROM user_objects a,
                           (SELECT myval_list (ROWNUM + 1) myvalue
                              FROM TABLE (myval_list)) b
                     WHERE a.object_id < b.myvalue)
          LOOP
             DBMS_OUTPUT.put_line (   x.object_name
                                   || ' - '
                                   || x.object_id
                                   || ' - '
                                    || x.myvalue);
          END LOOP;
       END;
    END mypkg1;
    Testing the code above. Make sure dbms output is ON.
    DECLARE
       a      idlist;
       refc   sys_refcursor;
       c number;
    BEGIN
       SELECT x.nu
       BULK COLLECT INTO a
         FROM (SELECT 5000 nu
                 FROM DUAL) x;
       mypkg1.get_list (a, refc);
    END;
    Vishal V.

  • How to find the Response time for a particular Transaction

    Hello Experts,
            I am implementing a BAdI to achieve a customer enhancement for transaction XD01. I need to confirm to the customer what the system response time is before and after the implementation:
    Response time BEFORE BAdI implementation
    Response time AFTER BAdI implementation
    Where can I get this?
    Help me in this regard.
    Best Regards
    SRiNi

    Hello,
    Within STAD, enter the time range during which the user was executing the transaction, as well as the user name. The time field indicates the time when the transaction ended. STAD adds some extra time onto your time interval. Depending on how long the transaction ran, you can set the length you want it to display. This means that if it is set to 10, STAD will display statistical records from transactions that ended within that 10-minute period.
    The selection screen also gives you a few options for display mode.
    - Show all statistic records, sorted by start time
    This shows you all of the transaction steps, but they are not grouped in any way.
    - Show all records, grouped by business transaction
    This shows the transaction steps grouped by transaction ID (shown in the record as Trans. ID). The times are not cumulative; they are the times for each individual step.
    - Show Business Transaction Totals
    This shows the transaction steps grouped by transaction ID. However, instead of just listing them, you can drill down from the top level. The top level shows you the overall response time, and as you drill down, you can get to the times of the individual steps.
    Note that you also need to add the user into the selection criteria. Everything else you can leave alone in this case.
    Once you have the records displayed, you can double click them to get a detailed record. This will show you the following:
    - Breakdown of response time (wait for work process, processing time, load time, generating time, roll time, DB time, enqueue time). This makes STAD a great place to start for performance analysis as you will then know whether you will need to look at SQL, processing, or any other component of response time first.
    - Stats on the data selected within the execution
    - Memory utilization of the transaction
    - RFCs executed (including the calling time and remote execution time - very useful with performance analysis of interfaces)
    - Much more.
    As this chain of comments has previously indicated, you are best off using STAD if you want an accurate indication of response time. The ST12 trace times (ST12 combines the SE30 ABAP trace and the ST05 SQL trace) are less accurate than the values you get from STAD. I am not discounting the value of ST12 by any means; it is a very powerful tool to help you tune your transactions.
    I hope this information is helpful!
    Kind regards,
    Geoff Irwin
    Senior Support Consultant
    SAP Active Global Support
