Response times for SELECT on RESOURCE_VIEW

Hi,
I have an application that handles quite large volumes of documents. All documents are stored in XDB, and I am now starting to see long response times. It has not been a problem in the past but is now becoming more acute. I use RESOURCE_VIEW frequently to list files and folders. I know that you should use "equals_path" when searching RESOURCE_VIEW for the best response times, but for various reasons there are a few places where I have to select by RESID. My question is: are there any easy ways to get better response times when selecting by RESID? See the example below: if I search on RESID it takes 4 seconds, whereas with "equals_path" it takes a few milliseconds. Is there any way to speed up the search on RESID?
I'm running Oracle 11g.
select any_path from resource_view where resid='9F124A513AAC9A44E040240A43227D33'; --4 sec
select any_path from resource_view where equals_path(RES, '/public/infoportal/stapswe/BLIVATEST117_45100') = 1 --32 msec
Lennart

Hi,
Try with the HEXTORAW function:
select any_path
from resource_view
where resid = hextoraw('9F124A513AAC9A44E040240A43227D33')
;
The optimizer should then consider an access path based on the index, because the datatypes match.
Without the explicit conversion to the RAW datatype, the optimizer internally converts RESID to VARCHAR2 by applying a function to it, which prevents the index from being used.
See both explain plans for the details.
SQL> explain plan for
  2  select * from resource_view where resid = '8BAE7B7BE7D14E07A13E73F6824648E3'
  3  ;
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 3007404872
| Id  | Operation                   | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT            |              |     1 |   132 |     3   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS BY INDEX ROWID| XDB$RESOURCE |     1 |   132 |     3   (0)| 00:00:01 |
|*  2 |   DOMAIN INDEX              | XDBHI_IDX    |       |       |            |          |
Predicate Information (identified by operation id):
   1 - filter(RAWTOHEX("SYS_NC_OID$")='8BAE7B7BE7D14E07A13E73F6824648E3')
   2 - access("XDB"."UNDER_PATH"(SYS_MAKEXML('8758D485E6004793E034080020B242C6',734,
              "XMLEXTRA","XMLDATA"),'/',9999)=1)
16 rows selected.
SQL> explain plan for
  2  select * from resource_view where resid = hextoraw('8BAE7B7BE7D14E07A13E73F6824648E3')
  3  ;
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 1655379850
| Id  | Operation                        | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT                 |              |     1 |   132 |     3   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID     | XDB$RESOURCE |     1 |   132 |     3   (0)| 00:00:01 |
|   2 |   BITMAP CONVERSION TO ROWIDS    |              |       |       |            |          |
|   3 |    BITMAP AND                    |              |       |       |            |          |
|   4 |     BITMAP CONVERSION FROM ROWIDS|              |       |       |            |          |
|*  5 |      INDEX RANGE SCAN            | SYS_C003123  |     1 |       |     0   (0)| 00:00:01 |
|   6 |     BITMAP CONVERSION FROM ROWIDS|              |       |       |            |          |
|   7 |      SORT ORDER BY               |              |       |       |            |          |
|*  8 |       DOMAIN INDEX               | XDBHI_IDX    |     1 |       |            |          |
Predicate Information (identified by operation id):
   5 - access("SYS_NC_OID$"=HEXTORAW('8BAE7B7BE7D14E07A13E73F6824648E3') )
   8 - access("XDB"."UNDER_PATH"(SYS_MAKEXML('8758D485E6004793E034080020B242C6',734,"XMLE
              XTRA","XMLDATA"),'/',9999)=1)
22 rows selected.
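Applied to the original query from the question (a sketch only, using the RESID value Lennart posted), the rewrite is:
select any_path
from resource_view
where resid = hextoraw('9F124A513AAC9A44E040240A43227D33');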

Similar Messages

  • How to enhance response time on select?

    I'm using ibatis/MySQL in my struts application and apparently the response time to receive data (select) is quite long.
    I'm referring to a select statement that receives data from 5 different tables (~6 columns each). There are 500 objects (Cars) and it takes around 63 seconds (my computer has 2 GB RAM).
    Can someone advise how to improve the response time?
    Thanks

    yes, they all have primary keys and indexes.

    > Do you do the JOIN in the database or are you getting a ResultSet and then iterating over it to SELECT the rest? (This will kill you with network traffic.)

    yes I'm using the select with no join.
    I'm using abator to auto-generate the sql statements and therefore I loop on every car's id and get my relevant info. So basically it's all select after select, and I integrate them in the Car object.

    That's the reason then: if your outer select returns N rows, you're doing N network roundtrips to get the rest of the data. It's a certain performance killer.

    > is there a major diff between reading one by one versus reading everything in one go? meaning right now i do this:
    > get list of all car_id

    Network roundtrip #1.

    > iterate through db with car_id and construct a CAR object...

    Which means one round trip per row. You're dead in the water here, no matter how you optimize the database.

    > add object to LinkedList, move to the next car...

    Rewrite that query and bring all the data back in one network roundtrip (see the sketch below). You'll notice an immediate improvement.
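    A minimal sketch of the single-roundtrip version, assuming hypothetical tables cars, engines and wheels keyed by car_id (the real table and column names are not given in the thread):

    -- One statement returns everything the loop previously fetched with one query per car.
    SELECT c.car_id,
           c.model,
           e.engine_type,
           w.wheel_size
    FROM   cars c
    JOIN   engines e ON e.car_id = c.car_id
    JOIN   wheels  w ON w.car_id = c.car_id
    ORDER  BY c.car_id;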

  • Why does HP LoadRunner measure slow response time for SAP GUI selection steps?

    We are running HP LoadRunner 11.52 scenarios out to multiple (up to 4) load generators. Load generators have 24 GB RAM and 8 CPUs. We have run a maximum of 200 users and see a maximum usage of ~25% on CPUs. We were previously running SAP GUI 7.4 and downgraded to 7.3 when we came to understand 7.4 is not supported in the PAM for LR 11.52. Regardless, our results have not changed. What we are trying to resolve is why the first two steps of opening the SAP GUI and entering system-related selections take so long to process. Additionally, we typically see that during the time escalation of those steps there is a period where the process seems to pause for a large time gap. The process below shows a snapshot while in progress. The SAP GUI steps are hitting severe peaks while all other steps during the process tend to track at a steady, quick response time. Any degradation there looks to be tied to the long pauses.
    Would anyone have knowledge or experience with this type of output and understand why it might be consistently happening? Thanks.

    > Has anybody else been contacted by SAP regarding a "Very Important Notification about Security and Your SAP System"
    A forum search on "1298160", one of the note numbers, reveals the answer to your question.
    Please always use the search first.

  • Unable to capture the Citrix network response time using OATS Load testing.

    Unable to capture the Citrix network response time using OATS load testing. Here is the scenario: in our project, users log into the Citrix network, select the Hyperion application and perform the transaction, and the client wants us to simulate the same scenario for load testing. We have scripted it starting from the Citrix login and then launching the Hyperion application. But the time taken to launch the Hyperion application from the Citrix network has not been captured, whereas the Hyperion transaction times have been recorded. Can anyone help to resolve this issue ASAP?

    Hi keerthi,
    1. I have pasted the code for the first issue
    web
                             .button(
                                       122,
                                       "/web:window[@index='0' or @title='Manage Network Targets - Oracle Communications Order and Service Management - Order and Service Management']/web:document[@index='0' or @name='1824fhkchs_6']/web:form[@id='pt1:_UISform1' or @name='pt1:_UISform1' or @index='0']/web:button[@id='pt1:MA:0:n1:1:pt1:qryId1::search' or @value='Search' or @index='3']")
                             .click();
                        adf
                        .table(
                                  "/web:window[@index='0' or @title='Manage Network Targets - Oracle Communications Order and Service Management - Order and Service Management']/web:document[@index='0' or @name='1c9nk1ryzv_6']/web:ADFTable[@absoluteLocator='pt1:MA:n1:pt1:pnlcltn:resId1']")
                        .columnSort("Ascending", "Name" );
         }

  • SQL tune (High response time)

    Hi,
    I am writing the following query, which is causing a high response time. Can you please help? The DBMS_SQLTUNE advisor output is below.
    GENERAL INFORMATION SECTION
    Tuning Task Name : BFG_TUNING1
    Tuning Task Owner : ARADMIN
    Scope : COMPREHENSIVE
    Time Limit(seconds) : 60
    Completion Status : COMPLETED
    Started at : 01/28/2013 15:48:39
    Completed at : 01/28/2013 15:49:43
    Number of SQL Restructure Findings: 7
    Number of Errors : 1
    Schema Name: ARADMIN
    SQL ID : 2d61kbs9vpvp6
    SQL Text : SELECT /*+no_merge(chg)*/ chg.CHANGE_REFERENCE,
    chg.Customer_Name, chg.Customer_ID, chg.Contract_ID,
    chg.Change_Title, chg.Change_Type, chg.Change_Description,
    chg.Risk, chg.Impact, chg.Urgency, chg.Scheduled_Start_Date,
    chg.Scheduled_End_Date, chg.Scheduled_Start_Date_Int,
    chg.Scheduled_End_Date_Int, chg.Outage_Required,
    chg.Change_Status, chg.Change_Status_IM, chg.Reason_for_change,
    chg.Customer_Visible, chg.Change_Source,
    chg.Related_Ticket_Type, chg.Related_Ticket_ID,
    chg.Requested_By, chg.Requested_For, chg.Site_ID, chg.Site_Name,
    chg.Element_id, chg.Element_Type, chg.Element_Name,
    chg.Search_flag, chg.remedy_id, chg.Change_Manager,
    chg.Email_Manager, chg.Queue, a.customer as CUSTOMER_IM,
    a.contract as CONTRACT_IM, a.cid FROM exp_cm_cusid1 a, (sELECT *
    FROM EXP_BFG_CM_JOIN_V WHERE CUSTOMER_ID = 14187) chg WHERE
    a.bfg_con_id IS NULL AND a.bfg_cus_id = chg.customer_id AND
    NOT EXISTS (SELECT a.bfg_con_id FROM exp_cm_cusid1 a WHERE
    a.bfg_con_id IS NOT NULL AND a.bfg_cus_id = chg.customer_id
    AND a.bfg_con_id = chg.contract_id ) UNION SELECT
    /*+no_marge(chg)*/ chg.CHANGE_REFERENCE, chg.Customer_Name,
    chg.Customer_ID, chg.Contract_ID, chg.Change_Title,
    chg.Change_Type, chg.Change_Description, chg.Risk, chg.Impact,
    chg.Urgency, chg.Scheduled_Start_Date, chg.Scheduled_End_Date,
    chg.Scheduled_Start_Date_Int, chg.Scheduled_End_Date_Int,
    chg.Outage_Required, chg.Change_Status, chg.Change_Status_IM,
    chg.Reason_for_change, chg.Customer_Visible, chg.Change_Source,
    chg.Related_Ticket_Type, chg.Related_Ticket_ID,
    chg.Requested_By, chg.Requested_For, chg.Site_ID, chg.Site_Name,
    chg.Element_id, chg.Element_Type, chg.Element_Name,
    chg.Search_flag, chg.remedy_id, chg.Change_Manager,
    chg.Email_Manager, chg.Queue, a.customer as CUSTOMER_IM,
    a.contract as CONTRACT_IM, a.cid FROM exp_cm_cusid1 a, (sELECT *
    FROM EXP_BFG_CM_JOIN_V WHERE CUSTOMER_ID = 14187) chg WHERE
    a.bfg_cus_id = chg.customer_id AND a.bfg_con_id =
    chg.contract_id AND a.bfg_con_id IS NOT NULL
    FINDINGS SECTION (7 findings)
    1- Restructure SQL finding (see plan 1 in explain plans section)
    The predicate REGEXP_LIKE ("T100"."C536871160",'^[[:digit:]]+$') used at
    line ID 26 of the execution plan contains an expression on indexed column
    "C536871160". This expression prevents the optimizer from selecting indices
    on table "ARADMIN"."T100".
    Recommendation
    - Rewrite the predicate into an equivalent form to take advantage of
    indices. Alternatively, create a function-based index on the expression.
    Rationale
    The optimizer is unable to use an index if the predicate is an inequality
    condition or if there is an expression or an implicit data type conversion
    on the indexed column.
    2- Restructure SQL finding (see plan 1 in explain plans section)
    The predicate TO_NUMBER(TRIM("T100"."C536871160"))=:B1 used at line ID 26 of
    the execution plan contains an expression on indexed column "C536871160".
    This expression prevents the optimizer from selecting indices on table
    "ARADMIN"."T100".
    Recommendation
    - Rewrite the predicate into an equivalent form to take advantage of
    indices. Alternatively, create a function-based index on the expression.
    Rationale
    The optimizer is unable to use an index if the predicate is an inequality
    condition or if there is an expression or an implicit data type conversion
    on the indexed column.
    3- Restructure SQL finding (see plan 1 in explain plans section)
    The predicate REGEXP_LIKE ("T100"."C536871160",'^[[:digit:]]+$') used at
    line ID 10 of the execution plan contains an expression on indexed column
    "C536871160". This expression prevents the optimizer from selecting indices
    on table "ARADMIN"."T100".
    Recommendation
    - Rewrite the predicate into an equivalent form to take advantage of
    indices. Alternatively, create a function-based index on the expression.
    Rationale
    The optimizer is unable to use an index if the predicate is an inequality
    condition or if there is an expression or an implicit data type conversion
    on the indexed column.
    4- Restructure SQL finding (see plan 1 in explain plans section)
    The predicate TO_NUMBER(TRIM("T100"."C536871160"))=:B1 used at line ID 10 of
    the execution plan contains an expression on indexed column "C536871160".
    This expression prevents the optimizer from selecting indices on table
    "ARADMIN"."T100".
    Recommendation
    - Rewrite the predicate into an equivalent form to take advantage of
    indices. Alternatively, create a function-based index on the expression.
    Rationale
    The optimizer is unable to use an index if the predicate is an inequality
    condition or if there is an expression or an implicit data type conversion
    on the indexed column.
    5- Restructure SQL finding (see plan 1 in explain plans section)
    The predicate REGEXP_LIKE ("T100"."C536871160",'^[[:digit:]]+$') used at
    line ID 6 of the execution plan contains an expression on indexed column
    "C536871160". This expression prevents the optimizer from selecting indices
    on table "ARADMIN"."T100".
    Recommendation
    - Rewrite the predicate into an equivalent form to take advantage of
    indices. Alternatively, create a function-based index on the expression.
    Rationale
    The optimizer is unable to use an index if the predicate is an inequality
    condition or if there is an expression or an implicit data type conversion
    on the indexed column.
    6- Restructure SQL finding (see plan 1 in explain plans section)
    The predicate TO_NUMBER(TRIM("T100"."C536871160"))=:B1 used at line ID 6 of
    the execution plan contains an expression on indexed column "C536871160".
    This expression prevents the optimizer from selecting indices on table
    "ARADMIN"."T100".
    Recommendation
    - Rewrite the predicate into an equivalent form to take advantage of
    indices. Alternatively, create a function-based index on the expression.
    Rationale
    The optimizer is unable to use an index if the predicate is an inequality
    condition or if there is an expression or an implicit data type conversion
    on the indexed column.
    7- Restructure SQL finding (see plan 1 in explain plans section)
    An expensive "UNION" operation was found at line ID 1 of the execution plan.
    Recommendation
    - Consider using "UNION ALL" instead of "UNION", if duplicates are allowed
    or uniqueness is guaranteed.
    Rationale
    "UNION" is an expensive and blocking operation because it requires
    elimination of duplicate rows. "UNION ALL" is a cheaper alternative,
    assuming that duplicates are allowed or uniqueness is guaranteed.
    ERRORS SECTION
    - The current operation was interrupted because it timed out.
    EXPLAIN PLANS SECTION
    1- Original
    Plan hash value: 1047651452
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Inst |IN-OUT|
    | 0 | SELECT STATEMENT | | 2 | 28290 | 567 (37)| 00:00:07 | | |
    | 1 | SORT UNIQUE | | 2 | 28290 | 567 (37)| 00:00:07 | | |
    | 2 | UNION-ALL | | | | | | | |
    |* 3 | HASH JOIN RIGHT ANTI | | 1 | 14158 | 373 (5)| 00:00:05 | | |
    | 4 | VIEW | VW_SQ_1 | 1 | 26 | 179 (3)| 00:00:03 | | |
    | 5 | NESTED LOOPS | | 1 | 37 | 179 (3)| 00:00:03 | | |
    |* 6 | TABLE ACCESS FULL | T100 | 1 | 28 | 178 (3)| 00:00:03 | | |
    |* 7 | INDEX RANGE SCAN | I1451_536870913_1 | 1 | 9 | 1 (0)| 00:00:01 | | |
    | 8 | NESTED LOOPS | | 1 | 14132 | 193 (5)| 00:00:03 | | |
    |* 9 | HASH JOIN | | 1 | 14085 | 192 (5)| 00:00:03 | | |
    |* 10 | TABLE ACCESS FULL | T100 | 1 | 28 | 178 (3)| 00:00:03 | | |
    | 11 | VIEW | EXP_BFG_CM_JOIN_V | 3 | 42171 | 13 (24)| 00:00:01 | | |
    | 12 | UNION-ALL | | | | | | | |
    |* 13 | HASH JOIN | | 1 | 6389 | 5 (20)| 00:00:01 | | |
    | 14 | REMOTE | PROP_CHANGE_REQUEST_V | 1 | 5979 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
    | 15 | REMOTE | PROP_CHANGE_INVENTORY_V | 1 | 410 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
    | 16 | HASH UNIQUE | | 1 | 6052 | 6 (34)| 00:00:01 | | |
    |* 17 | HASH JOIN | | 1 | 6052 | 5 (20)| 00:00:01 | | |
    | 18 | REMOTE | PROP_CHANGE_REQUEST_V | 1 | 5979 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
    | 19 | REMOTE | PROP_CHANGE_INVENTORY_V | 1 | 73 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
    | 20 | HASH UNIQUE | | 1 | 5979 | 3 (34)| 00:00:01 | | |
    | 21 | REMOTE | PROP_CHANGE_REQUEST_V | 1 | 5979 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
    | 22 | TABLE ACCESS BY INDEX ROWID| T1451 | 1 | 47 | 1 (0)| 00:00:01 | | |
    |* 23 | INDEX RANGE SCAN | I1451_536870913_1 | 1 | | 1 (0)| 00:00:01 | | |
    | 24 | NESTED LOOPS | | 1 | 14132 | 193 (5)| 00:00:03 | | |
    |* 25 | HASH JOIN | | 1 | 14085 | 192 (5)| 00:00:03 | | |
    |* 26 | TABLE ACCESS FULL | T100 | 1 | 28 | 178 (3)| 00:00:03 | | |
    | 27 | VIEW | EXP_BFG_CM_JOIN_V | 3 | 42171 | 13 (24)| 00:00:01 | | |
    | 28 | UNION-ALL | | | | | | | |
    |* 29 | HASH JOIN | | 1 | 6389 | 5 (20)| 00:00:01 | | |
    | 30 | REMOTE | PROP_CHANGE_REQUEST_V | 1 | 5979 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
    | 31 | REMOTE | PROP_CHANGE_INVENTORY_V | 1 | 410 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
    | 32 | HASH UNIQUE | | 1 | 6052 | 6 (34)| 00:00:01 | | |
    |* 33 | HASH JOIN | | 1 | 6052 | 5 (20)| 00:00:01 | | |
    | 34 | REMOTE | PROP_CHANGE_REQUEST_V | 1 | 5979 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
    | 35 | REMOTE | PROP_CHANGE_INVENTORY_V | 1 | 73 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
    | 36 | HASH UNIQUE | | 1 | 5979 | 3 (34)| 00:00:01 | | |
    | 37 | REMOTE | PROP_CHANGE_REQUEST_V | 1 | 5979 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
    | 38 | TABLE ACCESS BY INDEX ROWID | T1451 | 1 | 47 | 1 (0)| 00:00:01 | | |
    |* 39 | INDEX RANGE SCAN | I1451_536870913_1 | 1 | | 1 (0)| 00:00:01 | | |
    Predicate Information (identified by operation id):
    3 - access("ITEM_0"="EXP_BFG_CM_JOIN_V"."CUSTOMER_ID" AND "ITEM_1"="EXP_BFG_CM_JOIN_V"."CONTRACT_ID")
    6 - filter("C536871050" LIKE '%FMS%' AND REGEXP_LIKE ("C536871160",'^[[:digit:]]+$') AND ("C536871088" IS NULL
    OR REGEXP_LIKE ("C536871088",'^[[:digit:]]+$')) AND TO_NUMBER(TRIM("C536871088")) IS NOT NULL AND
    TO_NUMBER(TRIM("C536871160"))=:SYS_B_0 AND "C536871160" IS NOT NULL AND "C536871050" IS NOT NULL AND "C7"=0)
    7 - access("C536870913"="C536870914")
    9 - access("EXP_BFG_CM_JOIN_V"."CUSTOMER_ID"=TO_NUMBER(TRIM("C536871160")))
    10 - filter("C536871050" LIKE '%FMS%' AND REGEXP_LIKE ("C536871160",'^[[:digit:]]+$') AND ("C536871088" IS NULL
    OR REGEXP_LIKE ("C536871088",'^[[:digit:]]+$')) AND TO_NUMBER(TRIM("C536871088")) IS NULL AND
    TO_NUMBER(TRIM("C536871160"))=:SYS_B_0 AND "C536871160" IS NOT NULL AND "C536871050" IS NOT NULL AND "C7"=0)
    13 - access("CHG"."PRP_CHG_REFERENCE"="INV"."PRP_CHG_REFERENCE")
    17 - access("CHG"."PRP_CHG_REFERENCE"="INV"."PRP_CHG_REFERENCE")
    23 - access("C536870913"="C536870914")
    25 - access("EXP_BFG_CM_JOIN_V"."CUSTOMER_ID"=TO_NUMBER(TRIM("C536871160")) AND
    "EXP_BFG_CM_JOIN_V"."CONTRACT_ID"=TO_NUMBER(TRIM("C536871088")))
    26 - filter("C536871050" LIKE '%FMS%' AND REGEXP_LIKE ("C536871160",'^[[:digit:]]+$') AND ("C536871088" IS NULL
    OR REGEXP_LIKE ("C536871088",'^[[:digit:]]+$')) AND TO_NUMBER(TRIM("C536871088")) IS NOT NULL AND
    TO_NUMBER(TRIM("C536871160"))=:SYS_B_1 AND "C536871160" IS NOT NULL AND "C536871050" IS NOT NULL AND "C7"=0)
    29 - access("CHG"."PRP_CHG_REFERENCE"="INV"."PRP_CHG_REFERENCE")
    33 - access("CHG"."PRP_CHG_REFERENCE"="INV"."PRP_CHG_REFERENCE")
    39 - access("C536870913"="C536870914")
    Remote SQL Information (identified by operation id):
    14 - SELECT "PRP_CHG_REFERENCE","CUS_ID","CUS_NAME","CNT_BFG_ID","PRP_TITLE","PRP_CHG_TYPE","PRP_DESCRIPTION","PR
    P_BTIGNITE_PRIORITY","PRP_CUSTOMER_PRIORITY","PRP_CHG_URGENCY","PRP_RESPONSE_REQUIRED_BY","PRP_REQUIRED_BY_DATE","P
    RP_CHG_OUTAGE_FLAG","PRP_CHG_STATUS","PRP_CHG_FOR_REASON","PRP_CHG_CUSTOMER_VISIBILITY","PRP_CHG_SOURCE_SYSTEM","PR
    P_RELATED_TICKET_TYPE","PRP_RELATED_TICKET_ID","CHANGE_INITIATOR","CHANGE_ORIGINATOR","CHANGE_MANAGER","QUEUE"
    FROM "PROP_OWNER2"."PROP_CHANGE_REQUEST_V" "CHG" WHERE "CUS_ID"=:1 (accessing 'ARS_BFG_DBLINK.WORLD' )
    15 - SELECT "PRP_CHG_REFERENCE","SIT_ID","SIT_NAME","ELEMENT_SUMMARY","PRODUCT_NAME" FROM
    "PROP_OWNER2"."PROP_CHANGE_INVENTORY_V" "INV" (accessing 'ARS_BFG_DBLINK.WORLD' )
    18 - SELECT "PRP_CHG_REFERENCE","CUS_ID","CUS_NAME","CNT_BFG_ID","PRP_TITLE","PRP_CHG_TYPE","PRP_DESCRIPTION","PR
    P_BTIGNITE_PRIORITY","PRP_CUSTOMER_PRIORITY","PRP_CHG_URGENCY","PRP_RESPONSE_REQUIRED_BY","PRP_REQUIRED_BY_DATE","P
    RP_CHG_OUTAGE_FLAG","PRP_CHG_STATUS","PRP_CHG_FOR_REASON","PRP_CHG_CUSTOMER_VISIBILITY","PRP_CHG_SOURCE_SYSTEM","PR
    P_RELATED_TICKET_TYPE","PRP_RELATED_TICKET_ID","CHANGE_INITIATOR","CHANGE_ORIGINATOR","CHANGE_MANAGER","QUEUE"
    FROM "PROP_OWNER2"."PROP_CHANGE_REQUEST_V" "CHG" WHERE "CUS_ID"=:1 (accessing 'ARS_BFG_DBLINK.WORLD' )
    19 - SELECT "PRP_CHG_REFERENCE","SIT_ID","SIT_NAME" FROM "PROP_OWNER2"."PROP_CHANGE_INVENTORY_V" "INV"
    (accessing 'ARS_BFG_DBLINK.WORLD' )
    21 - SELECT "PRP_CHG_REFERENCE","CUS_ID","CUS_NAME","CNT_BFG_ID","PRP_TITLE","PRP_CHG_TYPE","PRP_DESCRIPTION","PR
    P_BTIGNITE_PRIORITY","PRP_CUSTOMER_PRIORITY","PRP_CHG_URGENCY","PRP_RESPONSE_REQUIRED_BY","PRP_REQUIRED_BY_DATE","P
    RP_CHG_OUTAGE_FLAG","PRP_CHG_STATUS","PRP_CHG_FOR_REASON","PRP_CHG_CUSTOMER_VISIBILITY","PRP_CHG_SOURCE_SYSTEM","PR
    P_RELATED_TICKET_TYPE","PRP_RELATED_TICKET_ID","CHANGE_INITIATOR","CHANGE_ORIGINATOR","CHANGE_MANAGER","QUEUE"
    FROM "PROP_OWNER2"."PROP_CHANGE_REQUEST_V" "CHG" WHERE "CUS_ID"=:1 (accessing 'ARS_BFG_DBLINK.WORLD' )
    30 - SELECT "PRP_CHG_REFERENCE","CUS_ID","CUS_NAME","CNT_BFG_ID","PRP_TITLE","PRP_CHG_TYPE","PRP_DESCRIPTION","PR
    P_BTIGNITE_PRIORITY","PRP_CUSTOMER_PRIORITY","PRP_CHG_URGENCY","PRP_RESPONSE_REQUIRED_BY","PRP_REQUIRED_BY_DATE","P
    RP_CHG_OUTAGE_FLAG","PRP_CHG_STATUS","PRP_CHG_FOR_REASON","PRP_CHG_CUSTOMER_VISIBILITY","PRP_CHG_SOURCE_SYSTEM","PR
    P_RELATED_TICKET_TYPE","PRP_RELATED_TICKET_ID","CHANGE_INITIATOR","CHANGE_ORIGINATOR","CHANGE_MANAGER","QUEUE"
    FROM "PROP_OWNER2"."PROP_CHANGE_REQUEST_V" "CHG" WHERE "CUS_ID"=:1 (accessing 'ARS_BFG_DBLINK.WORLD' )
    31 - SELECT "PRP_CHG_REFERENCE","SIT_ID","SIT_NAME","ELEMENT_SUMMARY","PRODUCT_NAME" FROM
    "PROP_OWNER2"."PROP_CHANGE_INVENTORY_V" "INV" (accessing 'ARS_BFG_DBLINK.WORLD' )
    34 - SELECT "PRP_CHG_REFERENCE","CUS_ID","CUS_NAME","CNT_BFG_ID","PRP_TITLE","PRP_CHG_TYPE","PRP_DESCRIPTION","PR
    P_BTIGNITE_PRIORITY","PRP_CUSTOMER_PRIORITY","PRP_CHG_URGENCY","PRP_RESPONSE_REQUIRED_BY","PRP_REQUIRED_BY_DATE","P
    RP_CHG_OUTAGE_FLAG","PRP_CHG_STATUS","PRP_CHG_FOR_REASON","PRP_CHG_CUSTOMER_VISIBILITY","PRP_CHG_SOURCE_SYSTEM","PR
    P_RELATED_TICKET_TYPE","PRP_RELATED_TICKET_ID","CHANGE_INITIATOR","CHANGE_ORIGINATOR","CHANGE_MANAGER","QUEUE"
    FROM "PROP_OWNER2"."PROP_CHANGE_REQUEST_V" "CHG" WHERE "CUS_ID"=:1 (accessing 'ARS_BFG_DBLINK.WORLD' )
    35 - SELECT "PRP_CHG_REFERENCE","SIT_ID","SIT_NAME" FROM "PROP_OWNER2"."PROP_CHANGE_INVENTORY_V" "INV"
    (accessing 'ARS_BFG_DBLINK.WORLD' )
    37 - SELECT "PRP_CHG_REFERENCE","CUS_ID","CUS_NAME","CNT_BFG_ID","PRP_TITLE","PRP_CHG_TYPE","PRP_DESCRIPTION","PR
    P_BTIGNITE_PRIORITY","PRP_CUSTOMER_PRIORITY","PRP_CHG_URGENCY","PRP_RESPONSE_REQUIRED_BY","PRP_REQUIRED_BY_DATE","P
    RP_CHG_OUTAGE_FLAG","PRP_CHG_STATUS","PRP_CHG_FOR_REASON","PRP_CHG_CUSTOMER_VISIBILITY","PRP_CHG_SOURCE_SYSTEM","PR
    P_RELATED_TICKET_TYPE","PRP_RELATED_TICKET_ID","CHANGE_INITIATOR","CHANGE_ORIGINATOR","CHANGE_MANAGER","QUEUE"
    FROM "PROP_OWNER2"."PROP_CHANGE_REQUEST_V" "CHG" WHERE "CUS_ID"=:1 (accessing 'ARS_BFG_DBLINK.WORLD' )
    -------------------------------------------------------------------------------

    Please review the following threads:
    {message:id=9360002}
    {message:id=9360003}
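    As a sketch only of the advisor's two concrete recommendations (object names are copied from the report; verify them against your schema before applying anything):

    -- 1. A function-based index so predicates such as TO_NUMBER(TRIM("C536871160")) = :B1
    --    can use an index. Note: this CREATE INDEX will fail if any non-numeric value exists
    --    in the column, so check the data first.
    CREATE INDEX aradmin.t100_c536871160_fbi
        ON aradmin.t100 (TO_NUMBER(TRIM(c536871160)));

    -- 2. If the two branches of the query cannot return duplicate rows, replace UNION with
    --    UNION ALL to avoid the SORT UNIQUE step shown at line 1 of the plan.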

  • Response Time of a query in 2 different enviroment

    Hi guys, Luca speaking, sorry for the badly written English.
    The question is:
    The same query runs against tables with the same definition, number of rows and kind of tablespace, and the tables are analyzed.
    *) I have a query in Benchmark with good results in execution time, the execution plan is really good
    *) in Production the execution plan is not so good, the response time isn't comparable (hours vs seconds)
    #### The Execution Plan are different ####
    #### The stats are the same ####
    This is the table storico.FLUSSO_ASTCM_INC A, with these stats in Benchmark:
    Owner = STORICO, Name = FLUSSO_ASTCM_INC, Tablespace = TBS_DATA, NumRows = 2861719, Blocks = 32025, EmptyBlocks = 0, AvgSpace = 0, ChainCnt = 0, AvgRowLen = 74, UserStats = NO, GlobalStats = YES, LastAnalyzed = 10/01/2006 15.53.43, SampleSize = 2861719, Monitoring = NO, Status = Normal, Successful Completion: 10/01/2006 16.26.05
    In Production the stats are the same.
    The other table is an external table.
    The only difference that I have noticed so far is the tablespace the tables are defined on:
    Production
    EXTENT MANAGEMENT LOCAL UNIFORM SIZE 512K
    Benchmark
    EXTENT MANAGEMENT LOCAL AUTOALLOCATE
    I'm studying it at the moment.
    What do I have to check to obtain the same execution plan (without changing the query)?
    This is the query:
    SELECT
    'test query',
    sysdate,
    storico.tc_scarti_seq.NEXTVAL,
    NULL, --ROW_ID
    -- A.AZIONE,
    'I',
    A.CODE_PREF_TCN,
    A.CODE_NUM_TCN,
    'ADSL non presente su CRM' ,
    -- a.AZIONE
    'I'
    || ';' || a.CODE_PREF_TCN
    || ';' || a.CODE_NUM_TCN
    || ';' || a.DATA_ATVZ_CMM
    || ';' || a.CODE_PREF_DSR
    || ';' || a.CODE_NUM_TFN
    || ';' || a.DATA_CSSZ_CMM
    || ';' || a.TIPO_EVENTO
    || ';' || a.INVARIANTE_FONIA
    || ';' || a.CODE_TIPO_ADSL
    || ';' || a.TIPO_RICHIESTA_ATTIVAZIONE
    || ';' || a.TIPO_RICHIESTA_CESSAZIONE
    || ';' || a.ROW_ID_ATTIVAZIONE
    || ';' || a.ROW_ID_CESSAZIONE
    FROM storico.FLUSSO_ASTCM_INC A
    WHERE NOT EXISTS (SELECT 1 FROM storico.EXT_CRM_X_ADSL B
    WHERE A.CODE_PREF_DSR = B.CODE_PREF_DSR
    AND A.CODE_NUM_TFN = B.CODE_NUM_TFN
    AND A.INVARIANTE_FONIA = B.INVARIANTE_FONIA
    AND B.NOME_SERVIZIO NOT IN ('ADSL SMART AGGREGATORE','ADSL SMART TWIN','ALICE IMPRESA TWIN',
    'SERVIZIO ADSL PER VIDEOLOTTERY','WI - FI') )
    Output of set autotrace traceonly explain in PRODUCTION (ESERCIZIO)
    Execution Plan
    0 SELECT STATEMENT Optimizer=CHOOSE (Cost=144985 Card=143086 B
    1 0 SEQUENCE OF 'TC_SCARTI_SEQ'
    2 1 FILTER
    3 2 TABLE ACCESS (FULL) OF 'FLUSSO_ASTCM_INC' (Cost=1899 C
    4 2 EXTERNAL TABLE ACCESS* (FULL) OF 'EXT_CRM_X_ADSL' (Cos :Q370300
    4 PARALLEL_TO_SERIAL SELECT /*+ NO_EXPAND FULL(A1) */ A1."CODE_PR
    Output of set autotrace traceonly explain in BENCHMARK
    Execution Plan
    0 SELECT STATEMENT Optimizer=CHOOSE (Cost=3084 Card=2861719 By
    tes=291895338)
    1 0 SEQUENCE OF 'TC_SCARTI_SEQ'
    2 1 HASH JOIN* (ANTI) (Cost=3084 Card=2861719 Bytes=29189533 :Q810002
    8)
    3 2 TABLE ACCESS* (FULL) OF 'FLUSSO_ASTCM_INC' (Cost=3082 :Q810000
    Card=2861719 Bytes=183150016)
    4 2 EXTERNAL TABLE ACCESS* (FULL) OF 'EXT_CRM_X_ADSL' (Cos :Q810001
    t=2 Card=1 Bytes=38)
    2 PARALLEL_TO_SERIAL SELECT /*+ ORDERED NO_EXPAND USE_HASH(A2) US
    E_ANTI(A2) */ A1.C0,A1.C1,A1.C2,A1.C
    3 PARALLEL_FROM_SERIAL
    4 PARALLEL_TO_PARALLEL SELECT /*+ NO_EXPAND FULL(A1) */ A1."CODE_PR
    EF_DSR" C0,A1."CODE_NUM_TFN" C1,A1."
    The differences in the init.ora are in these parameters.
    Could they influence the optimizer so that the execution plans come out so different?
    background_dump_dest
    cpu_count
    db_file_multiblock_read_count
    db_files
    db_32k_cache_size
    dml_locks
    enqueue_resources
    event
    fast_start_mttr_target
    fast_start_parallel_rollback
    hash_area_size
    log_buffer
    log_parallelism
    max_rollback_segments
    open_cursors
    open_links
    parallel_execution_message_size
    parallel_max_servers
    processes
    query_rewrite_enabled
    remote_login_passwordfile
    session_cached_cursors
    sessions
    sga_max_size
    shared_pool_reserved_size
    sort_area_retained_size
    sort_area_size
    star_transformation_enabled
    transactions
    undo_retention
    user_dump_dest
    utl_file_dir
    Please Help me
    Thanks a lot Luca

    Hi Luca,
    Test and production systems are nearly identical (same OS, same HW platform, same software version, same release).
    You're using external tables. Is the speed of those drives identical?
    Have you analyzed the schema with the same statement? Could you send me the statement?
    Do you have system statistics?
    Have you tested the statement in an environment which is nearly like production (concurrent users etc.)?
    Could you send me the top 5 wait events from the statspack report?
    Are the data in production and test identical? No data changed? No index dropped? No additional index? Are all tables and indexes analyzed?
    Regards
    Marc
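    A small sketch of one way to compare the optimizer-relevant settings on both systems: run the query below in each environment and diff the output (the parameter list is taken from the ones Luca mentioned).

    SELECT name, value
    FROM   v$parameter
    WHERE  name IN ('db_file_multiblock_read_count',
                    'hash_area_size',
                    'sort_area_size',
                    'parallel_max_servers',
                    'query_rewrite_enabled',
                    'star_transformation_enabled')
    ORDER  BY name;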

  • How to increase built-in cisco vpn peer response timer?

    Hi,
    I use OS x in-built cisco vpn client to connect to work VPN.
    The VPN server, or perhaps the RADIUS server, takes a long time to return a response. OS X always tries for 10 seconds, then drops the connection when there is no response from the remote peer. When I use the Cisco VPN client on a Windows machine, the client has a setting to allow for a 90-second remote peer response time, and it works fine there.
    I prefer to use OS X as my primary working environment, so I need to fix this problem. My question is how to increase the phase 1 & 2 timers for VPN under 10.6.7. I have tried to change the racoon.conf phase 1 & phase 2 timers, but it made no difference; OS X only tries for 10 seconds.
    Any ideas? (besides asking work people to fix the server or radius problem)
    Thanks
    jmsherry123

    I have the same problem ... the certificate is imported into the keychain, but I can't select it when setting up the VPN connection.

  • Help required in optimizing the query response time

    Hi,
    I am working on an application which uses a JDBC thin client. My requirement is to select all the rows in one table and use the column values to select data in another table in another database.
    The first table can have a maximum of 6 million rows, but the second table has only around 9000 rows.
    My first query returns within 30-40 milliseconds when the table has 200000 rows. But when I iterate over the result set and query the second table, each query takes around 4 milliseconds.
    The second query's selection criterion is to find the value within a range.
    for example my_table ( varchar2 column1, varchar2 start_range, varchar2 end_range);
    My first query returns a result which then will be used to select using the following query
    select column1 from my_table where start_range < my_value and end_range> my_value;
    I have created an index on start_range and end_range. This query takes around 4 milliseconds, which I think is too much.
    I am using a PreparedStatement for the second query loop.
    Can someone suggest how I can improve the query response time?
    Regards,
    Shyam

    Try the code below.
    Prerequisite: you should know how to pass ARRAY objects to Oracle and receive result sets back in Java. There are thousands of samples available on the net.
    I have written sample DB code for the same interaction.
    Procedure get_list takes an array input from Java and returns the record set back to Java. You can change the table names and the criteria.
    Good luck.
    DROP TYPE idlist;
    CREATE OR REPLACE TYPE idlist AS TABLE OF NUMBER;
    CREATE OR REPLACE PACKAGE mypkg1
    AS
       PROCEDURE get_list (myval_list idlist, orefcur OUT sys_refcursor);
    END mypkg1;
    CREATE OR REPLACE PACKAGE BODY mypkg1
    AS
       PROCEDURE get_list (myval_list idlist, orefcur OUT sys_refcursor)
       AS
          ctr   NUMBER;
       BEGIN
          DBMS_OUTPUT.put_line (myval_list.COUNT);
          FOR x IN (SELECT object_name, object_id, myvalue
                      FROM user_objects a,
                           (SELECT myval_list (ROWNUM + 1) myvalue
                              FROM TABLE (myval_list)) b
                     WHERE a.object_id < b.myvalue)
          LOOP
             DBMS_OUTPUT.put_line (   x.object_name
                                   || ' - '
                                   || x.object_id
                                   || ' - '
                                   || x.myvalue);
          END LOOP;
       END;
    END mypkg1;
    Testing the code above. Make sure dbms output is ON.
    DECLARE
       a      idlist;
       refc   sys_refcursor;
       c number;
    BEGIN
       SELECT x.nu
       BULK COLLECT INTO a
         FROM (SELECT 5000 nu
                 FROM DUAL) x;
       mypkg1.get_list (a, refc);
    END;
    Vishal V.
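    As a further sketch for the original range query (using the column names given in the question): a composite index covering both range columns plus the selected column lets each probe be answered from the index alone, which may shave time off the per-call cost.

    CREATE INDEX my_table_range_ix
        ON my_table (start_range, end_range, column1);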

  • How to find the Response time for a particular Transaction

    Hello Experts,
            I am implementing a BAdI to achieve a customer enhancement for the XD01 transaction. I need to confirm to the customer what the response time of the system is before and after the implementation:
    Response time BEFORE BAdI Implementation
    Response time AFTER BAdI Implementation
    Where can i get this.
    Help me in this regard
    Best Regards
    SRiNi

    Hello,
    Within STAD, enter the time range that the user was executing the transaction within as well as the user name. The time field indicates the time when the transaction would have ended. STAD adds some extra time on using your time interval. Depending on how long the transaction ran, you can set the length you want it to display. This means that if it is set to 10, STAD will display statistical records from transactions that ended within that 10 minute period.
    The selection screen also gives you a few options for display mode.
    - Show all statistic records, sorted by start time
    This shows you all of the transaction steps, but they are not grouped in any way.
    - Show all records, grouped by business transaction
    This shows the transaction steps grouped by transaction ID (shown in the record as Trans. ID). The times are not cumulative; they are the times for each individual step.
    - Show Business Transaction Totals
    This shows the transaction steps grouped by transaction ID. However, instead of just listing them, you can drill down from the top level. The top level shows you the overall response time, and as you drill down you can get to the response time of each individual step.
    Note that you also need to add the user into the selection criteria. Everything else you can leave alone in this case.
    Once you have the records displayed, you can double click them to get a detailed record. This will show you the following:
    - Breakdown of response time (wait for work process, processing time, load time, generating time, roll time, DB time, enqueue time). This makes STAD a great place to start for performance analysis as you will then know whether you will need to look at SQL, processing, or any other component of response time first.
    - Stats on the data selected within the execution
    - Memory utilization of the transaction
    - RFCs executed (including the calling time and remote execution time - very useful with performance analysis of interfaces)
    - Much more.
    As this chain of comments has previously indicated, you are best off using STAD if you want an accurate indication of response time. The ST12 trace (which combines the SE30 ABAP trace and the ST05 SQL trace) gives times that are less accurate than the values you get from STAD. I am not discounting the value of ST12 by any means; it is a very powerful tool to help you tune your transactions.
    I hope this information is helpful!
    Kind regards,
    Geoff Irwin
    Senior Support Consultant
    SAP Active Global Support

  • How to capture transaction response time in SQL

    I need to capture the transaction response time (i.e. a ping test) so it can be calculated for peak hours and averaged on a daily basis,
    and
    the page refresh time, calculated no less than every 2 hours for peak hours and averaged on a daily basis.
    Please assist
    k

    My best guess as to what you are looking for is something like the following (C#):
    // Note: this snippet assumes "using System.Data;" and "using System.Data.SqlClient;"
    // are in scope for CommandType, SqlDbType and ParameterDirection.
    private int? Ping()
    {
        System.Data.SqlClient.SqlConnection objConnection;
        System.Data.SqlClient.SqlCommand objCommand;
        System.Data.SqlClient.SqlParameter objParameter;
        System.Diagnostics.Stopwatch objStopWatch = new System.Diagnostics.Stopwatch();
        DateTime objStartTime, objEndTime, objServerTime;
        int intToServer, intFromServer;
        int? intResult = null;
        objConnection = new System.Data.SqlClient.SqlConnection("Data Source=myserver;Initial Catalog=master;Integrated Security=True;Connect Timeout=3;Network Library=dbmssocn;");
        using (objConnection)
        {
            objConnection.Open();
            using (objCommand = new System.Data.SqlClient.SqlCommand())
            {
                objCommand.Connection = objConnection;
                objCommand.CommandType = CommandType.Text;
                objCommand.CommandText = @"select @ServerTime = sysdatetime()";
                objParameter = new System.Data.SqlClient.SqlParameter("@ServerTime", SqlDbType.DateTime2, 7);
                objParameter.Direction = ParameterDirection.Output;
                objCommand.Parameters.Add(objParameter);
                objStopWatch.Start();
                objStartTime = DateTime.Now;
                objCommand.ExecuteNonQuery();
                objEndTime = DateTime.Now;
                objStopWatch.Stop();
                objServerTime = DateTime.Parse(objCommand.Parameters["@ServerTime"].Value.ToString());
                intToServer = objServerTime.Subtract(objStartTime).Milliseconds;
                intFromServer = objEndTime.Subtract(objServerTime).Milliseconds;
                intResult = (int?)objStopWatch.ElapsedMilliseconds;
                System.Diagnostics.Debug.Print(string.Format("Milliseconds from client to server {0}, milliseconds from server back to client {1}, and milliseconds round trip {2}.", intToServer, intFromServer, intResult));
            }
        }
        return intResult;
    }
    Now, while the round trip measurement is fairly accurate give or take 100ms, any measurement of latency to and from SQL Server is going to be subject to the accuracy of the time synchronization of the client and server.  If the server's and client's
    time isn't synchronized precisely then you will get odd results in the variables intToServer and intFromServer.
    Since the round trip result of the test is measured entirely on the client that value isn't subject to the whims of client/server time synchronization.

  • Spatial Query Response Time

    O/S - Sun Solaris
    ver - Oracle 8.1.7
    I am trying to improve the response time of the following query. Both tables contain polygons.
    select a.data_id, a.GEOLOC from information_data a, shape_data b where a.info_id = 2 and b.shape_id = 271 and sdo_filter(a.GEOLOC,b.GEOLOC,'querytype=window')='TRUE'
    The response time with info_id not indexed is 9 seconds. When I index info_id, I get the following error. Why is indexing info_id causing a spatial index error ? Also, other than manipulating the tiling level, is there anything else that could improve the response time ?
    ERROR at line 1:
    ORA-29902: error in executing ODCIIndexStart() routine
    ORA-13208: internal error while evaluating [window SRID does not match layer
    SRID] operator
    ORA-06512: at "MDSYS.SDO_INDEX_METHOD", line 84
    ORA-06512: at line 1
    Thanks,
    Ravi.

    Hello Ravi,
    Both layers should have SDO_SRID values set in order for the index to work properly.
    After you do that you might want to add an Oracle hint to the query:
    select /*+ ordered */ a.data_id, a.GEOLOC
    from shape_data b, information_data a
    where a.info_id = 2 and b.shape_id = 271
    and sdo_filter(a.GEOLOC,b.GEOLOC,'querytype=window')='TRUE' ;
    Hope this helps,
    Dan
    Also, if only one or very few rows have a.info_id=2 then the function sdo_geom.relate
    might also work quickly.
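    For reference, a sketch of setting the SRID on both layers (8307 is only a placeholder; use the SRID of your coordinate system, and drop and re-create the spatial indexes afterwards):

    UPDATE information_data a SET a.geoloc.sdo_srid = 8307 WHERE a.geoloc IS NOT NULL;
    UPDATE shape_data b SET b.geoloc.sdo_srid = 8307 WHERE b.geoloc IS NOT NULL;
    UPDATE user_sdo_geom_metadata SET srid = 8307
     WHERE table_name IN ('INFORMATION_DATA', 'SHAPE_DATA');
    COMMIT;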

  • How to obtain the Query Response Time of a query?

    Given the average row length of the tables and the number of rows in each table,
    is there a way to estimate the query response time of a query involving
    those tables? The query includes joins as well.
    For example, suppose there 3 tables t1, t2, t3. I wish to obtain the
    time it takes for the following query:
    Query
    SELECT t1.col1, t2.col2
    FROM t1, t2, t3
    WHERE t1.col1 = t2.col2
    AND t1.col2 IN ('a', 'c', 'd')
    AND t2.col1 = t3.col2
    AND t2.col1 = t1.col1 (+)
    ORDER BY t1.col1
    Given are:
    Average Row Length of t1 = 200 bytes
    Average Row Length of t2 = 100 bytes
    Average Row Length of t3 = 500 bytes
    No of rows in t1 = 100
    No of rows in t2 = 1000
    No of rows in t3 = 500
    What is required is the 'query response time' for the said query.

    I do not know how to do it myself. But if you are running Oracle 10g, I believe that there is a new tool called: SQL Tuning Advisor which might be able to help.
    Here are some links I found doing a google search, and it looks like it might meet your needs and even give you more information on how to improve your code.
    http://www.databasejournal.com/features/oracle/article.php/3492521
    http://www.databasejournal.com/features/oracle/article.php/3387011
    http://www.oracle.com/technology/obe/obe10gdb/manage/perflab/perflab.htm
    http://www.oracle.com/technology/pub/articles/10gdba/week18_10gdba.html
    http://www.oracle-base.com/articles/10g/AutomaticSQLTuning10g.php
    Have fun reading:
    You can get help from teachers, but you are going to have to learn a lot by yourself, sitting alone in a room ....Dr. Seuss
    Regards
    Tim
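    A minimal sketch of driving the SQL Tuning Advisor through the DBMS_SQLTUNE API mentioned in the links above (the task name and SQL text are placeholders; the ADVISOR privilege is required):

    DECLARE
       l_task VARCHAR2(30);
    BEGIN
       l_task := DBMS_SQLTUNE.create_tuning_task(
                    sql_text  => 'SELECT t1.col1, t2.col2 FROM t1, t2, t3 WHERE ...',
                    task_name => 'demo_tuning_task');
       DBMS_SQLTUNE.execute_tuning_task(task_name => 'demo_tuning_task');
    END;
    /
    SELECT DBMS_SQLTUNE.report_tuning_task('demo_tuning_task') FROM dual;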

  • Query Tuning - Response time Statistics collection

    Our application is load tested for a period of 1 hour with peak load.
    For this specific period of time, thousands of queries get executed in the database.
    What we need, for one particular query, say "select XYZ from ABC", within this span of 1 hour, are statistics like:
    Number of times executed
    Average response time
    Maximum response time
    Minimum response time
    90th percentile response time (sorted in ascending order, the 90th percentile value)
    All these statistics are possible if I can get all the response times for that particular query for that period of 1 hour.
    I tried using SQL trace and TKPROF but was unable to get all these statistics.
    The application uses connection pooling, so connections are taken as and when needed.
    Any thoughts on this?
    Appreciate your help.

    I don't think v$sqlarea can help me out with the exact stats I need, but it certainly has a lot of other stats. BTW, there is no dictionary view called v$sqlstats.
    There are other applications that share the same database where I am trying to capture stats for my application, so flushing the cache, which currently has 30K rows, is not a feasible solution.
    Any more thoughts on this?
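    One option worth sketching, if your version exposes sql_id (10g and later): snapshot the cumulative figures for the statement before and after the 1-hour window and diff them. This gives execution counts and averages, though not the percentiles asked for, which would still need per-execution tracing. The sql_id below is a placeholder.

    SELECT sql_id,
           executions,
           ROUND(elapsed_time / 1000) AS total_elapsed_ms,
           ROUND(elapsed_time / NULLIF(executions, 0) / 1000, 2) AS avg_elapsed_ms
    FROM   v$sqlarea
    WHERE  sql_id = 'abcd1234efgh5';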

  • Significant difference in response times for same query running on Windows client vs database server

    I have a query which is taking a long time to return the results using the Oracle client.
    When I run this query on our database server (Unix/Solaris) it completes in 80 seconds.
    When I run the same query on a Windows client it completes in 47 minutes.
    Ideally I would like to get a response time equivalent on the Windows client to what I get when running this on the database server.
    In both cases the query plans are the same.
    The query and plan is shown below :
    {code}
    SQL> explain plan
      2  set statement_id = 'SLOW'
      3  for
      4  SELECT DISTINCT /*+ FIRST_ROWS(503) */ objecttype.id_object
      5  FROM documents objecttype WHERE objecttype.id_type_definition = 'duotA9'
      6  ;
    Explained.
    SQL> select * from table(dbms_xplan.display('PLAN_TABLE','SLOW','TYPICAL'));
    PLAN_TABLE_OUTPUT
    | Id  | Operation          | Name      | Rows  | Bytes |TempSpc| Cost (%CPU)|
    |   0 | SELECT STATEMENT   |           |  2852K|    46M|       | 69851   (1)|
    |   1 |  HASH UNIQUE       |           |  2852K|    46M|   153M| 69851   (1)|
    |*  2 |   TABLE ACCESS FULL| DOCUMENTS |  2852K|    46M|       | 54063   (1)|
    {code}
    Are there are configuration changes that can be done on the Oracle client or database to improve the response times for the query when it is running from the client?
    The version on the database server is 10.2.0.1.0
    The version of the oracle client is also 10.2.0.1.0
    I am happy to provide any further information if required.
    Thank you in advance.

    > I have a query which is taking a long time to return the results using the Oracle client.
    > When I run this query on our database server (Unix/Solaris) it completes in 80 seconds.
    > When I run the same query on a Windows client it completes in 47 minutes.
    There are NO queries that 'run' on a client. Queries ALWAYS run within the database server.
    A client can choose when to FETCH query results. In sql developer (or toad) I can choose to get 10 rows at a time. Until I choose to get the next set of 10 rows NO rows will be returned from the server to the client; That query might NEVER complete.
    You may get the same results depending on the client you are using. Post your question in a forum for whatever client you are using.
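    If the client happens to be SQL*Plus, one thing worth checking (a sketch, not a diagnosis) is the fetch size, since a small array size multiplies the number of network roundtrips for a large result set:

    SET ARRAYSIZE 500
    SET AUTOTRACE TRACEONLY STATISTICS
    SELECT DISTINCT /*+ FIRST_ROWS(503) */ objecttype.id_object
    FROM   documents objecttype
    WHERE  objecttype.id_type_definition = 'duotA9';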

  • Explain plan - lower cost but higher response time in 11g compared to 10g

    Hello,
    I have a strange scenario where I'm migrating a DB from a standalone Sun FS running the 10g RDBMS to a 2-node Sun/ASM 11g RAC environment. The issue is with the response time of queries.
    In 11g Env:
    SQL> select last_analyzed, num_rows from dba_tables where owner='MARKETHEALTH' and table_name='NCP_DETAIL_TAB';
    LAST_ANALYZED NUM_ROWS
    11-08-2012 18:21:12 3413956
    Elapsed: 00:00:00.30
    In 10g Env:
    SQL> select last_analyzed, num_rows from dba_tables where owner='MARKETHEALTH' and table_name='NCP_DETAIL_TAB';
    LAST_ANAL NUM_ROWS
    07-NOV-12 3502160
    Elapsed: 00:00:00.04
    If you look at the response times, even a simple query on dba_tables takes ~8 times longer. Any ideas what might be causing this? I have compared the explain plans and they are exactly the same; moreover, the cost is lower in the 11g env than in the 10g env, but the response time is still higher.
    BTW, I'm running the queries directly on the server, so there is no network latency in play here.
    Thanks in advance
    aBBy.

    *11g Env:*
    PLAN_TABLE_OUTPUT
    Plan hash value: 4147636274
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1104 | 376K| 394 (1)| 00:00:05 |
    | 1 | SORT ORDER BY | | 1104 | 376K| 394 (1)| 00:00:05 |
    | 2 | TABLE ACCESS BY INDEX ROWID| NCP_DETAIL_TAB | 1104 | 376K| 393 (1)| 00:00:05 |
    |* 3 | INDEX RANGE SCAN | IDX_NCP_DET_TAB_US | 1136 | | 15 (0)| 00:00:01 |
    PLAN_TABLE_OUTPUT
    Predicate Information (identified by operation id):
    3 - access("UNIT_ID"='ten03.burien.wa.seattle.comcast.net')
    15 rows selected.
    *10g Env:*
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 4147636274
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1137 | 373K| 389 (1)| 00:00:05 |
    | 1 | SORT ORDER BY | | 1137 | 373K| 389 (1)| 00:00:05 |
    | 2 | TABLE ACCESS BY INDEX ROWID| NCP_DETAIL_TAB | 1137 | 373K| 388 (1)| 00:00:05 |
    |* 3 | INDEX RANGE SCAN | IDX_NCP_DET_TAB_US | 1137 | | 15 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    3 - access("UNIT_ID"='ten03.burien.wa.seattle.comcast.net')
    15 rows selected.
    The query used is:
    explain plan for
    select
    NCP_DETAIL_ID ,
    NCP_ID ,
    STATUS_ID ,
    FIBER_NODE ,
    NODE_DESC ,
    GL ,
    FTA_ID ,
    OLD_BUS_ID ,
    VIRTUAL_NODE_IND ,
    SERVICE_DELIVERY_TYPE ,
    HHP_AUDIT_QTY ,
    COMMUNITY_SERVED ,
    CMTS_CARD_ID ,
    OPTICAL_TRANSMITTER ,
    OPTICAL_RECEIVER ,
    LASER_GROUP_ID ,
    UNIT_ID ,
    DS_SLOT ,
    DOWNSTREAM_PORT_ID ,
    DS_PORT_OR_MOD_RF_CHAN ,
    DOWNSTREAM_FREQ ,
    DOWNSTREAM_MODULATION ,
    UPSTREAM_PORT_ID ,
    UPSTREAM_PORT ,
    UPSTREAM_FREQ ,
    UPSTREAM_MODULATION ,
    UPSTREAM_WIDTH ,
    UPSTREAM_LOGICAL_PORT ,
    UPSTREAM_PHYSICAL_PORT ,
    NCP_DETAIL_COMMENTS ,
    ROW_CHANGE_IND ,
    STATUS_DATE ,
    STATUS_USER ,
    MODEM_COUNT ,
    NODE_ID ,
    NODE_FIELD_ID ,
    CREATE_USER ,
    CREATE_DT ,
    LAST_CHANGE_USER ,
    LAST_CHANGE_DT ,
    UNIT_ID_IP ,
    US_SLOT ,
    MOD_RF_CHAN_ID ,
    DOWNSTREAM_LOGICAL_PORT ,
    STATE
    from markethealth.NCP_DETAIL_TAB
    WHERE UNIT_ID = :B1
    ORDER BY UNIT_ID, DS_SLOT, DS_PORT_OR_MOD_RF_CHAN, FIBER_NODE
    This is the query used for Query 1.
    The stats differences are:
    1. The row count differs by approximately 90K (more rows in the 10g env).
    2. The RAC env has 4 additional columns (excluded from the select statement for analysis purposes).
    3. Gather stats was performed with estimate_percent = 20 in 10g and estimate_percent = 50 in 11g.
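    To rule out the sampling difference noted in point 3, one option (a sketch only) is to gather stats the same way on both sides, for example:

    BEGIN
       DBMS_STATS.gather_table_stats(
          ownname          => 'MARKETHEALTH',
          tabname          => 'NCP_DETAIL_TAB',
          estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
          cascade          => TRUE);
    END;
    /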
