Horrible response times

Our ECC IDES system had horrible response times today. For the first time 17 users were working on the system (Windows 2008 / SQL Server 2005 based); before that there were at most 5 people.
The server was built by a homogeneous system copy from a blade machine (it is now a virtual machine).
There was enough disk space. As I could not log in, I had to restart the system three times to get logged on, and even then it took almost a few minutes to log on.
However, I checked the workload (transaction ST03N) in the system and found that the top ABAP entries were: Login_Pw, SESSION_MANAGER, ? (batch), ADMSBUF, Delayed Function Call, RSPOWPOO, RSWWCLEAR, VA01, SAPMHTTP, Buf. Sync, DDLOG CLEANUP, RSBTCTE.
What can I do?
What I am most suspicious about are two jobs. I found the job SAP_CCMS_MONI_BATCH_DP cancelled twice in that period after running for 3000+ seconds (cancelled twice because the system was restarted twice). The two main users under "User Profile" in ST03N were ZUGTIN and SAPSYS (the latter running many system tasks, essentially the entries listed above).
What else could I look for?
How can I analyse and solve the problem for the future?
What is the role of the job SAP_CCMS_MONI_BATCH_DP and how can I disable it? (I can see it is event-triggered and runs every 3 days.)

There are a few points coming to my mind:
- How much memory did you assign to your database? If you use the default configuration the database will use 50 % of the available memory, which here is 2 GB (see the T-SQL sketch right after this list).
- How big are your buffers (ST02)? Did you change them?
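For the first point, a hedged T-SQL sketch for capping SQL Server 2005 memory so the SAP buffers keep enough physical RAM (4096 MB is only an example value, not a recommendation):

   -- Cap SQL Server memory so the SAP work processes and buffers are not paged out
   EXEC sp_configure 'show advanced options', 1;
   RECONFIGURE;
   EXEC sp_configure 'max server memory (MB)', 4096;
   RECONFIGURE;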
If you have not yet run SGEN, the system will load, compile and save each program that has not been executed before. This trashes the program buffer (abap/buffersize) and fragments it heavily. If you check ST02 you will see a lot of swaps, because programs have to be re-read again and again from the database.
Virtualized environments suffer badly from syscalls; a syscall is made every time the system does I/O or transfers data over the network. If your machine is swapping/paging, this will seriously impact performance in a virtualized environment.
The increased number of work processes is not a serious factor by itself; they use the same (shared) memory pool and just attach to it.
To get good performance you need to make absolutely sure your database buffers and the SAP buffers fit into physical memory. You can check in ST06 - Detail analysis menu - Previous 24 hours - Memory whether paging is going on. If it is, you should tune the memory usage (fix the database memory and check the buffers using sappfpar).
You may also set the parameter
es/use_mprotect = false
to decrease the number of syscalls.
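A sketch of how these settings could look in the instance profile, plus a consistency check of the configured buffers against physical memory (the values are examples only, not recommendations):

   # Instance profile (maintained via RZ10) - example values only
   # fewer mprotect syscalls on virtualized hosts
   es/use_mprotect = false
   # program buffer in KB, sized so it stays in physical RAM
   abap/buffersize = 600000

   # Check the memory demand resulting from the profile
   sappfpar check pf=/usr/sap/<SID>/SYS/profile/<instance_profile>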
Also install the hotfix http://support.microsoft.com/kb/931308.
See Note 1056052 - Windows: VMware ESX Server 3 configuration guideline
Markus

Similar Messages

  • Unable to capture the Citrix network response time using OATS Load testing.

    Unable to capture the Citrix network response time using OATS load testing. Here is the scenario: in our project users log into the Citrix network, select the Hyperion application and perform their transactions, and the client wants us to simulate the same scenario for load testing. We have scripted it starting from the Citrix login and then launching the Hyperion application, but the time taken to launch the Hyperion application from the Citrix network is not captured, whereas the Hyperion transaction times have been recorded. Can anyone help resolve this issue?

    Hi keerthi,
    1. I have pasted the code for the first issue
    web
                             .button(
                                       122,
                                       "/web:window[@index='0' or @title='Manage Network Targets - Oracle Communications Order and Service Management - Order and Service Management']/web:document[@index='0' or @name='1824fhkchs_6']/web:form[@id='pt1:_UISform1' or @name='pt1:_UISform1' or @index='0']/web:button[@id='pt1:MA:0:n1:1:pt1:qryId1::search' or @value='Search' or @index='3']")
                             .click();
                        adf
                        .table(
                                  "/web:window[@index='0' or @title='Manage Network Targets - Oracle Communications Order and Service Management - Order and Service Management']/web:document[@index='0' or @name='1c9nk1ryzv_6']/web:ADFTable[@absoluteLocator='pt1:MA:n1:pt1:pnlcltn:resId1']")
                        .columnSort("Ascending", "Name" );
         }

  • HP Pavilion hard drive response time is very slow but tests pass (eventually)

    I have owned an HP Pavilion M1199a that originally ran Windows XP, but it has been running Windows Vista 32-bit for the last 3 years or more.
    The PC started to lock up for 30 seconds or more (starting 2 weeks ago), with the disk light constantly on, and then carry on again for no apparent reason. These lock-ups have become more frequent and longer. I removed the anti-virus software and a few other applications/services, but generally this computer is fairly clean as I use it as a media center PC.
    I have tried checking the disk (SATA) for errors, and it passes although the tests take a lot longer than they should. The resource monitor shows no excess CPU usage and there is plenty of memory available. The disk monitor shows response times of 5000-20000 ms.
    What is the best way to proceed from here? Is there a SATA controller or motherboard (ASUS PTGD1-LA) test? Should I buy another SATA drive and try that? I suspect that it is either the drive, drive controller and/or the motherboard that is failing but I don't know how to isolate the problem. The computer hardware configuration has remained the same for years.
    The OS has the automatic updates enabled and I uninstalled the recent ones in case they were somehow causing an issue. 

    tr3v wrote:
    Thanks for replying.
    1/ No, but I am running Vista, so I assume this will be the same? Or should I just look at the TMP and TEMP environment variables to see which folders are being used?
     In Vista, type temp in the Search programs and files box and double-click the temp files icon that appears above. Delete all the files in the folder that you can; it is safe, as they are exactly what they are called: temporary files. If you have never done this, or not in quite a while, there should be a noticeable improvement in the operating system's response time.
    2/ Yes. It completes after a very long time. 
    5/ No - but will try this out too. 
    ****Please click on Accept As Solution if a suggestion solves your problem. It helps others facing the same problem to find a solution easily****
    2015 Microsoft MVP - Windows Experience Consumer

  • SAP GoLive : File System Response Times and Online Redologs design

    Hello,
    A SAP GoingLive Verification session has just been performed on our SAP production environment.
    SAP ECC6
    Oracle 10.2.0.2
    Solaris 10
    As usual, we received database configuration instructions, but I'm a little bit skeptical about two of them :
    1/
    We have been told that our file system read response times "do not meet the standard requirements".
    The following datafile has been considered as having too high an average read time per block.
    File name -Blocks read  -  Avg. read time (ms)  -Total read time per datafile (ms)
    /oracle/PMA/sapdata5/sr3700_10/sr3700.data10          67534                         23                               1553282
    I'm surprised that an average read time of 23ms is considered a high value. What are exactly those "standard requirements" ?
    2/
    We have been asked to increase the size of the online redo logs, which are already quite large (54 MB).
    Actually we have BW loads that generate a "Checkpoint not complete" message every night.
    I've read in sap note 79341 that :
    "The disadvantage of big redo log files is the lower checkpoint frequency and the longer time Oracle needs for an instance recovery."
    Frankly, I have problems understanding this sentence.
    Frequent checkpoints mean more redo log file switches, which means more archived redo log files generated, right?
    But how is it that frequent checkpoints should decrease the time necessary for recovery?
    Thank you.
    Any useful help would be appreciated.

    Hello
    >> I'm surprised that an average read time of 23ms is considered a high value. What are exactly those "standard requirements" ?
    The recommended ("standard") values are published at the end of sapnote #322896.
    23 ms seems really a little bit high to me - for example we have round about 4 to 6 ms on our productive system (with SAN storage).
    >> Frequent checkpoints mean more redo log file switches, which means more archived redo log files generated, right?
    Correct.
    >> But how is it that frequent checkpoints should decrease the time necessary for recovery?
    A checkpoint occurs on every log switch (of the online redo log files). On a checkpoint event the following three things happen in an Oracle database:
    Every dirty block in the buffer cache is written down to the datafiles
    The latest SCN is written (updated) into the datafile header
    The latest SCN is also written to the controlfiles
    If your redo log files are larger, checkpoints happen less often, and in that case the dirty buffers are not written down to the datafiles (unless free space is needed in the buffer cache). So if your instance crashes, you need to apply more redo to the datafiles to reach a consistent state (roll forward). If you have smaller redo log files, more log switches occur, so the SCNs in the datafile headers (and the corresponding data) are closer to the newest SCN, and the recovery is therefore faster.
    But this concept does not fully match reality, because Oracle implements incremental checkpointing to reduce the DBWR workload at checkpoint time.
    There are also several parameters (depending on the Oracle version) which ensure that a required recovery time is kept, for example FAST_START_MTTR_TARGET.
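    As a practical illustration, a sketch of how to look at the current redo log size and switch frequency, and how to bound instance recovery time explicitly (the 300-second target below is purely an example):

       -- Current online redo log groups and their size
       SELECT group#, bytes/1024/1024 AS size_mb, status FROM v$log;

       -- Log switches per hour (a rough feel for checkpoint frequency)
       SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour, COUNT(*) AS switches
         FROM v$log_history
        GROUP BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
        ORDER BY 1;

       -- Let Oracle drive incremental checkpoints towards a recovery-time goal
       ALTER SYSTEM SET fast_start_mttr_target = 300 SCOPE = BOTH;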
    Regards
    Stefan

  • After iOS 8 update, the screen is jerky and the response time is really, really slow... Using a PC to submit this!

    Is there a solution to a jerky screen and a slow response time after updating to iOS 8?

    Try some basic troubleshooting:
    (A) Try reset iPad
    Hold down the Sleep/Wake button and the Home button at the same time for at least ten seconds, until the Apple logo appears
    (B) Try reset all settings
    Settings>General>Reset>Reset All Settings
    (C) Setup as new (data will be lost)
    Settings>General>Reset>Erase all content and settings

  • Is there a way to speed up the response time from the dock

    Is there a way to speed up the response time for external hard drives from the dock? I have three external HDs, but when I click on the alias in the dock there is always a hesitation before it opens. I'm leaning towards it simply being that the speed of the Mac is what it is, but maybe there is something I can do to speed it up. The drives are plugged via USB directly into the Mac and they have their own power source. The curious thing is that they were once plugged into a large, separately powered USB hub and I don't recall a lag like I have now. Any thoughts? Thanks...
    Message was edited by: gfann18

    Have you got the drives set up to spin down when not in use?
    Have a look at the Energy Saver settings in System Preference on the Mac.
    Make sure the "Put the Hard Discs to sleep when possible" box is not ticked.

  • SQL tune (High response time)

    Hi,
    I am running the following query, which has a high response time. Can you please help? The DBMS_SQLTUNE output is below; please advise.
    GENERAL INFORMATION SECTION
    Tuning Task Name : BFG_TUNING1
    Tuning Task Owner : ARADMIN
    Scope : COMPREHENSIVE
    Time Limit(seconds) : 60
    Completion Status : COMPLETED
    Started at : 01/28/2013 15:48:39
    Completed at : 01/28/2013 15:49:43
    Number of SQL Restructure Findings: 7
    Number of Errors : 1
    Schema Name: ARADMIN
    SQL ID : 2d61kbs9vpvp6
    SQL Text : SELECT /*+no_merge(chg)*/ chg.CHANGE_REFERENCE,
    chg.Customer_Name, chg.Customer_ID, chg.Contract_ID,
    chg.Change_Title, chg.Change_Type, chg.Change_Description,
    chg.Risk, chg.Impact, chg.Urgency, chg.Scheduled_Start_Date,
    chg.Scheduled_End_Date, chg.Scheduled_Start_Date_Int,
    chg.Scheduled_End_Date_Int, chg.Outage_Required,
    chg.Change_Status, chg.Change_Status_IM, chg.Reason_for_change,
    chg.Customer_Visible, chg.Change_Source,
    chg.Related_Ticket_Type, chg.Related_Ticket_ID,
    chg.Requested_By, chg.Requested_For, chg.Site_ID, chg.Site_Name,
    chg.Element_id, chg.Element_Type, chg.Element_Name,
    chg.Search_flag, chg.remedy_id, chg.Change_Manager,
    chg.Email_Manager, chg.Queue, a.customer as CUSTOMER_IM,
    a.contract as CONTRACT_IM, a.cid FROM exp_cm_cusid1 a, (sELECT *
    FROM EXP_BFG_CM_JOIN_V WHERE CUSTOMER_ID = 14187) chg WHERE
    a.bfg_con_id IS NULL AND a.bfg_cus_id = chg.customer_id AND
    NOT EXISTS (SELECT a.bfg_con_id FROM exp_cm_cusid1 a WHERE
    a.bfg_con_id IS NOT NULL AND a.bfg_cus_id = chg.customer_id
    AND a.bfg_con_id = chg.contract_id ) UNION SELECT
    /*+no_marge(chg)*/ chg.CHANGE_REFERENCE, chg.Customer_Name,
    chg.Customer_ID, chg.Contract_ID, chg.Change_Title,
    chg.Change_Type, chg.Change_Description, chg.Risk, chg.Impact,
    chg.Urgency, chg.Scheduled_Start_Date, chg.Scheduled_End_Date,
    chg.Scheduled_Start_Date_Int, chg.Scheduled_End_Date_Int,
    chg.Outage_Required, chg.Change_Status, chg.Change_Status_IM,
    chg.Reason_for_change, chg.Customer_Visible, chg.Change_Source,
    chg.Related_Ticket_Type, chg.Related_Ticket_ID,
    chg.Requested_By, chg.Requested_For, chg.Site_ID, chg.Site_Name,
    chg.Element_id, chg.Element_Type, chg.Element_Name,
    chg.Search_flag, chg.remedy_id, chg.Change_Manager,
    chg.Email_Manager, chg.Queue, a.customer as CUSTOMER_IM,
    a.contract as CONTRACT_IM, a.cid FROM exp_cm_cusid1 a, (sELECT *
    FROM EXP_BFG_CM_JOIN_V WHERE CUSTOMER_ID = 14187) chg WHERE
    a.bfg_cus_id = chg.customer_id AND a.bfg_con_id =
    chg.contract_id AND a.bfg_con_id IS NOT NULL
    FINDINGS SECTION (7 findings)
    1- Restructure SQL finding (see plan 1 in explain plans section)
    The predicate REGEXP_LIKE ("T100"."C536871160",'^[[:digit:]]+$') used at
    line ID 26 of the execution plan contains an expression on indexed column
    "C536871160". This expression prevents the optimizer from selecting indices
    on table "ARADMIN"."T100".
    Recommendation
    - Rewrite the predicate into an equivalent form to take advantage of
    indices. Alternatively, create a function-based index on the expression.
    Rationale
    The optimizer is unable to use an index if the predicate is an inequality
    condition or if there is an expression or an implicit data type conversion
    on the indexed column.
    2- Restructure SQL finding (see plan 1 in explain plans section)
    The predicate TO_NUMBER(TRIM("T100"."C536871160"))=:B1 used at line ID 26 of
    the execution plan contains an expression on indexed column "C536871160".
    This expression prevents the optimizer from selecting indices on table
    "ARADMIN"."T100".
    Recommendation
    - Rewrite the predicate into an equivalent form to take advantage of
    indices. Alternatively, create a function-based index on the expression.
    Rationale
    The optimizer is unable to use an index if the predicate is an inequality
    condition or if there is an expression or an implicit data type conversion
    on the indexed column.
    3- Restructure SQL finding (see plan 1 in explain plans section)
    The predicate REGEXP_LIKE ("T100"."C536871160",'^[[:digit:]]+$') used at
    line ID 10 of the execution plan contains an expression on indexed column
    "C536871160". This expression prevents the optimizer from selecting indices
    on table "ARADMIN"."T100".
    Recommendation
    - Rewrite the predicate into an equivalent form to take advantage of
    indices. Alternatively, create a function-based index on the expression.
    Rationale
    The optimizer is unable to use an index if the predicate is an inequality
    condition or if there is an expression or an implicit data type conversion
    on the indexed column.
    4- Restructure SQL finding (see plan 1 in explain plans section)
    The predicate TO_NUMBER(TRIM("T100"."C536871160"))=:B1 used at line ID 10 of
    the execution plan contains an expression on indexed column "C536871160".
    This expression prevents the optimizer from selecting indices on table
    "ARADMIN"."T100".
    Recommendation
    - Rewrite the predicate into an equivalent form to take advantage of
    indices. Alternatively, create a function-based index on the expression.
    Rationale
    The optimizer is unable to use an index if the predicate is an inequality
    condition or if there is an expression or an implicit data type conversion
    on the indexed column.
    5- Restructure SQL finding (see plan 1 in explain plans section)
    The predicate REGEXP_LIKE ("T100"."C536871160",'^[[:digit:]]+$') used at
    line ID 6 of the execution plan contains an expression on indexed column
    "C536871160". This expression prevents the optimizer from selecting indices
    on table "ARADMIN"."T100".
    Recommendation
    - Rewrite the predicate into an equivalent form to take advantage of
    indices. Alternatively, create a function-based index on the expression.
    Rationale
    The optimizer is unable to use an index if the predicate is an inequality
    condition or if there is an expression or an implicit data type conversion
    on the indexed column.
    6- Restructure SQL finding (see plan 1 in explain plans section)
    The predicate TO_NUMBER(TRIM("T100"."C536871160"))=:B1 used at line ID 6 of
    the execution plan contains an expression on indexed column "C536871160".
    This expression prevents the optimizer from selecting indices on table
    "ARADMIN"."T100".
    Recommendation
    - Rewrite the predicate into an equivalent form to take advantage of
    indices. Alternatively, create a function-based index on the expression.
    Rationale
    The optimizer is unable to use an index if the predicate is an inequality
    condition or if there is an expression or an implicit data type conversion
    on the indexed column.
    7- Restructure SQL finding (see plan 1 in explain plans section)
    An expensive "UNION" operation was found at line ID 1 of the execution plan.
    Recommendation
    - Consider using "UNION ALL" instead of "UNION", if duplicates are allowed
    or uniqueness is guaranteed.
    Rationale
    "UNION" is an expensive and blocking operation because it requires
    elimination of duplicate rows. "UNION ALL" is a cheaper alternative,
    assuming that duplicates are allowed or uniqueness is guaranteed.
    ERRORS SECTION
    - The current operation was interrupted because it timed out.
    EXPLAIN PLANS SECTION
    1- Original
    Plan hash value: 1047651452
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Inst |IN-OUT|
    | 0 | SELECT STATEMENT | | 2 | 28290 | 567 (37)| 00:00:07 | | |
    | 1 | SORT UNIQUE | | 2 | 28290 | 567 (37)| 00:00:07 | | |
    | 2 | UNION-ALL | | | | | | | |
    |* 3 | HASH JOIN RIGHT ANTI | | 1 | 14158 | 373 (5)| 00:00:05 | | |
    | 4 | VIEW | VW_SQ_1 | 1 | 26 | 179 (3)| 00:00:03 | | |
    | 5 | NESTED LOOPS | | 1 | 37 | 179 (3)| 00:00:03 | | |
    |* 6 | TABLE ACCESS FULL | T100 | 1 | 28 | 178 (3)| 00:00:03 | | |
    |* 7 | INDEX RANGE SCAN | I1451_536870913_1 | 1 | 9 | 1 (0)| 00:00:01 | | |
    | 8 | NESTED LOOPS | | 1 | 14132 | 193 (5)| 00:00:03 | | |
    |* 9 | HASH JOIN | | 1 | 14085 | 192 (5)| 00:00:03 | | |
    |* 10 | TABLE ACCESS FULL | T100 | 1 | 28 | 178 (3)| 00:00:03 | | |
    | 11 | VIEW | EXP_BFG_CM_JOIN_V | 3 | 42171 | 13 (24)| 00:00:01 | | |
    | 12 | UNION-ALL | | | | | | | |
    |* 13 | HASH JOIN | | 1 | 6389 | 5 (20)| 00:00:01 | | |
    | 14 | REMOTE | PROP_CHANGE_REQUEST_V | 1 | 5979 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
    | 15 | REMOTE | PROP_CHANGE_INVENTORY_V | 1 | 410 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
    | 16 | HASH UNIQUE | | 1 | 6052 | 6 (34)| 00:00:01 | | |
    |* 17 | HASH JOIN | | 1 | 6052 | 5 (20)| 00:00:01 | | |
    | 18 | REMOTE | PROP_CHANGE_REQUEST_V | 1 | 5979 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
    | 19 | REMOTE | PROP_CHANGE_INVENTORY_V | 1 | 73 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
    | 20 | HASH UNIQUE | | 1 | 5979 | 3 (34)| 00:00:01 | | |
    | 21 | REMOTE | PROP_CHANGE_REQUEST_V | 1 | 5979 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
    | 22 | TABLE ACCESS BY INDEX ROWID| T1451 | 1 | 47 | 1 (0)| 00:00:01 | | |
    |* 23 | INDEX RANGE SCAN | I1451_536870913_1 | 1 | | 1 (0)| 00:00:01 | | |
    | 24 | NESTED LOOPS | | 1 | 14132 | 193 (5)| 00:00:03 | | |
    |* 25 | HASH JOIN | | 1 | 14085 | 192 (5)| 00:00:03 | | |
    |* 26 | TABLE ACCESS FULL | T100 | 1 | 28 | 178 (3)| 00:00:03 | | |
    | 27 | VIEW | EXP_BFG_CM_JOIN_V | 3 | 42171 | 13 (24)| 00:00:01 | | |
    | 28 | UNION-ALL | | | | | | | |
    |* 29 | HASH JOIN | | 1 | 6389 | 5 (20)| 00:00:01 | | |
    | 30 | REMOTE | PROP_CHANGE_REQUEST_V | 1 | 5979 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
    | 31 | REMOTE | PROP_CHANGE_INVENTORY_V | 1 | 410 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
    | 32 | HASH UNIQUE | | 1 | 6052 | 6 (34)| 00:00:01 | | |
    |* 33 | HASH JOIN | | 1 | 6052 | 5 (20)| 00:00:01 | | |
    | 34 | REMOTE | PROP_CHANGE_REQUEST_V | 1 | 5979 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
    | 35 | REMOTE | PROP_CHANGE_INVENTORY_V | 1 | 73 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
    | 36 | HASH UNIQUE | | 1 | 5979 | 3 (34)| 00:00:01 | | |
    | 37 | REMOTE | PROP_CHANGE_REQUEST_V | 1 | 5979 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
    | 38 | TABLE ACCESS BY INDEX ROWID | T1451 | 1 | 47 | 1 (0)| 00:00:01 | | |
    |* 39 | INDEX RANGE SCAN | I1451_536870913_1 | 1 | | 1 (0)| 00:00:01 | | |
    Predicate Information (identified by operation id):
    3 - access("ITEM_0"="EXP_BFG_CM_JOIN_V"."CUSTOMER_ID" AND "ITEM_1"="EXP_BFG_CM_JOIN_V"."CONTRACT_ID")
    6 - filter("C536871050" LIKE '%FMS%' AND REGEXP_LIKE ("C536871160",'^[[:digit:]]+$') AND ("C536871088" IS NULL
    OR REGEXP_LIKE ("C536871088",'^[[:digit:]]+$')) AND TO_NUMBER(TRIM("C536871088")) IS NOT NULL AND
    TO_NUMBER(TRIM("C536871160"))=:SYS_B_0 AND "C536871160" IS NOT NULL AND "C536871050" IS NOT NULL AND "C7"=0)
    7 - access("C536870913"="C536870914")
    9 - access("EXP_BFG_CM_JOIN_V"."CUSTOMER_ID"=TO_NUMBER(TRIM("C536871160")))
    10 - filter("C536871050" LIKE '%FMS%' AND REGEXP_LIKE ("C536871160",'^[[:digit:]]+$') AND ("C536871088" IS NULL
    OR REGEXP_LIKE ("C536871088",'^[[:digit:]]+$')) AND TO_NUMBER(TRIM("C536871088")) IS NULL AND
    TO_NUMBER(TRIM("C536871160"))=:SYS_B_0 AND "C536871160" IS NOT NULL AND "C536871050" IS NOT NULL AND "C7"=0)
    13 - access("CHG"."PRP_CHG_REFERENCE"="INV"."PRP_CHG_REFERENCE")
    17 - access("CHG"."PRP_CHG_REFERENCE"="INV"."PRP_CHG_REFERENCE")
    23 - access("C536870913"="C536870914")
    25 - access("EXP_BFG_CM_JOIN_V"."CUSTOMER_ID"=TO_NUMBER(TRIM("C536871160")) AND
    "EXP_BFG_CM_JOIN_V"."CONTRACT_ID"=TO_NUMBER(TRIM("C536871088")))
    26 - filter("C536871050" LIKE '%FMS%' AND REGEXP_LIKE ("C536871160",'^[[:digit:]]+$') AND ("C536871088" IS NULL
    OR REGEXP_LIKE ("C536871088",'^[[:digit:]]+$')) AND TO_NUMBER(TRIM("C536871088")) IS NOT NULL AND
    TO_NUMBER(TRIM("C536871160"))=:SYS_B_1 AND "C536871160" IS NOT NULL AND "C536871050" IS NOT NULL AND "C7"=0)
    29 - access("CHG"."PRP_CHG_REFERENCE"="INV"."PRP_CHG_REFERENCE")
    33 - access("CHG"."PRP_CHG_REFERENCE"="INV"."PRP_CHG_REFERENCE")
    39 - access("C536870913"="C536870914")
    Remote SQL Information (identified by operation id):
    14 - SELECT "PRP_CHG_REFERENCE","CUS_ID","CUS_NAME","CNT_BFG_ID","PRP_TITLE","PRP_CHG_TYPE","PRP_DESCRIPTION","PR
    P_BTIGNITE_PRIORITY","PRP_CUSTOMER_PRIORITY","PRP_CHG_URGENCY","PRP_RESPONSE_REQUIRED_BY","PRP_REQUIRED_BY_DATE","P
    RP_CHG_OUTAGE_FLAG","PRP_CHG_STATUS","PRP_CHG_FOR_REASON","PRP_CHG_CUSTOMER_VISIBILITY","PRP_CHG_SOURCE_SYSTEM","PR
    P_RELATED_TICKET_TYPE","PRP_RELATED_TICKET_ID","CHANGE_INITIATOR","CHANGE_ORIGINATOR","CHANGE_MANAGER","QUEUE"
    FROM "PROP_OWNER2"."PROP_CHANGE_REQUEST_V" "CHG" WHERE "CUS_ID"=:1 (accessing 'ARS_BFG_DBLINK.WORLD' )
    15 - SELECT "PRP_CHG_REFERENCE","SIT_ID","SIT_NAME","ELEMENT_SUMMARY","PRODUCT_NAME" FROM
    "PROP_OWNER2"."PROP_CHANGE_INVENTORY_V" "INV" (accessing 'ARS_BFG_DBLINK.WORLD' )
    18 - SELECT "PRP_CHG_REFERENCE","CUS_ID","CUS_NAME","CNT_BFG_ID","PRP_TITLE","PRP_CHG_TYPE","PRP_DESCRIPTION","PR
    P_BTIGNITE_PRIORITY","PRP_CUSTOMER_PRIORITY","PRP_CHG_URGENCY","PRP_RESPONSE_REQUIRED_BY","PRP_REQUIRED_BY_DATE","P
    RP_CHG_OUTAGE_FLAG","PRP_CHG_STATUS","PRP_CHG_FOR_REASON","PRP_CHG_CUSTOMER_VISIBILITY","PRP_CHG_SOURCE_SYSTEM","PR
    P_RELATED_TICKET_TYPE","PRP_RELATED_TICKET_ID","CHANGE_INITIATOR","CHANGE_ORIGINATOR","CHANGE_MANAGER","QUEUE"
    FROM "PROP_OWNER2"."PROP_CHANGE_REQUEST_V" "CHG" WHERE "CUS_ID"=:1 (accessing 'ARS_BFG_DBLINK.WORLD' )
    19 - SELECT "PRP_CHG_REFERENCE","SIT_ID","SIT_NAME" FROM "PROP_OWNER2"."PROP_CHANGE_INVENTORY_V" "INV"
    (accessing 'ARS_BFG_DBLINK.WORLD' )
    21 - SELECT "PRP_CHG_REFERENCE","CUS_ID","CUS_NAME","CNT_BFG_ID","PRP_TITLE","PRP_CHG_TYPE","PRP_DESCRIPTION","PR
    P_BTIGNITE_PRIORITY","PRP_CUSTOMER_PRIORITY","PRP_CHG_URGENCY","PRP_RESPONSE_REQUIRED_BY","PRP_REQUIRED_BY_DATE","P
    RP_CHG_OUTAGE_FLAG","PRP_CHG_STATUS","PRP_CHG_FOR_REASON","PRP_CHG_CUSTOMER_VISIBILITY","PRP_CHG_SOURCE_SYSTEM","PR
    P_RELATED_TICKET_TYPE","PRP_RELATED_TICKET_ID","CHANGE_INITIATOR","CHANGE_ORIGINATOR","CHANGE_MANAGER","QUEUE"
    FROM "PROP_OWNER2"."PROP_CHANGE_REQUEST_V" "CHG" WHERE "CUS_ID"=:1 (accessing 'ARS_BFG_DBLINK.WORLD' )
    30 - SELECT "PRP_CHG_REFERENCE","CUS_ID","CUS_NAME","CNT_BFG_ID","PRP_TITLE","PRP_CHG_TYPE","PRP_DESCRIPTION","PR
    P_BTIGNITE_PRIORITY","PRP_CUSTOMER_PRIORITY","PRP_CHG_URGENCY","PRP_RESPONSE_REQUIRED_BY","PRP_REQUIRED_BY_DATE","P
    RP_CHG_OUTAGE_FLAG","PRP_CHG_STATUS","PRP_CHG_FOR_REASON","PRP_CHG_CUSTOMER_VISIBILITY","PRP_CHG_SOURCE_SYSTEM","PR
    P_RELATED_TICKET_TYPE","PRP_RELATED_TICKET_ID","CHANGE_INITIATOR","CHANGE_ORIGINATOR","CHANGE_MANAGER","QUEUE"
    FROM "PROP_OWNER2"."PROP_CHANGE_REQUEST_V" "CHG" WHERE "CUS_ID"=:1 (accessing 'ARS_BFG_DBLINK.WORLD' )
    31 - SELECT "PRP_CHG_REFERENCE","SIT_ID","SIT_NAME","ELEMENT_SUMMARY","PRODUCT_NAME" FROM
    "PROP_OWNER2"."PROP_CHANGE_INVENTORY_V" "INV" (accessing 'ARS_BFG_DBLINK.WORLD' )
    34 - SELECT "PRP_CHG_REFERENCE","CUS_ID","CUS_NAME","CNT_BFG_ID","PRP_TITLE","PRP_CHG_TYPE","PRP_DESCRIPTION","PR
    P_BTIGNITE_PRIORITY","PRP_CUSTOMER_PRIORITY","PRP_CHG_URGENCY","PRP_RESPONSE_REQUIRED_BY","PRP_REQUIRED_BY_DATE","P
    RP_CHG_OUTAGE_FLAG","PRP_CHG_STATUS","PRP_CHG_FOR_REASON","PRP_CHG_CUSTOMER_VISIBILITY","PRP_CHG_SOURCE_SYSTEM","PR
    P_RELATED_TICKET_TYPE","PRP_RELATED_TICKET_ID","CHANGE_INITIATOR","CHANGE_ORIGINATOR","CHANGE_MANAGER","QUEUE"
    FROM "PROP_OWNER2"."PROP_CHANGE_REQUEST_V" "CHG" WHERE "CUS_ID"=:1 (accessing 'ARS_BFG_DBLINK.WORLD' )
    35 - SELECT "PRP_CHG_REFERENCE","SIT_ID","SIT_NAME" FROM "PROP_OWNER2"."PROP_CHANGE_INVENTORY_V" "INV"
    (accessing 'ARS_BFG_DBLINK.WORLD' )
    37 - SELECT "PRP_CHG_REFERENCE","CUS_ID","CUS_NAME","CNT_BFG_ID","PRP_TITLE","PRP_CHG_TYPE","PRP_DESCRIPTION","PR
    P_BTIGNITE_PRIORITY","PRP_CUSTOMER_PRIORITY","PRP_CHG_URGENCY","PRP_RESPONSE_REQUIRED_BY","PRP_REQUIRED_BY_DATE","P
    RP_CHG_OUTAGE_FLAG","PRP_CHG_STATUS","PRP_CHG_FOR_REASON","PRP_CHG_CUSTOMER_VISIBILITY","PRP_CHG_SOURCE_SYSTEM","PR
    P_RELATED_TICKET_TYPE","PRP_RELATED_TICKET_ID","CHANGE_INITIATOR","CHANGE_ORIGINATOR","CHANGE_MANAGER","QUEUE"
    FROM "PROP_OWNER2"."PROP_CHANGE_REQUEST_V" "CHG" WHERE "CUS_ID"=:1 (accessing 'ARS_BFG_DBLINK.WORLD' )
    -------------------------------------------------------------------------------

    Please review the following threads:
    {message:id=9360002}
    {message:id=9360003}
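    As a concrete illustration of findings 1-6 and finding 7, a sketch (the index name is invented, and since ARADMIN.T100 looks like a Remedy/AR System table, any index change should be agreed with the application owner):

       -- Findings 1-6: a function-based index matching TO_NUMBER(TRIM(...)) = :B1.
       -- Caution: creation fails with ORA-01722 if C536871160 ever holds a
       -- non-numeric value; in that case index a guarded CASE expression instead.
       CREATE INDEX aradmin.t100_c536871160_fbi
           ON aradmin.t100 (TO_NUMBER(TRIM(c536871160)));

       -- Finding 7: if the two SELECT branches can never return the same row,
       -- replacing UNION with UNION ALL removes the blocking SORT UNIQUE (plan id 1).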

  • Will the problems I am having with slow Safari response times continue if I port the image of my 4 to a new iPhone 5?

    As in many families, I am ready to upgrade my iPhone 4 to a 5 and pass on my 4 to my partner (who has my old 3), BUT I am resistant to do so because I have been having problems with Safari (very slow response times) ever since I upgraded to the new OS a month or so back. I have completed all the cleaning tips (cookies, individual app shut-downs) and hard-boot steps with no change. I can sit my partner's iPhone 3 next to my 4 and when using Safari the 3 beats it hands down; the 4 takes almost double the time the 3 does. My 4 is quicker in all the other apps, so I don't believe it is a hardware issue. My tests were done using our home Wi-Fi, set up the same on both phones, after running all the cleaning tips on both.
    I actually broke down and called the support line and the first level support told me if I upgraded to the 5 I would likely carry the problem to the new phone. She said she suspected malware or a virus and that for $300 she could forward me to second level support that could scan my phone and see what the problem is on my 4 and support any issues on a new iphone 5 OR for a one time lower fee $79, just help me fix my 4.
    Thinking through her options, I chose none of the above. It seems like I should be able to either 1) isolate the issue myself with the help of this forum or 2) upgrade to the iPhone 5 and, if the problem follows to the new one, get support via the new device's warranty. If the support person is correct and the problem is in the software and would follow the device, then in theory I could wipe and restore my old 4 with the backup/data of my partner's 3 and the problem on the 4 should be resolved.
    I really don't want to carry the problem to a new phone if I can fix the issue first. I have AirWatch software to push my work email to my phone and that is usually a pain to redo when I upgrade anyhow. I don't need to pay for a new phone that has the same problem as my old one, and I will get grief from my partner for passing on an issue that will come up in daily use of the old iPhone 4, which I will be responsible to fix anyhow. I have invested too much time already to generate problems on both our phones. Thanks in advance for any tips or guidance about a streamlined way forward.

    Basic troubleshooting from the User's Guide is reset, restart, restore (first from backup then as new).  Has any of this been tried?
    FYI, there are no viruses that affect iOS unless the device has been hacked or jailbroken, in which case they cannot be discussed here.

  • Response time for Error Messages - Please Help

    Hi
    I have a PRO C application talking to an Oracle database.
    The Response time for successful query is within desirable limits.
    But when there is an error condition (e.g. SQL error -3113, or connection refused) it takes more than 9 minutes for the database to respond with the error code.
    This condition is observed with only one database while the others are working fine.
    What is the reason for this? Can’t it be reduced?
    Regards
    David

    Has anyone ever been faced with the same problem?
    Why delete it? Is that the only way to fix this problem?
    What do others do in such cases, or am I the only person in the world who has this particular problem? Besides, I don't believe in solving the problem by removing the mentioned directory and reinstalling. Nevertheless I will try it and let you know about the result.
    bye
    sas

  • Response time of a query in 2 different environments

    Hi guys, Luca speaking; sorry for my poor English.
    The question is:
    the same query, on the same table (same definition, same number of rows, defined on the same kind of tablespace), and the tables are analyzed:
    *) in Benchmark the query has good execution times and the execution plan is really good
    *) in Production the execution plan is not so good and the response time is not comparable (hours vs. seconds)
    #### The execution plans are different ####
    #### The stats are the same ####
    This is the table storico.FLUSSO_ASTCM_INC A with these stats in Benchmark:
    chk Owner Name Partition Subpartition Tablespace NumRows Blocks EmptyBlocks AvgSpace ChainCnt AvgRowLen AvgSpaceFLBlocks NumFLBlocks UserStats GlobalStats LastAnalyzed SampleSize Monitoring Status
    True STORICO FLUSSO_ASTCM_INC TBS_DATA 2861719 32025 0 0 0 74 NO YES 10/01/2006 15.53.43 2861719 NO Normal, Successful Completion: 10/01/2006 16.26.05
    In Production the stats are the same.
    The other table is an external table.
    The only difference I have noticed so far is the tablespace the tables are defined on:
    Production
    EXTENT MANAGEMENT LOCAL UNIFORM SIZE 512K
    Benchmark
    EXTENT MANAGEMENT LOCAL AUTOALLOCATE
    I'm still looking into it at the moment.
    What do I have to check to obtain the same execution plan (without changing the query)?
    This is the query:
    SELECT
    'test query',
    sysdate,
    storico.tc_scarti_seq.NEXTVAL,
    NULL, --ROW_ID
    -- A.AZIONE,
    'I',
    A.CODE_PREF_TCN,
    A.CODE_NUM_TCN,
    'ADSL non presente su CRM' ,
    -- a.AZIONE
    'I'
    || ';' || a.CODE_PREF_TCN
    || ';' || a.CODE_NUM_TCN
    || ';' || a.DATA_ATVZ_CMM
    || ';' || a.CODE_PREF_DSR
    || ';' || a.CODE_NUM_TFN
    || ';' || a.DATA_CSSZ_CMM
    || ';' || a.TIPO_EVENTO
    || ';' || a.INVARIANTE_FONIA
    || ';' || a.CODE_TIPO_ADSL
    || ';' || a.TIPO_RICHIESTA_ATTIVAZIONE
    || ';' || a.TIPO_RICHIESTA_CESSAZIONE
    || ';' || a.ROW_ID_ATTIVAZIONE
    || ';' || a.ROW_ID_CESSAZIONE
    FROM storico.FLUSSO_ASTCM_INC A
    WHERE NOT EXISTS (SELECT 1 FROM storico.EXT_CRM_X_ADSL B
    WHERE A.CODE_PREF_DSR = B.CODE_PREF_DSR
    AND A.CODE_NUM_TFN = B.CODE_NUM_TFN
    AND A.INVARIANTE_FONIA = B.INVARIANTE_FONIA
    AND B.NOME_SERVIZIO NOT IN ('ADSL SMART AGGREGATORE','ADSL SMART TWIN','ALICE IMPRESA TWIN',
    'SERVIZIO ADSL PER VIDEOLOTTERY','WI - FI') )
    Output of SET AUTOTRACE TRACEONLY EXPLAIN in Production (ESERCIZIO):
    Execution Plan
    0 SELECT STATEMENT Optimizer=CHOOSE (Cost=144985 Card=143086 B
    1 0 SEQUENCE OF 'TC_SCARTI_SEQ'
    2 1 FILTER
    3 2 TABLE ACCESS (FULL) OF 'FLUSSO_ASTCM_INC' (Cost=1899 C
    4 2 EXTERNAL TABLE ACCESS* (FULL) OF 'EXT_CRM_X_ADSL' (Cos :Q370300
    4 PARALLEL_TO_SERIAL SELECT /*+ NO_EXPAND FULL(A1) */ A1."CODE_PR
    Output of SET AUTOTRACE TRACEONLY EXPLAIN in Benchmark:
    Execution Plan
    0 SELECT STATEMENT Optimizer=CHOOSE (Cost=3084 Card=2861719 By
    tes=291895338)
    1 0 SEQUENCE OF 'TC_SCARTI_SEQ'
    2 1 HASH JOIN* (ANTI) (Cost=3084 Card=2861719 Bytes=29189533 :Q810002
    8)
    3 2 TABLE ACCESS* (FULL) OF 'FLUSSO_ASTCM_INC' (Cost=3082 :Q810000
    Card=2861719 Bytes=183150016)
    4 2 EXTERNAL TABLE ACCESS* (FULL) OF 'EXT_CRM_X_ADSL' (Cos :Q810001
    t=2 Card=1 Bytes=38)
    2 PARALLEL_TO_SERIAL SELECT /*+ ORDERED NO_EXPAND USE_HASH(A2) US
    E_ANTI(A2) */ A1.C0,A1.C1,A1.C2,A1.C
    3 PARALLEL_FROM_SERIAL
    4 PARALLEL_TO_PARALLEL SELECT /*+ NO_EXPAND FULL(A1) */ A1."CODE_PR
    EF_DSR" C0,A1."CODE_NUM_TFN" C1,A1."
    The differences in the init.ora are in the following parameters; could they influence the optimizer enough that the execution plans end up so different?
    background_dump_dest
    cpu_count
    db_file_multiblock_read_count
    db_files
    db_32k_cache_size
    dml_locks
    enqueue_resources
    event
    fast_start_mttr_target
    fast_start_parallel_rollback
    hash_area_size
    log_buffer
    log_parallelism
    max_rollback_segments
    open_cursors
    open_links
    parallel_execution_message_size
    parallel_max_servers
    processes
    query_rewrite_enabled
    remote_login_passwordfile
    session_cached_cursors
    sessions
    sga_max_size
    shared_pool_reserved_size
    sort_area_retained_size
    sort_area_size
    star_transformation_enabled
    transactions
    undo_retention
    user_dump_dest
    utl_file_dir
    Please Help me
    Thanks a lot Luca

    Hi Luca,
    Are the test and production systems nearly identical (same OS, same hardware platform, same software version, same release)?
    You're using external tables. Is the speed of those drives identical?
    Have you analyzed the schema with the same statement? Could you send me the statement?
    Do you have system statistics?
    Have you tested the statement in an environment which is nearly like production (concurrent users etc.)?
    Could you send me the top 5 wait events from the Statspack report?
    Are the data in production and test identical? No data changed? No index dropped? No additional index? Are all tables and indexes analyzed?
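    To compare the two environments quickly, one option is to diff the non-default instance parameters and the system statistics on both sides; a sketch (it assumes SELECT access to the listed views):

       -- Non-default instance parameters (run on Benchmark and Production, then diff)
       SELECT name, value
         FROM v$parameter
        WHERE isdefault = 'FALSE'
        ORDER BY name;

       -- System statistics gathered with DBMS_STATS.GATHER_SYSTEM_STATS
       SELECT pname, pval1
         FROM sys.aux_stats$
        WHERE sname = 'SYSSTATS_MAIN';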
    Regards
    Marc

  • Response time of a function module

    Hi Friends,
        I'm creating a custom program in which I use a BAPI that exists on another server.
       Now I want to record the response time of the BAPI after sending the request to it, and display the time for the corresponding record in the output.
      Is there any procedure to record the response time within the program? I'm not asking about the transactions where we can measure performance.
    Moderator message - please do not ask for or promise rewards.
    Thanks & Warm Regards
    Krishna
    Edited by: Rob Burbank on Oct 1, 2009 8:50 AM

    Hello,
    The correct method, as pointed out in previous posts, is with GET RUN TIME. Note that this returns time in microseconds, so you may want to scale this up to a larger unit.
    As to the usefulness: it is perfectly legitimate to include time measurements in your program as long as this has a clear purpose, e.g. comparing response times between different remote systems, identifying erratic response times, etc. In that case I would advise you to also include some other measurement, e.g. the amount of data processed (whether you can do this and how depends on the BAPI, e.g. you could use the number of lines in the returned internal tables as a metric). If your time measurement creates separate log/trace records, then it would also be a good idea to have the option to enable and disable the time measurement.
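    A minimal sketch of such a measurement (the function module name 'Z_MY_BAPI' and the RFC destination 'REMOTE_DEST' are placeholders, and the BAPI's own parameters are omitted):

       " GET RUN TIME delivers microseconds
       DATA: lv_start TYPE i,
             lv_stop  TYPE i,
             lv_msec  TYPE p DECIMALS 3.

       GET RUN TIME FIELD lv_start.

       CALL FUNCTION 'Z_MY_BAPI' DESTINATION 'REMOTE_DEST'
         EXCEPTIONS
           system_failure        = 1
           communication_failure = 2
           OTHERS                = 3.

       GET RUN TIME FIELD lv_stop.

       lv_msec = ( lv_stop - lv_start ) / 1000.
       WRITE: / 'BAPI response time (ms):', lv_msec.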
    Regards,
    Mark

  • How to increase built-in cisco vpn peer response timer?

    Hi,
    I use OS x in-built cisco vpn client to connect to work VPN.
    The VPN server, or perhaps the RADIUS server, takes a long time to return a response. OS X always tries for 10 seconds, then drops the connection when there is no response from the remote peer. When I use the Cisco VPN client on a Windows machine, the client has a setting to allow 90 seconds for the remote peer response time, and it works fine there.
    I prefer to use OS X as my primary working environment, so I need to fix this problem. My question is how to increase the phase 1 & 2 timers for VPN under 10.6.7. I have tried changing the racoon.conf phase 1 & phase 2 timers, but it made no difference; OS X only tries for 10 seconds.
    Any ideas? (besides asking work people to fix the server or radius problem)
    Thanks
    jmsherry123

    I have the same problem... the certificate is imported into the Keychain, but I can't select it when setting up the VPN connection.
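    For reference (not from the posts above): racoon reads its phase timers from a timer block; a sketch with illustrative values, assuming OS X 10.6 actually honours a hand-edited /etc/racoon/racoon.conf for the built-in client:

       # /etc/racoon/racoon.conf - sketch, values illustrative
       timer {
           counter 10;        # retransmission count
           interval 20 sec;   # retransmission interval
           phase1 90 sec;     # allow a slow peer/RADIUS server to answer
           phase2 60 sec;
       }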

  • ISE 1.2 Auth Avg Response Time

    Hi Guys,
    We have recently moved to ISE 1.2 (distributed deployment on UCS C220 blades) from ACS 5.x. We are seeing an average auth response time of ~150 ms on each of the PSN nodes (4 in total) and wonder whether this is too slow.
    Is this normal, or should we have a much lower average response time for those RADIUS authentications? What are the typical values you have observed in this sort of deployment?
    Any input would be much appreciated.
    Rasika

    Hi,
    Where did you get your information from? Is it from the ISE Authentication Report Summary? If so, which of the Average responses are you concerned about? Authentications By Day, Identity Group, Identity Store, Allowed Protocol etc.
    In my network the average response based on protocol PEAP is 121 ms; authentication by day is 74 ms. Then again, my network may be smaller than yours. Also, I have an appliance and not a virtual server. In my opinion, 150 ms is not enough for the user to notice; if the authentication response gets close to 300 ms, then you have an issue.
    If you have a very large network like a University Campus, then 150ms is OK.

  • Report to calculate avg response time for a transaction using ST03.

    Hi Abap Gurus ,
    I want to develop a report which calculates the average response time (ST03) for a transaction on an hourly basis.
    I have read many threads in which users describe which tables/function modules to use to extract data such as dialog steps and total response time.
    I am sure many of you have created a report like this; I would appreciate it if you could share pseudo code for it. Any help regarding this is highly appreciated...
    Cheers,
    Karan

    http://jakarta.apache.org/jmeter/

  • Help required in optimizing the query response time

    Hi,
    I am working on an application which uses a JDBC thin client. My requirement is to select all the rows in one table and use the column values to select data from another table in another database.
    The first table can have a maximum of 6 million rows, but the second table only has around 9000 rows.
    My first query returns within 30-40 milliseconds when the table has 200,000 rows. But when I iterate over the result set and query the second table, each of those queries takes around 4 milliseconds.
    The second query's selection criterion is to find the value within a range,
    for example my_table (varchar2 column1, varchar2 start_range, varchar2 end_range);
    My first query returns a result which is then used to select with the following query:
    select column1 from my_table where start_range < my_value and end_range > my_value;
    I have created an index on start_range and end_range. This query takes around 4 milliseconds, which I think is too much.
    I am using a preparedStatement for the second query loop.
    Can some one suggest me how I can improve the query response time?
    Regards,
    Shyam

    Try the code below.
    Prerequisite: you should know how to pass ARRAY objects to Oracle and receive result sets back in Java. There are thousands of samples available on the net.
    I have written sample database code for this interaction.
    The procedure get_list takes an array input from Java and returns the record set back to Java. You can change the table names and the criteria.
    Good luck.
    DROP TYPE idlist;
    CREATE OR REPLACE TYPE idlist AS TABLE OF NUMBER;
    CREATE OR REPLACE PACKAGE mypkg1
    AS
       PROCEDURE get_list (myval_list idlist, orefcur OUT sys_refcursor);
    END mypkg1;
    CREATE OR REPLACE PACKAGE BODY mypkg1
    AS
       PROCEDURE get_list (myval_list idlist, orefcur OUT sys_refcursor)
       AS
          ctr   NUMBER;
       BEGIN
          DBMS_OUTPUT.put_line (myval_list.COUNT);
          FOR x IN (SELECT object_name, object_id, myvalue
                      FROM user_objects a,
                            -- TABLE(collection) exposes each element as COLUMN_VALUE;
                            -- the original myval_list(ROWNUM + 1) raises SUBSCRIPT_BEYOND_COUNT
                            (SELECT column_value AS myvalue
                               FROM TABLE (myval_list)) b
                      WHERE a.object_id < b.myvalue)
           LOOP
              DBMS_OUTPUT.put_line (   x.object_name
                                    || ' - '
                                    || x.object_id
                                    || ' - '
                                    || x.myvalue);  -- closing parenthesis was missing
           END LOOP;
       END;
    END mypkg1;
    Testing the code above. Make sure dbms output is ON.
    DECLARE
       a      idlist;
       refc   sys_refcursor;
       c number;
    BEGIN
       SELECT x.nu
       BULK COLLECT INTO a
         FROM (SELECT 5000 nu
                 FROM DUAL) x;
       mypkg1.get_list (a, refc);
    END;
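    A set-based alternative to issuing the range lookup once per driving row, sketched with hypothetical names (driving_table and key_value stand in for the first query's result set); it needs one round trip instead of one query per row:

       SELECT d.key_value, r.column1
         FROM driving_table d
         JOIN my_table     r
           ON d.key_value > r.start_range
          AND d.key_value < r.end_range;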
    Vishal V.
