Performance Issue - Update response time

Hi,
I am trying to get the equipment data and I have to link it with the asset. The only way I could find is through the M_EQUIA view: I select the equipment number from M_EQUIA, passing the asset number as the key, and from there I go to EQUI and get the other data.
But when I select from M_EQUIA the database time is high. Can someone suggest a better option than M_EQUIA to get the equipment details with the asset as the key? I also have the cost center details available.
Thanks,

Hi,
Please find below the SELECT on M_EQUIA and the further SELECT based on it.
* Get asset related data from the view
IF NOT i_asset[] IS INITIAL.
   SELECT anlnr
          equnr
     FROM m_equia
     INTO TABLE i_asst_equi
     FOR ALL ENTRIES IN i_asset
     WHERE anlnr = i_asset-anln1 AND
           anlun = i_asset-anln2 AND
           bukrs = i_asset-bukrs.
   IF sy-subrc = 0.
     SORT i_asst_equi BY equnr.
   ENDIF.
ENDIF.
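Whichever table or view is read, it usually pays to pass FOR ALL ENTRIES a driver table that contains only the needed key fields and no duplicate rows, since every row of the driver table is shipped to the database. A minimal sketch (the helper table lt_asset_keys and its structure are illustrative, not part of the original program):
* Sketch: build a duplicate-free key table for FOR ALL ENTRIES.
TYPES: BEGIN OF ty_asset_key,
         anln1 TYPE anln1,
         anln2 TYPE anln2,
         bukrs TYPE bukrs,
       END OF ty_asset_key.
DATA: lt_asset_keys TYPE STANDARD TABLE OF ty_asset_key,
      ls_asset_key  TYPE ty_asset_key,
      ls_asset      LIKE LINE OF i_asset.

LOOP AT i_asset INTO ls_asset.
  ls_asset_key-anln1 = ls_asset-anln1.
  ls_asset_key-anln2 = ls_asset-anln2.
  ls_asset_key-bukrs = ls_asset-bukrs.
  APPEND ls_asset_key TO lt_asset_keys.
ENDLOOP.
SORT lt_asset_keys BY anln1 anln2 bukrs.
DELETE ADJACENT DUPLICATES FROM lt_asset_keys COMPARING anln1 anln2 bukrs.
* lt_asset_keys can then replace i_asset in the FOR ALL ENTRIES clause above.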
* Get Equipment related data
IF NOT i_asst_equi[] IS INITIAL.
   SELECT equi~equnr
          herst
          typbz
          eqart
          mapar
          serge
          iwerk
     FROM equi
     INNER JOIN equz
     ON equi~equnr = equz~equnr
     INTO TABLE i_equipment
     FOR ALL ENTRIES IN i_asst_equi
     WHERE equi~equnr = i_asst_equi-equnr.
   SORT i_equipment BY equnr.
ENDIF.
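M_EQUIA is a matchcode view, and reading it in a report is often slower than reading the underlying tables. Assuming the view is, as usual, built over EQUZ and ILOA (worth verifying in SE11), the asset-to-equipment link could be read directly as sketched below. The field names ILOA-ANLNR, ILOA-ANLUN, ILOA-BUKRS, ILOA-KOSTL and EQUZ-DATBI are the usual ones but should be double-checked, and whether this is really faster depends on the indexes available on ILOA (check with ST05). Since the cost center is also known, ILOA-KOSTL could serve as an additional filter.
* Hedged sketch: read the link tables behind M_EQUIA directly.
IF NOT i_asset[] IS INITIAL.
   SELECT iloa~anlnr
          equz~equnr
     FROM equz
     INNER JOIN iloa
     ON equz~iloan = iloa~iloan
     INTO TABLE i_asst_equi
     FOR ALL ENTRIES IN i_asset
     WHERE iloa~anlnr = i_asset-anln1 AND
           iloa~anlun = i_asset-anln2 AND
           iloa~bukrs = i_asset-bukrs AND
           equz~datbi = '99991231'.        " current time segment only (verify)
ENDIF.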
Thanks.

Similar Messages

  • High Update Response Time ticket from solman

    Hi Experts,
    We have monitoring set up from Solution Manager. We are getting an alert for ECC production that there is a High Update Response Time. Can you please let me know how to fix it?
    Thanks,
    Asad

    Hi Asad,
    How many update work processes do you have? Check whether you see a high amount of wait time for the update processes.
    You may increase their number if you have enough memory.
    Go to SM13, then Goto -> Administration of Update System,
    then Goto -> Statistics.
    Response Times: Rules of Thumb - CCMS Monitoring - SAP Library
    Less than 1 second is the recommendation, though you may sometimes breach it.
    Thanks,
    Manu

  • High Update Response Time

    Hi folks,
    how can I check the High Update Response Time?
    Thanks.

    Hi Carlos,
    Go to ST03 and check from there.
    That link was posted by mistake.
    Divyanshu

  • Performance issue / faster response after using of same criteria once again

    For some days now I have had the problem that some previously fast queries (not changed) take a lot of time. The response time for the same query also varies widely: sometimes only 531 ms and then 15 seconds.
    I have now found out that when I use the same criteria (or only the column which is indicated) once again, the query runs much faster.
    Does anybody have similar experiences?
    Thanks! Daniel.

    I don't think that is the issue; a query that requires a 14+ second parse would be something to behold.
    The difference in response time is likely due to the data blocks being cached as a result of the first query.

  • CDP Performance Issue - taking more time to fetch data

    Hi,
    I'm working on Stellent 7.5.1.
    For one of the portlets in the portal it is taking more time to fetch data. Can someone please help me solve this issue so that performance can be improved? Kindly provide me a solution. This is my code for fetching data from the server:
    public void getManager(final HashMap binderMap)
            throws VistaInvalidInputException, VistaDataNotFoundException,
                   DataException, ServiceException, VistaTemplateException {
        String collectionID =
                getStringLocal(VistaFolderConstants.FOLDER_ID_KEY);
        long firstStartTime = System.currentTimeMillis();
        HashMap resultSetMap = null;
        String isNonRecursive = getStringLocal(VistaFolderConstants
                .ISNONRECURSIVE_KEY);
        if (isNonRecursive != null
                && isNonRecursive.equalsIgnoreCase(
                        VistaContentFetchHelperConstants.STRING_TRUE)) {
            VistaLibraryContentFetchManager libraryContentFetchManager =
                    new VistaLibraryContentFetchManager(binderMap);
            SystemUtils.trace(
                    VistaContentFetchHelperConstants.CONTENT_FETCH_TRACE,
                    "The input Parameters for Content Fetch = " + binderMap);
            resultSetMap = libraryContentFetchManager
                    .getFolderContentItems(m_workspace);
            // used to add the resultset to the binder.
            addResultSetToBinder(resultSetMap, true);
        } else {
            long startTime = System.currentTimeMillis();
            // isStandard is used to decide whether the call is for Standard
            // or Extended.
            SystemUtils.trace(
                    VistaContentFetchHelperConstants.CONTENT_FETCH_TRACE,
                    "The input Parameters for Content Fetch = " + binderMap);
            String isStandard = getTemplateInformation(binderMap);
            long endTimeTemplate = System.currentTimeMillis();
            binderMap.put(VistaFolderConstants.IS_STANDARD, isStandard);
            long endTimebinderMap = System.currentTimeMillis();
            VistaContentFetchManager contentFetchManager
                    = new VistaContentFetchManager(binderMap);
            long endTimeFetchManager = System.currentTimeMillis();
            resultSetMap = contentFetchManager
                    .getAllFolderContentItems(m_workspace);
            long endTimeresultSetMap = System.currentTimeMillis();
            // used to add the resultset and the total no of content items
            // to the binder.
            addResultSetToBinder(resultSetMap, false);
            long endTime = System.currentTimeMillis();
            if (perfLogEnable.equalsIgnoreCase("true")) {
                Log.info("Time taken to execute " +
                        "getTemplateInformation=" +
                        (endTimeTemplate - startTime) +
                        "ms binderMap=" +
                        (endTimebinderMap - startTime) +
                        "ms contentFetchManager=" +
                        (endTimeFetchManager - startTime) +
                        "ms resultSetMap=" +
                        (endTimeresultSetMap - startTime) +
                        "ms getManager:getAllFolderContentItems = " +
                        (endTime - startTime) +
                        "ms overallTime=" +
                        (endTime - firstStartTime) +
                        "ms folderID =" +
                        collectionID);
            }
        }
    }

    Hi,
    the SELECT statement accessing the MSEG table is often slow. To improve the performance of the MSEG access:
    1. Check for the proper notes in the Service Marketplace if you are working on a CIN version.
    2. Consider indexing the MSEG table.
    3. Check and limit the columns in the SELECT statement.
    A possible way (a sketch that pushes the exclusions into the WHERE clause follows the code):
    SELECT MBLNR MJAHR ZEILE BWART MATNR WERKS LIFNR MENGE MEINS
           EBELN EBELP LGORT SMBLN BUKRS GSBER INSMK XAUTO
      FROM MSEG
      INTO CORRESPONDING FIELDS OF TABLE ITAB
      WHERE WERKS EQ P_WERKS AND
            MBLNR IN S_MBLNR AND
            BWART EQ '105'.
    * The DELETE statements below were missing the field name in the original
    * post; MBLNR is assumed here.
    DELETE ITAB WHERE MBLNR EQ '5002361303'.
    DELETE ITAB WHERE MBLNR EQ '5003501080'.
    DELETE ITAB WHERE MBLNR EQ '5002996300'.
    DELETE ITAB WHERE MBLNR EQ '5002996407'.
    DELETE ITAB WHERE MBLNR EQ '5003587026'.
    DELETE ITAB WHERE MBLNR EQ '5003493186'.
    DELETE ITAB WHERE MBLNR EQ '5002720583'.
    DELETE ITAB WHERE MBLNR EQ '5002928122'.
    DELETE ITAB WHERE MBLNR EQ '5002628263'.
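    If those document numbers really do have to be excluded, a range table lets the database filter them out instead of deleting rows afterwards. A sketch under the assumption that the literals are material document numbers (MBLNR); adjust the field if they belong to a different column:
    * Sketch only: push the exclusions into the WHERE clause via a range table.
    DATA: R_MBLNR  TYPE RANGE OF MSEG-MBLNR,
          LS_MBLNR LIKE LINE OF R_MBLNR.

    LS_MBLNR-SIGN   = 'E'.                  " E = exclude
    LS_MBLNR-OPTION = 'EQ'.
    LS_MBLNR-LOW    = '5002361303'.
    APPEND LS_MBLNR TO R_MBLNR.
    LS_MBLNR-LOW    = '5003501080'.
    APPEND LS_MBLNR TO R_MBLNR.
    * Append the remaining document numbers from the list above in the same way.

    SELECT MBLNR MJAHR ZEILE BWART MATNR WERKS LIFNR MENGE MEINS
           EBELN EBELP LGORT SMBLN BUKRS GSBER INSMK XAUTO
      FROM MSEG
      INTO CORRESPONDING FIELDS OF TABLE ITAB
      WHERE WERKS EQ P_WERKS AND
            MBLNR IN S_MBLNR AND
            MBLNR IN R_MBLNR AND
            BWART EQ '105'.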
    Regards
    Bala.M

  • Performance Issue: Update Statement

    Hi Team,
    My current environment is Oracle 11g RAC.
    My application team is executing an update statement (of course it is their daily activity),
    updating about 1 lakh (100,000) rows; it usually takes about 3-4 minutes to run the statement.
    But today it is taking much more time, i.e. more than 8 hours.
    I then generated the explain plan of the update statement and found that it is doing a full table scan.
    Kindly assist me in fixing the issue by letting me know where and how to look for the problem.
    Note: statistics are up to date.
    Thanks in advance.
    Regards

    If you notice, there is no index supporting the WHERE clause of the update statement below -
    UPDATE REMEDY_JOURNALS_FACT SET JNL_CREATED_BY_IDENTITY_KEY = ?, JNL_CREATED_BY_HR_KEY = ?, JNL_CREATED_BY_NTWRK_KEY = ?, JNL_MODIFIED_BY_IDENTITY_KEY = ?, JNL_MODIFIED_BY_HR_KEY = ?, JNL_MODIFIED_BY_NTWRK_KEY = ?, JNL_ASSGN_TO_IDENTITY_KEY = ?, JNL_ASSGN_TO_HR_KEY = ?, JNL_ASSGN_TO_NTWRK_KEY = ?, JNL_REMEDY_STATUS_KEY = ?, JOURNALID = ?, JNL_DATE_CREATED = ?, JNL_DATE_MODIFIED = ?, ENTRYTYPE = ?, TMPTEMPDATETIME1 = ?, RELATEDFORMNAME = ?, RELATED_RECORDID = ?, RELATEDFORMKEYWORD = ?, TMPRELATEDRECORDID = ?, ACCESS_X = ?, JOURNAL_TEXT = ?, DATE_X = ?, SHORTDESCRIPTION = ?, TMPCREATEDBY = ?, TMPCREATE_DATE = ?, TMPLASTMODIFIEDBY = ?, TMPMODIFIEDDATE = ?, TMPJOURNALID = ?, JNL_JOURNALTYPE = ?, COPIEDTOWORKLOG = ?, PRIVATE = ?, RELATEDKEYSTONEID = ?, URLLOCATION = ?, ASSIGNEEGROUP = ?, LAST_UPDATE_DT = ? WHERE REMEDY_JOURNALS_KEY = ?
    Explain Plan -
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop | TQ |IN-OUT| PQ Distrib |
    | 0 | UPDATE STATEMENT | | | | 1055 (100)| | | | | | |
    | 1 | UPDATE | REMEDY_JOURNALS_FACT | | | | | | | | | |
    | 2 | PX COORDINATOR | | | | | | | | | | |
    | 3 | PX SEND QC (RANDOM) | :TQ10000 | 1 | 784 | 1055 (1)| 00:00:05 | | | Q1,00 | P->S | QC (RAND) |
    | 4 | PX BLOCK ITERATOR | | 1 | 784 | 1055 (1)| 00:00:05 | 1 | 10 | Q1,00 | PCWC | |
    |* 5 | TABLE ACCESS STORAGE FULL| REMEDY_JOURNALS_FACT | 1 | 784 | 1055 (1)| 00:00:05 | 1 | 10 | Q1,00 | PCWP | |
    Predicate Information (identified by operation id):
    5 - storage(:Z>=:Z AND :Z<=:Z AND "REMEDY_JOURNALS_KEY"=:36) filter("REMEDY_JOURNALS_KEY"=:36)
    Note
    - automatic DOP: skipped because of IO calibrate statistics are missing

  • Performance issue - smart response required

    Hi Guyz,
    SELECT * FROM regup
    INTO CORRESPONDING FIELDS OF TABLE itab1
    FOR ALL ENTRIES IN itab
    WHERE bukrs = itab-bukrs
    AND belnr = itab-belnr
    AND lifnr = itab-lifnr
    AND gjahr = itab-gjahr
    AND vblnr NE space.
    This query takes 2.5 minutes to execute, and I have to improve its performance.
    FYI:
    itab has 2,677 records
    and the final data in table itab1 has 3,536 entries. It takes a huge amount of time, 2.5 minutes, to execute this portion. How can I improve the performance of this query?
    Please help, it's urgent.
    Note: If Google can fetch data from around the globe in a fraction of a second, then at least this is not impossible.
    Your answers will be rewarded.

    It would be useful to know what business requirement this is trying to meet - it looks like you are trying to trawl through the payment run line items table for a series of document numbers, but you may already have some of this same data available from BKPF & BSEG, or at least more quickly accessible by travelling through these tables first.
    For example, I'm not sure if it's site configured or standard, but I have seen the LAUFD and LAUFI value concatenated into the BKPF-BKTXT field on the paying (clearing) document, e.g. "20070205-123456", which would mean you could then get to the REGUP data much more efficiently.
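    A purely illustrative ABAP sketch of that suggestion: it assumes the payment run date and ID can be recovered from the clearing document's BKPF-BKTXT in the '20070205-123456' format (site-specific, as noted above), and that itab carries that text in a field called bktxt, which is an assumption. REGUP is then read via its leading key fields LAUFD/LAUFI, which is far more selective than the document-based access:
    * Sketch only: derive the payment run key and read REGUP by LAUFD/LAUFI.
    TYPES: BEGIN OF ty_run,
             laufd TYPE regup-laufd,
             laufi TYPE regup-laufi,
           END OF ty_run.
    DATA: lt_run  TYPE STANDARD TABLE OF ty_run,
          ls_run  TYPE ty_run,
          ls_itab LIKE LINE OF itab.

    LOOP AT itab INTO ls_itab.
      " bktxt assumed to hold '<LAUFD>-<LAUFI>', e.g. '20070205-123456'
      SPLIT ls_itab-bktxt AT '-' INTO ls_run-laufd ls_run-laufi.
      APPEND ls_run TO lt_run.
    ENDLOOP.
    SORT lt_run BY laufd laufi.
    DELETE ADJACENT DUPLICATES FROM lt_run COMPARING laufd laufi.

    IF NOT lt_run[] IS INITIAL.
      SELECT * FROM regup
        INTO CORRESPONDING FIELDS OF TABLE itab1
        FOR ALL ENTRIES IN lt_run
        WHERE laufd = lt_run-laufd AND
              laufi = lt_run-laufi AND
              vblnr NE space.
    ENDIF.
    The original bukrs/belnr/lifnr/gjahr conditions can still be applied on top of the result if needed.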

  • Hi, performance issue: getting a timeout error

    hi all,
    in the code below, the commented part is my original one and I changed it to UP TO 1 ROWS.
    Is this coding right?
    TABLES: DD03L.
    DATA: BEGIN OF WDD03M,
              FIELDNAME LIKE DD03M-FIELDNAME,
              TABNAME LIKE DD03M-TABNAME,
              CHECKTABLE LIKE DD03M-CHECKTABLE,
              ROLLNAME LIKE DD03M-ROLLNAME,
              ENTITYTAB LIKE DD03M-ENTITYTAB,
              DOMNAME LIKE DD03M-DOMNAME,
              DDTEXT LIKE DD03M-DDTEXT,
            END OF WDD03M.
      DATA: FLD LIKE TOBJ-FIEL1.
    SELECT SINGLE FIELDNAME                               
                          TABNAME
                          CHECKTABLE
                          ROLLNAME
                          ENTITYTAB
                          DOMNAME
                          DDTEXT
                     INTO (WDD03M-FIELDNAME
                          ,WDD03M-TABNAME
                          ,WDD03M-CHECKTABLE
                          ,WDD03M-ROLLNAME
                          ,WDD03M-ENTITYTAB
                          ,WDD03M-DOMNAME
                          ,WDD03M-DDTEXT)
                     FROM DD03M
                    WHERE TABNAME  LIKE 'AUTH%'
                      AND DDLANGUAGE = SY-LANGU
                      AND FIELDNAME  = FLD
                      AND FLDSTAT    = 'A'
                      AND ROLLSTAT   = 'A'
                      AND DOMSTAT    = 'A'
                      AND TEXTSTAT   = 'A'.
    data: AUTH(30) type c .
    concatenate '%' AUTH '%' into AUTH.
                    SELECT FIELDNAME                               
                           TABNAME
                           CHECKTABLE
                           ROLLNAME
                           ENTITYTAB
                           DOMNAME
                           DDTEXT
                      INTO corresponding fields of WDD03M
                      FROM DD03M
                     WHERE TABNAME  LIKE AUTH
                       AND DDLANGUAGE = SY-LANGU
                       AND FIELDNAME  = FLD
                       AND FLDSTAT    = 'A'
                       AND ROLLSTAT   = 'A'
                       AND DOMSTAT    = 'A'
                       AND TEXTSTAT   = 'A'.
    endselect.
    if sy-subrc <> 0 .
    write:/ ' not ret'.
    else.
    write: wdd03m.
    endif.

    Hi,
    check now; it should no longer take a long time.
    TABLES: dd03l.
    DATA: BEGIN OF wdd03m occurs 0,
    fieldname LIKE dd03m-fieldname,
    tabname LIKE dd03m-tabname,
    checktable LIKE dd03m-checktable,
    rollname LIKE dd03m-rollname,
    entitytab LIKE dd03m-entitytab,
    domname LIKE dd03m-domname,
    ddtext LIKE dd03m-ddtext,
    END OF wdd03m.
    DATA: fld LIKE tobj-fiel1.
    SELECT SINGLE FIELDNAME
    TABNAME
    CHECKTABLE
    ROLLNAME
    ENTITYTAB
    DOMNAME
    DDTEXT
    INTO (WDD03M-FIELDNAME
    ,WDD03M-TABNAME
    ,WDD03M-CHECKTABLE
    ,WDD03M-ROLLNAME
    ,WDD03M-ENTITYTAB
    ,WDD03M-DOMNAME
    ,WDD03M-DDTEXT)
    FROM DD03M
    WHERE TABNAME LIKE 'AUTH%'
    AND DDLANGUAGE = SY-LANGU
    AND FIELDNAME = FLD
    AND FLDSTAT = 'A'
    AND ROLLSTAT = 'A'
    AND DOMSTAT = 'A'
    AND TEXTSTAT = 'A'.
    DATA: auth(30) TYPE c .
    CONCATENATE '%' auth '%' INTO auth.
    SELECT SINGLE fieldname
    tabname
    checktable
    rollname
    entitytab
    domname
    ddtext
    INTO CORRESPONDING FIELDS OF wdd03m
    FROM dd03m
    WHERE
    ddlanguage = sy-langu
    AND fieldname = fld
    AND fldstat = 'A'
    AND rollstat = 'A'
    AND domstat = 'A'
    AND textstat = 'A'
    AND tabname LIKE auth.
    IF sy-subrc <> 0 .
      WRITE:/ ' not ret'.
    ELSE.
      WRITE: wdd03m.
    ENDIF.
    OR
    TABLES: dd03l.
    DATA: BEGIN OF wdd03m occurs 0,
    fieldname LIKE dd03m-fieldname,
    tabname LIKE dd03m-tabname,
    checktable LIKE dd03m-checktable,
    rollname LIKE dd03m-rollname,
    entitytab LIKE dd03m-entitytab,
    domname LIKE dd03m-domname,
    ddtext LIKE dd03m-ddtext,
    END OF wdd03m.
    DATA: fld LIKE tobj-fiel1.
    SELECT SINGLE FIELDNAME
    TABNAME
    CHECKTABLE
    ROLLNAME
    ENTITYTAB
    DOMNAME
    DDTEXT
    INTO (WDD03M-FIELDNAME
    ,WDD03M-TABNAME
    ,WDD03M-CHECKTABLE
    ,WDD03M-ROLLNAME
    ,WDD03M-ENTITYTAB
    ,WDD03M-DOMNAME
    ,WDD03M-DDTEXT)
    FROM DD03M
    WHERE TABNAME LIKE 'AUTH%'
    AND DDLANGUAGE = SY-LANGU
    AND FIELDNAME = FLD
    AND FLDSTAT = 'A'
    AND ROLLSTAT = 'A'
    AND DOMSTAT = 'A'
    AND TEXTSTAT = 'A'.
    DATA: auth(30) TYPE c .
    CONCATENATE '%' auth '%' INTO auth.
    SELECT fieldname
    tabname
    checktable
    rollname
    entitytab
    domname
    ddtext
    INTO CORRESPONDING FIELDS OF TABLE wdd03m UP TO 100 ROWS
    FROM dd03m
    WHERE
    ddlanguage = sy-langu
    AND fieldname = fld
    AND fldstat = 'A'
    AND rollstat = 'A'
    AND domstat = 'A'
    AND textstat = 'A'
    AND tabname LIKE auth.
    IF sy-subrc <> 0 .
      WRITE:/ ' not ret'.
    ELSE.
      WRITE: wdd03m.
    ENDIF.
    Regards,
    Sesh

  • Performance Issue - Elapsed = DB Time

    Hi,
    Version 11202.
    Generally, DB Time is higher than DB Time when looking at the AWR report.
    When the AWR report shows Elapsed = DB Time, what would you suggest I check?
    Host Name        Platform                         CPUs Cores Sockets Memory(GB)
    xxxxxxx          HP-UX IA (64-bit)                  16    16       4     127.84
                  Snap Id      Snap Time      Sessions Curs/Sess
    Begin Snap:     26904 22-Jul-11 16:00:04       392      13.2
      End Snap:     26905 22-Jul-11 17:00:17       383      13.4
       Elapsed:               60.21 (mins)
       DB Time:               63.57 (mins)
    Top 5 Timed Foreground Events
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                                               Avg
                                                              wait   % DB
    Event                                 Waits     Time(s)   (ms)   time Wait Class
    db file sequential read           9,173,973       2,907      0   76.2 User I/O
    DB CPU                                              875          22.9
    direct path read                    203,093         168      1    4.4 User I/O
    db file parallel read                 5,247          48      9    1.3 User I/O
    SQL*Net break/reset to client        15,109          13      1     .4 Application
    Thanks

    Yoav wrote:
    "Generally, DB Time is higher than DB Time when looking at the AWR report."
    I'm not sure what you intended to say here. Which of the two is normally larger in your environment?
    "When the AWR report shows Elapsed = DB Time, what would you suggest I check?"
    It's quite reasonable for that to happen. It just means that there was, on average, one session clocking database time at any point in time during the snapshot interval. Since you have 16 CPU cores, your system should be more than capable of handling that level of activity.
    Justin

  • J2sdk1.4 takes higher response time in database interaction than jdk1.3

    Hi All,
    I am working on performance issues regarding response time. I have upgraded my system from JDK 1.3 to J2SDK 1.4 and was expecting a performance gain in response time, but to my surprise it shows varied results with my application: J2SDK 1.4 takes more time to execute the application when it has to deal with the database. I am using Oracle 9i as the backend database server.
    If anybody has an idea why J2SDK 1.4 shows a higher response time when interacting with the database compared to JDK 1.3, please let me know.
    Thanks in advance


  • JDBC Interaction response time difference in j2sdk1.4 and jdk1.3

    Hi All,
    I am working on performance issues regarding response time. I have upgraded my system from JDK 1.3 to J2SDK 1.4 and was expecting a performance gain in response time, but to my surprise it shows varied results with my application: J2SDK 1.4 takes more time to execute the application when it has to deal with the database. I am using Oracle 9i as the backend database server.
    If anybody has an idea why J2SDK 1.4 shows a higher response time when interacting with the database compared to JDK 1.3, please let me know.
    Thanks in advance

    You may use the latest JDBC driver - http://www.oracle.com/technology/tech/java/sqlj_jdbc/index.html
    And check the documentation for the new features and changes between JDBC drivers in the JDBC Developer's Guide and Reference - http://download.oracle.com/docs/cd/B19306_01/java.102/b14355/toc.htm
    Best regards.

  • Dialog response time comparison between cities

    Is there any way we can compare the dialog response time in different cities for the same transaction?
    We are facing issues with response time in 2 different cities: the response time in one city is lower compared to the other while fetching the same data.
    The system is an ABAP ECC 6.0 R/3 system.
    We checked that the GUI time is also different in both cities.

    Hi,
    Check the network times from each city with the SAP utility niping.
    Check also if people from the different cities are connected to the same application servers.
    Regards,
    Olivier

  • Performance Issue : Why does ADF Taskflow Portlet (JSF bridge portlet) loading ADF Specific images, css, js everytime from portlet producer with dynamic URL with portlet_id and context parameters?

    Hi All,
    We have used WSRP Portlet in Webcenter Portal Page. The Portlet is created using JSF Bridge out of ADF Bounded Taskflow.
    It is causing a performance issue. Every time, static content like js, css and image URLs are downloaded, and the URLs contain portlet_id and a few other dynamic parameters like resource_id, client_id etc.
    We are not able to cache this static content as the URLs are dynamic. These ADF-specific images, js and css files take a long time to load.
    Sample URL:
    /<PORTAL_CONTEXT>/resourceproxy/~.clientId~3D-1~26resourceId~3Dresource-url~25253Dhttp~2525253A~2525252F~2525252F<10.*.*.*>~2525253A7020~2525252FportletProdApp~2525252Fafr~2525252Fring_60.gif~26locale~3Den~26checksum~3D3e839bc581d5ce6858c88e7cb3f17d073c0091c7/ring_60.gif
    /<PORTAL_CONTEXT>/resourceproxy/~.clientId~3D-1~26resourceId~3Dresource-url~25253Dhttp~2525253A~2525252F~2525252F<10.*.*.*>~2525253A7020~2525252FportletProdApp~2525252Fafr~2525252Fpartition~2525252Fie~2525252Fn~2525252Fdefault~2525252Fopt~2525252Fimagelink-11.1.1.7.0-4251.js~26locale~3Den~26checksum~3Dd00da30a6bfc40b22f7be6d92d5400d107c41d12/imagelink-11.1.1.7.0-4251.js
    Technologies Used:
    Webcenter Portal PS6
    Jdeveloper 11.1.1.7
    Please suggest how this performance issue can be resolved.
    Thanks.
    Regards,
    Digesh

    Strange...
    I can't reproduce this because I have issues with creating portlets... If I can solve that I will do some testing and see if I can reproduce your issue.
    Can you create a new producer with a single portlet that uses a simple taskflow and see if that works?
    Are you also using business components in the taskflows or something? You can try removing some parts of the taskflow and test whether it works, so you can identify the component(s) that cause the issue.

  • Hard drive passes all tests but extremely high response times causing major performance issues.

    I have a HP Compaq Presario CQ62-360TX pre-loaded with Windows 7 home premium (64-bit) that I purchased just under a year ago.
    Recently my experience has been interrupted by stuttering that ranges from annoying in general use to a major headache when playing music or videos from the hard drive.
    The problem appears to be caused by extremely high hard drive response times (up to 10 seconds). As far as I know I didn't install anything that might have caused the problem before this happened, and I can't find anything of note looking back through Event Viewer.
    In response to this I've run multiple hard drive scans for problems (chkdsk, scandsk, test through BIOS, test through HP software and others) all of which have passed with no problems. The only thing of any note is a caution on crystaldiskinfo due to the reallocated sector count but as none of the other tests have reported bad sectors I'm unsure as to whether this is causing the problem. I've also updated drivers for my Intel 5 Series 4 Port SATA AHCI Controller from the Intel website and my BIOS from HP as well as various other drivers (sound, video etc), as far as I can tell there are none available for my hard drive directly. I've also wanted to mess with the hard drive settings in the BIOS but it appears those options are not available to me even in the latest version.
    System Specs:
    Processor: Intel(R) Pentium(R) CPU P6100 @ 2.00GHz (2 CPUs), ~2.0GHz
    Memory: 2048MB RAM
    Video Card: ATI Mobility Radeon HD 5400 Series
    Sound Card: ASUS Xonar U3 Audio Device or Realtek High Definition Audio (both have problem)
    Hard Drive: Toshiba MK5065GSK
    Any ideas?
     Edit: The drive is nowhere near full, it's not badly fragmented and as far as I can tell there's no virus or malware.

    Sounds like failing sectors have been replaced with good spares successfully so far. This is done on the fly and will not show up in any test; you have a failing drive. I would back up your data and replace the hard drive.
    Sector replacement on the fly also explains the poor performance. Replacing sectors with spares is normal if it is just a few over many years, but CrystalDiskInfo is warning you that there are too many, a sign that drive failure is around the corner.

  • ISA Server 2006 + Average response time for Non Cached requests = performance issues?!?!?!

    All,
    I am in a predicament with internet browsing speeds... We have a 3rd party looking after our line and internet-facing firewall, so I can't troubleshoot those. At the moment I'm looking at ISA as the potential bottleneck. We have a fairly standard environment:
    Internal > Local Host > Perimeter network > Firewall > Internet
    I have been running custom reports on the ISA server to see what data can be collected. I have noticed that "Average response time for non cached requests" (traffic by time of day) can be as high as 76 seconds! Cached hits are between 0.5
    and 2 seconds.
    I have also configured a connectivity verifier, which is also flagging slow connectivity, massively over 5000 ms, and on occasions reporting "can't resolve server name" - and this is configured for
    www.Microsoft.com. DNS? However, I have looked through DNS and can see no obvious errors or configuration issues.
    I have run the BPA on the ISA server to ensure its health - the connectivity verifier errors flagged timeouts to microsoft.com, as expected...
    Can anyone advise any obvious areas to investigate, as I'm struggling! As always, the 3rd party have told us the internet pipe is fine :O

    Problem resolved.
    DNS forwarders have been changed on the ISA server / DNS and this has improved lookup speed considerably.
    thanks all :)
