Slow performance of query

Hi,
We have been experiencing performance problems in our database lately. Here is the sample query:
select /*+ ORDERED FIRST_ROWS(6) */ * from
  (select DCDATA_ID, score(1) as score
   from aap.DCDATA_imagearc a
   where contains(DCDATA_META, '((BASEBALL and (USA not BROWNLOW)) and
     ((((({aapdefault} within document@schema)
     not ((((((((((Credit\~AP or Credit\~AFP) or Credit\~EPA) or Credit\~30)
     or Credit\~113) or Credit\~95) or Credit\~13) or Credit\~33) or Credit\~65)
     or Credit\~56) or Credit\~42))
     not ((({ap+arroyo} within credit) or ({ap+chris} within credit))
     or ({ap+graylock} within credit)))
     not INTL+OUT) not australia+only))', 1) > 0
   order by DCDATA_ID desc)
where ROWNUM <= 6
============
Plan Table
============
----------------------------------------------------------------------------------------------------------------------------------
| Id | Operation                             | Name               | Rows | Bytes | Cost | Time     | Pstart       | Pstop        |
----------------------------------------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT                      |                    |      |       |  11K |          |              |              |
|  1 |  COUNT STOPKEY                        |                    |      |       |      |          |              |              |
|  2 |   VIEW                                |                    |    7 |   182 |  11K | 00:02:16 |              |              |
|  3 |    TABLE ACCESS BY GLOBAL INDEX ROWID | DCDATA_IMAGEARC    |  14K | 1583K |  11K | 00:02:16 | ROW LOCATION | ROW LOCATION |
|  4 |     INDEX FULL SCAN DESCENDING        | DCDATA_IMAGEARC_ID |  960 |       |   20 | 00:00:01 |              |              |
----------------------------------------------------------------------------------------------------------------------------------
This query waits on the "db file sequential read" wait event and takes 5-10 minutes, which in turn causes our website to hang. I tried increasing the SGA_TARGET size from 8GB to 10GB, but it was of no help. I then gathered statistics, but even that didn't help. Then I changed the FIRST_ROWS hint in the query to ALL_ROWS. It is definitely working fine now, but I'm not convinced that this is a permanent solution.
So I was thinking of increasing the memory and the KEEP pool size, and pinning the indexes used in the query into the KEEP pool.
Can you please tell me your views on pinning indexes into the KEEP pool?
Is rebuilding the index going to help? (We are optimizing the text index daily, and I feel rebuilding the index may not help much.)
Appreciate your comments on this.
Thanks,
Ramya

"db file sequential read" is a single-block read wait: the session is typically reading index blocks, or table rows fetched by ROWID.
It might be the case that you're reading a very large index or a fragmented one.
It may be worth rebuilding the table's indexes during the night, or even rebuilding the table itself by moving it between tablespaces.
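If you do decide to try the KEEP pool, the idea can be sketched roughly as below. This is a sketch, not a tested fix: the cache size and the Oracle Text index name are assumptions; only the aap.DCDATA_IMAGEARC_ID index name comes from the plan above.

```sql
-- 1. Carve out a KEEP cache (its memory comes out of the buffer cache budget).
ALTER SYSTEM SET db_keep_cache_size = 512M SCOPE = BOTH;

-- 2. Assign the B-tree index driving the ORDER BY ... ROWNUM access to it.
ALTER INDEX aap.DCDATA_IMAGEARC_ID STORAGE (BUFFER_POOL KEEP);

-- 3. If fragmentation is suspected, an online rebuild avoids blocking DML.
ALTER INDEX aap.DCDATA_IMAGEARC_ID REBUILD ONLINE;

-- 4. The Oracle Text index is maintained separately; a FULL optimize
--    defragments its token tables (the index name here is hypothetical).
EXEC CTX_DDL.OPTIMIZE_INDEX('DCDATA_META_IDX', 'FULL');
```

Note that pinning only helps if the index blocks actually fit in the KEEP pool and are re-read often; if the plan itself is the problem (as the ALL_ROWS experiment suggests), getting a better plan is the more durable fix.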

Similar Messages

  • Time Out and slow performance of query

    Hi Experts,
    We have a MultiProvider giving sales and stock data. With 1000 articles and 200 sites, this query doesn't respond at all: it either times out, or the Analyzer status shows "not responding". However, with a single article as input the query responds in 60 seconds. Please help.
    Best gds
    SumaMani

    Hi,
    Did you run the query in RSRT and click on Performance Info?
    It will give some messages so that you can take steps to improve the performance of the query.
    Regards,
    Rama Murthy.

  • Slow performance of Query-Large Table how to optimize for performance

    Hi Friends,
    I am an Oracle DBA, and recently I was asked to administer an Oracle HRMS database as a substitute for the HRMS DBA, who has gone on vacation.
    It has been reported to me that a few queries are taking a long time to refresh and populate the forms. After some investigation I found that the table HR.PAY_ELEMENT_ENTRY_VALUES_F has more than 15 million rows in it. The storage parameters specified for the table are the Oracle defaults. The table has grown very big, and even a count(*) takes more than 7 minutes to respond.
    My question is: is there any way it can be tuned for better performance without an overhaul? Is it normal for this table to grow this big with 6000 employees' data for 4 years?
    Any response/help in this regard will be appreciated. You may please answer me at [email protected]
    Thanks in Advance.
    Rajeev.

    That was a good suggestion by Karthick_Arp, but there is a chance that it is not logically identical depending on the data (I believe that is the reason for his warning).
    Try this rewrite, which moves T6 to an inline view and uses the DECODE function to determine if the one row returned from T6 should be used:
    SELECT
      ASSOC.NAME_FIRST || ' ' || ASSOC.NAME_LAST AS CLIENT_MANAGER
    FROM
      T1 ASSOC,
      T2 CE,
      T3 AA,
      T4 ACT,
      T5 CC,
      (SELECT
        CA.ASSOC_ID
      FROM
        T6 CA
      WHERE
        CA.COMP_ID = :P_ENT_ID
        AND CA.CD_CODE IN ('CMG','RCM','BCM','CCM','BAE')
      GROUP BY
        CA.ASSOC_ID) CA
    WHERE
      CE.ENT_ID = ACT.PRIMARY_ENT_ID(+)
      AND CE.ENT_ID = :P_ENT_ID
      AND ASSOC.ID = DECODE(AA.ASSOC_ID, NULL, CA.ASSOC_ID, AA.ASSOC_ID)
      AND NVL(ACT.ACTIVITY_ID, 0) = NVL(AA.ACTIVITY_ID, 0)
      AND ASSOC.BK_CODE = CC.CPY_NO
      AND ASSOC.CENTER = CC.CCT_NO
      AND AA.ROLE_CODE IN ('CMG', 'RCM', 'BCM', 'CCM', 'BAE');
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

  • SSRS 2008 R2 extremely slow: the query runs in less than a second in the dataset designer, but viewing the report takes over 10 minutes

    SSRS 2008 R2 is extremely slow.  The query runs in less than a second in the dataset designer but if you try to view the report it takes over 10 minutes.  I have read this is a bug in SSRS 2008 R2.  We installed the most recent patches and
    service packs.  Nothing we've done so far has fixed it and I see that I'm not the only person with this problem.  However I don't see any answers either.

    Hi Kim Sharp,
    According to your description, when you view the report it is extremely slow in SSRS 2008 R2, but it is very fast when you execute the query in the dataset designer, right?
    I have tested on my local environment and can't reproduce the issue. This is clearly a performance issue; rendering performance can be affected by a combination of factors that include hardware, the number of concurrent users accessing reports, the amount of data in a report, the design of the report, and the output format. If you have parameters in your report that contain many values in the list, the bad performance you mention is a known issue on 2008 R2 which already has a hotfix:
    http://support.microsoft.com/kb/2276203
    If any issue remains after applying the update, I recommend you submit feedback at https://connect.microsoft.com/SQLServer/
    Otherwise, you can take some steps to improve performance when designing the report, because how you create and update reports affects how fast the report renders.
    The Report Server ExecutionLog2 view contains report performance data. You can use the query below to see where the report processing time is being spent, and determine whether the delay is in data retrieval, report processing, or report rendering:
    use ReportServer
    SELECT TOP 10 ReportPath, Parameters,
           TimeDataRetrieval + TimeProcessing + TimeRendering AS [total time],
           TimeDataRetrieval, TimeProcessing, TimeRendering,
           ByteCount, [RowCount], Source, AdditionalInfo
    FROM   ExecutionLog2
    ORDER  BY TimeStart DESC
    Use the methods below to help troubleshoot issues according to the query results:
    Troubleshooting Reports: Report Performance
    Besides this, you could also follow these articles for more information about this issue:
    Report Server Catalog Best Practices
    Performance, Snapshots, Caching (Reporting Services)
    Similar thread for your reference:
    SSRS slow
    Any problem, please feel free to ask
    Regards
    Vicky Liu

  • Slow performance for large queries

    Hi -
    I'm experiencing slow performance when I use a filter with a very large OR clause.
    I have a list of users whose uids are known, and I want to retrieve attributes for all of them. If I do this one at a time, I pay the network overhead, and this becomes a bottleneck. However, if I try to get information about all users at once, the query runs ridiculously slowly: ~10 minutes for 5000 users.
    The syntax of my filter is: (|(uid=user1)(uid=user2)(uid=user3)(uid=user4).....(uid=user5000))
    I'm trying this technique because it's similar to good design for oracle - minimizing round trips to the database.
    I'm running LDAP 4.1.1 on a Tru64 OS - v5.1.

    This is a performance/tuning forum for iPlanet Application Server. You'd have better luck with this question on the Directory forum.
    The directory folks don't have a separate forum dedicated to tuning, but they answer performance questions in the main forum all of the time.
    David

  • Slow performance of PreparedStatement

    I am having difficulty with extremely slow performance of a relatively simple Microsoft SQL Server 2000 call from my Java applet. Most of the application runs fine, but there are certain parts that repeatedly give me a long delay before completing their execution. The code is as follows:
    // Create the try block for the execution of the SQL code
    try {
        // Create the command to be executed
        String command = "select Description, Enabled from tLineInfo where( Line = ? )";
        // Create the SQL text to be executed
        PreparedStatement get = _connection.prepareStatement( command );
        get.setInt( 1, lineNo );
        // Execute the SQL command
        ResultSet lineInfo = get.executeQuery();
        // Display the information accordingly
        if( lineInfo.next() ){
            // Populate the user data fields
            description.setText( lineInfo.getString( "Description" ) );
            <! more display code here >
        }   // End of if statement
        else {
            // Clear the user data fields
            description.setText( " " );
            <! more display code here >
        }   // End of else statement
        // Close the result set in preparation for the next query
        lineInfo.close();
    }   // End of try block
    catch( Exception e ){
        // Display a dialog box informing the user of the problem
        Object[] options = { "     OK     " };
        JOptionPane.showOptionDialog( null, e.getMessage(),
                "Error",
                JOptionPane.OK_OPTION, JOptionPane.ERROR_MESSAGE,
                null, options, options[ 0 ] );
    }   // End of Exception catch
    // Get the related area information
    populateAreaCombo( lineNo );

    private void populateAreaCombo( int lineNo ){
        // Format the areaComboBox
        try {
            // Create a command to get the devices from the database
            String command = "select Area, [Name], [Description] from tAreas where( Line = ? )";
            // Create the SQL statement to grab the information, and
            // populate the search parameter
            PreparedStatement getAreas =
                    _connection.prepareStatement( command );
            getAreas.setInt( 1, lineNo );
            // Execute the command
            ResultSet areas = getAreas.executeQuery();
            // Loop through the result set and collect the areas
            Vector<String> controlAreas = new Vector<String>();
            while( areas.next() ){
                // Add the area to the comboBox
                controlAreas.add( Integer.toString( areas.getInt( "Area" ) ) +
                        " - " + areas.getString( "Name" ) + ": " +
                        areas.getString( "Description" ) );
            }   // End of while loop
            <! more display code here >
    The application always seems to pause at the second PreparedStatement call:

        PreparedStatement getAreas =
                _connection.prepareStatement( command );

    This seems to be a very simple operation, and it is not even the execution of the query where the long delay is realized. Rather, it is in the actual creation of the object prior to the execution.
    The delay is very repeatable at this exact statement each time.
    Additionally, of interest, is that the delay is only realized on computers remotely connected to the database. If I run this code on the localhost, then there is no delay. As soon as I distribute it, then the delay is incurred. That being said, there is not a network related issue that I can identify here. I have even isolated the server to be on the network with just one other PC, and the delay still persisted.
    Does anyone have any ideas?
    Thanks

    > I can determine where the delay occurs by adding dialog boxes at a bunch of different steps, then monitoring them for when they appear; a little archaic, of course, but an easy way to find this out.

    You should:
    1. Get the start time.
    2. Get the current time at each step.
    3. Print the results at the end.
    4. Repeat a number of times to average.

    > When you say that I should not mix database code with display code... What exactly do you mean? To be more precise in my description, I was simply setting a bunch of different text fields and/or check boxes, etc., based on the result set returned. It was not as if I was creating a portion of the GUI there or something. I am assuming that is an allowable practice...

    You should have a class that does nothing but the database work. That class should be used by other classes (like classes that do GUI).

    > I have made little effort to close my resources, and sometimes they are not closed at all. When you say "resources", what exactly do you mean by that? Are you referring to the result sets, for example? I do not know of other resources that need to be closed, except for the connection to the DB itself. This, I have as persistent throughout the duration of the user's session.

    You must close result sets, statements and connections. They must be closed in that order.

    > I am not running this code on the Internet, but it has been designed to be run on a small corporate network (< 10 users). This is why I opted for the applet to run the entire application through instead of doing more HTML work.

    That is ok.

  • BC_MSG slow performance

    I am experiencing slow performance in a SAP PI 7.1 EHP1 SP05 installation. The BC_MSG table contains about 550,000 messages. I archive and delete messages that are 10 days old. The environment is Red Hat Linux 5.5 64-bit with 16GB RAM and Oracle 10.2.0.4.
    The problem occurs when I want to query all messages containing errors in the last 7 days or more. It takes about 5 minutes to respond, sometimes more, which ends with a request timeout from ICM. I have shrunk the tables, rebuilt the indexes, restarted the system many times, and applied the latest service pack (SP06), but the performance remains unchanged.
    The query that is executed is the following:
    <pre>
    SELECT "MSG_ID","DIRECTION","REF_TO_MSG_ID","CORRELATION_ID","SEQUENCE_ID",
      "SEQUENCE_NBR","CONN_NAME","MSG_TYPE","STATUS","TO_PARTY_NAME",
      "TO_PARTY_TYPE","FROM_PARTY_NAME","FROM_PARTY_TYPE","TO_SERVICE_NAME",
      "TO_SERVICE_TYPE","FROM_SERVICE_NAME","FROM_SERVICE_TYPE","ACTION_NAME",
      "ACTION_TYPE","DELIVERY_SEMANTICS","SENT_RECV_TIME","TRAN_DELV_TIME",
      "SCHEDULE_TIME","PERSIST_UNTIL","TIMES_FAILED","RETRIES","RETRY_INTERVAL",
      "MSG_PROTOCOL","TRANSPORT","ADDRESS","CREDENTIAL","TRAN_HEADER",
      "VALID_UNTIL","NODE","ERROR_CODE","ERROR_CATEGORY","PP_USER","PP_HASH",
      "VERS_NBR","VERS_WAS_EDITED","PRIORITY_TYPE","PRIORITY"
    FROM
    "BC_MSG" WHERE "BC_MSG"."MSG_ID" IS NOT NULL AND ("STATUS" = :1 OR "STATUS" =
       :2) AND "SENT_RECV_TIME" >= :3 AND "SENT_RECV_TIME" <= :4 ORDER BY 21 DESC
    </pre>
    Parameters:
    :1 = 'FAIL'
    :2 = 'NDLV'
    :3 = 2011-03-01 08:06:02.34 (example)
    :4 = 2011-03-02 08:06:02.34 (example)
    This statement is executed once per day of the time interval I specify (e.g. 7 times if the interval is 'Last 7 Days'). I don't know why they don't use one SQL statement for the whole interval.
    When I enable tracing in the Open SQL monitor, the execution times I see are very disappointing: about 22-46 seconds for each query. I also enabled SQL trace in the Oracle database, and the results I get are the following:
    <pre>
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        0      0.00       0.00          0          0          0           0
    Execute      2      0.00       0.00          0          0          0           0
    Fetch        2     10.74      79.70      50473      62752          0           6
    total        4     10.74      79.70      50473      62752          0           6
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 32
    Elapsed times include waiting on following events:
       Event waited on                             Times   Max. Wait  Total Waited
       ----------------------------------------   Waited  ----------  ------------
      db file sequential read                     50473        0.15         72.90
      SQL*Net message to client                       2        0.00          0.00
      SQL*Net more data to client                     1        0.00          0.00
      SQL*Net message from client                     2        0.00          0.00
    </pre>
    The 'db file sequential read' wait is what worries me, but I don't know what to do.
    I also tried the same query using DbVisualizer and the JDBC jar file. The response was much better, and the execution plan was different, as you can see in the SQL trace results:
    <pre>
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.01       0.01          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1      0.00       0.00          0          7          0           0
    total        3      0.01       0.01          0          7          0           0
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 32
    Rows     Row Source Operation
          0  SORT ORDER BY (cr=7 pr=0 pw=0 time=636 us)
          0   INLIST ITERATOR  (cr=7 pr=0 pw=0 time=487 us)
          0    TABLE ACCESS BY INDEX ROWID BC_MSG (cr=7 pr=0 pw=0 time=459 us)
          0     INDEX RANGE SCAN I_BC_MSG_STATUS_SENT (cr=7 pr=0 pw=0 time=347 us)(object id 96079)
    Elapsed times include waiting on following events:
       Event waited on                             Times   Max. Wait  Total Waited
       ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                       2        0.00          0.00
      SQL*Net message from client                     2        0.18          0.19
    </pre>
    The I_BC_MSG_STATUS_SENT index used in this plan is an index on the columns STATUS, SENT_RECV_TIME that I created to see if performance would improve.
    Generally the system works fine. It handles 50,000 messages per day with no delays. But the monitoring performance is annoying. I need your expertise on that.
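    The index and the shape of statement it serves can be sketched as below. The index name and columns come from the post; the hint is illustrative only, since the PI data collector generates its own SQL, so in practice a SQL profile or stored outline would be needed to steer the plan:

    ```sql
    -- Sketch only: index name/columns from the post; hinted query is hypothetical.
    CREATE INDEX i_bc_msg_status_sent
      ON bc_msg (status, sent_recv_time);

    SELECT /*+ INDEX(m i_bc_msg_status_sent) */
           msg_id, status, sent_recv_time, error_code
    FROM   bc_msg m
    WHERE  status IN ('FAIL', 'NDLV')
      AND  sent_recv_time BETWEEN :start_time AND :end_time
    ORDER  BY sent_recv_time DESC;
    ```

    The good DbVisualizer plan above shows this index being range-scanned; the question is why the bound statement from PI picks a different plan (bind peeking on the date range is one plausible suspect).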

    You, also, have waaaay too many user startup/Login items.
    You need to pare these down to the apps you really need to have launched at startup and running in the background.
    Add or remove automatic items
    Choose Apple menu > System Preferences, then click Users & Groups.
    Select your user account, then click Login Items.
    Do one of the following:
    Click Add below the list on the right, select an app, document, folder, or disk, then click Add. If you don't want an item's windows to be visible after login, select Hide. (Hide does not apply to servers, which always appear in the Finder after login.)
    Select the name of the item you want to prevent from opening automatically, then click Delete below the list on the right.
    You need to make sure to update all of your third party software if there are OS X Mavericks updates that can be applied.
    You may need to go the third party developers' websites if there are no updates through the Mac App Store.
    Make sure all of your Web browser Internet plugins, extensions and add-ons are updated to recent versions, also.

  • Extremely slow performance on projects under version control using RoboHelp 11, PushOk, Tortoise SVN repository

    We are also experiencing extremely slow performance for RoboHelp projects under version control. We are using RoboHelp 11, PushOk and a Tortoise SVN repository on a Linux server. We are using a Linux server on our IT guys' advice, because we found SVN version control under Windows was unstable.
    When placing a Robohelp project under version control, and yes the project is on my local machine, it can take up to two hours to complete. We are using the RoboHelp sample projects to test.
    We have tried to put the project under version control from Robohelp, and also tried first putting the project under version control from Tortoise SVN, and then trying to open the project from version control in Robohelp. In both cases, the project takes a ridiculous amount of time to open. The Robohelp status bar displays Querying Version Control Status for about an hour before it starts to download from the repository, which then takes more than an hour to complete. In many cases Robohelp becomes unresponsive and we have to start the whole process again.
    If adding the project to source control completes successfully, and the project is opened from version control, performing any function also takes a very long time, such as creating a topic. When I generated a printed documentation layout it took an astonishing 218 minutes and 17 seconds to complete. Interestingly, when I generated the printed documentation layout again, it took 1 minute and 34 seconds. However, when I closed the project, opened it from version control again, and tried to generate a printed documentation layout, it again took several hours to complete. The IT guys are at a loss and say it is not a network issue, and I am starting to agree that this is a RoboHelp issue.
    I see there are a few other discussions here related to this kind of poor performance, none of which seem to have been answered satisfactorily. For example:
    Why does it take so long when adding a new topic in RH10 with PushOK SVN
    Does anybody have any ideas on what we can do or what we can investigate? I know that there are other options for version control, but I am reluctant to pursue them until I am satisfied that our current issues cannot be resolved.
    Thanks, Mark

    Do other applications work fine with the source control repository? The reason I'm asking is because you must first rule out external factors causing this behaviour. It seems that your IT guys have already looked at it, but it's better to be safe than sorry.
    I have used both VSS and TFS and I haven't encountered such a performance issue. I would suggest filing this as a bug once you have ruled out external influences: https://www.adobe.com/cfusion/mmform/index.cfm?name=wishform&loc=en
    Kind regards,
    Willam

  • Slow performance after installation of new OS-X

    I don't know if anyone else has been experiencing the same problem as me, but it seemed to occur after I installed the new OS-X.
    My MacBook Pro has become extremely slow, and it seems to be related to Mail. I thought that the standard Mail program that comes with the Mac was the problem, so I installed Postbox, and that seemed to work for a while (a day or two), and then it was the same. It's not just email programs that are slow, but all programs.
    Click and wait seconds (10, 20, 30 sec) just for a program to open. Moving between files is so slow. I feel like I'm using Windows again.
    I'm no expert, but when I look at MEMORY usage, out of 4GB I'm using 3.8GB. My Virtual Memory is 4.76GB. See screen shot below.
    I would appreciate it if anyone can help me resolve what the problem is.
    Thanks
    Richard Giuliano

    First, back up all data immediately unless you already have a current backup. If you can't back up, stop here. Do not take any of the steps below.
    Step 1
    This diagnostic procedure will query the log for messages that may indicate a system issue. It changes nothing, and therefore will not, in itself, solve your problem.
    If you have more than one user account, these instructions must be carried out as an administrator.
    Triple-click anywhere in the line below on this page to select it:
    syslog -k Sender kernel -k Message CReq 'GPU |hfs: Ru|I/O e|find tok|n Cause: -|NVDA\(|pagin|timed? ?o' | tail | awk '/:/{$4=""; print}' | open -ef
    Copy the selected text to the Clipboard by pressing the key combination command-C.
    Launch the Terminal application in any of the following ways:
    ☞ Enter the first few letters of its name into a Spotlight search. Select it in the results (it should be at the top.)
    ☞ In the Finder, select Go ▹ Utilities from the menu bar, or press the key combination shift-command-U. The application is in the folder that opens.
    ☞ Open LaunchPad. Click Utilities, then Terminal in the icon grid.
    Paste into the Terminal window (command-V). I've tested these instructions only with the Safari web browser. If you use another browser, you may have to press the return key.
    The command may take a noticeable amount of time to run. Wait for a new line ending in a dollar sign (“$”) to appear.
    A TextEdit window will open with the output of the command. Normally the command will produce no output, and the window will be empty. If the TextEdit window (not the Terminal window) has anything in it, stop here and post it — the text, please, not a screenshot. The title of the TextEdit window doesn't matter, and you don't need to post that.
    Step 2
    There are a few other possible causes of generalized slow performance that you can rule out easily.
    Disconnect all non-essential wired peripherals and remove aftermarket expansion cards, if any.
    Reset the System Management Controller.
    Run Software Update. If there's a firmware update, install it.
    If you're booting from an aftermarket SSD, see whether there's a firmware update for it.
    If you have a portable computer, check the cycle count of the battery. It may be due for replacement.
    If you have many image or video files on the Desktop with preview icons, move them to another folder.
    If applicable, uncheck all boxes in the iCloud preference pane. See whether there's any change.
    Check your keychains in Keychain Access for excessively duplicated items.
    Boot into Recovery mode, launch Disk Utility, and run Repair Disk.
    If you have a MacBook Pro with dual graphics, disable automatic graphics switching in the Energy Saver preference pane for better performance at the cost of shorter battery life.
    Step 3
    When you notice the problem, launch the Activity Monitor application in any of the following ways:
    ☞ Enter the first few letters of its name into a Spotlight search. Select it in the results (it should be at the top.)
    ☞ In the Finder, select Go ▹ Utilities from the menu bar, or press the key combination shift-command-U. The application is in the folder that opens.
    ☞ Open LaunchPad. Click Utilities, then Activity Monitor in the icon grid.
    Select the CPU tab of the Activity Monitor window.
    Select All Processes from the View menu or the menu in the toolbar, if not already selected.
    Click the heading of the % CPU column in the process table to sort the entries by CPU usage. You may have to click it twice to get the highest value at the top. What is it, and what is the process? Also post the values for User, System, and Idle at the bottom of the window.
    Select the Memory tab. What value is shown in the bottom part of the window for Swap used?
    Next, select the Disk tab. Post the approximate values shown for Reads in/sec and Writes out/sec (not Reads in and Writes out.)
    Step 4
    If you have more than one user account, you must be logged in as an administrator to carry out this step.
    Launch the Console application in the same way you launched Activity Monitor. Make sure the title of the Console window is All Messages. If it isn't, select All Messages from the SYSTEM LOG QUERIES menu on the left. If you don't see that menu, select
    View ▹ Show Log List
    from the menu bar.
    Select the 50 or so most recent entries in the log. Copy them to the Clipboard by pressing the key combination command-C. Paste into a reply to this message (command-V). You're looking for entries at the end of the log, not at the beginning.
    When posting a log extract, be selective. Don't post more than is requested.
    Please do not indiscriminately dump thousands of lines from the log into this discussion.
    Important: Some personal information, such as your name, may appear in the log. Anonymize before posting. That should be easy to do if your extract is not too long.

  • Slow performance on 11g

    Hi ,
    Recently we migrated our database from 10g to 11.2.0.2 Enterprise Edition.
    Compared with 10g, we are getting very slow performance on 11g.
    Can anyone suggest how to tune this migrated database?
    Also, please suggest the version of Tomcat server compatible with 11g.
    Thanks in advance.

    Start looking at your top N SQL.
    Even if there are a number of statements whose performance has degraded thanks to the upgrade, you might as well start with the worst performing SQL.
    If you have some performance reports from the period before upgrade then these could be used as comparison.
    If you preserved any execution plans from before the upgrade, these might come in useful as quick fixes to any specific statement issues.
    If you still have plans in AWR from before the upgrade, these can be compared with the current plans.
    "Read by other session" is a typical symptom of inefficient execution plan issues where multiple query executions are requesting the same data and it is not in the cache.
    "db file parallel read" is often a symptom of index prefetching - from your top SQL, check statements doing an index full scan, particularly if they didn't use to.
    See the following threads for SQL tuning:
    How to post a SQL tuning request: https://forums.oracle.com/forums/thread.jspa?threadID=863295
    When your query takes too long: https://forums.oracle.com/forums/thread.jspa?messageID=1812597
    So, in summary, check out your top N SQL, check which queries are doing the most "read by other session" waits, and double-check plans with "db file parallel read".
    You might find a common cause affecting a number of statements, or you might just find a couple of queries that are stressing you out and whose execution plans need some post-upgrade TLC (which is normal).
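    As a starting point for the "top N SQL" step, one possible sketch of an AWR query (the DBA_HIST_* views require the Diagnostics Pack licence; column names are the standard 11g ones):

    ```sql
    -- Sketch: top 10 statements by total elapsed time recorded in AWR.
    SELECT * FROM (
      SELECT   sql_id,
               SUM(elapsed_time_delta) / 1e6 AS elapsed_secs,
               SUM(executions_delta)         AS executions
      FROM     dba_hist_sqlstat
      GROUP BY sql_id
      ORDER BY 2 DESC
    ) WHERE ROWNUM <= 10;
    ```

    For a suspect sql_id, the historical plans can then be pulled with SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_AWR('&sql_id')); to compare pre- and post-upgrade versions.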

  • CS5 or PC: slow performance?

    Hi there, I have a problem with my PC at work, and because I am not quite sure where exactly the problem is, I decided to ask you. The PC is an Intel Pentium 4 with 3 GB RAM, XP Professional Service Pack 3, Photoshop CS5.
    The PC is connected to a server (database). When I open a bunch of photos (20-30, approximately 8-12 MB each), I sometimes experience slow performance (for example, I am trying to retouch/clone something and it takes a lot of time for the PC to redraw/respond). The interesting part is that it doesn't always happen (sometimes it can be even 3 photos and it will take ages to do something). I also notice that after approximately 20 minutes it suddenly starts to work as it should (it's like there was an update running, and when it completes everything is fine). When that started to happen today, I restarted CS5 and loaded the same photos, and it worked fine, so it is really bizarre to me.
    I set 89% RAM usage for CS5 (I pretty much only use Photoshop and Bridge), made the history steps 7, and set the cache to 6 for big and flat files. I was checking the Efficiency indicator and it's 100% all the time, so it seems it's not Photoshop; but like I said, today I just restarted Photoshop, loaded the same photos, and it was fixed.
    I tried copying the photos to my hard drive (so the performance of the server should not be a factor), but it is still the same as when I take them straight from the server. Because the hard drive is only 80 GB (I don't store anything there), I did a clean-up and disk defragment; still the same situation. Do you know what the problem can be, and is it Photoshop or the PC? I'll appreciate it.
    Thank you

    It could be that your PC is sometimes checking things on the server, and when the server or network is busy or bogged down, your PC gets caught in that cycle. So,
    next time you use Ps, set up Windows Performance Monitor (perfmon.exe or perfmon.msc) to graph disk use, network activity, and CPU usage; then you may be able to see and track the problem area.
    On Win7, Task Manager does most of the same stuff.
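    If clicking through the perfmon UI is fiddly, one way to log those same counters to a file while you work in Ps is the built-in typeperf tool (a sketch; counter names assume an English-locale XP/Win7 install, and ps_slowdown.csv is just an example output name):

```bat
typeperf "\Processor(_Total)\% Processor Time" ^
         "\PhysicalDisk(_Total)\Avg. Disk Queue Length" ^
         "\Network Interface(*)\Bytes Total/sec" ^
         -si 5 -o ps_slowdown.csv
```

    The -si flag sets the sample interval in seconds and -o writes the samples to a CSV you can review after the next slowdown, so you can see whether disk, network, or CPU spikes line up with the sluggish periods.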

  • First off, I think it's sad that I have to use my non-Apple device to post this question... Why has my iPad become absolutely useless after updating to iOS 8.1? I am unable to use my mini because of crashes, slow performance, and major battery drain.

    Restore the iPad to factory defaults and do not restore from your backup; the backup may be the cause of the problem.
    Settings > General > Reset > Erase All Content and Settings

  • Slow performance of discoverer portlets

    Hi,
    I am running into slow performance on a particular page in Portal. There are multiple sites hosted on the portal, but those all have static text and are very quick to load; the one page with Discoverer portlets and a web-clipping portlet is taking ages. If I run these reports on their own, they are instant. Has anyone had similar problems with 9.0.2 Portal and Discoverer reports? I tried to identify a particular report that might be the culprit by deleting them one by one, to no avail. After a few hours of effort I found that design mode was hanging as well, again just for this one page.
    There are 3 tabs on this page with about 4 report portlets and 1 web-clipping portlet.
    Thanks for any help,
    Brandon

    Thanks for the help.
    It seems to have been resolved, although I'm not sure if that's the end of the story. Will wait and see...
    So far it appears a port conflict of some sort in the Discoverer config was causing it to hang, and other demands on the network and on the server housing the Discoverer End User Layer were also causing problems.
    No error messages, unfortunately...

  • Server 2012 R2 slow performance over all

    This is a Dell PowerEdge R820 with 256 GB RAM and 4 TB onboard storage, configured as a Remote Desktop Services host (terminal server) running QuickBooks, MS Office 2010 Standard, and Symantec Endpoint Protection (basic antivirus installation only). It is a one-month-old server: super fast, super powerful. It is part of an AD in SBS 2008 Premium, with a single NIC card with 4 ports (3 ports disabled), a single IP, and no VLANs. The server is able to resolve any PC or DNS record with no problems, able to resolve the DC by name, able to receive Group Policy, able to update, able to do everything that I can think of... except that anything I open (Word, Excel, QuickBooks, IE) takes forever to open. After the application is open it is fast, but opening anything takes 3-4 minutes.
    I opened a ticket with Symantec: all looks good, and I created lots of exceptions for antivirus/real-time scanning. I opened a ticket with QuickBooks: the files look good, and the application was removed and reinstalled to be sure all was done correctly. I checked the binding order for the disabled NICs; the active one is the top option. There are no errors at all in the event viewers for System, Application, or Setup, no errors at all in the Dell management tool, no hard drive errors, and no controller errors.
    This server is replacing an old Dell PowerEdge running 2008 Standard with 24 GB RAM, and the old Dell opens the same applications way faster than the new one: same Quicken version, but the old server opens Quicken in seconds while the new one takes 3-4 minutes. Real-time monitoring never goes above 1% CPU and 3% memory utilization. One more thing: I removed the antivirus completely and restarted, and performance is the same without antivirus.
    Any ideas on how to troubleshoot the slow performance would be great.
    Thank you!

    Hi,
    As Sam suggested, please check whether any issue has occurred with the hard drive.
    For the current situation, please also work through the following steps and see whether we can find more clues:
    Please check that you have installed all necessary updates for Windows Server 2012 R2.
    Please perform a clean boot to check whether there are software conflicts.
    Please use Resource Monitor to troubleshoot and see whether it reveals more detail:
    Using Resource Monitor to Troubleshoot Windows Performance Issues Part 1
    Using Resource Monitor to Troubleshoot Windows Performance Issues Part 2
    If there is any update, please feel free to let me know.
    Hope this helps.
    Best regards,
    Justin Gu

  • Slow Performance - Java Related?

    This is an old box that I bought used recently, but the system install is recent. The system performs very slowly, at about 50% of the speed of comparable Macs in the XBench database. If I use Xupport to manually run "All" of the system maintenance crons I get some improvement, but it quickly goes back to being slow.
    Looking at my logs I have a "boat load" of Java errors under CrashReporter; there will be a string of "JavaNativeCrash_pidXXX.log" entries - many of them, as follows:
    An unexpected exception has been detected in native code outside the VM.
    Unexpected Signal : Bus Error occurred at PC=0x908611EC
    Function=[Unknown.]
    Library=/usr/lib/libobjc.A.dylib
    NOTE: We are unable to locate the function name symbol for the error
    just occurred. Please refer to release documentation for possible
    reason and solutions.
    Many of the line entries that follow, but not all of them, refer to SargentD2OL, a Java app that I installed; it did not work properly, so I removed it. Yet I continue to get Java errors that refer to this now non-existent app.
    I have read that Java apps use a lot of resources, and that D2OL in particular uses a lot of resources. Could my slow-performance problem be Java related? If so, any idea how I can fix this problem?
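    On the stale-log point, it can help to check whether the JavaNativeCrash entries are genuinely new or just leftovers from before the app was removed. A sketch, assuming the standard 10.3-era CrashReporter log location (the file pattern is taken from the post above):

```
# List the Java crash logs newest-first; if the newest timestamp predates
# the removal of SargentD2OL, these are leftovers, not fresh crashes.
ls -lt ~/Library/Logs/CrashReporter/JavaNativeCrash_pid*.log

# Move them aside rather than deleting outright, in case they are needed later.
mkdir -p ~/Desktop/old-java-crashlogs
mv ~/Library/Logs/CrashReporter/JavaNativeCrash_pid*.log ~/Desktop/old-java-crashlogs/
```

    If new logs keep appearing after the old ones are moved aside, something is still launching Java against the removed app (a login item or scheduled task would be the usual suspects).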
    G4 AGP Graphics   Mac OS X (10.3.9)   500 MHz, 512M RAM

    Sorry to take so long to respond, but other issues in life have demanded my attention.
    None of the solutions given have had any effect. My Java folder has both a 1.3.1 and a 1.4.2 app; Java Update 2 will not reinstall because it sees an up-to-date app in the folder. But the update's read-me says the older Java will be removed, and it is still there. Is that a problem?
    On XBench the system scores 9 to 10, while similar boxes in the XBench database score around 18 to 20. My CPU, memory, and video scores are very low; the HD throughput scores are the only normal ones. TechTool Pro 4 finds no problems. I have removed the memory sticks one at a time and retested after each cycle, with no difference.
    I have two drives, each with a 10.3.9 install. One works fine and scores around 17 on XBench; the other scores 9 to 10, so it appears to be a software problem. The slower install is on a drive that was moved from an iMac G3 to the G4 - are there issues with this?
    My favored drive is the one from the G3 (newer and faster hardware than the other drive, even though that other drive's system tests faster in XBench); it has my profile and all my info on it. It worked fine in the G3 with no problems.
    Thanks for the help,
    G4 AGP Graphics Mac OS X (10.3.9) 500 MHz, 512M RAM, ATI 8500
