Performance Issue On Single Database

Hi,
I have a performance issue on a single database on a SQL Server that has 32 databases. All other databases appear OK.
How do I begin investigating this?
I'm running SQL Server 2008 R2.
Regards
Paul

Hi Paul
As you said, you are facing a performance issue in only one database. So either nothing heavy is running on the other databases, or the issue is specific to that database.
1. Do a quick health check: blocking, high CPU, low memory.
2. If all looks good, find the worst-performing queries on that database (any number of scripts for this are available on the web); a sketch follows this list.
3. If you find that a specific query is slow, check its execution plan, find all the indexes it uses, and rebuild them (statistics will be updated automatically).
4. In the query plan, check whether there are any scans; you may be able to add an index to turn a scan into a seek.
Let us know if this doesn't help. Also check the error log to see whether a specific error is logged there.
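For steps 1 and 2, here is a minimal T-SQL sketch using DMVs that exist on SQL Server 2008 R2. The database name 'YourSlowDB' is a placeholder; note that st.dbid can be NULL for ad hoc statements, so some cached statements may not match the filter.

-- Top 10 cached statements by total CPU for one database.
SELECT TOP (10)
       DB_NAME(st.dbid)                            AS database_name,
       qs.execution_count,
       qs.total_worker_time / 1000                 AS total_cpu_ms,
       qs.total_elapsed_time / 1000                AS total_elapsed_ms,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                     WHEN -1 THEN DATALENGTH(st.text)
                     ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE DB_NAME(st.dbid) = 'YourSlowDB'
ORDER BY qs.total_worker_time DESC;

-- Quick blocking check: sessions that are currently blocked and who is blocking them.
SELECT session_id, blocking_session_id, wait_type, wait_time, status
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;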
Thanks Saurabh Sinha
http://saurabhsinhainblogs.blogspot.in/
Please click the Mark as answer button and vote as helpful
if this reply solves your problem

Similar Messages

  • Performance issue with database server

    Hello,
    We have installed ECC 6.0 on HP UX 11.23 OS with Oracle as database.
    We have a cluster environment in our production servers.
    One server has the DB-CI installed and the other has a Dialog Instance installed.
    After installation, the dialog instance (application server) works fine, but if we log in to the DB server and execute any t-code, it takes too much time or just gets stuck.
    After that, if I go to the application server and execute that t-code, it works fine; and after that, if I go back to the database server and execute that t-code, now even on the DB server it works fine.
    Kindly guide me.
    Thanks,
    Chandresh Pranami.

    Hi Chandresh,
    Check below things for your Database server.
    SM50 - Workprocess status busy or wait
    ST04 - Data buffer hit ratio
    SM37 - Any background job running there.
    Normally, even if you log in through a different app server, the database-side execution happens on the DB server only. The application server does the application processing, but the data reads and updates happen on the DB server. When a SQL request (select, update, etc.) goes to the DB server, it will always be processed by the DB server.
    So there will already be a lot of load there. If you execute your work there as well, it will take more time because the load from the application is also present.
    Regards
    Ashok

  • Database migrated from Oracle 10g to 11g Discoverer report performance issue

    Hi All,
    We are now seeing a Discoverer report performance issue: the report keeps on running ever since the database was upgraded from 10g to 11g.
    Against database 10g the report works fine, but the same report does not work fine in 11g.
    I have changed the query: I passed the date format TO_CHAR("DD-MON-YYYY") and removed the NVL and TRUNC functions from the existing query.
    The report now works fine directly against the 11g database, but when I use the same query in Discoverer it does not work and the report keeps on running.
    Please advise.
    Regards,

    Pl post exact OS, database and Discoverer versions. After the upgrade, have statistics been updated (see the sketch after the notes below)? Have you traced the Discoverer query to determine where the performance issue is?
    How To Find Oracle Discoverer Diagnostic and Tracing Guides [ID 290658.1]
    How To Enable SQL Tracing For Discoverer Sessions [ID 133055.1]
    Discoverer 11g: Performance degradation after Upgrade to Database 11g [ID 1514929.1]
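    If statistics were not refreshed after the upgrade, here is a minimal sketch using DBMS_STATS; the schema name APPS_SCHEMA is a placeholder for whichever schema the Discoverer workbooks actually query.
    -- Re-gather optimizer statistics for the application schema after the 10g -> 11g upgrade.
    BEGIN
       DBMS_STATS.GATHER_SCHEMA_STATS(
          ownname          => 'APPS_SCHEMA',
          estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
          method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
          cascade          => TRUE);
    END;
    /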
    HTH
    Srini

  • Oracle Apps Database severe Performance Issue

    Hi Gurus,
    This is regarding a severe performance issue running in our Production E-Business Suite Instance.
    It's an R12.1.3 setup installed with an 11.2.0.1 database. All the servers are Solaris SPARC 64 (Solaris 10).
    Let me brief you about the instance first:
    2 Node Application
    - Main Application Server hosting web/forms/concurrent/admin servers
    - iSupplier server hosting web services (placed in DMZ, used by external suppliers via Internet)
    1 Node Database Server
    Database Server Specs
    Memory: 144G phys mem 20G total swap
    - CPUs (8Px4cores, 2Px2cores)
    - I/O - fiber channel hard disk (hitachi SAN Storage) - 7 DATA_TOPs (7 drives with RAID 5) - current DB size 1.6 TB
    - at peak load, around 1000 concurrent forms sessions and 2000 web sessions.
    We have been facing some serious performance issues and we raised an SR with Oracle Support.
    Support analyzed a bunch of AWR reports we provided them and asked us to increase the DB_CACHE from its current usage of 27G to 40G.
    So, we changed SGA_TARGET from 35G to 50G and PGA was increased from 35G to 40G as v$pgastat was also suggesting some lack of memory.
    We made these changes last night.
    Today morning we observed the following:
    1. After the start of office hours, the EM DB Console home page showed that ADDM reported a reduced impact from lack of SGA memory, which seemed to be a good sign: earlier it was around 25%, and now it was at 12%.
    However, negative aspects were:
    1. A lot of swapping was reported by the system administrators on the DB server
    2. High CPU usage
    3. The EM DB Console showed a lot of "Concurrency" wait class events; throughout the day many blocking sessions were reported, which were making other sessions wait.
    In the AWR report, the following top foreground events were listed:
    Top 5 Timed Foreground Events
    Event                     Waits       Time(s)   Avg wait (ms)   % DB time   Wait Class
    DB CPU                                132,577                   61.46
    library cache lock        3,539       40,683    11,496          18.86       Concurrency
    library cache: mutex X    4,014,083   21,011    5               9.74        Concurrency
    db file sequential read   4,138,014   20,767    5               9.63        User I/O
    latch free                381,916     5,897     15              2.73        Other
    This is showing "library cache lock" events as the main culprit apart from the usual suspect, the CPU.
    I am attaching the AWR report. Please let me know if I should revert the memory changes, or whether there is anything else I could do.
    Please help us resolve this, because the performance is getting worse.
    Regards,
    Muneer.

    Pl do not post duplicates - Oracle Apps Database severe Performance Issue
    For all critical production issues, pl work with Support thru SRs - using the forums to troubleshoot production issues is not wise

  • Tool for diagnosing performance issues in oracle database

    Is there any tool to diagnose performance issues in queries and stored procedures in Oracle, similar to SQL Profiler for SQL Server?
    Thanks

    You can use OEM (Oracle Enterprise Manager) to diagnose and monitor the database.
    Chapter 10: Monitoring and Tuning the Database (refer to the link below; the Oracle OBE series gives step-by-step procedures with screenshots).
    This chapter introduces you to some of the monitoring and tuning operations as performed through Enterprise Manager.
    http://www.oracle.com/technology/obe/2day_dba/monitoring/monitoring.htm
    refer: Monitoring and Tuning the Database
    http://download.oracle.com/docs/cd/B14117_01/server.101/b10742/montune.htm
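    If you want per-session tracing closer to what SQL Profiler gives you, one option is SQL trace via DBMS_MONITOR, with the raw trace file formatted afterwards by the tkprof utility. This is only a sketch: the SID 123 and SERIAL# 4567 are placeholders you would first look up in V$SESSION.
    BEGIN
       DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123,
                                         serial_num => 4567,
                                         waits      => TRUE,
                                         binds      => TRUE);
    END;
    /
    -- ... run the slow queries or procedures, then switch the trace off:
    BEGIN
       DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 4567);
    END;
    /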
    Hope this helps you.
    Edited by: rajeysh on Jul 14, 2010 9:28 PM

  • Performance Issue Tracking In Database Level.

    Hi All,
    I am sorry, I actually don't know whether this is the right question to ask in this forum. Below is my question.
    We are working on Oracle 10g and are supposed to move to 11g. My question is: which book is best for getting broad knowledge of database performance issues and for monitoring and resolving them? Please suggest.
    Edited by: 930254 on Aug 22, 2012 7:56 AM

    Troubleshooting Oracle Performance (Apress) is one book I found very useful, along with the Oracle documentation (http://www.oracle.com/pls/db112/to_toc?pathname=server.112/e10822/toc.htm).
    BTW, please mark the thread as 'answered' if you feel you got your question answered. This will save the time of others who search for open questions to answer.
    regards,
    CSM

  • Database performance issue (8.1.7.0)

    Hi,
    We have a tablespace "payin" in our database (8.1.7.0).
    This tablespace is the main tablespace of our database; it is dictionary managed and heavily accessed by user SQL statements.
    We are now facing a database performance issue during peak time (i.e. at month end), when a number of users run a number of large reports.
    We have also increased the SGA sufficiently, based on the RAM size.
    This tablespace is heavily accessed by the reports.
    Now my questions are:
    Is this performance issue because the tablespace is dictionary managed instead of locally managed? When I monitor the different sessions through OEM, the number of hard parses for the connected users is high, whereas hard parses should really be low.
    In Oracle 8.1.7.0, can we convert a dictionary-managed tablespace to a locally managed one?
    By doing so, will the problem be somewhat resolved? Will it reduce the overhead on the dictionary tables and on the shared memory?
    If yes, what is the procedure to convert the tablespace from dictionary to locally managed?
    With Regards

    If your end users are just running reports against this tablespace, I don't think that the tablespace management (LM/DM) matters here. You should be concerned more about the TEMP tablespace (for heavy sort operations) and your shared pool size (as you have seen hard parses go up).
    As already stated, get statspack running and also try tracing user sessions with wait events. Might give you more clues.
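    On the conversion question itself, here is a minimal sketch, assuming 8.1.6 or later where DBMS_SPACE_ADMIN provides this procedure. PAYIN is the tablespace named above; the call converts the extent bookkeeping in place and does not rebuild segments, so it will not by itself fix the hard-parse problem.
    BEGIN
       DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL('PAYIN');
    END;
    /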

  • Severe performance issues in production database

    Hi Experts,
    we have configured RMAN in our production database recently using a third-party tool, Commvault.
    The problem is that a database of just 47 GB is taking around 6 hours to complete the backup job. Please let me know what the reason could be.
    Further to this issue, I found out a few things: our application vendor commissioned this database server, and it seems they made some changes to the TIMEZONE settings.
    When I query some of the dictionary views, for example DBA_SCHEDULER_JOBS, I get the following error:
    ORA-01882: timezone region %s not found
    Whenever I connect to the database, the connection itself is very slow, and we are suffering severe performance issues in my production database.
    Your help would be much appreciated.
    Regards,
    Salai

    Hi,
    also let us know whether you use ASM or a local file system for the datafiles.
    Your backup strategy will also be helpful:
    Do you make compressed backups?
    Full or incremental?
    To where do you backup the database? To the local filesystem, SAN Volume, Offsite Storage, Tape storage?
    Are any other jobs running while the RMAN job is running?
    Please post the stats that you have gathered over the time period when the backup is running.
    Thanks.
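    A small sketch that answers several of these questions from the database itself, assuming the database is 10g or later (V$RMAN_BACKUP_JOB_DETAILS was introduced in 10g): it shows what kind of backups ran, whether they were compressed, and how fast RMAN was reading.
    SELECT start_time,
           end_time,
           input_type,        -- e.g. DB FULL vs DB INCR
           status,
           ROUND(input_bytes  / 1024 / 1024 / 1024, 1) AS input_gb,
           ROUND(output_bytes / 1024 / 1024 / 1024, 1) AS output_gb,
           ROUND(input_bytes_per_sec / 1024 / 1024, 1) AS read_mb_per_sec,
           compression_ratio
    FROM   v$rman_backup_job_details
    ORDER  BY start_time DESC;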

  • Performance Issue: Retrieving records from Oracle Database

    While retrieving data from Oracle database we are facing performance issues.
    The query returns 890 records, and the JSP page takes almost 18 minutes to display them.
    I have observed that cpu usage is 100% while processing the request.
    Could anyone advise what methods at the DB end or the Java end we could use to avoid such issues?
    Thanks
    R.

    passion_for_java wrote:
    Will it make any difference if I select columns instead of ls.*
    Possibly, especially if there's a lot of data being returned.
    Less data over the wire means a faster response.
    You may also want to look at your database, is that outer join really needed? Does it perform? Are your indexes good?
    A bad index (or a missing one) can kill query performance (we've seen performance of queries drop from seconds to hours when indexes got corrupted).
    A missing index can cause full table scans, which of course kill performance if the table is large.
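    A minimal sketch of that check (the table, column, and index names here are placeholders standing in for the real query behind the JSP page):
    -- Generate and display the execution plan for the slow statement.
    EXPLAIN PLAN FOR
    SELECT ls.order_id, ls.status, c.customer_name
      FROM line_status ls
      LEFT JOIN customers c ON c.customer_id = ls.customer_id
     WHERE ls.created_date >= SYSDATE - 7;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    -- A TABLE ACCESS FULL step on a large table usually points at a missing index
    -- on the filter or join column, e.g.:
    CREATE INDEX line_status_created_idx ON line_status (created_date);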

  • Report Performance Issue - Activity

    Hi gurus,
    I'm developing an Activity report using Transactional database (Online real time object).
    The purpose of the report is to list all contact-related activities, and activities NOT related to a contact, by activity owner (user id).
    In order to fulfil that requirement I've created 2 reports:
    1) All Activities related to Contact -- Report A
    pull in Acitivity ID , Activity Type, Status, Contact ID
    2) All Activities not related to Contact UNION All Activities related to Contact (Base report) -- Report B
    To get the list of activities not related to a contact, I'm using an advanced filter based on the results of another request, which I think is the part that slows down the query.
    <Activity ID not equal to any Activity ID in Report B>
    Has anyone encountered a performance issue due to an advanced filter in analytics before?
    Any input is really appreciated.
    Thanks in advance,
    Fina

    Fina,
    Union is always the last option. If you can get all the records in one report, do not use a union.
    Since all the records you are targeting are in the Activity subject area, it is not necessary to combine reports. Add a column with the following logic (a sketch follows below):
    if contact id is null (or = 'Unspecified') then owner name else contact name
    Hopefully this helps.
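    As a sketch of that column formula (the field names are placeholders; substitute the actual Activity subject area columns in your environment):
    CASE
       WHEN Contact."Contact ID" IS NULL OR Contact."Contact ID" = 'Unspecified'
       THEN Employee."Owner Name"
       ELSE Contact."Contact Name"
    END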

  • Report performance Issue in BI Answers

    Hi All,
    We have a performance issue with reports. A report runs for more than 10 minutes. We took the query from the session log and ran it in the database, where it took no more than 2 minutes. We have verified that proper indexes exist on the WHERE-clause columns.
    Could anyone suggest how to improve the performance in BI Answers?
    Thanks in advance,

    I hope you don't have many CASE statements and complex calculations in Answers.
    The next thing to monitor is how many rows of data you are trying to retrieve with the query. If the volume is huge, the formatting in Answers takes time because you are dumping a large volume of data. A database (like Teradata) initially returns something like 1-2000 records; if you hit "show all records", even the database will take a fair amount of time to return that many rows.
    Hope it helps.
    thanks
    Prash

  • RE: Case 59063: performance issues w/ C TLIB and Forte3M

    Hi James,
    Could you give me a call, I am at my desk.
    I had meetings all day and couldn't respond to your calls earlier.
    -----Original Message-----
    From: James Min [mailto:jminbrio.forte.com]
    Sent: Thursday, March 30, 2000 2:50 PM
    To: Sharma, Sandeep; Pyatetskiy, Alexander
    Cc: sophiaforte.com; kenlforte.com; Tenerelli, Mike
    Subject: Re: Case 59063: performance issues w/ C TLIB and Forte 3M
    Hello,
    I just want to reiterate that we are very committed to working on
    this issue, and that our goal is to find out the root of the problem. But
    first I'd like to narrow down the avenues by process of elimination.
    Open Cursor is something that is commonly used in today's RDBMS. I
    know that you must test your query in ISQL using some kind of execute
    immediate, but Sybase should be able to handle an open cursor. I was
    wondering if your Sybase expert commented on the fact that the server is
    not responding to a commonly used command like 'open cursor'. According to
    our developer, we are merely following the API from Sybase, and open cursor
    is not something that particularly slows down a query for several minutes
    (except maybe the very first time). The logs show that Forte is waiting for
    a status from the DB server. Actually, using prepared statements and open
    cursor ends up being more efficient in the long run.
    Some questions:
    1) Have you tried to do a prepared statement with open cursor in your ISQL
    session? If so, did it have the same slowness?
    2) How big is the table you are querying? How many rows are there? How many
    are returned?
    3) When there is a hang in Forte, is there disk-spinning or CPU usage in
    the database server side? On the Forte side? Absolutely no activity at all?
    We actually have a Sybase set-up here, and if you wish, we could test out
    your database and Forte PEX here. Since your queries seem to be running
    off of only one table, this might be the best option, as we could look at
    everything here, in house. To do this:
    a) BCP out the data into a flat file. (character format to make it portable)
    b) we need a script to create the table and indexes.
    c) the Forte PEX file of the app to test this out.
    d) the SQL statement that you issue in ISQL for comparison.
    If the situation warrants, we can give a concrete example of
    possible errors/bugs to a developer. Dial-in is still an option, but to be
    able to look at the TOOL code, database setup, etc. without the limitations
    of dial-up may be faster and more efficient. Please let me know if you can
    provide this, as well as the answers to the above questions, or if you have
    any questions.
    Regards,
    At 08:05 AM 3/30/00 -0500, Sharma, Sandeep wrote:
    James, Ken:
    FYI, see attached response from our Sybase expert, Dani Sasmita. She has
    already tried what you suggested and results are enclosed.
    ++
    Sandeep
    -----Original Message-----
    From: SASMITA, DANIAR
    Sent: Wednesday, March 29, 2000 6:43 PM
    To: Pyatetskiy, Alexander
    Cc: Sharma, Sandeep; Tenerelli, Mike
    Subject: Re: FW: Case 59063: Select using LIKE has performance
    issues
    w/ CTLIB and Forte 3M
    We did that trick already.
    When it is hanging, I can see what is doing.
    It is doing OPEN CURSOR. But not clear the exact statement of the cursor
    it is trying to open.
    When we run the query directly to Sybase, not using Forte, it is clearly
    not opening any cursor.
    And running it directly to Sybase many times, the response is always
    consistently fast.
    It is just when the query runs from Forte to Sybase, it opens a cursor.
    But again, in the Forte code, Alex is not using any cursor.
    In trying to capture the query, we even tried to audit any statement coming
    to Sybase. Same thing, just open cursor. No cursor declaration anywhere.
    ==============================================
    James Min
    Technical Support Engineer - Forte Tools
    Sun Microsystems, Inc.
    1800 Harrison St., 17th Fl.
    Oakland, CA 94612
    james.minsun.com
    510.869.2056
    ==============================================
    Support Hotline: 510-451-5400
    CUSTOMERS open a NEW CASE with Technical Support:
    http://www.forte.com/support/case_entry.html
    CUSTOMERS view your cases and enter follow-up transactions:
    http://www.forte.com/support/view_calls.html

  • Performance issues with pipelined table functions

    I am testing pipelined table functions to be able to re-use the base_query function. Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something? The processor function is from "Improving performance with pipelined table functions" (http://www.oracle-developer.net/display.php?id=429).
    Edit: The underlying query returns 500,000 rows in about 3 minutes, so there are no performance issues with the query itself.
    Many thanks in advance.
    CREATE OR REPLACE PACKAGE pipeline_example
    IS
       TYPE resultset_typ IS REF CURSOR;
       TYPE row_typ IS RECORD (colC VARCHAR2(200), colD VARCHAR2(200), colE VARCHAR2(200));
       TYPE table_typ IS TABLE OF row_typ;
       FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
          RETURN resultset_typ;
       c_default_limit   CONSTANT PLS_INTEGER := 100;  
       FUNCTION processor (
          p_source_data   IN resultset_typ,
          p_limit_size    IN PLS_INTEGER DEFAULT c_default_limit)
          RETURN table_typ
          PIPELINED
          PARALLEL_ENABLE(PARTITION p_source_data BY ANY);
       PROCEDURE with_pipeline (argA          IN     VARCHAR2,
                                argB          IN     VARCHAR2,
                                o_resultset      OUT resultset_typ);
       PROCEDURE no_pipeline (argA          IN     VARCHAR2,
                              argB          IN     VARCHAR2,
                              o_resultset      OUT resultset_typ);
    END pipeline_example;
    CREATE OR REPLACE PACKAGE BODY pipeline_example
    IS
       FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
          RETURN resultset_typ
       IS
          o_resultset   resultset_typ;
       BEGIN
          OPEN o_resultset FOR
             SELECT colC, colD, colE
               FROM some_table
              WHERE colA = ArgA AND colB = argB;
          RETURN o_resultset;
       END base_query;
       FUNCTION processor (
          p_source_data   IN resultset_typ,
          p_limit_size    IN PLS_INTEGER DEFAULT c_default_limit)
          RETURN table_typ
          PIPELINED
          PARALLEL_ENABLE(PARTITION p_source_data BY ANY)
       IS
          aa_source_data   table_typ;-- := table_typ ();
       BEGIN
          LOOP
             FETCH p_source_data
             BULK COLLECT INTO aa_source_data
             LIMIT p_limit_size;
             EXIT WHEN aa_source_data.COUNT = 0;
             /* Process the batch of (p_limit_size) records... */
             FOR i IN 1 .. aa_source_data.COUNT
             LOOP
                PIPE ROW (aa_source_data (i));
             END LOOP;
          END LOOP;
          CLOSE p_source_data;
          RETURN;
       END processor;
       PROCEDURE with_pipeline (argA          IN     VARCHAR2,
                                argB          IN     VARCHAR2,
                                o_resultset      OUT resultset_typ)
       IS
       BEGIN
          OPEN o_resultset FOR
               SELECT /*+ PARALLEL(t, 5) */ colC,
                       SUM (CASE WHEN colD > colE AND colE != '0' THEN colD / ColE END) de,
                       SUM (CASE WHEN colE > colD AND colD != '0' THEN colE / ColD END) ed,
                       SUM (CASE WHEN colD = colE AND colD != '0' THEN '1' END) de_one,
                       SUM (CASE WHEN colD = '0' OR colE = '0' THEN '0' END) de_zero
                  FROM TABLE (processor (base_query (argA, argB), 100)) t
              GROUP BY colC
              ORDER BY colC;
        END with_pipeline;
       PROCEDURE no_pipeline (argA          IN     VARCHAR2,
                              argB          IN     VARCHAR2,
                              o_resultset      OUT resultset_typ)
       IS
       BEGIN
          OPEN o_resultset FOR
               SELECT colC,
                       SUM (CASE WHEN colD > colE AND colE != '0' THEN colD / ColE END) de,
                       SUM (CASE WHEN colE > colD AND colD != '0' THEN colE / ColD END) ed,
                       SUM (CASE WHEN colD = colE AND colD != '0' THEN 1 END) de_one,
                       SUM (CASE WHEN colD = '0' OR colE = '0' THEN '0' END) de_zero
                 FROM (SELECT colC, colD, colE
                         FROM some_table
                        WHERE colA = ArgA AND colB = argB)
             GROUP BY colC
             ORDER BY colC;
       END no_pipeline;
    END pipeline_example;
     ALTER PACKAGE pipeline_example COMPILE;
     Edited by: Earthlink on Nov 14, 2010 9:47 AM
    Edited by: Earthlink on Nov 14, 2010 11:31 AM
    Edited by: Earthlink on Nov 14, 2010 11:32 AM
    Edited by: Earthlink on Nov 20, 2010 12:04 PM
    Edited by: Earthlink on Nov 20, 2010 12:54 PM

    Earthlink wrote:
    Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something? Well, we're missing a lot here.
    Like:
    - a database version
    - how did you test
    - what data do you have, how is it distributed, indexed
    and so on.
    If you want to find out what's going on then use a TRACE with wait events.
    All necessary steps are explained in these threads:
    HOW TO: Post a SQL statement tuning request - template posting
    http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
    Another nice one is RUNSTATS:
    http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551378329289980701

  • Performance issues with apex reports in version 3.1

    Hello All,
    I am using APEX 3.1 on Oracle 10g.
    I am facing performance issues with APEX. I am generating interactive reports, and the number of records is huge - running to 30 or 40 thousand records - and the report takes almost 30 minutes.
    How can I improve the performance of this kind of report? I am using APEX collections.
    How does APEX work in terms of retrieving the records?
    Please let me know.
    Thanks/kumar
    Edited by: kumar73 on Jun 18, 2010 10:21 AM

    Hello Tony ,
    The following is the sequence of steps to run the test case.
    Note: all the schemas, tables and variables are populated from the database.
    From Schema and Relations tab choose the following:
    1)     Select P3I2008Q4 as schema.
    2)     Choose Relation as query path.
    3)     Select ECLA, ECLB, MTAB as relations.
    From Variables choose the following:
    4)     Choose the variables AGE_SEXA,CLODESCA,ALCNO from ECLA relation.
    5)     Choose the variables AGE_SEXB, ALCNO, CLODESCB from ECLB relation.
    6)     Choose the variables EXPNAME, ALCNO, COST_, COST from MTAB relation.
    From Conditions: click the Run Report button; this generates the standard report (total no. of records in the report: 30150).
    Click on the Interactive Report button to generate an interactive report (an error occurred).
    We are using a return SQL statement to generate the standard report, and collections for the interactive report.
    thanks/kumar

  • Performance issues with Oracle EE 9.2.0.4 and RedHat 2.1

    Hello,
    I am having some serious performance issues with Oracle Enterprise Edition 9.2.0.4 and RedHat Linux 2.1. The processor goes berserk at 100% for long periods of time (some 5 min.), and all the RAM gets used.
    Some environment characteristics:
    Machine: Intel Pentium IV 2.0GHz with 1GB of RAM.
    OS: RedHat Linux 2.1 Enterprise.
    Oracle: Oracle Enterprise Edition 9.2.0.4
    Application: We have a small web-application with 10 users (for now) and very basic queries (all in stored procedures). Also we use the latest version of ODP.NET with default connection settings (some low pooling, etc).
    Does anyone know what could be going on?
    Is anybody else having this similar behavior?
    We changed from SQL Server, so we are not the world's experts on the matter, but we want a reliable system nonetheless.
    Please help us out; give us some tips, tricks, or guides.
    Thanks to all,
    Frank

    Thank you very much, and sorry I couldn't write sooner. It seems that the administrator doesn't see much kswap activity going on, so I don't really know what is happening.
    We are looking at some queries and some indexing, but this is nuts; if we had some poor queries, which we don't really, the server would show peaks, right?
    But it goes crazy and has two Oracle processes taking all the resources. There seems to be little swapping going on.
    So now what? They are already talking about MS SQL. Please help me out here, this is crazy!!!
    We have maybe the most powerful combination here. What is Oracle doing?
    We even kill the IIS worker process and have no one doing anything with the database, and still those two processes keep going.
    Can someone help me?
    Thanks,
    Frank
