Tuning problem

If I am using the conditional statement below in a query, with :p_customer_po as the input, then the query cost is 1461.
If I remove the NVL condition the query cost is 52, but when the user provides no input the query has to fall back to the value of sohii.customer_po_number. Please suggest a solution.
I have two indexes on customer_po_number, but if I force the optimizer to use one of them it costs even more than the plans below.
AND sohii.customer_po_number =
NVL (:p_customer_po, sohii.customer_po_number)   -- query cost 1461
AND sohii.customer_po_number =
:p_customer_po                                   -- query cost 52
Regards,
Kiran

The two statements you used are totally different, so you can't compare them.
The first one,
AND sohii.customer_po_number =
NVL (:p_customer_po, sohii.customer_po_number)   -- query cost 1461
asks the database: get me the rows matching :p_customer_po, and if it is null, get me all the data (every row with a non-null customer_po_number).
The second one,
AND sohii.customer_po_number =
:p_customer_po                                   -- query cost 52
asks only for the rows matching :p_customer_po.
That said, you must first define your problem. Don't judge it by the optimizer cost; judge it by the number of rows processed and the time taken.
To start with, why don't you post your SQL statement with a nice explain plan, which will give us a better understanding of the problem. Also don't forget to post the DB version.
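A common rewrite for this kind of optional filter (sketched here against the poster's table; only the table, column, and bind names come from the post, and the select list is a placeholder) is to split the two cases with UNION ALL, so the branch with a supplied bind value can use the index on customer_po_number while the "no input" branch gets its own plan:

```sql
SELECT sohii.*                                   -- select list is a placeholder
FROM   sohii
WHERE  :p_customer_po IS NOT NULL
AND    sohii.customer_po_number = :p_customer_po -- indexed access path
UNION ALL
SELECT sohii.*
FROM   sohii
WHERE  :p_customer_po IS NULL
AND    sohii.customer_po_number IS NOT NULL;     -- the NVL "all rows" case
```

Because NVL(:p_customer_po, customer_po_number) = customer_po_number matches every row with a non-null customer_po_number when the bind is null, the second branch reproduces that behaviour exactly, and the optimizer can choose a different plan for each branch.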

Similar Messages

  • [TV@Master] TV@nywhere Tuning problems

    Alright here's my situation:
    I have an athlon64 3400, 1gig, ATI 9550, and 350 watt PSU
    I installed Windows XP a few months ago, put in the card, got it working and everything was great.  Then I installed Solaris 10, completely screwed up the hard drive partitions, and started again from scratch.  This time, when installing the card, it didn't work quite so well.  The card refuses to tune to any channels, but just shows the fuzz.
    Well, then I installed Linux, and got the card playing video, but no sound.  So now I have a card that will play video in Linux, but nothing in Windows.
    Well, personally I don't care which OS I get it working in, but I'm assuming I will get the most help for windows.  What I noticed when installing the drivers, Windows created a "TV/Cable Connection" under My Network Neighborhood.  Also, under the driver manager, there is a thing called "Microsoft Tun Miniport Adapter #2."  Well, I've used an ATI All-In-Wonder, and had the problem with it interfering with my internet connection (internet and TV go through same cable), and was wondering if this could be the same situation.  My internet works fine, but I cannot use the tuner software.

    Quote from: Dr Stu on 22-September-05, 17:21:08
    compaq put a 250w PSU in an A64 system? well done for replacing that, but what have you replaced it with? specs please to confirm
I'm pretty sure it's a 350 watt one... While I couldn't tell you the specifications on each wire, I know the wattage.  I might just replace it because a bearing is going bad in one of the fans.  I suppose the PSU could be the culprit; I never really thought of it.  But I still can't understand how I had it working on one install of Windows and not another.  Also, it works in Linux, and it doesn't throttle the CPU like Windows...
    So I guess I have two things to do:  Move PCI slots, and replace the PSU (I really don't want to do that).   
    (BTW, HP put a 200 watt PSU in a P4 system...)

  • Tuning problems - VOX USB 2.0

    Hi please help
I got a VOX 2.0 USB tuner from a friend, but I can't get it to work on my computer. It worked on my friend's computer but not on mine. The TV card seems to be installed properly under "Device Manager" (USB 2820 Device), but when I start "InterVideo MSIPVS 3" and want to tune, the "autoscanning" button is inactive and I can't search for channels. I can see that InterVideo MSIPVS 3 has found my USB 2820 device as an input device.
I have tried to install it 3-5 times, but that didn't help.
    Can anyone help me?

I had this problem also...
but all I did to solve it was to end the task wincinemamgr.exe and restart it.
There is also a new version of the MSIPVS software available to test:
    https://forum-en.msi.com/index.php?topic=73312.0
    go there.
    -cope

  • Field fine tuning problems

I have a field or two that need a few housecleaning chores in order to increase functionality and eliminate some confusion for the user, but I have gone about as far as I can with it.
First, I have a Yes/No radio group that conditionally shows a date field. This part works for all intents and purposes, but...
If 'No' is selected the datefield is not visible; if 'Yes' is selected the datefield is visible.
If the user selects 'Yes', fills in the datefield, proceeds to the next page, and then hits the back button, 'Yes' is still selected but the datefield with the entered date is not visible. If the radio button is re-selected, the field becomes visible again with the date they entered. How do I make it so that whenever the 'Yes' option is selected the datefield is visible?
Here's my current code at the moment:
    <script type = "text/javascript">
    <!--
function hide(x) {
    document.getElementById(x).style.visibility = "hidden";
}
function show(x) {
    document.getElementById(x).style.visibility = "visible";
}
    //-->
    </script>
    <cfinput type="radio" id="sendbcastemail2" name="sendbcastemail" onClick="hide('showhidesendbcastemail')" value="No" checked="#sendbcastemail2_Checked#">No
    <cfinput type="radio" id="sendbcastemail1" name="sendbcastemail" onClick="show('showhidesendbcastemail')" value="Yes" checked="#sendbcastemail1_Checked#">Yes
    <cfdiv id="showhidesendbcastemail" style="visibility:#showhidesendbcastemaildate#">Select date:<br>
    <cfinput type="datefield" name="sendbcastemaildate" value="#getLastedit.sendbcastemaildate#" validate="date" message="Enter a Valid Date">
Next, how do I make the datefield conditionally required? If the user selects 'No' the field is not required, but if 'Yes' is selected the datefield is required.
    TIA for any assistance.

The browser behaviour when the user hits the back button is entirely browser-dependent and there is very little you can do about it. You might want to look at HTML cache-related pragmas and JavaScript browser-history handling...
"how do I make the datefield conditionally required"
Do you want to validate it on the client or the server?
If you show/hide the fields on the client side as and when required and submit the form, the hidden fields will NOT be submitted to the server (AFAIR), so on the server you can just check whether the fields are present in the form scope...

  • Report performance problem

    Hi
    My problem is with performance in reports. The reports I use is 'Reports From SQL Query' and I have the following select statement.
    select * from lagffastadr
    where kommun like :bind_kommun
    and adrniva3 like :bind_adrniva3
    and adrnr like :bind_adrnr
    and adrlitt like :bind_adrlit
    and :bind_show = 1
this works fine, but I have one big problem. The user may use the wildcard % to select everything, and this gives me a new problem. The table I use contains NULL values, so we don't get all the rows by using %; we only get the rows that have a value. This can be solved by adding
where (kommun like :bind_kommun OR kommun IS NULL)
which gives me all the rows. But then the query takes a long time to evaluate, since it no longer uses the index that I have created. Is there another way to solve this, or do I have to change all the NULL values in my database? (I'd rather not.)
    thanks
    /Jvrgen Swensen

    It looks more like a query tuning problem. Please try the SQL, PL/SQL forum.
    You can expect a better answer there.
    Thanx,
    Chetan.
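One common workaround for this pattern (a sketch only; the sentinel value '~' is invented and must never occur in real kommun data) is to map NULL to a sentinel with NVL and index that expression, so the wildcard search stays indexed and still returns the NULL rows:

```sql
-- Function-based index on the NVL expression (assumes '~' never occurs in kommun)
CREATE INDEX lagffastadr_kommun_fbi ON lagffastadr (NVL(kommun, '~'));

SELECT *
FROM   lagffastadr
WHERE  NVL(kommun, '~') LIKE :bind_kommun;   -- '%' now matches the NULL rows too
```

The same NVL-plus-sentinel treatment would apply to the other nullable filter columns.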

  • Tuning of Redo logs in data warehouses (dwh)

    Hi everybody,
    I'm looking for some guidance to configure redo logs in data warehouse environments.
Of course we are running in noarchivelog mode and use direct-path inserts (nologging) wherever possible.
Nevertheless, every ETL process (one process per day) produces 150 GB of redo logs. That seems quite a lot compared to the overall data volume (1 TB of tables + indexes).
Actually I'm not sure there is a tuning problem, but because of the large amount of redo I'm interested in examining it.
    Here are the facts:
    - Oracle 10g, 32 GB RAM
    - 6 GB SGA, 20 GB PGA
    - 5 log groups each with 1 Gb log file
    - 4 MB Log buffer
- about 150 log switches per day (with peaks: some log switches after 10 seconds)
    - some sysstat metrics after one etl load:
    Select name, to_char(value, '9G999G999G999G999G999G999') from v$sysstat Where name like 'redo %';
    "NAME" "TO_CHAR(VALUE,'9G999G999G999G999G999G999')"
    "redo synch writes" " 300.636"
    "redo synch time" " 61.421"
"redo blocks read for recovery" " 0"
    "redo entries" " 327.090.445"
    "redo size" " 159.588.263.420"
"redo buffer allocation retries" " 95.901"
    "redo wastage" " 212.996.316"
    "redo writer latching time" " 1.101"
    "redo writes" " 807.594"
    "redo blocks written" " 321.102.116"
    "redo write time" " 183.010"
    "redo log space requests" " 10.903"
    "redo log space wait time" " 28.501"
    "redo log switch interrupts" " 0"
    "redo ordering marks" " 2.253.328"
    "redo subscn max counts" " 4.685.754"
    So the questions:
Can anybody see tuning needs? Should the redo logs be made larger, or should more groups be added? What about placing redo logs on solid state disks?
    kind regards,
    Mirko

user5341252 wrote:
I'm looking for some guidance to configure redo logs in data warehouse environments.
Of course we are running in noarchivelog mode and use direct-path inserts (nologging) wherever possible.
Why "of course"? What's your recovery strategy if you wreck the database?
Nevertheless, every ETL process (one process per day) produces 150 GB of redo logs. That seems quite a lot compared to the overall data volume (1 TB of tables + indexes).
This may be an indication that you need to do something to reduce index maintenance during data loading.
Actually I'm not sure there is a tuning problem, but because of the large amount of redo I'm interested in examining it.
For a quick check you might be better off running Statspack (or AWR) snapshots across the start and end of the batch to get an idea of what work goes on and where most of the time goes. A better strategy would be to examine specific jobs in detail, though.
"redo synch time" " 61.421"
"redo log space wait time" " 28.501"
Rough guideline: if the redo is slowing you down, then you've lost less than 15 minutes across the board to the log writer. Given the number of processes loading and the elapsed time to load, is this significant?
"redo buffer allocation retries" " 95.901"
This figure tells us how OFTEN we couldn't get space in the log buffer - but not how much time we lost as a result. We also need to see your 'log buffer space' wait time.
Can anybody see tuning needs? Should the redo logs be made larger, or should more groups be added? What about placing redo logs on solid state disks?
Based on the information you've given so far, I don't think anyone should be giving you concrete recommendations on what to do; only suggestions on where to look or what to tell us.
    Regards
    Jonathan Lewis
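To see the 'log buffer space' time mentioned above (a sketch; run it after an ETL window, and remember v$system_event accumulates since instance startup, so take the difference between two samples):

```sql
SELECT event,
       total_waits,
       ROUND(time_waited / 100) AS time_waited_secs  -- time_waited is in centiseconds
FROM   v$system_event
WHERE  event IN ('log buffer space',
                 'log file sync',
                 'log file switch completion');
```

Comparing these totals before and after the batch shows how much wall-clock time the load actually lost to redo, which is the figure to weigh against the 150 GB of redo generated.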

  • Multi-Left Join Query Tuning

    I am tuning a SELECT query with 36 Left Joins in addition to normal Inner Joins and a View.
    I have used the RESULT_CACHE hint with some success.
    I have tried the LEADING hint and USE_MERGE with no success.
Is there an undocumented hint that may assist me?
    Thanks
    BRAD

    Hi, Brad,
    Welcome to the forum!
970109 wrote:
I am tuning a SELECT query with 36 Left Joins in addition to normal Inner Joins and a View.
Why does the query need so many outer joins? Could there be a bad table design behind this problem? Post a simplified version of the problem (with maybe 3 tables that need to be outer-joined). Post CREATE TABLE and INSERT statements for a little sample data (relevant columns only), the results you want from that sample data, and an explanation of how you get those results from that data.
    See the forum FAQ {message:id=9360002}
    For all tuning problems, see {message:id=9360003}
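As a sketch of what such a posting looks like (all table and column names here are invented), something this small is enough for others to reproduce the problem:

```sql
CREATE TABLE parent (id NUMBER PRIMARY KEY, name VARCHAR2(30));
CREATE TABLE child  (id NUMBER PRIMARY KEY, parent_id NUMBER, val VARCHAR2(30));

INSERT INTO parent VALUES (1, 'A');
INSERT INTO parent VALUES (2, 'B');
INSERT INTO child  VALUES (10, 1, 'x');

-- Desired result: every parent with its child value, NULL when there is no child
SELECT p.name, c.val
FROM   parent p
       LEFT JOIN child c ON c.parent_id = p.id;
```

With sample data like this plus the expected output, responders can see whether each of the 36 outer joins is really necessary.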

  • Performace Tuning

Hi Gurus,
I know the basic fundamentals of performance tuning, but can you please tell me the best step-by-step way to do performance tuning on a production server - a guide for SQL tuning, database tuning, and operating system tuning?
I gave an interview at Oracle the other day, but in the last round one of the managers rejected me because I was weak on performance tuning.
So I want real hard-core hands-on practice with it. Please help.
    Regards
    Sahil Soni

    If you have access to Metalink (now called My Oracle Support), search for Doc ID 233112.1 Diagnosing Query Tuning Problems, and Doc ID 390374.1 Oracle Performance Diagnostic Guide (OPDG).
    Charles Hooper
    Co-author of "Expert Oracle Practices: Oracle Database Administration from the Oak Table"
    http://hoopercharles.wordpress.com/
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

  • Tuning discover4i viewer

Our workbooks run much more slowly through the viewer than from a Discoverer client. Does anyone have any tips on where to look to resolve tuning problems with Discoverer4i Viewer?

I would like to know, for instance, how many blocks are being affected with a couple of fixes
SET MSG SUMMARY or SET MSG DETAIL and the application log will tell you that. It's sort of a pain to work with, but it's all there.
    Regards,
    Cameron Lackpour

  • Stored Procedure  is taking too long time to Execute.

    Hi all,
I have a stored procedure which executes in 2 hours in one database, but the same stored procedure takes more than 6 hours in the other database.
Both databases are on Oracle 11.2.
Can you please suggest what the reasons might be?
    Thanks.

    In most sites I've worked at it's almost impossible to trace sessions, because you don't have read permissions on the tracefile directory (or access to the server at all). My first check would therefore be to look in my session browser to see what the session is actually doing. What is the current SQL statement? What is the current wait event? What cursors has the session spent time on? If the procedure just slogs through one cursor or one INSERT statement etc then you have a straightforward SQL tuning problem. If it's more complex then it will help to know which part is taking the time.
    If you have a licence for the diagnostic pack you can query v$active_session_history, e.g. (developed for 10.2.0.3, could maybe do more in 11.2):
    SELECT CAST(ash.started AS DATE) started
         , ash.elapsed
         , s.sql_text
         , CASE WHEN ash.sql_id = :sql_id AND :status = 'ACTIVE' THEN 'Y' END AS executing
         , s.executions
         , CAST(NUMTODSINTERVAL(elapsed_time/NULLIF(executions,0)/1e6,'SECOND') AS INTERVAL DAY(0) TO SECOND(1)) AS avg_time
         , CAST(NUMTODSINTERVAL(elapsed_time/1e6,'SECOND') AS INTERVAL DAY(0) TO SECOND(1)) AS total_time
         , ROUND(s.parse_calls/NULLIF(s.executions,0),1) avg_parses
         , ROUND(s.fetches/NULLIF(s.executions,0),1) avg_fetches
         , ROUND(s.rows_processed/NULLIF(s.executions,0),1) avg_rows_processed
         , s.module, s.action
         , ash.sql_id
         , ash.sql_child_number
         , ash.sql_plan_hash_value
         , ash.started
    FROM   ( SELECT MIN(sample_time) AS started
                  , CAST(MAX(sample_time) - MIN(sample_time) AS INTERVAL DAY(0) TO SECOND(0)) AS elapsed
                  , sql_id
                  , sql_child_number
                  , sql_plan_hash_value
             FROM   v$active_session_history
             WHERE  session_id = :sid
             AND    session_serial# = :serial#
             GROUP BY sql_id, sql_child_number, sql_plan_hash_value ) ash
           LEFT JOIN
           ( SELECT sql_id, plan_hash_value
                  , sql_text, SUM(executions) OVER (PARTITION BY sql_id) AS executions, module, action, rows_processed, fetches, parse_calls, elapsed_time
                  , ROW_NUMBER() OVER (PARTITION BY sql_id ORDER BY last_load_time DESC) AS seq
             FROM   v$sql ) s
           ON s.sql_id = ash.sql_id AND s.plan_hash_value = ash.sql_plan_hash_value
    WHERE  s.seq = 1
ORDER BY 1 DESC;
:sid and :serial# come from v$session. In PL/SQL Developer I defined this as a tab named 'Session queries' in the session browser.
    I have another tab named 'Object wait totals this query' containing:
    SELECT LTRIM(ep.owner || '.' || ep.object_name || '.' || ep.procedure_name,'.') AS plsql_entry_procedure
         , LTRIM(cp.owner || '.' || cp.object_name || '.' || cp.procedure_name,'.') AS plsql_procedure
         , session_state
         , CASE WHEN blocking_session_status IN ('NOT IN WAIT','NO HOLDER','UNKNOWN') THEN NULL ELSE blocking_session_status END AS blocking_session_status
         , event
         , wait_class
         , ROUND(SUM(wait_time)/100,1) as wait_time_secs
         , ROUND(SUM(time_waited)/100,1) as time_waited_secs
         , LTRIM(o.owner || '.' || o.object_name,'.') AS wait_object
    FROM   v$active_session_history h
           LEFT JOIN dba_procedures ep
           ON   ep.object_id = h.plsql_entry_object_id AND ep.subprogram_id = h.plsql_entry_subprogram_id
           LEFT JOIN dba_procedures cp
           ON   cp.object_id = h.plsql_object_id AND cp.subprogram_id = h.plsql_subprogram_id
           LEFT JOIN dba_objects o ON o.object_id = h.current_obj#
    WHERE  h.session_id = :sid
    AND    h.session_serial# = :serial#
    AND    h.user_id = :user#
    AND    h.sql_id = :sql_id
    AND    h.sql_child_number = :sql_child_number
    GROUP BY
           ep.owner, ep.object_name, ep.procedure_name
         , cp.owner, cp.object_name, cp.procedure_name
         , session_state
         , CASE WHEN blocking_session_status IN ('NOT IN WAIT','NO HOLDER','UNKNOWN') THEN NULL ELSE blocking_session_status END
         , event
         , wait_class
         , o.owner
     , o.object_name
It's not perfect and the numbers aren't reliable, but it gives me an idea where the time might be going. While I'm at it, v$session_longops is worth a look, so I also have 'Longops' as:
    SELECT sid
         , CASE WHEN l.time_remaining> 0 OR l.sofar < l.totalwork THEN 'Yes' END AS "Active?"
         , l.opname AS operation
         , l.totalwork || ' ' || l.units AS totalwork
         , NVL(l.target,l.target_desc) AS target
         , ROUND(100 * l.sofar/GREATEST(l.totalwork,1),1) AS "Complete %"
         , NULLIF(RTRIM(RTRIM(LTRIM(LTRIM(numtodsinterval(l.elapsed_seconds,'SECOND'),'+0'),' '),'0'),'.'),'00:00:00') AS elapsed
         , l.start_time
         , CASE
               WHEN  l.time_remaining = 0 THEN l.last_update_time
               ELSE SYSDATE + l.time_remaining/86400
           END AS est_completion
         , l.sql_id
         , l.sql_address
         , l.sql_hash_value
    FROM v$session_longops l
    WHERE :sid IN (sid,qcsid)
    AND  l.start_time >= TO_DATE(:logon_time,'DD/MM/YYYY HH24:MI:SS')
ORDER BY l.start_time DESC
and 'Longops this query' as:
    SELECT sid
         , CASE WHEN l.time_remaining> 0 OR l.sofar < l.totalwork THEN 'Yes' END AS "Active?"
         , l.opname AS operation
         , l.totalwork || ' ' || l.units AS totalwork
         , NVL(l.target,l.target_desc) AS target
         , ROUND(100 * l.sofar/GREATEST(l.totalwork,1),1) AS "Complete %"
         , NULLIF(RTRIM(RTRIM(LTRIM(LTRIM(numtodsinterval(l.elapsed_seconds,'SECOND'),'+0'),' '),'0'),'.'),'00:00:00') AS elapsed
         , l.start_time
         , CASE
               WHEN  l.time_remaining = 0 THEN l.last_update_time
               ELSE SYSDATE + l.time_remaining/86400
           END AS est_completion
         , l.sql_id
         , l.sql_address
         , l.sql_hash_value
    FROM v$session_longops l
    WHERE :sid IN (sid,qcsid)
    AND  l.start_time >= TO_DATE(:logon_time,'DD/MM/YYYY HH24:MI:SS')
    AND  l.sql_id = :sql_id
ORDER BY l.start_time DESC
You can also get this sort of information out of OEM if you're lucky enough to have access to it - if not, ask for it!
    Apart from this type of monitoring, you might try using DBMS_PROFILER (point and click in most IDEs, but you can use it from the SQL*Plus prompt), and also instrument your code with calls to DBMS_APPLICATION_INFO.SET_CLIENT_INFO so you can easily tell from v$session which section of code is being executed.
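As a sketch of the instrumentation idea (the procedure name and labels are invented; the DBMS_APPLICATION_INFO calls are standard), each phase of a long-running procedure can announce itself so it is visible from v$session:

```sql
CREATE OR REPLACE PROCEDURE nightly_load AS   -- hypothetical procedure name
BEGIN
  DBMS_APPLICATION_INFO.SET_MODULE(module_name => 'nightly_load',
                                   action_name => 'load staging');
  -- ... first phase: INSERT into staging ...
  DBMS_APPLICATION_INFO.SET_ACTION('merge into target');
  -- ... second phase: MERGE ...
  DBMS_APPLICATION_INFO.SET_CLIENT_INFO('batch finished');
END;
/
-- Then, from another session, see which phase is running:
-- SELECT module, action, client_info FROM v$session WHERE sid = :sid;
```

That way, when the 6-hour run happens, a quick look at v$session immediately tells you which section of the procedure it is stuck in.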

  • Too many recursive statements in PRO*C in comparing to SQLPLUS for Intermedia Index.

    Hi,
I hope someone can help with this problem. I don't know whether it's an interMedia index problem or a database problem...
    The following Query;
    SELECT SCORE(1),D.DOCUMENT_ID,DOCU_DATE_NUM,DOC_TYPE_ID
    FROM
    DOCUMENT D WHERE CONTAINS(SEARCH_INDEX,:b1,1) > 0 ORDER BY SCORE(1) DESC,
    DOCU_DATE_NUM DESC
takes approx 7 sec in SQL*Plus, but in Pro*C it takes approx 55 sec. Both call the same PL/SQL stored procedure, which includes the SQL above and returns a REF CURSOR.
The trace file for the Pro*C program contains 139 statements of:
    SELECT/*+INDEX(T "DR$DOCXML_IX$X")*/ DISTINCT TOKEN_TEXT FROM "GETINFO"."DR$DOCXML_IX$I" T WHERE TOKEN_TEXT LIKE :lkexpr and TOKEN_TYPE NOT IN (1, 2, 5)
but the trace file generated from SQL*Plus has only 18 occurrences of the statement.
    The TKPROF Report for PRO*C is:
    SELECT SCORE(1),D.DOCUMENT_ID,DOCU_DATE_NUM,DOC_TYPE_ID
    FROM
    DOCUMENT D WHERE CONTAINS(SEARCH_INDEX,:b1,1) > 0 ORDER BY SCORE(1) DESC,
    DOCU_DATE_NUM DESC
call     count    cpu  elapsed  disk  query  current  rows
Parse        1   0.00     0.00     0      0        0     0
Execute      1  33.37    33.72     3     94        0     0
Fetch       44   0.04     0.04    29     69        4    43
total       46  33.41    33.76    32    163        4    43
    For SQLPLUS:
    SELECT SCORE(1),D.DOCUMENT_ID,DOCU_DATE_NUM,DOC_TYPE_ID
    FROM
    DOCUMENT D WHERE CONTAINS(SEARCH_INDEX,:b1,1) > 0 ORDER BY SCORE(1) DESC,
    DOCU_DATE_NUM DESC
call     count    cpu  elapsed  disk  query  current  rows
Parse        1   0.01     0.01     0      0        0     0
Execute      1   4.36     4.37     0      0        0     0
Fetch       44   0.02     0.02    10     44        0    43
total       46   4.39     4.40    10     44        0    43
Why is there so much difference? Even if both do a hard parse for the SQL above, or run the same stored procedure many times with "ALTER SESSION SET SESSION_CACHED_CURSORS=10", the difference in time stays the same.
Can someone help with this? I think it is an important tuning problem for the interMedia index, or maybe an Oracle bug...

    Hi,
    Thanks for answering.
Yes, I'm sure; both of them are using the same bind variable, and it is:
    (FUZZY($INTERNATIONAL) AND FUZZY($JOURNAL) AND FUZZY($ELECTRONICS) AND FUZZY($COMMUNICATIONS)) WITHIN SERIES_TITLE';
The same query also takes too long in Unix SQL*Plus (~54 secs), but in Windows SQL*Plus it's faster (~7 secs).
But if I change the bind variable to:
STR := '(($INTERNATIONAL OR ?INTERNATIONAL) AND ($JOURNAL OR ?JOURNAL) AND ($ELECTRONICS OR ?ELECTRONICS) AND ($COMMUNICATIONS OR ?COMMUNICATIONS)) WITHIN SERIES_TITLE';
then it is faster in both environments and nearly the same. But it's not the same search as the previous one.
I think the problem occurs when using the search as ?$<word> (fuzzy and stem together), but not every time. Do you think it's a bug?

  • Bad performance when calling a function in where clause

    Hi All,
    I have a performance problem when executing a query that contains a function call in my where clause.
    I have a query with some joins and a where clause with some regular filters. But one of these filters is a function, and its input parameters are columns of the tables used in the query.
When I run it with only a few rows in the tables, it goes OK. But as the number of rows grows, performance degrades sharply, even when my where clause filters the result down to only a few rows.
    If I take the function call off of the where clause, then run the query and then call the function for each returned row, performance is ok. Even when the number of returned rows is big.
    But I need the function call to be in the where clause, because I can't build a procedure to execute it.
    Does anyone have any clue on how to improve performance?
    Thanks,
    Rafael

    You have given very little information...
    >
If I take the function call off of the where clause, then run the query and then call the function for each returned row, performance is ok. Even when the number of returned rows is big.
Can you describe how you measured the performance for the big result set without the function? For example, let's say 10,000 rows had been returned (which is not really big, but it is a starting point). Did you see all 10,000 rows? A typical mistake is to execute the query in a tool like Oracle SQL Developer or TOAD and measure how fast the first couple of rows are returned, not the performance of the full select.
As you can see from this little detail, there are many questions that you need to address before we can drill down to the root of your problem. The best way is to go through the thread that Centinul linked and provide all that information first. In the course of that you might discover that you learn things along the way that help a lot with later tuning problems/approaches.
    Edited by: Sven W. on Aug 17, 2009 5:16 PM
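Two commonly suggested mitigations for a function call in the where clause, sketched here with invented names (my_func, table t): declare the function DETERMINISTIC if its result depends only on its inputs, or wrap the call in a scalar subquery so Oracle can cache results for repeated parameter values:

```sql
-- Instead of calling the function once per candidate row:
--   WHERE my_func(t.col1) = 'Y'
-- the scalar-subquery form lets Oracle cache results per distinct input:
SELECT t.*
FROM   t
WHERE  (SELECT my_func(t.col1) FROM dual) = 'Y';
```

Neither changes the query's meaning; both simply cut the number of times the function actually executes, which is usually where the geometric slowdown comes from.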

  • Please suggest DB monitoring tool

Our DB has some performance issues, and I need your suggestions on buying a DB monitoring tool to help us resolve these problems.
If you can share your experiences with a tool, that would be best.

    Hi,
We don't have enough knowledge on DB.
In that case, beware of tools that make recommendations that you cannot understand...
Some advanced monitoring tools can be cryptic for beginners.
Is OEM or Statspack suitable for a beginner? Beware: the OEM diagnostic pack and the OEM performance pack are really expensive, over $6,000 per seat:
http://www.dba-oracle.com/news_tuning_packs_price.htm
Plus, consider 3rd-party tuning tools.
Historically, Oracle tools have had a bad reputation, and Oracle's Oracle Expert tool was widely criticized for making ridiculous tuning recommendations...
Everybody has a monitoring tool to plug; I even have a few that I wrote myself.
There are a zillion of them, and they all offer test drives, but try to find:
- A monitoring tool that makes recommendations in plain English
- A tool that is reasonably priced
- A tool from a well-known vendor with a good track record
Lastly, if you are not familiar with Oracle tuning, it may be better and cheaper to buy an Oracle health check.
A guru can find tuning problems in just a few hours, faster and more efficiently than using a tuning tool.
    Hope this helps . . .
    Donald K. Burleson
    Oracle Press author
    Author of "Oracle Tuning: The Definitive Reference"
    http://www.rampant-books.com/t_oracle_tuning_book.htm
    "Time flies like an arrow; Fruit flies like a banana".

  • SAP in a Oracle server

Is it possible to deploy and use SAP as a layer on an Oracle database? Does the DBA need knowledge of SAP to do this?

Aman.... wrote:
I believe SAP is an application which has to use either Oracle or some other database for sure.
About knowing SAP, I believe it's going to be a tough task because it is not a small application.
If one is a DBA, he may try to know about it but it would be a big learning curve.
Yes, SAP is a VERY large/elaborate suite of software, using a 3-tier architecture that can be spread
across dozens of servers. It requires a back-end database (often Oracle) to store business data
and to store much of the software code itself. There is a significant learning curve.
SAP controls the majority of the Oracle setup/install/physical layout (non-OFA file placement).
As a DBA, don't expect to learn more than 25% of the application layer (just the basic architecture).
SAP app details take SAP experts years to learn fully, with as much internal expertise as the
Oracle database itself (if not more).
Aman.... wrote:
And to manage the database, I think SAP offers its own console.
SAP software supplies two different interfaces into an Oracle database (that I know of):
    (1) BR-TOOLS: This is a text-based tool with many sub-menus, that will run on the DB server,
    which provides a wrapper to perform several DBA operations. It allows adding files to tablespaces,
    table-reorgs, index rebuilds, running analyze/stats on all tables, backup+restore of database,
    and several other functions. BRTools is a collection of several executables. You don't HAVE to
    use brtools for everything, but if you are not an experienced DBA, it will take care of many things
without needing to know the SQL to care for an Oracle database. If you DO know your SQL well,
    then BRTools may take more time to find the right menu, than to do things yourself, or script them.
    After learning the sub-tools, you can use command line args + scripts to call exactly what you need.
    There is also a GUI layer to run with BRtools, but I haven't bothered to put that in place.
    BRtools also includes BR-Backup/BR-Archive/BR-Restore. These are only good for FULL Oracle
backups and archlog backups. Docs say that they will use RMAN and do incrementals, but
    we tried to get this capability to work, and it's not ready for real production.
    (2) SAP Transaction/T-Code DB02 (app screen) or "DBAcockpt"
    Within the overall SAP application, there are many screens with specific "T-Code" names for
    specific user roles. (ie:sales, manufacturing, shipping, payroll, etc) There are also some to support
    maint + administration, including some for DBA status of Oracle. Some good screens to look for
    in SAP are called DB02, DB03, DB12, DB24, DB50, SE16, SE11, DBACockpit. Most of these are
    read-only, to show status, history, reports, config, schedules, parameters, alert.log, etc.
    They won't perform the DBA work, but are a good place to find SAP perspective of the Oracle
    behavior & status, instead of writing all of your monitoring SQL from scratch.
Aman.... wrote:
I am not sure but I have been told that there is a language called SAP Basis as well for the
administrative tasks related to SAP.
NO... the term "Basis" is not a language, but refers to the SAP environment itself, and the config,
    install, upkeep, maint, architecture, layout of SAP. The main "language" used to code SAP
    screens is called "ABAP" and is specific/proprietary to SAP... just like PL/SQL is specific to Oracle.
    Almost all of the ABAP code is parsed+stored in the database, to support server-side processing
    within SAP, and to define the screen behavior for SAP end users. ABAP is very detailed, and very few
    DBAs ever need to know ABAP. There are also more recent portions of SAP that use Java coding
    to define screens + behavior... but much like Oracle, this is more of a recent addition, rather than
    the 'core' of the software architecture.
Extras: There is an SAP add-on called "CCMS" which performs monitoring and alerts from SAP servers.
    CCMS can monitor certain parts of the Oracle database as well, such as Tablespace%free, or filesystems.
    Also: SAP will generate an "EarlyWatch" report, which tells you performance+tuning problems to look for.
    Note: I have been doing Oracle DBA work in SAP/Unix environment for years,
    and still don't know 'everything' that a DBA may want about SAP...

  • Monitor performance

    Hi:
    I have a Sun Fire V440 box (2 CPU, 4G Ram, a 3510 disk storage).
When I log in from other machines (Sun Blade 2500), the response of the shell is very slow. However, a shell opened on the Sun Fire V440 box itself is reasonable. I suspected a network problem and changed the network cables, but the problem is still there.
    I used RICHPse to monitor the disk. The performance of disk is slow. Below is the output from virtual_adrian.lite:
    Adrian detected slow disk(s): Sat Mar 11 22:11:39 2006
    disk r/s w/s Kr/s Kw/s wait actv svc_t %w %b delay
    c1t1d0 264.2 38.0 10533.3 303.6 48.6 30.9 263.0 11 64 79502.2
    c1t1d0s7 264.0 38.0 10533.3 303.6 48.6 30.9 263.2 11 64 79502.2
    Adrian detected slow disk(s): Sat Mar 11 22:16:39 2006
    disk r/s w/s Kr/s Kw/s wait actv svc_t %w %b delay
    c1t1d0 318.1 17.8 14777.7 142.4 5.3 14.2 58.3 4 59 19579.3
    c1t1d0s7 317.9 17.8 14777.7 142.4 5.3 14.2 58.3 4 59 19579.1
    Adrian detected slow disk(s): Sat Mar 11 22:18:09 2006
    disk r/s w/s Kr/s Kw/s wait actv svc_t %w %b delay
    c1t1d0 62.9 17.3 3282.1 138.4 4.8 13.6 228.9 4 28 18352.8
    c1t1d0s7 62.5 17.3 3281.9 138.4 4.8 13.6 230.0 4 28 18352.0
    Adrian detected slow disk(s): Sat Mar 11 23:48:41 2006
    disk r/s w/s Kr/s Kw/s wait actv svc_t %w %b delay
    c1t1d0 6.1 42.3 46.9 11774.4 0.0 8.4 173.5 0 31 8403.3
    c1t1d0s7 5.9 42.3 46.8 11774.2 0.0 8.4 173.9 0 31 8380.6
There is a lot of reading and writing and the disk response is slow. However, running top, I saw that the CPU is idle.
The network card should be fine, as sometimes the link is okay.
    I wonder if there is any method to find out the cause and fix it.
    Thanks
    Yoong

    Seems obvious to the real me as well as the virtual me. Something is doing too much traffic to disk, so the system is unresponsive, idle CPU and network is what you should expect.
There isn't a standard way to tell which process is generating all the disk traffic, so unless it's obvious by looking at the application code, you could use a tool like Ortera Atlas (a.k.a. virtual_dave_fisk from www.ortera.com) to see how to tune the disk subsystem. I blogged about it a few times at http://perfcap.blogspot.com/2005/08/solving-storage-tuning-problems.html
    Hope this helps
    Adrian Cockcroft (now at eBay)
