TUNING METHODS

Can anyone please tell me, in easy language, about performance tuning methods? Please don't give the online docs as a reference.

904298 wrote:
Can any one please tell me in easy language about performance tuning methods. Please don't give online doc as a reference.Look at an action (such as an SQL statement), determine the duration of that action, identify what portion of that duration is spent waiting for resources and what portion is actively working toward completion of the action.
Performance tuning is simply minimizing the time waiting for resources and making the active work more efficient, while not adversely impacting anything else in the environment.
In other words: find out why it is slow, come up with some ideas how to speed it up, try the ideas, and duck when others yell that their part is now slow.
Methods? Simple: use any and all tools available to determine the time/duration component, and then use the tool between your ears to determine possible solutions. It is all based on experience.
Edited by: Hans Forbrich on Mar 8, 2012 12:41 PM
By the way, the experience starts with reading. Specifically reading the manuals. :-)

Similar Messages

  • What's the difference between these two queries? - for tuning purposes

    What's the difference between these two queries?
    I have a huge amount of data in each table, and the query is taking a very long time (5-6 hrs).
    Which one is faster here, and do we have any other option apart from the two listed here?
    QUERY 1:
      SELECT  --<< USING INDEX >>
          field1, field2, field3, sum( case when field4 in (1,2) then 1 when field4 in (3,4) then -1 else 0 end)
        FROM
          tab1 inner join tab2 on condition1 inner join tab3 on condition2 inner join tab4 on condition3
        WHERE
          condition4 .. condition10
        GROUP BY
          field1, field2, field3
        HAVING
          sum( case when field4 in (1,2) then 1 when field4 in (3,4) then -1 else 0 end) <> 0;
    QUERY 2:
       SELECT  --<< USING INDEX >>
          field1, field2, field3, sum( decode(field4, 1, 1, 2, 1, 3, -1, 4, -1 ,0))
        FROM
          tab1, tab2, tab3, tab4
        WHERE
         condition1 and
         condition2 and
         condition3 and
         condition4..10
        GROUP BY
          field1, field2, field3
        HAVING
          sum( decode(field4, 1, 1, 2, 1, 3, -1, 4, -1 ,0)) <> 0;

    My feeling here is that simply changing join syntax and CASE vs. DECODE issues is not going to give any significant improvement in performance, and as Tubby points out, there is not a lot to go on. I think you are going to have to investigate things along the lines of parallel query and index vs. full table scans, as well as any number of other performance tuning methods, before you will see any significant gains. I would start with the Performance Tuning Guide and then follow that up with the hard yards of query plans and stats.
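    As a concrete first step on the query-plan side, the optimizer's plan for either query can be inspected with DBMS_XPLAN (a sketch; the statement shown is just a stand-in for the real query text):

    ```sql
    -- Ask the optimizer to record its plan for the statement...
    EXPLAIN PLAN FOR
    SELECT field1, field2, field3
    FROM   tab1 JOIN tab2 ON condition1;   -- substitute the actual query here

    -- ...then display it, including access paths (index vs. full scan).
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    ```

    Comparing the plans of QUERY 1 and QUERY 2 will quickly show whether the optimizer treats them differently at all, or whether the real cost lies elsewhere.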
    Alternatively, you could just set the gofast parameter to TRUE and everything will be all right.
    Andre

  • SQL Tuning for Exadata

    Hi,
    I would like to know about any SQL tuning methods specific to Oracle Exadata that could improve the performance of the database.
    I am aware that Oracle Exadata runs Oracle 11g, but I would like to know whether there is any tuning scope with respect to SQLs on Exadata.
    regards
    sunil

    Well, there are some things that are very different about Exadata. All the standard Oracle SQL tuning you have learned should not be forgotten, as Exadata runs standard 11g database code, but there are many optimizations that have been added that you should be aware of. At a high level, if you are doing OLTP-type work you should make sure that you take advantage of the Exadata Smart Flash Cache, which will significantly speed up your small I/Os. But long-running queries are where the big benefits show up. The high-level tuning approach for them is as follows:
    1. Check to see if you are getting Smart Scans.
    2. If you aren't, fix whatever is preventing them from being used.
    We've been involved in somewhere between 25 and 30 DB Machine installations now, and in many cases a little bit of effort changes performance dramatically. If you are only getting a 2-3x improvement over your previous platform on these long-running queries, you are probably not getting the full benefit of the Exadata optimizations. So the first step is learning how to determine whether you are getting Smart Scans, and on what portions of the statement. Wait events, session statistics, V$SQL, and SQL Monitoring are all viable tools that can show you that information.
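    For example, one simple check for Smart Scan activity is to look at the offload-related session statistics after running the statement (a sketch; the statistic names below exist on Exadata-enabled 11g databases, but verify them against V$STATNAME on your system):

    ```sql
    -- Non-zero values here indicate that Smart Scan offload took place
    -- for work done in the current session.
    SELECT sn.name, ms.value
    FROM   v$mystat   ms
    JOIN   v$statname sn ON sn.statistic# = ms.statistic#
    WHERE  sn.name IN (
             'cell physical IO bytes eligible for predicate offload',
             'cell physical IO interconnect bytes returned by smart scan');
    ```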

  • Applied performance tuning conditions, but no result..!

    Hi,
    Requirement:
    1. Display data from tables like BSEG, BKPF, CSKS, etc.
    2. The amount of data that needs to be displayed is very huge.
    3. The requirement will change only if we agree that ABAP cannot improve the performance any further.
    Result:
    1. Bad performance, particularly at BSEG.
    2. Sometimes we run into a memory overflow issue, because the number of records pulled from the tables is very high.
    What I did:
    1. I used the FOR ALL ENTRIES method.
    2. Used the available indexes in the WHERE conditions.
    3. Used CLEAR and REFRESH statements wherever necessary.
    4. Applied all performance-related options, to the best of my knowledge.
    5. There are no syntax, extended check, or code inspector errors or warnings.
    I know this is because of the huge data volume, but as a developer, how can I still reduce the runtime?
    How can I solve this problem?
    Is there any way to modify my program architecture to get the result?
    Thanks,
    Naveen Inuganti.

    Hello Naveen,
    As I can see from your question, you have tried most of the basic fine-tuning methods to improve the performance of the program.
    There might still be some things, like using binary searches with proper sort criteria, fine-tuning loop processing, etc. But that could be suggested only by looking at the program as designed.
    My recommendation is to first recheck the design of the program, verify whether any loop processing or search criteria can be fine-tuned, and make changes accordingly.
    If the runtime still does not improve, try splitting the selection criteria logically inside the program (as the customer might not agree to change the requirement). Once the individual "split" selection data is processed, you can collect all the data for common processing.
    For memory issues, use FREE and REFRESH statements to delete all the data which is not required. Sometimes we keep internal tables with huge data to be used in later processing. My suggestion is to delete such data once your first steps of processing are completed and select the data again when it is required. We can also check whether "SELECT..ENDSELECT" can be used, which again depends on the case.
    If this is a report which displays data based on some criteria from time to time, depending on the various selection criteria being used, you have an alternative solution:
    1) Create a database table and, each time the program runs and reports the data, store the data in it as well. The design of this table will depend on the requirement and the complexity of the data being shown.
    2) The next time the program runs, only the delta data needs to be processed from the actual database tables (BKPF, BSEG, CSKS, etc.), and the rest can be fetched from the previously processed data.
    This might be helpful, particularly if the report works on timestamps.
    If it still does not work, due to the huge data volume, the only resort is to run the program in the background.
    Regards,
    Pavan Kolli

  • Good book for Oracle 9i Performance Tuning

    Hi, can anybody suggest a good book on Oracle 9i performance tuning (all the tuning methods: I/O tuning, memory tuning, etc.)?
    I have done my OCP 9i and I have worked as a junior DBA, and now I want to concentrate only on tuning.
    Thanks
    Venkataragavan.S

    If you are looking for generalized Oracle tuning, not exactly 9i-specific performance, I would suggest the below, apart from those given above:
    Optimizing Oracle Performance by Cary Millsap and Jeff Holt
    Practical Oracle 8i by Jonathan Lewis (don't go by the 8i in the name)
    SQL Tuning by Dan Tow
    Jaffar

  • Tuning multiple interrelated PID controllers

    Hi,
    I am attempting to control forces with pneumatic pistons and regulators. Feedback comes from a 6 degree of freedom load cell. I already have a program designed with PID controllers which works ok. The trouble is tuning the parameters.
    So far, I have used Ziegler-Nichols for each single parameter, by testing each piston/PID element individually. Then I have adjusted these manually to make them work together. This means a lot of trial and error, often with unsuccessful results.
    Is there any way to tune these parameters and account for the interdependence between them all at the same time?
    To clarify the situation:
    I am controlling 5 degrees of freedom (Fy is constrained). I am using 7 pistons, and 12 PID controllers - since several pistons affect multiple degrees of freedom.
    I can give more detailed information, but I am most interested in suggestions about tuning methods.
    Richard Godley

    Your project seems very interesting.
    However, I do not understand how you use your PID modules. What do they control? Is it just force, or also the position of your load? What is the feedback input itself?
    In general, when you have several degrees of freedom, try to locate the element (in this case, the piston) that most sensitively and effectively affects each separate degree. With that, the other pistons should just try to keep their own degree of freedom locked. This leads to automatic self-regulation, where each piston has its own tuned PID.
    Another way of doing it is to use a minimization routine, which would set your feedback parameters relative to your input. Do that for several desired inputs, and you should then get an array of tuning parameters for each input value. You can then create a tuning function for each PID which will work on the whole range of inputs.
    Also: try to use only one PID per piston; otherwise they work against each other.
    If you detail the system, we may have better ideas.
    ... And here's where I keep assorted lengths of wires...

  • Performance Tuning of webi report BO4.0

    Hi ,
    I have a report which has two data providers: in the 1st data provider I have 10 objects, and in the 2nd data provider I have 3 objects. The issue is performance: the report is taking 6 minutes to run.
    Please help me out with how we can increase the performance at the query level.
    Thanks
    Siva

    Hi
    There are various levels at which you have to check to improve the performance of reports: it could be at the connection level, the query, the database, etc.
    Some things you can try:
    1. Check the array fetch size in the relational connection parameters. Deactivating the array fetch size can increase the efficiency of retrieving your data, but slows server performance.
    2. Set the options for the default list of values in the business layer and data foundation layer.
    Automatic refresh - if this is selected, the list of values is automatically refreshed each time the list is called. This can affect performance each time the list of values is refreshed. You should disable this option if the list of values returns a large number of values.
    3. Set query stripping (Optimize query with Query Stripping in Web Intelligence - Business Intelligence (BusinessObjects) - SCN Wiki).
    4. Also check the general performance tuning methods in BO.
    Regards,
    Raghava

  • Performance Issue with Webi report uses SAP BI Query as the data source

    Hello.
    I have created a Webi ad-hoc report which connects to a SAP BI query through BO OLAP universe.
    The layout of the Webi report is exactly the same as the BI query. There are filters in the Webi report to restrict the amount of data extracted, but even with a data result of 5000 rows, it took about 30 seconds.
    If I execute the BI query with the same filter restriction, it takes less than 10 seconds.
    It seems that a large percentage of the time is consumed in the MDX part.
    Is there any tuning method that could speed up the processing time of the MDX?
    Thank you.
    Justine
    Edited by: Justine Liu on Mar 18, 2009 6:59 AM

    Hi,
    please take a look here:
    [https://service.sap.com/sap/support/notes/1142664] (Look under related notes)
    It includes references to various performance improvements of the MDX interface. From what I saw there, it is advisable to upgrade your SAP BI (7.0) to at least Support Package 21 (you are currently on SP 15).
    This may also be interesting for you: there is a new Fix Pack 1.4 coming out for BOBJ XI 3.1. Combined with the related SAP Enhancement Package (not sure about the version of that one), it should also improve WebI performance. This fix pack is not yet officially released, though, but it should not take long.
    I recommend that you try the upgrade to Support Package 21 first.
    BTW, it is also advisable to take a look at the results of your MDX query (e.g. using the MDXTEST transaction). You should make sure that your query is indeed restricted as expected. Sometimes the results you see in SAP native reporting tools (e.g. BEx Analyzer) differ from those returned by the MDX component, depending on the way variables/restrictions were defined in the Query Designer. It is all about making sure that there is no apples/oranges comparison here.
    Regards,
    Stratos

  • What's the difference between the two XMP packet tags

    Hi,
    I opened the file BlueSquare.indd (from the XMP SDK samples) and I found two XMP packets inside the file.
    One packet with tag
    and another one with tag
    When I tried to extract the XMP using the getXMP() method from XMPFiles, I got the packet with tag
    So can you tell me what the difference between the two packets is, why they are different,
    and what their use is?
    Thanks & Regards,
    Venkatesh.E


  • Batch Job Performance Issue in BW

    Hi All
    I would like to know if there are any performance tuning methods for batch jobs in BW. A few jobs are taking a much longer time, and I need to figure out a method to tune them.
    Thanks in advance for your help
    Regards
    JP

    Hi JP,
    Dinesh is right; you need to consider tuning for specific scenarios. If some of the jobs are taking a long time, just assess at what point they are taking the time by going to the job log in the source system/BW, and try to take up the optimization of the resources used in the source system. For example, it can even be in the extractor. I had come across such an issue with long loading times for a few ledgers in FI-SL, but I could counter it by splitting the load to BW into small numbers of records, with an ABAP program incorporated in the InfoPackage with selections on the reconciliation key.
    In this way, you need to first identify the cause of the delay in the jobs. Then you can plan things accordingly.
    Hope this will help you..
    Regards,
    Madhu

  • Can we do mutiple transpose layers in Global Tracks?

    I have a project that contains loops which have been made to follow the chord progression of the main track by using the global Transpose track (the Global Chords function has been taken from us in X). But now I find that I have to raise the key of the entire piece a full step. Is there a way to add another global transpose layer to raise everything, including the loops, given that they are already using the transpose function?
    Thanks,
    Lee

    Indeed...
    Your work method is the way I used to do it initially, but then I found the same sort of problem you have come across...
    Unfortunately, the other global 'tuning' methods only work for SIs and not audio tracks... so they are not much use.
    Thank you.  I appreciate your help.
    You are welcome! Sorry i couldn't help more....
    Cheers..
    Nigel

  • Query regarding parallel

    Hi,
    Let's assume there is an 8-CPU machine running Oracle 10.2.0.4.
    Question 1: How many parallel_max_servers can we define at the Oracle level?
    Question 2: If I define parallel_max_servers=8 and then specify parallel=10 in a query, how will Oracle handle it?
    thanks in advance.

    To take a stab at answering your questions:
    1 - Based on the information given, there is no way to answer this accurately, because you need to consider more than just the number of CPUs available on the server to determine how many PQO servers you want: how many disks the database is spread across, how the PQO option is going to be invoked (Oracle managed, via SQL hints, or defined at the table level), how many concurrent sessions are going to be on the system, and how many of those sessions will run in parallel.
    Remember that PQO is a brute-force tuning method intended by Oracle to be used in warehouse environments. A query that runs with PQO gets more resources than a non-PQO run, and if you flood your system with PQO queries you can bring it to a halt. You can use it in an OLTP system to good effect if you are careful and do not overuse the feature. In a warehouse you would expect the total number of concurrent sessions to be low relative to an OLTP environment, but many environments are mixed as to contents and usage, so you have to be careful.
    2 - Obviously, Oracle cannot assign more resources to the query task than it has. In fact, if another session is already using the PQO resources, Oracle will run the query normally without PQO.
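    To make question 2 concrete, the degree a query actually received can be checked right after running it (a sketch; big_table is a made-up name):

    ```sql
    -- Request a degree of 10 via a hint, even though parallel_max_servers = 8.
    SELECT /*+ PARALLEL(t, 10) */ COUNT(*)
    FROM   big_table t;

    -- V$PQ_SESSTAT then shows what the last query actually got; with only
    -- 8 servers available, the request will be downgraded (or run serially).
    SELECT statistic, last_query
    FROM   v$pq_sesstat
    WHERE  statistic IN ('Queries Parallelized', 'Server Threads');
    ```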
    HTH -- Mark D Powell --

  • SRA gateway - performance problems

    Hi!
    I'm experiencing slow response with SRA gateway when I stress the home page in a site www.example.com. I redirected the SRA default home page to a custom application home page.
    In my deployment, I have one machine with SRA 2005Q1 and my application is running in another machine with Application Server 7 EE UR4.
    I stressed the application with two tools: Jmeter and Pureload, getting the same results. I put only one worker, 1 thread and 20 iterations, so it's a very low load for the response that I'm getting.
    When I test the application directly, without SRA, I get a very good average response of about 300ms, but when I test the application with SRA gateway protecting the application, I get an average response from 12 to 16 seconds!
    I followed all of Sun's tuning recommendations and ran the perftune script, and still got the same results.
    When I execute the stress tests, the processor has a load of 4% and the memory usage is 14%.
    After analyzing the results I noticed that the main page loads quickly but the average load time of the images is very high. I searched in sunsolve for similar problems with SRA slow loading of images and static content, but I didn't find anything.
    Does anyone have the same problem? Any ideas how to solve it? Where can I find more documentation?
    thx in advance. Hjuarez :)

    Hi There.
    First off, I'd do a few things. What is your thread pool set to? The default is 200; I'd bump this to 500.
    Honestly, I'd test it with NO TUNING (there is method to my madness), i.e. your 'gateway' script CMD variables: just modify the min and max memory settings (the defaults are 64 and 128; I'd set them to 128 and 1024-2048) and remove all other tuning suggestions (unless you have GC monitoring variables).
    My portal system gets 220,000 logins a month, and my gateway is a 280R 2x900 MHz SPARC III system, and it's extremely fast (well, as fast as the gateway goes anyhow :-) )
    - I'd be interested to see your results; I have 2004Q2 on my system. There are various other factors in play here, but I tried a 'tuned' gateway and it was 25-30% slower than 'untuned', and I've been running my gateway with this minimal tuning since Oct '05. I've had no stability problems.
    In your amconsole gateway setup area, do you have persistent connections on or off? On = faster throughput, but on my system it caused JSS/NSS issues and gateway coredumps, so it's off right now.
    Dave
    PS how is jmeter and pureload working for you?

  • Oracle DB among Multiple Sites

    Hello DBAs,
    I have six sites running Oracle databases, with access between any of them at any time.
    Five sites are running Oracle 8i and one site is running 11g.
    Each site has a number of computers with Oracle Client 8i installed.
    All the sites are connected by 64 kbps dedicated leased lines.
    The issue is:
    the application runs much slower when accessing a remote DB.
    Are there any tuning methods that can be applied to make it faster, either on the server or the client?
    Any hints are welcome, please...
    pseswar

    user10778262 wrote:
    Are there any tuning methods that can be applied to make it faster, either on the server or the client?
    Too many to list here. The biggest thing is to minimize the amount of data that has to be pushed over the wire. This means doing as much of the work as possible on the server itself. Make sure you are fully utilizing the features available within Oracle to filter and massage your data before sending it back to the client. Many programmers just treat the DB as a data dump, pulling too much data into the client for processing that could have been done by the database. An example of such coding would be to create a cursor based on an unqualified SELECT and fetch every row (across the wire, mind you) into the client for filtering, deciding on a row-by-row (slow-by-slow) basis which UPDATEs to push back across the wire -- all when a single, well-crafted UPDATE could have been pushed across the wire one time.
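    The contrast can be sketched as follows (table and column names are invented for illustration):

    ```sql
    -- Slow-by-slow: fetch every row across the 64 kbps link, decide in the
    -- client, then push one UPDATE per row back across the wire.
    -- (client-side pseudo-logic, shown only as comments)
    --   for each row in (SELECT order_id, status, ship_date FROM orders) loop
    --     if status = 'OPEN' and ship_date is older than 30 days then
    --       UPDATE orders SET status = 'CLOSED' WHERE order_id = :id;
    --   end loop

    -- Set-based: one statement crosses the wire; all filtering and
    -- updating happens on the server.
    UPDATE orders
    SET    status = 'CLOSED'
    WHERE  status = 'OPEN'
    AND    ship_date < SYSDATE - 30;
    ```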

  • Calc scripts are running slow

    Hi All,
    A few of our calc scripts are running slow for EPM applications.
    Some of the calc scripts are running fine, while a few others are running slow.
    Can you suggest what things need to be checked?
    Thanks

    Hi,
    The version is not mentioned.
    Hope the below tuning methods are helpful:
    1. Check that the compression settings are still present. In EAS, expand the application and database. Right-click on the database > Edit > Properties > Storage tab. Check that "Data compression" is not set to "No compression" and that "Pending I/O access mode" is set to "Buffered I/O". Sometimes the compression setting can revert to "no compression", causing rapid growth of the data files on disk.
    2. On the Statistics tab, check the "Average clustering ratio". This should be close to 1. If it is not, restructure your database by right-clicking on it and choosing "Restructure...". This will reduce any fragmentation caused by repeated data imports and exports. Fragmentation naturally reduces performance over time, but this can happen quite quickly when there are many data loads taking place.
    3. Check the caches and block sizes.
         a. Recommended block size: 8 to 100 KB
         b. Recommended index cache:
              Minimum = 1 MB
              Default = 10 MB
              Recommendation = combined size of all ESS*.IND files if possible; otherwise as large as possible given the available RAM.
         c. Recommended data file cache:
              Minimum = 8 MB
              Default = 32 MB
              Recommendation = combined size of all ESS*.PAG files if possible; otherwise as large as possible given the available RAM, up to a maximum of 2 GB.
              NOTE: this cache is not used if the database is buffered rather than direct I/O (check the "Storage" tab). Since all Planning databases are buffered, and most customers use buffered I/O for native Essbase applications too, this cache setting is usually not relevant.
         d. Recommended data cache:
              Minimum = 3 MB
              Default = 3 MB
              Recommendation = 0.125 * combined size of all ESS*.PAG files if possible; otherwise as large as possible given the available RAM.
    A good indication of the health of the caches is the "Hit ratio" for each cache on the Statistics tab in EAS. 1.0 is the best possible; lower means lower performance.
    4. Check system resources:
    Recommended virtual memory setting (NT systems): 2 to 3 times the RAM available. 1.5 times the RAM on older systems.
    Recommended disk space:
    A minimum of double the combined total of all .IND and .PAG files. You need double because you have to have room for a restructure, which will require twice the usual storage space whilst it is ongoing.
    Please see the below document for reference:
         Improving the Performance of Business Rules and Calculation Scripts (Doc ID 855821.1)
    -Regards,
    Priya
