RAID 1 for performance gain??

In contrast to what I would think, it seems that the OS X implementation of RAID 1 doesn't give a performance gain, according to this article:
http://docs.info.apple.com/article.html?artnum=106594
Since the article is very old, does anyone know whether this info is still up to date? Posts I've found in the archives indicate not.
thanks
arri

Hi arri;
If you are looking to improve performance for an application that is limited by how quickly it can get data to or from a disk, then RAID 0, or striping, is the solution. For applications that are limited by disk I/O, RAID 0 can help because it allows the system to do reads/writes to multiple disks, overlapping them for increased performance. This is only true if the performance of the application is limited by its access to data.
Allan
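Allan's striping point can be made concrete with the block-to-disk mapping RAID 0 performs. A minimal sketch in Python (the disk count and stripe size here are illustrative assumptions, not anything from the Apple article):

```python
def raid0_map(logical_block, num_disks, blocks_per_stripe=1):
    """Map a logical block number to (disk index, block offset on that disk)
    in a RAID 0 (striped) array. Consecutive stripes land on different
    disks, which is why sequential I/O can be serviced in parallel."""
    stripe = logical_block // blocks_per_stripe
    within = logical_block % blocks_per_stripe
    disk = stripe % num_disks
    offset = (stripe // num_disks) * blocks_per_stripe + within
    return disk, offset

# Sequential blocks 0..5 on a 2-disk stripe alternate between the disks,
# so two adjacent blocks can be transferred at the same time:
layout = [raid0_map(b, num_disks=2) for b in range(6)]
```

With RAID 1 (mirroring) there is no such mapping: every block goes to the same offset on every member, which is why it adds redundancy rather than throughput.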

Similar Messages

  • Performance Gain for IRIX servers if Personal flag is removed from magnus.conf file.

    As shipped by SGI, some of the server products have a flag set in the
    magnus.conf file setting a small footprint for the servers, generally
    less than 1 megabyte of memory. This flag is the Personal flag, and
    looks something like:

    MaxProcs 1
    MinThreads 1
    MaxThreads 8
    Personal on

    For one's own personal use, this is fine. But if CGIs are called, or
    if the site sees more traffic, then the flag may need to be removed,
    to look like:

    MaxProcs 1
    MinThreads 1
    MaxThreads 8

    A significant performance gain (and a corresponding increase in
    memory used) would be seen by increasing MaxThreads as well.

    For more complete tuning recommendations on SGI/IRIX, please see SGI's
    "Tuning IRIX 6.2 for a Web Server" page

    That's a comment in the file. It has no effect at all.

  • How do I use WD Raptor for OS/Apps drive and RAID 0 for Home/Data drive

    So far I have configured my machine with a WD Raptor 74GB as the startup disk, and created a RAID 0 with the Seagate 250GB drive (shipped with the machine) and a 250GB partition of a WDC 320GB drive. All drives were zeroed and I used the install disks to load Tiger on the Raptor. I backed up to a LaCie D2 1TB external hard drive with LaCie's SilverKeeper.
    I would like to use the WD Raptor as the startup/apps/scratch disk and the RAID 0 for the user folders and other data. I would then like to create a RAID 1 with the RAID 0 and a 500GB partition of the LaCie 1TB external.
    I have learned, however, that what I would like and what I can have are not always compatible.
    My questions:
    Is the SilverKeeper backup sufficient, or should I have cloned my previous system?
    What folders should accompany the Applications folder on the Raptor? So far I have only put the app folder there and nothing works. I suspect things like 'Application Support' should be there as well.
    Is there a way to keep my home folder on the RAID 0 without using Terminal or spending too much time as the root user? I tried copying my home folder onto the RAID 0 and putting an alias in the user account I had to create while installing the OS on the Raptor. This kind of works but is not a particularly elegant solution.
    I would appreciate any advice on how to proceed or if the proposed configuration is even achievable.

    Photoshop files of 1GB+ can eat up memory and scratch, and the more drives for scratch, the better.
    Some of the limitations are with the bus, bandwidth, and the inefficiency of 'swapping' code in and out of different cores.
    Compilers can do some, but the path from new hardware (8 cores) to compiler improvements, and out to applications and the OS, can take a year or more, though it sometimes provides gains in the neighborhood of 40%. In that time, newer designs will change the equation. Caching and VM will improve with Leopard, but beyond that...?
    So... back to the "real world": IF you are in the habit of working with 1-2GB images, then a pair of Raptors for boot is helpful, AND 8GB or more of RAM, AND 4-8 drives for scratch. (Think of those Sonnet port-multiplier controllers and a Fusion 500-style case for 5 drives on one port.)
    If you work with files smaller than the 500MB range, your needs are cut way down.
    A 150GB Raptor boot drive is small: subtract 25% for minimum free space, and 140GB formatted is kind of tight for some, but it handles 60% well.
    The outer 30% of a 465GB (RE2) drive also gives fast, good performance. I partition large boot drives to keep the OS and apps contained in the outer 1/3.
    Mac Pro Memory Usage and Performance
    If your work flow means doing more than one thing at a time on your Mac Pro, then you will see significant gains if you spend extra to get the 8-core version. Our Photoshop CS3 actions were completed 39% faster on the 8-core when we had 3 other apps busy crunching. This advantage emerges in spite of the memory bus limitations of the 8-core Mac Pro.
    http://www.barefeats.com/octopro3.html
    CS3: Justifying 8-Cores
    http://www.barefeats.com/octopro4.html
    Pshop Test G5 Quad 16GB Raptor RAID
    Photoshop and multi-core
    http://blogs.adobe.com/scottbyer/
    Mac Pro 2GHz 4GB 10K Raptor 23" Cinema   Mac OS X (10.4.9)   WD RE RAID Aaxeon FW800 PCIe MDD-G4 APC RS1500 Vista

  • What EBS performance gains can I expect moving non-x86 (sun?) to x86?

    Hi,
    I was hoping some of you would please share any general performance gains you encountered by moving your EBS from non-x86 to x86. I'm familiar with the benchmarks from tpc.org and spec.org; the users, however, measure performance by how long it takes for a request to complete. For example, when we moved our EBS from a two-node Sun E3500 (4 x 450MHz SPARC II, 8GB memory) to a two-node V440 (4 x 1.28GHz SPARC IIIi, 8GB memory), performance doubled across the board with a three-year payback.
    I am trying to 'guesstimate' what performance increase we might encounter, if any, moving from sun sparc to x86. We'll be doing our first dev/test migration the first half of '08, but I thought I'd get a reading from all of you about what to expect.
    Right now we're planning on going with a single-node, 6 cpu dual core 3Ghz x86, 16GB ram. The storage is external RAID 10. We process approximately 1000 payroll checks bi-weekly. Our 'Payroll Process' takes 30min to complete. Similarly, 'Deposit Advice' takes about 30min to complete. Our EBS database is a tiny 200GB, we have a mere 80 concurrent users, and we run HRMS, PAY, PA, GL, FA, AP, AR, PO, OTL, Discoverer.
    Thanks for your feedback. These forums are great.
    L5

    Markus and David,
    First let me thank you for your posts. :-).
    Markus:
    Thank you for the tip. However, I usually do installations with a domain adm user. It does a lot of user switching, yes, but it only switches to users created by SAPINST; that is, most of the time it is switching to <sid>adm, which sounds perfect. At the time of my post I had been setting some environment variables so as to get the procedure to distribute the various pieces and bits (saparch, sapbackup, saptrace, origlogs and mirror logs, datafiles, etc.) exactly where I wanted them and not where the procedure wants them, so I ended up using <sid>adm to perform the DB instance installation and not the domain adm user I had installed the CI with (I forgot to change back). When I noticed, I figured it wouldn't make a difference since it usually switches to <sid>adm anyway. However, for the next attempts I settled on my initially created domain adm user, with no change to the results. OracleService<SID> usually logs on as a system account, so the issue doesn't arise, I think.
    and
    David:
    The brackets did it. Thank you so much. It went further and only crashed later. I don't usually potter around SDN, so I'm not familiar with the workings of this: I don't know how to reply separately to the posts, and I don't know how to include a properly formatted post (I've seen the Plain Text help but I hate to bother with sidetrack details), so I apologize to all for the probably-too-compact jumble that will come out when I post this. I am now looking at the following problem (same migration to 64), so I fear I may have to close this post and come back with a new one if I can't solve this next issue.

  • What is the best RAID configuration for a MacPro as a Logic User?

    There ought to be a universal answer to this question: what is the best RAID configuration for Logic Mac Pro users? I will be more specific.
    I use Logic Studio, Reason, Ableton, and Motu Symphonic Instrument simultaneously.
    I want to fail safe my precious audio files and improve performance as the system reads/writes data from multiple files, from audio tracks to digital samples.
    I want to run video files simultaneously to do nifty audio soundtracking to video.
    Here is the configuration I have in mind.
    HD 1: OS and Logic Studio, Reason/Ableton samples etc. software (non-raid) (250 GB)
    HD 2/3: Mirrored RAID set for AUDIO FILES (500GB identical pair)
    HD 4: Video files / Bouncing (1 TB)
    Makes sense, right? A disk for reading software. A pair of 500GB disks for reading/writing audio files and sessions in a mirrored array. A 1TB disk for video and bouncing. The main question I have is: for audio files only, is striped or mirrored better? Are 64K blocks best? And are there any more details? I assume I do this in Disk Utility.

    Well, both Mirrored and Striped have their pros and cons. If you use mirrored, it will offer no better performance than the spec'ed drive along with its SATA bus speed. The plus point is, if one drive goes down, you have the second as a backup, as the complete contents of one drive are mirrored on the other.
    With striped you will get a performance boost because all files (for example, a single project) will get written across both drives, which splits the load on the drives and the SATA buses. The drawback is that you'll have to make sure you have a good, regular backup schedule in place, because when one of the drives goes to drive heaven, you're going to be stuffed without a full backup of both drives.
    Exactly what performance boost you'd get will depend on your project, number of files, size of files, fragmentation of files, track count etc. You may find it would be easier to use the 3 drives straight, with no raid and have:
    HD 1: OS and Apps. No samples at all.
    HD2: Audio Files for Logic projects
    HD3: Reason, Ableton, Logic etc instrument sample library
    HD4: Video and bouncing.
    Which is what I ended up doing although I use HD4 as an interchangeable backup for HD1 and 2.
    There is no universal answer to this, as each must make their own choice based on their preferences and needs. Mirrored will give you a full backup, but an on-site, in-machine backup: not much good if something untoward and drastic happens to the physical machine. I think a few people toy with striped RAID but fall on the side of using the drives straight, as in their projects they don't see a big enough gain over splitting the data across the 3 remaining drives without RAID. Studios that seriously consider RAID often go out and get a dedicated RAID unit that can offer more variations than RAID 0 or RAID 1 (Striped and Mirrored) and better throughput.
    I hope this helps a little and not just added to the dilemma.
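The trade-off described above can be reduced to a toy capacity model. This is a pure illustration (not tied to Disk Utility or any Apple API) of what the two levels discussed here give you:

```python
def raid_capacity(level, disk_sizes_gb):
    """Usable capacity for the two RAID levels discussed above.
    RAID 0 (striped): smallest member times the member count
    (striping truncates every member to the smallest disk).
    RAID 1 (mirrored): just the smallest member."""
    smallest = min(disk_sizes_gb)
    if level == 0:
        return smallest * len(disk_sizes_gb)
    if level == 1:
        return smallest
    raise ValueError("only RAID 0 and RAID 1 are modeled here")

# The identical pair of 500 GB drives from the question:
striped = raid_capacity(0, [500, 500])   # 1000 GB, no redundancy
mirrored = raid_capacity(1, [500, 500])  # 500 GB, survives one drive failure
```

The performance side follows the same shape: a stripe adds roughly one disk's worth of throughput per member, while a mirror reads/writes at single-disk speed.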

  • Not able to get performance gain using Multidimensional Clustering

    Hi All,
    We are trying to test the effect of Multidimensional Clustering (MDC) on the performance of queries, InfoCube/DSO loading, and deletion of requests from BW objects.
    Unfortunately, we are not able to see any performance gain when we use MDC.
    We are using the following steps to test MDC:
    1> We created a copy InfoCube ZPOS10_CP from the original InfoCube ZPOS10_V5.
    2> In the new InfoCube ZPOS10_CP, we switched on the MDC settings.
    3> We are using the following dimensions as MDC dimensions:
           1> SID_0CALMONTH (Calendar Year/Month)
           2> Dimension ZPOS10_CP5 (this dimension is based on Site, Dist. Channel, etc.)
    Dimensions were selected based on the consideration that we should use those dimensions which are often used in query restriction.
    When we load the same file into both the original InfoCube with no MDC and the new InfoCube with MDC, there does not seem to be any major improvement; both loads take almost the same time.
    Can you please tell us how we can effectively use MDC? Is there any setting that we need to make regarding the extent size of the tablespace, etc.?
    Kindly help us to resolve this issue.
    Regards,
    Nilima Rodrigues

    Hi,
    In MDC we are not creating any dimensions.
    Basically, we just select some dimensions of the InfoCube as MDC dimensions.
    Based on the suggestions provided by SAP on MDC, we have selected 0CALMONTH as one of the MDC dimensions.
    Other dimensions are chosen on the following recommendation provided by SAP:
    ●      Select dimensions for which you often use restrictions in queries.
    ●      Select dimensions with a low cardinality.
    Regards,
    Nilima Rodrigues

  • Materialized views performance gain estimation

    Hi;
    I have to estimate the performance gain of a materialized view for a particular query without creating it.
    That means I have:
    - A query with an initial execution plan
    - A SELECT statement which is considered a probable MV
    I need to show the performance gain of creating the MV for the previous query.
    I have seen DBMS_MVIEW.EXPLAIN_REWRITE, but it shows the performance gain for an already-created MV.
    Any ideas?
    Thanks

    Hi Bidi,
    >for the first (aggregation ones), I don't know how to estimate the performance gain.
    It's the difference in elapsed time between the original aggregation and the time required to fetch the pre-computed summary, usually a single logical I/O (if you index the MV).
    >I want to determine the impact of creating materialized views on the performance of a workload of queries.
    Perfect! Real-world workload tests are always better than contrived test cases!
    If you have the SQLAccess advisor, you can define a SQL Tuning Set, and run a representative benchmark with dbms_sqltune:
    http://www.dba-oracle.com/t_dbms_sqltune.htm
    Hope this helps. . .
    Donald K. Burleson
    Oracle Press author
    Author of "Oracle Tuning: The Definitive Reference":
    http://www.dba-oracle.com/bp/s_oracle_tuning_book.htm

  • HW RAID Config for Oracle DB on Windows

    We are currently running Oracle 8.0.5 on Windows 2000 Advanced Server. This is a DELL PowerEdge 6600 server with 8 73GB Ultra SCSI Hard Drives. We will be upgrading to Oracle 9i in the future. Question: What would be the recommended Hardware RAID Configuration? We've heard that RAID 10 would be a good configuration to implement. What would be the best way for us to configure our server? We would like a high fault tolerance, good performance system. Our database isn't very large in size (10GB). By the way, switching to a different operating system is not an option for us.

    Hi
    We use 9i on Win 2k3 with RAID 5; it is fault tolerant, and I think RAID 5 is the best configuration for performance, security, and fault tolerance. But keep in mind that RAID 5 has a small-write penalty.
    It is also suggested by many consultants that if you're using RAID you should put the control files and the redo logs on disks outside the RAID array, but that configuration is for very large DBs.
    RAID 5 is working perfectly for me; I have a dual XEON 2.4 with 4 x 15,000rpm 34GB SCSI HDDs and 4GB RAM, though of course our DB is almost 30GB.
    Regards
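The small-write penalty mentioned above has a concrete shape: a random small write on RAID 5 costs four physical I/Os (read old data, read old parity, write new data, write new parity). A rough back-of-envelope sketch, assuming evenly loaded disks and an illustrative per-disk IOPS figure:

```python
def raid5_write_iops(n_disks, iops_per_disk):
    """Approximate random small-write IOPS of a RAID 5 array.
    Each logical write costs 4 physical I/Os (read-modify-write of
    data block and parity block), spread across all members."""
    return n_disks * iops_per_disk / 4

def raid10_write_iops(n_disks, iops_per_disk):
    """RAID 10 costs 2 physical I/Os per logical write (one per mirror side)."""
    return n_disks * iops_per_disk / 2

# 4 x 15k SCSI disks at ~180 IOPS each (assumed figure for illustration):
r5 = raid5_write_iops(4, 180)    # 180.0
r10 = raid10_write_iops(4, 180)  # 360.0
```

This is why the RAID 10 suggestion in the question is the usual recommendation for write-heavy database files, while RAID 5 remains fine for read-mostly workloads.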

  • Best use of RAID 0, for O/S or Video Editing data disk ?

    I now have a RAID 0 setup of two 120GB discs (1 x P-ATA & 1 x S-ATA) as well as a S-ATA 160GB and a P-ATA 60GB.
    Now what should I use the RAID disc for ?
    My choices are:
    P-ATA 60GB for operating system (current)
    RAID 0 (220GB) for Video Editing Data
    S-ATA 160GB for other data, swap file, My Documents and System Images.
    Or
    RAID 0 (220GB) for operating system (current) and swap file
    S-ATA 160GB for Video Editing Data
    P-ATA 60GB for other data, My Documents and System Images.
    Or
    S-ATA 160GB for operating system (current) and swap file
    RAID 0 (220GB) for Video Editing Data
    P-ATA 60GB for other data, My Documents and System Images.
    Which will give me the best performance/balance for games and video editing?
    Am I likely to hit any problems using the SATA drive as the operating system drive? I thought I heard somewhere that XP wouldn't allow drive C to be allocated to a SATA drive.

    If you do video editing, where speed pays, I think you should use the RAID for that kind of work.

  • Creating the best RAID setup for my MacPro using FCP

    I have a Mac Pro, 2 x 3GHz Dual-Core, 16GB RAM, 4 x 500GB drives, and I work in FCP 5.1.4. With my hardware setup I feel it should be faster, so I've been wanting to set up a RAID, but I'm not sure how to do it, or the best way to do it.
    Of the 4 drives I have, drive 1 is my main drive (boot drive, apps, etc.). Drives 2 & 3, which are 1TB combined, I'd like to turn into a RAID to speed up rendering, editing, etc. in FCP and Motion. Drive 4 is where I keep all my working files.
    My files are backed up regularly onto external harddrives and kept offsite.
    Can I leave everything I have on my entire system the way it is and just turn Drives 2 & 3 into a RAID that's best for this application? People who work in VIDEO I know do this all the time to speed things up but I can't find the steps for the best way to do this. Bits and pieces all over the place but I can't put this puzzle together.
    Can you point me in the direction in how to do this?
    As I'm doing this is there anything I should be careful about?
    Please help me understand this process.
    Just in case you need to know what kind of drives I have here's the info:
    Capacity: 465.76 GB
    Model: ST3500641AS P
    Revision: 3.BTA
    Native Command Queuing: Yes
    Queue Depth: 32
    Removable Media: No
    Detachable Drive: No
    BSD Name: disk1
    Bay Name: "Bay 1"
    OS9 Drivers: No
    S.M.A.R.T. status: Verified
    Volumes:
    Startup Drive:
    Capacity: 465.44 GB
    Available: 367.86 GB
    Writable: Yes
    File System: Journaled HFS+
    BSD Name: disk1s2
    Mount Point: /

    My advice would be 'yes' to what you are saying... with the exception of the SoftRAID stuff. Not that I think it's wrong, but I've never used it, so I can't comment on whether you need it or whether the Mac OS X RAID is sufficient; others here say they prefer it, so fair enough. You can see some comparisons here: http://www.amug.org/amug-web/html/amug/reviews/articles/softraid/351/
    AMUG always has in-depth benchmarks of this stuff.
    I wouldn't call myself an FCP guru, but I think your suggestion of putting the FCP scratch disk and client video files on the RAID is the best idea. The scratch folder is essentially where the temp-rendered clips go, so it's audio and video: you want that folder to be on a really fast volume. You also want your source video files to be on a really fast volume, so they can be streamed fast enough to play in realtime when playing unrendered areas or building a preview.
    Some might say in FCP you get best performance when your scratch disks and video files are on separate disks. That's totally true, so it can read from one disk and write to the other at the same time. But in your case you have a 3-disk stripe, which is roughly 3x faster than any one of your disks! So it would still be faster to have them all on the same stripe.
    You can leave your FCP app on the system drive; it keeps things cleaner (drive 1 for system and apps, RAID for data). You can keep your project files wherever you want; they're not very big and are kept in memory, so they don't affect performance at all. Though to stay clean I would keep them on the RAID, so again the RAID is for data, and you can back it up accordingly. The system drive is only for apps and system, so you can back that up accordingly too (less frequently, probably).
    P.S. Technically your 'point 8' is inaccurate. After creating the RAID you will not see drives 2, 3 or 4; you will see only one 'volume' for all 3 drives. Overall your Mac will have 2 'volumes': the system drive, and the stripe of 2, 3, 4. Physical drives and 'volumes' that mount in your OS are completely separate things. You can create multiple partitions on a single drive, or you can combine multiple drives into a single volume (e.g. using RAID). But basically yes, you copy your client files back to the RAID.
    And remember, if any ONE of the disks in the stripe dies, you lose ALL of the data on the entire 1.5TB volume. So it is pretty important to back up regularly!!!!
    (I don't wanna confuse you any more, but RAID 5 is a good option if you want more security and don't mind paying extra :P. You'll need more hardware for that, and more drives to make it worthwhile, but I would say skip that for now, as you can build your RAID 0 for free or almost free and use that until you think you need more.)
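The warning above about losing the whole stripe can be put into numbers: if each disk independently fails with probability p over some period, a stripe of n disks loses all its data with probability 1 - (1 - p)^n. A small sketch (the 3% annual failure rate is an assumed figure, purely for illustration):

```python
def stripe_failure_probability(p_single, n_disks):
    """Probability that a RAID 0 array loses data: at least one of
    n independent member disks fails, and failure of any single
    member destroys the whole stripe."""
    return 1 - (1 - p_single) ** n_disks

# With an assumed 3% annual failure rate per disk:
one_disk = stripe_failure_probability(0.03, 1)     # 0.03
three_disk = stripe_failure_probability(0.03, 3)   # ~0.0873, nearly tripled
```

So a 3-disk stripe is almost three times as likely to lose data in a year as a single disk, which is exactly why the regular-backup advice above matters.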

  • Datafiles in swapping mode - for performance

    Hi there,
    One of the Senior DBAs told me that it is better to keep the datafiles in swapping mode, which means..
    Suppose we need to create 4 Tablespaces, 2 for data files and 2 for index files and we have two drives called E, F. In this case he said, the performance will be increased if we prepare
    E drive
    Datafile_Tablespace_A (datafile TS no. 1)
    Index_Tablespace_D (index TS for datafile no.2)
    F drive
    Index_Tablespace_B (index TS for datafile no.1)
    Datafile_Tablespace_C (datafile TS no. 2)
    According to him, Oracle works better in swapping mode, is it true? I was under the impression that index and datafile tablespaces should be built on separate drives.
    Even though my question is general, for reference: the OS we are using is Windows 2003 Server, the partition is RAID 5, and the Oracle version is 10.2.0.1.
    If anybody can clarify, I would be obliged.
    Thanks

    I'm going to default to one of Billy's responses:
    {message:id=4060608}
    >
    Irrelevant as that does not change any of the storage fundamentals in Oracle. The database does not know or care what you use as storage system.. why should it? It is the kernel and disk/file system drivers job to deal with the actual storage hardware. From a database perspective, it wants the ability to read() and write() - in other words, use the standard I/O interface provided by the kernel.
    I/O performance must not be a factor. If it is, then your storage layer is incorrectly designed and implemented. Striping (RAID 0), for example, must be dealt with at the storage layer and not at the application layer. Tablespaces and datafiles in Oracle make extremely poor tools to implement striping of any sort. It does not make sense to attempt I/O balancing such as striping at the tablespace and datafile level in Oracle.
    So why then use separate tablespaces? You may need different tablespaces to implement different block sizes for performance.. but this is an exception to the rule. And you do not address actual storage performance here, but more how Oracle should manage the smallest unit of data in the tablespace.
    So besides this exception, what other reasons? Could be you want to physically separate one logical data base (Oracle schema) from another. Could be that you want to implement transportable tablespaces.
    All these requirements are quite explicit in that more than one tablespace is needed. If there is no such requirement, why then consider using multiple tablespaces? It only increases the complexity of space management.
    Consider using different tablespaces for indexes and table data. In a year's time, you may find that the index tablespace has been oversized and the data tablespace undersized. You now have too much space on the one hand, too little on the other, and no easy way to "move" the freespace to where it is needed.
    It is far easier to deal with a single tablespace, as it allows far more flexibility in how you use that space for data and index objects, than attempting some kind of split.
    So I will look for a sound and unambiguous technical requirement that very clearly says "multiple tablespaces needed". If not, I will not beat myself over the head trying to find reasons for implementing multiple tablespaces.>
    There are also many other threads on this forum about separating data and indexes, try and search for them.

  • Looking for a Hard Drive RAID System for 17" MacBook Pro, any Suggestions?

    I purchased my 17" MacBook Pro in Nov. '09 and it does have an ExpressCard/34 slot. What I am looking to do is purchase a 4TB CalDigit VR external hard drive and put it on RAID 0. I love the performance, but I don't feel too safe, because if that drive goes down, there goes my data. So what I want to do is use Carbon Copy Cloner and find another external hard drive that will back up my CalDigit VR, since it will be RAID 0. Any suggestions would help out very much! I am using this RAID system to store my HD video using Final Cut Pro Studio. I know I can set up the CalDigit VR as a RAID 1, but I would rather set it up as a RAID 0 and have another hard drive back the data up. Let me know if you have set up a RAID system for your MacBook Pro and what you did. Also, what ExpressCards do you recommend?
    Thank you,
    Chad

    There is a difference technically between this iStorage unit and the CalDigit stuff. CalDigit includes a hardware RAID controller and removable drives. The iStorage Pro unit relies on your computer's CPU to control the RAID, so it will tax things like render times; just to play back, a software RAID can take as much as 30% of your CPU's cycles.
    That said, it may not matter if you're working in lower resolution files, or lower data rate stuff, either company makes great gear though. Both are intended for use with video systems. But ya do get something for the extra money on the CalDigit gear.
    But if all you want to use it for is backup of data... anybody's drives would do this... doesn't have to be a raid either. Single FW drives will certainly hold the data as backup even if they won't play it without dropping frames.
    OH, and Colorado... join the Denver FCP User Group... http://www.dfcpug.com We meet at the Colorado Film School in Denver on a monthly basis.
    Jerry
    Message was edited by: Jerry Hofmann

  • Performance gain ?

    We have a package with constant definitions (about 1400 constants).
    A lot of views use these constants and need about 8 seconds to execute.
    One of the developers found a performance gain of about a factor of 4 if he substitutes a constant in a view with the value the constant holds.
    Is there a way to get the performance gain without hardcoding the values?
    Thanks in advance

    You cannot reference packaged constants in SQL.
    If you mean you are calling packaged functions which return constants then yes - I would expect there to be some small overhead for calling them from SQL (which could be magnified to noticeable levels of degradation if the function is called repeatedly). The overhead has been reduced somewhat with later versions of Oracle.
    Of course there are also reasons why the optimizer might choose a more efficient plan when substituting literal values for unknown values up-front, since you provide more information to the optimizer.

  • Can anyone send a tutorial for performance tuning?

    Can anyone send a tutorial for performance tuning? I'd like to check my coding.

    1.      Unused/Dead code
    Avoid leaving unused code in the program: either comment it out or delete it. Use Program --> Check --> Extended Program Check to find variables which are not used statically.
    2.      Subroutine Usage
    For good modularization, the decision of whether or not to execute a subroutine should be made before the subroutine is called. For example:  
    This is better:
    IF f1 NE 0.
      PERFORM sub1.
    ENDIF. 
    FORM sub1.
    ENDFORM.  
    Than this:
    PERFORM sub1.
    FORM sub1.
      IF f1 NE 0.
      ENDIF.
    ENDFORM. 
    3.      Usage of IF statements
    When coding IF tests, nest the testing conditions so that the outer conditions are those which are most likely to fail. For logical expressions with AND, place the most likely false condition first, and for OR, place the most likely true condition first.
    Example - nested IF's:
      IF (least likely to be true).
        IF (less likely to be true).
         IF (most likely to be true).
         ENDIF.
        ENDIF.
       ENDIF. 
    Example - IF...ELSEIF...ENDIF :
      IF (most likely to be true).
      ELSEIF (less likely to be true).
      ELSEIF (least likely to be true).
      ENDIF. 
    Example - AND:
       IF (least likely to be true) AND
          (most likely to be true).
       ENDIF.
    Example - OR:
            IF (most likely to be true) OR
          (least likely to be true). 
    4.      CASE vs. nested Ifs
    When testing fields "equal to" something, one can use either the nested IF or the CASE statement. The CASE is better for two reasons. It is easier to read and after about five nested IFs the performance of the CASE is more efficient. 
    5.      MOVE statements
    When records a and b have the exact same structure, it is more efficient to MOVE a TO b than to  MOVE-CORRESPONDING a TO b.
    MOVE BSEG TO *BSEG.
    is better than
    MOVE-CORRESPONDING BSEG TO *BSEG. 
    6.      SELECT and SELECT SINGLE
    When using the SELECT statement, study the key and always provide as much of the left-most part of the key as possible. If the entire key can be qualified, code a SELECT SINGLE not just a SELECT.   If you are only interested in the first row or there is only one row to be returned, using SELECT SINGLE can increase performance by up to three times. 
    7.      Small internal tables vs. complete internal tables
    In general it is better to minimize the number of fields declared in an internal table.  While it may be convenient to declare an internal table using the LIKE command, in most cases, programs will not use all fields in the SAP standard table.
    For example:
    Instead of this:
    data:  t_mara like mara occurs 0 with header line.
    Use this:
    data: begin of t_mara occurs 0,
            matnr like mara-matnr,
          end of t_mara.
    8.      Row-level processing and SELECT SINGLE
    Similar to the processing of a SELECT-ENDSELECT loop, when calling multiple SELECT-SINGLE commands on a non-buffered table (check Data Dictionary -> Technical Info), you should do the following to improve performance:
    o       Use the SELECT into <itab> to buffer the necessary rows in an internal table, then
    o       sort the rows by the key fields, then
    o       use a READ TABLE WITH KEY ... BINARY SEARCH in place of the SELECT SINGLE command. Note that this only makes sense when the table you are buffering is not too large (this decision must be made on a case-by-case basis).
    9.      READing single records of internal tables
    When reading a single record in an internal table, the READ TABLE WITH KEY is not a direct READ.  This means that if the data is not sorted according to the key, the system must sequentially read the table.   Therefore, you should:
    o       SORT the table
    o       use READ TABLE WITH KEY BINARY SEARCH for better performance. 
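The sort-then-binary-search pattern in points 8 and 9 is the same idea as Python's bisect module. A minimal sketch outside ABAP (the table contents are made up for illustration):

```python
import bisect

# In-memory table of (key, row) pairs, analogous to an ABAP internal table.
table = [("0001", "A"), ("0003", "C"), ("0002", "B")]

# SORT the table by key once...
table.sort(key=lambda rec: rec[0])
keys = [rec[0] for rec in table]

def read_with_key_binary_search(key):
    """Analogue of READ TABLE ... WITH KEY ... BINARY SEARCH:
    an O(log n) lookup instead of a sequential scan of the table."""
    i = bisect.bisect_left(keys, key)
    if i < len(keys) and keys[i] == key:
        return table[i][1]
    return None  # not found (sy-subrc <> 0, so to speak)

row = read_with_key_binary_search("0002")  # "B"
```

The payoff is the same in both languages: one O(n log n) sort up front buys O(log n) per lookup, which dominates once the table is read more than a handful of times.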
    10.  SORTing internal tables
    When SORTing internal tables, specify the fields to be SORTed.
    SORT ITAB BY FLD1 FLD2.
    is more efficient than
    SORT ITAB.  
    11.  Number of entries in an internal table
    To find out how many entries are in an internal table use DESCRIBE.
    DESCRIBE TABLE ITAB LINES CNTLNS.
    is more efficient than
    LOOP AT ITAB.
      CNTLNS = CNTLNS + 1.
    ENDLOOP. 
    12.  Performance diagnosis
    To diagnose performance problems, it is recommended to use the SAP transaction SE30, ABAP/4 Runtime Analysis. The utility allows statistical analysis of transactions and programs. 
    13.  Nested SELECTs versus table views
    Since release 4.0, Open SQL allows both inner and outer table joins.  A nested SELECT loop can be used to achieve the same result; however, the performance of nested SELECT loops is very poor in comparison to a join.  Hence, to improve performance (by a factor of 25x in some cases) and reduce network load, you should either create a view in the Data Dictionary and then use this view to select the data, or code the SELECT using a join. 
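    For example, a sketch of an inner join on sales order headers and items (the field list and the order type 'TA' are illustrative):

    ```abap
    data: begin of t_orders occurs 0,
            vbeln like vbak-vbeln,
            auart like vbak-auart,
            matnr like vbap-matnr,
          end of t_orders.

    * One database round trip instead of a SELECT inside a SELECT.
    select vbak~vbeln vbak~auart vbap~matnr
           into table t_orders
           from vbak inner join vbap
                on vbak~vbeln = vbap~vbeln
           where vbak~auart = 'TA'.
    ```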
    14.  If nested SELECTs must be used
    As mentioned previously, performance can be improved dramatically by using views instead of nested SELECTs. However, if this is not possible, the following example of using an internal table in a nested SELECT can also improve performance by a factor of 5x:
    Use this:
    form select_good.
      data: t_vbak like vbak occurs 0 with header line.
      data: t_vbap like vbap occurs 0 with header line.
      select * from vbak into table t_vbak up to 200 rows.
      select * from vbap
              into table t_vbap
              for all entries in t_vbak
              where vbeln = t_vbak-vbeln.
    endform.
    Instead of this:
    form select_bad.
      select * from vbak up to 200 rows.
        select * from vbap where vbeln = vbak-vbeln.
        endselect.
      endselect.
    endform.
    Although using "SELECT...FOR ALL ENTRIES IN..." is generally very fast, you should be aware of the three pitfalls of using it:
    Firstly, the database interface automatically removes duplicate rows from the result set.  Therefore, if you wish to ensure that no qualifying records are discarded, the field list of the inner SELECT must be designed so that the retrieved records contain no duplicates (normally, this means including in the list of retrieved fields all of the fields that comprise the table's primary key).
    Secondly, if you code "SELECT ... FROM <database table> FOR ALL ENTRIES IN <itab>" and the internal table <itab> is empty, then all rows from <database table> will be retrieved.
    Thirdly, if the internal table supplying the selection criteria (i.e. internal table <itab> in the example "...FOR ALL ENTRIES IN <itab>") contains a large number of entries, performance degradation may occur.
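    The second pitfall can be avoided with an explicit emptiness check before the SELECT; a minimal sketch using the tables from the example above:

    ```abap
    * Guard against an empty driver table: with no entries in t_vbak,
    * FOR ALL ENTRIES would retrieve every row of VBAP.
    if not t_vbak[] is initial.
      select * from vbap
             into table t_vbap
             for all entries in t_vbak
             where vbeln = t_vbak-vbeln.
    else.
      refresh t_vbap.
    endif.
    ```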
    15.  SELECT * versus SELECTing individual fields
    In general, use a SELECT statement specifying a list of fields instead of a SELECT * to reduce network traffic and improve performance.  For tables with only a few fields the improvements may be minor, but many SAP tables contain more than 50 fields when the program needs only a few.  In the latter case, the performance gains can be substantial.  For example:
    Use:
    select vbeln auart vbtyp from vbak
      into (vbak-vbeln, vbak-auart, vbak-vbtyp)
      where ...
    Instead of using:
    select * from vbak where ... 
    16.  Avoid unnecessary statements
    There are a few cases where one command is better than two.  For example:
    Use:
    append <tab_wa> to <tab>.
    Instead of:
    <tab> = <tab_wa>.
    append <tab> (modify <tab>).
    And also, use:
    if not <tab>[] is initial.
    Instead of:
    describe table <tab> lines <line_counter>.
    if <line_counter> > 0. 
    17.  Copying or appending internal tables
    Use this:
    <tab2>[] = <tab1>[].  (if <tab2> is empty)
    Instead of this:
    loop at <tab1>.
      append <tab1> to <tab2>.
    endloop.
    However, if <tab2> is not empty and should not be overwritten, then use:
    append lines of <tab1> [from index1] [to index2] to <tab2>.

  • Comparing SELECTs for performance

    I have a long-running function module, and through SAT I've identified its SELECT from BSEG as a hotspot and a likely way to improve the run time.
    Through some research in SCN, I found I can/should use one of the component tables (BSID) to improve performance, but I don't know *how much* of an improvement it will be.
    Rather than change the code of the FM and transport it to QA just to analyze performance gains, I was hoping there was some sort of tool that I could use to analyze separate code comparatively.
    I thought of making 2 Quickviewer queries (one for BSEG and one for BSID) and then using ST05 to analyze each.  Is there a better way?
    Basically in PRD, I'd like to compare isolated SELECTs for performance to see what gains I could make.
    Thanks
    Jeremy H.

    You can find some interesting discussions linked here, some also talk about index access:
    FAQ's, intros and memorable discussions in the ABAP Testing and Troubleshooting Space
    Regarding your point 4), make sure your WHERE-condition contains as many fields of the index as possible, from the top down and without gaps.
    Looking at BSID~4:
    MANDT: Client
    BUKRS: Company Code
    REBZG: Number of the Invoice the Transaction Belongs to
    REBZJ: Fiscal Year of the Relevant Invoice (for Credit Memo)
    REBZZ: Line Item in the Relevant Invoice
    KUNNR: Customer Number
    UMSKS: Special G/L Transaction Type
    REBZT: Follow-On Document Type
    MANDT will be included automatically. BUKRS is not very selective (not many distinct values), but you must include it since it is the leading field of the index. REBZG sounds like a selective field, so you can likely use the index effectively by having just BUKRS and REBZG in your WHERE-condition, but the more the better.
    Also have a look at transactions DB05 and TAANA which can give you a good idea about the actual data distribution.
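    As an illustrative sketch, an index-friendly SELECT on BSID would supply the leading fields of BSID~4 without gaps (the company code and invoice number are made-up values, and the field list is only an example):

    ```abap
    data: begin of t_bsid occurs 0,
            bukrs like bsid-bukrs,
            belnr like bsid-belnr,
            kunnr like bsid-kunnr,
          end of t_bsid.

    * BUKRS and REBZG are the leading index fields after MANDT;
    * supplying both without gaps lets the database use BSID~4.
    select bukrs belnr kunnr from bsid
           into table t_bsid
           where bukrs = '1000'
           and   rebzg = '0090000123'.
    ```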
    Thomas
