Effect of fragmentation on QoS

If packets are fragmented at ingress and QoS is used, will the QoS treatment be maintained after fragmentation? For example, on GRE tunnels we have qos pre-classify to preserve the bits.
regards
shivlu jain

Hi Shivlu,
During fragmentation the IP header is preserved: the payload is split, and a copy of the original IP header is applied to each fragment. This way you still keep the IP Precedence or DSCP value in the IP header of every fragment.
This is true for GRE too.
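As a quick, hypothetical demonstration (my own illustration, not from this thread; the address and ports are made up), Scapy makes it easy to fragment a DSCP-marked packet and confirm that every fragment carries the same ToS byte:

```python
# Sketch: IP fragments inherit the original header's ToS/DSCP byte.
# Requires Scapy (pip install scapy); nothing is actually sent on the wire.
from scapy.all import IP, UDP, Raw, fragment

# DSCP EF (46) sits in the top six bits of the ToS byte: 46 << 2 == 0xB8.
pkt = IP(dst="198.51.100.1", tos=0xB8) / UDP(sport=5060, dport=5060) / Raw(b"x" * 3000)

for frag in fragment(pkt, fragsize=1480):
    # Every fragment gets a copy of the header, ToS byte included, so
    # downstream DSCP / IP Precedence matching still works.
    print(f"offset={frag.frag:4d}  tos={hex(frag.tos)}  len={len(frag)}")
```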
However, packet fragmentation can drive up your router's CPU load.
On slow links, link efficiency can be improved using Link Fragmentation and Interleaving (LFI), which reduces the serialization delay that large frames impose on small, delay-sensitive packets.
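The arithmetic behind LFI is worth a quick sketch (my numbers; the 10 ms per-hop delay budget is the commonly quoted rule of thumb, not something from this thread):

```python
# Why LFI exists: big frames monopolize slow links for a long time.

def serialization_delay_ms(frame_bytes: int, link_kbps: int) -> float:
    """Time to clock one frame onto the wire, in milliseconds."""
    return frame_bytes * 8 / link_kbps  # bits / (kbits per second) -> ms

def lfi_fragment_bytes(link_kbps: int, target_delay_ms: float = 10.0) -> int:
    """Largest fragment that still meets the per-hop delay budget."""
    return int(link_kbps * target_delay_ms / 8)

print(serialization_delay_ms(1500, 64))  # 187.5 ms for a 1500-byte frame at 64 kbps
print(lfi_fragment_bytes(64))            # 80-byte fragments meet a 10 ms budget
```

With 80-byte fragments, a voice packet never waits more than about 10 ms behind a data fragment, instead of up to 187 ms behind a full-size frame.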
HTH.
Amit.

Similar Messages

  • My VLAN and QoS rant!

    If I see another post here saying something to the effect that VLANs or QoS are too complicated, I think I'm going to scream! Many of you are maintaining business-class networks for medium to enterprise-size organizations. If you don't understand these simple concepts and how to configure all the expensive network hardware your company has invested in, then it is your responsibility to do a couple of hours of study until you understand both concepts. Then you can configure your company's networks to perform as the engineers who designed the switches, routers, or firewalls intended, leverage that investment, and earn your pay. I've seen posts where someone is just going to throw expensive gigabit switching at a VoIP project thinking it will save them from a couple of hours of figuring out how QoS works and how to configure it. It won't....
    This topic first appeared in the Spiceworks Community


  • MacBook Pro running v.slow - is there a way of optimising HD?

    My MBP is running like an absolute dog! It's seriously frustrating; at times it feels like I'm working on a 15-year-old machine with 8 MB of RAM. I bought it second-hand from a printer 2 years ago, and it ran fine for the first 18 months. It came with Adobe CS3 and Quark XPress 7.0 installed, so I doubt they are original copies. Quark especially is dire at times. Some examples of what happens:
    - When I log in from sleep with my password, sometimes it takes me back to the same login screen - I have to do it twice! Sometimes it literally locks up with a black screen and the arrow cursor - pressing any key or combination of keys (yes, I tried force quit and everything) gives what I describe as the 'dom' sound - the low-tone single note...?? I have to disconnect from power and remove the battery to reboot.
    - I get the occasional weird window (which also happens on my Mac Pro) where it enlarges off the foot of the screen. I have to click the green button at the top left of the window to make the bottom of the window appear before I can shrink it to the size I want - very annoying. I think this sometimes happens when switching from Cover Flow, but I'm not 100% sure.
    - It generally takes much too long to react in apps - the rotating spectrum symbol appears on nearly every command.
    I just feel the machine is struggling, big time, and I feel like totally erasing and reinstalling, but surely the supposedly best OS in the world (?) shouldn't need that???
    PS. Battery life is virtually nothing and it does need a new battery, but this all happens when connected to the mains, so I presume that is irrelevant?
    Any help appreciated.

    To answer the question posed in your thread title (sorry, it's long!):
    Defragmentation in OS X:
    http://support.apple.com/kb/HT1375?viewlocale=en_US
    Whilst 'defragging' OS X is rarely necessary, Rod Hagen has produced this excellent analysis of the situation which is worth reading:
    Most users, as long as they leave plenty of free space available and don't work regularly in situations where very large files are written and rewritten, are unlikely to notice the effects of fragmentation on either their files or on the drive's free space much.
    As the drive fills, the situation becomes progressively more significant, however.
    Some people will tell you that "OS X defrags your files anyway". This is only partly true. It defrags files that are less than 20 MB in size. It doesn't defrag larger files and it doesn't defrag the free space on the drive. In fact, the method it uses to defrag the smaller files actually increases the extent of free space fragmentation. Eventually, once the largest free space fragments are down to less than 20 MB (not uncommon on a drive that has, say, only 10% free space left) it gives up trying to defrag altogether. Despite this, the system copes very well without defragging as long as you have plenty of room.
    Again, this doesn't matter much when the drive is half empty or better, but it does when it gets fullish, and especially so if you are regularly dealing with large files, like video or serious audio stuff.
    If you look through this discussion board you will see quite a few complaints from people who find that their drive gets "slow". Often you will see them say that they "still have 10 or 20 gigs free" or the like. On modern large drives, by this stage they are usually in fact down to the point where the internal defragmentation routines can no longer operate, where their drives are working like navvies to keep up with finding space for any larger files, together with room for "scratch files", virtual memory, directories and so on. Such users are operating in a zone where they put a lot more stress on their drives as a result, and often start complaining of increased "heat". Most obviously, though, the computer slows down to a speed not much better than that of molasses. Eventually the directories and other related files may collapse altogether, and they find themselves with next-to-unrecoverable disk problems.
    By this time, of course, defragging itself has already become just about impossible. The amount of work required to shift the data into contiguous blocks is immense, puts additional stress on the drive, and takes forever. The extent of fragmentation of free space at this stage can be simply staggering, and any large files you subsequently write are likely to be divided into many tens of thousands of fragments scattered across the drive. Not only this, but things like the "extents files", which record where all the bits are located, will begin to grow astronomically as a result, putting even more pressure on your already stressed drive and increasing the risk of major failures.
    Ultimately this adds up to a situation where you can identify maybe three "phases" of Mac life when it comes to the need for defragmentation.
    In the "first phase" (with your drive less than half full), it doesn't matter much at all - probably not enough to even make it worth doing.
    In the "second phase" (between, say, 50% free space and 20% free space remaining) it becomes progressively more useful, but, depending on the use you put your computer to, you won't see much difference at the higher levels of free space unless you are a serious video buff who needs to keep their drives operating as efficiently and as fast as possible - chances are they will be using fast external drives over FW800 or eSATA to complement their internal HD anyway.
    At the lower end, though (when boot drives get down around the 20% mark on, say, a 250 or 500 GB drive), I certainly begin to see an impact on performance and stability when working with large image files, mapping software, and the like, especially those which rely on their own "scratch" files, and especially in situations where I am using multiple applications simultaneously, if I haven't defragmented the drive for a while. For me, defragmenting (I use iDefrag - it is the only third-party app I trust for this, after seeing people with problems using TechTool Pro and Drive Genius for such things) gives a substantial performance boost in this sort of situation and improves operational stability. I usually try to get in first these days and defrag more regularly (about once a month) when the drive is down to 30% free space or lower.
    Between 20% and 10% free space is a bit of a "doubtful region". Most people will still be able to defrag successfully in this sort of area, though the time taken and the associated risks increase as the free space declines. My own advice to people in this area is that they start choosing their new, bigger HD, because they are obviously going to need one very soon, and try to "clear the decks" so that they maintain that 20% free buffer until they do. Defragging regularly (perhaps even once a fortnight) will actually benefit them substantially during this "phase", but maybe doing so will lull them into a false sense of security and keep them from seriously recognising that they need to be moving to a bigger HD!
    Once they are down to that last ten per cent of free space, though, they are treading on glass. Free space fragmentation at least will already be a serious issue on their computers, but if they try to defrag with a utility without first making substantially more space available, they may find it runs into problems or is so slow that they give up halfway through and do the damage themselves, especially if they are using one of the less "forgiving" utilities!
    In this case I think the best way to proceed is to clone the internal drive to a larger external with SuperDuper, replace the internal drive with a larger one, and then clone back to it. No-one down to the last ten per cent of their drive really has enough room to move. Defragging it will certainly speed it up, and may even save them from major problems briefly, but we all know that before too long they are going to be in the same situation again. Better to deal with the matter properly and replace the drive with something more akin to their real needs once this point is reached. Heck, big HDs are as cheap as chips these days! It is mad to struggle on with sluggish performance, instability, and the possible risk of losing the lot in such a situation.
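    As a rough, illustrative companion to the three "phases" above (the script and its thresholds are my paraphrase of the analysis, not part of the original reply), a few lines of Python can report where a volume sits:

```python
# Report free space against the free-space "phases" described above.
import shutil

def free_space_phase(path: str = "/") -> None:
    usage = shutil.disk_usage(path)
    free_pct = usage.free / usage.total * 100
    print(f"{path}: {usage.free / 1e9:.1f} GB free ({free_pct:.0f}%)")
    if free_pct >= 50:
        print("First phase: fragmentation is unlikely to be worth worrying about.")
    elif free_pct >= 20:
        print("Second phase: occasional defragmentation becomes progressively more useful.")
    elif free_pct >= 10:
        print("Doubtful region: clear the decks and start shopping for a bigger drive.")
    else:
        print("Treading on glass: free up space or replace the drive before defragging.")

free_space_phase("/")
```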
    You do not mention any maintenance you have/have not done:
    Repairing permissions is important, and should always be carried out both before and after any software installation or update.
    Go to Disk Utility (this is in the Utilities folder in your Applications folder) and click on the icon of your hard disk (not the one with all the numbers).
    In First Aid, click on Repair Permissions.
    This only takes a minute or two in Tiger, but much longer in Leopard.
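    (If you prefer a script, the same repair could be run from the command line on these older systems; a hedged sketch, since the repairPermissions verb only existed through OS X Yosemite and was later removed:)

```python
# Illustrative only: Disk Utility's "Repair Permissions" from a script, on
# OS X releases that still shipped the diskutil repairPermissions verb.
import subprocess

subprocess.run(["diskutil", "repairPermissions", "/"], check=True)
```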
    Background information here:
    http://docs.info.apple.com/article.html?artnum=25751
    and here:
    http://docs.info.apple.com/article.html?artnum=302672
    An article on troubleshooting Permissions can be found here:
    http://support.apple.com/kb/HT2963
    By the way, you can ignore any messages about SUID or ACL file permissions, as explained here:
    http://support.apple.com/kb/TS1448?viewlocale=en_US
    If you are having any serious problems with your Mac, you might as well complete the exercise by repairing your hard disk as well. You cannot do this from the same start-up disk: reboot from your install disk (holding down the C key). Once it opens, select your language, and then go to Disk Utility from the Utilities menu. Select your hard disk as before and click Repair.
    Once that is complete, reboot again from your usual start-up disk.
    More useful reading here:
    Resolve startup issues and perform disk maintenance with Disk Utility and fsck
    http://support.apple.com/kb/TS1417?viewlocale=en_US
    For a full description of how to resolve Disk, Permission and Cache Corruption, you should read this FAQ from the X Lab:
    http://www.thexlab.com/faqs/repairprocess.html
    Lastly, is there any reason why you have not updated to 10.5.8, and perhaps applied other updates for security, etc.?

  • How to move huge HD video files between external hard drives and defrag ext drive?

    I have huge high definition video files on a 2TB external hard drive (and its clone).  The external hard drive is maxed out.  I would like to move many of the video files to a new 3TB external hard drive (G-drive, and a clone) and leave a sub-group of video files (1+ TB) on the original external hard drive (and its clone).  
    I am copying files from the original external drive ("ext drive A") to the new external drive ("ext drive B") via Carbon Copy Cloner (selecting, iMovie event by event, what I want to transfer). Just a note: I do not know how to partition or make bootable drives, and I see suggestions with these steps in them.
    My questions:
    1.) I assume this transfer of files will create extreme fragmentation on drive A. Should I reformat/re-initialize ext drive A after moving the files I want? If so, how best to do this? Do I use "Erase" within Disk Utility? Do I need to do anything else before transferring files back onto ext drive A from its clone?
    2.) Do I also need to defrag if I reformat ext drive A? Do I defrag instead of, or in addition to, reformatting? If so, how do I do this? I've read so many warnings on these forums and heard too many stories of this going awry. Which 3rd-party software should I use?
    Thank you in advance for any suggestions, tips, advice.  This whole process makes me SO nervous.

    Here is a very good write-up on defragging in the OS X environment that I borrowed from Klaus1:
    Defragmentation in OS X:
    http://support.apple.com/kb/HT1375 which states:
    You probably won't need to optimize at all if you use Mac OS X. Here's why:
    Hard disk capacity is generally much greater now than a few years ago. With more free space available, the file system doesn't need to fill up every "nook and cranny." Mac OS Extended formatting (HFS Plus) avoids reusing space from deleted files as much as possible, to avoid prematurely filling small areas of recently-freed space.
    Mac OS X 10.2 and later includes delayed allocation for Mac OS X Extended-formatted volumes. This allows a number of small allocations to be combined into a single large allocation in one area of the disk.
    Fragmentation was often caused by continually appending data to existing files, especially with resource forks. With faster hard drives and better caching, as well as the new application packaging format, many applications simply rewrite the entire file each time. Mac OS X 10.3 onwards can also automatically defragment such slow-growing files. This process is sometimes known as "Hot-File-Adaptive-Clustering."
    Aggressive read-ahead and write-behind caching means that minor fragmentation has less effect on perceived system performance.
    Whilst 'defragging' OS X is rarely necessary, Rod Hagen has produced an excellent analysis of when it does matter; it is quoted in full in the first reply above.

  • Do I need to defragment my iMac?

    Hi Guys,
    I have a question - do I need to defragment my iMac? I have noticed it is getting slower.
    blueheron11

    Well, I'm going to put a contrary view to others you have received here, blueheron.
    My full analysis of fragmentation and the three free-space "phases" is quoted in the first reply above. In short: with half the drive or more free, defragmenting isn't worth doing; between about 50% and 20% free it becomes progressively more useful; and below 20% free it genuinely matters, at which point you should also be thinking about a bigger drive.
    Cheers
    Rod

  • How much disk space is too little?

    I have an early 2008 15" MBP with the 200 GB 7200 rpm drive. I'm currently running at about 20-25 GB of free space. I do a lot of photography and keep clearing photo sessions off to external drives to continually free up disk space. But the problem is that the computer is getting really slow, and I'm sure the lack of free disk space is not helping.
    Does anyone know at what point OS X starts getting slow with regard to free disk space?
    I really want to replace the drive but I have Apple Care and I believe this would void the warranty.

    RFC2662 wrote:
    Does anyone know at what point OS X starts getting slow with regard to free disk space?
    I really want to replace the drive but I have Apple Care and I believe this would void the warranty.
    There is no magic number here, RFC2662. It depends a great deal on what you use your computer for.
    You are certainly down in the sort of area where problems can set in though.
    There are two reasons why drives get slow when they get full: slower sector access speeds, and increasing levels of free space (and ultimately file) fragmentation.
    The inner sectors, which generally fill up last on your HD, are located on parts of the platter(s) with a smaller circumference than the outer sectors. The standard drives used in modern computers operate at a fixed rotational rate when reading and writing data, so in any single rotation the heads traverse a much smaller distance on the inner sectors than on the outer, and the speed at which data can be read and written there is accordingly somewhat slower. The effect is not huge, and is mitigated to some extent by the "hot spot" strategies OS X uses to keep system-related and other frequently accessed files in the faster, outer bands, but it will still slow your drive.
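    To put rough numbers on that (the sectors-per-track figures below are hypothetical, purely to show the shape of the effect):

```python
# Same spin rate, fewer sectors per inner track -> less data per rotation.
rotations_per_sec = 7200 / 60  # a 7200 rpm drive turns 120 times per second
for zone, sectors_per_track in [("outer", 1600), ("inner", 800)]:
    mb_per_sec = sectors_per_track * 512 * rotations_per_sec / 1e6
    print(f"{zone} tracks: ~{mb_per_sec:.0f} MB/s")
# outer tracks: ~98 MB/s, inner tracks: ~49 MB/s
```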
    The second (more serious) cause of slow downs (and potentially other problems) as drives get full is free space fragmentation.
    The detail is in the free-space fragmentation analysis quoted in full in the first reply above; the short version is that once free space falls towards 20% of capacity the internal defragmentation routines start to struggle, and below 10% free the risks become serious, especially for large video and photographic files.
    So it comes down to this:
    1) If you really need maximum speed from your drive, for heavy-duty video editing and the like, then you are probably best off keeping the internal drive for system and application use, keeping it as empty as possible, and using fast FW800 or eSATA drives for your work files.
    2) For the vast majority of users, though, there will be little or no noticeable difference until free space on the drive falls to 20% or less of total capacity. (There will be a reduction in performance, but it won't really matter, or be obvious, to most users.) Once the drive falls below about 30% free they will benefit a bit from occasional defragmentation, either using the "clone, wipe and clone back" approach, or using a good utility like iDefrag.
    3) Once drives get down below 20%, users should be defragmenting more regularly, doing their best to shift unnecessary stuff off the drive, and thinking about upgrading to a larger drive. They will need to be cautious about any activity which involves the use of very large files, and are likely to see substantial performance degradation when they do so.
    4) Once they get below 10%, it is definitely time for a bigger drive if they can't at least get back above the 20% free level. They will also need to defragment after freeing up the necessary space.
    In your case your symptoms are typical of an overfull, badly fragmented, drive. You need to free up quite a bit more room, defragment the drive, and organise getting a bigger one installed.
    Cheers
    Rod

  • Defrag on a Mac?

    I am trying to tidy up my Mac. I have various saved copies and versions of things I don't need, which as far as I know I will have to sieve through methodically - excessive saved items, such as saved drives on an external hard drive, how many copies of my library/apps I need, etc.
    But is there a way of cleaning up generally, like when you defrag a PC?

    Defragmentation in OS X:
    http://support.apple.com/kb/HT1375 - the article text, and Rod Hagen's analysis of when defragmentation can still matter, are quoted in full in the replies above.

  • IDEFRAG FAULT: There was a hardware problem accessing your volume

    iDefrag stops defragging when it arrives at the following block and gives this warning:
    There was a hardware problem accessing your volume (e.g. bad block or loss of power) and iDefrag cannot continue. The file that iDefrag was working on was: /Previous/Library/Logs/DirectoryService/DirectoryService.server.log -
    What can I do?
    Now I can't finish defragmenting. Only the compact defrag level completes, because that level only restores the empty blocks; at all the other levels it stops defragging.
    Please help

    iDefrag may well have damaged your system. No such third-party utility is required, or even advisable, on OS X, which does such things by itself.
    I would recommend a reinstall from your recovery partition.
    Defragmentation in OS X:
    http://support.apple.com/kb/HT1375 - the article text, and Rod Hagen's analysis of when defragmentation can still matter, are quoted in full in the replies above.

  • How to defragment a hard drive

    Just upgraded my iMac 27", purchased in August 2012, to Yosemite. I move a lot of video files around and my machine is running slow (nothing to do with the Yosemite upgrade). I used iDefrag in the past but it has not been updated for Yosemite. My question is: what is the easiest way to defrag my hard drive?
    Thank you
    VPH

    You don't defrag a drive running Apple OS X.
    Defragmentation in OS X:
    http://support.apple.com/kb/HT1375  which states:
    You probably won't need to optimize at all if you use Mac OS X. Here's why:
    Hard disk capacity is generally much greater now than a few years ago. With more free space available, the file system doesn't need to fill up every "nook and cranny." Mac OS Extended formatting (HFS Plus) avoids reusing space from deleted files as much as possible, to avoid prematurely filling small areas of recently-freed space.
    Mac OS X 10.2 and later includes delayed allocation for Mac OS X Extended-formatted volumes. This allows a number of small allocations to be combined into a single large allocation in one area of the disk.
    Fragmentation was often caused by continually appending data to existing files, especially with resource forks. With faster hard drives and better caching, as well as the new application packaging format, many applications simply rewrite the entire file each time. Mac OS X 10.3 onwards can also automatically defragment such slow-growing files. This process is sometimes known as "Hot-File-Adaptive-Clustering."
    Aggressive read-ahead and write-behind caching means that minor fragmentation has less effect on perceived system performance.
    Whilst 'defragging' OS X is rarely necessary, Rod Hagen has produced this excellent analysis of the situation which is worth reading:
    Most users, as long as they leave plenty of free space available, and don't work regularly in situations where very large files are written and rewritten, are unlikely to notice the effects of fragmentation on either their files or on the drive's free space much.
    As the drive fills, the situation becomes progressively more significant, however.
    Some people will tell you that "OSX defrags your files anyway". This is only partly true. It defrags files that are less than 20 MB in size. It doesn't defrag larger files and it doesn't defrag the free space on the drive. In fact the method it uses to defrag the smaller files actually increases the extent of free space fragmentation. Eventually, in fact, once the largest free space fragments are down to less than 20 MB (not uncommon on a drive that has, say, only 10% free space left) it begins to give up trying to defrag altogether. Despite this, the system copes very well without defragging as long as you have plenty of room.
    Again, this doesn't matter much when the drive is half empty or better, but it does when it gets fullish, and it does especially when it gets fullish if you are regularly dealing with large files, like video or serious audio stuff.
    If you look through this discussion board you will see quite a few complaints from people who find that their drive gets "slow". Often you will see them say that they "still have 10 or 20 gigs free" or the like. On modern large drives, by this stage they are usually in fact down to the point where the internal defragmentation routines can no longer operate, where their drives are working like navvies to keep up with finding space for any larger files, together with room for "scratch files", virtual memory, directories etc etc etc. Such users are operating in a zone where they put a lot more stress on their drives as a result, often start complaining of increased "heat", etc etc. Most obviously, though, the computer slows down to a speed not much better than that of molasses. Eventually the directories and other related files may collapse altogether and they find themselves with next to unrecoverable disk problems.
    By this time, of course, defragging itself has already become just about impossible. The amount of work required to shift the data into contiguous blocks is immense, puts additional stress on the drive, takes forever, etc etc. The extent of fragmentation of free space at this stage can be simply staggering, and any large files you subsequently write are likely to be divided into many, many tens of thousands of fragments scattered across the drive. Not only this, but things like the "extents files", which record where all the bits are located, will begin to grow astronomically as a result, putting even more pressure on your already stressed drive, and increasing the risk of major failures.
    Ultimately this adds up to a situation where you can identify maybe three "phases" of mac life when it comes to the need for defragmentation.
    In the "first phase" (with your drive less than half full), it doesn't matter much at all - probably not enough to even make it worth doing.
    In the "second phase" (between , say 50% free space and 20% free space remaining) it becomes progressively more useful, but , depending on the use you put your computer to you won't see much difference at the higher levels of free space unless you are serious video buff who needs to keep their drives operating as efficiently and fast as possible - chances are they will be using fast external drives over FW800 or eSata to compliment their internal HD anyway.
    At the lower end though (when boot drives get down around the 20% mark on, say, a 250 or 500 Gig drive) I certainly begin to see an impact on performance and stability when working with large image files, mapping software, and the like, especially those which rely on the use of their own "scratch" files, and especially in situations where I am using multiple applications simultaneously, if I haven't defragmented the drive for a while. For me, defragmenting (I use iDefrag too - it is the only third party app I trust for this after seeing people with problems using TechToolPro and Drive Genius for such things) gives a substantial performance boost in this sort of situation and improves operational stability. I usually try to get in first these days and defrag more regularly (about once a month) when the drive is down to 30% free space or lower.
    Between 20% and 10% free space is a bit of a "doubtful region". Most people will still be able to defrag successfully in this sort of area, though the time taken and the risks associated increase as the free space declines. My own advice to people in this sort of area is that they start choosing their new, bigger HD, because they obviously are going to need one very soon, and try to "clear the decks" so that they maintain that 20% free buffer until they do. Defragging regularly (perhaps even once a fortnight) will actually benefit them substantially during this "phase", but maybe doing so will lull them into a false sense of security and keep them from seriously recognising that they need to be moving to a bigger HD!
    Once they are down to that last ten per cent of free space, though, they are treading on glass. Free space fragmentation at least will already be a serious issue on their computers but if they try to defrag with a utility without first making substantially more space available then they may find it runs into problems or is so slow that they give up half way through and do the damage themselves, especially if they are using one of the less "forgiving" utilities!
    In this case I think the best way to proceed is to clone the internal drive to a larger external with SuperDuper, replace the internal drive with a larger one and then clone back to it. No-one down to the last ten percent of their drive really has enough room to move. Defragging it will certainly speed it up, and may even save them from major problems briefly, but we all know that before too long they are going to be in the same situation again. Better to deal with the matter properly and replace the drive with something more akin to their real needs once this point is reached. Heck, big HDs are as cheap as chips these days! It is mad to struggle on with sluggish performance, instability, and the possible risk of losing the lot, in such a situation.
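    (Editorial aside, to put rough numbers on Rod's point about free-space fragment size: 10% free on a 500 Gig drive is about 50 GB, or roughly 51,200 MB. Once the largest free extents are under 20 MB, that space must be scattered across at least 51,200 / 20 = 2,560 separate fragments, and a single 4 GB video file written into it needs a minimum of 4,096 / 20, i.e. about 205 extents, even in the best case.)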

  • Question on fragmentation and ALTER INDEX REBUILD/REORGANIZE not affecting it

    The problem I ran into was troubleshooting a sporadically slow singleton lookup on a Clustered Index in a table with about 8 million rows, which is a separate issue I may need to submit for help. That aside, during that troubleshooting I noticed fragmentation on the Unique Clustered Index (it's a VARCHAR(20)), and then noticed the fragmentation in other indexes on this table. See sys.dm_db_index_physical_stats and DBCC SHOWCONTIG results below.
    SELECT
        SUBSTRING(OBJECT_NAME(i.object_id), 1, 30) AS TableName,
        SUBSTRING(i.name, 1, 40) AS TableIndexName,
        i.index_id,
        phystat.index_level,
        phystat.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'DETAILED') AS phystat
    INNER JOIN sys.indexes AS i
        ON i.object_id = phystat.object_id
        AND i.index_id = phystat.index_id
    WHERE OBJECT_NAME(i.object_id) = 'CONSUMERS'
    TableName   TableIndexName                   index_id  index_level  avg_fragmentation_in_percent
    CONSUMERS   UNI2K_CONSUMERS                  1         0            0.154827346202469
    CONSUMERS   UNI2K_CONSUMERS                  1         1            35.2941176470588
    CONSUMERS   UNI2K_CONSUMERS                  1         2            0
    CONSUMERS   PK__CONSUMER__7F6B0B8B286302EC   2         0            0.336078590685822
    CONSUMERS   PK__CONSUMER__7F6B0B8B286302EC   2         1            100
    CONSUMERS   PK__CONSUMER__7F6B0B8B286302EC   2         2            0
    CONSUMERS   UNI1K_CONSUMERS                  3         0            0.156451316031658
    CONSUMERS   UNI1K_CONSUMERS                  3         1            61.1510791366906
    CONSUMERS   UNI1K_CONSUMERS                  3         2            0
    CONSUMERS   IDX1_CONSUMERS                   4         0            0.215271389144434
    CONSUMERS   IDX1_CONSUMERS                   4         1            40
    CONSUMERS   IDX1_CONSUMERS                   4         2            100
    CONSUMERS   IDX1_CONSUMERS                   4         3            0
    CONSUMERS   IDX2_CONSUMERS                   5         0            0.222614710968834
    CONSUMERS   IDX2_CONSUMERS                   5         1            38.6281588447653
    CONSUMERS   IDX2_CONSUMERS                   5         2            75
    CONSUMERS   IDX2_CONSUMERS                   5         3            0
    (17 row(s) affected)
    DBCC SHOWCONTIG('CONSUMERS') WITH ALL_INDEXES
    DBCC SHOWCONTIG scanning 'CONSUMERS' table...
    Table: 'CONSUMERS' (645577338); index ID: 1, database ID: 5
    TABLE level scan performed.
    - Pages Scanned................................: 70401
    - Extents Scanned..............................: 8827
    - Extent Switches..............................: 8843
    - Avg. Pages per Extent........................: 8.0
    - Scan Density [Best Count:Actual Count].......: 99.51% [8801:8844]
    - Logical Scan Fragmentation ..................: 0.15%
    - Extent Scan Fragmentation ...................: 23.76%
    - Avg. Bytes Free per Page.....................: 47.2
    - Avg. Page Density (full).....................: 99.42%
    DBCC SHOWCONTIG scanning 'CONSUMERS' table...
    Table: 'CONSUMERS' (645577338); index ID: 2, database ID: 5
    LEAF level scan performed.
    - Pages Scanned................................: 27077
    - Extents Scanned..............................: 3402
    - Extent Switches..............................: 3402
    - Avg. Pages per Extent........................: 8.0
    - Scan Density [Best Count:Actual Count].......: 99.47% [3385:3403]
    - Logical Scan Fragmentation ..................: 0.34%
    - Extent Scan Fragmentation ...................: 11.88%
    - Avg. Bytes Free per Page.....................: 24.1
    - Avg. Page Density (full).....................: 99.70%
    DBCC SHOWCONTIG scanning 'CONSUMERS' table...
    Table: 'CONSUMERS' (645577338); index ID: 3, database ID: 5
    LEAF level scan performed.
    - Pages Scanned................................: 54330
    - Extents Scanned..............................: 6803
    - Extent Switches..............................: 6805
    - Avg. Pages per Extent........................: 8.0
    - Scan Density [Best Count:Actual Count].......: 99.79% [6792:6806]
    - Logical Scan Fragmentation ..................: 0.16%
    - Extent Scan Fragmentation ...................: 7.03%
    - Avg. Bytes Free per Page.....................: 50.3
    - Avg. Page Density (full).....................: 99.38%
    DBCC SHOWCONTIG scanning 'CONSUMERS' table...
    Table: 'CONSUMERS' (645577338); index ID: 4, database ID: 5
    LEAF level scan performed.
    - Pages Scanned................................: 54350
    - Extents Scanned..............................: 6808
    - Extent Switches..............................: 6837
    - Avg. Pages per Extent........................: 8.0
    - Scan Density [Best Count:Actual Count].......: 99.36% [6794:6838]
    - Logical Scan Fragmentation ..................: 0.22%
    - Extent Scan Fragmentation ...................: 7.17%
    - Avg. Bytes Free per Page.....................: 53.2
    - Avg. Page Density (full).....................: 99.34%
    DBCC SHOWCONTIG scanning 'CONSUMERS' table...
    Table: 'CONSUMERS' (645577338); index ID: 5, database ID: 5
    LEAF level scan performed.
    - Pages Scanned................................: 54354
    - Extents Scanned..............................: 6804
    - Extent Switches..............................: 6846
    - Avg. Pages per Extent........................: 8.0
    - Scan Density [Best Count:Actual Count].......: 99.24% [6795:6847]
    - Logical Scan Fragmentation ..................: 0.22%
    - Extent Scan Fragmentation ...................: 7.13%
    - Avg. Bytes Free per Page.....................: 53.8
    - Avg. Page Density (full).....................: 99.33%
    DBCC execution completed. If DBCC printed error messages, contact your system administrator.
    This fragmentation I found shocking because I reorg nightly and have a weekly rebuild task running that I set up through the Maintenance Plan wizard (which I've verified has been running). So I attempted to reorg these manually (especially index ID: 1) and to my shock the fragmentation % did not change at all. I then took the SQL provided by the Maintenance Plan for rebuilding the indexes and found that running it didn't change the fragmentation % at all either (commands run shown below).
    ALTER INDEX [IDX1_CONSUMERS] ON [dbo].[CONSUMERS] REBUILD PARTITION = ALL
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, ONLINE = ON, SORT_IN_TEMPDB = ON)
    GO
    ALTER INDEX [IDX2_CONSUMERS] ON [dbo].[CONSUMERS] REBUILD PARTITION = ALL
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, ONLINE = ON, SORT_IN_TEMPDB = ON)
    GO
    ALTER INDEX [UNI1K_CONSUMERS] ON [dbo].[CONSUMERS] REBUILD PARTITION = ALL
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, IGNORE_DUP_KEY = OFF, ONLINE = ON, SORT_IN_TEMPDB = ON)
    GO
    ALTER INDEX [UNI2K_CONSUMERS] ON [dbo].[CONSUMERS] REBUILD PARTITION = ALL
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, IGNORE_DUP_KEY = OFF, ONLINE = ON, SORT_IN_TEMPDB = ON)
    GO
    Fragmentation did not change until I performed a "CREATE ... WITH DROP_EXISTING = ON" on the 4 non-PK indexes and a manual offline rebuild of the primary key without specifying any other parameters, which all seemed completely overkill just to ensure the defragmentation actually got done. A sketch of the DROP_EXISTING pattern appears below, followed by the final sys.dm_db_index_physical_stats and DBCC SHOWCONTIG results.
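    (Editorial aside: the thread doesn't show the exact statements used, so the following is a minimal sketch of the CREATE ... WITH DROP_EXISTING pattern described; the key column is hypothetical, since the index definitions aren't shown.)
    -- Recreate a nonclustered index in place; DROP_EXISTING = ON drops and
    -- rebuilds the existing index in a single operation.
    CREATE NONCLUSTERED INDEX [IDX1_CONSUMERS]
        ON [dbo].[CONSUMERS] (SomeColumn)   -- hypothetical key column
        WITH (DROP_EXISTING = ON, ONLINE = ON, SORT_IN_TEMPDB = ON);
    GO
    -- Offline rebuild of the primary key's index with no extra options:
    ALTER INDEX [PK__CONSUMER__7F6B0B8B286302EC] ON [dbo].[CONSUMERS] REBUILD;
    GO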
    SELECT
        SUBSTRING(OBJECT_NAME(i.object_id), 1, 30) AS TableName,
        SUBSTRING(i.name, 1, 40) AS TableIndexName,
        i.index_id,
        phystat.index_level,
        phystat.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'DETAILED') AS phystat
    INNER JOIN sys.indexes AS i
        ON i.object_id = phystat.object_id
        AND i.index_id = phystat.index_id
    WHERE OBJECT_NAME(i.object_id) = 'CONSUMERS'
    TableName   TableIndexName                   index_id  index_level  avg_fragmentation_in_percent
    CONSUMERS   UNI2K_CONSUMERS                  1         0            0.0213458562356583
    CONSUMERS   UNI2K_CONSUMERS                  1         1            11.2426035502959
    CONSUMERS   UNI2K_CONSUMERS                  1         2            0
    CONSUMERS   PK__CONSUMER__7F6B0B8B286302EC   2         0            0.0460971112476951
    CONSUMERS   PK__CONSUMER__7F6B0B8B286302EC   2         1            14.2857142857143
    CONSUMERS   PK__CONSUMER__7F6B0B8B286302EC   2         2            0
    CONSUMERS   UNI1K_CONSUMERS                  3         0            0.0225314031431307
    CONSUMERS   UNI1K_CONSUMERS                  3         1            10.6194690265487
    CONSUMERS   UNI1K_CONSUMERS                  3         2            0
    CONSUMERS   IDX1_CONSUMERS                   4         0            0.0225318262045139
    CONSUMERS   IDX1_CONSUMERS                   4         1            10.7296137339056
    CONSUMERS   IDX1_CONSUMERS                   4         2            0
    CONSUMERS   IDX1_CONSUMERS                   4         3            0
    CONSUMERS   IDX2_CONSUMERS                   5         0            0.0225314031431307
    CONSUMERS   IDX2_CONSUMERS                   5         1            12.0171673819742
    CONSUMERS   IDX2_CONSUMERS                   5         2            0
    CONSUMERS   IDX2_CONSUMERS                   5         3            0
    (17 row(s) affected)
    DBCC SHOWCONTIG('CONSUMERS') WITH ALL_INDEXES
    DBCC SHOWCONTIG scanning 'CONSUMERS' table...
    Table: 'CONSUMERS' (645577338); index ID: 1, database ID: 5
    TABLE level scan performed.
    - Pages Scanned................................: 56217
    - Extents Scanned..............................: 7029
    - Extent Switches..............................: 7028
    - Avg. Pages per Extent........................: 8.0
    - Scan Density [Best Count:Actual Count].......: 99.99% [7028:7029]
    - Logical Scan Fragmentation ..................: 0.02%
    - Extent Scan Fragmentation ...................: 0.44%
    - Avg. Bytes Free per Page.....................: 32.4
    - Avg. Page Density (full).....................: 99.60%
    DBCC SHOWCONTIG scanning 'CONSUMERS' table...
    Table: 'CONSUMERS' (645577338); index ID: 2, database ID: 5
    LEAF level scan performed.
    - Pages Scanned................................: 26032
    - Extents Scanned..............................: 3256
    - Extent Switches..............................: 3255
    - Avg. Pages per Extent........................: 8.0
    - Scan Density [Best Count:Actual Count].......: 99.94% [3254:3256]
    - Logical Scan Fragmentation ..................: 0.05%
    - Extent Scan Fragmentation ...................: 0.31%
    - Avg. Bytes Free per Page.....................: 11.1
    - Avg. Page Density (full).....................: 99.86%
    DBCC SHOWCONTIG scanning 'CONSUMERS' table...
    Table: 'CONSUMERS' (645577338); index ID: 3, database ID: 5
    LEAF level scan performed.
    - Pages Scanned................................: 53259
    - Extents Scanned..............................: 6659
    - Extent Switches..............................: 6658
    - Avg. Pages per Extent........................: 8.0
    - Scan Density [Best Count:Actual Count].......: 99.98% [6658:6659]
    - Logical Scan Fragmentation ..................: 0.02%
    - Extent Scan Fragmentation ...................: 0.35%
    - Avg. Bytes Free per Page.....................: 40.5
    - Avg. Page Density (full).....................: 99.50%
    DBCC SHOWCONTIG scanning 'CONSUMERS' table...
    Table: 'CONSUMERS' (645577338); index ID: 4, database ID: 5
    LEAF level scan performed.
    - Pages Scanned................................: 53258
    - Extents Scanned..............................: 6659
    - Extent Switches..............................: 6658
    - Avg. Pages per Extent........................: 8.0
    - Scan Density [Best Count:Actual Count].......: 99.98% [6658:6659]
    - Logical Scan Fragmentation ..................: 0.02%
    - Extent Scan Fragmentation ...................: 0.53%
    - Avg. Bytes Free per Page.....................: 40.3
    - Avg. Page Density (full).....................: 99.50%
    DBCC SHOWCONTIG scanning 'CONSUMERS' table...
    Table: 'CONSUMERS' (645577338); index ID: 5, database ID: 5
    LEAF level scan performed.
    - Pages Scanned................................: 53259
    - Extents Scanned..............................: 6659
    - Extent Switches..............................: 6658
    - Avg. Pages per Extent........................: 8.0
    - Scan Density [Best Count:Actual Count].......: 99.98% [6658:6659]
    - Logical Scan Fragmentation ..................: 0.02%
    - Extent Scan Fragmentation ...................: 0.59%
    - Avg. Bytes Free per Page.....................: 40.5
    - Avg. Page Density (full).....................: 99.50%
    DBCC execution completed. If DBCC printed error messages, contact your system administrator.
    For the record, here's the version I'm running:
    select @@VERSION
    Microsoft SQL Server 2008 R2 (SP1) - 10.50.2500.0 (X64) 
     Jun 17 2011 00:54:03 
     Copyright (c) Microsoft Corporation
     Enterprise Edition (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1)
    In summary, my question is: why didn't ALTER INDEX ... REBUILD/REORGANIZE change the index_level 1 fragmentation reported by sys.dm_db_index_physical_stats, and why didn't it correct the Extent Scan Fragmentation reported by DBCC SHOWCONTIG?

    Hi Brian.cs,
    SQL Server will not rebuild indexes that are not large enough. Could you please have a look at the fragment_count column returned by sys.dm_db_index_physical_stats to check whether it is low?
    Best Regards,
    Peja
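    (Editorial aside: a quick way to act on Peja's point is to bring page_count and fragment_count into the diagnostic query and ignore levels that are too small for fragmentation to matter. The 1,000-page cutoff below is the commonly cited rule of thumb from the reorganize/rebuild guidance, not something stated in this thread.)
    -- Report only index levels with enough pages for fragmentation to be meaningful.
    SELECT OBJECT_NAME(phystat.object_id) AS TableName,
           i.name AS TableIndexName,
           phystat.index_level,
           phystat.page_count,
           phystat.fragment_count,
           phystat.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.CONSUMERS'), NULL, NULL, 'DETAILED') AS phystat
    INNER JOIN sys.indexes AS i
        ON i.object_id = phystat.object_id
        AND i.index_id = phystat.index_id
    WHERE phystat.page_count >= 1000;   -- levels smaller than this are safe to ignore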
    Peja, here's the information you requested, and of course this was after I dropped/recreated, because rebuild didn't seem to actually address the fragmentation - the index I was most concerned about at index_level = 0 showed a fragment_count of 8922. FYI, this table has over 8 million rows. And apologies for the delayed response; I was on vacation over the new year.
    SELECT
        SUBSTRING(OBJECT_NAME(i.object_id), 1, 30) AS TableName,
        SUBSTRING(i.name, 1, 40) AS TableIndexName,
        i.index_id,
        phystat.index_level,
        phystat.avg_fragmentation_in_percent,
        phystat.fragment_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'DETAILED') AS phystat
    INNER JOIN sys.indexes AS i
        ON i.object_id = phystat.object_id
        AND i.index_id = phystat.index_id
    WHERE OBJECT_NAME(i.object_id) = 'CONSUMERS'
    TableName   TableIndexName                   index_id  index_level  avg_fragmentation_in_percent  fragment_count
    CONSUMERS   UNI2K_CONSUMERS                  1         0            0.259780818806428             8922
    CONSUMERS   UNI2K_CONSUMERS                  1         1            39.4190871369295              86
    CONSUMERS   UNI2K_CONSUMERS                  1         2            0                             1
    CONSUMERS   PK__CONSUMER__7F6B0B8B286302EC   2         0            0.240887634434766             5182
    CONSUMERS   PK__CONSUMER__7F6B0B8B286302EC   2         1            80.3738317757009              84
    CONSUMERS   PK__CONSUMER__7F6B0B8B286302EC   2         2            0                             1
    CONSUMERS   UNI1K_CONSUMERS                  3         0            0.0661472879611936            8532
    CONSUMERS   UNI1K_CONSUMERS                  3         1            53.4883720930233              86
    CONSUMERS   UNI1K_CONSUMERS                  3         2            0                             1
    CONSUMERS   IDX1_CONSUMERS                   4         0            0.192426334498663             8598
    CONSUMERS   IDX1_CONSUMERS                   4         1            31.5315315315315              95
    CONSUMERS   IDX1_CONSUMERS                   4         2            85.7142857142857              7
    CONSUMERS   IDX1_CONSUMERS                   4         3            0                             1
    CONSUMERS   IDX2_CONSUMERS                   5         0            0.189494094835184             8613
    CONSUMERS   IDX2_CONSUMERS                   5         1            31.8840579710145              97
    CONSUMERS   IDX2_CONSUMERS                   5         2            85.7142857142857              7
    CONSUMERS   IDX2_CONSUMERS                   5         3            0                             1
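    (Editorial aside: those fragment_count values largely answer the question. Level 1 of UNI2K_CONSUMERS contains only 86 fragments - on the order of 86 pages in the whole level - so its 39.4% figure amounts to roughly 0.394 × 86 ≈ 34 out-of-sequence pages. REORGANIZE compacts only the leaf level, REBUILD makes no contiguity promise for the intermediate levels, and with so few pages per level the percentages there are effectively noise; the leaf level (index_level = 0), which is what scans actually read, did come back defragmented. The same reasoning applies to the 100% readings seen earlier on levels with only a handful of pages.)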

  • Fragmentation effect

    10g R2
    Table has about 0.5 billion records at this time.
    There is a process deleting records (5-6 million per day).
    At the beginning of the process there is a select getting the min date (indexed column) to start from.
    I see this select taking more time - plus about 1 min each day (surprise).
    I tried shrink but canceled execution after a couple of hours. Based on previous discussions it looks like it never completes on large tables.
    What do other people really do in such cases besides a full table reorganization?
    Thanks.

    Bolev wrote:
    10g R2
    Table has about 0.5 billion records at this time.
    There is a process deleting records (5-6 million per day).
    At the beginning of the process there is a select getting the min date (indexed column) to start from.
    I see this select taking more time - plus about 1 min each day (surprise).
    This looks as if your query for min(adate) is having to walk through an increasing number of empty blocks each day. I'm going to guess that you then delete data for a given date range above that date - leaving even more empty blocks at the low end of the index. If this is the case, a "coalesce" might be more appropriate than a shrink. This may take some time on the first run, but it operates as a large number of small transactions, so it won't have to unwind itself if the process crashes.
    Since you are inserting 0.5 M rows per day and deleting 5-6 M rows per day for a net loss of 4.5 - 5.5 M rows per day, at some stage it will probably make sense to rebuild most of the indexes, and at some stage it will probably make sense to "move" the table (and rebuild the indexes again).
    Regards
    Jonathan Lewis
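    (Editorial aside: for concreteness, the operations Jonathan mentions look like the sketch below; the table and index names are placeholders, since the thread doesn't name them.)
    -- Merge adjacent sparse index blocks without a full rebuild; runs as many
    -- small transactions, so it can be stopped and restarted safely.
    ALTER INDEX my_date_idx COALESCE;
    -- Later, when wholesale reorganisation is justified:
    ALTER TABLE my_table MOVE;          -- rebuilds the table segment
    ALTER INDEX my_date_idx REBUILD;    -- MOVE leaves indexes unusable, so rebuild them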

  • After Effects CS5.5 Error Rendering known clean project even after Reinstall and after System Restore

    I've been getting frequent, but not consistent, errors when rendering AE CS5.5 projects with lots of 3D layers and lights. I have gone back to a known clean project to confirm the issue isn't with a particular project. The error is usually an error invoking the advanced 3D render engine. This is sometimes followed by a bad tracked memory ID popup. But sometimes the program just hangs.
    My System:
    i7 [email protected]
    24GB RAM
    Windows 7 professional 64 bit SP1
    NVidia Geforce GTX 570
    I have trashed the preferences, cleaned the cache, turned off rendering multiple frames, deleted Open GL, updated the graphics card drivers and even tried reverting to old graphics card drivers. Window's Memory Diagnostic Tool didn't find any problems. System File Checker Tool didn't find any corrupt files. My system drive is not fragmented.  I have reinstalled AE, reinstalled all of CS5.5, and did a system restore to a known working point (both with and without doing the Windows updates that have occurred since the restore point). None of this fixed the problem.
    I have tried the script BG Renderer to render in the background. I have consistently gotten the following error running the script, which I'm hoping will be a clue:
    "aerender ERROR An existing connection was forcible closed by the remote host.
    :Unable to receive at line 314
    aerender ERROR: After Effects can not render for aerender. Another instance of aerender, or another script, may be running; or, AE may be waiting for a response from a modal dialog, or for a render to complete. Try running aerender without the - reuse flag to invoke a separate instance of After Effects."
    The first two times I tried to render with BG Renderer, the error came right after closing AE. But I still got the same error if I left AE open and just sat and watched.
    Any help is appreciated.

    Update: I believe I have discovered the problem! It was a bad RAM stick. Windows Memory Diagnostic tool found nothing wrong, and nothing in my system seemed to be affected other than the ability to render complex AE projects with 3D layers. After a complete system restore to a known working point failed to help, I opened up the machine. Finding nothing loose, I methodically tested all the memory sticks to find the culprit. Everything is running smoothly now, albeit on less RAM.

  • Configuring QoS for FIOS Router MI-424WR: Traffic Priority and Shaping

    Please only read on if you are an experienced internet user familiar with setting the advanced QoS and firewall settings for the MI-424WR and make use of wireless adaptors from a PC to provide connectivity.
    This is my first post and my first week since I moved from Time Warner Cable over to FIOS for internet (plus HDTV and phone). While all my services work, the router as delivered and set up is not optimal for internet quality of service. Instead it was probably optimized out of the box for HDTV and telephone to satisfy most customers and reduce support overhead. The average FIOS consumer is multimedia sensitive, but that is not so in my genre of internet consumer. Herein lies the core of my reason for seeking help from like-minded and experienced users in this community.
    One of the main driving forces in my switching to FIOS was to improve my multiplayer gaming experience, where ultra-low ping latency and high upload data rates dramatically affect the quality of connection and thus gameplay. The cable internet service from Time Warner was providing solid 2MB/1MB down/up data rates with none of the issues I'm having now with FIOS. Again, the reason for the switch was both financial and in hope of gaining better data rates and quality of service. Now with FIOS I'm getting about 24/15 down/up data rates on the Extreme FIOS 25/25 plan when measured from my house to a Los Angeles server (50 miles away) via Speedtest.net or DslReports.com/tests. Latency-wise, the ping has gone down from 150 to 50ms when measured to my friends on the East coast who I connect to online. The data rate and latency have greatly improved in going from cable to FIOS. So far, so good.
    Where the problem shows up now is that I get an internet "hiccup" every 5-10 minutes that lasts about 1/2 to 2 seconds. For the average internet user who just streams multimedia or cruises the net, this is probably undetectable or goes unnoticed. I never had this problem over the same PCs connected wirelessly to my DLINK DGL-4500 gaming router when my ISP was Time Warner's cable service. Now, using FIOS and the MI-424WR router with everything else being the same, I'm experiencing this degradation in quality of service. Even putting the PC's IP into the DMZ doesn't make any difference, so it is not related to port forwarding. The issue is squarely in the lap of FIOS and this router as delivered and configured. This is where the "game" is afoot, and where I need expertise in an area I'm new to.
    I am not new to being hands-on with internet troubleshooting, as I have been setting up my own home network for decades (I work from home over VPN); I would like to leverage the skills of those who are experts in the area that I think can address this issue: QoS and the other device class mechanisms of this router. It's my guess that this periodic hiccup can be minimized and even eliminated using these advanced features of this all-in-one TV/internet/telephone router.
    With that context laid down, this hiccup doesn't show up if:
    a.  I connect two PCs to the same ethernet hub of the MI-424WR (traffic just over the LAN and not the WAN)
    b.  When I was on cable with my own gaming router wirelessly DHCP connected to my PC and using port forwarding or the DMZ.
    The hiccup does exist when:
    a.  Going from the internet through the MI-424WR to the wireless DHCP connected PC with port forwarding
    b.  Even putting the wireless DHCP connected PC into the MI-424WR's DMZ has no effect
    I did read the manual and tried some QoS priority and shaping and managed to reduce how often the hiccup occurred, but I was just making guesses at the settings. I put the IPs of the PCs I use for my gaming applications (which are very ping and jitter sensitive) into the QoS priority (value 7) and shaping GUI. I'm hoping someone with experience can tell me exactly how to use it and what settings to input. I'm not clear on the device and connection types offered in the QoS menus.
    Another thing: I couldn't find settings for turning ICMP echo on/off. But I assume it is on, because my WAN IP can be pinged by folks on the net.
    Here is the manual for the Verizon-provided MI-424WR router (current firmware version: 20.10.7):
    download link
    Here are the QoS traffic priority and shaping values I've been experimenting with:
    Click to view QoS Traffic Priority
    Click to view QoS Traffic Shaping
    And why does it matter to have a solid and stable internet connection for internet gaming? The hiccup causes slewing or jitter, which equates to positional errors in the 3D world and ruins the smooth gameplay needed for high-end gaming.
    Here's a snapshot of me flying on the wing of another flight simmer who is on the East coast, with me on the West coast.
    Click to view
    Thank you in advance.
    Thomas "AV8R"
    MSEE

    TMAS wrote:
    the router as delivered and set up is not optimal for internet quality of service. Instead it was probably optimized out of the box for HDTV and telephone to satisfy most customers and reduce support overhead.
    That's not accurate.  VZ telephone service does not go through the Actiontec.  Also, there are no default settings for QOS in the Actiontec since QOS is rarely needed with FIOS upload speeds.
    TMAS wrote:
    I get an internet "hiccup" every 5-10 minutes that lasts about 1/2 to 2 seconds.
       You should not be experiencing periodic "hiccups".  Something is clearly amiss.
    TMAS wrote:
    With that context laid down, this hiccup doesn't show up if:
    a.  I connect two PCs to the same ethernet hub of the MI-424WR (traffic just over the LAN and not the WAN)
    The hiccup does exist when:
    a.  Going from the internet through the MI-424WR to the wireless DHCP connected PC with port forwarding
    b.  Even putting the wireless DHCP connected PC into the MI-424WR's DMZ has no effect
    Let's see. The issue shows up on a wireless connection, but not a wired connection. Why do you think this is a QoS issue and not a wireless issue? Have you tried changing the wireless channel? It's very possible you have neighbors on the same channel. Is the DGL-4500 wireless still on? Could that be interfering?
    TMAS wrote:
    Another thing: I couldn't find settings for turning ICMP echo on/off.
    The settting to enable/disable ICMP echo is on the Firewall/Remote Administration page.
    TMAS wrote:
    Here are the QoS traffic priority and shaping values I've been experimenting with:
    Click to view QoS Traffic Priority
    Click to view QoS Traffic Shaping
    The traffic priority settings you linked are applied only to your wireless connections. QoS between the router and your wireless PC will only serve to prioritize traffic between the router and that PC and has no effect on your internet traffic. Assuming you are not running browsers, VoIP and other traffic from that PC while you're gaming, it will not accomplish anything; i.e. you're giving your only traffic the highest priority, but that traffic is not competing with anything (except other nearby wireless connections on the same channel).
    On the traffic shaping screenshot, you have broadband ethernet checked, but according to your other thread, your WAN connection is Broadband Coax, not Broadband ethernet.

  • Can too large a folder cause issues and affect performance of my Mac Pro

    Hi, I have a 180 GB folder filled with important data within my Home folder. This folder has many subfolders as well. The folder is on my startup drive, where I have Snow Leopard installed. Can too large a folder cause issues with my Mac and affect performance? Thanks

    Another way to ask: would you make better use of, and improve, I/O and performance if you used your other drive bays? Yes.
    A boot drive with less than 50% free space is probably not a good idea. It all depends on whether that 200GB is on a 1TB or a 500GB drive - and on how fragmented the free space is.
    Lifting, loading and writing or copying 4GB files of course does have an impact, so if you work with 2GB files in CS5....
    Having a dedicated boot drive, a media drive (isolating media and library files), and a scratch drive is the normal approach with a Mac Pro.
    The biggest bang in performance: lean mean SSD boot drive.

  • Router dead when I applied QoS on virtual-template interface for VPN

    hi all,
    I have a simple, brief topology below:
    PSTN======(R1-7206)>F1=======F2>(R2-7604 catalyst)>>>F1=========Internet
    I have two routers:
    R2 ========> MLS 7604
    R1 ========> Cisco 7206
    On R2, I'm doing QoS marking by DSCP; I'm matching ACLs of internet IPs and setting DSCP values.
    Here is the config for the marking:
    Gateway7600#sh policy-map LLQX
      Policy Map LLQX
        Class YOUTUBE
          set ip dscp af43
        Class FACEBOOKVIDEOS
          set ip dscp af33
        Class HTTP
          set dscp af23
        Class DNSQOS
          set dscp af13
        Class class-default
          set ip dscp af11
    ================
    Gateway7600#sh class-map
    Class Map match-all FACEBOOKVIDEOS (id 7)
       Match access-group name  facebookvideos
    Class Map match-all DNSQOS (id 8)
       Match access-group name  dnsqos
    Class Map match-all HTTP (id 6)
       Match access-group name  browsing
    Class Map match-any class-default (id 0)
       Match any 
    Class Map match-all YOUTUBE (id 5)
       Match access-group name  youtube
    Gateway7600#
    =========================================================
    On this router I applied this policy map on interface F1 in the inbound direction, and the matching works well:
    Gateway7600#sh policy-map  interface gigabitEthernet 1/5 in    
    GigabitEthernet1/5
      Service-policy input: LLQX
        class-map: rate-limit (match-all)
          Match: access-group name rate-limit
          police :
            4088000 bps 384000 limit 384000 extended limit
          Earl in slot 1 :
            139044930 bytes
            30 second offered rate 143032 bps
            aggregate-forwarded 134420937 bytes action: transmit
            exceeded 4623993 bytes action: drop
            aggregate-forward 22544 bps exceed 0 bps
        class-map: YOUTUBE (match-all)
          Match: access-group name youtube
          set dscp 38:
          Earl in slot 1 :
            132693939697 bytes
            30 second offered rate 212144928 bps
            aggregate-forwarded 132693939697 bytes
        class-map: FACEBOOKVIDEOS (match-all)
          Match: access-group name facebookvideos
          set dscp 30:
          Earl in slot 1 :
            10726758352 bytes
            30 second offered rate 20682720 bps
            aggregate-forwarded 10726758352 bytes
        class-map: HTTP (match-all)
          Match: access-group name browsing
          set dscp 22:
          Earl in slot 1 :
            56874058537 bytes
            30 second offered rate 92669832 bps
            aggregate-forwarded 56874058537 bytes
        class-map: DNSQOS (match-all)
          Match: access-group name dnsqos
          set dscp 14:
          Earl in slot 1 :
            160308954 bytes
            30 second offered rate 303552 bps
            aggregate-forwarded 160308954 bytes
        class-map: class-default (match-any)
          Match: any
          set dscp 10:
          Earl in slot 1 :
            67394864030 bytes
            30 second offered rate 126884864 bps
            aggregate-forwarded 67394864030 bytes
    =================================================================================
    Now the problem is below.
    R1 (the 7200) is an LNS router connected with a LAC router for ADSL customers.
    Here is the config of the policy map on the 7200 router:
    R11#sh policy-map
      Policy Map MATCH_MARKS
        Class MATCH_YOUTUBE
          bandwidth 220000 (kbps)
        Class MATCH_FACEBOOKVIDEOS
          bandwidth 20000 (kbps)
        Class MATCH_HTTP
          bandwidth 100000 (kbps)
    =========================================================
    R1#sh class-map
    Class Map match-all MATCH_FACEBOOKVIDEOS (id 2)
       Match ip  dscp af33 (30)
    Class Map match-all MATCH_HTTP (id 3)
       Match ip  dscp af23 (22)
    Class Map match-any class-default (id 0)
       Match any
    Class Map match-all MATCH_YOUTUBE (id 1)
       Match ip  dscp af43 (38)
    ==========================================================
    Here is the virtual-template interface before I apply the QoS:
    R1#sh running-config interface virtual-template 1
    Building configuration...
    Current configuration : 352 bytes
    interface Virtual-Template1
    bandwidth 1000000
    ip unnumbered Loopback0
    ip tcp adjust-mss 1412
    ip policy route-map private
    no logging event link-status
    qos pre-classify
    peer default ip address pool bitsead1 bitsead2
    ppp mtu adaptive
    ppp authentication pap vpdn
    ppp authorization vpdn
    ppp accounting vpdn
    max-reserved-bandwidth 90
    end
    =========================================
    When I apply the command (service-policy output MATCH_MARKS) under the virtual-template interface, I get these console logs:
    Insufficient bandwidth 149760 kbps for the bandwidth guarantee (220000)
    Insufficient bandwidth 149760 kbps for the bandwidth guarantee (220000)
    Insufficient bandwidth 149760 kbps for the bandwidth guarantee (220000)
    I also get:
    *Jul  9 22:28:38.242: Interface Virtual-Access2551 max_reserved_bandwidth config will not
    take effect on the queueing features configured via service-policy
    *Jul  9 22:28:38.250: Interface Virtual-Access627 max_reserved_bandwidth config will not
    take effect on the queueing features configured via service-policy
    *Jul  9 22:28:38.258: Interface Virtual-Access786 max_reserved_bandwidth config will not
    take effect on the queueing features configured via service-policy
    *Jul  9 22:28:38.266: Interface Virtual-Access623 max_reserved_bandwidth config will not
    take effect on the queueing features configured via service-policy
    *Jul  9 22:28:38.274: Interface Virtual-Access2559 max_reserved_bandwidth config will not
    take effect on the queueing features configured via service-policy
    *Jul  9 22:28:38.282: Interface Virtual-Access2281 max_reserved_bandwidth config will not
    take effect on the queueing features configured via service-policy
    *Jul  9 22:28:38.290: Interface Virtual-Access142 max_reserved_bandwidth config will not
    take effect on the queueing features configured via service-policy
    *Jul  9 22:28:40.262: %SYS-2-INTSCHED: 'suspend' at level 3 -Process= "VTEMPLATE Background Mgr", ipl= 3, pid= 278,  -Traceback= 0x756FF0z 0x3439C58z 0x2778D70z 0x2CACCD0z 0x2CC63E0z 0x2CC7FF8z 0x2CADC74z 0x2CBE058z 0x2CA0340z 0x2CA04F8z 0x2E0BB18z 0x2D23378z 0x2D1825Cz 0x2D18738z 0x2E66FE0z 0x2D971ACz
    *Jul  9 22:28:40.262: %SYS-2-INTSCHED: 'suspend' at level 3 -Process= "VTEMPLATE Background Mgr", ipl= 3, pid= 278,  -Traceback= 0x756FF0z 0x3439C58z 0x2778D70z 0x2CACD28z 0x2CC63E0z 0x2CC7FF8z 0x2CADC74z 0x2CBE058z 0x2CA0340z 0x2CA04F8z 0x2E0BB18z 0x2D23378z 0x2D1825Cz 0x2D18738z 0x2E66FE0z 0x2D971ACz
    After I apply it, the CPU goes to 100% and the router goes down!
    Now, what is the problem?
    Here is the IOS version for the 7200 router:
    R1#sh version
    Cisco IOS Software, 7200 Software (C7200P-ADVENTERPRISEK9-M), Version 12.4(24)T7, RELEASE SOFTWARE (fc2)
    Technical Support: http://www.cisco.com/techsupport
    Copyright (c) 1986-2012 by Cisco Systems, Inc.
    Compiled Tue 28-Feb-12 12:53 by prod_rel_team
    ROM: System Bootstrap, Version 12.4(12.2r)T, RELEASE SOFTWARE (fc1)
    Bras1 uptime is 13 weeks, 1 day, 9 hours, 24 minutes
    System returned to ROM by reload at 16:24:51 GMT+3 Tue Jun 17 2003
    System image file is "disk2:c7200p-adventerprisek9-mz.124-24.T7.bin"
    Last reload reason: Reload Command
    This product contains cryptographic features and is subject to United
    States and local country laws governing import, export, transfer and
    use. Delivery of Cisco cryptographic products does not imply
    third-party authority to import, export, distribute or use encryption.
    Importers, exporters, distributors and users are responsible for
    compliance with U.S. and local country laws. By using this product you
    agree to comply with applicable laws and regulations. If you are unable
    to comply with U.S. and local laws, return this product immediately.
    A summary of U.S. laws governing Cisco cryptographic products may be found at:
    http://www.cisco.com/wwl/export/crypto/tool/stqrg.html
    If you require further assistance please contact us by sending email to
    [email protected].
    Cisco 7206VXR (NPE-G2) processor (revision A) with 917504K/65536K bytes of memory.
    Processor board ID 36858624
    MPC7448 CPU at 1666Mhz, Implementation 0, Rev 2.2
    6 slot VXR midplane, Version 2.11
    Last reset from power-on
    PCI bus mb1 (Slots 1, 3 and 5) has a capacity of 600 bandwidth points.
    Current configuration on bus mb1 has a total of 0 bandwidth points.
    This configuration is within the PCI bus capacity and is supported.
    PCI bus mb2 (Slots 2, 4 and 6) has a capacity of 600 bandwidth points.
    Current configuration on bus mb2 has a total of 0 bandwidth points.
    This configuration is within the PCI bus capacity and is supported.
    Please refer to the following document "Cisco 7200 Series Port Adaptor
    Hardware Configuration Guidelines" on Cisco.com <http://www.cisco.com>
    for c7200 bandwidth points oversubscription and usage guidelines.
    1 FastEthernet interface
    3 Gigabit Ethernet interfaces
    2045K bytes of NVRAM.
    250880K bytes of ATA PCMCIA card at slot 2 (Sector size 512 bytes).
    65536K bytes of Flash internal SIMM (Sector size 512K).
    Configuration register is 0x2102
    ==============================================================================
    Hoping for help ASAP.
    regards

    hi,
    I hit the same issue again: I tried a TEST policy map with a 30 percent guarantee, but got the same result!
    The router went down again!
    Here are the logs:
    take effect on the queueing features configured via service-policy
    *Jul 11 02:40:33.605: Interface Virtual-Access1896 max_reserved_bandwidth config will not
    take effect on the queueing features configured via service-policy
    *Jul 11 02:40:33.797: Interface Virtual-Access1317 max_reserved_bandwidth config will not
    take effect on the queueing features configured via service-policy
    *Jul 11 02:40:33.809: Interface Virtual-Access993 max_reserved_bandwidth config will not
    take effect on the queueing features configured via service-policy
    *Jul 11 02:40:33.817: Interface Virtual-Access1699 max_reserved_bandwidth config will not
    take effect on the queueing features configured via service-policy
    *Jul 11 02:40:33.981: Interface Virtual-Access254 max_reserved_bandwidth config will not
    take effect on the queueing features configured via service-policy
    *Jul 11 02:40:33.993: Interface Virtual-Access687 max_reserved_bandwidth config will not
    take effect on the queueing features configured via service-policy
    *Jul 11 02:40:34.001: Interface Virtual-Access35 max_reserved_bandwidth config will not
    take effect on the queueing features configured via service-policy
    *Jul 11 02:40:34.009: Interface Virtual-Access160 max_reserved_bandwidth config will not
    take effect on the queueing features configured via service-policy
    *Jul 11 02:40:34.017: Interface Virtual-Access1337 max_reserved_bandwidth config will not
    take effect on the queueing features configured via service-policy
    *Jul 11 02:40:34.029: Interface Virtual-Access1670 max_reserved_bandwidth config will not
    take effect on the queueing features configured via service-policy
    *Jul 11 02:40:34.037: Interface Virtual-Access1948 max_reserved_bandwidth config will not
    take effect on the queueing features configured via service-policy
    *Jul 11 02:40:34.049: Interface Virtual-Access1669 max_reserved_bandwidth config will not
    take effect on the queueing features configured via service-policy
    *Jul 11 02:40:34.109: Interface Virtual-Access1334 max_reserved_bandwidth config will not
    take effect on the queueing features configured via service-policy
    *Jul 11 02:40:34.117: Interface Virtual-Access151 max_reserved_bandwidth config will not
    take effect on the queueing features configured via service-policy
    *Jul 11 02:40:34.125: Interface Virtual-Access761 max_reserved_bandwidth config will not
    take effect on the queueing features configured via service-policy
    *Jul 11 02:40:34.137: Interface Virtual-Access810 max_reserved_bandwidth config will not
    take effect on the queueing features configured via service-policy
    *Jul 11 02:40:34.197: Interface Virtual-Access1522 max_reserved_bandwidth config will not
    take effect on the queueing features configured via service-policy
    *Jul 11 02:40:34.237: Interface Virtual-Access1692 max_reserved_bandwidth config will not
    take effect on the queueing features configured via service-policy
    *Jul 11 02:40:34.257: Interface Virtual-Access368 max_reserved_bandwidth config will not
    take effect on the queueing features configured via service-policy
    *Jul 11 02:40:34.305: Interface Virtual-Access1758 max_reserved_bandwidth config will not
    take effect on the queueing features configured via service-policy
    *Jul 11 02:40:34.317: Interface Virtual-Access2061 max_reserved_bandwidth config will not
    take effect on the queueing features configured via service-policy
    *Jul 11 02:40:34.325: Interface Virtual-Access1203 max_reserved_bandwidth config will not
    take effect on the queueing features configured via service-policy
    *Jul 11 02:40:34.337: Interface Virtual-Access188 max_reserved_bandwidth config will not
    take effect on the queueing features configured via service-policy
    *Jul 11 02:40:34.345: Interface Virtual-Access1975 max_reserved_bandwidth config will not
    take effect on the queueing features configured via service-policy
    *Jul 11 02:40:34.357: Interface Virtual-Access1172 max_reserved_bandwidth config will not
    take effect on the queueing features configured via service-policy
    *Jul 11 02:40:34.509: Interface Virtual-Access1647 max_reserved_bandwidth config will not
    take effect on the queueing features configured via service-policy
    *Jul 11 02:40:34.517: Interface Virtual-Access458 max_reserved_bandwidth config will not
    take effect on the queueing features configured via service-policy
    *Jul 11 02:40:34.609: Interface Virtual-Access608 max_reserved_bandwidth config will not
    take effect on the queueing features configured via service-policy
    *Jul 11 02:40:34.621: Interface Virtual-Access2128 max_reserved_bandwidth config will not
    take effect on the queueing features configured via service-policy
    *Jul 11 02:40:34.633: Interface Virtual-Access1167 max_reserved_bandwidth config will not
    take effect on the queueing features configured via service-policy
    *Jul 11 02:40:34.641: Interface Virtual-Access487 max_reserved_bandwidth config will not
    take effect on the queueing features configured via service-policy
    *Jul 11 02:40:34.653: Interface Virtual-Access1793 max_reserved_bandwidth config will not
    take effect on the queueing features configured via service-policy
    *Jul 11 02:40:34.665: Interface Virtual-Access2280 max_reserved_bandwidth config will not
    take effect on the queueing features configured via service-policy
    *Jul 11 02:40:34.769: Interface Virtual-Access839 max_reserved_bandwidth config will not
    take effect on the queueing features configured via service-policy
    *Jul 11 02:40:34.781: Interface Virtual-Access2311 max_reserved_bandwidth config will not
    take effect on the queueing features configured via service-policy
    *Jul 11 02:40:34.793: Interface Virtual-Access1788 max_reserved_bandwidth config will not
    take effect on the queueing features configured via service-policy
    *Jul 11 02:40:34.857: Interface Virtual-Access8 max_reserved_bandwidth config will not
    take effect on the queueing features configured via service-policy
    *Jul 11 02:40:34.869: Interface Virtual-Access2243 max_reserved_bandwidth config will not
    take effect on the queueing features configured via service-policy
    *Jul 11 02:40:34.881: Interface Virtual-Access580 max_reserved_bandwidth config will not
    take effect on the queueing features configured via service-policy
    *Jul 11 02:40:35.057: Interface Virtual-Access6 max_reserved_bandwidth config will not
    take effect on the queueing features configured via service-policy
    *Jul 11 02:40:35.065: Interface Virtual-Access1331 max_reserved_bandwidth config will not
    take effect on the queueing features configured via service-policy
    *Jul 11 02:40:35.077: Interface Virtual-Access1235 max_reserved_bandwidth config will not
    take effect on the queueing features configured via service-policy
    *Jul 11 02:40:35.177: Interface Virtual-Access1748 max_reserved_bandwidth config will not
    take effect on the queueing features configured via service-policy
    *Jul 11 02:40:35.189: Interface Virtual-Access2262 max_reserved_bandwidth config will not
    take effect on the queueing features configured via service-policy
    *Jul 11 02:40:35.205: Interface Virtual-Access2136 max_reserved_bandwidth config will not
    take effect on the queueing features configured via service-policy
    I want to ask a question: could this be from the IOS?
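    (Editorial aside on the arithmetic in those "Insufficient bandwidth" messages, since it points at the cause: CBWFQ admission control requires the sum of a policy's explicit guarantees to fit within the reservable bandwidth of the interface the policy attaches to, and a virtual-template policy is cloned onto every virtual-access session, whose bandwidth is far smaller than the template's "bandwidth 1000000". Here the requested guarantees total
    220,000 + 20,000 + 100,000 = 340,000 kbps
    against only 149,760 kbps reservable on a virtual-access interface, so the policy is rejected session by session - and pushing that change to a couple of thousand sessions at once is also consistent with the CPU spike. Expressing the guarantees with "bandwidth percent" (or "bandwidth remaining percent") instead of absolute kbps would at least let them scale to each session's real bandwidth.)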
