RAID configuration -- the elusive best practice/best value framework

After much research on RAID, both on and off the Apple site, I am still looking for answers. I have searched the RAID threads for thoughts from some of the top users in the forum, so I apologize in advance; I am sure some of this seems old hat to you old pros.
I just bought a Mac Pro. I haven't even fired it up yet because I want to get the storage question (RAID 0, RAID 1, or 0+1) settled before I transfer my jpg and video files. I have pretty much decided against the expense of the hardware RAID card, since its performance seems somewhat less than rock solid (perhaps a myth). Anyway, with a 4-core Pro I assume (correct me if I am wrong) that the CPU hit from software RAID is reasonable.
Other Questions:
1) The single 640 GB hard drive as retailed: can I move "Users and their jpg/video files" to a drive separate from the boot volume, OS X, and Applications? Is it advisable?
2) If #1 is recommended, should I then mirror the boot drive, or simply back up the single drive with Time Machine?
3) Somewhere someone suggested better performance by pairing drives 1 and 3, and drives 2 and 4. I don't recall if that approach was RAID 1 for drives 1 and 3 (mirroring the boot) and striped (RAID 0) for drives 2 and 4. Your thoughts on this -- assuming it's even relevant given your recommendations for the boot drive in questions #1 and #2?
4) At this point we're down to how to use the remaining two or three drive bays, depending on the choice made for the boot drive. What is your recommendation for optimal value -- i.e., maximizing storage and data protection while keeping costs down? Note: I am willing to purchase external drive(s) for Time Machine backups.
5) Name your top two or three internal and external drive picks for this arrangement.
Thanks much for your help on this. Anything else you'd like to suggest, or any questions for clarification, are welcome.
cougar90

Well, if you're going to be using the system for dual purposes, right off the bat software RAID may not be for you (especially a 0+1). That's a lot of overhead to deal with: video editing plus users accessing files on the same computer.
If you have users needing to access files, I would keep the "video editing" system and "server" system separate.
Hatter's idea of a PC/Mac-compatible NAS sounds good; it's very easy and affordable to implement. I only wonder about the speed if you will be transferring large video files and have multiple users connected at once (although if Hatter recommended it, I'm sure it's fine). If you do go the NAS route, though, make sure you have a gigabit network running. If you have any old computer lying around (PC or Mac), you can also configure that very easily as a server; add hard drives or external enclosures for space. If it's a spare Mac and you have Tiger (10.4), the app SharePoints works very well.
The PVR can be done on the Mac Pro; keep it with the "video editing" system.
In regard to RAID card stability, I believe you were looking at the Apple RAID Card for the Mac Pro. Yes, there have been many problems with it regarding the battery. However, this pertains only to the Apple RAID card, NOT to hardware RAID in general.
There are other companies that put out very solid RAID cards. Check the earlier link to www.amug.org; they are a great resource for RAID info. On my setup, I use an ATTO card connected to a D800RAID from Sonnet (mine is the previous model).
http://www.sonnettech.com/product/fusiondx800raid.html
http://eshop.macsales.com/item/ATTO/ESASR380000/
Also, before even attempting raid, get a good grasp on it.
http://www.acnc.com/040100.html
http://en.wikipedia.org/wiki/RAID
Note that RAID 5 requires a hardware RAID card (OS X's software RAID only does striping, mirroring, and concatenation). If hardware RAID is too expensive, you can also go with eSATA enclosures -- this Sonnet enclosure with an eSATA card, for example:
http://www.sonnettech.com/product/fusiond500p.html
http://www.sonnettech.com/product/temposatae4p.html
More affordable, with great performance. Use it for RAID 0 scratch and temp files.

Similar Messages

  • Best practices or design framework for designing processes in OSB(11g)

    Hi all,
    We have been working with Oracle 10g; in the new project we are going to use SOA Suite 11g. For 10g we designed our services very much along the lines of the AIA framework, but in 11g, since OSB is introduced, we are not able to fit the AIA framework exactly, because OSB has a different structure than ESB.
    Can anybody suggest best practices or some design framework for designing processes in OSB or 11g SOA Suite?

    http://download.oracle.com/docs/cd/E12839_01/integration.1111/e10223/04_osb.htm
    http://www.oracle.com/technology/products/integration/service-bus/index.html
    Regards,
    Anuj

  • Best-Practice Best-In-Class Examples

    I am new to Flash and have found a few articles on best practices, which I find useful. I would like to find some repository or site that references outstanding Flash applications, sites, or objects. I want to learn from the best. I am not specifically interested in getting the code for the Flash, although that would obviously be helpful. I would like to see what people are doing out there, and then I can try to apply it to what I'd like to do.
    Thanks for the help
    Rich Rainbolt

    Hi,
    Warnings may not be harmful to your code, but clean code may give you the best performance. Defining keys is the best practice because it will increase query performance. Maybe you didn't define keys on unused objects, so the unused stuff in the RPD may degrade performance. Anyway, we don't always follow the best practices, but we do what is possible.
    mark if helpful/correct...
    thanks,
    prassu

  • Best practice: text/value formatting based on a value

    Hi all.
    How would you make a field display different text/value based on the actual value of a field?
    I have a field in a bean called retriesPerSecs. If this value is less than 1, then I want to display the text/value "{retriesPerSecs*60} retries per minute". If the value is higher than 1, then I'd like to display "{retriesPerSecs} retries per second". The value in the bean should always be retries per second.
    Should I make a converter to do this, or is there a smarter way to accomplish this?
    Thanks,
    R

    Hi,
    I don't know if, when you say "the value in the bean should always be retries per second," you mean that for some specific reason you need to bind the component to that property, but if not, I think you have two more options:
    1. Build the whole text on your backing bean based on retriesPerSecs property.
    2. Or bind the value to two properties, like value="#{retriesPerSecs*60} #{textDescription}", where textDescription evaluates to what you have to show.
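    For illustration only, here is a minimal sketch of option 1 above -- building the whole display text in the backing bean. The class and property names (RetryStats, retriesDisplay) are made up for the example, not taken from the thread:
    public class RetryStats {
        private double retriesPerSec;
        public double getRetriesPerSec() { return retriesPerSec; }
        public void setRetriesPerSec(double value) { this.retriesPerSec = value; }
        // Bind an output component to #{retryStats.retriesDisplay} instead of converting in the page.
        public String getRetriesDisplay() {
            if (retriesPerSec < 1) {
                return String.format("%.0f retries per minute", retriesPerSec * 60);
            }
            return String.format("%.0f retries per second", retriesPerSec);
        }
    }
    A custom Converter would also work, but a derived read-only property like this keeps the formatting logic in one testable place.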

  • Best practice for configuring virtual drive on UCS

    Hi,
    I have two C210 M2 servers with an LSI 6G MegaRAID 9261-8i card and 10 hard drives of 135 GB each. When I tried the automatic selection for the RAID configuration, the system created one virtual drive with RAID 6. My concern is: what is the best practice for configuring the virtual drives? Is it RAID 1 and RAID 5, or everything in one drive with RAID 6? Any help will be appreciated.
    Thanks.

    Since you are planning to have UC apps on the server, note that the voice applications have their recommendations specified here:
    http://docwiki.cisco.com/wiki/Tested_Reference_Configurations_%28TRC%29
    I believe your C210 server specs could match TRC #1, where you need to have:
    RAID 1  - First two HDDs for VMware
    RAID 5  - Remaining 8 HDDs as datastore for virtual machines ( CUCM and CUC ) 
    HTH
    Padma

  • Metrics - Thresholds - Best Practices

    All,
    I installed EM Grid Control 11g and configured the targets and notification rules. Now I am trying to set up thresholds; are there best-practice threshold values that someone can share for various metrics at the WebLogic level? I know this is a generic question and I can tune the thresholds for my environment/application usage, but there must be a ballpark threshold document somewhere for alerting purposes.
    Thanks in advance,
    Prasad.

    Hi Prasad,
    There is no document giving recommendations on threshold values. Setting thresholds really depends on your environment. I recommend looking at performance history -- for instance, a timeframe that had heavy load on the application/server but where performance was still good. Then set thresholds according to that, and adjust up or down as needed.
    Thanks,
    Nicole

  • Logical architecture+best practice

    Hi,
    what do these mean to you regarding Oracle Applications:
    1-logical architecture
    2-best practice
    Regards.

    1-logical architecture -> I assume the technical architecture for deployment of Oracle Applications (single node, multi-node, HA, DMZ configuration, etc.)
    2-best practice -> best practices for each function within maintaining and implementing Oracle Applications, like best practices for Upgrades, Cloning, Patching, etc.
    Sam
    http://www.appsdbablog.com

  • Oracle Best practices for changing  Byte to Char on Varchar2 columns

    Dear Team,
    The application team wants to change BYTE to CHAR semantics on VARCHAR2 columns to accommodate multibyte characters on a couple of production tables.
    I wanted to know: is it safe to have a mixture of BYTE and CHAR semantics in the same table? I have read in a couple of documents that it's good practice to avoid using a mixture of BYTE and CHAR semantics columns in the same table.
    What happens if we have a mixture of BYTE and CHAR semantics columns in the same table?
    Do we need to gather stats and rebuild indexes on the table after these column changes?
    Thanks in Advance !!!
    SK

    The application team wants to change BYTE to CHAR semantics on VARCHAR2 columns to accommodate multibyte characters on a couple of production tables.
    I wanted to know: is it safe to have a mixture of BYTE and CHAR semantics in the same table? I have read in a couple of documents that it's good practice to avoid using a mixture of BYTE and CHAR semantics columns in the same table.
    No change is needed to 'accommodate multibyte characters'. That support has NOTHING to do with whether a column is specified using BYTE or CHAR.
    In 11g the limit for a VARCHAR2 column is 4000 bytes, period. If you specify CHAR and try to insert 1001 characters that each take 4 bytes you will get an exception since that would require 4004 bytes and the limit is 4000 bytes.
    In practice the use of CHAR is mostly a convenience to the developer when defining columns for multibyte characters. For example for a NAME column you might want to make sure Oracle will allocate room for 50 characters REGARDLESS of the actual length in bytes.
    If you provide a name of 50 one byte characters then only 50 bytes will be used. Provide a name of 50 four byte characters and 200 bytes will be used.
    So if that NAME column was defined using BYTE, how would you know what length to use for the column? Fifty BYTES will seldom be long enough, and 200 bytes SEEMS large since the business user wants a limit of FIFTY characters.
    That is why such columns would typically use CHAR; so that the length (fifty) defined for the column matches the logical length of the number of characters.
    What happens if we have a mixture of BYTE and CHAR semantics columns in the same table?
    Nothing happens - Oracle couldn't care less.
    Do we need to gather stats & rebuild indexes on the table after these column changes .
    No - not if by 'need' you mean simply because you made ONLY that change.
    But that begs the question: if the table already exists, has data, and has been in use without there being any problems, then why bother changing things now?
    In other words: if it ain't broke why try to fix it?
    So back to your question of 'best practices'
    Best practices is to set the length semantics at the database level when the database is first created and to then use that same setting (BYTE or CHAR) when you create new objects or make DDL changes.
    Best practices is also to not fix things that aren't broken.
    See the 'Length Semantics' section of the Globalization Support Guide for more best practices:
    http://docs.oracle.com/cd/E11882_01/server.112/e10729/ch2charset.htm#i1006683
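    As a purely illustrative aside (plain Java rather than Oracle SQL, and the sample value is made up), the length arithmetic behind the BYTE vs. CHAR distinction looks like this:
    import java.nio.charset.StandardCharsets;

    public class ByteVsCharDemo {
        public static void main(String[] args) {
            String name = "Müller-Lüdenscheidt";  // 19 characters, but the two ü's take 2 bytes each in UTF-8
            int chars = name.length();                                  // logical length the business user cares about: 19
            int bytes = name.getBytes(StandardCharsets.UTF_8).length;   // storage a BYTE-sized column must hold: 21
            System.out.printf("characters=%d, UTF-8 bytes=%d%n", chars, bytes);
            // In an AL32UTF8 database, VARCHAR2(19 CHAR) accepts this value,
            // while VARCHAR2(19 BYTE) rejects it because 21 bytes > 19 bytes.
        }
    }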

  • RAID Level Configuration Best Practices

    Hi Guys,
    We are building a new virtual environment for SQL Server and have to define the RAID level configuration for the SQL Server setup.
    Please share your thoughts on the RAID configuration for the SQL data, log, tempdb, and backup files:
    Files --> RAID level
    SQL Data File -->
    SQL Log Files -->
    Tempdb Data -->
    Tempdb Log -->
    Backup Files -->
    Any other configuration best practices are more than welcome,
    like memory settings at the OS level and LUN settings.
    Best practices to configure SQL Server in Hyper-V with clustering.
    Thank you
    Please Mark As Answer if it is helpful. \\Aim To Inspire Rather to Teach A.Shah

    Hi,
    If you can spend some bucks, you should go for RAID 10 for all files. Also, as a best practice, keeping database log and data files on different physical drives gives optimum performance. Tempdb can be placed with the data files or on a different drive, depending on usage; it's always good to use a dedicated drive for tempdb.
    For the memory setting, please refer to this link for setting max server memory. You should monitor SQL Server memory usage using the counters below, taken from this link.
    SQLServer:Buffer Manager--Buffer Cache Hit Ratio (BCHR): if your BCHR is high (90 to 100), it points to the fact that you don't have memory pressure. Keep in mind that if somebody runs a query which requests a large number of pages, BCHR might momentarily come down to 60 or 70, maybe less, but that does not mean there is memory pressure; it means the query requires a lot of memory and will take it. After that query completes you will see BCHR rising again.
    SQLServer:Buffer Manager--Page Life Expectancy (PLE): PLE shows how long a page remains in the buffer pool; the longer it stays, the better. It's a common misconception to take 300 as a baseline for PLE, but it is not: I read in Jonathan Kehayias' book (Troubleshooting SQL Server) that this value was a baseline when SQL Server 2000 was current and the most RAM you would see was 4-6 GB. Now, with 200 GB of RAM in the picture, this value is not correct. He also gave a (tentative) formula for how to calculate it: take the base counter value of 300 presented by most resources, and then determine a multiple of this value based on the configured buffer cache size, which is the 'max server memory' sp_configure option in SQL Server, divided by 4 GB. So, for a server with 32 GB allocated to the buffer pool, the PLE value should be at least (32/4)*300 = 2400. So far this has worked well for me, so I would recommend you use it (there is a small calculation sketch after this reply).
    SQLServer:Buffer Manager--Checkpoint Pages/sec: the Checkpoint Pages/sec counter is important for detecting memory pressure, because if the buffer cache is low, lots of new pages need to be brought into and flushed out of the buffer pool; under load, the checkpoint's work increases and it starts flushing out dirty pages very frequently. If this counter is high, your SQL Server buffer pool is not able to cope with the incoming requests, and you need to address it by increasing buffer pool memory, or by increasing physical RAM and then making adequate changes to the buffer pool size. Technically this value should be low; if you are looking at a line graph in perfmon, it should stay near the baseline for a stable system.
    SQLServer:Buffer Manager--Free Pages: this value should not be low; you always want to see a high value for it.
    SQLServer:Memory Manager--Memory Grants Pending: if you see memory grants pending, your server is facing a SQL Server memory crunch and increasing memory would be a good idea. For memory grants please read this article:
    http://blogs.msdn.com/b/sqlqueryprocessing/archive/2010/02/16/understanding-sql-server-memory-grant.aspx
    SQLServer:Memory Manager--Target Server Memory: this is the amount of memory SQL Server is trying to acquire.
    SQLServer:Memory Manager--Total Server Memory: this is the memory SQL Server has currently acquired.
    For other settings I would suggest you discuss with your vendor; storage questions IMO should be directed to the vendor.
    Below would surely be a good read
    SAN storage best practice For SQL Server
    SQLCAT best practice for SQL Server storage
    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it.
    My TechNet Wiki Articles
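    For what it's worth, the PLE rule of thumb quoted above reduces to a one-line calculation; this tiny Java helper (the names are mine, not from the post) just restates it:
    public class PleBaseline {
        // baseline = (max server memory in GB / 4) * 300, per the formula quoted above
        static long pleBaseline(long maxServerMemoryGb) {
            return (maxServerMemoryGb / 4) * 300;
        }
        public static void main(String[] args) {
            System.out.println(pleBaseline(32));   // 2400, matching the worked example in the post
            System.out.println(pleBaseline(128));  // 9600
        }
    }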

  • RAID Configuration MCS 7845 (best practice)

    I'm wondering what the best practice is for RAID configuration. I'm looking for examples of 4-disk and 6-disk setups, and also which drives to pull when breaking the mirror.
    Is it possible to have RAID 1+0 for 4/6 drives and have the mirroring set up so that you would pull the top or bottom drives on an MCS 7835/7845?
    I'm also confused: using the SmartStart Array Configuration utility I seem to be able to create one logical drive using RAID 1+0 with only 2 drives; how is that possible?
    Any links to directions would be appreciated.

    ICM 7.0, CVP 4.x, CCM 4.2.3, Unity, and the Collaboration Server 5.0 and E-Mail Manager options for ICM.
    But to keep it simple, let's look at a Rogger setup.
    Sorry for the delayed response.

  • What is "best practice" to set up and configure a Mac Mini server with dual 1 TB drives, using RAID 1?

    I have been handed a new, out-of-the-box Mac Mini server. It has two 1 TB drives in it. The contractor suggested RAID 1 for the setup. I have done some research
    and found out that creating the software RAID removes the recovery partition, so I have been reading up on how to create a recovery "disk" using a thumb drive. This part of the operation I am comfortable with, but there are other issues/concerns that I have.
    Basically, what is the "best practice" to set up the Mini, configure the RAID, and then start the server? I am assuming the steps would be something like this:
    1) start up the Mini and run through the normal Mavericks setup/config - keep it plain and vanilla
    2) grab a copy of the Server app and store it offline in a safe place
    3) perform the RAID configuration / reinstall of OS X Mavericks using the recovery tools
    4) copy down and start the Server app
    This might be considered a very simplified version of this article (http://support.apple.com/kb/HT4886 - Mac mini server (Late 2012 and Mid 2011): How to install OS X Server on a software RAID volume), with the biggest difference being that I grab a copy of the Server app off of the Mini before I reinstall, since I did not purchase it from the App Store; rather, it came with the Mini.
    Is there a best practice /  how-to tutorial somewhere that I can follow/learn from? Am I on the right track or headed for a train wreck?
    thanks in advance

    I think this article will answer your question. Hope this helps: http://wisebyte.blogspot.com/2014/01/best-configuration-for-mac-mini-server.html

  • Best Practices for Accessing the Configuration data Modelled as XML File in

    Hi,
    I have referred to a couple of blog posts/forum threads on how to model and access configuration data as XML inside OSB.
    One of the easiest ways is this:
    Re: OSB: What is best practice for reading configuration information
    Another could be
    uploading the XML data as an .xq file (creating an .xq file and copy-pasting all the configuration as XML).
    I need expert answers for the following:
    1] I have an .xsd file which represents the configuration data. The structure of the XSD is:
    <FrameworkConfig>
    <Config type="common" key="someKey">propertyvalue</Config>
    </FrameworkConfig>
    2] As my project moves from one environment to another, the property value changes according to the environment...
    For Dev:
    <FrameworkConfig>
    <Config type="common" key="someKey">propertyvalue_Dev</Config>
    </FrameworkConfig>
    For Stage:
    <FrameworkConfig>
    <Config type="common" key="someKey">propertyvalue_Stage</Config>
    </FrameworkConfig>
    3] Let's say I create the following folder structure to store the configuration file specific to each dev/stage/prod instance:
    OSB Project Folder
    |
    |---Dev
    |    |--Dev_Config_file.xml
    |
    |---Stage
    |    |--Stage_Config_file.xml
    |
    |---Prod
    |    |--Prod_Config_file.xml
    4] I need a way to load these property files as an XML element/variable inside the OSB message flow. I can't use the XPath function fn:doc("URL") because I don't know the exact path of the XML on the deployed server.
    5] Also, I need to look up/model a value which specifies the current server type (dev/stage/prod) on which the OSB message flow is running -- say, some construct which acts as a global configuration and is accessible inside the OSB message flow. If the value of that global variable is Dev, I will load, at runtime, the XML config file under the Dev directory containing the key-value pairs for the dev environment.
    6] This thread -- Re: OSB: What is best practice for reading configuration information --
    suggests designing a web application which serves the XML file over HTTP and reading the contents into a variable (which in turn can be used in the OSB message flow). Can we address this problem without creating the extra project and adding the dependencies? I read about the configuration-file approach too, but the sample configuration file doesn't show an entry for an .xml file as a resource.
    I hope I am clear... I really appreciate your comments and suggestions.
    Sushil
    Edited by: Sushil Deshpande on Jan 24, 2011 10:56 AM

    If you can enforce some sort of naming convention for the transport endpoint of this proxy service across environments, where the environment name is part of the endpoint, you may be able to retrieve it from $inbound in the message pipeline.
    E.g. http://osb_host/service/prod/service1 ==> prod and http://osb_host/service/stage/service1 ==> stage. Then I think $inbound/ctx:transport/ctx:uri can give you /service/prod/service1 or /service/stage/service1, and by applying appropriate XPath functions you will be able to extract the environment name.
    Check this link for details on $inbound/ctx:transport: http://download.oracle.com/docs/cd/E13159_01/osb/docs10gr3/userguide/context.html#wp1080822
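    Purely as an illustration of that naming-convention idea -- in plain Java rather than the XQuery/XPath you would actually use inside the OSB pipeline, and assuming endpoint URIs shaped like /service/<env>/<service> as above -- the extraction logic is just:
    public class EnvFromUri {
        // Returns the second path segment, e.g. "prod" for /service/prod/service1
        static String environmentOf(String uri) {
            String[] parts = uri.split("/");        // ["", "service", "prod", "service1"]
            return parts.length > 2 ? parts[2] : "unknown";
        }
        public static void main(String[] args) {
            System.out.println(environmentOf("/service/prod/service1"));   // prod
            System.out.println(environmentOf("/service/stage/service1"));  // stage
        }
    }
    Inside OSB the equivalent would be XQuery string functions (e.g. tokenize or substring-after) applied to $inbound/ctx:transport/ctx:uri.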

  • Best practice RAID configuration for UCS C260 M2 for Unified Communications?

    I have two UCS C260 M2 servers with 16 drives (PID: C260-BASE-2646) and I am trying to figure out what the best practice is for setting up the RAID.
    I will be running CUCM, CUP, CUC, Prime, etc. for about 2000 phone environment.
    If anyone can offer real-world suggestions that would be great. I also have a redundant server.

    The RAID setup depends a bit on your specific configuration; however, there is a guide for Cisco Collaboration on Virtual Servers that you can review here:
    http://www.cisco.com/c/en/us/td/docs/voice_ip_comm/cucm/virtual/CUCM_BK_CF3D71B4_00_cucm_virtual_servers/CUCM_BK_CF3D71B4_00_cucm_virtual_servers_chapter_010.html#CUCM_TK_C3AD2645_00
    If your server is spec'd as a Tested Reference Configuration (TRC), then the C260 M2 TRC1 would have 16 HDDs that you would configure/split into 2 x 8-HDD RAID 5 arrays.
    Hailey
    Please rate helpful posts!

  • What is the best practice concerning View Objects and List of values

    Hi,
    Let's take these two tables :
    Market_Descriptions: Id, Name, Desc, Language_Id
    Languages: Id, Code, Desc
    Now, if I generate these business components with the help of the wizard, I will have two entities and two views.
    If I create a list of values on the Language_Id field in the Market_Descriptions table, showing the language code, everything works fine as long as it is dropped as a form, but not as a table or read-only form.
    What is the best practice to include/replace the Language_Id with the language code, since it is more user-friendly, on a read-only form or a table?
    Thanks.

    Hi,
    Always have the LOV on the field which you want to show in the UI. In the scenario below, if you want to display the language description in the UI, have the LOV on Desc, not on Language_Id. Even though we can set a display attribute in the LOV configuration, it will work only in forms, not in tables; in tables it will show the language id instead of the description.
    Have the LOV on Desc and make one more join on the id in the LOV configuration, so that as soon as you select the Desc in the LOV, the id will be populated.
    Here are the join conditions inside the LOV:
    Languages.Desc = ViewAccessor.Desc
    Languages.Id = ViewAccessor.Id
    Hope this will help you.
    Dileep.

  • Ask the Expert: Configuring, Troubleshooting & Best Practices on ASA & FWSM Failover

    With Prashanth Goutham R.
    Welcome to the Cisco Support Community Ask the Expert conversation. This is an opportunity to learn and ask questions about configuring, troubleshooting, and best practices for Adaptive Security Appliance (ASA) and Firewall Services Module (FWSM) failover with Prashanth Goutham.
    Firewall Services Module (FWSM) is a high-performance stateful-inspection firewall that integrates into the Cisco® 6500 switch and 7600 router chassis. The FWSM monitors traffic flows using application inspection engines to provide a strong level of network security. Cisco ASA is a key component of the Cisco SecureX Framework, protects networks of all sizes with MultiScale performance and a comprehensive suite of highly integrated, market-leading security services.
    Prashanth Goutham is an experienced support engineer with the High Touch Technical Support (HTTS) Security team, covering all Cisco security technologies. During his four years with Cisco, he has worked with Cisco's major customers, troubleshooting routing, LAN switching, and security technologies. He is also qualified as a GIAC Certified Incident Handler (GCIH) by the SANS Institute.
    Remember to use the rating system to let Prashanth know if you have received an adequate response. 
    Prashanth might not be able to answer each question due to the volume expected during this event. Remember that you can continue the conversation on the Security sub-community forum shortly after the event. This event lasts through July 13, 2012. Visit this forum often to view responses to your questions and the questions of other community members.

    Hello John,
    This session is on failover functionality on all Cisco firewalls, and I'm not a QoS geek, but I have the answer for what you need. The way to limit traffic would be to enable QoS policing on your firewalls. Your requirement is about limiting 4 different tunnels to the configured rates and dropping any further packets; this is called traffic policing. I tried out the following in my lab and it looks good.
    access-list tunnel_one extended permit ip 10.1.0.0 255.255.0.0 20.1.0.0 255.255.0.0
    access-list tunnel_two extended permit ip 10.2.0.0 255.255.0.0 20.2.0.0 255.255.0.0
    access-list tunnel_three extended permit ip 10.3.0.0 255.255.0.0 20.3.0.0 255.255.0.0
    access-list tunnel_four extended permit ip 10.4.0.0 255.255.0.0 20.4.0.0 255.255.0.0
    class-map Tunnel_Policy1
      match access-list tunnel_one
    class-map Tunnel_Policy2
      match access-list tunnel_two
    class-map Tunnel_Policy3
      match access-list tunnel_three
    class-map Tunnel_Policy4
      match access-list tunnel_four
    policy-map tunnel_traffic_limit
      class Tunnel_Policy1
        police output 4096000
    policy-map tunnel_traffic_limit
      class Tunnel_Policy2
        police output 5734400
    policy-map tunnel_traffic_limit
      class Tunnel_Policy3
        police output 2457600
    policy-map tunnel_traffic_limit
      class Tunnel_Policy4
        police output 4915200
    service-policy tunnel_traffic_limit interface outside
    You might want to watch out for the following changes in values:
    HTTS-SEC-R2-7-ASA5510-02(config-cmap)# policy-map tunnel_traffic_limit
    HTTS-SEC-R2-7-ASA5510-02(config-pmap)# class Tunnel_Policy1
    HTTS-SEC-R2-7-ASA5510-02(config-pmap-c)# police output 4096000
    HTTS-SEC-R2-7-ASA5510-02(config-pmap-c)# policy-map tunnel_traffic_limit
    HTTS-SEC-R2-7-ASA5510-02(config-pmap)# class Tunnel_Policy2
    HTTS-SEC-R2-7-ASA5510-02(config-pmap-c)# police output 5734400
    WARNING: police rate 5734400 not supported. Rate is changed to 5734000
    HTTS-SEC-R2-7-ASA5510-02(config)# policy-map tunnel_traffic_limit
    HTTS-SEC-R2-7-ASA5510-02(config-pmap)# class Tunnel_Policy3
    HTTS-SEC-R2-7-ASA5510-02(config-pmap-c)# police output 2457600
    WARNING: police rate 2457600 not supported. Rate is changed to 2457500
    HTTS-SEC-R2-7-ASA5510-02(config-pmap-c)# policy-map tunnel_traffic_limit
    HTTS-SEC-R2-7-ASA5510-02(config-pmap)# class Tunnel_Policy4
    HTTS-SEC-R2-7-ASA5510-02(config-pmap-c)# police output 4915200
    WARNING: police rate 4915200 not supported. Rate is changed to 4915000
    I believe this is because of the software granularity and the way IOS rounds rates off in multiples of a certain value, so watch out for the exact values you get in the end. I used this website to convert your kilobyte values to bits: http://www.matisse.net/bitcalc/
    The final outputs of the configured values were:
    Class-map: Tunnel_Policy1
      Output police Interface outside:
        cir 4096000 bps, bc 128000 bytes
        conformed 0 packets, 0 bytes; actions: transmit
        exceeded 0 packets, 0 bytes; actions: drop
        conformed 0 bps, exceed 0 bps
    Class-map: Tunnel_Policy2
      Output police Interface outside:
        cir 5734000 bps, bc 179187 bytes
        conformed 0 packets, 0 bytes; actions: transmit
        exceeded 0 packets, 0 bytes; actions: drop
        conformed 0 bps, exceed 0 bps
    Class-map: Tunnel_Policy3
      Output police Interface outside:
        cir 2457500 bps, bc 76796 bytes
        conformed 0 packets, 0 bytes; actions: transmit
        exceeded 0 packets, 0 bytes; actions: drop
        conformed 0 bps, exceed 0 bps
    Class-map: Tunnel_Policy4
      Output police Interface outside:
        cir 4915000 bps, bc 153593 bytes
        conformed 0 packets, 0 bytes; actions: transmit
        exceeded 0 packets, 0 bytes; actions: drop
        conformed 0 bps, exceed 0 bps
    Please refer to the QoS document on CCO here for further information: http://www.cisco.com/en/US/docs/security/asa/asa84/configuration/guide/conns_qos.html
    Hope that helps.
