RAID - Clarification on Concept & Components

This forum is a great resource on RAID ... yet I'm still foggy and need to understand it better before making my controller choice.
Can someone shed more light on the various components of a controller to consider?
How do these variables affect video editing performance and future upgrades?
Does controller documentation give you all the info needed to install?
What are the main things to think about as one compares the main elements of a RAID controller?
Type
External Connectors
Internal Connectors
Interface
Transfer Rate
Cache Memory
RAID (this has been explained well already)
OS Support
I edit mostly doc-style with XDCAM and DSLR footage. My build will be the typical assortment posted many times: i7-950, X58 mobo, 12 GB RAM or more, GTX 470 or 580, a non-RAID system disk, and an 850W+ PSU.
My current plan is RAID 5 across four 1TB 7200 RPM Samsung F3s. Since I do not see much info out there on RAID 3, I want to stick with RAID 5.
A fellow forum member has had great luck with the 3ware 9750-4i. It's a great price and should meet my needs ... unless I should be planning further ahead.
http://www.newegg.com/Product/ImageGallery.aspx?CurImage=16-116-109-TS&SpinSet=16-116-109-RS&ISList=16-116-109-Z01%2c16-116-109-Z02%2c16-116-109-Z03%2c16-116-109-Z04%2c16-116-109-Z05%2c16-116-109-Z06&S7ImageFlag=1&Item=N82E16816116109&Depa=0&WaterMark=1&Description=3ware%20Internal%209750-4i%20SATA%2fSAS%206Gb%2fs%20PCI-Express%202.0%20w%2f%20512MB%20onboard%20memory%20Controller%20Card%2c%20Single
3ware Internal 9750-4i SATA/SAS 6Gb/s PCI-Express 2.0 w/ 512MB onboard memory Controller Card, Single
3ware > Newegg Item#: N82E16816116109
THANK YOU.
--NAN

Actually, the last benchmarks I saw for the 3ware card with RAID 5 or 10 were very good. I was surprised by them, though their management utility is really lacking.
Eric
ADK
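
For rough planning, here is a back-of-the-envelope sketch of how the RAID level affects usable capacity and streaming rate for the planned array. The per-drive figures are assumptions for illustration (a 7200 RPM drive sustaining roughly 100 MB/s), not benchmarks of the F3 drives or the 9750-4i:

public class RaidMath {
    public static void main(String[] args) {
        int drives = 4;               // four 1TB drives, as planned above
        double perDriveTB = 1.0;
        double perDriveMBps = 100.0;  // assumed sustained rate per drive

        // RAID 0: full capacity, no redundancy
        System.out.printf("RAID 0 : %.0f TB usable, ~%.0f MB/s streaming%n",
                drives * perDriveTB, drives * perDriveMBps);
        // RAID 5: one drive's worth of capacity goes to parity
        System.out.printf("RAID 5 : %.0f TB usable, ~%.0f MB/s reads, writes lower due to parity%n",
                (drives - 1) * perDriveTB, (drives - 1) * perDriveMBps);
        // RAID 10: mirrored stripes; half the capacity, survives one disk per mirror
        System.out.printf("RAID 10: %.0f TB usable, ~%.0f MB/s reads%n",
                drives / 2.0 * perDriveTB, drives * perDriveMBps);
    }
}

Even the conservative RAID 5 estimate leaves a lot of headroom over XDCAM bitrates (35-50 Mb/s is well under 10 MB/s per stream), which fits Eric's observation that the 3ware RAID 5 benchmarks look good.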

Similar Messages

  • Clarification on Prerequisite components for Oracle IDM 11gR2PS2

I am new to Oracle Fusion Middleware; however, I am familiar with the IDM concepts of a different vendor.
I started with the installation documentation and would like clarification on the IDM components (OAM, OAAM, OIM, OSTS, OES, OIN, OUD, OIF, ESSO, OIC, OPAM). Referring to the "Oracle Identity and Access Management 11g Release 2 (11.1.2.x) Certification Matrix": the first section describes system requirements, which covers the supported "Processor type, OS Version, OS 32/64 Bit, Database" etc. for 11gR2PS2 (11.1.2.2.0), while the "ID&Access" tab covers the LDAP directories supported for specific component(s).
    Notes -
    "Some Oracle Identity and Access Management components require an Oracle Database. Ensure that you have an Oracle Database installed on your system before installing Oracle Identity and Access Management. The database must be up and running to install the relevant Oracle Identity and Access Management component. The database does not have to be on the same system where you are installing the Oracle Identity and Access Management component."
I am aware that configuration will be difficult in the type of architecture I mention below, and that it is not recommended. Moreover, I will not be able to use RCU. Let's ignore the configuration part for a while.
As mentioned above, the first section says the database supported for Identity Management 11gR2PS2 (11.1.2.2.0) is "Oracle Database". Whereas in the section that specifies the supported LDAP types, the specifications are as below
(for ease of understanding I have taken just two components from the IDM suite).
    OAM - Oracle Internet Directory
            - Oracle Virtual Directory
            - Microsoft Active Directory 2008,
            - Novell eDirectory 8.8,
        - IBM Tivoli Directory Server 6.3
            - And as per list...
    Whereas, OIM Server supports,
    OIM - Oracle Internet Directory,
       - Oracle Unified Directory,
           - Microsoft Active Directory 2008
However, I do not find Novell eDirectory or IBM Tivoli Directory Server in the supported LDAP list for OIM Server.
My question is:
In the list of supported databases, it just says "Oracle Database" everywhere, whereas for the LDAP type we have options from different vendors. Does this mean that if I plan to use an LDAP other than Oracle Internet Directory / Oracle Unified Directory / Oracle Virtual Directory, my database should still be 'Oracle Database' only?
Scenario 1 - I want to use OAM server, and OAM supports LDAP from Microsoft, IBM and Novell.
Can I go for IBM Tivoli Directory Server as the LDAP type and IBM DB2 as the database type?
                                                or
IBM DB2 as the database type and Oracle Internet Directory / Oracle Unified Directory?
                                                or
(If deploying on a Windows system) Microsoft SQL Server as the database type and Microsoft Active Directory 2008 as the LDAP type?
Or is it mandated that my database be "Oracle Database" irrespective of which LDAP (IBM TDS / Microsoft Active Directory / Novell eDirectory) I select?
Scenario 2 - I want to use OIM server, and apart from Oracle Internet Directory / Oracle Unified Directory it only supports Microsoft Active Directory 2008 as per the list.
Does this mean that if I use the 'OIM Server' component of Oracle IDM 11gR2PS2 (11.1.2.2.0),
my database will be 'Oracle Database' and my option for LDAP is Microsoft Active Directory 2008? (Yes, we have Oracle Unified Directory if we don't opt for Microsoft Active Directory.)
                                                                    or
If I plan to use Microsoft Active Directory 2008 as my LDAP, can I use Microsoft SQL Server as the database, or should I use "Oracle Database" only?
And OIM doesn't support IBM's DB2, IBM Tivoli Directory Server, or Novell eDirectory?

    Any OIM experts, can you guys help!!

  • RAID Clarifications

    Hi,
I read in Knowledge Base Answer ID 19641 the steps for installing a used HDD into an NMH-300 to rebuild the RAID setup.
Can anyone confirm whether the steps apply to a new HDD as well?
For instance, if RAID 1 is already configured and the HDD in Bay 1 crashes, do I:
1. Insert a new HDD directly back into Bay 1 and begin the rebuild, or
2. Switch the working HDD from Bay 2 to Bay 1, then insert the new HDD into Bay 2 and begin the rebuild?
Any advice is appreciated; I have spent countless hours trying different rebuild strategies, but the NMH-300 RAID setup is not hot-swappable.
    Thanks in advance.
    Regards

    Hi,
If we go by the Linksys article below regarding RAID 1, Bay 1 of the media hub should always contain the hard disk that is going to be backed up onto the hard drive in Bay 2. If the hard disk in Bay 1 becomes defective, you may need to move the hard disk from Bay 2 into Bay 1 and then add the (new) replacement hard disk in Bay 2. This sounds confusing to me as well ...
    Creating RAID 1 on the Network Media Hub 
    I hope this helps.

  • Clarification on concepts of mySAP ERP

    Hi All,
This is Hussain - I will be starting testing of the HCM module shortly. I am pretty new to SAP and the forums as well; I have been referring to the book SAP in 24 Hours by George W. Anderson and Danielle Larocca to learn more about SAP.
Hour 11's discussion of SAP ECC and R/3 seems a bit confusing, though. I was clear on the evolution SAP R/3 -> R/3 Enterprise -> ECC, and also on the fact that ECC utilizes ESA etc. But the mySAP ERP section states the business processes as mySAP ERP Financials, Operations, Human Capital Management and Corporate Services, while the core SAP business modules include FI, TR, CO, SD, MM, PA, PA-PD etc. I am unable to interpret the relationship between these business processes and modules; any help (links/documents/references) would be appreciated.
    Can anyone name a couple of books to understand/learn/practice SAP HCM??
    Thanks in advance,
    Hussain

    Dear Venkat,
note 391846 is also valid for ERP 600.
This modification is not part of the SAP standard in release 600; it remains a modification in 600 as well.
If you want this function, you have to implement the note again in your current release.
    Regards,
    Sabine

  • Business Components for Java - Pooling

I need a little bit of clarification regarding Business Components for Java...
I would like to create a JDBC connection pool for my application to avoid the overhead of creating new JDBC connections each time a client connects.
Since I'm using BC4J, and the JDBC connection is contained within the BC4J components, I connect to the database using:
    session.getTransaction().connect("jdbc:oracle:thin:test/test@ccmain:1521:clincare");
Does the BC4J architecture do any connection pooling itself? My plan was to create a pool of ApplicationModules that could be easily and quickly accessed, but if BC4J already pools the JDBC connections internally, then I'm not sure I would gain any performance with my ApplicationModule pool.
    Any input would be appreciated! Thanks!

Quote (originally posted by Steve Muench, [email protected]):
"Does the BC4J architecture do any connection pooling itself?"
Strictly speaking, in JDev/BC4J 3.1, the answer is no. BC4J 3.1 offers Application Module pooling, and since AMs are paired one-to-one with connections, using a pool of AMs is pretty much the same as using a pool of connections.
However, in JDev 3.2 we've dramatically improved the features for use in a high-throughput Web application scenario, and in doing so have implemented a more flexible application module pooling mechanism as well as a connection pooling mechanism; these work together to allow developers to exploit application module instances to retain pending state without "pinning" that pending state to a particular AM instance and without "pinning" a dedicated database connection.
Early customer previews we've done of our 3.2 features have been receiving rave reviews, so we're excited to get it out to a wider audience this Fall.
For now, your best bet is to exploit the application module pooling mechanism. (End of quote.)
Is JDev 3.2 going to include support for registering an application module not from the property file, but using a specific database username and password for the underlying connection?
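    For anyone landing here from a search, the pooling pattern under discussion can be sketched generically. This is a minimal illustration of the idea only - it is not the BC4J API, and the class and method names are invented:

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.function.Supplier;

    // Minimal generic object pool illustrating the ApplicationModule-pooling
    // idea discussed above. Not the BC4J API; in real code, prefer BC4J's own
    // application module pool, as recommended in the quote.
    public class SimplePool<T> {
        private final Deque<T> idle = new ArrayDeque<>();
        private final Supplier<T> factory;

        public SimplePool(Supplier<T> factory, int warmUp) {
            this.factory = factory;
            for (int i = 0; i < warmUp; i++) {
                idle.push(factory.get()); // pre-create the expensive instances
            }
        }

        public synchronized T checkout() {
            // Reuse an idle instance when possible; create one only on a miss
            return idle.isEmpty() ? factory.get() : idle.pop();
        }

        public synchronized void checkin(T instance) {
            idle.push(instance); // return the instance for later reuse
        }
    }

    Checking an application module (and the JDBC connection it holds) out of such a pool amortizes the connection-setup cost across requests, which is exactly the gain described in the quote above.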

  • RAID 5 performance

I have 4 320GB disks and have put them in RAID 5 (Intel ICH10). I have used RAID 0 and RAID 1 before, but this is the first time I am using RAID 5. Read performance is excellent, but write is very slow at 15MB/s. Is that normal for RAID 5? Single-disk write for my disks is around 70MB/s.

Those write speeds are normal for a RAID5 array using an even number of disks. To get excellent write speeds on software RAID5 controllers like the ICH8R-ICH10R, you need an odd number of disks. The only options are a 3- or 5-disk RAID5 array, since there are only a maximum of 6 ports.
See this thread to get an idea of what I'm talking about. The thread is about the nForce onboard SATA RAID, but the concepts apply to ICHR chipsets as well:
    http://forums.storagereview.net/index.php?showtopic=25786
To summarize the thread: when dealing with RAID0 or RAID5, to get optimal performance you need to use aligned partitions, ideally created at offsets of 1024KB x (number of usable drives in the array); a small sketch after the diskpart steps below works out this arithmetic. The overall best stripe size for read and write performance across all file sizes is 32KB. The best cluster size when formatting the partition with NTFS is 32KB if you store a variety of file sizes on the array. If you are only dealing with very large files (256MB+), you can get the best performance from a 128KB stripe size for the RAID array and a 64KB cluster size for the NTFS partition. I store a variety of files on my RAID array, so I use the 32KB stripe / 32KB cluster option.
The next thing you need to do is create an aligned first partition on the array. If you use Windows Vista or later to create the partition, it will be aligned by default. If you are using Windows XP, you need the utility diskpar.exe (not XP's diskpart.exe, which lacks partition alignment capability; Windows Server 2003's and Vista-and-later's diskpart.exe do support it). You can gain slightly more performance by manually aligning the partition yourself using diskpar/diskpart. If you have a 5-disk RAID5 array, you would align the first partition to 4096KB, i.e. 1024KB for every non-parity (usable) drive in the array. For a 3-disk RAID5 array, you would align to 2048KB.
    Yes, you can get awesome RAID5 write speeds on an Intel onboard RAID controller using the information above. My 5 disk RAID5 array with Samsung F1 500GB drives has a maximum read speed of 350MB/s and write speeds of nearly 300MB/s. They trail off linearly as you get further into the array just as any mechanical HDD's performance does when looking at HDtach/HDTune benchmarks. You're never going to get read and write speeds that match a RAID0 array with the same number of drives, but with proper stripe/cluster/alignment you can get close. For comparison, the same 5 drives in RAID0 have a max read speed of 450MB/s and write speeds over 400MB/s. On ICHR chipsets, RAID0 arrays are not severely hampered by non-aligned partitions, but alignment does help quite a bit. For RAID5, partition alignment is essential for good write performance.
Diskpart.exe usage (done on a clean drive/array with no partitions):
1) Open a command prompt window, type diskpart, and hit Enter.
2) Type: list disk, then hit Enter. Look for the disk number that corresponds to your RAID array.
3) Type: select disk 1 (if disk 1 is your RAID array).
4) Type: create partition primary align=4096 (for a 5-disk RAID5 array; use 2048 for a 3-disk array).
That's it. Format the drive, making sure to select the correct cluster size (at least 32KB). If you created a first partition that didn't fill the drive, any subsequent partitions you create on the drive will also be aligned, because the first one is.
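    To make the alignment arithmetic concrete, here is a small sketch (assuming the 1024KB-per-usable-drive rule described above) that computes the value to pass to diskpart:

    public class RaidAlign {
        // 1024 KB of offset per usable (non-parity) drive, per the rule above
        static int alignKB(int disks, boolean raid5) {
            int usable = raid5 ? disks - 1 : disks; // RAID5 loses one drive to parity
            return 1024 * usable;
        }

        public static void main(String[] args) {
            // 5-disk RAID5 -> 4 usable drives -> align=4096, matching step 4
            System.out.println("5-disk RAID5: create partition primary align=" + alignKB(5, true));
            // 3-disk RAID5 -> 2 usable drives -> align=2048
            System.out.println("3-disk RAID5: create partition primary align=" + alignKB(3, true));
            // 4-disk RAID0 -> 4 usable drives -> align=4096
            System.out.println("4-disk RAID0: create partition primary align=" + alignKB(4, false));
        }
    }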

  • XSERVE RAID Disk Failed to Recover After Reset...

    Greetings,
I've got a PATA Xserve RAID with 14x250GB drives set up as RAID 5.
Last week, event monitor reported that Disk 14 reported an error: Command 0x35, Error 0x10, Status 0x51, LBA 0x3978B80.
Not having a spare drive available, I ordered one and reconditioned the RAID using the RAID Admin tool. 3 days later it reported all as well. This morning my new ADM arrived, and before I could replace Drive 14 it started generating basically the same error.
    So I swapped in the New Drive, and it began to rebuild, now here's the kicker...
    After a few minutes it started giving me this ERROR:
    DISK 14 FAILED TO RECOVER AFTER RESET. RETRYING
    then...
    DATA WAS LOST DURING A REBUILD.
    It continued to repeat these messages until it Finally reported:
    DISK 14 OFFLINE
So I thought it must be a bad drive, and figured I'd kill 2 birds with 1 stone and swap out a drive I had working fine in an Xserve. So I ejected the new drive from the Xserve RAID and replaced it directly with the drive from the Xserve (also 250GB). And...
    It did the Same exact thing, same errors, etc.
    So what's going on?
    RAID ADMIN reports all Components OK
    Any help is appreciated.
    Thanks
    LJS

OK, so replying to my own issue here...
    So I've replaced the Lower Controller and updated the firmware on that Controller.
Then I ejected Disk 14, waited a minute or two, re-inserted the disk, and made it available for use.
    It attempted to add to the existing RAID and the errors started all over again (See 1st post)
    So apparently it wasn't the controller.
    So I figured it must be the RAID Set itself, so to be sure, I swapped Drive 8 with Drive 14.
I deleted the array and re-created it; it's building now, and now Drive 8 is offline. So it would appear to be the drive, but I already replaced it with a known working drive: a new one from Apple, and then one directly from my Xserve (see 1st post).
    So I'm just baffled at this point.
    LJS

  • Enhancement Point Creation in Internal table

    Hi,
I want to create an Enhancement Point in an internal table declared in a custom program.
This is more a clarification of the concept of Enhancement Points.
When I put the cursor inside the internal table declaration, right-click, and choose Enhancement -> Create, I get the message
"Position of enhancement section/point statement is not allowed".
This would mean that we can't create an Enhancement Point or Enhancement Section inside an internal table declaration, and that the only
option is an Implicit Enhancement Point.
But my doubt is that I have seen SAP programs where an Enhancement Point has been created in the internal table declaration.
How is that possible then?
Please provide your helpful answers to give me more clarity on Enhancement Points.
    Thanks,
    Manish

    Hi Afzal,
    Below is the code from standard program :-
DATA: BEGIN OF lbbes OCCURS 10,        "table of subcontractor (SC) stocks
        lifnr LIKE ekko-lifnr,
        matnr LIKE mdpm-matnr,
        werks LIKE mdpm-werks,
        charg LIKE lifbe-charg,        " SC batch: key field
        lgort LIKE resb-lgort,         " SC batch
        lbmng LIKE lifbe-lblab,        "SC stock quantity
        erfmg LIKE mdpm-bdmng,         "available quantity
        erfme LIKE mdpm-lagme,         "base unit of measure
            wamng LIKE mdpm-bdmng,
            waame LIKE mdpm-lagme,
            vstel LIKE tvstz-vstel,
            ladgr LIKE marc-ladgr,
            sernp LIKE marc-sernp,
            lblfa LIKE t161v-lblfa,
            bdmng LIKE mdpm-bdmng,   "sum of dep. requirements from SC POs
            bdbam LIKE mdpm-bdmng,   "sum of dep. requirements from SC req's
            rsmng LIKE mdpm-bdmng,         "sum of transfer reservations
            slbbm LIKE mdpm-bdmng,   "sum of third-party SC requisitions
            slbmg LIKE mdpm-bdmng,         "sum of third-party SC POs
            lfimg LIKE komdlgn-lfimg,      "sum of open deliveries
            maktx LIKE mdpm-maktx,
            selkz LIKE sy-calld.
    ENHANCEMENT-POINT rm06ellb_03 SPOTS es_rm06ellb STATIC.
$$-Start: RM06ELLB_03----$$
    ENHANCEMENT 3  /CWM/APPL_MM_RM06ELLB.    "active version
DATA:   /cwm/waame LIKE mdpm-lagme,
        /cwm/lbmng LIKE lifbe-lblab,    "SC stock quantity in parallel UoM
        /cwm/erfmg LIKE mdpm-bdmng,     "available quantity in parallel UoM
        /cwm/erfme LIKE mdpm-lagme,     "parallel unit of measure
        /cwm/meins LIKE mdpm-lagme,     "CW base unit of measure
            /cwm/bdmng LIKE mdpm-bdmng,     "sum of dep.req.< SC POs in PME
            /cwm/lfimg LIKE komdlgn-lfimg,  "sum of open deliveries in PME
            /cwm/rsmng LIKE mdpm-bdmng,     "sum of transfer reservations
            /cwm/slbbm LIKE mdpm-bdmng, "sum of third-party SC requisitions
            /cwm/slbmg LIKE mdpm-bdmng,     "sum of third-party SC POs
        /cwm/bdbam LIKE mdpm-bdmng.  "sum of dep. req's from SC req's
    ENDENHANCEMENT.
$$-End:   RM06ELLB_03----$$
    DATA:
          END OF lbbes.
Now, in the internal table lbbes they have created an enhancement point, and in the implementation of that enhancement point they have declared some extra fields. This is not an implicit enhancement but an explicit enhancement implementation with the help of an enhancement point.
Similarly, if I have to do this in my custom program, how do I go ahead?
I know that it is possible with an Implicit Enhancement Point, and I can implement that as well.
    Let me know your views about this.
    Thanks,
    Manish

  • XI scalability architecture - deploying integration builder objects

    Hi
I read in the SAP document 'Scaling up XI' that the way to scale up XI is simply the addition of dialog instances, and all these additional dialog instances access the database of the central instance.
Now, in this scaled-up architecture, do we have to transport Integration Builder objects onto each of the additional dialog instances, or does the entire set of Integration Builder objects needed at runtime reside in the database on the central instance, accessed by the different dialog instances? (In that case, the Integration Builder objects have to be transported to the central instance only, and the CMS setup should point to the URL of the central instance only.)
Is there any document that clarifies this concept of moving the production objects onto the central instance?
    Any thoughts, ideas shared on this would be rewarded.

    Hi Karthik,
You must have read this document:
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/c3d9d710-0d01-0010-7486-9a51ab92b927
Scaling of XI servers is done when performance is the main issue to be considered. Scaling thus generally includes scaling the Integration Server, so the entire set of Integration Builder objects should be placed at the central instance only.
This will help you:
    /people/sriram.vasudevan3/blog/2006/12/23/change-management-experience-from-a-sap-xi-architectural-study
    Also have a look here,
    http://help.sap.com/saphelp_erp2005vp/helpdata/en/93/a3a74046033913e10000000a155106/content.htm
    http://help.sap.com/saphelp_erp2005vp/helpdata/en/94/c461bd75be534c89b5646b6ddc0aff/frameset.htm
    Regards,
    Prateek

  • How to prepare for A3 install? Frozen with trepidation from reading here

    I'm mad I bought my A3 so quickly. I have early adopter regret. I tend to jump in early but now that I've been using Aperture for years the stakes are high.
    I have been sitting here staring at A3 and drooling about using it for a couple of weeks. The more I read in these posts the more confused I get. I don't understand migrating by project, yet I think that's the only way I'm going to keep my sanity.
First, a description of my setup: I have an early Mac Pro with only 2 GB of RAM. All my photos reside on my second 1 TB internal HD (does that make them referenced?), and I have about 300 GB of free space on my primary drive. My library is about 70,000 photos taking up 286 GB.
I didn't understand the concept of vaults when I started using A1. I now want everything in one vault, but I started out making a few vaults. My first vaults are now ghosted and say they are disconnected even though the external HDs are hooked up. My new 'master' vault says my library is only 225 GB, so I don't know how to account for that discrepancy. What photos are missing? I therefore have concerns that I'm not completely backed up.
    I would LOVE to have a 4-5 external HD RAID system (another concept I don't understand) but I've looked at the prices and I'm not able to spend that kind of money. It sounds wonderful and someday. . .
I don't want a regurgitation of what's already been posted, but given my brief description, does anyone have advice to help me build my confidence and accomplish this upgrade?
    PS - I am looking forward to using Faces and Places (to a lesser degree) so all the advice about turning off Faces makes me sad. Is there a way to have Faces work project by project instead of running through the whole library? I photograph people a lot so it will take a long time for it to process every photo.

    Hi,
In my case I installed AP3 and honestly had no crashes or major issues other than some slowness in AP3, but I see that's been solved with the updates, and my workflow may be helping too.
I run a managed AP3 library on a fast external drive connected via an eSATA card to my MBP.
    In your case I would say add more RAM (never hurts) although 2GB should be ok for now.
    Leave your AP3 Library on the main drive and reference your files that exist on the second internal drive.
When importing, you have a choice to leave files in their current location (so, referenced)
or have AP manage the library.
As for the size of your library at 286GB, it may be a good idea to install another internal drive in the Mac Pro.
    So
    drive 1) Boot drive with OS and all your apps.
    drive 2) AP3 Library
    drive 3) Original RAW files that your AP3 library can reference-as long as you tell AP to reference the files upon import.
    eventually maybe a 2 TB external drive (usually made of 2x1TB drives in one enclosure)
    and this external can be used to back up your AP3 library on one drive and your original RAW files on the second drive.
    http://eshop.macsales.com/CustomizedPages/Framework.cfm?page=mepal_splashraid.html
I hope this helps, but let's see what others may advise.

  • Is it Neccessary to update BIOS?

I am inclined to always update everything (drivers, BIOS, etc.) every chance I get, thinking there are fixes to various problems. However, I have read that it is best not to update the BIOS if the system is working well. In my case, although my computer is working with a BIOS dated April of this year, I am never satisfied with my system speed, and I have other issues.
The new BIOS was just released Nov 1. I am on an 1155-platform Sandy Bridge system, ASUS brand, with an i7 processor running Windows 7. I have several hard drives, RAID arrays, and other components internal and external.

Alex, I'm in agreement with what others have said here, but would like to add that you could also do some Google searches on this particular new BIOS version to uncover any issues that people may be having with it. Sometimes BIOS updates are not that well tested by the motherboard company and have to be quickly followed by another update to correct bugs. The fact that this new BIOS is already a month old gives some assurance, but you should probably do some searches nonetheless.

  • WLC+Anchor+Guest NAC

    Hello all
I have a few basic clarifications on these components. I have a network with LWAPPs and a WLC on one site - say Site A. Let's consider only the guest SSID access for now. The anchor guest controller is positioned on a DMZ segment at Site B. Sites A & B are connected through a routed network. I also have a NAC guest server at Site C. Now, I want to integrate all these components. As per my knowledge, the traffic flow is as follows:
1) When guest users access their SSID, they are mapped to the anchor controller in the DMZ through mobility groups. The WLC then initiates an EoIP tunnel to the DMZ controller. Firewall rules allow all required ports (IP protocol 97, UDP 16666, etc.), and end-to-end IP communication happens.
2) Upon the request, the anchor controller provides an IP address from the DHCP scope configured locally. In this case, will the default gateway of the PCs be the anchor DMZ controller's WLAN IP, or will it be local to Site A (say, an L3 switch)?
3) Then, when the user tries to access any site, he is given a web authentication portal, which is linked to the RADIUS server / NAC guest server. During authentication, the DMZ controller again talks to the NAC guest server at Site C, hence the firewall has to allow the RADIUS ports UDP 1812/1813.
4) After authentication, the user browses the internet. Now, what will the IP packet flow be in this instance? Will all traffic first be tunneled across LWAPP to the controller, and from there EoIP'ed to the anchor? Does the anchor then forward it to the internet gateway through the DMZ? As asked before, will the default gateway of the PCs be the WLAN IP of the anchor? If there are too many users, should I create multiple guest WLAN SSIDs for Site A?
    Sorry for the long post..
    Raj

    Greg
Thanks again... that was useful too. One last query, and this one has been grilling my head:
1) How does the guest VLAN egress work? I have a WLC on a new DMZ of a PIX, with a /27 subnet. This WLAN is used only for EoIP communication. Now, when the guest user gets a DHCP IP, what IP pool should I define here? Since the default route is going to be towards the PIX, it should be one among the 4 interfaces, right? Or should I have another interface or DMZ VLAN for the egress traffic from the WLC? The SRND says something about dynamic interfaces, but it's not explained at all :(
2) Will the foreign WLC talk to anchor controllers 1 & 2 in load-balancing mode? The reason I'm asking is, if DHCP is defined on Anchor 1 and the request goes to Anchor 2, then it will be an issue. Otherwise, is it advisable to split the DHCP scopes between the two anchors, say 1-127 on one anchor and 128-254 on the other?
3) Lastly, about the NAC guest servers: I have 2 of them in place. Will the guest database be replicated between them, like ACS does? If so, is the replication bidirectional? If a lobby admin creates an account, it would be good if he could just create it on one box and the other box replicated it.
Thanks for all your answers; they have been really useful to me, and I think they will be useful for anyone who works on anchor + guest + foreign WLC designs :)
    Raj

  • Freezing with Native Canon MXF

    We're having some problems with Premiere Pro (CS5.5 and 5.5.1) freezing for a few seconds (the screen goes gray, the program doesn't respond for a while, then it comes back) when scrubbing through native Canon MXF files.  It's doing it on TWO Windows 7 computers.
    Windows 7
    Intel i7 Sandy Bridge
    Asus Mobo
    16GB Ram
    Production Premium CS5.5  (one has a 5.5.1 update)
    It seems to play fine, but when we're looking through clips in the bins it's freezing way too often to be usable.
    We've deleted all the cache files, deleted / reset preferences etc etc. 
    Any ideas?

    @ExactImage:  I think 20 seconds would be plenty - enough to evaluate the playback without being fooled by frame caching.
@Roberio: I don't speak for product management, but in my opinion, probably not. The reason is that our 422 MXF support in CS 5.x was not directly implemented by us; it goes through Main Concept components. Consequently, to get a fix we'd need to take a new drop of the Main Concept SDK. This impacts not just 422 MXF support but all MPEG support in the product, so it's a riskier proposition, one that generally doesn't happen in a dot release. Evaluating a new SDK drop is something that normally happens during a regular product cycle.
I can vouch that the audio looping bug is gone in CS6, as I was asked to test a file with this problem against my implementation. Going forward, our 422 support will be easier to maintain: we do all the file handling/container navigation ourselves and only rely on the Main Concept SDK for the actual essence decoding, so we'll have more control for fixing issues such as the looping audio bug.

  • Add Qualification" free enhancement in Performance Appraisals

    Hi all
I am trying to activate the "Add Qualification" free enhancement in Performance Appraisals.
    Can anyone tell me:
1. Can I use this enhancement only at the VB level?
2. How do I correctly define "Refers to Attributes of"?
3. How can I set it up so that the scale follows the scale maintained in the Qualification (Q)?
    Tx a lot
    Maya

    Hello,
Within PM you have the possibility to extend the appraisal document with additional elements based on flexible criteria (for example employee, country assignment, etc.). The added elements can be appraisal objects (VB/VC) or external objects (D, Q, etc.).
These added elements need to be defined in a way the appraisal application understands. If we take the Q object, it has no configuration that describes how it is to be used in an appraisal environment.
That's where the reference objects come into play. These objects tell the appraisal application how to behave in the case of newly added elements. A reference element is a VB or VC and has the same settings as a VB or VC you add fixed in a template configuration.
So here you can say: if you add one or more Q's to the document, then all these Q's use the settings of the reference object (and will, for example, use only FAPP with scale YHM as per the column config on the reference object).
If you have a fixed or free enhancement, then at least one reference object needs to exist; however, multiple reference objects are allowed, and in the implementation of the fixed/free enhancement you can then program which element is to be used when.
In the config, double-clicking the entry after 'Refers to Attributes of' will navigate you to the reference object configuration.
Hope this clarifies the concept of reference objects.
    Regards and Groetjes,
    Maurice Hagen

  • ASM - Concept - Clarification Request

    Hello All,
I'm about to go ahead and install ASM for one of my clients. After going through the book ASM - Under the Hood, I have a few points I hope can be clarified by the experts here.
1 - Since ASM uses its own algorithm for mirroring, can I have an odd (unpaired) number of disks in the +DATA diskgroup? Say 11 disks?
2 - In regard to failure groups, what is the concept? Say I have 1 diskgroup +DATA with 4 disks - do failure groups mean that if Disk 1 goes, the primary extents move to another disk, say Disk 3?
- Can failure groups be in different diskgroups? Let's say the failure group for DATA disks would be a disk in RECOVERY?
- Or are failure groups additional disks which just sit there and are activated in case of a disk failure?
3 - On installation of ASM 10gR2, is there anything a first-timer should watch out for?
4 - Should I have a hot spare disk on a 15-disk Dell MD1000 array? Is this really necessary, and why? If one disk goes bad, we can simply change it. Does it make sense if I have 4-hour gold support on site with a new disk?
Thanks in advance for any assistance.
    Jan

1. Yes. ASM will determine the most suitable block mirroring strategy regardless of the number of disks in the diskgroup.
2. Failure groups affect how ASM mirrors blocks across disks. By default, each disk is in its own failure group - it is assumed that each disk can fail independently of the others. If you assign two different disks to the same failure group, you indicate that they are likely to fail together (for example, if they share an access path and the controller for that path fails), so ASM will only place a single copy on them and will create the other mirror in another failure group. For example, if you assign disk1 and disk2 to the same failure group, ASM will never create a mirror of a block from disk1 on disk2; it will only mirror to a different failure group. (A toy sketch of this placement rule follows at the end of this post.) Note that if your storage is already RAIDed, EXTERNAL redundancy diskgroups are pretty safe: hardware RAIDs are usually more efficient than NORMAL redundancy ASM groups while maintaining the same level of protection, thanks to hardware acceleration and the large caches they sport these days.
    3. Not really, as long as you follow the documented procedures and have Oracle patched to the current patchset level. However, if you employ ASMLIB, there might be issues that differ by the storage vendor.
4. If you are sure that no other disk will fail within those 4 hours, a hot spare is probably not that necessary. If availability is of concern, though, always plan for the worst case: having a hot spare will protect you from a second failure while the replacement is en route.
    Regards,
    Vladimir M. Zakharychev
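    As a toy illustration of the placement rule in answer 2 (purely conceptual - this is not ASM's actual algorithm, and the disk and failure-group names are invented):

    import java.util.List;
    import java.util.Optional;

    // Toy model of the rule in answer 2: a block's mirror must land in a
    // different failure group than its primary copy. Conceptual sketch only.
    public class FailureGroupDemo {
        record Disk(String name, String failGroup) {}

        static Optional<Disk> pickMirror(Disk primary, List<Disk> disks) {
            return disks.stream()
                    .filter(d -> !d.failGroup().equals(primary.failGroup()))
                    .findFirst(); // never mirror inside the primary's own group
        }

        public static void main(String[] args) {
            List<Disk> dg = List.of(
                    new Disk("disk1", "fg1"),
                    new Disk("disk2", "fg1"),  // shares a controller path with disk1
                    new Disk("disk3", "fg2"),
                    new Disk("disk4", "fg2"));
            // A block whose primary copy is on disk1 mirrors into fg2, never onto disk2
            System.out.println("Mirror lands on: " + pickMirror(dg.get(0), dg).orElseThrow().name());
        }
    }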
