Xsan planning

greetings
new to Xsan - been reading.
I'm working to set up a system like the one pictured here:
http://www.apple.com/xsan/datacenter.html
yet simplified a bit.
I have my storage pool, meta1 & meta2.
Now my NAS heads: these are AFP1, AFP2, AFP3, AFP4. I am assuming I set these up as SAN clients, not controllers?
Then in DNS I can have a CNAME of "AFP" for round-robin access:
afp 60 IN CNAME afp1.bar.ungh
afp 60 IN CNAME afp2.bar.ungh
afp 60 IN CNAME afp3.bar.ungh
afp 60 IN CNAME afp4.bar.ungh
etc...
and my users (which live in an LDAP directory) can then access the same folder from 4 different servers?
I know this does not work without the Xsan software, unless I have done something horribly wrong.
-- caveats, flames, insight or general gotchas welcome.
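One wrinkle with the records above: DNS does not allow multiple CNAME records under a single name, so round-robin access is normally set up with multiple A records under the shared name instead. A sketch of what the zone might look like (the addresses are made up):

```zone
; round-robin "afp" via multiple A records (example addresses)
afp1  IN A 10.0.1.11
afp2  IN A 10.0.1.12
afp3  IN A 10.0.1.13
afp4  IN A 10.0.1.14
afp   60 IN A 10.0.1.11
afp   60 IN A 10.0.1.12
afp   60 IN A 10.0.1.13
afp   60 IN A 10.0.1.14
```

The short 60-second TTL keeps clients from caching one answer for too long, which is what makes the rotation effective.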

Not the same folder from 4 different servers. Unless all 4 servers can see and access the same volume (only possible with the Xsan software), each server will only see the volumes assigned to it. If two servers access the same LUN without the Xsan software, the data will be corrupted almost instantaneously.
Even with the Xsan software, it's hard to guarantee that each AFP server is aware of the locks placed by the other AFP servers. The 'example' architecture is only for reference, and there are enough issues involved that it does not make a practical, expandable file server. If each AFP server serves different Xsan volumes, you have fewer potential issues. Try to share the same Xsan volume over multiple protocols (AFP/NFS/SMB) and you are going to have all sorts of weird file-locking issues.

Similar Messages

  • Creating less expensive small xSAN for 2 editors - suggestions?

I've been setting up a second editing bay for our company which is supposed to have identical capabilities to my own. It's a Quad with a Kona LHe, just like I'm using. We mostly edit TV commercials in DV50, uncompressed 10-bit SD and DVCProHD 720p, so our bandwidth requirements usually rule out editing over Ethernet.
    Since both editors need to have equal capability, I have chosen to create a small xSAN to allow both editors to access the same media drives simultaneously. With a SAN, large server volumes (located on a fibre channel network) show up on the editor computers just as a simple local hard drive would at very fast speed. You can point FCP scratches to the same places on each machine, then use any machine to edit any project instantly. My budget is large by my standards, but not very large on the scale of typical xSAN implementations so I've been collecting "deals" on the gear I need before I put it all together.
    Since I'm on a budget, it has been fun finding much of the gear on ebay, etc. FYI, here is a quick list of the stuff I've acquired:
    xServe RAID 5.6TB refurbished from Techrestore.com: $5900
    xServe 2Ghz Metadata controller w/ Tiger Server 1.5GB ECC RAM, PCIX fibre card new, but last year's model from Smalldog: $2600
    Brocade Silkworm 3200 8 channel 2Gb entry fibre switch new on ebay: $950
    2 PCI Express fibre cards new for 2 editor G5 quads: $1100
    3 xSAN licenses new on eBay: $1300
    Mini GB ethernet network stuff and switch (for metadata only - separate from our LAN network): $150
Misc. fibre cables and transceivers: $700
    One of my editing G5's will act as the fallback Metadata controller in case the xServe goes down at any time.
    I also am planning on at least 8 hours of time from a local xSAN authorized tech to help me set everything up and teach me what I need to maintain the system. This should be about $700 or so.
    I have found several articles posted at xsanity.com very helpful in planning this.
    If any of you have any experience with xSAN, you might agree this is a very low cost of entry into this very exciting new workflow. $13.5K for a fully functioning xSAN of this size is not bad at all. Many would spend that on the xServe RAID alone, sans SAN;-) And I can expand very easily since my switch will still have 3 unused ports. Note: the 5.6TB xServe RAID will only be about 4TB after accounting for the RAID 5 and dedicated (and mirrored) metadata volumes. Only 4TB. Pity!
Now that I have the main hardware components ready, it's time to install and setup the system. I'll be posting my progress in the next few weeks as this happens, but first would like to hear any impressions on this. Suggestions or warnings are appreciated by those with experience with xSAN. The xSAN forum here at Apple is used mostly by IT professionals and I'm mostly interested in hearing comments from editors and those that use the system in small settings like my own.
    One question for the Gurus: I don't believe FCP projects can be opened and used by two people at the same time, but if there is a way to do this without corrupting the project file, I would love to know.
    I'm also seeking to hire an assistant to occupy the new editing bay. Broad multimedia skills are needed, but I can train to a degree. We're an advertising agency just north of Salt Lake City, Utah. Please let me know if any of you are interested.

    Thanks for the suggestions. Brian, I'll be sure to get you some points once I close the topic.
    I didn't realize the Project files are best copied to the hard drive. Is this for a permissions related reason or just to avoid short spikes in bandwidth during saves?
I agree that metadata is best on a dedicated controller with full-scale Xsans, however with just 2 systems editing mostly DVCPRO50-resolution projects I can't imagine burning up more than 100MB/sec at any given time. OK, maybe, but this is unlikely for the next year or so. I've read that a single controller can achieve 80MB/sec easily, so 2 should be around 150MB/sec under heavy load. I'll have the metadata mirrored to 2 drives on one side of the XSR and the remaining 5 drives on that controller in a RAID 5. The other side of the XSR will be a full 7-drive RAID 5. These 2 data LUNs will be striped together in Xsan to achieve a full bandwidth of about 150MB/sec. I was told that the XSR controller can handle multiple RAIDs at the same time, so I can send metadata to one mirror array and designate the other as a RAID 5 LUN. Considering the small size of the data going to the metadata volumes and the relative simplicity of RAID 1 mirroring, I believe the controller shouldn't be adversely affected by this. Is this incorrect in your experience?
    I do plan on turning off the cache of the XSR since the system will be used for editing, yet it would be nice to have cache for the metadata so that's a point to consider.
The metadata should be segregated on its own XSR controller.
    Are you saying that the metadata sharing the same controller as the video data is going to slow the whole system down even though the metadata is located on separate, dedicated drives in that controller? I thought metadata was tiny and required very little bandwidth on the bus of a controller. If this is the case, the only bottleneck would be the RAID chip in the XSR. Again, these metadata files are very small and RAID 1 is very simple, so I don't see how it could slow things down enough to justify another $4K for a new XSR. If you still disagree, please let me know.
    As per your suggestion, and considering your stellar reputation in this forum, I'm shopping right now for a mostly empty xServe RAID to use this for just the Metadata volumes mirrored. It just seems like a huge waste to get an XSR just to use 2 drive bays mirrored. The plus side of this is I could begin filling the other controller in the future as my storage needs expand.
It would be really cool to use the 2 drive bays in the Xserve metadata controller for the metadata volumes, but I can see how that would cause problems if the Xserve goes down, making the metadata invisible to the fallback MDC. 100% uptime isn't that big of a deal for me, however. As long as the Xsan comes back online safely after the Xserve reboots without trouble, I'm OK with such a setup. Have you ever seen this done? It seems a bit of a hack to use anything but fibre channel for the metadata. I'd hate to introduce too much complexity just to save some bucks, but it is an intriguing idea and would cost a fraction of a new XSR. It would be fast, since writing the metadata would be local, with very little latency.
    For this reason, I'm also very interested to find any other simple and less expensive Fibre based storage solutions that could host my metadata as an alternative to full blown XSR for this. There are all kinds of fibre drives out there, but I don't want to waste a valuable fibre switch port just for one drive. All I need is 2 hardware mirrored bays accessible over fibre, preferably sharing the same channel on my switch. Does anyone know where I might find something like this?

  • Xsan and Final Cut Pro Setup

    Hello All, I am setting up an Xsan system to be used for Final Cut Pro HD editing. I have 2 full XRAIDs using 400 gb disks. I will have 2 PowerMac G5 editing stations. All connected together using a SanBox 5200 switch and a gigabit ethernet switch for the network.
    I would like some feedback on RAID setup.
I plan on creating a 2-disk metadata array using RAID 0; the remaining 4 disks on the controller (one hot spare) will be used as an ingestion area for HD video data. I am planning on setting this up as RAID 0 as well. My thinking here is that I don't need redundancy in the ingestion volume because we will keep the original HD tapes that the data was ingested from. The hot spare will provide a very small amount of protection to the 2 RAID 0 arrays on this controller. On the other side of the XRAID I will build a 6-disk RAID 5 array for the production files. The other XRAID will be set up as 2 additional 6-disk RAID 5 production arrays. All of the production arrays will be added to a storage pool and a volume created from the pool.
Is this the preferred method? What would you do or have done? Any drawbacks to the method I described? Does a volume built from RAID 5 arrays and hosted on 3 controllers have enough speed for HD editing? Any experiences shared are helpful info.
    Thanks.
    Eric

    Eric,
    First, I'd recommend you spend the $15 and pick up "The Xsan Quick-Reference Guide" by PeachPit press. It goes into a fair bit of detail on recommended configurations.
Second, I'm going to assume you meant a 2-disk RAID 1 for metadata. RAID 0 would be useless for this purpose -- metadata isn't high-bandwidth, and you NEED protection for it. Lose your metadata volume, lose your SAN. It's that simple. So use a RAID 1 mirror for it.
Also, for ingest, I'd recommend RAID 5. For sequential workloads like this, RAID 5 is nearly as fast as RAID 0, and it has protection.
    Note that a hot spare is absolutely worthless when you're dealing with RAID 0. It will give you no protection -- when a disk in RAID 0 fails, the game is over. No restoration is possible on the hot spare -- there is no parity data that can be used to reconstruct the data.
    So I'd do a 2-disk RAID 1 for metadata, and either 4 or 5 disks RAID 5 for ingest (depending on if you want a hot spare). Note, that you will not be able to ingest uncompressed HD onto this volume -- the bandwidth required is way too high. You could do DVCPRO-HD, perhaps.
    If you set up 3 controllers as "6-disk RAID 5 + hot spare," your available bandwidth should be in the 200-240 MB/sec range, which is sufficient for HD editing. It will only be sufficient for one user at a time, of course, and you'd want to avoid having someone do big file copies on the SAN while the editing is taking place.
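As a back-of-the-envelope check on that range, assuming roughly 70-80 MB/sec per 6-disk RAID 5 LUN (assumed figures, not measurements from this hardware):

```shell
# Aggregate bandwidth estimate for 3 RAID 5 LUNs striped into one volume.
PER_LUN_LOW=70     # assumed MB/sec per 6-disk RAID 5 set, low end
PER_LUN_HIGH=80    # assumed high end
NUM_LUNS=3
LOW=$((PER_LUN_LOW * NUM_LUNS))
HIGH=$((PER_LUN_HIGH * NUM_LUNS))
echo "expected aggregate: ${LOW}-${HIGH} MB/sec"
```

That lands in the same ballpark as the 200-240 MB/sec quoted above; real numbers depend on stripe settings and controller configuration.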

  • XSan with Gigabit Network

    Please,
Is the Xsan solution compatible with a Gigabit network? I have an Xserve Xeon and an Xserve RAID, but I don't have Fibre Channel cards, cables or connectors.
Is it possible?
    Tks

    Do you have a fibre channel switch? If not, do you plan on buying one?
You can connect the RAID to the Xserve via Fibre Channel and present the volumes as network-attached storage. I do not think you will get enough bandwidth for three video workstations. You could try aggregating the ports of the server and the workstations through a managed Ethernet switch.
Ideally, you would connect all the workstations, the Xserve and the RAID to a Fibre Channel switch and then install Xsan on each. This will provide a direct connection to the storage via the FC switch.
    There will be some overhead, as a portion of the Raid will have to be set up for the metadata of the files stored on the SAN.

  • Using Xsan to carry data over dual gigabit ethernet connections?

A designer (mostly non-technical) colleague of mine has claimed that Xsan can transfer data (not metadata) over dual gigabit Ethernet links, in lieu of Fibre Channel, with metadata flowing over a third Ethernet link. Is this true? Has anyone done an install in this manner? I can't find any reference to this, anywhere.
    If anyone can help shed some light on this, I would appreciate it!
    Thanks,
    Ben

If you're planning on connecting to the SAN in this manner, be aware that the amount of bandwidth is MUCH smaller than Fibre Channel. (That's why Xsan uses fibre between clients and the SAN.)
    So... if you're thinking of pulling uncompressed HD through ethernet (even Dual Ethernet) you're going to be woefully disappointed.
Having said that, we regularly connect dozens of "story editors" to our Xsan via a single gig-E "reshared" connection. The secret is that we ONLY use "offline RT" media over this connection. We've locked our "full res" media out of this kind of connection because it would only jam up the system and bring our internal network to its knees. I would regard this kind of "tapping in" to the SAN as a secondary method, not your primary way of accessing the media.
    Good luck.
    mark

  • Scanner problems LIDE60, only works with XSane, but Xsane is buggy!!

    Hello everyone, I'm trying to get my LIDE 60 scanner working properly with Arch.
Xsane works OK with it, but the scanner doesn't seem to work with anything else (I installed gnome-scan and sane-pygtk from AUR).
    Both these apps do not see it at all.
    I like XSANE, but there is a bug with it: I cannot select the area I want to scan in the preview area.
When I try to drag/enlarge/move the selection box, it disappears and goes elsewhere on the window...
    (I would PREFER fixing Xsane, but in the meantime, would like to try other apps)
Here is my lsusb output
    [charles@amdx2 ~]$ lsusb
    Bus 002 Device 002: ID 045e:009d Microsoft Corp.
    Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 001 Device 004: ID 04a9:10c4 Canon, Inc.
    Bus 001 Device 003: ID 04a9:221c Canon, Inc. CanoScan LiDE 60
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    the output from scanimage -L
    [charles@amdx2 ~]$ scanimage -L
    device `genesys:libusb:001:003' is a Canon LiDE 60 flatbed scanner
    genesys IS enabled in /etc/sane.d/dll.conf
    Here is the output from flegita/gnome-scan
    as you can see it seems to be ignoring the scanner it finds...
    "see line ** (flegita:11223): DEBUG: SANE device genesys:libusb:001:003 failed or ignored"
Please, please HELP. I'm not totally new... just been on Windows for a couple of years,
but I've been installing Linux on and off to try. (So far, Arch seems a keeper. It takes a while to install,
but it's great!)
    [charles@amdx2 sane.d]$ flegita
    ** (flegita:11223): DEBUG: gnome-scan-init.vala:33: Initializing GNOME Scan 0.7.1 for flegita
    ** Message: gsane-module.c:53: SANE version is 1.0.20
    ** Message: gsane-module.c:53: SANE version is 1.0.20
    (flegita:11223): GLib-GObject-WARNING **: Two different plugins tried to register 'GSaneBackend'.
    (flegita:11223): GLib-GObject-WARNING **: Two different plugins tried to register 'GSaneScanner'.
    (flegita:11223): GLib-GObject-WARNING **: Two different plugins tried to register 'GSFileBackend'.
    (flegita:11223): GLib-GObject-WARNING **: Two different plugins tried to register 'GSFileScanner'.
    (flegita:11223): GLib-GObject-WARNING **: Two different plugins tried to register 'GSFileOptionFilenames'.
    (flegita:11223): GLib-GObject-WARNING **: Two different plugins tried to register 'GSFileFilenamesWidget'.
    /usr/share/themes/Darklooks/gtk-2.0/gtkrc:181: Invalid symbolic color 'tooltip_bg_color'
    /usr/share/themes/Darklooks/gtk-2.0/gtkrc:181: error: invalid identifier `tooltip_bg_color', expected valid identifier
    (flegita:11223): GLib-CRITICAL **: g_utf8_strlen: assertion `p != NULL || max == 0' failed
    (flegita:11223): GLib-GObject-CRITICAL **: g_object_new: assertion `G_TYPE_IS_OBJECT (object_type)' failed
    ** (flegita:11223): CRITICAL **: gnome_scan_scanner_selector_on_scanner_added: assertion `new_scanner != NULL' failed
    (flegita:11223): GLib-GObject-CRITICAL **: g_object_unref: assertion `G_IS_OBJECT (object)' failed
    (flegita:11223): GLib-GObject-CRITICAL **: g_object_new: assertion `G_TYPE_IS_OBJECT (object_type)' failed
    ** (flegita:11223): DEBUG: SANE device genesys:libusb:001:003 failed or ignored
    ** (flegita:11223): DEBUG: gnome-scan-dialog.vala:400: job status updated to unconfigured
    ** (flegita:11223): DEBUG: gnome-scan-dialog.vala:400: job status updated to unconfigured

    BTW, this is on a fresh install of Arch done yesterday: Linux groovy9 3.11.4-1-ARCH #1 SMP PREEMPT Sat Oct 5 21:22:51 CEST 2013 x86_64 GNU/Linux
    For a plan B, if there's a straightforward way to custom compile my own kernel, I'd like to try a slightly older one to see if I have better luck.  This same model of scanner is working fine on a 3.5 kernel, for example.
    Last edited by groovy9 (2013-10-09 21:34:22)

  • Oddball client not mounting XSan Volumes

Background: My client, a TV news broadcast station, had Apple and a third party design and install an Apple-based Xsan storage network. They ran all new fibre and gigabit Ethernet to each station to use the Xsan. It has been working great, with the exception noted below.
Setup: 2 MD controllers, 1 Xserve, 4 XRAIDs, which together make up two large multi-TB volumes (RAID 5, striped together).
The Xserve, XRAIDs, MD controllers, and all clients are connected together on a Fibre Channel network using QLogic switches. SUBNET: 10.200.1.x.
    Ethernet Network, also connected all the above together on HP Gigabit Switches. SUBNET: 10.100.1.x The public WAN is made available on this subnet by a connection to the HP switches. Still getting details on how this was done.
    Everything is running OS X 10.4.8 or better, XSan File System 1.4.
    Setup
There are 8 workstations (edit bays) that act as ingest and editing stations. Each station has two Xsan volumes mounted, NEWS01 and NEWS02. The odd-numbered bays use NEWS01 and the even-numbered bays use NEWS02.
    PROBLEM
All bays mount the Xsan volumes except one: FCP08 will not mount the Xsan volumes. We have rebooted the workstation, and even went to the extreme of shutting down the entire setup (all bays and MD controllers, file servers, etc.), then bringing everything back up. Same problem.
    Based on some forum discussion, we have tried the following:
    * Ensured that there is no empty mount point in /Volumes
* Uninstalled all Xsan software and reinstalled v1.4 from Apple's website.
* Removed the client from Xsan Admin and re-added it, making sure to enter a valid serial number, etc.
    * Verified that all fibers are working, all link lights look good, and you can ping across the MD network.
When you use Xsan Admin from either FCP08 (edit bay 08) or from the server, and you add both MDCs to it, you see the client; you click on the client and click either Mount (Read-Only) or Mount (Read/Write).
It will show "Mounting..." and then flip back to "Not Mounted". The only feedback we have received so far is "ACCESS DENIED". All affinity settings are set to rwxrwxrwx (wide open), and all the volumes and workstation logins have the same access to the volumes. I cannot find any restrictive permissions anywhere.
    I plan on trying to move them away from a /etc/hosts type setup to a proper DNS Server running on their XServe using the DNS Server function of OS X Server. But currently all edit bay stations have the same /etc/hosts file installed, which accounts for MD and ethernet networks.
    ANY IDEAS what is wrong with this workstation? With the setup?
I have had extreme ideas from some who have said that we need to blitz the entire client and reinstall the operating system. I am not willing to go down that route, since each edit bay was built manually without an image (another aspect I will be remedying soon). It will take some time to rebuild this edit bay client if that is the only solution.
The only question I have, if that is the popular opinion, is: what is different between a fresh client OS install and the existing one as far as Xsan is concerned? These are edit bays, not private workstations; no one installs any extra software or surfs the net, etc. They are used for ingest and editing only.
    HELP!
      Mac OS X (10.4.10)  

    Hi,
    You could check if it is Fibre Channel related:
From a terminal, run `cvlabel -l`.
This should give you a list of the LUNs in your volume.
If this tool does not show any LUNs, you might have a zoning issue.
    Regards
    Donald

  • No LUNs in Xsan admin

I'm using Xsan 1.1 on OS X 10.4.7 and I added a new Xserve RAID into the mix. But the trouble is that the LUNs are not showing up in Xsan Admin (v1.3).
`cvlabel -l` shows the two LUNs, labeled "unknown". So does `diskutil list`.
    All Xserve RAIDs have the same firmware, no LUN masking is on, and all the ports on the fibre switches are in the same zone.
Disk Utility.app can see the LUNs and will let me make a software RAID combo of them, but that's not what I want. What I would like is to expand two of my data-only storage pools, if only the LUNs would show up in Xsan Admin.
I've looked through the manuals, man pages, discussions and so forth.
    Any ideas?
    Thx.

I reviewed the following documentation for adding LUNs to storage pools:
    http://docs.info.apple.com/article.html?artnum=303571
    http://docs.info.apple.com/article.html?artnum=303570
    http://docs.info.apple.com/article.html?artnum=301911
    And it worked. Sort of.
I shut down all the clients except 2 or 3 (MDC, backup MDC and my admin Mac). Then I unmounted the volume from the last remaining clients, and I stopped the volume. Then I proceeded to back up the cfg, check the volume, and add the LUNs to an existing storage pool. All went well until it failed to update the volume with the new cfg. I reverted to the old cfg and added the LUNs to a new storage pool. That worked.
Thanks for everyone's thoughts on this one. I took more time to do it this time than I did a month or so ago. Careful planning and backups (cfg and data) are helpful for peace of mind.
    -x

  • Automount xsan on OSX Server 10.4.8

    Greetings,
I'm planning to put up a server-based network where all of my clients log in through my OS X Server 10.4.8 using Open Directory. Is it possible to limit which clients can mount my Xsan or Xserve RAID volumes, and even set quotas?
And also, how can I mount a specific shared folder automatically as soon as the client logs in?
    Hope you could help me
    Thanks
I'm using:
    SERVER:
    -MAC G5 dual 2.5
    -OSX Server 10.4.8
    -XSAN w/ XSERVE RAID thru FIBER OPTIC card
    CLIENT
    -MAC G5 dual 2.5
    -OSX 10.4.8
    -XSAN thru FIBER OPTIC CARD

It sounds like it could have to do with UIDs from AFP clients sharing the same hidden folder (.Temporaryfiles/.User501 "something" - don't remember the exact name) at the root of the server share(s).
Office will create this folder, but it really should have a unique UID for each user so the server can separate who is "editing" what (temporary) file.
    This can happen if "all" clients are setup with the local UID 501.
    To get around this we have been changing the users local user account UID with a couple of Terminal "tricks" by "gatorparrots":
    http://forums.macosxhints.com/archive/index.php/t-12077.html
    Changing UIDs in the terminal is a simple NetInfo property overwrite:
    sudo niutil -createprop . /users/userName uid XXX
    (replace userName as appropriate and XXX with the new UID number)
    Finding and changing UIDs across the filesystem is a one-liner command:
    sudo find / -user UID -exec chown userName {} \;
    (replace UID with the old UID number and userName with the new user name to associate file ownership.)
    On the local machines we've used the account UIDs from the server (WGM).
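A small wrapper around the two commands above may help avoid typos. This sketch just builds and prints the commands rather than running them (the user name and UID values are placeholders for your site; review the output before running it with sudo):

```shell
#!/bin/sh
# Build the Tiger-era NetInfo UID-change commands from the reply above.
USERNAME="jdoe"   # placeholder: the local short name
OLD_UID=501       # the shared default local UID
NEW_UID=1042      # placeholder: the UID taken from the server (WGM)

SET_UID_CMD="niutil -createprop . /users/$USERNAME uid $NEW_UID"
CHOWN_CMD="find / -user $OLD_UID -exec chown $USERNAME {} \\;"

# Dry run: print what would be executed
echo "sudo $SET_UID_CMD"
echo "sudo $CHOWN_CMD"
```

Running the printed `find` over the whole filesystem takes a while; you could scope it to the user's home folder first if most of their files live there.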

  • Installing Xsan 1.0 on a new tiger system - Help - Urgent

    Here's my situation
    I purchased a version 1.0 copy of Xsan.
    I am trying to install it on a new dual 2.7 G5 with tiger installed.
    It says I cannot do this because the version of xsan and tiger are not compatible.
    Is there a way to do this, or am I screwed?
    Thanks in advance for your help
    DBK

OK, first: what's the version of Tiger? And who sold you a 1.0 version knowing that you were running Tiger? You will need the 1.2 version, and then you'll need the 1.3 upgrade from Apple.
    Please take your time to plan the build of your Xsan and the deployment as well.
    Poor planning with this type of software and hardware will come back to bite you!!!

  • Assigning XSAN MDC to a RAID controller

In reading the Xsan administration guide, it specifies that you should assign data LUNs to each controller for balance: LUN 1 goes to controller (A), LUN 2 goes to controller (B), LUN 3 goes to controller (A), LUN 4 goes to controller (B).
I am looking at my current setup that was installed by an integrator, and all of the data LUNs are assigned to one controller (A) while the metadata LUN is assigned to controller (B). I also noticed that Forced Read Ahead is disabled on both controllers.
I am having some performance issues and plan on running a file system check when I can take the Xsan offline. Since I will be taking it down, and these settings appear incorrect... I was wondering if I should change the controller assignments for the LUNs and enable Forced Read Ahead?
    Any thoughts?
    Thanks,
    Ray

    The VTrak config scripts in Apple's KB all have LUNs divided evenly between controllers. See the scripts in articles linked from Promise VTrak: Configuring for optimal performance. I'm not sure if the x30 VTrak built in scripts do the same thing.
    Re Forced Read Ahead see this guidance in http://kb.promise.com/Attachment378.aspx:
    • Controller Settings, Forced Read Ahead: Enable or Disable (aggressive pre-fetch)
    o Controller Forced Read Ahead should be enabled for large block sequential access such as rich
    media type applications
    o Controller Forced Read Ahead should be disabled for Random IO type applications
    You should test with each enabled to see which works better for you.

  • XSAN and OD Master

    Hello,
    I just have 2 questions
1. Can I have the OD master on Leopard and leave the Xsan, which is on other Xserves, running Tiger?
2. If I have to rebuild the OD master because of issues I'm having (I posted on the OD forum), does that have any effect on the Xsan? Of course I would shut the Xsan down while I do this.
    Thanks.

    To reiterate Frank's point - if your OD is an XSAN CLIENT - then yes, you have a big problem if it is running 10.5 but your MDCs are running 10.4. Otherwise, no problem.
    Also, when you say "rebuild," it depends on what exactly you mean by "rebuild." If you plan on recreating users from scratch, then you will have permission problems on XSAN unless you are careful to make sure you reassign UIDs exactly the same. If you plan on restoring your entire LDAP, or even just exporting users and groups from your current build, then importing them on your new build - you should be OK.

  • 2 Xserves, 1 XRaid, no Xsan or Masked LUN - Options?

Hello, we've got an original Xserve hooked up to an XRaid. The setup works great, but we're going to buy another Xserve and I'd like to set something up for failover.
    I thought we would just get a fiber switch, hook up the XRaid and both Xserves and we'd be able to configure it with software. I think the masked LUNs would have worked (maybe), but it sounds like that isn't supported anymore.
I've read about Xsan, but it's too expensive for us right now. I will definitely put that on the budget for 2009, but not now. As I read it, we'd have to buy 2 additional Xserves so we had 2 for metadata and 2 for file services, plus 4 Xsan licenses. That's a lot of money (but really cool).
So, my current low-cost plan is to set up the new Xserve as the primary LDAP/file server and the old one as the backup (clone?). The XRaid would be hooked directly to the primary Xserve. If the primary Xserve went down, the backup would provide LDAP services, etc., but the XRaid would go offline. The plan would then be to power everything down, move the XRaid fibre connections to the backup server, then power everything up. Obviously, this presents 15 minutes of downtime even if all goes well, but my question is +will the backup Xserve (if it is configured as a clone or replica of the primary) immediately recognize the XRaid and be able to host the sharepoints?+
Is there a different approach we should consider? Is there a way to have the 2 Xserves configured as above, but plug one XRaid controller into each Xserve? Theoretically this might allow better throughput if I could split demand between the two Xserves. Then if one failed, I could move the fibre connection to the other Xserve? Another issue is that I'm not sure if the backup Xserve could independently host files. This would probably mean it wasn't truly a replica anymore. Perhaps it would work if it was only an LDAP backup server?
    Any suggestions or clarifications would be appreciated. Thanks.

will the backup Xserve (if it is configured as a clone or replica of the primary) immediately recognize the XRaid and be able to host the sharepoints?
    Automatically? not at all. In order to do that it would have to be already configured to share those paths, even though those paths would be empty or non-existent.
    However, it wouldn't be impossible to write a script that configured the shares. That way when you connect the fiber channel, you run the script and the sharing comes up.
    There might be easier paths, though, or at least faster ones. Mac OS X Server has a failover daemon that allows one machine to monitor the health of another machine and then automatically reconfigure itself to take over that machine's tasks should it fail.
    In this scenario you might want to consider connecting both machines to the RAID via a fiber channel switch, but have the second machine either not mount the array, or mount it read-only (you do not want two machines writing to the same array at the same time).
Then, when the failover daemon detects that the primary machine is down, it can mount (or re-mount) the array read-write, reconfigure the network (to take over the primary machine's IP address) and start up the file-sharing services.
    This can all happen automatically, with minimal disruption to users, and in a lot less time than 15 minutes.
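The takeover script described above might look something like this sketch. It runs in dry-run mode by default (it only prints the commands); clear DRY_RUN to actually execute. The volume path and share name are placeholders, and `sharing` and `serveradmin` are the Mac OS X Server command-line tools:

```shell
#!/bin/sh
# Sketch of a failover takeover script for the standby Xserve.
DRY_RUN=${DRY_RUN-echo}          # set DRY_RUN= (empty) to execute for real
VOLUME="/Volumes/XRaidData"      # placeholder: the array's mount point
SHARE_NAME="Projects"            # placeholder: the sharepoint name

$DRY_RUN mount -uw "$VOLUME"                     # re-mount read-write
$DRY_RUN sharing -a "$VOLUME" -n "$SHARE_NAME"   # publish the sharepoint
$DRY_RUN serveradmin start afp                   # bring AFP back up
```

Taking over the primary's IP address (the failover daemon's job) is deliberately left out here; the script only covers the storage and sharing side.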

  • New XSAN Buildout - 10.5.8 vs. 10.6 for MDC and clients

I am planning on building out a large Xsan utilizing Promise RAIDs and 2 Xserve MDCs, with a few Final Cut workstations on QLogic 5802 switches. I was planning on using Apple's latest 10.6 OS with the latest Xsan 2.2. I'm being told that if I really want rock-solid performance I should keep all my Final Cut clients and Xserves at 10.5.8, because some issues exist if I use the latest versions of the OS and client. Can someone please comment on the validity of that statement? I'm hesitant to install an operating system that is over a year out of date.

I'm actually wondering about something similar. I have 2 brand-new Xserves running 10.6.3 and an Xsan 2.2 volume connected via fibre. The FCP clients that connect are all on 10.5.8, also connected via fibre. I have not had any issues with the clients being on an older version of the OS, but I am wondering: if I upgrade the server to 10.6.4, will anything break??? I may upgrade one client to 10.6.4 today and see how that goes first.

  • Saving a Final Cut Pro project on XSan is read only to ACL

Running OS 10.4.11 on all machines, including MDCs. FCP 6.0.2. Running Xsan 1.4.1. I plan on upgrading to 1.4.2, but this was working 2 weeks ago and suddenly it's not.
When I create a new project and then save it to the Xsan volume, the group permissions exclude the ACL permissions and only show the POSIX permissions, which have the ACL group as read-only. This happens on all machines (Mac Pros, for what it's worth). Media files have proper permissions.
    Saving a Photoshop or TextEdit file to same location and the file has proper ACL permissions. Seems to be FCP specific.
I created a local ACL on the boot volume of an edit system, and saving there in FCP works as it should, giving the ACL read and write permission. Makes me think it's the Xsan somehow.
The user can open the project and just Save As. It still shouldn't be behaving like this, though.
    Have tried a full shutdown of the XSan, and verified CIFSServer is active.
Any thoughts? I think my next step is to upgrade to Xsan 1.4.2, even though this WAS working two weeks ago.
    Thanks
    Josh

Are you saving the project file directly to the SAN, or saving the project file locally and then copying it to the SAN? The latter is the "recommended" best practice -- Final Cut project files still aren't designed to be worked on directly from the SAN (they're not like Avid "bins").
