XSAN on a G4?

The XSAN 1.4.0 notes suggest it will work on a dual 800 G4 or faster with Tiger.
Later release notes, from 1.4.1 onward, say G5; nothing explicitly states whether a G4 will or won't work.
Has anyone got 1.4.1 or later running on a G4 client?

It's a client. The MDCs are 10.5.6 Intel Xserves.
If it remains stable, the G4 is a means to archive material from one system to another, in a location where it's safe from fiddling fingers or security throwing the mains breaker.

Similar Messages

  • Can copying a file from a Xsan volume seen thru local sharing then  be...

    I'm told this and wonder:
    The xsan is mounted on computer A. Computer B connects to computer A and copies a file off the Xsan that is visible because computer A is connected to it.
My associate says that files copied this way, then burned to a DVD and sent to a client, are unreadable. Is this possible? Even if some metadata is missing, a file (Photoshop PSD or TIFF) should still have permissions that allow it to be opened, shouldn't it?
    thanks
    alan

If the permissions were set to a local user on your network and you sent those files on DVD to someone else, the local users on that system would still be able to open them (the first local user is UID 501 on every Mac). As long as the files weren't encrypted, it shouldn't be a problem. In the absolute worst case, the local admin account could change or reset the permissions, even if the source was an OD account whose UID is unknown on the destination system.
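Since the answer hinges on the numeric UID that travels with a file, here is a small sketch (Python, illustrative only) of how to see exactly which owner and mode bits a file carries; the destination machine maps that UID to whatever local account happens to match:

```python
import os
import stat
import tempfile

def describe_ownership(path):
    """Return the numeric UID/GID and POSIX mode string a file carries.

    When a file travels to another machine (e.g. on a DVD), only these
    numbers travel with it; the destination maps the UID to whatever
    local user has the same number (501 is the first local user on
    every Mac).
    """
    st = os.stat(path)
    return {
        "uid": st.st_uid,
        "gid": st.st_gid,
        "mode": stat.filemode(st.st_mode),
    }

# A file we just created is owned by the current user:
with tempfile.NamedTemporaryFile(delete=False) as f:
    print(describe_ownership(f.name))
```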

  • New XSan setup requirements questions

    Hello All,
    We're in the process of installing XSan in our facility, swapping out our current SanMP configuration. I had three questions about metadata controllers.
1) How CPU intensive is the Xsan MDC function? I wanted to repurpose one of our Mac Pro 1,1 machines as an Xsan backup metadata controller, but I'm a bit worried that the CPU won't be zippy enough; it's a 2.8 GHz dual-core x2 Xeon. This would be the backup controller, exclusively dedicated to the task.
2) I understand that running Xsan requires OS X Server to manage permissions, etc. But if I recall from our experience with Xsan 1, the MDC could run on the OS X client version just fine. The current Apple literature says otherwise. Would I be able to run our backup MDC on OS X client? We already have another Xserve functioning as our OD master/file server, so the primary MDC running on OS X Server would already be the redundant OD.
3) How fast does the MDC FC connection need to be? If my clients are all on 4/8 Gb x2 connections, can I run my MDCs on 2 Gb, or a single 4 Gb link? My understanding is that they are generally just accessing the metadata LUN, in which case that LUN itself is the bottleneck to performance.
Thanks in advance for all your help.

Speaking of metadata, I suggest you look into this: http://www.getactivestorage.com/pr041210-1.php
    Active is onto something here and this answers a huge problem when scaling a SAN. Now your storage arrays can all use the max number of drives per box.
    As for your other questions:
    1: CPU is not a huge deal but do things right. Get an MDC and an MDB and let them do just SAN control. Scale your RAM needs to allow for 4 GB of RAM per SAN volume if you are doing AD permissions and (gasp) Spotlight. Thus, if you have one ingest volume and one work volume, have 8 GB of RAM + another 4 for the OS. More is better. Don't expect to use the devices as cluster nodes either. Leave the processing elsewhere.
    2: Using server is recommended and the official way to do it. If you already have an OD Master, then just connect the MDC and MDB to the domain. They do not need to be replicas.
    Just my two cents. Enjoy Xsan.
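The RAM guideline above reduces to simple arithmetic. A sketch of the rule of thumb as stated (4 GB per SAN volume plus 4 GB for the OS; these are the reply's figures, not an Apple-documented requirement):

```python
def mdc_ram_gb(num_volumes, per_volume_gb=4, os_overhead_gb=4):
    """Rule-of-thumb MDC RAM sizing from the reply above:
    N volumes * 4 GB each, plus 4 GB overhead for the OS."""
    return num_volumes * per_volume_gb + os_overhead_gb

# One ingest volume + one work volume, as in the example above:
print(mdc_ram_gb(2))  # -> 12
```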

  • General xSan questions for a new setup

So I'm looking to set up Xsan and had mainly one question. I know it's recommended that you have a secondary metadata controller in case one goes down, but is this absolutely needed? We will have very few (like 3) people using the system, so it won't be in use very often. If the controller goes down, does the system go down entirely, or can clients still access data, with the potential of messing things up because the metadata server isn't functioning?
    Thanks.

If you have one controller and the controller goes down, the SAN goes down and you run a higher risk of volume damage. The reason for a dedicated backup controller is that when the primary goes south, there is another system to take over SAN control, allowing you to complete writes and shut the volume down cleanly.
    If the system is not going to be used that often, are you sure Xsan is the right solution for you? Even if you try and scrimp to save money and make a workstation a backup controller, you are still going to spend about $40K for the solution ($20K for a single RAID array, $7k for a dedicated controller, $4k for the FC switch (you should get two), wiring costs, redundant power, cooling, etc). That is a lot to spend for something that is not going to get much use.
Oh, and don't go easy on RAM. For your controllers, get 2 GB per SAN volume and try to leave 4 GB as overhead for the OS and other services. Don't run anything else on the main controller other than DNS and directory services (unless you are doing AD integration).
    Hope this helps

  • XSAN and 4K video editing

I currently have an Apple Xsan 3 deployment: a Mavericks Mac mini MDC with an ATTO ThunderLink FC adapter, a Promise E610f RAID with a JBOD, and a QLogic 5602 4Gb FC switch.
I was originally looking to simply upgrade my storage, as our editors are working in ProRes 422 in Final Cut Pro X without any issues. As our Promise RAIDs were going out of warranty, I felt comfortable replacing the RAID and building a new volume on new RAID hardware. But then... I was told that in the middle of 2015 we will need to be able to edit 4K. I did some Googling and see there is a wide variety of 4K formats. I am currently trying to nail down the format they will be using, and also how many streams I will need to feed to our edit stations.
    Has anyone worked with 4K on an Apple Xsan? I am a bit confused on the requirements... from what I have read 4Gb FC is not going to cut it... 8Gb FC can handle some, but may have issues with some formats... 16Gb FC looks capable but the RAIDs I have been looking at use 8Gb FC.
    Any input would be appreciated. I am in the Washington, Baltimore area.
    Thanks,
    Ray

The speed of the RAID and the Thunderbolt connection would help. But hopefully you don't need to buy more equipment, and no, it's not the CPU. (The GPU, on the other hand, could be a factor.)
Check the Viewer Display options, and if Quality is set to Better Quality, change it to Better Performance. If you're not already optimizing, transcode a clip and see whether the playback is good enough. And if neither of those steps is enough, transcode another clip to Proxy and change your Media setting to Proxy (again in Viewer Display options). If you use Proxy, remember to change the Media setting back to Original/Optimized before exporting.
    Russ
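To make the original bandwidth question concrete, here is a rough sanity check of how many 4K streams each FC link speed could feed. The stream data rate and link payload figures are my ballpark assumptions (verify against Apple's ProRes data-rate tables and your actual codec/frame rate), and the 30% headroom factor is likewise an assumption:

```python
def streams_supported(link_mbytes_s, stream_mbytes_s, headroom=0.7):
    """How many simultaneous streams a link can feed, keeping ~30%
    headroom for seeks, metadata traffic, and RAID rebuilds."""
    return int(link_mbytes_s * headroom // stream_mbytes_s)

# Assumed figures; check Apple's ProRes data-rate tables for your format:
PRORES_HQ_UHD = 92  # ~734 Mb/s at UHD 29.97 is roughly 92 MB/s
FC_PAYLOAD = {"4Gb": 400, "8Gb": 800, "16Gb": 1600}  # nominal MB/s

for link, rate in FC_PAYLOAD.items():
    print(f"{link} FC: ~{streams_supported(rate, PRORES_HQ_UHD)} HQ streams")
```

Under those assumptions, 4Gb FC supports only a few UHD HQ streams per link, which matches the worry in the question.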

  • 10.6.2 Server Xsan 2.2.1 and SNFS 3.1.2 MDC

    I'm trying to connect an Xserve running 10.6.2 and Xsan 2.2.1 to a Linux MDC running StorNext 3.1.2, but Xsan can't connect to the fsmpm. I receive the following entries in system.log:
    Feb 25 17:52:49 engOSX106Svr xsand[1186]: Synchronizing with fsmpm.
    Feb 25 17:52:49 engOSX106Svr sandboxd[1191]: portmap(1190) deny network-outbound /private/var/tmp/launchd/sock
    Feb 25 17:54:49 engOSX106Svr xsand[1186]: Failed to synchronize with fsmpm (error 2).
    Feb 25 17:54:49 engOSX106Svr fsmpm[1189]: NSS: Name Server '192.168.171.1' (192.168.171.1) heartbeat lost, unable to send message.
    Feb 25 17:54:55 engOSX106Svr xsand[1186]: fsm shutdown after 0 seconds
    Feb 25 17:54:55 engOSX106Svr com.apple.launchd[1] (com.apple.xsan[1186]): Exited with exit code: 1
    I am able to ping the MDC (192.168.171.1) and the OS X firewall is disabled. Other Linux and Windows cvfs clients are able to connect.
    Can anyone provide me some insight into this problem? My hunch is some sort of port filtering but I can't figure out what service might be blocking the traffic.
    Thanks,
    Kevin Wright.

Here is my config.plist, with the license code and company name X'd out.
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>computers</key>
    <array/>
    <key>metadataNetwork</key>
    <string>192.168.171.0/24</string>
    <key>role</key>
    <string>CLIENT</string>
    <key>serialNumbers</key>
    <array>
        <dict>
            <key>license</key>
            <string>XSAN-XXX-XXX-X-XXX-XXX-XXX-XXX-XXX-XXX-X</string>
            <key>organization</key>
            <string>XXXXXXXXXXX</string>
            <key>registeredTo</key>
            <string>ProductionIT</string>
        </dict>
    </array>
</dict>
</plist>
I was afraid of 10.6.2 Server... We'd like to stick with Snow Leopard; does 10.6.0 or 10.6.1 handle Xsan better?
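If the hunch is port filtering (the sandboxd deny in the log points that way), a quick reachability probe from the client can help narrow it down. This is a generic sketch; the StorNext name-server port in the comment is a placeholder to verify against your fsports/fsnameservers configuration:

```python
import socket

def can_reach(host, port, timeout=2.0):
    """Attempt a TCP connection to host:port.

    Useful for telling apart 'filtered somewhere on the path' (the
    attempt times out) from 'nothing listening' (refused immediately)
    when a client cannot synchronize with the MDC's fsmpm.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical check against the Linux MDC from the log above; confirm
# the actual port from your StorNext fsports/fsnameservers config:
# can_reach("192.168.171.1", 63145)
```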

  • Saving Files on FCP 6.0.2 and XSan

    Hello All,
We're running into very frustrating issues as we try to save and open Final Cut Pro project files. We are currently using an Xsan with ACLs set up to store FCP documents and related files. The six workstations aren't directly attached to the SAN; they connect via GigE to AFP shares (not an ideal solution, but it's what we can afford right now). It should be noted that our setup currently uses the "Golden Triangle" of directory services (AD user accounts and groups).
    In December we upgraded our XSan to version 2 (which was a nightmare in itself), and all of our clients were upgraded to 10.5 with FCS2. After the upgrades, a few issues have arisen:
1. When a user attempts to save an FCP project, sometimes they receive an error message that the file is locked. From what I read on Apple's support site, this is due to permissions: a file cannot be saved by one person, and then opened and saved by another. If the ACLs set on the folder containing the project give the relevant groups full access to its files (recursively), why is this an issue? Or is this just another case of bad execution on Apple's part?
2. After doing a "Save As", most of the time the alternate project file can be opened. We've run into several instances, however, when the project file is very small (16 KB or so) and gives the error message "Unable to open project file", and it's pretty much trash. In these cases, the autosaves behave in a similar manner: very small and not opening.
So my question is: does this all sound like permissions issues, or is there something inherently wrong with the Xsan at this point (e.g. corruption)? How are other people using Xsan volumes in conjunction with FCP 6.0.2/FCS2 to edit and store files? Perhaps we're going about the whole setup the wrong way...
    Thanks for your time and any advice you can provide..

    Hi,
    Thanks for taking the time to reply.
    We have six producers.
Each producer has their own storage folder on the SAN: Producer 1, Producer 2, Producer 3, Producer 4, Producer 5, Producer 6. Each of these folders is located in one share point called "Producers".
So we have one main producer, we'll call her Joan Smith. Joan has full access, recursively, throughout her folder, which is Producer 1. Joan has interns who work for her, who use an account called pro1edit1. Pro1edit1 has full access to Producer 1's folders as well, recursively. Joan and her intern never work on the same project at the same time from two different workstations. Each project is edited by one and only one user from one workstation.
Either Joan or her intern logs into the Mac (Joan with her own username/password, or the intern with the generic username and password [pro1edit1]). These accounts are AD accounts. After a login, OD sends MCX policies to the clients (such as auto-mounting the AFP share points ["Producers", for instance] with their username and password). The user browses to their folder. If they create a new project, they create a folder with the project name. They then open FCP and set the settings (scratch, etc.) to point to that folder. They save a project file with the name of the project within this same folder, keeping everything as one general "package" in the folder. For instance: /Volumes/Producers/Producer\ 1/5-Minute-Piece/FiveMinutePiece.fcp. All of the other folders are also located in here: scratch disks, autosaves, captures, etc.
When the user is done, they save, close Final Cut, and quit.
The next day, either Joan or her intern comes in to do some work on the project. They log into one of the workstations and open the file. They make some edits, and upon trying to save the file they receive the error message that the file is locked. They then save the file under an alternate filename. Sometimes this works. Sometimes it says it saved fine, but the next attempt at opening the file yields the error that the project cannot be opened. Upon inspection, the file is about 16 KB in size.
    Using Producers folder/sharepoint, the permissions are as follows:
    Producers:
    <AD Domain>\grpStaffProduction: Allow: Read Only: This folder. *
<AD Domain>\grpProEdit: Allow: Read Only: This folder. **
<AD Domain>\grpProduction: Allow: Read Only: This folder. *
<AD Domain>\grpEditAdmins: Allow: Full Control: This Folder, Children, all descendants. [inherited]
<AD Domain>\grpDomainAdmins: Allow: Full Control: This Folder, Children, all descendants. [inherited]
    Local Administrator: Allow: Full Control: This Folder, Children, all descendants. [inherited]
    POSIX:
    admin: RW: This Folder
    staff: R: This folder
    Others: None: This folder
    Using Producer 1 (within the above folder), the permissions are as follows:
    <AD Domain>\JoanSmith: Allow: Full control: This Folder, Children, all descendants.
    <AD Domain>\pro1edit1: Allow: Full Control: This Folder, Children, all descendants.
<AD Domain>\grpEditAdmins: Allow: Full Control: This Folder, Children, all descendants. [inherited]
<AD Domain>\grpDomainAdmins: Allow: Full Control: This Folder, Children, all descendants. [inherited]
    Local Administrator: Allow: Full Control: This Folder, Children, all descendants. [inherited]
    POSIX:
    admin: RW: This folder.
    staff: R: This folder.
    Others: R: This folder.
* This group is used for another producer folder. This allows AFP to mount the share, and for the users to see their producer folder.
** This group is used for another producer folder. This allows AFP to mount the share, and for the users to see their producer folder.
* Joan, all other users in production, and their interns are part of this group. This allows them to mount the AFP share and see which folders they have access to.
    So that's some more detail on our permissions and general workflow. We're up for suggestions. I appreciate your response!
    -M

  • Xsan panic and recurring errors

I had the volume go down in a panic last week. We were able to restart and get everything running, but now have a persistent error which appears to be related. Both reference gethostbyname. I have DNS set up for all of the systems on both the regular network and the metadata network. One of the issues is that there is no secondary MDC, a problem the client refuses to rectify.
    Process:         fsm [189]
    Path:            /Library/Filesystems/Xsan/bin/fsm
    Identifier:      fsm
    Version:         ??? (???)
    Code Type:       X86-64 (Native)
    Parent Process:  fsmpm [179]
    Date/Time:       2012-03-26 13:58:00.591 -0700
    OS Version:      Mac OS X Server 10.6.8 (10K549)
    Report Version:  6
    Exception Type:  EXC_CRASH (SIGABRT)
    Exception Codes: 0x0000000000000000, 0x0000000000000000
    Crashed Thread:  0  Dispatch queue: com.apple.main-thread
    Application Specific Information:
    PANIC: /Library/Filesystems/Xsan/bin/fsm "Server_comm_init GetHostByName failed" file server_comm.c, line 3455
    Thread 0 Crashed:  Dispatch queue: com.apple.main-thread
    0   libSystem.B.dylib                       0x00007fff848529da __pthread_kill + 10
    1   libSystem.B.dylib                       0x00007fff848522fe pthread_kill + 83
    2   fsm                                     0x00000001000ac22c 0x100000000 + 705068
    3   fsm                                     0x00000001000ac787 0x100000000 + 706439
    4   fsm                                     0x00000001000ea56a 0x100000000 + 959850
    5   fsm                                     0x00000001000767f9 0x100000000 + 485369
    6   fsm                                     0x0000000100030aa4 0x100000000 + 199332
    7   fsm                                     0x0000000100000d94 0x100000000 + 3476
    This error keeps cropping up in the current logs:
    prod servermgrd[16235]: xsan: [16235/2112B0] ERROR: get_fsmvol_at_index: Could not connect to FSM because Cannot get host by name - No such file or directory

It might be worthwhile to clear your DNS cache on the MDC if it's also the DNS server:
dscacheutil -flushcache
Is there anything else in the logs that seems relevant?
Have you run a permissions fix on the MDC anytime recently?
In your DNS settings, do you have one host name for the public network and one for the private?
internet - 192.168.1.100 mdc01."yourdomain"."internal"
private net - 192.168.2.100 mdc01-san."yourdomain"."internal"
Do you have any crosstalk? (If only the private network is plugged in, does it get a DHCP address? If you add the DNS address to your static private port, can you get online?)
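Since both errors point at gethostbyname, it can also help to script the forward lookups fsm is attempting. A minimal sketch; the commented host names and addresses are the hypothetical ones from the reply above, not anything from the poster's actual zone files:

```python
import socket

def forward_lookup(name):
    """gethostbyname, the same call fsm is failing on in the panic above."""
    try:
        return socket.gethostbyname(name)
    except socket.gaierror:
        return None

def check_pair(name, expected_ip):
    """True if `name` resolves to the address we expect for that interface."""
    return forward_lookup(name) == expected_ip

# Hypothetical names/addresses matching the example in the reply:
# check_pair("mdc01.yourdomain.internal", "192.168.1.100")
# check_pair("mdc01-san.yourdomain.internal", "192.168.2.100")
print(forward_lookup("localhost"))
```

If the public and metadata names resolve to each other's addresses (the "crosstalk" case), fsm can bind to the wrong interface, which is consistent with the heartbeat-lost message.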

  • Is Xsan compantible with Fast User Switching in 10.4 Tiger?

    Does anyone know if Xsan is compatible with Tiger's 'fast user switching' feature (which allows you to have multiple users logged into the machine at the same time)?
    I had a very strange occurrence with my Xsan yesterday. The Xsan volume was mounted on one client only, but that client was logged in with two users. Even though no software was accessing the volume (i.e. FCP not running by either user) I noticed that my Xserve RAID was lit up like a blue Christmas tree -- something was reading data. When I launched the Xsan Admin app on the server (primary controller) it lost connection with the SAN and reported a FATAL error. Long story short... I had to log out of both users and restart the client to get control back. All appears fine now, but I'm only running one user on this client. I would like to be able to use the fast switching feature, but how do I prevent Xsan problems?
    Also.. is it possible that Spotlight is trying to index the SAN volume? If so, how are people preventing this in Tiger?

    Very interesting... never tried it..
Spotlight does not index your Xsan volume; you can verify with:
    mdutil -s /Volumes/XsanVolume

  • Clarification on how to use Xserve Raid and Fibre Channel without xsan.

    First let me apologize for not responding earlier to your response, I tend to get busy and then forget to check back here.
Tod, the answer to your question is no: only one computer is accessing the Xserve RAID files at any one time, and that is via Fibre Channel. However, I do have the Xserve RAIDs set up as share points via Ethernet.
Maybe I should turn that off and only access the files with the one computer that can connect via Fibre Channel.
I never thought of that. I will try that while I await your answer; thanks again.
    Todd Buhmiller
    I have the following setup:
Xserve: 2x 2 GHz dual-core Intel Xeon, 5 GB of RAM, running 10.5.8 Leopard Server
Xserve RAID with firmware version 1.5.1/1.51c on both controllers, and
QLogic SANbox 5600
Apple Fibre Channel cards in the Xserve and Mac Pro tower: Apple 2-port 4 Gb/s Fibre Channel card
Mac Pro tower: quad-core Intel Xeon 2.8 GHz, 16 GB of RAM, running Snow Leopard 10.6.4
    Here is the problem.
The directories for the Xserve RAIDs keep getting corrupt, and I use DiskWarrior to rebuild them. Is there a way to keep the directories from getting corrupt? I am a few pieces of equipment away from being able to build an Xsan, as that is the ultimate goal, but until then I just need the RAIDs to function as storage without having to rebuild the directories all of the time.
    Anybody have any suggestions?
    Thanks
    Todd Buhmiller
    Widescreen Media
    Calgary, Alberta Canada
Re: Xserve Raid Mounts, Corrupt Directory, tired of rebuilding directory
Tod Kuykendall, posted Jun 27, 2010, in response to Todd Buhmiller:
    Are multiple computers accessing the same data on the RAID at the same time?
If so, then NO. This is the source of your data corruption, and I'm surprised you were able to get all your data back every time if this is how you've been running your system. Each fibre channel host assumes it has full and sole control of every volume it has mounted; no data arbitration is practiced, and data corruption will occur if this assumption is wrong.
The only way this setup will work is to use partitions or LUN masks so each volume is accessed by one computer at a time. As long as one computer relinquishes control before another mounts a volume, you will dodge arbitration issues, but this is a dangerous game. If you screw up and mount an already mounted volume (and there is no easy way to tell if a volume is mounted), corruption will then occur. Sharing data simultaneously at fibre speeds is what Xsan does, and to do that you need it.
    HTH,
    =Tod
    Intel Xserve, G5 XServes, XRAID, Promise

"The Xserve RAIDs will mount automatically on any computer that I connect to the QLogic FC switch."
This is the source of the corruption of your data. Any computer that attaches to a drive/partition via fibre channel assumes that it alone is in control of the drive, and data corruption is inevitable.
"Is that the issue? Should I disconnect the Xserve from the FC switch and leave it connected via Ethernet?"
Short answer: YES. The Ethernet connections are fine because the server is controlling the file arbitration through the sharing protocol. A fibre channel connection assumes complete control over the partition, and no arbitration of file access is performed. It's like two people independently trying to drive the same car to different locations.
Depending on your setup it is possible for the two machines to see and use different parts of the Xserve RAID storage, but they cannot access the same areas without a SAN doing the arbitration.
    Hope that's clear,
    =Tod

How to set up VPN using Mac OS X 10.4.11? I need someone to help me set up a VPN over a regular DSL connection at my home so someone can help me troubleshoot my XSAN system remotely. THANKS

    Hello,
I'm having trouble setting up a VPN using Mac OS X 10.4.11 Server. I have an XSAN system and one of my volumes has been down for quite a while now. There is a very kind Mac IT professional who is willing to help me troubleshoot my system, but he needs to be able to access my system remotely. I am able to connect the MDC to DSL, but I haven't been able to set up the VPN. Please help, this is an emergency. Thanks!
    Marco

Have you forwarded the ports on your router? Why not let him in via TeamViewer? It's free and Mac compatible.

  • After installing XSAN - HDV no longer Recognized...

    Hey - what's the deal with this!?
Since installing XSAN, 10.4.8, and the latest FCP version,
HDV is suddenly unrecognizable by either G5 system here.
Tried a camera, a deck, different cables... rebooted, reset, you name it. FCP no longer acknowledges any HDV input device. Switch the HDV devices to DV mode and it will recognize them as their DV selves... just not HDV.
    Any ideas?

    i'm surprised that my computer, or myself, haven't been thrown out the window yet.
    if the re-install doesn't work; i guess i'll just have to transfer video from iMovie to FCP. that will be fun! ;P
    and the ironic part of all this is...the only reason i went ahead w/the crossgrade, and then 5.1.2, was so i could finally shoot/capture 24f ftg. w/my XL H1.

  • Client not being recognized on Xsan Admin

    I am not able to see both my client computers on the Xsan admin page, and there are some strange things happening.
-My setup consists of two G5 computers (clients), one Xserve G5 (metadata controller), a fibre channel switch, and an Xserve RAID (with 5 drives on the first controller partitioned RAID 0 for a total of 1.8 TB of storage space, and 2 drives on the second controller partitioned RAID 1 for a total of 370 GB to hold the metadata).
    -For the sake of testing I disabled their internal network controller (connected to a DHCP server), and have them all talking to each other with a second ethernet card. They are all on the same subnet and they can all see each other. They each have a manually assigned IP and are not on an external network
    -When I access Xsan admin from the Xserve (using Remote Desktop) I can connect to the Xsan Xserve.local (IP 10.10.1.10). There, under setup, I see the Xserve (which is bold) stating that it is the controller. I also see Client 1 (client). Both have a green light next to them and are reporting no problems with the serial number (note that I am accessing the Xserve using Remote desktop on CLIENT 2). Client 2 is not seen on the computers section of the setup.
-Now when I access Xsan Admin from the Client 2 machine, I get different results. First, when I try to add a SAN, Client 2's local address is the default. I change it to Xserve.local with the proper name and password, and I get to monitor the Xsan just like I was doing before from the Xserve machine.
-Now if I leave the default address, another SAN will appear in the left pane (SAN Components). Here is where it confuses me. Under this new SAN, I see all 3 machines in the computers list, but the one in bold is the Client 2 machine (the one from which I am accessing Xsan Admin, and also the one that was missing from the previous list). If I try to set up the SAN from there, naming the SAN or continuing the setup, it will not allow me (note that all three machines have a green light next to them). It tells me: "Some configuration is invalid; The computer you are connect to must be a controller." The catch here is that it will only let me configure if I am connected to a controller, but Client 2 will only appear if I am connected to it. Needless to say, I want Client 2 to be a client, not a controller.
    I know this may sound a little confusing, but I can feel I am very close in getting this to work. Any help would be much appreciated.
    Best,
    Marcello

    For anyone who has read this, thank you.
    I finally managed to get all the clients to work together.
    I was having a firewall issue, where my customized firewall settings were preventing communication on one of the critical ports that Xsan uses to talk to the components. All I had to do was flush the firewall and all was back to normal.

  • I have one folder on an XSAN that one machine can not make changes to. Whenever I try to duplicate or copy a file in this folder I get 'The operation can't be completed because an unexpected error occurred (error code -50).'?

    I have one folder on an XSAN that one machine can not make changes to. Whenever I try to duplicate or copy a file in this folder I get 'The operation can’t be completed because an unexpected error occurred (error code -50).'
    The permissions are fine, and I have trashed the finder plist and reset the NVRAM.
Anyone have an answer?
    Thanks

Tips I received so far (thanks to A-Mac via Twitter http://www.a-mac.nl and Remco Kalf http://www.remcokalf.nl/):
- make sure you have at least 10% of your hard disk free (I had only 5%).
- do a PRAM reset (http://support.apple.com/kb/ht1379)
- perform a hardware test (http://support.apple.com/kb/HT1509)
- make a complete backup (with, for example, Carbon Copy Cloner, see http://www.bombich.com/)
- after the complete backup: use DiskWarrior (boot from the DiskWarrior DVD, first perform diagnostics, then perform "Rebuild", which rebuilds your file directory).
So far I have only cleared up some space on my HD, and already the problem occurs less.
    Still, I will go through the other tips too.
    (will post progress)
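The first tip (keep at least ~10% free) is easy to check from a script; `shutil.disk_usage` works on any mounted volume, including an Xsan mount point. The 10% threshold is the rule of thumb from the tip, not a documented limit:

```python
import shutil

def free_fraction(path="/"):
    """Fraction of the volume at `path` that is free."""
    usage = shutil.disk_usage(path)
    return usage.free / usage.total

def enough_headroom(path="/", minimum=0.10):
    """True if the volume keeps at least `minimum` (10% by default) free."""
    return free_fraction(path) >= minimum

# Check the boot volume; point `path` at /Volumes/YourXsanVolume instead
# to check a SAN mount:
print(f"{free_fraction('/') * 100:.1f}% free")
```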

  • Creating less expensive small xSAN for 2 editors - suggestions?

I've been setting up a second editing bay for our company which is supposed to have identical capabilities to my own. It's a Quad with a Kona LHe, just like I'm using. We mostly edit TV commercials in DV50, uncompressed 10-bit SD, and DVCPROHD 720p, so our bandwidth requirements usually exceed what editing over Ethernet can deliver.
    Since both editors need to have equal capability, I have chosen to create a small xSAN to allow both editors to access the same media drives simultaneously. With a SAN, large server volumes (located on a fibre channel network) show up on the editor computers just as a simple local hard drive would at very fast speed. You can point FCP scratches to the same places on each machine, then use any machine to edit any project instantly. My budget is large by my standards, but not very large on the scale of typical xSAN implementations so I've been collecting "deals" on the gear I need before I put it all together.
    Since I'm on a budget, it has been fun finding much of the gear on ebay, etc. FYI, here is a quick list of the stuff I've acquired:
    xServe RAID 5.6TB refurbished from Techrestore.com: $5900
    xServe 2Ghz Metadata controller w/ Tiger Server 1.5GB ECC RAM, PCIX fibre card new, but last year's model from Smalldog: $2600
    Brocade Silkworm 3200 8 channel 2Gb entry fibre switch new on ebay: $950
    2 PCI Express fibre cards new for 2 editor G5 quads: $1100
    3 xSAN licenses new on eBay: $1300
    Mini GB ethernet network stuff and switch (for metadata only - separate from our LAN network): $150
Misc. fibre cables and transceivers: $700
    One of my editing G5's will act as the fallback Metadata controller in case the xServe goes down at any time.
    I also am planning on at least 8 hours of time from a local xSAN authorized tech to help me set everything up and teach me what I need to maintain the system. This should be about $700 or so.
    I have found several articles posted at xsanity.com very helpful in planning this.
    If any of you have any experience with xSAN, you might agree this is a very low cost of entry into this very exciting new workflow. $13.5K for a fully functioning xSAN of this size is not bad at all. Many would spend that on the xServe RAID alone, sans SAN;-) And I can expand very easily since my switch will still have 3 unused ports. Note: the 5.6TB xServe RAID will only be about 4TB after accounting for the RAID 5 and dedicated (and mirrored) metadata volumes. Only 4TB. Pity!
Now that I have the main hardware components ready, it's time to install and set up the system. I'll be posting my progress over the next few weeks as this happens, but first I would like to hear any impressions on this. Suggestions or warnings are appreciated from those with experience with xSAN. The xSAN forum here at Apple is used mostly by IT professionals, and I'm mostly interested in hearing comments from editors and those that use the system in small settings like my own.
    One question for the Gurus: I don't believe FCP projects can be opened and used by two people at the same time, but if there is a way to do this without corrupting the project file, I would love to know.
    I'm also seeking to hire an assistant to occupy the new editing bay. Broad multimedia skills are needed, but I can train to a degree. We're an advertising agency just north of Salt Lake City, Utah. Please let me know if any of you are interested.

    Thanks for the suggestions. Brian, I'll be sure to get you some points once I close the topic.
    I didn't realize the Project files are best copied to the hard drive. Is this for a permissions related reason or just to avoid short spikes in bandwidth during saves?
    I agree that metadata is best on a dedicated controller with full-scale xSANs, however with just 2 systems editing mostly DVCPro50 resolution projects I can't imagine burning up more than 100MB/sec at any given time. OK, maybe, but this is unlikely for the next year or so. I've read that a single controller can achieve 80MB/sec easily, so 2 should be around 150MB/sec under heavy load. I'll have the metadata mirrored to 2 drives on one side of the XSR and the remaining 5 drives on that controller in a RAID 5. The other side of the XSR will be a full 7-drive RAID 5. These 2 data LUNs will be striped together in xSAN to achieve a full bandwidth of about 150MB/sec. I was told that the XSR controller can handle multiple RAIDs at the same time, so I can send metadata to one mirror array and designate the other as a RAID 5 LUN. Considering the small size of the data going to the metadata volumes and the relative simplicity of RAID 1 mirroring, I believe the controller shouldn't be adversely affected by this. Is this incorrect in your experience?
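    As a sanity check on those throughput numbers, here's a rough stream-count estimate; the ~150 MB/sec striped figure comes from the post, while the ~7 MB/sec per DVCPro50 stream is my assumption (50 Mbit/s video plus audio and overhead):

    ```python
    # Back-of-envelope: how many DVCPro50 streams the striped LUNs could feed.
    striped_mb_s = 150            # claimed aggregate for two striped RAID 5 LUNs
    dvcpro50_stream_mb_s = 7      # assumed per-stream rate (50 Mbit/s + overhead)

    streams = striped_mb_s // dvcpro50_stream_mb_s
    print(f"~{streams} simultaneous DVCPro50 streams")
    ```

    If those assumptions hold, two edit bays would use only a small fraction of the striped bandwidth, which supports the poster's reasoning.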
    I do plan on turning off the cache of the XSR since the system will be used for editing, yet it would be nice to have cache for the metadata so that's a point to consider.
    The metadata should be segregated on its own XSR controller.
    Are you saying that the metadata sharing the same controller as the video data is going to slow the whole system down even though the metadata is located on separate, dedicated drives in that controller? I thought metadata was tiny and required very little bandwidth on the bus of a controller. If this is the case, the only bottleneck would be the RAID chip in the XSR. Again, these metadata files are very small and RAID 1 is very simple, so I don't see how it could slow things down enough to justify another $4K for a new XSR. If you still disagree, please let me know.
    As per your suggestion, and considering your stellar reputation in this forum, I'm shopping right now for a mostly empty Xserve RAID to use just for the mirrored metadata volumes. It just seems like a huge waste to buy an XSR only to use 2 drive bays mirrored. The plus side of this is I could begin filling the other controller in the future as my storage needs expand.
    It would be really cool to use the 2 drive bays in the Xserve metadata controller for the metadata volumes, but I can see how that would cause problems if the Xserve goes down, making the metadata invisible to the fallback MDC. 100% uptime isn't that big of a deal for me, however. As long as the xSAN comes back online safely after the Xserve reboots without trouble, I'm OK with such a setup. Have you ever seen this done? It seems a bit of a hack to use anything but Fibre Channel for the metadata. I'd hate to introduce too much complexity just to save some bucks, but it is an intriguing idea and would cost a fraction of a new XSR. It would be fast, too, since writing the metadata would be local with very little latency.
    For this reason, I'm also very interested to find any other simple and less expensive Fibre based storage solutions that could host my metadata as an alternative to full blown XSR for this. There are all kinds of fibre drives out there, but I don't want to waste a valuable fibre switch port just for one drive. All I need is 2 hardware mirrored bays accessible over fibre, preferably sharing the same channel on my switch. Does anyone know where I might find something like this?

  • The Plague of the Disconnecting Xsan

    I am living in Xsan H.e.l.l. I pray someone can save me from this torture.
    Anyways, here is the situation:
    I've been working on trying to make an Xsan work for over 6 months now. The latest problem is I am getting constant disconnects when accessing large amounts of small files on the Xsan. This is consistently reproducible. I get an error that says "Server connection interrupted: volumename". My only option is to "Disconnect".
    I have three Xserves running 10.4.6, one brand-new Xserve RAID, and Xsan 1.2.
    I have one client Xserve
    and two controller Xserves, one high priority and one medium priority.
    I've done the following:
    (1) Tried the manual configuration mentioned on Apple's web site article number 302135
    This resulted in the QLogic switch listing all of my Xserves as "targets" when they are actually initiators, and it made my Xserve RAID show up as "Unknown". This didn't stop the disconnects, so I put it back on Auto I/O stream and Auto Device scan, and the devices are properly labelled as initiators and targets.
    (2) I called up QLogic and talked to them; they said that this problem normally happens with lots of small files. Their answer was to go into the Apple Fibre Channel Utility on all of the systems and change the settings so that the fibre ports run at 1 Gb/s in point-to-point mode. Which I did. No effect.
    (3) I had a gigabit layer 3 switch that I had been using for the private network for the Xsan's meta data. At an apple tech's suggestion, I installed a 10/100 hub instead, but this doesn't affect the disconnects either.
    I'm at a loss, my boss is coming back on Monday, and I still have an unusable Xsan. Can anyone help me?
    I've talked to Apple Enterprise techs, but they like to just blame this on the QLogic switch without ever trying to understand what is going on.
    Thank you
    David
      Mac OS X (10.4.6)  

    I'm having the same problem, only I'm mounting a Sun system running solaris. I simply try to do an ls on a directory containing perhaps a hundred or two text files and it just hangs. I can't control-c the process and am forced to control-z/kill -9. I then start seeing nfs errors like this:
    nfs server dyeset:/users: not responding
    nfs server dyeset:/users: not responding
    nfs server dyeset:/users: not responding
    nfs server dyeset:/users: not responding
    nfs server dyeset:/users: not responding
    nfs server dyeset:/users: not responding
    Smaller directories work just fine and I don't get the errors until I try to do something in a large directory. Then I eventually get the window the original poster described "Server connection interrupted: volume-name" and disconnect is the only option. I can re-establish the mount with:
    sudo kill -HUP `cat /var/run/automount.pid`
    but I consistently cannot do anything in large directories. I was looking at the man page for mount_nfs. Perhaps the -r option would alleviate the problem? The only option I have set in niutil for the mount is "-P". I've used similar mounts in the past without -P on 10.3.9 and never had this problem. That was a sun/solaris system as well, albeit a different system.
    Any suggestions?
    Thanks,
    Rob
    G4   Mac OS X (10.4.6)  
