Troubleshooting Xsan Volume Panic

I've been having a semi-regular Xsan panic and haven't been able to isolate the cause. I'm hoping someone here has seen something similar and can offer some tips on solving this.
Here's what the cvlog says when the panic happens (it's the same every time):
0209 07:47:13 0x10da87000 (*FATAL*) PANIC: /Library/Filesystems/Xsan/bin/fsm ASSERT failed "IPXATTRINODE(ip)" file fsm_xattr.c, line 736
0209 07:47:13.258787 0x11de39000 (Debug) timedfree_pending_inodethread: flushing journal.
0209 07:47:13.258805 0x11de39000 (Debug) timedfree_pending_inodethread: journal flush complete.
0209 07:47:13 0x10da87000 (*FATAL*) PANIC: aborting threads now.
The primary MDC panics, and fails over to the backup. The backup panics immediately, and fails back to the primary, which panics again and stops the SAN.
Here are the setup details:
• 4 Early 2008 Xserves (2 controllers, 2 clients), Xsan 2.2.1
• All 4 servers running 10.6.5 (same panic also occurred under 10.6.4)
• Clients share Open Directory home folders over AFP for users with portable home directories.
• 2 Vtrak E610F enclosures, one hosts a data LUN, the other a metadata LUN for one Xsan volume
• QLogic SANbox 5602 Fibre Channel switch
I can't find any hardware problems on the Vtraks or the SANbox.
After the SAN panics, I run cvfsck -j followed by cvfsck -wv. The output doesn't show any problems, to my (admittedly untrained) eye. Here's what I get from cvfsck -wv; let me know if I'm missing something:
BUILD INFO:
#!@$ Server Revision 3.5.0 Build 7443 Branch branches_35X (412.3)
#!@$ Built for Darwin 10.0 i386
#!@$ Created on Mon Dec 7 12:52:39 PST 2009
Created directory /tmp/cvfsck15061a for temporary files.
Attempting to acquire arbitration block... successful.
Creating MetadataAndJournal allocation check file.
Creating Homes allocation check file.
Recovering Journal Log.
Super Block information.
FS Created On : Wed Dec 22 07:22:21 2010
Inode Version : '2.5'
File System Status : Clean
Allocated Inodes : 1305600
Free Inodes : 26073
FL Blocks : 85
Next Inode Chunk : 0x32c55
Metadump Seqno : 0
Restore Journal Seqno : 0
Windows Security Indx Inode : 0x5
Windows Security Data Inode : 0x6
Quota Database Inode : 0x7
ID Database Inode : 0xb
Client Write Opens Inode : 0x8
Stripe Group MetadataAndJournal ( 0) 0x746a080 blocks.
Stripe Group Homes ( 1) 0xe8bd3c0 blocks.
Building Inode Index Database 1305600 (100%).
Verifying NT Security Descriptors
Found 697 NT Security Descriptors: all are good
Verifying Free List Extents.
Scanning inodes 1305600 (100%).
Sorting extent list for MetadataAndJournal pass 1/1
Updating bitmap for MetadataAndJournal extents 113322 ( 9%).
Sorting extent list for Homes pass 1/1
Updating bitmap for Homes extents 1257585 (100%).
Checking for dead inodes 1305600 (100%).
Checking directories 109996 (100%).
Scanning for orphaned inodes 1305600 (100%).
Verifying link & subdir counts 1305600 (100%).
Checking free list. 1305600 (100%).
Checking pending free list.
Checking Arbitration Control Block.
Checking MetadataAndJournal allocation bit maps (100%).
Checking Homes allocation bit maps (100%).
File system 'ODHome'. Blocks-244044736 free-215328011 Inodes-1305600 free-26073.
File System Check completed successfully.
However, if I try to restart the volume after running cvfsck, it just panics again. Shutting down the clients and rebooting the primary controller allows a normal startup, and the volume mounts and runs fine until the next panic. Everything in the cvlog between panics is labeled either debug or info.
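For reference, the exact recovery sequence looks like this (a sketch: ODHome is our volume name, paths assume a standard Xsan install, and it's worth double-checking the cvadmin syntax on your version):
sudo /Library/Filesystems/Xsan/bin/cvadmin -e "stop ODHome"    # keep the FSM from restarting mid-check
sudo /Library/Filesystems/Xsan/bin/cvfsck -j ODHome            # replay the journal
sudo /Library/Filesystems/Xsan/bin/cvfsck -wv ODHome           # full read/write check
sudo /Library/Filesystems/Xsan/bin/cvadmin -e "start ODHome"   # bring the volume back up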
I've reversed the roles of the primary and backup controllers, but it made no difference; the panic persists.
Most, but not all, of the panics seem to happen in the morning when users are logging in, or at quitting time when the users' portable homes are syncing. I've tried to isolate the panic to a specific user's sync by temporarily disabling syncing for user groups (one at a time), but I can't tie it to any single user. Nightly incremental backups and weekly full backups run without causing a panic. I can also use rsync to mirror the volume's contents to another server without causing a panic.
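A rough watcher script could help catch the trigger in the act (a sketch only: it assumes the usual Xsan 2.x cvlog path and that the AFP service runs on the same box; the volume name, output file, and serveradmin call are placeholders to adapt):
#!/bin/sh
# Watch the cvlog; when the fsm panic assert appears, snapshot the connected
# AFP users so the sync that was in flight can be identified. Run as root.
VOLUME=ODHome
LOG=/Library/Filesystems/Xsan/data/$VOLUME/log/cvlog
tail -F "$LOG" | while read line; do
  case "$line" in
    *PANIC*)
      { date; serveradmin command afp:command = getConnectedUsers; } >> /var/log/xsan-panic-watch.log
      ;;
  esac
done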
I'd appreciate any insight into the cause of the panic, or strategies to diagnose or prevent it.
Thanks!

Hi there ClintR,
I'm having an identical issue that appeared out of nowhere at 3pm yesterday.
I've been working on finding a solution, but haven't found anything effective yet.
Did you have any luck finding a resolution?
Here is an excerpt of my logs; if anyone can offer further assistance I'd be very grateful.
[0222 07:52:01] 0x7fff70355ca0 (Info) Branding Arbitration Block (attempt 1) votes 2.
[0222 07:52:03.363744] 0x7fff70355ca0 (Debug) Cannot find fail over script [/Library/Filesystems/Xsan/bin/cvfail.xsan02.lifestyletrader.com.au] - looking for generic script.
[0222 07:52:03] 0x7fff70355ca0 (Info) Launching fail over script ["/Library/Filesystems/Xsan/bin/cvfail" xsan02.lifestyletrader.com.au 59564 Homes]
[0222 07:52:03.404814] 0x7fff70355ca0 (Debug) Starting journal log recovery.
[0222 07:52:03.667002] 0x7fff70355ca0 (Debug) Completed journal log recovery.
[0222 07:52:03.667148] 0x7fff70355ca0 (Debug) Inodeinit_postactivation: FsStatus 0x2525, Brl_ResyncState 1
[0222 07:52:03] 0x11ef11000 (Info) FSM Alloc: Loading Stripe Group "MetadataAndJournal". 931.31 GB.
[0222 07:52:03] 0x11f395000 (Info) FSM Alloc: Loading Stripe Group "Homes". 3.64 TB.
[0222 07:52:03] 0x11ef11000 (Info) FSM Alloc: Stripe Group "MetadataAndJournal" active.
[0222 07:52:04] 0x11f395000 (Info) FSM Alloc: free blocks 423988623 with 0 blocks currently reserved for client delayed buffers.Reserved blocks may change with client activity.
[0222 07:52:04] 0x11f395000 (Info) FSM Alloc: Stripe Group "Homes" active.
[0222 07:52:04] 0x7fff70355ca0 (Warning) Windows Security has been turned off in config file but clients have been requested to enforce ACLs. Windows Security remains in effect.
[0222 07:52:04.135555] 0x7fff70355ca0 (Debug) FSUUID_init: found `FSUUID' xattr on root inode: fecc6178-3f8b-4e10-98bc-52a932539a15
[0222 07:52:04] 0x7fff70355ca0 (Info) RPL_init: RPL upgrade required.
[0222 07:52:04] 0x7fff70355ca0 (Info) RPL_Upgrade: Removing existing RPL attributes.
[0222 07:52:07] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 20 IEL blocks
[0222 07:52:15] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 40 IEL blocks
[0222 07:52:28] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 60 IEL blocks
[0222 07:52:45] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 80 IEL blocks
[0222 07:53:08] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 100 IEL blocks
[0222 07:53:36] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 120 IEL blocks
[0222 07:54:10] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 140 IEL blocks
[0222 07:54:53] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 160 IEL blocks
[0222 07:55:50] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 180 IEL blocks
[0222 07:57:02] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 200 IEL blocks
[0222 07:58:34] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 220 IEL blocks
[0222 08:00:25] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 240 IEL blocks
[0222 08:02:43] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 260 IEL blocks
[0222 08:05:17] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 280 IEL blocks
[0222 08:08:13] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 300 IEL blocks
[0222 08:11:22] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 320 IEL blocks
[0222 08:14:47] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 340 IEL blocks
[0222 08:18:27] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 360 IEL blocks
[0222 08:22:22] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 380 IEL blocks
[0222 08:26:32] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 400 IEL blocks
[0222 08:30:56] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 420 IEL blocks
[0222 08:35:36] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 440 IEL blocks
[0222 08:40:30] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 460 IEL blocks
[0222 08:45:26] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 480 IEL blocks
[0222 08:50:50] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 500 IEL blocks
[0222 08:56:23] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 520 IEL blocks
[0222 09:02:07] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 540 IEL blocks
[0222 09:08:10] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 560 IEL blocks
[0222 09:14:22] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 580 IEL blocks
[0222 09:20:55] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 600 IEL blocks
[0222 09:27:42] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 620 IEL blocks
[0222 09:34:44] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 640 IEL blocks
[0222 09:41:36] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 660 IEL blocks
[0222 09:49:06] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 680 IEL blocks
[0222 09:56:44] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 700 IEL blocks
[0222 10:04:44] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 720 IEL blocks
[0222 10:12:54] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 740 IEL blocks
[0222 10:21:22] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 760 IEL blocks
[0222 10:30:03] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 780 IEL blocks
[0222 10:38:59] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 800 IEL blocks
[0222 10:48:10] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 820 IEL blocks
[0222 10:57:40] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 840 IEL blocks
[0222 11:07:20] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 860 IEL blocks
[0222 11:17:15] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 880 IEL blocks
[0222 11:27:19] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 900 IEL blocks
[0222 11:37:32] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 920 IEL blocks
[0222 11:48:11] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 940 IEL blocks
[0222 11:59:01] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 960 IEL blocks
[0222 12:10:02] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 980 IEL blocks
[0222 12:21:17] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 1000 IEL blocks
[0222 12:32:46] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 1020 IEL blocks
[0222 12:44:23] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 1040 IEL blocks
[0222 12:56:14] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 1060 IEL blocks
[0222 13:07:15] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 1080 IEL blocks
[0222 13:15:32] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 1100 IEL blocks
[0222 13:25:30] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 1120 IEL blocks
[0222 13:37:29] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 1140 IEL blocks
[0222 13:50:16] 0x7fff70355ca0 (Info) RPL_Upgrade: removal completed 1160 IEL blocks
[0222 14:02:47] 0x7fff70355ca0 (*FATAL*) PANIC: /Library/Filesystems/Xsan/bin/fsm ASSERT failed "IPXATTRINODE(ip)" file fsm_xattr.c, line 736
[0222 14:02:47] 0x7fff70355ca0 (*FATAL*) PANIC: wait 3 secs for journal to flush
[0222 14:02:50] 0x7fff70355ca0 (*FATAL*) PANIC: aborting threads now.
Logger_thread: sleeps/44494 signals/0 flushes/65 writes/65 switches 0
Logger_thread: logged/91 clean/91 toss/0 signalled/0 toss_message/0
Logger_thread: waited/0 awakened/0

Similar Messages

  • Unable to mount XSAN volume on client workstation

    We just reimaged one of our machines and we are now unable to mount the Xsan volume. Originally, when the machine was hooked up, its IP was wrong and the Xsan couldn't see it. That has been fixed: the Xsan sees it now and it is authenticated, but when we try to mount the volume we get this error:
    Jan 26 13:54:48 *** Xsan Admin[23616]: ERROR: Error reading computer properties: kOfflineError (0)
    Jan 26 13:55:37 *** Xsan Admin[23616]: ERROR: Error mounting volume…: Server returned a non-zero status code (100007)
    Jan 26 13:55:48 *** Xsan Admin[23616]: The temporary directory at "/private/var/folders/Eu/Euvo7L8cHAmWwo7ntmj8ak+TI/TemporaryItems/(A Document Being Saved By Xsan Admin 33)" could not be deleted.
    The ***'s are the name of the machine; Apple has been little to no help. We have tried removing the machine from the Xsan and then adding it back in: same issue. Tried reinstalling the client: same issue. Any ideas?

    This is the error on the server:
    ERROR: Error mounting volume…: Server returned a non-zero status code (100007)

  • Open and work on FCP project file located at xsan volume

    Is it recommended by Apple to *open and work on FCP project files located on an Xsan volume*,
    or should we open, work on, and save our FCP project files from a local drive only?
    Is there a risk of corrupting the Xsan volume if we open an FCP project file placed on the Xsan volume?
    Where should FCP project files be located?
    Mac pro 2.66 , FCP 6.01, xsan 1.4

    Opening a file twice is indeed a problem. To prevent this, and for numerous other reasons, we use Open Directory. In OD you can configure that a user can log in just once (concurrently).
    In old versions of Xsan it was not recommended to put project files on Xsan, but nowadays it works fine.
    Putting project files on a SAN has the advantage that a user can work on any client. If this is not important in your environment, these files can be stored locally.

  • Backup Controller cannot mount XSAN Volume

    Hi Guys,
    I really have a big problem with the Xsan that I've just set up. There are 3 Xserves connected to the SAN: one main controller, one backup, and one file server. After a fresh install of Leopard Server 10.5.6 on all servers and a fresh install of Xsan 2.0 updated to 2.1.1, I created a SAN from the main controller. I added the main and backup controllers first, authenticated them properly, and was successful in adding them to the SAN. After this, I created the volume and it mounted properly on the main controller. What I don't understand is that whenever I try to mount the volume on the backup controller, it says it is unable to mount, and even when I try forcing it to mount in Terminal with the command xsanctl mount VOLNAME, it gives me an error saying
    "unable to mount volume, Cannot mount XSAN volume error code: 5"
    What is that error message? When I type cvadmin on the main controller, it only gives me this message:
    Main Controller:
    File System Services (* indicates service is in control of FS)
    1>*XSAN[0] located on 10.0.0.101:49930 (pid 317)
    Select FSM "XSAN"
    When I type the same cvadmin on the backup controller, it gives me this message:
    Backup Controller:
    File System Service (* indicates service is in control of FS)
    1> XSAN[1] located on 10.0.0.102:50384 (pid 331)
    No FSSs are active
    Select FSM "none"
    What is happening? Both servers have DNS names. Before I created the SAN, the server name in the authenticate window was just the IP address of Ethernet 0 (the first Ethernet port). Now whenever I fire up Xsan Admin, both controllers are offline, and when I authenticate them the server name info suddenly changes from the IP address to a DNS name (e.g. from 194.170.34.12 to hct-mdc.ad.hct.ac.ae), and even if I type my admin username and password it cannot authenticate, saying "server not found in network".
    I really don't know what to do now, and we need to fix the issue ASAP.
    I would really appreciate your help guys!
    Thanks.
    jantoniophi


  • Can't mount 3rd party DAS storage and Xsan volume simultaneously

    An Xsan client (Mac Pro) equipped with an Apple FC HBA card can't mount an SGI TP9300 DAS storage and an Xsan volume simultaneously when powering on or restarting.
    Of the 2 FC ports, one port is used for the Xsan volume on an Xserve RAID, and the other port is used for the SGI TP9300 storage, which is directly connected to that Mac.
    When booting, the Xsan volume is always mounted normally, but the SGI TP9300 volume is not.
    Sometimes the TP9300 volume is mounted about 20-30 minutes later. (I attached the system.log file.)
    The LUN of the TP9300 is seen in Disk Utility, so it's not a physical problem.
    It seems that the actual volume information is not correctly processed at boot time.
    On the other hand, if I boot the system with the FC cable for the TP9300 disconnected, confirm that the Xsan volume mounted normally, and then connect the FC cable for the TP9300, the TP9300 volume is mounted immediately.
    Mac OS X 10.4.8, Xsan 1.4 is being used.

    Yes, I think you screwed up your Xsan volume. You cannot remove LUNs from pools, and you cannot remove volumes from volumes.
    How important is your existing data? In your case I would concentrate on that. This is not trivial. Theoretically you have already lost your data. In practice you might be lucky.
    Once I was in your kind of twilight zone and managed to keep the data. It is a complete CLI job. You have to be very careful with your config files. You could hire an experienced Xsan consultant. If that is not possible, read the man pages very carefully before doing anything on the volume.
    Regards
    Donald

  • Adobe After Effect on Xsan Volume

    Is there anybody using Adobe After Effects (AE) 6.5 on an Xsan volume on a PowerMac G5?
    In 2K or 4K DI work, AE browses many sequence files (up to 10,000 files in a folder).
    The file type is Cineon (*.cin), and the file size is about 12MB each for 2K, 48MB for 4K.
    The problem is that when the number of files is over 5,000, movie preview gets very slow.
    It takes 3 to 5 seconds to move from one point to another.
    This happens only on the PowerMac client; on a Windows client (SNFS installed, using the same AE 6.5) it does not happen.
    A PowerMac 2.7G DP with 8GB RAM (10.4.2) is being used.
    The MDC is an Xserve 2.3G DP with 1GB RAM; Xsan 1.1 is installed on all Mac machines.
    2 Xserve RAIDs (5.6TB) are being used for user data (4 RAID controllers).
    The metadata pool is on another Xserve RAID.
    Meanwhile, this symptom does not appear when the Xserve RAID is directly attached to the PowerMac.
    So I think this comes from the difference between file systems (HFS+ vs. Xsan) and file indexing mechanisms (Windows XP vs. Mac OS X Tiger).
    Is there any way to solve this problem?
    Any ideas will be appreciated.
    Thanks in advance.
    Steve

    Steve,
    I don't know if this is the same basic symptom or not, but FCP has an extremely difficult (can I emphasize extremely?) time reconnecting to media on Xsan volumes. It is so slow that you think it has crashed.
    There is definitely an indexing issue on Xsan volumes that is not present in HFS+. If your issue is related to this at all, I am sorry to say there is no fix for it yet, at least on the FCP side of things.
    That's my two cents.
    Good Luck
    Kalagan

  • Oddball client not mounting XSan Volumes

    Background: My client, a TV news broadcast station, had Apple and a third party design and install an Apple-based Xsan storage network. They ran all new fiber and gigabit Ethernet to each station to use the Xsan. It has been working great, with the exception noted below.
    Setup: 2 MD controllers, 1 Xserve, and 4 Xserve RAIDs, which make up two large multi-TB volumes (RAID 5, striped together).
    The Xserve, Xserve RAIDs, MD controllers, and all clients are connected together on a Fibre Channel network using QLogic switches. SUBNET: 10.200.1.x.
    An Ethernet network also connects all of the above together on HP gigabit switches. SUBNET: 10.100.1.x. The public WAN is made available on this subnet by a connection to the HP switches. Still getting details on how this was done.
    Everything is running OS X 10.4.8 or better, Xsan File System 1.4.
    Setup
    There are 8 workstations (edit bays) that act as ingest and editing stations. Each station has two Xsan volumes mounted, NEWS01 and NEWS02. The odd-numbered bays use NEWS01 and the even-numbered bays use NEWS02.
    PROBLEM
    All bays mount the Xsan volumes except one. FCP08 will not mount the Xsan volumes. We have rebooted the workstation, and even went to the extreme of shutting down the entire setup (all bays, MD controllers, file servers, etc.) and then bringing everything back up. Same problem.
    Based on some forum discussion, we have tried the following:
    * Ensured that there is no empty mount point in /Volumes
    * Uninstalled all Xsan software and reinstalled v1.4 from Apple's website.
    * Removed the client from Xsan Admin and re-added it, made sure to enter a valid serial number, etc.
    * Verified that all fibers are working, all link lights look good, and you can ping across the MD network.
    When you use Xsan Admin, either from FCP08 (edit bay 08) or from the server, and add both MD Xsans to it, you see the client, and you can click on the client and choose either Mount (Read-Only) or Mount (Read/Write).
    It will show "Mounting..." and then flip back to "Not Mounted". The only feedback we have received so far is "ACCESS DENIED". All affinity settings are set to rwxrwxrwx (wide open), and all the volumes and workstation logins have the same access to the volumes. I cannot find any restrictive permissions anywhere.
    I plan on moving them away from an /etc/hosts type setup to a proper DNS server running on their Xserve using the DNS server function of OS X Server. But currently all edit bay stations have the same /etc/hosts file installed, which accounts for the MD and Ethernet networks.
    ANY IDEAS what is wrong with this workstation? With the setup?
    I have had extreme suggestions from some who say we need to blitz the entire client and reinstall the operating system. I am not willing to go down that route, since each edit bay was built manually without an image (another aspect I will be remedying soon). It will take some time to rebuild this edit bay client if that is the only solution.
    If that is the popular opinion, the only question I have is: what is different between a fresh client OS install and the existing one as far as Xsan is concerned? These are edit bays, not private workstations; no one installs extra software or surfs the net. They are used for ingest and editing only.
    HELP!
      Mac OS X (10.4.10)  

    Hi,
    You could check whether it is Fibre Channel related:
    from a Terminal, do a cvlabel -l
    This should give you a list of the LUNs in your volume.
    If this tool does not show the LUNs, you might have a zoning issue.
    Regards
    Donald

  • Poor performance with small files on Xsan volume

    Since I started running Xsan with network users and server shares, I have had various problems.
    Users complain about slow logins and Word files that have saving errors. For the Word problem there are a lot of tricks, like .TemporaryItems with ACLs.
    But I think there's another reason. My mobile home users are not yet using the new home server with mobile homes. Today I migrated my own account to the new server, and syncing is very slow on files from my iCal server and mail archive.
    So I did a test on the servers that are connected to the SAN. The backup server has the SAN volume mounted and some old Xserve RAIDs for backup storage. If I duplicate my iCal folder (8000 files), it takes 8 minutes on the Xsan volume; on an Xserve RAID, less than 30 seconds. If I duplicate a single 2.5GB disk image file on the SAN volume, it takes less than 10 seconds.
    My Xsan setup:
    All Intel Xserves
    1 MD server dedicated (4GB Atto FC)
    1 OD server and server shares (and backup MD controller) (4GB Atto FC)
    1 home folder server (4GB Atto FC)
    1 backup server (also MD backup controller) (4GB Apple FC card)
    2 Promise Vtraks with 16X 750 GB disks
    1 MD lun (2disks mirrored) (Own FC controller)
    2 Data luns (8.87 TB each lun) (Each own FC controller)
    Xsan setup is done with configuration for home directories.
    Block allocation size: 8KB, Round Robin
    Separate ethernet metadata network.
    I hope someone can give me some pointers.
    Patrick

    I have the same issue and it is killing our iCal Server performance.
    My question would be: can you edit an Xsan volume without destroying the data?
    There is one setting that I think might ease the pressure, in the Cache Settings (Volumes -> <Right-Click Volume> -> Edit Volume Settings).
    Would increasing the amount of cache help? Would it destroy the data?

  • Cannot mount Xsan volume on Mac Pro

    Hi,
    I installed a new workstation, the first Mac Pro in our PPC Xsan deployment. I installed Xsan, then the latest updates. However, I cannot mount an Xsan volume on this workstation. This is an excerpt from the log:
    May 16 19:06:14 Mac-Pro kernel[0]: Xsan Client Revision 2.7.201 Build 7.23 Built for Darwin 8.0 Created on Mon Nov 13 11:53:07 PST 2006
    May 16 19:06:14 Mac-Pro sudo: root : TTY=unknown ; PWD=/Library/Filesystems/Xsan/debug ; USER=root ; COMMAND=/sbin/kextload -v -s /Library/Filesystems/Xsan/debug /System/Library/Extensions/acfsctl.kext
    May 16 19:06:14 Mac-Pro /Library/Filesystems/Xsan/bin/fsmpm: NSS: No FS Name Servers file - NAME SERVICE DISABLED.
    May 16 19:06:14 Mac-Pro fsmpm[246]: Portmapper: ComputerInfo: computer_name = "Mac Pro", hostname = "Mac-Pro"
    May 16 19:06:14 Mac-Pro fsmpm[246]: PortMapper: CVFS Volume Meta on device: /dev/rdisk1 (blk 0xe000003 raw 0xe000003) con: 2 lun: 0 state: 0xf4 inquiry [APPLE Xserve RAID 1.50] controller # '5000393000018365' serial # '5000393000018365L0' Size: 490190848 Sector Size: 512
    May 16 19:06:14 Mac-Pro fsmpm[246]: PortMapper: CVFS Volume RAID2_Left on device: /dev/rdisk2 (blk 0xe000004 raw 0xe000004) con: 2 lun: 0 state: 0xf4 inquiry [APPLE Xserve RAID 1.50] controller # '5000393000018A78' serial # '5000393000018A78L0' Size: 5860554719 Sector Size: 512
    May 16 19:06:14 Mac-Pro fsmpm[246]: PortMapper: CVFS Volume RAID2_Right on device: /dev/rdisk3 (blk 0xe000005 raw 0xe000005) con: 2 lun: 0 state: 0xf4 inquiry [APPLE Xserve RAID 1.50] controller # '5000393000018805' serial # '5000393000018805L0' Size: 5860554719 Sector Size: 512
    May 16 19:06:14 Mac-Pro fsmpm[246]: PortMapper: CVFS Volume RAID1_Right on device: /dev/rdisk4 (blk 0xe000006 raw 0xe000006) con: 2 lun: 0 state: 0xf4 inquiry [APPLE Xserve RAID 1.50] controller # '5000393000018319' serial # '5000393000018319L0' Size: 5860554719 Sector Size: 512
    May 16 19:06:15 Mac-Pro servermgrd: xsan: [52] main: Waited 21 secs for fsmpm to start (now running)
    May 16 19:06:15 Mac-Pro fsmpm[246]: PortMapper: Local FSD client is registered.
    May 16 19:06:17 Mac-Pro servermgrd: xsan: [52] Done waiting for fsmpm to start
    May 16 19:06:40 Mac-Pro servermgrd: xsan: [52/358E60] ERROR: mountvolumenamed(EditSAN): Cannot mount volume, file system does not know about it.
    I tried reinstalling Xsan on this workstation, re-entering the license key in Xsan Admin, and rewriting the Xsan settings, with no success.
    All of the other 10 PowerMac workstations can access the Xsan volume with no problems.
    Your help would be highly appreciated.

    Hi,
    I am accessing this WS via ARD, so the public LAN is definitely there. I have not connected the "Xsan" network (but there should be no problem communicating the Xsan traffic over this single LAN connection).
    With the firewall, the situation is stranger. When I try to open the Firewall settings under Sharing, there is a dialog: "Other firewall software is running on this computer." I googled this and it seems it was a common problem on 10.3. They suggest deleting the com.apple.sharing.firewall.plist file, but it is not there on 10.4.
    "sudo ipfw list" shows this:
    00001 allow udp from any 626 to any dst-port 626
    65535 allow ip from any to any
    Any ideas on how to make Firewall behave as expected? I will try to reinstall if I don't receive a reply.
    Thanks.

  • Mount two Xsan volumes hosted by different MDC

    For several reasons, we need to create two Xsan volumes hosted by different MDC groups.
    Assume that Xsan volume 1 is hosted by MDC1 and MDC2 (fail-over configuration), and Xsan volume 2 is hosted by MDC3 and MDC4 (fail-over configuration).
    Is it possible for an Xserve or Mac Pro to simultaneously mount these two Xsan volumes?
    This assumes, of course, that the FC cables and Ethernet cables are connected correctly.
    I know that each MDC group has its own .auth_secret file and puts the same .auth_secret file in the client's config directory.
    So I think it's not a piece of cake.
    A long time ago I read an article which said it is possible in an SNFS environment with a proper fsroute file configuration.
    But I hope there is a way to do this in an Xsan environment.
    Any comment will be appreciated.
    Steve
    Xserve G5 2.3 DP, PowerMac G5 2.7 DP   Mac OS X (10.4.8)   Xsan 1.4

    Hi,
    you have to modify these files in the config dir:
    fsnameservers:
    # list the IPs of all the MDCs here:
    IPPrimMDC1 0
    IPFailMDC1 1
    IPPrimMDC2 0
    IPFailMDC2 1
    automount.plist
    # list all mount points here, like this:
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
    <key>volume1</key>
    <dict>
    <key>AutoMount</key>
    <string>rw</string>
    <key>MountOptions</key>
    <dict/>
    </dict>
    <key>volume2</key>
    <dict>
    <key>AutoMount</key>
    <string>rw</string>
    <key>MountOptions</key>
    <dict/>
    </dict>
    </dict>
    </plist>
    Of course, you can mount manually first:
    mkdir /Volumes/volume2
    mount -t acfs volume2 /Volumes/volume2
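    After editing fsnameservers the client has to re-read it (rebooting the client is the blunt but reliable way); you can then confirm it sees the FSMs from both MDC pairs, for example:
    sudo /Library/Filesystems/Xsan/bin/cvadmin
    # the FSS list shown at startup should now include volume1 and volume2, each hosted by its own MDC pair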

  • XSAN volumes mount RW, but ROOT is locked out.

    I am mounting several clients to the Xsan. The volumes mount as RW through the admin, and Get Info shows them as RW mounts, but additional info says 'you can only read' and there is a small lock illustration on the drive icon after mounting. Using Terminal as admin, I cannot write to the root of the drive, but previously created folders work as RW. Using one of the volumes as the FCP scratch disk does not work, but using a folder inside the volume works. I have tried resetting permissions and restarting the volume(s) and the controller. Nothing has helped. I did have one client with possibly corrupted data (bad HDD) that was crashing at the time this began. Any thoughts will be appreciated, as I have not been able to find anything on it yet.

    I would say that read-only access at the root level of your Xsan volume is preferable; at least it is in the environment we are working in. This prevents anyone but an administrator from placing folders at the root level, meaning that you can control affinity assignment, whereas any local user who created a folder at the root level would have no affinity assignment. In our environment, where we're working with multiple shows at a time and 52 TB of total storage, affinity assignments are absolutely crucial for file management.

  • No refresh files on finder at xsan volume

    I'm working with Xsan 3.0 on an OS X 10.8.2 server; all the clients are Fibre Channel Xsan clients. When we create a folder or copy a file onto the Xsan volume using one client, it is not updated on the rest of the clients when viewed in the Finder, even after several minutes. Any idea why?
    Thank you!

    "Me too", and it's been happening for us in metaSAN, not even XSAN. Now I know it's not metaSAN's fault...
    As Cooney said, it used to be a non-issue in prior versions and now requires a manual patch. Please send a bug report: http://www.apple.com/feedback/macosx.html
    Or if you have developer access:
    https://bugreport.apple.com/cgi-bin/WebObjects/RadarWeb.woa/wa/signIn
    Here is the report I sent:
    Observed behavior:
    When a user creates new files in a SAN folder, another user will not see those new files. Waiting (over 15 minutes), closing and reopening the folder, or reopening a Finder window makes no difference.
    Expected behavior:
    Finder should automatically pick up on the changes to the folder and display them, as has been the behavior prior to 10.8.
    Recurrence: Sporadic.
    Workarounds:
    Users have to issue a Finder refresh command via AppleScript. Alternatively, if user #2 creates a new subfolder it will also "kick" the folder and update its view with the missing files.
    We observed this issue with metaSAN 5.5.0.42 as well as XSAN3. 10.8.4 clients with 10.8.4 server.

  • Regular Preventive Maintenance schedule for xsan volume ??

    hi
    Is there any weekly or monthly preventive maintenance schedule we should perform for an Xsan volume, like defragmenting, running certain commands, etc.?
    What regular maintenance steps for Xsan and its servers does Apple recommend?
    thanks

    Hi,
    I do not do such maintenance. However, every night a cvfsck -nv is run. This is a read-only file system check. If its output contains a line which says:
    Filesystem would have been modified
    then a real file system check is needed.
    A defrag is only performed on files, not on empty space. It takes a very long time, and I do not expect much of it, so I do not do it.
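    If it helps, here is a minimal sketch of that nightly job (run from cron or launchd on the MDC; the volume name and mail address are placeholders to adapt):
    #!/bin/sh
    # Read-only nightly check: cvfsck -n does not modify the volume.
    VOLUME=MyVolume
    OUT=$(/Library/Filesystems/Xsan/bin/cvfsck -nv "$VOLUME" 2>&1)
    if echo "$OUT" | grep -q "would have been modified"; then
        echo "$OUT" | mail -s "cvfsck: $VOLUME needs a real check" admin@example.com
    fi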
    Hope this helps,
    Regards
    Donald

  • Want to delete Xsan Volume - REALLY

    Unusual question... I really want to delete our Xsan Volume.
    We're adding bigger drives and additional RAIDs to our Xsan and want to "start from scratch", so to speak.
    I want to see if anyone has any ideas about how to do this. Here is what I was thinking:
    1. Unmount all Xsan clients/controllers
    2. Stop the Xsan volume
    3. From Xsan Admin, delete the Xsan volume (I can't seem to delete the storage pools first, though)
    4. Save
    5. Unlabel the Xsan LUNs
    6. Save
    7. Add new ADMs to the "existing" RAIDs
    8. RAID Admin - get them ready
    9. In Xsan Admin, label the new LUNs
    10. Create a new volume and storage pools
    Does this sound right? Any additional ideas?
    Thanks in advance.

    Hey jag,
    That sounds almost perfect... the only things I can think of:
    5. I wasn't sure if you could do this via Xsan Admin; usually I do it via /Library/Filesystems/Xsan/bin/cvlabel -u
    7-8. When you add the ADMs, you'll want to delete and recreate the arrays, so that will take a while. Technically you could grow them, although I hear that takes just as long, so you might as well do it from scratch properly (that's what I've heard is the right thing to do).
    Once you redo all your arrays/LUNs, make sure to reboot all the machines in your Xsan so they see the LUNs properly.

  • NEED HELP Xsan volume is not mounted (strange problem)

    I'm asking for help in solving a volume mount problem. (In advance, I am sorry for my English.)
    It all began when the MDC rebooted and the Xsan labels were gone from two of the LUNs. I relabeled these LUNs using the commands:
    cvlabel -c >label_list
    Then, in the label_list file, I gave the unknown disks the same label names they had before. Then I ran the cvlabel label_list command. Finally I got the correct labels on all drives.
    # cvlabel -l
    /dev/rdisk14 [Raidix  meta_i          3365] acfs-EFI "META_I"Sectors: 3906830002. Sector Size: 512.  Maximum sectors: 3906975711.
    /dev/rdisk15 [Raidix  QSAN_I          3365] acfs-EFI "QSAN_I"Sectors: 7662714619. Sector Size: 4096.  Maximum sectors: 7662714619.
    /dev/rdisk16 [Raidix  meta_ii         3365] acfs-EFI "META_II"Sectors: 3906830002. Sector Size: 512.  Maximum sectors: 3906975711.
    /dev/rdisk17 [Raidix  2k_I            3365] acfs-EFI "2K_I"Sectors: 31255934943. Sector Size: 512.  Maximum sectors: 31255934943.
    /dev/rdisk18 [Raidix  2k_II           3365] acfs-EFI "2K_II"Sectors: 31255934943. Sector Size: 512.  Maximum sectors: 31255934943.
    /dev/rdisk19 [Raidix  QSAN_II         3365] acfs-EFI "QSAN_II"Sectors: 7662714619. Sector Size: 4096.  Maximum sectors: 7662714619.
    The volume [2K] starts successfully,
    but it does not mount on the MDC or the client.
    I ran a volume check:
    sh-3.2# cvfsck -wv 2K
    Checked Build disabled - default.
    BUILD INFO:
    #!@$ Revision 4.2.2 Build 7443 (480.8) Branch Head
    #!@$ Built for Darwin 12.0
    #!@$ Created on Mon Jul 29 17:01:44 PDT 2013
    Created directory /tmp/cvfsck3929a for temporary files.
    Attempting to acquire arbitration block... successful.
    Creating MetadataAndJournal allocation check file.
    Creating Video allocation check file.
    Creating Data allocation check file.
    Recovering Journal Log.
    Super Block information.
      FS Created On               : Wed Oct  2 23:59:20 2013
      Inode Version               : '2.7' - 4.0 big inodes + NamedStreams (0x207)
      File System Status          : Clean
      Allocated Inodes            : 4022272
      Free Inodes                 : 16815
      FL Blocks                   : 79
      Next Inode Chunk            : 0x51a67
      Metadump Seqno              : 0
      Restore Journal Seqno       : 0
      Windows Security Indx Inode : 0x5
      Windows Security Data Inode : 0x6
      Quota Database Inode        : 0x7
      ID Database Inode           : 0xa
      Client Write Opens Inode    : 0x8
    Stripe Group MetadataAndJournal             (  0) 0x746ebf0 blocks.
    Stripe Group Video                          (  1) 0x746ffb60 blocks.
    Stripe Group Data                           (  2) 0xe45dfb60 blocks.
    Inode block size is 1024
    Building Inode Index Database 4022272 (100%).       
       4022272 inodes found out of 4022272 expected.
    Verifying NT Security Descriptors
    Found 13 NT Security Descriptors: all are good
    Verifying Free List Extents.
    Scanning inodes 4022272 (100%).         
    Sorting extent list for MetadataAndJournal pass 1/1
    Updating bitmap for MetadataAndJournal extents 21815 (  0%).                   
    Sorting extent list for Video pass 1/1
    Updating bitmap for Video extents 3724510 ( 91%).                   
    Sorting extent list for Data pass 1/1
    Updating bitmap for Data extents 4057329 (100%).                   
    Checking for dead inodes 4022272 (100%).         
    Checking directories 11136 (100%).        
    Scanning for orphaned inodes 4022272 (100%).       
    Verifying link & subdir counts 4022272 (100%).         
    Checking free list. 4022272 (100%).       
    Checking pending free list.                       
    Checking Arbitration Control Block.
    Checking MetadataAndJournal allocation bit maps (100%).        
    Checking Video allocation bit maps (100%).        
    Checking Data allocation bit maps (100%).        
    File system '2K'. Blocks-5784860352 free-3674376793 Inodes-4022272 free-16815.
    File System Check completed successfully.
    The check did not help.
    sh-3.2# cvadmin
    Xsan Administrator
    Enter command(s)
    For command help, enter "help" or "?".
    List FSS
    File System Services (* indicates service is in control of FS):
    1>*2K[0]                located on big.local:64844 (pid 5217)
    Select FSM "2K"
    Created           :          Wed Oct  2 23:59:20 2013
    Active Connections:          0
    Fs Block Size     :          16K
    Msg Buffer Size   :          4K
    Disk Devices      :          5
    Stripe Groups     :          3
    Fs Blocks         :          5784860352 (86.20 TB)
    Fs Blocks Free    :          3665561306 (54.62 TB) (63%)
    Xsanadmin (2K) > show
    Show stripe groups (File System "2K")
    Stripe Group 0 [MetadataAndJournal]  Status:Up,MetaData,Journal,Exclusive
      Total Blocks:122088432 (1.82 TB)  Reserved:0 (0.00 B) Free:121753961 (1.81 TB) (99%)
      MultiPath Method:Rotate
        Primary  Stripe [MetadataAndJournal]  Read:Enabled  Write:Enabled
    Stripe Group 1 [Video]  Status:Up
      Total Blocks:1953495904 (29.11 TB)  Reserved:270720 (4.13 GB) Free:129179 (1.97 GB) (0%)
      MultiPath Method:Rotate
        Primary  Stripe [Video]  Read:Enabled  Write:Enabled
    Stripe Group 2 [Data]  Status:Up
      Total Blocks:3831364448 (57.09 TB)  Reserved:270720 (4.13 GB) Free:3665432127 (54.62 TB) (95%)
      MultiPath Method:Rotate
        Primary  Stripe [Data]  Read:Enabled  Write:Enabled
    I checked the availability of the LUNs on the MDC and the client; everything is fine there as well.
    But, unfortunately, the volume does not mount:
    xsanctl mount 2K
    mount command failed: Unable to mount volume `2K' (error code: 5)
    Please help me figure out this situation; I will be grateful for any information.
    Thanks

    That looks like an I/O error. You may have an issue with one or more of the data LUNs. Check the system log for errors.
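    A quick way to scan for them (adjust the pattern to whatever your HBA or RAID driver actually logs):
    grep -i "I/O error" /var/log/system.log
    zgrep -i "I/O error" /var/log/system.log.*.gz 2>/dev/null   # archived logs, if any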
