Kanaka Client unable to mount volume

I have OES11 SP2 on SLES 11 SP3, on which I am running the Kanaka Engine to mount volumes on a Mac OS client from servers in the same tree.
I created an eDirectory user account that has rights to volumes on two servers in the same tree.
Example: 1) abc (volume) on Server1 (running the Kanaka Engine)
2) xyz (volume) on Server2
Note: I tested AFP connectivity, and it works without issues for both volumes from Mac OS; the eDirectory users can also map both volumes on a Windows machine with a Novell Client login.
The issue is that I cannot mount the volume from Server2 using the Kanaka Client on Mac OS; however, the volume on Server1 (which runs the Kanaka Engine) does mount.
Any help is highly appreciated.

01 2014-12-14 20:03:42 10800 1 0001 13780 7f5258c2c760 Engine::SignalHandler() - Invoked. Signal = {SIGTERM} 15.
01 2014-12-14 20:03:45 10800 5 0001 13780 7f524e92d700 WD: Engine mainline thread is leaving the Watch Dog function...
82 2014-12-14 20:03:45 10800 0 ff08 13780 7f524e12c700 <CONFIG><LEVEL>5</LEVEL><CATEGORY/></CONFIG>
82 2014-12-14 20:03:45 10800 0 ff09 13780 7f524e12c700 <CONFIG><LEVEL>5</LEVEL><CATEGORY/></CONFIG>
01 2014-12-14 20:03:45 10800 5 0001 13780 7f524e92d700 ML: Engine is shutting down...
01 2014-12-14 20:03:45 10800 5 0001 13780 7f524e92d700 ML: Terminating the HTTPx Server subsystem...
01 2014-12-14 20:03:45 10800 5 0001 13780 7f524e92d700 ML: Terminating the User Index Manager subsystem...
01 2014-12-14 20:03:45 10800 5 0007 13780 7f524e92d700 Index Manager: Shutting Down...
01 2014-12-14 20:03:45 10800 5 0007 13780 7f524e92d700 Index Manager: Waiting for Scheduler Thread to Terminate...
01 2014-12-14 20:03:46 10800 5 0007 13780 7f524e92d700 Index Manager: Waiting for All Worker Threads to Terminate...
01 2014-12-14 20:03:46 10800 5 0007 13780 7f524e92d700 Index Manager: Shut Down Complete...
01 2014-12-14 20:03:46 10800 5 0001 13780 7f524e92d700 ML: Configuring the Novell XPLAT environment to no longer perform logging...
01 2014-12-14 20:03:46 10800 5 0001 13780 7f524e92d700 ML: Saving state information...
01 2014-12-14 20:03:46 10800 5 0001 13780 7f524e92d700 ML: Saving the Salt state information...
01 2014-12-14 20:03:46 10800 5 0001 13780 7f524e92d700 ML: Saved the Salt state information, bResult = 1.
01 2014-12-14 20:03:46 10800 5 0001 13780 7f524e92d700 ML: Completed saving state information, bOK = 1.
01 2014-12-14 20:03:46 10800 5 0001 13780 7f524e92d700 ML: Engine has shut down. Thread is terminating...
01 2014-12-14 20:03:46 10800 0 8001 13780 7f524e12c700 Shutting down the Logger. Closing log file "/var/opt/novell/kanaka/engine/log/novell-kanakaengined-20141202-132804.log".
81 2014-12-14 20:03:46 10800 0 8001 13780 7f524e12c700 <LOGFILECLOSE/>
81 2014-12-14 20:03:49 10800 0 8001 10919 7f517cb27700 <LOGFILEOPEN/>
81 2014-12-14 20:03:49 10800 0 8001 10919 7f517cb27700 <LOGGER><VERSION>1</VERSION><APPNAME>novell-kanakaengined</APPNAME><TZBIAS>10800</TZBIAS><OSVERSIONINFO>Kernel Name: Linux, Architecture: x86_64, Kernel Release: 3.0.76-0.11-default, Kernel Version: #1 SMP Fri Jun 14 08:21:43 UTC 2013 (ccab990), Machine HW Name: x86_64</OSVERSIONINFO><OSBITS>64</OSBITS><APPBINARYFILESPEC>/opt/novell/kanaka/engine/bin/novell-kanakaengined</APPBINARYFILESPEC><APPBITS>64</APPBITS><AppVersion/><PROCESSID>10919</PROCESSID><THREADID>139987961149184</THREADID><LOGFILEPARAMS><ROLLOVERTYPE>2</ROLLOVERTYPE><FILESTORETAIN>10</FILESTORETAIN><FILESIZEMAX>10485760</FILESIZEMAX></LOGFILEPARAMS><LOGLEVELNAMEMAP><ENTRY><LEVEL>0</LEVEL><NAME>Logging Disabled</NAME></ENTRY><ENTRY><LEVEL>1</LEVEL><NAME>Fatal</NAME></ENTRY><ENTRY><LEVEL>2</LEVEL><NAME>Critical</NAME></ENTRY><ENTRY><LEVEL>3</LEVEL><NAME>Error</NAME></ENTRY><ENTRY><LEVEL>4</LEVEL><NAME>Warning</NAME></ENTRY><ENTRY><LEVEL>5</LEVEL><NAME>Informational</NAME></ENTRY><ENTRY><LEVEL>6</LEVEL><NAME>Success</NAME></ENTRY><ENTRY><LEVEL>7</LEVEL><NAME>Verbose</NAME></ENTRY><ENTRY><LEVEL>8</LEVEL><NAME>Debug</NAME></ENTRY></LOGLEVELNAMEMAP><CATEGORYNAMEMAP><ENTRY><CATEGORY> 1</CATEGORY><NAME>Engine Watchdog</NAME></ENTRY><ENTRY><CATEGORY>3</CATEGORY><NAME>Engine Global Class</NAME></ENTRY><ENTRY><CATEGORY>4</CATEGORY><NAME>UI Session Manager</NAME></ENTRY><ENTRY><CATEGORY>5</CATEGORY><NAME>Web UI Session Manager</NAME></ENTRY><ENTRY><CATEGORY>6</CATEGORY><NAME>Client Manager</NAME></ENTRY><ENTRY><CATEGORY>7</CATEGORY><NAME>Index Manager</NAME></ENTRY><ENTRY><CATEGORY>8</CATEGORY><NAME>Login Script 
Parser</NAME></ENTRY><ENTRY><CATEGORY>32769</CATEGORY><NAME>Logger</NAME></ENTRY><ENTRY><CATEGORY>32770</CATEGORY><NAME>FSTools</NAME></ENTRY><ENTRY><CATEGORY>32771</CATEGORY><NAME>DSTools</NAME></ENTRY><ENTRY><CATEGORY>32772</CATEGORY><NAME>HTTPxServer</NAME></ENTRY><ENTRY><CATEGORY>32773</CATEGORY><NAME>HTTPxClient</NAME></ENTRY><ENTRY><CATEGORY>32774</CATEGORY><NAME>TaskMgr</NAME></ENTRY><ENTRY><CATEGORY>32775</CATEGORY><NAME>NWXPlat</NAME></ENTRY><ENTRY><CATEGORY>32776</CATEGORY><NAME>Storage Resources</NAME></ENTRY><ENTRY><CATEGORY>32777</CATEGORY><NAME>Security</NAME></ENTRY><ENTRY><CATEGORY>32778</CATEGORY><NAME>Thread</NAME></ENTRY><ENTRY><CATEGORY>32779</CATEGORY><NAME>OS Util</NAME></ENTRY><ENTRY><CATEGORY>32780</CATEGORY><NAME>Scheduling Service</NAME></ENTRY></CATEGORYNAMEMAP><LOGFILELOGLEVELMAP><ENTRY><CATEGO RY>0</CATEGORY><LEVEL>5</LEVEL></ENTRY><ENTRY><CATEGORY>1</CATEGORY><LEVEL>5</LEVEL></ENTRY><ENTRY><CATEGORY>3</CATEGORY><LEVEL>5</LEVEL></ENTRY><ENTRY><CATEGORY>4</CATEGORY><LEVEL>5</LEVEL></ENTRY><ENTRY><CATEGORY>5</CATEGORY><LEVEL>5</LEVEL></ENTRY><ENTRY><CATEGORY>6</CATEGORY><LEVEL>5</LEVEL></ENTRY><ENTRY><CATEGORY>7</CATEGORY><LEVEL>5</LEVEL></ENTRY><ENTRY><CATEGORY>8</CATEGORY><LEVEL>5</LEVEL></ENTRY><ENTRY><CATEGORY>32769</CATEGORY><LEVEL>5</LEVEL></ENTRY><ENTRY><CATEGORY>32770</CATEGORY><LEVEL>5</LEVEL></ENTRY><ENTRY><CATEGORY>32771</CATEGORY><LEVEL>5</LEVEL></ENTRY><ENTRY><CATEGORY>32772</CATEGORY><LEVEL>5</LEVEL></ENTRY><ENTRY><CATEGORY>32773</CATEGORY><LEVEL>5</LEVEL></ENTRY><ENTRY><CATEGORY>32774</CATEGORY><LEVEL>5</LEVEL></ENTRY><ENTRY><CATEGORY>32775</CATEGORY><LEVEL>5</LEVEL></ENTRY><ENTRY><CATEGORY>32776</CATEGORY><LEVEL>5</LEVEL></ENTRY></LOGFILELOGLEVELMAP><CONERRLOGLEVELMAP><ENTRY><CATE 
GORY>0</CATEGORY><LEVEL>5</LEVEL></ENTRY><ENTRY><CATEGORY>1</CATEGORY><LEVEL>5</LEVEL></ENTRY><ENTRY><CATEGORY>3</CATEGORY><LEVEL>5</LEVEL></ENTRY><ENTRY><CATEGORY>4</CATEGORY><LEVEL>5</LEVEL></ENTRY><ENTRY><CATEGORY>5</CATEGORY><LEVEL>5</LEVEL></ENTRY><ENTRY><CATEGORY>6</CATEGORY><LEVEL>5</LEVEL></ENTRY><ENTRY><CATEGORY>7</CATEGORY><LEVEL>5</LEVEL></ENTRY><ENTRY><CATEGORY>8</CATEGORY><LEVEL>5</LEVEL></ENTRY><ENTRY><CATEGORY>32769</CATEGORY><LEVEL>5</LEVEL></ENTRY><ENTRY><CATEGORY>32770</CATEGORY><LEVEL>5</LEVEL></ENTRY><ENTRY><CATEGORY>32771</CATEGORY><LEVEL>5</LEVEL></ENTRY><ENTRY><CATEGORY>32772</CATEGORY><LEVEL>5</LEVEL></ENTRY><ENTRY><CATEGORY>32773</CATEGORY><LEVEL>5</LEVEL></ENTRY><ENTRY><CATEGORY>32774</CATEGORY><LEVEL>5</LEVEL></ENTRY><ENTRY><CATEGORY>32775</CATEGORY><LEVEL>5</LEVEL></ENTRY><ENTRY><CATEGORY>32776</CATEGORY><LEVEL>5</LEVEL></ENTRY></CONERRLOGLEVELMAP><SYSLOGLOGLEVELMAP><ENTRY><CATEG ORY>0</CATEGORY><LEVEL>2</LEVEL></ENTRY><ENTRY><CATEGORY>1</CATEGORY><LEVEL>2</LEVEL></ENTRY><ENTRY><CATEGORY>3</CATEGORY><LEVEL>2</LEVEL></ENTRY><ENTRY><CATEGORY>4</CATEGORY><LEVEL>2</LEVEL></ENTRY><ENTRY><CATEGORY>5</CATEGORY><LEVEL>2</LEVEL></ENTRY><ENTRY><CATEGORY>6</CATEGORY><LEVEL>2</LEVEL></ENTRY><ENTRY><CATEGORY>7</CATEGORY><LEVEL>2</LEVEL></ENTRY><ENTRY><CATEGORY>8</CATEGORY><LEVEL>2</LEVEL></ENTRY><ENTRY><CATEGORY>32769</CATEGORY><LEVEL>2</LEVEL></ENTRY><ENTRY><CATEGORY>32770</CATEGORY><LEVEL>2</LEVEL></ENTRY><ENTRY><CATEGORY>32771</CATEGORY><LEVEL>2</LEVEL></ENTRY><ENTRY><CATEGORY>32772</CATEGORY><LEVEL>2</LEVEL></ENTRY><ENTRY><CATEGORY>32773</CATEGORY><LEVEL>2</LEVEL></ENTRY><ENTRY><CATEGORY>32774</CATEGORY><LEVEL>2</LEVEL></ENTRY><ENTRY><CATEGORY>32775</CATEGORY><LEVEL>2</LEVEL></ENTRY><ENTRY><CATEGORY>32776</CATEGORY><LEVEL>2</LEVEL></ENTRY></SYSLOGLOGLEVELMAP><EVENTLOGLOGLEVELMAP><ENTRY><CAT 
EGORY>0</CATEGORY><LEVEL>0</LEVEL></ENTRY><ENTRY><CATEGORY>1</CATEGORY><LEVEL>0</LEVEL></ENTRY><ENTRY><CATEGORY>32769</CATEGORY><LEVEL>0</LEVEL></ENTRY><ENTRY><CATEGORY>32770</CATEGORY><LEVEL>0</LEVEL></ENTRY><ENTRY><CATEGORY>32771</CATEGORY><LEVEL>0</LEVEL></ENTRY><ENTRY><CATEGORY>32772</CATEGORY><LEVEL>0</LEVEL></ENTRY><ENTRY><CATEGORY>32773</CATEGORY><LEVEL>0</LEVEL></ENTRY><ENTRY><CATEGORY>32774</CATEGORY><LEVEL>0</LEVEL></ENTRY><ENTRY><CATEGORY>32775</CATEGORY><LEVEL>0</LEVEL></ENTRY><ENTRY><CATEGORY>32776</CATEGORY><LEVEL>0</LEVEL></ENTRY></EVENTLOGLOGLEVELMAP><DEBUGPORTLOGLEVELMAP><ENTRY>< CATEGORY>0</CATEGORY><LEVEL>0</LEVEL></ENTRY><ENTRY><CATEGORY>1</CATEGORY><LEVEL>0</LEVEL></ENTRY><ENTRY><CATEGORY>32769</CATEGORY><LEVEL>0</LEVEL></ENTRY><ENTRY><CATEGORY>32770</CATEGORY><LEVEL>0</LEVEL></ENTRY><ENTRY><CATEGORY>32771</CATEGORY><LEVEL>0</LEVEL></ENTRY><ENTRY><CATEGORY>32772</CATEGORY><LEVEL>0</LEVEL></ENTRY><ENTRY><CATEGORY>32773</CATEGORY><LEVEL>0</LEVEL></ENTRY><ENTRY><CATEGORY>32774</CATEGORY><LEVEL>0</LEVEL></ENTRY><ENTRY><CATEGORY>32775</CATEGORY><LEVEL>0</LEVEL></ENTRY><ENTRY><CATEGORY>32776</CATEGORY><LEVEL>0</LEVEL></ENTRY></DEBUGPORTLOGLEVELMAP><SYSLOGPARAMS><SYSLOGLEVEL>1</SYSLOGLEVEL><SYSLOGFACILITY>1</SYSLOGFACILITY><SYSLOGLEVELMAP><ENTRY><LEVEL>1</LEVEL><PRIORITY>2</PRIORITY></ENTRY><ENTRY><LEVEL>2</LEVEL><PRIORITY>2</PRIORITY></ENTRY><ENTRY><LEVEL>3</LEVEL><PRIORITY>3</PRIORITY></ENTRY><ENTRY><LEVEL>4</LEVEL><PRIORITY>4</PRIORITY></ENTRY><ENTRY><LEVEL>5</LEVEL><PRIORITY>5</PRIORITY></ENTRY><ENTRY><LEVEL>6</LEVEL><PRIORITY>5</PRIORITY></ENTRY><ENTRY><LEVEL>7</LEVEL><PRIORITY>6</PRIORITY></ENTRY><ENTRY><LEVEL>8</LEVEL><PRIORITY>7</PRIORITY></ENTRY></SYSLOGLEVELMAP></SYSLOGPARAMS><EVENTLOGPARAMS><EVENTLOGTYPE>0</EVENTLOGTYPE><EVENTLOGTYPEMAP/></EVENTLOGPARAMS></LOGGER>
01 2014-12-14 20:03:49 10800 0 8001 10919 7f517cb27700 Initializing Logger, re-opening existing log file "/var/opt/novell/kanaka/engine/log/novell-kanakaengined-20141202-132804.log".
01 2014-12-14 20:03:49 10800 0 8001 10919 7f517cb27700 This Logger class instance [0xadae90] was initialized at 2014:12:14:20:03:49 in Process ID # 10919.
01 2014-12-14 20:03:49 10800 0 8001 10919 7f517cb27700 OS Version Info = "Kernel Name: Linux, Architecture: x86_64, Kernel Release: 3.0.76-0.11-default, Kernel Version: #1 SMP Fri Jun 14 08:21:43 UTC 2013 (ccab990), Machine HW Name: x86_64", OS Bits = 64, Application binary file spec = "/opt/novell/kanaka/engine/bin/novell-kanakaengined", Application Bits = 64, Application Version = "".
01 2014-12-14 20:03:49 10800 0 8001 10919 7f517cb27700 This is the first time that a Logger thread has been started for this class instance.
01 2014-12-14 20:03:49 10800 5 0001 10919 7f517d328700 ML: Thread has started.
01 2014-12-14 20:03:49 10800 5 0001 10919 7f517d328700 ML: Configuring the client logging subsystem [Phase I]...
01 2014-12-14 20:03:49 10800 5 0001 10919 7f517d328700 ML: Completed configuring the client logging subsystem [Phase I], bOK = 1.
01 2014-12-14 20:03:49 10800 5 0001 10919 7f517d328700 ML: Getting Operational Data [Phase I]...
01 2014-12-14 20:03:49 10800 5 0001 10919 7f517d328700 ML: Engine's instance name = "".
01 2014-12-14 20:03:49 10800 5 0001 10919 7f517d328700 ML: Engine is getting configuration path information...
01 2014-12-14 20:03:49 10800 5 0001 10919 7f517d328700 ML: Engine completed getting configuration path information, Result = 0, bOK = 1, path = "/etc/opt/novell/kanaka/engine/config".
01 2014-12-14 20:03:49 10800 5 0001 10919 7f517d328700 ML: Engine is getting program path information...
01 2014-12-14 20:03:49 10800 5 0001 10919 7f517d328700 ML: Engine completed getting program path information, bOK = 1, path = "/opt/novell/kanaka/engine/bin".
01 2014-12-14 20:03:49 10800 5 0001 10919 7f517d328700 ML: Completed getting Operational Data [Phase I], bOK = 1.
01 2014-12-14 20:03:49 10800 5 0001 10919 7f517d328700 ML: Initializing the Configuration...
01 2014-12-14 20:03:49 10800 5 0001 10919 7f517d328700 ML: Setting the reports data path...
01 2014-12-14 20:03:49 10800 5 0001 10919 7f517d328700 ML: Completed setting the reports data path, sPathSpec = "/var/opt/novell/kanaka/engine/data/reports", bOK = 1.
01 2014-12-14 20:03:49 10800 5 0001 10919 7f517d328700 ML: Setting the rtconfig reports data path...
01 2014-12-14 20:03:49 10800 5 0001 10919 7f517d328700 ML: Completed setting the rtconfig reports data path, sPathSpec = "/var/opt/novell/kanaka/engine/data/reports/rtconfig", bOK = 1.
01 2014-12-14 20:03:49 10800 5 0001 10919 7f517d328700 ML: Setting the client log data path...
01 2014-12-14 20:03:49 10800 5 0001 10919 7f517d328700 ML: Completed setting the client log data path, sPathSpec = "/var/opt/novell/kanaka/engine/log/client", bOK = 1.
01 2014-12-14 20:03:49 10800 5 0001 10919 7f517d328700 ML: Validating data path information...
01 2014-12-14 20:03:49 10800 5 0001 10919 7f517d328700 ML: Completed validating data path information, Data Path = "/var/opt/novell/kanaka/engine/data", bOK = 1.
01 2014-12-14 20:03:49 10800 5 0001 10919 7f517d328700 ML: Completed initializing the Configuration, bOK = 1.
01 2014-12-14 20:03:49 10800 5 0001 10919 7f517d328700 ML: Configuring the Logging subsystem [Phase II]...
82 2014-12-14 20:03:49 10800 0 ff04 10919 7f517cb27700 <CONFIG><LOGPATH>/var/opt/novell/kanaka/engine/log</LOGPATH></CONFIG>
82 2014-12-14 20:03:49 10800 0 ff05 10919 7f517cb27700 <CONFIG><LOGFILEBASENAME/></CONFIG>
82 2014-12-14 20:03:49 10800 0 ff05 10919 7f517cb27700 <CONFIG><LOGFILEBASENAME>novell-kanakaengined</LOGFILEBASENAME></CONFIG>
82 2014-12-14 20:03:49 10800 0 ff08 10919 7f517cb27700 <CONFIG><LEVEL>5</LEVEL><CATEGORY/></CONFIG>
82 2014-12-14 20:03:49 10800 0 ff07 10919 7f517cb27700 <CONFIG><ROLLOVERTYPE>2</ROLLOVERTYPE><FILESTORETAIN>10</FILESTORETAIN><FILESIZEMAX>10485760</FILESIZEMAX></CONFIG>
01 2014-12-14 20:03:49 10800 5 0001 10919 7f517d328700 ML: Completed configuring the Logging subsystem [Phase II], bOK = 1.
01 2014-12-14 20:03:49 10800 5 0001 10919 7f517d328700 ML: Completed configuring the client logging subsystem [Phase II], bOK = 1.
01 2014-12-14 20:03:49 10800 5 0001 10919 7f517d328700 ML: Getting Operational Data [Phase II]...
01 2014-12-14 20:03:49 10800 5 0001 10919 7f517d328700 ML: Setting the XML Message Signature object...
01 2014-12-14 20:03:49 10800 5 0001 10919 7f517d328700 ML: Completed setting the XML Message Signature object, bOK = 1.
01 2014-12-14 20:03:49 10800 5 0001 10919 7f517d328700 ML: Setting the SSL/TLS certificate file...
01 2014-12-14 20:03:49 10800 5 0001 10919 7f517d328700 ML: Completed setting the SSL/TLS certificate file, sFileSpec = "/etc/opt/novell/kanaka/engine/config/server.pem", bOK = 1.
01 2014-12-14 20:03:49 10800 5 0001 10919 7f517d328700 ML: Setting the storage resources file...
01 2014-12-14 20:03:49 10800 5 0001 10919 7f517d328700 ML: Completed setting the storge resources file, sFileSpec = "/var/opt/novell/kanaka/engine/data/storage-resources.dat", bOK = 1.
01 2014-12-14 20:03:49 10800 5 0001 10919 7f517d328700 ML: Setting the user index file...
01 2014-12-14 20:03:49 10800 5 0001 10919 7f517d328700 ML: Completed setting the user index file, sFileSpec = "/var/opt/novell/kanaka/engine/data/userindex.dat", bOK = 1.
01 2014-12-14 20:03:49 10800 5 0001 10919 7f517d328700 ML: Setting the client list file...
01 2014-12-14 20:03:49 10800 5 0001 10919 7f517d328700 ML: Completed setting the client list file, sFileSpec = "/var/opt/novell/kanaka/engine/data/clients.dat", bOK = 1.
01 2014-12-14 20:03:49 10800 5 0001 10919 7f517d328700 ML: Engine is determining its own DNS host name...
01 2014-12-14 20:03:49 10800 5 0001 10919 7f517d328700 ML: Engine completed determining its own DNS host name, Result = 0, bOK = 1, Name = "kanakasrvkaia".
01 2014-12-14 20:03:49 10800 5 0001 10919 7f517d328700 ML: Engine completed determining its own DNS host name, Result = 0, bOK = 1, Name = "kanakasrvkaia".
01 2014-12-14 20:03:49 10800 5 0001 10919 7f517d328700 ML: Engine is getting O.S. version information...
01 2014-12-14 20:03:49 10800 5 0001 10919 7f517d328700 ML: Engine completed getting O.S. version information, Result = 0, bOK = 1.
01 2014-12-14 20:03:49 10800 5 0001 10919 7f517d328700 ML: Engine is getting file version information for itself ["/opt/novell/kanaka/engine/bin/novell-kanakaengined"]...
01 2014-12-14 20:03:49 10800 5 0001 10919 7f517d328700 ML: Engine completed getting file version information for itself ["/opt/novell/kanaka/engine/bin/novell-kanakaengined"], Version = "2.7.1.8", Result = 0, bOK = 1.
01 2014-12-14 20:03:49 10800 5 0001 10919 7f517d328700 ML: Completed getting Operational Data [Phase II], bOK = 1.
01 2014-12-14 20:03:49 10800 5 0001 10919 7f517d328700 ML: Getting Operational Data [Phase III]...
01 2014-12-14 20:03:51 10800 5 0001 10919 7f517d328700 ML: Engine is finished preparing DS Context, eDir Tree Name = "KTREE", Result = 0, bOK = 1.
01 2014-12-14 20:03:51 10800 5 0001 10919 7f517d328700 ML: Completed getting Operational Data [Phase III], bOK = 1.
01 2014-12-14 20:03:51 10800 5 0001 10919 7f517d328700 ML: Configuring the Novell XPLAT environment to perform logging...
01 2014-12-14 20:03:51 10800 5 0001 10919 7f517d328700 ML: Completed configuring the Novell XPLAT environment to perform logging, bOK = 1.
01 2014-12-14 20:03:51 10800 5 0001 10919 7f517d328700 ML: Initializing the HTTPx Server subsystem...
01 2014-12-14 20:03:51 10800 5 0001 10919 7f517d328700 ML: Completed initializing the HTTPx Server subsystem, bOK = 1.
82 2014-12-14 20:03:51 10800 0 ff08 10919 7f517cb27700 <CONFIG><LEVEL>5</LEVEL><CATEGORY/></CONFIG>
82 2014-12-14 20:03:51 10800 0 ff09 10919 7f517cb27700 <CONFIG><LEVEL>5</LEVEL><CATEGORY/></CONFIG>
01 2014-12-14 20:03:51 10800 5 0001 10919 7f517d328700 ML: Loading saved state information...
01 2014-12-14 20:03:51 10800 5 0001 10919 7f517d328700 ML: Loading the Salt state information...
01 2014-12-14 20:03:51 10800 5 0001 10919 7f517d328700 ML: Loaded the Salt state information, bOK = 1.
01 2014-12-14 20:03:51 10800 5 0001 10919 7f517d328700 ML: Completed loading state information, bOK = 1.
01 2014-12-14 20:03:51 10800 5 0001 10919 7f517d328700 ML: Successfully authenticated as the Proxy Account.
01 2014-12-14 20:03:51 10800 5 0003 10919 7f517d328700 GL: Base schema appears to be properly extended.
01 2014-12-14 20:03:51 10800 5 0003 10919 7f517d328700 GL: Collaborative Homedirectory attribute ccx-FSFManagedPath is available.
01 2014-12-14 20:03:51 10800 5 0003 10919 7f517d328700 GL: Kanaka AFP Volume name attribute cccKanakaAFPVolumeName is available.
01 2014-12-14 20:03:51 10800 5 0001 10919 7f517d328700 ML: Attempted validate accounts, Result = 0.
01 2014-12-14 20:03:51 10800 5 0001 10919 7f517d328700 ML: Successfully authenticated as the Proxy Account.
01 2014-12-14 20:03:51 10800 5 0001 10919 7f517d328700 ML: Attempted to Generate new SALT and Proxy Password, Result = 0.
01 2014-12-14 20:03:51 10800 5 0001 10919 7f517d328700 ML: Called LoadStorageResourcesFromCache(), Result = 0.
01 2014-12-14 20:03:51 10800 5 0001 10919 7f517d328700 ML: Called LoadClientManager(), Result = 0.
01 2014-12-14 20:03:51 10800 3 0001 10919 7f517d328700 ML: Called LoadMountPointManager(), Result = 53.
01 2014-12-14 20:03:51 10800 5 0001 10919 7f517d328700 ML: Called LoadIndexManager(), Result = 0.
01 2014-12-14 20:03:51 10800 5 0007 10919 7f517d328700 Index Manager: Starting Scheduler Thread...
01 2014-12-14 20:03:51 10800 5 0001 10919 7f517d328700 ML: Called IndexManager::StartIndexingScheduler(), Result = 0.
01 2014-12-14 20:03:51 10800 5 0007 10919 7f517991c700 Index Manager [ST]: Next User Index Rebuild will happen in 14169 seconds.
01 2014-12-14 20:03:51 10800 5 0001 10919 7f517d328700 ML: Called LoadKanakaPolicy(), Result = 0.
01 2014-12-14 20:03:51 10800 5 0001 10919 7f517d328700 ML: Starting the HTTPx Server subsystem...
01 2014-12-14 20:03:51 10800 5 0001 10919 7f517d328700 ML: Completed starting the HTTPx Server subsystem, bResult = 1.
01 2014-12-14 20:03:51 10800 5 0001 10919 7f517d328700 ML: Completed initialization. Engine is now running.
01 2014-12-14 20:03:51 10800 5 0001 10919 7f517d328700 WD: Engine mainline thread is entering the Watch Dog function...
01 2014-12-14 20:03:51 10800 5 0001 10919 7f517d328700 WD: Engine is running...
01 2014-12-14 20:04:50 10800 5 0004 10919 7f5172c9f700 UI: admin.kaia [192.168.1.180] LOGIN SUCCESS

Similar Messages

  • Xsan 2.2 client cannot mount StorNext 3.5.2 volume

    Hello,
    On an OS X 10.6.4 system, I am trying to mount a StorNext 3.5.2 filesystem whose MDC is a Linux system.
    On the Mac I can see all the LUNs,
    and I can at least see the filesystem:
    +sh-3.2# xsanctl list+
    +snfs01 - not mounted+
    but I cannot mount it:
    +sh-3.2# xsanctl mount snfs01+
    +mount command failed: Unable to mount volume `snfs01' (error code: 5)+
    I would like to know what +error code: 5+ means, and whether anybody can help me mount the filesystem.

    Apple's Xsan enterprise support claims error code 5 is proprietary.
    Quite frustrating.
    In our experience with the same setup (RHEL MDCs running StorNext 3.5.x), error code 5 on clients after an xsanctl mount typically means: reboot the computer and try again; the computer will automount the SAN volume(s) 60% of the time.
    mcacciagrano, have you had any success deciphering what error code 5 is actually complaining about?
    It is not a unique serial number issue (verified through the system's config.plist).
    It's not a fibre issue (we can see the LUNs).
    It's NOT a ghosted mount point folder issue (from early Xsan days).
    It IS a (VERY COSTLY) annoyance, that's for sure.
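    For what it's worth, POSIX errno 5 is EIO ("Input/output error"), and the "reboot and retry" behaviour fits a transient I/O failure. Whether xsanctl is just surfacing a raw errno is an assumption (Apple calls the code proprietary), but the mapping itself is easy to check from a shell:

```shell
# POSIX errno 5 is EIO, the generic I/O error. If xsanctl is surfacing a raw
# errno (an assumption; Apple calls the code proprietary), this is its meaning.
python3 -c 'import errno, os; print(errno.errorcode[5], "-", os.strerror(5))'
```

    On both Linux and OS X this prints "EIO - Input/output error".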

  • Kanaka client: problem showing volumes

    Hi everyone,
    I just installed the Kanaka engine on a SLES 11 / OES 11 server and configured everything.
    I installed the Kanaka client on Mac OS 10.7; I can log in without a problem, but I can't see any volumes.
    I can't see the home directory or any other volumes.
    I can access those volumes via the Cmd-K shortcut without a problem over AFP and SMB,
    and the home directories are configured correctly for the users.
    Does anyone have an idea?

    FYI, there is a specific forum for that product: OES: Kanaka for Mac

  • Xsan 4 will not mount volume on client

    I recently upgraded my MDC and my client computer to Yosemite. I went through the migration process as best I could and created a Configuration Profile on my MDC for my client computer. I installed the profile successfully, but the volume will not mount on the client computer. On the client computer, under Profiles, the Xsan Configuration Profile has "Unsigned" in red underneath it; is that what is causing the problem?
    Also, a few notes: there are two other client computers that haven't been upgraded and are still running Mavericks, and the volume does mount on those computers. The volume is also mounted on the MDC, and if I go to Disk Utility on the client computer, I do see the volume, just not mounted.
    Any help would be great! Thank you.

    Thank you Claudio,
    After doing much research and testing, I believe you are onto something with my fsmpm or my .auth_secret file not being created. On the MDC, I used the configuration profile within Server.app. After installing that profile on my client computer, it didn't mount, and when looking in the Library/Preferences/Xsan folder there were no files (including the hidden .auth_secret file). So I then used the web profile interface to create the configuration profile, and this time it installed the fsnameservers and config.plist files, but not the hidden .auth_secret file, and still no mount.
    To answer some of your questions: I did try "sudo xsanctl mount Volume Name" several times, but had no success. I was previously running 10.9 on the MDC and my client computer. I am not sure whether the MDC already had an Open Directory Master before upgrading, and I did have a few problems activating Xsan because some of my DNS settings had changed. I got that all squared away, and Xsan started working.
    The reason I believe it is an fsmpm issue is that when I executed the command "sudo xsanctl i" (which views the volumes connected to Xsan), I got a message reading "fsmpm not running error3". So, after reading the forum thread Xsan: "fsmpm not running" message in Xsan Admin - Apple Support, I copied the .auth_secret from the MDC to my client, and still no mount. I tried again with all the files from the Library/Preferences/Xsan folder, and nothing. I then got the .auth_secret file from another client computer running 10.9, and still no mount, BUT when I copied all the files from the 10.9 client's Library/Preferences/Xsan, the volume appeared! YEAH! But I'm not sure whether this is a fix I should be happy with, or whether I should keep digging into why it isn't working the proper way, so that I don't run into this issue over and over again.
    So I need to figure out either how to get my fsmpm running in 10.10, or why Server.app on the MDC won't create the .auth_secret file. I read somewhere that the Xsan screen in the Server app should show the Authentication Secret, but I do not see it on my Xsan screen. Should it be there, and if so, is this where my problem stems from? Any thoughts? Thank you so much!
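    A quick way to check which of the three files discussed above actually landed on a given client (a sketch; the path and file names are the ones named in the thread):

```shell
# Check /Library/Preferences/Xsan for the three files discussed in the thread.
# fsnameservers and config.plist came from the web profile; the hidden
# .auth_secret was the one that never got created.
xsan_dir="/Library/Preferences/Xsan"
for f in fsnameservers config.plist .auth_secret; do
    if [ -e "$xsan_dir/$f" ]; then
        echo "present: $f"
    else
        echo "missing: $f"
    fi
done
```

    Running this on a working 10.9 client and on the failing 10.10 client makes the difference visible at a glance.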

  • Unable to Connect to Mounted Volumes via AFP; Local Network

    The server's name is Boing. If I try Connect to Server: afp://Boing.local, I'm given a list of possible volumes. However, the list only includes my home directory and every user's Public folder. I am an admin user:
    uid=516(mnewman) gid=516(mnewman) groups=516(mnewman),101(com.apple.sharepoint.group.1),105(com.apple.sharepoint.group.3),103(com.apple.accessscreensharing),98(_lpadmin),102(com.apple.access_ssh),81(_appserveradm),79(appserverusr),80(admin),20(staff),106(com.apple.sharepoint.group.4),104(com.apple.sharepoint.group.2)
    I should have access to all mounted volumes:
    drwxrwxr-t 56 root admin - 1972 May 17 13:39 /Volumes/Banana
    drwxrwxr-x 13 mnewman mnewman - 510 Jan 14 09:17 /Volumes/Farang
    lrwxr-xr-x 1 root admin - 1 May 21 10:46 /Volumes/Fuji -> /
    drwxrwxr-x 12 mnewman mnewman - 476 Apr 20 08:29 /Volumes/Guava
    drwxrwxr-t 17 root admin - 646 May 17 13:39 /Volumes/Lime
    If I ssh into Boing, I can get to everything.
    If a different admin user connects to Boing via AFP, she can get to everything.
    What has happened to my account that permits me AFP access only to public shares?

    Amazing:
    I used Apple's Workgroup Manager to add a short name (mgnewman) to my account.
    Using that short name, I can now connect to Boing using the Finder's Go -> Connect to Server...,
    and all the mounted volumes are shown.
    Workgroup Manager shows "mnewman" as my default short name, but that name doesn't work
    for AFP connections, either through the GUI or the command line, while my newly added short
    name works just fine. (Oddly, my Dot Mac e-mail address, also shown as a short user name in
    Workgroup Manager, works too....)
    I suspect that the Directory Services database is somehow corrupt.
    Is there a way to rebuild it?
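    Before rebuilding the Directory Services database, it may be worth comparing what the two admin identities resolve to at the POSIX level; a minimal sketch using id, with the current user standing in for the affected account:

```shell
# Show uid, primary gid, and supplementary groups for an account, then test
# membership in "admin", the group the /Volumes ACLs above rely on.
# The current user stands in for the affected account (mnewman in the thread).
acct=$(id -un)
id "$acct"
if id -Gn "$acct" | tr ' ' '\n' | grep -qx admin; then
    echo "$acct is in group admin"
else
    echo "$acct is NOT in group admin"
fi
```

    If the working and failing accounts differ here, the problem is POSIX-side; if they match, the mismatch lives in the directory records AFP consults, which fits the short-name workaround.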

  • OES2 SP2 miggui: Failed to mount volume

    I am trying to migrate NW65SP8 (DS 8.8.4) to OES2 SP2.
    miggui is running on the OES2 SP2 server, which is patched up to date using rug.
    The source and target servers are in the same container in the DS.
    I can define the source and target servers and get to the point where I want to configure the migration of the volumes, but then I get a failure. The log (minus the timestamps, for clarity) reads:
    ERROR - FILESYSTEM:volmount.rb:Command to mount source: ruby /opt/novell/migration/sbin/volmount.rb -s 138.37.100.118 -a "cn=joshua,ou=sys,o=qmw" -c cp437 -f "/var/opt/novell/migration/cc18/fs/mnt/source" -m -t NW65
    INFO - FILESYSTEM:volmount.rb:*****************Command output start**********************************
    INFO - FILESYSTEM:volmount.rb:
    INFO - FILESYSTEM:volmount.rb:Information: ncpmount using code page 437
    INFO - FILESYSTEM:volmount.rb:Information: ncpshell command executed as: LC_ALL=en_US.UTF-8 /opt/novell/ncpserv/sbin/ncpshell --volumes --ip=138.37.100.118 --u="joshua.sys.qmw"
    INFO - FILESYSTEM:volmount.rb:Information: Mounting Volume = _ADMIN
    INFO - FILESYSTEM:volmount.rb:Information: Executing command: /opt/novell/ncl/bin/nwmap -s 138.37.100.118 -d /var/opt/novell/migration/cc18/fs/mnt/source/_ADMIN -v _ADMIN
    INFO - FILESYSTEM:volmount.rb:Fatal: Failed to mount volume _ADMIN for server 138.37.100.118
    INFO - FILESYSTEM:volmount.rb:Fatal: SystemCallError, Unknown error 1008 - Failed to mount volume _ADMIN for server 138.37.100.118 .
    I tried executing what appears to be the offending command by hand:
    ruby /opt/novell/migration/sbin/volmount.rb -s 138.37.100.118 -a "cn=joshua,ou=sys,o=qmw" -c cp437 -f "K" -m -t NW65 -p password --debug
    which produces the output:
    Information: ncpmount using code page 437
    Information: ncpshell command executed as: LC_ALL=en_US.UTF-8 /opt/novell/ncpserv/sbin/ncpshell --volumes --ip=138.37.100.118 --u="joshua.sys.qmw"
    Information: Mounting Volume = _ADMIN
    Information: Executing command: /opt/novell/ncl/bin/nwmap -s 138.37.100.118 -d K/_ADMIN -v _ADMIN
    Fatal: Failed to mount volume _ADMIN for server 138.37.100.118
    Fatal: SystemCallError, Unknown error 1008 - Failed to mount volume _ADMIN for server 138.37.100.118 .
    Information: File K/_ADMIN/Novell/Cluster/PoolConfig.xml does not exist. No cluster resources attached
    Information: Mounting Volume = SYS
    Information: Executing command: /opt/novell/ncl/bin/nwmap -s 138.37.100.118 -d K/SYS -v SYS
    Fatal: Failed to mount volume SYS for server 138.37.100.118
    Information: unmounting all mounted volumes
    Information: Executing command:/opt/novell/ncl/bin/nwlogout -f -s QMWCC18
    Cannot perform logout: Cannot connect to server:[QMWCC18]. Error:NWCCOpenConnByName:
    Information: Command Output:
    Information: Executing command: rm K/*
    rm: cannot remove `K/*': No such file or directory
    Fatal: SystemCallError, Unknown error 1008 - Failed to mount volume SYS for server 138.37.100.118 .
    SLP shows both servers, they are on the same network as each other, and the firewall is turned off.
    Does anyone have any idea what may be causing the mount failure or what error 1008 might be?
    Tim

    cgaa183 wrote:
    > Thank you all for the helpful suggestions; I'll go through them.
    >
    > 1. novfsd
    > /etc/rc.d/novfsd status
    >
    > running
    >
    > /etc/rc.d/novfsd restart
    > Stopping Novell novfs daemon...
    >
    > done
    > Starting Novell novfs daemon...
    >
    > No Config File Found - Using Defaults
    > novfsd: Novell Client for Linux Daemon
    > Copyright 1992-2005, by Novell, Inc. All rights reserved.
    > Version 3.0.1-503
    >
    > done
    >
    > Nothing changes, so I don't think that was the problem in this case.
    >
    Did you restart the server after installing OES2 SP2?
    > 2. try using migfiles
    > /opt/novell/migration/sbin/migfiles -s 138.37.100.118 -v APPS -V APPS
    >
    > The result:
    > Error:
    > Error: nbackup: Unable to retrieve the Target Service Name list from
    > 138.37.100.118
    > Error:
    > Fatal: nbackup command failed to execute: nbackup: Connection denied
    >
    migfiles is not able to connect to the TSA on the source server. Either
    TSAFS is not loaded on the source server, or it is not able to locate the
    TSA using SLP.
    > This might be informative to someone who knows a little more than me. I
    > wonder if I can call it with options to get more information. I have
    > just re-checked I can attach and mount volumes with the same username
    > from another system.
    >
    > 3. ruby version
    >
    > rpm -qa |grep ruby
    > ruby-1.8.4-17.20
    > rubygems-0.9.2-4.4
    > rubygem-needle-1.3.0-1.5
    > rubygem-net-sftp-1.1.0-1.5
    > ruby-devel-1.8.4-17.20
    > rubygem-net-ssh-1.0.9-1.5
    >
    > Doesn't appear to be an afflicted version and migfiles --help tells
    > me all about the options available.
    >
    >
    > As #2 looked interesting I thought I'd look at it a bit more. I turned
    > up TID 7001767. The reason migfiles failed for me was that SMDR and
    > TSAFS weren't loaded, at least I don't get the error and file migration
    > appears to start now they are loaded, though it does appear to have
    > ceased copying files rather prematurely....
    >
    > Going back to volmount.rb I now realise it's using ncpfs, and not a
    > lightweight Novell client. So I tried mounting a volume by hand:
    >
    > qmwcc28:~ # ncpmount -S 138.37.100.118 -U joshua.sys.qmw K
    > Logging into 138.37.100.118 as JOSHUA.SYS.QMW
    > Password:
    > ncpmount: Server not found (0x8847) when trying to find 138.37.100.118
    >
    > A bit of a giveaway, but why doesn't it work?
    > It seems I need to use -A DNSname -S servername and then it works.
    > The next important bit seems to be
    > /opt/novell/ncpserv/sbin/ncpshell --volumes --ip=138.37.100.118
    > --u="joshua.sys.qmw"
    > which executed by hand lists volumes correctly with the output:
    > Please enter your password ?
    > [0] SYS
    > [1] _ADMIN
    > [2] APPS
    > 3 NCP Volumes Mounted
    >
    > "ncpshell" appears to be from Novell's client for Linux, so I don't
    > understand why we'd be trying to use that if we're using ncpfs; we
    > already know which volumes are mounted by looking in the folder in which
    > we mounted the server using ncpfs. AFAICS it is used to invoke NLMs on
    > NetWare remotely using OES, so it's not testing anything we don't
    > already know.
    >
    > This takes me inevitably to "nwmap". "nwmap" is also from Novell's
    > client for Linux so maybe the ncpfs stuff is unnecessary.
    > /opt/novell/ncl/bin/nwmap -s 138.37.100.118 -d sys -v SYS
    > produces:
    > map: server not Found:138.37.100.118 - drive sys not mapped
    >
    ncpmount uses UDP by default. Add the option -o tcp to the ncpmount
    command and the mount should work.
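Put together, the invocation this thread converges on would look like the sketch below; the address, server name, and user are the ones from the post, and the mount point is a scratch directory so the commands are safe to try:

```shell
# ncpmount speaks NCP over UDP by default; OES Linux servers expect TCP,
# hence "-o tcp". "-A" supplies the address and "-S" the server name, as
# worked out earlier in the thread.
mnt=$(mktemp -d)

if command -v ncpmount >/dev/null 2>&1; then
  ncpmount -o tcp -A 138.37.100.118 -S QMWCC18 -U joshua.sys.qmw "$mnt"
  result="attempted mount at $mnt"
else
  result="ncpmount not installed here"
fi
echo "$result"
```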
    > nwmap doesn't ask for a username. Maybe I'm wrong, but as far as the
    > Novell client goes I don't think it can have attached or logged into the
    > source server (ncpfs having a different connection table and ncpshell
    > asking the remote server to return the answer). I can't actually see
    > where /volmount.rb is calling nwmap at the moment, but the results I get
    > by calling it at the command prompt with the same options given in the
    > log are the same.
    >
    If there is an existing connection to the same tree, nwmap does not ask
    for a user name. Use the command "nwconnections" to check the existing
    connections, and nwlogout to log out of the connection. Check
    /var/opt/novell/nclmnt/root/ for any stale entries.
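Those checks can be strung together as below; a sketch only, assuming the NCL tools are in their default OES location:

```shell
# Clear any existing tree connection before retrying nwmap, then look for
# stale mount entries left behind by the client.
PATH="$PATH:/opt/novell/ncl/bin"

if command -v nwconnections >/dev/null 2>&1; then
  nwconnections                                  # list current connections
  nwlogout -a                                    # log out of all of them
  stale=$(ls /var/opt/novell/nclmnt/root/ 2>/dev/null)
  result="stale entries: ${stale:-none}"
else
  result="NCL tools not installed on this machine"
fi
echo "$result"
```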
    > I've tried logging in using nwlogin, but that fails too saying:
    > Cannot perform login: The system could not log you into the network.
    > Make sure that the name and connection information are correct, then
    > type your password again.
    >
    > ncl_scanner -T does list NDS trees but I suspect it's only querying an
    > SLP server and nothing more useful. ncl_scanner -S produces:
    > INFORMATION FOR SERVER [QMWCC18] :
    > Server Name : [QMWCC18]
    > Bindery : [FALSE]
    > eDirectory Context : []
    > should it show a context?
    >
    > Looking at the files of the Novell client on the system, it looks like
    > a rather cut-down set with no config files. Even having introduced
    > protocol.conf the situation is not improved, but I'm now sure the
    > problem lies in this area. Possibly a full client installation is
    > required, or maybe there is something else wrong which is preventing
    > the client from working correctly. namcd is looking suspect.
    >
    >
    You do not need all the files for the Novell Client. If you want, you
    can log out of all connections using the command "nwlogout -a" and try
    the nwmap command again:
    "/opt/novell/ncl/bin/nwmap -s 138.37.100.118 -d
    /var/opt/novell/migration/cc18/fs/mnt/source/SYS -v SYS"
    It looks like the Novell client is failing to resolve the IP address.
    You can configure different name resolution methods in the following
    way: create the file /etc/opt/novell/ncl/protocol.conf containing:
    Name_Resolution_Providers=NCP,SLP,DNS
    then restart the novfsd daemon using the command "rcnovfsd restart".
    Do you see any NCP errors in the network packet trace?
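As commands, that fix amounts to the sketch below; it only writes the file if the NCL config directory actually exists, so it is safe to try anywhere (run as root on the migration server for real use):

```shell
# Create protocol.conf with the suggested name-resolution order, then
# restart the daemon via its init script.
conf=/etc/opt/novell/ncl/protocol.conf

if [ -d "$(dirname "$conf")" ]; then
  printf 'Name_Resolution_Providers=NCP,SLP,DNS\n' > "$conf"
  rcnovfsd restart
  result="wrote $conf"
else
  result="NCL config directory not present; skipped"
fi
echo "$result"
```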
    regards
    Praveen

  • Mounted Volume Not Showing On Desktop or Sidebar

    Our company uses a Mac Mini running OS 10.10 Yosemite to share files. File Sharing is set up to share contents on the Mac Mini with other Macs and PCs, and sharing is working fine. However, in the last couple of days, after logging into the Mac Mini the mounted volume no longer shows on the desktop, in the sidebar, or when browsing the client hard drive. To reach the Mac Mini after mounting, an alias to it has to be created and the volume accessed via that alias. Folders from the Mac Mini can be added to the sidebar and accessed that way, but the main folder for the Mac Mini is only accessible through the alias. Finder preferences are set to show mounted volumes and servers on the desktop.
    Computers accessing the Mac Mini are running OS 10.6, 10.9, 10.8.

    Check the settings in the General tab of the Finder's preferences.

  • Show mounted volumes in the Sidebar (not servers)?

    Anyone have a Leopard tweak or hack to get the sidebar to show mounted volumes (i.e., share points) rather than servers? I need to see the share points, not the server itself (especially when Kerberos is enabled; it's just plain goofy).
    I like Leopard's Finder improvements for the most part, but need my sidebar to show discrete mounts like 10.4 (Tiger) did. I don't want to see the entire server that's hosting the mounted volume; that's just plain silly. I can use "Connect to Server" for that (or browse the LAN).
    Dragging mounts to the sidebar will work (they show up as "Devices"), but they are forgotten after a reboot. It's a temporary solution at best.
    Is this a bug? A limitation? A disaster? A feature?

    I'm not quite sure what your complaint is. You still have the same option to manually connect via the "Go" menu as with Tiger, do you not? Does this not still result in mounted volumes appearing in the devices list? (BTW, "devices" is the appropriate generic name for them in a UNIX environment -- don't blame Apple for that, it is just following a naming convention older & more established than those of either the Mac or Windows OS's.)
    If people are confused by the "Shared" section of the sidebar, which is a convenience item not essential to creating sharing connections, why not just suggest they remove it? A set of Finder preferences exists for this purpose. Deselect all three & no "Sharing" sidebar section will appear. Apple is not forcing anyone to use or see it unless they want to.
    Moreover, Apple is not the originator of SMB, which has its origins in DOS, was modified by Microsoft in ways to this day not completely documented in any open standards, & must be reverse engineered for interoperability with other OS implementations. Not surprisingly, it must be configured carefully on the server side with this in mind for it to work reliably, which is no easy task since (among other things) its performance, ease-of-use, & security can't all be maximized at the same time. This is hard enough in a pure Windows Server domain environment, much more difficult in a mixed platform one.
    Because of this, it may be premature to blame all the connection anomalies on Leopard and/or Finder's sidebar. A review of the protocols & settings used in your shop's SMB servers' implementation may turn up something helpful here, but that is probably best done by a network specialist familiar with the issues involved.
    As a side note (& by way of an example), SMB incorporates "auto-discovery" too, just in a more limited way compared to Bonjour. Typically, to maintain network security, it is configured to auto-discover only on a client's initial contact with the server, & there are sometimes reliability issues with this mechanism on large networks due to latency, which may explain some of the problems your users are having.
    So, don't be too quick to blame all this on Leopard alone -- as with all network issues, one must look at the entire network to identify causes & their remedies.

  • Cannot mount volume

    Hi
    I have a server connected to a MDC.
    The MDC is running 10.4.8 server, XSan 1.4.1.
    The MDC can mount the SAN correctly.
    The client is running 10.4.8 server, XSan 1.4.1
    The client cannot mount the SAN.
    What I've tried:
    Re-installing XSAN
    Re-installing the 10.4.8 combo update
    Re-installing the 2006-007 security update
    Changing from Controller, medium to client and back.
    mount -t ... /Volumes/SAN
    Every time I get this in the syslog:
    Dec 13 13:54:35 xsanclient kernel[0]: Could not mount filesystem SAN, cvfs error ' Timeout' (25)
    Any ideas?

    More from the log:
    Dec 13 22:17:51 multimedia kernel[0]: Could not mount filesystem SAN, cvfs error ' Timeout' (25)
    Dec 13 22:18:37 multimedia kernel[0]: Could not mount filesystem SAN, cvfs error ' Timeout' (25)
    Dec 13 22:18:53 multimedia sudo: admin : TTY=ttyp1 ; PWD=/Users/admin ; USER=root ; COMMAND=/usr/bin/tail -f /var/log/system.log
    Dec 13 22:19:01 multimedia kernel[0]: Could not mount filesystem SAN, cvfs error ' Timeout' (25)
    Dec 13 22:19:23 multimedia servermgrd: xsan: [39/36E6F0] ERROR: getfsmvol_atindex: Could not connect to FSM because Admin Tap Connection to FSM failed. - Operation timed out
    Dec 13 22:19:24 multimedia kernel[0]: Could not mount filesystem SAN, cvfs error ' Timeout' (25)

  • Xsan client stopped mounting raid

    Good morning community, I have Xsan 2.2 and OS 10.7.5.
    Two of my clients stopped mounting the RAID. They authenticate to the MDC but will not mount; the error on the client says to check fibre connections. I have switched to a computer with a working connection and the problem follows the machine. Xsan is working with all the other machines; only these two fail. The MDC system log shows:
    error mounting volume...operation coundn't be completed (SANTractionErrorDomain  error 100007.) (100007)
    any ideas? All help is greatly appreciated
    thank you

    Have you checked DNS, and can the client receive a Kerberos ticket? I know these don't sound relevant to an Xsan, but after doing this for 8 years I can assure you they are.
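The two checks that reply suggests, as a sketch (the MDC hostname below is a placeholder, not from the thread):

```shell
# Verify forward DNS for the MDC and look for a cached Kerberos ticket
# on the client. "mdc.example.com" stands in for the real MDC name.
getent hosts mdc.example.com || echo "no DNS entry for the MDC"

if command -v klist >/dev/null 2>&1; then
  klist || echo "no Kerberos ticket cached"
  result="checked"
else
  result="klist not available"
fi
echo "$result"
```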

  • Permissions re: Mount Volume/Connect to Server

    I wrote a simple little AppleScript to automate users' weekly email backup. The script gets triggered by the user's email client (Entourage). Here's the script:
    tell application "Finder"
    mount volume "afp://SERVER.ADDRESS/USER BACKUP"
    duplicate folder "USER'S EMAIL FOLDER" of startup disk to folder "USER'S BACKUP FOLDER" of disk "USER BACKUP" with replacing
    end tell
    However, I've noticed that this script only works if I have the user's folder permissions set to "read and write" for everyone (I can't set individual user permissions to restrict access)
    How can I run this script in a way that will authenticate on the fly and then restrict access accordingly?
      Mac OS X (10.4.7)  

    Mount volume doesn't have to go through the Finder, AFAIK. Also, I'd put the username and password in the AppleScript:
    mount volume "afp://username:[email protected]/somemountpoint"
    Or, if you don't want to embed the username and password, write a script that prompts for them.
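Combining that suggestion with the original script, the whole backup could look like the sketch below; the username, password, and folder names are placeholders from the post, not real values:

```applescript
-- Hypothetical: mount with inline credentials, then copy via the Finder.
mount volume "afp://username:[email protected]/USER BACKUP"
tell application "Finder"
	duplicate folder "USER'S EMAIL FOLDER" of startup disk to ¬
		folder "USER'S BACKUP FOLDER" of disk "USER BACKUP" with replacing
end tell
```

Note the trade-off: embedding credentials avoids the everyone-read-write permissions, but stores the password in plain text inside the script.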
    HTH
    edit: but I think you're right going through the Finder, otherwise your duplicate command would probably not work. It would have to be "do shell script cp blah blah" or "do shell script ditto -rsrcFork blah blah"
    Message was edited by: Ang Moh

  • Cannot mount volume - device offline

    Hi everyone,
    I am trying to setup OSB on a Linux box (CentOS).
    I could configure the tape library and tape drives as explained in the installation/admin guide, but could not get the tape volume mounted. It keeps giving the error:
    can't mount volume in tape1 - device offline
    It seems that the volume is offline. How can we get it online?
    The error log for the tape drive shows:
    Oracle Secure Backup hardware error log for "tape1", version 1
    QUANTUM DLT-S4, prom/firmware id 1414, serial number 236ER89482
    Tue May 13, 2008 at 22:05:23.748 (IST) devtype: 15
    obrobotd: core-ora:/dev/sg2, args to wst__exec: handle=0x1
    accessed via host core-ora: Linux 2.6.9-42.ELsmp #1 SMP Tue Aug 15 10:35:26 BST 2006
    op=18 (log_sense), buf=0x54b0b00, count=44 (0x2c), parm=0x3e0
    cdb: 4D 00 3E 00 00 00 00 00 2C 00 log_sense, cnt=44, pc=0, page_code=0x3e
    sense data:
    70 00 05 00 00 00 00 18 00 00 00 00 24 00 00 00
    00 00 00 00 00 00 00 00 00 00 00 48 00 00 00 00
    ec=0, sk=ill req, asc=24, ascq=0
    error is: illegal request
    flags: (none)
    returned status: code=illegal request,
    resid=0 (0x0), checks=0x0 []
    obtool user admin password oracle
    ob> lsdev -lvg
    lib1:
    Device type: library
    Model: [none]
    Serial number: [none]
    In service: yes
    Debug mode: no
    Barcode reader: yes
    Barcodes required: yes
    Auto clean: no
    Clean interval: (not set)
    Clean using emptiest: yes
    Unload required: yes
    Ejection type: auto
    Min writable volumes: 0
    UUID: 9ac064fe-01fb-102b-8a2e-0015175b4124
    Attachment 1:
    Host: core-ora
    Raw device: /dev/sg1
    Connection type: SCSI
    Inquiry data:
    Vendor: ADIC
    Product: Scalar 24
    Firmware: 0402
    Serial number: ADICJaya
    Element counts / addresses:
    1 mte: 1
    2 se : 4096 - 4097
    0 iee
    1 dte: 256
    Moves:
    From mte, to: mte 0 se 1 iee 0 dte 0
    From se, to: mte 0 se 1 iee 0 dte 1
    From iee, to: mte 0 se 0 iee 0 dte 0
    From dte, to: mte 0 se 1 iee 0 dte 1
    Ok_ops: move=1, reserve=1 sense_dev=1, sense_ele=1, unload_any=1, sense_dev_range=1
    Device characteristics: two_d=0, is_120=0, fake_mte=0, fake_iee=0, one_target=0
    State of barcode reader: present
    Display: none
    Dte 1: target * lun * name tape1 (raw device name /dev/sg2)
    Warning: bus info unknown or drive not installed
    tape1:
    Device type: tape
    Model: [none]
    Serial number: [none]
    In service: yes
    Library: lib1
    DTE: 1
    Automount: yes
    Error rate: 8
    Query frequency: 1024KB (1048576 bytes) (from driver)
    Debug mode: no
    Blocking factor: (default)
    Max blocking factor: (default)
    Current tape: 1
    Use list: all
    Drive usage: none
    Cleaning required: no
    UUID: 371b7330-ffe9-102a-8a2e-0015175b4124
    Attachment 1:
    Host: core-ora
    Raw device: /dev/sg2
    Connection type: SCSI
    Inquiry data:
    Vendor: QUANTUM
    Product: DLT-S4
    Firmware: 1414
    Serial number: 236ER89482
    Tape state: online
    Hardware compression: on
    Last read was: uncompressed
    Maximum block size: 1048576
    Remaining tape: 73728 (uncompressed) blocks (75.50MB
    Looking for comments/suggestions.
    Thanks!

    I created a file system backup job. The backup job is pending with the status "pending resource availability".
    ob> lsjob all long
    7:
    Type: dataset Dataset1
    Level: full
    Family: MediaFamily1
    Encryption: off
    Scheduled time: 05/15.14:00
    State: processed; host backup(s) scheduled
    Priority: 0
    Privileged op: no
    Run on host: (administrative server)
    Attempts: 1
    7.1:
    Type: backup core-ora
    Level: full
    Family: MediaFamily1
    Encryption: off
    Scheduled time: 05/15.14:00
    State: pending resource availability
    Priority: 0
    Privileged op: no
    Run on host: core-ora
    Attempts: 0
    ob> catxcr --level 0 7
    Error: unable to open transcript for 7 - No such file or directory
    Error: unable to open transcript for 7.1 - No such file or directory
    I also tried to do some operations on the tape device (load, identify, import… ) but the commands take forever to finish and return with errors.
    ob> lsvol library lib1 long
    Inventory of library lib1:
    in mte: vacant
    in 1: barcode JK1112, oid 100
    in 2: barcode JK1111, oid 101
    in 3: barcode JK1113, oid 102
    in 4: vacant
    in 5: vacant
    in dte: vacant
    ob> loadvol drive tape1 mount write force 1          > (took around 10 mins to return)
    Error: can't execute command - drive didn't come online; check configuration/hardware
    I could see that the drive loaded into the tape immediately after issuing the command (from the Dxi GUI) but the loadvol command returned after a long time.
    The log file for the obrobotd when the above command was issued shows:
    [root@core-ora lib1]# pwd
    /usr/local/oracle/backup/admin/log/device/lib1
    [root@core-ora lib1]# tail -f obrobotd
    2008/05/15.14:53:05 ***0 wst__dev_state...
    2008/05/15.14:53:05 ***0 wst__exec: op=0 (nop), buf=0x0, count=1 (0x1), parm=0x0
    2008/05/15.14:53:05 ioctl_op=0x3, to=300, datalen=0x0, buf=0x0, cdb: 00 00 00 00 00 00 tur
    2008/05/15.14:53:05 ***0 wst__exec: rval=0, status.code/resid/checks=0x0/0x0/0x0
    2008/05/15.14:53:05 ***0 wst__dev_state: no sense, status 0x0
    2008/05/15.14:53:05 ***0 wst__exec: op=2 (sense), buf=0x54a970, count=32 (0x20), parm=0x0
    2008/05/15.14:53:05 ioctl_op=0x5, to=30, datalen=0x20, buf=0x54a970, cdb: 03 00 00 00 20 00 sense, cnt=32
    2008/05/15.14:53:05 ***0 wst__exec: rval=0, status.code/resid/checks=0x0/0x0/0x0
    2008/05/15.14:53:05 ***0 wst__get_sense(int) cmd = 0, sense data:
    2008/05/15.14:53:05 70 00 00 00 00 00 00 18 00 00 00 00 00 00 00 00
    00 00 00 00 00 00 00 00 00 0C 00 80 00 00 00 00
    ec=0, sk=no sense, asc=0, ascq=0, rem=0x30020000 (805437440)
    flags: (none)
    2008/05/15.14:53:05 ***0 wst__exec: op=7 (readpos), buf=0x7fbfffe1d0, count=20 (0x14), parm=0x0
    2008/05/15.14:53:05 ioctl_op=0x5, to=180, datalen=0x14, buf=0x54aa78, cdb: 34 00 00 00 00 00 00 00 00 00 read_pos
    2008/05/15.14:53:05 ***0 wst__exec, raw position: B0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    00 00 00 00
    2008/05/15.14:53:05 ***0 wst__exec: rval=0, status.code/resid/checks=0x0/0x10/0x0
    2008/05/15.14:53:05 ***0 wst__exec: op=18 (log_sense), buf=0x54a9f0, count=44 (0x2c), parm=0x3e
    2008/05/15.14:53:05 ioctl_op=0x5, to=180, datalen=0x2c, buf=0x54a9f0, cdb: 4D 00 3E 00 00 00 00 00 2C 00 log_sense, cnt=44, pc=0, page_code=0x3e
    2008/05/15.14:53:05 ***0 wst__exec: op=2 (sense), buf=0x54a970, count=32 (0x20), parm=0x0
    2008/05/15.14:53:05 ioctl_op=0x5, to=30, datalen=0x20, buf=0x54a970, cdb: 03 00 00 00 20 00 sense, cnt=32
    2008/05/15.14:53:05 ***0 wst__exec: rval=0, status.code/resid/checks=0x0/0x0/0x0
    2008/05/15.14:53:05 ***0 wst__get_sense(int) cmd = 4D, sense data:
    2008/05/15.14:53:05 70 00 05 00 00 00 00 18 00 00 00 00 24 00 00 00
    00 00 00 00 00 00 00 00 00 0C 00 80 00 00 00 00
    ec=0, sk=ill req, asc=24, ascq=0, rem=0x30020000 (805437440)
    error is: illegal request (OB scsi device driver)
    flags: (none)
    2008/05/15.14:53:05 ***0 wst__exec: rval=-1, status.code/resid/checks=0x20008113/0x0/0x0
    2008/05/15.14:53:05 ***0 wst__dev_state: state=0x1 (online, not at bot)
    2008/05/15.14:53:05 ***0 wst__wait: from dev_state: state=0x1, code=0x0
    Thanks!

  • Delete File From Mounted Volume

    Hey,
    I am trying to delete the "Calendar Cache" files on both my laptop PowerBook G4 and the Mac Pro Quad that I sync my calendars with. I am using ChronoSync and the individual calendars sync fine, but there is a little housekeeping needed with the cache files: they need to be deleted on both systems in order to "refresh" the views of the calendars.
    So after the sync of calendars, I have the software initiating an AppleScript that deletes both. Here's the script:
    (* PowerBook Files / delete cache file *)
    (* Please note that both systems have the same username. This may cause a conflict *)
    tell application "Finder"
        activate
        tell application "Finder" to delete file "Calendar Cache" of folder "Calendars" of folder "Library" of disk "useranthony"
    end tell
    (* Mac Pro Quad / delete cache file *)
    tell application "Finder"
        mount volume "afp://10.10.10.1/anthonyabraira"
        tell application "Finder" to delete file "Calendar Cache" of folder "Calendars" of folder "Library" of disk "/volumes/useranthony"
    end tell
    I am having trouble addressing a deletion on the networked Mac Pro Quad.

    Why send it to the Trash? Just delete it:
    (* PowerBook Files / delete cache file *)
    try
            do shell script "rm -rf /Library/Calendars/Calendar\\ Cache"
    end try
    You may need a delay for the Mac Pro Quad to mount:
    (* Mac Pro Quad / delete cache file *)
    -- the mount and then the delay
    delay 4
    try
            do shell script "rm -rf /THE-CORRECT/PATH-HERE/Library/Calendars/Calendar\\ Cache"
    end try
    Tom
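Putting Tom's pieces together with the original script, the second half could be sketched as below; the remote path is hypothetical (substitute the real location of the cache on the mounted share), and the delay may need adjusting:

```applescript
(* Mac Pro Quad / delete cache file: mount first, wait, then delete *)
mount volume "afp://10.10.10.1/anthonyabraira"
delay 4 -- allow the share time to mount
try
	-- hypothetical path; replace with the cache's real location on the share
	do shell script "rm -f '/Volumes/anthonyabraira/Library/Calendars/Calendar Cache'"
end try
```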

  • Secured WebDAV Mounted Volume Authorization Issues

    I use a secure WebDAV mounted volume from myDisk.se and up until the latest Security Update have had zero issues being able to manipulate files and folders as I would on a normal volume. However, since the installation of the Security Update (2009-004 (PowerPC) 1.0) I find weird things happening with this mounted volume:
    1) I am able to mount the secured WebDAV share using my security credentials.
    2) I can create a default "untitled" folder but when I try to change its name, the WebDAV authorization dialog pops up and despite entering the same credentials (why, I am not sure as the volume has already been properly credentialed in order to be mounted), access is denied.
    3) Trying to create a file within a folder on the mounted WebDAV volume I previously created pre-update causes the same authorization issue.
    I have no other WebDAV shares I can try to mount from any other companies so I am not sure if this is a myDisk issue or one borne from the Security Update. I am not a .Mac/MobileMe user and that info is not filled out in System Preferences. The internal hard drive has been meticulously maintained with Disk and Permissions repair being run both before and after each and every software update installed. Likewise, the volume's structure is also checked both before and after and shows no need for repairs.
    Any ideas? Perhaps there is a corrupted file somewhere that's affecting the authorizations needed by this third-party WebDAV volume?
    The machine that has this problem is the last model iBook G4/1.33GHz 12" display, 1.5GB RAM, and a 100GB 5400rpm HD which replaced the stock OEM 40GB 4200rpm drive about one year ago.
    I'm not willing to do an Archive and Install at this point, as the loss of WebDAV access to my online volume is not critical. Inconvenient as heck, but not to the point where I'm willing (or able) to stop my normal work to spend the hours it will take to get WebDAV access back.
    Thanks in advance for any insights.

    Same problem here with WebDAV: I can't mount my iDisk from the university network on a Mac Pro running 10.5.3 (although it mounts fine from my home network on both an iBook and a PMG5 running 10.5.3). Everything was fine with 10.5.2 and I already re-installed the 10.5.3 combo. There are other bugs as well with the .Mac prefs (it keeps crashing; sometimes it shows the available space on the iDisk but still no mounting, with error -35 or -8086), but .Mac sync is OK.
    Jun 11 12:34:21 webdavfs_agent[579]: mounting as authenticated user
    Jun 11 12:34:22 kernel[0]: webdav server: http://idisk.mac.com/[username]/: connection is dead
    Jun 11 12:34:22 KernelEventAgent[75]: tid 00000000 received VQ_DEAD event (32)
    Jun 11 12:34:22 kernel[0]: webdav_sendmsg: sock_connect() = 61
    Jun 11 12:34:22 KernelEventAgent[75]: tid 00000000 type 'webdav', mounted on '/Volumes/[username]', from 'http://idisk.mac.com/[username]/', dead
    Jun 11 12:34:22 kernel[0]: webdav_sendmsg: sock_connect() = 61
    Jun 11 12:34:22 KernelEventAgent[75]: tid 00000000 found 1 filesystem(s) with problem(s)
    Jun 11 12:34:22 kernel[0]: webdav_sendmsg: sock_connect() = 61
    Jun 11 12:34:52: --- last message repeated 1 time ---

  • Change ownership from "system" on mounted volume?

    I have an external firewire drive with 3 partitions all of which are mounted on an iMac, and which until recently all had the same ownership and permission settings under my admin account. One of the volumes (the one storing all the users' iTunes songs) somehow changed ownership to "system" and group to "wheel". I can no longer access the volume nor can the other user accounts on the iMac, although it shows up as a mounted volume when viewed in Disk Utility. "Repair permissions" is unavailable for this volume in Disk Utility. The other two volumes are unaffected and retain the original ownership settings. I'd like to reset ownership from "system" to my admin account, but do not know how to do so as apparently it needs to be done through unix commands using Terminal. What do I need to do?
    iMac   Mac OS X (10.4.8)  

    "...under Ownership & Permissions, click on the lock, enter your password, and change the Owner to you, with R&W access and the Group to admin, also w/R&W access, and click on Apply to enclosed items. Click the lock and close the Info window. No need to use Unix commands in the Terminal app."
    I first tried that approach but unfortunately the procedure does not work. Under Ownership & Permissions it says "You have No Access". I can click on the lock and select my name under Details: Owner, but once I click to relock, Owner just reverts back to "system".
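For the record, the Terminal route the poster suspects is needed would look roughly like the first two (commented) lines below; the volume name is a placeholder, and chown requires root, hence sudo. The runnable part of the sketch demonstrates only the permission reset, on a scratch directory:

```shell
# Hypothetical ownership reset for the real volume (needs sudo; the
# volume name "iTunesMusic" is a placeholder):
#   sudo chown -R youradminname:admin "/Volumes/iTunesMusic"
#   sudo chmod -R u+rwX,g+rX "/Volumes/iTunesMusic"

# Safe demonstration of the chmod part on a scratch directory:
scratch=$(mktemp -d)
chmod -R u+rwX,g+rX "$scratch"
if [ -r "$scratch" ] && [ -w "$scratch" ]; then
  result="readable and writable"
else
  result="permissions not applied"
fi
echo "$result"
```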
