SAN boot disk in a cluster node?
As far as I remember, Sun Cluster requires a local boot disk.
Is it possible to use a SAN boot disk? Could this configuration work but simply be unsupported by Sun, or is there a technical limitation?
I am thinking of a low-cost configuration with two diskless X2100s + HBAs and SAN storage. Possible?
thanks
-- leon
As far as I remember, SC requires local boot disk. Is it possible to use the SAN boot disk? Can this configuration work being just not supported by SUN or there is some technical limitation?
The rule for boot disks goes like this:
Any local storage device, supported by the base platform as a boot device, can be used as a boot device for the server in the cluster as well. A shared storage device cannot be used as a boot device for a server in a cluster. It is recommended to mirror the root disk. Multipathed boot is supported with Sun Cluster when the drivers associated with SAN 4.3 (or later) are used in conjunction with an appropriate storage device (i.e. the local disks on a SF v880 or a SAN connected fiber storage device).
So your boot disk can be in a SAN as long as the base platform supports it as a boot disk and it is not configured as a shared LUN in the SAN (i.e. it is visible only to the node that uses it as the boot disk).
I am thinking of low price configuration with two diskless X2100s + HBAs and SAN storage. Possible?
You need to check the support matrix of the storage device you plan to use to see whether it is supported as a boot device for the X2100 + HBA. If the answer is yes, you just have to make sure that this LUN is visible only to that X2100.
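One quick way to verify the "visible to only one node" requirement is to diff the device lists seen by each node. A minimal sketch with made-up device and file names (on Solaris the list could come from `echo | format`; `comm` needs sorted input):

```shell
# Hypothetical sample data standing in for each node's captured disk list:
printf 'c1t0d0\nc2t5d0\n' > node1.disks
printf 'c1t1d0\nc2t5d0\n' > node2.disks
# Lines common to both lists = LUNs zoned to both nodes; a boot LUN must
# appear in exactly one node's list, never here:
comm -12 node1.disks node2.disks
```

Here the shared `c2t5d0` would be fine as quorum/data storage but must not be used as either node's boot disk.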
Greets
Thorsten
Similar Messages
-
Boot up from other node's boot disk
Hi, at the moment I have a problem booting one of my two cluster nodes, because its boot disk seems to be damaged. Is it possible to boot up from the other node's boot disk? If yes, how do I do it? If not, are there any other ideas besides replacing the damaged boot disk? Because the machines are in a testbed, I don't think my company wants to spend much money to replace the disk on such short notice =)
The version is Solaris 8 and Sun Cluster 3.0.
Hi,
Sorry to disappoint you, but with Sun Cluster you cannot have one node boot from the boot disk of the other node. Do you have a backup of the boot disk?
Your best option is to replace it, I guess.
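If a backup exists, the classic Solaris recovery flow is ufsdump/ufsrestore plus installboot. A dry-run sketch (real command names, but the device paths and the sun4u platform path are assumptions; DRY_RUN=1 only prints each command, since these tools exist only on Solaris):

```shell
DRY_RUN=1
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "+ $*"; else "$@"; fi; }
# 1. While the node is healthy, dump the root filesystem to tape or a file:
run ufsdump 0uf /dev/rmt/0 /dev/rdsk/c0t0d0s0
# 2. After replacing the disk, boot from CD/network into single user, then:
run newfs /dev/rdsk/c0t0d0s0
run mount /dev/dsk/c0t0d0s0 /mnt
run ufsrestore rf /dev/rmt/0          # run from within /mnt
# 3. Reinstall the boot block (sun4u path shown as an example):
run installboot /usr/platform/sun4u/lib/fs/ufs/bootblk /dev/rdsk/c0t0d0s0
```

Without a dump of the root filesystem, though, replacing the disk and reinstalling is indeed the only option.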
Kristien -
ASM disk busy 99% only on one cluster node
Hello,
We have a three-node Oracle RAC cluster. Our DBAs called us and said they are getting OEM critical alerts for an ASM disk on one node only. I checked, and the SAN-attached drive does not show the same high utilization on either of the other two nodes. I checked the hardware and it seems fine. If the issue were with the SAN-attached disk itself, we would be seeing the same errors on all three nodes, since they share the same disks. The system crashed last week (there is an alert dump in the +ASM directories), and the disk has been busy ever since. I asked if the DBA had reviewed the ADDM reports; he said he had, and that there were no suspicious-looking entries that would lead us to the root cause. CPU utilization is fine. I am not sure where to look at this point, and any help pointing me in the right direction would be appreciated. They do use RMAN; could there be a backup running using those disks on only one node? Has anyone ever seen this before?
Thank you,
Benita Ulisano
Unix/SAN Team
Chicago Public Schools
[email protected]
Hi Harish,
Thank you for responding. To answer your question: yes, the disks are all of the same spec and are shared among the three cluster nodes. The ASM disk sdw1 is the one with the issue.
Problem Node: coefsdb02
three nodes in RAC cluster
coefsdb01, coefsdb02, coefsdb03
iostat results for all three nodes - same disk
coefsdb01
Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
sdw1 0.00 1.71 0.12 0.58 1.27 18.78 28.63 0.01 13.38 1.75 0.12
coefsdb02
sdw1 0.11 0.02 4.00 0.62 305.84 21.72 70.93 2.96 12.58 211.95 97.88
coefsdb03
sdw1 0.21 0.01 4.70 0.33 224.05 13.52 47.22 0.05 10.11 6.15 3.09
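To pull the busy devices out of output like the above automatically, a small awk filter over the extended iostat columns works. A minimal sketch using the sample numbers from this thread (plain awk, not Oracle tooling; note the 212 ms svctm on coefsdb02 points at the device/path, not CPU):

```shell
# Sample lines mimicking `iostat -x` extended output as posted above:
iostat_sample='Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
sdw1 0.11 0.02 4.00 0.62 305.84 21.72 70.93 2.96 12.58 211.95 97.88
sdx1 0.00 1.71 0.12 0.58 1.27 18.78 28.63 0.01 13.38 1.75 0.12'

# Flag devices whose %util (last column) exceeds 90:
printf '%s\n' "$iostat_sample" | awk 'NR > 1 && $NF + 0 > 90 {
    printf "%s busy: %s%% util, svctm %s ms\n", $1, $NF, $(NF-1)
}'
# prints: sdw1 busy: 97.88% util, svctm 211.95 ms
```

Run periodically against live `iostat -x` output, this makes it easy to see whether the busy interval lines up with an RMAN backup window on that one node.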
The dba(s) run RMAN backups, but only on coefsdb01.
Benita -
We have a 4-node Windows 2003 file share cluster. I logged onto one of the nodes and found a lot of SAN-connected disks that show 'Unallocated' in Disk Management, as below.
Could someone please advise whether these disks are unused and reclaimable? What I heard from the administrator is that this is the default behavior of the cluster, and the disks will be in use by a different node in the same cluster. If so, is there an easier way to identify which nodes are using these disks? It appears as though these disks are mapped to the server but not being used. Many thanks.
As expected. Things are a bit clearer in current versions of Windows Server, but back in the 2003 days, that was how a shared disk was shown on the nodes that did not own it. If you look at Disk Management on each node in the cluster, you will see the same number of disks on every node. On the node that owns a disk, you will see it represented as you would expect. On the nodes that do not own the disk, you will see it displayed as you have shown in your screenshot.
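For the "which node is using which disk" question, the 2003-era `cluster.exe` command-line tool lists each resource with its current owner node. A dry-run sketch (DRY_RUN=1 only prints the commands, since cluster.exe exists only on the cluster nodes themselves and would be run in cmd.exe there):

```shell
DRY_RUN=1
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "+ $*"; else "$@"; fi; }
# All resources: the Physical Disk entries show which node owns each disk.
run cluster res
# Resource groups with their owner node and state:
run cluster group
```

That output is usually quicker than logging on to each node and eyeballing Disk Management.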
. : | : . : | : . tim -
UCS 1.4(3M) BIOS issue? No network/SAN boot when diskless
Recreated faithfully on multiple blades and chassis with VIC cards. I have many SAN boot nodes in production and testing, and after upgrading firmware this week, any time I apply my storage policy of "diskless" to a blade I lose the ability to boot from the network. After much effort to work around it, it comes down to this: no matter what I do with the diskless storage policy, the configured boot order never becomes the actual boot order. In the BIOS all network boot options/references are invisible; I cannot enable them or add them to the options. Upon removing the diskless storage profile and applying any other that I have, I see in FST that it powers up internal storage, starts up and updates pre-boot environments, and other related things that connect the dots. Returning to the BIOS, all references and options have returned; as expected, the actual boot order screen updates to match the configured order, the HBAs log in to the fabric right away, and boot is fine. I can faithfully replicate this across many, many blades, but they are all the same model with the same mezzanine running 1.4.3, so I am curious whether anyone else is seeing this, particularly now with the new version out, as I have been working the last few weeks without issues, which leads me towards the firmware.
In a nutshell, I see that going diskless actually shuts down the storage/RAID controller. I can see that clearly, but there is no BIOS access to it, and I am not sure what else is affected or why it also hits the network and HBA boot options, since I cannot see; it is shutting down more than just low-powering the drive-side RAID. I use different profiles because some blades have had their disks removed and some retain them. Moving a SAN boot service profile dynamically around multiple chassis will fail to associate, with errors on some blades, due to the disk setup and all that. On some I have the disks pulled and went diskless based on documentation and best practices, as this customer needed several SAN-boot-only blades in that pool. But yes: if I use a storage profile allowing "any disk config" for association, whether a blade has disks, no disks, or RAIDed drives, replacing the diskless policy in the same service profile, it faithfully follows my boot order and boots via the HBAs across multiple blades, up to about 8 or 10 as of today... B200 M2s with VICs.
thx
dave
My UCS Manager firmware and adapter are at 1.3.1p,
the boot policy has been set to Boot to SAN,
I checked "Hardware and Software Interoperability Matrix for UCS System Release 1.3.1"
It seems 1.3.1 can only support "Windows 2008 R2 x64 w/ Hyper-V",
so I updated the UCS Manager BIOS and the adapter firmware to 2.0, but the I/O module is still at 1.3.1p.
I can install "Windows 2008 R2 x64" successfully...
Now, my concern is that I have mixed component software versions on my UCS blade (UCS Manager 1.3.1, I/O module 1.3.1, UCS BIOS 2.0.1 and adapter 2.0.1).
Will the mixed component software versions cause other issues? Can anyone give me some suggestions?
Thanks a lot -
AWR report problem after cluster node switch.
Hello all. I have a strange problem; can anyone advise what to do?
I have an Oracle DB (Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 on Solaris x86_64). We have two server nodes and one shared storage array attached to them; the DB runs on node1 and, if node1 dies, the DB is switched to node2. A classic cluster.
Some time ago we tested this switching: I shut down the DB, switched it to node2, and started it up there (the ORACLE_HOMEs are identical). Everything was OK, so I switched it back to node1. But now I can't run awrrpt.sql or awrinfo.sql; it gives an error like this:
Using the report name awrinfo.txt
No errors.
create or replace package body AWRINFO_UTIL as
ERROR at line 1:
ORA-03113: end-of-file on communication channel
No errors.
ERROR:
ORA-03114: not connected to ORACLE
And in alert log:
ORA-07445: exception encountered: core dump [SIGSEGV] [Address not mapped to object] [509] [] [] []
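As a side note, when an alert log starts filling with ORA-07445 entries, tallying the distinct ORA- errors around the crash can help narrow things down before diving into trace files. A minimal sketch over assumed log content (plain awk, not an Oracle utility):

```shell
# Hypothetical alert-log excerpt standing in for alert_<SID>.log:
alert_sample='Mon Jun 05 13:23:10 2006
ORA-07445: exception encountered: core dump [SIGSEGV] [Address not mapped to object] [509] [] [] []
Mon Jun 05 13:25:41 2006
ORA-07445: exception encountered: core dump [SIGSEGV] [Address not mapped to object] [509] [] [] []'

# Count occurrences of each distinct ORA-NNNNN code:
printf '%s\n' "$alert_sample" | awk 'match($0, /ORA-[0-9]+/) {
    count[substr($0, RSTART, RLENGTH)]++
} END { for (e in count) printf "%s x%d\n", e, count[e] }'
# prints: ORA-07445 x2
```

Against the real log, point the same pipeline at `$ORACLE_BASE/admin/<SID>/bdump/alert_<SID>.log` (path varies by install).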
I tried to drop AWR with catnoawr.sql and recreate it with catawrtb.sql; everything seems to be fine, but I still can't run awrrpt.sql or awrinfo.sql (same error).
Anyone familiar with such a problem?
Thanks for any advice.
I understand that I provided a less than satisfactory amount of info.
So here is more.
I am installing a two-node cluster, and during scinstall one of the nodes is rebooted
and goes through (what I assume to be) an initial configuration. At the very end of the
boot process there is a message:
obtaining access to all attached disks
At this point the boot disk activity LED is lit constantly. After some longish timeout the following message is printed to the console:
NOTICE: /pci@0,0/pci1014,2dd@1f,2: port 0: device reset
WARNING: /pci@0,0/pci1014,2dd@1f,2/disk@0,0 (sd1):
Error for Command: read(10) Error Level: Retryable
Requested Block: 135323318 Error Block: 135323318
Vendor: ATA Serial Number:
Sense Key: No Additional Sense
ASC: 0x0 (no additional sense info), ASCQ: 0x0, FRU: 0x0
and the disk activity LED is turned off. After that nothing more happens. The system isn't
hard-hung, since the keyboard is working and it responds to ping, but other than that
nothing seems to be functioning.
I understand that diagnosing such a problem isn't easy, but I am willing to invest some
time into getting it working. I would really appreciate some help with this issue.
Regards,
Cyril -
How to use SVM metadevices with cluster - sync metadb between cluster nodes
Hi guys,
I feel like I've searched the whole internet regarding this matter but found nothing, so hopefully someone here can help me.
Situation:
I have a running server with Sol10 U2. SAN storage is attached to the server but without any virtualization in the SAN network.
The virtualization is done by Solaris Volume Manager.
The customer has decided to extend the environment with a second server to build a cluster. According to our standards we
have to use Symantec Veritas Cluster, but I think that for my question it doesn't matter which cluster software is used.
The SVM configuration is nothing special. The internal disks are configured with mirroring, the SAN LUNs are partitioned via format
and each slice is a meta device.
d100 p 4.0GB d6
d6 m 44GB d20 d21
d20 s 44GB c1t0d0s6
d21 s 44GB c1t1d0s6
d4 m 4.0GB d16 d17
d16 s 4.0GB c1t0d0s4
d17 s 4.0GB c1t1d0s4
d3 m 4.0GB d14 d15
d14 s 4.0GB c1t0d0s3
d15 s 4.0GB c1t1d0s3
d2 m 32GB d12 d13
d12 s 32GB c1t0d0s1
d13 s 32GB c1t1d0s1
d1 m 12GB d10 d11
d10 s 12GB c1t0d0s0
d11 s 12GB c1t1d0s0
d5 m 6.0GB d18 d19
d18 s 6.0GB c1t0d0s5
d19 s 6.0GB c1t1d0s5
d1034 s 21GB /dev/dsk/c4t600508B4001064300001C00004930000d0s5
d1033 s 6.0GB /dev/dsk/c4t600508B4001064300001C00004930000d0s4
d1032 s 1.0GB /dev/dsk/c4t600508B4001064300001C00004930000d0s3
d1031 s 1.0GB /dev/dsk/c4t600508B4001064300001C00004930000d0s1
d1030 s 5.0GB /dev/dsk/c4t600508B4001064300001C00004930000d0s0
d1024 s 31GB /dev/dsk/c4t600508B4001064300001C00004870000d0s5
d1023 s 512MB /dev/dsk/c4t600508B4001064300001C00004870000d0s4
d1022 s 2.0GB /dev/dsk/c4t600508B4001064300001C00004870000d0s3
d1021 s 1.0GB /dev/dsk/c4t600508B4001064300001C00004870000d0s1
d1020 s 5.0GB /dev/dsk/c4t600508B4001064300001C00004870000d0s0
d1014 s 8.0GB /dev/dsk/c4t600508B4001064300001C00004750000d0s5
d1013 s 1.7GB /dev/dsk/c4t600508B4001064300001C00004750000d0s4
d1012 s 1.0GB /dev/dsk/c4t600508B4001064300001C00004750000d0s3
d1011 s 256MB /dev/dsk/c4t600508B4001064300001C00004750000d0s1
d1010 s 4.0GB /dev/dsk/c4t600508B4001064300001C00004750000d0s0
d1004 s 46GB /dev/dsk/c4t600508B4001064300001C00004690000d0s5
d1003 s 6.0GB /dev/dsk/c4t600508B4001064300001C00004690000d0s4
d1002 s 1.0GB /dev/dsk/c4t600508B4001064300001C00004690000d0s3
d1001 s 1.0GB /dev/dsk/c4t600508B4001064300001C00004690000d0s1
d1000 s 5.0GB /dev/dsk/c4t600508B4001064300001C00004690000d0s0
The problem is the following:
The SVM configuration on the second server (cluster node 2) must be the same for the devices d1000-d1034.
Generally speaking, the metadb needs to be in sync.
- How can I manage this?
- Do I have to use disk sets?
- Will a copy of the md.cf/md.tab and an initialization with metainit do it?
It would be great to have several options for how one can manage this.
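On the copy-the-definitions idea: extracting the shared d1000-d1034 entries can be scripted, but note it only replicates the *definitions*; each host keeps its own metadbs, so this alone does not give safe concurrent access (disk sets are the supported mechanism for that). A sketch over the listing format shown above, emitting md.tab one-slice concat syntax ("dN 1 1 <device>"):

```shell
# Sample lines in the same format as the listing above (local mirror excluded):
mdcf_sample='d6 m 44GB d20 d21
d1000 s 5.0GB /dev/dsk/c4t600508B4001064300001C00004690000d0s0
d1033 s 6.0GB /dev/dsk/c4t600508B4001064300001C00004930000d0s4'

# Keep only d1000-d1099 (the SAN metadevices) and rewrite as md.tab entries:
printf '%s\n' "$mdcf_sample" | awk '$1 ~ /^d10[0-9][0-9]$/ {
    printf "%s 1 1 %s\n", $1, $4
}'
# Each output line can go into /etc/lvm/md.tab on node 2, then: metainit dN
```

Treat this as a sketch of one option, not a recommendation over disk sets.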
Thanks and regards,
Markus
Dear Tim,
Thank you for your answer.
I can confirm that Veritas Cluster doesn't support SVM by default. Of course they want to sell their own volume manager ;o).
But that wouldn't be the big problem. With SVM I expect the same behaviour as with VxVM if I use, or have to use, disk sets,
and for that I can write a custom agent.
My problem is not the cluster implementation. It's more a fundamental problem of syncing the SVM config for a set
of meta devices between two hosts. I'm far from implementing the devices into the cluster config as long as I don't know
how to let both nodes know about the devices.
Currently only the host that initialized the volumes knows about them. The second node doesn't know anything about the
devices d1000-d1034.
What I need to know in this state is:
- How can I "register" the already initialized meta devices d1000-d1034 on the second cluster node?
- Do I have to use disk sets?
- Can I just copy and paste the appropriate lines of md.cf/md.tab?
- Generally speaking: how can one configure SVM so that different hosts see the same meta devices?
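For the record, the supported mechanism for letting two hosts see the same metadevices is an SVM disk set (`metaset`). A dry-run sketch with assumed set and host names (DRY_RUN=1 prints each command instead of executing it, since metaset/metainit exist only on Solaris; note that adding a disk to a set repartitions it, so existing data must be backed up first):

```shell
DRY_RUN=1
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "+ $*"; else "$@"; fi; }
# Create the set and register both hosts (run once, from the first node):
run metaset -s orads -a -h node1 node2
# Add a SAN LUN to the set (metaset repartitions it: back up data first):
run metaset -s orads -a /dev/dsk/c4t600508B4001064300001C00004690000d0
# Recreate a metadevice inside the set:
run metainit -s orads d1000 1 1 /dev/dsk/c4t600508B4001064300001C00004690000d0s0
# Either node can now take ownership of the whole set:
run metaset -s orads -t
```

The set's state databases live on the member disks, so both registered hosts see the same configuration; only the set owner can access the devices at any given time.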
Hope that someone can help me!
Thanks,
Markus -
Reformatting a XServe G5 Cluster Node
I got my hands on a pair of G5 cluster nodes (the ones with a single drive bay and no optical drive) and I need to install Mac OS X Server on them. I tried removing the hard drive and using my MacBook Pro to install to it, but I can't install Mac OS X using the Apple Partition Map, since I am working from an Intel machine.
The G5 Cluster node has no video card so I can't even plug in my external reader and install it from there.
A little help, please? I am not a server guy, I'm a video guy, but this happens to have fallen into my hands.
Thanks,
Charles
Hey Chuck,
You can install whatever partition map you need from the OS X Server installer. Before you begin the installation, go to "Utilities" and open Disk Utility.
Select the server's hard drive (be careful NOT to select your MacBook's drive; you don't want to erase that). Go to the "Partition" tab. Select "1 Partition" from the Volume Scheme popup. Then click the "Options" button and select "Apple Partition Map". Hit Apply. This should ensure that your XServe has the appropriate boot record. -
After reboot, cluster node went into maintenance mode (CONTROL-D)
Hi there!
I have configured a 2-node cluster on 2 x Sun Enterprise 220R and a StorEdge D1000.
Each time I reboot any of the cluster nodes, I get the following error during boot-up:
The / file system (/dev/rdsk/c0t1d0s0) is being checked.
/dev/rdsk/c0t1d0s0: UNREF DIR I=35540 OWNER=root MODE=40755
/dev/rdsk/c0t1d0s0: SIZE=512 MTIME=Jun 5 15:02 2006 (CLEARED)
/dev/rdsk/c0t1d0s0: UNREF FILE I=1192311 OWNER=root MODE=100600
/dev/rdsk/c0t1d0s0: SIZE=96 MTIME=Jun 5 13:23 2006 (RECONNECTED)
/dev/rdsk/c0t1d0s0: LINK COUNT FILE I=1192311 OWNER=root MODE=100600
/dev/rdsk/c0t1d0s0: SIZE=96 MTIME=Jun 5 13:23 2006 COUNT 0 SHOULD BE 1
/dev/rdsk/c0t1d0s0: LINK COUNT INCREASING
/dev/rdsk/c0t1d0s0: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
In maintenance mode I do:
# fsck -y -F ufs /dev/rdsk/c0t1d0s0
and it managed to correct the problem... but the problem occurred again after each reboot, on each cluster node!
I have installed Sun Cluster 3.1 on Solaris 9 SPARC.
How can I get rid of it?
Any ideas?
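One thing worth checking when a UFS root needs fsck after every reboot is whether UFS logging is enabled: with `logging` in the vfstab options column, the log is replayed at mount time instead of a full fsck being required after an unclean shutdown (and a cluster node that gets halted hard, e.g. on quorum loss, never unmounts cleanly). A minimal sketch over sample vfstab-style data, not your real /etc/vfstab:

```shell
# Hypothetical vfstab entries (fields: block dev, raw dev, mount point,
# FS type, fsck pass, mount-at-boot, mount options):
vfstab_sample='/dev/dsk/c0t1d0s0 /dev/rdsk/c0t1d0s0 / ufs 1 no -
/dev/dsk/c0t1d0s6 /dev/rdsk/c0t1d0s6 /export ufs 2 yes logging'

# Report the logging state of each UFS filesystem:
printf '%s\n' "$vfstab_sample" | awk '$4 == "ufs" {
    printf "%-10s %s\n", $3, ($7 ~ /logging/) ? "logging on" : "logging OFF"
}'
```

If logging is off, adding it to the options column and remounting is a low-risk change; it does not fix the underlying unclean shutdowns, but it stops the boot-time fsck drops into maintenance mode.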
Brgds,
Sergej
Hi, I get this:
112941-09 SunOS 5.9: sysidnet Utility Patch
116755-01 SunOS 5.9: usr/snadm/lib/libadmutil.so.2 Patch
113434-30 SunOS 5.9: /usr/snadm/lib Library and Differential Flash Patch
112951-13 SunOS 5.9: patchadd and patchrm Patch
114711-03 SunOS 5.9: usr/sadm/lib/diskmgr/VDiskMgr.jar Patch
118064-04 SunOS 5.9: Admin Install Project Manager Client Patch
113742-01 SunOS 5.9: smcpreconfig.sh Patch
113813-02 SunOS 5.9: Gnome Integration Patch
114501-01 SunOS 5.9: drmproviders.jar Patch
112943-09 SunOS 5.9: Volume Management Patch
113799-01 SunOS 5.9: solregis Patch
115697-02 SunOS 5.9: mtmalloc lib Patch
113029-06 SunOS 5.9: libaio.so.1 librt.so.1 and abi_libaio.so.1 Patch
113981-04 SunOS 5.9: devfsadm Patch
116478-01 SunOS 5.9: usr platform links Patch
112960-37 SunOS 5.9: patch libsldap ldap_cachemgr libldap
113332-07 SunOS 5.9: libc_psr.so.1 Patch
116500-01 SunOS 5.9: SVM auto-take disksets Patch
114349-04 SunOS 5.9: sbin/dhcpagent Patch
120441-03 SunOS 5.9: libsec patch
114344-19 SunOS 5.9: kernel/drv/arp Patch
114373-01 SunOS 5.9: UMEM - abi_libumem.so.1 patch
118558-27 SunOS 5.9: Kernel Patch
115675-01 SunOS 5.9: /usr/lib/liblgrp.so Patch
112958-04 SunOS 5.9: patch pci.so
113451-11 SunOS 5.9: IKE Patch
112920-02 SunOS 5.9: libipp Patch
114372-01 SunOS 5.9: UMEM - llib-lumem patch
116229-01 SunOS 5.9: libgen Patch
116178-01 SunOS 5.9: libcrypt Patch
117453-01 SunOS 5.9: libwrap Patch
114131-03 SunOS 5.9: multi-terabyte disk support - libadm.so.1 patch
118465-02 SunOS 5.9: rcm_daemon Patch
113490-04 SunOS 5.9: Audio Device Driver Patch
114926-02 SunOS 5.9: kernel/drv/audiocs Patch
113318-25 SunOS 5.9: patch /kernel/fs/nfs and /kernel/fs/sparcv9/nfs
113070-01 SunOS 5.9: ftp patch
114734-01 SunOS 5.9: /usr/ccs/bin/lorder Patch
114227-01 SunOS 5.9: yacc Patch
116546-07 SunOS 5.9: CDRW DVD-RW DVD+RW Patch
119494-01 SunOS 5.9: mkisofs patch
113471-09 SunOS 5.9: truss Patch
114718-05 SunOS 5.9: usr/kernel/fs/pcfs Patch
115545-01 SunOS 5.9: nss_files patch
115544-02 SunOS 5.9: nss_compat patch
118463-01 SunOS 5.9: du Patch
116016-03 SunOS 5.9: /usr/sbin/logadm patch
115542-02 SunOS 5.9: nss_user patch
116014-06 SunOS 5.9: /usr/sbin/usermod patch
116012-02 SunOS 5.9: ps utility patch
117433-02 SunOS 5.9: FSS FX RT Patch
117431-01 SunOS 5.9: nss_nis Patch
115537-01 SunOS 5.9: /kernel/strmod/ptem patch
115336-03 SunOS 5.9: /usr/bin/tar, /usr/sbin/static/tar Patch
117426-03 SunOS 5.9: ctsmc and sc_nct driver patch
121319-01 SunOS 5.9: devfsadmd_mod.so Patch
121316-01 SunOS 5.9: /kernel/sys/doorfs Patch
121314-01 SunOS 5.9: tl driver patch
116554-01 SunOS 5.9: semsys Patch
112968-01 SunOS 5.9: patch /usr/bin/renice
116552-01 SunOS 5.9: su Patch
120445-01 SunOS 5.9: Toshiba platform token links (TSBW,Ultra-3i)
112964-15 SunOS 5.9: /usr/bin/ksh Patch
112839-08 SunOS 5.9: patch libthread.so.1
115687-02 SunOS 5.9:/var/sadm/install/admin/default Patch
115685-01 SunOS 5.9: sbin/netstrategy Patch
115488-01 SunOS 5.9: patch /kernel/misc/busra
115681-01 SunOS 5.9: usr/lib/fm/libdiagcode.so.1 Patch
113032-03 SunOS 5.9: /usr/sbin/init Patch
113031-03 SunOS 5.9: /usr/bin/edit Patch
114259-02 SunOS 5.9: usr/sbin/psrinfo Patch
115878-01 SunOS 5.9: /usr/bin/logger Patch
116543-04 SunOS 5.9: vmstat Patch
113580-01 SunOS 5.9: mount Patch
115671-01 SunOS 5.9: mntinfo Patch
113977-01 SunOS 5.9: awk/sed pkgscripts Patch
122716-01 SunOS 5.9: kernel/fs/lofs patch
113973-01 SunOS 5.9: adb Patch
122713-01 SunOS 5.9: expr patch
117168-02 SunOS 5.9: mpstat Patch
116498-02 SunOS 5.9: bufmod Patch
113576-01 SunOS 5.9: /usr/bin/dd Patch
116495-03 SunOS 5.9: specfs Patch
117160-01 SunOS 5.9: /kernel/misc/krtld patch
118586-01 SunOS 5.9: cp/mv/ln Patch
120025-01 SunOS 5.9: ipsecconf Patch
116527-02 SunOS 5.9: timod Patch
117155-08 SunOS 5.9: pcipsy Patch
114235-01 SunOS 5.9: libsendfile.so.1 Patch
117152-01 SunOS 5.9: magic Patch
116486-03 SunOS 5.9: tsalarm Driver Patch
121998-01 SunOS 5.9: two-key mode fix for 3DES Patch
116484-01 SunOS 5.9: consconfig Patch
116482-02 SunOS 5.9: modload Utils Patch
117746-04 SunOS 5.9: patch platform/sun4u/kernel/drv/sparcv9/pic16f819
121992-01 SunOS 5.9: fgrep Patch
120768-01 SunOS 5.9: grpck patch
119438-01 SunOS 5.9: usr/bin/login Patch
114389-03 SunOS 5.9: devinfo Patch
116510-01 SunOS 5.9: wscons Patch
114224-05 SunOS 5.9: csh Patch
116670-04 SunOS 5.9: gld Patch
114383-03 SunOS 5.9: Enchilada/Stiletto - pca9556 driver
116506-02 SunOS 5.9: traceroute patch
112919-01 SunOS 5.9: netstat Patch
112918-01 SunOS 5.9: route Patch
112917-01 SunOS 5.9: ifrt Patch
117132-01 SunOS 5.9: cachefsstat Patch
114370-04 SunOS 5.9: libumem.so.1 patch
114010-02 SunOS 5.9: m4 Patch
117129-01 SunOS 5.9: adb Patch
117483-01 SunOS 5.9: ntwdt Patch
114369-01 SunOS 5.9: prtvtoc patch
117125-02 SunOS 5.9: procfs Patch
117480-01 SunOS 5.9: pkgadd Patch
112905-02 SunOS 5.9: ippctl Patch
117123-06 SunOS 5.9: wanboot Patch
115030-03 SunOS 5.9: Multiterabyte UFS - patch mount
114004-01 SunOS 5.9: sed Patch
113335-03 SunOS 5.9: devinfo Patch
113495-05 SunOS 5.9: cfgadm Library Patch
113494-01 SunOS 5.9: iostat Patch
113493-03 SunOS 5.9: libproc.so.1 Patch
113330-01 SunOS 5.9: rpcbind Patch
115028-02 SunOS 5.9: patch /usr/lib/fs/ufs/df
115024-01 SunOS 5.9: file system identification utilities
117471-02 SunOS 5.9: fifofs Patch
118897-01 SunOS 5.9: stc Patch
115022-03 SunOS 5.9: quota utilities
115020-01 SunOS 5.9: patch /usr/lib/adb/ml_odunit
113720-01 SunOS 5.9: rootnex Patch
114352-03 SunOS 5.9: /etc/inet/inetd.conf Patch
123056-01 SunOS 5.9: ldterm patch
116243-01 SunOS 5.9: umountall Patch
113323-01 SunOS 5.9: patch /usr/sbin/passmgmt
116049-01 SunOS 5.9: fdfs Patch
116241-01 SunOS 5.9: keysock Patch
113480-02 SunOS 5.9: usr/lib/security/pam_unix.so.1 Patch
115018-01 SunOS 5.9: patch /usr/lib/adb/dqblk
113277-44 SunOS 5.9: sd and ssd Patch
117457-01 SunOS 5.9: elfexec Patch
113110-01 SunOS 5.9: touch Patch
113077-17 SunOS 5.9: /platform/sun4u/kernal/drv/su Patch
115006-01 SunOS 5.9: kernel/strmod/kb patch
113072-07 SunOS 5.9: patch /usr/sbin/format
113071-01 SunOS 5.9: patch /usr/sbin/acctadm
116782-01 SunOS 5.9: tun Patch
114331-01 SunOS 5.9: power Patch
112835-01 SunOS 5.9: patch /usr/sbin/clinfo
114927-01 SunOS 5.9: usr/sbin/allocate Patch
119937-02 SunOS 5.9: inetboot patch
113467-01 SunOS 5.9: seg_drv & seg_mapdev Patch
114923-01 SunOS 5.9: /usr/kernel/drv/logindmux Patch
117443-01 SunOS 5.9: libkvm Patch
114329-01 SunOS 5.9: /usr/bin/pax Patch
119929-01 SunOS 5.9: /usr/bin/xargs patch
113459-04 SunOS 5.9: udp patch
113446-03 SunOS 5.9: dman Patch
116009-05 SunOS 5.9: sgcn & sgsbbc patch
116557-04 SunOS 5.9: sbd Patch
120241-01 SunOS 5.9: bge: Link & Speed LEDs flash constantly on V20z
113984-01 SunOS 5.9: iosram Patch
113220-01 SunOS 5.9: patch /platform/sun4u/kernel/drv/sparcv9/upa64s
113975-01 SunOS 5.9: ssm Patch
117165-01 SunOS 5.9: pmubus Patch
116530-01 SunOS 5.9: bge.conf Patch
116529-01 SunOS 5.9: smbus Patch
116488-03 SunOS 5.9: Lights Out Management (lom) patch
117131-01 SunOS 5.9: adm1031 Patch
117124-12 SunOS 5.9: platmod, drmach, dr, ngdr, & gptwocfg Patch
114003-01 SunOS 5.9: bbc driver Patch
118539-02 SunOS 5.9: schpc Patch
112837-10 SunOS 5.9: patch /usr/lib/inet/in.dhcpd
114975-01 SunOS 5.9: usr/lib/inet/dhcp/svcadm/dhcpcommon.jar Patch
117450-01 SunOS 5.9: ds_SUNWnisplus Patch
113076-02 SunOS 5.9: dhcpmgr.jar Patch
113572-01 SunOS 5.9: docbook-to-man.ts Patch
118472-01 SunOS 5.9: pargs Patch
122709-01 SunOS 5.9: /usr/bin/dc patch
113075-01 SunOS 5.9: pmap patch
113472-01 SunOS 5.9: madv & mpss lib Patch
115986-02 SunOS 5.9: ptree Patch
115693-01 SunOS 5.9: /usr/bin/last Patch
115259-03 SunOS 5.9: patch usr/lib/acct/acctcms
114564-09 SunOS 5.9: /usr/sbin/in.ftpd Patch
117441-01 SunOS 5.9: FSSdispadmin Patch
113046-01 SunOS 5.9: fcp Patch
118191-01 gtar patch
114818-06 GNOME 2.0.0: libpng Patch
117177-02 SunOS 5.9: lib/gss module Patch
116340-05 SunOS 5.9: gzip and Freeware info files patch
114339-01 SunOS 5.9: wrsm header files Patch
122673-01 SunOS 5.9: sockio.h header patch
116474-03 SunOS 5.9: libsmedia Patch
117138-01 SunOS 5.9: seg_spt.h
112838-11 SunOS 5.9: pcicfg Patch
117127-02 SunOS 5.9: header Patch
112929-01 SunOS 5.9: RIPv2 Header Patch
112927-01 SunOS 5.9: IPQos Header Patch
115992-01 SunOS 5.9: /usr/include/limits.h Patch
112924-01 SunOS 5.9: kdestroy kinit klist kpasswd Patch
116231-03 SunOS 5.9: llc2 Patch
116776-01 SunOS 5.9: mipagent patch
117420-02 SunOS 5.9: mdb Patch
117179-01 SunOS 5.9: nfs_dlboot Patch
121194-01 SunOS 5.9: usr/lib/nfs/statd Patch
116502-03 SunOS 5.9: mountd Patch
113331-01 SunOS 5.9: usr/lib/nfs/rquotad Patch
113281-01 SunOS 5.9: patch /usr/lib/netsvc/yp/ypbind
114736-01 SunOS 5.9: usr/sbin/nisrestore Patch
115695-01 SunOS 5.9: /usr/lib/netsvc/yp/yppush Patch
113321-06 SunOS 5.9: patch sf and socal
113049-01 SunOS 5.9: luxadm & liba5k.so.2 Patch
116663-01 SunOS 5.9: ntpdate Patch
117143-01 SunOS 5.9: xntpd Patch
113028-01 SunOS 5.9: patch /kernel/ipp/flowacct
113320-06 SunOS 5.9: patch se driver
114731-08 SunOS 5.9: kernel/drv/glm Patch
115667-03 SunOS 5.9: Chalupa platform support Patch
117428-01 SunOS 5.9: picl Patch
113327-03 SunOS 5.9: pppd Patch
114374-01 SunOS 5.9: Perl patch
115173-01 SunOS 5.9: /usr/bin/sparcv7/gcore /usr/bin/sparcv9/gcore Patch
114716-02 SunOS 5.9: usr/bin/rcp Patch
112915-04 SunOS 5.9: snoop Patch
116778-01 SunOS 5.9: in.ripngd patch
112916-01 SunOS 5.9: rtquery Patch
112928-03 SunOS 5.9: in.ndpd Patch
119447-01 SunOS 5.9: ses Patch
115354-01 SunOS 5.9: slpd Patch
116493-01 SunOS 5.9: ProtocolTO.java Patch
116780-02 SunOS 5.9: scmi2c Patch
112972-17 SunOS 5.9: patch /usr/lib/libssagent.so.1 /usr/lib/libssasnmp.so.1 mibiisa
116480-01 SunOS 5.9: IEEE 1394 Patch
122485-01 SunOS 5.9: 1394 mass storage driver patch
113716-02 SunOS 5.9: sar & sadc Patch
115651-02 SunOS 5.9: usr/lib/acct/runacct Patch
116490-01 SunOS 5.9: acctdusg Patch
117473-01 SunOS 5.9: fwtmp Patch
116180-01 SunOS 5.9: geniconvtbl Patch
114006-01 SunOS 5.9: tftp Patch
115646-01 SunOS 5.9: libtnfprobe shared library Patch
113334-03 SunOS 5.9: udfs Patch
115350-01 SunOS 5.9: ident_udfs.so.1 Patch
122484-01 SunOS 5.9: preen_md.so.1 patch
117134-01 SunOS 5.9: svm flasharchive patch
116472-02 SunOS 5.9: rmformat Patch
112966-05 SunOS 5.9: patch /usr/sbin/vold
114229-01 SunOS 5.9: action_filemgr.so.1 Patch
114335-02 SunOS 5.9: usr/sbin/rmmount Patch
120443-01 SunOS 5.9: sed core dumps on long lines
121588-01 SunOS 5.9: /usr/xpg4/bin/awk Patch
113470-02 SunOS 5.9: winlock Patch
119211-07 NSS_NSPR_JSS 3.11: NSPR 4.6.1 / NSS 3.11 / JSS 4.2
118666-05 J2SE 5.0: update 6 patch
118667-05 J2SE 5.0: update 6 patch, 64bit
114612-01 SunOS 5.9: ANSI-1251 encodings file errors
114276-02 SunOS 5.9: Extended Arabic support in UTF-8
117400-01 SunOS 5.9: ISO8859-6 and ISO8859-8 iconv symlinks
113584-16 SunOS 5.9: yesstr, nostr nl_langinfo() strings incorrect in S9
117256-01 SunOS 5.9: Remove old OW Xresources.ow files
112625-01 SunOS 5.9: Dcam1394 patch
114600-05 SunOS 5.9: vlan driver patch
117119-05 SunOS 5.9: Sun Gigabit Ethernet 3.0 driver patch
117593-04 SunOS 5.9: Manual Page updates for Solaris 9
112622-19 SunOS 5.9: M64 Graphics Patch
115953-06 Sun Cluster 3.1: Sun Cluster sccheck patch
117949-23 Sun Cluster 3.1: Core Patch for Solaris 9
115081-06 Sun Cluster 3.1: HA-Sun One Web Server Patch
118627-08 Sun Cluster 3.1: Manageability and Serviceability Agent
117985-03 SunOS 5.9: XIL 1.4.2 Loadable Pipeline Libraries
113896-06 SunOS 5.9: en_US.UTF-8 locale patch
114967-02 SunOS 5.9: FDL patch
114677-11 SunOS 5.9: International Components for Unicode Patch
112805-01 CDE 1.5: Help volume patch
113841-01 CDE 1.5: answerbook patch
113839-01 CDE 1.5: sdtwsinfo patch
115713-01 CDE 1.5: dtfile patch
112806-01 CDE 1.5: sdtaudiocontrol patch
112804-02 CDE 1.5: sdtname patch
113244-09 CDE 1.5: dtwm patch
114312-02 CDE1.5: GNOME/CDE Menu for Solaris 9
112809-02 CDE:1.5 Media Player (sdtjmplay) patch
113868-02 CDE 1.5: PDASync patch
119976-01 CDE 1.5: dtterm patch
112771-30 Motif 1.2.7 and 2.1.1: Runtime library patch for Solaris 9
114282-01 CDE 1.5: libDtWidget patch
113789-01 CDE 1.5: dtexec patch
117728-01 CDE1.5: dthello patch
113863-01 CDE 1.5: dtconfig patch
112812-01 CDE 1.5: dtlp patch
113861-04 CDE 1.5: dtksh patch
115972-03 CDE 1.5: dtterm libDtTerm patch
114654-02 CDE 1.5: SmartCard patch
117632-01 CDE1.5: sun_at patch for Solaris 9
113374-02 X11 6.6.1: xpr patch
118759-01 X11 6.6.1: Font Administration Tools patch
117577-03 X11 6.6.1: TrueType fonts patch
116084-01 X11 6.6.1: font patch
113098-04 X11 6.6.1: X RENDER extension patch
112787-01 X11 6.6.1: twm patch
117601-01 X11 6.6.1: libowconfig.so.0 patch
117663-02 X11 6.6.1: xwd patch
113764-04 X11 6.6.1: keyboard patch
113541-02 X11 6.6.1: XKB patch
114561-01 X11 6.6.1: X splash screen patch
113513-02 X11 6.6.1: platform support for new hardware
116121-01 X11 6.4.1: platform support for new hardware
114602-04 X11 6.6.1: libmpg_psr patch
Is there a bundle to install, or do I have to install each patch separately?
Hi
I'm trying to install a cluster in a lab environment. I have two physical servers and would like to use them as cluster nodes. On one of these nodes I would like to install an iSCSI target server to use for sharing disks to the cluster itself; is this possible?
Because I did all the configurations, but after installing the cluster the iSCSI target server doesn't work anymore.
thanks
Bad news: you cannot do it with the Microsoft built-in solutions, because you do indeed need physical shared storage to make the Microsoft iSCSI target clustered. Something like on Robert Smit's blog here:
Clustering Microsoft iSCSI Target
https://robertsmit.wordpress.com/2012/06/26/clustering-iscsi-target-on-windows-2012-step-by-step/
...or here:
MSFT iSCSI Target in HA
https://technet.microsoft.com/en-us/library/gg232621(v=ws.10).aspx
...or very detailed walk thru here:
MSFT iSCSI Target in High Availability Mode
https://techontip.wordpress.com/2011/05/03/microsoft-iscsi-target-cluster-building-walkthrough/
Good news: you can take a third-party solution from one of various companies (below) and create HA iSCSI volumes on just a pair of nodes. See:
StarWind Virtual SAN
http://www.starwindsoftware.com/starwind-virtual-san-free
(this setup is FREE of charge, you just need to be an MCT, MVP or MCP to obtain your free 2-node key)
...to have a two-node HA setup.
Also SteelEye has similar one here:
SteelEye #SANLess Clusters
http://us.sios.com/products/datakeeper-cluster/
DataCore SANsymphony-V
http://www.datacore.com/products/SANsymphony-V.aspx
You can also spin up VMs running FreeBSD/HAST or Linux/DRBD to build a very similar setup yourself (two-node setups should be active-passive to avoid split brain; the Windows solutions above all maintain their own pacemaker and heartbeats to run active-active on just a pair of nodes).
Good luck and happy clustering :)
StarWind Virtual SAN clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts. -
Setup of G5 Cluster Node as a Standalone Server....
I have tried, to no avail, to set up a G5 Xserve cluster node as a new Mac OS X 10.6 server. Here is where I screwed up: I pulled the drive out, not realizing that creating a new partition on my MacBook Pro would not be compatible, and toasted the partition (rather, made it Intel-based instead of PowerPC-based). I then went all the way through installing the OS via "target mode" and realized that the server would not recognize it. So tonight I went back, created 2 new partitions on the drive and made them PowerPC-based, but the server still does not recognize the drives when I boot up using the Option key. I get 2 buttons (refresh and, I'm guessing, next), but refresh puts up a clock for about 10-15 seconds and comes back, and next does nothing visible.
Here is what I have tried so far:
1) Tried booting up with the different boot commands:
a) #3: start up from internal drive
b) #5: set up in target mode, but I don't have a PowerPC Mac to install from
c) #6: reset NVRAM
2) Booted with the letter "c", but nothing happens other than getting to a window with a single folder that alternates between a question mark and the Mac face logo (sorry, don't know the exact name)
3) Booted with a FireWire external Blu-ray DVD player, but it does not seem to be recognized at all (could be the Blu-ray, I guess; have not thought of that)
And I'm sure I have tried a few other things, but I am currently at wit's end. I have a video card (the 3rd one was the charm) so I have video, but no idea how to get my Mac OS X Server software installed on this machine....
Any help or suggestions would be greatly appreciated - oh, I'm sure by now you know I'm new to Macs - I was an old Apple ][e guy but have been on PCs since the late 80's and finally got back to Apple - love them.
I have tried to no avail to set up a G5 Xserve cluster node as a new Mac OS X 10.6 server.
Stop right there.
10.6 is Intel-only. It won't boot a PowerPC-based server. It doesn't matter about the disk format, or anything else. 10.5.x is as far as you can go with this machine. -
hi guys,
I'm looking to create a new cluster on two standalone servers.
The two servers boot from a ZFS rpool, and I don't know whether the installation procedure laid out the boot disk with a dedicated slice for the global devices.
Is it possible to install Sun Cluster with a ZFS rpool boot disk?
What do I have to do?
Alessio
Hi!
I have a 10-node Sun Cluster.
All nodes have a ZFS rpool with a mirror.
Is it better to create the mirrored ZFS boot disk after the installation of Sun Cluster or not?
I create the ZFS mirror when I install the Solaris 10 OS.
But I don't see any problem doing it after the installation of Sun Cluster or Solaris 10 either.
P.S. And you may use a UFS global devices file system with a ZFS root.
Anatoly S. Zimin -
Cluster node addition fails on cleanup
We have a 2 node cluster setup already
(2) HP BL460c G8 servers connected to a VNX5300 SAN (Nodes 1 & 2)
Server 2012 Datacenter installed
Quorum: Node + Disk
All failover tests went perfectly and all VMs are healthy.
Verification on the cluster shows some warnings but no failures.
We have rebuilt a server (node 3), renamed it, and run a single-machine verification test to see if it is suitable for clustering. It succeeded with minor warnings.
We ran verification on all three machines and received the aforementioned warnings but no show-stoppers. However, when trying to add the host to the cluster, we get the following error in the logs:
WARN mscs::ListenerWorker::operator (): ERROR_TIMEOUT(1460)' because of '[FTI][Initiator] Aborting connection because NetFT route to node <machine name> on virtual IP fe80::cdf2:f6ea:5ce:5f9c:~3343~ has failed to come up.'
This happens after the node is added to the cluster, but it reports a failure during the cleanup processes and reverts everything back. I have done all of this under my domain_admin account.
Before and after the attempt to add the node, the NetFT adapter is in media disconnect; during the attempts it does pull down a 169.254.x.x address as it is supposed to.
Node 3 Networking breakdown
The new host uses an Intel/HP NC365T quad-port adapter:
port 1: Mgmt: static assignment, subnet 1
port 2: VM net: static assignment, subnet 2
port 3: Heartbeat: assigned via DHCP from the subnet 1 pool (we have attempted the above with this disabled as well)
NCU is not installed for the adapter and bridging in server 2012 is not enabled.
I am at a loss and would appreciate any additional help, as I have spent 3 days researching this trying to find the cause.
Hi,
The error message mentions an IPv6 address; have you enabled IPv6 networking for the cluster?
Check the IPv6 network configuration on the 3rd node server: is it enabled or disabled?
When two or more cluster nodes are running IPv6 for heartbeat communications, they require any additional nodes that join to also run IPv6. If the node server has IPv6 disabled, it will fail to join.
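As a first sanity check on the joining node, you can confirm the OS will even open an IPv6 socket (a generic Python sketch; the `ipv6_usable` helper is made up for this example, and it only shows that IPv6 is not disabled outright, not that the NetFT heartbeat binding is correct):

```python
import socket

def ipv6_usable() -> bool:
    """True if this host can create an IPv6 UDP socket,
    False if IPv6 is unsupported or disabled."""
    if not socket.has_ipv6:
        return False
    try:
        sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
        sock.close()
        return True
    except OSError:
        return False

print(ipv6_usable())
```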
Also, if the cluster node servers have antivirus software installed, you may temporarily disable it and rejoin the new node.
Check that and give us feedback for further troubleshooting; for more information, please refer to the following MS articles:
Failover Cluster Creation Issue
http://social.technet.microsoft.com/Forums/en-US/winserverClustering/thread/1ed1936d-6283-46cc-951d-9c236329b8be
Failure to re-add rebuilt cluster node to Windows 2008 R2 Cluster: System error 1460 has occurred (0x000005b4). Timeout.
http://social.technet.microsoft.com/Forums/en-US/winserverClustering/thread/a21e9a8e-9f68-4d83-a747-204000cda65a
Hope this helps!
TechNet Subscriber Support
If you are a TechNet Subscription user and have any feedback on our support quality, please send your feedback here.
Lawrence
TechNet Community Support -
Hi there
My Setup:
2 Cluster Nodes (HP DL380 G7 & HP DL380 Gen8)
HP P2000 G3 FC MSA (MPIO)
The Gen8 cluster node pauses after a few minutes, but stays online if the G7 is paused (no drain). My troubleshooting has led me to believe that there is a problem with the Cluster Shared Volume:
00001508.000010b4::2015/02/19-14:51:14.189 INFO [RES] Network Name: Agent: Sending request Netname/RecheckConfig to NN:cf2dec1d-ee88-4fb6-a86d-0c2d1aa888b4:Netbios
00000d1c.0000299c::2015/02/19-14:51:14.615 INFO [API] s_ApiGetQuorumResource final status 0.
00000d1c.0000299c::2015/02/19-14:51:14.616 INFO [RCM [RES] Virtual Machine VirtualMachine1 embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00001508.000010b4::2015/02/19-14:51:15.010 INFO [RES] Network Name <Cluster Name>: Getting Read only private properties
00000d1c.00002294::2015/02/19-14:51:15.096 INFO [API] s_ApiGetQuorumResource final status 0.
00000d1c.00002294::2015/02/19-14:51:15.121 INFO [API] s_ApiGetQuorumResource final status 0.
000014a8.000024f4::2015/02/19-14:51:15.269 INFO [RES] Physical Disk <Quorum>: VolumeIsNtfs: Volume \\?\GLOBALROOT\Device\Harddisk1\ClusterPartition2\ has FS type NTFS
00000d1c.00002294::2015/02/19-14:51:15.343 WARN [RCM] ResourceTypeChaseTheOwnerLoop::DoCall: ResType MSMQ's DLL is not present on this node. Attempting to find a good node...
00000d1c.00002294::2015/02/19-14:51:15.352 WARN [RCM] ResourceTypeChaseTheOwnerLoop::DoCall: ResType MSMQTriggers's DLL is not present on this node. Attempting to find a good node...
000014a8.000024f4::2015/02/19-14:51:15.386 INFO [RES] Physical Disk: HardDiskpQueryDiskFromStm: ClusterStmFindDisk returned device='\\?\mpio#disk&ven_hp&prod_p2000_g3_fc&rev_t250#1&7f6ac24&0&36304346463030314145374646423434393243353331303030#{53f56307-b6bf-11d0-94f2-00a0c91efb8b}'
000014a8.000024f4::2015/02/19-14:51:15.386 ERR [RES] Physical Disk: HardDiskpGetDiskInfo: GetVolumeInformation failed for \\?\GLOBALROOT\Device\Harddisk3\ClusterPartition2\, status 3
000014a8.000024f4::2015/02/19-14:51:15.386 ERR [RES] Physical Disk: HardDiskpGetDiskInfo: failed to get partition size for \\?\GLOBALROOT\Device\Harddisk3\ClusterPartition2\, status 3
00000d1c.00001420::2015/02/19-14:51:15.847 WARN [RCM] ResourceTypeChaseTheOwnerLoop::DoCall: ResType MSMQ's DLL is not present on this node. Attempting to find a good node...
00000d1c.00001420::2015/02/19-14:51:15.855 WARN [RCM] ResourceTypeChaseTheOwnerLoop::DoCall: ResType MSMQTriggers's DLL is not present on this node. Attempting to find a good node...
000014a8.000024f4::2015/02/19-14:51:15.887 INFO [RES] Physical Disk: HardDiskpQueryDiskFromStm: ClusterStmFindDisk returned device='\\?\mpio#disk&ven_hp&prod_p2000_g3_fc&rev_t250#1&7f6ac24&0&36304346463030314145374646423434393243353331303030#{53f56307-b6bf-11d0-94f2-00a0c91efb8b}'
000014a8.000024f4::2015/02/19-14:51:15.888 ERR [RES] Physical Disk: HardDiskpGetDiskInfo: GetVolumeInformation failed for \\?\GLOBALROOT\Device\Harddisk3\ClusterPartition2\, status 3
000014a8.000024f4::2015/02/19-14:51:15.888 ERR [RES] Physical Disk: HardDiskpGetDiskInfo: failed to get partition size for \\?\GLOBALROOT\Device\Harddisk3\ClusterPartition2\, status 3
00000d1c.00001420::2015/02/19-14:51:15.928 WARN [RCM] ResourceTypeChaseTheOwnerLoop::DoCall: ResType MSMQ's DLL is not present on this node. Attempting to find a good node...
00000d1c.00001420::2015/02/19-14:51:15.939 WARN [RCM] ResourceTypeChaseTheOwnerLoop::DoCall: ResType MSMQTriggers's DLL is not present on this node. Attempting to find a good node...
000014a8.000024f4::2015/02/19-14:51:15.968 INFO [RES] Physical Disk: HardDiskpQueryDiskFromStm: ClusterStmFindDisk returned device='\\?\mpio#disk&ven_hp&prod_p2000_g3_fc&rev_t250#1&7f6ac24&0&36304346463030314145374646423434393243353331303030#{53f56307-b6bf-11d0-94f2-00a0c91efb8b}'
000014a8.000024f4::2015/02/19-14:51:15.969 ERR [RES] Physical Disk: HardDiskpGetDiskInfo: GetVolumeInformation failed for \\?\GLOBALROOT\Device\Harddisk3\ClusterPartition2\, status 3
000014a8.000024f4::2015/02/19-14:51:15.969 ERR [RES] Physical Disk: HardDiskpGetDiskInfo: failed to get partition size for \\?\GLOBALROOT\Device\Harddisk3\ClusterPartition2\, status 3
00000d1c.00001420::2015/02/19-14:51:16.005 WARN [RCM] ResourceTypeChaseTheOwnerLoop::DoCall: ResType MSMQ's DLL is not present on this node. Attempting to find a good node...
00000d1c.00001420::2015/02/19-14:51:16.015 WARN [RCM] ResourceTypeChaseTheOwnerLoop::DoCall: ResType MSMQTriggers's DLL is not present on this node. Attempting to find a good node...
000014a8.000024f4::2015/02/19-14:51:16.059 INFO [RES] Physical Disk: HardDiskpQueryDiskFromStm: ClusterStmFindDisk returned device='\\?\mpio#disk&ven_hp&prod_p2000_g3_fc&rev_t250#1&7f6ac24&0&36304346463030314145374646423434393243353331303030#{53f56307-b6bf-11d0-94f2-00a0c91efb8b}'
000014a8.000024f4::2015/02/19-14:51:16.059 ERR [RES] Physical Disk: HardDiskpGetDiskInfo: GetVolumeInformation failed for \\?\GLOBALROOT\Device\Harddisk3\ClusterPartition2\, status 3
000014a8.000024f4::2015/02/19-14:51:16.059 ERR [RES] Physical Disk: HardDiskpGetDiskInfo: failed to get partition size for \\?\GLOBALROOT\Device\Harddisk3\ClusterPartition2\, status 3
00000d1c.00002568::2015/02/19-14:51:17.110 INFO [GEM] Node 1: Deleting [2:395 , 2:396] (both included) as it has been ack'd by every node
00000d1c.0000299c::2015/02/19-14:51:17.444 INFO [RCM [RES] Virtual Machine VirtualMachine2 embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000d1c.0000299c::2015/02/19-14:51:18.103 INFO [RCM] rcm::DrainMgr::PauseNodeNoDrain: [DrainMgr] PauseNodeNoDrain
00000d1c.0000299c::2015/02/19-14:51:18.103 INFO [GUM] Node 1: Processing RequestLock 1:164
00000d1c.00002568::2015/02/19-14:51:18.104 INFO [GUM] Node 1: Processing GrantLock to 1 (sent by 2 gumid: 1470)
00000d1c.0000299c::2015/02/19-14:51:18.104 INFO [GUM] Node 1: executing request locally, gumId:1471, my action: /nsm/stateChange, # of updates: 1
00000d1c.00001420::2015/02/19-14:51:18.104 INFO [DM] Starting replica transaction, paxos: 99:99:50133, smartPtr: HDL( c9b16cf1e0 ), internalPtr: HDL( c9b21
This issue has been bugging me for some time now. The cluster is fully functional and works great until the node gets paused again. I've read somewhere that the MSMQ errors can be ignored, but I can't find anything about the HardDiskpGetDiskInfo: GetVolumeInformation failed messages. No errors in the SAN or the server event logs. Drivers and firmware are up to date. Any help would be greatly appreciated.
Best regards
Thank you for your replies.
First some information I left out in my original post. We're using Windows Server 2012 R2 Datacenter and are currently only hosting virtual machines on the cluster.
I did some testing over the weekend, including a firmware update on the san and cluster validation.
The problem doesn't seem to be related to backup. We use Microsoft DPM to make a full express backup once every day; the GetVolumeInformation failed error gets logged periodically, every half hour.
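To measure how often those entries recur, you can filter the cluster log with a few lines of script (a generic sketch, assuming the pid.tid::timestamp LEVEL [component] message layout visible in the excerpt above; the `errors` helper is made up for this example):

```python
import re

# Matches lines like:
# 000014a8.000024f4::2015/02/19-14:51:15.386 ERR [RES] Physical Disk: ...
LINE = re.compile(
    r"^(?P<pid>[0-9a-f]+)\.(?P<tid>[0-9a-f]+)::"
    r"(?P<ts>\S+)\s+(?P<level>INFO|WARN|ERR)\s+(?P<msg>.*)$"
)

def errors(lines):
    """Yield (timestamp, message) for every ERR entry in the log."""
    for line in lines:
        m = LINE.match(line)
        if m and m.group("level") == "ERR":
            yield m.group("ts"), m.group("msg")

sample = [
    "00000d1c.0000299c::2015/02/19-14:51:14.615 INFO [API] s_ApiGetQuorumResource final status 0.",
    "000014a8.000024f4::2015/02/19-14:51:15.386 ERR [RES] Physical Disk: HardDiskpGetDiskInfo: GetVolumeInformation failed, status 3",
]
for ts, msg in errors(sample):
    print(ts, msg)
```

Feeding it the real cluster.log would show whether the half-hour cadence lines up with a scheduled task or health check.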
Excerpts from the validation report:
Validate Disk Failover
Description: Validate that a disk can fail over successfully with data intact.
Start: 21.02.2015 18:02:17.
Node Node2 holds the SCSI PR on Test Disk 3 and brought the disk online, but failed in its attempt to write file data to partition table entry 1. The disk structure is corrupted and unreadable.
Stop: 21.02.2015 18:02:37.
Node Node1 holds the SCSI PR on Test Disk 3 and brought the disk online, but failed in its attempt to write file data to partition table entry 1. The disk structure is corrupted and unreadable.
Validate File System
Description: Validate that the file system on disks in shared storage is supported by failover clusters and Cluster Shared Volumes (CSVs). Failover cluster physical disk resources support NTFS, ReFS, FAT32, FAT, and RAW. Only volumes formatted as NTFS or ReFS are accessible in disks added as CSVs.
The test was canceled.
Validate Simultaneous Failover
Description: Validate that disks can fail over simultaneously with data intact.
The test was canceled.
Validate Storage Spaces Persistent Reservation
Description: Validate that storage supports the SCSI-3 Persistent Reservation commands needed by Storage Spaces to support clustering.
Start: 21.02.2015 18:01:00.
Verifying there are no Persistent Reservations, or Registration keys, on Test Disk 3 from node Node1.
Issuing Persistent Reservation REGISTER AND IGNORE EXISTING KEY using RESERVATION KEY 0x0 SERVICE ACTION RESERVATION KEY 0x30000000a for Test Disk 3 from node Node1.
Issuing Persistent Reservation RESERVE on Test Disk 3 from node Node1 using key 0x30000000a.
Issuing Persistent Reservation REGISTER AND IGNORE EXISTING KEY using RESERVATION KEY 0x0 SERVICE ACTION RESERVATION KEY 0x3000100aa for Test Disk 3 from node Node2.
Issuing Persistent Reservation REGISTER using RESERVATION KEY 0x30000000a SERVICE ACTION RESERVATION KEY 0x30000000b for Test Disk 3 from node Node1 to change the registered key while holding the reservation for the disk.
Verifying there are no Persistent Reservations, or Registration keys, on Test Disk 2 from node Node1.
Issuing Persistent Reservation REGISTER AND IGNORE EXISTING KEY using RESERVATION KEY 0x0 SERVICE ACTION RESERVATION KEY 0x20000000a for Test Disk 2 from node Node1.
Issuing Persistent Reservation RESERVE on Test Disk 2 from node Node1 using key 0x20000000a.
Issuing Persistent Reservation REGISTER AND IGNORE EXISTING KEY using RESERVATION KEY 0x0 SERVICE ACTION RESERVATION KEY 0x2000100aa for Test Disk 2 from node Node2.
Issuing Persistent Reservation REGISTER using RESERVATION KEY 0x20000000a SERVICE ACTION RESERVATION KEY 0x20000000b for Test Disk 2 from node Node1 to change the registered key while holding the reservation for the disk.
Verifying there are no Persistent Reservations, or Registration keys, on Test Disk 0 from node Node1.
Issuing Persistent Reservation REGISTER AND IGNORE EXISTING KEY using RESERVATION KEY 0x0 SERVICE ACTION RESERVATION KEY 0xa for Test Disk 0 from node Node1.
Issuing Persistent Reservation RESERVE on Test Disk 0 from node Node1 using key 0xa.
Issuing Persistent Reservation REGISTER AND IGNORE EXISTING KEY using RESERVATION KEY 0x0 SERVICE ACTION RESERVATION KEY 0x100aa for Test Disk 0 from node Node2.
Issuing Persistent Reservation REGISTER using RESERVATION KEY 0xa SERVICE ACTION RESERVATION KEY 0xb for Test Disk 0 from node Node1 to change the registered key while holding the reservation for the disk.
Verifying there are no Persistent Reservations, or Registration keys, on Test Disk 1 from node Node1.
Issuing Persistent Reservation REGISTER AND IGNORE EXISTING KEY using RESERVATION KEY 0x0 SERVICE ACTION RESERVATION KEY 0x10000000a for Test Disk 1 from node Node1.
Issuing Persistent Reservation RESERVE on Test Disk 1 from node Node1 using key 0x10000000a.
Issuing Persistent Reservation REGISTER AND IGNORE EXISTING KEY using RESERVATION KEY 0x0 SERVICE ACTION RESERVATION KEY 0x1000100aa for Test Disk 1 from node Node2.
Issuing Persistent Reservation REGISTER using RESERVATION KEY 0x10000000a SERVICE ACTION RESERVATION KEY 0x10000000b for Test Disk 1 from node Node1 to change the registered key while holding the reservation for the disk.
Failure. Persistent Reservation not present on Test Disk 3 from node Node1 after successful call to update reservation holder's registration key 0x30000000b.
Failure. Persistent Reservation not present on Test Disk 1 from node Node1 after successful call to update reservation holder's registration key 0x10000000b.
Failure. Persistent Reservation not present on Test Disk 0 from node Node1 after successful call to update reservation holder's registration key 0xb.
Failure. Persistent Reservation not present on Test Disk 2 from node Node1 after successful call to update reservation holder's registration key 0x20000000b.
Test Disks 0, 1, 2 and 3 do not support the SCSI-3 Persistent Reservations commands needed by clustered storage pools that use the Storage Spaces subsystem. Some storage devices require specific firmware versions or settings to function properly with failover clusters. Contact your storage administrator or storage vendor for help with configuring the storage to function properly with failover clusters that use Storage Spaces.
Stop: 21.02.2015 18:01:02
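The sequence in the report can be modeled in a few lines to show what the validator expects (a deliberately simplified sketch of SPC-3-style semantics, not vendor code; the `PRDisk` class is made up for this example): when the initiator that holds the reservation re-registers with a new key, the reservation must persist under the new key. The failures above mean the array is instead dropping the reservation at that step.

```python
class PRDisk:
    """Toy model of SCSI-3 persistent reservations, just enough to
    mirror the validator's REGISTER / RESERVE / re-REGISTER sequence."""
    def __init__(self):
        self.registrations = {}   # initiator -> registered key
        self.holder = None        # (initiator, key) or None

    def register_ignore_existing(self, initiator, key):
        # REGISTER AND IGNORE EXISTING KEY: (re)register unconditionally.
        self.registrations[initiator] = key

    def reserve(self, initiator, key):
        # RESERVE: only a registered initiator may take the reservation.
        assert self.registrations.get(initiator) == key
        self.holder = (initiator, key)

    def register(self, initiator, old_key, new_key):
        # REGISTER: replace this initiator's key; if it holds the
        # reservation, the reservation persists under the new key.
        assert self.registrations.get(initiator) == old_key
        self.registrations[initiator] = new_key
        if self.holder and self.holder[0] == initiator:
            self.holder = (initiator, new_key)

disk = PRDisk()
disk.register_ignore_existing("Node1", 0x30000000A)
disk.reserve("Node1", 0x30000000A)
disk.register_ignore_existing("Node2", 0x3000100AA)
disk.register("Node1", 0x30000000A, 0x30000000B)
print(disk.holder)  # the reservation must still exist, under the new key
```

A firmware fix or array setting is usually what brings the real hardware in line with this expected behaviour, which is why the report points at the storage vendor.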
Thank you for your help.
David -
Is setting up a cluster node as a stand alone file server possible? What would it take?
My main concern is whether there is a firmware stop point that expects a physical or logical link to a normal server before a cluster node will boot.
OK, I'm confused. What makes you think a cluster node isn't a normal server?
The only difference between the Xserve Cluster Node and the Xserve (at least, the PowerPC one it's based on) is the single drive bay (vs. 3) and the lack of an optical drive. That's it. It comes with the same version of the OS, has the same ports, runs the same apps, and does the same thing as the non-cluster-node version.
Can a cluster node be put into FireWire mode with the T option, followed by a drive restore from a pre-configured Mac OS X Server disk image, avoiding all the command-line stuff? Or, for that matter, just swapping out the drive with a pre-configured Mac OS X Server drive?
Sure. The only thing you can't do is put a second drive inside the machine since, by definition, it only has a single drive bay.