"Shared Cluster Storage" Qmaster

I have noticed that "Shared Cluster Storage" in System Preferences > Apple Qmaster is not set to "/var/spool/qmaster" as it should be, but to "Applications".
Please help!

If you'd like to change it back to the original location, you'll need to use the Go to Folder command, since it's in a hidden folder.
In the Qmaster System Preference pane, click the "Set" button next to the Shared Storage path.
Note: you will need to Stop Sharing to make this change.
When the Finder window comes up, press Command-Shift-G.
In the "Go to the folder" field, enter: /var/spool
You should be able to select the qmaster folder and click "Choose" at that point.
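If you'd rather double-check from Terminal before clicking around, here is a minimal Python sketch that only reports whether the hidden spool folder exists and is writable. It assumes the default /var/spool/qmaster path discussed above and changes nothing.

```python
#!/usr/bin/env python3
# Quick sanity check of the stock Qmaster shared storage location.
# Assumes the default path /var/spool/qmaster mentioned above; adjust if yours differs.
import os
import stat

path = "/var/spool/qmaster"

if not os.path.isdir(path):
    print(f"{path} does not exist yet - it may be recreated when sharing starts")
else:
    st = os.stat(path)
    print(f"{path} exists")
    print(f"  owner uid/gid : {st.st_uid}/{st.st_gid}")
    print(f"  permissions   : {stat.filemode(st.st_mode)}")
    print(f"  writable by me: {os.access(path, os.W_OK)}")
```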
~D

Similar Messages

  • Qmaster Cluster Storage Location Creating Duplicate Clusters

    I'm in the process of setting up a Qmaster cluster for transcoding jobs and also for Final Cut Server. We have an Xserve serving as the cluster controller, with a RAID attached via fiber that is serving out an NFS share over 10Gb Ethernet to 8 other Xserves that make up the cluster. The 8 other Xserves are all automounting the NFS share. The problem we are running into is that we need to change the default "Cluster Storage" location (Qmaster preference pane) to the attached RAID rather than the default location on the system drive, primarily because the size of the transcodes we are doing will fill the system drive, and the transcodes will fail if it is left in the default location.
    Every time we try to set the "Cluster Storage" location to a directory on the RAID and then create a cluster using QAdministrator, it spontaneously generates a duplicate cluster and prevents you from being able to modify the cluster you originally made. It says that it's currently in use or being edited by someone else.
    Duplicated cluster.
    Currently being used by someone else.
    If you close QAdmin and then try to modify the cluster, it says it is locked and prompts for a password, despite the fact that no password was set up for the cluster. Alternatively, if you do set up a password on the cluster, it does not actually work in this situation.
    If the "Cluster Storage" location is set back to its default location, none of this duplicated cluster business happens at all. I went and checked and verified that permissions were the same between the directory on the RAID and the default location on the system drive (/var/spool/qmaster). I also cleared out previous entries in /etc/exports, and that didn't resolve anything. Also, every time any change has been made, services have been stopped and started again. The only thing I can see that is different between using /var/spool/qmaster and another directory on our RAID is that once a controller has been assigned in QAdmin, the storage path that shows up is different. The default is nfs://engel.local/private/var/spool/qmaster/4D3BF996-903A30FF/shared and the custom is file2nfs://localhost/Volumes/FCServer/clustertemp/869AE873-7B26C2D9/shared. Screenshots are below.
    Default Location
    Custom Location
    Kind of at a loss at this point; any help would be much appreciated. Thanks.

    Starting from the beginning, did you have a working cluster to begin with or is this a new implementation?
    A few major housekeeping items (assuming this is a new implementation): all Qmaster nodes should have the same versions of Qmaster, QuickTime, and Compressor (if they have Compressor loaded).
    The only box that really matters as far as cluster storage location is the controller. It tells the rest of the boxes where to look. On your shared storage, create a folder that is "CLUSTER_STORAGE" or something to that effect... then in the controller's preference pane, set the controller's storage to that location. It will create a new, cryptic folder full of numbers and letters and dashes and use that as the storage location for any computer in the cluster.
    Now... what I'm seeing in your first screenshot worries me a little bit. I have had the same issue, and the only way I've found to remedy it is to pull all things Final Cut Studio off that box and do a completely fresh reinstall... then update everything again to the same versions. I'm assuming you're using FCStudio 7.x?
    We should be able to get you on your feet with this. Configuring is always the hardest part, but when it gets done... man, it's worth it.
    Cheers
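    If it helps, here's a rough Python sketch of that housekeeping step: make the dedicated storage folder on the shared volume and confirm it's mounted and writable before pointing the controller at it. The /Volumes/FCServer mount point is only an example taken from the storage path quoted above; substitute your own.

    ```python
    #!/usr/bin/env python3
    # Create and sanity-check a dedicated cluster storage folder on shared storage.
    # The mount point below is only an example from the thread; use your own volume.
    import os

    shared_volume = "/Volumes/FCServer"                 # example shared RAID/NFS mount point
    storage_dir = os.path.join(shared_volume, "CLUSTER_STORAGE")

    if not os.path.ismount(shared_volume):
        raise SystemExit(f"{shared_volume} is not mounted - mount the shared volume first")

    os.makedirs(storage_dir, exist_ok=True)             # harmless if it already exists

    # Verify this machine can actually write there before pointing the controller at it.
    probe = os.path.join(storage_dir, ".write_test")
    with open(probe, "w") as f:
        f.write("ok")
    os.remove(probe)

    print(f"{storage_dir} is reachable and writable - set it as Cluster Storage on the controller")
    ```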

  • Cluster storage won't mount on restart

    This has been happening for a while (in 10.5 and 10.6), but when I restart my computer my "shared" storage drive does not mount automatically. The cluster storage resides on a local drive (which is mounted), so I'm not sure what steps would be necessary to mount this virtual drive.
    Stopping services in the Qmaster System Prefs pane and restarting them mounts the drive and makes my services available again.
    This happens with Quick Clusters and manual Clusters.
    Any suggestions?
    Thanks

    Not clear on your config.
    You are restarting a computer, not waking from sleep.
    That computer is supposed to mount a share from another computer, in order to support your Qmaster realm.
    The share does not get mounted automatically, and you need to do it manually.
    You want it to happen automatically instead.
    Do I have that right?
    I've not dealt in this realm for a while, but in the past I have solved this by adding an item in my user account's "Login Items" pane. I seem to recall you can drag the icon of a currently mounted share into that list.
    Also, since this auto-mounting is really a generic system issue, you might get better help over in one of the general Mac OS X forums.
    Cheers,
    -Rick
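    If the Login Items trick doesn't stick, another hedged option is a small script run at login that remounts the share when it's missing. This is only a sketch: the server name, export path and mount point below are placeholders, and mount_nfs may need different options (and elevated privileges) on your setup.

    ```python
    #!/usr/bin/env python3
    # Remount the Qmaster cluster storage share at login if it is missing.
    # Server, export path and mount point are placeholders - edit them for your setup,
    # and run with sufficient privileges (e.g. via sudo or a launch agent/daemon).
    import os
    import subprocess

    server = "controller.local"                    # hypothetical controller hostname
    export = "/private/var/spool/qmaster"          # hypothetical NFS export path
    mount_point = "/Volumes/qmaster_storage"       # hypothetical local mount point

    if os.path.ismount(mount_point):
        print(f"{mount_point} is already mounted")
    else:
        os.makedirs(mount_point, exist_ok=True)
        # mount_nfs is the standard macOS NFS mount helper; options may vary per network.
        subprocess.run(["mount_nfs", f"{server}:{export}", mount_point], check=True)
        print(f"mounted {server}:{export} at {mount_point}")
    ```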

  • OCR and vote disk allocation for shared raw storage with Solaris 10 question

    Hi all,
    Current environment is Solaris 10 SPARC 64 bit OS with Hitachi SAN for shared storage and Sun E6900 servers.
    For Oracle 10g RAC (10.2) and ASM, I am setting up the vote disk and OCR files on shared raw storage area network.
    Assume that I have a 35 GB LUN carved out on a raw device for the vote disk and OCR for a 2-node Oracle 10g RAC cluster. Since the vote disk and OCR only require 120 MB of storage, is there a way to use only a 120 MB slice from the LUN, or do I need to allocate the entire LUN/raw device to the vote disk?
    I am looking for a way to avoid wasting space for the OCR and vote disk with my 10g RAC cluster and we are using raw devices and Oracle 10g RAC Clusterware with Veritas filesystem for the disk management. Can I have several slices on Solaris 10 with raw shared storage (Hitachi) to triple mirror my OCR and vote disk files?
    It seems odd to have to use an entire 35GB LUN just for 2 small files: vote disk and OCR. Is there a way to partition a 120MB slice on one of the devices and allocate the rest to ASM?

    Hi
    In our RAC environment we have a 1 GB LUN for the OCR and voting disk and a 100 GB LUN for the ASM disk. We have a 4-node RAC environment on AIX with 2 Hitachi SAN storage systems for our application.
    We kept the 1st voting disk on SAN-1, the 2nd voting disk on SAN-2, and the third voting disk on SAN-3.
    The OCR disk is on SAN-1 and the OCR_MIRROR disk is on SAN-2.
    We have the ASM disk with normal redundancy: failure group 1 on SAN-1 and failure group 2 on SAN-2.
    Regards
    Bharat

  • Cluster Storage

    We have three Hyper-V host servers (Windows 2012) in a cluster environment. We have a SAN which is mounted as cluster storage on all three servers. Hyper-V1 has ownership of the disk.
    Recently we increased the disk space on the SAN volume, and it is reflected in the cluster disk but not in the cluster volume. It shows the correct size in Disk Management on all servers, but does not show in the cluster storage.
    Please see the attached screenshot to understand more clearly.
    Can someone help me with how I can resolve this issue?
    Thanks

    You need a proper set of actions to increase the shared LUN and the CSV layered on top of it. Please see:
    CSV: Extending the Volume - http://blogs.technet.com/b/chrad/archive/2010/07/15/cluster-shared-volumes-csv-extending-a-volume.aspx
    Similar thread before, see:
    Extend CSV in Hyper-V - https://social.technet.microsoft.com/Forums/windowsserver/en-US/034afc19-490c-45e3-8279-28a2cbfeabe9/hyperv-extend-csv
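    For what it's worth, once the LUN has been grown on the SAN, the volume still has to be extended from the node that currently owns the CSV. Below is a rough sketch of that last step driven through diskpart; it is only an illustration of what the linked article walks through, and the volume number is a placeholder you must confirm with "list volume" first.

    ```python
    #!/usr/bin/env python3
    # Extend the NTFS volume under a CSV after the SAN LUN itself has been grown.
    # Run from an elevated prompt on the node that currently owns the CSV.
    # VOLUME_NUMBER is a placeholder - confirm it first with "diskpart> list volume".
    import subprocess
    import tempfile

    VOLUME_NUMBER = 3  # placeholder: the volume backing C:\ClusterStorage\VolumeX

    script = f"select volume {VOLUME_NUMBER}\nextend\n"

    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        f.write(script)
        script_path = f.name

    # diskpart /s runs the commands from the script file non-interactively.
    subprocess.run(["diskpart", "/s", script_path], check=True)
    print("diskpart extend completed - re-check the volume size in Failover Cluster Manager")
    ```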
    Good luck :)

  • Cluster Storage : All SAN drives are not added up into cluster storage.

    Hi Team,
    Everything seems to be working fine except one minor issue: one of the disks is not showing in cluster storage, even though validation completed without any issue, error, or warning. Please see the report below, where all SAN disks validate successfully but are not added into Windows Server 2012 storage.
    The quorum disk was added successfully into storage, but the data disk was not.
    http://goldteam.co.uk/download/cluster.mht
    Thanks,
    SZafar

    Create Cluster
    Cluster: mail
    Node: MailServer-N2.goldteam.co.uk
    Node: MailServer-N1.goldteam.co.uk
    Quorum: Node and Disk Majority (Cluster Disk 1)
    IP Address: 192.168.0.4
    Started: 12/01/2014 04:34:45
    Completed: 12/01/2014 04:35:08
    Beginning to configure the cluster mail.
    Initializing Cluster mail.
    Validating cluster state on node MailServer-N2.goldteam.co.uk.
    Find a suitable domain controller for node MailServer-N2.goldteam.co.uk.
    Searching the domain for computer object 'mail'.
    Bind to domain controller \\GTMain.goldteam.co.uk.
    Check whether the computer object mail for node MailServer-N2.goldteam.co.uk exists in the domain. Domain controller \\GTMain.goldteam.co.uk.
    Computer object for node MailServer-N2.goldteam.co.uk does not exist in the domain.
    Creating a new computer account (object) for 'mail' in the domain.
    Check whether the computer object MailServer-N2 for node MailServer-N2.goldteam.co.uk exists in the domain. Domain controller \\GTMain.goldteam.co.uk.
    Creating computer object in organizational unit CN=Computers,DC=goldteam,DC=co,DC=uk where node MailServer-N2.goldteam.co.uk exists.
    Create computer object mail on domain controller \\GTMain.goldteam.co.uk in organizational unit CN=Computers,DC=goldteam,DC=co,DC=uk.
    Check whether the computer object mail for node MailServer-N2.goldteam.co.uk exists in the domain. Domain controller \\GTMain.goldteam.co.uk.
    Configuring computer object 'mail in organizational unit CN=Computers,DC=goldteam,DC=co,DC=uk' as cluster name object.
    Get GUID of computer object with FQDN: CN=MAIL,CN=Computers,DC=goldteam,DC=co,DC=uk
    Validating installation of the Network FT Driver on node MailServer-N2.goldteam.co.uk.
    Validating installation of the Cluster Disk Driver on node MailServer-N2.goldteam.co.uk.
    Configuring Cluster Service on node MailServer-N2.goldteam.co.uk.
    Validating installation of the Network FT Driver on node MailServer-N1.goldteam.co.uk.
    Validating installation of the Cluster Disk Driver on node MailServer-N1.goldteam.co.uk.
    Configuring Cluster Service on node MailServer-N1.goldteam.co.uk.
    Waiting for notification that Cluster service on node MailServer-N2.goldteam.co.uk has started.
    Forming cluster 'mail'.
    Adding cluster common properties to mail.
    Creating resource types on cluster mail.
    Creating resource group 'Cluster Group'.
    Creating IP Address resource 'Cluster IP Address'.
    Creating Network Name resource 'mail'.
    Searching the domain for computer object 'mail'.
    Bind to domain controller \\GTMain.goldteam.co.uk.
    Check whether the computer object mail for node exists in the domain. Domain controller \\GTMain.goldteam.co.uk.
    Computer object for node exists in the domain.
    Verifying computer object 'mail' in the domain.
    Checking for account information for the computer object in the 'UserAccountControl' flag for CN=MAIL,CN=Computers,DC=goldteam,DC=co,DC=uk.
    Set password on mail.
    Configuring computer object 'mail in organizational unit CN=Computers,DC=goldteam,DC=co,DC=uk' as cluster name object.
    Get GUID of computer object with FQDN: CN=MAIL,CN=Computers,DC=goldteam,DC=co,DC=uk
    Provide permissions to protect object from accidental deletion.
    Write service principal name list to the computer object CN=MAIL,CN=Computers,DC=goldteam,DC=co,DC=uk.
    Set operating system and version in Active Directory Domain Services.
    Set supported encryption types in Active Directory Domain Services.
    Starting clustered role 'Cluster Group'.
    The initial cluster has been created - proceeding with additional configuration.
    Clustering all shared disks.
    Creating the physical disk resource for 'Cluster Disk 1'.
    Bringing the resource for 'Cluster Disk 1' online.
    Assigning the drive letters for 'Cluster Disk 1'.
    'Cluster Disk 1' has been successfully configured.
    Waiting for available storage to come online...
    All available storage has come online...
    Waiting for the core cluster group to come online.
    Configuring the quorum for the cluster.
    Configuring quorum resource to Cluster Disk 1.
    Configuring Node and Disk Majority quorum with 'Cluster Disk 1'.
    Moving 'Cluster Disk 1' to the core cluster group.
    Choosing the most appropriate storage volume...
    Attempting Node and Disk Majority quorum configuration with 'Cluster Disk 1'.
    Quorum settings have successfully been changed.
    The cluster was successfully created.
    Finishing cluster creation.
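    Not an explanation of why the wizard skipped the data disk, but if the disk validated cleanly it can usually be added to cluster storage by hand. A minimal sketch, assuming the FailoverClusters PowerShell module is present on one of the nodes:

    ```python
    #!/usr/bin/env python3
    # Add any eligible-but-unclaimed SAN disks to cluster storage by hand.
    # Assumes this runs on a cluster node with the FailoverClusters PowerShell module.
    import subprocess

    ps = (
        "Import-Module FailoverClusters; "
        "Get-ClusterAvailableDisk | Add-ClusterDisk"
    )

    subprocess.run(["powershell.exe", "-NoProfile", "-Command", ps], check=True)
    print("Check Storage > Disks in Failover Cluster Manager for the newly added disk")
    ```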

  • Cluster Storage on Xsan - Constant Read/Write?

    Wondering if anyone else has seen this. I have a Qmaster cluster for Compressor jobs, and if I set the Cluster Storage location to somewhere on the Xsan instead of my local drive, throughout the whole compression job my disks will be reading and writing like mad: 40 MB/s of read and write, constantly. This obviously hurts the performance of the compression job. If I set the Cluster Storage to the internal disk (/var/spool), I get better performance and virtually no disk activity until it writes out the file at the end. It's especially bad when using QuickTime Export Components.
    Has anyone seen this? I've opened a ticket with Apple, but I'm wondering if it's just me?

    Are your Compressor preferences set to Never Copy? That's what they should be set to. I personally haven't seen this behavior, and I have 3 clusters (3 x 5 nodes) connected to the same SAN.
    It's also possible your SAN volume config has something to do with it. If the transfer size (block size and stripe breadth) is too low, then I could imagine something like this happening.

  • Manually Moving VMS from HOST to HOST *WITH* Shared SAS Storage

    Hi,
    Environment will be 2 Windows 2012 R2 Hyper-V servers.
    They will be connected via shared SAS. These are two identical IBM servers with shared SAS storage.
    I am hoping to eliminate MS clustering of the Hyper-V servers, just to keep things straightforward.
    In the event that I had a complete failure of one of the physical Hyper-V hosts, I was hoping to be able to manually add/import (not sure what the terminology is here) and run the virtual machines on the second host until the first is repaired.
    If there is third-party software that can do this, I would entertain it.
    In VMware, I had two hosts and would manually remove the VMs from inventory on one server and add them to the second server.
    Ideally, I would like to keep both Hyper-V hosts in a Windows workgroup and as vanilla as possible.
    Thanks in Advance,
    G

    Hi,
    Sorry for the late reply.  I have been researching all I can. And thanks to all for your patience and help!
    Let's start with the hardware to help paint the picture:
    2 @ IBM X3550 M4
    1 @ IBM V3700
    The two X3550s are going to be connected via 6Gb SAS cards (2 per server for redundancy).
    Current environment (residing on VMware with older IBM xSeries servers and a DS3524 connected via SAS): 40 users total.
    Windows servers are:
    1 @ SBS 2011 (One and only AD/PDC)
    2 @ Windows 2008 Servers that are CRM (these are Windows Member Servers)
    5 @ Windows 2003 servers that are going to be replaced with Windows 2008 servers (these are Windows member servers) (low-usage servers)
    Just to recap:
    The current environment, running VMware 4.1, has both IBM X3500 servers able to see a single 1.5 TB LUN (SAS connected) presented by the DS3524.
    Currently, in VMware, I can see the same LUN on both servers. I have all my VMs on HOSTA; if HOSTA should suffer a system board failure, I manually connect to HOSTB with the VMware client and add the VMs that I can see on the shared LUN to HOSTB.
    My understanding is that it's strongly recommended that my DC (not sure if they are implying PDC or BDC) be on a physical server. My hesitation is that now I have to introduce a DC alongside SBS 2011, manage a physical server, and deal with SBS 2011's quirks.
    So I was hoping that if I presented a LUN to both my Hyper-V hosts (in a workgroup) and created all my VMs on HOSTA, then if something went very bad with HOSTA, I could connect to HOSTB, "import" the VMs, and manually start them, especially since SBS 2011 is sensitive.
    The client is okay with some downtime, so if the process is supported and works, then I am okay with doing the work.
    Clustering or even Replicas will add:
    Complexity to the environment.
    Patching that needs to be carefully planned, as it would not look good to have to fail over due to a Windows update on the primary host.
    The techs that support the environment will now need to know about clustering and/or replication on top of the SAN stuff.
    I am looking at Veeam and Double-Take to see if they can work with 2 @ 2012 Hyper-V hosts configured in a workgroup.
    The SBS 2011 box is the "hinge" that will make or break the success of this project. I personally love the way 2012 Hyper-V works, but it would be great if a physical server outside of the Hyper-V hosts was not required.
    Thanks !
    G
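    On the "import" terminology: with Server 2012 R2 Hyper-V, a VM whose files sit on storage both hosts can see can usually be registered in place on the surviving host with Import-VM. A rough sketch under that assumption (the configuration path is a made-up example; point it at the real <GUID>.xml under the VM's "Virtual Machines" folder on the shared LUN):

    ```python
    #!/usr/bin/env python3
    # Register an existing VM "in place" on the surviving Hyper-V host (Server 2012 R2).
    # The configuration path is a made-up example - point it at the real
    # "<VM folder>\Virtual Machines\<GUID>.xml" file on the shared LUN.
    import subprocess

    vm_config = r"E:\VMs\SBS2011\Virtual Machines\00000000-0000-0000-0000-000000000000.xml"

    ps = f"Import-VM -Path '{vm_config}' -Register; Get-VM"
    subprocess.run(["powershell.exe", "-NoProfile", "-Command", ps], check=True)
    ```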

  • Compressor 4.1 shared cluster computer not appearing - but slave can see master

    On my main "master" machine, I cannot see one of my shared cluster machines. However, on the particular shared cluster machine (slave) that my "master" cannot see, the slave can see the master in the list of shared computers. So, in other words, I suppose I could launch the Compressor job from the "slave" machine [since the source file is on a network drive that all machines can access], but it doesn't make any sense.
    Why can't the "master" see the "slave"?
    I've tried restarting the master computer, tried turning off wifi and ethernet on the master computer, tried manually inputting the IP address and host name on the master computer in the list of shared computers by hitting the "plus" button. Also tried setting the "Use network interfaces" option to either both, ethernet only, or wi-fi only, also tried resetting the compressor queue, and trashing the Compressor preferences.
    Nothing works. What's strange is that this master computer was able to see the slave computer two weeks ago, using Compressor 4.1. Nothing has changed. They are both on the same 10.0.0.x network and subnet. The master machine can ping the slave machine.
    Any ideas?

    Just posted this in this discussion forum. It might be helpful even though it's specifically for Compressor V4.0.
    Run through some of the procedure if you are on Compressor V4.0 or earlier:
    https://discussions.apple.com/thread/5781288?tstart=0
    As you're using Compressor V4.1, check the cluster and Compressor logs for network and file system permission errors:
    ~/Library/Logs/Compressor (new in V4.1), to see what the jobcontroller and service nodes are doing
    check /var/log (system log)
    Otherwise, Compressor V4.1 seems sensitive to hardware configurations now.
    Might be of some help.
    w
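    If it saves some time, here is a tiny Python sketch of the log check above: it scans ~/Library/Logs/Compressor for lines that look like errors. Purely illustrative; the keywords are guesses about what is worth grepping for.

    ```python
    #!/usr/bin/env python3
    # Scan the Compressor 4.1 logs (paths from the post above) for error-looking lines.
    import glob
    import os

    log_dir = os.path.expanduser("~/Library/Logs/Compressor")
    keywords = ("error", "denied", "timeout", "refused")   # rough guesses at useful terms

    for path in sorted(glob.glob(os.path.join(log_dir, "*"))):
        if not os.path.isfile(path):
            continue
        with open(path, errors="replace") as f:
            for line in f:
                if any(k in line.lower() for k in keywords):
                    print(f"{os.path.basename(path)}: {line.rstrip()}")
    ```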

  • Move SoFS cluster storage

    Hi,
    I'm thinking about a scenario that is as follows: I have a SoFS cluster using iSCSI shared storage and clustered storage spaces. The underlying hardware providing the iSCSI CSVs is being replaced by a new system.
    Can I just add new eligible iSCSI disks from the new hardware to extend the existing clustered pool, and then remove the old iSCSI disks from the pool?
    Ideally this would then trigger movement of the VDs to the added disks, freeing up the disks to be removed. Then I should be able to remove the old drives from the cluster. Right?

    Hi,
    Using iSCSI with clustered Storage Spaces is not supported, so you should be aware of that. Only SAS is currently supported. See:
    Clustered Storage Spaces - http://technet.microsoft.com/en-us/library/jj822937.aspx
    "Disk bus type: The disk bus type must be SAS. Note: We recommend dual-port SAS drives for redundancy. Storage Spaces does not support iSCSI and Fibre Channel controllers."
    Yes, you can replace disks with Storage Spaces, no problem. See:
    Storage Spaces FAQ - http://social.technet.microsoft.com/wiki/contents/articles/11382.storage-spaces-frequently-asked-questions-faq.aspx#How_do_I_replace_a_physical_disk
    The physical attachment interface does not matter.
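    To check quickly whether the disks you plan to pool meet that SAS requirement, something like the following sketch (assuming the Storage cmdlets on Server 2012 R2) lists each physical disk's bus type and whether it can be pooled:

    ```python
    #!/usr/bin/env python3
    # List each physical disk's bus type and poolability, to check the SAS-only
    # requirement for clustered Storage Spaces (assumes Server 2012 R2 Storage cmdlets).
    import subprocess

    ps = ("Get-PhysicalDisk | "
          "Select-Object FriendlyName, BusType, CanPool, Size | "
          "Format-Table -AutoSize")
    subprocess.run(["powershell.exe", "-NoProfile", "-Command", ps], check=True)
    ```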

  • Command in windows to file shared file storage name?

    Hi,
    We have a number of Windows servers that are mounted to NetApp or Sun storage. We need to find a command in Windows that gives us the name of the shared storage.
    We tried wmic, but it gives us information about the C and D drives, not the name of the shared storage where they are mounted.
    Please help.
    wmic logicaldisk get size,freespace,caption

    Could you explain how your environment is configured? Are you using iSCSI? Which Sun storage systems are used in your situation?
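    In the meantime, one hedged suggestion: for drives that are mapped network shares, wmic's ProviderName column and "net use" usually show the remote path behind each drive letter. A small sketch using only those standard commands (note it won't name the array behind a SAN LUN that is presented as a local disk):

    ```python
    #!/usr/bin/env python3
    # Show where each network drive actually points (Windows).
    # DriveType=4 means "Network Drive"; ProviderName is the remote share path.
    import subprocess

    subprocess.run(
        'wmic logicaldisk where "DriveType=4" get Caption,ProviderName,Size,FreeSpace',
        shell=True, check=True,
    )
    subprocess.run("net use", shell=True, check=True)   # also lists mapped shares and their remote paths
    ```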

  • Failover cluster storage pool cannot be added

    Hi.
    Environment: Windows Server 2012 R2 with Update.
    Storage: Dell MD3600F
    I created a LUN with 5 GB of space and mapped it to both nodes of this cluster. It can be seen on both sides in Disk Management. I initialized it as a GPT-based disk without any partition.
    The New Storage Pool wizard can be completed by selecting this disk, without any error message or event log entries.
    But after that, the pool is not visible under Pools, and the LUN is gone from Disk Management. The LUN cannot be shown again even after rescanning.
    This can be reproduced many times.
    In the same environment, many LUNs work well under Storage > Disks. It just fails while acting as a pool.
    What's wrong here?
    Thanks.

    Hi EternalSnow,
    Please refer to the following article to create a clustered storage pool:
    http://blogs.msdn.com/b/clustering/archive/2012/06/02/10314262.aspx
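    If I remember right, that article also lists the prerequisites for a clustered pool (a minimum of three SAS-connected physical disks of at least 4 GB each), which may be relevant given the single 5 GB LUN here. For reference, a rough sketch of creating the pool from PowerShell instead of the wizard; the pool name is a placeholder and the wildcard subsystem name is an assumption:

    ```python
    #!/usr/bin/env python3
    # Create a clustered storage pool from PowerShell instead of the Failover Cluster
    # Manager wizard. Pool name is a placeholder; assumes Server 2012 R2 Storage cmdlets.
    import subprocess

    ps = (
        "Get-StorageSubSystem; "                                # note the clustered subsystem's name
        "$disks = Get-PhysicalDisk -CanPool $true; "
        "New-StoragePool -FriendlyName 'ClusterPool1' "
        "-StorageSubSystemFriendlyName 'Clustered*' "
        "-PhysicalDisks $disks"
    )
    subprocess.run(["powershell.exe", "-NoProfile", "-Command", ps], check=True)
    ```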
    If you need any further information, please feel free to let us know.
    Best Regards,
    Elton Ji

  • Windows 2012 R2 File Server Cluster Storage best practice

    Hi Team,
    I am designing a solution for 1700 VDI users. I will use a Microsoft Windows 2012 R2 file server cluster to host their profile data, using Group Policy for folder redirection.
    I am looking for a best practice to define the storage disk size for user profile data. I am looking to have a single disk of 30 TB to host the user profile data, a single disk which will be spread across two disk enclosures.
    Please let me know if a single disk of 30 TB can become a bottleneck for holding active user profile data.
    I have SSD writable disks in storage with FC connectivity.
    Thanks
    Ravi

    Check this TechEd session, the Windows Server 2012 VDI deployment guide (pages 8-9), and this article.
    General considerations during volume size planning:
    Consider how long it will take if you ever have to run chkdsk. Chkdsk has seen significant improvements in 2012 R2, but it will still take a long time to run against a 30 TB volume. That's downtime.
    Consider how volume size will affect your RPO, RTO, DR, and SLA. It will take a long time to back up or restore a 30 TB volume.
    Any operation on a 30 TB volume, like a snapshot, will pose performance and additional disk space challenges.
    For these reasons many IT pros choose to keep volume size under 2 TB. In your case, you can use 15 x 2 TB volumes instead of a single 30 TB volume.
    Sam Boutros, Senior Consultant, Software Logic, KOP, PA - http://superwidgets.wordpress.com
    PowerShell: Learn it before it's an emergency - http://technet.microsoft.com/en-us/scriptcenter/powershell.aspx - http://technet.microsoft.com/en-us/scriptcenter/dd793612.aspx

  • Server 2012R2 Cluster Storage Error

    In January 2014 I built 4 servers into a cluster for Hyper-V based VDI, using a SAN for central storage. I have had no issues with the running of the setup until recently, when the 4th server decided to stop one of the VM host services and VMs became inaccessible.
    When I was unable to find a solution I rebuilt the server and, after finding 50+ updates per server, ran Windows Update on them all. Ever since these 2 simple actions I have been unable to add the server back into the cluster correctly. The Validation wizard shows:
    Failure issuing call to Persistent Reservation REGISTER AND IGNORE EXISTING on Test Disk 0 from node when the disk has no existing registration. It is expected to succeed. The requested resource is in use.
    Test Disk 0 does not provide Persistent Reservations support for the mechanisms used by failover clusters. Some storage devices require specific firmware versions or settings to function properly with failover clusters. Please contact your storage administrator or storage vendor to check the configuration of the storage to allow it to function properly with failover clusters.
    The other 3 nodes are using the storage happily without issues. If I force the node into the cluster, it shows as mounted and accessible, but the moment I try to start a VM on that server it loses the mount point and reports error 2051: [DCM] failed to set mount point source path target path error 85.
    The only difference between them is the model of the server: the 3 that work are HP ProLiant DL360 G7 and the one that doesn't is an HP ProLiant DL360p G8. They have worked together previously without issues, though.
    I am at a complete loss as to what to do. Any help would be greatly appreciated.
    Thanks

    Hello the_travisty,
    The first thing to check here is that you are using a different hardware version for this node than for the other nodes. Even if at some point they could work together, this configuration does not guarantee that you will not have future issues.
    Remember that the basis of the Failover Cluster technology is that all the nodes should have the same characteristics, since they are working together as a single computer. If the "clusterized" resources are going to be moved among the nodes, each node has to work and react like any other node member of the cluster and provide all the elements (hardware/software/firmware/drivers) needed to keep the resource up and running as if it had not been moved to another computer with a different configuration.
    As for this error:
    "Please contact your storage administrator or storage vendor to check the configuration of the storage to allow it to function properly with failover clusters."
    This could point to a firmware/driver incompatibility. I think maybe there are updates missing on the server, or an update installed a different version of the system files on the nodes because of the different hardware, and that is causing this issue. You can also check with your storage vendor whether there is a new release of drivers/firmware for your storage solution that is compatible with the OS version. The same question goes to your network adapter vendor.
    Hope this info helps you reach your goal. :D
    5ALU2 !
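    One more hedged suggestion to go with the above: re-running only the storage validation tests between a known-good node and the G8 box sometimes narrows a persistent reservation failure down to one HBA/firmware combination. The node names below are placeholders, and note that storage validation can take shared disks offline, so schedule it carefully.

    ```python
    #!/usr/bin/env python3
    # Re-run only the storage validation tests between a known-good node and the new G8 node.
    # Node names are placeholders. Note: storage validation can take shared disks offline,
    # so run this in a maintenance window.
    import subprocess

    ps = 'Test-Cluster -Node "GOODNODE1","G8NODE" -Include "Storage"'
    subprocess.run(["powershell.exe", "-NoProfile", "-Command", ps], check=True)
    ```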

  • Will RAC's performance bottleneck be the shared disk storage ?

    Hi All
    I'm studying RAC and I'm concerned about RAC's I/O performance bottleneck.
    If I have 10 nodes and they use the same storage disk to hold the database, then they will do I/O to the disk simultaneously.
    Maybe we will get more latency...
    Will that be a performance problem?
    How does RAC solve this kind of problem?
    Thanks.

    J.Laurence wrote:
    I see FC can solve the problem with bandwidth (throughput).
    There are a couple of layers in the I/O subsystem for RAC.
    There is Cache Fusion, as already mentioned. Why read a data block from disk when another node has it in its buffer cache and can provide it instead (over the interconnect communication layer)?
    Then there are the actual pipes between the server nodes and the storage system. Fibre is slow and not what the latest RAC architecture (such as Exadata) uses.
    Traditionally, you pop an HBA card into the server that provides you with 2 Fibre Channel pipes to the storage switch. These usually run at 2 Gb/s and the I/O driver can load balance and fail over. So in theory it can scale to 4 Gb/s and provide redundancy should one pipe fail.
    Exadata and more "modern" RAC systems use HCA cards running InfiniBand (IB). This provides scalability of up to 40 Gb/s. They are also dual port, which means that you have 2 cables running into the storage switch.
    IB supports a protocol called RDMA (Remote Direct Memory Access). This essentially allows memory to be "shared" across the IB fabric layer, and is used to read data blocks from the storage array's buffer cache into the local Oracle RAC instance's buffer cache.
    Port-to-port latency for a properly configured IB layer running QDR (quad data rate) can be lower than 70 ns.
    And it does not stop there. You can of course add a huge memory cache in the storage array (which is essentially a server with a bunch of disks). Current x86-64 motherboard technology supports up to 512 GB of RAM.
    Exadata takes it even further: special ASM software on the storage node reconstructs data blocks on the fly to supply the RAC instance with only the relevant data. This reduces the data volume pushed from the storage node to the database node.
    So Fibre Channel in this sense is a bit dated. As is GigE.
    But what about the hard drives' read and write I/O? Not a problem, as the storage array deals with that. A RAC instance that writes a data block writes it into the storage buffer cache, where the storage array software manages that cache and will do the physical write to disk.
    Of course, it will stripe heavily and will have 24+ disk controllers available to write that data block, so do not think of I/O latency in terms of the actual speed of a single disk.
