Backing up Voting Disk

Does the clusterware need to be down when backing up the voting disk?

Hi,
RAC is designed for high availability, so no, the clusterware does not need to be down: the voting disk can be backed up online with the Unix dd command
dd if=/u01/u01/crs_files/voting1.dbf of=/tmp/VotingDisk_001.bkp
and the OCR can be backed up online with
ocrconfig -export /tmp/OCR_001.bkp -s online
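The Clusterware also takes automatic OCR backups roughly every 4 hours. As a quick sanity check (a sketch; output and backup paths vary per installation), you can list those backups and verify OCR integrity with:
ocrconfig -showbackup
ocrcheck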
Regards,
Rodrigo Mufalani
http://mufalani.blogspot.com

Similar Messages

  • Backing up Voting disk in 11.2

    Grid Version : 11.2
    In 10.2 and 11.1, the voting disk can be backed up using
    dd if=voting_disk_name of=backup_file_name bs=4k
    But in 11.2, the voting disk is in an ASM disk group. Is there a way to make sure that the voting disk is separately backed up?

    spiral wrote:
    Grid Version : 11.2
    In 10.2 and 11.1, the voting disk can be backed up using
    dd if=voting_disk_name of=backup_file_name bs=4k
    But in 11.2, the voting disk is in an ASM disk group. Is there a way to make sure that the voting disk is separately backed up?
    Hi,
    Backing up and restoring voting disks using DD is only supported in versions prior to Oracle Clusterware 11.2.
    With Oracle Clusterware 11.2 voting disks contain more information. Hence, one must not operate on voting disks using DD. This holds true regardless of whether or not the voting disks are stored in ASM, on a cluster file system, or on raw / block devices (which could still be the case after an upgrade).
    Since DD must no longer be used to back up the voting disks in 11.2, the voting disks are instead backed up automatically into the OCR. Since the OCR itself is backed up automatically every 4 hours, the voting disks are backed up indirectly, too. Any change to the information stored in the voting disks - e.g. due to the addition or removal of nodes in the cluster - triggers a new voting disk backup into the OCR.
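    For illustration, under 11.2 you could verify the voting disk configuration and, if the voting files were ever lost, re-create them from the data kept in the OCR (a sketch; the diskgroup name +DATA is a placeholder):
    crsctl query css votedisk
    crsctl replace votedisk +DATA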
    Regards,
    Levi Pereira

  • Multiple Voting Disks Benefits?

    Are there any benefits to having 3 voting disks versus 1 voting disk?
    Is the sole benefit redundancy?
    Running a 4-node RAC cluster (11gR1) on Windows, currently with 1 voting disk, protected by RAID 1.
    Is there any good reason this should be changed to 3 voting disks?

    have 1 voting disk, protected by RAID 1
    You use external voting... that's great.
    You don't need to use normal redundancy (3 voting disks)...
    By the way, I hope you'll back up the voting disk after the RAC setup is complete and after you delete + add nodes ;)
    On Windows, use "ocopy".
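    For example, a sketch on Windows (assuming the voting disk symbolic link is \\.\votedsk1 and the destination directory exists):
    ocopy \\.\votedsk1 o:\backup\votedsk1.bak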

  • Back Up the Voting Disk After RAC Installation

    What would be the dd command to Back Up the Voting Disk
    my voting disk is on /dev/raw/raw2

    What would be the dd command to Back Up the Voting Disk
    my voting disk is on /dev/raw/raw2
    dd if=/dev/raw/raw2 of=/home/backupuser/backup_voting_disk
    What is the command to back up the root.sh script?
    cp root.sh root.sh.bkup
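    A commonly shown variant adds a block size to speed up the copy (a sketch; the 4k block size mirrors the examples elsewhere in this thread):
    dd if=/dev/raw/raw2 of=/home/backupuser/backup_voting_disk bs=4k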
    Message was edited by:
    forbrich

  • Voting disk external redundancy

    Hey,
    question:
    I've got a database which I haven't installed myself.
    A query for the votedisk shows only one configured votedisk for my 2-node RAC cluster.
    So I presume that external redundancy was set up during installation of the clusterware.
    A few weeks ago I hit an unpublished error, where Oracle sends a message that the voting disk is corrupted - which is a false message.
    But anyway, what happens when I hit this error in my new scenario (apart from applying the CRS bundle patch - which I can't apply...)?
    How can I make sure that a new voting disk is added with external redundancy as well (crsctl add css votedisk /dev/raw/raw#),
    or would this return an error, because only one voting disk is configurable?

    Hi Christian,
    How can I make sure that a new voting disk is added with external redundancy as well (crsctl add css votedisk /dev/raw/raw#), or would this return an error, because only one voting disk is configurable?
    I will assume you are using version 11.1 or earlier.
    What determines whether your voting disk setup is external redundancy or normal redundancy is simply the number of voting disks your environment has.
    If you have only one voting disk, you have external redundancy (the customer guarantees the recovery of the voting disk in case of failure - at least Oracle assumes so).
    If you have more than 2 voting disks (you need an odd number), you are using normal redundancy.
    Then you can add more voting disks to your environment without worrying about the previously configured external redundancy.
    External redundancy or normal redundancy is determined only by the number of voting disks you have; there is no configuration setting to change external redundancy to normal redundancy. As I have said, it is only the number of voting disks configured in your environment.
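    As an illustration, growing from one to three voting disks on 10.2/11.1 could look like this sketch (device paths are placeholders; on some versions crsctl requires the -force flag and the CRS stack to be down for these operations):
    crsctl add css votedisk /dev/raw/raw3
    crsctl add css votedisk /dev/raw/raw4
    crsctl query css votedisk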
    Make sure you have a fresh backup of your voting disk.
    Warning: on UNIX/Linux the voting disk backup is performed manually with the dd command.
    Regards,
    Levi Pereira
    Edited by: Levi Pereira on Feb 7, 2011 12:09 PM

  • Confusion with OCFS2 file system for OCR and Voting disk (RHEL 5, Oracle 11g)

    Dear all,
    I am in the process of installing a 3-node Oracle 11g RAC database.
    The environment on which I have to do this implementation is as follows:
    Oracle 11g.
    Red Hat Linux 5 x86
    Oracle Clusterware
    ASM
    EMC Storage
    250 GB storage drive.
    SAN
    As of now I am in the process of installing Oracle Clusterware on the 3 nodes.
    I have performed these tasks for the cluster install:
    1. Configure Kernel Parameters
    2. Configure User Limits
    3. Modify the /etc/pam.d/login file
    4. Configure Operating System Users and Groups for Oracle Clusterware
    5. Configure Oracle Clusterware Owner Environment
    6. Install CVUQDISK rpm package
    7. Configure the Hosts file
    8. Verify the Network Setup
    9. Configure the SSH on all Cluster Nodes (User Equivalence)
    10. Enable the SSH on all Cluster Nodes (User Equivalence)
    11. Install Oracle Cluster File System (OCFS2)
    12. Verify the Installation of Oracle Cluster File System (OCFS2)
    13. Configure the OCFS2 (/etc/ocfs2/cluster.conf)
    14. Configure the O2CB Cluster Stack for OCFS2
    BUT, after this I am a little bit confused about how to proceed further. The next steps are to format the disks and mount OCFS2, create the software directories... and so on and so forth.
    I asked my system admin to provide me two partitions so that I could format them with the OCFS2 file system.
    He wrote back to me saying:
    *"Is what you want before I do it??*
    */dev/emcpowera1 is 3GB and formatted OCFS2.*
    */dev/emcpowera2 is 3GB and formatted OCFS2.*
    *Are those big enough for you? If not, I can re-size and re-format them*
    *before I mount them on the servers.*
    *the SAN is shared storage. /dev/emcpowera is one of three LUNs on*
    *the shared storage, and it's 214GB. Right now there are only two*
    *partitions on it- the ones I listed below. I can repartition the LUN any*
    *way you want it.*
    *Where do you want these mounted at:*
    */dev/emcpowera1*
    */dev/emcpowera2*
    *I was thinking this mounting technique would work like so:*
    *emcpowera1: /u01/shared_config/OCR_config*
    *emcpowera2: /u01/shared_config/voting_disk*
    *Let me know how you'd like them mounted."*
    Please advise what I should convey to him, so that I can ask him for exactly the right thing.
    My second question: as we are using ASM, which I am going to configure after the clusterware installation, should I install Openfiler?
    Please refer to the environment information I provided above and make recommendations.
    As of now I am using Jeffrey Hunter's guide to install the entire setup. Do you think that install guide fits my environment?
    http://www.oracle.com/technology/pub/articles/hunter_rac11gr1_iscsi.html?rssid=rss_otn_articles
    Kind regards
    MK

    Thanks for your reply Mufalani,
    You have managed to solve half of my query. But I am still stuck on what kind of mount points I should ask the system admin to create for the OCR and voting disk. Should I go with the mount points he is mentioning?
    Let me put forth a few more questions here.
    1. Is 280 MB each OK for the OCR and voting disks?
    2. Should I ask the system admin to create 4 mount points for voting disks and two for the OCR?
    3. As mentioned by the system admin:
    */u01/shared_config/OCR_config*
    */u01/shared_config/voting_disk*
    Is this OK for creating the OCR and voting disks?
    4. Can I use the OCFS2 file system for formatting the disks instead of using them as raw devices? (See the sketch after these questions.)
    5. As you mentioned, Openfiler is not needed for configuring ASM... Could you provide the links which will guide me to create the disk partitions, voting disks and OCR disks? I could not locate them in the docs or elsewhere. I did find a couple, but was unable to identify a suitable one for my environment.
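    For reference, formatting and mounting an OCFS2 volume for the OCR/voting files typically looks like this sketch (device, label, node-slot count and mount point are illustrative, not taken from this thread):
    mkfs.ocfs2 -b 4k -C 32k -N 4 -L shared_config /dev/emcpowera1
    mount -t ocfs2 -o datavolume,nointr /dev/emcpowera1 /u01/shared_config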
    Regards
    MK

  • Raw partition creation on RHEL5 for OCR and Voting disk

    Hi all,
    I want to map raw devices to block devices to use for installing the OCR and voting disk. As I see from Metalink note 465001.1 we can create udev rules to do that, but when I follow the steps given in that note, nothing is changed. See below:
    [root@cdr-analysis01 ~]# /sbin/scsi_id -g -s /block/sdae/sdae1
    360014380024d178600005000006c0000
    [root@cdr-analysis01 ~]# /sbin/scsi_id -g -s /block/sdaf/sdaf1
    360014380024d178600005000006f0000
    Create udev naming rule file /etc/udev/rules.d/55-oracle-naming.rules
    # Configure persistent, user-defined Oracle Clusterware device file names
    KERNEL=="/dev/sdae1", BUS=="scsi", PROGRAM=="/sbin/scsi_id", RESULT=="360014380024d178600005000006c0000", NAME="ocr1", OWNER="root", GROUP="oinstall", MODE="0640"
    KERNEL=="/dev/sdaf1", BUS=="scsi", PROGRAM=="/sbin/scsi_id", RESULT=="360014380024d178600005000006f0000", NAME="vote1", OWNER="oracle", GROUP="oinstall", MODE="0640"
    [root@cdr-analysis01 ~]# udevtest /block/sdae/sdae1
    main: looking at device '/block/sdae/sdae1' from subsystem 'block'
    udev_rules_get_name: add symlink 'disk/by-id/scsi-360014380024d178600005000006c0000-part1'
    udev_rules_get_name: add symlink 'disk/by-path/pci-0000:10:00.1-fc-0x50014380025ae47d:0x0007000000000000-part1'
    run_program: '/lib/udev/vol_id --export /dev/.tmp-65-225'
    run_program: '/lib/udev/vol_id' returned with status 4
    run_program: '/sbin/scsi_id'
    run_program: '/sbin/scsi_id' returned with status 1
    udev_rules_get_name: no node name set, will use kernel name 'sdae1'
    udev_device_event: device '/block/sdae/sdae1' already in database, validate currently present symlinks
    udev_node_add: creating device node '/dev/sdae1', major = '65', minor = '225', mode = '0640', uid = '0', gid = '6'
    udev_node_add: creating symlink '/dev/disk/by-id/scsi-360014380024d178600005000006c0000-part1' to '../../sdae1'
    udev_node_add: creating symlink '/dev/disk/by-path/pci-0000:10:00.1-fc-0x50014380025ae47d:0x0007000000000000-part1' to '../../sdae1'
    main: run: 'socket:/org/kernel/udev/monitor'
    main: run: '/lib/udev/udev_run_devd'
    main: run: 'socket:/org/freedesktop/hal/udev_event'
    main: run: '/sbin/pam_console_apply /dev/sdae1 /dev/disk/by-id/scsi-360014380024d178600005000006c0000-part1 /dev/disk/by-path/pci-0000:10:00.1-fc-0x50014380025ae47d:0x0007000000000000-part1'
    The name is not changed and the raw partition is not created.
    Could you help, please?
    Raitsarevo

    raitsarevo wrote:
    I want to map raw devices to block devices to use for installing the OCR and voting disk.
    Why not define them as raw devices (despite raw devices being deprecated in the current Linux kernel, they are still available for backwards compatibility)? They are quite easy to configure.
    Do the install and then, when done, rename the disks from block devices back to character devices - there's a Metalink note that covers how to do that. Sorry - I'm not in the office and do not have the Metalink note at hand. However, I have used this approach successfully with 3 RAC installations (RHEL4 and RHEL5) in recent months.
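    (One likely reason the rules above never fire is that KERNEL matches the kernel device name without the /dev/ prefix, i.e. KERNEL=="sdae1" - consistent with the "no node name set" line in the udevtest output.) For reference, on RHEL5 the raw bindings themselves are often created with a rule file like this sketch (device names are placeholders; see Note 465001.1 for the authoritative steps):
    # /etc/udev/rules.d/60-raw.rules
    ACTION=="add", KERNEL=="sdae1", RUN+="/bin/raw /dev/raw/raw1 %N"
    ACTION=="add", KERNEL=="sdaf1", RUN+="/bin/raw /dev/raw/raw2 %N"
    # ownership/permissions for the resulting raw nodes
    KERNEL=="raw1", OWNER="root", GROUP="oinstall", MODE="0640"
    KERNEL=="raw2", OWNER="oracle", GROUP="oinstall", MODE="0640"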

  • Cluster continues with less than half voting disks

    I wanted to try voting disk recovery. I have the voting files on a diskgroup with normal redundancy. I corrupted two of the disks containing voting files using the dd command. I was expecting the clusterware to go down, as only 1 voting disk (less than half of the total of 3) was available, but surprisingly the clusterware kept running. I even waited for quite some time, but to no avail. I did not get any messages in the alert log / ocssd log. I would be thankful if experts can give more input on this.
    Regards

    Hi, you can check the Metalink note
    OCR / Vote disk Maintenance Operations: (ADD/REMOVE/REPLACE/MOVE) [ID 428681.1]
    It says:
    For 11.2+, it is no longer required to back up the voting disk. The voting disk data is automatically backed up in OCR as part of any configuration change.
    The voting disk files are backed up automatically by Oracle Clusterware if the contents of the files have changed in the following ways:
    1. Configuration parameters, for example misscount, have been added or modified
    2. Voting disk add or delete operations have been performed
    The voting disk contents are restored from a backup automatically when a new voting disk is added or replaced.

  • Startup of Clusterware with missing voting disk

    Hello,
    in our environment we have a 2 node cluster.
    The 2 nodes and 2 SAN storages are in different rooms.
    Voting files for Clusterware are in ASM.
    Additionally we have a third voting disk on an NFS server (configured as in this description: http://www.oracle.com/technetwork/products/clusterware/overview/grid-infra-thirdvoteonnfs-131158.pdf)
    The Quorum flag is on the disk that is on NFS.
    The diskgroup is with normal redundancy.
    Clusterware keeps running when one of the VDs gets lost (e.g. storage failure).
    So far so good.
    But when I have to restart the Clusterware (e.g. reboot of a node) while the VD is still missing, the clusterware does not come up.
    I did not find an indication whether this is the planned behaviour of the Clusterware or whether I missed a detail.
    From my point of view it should be possible to start the Clusterware as long as the majority of VDs is available.
    Thanks.

    Hi,
    actually what you see is expected (especially in a stretched cluster environment, with 2 failgroups and 1 quorum failgroup).
    It has to do with how ASM handles a disk failure and does the mirroring (and the strange "issue" that you need a third failgroup for the voting disks).
    So before looking at this special case, let's look at how ASM normally treats a diskgroup:
    A diskgroup can only be mounted in normal mode if all disks of the diskgroup are online. If a disk is missing, ASM will not allow you to "normally" mount the diskgroup before the error situation is resolved. If a disk is lost whose contents can be mirrored to other disks, then ASM will be able to restore full redundancy and will allow you to mount the diskgroup. If this is not the case, ASM expects the user to tell it what to do => the administrator can issue an "alter diskgroup mount force" to tell ASM that, even though it cannot maintain the required redundancy, it should mount with disks missing. This then allows the administrator to correct the error (or replace failed disks/failgroups). While ASM has the diskgroup mounted, the loss of a failgroup will not result in a dismount of the diskgroup.
    The same holds true for the diskgroup containing the voting disks. So what you see (the cluster continues to run, but cannot restart) is pretty much the same as for a normal diskgroup: if a disk is lost and its contents do not get relocated (e.g. if the quorum failgroup fails, relocation is not possible, since there is no further failgroup to relocate the third vote to), the cluster will continue to run, but ASM will not be able to automatically remount the diskgroup in normal mode after a disk fails.
    To bring the cluster back online, manual intervention is required: start the cluster in exclusive mode:
    crsctl start crs -excl
    Then connect to ASM and do a
    alter diskgroup <dgname> mount force
    Then resolve the error (e.g. add another disk to another failgroup, so that the data can be remirrored and the failed disk can be dropped).
    After that a normal startup will be possible again.
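    Put together, a sketch of the whole recovery sequence (diskgroup, failgroup and disk names are placeholders):
    # as root, start the stack in exclusive mode on one node
    crsctl start crs -excl
    # then in the ASM instance (sqlplus / as sysasm):
    alter diskgroup DATA mount force;
    alter diskgroup DATA add failgroup FG3 disk '/dev/disk3';
    alter diskgroup DATA drop disk FAILED_DISK force;
    # finally restart the stack normally
    crsctl stop crs
    crsctl start crs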
    Regards
    Sebastian

  • RAC Voting disk recommended size?

    Hi RAC gurus
    what is the recommended voting disk size?
    I need to choose an appropriate size, so that when I back it up using the dd command it doesn't take very long.
    Am I missing something? Can I create a very big one, say 10G, and then back up only the used portion?
    Thanks for the help

    You asked if you are missing something and my initial thought is that what you missed was reading the docs.
    First of all the recommended size is clearly stated in the docs ... so I am not going to hand it to you and force you to read them. I hope everyone else does so too.
    Second of all, and most importantly, if you have any concept of what the Voting Disk is ... why on earth would you think that making a large one, or a small one, has any relevance to its use? Are you planning to build a cluster larger than 128 nodes?
    PS: What is 10g ... is this 10.1.0.2 or 10.1.0.3 or 10.1.0.4 or 10.2.0.1 or ....
    PPS: Given that you didn't read the docs ... it would be interesting to know, too, your operating system, your cache fusion interconnect solution, your shared storage solution, and how you are planning to define your VIPs. But I will leave that for the future as no doubt you will stub your toe on each if you don't get to those docs and read them with great care.
    My apology for being a bit harsh here but your chances of success, in RAC, are precisely zero if you do not understand these things. And I am really tired of people blaming Oracle and RAC for what they did to themselves.

  • Backup of OCR and voting disk

    Hello,
    A couple of quick questions related to backing up the OCR (via ocrconfig -export) and the voting disk (via dd).
    a) Can I back up the OCR and voting disk while the database is up and running?
    b) Do I need to log in as root to take the backup of the OCR and voting disk?
    Thanks,
    C.

    a) Can I back up the OCR and voting disk while the database is up and running?
    Yes. The OCR/voting disk live at the Clusterware layer; they do not really care whether the database is running or not, because the OCR is just a registry for the Clusterware and is static unless you add/remove nodes/resources in the clusterware.
    The OCR/voting disk are managed by the CRS stack (CRSCTL), hence they can be backed up at any given time; as a matter of fact, an OCR backup happens automatically every 4 hours or so (not sure of the exact frequency).
    b) Do I need to log in as root to take the backup of the OCR and voting disk?
    Yes, take the backup as root.
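    If you want the automatic backups kept in a specific location, the backup directory can also be changed as root (a sketch; the path is illustrative):
    ocrconfig -backuploc /u01/app/crs/cdata/backup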
    HTH,

  • Adding ocrmirror and voting disk on another diskgroup 11gr2 RAC

    Hi Gurus,
    Can I add an OCR mirror on another diskgroup (external redundancy) and the voting disk (normal) on another diskgroup (replace) in 11gR2 RAC while the databases are up and running?

    Hi,
    in Oracle 11gR2 the voting disks are backed up automatically into the OCR as part of any configuration change,
    and the voting disk data is automatically restored to any added voting disk.
    You can migrate voting disks from non-ASM to ASM without
    taking down the cluster.
    To add a voting disk to ASM:
    crsctl replace votedisk +diskgroup
    ocrconfig -add +datagroup2
    Oracle manages up to five redundant OCR locations.
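    Putting those commands together, a sketch of the whole operation (diskgroup names are placeholders; run as root with the stack up):
    ocrconfig -add +OCRMIRROR
    crsctl replace votedisk +VOTE
    ocrcheck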
    Hope this helps
    Zekeriya Besiroglu
    http://zekeriyabesiroglu.blogspot.com

  • 10gR1 Voting Disk

    Hello,
    is there a way to recreate a voting disk (not backed up) in 10gR1 without reinstalling CRS?

    in upper releases it can be easily recreated!
    Not sure. AFAIK, in 11gR2 the voting disks get backed up automatically, therefore we don't have to back them up manually. But I believe we can't "easily recreate" the voting devices without backups in any release (someone will correct me if I am wrong though!). In releases prior to 11gR2, "dd" is used to back up the voting devices, but in 11gR2 using dd is no longer supported.
    HTH
    Thanks
    Chandra

  • I need help with boot camp. "Back up the disk and use Disk Utility to format it as a single Mac OS Extended (Journaled) volume. Restore your information to the disk and try using Boot Camp Assistant again."

    This message appears every time I try to partition my disk:
    "Back up the disk and use Disk Utility to format it as a single Mac OS Extended (Journaled) volume. Restore your information to the disk and try using Boot Camp Assistant again."
    I verified my Macintosh HD disk in Disk Utility and then tried to repair it, but I am unable to click the Repair button.
    It says it's not available because the startup disk is selected.
    I don't know what to do or how to go about both these problems.
    Please, any suggestions?

  • OCR and voting disks on ASM, problems in case of fail-over instances

    Hi everybody
    in case at your site you :
    - have an 11.2 fail-over cluster using Grid Infrastructure (CRS, OCR, voting disks),
    where you have yourself created additional CRS resources to handle single-node db instances,
    their listener, their disks and so on (which are started only on one node at a time,
    can fail from that node and restart to another);
    - have put OCR and voting disks into an ASM diskgroup (as strongly suggested by Oracle);
    then you might have problems (as we had) because you might:
    - reach the max number of diskgroups handled by an ASM instance (only 63, above which you get ORA-15068);
    - experience delays (especially in case of multipath), find fake CRS resources, etc.
    whenever you dismount disks from one node and mount them on another;
    So (if both conditions are true) you might be interested in this story,
    then please keep reading on for the boring details.
    One step backward (I'll try to keep it simple).
    Oracle Grid Infrastructure is mainly used by RAC db instances,
    which means that any db you create usually has one instance started on each node,
    and all instances access read / write the same disks from each node.
    So, ASM instance on each node will mount diskgroups in Shared Mode,
    because the same diskgroups are mounted also by other ASM instances on the other nodes.
    ASM instances have a spfile parameter CLUSTER_DATABASE=true (and this parameter implies
    that every diskgroup is mounted in Shared Mode, among other things).
    In this context, it is quite obvious that Oracle strongly recommends to put OCR and voting disks
    inside ASM: this (usually called CRS_DATA) will become diskgroup number 1
    and ASM instances will mount it before CRS starts.
    Then, additional diskgroups will be added by users, for the DATA, REDO, FRA etc. of each RAC db,
    and will be mounted later when a RAC db instance starts on the specific node.
    In case of fail-over cluster, where instances are not RAC type and there is
    only one instance running (on one of the nodes) at any time for each db, it is different.
    The diskgroups of the db instances don't need to be mounted in shared mode,
    because they are used by only one instance at a time
    (on the contrary, they should be mounted in exclusive mode).
    Yet, if you follow Oracle advice and put OCR and voting inside ASM, then:
    - at installation OUI will start ASM instance on each node with CLUSTER_DATABASE=true;
    - the first diskgroup, which contains OCR and votings, will be mounted Shared Mode;
    - all other diskgroups, used by each db instance, will be mounted in shared mode too,
    even if you take care that they are mounted by one ASM instance at a time.
    At our site, for our three-node cluster, this fact has two consequences.
    One consequence is that we hit the ORA-15068 limit (max 63 diskgroups) earlier than expected:
    - none of the instances on this cluster are Production (only Test, Dev, etc);
    - we planned to have usually 10 instances on each node, each of them with 3 diskgroups (DATA, REDO, FRA),
    so 30 diskgroups each node, for a total of 90 diskgroups (30 instances) on the cluster;
    - in case one node failed, the surviving two should take over the resources of the failing node,
    in the worst case: one node with 60 diskgroups (20 instances), the other with 30 diskgroups (10 instances);
    - in case two nodes failed, the only surviving node would not be able to mount additional diskgroups
    (because of the limit of max 63 diskgroups mounted by an ASM instance), so all the others would remain unmounted
    and their db instances stopped (they are not Production instances).
    But it didn't work as planned: since ASM has the parameter CLUSTER_DATABASE=true, you cannot mount 90 diskgroups;
    you can mount only 62 globally (once a diskgroup is mounted on one node, it is given a number between 2 and 63,
    and diskgroups mounted on other nodes cannot reuse that number).
    So as a matter of fact we can mount only 21 diskgroups (about 7 instances) on each node.
    The second consequence is that, every time our handmade CRS scripts dismount diskgroups
    from one node and mount them on another, there are delays in the range of seconds (especially with multipath).
    Also, we found in the CRS log that, whenever we mounted diskgroups (on one node only),
    additional fake resources of type ora*.dg were created on the fly behind the scenes,
    maybe to accommodate the fact that on the other nodes those diskgroups were left unmounted
    (once again, the instances here are single-node, not RAC type).
    That's all.
    Did anyone go into similar problems?
    We opened an SR with Oracle asking what options we have here, and we are disappointed by their answer.
    Regards
    Oscar

    Hi Klaas-Jan
    - best practice requires that online redo log files also be in a separate diskgroup, in case of ASM logical corruption (we are a little bit paranoid): in case the DATA dg gets corrupted, you can restore the full backup plus archived redo logs plus online redo logs (otherwise you will stop at the latest archived log).
    So we have 3 diskgroups for each db instance: DATA, REDO, FRA.
    - in case of a fail-over cluster (active-passive), Oracle provides some templates of CRS scripts (in $CRS_HOME/crs/crs/public) that you edit and change at will; you might also create additional scripts for additional resources you need (Oracle agents, backup agents, file systems, monitoring tools, etc.)
    About our problem, the only solution is to move the OCR and voting disks out of ASM and change the pfile of all ASM instances (parameter CLUSTER_DATABASE from true to false).
    Oracle's answers were a little bit odd:
    - first they told us to use Grid Standalone (without CRS, OCR, voting at all), but we told them that we needed a fail-over solution;
    - then they told us to use RAC One Node, which actually has some better features; in case of a planned fail-over it might be able to migrate
    client sessions without causing a reconnect (for SELECTs only, not in case of a running transaction), but we already have a few fail-over clusters, and we cannot change them all.
    So we plan to move OCR and voting disks into block devices (we think that the other solution, which needs a Shared File System, will take longer).
    Thanks Marko for pointing us to OCFS2 pros / cons.
    We asked Oracle for confirmation that it is supported; they said yes, but it is discouraged (and also doesn't work with OUI or ASMCA).
    Anyway, that's the simplest approach; this is a non-Prod cluster, we'll start here, and if everything is fine, after a while we'll do it on the Prod ones too.
    - Note 605828.1, paragraph 5, Configuring non-raw multipath devices for Oracle Clusterware 11g (11.1.0, 11.2.0) on RHEL5/OL5
    - Note 428681.1: OCR / Vote disk Maintenance Operations: (ADD/REMOVE/REPLACE/MOVE)
    -"Grid Infrastructure Install on Linux", paragraph 3.1.6, Table 3-2
    Oscar
