Multiple Voting Disks Benefits?

Are there any benefits to having 3 voting disks versus 1 voting disk?
Is the sole benefit redundancy?
We are running a 4-node RAC cluster (11gR1) on Windows and currently have 1 voting disk, protected by RAID 1.
Is there any good reason this should be changed to 3 voting disks?

have 1 voting disk, protected by RAID 1
You use external voting redundancy... that's great.
You don't need to use normal voting redundancy (3 voting disks) in that case.
By the way, I hope you'll back up the voting disk after the RAC setup is complete and after every node delete + add ;)
On Windows, use "ocopy".
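For example, a minimal backup sketch on Windows (assuming the voting disk is reachable via the \\.\votedsk1 device link and that D:\backup exists; both names are illustrative):

REM Back up the voting disk to a file while the cluster is quiet (hypothetical paths)
ocopy \\.\votedsk1 D:\backup\votedsk1.bak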

Similar Messages

  • Need for multiple ASM disk groups on a SAN with RAID5??

    Hello all,
    I've successfully installed Clusterware and ASM on a 5-node system, and I'm trying to use asmca (11gR2 on RHEL5) to configure the disk groups.
    I have a SAN which was previously used for a 10g ASM RAC setup, so I'm reusing the candidate volumes that ASM has found.
    I noticed in the previous incarnation that several disk groups had been created, for example:
    ASMCMD> ls
    DATADG/
    INDEXDG/
    LOGDG1/
    LOGDG2/
    LOGDG3/
    LOGDG4/
    RECOVERYDG/
    Now....this is all on a SAN....which basically has two pools of drives set up each in a RAID5 configuration. Pool 1 contains ASM volumes named ASM1 - ASM32. Each of these logical volumes is about 65 GB.
    Pool #2...has ASM33 - ASM48 volumes....each of which is about 16GB in size.
    I used ASM33 from pool#2...by itself to contain my cluster voting disk and OCR.
    My question is: with this type of setup, would creating as many disk groups as listed above really do any good for performance? Since everything sits on logical volumes on top of a couple of RAID5 sets, would divisions at the disk group level (with external redundancy) actually achieve anything?
    I was thinking of starting with about half of the ASM1-ASM31 'disks' to create one large DATADG disk group, which would house all of the database instances' data, indexes, etc. I'd keep the remaining large candidate disks for later growth.
    I was going to use the pool of smaller disks (except the one already dedicated to cluster needs) as a decently sized RECOVERYDG, to house logs, the flashback area, etc. This pool appears to be separate from pool #1, so there are possibly some speed benefits there.
    But really, is there any need to separate the disk groups on a SAN with two pools of RAID5 logical volumes?
    If so, can someone give me some ideas why, links on this info, etc.?
    Thank you in advance,
    cayenne

    The best practice is to use 2 disk groups: one for data and the other for the flash/fast recovery area. There really is no need to have a disk group for each type of file; in fact, the more disks in a disk group (up to a point, in my experience) the better for performance and space management. However, there are times when multiple disk groups are appropriate (not saying this is one of them, just FYI), such as backup/recovery and life-cycle management. You will typically still get a benefit from double striping, i.e. having the SAN's RAID groups present multiple LUNs to ASM, and then having ASM stripe across those LUNs within a disk group. I saw this in my own testing. Start off with a minimum of 4 LUNs per disk group, and add LUNs in pairs, as this provided optimal performance (at least in my testing). You should also define a set of standard LUN sizes to present to ASM so things are consistent across your enterprise; sizing is typically based on database size. For example:
    300GB LUN: database > 10TB
    150GB LUN: database 1TB to 10 TB
    50GB LUN: database < 1TB
    As a database grows beyond a threshold, the larger LUNs are swapped in and the previous ones are swapped out. With thin provisioning it is a little different, since you only need to resize the ASM LUNs. I'd also recommend keeping at least 2 of each standard LUN size ready to go in case you need space in an emergency. Even with capacity management, you never know when something will consume space too quickly.
    ASM is all about space savings, performance, and management :-).
    Hope this helps.
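    As a concrete illustration of the two-disk-group layout described above, a minimal sketch (the disk paths are hypothetical, not from the thread; the group names follow the poster's naming):

    -- External redundancy: the SAN's RAID5 protects the disks,
    -- so ASM does no mirroring of its own.
    CREATE DISKGROUP DATADG EXTERNAL REDUNDANCY
      DISK '/dev/asm1', '/dev/asm2', '/dev/asm3', '/dev/asm4';
    CREATE DISKGROUP RECOVERYDG EXTERNAL REDUNDANCY
      DISK '/dev/asm33', '/dev/asm34', '/dev/asm35', '/dev/asm36';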

  • Why logical partition is a must for voting disk and OCR

    Hi Guys,
    I just started handling jobs for RAC installation, I have a simple question regarding the setup.
    Why does logical partition have to be used for voting disk and OCR?
    I tried partitioning the disks that were provisioned for the voting disk and OCR with primary partitions, but when OUI tries to recognize the disks, it cannot find the disk that was partitioned with a primary partition.
    Thank you,
    Adhika

    Hello Adhika,
    I found it on this doc http://download.oracle.com/docs/cd/B28359_01/install.111/b28250/storage.htm
    Be aware of the following restrictions for partitions:
    * You cannot use primary partitions for storing Oracle Clusterware files while running the OUI to install Oracle Clusterware as described in Chapter 5, "Installing Oracle Clusterware". You must create logical drives inside extended partitions for the disks to be used by Oracle Clusterware files and Oracle ASM.
    * With 32-bit Windows, you cannot create more than four primary disk partitions for each disk. One of the primary partitions can be an extended partition, which can then be subdivided into multiple logical partitions.
    * You can assign mount points only to primary partitions and logical drives.
    * You must create logical drives inside extended partitions for the disks to be used by Oracle Clusterware files and Oracle ASM.
    * Oracle recommends that you limit the number of partitions you create on a single disk to prevent disk contention. Therefore, you may prefer to use extended partitions rather than primary partitions.
    For these reasons, you might prefer to use extended partitions for storing Oracle software files and not primary partitions.
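    For example, a minimal diskpart sketch for creating the logical drives the OUI expects (the disk number and size are illustrative):

    REM From an elevated command prompt on one node:
    C:\> diskpart
    DISKPART> select disk 2
    DISKPART> create partition extended
    DISKPART> create partition logical size=512
    DISKPART> list partition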
    All the best,
    Rodrigo Mufalani
    http://www.mrdba.com.br/mufalani

  • OCFS2 , ASM and Raw Device for Voting Disk and OCR

    Dear all ,
    My manager assigned me to set up an 11g RAC, 3 nodes on OEL. He wants to put the datafiles in ASM and, at the same time, 2 OCR volumes on OCFS2 and raw devices respectively, and 2 voting disks on OCFS2 and raw devices respectively.
    This really annoys me.
    Oh all experts, could anybody give me some advice? Can all that stuff co-exist?

    Billy  Verreynne  wrote:
    MccLok wrote:
    My manager assigned me to set up an 11g RAC, 3 nodes on OEL. He wants to put the datafiles in ASM and, at the same time, 2 OCR volumes on OCFS2 and raw devices respectively, and 2 voting disks on OCFS2 and raw devices respectively.
    Not a great idea to introduce additional s/w layers between the CRS software and its OCR and voting disks. If ocfs2 is, for example, used as a shared file system for these devices (files), then the ocfs2 cluster s/w needs to be up and running, and the shared devices mounted, before CRS can successfully start. Thus any problem with the ocfs2 stack will impact CRS. Why? What does ocfs2 buy you in this regard? What are the benefits that justify the increased complexity and the dependency of CRS on another s/w layer?
    Fewer moving parts means less complexity, less stuff that needs to be configured and can go wrong.
    In fact, some users have the perception that you "pay more, so you get more".

  • Max of 15 Voting disk ?

    Hi All,
    Please find the below errors:
    ./crsctl add css votedisk +DATA
    CRS-4671: This command is not supported for ASM diskgroups.
    CRS-4000: Command Add failed, or completed with errors.
    ./crsctl add css votedisk /u01/vote.dsk
    CRS-4258: Addition and deletion of voting files are not allowed because the voting files are on ASM
    What I understood is that:
    1) It is not possible to put the voting disk in multiple disk groups as it is for OCR.
    2) The voting disk copies will be created according to the redundancy of the disk group it is in.
    Now I have a couple of questions based on this:
    1) If I create a disk group with external redundancy, does that mean that I will have only one voting disk and cannot add any more?
    2) Oracle says it is possible to create up to 15 voting disks.
    So does that mean that I will have one voting disk on each disk (15 in total), provided I have a disk group with normal or high redundancy and the disk group has 15 disks, each in its own failure group?
    Maybe I am being silly here, but if someone can advise me, please do.
    Regards
    Joe

    Hi Joe,
    Jomon Jacob wrote:
    Hi All,
    Please find the below errors:
    ./crsctl add css votedisk +DATA
    CRS-4671: This command is not supported for ASM diskgroups.
    CRS-4000: Command Add failed, or completed with errors.
    ./crsctl add css votedisk /u01/vote.dsk
    CRS-4258: Addition and deletion of voting files are not allowed because the voting files are on ASM
    When the votedisk is on an ASM diskgroup, no add option is available. The number of votedisks is determined by the diskgroup redundancy; if more copies of the votedisk are desired, you can move the votedisk to a diskgroup with higher redundancy.
    When the votedisk is on ASM, no delete option is available either; you can only replace the existing votedisk group with another ASM diskgroup.
    When the votedisk is on ASM, the option to be used is replace:
    crsctl replace votedisk +NEW_DGVOTE
    What I understood is that:
    1) It is not possible to put Voting disk in multiple diskgroups as it is for OCR.
    Yes... because the voting file is placed directly on an ASM DISK, not in a DISKGROUP, although it uses the configuration of the diskgroup (e.g. failgroups, asmdisks, and so on).
    2) The Voting disk copy will be created according to the redundancy of the Diskgroup where it is in.
    When you move (replace) the voting file to ASM, Oracle takes the configuration of the diskgroup (i.e. failgroups) and places one voting file on each ASM DISK, in different failgroups.
    Now I have a couple of questions based on this:
    1) If I create a disk group with external redundancy, does that mean that I will have only one voting disk and cannot add any more?
    Yes... with external redundancy you don't have the concept of failgroups, because you are not using ASM mirroring. It's like one failgroup, so you can have only one voting file in this diskgroup.
    2) Oracle says that it is possible to create up to 15 voting disks.
    No. 15 voting files are allowed only if you are not storing the voting files on ASM. If you are using ASM, the maximum number of voting files is 5, because Oracle takes the configuration from the diskgroup.
    Using a higher number of voting disks can be useful when you have a big cluster environment with (e.g.) 5 storage subsystems and 20 hosts in a single cluster: you must set up a voting file on each storage. But if you're using only one storage, 3 voting files are enough.
    Usage Note
    You should have at least three voting disks, unless you have a storage device, such as a disk array, that provides external redundancy. Oracle recommends that you do not use more than 5 voting disks. The maximum number of voting disks that is supported is 15.
    http://docs.oracle.com/cd/E11882_01/rac.112/e16794/crsref.htm#CHEJDHFH
    See this example:
    I configured 7 ASM disks, but Oracle used only 5 of them.
    SQL> CREATE DISKGROUP DG_VOTE HIGH REDUNDANCY
         FAILGROUP STG1 DISK 'ORCL:DG_VOTE01'
         FAILGROUP STG2 DISK 'ORCL:DG_VOTE02'
         FAILGROUP STG3 DISK 'ORCL:DG_VOTE03'
         FAILGROUP STG4 DISK 'ORCL:DG_VOTE04'
         FAILGROUP STG5 DISK 'ORCL:DG_VOTE05'
         FAILGROUP STG6 DISK 'ORCL:DG_VOTE06'
         FAILGROUP STG7 DISK 'ORCL:DG_VOTE07'
       ATTRIBUTE 'compatible.asm' = '11.2.0.0.0';
    Diskgroup created.
    SQL> ! srvctl start diskgroup -g DG_VOTE -n lnxora02,lnxora03
    $  crsctl replace votedisk +DG_VOTE
    CRS-4256: Updating the profile
    Successful addition of voting disk 427f38b47ff24f52bf1228978354f1b2.
    Successful addition of voting disk 891c4a40caed4f05bfac445b2fef2e14.
    Successful addition of voting disk 5421865636524f5abf008becb19efe0e.
    Successful addition of voting disk a803232576a44f1bbff65ab626f51c9e.
    Successful addition of voting disk 346142ea30574f93bf870a117bea1a39.
    Successful deletion of voting disk 2166953a27a14fcbbf38dae2c4049fa2.
    Successfully replaced voting disk group with +DG_VOTE.
    $ crsctl query css votedisk
    ##  STATE    File Universal Id                File Name Disk group
    1. ONLINE   427f38b47ff24f52bf1228978354f1b2 (ORCL:DG_VOTE01) [DG_VOTE]
    2. ONLINE   891c4a40caed4f05bfac445b2fef2e14 (ORCL:DG_VOTE02) [DG_VOTE]
    3. ONLINE   5421865636524f5abf008becb19efe0e (ORCL:DG_VOTE03) [DG_VOTE]
    4. ONLINE   a803232576a44f1bbff65ab626f51c9e (ORCL:DG_VOTE04) [DG_VOTE]
    5. ONLINE   346142ea30574f93bf870a117bea1a39 (ORCL:DG_VOTE05) [DG_VOTE]
    SQL >
    SET LINESIZE 150
    COL PATH FOR A30
    COL NAME FOR A10
    COL HEADER_STATUS FOR A20
    COL FAILGROUP FOR A20
    COL FAILGROUP_TYPE FOR A20
    COL VOTING_FILE FOR A20
    SELECT NAME,PATH,HEADER_STATUS,FAILGROUP, FAILGROUP_TYPE, VOTING_FILE
    FROM V$ASM_DISK
    WHERE GROUP_NUMBER = ( SELECT GROUP_NUMBER
                    FROM V$ASM_DISKGROUP
                    WHERE NAME='DG_VOTE');
    NAME       PATH                           HEADER_STATUS        FAILGROUP            FAILGROUP_TYPE       VOTING_FILE
    DG_VOTE01  ORCL:DG_VOTE01                 MEMBER               STG1                 REGULAR              Y
    DG_VOTE02  ORCL:DG_VOTE02                 MEMBER               STG2                 REGULAR              Y
    DG_VOTE03  ORCL:DG_VOTE03                 MEMBER               STG3                 REGULAR              Y
    DG_VOTE04  ORCL:DG_VOTE04                 MEMBER               STG4                 REGULAR              Y
    DG_VOTE05  ORCL:DG_VOTE05                 MEMBER               STG5                 REGULAR              Y
    DG_VOTE06  ORCL:DG_VOTE06                 MEMBER               STG6                 REGULAR              N
    DG_VOTE07  ORCL:DG_VOTE07                 MEMBER               STG7                 REGULAR              N
    Regards,
    Levi Pereira
    Edited by: Levi Pereira on Jan 5, 2012 6:01 PM

  • OCR and voting disks on ASM, problems in case of fail-over instances

    Hi everybody
    in case at your site you :
    - have an 11.2 fail-over cluster using Grid Infrastructure (CRS, OCR, voting disks),
    where you have yourself created additional CRS resources to handle single-node db instances,
    their listener, their disks and so on (which are started only on one node at a time,
    can fail over from that node and restart on another);
    - have put OCR and voting disks into an ASM diskgroup (as strongly suggested by Oracle);
    then you might have problems (as we had), because you might:
    - reach the maximum number of diskgroups handled by an ASM instance (only 63, above which you get ORA-15068);
    - experience delays (especially in case of multipath), find fake CRS resources, etc.,
    whenever you dismount disks from one node and mount them on another.
    So (if both conditions are true) you might be interested in this story;
    please keep reading on for the boring details.
    One step backward (I'll try to keep it simple).
    Oracle Grid Infrastructure is mainly used by RAC db instances,
    which means that any db you create usually has one instance started on each node,
    and all instances access read / write the same disks from each node.
    So, ASM instance on each node will mount diskgroups in Shared Mode,
    because the same diskgroups are mounted also by other ASM instances on the other nodes.
    ASM instances have a spfile parameter CLUSTER_DATABASE=true (and this parameter implies
    that every diskgroup is mounted in Shared Mode, among other things).
    In this context, it is quite obvious that Oracle strongly recommends to put OCR and voting disks
    inside ASM: this (usually called CRS_DATA) will become diskgroup number 1
    and ASM instances will mount it before CRS starts.
    Then, additional diskgroups will be added by users, for DATA, REDO, FRA etc of each RAC db,
    and will be mounted later when a RAC db instance starts on the specific node.
    In case of fail-over cluster, where instances are not RAC type and there is
    only one instance running (on one of the nodes) at any time for each db, it is different.
    All diskgroups of db instances don't need to be mounted in Shared Mode,
    because they are used by one instance only at a time
    (on the contrary, they should be mounted in Exclusive Mode).
    Yet, if you follow Oracle advice and put OCR and voting inside ASM, then:
    - at installation OUI will start ASM instance on each node with CLUSTER_DATABASE=true;
    - the first diskgroup, which contains OCR and votings, will be mounted Shared Mode;
    - all other diskgroups, used by each db instance, will be mounted Shared Mode, too,
    even if you'll take care that they'll be mounted by one ASM instance at a time.
    At our site, for our three-node cluster, this fact has two consequences.
    One consequence is that we hit the ORA-15068 limit (max 63 diskgroups) earlier than expected:
    - none of the instances on this cluster are Production (only Test, Dev, etc);
    - we planned to have usually 10 instances on each node, each of them with 3 diskgroups (DATA, REDO, FRA),
    so 30 diskgroups each node, for a total of 90 diskgroups (30 instances) on the cluster;
    - in case one node failed, surviving two should get resources of the failing node,
    in the worst case: one node with 60 diskgroups (20 instances), the other one with 30 diskgroups (10 instances)
    - in case two nodes failed, the only surviving node would not be able to mount additional diskgroups
    (because of the limit of max 63 diskgroups mounted by an ASM instance), so all the others would remain unmounted
    and their db instances stopped (they are not Production instances).
    But it didn't work: since ASM has the parameter CLUSTER_DATABASE=true, you cannot mount 90 diskgroups;
    you can mount 62 globally (once a diskgroup is mounted on one node, it is given a number between 2 and 63,
    and diskgroups mounted on other nodes cannot reuse that number).
    So as a matter of fact we can mount only 21 diskgroups (about 7 instances) on each node.
    The second consequence is that, every time our handmade CRS scripts dismount diskgroups
    from one node and mount them on another, there are delays in the range of seconds (especially with multipath).
    We also found in the CRS log that, whenever we mounted diskgroups (on one node only),
    additional fake resources of type ora*.dg were created on the fly behind the scenes,
    maybe to accommodate the fact that on the other nodes those diskgroups were left unmounted
    (once again, instances are single-node here, and not RAC type).
    That's all.
    Did anyone go into similar problems?
    We opened an SR with Oracle asking what options we have here, and we were disappointed by their answer.
    Regards
    Oscar

    Hi Klaas-Jan
    - best practices require that online redo log files are also in a separate diskgroup, in case of ASM logical corruption (we are a little bit paranoid): in case the DATA dg gets corrupted, you can restore a full backup plus archived redo logs plus online redo logs (otherwise you will stop at the latest archived log).
    So we have 3 diskgroups for each db instance: DATA, REDO, FRA.
    - in case of a fail-over cluster (active-passive), Oracle provides some templates of CRS scripts (in $CRS_HOME/crs/crs/public) that you edit and change at will; you might also create additional scripts for any additional resources you need (Oracle Agents, backup agents, file systems, monitoring tools, etc).
    About our problem, the only solution is to move OCR and voting disks out of ASM and change the pfile of all ASM instances (parameter CLUSTER_DATABASE from true to false).
    Oracle's answers were a little bit odd:
    - first they told us to use Grid Standalone (without CRS, OCR, voting at all), but we told them that we needed a fail-over solution;
    - then they told us to use RAC One Node, which actually has some better features; in case of a planned fail-over it might be able to migrate
    client sessions without causing a reconnect (for SELECTs only, not in the case of a running transaction), but we already have a few fail-over clusters and we cannot change them all.
    So we plan to move OCR and voting disks onto block devices (we think that the other solution, which needs a shared file system, would take longer).
    Thanks Marko for pointing us to the OCFS2 pros / cons.
    We asked Oracle for confirmation that this is supported; they said yes, but it is discouraged (and also, it doesn't work with OUI or ASMCA).
    Anyway, that's the simplest approach; this is a non-Prod cluster, so we'll start here, and if everything is fine, after a while we'll do it on the Prod ones too.
    - Note 605828.1, paragraph 5, Configuring non-raw multipath devices for Oracle Clusterware 11g (11.1.0, 11.2.0) on RHEL5/OL5
    - Note 428681.1: OCR / Vote disk Maintenance Operations: (ADD/REMOVE/REPLACE/MOVE)
    -"Grid Infrastructure Install on Linux", paragraph 3.1.6, Table 3-2
    Oscar

  • At the time of recovering voting disk

    os:redhat linux
    rdbms:10.2.0.1
    clusterware:10.2.0.1
    Hi...
    We are using three voting disks for our cluster. I have taken backups of these three voting disks individually, like voting_disk1_bkp, voting_disk2_bkp, voting_disk3_bkp.
    Suddenly the drive of the voting_disk_file2 file has failed.
    Now I have to recover voting_disk_file2. Which method do I need to follow?
    1) dd if=voting_disk2_bkp of=voting_disk_name
    or
    2) crsctl add css votedisk <path>
    And why should we take individual backups of voting disks?
    Are these three voting disks not identical?
    Please guide me.
    v.s.srinivas

    Also be aware that in 10g there's a bug: some of the docs and books say that you can run "crsctl add css votedisk" with the "-force" option to add a voting disk with the cluster up. This can cause corruption - see Note:390880.1.
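    Given that warning, a minimal restore sketch for the failed disk, assuming the backups were taken with dd and the cluster can be brought down (the target device name is hypothetical and the block size illustrative):

    # Stop CRS on all nodes so nothing writes to the voting disks
    crsctl stop crs
    # Restore the dd image onto the replaced device
    dd if=voting_disk2_bkp of=/dev/raw/raw2 bs=4k
    # Restart CRS
    crsctl start crs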
    Yours,
    Bob

  • After Patch 10.2.0.4, got "voting disk corrupted" error.

    Dear all,
    My setting is:
    OS:RHEL4.8 U8 x86
    DB:10.2.0.1
    CRS:10.2.0.1
    After I installed patchset 10.2.0.4 and executed $CRS_HOME/install/root102.sh, I couldn't start the clusterware anymore. After checking ocssd.log, I found some messages like this:
    [    CSSD]2010-01-07 15:53:06.519 [3086919360] >TRACE: clssscmain: local-only set to false
    [    CSSD]2010-01-07 15:53:06.562 [3086919360] >TRACE: clssnmReadNodeInfo: added node 1 (x101) to cluster
    [    CSSD]2010-01-07 15:53:06.578 [3086919360] >TRACE: clssnmReadNodeInfo: added node 2 (x102) to cluster
    [    CSSD]2010-01-07 15:53:06.580 [3086919360] >TRACE: clssnmInitNMInfo: Initialized with unique 1262850786
    [    CSSD]2010-01-07 15:53:06.586 [3086919360] >TRACE: clssNMInitialize: Initializing with OCR id (33483723)
    [    CSSD]2010-01-07 15:53:06.650 [84294560] >TRACE: clssnm_skgxninit: Compatible vendor clusterware not in use
    [    CSSD]2010-01-07 15:53:06.650 [84294560] >TRACE: clssnm_skgxnmon: skgxn init failed
    [    CSSD]2010-01-07 15:53:06.657 [3086919360] >TRACE: clssnmNMInitialize: misscount set to (60), impending reconfig threshold set to (56000)
    [    CSSD]2010-01-07 15:53:06.659 [3086919360] >TRACE: clssnmNMInitialize: Network heartbeat thresholds are: impending reconfig 30000 ms, reconfig start (misscount) 60000 ms
    [    CSSD]2010-01-07 15:53:06.669 [3086919360] >TRACE: clssnmDiskStateChange: state from 1 to 2 disk (0//ocfs/voting)
    [    CSSD]2010-01-07 15:53:06.669 [84294560] >TRACE: clssnmvDPT: spawned for disk 0 (/ocfs/voting)
    [    CSSD]2010-01-07 15:53:08.755 [84294560] >TRACE: clssnmDiskStateChange: state from 2 to 4 disk (0//ocfs/voting)
    [    CSSD]2010-01-07 15:53:08.821 [113339296] >TRACE: clssnmvKillBlockThread: spawned for disk 0 (/ocfs/voting) initial sleep interval (1000)ms
    [    CSSD]2010-01-07 15:53:08.889 [3086919360] >TRACE: clssnmFatalInit: fatal mode enabled
    [    CSSD]2010-01-07 15:53:08.999 [94784416] >TRACE: clssnmClusterListener: Spawned
    [    CSSD]2010-01-07 15:53:09.002 [94784416] >TRACE: clssnmClusterListener: Listening on (ADDRESS=(PROTOCOL=tcp)(HOST=x102-priv)(PORT=49895))
    [    CSSD]2010-01-07 15:53:09.002 [94784416] >TRACE: clssnmconnect: connecting to node(1), con(0x81bc480), flags 0x0003
    [    CSSD]2010-01-07 15:53:09.069 [3083459488] >TRACE: clssgmclientlsnr: Spawned
    [    CSSD]2010-01-07 15:53:09.072 [113339296] >ERROR: clssnmvDiskKillCheck: voting disk corrupted (0x00000000,0x00000000) (0//ocfs/voting)
    [    CSSD]2010-01-07 15:53:09.072 [113339296] >TRACE: clssnmDiskStateChange: state from 4 to 3 disk (0//ocfs/voting)
    [    CSSD]2010-01-07 15:53:09.072 [130767776] >TRACE: clssnmDiskPMT: disk offline (0//ocfs/voting)
    [    CSSD]2010-01-07 15:53:09.072 [130767776] >ERROR: clssnmDiskPMT: Aborting, 1 of 1 voting disks unavailable
    [    CSSD]2010-01-07 15:53:09.072 [130767776] >ERROR: ###################################
    [    CSSD]2010-01-07 15:53:09.072 [130767776] >ERROR: clssscExit: CSSD aborting
    [    CSSD]2010-01-07 15:53:09.072 [130767776] >ERROR: ###################################
    [    CSSD]--- DUMP GROCK STATE DB ---
    I tried to recreate my voting disk like this:
    $CRS_HOME/bin/crsctl add css votedisk /ocfs/voting1 -force
    $CRS_HOME/bin/crsctl delete css votedisk /ocfs/voting -force
    $CRS_HOME/bin/crsctl add css votedisk /ocfs/voting -force
    $CRS_HOME/bin/crsctl delete css votedisk /ocfs/voting1 -force
    Then I ran root102.sh again, but still got the same error message about "voting disk corrupted".
    I was totally confused and didn't know how to fix it.
    Please, any suggestion would be appreciated.

    Hello,
    Did you try restoring the OCR?
    http://download-west.oracle.com/docs/cd/B19306_01/rac.102/b14197/votocr.htm#i1012456
    Were the steps to apply the RAC patch followed correctly?
    See a patchset roadmap at http://mufalani.blogspot.com/2009/06/applying-10204-patch-on-rac-systems.html
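    If the OCR does turn out to need a restore, a minimal 10.2 sketch (run as root; the backup file name is whatever ocrconfig -showbackup reports, the path below is illustrative):

    # List the automatic OCR backups CRS has taken
    ocrconfig -showbackup
    # With CRS stopped on all nodes, restore a chosen backup
    crsctl stop crs
    ocrconfig -restore /u01/app/oracle/crs/cdata/crs/backup00.ocr
    crsctl start crs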
    Best Regards,
    Rodrigo Mufalani
    http://www.mrdba.com.br/mufalani

  • Root.sh hangs at formatting voting disk on OEL32 11gR2 RAC with ocfs2

    Hi,
    I am trying to bring up Oracle 11gR2 RAC on Enterprise Linux x86 (32-bit) version 5.6. I am using OCFS2 1.4 as my cluster file share. Everything went fine till root.sh, which hangs with the message "now formatting voting disk <vdsk path>".
    The logs are mentioned below.
    Checked the alert log:
    {quote}
    cssd(9506)]CRS-1601:CSSD Reconfiguration complete. Active nodes are oel32rac1 .
    2011-08-04 15:58:55.356
    [ctssd(9552)]CRS-2407:The new Cluster Time Synchronization Service reference node is host oel32rac1.
    2011-08-04 15:58:55.917
    [ctssd(9552)]CRS-2401:The Cluster Time Synchronization Service started on host oel32rac1.
    2011-08-04 15:58:56.213
    [client(9567)]CRS-1006:The OCR location /u02/storage/ocr is inaccessible. Details in /u01/app/11.2.0/grid/log/oel32rac1/client/ocrconfig_9567.log.
    2011-08-04 15:58:56.365
    [client(9567)]CRS-1001:The OCR was formatted using version 3.
    2011-08-04 15:58:59.977
    [crsd(9579)]CRS-1012:The OCR service started on node oel32rac1.
    {quote}
    crsctl.log:
    {quote}
    2011-08-04 15:59:00.246: [  CRSCTL][3046184656]crsctl_vformat: obtain cssmode 1
    2011-08-04 15:59:00.247: [  CRSCTL][3046184656]crsctl_vformat: obtain VFListSZ 0
    2011-08-04 15:59:00.258: [  CRSCTL][3046184656]crsctl_vformat: Fails to obtain backuped Lease from CSSD with error code 16
    2011-08-04 15:59:01.857: [  CRSCTL][3046184656]crsctl_vformat: to do clsscfg fmt with lease sz 0
    2011-08-04 15:59:01.910: [   SKGFD][3046184656]NOTE: No asm libraries found in the system
    2011-08-04 15:59:01.910: [    CLSF][3046184656]Allocated CLSF context
    2011-08-04 15:59:01.910: [   SKGFD][3046184656]Discovery with str:/u02/storage/vdsk:
    2011-08-04 15:59:01.910: [   SKGFD][3046184656]UFS discovery with :/u02/storage/vdsk:
    2011-08-04 15:59:01.910: [   SKGFD][3046184656]Fetching UFS disk :/u02/storage/vdsk:
    2011-08-04 15:59:01.911: [   SKGFD][3046184656]OSS discovery with :/u02/storage/vdsk:
    2011-08-04 15:59:01.911: [   SKGFD][3046184656]Handle 0xa6c19f8 from lib :UFS:: for disk :/u02/storage/vdsk:
    2011-08-04 17:10:37.522: [   SKGFD][3046184656]WARNING:io_getevents timed out 618 sec
    2011-08-04 17:10:37.526: [   SKGFD][3046184656]WARNING:io_getevents timed out 618 sec
    {quote}
    ocrconfig log:
    {quote}
    2011-08-04 15:58:56.214: [  OCRRAW][3046991552]ibctx: Failed to read the whole bootblock. Assumes invalid format.
    2011-08-04 15:58:56.214: [  OCRRAW][3046991552]proprinit:problem reading the bootblock or superbloc 22
    2011-08-04 15:58:56.214: [  OCROSD][3046991552]utread:3: Problem reading buffer 8532000 buflen 4096 retval 0 phy_offset 102400 retry 0
    2011-08-04 15:58:56.214: [  OCROSD][3046991552]utread:3: Problem reading buffer 8532000 buflen 4096 retval 0 phy_offset 102400 retry 1
    2011-08-04 15:58:56.214: [  OCROSD][3046991552]utread:3: Problem reading buffer 8532000 buflen 4096 retval 0 phy_offset 102400 retry 2
    2011-08-04 15:58:56.214: [  OCROSD][3046991552]utread:3: Problem reading buffer 8532000 buflen 4096 retval 0 phy_offset 102400 retry 3
    2011-08-04 15:58:56.214: [  OCROSD][3046991552]utread:3: Problem reading buffer 8532000 buflen 4096 retval 0 phy_offset 102400 retry 4
    2011-08-04 15:58:56.214: [  OCROSD][3046991552]utread:3: Problem reading buffer 8532000 buflen 4096 retval 0 phy_offset 102400 retry 5
    2011-08-04 15:58:56.214: [  OCRRAW][3046991552]propriogid:1_1: Failed to read the whole bootblock. Assumes invalid format.
    2011-08-04 15:58:56.365: [  OCRRAW][3046991552]iniconfig:No 92 configuration
    2011-08-04 15:58:56.365: [  OCRAPI][3046991552]a_init:6a: Backend init successful
    2011-08-04 15:58:56.390: [ OCRCONF][3046991552]Initialized DATABASE keys
    2011-08-04 15:58:56.564: [ OCRCONF][3046991552]csetskgfrblock0: output from clsmft: [clsfmt: successfully initialized file /u02/storage/ocr
    2011-08-04 15:58:56.577: [ OCRCONF][3046991552]Successfully set skgfr block 0
    2011-08-04 15:58:56.578: [ OCRCONF][3046991552]Exiting [status=success]...
    {quote}
    ocssd.log:
    {quote}
    2011-08-04 15:59:00.140: [    CSSD][2963602320]clssgmFreeRPCIndex: freeing rpc 23
    2011-08-04 15:59:00.228: [    CSSD][2996054928]clssgmExecuteClientRequest: CONFIG recvd from proc 6 (0xb35f7438)
    2011-08-04 15:59:00.228: [    CSSD][2996054928]clssgmConfig: type(1)
    2011-08-04 15:59:00.234: [    CSSD][2996054928]clssgmExecuteClientRequest: VOTEDISKQUERY recvd from proc 6 (0xb35f7438)
    2011-08-04 15:59:00.247: [    CSSD][2996054928]clssgmExecuteClientRequest: CONFIG recvd from proc 6 (0xb35f7438)
    2011-08-04 15:59:00.247: [    CSSD][2996054928]clssgmConfig: type(1)
    2011-08-04 15:59:03.039: [    CSSD][2942622608]clssnmSendingThread: sending status msg to all nodes
    2011-08-04 15:59:03.039: [    CSSD][2942622608]clssnmSendingThread: sent 4 status msgs to all nodes
    2011-08-04 15:59:07.047: [    CSSD][2942622608]clssnmSendingThread: sending status msg to all nodes
    2011-08-04 15:59:07.047: [    CSSD][2942622608]clssnmSendingThread: sent 4 status msgs to all nodes
    2011-08-04 15:59:11.057: [    CSSD][2942622608]clssnmSendingThread: sending status msg to all nodes
    2011-08-04 15:59:11.057: [    CSSD][2942622608]clssnmSendingThread: sent 4 status msgs to all nodes
    2011-08-04 15:59:16.068: [    CSSD][2942622608]clssnmSendingThread: sending status msg to all nodes
    2011-08-04 15:59:16.068: [    CSSD][2942622608]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-08-04 15:59:21.079: [    CSSD][2942622608]clssnmSendingThread: sending status msg to all nodes
    2011-08-04 15:59:21.079: [    CSSD][2942622608]clssnmSendingThread: sent 5 status msgs to all nodes
    {quote}
    Any help here is appreciated.
    Regards
    Amith R
    Edited by: Mithzz on Aug 4, 2011 4:58 AM

    Did an lsof on vdisk and it showed
    >
    COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
    crsctl.bi 9589 root 26u REG 8,17 21004288 102980 /u02/storage/vdsk
    [root@oel32rac1 ~]# ps -ef |grep crsctl
    root 9589 7583 0 15:58 pts/1 00:00:00 [crsctl.bin] <defunct>
    >
    Could this be a permission issue?
    --Amith

  • How do I define 2 disk groups for ocr and voting disks at the oracle grid infrastructure installation window

    Hello,
    It may sound too easy to someone but I need to ask it anyway.
    I am in the middle of building Oracle RAC 11.2.0.3 on Linux. I have 2 storage arrays and I created three LUNs on each, so 6 LUNs in total are visible to both servers. If I choose NORMAL as the redundancy level, is it right to choose all 6 disks for OCR_VOTE at the grid installation? Or should I choose only 3 of them and configure mirroring at a later stage?
    The reason why I am asking this is that I didn't find any place to create an ASM disk group for the OCR and voting disks in the Oracle Grid Infrastructure installation window. In fact, I would like to create two disk groups, one containing the three disks from storage 1 and the other containing the three disks from the second storage.
    I believe that you will understand the matter and help me choose the proper way.
    Thank you.

    Hi,
    You have 2 storage subsystems (H/W).
    You will need to configure a quorum ASM disk to store a voting disk,
    because if you lose half or more of all of your voting disks, nodes get evicted from the cluster, or nodes kick themselves out of the cluster.
    You must have an odd number of voting disks (i.e. 1, 3 or 5), one voting disk per ASM disk, so one storage will hold more voting disks than the other
    (e.g. with 3 voting disks: 2 voting disks stored on stg-A and 1 voting disk stored on stg-B).
    If the storage that holds the majority of the voting disks fails, the whole cluster (all nodes) goes down, no matter whether the other storage is online.
    This is what will happen:
    https://forums.oracle.com/message/9681811#9681811
    You must configure your Clusterware the same way as an extended RAC.
    Check this link:
    http://www.oracle.com/technetwork/database/clusterware/overview/grid-infra-thirdvoteonnfs-131158.pdf
    Explaining: How to store OCR, Voting disks and ASM SPFILE on ASM Diskgroup (RAC or RAC Extended) | Levi Pereira | Oracl…
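    A minimal sketch of such a layout, following the third-vote-on-NFS approach from the PDF above (disk paths and names are hypothetical):

    -- 2 voting files on the two storages plus a quorum vote on a small
    -- third location, so losing one storage still leaves a majority.
    CREATE DISKGROUP OCR_VOTE NORMAL REDUNDANCY
      FAILGROUP stga DISK '/dev/mapper/stga_lun1'
      FAILGROUP stgb DISK '/dev/mapper/stgb_lun1'
      QUORUM FAILGROUP nfsq DISK '/voting_nfs/vote_stg3'
      ATTRIBUTE 'compatible.asm' = '11.2.0.0.0';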

  • How can i backup my voting disk on windows

    I can back up the voting disk with dd on Unix, but how can I back it up on Windows?
    I will be replacing my disk storage in the Oracle RAC environment. Must I reinstall the operating system, OCR and voting disk (and then restore the database), or can I restore the OCR and voting disk and then restore the database? Do you have any other ideas?

    You can not backup from *NIX & restore onto Windoze.

  • Voting disk external redundancy

    Hey,
    question:
    I've got a database which I haven't installed myself.
    A query for the votedisk shows only one configured votedisk for my 2-node RAC cluster.
    So I presume that external redundancy was set up during installation of the clusterware.
    A few weeks ago I hit an unpublished bug, where Oracle sends a message that the voting disks are corrupted - which is a false message.
    But anyway, what happens when I hit this error in my new scenario (apart from applying the CRS bundle patch - which I can't apply...)?
    How can I make sure that a new voting disk is added with external redundancy as well (crsctl add css votedisk /dev/raw/raw#),
    or would this give back an error, because only one voting disk is configurable?

    Hi Christian,
    How can I make sure that a new voting disk is added with external redundancy as well (crsctl add css votedisk /dev/raw/raw#), or would this give back an error, because only one voting disk is configurable?
    I will assume you are using version 11.1 or earlier.
    What determines whether your voting disk setup is external redundancy or normal redundancy is the number of voting disks your environment has.
    If you have only one voting disk, you have external redundancy (the customer guarantees the recovery of the voting disk in case of failure; at least Oracle thinks so).
    If you have more than 2 voting disks (you need an odd number), you are using normal redundancy.
    Then you can add more voting disks to your environment without worrying about the previously configured external redundancy.
    External redundancy or normal redundancy is determined only by the number of voting disks you have; there is no configuration setting to change external redundancy to normal redundancy. As I have said, it is only the number of voting disks configured in your environment.
    Make sure you have a fresh backup of your voting disk.
    Warning: on UNIX/Linux the voting disk backup is performed manually with the dd command.
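    For example, a minimal dd backup sketch (the device path, output file and block size are illustrative; check the real path with crsctl query css votedisk first):

    # Identify the voting disk, then image it
    crsctl query css votedisk
    dd if=/dev/raw/raw1 of=/backup/votedisk1.bak bs=4k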
    Regards,
    Levi Pereira
    Edited by: Levi Pereira on Feb 7, 2011 12:09 PM

  • Voting disk problem

    I had a problem with my votingdisks:
    First CSS would not start. CSS log said votingdisks were missing:
    C:\oracle\product\10.2.0>crsctl query css votedisk
    0. 0 \\.\votedsk1
    1. 0
    2. 0
    located 3 votedisk(s).
    I stopped the cluster and could add my voting disks:
    Result:
    C:\oracle\product\10.2.0>crsctl query css votedisk
    0. 0 \\.\votedsk1
    1. 0
    2. 0
    3. 0 \\.\votedsk3
    4. 0 \\.\votedsk2
    located 5 votedisk(s).
    How can I get rid of the "empty" values?

    I ran into the same thing today and figured out how to get rid of them, at least on a Win 2003 SP2 R2 cluster. Mine were showing 0s for the path as well, so after a lot of trial and error I finally just tried:
    crsctl delete css votedisk 0
    And it worked... What happens is, when you have to do a -force to create extra votedisks, the cluster has to be stopped on all nodes or this will happen. We were doing a totally fresh install, so we played with it a little to determine that this is what happened. When we stopped everything before adding votedisks using -force and cleaning up the bad ones, everything went fine.
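    Put together, a sketch of the sequence that worked (run as Administrator on each node; index 0 matches the empty entry shown above, and \\.\votedsk2 stands in for a real device link; per the note above, -force applies while the cluster is down):

    REM Stop the cluster on ALL nodes before touching voting disks
    crsctl stop crs
    REM Remove the empty entry by index, re-add the real disk, verify
    crsctl delete css votedisk 0 -force
    crsctl add css votedisk \\.\votedsk2 -force
    crsctl query css votedisk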

  • How to rename voting disk name in oracle clusterware 11gr2

    Hi:
    I need to change the name of the voting disk at OS level: the original name is /dev/rhdisk20 and I need to rename it to /dev/asmocr_vote1 (unix AIX). The voting disk is located in ASM diskgroup +OCR.
    Initial voting disk was: /dev/rhdisk20 in diskgroup +OCR
    #(root) /oracle/GRID/11203/bin->./crsctl query css votedisk
    ## STATE File Universal Id File Name Disk group
    1. ONLINE a2e6bb7e57044fcabf0d97f40357da18 (/dev/rhdisk20) [OCR]
    I created a new alias disk name:
    #mknod /dev/asmocr_vote01 c 18 10
    # /dev->ls -lrt|grep "18, 10"
    brw------- 1 root system 18, 10 Aug 27 13:15 hdisk20
    crw-rw---- 1 oracle asmadmin 18, 10 Sep 6 16:57 rhdisk20 --> Old name
    crw-rw---- 1 oracle asmadmin 18, 10 Sep 6 16:59 asmocr_vote01 ---> alias to old name, the new name.
    After changing the voting disk's unix name, the cluster doesn't start; the voting disk is not found by CSSD.
    The steps to start the clusterware after changing the OS voting disk name are:
    1- stop all nodes:
    #crsctl stop crs -f (every node)
    Work on one node only (node1, +ASM1 instance):
    2- Change asm_diskstring in init+ASM1.ora:
    asm_diskstring = /dev/asm*
    3- change disk unix permissions:
    # /dev->ls -lrt|grep "18, 10"
    brw------- 1 root system 18, 10 Aug 27 13:15 hdisk20
    crw-rw---- 1 root system 18, 10 Sep 6 16:59 asmocr_vote01
    crw-rw---- 1 oracle asmadmin 18, 10 Sep 6 17:37 rhdisk20
    #(root) /dev->chown oracle:asmadmin asmocr_vote01
    #(root) /dev->chown root:system rhdisk20
    #(root) /dev->ls -lrt|grep "18, 10"
    brw------- 1 root system 18, 10 Aug 27 13:15 hdisk20
    crw-rw---- 1 oracle asmadmin 18, 10 Sep 6 16:59 asmocr_vote01 --> new name, now owned by oracle:asmadmin
    crw-rw---- 1 root system 18, 10 Sep 6 17:37 rhdisk20
    4-start node in exclusive mode:
    # (root) /oracle/GRID/11203/bin->./crsctl start crs -excl
    CRS-4123: Oracle High Availability Services has been started.
    CRS-2672: Attempting to start 'ora.mdnsd' on 'orarac3intg'
    CRS-2676: Start of 'ora.mdnsd' on 'orarac3intg' succeeded
    CRS-2672: Attempting to start 'ora.gpnpd' on 'orarac3intg'
    CRS-2676: Start of 'ora.gpnpd' on 'orarac3intg' succeeded
    CRS-2672: Attempting to start 'ora.cssdmonitor' on 'orarac3intg'
    CRS-2672: Attempting to start 'ora.gipcd' on 'orarac3intg'
    CRS-2676: Start of 'ora.cssdmonitor' on 'orarac3intg' succeeded
    CRS-2676: Start of 'ora.gipcd' on 'orarac3intg' succeeded
    CRS-2672: Attempting to start 'ora.cssd' on 'orarac3intg'
    CRS-2672: Attempting to start 'ora.diskmon' on 'orarac3intg'
    CRS-2676: Start of 'ora.diskmon' on 'orarac3intg' succeeded
    CRS-2676: Start of 'ora.cssd' on 'orarac3intg' succeeded
    CRS-2672: Attempting to start 'ora.ctssd' on 'orarac3intg'
    CRS-2672: Attempting to start 'ora.drivers.acfs' on 'orarac3intg'
    CRS-2679: Attempting to clean 'ora.cluster_interconnect.haip' on 'orarac3intg'
    CRS-2681: Clean of 'ora.cluster_interconnect.haip' on 'orarac3intg' succeeded
    CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'orarac3intg'
    CRS-2676: Start of 'ora.ctssd' on 'orarac3intg' succeeded
    CRS-2676: Start of 'ora.drivers.acfs' on 'orarac3intg' succeeded
    CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'orarac3intg' succeeded
    CRS-2672: Attempting to start 'ora.asm' on 'orarac3intg'
    CRS-2676: Start of 'ora.asm' on 'orarac3intg' succeeded
    CRS-2672: Attempting to start 'ora.crsd' on 'orarac3intg'
    CRS-2676: Start of 'ora.crsd' on 'orarac3intg' succeeded
    5-check votedisk:
    # (root) /oracle/GRID/11203/bin->./crsctl query css votedisk
    Located 0 voting disk(s).
    --> NO VOTING DISK found
    6- mount diskgroup of voting disk (+OCR in this case) in +ASM1 instance:
    SQL> ALTER DISKGROUP OCR mount;
    7- add the votedisk belonging to diskgroup +OCR:
    # (root) /oracle/GRID/11203/bin->./crsctl replace votedisk +OCR
    Successful addition of voting disk 86d8b12b1c294f5ebfa66f7f482f41ec.
    Successfully replaced voting disk group with +OCR.
    CRS-4266: Voting file(s) successfully replaced
    #(root) /oracle/GRID/11203/bin->./crsctl query css votedisk
    ## STATE File Universal Id File Name Disk group
    1. ONLINE 86d8b12b1c294f5ebfa66f7f482f41ec (/dev/asmocr_vote01) [OCR]
    Located 1 voting disk(s).
    8- stop node:
    #(root) /oracle/GRID/11203/bin->./crsctl stop crs -f
    9- start node:
    #(root) /oracle/GRID/11203/bin->./crsctl start crs
    10- check:
    # (root) /oracle/GRID/11203/bin->./crsctl query css votedisk
    ## STATE File Universal Id File Name Disk group
    1. ONLINE 86d8b12b1c294f5ebfa66f7f482f41ec (/dev/asmocr_vote01) [OCR]
    Vicente.
    HP.
    Edited by: 957649 on 07-sep-2012 13:11

    There is no facility to rename a column in Oracle 8i. This is possible from Oracle 9.2 onwards.
    For your task, one example is given below.
    Example:-
    Already existed table is ITEMS
    columns in ITEMS are ITID, ITEMNAME.
    But instead of ITID I want ITEMID.
    Solution:-
    step 1 :- create table items_dup
    as select itid itemid, itemname from items;
    step 2 :- drop table items;
    step 3 :- rename items_dup to items;
    Result:-
    ITEMS table contains columns ITEMID, ITEMNAME

  • Confusion with OCFS2 File system for OCR and Voting disk RHEL 5, Oracle11g,

    Dear all,
    I am in the process of installing Oracle 11g 3 Node RAC database
    The environment on which i have to do this implementation is as follows:
    Oracle 11g.
    Red Hat Linux 5 x86
    Oracle Clusterware
    ASM
    EMC Storage
    250 Gb of Storage drive.
    SAN
    As of now i am in the process of installing Oracle Clusterware on the 3 nodes.
    I have performed these tasks for the cluster installs.
    1. Configure Kernel Parameters
    2. Configure User Limits
    3. Modify the /etc/pam.d/login file
    4. Configure Operating System Users and Groups for Oracle Clusterware
    5. Configure Oracle Clusterware Owner Environment
    6. Install CVUQDISK rpm package
    7. Configure the Hosts file
    8. Verify the Network Setup
    9. Configure SSH on all Cluster Nodes (User Equivalence)
    10. Enable SSH on all Cluster Nodes (User Equivalence)
    11. Install Oracle Cluster File System (OCFS2)
    12. Verify the Installation of Oracle Cluster File System (OCFS2)
    13. Configure OCFS2 (/etc/ocfs2/cluster.conf)
    14. Configure the O2CB Cluster Stack for OCFS2
    BUT, after this I am a little bit confused about how to proceed further. The next steps are to format the disks and mount OCFS2, create the software directories... and so on and so forth.
    I asked my system admin to provide two partitions so that I could format them with the OCFS2 file system.
    He wrote back to me saying:
    *"Is what you want before I do it??*
    */dev/emcpowera1 is 3GB and formatted OCFS2.*
    */dev/emcpowera2 is 3GB and formatted OCFS2.*
    *Are those big enough for you? If not, I can re-size and re-format them*
    *before I mount them on the servers.*
    *the SAN is shared storage. /dev/emcpowera is one of three LUNs on*
    *the shared storage, and it's 214GB. Right now there are only two*
    *partitions on it- the ones I listed below. I can repartition the LUN any*
    *way you want it.*
    *Where do you want these mounted at:*
    */dev/emcpowera1*
    */dev/emcpowera2*
    *I was thinking if this mounting techique would work like so:*
    *emcpowera1: /u01/shared_config/OCR_config*
    *emcpowera2: /u01/shared_config/voting_disk*
    *Let me know how you'd like them mounted."*
    Please advise me on what I should convey to him so that I can ask him for exactly the right thing.
    My second question is: as we are using ASM, which I am going to configure after the clusterware installation, should I install Openfiler?
    Please refer to the environment information I provided above and make recommendations.
    As of now I am using Jeffrey Hunter's guide to install the entire setup. Do you think the install guide fits my environment?
    http://www.oracle.com/technology/pub/articles/hunter_rac11gr1_iscsi.html?rssid=rss_otn_articles
    Kind regards
    MK

    Thanks for your reply Mufalani,
    You have managed to solve half of my query. But I am still stuck on what kind of mount points I should ask the system admin to create for the OCR and voting disk. Should I go with the mount points he is mentioning?
    Let me put forth a few more questions here.
    1. Is 280 MB OK for the OCR and voting disks respectively?
    2. Should I ask the system admin to create 4 voting disk mount points and two for OCR?
    3. As mentioned by the system admin:
    /u01/shared_config/OCR_config
    /u01/shared_config/voting_disk
    Is this OK for creating the OCR and voting disks?
    4. Can I use the OCFS2 file system for formatting the disks instead of using them as RAW devices?
    5. As you mentioned that Openfiler is not needed for configuring ASM: could you provide me the links which will guide me through creating partition disks, voting disks and OCR disks? I could not locate them in the docs or elsewhere. I did find a couple, but was unable to identify a suitable one for my environment.
    Regards
    MK
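    For the formatting and mounting step discussed in this thread, a minimal OCFS2 sketch (the partitions and mount points follow the admin's mail; node slots and labels are illustrative):

    # Format once, from one node
    mkfs.ocfs2 -b 4K -C 32K -N 4 -L ocr_config /dev/emcpowera1
    mkfs.ocfs2 -b 4K -C 32K -N 4 -L voting_disk /dev/emcpowera2
    # Mount on every node; datavolume,nointr are the options documented
    # for Oracle OCR/voting files on OCFS2
    mount -t ocfs2 -o datavolume,nointr /dev/emcpowera1 /u01/shared_config/OCR_config
    mount -t ocfs2 -o datavolume,nointr /dev/emcpowera2 /u01/shared_config/voting_disk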
