OCR VOTE RAW Device mappers

RHEL4 U5 x64.
2 LUNs (OCR and voting), each LUN with 4 paths.
I have configured device-mapper, assigned alias names to the LUNs, mapped them to raw devices, and modified udev with the rules and permissions.
During the Clusterware installation, when it asks to run root.sh, it fails with the error below.
# sh /u01/crs/10gr2/root.sh
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up NS directories
Failed to upgrade Oracle Cluster Registry configuration
I have followed the document and applied patch 4679769, but it was no use; still the same error.
What is the issue? What am I missing?

Please check the permissions on your files (OCR, voting); the "oracle" OS user must be able to read them.
Example:
$ cat /etc/sysconfig/rawdevices
#Oracle OCR File
/dev/raw/raw1 /dev/sda2
#Oracle Voting File
/dev/raw/raw2 /dev/sda3
I have mapped OCR at /dev/raw/raw1 and Voting /dev/raw/ra
and then
$ ls -l /dev/raw/raw1
crw-rw---- 1 root oinstall 162, 1 Jan 13 12:44 /dev/raw/raw1
$ ls -l /dev/raw/raw2
crw-rw---- 1 oracle oinstall 162, 2 Jan 13 12:44 /dev/raw/raw2
Please post your file permissions. If oracle can not ... (use "chown" to change, as the root user):
$ chown root:oinstall /dev/raw/raw1
If everything is OK with the permissions, please post more about the error ;)
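To make "oracle can read it" concrete, here is a minimal probe script (a sketch; the `check_readable` helper and the /dev/raw paths are illustrative, and on a real cluster you would run the probe as the oracle user, e.g. via `su - oracle -c ...`):

```shell
#!/bin/sh
# Sketch: probe whether a device (or file) is readable by reading its
# first 512 bytes. A failure here usually explains the root.sh
# "Failed to upgrade Oracle Cluster Registry" error.
check_readable() {
    if dd if="$1" of=/dev/null bs=512 count=1 2>/dev/null; then
        echo "$1: readable"
    else
        echo "$1: NOT readable"
    fi
}

# Example invocations (device paths are assumptions from this thread):
check_readable /dev/raw/raw1
check_readable /dev/raw/raw2
```

If a device shows as NOT readable for the oracle user, fix the ownership/mode with chown/chmod (or via the udev rules) and re-run root.sh.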

Similar Messages

  • Create raw devices for OCR & Voting Disk for Oracle 10g R2 RAC (Linux 64-bit)

    Hi Friends,
    Please let me know the document for creating raw disks for OCR and voting disks (e.g. required RPMs, the process to create raw disks) in Oracle 10g R2 on Linux (64-bit).
    Regards,
    DB

    http://docs.oracle.com/cd/B19306_01/install.102/b14203/storage.htm#BABFFBBA
    and
    Configuring raw devices (singlepath) for Oracle Clusterware 10g Release 2 (10.2.0) on RHEL5/OEL5 [ID 465001.1]
    Configuring raw devices (multipath) for Oracle Clusterware 10g Release 2 (10.2.0) on RHEL5/OEL5 [ID 564580.1]

  • Root.sh failed: Raw devices for OEL 5.3 and 10.2.0.4 RAC

    Hi
    We have OEL 5.3 and Oracle RAC 10.2.0.4
    While installing clusterware and running root.sh we are getting the error
    Failed to upgrade the cluster registry.
    Further details from ocrconfig.log are below:
    ocrconfig_20527.log
    Oracle Database 10g CRS Release 10.2.0.1.0 Production Copyright 1996, 2005 Oracle. All rights reserved.
    2009-08-04 04:38:41.540: [ OCRCONF][286303376]ocrconfig starts...
    2009-08-04 04:38:41.540: [ OCRCONF][286303376]Upgrading OCR data
    2009-08-04 04:38:41.615: [  OCRRAW][286303376]propriogid:1: INVALID FORMAT
    2009-08-04 04:38:41.616: [  OCRRAW][286303376]ibctx:1:ERROR: INVALID FORMAT
    2009-08-04 04:38:41.616: [  OCRRAW][286303376]proprinit:problem reading the bootblock or superbloc 22
    2009-08-04 04:38:41.616: [ default][286303376]a_init:7!: Backend init unsuccessful : [22]
    2009-08-04 04:38:41.616: [ OCRCONF][286303376]Exporting OCR data to [OCRUPGRADEFILE]
    2009-08-04 04:38:41.616: [  OCRAPI][286303376]a_init:7!: Backend init unsuccessful : [33]
    2009-08-04 04:38:41.616: [ OCRCONF][286303376]There was no previous version of OCR. error:[PROC-33: Oracle Cluster Registry is not configured]
    2009-08-04 04:38:41.619: [  OCRRAW][286303376]propriogid:1: INVALID FORMAT
    2009-08-04 04:38:41.619: [  OCRRAW][286303376]ibctx:1:ERROR: INVALID FORMAT
    2009-08-04 04:38:41.619: [  OCRRAW][286303376]proprinit:problem reading the bootblock or superbloc 22
    2009-08-04 04:38:41.619: [ default][286303376]a_init:7!: Backend init unsuccessful : [22]
    2009-08-04 04:38:41.623: [  OCRRAW][286303376]propriogid:1: INVALID FORMAT
    2009-08-04 04:38:41.623: [  OCRRAW][286303376]ibctx:1:ERROR: INVALID FORMAT
    2009-08-04 04:38:41.623: [  OCRRAW][286303376]proprinit:problem reading the bootblock or superbloc 22
    2009-08-04 04:38:41.626: [  OCRRAW][286303376]propriogid:1: INVALID FORMAT
    2009-08-04 04:38:41.641: [  OCRRAW][286303376]propriowv: Vote information on disk 0 [dev/raw/raw1] is adjusted from [0/0] to [2/2]
    2009-08-04 04:38:41.646: [  OCRRAW][286303376]propriniconfig:No 92 configuration
    2009-08-04 04:38:41.646: [  OCRAPI][286303376]a_init:6a: Backend init successful
    2009-08-04 04:38:41.663: [ OCRCONF][286303376]Initialized DATABASE keys in OCR
    2009-08-04 04:38:41.677: [ OCRCONF][286303376]csetskgfrblock0: clsfmt returned with error [4].
    2009-08-04 04:38:41.677: [ OCRCONF][286303376]Failure in setting block0 [-1]
    2009-08-04 04:38:41.677: [ OCRCONF][286303376]OCR block 0 is not set !
    2009-08-04 04:38:41.678: [ OCRCONF][286303376]Exiting [status=failed]...
    Any solution please?
    Supriya Sh

    There are a couple of errors that don't make the Clusterware look healthy:
    There was no previous version of OCR. error:PROC-33: Oracle Cluster Registry is not configured
    proprinit:problem reading the bootblock or superbloc 22
    This could mean either your cluster registry file is corrupt or you don't have access to the raw devices. Check that access to the shared-storage raw devices is valid. If it is not, then make sure you have all your backup files ready.
    ~ Madrid
    http://hrivera99.blogspot.com
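A quick way to tell the two cases apart is to read the device's first block directly. A minimal sketch (the `dump_block0` helper name and the device path are assumptions): a permission error points to an access problem, while all-zero output suggests the device was never formatted for OCR.

```shell
#!/bin/sh
# Sketch: hex-dump the first 512 bytes of a device or file.
# dd failing -> access problem; all-zero dump -> no OCR header present.
dump_block0() {
    dd if="$1" bs=512 count=1 2>/dev/null | od -c | head -5
}

# Example (device path is an assumption from this thread):
# dump_block0 /dev/raw/raw1
```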

  • How to check space on asm and raw device?

    Hi All,
    I want to know how to check the space on an ASM device.
    We are using an ASM device which is on a raw partition.
    Our archive gets full; I need to check the space on the ASM archive partition.
    How can I check the archive destination space?
    We are using raw device partitions for CRS and the voting device.
    How can we check the space on a raw device, and how can we know which devices are in use and which are not?
    Thanks in advance

    4 - 5 raw partitions...
    What Oracle version?
    Keeping OCR + voting on raw devices is better than OCFS2 if you are on 10g through 11gR1.
    If you are on 11gR2, keep OCR + voting in an ASM disk group.
    You can check which raw device holds the OCR:
    $ ocrcheck
    and the voting disk:
    $ crsctl query css votedisk
    On 10g, if you use raw devices for OCR + voting disk, the recommendation is 1 file = 250 MB;
    on 11g the recommendation is 1 file = 512 MB.
    Good luck
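To answer the "how can we check the space on a raw device" part: on Linux, the size of the underlying block device can be read directly. A minimal sketch (the `size_mb` helper is illustrative; `blockdev` needs root and a real block device, so a `stat` fallback is included that also works on plain files):

```shell
#!/bin/sh
# Sketch: print the size in MB of a block device or file (Linux).
size_mb() {
    # blockdev reports the true device size; stat is the fallback for files
    bytes=$(blockdev --getsize64 "$1" 2>/dev/null || stat -c %s "$1")
    echo $(( bytes / 1024 / 1024 ))
}

# Example (device name is an assumption):
# size_mb /dev/sdb1
```

For example, this makes it easy to confirm a candidate OCR partition meets the 250 MB (10g) or 512 MB (11g) recommendation above.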

  • 10g ASM on Logical Volumes vs. Raw devices and SAN Virtualization

    We are looking at setting up our standards for Oracle 10g non-rac systems. We are looking at the value of Oracle ASM in our environment.
    As per the official Oracle documentation, raw devices are preferred to using Logical Volumes when using ASM.
    From here: http://download.oracle.com/docs/cd/B19306_01/server.102/b15658/appa_aix.htm#sthref723
    "Note: Do not add logical volumes to Automatic Storage Management disk groups. Automatic Storage Management works best when you add raw disk devices to disk groups. If you are using Automatic Storage Management, then do not use LVM for striping. Automatic Storage Management implements striping and mirroring."
    Also, as per Metalink note 452924.1:
    "10) Avoid using a Logical Volume Manager (LVM) because an LVM would be redundant."
    The issue is: if we use raw disk devices presented to ASM, the disks don't show up as used in the unix/AIX system tools (i.e. smit, lspv, etc.). Hence, when looking for raw devices on the system to add to filesystems/volume groups/etc., it's highly possible that a UNIX admin will grab a raw device that is already in use by Oracle ASM.
    Additionally, we are using a an IBM DS8300 SAN with IBM SAN Volume Controller (SVC) in front of it. Hence, we already have storage virtualization and I/O balancing at the SAN/hardware level.
    I'm looking for a little clarification on the following questions, as my understanding of the responses seems to conflict:
    QUESTION #1: Can anyone clarify/provide additional detail as to why Logical volumes are not preferred when using Oracle ASM? Does the argument still hold in a SAN Virtualized environment?
    QUESTION #2: Does virtualization at the software level (ASM) make sense in our environment? As we already have I/O balancing provided at the hardware level via our SVC, what do we gain by adding yet another level of I/O balancing at the ASM level? Or, as in the arguments the Oracle documentation makes against using LVM, is this unnecessary redundant striping (double-striped, or in our case triple-striped/plaid)?
    QUESTION #3: So does SAN virtualization conflict with or complement the virtualization provided by ASM?

    After more research/discussions/SR's, I've come to the following conclusion.
    Basically, in an intelligent storage environment (i.e. SVC), you're not getting 100% bang for the buck by using ASM, which is the cat's meow in a commodity-hardware/unintelligent-storage environment.
    Using ASM in an SVC environment potentially wastes CPU cycles having ASM balance I/O that is already balanced on the backend (sure, if you shuffle a deck of cards that is already shuffled you're not doing any harm, but if it's already shuffled, then why are you shuffling it again?).
    That being said, there may still be some value in using ASM from the standpoint of storage management for multiple instances on a server. For example, one could better minimize space wastage by sharing a "pool" of storage between multiple instances, rather than having to manage space on an instance-by-instance (or filesystem-by-filesystem) level.
    Also, in the case of an unfriendly OS where one is unable to dynamically grow a filesystem (i.e. a database outage is required), there would be a definite benefit from ASM in being able to dynamically allocate disks to the "pool". Of course, with most higher-end systems, dynamic filesystem growth is pretty much a given.
    In the case of RAC, regardless of the backend, ASM with raw is a no-brainer.
    In the case of a standalone instance, it's a judgement call. My vote in the case of intelligent storage where one could dynamically grow filesystems, would be to keep ASM out of the picture.
    Your vote may be different....just make sure you're putting in a solution to a problem and not a solution that's looking for a problem(s).
    And there's the whole culture of IT thing as well (i.e. do your storage guys know what you're doing and vice versa).....which can destroy any technological solution, regardless of how great it is.

  • 10.1.0.3 on RH4(Centos) + Raw devices + ASM + RAC

    Hi,
    I'm just wondering if anyone here has tried installing a single-node RAC with 2 instances using ASM on Oracle 10g (10.1.0.3)?
    I am using RH4 (CentOS 4.1), Oracle 10.1.0.3 + CRS 10.1.0.3 on fake raw devices in Linux.
    I have been able to install the CRS and Oracle database software successfully.
    All CRS services and listener are up and running.
    crs_stat command return successful results.
    srvctl status nodeapps -n <node_name> also returns successful results.
    However, when I attempt to startup the ASM instance, I keep encountering these errors:
    ORA-00603:ORACLE server session terminated by fatal error
    ORA-27504:IPC error creating OSD context
    The ORA-00603 error message will appear immediately in the SQLPLUS prompt when I try to startup ASM instance.
    Both errors will be logged in alert.log and trace files, no other useful error messages besides these errors.
    When I attempt to create a normal database instance using datafiles on the same machine, these errors appear again!
    So, I figure it's got nothing to do with the raw devices, since my OCR location and voting devices are using the same raw devices on Linux.
    Has anyone done the same thing as me and faced similar problems? Please help.
    Cheers

    You can use DBCA to create the ASM instance and get rid of the manual copy business.
    In Release 1, you can invoke DBCA, choose to create a database instance, and walk through until you reach the storage clause; choose ASM, and DBCA will create the ASM instance on both nodes.
    Ensure that you choose the cluster install option and select all the nodes at the initial stages.

  • Configuring Storage for Clusterware on Raw devices

    My proposed system is:
    - Sun Solaris 10
    - Oracle 10G RAC
    - Oracle Clusterware
    - Raw devices for Clusterware files
    - ASM for database files
    Assume I have appropriately sized LUNs assigned on my SAN for OCR and voting disks. My question is, how do I configure the raw devices required for these Clusterware files?
    I have read the Oracle RAC/Clusterware installation guide (Sun Solaris, SPARC), which goes as far as identifying the requirements but does not give the procedure. If I wish to use raw devices for ASM, it gives a procedure for creating partitions on the LUNs. Is anything like this required for the Clusterware files (OCR and voting disks)?

    As far as I know, Solaris follows the traditional raw device definition, which basically says that a raw device is a disk or a partition on a disk that does not contain a file system (yet). Given your question, this would mean that any device that you can find that does not contain a file system would be a raw device.
    However, you need to find the right link to this device in order to use it as a raw or character device. Otherwise, the device would be used as a block device, which means it will be buffered.
    On Solaris, those character devices should be listed under /dev/rdsk, while the block device would be listed under /dev/dsk. (Please, refer to: http://www.idevelopment.info/data/Unix/Solaris/SOLARIS_UnderstandingDiskDeviceFiles.shtml)
    Therefore, you have two options in your case. You can create LUNs on your storage that are just big enough to host your voting disks and OCRs, OR you can create a bigger LUN and put an OCR and a voting disk on it, AFTER you have sliced the disk (LUN) using the format command.
    There is only one thing to remember: if you follow Oracle's recommendation to use multiple voting disks and mirrored OCRs, it would obviously be better that those devices do not all share the same LUN (for availability reasons).
    Last but not least, you may want to have a look at the following forum for a similar discussion on using RAW devices under Solaris: http://forum.java.sun.com/thread.jspa?threadID=5073358&messageID=9267900
    Hope that helps. Thanks.

  • Raw devices problem

    Hello, I built a test setup on VMware ESX for Oracle RAC: two CentOS 5.10 machines with shared disks. I'm using raw devices for the OCR, voting disks, and data, without the ASMLib library.
    After installing Clusterware and Oracle (10gR2), everything was OK until now: after a reboot of one node, the shared disks no longer exist. For every raw-disk-related command I get an error:
    #raw -qa
    Cannot open raw device '/dev/rawctl' (No such file or directory)
    #service rawdevices restart
    Assigning raw devices:
    /dev/raw/raw1 --> /dev/sdb1
    Cannot open raw device '/dev/rawctl' (No such file or directory)
    I didn't find anything related to this in /var/log. I would be glad if someone could explain how to fix it.

    Yes, but keep in mind that the device tree is not only rebuilt after a system restart; the order of devices can also be different. For instance, /dev/sdc1 may not be the same after a system restart if you attach or remove another device or plug in any USB. That's why udev should be configured to use the unique device identifier to map storage devices that require persistent device naming.
    For instance, you could use the following entry to map the current /dev/sdc1 to /dev/vote1:
    KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id", RESULT=="361a98001686f6959614a453133524171", NAME="vote1", OWNER="oracle", GROUP="oinstall", MODE="0640"
    Using raw devices is no longer necessary; Oracle uses the O_DIRECT flag when opening a device, and it does not matter whether it is a whole device or a partition. But if you really want to map to raw device names, you could use the following:
    ACTION=="add", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id", RESULT=="361a98001686f6959614a453133524171", RUN+="/bin/raw /dev/raw/raw1 %N"
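The two rule styles above can be combined into one rules file. A sketch, assuming RHEL5-era udev syntax and reusing the example WWID from the rules above (your WWIDs will differ; obtain them with /sbin/scsi_id):

```
# /etc/udev/rules.d/63-oracle-raw.rules  (sketch; the WWID is the example above)
# Bind the LUN with this stable WWID to /dev/raw/raw1, regardless of its sdX name
ACTION=="add", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id", RESULT=="361a98001686f6959614a453133524171", RUN+="/bin/raw /dev/raw/raw1 %N"
# Ownership and mode of the raw device nodes themselves
KERNEL=="raw1", OWNER="root", GROUP="oinstall", MODE="0640"
KERNEL=="raw2", OWNER="oracle", GROUP="oinstall", MODE="0640"
```

After editing, reload the rules (on RHEL5, `start_udev` or `udevcontrol reload_rules`) and verify the bindings with `raw -qa` and `ls -l /dev/raw/`.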

  • Ocr/voting RAW Disk sequence number change

    Dear Gurus,
    We are planning to upgrade the IBM AIX 5.3 OS technology level, and we are also changing the device driver from RDAC to SDDPCM.
    If we do this, the following changes may occur:
    According to IBM, there is a possibility that the rhdisk numbers and the major and minor numbers change, e.g. /dev/rhdisk0 to /dev/rhdisk1.
    Will these rhdisk number and major/minor number changes cause any issues for the Oracle RAC database, particularly for the OCR and voting disks?
    For example, our OCR location is currently Device/File Name: /dev/rhdisk23; this may change to /dev/rhdisk<25>.
    Please help me.
    Regards,
    Vamsi...

    Dear Gurus,
    From the OCR and voting point of view:
    We currently have the OCR location:
    /dev/rhdisk23 -------------> might change to /dev/rhdisk29
    We currently have the voting location:
    /dev/rhdisk24 --------------> might change to /dev/rhdisk30
    In this case, how will Oracle trace the new OCR/voting raw disk path? How do we make Oracle understand that the OCR/voting location is now "/dev/rhdisk29, /dev/rhdisk30"? When the cluster starts, it will look in /dev/rhdisk23 for the OCR and /dev/rhdisk24 for voting; since they are not there, it will fail.
    Without the OCR/voting disks, the cluster will not start.
    The OCR/voting disks are safe in "/dev/rhdisk29, /dev/rhdisk30".
    Even if we take a backup of the OCR/voting disks, there should be no need to import it into "/dev/rhdisk29, /dev/rhdisk30", because the OCR/voting content is intact; only the disk numbers change.
    In this scenario
    What are the exact commands and steps we need to follow? Please help me.
    Regards,
    Vamsi....
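For the record, one possible approach in 10gR2 (a sketch based on the 10.2 command-line syntax; verify against your exact version and with Oracle Support before running anything) is to re-point the OCR and voting disk to the new device names while CRS is stopped on all nodes:

```
# Run as root with CRS down on ALL nodes; the rhdisk names are the
# hypothetical new/old paths from this post.
ocrconfig -repair ocr /dev/rhdisk29              # point the OCR at its new device
crsctl add css votedisk /dev/rhdisk30 -force     # register the voting disk's new path
crsctl delete css votedisk /dev/rhdisk24 -force  # remove the stale path
```

The -force flag is needed because the cluster stack is down; as the post says, the contents on disk are untouched, and only the paths recorded by Clusterware change.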

  • SPARC 64bit Clusterware 10.2.0.4 - shared raw device

    Dear group,
    I have been asked to install RAC 10.2.0.4 SE on Solaris 10 (SPARC). I am not a Solaris admin, although I have done many RAC installations on RHEL4/5.
    Does anyone have suggestions/pointers to documentation (apart from the docs on OTN, which I already consulted) on how to place the OCR and voting disk on shared raw devices? The setup is unlikely to use any multipathing but will rely on a low-end SAN for shared storage. I am especially interested in the proper use of the "format" tool (I know, for instance, that I need to start from cylinder 1 for ASM slices).
    So far, all I found were documents explaining SF RAC, Sun Cluster, etc., which I either don't have at my disposal or can't use due to license restrictions.
    On a comparable single-path Linux setup, we'd simply use fdisk to create partitions on the shared disk(s), which are then made available as raw devices thanks to udev.
    Thanks and regards,
    Martin

    Hi Martin,
    Take a look at this doc: http://download.oracle.com/docs/cd/B19306_01/install.102/b14205/storage.htm#sthref675
    Regards,
    Rodrigo Mufalani
    http://mufalani.blogspot.com

  • Oracle rac raw device as shared storage

    Hi,
    I'm new to Oracle RAC,
    and I wish to install 11g R1 RAC on my laptop with Linux 4 as the platform (on VMware).
    For that I prepared 4 partitions (on node1):
    /dev/sdb1 - for OCR
    /dev/sdb2 - for voting disk
    /dev/sdb3 - for ASM disk group
    /dev/sdb5 - for ASM disk group
    Assuming external redundancy for the OCR and voting disk, I kept only one disk each,
    and I configured the following in /etc/sysconfig/rawdevices:
    /dev/raw/raw1 /dev/sdb1 -- ocr
    /dev/raw/raw2 /dev/sdb2 -- voting disk
    /dev/raw/raw3 /dev/sdb3 -- asm disk group
    /dev/raw/raw4 /dev/sdb5 -- asm disk group
    My question is: how can node2 see these raw devices as shared storage?
    thanks for any support

    Hi, thanks for your suggestion.
    This may be OK for VMware, but what about a non-VMware environment?
    How can I make a raw device act as shared storage?
    One more thing: all the docs that I followed on the net configured node1's partitions as shared storage.
    Please help me in this regard.
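Outside VMware, "shared storage" means a LUN presented to every node by a SAN, iSCSI, or NFS target; each node then configures identical raw bindings in /etc/sysconfig/rawdevices against that same LUN. A quick sanity check that two nodes really see the same disk (a sketch for RHEL4/5-era scsi_id, whose flags and sysfs-path style vary by release; the device name is an example):

```
# Run on each node and compare the output: the WWID strings must match
# for a truly shared LUN, even if the /dev/sdX names differ per node.
/sbin/scsi_id -g -u -s /block/sdb
```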

  • Deleting raw device content

    Hello,
    After one installation of the cluster software (with problems) I am trying to install one more time, but the last installation left some data on my raw devices (for the OCR and voting disk I use direct raw devices without OCFS; for data files, ASM on raw devices).
    How can I really delete the content of the raw devices? When I delete the partitions and create new ones, the content is not deleted.
    Thanx,
    Jacek

    I don't know your OS, this is for Linux:
    dd if=/dev/zero of=<raw_device>
    This command writes binary 0's to the destination.
    Be careful when specifying the 'of' argument; this command is also used to wipe a whole hard disk (when you plan to sell it).
    Werner
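Zeroing a whole LUN can take a long time. A common shortcut is to clear only the start of the device, where Oracle keeps the OCR/voting/ASM metadata; a sketch (the `wipe_headers` helper and the 25 MB figure are assumptions, chosen conservatively):

```shell
#!/bin/sh
# Sketch: zero the first 25 MB of a device to wipe Oracle's metadata headers.
# Triple-check the target before running -- this is destructive!
wipe_headers() {
    dd if=/dev/zero of="$1" bs=1M count=25 conv=notrunc 2>/dev/null
}

# Example (device path is an assumption):
# wipe_headers /dev/raw/raw1
```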

  • Raw device support in 11gr2

    We are currently running Oracle 10.2.0.4 RAC using raw devices on the HP-UX Itanium 64-bit platform. We want to upgrade to 11.2.0.3, and I want to know what parts of our database environment will still be able to use raw devices?
    Can my datafiles remain on raw devices, or will we need to somehow convert to using ASM? What about the OCR and voting device? Can they remain on raw devices, or do we need to configure ASM for the OCR and voting disk?
    Thanks.

    923395 wrote:
    We are currently running Oracle 10.2.0.4 RAC using raw devices on the HP-UX Itanium 64-bit platform. We want to upgrade to 11.2.0.3, and I want to know what parts of our database environment will still be able to use raw devices?
    Can my datafiles remain on raw devices, or will we need to somehow convert to using ASM? What about the OCR and voting device? Can they remain on raw devices, or do we need to configure ASM for the OCR and voting disk?
    Thanks.
    When all else fails, Read The Fine Manual:
    http://docs.oracle.com/cd/E11882_01/install.112/e24169/storage.htm#sthref558

  • Shared raw devices not discover during Oracle11g r2 RAC/Grid Installations

    Dear Gurus
    Our platform: Red Hat Enterprise Linux 5.3, 64-bit.
    We are installing Oracle 11g R2 Grid (Clusterware + ASM) and want to use ASM for the OCR, voting, DATA, and FLASH storage.
    We do not want to use ASMLib.
    Please find the shared raw partition details below:
    RAC Node-1
    [root@xyz-ch-aaadb-01 ~]# fdisk -l
    Disk /dev/sda: 145.9 GB, 145999527936 bytes
    255 heads, 63 sectors/track, 17750 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sda1 * 1 13 104391 83 Linux
    /dev/sda2 14 6540 52428127+ 83 Linux
    /dev/sda3 6541 11370 38796975 83 Linux
    /dev/sda4 11371 17750 51247350 5 Extended
    /dev/sda5 11371 17358 48098578+ 82 Linux swap / Solaris
    /dev/sda6 17359 17750 3148708+ 83 Linux
    Disk /dev/sdb: 3221 MB, 3221225472 bytes
    100 heads, 62 sectors/track, 1014 cylinders
    Units = cylinders of 6200 * 512 = 3174400 bytes
    Device Boot Start End Blocks Id System
    Disk /dev/sdc: 549.7 GB, 549755813888 bytes
    255 heads, 63 sectors/track, 66837 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    Disk /dev/sdd: 549.7 GB, 549755813888 bytes
    255 heads, 63 sectors/track, 66837 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdd1 1 66837 536868171 83 Linux
    Disk /dev/sde: 549.7 GB, 549755813888 bytes
    255 heads, 63 sectors/track, 66837 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sde1 1 66837 536868171 83 Linux
    Disk /dev/sdf: 549.7 GB, 549755813888 bytes
    255 heads, 63 sectors/track, 66837 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdf1 1 66837 536868171 83 Linux
    Disk /dev/sdg: 3221 MB, 3221225472 bytes
    100 heads, 62 sectors/track, 1014 cylinders
    Units = cylinders of 6200 * 512 = 3174400 bytes
    Device Boot Start End Blocks Id System
    /dev/sdg1 1 1014 3143369 83 Linux
    Disk /dev/sdh: 549.7 GB, 549755813888 bytes
    255 heads, 63 sectors/track, 66837 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdh1 1 66837 536868171 83 Linux
    RAC Node-2
    [root@xyzl-ch-aaadb-02 ~]# fdisk -l
    Disk /dev/sda: 145.9 GB, 145999527936 bytes
    255 heads, 63 sectors/track, 17750 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sda1 * 1 13 104391 83 Linux
    /dev/sda2 14 6540 52428127+ 83 Linux
    /dev/sda3 6541 11240 37752750 82 Linux swap / Solaris
    /dev/sda4 11241 17750 52291575 5 Extended
    /dev/sda5 11241 15940 37752718+ 83 Linux
    /dev/sda6 15941 16332 3148708+ 83 Linux
    Disk /dev/sdp: 3221 MB, 3221225472 bytes
    100 heads, 62 sectors/track, 1014 cylinders
    Units = cylinders of 6200 * 512 = 3174400 bytes
    Device Boot Start End Blocks Id System
    Disk /dev/sdq: 549.7 GB, 549755813888 bytes
    255 heads, 63 sectors/track, 66837 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    Disk /dev/sdr: 549.7 GB, 549755813888 bytes
    255 heads, 63 sectors/track, 66837 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdr1 1 66837 536868171 83 Linux
    Disk /dev/sds: 549.7 GB, 549755813888 bytes
    255 heads, 63 sectors/track, 66837 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sds1 1 66837 536868171 83 Linux
    Disk /dev/sdt: 549.7 GB, 549755813888 bytes
    255 heads, 63 sectors/track, 66837 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdt1 1 66837 536868171 83 Linux
    Disk /dev/sdu: 3221 MB, 3221225472 bytes
    100 heads, 62 sectors/track, 1014 cylinders
    Units = cylinders of 6200 * 512 = 3174400 bytes
    Device Boot Start End Blocks Id System
    /dev/sdu1 1 1014 3143369 83 Linux
    Disk /dev/sdv: 549.7 GB, 549755813888 bytes
    255 heads, 63 sectors/track, 66837 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdv1 1 66837 536868171 83 Linux
    Disk /dev/sdw: 3221 MB, 3221225472 bytes
    100 heads, 62 sectors/track, 1014 cylinders
    Units = cylinders of 6200 * 512 = 3174400 bytes
    Device Boot Start End Blocks Id System
    Disk /dev/sdx: 549.7 GB, 549755813888 bytes
    255 heads, 63 sectors/track, 66837 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    Disk /dev/sdy: 549.7 GB, 549755813888 bytes
    255 heads, 63 sectors/track, 66837 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdy1 1 66837 536868171 83 Linux
    Disk /dev/sdz: 549.7 GB, 549755813888 bytes
    255 heads, 63 sectors/track, 66837 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdz1 1 66837 536868171 83 Linux
    Disk /dev/sdaa: 549.7 GB, 549755813888 bytes
    255 heads, 63 sectors/track, 66837 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdaa1 1 66837 536868171 83 Linux
    Disk /dev/sdab: 3221 MB, 3221225472 bytes
    100 heads, 62 sectors/track, 1014 cylinders
    Units = cylinders of 6200 * 512 = 3174400 bytes
    Device Boot Start End Blocks Id System
    /dev/sdab1 1 1014 3143369 83 Linux
    Disk /dev/sdac: 549.7 GB, 549755813888 bytes
    255 heads, 63 sectors/track, 66837 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdac1 1 66837 536868171 83 Linux
    Please suggest solutions.
    Edited by: hitgon on Aug 20, 2011 3:49 AM

    We are still not able to discover the shared raw device partitions;
    please help us.
    Now my fdisk -l shows consistent raw device partitions;
    please find the details:
    [root@xyz-ch-aaadb-01 grid]# fdisk -l
    Disk /dev/sda: 145.9 GB, 145999527936 bytes
    255 heads, 63 sectors/track, 17750 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sda1 * 1 13 104391 83 Linux
    /dev/sda2 14 6540 52428127+ 83 Linux
    /dev/sda3 6541 11370 38796975 83 Linux
    /dev/sda4 11371 17750 51247350 5 Extended
    /dev/sda5 11371 17358 48098578+ 82 Linux swap / Solaris
    /dev/sda6 17359 17750 3148708+ 83 Linux
    Disk /dev/sdb: 3221 MB, 3221225472 bytes
    100 heads, 62 sectors/track, 1014 cylinders
    Units = cylinders of 6200 * 512 = 3174400 bytes
    Device Boot Start End Blocks Id System
    Disk /dev/sdc: 549.7 GB, 549755813888 bytes
    255 heads, 63 sectors/track, 66837 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    Disk /dev/sdd: 549.7 GB, 549755813888 bytes
    255 heads, 63 sectors/track, 66837 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdd1 1 66837 536868171 83 Linux
    Disk /dev/sde: 549.7 GB, 549755813888 bytes
    255 heads, 63 sectors/track, 66837 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sde1 1 66837 536868171 83 Linux
    Disk /dev/sdf: 549.7 GB, 549755813888 bytes
    255 heads, 63 sectors/track, 66837 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdf1 1 66837 536868171 83 Linux
    Disk /dev/sdg: 3221 MB, 3221225472 bytes
    100 heads, 62 sectors/track, 1014 cylinders
    Units = cylinders of 6200 * 512 = 3174400 bytes
    Device Boot Start End Blocks Id System
    /dev/sdg1 1 1014 3143369 83 Linux
    Disk /dev/sdh: 549.7 GB, 549755813888 bytes
    255 heads, 63 sectors/track, 66837 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdh1 1 66837 536868171 83 Linux
    RAC Node-2
    [root@xyz-ch-aaadb-02 ~]# fdisk -l
    Disk /dev/sda: 145.9 GB, 145999527936 bytes
    255 heads, 63 sectors/track, 17750 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sda1 * 1 4700 37752718+ 83 Linux
    /dev/sda2 4701 11227 52428127+ 83 Linux
    /dev/sda3 11228 11619 3148740 83 Linux
    /dev/sda4 11620 17750 49247257+ 5 Extended
    /dev/sda5 11620 17750 49247226 83 Linux
    Disk /dev/sdb: 3221 MB, 3221225472 bytes
    100 heads, 62 sectors/track, 1014 cylinders
    Units = cylinders of 6200 * 512 = 3174400 bytes
    Device Boot Start End Blocks Id System
    Disk /dev/sdc: 549.7 GB, 549755813888 bytes
    255 heads, 63 sectors/track, 66837 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    Disk /dev/sdd: 549.7 GB, 549755813888 bytes
    255 heads, 63 sectors/track, 66837 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdd1 1 66837 536868171 83 Linux
    Disk /dev/sde: 549.7 GB, 549755813888 bytes
    255 heads, 63 sectors/track, 66837 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sde1 1 66837 536868171 83 Linux
    Disk /dev/sdf: 549.7 GB, 549755813888 bytes
    255 heads, 63 sectors/track, 66837 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdf1 1 66837 536868171 83 Linux
    Disk /dev/sdg: 3221 MB, 3221225472 bytes
    100 heads, 62 sectors/track, 1014 cylinders
    Units = cylinders of 6200 * 512 = 3174400 bytes
    Device Boot Start End Blocks Id System
    /dev/sdg1 1 1014 3143369 83 Linux
    Disk /dev/sdh: 549.7 GB, 549755813888 bytes
    255 heads, 63 sectors/track, 66837 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdh1 1 66837 536868171 83 Linux
    Please find the following details:
    ===============================================
    RAC Node-1
    [root@xyz-ch-xxxdb-01 grid]# ls -l /dev/sdb
    brw-r----- 1 root disk 8, 16 Aug 19 09:15 /dev/sdb
    [root@xyz-ch-xxxdb-01 grid]#
    [root@xyz-ch-xxxdb-01 grid]# ls -l /dev/sdg
    brw-r----- 1 root disk 8, 96 Aug 19 09:15 /dev/sdg
    RAC Node-2
    [root@xyz-ch-xxxdb-02 ~]# ls -l /dev/sdb
    brw-r----- 1 root disk 8, 16 Aug 19 18:41 /dev/sdb
    [root@xyz-ch-xxxdb-02 ~]#
    [root@xyz-ch-xxxdb-02 ~]#
    [root@xyz-ch-xxxdb-02 ~]# ls -l /dev/sdg
    brw-r----- 1 root disk 8, 96 Aug 19 18:41 /dev/sdg

  • Raw device access checking

    How can I check that the OCR & voting disk raw devices are accessible? And how can I check the size of the raw devices?

    Once you find the raw devices (see your thread from yesterday on the subject), you can run /sbin/fdisk -l <device name> to find information about the size.
