Adding LUNs to ASM on Solaris

Hi Experts,
How do I add LUNs to ASM on Solaris? On Linux I know it is done with the oracleasm command and options like scandisks, createdisk, etc. I guess those RPMs are not available on Solaris.
Thanks in advance,
Mahesh.G

Hi,
I got the answer. It's the "luxadm probe" command that gives you the information about the attached SAN LUNs. You can then add these LUNs to ASM disk groups.
Cheers..
Mahesh.G
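For anyone landing here later, a hedged sketch of how that check might look on Solaris (the device paths below are illustrative, not from a real system):
# luxadm probe                 (lists attached fabric/SAN devices and their logical paths)
# echo | format                (lists every disk the OS currently sees, including new LUNs)
# ls -l /dev/rdsk/c*t*d*s*     (raw device paths that an ASM discovery string can point at)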

Similar Messages

  • Verify LUN IDs on Solaris 9 System

    Hi Experts,
    SAN Team just added LUNs to a Solaris 9 system. How can I verify that the LUNs were added? Where can I find the LUN IDs?
    Let me know if you need more information.
    Sunil.

    luxadm probe will list all the fabric disk details.
    Note: If this is the first time a LUN is being provisioned to this server, there could be additional configuration required on the server side, specific to your SAN environment.
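    A hedged sketch of that additional configuration (controller c2 is only an example, not from this thread); making newly zoned fabric LUNs visible on Solaris typically looks something like:
    # cfgadm -al                 (list fabric controllers and whether their LUNs are configured)
    # cfgadm -c configure c2     (configure the controller the new LUNs arrive on)
    # devfsadm -c disk           (create the /dev/dsk and /dev/rdsk entries)
    # luxadm probe               (confirm the new fabric LUNs are now visible)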

  • Add LUN into asm in rac (Solaris)

    I have a 2-node Oracle RAC with ASM. I have to add a new LUN to ASM to increase space in an ASM disk group. I have done the following on node 1:
    Ran format.
    Created the partition on slice 4 of the new device file corresponding to the LUN.
    Created the block and character device files, named RAC_ORACLE_data_05, under /dev/dsk/oracle and /dev/rdsk/oracle (where the other device files used by ASM also reside), using the major and minor numbers of the new LUN.
    Changed the permissions to oracle:dba.
    The new LUN is now visible through ASM on node 1 as RAC_ORACLE_data_05.
    My question is: what steps do I need to carry out on node 2 of the cluster?
    The disk is visible on node 2 as well.
    The disk already shows the partitions created through node 1.
    Do I have to create the device files under /dev/dsk/oracle with the major/minor numbers as shown on node 2 for the LUN?
    TIA
    Ravinder

    I almost never use Solaris (the last Oracle database I managed on Solaris was 10 years ago), and as this is not a Solaris o/s forum space, you should not expect quick answers to Solaris o/s questions (which have little to do with ASM).
    A comment though on what needs to be done at o/s level to satisfy RAC/ASM requirements.
    ASM does not care for device names. Device names do not need to be persistent before and after reboots. Device names do not need to be persistent across RAC nodes in a cluster.
    ASM simply needs to
    a) be able to discover the device
    b) be able to open the device
    This means that the device name needs to match the ASM disk discovery string (or vice versa) across all RAC nodes/ASM instances, and that the device permissions need to allow read/write access for ASM.
    ASM enumerates the devices it can do I/O against using its discovery string. For each device, ASM opens the device and reads the ASM disk header. If there is a valid label, ASM knows the disk name, its status, and the failgroup and diskgroup the disk belongs to. If there is no label, ASM treats it as a candidate disk.
    So from an o/s perspective - a cluster LUN/disk needs to be visible on all nodes, in order to use that disk for RAC/grid storage. Actual device name is not important.
    So whatever you did on node 1 to make that new LUN available to ASM, you need to do on node 2 and others.
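    To make the discovery mechanism concrete, here is a rough sketch only (the discovery string is an example based on the path layout in the question, scope=both assumes an spfile, and on 10g you would connect AS SYSDBA rather than AS SYSASM):
    $ sqlplus / as sysasm
    SQL> show parameter asm_diskstring
    SQL> -- disks matching the string are listed here: CANDIDATE = usable, MEMBER = already in a group
    SQL> select path, header_status, group_number from v$asm_disk;
    SQL> -- widen the string if the new device is not listed
    SQL> alter system set asm_diskstring='/dev/rdsk/oracle/*' scope=both;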
    I have 2 bash scripts I feel are essential to managing a RAC. The 1st script enables me to execute a command on all RAC nodes. The 2nd script enables me to copy/distribute a local file to all other RAC nodes.
    So using these scripts and dealing with Linux as o/s, I would determine whether the new disk/LUN is seen by all RAC nodes, and whether the permissions are correct for ASM usage.
    If not, I will use the 2nd script to distribute the config file(s) needed to configure the other RAC nodes with the same changes (on Linux this typically would be /etc/multipath.conf). And then use the 1st script to enable those changes on all RAC nodes (e.g. by restarting multipathd).
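    Purely as a sketch (these are not the poster's actual scripts, and the node names are placeholders), the two helpers can be as simple as:
    #!/bin/bash
    # rac_exec.sh - run the given command on every RAC node
    NODES="node1 node2 node3"          # placeholder node names
    for n in $NODES; do
        echo "### $n"
        ssh "$n" "$@"
    done

    #!/bin/bash
    # rac_copy.sh - copy a local file to the same path on the other RAC nodes
    NODES="node2 node3"                # placeholder node names
    for n in $NODES; do
        scp "$1" "${n}:$1"
    done
    Used as described above, that would be something like ./rac_copy.sh /etc/multipath.conf followed by ./rac_exec.sh service multipathd restart.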

  • Installation of RAC with ASM on Solaris

    Hello Everybody
    I have to install and configure Oracle 10g RAC and ASM on Solaris. As per my understanding, the network prerequisite is 3 IPs for each node:
    1) A public IP registered in DNS,
    2) A virtual IP registered in DNS, but NOT defined on the servers; it can be defined later during the Oracle Clusterware install, and
    3) A private IP only known to the servers in the RAC configuration, to be used for the interconnect.
    The scenario is like
    The servers and the storage are in different networks, so the registered public IP address and virtual IP address are not possible; only private IP addresses are possible. Both servers are, however, accessible to me. Please confirm the same.
    My question is:
    Is it possible to build the RAC and ASM with only the private IP, without the public and the virtual IP? If yes, how?
    The setup is not going to be part of any production or development environment. I am new to this and just wanted to test the RAC installation with ASM.
    Your inputs or suggestions are greatly appreciated.
    Thanks for your time.

    Hi,
    for a 10g RAC you need:
    - a host-IP for every node
    - a private IP for every node
    - a virtual IP for every node
    Host IP and private IP must be assigned to both hosts, and connections between the hosts using either the host IP or the private IP must be possible.
    Is it possible to build the RAC and ASM only with the private IP, without the public and the virtual IP? If yes, how?
    The terms "private" and "public" do not refer to public (routable) IPs. "Private" means the address is used only for communication between the nodes, and "public" for communication between the clients and the database.
    For a successful installation you need at least these three IPs on each system.
    So, for instance, your public IPs could reside in the network 192.168.1.0/255.255.255.0 and your private interconnect network could be 192.168.2.0/255.255.255.0. Both networks consist of private (i.e. non-routable) IPs.
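    A hedged sketch of an /etc/hosts layout along those lines (host names and addresses are invented for illustration):
    # public network 192.168.1.0/255.255.255.0
    192.168.1.11  rac1
    192.168.1.12  rac2
    # virtual IPs (same subnet as the public network; plumbed by Clusterware, not pre-defined on the interfaces)
    192.168.1.21  rac1-vip
    192.168.1.22  rac2-vip
    # private interconnect 192.168.2.0/255.255.255.0
    192.168.2.11  rac1-priv
    192.168.2.12  rac2-priv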

  • Add luns in asm instance...

    Hi,
    I need to configure Oracle ASM to include 5 additional new LUNs. The Oracle database is already up and running. It is already configured with 6 LUNs, but we need it updated to include the 5 new LUNs.
    Already Configured disks:
    /dev/rhdisk1 - 100 GB
    /dev/rhdisk2 - 100 GB
    /dev/rhdisk3 - 100 GB
    /dev/rhdisk4 - 100 GB
    /dev/rhdisk5 - 100 GB
    /dev/rhdisk6 - 30 GB
    New Disks to Configure:
    /dev/rhdisk7
    /dev/rhdisk8
    /dev/rhdisk9
    /dev/rhdisk10
    /dev/rhdisk11
    Please let me know the steps for how to do it and what information I need to do it.

    oradba11 wrote:
    I need to configure Oracle ASM to include 5 additional new LUNs.
    Not necessary. ASM either sees these LUNs or it does not. There is no ASM-side configuration needed to use/see these additional LUNs or to make them visible.
    The Oracle database is already up and running. It is already configured with 6 LUNs, but we need it updated to include the 5 new LUNs.
    Already Configured disks:
    /dev/rhdisk1 - 100 GB
    ..snipped..
    New Disks to Configure:
    /dev/rhdisk7
    ..snipped..
    Which means the ASM disk discovery string should be "/dev/rhdisk*", and implies that ASM will automatically see these LUNs when they are made available by the o/s.
    No additional steps needed on the ASM side.
    On the o/s side, the new devices need to have the proper permissions in order for ASM to see and use them. Assuming that is done, ASM will see these new LUNs just as it sees the existing LUNs. (ASM does not know the difference until it checks the header of each LUN to determine whether it is a member disk or a candidate disk.)
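    A hedged sketch of the whole sequence (the oracle:dba ownership, 660 permissions, and the DATA disk group name are assumptions, not details from this thread):
    # on the o/s, give the new raw devices the same ownership/permissions as the existing ASM disks
    chown oracle:dba /dev/rhdisk7 /dev/rhdisk8 /dev/rhdisk9 /dev/rhdisk10 /dev/rhdisk11
    chmod 660 /dev/rhdisk7 /dev/rhdisk8 /dev/rhdisk9 /dev/rhdisk10 /dev/rhdisk11
    -- in the ASM instance, the new devices should now show up as CANDIDATE disks
    SQL> select path, header_status from v$asm_disk;
    -- add them to the disk group (DATA is a hypothetical name); ASM rebalances automatically
    SQL> alter diskgroup DATA add disk '/dev/rhdisk7','/dev/rhdisk8','/dev/rhdisk9','/dev/rhdisk10','/dev/rhdisk11';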

  • LUN has gone AWOL - Solaris 10 x86, Emulex HBAs, IBM DS8300

    We're setting up a test environment for our backup software, currently running on Windows, but targeted for Solaris. This is also a test of our first Solaris x86 deployment.
    * HP DL580 G5 server.
    * Solaris 10 x86 10/08 w/patch cluster from ~March 18th.
    * One dual port Emulex HBA
    * IBM DS8300 SAN
    When we first set this up, we could see the one LUN in format. After adding in the IBM Multipathing software, the LUN disappeared and the FC devices are shown as unconfigured in `cfgadm -al`. We removed the IBM multipathing software, but still cannot see the LUN. Did a reconfiguration reboot and all that.
    Current state:
    # cfgadm -al c6
    Ap_Id                          Type         Receptacle   Occupant     Condition
    c6                             fc-fabric    connected    unconfigured unknown
    c6::5005076306130287           unknown      connected    unconfigured unknown
    # luxadm -e port
    /devices/pci@0,0/pci8086,3607@4/pci10b5,8533@0/pci10b5,8533@a/pci10df,fe00@0/fp@0,0:devctl  CONNECTED
    /devices/pci@0,0/pci8086,3607@4/pci10b5,8533@0/pci10b5,8533@a/pci10df,fe00@0,1/fp@0,0:devctl  NOT CONNECTED
    # luxadm -e dump_map /devices/pci@0,0/pci8086,3607@4/pci10b5,8533@0/pci10b5,8533@a/pci10df,fe00@0/fp@0,0:devctl
    Pos  Port_ID Hard_Addr Port WWN         Node WWN         Type
    0    a0200   0        5005076306130287 5005076306ffc287 0x1f (Unknown Type)
    1    a0100   0        10000000c97a7866 20000000c97a7866 0x1f (Unknown Type,Host Bus Adapter)
    # modinfo|grep FC
    102 ffffffffefc2c000  1aab0 190   1  fp (SunFC Port v20090205-1.79)
    103 ffffffffefc47000  199f0 194   1  fcp (SunFC FCP v20090205-1.132)
    104 ffffffffefc63000   a2a0   -   1  fctl (SunFC Transport v20090205-1.59)
    107 ffffffffefc6c000 377bc0 191   1  emlxs (SunFC emlxs FCA v20081211-2.31p)
    189 ffffffffefb547e0   9620 193   1  fcip (SunFC FCIP v20090205-1.50)
    190 ffffffffefb49dc0   5610 196   1  fcsm (Sun FC SAN Management v20070116)
    c6 is the first port on the dual-port card. c7, not shown, is the second port. Prior to installing the IBM multipathing software, we were seeing the one assigned LUN on c6.
    Trying `cfgadm -c configure c6` does nothing. Adding `-f` causes an error:
    # cfgadm -c configure c6
    # cfgadm -al c6
    Ap_Id                          Type         Receptacle   Occupant     Condition
    c6                             fc-fabric    connected    unconfigured unknown
    c6::5005076306130287           unknown      connected    unconfigured unknown
    # cfgadm -f -c configure c6
    cfgadm: Library error: failed to create device node: 5005076306130287: Invalid argument
    failed to configure ANY device on FCA port
    # cfgadm -al c6
    Ap_Id                          Type         Receptacle   Occupant     Condition
    c6                             fc-fabric    connected    unconfigured unknown
    c6::5005076306130287           unknown      connected    unconfigured unknown
    We're currently at a loss here. Any pointers would be greatly appreciated. I am checking SunSolve and Google, with plenty of similar hits but no resolution as yet.
    Thanks,
    Mark

    Ok, so after that detailed message, my SAN admin just realised he hadn't assigned any LUNs to the server. :D
    The LUN that we had seen previously was an artifact from an earlier configuration.

  • Resizing a LUN with the lastest Solaris 10 SPARC

    1. I followed doc ID 368840.1 to partition the disk used for ASM.
    It looks like this:
    Part Tag Flag Cylinders Size Blocks
    0 unassigned wm 3 - 858 1016.50MB (856/0/0) 2081792
    1 unassigned wu 0 0 (0/0/0) 0
    2 backup wu 0 - 859 1021.25MB (860/0/0) 2091520
    3 unassigned wm 0 0 (0/0/0) 0
    4 unassigned wm 0 0 (0/0/0) 0
    5 unassigned wm 0 0 (0/0/0) 0
    6 unassigned wm 0 0 (0/0/0) 0
    7 unassigned wm 0 0 (0/0/0) 0
    2. I resized the LUN on the storage array to 9 GB using "0. Auto configure".
    I see the new size looks like this:
    Part Tag Flag Cylinders Size Blocks
    0 root wm 0 0 (0/0/0) 0
    1 swap wu 0 0 (0/0/0) 0
    2 backup wu 0 - 7757 9.00GB (7758/0/0) 18867456
    3 unassigned wm 0 0 (0/0/0) 0
    4 unassigned wm 0 0 (0/0/0) 0
    5 unassigned wm 0 0 (0/0/0) 0
    6 usr wm 0 - 7757 9.00GB (7758/0/0) 18867456
    7 unassigned wm 0 0 (0/0/0) 0
    3. Then I re-partitioned the LUN using format.
    Part Tag Flag Cylinders Size Blocks
    0 unassigned wm 3 - 7756 8.99GB (7754/0/0) 18857728
    1 unassigned wm 0 0 (0/0/0) 0
    2 backup wu 0 - 7757 9.00GB (7758/0/0) 18867456
    3 unassigned wm 0 0 (0/0/0) 0
    4 unassigned wm 0 0 (0/0/0) 0
    5 unassigned wm 0 0 (0/0/0) 0
    6 unassigned wm 0 0 (0/0/0) 0
    7 unassigned wm 0 0 (0/0/0) 0
    4. I can't label the LUN. (I have done this with previous Solaris 10 SPARC releases and it worked fine.)
    partition> l
    Ready to label disk, continue? y
    Warning: error writing VTOC.
    Warning: no backup labels
    Label failed.
    It looks like the new Solaris updates also change this behavior, but I don't know what the new requirements are.
    Any idea?
    Thanks much in advance,
    R-

    I would suggest changing the media you burned it on. That always seems to be the issue with things like this. Search the forums for suggestions on what works.

  • Adding eSAN storage to a Solaris 9 box, with MPXIO and Qlogic HBAs

    I recently added a few SAN drives to a Solaris 9 box and enabled MPxIO. I noticed that after you install the qlc drivers and do a reconfigure boot, you see two additional devices besides the disks. These additional devices are very small and present themselves as disks. See the example below.
    12. c5t50060482D5304D88d0 <EMC-SYMMETRIX-5772 cyl 4 alt 2 hd 15 sec 128>
    /pci@8,700000/fibre-channel@4,1/fp@0,0/ssd@w50060482d5304d88,0
    13. c7t50060482D5304D87d0 <EMC-SYMMETRIX-5772 cyl 4 alt 2 hd 15 sec 128>
    /pci@8,700000/fibre-channel@5,1/fp@0,0/ssd@w50060482d5304d87,0
    14. c8t60060480000190103862533032384632d0 <EMC-SYMMETRIX-5772 cyl 37178 alt 2 hd 60 sec 128>
    /scsi_vhci/ssd@g60060480000190103862533032384632
    15. c8t60060480000190103862533032384541d0 <EMC-SYMMETRIX-5772 cyl 37178 alt 2 hd 60 sec 128>
    /scsi_vhci/ssd@g60060480000190103862533032384541
    Notice the difference in the number of cylinders. I have a couple of questions about these devices:
    1. What are these? The SAN storage is EMC Symmetrix.
    2. I see the following errors in /var/adm/messages any time a general disk access command, such as format, is run. Should I be concerned?
    Feb 4 13:05:35 Corrupt label; wrong magic number
    Feb 4 13:05:35 scsi: WARNING: /pci@8,700000/fibre-channel@4,1/fp@0,0/ssd@w50060482d5304d88,0 (ssd21):
    Feb 4 13:05:35 Corrupt label; wrong magic number
    Feb 4 13:05:35 scsi: WARNING: /pci@8,700000/fibre-channel@5,1/fp@0,0/ssd@w50060482d5304d87,0 (ssd28):
    Feb 4 13:05:35 Corrupt label; wrong magic number
    Feb 4 13:05:35 scsi: WARNING: /pci@8,700000/fibre-channel@5,1/fp@0,0/ssd@w50060482d5304d87,0 (ssd28):
    Feb 4 13:05:35 Corrupt label; wrong magic number

    Those are gatekeeper devices from the EMC storage; this is normal.

  • Install ASM on Solaris RAC 10g

    Hello,
    I installed the CRS and database software on a two-node RAC, 10.2.0.4, on Solaris x86-64 5.10 with the latest Solaris updates. I have no errors with CRS.
    Problems description:
    1) When I run DBCA to create ASM, it fails to create it on the nodes with the error ORA-03135: connection lost contact.
    2) I see ORA-29702 in the logs (error occurred in Cluster Group Service operation.
    Cause: An unexpected error occurred while performing a CGS operation.
    Action: Verify that the LMON process is still active. Also, check the Oracle LMON trace files for errors.)
    Question:
    Do you think that the problem is that interface 10.0.0.21 node1-priv-fail2 is started? (See below, bdump/alert_+ASM1.log.)
    This interface, 10.0.0.21 node1-priv-fail2, is the one that freezes the prompt when I try ssh oracle@node1-priv-fail2 from node2.
    Possible solution: I saw Metalink 283684.1 but don't know if/what to change in my interfaces.
    Details:
    I think it is something with the interfaces, but I don't know what.
    - One thing I noticed is that it is not possible to ssh from node1 to node2-priv-fail2 (I was told this is the private standby loopback interface). The same goes from node2 to node1-priv-fail2; it gives a frozen prompt.
    - in /etc/hosts on both nodes I have:
    127.0.0.1 localhost
    172.17.1.17 node1
    172.17.1.18 node1-fail1
    172.17.1.19 node1-fail2
    172.17.1.20 node1-vip
    172.17.1.29 node2 loghost
    172.17.1.30 node2-fail1
    172.17.1.31 node2-fail2
    172.17.1.32 node2-vip
    10.0.0.1 node1-priv
    10.0.0.11 node1-priv-fail1
    10.0.0.21 node1-priv-fail2
    10.0.0.2 node2-priv
    10.0.0.12 node2-priv-fail1
    10.0.0.22 node2-priv-fail2
    Starting ORACLE instance (normal)
    LICENSE_MAX_SESSION = 0
    LICENSE_SESSIONS_WARNING = 0
    Interface type 1 e1000g2 10.0.0.0 configured from OCR for use as a cluster interconnect
    Interface type 1 e1000g3 10.0.0.0 configured from OCR for use as a cluster interconnect
    Interface type 1 e1000g0 172.17.0.0 configured from OCR for use as a public interface
    Interface type 1 e1000g1 172.17.0.0 configured from OCR for use as a public interface
    Starting up ORACLE RDBMS Version: 10.2.0.4.0.
    System parameters with non-default values:
    large_pool_size = 12582912
    instance_type = asm
    cluster_database = TRUE
    instance_number = 1
    remote_login_passwordfile= EXCLUSIVE
    background_dump_dest = /opt/app/oracle/db/admin/+ASM/bdump
    user_dump_dest = /opt/app/oracle/db/admin/+ASM/udump
    core_dump_dest = /opt/app/oracle/db/admin/+ASM/cdump
    Cluster communication is configured to use the following interface(s) for this instance
    10.0.0.1
    10.0.0.21
    node1:oracle$ oifcfg getif
    e1000g0 172.17.0.0 global public
    e1000g1 172.17.0.0 global public
    e1000g2 10.0.0.0 global cluster_interconnect
    e1000g3 10.0.0.0 global cluster_interconnect
    node1:oracle$ ifconfig -a
    lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
    inet 127.0.0.1 netmask ff000000
    e1000g0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
    inet 172.17.1.17 netmask ffff0000 broadcast 172.17.255.255
    groupname orapub
    e1000g0:1: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 2
    inet 172.17.1.18 netmask ffff0000 broadcast 172.17.255.255
    e1000g0:2: flags=1040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4> mtu 1500 index 2
    inet 172.17.1.20 netmask ffff0000 broadcast 172.17.255.255
    e1000g1: flags=39040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,FAILED,STANDBY> mtu 1500 index 3
    inet 172.17.1.19 netmask ffff0000 broadcast 172.17.255.255
    groupname orapub
    e1000g2: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
    inet 10.0.0.1 netmask ff000000 broadcast 10.255.255.255
    groupname oracle_interconnect
    e1000g2:1: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 4
    inet 10.0.0.11 netmask ff000000 broadcast 10.255.255.255
    e1000g3: flags=39040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,FAILED,STANDBY> mtu 1500 index 5
    inet 10.0.0.21 netmask ff000000 broadcast 10.255.255.255
    groupname oracle_interconnect


  • Question on adding LUNs to disk group

    GI version :11.2.0.3.0
    Platform : OEL
    I need to add 2 TB to the following disk group. Each LUN is 100 GB in size.
    100 GB x 20 = 2 TB, which means I have to run 20 commands like the one below.
    SQL> ALTER DISKGROUP DATA_DG1 ADD DISK '/dev/sdc1' rebalance power 4;
    Each ALTER DISKGROUP...ADD DISK command's rebalance takes around 30 minutes to complete.
    So this operation will take 10 hours to complete, i.e. 20 x 30 = 600 minutes = 10 hours!
    I have a 3-node RAC. Can I run ALTER DISKGROUP...ADD DISK commands in parallel from each node? Will the ASM instance allow multiple rebalance operations in a single disk group?

    http://docs.oracle.com/cd/E11882_01/server.112/e18951/asmdiskgrps.htm
    Oracle ASM can perform one disk group rebalance at a time on a given instance. If you have initiated multiple rebalances on different disk groups on a single node, then Oracle processes these operations in parallel on additional nodes if available; otherwise the rebalances are performed serially on the single node. You can explicitly initiate rebalances on different disk groups on different nodes in parallel.
    Why do you need to run 20 of these commands??
    Can you not just add 20 of the disks in one go:
    ALTER DISKGROUP DATADG ADD DISK
    '/devices/diska5' NAME disk1,
    '/devices/diska6' NAME disk2,
    '/devices/diska7' NAME disk3,
    ...etc
    You shouldn't need to manually rebalance, as Oracle will automatically do the rebalancing when the configuration changes, unless you want to use a non-default power value as defined by the ASM_POWER_LIMIT parameter. You can run the above command with a rebalance option as well if you need to do so.
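    A hedged sketch following the poster's device naming (the full list of 20 partitions is elided and the power value is only an example); adding all LUNs in one statement triggers a single rebalance, which can be monitored from V$ASM_OPERATION:
    SQL> ALTER DISKGROUP DATA_DG1 ADD DISK
         '/dev/sdc1', '/dev/sdd1', '/dev/sde1'   -- ...continue with all 20 new LUN partitions
         REBALANCE POWER 4;
    SQL> SELECT operation, state, power, sofar, est_work, est_minutes FROM v$asm_operation;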

  • ASM on Solaris

    Dear all,
    We are in the process of installing 10g RAC on Solaris.
    As far as I have studied:
    OCFS cannot be used on Solaris.
    Only a 3rd-party cluster file system can be used.
    If no 3rd-party cluster file system can be used, we have to use ASM.
    Are there any other options for this?
    Please guide.
    Kai

    Hello KaiS,
    Also, on the certification matrix we can find more information about your question.
    Certify - Certification Matrix: RAC for Unix on Sun Solaris SPARC
    Server Certifications
    OS Product Certified With Version Status Addtl. Info. Components Other
    10 11gR1 64-bit Fujitsu-Siemens PrimeCluster V 4.2 Certified None None None
    9 11gR1 64-bit Veritas Storage Foundation for Oracle RAC 5.0 Certified Yes None None
    10 11gR1 64-bit Veritas Storage Foundation for Oracle RAC 5.0 Certified Yes None None
    9 11gR1 64-bit Sun Cluster 3.2 Certified None None None
    10 11gR1 64-bit Sun Cluster 3.2 Certified None None None
    9 11gR1 64-bit Sun Cluster 3.1 Certified None None None
    10 11gR1 64-bit Sun Cluster 3.1 Certified None None None
    9 11gR1 64-bit Oracle Clusterware 11g Certified None None None
    10 11gR1 64-bit Oracle Clusterware 11g Certified None None None
    9 10gR2 64-bit Fujitsu-Siemens PrimeCluster V 4.2 Certified None None None
    10 10gR2 64-bit Fujitsu-Siemens PrimeCluster V 4.2 Certified None None None
    9 10gR2 64-bit Fujitsu-Siemens PrimeCluster V 4.1 Certified None None None
    10 10gR2 64-bit Fujitsu-Siemens PrimeCluster V 4.1 Certified None None None
    9 10gR2 64-bit Veritas Storage Foundation for Oracle RAC 5.0 Certified Yes None None
    8 10gR2 64-bit Veritas Storage Foundation for Oracle RAC 5.0 Certified Yes None None
    10 10gR2 64-bit Veritas Storage Foundation for Oracle RAC 5.0 Certified Yes None None
    9 10gR2 64-bit Veritas Storage Foundation for Oracle RAC 4.1 Certified Yes None None
    8 10gR2 64-bit Veritas Storage Foundation for Oracle RAC 4.1 Certified Yes None None
    10 10gR2 64-bit Veritas Storage Foundation for Oracle RAC 4.1 Certified Yes None None
    9 10gR2 64-bit Sun Cluster 3.2 Certified Yes None None
    10 10gR2 64-bit Sun Cluster 3.2 Certified Yes None None
    9 10gR2 64-bit Sun Cluster 3.1 Certified None None None
    8 10gR2 64-bit Sun Cluster 3.1 Certified None None None
    10 10gR2 64-bit Sun Cluster 3.1 Certified None None None
    9 10gR2 64-bit Oracle Clusterware 10g Certified None None None
    8 10gR2 64-bit Oracle Clusterware 10g Certified None None None
    10 10gR2 64-bit Oracle Clusterware 10g Certified None None None
    Cheers,
    Rodrigo Mufalani
    http://mufalani.blogspot.com

  • ASM on Solaris 10 without Veritas ?

    It looks like most ASM documentation assumes Linux or Solaris with Veritas?
    Is it possible to use inherent Solaris functionality and raw devices as ASM devices? If not, what are the storage options?

    What 10g ASM document assumes Veritas?

  • Recommended Number LUNs for ASM Diskgroup

    We are installing Oracle Clusterware 11g, Oracle ASM 11g, and Oracle Database 11g R1 (11.1.0.6) Enterprise Edition with the RAC option. We have an EMC CLARiiON CX-3 SAN for shared storage (all Oracle software will reside locally). We are trying to determine the recommended or best-practice number of LUNs and LUN size for ASM disk groups. I have found only the following specific to ASM 11g:
    ASM Deployment Best Practice
    Use diskgroups with four or more disks, and making sure these disks span several backend disk adapters.
    1) Recommended number of LUNs?
    2) Recommended size of LUNs?
    3) In the ASM Deployment Best Practice above, "four or more disks" for a diskgroup, is this referring to LUNs (4 LUNs) or one LUN with 4 physical spindles?
    4) Should the number of physical spindles in LUN be even numbered? Does it matter?

    user10437903 wrote:
    Use diskgroups with four or more disks, and making sure these disks span several backend disk adapters.
    This means that the LUNs (disks) should be created over multiple SCSI adapters in the storage box. EMCs have multiple SCSI channels to which disks are attached. Best practice says that the disks/LUNs you assign to a disk group should be spread over as many channels in the storage box as possible. This increases the bandwidth and therefore the performance.
    1) Recommended number of LUNs?
    As the best practice says, if possible, at least 4.
    2) Recommended size of LUNs?
    That depends on your situation. If you are planning a database of 100 GB, then a LUN size of 50 GB is a bit overkill.
    3) In the ASM Deployment Best Practice above, "four or more disks" for a diskgroup, is this referring to LUNs (4 LUNs) or one LUN with 4 physical spindles?
    LUNs; spindles only if you have access to physical spindles.
    4) Should the number of physical spindles in a LUN be even numbered? Does it matter?
    If you are using RAID 5, I'd advise keeping a 4+1 spindle allocation, but it might not be possible to realize that. It all depends on the storage solution and how far you can go in configuring it.
    Arnoud Roth

  • Adding lun to an existing storage pool

    I have problems adding a LUN to an existing storage pool.
    The storage pool is of the data-only type.
    The log file /var/run/xsancvupdatefsxsan.log indicates:
    Merging bitmap data (99%)
    Merging bitmap data (100%)
    Bitmap fragmentation: 1900004 chunks (0%)
    Bitmap fragmentation threshold exceeded. Aborting.
    Invalid argument
    Fatal: Failed to expand stripe group
    Check configuration and try again
    After run the defrag command to some folders the log indicates:
    Merging bitmap data (100%)
    Bitmap fragmentation: 1898528 chunks (0%)
    Bitmap fragmentation threshold exceeded. Aborting.
    Invalid argument
    Fatal: Failed to expand stripe group
    Check configuration and try again
    Do I need to run a complete defrag?
    Apple documentation indicates we need to delete the data before adding the new LUN because there's an issue (a very big issue!).
    http://docs.info.apple.com/article.html?artnum=303571
    Thanks for any help.
    Xsan 1.4, Mac OS X (10.4.8)

    Hi William, thanks for your answer.
    Right now my storage is 68% used.
    Do you think that, if I have < 60% used, the storage pool expansion will work?
    I prefer to add a LUN to an existing storage pool instead of creating a new storage pool, because adding a LUN adds bandwidth too.
    Thanks for any advice.
    CCL

  • Unable to detect iSCSI LUN size change on Solaris 10

    I resized (grew) an OpenFiler iSCSI LUN, but I cannot get Solaris 10 to pick up the change. Using type -> autoconfigure in format and relabeling the disk ought to do the trick, but it does not work. Does anybody know how to overcome this?

    Hmm, what filesystem do you have on the disk?
    .7/M.
