2-node RAC on Solaris

Hello all,
can you point me to a cookbook for the installation of
11g 2-node RAC on Solaris with ASM?
Also, can I use raw disks for ASM, or must I first configure some file system (ZFS, for example) and then place ASM on top of that?

DBCA does not allow the use of raw devices in Oracle 11gR2, and it is good to use ASM anyway. If you decide to use a shared file system instead, Oracle supports NAS, SAN, DAS, etc.; please read the documentation.
There is a lot of material on the internet that can help you install RAC on Solaris. If you are installing a production system, it is better to check the relevant documents on Metalink.
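
To expand on the raw-disk question: ASM consumes raw character devices directly, so you do not need (and should not use) ZFS or any other file system underneath it. On Solaris you typically cut a slice on each LUN and hand the character device to ASM. A minimal sketch, where the device names, owner, and group are assumptions for illustration only:

```
# give the ASM/grid owner access to the candidate slices (example devices)
chown grid:asmadmin /dev/rdsk/c2t1d0s4
chmod 660 /dev/rdsk/c2t1d0s4

# then point the ASM instance at them, for example:
# SQL> alter system set asm_diskstring = '/dev/rdsk/c2t*d0s4';
```

The 11gR2 restriction mentioned above only concerns DBCA creating database files directly on raw devices; ASM itself sitting on raw slices is the normal configuration.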

Similar Messages

  • Install problem: 2-node RAC on Solaris zones, single host

    Hi,
    Has anybody tried to install RAC using two zones located on the same physical machine as the RAC nodes?
    I ran into a problem after the installation, when it adds the daemons to inittab and tries to start the CRS stack.
    Even CSS didn't come up.
    It seems that the voting disk was formatted correctly.
    alerttest-zone.log:
    [client(20136)]CRS-1006:The OCR location /dev/rdsk/c0t0d0s0 is inaccessible. Details in /app/oracle/product/10.2.0/crs_1/log/test-zone/client/ocrconfig_20136.log.
    2009-11-19 10:41:14.587
    [client(20136)]CRS-1006:The OCR location /dev/rdsk/c0t0d0s0 is inaccessible. Details in /app/oracle/product/10.2.0/crs_1/log/test-zone/client/ocrconfig_20136.log.
    2009-11-19 10:41:14.636
    [client(20136)]CRS-1006:The OCR location /dev/rdsk/c0t0d0s0 is inaccessible. Details in /app/oracle/product/10.2.0/crs_1/log/test-zone/client/ocrconfig_20136.log.
    2009-11-19 10:41:14.936
    [client(20136)]CRS-1001:The OCR was formatted using version 2.
    2009-11-19 10:44:22.023
    [client(20190)]CRS-1801:Cluster crs configured with nodes test-zone test-zone2 .
    The OCR configuration also appears to have been created correctly:
    ocrconfig_20136.log:
    2009-11-19 10:41:14.685: [  OCROSD][1]utsz:3: ioctl DKIOCGAPART failed. errno [48]
    2009-11-19 10:41:14.686: [  OCRRAW][1]propriogid:1: INVALID FORMAT
    2009-11-19 10:41:14.785: [  OCRRAW][1]propriowv: Vote information on disk 0 [dev/rdsk/c0t0d0s0] is adjusted from [0/0] to [2/2]
    2009-11-19 10:41:14.936: [  OCRRAW][1]propriniconfig:No 92 configuration
    2009-11-19 10:41:14.936: [  OCRAPI][1]a_init:6a: Backend init successful
    2009-11-19 10:41:15.709: [ OCRCONF][1]Initialized DATABASE keys in OCR
    2009-11-19 10:41:16.019: [ OCRCONF][1]Successfully set skgfr block 0
    2009-11-19 10:41:16.022: [ OCRCONF][1]Exiting [status=success]...
    But the stack just won't start, beginning with the CSS service:
    css.log:
    2009-11-19 10:54:39.297: [  OCROSD][1]utsz:3: ioctl DKIOCGAPART failed. errno [48]
    2009-11-19 10:54:39.337: [ CSSCLNT][1]clsssInitNative: connect failed, rc 9
    2009-11-19 10:54:40.401: [  OCROSD][1]utsz:3: ioctl DKIOCGAPART failed. errno [48]
    2009-11-19 10:54:40.440: [ CSSCLNT][1]clsssInitNative: connect failed, rc 9
    Does anyone have any idea?
    Best Regards
    Radek

    I don't have an answer for you, but I do have a warning. Do not stop any RAC processes by hand; all of them are essential to the successful operation of RAC. If you stop one of these processes, it is likely that either the instance on that node will crash, or communication with the other node will be lost for a period of time. If that happens, the surviving node will assume the other node has failed and perform a failover, recovering in-flight transactions from the redo logs of the failed instance.
    All of this takes time, will impact performance, and will possibly kill off the sessions that were on the node you touched. Recovery from one node to another takes time, and the remaining node will force the failed node to shut down one way or another, using the underlying Clusterware capabilities to do so.
    So please do not stop a RAC process just because you want to back up some files in a file system. Do not touch the RAC processes at all; they are only meant to be controlled through RAC-specific commands and tools such as Enterprise Manager.
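    For completeness, the RAC-specific command-line tools here are srvctl and crsctl. A hedged sketch of the supported way to take components down (the database and instance names are placeholders):

    ```
    # stop a single instance cleanly, leaving the rest of the cluster up
    srvctl stop instance -d PROD -i PROD1

    # stop and start the whole CRS stack on one node (run as root)
    crsctl stop crs
    crsctl start crs
    ```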
    John

  • 2-node RAC: one 10gR2 node and one 11.2.0.3 node on Solaris 10.

    Is it possible to have a mixed-version 2-node RAC, with a 10gR2 database on one machine and an 11.2.0.3 database installed on the other? Has anyone done this?

    Hi,
    if you are talking about setting up a RAC with a 10g database running on one node and a different 11g database running on the other node, this is possible.
    You will have to use the newest clusterware/GI (11.2.0.3) and multiple Oracle Homes (one for 10g and one for 11.2.0.3).
    If, however, you want one database with two instances running different versions: then no.
    Regards
    Sebastian

  • 10g R2 RAC on Solaris 10 error when running root.sh

    Hi, I am installing Oracle 10g RAC on Solaris 10 with two nodes. At the end of the RAC installation, when I run root.sh, I get the error below.
    # /opt/crs/oracle/product/10.2.0/crs/root.sh
    Checking to see if Oracle CRS stack is already configured
    Setting the permissions on OCR backup directory
    Setting up NS directories
    Failed to upgrade Oracle Cluster Registry configuration

    I have done this install using a pair of T2000s and a 65XX SAN. The install documentation is not straightforward; I had to combine a number of Metalink notes to eventually get the disk partitions sorted. Of particular help was Note 271621.1.
    Since I am not a Solaris admin, the next bit is caveated as "this is how I did it" and maybe not how anyone else would have done it. These things were not explicitly noted, but gleaned through a bit of reading:
    1) You need to throw away the first cylinder of each LUN. I did this by creating the first partition as a single cylinder.
    2) Then create s4 (or whichever slice you desire) as the rest of the space allocated to that LUN.
    3) Repeat this for each LUN presented.
    4) I created a directory /san under which I then created the character-mode (raw) interface for each of OCR and vote.
    5) For ASM, and for no other reason than being self-documenting, I created /san/asm and created character-mode interfaces to each of the LUNs for datadg and fradg.
    6) Repeat steps 1-5 on the other box.
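    The slice layout from steps 1-3 can be sketched as a VTOC data file for fmthard; the sector numbers below are invented for illustration and must be derived from your own disk geometry (one cylinder for slice 0, the remainder for slice 4):

    ```
    # vtoc.txt: partition  tag  flag  start_sector  sector_count
    0  0  00        0      4096
    4  0  00     4096  16773120
    # apply to the whole-disk slice:  fmthard -s vtoc.txt /dev/rdsk/c2t1d0s2
    ```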
    It took me quite some time (two days) to extract all this and assemble it into a working system, but it has always been stable.
    Regards,
    Kevin Crowley
    Principal Consultant
    Pacific DBMS P/L

  • 10g R2 RAC on Solaris 10 with EMC Storage

    We are in the process of setting up 3/4 node RAC with the following components:
    Oracle 10g R2 RAC
    Oracle Clusterware / Sun Cluster / Veritas Cluster
    Sun Solaris 10
    EMC storage
    ASM/Cluster FS
    I would appreciate it if someone could throw some light on:
    * ) Is Veritas Cluster / Sun Cluster a must, or can I use Oracle Clusterware? What are the advantages and disadvantages of using Oracle Clusterware compared with Veritas Cluster or Sun Cluster?
    * ) Is a cluster file system a compulsory component, or can I use ASM instead of a cluster file system?
    * ) If I don't use a cluster file system, where do I put the CRS repository and voting disk?
    * ) What is the best option for ORACLE_HOME: a shared Oracle home or a separate one on each node?
    * ) Are there any known risks involved in using ASM? How is the I/O performance with ASM on EMC with Solaris? Are there any best practices?
    * ) Is GigE okay for the interconnect, or do I need to go for InfiniBand?
    * ) Are there any notes on best practices for the above components?
    * ) Do I need to consider a failover option for the NICs (interconnect and public)? If yes, how do I do that?
    * ) Are there any other risks I need to consider?
    Thanks
    G

    Hi,
    I see a lot of good input here. I have done a few RAC installs on Sun/Solaris/EMC.
    Here are few things to consider.
    * ) Is Veritas Cluster / Sun Cluster a must, or can I use Oracle Clusterware? What are the advantages and disadvantages compared with Veritas Cluster or Sun Cluster?
    Just stay with Oracle Clusterware. If there are any issues then you only have to deal with one vendor and there will be no finger pointing. In any case Oracle Clusterware is needed even if you install Veritas/Sun.
    * ) Is a cluster file system a compulsory component, or can I use ASM instead of a cluster file system?
    For the database you can use ASM. The only time I have considered a cluster file system is when external tables were in use.
    When you use ASM you need to partition the disk with a 1 MB offset, or start at cylinder 1.
    * ) If I don't use a cluster file system, where do I put the CRS repository and voting disk?
    The OCR and voting disk go on raw devices.
    * ) What is the best option for ORACLE_HOME: a shared Oracle home or a separate one on each node?
    Install ORACLE_HOME, ASM_HOME and CRS_HOME locally on each server.
    * ) Are there any known risks involved in using ASM? How is the I/O performance with ASM on EMC with Solaris? Are there any best practices?
    http://www.oracle.com/technology/products/database/asm/pdf/asm-on-emc-5_3.pdf
    We have always installed two HBAs and used PowerPath.
    * ) Is GigE okay for the interconnect, or do I need to go for InfiniBand?
    For the majority of cases GigE is sufficient.
    * ) Are there any notes on best practices for the above components?
    Have redundancy at each level.
    * ) Do I need to consider a failover option for the NICs (interconnect and public)? If yes, how do I do that?
    You can use IPMP. Use large send/receive buffers. Enable Jumbo Frames.
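    A hedged sketch of what a Solaris 10 IPMP group for the interconnect can look like; the interface names, addresses, and group name are placeholders, and jumbo-frame enablement is driver-specific:

    ```
    # /etc/hostname.ce0 -- active private NIC
    192.168.10.1 netmask + broadcast + group privipmp up

    # /etc/hostname.ce1 -- standby NIC in the same group
    group privipmp standby up

    # jumbo frames: enable in the NIC driver config (e.g. accept-jumbo=1
    # in ce.conf for the ce driver), then raise the MTU:
    # ifconfig ce0 mtu 9000
    ```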
    We had to apply some patches.
    5128575 - RAC install of 10.2.0.2 does not update libknlopt.a on all nodes
    4769197 - WHILE ONE NODE OF RAC IS DOWN, CONNECTIONS FROM CLIENT HANG
    patch 5749953
    Thanks
    G

  • Best way to create a 2 node RAC environment from existing setup

    Hello all,
    I have a 2-node RAC (10.2.0.3) running on Solaris 10 as my production database.
    We are planning to build another 2-node RAC for DEV purposes, also on 10.2.0.3.
    [Due to certain reasons this will act as PROD for a few weeks, so we need an exact copy of the DB.]
    I cannot afford any downtime.
    I am planning to:
    Install CRS and upgrade to 10.2.0.3
    Install RAC and upgrade to 10.2.0.3
    Duplicate the database using RMAN
    Are there any better ways to replicate the environment, for example using Grid Control (10.2.0.2) or Data Guard (or anything else)?
    TIA,
    JJ

    I don't think you're going to achieve no downtime, but if you get the DB copied to the 2nd cluster (using RMAN or whatever method you like) and apply all the logs, then your downtime should be able to be limited to the time it takes to apply the last log or two once you shutdown the primary site (a la Data Guard). That should also allow you to avoid data loss by applying the last logs (you'll likely have to manually copy and apply them). I agree with DbaKerber that using Data Guard may not be a bad solution here. You're not going to get 0 downtime, but I think it would be the safest way to have the shortest downtime window with no data loss.
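    The RMAN copy mentioned above can be done with a DUPLICATE; a rough 10g-style sketch, where the database name, pfile path, and channel are placeholders and the production backups must already be visible to the auxiliary node:

    ```
    # connected as: rman target sys@PROD auxiliary /
    run {
      allocate auxiliary channel aux1 device type disk;
      duplicate target database to DEVDB
        pfile=/u01/app/oracle/admin/DEVDB/pfile/initDEVDB1.ora;
    }
    ```

    After the duplicate finishes, the new database and its instances still have to be registered with the DEV cluster via srvctl.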

  • Oracle Database 11gR2 RAC on Solaris 10 using ACFS and ASM as storage

    Can I get a step-by-step document to install 11gR2 RAC on Solaris 10?
    My database is a two-node RAC and I am using ASM as storage, so I need a document that is easy to understand.
    thanks in advance

    Hi,
    Can I get a step-by-step document to install 11gR2 RAC on Solaris 10? My database is a two-node RAC and I am using ASM as storage.
    Refer to the link below:
    http://www.oraclemasters.in/?p=961
    Configure storage as per your requirement.
    thanks,
    X A H E E R

  • Data Guard setup from 2-node RAC to single instance (DR site)

    Dear Expert,
    I have a request from management to set up a DR site from the current production RAC database using Active Data Guard. I have a two-node RAC database (11.2.0.3) running on Sun Solaris machines. I need the proper steps, or a good document I can refer to, for setting up a single-instance standby database at the DR site from the production RAC database. I have only ever set up single instance to single instance. I would appreciate it if an expert could provide some links.
    Regard
    liang

    Hello;
    This will provide a good start and overview :
    Creating a Single Instance Physical Standby for a RAC Primary (please note the parameter changes for Oracle 11):
    http://www.oracle.com/technetwork/database/features/availability/maa-wp-10g-racprimarysingleinstance-131970.pdf
    As will this :
    http://oracleinstance.blogspot.com/2012/01/create-single-instance-standby-database.html
    Oracle 11
    Rapid Oracle RAC Standby Deployment: Oracle Database 11g Release 2
    http://www.oracle.com/technetwork/database/features/availability/maa-wp-11g-rac-standby-133152.pdf
    Best Regards
    mseberg

  • Multi-table INSERT with PARALLEL hint on 2 node RAC

    A multi-table INSERT statement with parallelism set to 5 works fine and spawns multiple parallel servers to execute. It's just that it sticks to only one instance of a 2-node RAC. The code I used is given below.
    create table t1 ( x int );
    create table t2 ( x int );
    insert /*+ APPEND parallel(t1,5) parallel (t2,5) */
    when (dummy='X') then into t1(x) values (y)
    when (dummy='Y') then into t2(x) values (y)
    select dummy, 1 y from dual;
    I can see multiple sessions using the query below, but on one instance only. This happens not only for the above statement but also for statements involving real tables (tables with more than 20 million records).
    select p.server_name,ps.sid,ps.qcsid,ps.inst_id,ps.qcinst_id,degree,req_degree,
    sql.sql_text
    from Gv$px_process p, Gv$sql sql, Gv$session s , gv$px_session ps
    WHERE p.sid = s.sid
    and p.serial# = s.serial#
    and p.sid = ps.sid
    and p.serial# = ps.serial#
    and s.sql_address = sql.address
    and s.sql_hash_value = sql.hash_value
    and qcsid=945
    Won't parallel servers be spawned across instances for multi-table insert with parallelism on RAC?
    Thanks,
    Mahesh

    Please take a look at these 2 articles below
    http://christianbilien.wordpress.com/2007/09/12/strategies-for-rac-inter-instance-parallelized-queries-part-12/
    http://christianbilien.wordpress.com/2007/09/14/strategies-for-parallelized-queries-across-rac-instances-part-22/
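    One 10g-era mechanism those articles discuss for controlling which instances run PX slaves is the instance_groups / parallel_instance_group parameter pair; the group and instance names below are illustrative only:

    ```
    # init.ora sketch: both instances serve the 'allnodes' group,
    # instance 1 additionally serves a 'node1only' group
    prod1.instance_groups='allnodes','node1only'
    prod2.instance_groups='allnodes'
    *.parallel_instance_group='allnodes'

    # restrict a session's PX slaves to instance 1 at runtime:
    # SQL> alter session set parallel_instance_group = 'node1only';
    ```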
    thanks
    http://swervedba.wordpress.com

  • Multiple copies of same database on a 2 node RAC server - How to merge ?

    I currently have multiple copies of the same database running on a 2-node RAC system. I am looking for a way to combine them into one large database while keeping the data separate.
    The databases are copies of production for testing, development, and a yearly "historical" database.
    All the databases are created from production and generally have the same schemas, tables, procedures, etc.; however, they may be different versions and need to be.
    Is there a way to use one large database and logically split all the different versions of the same objects into their own space in a single database? The structure cannot change, as the database is for a third party's Forms application that relies on the objects not changing names, etc.
    Ideally I am looking for a solution that will allow the Forms application to connect to the "test" and "historical" copies of our production database separately, in the same database container.
    Thanks for any direction.

  • Multiple databases/instances on 4-node RAC Cluster including Physical Standby

    OS: Windows 2003 Server R2 X64
    DB: 10.2.0.4
    Virtualization: NONE
    Node Configuration: x64 architecture - 4-Socket Quad-Core (16 CPUs)
    Node Memory: 128GB RAM
    We are planning the following on the above-mentioned 4-node RAC cluster:
    Node 1: DB1 with instanceDB11 (Active-Active: Load-balancing & Failover)
    Node 2: DB1 with instanceDB12 (Active-Active: Load-balancing & Failover)
    Node 3: DB1 with instanceDB13 (Active-Passive: Failover only) + DB2 with instanceDB21 (Active-Active: Load-balancing & Failover) + DB3 with instanceDB31 (Active-Active: Load-balancing & Failover) + DB4 with instance41 (Active-Active: Load-balancing & Failover)
    Node 4: DB1 with instanceDB14 (Active-Passive: Failover only) + DB2 with instanceDB22 (Active-Active: Load-balancing & Failover) + DB3 with instanceDB32 (Active-Active: Load-balancing & Failover) + DB4 with instance42 (Active-Active: Load-balancing & Failover)
    Note: DB1 will be the physical primary PROD OLTP database and will be open in READ-WRITE mode 24x7x365.
    Note: DB2 will be a Physical Standby of DB1 and will be open in Read-Only mode for reporting purposes during the day-time, except for 3 hours at night when it will apply the logs.
    Note: DB3 will be a Physical Standby of a remote database DB4 (not part of this cluster) and will be mounted in Managed Recovery mode for automatic failover/switchover purposes.
    Note: DB4 will be the physical primary Data Warehouse DB.
    Note: Going to 11g is NOT an option.
    Note: Data Guard broker will be used across the board.
    Please answer/advise of the following:
    1. Is the above configuration supported and why so? If not, what are the alternatives?
    2. Is the above configuration recommended and why so? If not, what are the recommended alternatives?

    Hi,
    As far as I understand, there's nothing wrong with the configuration, except that you need to consider the points below while implementing the final design.
    1. Number of CPUs on each server
    2. Memory on each server
    3. If you have a RAC physical standby, then apply (MRP0) will run on only one instance.
    4. Since you are configuring the physical standby on the 3rd and 4th nodes of DB1's 4-node cluster, where the DB13 and DB14 instances are used only for failover: if you have a disaster or a power failure in the entire data center, you lose both primary and standby (assuming they reside in the same data center), so it may not be a highly available architecture. If you use extended RAC for this configuration it makes more sense, with nodes 1 and 2 in data center A and nodes 3 and 4 in data center B.
    Thanks,
    Keyur

  • How to create 11.2.0.2 physical standby database from 2 node RAC (11.2.0.2)

    Hi,
    Can anyone please help me with how to manually create an 11.2.0.2 standalone physical standby database from a 2-node RAC (11.2.0.2) database running on RHEL5 with ASM?
    DB : 11.2.0.2
    OS : RHEL5
    RMAN duplicate is causing problems with the network, and we decided to go for manual creation instead.
    Thanks in Advance..

    Hi;
    Can anyone please help me with how to manually create an 11.2.0.2 standalone physical standby database from a 2-node RAC (11.2.0.2) database running with ASM?
    DB : 11.2.0.2
    OS : RHEL5
    I had a similar issue; here is what I did:
    1. Used the source ORACLE_HOME on the standby server
    2. Created a new ASM instance with the same naming
    3. Took a full RMAN backup on the source and moved it to the target
    4. Edited the init.ora to remove the RAC settings and restored the DB (also edited the listener file)
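    For step 4, the parameters to strip or change in the standby init.ora look roughly like this; the database and instance names are placeholders, and your Data Guard settings will differ:

    ```
    # disable RAC for the standalone standby
    cluster_database=false
    # remove the per-instance entries, e.g.:
    #   PROD1.instance_number=1
    #   PROD2.instance_number=2
    #   PROD1.thread=1
    #   PROD1.undo_tablespace=UNDOTBS1
    # and add standby role parameters, for example:
    db_unique_name=PRODSTBY
    log_archive_config='dg_config=(PROD,PRODSTBY)'
    fal_server=PROD
    standby_file_management=auto
    ```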
    Regard
    Helios

  • Oracle 10g 2-node RAC on servers A,B to 3-node 11gR2 RAC on new servers X,Y,Z

    Hi Gurus ,
    We have a business requirement to upgrade an existing Oracle 10.2.0.4 2-node RAC on servers A,B (HP-UX 11.31) to a 3-node Oracle 11gR2 RAC on new servers X,Y and Z (Linux, and the servers are from a different vendor).
    We don't have ASM; we have raw file systems.
    This has to be done with near-zero downtime, and it is a very busy OLTP system. GoldenGate is not an option, as management is not going for it this time.
    Storage is the same for everything. I am thinking of the following approaches; please let me know if you have a better plan or if you want to correct my existing one.
    Initially I thought of it this way, and I immediately answered myself:
    Plan A
    1) Storage-copy (BC etc.) the existing 10g RAC database files on A,B to new volumes allocated to servers X,Y and Z (I don't think this is possible since the OS is different; I am not sure whether copying this way works, and even if it does, whether the new OS can recognize these files)
    2) Upgrade the 2-node 10.2.0.4 on X,Y to 11gR2
    3) Add a database node on Z
    Plan B
    1) Build a brand-new 3-node 11gR2 RAC on X,Y and Z
    2) Plan how to replicate the data (this has to be a logical method)
    a) RMAN - very time-consuming, as this is a >50 TB database
    b) GoldenGate - not an option; even if we could use it, I see many logical issues with it
    c) expdp/impdp - forget about it :)
    d) Physical standby (the versions are different; if it were the same version and OS, it would have been the best bet)
    What would be the ideal way to do this with minimum downtime and without GoldenGate?
    I think something can be done along the lines of Plan A itself.
    Requesting your help Gurus
    Thanks

    Hi,
    Have you considered the possibility of setting up a logical standby and doing a rolling upgrade?
    Please have a look at the following:
    1) http://www.oracle.com/technetwork/database/features/availability/maa-wp-11g-transientlogicalrollingu-1-131927.pdf
    2) http://www.oracle.com/technetwork/database/features/availability/maa-wp-11g-upgrades-made-easy-131972.pdf
    If you can practice on a test setup, it looks like a faster option than the others.
    Regards,
    Ganadeva

  • Converting a single-instance database (11gR2) supporting EBS 12.1.3 to 2-node RAC on RHEL

    Hi, we are in the process of converting a single-instance database (11gR2) supporting EBS 12.1.3 to a 2-node RAC on RHEL. Which version of RHEL is better? If it's 6.x, then which x? The Oracle document only says RHEL update 6.
    thanks in advance.

    Hi,
    Yes, you can use any version, but I recommend you use 6.2.
    Also refer to my post on RAC migration in EBS:
    Apps DBA Workshop: Using 11gR2 RAC with Oracle Applications R12.1.1
    Hope this helps
    thanks,
    X A H E E R
