RAC Questions

Hello,
I have a couple of questions:
1) In a two-node RAC, if we lose the interconnect between the two nodes, which node remains part of the cluster and which node is evicted? (I think the one with the lowest node number will survive, but I am not sure what "lowest node number" means.)
2) At any given point in time, how do we know which node is the master node in a multi-node RAC?
3) I have regular backups of my OCR and voting disks, but if we lose both the current copies and the backup copies, how do we recover from that scenario? Do we need to rebuild the cluster, or do we have any other alternative?
4) If my OCR is on ASM and I want to take a manual backup using the dd command, is that possible? dd works at the OS level and doesn't understand an ASM path name in "if=", so how do we do this?
Thanks

925967 wrote:
Hello,
I have a couple of questions:
1) In a two-node RAC, if we lose the interconnect between the two nodes, which node remains part of the cluster and which node is evicted?
The node that was added to the cluster first, which means node #1, survives.
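To see the node numbers Clusterware has assigned, you can run (a quick sketch; the output format varies slightly by version):
olsnodes -n
This lists each node name together with its node number.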
2) At any given point in time, how do we know which node is the master node in a multi-node RAC?
You can use grep -i "master node" ocssd.log to find the master node. Another common assumption is that the node holding the OCR backups is the master node, but that is not always true.
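For example, with a typical 11g-style Grid home log layout (the path below is an assumption; adjust it to your own installation):
grep -i "master node" $GRID_HOME/log/<hostname>/cssd/ocssd.log*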
3) I have regular backups of my OCR and voting disks, but if we lose both the current copies and the backup copies, how do we recover from that scenario?
If you have lost the OCR and have no backup, there is still a way to recover it, and it is documented on MOS. I don't have the note handy, so you will have to search for it. I shall see if I can find it and update the reply later.
Update: have a look at MOS note 399482.1.
4) If my OCR is on ASM and I want to take a manual backup using dd, is that possible?
OCR backup with dd? Are you sure you are talking about the right thing? The only place dd was used in previous versions was for the voting disk; I don't recall it ever being used for the OCR, which is always backed up automatically. In the context of the voting disk the answer is that you can't do it: Oracle has clearly stated that attempting it would end up corrupting the voting disk, and if you attempt it for the OCR, I believe the answer remains the same.
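If the goal is simply a manual OCR backup while the OCR is stored in ASM, a sketch of the supported alternatives (run as root; 11.2-style syntax assumed):
ocrconfig -manualbackup               (on-demand physical backup to the default backup location)
ocrconfig -export /tmp/ocr_backup.exp (logical export to a file you name)
ocrconfig -showbackup                 (list existing automatic and manual backups)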
HTH
Aman....
Edited by: Aman.... on Aug 3, 2012 9:56 AM

Similar Messages

  • Oracle RAC Question

    Question about Oracle RAC. Let's say we have a 3-node RAC database and our tnsnames.ora file is configured to point to node 1. If an application is connected to the database using the connection information in the tnsnames.ora file (pointing to node 1), and node 1 goes down, how does the application know to connect to node 2 or node 3 instead?

    If you didn't configure node2 and node3 as failover addresses, at best only the currently connected sessions would be failed over to the other nodes.
    New connections are no longer possible.
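    For new connections to survive a node outage, the connect descriptor has to list all the nodes (or, on 11.2, the SCAN). A minimal sketch of a multi-address entry with connect-time failover and TAF - the host names and service name here are illustrative assumptions, not taken from the post:
    MYDB =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (LOAD_BALANCE = ON)
          (FAILOVER = ON)
          (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
          (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
          (ADDRESS = (PROTOCOL = TCP)(HOST = node3-vip)(PORT = 1521))
        )
        (CONNECT_DATA =
          (SERVICE_NAME = mydb)
          (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC))
        )
      )
    With an entry like this, a new connection attempt simply tries the next address when node 1 is unreachable, and already-connected sessions using TAF can fail over their SELECTs.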
    Sybrand Bakker
    Senior Oracle DBA
    Oracle is not about rocket science. It is about being able and willing to read documentation.

  • RAC question

    Hi,
    Version 11g
    I have RAC 11gR2 on two nodes with ASM.
    The DB name is : cstrprd
    On the first machine the asm instance is : +ASM1 and the instance name is cstrprd1
    On the second machine the asm instance is : +ASM2 and the instance name is cstrprd2
    The question is regarding /etc/oratab
    # This file is used by ORACLE utilities.  It is created by root.sh
    # and updated by the Database Configuration Assistant when creating
    # a database.
    # Multiple entries with the same $ORACLE_SID are not allowed.
    +ASM2:/u01/app/11.2.0/grid:N            # line added by Agent
    cstrprd:/u01/app/oracle/product/11.2.0/CSTOREPRD11gR2:N         # line added by Agent
    The last two lines were created by root.sh.
    I would like to know if it's OK that the name of the DATABASE (cstrprd) was written to /etc/oratab as you can see above,
    and not the name of the INSTANCE, as follows:
    cstrprd2:/u01/app/oracle/product/11.2.0/CSTOREPRD11gR2:N
    Thanks

    Hi,
    if your entry is
    cstrprd:/u01/app/oracle/product/11.2.0/CSTOREPRD11gR2:Y
    then if the node reboots it will start the entire database, including both instances (cstrprd1, cstrprd2).
    If instead your entry is
    cstrprd1:/u01/app/oracle/product/11.2.0/CSTOREPRD11gR2:Y
    then if the node reboots only the cstrprd1 instance will come up on the first node, and vice versa.
    Note: using the instance name in oratab is also helpful when using . oraenv:
    . oraenv
    ORACLE_SID = [oracle] ? cstrprd1
    Just give the instance name and it will pick up the ORACLE_HOME automatically; otherwise you have to supply ORACLE_HOME manually as well.
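    To double-check which database and instance names Clusterware itself knows about (and will start), you can also ask srvctl - the database name below is taken from the post; the exact output varies by version:
    srvctl config database -d cstrprd
    srvctl status database -d cstrprd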

  • Solaris 10 + Oracle 10gR2 RAC question

    Hello everyone
    Has anyone come across the case where the CRS services of Oracle cause
    the public interface to get turned off and then restored at random
    time intervals? To elaborate, we have a 2 node cluster database.
    Solaris 10, Oracle 10gR2 RAC with patch 10.2.0.3 applied. No SUN
    clustering is involved. When the cluster software is down (nodeapps,
    asm, database instances all down) /var/adm/messages show nothing. When
    we start nodeapps on the 2 nodes(thus initiating some form of
    communication between the nodes), at random time intervals we get
    "interface ce0 turned off and interface ce0 restored" in /var/adm/
    messages. When we check the status of the RAC, we see that one node's
    vip has been assigned to the other. This on/off behaviour of the NIC
    can be eliminated only if we continuously PING it from another
    client on the network.
    As a matter of fact, the RAC and the RDBMS work perfectly when we keep
    pinging the 2 nodes from another client on the network. We even
    managed to run a long batch job, distributed on cluster managed
    services on the 2 instances, and it completed after 9 hours without
    any problems.
    Does anyone have a hint on this behaviour? Is there some sort of
    timeout for the network cards? Some power saving features? Googling
    around I came across the new Containers feature available on Solaris
    10. Is there a way that I can verify that either RAC or the RDBMS is
    running in "container" mode ( since the solaris and Oracle
    installation was not performed by me)? Any other ideas?
    Thank you for reading

    I'm an Oracle guy, not the SA type.
    But on ours, the SA configured this cluster incorrectly. We use Veritas; instead of making IPMP groups for the interfaces, he built the cluster according to the Veritas docs. That is, he has two public addresses on different interfaces and different private addresses on different interfaces. Oracle can only use two interfaces, no matter whether each is an IPMP group or a device name: one is used for private, the other for public. So sure, the Veritas cluster filesystems will survive, but the Oracle cluster will not; nodes will reboot.
    Is your system set up incorrectly as I described above? If it is, a quick test would be to bring down the other interfaces and leave up only the two interfaces you mention above that you configured for Oracle CRS.
    The other sharp SA was able to go through the ARP table and see duplicate IPs, and routing was being attempted via an interface that Oracle does not see. You cannot define two different interfaces for public and two different interfaces for private.
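    To confirm which two interfaces CRS has actually registered, a quick check from the CRS home (a sketch; the exact path and output vary by version):
    $ORA_CRS_HOME/bin/oifcfg getif
    This lists each registered interface with its subnet and whether it is marked public or cluster_interconnect; traffic routed over an interface that is not in this list is invisible to Oracle Clusterware.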

  • A non-RAC question for the experts: Clusterware + ASM

    Hi
    We have a 2-nodes cluster in production.
    It's an active/passive cluster with a cold failover using SUN Cluster.
    There is only one Oracle 10g single-instance database (DB1), running on node #1, that uses ASM to access the SAN (raw devices).
    We want to add a third node to move to a 3-node cluster, which would have 2 separate single-instance Oracle 10g databases:
    node #1: DB1 with ASM (the existing database, existing raw devices)
    node #2: cluster passive node
    node #3: DB2 with ASM (new additional database, new raw devices)
    All nodes are connected to the same SAN.
    We' want to use Clusterware instead of SUN Cluster.
    The single-instance databases, the listeners and the ASM instances would be protected by Clusterware using "application VIP" (not oracle-vip).
    It's not RAC! But in the future both databases might become RAC (that's not the need today)...
    My concern is about ASM!
    Do I have to install an ASM instance on each node (just like in RAC)?
    Or may I just configure Clusterware to ensure there is only one ASM instance running on a node?
    Is it possible to have the ASM instance of node #1 and the ASM instance of node #2 access the same SAN and raw devices?
    In case DB1 & DB2 fail over to node #2, the single ASM instance there must see the raw devices of both databases; is this possible?
    Do I have to destroy the current ASM configuration of DB1 to build the new one, or can I just add new raw devices and share them across all nodes?
    Thanks for your answers.

    Hi,
    Thanks for the answer
    I found this very interesting document but it does not go into enough detail about ASM, which is my concern.
    I found a statement in the Oracle 11g documentation saying that clustered ASM instances serving single-instance databases do not need to talk to each other as long as they do not handle the same diskgroup (Figure 1.3; http://download.oracle.com/docs/cd/B28359_01/server.111/b31107/asmcon.htm#i1021337). Otherwise, ASM must be clustered. It would also be a problem for me when DB1 and DB2 are on the same node: I would need two different ASM instances on the same node, which is not a good situation.
    What happens when I start an ASM instance on node #3 to access raw devices already in use by node #2?
    I think I need to start from node #1 and change some ASM configuration to make it cluster-aware.
    Next, I would add a second clustered ASM instance on node #2, and so on, to make sure that all ASM instances on all nodes can access the same diskgroup, and therefore be able to re-use the existing database on node #1 as well as create a new one on node #2 (all on the same SAN).
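    One quick way to see whether an existing ASM instance is already running cluster-aware is to check its cluster_database parameter (a sketch; the SID is assumed):
    export ORACLE_SID=+ASM1
    sqlplus / as sysdba
    SQL> show parameter cluster_database
    A value of FALSE means it is a plain single-instance ASM and would have to be reconfigured (with CSS/Clusterware in place) before several nodes can mount the same diskgroup.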

  • OWB RAC question

    Our source and target databases are RAC servers. Is it possible to have OWB installed as single node and still use RAC capabilities of source and target database ?
    Basically we are hoping to install OWB as single node and define source/target locations using sql*net. Has anyone tried this ? Does it work ?
    Let me know.
    Thanks
    Madhavi

    Hi,
    We also have a 6-node RAC where we set up OWB.
    We went through a whole lot of problems but finally got it running (with service failover).
    Basically you will need the binaries on all 4 of your physical nodes.
    On the first node you can run through the normal installation (repository setup).
    Then you need to start the Repository Assistant on all the other nodes in order to register them.
    There are a lot of pitfalls with that installation and a lack of documentation.
    We've written an installation guide for an OWB RAC setup, with which we have now set up 9 OWB repositories spread over 2 RACs.
    Just let me know if you require detailed information - maybe we can share our guide.

  • Newbie RAC question

    I have 3 production database servers with 2 databases on each server (6 databases total). Can I set up a RAC environment with just 1 additional node/server? Meaning, can I have that clustered server serve all 6 database services while remaining transparent to the app? I hope this makes sense.
    Thanks.

    Hi,
    Yes, you can cluster the 3 servers that you have and cluster all the instances.
    cheers

  • ASM + RAC questions

    How many DBs should I have in a 2-node RAC environment? How does this work?
    What are the options to start and shut down ASM?

    Do you have 30 single-instance databases that you are looking to place into your two-node cluster?
    What are the memory and processor requirements of your 30 databases? Does your cluster have suitable memory and processors to handle all 30 databases?

  • Configuring our RAC environment Questions

    The environment consists of Sun Solaris 10, Veritas, and 10g RAC:
    Questions:
    I need to know the settings and configuration of the entire software stack that will be the foundation of the oracle RAC environment....Network configurations, settings and requirements for any networks including the rac network between servers
    How to set up the solaris 10k structures: what goes into the global zones, the containers, the resource groups, RBAC roles, SMF configuration, schedulers?
    Can we use zfs, and if so, what configuration, and what settings?
    In addition, these questions I need answers to:
    What I am looking for is:
    -- special hardware configuration issues, in particular the server RAC interconnect: do we need a hub, a switch, or crossover cables, and configured how?
    -- Operating System versions and configuration. If it is Solaris 10, then there are more specific requirements: how to handle smf, containers, kernel settings, IPMP, NTP, RBAC, SSH, etc.
    -- Disk layout on SAN, including a design for growth several years out: what are the file systems with the most contention, most use, command tag depth issues etc. (can send my questionnaire)
    -- Configuration settings / best practices for the Foundation suite for RAC and Volume Manager
    -- How to test and Tune the Foundation suite settings for thru-put optimization. I can provide stats from the server and the san, but how do we coordinate that with the database.
    -- How to test RAC failover -- what items will be monitored for failover that need to be considered from the server perspective.
    -- How to test data guard failures and failover -- does system administration have to be prepared to help out at all?
    -- How to configure Netbackup --- backups

    Answering all these questions accurately and correctly for your implementation might be a bit much for a forum posting.
    First I'd recommend accessing the Oracle documentation on otn.oracle.com. This should give you the basics about what is supported for the environment you're looking to set up, and go a long way toward answering your detailed questions.
    Then I'd break this down into smaller sets of specific questions and try to get the RAC experts on the RAC forum to help out.
    See: Community Discussion Forums » Grid Computing » Real Application Clusters
    Finally, Oracle Support via Metalink should be able to fill in any gaps in the documentation.
    Good luck on your project,
    Tony

  • RAC Installation Error CLUSTEREXCEPTION

    Dear all RAC Gurus
    I have 2 machines / nodes RAC1 and RAC2
    I have a problem when installing Oracle RAC (because I'm a newbie in RAC)... when I specified the cluster configuration cluster name, I got this error:
    =================================================
    The node(s), oradbn1-priv,oradbn2-priv, does not appear to be
    reachable via the private node name. The status return by the node
    availability check is :
    CLUSTEREXCEPTION
    Please check that all the nodes in the node list are reachable via both
    their public and private node names.
    =================================================
    My /etc/hosts
    on RAC node 1 [the value is same as RAC node 2]
    =================================================
    [root@rac1 ~]# cat /etc/hosts
    # Do not remove the following line, or various programs
    # that require network functionality will fail.
    127.0.0.1 localhost.localdomain localhost
    #::1 localhost6.localdomain6 localhost6
    192.168.184.201 rac1.gis.com rac1
    192.168.184.202 rac1-vip.gis.com rac1vip
    192.168.1.202 rac1-prv.gis.com rac1prv
    192.168.184.211 rac2.gis.com rac2
    192.168.184.212 rac2-vip.gis.com rac2vip
    192.168.1.212 rac2-prv.gis.com rac2prv
    =================================================
    on RAC node 1 ifconfig
    =================================================
    [root@rac1 ~]# ifconfig
    eth0 Link encap:Ethernet HWaddr 00:0C:29:94:E2:69
    inet addr:192.168.184.201 Bcast:192.168.184.255 Mask:255.255.255.0
    inet6 addr: fe80::20c:29ff:fe94:e269/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    RX packets:9457 errors:0 dropped:0 overruns:0 frame:0
    TX packets:10724 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1000
    RX bytes:2042640 (1.9 MiB) TX bytes:1159047 (1.1 MiB)
    Interrupt:67 Base address:0x2024
    eth0:1 Link encap:Ethernet HWaddr 00:0C:29:94:E2:69
    inet addr:192.168.184.202 Bcast:192.168.184.255 Mask:255.255.255.0
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    Interrupt:67 Base address:0x2024
    eth1 Link encap:Ethernet HWaddr 00:0C:29:94:E2:73
    inet addr:192.168.1.202 Bcast:192.168.1.255 Mask:255.255.255.0
    inet6 addr: fe80::20c:29ff:fe94:e273/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    RX packets:809 errors:0 dropped:0 overruns:0 frame:0
    TX packets:1070 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1000
    RX bytes:125940 (122.9 KiB) TX bytes:150904 (147.3 KiB)
    Interrupt:67 Base address:0x20a4
    lo Link encap:Local Loopback
    inet addr:127.0.0.1 Mask:255.0.0.0
    inet6 addr: ::1/128 Scope:Host
    UP LOOPBACK RUNNING MTU:16436 Metric:1
    RX packets:16936 errors:0 dropped:0 overruns:0 frame:0
    TX packets:16936 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:6013187 (5.7 MiB) TX bytes:6013187 (5.7 MiB)
    =================================================
    SSH login between the nodes is OK for each node.
    Waiting online!!!
    Could RAC gurus solve the problem???
    Thank so Much for any help!!!

    user3113543 wrote:
    Dear all RAC Gurus
    I have 2 machines / nodes RAC1 and RAC2
    I have a problem when installing Oracle RAC (because I'm a newbie in RAC)... when I specified the cluster configuration cluster name, I got this error:
    =================================================
    The node(s), oradbn1-priv,oradbn2-priv, does not appear to be reachable via the private node name.
    First, when asking RAC questions, ALWAYS include the version. Show me in your /etc/hosts file where these names exist????
    Before proceeding, from ALL nodes in the cluster make sure you can successfully connect using the IP address AND the hostname AND the hostname.FQN.com name, using the following commands:
    ssh <ipaddress> date
    ssh <hostname> date
    etc...
    The result should be the date and possibly the contents of /etc/issue.
    This should not prompt you to add the host to the known_hosts file (if it does, answer YES), nor should it prompt you for a password.
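    A compact sketch of that check as a loop, using the short names from the /etc/hosts above (note that the names the installer is complaining about, oradbn1-priv and oradbn2-priv, do not appear in that file at all - they must either be added there or corrected on the installer screen):
    for h in rac1 rac1prv rac2 rac2prv
    do
      ssh $h date
    done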
    # that require network functionality will fail.
    127.0.0.1 localhost.localdomain localhost
    #::1 localhost6.localdomain6 localhost6
    192.168.184.201 rac1.gis.com rac1
    192.168.184.202 rac1-vip.gis.com rac1vip
    192.168.1.202 rac1-prv.gis.com rac1prv
    192.168.184.211 rac2.gis.com rac2
    192.168.184.212 rac2-vip.gis.com rac2vip
    192.168.1.212 rac2-prv.gis.com rac2prv
    Could RAC gurus solve the problem???
    Yes, WE can, but if you can't, then your system will be of little use to you...
    Thank so Much for any help!!!

  • RAC management utilities

    srvctl
    srvconfig
    EM
    crsctl
    gsdctl
    What other utilities can we use to manage RAC DBs? Does anybody have some notes on RAC utilities?

    If you go to http://forums.oracle.com you may note a 'topic' titled 'Grid Computing' with a number of RAC-related forums. You may ALSO get relevant answers to your RAC question by asking people who are heavily interested in RAC.
    In the meantime - your list misses SQL*Plus and RMAN, to name a few.
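    A few illustrative status commands from those tools (a sketch; the database and node names are assumptions, and the exact options differ between 10g and 11g):
    crsctl check crs                    (overall Clusterware health)
    crs_stat -t                         (tabular resource status, pre-11.2)
    srvctl status database -d orcl     (instance status for database orcl)
    srvctl status nodeapps -n node1    (VIP/listener/GSD/ONS status on a node)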

  • How to create an ASM instance manually? Oracle 11gR2.

    Environment: Oracle 11gR2; OS: HP-UX or AIX; single machine, not RAC.
    Question: how do I create an ASM instance manually? Can the diskgroup, listener, and DB be registered with CRS?
    Can anyone point me to a document about this?

    Hi,
    This is a simple answer:
    Automatic Storage Management (ASM)
    ASM was a new storage option introduced with Oracle Database 10gR1 that provides the services of a filesystem, logical volume manager, and software RAID in a platform-independent manner. ASM can stripe and mirror your disks, allow disks to be added or removed while the database is under load, and automatically balance I/O to remove "hot spots." It also supports direct and asynchronous I/O and implements the Oracle Data Manager API (simplified I/O system call interface) introduced in Oracle9i.
    ASM is not a general-purpose filesystem and can be used only for Oracle data files, redo logs, and control files. Files in ASM can be created and named automatically by the database (by use of the Oracle Managed Files feature) or manually by the DBA. Because the files stored in ASM are not accessible to the operating system, the only way to perform backup and recovery operations on databases that use ASM files is through Recovery Manager (RMAN).
    ASM is implemented as a separate Oracle instance that must be up if other databases are to be able to access it. Memory requirements for ASM are light: only 64 MB for most systems.
    Installing ASM
    On Linux platforms, ASM can use raw devices or devices managed via the ASMLib interface. Oracle recommends ASMLib over raw devices for ease-of-use and performance reasons. ASMLib 2.0 is available for free download from OTN. This section walks through the process of configuring a simple ASM instance by using ASMLib 2.0 and building a database that uses ASM for disk storage.
    Determine Which Version of ASMLib You Need
    ASMLib 2.0 is delivered as a set of three Linux packages:
    * oracleasmlib-2.0 - the ASM libraries
    * oracleasm-support-2.0 - utilities needed to administer ASMLib
    * oracleasm - a kernel module for the ASM library
    Each Linux distribution has its own set of ASMLib 2.0 packages, and within each distribution, each kernel version has a corresponding oracleasm package. The following paragraphs describe how to determine which set of packages you need.
    First, determine which kernel you are using by logging in as root and running the following command:
    uname -rm
    Ex:
    # uname -rm
    2.6.9-22.ELsmp i686
    The example shows that this is a 2.6.9-22 kernel for an SMP (multiprocessor) box using Intel i686 CPUs.
    Use this information to find the correct ASMLib packages on OTN:
    1. Point your Web browser to http://www.oracle.com/technology/tech/linux/asmlib/index.html
    2. Select the link for your version of Linux.
    3. Download the oracleasmlib and oracleasm-support packages for your version of Linux
    4. Download the oracleasm package corresponding to your kernel. In the example above, the oracleasm-2.6.9-22.ELsmp-2.0.0-1.i686.rpm package was used.
    Next, install the packages by executing the following command as root:
    rpm -Uvh oracleasm-kernel_version-asmlib_version.cpu_type.rpm \
    oracleasmlib-asmlib_version.cpu_type.rpm \
    oracleasm-support-asmlib_version.cpu_type.rpm
    Ex:
    # rpm -Uvh \
    > oracleasm-2.6.9-22.ELsmp-2.0.0-1.i686.rpm \
    > oracleasmlib-2.0.1-1.i386.rpm \
    > oracleasm-support-2.0.1-1.i386.rpm
    Preparing... ########################################### [100%]
    1:oracleasm-support ########################################### [ 33%]
    2:oracleasm-2.6.9-22.ELsm########################################### [ 67%]
    3:oracleasmlib ########################################### [100%]
    Configuring ASMLib
    Before using ASMLib, you must run a configuration script to prepare the driver. Run the following command as root, and answer the prompts as shown in the example below.
    # /etc/init.d/oracleasm configure
    Configuring the Oracle ASM library driver.
    This will configure the on-boot properties of the Oracle ASM library
    driver. The following questions will determine whether the driver is
    loaded on boot and what permissions it will have. The current values
    will be shown in brackets ('[]'). Hitting <ENTER> without typing an
    answer will keep that current value. Ctrl-C will abort.
    Default user to own the driver interface []: oracle
    Default group to own the driver interface []: dba
    Start Oracle ASM library driver on boot (y/n) [n]: y
    Fix permissions of Oracle ASM disks on boot (y/n) [y]: y
    Writing Oracle ASM library driver configuration: [  OK  ]
    Creating /dev/oracleasm mount point: [  OK  ]
    Loading module "oracleasm": [  OK  ]
    Mounting ASMlib driver filesystem: [  OK  ]
    Scanning system for ASM disks: [  OK  ]
    Next you tell the ASM driver which disks you want it to use. Oracle recommends that each disk contain a single partition for the entire disk. See Partitioning the Disks at the beginning of this section for an example of creating disk partitions.
    You mark disks for use by ASMLib by running the following command as root:
    /etc/init.d/oracleasm createdisk DISK_NAME device_name
    Tip: Enter the DISK_NAME in UPPERCASE letters.
    Ex:
    # /etc/init.d/oracleasm createdisk VOL1 /dev/sdb1
    Marking disk "/dev/sdb1" as an ASM disk: [  OK  ]
    # /etc/init.d/oracleasm createdisk VOL2 /dev/sdc1
    Marking disk "/dev/sdc1" as an ASM disk: [  OK  ]
    # /etc/init.d/oracleasm createdisk VOL3 /dev/sdd1
    Marking disk "/dev/sdd1" as an ASM disk: [  OK  ]
    Verify that ASMLib has marked the disks:
    # /etc/init.d/oracleasm listdisks
    VOL1
    VOL2
    VOL3
    Create the ASM Instance
    ASM runs as a separate Oracle instance which can be created and configured using the Oracle Universal Installer. Now that ASMLib is installed and the disks are marked for use, you can create an ASM instance.
    Log in as oracle and start runInstaller:
    $ ./runInstaller
    1. Select Installation Method
    * Select Advanced Installation
    * Click on Next
    2. Specify Inventory Directory and Credentials
    * Inventory Directory: /u01/app/oracle/oraInventory
    * Operating System group name: oinstall
    * Click on Next
    3. Select Installation Type
    * Select Enterprise Edition
    * Click on Next
    4. Specify Home Details
    * Name: OraDB10gASM
    * Path: /u01/app/oracle/product/10.2.0/asm
    Note:Oracle recommends using a different ORACLE_HOME for ASM than the ORACLE_HOME used for the database for ease of administration.
    * Click on Next
    5. Product-specific Prerequisite Checks
    * If you've been following the steps in this guide, all the checks should pass without difficulty. If one or more checks fail, correct the problem before proceeding.
    * Click on Next
    6. Select Configuration Option
    * Select Configure Automatic Storage Management (ASM)
    * Enter the ASM SYS password and confirm
    * Click on Next
    7. Configure Automatic Storage Management
    * Disk Group Name: DATA
    * Redundancy
    - High mirrors data twice.
    - Normal mirrors data once. This is the default.
    - External does not mirror data within ASM. This is typically used if an external RAID array is providing redundancy.
    * Add Disks
    The disks you configured for use with ASMLib are listed as Candidate Disks. Select each disk you wish to include in the disk group.
    * Click on Next
    8. Summary
    * A summary of the products being installed is presented.
    * Click on Install.
    9. Execute Configuration Scripts
    * At the end of the installation, a pop up window will appear indicating scripts that need to be run as root. Login as root and run the indicated scripts.
    * Click on OK when finished.
    10. Configuration Assistants
    * The Oracle Net, Oracle Database, and iSQL*Plus configuration assistants will run automatically
    11. End of Installation
    * Make note of the URLs presented in the summary, and click on Exit when ready.
    12. Congratulations! Your new Oracle ASM Instance is up and ready for use.
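    If you would rather create the ASM instance entirely by hand instead of through runInstaller/DBCA, a rough sketch of the manual route - the parameter values, SID and disk names are illustrative assumptions, and this presumes CSS (Grid Infrastructure for a standalone server on 11gR2) is already running:
    # init+ASM.ora - minimal ASM instance parameters
    INSTANCE_TYPE=ASM
    ASM_DISKSTRING='ORCL:*'
    REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
    Then start the instance and create a diskgroup:
    export ORACLE_SID=+ASM
    sqlplus / as sysasm
    SQL> STARTUP NOMOUNT PFILE='init+ASM.ora';
    SQL> CREATE DISKGROUP DATA NORMAL REDUNDANCY DISK 'ORCL:VOL1','ORCL:VOL2','ORCL:VOL3';
    On 11gR2 the configuration assistants normally register ASM, the listener and the database with Oracle Restart/CRS for you; srvctl add database and srvctl add listener exist for registering them manually.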
    Kind regards
    Mohamed

  • Protocol

    Hi,
    I am using the t3 protocol to connect to the server in BEA WebLogic. Now
    I am porting my beans to iPlanet; how can I make use of the attached
    piece of Java code? The file makes use of the table called "tu_lookup".
    java file
    package com.se.util;
    import javax.naming.*;
    import java.util.*;
    import javax.sql.*;
    import java.sql.*;
    import javax.ejb.EJBException;
    // @author Author
    public class Lookup extends Object
    // @roseuid 39EAE91F015B
    public static java.lang.Object getHomeObject(String lookUpID) throws
    java.rmi.RemoteException, EJBException
    Connection con = null;
    PreparedStatement ps = null;
    String initialContextFactory = null;
    String providerURL = null;
    String securityPrincipal = null;
    String securityCredentials = null;
    String jndiName = null;
    try {
    con = getConnection();
    ps = con.prepareStatement("SELECT * FROM tu_lookup WHERE lookUpID = ?");
    ps.setString(1, lookUpID);
    ResultSet rs = ps.executeQuery();
    //ResultSet rs = ps.getResultSet();
    if (rs.next()) {
    initialContextFactory = rs.getString("initialContextFactory");
    providerURL = rs.getString("providerURL");
    securityPrincipal = rs.getString("securityPrincipal");
    securityCredentials = rs.getString("securityCredentials");
    jndiName = rs.getString("jndiName");
    } else {
    String error = lookUpID + " not found in table tu_lookup";
    System.out.println(error);
    throw new EJBException (error);
    } catch (SQLException sqe) {
    System.out.println("SQLException: " + sqe);
    throw new EJBException(sqe);
    finally {
    cleanup(con, ps);
    Object homeObj =null;
    try {
    // Get an InitialContext
    Properties h = new Properties();
    h.put(Context.INITIAL_CONTEXT_FACTORY, initialContextFactory);
    h.put(Context.PROVIDER_URL, providerURL);
    if (securityPrincipal != null) {
    h.put(Context.SECURITY_PRINCIPAL, securityPrincipal);
    if (securityCredentials == null)
    securityCredentials = "";
    h.put(Context.SECURITY_CREDENTIALS, securityCredentials);
    Context ctx = new InitialContext(h);
    homeObj = ctx.lookup(jndiName);
    } catch (NamingException ne) {
    //call the exception handler
    System.out.println("NamingException: " + ne);
    throw new EJBException(ne);
    return homeObj;
    * Gets current connection to the connection pool.
    * @return Connection
    * @exception javax.ejb.EJBException
    * if there is a communications or systems failure
    public static Connection getConnection()
    throws SQLException
    String poolName = null;
    InitialContext initCtx = null;
    try {
    ResourceBundle rs =
    ResourceBundle.getBundle("com.se.properties.Lookup");
    Hashtable p = new Hashtable();
    p.put(Context.INITIAL_CONTEXT_FACTORY,
    rs.getString("INITIAL_CONTEXT_FACTORY"));
    p.put(Context.PROVIDER_URL, rs.getString("PROVIDER_URL"));
    poolName = rs.getString("CONNECTION_POOL_NAME");
    initCtx = new InitialContext(p);
    DataSource ds = (javax.sql.DataSource) initCtx.lookup(poolName);
    return ds.getConnection();
    } catch(MissingResourceException mre) {
    System.out.println("UNABLE to find the Lookup.properties file");
    throw new EJBException(mre);
    } catch(NamingException ne) {
    System.out.println("UNABLE to get a connection from "+poolName+"!");
    System.out.println("Please make sure that you have setup the connection
    pool properly");
    throw new EJBException(ne);
    } finally {
    try {
    if(initCtx != null) initCtx.close();
    } catch(NamingException ne) {
    System.out.println("Error closing context: " + ne);
    throw new EJBException(ne);
    * Gets current connection to the connection pool.
    * @return Connection
    * @exception javax.ejb.EJBException
    * if there is a communications or systems failure
    public static Connection getConnection(String poolName)
    throws SQLException
    InitialContext initCtx = null;
    try {
    ResourceBundle rs =
    ResourceBundle.getBundle("com.se.properties.Lookup");
    Hashtable p = new Hashtable();
    p.put(Context.INITIAL_CONTEXT_FACTORY,
    rs.getString("INITIAL_CONTEXT_FACTORY"));
    p.put(Context.PROVIDER_URL, rs.getString("PROVIDER_URL"));
    initCtx = new InitialContext(p);
    DataSource ds = (javax.sql.DataSource) initCtx.lookup(poolName);
    return ds.getConnection();
    } catch(MissingResourceException mre) {
    System.out.println("UNABLE to find the Lookup.properties file");
    throw new EJBException(mre);
    } catch(NamingException ne) {
    System.out.println("UNABLE to get a connection from "+poolName+"!");
    System.out.println("Please make sure that you have setup the connection
    pool properly");
    throw new EJBException(ne);
    } finally {
    try {
    if(initCtx != null) initCtx.close();
    } catch(NamingException ne) {
    System.out.println("Error closing context: " + ne);
    throw new EJBException(ne);
    public static void cleanup(Connection con, PreparedStatement ps) {
    // Having problems with ps.close.
    /*try {
    if (ps != null) ps.close();
    } catch (Exception e) {
    System.out.println("Error closing PreparedStatement: "+e);
    throw new EJBException (e);
    try {
    if (con != null) con.close();
    } catch (Exception e) {
    System.out.println("Error closing Connection: " + e);
    throw new EJBException (e);
    The structure of the tu_lookup table is like this:
    CREATE TABLE tu_lookup (
    lookUpID char(30) NOT NULL ,
    initialContextFactory varchar (100) NOT NULL ,
    providerURL varchar (100) NOT NULL ,
    securityPrincipal varchar (15) NULL ,
    securityCredentials varchar (15) NULL ,
    jndiName varchar (25) NOT NULL
    sample data row for the above table is like this
    ItemHome weblogic.jndi.WLInitialContextFactory
    t3://208.28.177.22:7001 ItemHome
    UnitMasterHome weblogic.jndi.WLInitialContextFactory
    t3://208.28.177.22:7001 UnitMasterHome
    ShowColumnsHome weblogic.jndi.WLInitialContextFactory
    t3://208.28.177.22:7001 ShowColumnsHome
    CustomerHome weblogic.jndi.WLInitialContextFactory
    t3://208.28.177.22:7001 CustomerHome
    CompanyHome weblogic.jndi.WLInitialContextFactory
    t3://208.28.177.22:7001 CompanyHome
    CountryHome weblogic.jndi.WLInitialContextFactory
    t3://208.28.177.22:7001 CountryHome
    CustomerMasterHome weblogic.jndi.WLInitialContextFactory
    t3://208.28.177.22:7001 CustomerMasterHome

    http://en.wikipedia.org/wiki/User_Datagram_Protocol
    http://en.wikipedia.org/wiki/Inter-process_communication
    http://en.wikipedia.org/wiki/Infiniband
    For your RAC question please refer to the Real Application Clusters Concepts and Administration manual for your unknown version.
    Also please make sure you do the necessary reading prior to asking these questions!
    Sybrand Bakker
    Senior Oracle DBA

  • Dirty buffer and datafile?

    When a checkpoint occurs, dirty buffers are written to the datafiles.
    That means a datafile contains both committed and uncommitted data.
    The block may no longer be in the DB buffer cache when the commit occurs;
    how does the database handle the block/transaction?
    How long will that uncommitted data be stored in the datafile? If it is not going to contain uncommitted data for
    a long time (until the commit occurs), is it not an overhead for the database to contain both committed and uncommitted data?

    Hi,
    this is more of a general question than a RAC question; you should have posted it in the general forum.
    user10819596 wrote:
    when checkpoint occurs, dirty buffers are written to datafile.
    it means that datafile contains both committed and uncommitted data.
    when commit occurs the block may not be in the db buffer cache at that time;
    how does the database handle the block/transaction?
    The process is called 'delayed block cleanout'. Google for it; you will find lots of useful and in-depth explanations. Basically what happens is that when you commit, the rollback segment/undo information gets marked as committed, and the next transaction that visits the changed block does the cleanup and marks the data block as committed.
    how long will that uncommitted data be stored in datafile? if it is not going to contain uncommitted data for a long time (till commit occurs), is it not overhead for the database to contain both committed and uncommitted?
    I don't think I understand your question. Some data will have to be written to disk eventually (before we can re-use a redo log file), and there are also situations where you perform DML on more data than could ever fit into your buffer cache. For those reasons, writing uncommitted blocks to the datafiles cannot be avoided.

  • Machine automatically reboots

    I noticed my machine rebooted itself 3 days ago. Tonight it did the same and this is what I found in panic.log:
    Sun Oct 30 19:52:28 2005
    panic(cpu 1 caller 0x0015E090): tcp_unlock: so=a306760 usecount=ffffffff
    Latest stack backtrace for cpu 1: Backtrace: 0x00095544 0x00095A5C 0x0002683C 0x0015E090 0x0028FF64 0x0028CEDC 0x0027F9D8 0x0025C034 0x0025C238 0x0025A330 0x0025A090 0x002A7A94 0x000ABCB0 0xDC2D8818
    Proceeding back via exception chain:
    Exception state (sv=0x5EA75C80) PC=0x900138AC; MSR=0x0200F030; DAR=0x0037F88C; DSISR=0x40000000; LR=0x00453E74; R1=0xBFFFBF60; XCP=0x00000030 (0xC00 - System call)
    Kernel version: Darwin Kernel Version 8.2.0: Fri Jun 24 17:46:54 PDT 2005; root:xnu-792.2.4.obj~3/RELEASE_PPC
    Does anyone know how to decipher this or where I should go from here? I have no idea why the machine is rebooting. I have a 3.5TB Xserve RAID connected to it and this machine is running as a webserver.

    I had a CRS reboot issue on Solaris not too long ago. It ended up being an NTP issue. There are many reasons CRS could cause a node reboot; possibly there is a timeout talking to the voting disks or the OCR.
    MetaLink Note 220970.1 is a good place to start for RAC questions. It has a section on node reboots.
    I would also check the CRS alert log and the ocssd logs, which can all be found in the CRS_HOME/log directory.
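    When chasing such evictions it also helps to know the CSS timeouts in force - a quick sketch (10gR2/11g syntax, run from the CRS home):
    crsctl get css misscount      (network heartbeat timeout, in seconds)
    crsctl get css disktimeout    (voting disk I/O timeout, 10.2.0.2 and later)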
