RAC baselining
I have 5 nodes in my 10g RAC cluster and I use load balancing. I created a preserved snapshot set during load testing to use as a baseline. When I later compared a new snapshot set against that baseline, I found I wasn't comparing apples to apples, since application sessions can land on any of the nodes. Does anyone know how to consolidate a RAC snapshot set to yield a usable baseline?
I'm trying to create a baseline of the database, which is spread over 5 instances, and to compare "database" performance in a specific period of time against database performance during normal operation. We have started using services but have not yet limited them to any specific node.
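For the AWR side of this, a minimal sketch: in 10g RAC, AWR snapshots are coordinated cluster-wide and snapshot IDs are shared across instances, so a single preserved snapshot set created with DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE covers all five instances at once. The snapshot IDs and baseline name below are placeholders, and the script only prints the SQL rather than executing it:

```shell
#!/bin/sh
# Emit the SQL*Plus commands for creating a preserved snapshot set
# (baseline). In RAC, AWR snapshot IDs are shared across instances,
# so one CREATE_BASELINE call spans every node in the cluster.
# Snapshot IDs (100, 112) and the baseline name are placeholders.
build_baseline_sql() {
  begin_snap=$1; end_snap=$2; name=$3
  cat <<EOF
exec DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE( -
  start_snap_id => ${begin_snap}, -
  end_snap_id   => ${end_snap}, -
  baseline_name => '${name}')
EOF
}
build_baseline_sql 100 112 load_test_baseline
```

Piping the output into sqlplus "/ as sysdba" on any one instance protects those snapshots from purging; per-instance AWR reports over the same snap range can then be compared node by node.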
Similar Messages
-
RCA for Oracle RAC Performance Issue
Hi DBAs,
I have set up a 2-node Oracle RAC 10.2.0.3 on Linux 4.5 (64-bit) with 16 GB memory and 4 dual-core CPUs on each node. The database is serving a web application, but unfortunately the system is on its knees; the performance is terrible. The storage is an EMC SAN, but ASM is not implemented, for fear of further degrading performance or complicating the system.
I am seeking expert advice from the gurus on this forum to formulate an action plan for a root cause analysis of the system and database. Please advise me what tools I can use to gather information about the root cause. The AWR report is not very helpful. The system stats from top, vmstat, and iostat only show high resource usage, but it is difficult to find the reason. OEM is configured and very frequently reports all kinds of high wait events.
How can I effectively find network bottlenecks? (Guidance on the netstat command would be really helpful.)
How can I read the system I/O statistics (iostat) for useful information? I don't understand what the baseline or optimal values should be when comparing I/O activity.
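On the network side, one crude but scriptable way to baseline the interconnect error rate is to read the per-interface counters from /proc/net/dev (the same data netstat -i summarizes). This is a sketch; the interface name (eth1) and the sample counter values are assumptions for illustration:

```shell
#!/bin/sh
# Report RX/TX error counters for one NIC from /proc/net/dev.
# After stripping the "eth1:" prefix, field 3 is rx_errs and
# field 11 is tx_errs in the standard /proc/net/dev layout.
nic_errors() {
  file=$1; iface=$2
  awk -v IF="$iface" '
    $0 ~ ("^ *" IF ":") {
      sub(/^ *[^:]*:/, "")
      printf "%s rx_errs=%d tx_errs=%d\n", IF, $3, $11
    }' "$file"
}
# demo against a captured sample; the counter values are invented
cat > /tmp/netdev.sample <<'EOF'
Inter-|   Receive                                                |  Transmit
 face |bytes    packets errs drop fifo frame compressed multicast|bytes    packets errs drop fifo colls carrier compressed
  eth1: 99185472  612345    7    0    0     0          0         0 88231234  598812    2    0    0     0       0          0
EOF
nic_errors /tmp/netdev.sample eth1   # on a live node: nic_errors /proc/net/dev eth1
```

Sampling this periodically and diffing the counts gives an error rate; a steadily climbing errs column on the private NIC is worth chasing before anything inside the database.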
I am seeking help and advice to diagnose the issue. I also want to present this issue as a case study.
Thanks
-Samar-

First of all, RAC is mainly suited for OLTP applications.
Secondly, if your application is unscalable (it doesn't use bind variables and no SQL statements have been tuned and/or it has been ported from Sukkelserver 200<whatever>) running it against RAC will make things worse.
Thirdly: RAC uses a chatty Interconnect. If you didn't configure the Interconnect properly, and/or are using slow network cards (1 Gb is mandatory), and/or you are not using a 9k MTU on your 1 Gb NIC, this again will make things worse.
You can't install RAC 'out of the box'. It won't perform! PERIOD.
Fourthly: you might suffer from your 'application' connecting and disconnecting for every individual SQL statement and/or commit every individual INSERT or UPDATE.
You need to address this.
Using ADDM and/or AWR is compulsory for analysing the problem, as is having read Cary Millsap's book on optimizing Oracle performance.
You won't get anywhere without AWR; OS statistics alone will not provide any clue.
Because, paraphrasing William Jefferson Clinton, former president of the US of A:
It's the application, stupid.
99 out of 100 cases. Trust me. All developers I know currently are 100 percent clueless.
That said, if you can't be bothered to post the top 5 AWR events, and you aren't up to using AWR reports, maybe you should hire a consultant who can.
Regards,
Sybrand Bakker
Senior Oracle DBA -
Parameters to be monitored on a RAC Database
Hello,
We have five 3-node RAC 10gR2 cluster databases on HP-UX. I want to set up monitoring on them using Grid Control.
Can anyone please let me know what parameters or thresholds I should monitor, like locks, queue waits, etc.?
Thanks

Hi,
It is very difficult to tell someone what to monitor and which threshold values to consider proper.
In fact, you must know your environment to decide what you want to monitor. Each environment has its own special characteristics.
Not an easy job, but Oracle has made this work easier by providing some resources.
To know what to monitor, you need to understand the following concepts:
Metric Thresholds
Some metric thresholds come predefined out-of-box. While these values are acceptable for most monitoring conditions, your environment may require that you customize threshold values to more accurately reflect the operational norms of your environment.
Metric Baselines
Metric baselines are statistical characterizations of system performance over well-defined time periods.
Metric Snapshots
A metric snapshot is a named collection of a target's performance metrics that have been collected at a specific point in time.
Adaptive Thresholds
Once metric baselines are defined, they can be used to establish alert thresholds that are statistically significant and adapt to expected variations across time.
Once you understand how Grid Control monitors a target, it will be easier for you to monitor your environment.
I recommend you read the chapter System Monitoring
http://download.oracle.com/docs/cd/B14099_19/manage.1012/b16241/Monitoring.htm
Regards,
Levi Pereira -
Oracle insert sessions not failover to node1 in 11gr2 suse linux 2 node rac
Hi all,
I have already set up a 2-node Oracle 11gR2 RAC and the installation went fine. In baseline testing, insert sessions are not failing over to node 1. The scenarios I ran are as follows.
Scenario 1 - We ran an insert query on both nodes and unplugged the Node2 interconnect wires from the server. In this case the Node2 insert session should fail over to Node1 without any interruption to the users, and the Node2 CRS (Cluster Ready Services) should reboot.
Scenario 2 - We ran the insert queries and unplugged the Node1 interconnect from the server. In this scenario, Node1 CRS should reboot and the insert session should fail over to Node2.
Please let me know if anyone has faced this issue.
Regards,
Kanchana.

Hi,
I already created the TAF service and tried to fail it over. The node goes down and nothing happens: the insert query on the client machine just hangs without getting any error. We are using the VIP for the local and remote listener as well, because DNS is still not configured.
Regards,
Kanchana. -
To achieve high availability by eliminating single points of failure in the following areas, we are thinking of having our OLTP DBs on a single 10g RAC cluster:
i) OS/Firmware patches requiring reboots
ii) Unplanned server failures
iii) One-off Oracle patches
We have migrated our DSS systems to 10g RAC (windows x64). However, in the last 9 months since we deployed we have seen 2 issues: single node eviction & multiple node evictions. Single node eviction is supposedly fixed w/ a patch that needs clusterwide shutdown.
The baseline for me on OLTP is 8i, where I have taken downtime once in two years to apply Oracle patches, once a year for OS patches, and seen very rare server failures resulting in DB failover.
Questions I have:
a) Is 10g RAC really stable to be used for OLTP?
b) How is this being designed elsewhere with a view to reducing planned/unplanned downtime?
thanks,
SM

> a) Is 10g RAC really stable to be used for OLTP?
Loaded question as you are implying that until now, RAC has not been stable and not robust enough for OLTP.
Stability for any system is dependent on:
- platform h/w
- storage h/w
- network h/w
- o/s s/w
- application s/w
- administration
What RAC buys you is multiple database instances for a single physical database. This means that in the worst case, where you are forced to take down a platform for one of the above reasons, the remaining platforms in the cluster should still be available.. courtesy of the shared-everything approach.
But RAC alone is not the answer.. there are numerous factors to consider. One of my longest uptime databases is an Oracle SE server with a 12,000+ uptime. And it is used 24x7 as a data collection platform.
It went down recently. The cause? Network errors and power failures that caused the rack cabinet housing this server to be reset.
I have numerous examples of how unforeseen events caused disaster in a computer room, from dirty electrical power to an aircon automated switchover failing.
RAC does not solve any of these. What happens when there is a power failure or h/w error with the switch used for the Interconnect? Without the nodes being able to communicate with one another, all nodes will evict themselves from the cluster.
Looking at RAC alone as The Solution to your H/A requirement is a bit naive IMO. Yes, RAC is an excellent and major cog in the wheel of H/A.. but there are others too.
Q. Is 10g RAC really stable to be used for OLTP?
A. As stable and as robust as you make it to be. -
RAC Production Comprehensive Check List
Dear RAC Gurus,
I am putting together a daily health check list for a 4-node production RAC cluster hosting a multi-terabyte database. I would like to know if any of you have put together such a list and can share your thoughts on it and what to include.
Please respond with your suggestions and advice.
Madhu

Leverage DB Console or Grid Control, create a performance baseline, and create alerts for exceptions to that baseline (within an acceptable boundary):
CPU utilization per node and overall
CPU queue size
IOPS and throughput per node and overall.
Response time for business critical queries
Interconnect error rate
Latency (interconnect and I/O)
Interconnect I/O rate
Space utilization
TEMP space utilization and incidents
UNDO space incidents
Concurrent sessions
Usable FRA
Services incidents
Cluster incidents -
RAC with 10G using shared directories
We want to test Oracle 10g with Real Application Clusters, but we do not have a SAN yet. Can we use a disk from a normal server, share it, and create a mapped network drive on the two servers where we want to install RAC, using it like a shared disk?
This is the article I was referring to:
Setting Up Linux with FireWire-Based Shared Storage for Oracle9i RAC
By Wim Coekaerts
If you're all fired up about FireWire and you want to set up a two-node cluster for development and testing purposes for your Oracle RAC (Real Application Clusters) database on Linux, here's an installation and configuration QuickStart guide to help you get started. But first, a caveat: neither Oracle nor any other vendor currently supports the patch; it is intended for testing and demonstration only.
The QuickStart instructions step you through the installation of the Oracle database and the use of our patched kernel for configuring Linux for FireWire as well as the installation and configuration of Oracle Cluster File System (OCFS) on a FireWire shared-storage device. Oracle RAC uses shared storage in conjunction with a multinode extension of a database to allow scalability and provide failover security.
The hardware typically used for shared storage (a fibre-channel system) is expensive (see my column on clustering with FireWire on Oracle Technology Network (OTN) for some background on shared-storage solutions and the new kernel patch). However, once you've installed and set up the kernel patch, you will be on your way to setting up a Linux cluster suitable for your development team to use for demo testing and QA, a solution that costs considerably less than the traditional ones.
The patch is available to the Linux and open source community under the GNU General Public License (GPL). You can download it from the Linux Open Source Projects page, available from the Community Code section of OTN. See the Toolbox sidebar for more information.
Figure 1: Two-node Linux cluster using FireWire shared drive
By following this guide, you'll install the patched kernel on each machine that will comprise a node of the cluster. You'll basically build a two-node test configuration composed of two machines connected over a 10Base-T network, with each machine linked via FireWire to the drive used for shared storage, as shown in Figure 1.
If you haven't used FireWire on either machine before, be sure to install and configure the FireWire interconnect in each machine and test it with a FireWire drive or other device before you get started, to ensure that the baseline system is working. The FireWire interconnects we tested are based on Texas Instruments (TI, one of the coauthors of the IEEE specification on which FireWire is based) chipsets, and we used a 120GB Western Digital external FireWire (IEEE 1394) hard drive.
Table 1 lists the minimum hardware requirements per node for a two-node cluster and some of the additional requirements for clusters of more than two nodes. You can use a standard laptop equipped with a PCMCIA FireWire card for any of the nodes in the cluster. We've successfully tested a laptop-based cluster following the same installation process described in this article.
As shown in Table 1, for more than two nodes, you must add a four- or five-port FireWire hub to the configuration, to support connections from the additional machines to the drive. Just plug each Linux box into a port in the hub, and plug the FireWire drive into the hub as well. Without a hub, the configuration wont have enough power for the total cable length on the bus.
The instructions in this article are for a two-node cluster configuration. To create a cluster of more than two nodes, configure each additional node (node 3, node 4) by repeating these steps for each of the additional nodes and also be sure to do the following:
Modify the command syntax or script files to account for the proper node number, machine name, and other details specific to the node.
Create an extra set of log files and undo tablespaces on the shared storage for each additional node.
It's not yet possible to use our patched FireWire drivers to build a cluster of more than four nodes.
Step 1: Download Everything You Need
Before you get started, spend some time downloading all the software you'll need from OTN. If you're not an OTN member, you'll have to join first, but it's free.
Keep in mind that these Linux kernel FireWire driver patches are true open source projects. You can download the source code and customize it for your own implementations as long as you adhere to the GPL agreement.
See "Toolbox" for a list of the software you should download and have available before you get started.
Step 2. Install Linux
Once you've downloaded or purchased the Red Hat Linux Advanced Server 2.1 distribution (or another distribution that you've already gotten to work with Oracle9i Database, Release 2), you can install Linux on the local hard drive of each node (this takes about 25 minutes per node). We'll keep the configuration basic, but you should configure one of the network cards on each machine for a private LAN (this provides the interconnect between nodes in the cluster); for example:
hostname: node1
ip address: 192.168.1.50
hostname: node2
ip address: 192.168.1.51
Because this is a private LAN, you don't need "real" IP addresses. Just make sure that if you do hook up either of these machines to a live network, the IP addresses don't conflict with those of other machines. Also, be sure you download all the software you need for these machines before configuring the private network if you haven't also configured or don't have a second network interface card (NIC) in the machines.
Step 3. Install Oracle9i Database
If you haven't done so already, you must download the Oracle software set for Oracle9i Database Release 2 (9.2.0.1.0) for Linux, or if you're an OTN TechTracks
For each machine that will comprise a node in the cluster, you must do the following:
Create a mount point, /oracle/home, for the Oracle software files on the local hard disk of each machine.
Create a new user, oracle (in either the dba or the oracle group), in /home/oracle on each machine.
Start the Oracle Universal Installer from the CD or the mount point on the local hard disk to which you've copied the installation files; that is, enter runInstaller. The Oracle Universal Installer menu displays.
From the menu, choose Cluster Manager as the first product to install, and install it with only its own node name as public and private nodes for now. Cluster Manager is just a few megabytes, so installation should take only a minute or two.
When the installation is complete, exit from the Oracle Universal Installer and restart it (using the runInstaller script). Choose the database installation option, and do a full software-only installation (don't create a database).
Step 4. Configure FireWire (IEEE 1394)
If you haven't done so already, download the patched Linux kernel file (fw-test-kernel-2.4.19-image.tar.gz) from OTN's Community Code area.
Assuming that fw-test-kernel-2.4.19-image.tar.gz is available at the root mount point on each node, now do the following:
Log on to each machine as the root user and execute these commands to uncompress and unpack the files that comprise the modules:
cd /
tar zxvf /fw-test-kernel-2.4.19-image.tar.gz
modify /etc/grub.conf
If you're using the lilo bootloader utility instead of grub, replace grub.conf in the last statement above with /etc/lilo.conf.
To the bottom of /etc/grub.conf or /etc/lilo.conf, add the name of the new kernel:
title FireWire Kernel (2.4.19)
root (hd0,0)
kernel /vmlinuz-2.4.19 ro root=/dev/hda3
Now reboot the system by using this kernel on both nodes. To simplify the startup process so that you don't have to modify the boot-up commands each time, you should also add the following statements to /etc/modules.conf on each node:
options sbp2 sbp2_exclusive_login=0
post-install sbp2 insmod sd_mod
post-remove sbp2 rmmod sd_mod
During every system boot, load the FireWire drivers on each node; for example:
modprobe ohci1394
modprobe sbp2
If you use dmesg (display messages from the kernel ring buffer), you should see a log message similar to the following:
Attached scsi disk sda at scsi0, channel 0, id 0, lun 0
SCSI device sda: 35239680 512-byte hdwr sectors (18043 MB)
sda: sda1 sda2 sda3
This particular message indicates that the Linux kernel has recognized an 18GB disk with three partitions.
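As a quick arithmetic check of that message, the reported size follows directly from the sector count:

```shell
#!/bin/sh
# 35239680 sectors x 512 bytes/sector, rounded to the nearest MB
# (10^6 bytes), matches the kernel's "18043 MB".
sectors=35239680
bytes=$(( sectors * 512 ))
mb=$(( (bytes + 500000) / 1000000 ))
echo "${bytes} bytes = ${mb} MB"
```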
The first time you use the FireWire drive, run fdisk from one of the nodes and partition the disk as you like. (If both nodes have the modules loaded while you're running fdisk on one node, you should reboot the other system or unload and reload all the FireWire and SCSI modules to make sure the new partition table is loaded.)
Step 5. Configure OCFS
We strongly recommend that you use OCFS in conjunction with the patched kernel so that you don't have to partition your disks manually. If you haven't done so already, download the precompiled modules (fw-kernel-ocfs.tar.gz) from OTN's Community Code area. (See the "Toolbox" sidebar for more information.)
Untar the file on each node, and use ocfsformat on one node to format the file system on the shared disk, as in the following example:
ocfsformat -f -l /dev/sda1 -c 128 -v ocfsvol
-m /ocfs -n node1 -u 1011 -p 755 -g 1011
where 1011 is the UID and GID of the Oracle account and 755 is the directory permission. The partition that we'll use is /dev/sda1, and -c 128 means that we'll use a 128KB cluster size; the cluster size can be 4, 8, 16, 32, 128, 256, 512, or 1,024KB.
As the root user, create an /ocfs mountpoint directory on each node.
To configure and load the kernel module on each node, create a configuration file /etc/ocfs.conf. For example:
ipcdlm:
ip_address = 192.168.1.50
ip_port = 9999
subnet_mask = 255.255.252.0
type = udp
hostname = node1 (on node2, put node2's hostname here)
active = yes
Be sure that each node has the correct values with respect to IP addresses, subnet masks, and node names. Assuming that you're using the example configuration, node 1 uses the IP address 192.168.1.50; on node 2, put 192.168.1.51.
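Since only the ip_address and hostname lines differ between nodes, a small generator script can keep the two files consistent. This is an illustrative sketch using the article's example values; it writes to /tmp rather than /etc, and the result would be copied to /etc/ocfs.conf on each node:

```shell
#!/bin/sh
# Write a per-node ocfs.conf using the article's example values.
# Files go to /tmp for safety; copy to /etc/ocfs.conf on each node.
gen_ocfs_conf() {
  host=$1; ip=$2; out=$3
  cat > "$out" <<EOF
ipcdlm:
ip_address = ${ip}
ip_port = 9999
subnet_mask = 255.255.252.0
type = udp
hostname = ${host}
active = yes
EOF
}
gen_ocfs_conf node1 192.168.1.50 /tmp/ocfs.conf.node1
gen_ocfs_conf node2 192.168.1.51 /tmp/ocfs.conf.node2
grep hostname /tmp/ocfs.conf.node2
```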
Use the insmod command to load the OCFS driver on each node. The basic syntax is as follows:
insmod ocfs.o name=<nodename>
For example:
insmod /root/ocfs.o name=node1
Each time the system boots, the module must be loaded on each node that comprises the cluster.
To mount the OCFS partition, enter the following on each node:
mount -t ocfs /dev/sda1 /ocfs
You now have a shared file system, owned by user oracle, mounted on each node. The shared file system will be used for all data, log, and control files. The modules have also been loaded, and the Oracle database software has been installed.
You're now ready for the final steps: configuring the Cluster Manager software and creating a database. To streamline this process, you can create a small script (env.sh) in the Oracle home to set up the environment, as follows:
export ORACLE_HOME=/home/Oracle/9i
export ORACLE_SID=node1
export LD_LIBRARY_PATH=/home/Oracle/9i/lib
export PATH=$ORACLE_HOME/bin:$PATH
You can do the same for the second node; just change the second line above to export ORACLE_SID=node2.
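Equivalently, env.sh for either node can be generated from one template, since only ORACLE_SID differs. A sketch (the output goes to /tmp here for illustration):

```shell
#!/bin/sh
# Generate env.sh for a given node from the article's template;
# only the ORACLE_SID line differs between node1 and node2.
gen_env() {
  sid=$1
  cat <<EOF
export ORACLE_HOME=/home/Oracle/9i
export ORACLE_SID=${sid}
export LD_LIBRARY_PATH=/home/Oracle/9i/lib
export PATH=\$ORACLE_HOME/bin:\$PATH
EOF
}
gen_env node2 > /tmp/env.sh
grep ORACLE_SID /tmp/env.sh
```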
Execute (source) this file (env.sh) when you log in or from .login scripts as root or oracle.
Step 6. Configure Cluster Manager
Cluster Manager maintains the status of the nodes and the Oracle instances across the cluster and runs on each node of the cluster.
As user root or oracle, go to $ORACLE_HOME/oracm/admin on each node and create or change the cmcfg.ora and the ocmargs.ora files according to Listing 1.
Be sure that the HostName in the cmcfg.ora file is correct for the machine; that is, node 1 has a file that contains node1, and node 2 has a file that contains node2.
Before starting the database, make sure the Cluster Manager software is running. For convenience's sake, add Cluster Manager to the rc script. As user root on each node, set up the Oracle environment variables (source env.sh):
cd $ORACLE_HOME/oracm/bin
./ocmstart.sh
The file ocmstart.sh is an Oracle-provided sample startup script that starts both the Watchdog daemon and Cluster Manager.
Step 7. Configure Oracle init.ora, and Create a Database
Listing 2 contains an example init.ora in $ORACLE_HOME/dbs. You can use it on each node to create initnode1.ora and initnode2.ora, respectively, by making the appropriate adjustments; that is, change node1 to node2 throughout the listing.
You must now create the directories for the log files on node 1, as follows:
cd $ORACLE_HOME
mkdir admin ; cd admin ; mkdir node1 ; cd node1 ;
mkdir udump ; mkdir bdump ; mkdir cdump
Again, do the same for node 2, replacing node1 in the syntax example with node2.
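The directory creation for both nodes can also be written as a single loop. In this sketch, BASE points at /tmp for safe illustration; on the real system it would be $ORACLE_HOME/admin:

```shell
#!/bin/sh
# Create the dump directories for every node in one pass.
# BASE is /tmp here for illustration; use $ORACLE_HOME/admin for real.
BASE=/tmp/oradirs/admin
for node in node1 node2; do
  for d in udump bdump cdump; do
    mkdir -p "${BASE}/${node}/${d}"
  done
done
ls "${BASE}/node2"
```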
Make a link for the Oracle password file on each node (these files may not yet exist):
cd $ORACLE_HOME/dbs
ln -sf /ocfs/orapw orapw
Now that you have the setup, the next step is to create a database. To simplify this process, use the shell script (create.sh) in Listing 3. Be sure to run the script from node 1 only, and be sure to run it only once. Run this script as user oracle, and if all goes well, you will have created the database, added a second undo tablespace, and added and enabled a second log thread.
You can start the database from either node in the cluster, as follows:
sqlplus / as sysdba
startup
Finally, you can configure the Oracle listener, $ORACLE_HOME/network/admin/listener.ora, as you normally would on both nodes and start that as well.
You should now be all set up!
Wim Coekaerts ( [email protected]) is principal member of technical staff, Corporate Architecture, Development. His team works on continuing enhancements to the Linux kernel and publishes source code under the GPL in OTNs Community Code section. For more information about Oracle and Linux, visit the OTN Linux Center or the Linux Forum.
Toolbox
Don't tackle this as your first "getting to know Linux and Oracle" project. This article is brief and doesn't provide detailed, blow-by-blow instructions for beginners. You should be comfortable with the UNIX operating system and with Oracle database installation in a UNIX environment. You'll need all the software and hardware items in this list:
Oracle9i Database Release 2 (9.2.0.1.0) for Linux (Intel). Download the Enterprise Edition, which is required for Oracle RAC.
Linux distribution. We recommend Red Hat Linux Advanced Server 2.1, but you can download Red Hat 8.0 free from Red Hat. (However, please note that Red Hat doesn't support the downloaded version.)
Linux kernel patch for FireWire driver support, available under the Firewire Patches section. (Note that we're updating these constantly, so the precise name may have changed.)
OCFS for Linux. OCFS is not strictly required, but we recommend that you use it because it simplifies installation and configuration of the storage for the cluster. The file you need is fw-kernel-ocfs.tar.gz.
Two Intel-based PCs
Two NICs in each machine (although we're only concerned in these instructions with configuring the private LAN that provides the heartbeat communication between the nodes in the cluster)
Two FireWire interconnect cards
One large FireWire drive for shared storage
To supplement this QuickStart, you should also take a look at the supporting documentation, especially these materials:
Release Notes for Oracle9i for Linux (Intel)
Oracle9i Real Application Clusters Setup and Configuration
Oracle Cluster Management Software for Linux (Appendix F in the Oracle9i Administrator's Reference Release 2 (9.2.0.1.0) for UNIX Systems)
Table 1: Hardware inventory and worksheet for FireWire-based cluster

Per-node minimums (record your own configuration details for Node 1 and Node 2 alongside):
- Minimum CPU: 500 MHz (Celeron, AMD, Pentium)
- Minimum RAM: 256 MB
- Local hard drive free space: 3 GB
- FireWire card: 1 (TI chipset)
- Network interface cards: 2 (1 for node interconnect; 1 for public network)

Per-cluster minimums:
- FireWire hard drive: 1, 300 GB
- 4-port FireWire hub: required for a 3-node cluster
- 5-port FireWire hub: required for a 4-node cluster
http://otn.oracle.com/oramag/webcolumns/2003/techarticles/coekaertsfirewiresetup.html
Joel Pérez
http://otn.oracle.com/experts -
Post-installation RAC maintenance
DB version ==> 10.2.0.4
OS ===> Solaris SPARC 5.10
For the last 7 months, I have been working on two RAC DBs (both of them 2-node RACs). Apart from a few instance restarts, I haven't actually done anything in these DBs because they have been very stable. In a way I wish they weren't, because I never got to learn anything.
If you were asked what the top 3 RAC maintenance tasks are (apart from restarts), what would you say?

Hi,
Tasks of a DBA.
Initial and complete study of system
Continuous monitoring of database using advanced monitoring and alert tools
Routine health checks of database and proactive maintenance.
Regular database backups and validation of backups.
User and security management
Database growth planning and management.
Performance monitoring and periodic tuning.
Regular report on the activities performed and database health
Emergency support for production outages
Guaranteed response to production outages (based on SLA)
Proactive Maintenance
- Gather optimizer statistics
- Manage the Automatic Workload Repository
- Use the Automatic Database Diagnostic Monitor (ADDM)
- SQL Tuning Advisor: helps you to find critical or improvable statements
- SQL Access Advisor: gives you suggestion about possible indexes or materialized view that can improve performance of a statement or of a set of statements.
- Memory Advisor: controls SGA usage and performance
- MTTR Advisor: checks checkpoint activity and synthetically estimates the MTTR. See the FAST_START_MTTR_TARGET parameter
- Segment Advisor
- Undo Advisor
- Set warning and critical alert thresholds
- You can define baselines, on which the warning and critical thresholds are based.
Oracle Enterprise Manager Concepts gives an overview of what can be managed.
http://download.oracle.com/docs/cd/B19306_01/em.102/b31949/database_management.htm
Oracle RAC Monitoring: Keeping your RAC under control
http://www.databasejournal.com/features/oracle/article.php/3676451/Oracle-RAC-Monitoring-Keeping-your-RAC-under-control.htm
This involves a lot of work.
Regards,
Levi Pereira -
I used version 7.3.4 of OPS. I am working on a RAC system now (10.2.0.4, RH 2.6.9). In OPS it was a big deal not to share data across instances, so you could not have users logged in through both instances reading/writing to the same table (at least you had to keep this to a minimum). The RAC literature seems to say this is not the case in RAC: the dynamic remastering, etc., seems to indicate that there has been a bit of work done in that area. Are people running like this, or does there still need to be some separation? It just seems like the private interconnect isn't the fastest medium. Are there guidelines anywhere about this type of thing?
Thanks,
Mike

JamesF wrote:
I am wondering Billy if you think 2Gbps on stacked switches would be sufficient in most RAC private network implementations?
Other option is to connect to Core 6509a for e2 and Core 6509b for e3 and do some kind of active/passive scenario. Speed limit would be the 1Gbps port on the switch and I/O card on back of server chassis.
10Gbps will be Expensive (capital E).
Thoughts?

Unfair question, as I'm biased: we only use InfiniBand as the Interconnect for all our clusters. It is also the same technology that Exadata uses, so I have a rather smug and vindicated grin.. ;-)
Our decision to use InfiniBand years ago was questioned by almost everyone, including some of the vendors we dealt with. Time has proven it to be the right one, with a lot more performance and scalability than GigE can provide. Only recently did Ethernet start to tread into the 40Gb zone; InfiniBand has been there quite a while. And what better endorsement for IB than Oracle itself using it for Exadata.
On the pricing side... not sure how this translates for you (cheap or expensive), but here is a very rough baseline.
A 24 port IB switch will be under US $10,000. Cisco discontinued some/all(?) of their IB product line. So not much of a product choice there. And if you want to borrow from the Exadata supplier list, then Voltaire should be your choice. You can also get their kit via other vendors. E.g. Sun Systems also sell Voltaire OEM kit, as does a few others.
For each RAC server you need two cables (redundancy) and a (2 port) HCA PCI-X card. The cables should be around $110 each. The card should be between $800 to $900. Good idea to get a few spares of each too.
If you can get this kit via an existing supplier that you deal with (e.g. Sun or another), then there is surely some discount to factor in too.
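Totting up those rough figures for a 5-node cluster (the prices are the estimates quoted above, not vendor quotes):

```shell
#!/bin/sh
# One 24-port switch plus, per node, two cables and one 2-port HCA.
switch=10000   # 24-port IB switch, upper estimate
cable=110      # per cable; two per server for redundancy
hca=900        # HCA PCI-X card, upper end of the $800-$900 range
nodes=5
total=$(( switch + nodes * (2 * cable + hca) ))
echo "estimated total: \$${total}"
```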
Is this expensive? Personally, I do not think so. One of the most critical components in a robust RAC environment is the quality and speed of the Interconnect. For a 5-node RAC, given the rough pricing above, the Interconnect will cost you far less than $20,000, without any discounts factored in. That is a small percentage of the overall cost (h/w, s/w and licensing) of a 5-node RAC. -
MULTIPLE USERS 10G RAC ORACLE_HOME INSTALL WITH ASM/CRS
Hi,
We need to install multiple 10g RAC databases on two-node Sun servers. Below is our configuration:
1) Sun Solaris (ver 10) with Sun Cluster 3.2
2) One ASM/CRS install (by 1 OS account)
3) Four ORACLE_HOME 10g database install (by 4 different OS user accounts)
We would like to use one ASM instance for all four databases with appropriate privileges.
OS User: OS Group
======== =========
oraasm dbaasm - (ASM and CRS install owner)
ora1 dbaora1 - first db owner
ora2 dbaora2 - second db owner
ora3 dbaora3 - third db owner
ora4 dbaora4 - fourth db owner
I understand that certain privileges need to be shared between ASM/CRS and DB owners. Please let me know the steps to be followed to complete this install.
Thanks in advance.

Hi
Please read the documentation: http://download.oracle.com/docs/html/B10766_08/intro.htm
- You can install and operate multiple Oracle homes and different versions of Oracle cluster database software on the same computer as described in the following points:
-You can install multiple Oracle Database 10g RAC homes on the same node. The multiple homes feature enables you to install one or more releases on the same machine in multiple Oracle home directories. However, each node can have only one CRS home.
-In addition, you cannot install Oracle Database 10g RAC into an existing single-instance Oracle home. If you have an Oracle home for Oracle Database 10g, then use a different Oracle home, and one that is available across the entire cluster for your new installation. Similarly, if you have an Oracle home for an earlier Oracle cluster database software release, then you must also use a different home for the new installation.
If the OUI detects an earlier version of a database, then the OUI asks you about your upgrade preferences. You have the option to upgrade one of the previous-version databases with DBUA or to create a new database using DBCA. The information collected during this dialog is passed to DBUA or DBCA after the software is installed.
- You can use the OUI to complete some of the de-install and re-install steps for Oracle Database 10g Real Application Clusters if needed.
Note:
Do not move Oracle binaries from one Oracle home to another because this causes dynamic link failures.
- If you are using ASM with Oracle database instances from multiple database homes on the same node, then Oracle recommends that you run the ASM instance from an Oracle home that is distinct from the database homes. In addition, the ASM home should be installed on every cluster node. This prevents the accidental removal of ASM instances that are in use by databases from other homes during the de-installation of a database's Oracle home. -
Error while running runcluvfy.sh(11g RAC on CentOS 5(RHEL 5))
Oracle Version: 11G
Operating System: Centos 5 (RHEL 5) : Linux centos51-rac-1 2.6.18-128.1.6.el5 #1 SMP Wed Apr 1 09:19:18 EDT 2009 i686 i686 i386 GNU/Linux
Question (including full error messages and setup scripts where applicable):
I am attempting to install Oracle 11g in a RAC configuration with CentOS 5 (Red Hat 5) as the operating system. I get the following error:
ERROR : Cannot Identify the operating system. Ensure that the correct software is being executed for this operating system
Verification cannot complete
I get this error message when I run runcluvfy.sh to verify that my configuration is clusterable, and I don't know why.
I edited /etc/redhat-release to read "Red Hat Enterprise Linux AS release 4 (Nahant Update 7)" in an attempt to fool the installer into thinking it is Red Hat 4, but it still shows the same message.
Anyone knows how to fix this ?
Please help me.http://www.idevelopment.info/data/Oracle/DBA_tips/Linux/LINUX_20.shtml
runcluvfy.sh will not work on CentOS because the Cluster Verification Utility checks the operating system version via the redhat-release package, which CentOS replaces with its own centos-release package, so you must build and install the genuine redhat-release package:
Get rpm-build to be able to build rpm’s:
[root@centos5 ~]# yum install rpm-build
Get source rpm of redhat-release
[root@centos5 ~]# wget ftp://ftp.redhat.com/pub/redhat/linux/enterprise/5Server/en/os/SRPMS/redhat-release-5Server-5.1.0.2.src.rpm
Build package:
[root@centos5 ~]# rpmbuild --rebuild redhat-release-5Server-5.1.0.2.src.rpm
Install newly generated rpm:
[root@centos5 ~]# rpm -Uvh --force /usr/src/redhat/RPMS/i386/redhat-release-5Server-5.1.0.2.i386.rpm -
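Before re-running runcluvfy.sh it may be worth confirming the string the utility will read; a small sketch (the expected "Red Hat Enterprise Linux" prefix is an assumption based on the rebuilt package above):

```shell
#!/bin/sh
# Check whether /etc/redhat-release now carries a release string the
# cluster verification utility should accept.
check_release() {
    case "$1" in
        "Red Hat Enterprise Linux"*) echo ok ;;
        *) echo fail ;;
    esac
}
check_release "$(cat /etc/redhat-release 2>/dev/null)"
```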
Error in Creation of Dataguard for RAC
My pfile of RAC looks like:
RACDB2.__large_pool_size=4194304
RACDB1.__large_pool_size=4194304
RACDB2.__shared_pool_size=92274688
RACDB1.__shared_pool_size=92274688
RACDB2.__streams_pool_size=0
RACDB1.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/RACDB/adump'
*.background_dump_dest='/u01/app/oracle/admin/RACDB/bdump'
*.cluster_database_instances=2
*.cluster_database=true
*.compatible='10.2.0.1.0'
*.control_files='+DATA/racdb/controlfile/current.260.627905745','+FLASH/racdb/controlfile/current.256.627905753'
*.core_dump_dest='/u01/app/oracle/admin/RACDB/cdump'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_file_name_convert='+DATA/RACDB','+DATADG/RACDG'
*.db_name='RACDB'
*.db_recovery_file_dest='+FLASH'
*.db_recovery_file_dest_size=2147483648
*.dispatchers='(PROTOCOL=TCP) (SERVICE=RACDBXDB)'
*.fal_client='RACDB'
*.fal_server='RACDG'
RACDB1.instance_number=1
RACDB2.instance_number=2
*.job_queue_processes=10
*.log_archive_config='DG_CONFIG=(RACDB,RACDG)'
*.log_archive_dest_1='LOCATION=+FLASH/RACDB/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=RACDB'
*.log_archive_dest_2='SERVICE=RACDG VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=RACDG'
*.log_archive_dest_state_1='ENABLE'
*.log_archive_dest_state_2='DEFER'
*.log_archive_format='%t_%s_%r.arc'
*.log_file_name_convert='+DATA/RACDB','+DATADG/RACDG'
*.open_cursors=300
*.pga_aggregate_target=16777216
*.processes=150
*.remote_listener='LISTENERS_RACDB'
*.remote_login_passwordfile='exclusive'
*.service_names='RACDB'
*.sga_target=167772160
*.standby_file_management='AUTO'
RACDB2.thread=2
RACDB1.thread=1
*.undo_management='AUTO'
RACDB2.undo_tablespace='UNDOTBS2'
RACDB1.undo_tablespace='UNDOTBS1'
*.user_dump_dest='/u01/app/oracle/admin/RACDB/udump'
My pfile of Dataguard Instance in nomount state looks like:
RACDG.__db_cache_size=58720256
RACDG.__java_pool_size=4194304
RACDG.__large_pool_size=4194304
RACDG.__shared_pool_size=96468992
RACDG.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/RACDG/adump'
*.background_dump_dest='/u01/app/oracle/admin/RACDG/bdump'
##*.cluster_database_instances=2
##*.cluster_database=true
*.compatible='10.2.0.1.0'
##*.control_files='+DATA/RACDG/controlfile/current.260.627905745','+FLASH/RACDG/controlfile/current.256.627905753'
*.core_dump_dest='/u01/app/oracle/admin/RACDG/cdump'
*.db_block_size=8192
*.db_create_file_dest='+DATADG'
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_file_name_convert='+DATADG/RACDG','+DATA/RACDB'
*.db_name='RACDB'
*.db_recovery_file_dest='+FLASHDG'
*.db_recovery_file_dest_size=2147483648
*.dispatchers='(PROTOCOL=TCP) (SERVICE=RACDGXDB)'
*.FAL_CLIENT='RACDG'
*.FAL_SERVER='RACDB'
*.job_queue_processes=10
*.LOG_ARCHIVE_CONFIG='DG_CONFIG=(RACDB,RACDG)'
*.log_archive_dest_1='LOCATION=+FLASHDG/RACDG/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=RACDG'
*.log_archive_dest_2='SERVICE=RACDB VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=RACDB'
*.LOG_ARCHIVE_DEST_STATE_1='ENABLE'
*.LOG_ARCHIVE_DEST_STATE_2='ENABLE'
*.log_archive_format='%t_%s_%r.arc'
*.log_file_name_convert='+DATADG/RACDG','+DATA/RACDB'
*.open_cursors=300
*.pga_aggregate_target=16777216
*.processes=150
##*.remote_listener='LISTENERS_RACDG'
*.remote_login_passwordfile='exclusive'
SERVICE_NAMES='RACDG'
sga_target=167772160
standby_file_management='auto'
undo_management='AUTO'
undo_tablespace='UNDOTBS1'
user_dump_dest='/u01/app/oracle/admin/RACDG/udump'
DB_UNIQUE_NAME=RACDG
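One point worth double-checking in the standby pfile above: DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT take the primary-side pattern first and the standby-side replacement second, so on the standby one would expect '+DATA/RACDB','+DATADG/RACDG' rather than the reverse. With the pair reversed, no substitution matches, and the auxiliary file names come out identical to the target's - consistent with the RMAN-05001 conflicts in the error dump. The substitution itself is a simple prefix replacement, sketched here:

```shell
#!/bin/sh
# Sketch of the prefix substitution the convert parameters perform:
# a primary file name under +DATA/RACDB is mapped to +DATADG/RACDG.
convert_name() {
    printf '%s\n' "$1" | sed 's|+DATA/RACDB|+DATADG/RACDG|'
}
convert_name "+DATA/RACDB/datafile/system.256.627905375"
# -> +DATADG/RACDG/datafile/system.256.627905375
```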
and here is what I am doing on the standby location:
[oracle@dg01 ~]$ echo $ORACLE_SID
RACDG
[oracle@dg01 ~]$ rman
Recovery Manager: Release 10.2.0.1.0 - Production on Tue Jul 17 21:19:21 2007
Copyright (c) 1982, 2005, Oracle. All rights reserved.
RMAN> connect auxiliary /
connected to auxiliary database: RACDG (not mounted)
RMAN> connect target sys/xxxxxxx@RACDB
connected to target database: RACDB (DBID=625522512)
RMAN> duplicate target database for standby;
Starting Duplicate Db at 2007-07-17 22:27:08
using target database control file instead of recovery catalog
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: sid=156 devtype=DISK
contents of Memory Script:
restore clone standby controlfile;
sql clone 'alter database mount standby database';
executing Memory Script
Starting restore at 2007-07-17 22:27:10
using channel ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: starting datafile backupset restore
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: reading from backup piece /software/backup/ctl4.ctl
channel ORA_AUX_DISK_1: restored backup piece 1
piece handle=/software/backup/ctl4.ctl tag=TAG20070717T201921
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:23
output filename=+DATADG/racdg/controlfile/current.275.628208075
output filename=+FLASHDG/racdg/controlfile/backup.268.628208079
Finished restore at 2007-07-17 22:27:34
sql statement: alter database mount standby database
released channel: ORA_AUX_DISK_1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 07/17/2007 22:27:43
RMAN-05501: aborting duplication of target database
RMAN-05001: auxiliary filename +DATA/racdb/datafile/undotbs2.265.627906771 conflicts with a file used by the target database
RMAN-05001: auxiliary filename +DATA/racdb/datafile/example.264.627905917 conflicts with a file used by the target database
RMAN-05001: auxiliary filename +DATA/racdb/datafile/users.259.627905395 conflicts with a file used by the target database
RMAN-05001: auxiliary filename +DATA/racdb/datafile/sysaux.257.627905385 conflicts with a file used by the target database
RMAN-05001: auxiliary filename +DATA/racdb/datafile/undotbs1.258.627905395 conflicts with a file used by the target database
RMAN-05001: auxiliary filename +DATA/racdb/datafile/system.256.627905375 conflicts with a file used by the target database
RMAN>
Any help to clear this error will be appreciated.
Message was edited by:
Bal
Hi,
Thanks everybody for helping me on this issue.
As suggested, I took the parameters log_file_name_convert and db_file_name_convert out of my RAC primary database, but I still get the same error.
Any help will be appreciated.
SQL> show parameter convert
NAME                       TYPE     VALUE
db_file_name_convert       string
log_file_name_convert      string
SQL>
oracle@dg01<3>:/u01/app/oracle> rman
Recovery Manager: Release 10.2.0.1.0 - Production on Wed Jul 18 17:07:49 2007
Copyright (c) 1982, 2005, Oracle. All rights reserved.
RMAN> connect auxiliary /
connected to auxiliary database: RACDB (not mounted)
RMAN> connect target sys/xxx@RACDB
connected to target database: RACDB (DBID=625522512)
RMAN> duplicate target database for standby;
Starting Duplicate Db at 2007-07-18 17:10:53
using target database control file instead of recovery catalog
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: sid=156 devtype=DISK
contents of Memory Script:
restore clone standby controlfile;
sql clone 'alter database mount standby database';
executing Memory Script
Starting restore at 2007-07-18 17:10:54
using channel ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: starting datafile backupset restore
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: reading from backup piece /software/backup/ctl5.ctr
channel ORA_AUX_DISK_1: restored backup piece 1
piece handle=/software/backup/ctl5.ctr tag=TAG20070718T170529
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:33
output filename=+DATADG/racdg/controlfile/current.275.628208075
output filename=+FLASHDG/racdg/controlfile/backup.268.628208079
Finished restore at 2007-07-18 17:11:31
sql statement: alter database mount standby database
released channel: ORA_AUX_DISK_1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 07/18/2007 17:11:43
RMAN-05501: aborting duplication of target database
RMAN-05001: auxiliary filename +DATA/racdb/datafile/undotbs2.265.627906771 conflicts with a file used by the target database
RMAN-05001: auxiliary filename +DATA/racdb/datafile/example.264.627905917 conflicts with a file used by the target database
RMAN-05001: auxiliary filename +DATA/racdb/datafile/users.259.627905395 conflicts with a file used by the target database
RMAN-05001: auxiliary filename +DATA/racdb/datafile/sysaux.257.627905385 conflicts with a file used by the target database
RMAN-05001: auxiliary filename +DATA/racdb/datafile/undotbs1.258.627905395 conflicts with a file used by the target database
RMAN-05001: auxiliary filename +DATA/racdb/datafile/system.256.627905375 conflicts with a file used by the target database -
How to install 11gR2 RAC on 64 bit linux OS
I am completely new to the topic of RAC and need to install and stand up RAC on a 64-bit Linux OS. I have good knowledge of installing the Oracle Database 11gR2 Enterprise Edition.
Can you guide me on how to start? I am looking for leads. We will probably have 2 nodes.
Thank you very much for helping me in advance.
If you are a My Oracle Support (Metalink) user, go check out these two notes created by the Oracle RAC Assurance Team. They are excellent.
NOTE: 810394.1 RAC Assurance Support Team: RAC Starter Kit and Best Practices (Generic)
NOTE: 811306.1 RAC Assurance Support Team: RAC Starter Kit (Linux)
In the Linux note mentioned above there is a link to a Linux step-by-step instruction guide. It is the best start-to-finish document I've seen on how to set up and install Oracle RAC. I believe the guide is written for installing release 11.2.0.2. -
In Oracle RAC, if a user runs a SELECT query and the node serving it is evicted while data is being fetched, how does failover to another node happen internally?
The query is re-issued as a flashback query and the client process can continue to fetch from the cursor. This is described in the Net Services Administrator's Guide, in the section on Transparent Application Failover.
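Client-side TAF behaviour of this kind is driven by the FAILOVER_MODE clause in the connect descriptor. A hypothetical tnsnames.ora entry might look like the following (the alias, host and retry values are placeholders, not from this thread):

```
RACDB_TAF =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = RACDB)
      (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 20)(DELAY = 5))
    )
  )
```

TYPE=SELECT is what allows an in-flight fetch to resume on a surviving instance; TYPE=SESSION would reconnect the session but abandon open cursors.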
-
Hindsight is indeed 20/20, but here is the situation:
Set of Tasks A (Design IDD Tasks) - these tasks are not complete and are, in fact, delayed
Set of Tasks B (ARC Tasks) - in the baseline, they have Set of Tasks A as their immediate predecessors; some are marked as complete, and I made those updates
Set of Tasks C (Tasks that Depend on ARC Tasks) - in the baseline, they have Set of Tasks B as their immediate predecessors
The problem is that delays to Set of Tasks A currently do not affect Set of Tasks C (Tasks that Depend on ARC Tasks) - and they should. I am looking for the best way to make Set of Tasks C reflect delays from Set of Tasks A, given that the project has been baselined with a certain set of predecessors and successors.
Leslie - if you are looking for videos, you can try this. I delivered ten webinars on Microsoft Project 2010 last year, and the recordings can be played for free.
Session 1: Ready. Set. Go. Preparing Project: http://goo.gl/yWVGn
Session 2: How to change working time and set holidays in Project 2010? http://goo.gl/QTRds
Session 3: Structure the schedule by WBS and task dependencies: http://goo.gl/SPqkM
Session 4: Set up people, cost and material resources in Project 2010: http://goo.gl/lBTUF
Session 5: Assigning resources (people and material) and costs (fixed, variable): http://goo.gl/PPI18
Session 6: Convert a draft schedule to an optimal schedule that meets stakeholder requirements: http://goo.gl/ptdTl
Session 7: Keeping your project on track by leveraging the baseline features of Project 2010: http://goo.gl/TM8Gv
Session 8: Track project actuals against project baseline information: http://goo.gl/ZWJxP
Session 9: Report project performance through reporting features: http://goo.gl/CC76e
Session 10: Sharing resources across projects: http://goo.gl/JkZU01
Sai PMI-SP, MCTS Project, MVP Project