RAC/OCFS on RH AS - Dell Platform

Hi all,
We have been trying, to no avail, to install RAC using Oracle OCFS on a Dell cluster running Red Hat Advanced Server. Per the instructions from Oracle (and a few other users who have it working), we have installed OCFS and have it running, installed Oracle 9.2.0.1, and applied patch 2444403. However, whenever we try to run "srvconfig -init" against a pre-created (touched) configuration file, we still get the following error:
[main] [8:54:54:499] [OCRTree.<init>:80] OCR Initialization failed for othan than INVALID_FORMAT error
oracle.ops.mgmt.rawdevice.RawDeviceException: PRKR-1064 : General Exception in OCR
at oracle.ops.mgmt.rawdevice.RawDeviceUtil.<init>(RawDeviceUtil.java:136)
at oracle.ops.mgmt.rawdevice.RawDeviceUtil.main(RawDeviceUtil.java:2071)
The configuration file is on an OCFS filesystem. Should it be raw, or should it work on an OCFS filesystem?
Anyone else have success in getting OCFS with Oracle RAC running?
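For context, a minimal sketch of the failing sequence, assuming the standard 9.2 srvConfig.loc pointer file (its location varies by platform, and the shared-file path here is made up):
# /var/opt/oracle/srvConfig.loc contains:
#   srvconfig_loc=/ocfs01/srvm/srvconfig.dbf
touch /ocfs01/srvm/srvconfig.dbf   # pre-create the shared configuration file
srvconfig -init                    # fails with PRKR-1064 as shown above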

Were you ever able to resolve the problem? I've tracked the error down to the "libsrvmocr" library, where the "ocr.o" binary seems to live. Apparently, the darned thing is blowing up on multiple platforms. Bugs #2594548 and #2512977 indicate very similar failures on HP and Tru64 (Digital) systems.
I started with a shared (ocfs) file, then switched to raw, but had the same problem in either case.
Has anybody resolved this issue once it has been encountered? I've tried using "-DTRACING.ENABLED=true -DTRACING.LEVEL=2" when calling gsd (a sketch of where those flags go is below), but that hasn't eliminated the error. I've also tried remaking the library using the local ".mk" file, but that didn't fix it either.
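For anyone reproducing the tracing attempt: those -D flags are JVM system properties, so they have to be appended to the Java invocation inside the gsd wrapper script. A hedged sketch, assuming the 9.2 script layout (the exact line varies by release):
# In $ORACLE_HOME/bin/gsd.sh, add the properties before the daemon class:
exec $JRE -DPROGRAM=gsd -DTRACING.ENABLED=true -DTRACING.LEVEL=2 ... oracle.ops.mgmt.daemon.OPSMDaemon $MY_OHOME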

Similar Messages

  • 10G RAC OCFS on Itanium Linux-64

I've spent the last week setting up four Itanium 64-bit Linux servers with OCFS. No problems... however, when I try to install Oracle 10g (installing CRS first, as described in the manual), I can't get past the Public/Private nodes listing. CRS does not detect my OCFS, so it forces me to manually list the hostnames. As soon as I'm done, I get an error that I must provide both public and private names, which I have done. They are all set up identically in the /etc/hosts file on each node:
    [root@sysl098 root]# cat /etc/hosts
    127.0.0.1 localhost
    172.20.176.102 sysl012 #RAC-NODE-1
    172.20.176.103 sysl013 #RAC-NODE-2
    172.20.176.104 sysl098 #RAC-NODE-3
    172.20.176.105 sysl099 #RAC-NODE-4
    10.0.0.2 sysl012-prv #Private-RAC-NODE-1
    10.0.0.3 sysl013-prv #Private-RAC-NODE-2
    10.0.0.4 sysl098-prv #Private-RAC-NODE-3
    10.0.0.5 sysl099-prv #Private-RAC-NODE-4
I'm really at my wits' end here... Why doesn't the CRS install detect OCFS when it is configured and shares a filesystem on all four nodes? It's forcing me to list the nodes on the screen, yet it states that I have not done so.
    TIA- Joe

    OCFS installation is separate from CRS. Please try the cookbooks published on OTN:
    http://www.oracle.com/technology/tech/linux/vmware/cookbook/index.html
    Also, your /etc/hosts file is missing the VIP addresses.
    10g requires three sets of IP addresses: public, private, and VIP (virtual IP); a sketch of the missing entries follows below.
    Good luck.
    Saar.
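    For illustration, a hedged sketch of the VIP entries that would complete the /etc/hosts above; the *-vip hostnames and .11x addresses are assumptions, not from the original post:
    172.20.176.112 sysl012-vip #VIP-RAC-NODE-1
    172.20.176.113 sysl013-vip #VIP-RAC-NODE-2
    172.20.176.114 sysl098-vip #VIP-RAC-NODE-3
    172.20.176.115 sysl099-vip #VIP-RAC-NODE-4
    (The VIPs must be unused addresses on the public subnet; the public and private entries already listed stay as they are.)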

  • RAC: ocfs-option disabled when trying to create database with dbca.

    Hi everybody,
    I have a really weird problem running dbca in my 2-node RAC installation on Red Hat ES.
    The software version of both Oracle Clusterware and the RDBMS is 10.2.0.2.0, meaning the latest patchset has been applied.
    I have installed and configured OCFS, and four different OCFS partitions are neatly presented to the operating system. I can save files and browse the OCFS file system; everything seems to be working fine.
    Also, srvctl status nodeapps -n <nodename> shows that Oracle Clusterware is running nicely; all the components are up, including the listener, which I used netca to configure.
    However, when trying to create a cluster database using dbca, the option to use OCFS is not enabled, i.e. it is "grayed out" in the Oracle Universal Installer. So is the raw devices option. The only option left is ASM, which is selectable in the OUI.
    In addition, I can indeed run scripts manually and create a single-instance database that uses the OCFS partitions for the datafiles, redo logs, and archive logs. No problems at all!
    Do any of you guys have the slightest clue what could be causing this behaviour? Of course I have raised an issue with Oracle Support, but while waiting for an update, it doesn't hurt to try the forum.
    I appreciate all suggestions, thanks for taking the time to read my posting.
    Best Regards,
    Vegard

    You were right.
    http://download-uk.oracle.com/docs/html/B14203_05/intro.htm#sthref44
    under the section
    "1.3 Configuration Tasks for Oracle Clusterware and Oracle Real Application Clusters"
    "... If you use RAC on Oracle Database 10g Standard Edition, then you must use ASM."
    Thanks, problem solved.
    - Vegard

  • RAC (OCFS) with Oracle9i Release 2

    Hi everybody,
    Can anyone help me with how to install RAC (option: Oracle Cluster File System) with Oracle9i Release 2?
    Please help!
    Thanks

    I can't find good docs for this operation. Could you help me and point me to one of them?
    Thanks a lot, Jaffar

  • Recommendations - Oracle RAC 10g on Solaris 10 Containers Logical/Local..

    Dear Oracle Experts et al.,
    I have a couple of questions about an Oracle 10g RAC implementation on Solaris and seek your advice. We are attempting to implement Oracle 10g RAC on the Solaris OS and SPARC platform.
    1. We are wondering whether Oracle 10g RAC can be implemented in Solaris Local/Logical Containers. I was assuming that Oracle always links itself with OS binaries and libraries during software installation and hence needs an OS image/root disk to install onto. However, with containers, I assume we have a single Solaris installation and configuration that is shared with the containers configured inside it. In such a situation, how does the Oracle installation proceed? Do I need to look at a scenario where the global container/zone holds the Oracle install and this image is shared across to the zones/containers accordingly? If so, which OS filesystems need to be shared across to these zones/containers?
    Additionally, even if this approach is supported, is it a recommended one? I am unsure about the stability and functionality of Oracle in such cases and am not able to completely conceptualize it. However, I assume there are certain items which need to be appropriately taken care of. It would help if you could share observations from your experience.
    2. The idea of RAC we are looking at is to have multiple Oracle installations on top of a native clustering solution, say Veritas Cluster or Sun Cluster. Do we still need Oracle's cluster solution, Clusterware (ORACRS), on top of this to achieve Oracle clustering? Will I be able to install Oracle as a standalone installation on top of such a native clustering solution?
    Our requirement is to have the above-mentioned multiple Oracle installations spread across two (2) separate hardware platforms, say Node A and Node B, and to configure our cluster solution as active-passive across Node A and Node B. In other words, I will configure a clustering solution like VRTS/SunCluster in active-passive, then have 3 Oracle installations on Node A and another 3 on Node B. I will configure one database for each of these Oracle software installations (with the idea of not having Clusterware between the clustering solution and the Oracle installation, if that works). I will then run 3 databases on each node. If any downtime happens on one of the nodes, say Node A, I will fail all Oracle databases and software over to the alternate node, Node B in this case, using the native clustering solution, and I want the database to behave as it did on Node A. I am not sure, though, whether I will be able to bring the database up on Node B once the OS-level resources have failed over.
    We want to use Oracle 10g RAC Release 2 EE on the latest Solaris 10 OS release, or the one before it.
    Please share your thoughts.
    Regards!
    Sarat

    Sarat Chandra C wrote:
    Dear Oracle Experts et al.,
    I have a couple of questions about an Oracle 10g RAC implementation on Solaris and seek your advice. We are attempting to implement Oracle 10g RAC on the Solaris OS and SPARC platform.
    1. We are wondering whether Oracle 10g RAC can be implemented in Solaris Local/Logical Containers?
    My understanding is that RAC in a Zone (Container) is not supported by Oracle, and would not work anyway. Regardless of installation, RAC needs to do cluster-level work: managing the cluster configuration, changing network addresses dynamically, and sending guaranteed messages over the cluster interconnect. None of this can be done in a Local Zone in Solaris, because Local Zones have fewer permissions than the Global Zone. This is part of the design of Solaris Zones, and has nothing to do with how Oracle RAC itself works on them.
    This is all down to the security model of Zones, and Local Zones lack the ability to do certain things, to stop them reconfiguring themselves and impacting other Zones. Hence RAC cannot do dynamic cluster reconfiguration in a Local Zone, such as changing virtual network addresses when a node fails.
    My understanding is that RAC just cannot work in a Local Zone. This was certainly true 5 years ago (mid 2005), and was a result of the inherent design and implementation of Zones in Solaris. Things may have changed, so check the Solaris documentation, and check if Oracle RAC is supported in Local Zones. However, as I said, this limitation was inherent in the design of Zones, so I do not see how Sun could possibly have changed it so that RAC would work in a Local Zone.
    To me, your only option is the Global Zone. Which pretty much destroys the argument for having Zones on a Solaris system, unless you can host other non-Oracle application on the other Zones.
    2. The idea of RAC we are looking at is to have multiple Oracle installations on top of a native clustering solution, say Veritas Cluster or Sun Cluster. Do we still need Oracle's cluster solution, Clusterware (ORACRS), on top of this to achieve Oracle clustering? Will I be able to install Oracle as a standalone installation on top of such a native clustering solution?
    I am not sure the term 'native' is correct. All 'cluster' software is low level, and has components that run within the operating system, whether it is Sun Cluster, Veritas Cluster Server, or Oracle Clusterware. They are all as 'native' to Solaris as each other. They all perform the same function for Oracle RAC around cluster management - which nodes are members of the cluster, heartbeats between nodes, reliable fast message delivery, and so on.
    You only need one piece of Cluster software. So pick one and use it. If you use the Sun or Veritas cluster products, then you do not need the Oracle Clusterware software. But I would use it, because it is free (included with RAC), is from Oracle themselves and so guaranteed to work, is fully supported, and is one less third party product to deal with. Having an all Oracle software stack makes things simpler and more reliable, as far as I am concerned. You can be sure that Oracle will have fully tested RAC on their own Clusterware, and be able to replicate any issues in their own support environments.
    Officially the Sun and Veritas products will work and are supported. But when you get a problem with your Cluster environment, who are you going to call? You really want to avoid "finger pointing" when you have a problem, with each vendor blaming the cause of the problem on another vendor. Using an all Oracle stack is simpler, and ensures Oracle will "own" all your support problems.
    Also future upgrades between versions will be simpler, as Oracle will release all their software together, and have tested it together. When using third party Cluster software, you have to wait for all vendors to release new versions of their own software, and then wait again while it is tested against all the different third party software that runs on it. I have heard of customers stuck on old versions of certain cluster products, who cannot upgrade because there are no compatible combinations in the support matrices between the cluster product and Oracle database versions.
    I will configure a clustering solution like VRTS/SunCluster in active-passive, then have 3 Oracle installations on Node A, another 3 on Node B.
    As I said before, these 3 Oracle installations will actually all be in the same Global Zone, because RAC will not go into Local Zones.
    John

  • Not able to connect RAC database from client

    Hi there
    Recently I have configured RAC in a test environment: version 11.2.0.1, OS Red Hat 5.9. Everything seems to be fine, except that I am not able to connect to the RAC database from a client. The error is as under:
    C:\Documents and Settings\pbl>sqlplus test1/test1@myrac
    SQL*Plus: Release 10.2.0.1.0 - Production on Mon Nov 17 14:29:06 2014
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    ERROR:
    ORA-12545: Connect failed because target host or object does not exist
    Enter user-name:
    myrac =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan)(PORT = 1521))
        )
        (CONNECT_DATA =
          (SERVICE_NAME = racdb.testdb.com.bd)
        )
      )
    Please give me your valuable suggestions for overcoming the issue.
    Regards
    Jewel

    user13134974 wrote:
    ORA-12545: Connect failed because target host or object does not exist
    This error means that the hostname or IP address used in the TNS connection string failed to resolve or connect.
    Your client is making two connections. The first connection is to the SCAN listener. It matches a db instance to your connection request based on the service requested, available registered service handlers, load balancing, and so on. It then sends a redirect to your client informing it of the handler for that service.
    Your client then makes a second connection to this service (a local RAC listener that will provide you with a connection to the local RAC instance). This is what seems to be failing in your case.
    The SCAN listener's redirect uses the hostname of the server that the local listener is running on. Your client needs to resolve that hostname (of a RAC node) to an IP address. This is what likely fails.
    You can add the RAC node hostnames to your client platform's hosts file, along the lines of the sketch below. The appropriate action, however, would be to ensure that DNS is used for name resolution instead.
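    For illustration only, a hedged sketch of the client-side hosts workaround; the node hostnames and addresses are assumptions, since the post does not list them:
    # Windows client: C:\Windows\System32\drivers\etc\hosts
    # Map each RAC node's public and VIP hostnames, which the SCAN redirect may hand back
    192.168.56.101 racnode1 racnode1.testdb.com.bd
    192.168.56.111 racnode1-vip
    192.168.56.102 racnode2 racnode2.testdb.com.bd
    192.168.56.112 racnode2-vip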

  • Is there any document to install RAC 10G R2 with vmware shared storage?

    Hello Guys,
    Is there any documentation or how-to available for installing Oracle RAC 10G R2 on the Windows 2000 platform with 2 nodes, using VMware software for the shared disks?
    Please let me know the link; I will be really grateful to you. There is a document available for Windows 2003, but I couldn't find any for Windows 2000.
    Regards,
    Imran Baig

    Hello Guys,
    I was reading this article at http://www.dizwell.com/prod/node/25 and it says the following:
    "If you had a physical machine with two network cards installed and a second hard disk with absolutely nothing else on it, you could achieve a RAC using a physical machine"
    I am in the process of installing a 2-node RAC and have configured the network requirements on each node. I am stuck on the shared disk storage... can I achieve shared disk storage by adding another hard drive to one of the nodes? Please help...
    Regards,
    Imran
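    For what it's worth, a sketch of the .vmx settings commonly used in RAC-on-VMware write-ups to let two guests open one preallocated virtual disk (the parameter names are standard VMware .vmx options; the file name is hypothetical):
    # Allow both guests to open the same disk, and disable host-side caching
    disk.locking = "FALSE"
    diskLib.dataCacheMaxSize = "0"
    # Shared SCSI controller plus the shared, preallocated disk
    scsi1.present = "TRUE"
    scsi1.sharedBus = "virtual"
    scsi1:0.present = "TRUE"
    scsi1:0.fileName = "shared_disk.vmdk"
    scsi1:0.mode = "independent-persistent"
    Note that simply adding an extra hard drive to one node does not make it shared storage; the second node has no path to that disk.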

  • 10g grid agent installation and configuration on oracle RAC

    Hi All,
    I have a two-node RAC setup on the HP-UX Itanium platform, and I want to install the Oracle 10g grid agent on both RAC nodes to monitor the RAC instances and databases. The OMS repository server is ready and running. So far I have only installed the agents on both nodes, but the TNS entries and hosts files on the OMS server and both nodes have been modified.
    However, the nodes are not being automatically discovered by the OMS server. And what is the command to configure the grid agent on the RAC instances?
    Do I have to install the grid agent on both nodes individually, or can it be installed on a single node and integrated with the other node?
    Please help me out in this regard.
    Thanks in Advance,
    Sukanta Paul.
    Edited by: sukanta paul on Aug 18, 2009 10:16 PM

    Similar problem here: the installation does not detect the RAC nodes, and the install is done locally, which does not help the situation. By the way, we all consult the documentation; the whole point of having a forum is so we have a potential fix or workaround, not a link from here to the documentation. Just to mention :)
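    As a hedged pointer rather than a verified fix: 10g agents ship an agentca utility in the agent home for re-running agent configuration, with a cluster mode that re-discovers RAC targets; the node list and home path below are assumptions:
    # Re-run agent configuration in cluster mode (run from the agent home on each node)
    $AGENT_HOME/bin/agentca -f -c "node1,node2"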

  • Should we be using RAC for a data warehouse?

    We have an Oracle 11.1 data warehouse system. We were having some performance issues with the system, so we shut down one of the RAC nodes to see if that was causing the problem. The problem was slow updates on a table (all 30+ million rows in one table had to be fixed). Another performance problem is queries against large partitioned tables (even when the partition key is used). Both bulk collects and bulk inserts are very fast.
    Question: for an 11.1 data warehouse system, should we use RAC? Why?
    Thank you...

    a school of thought that suggests RAC potentially decreases system availability, rather than increasing it.
    RAC also has the potential of increasing availability. The potential "cuts both ways", so to speak.
    I've worked with non-RAC and RAC databases on a variety of platforms. My experience doesn't show evidence that RAC decreases availability. Given that most servers, even in non-HA clusters, are very reliable (generally), downtime is low in both non-RAC and RAC environments. However, RAC does provide an availability advantage -- protection against node outage. And there are environments which do require the availability of RAC. Not all applications require it. RAC is oversold, not in terms of advantages but in terms of installations.
    the increased complexity and the increased risk of both software and human related errors in a RAC environment
    I would say that a similar argument arises in DASD v SAN. A SAN is more complex. Human error on a SAN carries a much higher cost. Human error does occur on a SAN. However, no one rejects a SAN on these grounds alone.
    RAC is complex to implement. It requires more skills to administer and diagnose. However, if it is set up well, it doesn't suffer outages. An outage from human error is the same as in a non-RAC environment.
    The issue isn't RAC. The issue is that too many customers buy RAC without seriously evaluating whether
    a. they need the additional, minute increase in availability
    b. their applications are "RAC-aware" (TAF is still misunderstood)
    c. they have the skills
    RAC provides scalability. It also provides HA. Let me say that again: it also provides HA.
    I've seen a high-end failover cluster environment where one of the "best" vendors in the world talked of a 10-30 minute outage for the failover.
    Hemant K Chitale
    http://hemantoracledba.blogspot.com
    Edited by: Hemant K Chitale on May 31, 2009 11:41 PM

  • RAC Installation (Valid IP Error)

    Hello Guys,
    I am installing a 2-node RAC 10g R2 on the Windows XP platform.
    Each node contains 2 network cards for the private, public, and virtual IPs.
    Following is the TCP/IP setting on each node.
    Node 1
    Network Card 1
    Public IP : 192.168.0.140
    Virtual IP : 192.168.0.141
    Entries for these IPs exist in DNS
    Network Card 2
    Private IP : 10.0.0.1 - entry exists in the hosts file with the computer name
    Node 2
    Network Card 1
    Public IP : 192.168.0.142
    Virtual IP : 192.168.0.143
    Entries for these IPs exist in DNS
    Network Card 2
    Private IP : 10.0.0.2 - entry exists in the hosts file with the computer name
    Now during installation, where we have to configure the IPs of the nodes, it says:
    "The following Public IP do not resolve to a valid hostname. Enter Valid IP that maps hostname to continue"
    How do I rectify this error and continue?
    Please Help.
    Imran

    Node 1
    # Copyright (c) 1993-1999 Microsoft Corp.
    # This is a sample HOSTS file used by Microsoft TCP/IP for Windows.
    # This file contains the mappings of IP addresses to host names. Each
    # entry should be kept on an individual line. The IP address should
    # be placed in the first column followed by the corresponding host name.
    # The IP address and the host name should be separated by at least one
    # space.
    # Additionally, comments (such as these) may be inserted on individual
    # lines or following the machine name denoted by a '#' symbol.
    # For example:
    # 102.54.94.97 rhino.acme.com # source server
    # 38.25.63.10 x.acme.com # x client host
    127.0.0.1 localhost
    10.0.0.2     bss-training71
    bss-training71.beaconhouse.edu.pk is the public name of this node
    192.168.0.142 is the public IP registered in DNS
    192.168.0.143 is the virtual IP registered in DNS
    255.255.252.0 is the subnet mask for the above IPs.
    Node 2
    # Copyright (c) 1993-1999 Microsoft Corp.
    # This is a sample HOSTS file used by Microsoft TCP/IP for Windows.
    # This file contains the mappings of IP addresses to host names. Each
    # entry should be kept on an individual line. The IP address should
    # be placed in the first column followed by the corresponding host name.
    # The IP address and the host name should be separated by at least one
    # space.
    # Additionally, comments (such as these) may be inserted on individual
    # lines or following the machine name denoted by a '#' symbol.
    # For example:
    # 102.54.94.97 rhino.acme.com # source server
    # 38.25.63.10 x.acme.com # x client host
    127.0.0.1 localhost
    10.0.0.1     bss-training78
    bss-training.beaconhouse.edu.pk is the public host name for this machine
    192.168.0.140 is the public IP registered in DNS
    192.168.0.141 is the virtual IP registered in DNS
    255.255.252.0 is the subnet mask for the above IPs.
    Hope this clears up the situation...
    Regards,
    Imran
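    For illustration, a hedged sketch of the hosts entries each node would typically need so the public and virtual IPs resolve to hostnames (the -vip and -prv names are assumptions; the IPs come from the post above). With only the private name in each hosts file, the installer has nothing local to resolve the public IPs against unless DNS answers for them:
    # Node 1 and Node 2 (identical)
    192.168.0.140 bss-training78.beaconhouse.edu.pk bss-training78
    192.168.0.141 bss-training78-vip
    192.168.0.142 bss-training71.beaconhouse.edu.pk bss-training71
    192.168.0.143 bss-training71-vip
    10.0.0.1 bss-training78-prv
    10.0.0.2 bss-training71-prv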

  • Regarding IP changes in a RAC environment ~

    The weather has gotten a lot warmer ~ take care not to catch a cold ~
    I am posting because I have a question.
    I await the experts' answers ~ ^^;
    Currently the OS is Windows 2003 and Oracle is set up as 10.2.0.1 RAC.
    If anyone has performed an IP change on such a setup, or knows how,
    I would appreciate it if you could share the procedure ^^
    Edited by:
    키노미르

    I finally succeeded in changing the IPs. It took three hours of fumbling. I like to try this,
    try that, and poke at everything, so it took a while.
    I originally had 10g RAC built on SATA2 disks, but my anygate router started acting up and I had no choice.
    The router had a bug where the wireless LAN only worked after changing the router's gateway IP back to its
    default value, so the existing 10g RAC setup had to have its IPs changed as well.
    My existing 10g RAC had the OCR and voting disk on OCFS, but when I tried to rebuild it another way, on FATA
    disks, it did not go well. While working through the IP-change problem, I wanted a second environment
    built the same way to test against, in case the change failed.
    In 9i RAC, I created logical volumes with pvcreate and lvcreate for the Cluster Manager Quorum File and the
    Shared Configuration File, bound those volumes to raw devices, and linked to them.
    9i RAC works fine with the quorum file and shared configuration file on logical volumes.
    In the same way, on the FATA disks I set up the 10g CRS install with the OCR and voting disk on logical volumes
    bound to raw devices and linked. I took care of permissions and the like, but after running root.sh the CRS installation failed.
    It prints the message "Failed to upgrade Oracle Cluster Registry configuration". I tested on RH ES3 and RH ES4 and it failed on both.
    I am going to try again on CentOS. I really hope this one works...
    So the CRS install succeeds only when the OCR and voting disk are set up through OCFS; on logical volumes,
    what installed fine as the 9i RAC cluster manager fails during the 10g CRS install.
    Out of sheer frustration I asked an LG CNS engineer, who said there are many cases where the CRS install fails.
    I will try the OCR and voting disk on logical volumes once more on CentOS, and if that fails I will just rebuild on OCFS.
    [root@rac2 root]# /u01/app/oracle/product/crs/root.sh
    WARNING: directory '/u01/app/oracle/product' is not owned by root
    WARNING: directory '/u01/app/oracle' is not owned by root
    WARNING: directory '/u01/app' is not owned by root
    WARNING: directory '/u01' is not owned by root
    Checking to see if Oracle CRS stack is already configured
    Setting the permissions on OCR backup directory
    Setting up NS directories
    Failed to upgrade Oracle Cluster Registry configuration
    ====================================================
    That was a long preamble..~ I needed to change the IPs of the setup I had previously built on the SATA2 disks,
    and I configured it as below, following the Metalink notes.
    # System configuration
    OS : CentOS 4.2 32bit
    DB : 10.2.0.1 32bit 2node rac
    shared disk device: 누디앙 CD-503 (ieee1394 2port)
    shared disk : Samsung 40G 7200rpm
    server (identical on both nodes) : P4 2.8GHz, 2G RAM, 80G SATA2 disk (7200rpm)
    RAC configuration : OCR and voting disk on OCFS, database on ASM
    node name (server name) : rac1, rac2
    instance name: orcl1 , orcl2
    DB name: orcl
    IP configuration :
    before : public (192.168.1.101), interconnect (192.168.2.101), VIP (192.168.1.201)
    after : public (192.168.10.101), interconnect (192.168.20.201), VIP (192.168.10.201)
    The steps below were tested against the Metalink notes listed here.
    I cannot guarantee they are accurate, so please treat them as reference only.
    metalink note 283684.1 How to Change Interconnect/Public Interface IP Subnet in a 10g Cluster
    metalink note 276434.1 Modifying the VIP of a Cluster Node
    1. I changed the VIPs first.
    rac1 node vip : 192.168.1.201 (vip-rac1)
    rac2 node vip : 192.168.1.202 (vip-rac2)
    (1) Export the system information and inspect it. As shown below, the VIP IP and subnet
    need to be changed.
         [root@rac1 bin]# srvconfig -exp kkk.txt
         srvconfig successfully exported cluster database configurations to file "kkk.txt"
         [root@rac1 bin]# more kkk.txt
         # PLEASE DO NOT EDIT THIS FILE - it is "srvconfig" generated
         DATABASES = (orcl)
         ########## Configuration of cluster database "orcl" follows ##########
         orcl.ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1
         orcl.SPFILE = +FLASH_RECOVERY_AREA/orcl/spfileorcl.ora
         orcl.ENABLED = true
         orcl.INSTANCE = (orcl1,orcl2)
         orcl.orcl1.NODE = rac1
         orcl.orcl1.ENABLED = true
         orcl.orcl2.NODE = rac2
         orcl.orcl2.ENABLED = true
         orcl.SERVICE = (orcl)
         orcl.orcl.INSTANCES = (orcl2:PREFERRED:true),(orcl1:PREFERRED:true)
         orcl.orcl.ENABLED = true
         orcl.orcl.TAFPOLICY = basic
         ########## Configuration of nodeapps follows ##########
         NODEAPPS = (rac1,rac2)
         rac1.VIP = 192.168.1.201:255.255.255.0:(eth0,eth1)
         rac1.ORACLE_HOME = /u01/app/oracle/product/crs
         rac2.VIP = 192.168.1.202:255.255.255.0:(eth0,eth1)
         rac2.ORACLE_HOME = /u01/app/oracle/product/crs
         ########## Configuration of vip_range follows ##########
         VIP_RANGE = null
         [root@rac1 oracle]# ocrconfig -export kkk1.txt
         [root@rac1 oracle]# more kkk1.txt
         呈
         SYSTEM
         DATABASE
         CRS
         SYSTEM.css
         SYSTEM.language
         SYSTEM.version
         SYSTEM.versionstring
         SYSTEM.ORA_CRS_HOME
         SYSTEM.local_only
    (2) Stop the database on all nodes (rac1, rac2).
         [oracle@rac1 ~]$ srvctl stop database -d orcl
    (3) Stop the ASM instance on all nodes.
         [oracle@rac1 ~]$ srvctl stop asm -n rac1
         [oracle@rac1 ~]$ srvctl stop asm -n rac2
    (4) Stop the nodeapps on all nodes.
    The nodeapps are the VIP, GSD, listener, and ONS daemons.
         [oracle@rac1 ~]$ srvctl stop nodeapps -n rac1
         [oracle@rac1 ~]$ srvctl stop nodeapps -n rac2
    (5) Verify with crs_stat that all resources on both nodes are down.
         [oracle@rac1 bin]$ cd $ORA_CRS_HOME/bin
         [oracle@rac1 bin]$ ./crs_stat
         NAME=ora.ORCL.ORCL.cs
         TYPE=application
         TARGET=OFFLINE
         STATE=OFFLINE
    (6) Update the hosts file on both nodes.
         - Before
         127.0.0.1 localhost.localdomain localhost
         # Public Network - (eth0)
         192.168.1.101 rac1
         192.168.1.102 rac2
         # Private Interconnect - (eth1)
         192.168.2.101 int-rac1
         192.168.2.102 int-rac2
         # Public Virtual IP (VIP) addresses for - (eth0)
         192.168.1.201 vip-rac1
         192.168.1.202 vip-rac2
         - After
         127.0.0.1 localhost.localdomain localhost
         # Public Network - (eth0)
         192.168.10.101 rac1
         192.168.10.102 rac2
         # Private Interconnect - (eth1)
         192.168.20.201 int-rac1
         192.168.20.202 int-rac2
         # Public Virtual IP (VIP) addresses for - (eth0)
         192.168.10.201 vip-rac1
         192.168.10.202 vip-rac2
    (7) Change the network settings on the rac1 and rac2 nodes.
    - Change the network IPs at the OS level.
    (8) Check the current state with ifconfig.
         - rac1 node
         [root@rac1 ~]# ifconfig -a
         eth0 Link encap:Ethernet HWaddr 00:19:21:5D:30:B5
         inet addr:192.168.10.101 Bcast:192.168.10.255 Mask:255.255.255.0
         inet6 addr: fe80::219:21ff:fe5d:30b5/64 Scope:Link
         UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
         RX packets:100273 errors:0 dropped:0 overruns:0 frame:0
         TX packets:68329 errors:0 dropped:0 overruns:0 carrier:0
         collisions:0 txqueuelen:1000
         RX bytes:78294892 (74.6 MiB) TX bytes:35353321 (33.7 MiB)
         Interrupt:10 Base address:0xcc00
         eth1 Link encap:Ethernet HWaddr 00:10:5A:5E:BD:7C
         inet addr:192.168.20.201 Bcast:192.168.20.255 Mask:255.255.255.0
         inet6 addr: fe80::210:5aff:fe5e:bd7c/64 Scope:Link
         UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
         RX packets:44306 errors:0 dropped:0 overruns:0 frame:0
         TX packets:48826 errors:0 dropped:0 overruns:0 carrier:0
         collisions:0 txqueuelen:1000
         RX bytes:23070633 (22.0 MiB) TX bytes:38914552 (37.1 MiB)
         Interrupt:11 Base address:0xe880
         lo Link encap:Local Loopback
         inet addr:127.0.0.1 Mask:255.0.0.0
         inet6 addr: ::1/128 Scope:Host
         UP LOOPBACK RUNNING MTU:16436 Metric:1
         RX packets:35150 errors:0 dropped:0 overruns:0 frame:0
         TX packets:35150 errors:0 dropped:0 overruns:0 carrier:0
         collisions:0 txqueuelen:0
         RX bytes:9869103 (9.4 MiB) TX bytes:9869103 (9.4 MiB)
         sit0 Link encap:IPv6-in-IPv4
         NOARP MTU:1480 Metric:1
         RX packets:0 errors:0 dropped:0 overruns:0 frame:0
         TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
         collisions:0 txqueuelen:0
         RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
         - rac2 node
         [root@rac2 ~]# ifconfig -a
         eth0 Link encap:Ethernet HWaddr 00:19:21:5D:2B:E8
         inet addr:192.168.10.102 Bcast:192.168.10.255 Mask:255.255.255.0
         inet6 addr: fe80::219:21ff:fe5d:2be8/64 Scope:Link
         UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
         RX packets:67515 errors:0 dropped:0 overruns:0 frame:0
         TX packets:97912 errors:0 dropped:0 overruns:0 carrier:0
         collisions:0 txqueuelen:1000
         RX bytes:35277845 (33.6 MiB) TX bytes:77863535 (74.2 MiB)
         Interrupt:10 Base address:0xcc00
         eth1 Link encap:Ethernet HWaddr 00:50:04:BD:E6:F9
         inet addr:192.168.20.202 Bcast:192.168.20.255 Mask:255.255.255.0
         inet6 addr: fe80::250:4ff:febd:e6f9/64 Scope:Link
         UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
         RX packets:48512 errors:0 dropped:0 overruns:0 frame:0
         TX packets:43953 errors:0 dropped:0 overruns:0 carrier:0
         collisions:0 txqueuelen:1000
         RX bytes:38782288 (36.9 MiB) TX bytes:22830335 (21.7 MiB)
         Interrupt:11 Base address:0xe880
         lo Link encap:Local Loopback
         inet addr:127.0.0.1 Mask:255.0.0.0
         inet6 addr: ::1/128 Scope:Host
         UP LOOPBACK RUNNING MTU:16436 Metric:1
         RX packets:81708 errors:0 dropped:0 overruns:0 frame:0
         TX packets:81708 errors:0 dropped:0 overruns:0 carrier:0
         collisions:0 txqueuelen:0
         RX bytes:12717471 (12.1 MiB) TX bytes:12717471 (12.1 MiB)
         sit0 Link encap:IPv6-in-IPv4
         NOARP MTU:1480 Metric:1
         RX packets:0 errors:0 dropped:0 overruns:0 frame:0
         TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
         collisions:0 txqueuelen:0
         RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
    (9) Change the VIPs (run as the root user).
    - In my case, I made the mistake of writing different interconnect IPs in the /etc/hosts file
    and on eth1. As a result, the VIP-change command below hung; later, the moment I found the
    discrepancy and changed /etc/hosts, the server rebooted. Before running the commands below,
    be sure to run ping rac1 and ping int-rac1 to verify the changes are exactly right.
         [oracle@rac1 ~]$ cd $ORA_CRS_HOME/bin
         [oracle@rac1 bin]$ su
         Password:
         [root@rac1 bin]# srvctl modify nodeapps -n rac1 -A 192.168.10.201/255.255.255.0/eth0
         [root@rac1 bin]# srvctl modify nodeapps -n rac2 -A 192.168.10.202/255.255.255.0/eth0
    2. Second, change the interconnect/public IP subnets.
    At this point we have already stopped all resources, changed the /etc/hosts file in advance,
    and changed the OS network settings.
    (1) Oddly enough, oifcfg iflist already showed the updated values.
         [oracle@rac1 ~]$ cd $ORA_CRS_HOME/bin
         [oracle@rac1 bin]$ su
         Password:
         [root@rac1 bin]# oifcfg iflist
         eth0 192.168.10.0
         eth1 192.168.20.0
         - However, checking on the rac2 node produced an error.
         [root@rac2 bin]# oifcfg iflist
         PRIF-12: failed to initialize cluster support services
    (2) Running oifcfg getif gave results different from the document.
         [root@rac1 bin]# oifcfg getif
         There was no output at all.
         [root@rac2 bin]# oifcfg getif
         PRIF-12: failed to initialize cluster support services
         node2 errored as above.
    (3) Change the public interface IP.
    Before changing it, delete the interface with oifcfg delif.
    In my case, unlike the document, the delete did not work at first;
    it worked only after I re-added the interface. Also, the command
    worked on the rac1 node but was not possible on the rac2 node.
         old public interface IP subnet : 192.168.1.0
         new public interface IP subnet : 192.168.10.0
         old interconnect interface IP subnet : 192.168.2.0
         new interconnect interface IP subnet : 192.168.20.0
         [root@rac1 bin]# oifcfg delif -global eth1
         PROC-4: The cluster registry key to be operated on does not exist.
         PRIF-11: cluster registry error
         [root@rac1 bin]# oifcfg
         Name:
         oifcfg - Oracle Interface Configuration Tool.
         Usage: oifcfg iflist [-p [-n]]
         oifcfg setif {-node <nodename> | -global} {<if_name>/<subnet>:<if_type>}...
         oifcfg getif [-node <nodename> | -global] [ -if <if_name>[/<subnet>] [-type <if_type>] ]
         oifcfg delif [-node <nodename> | -global] [<if_name>[<subnet>]]
         oifcfg [-help]
         <nodename> - name of the host, as known to a communications network
         <if_name> - name by which the interface is configured in the system
         <subnet> - subnet address of the interface
         <if_type> - type of the interface { cluster_interconnect | public | storage }
         [root@rac1 bin]# oifcfg setif -global eth0/192.168.10.0:public
         [root@rac1 bin]# oifcfg delif -global eth0
         [root@rac1 bin]#
         As above, delif only worked after a setif.
         I then set it again.
         [root@rac1 bin]# oifcfg setif -global eth0/192.168.10.0:public
         [root@rac1 bin]# oifcfg getif
         eth0 192.168.10.0 global public
         - Then I ran the command on the rac2 node, but it failed.
         [oracle@rac2 admin]$ oifcfg setif -global eth0/192.168.10.0:public
         PRIF-12: failed to initialize cluster support services
    (4) Change the interconnect (internal) interface IP.
         [root@rac1 bin]# oifcfg setif -global eth1/192.168.20.0:cluster_interconnect
         [root@rac1 bin]# oifcfg getif
         eth0 192.168.10.0 global public
         eth1 192.168.20.0 global cluster_interconnect
    3. Check and update the listener.ora and tnsnames.ora files.
    (1) Update the listener.ora file.
         - Before
         LISTENER_RAC1 =
         (DESCRIPTION_LIST =
         (DESCRIPTION =
         (ADDRESS = (PROTOCOL = TCP)(HOST = vip-rac1)(PORT = 1521)(IP = FIRST))
         (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.101)(PORT = 1521)(IP = FIRST))
         SID_LIST_LISTENER_RAC1 =
         (SID_LIST =
         (SID_DESC =
         (SID_NAME = PLSExtProc)
         (ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1)
         (PROGRAM = extproc)
         - After (192.168.1.101 above was the address for the hostname rac1; I changed the IP.)
         LISTENER_RAC1 =
         (DESCRIPTION_LIST =
         (DESCRIPTION =
         (ADDRESS = (PROTOCOL = TCP)(HOST = vip-rac1)(PORT = 1521)(IP = FIRST))
         (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.10.102)(PORT = 1521)(IP = FIRST))
         SID_LIST_LISTENER_RAC1 =
         (SID_LIST =
         (SID_DESC =
         (SID_NAME = PLSExtProc)
         (ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1)
         (PROGRAM = extproc)
         I changed the rac2 node in the same way.
    (2) Check the tnsnames.ora file. Mine used hostnames, so it did not need changing.
         [oracle@rac1 admin]$ cat tnsnames.ora
         # tnsnames.ora Network Configuration File: /u01/app/oracle/product/10.2.0/db_1/network/admin/tnsnames.ora
         # Generated by Oracle configuration tools.
         LISTENERS_ORCL =
         (ADDRESS_LIST =
         (ADDRESS = (PROTOCOL = TCP)(HOST = vip-rac1)(PORT = 1521))
         (ADDRESS = (PROTOCOL = TCP)(HOST = vip-rac2)(PORT = 1521))
         ORCL2 =
         (DESCRIPTION =
         (ADDRESS = (PROTOCOL = TCP)(HOST = vip-rac2)(PORT = 1521))
         (CONNECT_DATA =
         (SERVER = DEDICATED)
         (SERVICE_NAME = orcl)
         (INSTANCE_NAME = orcl2)
         ORCL1 =
         (DESCRIPTION =
         (ADDRESS = (PROTOCOL = TCP)(HOST = vip-rac1)(PORT = 1521))
         (CONNECT_DATA =
         (SERVER = DEDICATED)
         (SERVICE_NAME = orcl)
         (INSTANCE_NAME = orcl1)
         ORCL =
         (DESCRIPTION =
         (ADDRESS = (PROTOCOL = TCP)(HOST = vip-rac1)(PORT = 1521))
         (ADDRESS = (PROTOCOL = TCP)(HOST = vip-rac2)(PORT = 1521))
         (LOAD_BALANCE = yes)
         (CONNECT_DATA =
         (SERVER = DEDICATED)
         (SERVICE_NAME = orcl)
         (FAILOVER_MODE =
         (TYPE = SELECT)
         (METHOD = BASIC)
         (RETRIES = 180)
         (DELAY = 5)
         EXTPROC_CONNECTION_DATA =
         (DESCRIPTION =
         (ADDRESS_LIST =
         (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
         (CONNECT_DATA =
         (SID = PLSExtProc)
         (PRESENTATION = RO)
    - Check the rac2 node the same way.
    4. Update the OCFS2 disk mount information.
    I shared the OCR and voting disk through OCFS,
    so the configuration had to be updated when the IPs changed.
    I opened /etc/ocfs2/cluster.conf in vi and corrected the IPs.
         [root@rac1 ~]# cat /etc/ocfs2/cluster.conf
         node:
         ip_port = 7777
         ip_address = 192.168.10.101
         number = 0
         name = rac1
         cluster = ocfs2
         node:
         ip_port = 7777
         ip_address = 192.168.10.102
         number = 1
         name = rac2
         cluster = ocfs2
         cluster:
         node_count = 2
         name = ocfs2
    5. Restart the servers and check whether the services come up automatically.
    - After the reboot, all services loaded normally.
    The IP change is now complete.
         [oracle@rac1 ~]$ cd $ORA_CRS_HOME/bin
         [oracle@rac1 bin]$ ./crs_stat
         NAME=ora.ORCL.ORCL.cs
         TYPE=application
         TARGET=OFFLINE
         STATE=OFFLINE
         NAME=ora.orcl.db
         TYPE=application
         TARGET=ONLINE
         STATE=ONLINE on rac1
         NAME=ora.orcl.orcl.cs
         TYPE=application
         TARGET=ONLINE
         STATE=ONLINE on rac1
         NAME=ora.orcl.orcl.orcl1.srv
         TYPE=application
         TARGET=ONLINE
         STATE=ONLINE on rac1
         NAME=ora.orcl.orcl.orcl2.srv
         TYPE=application
         TARGET=ONLINE
         STATE=ONLINE on rac2
         NAME=ora.orcl.orcl1.inst
         TYPE=application
         TARGET=ONLINE
         STATE=ONLINE on rac1
         NAME=ora.orcl.orcl2.inst
         TYPE=application
         TARGET=ONLINE
         STATE=ONLINE on rac2
         NAME=ora.rac1.ASM1.asm
         TYPE=application
         TARGET=ONLINE
         STATE=ONLINE on rac1
         NAME=ora.rac1.LISTENER_RAC1.lsnr
         TYPE=application
         TARGET=ONLINE
         STATE=ONLINE on rac1
         NAME=ora.rac1.gsd
         TYPE=application
         TARGET=ONLINE
         STATE=ONLINE on rac1
         NAME=ora.rac1.ons
         TYPE=application
         TARGET=ONLINE
         STATE=ONLINE on rac1
         NAME=ora.rac1.vip
         TYPE=application
         TARGET=ONLINE
         STATE=ONLINE on rac1
         NAME=ora.rac2.ASM2.asm
         TYPE=application
         TARGET=ONLINE
         STATE=ONLINE on rac2
         NAME=ora.rac2.LISTENER_RAC2.lsnr
         TYPE=application
         TARGET=ONLINE
         STATE=ONLINE on rac2
         NAME=ora.rac2.gsd
         TYPE=application
         TARGET=ONLINE
         STATE=ONLINE on rac2
         NAME=ora.rac2.ons
         TYPE=application
         TARGET=ONLINE
         STATE=ONLINE on rac2
         NAME=ora.rac2.vip
         TYPE=application
         TARGET=ONLINE
         STATE=ONLINE on rac2
    6. Verify that the VIPs were created
    [root@rac1 ~]# ifconfig -a
    eth0 Link encap:Ethernet HWaddr 00:19:21:5D:30:B5
    inet addr:192.168.10.101 Bcast:192.168.10.255 Mask:255.255.255.0
    inet6 addr: fe80::219:21ff:fe5d:30b5/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    RX packets:1076 errors:0 dropped:0 overruns:0 frame:0
    TX packets:1026 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1000
    RX bytes:99825 (97.4 KiB) TX bytes:97112 (94.8 KiB)
    Interrupt:10 Base address:0xcc00
    eth0:2 Link encap:Ethernet HWaddr 00:19:21:5D:30:B5
    inet addr:192.168.10.201 Bcast:192.168.10.255 Mask:255.255.255.0
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    Interrupt:10 Base address:0xcc00
    eth1 Link encap:Ethernet HWaddr 00:10:5A:5E:BD:7C
    inet addr:192.168.20.201 Bcast:192.168.20.255 Mask:255.255.255.0
    inet6 addr: fe80::210:5aff:fe5e:bd7c/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    RX packets:66124 errors:0 dropped:0 overruns:0 frame:0
    TX packets:93483 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1000
    RX bytes:35409232 (33.7 MiB) TX bytes:85751255 (81.7 MiB)
    Interrupt:11 Base address:0xe880
    lo Link encap:Local Loopback
    inet addr:127.0.0.1 Mask:255.0.0.0
    inet6 addr: ::1/128 Scope:Host
    UP LOOPBACK RUNNING MTU:16436 Metric:1
    RX packets:78532 errors:0 dropped:0 overruns:0 frame:0
    TX packets:78532 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:8421778 (8.0 MiB) TX bytes:8421778 (8.0 MiB)
    sit0 Link encap:IPv6-in-IPv4
    NOARP MTU:1480 Metric:1
    RX packets:0 errors:0 dropped:0 overruns:0 frame:0
    TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
    Edited by:
    Min Angel (Yeon Hong Min, Korean)
    Looking at the clock, it's already 4 a.m...;; Once I start something, I have to see it
    through even if it takes all night; I don't know when this troublesome obsession will go away.
    If I showed this kind of devotion to a partner I'd be getting married, but here I am
    doing it only with Oracle..;; Sorry for the long aside.
    Edited by:
    Min Angel (Yeon Hong Min, Korean)
    I hope this was helpful. ^^

  • Hardware analysis for Building RAC on VM

    Can I install this 64-bit RAC in VMs on my Dell Vostro 1500 laptop? It's 1.79 GHz with 2GB RAM and a 500GB external hard drive.
    I ordered the kit that includes Oracle VM 2.1.2 and Oracle Enterprise Linux 5.2 (x86_64).
    This is what I am trying to build:
    http://www.oracle.com/technology/pub/articles/wartak-rac-vm.html#1

    You have the disk space, but not the memory. Even discounting the memory for the Oracle VM hypervisor, OEL/RH5 is going to take at least 1G to run. So, with your base OS, hypervisor, Linux, and Oracle (Clusterware and DB), it may be possible, but I believe you would be swapping so much you would have terrible performance. I would look at a MINIMUM of 3GB, if not 4GB, of RAM to try this configuration; a rough budget is sketched below.
    Cheers
    Jay
    http://www.grumpy-dba.com
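    A rough back-of-the-envelope budget for the article's two-guest layout (the per-guest figures are assumptions, not from the reply above):
    2 guests x (1.0GB guest Linux + ~0.5GB Clusterware/DB)  = ~3.0GB
    + ~0.5GB for the Oracle VM hypervisor/dom0              = ~3.5GB total
    which is why 2GB of RAM forces heavy swapping and 3-4GB is the practical floor.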

  • Help required for disabling RAC ethernet

    Hi Guys!
    We want to test Oracle RAC having 2 nodes on Windows platform.
    We are testing the worst-case scenario by disabling the ethernet cards on one of the nodes of the RAC.
    Just want to know: should we disable the "Public" ethernet or the "Private" ethernet on node 2 of the RAC, please?
    Or is disabling both ethernets on node 2 required?
    Thanks

    Just a sidenote:
    11.2.0.2 introduced a concept of "Rebootless node fencing".
    This especially handles the loss of the interconnect in 11.2.0.2:
    In most cases only the processes will be killed, and the cluster stack will then be restarted in the event of a loss of the interconnect.
    Only if the killing of I/O processes is not successful (or takes too long) will the cluster restart be escalated to a node reboot.
    Just checked... wonder why this is not in the new features section...
    Sebastian

  • OCFS bug

    I think I've found a bug in OCFS. If you follow these 8 steps exactly, OCFS will lock up/hang until you remove the file from step 7. This happens reliably on our system (2-node cluster w/EMC CX400)... can anyone confirm that the same bug happens on their (different) hardware as well?
    1. install ocfs-2.4.9-e-enterprise-1.0.9-12 and 1.0.9-12 tools
    2. create a new filesystem
    node1# mkfs.ocfs -b 128 -F -g 0 -L dc1:/u02 -m /u02 -p 755 -u 0 /dev/sdc
    3. mount the fs
    node1# mount /dev/sdc /u02 -t ocfs
    4. make a subdir, change into it
    node1# mkdir /u02/test
    node1# cd /u02/test
    5. fill the directory with 406 70M files
    node1# /usr/bin/time -v /bin/sh -c '
    for ((X=1; X <= 406; X++)); do
    echo test$X.dbf
    dd if=/dev/zero of=./test$X.dbf bs=131072 count=560
    done'
    6. mount the volume on THE OTHER NODE
    node2# mount /dev/sdc /u02 -t ocfs
    7. create a file in the test dir (from either node)
    node2# /bin/echo test > /u02/test/file1
    8. create a file in the test dir from the opposite node as step 7
    node1# /bin/echo test > /u02/test/file2
    Step 8 causes my system to hang. If you try to "kill" the process (from another login), it gets stuck in a "D" disk-wait state. If you go back to the first node and delete the first file you created, then all hanging processes on the second node resume. Following these steps consistently reproduces the error on my system.
    Jeremy
    SysAdmin/DBA
    Lansing, MI

    It still doesn't work with the correct chipsets... You can go to www.linux1394.org for details on TI-chipset cards and FireWire disks. LaCie FireWire disks advertise that they use the Oxford chipset. Some WD disks that are USB/FireWire combos don't use the Oxford chipset. If you've got FireWire/raw working, you probably have the Oxford chipset on the disk. It would be great if Oracle would either fix the problem, or just tell everyone that this no longer works, so people don't waste time on something that won't give them an environment for RAC/OCFS.

  • RAC OS platform migration

    Hi All,
    We're planning to migrate Oracle RAC from IBM/AIX to the Linux platform. On the RAC/AIX side we have 2 nodes; each node has 2 CPUs and 8GB of RAM.
    How can we convert 2 CPUs and 8GB on AIX to an equivalent number of CPUs and amount of memory on Linux?
    Any advice is greatly appreciated.
    Thank you!

    user12255861 wrote:
    Hi All,
    We're planning to migrate Oracle RAC from IBM/AIX to the Linux platform. On the RAC/AIX side we have 2 nodes; each node has 2 CPUs and 8GB of RAM. How can we convert 2 CPUs and 8GB on AIX to an equivalent number of CPUs and amount of memory on Linux?
    Invalid question. It has nothing to do with Linux. It has everything to do with hardware comparison.
    You need to look at something like the matrix of CTP (Composite Theoretical Performance) and MTOPS (Millions of Theoretical Operations Per Second) of the processor and platform to make a call on what the "equivalence" is.
    You can find this matrix for the AMD processor range at:
    http://www.amd.com/us-en/Processors/ProductInformation/0,,30_118_8796_8800~124990,00.html
    This CPU technology is used in blade servers (relatively inexpensive) from manufacturers like Sun, HP, and others. The Sunfire range of AMD servers is what we have typically used ourselves for some years now. An added benefit is that this is sold by Oracle/Sun, which means that both your RAC software and hardware stacks have the same owner. Good for support and maintenance, and potential leverage when it comes to dealing with discounts. ;-)
