BWA OS Upgrade Error Shared Storage

We recently upgraded our IBM BWA to SUSE Linux 11.1, upgraded GPFS to 3.4.0-7, and recompiled RDAC. Everything seemed to work fine and queries execute, but when I run checkBIA I get these warning/error messages:
OK: ====== Shared Storage ======
OK: Storage: /usr/sap/BXQ/TRX00/index
OK: Logdir: /usr/sap/BXQ/SYS/global/trex/checks/report_2012-02-28_102556
OK: Local avg directory creation/deletion: 1330.9 dirs/s, exp: 700.0 dirs/s
ERROR: Parallel remote avg 4 hosts: 24.4 MB/s, exp: 31.0 MB/s
ERROR: 1 hosts parallel iterative remote avg: 57.9 MB/s, exp: 90.0 MB/s
WARNING: 2 hosts parallel iterative remote avg: 42.8 MB/s, exp: 45.0 MB/s
WARNING: 4 hosts parallel iterative remote avg: 26.4 MB/s, exp: 31.0 MB/s
ERROR: Serial remote avg: 59.4 MB/s, exp: 90.0 MB/s
OK: ====== Landscape Reorganization ======
Anyone seen this before, or have any idea what may be the cause? Thank you
Karl

Hello Karl,
first, BWA is not supported on all SUSE versions. Please check http://service.sap.com/pam > "BW Accelerator 7.20". It's certainly not supported for BWA 7.0.
Secondly, you should not upgrade the OS unless specifically instructed to do so by SAP Support or unless security patches are required. Moving from SUSE 9 or 10 to 11 in particular is not recommended.
Finally, because of #1, please contact IBM to make sure you have a supported OS version for your specific BWA hardware. IBM also needs to make sure that your hardware fulfills the minimum specifications for network and storage throughput (which is what the script checks).
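If you want a rough manual cross-check of the throughput figures the script reports, you can time a large sequential write and read directly on the GPFS mount (a sketch only; the path is the storage directory from your report, and the 1 GB size is an arbitrary choice):
dd if=/dev/zero of=/usr/sap/BXQ/TRX00/index/ddtest bs=1M count=1024 conv=fsync   # 1 GB sequential write, flushed to disk before dd exits
dd if=/usr/sap/BXQ/TRX00/index/ddtest of=/dev/null bs=1M   # sequential read back
rm /usr/sap/BXQ/TRX00/index/ddtest
Running the read from a different blade approximates the "remote" checks; numbers far below the expected 90 MB/s point at the storage or network stack rather than at the check script itself.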
Thanks,
Marc Bernard
SAP Customer Solution Adoption (CSA)

Similar Messages

  • Qmaster error: shared storage client timed out while subscribing to...

    Here's my Qmaster setup:
    computer 1: CONTROLLER, no nodes
    - 8TB RAID hooked up via Fiber
    - connected to the GigE network switch via a 6-port bond
    - cluster storage set to a path on the RAID
    computers 2, 3, 4, 5: RENDER NODES
    - each computer has a 2-port bonded connection with the GigE switch
    computer 6: Client, with FCS2 installed.
    - connected with a single GigE link
    I have set up this cluster primarily for command-line renders, and it works great. I submit command-line renders from the client computer, which get distributed and executed on each node. The command line renders specify a source file on the RAID, and a destination path on the RAID. Everything works great.
    I run into trouble when trying to use Compressor with this same setup. The files are on the RAID, and all my computers have an NFS automount that puts it in the /Volumes folder on each computer.
    I set up my Compressor job and submit it to the cluster. It submits successfully and distributes the work. After a few seconds, each node gives me a timeout error:
    "Shared storage client timed out while subscribing to [computer1.local/path to cluster storage]"
    Is this a bandwidth issue? Command line renders work fine; I can render 16 simultaneous QuickTimes to the RAID over NFS. I don't see much network activity on any of the computers when it's trying to start the Compressor render; it's as if it's not even trying to connect.
    If I submit the SAME compressor job to a cluster with nodes ONLY on the controller computer, it renders fine. Clearly the networked nodes are having trouble connecting to the share for some reason.
    Does anybody have any ideas? I have tried almost everything to get this to work. Hooking up each node locally to the RAID is NOT an option unfortunately.
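    A quick way to check each node's view of the NFS export from Terminal (a sketch; computer1.local is the controller from the error message, and the automount path is a placeholder for whatever yours is):
    showmount -e computer1.local   # list the exports the controller is advertising
    ls /Volumes/<automount-path>   # confirm the automount actually resolves on this node
    If either command hangs or fails on a node, that would point at NFS/automount rather than Qmaster.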

    WELL I DO NOW!
    Thanks. It's taken six months and several paid 'professionals', and then you come in here... swinging your minimalist genius. One line. One single line. And it's done.
    If you are in London, let's lift a beer or five together.
    Thank you sir. Thank you!

  • I want to upgrade my backup storage space to 10GB but can't. Every time I try to "BUY", an unknown error occurs. How can I use my regular Apple ID?

    I want to upgrade my backup storage space to 10GB but can't.
    Every time I try to "BUY" it never goes anywhere; instead I just get the message that an unknown error has occurred.
    How can I use my regular Apple ID to purchase the upgrade and backup storage space?
    Thanks in advance.
    Mohd

    [email protected] wrote:
    I'm reluctant to post apple ID's in a public support forum, and would prefer to deal with this by private email.  or phone.
    Then contact Apple directly.
    This is a user to user technical support forum.  No one here has access to manage Apple ID's other than their own.
    There is no Apple presence here.

  • Shared storage client timed out error

    Hello everybody, please help
    I have been at this for about 2 days now and still can't find the source of my issue using FCP to render a project through Compressor to multiple Macs using Qmaster.
    What is happening is that I start the render and it gets sent to the second computer; I can see the processor ramping up, then after (+-) 30 seconds I get the error below and the render fails.
    Here is my set up:
    MacBook1 as the cluster controller
    MacBook2 as the service node
    connected via a gigabit switch using an ethernet cable
    The error I keep getting is this ("Macintosh-7" is the name of MacBook1, "chikako-komatsus-computer" is the name of MacBook2):
    3x HOST [chikako-komatsus-computer.local] Shared storage client timed out while subscribing to "nfs://Macintosh-7.local/Volumes/portable/Cluster_scratch/4AD40699-B5BD6A1A/shared"
    The volume mentioned in the error is a shared FireWire drive connected to MacBook1. It has full read and write privileges for everyone. This drive is where the project file and all the source video are located. MacBook1, via the Qmaster system preferences, points to a folder "Cluster_scratch" on this drive.
    I have been mounting this drive from MacBook2 using the Connect to Server option under the Finder's Go menu. This method only seems to let me connect to the drive using AFP; is this my problem?
    I have "allowed all incoming traffic" in the firewall on MacBook1.
    What is funny (not really) is that I can compress a previously compiled video with the cluster if I don't go through Final Cut Pro!
    Any help with this would be greatly appreciated.
    Thanks

    I also administer a managed cluster with 6 machines, and have been using it successfully for almost a year now. But the only encoding that is submitted is directly through Compressor, never via FCP.
    QMaster sees a QuickCluster and a managed cluster the same way: while they are set up differently, the principle is the same, and QMaster only sees services.
    Exporting out of FCP to any cluster has always been slow. If you want to harness the power of distributed encoding, you could export a QuickTime reference file and take that into Compressor to be submitted to the cluster for encoding.

  • Upgrade 3.1- 7.0: internal error Workbook storage fault (read/open)

    Dear all,
    after upgrading a 3.1 (SP12) BW system to BI 7.0 (SP15), the following error occurs when we try to open existing BEx workbooks (in the BEx Analyzer 3.x):
    <internal error> Workbook storage fault (read/open)
    We have found that this affects only those BW workbooks that were created in BW 2.x and never modified since, so it actually seems to be a 2.x -> 7.0 issue.
    The scope of our project is not the use of any new BI functions; we are just upgrading the system.
    Unfortunately some (but not all) workbooks do not run any more now. Checking existing SAPnotes has not helped yet unfortunately.
    I saw very similar problems on this forum, if there is a solution, please provide it to us urgently.
    THANKS for helping
    Frank

    Hello Venkat,
    thanks for helping. Since I work with SAP_ALL authorization, I do not think this issue is related to missing authorization objects. The error message described above has no message number, which is why it is different from those described in the mentioned notes.
    It is definitely related to the upgrade, as I can still open those "corrupt" workbooks originating from BW 2.x in the productive system (still on BW 3.1).
    Any more suggestions?? Who remembers the 2.x - 3.x upgrade of BEx workbooks?
    Thanks, Frank

  • Access iphoto 08 file on shared storage device from multiple machines

    I recently installed iLife 08 on both an iMac and a MacBook. Previously (iPhoto 06), both machines accessed the iPhoto library on a shared storage device without any problems. After the upgrade, my iMac is able to view the library but my MacBook (the second machine to be upgraded) no longer has access. 'Sharing' is too slow over the wireless network and isn't a reasonable option.
    Is anyone else experiencing this issue? Any suggestions?

    Actually, neither repairing permissions nor changing them with Get Info worked for me. What did work was deleting the empty iPhoto Library in the home folder of the user who couldn't access the shared library, and putting an alias to the shared library in that user's Pictures folder. Everything then worked as it did prior to upgrading. Thanks.
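    If you'd rather script that alias step, something like this from Terminal should work (a sketch with hypothetical paths; note a Finder alias is not the same as a symlink, and it is the alias that iPhoto follows):
    osascript -e 'tell application "Finder" to make alias file to (POSIX file "/Volumes/Share/iPhoto Library") at (POSIX file "/Users/me/Pictures")'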

  • Problem of using OCFS2 as shared storage to install RAC 10g on VMware

    Hi, all
    I am installing a RAC 10g cluster with two Linux nodes on VMware. I created a shared 5G disk for the two nodes as the shared storage partition. Using the OCFS2 tools, I formatted this shared storage partition and successfully auto-mounted it on both nodes.
    Before installing, I used the command "runcluvfy.sh stage -pre crsinst -n node1,node2" to check the installation prerequisites. Everything is OK except an error "Could not find a suitable set of interfaces for VIPs.". From searching the web, I found this error can be safely ignored.
    OCFS2 works well on both nodes: I formatted the shared partition as an ocfs2 file system and configured o2cb to auto-start the ocfs2 service. I mounted the shared disk on both nodes at the /ocfs directory. By adding an entry to both nodes' /etc/fstab, the partition is auto-mounted at system boot, and I can access files in the shared partition on both nodes.
    My problem is that, when installing Clusterware, at the stage "Specify Oracle Cluster Registry" I enter "/ocfs/OCRFILE" for the OCR location and "/ocfs/OCRFILE_Mirror" for the OCR mirror location, but I get the following error:
    ----- Error Message -----
    The location /ocfs/OCRFILE, entered for the Oracle Cluster Registry (OCR), is not shared across all the nodes in the cluster. Specify a shared raw partition or cluster file system that is visible by the same name on all nodes of the cluster.
    ----- Error Message -----
    I don't know why the OUI can't recognize /ocfs as a shared partition. On both nodes, running the command "mounted.ocfs2 -f" gives this result:
    Device FS Nodes
    /dev/sdb1 ocfs2 node1, node2
    What could be wrong? Any help is appreciated!
    Addition information:
    1) uname -r
    2.6.9-42.0.0.0.1.EL
    2) Permission of shared partition
    $ls -ld /ocfs/
    drwxrwxr-x 6 oracle dba 4096 Aug 3 18:22 /ocfs/
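    As a quick sanity check that the mount really is one shared disk (a sketch; the file name is arbitrary), create a file on one node and list it on the other:
    [node1]$ touch /ocfs/sharedness_test
    [node2]$ ls -l /ocfs/sharedness_test
    This matches what I described above: files created on one node are visible on the other, so the filesystem itself behaves as shared.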

    Hello
    I am not sure how relevant the following solution is to your problem (regardless of when it was originally posted, it may help someone reading this thread); here is what I faced and how I fixed it:
    I was setting up RAC using VMware. I prepared rac1 [installed the OS, configured disks, users, etc.] and then made a copy of it as rac2. So far so good. When, as per the guide I was following for the RAC configuration, I started the OCFS2 configuration, I got the following error on rac2 when I tried to mount /dev/sdb1:
    ===================================================
    [root@rac2 ~]# mount -t ocfs2 -o datavolume,nointr /dev/sdb1 /ocfs
    ocfs2_hb_ctl: OCFS2 directory corrupted while reading uuid
    mount.ocfs2: Error when attempting to run /sbin/ocfs2_hb_ctl: "Operation not permitted"
    ===================================================
    After a lot of googling around, I finally bumped into a page where a kind person had posted the solution. In my words, and in more detail:
    o shut down both rac1 and rac2
    o in VMware, "edit virtual machine settings" for rac1
    o remove the disk [make sure you drop the correct one]
    o recreate it and select "allocate all disk space now" [with the same name and in the same directory as before]
    o start rac1, log in as root, and run "fdisk /dev/sdb" [or whichever disk you are installing OCFS2 on]
    Once done, repeat the steps for configuring OCFS2. I was then able to mount the disk on both machines.
    The whole problem was apparently caused by not choosing the "allocate all disk space now" option when creating the disk to be used for OCFS2.
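    For what it's worth, the same kind of preallocated disk can also be created from the host's command line (a sketch, assuming VMware's vmware-vdiskmanager tool is available; the name and size are illustrative):
    vmware-vdiskmanager -c -s 5GB -a lsilogic -t 2 shared_ocfs2.vmdk   # -t 2 = preallocated, single-file disk
    Disk type 2 corresponds to the "allocate all disk space now" option; a growable disk (type 0) is what appears to trigger the heartbeat errors above.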
    If you still have any questions or problem, email me at [email protected] and I'll try to get back to you at my earliest.
    Good luck!
    Muhammad Amer
    [email protected]

  • Shared Storage Check

    Hi all,
    We are planning to add a node to our existing RAC deployment (database 10gR2, Sun Solaris 5.9). Currently the shared storage is an IBM SAN.
    When I run the shared storage check using cluvfy, it fails to detect any shared storage. Given that I can ignore this error message (since cluvfy doesn't work with SAN, I believe), how can I check whether the storage is shared or not?
    Note
    When I look at the partition table from both servers, it looks the same (for the SAN drive, of course), but the names/labels of the storage devices are different (for example, the existing node shows c6t0d0 but the new node, which is to be added, shows something different. Is that OK?).
    regards,
    Muhammad Riaz

    Never mind. I found a solution at http://www.idevelopment.info.
    (1) Create the following directory structure on the second node (same as the first node), with the same permissions as on the existing node:
    /asmdisks
    - crs
    - disk1
    - disk2
    - vote
    (2) Use ls -lL /dev/rdsk/<Disk> to find out the major and minor IDs of the shared disk, then attach those IDs to the relevant directories above using the mknod command:
    # ls -lL /dev/rdsk/c4t0d0*
    crw-r-----   1 root     sys       32,256 Aug  1 11:16 /dev/rdsk/c4t0d0s0
    crw-r-----   1 root     sys       32,257 Aug  1 11:16 /dev/rdsk/c4t0d0s1
    crw-r-----   1 root     sys       32,258 Aug  1 11:16 /dev/rdsk/c4t0d0s2
    crw-r-----   1 root     sys       32,259 Aug  1 11:16 /dev/rdsk/c4t0d0s3
    crw-r-----   1 root     sys       32,260 Aug  1 11:16 /dev/rdsk/c4t0d0s4
    crw-r-----   1 root     sys       32,261 Aug  1 11:16 /dev/rdsk/c4t0d0s5
    crw-r-----   1 root     sys       32,262 Aug  1 11:16 /dev/rdsk/c4t0d0s6
    crw-r-----   1 root     sys       32,263 Aug  1 11:16 /dev/rdsk/c4t0d0s7
    mknod /asmdisks/crs      c 32 257
    mknod /asmdisks/disk1      c 32 260
    mknod /asmdisks/disk2      c 32 261
    mknod /asmdisks/vote      c 32 259
    # ls -lL /asmdisks
    total 0
    crw-r--r--   1 root     oinstall  32,257 Aug  3 09:07 crs
    crw-r--r--   1 oracle   dba       32,260 Aug  3 09:08 disk1
    crw-r--r--   1 oracle   dba       32,261 Aug  3 09:08 disk2
    crw-r--r--   1 oracle   oinstall  32,259 Aug  3 09:08 vote
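    To answer the original question of verifying sharedness when cluvfy can't: on Solaris you can also compare the volume table of contents of the candidate device from both servers (a sketch; as noted above, the device name differs per node):
    # prtvtoc /dev/rdsk/c6t0d0s2   (run on each node, using that node's name for the LUN)
    If both nodes print an identical VTOC (same geometry and partition map), you are almost certainly looking at the same shared LUN despite the different controller/target numbering.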

  • RAC with OCFS2 shared storage

    Hi all
    I want to create a RAC environment in Oracle VM 2.2 (one server), with local disks which I used to create an LVM volume for the OCR in the guests:
    - two guests with Oracle Enterprise Linux 5
    - both have the ocfs2 rpm installed
    When I want to create the shared storage for the OCR, I configure cluster.conf:
    - service o2cb configure -> all ok -> on both nodes
    - service o2cb enable -> ok -> on both nodes
    - then mkfs.ocfs2 on node1
    - mount -t ocfs2 on node1
    - mount -t ocfs2 on node2:
    [root@lin2 ~]# mount -t ocfs2 /dev/sde1 /ocr
    mount.ocfs2: Transport endpoint is not connected while mounting /dev/sde1 on /ocr. Check 'dmesg' for more information on this error.
    Jun 27 22:57:23 lin2 kernel: (o2net,1454,0):o2net_connect_expired:1664 ERROR: no connection established with node 0 after 30.0 seconds, giving up and returning errors.
    Jun 27 22:57:23 lin2 kernel: (mount.ocfs2,9327,0):dlm_request_join:1036 ERROR: status = -107
    Jun 27 22:57:23 lin2 kernel: (mount.ocfs2,9327,0):dlm_try_to_join_domain:1210 ERROR: status = -107
    Jun 27 22:57:23 lin2 kernel: (mount.ocfs2,9327,0):dlm_join_domain:1488 ERROR: status = -107
    Jun 27 22:57:23 lin2 kernel: (mount.ocfs2,9327,0):dlm_register_domain:1754 ERROR: status = -107
    Jun 27 22:57:23 lin2 kernel: (mount.ocfs2,9327,0):ocfs2_dlm_init:2808 ERROR: status = -107
    Jun 27 22:57:23 lin2 kernel: (mount.ocfs2,9327,0):ocfs2_mount_volume:1447 ERROR: status = -107
    Jun 27 22:57:23 lin2 kernel: ocfs2: Unmounting device (8,65) on (node 1)
    Can you help me find where I made a mistake?
    Thank you, Brano

    Please find the answer in the below link
    http://wiki.oracle.com/page/Oracle+VM+Server+Configuration-usingOCFS2+in+a+group+of+VM+hosts+to+share+block+storage
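    As a general pointer: the "no connection established with node 0" error usually means node2 cannot reach node1 over the O2CB interconnect (TCP port 7777 by default). Worth checking against a two-node /etc/ocfs2/cluster.conf like this sketch (IPs are hypothetical; the file must be identical on both guests and the node names must match each guest's hostname):
    node:
        ip_port = 7777
        ip_address = 192.168.1.101
        number = 0
        name = lin1
        cluster = ocfs2
    node:
        ip_port = 7777
        ip_address = 192.168.1.102
        number = 1
        name = lin2
        cluster = ocfs2
    cluster:
        node_count = 2
        name = ocfs2
    Also make sure no firewall on either guest blocks the port (for example, iptables -I INPUT -p tcp --dport 7777 -j ACCEPT).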

  • WRT610N Shared Storage

    I recently purchased a WRT610N and have been having some problems setting up the USB shared storage feature. I have a 1.5 TB Seagate drive on which I created two partitions (I had read elsewhere in the forums that the WRT610N only handles partitions/drives up to 1 TB). Both partitions are NTFS, the first one being 976,561 MB and the second one 420,700 MB. Both show up in the "Disk" section of the admin console, and I can create/define a share for the larger of the two partitions without any problems.
    The first of my problems comes when I try to create/define a share for the smaller partition. I can create a share, but the admin console does not save the access privileges that I assign to it. Despite setting them up in the admin console, they don't show up when I go back and look; in both the detail and summary views the access rights show as blank. I do not have this issue with the larger partition, where I can add and later view groups in the Access section.
    The second problem comes when I try to attach to the larger share from a network client. I can look at the shares if I use Start > Run and type \\192.168.1.1. If I enter my admin user ID and password, I can see the new share on the WRT610N. When I try to double-click on it, I am prompted again for a username and password. When I try to re-enter the admin user ID and password, the logon comes right back to me with "WRT610n\admin" populated in the user ID field. From there it won't accept the admin password. There are no error messages.
    Help with either problem would be appreciated.

    When you select your storage partition and open it, and it asks you for a username and password, that is the username and password of your storage share; you may have set a password on it.
    Log in to your router's GUI, click on the Storage tab, and below you will find the sub-tab "Administration". Click on it; there you can modify the "admin" rights, like changing the password, or you can create your own user and password. Then, whenever you connect to your storage partition and it asks for a username and password, enter that username and password and click OK. This way you will be able to access your storage drive.
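    From a Windows client you can also map the share with explicit credentials instead of relying on the login popup (a sketch; the drive letter, share name, and password are placeholders):
    net use Z: \\192.168.1.1\myshare mypassword /user:admin
    net use Z: /delete
    Mapping with net use sidesteps Windows caching the "WRT610n\admin" credential form that the popup keeps forcing on you.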

  • Server Pool WITHOUT shared storage

    The documentation defines Server Pool as:
    "Logically an autonomous region that contains one or more physical Oracle VM Servers."
    Therefore, should it be possible to add multiple servers (physically separate VM servers) to the same Server Pool even though they are NOT using shared storage? When I tried to add the second VM Server to the Server Pool I received the following error:
    During adding servers ([vmoracle2]) to server pool (VM_Server_Pool), Cluster setup failed:
    (OVM-1011 OVM Manager communication with vmoracle1 for operation HA Setup for Oracle VM
    Agent 2.2.0 failed: <Exception: SR '/dev/sda3' not supported: type 'ocfs2.local' not in
    ['nfs', 'ocfs2.cluster']>)
    Thanks.

    Nothing is as easy as it seems when it comes to Oracle VM.
    When trying to create a new Server Pool to accommodate my second VM Server, I received the following error:
    2010-03-08 18:18:24.575 NOTIFICATION Getting agent version for agent:vmoracle2 ...
    2010-03-08 18:18:24.752 NOTIFICATION The agent version is 2.3-19
    2010-03-08 18:18:24.755 NOTIFICATION Checking agent vmoracle2 is active or not?
    2010-03-08 18:18:24.916 NOTIFICATION [Server Pool Management][Server][vmoracle2]:Check agent (vmoracle2) connectivity.
    2010-03-08 18:18:30.304 NOTIFICATION entering into assign vs action...
    2010-03-08 18:18:30.311 NOTIFICATION Getting agent version for agent:vmoracle2 ...
    2010-03-08 18:18:30.482 NOTIFICATION The agent version is 2.3-19
    2010-03-08 18:18:30.483 NOTIFICATION Checking agent vmoracle2 is active or not?
    2010-03-08 18:18:30.638 NOTIFICATION [Server Pool Management][Server][vmoracle2]:Check agent (vmoracle2) connectivity.
    2010-03-08 18:18:45.236 NOTIFICATION Getting agent version for agent:vmoracle2 ...
    2010-03-08 18:18:45.410 NOTIFICATION The agent version is 2.3-19
    2010-03-08 18:18:45.434 NOTIFICATION master server is:vmoracle2
    2010-03-08 18:18:45.435 NOTIFICATION Start to check cluster for server pool
    2010-03-08 18:18:45.581 WARNING failed:<Exception: Cluster root not found.>
    StackTrace:
    File "/opt/ovs-agent-2.3/OVSSiteCluster.py", line 535, in cluster_precheck
    clusterprecheck(single_node, ha_enable)
    File "/opt/ovs-agent-2.3/OVSSiteCluster.py", line 515, in clusterprecheck
    if not cluster_root_sr_uuid: raise Exception("Cluster root not found.")
    2010-03-08 18:18:45.582 NOTIFICATION Failed check cluster for server pool
    2010-03-08 18:18:45.583 ERROR [Server Pool Management][Server Pool][VMORACLE2_Server_Pool]:Check prerequisites to create server pool (VMORACLE2_Server_Pool) failed: (OVM-1011 OVM Manager communication with vmoracle2 for operation Pre-check cluster root for Server Pool failed:
    <Exception: Cluster root not found.>
    )
    2010-03-08 18:18:45.607 NOTIFICATION Exception Message:OVM-1011 OVM Manager communication with vmoracle2 for operation Pre-check cluster root for Server Pool failed:
    <Exception: Cluster root not found.>
    The "*Test Connection*" succeeded just fine prior to clicking NEXT on the "Create Server Pool" page.
    Any suggestions?

  • When installing clusterware, shared storage trouble

    I was trying to install Clusterware. When I typed in the location of the OCR, I got the error below:
    Oracle Cluster Registry (OCR) is not shared across all the nodes in the cluster
    Then I found that I cannot mount the ocfs2 volume on both nodes at the same time, though I can mount it on either node if it is unmounted on the other.
    Can anyone give me a hand?
    Environment is as following:
    - OS: Oracle Linux 5 (update 4)
    - Openfiler 3 + ocfs2
    - Oracle 10gR2

    Hi;
    Please see:
    http://kr.forums.oracle.com/forums/thread.jspa?messageID=4254569
    Oracle 10g RAC install- OPEN FAIL ON DEV
    Oracle Cluster Registry (OCR) is not shared across all the nodes . . .
    After OCFS2 install/configure ~ Shared storage check check fails
    Problem of using OCFS2 as shared storage to install RAC 10g on VMware
    Regards,
    Helios

  • Sync manager part fail: "Error: Insufficient Storage Space"

    I'm using Sync Manager to transfer my collection of 3502 songs from my Zen Xtra to my PC before I upgrade to a 60GB Vision M.
    It completes the sync, but with only 92 songs moved and a message that it is "complete with errors". The remaining songs each show the message "Error: Insufficient Storage Space".
    The peculiar thing is, my computer has 240 GB free!
    Please can someone advise how to fix this and backup my songs? - Thank you.

    You're not the only one with problems syncing with an Xtra. It appears that Creative introduced a bug in their newer software that causes the transfer to fail regardless of how much space you have. Try installing the old software that came with the Xtra to copy it off. Creative sure has stellar support for their products, eh? Next time I'll go with another vendor. This is simply shoddy service on their part.

  • Shared storage check failed on nodes

    hi friends,
    I am installing RAC 10g on VMware and the OS is OEL4. I completed all the prerequisites, but when I run the command below,
    ./runcluvfy.sh stage -post hwos -n rac1,rac2, I am facing the error below.
    node connectivity check failed.
    Checking shared storage accessibility...
    WARNING:
    Unable to determine the sharedness of /dev/sde on nodes:
    rac2,rac2,rac2,rac2,rac2,rac1,rac1,rac1,rac1,rac1
    Shared storage check failed on nodes "rac2,rac1"
    Please help me, anyone; it's urgent.
    Thanks,
    poorna.

    Hello,
    It seems that your storage is not accessible from both nodes. If you want, you can follow these steps to configure 10g RAC on VMware.
    Steps to configure two-node 10g RAC on RHEL-4
    Remark-1: H/W requirement for RAC
    a) 4 Machines
    1. Node1
    2. Node2
    3. storage
    4. Grid Control
    b) 2 switches
    c) 6 straight cables
    Remark-2: S/W requirement for RAC
    a) 10g clusterware
    b) 10g database
    Both must be the same version, e.g. (10.2.0.1.0)
    Remark-3: RPMs requirement for RAC
    a) all 10g rpms (better to use RHEL-4 and choose the 'Everything' option to install all the rpms)
    b) 4 new rpms are required for installation
    1. compat-gcc-7.3-2.96.128.i386.rpm
    2. compat-gcc-c++-7.3-2.96.128.i386.rpm
    3. compat-libstdc++-7.3-2.96.128.i386.rpm
    4. compat-libstdc++-devel-7.3-2.96.128.i386.rpm
    ------------ Start Machine Preparation --------------------
    1. Prepare 3 machines
    i. node1.oracle.com
    eth0 (192.9.201.183) - for public network
    eth1 (10.0.0.1) - for private n/w
    gateway (192.9.201.1)
    subnet (255.255.255.0)
    ii. node2.oracle.com
    eth0 (192.9.201.187) - for public network
    eth1 (10.0.0.2) - for private n/w
    gateway (192.9.201.1)
    subnet (255.255.255.0)
    iii. openfiler.oracle.com
    eth0 (192.9.201.182) - for public network
    gateway (192.9.201.1)
    subnet (255.255.255.0)
    NOTE:-
    -- Here eth0 of all the nodes should be connected by Public N/W using SWITCH-1
    -- eth1 of all the nodes should be connected by Private N/W using SWITCH-2
    2. Network configuration
    #vim /etc/hosts
    192.9.201.183 node1.oracle.com node1
    192.9.201.187 node2.oracle.com node2
    192.9.201.182 openfiler.oracle.com openfiler
    10.0.0.1 node1-priv.oracle.com node1-priv
    10.0.0.2 node2-priv.oracle.com node2-priv
    192.9.201.184 node1-vip.oracle.com node1-vip
    192.9.201.188 node2-vip.oracle.com node2-vip
    3. Prepare both nodes for installation
    a. Set Kernel Parameters (/etc/sysctl.conf)
    kernel.shmall = 2097152
    kernel.shmmax = 2147483648
    kernel.shmmni = 4096
    kernel.sem = 250 32000 100 128
    fs.file-max = 65536
    net.ipv4.ip_local_port_range = 1024 65000
    net.core.rmem_default = 262144
    net.core.rmem_max = 262144
    net.core.wmem_default = 262144
    net.core.wmem_max = 262144
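    To load these kernel settings without a reboot, run the following on each node (assuming the values above were added to /etc/sysctl.conf):
    # sysctl -p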
    b. Configure /etc/security/limits.conf file
    oracle soft nproc 2047
    oracle hard nproc 16384
    oracle soft nofile 1024
    oracle hard nofile 65536
    c. Configure /etc/pam.d/login file
    session required /lib/security/pam_limits.so
    d. Create user and groups on both nodes
    # groupadd oinstall
    # groupadd dba
    # groupadd oper
    # useradd -g oinstall -G dba oracle
    # passwd oracle
    e. Create required directories and set the ownership and permission.
    # mkdir -p /u01/crs1020
    # mkdir -p /u01/app/oracle/product/10.2.0/asm
    # mkdir -p /u01/app/oracle/product/10.2.0/db_1
    # chown -R oracle:oinstall /u01/
    # chmod -R 755 /u01/
    f. Set the environment variables
    $ vi .bash_profile
    ORACLE_BASE=/u01/app/oracle/; export ORACLE_BASE
    ORA_CRS_HOME=/u01/crs1020; export ORA_CRS_HOME
    #LD_ASSUME_KERNEL=2.4.19; export LD_ASSUME_KERNEL
    #LANG="en_US"; export LANG
    4. Storage configuration
    PART-A Open-filer Set-up
    Install openfiler on a machine (Leave 60GB free space on the hdd)
    a) Login to root user
    b) Start iSCSI target service
    # service iscsi-target start
    # chkconfig --level 345 iscsi-target on
    PART-B Configuring storage on openfiler
    a) From any client machine, open the browser and access the openfiler console (port 446):
    https://192.9.201.182:446/
    b) Open system tab and update the local N/W configuration for both nodes with netmask (255.255.255.255).
    c) From the Volume tab click "create a new physical volume group".
    d) From "block Device managemrnt" click on "(/dev/sda)" option under 'edit disk' option.
    e) Under "Create a partition in /dev/sda" section create physical Volume with full size and then click on 'CREATE'.
    f) Then go to the "Volume Section" on the right hand side tab and then click on "Volume groups"
    g) Then under the "Create a new Volume Group" specify the name of the volume group (ex- racvgrp) and click on the check box and then click on "Add Volume Group".
    h) Then go to the "Volume Section" on the right hand side tab and then click on "Add Volumes" and then specify the Volume name (ex- racvol1) and use all space and specify the "Filesytem/Volume type" as ISCSI and then click on CREATE.
    i) Then go to the "Volume Section" on the right hand side tab and then click on "iSCSI Targets" and then click on ADD button to add your Target IQN.
    j) Then go to the "LUN Mapping" and click on "MAP".
    k) Then go to the "Network ACL", allow both nodes from there, and click on UPDATE.
    Note:- To create multiple volumes with openfiler we would need to use multipathing, which is quite complex; that's why we are going with a single volume here. Edit the property of each volume and change access to 'allow'.
    f) Install the iscsi-initiator rpm on both nodes to access the iscsi disk:
    #rpm -ivh iscsi-initiator-utils-----------
    g) Make an entry about openfiler in the iscsi.conf file on both nodes:
    #vim /etc/iscsi.conf (in RHEL-4)
    In this file you will find a line "#DiscoveryAddress=192.168.1.2"; remove the comment and put your storage IP address there.
    OR
    #vim /etc/iscsi/iscsi.conf (in RHEL-5)
    In this file you will find a line "#ins.address = 192.168.1.2"; remove the comment and put your storage IP address there.
    g) #service iscsi restart (on both nodes)
    h) From both nodes, run this command to access the openfiler volume:
    # iscsiadm -m discovery -t sendtargets -p 192.9.201.182
    i) #service iscsi restart (on both nodes)
    j) #chkconfig --level 345 iscsi on (on both nodes)
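    If discovery worked, each node should now see the openfiler volume as a new SCSI disk; a quick confirmation before partitioning (a sketch):
    # fdisk -l (the iSCSI LUN should appear as a new disk, e.g. /dev/sdb, on both nodes)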
    k) Make 3 primary partitions and 1 extended partition, and within the extended partition make 11 logical partitions
    A. Prepare partitions
    1. #fdisk /dev/sdb
    :e (extended)
    Part No. 1
    First Cylinder:
    Last Cylinder:
    :p
    :n
    :l
    First Cylinder:
    Last Cylinder: +1024M
    2. Note the /dev/sdb* names.
    3. #partprobe
    4. Login as root user on node2 and run partprobe
    B. On node1 login as root user and create following raw devices
    # raw /dev/raw/raw5 /dev/sdb5
    # raw /dev/raw/raw6 /dev/sdb6
    # raw /dev/raw/raw12 /dev/sdb12
    Run ls -l /dev/sdb* and ls -l /dev/raw/raw* to confirm the above
    -Repeat the same thing on node2
    C. On node1 as root user
    # vi /etc/sysconfig/rawdevices
    /dev/raw/raw5 /dev/sdb5
    /dev/raw/raw6 /dev/sdb6
    /dev/raw/raw7 /dev/sdb7
    /dev/raw/raw8 /dev/sdb8
    /dev/raw/raw9 /dev/sdb9
    /dev/raw/raw10 /dev/sdb10
    /dev/raw/raw11 /dev/sdb11
    /dev/raw/raw12 /dev/sdb12
    /dev/raw/raw13 /dev/sdb13
    /dev/raw/raw14 /dev/sdb14
    /dev/raw/raw15 /dev/sdb15
    D. Restart the raw service (# service rawdevices restart)
    #service rawdevices restart
    Assigning devices:
    /dev/raw/raw5 --> /dev/sdb5
    /dev/raw/raw5: bound to major 8, minor 21
    /dev/raw/raw6 --> /dev/sdb6
    /dev/raw/raw6: bound to major 8, minor 22
    /dev/raw/raw7 --> /dev/sdb7
    /dev/raw/raw7: bound to major 8, minor 23
    /dev/raw/raw8 --> /dev/sdb8
    /dev/raw/raw8: bound to major 8, minor 24
    /dev/raw/raw9 --> /dev/sdb9
    /dev/raw/raw9: bound to major 8, minor 25
    /dev/raw/raw10 --> /dev/sdb10
    /dev/raw/raw10: bound to major 8, minor 26
    /dev/raw/raw11 --> /dev/sdb11
    /dev/raw/raw11: bound to major 8, minor 27
    /dev/raw/raw12 --> /dev/sdb12
    /dev/raw/raw12: bound to major 8, minor 28
    /dev/raw/raw13 --> /dev/sdb13
    /dev/raw/raw13: bound to major 8, minor 29
    /dev/raw/raw14 --> /dev/sdb14
    /dev/raw/raw14: bound to major 8, minor 30
    /dev/raw/raw15 --> /dev/sdb15
    /dev/raw/raw15: bound to major 8, minor 31
    done
    E. Repeat the same thing on node2 also
    F. To make these partitions accessible to the oracle user, run these commands on both nodes:
    # chown -R oracle:oinstall /dev/raw/raw*
    # chmod -R 755 /dev/raw/raw*
    G. To make these partitions accessible after a restart, make these entries on both nodes:
    # vi /etc/rc.local
    chown -R oracle:oinstall /dev/raw/raw*
    chmod -R 755 /dev/raw/raw*
    5. SSH configuration (user equivalence)
    On node1:- $ssh-keygen -t rsa
    $ssh-keygen -t dsa
    On node2:- $ssh-keygen -t rsa
    $ssh-keygen -t dsa
    On node1:- $cd .ssh
    $cat *.pub>>node1
    On node2:- $cd .ssh
    $cat *.pub>>node2
    On node1:- $scp node1 node2:/home/oracle/.ssh
    On node2:- $scp node2 node1:/home/oracle/.ssh
    On node1:- $cat node*>>authorized_keys
    On node2:- $cat node*>>authorized_keys
    Now test the ssh configuration from both nodes
    $ vim a.sh
    ssh node1 hostname
    ssh node2 hostname
    ssh node1-priv hostname
    ssh node2-priv hostname
    $ chmod +x a.sh
    $./a.sh
    The first time you'll have to give the password; after that it never asks for the password.
    6. To run the cluster verifier
    On node1 :-$cd /…/stage…/cluster…/cluvfy
    $./runcluvfy.sh stage -pre crsinst -n node1,node2
    The first time it will ask for four new RPMs; because of dependencies, it is better to install them in this order (rpm-3, rpm-4, rpm-1, rpm-2):
    1. compat-gcc-7.3-2.96.128.i386.rpm
    2. compat-gcc-c++-7.3-2.96.128.i386.rpm
    3. compat-libstdc++-7.3-2.96.128.i386.rpm
    4. compat-libstdc++-devel-7.3-2.96.128.i386.rpm
    Then run cluvfy again and check that it gives a clean result, then start the Clusterware installation.

  • I just upgraded the icloud storage on my iPhone 10 days ago but want to downgrade it and get a refund

    I upgraded my iCloud storage 10 days ago in error and now want to downgrade and get a refund.

               Cancel your storage upgrade    
