Doubts about RAC infrastructure with one disk array

Hello everybody,
I'm writing to you because we have a doubt about the correct infrastructure for implementing RAC.
Please let me first explain the design we currently use for Oracle DB storage. We are running several standalone instances on several servers, all of them connected to a SAN disk array. Since we know this is a single point of failure, we keep redundant controlfiles, archived logs and redo logs both on the array and on the internal disks of each server, so if the array fails completely we "just" need to restore the nightly cold backup, apply the archived and redo logs, and everything is fine. We can do this because the instances are standalone and we can accept this one hour of downtime.
Now we want to use these servers and this array to implement a RAC solution. We know this array is our single point of failure, and we wonder whether it is possible to have a multi-node RAC solution (not RAC One Node) with redundant controlfiles/archived logs/redo logs on internal disks. Is it possible to have each node write full RAC controlfiles/archived logs/redo logs to internal disks, and to apply these files consistently when the ASM storage used by RAC is restored (i.e., with a symlink on an internal disk, using just one node)? Or maybe the recommended solution is to have a second array to avoid this single point of failure?
Thanks a lot!

cssl wrote:
Or maybe the recommended solution is to have a second array to avoid this single point of failure?
Correct. This is the proper solution.
In this case you can also decide to simply use striping on both arrays, then mirror array1's data onto array2 using ASM redundancy options.
Also keep in mind that redundancy is also needed for the connectivity. So you need at least 2 switches connecting to both arrays, and dual HBA ports on each server, with 2 fibres running, one to each switch. You will need multipath driver software on the server to deal with the multiple I/O paths to the same storage LUNs.
Likewise you need to repeat this for your Interconnect. 2 private switches, 2 private NICs on each server that are bonded. Then connect these 2 NICs to the 2 switches, one NIC per switch.
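As an illustration of the bonded private NICs described above (assuming a Linux cluster; device names and addresses are hypothetical, not from the poster's environment), an active-backup bond might be declared roughly like this:

```
# /etc/sysconfig/network-scripts/ifcfg-bond0 -- RHEL-style sketch
DEVICE=bond0
IPADDR=192.168.10.1
NETMASK=255.255.255.0
ONBOOT=yes
BONDING_OPTS="mode=active-backup miimon=100"

# each private NIC (e.g. eth1, eth2) is then enslaved to the bond:
# DEVICE=eth1
# MASTER=bond0
# SLAVE=yes
# ONBOOT=yes
```

With one slave cabled to each private switch, losing either switch or either NIC leaves the Interconnect up.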
Also do not forget spares. Spare switches (one each for storage and Interconnect). Spare cables - fibre and whatever is used for the Interconnect.
Bottom line - not a cheap solution for full redundancy. What can be done is to combine the storage connection/protocol layer with the Interconnect layer and run both over the same architecture. Oracle's Database Machine and Exadata Storage Servers do this. You can run your storage protocol (e.g. SRP) and your Interconnect protocol (TCP or RDS) over the same 40Gb InfiniBand infrastructure.
Thus only 2 InfiniBand switches are needed for redundancy, plus 1 spare, with each server running a dual-port HCA and a cable to each of these 2 switches.
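To make the striping-plus-ASM-mirroring idea concrete, here is a rough sketch (disk paths and the diskgroup name are hypothetical): a normal-redundancy diskgroup with one failure group per array, so ASM keeps a mirrored copy of every extent on each array.

```sql
-- Sketch only: NORMAL redundancy mirrors each extent across failure
-- groups, so putting each array in its own failgroup lets the
-- diskgroup survive the loss of an entire array.
CREATE DISKGROUP data NORMAL REDUNDANCY
  FAILGROUP array1 DISK '/dev/rdsk/array1_lun1', '/dev/rdsk/array1_lun2'
  FAILGROUP array2 DISK '/dev/rdsk/array2_lun1', '/dev/rdsk/array2_lun2';
```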

Similar Messages

  • RAC node configuration when the disk array fails on one node

    Hi ,
    We recently had all the filesystems of node 1 of the RAC cluster turn into read-only mode. Upon further investigation it was revealed that this was due to a disk array failure on node 1. The database instance on node 2 is up and running fine. The OS team are rebuilding node 1 from scratch and will restore the Oracle installables from backup.
    My question is once all files are restored :
    Do we need to add the node to the RAC configuration?
    Do we need to relink the Oracle binaries?
    Can the node be brought up directly once all the Oracle installables are restored properly, or will the Oracle team need to perform additional steps to bring the node into the RAC configuration?
    Thanks,
    Sachin K

    Hi,
    If the restore fails in some way, we will have to first remove node 1 from the cluster and then add it back, right? Kindly confirm the steps below.
    In case of such a situation, these are the steps we plan to follow:
    Version: 10.2.0.5
    Affected node: prd_node1
    Affected instance: PRDB1
    Surviving node: prd_node2
    Surviving instance: PRDB2
    DB listener on prd_node1: LISTENER_PRD01
    ASM listener on prd_node1: LISTENER_PRDASM01
    DB listener on prd_node2: LISTENER_PRD02
    ASM listener on prd_node2: LISTENER_PRDASM02
    Log in to the surviving node. In our case it's prd_node2.
    Step 1 - Remove ONS information:
    Execute the following command as root to find out the remote port number to be used:
    $ cat $CRS_HOME/opmn/conf/ons.config
    and remove the information pertaining to the node to be deleted using:
    # $CRS_HOME/bin/racgons remove_config prd_node1:6200
    Step 2 - Remove resources:
    In this step, the resources that were defined on this node have to be removed. These resources include (a) database, (b) instance, (c) ASM. A list of these can be acquired by running the crs_stat -t command from any node.
    The srvctl remove listener command used below is only applicable in 10.2.0.4 and higher releases, including 11.1.0.6. The command will report an error if the clusterware version is less than 10.2.0.4; in that case, use netca to remove the listener.
    srvctl remove listener -n prd_node1 -l LISTENER_PRD01
    srvctl remove listener -n prd_node1 -l LISTENER_PRDASM01
    srvctl remove instance -d PRDB -i PRDB1
    srvctl remove asm -n prd_node1 -i +ASM1
    Step 3 - Execute rootdeletenode.sh:
    From a node that you are not deleting, execute as root the following command, which will help find out the node number of the node you want to delete:
    # $CRS_HOME/bin/olsnodes -n
    This number can be passed to the rootdeletenode.sh command, which is to be executed as root from any node that is going to remain in the cluster:
    # $CRS_HOME/install/rootdeletenode.sh prd_node1,1
    Step 4 - Update the inventory:
    From a node that is going to remain in the cluster, run the following command as the owner of the CRS_HOME. The argument passed to CLUSTER_NODES is a comma-separated list of the node names that are going to remain in the cluster. This step needs to be performed once per home (Clusterware, ASM and RDBMS homes).
    ## Example of running runInstaller to update inventory in Clusterware home
    $CRS_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORA_CRS_HOME "CLUSTER_NODES=prd_node2" CRS=TRUE
    ## Optionally enclose the host names with {}
    ## Example of running runInstaller to update inventory in ASM home
    $CRS_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ASM_HOME "CLUSTER_NODES=prd_node2"
    ## Optionally enclose the host names with {}
    ## Example of running runInstaller to update inventory in RDBMS home
    $CRS_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES=prd_node2"
    ## Optionally enclose the host names with {}
    We need steps to add the node back into the cluster. Can anyone please help us with this?
    Thanks,
    Sachin K

  • Doubt about Bulk Collect with LIMIT

    Hi,
    I have a doubt about BULK COLLECT: when is the COMMIT done?
    I got an example from PSOUG:
    http://psoug.org/reference/array_processing.html
    CREATE TABLE servers2 AS
    SELECT *
    FROM servers
    WHERE 1=2;

    DECLARE
      CURSOR s_cur IS
        SELECT *
        FROM servers;
      TYPE fetch_array IS TABLE OF s_cur%ROWTYPE;
      s_array fetch_array;
    BEGIN
      OPEN s_cur;
      LOOP
        FETCH s_cur BULK COLLECT INTO s_array LIMIT 1000;
        FORALL i IN 1..s_array.COUNT
          INSERT INTO servers2 VALUES s_array(i);
        EXIT WHEN s_cur%NOTFOUND;
      END LOOP;
      CLOSE s_cur;
      COMMIT;
    END;
    If my table servers has 3,000,000 records, when is the commit done? When all records are inserted?
    Could it overflow the redo log?
    Using 9.2.0.8.

    muttleychess wrote:
    If my table servers has 3,000,000 records, when is the commit done?
    Commit point has nothing to do with how many rows you process. It is purely business driven. Your code implements some business transaction, right? So if you commit before the whole transaction (from a business standpoint) is complete, other sessions will already see changes that are (from a business standpoint) incomplete. Also, what if the rest of the transaction (from a business standpoint) fails?
    SY.
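    The LIMIT batching in the PL/SQL block above can be simulated outside the database. This Python sketch (purely illustrative, not Oracle code) shows that rows move in LIMIT-sized batches, but nothing becomes durable until the single COMMIT after the loop:

```python
def bulk_collect_copy(source_rows, limit=1000):
    """Simulate FETCH ... BULK COLLECT ... LIMIT with one final COMMIT."""
    uncommitted = []   # pending work, like changes before COMMIT
    committed = []     # durable state, visible to other sessions

    i = 0
    while True:
        batch = source_rows[i:i + limit]   # FETCH ... LIMIT 1000
        i += limit
        uncommitted.extend(batch)          # FORALL ... INSERT
        if len(batch) < limit:             # EXIT WHEN s_cur%NOTFOUND
            break

    committed.extend(uncommitted)          # COMMIT: all-or-nothing
    return committed
```

    Note the loop exit mirrors the %NOTFOUND semantics with LIMIT: the flag is raised when a fetch returns fewer rows than the limit, which is why the EXIT must come after the FORALL so the last partial batch is not lost.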

  • Doubt about Agent Inbox with ERMS and NON-ERMS

    Hi All.
    We have configured both ERMS and non-ERMS email receiving (to run some tests) and we see that only the work item generated by the non-ERMS workflow (WS14000164) is shown in the IC Web Client Inbox. The work item generated by the ERMS workflow (WS00200001) is shown only in the SBWP transaction. Is this right?
    We believe that our agent inbox configuration is correct, because it is working fine for non-ERMS emails.
    Is there some trick to make the generated ERMS work items show up in the IC Web Client Inbox?
    Thanks in advance!!!

    Hi Julio,
    You will NOT be able to get the work items created by the standard ERMS workflow (WS00200001) into your inbox, because SAP's assumption is that ERMS precisely replaces manual inbox assignment work with automation (Response Management).
    Even if you add the ERMS task 207914 in transaction CRMC_IC_AUICOMM, this configuration will be made inactive by the work item search engine. You can observe this in method CL_CRM_IC_AUI_WI->IF_GENIL_SO_HANDLER~GET_LIST, around line 210:
    READ TABLE lt_wflows WITH KEY object = co_erms_email_task INTO ls_wflows.
    IF ls_wflows IS NOT INITIAL.
      DELETE TABLE lt_wflows WITH TABLE KEY object = co_erms_email_task.
    ENDIF.
    If you still want to see ERMS work items in your inbox, make a copy of workflow WS00200001; in the copy, replace task 207914 with your own one, which you add in transaction CRMC_IC_AUICOMM. Deactivate the triggering event of WS00200001; activate the triggering event of your own workflow. No changes in SO28 are necessary.
    But you should then think about what happens when someone wants to postprocess an ERMS work item displayed in the inbox, because some ERMS processing may already have taken place: the incoming e-mail may already have been linked automatically to an existing service ticket. You should then support easy navigation to that ticket, and prevent the e-mail from being linked again manually.
    Hope this makes the topic clearer.
    Kind regards
    Walter

  • Doubts about configurate services with JBI

    Hi people,
    I have been looking for a way to develop a wizard to configure services in different ESBs from my web application.
    I took a look at some presentations (like http://80.69.93.183/sun-evenementen/pdf/jbi_openesb.pdf and http://mediacast.sun.com/users/~armin/media/SunTechDays2007_OpenESB.pdf), and I would like to ask you about the possibilities for configuring services in an ESB using JBI.
    What I really want is to allow my web application to connect to some ESBs and configure services in them. It's like having the web interface that some of them provide, but driven from my web application.
    I have been talking about this with some guys, and the two possibilities they gave me are:
    1. Develop everything in the web application and send the information through JMX.
    2. Develop a generator of XML content (in memory) and send it to the ESB, which would have another implementation made by me, like a module (one of the guys said this could be a JBI Service Engine), that would receive the XML content generated by the web application.
    With the first possibility I would have the whole solution in my application; with the second I would have one implementation for each ESB I want to connect to if it weren't done with JBI (I think... right? Because it seems it would be the same if I do it with JBI).
    One guy told me that the second would be best because, as far as he knows (based on the PoCs (proofs of concept) he made with JBossESB, Mule and WebSphere ESB), some of the ESBs he tested require some manual configuration to publish the services to be accessed through the ESB. He didn't do anything with JBI and didn't know anything about it.
    Another thing is that if I deploy a JBI Service Engine (SE) only once, I can configure every service I want to publish in the ESB through this SE. In other words, I won't need to ask the ESB admin to do the configuration for me (or do it through the ESB's web interface).
    Does anybody know if all the ESBs that support JBI provide an interface that another application can use to connect to them and configure services?
    I thank you in advance for any information!
    Regards,
    Luiz


  • Doubt about actions.xml with actions and roles

    Hi all,
    we are using a file like actions.xml in our Web Dynpro applications to describe actions like:
    Is it possible to describe GROUPs, assigning roles to them, in the same XML instead of doing this using the useradmin application? We need to describe the roles in the XML because we are using around 25 ROLEs and 15 GROUPs.
    We would appreciate it if you could show us the complete description, with an example of defining those GROUPs in the XML with all the necessary tags and properties.
    Thanks in advance.
    Raú

    This feature is one of the hidden features SAP has for deploying stuff to NW. I'm sure there is a way to do that, but it's not documented, as the role extension is also not documented. I don't know why SAP is hiding these extremely useful features from normal developers. Especially for product development they are so useful.
    Did you know that it's possible to deploy database content (not just tables!) with a special DC and an XML file in a special format? Just another example of the hidden features in SAP NetWeaver.

  • I need to know how I contact HP about a complaint with one of their products.

    First of all, I find this site ridiculous. You click on "contact us", but you can't actually contact us. I would like to find someone to talk to about my disappointment, before I talk to everyone about my disappointment.

    First of all I would like to invite your attention to the forum rules of participation at the link below....
    http://h30434.www3.hp.com/t5/Rules-of-Participation/Rules-of-Participation/m-p/252325/highlight/tru...
    Now that you have read the rules of participation, you can see that this is not HP technical support, and criticizing this forum for something it was not designed to do will not get the attention of any of the folks who work for HP.
    We on this forum are all volunteers who try to help folks resolve issues with their HP products.
    I do not work for HP,  I do not represent HP, and I am not a shareholder in the company.
    To express your dissatisfaction with your product, you can contact HP at:
    Hours of operation
    7 days a week 24 hours a day
    1-800-474-6836 (1-800 HP INVENT)

  • Doubt about applet communicating with the SIM

    Greetings,
    I have this doubt: I am developing a web site which contains a Java applet. The web site is going to be available on some web server and will be accessed by mobile phones. The idea is that, somehow, the applet communicates with the mobile phone and extracts info from the SIM card for authentication. I am still wondering whether this is possible, and about the possible drawbacks or alternatives.
    Thank you in advance,
    Fernando

    Hi,
    you definitely have to sign your applet to do that.
    Check the "Security -> Signed applet" forum. You'll find the information you need to sign an applet.
    For example, check: http://forum.java.sun.com/thread.jsp?forum=63&thread=174214
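    For reference, signing a jar is typically a two-step process. This is only a sketch with placeholder names (keystore, alias, and jar name are all hypothetical), and a self-signed certificate will still show the user a security warning:

```
# create a keypair in a new keystore, then sign the applet jar with it
keytool -genkey -alias myapplet -keystore mykeys.jks -validity 365
jarsigner -keystore mykeys.jks MyApplet.jar myapplet
```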

  • Doubt about dynamic ABAP with PERFORMs

    Good Morning Gurus.
    1. I have the price tables in internal tables (t_a530, t_a951, etc.).
    2. I need to filter the data by the fields DATBI and DATAB.
    3. Can I create one PERFORM, pass it a table, and afterwards pass another table with a different format to that same PERFORM?
    4. That is, dynamic ABAP.
    5. I do not want to create several PERFORMs when I can write only one (with my logic for all tables).
    Can you help me?
    Thanks.

    The program is already doing this, except:
        WHERE datbi >= sy-datum
          AND datab <= sy-datum.
    In my program it is:
        WHERE datbi >= sy-datum
    If I change it, will it stop selecting duplicate MATNRs?
    Because today my problem is this:
    it selects all entries valid before sy-datum, so if the material has 3 entries it shows all of them.
    And I need only the most current one.

  • 10g RAC Connecting with one node down.

    We are currently running version 10.1.0.2.0 of RAC, on Red Hat Enterprise Linux 3, with two load-balanced nodes - I have limited RAC experience.
    Node 1 is currently shut down, so when a user tries to connect they have a 50% chance of success. An attempt made to connect to node 1 returns the following: ORA-12154 TNS: could not resolve connect identifier specified. They can retry and eventually connect to node 2.
    Is there any way to stop attempted connections to node 1 without having to modify all TNSNAMES entries?
    Thank you,
    David.

    This is now resolved.
    Thank you.
    David.

  • Not enough RAM while opening a 122 MB PDF file in CS6 x64 on Win 7 x64 with 32 GB RAM (about 20 free), an SSD disk (enough free) and an Intel i7 3.5

    The PDF file is local, and I get the same error using PS CS6 in x64 as in x86.
    The same after a reboot.
    The same after changing the maximum RAM usage inside PS to 100%.

    Nobody can tell you anything without exact technical details like what the PDF contains or how it was generated, what your scratch disk settings are and so on. The file could simply have a gigazillion lines or something like that and may never properly rasterize.
    Mylenium

  • [SOLVED] Long time with excessive disk access before system reboot.

    I would be grateful for some help here. It's my first go at Arch Linux, having used Xubuntu for several years. I may be missing something obvious, in which case I would be happy if someone could point me in the right direction.
    Problem: When I do a system restart by issuing
    $ systemctl reboot
    I get the following output
    Sending SIGTERM to remaining processes...
    Sending SIGKILL to remaining processes...
    Unmounting file systems.
    Unmounted /sys/kernel/debug.
    Unmounted /dev/hugepages.
    Unmounted /dev/mqueue.
    Not all file systems unmounted, 1 left.
    Disabling swaps.
    Detaching loop devices.
    Detaching DM devices.
    Unmounting file systems.
    Not all file systems unmounted, 1 left.
    Cannot finalize remaining filesystems and devices, giving up.
    Successfully changed into root pivot.
    Unmounting all devices.
    Detaching loop devices.
    Diassembling stacked devices.
    mdadm: stopped /dev/md126
    [ 1654.867177] Restarting system.
    However, after the last line is printed, the system does not reboot immediately but hangs for about 2 minutes with heavy disk activity. I can't say if it is reads or writes or both, but the LED of my HDD is lit constantly. When this activity stops, the machine reboots.
    $ systemctl poweroff
    works as expected, i.e. shuts down immediately without excessive disk access.
    I see this behaviour both with the installed Arch system and when I run the live installation/recovery CD. It is also the same if I boot into the busybox rescue shell and then restart the machine from there. It also does not seem to matter whether any partition on the disk is mounted or not; the behaviour is always the same, with 2 min. of heavy activity before reboot.
    System setup:
    Sony Vaio VPZ13. Intel Core i5 M460, 4 GB RAM, 2x64 GB SSD in RAID0 configuration via BIOS setting (a.k.a. fake RAID), partitioned like:
    windows boot
    windows system
    linux swap
    linux "/"
    linux "/home"
    So it's a dual boot setup with Windows 7.
    The RAID array is assembled by mdadm, and I have mdadm_udev among my mkinitcpio.conf hooks (after block but before filesystems).
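    For reference, the hook ordering described above corresponds to a mkinitcpio.conf line roughly like this (the other hooks shown are typical defaults, not the poster's actual line):

```
# /etc/mkinitcpio.conf -- mdadm_udev after block, before filesystems
HOOKS="base udev autodetect modconf block mdadm_udev filesystems keyboard fsck"

# after editing, regenerate the initramfs:
# mkinitcpio -p linux
```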
    Snip from journalctl log showing actions when reboot has been issued:
    jan 18 12:24:23 wione systemd[1]: Stopping Sound Card.
    jan 18 12:24:23 wione systemd[1]: Stopped target Sound Card.
    jan 18 12:24:23 wione systemd[1]: Stopping Bluetooth.
    jan 18 12:24:23 wione systemd[1]: Stopped target Bluetooth.
    jan 18 12:24:23 wione systemd[1]: Stopping Graphical Interface.
    jan 18 12:24:23 wione systemd[1]: Stopped target Graphical Interface.
    jan 18 12:24:23 wione systemd[1]: Stopping Multi-User.
    jan 18 12:24:23 wione systemd[1]: Stopped target Multi-User.
    jan 18 12:24:23 wione systemd[1]: Stopping Login Prompts.
    jan 18 12:24:23 wione systemd[1]: Stopped target Login Prompts.
    jan 18 12:24:23 wione systemd[1]: Stopping Getty on tty1...
    jan 18 12:24:23 wione systemd[1]: Stopping Login Service...
    jan 18 12:24:23 wione login[333]: pam_unix(login:session): session closed for user root
    jan 18 12:24:23 wione login[333]: pam_systemd(login:session): Failed to connect to system bus: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
    jan 18 12:24:23 wione systemd[1]: Stopped D-Bus System Message Bus.
    jan 18 12:24:23 wione systemd[1]: Stopped Getty on tty1.
    jan 18 12:24:23 wione systemd[1]: Stopping Permit User Sessions...
    jan 18 12:24:23 wione systemd[1]: Stopped Permit User Sessions.
    jan 18 12:24:23 wione systemd[1]: Stopped Login Service.
    jan 18 12:24:23 wione systemd[1]: Stopping Basic System.
    jan 18 12:24:23 wione systemd[1]: Stopped target Basic System.
    jan 18 12:24:23 wione systemd[1]: Stopping Dispatch Password Requests to Console Directory Watch.
    jan 18 12:24:23 wione systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
    jan 18 12:24:23 wione systemd[1]: Stopping Daily Cleanup of Temporary Directories.
    jan 18 12:24:23 wione systemd[1]: Stopped Daily Cleanup of Temporary Directories.
    jan 18 12:24:23 wione systemd[1]: Stopping Sockets.
    jan 18 12:24:23 wione systemd[1]: Stopped target Sockets.
    jan 18 12:24:23 wione systemd[1]: Stopping D-Bus System Message Bus Socket.
    jan 18 12:24:23 wione systemd[1]: Closed D-Bus System Message Bus Socket.
    jan 18 12:24:23 wione systemd[1]: Stopping System Initialization.
    jan 18 12:24:23 wione systemd[1]: Stopped Setup Virtual Console.
    jan 18 12:24:23 wione systemd[1]: Unmounting Temporary Directory...
    jan 18 12:24:23 wione systemd[1]: Unmounted Temporary Directory.
    jan 18 12:24:23 wione systemd[1]: Unmounted /home.
    jan 18 12:24:23 wione systemd[1]: Starting Unmount All Filesystems.
    jan 18 12:24:23 wione systemd[1]: Reached target Unmount All Filesystems.
    jan 18 12:24:23 wione systemd[1]: Stopping Local File Systems (Pre).
    jan 18 12:24:23 wione systemd[1]: Stopped target Local File Systems (Pre).
    jan 18 12:24:23 wione systemd[1]: Stopping Remount Root and Kernel File Systems...
    jan 18 12:24:23 wione systemd[1]: Stopped Remount Root and Kernel File Systems.
    jan 18 12:24:23 wione systemd[1]: Starting Shutdown.
    jan 18 12:24:23 wione systemd[1]: Reached target Shutdown.
    jan 18 12:24:23 wione systemd[1]: Starting Save Random Seed...
    jan 18 12:24:23 wione systemd[1]: Starting Update UTMP about System Shutdown...
    jan 18 12:24:23 wione systemd[1]: Started Save Random Seed.
    jan 18 12:24:23 wione systemd[1]: Started Update UTMP about System Shutdown.
    jan 18 12:24:23 wione systemd[1]: Starting Final Step.
    jan 18 12:24:23 wione systemd[1]: Reached target Final Step.
    jan 18 12:24:23 wione systemd[1]: Starting Reboot...
    jan 18 12:24:23 wione systemd[1]: Shutting down.
    jan 18 12:24:23 wione systemd-journal[189]: Journal stopped
    -- Reboot --
    Since I have used Xubuntu without hassle for several years, I first thought the problem may be related to systemd reboot and something in my system setup. But I have tried the Fedora 17 live CD and rebooting there works as expected. So, since it works in one systemd distro, it should work with Arch as well.
    Then I thought that it maybe had something to do with the raid-array, something along the lines of
    https://bugzilla.redhat.com/show_bug.cgi?id=752593
    https://bugzilla.redhat.com/show_bug.cgi?id=879327
    But then I found the shutdown hook for mkinitcpio, and now I see that the array is stopped and disassembled. So that's not the problem either (or that's what I guess, at least).
    Unfortunately I'm out of ideas. Any help would be appreciated.
    Last edited by wingbrant (2013-02-02 22:20:20)

    It turned out that the magic word for me was "reboot=pci" on the kernel command line. With that option set it works like a charm. The machine reboots nice and clean.
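    For anyone hitting the same hang: the workaround above is a kernel command-line parameter. Assuming GRUB (adjust for syslinux or another bootloader), it can be added roughly like this:

```
# /etc/default/grub -- append reboot=pci to the kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet reboot=pci"

# then regenerate the config and reboot:
# grub-mkconfig -o /boot/grub/grub.cfg
```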

  • Did Satellite Pro M70 come with WINXP disks?

    When I got my M70 it only came with one disk, the add-ons. Did it come with the Windows XP Pro disk? Because I have Windows Vista Beta 2 and I don't want to install it unless I know I have the disk for Windows XP. I use my laptop at school, and if Vista doesn't run Word then I will have to go back to XP!
    thanks for your help
    James

    Hi
    The unit was delivered only with the Toshiba recovery CD, because Toshiba Assist (on the desktop) contains an option to create your own Toshiba driver and tools CD.
    Therefore only one disc was delivered.
    By the way, you can create a second partition and install a second OS there without deleting the available WinXP.

  • Oracle RAC with QFS shared storage going down when one disk fails

    Hello,
    I have an Oracle RAC in my testing environment. The configuration follows:
    nodes: V210
    Shared Storage: A5200
    #clrg status
    Group Name Node Name Suspended Status
    rac-framework-rg host1 No Online
    host2 No Online
    scal-racdg-rg host1 No Online
    host2 No Online
    scal-racfs-rg host1 No Online
    host2 No Online
    qfs-meta-rg host1 No Online
    host2 No Offline
    rac_server_proxy-rg host1 No Online
    host2 No Online
    #metastat -s racdg
    racdg/d200: Concat/Stripe
    Size: 143237376 blocks (68 GB)
    Stripe 0:
    Device Start Block Dbase Reloc
    d3s0 0 No No
    racdg/d100: Concat/Stripe
    Size: 143237376 blocks (68 GB)
    Stripe 0:
    Device Start Block Dbase Reloc
    d2s0 0 No No
    #more /etc/opt/SUNWsamfs/mcf
    racfs 10 ma racfs - shared
    /dev/md/racdg/dsk/d100 11 mm racfs -
    /dev/md/racdg/dsk/d200 12 mr racfs -
    When the disk /dev/did/dsk/d2 failed (I failed it by removing it from the array), the Oracle RAC went offline on both nodes, and then both nodes panicked and rebooted. Now clrg status shows the output below.
    Group Name Node Name Suspended Status
    rac-framework-rg host1 No Pending online blocked
    host2 No Pending online blocked
    scal-racdg-rg host1 No Online
    host2 No Online
    scal-racfs-rg host1 No Online
    host2 No Pending online blocked
    qfs-meta-rg host1 No Offline
    host2 No Offline
    rac_server_proxy-rg host1 No Pending online blocked
    host2 No Pending online blocked
    CRS is not started on either of the nodes. I would like to know if anybody has faced this kind of problem when using QFS on a diskgroup. When one disk fails, Oracle is not supposed to go offline, as the other disk is working, and my QFS configuration is supposed to mirror these two disks!
    Many thanks in advance
    Ushas Symon

    I'm not sure why you say QFS is mirroring these disks. Shared QFS has no inherent mirroring capability. It relies on the underlying volume manager (VM) or array to do that for it. If you need to mirror your storage, you do it at the VM level by creating a mirrored metadevice.
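    For illustration, a mirrored metadevice in the racdg diskset shown earlier might be built roughly like this (a sketch only; the DID device names are taken from the metastat output above, and the metadevice numbers are hypothetical - verify everything against your own configuration before touching shared storage):

```
# Sketch: build one mirror from the two DID devices,
# instead of two independent one-disk concats.
metainit -s racdg d101 1 1 /dev/did/rdsk/d2s0   # submirror on disk 1
metainit -s racdg d102 1 1 /dev/did/rdsk/d3s0   # submirror on disk 2
metainit -s racdg d100 -m d101                  # create one-way mirror
metattach -s racdg d100 d102                    # attach second submirror
```

    The QFS mcf would then reference the mirrored metadevice, so losing either underlying disk leaves the filesystem available.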
    Tim
    ---

  • I want to install WinXP on my iMac running Lion OS. I will use Boot Camp. My WinXP uses 2 disks, with XP on one CD and SP2 on the other. Boot Camp seems to say to use a one-disk version. What can I do about this paradox?

    I want to install WinXP on my iMac. It has OS 10.7.2 and uses a 27" display. I will use Boot Camp and create a separate partition for WinXP. The Boot Camp instructions call for a single-disk WinXP CD with Service Pack 2 on it. My version has WinXP on one disk and Service Pack 2 on another. Will this be a problem? If it is, is there a solution?
    Tnx... K6jpj

    If you just bought an iMac a week ago new, it WILL NOT support XP unless XP is inside a virtual machine.  End of story.  You will need Windows 7 if you plan to run Windows natively on any Mac in 2011.
    Used models will run XP, though--from the Intel inception in '06 through the middle part of 2010.
    However, consider that Microsoft wants XP gone--and badly so--even as it is extending security hotfix support through April 2014.  This gives you two years and four months before you have to throw down for whatever's current at that time.
    Windows 7 will give you a much better experience (dare I say more Mac-like?) than XP, as it seems to be better organized.  The Taskbar, for instance, has become more like the OS X Dock in that you can "pin" things to it in a straightforward manner, unlike XP's seldom-used Quick Launch traybar. 
    7's also built to take advantage of all the snazzy new hardware you now have.
    But don't let me sell it to you.  It's really your call.
    Nate