BO4 - Moving from 2 node cluster to 1 node

Hi,
At the moment we have a 2-node cluster in a virtual environment (2 virtual hosts but actually 1 physical host), so there is no added resilience.
We have been asked by the hardware team to look at consolidating all the BO4 services onto 1 node.
The plan would be:
1) Create the services which are currently running on node 2 on node 1 - they would be created in a stopped and disabled state by default
2) Stop the SIA on node 2 and activate the services on node 1 so that node 1 contains all services
Is there anything else we would need to do? E.g. deleting node 2 via the CMC?
Thanks

Hi Philip,
I would suggest not creating all the services on node 1 exactly as they are on node 2.
Analyze your system usage and create the services based on that usage and traffic, especially on the processing tier and the APS.
First stop the SIA on node 2 and create a few processing services on node 1.
Then remove the node:
In the CCM:
To remove the node from the cluster you need to delete its SIA from the CCM. You need to have at least one running CMS in the cluster.
If that does not work, we can try to delete it from the CMS DB and the CMC.
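Before deleting the SIA, it can help to confirm which servers are still registered against each node. A minimal check via the AdminTools Query Builder (the LIKE pattern assumes the node is literally named NODE2; substitute your own node name):
SELECT SI_NAME FROM CI_SYSTEMOBJECTS WHERE SI_KIND='Server' AND SI_NAME LIKE 'NODE2.%'
If this returns no rows after the cleanup, node 2 no longer owns any servers in the CMS repository.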

Similar Messages

  • Concurrent nodes reading from JMS topic (cluster environment)

    Hi.
    Need some help on this:
    Concurrent nodes reading from JMS topic (cluster environment)
    Thanks
    Denis

    After some thinking, I noted the following:
    1 - It's correct that only one node should subscribe to a topic at a time. Otherwise, the same message would be processed multiple times, once by each node.
    2 - In order to solve the load balancing problem, I think that the Topic should be replaced by a Queue. This way, each BPEL process on each node would poll for a message, and as soon as a message arrives, only one BPEL node gets it and takes it off the Queue.
    The legacy JMS provider I mentioned in the post above is actually the Retek Integration Bus (RIB). I'm integrating Retek apps with E-Business Suite.
    I'll try to configure the RIB to provide both a Topic (for the existing application consumers) and a Queue (an exclusive channel for BPEL)
    Have you guys already tried an integration like this?
    Thanks
    Denis

  • Moving of the Infoobjects from Unassigned nodes to Custom Created Nodes

    Hello Experts.
    How do I move InfoObjects from unassigned nodes to custom-created nodes in RSA1?
    Thanks
    PT

    Hi ,
    Go to RSD1 -> InfoObject catalog -> enter the catalog name -> Edit -> insert your InfoObjects under the "Characteristic" folder.
    Hope this is the easier way.
    Regards,
    Swarupa.

  • How to failover SCAN VIP and SCAN Listener from one node to another?

    Environment:
    O.S :          HP-UX  B.11.31 U ia64
    RDBMS:   Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    It is a 2 Node RAC.
    Question:
    How to failover the SCAN VIP and SCAN LISTENER running on node 1 to node 2?
    What is the relation between standard LISTENER and SCAN LISTENER ?
    Why do we need LISTENER, when we have SCAN LISTENER ?
    When I tried SRVCTL STOP LISTENER, I thought the SCAN LISTENER and SCAN IP would fail over, but they did not.
    Also please clarify what happens if I use SRVCTL RELOCATE SCAN -i 1 -n Node1.
    Actually, I am trying to move the SCAN listeners so that when I do PSU 7 patching on one node, no incoming connection attempt will spawn
    a process and thereby open files in $ORACLE_HOME (which would prevent the patch from being applied).
    Please clarify my queries.
    Thanks,  Sivaprasad.S

    Hi Sivaprasad,
    1. The following links will help you with SCAN VIP and SCAN LISTENER failover from one node to another.
    http://heliosguneserol.wordpress.com/2012/10/19/how-to-relocate-scan_listener-from-one-node-to-another-node-on-rac-system/
    http://oracledbabay.blogspot.co.uk/2013/05/steps-to-change-scan-ip-address-in.html
    2. The standard LISTENER is specific to the particular node on which it runs and cannot be relocated. SCAN listeners are not replacements for the node listeners. A set of cluster processes called SCAN listeners runs on three nodes in the cluster (or on all nodes if there are fewer than three); regardless of how many nodes you have, there are at most three SCAN listeners. So there is no direct relation between the standard LISTENER and the SCAN LISTENER.
    3. Let me put this in a simple way. All the RAC resources (ASM, database, services, nodeapps) register with the SCAN listener. If any of these resources is down or not running, the SCAN listener knows its status, and if a client requests a node/service which is down, the SCAN listener redirects the client request to the least loaded node. All of this happens without the client being aware of it. The standard LISTENER, as usual, only handles incoming requests to connect to the database on its node. So we need both the LISTENER and the SCAN LISTENER.
    4. If you run SRVCTL STOP LISTENER, it stops the default listener on the specified node_name, or the listeners given in a list of listener names, that are registered with Oracle Clusterware on that node. No failover happens in this case.
    5. Yes, you can relocate the SCAN if you want to.
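    For example, to move SCAN listener 1 to node 2 and then verify where the SCAN listeners are running (hypothetical node name, 11.2 srvctl syntax):
    srvctl relocate scan_listener -i 1 -n node2
    srvctl status scan_listener
    srvctl config scan_listener
    The relocate command moves SCAN VIP 1 together with its listener; the status/config commands show which node currently hosts each SCAN listener.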
    Hope this helps!!
    Regards,
    Pradeep. V

  • Wallboard stats broken after moving to a HAoWAN cluster

    Hello, I am looking for some help trying to figure out what changes are made after moving to a HAoWAN cluster in relation to DB reporting for the wallboard app. 
    I had/have the free community wallboard script (thank you, community) working just great before the upgrade to a HA cluster.  I am using UCCX 8.0.2SU3 and I have just recently added a second node for redundancy.  My stats for the wallboard app seem to have stopped working the moment I started the upgrade.  I have done the steps outlined in the "Using Wallboard Software in a High Availability (HA) Deployment" guide but I am still not seeing current data.
    Has anybody else had this problem or have any suggestions on a workaround?
    John P

    Hi
    Well... not sure which wallboard/version of that wallboard you are using, so no idea if it checks that API.
    However.. If you think you have it pointed at the pub, and the pub is master, then I would check that the actual tables are updating:
    Try:
    run uccx sql db_cra select * from rtcsqssummary
    That should hopefully get you a list of the CSQs and stats; if it's not current/updating regularly (run it a few times on each server and compare) then the stats may not be updating and you need to look at the UCCX server.
    If you see them updating on the pub, then you know you have a problem with the wallboard.
    Aaron

  • Issue while executing the discovery command from target nodes

    Hi Experts.
    I had to create a two-node cluster using Openfiler. After successfully creating the LUN and the associated partitions from all the nodes, I changed the IP address of the Openfiler.
    After changing the IP on the Openfiler:
    A) Openfiler version:
    [root@san ~]# uname -r
    2.6.26.8-1.0.11.smp.pae.gcc3.4.x86.i686
    B) Linux Oel5:-
    [root@rac1 ~]# uname -r
    2.6.18-194.el5xen
    [root@rac1 ~]#
    1) I am able to ping and ssh etc. from any node to the Openfiler.
    However, while executing the command below I am getting the following output:
    service iscsi restart
    Stopping iSCSI daemon:
    iscsid dead but pid file exists [  OK  ]
    Starting iSCSI daemon: [  OK  ]
    [  OK  ]
    Setting up iSCSI targets: iscsiadm: No records found!
    [  OK  ]
    [root@rac1 ~]#
    Moreover, I tried to discover the targets; unfortunately, no output is displayed after executing the command below.
    [root@rac1 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.37.200
    [root@rac1 ~]#
    A quick response would be appreciated as my whole test case is down right now due to storage issues.
    Thanks,
    Arch.

    Are you running a firewall that needs to be adjusted to support your changed Openfiler IP network?
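    As a quick check on the initiator node, you could verify whether the iSCSI port is being blocked and then retry the discovery against the new address (a sketch only, assuming the default iSCSI port 3260 and iptables on OEL5; also re-check the allowed network/ACL for the target on the Openfiler side):
    iptables -L -n | grep 3260
    service iptables stop    # temporarily, for testing only
    iscsiadm -m discovery -t sendtargets -p 192.168.37.200:3260
    service iptables start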

  • Boot up from other node's boot disk

    Hi, at the moment I have a problem booting one of the two cluster nodes, because its boot disk seems to be damaged. Is it possible to boot it from the other node's boot disk? If yes, how do I do it? If not, are there any other ideas besides replacing the damaged boot disk? Because the machines are in a testbed, I don't think my company wants to spend much money replacing the disk at short notice =)
    The version is SOLARIS 8 and SUN Cluster 3.0.

    Hi,
    Sorry to disappoint you, but in Sun Cluster you cannot have one node boot from the other node's boot disk. Do you have a backup of the boot disk?
    Your best option is to replace it, I guess.
    Kristien

  • Using the network load balancing from the nodes itself

    I have installed a 2-node Sun Cluster 3.2, configured a shared IP resource and attached to it a scalable, network-aware resource running on the two nodes. I crashed the process on one of the nodes in such a way that the cluster could not restart it.
    In this state I tried to open a connection from another server, and the load balancer always sent the traffic to the node that was up, which is as expected...
    If I try to open a connection from the node on which the process failed, then I get a connection refused, meaning that the load balancer is not working in this circumstance.
    Is this a bug, a misconfiguration, or just an inherent cluster limitation?
    Is there a solution to this issue?
    Regards
    Daniel

    To answer your first question, no, there isn't anything you can do.
    Here is what my colleague suggested while I was away:
    Zone-cluster scalable services still require shared-IP zones, which means requests from one app to another would still bounce back due to loopback. That probably wouldn't help here.
    They could isolate the services that must talk to other services into their own failover group on exclusive-IP zones. Other services can be set up as originally planned. But maybe there are too many such "dependent services" for this to be useful. Also, each failover service must have its own IP address.
    Finally, can these web services be configured so that they try multiple addresses? In that case, if the shared address foo for service X bounces back (due to X having crashed on the local node), the app itself would retry with address bar for service X. This allows for uniform configuration across all services, namely:
        - try the shared address
        - try node 1's own address (either public or clusternode1-priv)
        - try node 2's own address
    You can fine-tune it so that configurations on node 1 only use node 2's address as a backup, and vice versa. I don't know if that is any help.
    As for your second question, the answer is that Solaris Container Clusters allow for consolidation and isolation of clusters onto a single set of nodes. Normal containers don't really allow you to consolidate complete clusters in quite the same way. See http://www.sun.com/offers/details/820-7351.html for more.
    Tim
    ---

  • Datafile is not accessible from One NODE

    Hi All,
    We have a two-node RAC cluster. We are able to access the datafile from one node; from the other node we get an error like:
    ERROR:
    ORA-01157: cannot identify/lock data file 6 - see DBWR trace file
    ORA-01110: data file 6: '/dev/md/racgcp/rdsk/d548'
    If we take the tablespace offline, the error does not appear.
    We also tried to drop it, but the drop hangs.
    Please Suggest
    Thanks.

    ERROR:
    ORA-01157: cannot identify/lock data file 6 - see DBWR trace file
    ORA-01110: data file 6: '/dev/md/racgcp/rdsk/d548'
    Hi,
    Is this file visible on the first node?
    What is the output of:
    select name from v$datafile where name like '%dev/md/racgcp/rdsk/d548%';
    You might have given the wrong file name; try the file_id instead of the name. Post any errors.
    Celero,
    It is typical when you have created a tablespace using local node storage instead of the shared storage used by the RAC; then it is only visible from the node on which it was created.
    In a RAC database the datafiles belong in a shared location; if you configure a tablespace on local node storage, how can you call it RAC? Can you please describe your setup in more detail?
    Thanks
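    To see quickly whether the file is visible to both instances, one option (a sketch; file 6 is taken from the error above) is to query the global view and then check the device on each node at the OS level:
    select inst_id, file#, name, status from gv$datafile where file# = 6;
    ls -l /dev/md/racgcp/rdsk/d548
    If the device is missing or has the wrong ownership/permissions on the failing node, that points to a storage/metadevice problem rather than a database one.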

  • RAC-DATA FILE ACCESSING ISSUE FROM ONE NODE

    Dear All,
    We have a two-node RAC (10.2.0.3) running on HP-UX. Since yesterday, accessing data in a specific data file from one instance shows the error below, whereas accessing the same datafile from the other node works properly.
    Errors in file /oracle/product/admin/tap3plus/bdump/tap3plus4_dbw0_24950.trc:
    ORA-01157: cannot identify/lock data file 75 - see DBWR trace file
    ORA-01110: data file 75: '/dev/vg_rac/rraw_tap3plus_temp_live05'
    ORA-27041: unable to open file
    HPUX-ia64 Error: 19: No such device
    Additional information: 2
    Tue Jan 31 08:52:09 2012
    Errors in file /oracle/product/admin/tap3plus/bdump/tap3plus4_dbw0_24950.trc:
    ORA-01186: file 75 failed verification tests
    ORA-01157: cannot identify/lock data file 75 - see DBWR trace file
    ORA-01110: data file 75: '/dev/vg_rac/rraw_tap3plus_temp_live05'
    Tue Jan 31 08:52:09 2012
    File 75 not verified due to error ORA-01157
    Tue Jan 31 08:52:09 2012
    Thanks in Advance

    user585870 wrote:
    We have a two-node RAC (10.2.0.3) running on HP-UX. Since yesterday, accessing data in a specific data file from one instance shows the error below, whereas accessing the same datafile from the other node works properly.
    That would be due to some kind of failure in the shared storage layer.
    RAC needs the very same storage layer to be visible and available on each RAC node - thus this needs to be some form of shared cluster storage.
    Should a piece of it fail on one node, that node would not be able to access the RAC database files on that shared storage layer - and it will throw the type of errors you are seeing.
    So what does this shared storage layer look like? Fibre Channel HBAs connected to a Fibre Channel switch and SAN, making SAN LUNs available as shared storage devices?
    Typically a shared storage failure would throw errors in the kernel log. This is because the error is not an Oracle error, but a kernel error. As it is in your case. The bottom error on the error stack points to the root cause:
    ORA-01157: cannot identify/lock data file 75 - see DBWR trace file
    ORA-01110: data file 75: '/dev/vg_rac/rraw_tap3plus_temp_live05'
    ORA-27041: unable to open file
    HPUX-ia64 Error: 19: No such device
    So HP-UX on that node is not seeing a specific shared storage device.
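    As a first check on the failing node, you could confirm that the raw device and its volume group are actually visible at the OS level (a sketch; the device path comes from the error above, standard HP-UX commands):
    ls -l /dev/vg_rac/rraw_tap3plus_temp_live05
    vgdisplay -v vg_rac
    ioscan -fnC disk
    If the device node is missing or the underlying LUN no longer shows up in ioscan, the storage/SAN team needs to restore visibility before Oracle can verify the file again.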

  • Problems looking up JMS queues in JNDI from other nodes

    I have a simple cluster (MyCluster) in WebLogic 6.1 which consists of two managed servers (Server1 & Server2).
    Server1 has a JMSServer (JMSServer1) containing a couple of JMS queues.
    I also created a JMS Connection Factory, and targeted both the managed servers as well as the cluster.
    I can look up the queue from the node hosting the JMSServer, but cannot look up the queues from the 2nd node (Server2).
    It just says that it cannot resolve the JNDI name for the queue on Server2.
    According to the WebLogic docs, I should have transparent access to the queues from Server2 as long as I target both servers from the connection factory.
    Am I missing something?
    Thanks.


  • Can we change the UCCX server from single node to the first node without affecting it's configurations

    Hi all,
    Initially I was using a single CUCM 7.0 server with a single UCCX 7.0 server. Now I am going to add a second CUCM and UCCX to the cluster. I originally configured the UCCX server as a single node; now I have to configure it as the first node. How can I change it from a single node to the first node without affecting its configuration?
    Regards
    Ali Raza

    Hi Aaron,
    How do we make the change in the lmhosts file? How do we add both servers to that file?
    Below is the lmhosts file text.
    # Copyright (c) 1993-1999 Microsoft Corp.
    # This is a sample LMHOSTS file used by the Microsoft TCP/IP for Windows.
    # This file contains the mappings of IP addresses to computernames
    # (NetBIOS) names.  Each entry should be kept on an individual line.
    # The IP address should be placed in the first column followed by the
    # corresponding computername. The address and the computername
    # should be separated by at least one space or tab. The "#" character
    # is generally used to denote the start of a comment (see the exceptions
    # below).
    # This file is compatible with Microsoft LAN Manager 2.x TCP/IP lmhosts
    # files and offers the following extensions:
    #      #PRE
    #      #DOM:
    #      #INCLUDE
    #      #BEGIN_ALTERNATE
    #      #END_ALTERNATE
    #      \0xnn (non-printing character support)
    # Following any entry in the file with the characters "#PRE" will cause
    # the entry to be preloaded into the name cache. By default, entries are
    # not preloaded, but are parsed only after dynamic name resolution fails.
    # Following an entry with the "#DOM:<domain>" tag will associate the
    # entry with the domain specified by <domain>. This affects how the
    # browser and logon services behave in TCP/IP environments. To preload
    # the host name associated with #DOM entry, it is necessary to also add a
    # #PRE to the line. The <domain> is always preloaded although it will not
    # be shown when the name cache is viewed.
    # Specifying "#INCLUDE <filename>" will force the RFC NetBIOS (NBT)
    # software to seek the specified <filename> and parse it as if it were
    # local. <filename> is generally a UNC-based name, allowing a
    # centralized lmhosts file to be maintained on a server.
    # It is ALWAYS necessary to provide a mapping for the IP address of the
    # server prior to the #INCLUDE. This mapping must use the #PRE directive.
    # In addtion the share "public" in the example below must be in the
    # LanManServer list of "NullSessionShares" in order for client machines to
    # be able to read the lmhosts file successfully. This key is under
    # \machine\system\currentcontrolset\services\lanmanserver\parameters\nullsessionshares
    # in the registry. Simply add "public" to the list found there.
    # The #BEGIN_ and #END_ALTERNATE keywords allow multiple #INCLUDE
    # statements to be grouped together. Any single successful include
    # will cause the group to succeed.
    # Finally, non-printing characters can be embedded in mappings by
    # first surrounding the NetBIOS name in quotations, then using the
    # \0xnn notation to specify a hex value for a non-printing character.
    # The following example illustrates all of these extensions:
    # 102.54.94.97     rhino         #PRE #DOM:networking  #net group's DC
    # 102.54.94.102    "appname  \0x14"                    #special app server
    # 102.54.94.123    popular            #PRE             #source server
    # 102.54.94.117    localsrv           #PRE             #needed for the include
    # #BEGIN_ALTERNATE
    # #INCLUDE \\localsrv\public\lmhosts
    # #INCLUDE \\rhino\public\lmhosts
    # #END_ALTERNATE
    # In the above example, the "appname" server contains a special
    # character in its name, the "popular" and "localsrv" server names are
    # preloaded, and the "rhino" server name is specified so it can be used
    # to later #INCLUDE a centrally maintained lmhosts file if the "localsrv"
    # system is unavailable.
    # Note that the whole file is parsed including comments on each lookup,
    # so keeping the number of comments to a minimum will improve performance.
    # Therefore it is not advisable to simply add lmhosts file entries onto the
    # end of this file.
    Regards
    Ali Raza
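    Regarding the question above: adding both UCCX servers is just a matter of appending one mapping line per server at the end of the lmhosts file, IP address first, then the NetBIOS name, with #PRE so the entries are preloaded. The addresses and host names below are placeholders; use your actual server names and IPs:
    10.10.10.11    UCCX-PUB    #PRE
    10.10.10.12    UCCX-SUB    #PRE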

  • Moving from Oracle RAC to single instance

    Hi,
    We are running EBS R12 on Windows 2008 R2. It is a 2-node setup, and we want to move it to a single instance.
    I heard that moving from RAC to a single instance is not supported by Oracle. Is that true?
    Can someone kindly point me to documentation for this?

    user10243788 wrote:
    Hi,
    We are running EBS R12 on Windows 2008 R2. It is a 2-node setup, and we want to move it to a single instance.
    I heard that moving from RAC to a single instance is not supported by Oracle. Is that true?
    Can someone kindly point me to documentation for this?
    It is technically possible to move from RAC to a single node, and it is even documented. I don't know where you heard that it is not supported.
    See the process of removing a node from RAC:
    http://docs.oracle.com/cd/B28359_01/rac.111/b28254/adddelunix.htm
    http://docs.oracle.com/cd/B19306_01/rac.102/b14197/adddelunix.htm
    http://www.idevelopment.info/data/Oracle/DBA_tips/Oracle10gRAC/CLUSTER_23.shtml
    For performance-related discussion, see:
    Moving RAC to single instance
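    Beyond removing the extra node, converting the database itself generally comes down to turning off the RAC option in the spfile and cleaning up the second thread; a rough sketch only (follow the documents linked above for the full, supported procedure on EBS):
    alter system set cluster_database=false scope=spfile sid='*';
    -- restart on the remaining node, then:
    alter database disable thread 2;
    -- finally drop thread 2's redo log groups and its undo tablespace as described in the docs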

  • Moving from 7-Mode stretched MetroCluster to CDot MetroCluster

    Hi all. We're about to replace our old FAS3140 MetroCluster (stretched, single location) with a brand new FAS8040 CDot MetroCluster, half on premises and half hosted.
    Moving from a dual-controller 7-Mode MC to a 4-node CDot MC raises a number of questions. Having a dual-controller box on each location somewhat makes sense, and I guess NetApp had to go this way to fend off arguments about the alleged lack of local resiliency of 7-Mode fabric MetroClusters. However, what really annoys me is the need to split our storage capacity among the 4 nodes.
    I understand that it's still better than a few months ago, when each node had to use a dedicated pair of shelves, and the minimum setup for a CDot MC was 4 FAS8040 nodes and 8 DS4246 shelves (plus 4 FC switches, 4 FibreBridges, and so on). Now we've been told that shelves can be "shared" among nodes, and the minimum setup has been lowered to just 4 nodes and 4 shelves.
    Still, this setup implies the use of small aggregates (~10 drives) and many dedicated spare drives. (I've read that CDot wants 2 spare drives per node for garbage collection or something?!?)
    Really, is there no way to avoid splitting the storage pool into 4 quarters? Can't the secondary node on each site be put in a standby mode, without dedicated storage? Can't we use "root-data partitioning", like is being done on FAS2500s? (http://community.netapp.com/t5/Technology/Clustered-ONTAP-8-3-amp-Advanced-Drive-Partitioning/ba-p/98529) Or is it coming anytime soon? And can't spare drives be shared among nodes?

    I have extremely limited experience with MetroCluster, but I can address some of your questions.
    "Can't we use root-data partitioning, like is being done on FAS2500s?"
    Advanced Drive Partitioning (ADP) is not available on the FAS80XX line of controllers. Since the FAS80XX will typically be larger systems, there is no need for ADP. If you have the bare minimum of 2 (fully populated) disk shelves on each side, you're going to have 48 disks total, with 6 of those dedicated to the root partition on each node.
    "Still, this setup implies the use of small aggregates (~10 drives) and many dedicated spare drives. (I've read that CDot wants 2 spare drives per node for garbage collection or something?!?)"
    Can you explain your reasoning behind the use of small aggregates here? What would stop you from increasing your aggregate size? As for your question about clustered Data ONTAP wanting two spare drives per node, that is a new one for me. You may be referring to Flash storage and garbage collection, which is a method of extending the life of a Flash disk. I haven't dealt much with the AFF either, so I can't be sure.

  • Form button does not work when a program is moved from Windows 8.2 to Windows 7

    Hi,
    I have a few Excel programs which use the ODBC to get data from Access and which have macros which writes data to an external program, MYOB.
    When the macros tries to write the data to MYOB it fails if I am not running the program in administrator mode.   It seems that Windows 8.2 has a different level of security than Windows 7 and must be run in administrator mode for the ODBC to work. 
    I have had issues after running the program in administrator mode (testing): if I simply do a save (in administrator mode) and then send it to the customer, it just will not work on the customer's site. I have gotten
    around this in the past by saving any changes, exiting Excel, loading the program again (not in administrator mode) and saving it before sending it to the customer. This worked until now.
    For some unknown reason, the last time I sent a program to the customer and carried out the above process, the program stopped working. Originally I thought that the macro just wouldn't work on Windows 7, but I eventually found that it is the form button
    that no longer works when the program is moved from 8.2 to 7.
    Does anyone know why there is an incompatibility between 8.2 and Windows 7 and what I should be doing to ensure that my programs work in my customers' environment (Windows 7)?
    In the meanwhile, I have changed the form button to an activex button and the program works fine in both environments.
    Thanking you in advance,

    There are some reports that after the December 2014 Windows update macros stopped responding (I can't confirm whether this is related to your issue); a security update for Office may conflict with the ActiveX controls you have installed.
    Try the following:
    Close Excel
    Start Windows Explorer.
    Select your system drive (usually C:)
    Use the Search box to search for *.exd
    Delete all the files it finds.
    Start Excel again
    Open the file and save it, then try opening it on Windows 7.
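    If you prefer to do the search-and-delete from a command prompt, something like the following should be equivalent (these are the usual MSForms control cache locations; review what the commands would delete before running them):
    del /s /q "%TEMP%\*.exd"
    del /s /q "%APPDATA%\Microsoft\Forms\*.exd"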
    To get more detail about this issue, I suggest also contacting the Office forum.
    This case can also be solved by installing KB3025036.
    good luck
