Clustered BO Nodes - central FRS directory on a dedicated SAN space

Hello Everyone,
We want a clustered BO XI 3.1 environment with two SIA nodes on two different servers, with the FRS on both nodes pointing to a central file directory on a dedicated third server.
We want the FRS directory to live on SAN network storage. Our plan is to create a dedicated 300 GB LUN on the SAN and then point the FRS on both nodes to that same space. This would be pretty straightforward if it were an ordinary Windows share accessed via network drive mapping. However, our system admins tell us that letting the two Windows servers access the same LUN (space on the SAN) can result in data corruption, especially since in our case the two servers hosting the BO nodes are not part of the same network cluster (which is different from the Business Objects cluster itself).
Has anybody faced issues with hosting the FRS directories on a dedicated SAN space, especially when the BO nodes run on Windows servers in different network clusters?
Any suggestions and recommendations would be greatly appreciated.
Thanks in advance.

"The full control rights are given to the u2018boadminu2019, AD_SSO_service_account, and the local Administrators Group."
Is this the same account the SIA is running under on BO1 and BO2 boxes. The service account the SIA is running under requires access to the FRS share.
One thing to note, if the other FRS servers in your cluster are relying on a share located on BO3 then BO3 will still be a SPOF i.e. if BO3 goes down you may have FRS servers on BO1 and BO2 but they won't be able to access the FRS share via BO3.
One way to test access from BO1 & BO2 to the FRS share is to login on to these systems with the credentials of the SIA service account, see if you can browse to the FRS share using the UNC path specified for the input and output FRS shares
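If it helps, here is a minimal Python sketch of that check. The UNC paths are hypothetical placeholders (the thread never states the actual share names), and it only verifies that whoever runs it can list and write to the shares, so run it in a session logged on as the SIA service account:

import os
import tempfile

# Hypothetical UNC paths - replace with the Input/Output FRS locations from the CMC
SHARES = [r"\\BO3\frsinput", r"\\BO3\frsoutput"]

for share in SHARES:
    try:
        entries = os.listdir(share)                      # proves list/read access
        print(f"{share}: listed {len(entries)} entries")
        fd, probe = tempfile.mkstemp(dir=share, prefix="frs_access_check_")
        os.close(fd)
        os.remove(probe)                                 # clean up the scratch file
        print(f"{share}: write OK")
    except OSError as exc:
        print(f"{share}: ACCESS PROBLEM -> {exc}")

If either share reports an access problem under that account, fix the share/NTFS permissions before touching the FRS configuration.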

Similar Messages

  • Central transport directory

    Dear all,
    we are running ECC 6.0 (V5R4M0) with a central transport directory.
    How can I switch to local transport directories?
    Thanks
    Imran Hasware

    Hi Imran,
    OK, if you don't know how to remove and add links with RMVLNK/DEL or ADDLNK, then you should not change anything - otherwise it is practically certain that everything will be broken afterwards.
    You are actually having a totally different problem, as you write:
    /usr/sap/trans/tmp/****.LOC file already in use
    => there is a semaphore file still there, which indicates that a tp is running - most likely for the last 3 days, since that is how long you have been facing these issues.
    If no tp is running anymore, this is an error and the file should be deleted manually. This kind of file exists on purpose, so that tp processes wait for each other in critical paths. The same thing happens in unshared trans directories as well...
    => you should delete it, import 1 or 2 transports first and check that all is fine - then you can import your 30 transports in one step. Exactly because of these kinds of files, imports CAN run in parallel without crashing the system.
    (Unfortunately this process sometimes gets out of sync...) A quick stale-file check is sketched after this reply.
    Regards
    Volker Gueldenpfennig, consolut.gmbh
    http://www.consolut.de - http://www.4soi.de - http://www.easymarketplace.de
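    Not an official SAP tool, but here is a minimal Python sketch of that sanity check: it only lists *.LOC semaphore files under the central transport tmp directory together with their age, so you can confirm no recent tp is in flight before deleting one by hand. The /usr/sap/trans/tmp path comes from the thread; the 24-hour threshold is an assumption.

    import glob
    import os
    import time

    TRANS_TMP = "/usr/sap/trans/tmp"   # central transport directory from the thread
    MAX_AGE_HOURS = 24                 # assumed threshold - adjust to your import schedule

    now = time.time()
    for loc in sorted(glob.glob(os.path.join(TRANS_TMP, "*.LOC"))):
        age_h = (now - os.path.getmtime(loc)) / 3600.0
        status = "stale?" if age_h > MAX_AGE_HOURS else "recent"
        print(f"{loc}  age={age_h:.1f}h  [{status}]")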

  • Migrate Single Node 11.5.10.2 --- to --- Clustered Multi Node

    Dear All
    I have a project to migrate a single-node 11i EBS installation to a clustered multi-node configuration,
    and I opened a thread under (Forum Home » E-Business Suite » Technology - LCM: 11i Install/Upgrade).
    The URL is:
    Migrate Single Node 11.5.10.2 --->>  to  --->>>  Clustered Multi Node
    Please help me by updating that thread with your answers.
    Best regards
    Naji ghanim
    Apps DBA
    TAGITI

  • Clustering Solutions for Sun One Directory Server

    Hi,
    Please let me know the different recommended clustering solutions for Sun One Directory Server.
    Thanks
    Ram

    Please read the documentation for Sun ONE Directory Server 5.2... Clustering is covered, and agents for Sun Cluster are provided.
    Ludovic

  • Change central download directory for SLM

    Hello everyone,
    I have configured the MOPZ/SLM function in my Solution Manager system, and everything works fine (download, implementation and so on).
    Now I have a question: is it possible to change the central download directory? At the moment all download files go to the normal EPS/in directory of my Solution Manager.
    I want to share a special directory instead, because there are problems with the user authorizations in my Unix filesystem (different UIDs for the OS users across my landscape). This directory should be different from the current one.
    Is it possible to change the download directory?
    Bye, M. Haberland.

    Hello Marcel,
    Please review the following notes which may be able to guide you through this.
    1138247 - Maintenance Optimizer: Description about the SLM
    1137683 - Maintenance Optimizer: Notes for Software Lifecycle
    In addition, please review any notes that are relevant to these two notes.
    Also, please review the following URL, which will also assist you in configuring MOPZ:
    http://service.sap.com/solutionmanager
    -> Media Library
    -> Technical Papers
    -> "How to Configure Maintenance Optimizer (SPS15-17) to Use SLM" for ST SP15 to SP17, and "How to Configure Maintenance Optimizer (SPS18) to Use SLM" for ST SP18 and higher.
    Hope this helps you out.
    Thanks,
    Mark

  • Migrate a Single node clustered SQL SharePoint 2010 farm to an un-clustered single node SQL VM (VMware)

    Silly as it sounds, we have a SQL2008r2 Clustered SharePoint farm with only one node. We did intend to have 2 nodes but due to costs and other projects taking priority it was left as is.
    We have decided to virtualise the Database server (to be SQL2008r2 un-clustered) to take advantage of VMware H/A etc.
    Our current setup is:
        shareclus = cluster (1 node – sharedb (SharePoint database server))
        shareapp1 = application server (LB)
        shareapp2 = application server (LB)
        shareweb1 = WFE (LB)
        shareweb2 = WFE (LB)
    and we would like to go to:
        sharedb01vm = SharePoint database server
        shareapp1 = application server (LB)
        shareapp2 = application server (LB)
        shareweb1 = WFE (LB)
        shareweb2 = WFE (LB)
    So at the moment the database is referenced in Central Administration as shareclus. If I break the cluster, shareclus will not exist, so I don't think I will be able to use aliases(?), but I'm not sure.
    Can anyone help? Has anyone done this before? Any tips/advice to migrate or otherwise get the SQL DB virtualised would be greatly received.

    I haven't done this specifically with SharePoint, but I don't think it will be any different.
    Basically you build the new VM with the name sharedb01vm. Then, when you do the cut-over, i.e. when you move all the databases, you rename the servers: the new VM is renamed to shareclus and the old cluster can be named anything you like.
    At that point the SharePoint servers should be pointing at the new VM, where you have already migrated the databases.
    Another option is to create an alias on the SharePoint servers so that "shareclus" points to sharedb01vm (see the sketch below).
    I have seen both of these in different environments, but I basically don't prefer the alias option, as it creates confusion for people who don't know about it.
    Regards, Ashwin Menon My Blog - http:\\sqllearnings.com
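    If you do go the alias route, the client alias you would normally create with cliconfg.exe can also be scripted. Below is a minimal Python sketch using the names from this thread; the 1433 port is an assumption, it must run elevated on each SharePoint server, and on 64-bit machines 32-bit clients read the Wow6432Node copy of the same key, so you may need to set that one too:

    import winreg

    ALIAS = "shareclus"             # the name SharePoint already uses in Central Administration
    TARGET = "sharedb01vm,1433"     # new SQL VM; 1433 is an assumed port
    KEY_PATH = r"SOFTWARE\Microsoft\MSSQLServer\Client\ConnectTo"

    # DBMSSOCN selects the TCP/IP network library for the alias
    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                            winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, ALIAS, 0, winreg.REG_SZ, f"DBMSSOCN,{TARGET}")
    print(f"SQL client alias {ALIAS} -> {TARGET} created")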

  • Problem uninstalling colocated IM on clustered apps nodes

    I am trying to uninstall IM in a colocated IM architecture on clustered apps-tier nodes due to a directory integration problem. I need to run the ocsdeconfig.sh tool to remove entries from the OracleAS Metadata Repository on a separate machine. OID is up and I have provided the correct username and password, but the tool won't run. Has anyone encountered this before?
    Ref: 10.1.1 Installation Guide > Chapter H - Deinstallation & Reinstallation
    Can we manually remove the entries from the metadata repository, and how? I need to do this one way or the other before uninstalling; otherwise the subsequent IM install will be treated as an additional IM instance install and not a new one.
    Thanks in advance

    I don't know if this helps, but I've figured out something concerning the constant hard drive access in the IDE. It seems to be constantly updating a large number of .BTD, .BTX, and .BTB files in my user directory: \.jstudio\Ent81\var\cache\mdrstorage\org-netbeans\java\0.70\; as well as master.btd and master.btx in \.jstudio\Ent81\var\cache\mdrstorage\org-netbeans\java\. The files have long names that somewhat resemble registry keys, followed by what looks like a file name (example: f25937eeba6ba265b3364690828492863ee19a278dk16004jrelibrtjar.btd).
    The update process seems to be running constantly--the file Modify dates are repeatedly updated to the current time while the IDE is open and thrashing the disk.
    Does anyone know what this is or most importantly, how to stop it? I have to get the IDE back in working order on this particular machine, and reinstalling has failed to help.

  • Clustering 5 nodes with Weblogic 5.1.0 sp6

    When setting up a cluster of 5 nodes, I'm seeing that only 3 of the 5 nodes respond when accessing SessionServlet (from the WebLogic examples). When I make a request to the other 2 nodes in the cluster, I see the attached stack trace (obtained with kill -QUIT) on the JVM and the browser times out.
    I am accessing each node in the cluster directly using the URL http://<nodename>:7001/session.
    Each node in the cluster is started using the command below. Note that I am using native I/O (weblogic.system.nativeIO.enable=true), which starts 3 POSIX threads for socket polling. Does that number have anything to do with the 3 nodes that respond, or is it just a coincidence?
    Ravi.
    Command:
    /acs/util/SunOS/jdk1.2.2/bin/java
    -native
    -ms1024m
    -mx1024m
    -Djava.protocol.handler.pkgs=com.sun.net.ssl.internal.www.protocol
    -Xmaxjitcodesize33554432
    -Xbootclasspath/a:<weblogic bootclasspath>
    -Dweblogic.class.path=<weblogic classpath>
    -Djava.security.manager
    -Djava.security.policy=/opt/weblogic/weblogic.policy
    -Dweblogic.system.home=/opt/weblogic
    -Dasera.installhome=/acs
    -Dweblogic.cluster.enable=true
    -Dweblogic.httpd.clustering.enable=true
    -Dweblogic.httpd.session.persistence=true
    -Dweblogic.httpd.session.persistentStoreType=replicated
    -Dweblogic.cluster.multicastAddress=238.0.0.11
    com.asera.server.boot.AseraServer
    Stacktrace:
    "ExecuteThread-19" (TID:0xc58900, sys_thread_t:0xc58838, state:CW, thread_t: t@30, threadID:0xa8f81dd0, stack_bottom:0xa8f82000, stack_size:0x20000) prio=5
    [1] weblogic.rjvm.ResponseImpl.waitForData(ResponseImpl.java:43)
    [2] weblogic.rjvm.ResponseImpl.getThrowable(ResponseImpl.java:58)
    [3] weblogic.rmi.extensions.BasicResponse.getThrowable(BasicResponse.java:13)
    [4] weblogic.rmi.extensions.AbstractRequest.sendReceive(AbstractRequest.java:74)
    [5] weblogic.jndi.internal.RemoteContextFactoryImpl_WLStub.getContext(RemoteContextFactoryImpl_WLStub.java:95)
    [6] weblogic.jndi.WLInitialContextFactoryDelegate.newRemoteContext(WLInitialContextFactoryDelegate.java:316)
    [7] weblogic.jndi.WLInitialContextFactoryDelegate.newContext(WLInitialContextFactoryDelegate.java:224)
    [8] weblogic.jndi.WLInitialContextFactoryDelegate.getInitialContext(WLInitialContextFactoryDelegate.java:164)
    [9] weblogic.jndi.Environment.getContext(Environment.java:122)
    [10] weblogic.jndi.Environment.getInitialContext(Environment.java:104)
    [11] weblogic.cluster.replication.ReplicationManager.getRepMan(ReplicationManager.java:362)
    [12] weblogic.cluster.replication.ReplicationManager.createSecondary(ReplicationManager.java:406)
    [13] weblogic.cluster.replication.ReplicationManager.register(ReplicationManager.java:583)
    [14] weblogic.servlet.internal.session.ReplicatedSession.<init>(ReplicatedSession.java:107)
    [15] weblogic.servlet.internal.session.ReplicatedSessionContext.getNewSession(ReplicatedSessionContext.java:50)
    [16] weblogic.servlet.internal.ServletRequestImpl.getNewSession(ServletRequestImpl.java:1065)
    [17] weblogic.servlet.internal.ServletRequestImpl.getSession(ServletRequestImpl.java:967)
    [18] examples.servlets.SessionServlet.doGet(SessionServlet.java:51)
    [19] javax.servlet.http.HttpServlet.service(HttpServlet.java:740)
    [20] javax.servlet.http.HttpServlet.service(HttpServlet.java:865)
    [21] weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:105)
    [22] weblogic.servlet.internal.ServletContextImpl.invokeServlet(ServletContextImpl.java:742)
    [23] weblogic.servlet.internal.ServletContextImpl.invokeServlet(ServletContextImpl.java:686)
    [24] weblogic.servlet.internal.ServletContextManager.invokeServlet(ServletContextManager.java:247)
    [25] weblogic.socket.MuxableSocketHTTP.invokeServlet(MuxableSocketHTTP.java:361)
    [26] weblogic.socket.MuxableSocketHTTP.execute(MuxableSocketHTTP.java:261)
    [27] weblogic.kernel.ExecuteThread.run(ExecuteThread.java:96)
              

    Try SP10 with weblogic.system.servletThreadCount=<same value as executeThreadCount>
    Mike
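    If I read that right, in weblogic.properties the pair would look something like the lines below; 15 is just an illustrative value, the point being that the two counts match:

    weblogic.system.executeThreadCount=15
    weblogic.system.servletThreadCount=15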
              "Ravi Sharma" <[email protected]> wrote:
              >When setting up a cluster of 5 nodes, I'm seeing that only 3 of the 5
              >nodes
              >respond to when accessing SessionServlet (from weblogic examples. When
              >I
              >make a request to the other 2 nodes in the cluster, I see the attached
              >stacktrace (using kill -QUIT) on the JVM and the browser times out.
              >
              >I am accessing each node in the cluster directly using the URL
              >http://<nodename>:7001/session.
              >
              >Each node in the cluster is started using the command below. Note that
              >I am
              >using nativeIO (weblogic.system.nativeIO.enable=true), which starts 3
              >posix
              >threads for socket polling. Does that number have anything to do with
              >the 3
              >nodes that respond, or it just a coincidence?
              >
              >Ravi.
              >
              >
              >Command:
              >
              >/acs/util/SunOS/jdk1.2.2/bin/java
              >-native
              >-ms1024m
              >-mx1024m
              >-Djava.protocol.handler.pkgs=com.sun.net.ssl.internal.www.protocol
              >-Xmaxjitcodesize33554432
              >-Xbootclasspath/a:<weblogic bootclasspath>
              >-Dweblogic.class.path=<weblogic classpath>
              >-Djava.security.manager
              >-Djava.security.policy=/opt/weblogic/weblogic.policy
              >-Dweblogic.system.home=/opt/weblogic
              >-Dasera.installhome=/acs
              >-Dweblogic.cluster.enable=true
              >-Dweblogic.httpd.clustering.enable=true
              >-Dweblogic.httpd.session.persistence=true
              >-Dweblogic.httpd.session.persistentStoreType=replicated
              >-Dweblogic.cluster.multicastAddress=238.0.0.11
              >com.asera.server.boot.AseraServer
              >
              >Stacktrace:
              >
              >"ExecuteThread-19" (TID:0xc58900, sys_thread_t:0xc58838, state:CW, thread_t:
              >t@30, threadID:0xa8f81dd0, stack_bottom:0xa8f82000, stack_size:0x20000)
              >prio=5
              >
              >[1] weblogic.rjvm.ResponseImpl.waitForData(ResponseImpl.java:43)
              >[2] weblogic.rjvm.ResponseImpl.getThrowable(ResponseImpl.java:58)
              >[3]
              >weblogic.rmi.extensions.BasicResponse.getThrowable(BasicResponse.java:13)
              >[4]
              >weblogic.rmi.extensions.AbstractRequest.sendReceive(AbstractRequest.java:74)
              >[5]
              >weblogic.jndi.internal.RemoteContextFactoryImpl_WLStub.getContext(RemoteCont
              >extFactoryImpl_WLStub.java:95)
              >[6]
              >weblogic.jndi.WLInitialContextFactoryDelegate.newRemoteContext(WLInitialCont
              >extFactoryDelegate.java:316)
              >[7]
              >weblogic.jndi.WLInitialContextFactoryDelegate.newContext(WLInitialContextFac
              >toryDelegate.java:224)
              >[8]
              >weblogic.jndi.WLInitialContextFactoryDelegate.getInitialContext(WLInitialCon
              >textFactoryDelegate.java:164)
              >[9] weblogic.jndi.Environment.getContext(Environment.java:122)
              >[10] weblogic.jndi.Environment.getInitialContext(Environment.java:104)
              >[11]
              >weblogic.cluster.replication.ReplicationManager.getRepMan(ReplicationManager
              >..java:362)
              >[12]
              >weblogic.cluster.replication.ReplicationManager.createSecondary(ReplicationM
              >anager.java:406)
              >[13]
              >weblogic.cluster.replication.ReplicationManager.register(ReplicationManager.
              >java:583)
              >[14]
              >weblogic.servlet.internal.session.ReplicatedSession.<init>(ReplicatedSession
              >..java:107)
              >[15]
              >weblogic.servlet.internal.session.ReplicatedSessionContext.getNewSession(Rep
              >licatedSessionContext.java:50)
              >[16]
              >weblogic.servlet.internal.ServletRequestImpl.getNewSession(ServletRequestImp
              >l.java:1065)
              >[17]
              >weblogic.servlet.internal.ServletRequestImpl.getSession(ServletRequestImpl.j
              >ava:967)
              >[18] examples.servlets.SessionServlet.doGet(SessionServlet.java:51)
              >[19] javax.servlet.http.HttpServlet.service(HttpServlet.java:740)
              >[20] javax.servlet.http.HttpServlet.service(HttpServlet.java:865)
              >[21]
              >weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java
              >:105)
              >[22]
              >weblogic.servlet.internal.ServletContextImpl.invokeServlet(ServletContextImp
              >l.java:742)
              >[23]
              >weblogic.servlet.internal.ServletContextImpl.invokeServlet(ServletContextImp
              >l.java:686)
              >[24]
              >weblogic.servlet.internal.ServletContextManager.invokeServlet(ServletContext
              >Manager.java:247)
              >[25]
              >weblogic.socket.MuxableSocketHTTP.invokeServlet(MuxableSocketHTTP.java:361)
              >[26] weblogic.socket.MuxableSocketHTTP.execute(MuxableSocketHTTP.java:261)
              >[27] weblogic.kernel.ExecuteThread.run(ExecuteThread.java:96)
              >
              >
              

  • Migrate a Single node clustered SharePoint 2010 farm to an un-clustered single node SQL VM (VMware)

    Silly as it sounds, yes, we have a SQL2008r2 Clustered SharePoint farm with only one node. We did intend to have 2 nodes but due to costs and other projects taking priority it was left as is.
    So our current setup is an SQL cluster (1 node), 2 App servers (VM’s) and 2 Web servers (VM’s)
    We have decided to virtualise the Database server (to be SQL2008r2 un-clustered) to take advantage of VMware H/A etc.
    I've had a look around and seen the option to use SQL aliases, but I'm not sure that's the best option. I was thinking of rebuilding the DB server, but was wondering if there are any other options.
    Has anyone done this before? Any tips/advice to migrate or otherwise get the SQL DB virtualised would be greatly received.

    Hi, yes that's correct, but my query really is about the SharePoint side with regard to SharePoint and maybe using SQL aliases.
    My current setup is:
        shareclus = cluster (1 node – sharedb (SharePoint database server))
        shareapp1 = application server (LB)
        shareapp2 = application server (LB)
        shareweb1 = WFE (LB)
        shareweb2 = WFE (LB)
    and I would like to go to:
        sharedb01vm = SharePoint Database server
        shareapp1 = application server (LB)
        shareapp2 = application server (LB)
        shareweb1 = WFE (LB)
        shareweb2 = WFE (LB)
    So at the moment the database is referenced in CA as shareclus. If I break the cluster, shareclus will not exist so I don’t think I will be able to use aliases(?) but I’m not sure.
    Can anyone help?

  • Clustered WS7 nodes w/ distinct IPs

    Hello,
    I am attempting to configure my first clustered WS7 deployment, and I am running into a problem.
    Here's what I'm trying to do.
    I have two web sites (let's call them s1 & s2). They are set up as two different configurations in my WS cluster, and I am deploying them to two nodes (n1 & n2). On these two nodes there are three IPs configured: one for the physical machine and one for each of the two web sites.
    I have a load balancer appliance (LB), and it is accepting connections at a specified IP (one IP for s1 and a second IP for s2) and it forwards those requests to the web server instances in my cluster on the two separate nodes.
    Here's an example of my desired configuration:
    Node    e1000g0 IP     e1000g0:1 IP    e1000g0:2 IP
    n1      192.168.1.1    192.168.1.2     192.168.1.3
    n2      192.168.2.1    192.168.2.2     192.168.2.3
    So for site s1, the LB dispatches requests to either 192.168.1.2 or 192.168.2.2, and for site s2 it sends them to 192.168.1.3 or 192.168.2.3.
    The problem is, my configuration contains definitions for the listener on port 80 and I cannot use "*" as the IP since the machine has multiple IPs that should be configured for different virtual servers/listeners. If I try to put a specific IP on the listener, it starts up properly on the machine that has that virtual IP configured, but fails to start on the other one ("Error creating socket (Address not available)").
    So how can I configure my clustered server instances so that the same configuration can be deployed to two nodes on two distinct IPs?
    Thanks,
    Bill

    Well, I think I just found an answer, although I'm not sure it's the best answer.
    When configuring the listener, below the IP address text field it says "IP address, or * to listen on all IP addresses". Instead of putting the IP address, I put the hostname of my site (e.g. "s1"). I then set the IP for s1 in /etc/hosts on each of the machines to the actual value I want it bound to, and when the server instances start they bind to the desired IP (see the example entries below).
    The down-side of this solution is that those machines will then not know the "real" IP of s1 (since they will both think it is local), but I think I can live with that.
    Is there a better solution?
    Thanks,
    Bill
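    For reference, with the IP plan from the original post that workaround comes down to per-node hosts entries along these lines (hostnames and addresses taken from the table above):

    # /etc/hosts on n1
    192.168.1.2   s1
    192.168.1.3   s2

    # /etc/hosts on n2
    192.168.2.2   s1
    192.168.2.3   s2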

  • Corrected: Federated Portals with Clustered/Server Nodes

    We have two server nodes for the producer and two for the consumer.
    Each pairing is fronted by a message server.
    When I am setting up the FPN, what URL should be used for the producer and for the consumer?
    I first tried setting it up using the producer's message server URL for the producer and the consumer's message server URL for the consumer.
    But this did not work.
    When I set it up from the consumer node URL to the producer node URL, it works fine.
    But it seems like I would be in trouble if the producer node is down.
    Message was edited by:
            Eric Heilers

    Sorry, but maybe I am not asking my question the right way.
    I already am using the web dispatcher.
    What I am asking is:
    For my producer object, there is a property for the producer host and port and a property for the consumer host and port.
    I would think that I would use the host and port of the dispatchers, but I have not been able to get my consumer to see my producer if I use these values.

  • Home Directory still has old hdd space......not the new one

    I'm having a frustrating problem for which I can't seem to find a solution. I upgraded my internal hdd to a 320gb drive. When I click 'get info' on my mac drive, it says that the capacity is 319.73 gb and that there is 222.97gb available. When I do the same on my home directory it says that the size is 78.03gb........suspiciously the same size as my old hdd.
    How do I get my home directory to recognize the massive amount of space that is really available??
    I discovered this because my iTunes library was too large and it said there was not enough space remaining on the disk - this is the reason I upgraded my hdd.
    I truly hope that this is an easy fix! Thank you in advance!

    Hi g,
    I suspect the HD directory structure is corrupt. Try running Repair Disk.
    Repair Disk:
    1. Boot from the install disc (insert the disc, restart, and immediately hold down the C key until you see "Preparing Installation").
    2. At the first screen, select the language and click Continue.
    3. Click the Utilities menu in the menu bar and open Disk Utility.
    4. Select your HD in the panel on the left side and click Repair Disk at the bottom of the main window.
    Run this at least twice, and keep running it until it says "appears ok" twice in a row. If that doesn't happen, you may need a stronger utility such as DiskWarrior, or, if the directory is damaged beyond repair, you may need to reinstall the OS, or you may have a damaged HD (repair utilities can only repair the directory structure, not the HD itself).

  • MSFT Windows clustering for SAN visibility by BO FRS?

    Hi all,
    This is specifically for gurus who have worked on configuring a clustered BO environment where the FRS directories reside in a common SAN space. Do you have to have the Windows servers clustered (MSFT Windows clustering) so that the BO nodes residing on these Windows servers have no problems accessing, writing to and reading from a common SAN space? Does configuring a clustered FRS on a SAN require that the Windows servers on which the nodes reside be network clustered as well? An SAP tech support engineer suggested this, but I have not come across any such reference, hence the question. Any insights greatly appreciated.
    Thanks in Advance.

    Hi - one thing to note about SAN storage is that without MSFT clustering, I believe only one server at a time is allowed to access the SAN storage. Unless the first server is rebooted or disconnected from the network, the second server cannot access the same LUN. Do correct me if my understanding is wrong.

  • Nodes not clustering

    Having some issues with clustering between nodes. We have 4 nodes all of which are on the same subnet. When we start them up nodes 1 and 2 cluster and nodes 3 and 4 cluster. However they don't seem to cluster together as we'd like. We noticed the following in the logs on node 3:
    INFO | jvm 1 | 2007/11/05 14:09:42 | 2007-11-05 14:09:41.989 Oracle Coherence GE 3.3.1/389 <Error> (thread=Cluster, member=1): This senior Member(Id=1, Timestamp=2007-11-05 14:08:01.948, Address=128.11.28.60:8088, MachineId=38716, Location=process:Cacheserver) appears to have been disconnected from another senior Member(Id=1, Timestamp=2007-11-05 14:07:07.176, Address=128.11.28.61:8088, MachineId=38717, Location=process:Cacheserver); stopping cluster service.
    The other senior member is node 1. So it seems they are in contact with each other. Any suggestions?
    Richard

    Hi Jon,
    I'm quickly delving into the realms of GC tuning myself for the first time! Our current settings are:
    Using -server
    Using -Xms1024M and -Xmx1024m
    I haven't looked at using different collectors.
    I have been toying around with JMap to collect some stats about the heap and saw the following in one of our processes:
    Heap Configuration:
       MinHeapFreeRatio = 40
       MaxHeapFreeRatio = 70
       MaxHeapSize      = 1073741824 (1024.0MB)
       NewSize          = 655360 (0.625MB)
       MaxNewSize       = 4294901760 (4095.9375MB)
       OldSize          = 1441792 (1.375MB)
       NewRatio         = 8
       SurvivorRatio    = 8
       PermSize         = 16777216 (16.0MB)
       MaxPermSize      = 67108864 (64.0MB)
    Note the MaxNewSize being 4 GB even though our entire heap should only be 1 GB.
    By adding the following settings to our process:
    -XX:NewSize=512m -XX:MaxNewSize=512m
    We then see the following in JMap:
    Heap Configuration:
       MinHeapFreeRatio = 40
       MaxHeapFreeRatio = 70
       MaxHeapSize      = 1073741824 (1024.0MB)
       NewSize          = 536870912 (512.0MB)
       MaxNewSize       = 536870912 (512.0MB)
       OldSize          = 1441792 (1.375MB)
       NewRatio         = 8
       SurvivorRatio    = 8
       PermSize         = 16777216 (16.0MB)
       MaxPermSize      = 67108864 (64.0MB)
    Now note the MaxNewSize of 512 MB.
    What we were seeing in JConsole was that our tenured (old) generation was the main memory user, and it was only being GC'd once we'd got very close to our limit, sometimes within 15 MB of 1024 MB. I'm wondering whether with the old settings the tenured/old gen allocation thinks it has 4 GB, and hence why it fills up so close to the limit, at which point something kicks in and causes it to GC.
    By using the new settings the tenured space only ever fills up to half the total heap size and we seem to get much smoother GC.
    Again, this is very much a new subject area to me and the above may turn out to be nothing, but these are the settings we've tested with (see the consolidated launch line below). I completely agree with your comment about adjusting the Coherence timeouts etc., but I just wanted to know for future reference.
    Does any of this make any sense?!
    Regards
    Richard
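    For reference, the combination Richard describes would put the following on the cache server launch line. The 1 GB heap and 512 MB new-generation sizes are the values from his post; the classpath and the stock DefaultCacheServer main class are assumptions about how the process is started:

    java -server -Xms1024m -Xmx1024m -XX:NewSize=512m -XX:MaxNewSize=512m \
         -cp coherence.jar:your-app.jar com.tangosol.net.DefaultCacheServer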

  • SAP Adapter Best Practice Question for Deployment to Clustered Environment

    I have a best practices question on the iway Adapters around deployment into a clustered environment.
    According to the documentation, you are supposed to run the installer on both nodes in the cluster but configure on just the first node. See below:
    Install Oracle Application Adapters 11g Release 1 (11.1.1.3.0) on both machines.
    Configure a J2CA configuration as a database repository on the first machine.
    Perform the required changes to the ra.xml and weblogic-ra.xml files before deployment.
    This makes sense to me, because once you deploy the adapter RAR in the next step, the appropriate RAR will get staged and deployed on both nodes in the cluster.
    What is the best practice for the 3rdParty adapter directory on the second node? The installer lays it down along with the adapter RAR and everything else. Since we only configure the adapter on node 1, the directory on node 2 will remain with the default installation files/values rather than the configured ones. Is it best practice to copy node 1's 3rdParty directory to node 2 once configured? If we leave node 2 with the default files/values, I suspect this will lead to confusion for someone troubleshooting later on, because it will appear it was never configured correctly.
    What do folks typically do in this situation? Obviously everything works if we leave it as is, but it seems strange to have the two nodes differ (a directory-sync sketch is included at the end of this thread).

    What is the version of the operating system? If you are on any OS version lower than Windows 2012, then you need to add one more voter for quorum.
    Balmukund Lakhani
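    If you do decide to keep node 2's 3rdParty directory in line with the configured copy on node 1, a small sketch like the one below is usually all it takes; the paths are hypothetical placeholders (substitute your real adapter install locations), and it keeps the untouched default directory around as a backup:

    import os
    import shutil

    SRC = r"\\node1\e$\Oracle\Middleware\adapters\application\config\3rdParty"  # configured directory on node 1 (hypothetical path)
    DST = r"E:\Oracle\Middleware\adapters\application\config\3rdParty"          # local directory on node 2 (hypothetical path)

    backup = DST + ".default"
    if os.path.isdir(DST) and not os.path.exists(backup):
        os.rename(DST, backup)      # keep the default install files for reference
    shutil.copytree(SRC, DST)       # copy the configured directory from node 1
    print(f"Copied {SRC} -> {DST}")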
