HFM Cluster Question

We are having some issues with cluster configuration. We have two servers, A and B. Because the HFM server does not follow a round-robin algorithm, every time a user logs on he is assigned a fixed server. We want to know how to overcome this: we don't want to assign a specific server to each user, but rather have the server assigned by job sequence. Can anyone help me out?
Thanks in advance.

We use the F5 load balancer for this. Not sure if it's the best brand of load balancer, but our IT group seems to approve of its performance.
http://www.f5.com/glossary/load-balancing.html
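Since HFM itself does not rotate users across servers, the round-robin behaviour has to come from an external balancer such as F5. As a rough, HFM-independent sketch (the server names are placeholders, not real HFM APIs), the core of round-robin distribution is just an index cycling over the server list:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

/** Minimal round-robin selector: the essence of what a load balancer adds. */
public class RoundRobin {
    private final List<String> servers;
    private final AtomicInteger next = new AtomicInteger(0);

    public RoundRobin(List<String> servers) {
        this.servers = servers;
    }

    /** Returns the next server, cycling through the list in order. */
    public String pick() {
        int i = Math.floorMod(next.getAndIncrement(), servers.size());
        return servers.get(i);
    }

    public static void main(String[] args) {
        RoundRobin rr = new RoundRobin(List.of("ServerA", "ServerB"));
        for (int k = 0; k < 4; k++) {
            System.out.println(rr.pick()); // alternates ServerA, ServerB
        }
    }
}
```

Each new logon would be handed to `pick()`'s result instead of a fixed server; the balancer does this at the network level so no HFM configuration change is needed.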

Similar Messages

  • JMS/Queue cluster question

              Hi
               I have some very basic cluster questions on JMS Queues. Let's say Q1> I have 3 WLS
               in a cluster. I create the queue in only WLS#1 - then all the other WLS (#2 and #3)
               should have a stub in their JNDI tree for the Queue which points to the Queue in
               #1 - right? Basically what I am trying to achieve is to have the queue in one server
               and all the other servers have a pointer to it - I believe this is possible in a WLS
               cluster - right??
               Q2> Is there any way a client to the queue running on a WLS can tell whether the
               Queue handle it's using is local (i.e. in the same server) or remote? Is the API createQueue(./queuename)
               going to help here??
               Q3> Is there any way to create a Queue dynamically - I guess JMX is the answer - right?
               But I will take this question a bit further - let's say the answer to Q1 is yes. In this
               case if server #1 crashes - then #2 and #3 have no Queues. So if they try to create
               a replica of the Queue (as on server #1) - pointing to the same filestore - can they
               do it?? - I want only one of them to succeed in creating the Queue and also the Queue
               should have all the data of the #1 Queue (a 1-to-1 replica).
               All I want is the concept of a primary and secondary queue in a cluster. Go on using
               the primary queue - but if it fails use the secondary queue. Kind of like the HttpSession replication
               concept in clusters. My cluster's purpose is more failover than load balancing.
              TIA
              Anamitra
              

              Anamitra wrote:
              > Hi Tom
               > 7.0 is definitely an option for me. So let's take the scenario of a JMS cluster
               > on 7.0.
               >
               > I do not understand what you mean by an HA framework?
              An HA framework is a third party product that can be used to automatically restart a failed server
              (perhaps on a new machine), and that will guarantee that the same server isn't started in two
               different places (that would be bad). There are a few of these HA products; "Veritas" is one of
              them. Note that if you are using JMS file stores or transactions, both of which depend on the disk,
              you must make sure that the files are available on the new machine. One approach to this is to use
              what is known as a "dual-ported" disk.
               > If I am using a cluster of 3 WLS
               > 7.0 servers - as you have said, I can create a distributed Queue with a forward delay attribute
               > set to 0 if I have the consumer in only one server, say server #1.
               > But still, if server #1 goes down, you say that the Queues in server #2 and server
               > #3 will not have access to the messages which were stuck in the server #1 Queue when
               > it went down - right?
              Right, but is there a point in forwarding the messages to your consumer's destination if your
              application is down?
              If your application can tolerate it, you may wish to consider allowing multiple instances of it (one
              per physical destination). That way if something goes down, only those messages are out-of-business
              until the application comes back up...
              >
              >
              > Why cant the other servers see them - they all point to the same store right??
              > thanks
              > Anamitra
              >
               Again, multiple JMS servers cannot share a store. Nor can multiple stores share a file. That will
              cause corruption. Multiple stores CAN share a database, but can't use the same tables in the
              database.
              Tom
              >
              > Tom Barnes <[email protected]> wrote:
              > >
              > >
              > >Anamitra wrote:
              > >
              > >> Hi
              > >> I have some very basic cluster questions on JMS Queues. Lets say Q1>I
              > >have 3 WLS
              > >> in cluster. I create the queue in only WLS#1 - then all the other WLS
              > >(#2 and #3)
              > >> should have a stub in their JNDI tree for the Queue which points to the
              > >Queue in
              > >> #1 - right?
              > >
               > >It's not a stub. But essentially right.
              > >
               > >> Basically what I am trying to achieve is to have the queue in one server
               > >> and all the other servers have a pointer to it - I believe this is possible
              > >in WLS
              > >> cluster - right??
              > >
              > >Certainly.
              > >
              > >>
              > >> Q2> Is there any way a client to the queue running on a WLS can tell whether
              > >the
              > >> Queue handle its using is local (ie in the same server) or remote. Is
              > >the API createQueue(./queuename)
              > >> going to help here??
              > >
              > >That would do it. This returns the queue on the CF side of the established
              > >Connection.
              > >
              > >>
              > >> Q3>Is there any way to create a Queue dynamically - I guess JMX is the
              > >answer -right?
              > >> But I will take this question a bit further - lets say Q1 answer is yes.
              > >In this
              > >> case if server #1 crashes - then #2 and #3 have no Queues. So if they
              > >try to create
              > >> a replica of the Queue (as on server#1) - pointing to the same filestore
              > >- can they
              > >> do it??
               > >> - I want only one of them to succeed in creating the Queue and also the
              > >Queue
              > >> should have all the data of the #1 Queue (1 to 1 replica).
              > >
              > >No. Not possible. Corruption city.
              > >Only one server may safely access a store at a time.
               > >If you have an HA framework that can ensure this atomicity, fine; or if you are
               > >willing
               > >to ensure this manually, then fine.
              > >
              > >>
              > >>
              > >> All I want is the concept of primary and secondary queue in a cluster.
              > >Go on using
              > >> the primary queue - but if it fails use the 2ndry queue. Kind of HttpSession
              > >replication
              > >> concept in clusters. My cluster purpose is more for failover rather than
              > >loadbalancing.
              > >
              > >If you use 7.0 you could use a distributed destination, with a high weight
              > >on the destination
              > >you want used most. Optionally, 7.0 will automatically forward messages
              > >from distr. dest
              > >members that have no consumers to those that do.
              > >
               > >In 6.1 you can emulate a distributed destination this way (from an upcoming
               > >white-paper):
               > >Approximating Distributed Queues in 6.1
               > >
               > >If you wish to distribute the destination across several servers in a cluster,
               > >use the distributed destination features built into WL 7.0. If 7.0 is not
               > >an option, you can still approximate a simple distributed destination when
               > >running JMS servers in a "single-tier" configuration. Single-tier indicates
               > >that there is a local JMS server on each server that a connection factory
               > >is targeted at. Here is a typical scenario, where producers randomly pick
               > >which server and consequently which part of the distributed destination to
               > >produce to, while consumers in the form of MDBs are pinned to a particular
               > >destination and are replicated homogeneously to all destinations:
               > >
               > >· Create JMS servers on multiple servers in the cluster. The servers will
               > >collectively host the distributed queue "A". Remember, the JMS servers (and
               > >WL servers) must be named differently.
               > >
               > >· Configure a queue on each JMS server. These become the physical destinations
               > >that collectively become the distributed destination. Each destination should
               > >have the same name "A".
               > >
               > >· Configure each queue to have the same JNDI name "JNDI_A", and also take
               > >care to set the destination's "JNDINameReplicated" parameter to false. The
               > >"JNDINameReplicated" parameter is available in 7.0, 6.1SP3 or later, or
               > >6.1SP2 with patch CR061106.
               > >
               > >· Create a connection factory, and target it at all servers that have a
               > >JMS server with "A".
               > >
               > >· Target the same MDB pool at each server that has a JMS server with destination
               > >"A", and configure its destination to be "JNDI_A". Do not specify a connection
               > >factory URL when configuring the MDB, as it can use the server's default
               > >JNDI context that already contains the destination.
               > >
               > >· Producers look up the connection factory, create a connection, then a
               > >session as usual. Then producers look up the destination by calling
               > >javax.jms.QueueSession.createQueue(String). The parameter to createQueue
               > >requires a special syntax: "./<queue name>", so "./A" works in this example.
               > >This will return a physical destination of the distributed destination that
               > >is local to the producer's connection. This syntax is available on 7.0,
               > >6.1SP3 or later, and 6.1SP2 with patch CR072612.
               > >
               > >This design pattern allows for high availability: if one server goes down,
               > >the distributed destination is still available and only the messages on that
               > >one server become unavailable. It also allows for high scalability, as
               > >speedup is directly proportional to the number of servers on which the
               > >distributed destination is deployed.
              > >
              > >
              > >
              > >>
              > >> TIA
              > >> Anamitra
              > >
              > >
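Tom's rule above - only one server may safely access a store at a time, and an HA framework must guarantee that atomicity before restarting the server elsewhere - is, in spirit, an exclusive-lock problem. A hedged, WebLogic-independent sketch (the store file name is a placeholder, not a real WLS artifact) of how a process can claim single ownership of a store using an OS-level file lock:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

/**
 * Sketch: guarantee that at most one process "owns" a store file,
 * which is the invariant an HA framework must enforce before it
 * restarts a JMS server on another machine. The path is a placeholder.
 */
public class StoreOwnership {

    /** Try to take exclusive ownership; returns null if another process holds it. */
    static FileLock tryOwn(Path store) throws IOException {
        FileChannel ch = FileChannel.open(store,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE);
        FileLock lock = ch.tryLock(); // non-blocking; null if locked elsewhere
        if (lock == null) {
            ch.close(); // give the channel back if we failed to acquire
        }
        return lock;
    }

    public static void main(String[] args) throws IOException {
        FileLock lock = tryOwn(Path.of("queue-store.lock"));
        if (lock != null) {
            System.out.println("this process owns the store");
        } else {
            System.out.println("another server already owns the store");
        }
    }
}
```

Note this only works when the lock file lives on storage both machines can see (e.g. the dual-ported disk mentioned above), which is exactly why the store files must be available on the failover machine.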
              

  • User information on HFM cluster/server configuration

    Hi Folks,
    How do I find which HFM cluster/server is accessed by an end user (who uses Workspace or SmartView)? There are many end users, of whom some face low performance while for others it works fine. We use two application servers under a cluster.
    For example:
    If an end user logs in through Workspace or SmartView, I need to know the particular application server (under the cluster) that responds to him.
    Thanks & Regards,
    Prem

    Folks,
    The user accessing the HFM server/cluster can be found in Hyperion workspace itself under the option "Consolidation user on system".
    Regards,
    prem.

  • HFM cluster error

    Hello Experts,
    I would like to describe a security issue we are getting.
    We already have a Production environment in our network, and in HFM it contains the cluster name *"HFM"*. We have introduced another environment, called Production1, into the same network, and it also uses the same cluster name, *"HFM"*. The problem is that when we try to access the Production environment we get a security issue, and the application looks to the Production1 cluster. In the Production environment we set up security; in Production1 security is not set up.
    My questions are:
    1. Do different environments in the same network cause problems?
    2. The installation manuals do not mention anywhere that we need to give different cluster names to different environments in the same network. Then why are we getting the security issue?
    3. Why does the cluster look at another environment when we are accessing the first one?
    4. Is this a security issue?
    Please advise on the above queries. Your help is most appreciated.
    Thank you in advance....
    Anil


  • Front-end/back-end cluster question

              

    Patrick Power wrote:
              > Thanx for your reply Prasad. I was surprised none of the Bea engineers
              > wished to touch this one. What do you suppose is up with that? Either
              > they are too busy, or possibly my question is too dumb.
              >
               I am from BEA, so it's not that we are not responding ;).
              >
              > Back to the issue: Yes, we will NES bridge/proxy into servlet front-end
              > cluster, potentially with Directors on the very front of the topology for
              > balancing. Your diagram as such:
              >
              > <Netscape/IIS/Apache/WLS FRONT END> ----- <CLUSTER OF WEBLOGIC SERVER
              > > SERVING SERVLETS> --- <CLUSTER OF WEBLOGIC SERVERS SERVING EJB>
              >
              > 1) Does <Netscape/IIS/Apache/WLS FRONT END> mean NES with proxy shared lib,
              > with a WLS service definition into cluster in obj.conf? I assume yes.
              Yes.
              >
              > 2) I would assume that <CLUSTER OF WEBLOGIC SERVERS SERVING SERVLETS> would
              > need the WLS HttpClusterServlet to the <CLUSTER OF WEBLOGIC SERVERS SERVING
              > EJB> all the way in the back.
              No. I was splitting presentation logic (namely servlets and jsp) and business
              logic (ejb) into two layers. Again you don't have to split it into two. You can
              colocate them both together. You could use NES or IIS or Apache or WLS. You
              don't need HttpClusterServlet.
               Let's get this straight.
               1. You need our proxy plugin for failover and to load-balance the requests that
               are going to the presentation logic.
               2. From the presentation-logic layer, when you talk to backend business-logic
               providers (like an EJB cluster), if you use stateless session beans we provide
               failover and load balancing. In the future we will support clustered stateful
               session beans as well. Therefore you don't need a load balancer here.
               3. HttpClusterServlet should run only in front of the presentation-logic cluster,
               and it supports HTTP only.
              Hope this helps.
              - Prasad
              > The NES proxy would only proxy into the f/e
              > cluster, right? You're not suggesting an external proxy of some type, are
              > you? The HttpClusterServlet is for WLS cluster-to-cluster proxies.
              > 3) A load balancer between the wls f/e and wls b/e clusters? That doesn't
              > seem applicable here. Once again, it should be HttpClusterServlet for WLS
              > cluster-to-cluster proxies.
              > 4) "use two or three proxy servers to avoid single point of failure."
              > Hmmm, once again - are we talking the WLS HttpClusterServlet proxy? Well,
               > that's the initial question: Can I have more than one HttpClusterServlet
              > proxy in the front-end cluster, proxying to the back-end cluster?
              > Otherwise, internally from this WLS architecture perspective, it is a single
              > point of failure.
              >
              > An example: 10 instances in f/e cluster. can more than one of these
              > instances have the WLS HttpClusterServlet proxy to the b/e cluster? Or, are
              > there instances of WLS HttpClusterServlet proxy in all 10 f/e cluster
              > instances?
              >
              > Cheers, Pat
              >
              > Prasad Peddada <[email protected]> wrote in message
              > news:[email protected]...
              > >
              > >
              > > Patrick Power wrote:
              > >
              > > > I know that this topic was addressed to some degree here in an earlier
              > > > posting, but I still have a question regarding the architecture
              > > > design:
              > > >
              > > > If configuring a front-end cluster for servlets/sessions and a
              > > > back-end cluster for remote services -- you route requests to the
              > > > back-end using the WLS proxy servlet. ok, got that part.
              > >
              > > Not quite. The typical scenario is
              > >
              > > <Netscape/IIS/Apache/WLS FRONT END> ----- <CLUSTER OF WEBLOGIC SERVER
              > > SERVING SERVLETS> --- <CLUSTER OF WEBLOGIC SERVERS SERVING EJB>
              > >
              > > You don't proxy and serve servlets from the same server.
              > >
              > > >
              > > > The question: Is there a single instance of the wls proxy servlet in
              > > > the front-end cluster? Or, is it on every instance in the front-end
              > > > cluster? What is the failover mechanism, in the case of a single
              > > > instance of proxy servlet in the f-e cluster failing?
              > >
              > > To prevent that you need to use some kind of h/w or software load
              > > balancer and then use two or three proxy servers to avoid single point
              > > of failure.
              > >
              > > > Is it a single point of failure between the 2 clusters?
              > > >
              > > > Thanx in advance for your help.
              > > >
              > > > BTW, I think Wei, Kumar and the other Bea folks cruising this group
              > > > have been doing a bang-up job of providing badly-needed detail on this
               > > > subject area - material that is largely absent from the documentation.
              > > > Good job.
              > > >
              > > >
              > >
              > > --
              > > Cheers
              > >
              > > - Prasad
              > >
              > >
              

  • Windows 2008 Cluster question on using a new cluster drive source from shrinking existing disk

    I have a two-node Windows 2008 R2 Enterprise SP1 cluster. It has a basic cluster setup of one quorum disk (Q:) and a data disk (E:) which is 2.7 TB in size. This cluster is connected to a shared Dell disk array.
    My question is: can I safely shrink the 2.7 TB drive down and carve out a 500 GB disk from the same disk to use as a new cluster disk resource? We want to install GlobalSCAPE SFTP software on this new disk for use as a cluster resource.
    Will this work without crashing the cluster?
    Thanks,
    Gonzolean

    Hi ,
    Thank you for posting your issue in the forum.
    I am trying to involve someone familiar with this topic to further look at this issue. There might be some time delay. Appreciate your patience.
    Thank you for your understanding and support.
    Best Regards,
    Andy Qi
    TechNet Community Support

  • HFM Cluster configuration is failing in distributed environment

    Hi Gurus,
    I am configuring HFM 11.1.2 on Windows 2008 Server with SQL Server 2005. This is a distributed installation.
    Server 1:
    Financial Management Server
    Server 2:
    Financial Management Web Services Web Application
    Financial Management Web Services IIS Web Application
    Financial Management Smart View IIS Web Application
    Financial Management IIS Web Application
    Financial Management LCM IIS Web Application
    Server 3: SQL Server 2005
    On Server 1 I am able to configure HFM, but when I try the cluster configuration from Server 2 it fails with the error: "(Tue Sep 14 05:43:13 2010) Failed to register cluster: Access is denied. Line:3809 File: .\CEPMWindowsConfig.cpp Error code:0x80070005 Error: Access is denied.
    (Tue Sep 14 05:43:13 2010) Failed to register cluster containing server adc6140810: IDispatch error #1553"
    Can you help me resolve this issue?
    Thanks
    Krishna

    Hello Krishna,
    Any luck fixing this issue? I am experiencing the same issue. Please advise, thank you.

  • Sun Cluster question

    Hello everyone
    I've inherited an Oracle Solaris system holding ASE Sybase databases. The system consists of two nodes inside a Sun Cluster. Each of the nodes is hosting 2 Sybase database instances, where one of the nodes is active and other is standing by. The scenario at hand is that when any of the databases on one node fails for whatever reason, the whole system gets shifted to the second node to keep the environment going. That works fine.
    My intended scenario:
    Each node is holding 2 database instances, both nodes ARE working at the same time so that each one is serving one instance of the database. In the event of failure on one node, the other one should assume the role of BOTH database instances till the first one gets fixed.
    The question is: is that possible? and if it is, does that require breaking the whole cluster and rebuilding it? or can this be done online without bringing down the system?
    Thanks a lot in advance

    What you propose will not work either. E.g. there is no logic implemented to fence the underlying zpool from one node to the other in such a configuration.
    Also, the current SUNW.HAStoragePlus(5) manpage documents:
            Note -  SUNW.HAStoragePlus does not support file systems
                    created on ZFS volumes.
                    You cannot use SUNW.HAStoragePlus to manage a ZFS
                    storage pool that contains a file system for which
                    the ZFS mountpoint property is set to legacy or
                    none. [...]
    Greets
    Thorsten

  • Cluster Question

              If I have a cluster which contains 2 nodes (e.g 192.168.0.1 and 192.168.0.2). In the
              admin console, I need to provide a cluster address which I might put 192.168.0.1
              and 192.168.0.2.
              In this case, I might need to bind those ip addresses in a single DNS and put it
              as the cluster address.
               How can I bind those IP addresses? Do I need a DNS server to do that?
               Another question: when the WLS cluster receives a request, where is the first point
               which is responsible for passing the requests on to the nodes? Is that a Java class
               or something else?
              

              Hello Ramy,
              If I have a proxy, then I might also need a cluster for the proxy server
              itself. Does that mean I need a local director in front of the proxy cluster?
              Thanks,
              Friend
              "Ramy Saad" <[email protected]> wrote:
              >
              >Hello Friend,
              >
              >I think you need a proxy-server (for example with load balancing) which
              >can handel
              >a cluster. In your application you can use the IP-Address of the proxy-server
              >and
              >the proxy decides to which WLS the connection will be established. I think
              >a plug-in
              >for the appache server is shipped with the bea software...
              >
              >Regards,
              >Ramy.
              >
              >"Friend" <[email protected]> wrote:
              >>
              >>If I have a cluster which contains 2 nodes (e.g 192.168.0.1 and 192.168.0.2).
              >>In the
              >>admin console, I need to provide a cluster address which I might put 192.168.0.1
              >>and 192.168.0.2.
              >>In this case, I might need to bind those ip addresses in a single DNS and
              >>put it
              >>as the cluster address.
              >>How can I bind those ip addresses ? Do I need a DNS server to do that ?
              >>another question is, when the WLS cluster receive a request, where is the
              >>first point
              >>which is responsible for passing the requests into the nodes ? Is that
              >a
              >>java class
              >>or ?
              >
              
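              The "first point" asked about above is typically the proxy plug-in (or a hardware load balancer) sitting in front of the cluster: it holds the list of member addresses and rotates new requests across them. A minimal Python sketch of that round-robin idea (the class and addresses are invented for illustration; this is not WebLogic code):

```python
from itertools import cycle

class RoundRobinProxy:
    """Toy model of a load-balancing front end: hand each new
    request to the next cluster node in a fixed rotation."""

    def __init__(self, nodes):
        self._rotation = cycle(nodes)

    def pick_node(self):
        # Called once per incoming request.
        return next(self._rotation)

proxy = RoundRobinProxy(["192.168.0.1", "192.168.0.2"])
print([proxy.pick_node() for _ in range(4)])
# → ['192.168.0.1', '192.168.0.2', '192.168.0.1', '192.168.0.2']
```

              A real plug-in adds session stickiness on top of this: once a session is pinned to a node, later requests for that session bypass the rotation.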

  • Newbie cluster question: where does the admin server live?

    Hello, I'm looking at clustering for the application where I work, so I'm reading through the cluster-related documentation for 11g. I have an architecture question: where should the admin server be started? Hypothetically speaking, if you have two nodes in your cluster, would the AdminServer run on a 3rd box that sits in front of the two nodes?
    Thanks much -

    The ideal situation is for your admin server to be separate from the machines hosting your managed servers. This lets you control access to the admin server, avoid the performance impact of running the admin and a managed server on the same host, limit the impact if your #1 host fails, etc.
    But companies may be unwilling to invest in a distinct (3rd) host just for the admin server, especially if you have multiple environments (prod, testing, dev, etc.).
    So usually the admin server winds up sharing a host with a managed server.

  • Sample HFM Certification questions

    Does anyone know where one can get their hands on some sample questions for becoming HFM Certified?

    Hi,
    In this site you have some sample questions:
    http://www.oracle.com/global/us/education/certification/sample_questions/exam_1z0_271.html
    Regards,
    Marcin

  • MDB/Topic/WLS cluster question

              Hi
              I was going through some WLS 8.1 docs on JMS and had a question about Topics and WLS
              in a cluster config. Say I have 3 servers, with server#1 hosting the Topic
              [not a distributed destination]. I have an EAR file containing an MDB with
              no pool size limit. After deploying the EAR in the cluster - let's say that each
              server in the cluster has 5 instances of the MDB [just an example] and a message
              is published on the Topic.
              Q1> Will all 3 servers get a [one and only one] copy of that message? [my guess
              is yes]
              Q2> Only 1 instance [out of 5] of the MDB per server will get the message - right?
              Q3> If I had a separate deployment of the same MDB class in the EAR file for
              the same Topic, would that get treated as a completely separate subscriber,
              independent of the first MDB, even though the implementing class is the same?
              thanks
              Anamitra
              

              Anamitra wrote:
              > Hi
              > I was going through some WLS 8.1 docs on JMS and had a question abt Topics & WLS
              > in cluster config where say I have 3 servers with say server#1 hosting the Topic
              > [not a distributed destination]. I have an an ear file containing an MDB with
              > no pool size limit. After deploying the ear in the cluster - lets say that each
              > server on the cluster has 5 instances of the MDB [just an example] and a message
              > is published on the Topic.
              >
              > Q1>Will all the 3 servers get a [one and only one] copy of that message? [my guess
              > is yes]
              Yes.
              > Q2>Only 1 instance [out of 5] of the MDB/per server will get the message - right?
              Yes.
              > Q3> Had I had a separate deployment of the same MDB class in the EAR file for
              > the same Topic - thats just going to get treated as a completely separate subscriber
              > independent of the first MDB though the implementing class is the same - right?
              Yes.
              >
              > thanks
              > Anamitra
              >
              For a little more information, I'm attaching notes on durable
              subscriber MDBs.
              A JMS durable subscription is uniquely identified within a cluster by a combination of "connection-id" and "subscription-id". Only one active connection may use a particular "connection-id" within a WebLogic cluster.
              In WebLogic 8.1 and earlier, a durable topic subscriber MDB uses its name to generate its client-id. Since JMS enforces uniqueness on this client-id, a durable subscriber MDB deployed to multiple servers means only one server will be able to connect. Some applications want a different behavior, where
              each MDB pool on each server gets its own durable subscription.
              The MDB connection id, which is unique within a cluster, comes from:
              1) The "ClientId" attribute configured on the WebLogic connection factory.
              This defaults to null. Note that if the ClientId is set on a connection
              factory, only one connection created by the factory
              may be active at a time.
              2) If (1) is not set, then, as with the subscriber-id,
              the connection-id is derived from jms-client-id descriptor attribute:
              <jms-client-id>MyClientID</jms-client-id>
              (the weblogic dtd)
              3) If (1) and (2) are not set, then, as with the subscriber-id,
              the connection-id is derived from the ejb name.
              The MDB durable subscription id, which must be unique on its topic, comes from:
              1) <jms-client-id>MyClientID</jms-client-id>
              (the weblogic dtd)
              2) if (1) is not set then the client-id
              comes from the ejb name.
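              The fallback chains above can be sketched as a tiny helper (a hypothetical illustration of the precedence rules only; the function name and arguments are not a real WebLogic API):

```python
def derive_mdb_ids(cf_client_id=None, jms_client_id=None, ejb_name=None):
    """Apply the precedence rules described above for a durable
    topic subscriber MDB's connection-id and subscription-id."""
    # connection-id: CF "ClientId", else jms-client-id, else the EJB name
    connection_id = cf_client_id or jms_client_id or ejb_name
    # subscription-id: jms-client-id, else the EJB name (CF ClientId unused)
    subscription_id = jms_client_id or ejb_name
    return connection_id, subscription_id

print(derive_mdb_ids(jms_client_id="MyClientID", ejb_name="exampleBean"))
# → ('MyClientID', 'MyClientID')
```

              With neither a connection factory ClientId nor a jms-client-id, both ids fall back to the EJB name, which is exactly why deploying the same MDB to several servers collides.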
              The above prevents a durable topic subscriber MDB from running on multiple servers. When an instance of the MDB starts on another server, it deploys successfully, but a conflict is detected and the MDB fails to fully connect to JMS. The work-around is the following:
              A) Create a custom connection-factory for each server:
              1) configure "JNDIName" to the same value across all servers
              ("myMDBCF" in this example)
              2) configure "ClientId" to a unique value per server
              3) enable "UserTransactionsEnabled"
              4) enable "XAConnectionFactoryEnabled"
              5) set "AcknowledgePolicy" to "ACKNOWLEDGE_PREVIOUS"
              6) target the CF at a single WebLogic server
              (Number 5 is required for non-transactional topic MDBs)
              B) In the MDB's weblogic-ejb-jar.xml descriptor, set the MDB's connection
              factory to the JNDI name of the custom connection factories configured in
              (A). Optionally, also specify the subscriber-id via the jms-client-id
              attribute.
              <weblogic-ejb-jar>
                <weblogic-enterprise-bean>
                  <ejb-name>exampleBean</ejb-name>
                  <message-driven-descriptor>
                    <connection-factory-jndi-name>myMDBCF</connection-factory-jndi-name>
                    <jms-client-id>myClientID</jms-client-id>
                  </message-driven-descriptor>
                </weblogic-enterprise-bean>
              </weblogic-ejb-jar>
              C) Target the application at the same servers that have the custom connection
              factories targeted at them.
              Notes/Limitations:
              1) If the MDB is moved from one server to another, the MDB's corresponding
              connection-factory must be moved with it.
              2) This work-around will not work if the destination is not in the same
              cluster as the MDB. (The MDB can not use the local connection factory, which
              contains the connection-id, as connection factories do not work unless they
              are in the same cluster as the destination.)
              3) This work-around will not work for non-WebLogic JMS topics.
              4) A copy of each message is sent to each server's MDB pool.
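              Note 4 is just topic semantics: with one durable subscription per server, each published message fans out as one copy per subscription, and within each server's pool a single MDB instance processes that copy. A toy Python model of the fan-out (class and names invented for illustration):

```python
class Topic:
    """Toy pub/sub topic: every subscription receives its own copy."""

    def __init__(self):
        self.subscriptions = {}   # subscription-id -> delivered messages

    def subscribe(self, sub_id):
        self.subscriptions.setdefault(sub_id, [])

    def publish(self, msg):
        # One copy of the message per subscription, as with the
        # per-server durable subscriptions in the work-around above.
        for inbox in self.subscriptions.values():
            inbox.append(msg)

topic = Topic()
for server in ("server1", "server2", "server3"):
    topic.subscribe(f"myClientID-{server}")

topic.publish("price-update")
print({sub: len(msgs) for sub, msgs in topic.subscriptions.items()})
# → {'myClientID-server1': 1, 'myClientID-server2': 1, 'myClientID-server3': 1}
```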
              

  • Cluster (10-cent question)

    Hello everyone,
    The context:
    A "relatively" complex project with multiple VIs and sub-VIs.
    One cluster "traverses" most of these VIs and sub-VIs ... a bit like a "data bus".
    I am in the development and fine-tuning phase ... so I regularly modify the contents of this "cluster_data_bus" (it is a typedef, of course).
    This cluster contains x elements ... including the element "toto".
    Question:
    Is the element "toto" still useful?
    In other words ... is the element "toto" still used somewhere? (or does it no longer serve any purpose)
    My solution (rather crude) ...
    remove the element "toto" from the cluster and "see" whether the code becomes broken anywhere.
    Is there a "nicer solution" to this kind of situation?
    Thanks
    Solved!
    Go to Solution.

    A big cluster ...
    It is not really a "huge" cluster.
    What I am saying is simply the result of observation.
    Yes, each sub-VI does not need the whole thing.
    But "splitting" it according to need ... increases the number of clusters to pass around and increases (a bit everywhere) the number of shift registers.
    I tried "both" ... initially I had 4 separate clusters ... plus certain parameters that I passed "independently".
    I grouped everything into one single cluster ... and I use the data, as needed, with unbundle_by_name.
    The result is certainly not "slower" ... (even a little the opposite)
    and the graphical readability is much greater.
    Moreover, I have "everything" at hand in one single cluster ... no more wondering "what" is "where".
    That said ... once again ... it is not really an "enormous" cluster.
    Attached (it is bus.ctl)
    Attachments:
    CTL.zip ‏95 KB

  • Conventional Cluster questions (client reconect & sizing)

    hi *,
    I have studied the documentation of MQ 4.2 and I have a few specific questions about conventional cluster mode.
    When I set up a cluster like this:
    datacenter1                                           datacenter2
    host1                                                    host2
    broker1 (master)                                     broker2 (slave)
    appserver1                                            appserver2
    consumer + producer for q1 (persistent)     consumer + producer for q1
    Consumers and producers are configured to take the local (nearest) broker as their home broker (mq://localhost:7676,mq://otherdatacenterhost:7676).
    1) What exactly is the impact if broker 1 (the master, and only broker 1) fails, besides the data that was in transit on this broker being unavailable? What will I not be able to do?
    2) When broker 1 fails, the consumers and producers on appserver 1 will switch to broker 2, right? When broker 1 comes up again, will the clients on appserver 1 at some point try to switch back to their home broker?
    3) Since datacenters 1 and 2 are separated geographically, what delay between brokers is acceptable for JMQ? And how do they communicate, in particular?
    4) Sizing: on our current STCMS JMS implementation we have traffic of
    ~1,500,000 messages / day
    ranging from 0 KB to 100,000 KB payload,
    distributed over 1000 JMS queues, distributed over
    8 STCMS JMS servers (2 servers for every business domain (1 warehouse, 2 finance, ...)).
    Would it be feasible to just create one cluster including 1 master broker with 8 cluster broker members (servers),
    set up like this?
    host1                                                              host2
    appserver warehouse1                                        appserver warehouse2
    appserver finance1                                             appserver finance2
    appserver otherbusiness1                                   appserver otherbusiness2
    appserver otherotherbusiness1                            appserver otherotherbusiness2
    master broker (does basically nothing but mastering; does it need to be a standalone broker doing nothing but admin tasks?)
    broker warehouse1                                             broker warehouse2
    broker finance1                                                  broker finance2
    broker otherbusiness1                                        broker otherbusiness2
    broker otherotherbusiness1                                 broker otherotherbusiness2
    e.g.
    Broker warehouse1 basically speaks to appserver warehouse1 (maybe to appserver warehouse2 in failover cases) and seldom routes messages on to other brokers, e.g. finance1, finance2, or otherbusiness1.
    Broker finance1 basically speaks to appserver finance1 (failover: appserver finance2) and seldom routes messages on to other brokers, e.g. warehouse1 or warehouse2.
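    As a rough back-of-the-envelope check on the numbers in question 4 (assuming an even spread across brokers and queues, which real traffic rarely has):

```python
messages_per_day = 1_500_000
brokers = 8            # proposed cluster broker members
queues = 1000          # JMS queues in the current setup

per_broker_per_day = messages_per_day / brokers
per_broker_per_sec = per_broker_per_day / (24 * 60 * 60)
per_queue_per_day = messages_per_day / queues

print(f"{per_broker_per_day:,.0f} msgs/day per broker "
      f"(~{per_broker_per_sec:.1f}/sec average), "
      f"{per_queue_per_day:,.0f} msgs/day per queue")
# → 187,500 msgs/day per broker (~2.2/sec average), 1,500 msgs/day per queue
```

    The average rate is modest; the open questions for sizing are really the 100,000 KB upper end of the payload range and the peak (not average) rates.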

  • Cisco ise 1.2 install certificates for ise cluster question

    Hello all, I have an ISE cluster of 4 devices: 1 primary admin/secondary monitor, 1 secondary admin/primary monitor, and 2 policy nodes.
    I need to install public CA certs on them. Can I generate 1 CSR on one of the nodes that includes a SAN with the DNS names of all the nodes,
    thereby getting only 1 cert from the CA, and export and import the same cert into all the other nodes?
    Or do I have to generate 1 CSR for each node and purchase 4 certs? Wildcard certs are not an option. Thanks,

    ISE allows you to install a certificate with multiple Subject Alternative Name (SAN) fields. A browser reaching the ISE using any of the listed SAN names will accept the certificate without any error as long as it trusts the CA that signed the certificate.
    The CSR for such a certificate cannot be generated from the ISE GUI. http://www.cisco.com/c/en/us/support/docs/security/identity-services-engine-software/113675-ise-binds-multi-names-00.html
    Cisco ISE checks for a matching subject name as follows:
    1. Cisco ISE looks at the subject alternative name (SAN) extension of the certificate. If the SAN contains one or more DNS names, then one of the DNS names must match the FQDN of the Cisco ISE node. If a wildcard certificate is used, then the wildcard domain name must match the domain in the Cisco ISE node's FQDN.
    2. If there are no DNS names in the SAN, or if the SAN is missing entirely, then the Common Name (CN) in the Subject field of the certificate or the wildcard domain in the Subject field of the certificate must match the FQDN of the node.
    3. If no match is found, the certificate is rejected.
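    The three matching steps can be sketched as a small function (a hypothetical illustration, not Cisco code; wildcard handling is simplified to a leading "*." label):

```python
def cert_matches_node(fqdn, san_dns_names=None, subject_cn=None):
    """Sketch of the ISE certificate-name matching steps above."""

    def name_matches(name):
        if name.startswith("*."):
            # Wildcard: the node's domain must match the wildcard domain.
            return fqdn.split(".", 1)[-1] == name[2:]
        return name == fqdn

    # Step 1: if the SAN carries DNS names, one of them must match the FQDN.
    if san_dns_names:
        return any(name_matches(n) for n in san_dns_names)
    # Step 2: no DNS names in the SAN -> fall back to the Subject CN.
    if subject_cn:
        return name_matches(subject_cn)
    # Step 3: nothing matched -> the certificate is rejected.
    return False

print(cert_matches_node("ise1.example.com",
                        san_dns_names=["ise1.example.com", "ise2.example.com"]))
# → True
```

    This is why a single multi-SAN certificate listing every node's FQDN (as in the linked document) passes step 1 on all four nodes.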
    Regards,
    Jatin Katyal
    *Do rate helpful posts*
