Replicated idletimeoutsecs in 5.1 Cluster??

It looks like when <idle-timeout-seconds> is changed in the console for a given stateful session bean in a WL 5.1 cluster, that change is applied across all nodes in the cluster, even though, if one views the value from any of the other nodes, it appears unchanged. Is this expected behavior?

We experienced this about a week ago -- we noticed that the idle-timeout-seconds for one of our stateful beans was set too high, so we decided to reduce it. Before making the change permanent in the weblogic-ejb-jar descriptor, we made it temporarily in the console for one of the nodes and figured we'd monitor it. The change was automatically applied to all nodes in the cluster. When I looked at the other nodes in the console, the idle-timeout-seconds displayed in the console itself still contained the original value, but upon scrolling down and looking at the EJB passivation activity, I could see that they were clearly passivating at the new, reduced idle-timeout-seconds value I had set on the one node.
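For reference, the permanent setting the poster mentions lives in the weblogic-ejb-jar.xml descriptor. A minimal fragment in the WLS 6.x-style descriptor format might look like the following (the bean name and the 600-second value are purely illustrative; the 5.1 descriptor format differs somewhat):

```xml
<weblogic-ejb-jar>
  <weblogic-enterprise-bean>
    <!-- hypothetical bean name -->
    <ejb-name>ShoppingCartEJB</ejb-name>
    <stateful-session-descriptor>
      <stateful-session-cache>
        <!-- passivate instances idle for more than 10 minutes -->
        <idle-timeout-seconds>600</idle-timeout-seconds>
      </stateful-session-cache>
    </stateful-session-descriptor>
  </weblogic-enterprise-bean>
</weblogic-ejb-jar>
```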
          

We do not have this feature as of now.
Anand Byrappagari wrote:
> Rajesh,
>
> Sure, I can do that, but that only allows me to weight the number of calls
> going to each server; it does not allow me to tune each server's
> performance. This is orthogonal to what I want. Let me explain what I mean:
>
> Server 1 - weight 50
> Server 2 - weight 100
> Using weighted load balancing I can direct double the amount of traffic
> to Server 2, but that is not all that I want.
>
> Server 1 - 1 GB memory
> Server 2 - 2 GB memory
> I am interested in tuning the performance of Server 2 by controlling the
> number of bean instances allowed. Since Server 2 has more memory and maybe
> more CPUs, I might want to allow the bean cache to contain twice as many
> instances, or maybe I want the timeout interval to be twice as long, since I
> can afford the extra memory there, gaining some performance improvement by
> not passivating the beans as often.
>
> Requiring all the replicated instances to have the same values seems to be an
> unnecessary restriction, or maybe I am missing something.
>
> -- Anand
          >
          > "Rajesh Mirchandani" <[email protected]> wrote in message
          > news:[email protected]...
          > > Anand,
          > >
          > > If you really want to load the balance depending on the capabilities of
          > the
          > > hardware/OS you could use a different load balancing algorithm.
          > >
          > > More info at
          > http://edocs.bea.com/wls/docs61/cluster/features.html#1006780.
          > >
          > >
> > Anand Byrappagari wrote:
> >
> > > I have a similar problem. I want to be able to set different values for
> > > MaxBeansInCache of an entity bean for different managed servers belonging
> > > to a WebLogic cluster. Is it possible to do this in a WL 6.1 cluster? Since
> > > the only way to specify this is in the deployment descriptor, all the
> > > targets of this deployment will use the new value. The same goes for
> > > IdleTimeoutSeconds and ReadTimeoutSeconds.
> > >
> > > This sounds like a big limitation: if I have two WL instances running on
> > > differently powered machines, I would like to have different values
> > > assigned to MaxBeansInCache to accommodate the difference in the
> > > capabilities of the machines. Can someone please confirm that this is
> > > indeed true? If it is not, can you please tell me how to set this up.
> > >
> > > Thanks,
> > > Anand
> > >
          > > > "Michael Joseph" <[email protected]> wrote in message
          > > > news:[email protected]...
          > > > > It looks like when <idle-timeout-seconds> is changed in the console
          > for a
          > > > > given stateful session bean in a WL 5.1 cluster, that change is
          > applied
          > > > > across all nodes in a cluster, even though if one views the value from
          > any
          > > > > of the other nodes, it appears unchanged. Is this expected behavior?
          > > > >
          > > > > We experienced this about a week ago -- we noticed that our
          > > > > idle-timeout-seconds for one of our stateful beans was set too high,
          > so we
          > > > > decided to reduce it. Prior to making the change permanent in the
          > > > > weblogic-ejb-jar we made it temporarily in the console for one of the
          > > > nodes
          > > > > and figured we'd monitor it. The change automatically applied to all
          > > > nodes
          > > > > in the cluster. When I looked at the other nodes in the console, the
          > idle
          > > > > timeout seconds as displayed in the console itself still contained the
          > > > > original value, but upon scrolling down and looking at the EJB
          > passivation
          > > > > activity, I could see that they were clearing passivating at the point
          > of
          > > > > the new reduced idletimeoutseconds value I had set on the one node.
          > > > >
          > > > >
          > > > >
          > >
          > > --
          > > Rajesh Mirchandani
          > > Developer Relations Engineer
          > > BEA Support
          > >
          > >
          

Similar Messages

  • RMI callbacks with no replica aware stubs in a cluster?

    We would like to use either an RMI ping mechanism or an RMI callback in an upcoming project, but we are limited to 1.1 and 1.2 JREs. I don't believe generating a replica-aware proxy is an option for us given those constraints (no support for dynamic proxies). So we have a remote server object, which we would ideally like to be clusterable, that we can register a callback interface with. How can we accomplish this given our client limitations with WebLogic 8.1 RMI? Any options?

    Yep, you speak no lies. Of course, once I thought about it some more, I realized the error of assuming one could register a JDK RMI object with a WebLogic RMI object. JRMP vs. T3. Not good.
    We still like using plain-Jane RMI within WebLogic. I can register my UnicastRemoteObject stub with a normal RMI object on the server, and it can call back and deliver messages. It will also send heartbeats periodically. If a heartbeat isn't heard in a while, we re-register through HTTP. The advantage of this is that when the HTTP session fails over to another box, the web tier manages the RMI registration for me. The client can remain ignorant of any RMI lookups, which are tedious when not using JNDI or the cluster-aware proxy.
    All of this to get back a few seconds and keep a few thousand clients from polling through a servlet every second. Argh, JDK 1.1.
    Thanks for hammering that final point in.
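The re-registration scheme above hinges on a server-side staleness check. A minimal sketch in plain Java (the class name HeartbeatMonitor and the 30-second threshold are invented for illustration; they are not part of WebLogic's API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the staleness check described above: clients send periodic
// heartbeats; any client not heard from within the timeout is considered
// dead and must re-register (e.g. over HTTP).
public class HeartbeatMonitor {
    private final long timeoutMillis;
    private final Map<String, Long> lastSeen = new ConcurrentHashMap<>();

    public HeartbeatMonitor(long timeoutMillis) {
        this.timeoutMillis = timeoutMillis;
    }

    // Called whenever a heartbeat arrives from a client.
    public void heartbeat(String clientId, long nowMillis) {
        lastSeen.put(clientId, nowMillis);
    }

    // True if the client has never reported or has gone quiet too long.
    public boolean isStale(String clientId, long nowMillis) {
        Long seen = lastSeen.get(clientId);
        return seen == null || nowMillis - seen > timeoutMillis;
    }

    public static void main(String[] args) {
        HeartbeatMonitor m = new HeartbeatMonitor(30_000); // 30s threshold
        m.heartbeat("client-1", 0);
        System.out.println(m.isStale("client-1", 10_000)); // still fresh
        System.out.println(m.isStale("client-1", 40_001)); // stale: re-register
    }
}
```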

  • JNDI bindings not being replicated across servers in a cluster

    According to the WebLogic Server documentation, JNDI bindings are automatically replicated across the servers in a cluster:
    http://www.weblogic.com/docs45/classdocs/weblogic.jndi.WLContext.html#REPLICATE_BINDINGS
    This is not proving to be true in my testing. Perhaps it's "just broken", or perhaps my cluster isn't correctly configured...
    Here is a reproducible case. Install the following on two or more servers in a cluster. You'll need to change line 26 of test1 to reference one of the servers explicitly.
    Test 1 works fine, since both web servers connect to the same server for JNDI usage. The test should return the last host to hit the page, along with the current host name. Alternating between servers in the cluster will alternate the results. However, the fact that I'm specifically naming a server in the cluster defeats the whole point of clustering -- if that server goes down, the application ceases to function properly.
    Test 2, which is supposedly the right way to do it, does not work; an error message is logged:
    Tue Jan 04 08:17:15 CST 2000:<I> <ConflictHandler> ConflictStart lastviewhost:java.lang.String (from [email protected]:[80,80,7002,7002,-1])
    And then both servers begin to report that they have been the only server to hit the page. Alternating between servers has no effect -- both servers are looking solely at their own copies of the JNDI tree. No replication is occurring.
    What is up with this? Any ideas?
    Tim
    [test1.jsp]
    [test2.jsp]
              

    1. Yes:
        <JMSConnectionFactory AllowCloseInOnMessage="false"
            DefaultDeliveryMode="Persistent" DefaultPriority="4"
            DefaultTimeToLive="0"
            JNDIName="xlink.jms.factory.commerceFactory"
            MessagesMaximum="10" Name="xlink.jms.factory.commerceFactory"
            OverrunPolicy="KeepOld" Targets="bluej,biztalk-lab,devtestCluster"/>
    2. No, I am just using the JNDI name of the queue.
    This is an example of how I send a message:

        import javax.jms.*;
        import javax.naming.Context;
        import javax.naming.InitialContext;

        Context ctx = new InitialContext();
        QueueConnectionFactory qconFactory =
            (QueueConnectionFactory) ctx.lookup("xlink.jms.factory.commerceFactory");
        QueueConnection qcon = qconFactory.createQueueConnection();
        QueueSession qsession = qcon.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = (Queue) ctx.lookup("xlink.jms.queue.biztalk-lab.OrdrspImport");
        QueueSender qsender = qsession.createSender(queue);
        // reportExecutorContainer is the (Serializable) payload from the surrounding code
        ObjectMessage msg = qsession.createObjectMessage(reportExecutorContainer);
        qcon.start();
        qsender.send(msg);
        qsender.close();
        qsession.close();
        qcon.close();

    3. I don't know those settings (WL 6.1 SP7).

  • DPM 2012 R2 Consistency Check (Replica Inconsistent) Error for Failover Cluster

    I have a SCDPM 2012 R2 instance running against a Windows Server 2012 R2 Failover Cluster on the network.
    However, any new/migrated Roles/Virtual machines that I add under an existing protection group always gives me an error saying that the replica is inconsistent.
    I've verified the following:
    1. All Windows updates are applied on the cluster hosts & the DPM server.
    2. The VSS and DPMRA services are running on the hosts & DPM server.
    3. The Windows Server Backup feature has been installed on all the machines as well.
    4. The short term storage disks attached to the DPM server are LUNs and there is enough free space on those.
    What else am I missing that could be causing the Replica to be inconsistent - considering that these are new VMs on the hosts themselves - is there a way to force a fresh replica to be created?

    Do not remove any of the providers; they come with the system. There is just something wrong with the VM you're trying to back up, since it cannot complete the initial replica creation.
    Check all of the following:
    1) Make sure you have enough C: drive space on the guest - you need space to create a snapshot.
    2) Check "vssadmin list writers" for any errors.
    If none of the above, rerun the consistency check when no other backup is running. Check the Application log on the host for any VSS failures.
    Ensure you have all the DPM hotfixes - I am compiling a list, since I've been through many DPM issues.
    Great product when you get it going, but like any product there are bumps in the road :)
    ANNCEX

  • Metaimport of replicated cluster diskset?

    We're in the process of building a new cluster and replication environment using Sun Cluster 3.1 on Solaris 9 (9/05), and Hitachi 9990.
    I'm trying to determine if it's possible to use metaimport to import a replicated copy of a cluster diskset. This replicated copy will be created by Hitachi TrueCopy and needs to be imported on a non-cluster host.
    I've been reading SVM documentation all day, and this seems like it should be possible. The only issue that I'm not sure about is that the cluster diskset will be built on DID devices, and I don't know how metaimport will react to that on a non-cluster machine. Seems like it should just ignore the device_ids (if they even exist or are supported in the cluster case) and regenerate them on the replicated side.
    I don't have enough hardware set up yet to try this myself, but if this won't work then I will need to change our recovery strategy. Today we use VxVM with Sun Cluster 3.0, and we can easily import the TrueCopy replicated disk group on non-cluster machines using "vxdg import". Looking for the same functionality with SVM.
    If anyone has any thoughts or recommendations on this, I'd greatly appreciate it!
    Thanks,
    Bryan

    Thanks for your response. Do you know when support will be added to import copies of clustered disksets? Is this limitation due to the fact that cluster does not use device_ids in SVM?
    Without this feature, I don't know how we can make use of our TrueCopy replication if our production environment is clustered...unless we use VxVM which we're trying to move away from.
    Any recommendations?
    Thanks,
    Bryan

  • Unbind doesn't replicate changes on Cluster

    I'm running an application cluster with OC4J 9.0.3, with application context replication using RMI. Lookup, bind, and rebind work OK, and those changes are replicated to all nodes in the cluster, but unbind does not replicate; it only works locally on the node where the unbind was performed.
    I tried OC4J 10g, and it works correctly for unbind as well.
    Can you help me figure out how unbind might be made to work on OC4J 9.0.3?
    Thanks.
    Robo.

    Maybe you could file a bug and provide a test case.

  • Hyper-V Live Migration Compatibility with Hyper-V Replica/Hyper-V Recovery Manager

    Hi,
    Is Hyper-V Live Migration compatible with Hyper-V Replica/Hyper-V Recovery Manager?
    I have 2 Hyper-V clusters in my datacenter, both using CSVs on Fibre Channel arrays. These clusters were created and are managed using the same "System Center 2012 R2 VMM" installation. My goal is to eventually move one of these clusters to a remote DR site. Both sites are connected/will be connected to each other through dark fibre.
    I manually configured Hyper-V Replica in the Failover Cluster Manager on both clusters and started replicating some VMs using Hyper-V Replica.
    Now, every time I attempt to use SCVMM to do a Live Migration of a VM that is protected using Hyper-V Replica to another host within the same cluster, the Migrate VM Wizard gives me the following "Rating Explanation" error:
    "The virtual machine <virtual machine name> which requires Hyper-V Recovery Manager protection is going to be moved using the type "Live". This could break the recovery protection status of the virtual machine."
    When I ignore the error and do the Live Migration anyway, the Live Migration completes successfully with the info above. There doesn't seem to be any impact on the VM or its replication.
    When a host shuts down or is put into maintenance, the VM migrates successfully, again with no noticeable impact on users or replication.
    When I stop replication of the VM, the error goes away.
    Initially, I thought this error occurred because I had attempted to manually configure the replication between both clusters using Hyper-V Replica in Failover Cluster Manager (instead of using Hyper-V Recovery Manager). However, even after configuring and using Hyper-V Recovery Manager, I still get the same error. This error does not seem to have any impact on the high availability of my VM or on replication of this VM. Live Migrations still occur successfully, and replication seems to carry on without any issues.
    However, it now has me concerned that a Live Migration may one day occur and break replication of my VMs between both clusters.
    I have searched, and searched, and searched, and I cannot find any mention, in official or unofficial Microsoft channels, of the compatibility of these two features.
    I know VMware vSphere Replication and vMotion are compatible with each other: http://pubs.vmware.com/vsphere-55/index.jsp?topic=%2Fcom.vmware.vsphere.replication_admin.doc%2FGUID-8006BF58-6FA8-4F02-AFB9-A6AC5CD73021.html
    Please confirm: are Hyper-V Live Migration and Hyper-V Replica compatible with each other?
    If they are, any link to further documentation on configuring these services so that they work in a fully supported manner would be highly appreciated.
    D

    This can be considered a minor GUI bug.
    Let me explain. Live Migration and Hyper-V Replica are supported together on both Windows Server 2012 and 2012 R2 Hyper-V.
    This is because we have the Hyper-V Replica Broker role (in a cluster) that is able to detect, receive, and keep track of the VMs and the synchronizations. The configuration related to VMs enabled for replication follows the VMs themselves.
    If you try to live migrate a VM within Failover Cluster Manager, you will not get any message at all. But VMM will (as you can see) give you an error, though it should rather be an informative message instead.
    Intelligent Placement (in VMM) is responsible for putting everything in your environment together to give you tips about where the VM can best run, and that is why we are seeing this message here.
    I have personally reported this as a bug. I will check on this one and get back to this thread.
    Update: I just spoke to one of the PMs of HRM, and they can confirm that live migration is supported - and should work in this context.
    Please see this thread as well: http://social.msdn.microsoft.com/Forums/windowsazure/en-US/29163570-22a6-4da4-b309-21878aeb8ff8/hyperv-live-migration-compatibility-with-hyperv-replicahyperv-recovery-manager?forum=hypervrecovmgr
    -kn
    Kristian (Virtualization and some coffee: http://kristiannese.blogspot.com)

  • Unable to replicate reservation to DHCP failover cluster partner node

    Hello.
    After reading that, whilst DHCP address leases are replicated automatically to DHCP failover cluster partner nodes, reservations aren't, I followed steps to right-click the scope on the server which features the specified reservation and select 'Replicate Scope'. The status window detailing the progression of the associated tasks shows that replication of reservations to my other partner node failed. The error code returned was 20022, which states that the specified IP address is currently taken by another client.
    Why is this occurring? The other node I am trying to replicate to is within the DHCP failover cluster, and so should be aware that the reservation I am trying to replicate is linked to the same device. Why am I unable to replicate the reservations?
    Any help would be appreciated.
    Many thanks.

    Hi,
    Based on my research: DHCP failover automatically synchronizes leases between failover partners in Windows Server 2012 and Windows Server 2012 R2. However, configuration changes on one server in the failover relationship do not automatically get replicated to the partner; you need to replicate them manually by choosing the "Replicate Scope..." or "Replicate Relationship..." option in the DHCP console.
    In addition, for DHCP failover to function correctly, time must be kept synchronized between the two servers in a failover relationship. Please check whether the time is synchronized between the two servers.
    Besides, you can check the event log to see if any other related information exists.
    Best regards,
    Susie

  • Finding the handler host in cluster when using sticky sessions

    Our design is like this: we have Apache front-ending the WL cluster. The session is not replicated across WL hosts in the cluster. However, the Apache-WebLogic bridge takes care of handling sticky sessions (i.e. forwarding requests in the same session to one host in the cluster).
    Now, we have some code running on Apache itself, in Perl.
    The requirement is as follows: in the Perl code, we trap certain requests which are NOT forwarded to WebLogic. However, in the Perl code, we do trap the JSESSIONID cookie. Now, using this cookie value, is it possible to know which WL host in the cluster is handling its corresponding session?
    This is required, since the Perl module is supposed to make an explicit HTTP request to that WL host - passing the JSESSIONID as a parameter - for authentication.
    Thanks,
    Subodh
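For what it's worth, the WebLogic session cookie itself carries routing information: the value typically has the form sessionid!primaryJVMID!secondaryJVMID, which is what the proxy plugin uses for stickiness. Splitting those fields apart is trivial, as the sketch below shows; note, though, that mapping a JVMID hash back to an actual host name still requires the plugin's dynamic server list, so this alone does not answer the Perl-side question. The cookie value in the example is invented:

```java
// Sketch: split a WebLogic-style session cookie of the form
// "<sessionid>!<primaryJVMID>!<secondaryJVMID>" into its parts.
// The cookie value below is made up for illustration; real JVMIDs are
// opaque hashes that the proxy plugin resolves against its server list.
public class SessionCookieParser {
    public static String[] parse(String jsessionid) {
        return jsessionid.split("!");
    }

    public static void main(String[] args) {
        String cookie = "AxQ9yTkL2v!1038642818!-1127958443";
        String[] parts = parse(cookie);
        System.out.println("session id: " + parts[0]);
        System.out.println("primary:    " + parts[1]);
        System.out.println("secondary:  " + parts[2]);
    }
}
```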
              

    Can the Perl module send this request to the Apache web server itself and set the cookie the same? That would allow the Apache plugin to send it to the right node.
              "Subodh" <[email protected]> wrote in message
              news:[email protected]..
              >
              > Our design is like this: we have Apache front-ending the WL cluster. The
              session
              > is not replicated across WL hosts in cluster. However, the Apache-weblogic
              bridge
              > takes care of handling sticky - sessions ( i.e. forwarding requests in
              same session
              > to one host in cluster )
              >
              > Now, we have some code running on Apache itself , in Perl.
              >
              > The requirement is as follows : In the Perl code, we trap certain requests
              which
              > are NOT forwarded to Weblogic. However, in the Perl code, we do trap the
              JSESSIONID
              > cookie. Now, using this cookie value, is it possible to know which WL host
              in
              > cluster is handling its corresponding session ??
              >
              > This is required, since the Perl module is supposed to make an explicit
              HTTP request
              > to that WL host - passing the JSESSIONID as a parameter - for
              authentication.
              >
              >
              > Thanks,
              > Subodh
              >
              

  • MX LB / HA / Cluster ESA 380

    We are going to deploy our new ESAs (2 devices) per the plan below:
    ESA01 is primary for company A, and ESA02 is primary for company B.
    If ESA01 is down, ESA02 will receive mail for company A using MX load balancing; the same method applies for company B.
    We are very confused about clustering with MX load balancing in the above scenario: can two ESA devices with different configurations be included in a single cluster? We have different policies for the two companies, and each company has its own email server.
    We need some explanation of the above.
    Please clarify.

    "Cluster" in ESA just means the configuration gets replicated.
    So if you cluster them and want different policies for each company, you just go to Mail Polices/Incoming Mail Policies and create one for each company.
    Add each domain you receive mail for to Mail Polices/Recipient Access Table
    Add a route to each mail server for each domain under Network/SMTP routes
    If you want separate "Host Access Tables" you can create separate listeners for each company (under Network/Listeners), and you may want to put them on separate IP interfaces, but you don't have to do this... one HAT may work just fine...

  • Starting weblogic cluster and admin server on Win 2000 platform

    Hi! Here's my problem scenario:
    I'm running my WLS 8.1 application as a WLS cluster on the Win 2000 platform. I have 3 replicated WLS servers in my cluster and, of course, one WLS server as the admin server (4 WLS servers all together). I'm starting my WLS application using Win services, which behind the scenes call Win 2000 command interpreter batch files to start the servers. Currently I have 4 different Win 2000 services, each calling a different batch file to start a WLS server (4 all together). Here's my question:
    Is there any way to start my cluster (3 managed WLS servers) and the WLS admin server from a single Win 2000 command file (using a single Win 2000 service)? Using the Win 2000 AT command I can start executables, commands, and command files in the background at a specific date and time, but I want to start my WLS application on every Win 2000 platform boot. Is there any way to achieve this?
    Best Regards!
    Jami

    hi,
    Thanks for stepping up!
    But I wasn't using any clustering or load balancing.
    Just a simple Admin server / Managed server setup.
    256 MB Pentium box running Win 2000 Premium Edition.
    I tried tweaking the heap size too... 'coz the operations were slogging like anything...
    Bal
    "Raja Mukherjee" <[email protected]> wrote:
    It seems like a network issue... do you have the Windows Clustering/Load Balancing service turned on by any chance? I would also use netmon to see the traffic between the IPs...
    ..raja
    "Bala" <[email protected]> wrote in message news:3b3a6c00$[email protected]..
    hi,
    I am looking at strange behaviour with WLS 6.0 SP1 installed on Windows 2000 Premium.
    I brought up the Admin server and a Managed server. After a while, the Admin Server completely loses control of the managed server (for no reason).
    When I tried monitoring the managed server from the console, I could not access the managed server, and its status was down (red ball). I tried restarting the Admin Server in discover mode - NO LUCK.
    But when I restarted the Managed server, went to the Web Based Admin Console, and refreshed, it worked. But I don't think we should be doing that in a production environment.
    Did anybody come across such a problem before?
    Is this a problem with Win 2000? 'Coz it never happened with NT Workstation.
    Thanks in advance

  • Failed to replicate non-serializable object in WebLogic 10.3 cluster environment

    Hi,
    We have a problem in a cluster environment: it reports that all the objects in the session need to be serializable. Is there any tool, or any other way, to find which objects in the session are not serializable? There was no issue in WLS 8; when the application was set up again in WLS 10, we started facing this session replication problem.
    The setup: there are two managed server instances in a cluster, set to multicast (we also tried unicast).
    stacktrace:
    ####<Jun 30, 2010 7:11:16 PM EDT> <Error> <Cluster> <userbser01> <rs002> <[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <> <1277939476284> <BEA-000126> <All session objects should be serializable to replicate. Check the objects in your session. Failed to replicate non-serializable object.>
    ####<Jun 30, 2010 7:11:19 PM EDT> <Error> <Cluster> <userbser01> <rs002> <[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <> <1277939479750> <BEA-000126> <All session objects should be serializable to replicate. Check the objects in your session. Failed to replicate non-serializable object.>
    Thanks,

    Hi
    Irrespective of WLS 8.x or WLS 9.x/10.x, in general any object that needs to be synced/replicated across the servers in a cluster should be serializable (i.e. implement java.io.Serializable). The object must be able to marshal and unmarshal. The simple reason it did not show up in WLS 8.x may be that that version did not report these details as an Error; it may have logged them as Info or Warn, or just ignored them. WebLogic Server became more and more stable and efficient over the years, from its oldest versions (4.x, 5.x, 6.x, 7.x) to the latest 10.x, so my guess is they added more logic to capture all possible errors and scenarios. I worked on WLS 8.1 SP4 through SP6 a long time back and cannot remember whether I saw these errors for a cluster domain with non-serializable objects; I vaguely remember seeing them for Portal domains, but I'm not sure. I do not have 8.x installed, otherwise I would have given it a quick shot and confirmed.
    So even though it did not show up in WLS 8.x, the underlying rule is still that any object that gets replicated in a cluster needs to implement the Serializable interface.
    Thanks
    Ravi Jegga
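On the original question of finding which session attributes are the problem: I am not aware of a built-in tool, but a small helper that trial-serializes each attribute can pinpoint the offenders. A sketch in plain Java (in a servlet you would iterate session.getAttributeNames(); here a plain Map stands in for the session, and the attribute names are invented):

```java
import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: trial-serialize each session attribute to find the ones that
// would break session replication.
public class SerializationChecker {
    public static boolean isSerializable(Object value) {
        try (ObjectOutputStream out =
                 new ObjectOutputStream(new ByteArrayOutputStream())) {
            out.writeObject(value); // throws NotSerializableException on offenders
            return true;
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Stand-in for HttpSession attributes; names/values are illustrative.
        Map<String, Object> session = new LinkedHashMap<>();
        session.put("user", "alice");      // String is Serializable
        session.put("conn", new Object()); // plain Object is not
        for (Map.Entry<String, Object> e : session.entrySet()) {
            if (!isSerializable(e.getValue())) {
                System.out.println("NOT serializable: " + e.getKey());
            }
        }
    }
}
```

Note that serializing a field graph can be expensive, so this is a debugging aid, not something to run on every request.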

  • Hyper-V Replica in the same site

    I know it may sound strange, and I don't care to illustrate the reasoning behind this, but I just want to know if this is technically possible. I haven't read any restrictions regarding replicating Hyper-V data within the same site, but I wanted to confirm with the community.

    The product has hard-blocks on replicating VMs to the same cluster, but otherwise you should be ok. But I think from a business continuity/VM protection perspective, you should consider replicating to a different site.
    Praveen

  • Clustering and Application/Servlet Singletons...Replicated?

    Are static servlet attributes and instance attributes replicated to servlet instances in a cluster?
    We have seen some behavior which suggests not.
    Assuming all instance and static variables are serializable or atomic types, are they replicated?
    If not, how is application/servlet-level state replicated? ServletContext?
    -phil ([email protected])
    [Phillip A. Lindsay.vcf]
              

    "Phillip A. Lindsay" wrote:
              > Are static servlet and instance attributes replicated to servlet instances
              > in cluster?
              Each node will have its own class/classloader tuple and therefore its own set
              of static and instance attributes. The singleton effect can be achieved by
              binding
              an object into a namespace at a well-known point. It can be argued that a
              singleton
              should be replicated but then it wouldn't be a "single" singleton.
              Cheers
              Alex
              mailto:[email protected] // Consulting services available
              

  • Solaris Cluster and Global Devices for sapmnt

    Hi,
    IHAC that is considering using Global Devices for /usr/sapmnt in an SAP environment, since they need all SAP nodes looking at the same sapmnt area. Is this a recommended approach?
    Generally, we are working with S11.1/Solaris Cluster 4.1 to implement NetWeaver 7.2 and ERP 6.0.
    I appreciate your comments.
    Regards, Rafael.

    Hi Rafael,
    /usr/sapmnt is a file system, so I assume that when you ask about global devices you really mean a global file system (i.e. a UFS file system mounted with the global option)?
    If so, then the answer is yes.
    The data service guide for SAP NetWeaver (http://docs.oracle.com/cd/E29086_01/html/E29440/installconfig-10.html#installconfig-34) does mention in the section "configuration considerations":
    "The SAP enqueue server and SAP replica server run on different cluster nodes. Therefore, the SAP application files (binary files, configuration files, and parameter files) can be installed either on the global file system or on the local file system. However, the application files for each of these applications must be accessible at all times from the nodes on which these applications are running."
    And the deployment example in the appendix (http://docs.oracle.com/cd/E29086_01/html/E29440/gmlbt.html#scrolltoc) makes use of a global mounted /sapstore file system.
    Regards
                 Thorsten
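For illustration, a globally mounted /usr/sapmnt entry in /etc/vfstab might look like the following (the metadevice names and diskset are hypothetical; adapt them to your own configuration):

```
# device to mount         device to fsck            mount point  FS   fsck  mount    mount
#                                                                type pass  at boot  options
/dev/md/sapset/dsk/d100   /dev/md/sapset/rdsk/d100  /usr/sapmnt  ufs  2     yes      global,logging
```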
