Clustering-related problem

Hi all,
I have an EAR application deployed on OC4J Application Server 10.1.3. The EAR contains a stateless session bean, and inside it I created a singleton object in which I store the registered and online users. It works fine on a single instance, but when I deploy it in a clustered topology the object sometimes does not hold the values. I have observed that the object we created exists at instance level; with load balancing, when a request is served by another instance I cannot get the values that were created by the previous instance.
Is there any solution that will fulfill this requirement?
Can we create an object whose scope is container level (i.e. available to all instances)?
Thanks in advance.
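One way to get that container-level scope (and the approach the rest of this thread ends up using) is to move the shared state out of a JVM-local static field and into an Oracle Coherence cache, which every OC4J instance in the cluster can see. A minimal sketch, assuming Coherence is on the classpath and a cluster is configured; the class name and the "online-users" cache name are illustrative, not anything prescribed by OC4J:

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class OnlineUserRegistry {

    // Cluster-wide map: every instance that asks for "online-users" gets the same cache.
    private static final NamedCache USERS = CacheFactory.getCache("online-users");

    public static void register(String sessionId, String userName) {
        USERS.put(sessionId, userName);        // visible to all cluster members
    }

    public static String lookup(String sessionId) {
        return (String) USERS.get(sessionId);  // works no matter which instance served the login
    }

    public static void unregister(String sessionId) {
        USERS.remove(sessionId);
    }
}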

Hi, I am deploying my application to an OC4J instance group, i.e. on two instances (clustered environment). I have two applications, App1 and App2, and App1 communicates with App2. The OC4J instances run on Solaris. I have implemented Coherence in the following way. Whenever I bring down either one of the instances it works fine; but after restarting the downed instance and then bringing the other instance down, it fails to hold the cached values. Where should I keep the coherence-cache-config.xml file in the server environment? For this I added a JVM start parameter (-D system property) as follows:
-Dtangosol.coherence.cacheconfig=../j2ee/home/config/coherence-cache-config.xml
The following is the code where I implemented caching.
package com.deceval.brc.brcentcmdframework;

import java.io.Serializable;
import java.util.ArrayList;
import java.util.HashMap;

import org.apache.log4j.Logger;

import com.deceval.brc.brcentcmd.EntCmdProxyObj;
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class EntCmdRefObj implements Serializable {

     private static final long serialVersionUID = 1L;

     private static EntCmdRefObj refObj = null;
     private HashMap<String, Integer> refMap = null;
     private ArrayList<EntCmdProxyObj> arregloFirmantes;

     // Coherence cache shared by every member of the cluster.
     private final NamedCache m_cache = CacheFactory.getCache("FirmanteCache");

     public static Logger mLog = Logger.getLogger(EntCmdRefObj.class);

     public EntCmdRefObj() {
          refMap = new HashMap<String, Integer>();
          setArregloFirmantes(new ArrayList<EntCmdProxyObj>());
     }

     // Synchronized so that concurrent first calls cannot create two instances.
     public static synchronized EntCmdRefObj getInstance() {
          if (refObj == null) {
               refObj = new EntCmdRefObj();
          }
          return refObj;
     }

     public HashMap<String, Integer> getRefMap() {
          return refMap;
     }

     public void setRefMap(HashMap<String, Integer> refMap) {
          this.refMap = refMap;
     }

     public ArrayList<EntCmdProxyObj> getArregloFirmantes() {
          // Read the list from the cluster-wide cache; it can be null if nothing has been put yet.
          ArrayList<EntCmdProxyObj> firmantes =
                    (ArrayList<EntCmdProxyObj>) m_cache.get("FirmanteCache");
          mLog.info("IN EntCmdRefObj : " + firmantes);
          mLog.info("IN EntCmdRefObj : " + (firmantes == null ? 0 : firmantes.size()));
          return firmantes;
     }

     public void lock() {
          m_cache.lock("monitor", -1L);
     }

     public void unlock() {
          m_cache.unlock("monitor");
     }

     public void setArregloFirmantes(ArrayList<EntCmdProxyObj> pArregloFirmantes) {
          // Store the list in the cache so that all instances see the same data.
          m_cache.put("FirmanteCache", pArregloFirmantes);
     }
}
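For reference, a hypothetical caller on any cluster member might use the class like this, inside whatever method updates the list (someFirmante stands for an EntCmdProxyObj built elsewhere; the read-modify-write is wrapped in the cluster-wide lock so two instances do not overwrite each other's updates):

EntCmdRefObj ref = EntCmdRefObj.getInstance();
ref.lock();                                    // cluster-wide lock on the "monitor" key
try {
    ArrayList<EntCmdProxyObj> firmantes = ref.getArregloFirmantes();
    if (firmantes == null) {
        firmantes = new ArrayList<EntCmdProxyObj>();
    }
    firmantes.add(someFirmante);               // someFirmante: an EntCmdProxyObj created by the caller
    ref.setArregloFirmantes(firmantes);        // write the updated list back so other instances see it
} finally {
    ref.unlock();
}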
coherence-cache-config.xml
<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<cache-config>
  <caching-scheme-mapping>
    <!--
    Caches with any name will be created as default replicated.
    -->
    <cache-mapping>
      <cache-name>*</cache-name>
      <scheme-name>default-replicated</scheme-name>
    </cache-mapping>
    <cache-mapping>
      <cache-name>FirmanteCache</cache-name>
      <scheme-name>default-distributed</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
    <!--
    Default replicated caching scheme.
    -->
    <replicated-scheme>
      <scheme-name>default-replicated</scheme-name>
      <service-name>ReplicatedCache</service-name>
      <backing-map-scheme>
        <class-scheme>
          <scheme-ref>default-backing-map</scheme-ref>
        </class-scheme>
      </backing-map-scheme>
    </replicated-scheme>
    <!--
    Default distributed caching scheme.
    -->
    <distributed-scheme>
      <scheme-name>default-distributed</scheme-name>
      <service-name>DistributedCache</service-name>
      <backing-map-scheme>
        <class-scheme>
          <scheme-ref>default-backing-map</scheme-ref>
        </class-scheme>
      </backing-map-scheme>
    </distributed-scheme>
    <!--
    Default backing map scheme definition used by all the caches
    that do not require any eviction policies.
    -->
    <class-scheme>
      <scheme-name>default-backing-map</scheme-name>
      <class-name>com.tangosol.util.SafeHashMap</class-name>
    </class-scheme>
  </caching-schemes>
</cache-config>
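A quick way to check whether the file above is actually being picked up on each instance is a small standalone check like the following sketch (class name is illustrative). It prints the cache-config property the JVM actually received and the service that FirmanteCache resolves to: DistributedCache if the explicit mapping wins, ReplicatedCache if only the wildcard mapping matched.

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class CacheConfigCheck {
    public static void main(String[] args) {
        // The path the running JVM was actually given, if any.
        System.out.println("tangosol.coherence.cacheconfig = "
                + System.getProperty("tangosol.coherence.cacheconfig"));

        NamedCache cache = CacheFactory.getCache("FirmanteCache");
        System.out.println("FirmanteCache service = "
                + cache.getCacheService().getInfo().getServiceName());

        CacheFactory.shutdown();
    }
}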

Similar Messages

  • Clustering Config Problem - ConflictHandler

              We are currently getting ConflictHandler messages when we start servers in
              our cluster. I have scanned through the newsgroup postings on
              ConflictHandler problems, and no one seems to be doing quite what we are
              (which may be the problem in and of itself).
              We are on Solaris 2.8, WebLogic 5.1 SP 8, JDK 1.3.
              We have a collection of stateless session beans and entity beans which are
              all NON-clustered (this has been verified several times). We have 2
              computers in the cluster, and all EJBs are supposed to deploy in both
              computers. The first computer comes up fine, but the second comes up with
              conflict handler messages for every EJB deployed.
              Our architectural thought was that failover/load balancing would occur at
              the servlet/JSP level through the use of the WebLogic plug-in. So the EJBs
              would not have to be clustered.
              Thanks
              Bob
              

    There was an earlier post in the newsgroup about this....
              Pasted here....
              thanks,
              Patrick
              ----- Original Message -----
              From: "Robert Patrick" <[email protected]>
              Newsgroups: weblogic.developer.interest.clustering
              Sent: Saturday, May 05, 2001 7:53 PM
              Subject: Re: Confilct Handler ..
              > If you bind objects that are not cluster-aware stubs into the JNDI tree on
              > both machines with the same name, this will occur. To prevent this,
              either
              > make sure your stubs are cluster-aware (e.g., EJB Home is clusterable) or
              > set the REPLICATE_BINDINGS property to false when obtaining the
              > InitialContext used to bind these objects into JNDI.
              >
              > Madhu wrote:
              >
              > > When does this happen? Running code in two clustered instances on a same
              > > multihomed machine produces this error in some classes.
              > >
              > > Madhu
              >
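              Following the REPLICATE_BINDINGS suggestion above, here is a minimal sketch of obtaining an InitialContext that does not replicate its bindings across the cluster; the provider URL, JNDI name, and class name are placeholders, not values from the original thread:

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import weblogic.jndi.WLContext;

public class LocalBindExample {
    public static void bindLocally(Object myObject) throws NamingException {
        Hashtable env = new Hashtable();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        env.put(Context.PROVIDER_URL, "t3://localhost:7001");   // placeholder server URL
        env.put(WLContext.REPLICATE_BINDINGS, "false");          // do not replicate this binding to other servers
        Context ctx = new InitialContext(env);
        ctx.bind("myNonClusteredObject", myObject);              // placeholder JNDI name
    }
}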
              "Bob Simonoff" <[email protected]> wrote in message
              news:[email protected]...
              >
              > We are currently getting ConflictHandler messages when we start servers in
              > our cluster. I have scanned through the newsgroup postings on
              > ConflictHandler problems and noone seems to be doing quite what we are
              > (maybe the problem in and of itself).
              >
              > We are on Solaris 2.8, WebLogic 5.1 SP 8, JDK 1.3.
              >
              > We have a collection of stateless session beans and entity beans which are
              > all NON-clustered (this has been verified several times). We have 2
              > computers in the cluster, and all EJBs are supposed to deploy in both
              > computers. The first computer comes up fine, but the second comes up with
              > conflict handler messages for every EJB deployed.
              >
              > Our architectural thought was that fail over/load balancing would occur at
              > the servlet/JSP level through the use of the weblogic plug in. So the EJBs
              > would not have to be clustered.
              >
              > Thanks
              > Bob
              >
              >
              >
              

  • Wls 5.1 sp9 clustering (DNS Problem)

    Hi all,
              I am trying to create a WLS cluster of 3 instances and the fourth one being
              proxy. All four instances of WLS are on the same machine, and we use
              default subnet mask so they are all in the same LAN.
              The servers in the cluster have started properly and are communicating with
              each other using IP multicast. No problem there.
              However, when I request a JSP page using the proxy server's
              address (http://proxyIP:7001/Hello.jsp), it is not able to forward it to any
              of the clustered server instances. It does not even give me a timeout
              error or a page-not-found error or anything. I am not sure if my DNS entries are
              wrong, or if there is some other problem, or if there is something that I missed.
              Can someone verify this for me, please?
              DNS ENTRIES in HOSTS file are as follows :
              127.0.0.1 localhost
              192.200.1.251 server251.bozpivot.com
              192.200.1.252 server252.bozpivot.com
              192.200.1.253 server253.bozpivot.com
              mycluster 192.200.1.251
              mycluster 192.200.1.252
              mycluster 192.200.1.253
              STACK Trace on proxy is as follows :
              Sun Jan 02 10:22:33 GMT+05:30 2000:<I> <ServletContext-General> *.jsp: init
              Sun Jan 02 10:22:33 GMT+05:30 2000:<E> <proxy> Please specify secure port in
              the
              properties. Using default ports 7001/7002 See release notes for more info
              Sun Jan 02 10:22:33 GMT+05:30 2000:<E> <proxy> Please specify secure port in
              the
              properties. Using default ports 7001/7002 See release notes for more info
              Ramesh
              

    Thanks all,
              I got it working; it was not a problem with my DNS entries. I forgot to
              register my JSPServlet, so all calls to JSP pages were getting into a
              non-ending loop.
              Ramesh
              "ramesh" <[email protected]> wrote in message
              news:[email protected]...
              > Just in case you need to see my proxy servlet configuration :
              >
              > weblogic.httpd.register.*.jsp=weblogic.servlet.internal.HttpClusterServlet
              > weblogic.httpd.initArgs.*.jsp=defaultServers=server251:7001|server252:7001
              >
              >
              weblogic.httpd.register.*.servlet=weblogic.servlet.internal.HttpClusterServl
              > et
              >
              weblogic.httpd.initArgs.*.servlet=defaultServers=server251:7001|server252:70
              > 01
              >
              > weblogic.allow.execute.weblogic.servlet=everyone
              >
              weblogic.httpd.register.cluster=weblogic.servlet.internal.HttpClusterServlet
              >
              weblogic.httpd.initArgs.cluster=defaultServers=server251:7001|server252:7001
              > weblogic.httpd.defaultServlet=cluster
              

  • Exchange 2010 - Clustering & DAG problems after restoring CLUSTER from domain

    Hi there!
    SITE 1
    Primary EXCHANGE server (2010SP3 with latest CU) with all the roles installed
    SITE 2
    Secondary Exchange server (2010 SP3 with latest CU) with only the mailbox role, for DAG purposes.
    SITE 1 and SITE 2 are connected with a site-to-site VPN.
    Both servers are on Windows Server 2008 R2 Enterprise.
    About 3-4 months ago we accidentally deleted the DAG node from the domain. We managed to restore it using AD restore and by checking that the DAG is a member of all the required Exchange groups in the domain.
    Now we have a big problem: if the site-to-site VPN drops, our primary Exchange server in SITE 1 stops working.
    If the VPN drops between the sites, OWA becomes unavailable, as if the Exchange servers think that the server in SITE 2 is the primary server.
    Please advise us on how to track down and repair the root cause of the problem.
    With best regards,
    bostjanc

    Running command:
    Get-MailboxDatabaseCopyStatus –Server "exchangesrvname" | FL MailboxServer,*database*,Status,ContentIndexState
    Gives as an output that all the databases are healthy:
    Example of 1 database report:
    MailboxServer      : ExchangeSRVname
    DatabaseName       : DatabaseName1
    ActiveDatabaseCopy : exchange2010
    Status             : Mounted
    ContentIndexState  : Healthy
    Running command:
    Test-ReplicationHealth –Server "exchange2010.halcom.local" | FL
    Also gives output that everything is fine.
    We still need to solve this issue, so we will unmark the thread as answered.
    Every Test-ReplicationHealth check returns Passed (each entry reports RunspaceId c8c20c41-7b3e-463c-9c98-56785d62c74b, Server ExchangeSRVname, empty Error and Identity, IsValid True, ObjectState New):

    Check                  Result   CheckDescription
    ClusterService         Passed   Checks if the cluster service is healthy.
    ReplayService          Passed   Checks if the Microsoft Exchange Replication service is running.
    ActiveManager          Passed   Checks that Active Manager is running and has a valid role.
    TasksRpcListener       Passed   Checks that the Tasks RPC Listener is running and is responding to remote requests.
    TcpListener            Passed   Checks that the TCP Listener is running and is responding to requests.
    ServerLocatorService   Passed   Checks that the Server Locator Service is running and is responding to requests.
    DagMembersUp           Passed   Verifies that the members of a database availability group are up and running.
    ClusterNetwork         Passed   Checks that the networks are healthy.
    QuorumGroup            Passed   Checks that the quorum and witness for the database availability group is healthy.
    FileShareQuorum        Passed   Verifies that the path used for the file share witness can be reached.
    bostjanc

  • Accessing clustered report servers from Forms

    Hi
    The "Oracle Fusion Middleware Publishing Reports to the Web with Oracle Reports Services 11g R2 (11.1.2)" manual in Section 2.5 talks about setting up a High Availability environment for Reports.
    It discusses how to set the cluster configuration and how to create a reports job repository in the database. It says that you need to have a unique name for each report server, finally it says that you should use Oracle Web Cache to load balance for the reports cluster.
    If you are calling the report server from Forms with a run_report_object then you don't go via the web cache, so I am confused how you send your report request to the reports cluster in a load balanced fashion.
    Has anyone done this or can explain how this works for Forms?
    TIA
    Tony

    I know that this is an old thread, but I've taken the liberty to bump it up, as I have the same question.
    The Reports documentation describes quite succinctly how to configure the report servers to be clustered - no problem there.
    The issue is that when you run a report from forms using run_report_object, you are required to specify a report server name. The Reports docs specify that all of the server names must be unique, so this seems to indicate that you cannot use a clustered report server environment from Forms, or am I missing something somewhere? Oh, wait - an insight coming here - Can you (I'll test this, but it would be good to know) specify a cluster name instead of an actual report server name? Google to the rescue:
    Using RUN_REPORT_OBJECT: If the call specifies a Reports Server cluster name instead of a Reports Server name, the reports_servermap environment variable must be set in the Oracle Forms Services default.env file. If your Oracle Forms application uses multiple Reports Server cluster names, you can map each of those cluster names to a different Reports Server using reports_servermap in rwservlet.properties, as follows:
    There's the answer. Sometimes writing the question down helps figure out the answer :)
    Regards,
    John

  • WAP321 intermittent lock-ups

    Have been trying to work through an ongoing lock-up issue on a pair of WAP321s. Have read many of the prior threads and have tried to get some data from either the switch or the APs to help figure out where the problem may be, but am at a loss.
    Set-up:
    - 2 x WAP321 running v1.0.2.3 connected to a Cisco 2960S-24PS-L (PoE)
    - 2 SSIDs, each on a different VLAN; both VLANs are tagged.
    - WAPs are no longer clustered.
    Problem:
    Every day, anywhere from 4-12 hrs in, the APs crash. They either reboot successfully or lock up 50% of the time. In a lock-up, the Ethernet ports on the 2960 show a device physically connected and powered up, but there is no layer 2 or layer 3 activity (i.e. the APs are completely unresponsive). And there are no PoE events at the time of the crash/reboot.
    The APs are remote to me and the location is not in use every day, so I have not been able to determine whether both APs lock up at exactly the same time every time, but from the past 24 hrs of continuous monitoring (pings) they tend to crash within 30 min of each other, though not at the exact same time. They are no longer clustered, as I initially thought that might have been the issue.
    AP use is quite light and they lock up at any time of the day (i.e. not just during business hours).
    The problem is resolved by rebooting the APs by shut/no shutting their respective PoE ports. They are ceiling mounted and there is no way for staff onsite to have a look at them to see what lights are on or off.
    I have not reset the units to factory defaults and re-loaded the firmware. This is something I would prefer to do onsite, as just getting the two VLANs into the units was a painful 2 hr operation (VLAN settings wouldn't stick, I couldn't make changes to the VLAN admin page no matter what browser or PC I used, I had to reboot each time I made a change, sometimes it would save and sometimes it wouldn't, the AP wouldn't accept a new VLAN, then it would, etc. What should have taken 10 minutes took 2+ hrs. Based on that experience alone I will never sell these again and will stick to the higher-end Cisco WAPs that I'm used to).
    Beyond rebuilding these units and hoping that somehow magically fixes things, has anyone encountered any issues with the WAP321s locking up due to some kind of incompatibility at the network layer? Or having them lock up in general operation? Any thoughts? I wish I had error messages or log messages of some type to go on, but there's nothing, unfortunately.
    Message was edited by: MICHAEL CORDIEZ

    Hi Michael, thank you for using our forum. My name is Johnnatan and I am part of the Small Business Support community. Thank you for all the specific detail about your issue, which was really clear to me. I would advise you to install the latest firmware, 1.0.3.4; you can download it at the link below:
    http://software.cisco.com/download/release.html?mdfid=284152656&softwareid=282463166&release=1.0.1.10
    Try to create a backup of your configuration and then upgrade the firmware; after that, perform a factory reset and upload your configuration. Let me know if that worked for you.
    Greetings,
    Johnnatan Rodriguez Miranda.
    Cisco Network Support Engineer.
    “Please rate useful posts so other users can benefit from it”

  • Error in JMS since SP5

    We are experiencing this problem after upgrading from WLS 8.1 SP4 to SP5.
              OS is Red Hat Linux 3 Update 3.
              DB is Oracle 10.2.
              The server is not clustered.
              Is this a problem in the WLS distribution, or possibly wrong config settings?
              Thanks for any help
              Martin Luga
              <11.01.2006 11.42 Uhr CET> <Error> <Kernel> <BEA-000802> <ExecuteRequest failed
              java.util.ConcurrentModificationException.
              java.util.ConcurrentModificationException
                   at java.util.HashMap$HashIterator.nextEntry(HashMap.java:782)
                   at java.util.HashMap$EntryIterator.next(HashMap.java:824)
                   at java.util.HashMap.putAllForCreate(HashMap.java:424)
                   at java.util.HashMap.clone(HashMap.java:656)
                   at weblogic.jms.backend.BEDestination.securityCheck(BEDestination.java:5281)
                   at weblogic.jms.backend.BEDestination.access$900(BEDestination.java:93)
                   at weblogic.jms.backend.BEDestination$3.expireTimeout(BEDestination.java:5260)
                   at weblogic.jms.backend.BETimerNode.execute(BETimerNode.java:132)
                   at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:224)
                   at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:183)
              >

    Along similar lines, I just saw the problem, though we are running SP4 and the trace is slightly different.
              Got it by starting a number of clients at the same time, all of whom call the same method for the same stateless session bean.
    <Jan 19, 2006 3:39:27 PM EST> <Warning> <RMI> <BEA-080003> <RuntimeException thrown by rmi server: com.wmc.ejb.proxy.ejb_ContainerDataSetProxy_pu0pbm_EOImpl.load(Lcom.armanta.comm.SessionContext;)
              java.util.ConcurrentModificationException.
              java.util.ConcurrentModificationException
                   at java.util.HashMap$HashIterator.nextEntry(HashMap.java:782)
                   at java.util.HashMap$EntryIterator.next(HashMap.java:824)
                   at java.util.HashMap.writeObject(HashMap.java:976)
                   at sun.reflect.GeneratedMethodAccessor27.invoke(Unknown Source)
                   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
                   at java.lang.reflect.Method.invoke(Method.java:324)
                   at java.io.ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:809)
                   at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1296)
                   at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1247)
                   at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1052)
                   at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1332)
                   at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1304)
                   at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1247)
                   at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1052)
                   at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:278)
                   at java.util.LinkedList.writeObject(LinkedList.java:685)
                   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
                   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
                   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
                   at java.lang.reflect.Method.invoke(Method.java:324)
                   at java.io.ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:809)
                   at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1296)
                   at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1247)
                   at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1052)
                   at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:278)
                   at weblogic.common.internal.ChunkedObjectOutputStream.writeObject(ChunkedObjectOutputStream.java:120)
                   at weblogic.rjvm.MsgAbbrevOutputStream.writeObject(MsgAbbrevOutputStream.java:93)
                   at com.wmc.ejb.proxy.ejb_ContainerDataSetProxy_pu0pbm_EOImpl_WLSkel.invoke(Unknown Source)
                   at weblogic.rmi.internal.BasicServerRef.invoke(BasicServerRef.java:477)
                   at weblogic.rmi.cluster.ReplicaAwareServerRef.invoke(ReplicaAwareServerRef.java:108)
                   at weblogic.rmi.internal.BasicServerRef$1.run(BasicServerRef.java:420)
                   at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:363)
                   at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:147)
                   at weblogic.rmi.internal.BasicServerRef.handleRequest(BasicServerRef.java:415)
                   at weblogic.rmi.internal.BasicExecuteRequest.execute(BasicExecuteRequest.java:30)
                   at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:219)
                   at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:178)
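              For context, the exception above is standard java.util.HashMap behaviour: if one thread mutates a HashMap while another thread iterates it (which is what HashMap.writeObject does when RMI serializes the EJB result), a ConcurrentModificationException is thrown. The WebLogic frames are internal, but on the application side the usual defence is to hand RMI a snapshot taken under synchronization rather than the live map. A minimal illustrative sketch, not the WebLogic source; class and method names are made up:

import java.util.HashMap;
import java.util.Map;

public class SnapshotExample {

    private final Map<String, String> live = new HashMap<String, String>();

    public synchronized void put(String key, String value) {
        live.put(key, value);
    }

    // Return a copy; the copy is private to the caller, so later mutations
    // of 'live' cannot race with serialization of the returned map.
    public synchronized Map<String, String> snapshot() {
        return new HashMap<String, String>(live);
    }
}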

  • How to map revision level materials to already created inspection plan?

    Dear All,
    I have created a single inspection plan for many existing raw materials. Now it has been found that each raw material has a different revision level as well as a different valid-from date.
    My inspection plan was created on 02.07.2014, and each raw material has a different revision level with a valid-from date that is also much older (like 11.03.2012 for revision level 09).
    My requirement is: how do I correct my inspection plan so that it works for each assigned raw material, so that no revision-level-related problem is created for the inspection lot, and so that my inspection lot is created in REL status in transaction QA32, i.e. the inspection plan is assigned automatically as per standard SAP functionality?
    Thanks,
    Narresh

    Hi Amol,
    Thanks for your reply.
    In this business scenario, we are not creating an inspection plan for each raw material, because there is similarity in inspection across 40-50 raw materials and the total number of raw materials is around 500. That would be very difficult and would create unnecessary data in the system.
    As per my understanding, the revision level belongs to the material master revision; if the material master changes, why does it affect the inspection plan so much? Also, does it mean that for every future revision we will have to create a new inspection plan? Hence I believe that if I make a back-dated inspection plan for the raw materials, it may work. Is my understanding correct?
    Expecting much better solution from QM experts.
    Thanks,
    Narresh

  • Crystal Report in jsp

    Hi all,
    How can I incorporate Crystal Report XI in my JSP Project? I am using Netbeans IDE and Tomcat Server.
    Help me in this....
    Thankx in adv.
    AE

    I worked with Crystal Reports a long time ago, so I am not going to give an exact answer, but I can give you an outline to the best of my knowledge.
    To generate a Crystal report:
    Step 1
    You have to start the Crystal Reports (page and image) server (which you installed on your server).
    Before generating any report, first check whether the sample report page is working or not.
    (Currently version 9 or higher is available, but version 7 is good for you; the two differ hugely, so install one or the other, otherwise problems will arise.)
    Step 2
    Generate the Crystal report (generate the .rpt files) and save it.
    Step 3
    You have to put the Crystal Reports (.rpt) files in a particular folder, such as Tomcat's webapps.
    Step 4
    After that you just have to link to your .rpt file, for example with an <a> (anchor) tag in HTML.

  • Real application cluster installation

    I'm doing an install of RAC on Linux AS 2.1 using OCFS. I am following the
    directions from the 'Step by Step RAC Linux Installation'. I got the Cluster
    Manager to install correctly and started Cluster Manager on both nodes.
    However, when I run the installer again, the second node doesn't show up on the
    "Cluster Node Selection" screen. I think both nodes should be shown on the
    selection screen. What is wrong?

    There is a new book on Oracle9i RAC that discusses in detail all issues with Oracle Real Application Clusters installation problems.
    Here is a link to it:
    http://www.dba-oracle.com/bp/bp_book1_rac.htm
    Hope this helps. . . .

  • Cluster Synchronization/Communication

    Hi,
    What is SAP's suggested/preferred way to implement cluster synchronization/communication in a NetWeaver AS Java cluster? For example, an application deployed on two instances manages a RAM-based cache. This cache needs to be synchronized somehow; at least a flush triggered on one instance should result in a flush on the other instance. I would use JMS for this kind of situation, and since the AS seems to be J2EE 1.3 compatible this should be no problem, right? Are there other suggested/preferred ways of implementing this communication?
    Best regards,
    Fabian
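    To illustrate the JMS approach described in the question, here is a minimal sketch of a cluster-wide cache flush over a topic: each instance subscribes at startup, and whichever instance changes the underlying data publishes a flush message that every subscriber (the sender included) reacts to by clearing its local cache. The JNDI names and the class name are assumptions for illustration, not names from any particular server configuration:

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.Session;
import javax.jms.Topic;
import javax.jms.TopicConnection;
import javax.jms.TopicConnectionFactory;
import javax.jms.TopicSession;
import javax.naming.InitialContext;

public class CacheFlushBroadcaster implements MessageListener {

    // The per-instance RAM cache that needs cluster-wide invalidation.
    private final Map localCache = Collections.synchronizedMap(new HashMap());

    private TopicConnection connection;
    private Topic topic;

    public void init() throws Exception {
        InitialContext ctx = new InitialContext();
        TopicConnectionFactory factory =
                (TopicConnectionFactory) ctx.lookup("jms/CacheFlushFactory"); // assumed JNDI name
        topic = (Topic) ctx.lookup("jms/CacheFlushTopic");                    // assumed JNDI name
        connection = factory.createTopicConnection();
        TopicSession session = connection.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
        session.createSubscriber(topic).setMessageListener(this);             // every instance listens
        connection.start();
    }

    // Called on the instance that changed the underlying data.
    public void broadcastFlush() throws JMSException {
        TopicSession session = connection.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
        try {
            session.createPublisher(topic).publish(session.createTextMessage("FLUSH"));
        } finally {
            session.close();
        }
    }

    // Delivered to every subscriber, so all instances (sender included) flush.
    public void onMessage(Message message) {
        localCache.clear();
    }
}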

    Not sure if you've found an answer or are still looking, but since I was looking for an answer to the same thing and found this post on Google, I'll post this here for any other Googlers.
    From the release notes for Sun Cluster 3.2 Geographic Edition:
    Sun Cluster Manager Requires Same Root Password on Partner Clusters (6260505)
    Problem Summary: To use the Sun Cluster Manager graphical user interface (GUI), the root password must be the same on all nodes of both clusters in the Sun Cluster Geographic Edition deployment.
    Workaround: If you use Sun Cluster Manager to configure your clusters, ensure that the root password is the same on every node of both clusters. If you prefer to not set the root password identically on all nodes, use the command-line interface to configure your clusters.
    I had the exact same error, changed the root passwords to match, and the error goes away, so apparently that was the issue.

  • EMCC certificates

    Hi,
    I am currently looking at deploying EMCC between two 8.62 CUCM clusters. I have set this up in the lab (two single-node clusters) without problems, but I have a question with regard to when we go live.
    When exporting the certificates, consolidating, and then importing: do I need to export from EACH server and then import to each server, i.e. Pubs, Subs and TFTP?
    I believe that I would only do this on the Pub, as it will replicate the certs around, but I just wanted to check.
    thanks
    mark

    Hi,
    This is a known issue with 12.1.0.4 Cloud Control Agent deployments when the OMS ORACLE_HOME is installed on a Windows system on a drive other than "C:".  Therefore it's possible for a non-windows agent to hit this problem (if it is being deployed from a Windows OMS)
    You can refer to below doc for the fix
    EM 12c R4: Agent Install from a 12.1.0.4 OMS on Windows fails with 'Creation of Plugin archive failed' (Doc ID 1681463.1)
    Regards,
    Rahul

  • Stock back tracking

    Sirs,
    We don't have batch management active in our plant,
    but I need to track the GR (and perhaps the PR/PO requisitioner) that raised it.
    That is, I need the bifurcation of my current stock GR-requisitioner-wise.
    The problem I am facing is that even though I have the stock in the plant, users have created PRs/POs and GRs are also taking place.
    We have currently put a restriction on creating PRs for already available stock.
    Is there any way to trace the PR/PO requisitioner in the above case?

    Hi,
    Without batch management you cannot achieve this.
    For example, you receive three deliveries of a material in differing quantities (let's say one was for 10 items, another for 15 and the final one for 25).
    So you have 50 pieces in stock.
    When you issue some you have no way of indicating WHICH GR the issued stock relates to.
    So how can you expect the system to identify which items are being issued if there is no unique identity attached to the stock?
    You must use some kind of batch management to make sure that your stock is identified whenever you move it (GR, GI, transfer etc.).
    The stock will ALWAYS be issued / consumed at the standard or moving average price of the material master record and NOT according to the value of the receipt, even if you use LIFO or FIFO etc. (because these are re-valuation methods, i.e. they revalue the stock at a point in time; they don't affect each and every movement).
    Steve B

  • iPrint Manager segfault

    Hi,
    after running for some months without problems, today the iPrint Manager
    does not work anymore. It starts, but stops immediately because of a
    segfault.
    ipsmd.log shows:
    WARNING The iPrint Manager has just experienced a segfault or fatal
    error. It will be restarted by its monitor process.
    The process is loaded and unloaded several times, and then the monitor
    process does not start ipsmd anymore.
    The only TID I could find is 7000747. It lists the message shown above,
    but in my case the manager does not start again at all (in the case
    described in the TID the manager does still start, but some printers are
    corrupt; not much better).
    I have requested the debug rpm, as stated in the TID, but no answer from
    Novell so far.
    Does anyone have experienced this problem before and know a solution?
    Platform is OES2 (Linux-based), with iPrint clustered (but the problem exists on
    both machines).
    Thanks in advance,
    Frank

    I have experienced this problem when the Print Manager database was somehow broken. I re-created the Print Manager. Hope this helps.

  • I've only had my iphone 5s for a week. I keep getting an error message of "Server has stopped responding."  I need the server to work. Does anyone know if there is a "fix" for the problem? Other wise, I probably best return for a refund and get a Samsung.

    I've only had my iphone 5s for a week. I keep getting an error message of "Server has stopped responding."  I need the server to work. Does anyone know if there is a "fix" for the problem? Other wise, I probably best return for a refund and get a Samsung.  Thanks

    sandyzotz wrote:
    Other wise, I probably best return for a refund and get a Samsung.
    Unlikely.  Based on the complete lack of detail of the issue provided it is entirely possible the same issue would occur.
    Unless and until the user provides some actual details of the problem, there is nothing the indicate that the issue is with the iPhone.
