JMS Cache updates in existing transactions

We have two application instances running, with synchronization over JMS. In the following scenario, what will be the outcome?
- AppInstance1 makes changes to Object1 and commits. These changes are sent to the JMS queue.
- AppInstance2 is in the middle of a transaction when it receives the cache updates. The current transaction includes Object1 (the object for which the updates are received).
Does the transaction in Instance2 get aborted upon receiving the cache sync, or are the mechanisms at work in TopLink smart enough to merge the non-conflicting fields?
Thanks,
Anders,

This depends on your locking mechanism and has little to do with cache synch. Cache synchronization does not replace the requirement for a locking mechanism.
If you are using no locking, then no matter whether you have a single server or multiple servers with or without cache synch, there is the potential for concurrent transactions to overwrite each other's data.
If you are using optimistic locking, then no matter whether you have a single server or multiple servers with or without cache synch, your second transaction will get an optimistic lock error and will be rolled back.
If you are using pessimistic locking, then there is not much point in using cache synch, as you will be accessing the database anyway. The second transaction will block on its read of the object until the first transaction completes, or throw an error on read if using no_wait.
The benefit that cache synchronization provides is that, because changes are synchronized between servers, the likelihood of getting an optimistic lock exception is greatly reduced. It can also be used with infrequently updated objects that do not use any locking, or objects for which locking is not an issue.
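To make the optimistic-locking case concrete, here is a minimal sketch using TopLink version locking and a UnitOfWork. The Object1 setter and the VERSION column are hypothetical, and the oracle.toplink package and method names are as I recall them from 10.1.3, so treat the details as an assumption rather than a definitive recipe:

    import oracle.toplink.descriptors.ClassDescriptor;
    import oracle.toplink.exceptions.OptimisticLockException;
    import oracle.toplink.sessions.Session;
    import oracle.toplink.sessions.UnitOfWork;

    public class Object1LockingExample {

        // Descriptor amendment: enable version locking on an assumed numeric VERSION column.
        public static void addLocking(ClassDescriptor descriptor) {
            descriptor.useVersionLocking("VERSION");
        }

        // Roughly what AppInstance2's update would look like; setSomeField is hypothetical.
        public void updateObject1(Session session, Object1 object) {
            UnitOfWork uow = session.acquireUnitOfWork();
            Object1 clone = (Object1) uow.registerObject(object);
            clone.setSomeField("new value");
            try {
                uow.commit();
            } catch (OptimisticLockException e) {
                // Another instance committed a newer version first: this transaction is
                // rolled back, as described above. Refresh and retry, or report the conflict.
            }
        }
    }

The point is only that the conflict surfaces as an exception at commit time on the losing server; cache synchronization reduces how often this happens but does not remove the need for a locking policy.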

Similar Messages

  • Error in Directory Cache Update

    Hi,
    Because we changed from two SLDs (PROD & DEV) to one (DEV), we made all the changes described in note 720717.
    Everything seems to run fine except the Adapter Engine - or at least parts of it.
    When checking the Cache-Infos in the Integration Directory, we get the following error in the Integration Server (Central Adapter Engine). What do we need to do?
    br
    com.sap.aii.ib.server.abapcache.CacheRefreshException: Unable to find an associated SLD element (source element: SAP_XIIntegrationDirectory, [CreationClassName, SAP_XIIntegrationDirectory, string, Name, directory.px1.sapru03, string], target element type: SAP_XIIntegrationServer)
         at com.sap.aii.ibdir.server.abapcache.content.CacheCPA.addContent(CacheCPA.java:483)
         at com.sap.aii.ibdir.server.abapcache.content.CacheCPA.addContent(CacheCPA.java:154)
         at com.sap.aii.ibdir.server.abapcache.CacheRefreshRequest.addContent(CacheRefreshRequest.java:388)
         at com.sap.aii.ibdir.server.abapcache.CacheRefreshRequest.addContent(CacheRefreshRequest.java:326)
         at com.sap.aii.ibdir.server.abapcache.CacheRefreshRequest.processHTTPRequest(CacheRefreshRequest.java:145)
         at com.sap.aii.ibdir.server.abapcache.CacheRefreshRequest.handleHTTPRequest(CacheRefreshRequest.java:103)
         at com.sap.aii.ibdir.web.abapcache.HmiMethod_CacheRefresh.process(HmiMethod_CacheRefresh.java:67)
         at com.sap.aii.utilxi.hmis.server.HmisServiceImpl.invokeMethod(HmisServiceImpl.java:169)
         at com.sap.aii.utilxi.hmis.server.HmisServer.process(HmisServer.java:178)
         at com.sap.aii.utilxi.hmis.sbeans.HmisBeanImpl.process(HmisBeanImpl.java:86)
         at com.sap.aii.utilxi.hmis.sbeans.HmisLocalLocalObjectImpl10.process(HmisLocalLocalObjectImpl10.java:259)
         at com.sap.aii.utilxi.hmis.web.HmisServletImpl.processRequestByHmiServer(HmisServletImpl.java:290)
         at com.sap.aii.utilxi.hmis.web.workers.HmisExternalClient.doWork(HmisExternalClient.java:75)
         at com.sap.aii.utilxi.hmis.web.HmisServletImpl.doWork(HmisServletImpl.java:496)
         at com.sap.aii.utilxi.hmis.web.HmisServletImpl.doPost(HmisServletImpl.java:634)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:760)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)
         at com.sap.engine.services.servlets_jsp.server.HttpHandlerImpl.runServlet(HttpHandlerImpl.java:390)
         at com.sap.engine.services.servlets_jsp.server.HttpHandlerImpl.handleRequest(HttpHandlerImpl.java:264)
         at com.sap.engine.services.httpserver.server.RequestAnalizer.startServlet(RequestAnalizer.java:347)
         at com.sap.engine.services.httpserver.server.RequestAnalizer.startServlet(RequestAnalizer.java:325)
         at com.sap.engine.services.httpserver.server.RequestAnalizer.invokeWebContainer(RequestAnalizer.java:887)
         at com.sap.engine.services.httpserver.server.RequestAnalizer.handle(RequestAnalizer.java:241)
         at com.sap.engine.services.httpserver.server.Client.handle(Client.java:92)
         at com.sap.engine.services.httpserver.server.Processor.request(Processor.java:148)
         at com.sap.engine.core.service630.context.cluster.session.ApplicationSessionMessageListener.process(ApplicationSessionMessageListener.java:33)
         at com.sap.engine.core.cluster.impl6.session.MessageRunner.run(MessageRunner.java:41)
         at com.sap.engine.core.thread.impl3.ActionObject.run(ActionObject.java:37)
         at java.security.AccessController.doPrivileged(AccessController.java:207)
         at com.sap.engine.core.thread.impl3.SingleThread.execute(SingleThread.java:100)
         at com.sap.engine.core.thread.impl3.SingleThread.run(SingleThread.java:170)

    Hi,
    Here's a list of common errors/problems in SAP XI and their possible resolutions. This guide will help you troubleshoot your integration scenarios in SAP XI/PI. This is in no way an exhaustive list. You can add your points/ideas to this list. Please feel free to post your inputs using the comments form at the end of this article.
    Cache Update Problems
    Use transaction SXI_CACHE to update the Integration Directory cache. Alternatively, you can use the following URLs to update the CPA cache. Use XIDIRUSER to refresh the cache.
    For complete cache refresh - http://<hostname>:<port>/CPACache/refresh?mode=full
    For delta cache refresh - http://<hostname>:<port>/CPACache/refresh?mode=delta
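    If you need to trigger these refresh URLs from a script or a monitoring job rather than a browser, a minimal Java sketch could look like the following; the host, port and XIDIRUSER password are placeholders you would replace with your own values:

        import java.io.InputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.util.Base64;

        public class CpaCacheRefresh {
            public static void main(String[] args) throws Exception {
                // Placeholders: replace host/port and the XIDIRUSER password with real values.
                String url = "http://xihost:50000/CPACache/refresh?mode=full";   // or mode=delta
                String auth = Base64.getEncoder()
                        .encodeToString("XIDIRUSER:password".getBytes("UTF-8"));

                HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
                conn.setRequestProperty("Authorization", "Basic " + auth);
                System.out.println("CPA cache refresh returned HTTP " + conn.getResponseCode());
                InputStream in = conn.getInputStream();   // the response body is not important here
                in.close();
                conn.disconnect();
            }
        }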
    If this does not solve the issue, check transaction SLDCHECK to ensure that connection to SLD is available. If the connection fails, check the configuration in the transaction SLDAPICUST. Make sure that the password maintained is correct and the maintained service user is not locked.
    Now in the Integration Repository go to Environment → Clear SLD Data Cache. Also go to the Integration Directory and clear the cache using the menu Environment → Clear SLD Data Cache.
    Open the XI Start Page and click on Administration. On the Repository tab, choose Cache Overview. Refresh the cache using the buttons/icons on the right. Use XIDIRUSER to refresh the cache. Carry out cache refresh in the same way on the Directory and Runtime tabs.
    If you are facing cache update problems in your BPM (say you have modified the BPM, but the old version of the BPM is picked up at execution instead of the new one), run transaction SWF_XI_CUSTOMIZING and press F9 to carry out automatic BPM/Workflow Customizing.
    Routing Errors
    NO_RECEIVER_CASE_BE or NO_RECEIVER_CASE_ASYNC
    This means no receiver could be found. Check your Receiver Determination, then activate it and update the cache. Asynchronous messages can be manually restarted.
    TOO_MANY_RECEIVERS_CASE_BE
    More than one receiver found. Check your ID configuration to ensure that there is exactly one receiver for the synchronous message. Multiple receivers for synchronous interfaces are not permitted.
    Mapping Errors
    JCO_COMMUNICATION_FAILURE
    Check whether RFC destination AI_RUNTIME_JCOSERVER is correctly configured
    NO_MAPPINGPROGRAM_FOUND
    Ensure that the mapping program exists and is activated. If it exists, update the cache.
    EXCEPTION_DURING_EXECUTE
    This error occurs due to erroneous XML formatting. Check your mapping program and ensure that you supply valid input data.
    Messages stuck in queues
    Check the queues using transactions SMQ1 (outbound)/SMQ2 (inbound). Resolve the displayed errors. You can cancel the messages from SXMB_MONI. Execute LUW if necessary and avoid deleting entries manually.
    Conversion Errors
    Unable to convert the sender service XXXX to an ALE logical system
    This error occurs in case of scenarios with IDoc adapters. Whenever you use business systems, make sure that the corresponding logical system name is maintained in the SLD.
    Open your business system in the Integration Directory. Switch to Change mode. Access the menu path Service → Adapter Specific Identifiers. Click the button that says 'Compare with System Landscape Directory' and choose Apply. Save and activate your change list.
    In case of business services, you can manually type a logical system name in the Adapter Specific Identifiers if required. This name should match the corresponding logical system name defined in the partner SAP system's partner profiles.
    Errors on the outbound side
    Sometimes the link between SAP XI and the target system (say ERP) goes down and messages fail on the outbound side. It may not be possible to restart them using RWB or transactions like SXI_MONITOR/SXMB_MONI. In such cases, you can follow the procedure outlined in the following article - Dealing with errors on the outbound side.
    Refer to this article:
    http://help.sap.com/saphelp_nwpi71/helpdata/en/0d/28e1c20a9d374cbb71875c5f89093b/content.htm
    Refer to this portal:
    https://www.sdn.sap.com/irj/sdn/wiki?path=/display/ep/pointers%2bfor%2btroubleshooting%2bportal%2bruntime%2berrors
    Regards,
    Suryanarayana

  • How do we update an existing item in the Cache (RWB) without knowing the GUID?

    Hi Friends,
        We want to update an existing item in the cache (Value Mapping). Assume that the cache contains entries like:
         IN --> INDIA
         FR --> FRANCE
         EN --> ENGLAND
        We want to change ENGLAND to LONDON. How is it possible ... ? (Assume that the above mapping data has already been replicated from my text file/Oracle table.)
    Kindly reply.
    Kind Regards,
    Jegatheeswaran P.

    Hi Vijay,
        For whatever value mapping table we create in the Integration Directory, we can display & edit the values of a particular item, because they are stored in the XI default context 'http://sap.com/xi/XI'. In this case it is not necessary to know the context and GUID value.
        But in my case, I have replicated value mapping data from an external system (SAP table/text file/Oracle table) into RWB --> Cache Monitoring --> Runtime Cache, and I use my own context, 'http://aprilbiztec.com/FRAMEWORK/FileToFile'. In this case, how do we edit a particular line item? This is my problem.
    Kindly help me.
    Thanks in advance.

  • [SOLVED] error pacman: /var/cache/pacman/pkg exists in filesystem

    Hi guys, last night I tried to update my system and I got the following error, which seems weird:
    resolving dependencies...
    looking for conflicting packages...
    Package (1) Old Version New Version Net Change
    core/pacman 4.2.0-5 4.2.0-6 0.00 MiB
    Total Installed Size: 4.22 MiB
    Net Upgrade Size: 0.00 MiB
    :: Proceed with installation? [Y/n] y
    (1/1) checking keys in keyring [##########################################################] 100%
    (1/1) checking package integrity [##########################################################] 100%
    (1/1) loading package files [##########################################################] 100%
    (1/1) checking for file conflicts [##########################################################] 100%
    error: failed to commit transaction (conflicting files)
    pacman: /var/cache/pacman/pkg exists in filesystem
    Errors occurred, no packages were upgraded.
    It seems weird to me because the conflict is with the whole folder and not with any specific package. Any clue what is going on and how to solve it?
    Last edited by theodore (2015-01-26 19:51:37)

    clfarron4 wrote:
    theodore wrote:
    Allan wrote: Is your /var/cache/pacman/pkg a directory or a symlink?
    Yes, I have it on a different partition and then I symlinked it.
    I'd have thought it would have been better to set a different directory for the pacman cache, mounting the partition to that through /etc/fstab and using the CacheDir variable in /etc/pacman.conf, but I'll leave that to your judgement.
    If you don't want to do that, then I'll leave someone else to put up a solution (my brain's screaming bind mounts for some unknown reason).
    So, you are saying that I should enable CacheDir in /etc/pacman.conf and point it to the partition where I now have the pacman cache?
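    For reference, the suggested setup would look something like this; the device, mount point and filesystem type below are only examples, adjust them to your setup:

        # /etc/fstab - mount the cache partition at a normal directory instead of symlinking
        /dev/sdb1    /mnt/pkgcache    ext4    defaults    0 2

        # /etc/pacman.conf - point pacman's cache at that directory
        [options]
        CacheDir = /mnt/pkgcache/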

  • JMS Cache Synchronization Manager

    Hi all,
    I am trying to implement distributed cache synchronization in TopLink using JMS.
    On http://download-west.oracle.com/docs/cd/A97688_12/toplink.903/b10064/enterpri.htm#1022254 , there is an example given which has a snippet of the sessions.xml file.
    The example snippet shows a tag <naming-service-initial-context-factory> under the <cache-synchronization-manager> tag. But in the DTD given on http://download-west.oracle.com/docs/cd/A97688_12/toplink.903/b10064/a-sessio.htm#634251 , I can't find an element <naming-service-initial-context-factory>. Please let me know which one is wrong - the DTD or the example?
    Also, I tried to use JMS for distributed cache synchronization. My JMS server is running on a separate VM and I have configured info about it in the sessions.xml file. But TopLink is complaining that it cannot find the topic and connection factory. Yet I am able to look them up from a standalone program.
    Does anyone have any experience setting this up? Any examples?
    Thanks,
    Binil
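    For reference, a standalone lookup of the kind mentioned above is essentially just this; the provider URL and the JNDI names are placeholders, and WLInitialContextFactory is the standard WebLogic JNDI factory:

        import java.util.Hashtable;
        import javax.jms.Topic;
        import javax.jms.TopicConnectionFactory;
        import javax.naming.Context;
        import javax.naming.InitialContext;

        public class JmsLookupCheck {
            public static void main(String[] args) throws Exception {
                Hashtable env = new Hashtable();
                // Placeholders: the WebLogic initial context factory and the jmsServer URL.
                env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
                env.put(Context.PROVIDER_URL, "t3://jmshost:7001");

                Context ctx = new InitialContext(env);
                TopicConnectionFactory factory =
                        (TopicConnectionFactory) ctx.lookup("jms/CacheSyncConnectionFactory"); // assumed JNDI name
                Topic topic = (Topic) ctx.lookup("jms/CacheSyncTopic");                        // assumed JNDI name
                System.out.println("Looked up " + factory + " and " + topic);
                ctx.close();
            }
        }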

    Hi Doug,
    Thanks for the clarifications! :)
    I am still not able to get JMS Cache Synchronization to work :( I am new to JMS & TopLink, so I will try to explain what I am doing; please let me know if there is something I am doing wrong.
    I have an EJB application - a stateless session bean with two methods: one to fetch an entity and another to update an entity. I also have a WebLogic domain with three managed servers - appServerOne, appServerTwo and jmsServer. On appServerOne and appServerTwo I have deployed the above-mentioned application. On jmsServer, I have configured my ConnectionFactory and Topic, and their JNDI names are provided to the EJB application.
    With this setup, I run two client programs. I will list the steps below:
    1. Client1 fetches an entity whereby it is loaded to the TopLink cache. The DB logs show that a SELECT query is issued to the DB.
    2. Client2 fetches the same entity whereby it is loaded to the cache; again a SELECT query is issued.
    3. Client1 updates that entity using a UOW; the logs don't show a DB UPDATE statement, though. But the updates are indeed made to the DB when I check.
    4. Client1 fetches the entity again; the updates are visible and the JDBC logs show that a trip to the DB is not done.
    5. After waiting for some time, Client1 fetches the same entity again and I get the stale data! :(
    I have enabled logging, so I am hoping that TopLink will log a message when a JMS notification is posted. Also, I have made the cache synchronization synchronous, so that the transaction shouldn't commit before the notification is sent.
    I tried making the stateless session bean stateful and it still didn't work. Is there anything else I need to do?
    Cheerio,
    Binil
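    For context, an update of the kind described in step 3 would look roughly like this in TopLink; the Account class, its field and the query are hypothetical, and the important detail is that only the clone returned by the UnitOfWork is modified, with the commit being what should trigger the synchronous JMS notification:

        import oracle.toplink.expressions.ExpressionBuilder;
        import oracle.toplink.sessions.Session;
        import oracle.toplink.sessions.UnitOfWork;

        public class EntityUpdater {
            // Sketch of the session-bean update method; Account/setName are hypothetical.
            public void updateEntity(Session session, long id, String newName) {
                UnitOfWork uow = session.acquireUnitOfWork();
                ExpressionBuilder eb = new ExpressionBuilder();
                Account account = (Account) uow.readObject(
                        Account.class, eb.get("id").equal(id));
                account.setName(newName);   // modify the UnitOfWork clone, not the cached original
                uow.commit();               // with synchronous cache sync, the JMS notification
                                            // should be sent as part of this commit
            }
        }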

  • Pof Serialization Error leads to partial cache updates in XA Tran

    I am using the Coherence JCA adapter to enlist in XA transactions with the database operations. The data is stored in distributed caches, with the cache member running on a WebLogic server with storage disabled. POF is used for serialization. As part of a single transaction, multiple caches, obtained from the CacheAdapter, are updated. The application code does explicit updates to the cache and the database within the same transaction, with the write to the cache happening after the write to the database has been executed.
    It is being observed that when an exception happens during the serialization of an object all the cache updates prior to this error are not rolled back. Namely,
    I have CacheA for Object A, Cache B for Object B and Cache C for Object C.
    I am updating A, B and C within the same transaction, in the same order as the objects are listed. So the database for A is updated followed by the cache update for A, the database for B is updated followed by the cache update for B, and similarly for C.
    If there is an error while serializing C, all the database updates are rolled back, however updates to A and B are committed to the cache.
    Why aren't all the cache updates being rolled back? Has this already been fixed in Coherence 3.6?
    Thanks,
    Shamsur
    Application Server: Weblogic 10.3
    jdbc driver : XA thin driver
    coherence : 3.5.0
    Caused by: java.lang.IllegalStateException: decimal value exceeds IEEE754r 128-bit range: 7777777788888888888899999999999900000000000000044444444444447777777777777.00
    at com.tangosol.io.pof.PofHelper.calcDecimalSize(PofHelper.java:1517)
    at com.tangosol.io.pof.PofBufferWriter.writeBigDecimal(PofBufferWriter.java:562)
    at com.tangosol.io.pof.PofBufferWriter.writeObject(PofBufferWriter.java:1325)
    at com.tangosol.io.pof.PofBufferWriter$UserTypeWriter.writeObject(PofBufferWriter.java:2092)
    at com.apx.core.datalayer.data.basictypes.BigDecimalMoneyImpl.writeExternal(BigDecimalMoneyImpl.java:127)
    at com.tangosol.io.pof.PortableObjectSerializer.serialize(PortableObjectSerializer.java:88)
    at com.tangosol.io.pof.PofBufferWriter.writeObject(PofBufferWriter.java:1439)
    at com.tangosol.io.pof.PofBufferWriter$UserTypeWriter.writeObject(PofBufferWriter.java:2092)
    at com.apx.instrument.datalayer.data.domain.impl.DOtcInstrumentImpl.writeExternal(DOtcInstrumentImpl.java:200)
    at com.apx.core.datalayer.data.impl.AbstractDomainObject.writeExternal(AbstractDomainObject.java:109)
    ... 185 more
    at com.tangosol.coherence.ra.component.connector.resourceAdapter.cciAdapter.CacheAdapter$ManagedConnection$LocalTransaction.commit(CacheAdapter.CDB:37)
    at weblogic.connector.security.layer.AdapterLayer.commit(AdapterLayer.java:570)
    at weblogic.connector.transaction.outbound.NonXAWrapper.commit(NonXAWrapper.java:84)
    at weblogic.transaction.internal.NonXAServerResourceInfo.commit(NonXAServerResourceInfo.java:330)
    at weblogic.transaction.internal.ServerTransactionImpl.globalPrepare(ServerTransactionImpl.java:2251)
    at weblogic.transaction.internal.ServerTransactionImpl.internalCommit(ServerTransactionImpl.java:270)
    at weblogic.transaction.internal.ServerTransactionImpl.commit(ServerTransactionImpl.java:230)
    at weblogic.transaction.internal.TransactionManagerImpl.commit(TransactionManagerImpl.java:283)
    at $Proxy150.create(Unknown Source)
    javax.resource.spi.LocalTransactionException: CoherenceRA: Commit failed:
    java.lang.RuntimeException: error with the class: com.apx.instrument.datalayer.data.domain.impl.DOtcInstrumentImpl
    at com.apx.core.datalayer.data.impl.AbstractDomainObject.writeExternal(AbstractDomainObject.java:111)
    at com.tangosol.io.pof.PortableObjectSerializer.serialize(PortableObjectSerializer.java:88)
    at com.tangosol.io.pof.PofBufferWriter.writeObject(PofBufferWriter.java:1439)
    at com.tangosol.io.pof.ConfigurablePofContext.serialize(ConfigurablePofContext.java:338)
    at com.tangosol.util.ExternalizableHelper.serializeInternal(ExternalizableHelper.java:2508)
    at com.tangosol.util.ExternalizableHelper.toBinary(ExternalizableHelper.java:205)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$ConverterValueToBinary.convert(DistributedCache.CDB:3)
    at com.tangosol.util.ConverterCollections$AbstractConverterEntry.getValue(ConverterCollections.java:3333)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.putAll(DistributedCache.CDB:19)
    at com.tangosol.util.ConverterCollections$ConverterMap.putAll(ConverterCollections.java:1570)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$ViewMap.putAll(DistributedCache.CDB:1)
    at com.tangosol.coherence.component.util.SafeNamedCache.putAll(SafeNamedCache.CDB:1)
    at com.tangosol.coherence.component.util.collections.WrapperMap.putAll(WrapperMap.CDB:1)
    at com.tangosol.coherence.component.util.DeltaMap.resolve(DeltaMap.CDB:9)
    at com.tangosol.coherence.component.util.deltaMap.TransactionMap.commit(TransactionMap.CDB:1)
    at com.tangosol.coherence.component.util.TransactionCache.commit(TransactionCache.CDB:14)
    at com.tangosol.coherence.component.util.transactionCache.Local.commit(Local.CDB:1)
    at com.tangosol.coherence.ra.component.connector.resourceAdapter.cciAdapter.CacheAdapter$ManagedConnection$LocalTransaction.commit(CacheAdapter.CDB:25)
    at weblogic.connector.security.layer.AdapterLayer.commit(AdapterLayer.java:570)
    at weblogic.connector.transaction.outbound.NonXAWrapper.commit(NonXAWrapper.java:84)
    at weblogic.transaction.internal.NonXAServerResourceInfo.commit(NonXAServerResourceInfo.java:330)
    at weblogic.transaction.internal.ServerTransactionImpl.globalPrepare(ServerTransactionImpl.java:2251)
    at weblogic.transaction.internal.ServerTransactionImpl.internalCommit(ServerTransactionImpl.java:270)
    at weblogic.transaction.internal.ServerTransactionImpl.commit(ServerTransactionImpl.java:230)
    at weblogic.transaction.internal.TransactionManagerImpl.commit(TransactionManagerImpl.java:283)
    at org.springframework.transaction.jta.JtaTransactionManager.doCommit(JtaTransactionManager.java:1009)
    at org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:754)
    at org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:723)
    at org.springframework.transaction.interceptor.TransactionAspectSupport.commitTransactionAfterReturning(TransactionAspectSupport.java:374)
    at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:120)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)

    Hi SR-APX
    The problem is that, even though you are using the JCA adapter, Coherence (pre-3.6) is not really transactional. Once you commit, all that data is still pushed out to the distributed cluster members, which all work independently. An error on one member will not stop data from being written successfully to others.
    In Coherence 3.6 there are real transactions but you would need to see if the limitations on them fit your use-cases.
    JK
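    As a side note on the serialization failure itself: the trace shows PofHelper rejecting a BigDecimal that does not fit POF's IEEE754r decimal128 encoding. A defensive writeExternal/readExternal pair along the following lines is one way to avoid blowing up mid-commit; the 34-digit limit and the String fallback are assumptions on my part, not documented Coherence behaviour:

        import java.io.IOException;
        import java.math.BigDecimal;
        import com.tangosol.io.pof.PofReader;
        import com.tangosol.io.pof.PofWriter;
        import com.tangosol.io.pof.PortableObject;

        public class MoneyValue implements PortableObject {
            private BigDecimal amount;

            public void writeExternal(PofWriter out) throws IOException {
                // Assumed limit: decimal128 holds at most 34 significant digits, per the
                // "exceeds IEEE754r 128-bit range" error above. Fall back to a String otherwise.
                boolean fitsPof = amount != null && amount.precision() <= 34;
                out.writeBoolean(0, fitsPof);
                if (fitsPof) {
                    out.writeBigDecimal(1, amount);
                } else {
                    out.writeString(1, amount == null ? null : amount.toPlainString());
                }
            }

            public void readExternal(PofReader in) throws IOException {
                if (in.readBoolean(0)) {
                    amount = in.readBigDecimal(1);
                } else {
                    String s = in.readString(1);
                    amount = (s == null) ? null : new BigDecimal(s);
                }
            }
        }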

  • JMS Listener update user

    I am using IDM 8.1. I've set up the JMS Listener to active-sync queue messages into the Sun IDM repository.
    I found that it considers any incoming queue message as a create-user request, even though I've checked Create Unmatched Accounts in the synchronization policy. Does anyone know how to make the adapter launch the create/update process correctly? I've tried adding an activeSync.diffAction value as one of the parameters in the incoming message, but that failed too.
    Thx!

    You need to set up a correlation rule for the adapter to correlate active sync updates to existing accounts.

  • Capturing Info about Updates to previous transactions

    Hello Experts,
    I am a newbie and I have a hard time figuring out the following.
    In accounting and other applications like stocks, there are closing balances that are brought forward to the next week/month/year.
    How can I write a database object for this? Say I am in week 4 and, for some reason, I need to UPDATE a previous transaction (one that happened in week 2). That changes the closing balance at the end of week 2; accordingly, the opening and closing balances for week 3 change, and the opening balance for week 4 changes as well. We have similar scenarios in ledgers etc. on a yearly basis.
    I will be very grateful if someone can help me understand how this can be implemented using Oracle/RDBMS objects, and how I can capture these changes automatically.
    Any advice, pointers, or a small application will do me a world of good.
    Thanks in advance.
    Regards
    Martin St. Lou
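    The arithmetic behind the carry-forward is simply opening(N) = closing(N-1) and closing(N) = opening(N) + net(N), so an adjustment to an earlier week ripples through every later week. A small in-memory Java sketch of that recomputation follows; in a real system the same logic would live in a trigger, a PL/SQL procedure, or a view over the transaction table:

        import java.math.BigDecimal;

        public class BalanceCarryForward {
            // weeklyNet[i] holds the net movement for week i+1. After an UPDATE to a prior
            // week's transaction, recompute all later opening/closing balances:
            // opening(N) = closing(N-1), closing(N) = opening(N) + net(N).
            public static void printBalances(BigDecimal openingWeek1, BigDecimal[] weeklyNet) {
                BigDecimal opening = openingWeek1;
                for (int week = 0; week < weeklyNet.length; week++) {
                    BigDecimal closing = opening.add(weeklyNet[week]);
                    System.out.println("week " + (week + 1)
                            + " opening=" + opening + " closing=" + closing);
                    opening = closing;   // carried forward to the next week
                }
            }

            public static void main(String[] args) {
                // Example: adjusting week 2's net movement changes week 2's closing balance
                // and every opening/closing balance after it.
                BigDecimal[] net = {
                    new BigDecimal("100"), new BigDecimal("-30"),   // weeks 1-2
                    new BigDecimal("50"),  new BigDecimal("20")     // weeks 3-4
                };
                printBalances(new BigDecimal("0"), net);
            }
        }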

    Camus wrote:
    The author of that blog post used cower instead of yaourt, but I don't think cower allows me to sync the AUR database into a custom dir either. Or am I missing something? I don't know how yaourt and cower sync the AUR database anyway.
    I actually already wrote such a notifier for GNOME in bash. But I want to display more info and I want to include AUR packages.
    The perfect tool in my situation would be something that allows me to sync official and AUR databases into custom dir and then displays list of updates with:
    -name of package
    -current version
    -updated version
    -repo
    I can currently do all of the above, but just with official repos and not as elegantly as I want.
    Not sure what you mean by 'sync AUR database' (no such thing exists as it does for the binary repos), but cower will let you download to wherever you want.

  • JMS Cache Synchronization - Merge or Database Read?

    When an object in one JVM is changed, does TopLink's JMS cache synchronization mechanism merge the changes into the other JVMs in a cluster, or does it just signal that the object is dirty so that the object is re-read from the database?
    If the answer is 'merge' does a post merge event get called on the object being merged?
    thanks
    Steve

    In TopLink 10.1.3 both can happen: the changes can be merged, or an object can be marked invalid and TopLink will refresh it as needed. In 9.0.4.x the changes are merged into the other caches (if an object of the correct version exists in that cache).
    In the case of a merge, the postMerge() event will be called.
    --Gordon
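    For reference, hooking that event looks roughly like this via a descriptor amendment; the class name is hypothetical and the oracle.toplink package names are as I recall them for 10.1.3, so verify against your release:

        import oracle.toplink.descriptors.ClassDescriptor;
        import oracle.toplink.descriptors.DescriptorEvent;
        import oracle.toplink.descriptors.DescriptorEventAdapter;

        public class MergeListenerExample {
            // Descriptor amendment method: register a listener that fires after a
            // cache-synchronization merge updates the shared cache copy.
            public static void addMergeListener(ClassDescriptor descriptor) {
                descriptor.getEventManager().addListener(new DescriptorEventAdapter() {
                    public void postMerge(DescriptorEvent event) {
                        // event.getObject() is the cache copy that just received the merged changes.
                        System.out.println("postMerge for " + event.getObject());
                    }
                });
            }
        }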

  • Memory leak when using JMS Cache Coordination

    We have two WebLogic Server 8.1 processes running Java 1.4.2 on Solaris, using TOPLink 10.1.3 with JMS Cache Coordination. We observe that the heap fills with uncollectable instances of TOPLink-mapped classes. In our production system, under full load, this completely fills a 3.6 GB heap in 30 minutes, requiring a process restart. The problem goes away if we turn off cache coordination.
    It appears that these instances are UnitOfWork or some other kind of toplink-created clone. We have not yet been able to successfully analyse this problem with a heap profiler.
    Has anyone else experienced this problem? Any suggestions for debugging?
    Thanks in advance.

    We do set the command manager to asynchronous mode. (Our debug tracing confirms that the CommandPropagator method asynchronousPropagateCommand() is invoked, not synchronousPropagateCommand().) We started with asynchronous messaging and have actually never tried running in synchronous mode.
    The Java bug I referenced in my last post (which I have confirmed with a stand-alone test case) indicates that, for Java 1.4.2 and earlier, it is never OK to not start() a Thread -- it will always produce a memory leak. So I am quite surprised that anyone on pre-1.5 Java has ever had success with synchronous cache coordination messaging. Do you think it is possible that all of the pre-release tests and existing customer installations of 10.1.3 cache coordination are using Java 1.5?!
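    The stand-alone test case referred to above amounts to little more than this; per the bug described, on a 1.4.2 JVM the constructed-but-never-started Thread objects accumulate, while on 1.5+ they are collected:

        // Minimal reproduction of the pre-1.5 behaviour described above: Thread objects
        // that are constructed but never start()ed are never garbage collected.
        public class UnstartedThreadLeak {
            public static void main(String[] args) {
                Runnable noop = new Runnable() {
                    public void run() { }
                };
                for (int i = 0; i < 1000000; i++) {
                    new Thread(noop);   // constructed, never started, never collected on 1.4.2
                    if (i % 100000 == 0) {
                        long used = Runtime.getRuntime().totalMemory()
                                  - Runtime.getRuntime().freeMemory();
                        System.out.println(i + " threads created, ~" + (used / 1024) + " KB used");
                    }
                }
            }
        }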
    Our heap profiling indicates that the instances of CommandPropagator which are pinned (i.e., those not started) are allocated in the run() method of CommandPropagator itself. So it seems that the instance of CommandPropagator, after it is started, allocates another one in its run method. Looking at the bytecodes for the run() method (using javap -c) shows that in one branch of the code a second CommandPropagator is indeed allocated and then handed off to the launchContainerRunnable method of a ServerPlatform.
    Since these secondary CommandPropagators are the ones which are not started, we looked into the WebLogic_8_1_Platform class and found that its implementation of launchContainerRunnable (inherited from ServerPlatformBase) does a
    new Thread(runnable).start() where the argument runnable is the CommandPropagator.
    So the CommandPropagator itself is not started(), but is instead run on another Thread.
    We are experimenting with a custom ServerPlatform which overrides launchContainerRunnable:
        server.setServerPlatform(
            new WebLogic_8_1_Platform(server) {
                public void launchContainerRunnable(Runnable runnable) {
                    if (runnable instanceof Thread) {
                        ((Thread) runnable).start();
                    } else {
                        super.launchContainerRunnable(runnable);
                    }
                }
            });
    This starts the argument Runnable directly if it is actually a Thread. Our early, small-scale tests indicate that this change eliminates the memory leak. We are now testing in a production-replicate environment under full load.
    But this feels like a hack. I have no idea what TOPLink behavior other than cache coordination flows through this method. Does anyone know what else this ServerPlatform method is used for? Is there a better way to do this? Are there any adverse consequences to our hack? Any suggestions would be appreciated.
    Thanks in advance.

  • New user status profile applied to existing transactions

    Hello experts,
    We have been live for nearly a year and a half with CRM service 5.0 and are now preparing to upgrade to 2007. A new requirement to manage transaction item status through authorizations (B_USERSTAT) has been introduced at contract item level. We have tens of thousands of existing line items in production, but a user status was never maintained for the associated item categories.
    We can apply a new status profile to the relevant item category, but that seems to only be enabled in new transactions, not existing ones. Existing transactions are therefore not relevant for the new authorization settings in B_USERSTAT authorization object.
    Does anyone know of a utility or means by which we can "refresh" existing business transactional data to reflect the new user status profile?
    Your assistance is appreciated.
    Allon

    Hi Robert,
    I have a requirement like this. In my case, I removed the existing user status 'Open' from the status profile for service tickets. It is reflected in my newly created tickets, but all existing tickets still have the same status profile, i.e. 'Open' is still there. How do I update the existing records?
    regards
    Logu

  • How can I update an existing item in SAP using a CSV file?

    Hi,
    I am trying to update an existing item in SAP using a CSV file.
    In the message log I get an error message that the item already exists.
    What should I do in order to update the existing record?
    Thanks, Udi

    Hi..........
    I would suggest you use a tab-delimited file and choose the proper option in order to update the item master in DTW.
    Regards,
    Rahul

  • When I log in to update my existing apps, the login window shows the wrong Apple ID. It's all grayed out and I can't change it. How do I solve this problem?

    When I log in to update my existing apps, the login window shows the wrong Apple ID. I cannot change it because it is all grayed out. How do I solve this problem?

    Content and Apple IDs -
    Content is forever tied to the Apple ID that bought it. Apple does not transfer content from one Apple ID to another. Apple does not merge Apple IDs. You will never be able to access your content bought with one Apple ID with a new Apple ID.

  • Every time I try to update my existing apps I'm not successful, and I get a message saying it can't connect. I've tried changing my password and still no success.

    I'm getting a 'cannot connect' message every time I try to update my existing apps. I even put in a new password, thinking that was the problem. Can someone please help me? This has been going on for two weeks.

    Hello Priscilla43,
    The following article provides tips and suggestions that can help restore your connection to the iTunes Store.
    Can't connect to the iTunes Store
    http://support.apple.com/kb/TS1368
    Cheers,
    Allen

  • How do I update an existing XML file?

    Hello, I have the following XML file, gps.xml:
    <?xml version="1.0"?>
    <!DOCTYPE gps SYSTEM "gps.dtd">
    <gps>
       <latitude>43.00000</latitude>
       <longitude>-83.00000</longitude>
    </gps>
    I already have methods written to get the values. But how can I change these values and update the file to reflect these changes?
    thanks for your help!
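    One straightforward approach, assuming plain JAXP/DOM is acceptable, is to parse the file, replace the two text values, and write the document back with an identity Transformer. A minimal sketch follows; the file name matches your example, and note that the parser will try to resolve gps.dtd:

        import java.io.File;
        import javax.xml.parsers.DocumentBuilderFactory;
        import javax.xml.transform.OutputKeys;
        import javax.xml.transform.Transformer;
        import javax.xml.transform.TransformerFactory;
        import javax.xml.transform.dom.DOMSource;
        import javax.xml.transform.stream.StreamResult;
        import org.w3c.dom.Document;

        public class GpsFileUpdater {
            public static void update(File file, double latitude, double longitude) throws Exception {
                DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
                // The gps.dtd referenced by the DOCTYPE must be resolvable from the working directory.
                Document doc = dbf.newDocumentBuilder().parse(file);

                // Replace the text content of the existing elements.
                doc.getElementsByTagName("latitude").item(0)
                   .setTextContent(Double.toString(latitude));
                doc.getElementsByTagName("longitude").item(0)
                   .setTextContent(Double.toString(longitude));

                // Serialize the modified DOM back over the same file.
                Transformer t = TransformerFactory.newInstance().newTransformer();
                t.setOutputProperty(OutputKeys.DOCTYPE_SYSTEM, "gps.dtd"); // keep the DOCTYPE line
                t.transform(new DOMSource(doc), new StreamResult(file));
            }

            public static void main(String[] args) throws Exception {
                update(new File("gps.xml"), 42.12345, -83.54321);
            }
        }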

    Hi, I already have the following code. I would just like to add a method to it to update the existing XML file with different lat/lon coordinates. Could you please help me out with it? Thanks
    import java.io.*;
    import java.io.PrintWriter;
    import java.net.MalformedURLException;
    import java.net.URL;
    import javax.xml.parsers.*;
    import org.w3c.dom.*;
    import org.xml.sax.*;
    public class Gps implements java.io.Serializable {
        private double latitude_;
        private double longitude_;

        public Gps(Document doc) {
            setup(doc.getDocumentElement());
        }

        public Gps(String uri) throws IOException, SAXException, ParserConfigurationException {
            setup(uri);
        }

        public void setup(Document doc) {
            setup(doc.getDocumentElement());
        }

        // Minimal DOM-based overloads so the constructors above resolve.
        public void setup(String uri) throws IOException, SAXException, ParserConfigurationException {
            setup(DocumentBuilderFactory.newInstance().newDocumentBuilder().parse(uri));
        }

        public void setup(Element element) {
            this.latitude_ = Double.parseDouble(
                element.getElementsByTagName("latitude").item(0).getFirstChild().getNodeValue());
            this.longitude_ = Double.parseDouble(
                element.getElementsByTagName("longitude").item(0).getFirstChild().getNodeValue());
        }

        public void makeElement(Node parent) {
            Document doc;
            if (parent instanceof Document) {
                doc = (Document) parent;
            } else {
                doc = parent.getOwnerDocument();
            }
            Element element = doc.createElement("gps");
            URelaxer.setElementPropertyByDouble(element, "latitude", this.latitude_);
            URelaxer.setElementPropertyByDouble(element, "longitude", this.longitude_);
            parent.appendChild(element);
        }

        public Document makeDocument() throws ParserConfigurationException {
            Document doc = UJAXP.makeDocument();
            makeElement(doc);
            return (doc);
        }

        public final double getLatitude() {
            return (latitude_);
        }

        public final void setLatitude(double latitude) {
            this.latitude_ = latitude;
        }

        public final double getLongitude() {
            return (longitude_);
        }

        public final void setLongitude(double longitude) {
            this.longitude_ = longitude;
        }

        // Updates the in-memory values; serialize makeTextDocument() (or makeDocument())
        // back to gps.xml to persist the change.
        public final void updateXmlFile(double latitude, double longitude) {
            this.latitude_ = latitude;
            this.longitude_ = longitude;
        }

        public String makeTextDocument() {
            StringBuffer buffer = new StringBuffer();
            makeTextElement(buffer);
            return (new String(buffer));
        }

        public void makeTextElement(StringBuffer buffer) {
            buffer.append("<gps>");
            buffer.append("<latitude>");
            buffer.append(Double.toString(getLatitude()));
            buffer.append("</latitude>");
            buffer.append("<longitude>");
            buffer.append(Double.toString(getLongitude()));
            buffer.append("</longitude>");
            buffer.append("</gps>");
        }

        public String toString() {
            try {
                return (makeTextDocument());
            } catch (Exception e) {
                return (super.toString());
            }
        }
    }
