Temporal Data in MI Cache

Hello:
I would like to know whether there is a way to see the temporary data that is stored during synchronization, and to follow the flow of the data as it is stored on the MI Server. Is that possible?
Thanks
Regards
Mario

Hi,
Yes, it is possible to see the data on your MI server.
You can find the container types in table MESYHEAD and the corresponding values in table MESYBODY on your MI server (for example, browse them with transaction SE16).
This blog explains the flow of data in your MI server:
/people/arun.av/blog/2006/03/09/process-flow-of-generic-synchronization-in-mi-abap-server-component (Process Flow of Generic Synchronization in MI ABAP Server Component)
regards
Arun

Similar Messages

  • Problem getting data from a cache in Hibernate

    I am storing data inside a cache, but when I get the data from the cache it returns null.
    How can I retrieve the data from the cache?
    I am using EHCache.

    Hi,
    You have made a mistake while setting the input parameters for the BAPI. Follow these steps to set the BAPI input:
    Bapi_Goodsmvt_Getitems_Input input = new Bapi_Goodsmvt_Getitems_Input();
    wdContext.nodeBapi_Goodsmvt_Getitems_Input().bind(input);
    Bapi2017_Gm_Material_Ra input1 = new Bapi2017_Gm_Material_Ra();
    wdContext.nodeBapi2017_Gm_Material_Ra().bind(input1);
    Bapi2017_Gm_Move_Type_Ra input2 = new Bapi2017_Gm_Move_Type_Ra();
    wdContext.nodeBapi2017_Gm_Move_Type_Ra().bind(input2);
    Bapi2017_Gm_Plant_Ra input3 = new Bapi2017_Gm_Plant_Ra();
    wdContext.nodeBapi2017_Gm_Plant_Ra().bind(input3);
    Bapi2017_Gm_Spec_Stock_Ra input4 = new Bapi2017_Gm_Spec_Stock_Ra();
    wdContext.nodeBapi2017_Gm_Spec_Stock_Ra().bind(input4);
    input1.setSign("I");
    input1.setOption("EQ");
    input1.setLow("1857");
    input2.setSign("I");
    input2.setOption("EQ");
    input2.setLow("M110");
    input3.setSign("I");
    input3.setOption("EQ");
    input3.setLow("309");
    input4.setSign("I");
    input4.setOption("EQ");
    input4.setLow("W");
    wdThis.wdGetWdsdgoodsmvmtcustController().execute_BAPI_GOODSMOVEMENT_GETITEMS();
    Finally, invalidate your output node:
    wdContext.node<output node name>.invalidate();
    Also put your code inside a try/catch block so that any exception is displayed, as in the sketch below.
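    A minimal sketch of those last two steps (the output node name "Output" is hypothetical; use the actual name generated in your context):
    try {
         // Execute the RFC, then refresh the UI from the new result
         wdThis.wdGetWdsdgoodsmvmtcustController().execute_BAPI_GOODSMOVEMENT_GETITEMS();
         wdContext.nodeOutput().invalidate(); // hypothetical node name
    } catch (Exception e) {
         // Surface the backend problem instead of silently showing an empty table
         wdComponentAPI.getMessageManager().reportException(e.getMessage(), false);
    }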
    Regards,
    Amit Bagati

  • Does the client automatically cache data retrieved from the cache server?

    Hi expert,
    I have 2 questions about the client local cache. Would you please give me some suggestions?
    1. Will the client automatically cache data retrieved from the cache server locally the first time, and automatically update the local copy when it retrieves the same data from the cache server again? I went through the API reference but could not find any API to query the data currently held in the local cache.
    2. If the client does automatically cache data retrieved from the cache server, is there any way for a client to receive the events that happen to its local cache, such as entries being created, deleted or updated there? In my opinion, when getting an entry from the cache server for the first time, the MapListener's entry-created event should be triggered; when getting the same entry again, the entry-updated event should be triggered.
    However, I have tried a client with a replicated cache, a client with a partitioned cache, an extend client with a remote cache, and a client with a local cache (the front cache of a near cache); the client (whose NamedCache object has the MapListener registered) does not get any event notification after getting data from the cache server. By the way, my listener itself is OK, since when putting data the entry-created and entry-updated events are triggered.
    Your suggestions are very much appreciated. :)

    Hi
    If I were you I would read this: http://download.oracle.com/docs/cd/E14526_01/coh.350/e14510/toc.htm
    and particularly the section about Near Caching here: http://download.oracle.com/docs/cd/E14526_01/coh.350/e14510/nearcache.htm#CDEFEAJG
    which is what you are asking about in your question.
    Near Caching is how Coherence stores data locally, which is the answer to your first question. How Near Caching works is explained in the documentation.
    Events, which you ask about in your second question, are explained here: http://download.oracle.com/docs/cd/E14526_01/coh.350/e14510/delivereventsjava.htm#CBBIIEFA
    It might be that a ContinuousQueryCache is closer to what you want. This is explained here: http://download.oracle.com/docs/cd/E14526_01/coh.350/e14510/queryabledatafabric.htm#sthref38 A ContinuousQueryCache is like having a subset of the underlying cache on the local client, which you can then listen to, etc.
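    For illustration, a minimal sketch of the ContinuousQueryCache approach (the cache name "TestCache" and the AlwaysFilter are assumptions; Coherence 3.x API):
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.net.cache.ContinuousQueryCache;
    import com.tangosol.util.MapEvent;
    import com.tangosol.util.MultiplexingMapListener;
    import com.tangosol.util.filter.AlwaysFilter;
    public class CqcSketch {
        public static void main(String[] args) {
            NamedCache back = CacheFactory.getCache("TestCache"); // assumed cache name
            // The CQC materializes a local view of every entry matching the
            // filter and keeps it in sync, firing events as that view changes.
            ContinuousQueryCache cqc = new ContinuousQueryCache(
                back,
                new AlwaysFilter(),
                new MultiplexingMapListener() {
                    protected void onMapEvent(MapEvent evt) {
                        System.out.println("local view changed: " + evt);
                    }
                });
            cqc.get("key1"); // reads are now served from the local view
        }
    }
    With this in place the client receives create/update/delete events for everything in its local view, which is the behavior expected above from a plain near cache front map.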
    JK

  • Can you please help me understand how Firefox decides on the Expires date for a cached JavaScript file? (My server did not set any Expires header, but Firefox set one.)

    Can you please help me understand how Firefox decides on the Expires date for a cached JavaScript file? My server did not set any Expires header, but Firefox set one anyway. I tried to understand it, but different JavaScript files get different Expires values when they are cached. Please help me, as I have tried a lot and could not get a proper answer for this. Thanks in advance.

    Try posting at the Web Development / Standards Evangelism forum at MozillaZine; the helpers over there are more knowledgeable about web page development issues with Firefox.
    http://forums.mozillazine.org/viewforum.php?f=25
    You'll need to register and login to be able to post in that forum.
    For what it's worth, when a response carries no explicit Expires or Cache-Control header, browsers generally compute a heuristic freshness lifetime (commonly around 10% of the time elapsed since the Last-Modified date), which would explain why different JavaScript files end up with different computed Expires values.

  • How to access and display data stored in the cache for a SUP 2.0 workflow?

    Hi to all.
    I have an application with a menu item which obtains data through an online request; the result is shown in a listview.
    My problem is when my BlackBerry has no connection (offline scenario): when I select the menu item, I get an error.
    How can I access and display the data stored in the cache for my MBO? I have read that I can use getMessageValueCollection in Custom.js to access my data, but once I have the data, how can I associate it with a listview as in the online request? Do I have to develop my own screen in HTML, or what?
    Thanks.

    I'm not entirely clear on what you mean by "cache" in this context. I'm going to assume that what you are really referring to is the contents of the workflow message, so correct me if I'm wrong. There is, in later releases, the ability to set a device-side request cache time: if you issue an online request, the results are stored in an on-device cache, and if you subsequently reissue the same online request with the same parameter values within that timeout period, the data comes from the cache rather than from the server. But my gut instinct is that this is not what you are referring to.
    To access the data in the workflow message, you are correct: you would call getMessageValueCollection(). It will return an object hierarchy with objects defined in WorkflowMessage.js. Note that if your online request fails, the data won't magically appear in your workflow message.
    To use the data in the workflow message to update a listview, feel free to examine the code in the listview widgets and in API.js. You can also create a custom listview as follows:
    function customBeforeNavigateForward(screenKey, destScreenKey) {
         // In this example, we only want to replace the listview on the "My Approvals" screen
         if (destScreenKey == 'My_Approvals') {
              // First, we get the MessageValueCollection that we are currently operating on
              var message = getCurrentMessageValueCollection();
              // Next, we'll get the list MessageValue from that MessageValueCollection
              var itemList = message.getData("LeaveApprovalItem3");
              // Because it's a list, the Value of the MessageValue will be an array
              var items = itemList.getValue();
              // Figure out how many items are in the list
              var numOfItems = items.length;
              // Iterate through the results and build our list
              var i = 0;
              var htmlOutput = '<div><ul data-role="listview" data-theme="k" data-filter="true">';
              while (i < numOfItems) {
                   // Get the current item. This will be a MessageValueCollection.
                   var currItem = items[i];
                   // Get the properties of the current item.
                   var owner = currItem.getData("LeaveApprovalItem_owner_attribKey").getValue();
                   var type = currItem.getData("LeaveApprovalItem_itemType_attribKey").getValue();
                   var status = currItem.getData("LeaveApprovalItem_itemStatus_attribKey").getValue();
                   var startDate = currItem.getData("LeaveApprovalItem_startDate_attribKey").getValue();
                   var endDate = currItem.getData("LeaveApprovalItem_endDate_attribKey").getValue();
                   // Format the data in a specific presentation
                   var formatStartDate = Date.parse(startDate).toString('MMM/d/yyyy');
                   var formatEndDate = Date.parse(endDate).toString('MMM/d/yyyy');
                   // Decide which thumbnail image to use
                   var imageToUse = '';
                   if (status == 'Pending') {
                        imageToUse = 'pending.png';
                   } else if (status == 'Rejected') {
                        imageToUse = 'rejected.png';
                   } else {
                        imageToUse = 'approved.png';
                   }
                   // Add a new line to the listview for this item
                   htmlOutput += '<li><a id="' + currItem.getKey() + '" class="listClick">';
                   htmlOutput += '<img src="./images/' + imageToUse + '" class="ui-li-thumb">';
                   htmlOutput += '<h3 class="listTitle">' + type;
                   htmlOutput += ' ( ' + owner + ' ) ';
                   htmlOutput += '</h3>';
                   htmlOutput += '<p>' + formatStartDate + ' : ' + formatEndDate + '</p>';
                   htmlOutput += '</a></li>';
                   i++;
              }
              htmlOutput += '</ul></div>';
              // Remove the old listview and add in the new one.  Note: this is suboptimal and should be fixed if you want to use it in production.
              $('#My_ApprovalsForm').children().eq(2).hide();
              $('#My_ApprovalsForm').children().eq(1).after(htmlOutput);
              // Add in a handler so that when a line is clicked on, it'll go to the right details screen
              $(".listClick").click(function() {
                   currListDivID = $(this).parent().parent();
                   $(this).parent().parent().addClass("ui-btn-active");
                   navigateForward("Request_Details", this.id);
                   if (isBlackBerry()) {
                        return;
                   }
              });
         }
         // All done.
         return true;
    }

  • Error when publishing data to a passive cache

    Hi,
    I am using active-passive push replication. When I add data to the active cache, the data is added to the cache, but I get the error below while publishing the data to the passive cache.
    2011-03-31 22:38:06.014/291.473 Oracle Coherence GE 3.6.1.0 <Error> (thread=Proxy:ExtendTcpProxyService:TcpAcceptor, member=1): Failed to publish EntryOperation{siteName=site1, clusterName=Cluster1, cacheName=dist-contact-cache, operation=Insert, publishableEntry=PublishableEntry{key=Binary(length=12, value=0x154E094D65656E616B736869), value=Binary(length=88, value=0x12813A15A90F00004E094D65656E61
    6B736869014E07506C6F74203137024E074368656E6E6169034E0954616D696C4E616475044E06363030303432401A155B014E0524737263244E0E73697465312D436C757374657231), originalValue=Binary(length=0, value=0x)}} to Cache dist-contact-cache because of
    (Wrapped) java.io.StreamCorruptedException: invalid type: 78 Class:com.oracle.coherence.patterns.pushreplication.publishers.cache.AbstractCachePublisher
    2011-03-31 22:38:06.014/291.473 Oracle Coherence GE 3.6.1.0 <D5> (thread=Proxy:ExtendTcpProxyService:TcpAcceptor, member=1): An exception occurred while processing a InvocationRequest for Service=Proxy:ExtendTcpProxyService:TcpAcceptor: (Wrapped: Failed to publish a batch with the publisher [Active Publisher] on cache[dist-contact-cache]) java.lang.IllegalStateException: Attempted to publish to cache dist-contact-cache
    Here is my Coherence cache config XML file:
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config xmlns:sync="class:com.oracle.coherence.patterns.pushreplication.configuration.PushReplicationNamespaceContentHandler">
      <caching-schemes>
        <sync:provider pof-enabled="true">
          <sync:coherence-provider />
        </sync:provider>
        <caching-scheme-mapping>
          <cache-mapping>
            <cache-name>dist-contact-cache</cache-name>
            <scheme-name>distributed-scheme-with-publishing-cachestore</scheme-name>
            <sync:publisher>
              <sync:publisher-name>Active Publisher</sync:publisher-name>
              <sync:publisher-scheme>
                <sync:remote-cluster-publisher-scheme>
                  <sync:remote-invocation-service-name>remote-site2</sync:remote-invocation-service-name>
                  <sync:remote-publisher-scheme>
                    <sync:local-cache-publisher-scheme>
                      <sync:target-cache-name>dist-contact-cache</sync:target-cache-name>
                    </sync:local-cache-publisher-scheme>
                  </sync:remote-publisher-scheme>
                  <sync:autostart>true</sync:autostart>
                </sync:remote-cluster-publisher-scheme>
              </sync:publisher-scheme>
            </sync:publisher>
          </cache-mapping>
        </caching-scheme-mapping>
        <proxy-scheme>
          <service-name>ExtendTcpProxyService</service-name>
          <thread-count>5</thread-count>
          <acceptor-config>
            <tcp-acceptor>
              <local-address>
                <address>localhost</address>
                <port>9099</port>
              </local-address>
            </tcp-acceptor>
          </acceptor-config>
          <autostart>true</autostart>
        </proxy-scheme>
        <remote-invocation-scheme>
          <service-name>remote-site2</service-name>
          <initiator-config>
            <tcp-initiator>
              <remote-addresses>
                <socket-address>
                  <address>localhost</address>
                  <port>5000</port>
                </socket-address>
              </remote-addresses>
              <connect-timeout>2s</connect-timeout>
            </tcp-initiator>
            <outgoing-message-handler>
              <request-timeout>5s</request-timeout>
            </outgoing-message-handler>
          </initiator-config>
          <serializer>
            <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
          </serializer>
        </remote-invocation-scheme>
      </caching-schemes>
    </cache-config>

    Please find below the active and passive server start commands and the log files.
    Thanks.
    Active server start command
    java -server -showversion -Xms128m -Xmx128m -Dtangosol.coherence.ttl=0 -Dtangosol.coherence.cacheconfig=active-cache-config.xml
    -Dtangosol.coherence.cluster="Cluster1" -Dtangosol.coherence.site="site1" -Dtangosol.coherence.clusteraddress="224.3.6.2"
    -Dtangosol.coherence.clusterport="3001" -Dtangosol.pof.config=pof-config.xml -cp "config;lib\custom-types.jar;lib\coherence.jar;lib/coherence-common-1.7.3.20019.jar;lib/coherence-pushreplicationpattern-3.0.3.20019.jar;lib/coherence-messagingpattern-2.7.3.20019.jar;" com.tangosol.net.DefaultCacheServer
    Passive server start command
    java -server -showversion -Xms128m -Xmx128m -Dtangosol.coherence.ttl=0 -Dtangosol.coherence.cacheconfig=passive-cache-config.xml
    -Dtangosol.coherence.cluster="Cluster2" -Dtangosol.coherence.site="site2" -Dtangosol.coherence.clusteraddress="224.3.6.3"
    -Dtangosol.coherence.clusterport="3003" -Dtangosol.pof.config=pof-config.xml -cp "config;lib\custom-types.jar;
    lib\coherence.jar;lib/coherence-common-1.7.3.20019.jar;lib/coherence-pushreplicationpattern-3.0.3.20019.jar;
    lib/coherence-messagingpattern-2.7.3.20019.jar;" com.tangosol.net.DefaultCacheServer
    Active Server log
    <Error> (thread=PublishingService:Thread-3, member=1): Failed to publish the range MessageTracker{MsgId{40-1} } of messages for subscription SubscriptionIdentifier{destinationIdentifier=Identifier{dist-contact-cache}, subscriberIdentifier=Identifier{Active Publisher}} with publisher com.oracle.coherence.patterns.pushreplication.publishers.RemoteClusterPublisher@423d4f Class:com.oracle.coherence.patterns.pushreplication.providers.coherence.CoherencePublishingService
    2011-04-01 20:23:20.172/65.972 Oracle Coherence GE 3.6.1.0 <Info> (thread=PublishingService:Thread-3, member=1): Publisher Exception was as follows Class:com.oracle.coherence.patterns.pushreplication.providers.coherence.CoherencePublishingService
    (Wrapped: Failed to publish a batch with the publisher [Active Publisher] on cache [dist-contact-cache]) java.lang.IllegalStateException: Attempted to publish to cache dist-contact-cache
    at com.tangosol.util.Base.ensureRuntimeException(Base.java:293)
    at com.oracle.coherence.patterns.pushreplication.publishers.RemoteClusterPublisher$RemotePublishingAgent.run(RemoteClusterPublisher.java:348)
    at com.tangosol.coherence.component.net.extend.proxy.serviceProxy.InvocationServiceProxy.query(InvocationServiceProxy.CDB:6)
    at com.tangosol.coherence.component.net.extend.messageFactory.InvocationServiceFactory$InvocationRequest.onRun(InvocationServiceFactory.CDB:12)
    at com.tangosol.coherence.component.net.extend.message.Request.run(Request.CDB:4)
    at com.tangosol.coherence.component.net.extend.proxy.serviceProxy.InvocationServiceProxy.onMessage(InvocationServiceProxy.CDB:9)
    at com.tangosol.coherence.component.net.extend.Channel.execute(Channel.CDB:39)
    at com.tangosol.coherence.component.net.extend.Channel.receive(Channel.CDB:26)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Peer.onNotify(Peer.CDB:96)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
    at java.lang.Thread.run(Thread.java:619)
    Caused by: java.lang.IllegalStateException: Attempted to publish to cache dist-contact-cache
    at com.oracle.coherence.patterns.pushreplication.publishers.cache.AbstractCachePublisher.publishBatch(AbstractCachePublisher.java:163)
    at com.oracle.coherence.patterns.pushreplication.publishers.RemoteClusterPublisher$RemotePublishingAgent.run(RemoteClusterPublisher.java:343)
    ... 9 more
    Caused by: (Wrapped) java.io.StreamCorruptedException: invalid type: 78
    at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:266)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService$ConverterKeyToBinary.convert(PartitionedService.CDB:16)
    at com.tangosol.util.ConverterCollections$ConverterInvocableMap.invoke(ConverterCollections.java:2156)
    at com.tangosol.util.ConverterCollections$ConverterNamedCache.invoke(ConverterCollections.java:2622)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$ViewMap.invoke(PartitionedCache.CDB:11)
    at com.tangosol.coherence.component.util.SafeNamedCache.invoke(SafeNamedCache.CDB:1)
    at com.oracle.coherence.patterns.pushreplication.publishers.cache.AbstractCachePublisher.publishBatch(AbstractCachePublisher.java:142)
    ... 10 moreCaused by: java.io.StreamCorruptedException: invalid type: 78
    at com.tangosol.util.ExternalizableHelper.readObjectInternal(ExternalizableHelper.java:2266)
    at com.tangosol.util.ExternalizableHelper.readObject(ExternalizableHelper.java:2254)
    at com.tangosol.io.DefaultSerializer.deserialize(DefaultSerializer.java:74)
    at com.tangosol.util.ExternalizableHelper.deserializeInternal(ExternalizableHelper.java:2708)
    at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:262)
    ... 16 more
    Passive Server log
    2011-04-01 20:23:20.141/56.925 Oracle Coherence GE 3.6.1.0 <Error> (thread=Proxy:ExtendTcpProxyService:TcpAcceptor, member=1): Failed to publish EntryOperation{siteName=site1, clusterName=Cluster1, cacheName=dist-contact-cache, operation=Insert, publishableEntry=PublishableEntry{key=Binary(length=12, value=0x154E094D65656E616B736869), value=Binary(length=88, value=0x12813A15A90F00004E094D65656E616
    B736869014E07506C6F74203137024E074368656E6E6169034E0954616D696C4E616475044E06363030303432401A155B014E0524737263244E0E73697465312D436C757374657231), originalValue=Binary(length=0, value=0x)}} to Cache dist-contact-cache because of (Wrapped) java.io.StreamCorruptedException: invalid type: 78 Class:com.oracle.coherence.patterns.pushreplication.publishers.cache.AbstractCachePublisher2011-04-01 20:23:20.141/56.925 Oracle Coherence GE 3.6.1.0 <D5> (thread=Proxy:ExtendTcpProxyService:TcpAcceptor, member=1): An exception occurred while processing a InvocationRequest for Service=Proxy:ExtendTcpProxyService:TcpAcceptor: (Wrapped: Failed to publish a batch with the publisher [Active Publisher] on cache [
    dist-contact-cache]) java.lang.IllegalStateException: Attempted to publish to cache dist-contact-cache
    at com.tangosol.util.Base.ensureRuntimeException(Base.java:293)
    at com.oracle.coherence.patterns.pushreplication.publishers.RemoteClusterPublisher$RemotePublishingAgent.run(RemoteClusterPublisher.java:348)
    at com.tangosol.coherence.component.net.extend.proxy.serviceProxy.InvocationServiceProxy.query(InvocationServiceProxy.CDB:6)
    at com.tangosol.coherence.component.net.extend.messageFactory.InvocationServiceFactory$InvocationRequest.onRun(InvocationServiceFactory.CDB:12)
    at com.tangosol.coherence.component.net.extend.message.Request.run(Request.CDB:4)
    at com.tangosol.coherence.component.net.extend.proxy.serviceProxy.InvocationServiceProxy.onMessage(InvocationServiceProxy.CDB:9)
    at com.tangosol.coherence.component.net.extend.Channel.execute(Channel.CDB:39)
    at com.tangosol.coherence.component.net.extend.Channel.receive(Channel.CDB:26)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Peer.onNotify(Peer.CDB:96)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
    at java.lang.Thread.run(Thread.java:619)
    Caused by: java.lang.IllegalStateException: Attempted to publish to cache dist-contact-cache
    at com.oracle.coherence.patterns.pushreplication.publishers.cache.AbstractCachePublisher.publishBatch(AbstractCachePublisher.java:163)
    at com.oracle.coherence.patterns.pushreplication.publishers.RemoteClusterPublisher$RemotePublishingAgent.run(RemoteClusterPublisher.java:343)
    ... 9 more
    Caused by: (Wrapped) java.io.StreamCorruptedException: invalid type: 78
    at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:266)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService$ConverterKeyToBinary.convert(PartitionedService.CDB:16)
    at com.tangosol.util.ConverterCollections$ConverterInvocableMap.invoke(ConverterCollections.java:2156)
    at com.tangosol.util.ConverterCollections$ConverterNamedCache.invoke(ConverterCollections.java:2622)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$ViewMap.invoke(PartitionedCache.CDB:11)
    at com.tangosol.coherence.component.util.SafeNamedCache.invoke(SafeNamedCache.CDB:1)
    at com.oracle.coherence.patterns.pushreplication.publishers.cache.AbstractCachePublisher.publishBatch(AbstractCachePublisher.java:142)
    ... 10 more
    Caused by: java.io.StreamCorruptedException: invalid type: 78
    at com.tangosol.util.ExternalizableHelper.readObjectInternal(ExternalizableHelper.java:2266)
    at com.tangosol.util.ExternalizableHelper.readObject(ExternalizableHelper.java:2254)
    at com.tangosol.io.DefaultSerializer.deserialize(DefaultSerializer.java:74)
    at com.tangosol.util.ExternalizableHelper.deserializeInternal(ExternalizableHelper.java:2708)
    at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:262)
    ... 16 more

  • MRP Data from liveCache

    Hi
    Please let me know the FM/BAPI which will get me the MRP data from liveCache.
    My requirement is to get the date and quantity for each element; the MRP elements to be used are VC, VE and LF, based on plant and material.
    Thanks in advance
    Amit

    Hello Sebastian,
    FM '/SAPAPO/OM_PEG_CAT_GET_ORDERS' seems to cover my requirements, and I'm happy I found out about it here. Unfortunately I have some problems testing it with SE37:
    Specifying only the PEGID together with the needed categories was not successful. Searching for programs that call this FM, I found FM '/SAPAPO/ATP_DISP_LC_SINGLE', which additionally sets parameter IS_GEN_PARAMS-SIMVERSION to '000' and receives the correct values.
    Now I have the problem that, even when setting the same input values, I still receive the exception 'LC_APPL_ERROR' and don't know why. Since I'm new to APO it could be some basic setting; still, I'm wondering why this could be. Hopefully you can provide some helpful information, otherwise I think I'm lost.
    Thanks in advance,
    eddy

  • Keep Oracle Spatial data in Coherence Caches?

    Can I keep Oracle Spatial data in Coherence Caches?

    You can store Oracle Spatial data in Coherence caches. But creating spatial indexes in Coherence needs too much effort, I guess. How would you create the Spatial geocoding package, map symbols, map styling rules and Spatial network model?

  • Stale near cache data when all cache servers fail and are restarted

    Hi,
    We are currently using Coherence 3.6.1 and are seeing an issue. The scenario is the following:
    - The caching servers and proxy server are all restarted.
    - One of the caches, say "TestCache", is bulk-loaded with some data (e.g. key/values key1/value1, key2/value2, key3/value3) on the caching servers.
    - A near cache client connects to the server cluster via the Extend proxy server.
    - The near cache client is primed with all data from the server-side "TestCache", so it now has all key/values locally (i.e. key1/value1, key2/value2, key3/value3).
    - All caching servers in the cluster go down, but the Extend proxy server stays up.
    - All caching servers in the cluster come back up.
    - We reload all cache data into "TestCache" on the cache servers, but this time it only has key1/value1 and key2/value2.
    - So the caching servers' state for "TestCache" is that it should only have key1/value1 and key2/value2, but the near cache client still thinks it has key1/value1, key2/value2 and key3/value3. In effect, it still knows about key3/value3, which no longer exists.
    Is there any way for the near cache client to invalidate key3/value3 automatically? This scenario happens because the Extend proxy server is actually not down, but all caching servers are; the near cache client for some reason doesn't know about this and does not invalidate its local near cache data.
    Can anyone help?
    Thanks
    Regards
    Wilson.

    Hi,
    I do have the invalidation strategy set to "ALL". Remember, this cache client is connected via the Extend proxy server, whose connectivity is still OK; it's just the caching servers holding the storage data in the cluster that are all down.
    Please let me know what else we can try.
    Thanks
    Regards
    Wilson.
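    One pragmatic workaround is to clear the near cache's front map once the cache servers have been reloaded, so the next reads repopulate from the back tier. A minimal sketch, assuming the cache is a standard NearCache (the cache name is a placeholder):
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.net.cache.NearCache;
    public class FrontMapReset {
        // Drop only the locally cached entries; subsequent get() calls
        // will fetch current values from the restarted cache servers.
        public static void dropFrontMap(String cacheName) {
            NamedCache cache = CacheFactory.getCache(cacheName);
            if (cache instanceof NearCache) {
                ((NearCache) cache).getFrontMap().clear();
            }
        }
    }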

  • Dynamic Creation of Physical Data Server / Agent cache Refresh

    Scenario:
    I have a requirement to load data from an XML source into an Oracle DB. The XML source will change at run time, but the XSD of the XML remains the same (so I don't have to change the logical data server, models, mappings, interfaces and scenarios; only the physical data server changes at runtime). I have created all the ODI artifacts using ODI Studio in my work repository, and I am then using the ODI SDK to create the physical data server for the changed XML data source and invoking the agent programmatically.
    Problem:
    The data is loaded from the XML source into the Oracle DB fine the first time, but it does not work from the second time onwards. If I restart the agent, it works again for one more run. I think that on the first run the agent maintains some sort of cache of the physical data server details, so whenever I change the data server something goes wrong, leading to the exception below. So I want to know whether there is any mechanism to handle dynamic data servers, or any way of clearing the agent cache, if one exists.
    Caused By: org.apache.bsf.BSFException: exception from Jython:
    Traceback (most recent call last):
    File "<string>", line 41, in <module>
    AttributeError: 'NoneType' object has no attribute 'createStatement'
         at org.apache.bsf.engines.jython.JythonEngine.exec(JythonEngine.java:146)
         at com.sunopsis.dwg.codeinterpretor.SnpScriptingInterpretor.execInBSFEngine(SnpScriptingInterpretor.java:346)
         at com.sunopsis.dwg.codeinterpretor.SnpScriptingInterpretor.exec(SnpScriptingInterpretor.java:170)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.scripting(SnpSessTaskSql.java:2458)
         at oracle.odi.runtime.agent.execution.cmd.ScriptingExecutor.execute(ScriptingExecutor.java:48)
         at oracle.odi.runtime.agent.execution.cmd.ScriptingExecutor.execute(ScriptingExecutor.java:1)
         at oracle.odi.runtime.agent.execution.TaskExecutionHandler.handleTask(TaskExecutionHandler.java:50)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.processTask(SnpSessTaskSql.java:2906)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java:2609)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatAttachedTasks(SnpSessStep.java:540)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java:453)
         at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java:1740)
         at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java:1596)
         at oracle.odi.runtime.agent.processor.impl.StartScenRequestProcessor$2.doAction(StartScenRequestProcessor.java:582)
         at oracle.odi.core.persistence.dwgobject.DwgObjectTemplate.execute(DwgObjectTemplate.java:214)
         at oracle.odi.runtime.agent.processor.impl.StartScenRequestProcessor.doProcessStartScenTask(StartScenRequestProcessor.java:513)
         at oracle.odi.runtime.agent.processor.impl.StartScenRequestProcessor$StartScenTask.doExecute(StartScenRequestProcessor.java:1070)
         at oracle.odi.runtime.agent.processor.task.AgentTask.execute(AgentTask.java:123)
         at oracle.odi.runtime.agent.support.DefaultAgentTaskExecutor$1.run(DefaultAgentTaskExecutor.java:50)
         at org.springframework.core.task.SyncTaskExecutor.execute(SyncTaskExecutor.java:50)
         at oracle.odi.runtime.agent.support.DefaultAgentTaskExecutor.executeAgentTask(DefaultAgentTaskExecutor.java:41)
         at oracle.odi.runtime.agent.processor.TaskExecutorAgentRequestProcessor.doExecuteAgentTask(TaskExecutorAgentRequestProcessor.java:93)
         at oracle.odi.runtime.agent.processor.TaskExecutorAgentRequestProcessor.process(TaskExecutorAgentRequestProcessor.java:83)
         at oracle.odi.runtime.agent.support.DefaultRuntimeAgent.execute(DefaultRuntimeAgent.java:68)
         at oracle.odi.runtime.agent.servlet.AgentServlet.processRequest(AgentServlet.java:445)
         at oracle.odi.runtime.agent.servlet.AgentServlet.doPost(AgentServlet.java:394)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:821)
         at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:503)
         at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:389)
         at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
         at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
         at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:765)
         at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:417)
         at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
         at org.mortbay.jetty.Server.handle(Server.java:326)
         at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:534)
         at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:879)
         at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:747)
         at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:218)
         at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
         at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228)
         at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:520)

    Hi,
    If you want to load multiple files (of the same structure) through one connection, then in Topology create M.XSD for the M.XML file.
    Create three directories:
    RAW -- contains each file under its original name
    PRO -- processing area where files are moved one by one and renamed to M.XML
    OUT -- once a file's data has been loaded into the tables, the file M.XML is moved from PRO to OUT
    Go to odiexperts.com to see how to create the loop.
    Use OdiFileMove (to move and rename/mask) to move A.XML from RAW to PRO and rename it to M.XML.
    Use OdiFileMove again to move M.XML to the OUT folder and rename it back to A.XML.
    Use variables to store the file names, and refresh them.
    On 'NoneType' object has no attribute 'createStatement': it seems that the structure of your file is different and you are trying to load different files into the same schema. If the structure is the same, then use the procedure "SYNCHRONIZE ALL" after every load...

  • Unable to refresh the data on a cached report.

    Hi,
    I am using the CR4E component on a Tomcat server.
    I am having real difficulty getting a cached report to refresh. Basically, when the report is first displayed it is okay; then some further data is added to the database, and when the report is redisplayed (with the same parameters) it is still displaying the cached information. This is specific to the subreports. I basically always want my reports to refresh when they are displayed.
    I know this must be simple, but I just can't get it to work or find any example code.
    I have tried a couple of options:
    Option 1:
    clientDoc.open(rptName, OpenReportOptions._openAsReadOnly+OpenReportOptions._refreshRepositoryObjects);
    This said it could not be implemented before logging in... and the documentation says this only works for RAS.
    Option 2:
    CrystalReportViewer reportViewer = null;
    try {
         HttpSession session = request.getSession();
         reportViewer = PrepareSignetManifestReport(aryCrates, dateFrom, dateTo, dateSent, session, serContext, false);
         //if (session.getAttribute("refreshed") == null) {
         reportViewer.refresh();
         //session.setAttribute("refreshed", "true");
         //}
         reportViewer.processHttpRequest(request, response, serContext, out);
    } finally {
         if (reportViewer != null) {
              reportViewer.dispose();
         }
    }
    When I add the 'reportViewer.refresh()' I then get the following error:
    "Some parameters are missing values "
    Any pointer in the right direction would be gratefully received.
    Thanks
    Matt.

    Hi,
    On looking into this issue, it is just the subreport that does not refresh when loaded from the session; the main report is okay. We are using JDBC with Oracle 10g.
    Even if I change the parameters, the main report refreshes okay, but the subreport doesn't.
    Is there a workaround for this problem?
    My code is pretty much based on the standard code generated by the system in the CRJavaHelper class, so I am confident that my code is okay.
    I would really appreciate some help here, as I have a client losing trust in the report because the new data doesn't display in it, and I don't want to have to close the report each time because of the performance cost. I have also tried resetting the parameter fields.
    Thanks
    Matt.

  • Best way for Java/C++/C# to share data in a cache?

    I have an order processing application which listens for Order objects to be inserted in a cache. If I want Java, C# and C++ apps to be able to submit orders to that cache, what's the best way to set up that Order object? Orders currently have many enum member variables. What happens when a C# or C++ app needs to put an Order object in the cache; how would it set those Java enums? Also, the Java enum classes have a lot of Java-specific code in them for convenience. I imagine for cross-platform simplicity it might have been best if the Order object were just an array of Strings or a Map of Strings to values, but I have too much code depending on the Order object being how it is currently. Should I extract an Order interface? What about the enums? My Java enums aren't simple {RED, GREEN, BLUE} type enums; they contain references to other enums, etc. - see below...
    A portion of my Order class looks like:
    public class Order implements Cloneable, Comparable, java.io.Serializable {
      private static Logger logger = Logger.getLogger(Order.class);
      private boolean clearedFromOpenOrdersTable = false;
      private boolean trading_opened = false;
      private static Random generator = new Random();
      private static int nextID = generator.nextInt(1000000); //just for testing
      private int quantity = 0;
      private int open = 0;
      private int executed = 0;
      private int last = 0;
      private int cancelPriority = 0;
      private Integer sendPriority = 0;
      //enums
      private OrderSide side = OrderSide.BUY;
      private OrderType orderType = OrderType.MARKET;
      private OrderTIF tif = OrderTIF.DAY;
      private OrderStatus orderStatus = OrderStatus.PENDING_NEW;
      private OrderExchange orderExchange = null;
      private OOType ooType = OOType.NOTOO;
      private OOLevel ooLevel = OOLevel.NONE;
      private Float limit = new Float(0);
      private Float stop = null;
      private float avgPx = 0.0f;
      private float lastPx = 0.0f;
      private String account = null;
      private String symbol = null;
      private long submitTimestamp;
      private long fillTime = 0;
      private long ackTime = 0;
      private Timestamp submitSqlTimestamp;
      public /*final*/ static NamedCache cache;
      private ArrayList<OrderStatusChangeListener> statusChangeListeners =
        new ArrayList<OrderStatusChangeListener>();
      transient private Format formatter = new SimpleDateFormat("hh:mm:ss a");
      // (Members referenced below but elided from this excerpt: ID, originalID,
      // canceled, cancelCount, and helpers such as generateID()/setID()/
      // setOriginalID()/isCanceled().)

      public static void connectToCache() {
        cache = CacheFactory.getCache("orders");
      }

      public void send() {
        this.submitTimestamp = System.currentTimeMillis();
        this.submitSqlTimestamp = new Timestamp(submitTimestamp);
        cache.put(this.ID, this);
      }

      public void setCancelCount(int i) {
        cancelCount = i;
      }

      public int getCancelCount() {
        return cancelCount;
      }

      public static class CancelProcessor extends AbstractProcessor {
        public Object process(InvocableMap.Entry entry) {
          Order o = (Order) entry.getValue();
          if (o.cancelCount == 0) {
            o.cancelCount = 1;
          } else {
            logger.info("ignoring dup. cancel req on " + o.symbol + " id=" + o.ID);
          }
          return o.cancelCount;
        }
      }

      public void cancel() {
        // cache.invoke(this.ID, new CancelProcessor()); // must this be a 'new' one each time?
        Filter f = new EqualsFilter("getCancelCount", 0);
        UpdaterProcessor up = new UpdaterProcessor("setCancelCount", new Integer(1));
        ConditionalProcessor cp = new ConditionalProcessor(f, up);
        cache.invoke(ID, cp);
      }

      NumberIncrementor ni1 = new NumberIncrementor("CancelCount", 1, false);

      public void cancelAllowingMultipleCancelsOfThisOrder() {
        System.out.println("cancelAllowingMultipleCancelsOfThisOrder symbol=" + symbol + " id=" + ID);
        cache.invoke(this.getID(), ni1);
      }

      public Timestamp getSubmitSqlTimestamp() {
        return submitSqlTimestamp;
      }

      boolean isWorking() {
        // might need to write an extractor to get this from the cache atomically
        if (orderStatus != null &&
            (orderStatus == OrderStatus.NEW || orderStatus == OrderStatus.PARTIALLY_FILLED ||
                // including PENDING_CANCEL breaks order totals from arcadirect
                // because they send a pending cancel unlike foc
                // os.getValue() == OrdStatus.PENDING_CANCEL ||
                orderStatus == OrderStatus.PENDING_NEW ||
                orderStatus == OrderStatus.PENDING_REPLACE)) {
          return true;
        } else {
          return false;
        }
      }

      public long getSubmitTimestamp() {
        return submitTimestamp;
      }

      private void fireStatusChange() {
        for (OrderStatusChangeListener x : statusChangeListeners) {
          try {
            x.dispatchOrderStatusChange(this);
          } catch (java.util.ConcurrentModificationException e) {
            logger.error("** fireStatusChange: ConcurrentModificationException " + e.getMessage());
            logger.error(e.getStackTrace());
            e.printStackTrace();
          }
        }
      }

      public Order() {
        ID = generateID();
        originalID = ID;
      }

      public Object clone() {
        try {
          Order order = (Order) super.clone();
          order.setOriginalID(getID());
          order.setID(order.generateID());
          return order;
        } catch (CloneNotSupportedException e) {
        }
        return null;
      }

      class ReplaceProcessor extends AbstractProcessor {
        // this is executed on the node that owns the data,
        // no network access required
        public Object process(InvocableMap.Entry entry) {
          Order o = (Order) entry.getValue();
          int counter = 0;
          float limit = 0f;
          float stop = 0f;
          int qty = o.quantity;
          boolean limitChanged = false, qtyChanged = false, stopChanged = false;
          for (Replace r : o.replaceList) {
            if (r.pending) {
              counter++;
              if (r.limit != null) { limit = r.limit; limitChanged = true; }
              if (r.qty != null) { qty = r.qty; qtyChanged = true; }
              if (r.stop != null) { stop = r.stop; stopChanged = true; }
            }
          }
          if (limitChanged) o.limit = limit;
          if (qtyChanged) o.quantity = qty;
          if (stopChanged) o.stop = stop;
          if (limitChanged || qtyChanged || stopChanged)
            entry.setValue(o);
          return counter;
        }
      }

      public void applyPendingReplaces() {
        cache.invoke(this.ID, new ReplaceProcessor());
      }

      private List<Replace> replaceList;

      public boolean isReplacePending() {
        if (replaceList == null) return false;
        for (Replace r : replaceList) {
          if (r.pending == true) return true;
        }
        return false;
      }

      class ReplaceAddProcessor extends AbstractProcessor {
        // this is executed on the node that owns the data,
        // no network access required
        Replace r;
        public ReplaceAddProcessor(Replace r) {
          this.r = r;
        }
        public Object process(InvocableMap.Entry entry) {
          Order o = (Order) entry.getValue();
          if (o.replaceList == null) o.replaceList = new LinkedList();
          o.replaceList.add(r);
          return o.replaceList.size();
        }
      }

      public void replaceOrder(Replace r) {
        // Order.cache.invoke(this.ID, new ReplaceAddProcessor(r));
        Order o = (Order) Order.cache.get(this.ID);
        if (o.replaceList == null) o.replaceList = new LinkedList();
        o.replaceList.add(r);
        Order.cache.put(this.ID, o);
      }

      public boolean isCancelRequestedOrIsCanceled() {
        // change to cancelrequested lock, not ack lock
        if (canceled) return true;
        // ValueExtractor extractor = new ReflectionExtractor("getCancelCount");
        // int cc = (Integer) cache.invoke(this.ID, extractor);
        Order o = (Order) cache.get(this.ID);
        int cc = o.getCancelCount();
        return cc > 0 || o.isCanceled();
      }

      class Replace implements Serializable {
        boolean pending = true;
        public Float limit = null;
        public Float stop = null;
        public Integer qty = null;
        public Replace() {
        }
      }
    }
    And then a portion of my OrderExchange.java enum looks like this:
    class SymbolPair implements java.io.Serializable {
      String symbol;
      String suffix;

      SymbolPair(String symbol, String suffix) {
        this.symbol = symbol;
        this.suffix = suffix;
      }

      public boolean equals(Object o) {
        SymbolPair x = (SymbolPair) o;
        return (this.symbol == x.symbol && this.suffix == x.suffix);
      }

      public int hashCode() {
        return (symbol + "." + suffix).hashCode();
      }

      public String toString() {
        if (suffix == null)
          return symbol;
        return symbol + "." + suffix;
      }
    }

    public enum OrderExchange implements java.io.Serializable {
      SIM("S", false, '.', OrderTIF.DAY) {
        public String getStandardizedStockSymbol(String symbol, String suffix) {
          return symbol + "." + suffix;
        }
        public SymbolPair getExchangeSpecificStockSymbol(String symbol) {
          return new SymbolPair(symbol, null);
        }
      },
      TSX("c", false, '.', OrderTIF.DAY) {
        public String getStandardizedStockSymbol(String symbol, String suffix) {
          String x = externalSymbolPairToInternalSymbolMap_GS.get(new SymbolPair(symbol, suffix));
          return x == null ? symbol : x;
        }
        public SymbolPair getExchangeSpecificStockSymbol(String symbol) {
          SymbolPair sa = internalSymbolToExternalSymbolPairMap_GS.get(symbol);
          return sa == null ? new SymbolPair(symbol, null) : sa;
        }
      },
      NYSE("N", false, '.', OrderTIF.DAY) {
        public String getStandardizedStockSymbol(String symbol, String suffix) {
          String x = externalSymbolPairToInternalSymbolMap_GS.get(new SymbolPair(symbol, suffix));
          return x == null ? symbol : x;
        }
        public SymbolPair getExchangeSpecificStockSymbol(String symbol) {
          SymbolPair sa = internalSymbolToExternalSymbolPairMap_GS.get(symbol);
          return sa == null ? new SymbolPair(symbol, null) : sa;
        }
      },
      ARCA("C", false, '.', OrderTIF.GTD) {
        public String getStandardizedStockSymbol(String symbol, String suffix) {
          String x = externalSymbolPairToInternalSymbolMap_GS.get(new SymbolPair(symbol, suffix));
          return x == null ? symbol : x;
        }
        public SymbolPair getExchangeSpecificStockSymbol(String symbol) {
          SymbolPair sa = internalSymbolToExternalSymbolPairMap_GS.get(symbol);
          return sa == null ? new SymbolPair(symbol, null) : sa;
        }
      };

      public abstract String getStandardizedStockSymbol(String symbol, String suffix);
      public abstract SymbolPair getExchangeSpecificStockSymbol(String symbol);

      private static Map<SymbolPair, String> externalSymbolPairToInternalSymbolMap_GS = new HashMap<SymbolPair, String>();
      private static Map<SymbolPair, String> externalSymbolPairToInternalSymbolMap_ARCA = new HashMap<SymbolPair, String>();
      private static Map<String, SymbolPair> internalSymbolToExternalSymbolPairMap_GS = new HashMap<String, SymbolPair>();
      private static Map<String, SymbolPair> internalSymbolToExternalSymbolPairMap_ARCA = new HashMap<String, SymbolPair>();
      private static Object[] toArrayOutputArray = null;

      static {
        String SQL = null;
        try {
          Connection c = MySQL.connectToMySQL("xxx", "xxx", "xxx", "xxx");
          SQL = "SELECT symbol, ARCASYMBOL, INETSYMBOL, ARCASYMBOLSUFFIX, INETSYMBOLSUFFIX from oms.tblsymbolnew";
          Statement stmt = c.createStatement();
          ResultSet rs = stmt.executeQuery(SQL);
          while (rs.next()) {
            String symbol = rs.getString("symbol");
            if (rs.getString("ARCASYMBOL") != null) {
              if (!symbol.equals(rs.getString("ARCASYMBOL")) || rs.getString("ARCASYMBOLSUFFIX") != null) {
                String suffix = rs.getString("ARCASYMBOLSUFFIX");
                SymbolPair sp = new SymbolPair(rs.getString("ARCASYMBOL"), suffix);
                internalSymbolToExternalSymbolPairMap_ARCA.put(symbol, sp);
                externalSymbolPairToInternalSymbolMap_ARCA.put(sp, symbol);
              }
            }
          }
        } catch (Exception e) {
          System.out.println(SQL);
          e.printStackTrace();
          System.exit(0);
        }
      }

      static {
        populateSymbolToDestination();
      }

      static Logger logger = Logger.getLogger(OrderExchange.class);
      private static HashMap<String, OrderExchange> symbolToDestination = new HashMap<String, OrderExchange>();
      private final String tag100;
      private final boolean usesSymbolSuffixTag;
      private final char symbolSuffixSeparator;
      private final OrderTIF defaultTif;
      private static final String soh = new String(new char[] { '\u0001' });

      private OrderExchange(String tag100, boolean usesSymbolSuffixTag, char symbolSuffixSeparator, OrderTIF defaultTif) {
        this.tag100 = tag100;
        this.defaultTif = defaultTif;
        this.usesSymbolSuffixTag = usesSymbolSuffixTag;
        this.symbolSuffixSeparator = symbolSuffixSeparator;
      }

      public OrderTIF getDefaultTif() {
        return defaultTif;
      }

      public String getTag100() {
        return tag100;
      }

      public char getSymbolSuffixSeparator() {
        return symbolSuffixSeparator;
      }

      public static OrderExchange getOrderExchangeByExchangeName(String name) {
        for (OrderExchange d : OrderExchange.values()) {
          if (d.toString().equalsIgnoreCase(name.trim())) {
            return d;
          }
        }
        return null;
      }
    }
      Thanks,
    Andrew

    Hi Andrew
    The only way to serialize objects so that they can be used by languages other than Java is to use the Portable Object Format (POF).
    Implementing this requires you to implement the PortableObject interface in Java. PortableObject defines two methods:
    public void readExternal(PofReader reader);
    public void writeExternal(PofWriter writer);
    You also need to add a POF config file that ties the type to a type id.
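    As an illustration, here is a stripped-down sketch of the Java side for a much-simplified Order (the three fields and the POF indexes are assumptions; enums travel most easily across languages as their name or ordinal):
    import com.tangosol.io.pof.PofReader;
    import com.tangosol.io.pof.PofWriter;
    import com.tangosol.io.pof.PortableObject;
    import java.io.IOException;
    // Hypothetical, trimmed-down Order: only three of the real fields are shown.
    public class Order implements PortableObject {
        private String symbol;
        private int quantity;
        private String side; // e.g. OrderSide.BUY carried as its name()
        public Order() { } // POF deserialization needs a public no-arg constructor
        public void readExternal(PofReader reader) throws IOException {
            symbol = reader.readString(0);
            quantity = reader.readInt(1);
            side = reader.readString(2);
        }
        public void writeExternal(PofWriter writer) throws IOException {
            writer.writeString(0, symbol);
            writer.writeInt(1, quantity);
            writer.writeString(2, side);
        }
    }
    The C++ and C# serializers must then read and write the same indexes (0, 1, 2) in the same order, so that all three languages produce an identical binary form.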
    In C++, each type needs two template methods implemented to serialize and deserialize it. But first it needs to register the data type, with the same type id as the Java type, using the COH_REGISTER_MANAGED_CLASS macro.
    Secondly, analogous to Java, implement the serializer stubs:
    template<> void serialize<Type>(PofWriter::Handle hOut, const Type& type);
    template<> Type deserialize<Type>(PofReader::Handle hIn);
    For C#, your serializable types need to implement IPortableObject, with its two methods:
    void IPortableObject.ReadExternal(IPofReader reader);
    void IPortableObject.WriteExternal(IPofWriter writer);
    Similar to Java, C# uses a POF configuration file; the same type id should bind to the corresponding C# type.
    For more information see
    POF Configuration: http://coherence.oracle.com/display/COH34UG/POF+User+Type+Configuration+Elements
    C++: http://coherence.oracle.com/display/COH34UG/Integrating+user+data+types
    C#: http://wiki.tangosol.com/display/COHNET33/Configuration+and+Usage#ConfigurationandUsage-PofContext
    Hope that helps!
    /Charlie

  • Querying data across different cache-clusters

    Hi,
    Say there are multiple clusters, each containing DIFFERENT sets of cached data. I would like to know whether an application within one cluster can read/query the data from the DIFFERENT clusters "DIRECTLY". Would we need to code against the Coherence Extend protocol API (if there is such a thing) in this case? Has anyone done or seen this?

    Hi,
    Yes, you would need to use Extend to be able to see one cluster from another. It is pretty straightforward: you just configure the cache mappings in the same way you would for any remote caches, as in the sketch below.
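    For illustration, a minimal client-side mapping of the kind described above, assuming the other cluster exposes an Extend proxy on other-cluster-host:9099 (all names are placeholders):
    <cache-config>
      <caching-scheme-mapping>
        <cache-mapping>
          <cache-name>remote-*</cache-name>
          <scheme-name>extend-remote</scheme-name>
        </cache-mapping>
      </caching-scheme-mapping>
      <caching-schemes>
        <remote-cache-scheme>
          <scheme-name>extend-remote</scheme-name>
          <service-name>ExtendTcpCacheService</service-name>
          <initiator-config>
            <tcp-initiator>
              <remote-addresses>
                <socket-address>
                  <address>other-cluster-host</address>
                  <port>9099</port>
                </socket-address>
              </remote-addresses>
            </tcp-initiator>
          </initiator-config>
        </remote-cache-scheme>
      </caching-schemes>
    </cache-config>
    Any cache whose name matches remote-* is then fetched transparently from the other cluster via its proxy.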
    JK

  • Database Buffer Cache Full

    Hi,
    What happens if an update statement fills the existing DB buffer cache with updated information (in this case an uncommitted transaction)? In that case, if another user runs an update statement, where is that update information stored before being written to the datafile?

    No, this is not correct. A block/buffer, when it undergoes any kind of DML, is marked as dirty. Once marked dirty, there are various events which push it to the dirty list, from where it has to be flushed out. So a buffer that is marked dirty will be written to the datafile for sure, without waiting for its transaction to be committed. That's why, because of this very behavior, we need to roll back changes with the roll-backward process, to make sure that only committed data stays in the datafile, not the uncommitted data.
    The only buffers which are not flushed to the datafile are the buffers Oracle uses for its read-consistency mechanism, the snapshot buffers. The snapshot buffers are created in the same hash bucket as the dirty buffers, but they are not flushed to the datafile; they are simply thrown away.
    OK... so the dirty buffers are written to datafiles irrespective of the state of the transaction (committed or uncommitted)...
    If this is true, then when you roll back the transaction, where does Oracle get the previous data (the previous data must be stored somewhere in order to restore it)? In this case the previous data is stored in the UNDO tablespace, which also provides read consistency. If this is not true, how is read consistency provided by snapshot buffers?

  • Store XML data in a Java cache (HashMap as key-value pairs)

    Hi,
    I have to store an XML file in a Java cache so that I can reuse it. The flow is like this:
    the DAO layer reads the database, creates an XML document and sends it to IBM MQ --> our Java code should read this XML file from MQ and store it in a cache (preferably a HashMap). The file contains a unique id for every customer.
    How can we achieve this? One way is to store the XML as a string, with the "id" as key and the whole XML as value. Is this a good way, or is another way available? Please suggest.
    Sample XML:
    <Client>
      <ClientId>1234</ClientId>
      <ClientName>STechnology</ClientName>
      <ID>10</ID>
      <ClientStatus>ACTIVE</ClientStatus>
      <LEAccount>
        <ClientLE>678989</ClientLE>
        <LEId>56743</LEId>
        <Account>
          <AccountNumber>9876543678</AccountNumber>
        </Account>
      </LEAccount>
      <Service>
        <Cindicator>Y2Y</Cindicator>
        <PrefCode>980</PrefCode>
        <BSCode>876</BSCode>
        <MandatoryContent>MSP</MandatoryContent>
      </Service>
    </Client>
    Thanks
    Sumit

    A HashMap can work, but store the customer-related data in a bean (which may have child objects as well, if for example the Service subelement can repeat). You then get a HashMap of Client objects, with the clientId as the key into the map. A minimal sketch follows.
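    For illustration, a sketch of that approach (the Client bean and its fields are assumptions based on the sample XML; uses JAXB from the JDK, which by default ignores the elements not mapped here):
    import javax.xml.bind.JAXBContext;
    import javax.xml.bind.annotation.XmlAccessType;
    import javax.xml.bind.annotation.XmlAccessorType;
    import javax.xml.bind.annotation.XmlElement;
    import javax.xml.bind.annotation.XmlRootElement;
    import java.io.StringReader;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    // Hypothetical bean covering part of the sample XML.
    @XmlRootElement(name = "Client")
    @XmlAccessorType(XmlAccessType.FIELD)
    class Client {
        @XmlElement(name = "ClientId") String clientId;
        @XmlElement(name = "ClientName") String clientName;
        @XmlElement(name = "ClientStatus") String clientStatus;
    }
    public class ClientCache {
        // Concurrent map, since MQ messages may arrive while readers use the cache.
        private static final Map<String, Client> CACHE = new ConcurrentHashMap<String, Client>();
        // Unmarshal the XML received over MQ and index the bean by its client id.
        public static void put(String xml) throws Exception {
            Client c = (Client) JAXBContext.newInstance(Client.class)
                    .createUnmarshaller().unmarshal(new StringReader(xml));
            CACHE.put(c.clientId, c);
        }
        public static Client get(String clientId) {
            return CACHE.get(clientId);
        }
    }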
