Backing Map Insert Error

Hi.
I'm trying to make an insert into the backing map of another cache inside the EntryProcessor, but get the following error: An entry was inserted into the backing map for the partitioned cache "another-cache" that is not owned by this member; the entry will be removed.
So the EntryProcessor is invoked against one cache, but the insert inside it goes to a different cache.
Here is the code; please let me know what can fix the error above:
     public Object process(InvocableMap.Entry entry) {
         BackingMapManagerContext ctx = ((BinaryEntry) entry).getContext();
         BackingMapManagerContext ctx2 = ctx.getBackingMapContext("another-cache").getManagerContext();
         Map childCache = ctx.getBackingMapContext("another-cache").getBackingMap();
         // KeyObject and ValueObject are placeholders for the child cache key/value
         childCache.put(ctx2.getKeyToInternalConverter().convert(KeyObject),
                        ctx2.getValueToInternalConverter().convert(ValueObject));
         return null;
     }

The reason for the error is that the owner of the key "KeyObject" is not the same node as the owner of the entry the EntryProcessor was invoked on. If you want to implement this, you need to define data affinity between the objects of both your caches so that they are collocated on the same node.
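In Coherence, data affinity is usually declared with key association. Below is a minimal sketch, assuming a hypothetical ChildKey class used as the key of "another-cache"; the names and fields are illustrative, not taken from your code:

     import com.tangosol.net.cache.KeyAssociation;
     import java.io.Serializable;

     // Hypothetical key class for "another-cache". The partitioned service
     // assigns the partition based on getAssociatedKey(), so a child entry
     // always lands on the member that owns its parent entry.
     public class ChildKey implements KeyAssociation, Serializable {
         private final String childId;
         private final String parentKey; // key of the entry the EntryProcessor runs against

         public ChildKey(String childId, String parentKey) {
             this.childId = childId;
             this.parentKey = parentKey;
         }

         public Object getAssociatedKey() {
             return parentKey; // collocate with the parent cache entry
         }

         public boolean equals(Object o) {
             if (!(o instanceof ChildKey)) {
                 return false;
             }
             ChildKey that = (ChildKey) o;
             return childId.equals(that.childId) && parentKey.equals(that.parentKey);
         }

         public int hashCode() {
             return 31 * childId.hashCode() + parentKey.hashCode();
         }
     }

With such a key class (or an equivalent KeyAssociator configured on the service), the put into the backing map of "another-cache" targets a key the local member owns, and the "not owned by this member" warning goes away.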
HTH
Cheers,
_NJ

Similar Messages

  • Value Mapping Insert Error

    Hi
    I am sending an item value mapping from R3 to XI and I get this error:
    <ns:ValueMappingReplicationFault xmlns:ns="http://sap.com/xi/XI/System">
      <standard>
        <faultText>A value is missing for content of Identifier in item 0.</faultText>
        <faultDetail>
          <text/>
          <id>7</id>
        </faultDetail>
      </standard>
      <addition>
        <ItemNr>0</ItemNr>
        <Item>
          <Operation>Insert</Operation>
          <GroupID>037dd9b01a3111da8abef522ac127968</GroupID>
          <Context>http://sap.com/xi/XI</Context>
          <Identifier scheme="BS_PEPE" agency="Company"/>
        </Item>
      </addition>
    </ns:ValueMappingReplicationFault>
    The code in R3 is this one
    REPORT  Z_XI_REPLICATE_VALUE_MAPPING.
    data: My_ZCO_MI_CONSULTA_SERIAL type ref to
                    CO_SVMR_VALUE_MAPPING_REP_SYNC,
          in type SVMR_VALUE_MAPPING_REP_RESP,
          out type SVMR_VALUE_MAPPING_REPLICATION,
          vIdentifier type SVMR_VALUE_MAPPING_IDENTIFIER,
          vItem type SVMR_VALUE_MAPPING_REP_ITEM,
          vItemLst type SVMR_VALUE_MAPPING_REP_TAB,
          vList type SVMR_VALUE_MAPPING_REP_LIST,
          lo_sys_exception   TYPE REF TO cx_ai_system_fault.
    try.
        create object My_ZCO_MI_CONSULTA_SERIAL.
        vIdentifier-scheme = 'BS_PEPE'.
        vIdentifier-agency = 'Company'.
        vIdentifier-value = '122'.
        vItem-Group_id = '037dd9b01a3111da8abef522ac127968'.
        vItem-operation = 'Insert'.
        vItem-context = 'http://sap.com/xi/XI'.
        vItem-Identifier = vIdentifier.
        append vItem to vItemLst.
        vList-item = vItemLst.
        out-VALUE_MAPPING_REPLICATION = vList.
        call method My_ZCO_MI_CONSULTA_SERIAL->EXECUTE_SYNCHRONOUS
          EXPORTING
            output = out
          IMPORTING
            input  = in.
      CATCH cx_ai_system_fault INTO lo_sys_exception.
        write: /'Error Text   --> ',lo_sys_exception->errortext.
        write: /'Error Code   --> ',lo_sys_exception->code.
        exit.
    ENDTRY.
    write: /'ESN_STATUS --> ',in-STATUS.
    The message sent to XI, which I see in SXMB_MONI in R3, is this one:
    <?xml version="1.0" encoding="utf-8"?>
    <nr1:ValueMappingReplication xmlns:nr1="http://sap.com/xi/XI/System">
      <Item>
        <Operation>Insert</Operation>
        <GroupID>037dd9b01a3111da8abef522ac127968</GroupID>
        <Context>http://sap.com/xi/XI</Context>
        <Identifier scheme="BS_PEPE" agency="Company">122</Identifier>
      </Item>
    </nr1:ValueMappingReplication>
    When I try to delete a Group it works but not when I want to insert an individual item.
    What am I doing wrong?
    Thanks
    Regards

    Hi
    There is a field called CONTROLLER with this structure:
    FIELD     FIELDNAME     CHAR     30     0     Field Name
    VALUE     PRX_CONTR     CHAR     1      0     Field Control in XML Data Stream (=> Type Group SAI)
    But the SAP documentation does not make any reference to this field or how to fill it.
    I am absolutely sure I filled all the mandatory fields.
    Thanks
    Regards

  • Off-heap backing maps seem to generate lots of garbage at insert..!?

    I have been doing a lot of benchmarks of distributed caches with different backing maps. The results were partly positive (I had hoped that a partitioned (splitting) off-heap backing map would be almost as fast as a non-splitting on-heap backing map). For reads and various types of queries this turned out to be mostly true (some queries were slightly slower, probably because they were performed per partition).
    For inserts it does however sadly seem to be another story: already with a non-splitting NIO backing map, inserts seemed to generate a lot of garbage, slowing the benchmark down significantly, and when switching to a splitting NIO backing map this effect became so extreme that full GCs occurred more or less constantly on the cache nodes, slowing execution down to almost a standstill :-(
    Has anybody else tried this and seen the same results, or do any of the Coherence developers have a theory?
    To me it would seem like network I/O to off-heap storage (using storage buffers allocated with NIO, just like the communication buffers!) should be at least as easy to perform without generating excessive garbage as I/O to heap objects, but since I don't know the internals of Coherence I can't say for sure whether something breaks this theory.
    For me the main expected advantage of using off-heap rather than on-heap would have been REDUCED GC activity and shorter pauses, but instead the result seems to be the opposite, at least when doing inserts...
    My example does not use (or need!) any secondary indexes (it only performs get/put/lock/unlock), but each entry is locked before it is inserted and unlocked after (this is needed for the algorithm I am using as a benchmark). As I have pointed out in another thread, it is a pity that no lockAll/unlockAll method calls exist (my benchmark suffers a lot from all the lock/unlock remote calls), but that overhead is nothing compared to the performance hit from all the GC...
    I have tried to tune the GC in several ways, but this has only to a very limited extent reduced the GC pause length or the frequency of full GCs; it just seems like a LOT of garbage is generated for some reason...
    The settings that have so far resulted in the least GC overhead (still awfully bad though!) are -XX:+UseParallelGC -XX:+UseAdaptiveSizePolicy. I am using Coherence 3.5 GE and Sun JRE 1.6.0_14.
    /Magnus
    Edited by: MagnusE on Aug 10, 2009 3:01 PM

    Thanks for the info - I was indeed using different initial and max sizes in this experiment, and setting them the same eased the problem (now I mostly get incremental rather than full GC messages). Inserts do however still generate more GC activity than reads (which seem to be more or less totally free from Java heap allocation/deallocation, which is VERY good since reads are so common!). Perhaps there is some more tweaking of the heap allocation/deallocation that can be done at the same time as you work on the bug you mentioned - it would really be nice to have a NIO backing map with close to zero Java heap usage for all primitive operations (read, insert, delete)!
    /Magnus
    Edited by: MagnusE on Aug 11, 2009 7:33 AM
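    For anyone who hits the same full-GC storm: the fix described above is just launching the JVM with equal initial and maximum heap sizes. A sketch of a cache-server command line with the flags mentioned in this thread (the heap size is illustrative, not from the original posts):

        java -server -Xms4g -Xmx4g \
             -XX:+UseParallelGC -XX:+UseAdaptiveSizePolicy \
             -cp coherence.jar com.tangosol.net.DefaultCacheServer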

  • Get an error when accessing the entry from the Backing Map directly

    We are using some sample code from Oracle to access Objects associated via KeyAssociation directly from the Backing Map.
    Occasionally we get the error posted below. Can someone shed light on what this error means?
    I'm doing a Get on the Backing Map directly.
    Thanks,
    J
    An entry was inserted into the backing map for the partitioned cache "Customerl" that is not owned by this member; the entry will be removed.
    ReadWriteBackingMap$5{ReadWriteBackingMap inserted: key=Binary(length=75, value=0x---binary key data removed ----), value=Binary(length=691, value=0x---binary value data removed---)), synthetic}
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage.onBackingMapEvent(DistributedCache.CDB:152)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage$PrimaryListener.entryInserted(DistributedCache.CDB:1)
         at com.tangosol.util.MapEvent.dispatch(MapEvent.java:191)
         at com.tangosol.util.MapEvent.dispatch(MapEvent.java:164)
         at com.tangosol.util.MapListenerSupport.fireEvent(MapListenerSupport.java:556)
         at com.tangosol.net.cache.ReadWriteBackingMap$InternalMapListener.dispatch(ReadWriteBackingMap.java:2064)
         at com.tangosol.net.cache.ReadWriteBackingMap$InternalMapListener.entryInserted(ReadWriteBackingMap.java:1903)
         at com.tangosol.util.MapEvent.dispatch(MapEvent.java:191)
         at com.tangosol.util.MapEvent.dispatch(MapEvent.java:164)
         at com.tangosol.util.MapListenerSupport.fireEvent(MapListenerSupport.java:556)
         at com.tangosol.net.cache.OldCache.dispatchEvent(OldCache.java:1718)
         at com.tangosol.net.cache.OldCache$Entry.onAdd(OldCache.java:1786)
         at com.tangosol.util.SafeHashMap.put(SafeHashMap.java:244)
         at com.tangosol.net.cache.OldCache.put(OldCache.java:253)
         at com.tangosol.net.cache.OldCache.put(OldCache.java:221)
         at com.tangosol.net.cache.ReadWriteBackingMap.get(ReadWriteBackingMap.java:721)
         at

    Here is the sample we adapted to our specific cache. I have marked the line that throws the exception; it doesn't occur all the time - we saw it about 10 times yesterday and twice today.
    import com.tangosol.net.BackingMapManagerContext;
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.CacheService;
    import com.tangosol.net.DefaultConfigurableCacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.Binary;
    import com.tangosol.util.ClassHelper;
    import com.tangosol.util.InvocableMap;
    import com.tangosol.util.processor.AbstractProcessor;
    import java.io.Serializable;
    import java.util.Map;
    /** @author dimitri */
    public class Main extends AbstractProcessor {
        public static class Foo implements Serializable {
            String m_sFoo;
            public String getFoo() { return m_sFoo; }
            public void setFoo(String sFoo) { m_sFoo = sFoo; }
            public String toString() { return "Foo[foo=" + m_sFoo + "]"; }
        }

        public static class Bar implements Serializable {
            String m_sBar;
            public String getBar() { return m_sBar; }
            public void setBar(String sBar) { m_sBar = sBar; }
            public String toString() { return "Bar[bar=" + m_sBar + "]"; }
        }

        public Object process(InvocableMap.Entry entry) {
            try {
                // We are invoked on foo - update it.
                Foo foo = (Foo) entry.getValue();
                foo.setFoo(foo.getFoo() + " updated");
                entry.setValue(foo);

                // Now update Bar
                Object oStorage = ClassHelper.invoke(entry, "getStorage", null);
                CacheService service = (CacheService) ClassHelper.invoke(oStorage, "getService", null);
                DefaultConfigurableCacheFactory.Manager bmm =
                    (DefaultConfigurableCacheFactory.Manager) service.getBackingMapManager();
                BackingMapManagerContext ctx = bmm.getContext();
                Map mapBack = bmm.getBackingMap("bar");

                // Assume that the key is still the same - "test"
                Binary binKey = (Binary) ctx.getKeyToInternalConverter().convert(entry.getKey());
                Binary binValue = (Binary) mapBack.get(binKey); // <-- the line that intermittently throws

                // convert value from internal and update
                Bar bar = (Bar) ctx.getValueFromInternalConverter().convert(binValue);
                bar.setBar(bar.getBar() + " updated");

                // update backing map
                binValue = (Binary) ctx.getValueToInternalConverter().convert(bar);
                mapBack.put(binKey, binValue);
            } catch (Throwable oops) {
                throw ensureRuntimeException(oops);
            }
            return null;
        }

        public static void main(String[] asArg) {
            try {
                NamedCache cacheFoo = CacheFactory.getCache("foo");
                NamedCache cacheBar = CacheFactory.getCache("bar");
                Foo foo = new Foo();
                foo.setFoo("initial foo");
                cacheFoo.put("test", foo);
                Bar bar = new Bar();
                bar.setBar("initial bar");
                cacheBar.put("test", bar);
                System.out.println(cacheFoo.get("test"));
                System.out.println(cacheBar.get("test"));
                cacheFoo.invoke("test", new Main());
                System.out.println(cacheFoo.get("test"));
                System.out.println(cacheBar.get("test"));
            } catch (Throwable oops) {
                err(oops);
            } finally {
                CacheFactory.shutdown();
            }
        }
    }

  • 750GB Seagate External Drive - Disk Insertion error

    I have two 750 GB Seagate external hard drives, which are daisy-chained to my PowerMac G5. I was transferring files between the external hard drives when all of a sudden the computer crashed. Once I restarted, the computer gave a disk insertion error:
    Disk Insertion
    The disk you inserted was not readable by this computer.
    The options are: "Initialize..." "Ignore" and "Eject"
    One of the external hard drives is fine - the one which was directly hooked up to the computer. However, the drive that was daisy-chained still gives this error. I have tried connecting the drive to another Mac, but it still gave this error. I also tried connecting it to a PC, and it was detected, but it did not show up under "My Computer."
    Is there a way that I could save or re-format the hard drive WITHOUT losing the data?
    Please let me know. This is very urgent. I have a lot of important data on the hard drive. Any help/ideas/suggestions are greatly appreciated. Thank you very much.

    Hello! Sometimes a large file transfer will lock up or crash a computer. With a crash or hard restart the disk's directory gets damaged, and if Disk Utility can't repair it then DiskWarrior is about the only utility that can fix the problem. The data usually isn't gone: the disk's directory (the road map of where every piece of info is located on the drive) has become corrupt, hence there's nothing for it to find. DiskWarrior is the best utility for repairing the directory, and regular use of it will prevent "most" disk problems, since in most cases it's the directory that gets messed up rather than a physical problem with the disk. In cases where DiskWarrior can't fix the problem, Data Rescue II can sometimes recover valuable data. Tom

  • SQL 2005 Insert Error

    Hi Folks-
    It has been a while and I hope this finds you all doing well. 
    I am working on a customer application and we are experiencing intermittent MS SQL 2005 insert errors. We have isolated the cause as the use of a special character, the apostrophe (i.e. '), in a material/item description. The underlying MES application can accept the apostrophe, as can the SAP MM back-end, but MII cannot. The MII version is 12.0.6 Build 14. The application also uses the Java plug-in version 1.6.0_07, which is also the Java Runtime version. I am not sure which SQL JDBC driver is being used, but was hoping that perhaps there is a version that can "handle" this special character, or that there is a non-programmatic way to address this. At this point we are anxious about changing master data.
    Any insights or suggestions would be appreciated.  Thanks in advance and I look forward to your responses.
    Kevin Fitzgerald
    Invensys Operation Management

    Hi Kedar-
    This is great information and I thank you for your insights.  I do have what I hope are 2 brief follow-on questions:
    1. Does the single quote/apostrophe represent a general escape character that can be used in all cases? Are there others (i.e. \n, etc.)?
    2. The special characters cited above do not include the asterisk (*). My customer has also decided to use this special character in a new product description (go figure!). Can it be "escaped out" via the same means?
    I have been able to successfully use the Java function stringreplace to alter the text as you suggest - even for a local and, I hope, a transaction variable. Any additional comments on my questions above would certainly advance the cause.
    Thanks again and I look forward to your comments.  Take care and be safe.
    Kevin
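    For what it is worth, outside of MII this problem is normally avoided by binding the description as a JDBC parameter instead of concatenating it into the SQL string; the quote-doubling escape is only needed for inlined literals, and the asterisk is not special in a SQL string literal at all. A minimal JDBC sketch (table and column names are made up):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;

        public class ItemInsert {
            // Bound parameters: apostrophes and asterisks in desc need no escaping.
            public static void insert(Connection con, String matnr, String desc)
                    throws SQLException {
                PreparedStatement ps = con.prepareStatement(
                        "INSERT INTO ITEM_DESC (MATNR, DESCRIPTION) VALUES (?, ?)");
                try {
                    ps.setString(1, matnr);
                    ps.setString(2, desc);
                    ps.executeUpdate();
                } finally {
                    ps.close();
                }
            }

            // If the literal must be inlined (as in a hand-built query string),
            // double each single quote instead: O'Brien -> O''Brien.
            public static String escapeQuotes(String s) {
                return s.replace("'", "''");
            }
        }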

  • Nokia Map Loader Error in connection

    Hi Guys,
    I have an N80 and have the Maps software installed on my phone and the Nokia Map Loader software on my laptop. However, in order to download the maps I have to connect my N80 to my laptop in Data Transfer mode and then run Nokia Map Loader, which should then allow me to download the maps. I get an error when connecting to my laptop in Data Transfer mode - it just says "Invalid USB connection" or something like that. No, I don't think it is the Maps software, but I would like to know if any of you have had the same problem before and how it can be resolved. I sent a note to the Nokia tech team to see if they know, but it takes a while for them to get back to you.
    So, does anybody know how I can get past this or get the MAPS downloaded?
    Any advice would be greatly appreciated.
    Cheers

    Can't connect Nokia Map Loader? Error, won't work?
    I had this same problem, found your post looking for an answer (lol), and I just SOLVED the problem ;)
    First, you don't connect in Data Transfer mode. You must use PC Suite mode, or it won't work.
    Second (and this is the tricky part), YOU HAVE TO INSTALL Nokia Maps on your smartphone (so far ok, we all know this), but then you have to RUN THE PROGRAM ONCE.
    Don't ask me why, but if you just install Nokia Maps and then try to connect with Nokia Map Loader, it won't work. You have to install the program, then run it once. Exit the program, then connect to your PC (Nokia Map Loader) in PC Suite mode.
    BTW, do NOT install Nokia Maps in the phone memory. ALWAYS USE THE CARD MEMORY - install Nokia Maps on your memory card, because the maps are huge. Most phones won't have enough internal space.
    So this is it, no more suffering. Damn, I'm good ;)
    Message Edited by unsterblich on 17-Jan-2009 03:54 PM

  • Backing Map Scheme with storage disabled

    I've been playing around with cache servers and cache clients. If I have a distributed cache, the coherence-cache-config.xml for the client still requires a backing-map-scheme entry. If the client doesn't physically store data, then why does it need a backing map scheme?
    I'm interested to know whether I'm understanding the concept of the cache client correctly.
    Is Coherence just expecting to read the XML but will ignore it because I've set local storage to false, so it doesn't actually matter what is in the backing map scheme?
    Cheers
    Mike

    Hi Mike,
    You are right - storage-disabled nodes do not instantiate backing maps. Coherence reads the XML during initial configuration validation, and on cache clients the 'backing-map-scheme' element is not used.
    In general it is cleaner, less error-prone and easier to maintain a single configuration file and let Coherence worry about what to instantiate on different nodes.
    Regards,
    Dimitri
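    For completeness: the switch that makes a node a pure client is the local-storage system property, which is what lets servers and clients share one configuration file. For example (DefaultCacheServer is the stock Coherence server class; MyClient is a hypothetical application class):

        # storage-enabled cache server
        java -Dtangosol.coherence.distributed.localstorage=true -cp coherence.jar com.tangosol.net.DefaultCacheServer

        # storage-disabled client; the backing-map-scheme is validated but never instantiated
        java -Dtangosol.coherence.distributed.localstorage=false -cp coherence.jar:app.jar com.example.MyClient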

  • iPad 2 update: "Error in backing up (-5000)"

    I was updating my iPad 2 to the latest version; I have never updated before. It had finished downloading and was backing up when I got "Error in backing up (-5000)". I tried restarting; it didn't help. Someone please help.

    Many thanks, b Noir.
    This is a copy of ONE of the keys in the registry I changed as told by Apple support today. I have also changed others, as instructed by GEAR software support, to manually delete the GEAR drivers (which I had installed, but I couldn't delete some of the others they mentioned from Windows system32). Then some bright spark at work told me I need the GEAR drivers, so I downloaded the software and installed it again.
    Sorry, I just went to insert an image and it is giving me a message saying this sort of content is not allowed?
    The most recent key I altered is in: HKEY_LOCAL_MACHINE\System ... Class\{4D36E965-E325-11CE-BFC1-08002BE10318}. UpperFilters data: NTIDrvr, SiRem, GEARAspiWDN.
    The GEAR software info about manually deleting the GEAR drivers is from:
    http://www.gearsoftware.com/wiki/index.php?title=DRIVERS:_Windows_-_Updating%2C_removing%2C_64_bit_versions%2C_etc
    I hope you can help

  • Flash Builder 4.6 compile "Map Failed" error

    Hi, all
    First, Adobe has no place for bug reports. Yes, you will say http://bugs.adobe.com/jira/secure/Dashboard.jspa, but I have no account and don't know how to register one.
    OK, today I am compiling a mobile project that has 1.5 GB+ of pictures and SWFs.
    I get a "Map failed" error when I make an iOS package.
    The .log file content is:
    !ENTRY com.adobe.flexbuilder.project 4 43 2012-03-15 02:08:09.404
    !MESSAGE Map failed
    !STACK 0
    java.lang.Exception
              at com.adobe.flexbuilder.project.internal.FlexProjectCore.createErrorStatus(FlexProjectCore.java:1019)
              at com.adobe.flexbuilder.util.logging.GlobalLogImpl.log(GlobalLogImpl.java:66)
              at com.adobe.flexbuilder.util.logging.GlobalLog.log(GlobalLog.java:52)
              at com.adobe.flexide.multiplatform.ios.packaging.IPAPackager.create(IPAPackager.java:276)
              at com.adobe.flexide.multiplatform.ios.exportrelease.IOSExportReleaseHandler.doPackage(IOSExportReleaseHandler.java:264)
              at com.adobe.flashbuilder.project.multiplatform.ui.exportrelease.MultiPlatformExportReleaseVersionManager.doExport(MultiPlatformExportReleaseVersionManager.java:198)
              at com.adobe.flexbuilder.exportimport.releaseversion.ui.ExportReleaseVersionWizard$1.run(ExportReleaseVersionWizard.java:208)
    My hardware:
    G41 motherboard + 8 GB RAM + E7200 CPU
    My software:
    Windows 2008 R2 with SP1 (64-bit) + Flash Builder 4.6, default config, very clean system, just installed today.
    Then I transferred my project to a Windows XP host... oh my god... it works, the iOS package is produced normally.
    So... so... I can't say a word now... does Adobe hate Windows 7 (32-bit, I tested, same problem) and Windows 2008?
    Raymond

    I constantly had issues with this. I managed to run into the max for increasing the heap space, and was able to get beyond the limit by flagging the FlashBuilder.exe file to allow it to allocate higher memory address ranges. I'm on a 64-bit machine, so the only limitation was the 32-bit FlashBuilder process. Anyway, I was able to go from 1024m up to 1720m. Also note that Flash Builder 4.7 beta is out and is a native 64-bit application, so you will get higher addressing there; I just found it too buggy for my everyday development tasks. Also, missing the Design view really hurts development - I hope they put that back in.
    Here's more detail on how I got beyond the Java heap ceiling:
    http://chrsmrtn.azurewebsites.net/flash-builder-java-heap-errors-limitations-of-xms-and-xmx/
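    For reference, in Eclipse-based tools like Flash Builder the heap limits usually live in FlashBuilder.ini next to FlashBuilder.exe, where everything after -vmargs is passed to the JVM. A sketch using the 1720m figure from this reply (treat the exact file name and values as assumptions for your install):

        -vmargs
        -Xms512m
        -Xmx1720m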

  • How do I combine the Coherence 3.5 partitioned backing map with overflow?

    I would like to set up a near cache where the back cache uses an overflow map with a partitioned backing map as front and a file (or Berkeley DB) based back. I would like the storage for both primary and backup storage to use the same configuration. I tried the following cache config (I am not even sure it says anything about how the backup storage should be configured, except that I say it should be off-heap):
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
        <caching-scheme-mapping>
            <cache-mapping>
                <cache-name>near-small</cache-name>
                <scheme-name>near-schema</scheme-name>
            </cache-mapping>
        </caching-scheme-mapping>
        <caching-schemes>
            <near-scheme>
                <scheme-name>near-schema</scheme-name>
                <front-scheme>
                    <local-scheme>
                        <eviction-policy>HYBRID</eviction-policy>
                        <high-units>10000</high-units>
                    </local-scheme>
                </front-scheme>
                <back-scheme>
                    <distributed-scheme>
                        <scheme-name>near-distributed-scheme</scheme-name>
                        <service-name>PartitionedOffHeap</service-name>
                        <backup-count>1</backup-count>
                        <thread-count>4</thread-count>
                        <serializer>
                            <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
                        </serializer>
                        <backing-map-scheme>
                            <overflow-scheme>
                                <scheme-name>OverflowScheme</scheme-name>
                                <front-scheme>
                                    <external-scheme>
                                        <nio-memory-manager/>
                                        <unit-calculator>BINARY</unit-calculator>
                                        <high-units>256</high-units>
                                        <unit-factor>1048576</unit-factor>
                                    </external-scheme>
                                </front-scheme>
                                <back-scheme>
                                    <external-scheme>
                                        <scheme-name>DiskScheme</scheme-name>
                                        <lh-file-manager>
                                            <directory>./</directory>
                                        </lh-file-manager>
                                    </external-scheme>
                                </back-scheme>
                            </overflow-scheme>
                            <partitioned>true</partitioned>
                        </backing-map-scheme>
                        <backup-storage>
                            <type>off-heap</type>
                        </backup-storage>
                        <autostart>true</autostart>
                    </distributed-scheme>
                </back-scheme>
                <invalidation-strategy>present</invalidation-strategy>
                <autostart>true</autostart>
            </near-scheme>
            <!--
            Invocation Service scheme.
            -->
            <invocation-scheme>
                <scheme-name>example-invocation</scheme-name>
                <service-name>InvocationService</service-name>
                <autostart system-property="tangosol.coherence.invocation.autostart">true</autostart>
            </invocation-scheme>
        </caching-schemes>
    </cache-config>

    This all goes well when I start the cache node(s), but when I start an application that tries to use the cache I get the error message:
    2009-04-24 08:20:24.925/17.877 Oracle Coherence GE 3.5/453 (Pre-release) <Error> (thread=DistributedCache:PartitionedOffHeap, member=1): java.lang.IllegalStateException: Partition backing map com.tangosol.net.cache.OverflowMap does not implement ConfigurableCacheMap
         at com.tangosol.net.partition.ObservableSplittingBackingCache.createPartition(ObservableSplittingBackingCache.java:100)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage.initializePartitions(DistributedCache.CDB:10)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage.instantiateResourceMap(DistributedCache.CDB:63)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage.setCacheName(DistributedCache.CDB:27)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$ConfigListener.entryInserted(DistributedCache.CDB:15)
    How should I change my cache config to make this work?
    Best Regards
    Magnus

    Magnus,
    The optimizations related to efficiently supporting overflow-style caching are not included in Coherence 3.5. I created COH-2338 and COH-2339 to track the progress of the related issues.
    There are four different implementations of the PartitionAwareBackingMap for Coherence 3.5:
    * PartitionSplittingBackingMap is the simplest implementation that simply partitions data across a number of backing maps; it is not observable.
    * ObservableSplittingBackingMap is the observable implementation; it extends WrapperObservableMap and delegates to (wraps) a PartitionSplittingBackingMap.
    * ObservableSplittingBackingCache is an extension to the ObservableSplittingBackingMap that knows how to manage ConfigurableCacheMap instances as the underlying per-partition backing maps; in other words, it can spread out and coalesce a configured amount of memory (etc.) across all the actual backing maps.
    * ReadWriteSplittingBackingMap is an extension of the ReadWriteBackingMap that is partition-aware.
    The DefaultConfigurableCacheFactory currently only uses the ObservableSplittingBackingCache and the ReadWriteSplittingBackingMap; COH-2338 relates to the request for improvement to add support for the other two implementations as well. Additionally, optimizations to load balancing (where overflow caching tends to get bogged down by many small I/O operations) will be important; those are tracked by COH-2339.
    Peace,
    Cameron Purdy
    Oracle Coherence

  • Insert error message into session

    hi all,
    I need to insert an error message into the log of an SM35 session.
    1. I have created a BDC that creates a session in SM35.
    2. The session is scheduled daily and is executed automatically.
    3. There is a scenario where the session should be stopped manually with an error message; the system doesn't generate any error automatically for that scenario.
    Can anyone please tell me whether it's possible to put a manual error into the SM35 session log?
    Thanks in Advance,
    Santhosini

    Hi,
    There are a few ways you could do this:
    Construct your message first (i.e. combine the message text and variables into one string, then move this into the table),
    or
    don't store the text; instead store the message ID, number and the variable parts of the message. This has the advantage that, if you're running a multi-language system, the log can be used by users of different languages.
    Regards,
    Nick

  • SQL Insert Error Error in allocating a connection. Cause: No PasswordCreden

    Friends,
    While testing my connection in the Sun Java Application Server, I get the following error:
    "SQL Insert Error Error in allocating a connection. Cause: No PasswordCredential found"
    Can somebody please guide me?
    regards
    Dhiraj

    If you are using Netbeans, then this link might help:
    http://forum.java.sun.com/thread.jspa?forumID=136&threadID=598423
    Otherwise, have you tried this?
    Verify that your sun-ejb-jar.xml does not use the default-resource-principal element:
    <resource-ref>
      <res-ref-name>jdbc/pdisasdb</res-ref-name>
      <jndi-name>jdbc/pdisasdb</jndi-name>
      <default-resource-principal>
        <name>myname</name>
        <password>geheim</password>
      </default-resource-principal>
    </resource-ref>

  • I am getting ORA-20001: Seed insert error while seed translatable text step

    Hi,
    I am getting this error during English-to-Arabic translation, in the Seed Translatable Text step:
    ORA-20001: Seed insert error: WWV_FLOW_ICON_BAR.ICON_IMAGE_ALT ORA-00001: unique constraint (APEX_030200.WWV_FLOW_TRANSLATABLE_TEXT_PK) violated
    Can i get any suggetion from your side.
    Thanks,
    nar

    Did you ever figure this out? I also have this error.

  • Insert error in master-detail form

    Probably a stupid question. When I populate the master block of
    a master detail form from an LOV, I am asked to save the form.
    Since the information is loaded from the LOV, there are no
    changes to save. If I answer yes, it gives an Insert error due
    to the primary key violation in the master block table. If I
    answer no, it opens the detail block and gives the correct
    information. The problem is that when I enter information into
    the detail block and try to save it, I get the same error
    message regarding the primary key violation in the master block.
    The form works fine if I do not populate the master block from
    the LOV (or from select statements in triggers). Any suggestions
    will be appreciated.
    LS

    Hi,
    Check the form or block status. It looks like the status has changed; that's why you are getting the message. If any of the base table items has changed, you will get such a message. Try working on this and check it out.
    Thanks
    Vinod
