Listener for Unique Key in Coherence Cache

Hi Experts,
I am using Oracle Coherence in one of my projects and I am facing a performance issue. Here is my scenario.
This is my Coherence cache structure:
UniqueKey    Hive Data
D1           Hive Data1
D2           Hive Data2
D3           Hive Data3
Each unique key is for a user session. My application is a single sign-on with multiple applications involved.
The Coherence cache can be updated by any application/sub-application. Whenever there is a change in a user's hive data, I need to update it. My current implementation is:
1. Get all the data (a Map) from the Coherence cache (NamedCache).
2. Look for the specific user's key.
Hence there is a performance issue whenever I retrieve/set the hive data.
Is there a default listener/methodology in Oracle Coherence which I can use?

Thanks, Jonathan, for your timely response; I will look into the MapEvent, but just a quick question.
Map<Object, Map<Object, Object>> updateUserData = (Map<Object, Map<Object, Object>>) namedCache.get(uniqueKey);
This is how I retrieve the user data from the NamedCache. What it returns is a Map, and any change under the current day key is what the user is worried about.
addMapListener() on the ObservableMap will start listening for events on every key in the NamedCache. I am looking for a listener (or equivalent logic) that watches only this uniqueKey for this user session.
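Something along these lines is what I have in mind (just a sketch; I have not yet verified the key-based addMapListener overload against the docs):
MapListener listener = new MapListener() {
     public void entryInserted(MapEvent evt) {}
     public void entryUpdated(MapEvent evt) {
          // Only this user's hive data changed; refresh the session copy here.
          Map<Object, Map<Object, Object>> updated =
                    (Map<Object, Map<Object, Object>>) evt.getNewValue();
     }
     public void entryDeleted(MapEvent evt) {}
};
// Register for events on this user's key only, not the whole cache
// (the final argument false requests full events rather than lite ones).
namedCache.addMapListener(listener, uniqueKey, false);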
I will look into the MapEvent in detail.
Thanks Again,
Sarath

Similar Messages

  • Huge performance differences between a map listener for a key and filter

    Hi all,
I wanted to test the different kinds of map listener available in Coherence 3.3.1, as I would like to use it as an event bus. The result was that I found huge performance differences between them. In my use case, the data are time-stamped, so the full key of a datum is the key which identifies its type plus the time stamp. Unfortunately, when I add my map listener to the cache, I only know the type id and not the time stamp, so I cannot add a listener for a key; I can only add one for a filter which tests the value of the type id. When I launch my test, I get terrible performance results; I then tried a listener for a key, which gave much better results, but in my case I cannot use it.
Here are my results on a dual-core machine at 2.13 GHz:
    1) Map Listener for a Filter
    a) No Index
Create (data always added; the key is composed of the type id and the time stamp)
    Cache.put
    Test 1: Total 42094 millis, Avg 1052, Total Tries 40, Cache Size 80000
    Cache.putAll
    Test 2: Total 43860 millis, Avg 1096, Total Tries 40, Cache Size 80000
Update (data added then updated; the key is composed of the type id only)
    Cache.put
    Test 3: Total 56390 millis, Avg 1409, Total Tries 40, Cache Size 2000
    Cache.putAll
    Test 4: Total 51734 millis, Avg 1293, Total Tries 40, Cache Size 2000
    b) With Index
    Cache.put
    Test 5: Total 39594 millis, Avg 989, Total Tries 40, Cache Size 80000
    Cache.putAll
    Test 6: Total 43313 millis, Avg 1082, Total Tries 40, Cache Size 80000
    Update
    Cache.put
    Test 7: Total 55390 millis, Avg 1384, Total Tries 40, Cache Size 2000
    Cache.putAll
    Test 8: Total 51328 millis, Avg 1283, Total Tries 40, Cache Size 2000
    2) Map Listener for a Key
    Update
    Cache.put
    Test 9: Total 3937 millis, Avg 98, Total Tries 40, Cache Size 2000
    Cache.putAll
    Test 10: Total 1078 millis, Avg 26, Total Tries 40, Cache Size 2000
Please help me find what is wrong with my code, because for now it is unusable.
    Best Regards,
    Nicolas
    Here is my code
    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;
    import java.util.HashMap;
    import java.util.Map;
    import com.tangosol.io.ExternalizableLite;
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.Filter;
    import com.tangosol.util.MapEvent;
    import com.tangosol.util.MapListener;
    import com.tangosol.util.extractor.ReflectionExtractor;
    import com.tangosol.util.filter.EqualsFilter;
    import com.tangosol.util.filter.MapEventFilter;
public class TestFilter {
     /**
      * To run a specific test, just launch the program with one parameter which
      * is the test index.
      */
     public static void main(String[] args) {
          if (args.length != 1) {
               System.out.println("Usage : java TestFilter 1-10|all");
               System.exit(1);
          }
          final String arg = args[0];
          if (arg.endsWith("all")) {
               for (int i = 1; i <= 10; i++) {
                    test(i);
               }
          } else {
               final int testIndex = Integer.parseInt(args[0]);
               if (testIndex < 1 || testIndex > 10) {
                    System.out.println("Usage : java TestFilter 1-10|all");
                    System.exit(1);
               }
               test(testIndex);
          }
     }

     @SuppressWarnings("unchecked")
     private static void test(int testIndex) {
          final NamedCache cache = CacheFactory.getCache("test-cache");
          final int totalObjects = 2000;
          final int totalTries = 40;
          if (testIndex >= 5 && testIndex <= 8) {
               // Add index
               cache.addIndex(new ReflectionExtractor("getKey"), false, null);
          }
          // Add listeners
          for (int i = 0; i < totalObjects; i++) {
               final MapListener listener = new SimpleMapListener();
               if (testIndex < 9) {
                    // Listen to data with a given filter
                    final Filter filter = new EqualsFilter("getKey", i);
                    cache.addMapListener(listener, new MapEventFilter(filter), false);
               } else {
                    // Listen to data with a given key
                    cache.addMapListener(listener, new TestObjectSimple(i), false);
               }
          }
          // Load data
          long time = System.currentTimeMillis();
          for (int iTry = 0; iTry < totalTries; iTry++) {
               final long currentTime = System.currentTimeMillis();
               final Map<Object, Object> buffer = new HashMap<Object, Object>(totalObjects);
               for (int i = 0; i < totalObjects; i++) {
                    final Object obj;
                    if (testIndex == 1 || testIndex == 2 || testIndex == 5 || testIndex == 6) {
                         // Create data with a key with a time stamp
                         obj = new TestObjectComplete(i, currentTime);
                    } else {
                         // Create data with a key without a time stamp
                         obj = new TestObjectSimple(i);
                    }
                    if ((testIndex & 1) == 1) {
                         // Load data directly into the cache
                         cache.put(obj, obj);
                    } else {
                         // Load data into a buffer first
                         buffer.put(obj, obj);
                    }
               }
               if (!buffer.isEmpty()) {
                    cache.putAll(buffer);
               }
          }
          time = System.currentTimeMillis() - time;
          System.out.println("Test " + testIndex + ": Total " + time + " millis, Avg " + (time / totalTries) + ", Total Tries " + totalTries + ", Cache Size " + cache.size());
          cache.destroy();
     }

     public static class SimpleMapListener implements MapListener {
          public void entryDeleted(MapEvent evt) {}
          public void entryInserted(MapEvent evt) {}
          public void entryUpdated(MapEvent evt) {}
     }

     public static class TestObjectComplete implements ExternalizableLite {
          private static final long serialVersionUID = -400722070328560360L;
          private int key;
          private long time;
          public TestObjectComplete() {}
          public TestObjectComplete(int key, long time) {
               this.key = key;
               this.time = time;
          }
          public int getKey() {
               return key;
          }
          public void readExternal(DataInput in) throws IOException {
               this.key = in.readInt();
               this.time = in.readLong();
          }
          public void writeExternal(DataOutput out) throws IOException {
               out.writeInt(key);
               out.writeLong(time);
          }
     }

     public static class TestObjectSimple implements ExternalizableLite {
          private static final long serialVersionUID = 6154040491849669837L;
          private int key;
          public TestObjectSimple() {}
          public TestObjectSimple(int key) {
               this.key = key;
          }
          public int getKey() {
               return key;
          }
          public void readExternal(DataInput in) throws IOException {
               this.key = in.readInt();
          }
          public void writeExternal(DataOutput out) throws IOException {
               out.writeInt(key);
          }
          public int hashCode() {
               return key;
          }
          public boolean equals(Object o) {
               return o instanceof TestObjectSimple && key == ((TestObjectSimple) o).key;
          }
     }
}
Here is my Coherence config file:
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
         <caching-scheme-mapping>
              <cache-mapping>
                   <cache-name>test-cache</cache-name>
                   <scheme-name>default-distributed</scheme-name>
              </cache-mapping>
         </caching-scheme-mapping>
         <caching-schemes>          
              <distributed-scheme>
                   <scheme-name>default-distributed</scheme-name>
                   <backing-map-scheme>
                        <class-scheme>
                             <scheme-ref>default-backing-map</scheme-ref>
                        </class-scheme>
                   </backing-map-scheme>
              </distributed-scheme>
              <class-scheme>
                   <scheme-name>default-backing-map</scheme-name>
                   <class-name>com.tangosol.util.SafeHashMap</class-name>
              </class-scheme>
         </caching-schemes>
</cache-config>

> Hi Robert,
>> Indeed, only the Filter.evaluate(Object obj) method is invoked, but the object passed to it is a MapEvent.
> In fact, I do not need to implement EntryFilter to get a MapEvent; I could get the same result (in my last message) by writing cache.addMapListener(listener, filter, true) instead of cache.addMapListener(listener, new MapEventFilter(filter), true).
I believe that when the MapEventFilter delegates to your filter, it always passes a value object to your filter (old or new), meaning a value will be deserialized.
If you instead used your own filter, you could avoid deserializing the value, which is usually much larger, and look only at the key object. This would of course only be noticeable if you indeed used a much heavier cached value class.
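For example, a sketch of such a key-only filter (it reuses your TestObjectSimple key class; note that when a plain Filter is registered with addMapListener, the object handed to evaluate() is the MapEvent itself):
import java.io.Serializable;
import com.tangosol.util.Filter;
import com.tangosol.util.MapEvent;

public class KeyOnlyEventFilter implements Filter, Serializable {
     private final int typeId;
     public KeyOnlyEventFilter(int typeId) {
          this.typeId = typeId;
     }
     public boolean evaluate(Object o) {
          // getKey() touches only the key of the event; the (heavier)
          // old and new values are never deserialized by this filter.
          Object key = ((MapEvent) o).getKey();
          return key instanceof TestObjectSimple
                    && ((TestObjectSimple) key).getKey() == typeId;
     }
}
Registration would then be, e.g., cache.addMapListener(listener, new KeyOnlyEventFilter(i), true); with true requesting lite events.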
>> The hashCode() and equals() do not matter on the filter class.
> I'm not so sure, since I noticed that these methods are implemented in the EqualsFilter class, that they are called at runtime, and that the performance results are better when you add them.
That interests me... In what circumstances did you see them invoked? On the storage node before sending an event, or upon registering a filtered listener?
If the latter, then I guess the listeners are stored in a hash-based map of collections keyed by filter, and indeed that might be relevant, as it would cause fewer passes over the filter when multiple listeners are registered with equal filters.
>> DataOutput.writeInt(int) writes 4 bytes. ExternalizableHelper.writeInt(DataOutput, int) writes 1-5 bytes (or 1-6?), with numbers of small absolute value consuming fewer bytes. Similar differences exist for the long type as well, but your stamp attribute will probably be a large number...
> I tried it, but in my use case I got the same results. I guess it would become interesting if I serialized/deserialized many more objects.
>> Also, if Coherence serializes an ExternalizableLite object, it writes out its class name (except if it is a Coherence XmlBean). If you define your key as an XmlBean, and add your class to the classname cache configuration in ExternalizableHelper.xml, then only an int will be written instead of the classname. This way you can spare a large percentage of the bandwidth consumed by transferring your key instance, as it has only a small number of attributes. For the value object it might or might not be so relevant, considering that it will probably contain many more attributes. However, in the case of a lite event, the value is not transferred at all.
> I tried it too, and in my use case I noticed that we get objects nearly twice as light as an ExternalizableLite object, but it's slower to get them. Still, it is very interesting to keep in mind if we would like to reduce the network traffic.
Yes, these are minor differences at the moment.
As for the performance of XmlBean, it is a hack, but you might try overriding the readExternal/writeExternal methods with your usual ExternalizableLite implementation. That way you get the advantage of the XmlBean classname cache and avoid its reflection-based operation, at the cost of having to extend XmlBean.
Also, sooner or later the TCMP protocol and the distributed cache storages will support using PortableObject as a transmission format, which enables your own classname resolution and allows you to omit the classname from your objects. Unfortunately, I don't know when it will be implemented.
> But finally, I guess that I found the best solution for my specific use case, which is to use a map listener for a key which has no time stamp; since the time stamp is never null, I just had to check the time stamp properly in the equals method.
I would still recommend using a separate key class, a custom filter which accesses only the key and not the value, and, if possible, registering a lite listener instead of a heavy one. Try it with a much heavier cached value class, where the differences are more pronounced.
Best regards,
Robert

  • How shall we do validation for Unique Key and Multiple Primary Key?

    Hi,
I have a table created from an EO in which one column is checked as Unique.
How do I do validation for a column checked as Unique?
I know how to do validation for a column checked as primary key.
Below is sample code for primary key validation:
if (getRvSize() != null)
{
     throw new OAAttrValException(OAException.TYP_ENTITY_OBJECT,
          getEntityDef().getFullName(),      // EO name
          getPrimaryKey(),                   // EO PK
          "RvSize",                          // Attribute Name
          value,                             // Attribute value
          "AK",                              // Message product short name
          "FWK_TBX_T_EMP_ID_NO_UPDATE");     // Message name
}
if (value != null)
{
     OADBTransaction transaction = getOADBTransaction();
     Object[] rvKey = { value };
     EntityDefImpl rvDefinition = xxczVAGCSRVSizingEOImpl.getDefinitionObject();
     xxczVAGCSRVSizingEOImpl rv =
          (xxczVAGCSRVSizingEOImpl) rvDefinition.findByPrimaryKey(transaction, new Key(rvKey));
     if (rv != null)
     {
          throw new OAAttrValException(OAException.TYP_ENTITY_OBJECT,
               getEntityDef().getFullName(), // EO name
               getPrimaryKey(),              // EO PK
               "RvSize",                     // Attribute Name
               value,                        // Attribute value
               "AK",                         // Message product short name
               "FWK_TBX_T_EMP_ID_UNIQUE");   // Message name
     }
}
What changes need to be made to the above code in order to do the validation for a unique key?
I have one more question:
How do we do validation for multiple primary keys in a table?
    - Mithun

1. If you just validate one attribute, like your unique key, then put your logic in the set<YourAttributeName>() method.
2. If you want to do cross-validation (validating multiple attributes together), then put your logic in the validateEntity() method.
How to do that?
1. Create a validation view object (VVO).
2. Associate your VVO with the VAM.
3. Create an entity expert.
4. Add a method to the entity expert for your validation (you would call the AM, the VO executes the query, and you do the validation; see the rough sketch below).
5. Call the entity expert method from your EO, either from the set methods or from validateEntity().
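A rough sketch of the entity expert method in step 4 (all the names here, the VVO, its attribute, and the method, are made up for illustration; your actual VVO and its WHERE clause will differ):
// Hypothetical entity-expert helper: returns true if no row already
// uses this unique-key value. "RvSizeVVO1" is an assumed validation VO.
public boolean isRvSizeUnique(OAApplicationModule am, String rvSize) {
     OAViewObject vvo = (OAViewObject) am.findViewObject("RvSizeVVO1");
     vvo.setWhereClause("RV_SIZE = :1");
     vvo.setWhereClauseParams(new Object[] { rvSize });
     vvo.executeQuery();
     return vvo.first() == null; // unique if no existing row matches
}
From the EO's setter (or validateEntity), you would call this and throw an OAAttrValException, as in the primary-key sample above, when it returns false.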
    I have given just the high level points.
    Hope this helps.
    Thanks,
    RK

  • Listening for any key to be pressed?

I'm writing a small text-based program, and when Task #1 finishes,
it prompts the user to press any key.
If a key is pressed, the program proceeds to Task #2, and when that finishes the user needs to press any key again. And so on...
So how can I listen for whether any key has been pressed?
Can somebody give me some pointers to examples dealing with this, and which class I should look at?
    Any help would be appreciated.

Your class needs to implement KeyListener and define three methods:
keyPressed, keyTyped, and keyReleased.
You will also need a counter to know when to do task 1, task 2, etc.
So it would look like:
public class Whatever extends Something implements KeyListener {
     private static int counter = 1;
     public static void main(String[] args) {
     }
     public static void task1() {
          // some code
          counter++;
     }
     public static void task2() {
          // some code
          counter++;
     }
     public void keyTyped(KeyEvent e) {
          switch (counter) {
               case 1: task1(); break;
               case 2: task2(); break;
          }
     }
     public void keyPressed(KeyEvent e) {}
     public void keyReleased(KeyEvent e) {}
}
Hope this helps!
    Tom
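For a purely console-based program, note that a KeyListener only fires for a focused AWT/Swing component; a common fallback is to block on System.in until Enter is pressed (the console is line-buffered, so detecting a true "any key" needs platform-specific tricks). A minimal sketch:
import java.io.IOException;

public class PressEnterDemo {
     public static void main(String[] args) throws IOException {
          System.out.println("Task #1 done. Press Enter to continue...");
          waitForEnter();
          System.out.println("Task #2 done. Press Enter to continue...");
          waitForEnter();
     }
     private static void waitForEnter() throws IOException {
          System.in.read();                   // blocks until Enter arrives
          while (System.in.available() > 0) { // drain the rest of the line
               System.in.read();
          }
     }
}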

  • Cannot create DDL for Unique Key

I have a relational model with a unique key constraint. When I go to Export -> DDL File for my model and look at the PK and UK constraints tab under "DDL Generation Options", the UK is checked. But upon inspecting the generated DDL file, it is not being generated! The target database is Oracle 11.1 and I am using DM 4.0.3.853.

    Hi,
    Very strange.  Is there anything unusual about the Table or its Unique Key column(s)?
    Is everything else being generated as expected?
    Are there any relevant error messages in the log?  (You can use External Log on the View menu to display the log.)
    David

  • How to add a listener for enter key

I'd like to build a chat application, just like MSN. The application
has two text areas: one for the messages sent and received, and one
for typing the message. Just like MSN, I'd like to send the text
after the user presses the Enter key. But my problem is I don't know how
to make a key listener. I've tried jTextArea1.addKeyListener() but I don't
know how to use it. Can anyone help me?
Thanks in advance

KeyListener is just an interface, so you could use:
public class MyKeyListener implements KeyListener {
   private JTextArea area = null;
   public MyKeyListener(JTextArea area) {
        this.area = area;
   }
   // Don't care about a key being pressed.
   public void keyPressed(KeyEvent e) {}
   // Don't care about a key being typed.
   public void keyTyped(KeyEvent e) {}
   // We care about the key being released.
   public void keyReleased(KeyEvent e) {
       // Check what key it is. Not sure if \n is the right one... maybe use the key codes.
       if (e.getKeyChar() == '\n') {
            // The return key has been pressed and released; they really meant to press it!
            // Get the last line of the text from the text area. This bit's up to you... maybe
            // store the last location of the caret and then read from that point up
            // to the current point...
       }
   }
}
You can also use a DocumentListener if you are using a JTextArea; it is a little more complex but may be better, since DocumentListeners tell you about more macro-level changes to the document...
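Wiring the listener up (the part the original question asked about) is then a one-liner; sketch:
// somewhere in your UI setup code:
JTextArea typingArea = new JTextArea();
typingArea.addKeyListener(new MyKeyListener(typingArea));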

  • Listening for a key within the whole app

Hi. I want to add a "hotkey" feature to my program, for example to allow a user to press F1 at any time and get a help window.
I read the Swing documentation, and apparently only the focused component can receive keystrokes (which would explain why adding the KeyListener to the window does nothing).
What would be the workaround for this issue?
    Cheers,
    Hernan

Well, first of all, Menus and MenuItems can be assigned accelerators, which normally takes care of your case.
The other thing you can do is use the KeyboardFocusManager; see the sketch below.
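A sketch of the KeyboardFocusManager route (F1 is assumed as the hotkey; showHelpWindow() stands in for your own method):
import java.awt.KeyEventDispatcher;
import java.awt.KeyboardFocusManager;
import java.awt.event.KeyEvent;

// The dispatcher sees every key event in the application, regardless of focus.
KeyboardFocusManager.getCurrentKeyboardFocusManager()
          .addKeyEventDispatcher(new KeyEventDispatcher() {
               public boolean dispatchKeyEvent(KeyEvent e) {
                    if (e.getID() == KeyEvent.KEY_PRESSED
                              && e.getKeyCode() == KeyEvent.VK_F1) {
                         showHelpWindow(); // hypothetical: open the help window
                         return true;      // consume the event
                    }
                    return false;          // let all other events pass through
               }
          });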

  • Listening for keyPressed events from background?

Is it possible to have a Java program running in the background and listening for keyPressed events? I would like the program to listen for certain keys being pressed and, when they are, manipulate the mouse. From what I've looked at so far, however, it seems the listening component must have focus. Thanks.

On any OS, that would typically involve hooking into the OS itself,
and then manipulating OS resources as well.
Putting an app into the 'background' is also OS-specific.
Given that the above is all you want to do, Java is not an appropriate language choice; C++ would be better.

  • External Coherence Cache

    Hello,
I started using Coherence 3.6 and had no issues going through the tutorial.
Also, from the web application code, I was able to use the WebLogic Server internal Coherence cache.
I would like help with setting up an external Coherence cache. Any pointers?
Thank you for your help.

    Hi,
The Coherence product has a /bin folder where you will find scripts for running standalone/external Coherence cache servers:
- cache-server.cmd: execute the script multiple times and you will have multiple external Coherence cache servers that can store/compute data.
- query.cmd: execute SQL-like scripts to verify the data loaded into the Coherence grid (read the documentation at http://download.oracle.com/docs/cd/E15357_01/coh.360/e15723/api_cq.htm).
- coherence.cmd: execute it for querying the Coherence grid (type help for commands).
Also, I would recommend going through the Developer's Guide and the Java API to understand Coherence's capabilities. Here is a list of links that may be helpful: http://ora-soa.blogspot.com/2011/07/where-to-start-learning-coherence.html
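Once the external cache servers are running, a client JVM (your web app) can join the same cluster with storage disabled and simply ask for a cache. A minimal sketch ("my-cache" is an assumed name; run the client with -Dtangosol.coherence.distributed.localstorage=false):
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class ExternalCacheClient {
     public static void main(String[] args) {
          // The data lives on the external cache servers, not in this JVM.
          NamedCache cache = CacheFactory.getCache("my-cache");
          cache.put("key1", "value1");
          System.out.println(cache.get("key1"));
          CacheFactory.shutdown();
     }
}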
    Cheers,
    NJ

  • CQL Join on Coherence Cache with Composite Key

    I have a Coherence Cache with a composite key and I want to join a channel to records in that cache with a CQL processor. When I deploy the package containing the processor, I get the following error:
14:32:35,938 | alter query SimpleQuery start | [ACTIVE] ExecuteThread: '7' for queue: 'weblogic.kernel.Default (self-tuning)' | CQLServer | FATAL
14:32:35,938 | alter query >>SimpleQuery<< start
specified predicate requires full scan of external source which is not supported. please modify the join predicate | [ACTIVE] ExecuteThread: '7' for queue: 'weblogic.kernel.Default (self-tuning)' | CQLServer | FATAL
    I think that I'm using the entire key. If I change the key to a single field, it deploys OK. I found a similar issue when I defined a Java class to represent the composite key. Is it possible to join in this way on a composite key cache?
    I could define another field which is a concatenation of the fields in the composite key but this is a little messy.
    My config is as below:
    <wlevs:caching-system id="MyCache" provider="coherence" />
    <wlevs:event-type-repository>
    <wlevs:event-type type-name="SimpleEvent">
              <wlevs:properties>
                   <wlevs:property name="field1" type="char" />
                   <wlevs:property name="field2" type="char" />
              </wlevs:properties>
    </wlevs:event-type>
    </wlevs:event-type-repository>
         <wlevs:channel id="InChannel" event-type="SimpleEvent" >
              <wlevs:listener ref="SimpleProcessor" />
         </wlevs:channel>
         <wlevs:processor id="SimpleProcessor">
              <wlevs:listener ref="OutChannel" />
              <wlevs:cache-source ref="SimpleCache" />
         </wlevs:processor>
         <wlevs:channel id="OutChannel" event-type="SimpleEvent" >
         </wlevs:channel>
         <wlevs:cache id="SimpleCache" value-type="SimpleEvent"
              key-properties="field1,field2"
              caching-system="MyCache" >
         </wlevs:cache>
    and the processor CQL is as follows:
    <processor>
    <name>SimpleProcessor</name>
    <rules>
    <query id="SimpleQuery">
    <![CDATA[ 
                select I.field1, I.field2 from InChannel [now] as I,
    SimpleCache as S where
    I.field1 = S.field1 and
    I.field2 = S.field2
    ]]> </query>
    </rules>
    </processor>
    Thanks
    Mike

Unfortunately, joining on composite keys in Coherence is not supported in the released versions; it will be supported in the 12g release.
As you mention, defining another field as the key, a concatenation of the original key fields, is the workaround; a small sketch follows.
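Something like this on the event class (getJoinKey is a made-up name; you would then declare key-properties="joinKey" in the cache config and join on that single property in the CQL):
// Sketch on the SimpleEvent class (fields shown as String for simplicity).
private String field1;
private String field2;

public String getJoinKey() {
     // Assumes '|' never occurs in either field.
     return field1 + "|" + field2;
}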

  • Specifying coherence-cache-config.xml for multiple clusters

    Hi,
I am running two cache clusters (cluster A and cluster B, which hold different cache types). We have a web application that needs to communicate with both clusters, and we have two coherence-cache-config.xml files, one for each cluster.
Where do we specify the two coherence-cache-config.xml files, one per cluster, given the coherence.jar that we deploy on the web app server?
Please provide some inputs...
    thanks in advance,
    - G.

    Hi G,
You can define a path to the cache configuration descriptor in your operational configuration override file (tangosol-coherence-override.xml) or specify it in the system property "tangosol.coherence.cacheconfig".
    Please see this Wiki page for details:
    http://wiki.tangosol.com/display/COH32UG/configurable-cache-factory-config
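Programmatically, you can also point a cache factory at a specific descriptor; a sketch using the Coherence 3.x classes (file and cache names assumed):
import com.tangosol.net.DefaultConfigurableCacheFactory;
import com.tangosol.net.NamedCache;

// Load an explicit cache configuration instead of the classpath default.
DefaultConfigurableCacheFactory factoryA =
          new DefaultConfigurableCacheFactory("coherence-cache-config-a.xml");
NamedCache cacheA = factoryA.ensureCache("cache-a", null);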
    Regards,
    Gene

  • How to specify index for cache in coherence-cache-config.xml

    Hi All,
We want to apply indexing on cache data.
Suppose I have an EMPLOYEE object in the Coherence cache,
and I want to use employeeID for indexing purposes.
Can anybody help me achieve this at the configuration level, i.e. using the XML file (coherence-cache-config.xml)?

    Hi,
I've posted some code (http://coherence.oracle.com/download/attachments/14647422/add-index-namespace.jar) and the source (http://coherence.oracle.com/download/attachments/14647422/add-index-namespace-src.jar). It depends on coherence-common version 2.3.0.39174; however, I believe it will work with 2.0.0.23649 also. The coherence-common library can be downloaded from http://coherence.oracle.com/display/INC10/coherence-common
Note: this is purely an example of how to achieve index creation via a cache configuration file; it is not part of the product and thus is not covered by product support.
    Here is an example cache configuration that uses the namespace:
    <cache-config xmlns:service="class://com.oracle.coherence.environment.extensible.ServiceOperations">
        <caching-scheme-mapping>
            <service:index-add cache-name="dist-indexes">
                <extractor>
                    <class-name>ReflectionExtractor</class-name>
                    <init-params>
                        <init-param>
                            <param-type>string</param-type>
                            <param-value>getName</param-value>
                        </init-param>
                    </init-params>
                </extractor>
            </service:index-add>
            <!-- Simplified POF Config -->
            <service:index-add cache-name="dist-indexes" pof-enabled="true">
                <pof-index>8,16,32</pof-index>
            </service:index-add>
            <!-- This should not be counted based on system-property override -->
            <service:index-add cache-name="dist-indexes" pof-enabled="true" enabled="{tangosol.index.add}">
                <pof-index>8,16,31</pof-index>
            </service:index-add>
            <!-- Explicit POF Config -->
            <service:index-add cache-name="dist-indexes">
                <extractor>
                    <class-name>PofExtractor</class-name>
                    <init-params>
                        <init-param>
                            <param-type>{class}</param-type>
                            <param-value>null</param-value>
                        </init-param>
                        <init-param>
                            <param-type>{object}</param-type>
                            <param-value>
                                <class-name>com.tangosol.io.pof.reflect.SimplePofPath</class-name>
                                <init-params>
                                    <init-param>
                                        <param-type>{int[]}</param-type>
                                        <param-value>1,2,4</param-value>
                                    </init-param>
                                </init-params>                     
                            </param-value>
                        </init-param>
                    </init-params>
                </extractor>
            </service:index-add>
        </caching-scheme-mapping>
</cache-config>
Thanks,
    Harvey
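For comparison, the programmatic equivalent that the namespace saves you from writing (using the same cache and extractor names as the first example above):
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.extractor.ReflectionExtractor;

NamedCache cache = CacheFactory.getCache("dist-indexes");
// ordered=false and comparator=null match the ReflectionExtractor example;
// pass true (and optionally a Comparator) for a sorted index.
cache.addIndex(new ReflectionExtractor("getName"), false, null);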

  • How can I run a check for a record across other records in the same dataset based on unique key constraint?

    Dear community,
I have a flat data set which I run through a Lookup and Return. I want to check whether at least one of the records matching a unique key constraint had a successful return, without merging all the records. Can someone point me in the right direction?
    Thanks!
    Marc

    Hi,
    Sounds like a job for the Lookup Check processor!
As with all check processors, it adds a flag attribute (with a Y/N value) that denotes whether the lookup was successful, and it can filter records based on this at output.
    Lookup Check does the same as Lookup and Return but does not return the record(s) looked up.
    Also note that Lookup and Return does not actually merge records, but it does pull back the return data (which you can then split if you want a record for each).
    Regards,
    Mike

  • Looking for some advice on CEP HA and Coherence cache

    We are looking for some advice or recommendation on CEP architecture.
    We need to build a CEP application that conforms to the following:
    • HA with no loss of events or duplicate events when failing over to the backup server.
• We have some aggregative rules that need to see all events.
    • Events are XMLs with size of 3KB-50KB. Not all elements are needed for the rules but they are there for other systems that come after the CEP (the customer services).
    • The XML elements that the CEP needs are in varying depth in the XML.
Running the EPN on a single thread is not fast enough for the required throughput, mainly because of network latency to the JMS and the heavy task of parsing the XML. Because of that, we are looking for a solution that reads the messages from the JMS in parallel (multi-threaded) but keeps the same order of events between the primary and secondary CEPs.
    One idea that came to our minds is to use Coherence cache in the following way:
    • On the CEP inbound use a distributed queue and not topic (at the CEP outbound it is still topic).
    • On the CEPs side use a Coherence cache that runs on the CEPs JVMs (since we already have a Coherence cluster for HA).
• Both CEPs read from the queue using multiple threads (10 reading threads each, 20 threads in total) and put the events into the Coherence cache.
• The Coherence cache publishes the events to both CEPs on a single thread.
    The EPN looks something like this:
    JMS adapter (multi threaded) -> replicated cache on both CEPs -> event bean -> HA adapter -> channel -> processor -> ….
Does this sound sensible to you?
Are we overshooting here? Is there a simpler solution for our needs?
    Is there a best practice for such requirements?
    Thanks

    Hi,
    Just to make it clear:
We do not parse the XML in the event bean after Coherence; we do it in the JMS adapter on multiple threads, in order to utilize all the server resources (CPUs), and then we put the result into the replicated cache.
The requirements for our application are:
- There is an aggregative query that needs to "see" all events (this means we need to pass all events through a single processor; we cannot partition them across several processors).
- Because this is an HA solution, the events on both CEPs (primary and secondary) need to be in the same order when reaching the HA inbound adapter and the processor.
- A single-threaded JMS adapter does not read the messages from the JMS fast enough, mainly because it takes time to parse the XML into an event.
- Using a multi-threaded adapter, or many single-threaded adapters with message selectors, will create a situation where the order of events on the two CEPs is not the same at the processor inbound.
This is why we needed a mediator: we read on multiple threads that parse the XMLs in parallel without worrying about message order and, on the other side, publish all the messages on a single thread to the processors on both CEPs from this shared mediator (we use a replicated cache that runs on both JVMs).
We use a queue instead of a topic because if we read the messages from a topic on both CEPs, each message would be stored twice in the Coherence replicated cache. With a queue, when server 1 reads a message and puts it into the Coherence replicated cache, server 2 will not read it, because it was removed from the queue.
If I understand correctly, you are suggesting replacing the JMS adapter with an event bean that reads the messages from the JMS directly?
Are you also suggesting that we not use a replicated cache, but instead a standalone cache on each server? In that case, how do we keep the same order of events on both CEPs (in both caches)?

  • Normal Indexes created for Primary/Unique Keys in Oracle8/8i

I remember that prior to Oracle 8, when we created Primary/Unique Key constraints, a unique index was created by default. But I am noticing now in 8/8i that when we create either a Primary or Unique key, only a normal/simple index is created. We are not able to override this normal index either, by dropping and recreating the index alone. I believe it would be better for performance if we had a unique index on these columns. Can anybody explain why this was changed in 8/8i?
    Thanks,
    R. M.

    Dear Rob,
Since your answer was helpful, and since it was the only one, I will grant you full points for it.
Thanks again for your input. In case other developers look up this thread when confronted
with the same kind of problem, here is how we solved it:
We added an artificial primary key (a number of type NUMC 8) to the table which is supposed to
include the structure. This key alone takes care of the uniqueness of each entry.
All the other fields that we want to have available for fast direct access, including the ones
from the included structure, are put together in a secondary index.
    best regards
    Andreas
