Updating a hierarchical data structure from an entry processor

I have a tree-like data structure that I am attempting to update from an AbstractProcessor.
Imagine that one value is a collection of child value keys, and I want to add a new child node in the tree. This requires updating the parent node (which contains the list of child nodes), and adding the child value which is a separate entry.
I would rather not combine all bits of data into one value (which could make for a large serialized object), as sometimes I prefer to access (read-only) the child values directly. The child and the parent values live in the same partition in the partitioned cache, though, so get access should be local.
However, calling put() on the same cache to add the child value is apparently disallowed. It makes sense that a blocking call is involved, since the data has to be pushed out to the cluster member holding the backup copy, but is there a general problem with performing any kind of re-entrant work on a Coherence cache from an entry processor, for any value other than the one being processed? I get the assertion below.
I am fine with the context blocking (preventing reads or writes on the parent node value) until the child completes, presuming that I handle deadlock prevention myself due to the order in which values are accessed.
Is there any way to do this, either with entry processors or not? My code previously used lock, get and put to operate on the tree (which worked), but I am trying to convert this code to use entry processors to be more efficient.
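For illustration, here is a stripped-down, hypothetical sketch of the failing pattern (all names are made up; the real processor is the UpdatePropertiesProcessor visible in the trace below). The parent value is modeled as a plain list of child keys:

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.InvocableMap;
import com.tangosol.util.processor.AbstractProcessor;

import java.util.List;

public class AddChildNodeProcessor extends AbstractProcessor {
    private Object childKey;     // key of the new child entry (must be serializable)
    private Object childValue;   // value of the new child entry (must be serializable)

    public AddChildNodeProcessor(Object childKey, Object childValue) {
        this.childKey = childKey;
        this.childValue = childValue;
    }

    public Object process(InvocableMap.Entry entry) {
        // Updating the entry being processed is fine: here the parent value
        // is the list of child keys.
        List childKeys = (List) entry.getValue();
        childKeys.add(childKey);
        entry.setValue(childKeys);

        // The re-entrant call: put() into the same cache service from its
        // own service thread. This is what triggers the assertion below.
        NamedCache cache = CacheFactory.getCache("tree");
        cache.put(childKey, childValue);
        return null;
    }
}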
2008-12-05 16:05:34.450 (ERROR)[Coherence/Logger@9219882 3.4/405]: Assertion failed: poll() is a blocking call and cannot be called on the Service thread
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.poll(Grid.CDB:4)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.poll(Grid.CDB:11)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.put(DistributedCache.CDB:30)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.put(DistributedCache.CDB:1)
     at com.tangosol.util.ConverterCollections$ConverterCacheMap.put(ConverterCollections.java:2433)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$ViewMap.put(DistributedCache.CDB:1)
     at com.tangosol.coherence.component.util.SafeNamedCache.put(SafeNamedCache.CDB:1)
     at com.tangosol.net.cache.CachingMap.put(CachingMap.java:928)
     at com.tangosol.net.cache.CachingMap.put(CachingMap.java:887)
     at com.tangosol.net.cache.NearCache.put(NearCache.java:286)
     at com.conduit.server.properties.CLDistributedPropertiesManager$UpdatePropertiesProcessor.process(CLDistributedPropertiesManager.java:249)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage.invoke(DistributedCache.CDB:20)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onInvokeRequest(DistributedCache.CDB:50)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$InvokeRequest.run(DistributedCache.CDB:1)
     at com.tangosol.coherence.component.net.message.requestMessage.DistributedCacheKeyRequest.onReceived(DistributedCacheKeyRequest.CDB:12)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:9)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:130)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onNotify(DistributedCache.CDB:3)
     at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:37)
     at java.lang.Thread.run(Thread.java:637)

Hi,
Re-entrant calls into the same Coherence service are strongly discouraged.
For more about it, please look at the following Wiki page:
http://wiki.tangosol.com/display/COH34UG/Constraints+on+Re-entrant+Calls
Best regards,
Robert

Similar Messages

  • Wanna learn to implement hierarchical data structure

    I want to learn how to handle hierarchical data in Java.
    For instance, suppose there is some data that contains 6 main nodes, every node contains 2 sub-nodes, there are 4 nodes under the 3rd node, and the 5th one contains two more sub-nodes, one under another.
    How would that be implemented?
    Of course it must be possible to implement, but how can I do it if I do not know the depth and the number of nodes until runtime?
    I attempted to create something of this kind using Turbo C++ 3.5, but after two weeks of intensive programming I was left utterly confused by innumerable pointers, pointers to pointers, pointers to pointers to pointers, and more. In the end it was I who forgot which pointer was pointing to what.

    Well, just start by making a Node class. To allow Nodes to have children, give each Node an array (or ArrayList, Vector, etc.) of other Nodes.
    For example:
    class Node {
        private ArrayList<Node> children = new ArrayList<Node>();
    }
    Put whatever else you need in there.
    You can then traverse the tree through methods you write that return child nodes. If you need the Nodes to have knowledge of their parents, add a Node parent field to your Node class.
    Essentially, keep things as simple as possible; this will let you write cleaner code and also decide on the depth of the structure at runtime, as you describe.
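    Here is a minimal, self-contained sketch along these lines (all names are illustrative), where the depth and number of nodes are decided only at runtime:
    import java.util.ArrayList;
    import java.util.List;

    class Node {
        private final String name;
        private final List<Node> children = new ArrayList<Node>();

        Node(String name) {
            this.name = name;
        }

        Node addChild(String childName) {
            Node child = new Node(childName);
            children.add(child);
            return child;
        }

        // depth-first traversal; indentation shows each node's level
        void print(String indent) {
            System.out.println(indent + name);
            for (Node child : children) {
                child.print(indent + "  ");
            }
        }

        public static void main(String[] args) {
            Node root = new Node("root");
            Node third = null;
            for (int i = 1; i <= 6; i++) {        // 6 main nodes, known only at runtime
                Node main = root.addChild("main-" + i);
                if (i == 3) {
                    third = main;
                }
            }
            for (int i = 1; i <= 4; i++) {        // 4 sub-nodes under the 3rd
                third.addChild("sub-" + i);
            }
            root.print("");
        }
    }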

  • Hierarchical data structure

    I am trying to represent the following data structure in a hierarchical format, but I am not going to use any Swing components, so JTree and the like are out, and XML is probably out. I was hoping some form of collection would work, but I can't seem to get it!
    Example Scenario
    Football League --- Football Team -- Player Name
    West
        Chiefs
            xyz
            abc
            mno
        Broncos
            asq
            daff
    This hierarchical structure has a couple of layers, so I don't know how I can feasibly do it. I have tried nesting hashmaps inside each other, so that as I iterate through the data I can check for the existence of a key and, if it exists, get it and add to it.
    Does anyone know a good way to do this? Code samples would be appreciated!!!
    Thank you!

    Hi Jason,
    I guess you wouldn't want to use Swing components or JTree unless your app has a GUI, and even then you would want a structure other than, say, JTree to represent your data.
    You have plenty of options; one is nested HashMaps. You could just as well use nested Lists or arrays, or custom objects that represent your data structure.
    I don't know why you would exclude XML. There is the question of how you get the data into your application in the first place: is the source a database or a text file? Why not use XML, since your data seems to have a tree structure anyway and XML fits the bill?
    An issue to consider in that case is the amount of data; large XML files have performance problems associated with them.
    In terms of a nice design, I would probably do something like this (assuming the structure of your data is fixed):
    public class Leagues {
        private List<FootballLeague> leagues = new ArrayList<FootballLeague>();

        public FootballLeague getLeagueByIndex(int index) {
            return leagues.get(index);
        }
        public FootballLeague getLeagueByName(String name) {
            // code that runs through the league list picking out the league with the given name
            return null;
        }
        public void addLeague(FootballLeague l) {
            leagues.add(l);
        }
    }
    Next you define a class called FootballLeague:
    public class FootballLeague {
        private List<FootballTeam> teams = new ArrayList<FootballTeam>();
        private String leagueName;

        public FootballTeam getTeamByIndex(int index) {
            return teams.get(index);
        }
        public FootballTeam getTeamByName(String name) {
            // code that runs through the team list picking out the team with the given name
            return null;
        }
        public void addTeam(FootballTeam t) {
            teams.add(t);
        }
        public void setLeagueName(String newName) {
            this.leagueName = newName;
        }
        public String getLeagueName() {
            return this.leagueName;
        }
    }
    Obviously you will continue defining classes for Players next, following that pattern. I usually apply that kind of structure to complex hierarchical data. Nested lists would work just as well, but dealing with nested lists rather than a simple API for your data structures can be a pain (especially if you have many levels in your hierarchy).
    Hope that helps.
    The Dude
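    To round the sketch out, here is a hypothetical FootballTeam following the same pattern (names invented for illustration), plus a short usage example:
    public class FootballTeam {
        private List<String> players = new ArrayList<String>();
        private String teamName;

        public FootballTeam(String teamName) {
            this.teamName = teamName;
        }
        public String getTeamName() {
            return teamName;
        }
        public void addPlayer(String playerName) {
            players.add(playerName);
        }
        public List<String> getPlayers() {
            return players;
        }
    }
    Used together, mirroring the West/Chiefs example from the question:
    FootballLeague west = new FootballLeague();
    west.setLeagueName("West");
    FootballTeam chiefs = new FootballTeam("Chiefs");
    chiefs.addPlayer("xyz");
    chiefs.addPlayer("abc");
    west.addTeam(chiefs);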

  • Accessing Cache From an Entry Processor

    Is it possible to access another cache to perform operations on it when calling cache.invoke()? In this call I pass in the key and a processor that invokes another cache.

    Hi Dan,
         If that other cache is in a cache service different from the cache service of the cache on which the entry-processor is running, then you can.
         Otherwise you should not, because your code will be prone to deadlocks.
         Best regards,
         Robert
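    For illustration, a minimal sketch of the allowed case (cache and class names are assumptions; the "children" cache must be configured on a different cache service than the cache being invoked on):
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.InvocableMap;
    import com.tangosol.util.processor.AbstractProcessor;

    public class CrossServiceProcessor extends AbstractProcessor {
        private Object childKey;   // must be serializable in real code
        private Object childValue;

        public CrossServiceProcessor(Object childKey, Object childValue) {
            this.childKey = childKey;
            this.childValue = childValue;
        }

        public Object process(InvocableMap.Entry entry) {
            // Safe only because "children" is served by ANOTHER cache
            // service, so this put() does not re-enter the service whose
            // thread we are running on.
            NamedCache children = CacheFactory.getCache("children");
            children.put(childKey, childValue);
            return null;
        }
    }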

  • COLLECT statement in start routine of update rules for data coming from R/3

    Hi,
    I have more than one record with the same key combination coming from R/3. I have a condition wherein I write into an error log, and I want only one entry to be written into the error log, not multiple instances.
    I had written a similar one for an ODS:
    LOOP AT IT_ODS INTO WA_ODS.
      COLLECT WA_ODS INTO IT_SUM.
    ENDLOOP.
    How do I collect data that is coming in from R/3? Any inputs?

    Hi,
    I think you can achieve the control-break logic by using AT END.
    Regards,
    -Vj

  • HELP: iPhone 5 got reformatted while being updated; not all data recovered from iCloud

    I recently updated my iPhone 5 to iOS 7.0.3, but while updating it got reformatted. I tried to recover my apps and photos from iCloud, but not all of the data was recovered. I am sure that I backed up my phone regularly. Only newer photos were recovered; the older ones, which were the first ones I backed up, can't be found. Can I still recover that data?

    Restoring the backup should recover the photos that were in the camera roll at the time of the backup. If that didn't happen, all you can do is try restoring the backup again. You might try some of the tips mentioned in this discussion thread that others found useful when they had a similar problem, such as the one by ezjules: https://discussions.apple.com/message/19518589#19518589.

  • Problem updating two different data targets from the same DSO using DTP

    Hi All,
    I am trying to upload data from standard DSO 0FIA_DS11 to DSOs 0FIA_DS12 and 0FIA_DS13 using DTP.
    I run DTP 0FIA_DS11 -> 0FIA_DS12 and all data is updated in the target, but when I run DTP 0FIA_DS11 -> 0FIA_DS13 the job stops at data source extraction, endlessly.
    Is there something I am missing with the new 7.0 approach?
    Could someone help me with this problem?!
    Any help is welcome!
    Thanks in advance,
    Alex

    I'm loading cumulative/planned transactions from 0FI_AA_11 to 0FIA_DS11, just filtering by depreciation area on the data package. Data is successfully loaded and activated into this data provider.
    From 0FIA_DS11 to 0FIA_DS12 I am loading data without filtering, and the load finishes successfully.
    From 0FIA_DS11 to 0FIA_DS13 I am not using any filter on the DTP. I remove data with transaction type 'PLN' in the start routine, but the load never even reaches that point. It stops during data source extraction.
    Any more ideas?
    Thanks for your previous reply!!

  • Updating the last date request from ODS to CUBE

    Dear Friends,
    Please, can someone explain this to me?
    I always update requests from the ODS to the cube. Sometimes there will be 3 to 4 requests in the ODS which don't have the data mart tick, and when I update them to the cube I can only see 1 request, which is the current date's request. I don't know whether the previous days' requests have been updated to the cube.
    But I see in the ODS that the data mart tick is there for all the requests.
    Also, please can someone explain: if I have many requests in the ODS (current date, previous day, and so on), is there any way to update in such a way that I can see all the dates in the cube, instead of only 1 request with the current date?
    When I delete the request from the cube, remove the tick from the ODS, and refresh, then suddenly the tick for the current request and all the previous requests is gone...
    Thanks for your help.
    Will assign complete points.
    Thank you so so much

    Hi,
    If you know how many requests get consolidated into the InfoCube as one request every day, you can check the added records in the InfoCube: the count should be equal to the sum of all those requests.
    The transferred count can be higher or equal; it also depends on how the update rules are designed.
    Hope this helps you.
    Regards,
    shikha

  • Hierarchical data structures (in a single table)

    Hi,
    If I have a hierarchy of objects stored in a table -
    ORG_UNIT
    ID
    PARENT_ID
    NAME
    And the JDO mapping for an OrgUnit contains a parent OrgUnit and a
    Collection of children.
    Is there an efficient way of pulling them out of the database?
    It is currently loading each individual parent's children.
    This is going to be pretty slow if there are, say, 500 OrgUnits in the database.
    Would it be better to pull them all out and build the hierarchy up in code (as was done in the straight JDBC version)? How can I efficiently obtain the parent or children without doing exactly the same thing?
    Thanks,
    Simon

    Simon,
    There will be no db access for every child - you will read all child records for a particular parent at once when you try to access its child collection. Granted, for terminal leaves you will get a db access to load an empty collection, so effectively you will get a db access per node. If your goal is always to load and traverse the entire tree, it will be expensive.
    But the beauty of hierarchical structures is that while they can be huge - millions of nodes - you do not need to load the whole thing to navigate it, just the path you need. This is where lazy loading excels, so overall, on large trees, you will be much better off not loading the whole thing at once.
    However, if you still want to do it, nothing prevents you from having no persistent collection of child records in the OrgUnit class at all - only a reference to the parent - loading the entire table using a query, and then building the tree in memory yourself as you iterate over the query resultset. You can probably even do it in a single iteration over the resultset. I would never do it myself, though; in my opinion it defeats the ease of use and cleanness of your object model.
    Alex
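    For what it's worth, a hypothetical sketch of that in-memory build (plain JDBC, names matching the ORG_UNIT table above): read every row once, then wire children to parents via a map keyed by ID.
    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    class OrgUnit {
        long id;
        Long parentId;          // null for root nodes
        String name;
        List<OrgUnit> children = new ArrayList<OrgUnit>();
    }

    class OrgUnitTreeLoader {
        static Map<Long, OrgUnit> load(Connection con) throws SQLException {
            Map<Long, OrgUnit> byId = new HashMap<Long, OrgUnit>();
            Statement st = con.createStatement();
            ResultSet rs = st.executeQuery("SELECT ID, PARENT_ID, NAME FROM ORG_UNIT");
            while (rs.next()) {
                OrgUnit u = new OrgUnit();
                u.id = rs.getLong("ID");
                long p = rs.getLong("PARENT_ID");
                u.parentId = rs.wasNull() ? null : Long.valueOf(p);
                u.name = rs.getString("NAME");
                byId.put(Long.valueOf(u.id), u);
            }
            // second pass wires the tree; a single pass over the resultset
            // also works if parents are guaranteed to appear before children
            for (OrgUnit u : byId.values()) {
                if (u.parentId != null) {
                    byId.get(u.parentId).children.add(u);
                }
            }
            return byId;
        }
    }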
    "Simon Horne" <[email protected]> wrote in message
    news:ag1p9p$9si$[email protected]..
    Hi,
    If I have a hierarchy of objects stored in a table -
    ORG_UNIT
    ID
    PARENT_ID
    NAME
    And the JDO mapping for an OrgUnit contains a parent OrgUnit and a
    Collection of children.
    Is there an efficient way of pulling them out of the database.
    It is currently loading each individual parent's kids.
    This is going to be pretty slow if there are say 500 OrgUnits in the
    database.
    If it would be better to pull them all out and build the hierarchy up in
    code (as it was being done in straight JDBC). How can I efficiently obtain
    the parent or children without doing exactly the same?
    Thanks,
    Simon

  • Is it possible to update master data attributes from an ODS?

    HELLO ALL,
    we have records coming into our ods like the following:
    costcenter1, subcostcentera, subcostcenterb, manager responsible, costs (kf). 
    This is a custom flat file load from a legacy system.
    We would like to just create an update rule from the ODS to the cost center master data characteristic attributes.
    Is this possible?
    Thank you

    Yes, it's possible to update the master data attributes from an ODS.
    Define your master data characteristic as an InfoProvider.
    Create an update rule on the characteristic with the ODS as the source and do the general mapping.
    For a step-by-step guide, please refer to:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sapportals.km.docs/library/business-intelligence/g-i/how%20to%20implement%20flexible%20master%20data%20staging

  • Using a single data structure in a desktop application

    Hello,
    I am programming an application that needs to constantly access a data structure, for instance to add / edit / update / search data. Several graphical user interfaces need to modify this data structure. I was wondering about easy ways to use the data structure throughout the whole application. One solution I found was to use the singleton pattern for my data structure, though lots of people have recommended that I avoid that pattern. What are better ways of accessing that single data structure from all of those GUIs?
    Thank you,
    Alfredo

    Not that there is anything wrong with the Singleton pattern, but I don't see how it would help you in this case.
    Just create the data structure once and let every GUI that needs to use it have a reference to it.
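    For instance, a minimal hypothetical sketch of that approach: the data structure is created once at startup, and each GUI receives the same reference through its constructor, with no singleton involved.
    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;

    class DataStore {
        // synchronized because several GUIs may touch it concurrently
        private final Map<String, String> data =
                Collections.synchronizedMap(new HashMap<String, String>());

        public void put(String key, String value) { data.put(key, value); }
        public String get(String key) { return data.get(key); }
    }

    class SearchPanel {
        private final DataStore store;
        SearchPanel(DataStore store) { this.store = store; } // shared reference
    }

    class EditorPanel {
        private final DataStore store;
        EditorPanel(DataStore store) { this.store = store; }
    }

    class App {
        public static void main(String[] args) {
            DataStore store = new DataStore(); // created once at startup
            new SearchPanel(store);
            new EditorPanel(store);
        }
    }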

  • Spawning new entry processors from within an existing entry processor

    Is it possible / legal to spawn a new entry processor (to operate within a different cache) from within an existing entry processor?
    E.g. we have a parent and a child cache. We will receive an update of the parent and start an entry processor to handle it. Off the back of the parent update we will also need to update some child entries in the other cache, and will need to start a new entry processor for the child entries. Is it legal to do this?

    Hi Ghanshyam,
    Yes, in case of (a) you would be mixing different types in the same cache. There is nothing wrong with that from Coherence's point of view, as long as all code which is supposed to access such objects in their deserialized form is able to handle the situation.
    This means that you need to use special extractors for creating indexes, and you need to write your filters, entry processors and aggregators appropriately to take this into account. But that's all it means.
    As for "The EntryProcessor on the child could be invoked, so long as there are more service threads configured. This allows retaining partition affinity. I don't think this is technically illegal.": it is problematic, as invoking an entry-processor from another entry-processor in the same cache service can lead to deadlock/livelock situations, and you won't find that out in a simple test, whether or not you get an exception.
    Even where it is technically not guarded against, firing a second entry-processor consumes an additional thread from the thread pool. Now if you get into a situation where all (or at least more than half) of your running entry-processors try to fire an additional entry-processor and there are no more threads in the thread pool, then some or all of them will be waiting for a thread to become available, and none ever will, because no running entry-processor can complete and free up its thread.
    However, none of them can back off, as all are waiting for the entry-processor they fired to complete. Poof: no processing is possible on your cache service.
    Another problematic situation which can arise when entry processors are fired from entry processors is that your entry-processors may deadlock on entries: entry processors executing on some entries try to execute on other entries on which other entry processors are executing, which in turn try to execute on the first entries. In this case the entry-processors wait on each other to execute.
    No code running in the cache server invoked by Coherence is supposed to access a cache service from code running in the threads of the same cache service, except for a couple of specifically named operations which only release resources, not consume additional new ones.
    Best regards,
    Robert
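    For illustration, a hypothetical sketch of the anti-pattern described above (all names made up): an entry-processor that fires a second entry-processor into the same cache service, parking its own service thread until the inner one completes.
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.InvocableMap;
    import com.tangosol.util.processor.AbstractProcessor;

    public class ParentUpdateProcessor extends AbstractProcessor {
        public Object process(InvocableMap.Entry entry) {
            // "children" lives in the SAME cache service as the entry being
            // processed here, so this nested invoke() blocks the current
            // service thread until another pooled thread has run
            // ChildUpdateProcessor. Under load, every pooled thread can end
            // up waiting like this at once, and the service deadlocks.
            NamedCache children = CacheFactory.getCache("children");
            children.invoke("someChildKey", new ChildUpdateProcessor());
            return null;
        }
    }

    class ChildUpdateProcessor extends AbstractProcessor {
        public Object process(InvocableMap.Entry entry) {
            return null; // placeholder for the child update
        }
    }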

  • Prevent multiple users from updating coherence cache data at the same time

    Hi,
    I have a web application which has a huge amount of data; instead of storing the data in the HTTP session, we are storing it in Coherence. Multiple groups of users can use or update the same data in Coherence; there are hundreds of groups, with several thousand users in each group. How do I prevent multiple users from updating the cached data at the same time?
    Here is the scenario. A user logs in and checks whether the data is in Coherence; if it is, the user gets it from Coherence and displays it on the UI, and if not, gets it from the backend (i.e. mainframe systems) and stores it in Coherence before displaying it on the screen. Some other user can perform the same function at the same time, fail to find the data in Coherence, get it from the backend, and start saving it in Coherence while the first user is still in the process of saving or updating. How do I prevent this in Coherence? I have to use the same key when storing in Coherence, because the same data is shared across users and I don't want to keep multiple copies of it. Is there something Coherence provides out of the box, and what is the best approach to handle this scenario?
    Thanks

    Hi,
    Actually, I believe that if we are speaking about multiple users, each with their own HttpSession, then when two users access the same session attribute in their own sessions, the cache keys actually used will not be the same.
    On the other hand, this is probably not what you really want; you would presumably like to share that data among sessions.
    You should probably consider using either read-through caching, with the CacheLoader implementor doing the expensive data retrieval (if the data to be cached can be obtained outside of an HTTP container), or side caching, using Coherence locks or entry-processors for concurrency control on the data retrieval operations for the same key (take care of retries in this case).
    Best regards,
    Robert
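    As a sketch of the read-through option (hypothetical names; the back-end call is a placeholder), a CacheLoader lets Coherence perform the expensive retrieval itself, so concurrent requests for the same missing key wait for the one load rather than racing to populate the cache:
    import com.tangosol.net.cache.CacheLoader;

    import java.util.Collection;
    import java.util.HashMap;
    import java.util.Map;

    public class MainframeCacheLoader implements CacheLoader {
        public Object load(Object key) {
            return fetchFromMainframe(key); // assumed expensive legacy-system retrieval
        }

        public Map loadAll(Collection keys) {
            Map result = new HashMap();
            for (Object key : keys) {
                result.put(key, load(key));
            }
            return result;
        }

        private Object fetchFromMainframe(Object key) {
            return "..."; // placeholder for the mainframe call
        }
    }
    The loader is then wired to the cache via a read-write-backing-map scheme in the cache configuration.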

  • How to get the user log from the entered planning data?

    Dear All,
    Could you give me a suggestion regarding the following, please? :)
    I have a requirement to get the last user in charge of modifying the planning data.
    Or, in other words, I want to get the log of the entered planning data.
    e.g.
    1. Phase 1 - My friend creates the planning data:
    Country           Sales
    INA                 $1000
    2. Phase 2 - I update it and create a new record:
    Country           Sales
    INA                 $1500          < modified >
    USA                $400           < new >
    Could I get the log for those records?
    The log could contain the creating user and the modifying user.
    I just read the article regarding the status and tracking system in BPS, but could it cover that requirement? (From what I understood, the status and tracking system is for creating a workflow for planning.)
    Or is there another way to fulfill this requirement?
    I really need your guidance, all.
    Regards,
    Niel.

    Dear Mayank,
    Thanks a lot for your responses.
    I've tried it, but in the BPS version.
    I saw in the document that there is a GUID (unique ID); could you explain to me what its purpose is?
    I plan to store the creating user, creation date, and planning-level information in the log data.
    What do you think: is it better to display them in a BEx report or in another manual planning layout?
    How did you display the log data in your case?
    Still need your guidance.
    Really, really, thanks.
    Niel.

  • Problem updating InfoObject master data from a DSO

    Hi all,
    I'm updating the master data of an InfoObject from another ODS; the two tables have the same key.
    When I update the InfoObject with a full repair load, I get errors about duplicate records and the data load stops. When I delete the request and launch the InfoPackage again, all goes well, but it seems it doesn't insert any new records into the master data. What could be the problem?
    If you need more info, ask me.
    Thanks a lot.
    Stefano

    Hi Sadeesh,
    I checked the master data table and it seems it has been updated; I checked some order numbers with recordmode 'N' added today in the DSO, and the entries were in the InfoObject, though I don't know why the request tab shows 0 added records.
    To be completely sure, tomorrow I will check the number of entries in the 'P' table of the InfoObject again.
    Anyway, this is the exact error message I get:
    7 duplicate record found.     6478 recordings usedin table /BIC/XZDOCDSO
    7 duplicate record found.     6478 recordings usedin table /BIC/PZDOCDSO
    After that I run the change run for the InfoObject, and when I launch the data load again it goes well.
    Regards
    Stefano
