How is index data managed within the cluster?

Hello
We are in the process of evaluating Coherence.
With regard to indexes, am I correct in thinking that the data for the index is stored in an (internally managed) cache on the same node as the attendant indexed values? I'm not interested in getting at the index data, but rather just understanding how the index data itself is treated/stored within the cluster.
So in a 2 node cluster where an index is applied to a distributed cache that only has 2 entries, will there be an index cache on each node that has 1 {Binary,key} entry where the key points to the value that is stored on that same node? That is, I trust that the index data is spread out across the cluster in the case of a partitioned topology?
Regards
Peter

Hi Peter,
Up to 3.4.x, indexes were supported only on the partitioned topology, and yes, the indexes on each node contain only data corresponding to entries whose primary copy resides on that node (referred to as local entries from now on).
The indexes are not stored in a cache; they are managed as part of the information Coherence maintains about the backing map (the map which contains the local entries).
An index is made up of two parts:
- forward index: a mapping from the cache key in its internal representation (a backing map key from now on) to the value extracted from that entry with the extractor used to create the index (the extracted value from now on)
- reverse index (aka reverse map): a mapping from an extracted value to the set of backing map keys of the entries from which that value was extracted
The index can be sorted, which means that the reverse index will be a SortedMap (the sort order is the ordering of the extracted values).
Let's see an example. Assume you have the following cache content within a particular storage-enabled node:
A --> Object (getA=5, ...)
B --> Object (getA=3, ...)
C --> Object (getA=3, ...)
D --> Object (getA=4, ...)
The forward index in that node will contain:
Binary(A) --> 5
Binary(B) --> 3
Binary(C) --> 3
Binary(D) --> 4
The reverse index will contain:
3 --> { Binary(B), Binary(C) }
4 --> { Binary(D) }
5 --> { Binary(A) }
Where Binary(...) means the internal (binary) representation of the ... object.
If the index is not ordered, then the order of iteration over the entries in the reverse index is not deterministic.
If the index is ordered, then the iteration over the entries in the reverse index will be as shown above. Also, you can cast the reverse index to SortedMap, which gives you the very useful headMap, tailMap, firstKey and lastKey methods.
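To make this concrete, here is a minimal sketch of creating and then exploiting such an index (the cache name "test" is illustrative; getA matches the example above, and the exact addIndex signature is worth checking against your Coherence version):

import java.util.Set;
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.extractor.ReflectionExtractor;
import com.tangosol.util.filter.EqualsFilter;

public class IndexExample {
    public static void main(String[] args) {
        NamedCache cache = CacheFactory.getCache("test");

        // Ordered index on getA(): each storage-enabled node builds its own
        // forward and reverse index over its local entries only; passing
        // true for fOrdered makes the reverse index a SortedMap.
        cache.addIndex(new ReflectionExtractor("getA"), true, null);

        // The filter is evaluated against the reverse index on each node,
        // so the matching keys are found without deserializing the values.
        Set keys = cache.keySet(new EqualsFilter("getA", 3));
        System.out.println("Matching keys: " + keys);

        CacheFactory.shutdown();
    }
}

Note that you never touch the index structures yourself; you add the index and let filter-based queries use it.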
Hope this helps.
Best regards,
Robert

Similar Messages

  • How to make use of adjacent data elements within the same buffer

    Hi,
         Does anyone know how to make use of adjacent data elements within the same buffer? To make my question clear, I would like to give you an example. In my application, I set "samples to read" to 10, which means that at each loop iteration 10 data samples are taken into a buffer. What I would like to do is perform some calculations on adjacent data samples in the same buffer. I tried to use a shift register for this, but it seems it can only handle calculations between data from adjacent loop iterations. In other words, it skips 9 data elements and takes the 10th one for the calculation.
             Here I also attach my VI showing what I did.
        Thank you very much in advance,
                            Suksun
    Attachments:
    wheel_encoder_1.vi ‏98 KB

    Hi Suksun,
    I hope you'll forgive me for distilling your code - mainly to understand it better. I tried to duplicate your logic exactly, which required reversing the "derivatives" array before concatenation with the current samples array. As in your code, the last velocity is being paired with the first position. If the first velocity is really supposed to be paired with the first position, just remove the "Reverse 1D Array" node.
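    Since the VI attachments aren't viewable here, the logic being discussed - differencing adjacent samples within one buffer, with the shift register carrying only the last sample across loop iterations - can be sketched outside LabVIEW roughly like this (data and names are illustrative):

    import java.util.Arrays;

    public class AdjacentSamples {
        public static void main(String[] args) {
            // One loop iteration's buffer of 10 samples ("samples to read" = 10).
            double[] buffer = {1.0, 1.5, 2.1, 2.8, 3.2, 3.9, 4.1, 4.8, 5.5, 6.0};

            // Difference each sample with its neighbour inside the same buffer,
            // rather than with a value from the previous loop iteration.
            double[] diffs = new double[buffer.length - 1];
            for (int i = 0; i < diffs.length; i++) {
                diffs[i] = buffer[i + 1] - buffer[i];
            }

            // To also difference across buffer boundaries, carry the last
            // sample into the next iteration - the shift register's job.
            double carry = buffer[buffer.length - 1];
            System.out.println(Arrays.toString(diffs) + " carry=" + carry);
        }
    }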
    cheers
    Attachments:
    encoder2.GIF ‏14 KB
    encoder2.vi ‏102 KB

  • How can I create cursors within the cursor?

    Table1 (2001 - 2007 data):
    Account_no
    Account_eff_dt
    No_account_holder
    Num
    Seq_Num
    Value1
    Value2
    Table2_Historical (1990 - 2000 data; doesn't have Num as a field):
    Account_no
    Account_eff_dt
    No_account_holder
    Seq_Num
    Value1
    Value2
    Table3_06
    Account_no
    Account_eff_dt
    No_account_holder
    Num
    My result table should be:
    Table_result_06
    Account_no
    Account_eff_dt
    No_account_holder
    Num
    Value1_min (the minimum value for the minimum of record_sequence)
    Value2_max (the maximum value for the maximum of record_sequence)
    I have to get data from Table1 and Table2_Historical. If an account was opened in 1998 and is still effective, the minimum value of that account is in Table2_Historical.
    Let's say I open a cursor:
    CURSOR csr_first IS
      SELECT * FROM table3_06;
    OPEN csr_first;
    LOOP
      FETCH csr_first INTO
        v_account_no,
        v_account_eff_dt,
        v_no_account_holder,
        v_num;
      EXIT WHEN csr_first%NOTFOUND;
    How can I open a second cursor from here that will get the Seq_Num from Table1?
    CURSOR csr_second IS
      SELECT seq_num FROM table1
      WHERE account_no = v_account_no
        AND account_eff_dt >= v_account_eff_dt
        AND no_account_holder = v_no_account_holder
        AND num = v_num;
    How does this work?
    Thanks a lot
    Thanks a lot

    Thanks so much for replying back. Here is what I am trying to do.
    I have to create a table for each year 2002, 2003, 2004, 2005, 2006 that has all the account numbers that are active each year plus some other characteristics.
    Let’s say I will create Table_result_06. This table will have the following fields. The account number, account effective date, Number of the account holder and the field Num (The primary key is a combination of all 4 fields), the beginning look of value 1 in 2006, the last look of value 1 in 2006.
    Table_result_06
    Account_no (key)
    Account_eff_dt (key)
    No_account_holder (key)
    Num (key)
    Value1_min (the minimum value for the minimum of record_sequence)
    Value2_max (the maximum value for the maximum of record_sequence)
    All the active account numbers, with the Account_eff_dt, No_account_holder and Num, are in Table3. As such I can build a query that connects Table3 with Table1 on all 4 fields (Account_no, Account_eff_dt, No_account_holder, Num) and find Value1_min for the min of req_sequence in 2006 and Value1_max for the max of req_sequence in 2006. Here my problem starts.
    Table1 doesn't have a new entry if nothing has changed in the account. So if an account was opened in 1993 and nothing has changed, I don't have an entry in 2006, but that doesn't mean the account doesn't exist in 2006. So to find the minimum value I have to go back to 1993 and find the max and min for that year, and those will be the max and min in 2006 as well.
    As such I have to go to Table2_Historical and search for the min and max. But this table doesn't have the field Num, and if I match only on Account_no, Account_eff_dt and No_account_holder I don't get a unique record.
    So how can I connect all three tables: take the max and min from Table1, and if nothing is found there, go to Table2_Historical, find the two values, and populate Table_result_06?
    Thanks so much again for your help,

  • Retrieve JMX metrics from within the cluster?

    I would like to know if it is possible to retrieve the metrics that the JMX adapter exposes from within the cluster, e.g. by running a thread on the invocation service that would query the underlying data structures to get the metrics out.

    You could still snapshot the stats if you wanted to, but you would have to do it using queries against the local JMX server running in the JVM rather than using Coherence API calls.
    Or, depending on what you want to do with the stats, you could look at the JMX reporter, which will log stats to a file every so often, or at one of the third-party monitoring tools that will do this.
    JK
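    As a rough illustration of the first approach, something like the following could be run on each node (for example via the invocation service), assuming the node was started with -Dtangosol.coherence.management=all so that the Coherence MBeans are registered with the in-process platform MBeanServer; the object name and attribute shown are illustrative of what is available:

    import java.lang.management.ManagementFactory;
    import java.util.Set;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    public class LocalJmxSnapshot {
        public static void main(String[] args) throws Exception {
            // The same local MBean server the JMX adapter reads from.
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();

            // List every Coherence-registered MBean in this JVM.
            Set<ObjectName> names = server.queryNames(new ObjectName("Coherence:*"), null);
            for (ObjectName name : names) {
                System.out.println(name);
            }

            // Snapshot a single statistic by reading an MBean attribute.
            ObjectName cluster = new ObjectName("Coherence:type=Cluster");
            System.out.println("ClusterSize = " + server.getAttribute(cluster, "ClusterSize"));
        }
    }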

  • With Acrobat 7, how do I make links (within the PDF) open in a new window/tab?

    With Acrobat 7, how do I make links (within the PDF) open in a new window/tab?

    No love? From anyone?

  • How to display data elements in the template header

    Hello friends,
    I have these date_from and date_to parameters, which are dates that the user enters.
    Based on these parameters, I want to display in the header the day of date_from (for example, if date_from is 13-nov-2010 then I should display 13), and for date_to the day of (date - 1), so it should display 15 if, for example, the user enters 16-nov-2010.
    So it should break down to:
    date_from: 13-nov-2010 --> 13
    date_to: 16-nov-2010 --> 15
    I want these two values to be displayed in the header of the template. How do I do this?
    Please help.
    Also let me know how to display data elements in the template header.

    Hi Ananth, thanks for your timely reply.
    Can I use it with <? substring(':date_from',1,2)?>
    where date_from is an input parameter which the user enters at run time of the report?
    I have to capture the date entered by the user and print it in the header.
    Please reply.

  • How to keep data integrity with two business services in OSB 10.3.1.0

    In our customer's system, the customer wants to keep data integrity between two business services. I think this is an XA transaction issue.
    Based on the customer's requirement, I created a test case, but I can't keep data integrity. For detailed information, please refer to the attached docs.

    Can you please explain what you mean by data integrity in your use case?
    Manoj

  • How to check data type of the field symbol at run time

    Hi,
    My code is as following:
    LOOP AT <fs> ASSIGNING <wa_covp_ext>.
      ASSIGN COMPONENT 86 OF STRUCTURE <wa_covp_ext> TO <f_zzname>.
      IF sy-subrc = 0.
        ASSIGN COMPONENT 158 OF STRUCTURE <wa_covp_ext> TO <f_pernr>.
        IF sy-subrc = 0.
          SELECT SINGLE sname INTO <f_zzname> FROM pa0001
            WHERE pernr = <f_pernr>
              AND endda GE sy-datum
              AND begda LE sy-datum.
        ENDIF.
      ENDIF.
    ENDLOOP.
    This query gives a dump when <f_zzname> is type P, length 8, decimals 2, because it tries to put PA0001-sname into it, which is type C, length 30. So I want to check the type of <f_zzname> before the SELECT statement. If it is character 30, then I will write the SELECT statement; otherwise not.
    How can I check the data type of the field symbol at run time? If that's not possible, can somebody suggest a workaround? Thanks.

    Check this: use a DESCRIBE statement.
    FIELD-SYMBOLS : <f_zzname>.
    DATA : sname LIKE pa0001-sname,
           typ(10).
    ASSIGN sname TO <f_zzname>.
    DESCRIBE FIELD <f_zzname> TYPE typ.
    WRITE : typ. " typ contains the character type (C) in this case
    You can check whether typ is of character type (C); if so, write the SELECT statement.

  • How to find referenced photos within the Photos app

    Problem:
    Upon instructing Photos to use iCloud Photo Library, the app informed me:
    "104 referenced files in your library will not upload to iCloud Photo Library. Select referenced files in your library and choose 'Consolidate' from the File menu to copy the original files into the library."
    I do not know which 104 images in my library are referenced, and therefore cannot use "Consolidate" to import the masters into the library.
    Can anyone tell me how to find referenced files, within the active library in the Photos app?
    I am not asking how to use the "Show Referenced File in Finder" menu command. I need to identify a file as referenced first, hence this post!

    Gotcha - came up with a workaround... Since the photos were living in the Photos Library package itself, and not simply in another regular folder, I right-clicked on the library, chose Show Package Contents, searched for the folders the files lived in, and then copied the files out of the package into a regular folder. Then I was able to point the Photos library at the new folder with the files.
    With all my files now referenced, I enabled iCloud Photo Library and am uploading now.
    Hopefully my new local Photo Library will be smart enough to optimize local images and shrink to give me more space.
    Thanks.

  • How points get you privileges within the community

    1. Create an account.
    2. Receive welcome email: "Welcome to Apple Support Communities! ... "Participate: Check out the community by reading existing discussions and see what the latest buzz is about. Got answers? Post responses, enlighten the community, and earn points! See how points get you privileges within the community."
    3. Go to link to see how points get you privileges.
    4. Read what points are...
    OK, so it explains how to get points and that they give you privileges, but there is no mention of how points get you privileges, nor a link to that information. Not that I really care, but I'm pointing it out. Is it just another "status" in our communities for people to feel good about [authoritative, look at me], or does it really provide something worthwhile relative to being helpful and providing Apple's support?

    Refer to ... 'More like this', this page, right hand column, bottom entry ... 'What's the benefit of accruing points in Apple communities'
    Carolyn Samit's excellent answer details all the levels and the justification for taking part in community discussions.
    Personally, I find that simply scouting around the different sections is an education in itself, and I would heartily recommend that you take an interest.

  • How will my data be secure in the cloud

    How will my data be secure in the cloud when I take out a membership?

    Please read the Data security section of the Creative Cloud FAQ here http://www.adobe.com/products/creativecloud/faq.html#data-security.

  • How do you order topics within the Project Manager?

    Hi all,
    I notice that the workflow in RH is that first you use the
    Project Manager (PM) to build a sort of structured topic database,
    and then you use the database to generate a TOC.
    Ok, so far so good, but the immediate little issue I am
    having is that I see that the PM arranges my topics within each
    folder in alphabetical order. Since, by default, the TOC is
    generated with the topics in this default order, I would like to
    change the order in which the topics are arranged within the PM. I
    don't seem to find any way to do this. I can't drag and there are
    no Move buttons for this pod. What am I missing?
    TIA
    - avi

    Hi Avi
    You aren't missing anything. That's the way it works with the
    Project Manager Pod. It simply lists your topic files in
    alphabetical order.
    For starters, I would never recommend that anyone always
    automatically create the TOC. I might suggest using that only as a
    first step in building a TOC. After it has been built initially,
    any added or amended topics would then be manually added and
    rearranged by you after the changes have been made.
    Unless you are happy with things that way (and I assume you
    aren't as you are posting to ask how to make it different) you will
    find yourself constantly frustrated. Note that if you auto create
    the TOC, you get ALL topics inside. And in many help files there
    are topics that never see the light of day in the TOC structure.
    All that aside, I do see a great many folks that want to have
    an ability to arrange topics in their Project Manager pod as they
    want. For you and anyone else that wants this, you should consider
    submitting a Wish Form to ask for the feature.
    Click here to view the WishForm/Bug Reporting Form.
    Cheers... Rick

  • Using a byte[] as a secondary index's key within the Collection's API

    I am using JE 4.1.7 and its Collections API. Overall I am very satisfied with the ease of using JE within our applications. (I need to know more about maintenance, however!) My problem is that I wanted a secondary index with a byte[] key. The key contains the 16 bytes of an MD5 hash. However, while the code compiles without error, when it runs JE tells me:
    Exception in thread "main" java.lang.IllegalArgumentException: ONE_TO_ONE and MANY_TO_ONE keys must not have an array or Collection type: example.MyRecord.hash
    See the test code below. I read the docs again and found that the only "complex" formats that are acceptable are String and BigInteger. For now I am using String instead of byte[], but I would much rather use the smaller byte[]. Is it possible to trick JE into using the byte[]? (Which we know it is using internally.)
    -- Andrew
    package example;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentConfig;
    import com.sleepycat.persist.EntityStore;
    import com.sleepycat.persist.PrimaryIndex;
    import com.sleepycat.persist.SecondaryIndex;
    import com.sleepycat.persist.StoreConfig;
    import com.sleepycat.persist.model.Entity;
    import com.sleepycat.persist.model.PrimaryKey;
    import com.sleepycat.persist.model.Relationship;
    import com.sleepycat.persist.model.SecondaryKey;
    import java.io.File;

    @Entity
    public class MyRecord {

        @PrimaryKey
        private long id;

        // This declaration triggers the IllegalArgumentException quoted above:
        // array types are not allowed for ONE_TO_ONE secondary keys.
        @SecondaryKey(relate = Relationship.ONE_TO_ONE, name = "byHash")
        private byte[] hash;

        public static MyRecord create(long id, byte[] hash) {
            MyRecord r = new MyRecord();
            r.id = id;
            r.hash = hash;
            return r;
        }

        public long getId() {
            return id;
        }

        public byte[] getHash() {
            return hash;
        }

        public static void main(String[] args) throws Exception {
            File directory = new File(args[0]);

            EnvironmentConfig environmentConfig = new EnvironmentConfig();
            environmentConfig.setTransactional(false);
            environmentConfig.setAllowCreate(true);
            environmentConfig.setReadOnly(false);

            StoreConfig storeConfig = new StoreConfig();
            storeConfig.setTransactional(false);
            storeConfig.setAllowCreate(true);
            storeConfig.setReadOnly(false);

            Environment environment = new Environment(directory, environmentConfig);
            EntityStore myRecordEntityStore = new EntityStore(environment, "my-record", storeConfig);
            PrimaryIndex<Long, MyRecord> idToMyRecordIndex =
                myRecordEntityStore.getPrimaryIndex(Long.class, MyRecord.class);
            // Fails at runtime with the exception quoted above.
            SecondaryIndex<byte[], Long, MyRecord> hashToMyRecordIndex =
                myRecordEntityStore.getSecondaryIndex(idToMyRecordIndex, byte[].class, "byHash");
        }
    }

    We have highly variable length data that we wish to use as keys. To avoid massive index sizes and slow key lookups we are using MD5 hashes (or something more collision resistant should we need it). (Note that I am making assumptions about key size and its relation to index size that may well be inaccurate.)
    Thanks for explaining, that makes sense.
    It would be the whole field. (I did consider using my own key data design, using the @Persistent and @KeyField annotations to place the MD5 hash into two longs. I abandoned that effort because I assumed (again) that lookup with a custom key design would be slower than the built-in String key implementation.)
    A composite key class with several long or int fields will not be slower than a single String field, and will probably result in a smaller key since the UTF-8 encoding is avoided. Since the byte array is fixed size (I didn't realize that earlier), this is the best approach.
    --mark
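    For anyone landing on this thread, a minimal sketch of the composite-key class mark describes might look like the following (class and field names are illustrative); it packs the fixed 16-byte MD5 digest into two longs:

    import java.nio.ByteBuffer;
    import com.sleepycat.persist.model.KeyField;
    import com.sleepycat.persist.model.Persistent;

    @Persistent
    public class HashKey {
        @KeyField(1)
        private long high; // first 8 bytes of the MD5 digest
        @KeyField(2)
        private long low;  // last 8 bytes of the MD5 digest

        private HashKey() {} // required by the DPL bindings

        public HashKey(byte[] md5) {
            // An MD5 digest is always exactly 16 bytes.
            ByteBuffer buf = ByteBuffer.wrap(md5);
            this.high = buf.getLong();
            this.low = buf.getLong();
        }
    }

    The entity's secondary key field would then be declared as HashKey rather than byte[] (for example, @SecondaryKey(relate = Relationship.ONE_TO_ONE, name = "byHash") private HashKey hash;), which avoids both the array restriction and the UTF-8 overhead of a String key.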

  • How to - correct episode order within the Remote App

    Recently I found that when I was using the Remote app to control my iTunes library, all my TV show episodes were in alphabetical order instead of episode-number order.
    After searching for how to solve this and speaking with Apple Support (who couldn't figure out why this was happening), I couldn't find the answer, and I noticed that many people were having this same issue and were in the same boat as me.
    I managed to solve the issue and wanted to share. Sorry if I have posted this in the wrong area; this is my first time giving advice on here.
    I'll run through this using The Walking Dead - Season 1 as an example, but this can obviously be done with any TV show.
    Even with the correct episode numbers, iTunes displays the episodes in the correct order, but they appear in alphabetical order within your library in the Remote app. Here is the fix:
         1. In iTunes, highlight all episodes and Get Info.
         2. Under the Options tab, change the Media Kind to Music Video and click OK at the bottom.
         3. Then head over to where your Music Videos are stored.
         4. Select each of the episodes and Get Info.
         5. Under the Details tab, you will now have more options to select.
         6. Scroll to the bottom of the Details tab and, in the Album Artist section, type in "The Walking Dead".
         7. In the Disc Number section, add which season this is of how many seasons there are. Example: I have 4 seasons of The Walking Dead in my library, and as this is season 1, I enter 1 of 4.
         8. Next, in the Track section, add which episode this is of how many episodes are in that season. Example: this is episode 1 of 6 in the season.
         9. Next, go into the Options tab and change the Media Kind back to TV Show.
         10. Click OK at the bottom.
         11. Once you have done this for each episode, head back into your TV Shows.
         12. When you now Get Info for each of the episodes, the Album Artist, Disc Number and Track sections should be at the bottom of the Details tab (if they were not there before).
         13. If this has been done correctly, all episodes should be listed within the Remote app in episode order.
    Hopefully this has helped somebody out there!


  • Using band zoom to select all data points within the band

    I'm feeling stupid this morning. I'm a new user with 2011, but I haven't figured out how to copy a portion of data. I can use the 2D axis system view to look at my data, and then use band zoom to get to the portion I need, but for the life of me I can't get it to copy that portion. I have set flags at the beginning and end, but the best I get is the beginning point, a single no-value point, and the end point. I am not getting all the points in between the flags. Am I supposed to somehow flag all points in the zoom view? I have done a search, but can't figure out what I am doing wrong.
    Robert

    Hello Robert,
    This is a pretty easy and straight forward thing to do:
    Put your data into the VIEW window, then pick the BAND cursor in the toolbar.
    Select the "Set Flags" icon in the VIEW window with the data and the band cursor.
    After you have the data selected with the flags (the data points will be displayed in a thicker line style), click on the "Copy Data Points" icon.
    You will get a full copy of the data in the band, copied to a new channel in the Data Portal. You can drag the new data into both windows (note that the red data is still highlighted by the flags, so it's shown in a thicker line style).
    That should answer your question, please let me know if you have additional questions,
         Otmar
    Otmar D. Foehner
    Business Development Manager
    DIAdem and Test Data Management
    National Instruments
    Austin, TX - USA
    "For an optimist the glass is half full, for a pessimist it's half empty, and for an engineer is twice bigger than necessary."
