ConcurrentHashMap.putIfAbsent efficiency question

Hi,
I have a question about the putIfAbsent method of ConcurrentHashMap. According to the API docs, it is atomic. Consider the following two segments of code:
ConcurrentHashMap<Key, Value> map;
ReentrantLock lock;
1.
Object obj = new Object();
Object oldObj = map.putIfAbsent(key, obj);
if(oldObj!=null) obj = oldObj;
2.
lock.lock();
try {
    Object obj = map.get(key);
    if (obj == null) {
        obj = new Object();
        map.put(key, obj);
    }
} finally {
    lock.unlock();
}
I am new to the concurrent package and thus don't know the performance trade-offs. For approach 1, I think the potential problem is that a new Object is created every time, whether or not one already exists in the map. Would anyone tell me which will perform better when the map contains lots of objects and the insert rate is very high?
Thanks,
Benny

Using locks is hard and error-prone; use ConcurrentHashMap and the other java.util.concurrent collections, since inventing your own locking mechanism would be difficult.
If you manage to implement your own fast locking mechanism for hash maps, you will want to publish it in an academic paper so others can implement it too, like ConcurrentLinkedQueue, which is based on a paper from 1996: http://www.cs.rochester.edu/u/michael/PODC96.html
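To Benny's point about the throwaway allocation in approach 1: the extra Object is usually cheap, and on Java 8+ computeIfAbsent avoids it entirely by running the factory only when the key is absent. A minimal sketch of both patterns (class and key names are illustrative):

```java
import java.util.concurrent.ConcurrentHashMap;

public class PutIfAbsentDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Object> map = new ConcurrentHashMap<>();

        // Approach 1: a candidate object is allocated even when the key is
        // already present; if another thread won the race, it is discarded.
        Object candidate = new Object();
        Object prev = map.putIfAbsent("key", candidate);
        Object obj = (prev != null) ? prev : candidate;

        // Java 8+: computeIfAbsent invokes the factory only when the key is
        // absent, so no throwaway allocation happens on the common hit path.
        Object cached = map.computeIfAbsent("key", k -> new Object());

        System.out.println(obj == cached); // true: same instance both times
    }
}
```

Either way the map itself stays atomic; the difference is only whether the value is constructed eagerly or lazily.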

Similar Messages

  • Efficiency Question - Cutting Clips

So I was wondering if there is a better way to cut the clips that I am not using. Basically I often have large, long clips of raw footage that I cut down to the usable segments. Each raw file may be over an hour long and have 100s of small segments of live commentary; often I find I am cutting out slightly too-long pauses in the talking and other stuff before I have a clip ready to send to the main edit.
Anyway, often I am zoomed well in so I can edit, and I press C, make a cut, then make another a bit further up, press V, click on the segment I just cut out, press delete, and then click on and move the rest of the raw clip to the left to bump up where I made the first cut, often needing to zoom well out and then back in again to continue editing.
Is there a better way to do this, so I do not need to zoom in and out all the time? Is there a way to delete a chunk of video from the sequence and have the clip automatically "close the gap" for me?
    --Thanks

    I would take the Source Clip into the Source Monitor.
    Mark an In Point  ("I")  where the good content starts...and an Out Point ("O")  where the good content stops.
    Hit "," to Insert it to the Sequence.
Back to Source Monitor and repeat for the next section of good content in the same Source Clip.
No gaps. Done.
Then, if necessary, I would play thru the Sequence and do Trim Edits at the edit points.
    Many different ways to "skin the cat" at Trim stage.
    Alternatively in the first part:
    I would take the Source Clip into the Source Monitor.
    Mark an In Point ("I") where the good content starts...and an Out Point ("O") where the good content stops.
    CTRL-Drag to the Program Monitor.

  • Efficiency Question: Simple Java DC vs Used WD Component

    Hello guys,
    The scenario is - I want to call a function that returns a FileInputStream object so I can process it on my WD view. I can think of two approaches - one is to simply write a simple Java DC that does so and just use it as a used DC so I can call the functionality from there. Or create another WD DC that exposes the value (as binary) via component interface.
I'm leaning toward the simple Java DC approach - it's easier to create. But I'm just curious what the pros would be (if there are any) if I use the WD component interface approach? Is there a plus for efficiency? (Though I doubt it.) Or is it just a 'best/recommended practice' approach?
    Thanks!
    Regards,
    Jan

    Hi Jan
    >one is to simply write a simple Java DC that does so and just use it as a used DC so I can call the functionality from there
Implementing a Java API for the purpose mentioned in your question is the right way, I think. The Java API can even be located in the same WebDynpro DC as your WebDynpro components; there is no strict necessity to put it into a separate Java DC.
    >Or create another WD DC that exposes the value (as binary) via component interface
You should not do this, because in general the purpose of WD components is UI, not API. Implementing a WD component without any UI, but with some component interface, has very restricted use cases. Such a WD component should only be the choice when the API is WebDynpro based and you have to use the WebDynpro Runtime API in order to implement your own API.
If your API does not require WebDynpro Runtime API invocations or anything else related to WebDynpro, then your choice is a simple Java API.
    >But I'm just curious on what would be the Pro-side (if there's any) if I use the WD Component interface approach? Is there a plus for the efficiency?
No. Performance will be better with a simple Java API than with a WebDynpro component interface. The reason is that the calls will not pass through the heavy WebDynpro engine.
    BR, Siarhei
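The plain Java API recommended above can be as small as a static helper. This is a hypothetical sketch (the class and method names are illustrative, and the temp-file main exists only to make the sketch self-contained):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileWriter;
import java.io.IOException;

public final class FileStreamApi {
    // A plain Java API: no WebDynpro runtime involved, so any DC
    // (or the same WebDynpro DC) can call it directly.
    public static FileInputStream open(String path) throws IOException {
        return new FileInputStream(new File(path));
    }

    public static void main(String[] args) throws IOException {
        // Demo only: create a throwaway file and open it via the API.
        File tmp = File.createTempFile("demo", ".txt");
        tmp.deleteOnExit();
        try (FileWriter w = new FileWriter(tmp)) {
            w.write("hi");
        }
        try (FileInputStream in = open(tmp.getPath())) {
            System.out.println(in.available()); // bytes ready to read
        }
    }
}
```

The WD view can then consume the returned stream with no component-interface plumbing at all.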

  • Noobish "efficiency" question on DataOutputStream

    So, I currently have this:
    byte[] buf = some_bytes;
    DataOutputStream out = new DataOutputStream(socket.getOutputStream());
    out.writeInt(buf.length);
out.write(buf);
I've always thought that calling write(byte[]) on any given OutputStream will be "efficient", in that things will be chunked under the hood to avoid incurring whatever I/O overhead is present for each and every byte. So, as I understand it, we only need to explicitly use BufferedOutputStream if we're writing one or a few bytes at a time.
    Now I'm not so sure.
    Digging through the core API source code a bit, it seems that this ultimately calls OutputStream.write(byte) in a loop, which, presumably, is bad unless the OutputStream in question is a Buffered one.
    Am I shooting myself in the foot here? Should I be wrapping that DataOuputStream around a BufferedOutputStream? Or vice versa?
    Or should I just scrap the DataOuputStream altogether and write the int's bytes out myself?
    I'm going to test it both ways, but in the meantime, and just in case the tests are not definitive, I'm wondering if anybody here can tell me what I should expect to see.
    I think I've been staring at this stuff for a bit too long and am second-guessing myself to death here no matter which way I look at it. So thanks in advance for any nudge you can give me back toward sanity.
    Edited by: jverd on Feb 16, 2012 3:59 PM

    EJP wrote:
> So, what's the point of the basic OutputStream not being buffered then?
> I guess the idea was that if you want buffering you say so.
Ok.
> I think you'll find that every significant class that extends OutputStream (specifically FileOutputStream and SocketOutputStream) overrides write(byte[], int, int) to do an atomic write to the OS, so it isn't really such an issue except in the case of DataOutputStream (and not ObjectOutputStream, see above).
Okay, so, in this case, I've got a DataOutputStream wrapped around a SocketOutputStream. It's not an issue for SOS, but it is for the wrapping DOS. Yes?
    So to overcome the DOS doing a bunch of piddly 1-byte writes to the SOS, which in turn could result in a bunch of piddly 1-byte writes to the network, which I don't want, I inject a BOS between them. Yes?
    Thanks for the help. I can't believe after all these years I never got these details sorted out. I guess it never came up quite this way before.
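The fix the thread converges on can be sketched as follows. A ByteArrayOutputStream stands in for the socket stream so the example is self-contained; names and the buffer size are illustrative:

```java
import java.io.BufferedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class FramedWriteDemo {
    public static void main(String[] args) throws IOException {
        byte[] buf = "payload".getBytes("UTF-8");

        // For a real socket you would wrap socket.getOutputStream();
        // a ByteArrayOutputStream stands in here to keep the sketch runnable.
        ByteArrayOutputStream sink = new ByteArrayOutputStream();

        // The BufferedOutputStream sits between DOS and the raw stream, so
        // DataOutputStream's small writes (e.g. writeInt's four single-byte
        // writes) are coalesced instead of hitting the raw stream one by one.
        DataOutputStream out =
            new DataOutputStream(new BufferedOutputStream(sink, 8192));
        out.writeInt(buf.length);
        out.write(buf);
        out.flush(); // push the buffered bytes down to the underlying stream

        System.out.println(sink.size()); // 4-byte length prefix + 7 payload bytes
    }
}
```

Note the flush() (or close()) at the end: with a BufferedOutputStream in the chain, bytes only reach the network when the buffer fills or is flushed.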

  • Hash Table efficiency question

    Hi experts,
In my program, I want to read in a lot of strings and store the occurrence count of each one. I found that a hash table is the best and most efficient option, but the problem is that a hash table only stores one value per key.
So, I either have to:
1) store an array object in each hash table entry, i.e. String[String][Occurrence]
2) create two hash tables based on the hash code of the string.
For 2) I am planning to store all distinct Strings in one hashtable using the string as the key, then create another hashtable to store the occurrence count, again using the String as the key.
My questions are:
1) Which implementation is more efficient: constantly creating a String array for each entry, or creating two hashtables?
2) Is the second implementation possible? Would the hashcode be mapped to a different cell in the hashtable even though the two hashtables are using the same hashcode and the same size?
    Thank you very much for your help.
    Kevin

I am wondering what it is you are trying to do, but I am assuming you are trying to find the number of occurrences of a particular word, and then determining which word has the highest or lowest count? You can retrieve the initial String value by using the keys() method of the Hashtable, and use it to traverse the entire table and compare the counts there.
If you really want to store another reference for that string, create a simple object:
public final class WordCount {
  /**
   * The Word being counted.
   * @since 1.1
   */
  private String _word;
  /**
   * Count for the Number of Words.
   * @since 1.1
   */
  private int _count;
  /**
   * Creates a new instance of the Word Count Object.
   * @param word The Word being counted.
   * @since 1.1
   */
  public WordCount(final String word) {
    super();
    _word = word;
    _count = 0;
  }
  /**
   * Call this method to increment the Count for the Word.
   * @since 1.1
   */
  public void increment() {
    _count++;
  }
  /**
   * Retrieves the word being counted.
   * @return Word being counted.
   * @since 1.1
   */
  public String getWord() {
    return _word;
  }
  /**
   * Return the Count for the Word.
   * @return Non-negative count for the Word.
   * @since 1.1
   */
  public int getCount() {
    return _count;
  }
}
Then your method can be as follows:
/**
 * Counts the Number of Occurrences of Words within the String.
 * @param someString The String to be counted for
 * @param pattern Pattern to be used to split the String
 * @since 1.1
 */
public static final WordCount[] countWords(final String someString, final String pattern) {
  StringTokenizer st = new StringTokenizer(someString, pattern);
  HashMap wordCountMap = new HashMap();
  while (st.hasMoreTokens()) {
    String token = st.nextToken();
    if (wordCountMap.containsKey(token)) {
      ((WordCount) wordCountMap.get(token)).increment();
    } else {
      WordCount count = new WordCount(token);
      count.increment();
      wordCountMap.put(token, count);
    }
  }
  Collection values = wordCountMap.values();
  return (WordCount[]) values.toArray(new WordCount[values.size()]);
}
Now you can create your own comparator classes to sort the entire array of Word Count Objects. I hope that helps.
    Regards
    Jega
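For what it's worth, on current Java a single map from word to count is simpler than either option described above. A hedged sketch using HashMap.merge (Java 8+), with illustrative names:

```java
import java.util.HashMap;
import java.util.Map;

public class WordFrequency {
    public static Map<String, Integer> countWords(String text, String splitPattern) {
        Map<String, Integer> counts = new HashMap<>();
        for (String token : text.split(splitPattern)) {
            if (!token.isEmpty()) {
                // merge stores 1 on first sight, otherwise adds 1 to the old count
                counts.merge(token, 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        Map<String, Integer> counts = countWords("a b a c a b", " ");
        System.out.println(counts.get("a") + " " + counts.get("b") + " " + counts.get("c"));
    }
}
```

One map, one lookup per token, and no wrapper class or second table needed.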

  • ABAP efficiency question

    Hi
I have an internal table of type STRING which may contain thousands of records, and I am trying to find the lines that start with a given set of characters. There may be a few lines starting with that sequence, and it is possible to do what I have shown below; however, is there a more efficient way to do this?
Ideally I would like to find the lines or the index (SY-TABIX) using LOOP AT ... WHERE ... My current way of doing it is not the best. Any ideas?
    DATA t_spooltext TYPE TABLE OF string. "contains thousands of lines
    DATA l_text(200) TYPE c.
    DATA l_index     TYPE i.
    LOOP AT t_spooltext INTO DATA(l_current_line).
      CONDENSE l_current_line.
      l_text = l_current_line.
      IF l_text+0(5) EQ 'H046A'.
        l_index = sy-tabix.
  ENDIF.
    ENDLOOP.

    Sharath,
    Fixing what you posted, I add the sample code below:
       REPORT  z_test.
    DATA: t_spooltext TYPE TABLE OF string,
          results_tab TYPE TABLE OF match_result.
    DO 10 TIMES.
      APPEND 'H046A' TO t_spooltext.
    ENDDO.
    DO 3 TIMES.
      APPEND 'H046B' TO t_spooltext.
    ENDDO.
    FIND ALL OCCURRENCES OF 'H046A'
    IN TABLE t_spooltext
    IN CHARACTER MODE
    RESULTS results_tab.
    BREAK-POINT.
Archana, in my view Sharath is correct; the only thing you have to watch is the form in which the results table is set up, as I showed above.

  • Catalog efficiency question in Lightroom 3.6

I currently have one catalog with over 22,000 photos and numerous collections in it. I recently heard about creating separate catalogs to improve LR3's efficiency and to cut down the clutter, so that I'm just working on one catalog at a time. Does anyone know the best way to go about this so as not to ruin all of the collections I have created over the past year?

    Don't do it is the answer. For most uses one catalog is the best solution.

  • Small efficiency question

Does using braces to contain if's and for's etc. affect the efficiency or size of the compiled binary code?
    e.g.
    Does this make a difference:
for( int i=0; i<N; i++ ) {
    b += f;
}
    As opposed to:
    for( int i=0; i<N; i++ )
    b += f;

    Actually you will get a speed increase by a factor of
    10 to 15 if you manage to write your application on
    only one line.
:)
Example 1:
public class Test {
  public static void main(String[] args) {
    System.out.println("WTF?");
  }
}
vs Example 2:
public class Test{public static void main(String[] args){System.out.println("WTF?");}}
So far my performance tests have proved inconclusive, but I'll be sure to write all my code like example 2 from now on : )

  • Program efficiency question

Hello all, I was just wondering about the efficiency of some programming practices. I have a program I am working on with about 20 different files. One of the files contains all of my constants ("Constants.java"). I am wondering if it is bad practice to just reference all of the variables statically, i.e.
    System.out.println(Constants.menu);
or would it be more efficient for the classes that use Constants to implement Constants? I believe in that case I would be able to just use:
    System.out.println(menu);
    As of right now I am using the 1st method, but I wasn't anticipating the growth of the program to be so large. Any input on this subject would be greatly appreciated. Thanks,
    dub

> or would it be more efficient for the classes that use Constants to implement Constants?
No.
    Using interfaces for constants is an antipattern.
Don't dwell on these micro "efficiency" issues. Write code that's simple and easy to understand. Worry about performance in broader terms, e.g., prefer an O(N) operation to an O(N^2) one. Only diddle with those minutiae if you've determined that they're actually causing problems and the change you make will create a significant improvement.
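A common alternative that keeps the short syntax without the constant-interface antipattern is a non-instantiable holder class plus a static import. A sketch with illustrative names (the nested class stands in for a top-level Constants.java):

```java
public class ConstantsDemo {
    // A final class with a private constructor holds the constants.
    // As a top-level class, `import static some.pkg.Constants.MENU;`
    // would then let callers write the bare name MENU.
    static final class Constants {
        static final String MENU = "1) New game  2) Quit";
        private Constants() { }  // prevent instantiation
    }

    public static void main(String[] args) {
        System.out.println(Constants.MENU);
    }
}
```

This gives the brevity dub wants without classes pretending to "be" their constants via an interface.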

  • Efficiency Question

    Which is more efficient: TOTEXT (field1, "MM/dd/yyyy") or use the Date tab in Format editor?  Or is it just developer's preferences?  Thanks.

The Date tab is more efficient because then Crystal doesn't have to process a formula. By using the Date tab, Crystal is doing nothing more than changing how the data is displayed instead of translating the data.

  • DataGrid sort efficiency questions

    Hi,
    I guess I have two specific questions:
    1. Is sorting a DataGrid faster/slower/same by applying a
    sortCompareFunction to a specific column vs. applying a Sort object
    directly to the underlying dataprovider?
    2. It says in the following link that if you use a
    labelFunction on a column, you must also use a sortCompareFunction:
    dataGridColumn.sortCompareFunction
    ASDOC
    Is this really true - anyone know why?
    Thanks!
    -Noah

    Alexander,
    In Flex 3 sorting is controlled by the collections rather than the controls. This provides greater flexibility overall despite making it hard to adjust to. To get or set a sort on the DataGrid in Flex 3, use the sort property of its dataProvider:
    ListCollectionView(dataGrid.dataProvider).sort
    You can both read and write to the sort property. When you write to it, you have to call refresh() after for it to apply. Refer to the documentation on the ListCollectionView class for how to properly get and set its sort property.
    Thanks,
    Philip

  • New user Union efficiency question.

    I have feeling I read this some where but can no longer seem to find any information on the following:
    Which is more efficent to the following with Physical tables A and B
    SELECT * FROM (A union B) WHERE ID < 97 AND ID > 103
    or
    SELECT * FROM A WHERE ID < 97 AND ID > 103 UNION SELECT * FROM B WHERE ID < 97 AND ID > 103.
    And more to the point how would I test / profile this ?
    Thanks
    Mark

Silly me, it was a theoretical question, inasmuch as the code I'm trying to maintain is a little more complex and the records in both tables, whilst being the same, do not have contiguous ID values.
I found an article on materialized views at http://www.akadia.com/services/ora_materialized_views.html and used SET AUTOTRACE ON and TIMING ON, and found that after creating two copies of the same table, both gave the same performance (with corrected ID values of course ;-)).
    Thanks ever so much and Regards
    Mark.

  • How do I request help for an Acrobat search efficiency question?

I have had a search time degradation of 60x-70x when I switch to Acrobat X or XI (from 14 seconds on v9.5 to 841 seconds with X or 991 seconds with XI). I am trying to find out where I can ask about this to see what I can do to improve the speed of my searches.
    Is this forum a good place to ask?  My prior posting was at:
       http://forums.adobe.com/message/5338121#5338121
    Thanks.

    The PDF files I search are fairly large.  I was told they are "text under image".  They represent the pages of an organization's bimonthly journal over the course of 80 years.  There are 16 files, averaging about 800 pages per file -- totalling around 600MB.  The ability to search across all those pages in 14 seconds rather than 1000 seconds is greatly desired.
    What kind of additional information would be appropriate and helpful?  Should I repost the details I had included under the other forum?  Are there other tests I should try?
    Am I the only one who has had this kind of issue?  I searched to find similar discussions or articles but had found none.
    Is this the appropriate forum in which to follow up as a discussion?
    BTW --  Best regards, and thanks for your feedback.

  • Weak and concurrent hash map for caching

    Hello,
    I have written a very simple cache class with both weakness and concurrency benefits. This class is intended to be used as a weak cache in "hot redeploy" capable servers (JBoss or any other).
My implementation uses the (problematic) "double-checked locking" pattern with a reentrant lock and encapsulates a WeakHashMap with WeakReference values (to avoid circular key/value references). Here is the code:
public interface ValueCreator<V> {
     public V create();
}
public class WeakCache<K, V> {
     private final Lock lock = new ReentrantLock();
     private final Map<K, WeakReference<V>> weakMap;
     public WeakCache() {
          this(16);
     }
     public WeakCache(int initialCapacity) {
          this(initialCapacity, 0.75F);
     }
     public WeakCache(int initialCapacity, float loadFactor) {
          weakMap = new WeakHashMap<K, WeakReference<V>>(initialCapacity, loadFactor);
     }
     public V get(K key, ValueCreator<V> creator) {
          WeakReference<V> ref = weakMap.get(key);
          if (ref == null) {
               lock.lock();
               try {
                    ref = weakMap.get(key);
                    if (ref == null) {
                         ref = new WeakReference<V>(creator.create());
                         weakMap.put(key, ref);
                    }
               } finally {
                    lock.unlock();
               }
          }
          return ref.get();
     }
}
One usage of this cache is for session ejb3 lookup:
private static final WeakCache<Class, Object> LOOKUP_CACHE = new WeakCache<Class, Object>();
public static <T> T lookup(final Class<T> serviceClass) {
     T service = (T) LOOKUP_CACHE.get(serviceClass,
          new ValueCreator<Object>() {
               public Object create() {
                    String lookup = "myapp/" + serviceClass.getSimpleName() + "/local";
                    try {
                         return (new InitialContext()).lookup(lookup);
                    } catch (NamingException e) {
                         throw new RuntimeException("Could not lookup EJB " + serviceClass + ": " + lookup, e);
                    }
               }
          });
     return service;
}
Two questions:
    1. Is there any issue with concurrent access to this cache ?
    2. What happens exactly when the garbage collector wants to free memory with a map with both weak keys and values ?
    Some limited tests show that the behavior of this cache fits my needs: the lookup cache is cleared on redeploy and it keeps its key/values pairs otherwise.
    Thanks for any comments.
    Message was edited by:
    fwolff999

I know that DCL is broken under certain circumstances (but I have read it may work with modern JVMs and the use of volatile variables, for example).
So: is it broken with this specific implementation, and if so, what should be done to make it work?
The getter method potentially sets a value (like ConcurrentHashMap.putIfAbsent), and it uses a ValueCreator so that the value is actually created only if it is not already present in the map.
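For comparison, a lock-free variant is possible on Java 8+ with ConcurrentHashMap.computeIfAbsent, which runs the factory at most once per absent key (the putIfAbsent-style semantics wanted here) without any DCL. Note the trade-off this sketch makes: the JDK has no weak-keyed concurrent map, so keys are held strongly and only the values stay weakly reachable; all names are illustrative:

```java
import java.lang.ref.WeakReference;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.function.Supplier;

public class ConcurrentWeakValueCache<K, V> {
    private final ConcurrentMap<K, WeakReference<V>> map = new ConcurrentHashMap<>();

    public V get(K key, Supplier<V> creator) {
        while (true) {
            // computeIfAbsent runs the factory atomically, at most once per
            // absent key, so no explicit lock or double-check is needed.
            WeakReference<V> ref =
                map.computeIfAbsent(key, k -> new WeakReference<>(creator.get()));
            V value = ref.get();
            if (value != null) {
                return value;
            }
            // The referent was collected: drop the stale entry and retry.
            map.remove(key, ref);
        }
    }

    public static void main(String[] args) {
        ConcurrentWeakValueCache<String, String> cache = new ConcurrentWeakValueCache<>();
        String a = cache.get("k", () -> "v");
        String b = cache.get("k", () -> "other");
        System.out.println(a.equals("v") && a == b); // factory ran only once
    }
}
```

If weak keys are essential (e.g. so redeployed Class keys can unload), this sketch is not a drop-in replacement; the original WeakHashMap-plus-lock design, or an external library, would still be needed for that.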

  • NullPoint Error shows up in The Java EE 7 Tutorial

    I am reading The Java EE 7 Tutorial from http://docs.oracle.com/javaee/7/tutorial/doc/jsf-facelets005.htm#GIQZR
    After I typed the example code in the chapter 8.5 Composite Components in my IDE and run the example on GlassFish4.0, I got an error.
    java.lang.NullPointerException
      at java.util.concurrent.ConcurrentHashMap.putIfAbsent(ConcurrentHashMap.java:1078)
      at com.sun.faces.util.Cache.get(Cache.java:116)
      at com.sun.faces.application.view.FaceletViewHandlingStrategy.getComponentMetadata(FaceletViewHandlingStrategy.java:237)
      at com.sun.faces.application.ApplicationImpl.createComponent(ApplicationImpl.java:951)
      at javax.faces.application.ApplicationWrapper.createComponent(ApplicationWrapper.java:648)
Then I checked the older version of this tutorial and found a difference in the email.xhtml code: the namespace was changed in Java EE 7. After I changed the namespace back to the Java EE 6 version, it works.
    Java EE 7
    xmlns:composite="http://xmlns.jcp.org/jsf/composite"
    Java EE 6
    xmlns:composite="http://java.sun.com/jsf/composite"
Someone on StackOverflow told me that this may be caused by GlassFish attempting to download a schema corresponding to one of those namespaces and receiving a response that it can't handle. I don't know whether that is the real root cause. Has anyone had the same problem?

The example from 30 August 2013 (tut-install/examples/web/jsf/compositecomponentexample/) uses the old namespace xmlns:composite="http://java.sun.com/jsf/composite".
But the namespace xmlns:composite="http://xmlns.jcp.org/jsf/composite" is also OK. It works with RI 2.2.4 on Tomcat.
