Insert, search, delete performance overhead for different collections

Hi,
I am trying to create a table that compares the performance overhead of different collection data structures. Does anyone want to help me out? I want to put a number from 1 to 9 in each of the question marks, 1 being very poor performance and 9 being very good performance. (The reason I am doing this is that I had this question in a job interview test and I didn't pass it.)
anyone have any comments?
             Searching   Inserting   Deleting
ArrayList        ?           ?          ?
LinkedList       ?           ?          ?
TreeSet          ?           ?          ?
TreeMap          ?           ?          ?
HashMap          ?           ?          ?
HashSet          ?           ?          ?
Stack            ?           ?          ?

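For anyone who wants to measure rather than guess, here is a minimal timing sketch in Java (the class name, element count, and choice of ArrayList are illustrative only; a fair comparison would repeat this for each collection type and really ought to use a benchmark harness such as JMH). Expected asymptotic costs are noted in the comments.

    import java.util.ArrayList;
    import java.util.List;

    // Rough single-shot timing; JIT warm-up and GC noise make this indicative only.
    // Typical costs: ArrayList search O(n), append amortized O(1), delete O(n);
    // LinkedList search O(n), insert at ends O(1); HashSet/HashMap ops O(1) expected;
    // TreeSet/TreeMap ops O(log n); Stack push/pop O(1), search O(n).
    public class CollectionTimings {
        public static void main(String[] args) {
            int n = 100_000;
            List<Integer> list = new ArrayList<>();
            long t0 = System.nanoTime();
            for (int i = 0; i < n; i++) list.add(i);            // inserting
            long t1 = System.nanoTime();
            boolean found = list.contains(n / 2);               // searching
            long t2 = System.nanoTime();
            list.remove(Integer.valueOf(n / 2));                // deleting (by value)
            long t3 = System.nanoTime();
            System.out.printf("insert %d ms, search %d ms, delete %d ms (found=%b)%n",
                    (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000,
                    (t3 - t2) / 1_000_000, found);
        }
    }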

Similar Messages

  • Three iPods to be used for different collections by same owner

    Hi.
    I'm the happy owner of three iPods: an older iPod (maybe second or third gen, 40 GB), a Nano, and the newer Video. I'd like to restore all of them to factory settings, erasing all content and associations. Then, I'd like to reset iTunes' knowledge of them.
    I would like to create different subsets of my music from iTunes so I can use them for different purposes. They have different capabilities (storage limits, video, etc.) so this will help me optimize their features as well.
    Does somebody know how to do these things?
    Thanks.
    aps.
    Aluminum 1.67, MacMini, Quicksilver, iBook, 3xPowerbook   Mac OS X (10.4.4)   Former UNIX kernel hacker.

    After some more research, I found the answer I was looking for. Essentially, use the iPod Updater to reset the iPod. Sometimes you have to quit iTunes while it is updating, even if it is in the middle of updating. Then, after resetting the iPod with the iPod Updater, assign a unique name to each iPod when iTunes starts.

  • Performance overhead

    Hi,
    I have a quick question.
    There is a part of the project where we need to update information in SAP that we gather from the user. I am getting a series of names from the user and inserting them with an INSERT statement. If the user modifies that series of names, I delete the existing rows first and insert again.
    I am not sure whether this is a good way to do it. My concern is also whether my approach will create performance overhead in SAP.
    thanks.
    - deepan

    Deepan,
    Your method is fine.  No performance issue to be concerned about.
    You could also just use the MODIFY verb.
    It handles INSERT-ing and UPDATE-ing all in one statement.
    By that - I mean:
    If "Jake" does not exist in the table, it will be INSERT automatically.
    If "Jake" does exist in the table and you changing something about Jake's info, it will be UPDATEd automatically.
    All by using the MODIFY verb.
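    For readers outside ABAP, a loose Java analogy of the same insert-or-update semantics (purely illustrative; Map.put is not the MODIFY verb, it just behaves similarly for a keyed table):

        import java.util.HashMap;
        import java.util.Map;

        public class UpsertDemo {
            public static void main(String[] args) {
                Map<String, String> table = new HashMap<>();
                table.put("Jake", "initial info");   // key absent: behaves like an INSERT
                table.put("Jake", "updated info");   // key present: behaves like an UPDATE
                System.out.println(table.get("Jake")); // prints "updated info"
            }
        }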

  • Comparison of insertion/search time between different Collection classes

    Hi,
    Does someone know where I can find a clear and complete comparison of the different Java classes that implement the Collection interface?
    I want to compare:
    - elements insertion time
    - elements search/removal time
    Thank you very much in advance
    Diego

    From Wikipedia: "Its purpose is to characterize a function's behavior for very large (or very small) inputs in a simple but rigorous way that enables comparison to other functions."
    Meaning: if I ask how quick an algorithm is, you might say it completes in 10 seconds, but the next time you run it, it might take 8 seconds. It depends on what else your computer is doing, how fast your computer is, and how much data you are putting through it; e.g., if the computer has little memory, it might need to use virtual memory, which will affect performance.
    Big O notation identifies how much work has to be carried out. The easiest example is a simple search of an array:
    for (int i = 0; i < array.length; i++) {
        if ("weijewr".equals(array[i])) { // use equals(), not ==, to compare String contents
            return i;
        }
    }
    return -1; // not found
    Where n represents the number of elements, this takes O(n) ("big O of n"), as potentially you need to look at each element.
    If you were to write a standard bubble sort, it would be O(n²), as potentially you need to iterate over the array once for each element.
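    To make the bubble-sort claim concrete, here is a minimal sketch (a hypothetical helper, not code from this thread): the outer loop makes up to n-1 passes and each pass scans the array, giving O(n²) comparisons in the worst case.

        static void bubbleSort(int[] a) {
            for (int i = 0; i < a.length - 1; i++) {          // up to n-1 passes
                for (int j = 0; j < a.length - 1 - i; j++) {  // each pass scans the array
                    if (a[j] > a[j + 1]) {                    // swap out-of-order neighbours
                        int tmp = a[j];
                        a[j] = a[j + 1];
                        a[j + 1] = tmp;
                    }
                }
            }
        }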

  • Procedure needs a way to perform update, insert or delete to...

    Hi Gurus,
    I got an assignment and need your help with how to write this procedure efficiently.
    "The procedure needs a way to actually perform the update, insert or delete to bring the reference data up to date with the refresh table." These columns are for internal use and should not be compared to the refresh tables:
    Column 1
    Column 4
    Column 5
    Column 7
    Column 9
    Column 12
    Column 22
    Column 24
    1)     I would list out the columns in the cursors in place of the *, or, better, define a record with %ROWTYPE.
    2)     You will need to include a way to look for rows that may be in one table and not the other.
    a.     Insert the rows that exist in the refresh table, but are missing from the reference table.
    b.     Delete the rows that are found in the reference table, but do not appear in the refresh table.
    3)     You also need to provide for handling the expiration date. Our default expiration date is '01-JAN-2500', meaning that if the refresh table has a futuristic expiration date or null, our default expiration date is considered valid. This date also needs to be added to any new rows created as a result of the refresh; if you leave the EXPIRATION_DATE column out of the insert, it should default to the above date.
    4)     I assume that Differences is where you plan to list the actual data values that differ. If you don’t list the whole row I would at least list the primary key, in addition to what columns are different.
    5)     The procedure needs a way to actually perform the update, insert or delete to bring the reference data up-to-date with the refresh table.
    Thanks in advance

    Hi,
    Take a look at merge
    http://www.psoug.org/reference/merge.html
    Keep Smiling
    Bob R
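    MERGE covers the insert-or-update half; the missing-rows logic from point 2 (insert rows only in the refresh table, delete rows only in the reference table) is just two set differences. A hypothetical Java sketch, with key sets standing in for the two tables:

        import java.util.Arrays;
        import java.util.HashSet;
        import java.util.Set;

        public class SyncKeys {
            public static void main(String[] args) {
                Set<String> reference = new HashSet<>(Arrays.asList("A", "B", "C"));
                Set<String> refresh   = new HashSet<>(Arrays.asList("B", "C", "D"));

                Set<String> toInsert = new HashSet<>(refresh);
                toInsert.removeAll(reference); // in refresh only -> insert into reference

                Set<String> toDelete = new HashSet<>(reference);
                toDelete.removeAll(refresh);   // in reference only -> delete from reference

                System.out.println("insert: " + toInsert + ", delete: " + toDelete);
            }
        }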

  • VBA inserting form fields: different resulting positions for different users

    I'm certainly at a loss for wrapping my head around this.
    Adobe Acrobat 9 Standard (v 9.5.4)
    Excel 2010  (VBA)
    The problem: when I create the PDF document from Excel, I search for a string of text in order to capture the quads for the containing rectangle. Then I use the quads to insert a control with numeric offsets. The problem I am facing is that the offsets seem to be causing the controls to land in different locations for different users. For example, when I send (-26, -2, 100, 10) {x-offset, y-offset, width, height}, the control aligns exactly where I want it. But when another user runs the exact same routine, or opens the PDF that I created, the fields are no longer positioned correctly.
    Is there some setting that I am missing? EDIT, SOLVED:  My Acrobat had a custom point to pixel setting.  (Preferences > Page Display > Resolution)
    Private Function makePdfControl(ByVal pdfPage As Integer, keyTerm As String, Optional ByVal keyTermLookAhead As Integer = 0, Optional ctrlType As String = "text", Optional cCoords As Variant = 0)
        'pdfPage is the target page of the document
        'keyTerm is the assembled search term: "Date Shipped >> DATESHIPPED"
        'keyTermLookAhead is the number of words assembed into KeyTerm, zero based: "Date Shipped" >>  "DATESHIPPED" >> "DATE" = 0, "SHIPPED" = 1
        'ctrlType determines the type of control to place on the form; default is text
        'cCoords carries an array of integers: x-offset, y-offset, width, and height
        txt = ""
        Dim fkt As Integer 'counter for keyTermLookAhead
        Dim matchFound As Boolean 'flag that a match has been found
        Dim maxWords As Integer 'the maximum number of words in pdfPage
        Dim coord(3) As Integer 'local array container to provide interface for cCoords
        p = 0
        matchFound = False
        maxWords = jso.getPageNumWords(pdfPage)
        Do While p + keyTermLookAhead <= maxWords 'search all words in the target page; break if not found
            p = p + 1
            For fkt = 0 To keyTermLookAhead
                txt = txt & jso.getPageNthWord(pdfPage, p + fkt)
            Next fkt
            If UCase(txt) <> UCase(keyTerm) Then 'the assembly of terms is complete, check if match
                txt = "" 'prepare "txt" for next assembly
                matchFound = False
            Else
                matchFound = True 'we've struck gold, exit the loop preserving val of "p" as the first word in the assembly
                Exit Do
            End If
        Loop
        If matchFound = True Then
            Dim qtmp() As Variant
            Dim q(7) As Double
            qtmp = jso.getPageNthWordQuads(pdfPage, p)(0) 'collect the rectangle containing the first word of the search; output is a zero-based array of 8 coordinates
            For a = 0 To 7
                q(a) = qtmp(a) 'collect the data
            Next a
            If VarType(cCoords) <> 8204 Then '8204 (vbArray + vbVariant) means an array was passed in the Variant cCoords
                coord(0) = 0
                coord(1) = 0
                coord(2) = 100
                coord(3) = 15
            Else
                coord(0) = cCoords(0) 'x-offset value
                coord(1) = cCoords(1) 'y-offset value
                coord(2) = cCoords(2) 'width value
                coord(3) = cCoords(3) 'height value
            End If
            x0 = coord(0) 'x-offset var
            y0 = coord(1) 'y-offset var
            w = coord(2) 'ctrl width
            h = coord(3) 'ctrl height
            x = q(0) + x0
            y = q(7) - h + y0
            'origin point of doc matrix is lower-left corner
            'origin point of control is lower left corner of the rectangle containing the first word of the search phrase
            'offsets are placed from this point, negative x shifts to the left, negative y shifts down
            'values are in points, not pixels
            Set f = aForm.Fields.Add(keyTerm, ctrlType, pdfPage, x, y, x + w, y + h) '(uplf, lwlf, lwrt, uprt) 'add the control to the form using values passed in
        End If
    End Function
    The above function is used while looping through the pages of the created PDF document.  I am using the following function to create the document from Excel:
    Private Sub exportToPDF()
        DoEvents
        Application.ScreenUpdating = False
        Call showTabs(False)
        ActiveWorkbook.ExportAsFixedFormat Type:=xlTypePDF, _
                                           Filename:=pdfPathData, _
                                           Quality:=xlQualityStandard, _
                                           IncludeDocProperties:=False, _
                                           IgnorePrintAreas:=False, _
                                           OpenAfterPublish:=False
        Call showTabs(True)
        Call locateDoc
        Application.ScreenUpdating = True
    End Sub

    Thanks for the reply, I did spend some time working on this issue...  here is what I found...
    1)  First of all, I did have a custom points-to-inches setting in my Acrobat options (110 vs 96). Resetting this allowed me to see the alignment issue that my colleagues were referencing first hand.
    As it turned out, my results were better, but there was still inconsistency among different workstations. Leading me to...
    2)  The MSFT creator uses the default printer in some way to create the PDF.  Because the different workstations were using different printers, we were getting different results.  If everyone used an HP 1320, nobody would see any difference upon creating / adding fields.
    The final solution was to change the Application.Printer to a common network printer before the export operation, and return the Application.Printer to the user default after the export completed.  This has provided us with a common ground to work upon; we are lucky to have a network printer that can be used for this purpose, as I can see this becoming non-viable in environments where this would be unavailable.

  • Why is the keyword delimiter in the Library a "," while in the search field for dynamic collections it is a space?

    PC Windows 8.1
    Lightroom 5.6 and 6
    Hello I have a problem with Lightroom 5 and 6 on the keywords and dynamic collections:
    Let me explain: I take pictures of people, and as keywords on each photo I put the full names of the people who are in the picture.
    - If the photo contains a person Toto Smith, I put the keyword Toto Smith.
    - If the photo contains Toto Smith and Titi Smith, I put Toto Smith, Titi Smith.
    And so on.
    Now imagine I have another picture with Titi Yellow, Jean Dupon, Toto Blue.
    When I make my dynamic collection and search for pictures whose keywords "contain all" of "Toto Dupon" and "Titi Dupon", my collection also contains the photo with Titi Yellow, Jean Dupon, Toto Blue, although that photo contains neither "Toto Dupon" nor "Titi Dupon".
    This is because the separator in dynamic collection searches is not "," as it is for keywords, but a space.
    How can I do my search without renaming all my keywords with a "_" between first name and last name, like firstname_lastname?
    I think there is a consistency problem: either you use the space as a delimiter everywhere or you use the comma, but mixing the two is not a good idea.
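    To see the delimiter problem in code, a small illustrative Java sketch (hypothetical, not how Lightroom is implemented): splitting the keyword list on commas versus on whitespace gives different answers for the same search.

        import java.util.Arrays;
        import java.util.HashSet;
        import java.util.Set;

        public class KeywordMatch {
            public static void main(String[] args) {
                String keywords = "Titi Yellow, Jean Dupon, Toto Blue";
                // comma-delimited: each keyword is a full name
                Set<String> asKeywords = new HashSet<>(Arrays.asList(keywords.split(",\\s*")));
                // space-delimited: every word becomes a separate token
                Set<String> asWords = new HashSet<>(Arrays.asList(keywords.split("[,\\s]+")));
                System.out.println(asKeywords.contains("Toto Dupon"));                   // false
                System.out.println(asWords.containsAll(Arrays.asList("Toto", "Dupon"))); // true: the false positive
            }
        }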
    Thank you

    I see that this is a very, very long-standing issue.
    I see posts on the internet from 2011 or before, like:
    http://photo.stackexchange.com/questions/27514/in-lr-smart-collections-what-is-the-difference-between-contains-contains-al
    http://feedback.photoshop.com/photoshop_family/topics/smart_collections_using_keywords_containing_spaces
    http://feedback.photoshop.com/photoshop_family/topics/lightroom_is_there_really_no_way_to_search_metadata_for_a_term_that_has_spaces_in_it
    Can I say: shame on Adobe?

  • Searching for UPDATE, INSERT and DELETE statements

    Hi.
    Suppose there is a change request containing 10 or more programs.
    One of the programs has statements working with database tables,
    like UPDATE, INSERT or DELETE.
    Is there any transaction where we can select the request number and find
    the programs working with tables directly?

    Hello
    1. Go to table E071 with TRKORR = change request and OBJECT = {REPS, REPO, REPT}
    2. The field OBJ_NAME will contain all the report names in this request
    3. In an ABAP program, load these reports into an internal table
    4. Search for 'UPDATE', 'INSERT' and 'DELETE'
    P.S.: this works only for reports

  • I can't add or delete any bookmarks on my iPad 2 running the newest iOS

    I've searched to no avail for this problem. There are similar posts, but none that tell me what to do. I can't add or delete any bookmarks on my iPad 2 running the newest iOS. I know how it's supposed to work; it just isn't working!
    It started after the major update to iOS 7.
    I can't believe that this is so hard to do. It's just not letting me. I can add a bookmark to the home screen just fine, just not in a bookmarks folder anywhere I try.
    I've used Apple products since 2001 and have always loved how intuitive they are. But the Safari browser since iOS 7 has been the worst I've experienced, at least right at the beginning after that update.
    I'd really appreciate any help that doesn't just tell me how it's supposed to work... I know that.
    My iPad 4 is not affected with the problem and works as it should.

    To delete, tap "Edit".

  • Automatic Deployment Rule - One ADR for Two Different Collection for Two Different time Intervals

    I have a scenario where two collections of Windows 8.1 machines were made based on geographical location. One collection is for all the Windows 8.1 machines in India, and one collection is for all Windows 8.1 machines in the US. Now I have created one ADR to be deployed to the collection for machines in India, with a schedule.
    My requirement: can I use the same ADR for those two collections with different schedules? Say I want the ADR to be deployed to the India collection at 10:00 PM IST and to the US collection at 10:00 PM PST.
    Can one ADR have two different schedules and be deployed to two different collections? Any help will be greatly appreciated.
      

    Couple of "bits" to help you one your patch automation quest:
    A ADR is very much a 1:1:1 rule.  It creates (or updates) ONE update group, deploys it to ONE collection, and you can provide ONE schedule.  While you are more than capable of flipping said deployment time to use local time instead of UTC, in general
    it's a very simple set of rules that are dogmatically run.
    That said because of the "include" feature of collections it's not that hard to setup a good/robust patching pattern.  What I recommend doing is building an ADR for each variance of deployment of patches or enforcement time.  for example,
    my ADRs look something like this:
    Software Updates - Zero day enforced
    Software Updates - Critical and Security 1 month enforced
    Software Updates - Critical and Security 1 month no reboots
    Exact terminology is up to you, of course, but I find a good, descriptive ADR name saves a lot of confusion. For each ADR I create an identical collection. From there I can use existing collections and a simple "include collection" rule to bundle things up and make them part of the patching schedule of my choice. Anyone can now go into my "software update" folder, look at my collections, and know exactly what gets patched by what deadline.
    Finally, don't be afraid to look into maintenance windows to trim down the ADR count. Making a deployment available for a month before it goes enforced, then setting up groups of maintenance windows (one for each Friday of the month, for example) can also accomplish a similar goal by having machines auto-patch during their assigned week while you only use one ADR.
    So by having two "types" of collections to manage your patching (one to assign an ADR built by deployment deadline, the other for exact update windows), you should be able to group most of your workstations into a decent patch schedule without being too excessive about creating a billion ADRs.

  • How do I delete events from my calendar for different countries' holiday calendars?

    I need to delete public holidays from iCal for different countries without having to do each one individually.

    It seems as if my client and I have kind of the same issue. Any suggestions on this? I sent my client an invitation via iCal from my iMac (using my Yahoo calendar as the exchange server). My client accepted the invitation. Now I have deleted this appointment from my iCal and my Yahoo calendar, but my client is not able to delete the appointment on his iPhone 4S. What can my client do?
    My client already went to the nearest Apple Store. There they told him to plug his iPhone 4S into my iMac; then he would be able to delete the appointment. Honestly, this does not make any sense to me. Why does my client have to come to me and plug his iPhone into my iMac to get a meeting deleted? What if my client lived in a different country?
    My client is really upset about this. Can anyone help me? Thanks in advance.

  • How can I perform insert/update/delete in one single mapping?

    Hi,
    I want to know whether there is any logic by which we can create 2-3 pipelines in a mapping, where the pipelines handle insert/update/delete or store some rejected data according to a conditional flag.
    I tried it in a mapping, but the problem is that when the target load order is insert, then update, then delete/reject: if a new record comes in, control passes through the insert target; but if a record needs to be updated or deleted, control again goes to the insert target, not to the update/delete target.
    We have already set all the conditional flags in a filter after the lookup and before the target.
    We checked all the possibilities but didn't succeed.
    The last option is to separate the mappings for insert/update/delete, etc.
    Is there any solution for this type of problem?
    Please reply if anybody has a solution.
    ---Umesh

    Hi Umesh,
    I understand from your query that you want to load the target with insert, update and delete rows after running the mapping...
    If you are looking for that, then you can use one of the Oracle features, Oracle Streams Change Data Capture.
    The URL is:
    http://www.oracle.com/technology/products/bi/db/10g/pdf/twp_cdc_cookbook_0206.pdf
    If any other help required do reply.
    Regards
    Tarang Jain

  • PPOME - Can't insert the SAME task for different organization units

    Hi all,
    I have a doubt: in SAP 4.6C, it seems that it is impossible to insert the same task for different organization units in PPOME (Detailed window, Tasks tab). The only way to do this is in PP01 - General management, where you can explicitly create a [B-007] relation with the same "T" object. Now, it is a little bit difficult for a user to switch between PPOME and PP01 when defining a new organizational structure.
    The question: is there any way to insert the SAME task directly in PPOME, overriding the standard behaviour of the system, which by default creates a new task for each new insertion?
    Thanks all
    Paolo

    Hi Naveen
    thank you for your prompt reply.
    The issue that I want to solve is that some organization units (not all) must be flagged for an external export to another system, depending on some characteristic of each org unit. I didn't find a suitable attribute among the standard fields, so I thought of assigning a common task to all the org units to be exported, so that this relation can serve as the missing attribute in the org unit definition.
    PP01 lets me insert a task (type T, not TS) on an org unit directly, so I want to know if I'm going to break some standard behaviour of SAP if I insert a task on an OU.
    Thank you
    Paolo

  • Panel collection with auto height for different screen resolutions

    Hi.
    I created a page using a panel collection with the default height for a 1024 × 768 screen resolution, and it works fine. When I change the screen resolution to 1152 × 864, my page layout changes: I can see a lot more space at the bottom due to the fixed height defined for the panel collection.
    My question is how to make the panel collection's height dynamic so it can change based on screen resolution. Usually a percentage for the height attribute resolves the issue, but JDev 11.1.1.4.0 doesn't support percentages; it allows only pixels or em.
    Please let me know how to resolve this issue.
    Thanks

    If you want a table to occupy 80% of the height of the screen, you should be able to do this by assigning percentages to the various panelStretchLayout facets, like this:
        <af:form id="f1">
          <af:panelStretchLayout id="psl1" topHeight="10%" bottomHeight="10%">
            <f:facet name="bottom">
              <af:spacer width="10" height="1" id="s2"/>
            </f:facet>
            <f:facet name="center">
              <af:panelCollection id="pc1">
                <f:facet name="menus"/>
                <f:facet name="toolbar"/>
                <f:facet name="statusbar"/>
                <af:table var="row" rowBandingInterval="0" id="t1">
                  <af:column sortable="false" headerText="col4" id="c1">
                    <af:outputText value="#{row.col4}" id="ot1"/>
                  </af:column>
                  <af:column sortable="false" headerText="col5" id="c2">
                    <af:outputText value="#{row.col5}" id="ot2"/>
                  </af:column>
                </af:table>
              </af:panelCollection>
            </f:facet>
            <f:facet name="start"/>
            <f:facet name="end"/>
            <f:facet name="top">
              <af:spacer width="10" height="1" id="s1"/>
            </f:facet>
          </af:panelStretchLayout>
        </af:form>

  • Question regarding DocumentDB RU consumption when inserting documents & write performance

    Hi guys,
    I do have some questions regarding the DocumentDB Public Preview capacity and performance quotas:
    My use case is the following:
    I need to store about 200,000,000 documents per day with a maximum of about 5,000 inserts per second. Each document has a size of about 200 bytes.
    According to the documentation (http://azure.microsoft.com/en-us/documentation/articles/documentdb-manage/), I understand that I should be able to store about 500 documents per second with single inserts and about 1,000 per second with batch inserts using a stored procedure. This would result in the need for at least 5 CUs just to handle the inserts.
    Since one CU consists of 2,000 RUs, I would expect the RU usage to be about 4 RUs per single document insert, or 100 RUs for a single SP execution with 50 documents.
    When I look at the actual RU consumption, I get values I don't really understand:
    Batch insert of 50 documents: about 770 RUs
    Single insert: about 17 RUs
    Example document:
    {"id":"5ac00fa102634297ac7ae897207980ce","Type":0,"h":"13F40E809EF7E64A8B7A164E67657C1940464723","aid":4655,"pid":203506,"sf":202641580,"sfx":5662192,"t":"2014-10-22T02:10:34+02:00","qg":3}
    The consistency level is set to “Session”.
    I am using the SP from the example c# project for batch inserts and the following code snippet for single inserts:
    await client.CreateDocumentAsync(documentCollection.DocumentsLink, record);
    Is there any flaw in my assumptions (OK... obviously) regarding the throughput calculation, or could you give me some advice on how to achieve the throughput stated in the documentation?
    With the current performance, I would need to buy at least 40 CUs, which wouldn't be an option at all.
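    As a sanity check on the arithmetic above, a tiny throw-away Java calculator (the figures are the ones quoted in this post, not official guidance; the class name is made up):

        // Expected cost per the documentation figures quoted above:
        // 1 CU = 2,000 RU/s; ~4 RU per single insert; ~2 RU per document in a batch.
        public class CapacityEstimate {
            public static void main(String[] args) {
                double ruPerCu = 2000.0;
                double peakInsertsPerSecond = 5000.0;
                double cusSingle = Math.ceil(peakInsertsPerSecond * 4.0 / ruPerCu); // 10 CUs
                double cusBatch  = Math.ceil(peakInsertsPerSecond * 2.0 / ruPerCu); //  5 CUs
                System.out.println("single inserts: " + cusSingle + " CUs, batched: " + cusBatch + " CUs");
            }
        }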
    I have another question regarding document retention:
    Since I need to store a lot of data per day, I also need to delete as much data per day as I insert.
    The data is valid for at least 7 days (it actually should be 30 days, depending on my options with DocumentDB).
    I guess there is nothing like a retention policy for documents ("this document is valid for X days and will automatically be deleted after that period")?
    Since deleting data on a single-document basis is no option at all, I would like to create one document collection per day and delete each collection after the specified retention period (a sketch of that rotation logic follows the collection list below).
    Those historic collections would never change and would only receive queries. The only problem I see with creating collections per day is the missing throughput:
    As I understand it, the throughput is split equally across the available collections, which would result in "missing" throughput on the actual hot collection (hot meaning the only collection I actually insert documents into).
    Is there any (better) way to handle this use case than to buy enough CUs so that the actual hot collection gets the needed throughput?
    Example:
    1 CU -> 2,000 RUs
    7 collections -> 2,000 / 7 ≈ 286 RUs per collection (per CU)
    Needed throughput for the hot collection (values from the documentation): 20,000 RUs
    => 70 CUs (20,000 / 286)
    vs. 10 CUs when using one collection and batch inserts, or 20 CUs when using one collection and single inserts.
    I know that DocumentDB is currently in preview and that it is not possible to handle this use case as-is because of the current limit of 10 GB per collection. I am just trying to do a POC so I can switch to DocumentDB when it is publicly available.
    Could you give me any advice on whether this kind of use case can, or should, be handled with DocumentDB? I currently use Table Storage for this case (currently with a maximum of about 2,500 inserts per second) but would like to switch to DocumentDB, since I had to optimize for writes per second with Table Storage and have horrible query execution times because of full table scans.
    Once again, my desired setup:
    200,000,000 inserts per day / maximum of 5,000 writes per second
    Collection 1.2 -> Hot Collection: All writes (max 5000 p/s) will go to this collection. Will also be queried.
    Collection 2.2 -> Historic data, will only be queried; no inserts
    Collection 3.2 -> Historic data, will only be queried; no inserts
    Collection 4.2 -> Historic data, will only be queried; no inserts
    Collection 5.2 -> Historic data, will only be queried; no inserts
    Collection 6.2 -> Historic data, will only be queried; no inserts
    Collection 7.2 -> Historic data, will only be queried; no inserts
    Collection 1.1 -> Old, so delete whole collection
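    A sketch of the day-based rotation described above (naming and retention arithmetic only; the "events-" prefix is made up, and the actual create/delete calls through the DocumentDB SDK are deliberately omitted since the API is still in preview):

        import java.time.LocalDate;
        import java.time.format.DateTimeFormatter;

        public class CollectionRotation {
            static final DateTimeFormatter FMT = DateTimeFormatter.ofPattern("yyyyMMdd");
            static final int RETENTION_DAYS = 7;

            // All writes go to today's collection.
            static String hotCollection(LocalDate today) {
                return "events-" + today.format(FMT);
            }

            // Collections older than the retention window are dropped wholesale.
            static String expiredCollection(LocalDate today) {
                return "events-" + today.minusDays(RETENTION_DAYS).format(FMT);
            }

            public static void main(String[] args) {
                LocalDate today = LocalDate.now();
                System.out.println("write to: " + hotCollection(today));
                System.out.println("drop:     " + expiredCollection(today));
            }
        }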
    As a matter of fact, the perfect setup would be to have only one (huge) collection with automatic document retention... but I guess this won't be an option at all?
    I hope you understand my problem; please give me some advice on whether this is at all possible now or will be possible in the future with DocumentDB.
    Best regards, and thanks for your help

    Hi Aravind,
    First of all, thanks for your reply to my questions.
    I sent you a mail a few days ago, but since I did not receive a response, I am not sure it got through.
    My main question regarding the actual usage of RUs when inserting documents is still my main concern, since I cannot insert nearly as many documents per second and CU as expected.
    According to the documentation (http://azure.microsoft.com/en-us/documentation/articles/documentdb-manage/), I understand that I should be able to store about 500 documents per second with single inserts and about 1,000 per second with a batch insert using a stored procedure (20 batches per second containing 50 documents each).
    As described in my post, the actual usage is a multiple (actually 6-7 times) of what I expected... even when running the C# examples provided at:
    https://code.msdn.microsoft.com/windowsazure/Azure-DocumentDB-NET-Code-6b3da8af/view/SourceCode
    I tried all the ideas Steve posted (manual indexing & lazy indexing mode) but was not able to reduce RU consumption to a point where 500 inserts per second were nearly possible.
    Here again are my findings regarding RU consumption for batch inserts:
    Automatic indexing on: 777 RUs for 50 documents
    Automatic indexing off & mandatory path only: 655 RUs for 50 documents
    Automatic indexing off & IndexingMode Lazy & mandatory path only: 645 RUs for 50 documents
    Expected result: approximately 100 RUs (2,000 RUs => 20x batch insert of 50 => 100 RUs per batch)
    Since DocumentDB is still in preview, I understand that it is not yet capable of handling my use case regarding throughput, collection size, number of collections and possible CUs, and I am fine with that.
    If I am able to (at least nearly) reach the stated performance of 500 inserts per second per CU, I am totally fine for now. If not, I have to move on and look for other options... which would also be "fine". ;-)
    Is there actually any working example code that manages to do 500 single inserts per second with one CU's 2,000 RUs, or is this a totally theoretical value? Or is it just because of being in preview, and the stated values are planned to work later?
    Regarding your feedback:
    "...another thing to consider is if you can amortize the request rate over the average of 200M requests/day = 2,000 requests/second, then you'll need to provision 16 capacity units instead of 40 capacity units. You can do this by catching "RequestRateTooLargeExceptions" and retrying after the server-specified retry interval..."
    Sadly this is not possible for me, because I have to query the data in near real time for my use case... so queuing is not an option.
    "We don't support a way to distribute throughput differently across hot and cold collections. We are evaluating a few solutions to enable this scenario, so please do propose it as a feature at http://feedback.azure.com/forums/263030-documentdb as this helps us prioritize feature work. Currently, the best way to achieve this is to create multiple collections for hot data, and shard across them, so that you get more proportionate throughput allocated to it."
    I guess I could circumvent this by clustering not into "hot" and "cold" collections but into "hot" and "cold" databases with one or multiple collections each (if 10 GB remains the limit per collection), if there were a way to (automatically?) scale the CUs via an API. Otherwise I would have to manually scale down the DBs holding historic data. I also added a feature request as proposed by you.
    Sorry for the long post, but I am planning the future architecture for one of our core systems and want to be sure I am on the right track.
    So if you are able to answer just one question, let it be this:
    How do I achieve the stated throughput of 500 single inserts per second with one CU's 2,000 RUs in reality? ;-)
    Best regards, and thanks again
