Question regarding compatibility changing after extracting a multi-page PDF to singles?

I created a multi-page PDF using Acrobat Distiller (Pro 10.1.12) with compatibility set to Acrobat 4 (PDF 1.3).
I then opened the PDF in Acrobat Pro 9.5.5 and extracted the multi-page PDF to singles (Document > Extract Pages pull-down), and the compatibility changes to 1.6.
Is there a setting to leave the compatibility unchanged after extracting?

If I use the split feature instead, the compatibility stays at 1.3.
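To verify what each tool actually writes, the declared compatibility level is visible in the file header. Below is a minimal Python sketch (not an Acrobat setting, just a verification aid) that prints the %PDF-1.x header of each file passed on the command line; note that a /Version entry in the document catalog can override the header, so treat this as a quick check rather than a definitive answer.

import sys

def pdf_header_version(path):
    # The first line of a PDF declares its version, e.g. b"%PDF-1.3" or b"%PDF-1.6".
    with open(path, "rb") as f:
        return f.readline().strip().decode("ascii", errors="replace")

if __name__ == "__main__":
    for path in sys.argv[1:]:
        print(path, pdf_header_version(path))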

Similar Messages

  • Question regarding changing the region of the App Store with a remaining balance

    Hello, I am attempting to change the location of my App Store, which has a remaining balance of $0.01. In order to change the location of the store, it requires spending all of the remaining balance. As my balance is too low ($0.01) to buy an app from the store, I am not able to change the region of my App Store. Moreover, since I am not a resident of the USA, I am not able to use my credit card either.
    I am wondering if it is possible to change the location of my App Store with a remaining balance OR to delete my current balance.
    Thanks
    Nejem 

    Contact iTunes Store Support and request assistance.

  • Regarding texts language after extracting texts from R/3.

    Hi,
    I created a DataSource and, by scheduling an InfoPackage, extracted texts to BW for a particular field.
    The problem is that when I look at the extracted data in BW, I can only see one language at a time. The selection screen asks for a language; when I enter one language, say EN, I only see the English texts, and for the other languages both the language and the texts are blank. When I enter German it is the same the other way around (I can see the German texts and language DE, but not English).
    When I do not enter any value for language on the selection screen, it says " " is not available!
    My question is: how do I get the keys and texts for all languages, irrespective of the language?
    many thanks,
    Ravi

    Hi,
    In the InfoObject maintenance screen, go to the 'Master data/texts' tab and double-click on the text table. In the following screen select 'Display table contents'. Another option is to copy the name of the text table, go to transaction SE16, and display the values there.
    kind regards
    Siggi

  • Question regarding the change

    I am running Mountain Lion 10.8.2.
    I have been tinkering with modifying the backup interval for Time Machine. In order to do this I have been attempting to edit the com.apple.backupd-auto plist file. From what I have read, all that is required is the following:
    sudo defaults write /System/Library/LaunchDaemons/com.apple.backupd-auto StartInterval -int 1800
    However, there is no StartInterval key in the plist file, so how is the above write command supposed to work?
    The contents of com.apple.backupd-auto are as follows:
    <?xml version="1.0" encoding="UTF-8"?><!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
        <key>Label</key>
        <string>com.apple.backupd-auto</string>
        <key>Disabled</key>
        <true/>
        <key>ProgramArguments</key>
        <array>
            <string>/System/Library/CoreServices/backupd.bundle/Contents/Resources/backupd-helper</string>
            <string>-auto</string>
        </array>
        <key>LaunchEvents</key>
        <dict>
            <key>com.apple.time</key>
            <dict>
                <key>Backup Interval</key>
                <dict>
                    <key>Interval</key>
                    <integer>3600</integer>
                    <key>MaintenanceWakeBehavior</key>
                    <string>Once</string>
                </dict>
            </dict>
        </dict>
        <key>RunAtLoad</key>
        <false/>
        <key>KeepAlive</key>
        <false/>
        <key>EnableTransactions</key>
        <true/>
    </dict>
    </plist>
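    For reference, the interval that actually drives the hourly backups in the file above is the nested LaunchEvents > com.apple.time > Backup Interval > Interval value, not a top-level StartInterval key. A minimal Python 3 sketch (plistlib is in the standard library; the path and the 1800-second value come from the thread, and it assumes you run it as root on a system where /System is still writable, which is not the case on later macOS versions with SIP) for inspecting and changing that nested value:

    import plistlib

    # Path taken from the thread (Mountain Lion); later macOS releases protect /System.
    PLIST = "/System/Library/LaunchDaemons/com.apple.backupd-auto.plist"

    with open(PLIST, "rb") as f:
        data = plistlib.load(f)

    # The schedule is nested, which is why there is no top-level StartInterval key.
    backup_interval = data["LaunchEvents"]["com.apple.time"]["Backup Interval"]
    print("current interval (seconds):", backup_interval["Interval"])  # 3600 in the file above

    # Change to 30 minutes and write the file back (run as root, then unload/reload
    # the launch daemon with launchctl for the change to take effect).
    backup_interval["Interval"] = 1800
    with open(PLIST, "wb") as f:
        plistlib.dump(data, f)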

    Yes, I am using it now and have been. I like my backups once a day at a specific time so I don't add too much to the backup drive; the redundancy isn't needed. Mostly I am testing Time Machine. It isn't my preferred backup tool.

  • US 20" Cinema sold to buyer in Italy, Question regarding compatability

    Hi,
    My auction just ended for my 20" cinema display, and the buyer is in Italy. He had some questions I am totally in the dark about, and I was wondering if someone could help me out...
    He writes:
    "i'm the winner of your auction, but i've also just discovered that your lcd might not work in Italy due to a different power supply, connection jack and voltage too, could you tell me something about that?..i've also heard that it probably needs an optional cable to work with an Italian power mac.."
    Thanks,
    Sean

    So it looks like power isn't an issue according to the input requirements on Apple's specs page. I have a hard time believing that DVI standards differ from region to region, but if someone could confirm, that should clear this up.
    Thanks,
    Sean

  • A question regarding compatibility.

    I'm starting to get very, very sick of Windows. It's inevitable that I need to make the Mac switch, but I really can't afford an iMac or MacBook Pro, etc.
    So I was just wondering: can you run a Mac Mini with a Windows laptop, or does it have to be run as a desktop with a separate monitor/keyboard/mouse?
    thank you in advance.
    George Gabriel

    Using an ethernet connection between the Mini and PC, you could run the Mini "remotely" via VNC software (Virtual Network Computing).
    However, you would still be running Windows on your PC and the user experience of the Mac interface would be substantially degraded.
    Best to just get a Monitor, Keyboard, and Mouse for the Mini. With careful shopping, this would not be too much.
    Alternatively, refurbished iMacs can be had quite reasonably at the Apple online store.

  • Question regarding txt/alert tones lost after updating an iPhone 5 without backing up first

    I have a question regarding my txt/alert tones. I recently updated my iPhone 5's software without first backing it up on iCloud or my computer (oops). After my phone finished updating, I lost the new ringtones and txt/alert tones I had bought. I connected my iPhone to my computer and synced it, hoping that it could download my purchases from the iTunes Store and then put them back onto my phone, but no such luck. When I look at my iPhone on my computer and look at Tones, they do not show up at all. However, if I click on the "On This iPhone" tab and click on "Tones", the deleted ringtones and alert tones show up; they are just greyed out and have a dotted circle to the left of them. I also tried to go to the iTunes Store on my phone and redownload them, and it tells me that I have already purchased this ringtone and asks me if I want to buy it again. I press Cancel. I know that with music it would usually let me redownload the song without buying it again, but it won't let me with the ringtones. So how do I get them back? Any help would be greatly appreciated.

    Greetings,
    I've never seen this issue, and I handle many iPads, of all versions. WiFi issues are generally local to the WiFi router - they are not all of the same quality, range, immunity to interference, etc. You have distance, building construction, and the biggie - interference.
    At home, I use Apple routers, and have no issues with any of my WiFi enabled devices, computers, mobile devices, etc - even the lowly PeeCees. I have locations where I have Juniper Networks, as well as Aruba, and a few Netgears - all of them work as they should.
    The cheaper routers - Linksys, D-Link, Siemens home units, and many other no-name devices - have caused issues of various kinds, even with basic connectivity.
    I have no idea what Starbucks uses, but I always have a good connection, and I go there nearly every morning and get some work done, as well as play.
    You could try changing channels, 2.4 to 5 Gigs, changing locations of the router. I have had to do all of these at one time or another over the many years that I have been a Network Engineer.
    Good Luck - Cheers,
    M.

  • Questions regarding DTP

    Hello Experts
    I am trying to load data from an ODS to a cube and have the following questions regarding DTP behaviour.
    1) I have set the extraction mode of the DTP to delta, as I understand that it behaves like init with data transfer the first time. However, it fetches the records only from the change log of the ODS. Then what about all the records that are in the active table? If it cannot fetch all the records from the active table, then how can we call it init with data transfer?
    2) Do I need to have two separate DTPs - one for a full load to fetch all the data from the active table and another to fetch deltas from the change log?
    Thanks,
    Rishi

    1. When you choose Delta as the extraction mode, you get the data only from the change log table.
    The change log table will contain all the records.
    Suppose you run a load of 10 records to the DSO and activate it. Those 10 records are now available in the active table as well as in the change log.
    Now in the second load you have 1 new record and 1 changed record. When you activate, your active table will have 11 records. The change log will have before- and after-image records for the changed record along with the new record.
    The cube needs those images so that the data does not get mismatched with the old data (see the sketch below this reply).
    2. If you run a full load to the cube from the DSO, you need to delete the old request after the load, which is not necessary in the previous case.
    In BI 7.0, when you choose the full-load extraction mode, you have the flexibility to load the data from either the active table or the change log table.
    Thanks
    Sreekanth S
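    A minimal, purely illustrative Python sketch (not SAP code; "customer" and "amount" are made-up fields) of the before-/after-image mechanism described in the reply above, showing why the delta from the change log can be added to the cube without mismatching the old data:

    # First load: one record, activated, so active table and change log both hold it.
    first_load = [{"customer": "C1", "amount": 100}]

    # Second load: C1 changes from 100 to 120. The change log records a before image
    # with the key figure reversed and an after image with the new value.
    change_log_delta = [
        {"customer": "C1", "amount": -100},  # before image
        {"customer": "C1", "amount": 120},   # after image
    ]

    # The cube simply adds everything it receives; the image pair nets out the old value.
    cube = {}
    for rec in first_load + change_log_delta:
        cube[rec["customer"]] = cube.get(rec["customer"], 0) + rec["amount"]

    print(cube)  # {'C1': 120} -- matches the active table after the second activation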

  • Questions regarding creation of a vendor in different purchasing organisations

    Hi ABAP gurus,
    I have a few questions regarding data transfers.
    1) While creating a vendor, the vendor is specific to a company code, and the vendor can be present in different purchasing organisations within the same company code if the purchasing organisation is defined at plant level. My client has the vendor in different purchasing organisations; how do I handle this situation?
    2) I had a few error records while uploading via MM01. How do I download the error records? I was using LSMW with predefined programs.
    3) For a few applications there are no predefined programs, so I will have to choose either a predefined BAPI or IDocs. Which is better to go with? I found that BAPIs and IDocs have the same predefined structures, so what is the difference between the two?

    Hi,
    1. Create a BDC program with the purchasing organisation as a parameter on the selection screen, then run the same BDC program for the different purchasing organisations so that the vendors are created in each of them.
    2. Check the action log in LSMW.
    3. See the documentation below:
    BAPI - BAPIs (Business Application Programming Interfaces) are the standard SAP interfaces. They play an important role in the technical integration and in the exchange of business data between SAP components, and between SAP and non-SAP components. BAPIs enable you to integrate these components and are therefore an important part of developing integration scenarios where multiple components are connected to each other, either on a local network or on the Internet.
    BAPIs allow integration at the business level, not the technical level. This provides for greater stability of the linkage and independence from the underlying communication technology.
    LSMW - No ABAP effort is required for the SAP data migration. However, effort is required to map the data into the structure according to the pre-determined format specified by the pre-written ABAP upload program of the LSMW.
    The Legacy System Migration Workbench (LSMW) is a tool recommended by SAP that you can use to transfer data once only or periodically from legacy systems into an R/3 System.
    More and more medium-sized firms are implementing SAP solutions, and many of them have their legacy data in desktop programs. In this case, the data is exported in a format that can be read by PC spreadsheet systems. As a result, the data transfer is mere child's play: Simply enter the field names in the first line of the table, and the LSM Workbench's import routine automatically generates the input file for your conversion program.
    The LSM Workbench lets you check the data for migration against the current settings of your customizing. The check is performed after the data migration, but before the update in your database.
    So although it was designed for uploading of legacy data it is not restricted to this use.
    We use it for mass changes, i.e. uploading new/replacement data and it is great, but there are limits on its functionality, depending on the complexity of the transaction you are trying to replicate.
    The SAP transaction code is 'LSMW' for SAP version 4.6x.
    Check your procedure using these links:
    BAPI with LSMW
    http://esnips.com/doc/ef04c89f-f3a2-473c-beee-6db5bb3dbb0e/LSMW-with-BAPI
    For a document on using BAPI with LSMW, I suggest you visit:
    http://www.****************/Tutorials/LSMW/BAPIinLSMW/BL1.htm
    http://esnips.com/doc/1cd73c19-4263-42a4-9d6f-ac5487b0ebcb/LSMW-with-Idocs.ppt
    http://esnips.com/doc/ef04c89f-f3a2-473c-beee-6db5bb3dbb0e/LSMW-with-BAPI.ppt
    Reward points for useful answers.
    Regards
    Anji

  • Questions regarding Optimizing formulas in IP

    Dear all,
    This weekend I had a look at the webinar on Tips and Tricks for Implementing and Optimizing Formulas in IP.
    I’m currently working on an IP implementation and encountered the following questions when getting more in-depth.
    I’d appreciate it very much if you could comment on the questions below.
    1.) I have a question regarding optimization 3 (slide 43) about Conditions:
    ‘If the condition is equal to the filter restriction, then the condition can be removed’.
    I agree fully on this, but have a question on using the Planning Function (PF) in combination with a query as DataProvider.
    In my query I have a filter in the Characteristic restriction.
    It contains variables on fiscal year, version. These only allow single value entry.
    The DataProvider acts as filter for my PF. So I’d suppose I don’t need a condition for my PF since it is narrowed down on fiscal year and version by my query.
    a.) Question: Is that correct?
    I just want to make sure that I don't get too many records as input for my PF. How detrimental to performance is it to use conditions anyway?
    <b>2.)</b> I read in training BW370 (IP-training) that a PF is executed for the currently set filter (navigational state) in the query and that characteristics that are used in restricted keyfigures are ignored in the filter.
    So, if I use version in the restr. keyfig it will be ignored.
    Questions:
    a.) Does this mean that the PF is executed for all versions in the system, or for the versions that are in the filter of the Characteristic Restrictions rather than the currently set filter?
    b.) I'd suppose the dataset for the PF can never be bigger than the initial dataset that is selected by the query, right?
    c.) Is the PF executed against the navigational state anyway when I use filtering? I have an example where I filter on the customer field, thus making my dataset smaller, but executing the PF still takes the same amount of time.
    d.) And I also find that the PF is executed twice. A popup comes up showing messages regarding the execution; after pressing OK, it seems the PF runs again...
    3.) If I use variables in my planning function, I don't want to fill in the parameter VAR_VALUE with a value. I want to use the variable which is ready for input from the selection screen of the query.
    So when I run the PF it should use the BI variable. It's no problem to customize this in the Modeler, but when I go into the frontend the field VAR_VALUE stays empty and needs a value.
    Question:
    a.) What do I enter here? For parameter VAR_NAME I use the variable name, but what do I use for parameter VAR_VALUE? Also the variable name?
    4.) Question regarding optimization 6 (slide 48) about formulas on MultiProviders:
    'If the formula is using data of only one InfoProvider but is defined on a MultiProvider, then the complete formula should be moved to the single base InfoProvider.'
    In our case we have three cubes in the MP, two real-time and one normal one. Right now we have one aggregation level (AL) on top of the MP.
    For one formula I can use one cube, so it's better to create another AL with the formula based on that cube.
    For another formula I need the two real-time cubes. This is interesting with regard to the optimization statement.
    Question:
    a.) Can I use the AL on the MP then, or is it better to create a new MP with only these two cubes and create an AL on top of that, and then create the formula on the AL based on the MP with the two cubes?
    This makes the architecture more complex.
    Thanks a lot in advance for your appreciated answers!
    Kind regards, Harjan

    Marc,
    Some additional questions regarding locking.
    I find that the dataset that is locked depends on the restrictions made in the 'Characteristic Restrictions' part of the query.
    Restrictions in the 'Default Values' part are not taken into account; in that case all data records of the characteristic are locked.
    Q1: Is that correct?
    To give an example: assume you restrict customer to a hierarchy node in Default Values. If you want people to plan concurrently, this is not possible since all customers are locked then. When the customer restriction is moved to the Characteristic Restrictions, the system only locks the specific customer hierarchy node and people can plan concurrently.
    Q2: What about variables used in restricted keyfigures, like a variable for fiscal year/period? Is only that fiscal year/period locked then?
    Q3: We'd like to lock on a navigational attribute. The nav. attr. is put as a variable in the filter of the Characteristic Restrictions. Does the system then only lock this selection for the nav. attr.? Or do I have to change my locking settings in RSPLSE?
    Then a question regarding locking of data for functions:
    Assume you use the BEx Analyzer and use the query as data_provider_filter for your planning function. You use restricted keyfigures with char Version. First column contains amount for version 1 and second column contains amount for version 2.
    In the Char Restrictions you've restricted version to values '1' and '2'.
    When executing the input-ready query, versions 1 and 2 are locked (due to the selection in the Characteristic Restrictions).
    But when executing the planning function, all versions are locked (*).
    Q4: True?
    Kind regards, Harjan

  • Po change after release and printout

    Dear Experts,
    My client requires that if we change any field in the PO (except quantity and amount), the PO should go through the release procedure again. We are using release indicators 4 and 6, but we want the release to be triggered if we change any field, such as date, tax code, text, payment terms, etc.
    RCR

    Hi,
    System determines new release strategy only if there are any changes in the characteristic values in the PO. (The characteristics that you must have defined for the PO release strategy configured in your case, like PO value, document type, Plant, etc.)
    From your question, it appears that you do not want even the date and text fields to be changed after the PO is released, and in case changes are made, the PO has to be released again.
    For this, you need to use the release indicator, which won't allow any changes after PO is final released. Hence, when any of the field is required to be changed, the release will be required to be cancelled first, then the changes can be done. After doing the necessary changes in your fields, like, date, text, etc., PO will again be subjected to release, and will have to be released again to proceed further.
    Regards,
    Zafar.

  • Questions regarding PO output in SRM 4.0

    Hi All
    I have several questions regarding the settings for PO output in SRM 4.0 ( Ext. classic)
    Would really appreciate if someone provides me the rationale and business reasons behind some config settings:
    I am referring to BBP_PO_ACTION_DEF transaction in IMG
    1) What is the difference between 'Processing when saving document' and 'Immediate processing'? Shouldn't the PO output always be processed after the PO is changed and saved?
    2) In the determination technology, what is the significance of the term 'Transportable conditions'? Why would it be different if the conditions were manual and not transportable?
    3) In the rule type, what is the significance of workflow conditions? How does the workflow control the output?
    4) What is the meaning of 'Action merging' in layman's terms, and what does each of the choices, like "Max. 1 Unprocessed Action for Each Processing Type", signify?
    5) What powers do the 'changeable in dialog' and 'executable in dialog' indicators give to the user processing the PO? What happens if these indicators are not set?
    6) What does 'Archive mode' in the processing type signify? Are the PO outputs archived and stored?
    Regards
    Kedar

    Hello,
    Have a look at note 564826. It has some information.
    As far as I know, processing time 'Immediate Processing' is not allowed. I really don't know the reason.
    "Processing Time" should be defined as "Process using Selection Report" when an output should be processed by a report, such as RSPPFPROCESS, for example.
    If you set this as "Process when Saving document", then the output will be sent immediately, otherwise you have to process it with transaction BBP_PPF.              
    I hope I could help you a little.
    Kind regards,
    Ricardo

  • Change to Extract Structure Doubt

    Hi,
    Our Basis folks are trying to install patch 6 of ST-PI 2005_1_700 in ECC. I referred to OSS Note 328181.
    For this patch, do we need to delete the setup tables and fill them again? And how about the init delta in BW?
    Please suggest me. Any answers will be appreciated with Points.
    Kind Regards
    Mahesh.N

    Hello,
    Yes, you have to delete the setup table data. Before that, as per the OSS Note, you have to clear the delta queue data, which cannot be recovered after changing the extract structure. So execute the BW delta InfoPackages and make sure there are no delta records left in RSA7 or LBWQ.
    Then you can delete the setup table using LBWG and implement the patch.
    1. Make sure that the following steps are carried out at a time when no updates are performed so that no data is lost.
    2. Start the update collective run directly from within the Customizing Cockpit. Up to PI(-A) 2001.2 , this collective run exclusively concerns the V3 update. As of PI(-A) 2002.1, depending on the update mode setting of the application, the collective run concerns either the V3 update or the update from the extraction queue ("Delta Queued", see Note 505700).
    3. Load all data of the respective DataSource into the BW System.
    Now you can make the change.
    After a change you can no longer use the statistical data already set up. If such data still exists, you should delete it (transaction LBWG).
    We particularly recommend uploading the data immediately after a restructuring and (after checking in BW) deleting it from the restructuring tables.
    Moreover, the update log (transaction LBWF) can no longer be read. Post a document in this case so that the last log entry is overwritten; this log entry then has the correct new format of the extract structure.
    As of PI 2000.2 the program RMCEXDELETELOG is available. It can be used to delete log entries.
    But if there is any change in the length of any field of the extract structure, then you have to reload the entire data again in BW; otherwise an init without data transfer will do.
    But what is the patch all about? Please follow the instruction given in the OSS Note 328181 - Changes to extract structures in Customizing Cockpit.
    Thanks
    Chandran

  • Question regarding DocumentDB RU consumption when inserting documents & write performance

    Hi guys,
    I do have some questions regarding the DocumentDB Public Preview capacity and performance quotas:
    My use case is the following:
    I need to store about 200,000,000 documents per day with a maximum of about 5,000 inserts per second. Each document has a size of about 200 bytes.
    According to the documentation (http://azure.microsoft.com/en-us/documentation/articles/documentdb-manage/) I understand that I should be able to store about 500 documents per second with single inserts and about 1,000 per second with a batch insert using a stored procedure. This would result in the need for at least 5 CUs just to handle the inserts.
    Since one CU consists of 2,000 RUs, I would expect the RU usage to be about 4 RUs per single document insert, or 100 RUs for a single SP execution with 50 documents.
    When I look at the actual RU consumption I get values I don't really understand:
    Batch insert of 50 documents: about 770 RUs
    Single insert: about 17 RUs
    Example document:
    {"id":"5ac00fa102634297ac7ae897207980ce","Type":0,"h":"13F40E809EF7E64A8B7A164E67657C1940464723","aid":4655,"pid":203506,"sf":202641580,"sfx":5662192,"t":"2014-10-22T02:10:34+02:00","qg":3}
    The consistency level is set to “Session”.
    I am using the SP from the example c# project for batch inserts and the following code snippet for single inserts:
    await client.CreateDocumentAsync(documentCollection.DocumentsLink, record);
    Is there any flaw in my assumption (OK… obviously) regarding the throughput calculation, or could you give me some advice on how to achieve the throughput stated in the documentation?
    With the current performance I would need to buy at least 40 CUs, which wouldn't be an option at all.
    I have another question regarding document retention:
    Since I would need to store a lot of data per day, I would also need to delete as much data per day as I insert.
    The data is valid for at least 7 days (it actually should be 30 days, depending on my options with DocumentDB).
    I guess there is nothing like a retention policy for documents (this document is valid for X days and will automatically be deleted after that period)?
    Since I guess deleting data on a single-document basis is no option at all, I would like to create a document collection per day and delete the collection after a specified retention period.
    Those historic collections would never change but would only receive queries. The only problem I see with creating collections per day is the missing throughput:
    As I understand it, the throughput is split equally according to the number of available collections, which would result in "missing" throughput on the actual hot collection (hot meaning the only collection I would actually insert documents into).
    Is there any (better) way to handle this use case than buying enough CUs so that the actual hot collection gets the needed throughput?
    Example: 
    1 CU -> 2000 RUs
    7 collections -> 2000 / 7 = 286 RUs per collection (per CU)
    Needed throughput for the hot collection (values from the documentation): 20,000 RUs
    => 70 CUs (20,000 / 286)
    vs. 10 CUs when using one collection and batch inserts or 20 CUs when using one collection and single inserts.
    I know that DocumentDB is currently in preview and that it is not possible to handle this use case as-is because of the limit of 10 GB per collection at the moment. I am just trying to do a POC so I can switch to DocumentDB when it is publicly available.
    Could you give me any advice on whether this kind of use case can or should be handled with DocumentDB? I currently use Table Storage for this case (currently with a maximum of about 2,500 inserts per second) but would like to switch to DocumentDB, since I had to optimize for writes per second with Table Storage and have horrible query execution times with Table Storage because of full table scans.
    Once again my desired setup:
    200,000,000 inserts per day / maximum of 5,000 writes per second
    Collection 1.2 -> Hot Collection: All writes (max 5000 p/s) will go to this collection. Will also be queried.
    Collection 2.2 -> Historic data, will only be queried; no inserts
    Collection 3.2 -> Historic data, will only be queried; no inserts
    Collection 4.2 -> Historic data, will only be queried; no inserts
    Collection 5.2 -> Historic data, will only be queried; no inserts
    Collection 6.2 -> Historic data, will only be queried; no inserts
    Collection 7.2 -> Historic data, will only be queried; no inserts
    Collection 1.1 -> Old, so delete whole collection
    As a matter of fact, the perfect setup would be to have only one (huge) collection with automatic document retention… but I guess this won't be an option at all?
    I hope you understand my problem and can give me some advice on whether this is at all possible, or will be possible in the future with DocumentDB.
    Best regards and thanks for your help
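    To make the capacity arithmetic in this post easier to follow, here is a small Python sketch that just redoes the calculations with the numbers quoted above (preview-era documented rates, the observed RU charges, and the 7-collection split); none of these figures come from anywhere other than this thread:

    RU_PER_CU = 2000                 # 1 capacity unit = 2000 request units (RU) per second
    TARGET_WRITES_PER_SEC = 5000

    def cus_needed(ru_per_doc):
        # Capacity units needed to sustain the target write rate at a given RU cost per document.
        return TARGET_WRITES_PER_SEC * ru_per_doc / RU_PER_CU

    # Documented: ~500 single inserts/s per CU (~4 RU each), ~1000 docs/s per CU batched (~2 RU each).
    print(cus_needed(2000 / 500))    # 10.0 CUs with single inserts
    print(cus_needed(2000 / 1000))   # 5.0 CUs with stored-procedure batches

    # Observed above: ~17 RU per single insert, ~770 RU per 50-document batch.
    print(cus_needed(17))            # 42.5 CUs
    print(cus_needed(770 / 50))      # 38.5 CUs

    # Splitting one CU's throughput evenly across 7 daily collections:
    print(RU_PER_CU / 7)             # ~285.7 RU/s per collection
    print(20000 / (RU_PER_CU / 7))   # ~70 CUs to give the hot collection 20,000 RU/s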

    Hi Aravind,
    first of all, thanks for your reply regarding my questions.
    I sent you a mail a few days ago, but since I did not receive a response I am not sure it got through.
    My main question regarding the actual usage of RUs when inserting documents is still my main concern, since I cannot insert nearly as many documents per second and CU as expected.
    According to the documentation (http://azure.microsoft.com/en-us/documentation/articles/documentdb-manage/) I understand that I should be able to store about 500 documents per second with single inserts and about 1,000 per second with a batch insert using a stored procedure (20 batches per second containing 50 documents each).
    As described in my post, the actual usage is multiple (actually 6-7) times higher than expected… even when running the C# examples provided at:
    https://code.msdn.microsoft.com/windowsazure/Azure-DocumentDB-NET-Code-6b3da8af/view/SourceCode
    I tried all the ideas Steve posted (manual indexing & lazy indexing mode) but was not able to reduce RU consumption to a point where 500 inserts per second were anywhere near possible.
    Here again are my findings regarding RU consumption for batch inserts:
    Automatic indexing on: 777 RUs for 50 documents
    Automatic indexing off & mandatory path only: 655 RUs for 50 documents
    Automatic indexing off & IndexingMode Lazy & mandatory path only: 645 RUs for 50 documents
    Expected result: approximately 100 RUs (2,000 RUs => 20x batch insert of 50 => 100 RUs per batch)
    Since DocumentDB is still in preview, I understand that it is not yet capable of handling my use case regarding throughput, collection size, number of collections and possible CUs, and I am fine with that.
    If I am able to (at least nearly) reach the stated performance of 500 inserts per second per CU, I am totally fine for now. If not, I have to move on and look for other options… which would also be "fine". ;-)
    Is there actually any working example code that manages to do 500 single inserts per second with one CU's 2,000 RUs, or is this a totally theoretical value? Or is it just because this is a preview and the stated values are planned to work eventually?
    Regarding your feedback:
    ...another thing to consider is if you can amortize the request rate over the average of 200 M requests/day = 2000 requests/second, then you'll need to provision 16 capacity units instead of 40 capacity units. You can do this by catching "RequestRateTooLargeExceptions" and retrying after the server specified retry interval…
    Sadly this is not possible for me, because I have to query the data in near real time for my use case… so queuing is not an option.
    We don't support a way to distribute throughput differently across hot and cold collections. We are evaluating a few solutions to enable this scenario, so please do propose it as a feature at http://feedback.azure.com/forums/263030-documentdb as this helps us prioritize feature work. Currently, the best way to achieve this is to create multiple collections for hot data, and shard across them, so that you get more proportionate throughput allocated to it.
    I guess I could circumvent this by not clustering into "hot" and "cold" collections but "hot" and "cold" databases with one or multiple collections each (if 10 GB remains the limit per collection), if there were a way to (automatically?) scale the CUs via an API. Otherwise I would have to manually scale down the DBs holding historic data. I also added a feature request as proposed by you.
    Sorry for the long post, but I am planning the future architecture for one of our core systems and want to be sure I am on the right track.
    So if you are able to answer just one question, it would be this:
    How do I achieve the stated throughput of 500 single inserts per second with one CU's 2,000 RUs in reality? ;-)
    Best regards and thanks again

  • Sales order change after Invoice

    Standard SAP allows us to change the partners in a sales order even after an invoice has been created for the sales order.
    My question is: how can we prevent this? I want the sales order partner functions to be unchangeable after we have billed the customer.
    I wonder why SAP has kept it like this. Can anyone help me out with this?
    James

    It is possible to control this by identifying a user exit and providing the logic to the ABAPer, as per the requirement.
    Regards,
    Rajesh Banka

Maybe you are looking for

  • DMS Document stored in SAP DB: Not shown in SAP PLM Web UI

    Dear Team, is it by design that DMS documents which are stored in the SAP DB are not shown in the Web UI? I get an error on the document display that says "KPRO unchecked"; however, in ECC I can see the document with originals and thumbnails and the complete DIR.

  • Where to find old backup files and how to use it to restore

    Hi, need help. I did an OS update to iOS 6 today. Before the update I backed up my phone on my PC. But since that backup, a new backup was created automatically when I connected my phone back to the PC. Now with some issues I want to rollb

  • Using BAPI_OUTB_DELIVERY_CHANGE

    I want to change only the material quantity in a D/O, with no changes to the header data. How do I fill in the parameters for BAPI_OUTB_DELIVERY_CHANGE? I followed the function module documentation, but it is not working.

  • How to send notifications to multiple people with same role and with result

    How do I send notifications to multiple people with resultout as approve/reject? We are looping the notification by attaching a cursor query to find the different emp nos to send for approval. I cannot associate a role because these emp nos are a sub-set

  • Deployment query

    Is it possible to develop complete software in SJSE 8.1 and create a deployment or setup file from it, as we can in VS.NET?