Question regarding recording vocals?

Hi, I'm considering purchasing an iMac real soon. Will GarageBand allow me to record vocals while playing an MP3, WAV, CD, etc., from an internal player on the Mac? For instance, on my current Windows PC I use Winamp to play the audio and CoolEdit Pro to record that same audio along with vocals simultaneously. Will I be able to do this using GarageBand and an internal audio player that comes with the Mac?
Thanks
Wil

They are recorded as WAV files on your player, so you should be able to transfer them to your computer using MediaSource.

Similar Messages

  • Basic question regarding recording of sounds..

    So I have this idea that I would like to use some vocals in my music, and since I don't own a dedicated microphone but do have a microphone on my headset and one built into my Mac, I would like to use one of those.
    But I can't seem to find out how to do it in Logic.
    I know it's quite a simple question, but I still hope one of you can help me.
    Best regards.

    Hi Simon,
    Simply set the Audio Device to "Built-in" and select the Microphone in the OS X Audio Prefs. Should work. Don't forget Logic can only use one device at a time, so if you are using an audio interface, you'll have to build an aggregate device or switch back and forth.
    Fox

  • Question regarding Records and Collection

    Hi all,
    I have a record/collection, say rec_t:
    FOR i IN rec_t.FIRST..rec_t.LAST
    LOOP
        -- I need a condition with which I can find out whether, inside the loop, I am at the last record
        IF rec_t.LAST is true -- I know this is not working, but do we have something to get t
    END LOOP;

    padders wrote:
    Well at least we're all consistent :-D
    Yeah, I saw that... :-)
    But I would have posted:
    if i = rec_t.COUNT then ...
    I cannot recall ever using the .LAST and .FIRST methods for normal array processing. Using
    for i in 1..array.COUNT loop ...
    is a lot safer, as it deals with array variables with no value (null) just fine. No need for additional code to check for a null state prior to entering the FOR loop.
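    For illustration, a minimal runnable sketch of the "i = rec_t.COUNT" pattern described above (the collection type and sample values here are made up):
    DECLARE
      TYPE rec_tab IS TABLE OF VARCHAR2(30);
      rec_t rec_tab := rec_tab('first', 'middle', 'last');
    BEGIN
      -- COUNT-based loop as recommended; an empty (but initialised) collection simply skips it
      FOR i IN 1 .. rec_t.COUNT
      LOOP
        IF i = rec_t.COUNT THEN
          dbms_output.put_line('last record: ' || rec_t(i));
        ELSE
          dbms_output.put_line('record ' || i || ': ' || rec_t(i));
        END IF;
      END LOOP;
    END;
    /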

  • Architecture question regarding record management

    We are designing a contract management system in SharePoint 2013. The contracts are separated by departments and by locations. Users in a particular department/location should only have access to contracts in their own department/location.
    We are thinking of creating 1 site collection for each department - this ensures that we stay within the 200 GB content database recommendation. In each department site collection, we will have 1 document library with 1 folder for each location - permission inheritance is broken on each folder to restrict access on a location basis. We went with 1 document library so that we can use out-of-box SharePoint views for reports across locations for executives to see.
    The problem with this design is that there are about 250 different locations. This means that, in each department site collection, we will have to create 250 folders in our document library and break inheritance on those 250 folders.
    Another approach would be to give no one access to the document library and use a custom drop-off library form to add documents. We can create a web part which elevates permissions and displays only those documents that users are supposed to see. The advantage of this approach is much less broken permission inheritance. The disadvantage is that we won't be able to use OOTB views and we'll have to implement our own search.
    Thoughts appreciated!

    You could consider "Audience Targeting". It works well with the Content Query Web Part. You can enable it on one document library, and it provides a TargetAudience column allowing you to assign groups/roles to the content. This could get around the creation of separate site collections and breaking permission inheritance.
    http://technet.microsoft.com/en-us/library/cc261958(v=office.14).aspx#Section4

  • Question regarding Home Hub and Openreach router -...

    Hi all,
      I had Infinity installed earlier this month and am happy with it so far. I do have a few questions regarding the service and hardware though.
      I run both my BT Openreach router and BT Home Hub from the same power socket. The problem is, if I turn the plug on so both the Home Hub and Openreach router start up at the same time, the Home Hub will never get an Internet connection from the router. To solve this I have to turn the BT Home Hub on first and leave it for a minute, then start the router up, and it all works fine. I'm just curious whether this is the norm or whether I have some faulty hardware.
      Secondly, I appreciate the estimated speed BT quotes isn't always accurate; I was quoted 49 Mbit/s down but received 38 Mbit/s down, which I was happy with. Recently, though, it has dropped to 30, and I am worried this might continue to drop over time; at present I am 20 Mbit/s down on the estimate. For the record, 30 Mbit/s is actually fine and probably more than I would ever need. If I could boost it somehow, though, I would be interested to hear from you.
    Thanks.

    Just a clarification: the two boxes are the HomeHub (router, black) and the modem (white).  The HomeHub has its own power switch, the modem doesn't.
    There is something wrong if the HomeHub needs to be turned on before the modem.  As others have said, in general best to leave the modem on all the time.  You should be able to connect them up in any order, or together.  (For example, I recently tripped the mains cutout, and when I restored power the modem and HomeHub went on together and everything was ok).
    Check if the router can connect/disconnect from the broadband using the web interface.  Leaving the modem and HomeHub on all the time, go to http://192.168.1.254/ on a browser on a connected computer, and see whether the Connect/Disconnect button works.

  • Questions regarding creation of vendors in different purchasing organisations

    Hi ABAP gurus,
    I have a few questions regarding data transfers.
    1) While creating a vendor, the vendor is specific to a company code, and the vendor can be present in different purchasing organisations within the same company code if the purchasing organisation is present at plant level. My client has vendors in different purchasing organisations. How do we handle the above situation?
    2) I had a few error records while uploading MM01. How do I download the error records? I was using LSMW with predefined programs.
    3) For a few applications there are no predefined programs, so I will have to choose either a predefined BAPI or IDocs. Which is better to go with? I found that BAPIs and IDocs have the same predefined structures, so what is the difference between the two?

    Hi,
    1. Create a BDC program with purchasing organisation as a parameter on the selection screen, then run the same BDC program for the different purchasing organisations so that the vendors are created in each of them.
    2. Check the Action Log in LSMW and see.
    3. See the doc below.
    BAPI - BAPIs (Business Application Programming Interfaces) are the standard SAP interfaces. They play an important role in the technical integration and in the exchange of business data between SAP components, and between SAP and non-SAP components. BAPIs enable you to integrate these components and are therefore an important part of developing integration scenarios where multiple components are connected to each other, either on a local network or on the Internet.
    BAPIs allow integration at the business level, not the technical level. This provides for greater stability of the linkage and independence from the underlying communication technology.
    LSMW - No ABAP effort is required for the SAP data migration. However, effort is required to map the data into the structure according to the pre-determined format specified by the pre-written ABAP upload program of the LSMW.
    The Legacy System Migration Workbench (LSMW) is a tool recommended by SAP that you can use to transfer data once only or periodically from legacy systems into an R/3 System.
    More and more medium-sized firms are implementing SAP solutions, and many of them have their legacy data in desktop programs. In this case, the data is exported in a format that can be read by PC spreadsheet systems. As a result, the data transfer is mere child's play: Simply enter the field names in the first line of the table, and the LSM Workbench's import routine automatically generates the input file for your conversion program.
    The LSM Workbench lets you check the data for migration against the current settings of your customizing. The check is performed after the data migration, but before the update in your database.
    So although it was designed for uploading of legacy data it is not restricted to this use.
    We use it for mass changes, i.e. uploading new/replacement data and it is great, but there are limits on its functionality, depending on the complexity of the transaction you are trying to replicate.
    The SAP transaction code is 'LSMW' for SAP version 4.6x.
    Check your procedure using these links.
    BAPI with LSMW
    http://esnips.com/doc/ef04c89f-f3a2-473c-beee-6db5bb3dbb0e/LSMW-with-BAPI
    For document on using BAPI with LSMW, I suggest you to visit:
    http://www.****************/Tutorials/LSMW/BAPIinLSMW/BL1.htm
    http://esnips.com/doc/1cd73c19-4263-42a4-9d6f-ac5487b0ebcb/LSMW-with-Idocs.ppt
    http://esnips.com/doc/ef04c89f-f3a2-473c-beee-6db5bb3dbb0e/LSMW-with-BAPI.ppt
    Regards
    Anji

  • Hello, I have a question regarding sharing/exporting in iMovie. Whenever I click the share button all the normal options pop up, but when I actually click where I want to share it to, nothing happens. If you know what's wrong please let me know.

    Hello, I have a question regarding sharing in iMovie. I have just recently purchased an Elgato Gaming Capture HD; I finished my recording with that and put it into iMovie. I worked long and hard on the project, and when I click the Share feature in iMovie all the normal options pop up, but when I actually click where I want to share it to, nothing at all happens. If you know what is wrong / what I am doing wrong, please let me know.
    Thank you.
    PS:  I am using iMovie 10.0.6.

    /* line 957 error */
    public void select() {
        for (count = 0; count <= p; count++) {
            if (P[count] != null) { /* validation */
                m = (int) (P[count].getX());
                n = (int) (P[count].getY());
                // choose the highlight colour based on the parity of the square
                if (Math.pow(-1, m + n) == 1)
                    piece[m][n].setBackground(wselect);
                else
                    piece[m][n].setBackground(bselect);
            }
        }
        step = 2;
    }

  • Recording vocals separate from main music Mac

    I do most of my music on a Mac Pro, running Logic Studio 9, but the location of the computer really isn't a suitable space for recording vocals in.
    I can't keep lugging the Mac Pro about, so ideally I'd like to record the vocals in a suitable (different) space on to a laptop, then import the resulting audio files to the main project on my Mac Pro.
    The idea is that I'd do a stereo reference mix of the music in Logic Pro, bounce it and copy that audio file to a project on the laptop (using Garageband or Logic Express to save cost).
    With that project set up at the same tempo and sampling rate, I'd then record the vocals using the laptop. From there I'd copy the vocal audio file(s) and drop them into the music project on the Mac Pro, where I'd add any effects, processing and do the final mix.
    First question - would this work? As long as I recorded the vocals in a project set to the same tempo and sampling rate as the music, would I have any timing or sync problems when adding the vocal audio files to the original music project?
    Second question - if so, can anyone recommend any portable FireWire 800 audio interfaces which would be suitable for recording the vocals on to the laptop? It would only need 2 input channels maximum, but with at least one XLR input and phantom power.
    Any advice gratefully received!

    but the location of the computer really isn't a suitable space for recording vocals in.
    Is there any other room that would work better at your location? If so, consider buying a snake.
    The idea is that I'd do a stereo reference mix of the music in Logic Pro, bounce it and copy that audio file to a project on the laptop (using Garageband or Logic Express to save cost).
    Or reinstall Logic Pro on your laptop, but without all the content. Your license permits this, as you're not using two copies at the same time.
    First question - would this work?
    Yes. I do this often. Just be sure that both files begin at 1 1 1 1. Additionally, as I usually have many takes when tracking overdubs, I always record to WAV (BWF) files so I can use the 'move to original record position' command to align less-than-complete audio files.
    Second question - if so, can anyone recommend any portable, Firewire 800 audio interfaces
    Most FW interfaces are FW400. Just get an [adapter.|http://www.sonnettech.com/product/fw_adapter.html] and buy an interface which can be separately powered. There are dozens of suitable FW interfaces: one inexpensive box which I like is the [AudioFire4|http://www.echoaudio.com/Products/FireWire/AudioFire4/index.php] which is 6in/6out - they make an Audiofire2 but that doesn't have XLRs. Apogee, Focusrite, Presonus etc. all have small interfaces which you might like - just [search around|http://www.sweetwater.com/c683--FireWireAudioInterfaces/low2high]. Personally, I would avoid M-Audio.
    Good luck.

  • Recording vocals

    I have:
    - 8800 Shure stage mic.
    - M-Audio Ozone.
    - DX4 Audiophile studio monitors.
    - G4 Mac mini (256 MB RAM).
    Does anyone have tips on how I should set up to record vocals over Garageband music tracks I've made?

    You might run into some problems with the RAM; 256 MB might not really be enough to run GarageBand quickly and without bugs with a lot of real music tracks. However, as regards recording technique, set up the M-Audio with the mic plugged in, and then get a pair of headphones. I don't know if your M-Audio has a built-in monitor output or not, so plug the headphones in either through the M-Audio or right into the back of the mini. Press record and lay down the vocals to the playback of the music tracks you've already created.

  • Questions regarding Optimizing formulas in IP

    Dear all,
    This weekend I had a look at the webinar on Tips and Tricks for Implementing and Optimizing Formulas in IP.
    I’m currently working on an IP-implementation and encounter the following when getting more in-depth.
    I’d appreciate very much if you could comment on the questions below.
    1.) I have a question regarding optimization 3 (slide 43) about Conditions:
    ‘If the condition is equal to the filter restriction, then the condition can be removed’.
    I agree fully with this, but have a question on using the Planning Function (PF) in combination with a query as DataProvider.
    In my query I have a filter in the Characteristic Restriction.
    It contains variables on fiscal year and version. These only allow single-value entry.
    The DataProvider acts as a filter for my PF, so I’d suppose I don’t need a condition for my PF since it is narrowed down on fiscal year and version by my query.
    a.) Question: Is that correct?
    I just want to make sure that I don’t get too many records as input for my PF. How detrimental to performance is it to use conditions anyway?
    2.) I read in training BW370 (IP training) that a PF is executed for the currently set filter (navigational state) in the query and that characteristics that are used in restricted key figures are ignored in the filter.
    So, if I use version in the restricted key figure, it will be ignored.
    Questions:
    a.) Does this mean that the PF is executed for all versions in the system, or for the versions that are in the filter of the Characteristic Restrictions and not the currently set filter?
    b.) I’d suppose the dataset for the PF can never be bigger than the initial dataset that is selected by the query, right?
    c.) Is the PF executed against the navigational state anyway when I use filtering? I have an example where I filter on the field customer, thus making my dataset smaller, but executing the PF still takes the same amount of time.
    d.) I also notice that the PF is executed twice. A popup comes up showing messages regarding the execution; after pressing OK, it seems the PF runs again...
    3.) If I use variables in my Planning Function, I don’t want to fill in the parameter VAR_VALUE with a value. I want to use the variable which is ready for input from the selection screen of the query.
    So when I run the PF it should use the BI variable. It’s no problem to customize this in the Modeler, but when I go into the frontend the field VAR_VALUE stays empty and needs a value.
    Question:
    a.) What do I enter here? For parameter VAR_NAME I use the variable name, but what do I use for parameter VAR_VALUE? Also the variable name?
    4.) Question regarding optimization 6 (slide 48) about Formulas on MultiProviders:
    'If the formula is using data of only one InfoProvider but is defined on a MultiProvider, then the complete formula should be moved to the single base InfoProvider'.
    In our case we have three cubes in the MP, two realtime and one normal one. Right now we have one AggrLevel (AL) on top of the MP.
    For one formula I can use one cube, so it's better to create another AL with the formula based on that cube.
    For another formula I need the two realtime cubes. This is interesting regarding the optimization statement.
    Question:
    a.) Can I use the AL on the MP then, or is it better to create a new MP with only these two cubes and create an AL on top of that, and then create the formula on the AL based on the MP with the two cubes?
    This makes the architecture more complex.
    Thanks a lot in advance for your appreciated answers!
    Kind regards, Harjan

    Marc,
    Some additional questions regarding locking.
    I find that the dataset that is locked depends on the restrictions made in the 'Characteristic Restrictions' part of the query.
    Restrictions in the 'Default Values' part are not taken into account; in that case all data records of the characteristic are locked.
    Q1: Is that correct?
    To give an example: assume you restrict customer to a hierarchy node in Default Values. If you want people to plan concurrently, this is not possible, since all customers are locked then. When the customer restriction is moved to the Characteristic Restrictions, the system only locks the specific customer hierarchy node and people can plan concurrently.
    Q2: What about variables used in restricted key figures, like a variable for fiscal year/period? Is only this fiscal year/period locked then?
    Q3: We'd like to lock on a navigational attribute. The navigational attribute is put as a variable in the filter of the Characteristic Restrictions. Does the system then only lock this selection for the navigational attribute, or do I have to change my locking settings in RSPLSE?
    Then a question regarding locking of data for functions:
    Assume you use the BEx Analyzer and use the query as data_provider_filter for your planning function. You use restricted key figures with the characteristic Version. The first column contains the amount for version 1 and the second column contains the amount for version 2.
    In the Characteristic Restrictions you've restricted version to values '1' and '2'.
    When executing the input-ready query, versions 1 and 2 are locked (due to the selection in the Characteristic Restrictions).
    But when executing the planning function, all versions are locked (*).
    Q4: True?
    Kind regards, Harjan

  • Questions regarding new functionalities in EhP 4 - Reporting Financials 2

    Dear Forum,
    in a project we would like to use some new functionality from Reporting Financials 2 - i.e. DataSource 0FI_AA_20 for loading Depreciation and Amortization to BI for the following years, as this cannot be done by the old extractor.
    We are now looking for reliable information about the impact and changes that are made in ERP if we switch on the functionality Reporting Financials 2 via SFW5. Will the old extractors still work? Will all reports in ERP work without problems? Is there any impact on business processes? Or is this just additional functionality which will not affect the current implementation?
    Can anybody give information about this?
    Thanks, regards
    Lars
    Edited by: Lars Hermanns on Jun 2, 2010 10:29 AM


  • VSTS load test question: While recording a web performance test, is it possible to clear / force clearing of the cache without stopping or closing IE on the recorder?

    VSTS load test question: While recording a web performance test, is it possible to clear / force clearing of the cache without stopping or closing IE on the recorder?
    I am facing an issue while recording the web test: the first time our application loads any screen it issues certain requests, but the second time the same information is browsed, if it is available in the cache, those requests are not called again.
    How can I force VSTS to clear the cache while recording, so that I can record those requests which only get called the first time?
    Regards
    SatishK

    Hi SatishK,
    Thanks for your reply.
    >> but my question was: while recording (during recording using the Web Test Recorder) I want to clear the cache.
    Based on your issue, I did some research about it, but I still could not find an official document about it. Generally, I know that we can control caching for a web performance test via the Cache Control properties, so I think that it is not possible to clear the cache while we are recording the web performance test.
    If you still want this feature for web performance tests, I suggest you submit a feature request at
    http://visualstudio.uservoice.com/forums/121579-visual-studio. The Visual Studio product team is listening to user voice there. You can send your idea there and people can vote. If you submit this suggestion, I hope you could post that link here, and I will help you vote for it.
    Thanks for your understanding.
    Best Regards,

  • Question regarding DocumentDB RU consumption when inserting documents & write performance

    Hi guys,
    I do have some questions regarding the DocumentDB Public Preview capacity and performance quotas:
    My use case is the following:
    I need to store about 200,000,000 documents per day with a maximum of about 5,000 inserts per second. Each document has a size of about 200 bytes.
    According to the documentation (http://azure.microsoft.com/en-us/documentation/articles/documentdb-manage/) I understand that I should be able to store about 500 documents per second with single inserts and about 1,000 per second with a batch insert using a stored procedure. This would result in the need of at least 5 CUs just to handle the inserts.
    Since one CU consists of 2,000 RUs, I would expect the RU usage to be about 4 RUs per single document insert, or 100 RUs for a single SP execution with 50 documents.
    When I look at the actual RU consumption I get values I don’t really understand:
    Batch insert of 50 documents: about 770 RUs
    Single insert: about 17 RUs
    Example document:
    {"id":"5ac00fa102634297ac7ae897207980ce","Type":0,"h":"13F40E809EF7E64A8B7A164E67657C1940464723","aid":4655,"pid":203506,"sf":202641580,"sfx":5662192,"t":"2014-10-22T02:10:34+02:00","qg":3}
    The consistency level is set to “Session”.
    I am using the SP from the example C# project for batch inserts and the following code snippet for single inserts:
    await client.CreateDocumentAsync(documentCollection.DocumentsLink, record);
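    For reference, the batch path is invoked roughly as sketched below. This is an illustrative helper, not the exact code from the sample project, and it assumes the bulk-insert stored procedure responds with the number of documents it created:
    using System.Collections.Generic;
    using System.Threading.Tasks;
    using Microsoft.Azure.Documents.Client;
    static class BulkInsertSketch
    {
        // Sends one batch of documents to a server-side bulk-insert stored procedure.
        public static async Task<int> InsertBatchAsync(
            DocumentClient client, string storedProcedureLink, IList<object> batch)
        {
            StoredProcedureResponse<int> response =
                await client.ExecuteStoredProcedureAsync<int>(storedProcedureLink, batch);
            return response.Response; // the created-document count reported by the script (assumption)
        }
    }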
    Is there any flaw in my assumption (ok… obviously) regarding the throughput calculation, or could you give me some advice on how to achieve the throughput stated in the documentation?
    With the current performance I would need to buy at least 40 CUs, which wouldn’t be an option at all.
    I have another question regarding document retention:
    Since I would need to store a lot of data per day, I also would need to delete as much data per day as I insert:
    The data is valid for at least 7 days (it actually should be 30 days, depending on my options with DocumentDB).
    I guess there is nothing like a retention policy for documents (this document is valid for X days and will automatically be deleted after that period)?
    Since I guess deleting data on a single-document basis is no option at all, I would like to create a document collection per day and delete the collection after a specified retention period.
    Those historic collections would never change but would only receive queries. The only problem I see with creating collections per day is the missing throughput:
    As I understand it, the throughput is split equally according to the number of available collections, which would result in “missing” throughput on the actual hot collection (hot meaning the only collection I would actually insert documents into).
    Is there any (better) way to handle this use case than to buy enough CUs so that the actual hot collection would get the needed throughput?
    Example: 
    1 CU -> 2,000 RUs
    7 collections -> 2,000 / 7 = 286 RUs per collection (per CU)
    Needed throughput for hot collection (values from documentation): 20,000 RUs
    => 70 CUs (20,000 / 286)
    vs. 10 CUs when using one collection and batch inserts, or 20 CUs when using one collection and single inserts.
    I know that DocumentDB is currently in preview and that it is not possible to handle this use case as-is because of the limit of 10 GB per collection at the moment. I am just trying to do a POC so that I can switch to DocumentDB when it is publicly available.
    Could you give me any advice on whether this kind of use case can or should be handled with DocumentDB? I currently use Table Storage for this case (currently with a maximum of about 2,500 inserts per second) but would like to switch to DocumentDB, since I had to optimize for writes per second with Table Storage and have horrible query execution times with Table Storage because of full table scans.
    Once again my desired setup:
    200,000,000 inserts per day / maximum of 5,000 writes per second
    Collection 1.2 -> Hot Collection: All writes (max 5000 p/s) will go to this collection. Will also be queried.
    Collection 2.2 -> Historic data, will only be queried; no inserts
    Collection 3.2 -> Historic data, will only be queried; no inserts
    Collection 4.2 -> Historic data, will only be queried; no inserts
    Collection 5.2 -> Historic data, will only be queried; no inserts
    Collection 6.2 -> Historic data, will only be queried; no inserts
    Collection 7.2 -> Historic data, will only be queried; no inserts
    Collection 1.1 -> Old, so delete whole collection
    As a matter of fact, the perfect setup would be to have only one (huge) collection with automatic document retention… but I guess this won’t be an option at all?
    I hope you understand my problem; please give me some advice on whether this is at all possible, or will be possible in the future, with DocumentDB.
    Best regards and thanks for your help

    Hi Aravind,
    first of all, thanks for your reply regarding my questions.
    I sent you a mail a few days ago, but since I did not receive a response I am not sure it got through.
    My main question regarding the actual usage of RUs when inserting documents is still my main concern, since I cannot insert nearly as many documents per second and CU as expected.
    According to the documentation (http://azure.microsoft.com/en-us/documentation/articles/documentdb-manage/) I understand that I should be able to store about 500 documents per second with single inserts and about 1,000 per second with a batch insert using a stored procedure (20 batches per second containing 50 documents each).
    As described in my post, the actual usage is multiple (actually 6-7) times higher than expected… even when running the C# examples provided at:
    https://code.msdn.microsoft.com/windowsazure/Azure-DocumentDB-NET-Code-6b3da8af/view/SourceCode
    I tried all the ideas Steve posted (manual indexing & lazy indexing mode) but was not able to bring RU consumption down to a point where 500 inserts per second were nearly possible.
    Here again my findings regarding RU consumption for batch inserts:
    Automatic indexing on: 777 RUs for 50 documents
    Automatic indexing off & mandatory path only: 655 RUs for 50 documents
    Automatic indexing off & IndexingMode Lazy & mandatory path only: 645 RUs for 50 documents
    Expected result: approximately 100 RUs (2,000 RUs => 20x batch insert of 50 => 100 RUs per batch)
    Since DocumentDB is still in preview, I understand that it is not yet capable of handling my use case regarding throughput, collection size, number of collections and possible CUs, and I am fine with that.
    If I am able to (at least nearly) reach the stated performance of 500 inserts per second per CU, I am totally fine for now. If not, I have to move on and look for other options… which would also be “fine”. ;-)
    Is there actually any working example code that manages to do 500 single inserts per second with one CU's 2,000 RUs, or is this a totally theoretical value? Or is it just because of being in preview, and the stated values are planned to work?
    Regarding your feedback:
    ...another thing to consider is if you can amortize the request rate over the average of 200 M requests/day = 2,000 requests/second, then you'll need to provision 16 capacity units instead of 40 capacity units. You can do this by catching "RequestRateTooLargeExceptions" and retrying after the server-specified retry interval…
    Sadly this is not possible for me, because I have to query the data in near real time for my use case… so queuing is not an option.
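    (For completeness, the retry approach Aravind describes would look roughly like the sketch below. This is an illustration only, assuming the same DocumentClient API as the snippet earlier in this thread; the helper name is made up.)
    using System.Threading.Tasks;
    using Microsoft.Azure.Documents;
    using Microsoft.Azure.Documents.Client;
    static class RequestRateRetrySketch
    {
        // Retries a single insert whenever the service answers 429 ("RequestRateTooLarge"),
        // waiting for the server-specified RetryAfter interval before trying again.
        public static async Task<ResourceResponse<Document>> InsertWithRetryAsync(
            DocumentClient client, string collectionLink, object record)
        {
            while (true)
            {
                try
                {
                    return await client.CreateDocumentAsync(collectionLink, record);
                }
                catch (DocumentClientException e) when ((int?)e.StatusCode == 429)
                {
                    await Task.Delay(e.RetryAfter); // wait for the interval suggested by the service
                }
            }
        }
    }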
    We don't support a way to distribute throughput differently across hot and cold collections. We are evaluating a few solutions to enable this scenario, so please do propose it as a feature at http://feedback.azure.com/forums/263030-documentdb as this helps us prioritize feature work. Currently, the best way to achieve this is to create multiple collections for hot data, and shard across them, so that you get more proportionate throughput allocated to it.
    I guess I could circumvent this by not clustering into “hot” and “cold” collections but “hot” and “cold” databases with one or multiple collections each (if 10 GB remains the limit per collection), if there were a way to (automatically?) scale the CUs via an API. Otherwise I would have to manually scale down the DBs holding historic data. I also added a feature request as proposed by you.
    Sorry for the long post, but I am planning the future architecture for one of our core systems and want to be sure I am on the right track.
    So if you would be able to answer just one question, it would be:
    How do I achieve the stated throughput of 500 single inserts per second with one CU's 2,000 RUs in reality? ;-)
    Best regards and thanks again

  • Question regarding the Transformation activity in BPEL

    Hi,
    I have a question regarding the Transformation activity in BPEL.
    I have a database adapter that uses a PL/SQL API. The API returns two tables of nested objects, and it automatically generates a schema for me based on the output structure. I need to use the Transformation activity to map both of these out parameters to the root element in the destination.
    The element in the destination is "unbounded". Please take a look at the following screen shot -
    http://www-apps.us.oracle.com:1100/~rvishnuv/transform1.gif
    From the screenshot, you can see that I have two XML segments, X_TEST_TBL and X_TEST_TBL1, in the source and one GLOGXMLElement ("unbounded") in the destination. I need to map each of the source elements to the same destination. Then internally, I will be mapping the elements of X_TEST_TBL to the "Location" segment/element and the elements of X_TEST_TBL1 to the "ItemMaster" segment/element. If I try to use "for-each" and also map the second element X_TEST_TBL1 to GLOGXMLElement using the for-each, I get the following error -
    http://www-apps.us.oracle.com:1100/~rvishnuv/transform1_error.gif
    Let us say my X_TEST_TBL contains two records and X_TEST_TBL1 contains one record. Is there any way that I can use these two out parameters to map to the same destination (GLOGXMLElement) so that my final document contains multiple "GLOGXMLElement"s representing data from both output parameters? I.e. there should be a total of three occurrences of GLOGXMLElement in the output document, such that two occurrences correspond to data obtained from X_TEST_TBL and one corresponds to data from X_TEST_TBL1.
    Please let me know if there is any way to achieve this.
    Thanks
    Ravi

    Is the listed $200 credit towards a trade-in on top of the original trade in value of a phone, or the maximum amount?

  • How can I use GarageBand on my iPad 2 to record vocal tracks?

    Sorry for the newbie question.
    I have a mic setup that runs into GarageBand on my iMac, but I would like to use my iPad to record vocal tracks as well.
    Looking at the GarageBand version for the iPad in the App Store, it does not appear to have any sort of recording capability.
    Currently I run my audio signal into my Mac via a preamp, connected via a USB cable.
    I would like any sort of help in being able to use GarageBand, or any other program, to record vocals on my iPad.

    If you check the iTunes App Store, I seem to recall that there is a program that will allow you to transfer photos between iPhones using Bluetooth. I stand to be corrected, but is this not a feature of the new iPhone 3GS? Ant..

Maybe you are looking for

  • Bug Report: #SQLERRM_TEXT# in Process Error Message

    According to the Online Help for "Process Error Message" #SQLERRM_TEXT# should be replaced by "Text of error message without the error number". But it isn't. Patrick My APEX Blog: http://inside-apex.blogspot.com The ApexLib Framework: http://apexlib.

  • Help with scroll function in Lightroom

    I have two laptops on which I installed Lightroom this afternoon: Thinkpad T420, Windows 7 Pro 64-bit, all drivers up to date; Thinkpad X201, Windows 7 Pro 64-bit, all drivers up to date. The mouse scroll function is not working in Lightroom.   It is func

  • Not able to create Invoice against sales order

    Hello. I have a sales order with one main item and 4 sub-items under the main item. For all 4 sub-items the delivery and goods issue process is done. For one of the sub-items, the delivery document is archived in the system. Now the user is trying to

  • Forgot ipod touch 3g passcode

    I haven't used my iPod touch in a year and I forgot what I had set the passcode to. I now need it and need a way to recover it, or to know if restoring is the only possibility.

  • Re: oracle 11.2.0.4 backup issues

    Hi Mike, I need to upgrade 11.2.0.2 to 11.2.0.4. Can you please mention what steps you have followed? Thanks