5 questions regarding code in DW

hi
1 - can I make PHP code completion appear automatically as I write, instead of using the annoying Ctrl+Space? Just like it does for HTML and CSS?
2 - where can I change the font size of the code I write?
3 - where can I change the code formatting for PHP? For example, if I prefer the curly braces to appear on their own lines?
5 - regarding code completion: is there a way that when I start a PHP string it will complete the closing '? So when I type echo ' , it will put in the ending '?
4 - is there a way to create documentation for PHP, like there is in other IDEs?
best regards

derrida wrote:
 1 - can I make PHP code completion appear automatically as I write, instead of using the annoying Ctrl+Space? Just like it does for HTML and CSS?
Variables, classes, and objects appear automatically as soon as you start typing. For functions, you need to press Ctrl+Space.
3 - where can I change the code formatting for PHP? For example, if I prefer the curly braces to appear on their own lines?
Dreamweaver does not support source formatting in server-side languages or JavaScript. If you want the curly braces on separate lines, format the code manually as you type it.
5 - regarding code completion: is there a way that when I start a PHP string it will complete the closing '? So when I type echo ' , it will put in the ending '?
Use the echo button in the PHP category/tab of the Insert panel/bar.
4 - is there a way to create documentation for PHP, like there is in other IDEs?
No, Dreamweaver does not support PHPDoc.
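For what it's worth, a PHPDoc block is just a structured comment that tools such as phpDocumentor (and the code hinting in other IDEs) can parse, so you can still write one by hand in Dreamweaver's Code view. A minimal, purely illustrative example (the function and its parameters are made up):
<?php
/**
 * Calculate the gross price including VAT.
 *
 * @param  float $net  Net price (hypothetical example parameter)
 * @param  float $rate VAT rate as a fraction, e.g. 0.2 for 20%
 * @return float       Gross price
 */
function grossPrice($net, $rate = 0.2)
{
    return $net * (1 + $rate);
}
?>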
If you would like any of these features added to Dreamweaver, submit a feature request through the form at https://www.adobe.com/cfusion/mmform/index.cfm?name=wishform.

Similar Messages

  • Question regarding IWDTree and context Value Node naming

    Hi,
    I have a question regarding the IWDTree / IWDTreeNodeType components.
    I have a context looking like this:
    Context
      + ResponseNode
        + PersonNode (1..1)
          + PersonAddressNode                    (empty node, placeholder)
          | + AdresNode (0..n)
          + PersonChildNode                      (empty node, placeholder)
          | + PersonNode (0..n)
          |   + PersonAddressNode                (empty node, placeholder)
          |     + AddressNode (0..n)
          + PersonParentsNode                    (empty node, placeholder)
            + PersonNode (0..n)
              + PersonAddressNode                (empty node, placeholder)
                + AddressNode (0..n)
    The context represents a person, a person's address, and a person's children and parents with their respective addresses.
    As a result, on different branches, a PersonNode and AddressNode can appear.
    And for some strange reason, all PersonNodes and AddressNodes link to the same ResponseNode.PersonNode.PersonParentsNode.PersonNode and ResponseNode.PersonNode.PersonParentsNode.PersonNode.PersonAddressNode.AddressNode respectively, regardless of their branch...
    Is it illegal to have multiple PersonNode and AddressNode node names, and should they be named uniquely?

    Generally, node names need to be unique inside the context; attributes in different nodes can have the same names. I wonder whether the context structure you described will even result in code without compile errors.
    The WD Tree can only be used with recursive context nodes or with a hierarchy of non-singleton child nodes.
    Can you give an example of how your tree should look at runtime?

  • Question regarding MM and FI integration

    Hi Experts
    I have a question regarding MM and FI integration.
    Is the transaction key in OMJJ the same as the OBYC transaction key?
    If yes, then why can't I see transaction key BSX in movement type 101?
    Thanks

    No, they are not the same. The movement type transaction (OMJJ) links the account key and account modifier to specific movement types. Transaction code (OBYC) contains the account assignments for all material document postings, whether they are movement type dependent or not. Account key BSX is not movement type dependent. Instead, BSX depends on the valuation class of the material, so it won't show in OMJJ.
    thanks,

  • Question regarding Dashboard and column prompt

    My question regarding Dashboard and column prompt:
    1) Dashboard prompts usually work only with columns that are in the subject area. In my report I've created some columns that are based on other columns. For example, I have a daysNumber column that is based on two other columns, as it calculates the difference between two dates. When I create a dashboard prompt I can't find this column there. I need to make a prompt on this column.
    2) One of the columns has only two values, 1 and 0. When I create a prompt for this column, is it possible that the drop-down list shows 'Yes' for 1 and 'No' for 0 and still filters the request?

    Hi Toony,
    I think there is another way of doing this...
    In the dashboard prompt go to the Show option > select SQL Results from the dropdown.
    There you need to write your logical SQL, like:
    SELECT CASE WHEN 1=0 THEN PERIODS.YEAR ELSE <difference-of-dates expression> END FROM SubjectAreaName
    Here, PERIODS.YEAR is a column that already exists in the repository's presentation layer,
    and the <difference-of-dates expression> is the code or formula of the column you want to show in the drop-down.
    Also write the same CASE WHEN 1=0 THEN PERIODS.YEAR ELSE ... END code in the fx of that prompt.
    I think this helps you in doing this.
    Just check and let me know if it works...
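    To make that concrete, here is a rough, hypothetical example of such a prompt SQL, assuming two presentation-layer date columns "Orders"."Order Date" and "Orders"."Ship Date" (the column and subject area names are made up; PERIODS.YEAR stands in for any column that already exists in the presentation layer, as described above):
    SELECT CASE WHEN 1=0 THEN PERIODS.YEAR
                ELSE TIMESTAMPDIFF(SQL_TSI_DAY, "Orders"."Order Date", "Orders"."Ship Date")
           END
    FROM SubjectAreaName
    The same CASE expression then goes into the fx (column formula) of the prompted column so the selected value actually filters the request.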
    Thanks & Regards
    Kishore Guggilla
    Edited by: Kishore Guggilla on Oct 31, 2008 9:35 AM

  • Questions regarding how I pay my Line Rental Bill ...

    Hi BT, I'm currently a BE Broadband customer. I was a BT Broadband customer back in the days when you had the BT Home Hub 1.0 and 8mb packages, on which I was only getting around 5-6mb, 0.3mb upload and constant speed caps on my connection to 1mb. You guys didn't have true unlimited usage back then, so being a gamer and someone that frequently downloads I changed to BE Broadband. They gave me a steady 11mb connection with 1.3mb upload speed, which I've been satisfied with. But now BE Broadband has been sold off to Sky, which I don't think I want to be a part of, as I've heard a lot of bad feedback and I really do not want to be forced into paying their line rental if I were to switch to Sky Unlimited.
    So now I'm considering coming back to BT Broadband, knowing that you've probably improved over the years. I hope so. I know you have this whole BT Infinity thing going now, but my area doesn't support fibre optic broadband and probably never will, because I've looked on the BT Openreach website and it says there are no plans for my area to be installed with fibre optics, which I would have wanted, but oh well. What are the chances of my internet speed being the same as it is now with BE Broadband? I don't see why it would go any slower. Are your upload speeds around the same?
    Also I have a question regarding the payment methods for your BT line rental. I'm currently already on BT line rental on the Anytime calling plan. I pay the bill quarterly every 3 months, which I'm comfortable with. What confuses me is that when I select the package to order BT Unlimited + Calls I only have the option to select Monthly Line Rental or Line Rental Saver. Why am I able to pay my phone bill every 3 months now if those are the only two options? I'd preferably want to stick to how I pay my phone bill. Furthermore, how should I go about ordering BT Unlimited broadband without it interfering with my current BT line rental? I just want BT Broadband Unlimited alongside my current BT line rental method of payment. Would I still be eligible for the 6 months free and the gift card? Is the BT Home Hub 4 that comes with it free also? Also one more question: I'd need my MAC code from BE Broadband, right?

    This is a community forum.  The people on here are other BT customers.
    The speeds have certainly improved but only if BT's equipment at your exchange supports ADSL2/2+.  If not, you'll get what you had before.  While the speeds have improved, the same can't be said for the customer service.  Upload on ADSL2+ can be up to about 1.2M.  On ADSLMax, it's still capped at 448K.
    BT like to combine broadband and phone on one bill.  So if you get line rental and broadband, they will send you one monthly bill for both.  You might be able to get them separately, but not through the web site.  Try sales on 0800 800 150.

  • Questions regarding Outlook Web App, Remote Desktop, Remote Web Access and VPN Access

    Hi there,
    I want to ask a series of questions regarding Outlook Web App, Remote Desktop, Remote Web Access and VPN access, and was hoping you could help me. Below are my questions.
    Outlook Web App - What do I need to configure in order to get my Exchange account to work with the OWA app on my iPhone? Is Office 365 required on the server that hosts Outlook Web App in our organisation? When I configure the settings and connect I get the following message: "couldn't connect - We couldn't connect to the server. Check your information and make sure it's correct." I can connect with other devices using Outlook Web App.
    Remote Desktop - What do I need to configure in order to connect to my computer at work using Remote Desktop on my Windows Phone? When I configure the settings and connect I get the following message: "Connection error - We couldn't connect to the remote PC. Make sure the PC is turned on and connected to the network, and that remote access is enabled. Inquiring minds may find this error code helpful: 0x204" I can connect with other devices using Remote Desktop. There are currently no RD Server settings in the Remote Desktop app on the Windows Phone, and the only way I'm able to connect to my PC at work is via a Remote Desktop app (not to be confused with the one by Microsoft); however, that app is on a trial basis, times out every 5 minutes and can only be used once every hour unless I purchase it for £2.99 from the App Store. I would ideally like to use the Microsoft Remote Desktop app, though.
    Remote Web Access - What do I need to configure in order to get Remote Web Access on my Windows Phone using a URL? When I log in using a URL I get the following message: "There is a problem with this Web page. Please contact the person who manages the server." I can connect with other devices using Remote Web Access. Also, how do you enable the background option for Remote Web Access? I know how to do this in Remote Desktop but not in Remote Web Access. Remote Web Access works on PCs, whether onsite or offsite, and on my iPhone; the same issue also occurs with my Nokia 5230s regardless of whether I'm using Opera Mobile or Mini or the latest Nokia Browser.
    VPN access - How do you configure VPN access on a Windows Phone? I cannot find the protocols PPTP, L2TP, SSTP and IPsec in order to configure VPN access on the Windows Phone, apart from IKEv2.
    Many thanks,
    RocknRollTim

    Any help would be much appreciated.
    Kind regards,
    RocknRollTim

  • Questions regarding creation of vendors in different purchasing organisations

    Hi ABAP gurus,
    I have a few questions regarding data transfers.
    1) While creating a vendor, the vendor is specific to a company code, and the vendor can be present in different purchasing organisations within the same company code if the purchasing organisation is at plant level. My client has vendors in different purchasing organisations. How do I handle the above situation?
    2) I had a few error records while uploading MM01. How do I download the error records? I was using LSMW with predefined programs.
    3) For a few applications there are no predefined programs, so I will have to choose either a predefined BAPI or IDocs. Which is better to go with? I found that BAPIs and IDocs have the same predefined structures, so what is the difference between them?

    Hi,
    1. Create a BDC program with purchasing organisation as a parameter on the selection screen, then run the same BDC program for the different purchasing organisations so that the vendors are created in each of them.
    2. Check the Action Log in LSMW.
    3. See the documentation below:
    BAPI - BAPIs (Business Application Programming Interfaces) are the standard SAP interfaces. They play an important role in the technical integration and in the exchange of business data between SAP components, and between SAP and non-SAP components. BAPIs enable you to integrate these components and are therefore an important part of developing integration scenarios where multiple components are connected to each other, either on a local network or on the Internet.
    BAPIs allow integration at the business level, not the technical level. This provides for greater stability of the linkage and independence from the underlying communication technology.
    LSMW - No ABAP effort is required for the SAP data migration. However, effort is required to map the data into the structure according to the pre-determined format as specified by the pre-written ABAP upload program of the LSMW.
    The Legacy System Migration Workbench (LSMW) is a tool recommended by SAP that you can use to transfer data once only or periodically from legacy systems into an R/3 System.
    More and more medium-sized firms are implementing SAP solutions, and many of them have their legacy data in desktop programs. In this case, the data is exported in a format that can be read by PC spreadsheet systems. As a result, the data transfer is mere child's play: simply enter the field names in the first line of the table, and the LSM Workbench's import routine automatically generates the input file for your conversion program.
    The LSM Workbench lets you check the data for migration against the current settings of your customizing. The check is performed after the data migration, but before the update in your database.
    So although it was designed for uploading legacy data, it is not restricted to this use.
    We use it for mass changes, i.e. uploading new/replacement data, and it is great, but there are limits on its functionality, depending on the complexity of the transaction you are trying to replicate.
    The SAP transaction code is 'LSMW' for SAP version 4.6x.
    Check your procedure using these links.
    BAPI with LSMW
    http://esnips.com/doc/ef04c89f-f3a2-473c-beee-6db5bb3dbb0e/LSMW-with-BAPI
    For a document on using BAPI with LSMW, I suggest you visit:
    http://www.****************/Tutorials/LSMW/BAPIinLSMW/BL1.htm
    http://esnips.com/doc/1cd73c19-4263-42a4-9d6f-ac5487b0ebcb/LSMW-with-Idocs.ppt
    http://esnips.com/doc/ef04c89f-f3a2-473c-beee-6db5bb3dbb0e/LSMW-with-BAPI.ppt
    Regards
    Anji

  • Hi, I want to downgrade from OS X Leopard to OS X Tiger but I have a few questions regarding this. My iMac is originally from 2007 and came preloaded with Tiger. I have the original Tiger install discs, version 10.4.10. Is it safe to downgrade or not? Please help

    Hi, I want to downgrade from OS X Leopard to OS X Tiger but I have a few questions regarding this. My iMac is originally from Sep 2007 and came preloaded with Tiger. I have the original (2) Tiger install discs, version 10.4.10. I want to know if it is safe and what the necessary steps are to do so. Also, by downgrading I'm wondering if a lot of apps nowadays still support Tiger; for example I have Photoshop versions 5 and 4, and these are very important to me. One last question: does anyone know of any reliable virus protection for Mac that doesn't slow down your computer? Because I have read that a lot of them do. If anyone can help me I would greatly appreciate it! Here are the specs for my iMac:
    Model Name: iMac
    Model Identifier: iMac7,1
    Processor Name: Intel Core 2 Duo
    Processor Speed: 2 GHz
    Number of Processors: 1
    Total Number of Cores: 2
    L2 Cache: 4 MB
    Memory: 2 GB
    Bus Speed: 800 MHz

    Most of the time a perception of general slow performance is the result of installing third party junk alleged to speed up, "clean" or "optimize" your Mac, or to look for viruses that don't exist. Ideally you would know what you installed so you can uninstall it, but if you don't know or aren't sure there are techniques such as Safe Mode and creating a temporary user account to confirm that suspicion.
    If you open Activity Monitor it may show a process, or processes, that occupy a lot of your system's time.
    Slowness confined solely to web browser activity is often the result of an inexorable progress toward websites that demand ever more processor-intensive tasks. If your slow performance is strictly limited to web browsing, you might try disabling Flash by either uninstalling it, or use utilities such as ClickToFlash that allow you to control what Flash content gets loaded. Flash in itself is not inherently evil, but there is nothing to stop websites or the advertisers who pay for them from writing horrible Flash code that can do everything from hogging 100% of your CPU's time to causing random crashes. You can watch Activity Monitor as in the above to correlate these troublesome web pages with performance degradation.
    You are correct; if your computer shipped with Tiger you may certainly revert to it. I forgot that Tiger was shipping on new Macs as recently as five years ago. To downgrade it would be necessary to completely erase your hard disk and boot with the Tiger installation DVD, followed by installing it anew. Such drastic measures are not necessary and you are unlikely to be satisfied with the results anyway.
    Assuming your system is free of third party parasitic junk attached to OS X in an ill-conceived attempt to improve upon it, that your hard disk drive is sound and the boot volume has enough free space to work with, by far the best performance-enhancing improvement would be to add more memory. Buy as much as your computer can use and that you can afford. 2 GB is not that much any more.
    Read the following for some recommended troubleshooting techniques from Apple:
    General purpose Mac troubleshooting guide: Isolating issues in Mac OS X
    Creating a temporary user to isolate user-specific problems: Isolating an issue by using another user account
    Memory limitations: Using Activity Monitor to read System Memory and determine how much RAM is being used
    Identifying resource hogs and other tips: Runaway applications can shorten battery runtime
    Starting the computer in "safe mode": Mac OS X: What is Safe Boot, Safe Mode?

  • Question regarding decode function.

    Hi friends,
    I have a question regarding using decode.
    I'm trying to explain my problem using the emp table.
    Can you guys please help me out?
    For example, consider the emp table; now I want to get all manager IDs concatenated for 2 employees.
    I tried using the following code:
    declare
    v_mgr_code  varchar2(20);
    v_mgr1      number(4);
    v_mgr2      number(4);
    begin
    select  mgr into    v_mgr1
    from    scott.emp
    where   empno = 7369;
    select  mgr into    v_mgr2
    from    scott.emp
    where   empno = 7499;
    select v_mgr1||'-'||v_mgr2 into v_mgr_code from dual;
    end;
    Now, instead of writing 2 select statements, can I write one select statement using the decode function?
    Edited by: user642856 on Mar 8, 2009 11:18 PM

    I don't know whether you're looking for this or not; if I am wrong, correct me.
    SELECT Ename||' '||initcap('manager is ')||
    DECODE(MGR,
            7566, (SELECT Ename
                    FROM Emp
                    WHERE Empno = 7566),
            7698, (SELECT Ename
                    FROM Emp
                    WHERE Empno = 7698),
            7782, (SELECT Ename
                    FROM Emp
                    WHERE Empno = 7782),
            7788, (SELECT Ename
                    FROM Emp
                    WHERE Empno = 7788),
            7839, (SELECT Ename
                    FROM Emp
                    WHERE Empno = 7839),
            7902, (SELECT Ename
                    FROM Emp
                    WHERE Empno = 7902),
            'Do Not Know')  Manager
    from emp
    or
    SELECT Ename||' '||initcap('manager is ')||
    DECODE(MGR,
            7566, (SELECT empno
                    FROM Emp
                    WHERE Empno = 7566),
            7698, (SELECT empno
                    FROM Emp
                    WHERE Empno = 7698),
            7782, (SELECT empno
                    FROM Emp
                    WHERE Empno = 7782),
            7788, (SELECT empno
                    FROM Emp
                    WHERE Empno = 7788),
            7839, (SELECT empno
                    FROM Emp
                    WHERE Empno = 7839),
            7902, (SELECT empno
                    FROM Emp
                    WHERE Empno = 7902)) manager
    from emp
    Edited by: user4587979 on Mar 8, 2009 9:52 PM
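    For the original question (getting both managers in one statement instead of two SELECT ... INTOs), another common approach is conditional aggregation; a minimal sketch, assuming the standard SCOTT.EMP table from the post:
    -- Pick each employee's mgr with DECODE and collapse the two rows with MAX,
    -- then concatenate the results into one string.
    SELECT MAX(DECODE(empno, 7369, mgr)) || '-' ||
           MAX(DECODE(empno, 7499, mgr)) AS mgr_code
    FROM   scott.emp
    WHERE  empno IN (7369, 7499);
    In PL/SQL the same query can feed a single SELECT ... INTO v_mgr_code.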

  • Question regarding placing cache related classes into a package

    Hi all,
    I have a question regarding placing classes into packages. I am writing a cache feature which caches results that were evaluated previously. Since it is a cache, I don't want to expose it outside, because it is only for internal use. I have 10 classes related to this cache feature. All of them are used only by the cache manager (the class which manages the cache). So I thought it would make sense to keep all the classes in a separate package.
    But the problem I have is: since the cache related classes are not exposed outside, I can't make them public. If they are not public, I can't access them in the other packages of my code. I can't make them either public or private. Can someone suggest a solution for my problem?

    haki2 wrote:
    But the problem I have is, since the cache related classes are not exposed outside, I can't make them public. If they are not public, I can't access them in the other packages of my code.
    Well, you shouldn't access them in your non-cache code.
    As far as I understand, the only class that other code needs to access is the cache manager. That one must be public. All other classes can be package-private (a.k.a default access). This way they can access each other and the cache manager can access them, but other code can't.
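    A minimal sketch of that layout, with hypothetical class names (CacheManager and CacheEntry stand in for the poster's ten cache classes):
    // file com/example/cache/CacheEntry.java
    package com.example.cache;

    // Package-private (no access modifier): visible only within com.example.cache.
    class CacheEntry {
        final String key;
        final Object value;

        CacheEntry(String key, Object value) {
            this.key = key;
            this.value = value;
        }
    }

    // file com/example/cache/CacheManager.java
    package com.example.cache;

    import java.util.HashMap;
    import java.util.Map;

    // The only public type: the rest of the codebase talks to the cache through it.
    public class CacheManager {
        private final Map<String, CacheEntry> entries = new HashMap<String, CacheEntry>();

        public void put(String key, Object value) {
            entries.put(key, new CacheEntry(key, value));
        }

        public Object get(String key) {
            CacheEntry e = entries.get(key);
            return e == null ? null : e.value;
        }
    }
    Code in other packages can call CacheManager's put() and get(), but cannot reference CacheEntry at all, which is exactly the encapsulation described above.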

  • Some simple questions regarding Xcode.

    I have some preliminary questions regarding Xcode (Objective-C):
    What is the difference between the program files ending in .h and .m?
    What does an asterisk * before a name mean (e.g. *window;)?
    Is there any source that briefly explains the most common classes and keywords with examples (NSString, NSArray, IBOutlet, IBAction, etc.)?
    Any other tips will be highly appreciated.
    Many thanks.

    Xcode is the IDE.
    Objective-C is the most common language used within Xcode.
    UIWindow *window .....  the * means that 'window' holds the address of (i.e. points to) a UIWindow instance.
    The * marks 'window' as a pointer variable.
    Everything you need is explained here:  https://developer.apple.com

  • Question regarding Inheritance. Please HELP

    A question regarding Inheritance
    Look at the following code:
    class Tree{}
    class Pine extends Tree{}
    class Oak extends Tree{}
    public class Forest{
      public static void main(String args[]){
        Tree tree = new Pine();
        if( tree instanceof Pine )
          System.out.println( "Pine" );
        if( tree instanceof Tree )
          System.out.println( "Tree" );
        if( tree instanceof Oak )
          System.out.println( "Oak" );
        else
          System.out.println( "Oops" );
      }
    }
    If I run this, I get the output of
    Pine
    Tree
    Oops
    My question is:
    How can Tree be an instance of Pine? Instead, Pine is an instance of Tree, isn't it?

    The "instanceof" operator checks whether an object is an instance of a class. The object you have is an instance of the class Pine because you created it with "new Pine()," and "instanceof" only confirms this fact: "yes, it's a pine."
    If you changed "new Pine()" to "new Tree()" or "new Oak()" you would get different output because then the object you create is not an instance of Pine anymore.
    If you wonder about the variable type, it doesn't matter, you could have written "Object tree = new Pine()" and get the same result.
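    A small sketch of that point, reusing the classes from the question (the expected output in the comments is my own annotation):
    class Tree {}
    class Pine extends Tree {}
    class Oak extends Tree {}

    public class InstanceOfDemo {
        // instanceof looks at the runtime type of the object,
        // not at the declared type of the variable or parameter.
        static void describe(Tree tree) {
            System.out.println(tree instanceof Pine ? "Pine" : "not a Pine");
            System.out.println(tree instanceof Tree ? "Tree" : "not a Tree");
            System.out.println(tree instanceof Oak  ? "Oak"  : "not an Oak");
        }

        public static void main(String[] args) {
            describe(new Pine()); // Pine,       Tree, not an Oak
            describe(new Oak());  // not a Pine, Tree, Oak
            describe(new Tree()); // not a Pine, Tree, not an Oak
        }
    }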

  • Question regarding the "mcxquery" and "dscl -mcxread" commands:

    Question regarding the mcxquery and dscl -mcxread commands:
    Does anyone know why the mcxquery and dscl . -mcxread commands don't show any info about MCX-managed login items & printers? The System Profiler's "Managed Client" section does. I'd like to see info regarding managed printers and managed login items using the MCX tools. I have Mac users running 10.5.2 with both login items and printers that are pushed out to them via MCX. The System Profiler app shows all of their policies, but the dscl . -mcxread and mcxquery tools don't. Why not?
    -D
    Message was edited by: Daniel Stranathan

    How do you "call procedures/functions" without sql code? You need at least the call statement like
    {call myProc(?,?,?)}that you wrap into a CallableStatement.
    Other than that: when you switch off autocommit, you need to call commit/rollback at the end. Usually, if you don't commit/rollback a non-autocommitted connection, the transaction get's committed/rollbacked when you close the connection - that depends on the JDBC driver. But it's never a good idea to ommit the commit/rollback calls on a non-autocommit connection. Usually you enclose your code in a try/catch block like this:
    con.setAutoCommit(false);
    try {
       // ... execute your statements here ...
       con.commit();
    } catch (Exception e) {
       con.rollback();
    } finally {
        con.setAutoCommit(true); // or:
        con.close();
    }
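    For completeness, a small sketch of the CallableStatement part mentioned above; the procedure name and parameters (myProc with two IN parameters and one OUT parameter) are hypothetical:
    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.SQLException;
    import java.sql.Types;

    public class CallDemo {
        // Calls a hypothetical stored procedure myProc(IN a, IN b, OUT result).
        static String callMyProc(Connection con, int a, String b) throws SQLException {
            CallableStatement cs = con.prepareCall("{call myProc(?, ?, ?)}");
            try {
                cs.setInt(1, a);
                cs.setString(2, b);
                cs.registerOutParameter(3, Types.VARCHAR);
                cs.execute();
                return cs.getString(3);
            } finally {
                cs.close();
            }
        }
    }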

  • Questions regarding Disk I/O

    Hey there, I have some questions regarding disk I/O, and I'm fairly new to Java.
    I've got an organized 500MB file and a table-like structure (represented by an array) that tells me the sections (byte offsets) within the file. With this I'm currently retrieving blocks of data using the following approach:
    // Assume id is just some arbitrary int that represents an identifier.
    String f = "/scratch/torum/collection.jdx";
    int startByte = bytemap[id-1];
    int endByte = bytemap[id];
    try {
        FileInputStream stream = new FileInputStream(f);
        DataInputStream in = new DataInputStream(stream);
        in.skipBytes(startByte);
        int position = collectionSize - in.available();
        // Keep looping until the end of the block.
        while (position <= endByte) {
            String line = in.readLine();
            // some processing here
            String[] entry = line.split(" ");
            String docid = entry[1];
            int tf = Integer.parseInt(entry[2]);
            // update the current position within the file.
            position = collectionSize - in.available();
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
    This code does EXACTLY what I want it to do, but with one complication: it isn't fast enough. I see that using BufferedReader is the choice after reading:
    http://java.sun.com/developer/technicalArticles/Programming/PerfTuning/
    I would love to use this class, but BufferedReader doesn't have the method skipBytes(), which is vital to achieve what I'm trying to do. I'm also aware that I shouldn't really be using the readLine() method of the DataInputStream class.
    So could anyone suggest improvements to this code?
    Thanks

    Okay, I've got some results and it turns out DataInputStream is faster...
    EDIT: I was wrong. RandomAccessFile becomes a bit faster according to my test code when the block size to read is large.
    So I guess I could write two routines in my program: RandomAccessFile for when the block size is larger than an arbitrary value, and FileInputStream for small blocks.
    Here is the code:
    public void useRandomAccess() {
        String line = "";
        long start = 1385592, end = 1489808;
        try {
            RandomAccessFile in = new RandomAccessFile(f, "r");
            in.seek(start);
            while (start <= end) {
                line = in.readLine();
                String[] entry = line.split(" ");
                String docid = entry[1];
                int tf = Integer.parseInt(entry[2]);
                start = in.getFilePointer();
            }
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        } catch (IOException ioe) {
            ioe.printStackTrace();
        }
    }

    public void inputStream() {
        String line = "";
        int startByte = 1385592, endByte = 1489808;
        try {
            FileInputStream stream = new FileInputStream(f);
            DataInputStream in = new DataInputStream(stream);
            in.skipBytes(startByte);
            int position = collectionSize - in.available();
            while (position <= endByte) {
                line = in.readLine();
                String[] entry = line.split(" ");
                String docid = entry[1];
                int tf = Integer.parseInt(entry[2]);
                position = collectionSize - in.available();
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    and the main looks like this:
       public static void main(String[]args) {
         DiskTest dt = new DiskTest();
         long start = 0;
         long end = 0;
         start = System.currentTimeMillis();
         dt.useRandomAccess();
         end = System.currentTimeMillis();
         System.out.println("Random: "+(end-start)+"ms");
         start = System.currentTimeMillis();
         dt.inputStream();
         end = System.currentTimeMillis();
         System.out.println("Stream: "+(end-start)+"ms");
        }
    The result:
    Random: 345ms
    Stream: 235ms
    Hmmm not the kind of result I was hoping for... or is it something I've done wrong?
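    For reference, the BufferedReader route asked about above is still possible: skip to the block start on the raw stream before wrapping it, and track the byte position yourself, since available() no longer reflects the file once the reader buffers. A rough sketch under the assumptions that the file is single-byte encoded (e.g. ASCII) and uses \n line endings; the class name is made up:
    import java.io.BufferedReader;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStreamReader;

    public class BufferedBlockReader {
        public static void readBlock(String path, long startByte, long endByte) throws IOException {
            FileInputStream stream = new FileInputStream(path);
            try {
                // Skip to the start of the block on the raw stream,
                // then wrap it in a BufferedReader for fast line access.
                long skipped = 0;
                while (skipped < startByte) {
                    long n = stream.skip(startByte - skipped);
                    if (n <= 0) break; // end of file reached early
                    skipped += n;
                }
                BufferedReader in = new BufferedReader(new InputStreamReader(stream, "US-ASCII"));
                long position = startByte;
                String line;
                while (position <= endByte && (line = in.readLine()) != null) {
                    String[] entry = line.split(" ");
                    String docid = entry[1];
                    int tf = Integer.parseInt(entry[2]);
                    // Track bytes consumed ourselves: line length plus the '\n' terminator.
                    position += line.length() + 1;
                    // ... process docid and tf here ...
                }
            } finally {
                stream.close();
            }
        }
    }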

  • Question regarding DocumentDB RU consumption when inserting documents & write performance

    Hi guys,
    I do have some questions regarding the DocumentDB Public Preview capacity and performance quotas:
    My use case is the following:
    I need to store about 200.000.000 documents per day, with a maximum of about 5000 inserts per second. Each document has a size of about 200 bytes.
    According to the documentation (http://azure.microsoft.com/en-us/documentation/articles/documentdb-manage/) I understand that I should be able to store about 500 documents per second with single inserts, and about 1000 per second with a batch insert using a stored procedure. This would result in the need for at least 5 CUs just to handle the inserts.
    Since one CU consists of 2000 RUs, I would expect the RU usage to be about 4 RUs per single document insert, or 100 RUs for a single SP execution with 50 documents.
    When I look at the actual RU consumption I get values I don't really understand:
    Batch insert of 50 documents: about 770 RUs
    Single insert: about 17 RUs
    Example document:
    {"id":"5ac00fa102634297ac7ae897207980ce","Type":0,"h":"13F40E809EF7E64A8B7A164E67657C1940464723","aid":4655,"pid":203506,"sf":202641580,"sfx":5662192,"t":"2014-10-22T02:10:34+02:00","qg":3}
    The consistency level is set to “Session”.
    I am using the SP from the example C# project for batch inserts and the following code snippet for single inserts:
    await client.CreateDocumentAsync(documentCollection.DocumentsLink, record);
    Is there any flaw in my assumptions (ok…obviously) regarding the throughput calculation, or could you give me some advice on how to achieve the throughput stated in the documentation?
    With the current performance I would need to buy at least 40 CUs, which wouldn't be an option at all.
    I have another question regarding document retention:
    Since I would need to store a lot of data per day, I also would need to delete as much data per day as I insert:
    The data is valid for at least 7 days (it actually should be 30 days, depending on my options with DocumentDB).
    I guess there is nothing like a retention policy for documents (this document is valid for X days and will automatically be deleted after that period)?
    Since I guess deleting data on a single-document basis is no option at all, I would like to create a document collection per day and delete the collection after a specified retention period.
    Those historic collections would never change but would only receive queries. The only problem I see with creating collections per day is the missing throughput:
    As I understand it, the throughput is split equally according to the number of available collections, which would result in "missing" throughput on the actual hot collection (hot meaning the only collection I would actually insert documents into).
    Is there any (better) way to handle this use case than to buy enough CUs so that the actual hot collection would get the needed throughput?
    Example: 
    1 CU -> 2000 RUs
    7 collections -> 2000 / 7 = 286 RUs per collection (per CU)
    Needed throughput for hot collection (values from documentation): 20.000
    => 70 CUs (20.000 / 286)
    vs. 10 CUs when using one collection and batch inserts or 20 CUs when using one collection and single inserts.
    I know that DocumentDB is currently in preview and that it is not possible to handle this use case as is, because of the limit of 10 GB per collection at the moment. I am just trying to do a POC so I can switch to DocumentDB when it is publicly available.
    Could you give me any advice on whether this kind of use case can or should be handled with DocumentDB? I currently use Table Storage for this case (currently with a maximum of about 2500 inserts per second) but would like to switch to DocumentDB, since I had to optimize for writes per second with Table Storage and have horrible query execution times with Table Storage because of full table scans.
    Once again my desired setup:
    200.000.000 inserts per day / Maximum of 5000 writes per second
    Collection 1.2 -> Hot Collection: All writes (max 5000 p/s) will go to this collection. Will also be queried.
    Collection 2.2 -> Historic data, will only be queried; no inserts
    Collection 3.2 -> Historic data, will only be queried; no inserts
    Collection 4.2 -> Historic data, will only be queried; no inserts
    Collection 5.2 -> Historic data, will only be queried; no inserts
    Collection 6.2 -> Historic data, will only be queried; no inserts
    Collection 7.2 -> Historic data, will only be queried; no inserts
    Collection 1.1 -> Old, so delete whole collection
    As a matter of fact, the perfect setup would be to have only one (huge) collection with automatic document retention…but I guess this won't be an option at all?
    I hope you understand my problem and can give me some advice on whether this is at all possible, or will be possible in the future with DocumentDB.
    Best regards and thanks for your help

    Hi Aravind,
    first of all, thanks for your reply regarding my questions.
    I sent you a mail a few days ago, but since I did not receive a response I am not sure it got through.
    My main question regarding the actual usage of RUs when inserting documents is still my main concern, since I cannot insert nearly as many documents as expected per second and CU.
    According to the documentation (http://azure.microsoft.com/en-us/documentation/articles/documentdb-manage/) I understand that I should be able to store about 500 documents per second with single inserts, and about 1000 per second with a batch insert using a stored procedure (20 batches per second containing 50 documents each).
    As described in my post, the actual usage is multiple (actually 6-7) times higher than expected…even when running the C# examples provided at:
    https://code.msdn.microsoft.com/windowsazure/Azure-DocumentDB-NET-Code-6b3da8af/view/SourceCode
    I tried all the ideas Steve posted (manual indexing & lazy indexing mode) but was not able to reduce RU consumption to a point where 500 inserts per second were nearly possible.
    Here again my findings regarding RU consumption for batch inserts:
    Automatic indexing on: 777 RUs for 50 documents
    Automatic indexing off & mandatory path only: 655 RUs for 50 documents
    Automatic indexing off & IndexingMode Lazy & mandatory path only: 645 RUs for 50 documents
    Expected result: approximately 100 RUs (2000 RUs => 20x batch insert of 50 => 100 RUs per batch)
    Since DocumentDB is still in preview, I understand that it is not yet capable of handling my use case regarding throughput, collection size, number of collections and possible CUs, and I am fine with that.
    If I am able to (at least nearly) reach the stated performance of 500 inserts per second per CU, I am totally fine for now. If not, I have to move on and look for other options…which would also be "fine". ;-)
    Is there actually any working example code that manages to do 500 single inserts per second with one CU's 2000 RUs, or is this a totally theoretical value? Or is it just because of being in preview, and the stated values are planned to work?
    Regarding your feedback:
    "...another thing to consider is if you can amortize the request rate over the average of 200 M requests/day = 2000 requests/second, then you'll need to provision 16 capacity units instead of 40 capacity units. You can do this by catching "RequestRateTooLargeExceptions" and retrying after the server specified retry interval…"
    Sadly this is not possible for me, because I have to query the data in near real time for my use case…so queuing is not an option.
    "We don't support a way to distribute throughput differently across hot and cold collections. We are evaluating a few solutions to enable this scenario, so please do propose it as a feature at http://feedback.azure.com/forums/263030-documentdb as this helps us prioritize feature work. Currently, the best way to achieve this is to create multiple collections for hot data, and shard across them, so that you get more proportionate throughput allocated to it."
    I guess I could circumvent this by not clustering into "hot" and "cold" collections but "hot" and "cold" databases with one or multiple collections each (if 10 GB remains the limit per collection), provided there were a way to (automatically?) scale the CUs via an API. Otherwise I would have to manually scale down the DBs holding historic data. I also added a feature request as proposed by you.
    Sorry for the long post, but I am planning the future architecture for one of our core systems and want to be sure I am on the right track.
    So if you are able to answer just one question, it should be:
    How do I achieve the stated throughput of 500 single inserts per second with one CU's 2000 RUs in reality? ;-)
    Best regards and thanks again
