Question regarding a particular query

Hi all,
I need your help to write a query. Please consider the following scenario:
There is a table having columns (Attribute, Value) and the following data:
Attribute  Value
a1         1
a2         2
a3         3.2
a4         a1*a2+a3
a5         a3 - a1
Is it possible to write a query which will return:
a1  1
a2  2
a3  3.2
a4  5.2
a5  2.2
Meaning the same column (Value) contains the numeric values as well as the formulas. There can be any number of operands, and all kinds of possible operators.
Waiting for a response.
Omer

Given you have the following table (the package body below refers to it as FORMULAR):
Name       Type
VARIABLE   VARCHAR2
FORMEL     VARCHAR2
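A minimal setup sketch (not part of the original reply; the column lengths are assumptions, and the sample rows mirror the question plus the two extra rows a6/a7 that appear in the output further down):
CREATE TABLE formular (
  variable VARCHAR2(30),
  formel   VARCHAR2(4000)
);
INSERT INTO formular VALUES ('a1', '1');
INSERT INTO formular VALUES ('a2', '2');
INSERT INTO formular VALUES ('a3', '3.2');
INSERT INTO formular VALUES ('a4', 'a1*a2+a3');
INSERT INTO formular VALUES ('a5', 'a3 - a1');
INSERT INTO formular VALUES ('a6', '(5-3)*2');
INSERT INTO formular VALUES ('a7', '3 * 3');
COMMIT;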
and you implement the following package:
CREATE OR REPLACE PACKAGE EVALUATE_STRING AS
  FUNCTION evaluate(v_formel VARCHAR2, level NUMBER := 0) RETURN VARCHAR2;
END EVALUATE_STRING;
/
CREATE OR REPLACE PACKAGE BODY EVALUATE_STRING AS

  FUNCTION evaluate(v_formel VARCHAR2, level NUMBER := 0) RETURN VARCHAR2 IS
    CYCLIC_ERROR       EXCEPTION;
    UNDEFINED_VARIABLE EXCEPTION;
    V_AKT_VAR     VARCHAR2(4000);             -- token (variable name) currently being collected
    V_AKT_CHAR    VARCHAR2(1);                -- character currently being examined
    V_OPERANDS    VARCHAR2(20) := '+-*/()';   -- supported operators and parentheses
    V_TODO        VARCHAR2(32767);            -- expression string built up for EXECUTE IMMEDIATE
    what          VARCHAR2(20) := 'A NUMBER'; -- not used
    V_ERGEBNIS    VARCHAR2(4000);             -- result
    V_FORMEL_WERT VARCHAR2(32767);            -- formula text looked up for a variable
    v_error       VARCHAR2(255);

    -- TRUE if the string consists only of digits and decimal points
    FUNCTION IS_NUMBER(v_string VARCHAR2) RETURN BOOLEAN IS
      laenge NUMBER;            -- length of the string
      help   VARCHAR2(32767);   -- not used
      anz    NUMBER;            -- current position
      erg    BOOLEAN := TRUE;
    BEGIN
      laenge := length(v_string);
      anz    := 1;
      WHILE erg AND (anz <= laenge) LOOP
        IF instr('1234567890.', substr(v_string, anz, 1)) != 0 THEN
          anz := anz + 1;
        ELSE
          erg := FALSE;
        END IF;
      END LOOP;
      RETURN erg;
    EXCEPTION
      WHEN OTHERS THEN RETURN FALSE;
    END IS_NUMBER;

    -- TRUE if the string contains at least one operator or parenthesis
    FUNCTION IS_FORMEL(v_string VARCHAR2) RETURN BOOLEAN IS
      laenge NUMBER;
      help   VARCHAR2(32767);   -- not used
      anz    NUMBER;
      erg    BOOLEAN := FALSE;
    BEGIN
      laenge := length(v_string);
      anz    := 1;
      WHILE NOT erg AND (anz <= laenge) LOOP
        IF instr(V_OPERANDS, substr(v_string, anz, 1)) = 0 THEN
          anz := anz + 1;
        ELSE
          erg := TRUE;
        END IF;
      END LOOP;
      RETURN erg;
    EXCEPTION
      WHEN OTHERS THEN RETURN FALSE;
    END IS_FORMEL;

  BEGIN
    IF level >= 40 THEN   -- recursion guard: more than 40 levels is treated as a cyclic definition
      RAISE CYCLIC_ERROR;
    END IF;
    IF NOT IS_NUMBER(v_formel) THEN
      FOR x IN 1 .. NVL(LENGTH(v_formel), 0) LOOP
        V_AKT_CHAR := substr(v_formel, x, 1);
        IF instr(V_OPERANDS, V_AKT_CHAR) != 0 OR V_AKT_CHAR = ' ' OR x = LENGTH(v_formel) THEN
          IF x = LENGTH(v_formel) THEN
            V_AKT_VAR := V_AKT_VAR || V_AKT_CHAR;
          END IF;
          IF V_AKT_VAR IS NOT NULL THEN
            IF IS_NUMBER(V_AKT_VAR) THEN
              V_TODO := V_TODO || V_AKT_VAR;
            ELSE
              IF IS_FORMEL(V_AKT_VAR) THEN
                V_TODO := V_TODO || evaluate(V_AKT_VAR, level + 1);
              ELSE  -- a variable: look up its formula and evaluate it recursively
                BEGIN
                  SELECT FORMEL
                    INTO V_FORMEL_WERT
                    FROM FORMULAR
                   WHERE VARIABLE = V_AKT_VAR;
                EXCEPTION
                  WHEN OTHERS THEN
                    RAISE UNDEFINED_VARIABLE;
                END;
                V_TODO := V_TODO || evaluate(V_FORMEL_WERT, level + 1);
              END IF; /* if is_formel(.. */
            END IF;   /* if is_number */
          END IF;     /* if v_akt_var is not null */
          V_AKT_VAR := NULL;
          IF instr(V_OPERANDS, V_AKT_CHAR) != 0 THEN
            V_TODO := V_TODO || V_AKT_CHAR;
          END IF;
        ELSE
          V_AKT_VAR := V_AKT_VAR || V_AKT_CHAR;
        END IF; /* if instr(V_OPERANDS, V_AKT_CHAR) .. */
      END LOOP;
    ELSE
      V_TODO := v_formel;
    END IF;
    EXECUTE IMMEDIATE 'SELECT ' || V_TODO || ' FROM DUAL' INTO V_ERGEBNIS;
    RETURN V_ERGEBNIS;
  EXCEPTION
    WHEN CYCLIC_ERROR THEN
      RAISE_APPLICATION_ERROR(-20000, 'Error: Max. level reached!');
    WHEN UNDEFINED_VARIABLE THEN
      RAISE_APPLICATION_ERROR(-20000, 'Undefined variable: ' || V_AKT_VAR);
    WHEN OTHERS THEN
      v_error := SQLERRM;
      RAISE_APPLICATION_ERROR(-20000, v_error || ' ' || V_TODO);
  END evaluate;

END EVALUATE_STRING;
/
you are able to select:
SQL> select substr(variable,1,10) VAR,
2 substr(formel,1,20) FORMEL,
3 substr(evaluate_string.evaluate(formel,0),1,30) erg
4 from formular;
VAR        FORMEL                ERG
a1         1                     1
a2         2                     2
a3         3.2                   3.2
a4         a1*a2+a3              5.2
a5         a3 - a1               2.2
a6         (5-3)*2               4
a7         3 * 3                 9
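As a usage note (a sketch, not part of the original reply): the same function can be called for a single attribute, or wrapped in a view so the evaluated value is always available alongside the formula:
-- single attribute
SELECT evaluate_string.evaluate(formel, 0) AS erg
  FROM formular
 WHERE variable = 'a4';   -- returns 5.2
-- view over the whole table, assuming the table and package above
CREATE OR REPLACE VIEW formular_v AS
  SELECT variable, formel,
         evaluate_string.evaluate(formel, 0) AS erg
    FROM formular;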
Regards
Anna

Similar Messages

  • Questions in Ad Hoc Query & How to Configure the EEO standard reports

    Hi all,
    I have a question about an Ad Hoc Query report in HR.
    How to: get a list of the total number of employees included in a particular report at the end of the report. Example: if I create and run a report for salaried employees, sorted by company code, how can I get a sub-total and the total number of employees listed in the report?
    I tried the Ranked format, but when you print the report it doesn't retain the report name at the top.
    --> I also have a question regarding the standard reports for EEO and AAP.
    How do I:
    1. Start configuring these reports?
    2. What are the things I should have before configuring them in IMG?
    If anyone can provide me with some documentation regarding the EEO and AAP report configuration, that would be great.
    Thanks in advance.
    Harish

    This can be done using security for the InfoProvider: give the users access to create queries only for that InfoProvider.

  • I have some questions regarding setting up a software RAID 0 on a Mac Pro

    I have some questions regarding setting up a software RAID 0 on a Mac Pro (early 2009).
    These questions might seem stupid to many of you, but, as my last (in fact my one and only) computer before the Mac Pro was an IIcx/4/80 running System 7.5, I am a complete novice regarding this particular matter.
    A few days ago I installed a WD3000HLFS VelociRaptor 300GB in bay 1, and moved the original 640GB HD to bay 2. I now have 2 bootable internal drives, and currently I am using the VR300 as my startup disk. Instead of cloning from the original drive, I have reinstalled the Mac OS, and all my applications & software onto the VR300. Everything is backed up onto a WD SE II 2TB external drive, using Time Machine. The original 640GB has an eDrive partition, which was created some time ago using TechTool Pro 5.
    The system will be used primarily for photo editing, digital imaging, and to produce colour prints up to A2 size. Some of the image files, from scanned imports of film negatives & transparencies, will be 40MB or larger. Next year I hope to buy a high resolution full frame digital SLR, which will also generate large files.
    Currently I am using Apple's bundled iPhoto, Aperture 2, Photoshop Elements 8, Silverfast Ai, ColorMunki Photo, EZcolor and other applications/software. I will also be using Photoshop CS5, when it becomes available, and I will probably change over to Lightroom 3, which is currently in Beta, because I have had problems with Aperture, which, until recent upgrades (HD, RAM & graphics card) to my system, would not even load images for print. All I had was a blank preview page, and a constant, frozen "loading" message - the symbol underneath remained static, instead of revolving!
    It is now possible to print images from within Aperture 2, but I am not happy with the colour fidelity, whereas it is possible to produce excellent, natural colour prints using its "minnow" sibling, iPhoto!
    My intention is to buy another 3 VR300s to form a 4-drive RAID 0 array for optimum performance, and to store the original 640GB drive as an emergency bootable back-up. I would have ordered the additional VR300s already, but for the fact that there appears to have been a run on them, and currently they are out of stock at all but the more expensive UK resellers.
    I should be most grateful to receive advice regarding the following questions:
    QUESTION 1:
    I have had a look at the RAID setting up facility in Disk Utility and it states: "To create a RAID set, drag disks or partitions into the list below".
    If I install another 3 VR300s, can I drag all 4 of them into the "list below" box, without any risk of losing everything I have already installed on the existing VR300?
    Or would I have to reinstall the OS, applications and software again?
    I mention this, because one of the applications, Personal accountz, has a label on its CD wallet stating that the Licence Key can only be used once, and I have already used it when I installed it on the existing VR300.
    QUESTION 2:
    I understand that the failure of just one drive will result in all the data in a Raid 0 array being lost.
    Does this mean that I would not be able to boot up from the 4 drive array in that scenario?
    Even so, it would be worth the risk to gain the optimum performance provided by RAID 0 over the other RAID setup options, and, in addition to the SE II, I will probably back up all my image files onto a portable drive as an additional precaution.
    QUESTION 3:
    Is it possible to create an eDrive partition, using TechTool Pro 5, on the VR300 in bay 1?
    Or would this not be of any use anyway, in the event of a single drive failure?
    QUESTION 4:
    Would there be a significant increase in performance using a 4 x VR300 drive RAID 0 array, compared to only 2 or 3 drives?
    QUESTION 5:
    If I used a 3 x VR300 RAID 0 array, and installed either a cloned VR300 or the original 640GB HD in bay 4, and I left the Startup Disk in System Preferences unlocked, would the system boot up automatically from the 4th drive in the event of a single drive failure in the 3-drive RAID 0 array which had been selected for startup?
    Apologies if these seem stupid questions, but I am trying to determine the best option without foregoing optimum performance.

    Well said.
    Steps to set up RAID
    Setting up a RAID array in Mac OS X is part of the installation process. This procedure assumes that you have already installed Mac OS 10.1 and the hard drive subsystem (two hard drives and a PCI controller card, for example) that RAID will be implemented on. Follow these steps:
    1. Open Disk Utility (/Applications/Utilities).
    2. When the disks appear in the pane on the left, select the disks you wish to be in the array and drag them to the disk panel.
    3. Choose Stripe or Mirror from the RAID Scheme pop-up menu.
    4. Name the RAID set.
    5. Choose a volume format. The size of the array will be automatically determined based on what you selected.
    6. Click Create.
    Recovering from a hard drive failure on a mirrored array
    1. Open Disk Utility in (/Applications/Utilities).
    2. Click the RAID tab. If an issue has occurred, a dialog box will appear that describes it.
    3. If an issue with the disk is indicated, click Rebuild.
    4. If Rebuild does not work, shut down the computer and replace the damaged hard disk.
    5. Repeat steps 1 and 2.
    6. Drag the icon of the new disk on top of that of the removed disk.
    7. Click Rebuild.
    http://support.apple.com/kb/HT2559
    Drive A + B = VOLUME ONE
    Drive C + D = VOLUME TWO
    What you put on those volumes is of course up to you and easy to do.
    A system really only needs to be backed up "as needed" like before you add or update or install anything.
    /Users can be backed up hourly, daily, weekly schedule
    Media files as needed.
    Things that hurt performance:
    Page outs
    Spotlight - disable this for boot drive and 'scratch'
    SCRATCH: Temporary space; erased between projects and steps.
    http://en.wikipedia.org/wiki/Standard_RAID_levels
    (normally I'd link to Wikipedia but I can't load right now)
    Disk drives are the slowest component, so tackling that has always made sense. Easy way to make a difference. More RAM only if it will be of value and used. Same with more/faster processors, or graphic card.
    To help understand and configure your 2009 Nehalem Mac Pro:
    http://arstechnica.com/apple/reviews/2009/04/266ghz-8-core-mac-pro-review.ars/1
    http://macperformanceguide.com/
    http://www.macgurus.com/guides/storageaccelguide.php
    http://www.macintouch.com/readerreports/harddrives/index.html
    http://macperformanceguide.com/OptimizingPhotoshop-Configuration.html
    http://kb2.adobe.com/cps/404/kb404440.html

  • Question regarding Calculated Key Figures in BEx and their impact on SQL

    Hello,
    I am new to BO SAP integration. I have a question regarding using CKF in BEx.
    I created a universe off of a BEx query with no CKF. I then created a Webi report with some dimensions and measures. I captured the SQL generated using a trace (ST05).
    In the same BEx query, I then created a CKF, refreshed the universe and created a new Webi report using the same dimensions and the CKF. The SQL generated had many more select statements.
    My question is: what is the effect of a CKF on the generated SQL, and are there performance issues in using a CKF in BEx as opposed to creating variables in the Webi report?
    Thanks,
    Nikhil

    Hi,
    if your CKF will always have the same unit and you have one KF in your InfoProvider with this unit, you can try this trick:
    create a new hidden CKF as new CKF = KF / KF (with this, new CKF equals 1 unit)
    change your old CKF to old CKF = old CKF * new CKF
    let me know if it works.

  • Questions regarding Optimizing formulas in IP

    Dear all,
    This weekend I had a look at the webinar on Tips and Tricks for Implementing and Optimizing Formulas in IP.
    I’m currently working on an IP-implementation and encounter the following when getting more in-depth.
    I’d appreciate very much if you could comment on the questions below.
    1.) I have a question regarding optimization 3 (slide 43) about Conditions:
    'If the condition is equal to the filter restriction, then the condition can be removed.'
    I agree fully on this, but have a question on using the Planning Function (PF) in combination with a query as DataProvider.
    In my query I have a filter in the Characteristic restriction.
    It contains variables on fiscal year, version. These only allow single value entry.
    The DataProvider acts as filter for my PF. So I’d suppose I don’t need a condition for my PF since it is narrowed down on fiscal year and version by my query.
    a.) Question: Is that correct?
    I just want to make sure that I don't get too many records as input for my PF. How detrimental to performance is it to use conditions anyway?
    2.) I read in training BW370 (IP training) that a PF is executed for the currently set filter (navigational state) in the query and that characteristics that are used in restricted key figures are ignored in the filter.
    So, if I use version in the restricted key figure it will be ignored.
    Questions:
    a.) Does this mean that the PF is executed for all versions in the system, or for the versions that are in the filter of the Characteristic Restrictions and not the currently set filter?
    b.) I'd suppose the dataset for the PF can never be bigger than the initial dataset that is selected by the query, right?
    c.) Is the PF executed against the navigational state anyway when I use filtering? I have an example where I filter on the field customer, thus making my dataset smaller, but executing the PF still takes the same amount of time.
    d.) And I also find that the PF is executed twice. A popup comes up showing messages regarding the execution. After pressing OK, it seems the PF runs again...
    3.) If I use variables in my Planning Function I don't want to fill the parameter VAR_VALUE with a value. I want to use the variable which is ready for input from the selection screen of the query.
    So when I run the PF it should use the BI variable. It's no problem to customize this in the Modeler. But when I go into the frontend the field VAR_VALUE stays empty and needs a value.
    Question:
    a.) What do I enter here? For parameter VAR_NAME I use the variable name, but what do I use for parameter VAR_VALUE? Also the variable name?
    4.) Question regarding optimization 6 (slide 48) about Formulas on MultiProviders:
    'If the formula is using data of only one InfoProvider but is defined on a MultiProvider, then the complete formula should be moved to the single base InfoProvider.'
    In our case we have three cubes in the MP, two real-time and one normal one. Right now we have one Aggregation Level (AL) on top of the MP.
    For one formula I only need one cube, so it's better to create another AL with the formula based on that cube.
    For another formula I need the two real-time cubes. This is interesting regarding the optimization statement.
    Question:
    a.) Can I then use the AL on the MP, or is it better to create a new MP with only these two cubes and create an AL on top of that, and then create the formula on the AL based on the MP with the two cubes?
    This makes the architecture more complex.
    Thanks a lot in advance for your appreciated answers!
    Kind regards, Harjan

    Marc,
    Some additional questions regarding locking.
    I have found that the dataset that is locked depends on the restrictions made in the 'Characteristic Restrictions' part of the query.
    Restrictions in the 'Default Values' part are not taken into account. In that case all data records of the characteristic are locked.
    Q1: Is that correct?
    To give an example: Assume you restrict customer to a hierarchy node in Default Values. If you want people to plan concurrently this is not possible, since all customers are locked then. When the customer restriction is moved to the Characteristic Restrictions, the system only locks the specific customer hierarchy node and people can plan concurrently.
    Q2: What about variables used in restricted key figures, like a variable for fiscal year/period? Is only this fiscal year/period locked then?
    Q3: We'd like to lock on a navigational attribute. The nav attribute is put as a variable in the filter of the Characteristic Restrictions. Does the system then only lock this selection for the nav attribute? Or do I have to change my locking settings in RSPLSE?
    Then a question regarding locking of data for functions:
    Assume you use the BEx Analyzer and use the query as data_provider_filter for your planning function. You use restricted key figures with the characteristic Version. The first column contains the amount for version 1 and the second column contains the amount for version 2.
    In the Char Restrictions you've restricted version to values '1' and '2'.
    When executing the input-ready query, versions 1 and 2 are locked (due to the selection in Char Restr).
    But when executing the planning function all versions are locked (*)
    Q4: True?
    Kind regards, Harjan

  • Questions regarding new functionalities in EhP 4 - Reporting Financials 2

    Dear Forum,
    in a project we would like to use some new functionality from Reporting Financials 2 - i.e. DataSource 0FI_AA_20 for loading Depreciation and Amortization to BI for following years, as this cannot be done by the old extractor.
    We are now looking for reliable information about the impact and changes that are made in ERP if we switch on the functionality Reporting Financials 2 via SFW5. Will the old extractors still work? Will all reports in ERP work without problems? Is there any impact on business processes? Or is this just additional functionality which will not affect the current implementation?
    Can anybody give information about this?
    Thanks, regards
    Lars
    Edited by: Lars Hermanns on Jun 2, 2010 10:29 AM

  • Question Regarding Mesh with 3702 and non-AC APs

    Hello!
    A quick question regarding MESH deployments with two different sorts of APs, AC and non-AC models: if my 3702i is my root AP and the 3602i my MAP - will AC still work at 80 MHz, or will I have to switch to 40 MHz (and thus cripple AC performance)?
    Not 100% sure on this... I *think* it should still work for the normal 802.11n connection, but I'm not sure if the 80 MHz channel width (needed?) for AC will cause the non-AC 3602i to be stranded?
    Thanks a lot for your insight!

    Currently, my network DHCP server is a software-based DHCP server. In reading over your post, if I understood correctly, it sounds like the managed switch would have its own hardware-based DHCP server to assign IP addresses to those clients identified on the "external" VLAN. Did I understand that correctly, or did I misread something?
    The DHCP server will be software based; even though you define it on your switch, it is a DHCP service running on the switch's OS.
    I am configuring this setup for a small business application and will need to purchase a managed switch with 16 or 24 ports. Do you have any recommendations on a particular managed switch that will handle the VLAN configuration and include PoE while keeping costs in mind?
    In this forum, most of us discuss Cisco enterprise-grade wireless. Here are the 2960-X series switch details, if you are interested:
    http://www.cisco.com/c/en/us/products/switches/catalyst-2960-x-series-switches/index.html
    You may need to check the pricing with your Cisco account manager or from a Cisco partner.
    HTH
    Rasika
    **** Pls rate all useful responses ****

  • Question regarding XI/PI and Idoc processing.

    Hi,
    I'm learning XI/PI and I have a question regarding IDoc processing in PI.
    We need to configure communication between our BW system and our PI system using IDocs.
    The IDocs are sent from BW to our PI system and are then sent back to the BW system; there is no third system involved. The IDocs are only exchanged between PI and BW.
    Our BW system is already connected to many other R3 systems using WE20 / WE21 and RFCs, and everything works perfectly.
    When I configure this communication between BW and PI, it seems that PI passes the IDoc to the IDoc adapter, converts it to XML and tries to find a receiver for the particular IDoc. I see the error "NO_RECEIVER_CASE_ASYNC" in SXMB_MONI.
    Is this normal behaviour in PI? Why does PI think that the IDoc needs to be sent to another system when it is in fact intended for itself?
    Thanks and regards
    Remi

    Hi
    for error "NO_RECEIVER_CASE_ASYNC" in SXMB_MONI.
    This problem may occur due to one of following reasons, so check
    1 service is active in message? transaction SICF and activate service sap/xi/engine (right click, activate)
    2 Is the port 8001 defined in the services on the smicm under services?
    3 Check the roles assign PIDIRUSER
    http://help.sap.com/saphelp_nw04/helpdata/en/56/361041ebf0f06fe10000000a1550b0/content.htm
    role: SAP_XI_ID_SERV_USER attached to it
    Also Check Whether PIDIRUSER has following role
    SAP_SLD_CONFIGURATOR
    SAP_XI_RWB_SERV_USER
    SAP_XI_RWB_SERV_USER_MAIN
    Regards
    Abhishek

  • BW Question: regarding the versioning

    Hii All,
    I posted a question regarding the versioning of the cube on Friday, 9th May, and I still have not received any reply. Please let me know, or otherwise keep me posted that you are unable to answer my question.
    My question was:
    In the versioning of the cube, we give that version a particular name and select its value type as 110, 130 or 140. What is this value type? What do 110, 130 or 140 really mean?
    Why do we need this value type? And can we get some documents to read and explore this value type? Please help.
    Thanks & regards ,
    Madhavi S Bichakal

    Hi Madhavi,
    Basically in BW you'll find two characteristics used for versioning:
    - Version: Used to create different versions of the information
    - Value type: used to indicate what the information means.
    Examples:
    Version 000 is usually the Plan/Actual data (the final version). Then, for version 000 you will have different value types, like 010 = Actual, 020 = Plan, 030 = Target, etc.
    Then you can have different versions (001, 002, 003) that are used in the planning process. You start with version 001, then you can move to 002, 003, ... and when you have the final Plan, you move to 000.
    That's the usual usage of version / value type,
    but you can use them as you want. The only problem you can have is that if you rename the description of a value type and then activate a BCT that generates data for that value type, the description will be incorrect.
    From what you said, you are using values from 100 and above; SAP uses up to 90 from what I've seen, so you won't have any problems.
    Hope this clarifies.
    Regards,
    Diego

  • Question regarding DocumentDB RU consumption when inserting documents & write performance

    Hi guys,
    I do have some questions regarding the DocumentDB Public Preview capacity and performance quotas:
    My use case is the following:
    I need to store about 200,000,000 documents per day with a maximum of about 5000 inserts per second. Each document has a size of about 200 bytes.
    According to the documentation (http://azure.microsoft.com/en-us/documentation/articles/documentdb-manage/) I understand that I should be able to store about 500 documents per second with single inserts and about 1000 per second with a batch insert using a stored procedure. This would result in the need of at least 5 CUs just to handle the inserts.
    Since one CU consists of 2000 RUs I would expect the RU usage to be about 4 RUs per single document insert or 100 RUs for a single SP execution with 50 documents.
    When I look at the actual RU consumption I get values I don't really understand:
    Batch insert of 50 documents: about 770 RUs
    Single insert: about 17 RUs
    Example document:
    {"id":"5ac00fa102634297ac7ae897207980ce","Type":0,"h":"13F40E809EF7E64A8B7A164E67657C1940464723","aid":4655,"pid":203506,"sf":202641580,"sfx":5662192,"t":"2014-10-22T02:10:34+02:00","qg":3}
    The consistency level is set to “Session”.
    I am using the SP from the example c# project for batch inserts and the following code snippet for single inserts:
    await client.CreateDocumentAsync(documentCollection.DocumentsLink, record);
    Is there any flaw in my assumption (ok... obviously) regarding the throughput calculation, or could you give me some advice on how to achieve the throughput stated in the documentation?
    With the current performance I would need to buy at least 40 CUs, which wouldn't be an option at all.
    I have another question regarding document retention:
    Since I would need to store a lot of data per day, I also would need to delete as much data per day as I insert:
    The data is valid for at least 7 days (it actually should be 30 days, depending on my options with DocumentDB).
    I guess there is nothing like a retention policy for documents (this document is valid for X days and will automatically be deleted after that period)?
    Since I guess deleting data on a single-document basis is no option at all, I would like to create a document collection per day and delete the collection after a specified retention period.
    Those historic collections would never change but would only receive queries. The only problem I see with creating collections per day is the missing throughput:
    As I understand it, the throughput is split equally according to the number of available collections, which would result in "missing" throughput on the actual hot collection (hot meaning the only collection I would actually insert documents into).
    Is there any (better) way to handle this use case than to buy enough CUs so that the actual hot collection would get the needed throughput?
    Example: 
    1 CU -> 2000 RUs
    7 collections -> 2000 / 7 = 286 RUs per collection (per CU)
    Needed throughput for hot collection (values from documentation): 20,000
    => 70 CUs (20,000 / 286)
    vs. 10 CUs when using one collection and batch inserts or 20 CUs when using one collection and single inserts.
    I know that DocumentDB is currently in preview and that it is not possible to handle this use case as-is because of the limit of 10 GB per collection at the moment. I am just trying to do a POC to switch to DocumentDB when it is publicly available.
    Could you give me any advice on whether this kind of use case can or should be handled with DocumentDB? I currently use Table Storage for this case (currently with a maximum of about 2500 inserts per second) but would like to switch to DocumentDB, since I had to optimize for writes per second with Table Storage and have horrible query execution times with Table Storage because of full table scans.
    Once again my desired setup:
    200,000,000 inserts per day / Maximum of 5000 writes per second
    Collection 1.2 -> Hot Collection: All writes (max 5000 p/s) will go to this collection. Will also be queried.
    Collection 2.2 -> Historic data, will only be queried; no inserts
    Collection 3.2 -> Historic data, will only be queried; no inserts
    Collection 4.2 -> Historic data, will only be queried; no inserts
    Collection 5.2 -> Historic data, will only be queried; no inserts
    Collection 6.2 -> Historic data, will only be queried; no inserts
    Collection 7.2 -> Historic data, will only be queried; no inserts
    Collection 1.1 -> Old, so delete whole collection
    As a matter of fact the perfect setup would be to have only one (huge) collection with automatic document retention... but I guess this won't be an option at all?
    I hope you understand my problem; please give me some advice on whether this is at all possible or will be possible in the future with DocumentDB.
    Best regards and thanks for your help

    Hi Aravind,
    first of all, thanks for your reply regarding my questions.
    I sent you a mail a few days ago, but since I did not receive a response I am not sure it got through.
    My main question regarding the actual usage of RUs when inserting documents is still my main concern, since I cannot insert nearly as many documents as expected per second and CU.
    According to the documentation (http://azure.microsoft.com/en-us/documentation/articles/documentdb-manage/) I understand that I should be able to store about 500 documents per second with single inserts and about 1000 per second with a batch insert using a stored procedure (20 batches per second containing 50 documents each).
    As described in my post, the actual usage is multiple (actually 6-7) times higher than expected... even when running the C# examples provided at:
    https://code.msdn.microsoft.com/windowsazure/Azure-DocumentDB-NET-Code-6b3da8af/view/SourceCode
    I tried all the ideas Steve posted (manual indexing & lazy indexing mode) but was not able to reduce RU consumption to a point where 500 inserts per second were nearly possible.
    Here again my findings regarding RU consumption for batch inserts:
    Automatic indexing on: 777 RUs for 50 documents
    Automatic indexing off & mandatory path only: 655 RUs for 50 documents
    Automatic indexing off & IndexingMode Lazy & mandatory path only: 645 RUs for 50 documents
    Expected result: approximately 100 RUs (2000 RUs => 20x batch insert of 50 => 100 RUs per batch)
    Since DocumentDB is still in Preview I understand that it is not yet capable of handling my use case regarding throughput, collection size, number of collections and possible CUs, and I am fine with that.
    If I am able to (at least nearly) reach the stated performance of 500 inserts per second per CU I am totally fine for now. If not, I have to move on and look for other options... which would also be "fine". ;-)
    Is there actually any working example code that manages to do 500 single inserts per second with one CU's 2000 RUs, or is this a totally theoretical value? Or is it just because of being in Preview, and the stated values are planned to work?
    Regarding your feedback:
    "...another thing to consider is if you can amortize the request rate over the average of 200 M requests/day = 2000 requests/second, then you'll need to provision 16 capacity units instead of 40 capacity units. You can do this by catching "RequestRateTooLargeExceptions" and retrying after the server specified retry interval..."
    Sadly this is not possible for me, because I have to query the data in near real time for my use case... so queuing is not an option.
    "We don't support a way to distribute throughput differently across hot and cold collections. We are evaluating a few solutions to enable this scenario, so please do propose it as a feature at http://feedback.azure.com/forums/263030-documentdb as this helps us prioritize feature work. Currently, the best way to achieve this is to create multiple collections for hot data, and shard across them, so that you get more proportionate throughput allocated to it."
    I guess I could circumvent this by not clustering into "hot" and "cold" collections but "hot" and "cold" databases with one or multiple collections each (if 10 GB remains the limit per collection), if there was a way to (automatically?) scale the CUs via an API. Otherwise I would have to manually scale down the DBs holding historic data. I also added a feature request as proposed by you.
    Sorry for the long post, but I am planning the future architecture for one of our core systems and want to be sure I am on the right track.
    So if you are able to answer just one question, it would be this:
    How do I achieve the stated throughput of 500 single inserts per second with one CU's 2000 RUs in reality? ;-)
    Best regards and thanks again

  • Not able to see that particular query which is executed from application

    Hi Friends,
    I have one application which executes a number of queries in the current session. I want to know how much time each query is taking.
    So I executed the following query in Toad for this purpose, but it only gives details of those queries which were executed from either Toad or SQL Developer.
    I am not able to see the particular query which was executed from the application. Please suggest if I am missing something.
    select ss.schemaname, ss.machine, ss.program, ss.logon_time, ss.sql_exec_start, ss.wait_time,
           sa.first_load_time, sa.application_wait_time, sa.plsql_exec_time, sa.cpu_time, sa.elapsed_time, sa.sql_fulltext
      from v$session ss, v$sqlarea sa
     where sa.hash_value = ss.sql_hash_value
     order by sa.first_load_time desc;
    Oracle Version - Release 11.2.0.1.0 Production
    Toad Version - 9.5.0.31
    Regards,
    Sachin

    Dear Friends,
    Is there any option for this through Oracle Enterprise Manager 11g?
    Actually, I have done a little bit of research on the EM console and gone through the SQL Monitoring Executions page, but did not get the details of the particular query executed through the application.
    I have also checked the link - AWR Baseline - but did not get any result.
    As I am new to database administration activities, please help me with this. You can also suggest other solutions besides the Oracle EM console.
    Thanks.
    Sachin Jaiswal
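    A hedged sketch of an alternative approach (not from this thread): joining to v$session only shows statements for sessions that are still connected, so one could instead look at the shared SQL area via v$sql, which keeps cumulative timings for cached statements regardless of which client issued them. The schema name below is a placeholder; treat this as a starting point rather than a verified answer:
    -- cached statements for one schema, most expensive first
    -- elapsed_time and cpu_time are cumulative microseconds over all executions
    select s.sql_id,
           s.parsing_schema_name,
           s.module,                              -- which client/application issued the statement
           s.executions,
           round(s.elapsed_time / 1e6, 2) as elapsed_sec,
           round(s.cpu_time / 1e6, 2)     as cpu_sec,
           s.first_load_time,
           s.sql_text
      from v$sql s
     where s.parsing_schema_name = 'APP_SCHEMA'   -- hypothetical schema name
     order by s.elapsed_time desc;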

  • Questions regarding RRI

    Hi Folks,
    I have two questions regarding RRI:
    1) Is it possible to define a jump by selecting more than one value of a field/variable? My RRI is working with one value, but if I mark more than one value and go to the jump, the new query just uses the first value from my selected list.
    2) Is it possible to jump to a workbook?
    Thanks

    Hi,
    1) Is it possible to define a jump by selecting more than one value of a field/variable? My RRI is working with one value, but if I mark more than one value and go to the jump, the new query just uses the first value from my selected list.
    In RSBBS, when you are defining the RRI, you can choose the variables which are common to the reports to pass values to the next query.
    2) Is it possible to jump to a workbook?
    No, a jump to a workbook is not possible.
    regards,
    Arvind.

  • Questions Regarding Asus K53E-BBR4

    I have a couple of questions regarding the Asus K53E-BBR4. First question: does this laptop support wireless B and/or G, since it only mentions N?
    Second question: is this laptop available for ship-to-store? Specifically (Store 597), ZIP 16509.
    Thanks,

    Wireless N is backwards compatible with A/B/G standards so you can connect them all. 
    In regards to your product question, you can view the product page on Best Buy's website, input your ZIP code, and you will see the options for the particular store you shop at.
    *******DISCLAIMER********
    I am not an employee of BBY in any shape or form. All information presented in my replies or postings is my own opinion. It is up to you , the end user to determine the ultimate validity of any information presented on these forums.

  • Question Regarding MIDI and Sample Accuracy

    Hi,
    I have 2 questions regarding MIDI.
    1. MIDI is moved by ticks. In the arrange window however, you can move a region by samples. When doing this, you can move within values of the ticks (which you can see on your position box that pops up) Now, will this MIDI note actually be played back at that specific sample point, or will it round the event to the closest tick? (example, if I have a MIDI note directly on 1.1.1.1, and I move the REGION in the arrange... will that MIDI note now fall on the sample that I have moved the region to, or will it be rounded to the closest tick?)
    2. When making a MIDI template from an audio region, will the MIDI information land exactly on the sample of the transient, or will it be rounded to the closest tick?
    I've looked through the manual, and couldn't find any specific answer to these questions.
    Thanks!
    Message was edited by: Matthew Usnick

    Ok, I've done some experimenting, and here are my results.
    I believe those numbers ARE samples. I came to this conclusion by counting (for some reason it starts on 11) and cutting a region to be 33 samples long (so, minus 11, is 22 actual samples). I then went to the Audio Bin window, and chose to view region length as samples. And there it said it: 22 samples. So, you can in fact move MIDI regions by samples!
    Second, I wanted to see if the MIDI notes in the region itself would be quantized to the nearest tick. I cut a piece of audio so it had a 1-sample attack (zoomed in as far as I could in the sample editor, selected the smallest portion, faded in, and made the start point the region start position). I saved the region as a new audio file, and loaded it up in the EXS sampler.
    I then made a MIDI region and triggered the sample on beat 1 (quantized, on the money). I then went into the arrange window, made a fixed cycle length, and bounced the audio. I then moved the MIDI region by one sample to the right. I did this 22 times (which is the number of samples in a tick, at 120 BPM, apparently). After bouncing all of these (the cycle position remained fixed, only the MIDI region was moving) I imported all the audio into the arrange on new tracks, and YES!!! The sample start was cascaded by a sample each time!
    SO.
    Not only can you move MIDI regions by sample, but the positions are NOT quantized to Logic's ticks!
    This is very good news, and glad I worked this out!
    (if anyone thinks this sounds wrong, please correct me, but I'm pretty sure I proved it, in my test)
    Message was edited by: Matthew Usnick

  • Question regarding homehub and Open reach router -...

    Hi all,
    I had Infinity installed earlier this month and am happy with it so far. I do have a few questions regarding the service and hardware though.
    I run both my BT Openreach router and BT Home Hub from the same power socket. The problem is, if I turn the plug on so both the Home Hub and Openreach router start up at the same time, the Home Hub never gets an Internet connection from the router. To solve this I have to turn the BT Home Hub on first and leave it for a minute, then start the router up, and it all works fine. I'm just curious if this is the norm or whether I have some faulty hardware.
    Secondly, I appreciate the estimated speed BT quotes isn't always accurate: I was quoted 49 Mbit down but received 38 Mbit down, which I was happy with. Recently though it has dropped to 30, and I am worried this might continue to drop over time; at present I am 20 Mbit down on the estimate. For the record, 30 Mbit is actually fine and probably more than I would ever need, but if I could boost it somehow I would be interested to hear from you.
    Thanks.

    Just a clarification: the two boxes are the HomeHub (router, black) and the modem (white).  The HomeHub has its own power switch, the modem doesn't.
    There is something wrong if the HomeHub needs to be turned on before the modem.  As others have said, in general best to leave the modem on all the time.  You should be able to connect them up in any order, or together.  (For example, I recently tripped the mains cutout, and when I restored power the modem and HomeHub went on together and everything was ok).
    Check if the router can connect/disconnect from the broadband using the web interface.  Leaving the modem and HomeHub on all the time, go to http://192.168.1.254/ on a browser on a connected computer, and see whether the Connect/Disconnect button works.
