Question regarding Agent 18 case

Does anybody know whether the Agent 18 case will fit over the invisible shield currently on my iPod?

I heard somewhere that if you put a case on an iPod that has an invisible shield, the friction between the case and the shield will damage the shield.

Similar Messages

  • Questions about Agents

    Hi SAP Experts,
    I have questions regarding agents.
    1. In the activity step under agent assignment, when do we use:
    General task
    General Forwarding allowed
    General Forwarding not allowed
    Forwarding not allowed.
    2. I am trying to use a multiline container element for my agents and have indicated:
    &AGENTS[&_WF_PARFOREACH_INDEX&]&
    My requirement is that once the task is executed by one of the agents, the other agents should
    no longer be able to execute the task and the workflow should proceed to the next step. So far,
    with what I have done, the workflow waits for all the agents to execute the task before
    it continues to the next step. Please advise on what I should do. Thank you in advance!

    Hello,
    You say:
    "Initially it was only &Agents& in the expression field, the problem is that no agents are processed according to the logs"
    Look in the workflow log to see what the problem is. What value does &AGENTS& have? This is what you need for your requirement.
    Usually, you set the task to General Task.
    regards
    Rick Bakker
    Hanabi Technology

  • Question regarding Dashboard and column prompt

    My questions regarding the Dashboard and column prompt:
    1) Dashboard prompts usually work only with columns that are in the subject area. In my report I have created some columns that are based on other columns. For example, I have a daysNumber column that is based on two other columns, as it calculates the difference between two dates. When I create a dashboard prompt I can't find this column there, but I need to make a prompt on this column.
    2) For one of the columns I have only two values, 1 and 0. When I create a prompt for this column, is it possible for the drop-down list to show 'Yes' for 1 and 'No' for 0 and still filter the request?

    Hi Toony,
    I think there is another way of doing this.
    In the dashboard prompt, go to the Show option and select SQL Results from the dropdown.
    There you need to write your Logical SQL, like:
    SELECT CASE WHEN 1=0 THEN Periods.Year ELSE <difference-of-dates formula> END FROM SubjectAreaName
    Here Periods.Year is a column that already exists in the repository's presentation layer,
    and <difference-of-dates formula> is the code or formula of the column which you want to show in the drop-down.
    Also write the same CASE WHEN 1=0 THEN ... END code in the fx of that prompt.
    I think this helps you in doing this.
    Just check and let me know if it works.
    Thanks & Regards
    Kishore Guggilla
    Edited by: Kishore Guggilla on Oct 31, 2008 9:35 AM

  • Basic question regarding SSIS 2010 Package where source is Microsoft Excel 97-2005 and there is no Microsoft office or Excel driver installed in Production

    Hi all,
    I have one basic question regarding an SSIS 2010 package whose source is Microsoft Excel 97-2005. I wanted to know how this package works in production, where there is no Microsoft Office or Excel driver installed. To check whether an Excel driver is installed, I followed these steps: Start --> Administrative Tools --> Data Sources (ODBC) --> Drivers, and I found only two drivers: SQL Server and SQL Server Native Client 11.0.
    The Windows edition is Windows Server 2008 R2 Enterprise, Service Pack 1, and the system type is a 64-bit operating system.
    We are running this package from SQL Server Agent using the 32-bit DTExec (\\Machine_Name\d$\Program Files (x86)\Microsoft SQL Server\110\DTS\Binn\DTExec.exe /FILE "\\Machine_Name\d$\Folder_Name\EtL.dtsx" /CONFIGFILE "\\Machine_Name\d$\Folder_Name\Config.dtsConfig" /MAXCONCURRENT " -1 " /CHECKPOINTING OFF /REPORTING E). I opened the package to find out which connection we used, and found an "Excel Connection Manager" with ConnectionString=Provider=Microsoft.Jet.OLEDB.4.0;Data Source=F:\Fares.xls;Extended Properties="EXCEL 8.0;HDR=YES"; the source is an 'Excel Source'.
    I discussed this with my DBA and he said that SSIS has an inbuilt Excel driver, but I am not convinced.
    Could anyone please clear up my confusion/doubt?
    I have gone through various links but my doubt is still not cleared.
    Quick Reference:
    SSIS in 32- and 64-bits
    http://toddmcdermid.blogspot.com.au/2009/10/quick-reference-ssis-in-32-and-64-bits.html
    Why do I get "product level is insufficient..." error when I run my SSIS package?
    http://blogs.msdn.com/b/michen/archive/2006/11/11/ssis-product-level-is-insufficient.aspx
    How to run SSIS Packages using 32-bit drivers on 64-bit machine
    http://help.pragmaticworks.com/dtsxchange/scr/FAQ%20-%20How%20to%20run%20SSIS%20Packages%20using%2032bit%20drivers%20on%2064bit%20machine.htm
    Troubleshooting OLE DB Provider Microsoft.ACE.OLEDB.12.0 is not registered Error when importing data from an Excel 2007 file to SQL Server 2008
    http://www.mytechmantra.com/LearnSQLServer/Troubleshoot_OLE_DB_Provider_Error_P1.html
    How Can I Get a List of the ODBC Drivers that are Installed on a Computer?
    http://blogs.technet.com/b/heyscriptingguy/archive/2005/07/07/how-can-i-get-a-list-of-the-odbc-drivers-that-are-installed-on-a-computer.aspx
    Thanks Shiven:) If Answer is Helpful, Please Vote

    Hi S Kumar Dubey,
    In SSIS, the Excel Source and Excel Destination natively use the Microsoft Jet 4.0 OLE DB Provider, which is installed by SQL Server. The Microsoft Jet 4.0 OLE DB Provider deals with .xls files created by Excel 97-2003. To deal with .xlsx files created by Excel 2007, we need the Microsoft ACE OLEDB Provider. SQL Server doesn't install the Microsoft ACE OLEDB Provider; to get it we can install the 2007 Office System Driver: Data Connectivity Components, the Microsoft Access Database Engine 2010 Redistributable, or the Microsoft Office suite.
    The drivers listed in the ODBC Data Source Administrator are ODBC drivers, not OLEDB drivers; therefore, the Excel Source/Destination in SSIS won't use the ODBC driver for Excel listed there by default. On a 64-bit Windows platform, there are two versions of the ODBC Data Source Administrator. The 64-bit ODBC Data Source Administrator is C:\Windows\System32\odbcad32.exe, while the 32-bit one is C:\Windows\SysWOW64\odbcad32.exe. The original 32-bit and 64-bit ODBC drivers are installed by the Windows operating system. By default, there are multiple 32-bit ODBC drivers and fewer 64-bit ODBC drivers installed on a 64-bit platform. To get more ODBC drivers, we can install the 2007 Office System Driver: Data Connectivity Components or the Microsoft Access Database Engine 2010 Redistributable.
    Besides, please note that the 2007 Office System Driver: Data Connectivity Components only installs 32-bit ODBC and OLEDB drivers because it only has a 32-bit version, while the Microsoft Access Database Engine 2010 Redistributable has both 32-bit and 64-bit versions.
    If you have any questions, please feel free to ask.
    Regards,
    Mike Yin
    TechNet Community Support

  • Questions regarding creation of vendor in different purchase organisation

    Hi ABAP gurus,
    I have a few questions regarding data transfers.
    1) While creating a vendor, the vendor is specific to a company code, and the vendor can be present in different purchasing organisations within the same company code if the purchasing organisation is at plant level. My client has vendors in different purchasing organisations. How do we handle the above situation?
    2) I had a few error records while uploading MM01. How do I download the error records? I was using LSMW with predefined programs.
    3) For a few applications there are no predefined programs, so I will have to choose either a predefined BAPI or IDocs. Which is better to go with? I found that BAPIs and IDocs have the same predefined structures, so what is the difference between them?

    Hi,
    1. Create a BDC program with purchasing organisation as a parameter on the selection screen, then run the same BDC program for the different purchasing organisations so that the vendors are created in each of them.
    2. Check the Action Log in LSMW and see.
    3. See the documentation:
    BAPI - BAPIs (Business Application Programming Interfaces) are the standard SAP interfaces. They play an important role in the technical integration and in the exchange of business data between SAP components, and between SAP and non-SAP components. BAPIs enable you to integrate these components and are therefore an important part of developing integration scenarios where multiple components are connected to each other, either on a local network or on the Internet.
    BAPIs allow integration at the business level, not the technical level. This provides for greater stability of the linkage and independence from the underlying communication technology.
    LSMW - No ABAP effort is required for the SAP data migration. However, effort is required to map the data into the structure according to the pre-determined format as specified by the pre-written ABAP upload program of the LSMW.
    The Legacy System Migration Workbench (LSMW) is a tool recommended by SAP that you can use to transfer data once only or periodically from legacy systems into an R/3 System.
    More and more medium-sized firms are implementing SAP solutions, and many of them have their legacy data in desktop programs. In this case, the data is exported in a format that can be read by PC spreadsheet systems. As a result, the data transfer is mere child's play: simply enter the field names in the first line of the table, and the LSM Workbench's import routine automatically generates the input file for your conversion program.
    The LSM Workbench lets you check the data for migration against the current settings of your customizing. The check is performed after the data migration, but before the update in your database.
    So although it was designed for uploading of legacy data, it is not restricted to this use.
    We use it for mass changes, i.e. uploading new/replacement data, and it is great, but there are limits on its functionality, depending on the complexity of the transaction you are trying to replicate.
    The SAP transaction code is 'LSMW' for SAP version 4.6x.
    Check your procedure using these links:
    BAPI with LSMW
    http://esnips.com/doc/ef04c89f-f3a2-473c-beee-6db5bb3dbb0e/LSMW-with-BAPI
    For document on using BAPI with LSMW, I suggest you to visit:
    http://www.****************/Tutorials/LSMW/BAPIinLSMW/BL1.htm
    http://esnips.com/doc/1cd73c19-4263-42a4-9d6f-ac5487b0ebcb/LSMW-with-Idocs.ppt
    http://esnips.com/doc/ef04c89f-f3a2-473c-beee-6db5bb3dbb0e/LSMW-with-BAPI.ppt
    <b>Reward points for useful Answers</b>
    Regards
    Anji

  • Questions regarding *dump_dest parameters and fast_recovery_area

    Hello,
    I just installed a fresh new 11.2.0.2 Database on Solaris 10.
    Everything was straightforward on the parameter side! I tried a custom install as well as the general purpose template. When installing with DBCA, I set every parameter around the DB name in lowercase.
    With this, some questions are popping into my mind regarding certain parameters after installation.
    First, the %dump_dest parameters contain the db name twice in their paths (ocpdb in my case):
    background_dump_dest       /u01/app/oracle/diag/rdbms/ocpdb/ocpdb/trace
    user_dump_dest                 /u01/app/oracle/diag/rdbms/ocpdb/ocpdb/trace
    core_dump_dest                 /u01/app/oracle/diag/rdbms/ocpdb/ocpdb/cdump
    Is it normal to have ..../rdbms/dbname/dbname/..... as path, with dbname/dbname? Why?
    Second, the question regarding the directory structure under fast_recovery_area (new term for flash_recovery_area). The directory structure:
    oracle@enalab13:/u01/app/oracle/fast_recovery_area$ ls -l
    total 2
    drwxr-x--- 2 oracle oinstall 512 2010-10-28 19:53 ocpdb
    drwxr----- 5 oracle oinstall 512 2010-10-29 07:44 OCPDB
    oracle@enalab13:/u01/app/oracle/fast_recovery_area$ ls -l ocpdb
    total 9528
    -rw-r----- 1 oracle oinstall 9748480 2010-10-31 21:09 control02.ctl
    oracle@enalab13:/u01/app/oracle/fast_recovery_area$ ls -l OCPDB/
    total 3
    drwxr----- 5 oracle oinstall 512 2010-10-31 03:48 archivelog
    drwxr----- 3 oracle oinstall 512 2010-10-29 07:44 autobackup
    drwxr----- 3 oracle oinstall 512 2010-10-29 07:43 backupset
    Why am I having a subdir with dbname in uppercase AND in lowercase? Should I specify dbname in uppercase at db creation to have all files under the same directory, or in lowercase? Or, is it normal?
    I want to know how to do it well before reinstalling a fresh database.
    Thanks
    Bruno
    Edited by: blavoie on Oct 31, 2010 6:18 PM
    Edited by: blavoie on Oct 31, 2010 6:20 PM

    Hi,
    I just reinstalled all from scratch, everything in lowercase as well in environment variables and dbname in dbca:
    oracle@enalab13:~$ echo $ORACLE_SID
    ocpdb
    Fast recovery area directories; the dates prove that it's my fresh install:
    oracle@enalab13:/u01/app/oracle$ ll fast_recovery_area/
    total 2
    drwxr-x--- 2 oracle oinstall 512 2010-11-02 11:06 ocpdb
    drwxr-x--- 4 oracle oinstall 512 2010-11-02 11:24 OCPDB
    oracle@enalab13:/u01/app/oracle$ ll -R fast_recovery_area/
    fast_recovery_area/:
    total 2
    drwxr-x--- 2 oracle oinstall 512 2010-11-02 11:06 ocpdb
    drwxr-x--- 4 oracle oinstall 512 2010-11-02 11:24 OCPDB
    fast_recovery_area/ocpdb:
    total 9528
    -rw-r----- 1 oracle oinstall 9748480 2010-11-02 11:34 control02.ctl
    fast_recovery_area/OCPDB:
    total 2
    drwxr-x--- 3 oracle oinstall 512 2010-11-02 11:24 archivelog
    drwxr-x--- 2 oracle oinstall 512 2010-11-02 11:06 onlinelog
    fast_recovery_area/OCPDB/archivelog:
    total 1
    drwxr-x--- 2 oracle oinstall 512 2010-11-02 11:24 2010_11_02
    fast_recovery_area/OCPDB/archivelog/2010_11_02:
    total 47032
    -rw-r----- 1 oracle oinstall 48123392 2010-11-02 11:24 o1_mf_1_5_6f0c9pnh_.arc
    fast_recovery_area/OCPDB/onlinelog:
    total 0
    Some interesting output asked for earlier in the post:
    SQL> archive log list
    Database log mode              Archive Mode
    Automatic archival             Enabled
    Archive destination            USE_DB_RECOVERY_FILE_DEST
    Oldest online log sequence     4
    Next log sequence to archive   6
    Current log sequence           6
    SQL> show parameter recovery
    NAME                                 TYPE        VALUE
    db_recovery_file_dest                string      /u01/app/oracle/fast_recovery_area
    db_recovery_file_dest_size           big integer 4032M
    recovery_parallelism                 integer     0
    SQL> show parameter control_files
    NAME                                 TYPE        VALUE
    control_files                        string      /u01/app/oracle/oradata/ocpdb/control01.ctl,
                                                         /u01/app/oracle/fast_recovery_area/ocpdb/control02.ctl
    SQL> show parameter instance_name
    NAME                                 TYPE        VALUE
    instance_name                        string      ocpdb
    SQL> show parameter db_name
    NAME                                 TYPE        VALUE
    db_name                              string      ocpdb
    SQL> show parameter log_archive_dest_1
    NAME                                 TYPE        VALUE
    log_archive_dest_1                   string
    log_archive_dest_10                  string
    log_archive_dest_11                  string
    log_archive_dest_12                  string
    log_archive_dest_13                  string
    log_archive_dest_14                  string
    log_archive_dest_15                  string
    log_archive_dest_16                  string
    log_archive_dest_17                  string
    log_archive_dest_18                  string
    log_archive_dest_19                  string
    SQL> show parameter %dump_dest 
    NAME                                 TYPE        VALUE
    background_dump_dest                 string      /u01/app/oracle/diag/rdbms/ocpdb/ocpdb/trace
    core_dump_dest                       string      /u01/app/oracle/diag/rdbms/ocpdb/ocpdb/cdump
    user_dump_dest                       string      /u01/app/oracle/diag/rdbms/ocpdb/ocpdb/trace
    I think, next time, I'll install everything regarding the Oracle SID in uppercase...
    Maybe these are details that I don't need to care about... but it seems that something odd is happening with the management of the fast_recovery_area...
    Thanks
    Bruno

  • Questions regarding Optimizing formulas in IP

    Dear all,
    This weekend I had a look at the webinar on Tips and Tricks for Implementing and Optimizing Formulas in IP.
    I'm currently working on an IP implementation and encountered the following points when getting more in-depth.
    I’d appreciate very much if you could comment on the questions below.
    <b>1.)</b> I have a question regarding optimization 3 (slide 43) about Conditions:
    ‘If the condition is equal to the filter restriction, then the condition can be removed’.
    I agree fully on this, but have a question on using the Planning Function (PF) in combination with a query as DataProvider.
    In my query I have a filter in the Characteristic restriction.
    It contains variables on fiscal year, version. These only allow single value entry.
    The DataProvider acts as filter for my PF. So I’d suppose I don’t need a condition for my PF since it is narrowed down on fiscal year and version by my query.
    <b>a.) Question: Is that correct?</b>
    I just want to make sure that I don't get too many records as input for my PF. <u>How detrimental to performance is it to use conditions anyway?</u>
    <b>2.)</b> I read in training BW370 (IP-training) that a PF is executed for the currently set filter (navigational state) in the query and that characteristics that are used in restricted keyfigures are ignored in the filter.
    So, if I use version in the restr. keyfig it will be ignored.
    <b>Questions:
    a.) Does this mean that the PF is executed for all versions in the system or for the versions that are in the filter of the Characteristic Restrictions and not the currently set filter?</b>
    <b>b.) I’d suppose the dataset for the PF can never be bigger than the initial dataset that is selected by the query, right?
    c.) Is the PF executed against the navigational state anyway when I use filtering? I have an example where I filter on the field customer, thus making my dataset smaller, but executing the PF still takes the same amount of time.
    d.) And I also encounter that the PF is executed twice. A popup comes up showing messages regarding the execution. After pressing OK, it seems the PF runs again...</b>
    <b>3.)</b> If I use variables in my Planning Function I don’t want to fill in the parameter VAR_VALUE with a value. I want to use the variable which is ready for input from the selection screen of the query.
    So when I run the PF it should use the BI-variable. It’s no problem to customize this in the Modeler. But when I go into the frontend the field VAR_VALUE stays empty and needs a value.
    <b>Question:
    a.) What do I enter here? For parameter VAR_NAME I use the variable name, but what do I use for parameter VAR_VALUE?  Also the variable name?</b>
    <b>4.)</b> Question regarding optimization 6 (slide 48) about Formulas on MultiProviders:
    'If the formula is using data of only one InfoProvider but is defined on a MultiProvider, then the complete formula should be moved to the single base InfoProvider'.
    In our case we have three cubes in the MP, two realtime and one normal one. Right now we have one AggrLevel (AL) on top of the MP.
    For one formula I can use one cube, so it's better to create another AL with the formula based on that cube.
    For another formula I need the two <u>realtime</u> cubes. This is interesting regarding the optimization statement.
    <b>Question:
    a.) Can I use the AL on the MP then, or is it better to create a <u>new</u> MP with only these two cubes and create an AL on top of that, and then create the formula on the AL based on the MP with the two cubes?</b>
    This makes the architecture more complex.
    Thanks a lot in advance for your appreciated answers!
    Kind regards, Harjan

    Marc,
    Some additional questions regarding locking.
    I find that the dataset that is locked depends on the restrictions made in the 'Characteristic Restrictions' part of the query.
    Restrictions in the 'Default Values' part are not taken into account; in that case all data records of the characteristic are locked.
    Q1: Is that correct?
    To give an example: assume you restrict customer to a hierarchy node in Default Values. If you want people to plan concurrently, this is not possible since all customers are locked. When the customer restriction is moved to the Characteristic Restrictions, the system only locks the specific customer hierarchy node and people can plan concurrently.
    Q2: What about variables used in restricted key figures, like a variable for fiscal year/period? Is only this fiscal year/period locked then?
    Q3: We'd like to lock on a navigational attribute. The nav. attr. is put as a variable in the filter of the Characteristic Restrictions. Does the system then only lock this selection for the nav. attr.? Or do I have to change my locking settings in RSPLSE?
    Then a question regarding locking of data for functions:
    Assume you use the BEx Analyzer and use the query as data_provider_filter for your planning function. You use restricted key figures with the characteristic Version. The first column contains the amount for version 1 and the second column contains the amount for version 2.
    In the Characteristic Restrictions you've restricted version to the values '1' and '2'.
    When executing the input-ready query, versions 1 and 2 are locked (due to the selection in the Characteristic Restrictions).
    But when executing the planning function, all versions are locked (*)
    Q4: True?
    Kind regards, Harjan

  • Questions regarding new functionalities in EhP 4 - Reporting Financials 2

    Dear Forum,
    in a project we would like to use some of the new functionality from Reporting Financials 2, e.g. DataSource 0FI_AA_20 for loading depreciation and amortization to BI for following years, as this cannot be done by the old extractor.
    We are now looking for reliable information about the impact and the changes made in ERP if we switch on the Reporting Financials 2 functionality via SFW5. Will the old extractors still work? Will all reports in ERP work without problems? Is there any impact on business processes? Or is this just additional functionality which will not affect the current implementation?
    Can anybody give information about this?
    Thanks, regards
    Lars
    Edited by: Lars Hermanns on Jun 2, 2010 10:29 AM

  • Question regarding DocumentDB RU consumption when inserting documents & write performance

    Hi guys,
    I do have some questions regarding the DocumentDB Public Preview capacity and performance quotas:
    My use case is the following:
    I need to store about 200,000,000 documents per day with a maximum of about 5000 inserts per second. Each document has a size of about 200 bytes.
    According to the documentation (http://azure.microsoft.com/en-us/documentation/articles/documentdb-manage/) I understand that I should be able to store about 500 documents per second with single inserts and about 1000 per second with a batch insert using a stored procedure. This would result in the need of at least 5 CUs just to handle the inserts.
    Since one CU consists of 2000 RUs, I would expect the RU usage to be about 4 RUs per single document insert or 100 RUs for a single SP execution with 50 documents.
    When I look at the actual RU consumption, I get values I don't really understand:
    Batch insert of 50 documents: about 770 RUs
    Single insert: about 17 RUs
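    (As a quick sanity check, here is the expected-throughput arithmetic as a minimal Python sketch; it uses only the documentation figures quoted above, and the variable names are mine:)
    # Expected RU cost per operation, from the documented per-CU throughput.
    RUS_PER_CU = 2000                 # one capacity unit
    singles_per_sec_per_cu = 500      # documented single inserts per second per CU
    batch_docs_per_sec_per_cu = 1000  # documented documents per second via SP batches
    print(RUS_PER_CU / singles_per_sec_per_cu)            # 4.0 RUs expected per single insert
    print(RUS_PER_CU / (batch_docs_per_sec_per_cu / 50))  # 100.0 RUs expected per 50-document batch
    print(5000 / batch_docs_per_sec_per_cu)               # 5.0 CUs just to handle 5000 inserts/s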
    Example document:
    {"id":"5ac00fa102634297ac7ae897207980ce","Type":0,"h":"13F40E809EF7E64A8B7A164E67657C1940464723","aid":4655,"pid":203506,"sf":202641580,"sfx":5662192,"t":"2014-10-22T02:10:34+02:00","qg":3}
    The consistency level is set to “Session”.
    I am using the SP from the example c# project for batch inserts and the following code snippet for single inserts:
    await client.CreateDocumentAsync(documentCollection.DocumentsLink, record);
    Is there any flaw in my assumption (OK... obviously) regarding the throughput calculation, or could you give me some advice on how to achieve the throughput stated in the documentation?
    With the current performance I would need to buy at least 40 CUs, which wouldn't be an option at all.
    I have another question regarding document retention:
    Since I would need to store a lot of data per day, I also would need to delete as much data per day as I insert:
    The data is valid for at least 7 days (it actually should be 30 days, depending on my options with DocumentDB).
    I guess there is nothing like a retention policy for documents (this document is valid for X days and will automatically be deleted after that period)?
    Since I guess deleting data on a single-document basis is no option at all, I would like to create a document collection per day and delete each collection after a specified retention period (see the small sketch after the collection list below).
    Those historic collections would never change but would only receive queries. The only problem I see with creating collections per day is the missing throughput:
    As I understand it, the throughput is split equally according to the number of available collections, which would result in "missing" throughput on the actual hot collection (hot meaning the only collection into which I would actually insert documents).
    Is there any (better) way to handle this use case than to buy enough CUs so that the actual hot collection gets the needed throughput?
    Example: 
    1 CU -> 2000 RUs
    7 collections -> 2000 / 7 = 286 RUs per collection (per CU)
    Needed throughput for hot collection (values from documentation): 20,000
    => 70 CUs (20,000 / 286)
    vs. 10 CUs when using one collection and batch inserts or 20 CUs when using one collection and single inserts.
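    (The same arithmetic as the example above, as a small Python sketch; all numbers are taken from this post:)
    # Throughput split across 7 collections sharing each CU's 2000 RUs.
    RUS_PER_CU = 2000
    collections = 7
    hot_rus_needed = 20000  # needed throughput for the hot collection
    rus_per_collection_per_cu = RUS_PER_CU / collections       # ~285.7 RUs
    print(round(rus_per_collection_per_cu))                    # 286
    print(round(hot_rus_needed / rus_per_collection_per_cu))   # 70 CUs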
    I know that DocumentDB is currently in preview and that it is not possible to handle this use case as-is because of the current limit of 10 GB per collection. I am just trying to do a POC so that I can switch to DocumentDB when it is publicly available.
    Could you give me any advice on whether this kind of use case can, or should, be handled with DocumentDB? I currently use Table Storage for this case (currently with a maximum of about 2500 inserts per second) but would like to switch to DocumentDB, since I had to optimize for writes per second with Table Storage and I get horrible query execution times with Table Storage because of full table scans.
    Once again, my desired setup:
    200,000,000 inserts per day / maximum of 5000 writes per second
    Collection 1.2 -> Hot Collection: All writes (max 5000 p/s) will go to this collection. Will also be queried.
    Collection 2.2 -> Historic data, will only be queried; no inserts
    Collection 3.2 -> Historic data, will only be queried; no inserts
    Collection 4.2 -> Historic data, will only be queried; no inserts
    Collection 5.2 -> Historic data, will only be queried; no inserts
    Collection 6.2 -> Historic data, will only be queried; no inserts
    Collection 7.2 -> Historic data, will only be queried; no inserts
    Collection 1.1 -> Old, so delete whole collection
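    (A minimal Python sketch of the collection-per-day rotation described above; the "docs-" naming is hypothetical, and no DocumentDB API calls are shown, only the calendar logic that picks the write target and the collection to drop:)
    from datetime import date, timedelta

    RETENTION_DAYS = 7

    def hot_collection(today):
        # all of the day's inserts go to this collection
        return "docs-" + today.isoformat()

    def collection_to_drop(today):
        # instead of deleting documents one by one, drop the expired collection
        return "docs-" + (today - timedelta(days=RETENTION_DAYS)).isoformat()

    print(hot_collection(date(2014, 10, 22)))      # docs-2014-10-22
    print(collection_to_drop(date(2014, 10, 22)))  # docs-2014-10-15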
    As a matter of fact, the perfect setup would be to have only one (huge) collection with automatic document retention... but I guess this won't be an option at all?
    I hope you understand my problem, and I would appreciate advice on whether this is at all possible, or will be possible in the future, with DocumentDB.
    Best regards and thanks for your help

    Hi Aravind,
    first of all, thanks for your reply regarding my questions.
    I sent you a mail a few days ago, but since I did not receive a response I am not sure it got through.
    My main question regarding the actual usage of RUs when inserting documents is still my main concern, since I cannot insert nearly as many documents per second and CU as expected.
    According to the documentation (http://azure.microsoft.com/en-us/documentation/articles/documentdb-manage/) I understand that I should be able to store about 500 documents per second with single inserts and about 1000 per second with a batch insert using a stored procedure (20 batches per second containing 50 documents each).
    As described in my post, the actual usage is multiple (actually 6-7) times higher than expected... even when running the C# examples provided at:
    https://code.msdn.microsoft.com/windowsazure/Azure-DocumentDB-NET-Code-6b3da8af/view/SourceCode
    I tried all the ideas Steve posted (manual indexing & lazy indexing mode) but was not able to reduce RU consumption to a point where 500 inserts per second were anywhere near possible.
    Here again my findings regarding RU consumption for batch inserts:
    Automatic indexing on: 777 RUs for 50 documents
    Automatic indexing off & mandatory path only: 655 RUs for 50 documents
    Automatic indexing off & IndexingMode Lazy & mandatory path only: 645 RUs for 50 documents
    Expected result: approximately 100 RUs (2000 RUs => 20x batch insert of 50 => 100 RUs per batch)
    Since DocumentDB is still in preview, I understand that it is not yet capable of handling my use case regarding throughput, collection size, number of collections and possible CUs, and I am fine with that.
    If I am able to (at least nearly) reach the stated performance of 500 inserts per second per CU, I am totally fine for now. If not, I have to move on and look for other options... which would also be "fine". ;-)
    Is there actually any working example code that manages to do 500 single inserts per second with one CU's 2000 RUs, or is this a totally theoretical value? Or is it just because of being in preview, and the stated values are planned to work?
    Regarding your feedback:
    "...another thing to consider is if you can amortize the request rate over the average of 200 M requests/day = 2000 requests/second, then you'll need to provision 16 capacity units instead of 40 capacity units. You can do this by catching 'RequestRateTooLargeExceptions' and retrying after the server-specified retry interval..."
    Sadly this is not possible for me, because I have to query the data in near real time for my use case... so queuing is not an option.
    "We don't support a way to distribute throughput differently across hot and cold collections. We are evaluating a few solutions to enable this scenario, so please do propose it as a feature at http://feedback.azure.com/forums/263030-documentdb as this helps us prioritize feature work. Currently, the best way to achieve this is to create multiple collections for hot data, and shard across them, so that you get more proportionate throughput allocated to them."
    I guess I could circumvent this by not clustering into "hot" and "cold" collections but into "hot" and "cold" databases with one or multiple collections each (if 10 GB remains the limit per collection), if there were a way to (automatically?) scale the CUs via an API. Otherwise I would have to manually scale down the DBs holding the historic data. I also added a feature request as proposed by you.
    Sorry for the long post, but I am planning the future architecture for one of our core systems and want to be sure that I am on the right track.
    So if you are able to answer just one question, it would be this:
    How can I achieve the stated throughput of 500 single inserts per second with one CU's 2000 RUs in reality? ;-)
    Best regards and thanks again

  • Questions regarding DTP

    Hello Experts
    I am trying to load data from an ODS to a cube and have the following questions regarding DTP behaviour.
    1) I have set the extraction mode of the DTP to delta, as I understand that it behaves like an init with data transfer the first time. However, it fetches the records only from the change log of the ODS. Then what about all the records that are in the active table? If it cannot fetch all the records from the active table, how can we term it an init with data transfer?
    2) Do I need to have two separate DTPs - one for a full load to fetch all the data from the active table and another to fetch deltas from the change log?
    Thanks,
    Rishi

    1. When you choose Delta as the extraction mode, you get the data only from the change log table.
    The change log table will contain all the records.
    Suppose you run a load to a DSO which contains 10 records and activate it. Those 10 records are now available in the active table as well as the change log.
    Now in the second load you have 1 new record and 1 changed record. When you activate, your active table will have 11 records. The change log will have before- and after-image records for the changed record, along with the 11th (new) record.
    The cube needs those images so that the data doesn't get mismatched with the old data.
    2. If you run a full load to the cube from the DSO, you need to delete the old request after the load, which is not necessary in the previous case.
    In BI 7.0, when you choose full load as the extraction mode, you have the flexibility to load the data either from the active table or the change log table.
    Thanks
    Sreekanth S

  • Questions regarding the APEX sample shopping cart

    I am fairly new to APEX development, but have a few questions regarding the sample APEX shopping cart:
    1) Is there a way to integrate Google Checkout payments with the sample APEX shopping cart?
    2) Is there any way to list a service that involves monthly recurring payments?
    3) Is there any other APEX-based shopping cart available?
    Thank you,
    Ashok

    Hi Sam,
    I am having deadline monitoring for my shopping cart, where reminders are generated for the first day and the second day. After both reminders have been generated and the approver then changes the cost center, asset, or order, the workflow should retrigger and send the approval mail to the new approver. But it is not going to the new approver, so the workflow is not retriggering in my case; I need to restart my workflow again if any changes occur.
    On the second point, my functional guy says that when the approver changes any text or a date, it should come up for accepting the changes; and in the approval preview only the approver should be shown, not the creator. When the creator of the SC accepts the changes, it shows as approved under the creator's name, but he says it should show the approver's name, not the creator's.
    So can you please help me to get a solution for this?
    Thanks in advance

  • 3 questions regarding duplicate script

    3 questions regarding duplicate script
    Here is my script for copying folders from one Mac to another Mac via Ethernet:
    (This is not meant as a backup, just to automatically distribute files to the other Mac.
    For backup I'm using Time Machine.)
    cop2drop("Macintosh HD:Users:home:Desktop", "zome's Public Folder:Drop Box:")
    cop2drop("Macintosh HD:Users:home:Documents", "zome's Public Folder:Drop Box:")
    cop2drop("Macintosh HD:Users:home:Pictures", "zome's Public Folder:Drop Box:")
    cop2drop("Macintosh HD:Users:home:Sites", "zome's Public Folder:Drop Box:")
    on cop2drop(sourceFolder, destFolder)
    tell application "Finder"
    duplicate every file of folder sourceFolder to folder destFolder
    duplicate every folder of folder sourceFolder to folder destFolder
    end tell
    end cop2drop
    1. One problem I haven't sorted out yet: how can I modify this script so that all source folders (incl. their files and sub-folders) get copied as corresponding destination folders (same names) under the Drop Box? (At the moment the files and sub-folders arrive directly in the Drop Box and mix with the other destination files and sub-folders.)
    2. Every time before a duplicate starts, I have to confirm this message:
    "You can put items into "Drop Box", but you won't be able to see them. Do you want to continue?"
    How can I avoid or override this message? (This script will run at night, when no one is near the computer to press OK again and again.)
    3. A few minutes after the script starts running I get:
    "AppleScript Error - Finder got an error: AppleEvent timed out."
    How can I stop this?
    Thanks in advance for your help!

    Hello
    In addition to what red_menace has said...
    1) I think you may still use System Events' 'duplicate' command if you wish.
    Something like SCRIPT1a below. (The handler is modified so that it requires only one parameter.)
    *Note that the 'duplicate' command of Finder and System Events duplicates the source into the destination. E.g. the statement 'duplicate folder "A:B:C:" to folder "D:E:F:"' will result in the duplicated folder "D:E:F:C:".
    --SCRIPT1a
    cop2drop("Macintosh HD:Users:home:Documents")
    on cop2drop(sourceFolder)
    set destFolder to "zome's Public Folder:Drop Box:"
    with timeout of 36000 seconds
    tell application "System Events"
    duplicate folder sourceFolder to folder destFolder
    end tell
    end timeout
    end cop2drop
    --END OF SCRIPT1a
    2) I don't know the said error -8068 thrown by Finder. It's likely a private Finder error code which is not listed in any of the public headers. And if it is a Finder thing, you may or may not see a different error, which would be more helpful, when using System Events to copy things into the Public Folder. Also, you may create a normal folder, e.g. named 'Duplicate', in the Public Folder and use it as the destination.
    3) If you use rsync(1) and want to preserve extended attributes, resource forks and ACLs, you need to use the -E option. So at least 'rsync -aE' would be required. And I remember the long thread that failed to tame rsync for your backup project...
    4) As for how to get the POSIX path of a file/folder in AppleScript, there are different ways.
    Strictly speaking, POSIX path is a property of the alias object. So the code to get the POSIX path of a folder whose HFS path is 'Macintosh HD:Users:home:Documents:' would be:
    POSIX path of ("Macintosh HD:Users:home:Documents:" as alias)
    POSIX path of ("Macintosh HD:Users:home:Documents" as alias)
    --> /Users/home/Documents/
    The first one is the cleanest code because the HFS path of a directory is supposed to end with ":". The second one also works because the 'as alias' coercion will detect whether the specified node is a file or a directory and return a proper alias object.
    And as for the code :
    set src to (sourceFolder as alias)'s POSIX Path's text 1 thru -2
    It is to strip the trailing '/' from the POSIX path of the directory and get '/Users/home/Documents', for example. I do this because in shell commands the trailing '/' of a directory path is not required, and indeed, if it's present, it makes certain commands behave differently.
    E.g.
    Provided /a/b/c and /d/e/f are both directories, cp /a/b/c /d/e/f will copy the source directory into the destination directory, while cp /a/b/c/ /d/e/f will copy the contents of the source directory into the destination directory.
    rsync(1) behaves in the same manner as cp(1) regarding the trailing '/' of the source directory.
    ditto(1) and cp(1) behave differently for the same arguments, i.e., ditto /a/b/c /d/e/f will copy the contents of the source directory into the destination directory.
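    (For what it's worth, the same path arithmetic can be written as a Python sketch; the paths are made up, and note that shutil does not preserve resource forks or ACLs the way ditto(1) with the options above does:)
    import os
    import shutil

    def cop2drop(source_folder, dest_folder):
        src = source_folder.rstrip("/")  # strip any trailing '/', like text 1 thru -2
        # ${dst}/${src##*/}: copy the source directory itself, not just its contents
        dst = os.path.join(dest_folder, os.path.basename(src))
        shutil.copytree(src, dst)

    cop2drop("/Users/home/Documents", "/Volumes/zome/Public Folder/Drop Box")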
    5) In case it helps, here are revised versions of the previous SCRIPT2 and SCRIPT3, which require only one parameter. They will also append any error output to a file named 'cop2dropError.txt' on the current user's desktop.
    *These commands with the current options will preserve extended attributes, resource forks and ACLs when run under 10.5 or later.
    --SCRIPT2a - using cp(1)
    cop2drop("Macintosh HD:Users:home:Documents")
    on cop2drop(sourceFolder)
    set destFolder to "zome's Public Folder:Drop Box:"
    set src to (sourceFolder as alias)'s POSIX Path's text 1 thru -2
    set dst to (destFolder as alias)'s POSIX Path's text 1 thru -2
    set sh to "cp -pR " & quoted form of src & " " & quoted form of dst
    do shell script (sh & " 2>>~/Desktop/cop2dropError.txt")
    end cop2drop
    --END OF SCRIPT2a
    --SCRIPT3a - using ditto(1)
    cop2drop("Macintosh HD:Users:home:Documents")
    on cop2drop(sourceFolder)
    set destFolder to "zome's Public Folder:Drop Box:"
    set src to (sourceFolder as alias)'s POSIX Path's text 1 thru -2
    set dst to (destFolder as alias)'s POSIX Path's text 1 thru -2
    set sh to "src=" & quoted form of src & ";dst=" & quoted form of dst & ¬
    ";ditto \"${src}\" \"${dst}/${src##*/}\""
    do shell script (sh & " 2>>~/Desktop/cop2dropError.txt")
    end cop2drop
    --END OF SCRIPT3a
    Good luck,
    H
    Message was edited by: Hiroto (fixed typo)

  • BIG Question regarding INIT Loads..I am scratching my head for past 2 days

    Hello All,
    I have one big question regarding the INIT load. Let me explain my question with an example:
    I have a datasource, say 2LIS_02_ITM, for which I ran the setup tables in JAN 2008. The setup table was filled with, say, 10,000 records. After that we have been running delta loads every day until today (almost 30,000 records). Everything is going fine.
    Imagine I lost a delta (a delta failed and I didn't do a repeat delta for 3 days, by which time the next delta had come). Hence I am planning to do an init load again without deleting / filling the setup tables. Now my question is:
    1) Will I get the 10,000 records which were initially filled into the setup tables, or will I get 40,000 records (the 10,000 + 30,000 records)?
    2) If it brings 40,000, how is that possible, as I haven't filled the setup tables?
    3) Do I need to fill the setup tables each time to start the init load?
    Waiting for your guidance.........

    Yes... to answer your question.
    But again, I suggest not going by that method. Only one delta has failed, so why do you want to wipe away the timestamp of the last init just because of one delta failure? It is possible to correct it.
    First of all, I hope you have stopped the process chain or put the delta jobs on hold.
    Now, to answer your doubt about the red status: mark the status red in each of the data targets before deleting the failed request from there (I usually do not just delete the requests from the data targets - I mark them red and then delete them, to be safe).
    Do not forget to delete the subsequent requests after the failed delta (again, I always mark them red before doing so).
    Regarding the actual request itself, you need not mark it red in the monitor. You have got to correct this request and make it successful - it will automatically turn green.
    If, after successfully correcting the request, it still does not turn green, you can set the QM status to green. The next delta will recognize this as a successful request and bring in the next delta (this happens in cases where you have the InfoPackage in a process chain, where it shows green because subsequent processes were not completed, etc.).
    Let me know in case you have any questions.

  • A few questions regarding SAP EWM and WM

    Hello,
    I have a few general questions regarding the differences between EWM and WM:
    1) What are the benefits of EWM-MFS compared to WM + TRM (especially in terms of SPS)?
    2) The Quality Inspection Engine (QIE) can also be used by SAP WM, right?
    3) There is RFID-support in EWM, so EWM is able to communicate directly with SAP Auto-ID, right?
         But I have heard that SAP PI is necessary in some cases, when and why?
    4) Is there something new in EWM regarding goods receipt processing?
    I have read that the splitting of inbound delivery items is possible in EWM in case of missing inbound delivery items. Is this really a new feature?
    5) EWM can easily be connected to SAP BW for reporting purposes, what about WM?
    6) What about scalability if the warehouse grows?
    7) Is there any information about the costs of using EWM compared to WM and vice versa?
    I appreciate any kind of help.
    Thank you.
    Dennis

    Hi,
    1. What does SAP offer as a product for dWM? Is it a "special" installation of the SAP framework dedicated to WM, or is it a standard ECC box where only the WM module is used?
    There are two version of DWM. One is Decentralized WM as a part of ECC and another one is EWM as a part of SCM. Both are decentralized.
    2. My understanding is that the interfaces between ERP and dWM can support some non-real-time operations (like when the main ERP system is down, the dWM can still perform some operations). Considering that the transactional interfaces are based on BAPIs, how does SAP achieve this interfacing in non-real-time environments? I am thinking you cannot complete the different processing unless both systems are up.
    When it comes to interfaces, DWM needs deliveries from ERP. That's it; WM can function from there independently of the ERP system. But WM definitely needs to communicate back PGI and PGR and other posting changes. So, in case ERP is down, even though PGI / PGR is done at the WM end, they may not be communicated back to ERP. But WM generates PGI/PGR IDocs, which can always be reprocessed at the WM end to resend them to ERP so that inventory levels are accurate.
    Hope that helps
    Thanks
    Vinod.

  • Question Regarding MIDI and Sample Accuracy

    Hi,
    I have 2 questions regarding MIDI.
    1. MIDI is moved by ticks. In the arrange window, however, you can move a region by samples. When doing this, you can move within values of the ticks (which you can see in the position box that pops up). Now, will this MIDI note actually be played back at that specific sample point, or will the event be rounded to the closest tick? (For example, if I have a MIDI note directly on 1.1.1.1 and I move the REGION in the arrange... will that MIDI note now fall on the sample that I have moved the region to, or will it be rounded to the closest tick?)
    2. When making a MIDI template from an audio region, will the MIDI information land exactly on the sample of the transient, or will it be rounded to the closest tick?
    I've looked through the manual, and couldn't find any specific answer to these questions.
    Thanks!
    Message was edited by: Matthew Usnick

    Ok, I've done some experimenting, and here are my results.
    I believe those numbers ARE samples. I came to this conclusion by counting (for some reason it starts on 11) and cutting a region to be 33 samples long (so, minus 11, is 22 actual samples). I then went to the Audio Bin window and chose to view region length as samples. And there it said it: 22 samples. So, you can in fact move MIDI regions by samples!
    Second, I wanted to see if the MIDI notes in the region itself would be quantized to the nearest tick. I cut a piece of audio so it had a 1-sample attack (zoomed in as far as I could in the sample editor, selected the smallest portion, faded in, and made the start point the region start position). I saved the region as a new audio file and loaded it up in the EXS sampler.
    I then made a MIDI region and triggered the sample on beat 1 (quantized, on the money). I then went into the arrange window, made a fixed cycle length, and bounced the audio. I then moved the MIDI region by one sample to the right. I did this 22 times (which is the number of samples in a tick at 120 BPM, apparently). After bouncing all of these (the cycle position remained fixed; only the MIDI region was moving), I imported all the audio into the arrange on new tracks, and YES!!! The sample start was cascaded by a sample each time!
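    (The '22 samples per tick' figure roughly checks out; a small Python sketch, assuming a 44.1 kHz sample rate and Logic's 960-ticks-per-quarter internal resolution, neither of which is stated in the post:)
    sample_rate = 44100  # samples per second (assumed)
    bpm = 120            # tempo used in the experiment
    ppq = 960            # ticks per quarter note (assumed)
    samples_per_quarter = sample_rate * 60 / bpm  # 22050.0 samples per quarter note
    print(samples_per_quarter / ppq)              # ~22.97 samples per tick, close to the 22 counted above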
    SO.
    Not only can you move MIDI regions by samples, but the positions are NOT quantized to Logic's ticks!
    This is very good news, and glad I worked this out!
    (if anyone thinks this sounds wrong, please correct me, but I'm pretty sure I proved it, in my test)
    Message was edited by: Matthew Usnick
