Questions regarding managing Siebel

Hey all,
I have the following questions:
1- What responsibility or other mechanism would give Siebel users read-only access to both Siebel Call Center (all screens) and Siebel eService?
2- How can customers be limited to seeing only certain Service Request statuses in Siebel eService (for example, a status of Waiting for Customer Response)?
3- How can I manage the Products screen in Siebel so that I can enter names of products and sub-products? (Could I rename the Parent Product and Child Product fields to Products and Sub-Products?)
Regards,
Edited by: Hamzeh Al-Karmi on Sep 14, 2010 1:57 AM

OK, the bank that is implementing Siebel wants the customer to see only certain Service Request statuses whenever a status is changed in Siebel Call Center. Whenever a status changes, for example from Unassigned-Development to Assigned-Development, the customer gets an email from Siebel saying that the SR he opened has changed, so he sees the new status (Assigned-Development). The bank does not want the customer to see this status, or any status that should stay between the employees. Instead, the customer should only be able to see customer-facing statuses, such as Waiting for Customer Response. The customer is already complaining that he gets a lot of emails from Siebel regarding the Service Request he opened, and most of those emails are for statuses changed to Assigned, Unassigned, or Under Development: technical statuses that, as mentioned before, should be seen by the employees only and not by the customer.
What do you think?

Similar Messages

  • A question regarding Management pack dependency.

    Hi All,
    I am new to SCOM and have a question regarding management pack dependency.
    My question: is a dependency required when new alerts are created in an unsealed MP and the object class selected during alert creation (e.g., Windows Server 2012 Full Operating System) comes from a sealed management pack?
    For example, I have a sealed Windows Server 2012 monitoring management pack, and I have made a custom one for Windows Server 2012. If the custom pack is not dependent on the sealed Windows Server 2012 monitoring management pack, can I not create any alerts in the custom management pack targeting the class Windows Server 2012 Full Operating System?

    Hi CyrAz,
    Thank you for the reply. If your understanding and mine are the same, then look at what happened below.
    I created an alert monitor targeting a Windows Server 2012 class in my custom management pack, which is not dependent on the Windows Server 2012 management pack. How was I able to create it when the dependency is not there at all? If our understanding is correct, an error should have been thrown while creating the monitor itself, right? So how was SCOM able to create it?
    I was able to create a monitor targeting Windows Server 2012 Full Operating System, and an alert, in the custom management pack, which is not at all dependent on the sealed Windows Server 2012 MP.
    Look at the dependencies of that management pack: the Windows Server 2012 management pack is not listed, since my custom management pack does not depend on it.
    Then how is this possible?

  • A few questions regarding Oracle Scorecard and Strategy Management.

    Hi,
    I have the following questions regarding Oracle Scorecard and Strategy Management:
    1. In what ways can I show the status of a KPI? We have colors and symbols; what are the others?
    2. Can we keep a log of KPIs, store them, and keep a report of the feedback/actions taken on them?
    3. Does Scorecard and Strategy Management have the ability to retain a history of feedback and a log of entries, i.e., date/time and user name?
    4. Does Scorecard and Strategy Management have the ability to use common mathematical formulas, e.g., median, average, percentiles? Please describe.
    Thanks in advance for your help.


  • Questions regarding Outlook Web App, Remote Desktop, Remote Web Access and VPN Access

    Hi there,
    I want to ask a series of questions regarding Outlook Web App, Remote Desktop, Remote Web Access and VPN access, and I was hoping you could help me. Below are my questions.
    Outlook Web App - What do I need to configure in order to get my Exchange account to work with the OWA app on my iPhone? Is Office 365 required on the server that hosts Outlook Web App in our organisation? When I configure the settings and connect, I get the following message: "couldn't connect - We couldn't connect to the server. Check your information and make sure it's correct." I can connect with other devices using Outlook Web App.
    Remote Desktop - What do I need to configure in order to connect to my computer at work using Remote Desktop on my Windows Phone? When I configure the settings and connect, I get the following message: "Connection error - We couldn't connect to the remote PC. Make sure the PC is turned on and connected to the network, and that remote access is enabled. Inquiring minds may find this error code helpful: 0x204". I can connect with other devices using Remote Desktop. There are currently no RD Server settings in the Remote Desktop app on the Windows Phone, and the only way I can connect to my PC at work is via a third-party Remote Desktop app (not to be confused with the one by Microsoft). That app, however, is on a trial basis, times out every 5 minutes, and can only be used once every hour unless I purchase it for £2.99 from the App Store; I would ideally like to use the Microsoft Remote Desktop app, though.
    Remote Web Access - What do I need to configure in order to get Remote Web Access on my Windows Phone using a URL? When I log in using a URL, I get the following message: "There is a problem with this Web page. Please contact the person who manages the server". I can connect with other devices using Remote Web Access. Also, how do you enable the background option for Remote Web Access? I know how to do this in Remote Desktop but not in Remote Web Access. Remote Web Access works on PCs both onsite and offsite, and on my iPhone; the same issue also occurs with my Nokia 5230 regardless of whether I'm using Opera Mobile, Opera Mini, or the latest Nokia Browser.
    VPN access - How do you configure VPN access on a Windows Phone? I cannot find the protocols PPTP, L2TP, SSTP or IPsec for configuring VPN access on the Windows Phone; only IKEv2 is available.
    Many thanks,
    RocknRollTim

    Any help would be much appreciated.
    Kind regards,
    RocknRollTim

  • Basic question regarding an SSIS 2010 package where the source is Microsoft Excel 97-2005 and there is no Microsoft Office or Excel driver installed in production

    Hi all,
    I have one basic question regarding an SSIS 2010 package whose source is Microsoft Excel 97-2005. I want to know how this package works in production, where there is no Microsoft Office or Excel driver installed. To check whether an Excel driver is installed, I followed these steps: Start > Administrative Tools > Data Sources (ODBC) > Drivers, and I found only two drivers: SQL Server and SQL Server Native Client 11.0.
    The Windows edition is Windows Server 2008 R2 Enterprise, Service Pack 1, and the system type is a 64-bit operating system.
    We are running this package from SQL Server Agent, using the 32-bit DTExec (\\Machine_Name\d$\Program Files (x86)\Microsoft SQL Server\110\DTS\Binn\DTExec.exe /FILE "\\Machine_Name\d$\Folder_Name\EtL.dtsx" /CONFIGFILE "\\Machine_Name\d$\Folder_Name\Config.dtsConfig" /MAXCONCURRENT " -1 " /CHECKPOINTING OFF /REPORTING E). I opened the package and looked for the connection it uses: it has an "Excel Connection Manager" with ConnectionString=Provider=Microsoft.Jet.OLEDB.4.0;Data Source=F:\Fares.xls;Extended Properties="EXCEL 8.0;HDR=YES"; and the source is an 'Excel Source'.
    I discussed this with my DBA, and he said that SSIS has an inbuilt Excel driver, but I am not convinced.
    Could anyone please clear up my confusion/doubt?
    I have gone through various links, but my doubt is still not clear.
    Quick Reference:
    SSIS in 32- and 64-bits
    http://toddmcdermid.blogspot.com.au/2009/10/quick-reference-ssis-in-32-and-64-bits.html
    Why do I get "product level is insufficient..." error when I run my SSIS package?
    http://blogs.msdn.com/b/michen/archive/2006/11/11/ssis-product-level-is-insufficient.aspx
    How to run SSIS Packages using 32-bit drivers on 64-bit machine
    http://help.pragmaticworks.com/dtsxchange/scr/FAQ%20-%20How%20to%20run%20SSIS%20Packages%20using%2032bit%20drivers%20on%2064bit%20machine.htm
    Troubleshooting OLE DB Provider Microsoft.ACE.OLEDB.12.0 is not registered Error when importing data from an Excel 2007 file to SQL Server 2008
    http://www.mytechmantra.com/LearnSQLServer/Troubleshoot_OLE_DB_Provider_Error_P1.html
    How Can I Get a List of the ODBC Drivers that are Installed on a Computer?
    http://blogs.technet.com/b/heyscriptingguy/archive/2005/07/07/how-can-i-get-a-list-of-the-odbc-drivers-that-are-installed-on-a-computer.aspx
    Thanks Shiven:) If Answer is Helpful, Please Vote

    Hi S Kumar Dubey,
    In SSIS, the Excel Source and Excel Destination natively use the Microsoft Jet 4.0 OLE DB Provider, which is installed by SQL Server. The Microsoft Jet 4.0 OLE DB Provider deals with .xls files created by Excel 97-2003. To deal with .xlsx files created by Excel 2007, we need the Microsoft ACE OLEDB Provider. SQL Server doesn't install the Microsoft ACE OLEDB Provider; to get it, we can install the 2007 Office System Driver: Data Connectivity Components, the Microsoft Access Database Engine 2010 Redistributable, or the Microsoft Office suite.
    The drivers listed in the ODBC Data Source Administrator are ODBC drivers, not OLEDB drivers; therefore, the Excel Source/Destination in SSIS won't use the ODBC driver for Excel listed there by default. On a 64-bit Windows platform, there are two versions of the ODBC Data Source Administrator. The 64-bit ODBC Data Source Administrator is C:\Windows\System32\odbcad32.exe, while the 32-bit one is C:\Windows\SysWOW64\odbcad32.exe. The original 32-bit and 64-bit ODBC drivers are installed by the Windows operating system. By default, there are multiple 32-bit ODBC drivers and fewer 64-bit ODBC drivers installed on a 64-bit platform. To get more ODBC drivers, we can install the 2007 Office System Driver: Data Connectivity Components or the Microsoft Access Database Engine 2010 Redistributable.
    Besides, please note that the 2007 Office System Driver: Data Connectivity Components only installs 32-bit ODBC and OLEDB drivers, because it only has a 32-bit version, while the Microsoft Access Database Engine 2010 Redistributable has both a 32-bit version and a 64-bit version.
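    For reference, the ACE connection string for an .xlsx source mirrors the Jet string quoted above, with the provider and Excel version swapped (the file path here is just an illustration):
    ConnectionString=Provider=Microsoft.ACE.OLEDB.12.0;Data Source=F:\Fares.xlsx;Extended Properties="Excel 12.0 XML;HDR=YES";
    An .xls file can also be read through ACE by keeping Extended Properties at "Excel 8.0;HDR=YES" with the same provider.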
    If you have any questions, please feel free to ask.
    Regards,
    Mike Yin
    TechNet Community Support

  • Question regarding decode function.

    Hi friends,
    I have a question regarding using DECODE.
    I'll try to explain my problem using the emp table.
    Can you please help me out?
    For example, consider the emp table: I want to get the manager IDs of two employees concatenated together.
    I tried the following code:
    declare
        v_mgr_code  varchar2(20); -- holds the concatenated IDs, so a string rather than number(10)
        v_mgr1      number(4);
        v_mgr2      number(4);
    begin
        select mgr into v_mgr1
        from   scott.emp
        where  empno = 7369;

        select mgr into v_mgr2
        from   scott.emp
        where  empno = 7499;

        select v_mgr1 || '-' || v_mgr2 into v_mgr_code from dual;
    end;
    Now, instead of writing two SELECT statements, can I write one SELECT statement using the DECODE function?
    Edited by: user642856 on Mar 8, 2009 11:18 PM

    I don't know whether you're looking for this or not; if I am wrong, correct me.
    SELECT Ename||' '||initcap('manager is ')||
    DECODE(MGR,
            7566, (SELECT Ename
                    FROM Emp
                    WHERE Empno = 7566),
            7698, (SELECT Ename
                    FROM Emp
                    WHERE Empno = 7698),
            7782, (SELECT Ename
                    FROM Emp
                    WHERE Empno = 7782),
            7788, (SELECT Ename
                    FROM Emp
                    WHERE Empno = 7788),
            7839, (SELECT Ename
                    FROM Emp
                    WHERE Empno = 7839),
            7902, (SELECT Ename
                    FROM Emp
                    WHERE Empno = 7902),
            'Do Not Know') Manager
    from emp

    or
    SELECT Ename||' '||initcap('manager is ')||
    DECODE(MGR,
            7566, (SELECT empno
                    FROM Emp
                    WHERE Empno = 7566),
            7698, (SELECT empno
                    FROM Emp
                    WHERE Empno = 7698),
            7782, (SELECT empno
                    FROM Emp
                    WHERE Empno = 7782),
            7788, (SELECT empno
                    FROM Emp
                    WHERE Empno = 7788),
            7839, (SELECT empno
                    FROM Emp
                    WHERE Empno = 7839),
            7902, (SELECT empno
                    FROM Emp
                    WHERE Empno = 7902)) manager
    from emp

    Edited by: user4587979 on Mar 8, 2009 9:52 PM
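    For the original question (collapsing the two SELECT ... INTO statements into one), a single query using conditional aggregation would also work. This is just a sketch against the standard SCOTT demo schema, using the same two empno values as above:
    select max(decode(empno, 7369, mgr)) || '-' ||
           max(decode(empno, 7499, mgr)) as mgr_code
    from   scott.emp
    where  empno in (7369, 7499);
    Each DECODE returns mgr only for its matching row (NULL otherwise), and MAX collapses the two rows into one, so the PL/SQL block needs only a single SELECT ... INTO v_mgr_code.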

  • Question regarding placing cache-related classes into a package

    Hi all,
    I have a question regarding placing classes into packages. I am writing a cache feature that caches results that were evaluated previously. Since it is a cache, I don't want to expose it outside; it is only for internal use. I have 10 classes related to this cache feature. All of them are used only by the cache manager (the class that manages the cache), so I thought it would make sense to keep all of these classes in a separate package.
    But here is the problem: since the cache-related classes should not be exposed outside, I can't make them public; yet if they are not public, I can't access them from the other packages of my code. I can't make them public, and top-level classes can't be private either. Can someone suggest a solution for my problem?

    haki2 wrote:
    "But the problem I have is, since the cache related classes are not exposed outside so I can't make them public. If they are not public I can't access them in the other packages of my code."
    Well, you shouldn't access them in your non-cache code.
    As far as I understand, the only class that other code needs to access is the cache manager. That one must be public. All other classes can be package-private (a.k.a. default access). This way they can access each other, and the cache manager can access them, but other code can't.
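    A minimal sketch of that layout (the package and class names here are illustrative, not from the thread):
    // file: com/example/cache/CacheEntry.java
    package com.example.cache;

    class CacheEntry {                  // package-private: invisible outside com.example.cache
        final Object value;
        CacheEntry(Object value) { this.value = value; }
    }

    // file: com/example/cache/CacheManager.java
    package com.example.cache;

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class CacheManager {         // the only public entry point to the package
        private final Map<String, CacheEntry> entries = new ConcurrentHashMap<>();

        public void put(String key, Object value) {
            entries.put(key, new CacheEntry(value));
        }

        public Object get(String key) { // returns null on a cache miss
            CacheEntry e = entries.get(key);
            return e == null ? null : e.value;
        }
    }
    Code in any other package can only see CacheManager; the other helper classes stay package-private and remain an internal detail of the cache.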

  • Questions regarding *dump_dest parameters and fast_recovery_area

    Hello,
    I just installed a fresh new 11.2.0.2 database on Solaris 10.
    Everything was straightforward on the parameter side! I tried a custom install as well as the general-purpose template. When installing with DBCA, I entered the DB name in lowercase everywhere.
    With this, some questions about parameters popped into my mind after the installation.
    First, the %dump_dest parameters contain the DB name twice in their paths (ocpdb in my case):
    background_dump_dest       /u01/app/oracle/diag/rdbms/ocpdb/ocpdb/trace
    user_dump_dest             /u01/app/oracle/diag/rdbms/ocpdb/ocpdb/trace
    core_dump_dest             /u01/app/oracle/diag/rdbms/ocpdb/ocpdb/cdump
    Is it normal to have .../rdbms/dbname/dbname/... as the path, with dbname/dbname? Why?
    Second, a question regarding the directory structure under fast_recovery_area (the new term for flash_recovery_area). The directory structure:
    oracle@enalab13:/u01/app/oracle/fast_recovery_area$ ls -l
    total 2
    drwxr-x--- 2 oracle oinstall 512 2010-10-28 19:53 ocpdb
    drwxr----- 5 oracle oinstall 512 2010-10-29 07:44 OCPDB
    oracle@enalab13:/u01/app/oracle/fast_recovery_area$ ls -l ocpdb
    total 9528
    -rw-r----- 1 oracle oinstall 9748480 2010-10-31 21:09 control02.ctl
    oracle@enalab13:/u01/app/oracle/fast_recovery_area$ ls -l OCPDB/
    total 3
    drwxr----- 5 oracle oinstall 512 2010-10-31 03:48 archivelog
    drwxr----- 3 oracle oinstall 512 2010-10-29 07:44 autobackup
    drwxr----- 3 oracle oinstall 512 2010-10-29 07:43 backupset
    Why do I have a subdirectory with the DB name in uppercase AND one in lowercase? Should I specify the DB name in uppercase at database creation to have all files under the same directory, or in lowercase? Or is this normal?
    I want to know how to do it well before reinstalling a fresh database.
    Thanks
    Bruno
    Edited by: blavoie on Oct 31, 2010 6:18 PM
    Edited by: blavoie on Oct 31, 2010 6:20 PM

    Hi,
    I just reinstalled everything from scratch, with everything in lowercase, in the environment variables as well as in the DB name in DBCA:
    oracle@enalab13:~$ echo $ORACLE_SID
    ocpdb
    Fast recovery area directories; the dates prove that it's my fresh install:
    oracle@enalab13:/u01/app/oracle$ ll fast_recovery_area/
    total 2
    drwxr-x--- 2 oracle oinstall 512 2010-11-02 11:06 ocpdb
    drwxr-x--- 4 oracle oinstall 512 2010-11-02 11:24 OCPDB
    oracle@enalab13:/u01/app/oracle$ ll -R fast_recovery_area/
    fast_recovery_area/:
    total 2
    drwxr-x--- 2 oracle oinstall 512 2010-11-02 11:06 ocpdb
    drwxr-x--- 4 oracle oinstall 512 2010-11-02 11:24 OCPDB
    fast_recovery_area/ocpdb:
    total 9528
    -rw-r----- 1 oracle oinstall 9748480 2010-11-02 11:34 control02.ctl
    fast_recovery_area/OCPDB:
    total 2
    drwxr-x--- 3 oracle oinstall 512 2010-11-02 11:24 archivelog
    drwxr-x--- 2 oracle oinstall 512 2010-11-02 11:06 onlinelog
    fast_recovery_area/OCPDB/archivelog:
    total 1
    drwxr-x--- 2 oracle oinstall 512 2010-11-02 11:24 2010_11_02
    fast_recovery_area/OCPDB/archivelog/2010_11_02:
    total 47032
    -rw-r----- 1 oracle oinstall 48123392 2010-11-02 11:24 o1_mf_1_5_6f0c9pnh_.arc
    fast_recovery_area/OCPDB/onlinelog:
    total 0
    Some interesting output that was asked for earlier in the thread:
    SQL> archive log list
    Database log mode              Archive Mode
    Automatic archival             Enabled
    Archive destination            USE_DB_RECOVERY_FILE_DEST
    Oldest online log sequence     4
    Next log sequence to archive   6
    Current log sequence           6
    SQL> show parameter recovery
    NAME                                 TYPE        VALUE
    db_recovery_file_dest                string      /u01/app/oracle/fast_recovery_area
    db_recovery_file_dest_size           big integer 4032M
    recovery_parallelism                 integer     0
    SQL> show parameter control_files
    NAME                                 TYPE        VALUE
    control_files                        string      /u01/app/oracle/oradata/ocpdb/control01.ctl,
                                                         /u01/app/oracle/fast_recovery_area/ocpdb/control02.ctl
    SQL> show parameter instance_name
    NAME                                 TYPE        VALUE
    instance_name                        string      ocpdb
    SQL> show parameter db_name
    NAME                                 TYPE        VALUE
    db_name                              string      ocpdb
    SQL> show parameter log_archive_dest_1
    NAME                                 TYPE        VALUE
    log_archive_dest_1                   string
    log_archive_dest_10                  string
    log_archive_dest_11                  string
    log_archive_dest_12                  string
    log_archive_dest_13                  string
    log_archive_dest_14                  string
    log_archive_dest_15                  string
    log_archive_dest_16                  string
    log_archive_dest_17                  string
    log_archive_dest_18                  string
    log_archive_dest_19                  string
    SQL> show parameter %dump_dest 
    NAME                                 TYPE        VALUE
    background_dump_dest                 string      /u01/app/oracle/diag/rdbms/ocpdb/ocpdb/trace
    core_dump_dest                       string      /u01/app/oracle/diag/rdbms/ocpdb/ocpdb/cdump
    user_dump_dest                       string      /u01/app/oracle/diag/rdbms/ocpdb/ocpdb/trace
    I think next time I'll install everything regarding the Oracle SID in uppercase...
    Maybe these are details that I don't need to care about... but it seems that something odd is happening with the management of the fast_recovery_area...
    Thanks
    Bruno
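    A hedged note on both questions: the two name levels in the diag path are the database name and the instance name (the ADR layout is diag/rdbms/<db_name>/<instance_name>/...), and they coincide here because both are ocpdb, so that part is normal. For the fast recovery area, files the database creates itself as Oracle Managed Files land under a directory derived from DB_UNIQUE_NAME (which shows up in uppercase), while DBCA wrote the control02.ctl path literally with the lowercase name typed in; that would explain the pair of directories. A quick way to check what the database is using:
    SQL> select name, db_unique_name from v$database;
    SQL> show parameter db_unique_name
    If both report ocpdb and the OMF-created files still land under OCPDB, the case difference is cosmetic rather than a sign that something is wrong with the fast_recovery_area.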

  • Question regarding mesh with 3702 and non-AC APs

    Hello!
    A quick question regarding mesh deployments with two different sorts of APs, AC and non-AC models: if my 3702i units are my root APs and the 3602i is my MAP, will AC still work at 80 MHz, or will I have to switch to 40 MHz (thus crippling AC performance)?
    I'm not 100% sure on this... I think it should still work for the normal 802.11n connection, but I'm not sure whether the 80 MHz channel width needed for AC will cause the non-AC 3602i to be stranded?
    Thanks a lot for your insight!

    Currently, my network DHCP server is a software-based DHCP server. Reading over your post, if I understood correctly, it sounds like the managed switch would have its own hardware-based DHCP server to assign IP addresses to the clients identified on the "external" VLAN. Did I understand that correctly, or did I misread something?
    The DHCP server will be software-based; even though you define it on your switch, it is a DHCP service running on the switch's OS.
    I am configuring this setup for a small-business application and will need to purchase a managed switch with 16 or 24 ports. Do you have any recommendations on a particular managed switch that will handle the VLAN configuration and include PoE while keeping costs in mind?
    In this forum, most of us discuss Cisco enterprise-grade wireless. Here are the details of the 2960-X series switches, if you are interested:
    http://www.cisco.com/c/en/us/products/switches/catalyst-2960-x-series-switches/index.html
    You may need to check pricing with your Cisco account manager or with a Cisco partner.
    HTH
    Rasika
    **** Pls rate all useful responses ****

  • Question regarding the "mcxquery" and "dscl -mcxread" commands:

    Question regarding the mcxquery and dscl -mcxread commands:
    Does anyone know why the mcxquery and the dscl . -mcxread commands don't show any info about MCX-managed login items and printers? The System Profiler's "Managed Client" section does. I'd like to see info regarding managed printers and managed login items using the MCX tools. I have Mac users running 10.5.2 with both login items and printers that are pushed out to them via MCX. The System Profiler app shows all of their policies, but the dscl . -mcxread and mcxquery tools don't. Why not?
    -D
    Message was edited by: Daniel Stranathan

    How do you call procedures/functions without SQL code? You need at least a call statement like
    {call myProc(?,?,?)}
    that you wrap into a CallableStatement.
    Other than that: when you switch off autocommit, you need to call commit/rollback at the end. Usually, if you don't commit or roll back a non-autocommitted connection, the transaction gets committed or rolled back when you close the connection; that depends on the JDBC driver. But it's never a good idea to omit the commit/rollback calls on a non-autocommit connection. Usually you enclose your code in a try/catch block like this:
    con.setAutoCommit(false);
    try {
        // ... execute your statements here ...
        con.commit();
    } catch (Exception e) {
        con.rollback();
    } finally {
        con.setAutoCommit(true); // restore the default; or simply:
        con.close();
    }

  • Some questions regarding time evaluation

    Hello,
    I have two questions regarding time evaluation:
    1. Is it possible (and if yes, how) to still include an employee in time evaluation even if the employee is inactive (status P0000-STAT2 = 0)? We need this in order to calculate weeks of not working (this has to be calculated). I know that for payroll this is possible with a setting in infotype 0003.
    2. Is it possible (and how) to read data back from payroll into time management? For example, payroll exports something to the ZL table; can you then pick this up in the time evaluation schema, since there is also a ZL table there? Or is there another way to do this?
    Thanks for your answers,
    Liesbeth

    Hi Schrage,
    Why do you want to evaluate time for an inactive person? If you want to, you can.
    Process:
    First of all, you have to group your employees and employee subgroups (for inactive employees).
    Assign the employee subgroup grouping for the PCR in Basic Pay (IMG).
    Then go to the time evaluation schema, put the day grouping nn nn nn nn in the parameters, and run time evaluation. You will get the output in the DZL table.
    For the above process you need to configure table T510S.
    Yes, you can read payroll data back into time: the same concept applies in both modules, and the output should appear in the ZL table only.
    The idea is this: some companies do not use payroll wage types, only time wage types. These wage types have to be configured in T510S, and the wage type copying has to be done on the time management side if payroll is not used. So whether in payroll or in time management, the evaluation of time will be the same.
    Cheers
    Vijai

  • Question regarding DocumentDB RU consumption when inserting documents & write performance

    Hi guys,
    I have some questions regarding the DocumentDB Public Preview capacity and performance quotas:
    My use case is the following:
    I need to store about 200.000.000 documents per day with a maximum of about 5000 inserts per second. Each document has a size of about 200 bytes.
    According to the documentation (http://azure.microsoft.com/en-us/documentation/articles/documentdb-manage/) I understand that I should be able to store about 500 documents per second with single inserts and about 1000 per second with batch inserts using a stored procedure. This would result in a need of at least 5 CUs just to handle the inserts.
    Since one CU consists of 2000 RUs, I would expect the RU usage to be about 4 RUs per single document insert, or 100 RUs for a single SP execution with 50 documents.
    When I look at the actual RU consumption, I get values I don't really understand:
    Batch insert of 50 documents: about 770 RUs
    Single insert: about 17 RUs
    Example document:
    {"id":"5ac00fa102634297ac7ae897207980ce","Type":0,"h":"13F40E809EF7E64A8B7A164E67657C1940464723","aid":4655,"pid":203506,"sf":202641580,"sfx":5662192,"t":"2014-10-22T02:10:34+02:00","qg":3}
    The consistency level is set to "Session".
    I am using the SP from the example C# project for batch inserts and the following code snippet for single inserts:
    await client.CreateDocumentAsync(documentCollection.DocumentsLink, record);
    Is there any flaw in my assumptions (ok... obviously) regarding the throughput calculation, or could you give me some advice on how to achieve the throughput stated in the documentation?
    With the current performance I would need to buy at least 40 CUs, which wouldn't be an option at all.
    I have another question regarding document retention:
    Since I would need to store a lot of data per day, I would also need to delete as much data per day as I insert:
    The data is valid for at least 7 days (it actually should be 30 days, depending on my options with DocumentDB).
    I guess there is nothing like a retention policy for documents (this document is valid for X days and will automatically be deleted after that period)?
    Since deleting data on a single-document basis is no option at all, I would like to create a document collection per day and delete the collection after a specified retention period.
    Those historic collections would never change but would only receive queries. The only problem I see with creating collections per day is the missing throughput:
    As I understand it, the throughput is split equally across the available collections, which would result in "missing" throughput on the actual hot collection (hot meaning the only collection into which I would actually insert documents).
    Is there any (better) way to handle this use case than buying enough CUs so that the actual hot collection gets the needed throughput?
    Example:
    1 CU -> 2000 RUs
    7 collections -> 2000 / 7 = 286 RUs per collection (per CU)
    Needed throughput for the hot collection (values from the documentation): 20.000
    => 70 CUs (20.000 / 286)
    vs. 10 CUs when using one collection and batch inserts, or 20 CUs when using one collection and single inserts.
    I know that DocumentDB is currently in preview and that it is not possible to handle this use case as-is because of the current limit of 10 GB per collection. I am just trying to do a POC so I can switch to DocumentDB when it is publicly available.
    Could you give me any advice on whether this kind of use case can, or should, be handled with DocumentDB? I currently use Table Storage for this case (currently with a maximum of about 2500 inserts per second) but would like to switch to DocumentDB, since I had to optimize for writes per second with Table Storage and have horrible query execution times with it because of full table scans.
    Once again, my desired setup:
    200.000.000 inserts per day / maximum of 5000 writes per second
    Collection 1.2 -> Hot collection: all writes (max 5000 p/s) will go to this collection. Will also be queried.
    Collection 2.2 -> Historic data, will only be queried; no inserts
    Collection 3.2 -> Historic data, will only be queried; no inserts
    Collection 4.2 -> Historic data, will only be queried; no inserts
    Collection 5.2 -> Historic data, will only be queried; no inserts
    Collection 6.2 -> Historic data, will only be queried; no inserts
    Collection 7.2 -> Historic data, will only be queried; no inserts
    Collection 1.1 -> Old, so delete the whole collection
    As a matter of fact, the perfect setup would be to have only one (huge) collection with automatic document retention... but I guess this won't be an option at all?
    I hope you understand my problem and can give me some advice on whether this is at all possible, or will be possible in the future, with DocumentDB.
    Best regards and thanks for your help

    Hi Aravind,
    first of all, thanks for your reply to my questions.
    I sent you a mail a few days ago, but since I did not receive a response I am not sure it got through.
    My main question regarding the actual usage of RUs when inserting documents is still my main concern, since I cannot insert nearly as many documents per second and CU as expected.
    According to the documentation (http://azure.microsoft.com/en-us/documentation/articles/documentdb-manage/) I understand that I should be able to store about 500 documents per second with single inserts and about 1000 per second with a batch insert using a stored procedure (20 batches per second containing 50 documents each).
    As described in my post, the actual usage is multiple (actually 6-7) times higher than expected... even when running the C# examples provided at:
    https://code.msdn.microsoft.com/windowsazure/Azure-DocumentDB-NET-Code-6b3da8af/view/SourceCode
    I tried all the ideas Steve posted (manual indexing & lazy indexing mode) but was not able to reduce RU consumption to a point where 500 inserts per second were even nearly possible.
    Here again my findings regarding RU consumption for batch inserts:
    Automatic indexing on: 777 RUs for 50 documents
    Automatic indexing off & mandatory path only: 655 RUs for 50 documents
    Automatic indexing off & IndexingMode Lazy & mandatory path only: 645 RUs for 50 documents
    Expected result: approximately 100 RUs (2000 RUs => 20x batch insert of 50 => 100 RUs per batch)
    Since DocumentDB is still in preview, I understand that it is not yet capable of handling my use case regarding throughput, collection size, number of collections and possible CUs, and I am fine with that.
    If I am able to (at least nearly) reach the stated performance of 500 inserts per second per CU, I am totally fine for now. If not, I have to move on and look for other options... which would also be "fine". ;-)
    Is there actually any working example code that manages to do 500 single inserts per second with one CU's 2000 RUs, or is this a totally theoretical value? Or is it just because this is a preview, and the stated values are planned to work later?
    Regarding your feedback:
    "...another thing to consider is if you can amortize the request rate over the average of 200M requests/day = 2000 requests/second, then you'll need to provision 16 capacity units instead of 40 capacity units. You can do this by catching "RequestRateTooLargeExceptions" and retrying after the server-specified retry interval..."
    Sadly this is not possible for me, because I have to query the data in near real time for my use case... so queuing is not an option.
    "We don't support a way to distribute throughput differently across hot and cold collections. We are evaluating a few solutions to enable this scenario, so please do propose it as a feature at http://feedback.azure.com/forums/263030-documentdb as this helps us prioritize feature work. Currently, the best way to achieve this is to create multiple collections for hot data, and shard across them, so that you get more proportionate throughput allocated to it."
    I guess I could circumvent this by clustering not into "hot" and "cold" collections but into "hot" and "cold" databases, each with one or multiple collections (if 10 GB remains the limit per collection), if there were a way to (automatically?) scale the CUs via an API. Otherwise I would have to manually scale down the DBs holding historic data. I also added a feature request, as proposed by you.
    Sorry for the long post, but I am planning the future architecture for one of our core systems and want to be sure I am on the right track.
    So if you are able to answer just one question, it would be this:
    How do I achieve the stated throughput of 500 single inserts per second with one CU's 2000 RUs in reality? ;-)
    Best regards and thanks again
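    For readers who can tolerate the retry approach quoted above, here is a minimal sketch of catching the request-rate error and honoring the server-specified retry interval. It assumes the DocumentDB .NET SDK (Microsoft.Azure.Documents.Client); the helper name is illustrative:
    using System.Threading.Tasks;
    using Microsoft.Azure.Documents;
    using Microsoft.Azure.Documents.Client;

    static class DocumentDbRetry
    {
        public static async Task CreateWithRetryAsync(DocumentClient client, string collectionLink, object record)
        {
            while (true)
            {
                try
                {
                    // Same single-insert call as in the original post.
                    await client.CreateDocumentAsync(collectionLink, record);
                    return;
                }
                catch (DocumentClientException e)
                {
                    // 429 = request rate too large; anything else is a real error.
                    if ((int?)e.StatusCode != 429) throw;
                    await Task.Delay(e.RetryAfter); // wait for the server-specified interval
                }
            }
        }
    }
    As noted in the post above, this amortizes bursts rather than raising sustained throughput, so it only helps when near-real-time querying can tolerate the added latency.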

  • Several questions regarding File Vault

    Hi!
    I have several questions regarding File Vault - right now I'm using Mac OS 10.4.8
    1.: The battery lock of my iBook is defect thus it happens from time to time that while transporting it the battery drops out while the laptop is sleeping. What happens with the File Vault-disk image?
    2.: I want to (have to ) set up my Intel iMac again. The installer-CD I have will bring it back to 10.4.6
    AFAIK the data format used for File Vault since 10.4.7 is version 2. What happens if I encrypt my stuff now (10.4.8 - thus version 2), back it up to my backup disc, install a new system (10.4.6 - therefore version 1) and want to access my data via Migration Manager (don't want to use archive and install)?
    3.: How do I actually do a backup of my data while the system is running? The backup should be encrypted as well.
    I use the demo-version of SuperDuper for backing up my system because with it I can ensure that I have a complete bootable backup of my running system.
    Thanks for your answers in advance
    ibook g4 12" 1.2 GHz 768 MB RAM / Intel iMac Core 2 Duo 17" 2.0 GHz 2GB RAM   Mac OS X (10.4.8)  

    Parker,
    You said: "1. If it did, Apple would not use FileVault, as everyone's computer will have a battery problem once in their life, and Apple would lose business from angry people who lost all of their data." I have seen enough reports of data loss with FileVault that I feel compelled to dispute your statement.
    In Data corruption and loss: causes and avoidance, Dr. Smoke writes: "If your data-security needs demand FileVault, you should backup your encrypted Home folder regularly, preferably daily. Like any hard drive or disk image, a Home folder protected by FileVault — an encrypted, sparse disk image — does not respond well to the causes of data corruption." Loss of power definitely is a cause of data corruption.
    For Niels...,
    An Unencrypted Look at FileVault, by François Joseph de Kermadec, is an excellent discussion of the features, pitfalls, and cautions regarding FileVault.
    Although the article discusses Panther and is dated 12/19/2003, the concepts as they apply to Tiger have not changed.
    The cautions and warnings are prominent in any of the Apple Knowledge Base articles referring to the use of FileVault. If a user is unfamiliar with any aspect of FileVault, it should not, in my opinion, be activated.
    As good as FileVault is at protecting your sensitive data, it also presents the danger of locking up your files in an irretrievable ball of ones and zeros. Backups are critical. You must ensure that you have a comprehensive backup plan; Backup and Recovery, by Dr. Smoke, is a fine example of what you need to consider.
    ;~)

  • Questions regarding upgrade from 4th gen to 5th gen iPod

    I recently received a new 80 GB 5th gen iPod and have two questions regarding it and my old 40 GB 4th gen. My PC meets the minimum requirements and is running XP Pro.
    1) Do I need to do anything before syncing the new 5th gen iPod with my existing library (i.e., uninstall the iPod Updater for the 4th gen)?
    2) If I don't need to uninstall the 4th gen Updater, can I run both iPods off the same library, same user (obviously not simultaneously)?
    tia - joggy
    N/A   Windows XP Pro  

    Hey, joggy!
    1) No, I don't believe you do. Be sure to disconnect the 4th gen iPod before connecting the 5th gen iPod to the computer.
    2) Yes, you can.
    There are basically two methods of managing multiple iPods on one computer:
    Method 1 - Create a different Windows user account for each registered iPod on the computer.
    Method 2 - Create a playlist in iTunes for each iPod.
    To make Method 2 work, connect one of your iPods and click on it in the left Source panel.
    Under the "Music" tab, set your option to a specific playlist (or playlists) under the "Sync Music" option.
    Do the same with your other iPod (not connected at the same time, though).
    For more details on this matter, check out Apple's support article about it:
    How to manage multiple iPods using one computer
    I hope that helps you.
    -Kylene

  • Widget question regarding system usage

    I'm a recent convert from the PC world and am finding the Dashboard feature very useful. I do, however, have a question regarding the way widgets in the Dashboard access the iMac's system resources.
    Specifically, I was wondering: if a widget is installed and appears in the "Manage Widgets" list but is NOT active on the Dashboard, does that widget still use the system's resources (i.e., is it still actively updating its information or performing its task)? Or does it "sleep" until you actively enable it in the Dashboard?

    Welcome to Discussions!
    I believe widgets don't consume resources unless you have them open in Dashboard; just having them installed, but not open, wouldn't consume resources.
    Since you're new to mac, you may want to check out Mac 101 and Switch 101.
    Message was edited by: joshz

Maybe you are looking for

  • How can I create a new OC4J Instance in Application Server Control

    Hi All, Is there any way to create a new OC4J Instance in Application Server Control of installed SOA Suite, so that it gets listed in Cluster Topology page. Thanks Krrish

  • Cash Sales with Credit card

    Hi, I would like to know how to handle a scenario in which a customer comes to a shop where he will pick the goods and pay immediately by Cash, Credit Card or Cheque( Rarely). I understand that Cash Sales Order will be suitable for this scenario. But

  • Changing print format for the Shop papers -External service operations

    Hi Friends , We want print outs for the external service operations in our PM orders . The standard format available with shop papers is displayed with some extra info . We want to remove those info and add some extra info . Can we change the printin

  • Websites open the Appstore - Guide Needed

    I ran across this from 9to5 Mac - http://9to5mac.com/2015/03/18/safari-app-store-redirect/#more-370337 On that post, it shows how many well known sites and ads make it nearly impossible to use the page due to the ads. Yes this is a nuisance and Apple

  • Very annoying feature of the arrange in 7.0 question:

    whenever i move an object in the arrange window (drag a loop from one pt to another, or cut/copy and paste) the object does not automatically align with the 'beat' tics at the top, so i have to magnify the entire arrange window and micromove it so th