Unity Connection 7.x - Best Practice for Large Report Mailboxes?

Good morning. We have 150 mailboxes for nurses to give shift reports in. The mailbox quota is 60 MB and the message aging policy is on: deleted messages are purged after 14 days. The message aging policy is system-wide, and increasing the quota would cause storage issues. Is there a way to keep the message aging policy but reduce it for one group of users? Is there a way to bulk-administer the mailbox quota changes?
Version 7.1.3ES9.21004-9
Thanks

As for UC 8x, you're not alone.  I don't typically recommend going to an 8.0 release (no offense to Cisco).  Let things get vetted a bit and then start looking for the recommended stable version to migrate to.
As for bulk changes to mailbox store configurations for users, Jeff (Lindborg) may be able to correct me if I am wrong here.  But with the given tools, I don't think there is a way to bulk edit or update the mailbox info for users (i.e., turn on/off Message Aging Policy).  No access to those values via Bulk Edit and no associated fields in the BAT format either.
Now, with that said - no one knows better than Lindborg when it comes to Unity.  So I defer to him on that point.
Hailey
Please rate helpful posts!

Similar Messages

  • What are the best practices for audit report for SharePoint 2013 farm ?

    Hello,
    I am looking for the best practices for audit reporting in a SharePoint 2013 farm. Can anyone please provide a checklist/tools/guidelines on the same?
    Your help will be much appreciated.
    Thanks and Regards,
    Dipti Chhatrapati

    This is quite an open-ended question. A SharePoint farm should be well maintained per:
    1. Microsoft's recommendations on topology, hardware and software requirements, operational procedures and, most important, capacity guidelines:
    http://technet.microsoft.com/en-us/library/ff758645(v=office.15).aspx
    http://technet.microsoft.com/en-us/library/cc262787(v=office.15).aspx
    2. Organisation's IT policies and procedures: farm configuration, workload and monitoring
    http://technet.microsoft.com/en-us/library/ff758658(v=office.15).aspx
    http://technet.microsoft.com/en-us/library/ee748651(v=office.15).aspx
    3. Industry best practices
    I would suggest starting to think along these lines and creating a plan for your SharePoint farm.
    You can then create PowerShell scripts to run these reports at a certain frequency to find changes, any deviation from the standard, and the health of the entire farm.
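    The "find any deviation from the standard" step can be sketched generically. Assuming the collected farm settings are gathered into key/value pairs (the setting names and values below are hypothetical examples, not real SharePoint properties), a few lines are enough to flag mismatches against a baseline:

    ```python
    # Sketch: flag deviations of collected farm settings from a baseline.
    # Setting names/values are illustrative, not real SharePoint properties.
    def find_deviations(baseline, current):
        """Return {setting: (expected, actual)} for every mismatch."""
        return {
            key: (expected, current.get(key))
            for key, expected in baseline.items()
            if current.get(key) != expected
        }

    baseline = {"MaxUploadSizeMB": 250, "AuditLogTrimmingDays": 90}
    current = {"MaxUploadSizeMB": 250, "AuditLogTrimmingDays": 30}

    print(find_deviations(baseline, current))
    # {'AuditLogTrimmingDays': (90, 30)}
    ```

    Run on a schedule, the mismatch dictionary becomes the body of the periodic health report.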
    Hope this helps!!
    I LOVE MS..... Thanks and Regards, Kshitiz (Posting is provided "AS IS" with no warranties, and confers no rights.)

  • Best practice for test reports location -multiple installers

    Hi,
    What is recommended best practice for saving test reports with multiple installers of different applications:
    For example, if I have 3 different teststand installers: Installer1, Installer2 and Installer3 and I want to save test reports of each installer at:
    1. C:\Reports\Installer1\TestReportfilename
    2. C:\Reports\Installer2\TestReportfilename
    3. C:\Reports\Installer3\TestReportfilename
    How could I do this programmatically, so that all reports end up in the proper folder when the TestStand installers are deployed to a test PC?
    Thanks,
    Frank

    There's no recommended best practice for what you're suggesting. The example here shows how to programmatically modify a report path, and this Knowledge Base article describes how you can change a report's file path based on test results.
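    The path logic being asked about can be sketched outside of TestStand. The function below is illustrative only (in TestStand itself the path would actually be set via the report options / a callback such as ReportOptions); it just shows the per-installer directory scheme from the question:

    ```python
    import os

    def report_path(installer_name, base=r"C:\Reports", filename="TestReport.xml"):
        """Build the per-installer report path described in the question.
        Illustrative sketch only; in TestStand the report path is set
        through the report options, not by an external script."""
        return os.path.join(base, installer_name, filename)

    for name in ("Installer1", "Installer2", "Installer3"):
        print(report_path(name))
    ```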
    -Mike 
    Applications Engineer
    National Instruments

  • Best Practices for KM Reports

    Hi all,
    Does anybody have tips about best practices for delivering reports on KM activities? E.g.: which users accessed a file, which files a given user has accessed, and so on. I'll need to define input criteria for it. Is it better to read the statistics file and publish reports with a Web Dynpro application, or to use the KM Reporting API and publish reports like https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/7d28a67b-0c01-0010-8d9a-d7e6811377c0?
    Are there any facilities to generate graphs from KM info?
    Best regards

    Hi Isaias,
    Have you read this blog: "How to use the Activity Data Collector" ?
    The ADC provides data on portal requests and stores them to flat files in a filesystem.
    The collected data can include items such as:
    • Requested iView
    • Processing time
    • Browser type
    • Requested headers
    • etc.
    The collector creates a set of files that contain a line for each request. You can take these files to create reports about portal activity using Business Intelligence, Microsoft Excel or other analytics tools.
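    As a minimal sketch of that "other analytics tools" step: assuming a delimited line-per-request file (the field layout below is made up for illustration; the real ADC file format differs), requests per iView can be tallied in a few lines of Python:

    ```python
    from collections import Counter
    import io

    # Hypothetical ADC-style log: one line per request, tab-separated.
    # The exact ADC field layout differs; this is illustrative only.
    sample = io.StringIO(
        "2023-01-01 10:00\tcom.sap.km.docs\t120ms\tFirefox\n"
        "2023-01-01 10:01\tcom.sap.km.docs\t95ms\tChrome\n"
        "2023-01-01 10:02\tcom.sap.km.search\t200ms\tFirefox\n"
    )

    # Count requests per iView (second field of each line).
    hits = Counter(line.split("\t")[1] for line in sample if line.strip())
    print(hits.most_common())
    # [('com.sap.km.docs', 2), ('com.sap.km.search', 1)]
    ```

    The same pattern scales to the real files by swapping `io.StringIO` for `open()` on each collected file.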
    For KM data collection:
    For example you can specify which KM operations should be monitored. Also you can define a positive or negative RID list. If you mark the checkbox “Positive List”, only the comma-separated RIDs listed in “RID List” are monitored; if you don’t mark the checkbox and enter RIDs in “RID List”, these RIDs are not monitored (negative list). Details of the property description can be found here.
    For more information you can read:
    http://help.sap.com/saphelp_nw70/helpdata/EN/46/e42c3ed63369b5e10000000a114a6b/frameset.htm
    Regards,
    Patricio.

  • Best Practices for Setting up MailBox Quotas in Unity Connection.

    Hi all,
    I've just completed migrating from Unity 4.0.5 to Unity Connection 8.0.2c and all seems to be working well at the moment. I am looking into setting up quotas for subscriber mailboxes, and I would like to find out the rules of thumb when assigning disk space to user mailboxes, and how much disk space a one-minute message takes.
    Thanks in advance for any inputs/suggestions !!!
    D.

    D. Tran,
    Here are some things to consider:
    –Mailbox Quotas
    •Specifies a mailbox size in MB for Warning, Send, and Send/Receive Quotas on mailboxes
    •Can be applied system-wide and customized on a per-user basis (maximum mailbox size is 2GB)
    •Default Warning Quota = 12 MB (25 min of recording with G711)
    •Default Send Quota = 13 MB (27 min of recording with G711)
    •Default Send/Receive Quota = 14 MB (29 min of recording with G711)
    When you use G.711, the space requirement for a message is:
    •480 KB per minute of recording
    –Message Aging Policy
    •If enabled, it applies system-wide but can be disabled on a per-user basis.
    •If disabled, no message aging policies are applied and cannot be enabled on a per-user basis.
    Should you need to be more aggressive in your Message Aging Policy, you can optionally choose to move saved messages to the Deleted Items folder within a specified timeframe.  This is disabled by default.
    I typically recommend giving users access to the deleted items (it allows them to access deleted messages for a period of time after they delete them). This is set in the Class of Service here:
    –Class of Service > Message Options > Uncheck “Delete Messages Without Saving to Deleted Items Folder”
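    The quota-to-minutes relationship above is simple arithmetic (roughly 480 KB per minute of G.711 audio), so a proposed quota is easy to sanity-check; the last value covers the 60 MB quota from the original question:

    ```python
    G711_KB_PER_MIN = 480  # approx. storage per minute of G.711 recording

    def quota_minutes(quota_mb):
        """Approximate whole minutes of G.711 recording a quota holds."""
        return int(quota_mb * 1024 / G711_KB_PER_MIN)

    for mb in (12, 13, 14, 60):
        print(f"{mb} MB ~ {quota_minutes(mb)} min")
    # 12 MB ~ 25 min, 13 MB ~ 27 min, 14 MB ~ 29 min, 60 MB ~ 128 min
    ```

    The 12/13/14 MB results line up with the default Warning, Send, and Send/Receive quotas listed above.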
    Hope this helps.
    Hailey
    Please rate helpful posts!

  • Data Model best Practices for Large Data Models

    We are currently rolling out Hyperion IR 11.1.x and are trying to establish best practices for BQYs and how to present these models to our end users.
    So far, we have created an OCE file that limits the selectable tables to only those that are within the model.
    Then, we created a BQY that brings in the tables to a data model, created metatopics for the main tables and integrated the descriptions via lookups in the meta topics.
    This seems to be OK; however, any time I try to add items to a query, as soon as I add columns from different tables the app freezes up, hogs a bunch of memory, and then closes itself.
    Obviously this isn't acceptable to hand to our end users, so I'm asking for suggestions.
    Are there settings I can change to get around this memory-hogging issue? Do I need to use a smaller model?
    And in general, how are you all deploying this tool to your users? Our users are accustomed to a pre-built data model, so they can just click, add the fields they want, and hit submit. How do I get close to that ideal with this tool?
    thanks for any help/advice.

    I answered my own question. In the case of the large data model, the tool by default was attempting to calculate every possible join path to get from Table A to Table B (even though there is a direct join between them).
    In the data model options, I changed the join setting to use the join path with the least number of topics. This skipped the extraneous steps and allowed me to proceed as normal.
    Hope this helps anyone else who may bump into this issue.

  • SolMan CTS+ Best Practices for large WDP Java .SCA files

    As far as I know, CTS+ allows ABAP change management to steward non-ABAP objects. With ABAP changes, if you have an issue in QA, you simply create a new Transport and correct the issue, eventually moving both transports to Production (assuming no use of ToC).
    We use ChaRM with CTS+ extensively to transport .SCA files created from NWDI. Some .SCA files can be very large: 300+ MB. Therefore, if we have an issue with a Java WDP application in QA, I assume we are supposed to create a second Transport, attach a new .SCA file, and move it to QA. Eventually, this means moving both Transports (same ChaRM Document) to Production, each one carrying a 300 MB file. Is this SAP's best practice, since all Transports should go to Production? We've seen some issues with Production not being too happy about deploying two 300 MB files in a row. And what about the fact that .SCA files from the same NWDI track are cumulative, so I truly only need the newest one? Any advice?
    FYI - SAP said this was a consulting question and therefore could not address this in my OSS incident.
    Thanks,
    David


  • Best Practices for creating reports/Dashboards from BW systems

    HI Gurus,
    Best practices for creating BO Dashboards / Xcelsius from BW systems
    Prasad

    You can use the BICS connector that leverages BW queries directly.  It is listed in the Connection Manager as "SAP NetWeaver BW Connection".  You will need both the ABAP and Java stack and SSO configured between the two.  You will also need to have SAP GUI and BEx installed on the machine you are doing development on.  Note that dashboards using this connection can only be hosted in NW Portal for the time being until the next release of BI 4.x platform.
    Here are some links on getting started with the BICS connector:
    [Building Fast and Efficient Dashboards with BW and Xcelsius|http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/d0ab8cce-1851-2d10-d5be-b5147a651c58]
    [Requirements for BICS|http://wiki.sdn.sap.com/wiki/display/BOBJ/prerequisitestoXcelsiusandSAPNetWeaverBW+Connection]

  • The best practice for creating reports and dashboard

    Hello guys
    I am trying to put together a list of best practices on how to create reports and dashboards using the OBIEE presentation service. I know a lot of those dos and don'ts are just corporate words that don't apply consistently in real-world environments, but still I'd like to know whether Oracle has any officially defined best practices.
    the only best practice I can think of when it comes to building reports and dashboards is:
    Each subject area should contain only one star schema that holds data for a specific business information
    Is there anything else?
    Please advise.
    Thanks

    Read this book to understand what a dashboard is, and what it should do and look like to be useful to end users. Very enlightening.
    Information Dashboard Design: The Effective Visual Communication of Data by Stephen Few. (There are a couple of other books by Stephen and, although I haven't read them yet, I anticipate them to be equally helpful.)
    This book was also helpful to me:
    http://www.amazon.com/Performance-Dashboards-Measuring-Monitoring-Managing/dp/0471724173
    I also found this book helpful in Best Practices...
    http://www.biconsultinggroup.com/knowledgebase.asp?CategoryID=337

  • Best practices for large ADF projects?

    I've heard mention (for example, in the "ADF Large Projects" thread) of documentation about dealing with large ADF projects. Where exactly is this documentation? I'm interested in questions like whether Fusion web applications can have more than one ViewController project (with different names, of course), more than one Model project, the best way to break up applications for ease of maintenance, etc. Thanks.
    Mark

    I'd like to mention a few things:
    Better to have Unix machines for your development.
    Have at least 3 GB of RAM on Windows machines.
    Create all your commonly used LOVs & VOs first.
    If you use web services extensively, create them as a separate app.
    Make use of popups; they are very user-friendly and fast, and you don't need to deal with the browser back button.
    If you want to use a common page template, create it at the beginning. It's very difficult to apply one later, after you have developed the pages.
    Use declarative components for commonly used forms like address, etc.
    Search the forum; you will find a couple of good util classes.
    When you check in the code, watch out: some files, like connections.xml, don't show up in JDev.
    Make use of this forum; you will get answers immediately from great experts.
    http://www.oracle.com/technology/products/jdev/collateral/4gl/papers/Introduction_Best_Practices.pdf

  • Best practice for large form input data.

    I'm developing an HIE system with Flex. I'm looking for a strategy and management layout for close to 20-30 TextInput/ComboBox/Grid controls.
    What is the best way to present this in Flex without a lot of clutter, and without a given panel being way too large and unwieldy?
    The options that I have come up with so far.
    1) Use lots of Tabs combined with lots of accordions, and split panes.
    2) Use popup windows
    3) Use panels that appear, capture the data, then, make the panel go away.
    I'm shying away from the popup windows, as that strategy always results in performance issues.
    Any help is greatly appreciated.
    Thanks.

    In general the Flex navigator containers are the way to go. ViewStack is probably the most versatile, though TabNavigator and Accordion are good. It all depends on your assumed workflow.
    If this post answers your question or helps, please mark it as such.

  • Aperture best practices for large libraries

    Hi,
    I am very new to Aperture and still trying to figure out the best way to take advantage of it.
    I have been using iPhoto for a while, with just under 25,000 images. This amount of images takes up about 53 gig. I recently installed and built an Aperture library, leaving the images in the iPhoto library. Still, the Aperture library is over 23 gig. Is this normal? If I turn off the preview, is the integration with iLife and iWork the only functionality lost?
    Thanks,
    BC
    MacBook Pro   Mac OS X (10.4.10)  

    Still, the Aperture library is over 23 gig. Is this normal?
    If Previews are turned on, yes.
    If I turn off the preview, is the integration with iLife and iWork the only functionality lost?
    Pretty much.
    Ian

  • BEST PRACTICES FOR CREATING DISCOVERER DATABASE CONNECTION -PUBLIC VS. PRIV

    I have enabled SSO for Discoverer, so when you browse to http://host:port/discoverer/viewer you get prompted for your SSO username/password. I have enabled users to create their own private connections. I log in as portal and created a private connection. Then, from Oracle Portal, I create a portlet and add a Discoverer worksheet using the private connection that I created as the portal user. This works fine: users access the portal and they can see the worksheet. When they click the analyze link, the users are prompted to enter a password for the private connection. The following message is displayed:
    The item you are requesting requires you to enter a password. This could occur because this is a private connection or because the public connection password was invalid. Please enter the correct password now to continue.
    I originally created a public connection and then followed the same steps from Oracle Portal to create the portlet and display the worksheet. The worksheet is displayed properly from Portal, and when users click the analyze link they are taken to Discoverer Viewer without having to enter a password. The problem with this is that when a user browses to http://host:port/discoverer/viewer, they enter their SSO information, and then any user with an SSO account can see the public connection: very insecure! When private connections are used, no connection information is displayed to SSO users when logging into Discoverer Viewer.
    For the very first step, when editing the worksheet portlet from Portal, I enter the following for Database Connections:
    Publisher: I choose either the private or public connection that I created
    Users Logged In: Display same data to all users using connection (Publisher's Connection)
    Users Not Logged In: Do not display data
    My question is: what are the best practices for creating Discoverer database connections?
    Is there a way to create a public connection, but not display it at http://host:port/discoverer/viewer?
    Can I restrict access to http://host:port/discoverer/viewer to specific SSO users?
    So overall, I want roughly 40 users to have access to my Portal Page Group. I then want to display portlets with Discoverer worksheets. For certain worksheets I want the ability to display the analyze link. When the SSO user clicks on this, they will be taken to Discoverer Viewer and prompted for no logon information. All SSO users will see the same data; there is no need to restrict access based on SSO username, and one database user will be set up in either the public or private connection.

    You can make this happen by creating a private connection for the 40 users via a CAPI script, and, when creating the portlet, selecting the 2nd option in the Users Logged In section. That way the portlet uses their own private connection every time a user logs in, so it won't ask for a password.
    Another option: in ASC, in the Discoverer section, there is a setting for whether a password must be entered, if your version is 10.1.2.2. Let me know if you need more information.
    Thanks
    kiran

  • Best practice for using messaging in medium to large cluster

    What is the best practice for using messaging in a medium-to-large cluster, in a system where all the clients need to receive all the messages and some of the messages can be really big (a few megabytes, and maybe more)?
    I will be glad to hear any suggestion or to learn from others experience.
    Shimi

    publish/subscribe, right?
    lots of subscribers, big messages == lots of network traffic.
    it's a wide open question, no?
    %

  • Best practice for RAC connections

    Got a question about what people consider best practice for setting up high-availability connection pools to a RAC cluster. Now that you can specify the failover logic right in the thin connection string, it seems like there are three options.
    A) Use OCI connections and allow the fail-over logic to be maintained in the TNSNAMES.ORA file.
    B) Use simple thin connections with multi-pools and let WebLogic maintain the fail-over logic.
    C) Use simple thin connections with fail-over logic in the connection string.
    Thanks,
    Rodger...

    If you need XA, then follow the WebLogic documentation. If not, then you have much more freedom. The thin driver can be configured to use the tnsnames.ora file if that helps you. WebLogic much prefers the thin driver to the OCI-based one, which can kill a JVM with OCI bugs.
    If you do driver-level failover, each failed connection will cost a test and replace. If you use multipools, WLS can be configured to flush a whole pool when it finds a connection bad, and also make the failover at the pool level, right then, so application delay is minimized.
    Joe
