Best practice to control db growth

Exchange 2010 SP3: mb01, mb02, hub01, hub02, plus an offsite CAS/HUB/MB server, all running as VMs.
I started noting down the disk usage yesterday, and the database disk consumed 10 GB over the last 24 hours. I'm not sure whether that can be taken as a pattern, but to me it is a huge jump, so I started investigating ways to trim the database sizes and have narrowed it down to archiving.
I'm planning to configure an archive server and move my users' emails that are older than 12 months there. Seems an easy enough task, and I've got a VM set aside just for that archive server.
Any thoughts? It seems like the next best thing to control DB growth, IMHO.
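For reference, I assume the Exchange side of that plan boils down to personal archives plus a retention policy, roughly like the following from the Exchange Management Shell (the database, tag and policy names are placeholders I made up):

# A retention tag and policy that move items older than 12 months to the personal archive.
New-RetentionPolicyTag "Move-To-Archive-12-Months" -Type All -AgeLimitForRetention 365 -RetentionAction MoveToArchive
New-RetentionPolicy "Archive-After-12-Months" -RetentionPolicyTagLinks "Move-To-Archive-12-Months"

# Enable the personal archive on a database hosted by the archive server and assign the policy.
Enable-Mailbox "jsmith" -Archive -ArchiveDatabase "ARCHDB01"
Set-Mailbox "jsmith" -RetentionPolicy "Archive-After-12-Months"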

Hi,
If you want to control database growth, the important thing is to research what caused the growth in the first place. So I recommend you use the Troubleshoot-DatabaseSpace.ps1 script first, to detect and correct any excess log growth or Microsoft Exchange database (.edb) file growth.
After you run the script, you can navigate to the following location to see the results:
Event Viewer -> Application and Services Logs -> Microsoft -> Exchange -> Troubleshooters -> Operational.
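As a rough sketch from the Exchange Management Shell (the database name below is a placeholder and the event log name is inferred from the Event Viewer path above; please verify the script parameters against the article linked below):

# Run the troubleshooter against one mailbox database; $exscripts points to the Exchange Scripts folder.
# The optional -Quarantine switch quarantines the mailboxes generating the most log data.
cd $exscripts
.\Troubleshoot-DatabaseSpace.ps1 -MailboxDatabaseName "MBXDB01" -Quarantine

# Review what the troubleshooter logged.
Get-WinEvent -LogName "Microsoft-Exchange-Troubleshooters/Operational" |
    Select-Object TimeCreated, Id, Message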
For more information, here is an article for your reference.
Manage Database Log Growth by Using the Troubleshoot-DatabaseSpace.ps1 Script in the Shell
http://technet.microsoft.com/en-us/library/ff477617(v=exchg.141).aspx
Hope it helps.
If you need further assistance, please feel free to let me know.
Best regards,
Amy
Amy Wang
TechNet Community Support

Similar Messages

  • Best practice for using remote control under limited rights?

    Hi. We are getting ready to take admin rights away from our users and make them standard users. We plan to utilize Zen for most of our in-scope applications so that we can allow users to install supported software. There is usually no problem in that case because Zen can elevate to System access during the install. However, we know that there are applications out there that a user may want to install that are not packaged in Zen. Also, in the event that a system setting needs to be changed, we will have to have a method for supporting this. In either case, the user will call our help desk. Unfortunately, the user will not have enough rights to do the install or system change even if the help desk associate remote-controls the PC. What is the best practice to handle this situation in a NetWare/ZENworks environment where users only have limited access?
    I was thinking of three possibilities:
    1.) The obvious one is to send a technician over to log in using local admin credentials to install the software or perform the change. (Drawback - not very efficient because a desktop tech would have to get over to the user's PC to perform the work)
    2.) Have the help desk engineer log out of the machine through remote control, then log back in as local admin to install the software or perform the change. (Drawback - not very convenient and time-consuming.)
    3.) Have the help desk engineer use the "run as" command, or even create a Zen application object that could be executed to provide temporary rights for installing software or making system changes; see the rough sketch below. Aaron Margosis of Microsoft writes about this quite a bit in his blog, Aaron Margosis' "Non-Admin" WebLog: Table of Contents. (Drawback - some software or settings will not work properly using this technique)
    The last one that I didn't list was creating a new application object. I did not factor this one in because this isn't always applicable to system changes and we really don't want to be making app objects for every out of scope app that exists in the user community. We typically only make them for widely used and supported apps.
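    For what it's worth, a PowerShell flavour of option 3 (the "run as" approach) would be roughly the following; the account name and installer path are made-up placeholders:
    # Prompt the help desk engineer for local admin credentials, then launch the
    # installer under those credentials from within the user's session.
    $cred = Get-Credential ".\LocalAdmin"
    Start-Process -FilePath "C:\Temp\AppSetup.exe" -Credential $cred -Wait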
    Your feedback is appreciated.
    Thanks

    Originally Posted by spond
    Joshbilsky,
    how about
    4) use the remote execute option to remotely launch an app as admin?
    Shaun Pond
    That's probably an option that we will make available. I wasn't sure how some things would work under the SYSTEM context vs. local admin.

  • Best practice for version control

    Hi.
    I'm setting up a file share, and want some sort of version control on the file share. What's the best practice method for this sort of thing?
    I'm coming at this as a subversion server administrator, and in subversion people keep their own copy of everything, and occasionally "commit" their changes, and the server keeps every "committed" version of every file.
    I liked subversion because: 1) users have their own copy, so if they are away from the office or make a big oops mistake, it doesn't ever hit the server; 2) you can lock a file to avoid conflicts; and 3) if you don't lock the file and a conflict (two simultaneous edits) occurs, it has systems for dealing with conflicts.
    I didn't like subversion because it adds a level of complexity to things -- and many people ended up with critical files that should be shared on their own hard drives. So now I'm setting up a fileshare for them, which they will use in addition to the subversion repository.
    I guess I realize that I'll never get full subversion-like functionality in a file share. But through a system of permissions, incremental backups and mirroring (rsync, Second Copy for Windows users) I should be able to allow a) local copies on users' hard drives, b) control for conflicts (locking, conflict identification), and c) retention of old versions of things.
    I wonder if anyone has any suggestions about how to best setup a file share in a system where many people might want to edit the same file, with remote users needing to take copies of directories along with them on the road, and where the admin wants to keep revisions of things?
    Links to articles or books are welcome. Thanks.

    Subversion works great for code. Sort-of-ok for documents. Not so great for large data files.
    I'm now looking at using the wiki for project-level documentation. We've done that before quite successfully, and the wiki I was using (mediawiki) provides version history of pages and uploaded files, and stores the uploaded files in the file system.
    Which would leave just the large data files and some working files on the fileshare. Is there any way people can lock a file on the fileshare, to indicate to others that they are working on it and that others shouldn't be modifying it? Is there a way to use unix (user-group-other) permissions, e.g. "chmod oa-w", to lock a file and indicate that one is working on it?
    I also looked at Alfresco, which provides a CIFS (windows SMB) view of data files. I liked it in principle, but the files are all stored in a database, not in the file system, which makes me uneasy about backups. (Sure, subversion also stores stuff in a database, not a file system, but everyone has a copy of everything so I only lose sleep about backups regarding version history, not backups on the most recent file version.)
    John Abraham
    [email protected]

  • Best practice for version control B2B, ESB and BPEL

    Hello,
    we are setting up a new system using B2B, ESB and BPEL. The development team is more experienced working with PL/SQL and Oracle Workflow, and we are worried that JDeveloper generates changes to the source files during development and that we might have problems with version control.
    Is there any best practice for setting up version control for these systems? Do we need to take anything in particular into consideration when setting up the projects?
    We are using Serena Dimensions 9.1 for version control with the add-on in Jdeveloper.
    Thanks in advance!

    I believe JDeveloper has a plugin for Dimensions.
    I haven't used it, but to get it, go to Tools (it may be Help; I don't have JDeveloper on this machine to confirm) and check for updates.
    If you select the third-party check box and click Next, you will see an entry for Dimensions.
    Configure the connection and develop as you would any other project.
    cheers
    James

  • What is the best practice for using the Calendar control with the Dispatcher?

    It seems as if the Dispatcher is restricting access to the Query Builder (/bin/querybuilder.json) as a best practice regarding security.  However, the Calendar relies on this endpoint to build the events for the calendar.  On Author / Publish this works fine but once we place the Dispatcher in front, the Calendar no longer works.  We've noticed the same behavior on the Geometrixx site.
    What is the best practice for using the Calendar control with Dispatcher?
    Thanks in advance.
    Scott

    Not sure what exactly you are asking but Muse handles the different orientations nicely without having to do anything.
    Example: http://www.cariboowoodshop.com/wood-shop.html

  • Best practice concerning embedding script in report vs.  controlling from Java

    Hi,
    I'm faced(probably not the only one) with adding some intelligence to my reports.  In a prior post I was curious about displaying/hiding sections based on conditions found in the bean/pojo. 
    Is there a best practice concerning embedding logic in the report in the form of formula(s), vs. using Java to get or create a field and then creating a formula on the fly?  I suspect the answer has something to do with truly dynamic fields, and perhaps a little bit of both Java and script.
    Anyone on staff care to try answering??
    Peter

    Hi,
    log into your SAP ERP system using the SAP GUI and choose in the SAP Menu the following path:
    SAP Menu -> Accounting -> Controlling -> Cost Center Controlling -> Environment -> Set Controlling Area.
    Set the desired controlling area for your user there (DO NOT FORGET TO CLICK ON THE DISKETTE ICON) and try again.
    Regards,
    Stratos

  • Query: Best practice SAN switch (network) access control rules?

    Dear SAN experts,
    Are there generic SAN (MDS) switch access control rules that should always be applied within the SAN environment?
    I have a specific interest in network-based access control rules/CLI-commands with respect to traffic flowing through the switch rather than switch management traffic (controls for traffic flowing to the switch).
    Presumably one would want to provide SAN switch demarcation between initiators and targets using VSAN, Zoning (and LUN Zoning for fine grained access control and defense in depth with storage device LUN masking), IP ACL, Read-Only Zone (or LUN).
    In a LAN environment controlled by a (gateway) firewall, there are (best practice) generic firewall access control rules that should be instantiated regardless of enterprise network IP range, TCP services, topology etc.
    For example, the blocking of malformed TCP flags or the blocking of inbound and outbound IP ranges outlined in RFC 3330 (and RFC 1918).
    These firewall access control rules can be deployed regardless of the IP range or TCP service traffic used within the enterprise. Of course there are firewall access control rules that should also be implemented as best practice that require specific IP addresses and ports that suit the network in which they are deployed. For example, rate limiting as a DoS preventative, may require knowledge of server IP and port number of the hosted service that is being DoS protected.
    So my question is, are there generic best practice SAN switch (network) access control rules that should also be instantiated?
    regards,
    Will.

    Hi William,
    That's a pretty wide net you're casting there, but I'll do my best to give you some insight into the matter.
    Speaking pure Fibre Channel, your only real way of controlling which nodes can access which other nodes is zones.
    for zones there are a few best practices:
    * Default Zone: Don't use it, unless you're running FICON.
    * Single Initiator zones: One host, many storage targets. Don't put 2 initiators in one zone or they'll try logging into each other which at best will give you a performance hit, at worst will bring down your systems.
    * Don't mix zoning types:  You can zone on wwn, on port, and Cisco NX-OS will give you a plethora of other options, like on device alias or LUN Zoning. Don't use different types of these in one zone.
    * Device alias zoning is definitely recommended, with Enhanced Zoning and Enhanced DA enabled, since it will make replacing HBAs a heck of a lot less painful in your fabric.
    * LUN zoning is being deprecated, so avoid. You can achieve the same effect on any modern array by doing lun masking.
    * Read-Only exists, but again any modern array should be able to make a lun read-only.
    * QoS on Zoning: Isn't really an ACL method, more of a congestion control.
    VSANs are a way to separate your physical fabric into several logical fabrics. There's one huge distinction here from VLANs: as a rule of thumb, you should put things that you want to talk to each other in the same VSAN. There's no such concept in FC as a broadcast domain the way it exists in Ethernet, so VSANs don't serve as isolation for that. Routing in Fibre Channel (IVR, or Inter-VSAN Routing) is possible, but quickly becomes a pain if you use it a lot or structurally. Keep IVR for exceptions; use VSANs for logical units of hosts and storage that belong to each other. A good example would be to put each of 2 remote datacenters in their own VSAN, create a third VSAN for the ports on the array that provide replication between DCs, and use IVR to give management hosts in-band access to all arrays.
    When using IVR, maintain a manual and minimal topology. IVR tends to become very complex very fast and auto topology isn't helping this.
    Traditional IP ACLs (permit this proto to that dest on such a port and deny other combinations) are very rare on management interfaces, since they're usually connected to already separated segments. The same goes for Fibre Channel over IP links (which connect to Ethernet interfaces in your storage switch).
    They are quite logical to use and work just the same on an MDS as on a traditional Ethernet switch when you want to use IP over FC (not to be confused with FC over IP). But then you'll logically use your switch as an L2/L3 device.
    I'm personally not an IP guy, but here's a quite good guide to setting up IP services in a FC fabric:
    http://www.cisco.com/en/US/partner/docs/switches/datacenter/mds9000/sw/4_1/configuration/guides/cli_4_1/ipsvc.html
    To protect your san from devices that are 'slow-draining' and can cause congestion, I highly recommend enabling slow-drain policy monitors, as described in this document:
    http://www.cisco.com/en/US/partner/docs/switches/datacenter/mds9000/sw/5_0/configuration/guides/int/nxos/intf.html#wp1743661
    That's a very brief summary of the most important access-control-related Best Practices that come to mind.  If any of this isn't clear to you or you require more detail, let me know. HTH!

  • Looking For Guidance: Best Practices for Source Control of Database Assets

    Database Version: 11.2.0.3
    OS: RHEL 6.2
    Source Control: subversion
    This is a general question aimed at database professionals, however, it is not specific to any oracle version, etc.  Its a leadership question for other Oracle shops regarding source control.
    The current trunk, in my client's source control, is the implementation of a previous employee who used ER Studio. After walking the batch scripts and subordinate files, it was determined that there would be no formal or elegant way to recreate the current version of the database from our source control; the engineers who have contributed to these assets are no longer employed or available for consulting. The batch scripts are stale, if you will.
    To clean this up and to leverage best practices, I need some guidance on whether or not to baseline the current repository and how to move forward with additions of assets; tables, procs, pkgs, etc.  I'm really interested in how larger oracle shops organize their repository - what directories do you use, how are they labeled...are they labeled with respect to version?
    Assumptions:
    1. repository (database assets only) needs to be baselined (?)
    2. I have approval to change this database directory under the trunk to support best practices and get the client steered straight in terms of recovery and
    Knowns:
    1. the current application version in the database is 5.11.0 (that's my client's application version)
    2. this is for one schema/user of a database (other schemas under the database belong to different trunks)
    This is the layout that we currently have; for the privacy of the client I've made this rather generic. I'd love to have a fresh start... how do I go about doing that? Initially, I like using SQL Developer's ability to create SQL scripts from a connected target.
    product_name
      |_trunk
         |_database
           |_config
           |_data
           |_database
           |_integration
           |_patch
           |   |_5.2A.2
           |   |_5.2A.4
           |   |_5.3.0
           |   |_5.3.1
           |
           |_scripts
           |   |_config
           |   |_logs
           |
           |_server
    Thank you in advance.

    Hi,
    We are using Data ONTAP 8.2.3P3 on our FAS8020 in 7-mode and we have two aggregates, a SATA and a SAS aggregate. I want to decommission the SATA aggregate, as I want to move that tray to another site. If I have a FlexVol containing 3 qtree CIFS shares, can I use data motion (vol copy) to move the FlexVol on the same controller but to a different aggregate without major downtime? I know this article is old and it says here that CIFS is not supported; however, I am reading mixed messages that the version of Data ONTAP we are now on does support CIFS and data motion, but that there will be a small downtime with the CIFS shares terminating. Is this correct? Thanks

  • Any best practice recommendations for controlling access to dashboards?

    Everyone,
         I understand that an Xcelsius dashboard compiled into a .swf file contains no means for providing access control to limit who can or how many times they can run the dashboard. Basically, if they have a copy of the .swf they can use it as much as they'd like. To protect access to sensitive data I'd like to be able to control who can access the dashboard and how many times or how long they can access it for.
         From what I've read it seems the simplest way to do this is to embed the swf file into a web portal that requires a user to authenticate before accessing the file. I suppose I can then handle how long they can access it from the back end.
         If I do this, is there anyway a user can do something like <right click - save as> on the flash file to save it on their local machine? Is there a best practice means for properly protecting the dashboard?
    Any advice would be appreciated,
    Jerry Winner

  • Any best practice to apply role based access control?

    Hi,
    I am starting to apply the access permissions for new users, as set by the admin. I am choosing Role Based Access Control for this task.
    Can you please share the best practices or any built-in feature in JSF to achieve my goal?
    Regards,
    Faysi

    Hi,
    The macro pattern is my work. I've received a lot of help from forums as this one and from the Java developers community in general and I am very happy to help others and share my work.
    Regarding the architect's responsibility of defining the pages according to the roles that have access to them: there is the enterprise.software infrastructure.facade Java package.
    Here I implemented the Facade GoF software design pattern in the GroupsAndRolesAccessFacade java class. Thus, this is the only class the developer uses in order to define groups and roles of users and to define their access as per page.
    This is according to Java EE 6 tutorial, section VII Security, page 471.
    A group, role or user is created with an Identity Management application or by a custom application.
    Pages of the application and their sections are defined or modified together with the group, role or user who has access to them.
    For this you can use the createActiveGroup and createActiveRole methods of the GroupsAndRolesAccessFacade class.
    I've been in situations where end users were very strict about the functionality of the application.
    If you try to abstract web development, you can think of writing to the database, reading from the database and modifying the database as actions.
    Each of these actions should have a suggester, an approver and an implementor.
    Thus you can't call the createActiveGroup method, for example, without first calling the requestActiveGroupCreationHelper and then the approveOrDeclineActiveGroupCreationHelper method.
    After the pages a group has access to have been defined with the createActiveGroup method, a developer can find out the pages and their sections a group has access to by calling the getMinimumInformationAboutGroup method.
    Furthermore, if the application is very strict, that is if every action which involves writing to the database must be recorded, this concept of suggester, approver and implementor is available through the recordActiveGroupAction method.
    For example, there is a web shop: its managers can change the prices of the products, but the boss will want to know who dared to lower prices.
    This action of lowering prices is an action of modifying information in the database, and you can save in the database who suggested it, who approved it and who implemented it.
    Now that I write about the functionality of the macro pattern, I realise that some methods should have better names, and I haven't had time to write documentation in the API, but this will be complete when I add the web pages for the architect to use for defining access control and for the end users to view who is doing what with their application.

  • Usage of Efxclipse Controls like FilterableTreeTable - Best practice?

    Hello all,
    what is best practice of using efxclipse controls?
    Currently I have linked the needed JARs from the Eclipse plugins folder (i.e. the efxclipse controls JAR) to my project to use features like the FilterableTreeTable. But this cannot be the best practice. If there is no Maven support, there should be another solution, right?
    thanks in advance and best regards,
    Frank

    Hi,
    I published the controls bundle to
    https://oss.sonatype.org/content/repositories/releases/ including all
    the transitive dependencies.
    > <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    > <modelVersion>4.0.0</modelVersion>
    > <groupId>my.test</groupId>
    > <artifactId>my.test.app</artifactId>
    > <version>0.0.1-SNAPSHOT</version>
    >
    > <dependencies>
    > <dependency>
    > <groupId>at.bestsolution.eclipse</groupId>
    > <artifactId>org.eclipse.fx.ui.controls</artifactId>
    > <version>2.0.0</version>
    > </dependency>
    > </dependencies>
    >
    > <repositories>
    > <repository>
    > <id>oss</id>
    > <url>https://oss.sonatype.org/content/repositories/releases/</url>
    > </repository>
    > </repositories>
    >
    > </project>
    I'll publish more artifacts in the days to come.
    Tom
    On 13.07.15 22:25, Thomas Schindl wrote:
    > It's on my todo list to publish some parts of efxclipse at maven Central
    > but i did not yet had time - i'll keep you posted

  • Best practices for deploying EMGrid Control

    Can I use one DB for the OEM & RMAN repository? I'm looking for best practices for deploying EM Grid Control in our environment. I have experience working with EM Grid Control and it was very slow; how can I make it fast? By comparison, I enjoy the speed of EM DB Control...

    DBA2008 wrote:
    Is this a good idea, to put the RMAN recovery catalog & OID schema in the OEM repository DB? I am thinking of just consolidating all these schemas in one DB.
    Unless you are really starved for resources, I would not recommend storing the OID and OEM repositories in the same database. Both of these repositories support different products, and you risk creating unnecessary dependencies when patching or upgrading. As a completely fictitious example, what if your OID installation has a critical issue that requires a repository database upgrade to version 10.2.0.6, and the Grid Control repository database is only certified for version 10.2.0.5?
    Regards,
    John P.
    http://only4left.jpiwowar.com

  • Grid Control and SOA suite monitoring best practice

    Hi there,
    I’m trying to monitor a SOA implementation on Grid Control.
    Are there some best practices about it?
    Thanks,     
    Nisti
    Edited by: rnisti on 12-Nov-2009 9:34 AM

    If they use it to access and monitor the database without making any other changes, then it should be fine. But if they start scheduling stuff like oradba mentioned above, then that is where they will clash.
    You do not want a situation where different jobs are running on the same database from different setups by different teams (cron, DB Control, dbms_job, Grid Control).
    Just remember there will be additional resource usage on the database/server to have both running, and the Grid Control repository cannot be in the same database as the DB Console repository.

  • GRC AACG/TCG and CCG control migration best practice.

    Are there any best practice documents which illustrate the step-by-step migration of AACG/TCG and CCG controls from the development instance to production? Also, how should one take a backup of the same?
    Thanks,
    Arka

    There are no automated out-of-the-box tools to migrate anything from CCG. In AACG/TCG you can export and import Access Models (including the Entitlements) and Global Conditions. You will have to manually set up roles, users, path conditions, etc.
    You can't clone AACG/TCG or CCG.
    Regards,
    Roger Drolet
    OIC

  • Grid Control deployment best practices

    I'm looking for this document; I'm interested to know more about Grid Control deployment best practices, and about monitoring and managing more than 300 DBs.

    hi
    have a search for the following document
    MAA_WP_10gR2_EnterpriseManagerBestPractices.pdf
    regards
    Alan
