Best practice on SQL memory allocation

Hi experts,
Is there a best practice to set the max amount of allocated SQL memory?
Regards

AWE needs to be enabled only if we are talking about a 32-bit OS.
A max memory limit should always be specified, because SQL Server 2005 or 2008 will try to use as much memory as is available, and the problem is that it does not release that memory afterwards.
The amount of memory to allocate to each component depends on the processes running for that application.
In your case, set max server memory for SQL Server to around 40% of the server's total memory, set SSAS to take between 30 and 40%, and the rest will be used by the application server and the OS.
Again, this configuration is not necessarily optimal, but it will avoid out-of-memory problems on that server.
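For example, a minimal T-SQL sketch of capping SQL Server at 40% of a hypothetical 40 GB server (the value is in MB; adjust it to your own total RAM):
-- Show advanced options so 'max server memory' becomes visible
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- 40% of a hypothetical 40 GB server = 16384 MB
EXEC sp_configure 'max server memory (MB)', 16384;
RECONFIGURE;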
Kind Regards
Sorin Radulescu

Similar Messages

  • How best to configure the memory allocation

    Hi,
Could anyone advise how best to configure the memory allocation by setting
the values for
-Xms
-Xmx
-XX:NewSize
-XX:MaxNewSize
-XX:SurvivorRatio
-XX:MaxPermSize
for WebLogic Express 8.1, on a server with about 4 GB RAM?
    Thanks.

Hi Chandra,
"Chandra" <[email protected]> wrote in message news:[email protected]..
> Could anyone advise how best to configure the memory allocation by setting
> the values for -Xms, -Xmx, -XX:NewSize, -XX:MaxNewSize,
> -XX:SurvivorRatio and -XX:MaxPermSize
Set -Xms == -Xmx == 512m and -XX:MaxPermSize == 64m, and see how it goes.
> for WebLogic Express 8.1, on a server with about 4 GB RAM?
This is way too much for a single WebLogic instance. Normally 512-1024 MB
is enough. I'd consider partitioning it into multiple instances with that
much memory. Less memory means faster GC.
HTH
Regards,
Slava Imeshev

  • Best Practices for EJB--memory leak avoidance

I am researching a memory leak. From what I can tell, we do not want to put any instance variables at the class level; these would incur a memory leak when the EJB container creates the EJB.
I am wondering if there are any more 'best practices' with EJBs that prevent memory leaks. A good site would help me too.
Russ

Thanks for your reply.
You are right, I was referring to stateless session beans. What I was thinking was the following:
> 1. Making parameters final unless the parameter's state is changed inside a method.
Won't help with memory leaks. There are benefits to an object being immutable, but I don't think that it eliminates the possibility of the object being leaked.
> 2. All instance variables in stateless session beans should be set to null upon ejbPassivate and ejbRemove operations. The variables should be populated on ejbCreate and ejbActivate.
Optimizing compilers don't need the hint of setting the reference to null. I don't think that helps, but I'm not 100% certain.
> 3. Before throwing an exception at the session bean level, clear up all the instance variables by setting them to null.
Shouldn't scope make it clear to the GC that the objects aren't needed? And if they're all primitives, how does this help memory leaks?
> I am using a profiler to identify the objects still being held onto after GC, but I was hoping others with more experience than I would share some of their tips on how they avoid memory leaks.
I don't know where you're getting these tips, but I don't think they're helpful.
Collections and Singletons would be the places where I'd look. If you add an object to a collection that holds onto its reference and never lets go, the GC won't reclaim it.
Singletons aren't cleaned up, because their instance is static. Anything the Singleton refers to won't be cleaned up unless the Singleton relinquishes the reference.
Look for things like that. I think they matter more.
%

  • Oracle JDBC (10g) reading clobs -- best practices

What is the better approach for saving clobs with Oracle 10g?
#1) This:
//Create the clob for insert
//(Clobs and CreateTemporaryCachedCLOB are the poster's own helpers; the SQL
// text was omitted in the original, so a hypothetical statement is shown)
PreparedStatement pstmt = conn.prepareStatement(
        "INSERT INTO mytable (description) VALUES (?)");
Clobs clobs = new Clobs();
CLOB tempClob = clobs.CreateTemporaryCachedCLOB(conn);
java.io.Writer writer = tempClob.getCharacterOutputStream();
writer.write(Description);
writer.flush();
writer.close();
#2) Or this:
OraclePreparedStatement pstmt = (OraclePreparedStatement) conn.prepareStatement(
        "INSERT INTO mytable (description) VALUES (?)");
pstmt.setStringForClob(1, Description);
According to my notes, it is #2.
What is the better approach to read clobs?
#1) Stream the clob:
//Get character stream to retrieve clob data
Reader instream = ClobIn.getCharacterStream();
//Create temporary buffer for reads
char[] buffer = new char[10];
//Number of characters read per call
int length;
//Accumulate contents (the original "Contents += buffer" appended the
// array reference rather than the characters that were read)
StringBuilder contents = new StringBuilder();
//Fetch data
while ((length = instream.read(buffer)) != -1) {
    contents.append(buffer, 0, length);
}
//Close input stream
instream.close();
//Empty LOB (poster's helper; with oracle.sql.CLOB a temporary LOB is
// released via freeTemporary())
ClobIn.empty_lob();
#2) Or this:
Simply use rs.getString() to get your clob contents. This will return the entire clob and will not truncate.
I'm just confused about the best practices for performance/memory allocation, and I keep reading people saying different things.
Reposted in JDBC forum

    Check chapter 16 of "PL/SQL Programming", by Oracle Press, for a starter.
    Then have a look at this link - I found it helpful: http://www.oracle.com/technology/sample_code/tech/java/sqlj_jdbc/files/advanced/LOBSample/LOBSample.java.html

  • Best practice for CPU and memory usage?

I find my AIR application takes a lot of memory - usually
>170 MB. And what is strange is that the memory usage keeps
increasing (about 4 KB/s) even when the application is simply sitting
there doing nothing. The CPU usage is supposed to be 0% when the
application is doing nothing, but it's not (usually ~5%). So I
wonder if there is any article about best practice on CPU/memory
usage.

    Those numbers indicate that your application is in fact doing
    something. Perhaps you have a timer still running, or work being
    done on an enterFrame event?

  • SQL 2012 service accounts best practice

I'm installing SQL Server 2012 for ConfigMgr 2012 R2 and I wonder what the best practice is for SQL service accounts.
During the installation of SQL Server, in the Server Configuration/Service Accounts menu I'm allowed to configure the following service accounts: SQL Server Agent, SQL Server Database Engine, SQL Server Reporting Services, SQL Server Browser.
Do I have to create separate domain user (not admin) accounts for each service and configure a service principal name (SPN) for all of them?
For example: a domain user account named SQLSA for SQL Server Agent, another domain user account
SQLADBE for the SQL Server Database Engine, etc.

    During the installation of SQL Server 2012, the user is prompted to provide service account
    credentials. The default service accounts suggested vary depending on whether SQL Server
    2012 is installed on a computer running Windows Vista or Windows Server 2008 or on a computer
    running Windows 7 or Windows Server 2008 R2. On computers running Windows Vista
    or Windows Server 2008 operating systems, the following default service accounts are used:
- NETWORK SERVICE: Database Engine, SQL Server Agent, Analysis Services,
Integration Services, Reporting Services, SQL Server Distributed Replay Controller,
SQL Server Distributed Replay Client
- LOCAL SERVICE: SQL Server Browser, FD Launcher (Full-Text Search)
- LOCAL SYSTEM: SQL Server VSS Writer
    On computers running Windows 7 or Windows Server 2008 R2 operating systems, the following
    default accounts are used:
- Virtual Account or Managed Service Account: Database Engine, SQL Server Agent,
Analysis Services, Integration Services, Replication Services, SQL Server Distributed
Replay Controller, SQL Server Distributed Replay Client, FD Launcher (Full-Text Search)
- LOCAL SERVICE: SQL Server Browser
- LOCAL SYSTEM: SQL Server VSS Writer
    For Windows 7 and Windows Server 2008 R2, you can use a Managed Service Account
    (MSA) or a Managed Local Account. The differences between these account types are as
    follows:
- Managed Service Account (MSA): This special kind of domain account, managed
by a domain controller, is assigned to a single member computer and used for running
services. The MSA password is managed by the domain controller. MSAs can register
a Service Principal Name (SPN) with Active Directory. MSAs use a $ name suffix; for
example, CONTOSO\SQL-A-MSA$. You must create the MSA prior to running SQL
Server Setup if you want to use an MSA with SQL Server services.
- Virtual Accounts or Managed Local Accounts: These virtual accounts can access
the network in a domain environment and are used by default for service accounts
during SQL Server 2012 setup when run on Windows 7 or Windows Server 2008 R2.
Such accounts use the NT SERVICE\<SERVICENAME> format. You don’t need to specify
a password when using virtual accounts with SQL Server 2012 because this is handled
automatically by the operating system.
You should run SQL Server services using the minimum possible user rights, and use an
MSA or virtual account when possible. If you are manually configuring service accounts, use
separate accounts for different SQL Server services. If it is necessary to change the properties
of service accounts used for SQL Server 2012, use SQL Server tools such as SQL Server
Configuration Manager. This ensures that all necessary dependencies are
updated, which does not happen if you use only the Services console.
    Although you can configure domain accounts as service accounts, this strategy requires
    more effort because you must ensure that service account passwords are changed regularly.
    You must also manage SPNs, which are required for Kerberos authentication.
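If you later want to verify which accounts the services actually ended up running under, a quick sketch (assuming SQL Server 2008 R2 SP1 or later, where this DMV is available):
-- List SQL Server services and their configured service accounts
SELECT servicename, service_account, startup_type_desc, status_desc
FROM sys.dm_server_services;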
Best regards
    P.Ceglie

  • Oracle PL/SQL best practice

    Hello experts,
Is there any place I could find Oracle PL/SQL best practice programming advice? I have a requirement to write a small paper to help new people who come to work for us, so I wanted to include some best practices (or a set of standard tips) along with coding standards etc...
    Best regards,
    Igor

    Hello,
    my first links would be
    Re: 10 database commandments
    On Code Reviews and Standards
Beware: Any discussion about coding standards tends to get lengthy, with flame wars about upper/lower/camel-case keywords, indent style etc. :-)
    As stated in the linked document: keep them simple.
    Regards
    Marcus
    Best Practices
    Doing SQL from PL/SQL: Best and Worst Practices
    Naming and Coding Standards for SQL and PL/SQL
    PL/SQL Coding Standards
    Also related:
    Performance question
    Re: How to re-construct my cursor ?

  • SQL Server Best Practices Architecture UCS and FAS3270

Hey there,
We are moving from an EMC SAN and physical servers to NetApp FAS3270 and a virtual environment on Cisco UCS B200 M3. Traditionally, best practice for SQL Server databases is to separate the following files onto separate LUNs and/or volumes:
- Database data files
- Transaction log files
- TempDB data files
I have also seen additional separation for system data files (master, model, msdb, distribution, resource DB, etc.) and indexes. Depending on the size of the database and the I/O requirements you can add multiple files for databases. The goal is to provide optimal performance, and the method of choice is to separate reads and writes (random and sequential activity). If you have 30 disks, is it better to separate them, or is it better to leave the files in one continuous pool? For example:
- 12 drives RAID 10 (data files)
- 10 drives RAID 10 (log files)
- 8 drives RAID 10 (TempDB)
Please don't get too caught up on the numbers used in the example, but focus on whether or not (using FAS3270) it is better practice to separate or consolidate drives/volumes for SQL Server databases.
Thanks!

Hi Michael,
It's a completely different world with NetApp! As a rule of thumb, you don't need separate spindles for different workloads (like SQL databases & logs) - you just put them into separate flexible volumes, which can share the same aggregate (i.e. a grouping of physical disks). For more detailed info about SQL on NetApp have a look at this doc:
http://www.netapp.com/us/system/pdf-reader.aspx?pdfuri=tcm:10-61005-16&m=tr-4003.pdf
Regards,
Radek

  • RAID Level Configuration Best Practices

Hi guys,
We are building a new virtual environment for SQL Server and have to define the RAID level configuration for the SQL Server setup.
Please share your thoughts on the RAID configuration for SQL data, log, tempdb and backup files:
- SQL data files -->
- SQL log files -->
- Tempdb data -->
- Tempdb log -->
- Backup files -->
Any other configuration best practices are more than welcome,
like memory settings at OS level and LUN settings.
Best practices to configure SQL Server in Hyper-V with clustering.
Thank you
Please Mark As Answer if it is helpful. \\Aim To Inspire Rather to Teach A.Shah

Hi,
If you can spend some bucks, you should go for RAID 10 for all files. Also, as a best practice, keeping database log and data files on different physical drives gives optimum performance. Tempdb can be placed with the data files or on a different drive depending on usage; it is always good to use a dedicated drive for tempdb.
For memory settings, please refer to this link for setting max server memory.
You should monitor SQL Server memory usage using the counters below, taken from this link:
SQLServer:Buffer Manager--Buffer Cache hit ratio (BCHR): If your BCHR is high (90 to 100), it points to the fact that you don't have memory pressure. Keep in mind that if somebody runs a query which requests a large number of pages, BCHR might momentarily come down to 60 or 70 or less, but that does not mean there is memory pressure; it means the query requires a lot of memory and will take it. After that query completes you will see BCHR rising again.
SQLServer:Buffer Manager--Page Life Expectancy (PLE): PLE shows how long a page remains in the buffer pool; the longer it stays, the better. It is a common misconception to take 300 as a baseline for PLE, but it is not: I read in Jonathan Kehayias' book (Troubleshooting SQL Server) that this value was a baseline when SQL Server 2000 was current and the most RAM one would see was 4-6 GB. Now, with 200 GB of RAM coming into the picture, this value is not correct. He also gave a (tentative) formula for calculating it: take the base counter value of 300 presented by most resources, and then determine a multiple of this value based on the configured buffer cache size, which is the 'max server memory' sp_configure option in SQL Server, divided by 4 GB.
So, for a server with 32 GB allocated to the buffer pool, the PLE value should be at least (32/4)*300 = 2400. So far this has worked well for me, so I would recommend you use it.
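As a sketch of how you might read the current PLE to compare against that formula (counter names as exposed by sys.dm_os_performance_counters; the threshold arithmetic follows the rule above):
-- Current Page Life Expectancy, in seconds
SELECT cntr_value AS ple_seconds
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name = 'Page life expectancy';
-- e.g. with 32 GB of max server memory the target is (32/4)*300 = 2400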
SQLServer:Buffer Manager--Checkpoint pages/sec: This counter is important for spotting memory pressure, because if the buffer cache is low then lots of new pages need to be brought into, and flushed out of, the buffer pool; under load the checkpoint's work increases and it starts flushing out dirty pages very frequently. If this counter is high, your SQL Server buffer pool is not able to cope with the incoming requests, and we need to increase it by increasing buffer pool memory, or by increasing physical RAM and then making adequate changes to the buffer pool size. Technically this value should be low; if you are looking at a line graph in perfmon, it should stay near the baseline on a stable system.
SQLServer:Buffer Manager--Free pages: This value should not be low; you always want to see a high value for it.
SQLServer:Memory Manager--Memory Grants Pending: If you see pending memory grants, your server is facing a SQL Server memory crunch and increasing memory would be a good idea. For memory grants please read this article:
http://blogs.msdn.com/b/sqlqueryprocessing/archive/2010/02/16/understanding-sql-server-memory-grant.aspx
SQLServer:Memory Manager--Target Server Memory: This is the amount of memory SQL Server is trying to acquire.
SQLServer:Memory Manager--Total Server Memory: This is the amount of memory SQL Server has currently acquired.
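A small sketch to read those two counters side by side (same DMV as above; if Target stays persistently above Total, SQL Server wants more memory than it is getting):
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Target Server Memory (KB)', 'Total Server Memory (KB)');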
    For other settings I would like you to discuss with vendor. Storage questions IMO should be directed to Vendor.
    Below would surely be a good read
    SAN storage best practice For SQL Server
    SQLCAT best practice for SQL Server storage
    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it.
    My TechNet Wiki Articles

  • Oracle Statistics - Best Practice?

    We run stats with brconnect weekly:
    brconnect -u / -c -f stats -t all
    I'm trying to understand how some of our stats are old or stale.  Where's my gap?  We are running Oracle 11g and have Table Monitoring set on every table.  My user_tab_modifications is tracking changes in just over 3,000 tables.  I believe that when those entries surpass 50% changed, then they will be flagged for the above brconnect to update their stats.  Correct?
    Plus, we have our DBSTATC entries.  A lot of those entries were last analyzed some 10 years ago.  Does the above brconnect consider DBSTATC at all?  Or do we need to regularly run the following, as well?
    brconnect -u / -c -f stats -t dbstatc_tab
    I've got tables that are flagged as stale, so something doesn't seem to be quite right in our best practice.
    SQL> select count(*) from dba_tab_statistics
      2  where owner = 'SAPR3' and stale_stats = 'YES';
      COUNT(*)
          1681
    I realize that stats last analyzed some ten years ago does not necessarily mean they are no longer good but I am curious if the weekly stats collection we are doing is sufficient.  Any best practices for me to consider?  Is there some kind of onetime scan I should do to check the health of all stats?

    Hi Richard,
    > We are running Oracle 11g and have Table Monitoring set on every table.
The table monitoring attribute is not necessary anymore - or, better said, it is deprecated - because these metrics are controlled by STATISTICS_LEVEL nowadays. The table monitoring attribute is only relevant for Oracle versions lower than 10g.
    > I believe that when those entries surpass 50% changed, then they will be flagged for the above brconnect to update their stats.  Correct?
    Correct, if BR*Tools parameter stats_change_threshold is set to its default. Brconnect reads the modifications (number of inserts, deletes and updates) from DBA_TAB_MODIFICATIONS and compares the sum of these changes to the total number of rows. It gathers statistics, if the amount of changes is larger than stats_change_threshold.
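To get a rough idea of what brconnect compares, a sketch against the same dictionary views (the SAPR3 owner is taken from the poster's earlier query; brconnect's exact internal query is not shown here):
-- Tracked changes vs. total rows, per table
SELECT m.table_name, m.inserts, m.updates, m.deletes, t.num_rows
FROM dba_tab_modifications m
JOIN dba_tables t
  ON t.owner = m.table_owner
 AND t.table_name = m.table_name
WHERE m.table_owner = 'SAPR3';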
    > Does the above brconnect consider DBSTATC at all?
    Yes, it does.
    > I've got tables that are flagged as stale, so something doesn't seem to be quite right in our best practice.
    The column STALE_STATS in view DBA_TAB_STATISTICS is calculated differently. This flag is used by the Oracle standard DBMS_STATS implementation which is not considered by SAP - for more details check the Oracle documentation "13.3.1.5 Determining Stale Statistics".
    The GATHER_DATABASE_STATS or GATHER_SCHEMA_STATS procedures gather new statistics for tables with stale statistics when the OPTIONS parameter is set to GATHER STALE or GATHER AUTO. If a monitored table has been modified more than 10%, then these statistics are considered stale and gathered again.
STALE_PERCENT determines the percentage of rows in a table that have to change before the statistics on that table are deemed stale and should be regathered. The valid domain for stale_percent is non-negative numbers. The default value is 10%. Note that if you set stale_percent to zero, the AUTO STATS gathering job will gather statistics for this table every time a row in the table is modified.
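For illustration, a sketch of inspecting and changing that preference with DBMS_STATS (the table name is hypothetical; as the next paragraph notes, SAP's brconnect does not rely on this mechanism):
-- Current STALE_PERCENT for one table
SELECT DBMS_STATS.GET_PREFS('STALE_PERCENT', 'SAPR3', 'SOME_TABLE') FROM dual;
-- Lower the threshold to 5% for that table
EXEC DBMS_STATS.SET_TABLE_PREFS('SAPR3', 'SOME_TABLE', 'STALE_PERCENT', '5')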
    SAP has its own automatism (like described with brconnect and stats_change_threshold) to identify stale statistics and how to collect statistics (percentage, histograms, etc.) and does not use / rely on the corresponding Oracle default mechanism.
    > Any best practices for me to consider?  Is there some kind of onetime scan I should do to check the health of all stats?
    No performance issue? No additional and unnecessary load on the system (e.g. dynamic sampling)? No brconnect runtime issue? Then you don't need to think about the brconnect implementation or special settings. Sometimes you need to tweak it (e.g. histograms, sample sizes, etc.), but then you have some specific issue that needs to be solved.
    Regards
    Stefan

  • Best Practice to fetch SQL Server data and Insert into Oracle Tables

    Hello,
I want to read SQL Server data every half an hour and write it into Oracle tables (in two different databases). What is the best practice for doing this?
We do not have any database links from Oracle to SQL Server or vice versa.
Any help is highly appreciated.
    Thanks

    Well, that's easy:
    use a TimerTask to do the following every half an hour:
    - open a connection to sql server
    - open two connections to the oracle databases
    - for each row you read from the sql server, do the inserts into the oracle databases
    - commit
    - close all connections

  • Best practice to define length for varchar field of table in sql server

What is the best practice for defining the length of a varchar field in a table - for a field such as RemarksByPerson, should it be varchar(max) or varchar(4000)?
Could it affect optimization in the future?
Experts, please reply ...
    Dilip Patil..

Hi Dilip,
varchar(n | max) is variable-length, non-Unicode character data. n defines the string length and can be a value from 1 through 8,000. max indicates that the maximum storage size is 2^31-1 bytes (2 GB). The storage size is the actual length of the data entered + 2 bytes. We use varchar when the sizes of the column data entries vary considerably, and if the field's data size might exceed 8,000 bytes, we should use varchar(max).
So the conclusion, just as Uri said, is that choosing varchar(max) or varchar(4000) depends on how many characters you are going to store.
    The following document about varchar in SQL Server is for your reference:
    http://technet.microsoft.com/en-us/library/ms176089.aspx
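As a small illustration (hypothetical table and column names), the choice just determines the declared type:
-- Sketch: pick the declared length from the data you expect to store
CREATE TABLE dbo.PersonRemarks          -- hypothetical example table
(
    PersonId         int            NOT NULL,
    RemarksByPerson  varchar(4000)  NULL,  -- n <= 8000: stored in-row when it fits
    LongRemarks      varchar(max)   NULL   -- may grow past 8,000 bytes, up to 2 GB
);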
    Thanks,
    Katherine Xiong
    Katherine Xiong
    TechNet Community Support

  • SQL Server 2012 Infrastructure Best Practice

    Hi,
    I would welcome some pointers (direct advice or pointers to good web sites) on setting up a hosted infrastructure for SQL Server 2012. I am limited to using VMs on a hosted site. I currently have a single 2012 instance with DB, SSIS, SSAS on the same server.
    I currently RDP onto another server which holds the BI Tools (VS2012, SSMS, TFS etc), and from here I can create projects and connect to SQL Server.
    Up to now, I have been heavily restricted by the (shared tenancy) host environment due to security issues, and have had to use various local accounts on each server. I need to put forward a preferred environment that we can strive towards, which is relatively
    scalable and allows me to separate Dev/Test/Live operations and utilise Windows Authentication throughout.
    Any help in creating a straw man would be appreciated.
    Some of the things I have been thinking through are:
    1. Separate server for Live Database, and another server for Dev/Test databases
    2. Separate server for SSIS (for all 3 environments)
3. Separate server for SSAS (not currently using cubes, but this is a future requirement; perhaps it does not need a dedicated server?)
4. Separate server for development (holding VS2012, TFS2012, SSMS etc). Is it worth having a local SQL Server DB on this machine? I was unsure where SQL Server Agent jobs are best run from, i.e. from the Live DB only, from another SQL Server instance, or utilising SQL Server Agent on all (Live, Test and Dev) SQL Server DB instances. Running from one place would allow me to have everything executable from one place, with centralised package reporting etc. I would also benefit from some licence cost reductions (Kingsway tools).
    5. Separate server to hold SSRS, Tableau Server and SharePoint?
    6. Separate Terminal Server or integrated onto Development Server?
7. I need a server to hold the file (import and extract) folders for use by SSIS packages, which will be accessible by different users.
I know (and apologise that) I have given little info about the requirement. I have an opportunity to put forward my requirements for x months into the future, and there is a mass of info out there which is not distilled in a way I can utilise. It would be helpful to know what I should aim for in terms of separate servers for the different services and/or environments (Dev/Test/Live), and specifically best practice for where SQL Server Agent jobs should be run from, and perhaps a little info on how best to control deployment/change control. (Note my main interest is not in application development; it is in setting up packages to load/refresh data marts for reporting purposes.)
    Many thanks,
    Ken

    Hello,
    On all cases, consider that having a separate server may increase licensing or hosting costs.
    Please allow to recommend you Windows Azure for cloud services.
Answers:
1. This is always a best practice.
2. Having SSIS on a separate server allows you to isolate import/export packages, but may increase network traffic between servers. I don't know if your provider charges for incoming or outgoing traffic.
3. SSAS on a separate server is certainly a best practice too. It contributes to better performance and scalability.
4. SQL Server Developer Edition costs about $50 only. Are you talking about centralizing job scheduling on an on-premises computer rather than having jobs enabled on a cloud service? Consider PowerShell to automate tasks.
5. If you will use Reporting Services in SharePoint integrated mode, you should install Reporting Services on the same server where SharePoint is located.
6. SQL Server can coexist with Terminal Services, with the exception of clustered environments.
7. SSIS packages may be competing with users for access to files. Copying the files to a disk resource available to the SSIS server may be a better solution.
A few more things to consider:
- Performance of the storage subsystem on the cloud service.
- How many cores? How much RAM?
- Creating a Domain Controller or using Active Directory services.
    These resources may be useful.
    http://www.iis.net/learn/web-hosting/configuring-servers-in-the-windows-web-platform/sql-2008-for-hosters
    http://azure.microsoft.com/blog/2013/02/14/choosing-between-sql-server-in-windows-azure-vm-windows-azure-sql-database/
    Hope this helps.
    Regards,
    Alberto Morillo
    SQLCoffee.com

  • Best practice for saving data in SQL server

    Hi all
    Hoping for a little help on this question. 
If I have a list of fields, e.g. (name, address, postal, phone etc.), then I create a webform/task to gather some of these fields (name, postal), and then I make another webform/task to gather some other fields (address, phone).
What is best practice in SQL Server for storing the returned values?
Is it:
1. To make a table with all the fields in the list + task id. These fields could be in the correct format (number, date etc.), and all answers to all tasks are inserted into this table.
2. To make a value table for each field with the correct type + task id, so all name values are stored in the "name value table" with the task id.
How would I select the values from a certain task with this kind of setup?
3. ??
    Best regards
    Bo

    Hi Atul
Thanks for your reply. Can you elaborate a bit further on this, since I am still a little confused?
Let me try to explain my scenario a bit more.
Say instead that there are 50 fields, each with its own unique ID; maybe an answer table would look like this:
taskid | field_1 | field_2 | field_3 | field_4 | field_n
So no matter which fields the user fills out, they can be stored in one table.
Question is, is this a good way to do it? And how do I select from this table using a join?
As far as I know you can't name columns in a table with just numbers, which would have been great, using the field id as the column name.
OR
Would you have 50 tables, each with a field_id and a value (of the correct type)?
And could you give me an example of how to bind and select from this kind of structure?
Also, inserting into 50 tables on a save.... is that the right way to go? :)
    Best regards
    Bo
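For what it's worth, a sketch of option 2 in T-SQL (table and column names are made up), including how a select could pivot field values back into columns:
-- One row per (task, field) pair instead of 50 columns or 50 tables
CREATE TABLE dbo.TaskFieldValue
(
    TaskId  int            NOT NULL,
    FieldId int            NOT NULL,
    Value   nvarchar(4000) NULL,
    PRIMARY KEY (TaskId, FieldId)
);
-- Pivot two example fields for one task back into columns
SELECT TaskId,
       MAX(CASE WHEN FieldId = 1 THEN Value END) AS Name,
       MAX(CASE WHEN FieldId = 2 THEN Value END) AS Postal
FROM dbo.TaskFieldValue
WHERE TaskId = 42
GROUP BY TaskId;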

  • Sql backup best practice on vms that are backed up as a complete vm

Hi,
Apologies, as I am sure this has been asked many times before, but I can't really find an answer to my question. My situation is this: I have two types of backups, agent-based and snapshot-based.
For the VMs that are backed up by snapshots the process is: VMware does the snap, then the SAN takes a snap of the storage, and then the backup is taken from the SAN. We then have full VM backups.
The agent-based backups only back up file-level data, so we use them for our SQL cluster and some other servers. These are not snaps/full VM backups, but simply backups of databases and files etc.
This works well, but there are a couple of servers that need to be in the full VM snap category and therefore can't have the backup agent installed on the VM, as it is already being backed up by the snap technology. So what would be best practice on these snapped VMs that also have SQL installed? Should I configure a recurring backup in SQL Server Management Studio (if this is possible??) which runs before the VM snap backup? Or is there another way I should be backing up the DBs?
Any suggestions would be very welcome.
Thanks
Aaron

    Hello Aaron,
    If I understand correctly, you perform a snapshot backup of the complete VM.
    In that case you also need to create a SQL Server backup schedule to perform Full and Transaction Log backups.
(if you do a file-level backup of the .mdf and .ldf files with an agent you also need to do this)
    I would run a database backup before the VM snapshot (to a SAN location if possible), then perform the Snapshot backup.
    You should set up the transaction log backups depending on business recovery needs.
    For instance: if your company accepts a maximum of 30 minutes data loss make sure to perform a transaction log backup every 30 minutes.
    In case of emergency you could revert to the VM Snapshot, restore the full database backup and restore transaction log backups till the point in time you need.
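A minimal T-SQL sketch of that schedule's two pieces (database name and backup paths are hypothetical; the recurring jobs themselves would be scheduled through SQL Server Agent):
-- Full backup before the VM snapshot runs
BACKUP DATABASE [MyDb]
TO DISK = N'\\san\backup\MyDb_full.bak'
WITH INIT, CHECKSUM;
-- Transaction log backup, repeated every 30 minutes per the recovery needs above
BACKUP LOG [MyDb]
TO DISK = N'\\san\backup\MyDb_log.trn'
WITH CHECKSUM;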
