Best Practice for Database Parameters ARCH_LAG_TARGET and DBWR Checkpoint

Hi,
As a best practice, I need to know the recommendation or guideline for these two database parameters.
I found that for ARCH_LAG_TARGET, Oracle recommends setting it to 1800 seconds (30 minutes).
Maybe someone can guide me with these two parameters...
Cheers

Dear unsolaris,
First of all, if you want to track full and incremental checkpoints, set the LOG_CHECKPOINTS_TO_ALERT parameter to TRUE. You will then see the checkpoint SCNs and the completion times in the alert log.
A full checkpoint is triggered when a log switch happens, and the checkpoint position recorded in the control file is written to the datafile headers. For just a tiny amount of time the database could be consistent even though it is open and in read/write mode.
The ARCH_LAG_TARGET parameter is disabled (set to 0) by default. Here is the definition for that parameter:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams009.htm
If you want to set this parameter, Oracle recommends a value of 1800, as you have said. The right value can vary from database to database, and it is better to verify it by testing in your own environment.
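As a rough sketch of how both settings can be enabled for a test (this assumes you run from an spfile; adjust the SCOPE clause if not):

ALTER SYSTEM SET log_checkpoints_to_alert = TRUE SCOPE=BOTH;  -- log full/incremental checkpoints to the alert log
ALTER SYSTEM SET arch_lag_target = 1800 SCOPE=BOTH;           -- force a log switch at least every 1800 seconds

-- Then watch the checkpoint SCN advance while you test:
SELECT checkpoint_change#, current_scn FROM v$database;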
Regards.
Ogan

Similar Messages

  • Noticing a lot of database index fragmentation yet no Health Analyzer alerts...? Best practice for database maintenance in 2013?

    Could someone point me to a document on best practices for database maintenance with SharePoint 2013? I have read the 2010 document, but I'm hoping there is an updated one that I'm just missing.
    My problem is that our DBA recently noticed that many of our SharePoint databases have high index fragmentation. I have the Health Analyzer rules for index fragmentation enabled and they run daily, but I've never received an alert, despite the majority of our databases having greater than 40% fragmentation and some even above 95%.
    Obviously it has our attention now and we want to get this addressed. My understanding (which I now fear is at best incomplete, more likely just plain wrong) was that a maintenance plan wasn't needed for index fragmentation in 2010/2013 like it was in 2007.
    Thanks,
    Troy

    It depends. Here are the rules for that job:
    Sampled mode
    Page count > 24 and avg fragmentation in percent > 5
    or
    Page count > 8 and avg page space used in percent < fill_factor * 0.9 (the fill factor in SharePoint 2013 varies from 80 to 100 depending on the index; it is important not to adjust index fill factors)
    I have seen cases where the indexes are not automatically managed by the rule and require a manual defragmentation with a Full Scan, instead of Sampled. Once the Full Scan defrag completed, the timer job started handling the index fragmentation automatically.
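    If you want to see for yourself which indexes the rule should be picking up, a read-only query along these lines (run against the content database; the thresholds just mirror the rule quoted above) is one way to check:

    -- Sampled-scan fragmentation check for the current database.
    SELECT OBJECT_NAME(ips.object_id)        AS table_name,
           i.name                            AS index_name,
           ips.avg_fragmentation_in_percent,
           ips.page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    WHERE ips.page_count > 24
      AND ips.avg_fragmentation_in_percent > 5
    ORDER BY ips.avg_fragmentation_in_percent DESC;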
    Trevor Seward
    Follow or contact me at...
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • What are the best practices for Database management and performance tuning?

    Hello,
    I want to ensure that I am using the best practices for managing and maintaining our Database.
    Is there any documentation out there that outlines how to maintain and ensure top performance out of our database?
    Thank you!
    John Sefton

    I appreciate the responses; however, this is not the information I am looking for.
    I am specifically looking for best practices involving management and performance tuning.
    For example: are there tools I can install that will monitor the size and response time of the database and alert me if there is degradation in performance?
    Are there specific periodic activities I should be doing to guarantee that my database will continue to function the way it is supposed to?
    Or is this a fire-and-forget solution that does not need this attention?

  • Best Practices for NCS/PI Server and Application Monitoring question

    Hello,
    I am deploying a virtual instance of Cisco Prime Infrastructure 1.2 (1.2.1.012) on an ESX infrastructure. This is being deployed in an enterprise environment. I have questions around the best practices for monitoring this appliance. I am looking to monitor application failures (services down, DB issues) and "hardware" (I understand this is a virtual machine, but statistics on the filesystem and CPU/memory are good).
    Firstly, I have enabled the snmp-server via the CLI and set the SNMP trap host destination. I have created a notification receiver for the SNMP traps inside the NCS GUI and enabled the "System" alarm type. This type includes alarms like NCS_DOWN and "PI database is down". I am trying to understand what the difference is between enabling SNMP-SERVER HOST via the CLI and setting the notification destination in the GUI. Also, how can I generate an NCS_DOWN alarm in my lab? Doing an NCS stop does not generate any alarms, and I have not been able to find much information on how to generate this as a test.
    Secondly, how and which processes should I be monitoring from the management station? I cannot easily identify the main NCS processes from the output of ps -ef when logged into the shell as root.
    Thanks guys!

    Amihan_Zerrudo wrote:
    1.) What is the cost of having the scope in a <jsp:useBean> tag set to 'session'? I am aware that there is a list of scopes like page, application, etc., and that if I use 'session' my variable will live for as long as that session is alive (did I get this right?).
    You should rather look at the functional requirements instead of the cost. If the bean needs to be session scoped (e.g. to maintain the logged-in user), then do it so. If it just needs to be request scoped (e.g. single-page form data), then keep it request scoped.
    2.) If the JSP page where I use that <useBean> is to be accessed hundreds of times a day, will it strain my server resources? Right now I am using the Sun GlassFish Server.
    It will certainly eat resources. Just supply enough CPU speed and memory to the server. You cannot expect a web server running on a Pentium 500 MHz with 256 MB of memory to flawlessly serve 100 simultaneous users in the same second, but you may expect it to serve 100 users per 24 hours.
    3.) Can you suggest best practices for memory management given the architecture I described above?
    Just write code so that it doesn't unnecessarily eat memory. Only allocate memory if your application needs to do so. You should rather let the hardware depend on the application requirements, not let the application depend on the hardware specs.
    4.) Also, I have implemented connection pooling in my architecture, but my application is to be used by thousands of clients every day. Can the Sun GlassFish Server take care of that, or will I have to purchase a more powerful server?
    GlassFish is just application server software, not server hardware. Your concerns are rather hardware related.

  • What is the best practice for creating master pages and styles with translated text?

    I format translated text all the time for my company. I want to create a set of master pages and styles for each language and then import those styles into future translated documents. That way, the formatting can be done quickly and easily.
    What are the best practices for doing this? As a company this has been tried in the past, but without success. I'd like to know what other people are doing in this regard.
    Thank you!

    I create a master template that is usually void of content, with the exception that I define as many of the paragraph styles as I believe can or will be used, with examples of their use in the body of the document--a style guide for that client. When beginning a new document for that client, I import those styles from the Paragraph Styles panel.
    The exception to this is when, in a rush, I begin documentation first and then start the new work. In the new work I still pull those defined paragraph and/or object styles into it via their panels.
    There are times I need new styles. If they have broader applicability than a one-off instance or publication, I open the style template for that client, import the style(s) from the publication containing them, and create example paragraphs and usage instructions.
    Take care, Mike

  • Tips n Tricks/Best Practices for integrating iPhone, iPad and MacBook Pro

    My wife just purchased an iPhone, iPad, and MacBook Pro for her non-profit consulting business, and I was wondering whether a tips-and-tricks or best-practices guide exists for efficiently and productively integrating these devices.

    http://www.apple.com/icloud/

  • Best practice for database migration in 11g

    Hello,
    Database migration is required due to an OS change. I have two database instances, say A and B, on the old server, where the RDBMS version is 11.1.0.7.0. They need to be migrated to a new OS where Oracle has been installed with version 11.2.0.2.0.
    Since all data and objects need to be migrated to the new server, I want to know what the best practice is and how to do that. Thanks in advance for your guidance.
    Thanks and Regards,
    Prosenjit

    Hi Prosenjit,
    you have some options.
    1. RMAN restore: you can restore your database via RMAN to the new host and then upgrade it.
        Please follow instruction from MOS Note: RMAN Restore of Backups as Part of a Database Upgrade (Doc ID 790559.1)
    2. Data Guard: check the MOS Note: Mixed Oracle Version support with Data Guard Redo Transport Services (Doc ID 785347.1)
    3. Full Export / Import (DataPump)
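    Whichever of the three paths you pick, a couple of quick checks run on both the 11.1.0.7 source and the 11.2.0.2 target, before and after the move, make it easy to confirm the versions and spot anything that was invalidated (a minimal sketch only):

    SELECT banner FROM v$version;
    SELECT COUNT(*) AS invalid_objects FROM dba_objects WHERE status = 'INVALID';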
    Borys

  • Best practice for database move to new disk

    Good morning,
    Hopefully this is a straightforward question/answer, but we know how these things go...
    We want to move a SQL Server Database data file (user database, not system) from the D: drive to the E: drive.
    Is there a best practice method?
    My colleague has offered "ALTER DATABASE XXXX MODIFY FILE" whilst I'm more inclined to use "sp_detach_db".
    Is there a best practice method or is it much of a muchness?
    Regards,
    Andy

    Hello,
    A quick search on the MSDN blogs does not show any official statement about ALTER DATABASE MODIFY FILE vs ATTACH. However, you can see a huge number of articles promoting and supporting
     the use of ALTER DATABASE in any scenario (replication, mirroring, snapshots, AlwaysOn, SharePoint, Service Broker).
    http://blogs.msdn.com/b/sqlserverfaq/archive/2010/04/27/how-to-move-publication-database-and-distribution-database-to-a-different-location.aspx
    http://blogs.msdn.com/b/sqlcat/archive/2010/04/05/moving-the-transaction-log-file-of-the-mirror-database.aspx
    http://blogs.msdn.com/b/dbrowne/archive/2013/07/25/how-to-move-a-database-that-has-database-snapshots.aspx
    http://blogs.msdn.com/b/sqlserverfaq/archive/2014/02/06/how-to-move-databases-configured-for-sql-server-alwayson.aspx
    http://blogs.msdn.com/b/joaquint/archive/2011/02/08/sharepoint-and-the-importance-of-tempdb.aspx
    You cannot find the same about ATTACH. In fact, I found the following article:
    http://blogs.msdn.com/b/sqlcat/archive/2011/06/20/why-can-t-i-attach-a-database-to-sql-server-2008-r2.aspx?Redirected=true
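    For completeness, the ALTER DATABASE route is roughly the sequence below. The database name XXXX comes from the question above; the logical file name XXXX_Data and the target path are only placeholders, and the physical data file has to be copied from D: to E: at the operating-system level while the database is offline:

    -- Point the catalog at the new location first (placeholder names).
    ALTER DATABASE XXXX MODIFY FILE (NAME = XXXX_Data, FILENAME = 'E:\Data\XXXX_Data.mdf');
    -- Take the database offline, copy the file, then bring it back online.
    ALTER DATABASE XXXX SET OFFLINE WITH ROLLBACK IMMEDIATE;
    -- (copy the physical data file from D: to E: here)
    ALTER DATABASE XXXX SET ONLINE;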
    Hope this helps.
    Regards,
    Alberto Morillo
    SQLCoffee.com

  • Best practice for migrating between environments and versions?

    Hi to all,
    we've got a full suite of solutions custom developed in SAP BPC 7.0, SP 7. We'd like to understand whether
    - there are best practices for copying these applications from one environment to another (another client)
    - there are best practices for the case where the client has a newer version of SAP BPC (they would install 7.5, while we're still stuck with 7.0).
    Thank you very much
    Daniele

    Hi Daniele
    I am not entirely sure what you are asking. Could you please provide additional information?
    Are you looking for best-practice recommendations for governance, for example change transports between DEV, QA and PRD in BPC 7.0?
    What is the best method? Server Manager backup and restore, etc.?
    And
    best-practice recommendations on how to upgrade to a different version of BPC, for example upgrading from BPC 7.0 to 7.5 or 10.0?
    Kind Regards
    Daniel

  • Best Practice for Buy in Set and Dismantle for Sales

    Hi All SAP Masters,
    We have a scenario in which we purchase an item as a "set"; the set has a few components inside it (something like a material BOM). For example, a machine which comes with several parts. However, when the user receives this set from the supplier, the user would further dismantle certain part(s) from the set/"machine" and sell them separately to the customer as a component/"single item".
    What is the best practice process in SAP to adopt for this?
    Please help. Thank you.
    Warmest Regards,
    Edwin

    If your client has the PP module, then follow these steps.
    Assume A is the purchased material and it is going to be dismantled into B and C.
    1) Create a BOM for material B,
        assign the header material A as a consumption material with a +ve qty,
        and add component C as a by-product with a -ve qty in the BOM.
    2) Maintain the backflush indicator for A & C in the material master MRP2 view.
    3) Create a routing for B and maintain auto GR for the final operation.
    4) Create a production order for B.
    5) Confirm the order in CO11N: A will be consumed with movement 261, C will be received with movement 531,
    and B will be received with movement 101.
    Once the stock is posted to unrestricted use, you can sell B & C.

  • Best Practice for saving all fields and searches in capital letters

    I want to save all fields on all of my pages in CAPS and also to search in CAPS, e.g. if the user enters search criteria in lowercase, it should automatically be converted to caps. What is the best practice to do that?

    Hi,
    There are already so many discussions on this in this forum, some of the links are:
    Uppercase
    How to convert user input in the page to upper case?
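    If part of the requirement also ends up being enforced at the database level, the usual companion to the page-side conversion is to compare in upper case and index that expression so the search stays fast (the table and column names below are only placeholders):

    CREATE INDEX customers_upper_name_ix ON customers (UPPER(customer_name));
    SELECT * FROM customers WHERE UPPER(customer_name) = UPPER(:search_text);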
    Sireesha

  • Best Practices for SSO between NWBC and BOBJ CMC

    What are the best practices in this scenario:
    - NWBC client (using SAP ECC logon credentials)
    - BOBJ client (configured using Windows AD credentials)
    I would like my users to log into NWBC but be automatically logged into CMC for running Crystal Reports inside the NWBC GUI.
    Thanks
    Shane Kelly

    Yes, we're not using the portal, only SAP GUI up till now.
    But we've recently configured our DEV server to run NWBC.
    Normally my users log into CMC/InfoView in a browser, but with NWBC I can bring InfoView directly into the UI.
    However, it asks for a sign-on every time.
    I'd like to configure SSO from NWBC to BOBJ InfoView somehow.

  • Best practice for database calls from Java components?

    I have a java component that encapsulates some complex database logic. In unit tests, I pass in a jdbc connection.
    Is there a way to pass in a database connection from PBL for a database defined as an External Resource in an ALBPM project? That way, I can test it using the "abstract" definition in the project and know that when it is deployed to production it will use the concrete definition. And, I won't have to maintain a separate configuration of the JDBC url.
    Is there a better way to do this? Or is it possible?
    Thanks,
    Todd

    Hi Bruno,
    The main issue with the combination of stateful session beans and servlets is the servlet threading model.
    It is dangerous to store a stateful session bean reference in servlet instance state, since the servlet instance
    can be accessed concurrently, yet a stateful session bean reference is intended to be used by only one
    client.
    As you point out, one alternative is to store the reference in the HttpSession. That associates the reference
    with a particular client, which matches the stateful session bean programming model.

  • Best Practice for update to iPhone and iTouch

    OK, when 3.0 comes down the pike, what is the best way to get 3.0 as a "clean" install? Currently 2.2.1 is on both. If I do a restore, will the system only pick up 3.0, or will it see 2.2.1, which is currently on the hard drive? With that in mind, how can I delete the 2.2.1 version of the iPhone and iTouch software? Sorry for two questions in one post.
    Steve H

    When firmware update 2.0 was released, the entire iPhone was erased first, including the existing firmware (just as when restoring an iPhone with iTunes), followed by 2.0 being installed, followed by the iPhone's backup being transferred back to the iPhone.
    The same may apply with firmware update 3.0, with your iPhone's backup being updated immediately before. If not, firmware version 2.2.1 will be updated to 3.0.
    If 2.2.1 is updated and you want a "clean" install of 3.0, you can follow the initial upgrade by restoring your iPhone with iTunes.

  • Best Practices for Batch Updates, Inserts and Complex Queries

    The approach we have taken for our ALDSP architecture is to model our DASs as business data objects, each DS joining several (sometimes many) tables and lookups. This works OK when we need to access individual records for read and update, but when we need to update multiple tables and rows within the same commit, trying to do this with a single logical DS built on tables or other DASs proves both cumbersome and slow. This is also the case for queries, when we have complex WHERE clauses within a DS built upon two or more multi-table-joined logical DASs.
    We tried a DS built on SQL, but that does not allow DML operations. We may have to just use JDBC. Any thoughts on how best to leverage DAS in this respect?

    I tried doing this by creating a UO class and using it on a DS built on a SQL statement. What we wanted to do here is first read the DS to get a list of ID values that meet the conditions of the query, then call submit() and have the UO update all the necessary tables associated with those IDs.
    However, we found that the UO never gets called unless you actually update something, not just send submit() after a read. Did I misunderstand the way this should work?
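    If we do end up falling back to JDBC, the shape of it would be something like the following, with all statements sharing one transaction (the table names here are only placeholders for illustration):

    UPDATE orders      SET status = 'SHIPPED' WHERE order_id IN (SELECT order_id FROM ids_to_process);
    UPDATE order_lines SET status = 'SHIPPED' WHERE order_id IN (SELECT order_id FROM ids_to_process);
    COMMIT;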
