Best practice for RAC planned downtime maintenance?

I have a 3-node RAC on Red Hat Linux. The DB version is 11gR2.
I want to know the steps to perform an OS patch upgrade on each node.
I want to be sure I have the right steps:
1) On node 1, stop CRS and do the OS patch upgrade.
2) Repeat the same steps for nodes 2 and 3.
Is this right?
Thank you for any help.

user569151 wrote:
I have a 3-node RAC on Red Hat Linux. The DB version is 11gR2.
I want to know the steps to perform an OS patch upgrade on each node.
I want to be sure I have the right steps:
1) On node 1, stop CRS and do the OS patch upgrade.
2) Repeat the same steps for nodes 2 and 3.
If your RAC environment is configured properly, follow these steps (a command sketch follows the list):
Node 1:
* Relocate services to node 2 or 3, then stop the database instance and services using SRVCTL
* Stop Clusterware using CRSCTL
* Disable automatic startup of Clusterware on this node using CRSCTL
* Patch your OS
* Relink the Oracle binaries (Grid Infrastructure / Oracle Database). Note: you will need to unlock and re-lock your Grid home to relink the binaries; see the note RAC: Frequently Asked Questions [ID 220970.1]
* If you are using ACFS, check whether the OS upgrade affects the ACFS drivers (ACFS Supported On OS Platforms [ID 1369107.1])
* Enable automatic startup of Clusterware on this node using CRSCTL
* Start Clusterware using CRSCTL
* Start the database instance and services with SRVCTL
Repeat the steps above on nodes 2 and 3.
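A minimal command sketch for node 1, assuming a database named db with instance db1 and a hypothetical service oltp_svc; adjust names, users, and paths to your environment:

    # as the oracle user: move the service off node 1, then stop the instance
    srvctl relocate service -d db -s oltp_svc -i db1 -t db2
    srvctl stop instance -d db -i db1

    # as root: stop the stack and keep it down across patch reboots
    crsctl stop crs
    crsctl disable crs

    # ... apply OS patches and reboot as required ...
    # relink per note 220970.1 (unlock the Grid home, relink, re-lock); exact steps vary by version

    # as root: re-enable and restart Clusterware
    crsctl enable crs
    crsctl start crs

    # as the oracle user: bring the instance and service back
    srvctl start instance -d db -i db1
    srvctl start service -d db -s oltp_svc -i db1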
And of course, don't forget a good backup and recovery plan in case of failure.
http://docs.oracle.com/cd/E11882_01/server.112/e17157/planned.htm#CJACDIJD
Is It Necessary To Relink Oracle Following OS Upgrade? [ID 444595.1]
My concern is here: let's say each node has to be restarted. So my procedure on the 1st node should be: crsctl stop crs, to stop everything and fail everything over to the other nodes.
I wonder, will crsctl stop crs cause the ASM instance to go down?
Yes, crsctl stop crs will try to stop all Clusterware resources on that node in a clean state, including the local ASM instance. If any problem occurs, it will be raised at your prompt.
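To see what is running before and after the stop, a quick status check (a sketch; db is the example database name from above):

    crsctl stat res -t              # cluster-wide resource state, including ora.asm
    srvctl status asm               # ASM instance status across the nodes
    srvctl status database -d db    # database instance status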
Regards,
Levi Pereira
Edited by: Levi Pereira on Nov 1, 2012 7:35 PM

Similar Messages

  • Best practice for RAC connections

    Got a question about what people consider best practice for setting up high-availability connection pools to a RAC cluster. Now that you can specify the fail-over logic right in the thin connection string, it seems like there are three options.
    A) Use OCI connections and allow the fail-over logic to be maintained in the TNSNAMES.ORA file.
    B) Use simple thin connections with multi-pools and let WebLogic maintain the fail-over logic.
    C) Use simple thin connections with fail-over logic in the connection string.
    Thanks,
    Rodger...

    If you need XA, then follow the WebLogic documentation. If not, then
    you have much more freedom. The thin driver can be configured to
    use the tnsnames.ora file if that helps you. WebLogic much prefers the
    thin driver to the OCI-based one, which can kill a JVM with OCI bugs.
    If you do driver-level failover, each failed connection will cost a test
    and replace. If you use multipools, WLS can be configured to flush a
    whole pool when it finds a connection bad, and also make the failover
    at the pool level, right then, so application delay is minimized.
    Joe
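    For reference, option C's fail-over logic in a thin URL looks roughly like this (a sketch; the hosts rac1-vip/rac2-vip and the service db_svc are made-up names):

        jdbc:oracle:thin:@(DESCRIPTION=
          (ADDRESS_LIST=(LOAD_BALANCE=ON)(FAILOVER=ON)
            (ADDRESS=(PROTOCOL=TCP)(HOST=rac1-vip)(PORT=1521))
            (ADDRESS=(PROTOCOL=TCP)(HOST=rac2-vip)(PORT=1521)))
          (CONNECT_DATA=(SERVICE_NAME=db_svc)))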

  • Best Practice for Hyperion Planning : Essbase or Dataform for Adhoc Analysis

    Hi,
    with Hyperion Planning, is it better to start an ad-hoc analysis from a Hyperion Planning data form or from an Essbase cube?
    What is the best way, and what are the pros and cons? Thanks in advance.
    ... I've just noticed that when you delete the data form, the retrieve doesn't work anymore :-( ... and with Essbase you get some "un-useful" dimensions such as HSP_Value
    Regards
    xavier

    A couple of other things that go with Planning as the data source:
    1) You have access to all of the non-Essbase data -- think SmartLists, text fields (okay, text is there now in Essbase, but not from Planning applications), supporting detail, etc. If you want users to see that relationally stored data, the only other way to get it into Excel is to run a Financial Reports report into Excel (or hack the tables and write SQL to pull the stuff -- did that pre-11x and it's too much work).
    2) There's a performance hit when going against Planning, just like there is when Planning is the data source in Financial Reports.
    Using a form to limit dimensionality -- isn't that what SmartSlices are for? I guess it's another way to get to the same place. Although you have to administer SmartSlices, I would imagine they break less often than a form might. Having said that, once built, how often do you delete a form?
    It's really nice that ad-hoc analysis from forms is now possible, but I've always thought of it as a nice feature to have alongside data entry, never as the primary path into Essbase analysis.
    Just my $0.02.
    Regards,
    Cameron Lackpour

  • Best practice for "Portal Appplication in Maintenance"message ?

    Hi,
    I was wondering how you deal with portal applications that go "into maintenance". We now have multiple application areas (HR/PM/CO) integrated in our portal with our own developed applications, and I was wondering what the best option is when not the entire portal goes down (for portal maintenance), but you just want to show, for example, a "portal application down for maintenance" message for one particular application in the portal.
    - Do you change the role the application under maintenance is in, and assign it a page with the 'maintenance message'?
    Regards,
    Igor

    Hi
    in our framework, each application on the back-end system (SAP) has a 'running' state that is either true or false.
    All application iViews check the state of their respective application and display a logo and a message when the app is not running.
    Thus, nothing changes in the portal when part of the application is in maintenance.
    In addition, administrators have an iView that allows changing the state of an application.
    We're currently discussing changing the state to a tri-state (running, will-be-closed, closed) to allow more flexibility.
    regards,
    Guillaume Patry

  • Best practice for RAC RMAN backup

    I have a 10gR2 RAC database that is 3.5 TB. It is a 2-node cluster on AIX. The instances are db1 on node1 and db2 on node2; the "world" database name is db.
    Currently we run our backup from one node of the cluster, connecting to db2, so that instance does all of the backup work. We allocate 14 channels to back this database up to tape (yes, we have plenty of tape resources, connecting to a tape library with more than 20 physical tape drives). Would it be better to have our RMAN script connect to the general database service db and let the listener decide which instance to send the channel allocations to, or to keep all the work on one node?

    I seem to remember, when I was reading up on RAC and RMAN a couple of years ago (maybe up to 4 years ago), that there was something about always doing backups from the same node in a RAC configuration, i.e. you have logically 'db' in your network configuration and also the direct names 'db1' and 'db2', and you should always use one of 'db1' or 'db2' but never 'db'. I also mean do not alternate between 'db1' and 'db2': pick one of them and always do your backups from that instance.
    I think this was to do with things like the database identifier, and the instance identifier too, which RMAN reads and puts into the backup. It also affects things like reading the archive logs, which must be cross-mounted on both nodes. I have a feeling that using the same node kept a kind of consistency within the backups, and meant that Oracle was getting all of the block images from one node, i.e. from one of the SGAs. If you used both nodes simultaneously you could run into issues with their SGAs being out of sync. I know this does not make much sense, given that RAC is all about keeping the SGAs consistent with each other. But at the end of the day I do remember something about always using one node all the time to do your RMAN backups.
    Not sure if these rules hold true if you are using ASM, which itself is shared in a RAC environment.
    Things may have moved on since then, but that was still the 10g Release 2 documentation, so I would guess it is still true. At the end of the day, it is not the backup that is important, but the recovery. You don't want to create any kind of inconsistent backup that RMAN cannot successfully restore from, even though the backup may look good when you create it.
    I'd check out the RMAN and RAC documentation further to see if this is supported.
    John
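    Along the lines of John's advice, a minimal sketch that pins every channel to one instance (credentials, channel count, and the sbt configuration are placeholders, not a recommendation):

        # backup_db2.rman -- run as: rman target sys/***@db2 cmdfile=backup_db2.rman
        run {
          # every channel connects to the same instance, so one node does all the work
          allocate channel t1 device type sbt connect 'sys/***@db2';
          allocate channel t2 device type sbt connect 'sys/***@db2';
          # ... allocate t3 through t14 the same way ...
          backup database plus archivelog;
        }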

  • What is the best practice for APO - Demand planning implementation

    Hi,
    My client wants to implement demand planning.
    The client has come up with one scenario: a new customer is created in ECC, and if I use the BI-then-APO flow for demand planning, the user will have to wait another day (as BI always has a one-day delay).
    For this scenario the user is insisting on a direct ECC to APO-DP interface.
    Can anybody suggest what the best practice for demand planning would be?
    ECC -> standalone BI -> planning area (planning is done in APO) -> standalone BI
    Or ECC -> APO-DP (planning is done in APO) -> standalone BI system
    I hope I have been able to explain my scenario.
    Regards,
    Saurabh

    Any suggestions?

  • Noticing a lot of database index fragmentation yet no Health Analyzer alerts...? Best practice for database maintenance in 2013?

    Could someone point me to a document on best practices for database maintenance with SharePoint 2013? I have read the 2010 document, but I'm hoping there is an updated one that I'm just missing.
    My problem is that our DBA recently noticed that many of our SharePoint databases have high index fragmentation. I have the Health Analyzer rules for index fragmentation enabled and they run daily, but I've never received an alert, despite the majority of our databases having greater than 40% fragmentation and some even above 95%.
    Obviously it has our attention now and we want to get this addressed. My understanding (which I now fear is at best incomplete, more likely just plain wrong) was that a maintenance plan wasn't needed for index fragmentation in 2010/2013 like it was in 2007.
    Thanks,
    Troy

    It depends. Here are the rules for that job, evaluated in sampled mode:
    - page count > 24 and average fragmentation in percent > 5, or
    - page count > 8 and average page space used in percent < fill_factor * 0.9
    (The fill factor in SharePoint 2013 varies from 80 to 100 depending on the index; it is important not to adjust index fill factors.)
    I have seen cases where the indexes are not automatically managed by the rule and require a manual defragmentation with a full scan instead of a sampled one. Once the full-scan defrag completed, the timer job started handling the index fragmentation automatically.
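    To see which indexes those thresholds would flag, a query sketch run via sqlcmd (WSS_Content is a placeholder database name; run it against each content database):

        sqlcmd -d WSS_Content -Q "
          SELECT OBJECT_NAME(s.object_id) AS table_name, i.name AS index_name,
                 s.avg_fragmentation_in_percent, s.page_count, s.avg_page_space_used_in_percent
          FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS s
          JOIN sys.indexes AS i ON i.object_id = s.object_id AND i.index_id = s.index_id
          WHERE s.page_count > 24 AND s.avg_fragmentation_in_percent > 5;"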
    Trevor Seward
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • Best Practice for Planning and BI

    What's the best practice for Planning and BI infrastructure - set up combined on one box or separate? What are the factors to consider?
    Thanks in advance..

    There is no way that question could be answered with the information that has been provided.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Best Practice for Using Static Data in PDPs or Project Plan

    Hi There,
    I want to build custom reports using PDPs and Project Plan data.
    What is the best practice for using "static/random data" (which is not available in the MS Project 2013 columns) in PDPs and MS Project 2013?
    Should I add that data in a custom field (in MS Project 2013) or put it on PDPs?
    Thanks,
    EPM Consultant
    Noman Sohail

    Hi Dale,
    I have a project-level custom field "Supervisor Name" that is used for project information.
    For the purpose of viewing that project-level custom field's data in Project views, I have made a task-level custom field "SupName" and used the formula:
    [SupName] = [Supervisor Name]
    That shows the supervisor name in Schedule.aspx.
    ============
    Question: I want that project-level custom field "Supervisor Name" in My Work views (Tasks.aspx).
    The field is enabled in Tasks.aspx, BUT the data is not present / the column is blank.
    How can I get the data into the "My Work" views?
    Noman Sohail

  • Best practice for the test environment & DBA plan activities documents

    Dear all,
    In our company, we made a sizing for hardware.
    We have three environments (Test/Development, Training, Production).
    But the test environment servers are smaller than the production environment servers.
    My question is:
    what is the best practice for the test environment?
    (Are there any recommendations from Oracle related to this, or any PDF files that could help me?)
    Also, can I have a detailed document regarding the DBA plan activities?
    I appreciate your help and advice.
    Thanks
    Edited by: user4520487 on Mar 3, 2009 11:08 PM

    Follow your build document, using the same steps you used to build production.
    You should know where all your code is. You can use the deployment manager to export your configurations, and export customized files from MDS. Just follow the process again, and you will have a clean instance not containing production data.
    It only takes a lot of time if your client is lacking documentation or if you're not familiar with all the parts of the environment. What's 2-3 hours compared to all the issues you will run into if you copy databases or import/export schemas?
    -Kevin

  • Best practice for Plan and actual data

    Hello, what is the best practice for plan and actual data? Should they both be in the same application or in different ones?
    Thanks.

    Hi Zack,
    It will be easier for you to maintain the data in a single application. Every application must have the Category dimension, so you can use that dimension to separate the actual and plan data.
    Hope this helps.

  • Best practice for implementing Manufacturing Cost Planning ( MCP)

    Is there any best practice for implementing Manufacturing Cost Planning (MCP) using BI-IP?

    Hi:
    Both options are viable. If you reverse the posting in FB50, then the FI GL account postings will also be reversed, along with the cost center postings. The advantage here is that the cost center reversal will be with reference to the original document with which the wrong postings were made. The disadvantage is that you will have to post the entry again in FB50. In KB11N you simply transfer the cost center amount from the wrong cost center to the new one that should take its place, but here you will have no reference. I personally think reversing the posting through FB50 is the viable option; reverse postings can be seen in KSB1 as well, against that cost center.
    Regards

  • Best practice for installation oracle 11g rac on windows 2008 server x64

    hello!
    can somebody recommend a good book or other literature regarding "best practice for installation oracle 11g rac on windows 2008 server x64"? Thanks in advance!
    best regards,
    christian

    Hi Christian,
    Check this on MOS
    *RAC Assurance Support Team: RAC Starter Kit and Best Practices (Windows) [ID 811271.1]*
    https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&doctype=BULLETIN&id=811271.1
    DOC Modified: 14-DEC-2010
    Regards,
    Levi Pereira

  • What is best practice for deploying agent(10204) on RAC 9i

    Hello,
    What would be the best practice for deploying the agent (10.2.0.4) on RAC 9i? Should the agent be deployed on each node, or on the cluster file system? What are the advantages/disadvantages of deploying on individual nodes vs. on the cluster file system? Please advise. Thank you in advance.

    Please use the Agent Push application to deploy the agent on all the nodes in one shot.
    Please refer to the OBE:
    http://www.oracle.com/technology/obe/obe10gemgc_10203/agentpush/agentpush.htm

  • Best practices for loading apo planning book data to cube for reporting

    Hi,
    I would like to know whether there are any best practices for loading APO planning book data to a cube for reporting.
    I have seen 2 types of Design:
    1) The Planning Book Extractor data is Loaded first to the APO BW system within a Cube, and then get transferred to the Actual BW system. Reports are run from the Actual BW system cube.
    2) The Planning Book Extractor data is loaded directly to a cube within the Actual BW system.
    We do these data loads during evening hours once in a day.
    Rgds
    Gk

    Hi GK,
    What I have normally seen is:
    1) Data is extracted from the APO planning area to an APO cube (for BACKUP purposes), weekly or monthly, depending on how much data change you expect, or how critical it is for the business. Backups are mostly monthly for DP.
    2) Data is extracted from the APO planning area directly to a DSO in the staging layer of BW, and then to BW cubes, for reporting. (For DP this is monthly; for SNP, daily.)
    You can also use option 1 that you mentioned. In this case, the APO cube is the backup cube, while the BW cube is the one you use for reporting, and the BW cube gets its data from the APO cube.
    The benefit in this case is that data has to be extracted from the planning area only once, so the planning area is available to jobs/users for more time. However, backup and reporting extraction get mixed in this flow, so issues in it could impact both the backup and the reporting. We have used this scenario recently, and are yet to see the full impact.
    Thanks - Pawan
