Best Practices for patching Sun Clusters with HA-Zones using LiveUpgrade?

We've been running Sun Cluster for about 7 years now, and I for
one love it. About a year ago, we started consolidating our
standalone web servers into a 3-node cluster using multiple
HA-Zones. For the most part, everything about this configuration
works great! One problem we've been having is with patching. So far,
the only documentation I've been able to find that covers
patching clusters with HA-Zones is the following:
http://docs.sun.com/app/docs/doc/819-2971/6n57mi2g0
Sun Cluster System Administration Guide for Solaris OS
How to Apply Patches in Single-User Mode with Failover Zones
This documentation works, but it has two major drawbacks:
1) The nodes/zones have to be patched in single-user mode, which
translates into major downtime for patching.
2) If there are any problems during the patching process, or
after the cluster is back up, there is no simple back-out process.
We've been using a small test cluster to try out LiveUpgrade
with HA-Zones. We've worked out most of the bugs, but we are
still in a position of patching our HA-Zoned clusters based
on home-grown steps, not anything blessed by Oracle/Sun.
How are others patching Sun Cluster nodes with HA-Zones? Has
anyone found (or been given) Oracle/Sun documentation that lists
the steps to patch Sun Clusters with HA-Zones using LiveUpgrade?
Thanks!

Hi Thomas,
there is a blueprint that deals with this problem in much more detail. It is based on configurations that use ZFS exclusively, i.e. for both the root and the zone roots, but it should also be applicable to other environments: "Maintaining Solaris with Live Upgrade and Update On Attach" (http://wikis.sun.com/display/BluePrints/Maintaining+Solaris+with+Live+Upgrade+and+Update+On+Attach)
Unfortunately, due to some redirection work in the joint Sun and Oracle network, access to the blueprint is currently not available. If you send me an email with your contact data I can send you a copy via email. (You'll find my address on the web)
Regards
Hartmut
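
For anyone searching later: the generic LiveUpgrade patch cycle on a single node can be sketched roughly as below. This is a sketch only, not the blueprint's exact procedure; the boot environment name and patch directory are hypothetical, and on a cluster node you would first evacuate the resource groups (and pass explicit patch IDs to luupgrade as appropriate; see luupgrade(1M)). The helper only assembles the command lines so the sequence can be reviewed or dry-run before touching a real node.

```python
#!/usr/bin/env python3
"""Sketch of a generic LiveUpgrade patch cycle for one node.

Hypothetical BE name and patch path -- review before use. The
default is a dry run that only prints the commands.
"""
import subprocess

def build_commands(new_be="patchBE", patch_dir="/var/tmp/patches"):
    """Return the LiveUpgrade command sequence as argument lists."""
    return [
        # 1. Clone the current boot environment.
        ["lucreate", "-n", new_be],
        # 2. Apply patches to the inactive BE (patch IDs can be
        #    appended after the source directory; see luupgrade(1M)).
        ["luupgrade", "-t", "-n", new_be, "-s", patch_dir],
        # 3. Mark the patched BE active for the next boot.
        ["luactivate", new_be],
        # 4. Reboot with init -- not 'reboot', which skips the
        #    LiveUpgrade boot scripts.
        ["init", "6"],
    ]

def run(commands, dry_run=True):
    for cmd in commands:
        if dry_run:
            print(" ".join(cmd))
        else:
            subprocess.run(cmd, check=True)

if __name__ == "__main__":
    run(build_commands())  # dry run: print the sequence only
```

The back-out story is the appeal here: if the patched BE misbehaves, booting back into the old BE (luactivate of the previous BE, then another init 6) rolls the node back without restoring from tape.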

Similar Messages

  • Best practice for setting up iCloud with multiple devices using a single AppleID

    Hi there
    My wife and I each have an iPhone, and we are looking at getting both of us using iCloud. The problem is that we only use one Apple ID for our music library.
    Is a separate Apple ID necessary for each device on iCloud, or can multiple devices have separate settings/photos/music, etc.?

    Using different Apple IDs for iCloud is not strictly necessary, but in most cases it is recommended.
    You can however choose to use separate Apple IDs for iCloud and continue to use the same Apple ID for iTunes, thereby being able to share all your purchases of music, apps and books.

  • Microsoft best practices for patching a Cluster server

    Good morning! I was wondering if you had any web resources (webcasts) or whitepapers on Microsoft best practices for patching a cluster server? I will list what I've seen online; the third one was very good:
    Failover Cluster Step-by-Step Guide: Configuring a Two-Node File Server Failover Cluster
    http://technet.microsoft.com/en-us/library/cc731844(v=ws.10).aspx
    Failover Clusters in Windows Server 2008 R2
    http://technet.microsoft.com/en-us/library/ff182338(v=ws.10)
    Patching Windows Server Failover Clusters
    http://support.microsoft.com/kb/174799/i

    Hi Vincent!
    I assume this step-by-step guide can also be used if you have more than 2 nodes, as long as you make sure a majority of nodes are up (and the quorum disk is available).
    I just had a strange experience during maintenance of 2 nodes (node nr 7 and nr 8) in an 8-node Hyper-V cluster R2 SP1 with CSV. I used SCVMM 2012 to put the nodes in maintenance mode (live migrating all resources to other nodes). I then looked in Failover Cluster Manager to check that the nodes had been "Paused". And yes, everything was just fine. I then did Windows Update and restarted, no problem. But after the restart I wanted to run PSP (HP's update utility) to update some more drivers, software etc. During this PSP update, node nr 02 suddenly failed. This node is not even an HP blade, so I'm not sure how, but I know network NIC drivers and software were updated by PSP. So my question is:
    Do changes in "Network Connections" on nodes in "Pause" mode affect other nodes in the cluster?
    The networks are listed as "Up" during Pause mode, so the only thing I can think of is that during PSP's driver/software update, NICs on nodes 07 and 08 were going down and up, somehow making node 02 fail.
    So now during maintenance (vendor driver/software/firmware updates, not MS patches) I first put the node in "Pause" mode, then I stop the cluster service (and change it to disabled), making sure nothing can affect the cluster.
    Anders

  • Best practice for integrating oracle atg with external web service

    Hi All
    What is the best practice for integrating Oracle ATG with an external web service? Is it using the integration repository, or calling the web service directly from a Java class using a WS client?
    With Thanks & Regards
    Abhishek

    Using the Integration Repository might cause performance overhead depending on the operation you are doing. I have never used the Integration Repository for 3rd-party integration, so I am not able to comment on it.
    Calling the service directly from a Java client is an easy approach, and you can use the ATG component framework to support that by making the endpoint, security credentials, etc. configurable properties.
    Cheers
    R
    Edited by: Rajeev_R on Apr 29, 2013 3:49 AM

  • What is the best way for sharing an iPad with 2 iPhones using different Apple accounts?

    What is the best way for sharing an iPad with 2 iPhones using different Apple accounts?

    You can't share with other devices if you are using different Apple ID's and iTunes account on them. You can only share if you use the same ID.

  • Best practice for remote topic subscription with HA

    I'd like to create an orchestrator EJB, in cluster A, that must persist some data to a database of record and then publish a business event to a topic. I have two durable subscribers, MDBs on clusters B & C, that need to receive the events and perform some persistence on their side.
              I'd like HA so that a failure in any managed server would not interrupt the system. I can live with at least once delivery, but at the same time I'd like to minimize the amount of redundant message processing.
              The documentation gets a little convoluted when dealing with clustering. What is the best practice for accomplishing this task? Has anyone successfully implemented a similar solution?
              I'm using Weblogic 8.1 SP5, but I wouldn't mind hearing solutions for later versions as well.

    A managed server failure makes that server's JMS servers unavailable, which, in turn, makes the JMS servers' messages unavailable until either (A) the JMS server is migrated or (B) the managed server is restarted.
              For more discussion, see my post today on topic "distributed destinations failover - can't access messages from other node". Also, you might be interested in the circa 8.1 migration white-paper on dev2dev: http://dev2dev.bea.com/pub/a/2004/05/ClusteredJMS.html
              Tom

  • Best Practices for patching Exchange 2010 servers.

    Hi Team,
    Looking for best practices on patching Exchange Server 2010,
    like precautions, steps, and pre- and post-patching checks.
    Thanks. 

    Are you referring to Exchange updates? If so:
    http://technet.microsoft.com/en-us/library/ff637981.aspx
    Install the Latest Update Rollup for Exchange 2010
    http://technet.microsoft.com/en-us/library/ee861125.aspx
    Installing Update Rollups on Database Availability Group Members
    Key points:
    Apply in role order
    CAS, HUB, UM, MBX
    If you have CAS roles in an array/load-balanced configuration, they should all have the same SP/RU level. So coordinate the Exchange updates and add/remove nodes as needed so you do not run for an extended time with different Exchange levels in the same array.
    All the DAG nodes should be at the same rollup/SP level as well. See the above link on how to accomplish that.
    If you are referring to Windows Updates, then I typically follow the same install pattern:
    CAS, HUB, UM, MBX
    With Windows updates, however, I tend not to worry about suspending activation on the DAG members; rather, I simply move the active mailbox copies, apply the update, and reboot if necessary.

  • Best practices for speeding up Mail with large numbers of mail?

    I have over 100,000 mails going back about 7 years in multiple accounts in dozens of folders using up nearly 3GB of disk space.
    Things are starting to drag - particularly when it comes to opening folders.
    I suspect the main problem is having large numbers of mails in those folders that are the slowest - like maybe a few thousand at a time or more.
    What are some best practices for dealing with very large amounts of mails?
    Are smart mailboxes faster to deal with? I would think they would be slower because the original emails would tend to not get filed as often, leading to even larger mailboxes. And the search time takes a lot, doesn't it?
    Are there utilities for auto-filing messages in large mailboxes to, say, divide them up by month to make the mailboxes smaller? Would that speed things up?
    Or what about moving older messages out of mail to a database where they are still searchable but not weighing down on Mail itself?
    Suggestions are welcome!
    Thanks!
    doug

    Smart mailboxes obviously cannot be any faster than real mailboxes, and storing large amounts of mail in a single mailbox is asking for trouble. Rather than organizing mail in mailboxes by month, however, what I like to do is organize it by year, with subfolders by topic for each year. You may also want to take a look at the following article:
    http://www.hawkwings.net/2006/08/21/can-mailapp-cope-with-heavy-loads/
    That said, it could be that you need to re-create the index, which you can do as follows:
    1. Quit Mail if it’s running.
    2. In the Finder, go to ~/Library/Mail/. Make a backup copy of this folder, just in case something goes wrong, e.g. by dragging it to the Desktop while holding the Option (Alt) key down. This is where all your mail is stored.
    3. Locate Envelope Index and move it to the Trash. If you see an Envelope Index-journal file there, delete it as well.
    4. Move any “IMAP-”, “Mac-”, or “Exchange-” account folders to the Trash. Note that you can do this with IMAP-type accounts because they store mail on the server and Mail can easily re-create them. DON’T trash any “POP-” account folders, as that would cause all mail stored there to be lost.
    5. Open Mail. It will tell you that your mail needs to be “imported”. Click Continue and Mail will proceed to re-create Envelope Index -- Mail says it’s “importing”, but it just re-creates the index if the mailboxes are already in Mail 2.x format.
    6. As a side effect of having removed the IMAP account folders, those accounts may be in an “offline” state now. Do Mailbox > Go Online to bring them back online.
    Note: For those not familiarized with the ~/ notation, it refers to the user’s home folder, i.e. ~/Library is the Library folder within the user’s home folder.
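
    The file juggling in steps 2-4 above can be sketched as a small script. This is a sketch only, under the assumptions stated in the steps (Mail is quit first; IMAP/Mac/Exchange folders are safe to remove because the server keeps the mail; POP folders must not be touched). Pass a test directory rather than the real ~/Library/Mail while experimenting.

```python
"""Sketch of steps 2-4: back up the Mail folder, remove the index
files, and remove server-backed account folders so Mail rebuilds
everything on next launch. Quit Mail before running (step 1)."""
import shutil
from pathlib import Path

def rebuild_index(mail_dir: Path) -> Path:
    """Return the path of the backup copy that was made."""
    # Step 2: make a backup copy of the whole Mail folder first.
    backup = mail_dir.with_name(mail_dir.name + ".backup")
    shutil.copytree(mail_dir, backup)
    # Step 3: remove Envelope Index (and its journal, if present).
    for name in ("Envelope Index", "Envelope Index-journal"):
        idx = mail_dir / name
        if idx.exists():
            idx.unlink()
    # Step 4: remove IMAP-/Mac-/Exchange- account folders only.
    # POP- folders are deliberately left alone -- deleting them
    # would lose mail that exists nowhere else.
    for acct in mail_dir.iterdir():
        if acct.is_dir() and acct.name.startswith(("IMAP-", "Mac-", "Exchange-")):
            shutil.rmtree(acct)
    return backup
```

    Usage would be rebuild_index(Path.home() / "Library" / "Mail"), after which opening Mail triggers the "importing" pass from step 5.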

  • Best practice for deleting multiple rows from a table , using creator

    Hi
    Thank you for reading my post.
    What is the best practice for deleting multiple rows from a table using rowSet?
    For example, how can I execute something like
    delete from table1 where field1 = ? and field2 = ?
    Thank you

    Hi,
    Please go through the AppModel application which is available at: http://developers.sun.com/prodtech/javatools/jscreator/reference/codesamples/sampleapps.html
    The OnePage Table Based example shows exactly how to delete multiple rows from a data table...
    Hope this helps.
    Thanks,
    RK.
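
    Beyond the sample app, the parameterized-delete pattern itself is small enough to sketch. The asker's question is Java/Creator, so this is a language swap: the same idea, shown with Python's sqlite3 module (table1/field1/field2 are the asker's placeholder names). A Java PreparedStatement loop or CachedRowSet would implement the same shape.

```python
import sqlite3

# In-memory database with the asker's placeholder schema.
conn = sqlite3.connect(":memory:")
conn.execute("create table table1 (field1 int, field2 int, payload text)")
conn.executemany("insert into table1 values (?, ?, ?)",
                 [(1, 1, "a"), (1, 2, "b"), (2, 2, "c"), (1, 2, "d")])

# executemany runs the parameterized delete once per key pair,
# removing every row that matches each (field1, field2) pair.
keys_to_delete = [(1, 2), (2, 2)]
conn.executemany("delete from table1 where field1 = ? and field2 = ?",
                 keys_to_delete)
conn.commit()

remaining = conn.execute("select payload from table1").fetchall()
print(remaining)  # -> [('a',)]  (only the (1, 1) row survives)
```

    Binding values with ? placeholders instead of string concatenation also avoids SQL injection, which matters just as much in the Java version.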

  • Best Practice for VPC Domain failover with One M2 per N7K switch and 2 sups

    I Have been testing some failover scenarios with 4 nexus 7000 switches with an M2 and an F2 card in each. Each Nexus has two supervisor modules.
    I have 3 VDC's Admin, F2 and M2
    all ports in the M2 are in the M2 VDC and all ports on the F2 are in the F2 VDC.
    All vPC's are connected on the M2 cards, configured in the M2 VDC
    We have 2 Nexus representing each "site"
    In one site we have a vPC domain "100"
    The vPC Peer link is connected on ports E1/3 and E1/4 in Port channel 100
    The peer-keepalive is configured to use the management ports. This is patched on both Sups into our 3750s. (This will eventually be on an out-of-band management switch.)
    Please see the diagram.
    There are 2 vPC's 1&2 connected at each site which represent the virtual port channels that connect back to a pair of 3750X's (the layer 2 switch icons in the diagram.)
    There is also the third vPC that connects the 4 Nexus's together. (po172)
    We are stretching vlan 900 across the "sites" and would like to keep spanning tree out of this as much as we can, and minimise outages based on link failures, module failures, switch failures, sup failures etc..
    ONLY the management vlans (100, 101) are allowed on the port-channel between the 3750's, so vlan 900 spanning tree shouldn't have to make this decision.
    We are only concerned about layer two for this part of the testing.
    As we are connecting the vPC peer link to only one module in each switch (a single M2), we have configured object tracking as follows:
    n7k-1(config)#track 1 interface ethernet 1/1 line-protocol
    n7k-1(config)#track 2 interface ethernet 1/2 line-protocol
    n7k-1(config)#track 5 interface ethernet 1/5 line-protocol
    track 101 list boolean OR
    n7k-1(config-track)# object 1
    n7k-1(config-track)# object 2
    n7k-1(config-track)# object 5
    n7k-1(config-track)# end
    n7k-1(config)# vpc domain 101
    n7k-1(config-vpc-domain)# track 101
    The other site is the same, just 100 instead of 101.
    We are not tracking port-channel 101, nor the member interfaces of this port-channel, as this is the peer link, and apparently tracking upstream interfaces and the peer link is only necessary when you have ONE link and one module per switch.
    As the interfaces we are tracking are member ports of a vPC, is this a chicken-and-egg scenario when seeing if these 3 interfaces are up? Or is line-protocol purely layer 1, so that the vPC isn't downing these member ports at layer 2 when it sees a local vPC domain failure, causing the track to fail?
    I see most people are monitoring upstream layer-3 ports that connect back to a core. What about what we are doing: monitoring upstream (the 3750's) and downstream layer-2 (the other site) interfaces that are part of the very vPC we are trying to protect?
    We wanted all 3 of these to be down, for example if the local M2 card failed, the keepalive would send the message to the remote peer to take over.
    What are the best practices here? Which objects should we be tracking? Should we also track the peer-link port-channel 101?
    We saw minimal outages using this design when reloading the M2 modules, usually 1-3 pings lost between the laptops in the different sites across the stretched vlan, and obviously no outages when breaking any link in a vPC.
    Any wisdom would be greatly appreciated.
    Nick

    Nick,
    I was not talking about the mgmt0 interface. The vlan that you are testing will have a link blocked between the two 3750 port-channels if the root is on the Nexus vPC pair.
    Logically your topology is like this:
        |                             |
        |   Nexus Pair          |
    3750-1-----------------------3750-2
    Since you have this triangle setup one of the links will be in blocking state for any vlan configured on these devices.
    When you are talking about vPC and L3, are you talking about L3 routing protocols or just inter-VLAN routing?
    Inter-VLAN routing is fine. Running L3 routing protocols over the peer-link and forming an adjacency with an upstream router using L2 links is not recommended. The following link should give you an idea of what I am talking about here:
    http://bradhedlund.com/2010/12/16/routing-over-nexus-7000-vpc-peer-link-yes-and-no/
    HSRP is fine.
    As mentioned, the tracking feature's purpose is to avoid black-holing traffic. It completely depends on your network setup. I don't think you need to track all the interfaces.
    JayaKrishna

  • Best Practices for patch/rollback on Windows?

    All,
    I have been working on BO XI with UNIX for some time now and while I am pretty comfortable with managing it on UNIX, I am not too sure about the "best practices" when it comes to Windows.
    I have a few specific questions:
    1) What is the best way to apply a patch or Service Pack to BO XI R2 in a Windows environment without risking system corruption?
    - It is relatively easier on UNIX because you don't have to worry about registry entries and you can even perform multiple installations on the same box as long as you use different locations and ports.
    2) What should be the ideal "rollback" strategy in case an upgrade/patch install fails and corrupts the system?
    I am sure I will have some follow up questions, but if someone can get the discussion rolling with these for now, I would really appreciate!
    Is there any documentation available around these topics on the boards some place?
    Cheers,
    Sarang

    This is unofficial, but it usually applies if you run into a disabled system as a result of a patch and the removal/rollback does NOT work (in other words, you are still down).
    You should have made complete backups of your FRS, CMS DB, and any customizations in your environment. Then:
    Remove the base product and any separate products that share registry keys (i.e. Crystal Reports)
    Remove the left over directories (XIR2 this is boinstall\business objects\*)
    Remove the primary registry keys (hkeylocalmachine\software\businessobjects\* & hkeycurrentuser\software\businessobjects\* )
    Remove any legacy keys (i.e. crystal*)
    Remove any patches from the registry (look in control panel and search for the full patch name)
    Then reinstall the product (test)
    add back any customizations
    reinstall either the latest patch (prior to the update) or the newest patch (if needed)
    and restore the FRS and CMS DB.
    There are a few modifications to these steps and you should leave room to add more (if they improve your odds at success).
    Regards,
    Tim

  • Best practice for TM on AEBS with multiple macs

    Like many others, I just plugged a WD 1TB drive (mac ready) into the AEBS and started TM.
    But in reading here and elsewhere I'm realizing that there might be a better way.
    I'd like suggestions for best practices on how to setup the external drive.
    The environment is...
    ...G4 Mac mini, 10.4 PPC - this is the system I'm moving from; it has all iPhotos and iTunes, and it is being left untouched until I get all the TM/backup setup and tested. But it will go to 10.5 eventually.
    ...Intel iMac, 10.5 soon to be 10.6
    ...Intel Mac mini, 10.5, soon to be 10.6
    ...AEBS with (mac ready) WD-1TB usb attached drive.
    What I'd like to do...
    ...use the one WD-1TB drive for all three backups, AND keep a copy of system and iLife DVD's to recover from.
    From what I'm reading, I should have a separate partition for each mac's TM to backup to.
    The first question is partitioning... Disk Utility sees my iMac's internal HD & DVD, but doesn't see the WD-1TB on the AEBS. (When TM is active it will appear in Disk Utility, but when TM ends, it drops off the Disk Utility list.)
    I guess I have to connect it via USB to the iMac for the partitioning, right?
    I've also read the benefits of keeping a copy of the install DVD's on the external drive... but this raises more questions.
    How do I get an image of the install DVD onto the 1TB drive?
    How do I do that? (install?, ISO image?, straight copy?)
    And what about the 2nd disk (for iLife?) - same partition, a different one, ISO image, straight copy?
    Can I actually boot from the external WD 1TB while it is connected to the AEBS, or do I have to temporarily plug it in via USB?
    And if I have to boot the OS from USB, once I load it and it wants to restore from the TM, do I leave it on USB or move it to the AEBS? (I've heard the way the backups are created differs local vs. network.)
    I know its a lot of question but here are the two objectives...
    1. Use TM in typical fashion, to recover the occasion deleted file.
    2. The ability to perform a bare-metal point-in-time recovery (not always to the very last backup, but sometimes to a day or two before.)

    dmcnish wrote:
    From what I'm reading, I should have a separate partition for each mac's TM to backup to.
    Hi, and welcome to the forums.
    You can, but you really only need a separate partition for the Mac that's backing-up directly. It won't have a Sparse Bundle, but a Backups.backupdb folder, and if you ever have or want to delete all of them (new Mac, certain hardware repairs, etc.) you can just erase the partition.
    The first question is partitioning... Disk Utility sees my iMac's internal HD & DVD, but doesn't see the WD-1TB on the AEBS. (When TM is active it will appear in Disk Utility, but when TM ends, it drops off the Disk Utility list.)
    I guess I have to connect it via USB to the iMac for the partitioning, right?
    Right.
    I've also read the benefits of keeping a copy of the install DVD's on the external drive... but this raises more questions.
    Can I actually boot from the external WD 1TB while it is connected to the AEBS, or do I have to temporarily plug it in via USB?
    I don't think so. I've never tried it, but even if it works, it will be very slow. So connect via F/W or USB (the PPC Mac probably can't boot from USB, but the Intels can).
    And if I have to boot the O/S from USB, once I load it and it wants to restore from the TM, do I leave it USB or move it to the AEBS? (I've heard the way the backups are created differ local vs network)
    That's actually two different questions. To do a full system restore, you don't load OSX at all, but you do need the Leopard Install disc, because it has the installer. See item #14 of the Frequently Asked Questions *User Tip* at the top of this forum.
    If for some reason you do install OSX, then you can either "transfer" (as part of the installation) or "Migrate" (after restarting, via the Migration Assistant app in your Applications/Utilities folder) from your TM backups. See the *Erase, Install, & Migrate* section of the Glenn Carter - Restoring Your Entire System / Time Machine *User Tip* at the top of this forum.
    In either case, If the backups were done wirelessly, you must transfer/migrate wirelessly (although you can speed it up by connecting via Ethernet).

  • Best Practice for Droid Gmail Contacts with Exchange ActiveSync?

    Hi, folks.  After going through an Address Book nightmare this past summer, I am attempting to once again get my Contacts straight and clean.  I have just started a new job and want to bring my now clean Gmail contacts over to Exchange.  The challenge is creating duplicate contacts, then defining a go-forward strategy for creating NEW contacts so that they reside in both Gmail and Exchange without duplication.  Right now, my Droid is master and everything is fine.  However, once I port those contacts from Gmail onto my laptop, all hell breaks loose... Does Verizon have a Best Practice finally documented for this?  This past summer I spoke with no less than 5 different Customer Support reps and got 3 different answers... This is not an uncommon problem...

    In parallel to this post, I called Verizon for Technical Support assistance.  Seems no progress has been made.  My issues this past summer were likely the result of extremely poor quality products from Microsoft, which included Microsoft CRM, Microsoft Lync (the new phone system they are touting, which is horrible), and Exchange.  As a go-forward strategy, I have exported all Gmail contacts to CSV for Outlook and have imported them to Exchange.  All looks good.  I am turning off phone visibility of Gmail contacts and will create all new contacts in Exchange.

  • Best Practices for Creating eLearning Content With Adobe

    As agencies are faced with limited resources and travel restrictions, initiatives for eLearning are becoming more popular. Come join us as we discuss best practices and tips for groups new to eLearning content creation, and the best ways to avoid complications as you grow your eLearning library.
    In this webinar, we will take on common challenges that we have seen in eLearning deployments, and provide simple methods to avoid and overcome them. With a little training and some practice, even beginners can create engaging and effective eLearning content using Adobe Captivate and Adobe Presenter. You can even deploy content to your learners with a few clicks using the Adobe Connect Training Platform!
    Sign up today to learn how to:
    -Deliver self-paced training content that won't conflict with operational demands and unpredictable schedules
    -Create engaging and effective training material optimized for knowledge retention
    -Build curriculum featuring rich content such as quizzes, videos, and interactivity
    -Track program certifications required by Federal and State mandates
    Come join us Wednesday May 23rd at 2P ET (11A PT): http://events.carahsoft.com/event-detail/1506/realeyes/
    Jorma_at_RealEyes
    RealEyes Connect

    You can make it happen by creating a private connection for 40 users by capi script and when creating portlet select 2nd option in Users Logged in section. In this the portlet uses there own private connection every time user logs in.
    So that it won't ask for password.
    Another thing is there is an option of entering password or not in ASC in discoverer section, if your version 10.1.2.2. Let me know if you need more information
    Thanks,
    Kiran

  • Best Practices for Patching RDS Environment Computers

    Our manager has tasked us with creating a process for patching our RDS environment computers with no disruption to users if possible. This is our environment:
    2 Brokers configured in HA Active/Active Broker mode
    2 Web Access servers load balanced with a virtual IP
    2 Gateway servers load balanced with a virtual IP
    3 session collections, each with 2 hosts
    Patching handled through Configuration Manager
    Our biggest concern is the gateway/hosts. We do not want to terminate existing off campus connections when patching. Are there any ways to ensure users are not using a particular host or gateway when the patch is applied?
    Any real world ideas or experience to share would be appreciated.
    Thanks,
    Bryan

    Hi,
    Thank you for posting in Windows Server Forum.
    As per my research, you can script the patching, and you have 2 servers for each role. If these are primary and backup servers respectively, you can update each server separately and route traffic to the other server. After completing the steps on one server, perform the same steps on the other. As far as I know, the server needs a restart for the patch update to apply successfully.
    Hope it helps!
    Thanks.
    Dharmesh Solanki
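
    The one-server-per-role-at-a-time idea above can be sketched as a rolling loop. This is a sketch only: the server names are hypothetical, and drain/patch/restore are placeholders for whatever a real deployment would call (Configuration Manager, RD drain mode, load-balancer pool changes), which this sketch does not attempt to implement.

```python
"""Rolling-patch sketch: for each redundant pair, drain one
server, patch and reboot it, bring it back into service, and
only then touch its partner -- so one node per role always
serves users. Server names are hypothetical placeholders."""

ROLE_PAIRS = {
    "broker":  ["BRK01", "BRK02"],
    "web":     ["WEB01", "WEB02"],
    "gateway": ["GW01", "GW02"],
}

def patch_role(role, servers, log):
    for server in servers:
        log.append(f"drain {server}")    # block new connections, wait for 0 sessions
        log.append(f"patch {server}")    # apply updates, reboot
        log.append(f"restore {server}")  # re-enable connections, verify health
        # Proceed to the partner only after this server is healthy.

def run():
    log = []
    for role, servers in ROLE_PAIRS.items():
        patch_role(role, servers, log)
    return log

if __name__ == "__main__":
    for step in run():
        print(step)
```

    The ordering guarantee is the whole point: a server's partner is never drained until the first server has been restored, so each role keeps at least one live node throughout the maintenance window.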
