Installation Procedures: Best Practices Proven Over 10 Years...

I wish to address two topics in this posting:
1. The intentional use of the v1.1 Combo update
2. Recommended pre-installation procedures that have proven to work for me over the last 10 years. I make the following statements for context: I use my Macs to make a living: professional video and film for broadcast, as well as technical work in industrial engineering, both electrical and mechanical. I do all my own maintenance and repairs, hardware and software.
1. Someone had mentioned that the v1.1 combo update was applicable only to a handful of machines that had problems using the original 10.6.3 combo update. This is obviously not true, as Apple has removed the original 10.6.3 combo update download from their support downloads and replaced it with the v1.1 combo update. Additionally, the details of the v1.1 combo update do not mention specific machines; obviously, there was something amiss with the original 10.6.3 combo updater download.
2. Prior to executing major updates I always follow these procedures and policies, most of which I must credit to a gentleman who posts on this site under the name "Kappy", who has helped me numerous times over the last 10 years and is very approachable and knowledgeable:
A). Never perform major updates on a "bread-n-butter" machine (i.e., a Mac you use to make a living) if you need to use it within a week of doing the update.
B). Plan on doing a full restore, taking the appropriate precautions such as cloning/backing up, etc. Hopefully you'll be pleasantly surprised and won't need to restore.
C). Monitor the support discussions for at least a week before doing the update. Patience is a sign of wisdom: "Fools rush in where angels fear to tread."
D). Set Energy Saver (System Preferences) not to engage, then run all other Apple and 3rd-party software updates.
E). Download the required update from Apple's support downloads. Important: if it is an OS update, ALWAYS download and install the "combo" updater. It contains additional fixes that the regular update does not, and that you may have missed. Always download major updates; do not use the OS auto-updater (for various reasons).
F). Reboot, and using Disk Utility, verify the target disk and repair permissions.
G). Repair preferences and run OS X Cocktail (both are free; Google them for download sites).
H). Reboot, then run DiskWarrior ($100 from alsoft.com).
I). Run the Drive Genius repair utility ($99, prosoftengineering.com).
J). Reboot, then run the Drive Genius defrag utility.
K). Reboot, then run DiskWarrior again.
L). Boot into "safe mode" by holding down the Shift key while restarting. (This disables 3rd-party software, antivirus software, etc., which very often cause failures during major installs/upgrades.)
M). Install your update from the previously downloaded update file (item E, above).
N). Reboot, and perform items D, F, G, H, I, J, and K again.
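For what it's worth, the Disk Utility pass in item F can also be run from Terminal. A dry-run sketch (it prints the commands by default; set DRY_RUN=0 on the actual machine to execute them — note that diskutil repairPermissions applies to OS X 10.6-era systems and was removed in later releases):

```shell
# Dry-run wrapper: prints each command instead of executing it,
# so the sequence can be reviewed before touching a working machine.
DRY_RUN=${DRY_RUN:-1}
run() {
  if [ "$DRY_RUN" -eq 1 ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

# Item F: verify the target (startup) volume, then repair permissions.
run diskutil verifyVolume /
run diskutil repairPermissions /
```

To target a different disk, replace "/" with that volume's mount point.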
Hope this helps!
Your Pragmatic Apple Fan
John

Hi Kappy, thanks for weighing in. Hope all is well with you. With regard to using Software Update: I choose not to use this feature for major updates because doing so eliminates an external variable that has the potential to cause a problem with the download and installation process (e.g., LANs, WANs, etc.). With Software Update, the download and install become one process, and a hiccup in either can cause problems. Bypassing Software Update allows me more control. I also always choose to use the combo updater because, while it includes the same fixes that are supposed to be found in the singular updaters, I assume that the code for the combo updaters is written subsequent to the singular updaters and thereby, in effect, supersedes the previously released updates. I do not, however, lay claim to knowledge of the Apple software team's configuration control protocols. I would assume that they have many similarities to the configuration control/management procedures I use, DoD MIL-STD-973 / ANSI-EIA-649. Knowing this, there is undoubtedly some degree of fallibility, however minute. Additionally, I would assume that using a combo updater may serve to "reset" certain system settings/code that have become corrupted over time. Stemming from this rationale is my preference to always use the combo updater; not required, but preferred. I will say, though, that I never suffer the installation problems reported in these forums by other users, and I must conclude this is due to the aforementioned protocols I follow. However, I always remain open to your suggestions and instructions, as they have served me and others quite well over the years.
Take care Kappy, and good to hear from you again.
Respectfully,
John

Similar Messages

  • Software Installs for Zones - Best Practices

    I haven't found anything in the documentation on this subject yet. I'm wondering what the best way is to install software for non-global zones. Let's take a simple setup, say a web server and a database, and I want them in separate non-global zones. I would like to use sparse root zones and have my software in /usr/local. The problem is that if I add the software to /usr/local in the global zone, the web server zone has access to my DB install in /usr/local. I know I will be putting the data elsewhere, but I would rather not have that binary accessible. Is that possible? These are not package installs; they are binary distributions: Glassfish and Postgres.
    If anyone has any answers or input it would be much appreciated.

    We are using zones as part of a whole security solution. I don't see installing whole root zones as a solution. I may as well create another Solaris 10 server in that case since we are in a VM cluster anyways.
    I was able to find some documentation on this subject, and there is a more sensible solution that applies to sparse root zones.
    Let's say you want each child zone to have its own writable /usr/local directory. First create the /usr/local directory within the global zone. Next, add a file system to the zone configuration. Say your zones are in /zones, and you have zone1 and zone2. Create the /zones/zone1/local directory with 700 permissions. In the zone config, set the fs "special" to that directory and the "dir" to /usr/local, with rw,nodevices for options. Now zone1 has its own writable /usr/local that zone2 cannot access.
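    A minimal sketch of those steps, using the example paths above. This only generates the zonecfg script; applying it must be done as root in the Solaris global zone, and the lofs type is an assumption based on the standard loopback-mount approach for sparse zones:

```shell
# Sketch: generate zonecfg commands giving zone1 its own /usr/local.
# On the real global zone (as root) you would first create the backing
# directory:  mkdir -p -m 700 /zones/zone1/local
CFG=/tmp/zone1-usrlocal.cfg
cat > "$CFG" <<'EOF'
add fs
set dir=/usr/local
set special=/zones/zone1/local
set type=lofs
set options=[rw,nodevices]
end
commit
EOF
echo "wrote $CFG; apply with: zonecfg -z zone1 -f $CFG, then reboot zone1"
```

    Repeat with /zones/zone2/local for zone2; since each zone loops back its own directory, neither can see the other's binaries.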

  • Upgrade and migration from Netware to Linux best practice

    Hi folks,
    We've been running NSM 2.5 for a few years on NetWare and have been very satisfied with the product and performance. We are planning to migrate to SLES11 with OES11 very soon to finally move away from NetWare, and I have some issues I can't seem to find answers to. Of course I want to use NSM to move my home directories from NetWare to SLES, so I want this working well when we migrate.
    I've searched but have not found a good resource showing how to move the NSM engine from a NetWare server to a Linux server following best practices. Does anyone have experience with this, and any gotchas?
    The other question, of course, is whether to upgrade NSM to a new version before or after the move to Linux/OES11. We are at version 2.5.0.43 and want to move to the latest version. The upgrade procedure and best practices would be handy.
    The last question, of course: is NSM compatible with SLES/OES11? I presume it is, and certainly hope so, because we want to move all of our users to SLES11 NSS.

    On 5/9/2012 11:06 AM, jlauzon wrote:
    >
    > Hi folks,
    > We've been running NSM 2.5 for a few years on Netware and have been
    > very satisfied with the product and performance. We are planning to
    > migrating to SLES11 with OES11 very soon to finally move away from
    > Netware and I have some issues I can't seem to find answers to. Of
    > course I want to use NSM to move my home directories from my Netware to
    > my SLES so I want this working well when we migrate.
    > I've searched but not found a good resource to show how to move the NSM
    > engine from a Netware server to a Linux server with best practices.
    > Anyone have experience with this and any gottchas?
    The NSM 3.0.x Engine setup process actually handles the migration from
    NSM 2.5 fairly easily. Our 3.0 Installation Guide (available at
    http://www.novell.com/documentation/storagemanager3 ) includes all the
    information you should need regarding migration, including the
    suggestions I'll list here.
    You'll want to leave NSM 2.5 running during the migration so that the
    NSM 3.0 setup wizard can connect to that Engine and import its policies
    and pending events. You'll also want to have as few pending events as
    possible in NSM 2.5 -- deferred deletes are fine, but all pending events
    will slow down the migration process, since they'll have to be
    transferred over.
    > The other question is of course is to upgrade NSM to a new version
    > before or after the move to Linux/OES11? We are at Ver 2.5.0.43 and want
    > to move to the latest version. Upgrade procedure and best practices
    > would be handy.
    Again, this information is available in the 3.0 Installation Guide. To
    migrate from 2.5 to 3.0.x, you'll have to have at least one OES11 server
    in your tree to install it on; but you'll also have to leave the NSM 2.5
    Engine running on its Netware host long enough to migrate from it.
    > The last of course is NSM compatible with SLES/OES11? I presume it is
    > and certainly hope so because we want to move all of our users to SLES11
    > NSS.
    We are about to release version 3.0.4 of NSM, which provides full
    support for OES11 on SLES11. The NFMS Support Team can also provide you
    with builds of NSM 3.0.3 which support OES11; if you need those for
    early testing, please send an email to storagemanager[at]novell[dot]com.
    Hope this helps!
    - NFMS Support Team

  • VPN3020 - ACS - Windows AD - best practices links

    Do you have a good link with general procedures and best practices for setting up VPN user authorization against a standard Windows domain/AD?
    VPN3020 -> RADIUS -> ACS (with the default policy mapped to Windows NT) does work, but I wanted more granular control over which users have VPN access.
    With this model, everyone who has a Windows account would automatically get VPN access.
    Also, is there any good reading on setting up "single logon" for the Cisco VPN client and a Windows domain?

    Try this link
    http://www.cisco.com/univercd/cc/td/doc/product/vpn/vpn3000/4_0/404acn3k.htm

  • SQL Best Practices

    Hello,
            I'm planning to install a SQL server 2012 SP1 as a backend database for SCCM 2012 R2.
    Kindly suggest a best practice.

    Julie,
    Check this:
    http://www.sqlservercentral.com/blogs/basits-sql-server-tips/2012/06/23/sql-server-2012-installation-guide/
    http://www.sqlskills.com/blogs/glenn/the-accidental-dba-day-4-of-30-sql-server-installation-and-configuration-best-practices/
    http://www.brentozar.com/archive/2008/03/sql-server-2005-setup-checklist-part-1-before-the-install/
    http://technet.microsoft.com/en-us/sqlserver/bb671430.aspx
    http://forums.whirlpool.net.au/archive/2144288
    Thanks,
    Jay

  • DRM Alternate Hierarchy Best Practice

    What procedures and best practices are suggested for implementing Alternate Hierarchies for DRM reporting purposes?
    Thank You.

    In two ways you can set up alternate hierarchy in DRM
    1) Across the hierarchies but within same version (Linked Nodes)
    2) Within the hierarchy (Shared Nodes)
    The Insert and AddInsert actions in DRM let you create shared hierarchies either within a hierarchy or across hierarchies.
    If you create shared hierarchies across hierarchies, you don't see suffixes such as ":Shared-001" at the end of the node, but if you use Insert within the hierarchy, you will see the suffixes. To implement the second option, you first need to turn on the system preference "SharedNodeMaitainanceEnabled".

  • Wireless best practices

    Are there a set of best practices that one could use for deployment of Cisco APs or any general AP?
    Let me know.
    Thanks,
    Ohamien

    Are you trying to set up a wireless LAN? If this is a new installation, the first best practice you need to follow is to do a site survey. For more information on site surveys, read http://www.cisco.com/en/US/tech/tk722/tk809/technologies_q_and_a_item09186a00805e9a96.shtml

  • XREF best practices in ESB cluster installation-OESB10.1.3.3

    Hi,
    We are using Oracle ESB during last 2 years.
    2 months ago I migrated our ESB installation to ESB Cluster in production (1 ESB DT, 1 ESB RT for polling
    adapters, 2 ESB RT for further message processing).
    We are using SOA Suite 10.1.3.3 with MLR#17 applied.
    I am facing an issue with XREF (the populateXRefRow XPath function) in the production system and need assistance.
    All our ESB processes contain two main parts:
    1) A polling DB adapter (or FTP adapter; it doesn't matter) that initiates an ESB process, plus a routing service for that polling adapter
    that asynchronously (!) invokes Requestor ABC-level services (AIA terms);
    2) The Requestor ABC-level services perform XREF population and continue message
    processing.
    XREF population is done in two steps:
    we call the lookupXRefRow XPath function, and if the value is not present in the XREF, we make a
    populateXRefRow call.
    This logic worked fine when we were not using an ESB cluster, but now step 2 (the ReqABC level) is performed by different ESB servers,
    and we frequently hit a unique constraint violation on XREF_DATA
    during the populateXRefRow call.
    The ESB RT nodes are used to balance load, but the transmitted data overlaps. For example, we poll not documents but document details (the polling table is populated by Oracle Streams; there is no guarantee that a document header arrives before its document details, because our system is heavily loaded and we use commit_serialization=none with parallelism in the APPLY processes).
    Each ESB RT instance can receive different rows of the same document, and XREF population is done at the document header level.
    My question is: what are the best practices for working with XREF in ESB cluster installations?
    Have other people faced this issue, and how was it resolved?
    I know a possible workaround: instead of calling the populateXRefRow function in XSLT, call a PL/SQL procedure or function that does the same thing but can ignore the exception.
    I don't like this solution, but I don't know any others.
    I also cannot simply skip populating the XREF, because the XREF is actively used in inter-system communication.


  • How to load best practices data into CRM4.0 installation

    Hi,
      We have successfully installed CRM4.0 on a lab system and now would like to install the CRM best practice data into it.
      If I refer to the CRM BP help site http://help.sap.com/bp_crmv340/CRM_DE/index.htm,
    It looks like I need to install at least the following In order to run it properly.
    C73: CRM Essential Information 
    B01: CRM Generation 
    C71: CRM Connectivity 
    B09: CRM Replication 
    C10: CRM Master Data 
    B08: CRM Cross-Topic Functions
    I am not sure where to start and where to end. At the minimum I need CRM Sales to start with.
    Do we have just one installation CD or a number of them? Also, are those available in the download area of service.sap.com?
    Appreciate the response.

    <b>Of course</b> you need to install the Best Practices configuration, or do your own config.
    Simply installing CRM 4.0 from the distribution CD/DVD will get you a plain vanilla CRM system with no configuration and obviously no data. The Best Practices guide you through the process of configuring CRM, and even automate some tasks. If you use some of the CATT processes of the Best Practices, you can even populate data in your new system (BP data, or replace the input files with your own data).
    In 12 years of SAP consulting, I have NEVER come across a situation where you simply install SAP from the distribution media and can start using it without ANY configuration.
    My advice is to work through the base configuration modules first, either by importing the BP config/data or by following the manual instructions to create the config/data yourself. Next, look at what your usage of CRM is going to be, for example Internet Sales, Service Management, et cetera, and then install the config for this/these modules.

  • Best Practices for SRM Installation !!

    Hi
        can someone share the best Practices for SRM Installation ?
    What is the typical timeframe to install SRM on development server and as well as on the Production server ?
    Appericiate the responses
    Thanks,
    Arvind

    Hi
    I don't know whether this will help you.
    See these links as well.
    <b>http://help.sap.com/bp_epv170/EP_US/HTML/Portals_intro.htm
    http://help.sap.com/bp_scmv150/index.htm
    http://help.sap.com/bp_biv170/index.htm
    http://help.sap.com/bp_crmv250/CRM_DE/index.htm</b>
    Hope this will help.
    Please reward suitable points.
    Regards
    - Atul

  • Best practice  Error " no fiscal year "

    In the Best Practice report CR "GL Statement", after updating the database and entering the parameters, it pops up the error "no fiscal year variant specified", though I run the same query in the ERP with the same parameters and it runs!
    I am using Crystal Reports 2008, version 12.1.0.892. This report is in SAP Best Practice.

    Did you configure the additional data sources and the BP package itself as described in the BI Quick Guide and the "Add additional data sources" document included in the package?
    Regards,
    Stratos

  • Best practice for calling stored procedures as target

    The scenario is this:
    1) Source is from a file or oracle table
    2) Target will always be oracle pl/sql stored procedures which do the insert or update (APIs).
    3) Each failure from the stored procedure must log an error so the user can re-submit the corrected file for those error records
    There is no option to create an E$ table, since there is no control option for the flow around procedures.
    Is there a best practice around moving data into Oracle via procedures? In Oracle EBS, many of the interfaces are pure stored procs and not batch interface tables. I am concerned that I must build dozens of custom error tables around these apis. Then it feels like it would be easier to just write pl/sql batch jobs and schedule with concurrent manager in EBS (skip ODI completely). In that case, one could write to the concurrent manager log and the user could view the errors and correct.
    I can get a simple procedure to work in ODI where the source is the SQL, and the target is the pl/sql call to the stored proc in the database. It loops through every row in the sql source and calls the pl/sql code.
    But I can not see how to set which rows have failed and which table would log errors to begin with.
    Thank you,
    Erik

    Hi Erik,
    Please, take a look in these posts:
    http://odiexperts.com/?p=666
    http://odiexperts.com/?p=742
    They could help you in a way to solve your problem.
    I already used it to call Oracle EBS API's and worked pretty well.
    I believe that an IKM could be build to automate all the work but I never stopped to try...
    Does it help you?
    Cezar Santos
    http://odiexperts.com

  • Best Practice for SAP PI installation to share Data Base server with other

    Hi All,
    We are going for a PI three-tier installation, and I need a best-practice document on whether PI should share a database server with other non-SAP applications. I have never seen SAP PI installed on a database server that other applications share. I do not know what the best practice is, but I am sure that sharing a database server with other non-SAP applications does not look like clean architecture, so I need an SAP best-practice document to get it approved by management. If somebody has a document link, please let me know.
    With regards
    Sunil

    You should not mix different apps into one database.
    If you have a standard database license provided by SAP, then this is not allowed. See these SAP notes for details:
    [581312 - Oracle database: licensing restrictions|https://service.sap.com/sap/bc/bsp/spn/sapnotes/index2.htm?numm=581312]
    [105047 - Support for Oracle functions in the SAP environment|https://service.sap.com/sap/bc/bsp/spn/sapnotes/index2.htm?numm=105047] -> number 23
          23. External data in the SAP database
    Must be covered by an acquired database license (Note 581312).
    Permitted for administration tools and monitoring tools.
    In addition, we do not recommend to use an SAP database with non-SAP software, since this constellation has considerable disadvantages
    Regards, Michael

  • Best Practice for Installation of Both Leopard and Aperture 2 upgrade.

    I've finally bitten the bullet and purchased both Leopard and the Aperture 2.0 upgrade. I've tried searching for a best practice to install both, but haven't been able to find one; only troubleshooting-type stuff. Any suggestions, things to avoid, etc. would be greatly appreciated. Even a gentle shove to a prior thread would be helpful. . . .
    Thanks for pointing me in the right direction.
    Steve

    steve hutchcraft wrote:
    I've tried searching for a best practice to install...
    • First be really sure that all your apps work well with 10.5.3 before you leave 10.4.11, which is extraordinarily stable.
    • Immediately prior to and immediately after every installation of any kind (OS, apps, drivers, etc.), go to Utilities/Disk Utility/First Aid and Repair Permissions. Repairing permissions is not a problem fixer per se, but anecdotally many folks with heavy graphics installations (including me) who follow that protocol seem to maintain better operating environments under the challenge of heavy graphics than folks who do not diligently do so.
    • When you upgrade the OS do a "clean install."
    • RAM is relatively inexpensive and 2 GB RAM is limiting. I recommend adding 4x2 GB RAM. One good source is OWC: http://www.owcomputing.com/.
    • After you do your installations check for updates to the OS and/or Aperture, and perform any upgrades. Remember to Repair Permissions immediately prior to and immediately after the upgrade installations.
    • If you are looking for further Aperture performance improvement, consider the Radeon HD 3870. Reviews at http://www.barefeats.com/harper16.html and at http://www.barefeats.com/harper17.html.
    Good luck!
    -Allen Wicks

  • Best practice of 11G release 2 Grid & RAC installation on Solaris 10

    Hi Experts,
    Please share your 11g Release 2 Grid Infrastructure and RAC installation experience on Sun SPARC.
    I would appreciate documentation that provides complete information from server setup to database setup (other than the Oracle documentation).
    Also, please let me know which is the best storage option (NFS, ASM, ...) and the pros and cons.
    Regards,
    Rasin M

    Hi,
    > Appreciate if you can provide documentation which provides complete information from server setup to database setup (other than Oracle documentation)
    Check this in MOS:
    RAC Assurance Support Team: RAC Starter Kit and Best Practices (Solaris)
    https://support.oracle.com/CSP/main/article?cmd=show&id=811280.1&type=NOT
    Regards,
    Levi Pereira
    http://levipereira.wordpress.com
