Firmware Package best practices... ???

Hello,
I do not understand the advantage(s) of using host firmware policies.
It seems to me that it is more flexible to upgrade endpoints for each server by applying a "User Ack" maintenance policy, in order to get "Pending Activities" for servers which need a reboot when we need to upgrade firmware.
What is the best practice?
I could see that the Management Firmware Package is no longer supported with UCS 2.1; all firmware packages are now driven via Host Firmware Packages.
Last question: why do we have two package versions for Set Bundle:
2.1(1a)A/2.1(1a)B, 2.0(3c)A/2.0(3c)B, and so on?
best regards
Nicolas.

Host firmware package policies are nice because you can guarantee that all server-level firmware is the same for every server in the system. Another advantage comes when new servers are added: a new server will probably arrive with different firmware than your existing servers, but if you have a host firmware package assigned to a service profile template and you create a service profile for the new server from that template, the new server will match your other servers.
Host firmware package policies are also the only way to upgrade the BIOS of a server.
As for the bundles, all infrastructure components (FIs, IOMs, UCSM) are in the .A bundle, all server-level firmware for B-Series is in the .B bundle, and C-Series is in the .C bundle.
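For example, the downloaded bundle files carry those suffixes in their names. A representative sketch of the naming pattern (exact filenames vary by release):
ucs-k9-bundle-infra.2.1.1a.A.bin      (infrastructure: FI, IOM, UCSM)
ucs-k9-bundle-b-series.2.1.1a.B.bin   (B-Series server firmware)
ucs-k9-bundle-c-series.2.1.1a.C.bin   (C-Series server firmware)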

Similar Messages

  • Pros and cons if I run multiple host firmware packages in a single chassis

    Hi,
    I would like to know the pros and cons if I run multiple host firmware packages in a single chassis.
    For example, if my UCS is running firmware 1.4.1m (with backups of 1.3.1 and 1.1.1j) and has 3 blades in one chassis,
    can I assign service profiles having three different host firmware versions?
    Server 1 - host firmware 1.1.1j - Win2k8
    Server 2 - host firmware 1.4.1m - RHEL 5.6
    Server 3 - host firmware 1.3.1 - ESXi 4.0
    Thanks in advance. Please reply on whether I can go ahead with this.

    Yes, you can package them together.
    Now Cisco has a better way. Cisco UCS Manager provides two main advantages over past firmware provisioning:
    • The capability to group multiple firmware components together in one package
    • The capability to apply a firmware package to any compatible server in a single operation
    Cisco UCS Manager provides an accurate, easier, faster, more flexible, and centralized solution for managing firmware across the entire hardware stack. Service profiles in Cisco UCS Manager abstract the physical hardware from its software properties. Service profiles allow administrators to associate any compatible firmware with any component of the hardware stack. Simply download the firmware versions needed from Cisco and then, within minutes, totally provision firmware on components within the server, fabric interconnect, and fabric extender based on required network, server, and storage policies per application and operating system.
    Here is the document:
    http://www.cisco.com/en/US/solutions/collateral/ns340/ns517/ns224/ns944/white_paper_c11-588010_ps10279_Products_White_Paper.html
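    To make the "single operation" concrete, here is a rough UCSM CLI sketch of creating a host firmware package and pinning it to a bundle version (the pack name and version string are placeholders; check the CLI configuration guide for your release):
    # scope org /
    # create fw-host-pack demo-pack
    # set blade-vers 2.1(1a)B (pins all B-Series server firmware in this pack)
    # commit-buffer
    Any service profile that references demo-pack will then drive its server's firmware to that version on association.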

  • ADF Faces & BC: Best practices for project layout

    Season's greetings, my fellow JDevelopers!
    Our software group has been working with ADF for around 5 years, and through the years we have accumulated a good amount of knowledge working with JDeveloper and ADF. Much of our current application structure was established in the early days of JDeveloper 10, when there was more sample code floating around than there was "best practice" documentation. I understand this is a subjective topic and varies site to site, but I believe there is a set of common practices our group has started to identify as critical to streamlining a development process (reusable decorated UI components, modular common business logic, team development with SVN, continuous integration/build, etc.). One of our development goals is to minimize dependency between engineers, as everyone is responsible for both client- and middle-layer implementation, without losing coding consistency. After speaking with a couple of the ACEs at the last OpenWorld, I understand much of our anticipated architectural requirements are met with JDeveloper 11 (with the introduction of templates, declarative components, bounded task flows, etc.), but due to time constraints on upcoming deliverables we are still about a year away from moving to that new release. The following is a little bit about our group/application.
    JDeveloper version: 10.1.3.4
    Number of developers: 7
    Developer responsibilities: build both Faces & BC code
    We have two applications currently in our production environments.
    1. A flavor of Steve Muench's dynamic JDBC credentials login module
    2. Core ADF Faces & BC application
    In our Core ADF Faces application, we have the following structure:
    OurApplication
         -OurApplicationLib (Common framework files)
         -OurApplicationModel (BC project)
              -src/org/ourapp/module1
              -src/org/ourapp/module2
         -OurApplicationView (Faces project)
              public_html/ourapp/module1
              public_html/ourapp/module2
              src/org/ourapp/backing/module1
              src/org/ourapp/backing/module2
              src/org/ourapp/pageDefs/
    Total Number of Application Modules: 15 (Including one RootApplicationModule which references module specific AMs)
    Total Number of View Objects: 171
    Total Number of Entities: 58
    Total Number of BC Files: 1734
    Total Number of JSPs: 246
    Total Number of pageDefs: 236
    Total Number of navigation cases in faces-config.xml: 127
    Total Number of application files: 4183
    Total application size: 180 MB
    Are there any other ways to divide up this application? I.e., module-specific projects with separate faces-config files/data bindings? If so, how can these files be "hooked" together? A couple of the ACEs have recommended that we separate all the entity files into their own project, which makes sense. Also, we are looking into Maven builds, which should remove those pesky Model.jpr files that constantly get "touched". I would love to hear how other groups are organizing their applications, and anything else they would like to share as ADF best practices.
    Cheers,
    Wes

    After discussions over the summer/autumn by members of the ADF Methodology Group I have published an ADF Coding Standards wiki page that people may find useful:
    [http://wiki.oracle.com/page/ADF+Coding+Standards]
    It's aimed at ADF 11g and is intended to be a living document - if you have comments or suggestions please post them to the ADF Methodology google group ( [http://groups.google.com/group/adf-methodology?hl=en] ).

  • Info about the linux-firmware package and rt2870-usb-fw

    Hello,
    I have a question about the linux-firmware package in the repo.
    Recently there was an update for this package, and when I checked its contents I saw that it also updates rt2870usb-fw.
    Maybe I am a bit confused about all the kernel modules / firmware, but since I need to compile the latest kernel (linux-mainline from AUR) to get my Buffalo wireless USB adapter to work (my rt2870 is supported from rc4), would it be possible to update only this package and use the kernel in the Arch repo?
    I can't test it myself because I am out of town for a while.
    Thank you

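    For reference, pacman can update just that one package while leaving the installed kernel alone. A minimal example (assuming the standard repo package name):
    # update only linux-firmware; the kernel package is untouched
    pacman -S linux-firmware
    Whether the newer rt2870usb-fw file then works with the older repo kernel is a separate question - the kernel module and the firmware still have to be compatible.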

  • Best practices for ODI interfaces

    I was wondering how everyone is handling the errors that occur when running an interface with ODI.
    Our scenario:
    We have customer data that we want to load each night via ODI. The data is in a flat file, and a new file is provided each night.
    We have come across an issue where a numeric field had non-numeric data in it, so ODI created a bad file with the bad record in it, and an error file with the error message. We also had some defined constraints that forced records into the E$ table.
    My question is: how does everyone handle looking for these errors? We would like them to be reported to just one place (an Oracle table), so when the process runs we can look at that one table and then act on the issues. As shown above, ODI puts errors in two different places: DB errors in a flat file and user-defined errors in the E$ tables.
    I was wondering if anyone has come across this issue and might be able to tell me what was done to handle the errors that occur, or what the best practices might be for handling these errors?
    Thanks for any assistance.
    Edited by: 832187 on Sep 29, 2011 1:18 PM

    If you have only a few fields affected by the conversion problem, you could try inserting an ODI constraint, or you could modify the LKM to load the bad file if present.
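    One hedged sketch of getting the DB-rejected rows into a single central table: post-process the .bad file with SQL*Loader into an errors table of your own. The control file, table, and file names below are assumptions, not ODI-generated artifacts:
    # load ODI's rejected rows into a central errors table after each run
    sqlldr userid=odi_stage control=cust_errors.ctl data=CUSTOMER.bad log=cust_errors.log
    Since the E$ rows are already in the database, a view (or insert-select) over the E$ table plus this errors table would give the single reporting point described above.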

  • Best practices for customizing with regard to performance

    Hello,
    I would like to know the list of best practices for customizing BPC NW 7.5 with regard to performance.
    Best regards
    Bastien

    Hi,
    There are a few how-to guides on SDN which will give you a basic idea of script logic. Apart from this, you can refer to the help guide on help.sap.com.
    The templates might also affect performance. The number of EVDRE functions, the number of expansion dimensions, and the number of members on which expansion takes place will all affect performance. Complex formatting in the template will also have an effect.
    Hope this helps.

  • Best practices for GATP by SAP

    Hi all,
    I am not able to download the SAP Best Practices for GATP from http://help.sap.com/bp_scmv250/BBLibrary/HTML/ATP_EN_DE.htm. It seems the documents have been removed. Can someone who already downloaded them share them with me?
    Also, can you provide working links for the best practices for SNP and PP/DS?
    Thank you,
    Ram

    Hello Ram
    Please check this wiki page - it has good content and some useful links:
    APO-GATP General Information - Supply Chain Management (SCM) - SCN Wiki
    and
    Find out more on the RDS solution for GATP at http://service.sap.com/rds-gatp
    If you search http://service.sap.com/bestpractices you will find documents about best practices in GATP. The help.sap.com documentation for GATP is a good resource to start with as well.
    You can also read the blog below, written by me:
    Global Available To Promise (GATP) Overview
    Hope this will help
    Thank you
    Satish Waghmare

  • Need best practices advice

    Hey guys,
    Can anyone share with me the best practices for the setup of an Oracle database? I know that the amount of redo, the grouping, the file system layout, etc. depend on the size of your DB, so to help, here are the specs of mine:
    oradata: 200 GB
    change rate: 50 KB/s (I got that by dividing the size of my archived redo logs by the amount of time between the first and last archived log).
    This is a standard database (not OLTP or data warehouse) used to store client information.
    My RPO (Recovery Point Objective) is 30 minutes.
    Some quick questions:
    1. How should I lay out the file system?
    2. How many redo logs/groups, and what size?
    3. How many control files, and where should I put them?
    4. How should I set up log switching?
    Any docs? Quick ones - I don't want to read a 300-page Oracle document :-) This is why I'm drawing on your knowledge.
    Thanks
    Edited by: Sabey on 9-Feb-2011 8:01 AM

    Sabey wrote:
    Ok a bit more information.
    > Storage: SAN, RAID 5 disks only
    Since it's SAN, the RAID 5 (which is generically bad for performance in any update environment) will have minimal adverse effect, because the RAID 5 is hidden by massive cache. Just try to spread the data files across as many disks as possible.
    Oracle works best with datafiles on 'SAME' (Stripe And Mirror Everything). Spread the data files across all possible disks and mix data and index to try to get randomization.
    > No ASM
    Pity. A lot of potential transparency will be side-stepped.
    > OS: Solaris 10 on an M4000 (2 SPARC 2.1 GHz CPUs, 4 cores each), 16 GB RAM
    Finally some meat. ;-) I assume Enterprise Edition, although for the size, the transaction rate proposed, and the configuration, Standard Edition would likely be sufficient, assuming you don't need EE-specific features.
    You don't talk about the other things that will be stealing CPU cycles from Oracle, such as the app itself or batch jobs. As a result, it's not easy to suggest an initial guess at memory size. App behaviour will dictate PGA sizing, which can be as important as SGA size, if not more so. For the bland description of the app you provide, I'd leave 2 GB for the OS, subtract whatever else is required (app & batch, other stuff running on the machine), and split the remaining memory 50/50 between SGA and PGA until I had stats to change that.
    > Like I said, I expect a change rate of 50 KB/s. Is there a rule of thumb for the size of redo logs, the amount, etc.? No bulk loads; data is entered by people from a user interface, no machine-generated data. Read queries for reports, but not a lot.
    Not too much to worry about then. I'd shoot for a minimum of 8 redo logs, mirrored by Oracle software to separate disks if at all possible, and size the log files to switch roughly every 15 minutes under typical load. From the looks of it, that would be (50 KB/s * 60 sec/min * 15 min), or about 50 MB - moderately tiny. And set ARCHIVE_LAG_TARGET to 15 minutes so you have a predictable switch frequency.
    > BTW, what about direct I/O? Should I mount all Oracle FS in that mode to prevent the use of the OS buffer cache?
    Again, this would be eliminated by using ASM, but here is Tom Kyte's answer confirming direct I/O: http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:4159251866796
    Your environment is very, very small in Oracle terms. Not too much to fuss over. Just make sure you have a decent backup/recovery/failover strategy in place and tested. Use RMAN for the B/R, and either Data Guard (or DBVisit for Standard Edition).
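    As a quick sanity check of that sizing arithmetic in the shell (using the ~50 KB/s redo rate from the thread):
    # redo generated in one 15-minute window at ~50 KB/s
    echo "$(( 50 * 60 * 15 / 1024 )) MB"    # prints "43 MB"; round up to ~50 MB per log
    Eight logs of that size also sit comfortably within the stated 30-minute RPO, since a switch (and archive) then happens roughly every 15 minutes.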

  • Board Controller not updated with Host Firmware Package

    UCS v2.2(1c).
    A new B200 M3 blade was inserted and associated with an SP that references a host firmware package of 2.2(1c).
    Although the HFP has version 11 for the board controller, the blade has version 13?
    Is this a bug or a feature?

    Walter,
    What happens is that the downgrade is not supported... if you try to do it manually through an SSH session, the system will tell you "downgrade is not supported".
    Try these commands (for future users; you may know them already):
    # scope server X/Y (chassis X, blade Y)
    # scope boardcontroller
    # show image (this will list all of the firmware versions)
    # activate firmware version.0 force (select a lower version than the one already running)
    # commit-buffer <<<<< here you will see the message I mentioned
    HTH
    -Kenny

  • Linux Native Multipath and ASM Best Practices questions

    Hello,
    I'd like to know your opinions on some questions I have.
    I am using Linux native multipath without ASMLib, and I wonder:
    1 -
    Is it mandatory / best practice to partition (with fdisk) device-mapper LUNs before using them to build an ASM diskgroup, or does Oracle ask to partition them because ASMLib works better on partitions? In other words, are there any issues with using /dev/device-mapper/mpath1 directly, or do I have to use /dev/device-mapper/mpath1p1 with a 1 MB offset?
    2 -
    Is it best to give the proper user/group to the mpath LUNs via rc.local or via udev rules? Is there any difference?
    Please write about what you have experienced.
    Thanks and bye

    ottocolori wrote:
    > Hello,
    > I'm trying to get a clearer picture of it, and as far as I know:
    > 1 - Assuming you need to use the whole disk, partitioning it is mandatory only if you use ASMLib, as it works only on partitioned disks.
    Yes, you need to partition the disk before it is presented to ASMLib.
    ottocolori wrote:
    > 2 - There is no need to skip the first cylinder, or at least I can't find official info about that. What do you think? TIA
    No need on the Linux platform to skip the 1st cylinder. If I remember correctly, you'd need to skip the 1st cylinder on Solaris, as there is a bug there.
    Cheers
  • VMware Data Recovery best practices

    Hi,
    I am looking for VMware Data Recovery best practices. Everywhere on the internet I find the following link: http://viops.vmware.com/home/docs/DOC-1551
    But this is not a valid link, and I can't find the document anywhere...
    Thanks


  • Kernel-firmware package

    I compiled the whole kernel package for OEL 5.7.
    But on installing the RPM, I get this error:
    rpm -vih /usr/src/redhat/RPMS/x86_64/kernel-uek-2.6.32-200.20.1uek.x86_64.rpm
    error: Failed dependencies:
    kernel-firmware >= 2.6.32-200.20.1uek is needed by kernel-uek-2.6.32-200.20.1uek.x86_64
    Yet I see no kernel-firmware package that was compiled into an RPM as part of this. I compiled using:
    rpmbuild -ba --target x86_64 kernel-uek.spec
    Any ideas? How do I get and compile the kernel-firmware package?
    Thanks in advance.

    user754933 wrote:
    > I know very well how to compile it, so does anyone know where I can obtain the source for the kernel-uek-firmware for kernel-uek-2.6.32-200.20.1?
    The source for the firmware is contained within the same kernel src you've already downloaded; however, it's compiled separately. You need to enable it in the .spec file for the UEK build:
    # kernel-firmware
    %define with_firmware  0
    Switch that to 1 and rebuild the kernel RPM from the .spec file.
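    A one-liner sketch of that edit-and-rebuild step (the spec file name and the macro's exact spacing are assumptions based on the snippet above):
    # flip the firmware macro in place, then rebuild
    sed -i 's/%define with_firmware  0/%define with_firmware  1/' kernel-uek.spec
    rpmbuild -ba --target x86_64 kernel-uek.spec
    The rebuild should then also produce the firmware RPM alongside the kernel RPM, satisfying the failed dependency shown above.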

  • Creating Software Update Packages - Best Practice?

    I am setting up our SCCM 2012 R2 environment to begin using it for Windows updates; however, I'm not 100% sure of the best method of setting it up.
    Currently my plan is to break out the deployment packages by OS, but I have read (and been told) that I should avoid creating too many dynamic deployment packages, as every time one changes, all the computers will re-scan that package. So what I want to do is create various packages by OS and year: I would have a package that contains all updates for Windows 7 older than January 31, 2013 (assuming the package doesn't have 1000+ updates) that are not superseded/expired. Then I would create packages for the 2014 monthly updates each month, then at the end of 2014 combine them all into one package and restart the process for 2015. Is this a sound plan, or is there a better course of action?
    If this is the best-practice method, is there any way to automatically create these packages? I tried the Automatic Deployment Rules, but I cannot set a year of release, only a time frame of release (older than 9 months), unless I am missing something. The only way I can see doing this is going into All Software Updates, filtering on my requirements, and then manually creating the package, but this would be less desirable, as after each year I would like to remove the superseded and expired updates without having to recreate the package.
    Mark.

    First, please learn what the different objects are - not trying to be rude, just stating that if you don't, you will have fundamental issues. Packages are effectively meaningless when it comes to deploying updates. Packages are simply a way of grouping the binary files so they can be distributed to DPs and in turn made available to clients. The package an update is in is irrelevant. Also, you do not "deploy" update packages, and packages are not scanned by clients. (The terminology is very important because there are implications that go along with it.)
    What you are actually talking about above are software update groups. These are separate and distinct objects from update packages. Software update groups group updates (not the update binaries) into logical groups that can in turn be deployed or used for compliance reporting.
    Thus, you have two different containers to be concerned about: update packages and update groups. As mentioned, the update package an update is in is pretty meaningless as long as the update is in a package that is also available to the clients that need it. Thus, the best way (IMO) to organize packages is by calendar period. Yearly or semi-annually usually works well. This is done more or less to avoid putting all the updates into a single package that could get corrupted or would be difficult to distribute to new DPs.
    As for update groups, IMO, the best way is to create a new group every month for each class of products. This typically equates to one for servers, one for workstations, and one for Office every month. Then, at the end of every year (or some other timeframe), roll these monthly groups into a larger update group. Keep in mind that a single update group can have no more than 1,000 updates in it, though. (There is no explicit limit on packages at all, except see my comments above about not wanting one huge package for all updates.)
    Initially populating packages (like 2009, 2010, 2011, etc.) is a manual process, as is populating the update groups. From then on, you can use an ADR (or really three: one for workstations, one for servers, and one for Office) that runs every month, scans for updates released in the past month, and creates a new update group.
    Depending upon your update process, you may have to go back and add additional deployments to each update group also, but that won't take too long. Also, always QC the update groups created by an ADR. You don't want IE11 slipping through if it will break your main LOB application.
    Jason | http://blog.configmgrftw.com

  • Import best practices

    Hello everybody
    I want to import the SAP Best Practices - how do I import them? Can I use transaction code SAINT?
    I read the Quick Guide document, but I can't understand how to upload the files, i.e., which transaction code is used.
    Also, how do I directly create a transport request and a workbench request?
    Anybody, please suggest how to do this.
    Thanks
    ganesh

    Hi,
    Go through Note 847091; at the bottom of the note you will find "BL_Quick_Guide_EN_UK.zip".
    You can follow that document for the BP installation.
    Note: that note is for the SAP Best Practices Baseline Package UK, SG, MY V1.500, but your requirement may be different; search accordingly.
    --Kishore

  • Host firmware package

    Hi,
    I am trying to create a new host firmware package for version 2.1(1a).
    I was able to find the right hardware for Adapter, CIMC, BIOS, and Storage, but couldn't find the right one for Board Controller.
    We are using the B200 M3, but it is not listed under that tab (see attached image).
    Do you know what the best practice is?
    Choose all available hardware, or just choose what is actually installed?
    I only chose the following items under their respective tabs:
    for adapters - UCSB-MLOM-40G-01, UCS-VIC-M82-8P
    for storage - LSI MegaRAID SAS 2004 ROMB
    for BIOS and CIMC - UCSB-B200-M3
    But I couldn't find a matching PID for the B200 M3.
    Thanks,
    Harry

    Harry,
    The B200s don't have board controller FW - it's not necessary, so you're not missing anything. Only the platforms listed in the "Board Controller" section of the FW policy tab require it.
    Regards,
    Robert
