Grid Control deployment best practices

I'm looking for this document; I'm interested to know more about Grid Control deployment best practices for monitoring and managing more than 300 databases.

Hi,
have a search for the following document:
MAA_WP_10gR2_EnterpriseManagerBestPractices.pdf
Regards,
Alan

Similar Messages

  • Do I need license to patch agent through grid control deployment wizard

    In Grid Control, we have licenses to use the Database Diagnostics Pack and Database Tuning Pack.
    I know that without a license for the Provisioning Pack I cannot patch database homes using the Grid Control Deployment tab, but I'm wondering if I can patch the Grid Control agent through the Deployment page without a license for the Provisioning Pack.
    thanks

    There is no management pack license needed for patching the agent. You can get clear details of which link needs which packs when you click the '+' symbol (Show Management Pack Information) next to "About Oracle Enterprise Manager" at the very bottom of the OEM screen.
    When you click "Show Management Pack Information", the pack needed is shown in parentheses next to every link. If no pack information is mentioned next to a link, no license is needed for that link.

  • Oracle Identity Manager - automated builds and deployment/Best practice

    Is there a best practice for the directory structure of the repository in a version control system?
    Do you recommend keeping the whole xellerate folder plus a separate structure for XML files and Java code? (Considering the fact that multiple upgrades can occur over time.)
    How is custom code merged into the main application?
    How does deployment to the WebLogic application server occur? (Do you create your own script, or is there an out-of-the-box script that can be reused?)
    I would appreciate any guidance regarding this matter.
    Thank you for your help.

    Hi,
    You can use any IDE (Eclipse, Netbeans) for development.
    To get started with the OIM APIs using Eclipse, please follow these steps:
    1. Creating the working folder structure
    2. Adding the jar/configuration files needed
    3. Creating a Java project in Eclipse
    4. Writing a sample Java class that will call the APIs
    5. Debugging the code with the Eclipse debugger
    6. API Reference
    1. Creating the working folder structure
    The following structure must be created in the home directory of your project (Separate project home for each project):
    <PROJECT_HOME>
    \ bin
    \ config
    \ ext
    \ lib
    \ log
    \ src
    The folders will store:
    src - source code of your project
    bin - compiled code of your project
    config - configuration files for the API and any of your custom configuration files
    ext - external libraries (3rd party)
    lib - OIM API libraries
    log - local logging folder
    2. Adding the jar/configuration files needed
    The easiest way to perform this task is to copy all the files from the OIM Design Console folders into the corresponding <PROJECT_HOME> folders.
    That is:
    <XEL_DESIGN_CONSOLE_HOME>/config -> <PROJECT_HOME>/config
    <XEL_DESIGN_CONSOLE_HOME>/ext -> <PROJECT_HOME>/ext
    <XEL_DESIGN_CONSOLE_HOME>/lib -> <PROJECT_HOME>/lib
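    On a Unix-style machine this copy could look like the following sketch (assuming both home directories are exported as environment variables):
    # Copy the Design Console configuration and libraries into the project
    cp -r "$XEL_DESIGN_CONSOLE_HOME/config/." "$PROJECT_HOME/config/"
    cp -r "$XEL_DESIGN_CONSOLE_HOME/ext/."    "$PROJECT_HOME/ext/"
    cp -r "$XEL_DESIGN_CONSOLE_HOME/lib/."    "$PROJECT_HOME/lib/"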
    3. Creating a Java project in Eclipse
    + Start Eclipse platform
    + Select File->New->Project from the menu on top
    + Select Java Project and click Next
    + Type in a project name (For example OIM_API_TEST)
    + In the Contents panel select "Create project from existing source",
    click Browse and select your <PROJECT_HOME> folder
    + Click Finish to exit the wizard
    At this point the project is created and you should be able to browse
    through it in Package Explorer.
    Setting src in the build path:
    + In Package Explorer right click on project name and select Properties
    + Select Java Build Path in the left and Source tab in the right
    + Click Add Folder and select your src folder
    + Click OK
    4. Writing a sample Java class that will call the APIs
    + In Package Explorer, right click on src and select New->Class.
    + Type the name of the class as FirstAPITest
    + Click Finish
    Put the following sample code in the class:
    import java.util.Hashtable;
    import com.thortech.xl.util.config.ConfigurationClient;
    import Thor.API.tcResultSet;
    import Thor.API.tcUtilityFactory;
    import Thor.API.Operations.tcUserOperationsIntf;
    public class FirstAPITest {
        public static void main(String[] args) {
            try {
                System.out.println("Startup...");
                System.out.println("Getting configuration...");
                ConfigurationClient.ComplexSetting config =
                    ConfigurationClient.getComplexSettingByPath("Discovery.CoreServer");
                System.out.println("Login...");
                Hashtable env = config.getAllSettings();
                // Log in as xelsysadm; replace "welcome1" with your own password.
                tcUtilityFactory ioUtilityFactory =
                    new tcUtilityFactory(env, "xelsysadm", "welcome1");
                System.out.println("Getting utility interfaces...");
                tcUserOperationsIntf moUserUtility = (tcUserOperationsIntf)
                    ioUtilityFactory.getUtility("Thor.API.Operations.tcUserOperationsIntf");
                // Find all users whose first name is "System" and print their keys.
                Hashtable mhSearchCriteria = new Hashtable();
                mhSearchCriteria.put("Users.First Name", "System");
                tcResultSet moResultSet = moUserUtility.findUsers(mhSearchCriteria);
                for (int i = 0; i < moResultSet.getRowCount(); i++) {
                    moResultSet.goToRow(i);
                    System.out.println(moResultSet.getStringValue("Users.Key"));
                }
                System.out.println("Done");
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
    Replace "welcome1" with your own password.
    + save the class
    To run the example class perform the following steps:
    + Click in the menu on top Run, and run "Create, Manage, and run Configurations" wizard. (In the menu, this can be either "run..." or "Open Run Dialog...", depending on the version of Eclipse used).
    + Right click on Java Application and select New
    + Click on the Arguments tab
    + Paste the following in VM arguments box:
    -Djava.security.manager -DXL.HomeDir=.
    -Djava.security.policy=config\xl.policy
    -Djava.security.auth.login.config=config\authwl.conf
    -DXL.ClientClassName=%CLIENT_CLASS%
    (please replace the URL in ./config/xlconfig.xml with that of your application server if you are not running on localhost or not using the default port)
    + Click Apply
    + Click Run
    At this point your class is executed. If everything is correct, you will see the following output in the Eclipse console:
    Startup...
    Getting configuration...
    Login...
    log4j:WARN No appenders could be found for logger (com.opensymphony.oscache.base.Config).
    log4j:WARN Please initialize the log4j system properly.
    Getting utility interfaces...
    1
    Done
    Regards,
    Sunny Ajmera

  • Jdev101304 SU5 - ADF Faces - Web app deployment best practice|configuration

    Hi Everybody:
    1.- We have several web applications that provide a service/product used for public administration purposes.
    2.- The apps are using ADF Faces and ADF BC.
    3.- All of the apps are participating in JavaSSO.
    4.- The web apps are deployed on on-demand servers.
    5.- We have noticed that, with the increase of users on these dates, the sessions created by the middle tier in the database stay inactive but are never destroyed or removed.
    6.- Even when we only sign into the apps using JavaSSO and perform no transactions (like inserting or deleting something), when we query v$session in the database the number of inactive sessions keeps increasing, until the server collapses.
    So, we want to know if this is an issue with the configuration of the Application Module's properties, and whether there are some "best practices" you could provide us to configure a web application and avoid this behavior.
    The only configuration we found recommended for web apps is setting jbo.locking.mode to optimistic, but this doesn't correct the "increasing inactive sessions" problem.
    Please help us find some documentation or another resource to correctly configure our apps.
    Thanks in advance.
    Edited by: alopez on Jan 8, 2009 12:27 PM

    hi alopez
    Maybe this can help, "Understanding Application Module Pooling Concepts and Configuration Parameters"
    see http://www.oracle.com/technology/products/jdev/tips/muench/ampooling/index.html
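    For the "increasing inactive sessions" symptom specifically, the pooling and connection parameters discussed in that article can be set in bc4j.xcfg or as -D system properties. A minimal sketch, with illustrative values only (tune them per the article; these are not recommendations):
    # Application module pool sizing
    jbo.ampool.minavailablesize=1
    jbo.ampool.maxavailablesize=25
    # Remove AM instances idle longer than this (milliseconds)
    jbo.ampool.maxinactiveage=600000
    # How often the pool monitor wakes up to clean the pool (milliseconds)
    jbo.ampool.monitorsleepinterval=120000
    # Release the JDBC connection back to the connection pool after each
    # request instead of holding one database session per pooled AM instance
    jbo.doconnectionpooling=true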
    success
    Jan Vervecken

  • SOA OSB Deployment best practices in Production environment.

    Hi All
    I just wanted to know the best practices followed in production environments for deploying OSB and SOA code. As you are aware, both require libraries from (JDev or SOA Suite) and (OEPE and OSB) respectively. Should one rip out the libraries and package them with the ANT scripts (I am not sure, but SOA would require its internal ANT scripts and a lot of libraries to be bundled; OSB requires only a few OEPE and OSB libraries), or do we simply use one of the below:
    1) Use the production runtime (SOA Server and OSB Server) to build and deploy the code. OEPE would not be present here, so we would just have to deploy the already created sbconfig.jar (we would build this in a local environment where OEPE and OSB are installed). The code is checked out from a repository and transferred to this Linux machine.
    2) Use a Windows machine (which has access to the prod environment) with JDeveloper, OEPE and OSB installed to build/deploy the code to the production server. The code is checked out from a repository.
    Please let us know your personal experiences with deployment in PROD. Thanks a lot!

    There are two approaches for deployment of OSB and SOA code.
    1. Use a machine specifically for build and deployment which has access to all production environments (where deployment needs to be done). Install all the required software (OEPE, OSB, etc.) and use remote deployment to deploy the code.
    2. Bundle all the build- and deployment-related libraries and ship them as a deployment package on the target server, then proceed with the deployment.
    The most commonly followed approach is #1.
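    For the SOA side of approach 1, one way to script the remote deployment is the ant-sca-deploy utility that ships with SOA Suite 11g; a rough sketch, where the host, port, composite name and paths are assumptions for illustration:
    # Deploy a composite SAR from the build machine to the production server
    ant -f $SOA_HOME/bin/ant-sca-deploy.xml deploy \
        -DserverURL=http://prodsoahost:8001 \
        -DsarLocation=deploy/sca_OrderProcessing_rev1.0.jar \
        -Doverwrite=true \
        -Duser=weblogic
    The OSB side can similarly be scripted by importing the already built sbconfig.jar via WLST against the production OSB server.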
    Regards
    Vivek

  • GRC AACG/TCG and CCG control migration best practice.

    Is there any best practice document which illustrates the step-by-step migration of AACG/TCG and CCG controls from the development instance to production? Also, how should one take a backup of the same?
    Thanks,
    Arka

    There are no automated out-of-the-box tools to migrate anything from CCG. In AACG/TCG you can export and import Access Models (including the Entitlements) and Global Conditions. You will have to manually set up roles, users, path conditions, etc.
    You can't clone AACG/TCG or CCG.
    Regards,
    Roger Drolet
    OIC

  • Portal server deployment best practices

    Does anyone out there know the right way to deploy Portal Server into a production environment, instead of manually copying all the folders and running the necessary commands? Is there a better way to deploy Portal Server? Any best practices I should follow for deploying Portal Server?

    From the above, what I understood is that you would like to transfer your existing Portal Server configuration to the new one. I don't think there is an easy method to do it.
    One way you can do it is by taking an LDIF backup from the existing Portal Server.
    For that, first install the Portal Server on the new box, then export the existing Portal Server's directory data (note: db2ldif exports, ldif2db imports) using
    # /opt/netscape/directory4/slapd-<host>/db2ldif /tmp/profile.ldif
    Edit the /tmp/profile.ldif file and replace <hostname> and <domain name> with the new system's values.
    Copy this file to the new server and import it using
    # /opt/netscape/directory4/slapd-<host>/ldif2db -i /tmp/profile.ldif
    and also copy the file slapd.user_at.conf under /opt/netscape/directory4/slapd-<hostname>/config to the new system.
    Restarting the server then lets you access the Portal Server with the configuration of the old one.

  • CCT deployment best practices Question.

    Using the packager to deploy 50+ seats, would it be best practice to perform a team-wide uninstall of currently installed CS5.5 assets prior to package deployment?  Or can these assets remain on end user systems?  Please advise.  Thanks!

    We're faced with that scenario more times than we like. Depending on who you ask, it's "We need to remove old versions to reduce unnecessary issues" on the IT side, and "Touch my CS5.5 and I will beat you" on the user side. LOL
    Our position is to remove old versions, so we're testing the Adobe cleaner tool, hoping to be able to script removal of all previous versions. Of course the litmus test is to get CS4 installed on a test Mac for testing.
    http://www.adobe.com/support/contact/cscleanertool.html
    Man, if there's anything this exercise has done, it's remind us how much progress Jody/Karl have made in the last few years. Installing CS4 (pre-AAMEE days) is like having toothpicks shoved into your retina. It's sooo much better today.
    Don

  • SCCM 2012 Update deployment best practices?

    I have recently upgraded our environment from SCCM 2007 to 2012. In switching over from WSUS to SCCM Updates, I am having to learn how the new deployments work. I've got the majority of it working just fine: Microsoft updates, Adobe updates (via SCUP), etc.
    A few users have complained that the systems seem to be taking up more processing power during the update scans, so I am wondering what the best practices are for this...
    I am deploying all Windows 7 updates (32 and 64 bit) to a collection with all Windows 7 computers (32 and 64 bit)
    I am deploying all Windows 8 updates (32 and 64 bit) to a collection with all Windows 8 computers (32 and 64 bit)
    I am deploying all office updates (2010, and 2013) to all computers
    I am deploying all Adobe updates to all computers... etc.
    I'm wondering if it is best to be more granular than that? For example: should I deploy Windows 7 32-bit patches to only Windows 7 32-bit machines? Should I deploy Office 2010 Updates only to computers with Office 2010?
    It's certainly easier to deploy most things to everyone and let the update scan take care of it... but I'm wondering if I'm being too general?

    I haven't considered cleaning it up yet because the server has only been active for a few months, and I only connected the bulk of our domain computers to it a few weeks ago (550 PCs).
    I checked several PCs, some that were complaining and some not. I'm not familiar with what the standard size of that file should be, but they seemed to range from 50 MB to 130 MB. My own is 130 MB, but mine is 64-bit and the others are not. Not sure if that makes a difference.
    I briefly read over that website. I'm confused; it was my impression that WSUS is no longer used and only needs to be installed so SCCM can use some of its functions for its own purposes. I thought the PCs no longer even connected to it.
    I'm running the WSUS cleanup wizard now, but I'm not sure it'll clean anything because I've never approved a single update in it. I do everything in the Software Update Point in SCCM, and I've been removing expired and superseded updates fairly regularly.
    The wizard just finished: a few thousand updates deleted, disk space freed: 0 MB.
    I found a script here on TechNet that's supposed to clean out old updates:
    http://blogs.technet.com/b/configmgrteam/archive/2012/04/12/software-update-content-cleanup-in-system-center-2012-configuration-manager.aspx
    Haven't had the chance to run it yet.

  • Export and Deployment - Best Practices for RAR and CUP

    Hi Experts,
    I wanted to know what in your opinion is best practice for GRC deployment in a 3-system landscape.
    We have a development landscape which connects to all our environments: Dev-QA-Prod.
    Is it recommended to have just the production client connected to the production boxes only and use Dev/QA for the other environments, or is it a good idea to have Prod and QA in sync?
    In my opinion it looks like a good idea to have QA and PROD the same, as it would make export easier. Maybe I am wrong.
    What according to you all is a good recommended practice here?
    Thanks,
    Chinmaya

    Hi Chinmaya,
    that depends on how many clusters you have in your landscape.
    If it is something like 5 DEV boxes connecting to 5 QAS boxes, and so on,
    then best practice will be to have separate DEV - QAS - PRD boxes for GRC, if money (h/w) is no constraint for the organization.
    Rather than later asking SAP for deletion scripts for deleting sandbox or dev connectors,
    it is best to have separate boxes for each.
    Also, whenever you make rule changes in RAR and config changes in CUP in the future, it is best to test in QAS first, as CUP will become very critical for your organization post go-live.
    And the good part will be that the management report will reflect true data for PRD only.
    regards,
    Surpreet

  • Deployment best practice

    Looking to set up a deploy-to-live strategy for a relatively new team, the options are:
    Option 1) Build in the test environment using Ant tasks and move the war, jar etc. to live, with the live-specific properties in a separate file.
    Option 2) Make an Ant task that reads properties files and compiles a war, jar, ear specific to the environment. The only issue here is that the war file will be specific to the environment.
    What are the pros & cons of each?
    Any suggestions will be greatly appreciated.
    TIA,
    Raj

    Thanks for the links.
    The particular item we were concerned about was how to handle environment-specific values.
    Seems like this is the best way to go:
    1) Keep the application binaries/byte-code independent of the environment
    2) Factor the environment-specific values into a separate resource XML or properties file
    Based on that, the way to go would be to build the jars/wars etc. in the test environment and copy them to live, instead of rebuilding them in live.
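    As a rough illustration of points 1 and 2 (the class name, system property and file naming are all invented for the example), the same binary can select its configuration at runtime:
    import java.io.InputStream;
    import java.util.Properties;

    public class EnvConfig {
        // Loads config-test.properties or config-live.properties from the
        // classpath, selected via the -Dapp.env system property, so the same
        // war/jar runs unchanged in every environment.
        public static Properties load() throws Exception {
            String env = System.getProperty("app.env", "test");
            try (InputStream in = EnvConfig.class
                    .getResourceAsStream("/config-" + env + ".properties")) {
                if (in == null) {
                    throw new IllegalStateException("No config found for env: " + env);
                }
                Properties props = new Properties();
                props.load(in);
                return props;
            }
        }
    }
    Started with -Dapp.env=live in production, the deployed war stays byte-for-byte identical to what was tested.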
    Cheers,
    Raj

  • Deployment - best practices

    There are a lot of questions/answers about deployment strategies in this forum.
    I solved a lot of problems and got answers to my questions,
    but there are some remaining problems.
    My problem:
    I developed a project.
    There are several mappings, source/target tables, process flows and one main flow combined with a scheduler.
    What is the best method/way to deploy the whole project without the OWB client?
    With OMB scripts I can deploy the mappings and process flows...
    But how can I define new connections (for example the production connections),
    how can I deploy the target tables,
    how can I configure the new connection for the tables, mappings etc.,
    and how can I deploy the scheduling part and start it (without the OWB client software)?
    Thanks a lot.

    OMB encompasses pretty much every UI interaction, so you can also deploy tables, create new locations and connectors, and define schedules.
    You can create locations and connectors via OMB
    OMBCREATE LOCATION 'X' SET PROPERTIES (TYPE,VERSION,CONNECTION_TYPE) VALUES ('ORACLE_DATABASE','11.2','DATABASE_LINK')
    OMBCREATE LOCATION 'Y' SET PROPERTIES (TYPE,VERSION) VALUES ('ORACLE_DATABASE','11.2')
    # 1.a. To create a connector and let OWB generate a database link....
    OMBCREATE CONNECTOR 'Y/X' SET PROPERTIES (DATABASE_LINK_NAME) VALUES ('LINKNAMEHERE') SET REF LOCATION 'X'
    # 1.b. To create a connector that uses an existing database link of yours ....
    OMBCREATE CONNECTOR 'Y/X' SET PROPERTIES (DATABASE_LINK_NAME) VALUES ('LINKNAMEHERE') SET REF LOCATION 'X'
    # To add the location to a module and configure the module to use the location
    OMBALTER ORACLE_MODULE '<modulename>' ADD REFERENCE LOCATION '<locationname>' SET AS DEFAULT
    OMBALTER ORACLE_MODULE '<modulename>' SET PROPERTIES (DB_LOCATION) VALUES ('<locationname>')
    Schedules can be created as described in the forum thread "How to properly create a calendar using OMB*Plus to schedule a workflow?"; to then set the calendar on a flow, see for example the thread "Problems setting the calendar on a process flow using OMB".
    Cheers
    David

  • PJC deployment best practice

    Hi
    We have started using several PJCs to enhance our 10g webforms. The conversion has not gone live and I need to decide how to deploy the PJCs.
    The 2 options that I can think of:
    1. Have all PJCs packaged into one jar.
    Pro - no need to change formsweb.cfg every time a new pjc is used
    2. Have separate jar files for each pjc
    Pro - Easier to maintain each pjc e.g. if one changes just have to test that one rather than all as with above deployment
    - Reduce jar download quantity. Form will only download jar files required rather than one large jar file
    Has anybody been through a similar process?
    Any thoughts/comments are appreciated.
    thanks
    paul schweiger

    Hi,
    I think option 2 is preferable. However, I don't see any benefit in configuring all jar files with an application.
    One thing I haven't tried yet; maybe you want to try it for me and let me know if it works.
    Say you have common PJCs and application-specific PJCs. You may be able to define
    commonPjc = pjc1.jar, pjc2.jar, pjc3.jar
    Then in your application-specific archive definition you do
    [myApp]
    form=...
    <the archive tag> = frmall.jar,%commonPjc%
    It works for other configurations and may work for PJCs as well. This way you have a common configuration (similar to a big monolithic jar file) that you can use for each application definition. Additional, application-specific jar files can be added to the application definition.
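    A filled-in sketch of that (untested) idea, with invented application, form and jar names, and assuming your client uses the archive parameter as the archive tag:
    commonPjc=pjc1.jar,pjc2.jar,pjc3.jar

    [hrApp]
    form=hr_main.fmx
    archive=frmall.jar,%commonPjc%,pjc_hr_only.jar

    [finApp]
    form=fin_main.fmx
    archive=frmall.jar,%commonPjc%,pjc_fin_only.jar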
    Let me know if this works
    Frank

  • EM grid control: Deploy EMC plugins

    EMG 11
    Both plugins (Celerra & CLARiiON) are failing during deployment.
    Has anybody been able to install them?
    Thank you.

    Did you check:
    Oracle® Enterprise Manager System Monitoring Plug-in Installation Guide for EMC Celerra Server
    10g Release 2 (10.2.0.2)
    http://download.oracle.com/docs/cd/E11857_01/install.111/b28042/emcel.htm
    Oracle® Enterprise Manager System Monitoring Plug-in Installation Guide for EMC CLARiiON System
    Release 5 (1.0.3.0.0)
    http://download.oracle.com/docs/cd/E11857_01/install.111/e10505/E10505-01.htm
    Regards
    Rob

  • Query: Best practice SAN switch (network) access control rules?

    Dear SAN experts,
    Are there generic SAN (MDS) switch access control rules that should always be applied within the SAN environment?
    I have a specific interest in network-based access control rules/CLI-commands with respect to traffic flowing through the switch rather than switch management traffic (controls for traffic flowing to the switch).
    Presumably one would want to provide SAN switch demarcation between initiators and targets using VSAN, Zoning (and LUN Zoning for fine grained access control and defense in depth with storage device LUN masking), IP ACL, Read-Only Zone (or LUN).
    In a LAN environment controlled by a (gateway) firewall, there are (best practice) generic firewall access control rules that should be instantiated regardless of enterprise network IP range, TCP services, topology etc.
    For example, the blocking of malformed TCP flags or the blocking of inbound and outbound IP ranges outlined in RFC 3330 (and RFC 1918).
    These firewall access control rules can be deployed regardless of the IP range or TCP service traffic used within the enterprise. Of course there are firewall access control rules that should also be implemented as best practice that require specific IP addresses and ports that suit the network in which they are deployed. For example, rate limiting as a DoS preventative, may require knowledge of server IP and port number of the hosted service that is being DoS protected.
    So my question is, are there generic best practice SAN switch (network) access control rules that should also be instantiated?
    regards,
    Will.

    Hi William,
    That's a pretty wide net you're casting there, but I'll do my best to give you some insight into the matter.
    Speaking pure fibre channel, your only real way of controlling which nodes can access which other nodes is Zones.
    for zones there are a few best practices:
    * Default zone: don't use it, unless you're running FICON.
    * Single initiator zones: one host, many storage targets. Don't put 2 initiators in one zone or they'll try logging into each other, which at best will give you a performance hit and at worst will bring down your systems (see the sketch after this list).
    * Don't mix zoning types:  You can zone on wwn, on port, and Cisco NX-OS will give you a plethora of other options, like on device alias or LUN Zoning. Don't use different types of these in one zone.
    * Device alias zoning is definitely recommended with Enhanced Zoning and Enhanced DA enabled, since it will make replacing HBAs a heck of a lot less painful in your fabric.
    * LUN zoning is being deprecated, so avoid. You can achieve the same effect on any modern array by doing lun masking.
    * Read-Only exists, but again any modern array should be able to make a lun read-only.
    * QoS on Zoning: Isn't really an ACL method, more of a congestion control.
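    To make the single-initiator practice concrete, here is a hedged NX-OS sketch (the WWNs, zone/zoneset names and VSAN number are all hypothetical):
    ! One zone per host HBA: that initiator plus the targets it needs
    zone name z_esxhost1_array1 vsan 10
      ! host HBA (initiator)
      member pwwn 10:00:00:00:c9:aa:bb:01
      ! array front-end port (target)
      member pwwn 50:06:01:60:44:60:22:99
    zoneset name zs_fabric_a vsan 10
      member z_esxhost1_array1
    zoneset activate name zs_fabric_a vsan 10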
    VSANs are a way to separate your physical fabric into several logical fabrics.  There's one huge distinction here with VLANs, that is that as a rule of thumb, you should put things that you want to talk to each other in the same VSANs. There's no such concept as a broadcast domain the way it exists in Ethernet in FC, so VSANs don't serve as isolation for that. Routing on Fibre Channel (IVR or Inter-VSAN Routing) is possible, but quickly becomes a pain if you use it a lot/structurally. Keep IVR for exceptions, use VSANs for logical units of hosts and storage that belong to each other.  A good example would be to put each of 2 remote datacenters in their own VSAN, create a third VSAN for the ports on the array that provide replication between DC and use IVR to make management hosts have inband access to all arrays.
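    For example, carving out a logical fabric and placing a port in it looks roughly like this (the VSAN ID, name and interface are hypothetical):
    vsan database
      vsan 20 name DC1_PROD
      vsan 20 interface fc1/1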
    When using IVR, maintain a manual and minimal topology. IVR tends to become very complex very fast and auto topology isn't helping this.
    Traditional IP ACLs (permit this proto to that dest on such a port and deny other combinations) are very rare on management interfaces, since those are usually connected to already separated segments. The same goes for Fibre Channel over IP links (which connect to Ethernet interfaces in your storage switch).
    They are quite logical to use and work just the same on an MDS as on a traditional Ethernet switch when you want to use IP over FC (not to be confused with FC over IP). But then you'll logically be using your switch as an L2/L3 device.
    I'm personally not an IP guy, but here's a quite good guide to setting up IP services in a FC fabric:
    http://www.cisco.com/en/US/partner/docs/switches/datacenter/mds9000/sw/4_1/configuration/guides/cli_4_1/ipsvc.html
    To protect your san from devices that are 'slow-draining' and can cause congestion, I highly recommend enabling slow-drain policy monitors, as described in this document:
    http://www.cisco.com/en/US/partner/docs/switches/datacenter/mds9000/sw/5_0/configuration/guides/int/nxos/intf.html#wp1743661
    That's a very brief summary of the most important access-control-related Best Practices that come to mind.  If any of this isn't clear to you or you require more detail, let me know. HTH!
