OEM GC Best Practices

Greetings,
We are a shop with five DBAs, each responsible for different databases, and we are in the process of moving our regularly scheduled tasks (backups, monitoring, etc.) into OEM GC. It is beginning to appear that having an administrative account for each DBA may not be the best approach; instead, one administrative account from which all jobs are submitted may work better. A couple of initial issues illustrate this: one of our DBAs has just left and owns several jobs, and I want to drop that account after moving those jobs to another account. Tedious. I have also just realized that for jobs to take advantage of default credentials, those default credentials apparently must be set for the account that owns the particular jobs.
I am interested in how other shops handle this issue, and also in whether Oracle or someone else has published suggestions for best practices for setting up and administering OEM GC.
Thank you.

Hi,
"I am interested in how other shops handle this issue and further if Oracle or someone else has published suggestions for best practices for setting up and administering OEM GC."
You have already found the best practice, in your own words:
"but instead have one administrative account from which jobs are submitted."
Salman

Similar Messages

  • Best practices when using OEM with Siebel

    Hello,
    I support numerous Oracle databases and have also taken on the task of supporting Enterprise Manager (Grid Control). Currently we have installed the agent (10.2.0.3) on our Oracle database servers, so most of our targets are hosts, databases and listeners. Our company is also using Siebel 7.8, which is supported by the Siebel Ops team. They are looking into purchasing the Siebel plug-in for OEM. The question I have is: is there a general guide or best practice for managing the Siebel plug-in? I understand that there will be agents installed on each of the servers that have Siebel components, but what I have not seen documented is who is responsible for installing them. Does the DBA team need an account on the Siebel servers to do the install, or can the Siebel Ops team do the install and have permissions set on the agent so that it can communicate with Grid Control? Also, they will want access to Grid Control to see the performance of their components; how do we limit their access so they only see the Siebel targets, including what is available under the Siebel Services tab? Any help would be appreciated.
    Thanks.

    There is a Getting Started Guide, which explains the installation:
    http://download.oracle.com/docs/cd/B16240_01/doc/em.102/b32394/toc.htm
    -- I presume there are two teams in your organization: the DBA team, which is responsible for installing the agent and owns Grid Control, and the Siebel Ops team, which is responsible for monitoring the Siebel deployment.
    Following is my opinion based on the above assumption:
    -- The DBA team installs the agent as a monitoring user
    -- The Siebel Ops team gives that user execute permission on the server manager (srvrmgr.exe) utilities and read permission on all the files under the Siebel installation directory
    -- The DBA team provisions a new administrator for the Siebel Ops team and restricts the permissions for this user
    -- The Siebel Ops team configures the Siebel pack in Grid Control (discovery, configuration, etc.)
    -- With the above setup, the Siebel Ops team can view only the Siebel-specific targets.
    Thanks

  • Backup validation best practice - 11gR2 on Windows

    Hi all
    I am just reading through some guides on checking for various types of corruption in my database. It seems that having DB_BLOCK_CHECKSUM set to TYPICAL takes care of much of the physical corruption and will alert you if any has occurred. Furthermore, RMAN by default does its own physical block checking. Logical corruption, on the other hand, does not seem to be checked automatically unless CHECK LOGICAL is added to the RMAN command. There are also various VALIDATE commands that could be run on various objects.
    My question really is: what is best practice for checking for block corruption? Do people even bother checking this regularly, or do they just allow Oracle to manage itself? Or is it best practice to include the CHECK LOGICAL clause in RMAN (even though it's not added by default when configuring backup jobs through OEM), or do people schedule jobs and output reports from a VALIDATE command on a regular basis?
    Many thanks

    Using the CHECK LOGICAL clause is considered best practice, at least by Oracle Support, according to
    NOTE:388422.1  Top 10 Backup and Recovery best practices
    (referenced in http://blogs.oracle.com/db/entry/master_note_for_oracle_recovery_manager_rman_doc_id_11164841).
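    A minimal sketch of what that recommendation looks like in practice - the commands are generic, so adjust the backup clause to your own strategy:

      # read every block and record any corruption, without writing a backup
      RMAN> BACKUP VALIDATE CHECK LOGICAL DATABASE ARCHIVELOG ALL;
      # or add logical checking to the regular backup itself
      RMAN> BACKUP CHECK LOGICAL DATABASE PLUS ARCHIVELOG;

      -- anything found by either command is recorded here
      SQL> SELECT * FROM v$database_block_corruption;
      -- physical checksums, as mentioned in the question
      SQL> ALTER SYSTEM SET db_block_checksum = TYPICAL SCOPE=BOTH;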

  • Best Practice for monitoring database targets configured for Data Guard

    We are in the process of migrating our DB targets to 12c Cloud Control. 
    In our current 10g environment the Primary Targets are monitored and administered by OEM GC A, and the Standby Targets are monitored by OEM GC B.  Originally, I believe this was because of proximity and network speed, and over time it evolved to a Primary/Standby separation.  One of the greatest challenges in this configuration is keeping OEM jobs in sync on both sides (in case of switchover/failover).
    For our new OEM CC environment we are setting up CC A and CC B.  However, I would like to determine whether it would be smarter to monitor all DB targets (Primary and Standby) from the same CC console; in other words, monitor and administer DB Primary and Standby from the same OEM CC Console.  I am trying to determine the best practice.  I am not sure whether administering a switchover from Primary to Standby through Cloud Control requires that both targets be monitored in the same environment.
    I am interested in feedback.   I am also interested in finding good reference materials (I have been looking at Oracle documentation and other documents online).   Thanks for your input and thoughts.  I am deliberately trying to keep this as concise as possible.

    What I meant by the comment is that OMS is a tool; there is no technical requirement to split the monitoring of your primary and standby across two of them.
    The reason you want the same OMS to monitor both the primary and the standby is that the Data Guard administration screen will show both targets. You will also have the option of doing switchovers and failovers as well as converting the primary or standby. One of the options is also to move all the jobs that are scheduled against the primary over to the standby during a switchover or failover.
    There is no document that states that you need to have all targets on one OMS, but that is the best method given the reason for having OMS in the first place: it is a tool for keeping all targets in a central repository. If you start having different OMS servers and OMS repositories, you will need to log into each OMS separately to administer its targets.
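    For reference, a switchover like the one described above can also be issued directly from the Data Guard broker command line; a minimal sketch, with placeholder database names:

      DGMGRL> CONNECT sys@primary_db
      DGMGRL> SHOW CONFIGURATION;
      DGMGRL> SWITCHOVER TO standby_db;

    Whichever console issues the switchover, the jobs scheduled against the old primary still have to follow the role change, which is exactly the synchronization problem described in the question.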

  • Best practice for installing many SL-500

    Hi,
    I have just bought 25 SL-500s and am to buy a hundred more in half a year.
    They are all model 2746-4DG.
    I wish to make a custom installation with all the standard software of my office and transfer this to a DVD as a Ghost image.
    Normally (in my old job) I would do this by installing everything I need on one PC and doing a sysprep to make a factory image. After this I would make a Ghost image of the disk and transfer it to a DVD.
    But in my old job we did not buy OEM Windows XP - we had a VOL agreement.
    I would really like to install all PCs with only one Windows key and skip activation, but can this be done if I buy one VOL license and media for Windows XP and use that key on all PCs?
    And is it legal for me to do so?
    Kind regards
    Hchhimself
    Denmark

    There is a best practices document available from the Oracle RAC SIG site, and the "Oracle Real Application Clusters Administration and Deployment Guide" available on OTN is also a good source of information about 10g RAC.
    Oracle has made sincere efforts in the 10g documentation, especially in server technology.
    Thanks & Regards

  • Naming convention best practice for pluggable databases (PDBs)

    In OEM, the auto discovery for a PDB produces a name using the cluster as the prefix and the database name as the suffix, such as:
    odexad_d_alpcolddb_alpcolddb-scan_PDBODEXAD
    If that PDB is moved to another cluster, I imagine that name will not change, but the naming convention will then have been violated.
    Am I wrong, and does anyone have suggestions for a best practice for naming PDBs?

    If the PDB moves to another cluster, OEM would auto-discover it in the new cluster.  So it would "assign" it a new name. 
    As a separate question, would you be renaming the PDB (the physical name) when you move it to another cluster?
    Hemant K Chitale
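    If the physical rename Hemant asks about is done as part of the move, a minimal sketch of the steps (12c syntax; the PDB name below is just the one from the example, and the new name is whatever your convention dictates):

      SQL> ALTER PLUGGABLE DATABASE pdbodexad CLOSE IMMEDIATE;
      SQL> ALTER PLUGGABLE DATABASE pdbodexad OPEN RESTRICTED;
      SQL> ALTER SESSION SET CONTAINER = pdbodexad;
      SQL> ALTER PLUGGABLE DATABASE RENAME GLOBAL NAME TO pdbodexad_new;
      SQL> ALTER PLUGGABLE DATABASE CLOSE IMMEDIATE;
      SQL> ALTER PLUGGABLE DATABASE OPEN;

    OEM would then rediscover the target under the new cluster and the new name, so the convention only survives the move if the rename travels with it.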

  • ECC 6 Best Practice

    Dear all,
    I hope someone can help me as I am a bit confused. We are implementing ECC 6 Automotive OEM Best Practice.
    The customer has a number of SD requirements which I do not seem to find anywhere in the pre-configured scenarios.
    Furthermore, I am finding it extremely difficult to find documentation, such as data conversion sheets - are we really supposed to go through each one individually to check whether it applies to our scenarios or not?
    My other confusion/concern is: if we are not implementing PP, how can we have processes such as MRP, ATP, Scheduling Agreements, Cross- or Inter-Company, and Backorders? Is it possible to implement these scenarios without PP?
    I know these are very generic questions but if someone could shed some light on the right approach.
    Yes I did go through the entire Best Practice documentation so please don't tell me to read it.
    Thanks in advance

    Dear Rakesh,
    I am implementing Auto Solution - OEM. ECC 6 Germany version.
    My confusion comes when I have a look at some of the scenarios as they all ask for PP and we are not implementing it.
    One of these scenarios is A62, Sales Order with cross-company process. According to this process we have a delivery/sales plant and a production plant. The creation of the customer SO triggers the entire scheduling agreement process between the 2 plants. The sales plant runs MRP, creating the demand; a scheduling agreement creates the demand on the production plant.
    The production plant then does whatever it needs to do - MRP run, backflush, etc. - and "despatches" the stock to the delivery plant. The delivery plant receives the stock and an intercompany billing process occurs.
    Only after this will the stock be available to be picked for the customer.
    Now my issue is if there is no production plant how will we amend this scenario?
    Thanks for your comments

  • Question about Best Practices - Redwood Landscape/Object Naming Conventions

    Having reviewed documentation and posts, I find that there is not much information available regarding best practices for the Redwood Scheduler in an SAP environment. We are running the free version.
    1) The job scheduling for SAP reference book (SAP Press) recommends multiple Redwood installations and using export/import to move jobs and other Redwood objects from, say, DEV->QAS->PROD. Presentations from the help.sap.com web site show the Redwood Scheduler linked to Solution Manager and handling job submissions for DEV-QAS-PROD. Point and Shoot (just be careful where you aim!) functionality is described as an advantage of the product. There is an SAP note (#895253) on making Redwood highly available. I am open to comments, inputs and suggestions on this issue based on SAP client experiences.
    2) Related to 1), I have not seen much documentation on Redwood object naming conventions. I am interested in hearing how SAP clients have dealt with Redwood object naming (i.e. applications, job streams, scripts, events, locks). To date, I have seen in a presentation where customer objects are named starting with Z_. I like to include the object type in the name (e.g. EVT - Event, CHN - Job Chain, SCR - Script, LCK - Lock) keeping in mind the character length limitation of 30 characters. I also have an associated issue with Event naming given that we have 4 environments (DEV, QA, Staging, PROD). Assuming that we are not about to have one installation per environment, then we need to include the environment in the event name. The downside here is that we lose transportability for the job stream. We need to modify the job chain to wait for a different event name when running in a different environment. Comments?

    Hi Paul,
    As suggested in the book 'Job Scheduling for SAP' from SAP Press, it is better to have multiple instances of Cronacle (at least 2 - one for development & quality and a separate one for production; this avoids confusion).
    Regarding transporting/replicating the object definitions - it is really easy to import and export objects like events, job chains, scripts, locks etc. It is also easy and not very time consuming to create them afresh in each system; only the creation of complicated job chains can be time consuming.
    In normal cases the testing of background jobs mostly happens only in the SAP quality instance, with the final scheduling in production. So it is very much possible to just export the verified script/job chain from the Cronacle quality instance and import it into the Cronacle production instance (use of the Cronacle shell is really recommended for fast processing).
    Regarding OSS note 895253 - yes, it is highly recommended to keep your central repository, processing server and licensing information on a highly available clustered environment. This is very much required, as Redwood Cronacle acts as the central job scheduler in your SAP landscape (with the OEM version).
    As you have confirmed, you are using OEM and hence you have only one process server.
    Regarding the conventions for names, it is recommended to create a centrally accessible naming convention document and then follow it. For example in my company we are using the naming convention for the jobs as Z_AAU_MM_ZCHGSTA2_AU01_LSV where A is for APAC region, AU is for Australia (country), MM is for Materials management and then ZCHGSTA2_AU01_LSV is the free text as provided by batch job requester.
    For other Redwood Cronacle specific objects also you can derive naming conventions based on SAP instances like if you want all the related scripts / job chains to be stored in one application, its name can be APPL_<logical name of the instance>.
    So, in a nutshell, a centrally accessible naming convention document, agreed up front and then followed, is highly recommended.
    Also, the integration of SAP Solution Manager with Redwood is there to receive monitoring and alerting data and to pass Redwood Cronacle information to SAP Solution Manager, creating a single point of control. You can find information on the purpose of the XAL and XMW interfaces in the Cronacle help (F1).
    Hope this answers your queries. Please write if you need some more information / help in this regard.
    Best regards,
    Vithal

  • Best Practices question re Windows XP & Parallels 4.0 installation

    To Apple Gurus here:
    I am a new convert from Windows to Mac. I just bought a MacBook Pro (4 GB/320 GB, 2.4 GHz) and a copy of Parallels 4.0. I have an OEM copy of Windows XP Pro & Photoshop CS4 for Windows. The question before me is in what sequence I should go about installing Windows & Parallels. Logically, I think I should install:
    1) Windows XP using Boot Camp first,
    2) then install PhotoShop CS3 for Windows in the Windows partition
    3) then install MS Office
    4) and finally install Parallels 4.0.
    Is this the right sequence or indeed a "Best Practices" scenario?
    Any tips for a 'Best Practice' installation will be highly appreciated.
    Also, is anyone here using the SAP GUI for Mac OS-X & Citrix Presentation Server Client for Mac OS 10.0 (now renamed XenApp)?

    First, my creds. I don't consider myself an Apple guru. I have been running a MB since last December and at that time, I installed Parallels 3.0. If I remember correctly, after installing Parallels, I installed Windows Vista, and then Office and while I was impressed to be able to run MS Office on a MB, it took what I considered to be TOO long to load and then the performance was not that great. So, mostly I've stayed on the Mac side of the operation and only loaded Parallels if I had to run some MS program.
    About a week ago I got an offer from Parallels to buy 4.0 at an upgrade price of $40. I went with the box version since it was the same price as the download version. Tonight I got my courage up to do the upgrade. I was leery because I thought I might have to reinstall all my MS stuff (Office Pro, etc.). When I put the disk in to install the program, I received a message saying there was a later edition available, with the option to download it or install the box edition. After a few minutes of thought, I decided to do the download version. I would still recommend getting the box version since you get a manual with it, although the download version comes with a PDF manual.
    When I finished, I then clicked on upgrade/install and the installation proceeded without much input from me. Lo and behold, the installation finished and it booted up to my previous Vista installation with all my programs intact.
    So far, I must say I'm VERY impressed with this Parallels upgrade. It seems to load much faster, the programs are more responsive, Vista so far seems very stable, and the ability to switch back and forth between Windows and OS X is much better. From what I've seen so far, I would highly recommend that anyone using Parallels 3.0 get this upgrade. While I've only been using it a few hours, it seems like the best upgrade for ANY program/system (Windows 95 --> Vista) that I've ever done.
    A few months ago I saw a piece on an upgraded version of Fusion which stated that it moved Fusion ahead of Parallels. If that were so, I think the ball must be back in Parallels' court with 4.0.

  • Best practices for deploying EM Grid Control

    Can I use one DB for the OEM & RMAN repository? I am looking for best practices for deploying EM Grid Control in our environment. I have experience working with EM Grid Control and it was very slow; how do we make it fast? I enjoy the speed of EM DB Control...

    DBA2008 wrote:
    Is it a good idea to put the RMAN recovery catalog & OID schema in the OEM repository DB? I am thinking of just consolidating all these schemas in one DB.
    Unless you are really starved for resources, I would not recommend storing the OID and OEM repositories in the same database. Both of these repositories support different products, and you risk creating unnecessary dependencies when patching or upgrading. As a completely fictitious example, what if your OID installation has a critical issue that requires a repository database upgrade to version 10.2.0.6, and the Grid Control repository database is only certified for version 10.2.0.5?
    Regards,
    John P.
    http://only4left.jpiwowar.com
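    On the RMAN part of the original question: the recovery catalog is small and simple to keep in its own schema (or its own small database), so there is little to gain from co-locating it with the Grid Control repository. A minimal sketch with placeholder names and passwords, assuming a tablespace for the catalog already exists:

      SQL> CREATE USER rco IDENTIFIED BY rco_pwd
             DEFAULT TABLESPACE rman_cat QUOTA UNLIMITED ON rman_cat;
      SQL> GRANT recovery_catalog_owner TO rco;

      $ rman CATALOG rco/rco_pwd@catdb TARGET /
      RMAN> CREATE CATALOG;
      RMAN> REGISTER DATABASE;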

  • Information for All: Updated Bare Metal provisioning best practices paper

    Hi All,
    The Bare Metal provisioning best practices paper has been updated and is now available on OTN at the following location:
    http://www.oracle.com/technology/products/oem/pdf/bmp_best_practice.pdf
    Thanks,
    Rajat

    Very nice, thank you.

  • DW best practices when needing to update data

    Hi all,
    I have a few general questions about data warehousing...
    We often need to update/process the data after we have imported it into the DW (with ETL tools). Since data in a DW is not supposed to be updated, I wonder what the correct way to do this is.
    The scenario is data coming from a few systems; we import it and transform it. Then, when we want to run reports etc., we sometimes need to apply some changes to the data, for example adding a column with some result. Is the correct way to do that to add tables in the DW? Other systems use separate tables (inside or outside the DW) to transform the data... When is it worth creating separate tables outside the DW, and when should we create/add columns in the DW? What is allowed/best practice in a DW?
    Thanks,
    A.

    It is a view. That is what the error message is saying.
    Why not deal with facts instead of speculating? Look at the Oracle Data Dictionary and see what the object DPIT.DEDUCTIONS is.
    Use TOAD. Use SQL*Plus and select on ALL_OBJECTS. Use OEM. Etc.
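    Back to the original question about adding "a column with some result": one common pattern is to leave the loaded tables untouched and expose derived values through a view, or a separately refreshed derived table. A minimal sketch with hypothetical table and column names:

      -- hypothetical names; sales_fact is the table loaded by the ETL
      CREATE OR REPLACE VIEW sales_fact_rpt AS
      SELECT f.*,
             f.amount_sold - f.amount_cost AS margin  -- derived column for reporting
      FROM   sales_fact f;

      -- or materialize it and refresh it after each load
      CREATE TABLE sales_fact_derived AS
      SELECT f.*, f.amount_sold - f.amount_cost AS margin
      FROM   sales_fact f;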

  • Logical level in Fact tables - best practice

    Hi all,
    I am currently working on a complex OBIEE project/solution where I am going straight to the production tables, so the fact (and dimension) tables are pretty complex since I am using multiple sources in the logical tables to increase performance. Anyway, what I often struggle with is the Logical Levels (on the Content tab) where the level of each dimension is to be set. In a star schema (one-to-many) this is pretty straightforward and easy to set up, but when the Business Model (and physical model) gets more complex I sometimes struggle with the aggregates - getting them to work/appear with different dimensions. (Using the menu "More" - "Get levels" does not always give the best solution... far from it.) I have some combinations of left and right outer joins as well, making it even more complicated for the BI server.
    For instance - I have about 10-12 different dimensions - should all of them always be connected to each fact table? Either on Detail or Total level. I can see the use of the logical levels when using aggregate fact tables (on quarter, month etc.), but is it better just to skip the logical level setup when no aggregate tables are used? Sometimes it seems like that is the easiest approach...
    Does anyone have a best practice concerning this issue? I have googled for this but I haven't found anything good yet. Any ideas/articles are highly appreciated.

    Hi User,
    "For instance - I have about 10-12 different dimensions - should all of them always be connected to each fact table? Either on Detail or Total level."
    It is not necessary to connect to all dimensions; it depends on the report that you are creating. But as a best practice we should maintain them all at the Detail level when you specify join conditions in the physical layer.
    For example, for the sales table, if you want to report at the ProductDimension.Productname level then you should use the Detail level; otherwise use the Total level (at the Product, Employee level).
    Get Levels. (Available only for fact tables) Changes aggregation content. If joins do not exist between fact table sources and dimension table sources (for example, if the same physical table is in both sources), the aggregation content determined by the administration tool will not include the aggregation content of this dimension.
    Source: Admin Guide (Get Levels definition)
    thanks,
    Saichand.v

  • Best practices for setting up users on a small office network?

    Hello,
    I am setting up a small office and am wondering what the best practices/steps are to set up/manage the admin, user logins and sharing privileges for the setup below:
    Users: 5 users on new iMacs (x3) and upgraded G4s (x2)
    Video Editing Suite: Want to connect a new iMac and a Mac Pro, on an open login (multiple users)
    All machines are to be able to connect to the network, peripherals and an external hard drive. Also, I would like to set up drop boxes to easily share files between the computers (I was thinking of using the external hard drive for this).
    Thank you,

    Hi,
    Thanks for your posting.
    When you install AD DS in the hub or staging site, disconnect the installed domain controller, and then ship the computer to the remote site, you are disconnecting a viable domain controller from the replication topology.
    For more and detail information, please refer to:
    Best Practices for Adding Domain Controllers in Remote Sites
    http://technet.microsoft.com/en-us/library/cc794962(v=ws.10).aspx
    Regards.
    Vivian Wang

  • Add fields in transformations in BI 7 (best practice)?

    Hi Experts,
    I have a question regarding transformation of data in BI 7.0.
    Task:
    Add new fields in a second level DSO, based on some manipulation of first level DSO data. In 3.5 we would have used a start routine to manipulate and append the new fields to the structure.
    Possible solutions:
    1) Add the new fields to first level DSO as well (empty)
    - Pro: Simple, easy to understand
    - Con: Disc space consuming, performance degrading when writing to first level DSO
    2) Use routines in the field mapping
    - Pro: Simple
    - Con: Hard to performance optimize (we could of course fill an internal table in the start routine and then read from this to get some performance optimization, but the solution would be more complex).
    3) Update the fields in the End routine
    - Pro: Simple, easy to understand, can be performance optimized
    - Con: We need to ensure that the data we need also exists (i.e. if we have one field in DSO 1 that we only use to calculate a field in DSO 2, this would also have to be mapped to DSO 2 in order to exist in the routine).
    Does anybody know what the best practice is? Or do you have any experience regarding what you see as the best solution?
    Thank you in advance,
    Mikael

    Hi Mikael.
    I like the 3rd option and have used it many, many times. In answer to your question:
    Update the fields in the End routine
    - Pro: Simple, easy to understand, can be performance optimized - Yes, I have read and tested this and it does work faster. An OSS consulting note is out there indicating the speed of the end routine.
    - Con: We need to ensure that the data we need also exists (i.e. if we have one field in DSO 1 that we only use to calculate a field in DSO 2, this would also have to be mapped to DSO 2 in order to exist in the routine). - Yes, but by using the result package, the manipulation can be done easily.
    Hope it helps.
    Thanks,
    Pom
