OEM for TimesTen

We are not able to complete the deployment of the new plug-in.
When trying to finish the deployment we get errors like:
"Undeployment operation failed on the agent with an error status"
Has anyone completed the deployment successfully?

Hi,

Are you trying to deploy or un-deploy the TimesTen EM Plug-In?
- You need to deploy the EM Plug-In and configure it before you can use it.
- Un-deployment is optional (newer versions of the EM Plug-In will co-exist with version 1.1, and the Plug-In is only about 15 KB in size).

Are you familiar with the TimesTen EM Plug-In web site (http://www.oracle.com/technology/products/timesten/timesten_oem_plugin.html)? It has viewlets that show you how to deploy, un-deploy and use the EM Plug-In (http://www.oracle.com/technology/products/timesten/viewlets/tt703_emplugin_using_viewlet_swf.html).

The deployment and un-deployment steps are also specified in the Install Guide:
http://www.oracle.com/technology/products/timesten/pdf/doc/empluginstall.pdf

Can you provide some information to help diagnose your issue, such as:
- The version and platform of your EM Repository (e.g. 10.2.0.1 on Red Hat Linux 4.4, x86)
- The version and platform of your EM Agent (e.g. 10.2.0.2 on Windows XP SP2, x86)
- Whether your EM Agent is correctly deployed, configured and communicating with the EM Repository (this is a prerequisite for the EM Plug-In)
- Any EM log files related to what happened

Regards
Doug Hood
TimesTen Product Manager

Similar Messages

  • GUI support for TimesTen

    What GUI support is available for TimesTen? Also, recent releases have removed support for the Cache Administrator (web-based user interface). So what options are available now?
    Thanks..

    As far as Oracle GUI products go, you have the options of using the TimesTen Plug-in for OEM
    http://www.oracle.com/technology/products/timesten/timesten_oem_plugin.html
    as well as the latest version of SQL Developer
    http://www.oracle.com/technology/products/timesten/timesten_sqldeveloper.html
    which supports working with Cache Groups. I believe the addition of this functionality to SQL Developer was the reason for making the Cache Administrator redundant.

  • OEM for oracle database 10.2.0.3 .where to install

    Hi, I have Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 installed on my PC and am looking to install Oracle Enterprise Manager, but only the agent is available on the Oracle website. Please let me know how to install OEM for 10g.

    Hi,
    You can refer to the MetaLink doc below for the complete steps to configure EM DB Control for your standalone DB instance:
    How To Drop, Create And Recreate the Database Control (DB Control) Release 10g and 11g (Doc ID 278100.1)
    Regards,
    Rahul

  • CPU and RAM for TimesTen

    I have test performance statistics for our java application (using oracle db) on a 4 X Quad-Core AMD Opteron™ 8360 SE cpu node with 64 GB RAM.
    I am trying to collect performance statistics for TimesTen on a 2 X Dual Core Intel Xeon 5160 cpu node with 8 GB RAM.
    My question is how TimesTen treats processors. Is a dual core considered 2 CPUs or a single CPU?
    So am I comparing performance - (2 X 2 = 4) TT vs (4 X 4 = 16) Oracle DB?
    Any suggestions on how we should compare the performance statistics obtained on the two machines?

    Please see my answers embedded below:
    1. Under what conditions does TimesTen use multiple processors ?
    CJ>> There are many background daemon components to TimesTen that are multi-threaded and may use multiple processors to some extent as required (main daemon, sub-daemons, replication agent, cache agent).
    The main daemon is a very lightweight process that is purely supervisory in nature. It is not involved in database transaction processing etc. and so its CPU usage is very low.
    Each active datastore (database) has a dedicated managing sub-daemon. Again, this is not directly involved in transaction processing but it too has several threads. The checkpointer thread may use a lot of CPU while a checkpoint is occurring. The log flusher thread will use CPU in proportion to the intensity of write operations (primarily INSERT/UPDATE/DELETE) in the application workload (this thread is responsible for flushing the transaction log buffer to the log files on disk).
    If replication, or AWT caching, is used the replication agent transmitter and/or receiver threads may use significant CPU depending on the replicated workload.
    If Cache Connect is being used then the cache agent threads may use significant CPU when e.g. an AUTOREFRESH is in progress.
    These are all background activities and you do not have direct control over how many CPUs or how much CPU time is used by them. They will try to use what they need and the O/S will allocate them time based on available system resources.
    Equally significant is the CPU power used to process application transactions. Each application process/thread that is executing via a separate connection will potentially be executing concurrently within TimesTen; hence if you have 20 application threads (or processes), each with its own connection to a TimesTen datastore, then TimesTen could potentially use up to 20 CPUs/cores concurrently. This is the key factor in your ability to control how many CPUs/cores TimesTen uses. The crucial things to understand here are:
    1. In direct connection mode there is no dedicated TimesTen server process. All DBMS logic is encapsulated in the TimesTen library (libtten.so). All database query and transaction processing is actually executed in the context of the application thread that makes the database call. Hence, as I mentioned, if there are 'n' concurrent application processes/threads, each with a separate database connection, then TimesTen can potentially use 'n' CPUs/cores concurrently. Essentially, # concurrent connections = max concurrent CPUs/cores.
    2. In client/server mode there is a dedicated server process or thread (depending on configuration) for each application connection and so again # concurrent connections = maximum number of concurrent CPUs/cores that will be used.
    2. What techniques do I use to ensure maximum performance on this -
    a. Machine with 2 cpu (each is a dual core processor) and 8 GB RAM.
    CJ>> The number of physical CPUs (chips) is irrelevant. What matters is the number of cores. In this case 4. So, the system can concurrently execute up to 4 tasks. Anything more than 4 and tasks may have to wait for an available CPU in order to execute. Since there are no blocking operations within TimesTen database processing, from a TimesTen perspective this machine can execute 4 application threads performing database access at maximum speed. Of course in reality CPU time is needed for the application, O/S, TT background processes etc., so one would aim for fewer than 4 concurrent database processes. This assumes of course that there is no blocking at the application level either and that these processes or threads can therefore spend 100% of the available time executing. If the application blocks for any reason (e.g. waiting for the next 'request' from somewhere) then this introduces idle time and so one can increase the number of concurrent application processes/threads to use up this idle time, thereby increasing overall throughput.
    b. Application using direct connection mode with weblogic Application server
    CJ>> See my comments above. Generally one would use a connection pool and configure the number of connections to optimise performance (some experiments will be needed to arrive at the optimal value since it is very much dependent on the application workload and processing model); a connection-pool sketch follows these answers.
    c. All data is cached in TimesTen (using Cache Groups) for read intensive operations (There are few write operations as well but TT is mainly for the reads). Should the connections be 4 or 3 in this case ?
    CJ>> If the volume of cache refreshes is low then you don't need to 'reserve' much CPU for checkpointing and logging, so you have all 4 cores available for application+database processing (plus O/S etc.). If the application is mostly using the database and not waiting for stuff outside the database then the optimal number of connections is probably in the range of 3-6. If the application does a lot of non-database work, which may include waiting for things, then the optimal number of connections will be much higher. Again, you need to experiment to find out what is optimal.
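    To make the direct-connection pooling advice concrete, here is a minimal JDBC sketch of a fixed-size pool of direct-mode connections. The DSN name "mydsn" and the pool size are illustrative assumptions, not values from this thread; per the explanation above, the pool size effectively bounds how many CPUs/cores TimesTen can use concurrently.

        // Minimal sketch: a fixed-size pool of direct-mode TimesTen connections.
        // "mydsn" is an illustrative DSN name, not taken from the original post.
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.BlockingQueue;

        public class DirectPool {
            private final BlockingQueue<Connection> pool;

            public DirectPool(int size) throws Exception {
                Class.forName("com.timesten.jdbc.TimesTenDriver"); // register the driver
                pool = new ArrayBlockingQueue<>(size);
                for (int i = 0; i < size; i++) {
                    // Direct mode: DBMS logic runs inside the application process,
                    // so each busy connection can occupy roughly one core.
                    pool.add(DriverManager.getConnection("jdbc:timesten:direct:dsn=mydsn"));
                }
            }

            public Connection take() throws InterruptedException { return pool.take(); }
            public void give(Connection c) { pool.offer(c); }
        }

    On the 4-core machine discussed above, a pool size of around 3-6 would be a reasonable starting point for tuning.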
    Chris

  • OEM for sun solaris

    Hello,
    I'm working on Database Studio (Oracle Enterprise Manager) in a Windows environment, but I'm wondering if there is a GUI OEM for Sun Solaris like the one in Windows? And if there is one, what's the name of the file, or how can I install it?
    Thanks.

    EM is available for Sun Solaris too. If you are using Oracle Server v8.1.6 and earlier, EM has to be installed separately in a different Oracle Home. However, with effect from v8.1.7, installation of the Oracle Server also installs the base EM.
    Some commands for starting on Solaris.
    1. oemapp console for starting the console
    2. oemapp dbastudio for starting the dbastudio
    3. oemctrl start oms for starting the Management Server
    4. lsnrctl start for the listener
    5. lsnrctl dbsnmp_start for the Agent

  • Can i use Microsoft Windows 7 Home Premium SP1 64-bit - OEM for bootcamp?

    Someone suggested to use
    Microsoft Windows 7 Home Premium SP1 64-bit - OEM for bootcamp.
    Can I use Microsoft Windows 7 Home Premium SP1 64-bit - OEM in Boot Camp to install Windows 7 on my MacBook Pro 13 inch?
    thank you.
    PS: i bought my macbook pro 3 weeks ago.
    Also, is there any other windows 7 installation CD that is cheaper?
    thank you.

    Yes.
    OEM just means that the Windows OS is dedicated to that computer only. If you sell, replace or throw away the computer you use it on, the Windows CD/DVD goes along with it. (According to the Licensing dialog)
    Just do a Yahoo/Google search for prices is the only way to find it cheaper. Or keep an eye on your favorite web store for sales.
    If you are in college and have an .edu e-mail account thru the college then Microsoft has it for about $30 US. And other deals also.

  • Guidelines for Health Monitoring for TimesTen

    This document provides some guidance on monitoring the health of a TimesTen
    datastore. Information is provided on monitoring the health of the
    datastore itself, and on monitoring the health of replication.
    There are two basic mechanisms for monitoring TimesTen:
    1. Reactive - monitor for alerts either via SNMP traps (preferred) or
    by scanning the Timesten daemon log (very difficult) and reacting
    to problems as they occur.
    2. Proactive - probe TimesTen periodically and react if problems, or
    potential problems, are detected.
    This document focuses on the second (proactive) approach.
    First, some basic recommendations and guidelines relating to monitoring
    TimesTen:
    1. Monitoring should be implemented as a separate process which maintains
    a persistent connection to TimesTen. Monitoring schemes (typically based
    on scripts) that open a connection each time they check TimesTen impose
    an unnecessary and undesirable load on the system and are discouraged.
    2. Many aspects of monitoring are 'stateful'. They require periodic
    sampling of some metric maintained by TimesTen and comparing its
    value with the previous sample. This is another reason why a separate
    process with a persistent connection is desirable.
    3. A good monitoring implementation will be configurable, since the values
    used for some of the checks may depend on e.g. the TimesTen configuration
    in use or the workload being handled.
    MONITORING THE HEALTH OF A DATASTORE
    ====================================
    At the simplest level, this can be achieved by performing a simple SELECT
    against one of the system tables. The recommended table to use is the
    SYS.MONITOR table. If this SELECT returns within a short time then the
    datastore can be considered basically healthy.
    If the SELECT does not return within a short time then the datastore is
    stuck in a low level hang situation (incredibly unlikely and very serious).
    More likely, the SELECT may return an error such as 994 or 846, indicating
    that the datastore has crashed (again very unlikely, but possible).
    A slightly more sophisticated version would also include an update to a
    row in a dummy table. This would ensure that the datastore is also capable
    of performing updates. This is important since if the filesystem holding
    the transaction logs becomes full, the datastore may start to refuse write
    operations while still allowing reads. A minimal probe along these lines is
    sketched below.
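    As an illustration of the probe just described, here is a minimal sketch in Java (one of the languages suggested for such a monitor later in this thread). Connection setup is omitted, and the single-row table MON_DUMMY(ID, VAL) is a hypothetical table created solely for this check.

        // Minimal health probe: a quick read from SYS.MONITOR plus a write to a
        // dummy application table. MON_DUMMY is hypothetical, not from the text.
        import java.sql.Connection;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.sql.Statement;

        public class HealthProbe {
            public static boolean probe(Connection con) {
                try (Statement st = con.createStatement()) {
                    con.setAutoCommit(false);
                    // Read check: should return almost instantly on a healthy store.
                    try (ResultSet rs = st.executeQuery("SELECT FIRST 1 * FROM SYS.MONITOR")) {
                        if (!rs.next()) return false;
                    }
                    // Write check: fails if the datastore refuses writes, e.g.
                    // because the transaction log filesystem is full.
                    st.executeUpdate("UPDATE MON_DUMMY SET VAL = VAL + 1 WHERE ID = 1");
                    con.commit();
                    return true;
                } catch (SQLException e) {
                    // Errors such as 994 or 846 indicate the datastore has crashed.
                    System.err.println("Health check failed: " + e.getMessage());
                    return false;
                }
            }
        }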
    Now, the SYS.MONITOR table contains many useful operational metrics. A more
    sophisticated monitoring scheme could sample some of these metrics and
    compute the delta between subsequent samples, raising an alert if the
    delta exceeds some (configurable) threshold.
    Some examples of metrics that could be handled in this way are:
    PERM_IN_USE_SIZE and PERM_IN_USE_HIGH_WATER compared to PERM_ALLOCATED_SIZE
    (to detect if datastore is in danger of becoming full).
    TEMP_IN_USE_SIZE and TEMP_IN_USE_HIGH_WATER compared to TEMP_ALLOCATED_SIZE
    (ditto for temp area).
    XACT_ROLLBACKS - excessive rollbacks are a sign of excessive database
    contention or application logic problems.
    DEADLOCKS - as for XACT_ROLLBACKS.
    LOCK_TIMEOUTS - excessive lock timeouts usually indicate high levels of
    contention and/or application logic problems.
    CMD_PREPARES & CMD_REPREPARES - it is very important for performance that
    applications use parameterised SQL statements that they prepare just once
    and then execute many times. If these metrics are continuously increasing
    then this points to bad application programming which will be hurting
    performance.
    CMD_TEMP_INDEXES - if this value is increasing then the optimiser is
    continually creating temporary indices to process certain queries. This
    is usually a serious performance problem and indicates a missing index.
    LOG_BUFFER_WAITS - if this value is increasing over time this indicates
    inadequate logging capacity. You may need to increase the size of the
    datastore log buffer (LogBuffSize) and log file size (LogFileSize). If that
    does not alleviate the problem you may need to change your disk layout or
    even obtain a higher performance storage subsystem.
    LOG_FS_READS - this indicates an inefficiency in 'log snoop' processing as
    performed by replication and the XLA/JMS API. To alleviate this you should
    try increasing LogBuffSize and LogFileSize.
    Checking these metrics is of course optional and not necessary for a basic
    healthy/failed decision, but if you do check them then you will detect more
    subtle problems in advance and be able to take remedial action. A sketch of
    such a delta-based sampler follows.
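    Here is that delta-based sampler as a sketch. The metric names are the ones listed above; the single shared threshold is purely illustrative (a real monitor would make the threshold configurable per metric).

        // Delta-based sampler for SYS.MONITOR counters. Call periodically
        // (e.g. every 30 seconds) on a persistent connection.
        import java.sql.Connection;
        import java.sql.ResultSet;
        import java.sql.Statement;
        import java.util.HashMap;
        import java.util.Map;

        public class MonitorSampler {
            private static final String[] METRICS = {
                "XACT_ROLLBACKS", "DEADLOCKS", "LOCK_TIMEOUTS", "CMD_PREPARES",
                "CMD_REPREPARES", "CMD_TEMP_INDEXES", "LOG_BUFFER_WAITS", "LOG_FS_READS"
            };
            private final Map<String, Long> lastSample = new HashMap<>();

            public void sample(Connection con, long threshold) throws Exception {
                String sql = "SELECT " + String.join(", ", METRICS) + " FROM SYS.MONITOR";
                try (Statement st = con.createStatement();
                     ResultSet rs = st.executeQuery(sql)) {
                    if (!rs.next()) return;
                    for (String m : METRICS) {
                        long value = rs.getLong(m);
                        Long prev = lastSample.put(m, value);
                        // Alert when a counter grew by more than the threshold
                        // since the previous sample.
                        if (prev != null && value - prev > threshold) {
                            System.err.println("ALERT: " + m + " grew by " + (value - prev));
                        }
                    }
                }
            }
        }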
    MONITORING THE HEALTH OF REPLICATION
    ====================================
    This is a little more complex but is vital to achieve a robust and reliable
    system. Ideally, monitoring should be implemented at both datastores, the
    active and the standby. There are many more failure modes possible for
    a replicated system than for a standalone datastore and it is not possible
    to enumerate them all here. However the information provided here should
    be sufficient to form the basis of a robust monitoring scheme.
    Monitoring replication at the ACTIVE datastore
    1.     CALL ttDataStoreStatus() and check the result set;
    If no connections with type 'replication' exist, conclude that the
    replication agents are stopped, restart the agents and skip the
    next steps.
    It is assumed here that the replication start policy is 'norestart'.
    An alarm about unstable replication agents should be raised
    if this is the Nth restart in M seconds (N and M are configuration parameters).
    The alarm can later be cleared once the agents have stayed alive for K
    seconds (K is a configuration parameter).
    2.     CALL ttReplicationStatus() and check the result set;
    This returns a row for every replication peer for this datastore.
    If pState is not 'start' for any peer, raise an alarm about paused or
    stopped replication and skip the rest of the steps.
    It is assumed that the master cannot help the fact that the state is not
    'start'. An operator may have stopped/paused the replication, or
    TimesTen stopped the replication because of the fail threshold
    strategy. In the former case the operator hopefully starts the replication
    sooner or later (of course, after that TimesTen may stop it again
    because of the fail threshold strategy). In the latter case the standby
    side monitor process should recognise the fact and duplicate the data
    store with the setMasterRepStart option, which sets the state back to 'start'.
    If for any peer, lastMsg > MAX (MAX is a configuration parameter), raise
    an alarm for potential communication problems.
    Note that if replication is idle (nothing to replicate), or there is
    very little replication traffic, the value for lastMsg may become as
    high as 60 seconds without indicating any problem. The test logic
    should cater for this (i.e. MAX must be > 60 seconds).
    3.     CALL ttBookmark();
    Compute the holdLSN delta between the values from this call and the
    previous call and if the delta is greater than maximum allowed
    (configuration parameter), raise an alarm about standby
    that is too far behind. Continue to next step.
    Notice that maximum delta should be less than FAILTHRESHOLD * logSize.
    4.     CALL ttRepSyncSubscriberStatus(datastore, host);
    This step is only needed if you are using RETURN RECEIPT or RETURN TWOSAFE
    with the optional DISABLE RETURN feature.
    If disabled is 1, raise an alarm for disabled return service.
    Continue to next step. If RESUME RETURN policy is not enabled we could,
    of course, try to enable return service again (especially when DURABLE
    COMMIT is OFF).
    There should be no reason to reject TimesTen's own mechanisms that
    control the return service. Thus, no other actions are needed for a disabled
    return service.
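    As a sketch of step 1 on the active side, the following calls the ttDataStoreStatus built-in and restarts the replication agent (via the standard ttRepStart built-in) if no 'replication' connections are found. The result-set column name "TYPE" is an assumption for illustration; verify the built-in's exact output for your release.

        // Active-side check: are replication-agent connections present?
        import java.sql.CallableStatement;
        import java.sql.Connection;
        import java.sql.ResultSet;

        public class ActiveReplMonitor {
            public static void checkAgents(Connection con) throws Exception {
                boolean replConnSeen = false;
                try (CallableStatement cs = con.prepareCall("{CALL ttDataStoreStatus()}");
                     ResultSet rs = cs.executeQuery()) {
                    while (rs.next()) {
                        // The "TYPE" column name is assumed; check your release's docs.
                        if ("replication".equalsIgnoreCase(rs.getString("TYPE"))) {
                            replConnSeen = true;
                        }
                    }
                }
                if (!replConnSeen) {
                    // Start policy is 'norestart', so restart the agent ourselves.
                    try (CallableStatement cs = con.prepareCall("{CALL ttRepStart()}")) {
                        cs.execute();
                    }
                    // A real monitor would also count restarts here to implement the
                    // Nth-restart-in-M-seconds alarm described in step 1.
                }
            }
        }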
    Monitoring replication at the STANDBY datastore
    1.     CALL ttDataStoreStatus();
    If no connections with type 'replication' exist, conclude that the
    replication agents are stopped, restart the agents and skip the
    next steps.
    It is assumed that the replication start policy is 'norestart'.
    An alarm about unstable replication agents should be raised
    if this is the Nth restart in M seconds (N and M are configuration parameters).
    The alarm can later be cleared once the agents have stayed alive for K
    seconds (K is a configuration parameter).
    2.     Call SQLGetInfo(...,TT_REPLICATION_INVALID,...);
    If the status is 1, this indicates that the active store has marked this store
    as failed because it was too far out of sync (the log FAILTHRESHOLD was exceeded).
    Start recovery actions by destroying the datastore and recreating via a
    'duplicate' operation from the active.
    3.     Check 'timerecv' value for relevant row in TTREP.REPPEERS
    If (timerecv - previous timerecv) > MAX (MAX is a configuration parameter),
    raise an alarm for potential communication problems.
    You can determine the correct row in TTREP.REPPEERS by first getting the
    correct TT_STORE_ID value from TTREP.TTSTORES based on the values in
    HOST_NAME and TT_STORE_NAME (you want the id corresponding to the active
    store) and then using that to query TTREP.REPPEERS (you can use a join if
    you like).
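    The query described in step 3 might look like the following sketch. The host and store names are placeholders, and the column names follow the text above but should be verified against the system tables in your release.

        // Standby-side check: track TIMERECV for the active peer.
        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;

        public class StandbyReplMonitor {
            private long prevTimeRecv = -1;

            public void checkTimeRecv(Connection con, long maxDelta) throws Exception {
                String sql = "SELECT p.TIMERECV FROM TTREP.REPPEERS p, TTREP.TTSTORES s"
                           + " WHERE s.HOST_NAME = ? AND s.TT_STORE_NAME = ?"
                           + " AND p.TT_STORE_ID = s.TT_STORE_ID";
                try (PreparedStatement ps = con.prepareStatement(sql)) {
                    ps.setString(1, "ACTIVEHOST");   // placeholder: active peer's host
                    ps.setString(2, "activestore");  // placeholder: active store name
                    try (ResultSet rs = ps.executeQuery()) {
                        if (rs.next()) {
                            long timeRecv = rs.getLong(1);
                            if (prevTimeRecv >= 0 && timeRecv - prevTimeRecv > maxDelta) {
                                System.err.println("ALERT: potential communication problem");
                            }
                            prevTimeRecv = timeRecv;
                        }
                    }
                }
            }
        }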
    The recovery actions that should be taken in the event of a problem with
    replication depend on several factors:
    1. The application requirements
    2. The type of replication configuration
    3. The replication mode (asynchronous, return receipt or return twosafe)
    that is in use
    Consult the TimesTen replication guide for information on detailed recovery
    procedures for each combination.
    ================================ END ==================================

    The information in the forum article is the abridged text of a whitepaper I wrote recommending best practice for building a monitoring infrastructure for TimesTen, i.e. you write an 'application' in C, C++ or Java that performs these monitoring activities and run it continually in production against your datastores. Various aspects of the behaviour of the application could be controlled by configurable parameters; these are not TimesTen parameters but parameters defined and used by the monitoring application.
    In the specific case you mentioned, the 'lastMsg' value returned by ttReplicationStatus is the number of seconds since the last message was received from that peer. The monitoring application would compare this against some meaningful threshold (maybe 30 seconds) and if lastMsg is > that value, raise an alarm. To allow flexibility, the value compared against (MAX) should be configurable.
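    A minimal sketch of that check, assuming ttReplicationStatus exposes the peer name and lastMsg in its result set (the column names here are illustrative):

        // Raise an alarm if a peer has been silent for longer than MAX seconds.
        import java.sql.CallableStatement;
        import java.sql.Connection;
        import java.sql.ResultSet;

        public class LastMsgCheck {
            // maxSeconds (MAX) is configurable and must be > 60, since an idle
            // replication link can legitimately show lastMsg of up to ~60 seconds.
            public static void check(Connection con, int maxSeconds) throws Exception {
                try (CallableStatement cs = con.prepareCall("{CALL ttReplicationStatus()}");
                     ResultSet rs = cs.executeQuery()) {
                    while (rs.next()) {
                        int lastMsg = rs.getInt("LASTMSG"); // seconds since last message
                        if (lastMsg > maxSeconds) {
                            System.err.println("ALERT: peer " + rs.getString("SUBSCRIBER")
                                + " last heard from " + lastMsg + "s ago");
                        }
                    }
                }
            }
        }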
    Does that make sense?
    Chris

  • OEM for 10g - Ten Steps Back!

    I'm not liking 10g so far... I need to do lots of table structure modifications and data updates. OEM for 9i made it so easy. OEM for 10g is forcing me back to the old days of using SQL*PLus to do most of what I need done.

    Hmmmm.....
    I'm not sure I agree with you.
    I don't think it is appropriate to measure the quality of a new RDBMS release by its client tools, which is what the Java-based console actually is.
    You can make table structure changes with dbconsole, and data updates can be done through iSQL*Plus.
    However, if this is not enough, you might download the 10g client distribution and use the Java-based console from that installation. As far as I can see, this is entirely the same as on 9i.
    The way I see it, it is beneficial to separate a client tool like the java based console from the actual database installation home, as this tool is a client application.
    Agree?
    rgds
    Kjell Ove

  • Oem for 8i

    Where will I find Enterprise Manager (OEM) for Oracle 8i
    Enterprise Edition?

    I've been caught out by this one. Oracle 8.0.5 EE came supplied with OEM. When 8.1.5 was released, OEM was re-packaged as a separately purchased product. I think a lot of people installed 8.1.5, tried to find OEM and gave up. All is forgiven for 8.1.6, which comes with a smooth, Java-based OEM package. Incidentally, the OEM bit you see listed in the 8.1.5 installer is just the Agent to allow OEM (if you purchase it!) to communicate with the DB.

  • OEM for 500 GB drive?

    Anyone know who the OEM for the 500 GB drive is?
    TIA

    It is tough to pin down which hard drive manufacturer will be used in a Mac you order from Apple. The reason is simple, Apple contract bids from multiple suppliers for commodity products, like hard drives.
    So even if your specific Mac came with a Seagate 500 GB drive, it is possible that someone else's built at an earlier or later date may have a Western Digital.
    That being said, the early reports from people who have started receiving their Mac Pros report Seagates showing up in them for the 250 GB size; I'm not aware of any reports on the 500s yet.
    The best thing to do if you must have a specific hard drive, is to order your Mac with the smallest (cheapest) drive offered, then add your own drive purchased from a third party.
    Tom N.

  • Configure OEM for 9i

    Hi,
    I want to install standalone OEM for 9i DB
    I tried to follow note 458533.1
    [oradev@vision ~]$ emca -config dbcontrol db -repos create
    java.lang.IllegalArgumentException: -config
            at oracle.sysman.vto.vtoe.repmgr.Arguments.getOperationCode(Arguments.java:424)
            at oracle.sysman.vto.vtoe.repmgr.Arguments.parseOptions(Arguments.java:212)
            at oracle.sysman.vto.vtoe.repmgr.Arguments.parseArguments(Arguments.java:191)
            at oracle.sysman.vto.vtoe.repmgr.Arguments.<init>(Arguments.java:176)
            at oracle.sysman.vto.vtoe.repmgr.Arguments.<init>(Arguments.java:74)
            at oracle.sysman.vte.repmgr.RepositoryMgrApp.main(RepositoryMgrApp.java:103)
    Is this note applicable to 9i?
    Thanks a lot
    Ms K

    Hi MSK;
    Note 458533.1 only covers enabling the DB Control application used to manage a single 10g database. It's not applicable to 9i.
    You can monitor a 9i DB with Oracle 10g Grid Control:
    http://www.oracle.com/technology/products/oem/pdf/em_gc_8i-9i_4.pdf
    Hope it helps
    Regards
    Helios

  • SNMP for TimesTen

    Hi
    I have installed timesten70500.solx8664.tar,
    [2] Oracle In-Memory Database Cache
    and
    [1] Client/Server and Data Manager.
    I want to use SNMP but no packages/files for SNMP were installed automatically.
    What do I have to do to use SNMP (traps)?
    BR

    You do not need any additional software in order for TimesTen to generate SNMP traps. Of course, if you want to catch those traps, display them and act upon them, then you need some kind of third-party software to do that.
    For details on how to configure TimesTen to generate SNMP traps please see the section on 'Diagnostics through SNMP traps' in the Error Messages and SNMP Traps manual (error_ref.pdf) that is installed as part of the TimesTen installation.
    Chris

  • How to add host in OEM for which agent is already deployed

    Hi All,
    I had installed OEM 12c Cloud Control previously. From that OEM I had added certain hosts, and while adding those hosts I deployed agents on them. Now I am not using that OEM anymore. I have freshly installed OEM again and now I want to add the same hosts. Is it possible to use the same agent which was deployed earlier on the host which I want to add as a target? If yes, then what steps do I need to follow to add a host as a target in OEM for which an agent is already deployed? That means I want to bypass the step which deploys an agent to the target host and reuse the already deployed agent.
    Please guide!!

    Yes this is possible. To use an installed agent with a new OMS you can follow these steps:
    1. Shut down the running 12c agent with the below command:
    $AGENT_INSTANCE_HOME/bin/emctl stop agent
    2. Remove the agent instance home:
    rm -rf $AGENT_INSTANCE_HOME
    3. Manually remove the targets monitored by the agent in the EM console (sounds like you don't need to do this as the EM installation is no longer present)
    4. Create a new instance home pointing to the new oms:
    $AGENT_BASE_DIR/core/12.1.0.1.0/sysman/install/agentDeploy.sh
    AGENT_BASE_DIR=<location of the agent base dir> OMS_HOST=<oms hostname>
    EM_UPLOAD_PORT=<http/https upload port>
    AGENT_REGISTRATION_PASSWORD=<registration password>
    AGENT_INSTANCE_HOME=<absolute path where the instance home has to be created>
    -configOnly
    Regards, Mark.

  • Anyone using OEM for scheduling backups?

    I'd like to see a poll for the number of users using OEM for taking database backups. Had some interesting conversations with some colleagues via social media and it seems like some DBAs don't trust OEM for their backups even though the feature works IMO.
    Are you using OEM to do backups?
    Edited by: DBA on May 3, 2012 1:02 PM

    I have 91 scheduled recurring backup jobs running through OEM 12c BP1. They work quite well. I've only noticed a few problems and nitpicks:
    1) Occasionally, for unknown reasons, a backup job against a newly-added database target will fail stating that the backup cannot be run since the database is closed. The database is not closed, it's up and running fine. Changing the backup job to run using SYS as SYSDBA credentials resolves this problem.
    2) Identification of failed backups is not as customizable as I would like. For example, if I have a full "backup database plus archivelog" running, and during the course of that backup an archivelog backup runs and deletes archived logs, OEM will report that the backup job has failed since RMAN throws an error about not being able to backup an archivelog it expected to find. I dealt with this by adding SKIP INACCESSIBLE to the backup statement. I'm not totally comfortable about that but I monitor for offline datafiles so I consider this only a minimal risk. Our custom backup scripts used to catch this warning and ignore it.
    3) It's really annoying that, after creating a backup job through the target Availability -> Schedule Backup tool, I cannot then run a 'Create Like' against that backup job to create a nearly identical one against a different database target. You have to schedule each of them from the Schedule Backup tool. Clicky clicky clicky clicky.
    4) A backup job created through the 'Schedule Backup' tool has a job type of 'Database Backup', therefore when the job fails a high-availability incident is created, so you can configure incident rules to receive notification of this event. This is good. Unfortunately a backup job created from the Jobs page has a job type of 'RMAN Script', which does NOT create an incident on job failure out of the box. I've had difficulty configuring incident rules to catch failures for RMAN Script job types. This is not good.
    5) The repeating schedule options for jobs are limited compared even to something like cron. I can schedule a job for a day of the week, or a day of the month, but cannot easily schedule a job for "the first Tuesday of each month". This can be worked around.
    6) After creating and submitting a backup job through the Schedule Backup tool, you cannot edit the RMAN script without recreating the job. You can only edit the RMAN script before submitting the job. I deal with this by having my backups run stored scripts in the recovery catalog, and make the changes there if I need to change something.
    7) It is very annoying that if I 'stop' a repeating job (instead of suspend), there is no way to resume that job. It has to be recreated. This is my own user error -- I just don't click the stop button. Not OEM's fault but I'd rather not even have that button.
    8) No ability to sequence jobs. You can sequence steps within a job, but then the entire job succeeds or fails as a unit. If you want to back up database A then immediately back up database B afterwards, you have to create a multi-task job. The multi-task job has the same problem as item #4 above such that it does not create a "backup failed" incident when the backup fails.
    Even with all of these gripes, I am quite pleased with the backups I have running through OEM.

  • I downloaded the 10g OEM for Solaris

    I downloaded the 10g OEM for Solaris and put the gz file on the server. When I ran gunzip on the file I got a file with a .cpio extension. What are the instructions to load the file onto the server?

    cpio -idmv < filename.cpio
    (-i extract, -d create directories as needed, -m preserve modification times, -v verbose)
