Job executions lost in EM Grid Control

Hello,
We have lost all executions from the job execution pane except the backup jobs. We have also lost all scheduled jobs. Any ideas?
Thanks,
Manuel.

Did you create the email Notification Schedule?

Similar Messages

  • Problem in deleting backup job in Grid Control

    Hi,
    The topology and versions are as follows:
    1 Windows 2003 server with Oracle Grid Control 10.2.0.4.0.
    1 Windows 2003 server with Oracle Databases 10.2.0.3.
    1 Windows 2003 server with Oracle Databases 10.2.0.3 - Data Guard setup.
    I have tried to stop (and delete) some scheduled full backup jobs from Grid Control.
    The job now has status SUSPEND. I have tried to stop and delete the job, but I get the message:
    All executions of the job were stopped successfully. Currently running steps will not be stopped.
    I cannot delete the job, and the job is not running.
    I have run this procedure in order to delete the job:
    DECLARE
      jguid RAW(16);
    BEGIN
      SELECT job_id
        INTO jguid
        FROM mgmt_job
       WHERE job_name = '<name of your job>'
         AND job_owner = '<owner of your job>';
      mgmt_job_engine.stop_all_executions_with_id(jguid, TRUE);
      COMMIT;
    END;
    /
    With no effect. The job is still shown in Grid Control with status SUSPEND.
    I have restarted all servers and all the Grid Control components, but the jobs will not disappear from Grid Control even though they have been deleted.
    I have been struggling with this for about 2 days now.
    I have searched Metalink and the internet, but I have not found anything that provides a solution.
    Any help will be very much appreciated.

    Hi,
    I have in the past used the following from Metalink. Do you have a Metalink account?
    SET verify OFF
    SET linesize 255
    SET pagesize 128
    SET trimout ON
    SET trimspool ON
    SPOOL jobdump.log
    ALTER SESSION SET nls_date_format='MON-DD-YYYY hh:mi:ss pm';
    COLUMN status format a15
    COLUMN job_name FORMAT a64
    COLUMN job_type FORMAT a32
    COLUMN job_owner FORMAT a32
    COLUMN job_status format 99
    COLUMN target_type format a64
    COLUMN frequency_code format a20
    COLUMN interval format 99999999
    VARIABLE JOBID VARCHAR2(64);
    PROMPT *********************** JOB INFO ********************************
    REM Get the job id
    SET serveroutput on
    BEGIN
      SELECT job_id INTO :JOBID
        FROM MGMT_JOB
       WHERE job_name = '&&jobName'
         AND job_owner = '&&jobOwner'
         AND nested = 0;
    EXCEPTION
      WHEN NO_DATA_FOUND THEN
        BEGIN
          DBMS_OUTPUT.put_line('JOB NOT FOUND, TRYING NAME ONLY');
          SELECT job_id INTO :JOBID
            FROM MGMT_JOB
           WHERE job_name = '&&jobName'
             AND nested = 0
             AND ROWNUM = 1;
        EXCEPTION
          WHEN NO_DATA_FOUND THEN
            DBMS_OUTPUT.put_line('JOB NOT FOUND');
        END;
    END;
    /
    SELECT job_name, job_owner, job_type, system_job, job_status, target_type
    FROM MGMT_JOB
    WHERE job_id=HEXTORAW(:JOBID);
    PROMPT *********************** JOB SCHEDULE ****************************
    SELECT DECODE(frequency_code,
    1, 'Once',
    2, 'Interval',
    3, 'Daily',
    4, 'Day of Week',
    5, 'Day of Month',
    6, 'Day of Year', frequency_code) "FREQUENCY_CODE",
    start_time, end_time, execution_hours, execution_minutes,
    interval, months, days, timezone_info, timezone_target_index,
    timezone_offset, timezone_region
    FROM MGMT_JOB_SCHEDULE s, MGMT_JOB j
    WHERE s.schedule_id=j.schedule_id
    AND j.job_id=HEXTORAW(:JOBID);
    PROMPT ********************** PARAMETERS ********************************
    SELECT parameter_name,
    decode(parameter_type,
    0, 'Scalar',
    1, 'Vector',
    2, 'Large', parameter_type) "PARAMETER_TYPE",
    scalar_value, vector_value
    FROM MGMT_JOB_PARAMETER
    WHERE job_id=HEXTORAW(:JOBID)
    AND execution_id=HEXTORAW('0000000000000000')
    ORDER BY parameter_name;
    PROMPT ********************** TARGETS ********************************
    SELECT target_name, target_type
    FROM MGMT_JOB_TARGET jt, MGMT_TARGETS t
    WHERE job_id=HEXTORAW(:JOBID)
    AND execution_id=HEXTORAW('0000000000000000')
    AND jt.target_guid=t.target_guid
    ORDER BY target_type, target_name;
    PROMPT ********************** FLAT TARGETS ********************************
    SELECT target_name, target_type
    FROM MGMT_JOB_FLAT_TARGETS jft, MGMT_TARGETS t
    WHERE job_id=HEXTORAW(:JOBID)
    AND jft.target_guid=t.target_guid
    ORDER BY target_type, target_name;
    PROMPT ************************ EXECUTIONS *******************************
    SELECT execution_id,
    DECODE(status,
    1, 'SCHEDULED',
    2, 'RUNNING',
    3, 'FAILED INIT',
    4, 'FAILED',
    5, 'SUCCEEDED',
    6, 'SUSPENDED',
    7, 'AGENT DOWN',
    8, 'STOPPED',
    9, 'SUSPENDED/LOCK',
    10, 'SUSPENDED/EVENT',
    11, 'SUSPENDED/BLACKOUT',
    12, 'STOP PENDING',
    13, 'SUSPEND PENDING',
    14, 'INACTIVE',
    15, 'QUEUED',
    16, 'FAILED/RETRIED',
    17, 'WAITING',
    18, 'SKIPPED', status) "STATUS",
    scheduled_time, start_time, end_time
    FROM MGMT_JOB_EXEC_SUMMARY e
    WHERE job_id=HEXTORAW(:JOBID)
    ORDER BY scheduled_time;
    UNDEFINE jobName
    UNDEFINE jobOwner
    UNDEFINE JOBID
    SPOOL OFF
    Alan

  • Scheduling an Export Job Using Grid Control

    Hi,
    I have a requirement to schedule export jobs in Oracle Grid Control 10.2.0.5 (Windows). Is this possible, and has anyone done this? If so, please share the steps. The idea is to get alerts if the job fails.
    Thanks.
    GB

    Here are the easy steps (there might be slight differences, as I am posting this based on the 11g screens):
    1. On the Grid Control console, click on the relevant database.
    2. Click on the 'Data Movement' tab.
    3. Click on 'Export to Export Files'.
    4. Choose the export type (Database / Schema / Tables / Tablespace).
    5. Provide host credentials (OS username and password) and click 'Continue'. You will get a new screen called 'Export: Options'.
    6. Check the checkbox called 'Generate Log File' and select the directory object where you want to create the dump files. (You need to create a directory object in the database if you don't have one; see the sketch after these steps.)
    7. Choose the contents option as required, and click 'Next'. You will get a new page called 'Export: Files'.
    8. Select the directory object from the drop-down box, provide a name format for the file name, and click 'Next'.
    9. Here you provide the job name and description, choose the repeat options (daily, weekly, etc.), and click 'Next'.
    10. You will get a summary screen called 'Export: Schedule'. Review your job details and click 'Submit'.
    This is the solution, and it works well.
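    For step 6, a minimal sketch of creating the directory object (the directory name, path, and grantee below are placeholders, not taken from the thread):
    CREATE OR REPLACE DIRECTORY export_dir AS '/u01/app/oracle/exports';
    GRANT READ, WRITE ON DIRECTORY export_dir TO system;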

  • SQL script with host command job in Enterprise Manager Grid Control

    I use Enterprise Manager Grid Control 10.2.0.5 and need to create a SQL script job on a database instance target on a Unix/Linux platform.
    I have a problem with OS commands inside the sqlplus script.
    For example, for the simple command: SQL> host ls
    I get this message in the output log: SQL> SQL> SQL> SQL> SQL> SQL> /bin/bash: ls: command not found
    Can anyone help me?
    Thank you.

    Hi,
    Make sure you have granted all necessary rights (log on as a batch job, etc.) to the user used in the preferred credentials.
    Cheers,
    Kenneth
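    Given the "/bin/bash: ls: command not found" output, another thing worth checking (an assumption based on that error, not something confirmed in this thread) is that the shell spawned by the job has no PATH set; calling commands by absolute path, or setting PATH for the child shell, sidesteps that:
    -- inside the SQL script job
    host /bin/ls
    host PATH=/bin:/usr/bin ls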

  • Can we schedule Loader Jobs (SQL*Loader) using Grid Control?

    Can we schedule SQL*Loader jobs/processes using Grid Control for an 11g database?
    Or is it better to schedule them as external jobs using DBMS_SCHEDULER?
    The OS is Linux. The database is 11gR1; Grid Control will be the latest version, I believe 10gR3.
    Any other suggestions? (I know it can be done using OS utilities like cron, but that is not an option for now.)
    Thanks in advance.

    Try this:
    -> Create a shell script to execute sqlldr (a sketch follows below).
    -> On Grid, create an "OS Command" job and pass this script as the command to run. You'll have options to schedule this job.
    Let us know how it works.
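    A minimal sketch of such a wrapper script (all paths, file names, and the parameter file are placeholder assumptions, not from the thread):
    #!/bin/sh
    # Hypothetical sqlldr wrapper; adjust paths and credentials for your site.
    export ORACLE_HOME=/u01/app/oracle/product/11.1.0/db_1
    export ORACLE_SID=orcl
    export PATH=$ORACLE_HOME/bin:$PATH
    # Credentials kept in a protected parameter file rather than hard-coded here.
    sqlldr parfile=/home/oracle/load/load.par \
           control=/home/oracle/load/data.ctl \
           log=/home/oracle/load/data.log
    exit $?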

  • Export job on EM Grid Control 10gR2

    As far as I understand, an Export job in EM Grid Control 10gR2 can be created only from the Database Maintenance page. Also, Create Like is not supported for this job type; it cannot be edited and cannot be placed in the Job Library. Thus, its definition is not in the Grid Control repository.
    I think dealing with Export jobs is not very user friendly in EM Grid Control 10gR2. Am I right?
    Thank you.

    Thank you for your interest in my question.
    Every time I want to create an Export job I can create it through the wizard. After that I can use neither the Create Like feature nor edit the job; I have to use the wizard to create another job from scratch.
    When I use RMAN script jobs I notice much better features.
    In addition, I do not know whether the job definition exists in the Grid Control repository.

  • Grid Control jobs issue

    Hi,
    AIX, EMR3
    I'm testing job scheduling from OEM - startups and shutdowns of monitored databases.
    Immediate job execution always works; scheduled execution never works.
    Any suggestions or troubleshooting tips?
    Thx,
    Dobby

    What errors do you get in the job status and what scheduled time does it show after the time has elapsed?

  • Cleanup job to remove archive logs automatically through OEM Grid Control

    Hi All,
    I am working on an 11gR2 3-node RAC database. We have enabled archivelog mode for the databases, we don't have any backup processes (like RMAN), and we are not using ASM.
    Please let me know how to clean up the old archive logs automatically through OEM Grid Control.
    I have some idea how to do it in a standalone database, but I am not sure how it works in a RAC environment through OEM. Please let me know.
    Thanks in advance.

    Hari wrote:
    Thanks for your reply. The requirement is to put the DB in archivelog mode and clean up the archive logs that are more than 5 days old. We are doing this because of a space issue; we don't have backups for these files, and the DB must be in archivelog mode.
    I have a few questions here.
    1. Is it a must to take a backup of the archive log files before deleting them?
    No, but if you aren't backing up, why create the archive logs in the first place?
    2. If I delete them without a backup, what is the negative impact?
    If you aren't backing up the database in the first place (as you stated in an earlier post), then it really doesn't matter what you do with the archive logs, as they are worthless anyway.
    3. What is the recommended process?
    My recommendation is that you first start using RMAN to back up the database.
    4. I need to set up this process through OEM Grid Control.
    Please let me know.
    Thanks,
    Hari
    It all begs the question which has already been asked and you avoided answering: if you are not taking backups, why bother archiving? The archive logs have ZERO VALUE outside of a consistent backup strategy. So how is it you have a 'requirement' to run in archivelog mode but no requirement for backups?
    Edited by: EdStevens on Dec 2, 2011 9:30 PM
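    For reference, the cleanup itself is usually an RMAN one-liner that a scheduled job can wrap (a sketch; run against the target database, it removes logs older than 5 days for all threads recorded in the control file, assuming the archive destinations are accessible from the node where it runs - and, as noted above, without backups those logs carry no recovery value anyway):
    RMAN> CONNECT TARGET /
    RMAN> DELETE NOPROMPT ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE-5';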

  • No log on DBMS_DATAPUMP schedule on Grid Control

    Hello,
    I'm trying to schedule my own procedure to make a full Data Pump export of the instance with OEM Grid Control.
    It works, but there is no log in the job detail:
    CREATE OR REPLACE PROCEDURE pr_expdp_full AS
    -- Procedure to perform a FULL export of the instance
    -- using Data Pump technology.
    -- Declare the directory object in the instance first:
    --   CREATE DIRECTORY "DATAPUMP_DIR" AS '<my network path>';
      JobHandle       NUMBER;        -- Data Pump job handle
      JobStamp        VARCHAR2(13);  -- Date/time stamp
      InstanceName    VARCHAR2(30);  -- Name of instance
      ind             NUMBER;        -- Loop index
      n_Exist         NUMBER;        -- Count of existing jobs
      JobHandle_Exist NUMBER;        -- Handle of an existing job
      percent_done    NUMBER;        -- Percentage of job complete
      job_state       VARCHAR2(30);  -- To keep track of job state
      le  ku$_LogEntry;   -- For WIP and error messages
      js  ku$_JobStatus;  -- The job status from get_status
      jd  ku$_JobDesc;    -- The job description from get_status
      sts ku$_Status;     -- The status object returned by get_status
    BEGIN
      -- Timestamp the file with the system date
      SELECT to_char(SYSDATE, 'DDMMRRRR-HH24MI') INTO JobStamp FROM dual;
      -- Instance name to export
      SELECT rtrim(global_name) INTO InstanceName FROM global_name;
      -- Delete the job if it already exists
      SELECT COUNT(*) INTO n_Exist
        FROM user_datapump_jobs
       WHERE job_name = 'DAILY_EXPDP_' || InstanceName;
      IF n_Exist > 0 THEN
        JobHandle_Exist := DBMS_DATAPUMP.ATTACH('DAILY_EXPDP_' || InstanceName, 'SYSTEM');
        DBMS_DATAPUMP.STOP_JOB(JobHandle_Exist);
        DBMS_DATAPUMP.DETACH(JobHandle_Exist);
        EXECUTE IMMEDIATE 'DROP TABLE DAILY_EXPDP_' || InstanceName;
      END IF;
      -- Create a (user-named) Data Pump job to do a full export.
      JobHandle := DBMS_DATAPUMP.OPEN(
        operation => 'EXPORT',
        job_mode  => 'FULL',
        job_name  => 'DAILY_EXPDP_' || InstanceName,
        version   => 'COMPATIBLE');
      dbms_output.put_line('after OPEN');
      -- Specify a single dump file for the job (using the handle just returned)
      -- and a directory object, which must already be defined and accessible
      -- to the user running this procedure.
      DBMS_DATAPUMP.ADD_FILE(
        handle    => JobHandle,
        filename  => 'FULL_EXPDP_' || InstanceName || '_' || JobStamp || '.dpf',
        directory => 'DATAPUMP_DIR',
        filetype  => 1);
      DBMS_DATAPUMP.SET_PARAMETER(handle => JobHandle, name => 'KEEP_MASTER', value => 0);
      -- Specify a single log file for the job
      DBMS_DATAPUMP.ADD_FILE(
        handle    => JobHandle,
        filename  => 'FULL_EXPDP_' || InstanceName || '_' || JobStamp || '.log',
        directory => 'DATAPUMP_DIR',
        filetype  => 3);
      DBMS_DATAPUMP.SET_PARAMETER(handle => JobHandle, name => 'INCLUDE_METADATA', value => 1);
      DBMS_DATAPUMP.SET_PARAMETER(handle => JobHandle, name => 'DATA_ACCESS_METHOD', value => 'AUTOMATIC');
      -- Start the job. An exception is raised if something is not set up properly.
      DBMS_DATAPUMP.START_JOB(JobHandle);
      -- The export job should now be running. In the following loop the job
      -- is monitored until it completes; in the meantime, progress is displayed.
      percent_done := 0;
      job_state := 'UNDEFINED';
      WHILE (job_state != 'COMPLETED') AND (job_state != 'STOPPED') LOOP
        dbms_datapump.get_status(JobHandle,
                                 dbms_datapump.ku$_status_job_error +
                                 dbms_datapump.ku$_status_job_status +
                                 dbms_datapump.ku$_status_wip,
                                 -1, job_state, sts);
        js := sts.job_status;
        -- If the percentage done changed, display the new value.
        IF js.percent_done != percent_done THEN
          dbms_output.put_line('*** Job percent done = ' || to_char(js.percent_done));
          percent_done := js.percent_done;
        END IF;
        -- If any work-in-progress (WIP) or error messages were received
        -- for the job, display them.
        IF bitand(sts.mask, dbms_datapump.ku$_status_wip) != 0 THEN
          le := sts.wip;
        ELSIF bitand(sts.mask, dbms_datapump.ku$_status_job_error) != 0 THEN
          le := sts.error;
        ELSE
          le := NULL;
        END IF;
        IF le IS NOT NULL THEN
          ind := le.FIRST;
          WHILE ind IS NOT NULL LOOP
            dbms_output.put_line(le(ind).LogText);
            ind := le.NEXT(ind);
          END LOOP;
        END IF;
      END LOOP;
      -- Indicate that the job finished and detach from it.
      dbms_output.put_line('Job has completed');
      dbms_output.put_line('Final job state = ' || job_state);
      dbms_datapump.detach(JobHandle);
    END pr_expdp_full;
    /
    I then schedule it with:
    BEGIN
      SYSTEM.pr_expdp_full();
    END;
    /
    But when I want to see the results of the execution in the job details, I can't see anything.
    When I do the same export using the direct OEM link "Export Data", after submission I can see the Data Pump log in the job log details.
    Can you tell me what is missing from the submission, or what can be done to correct this?
    Thanks in advance.
    Best regards.
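    One detail worth checking (an assumption on my part, not something confirmed in this thread): DBMS_OUTPUT text only shows up in a SQL script job's output if server output is enabled in the submitted script, along these lines:
    SET SERVEROUTPUT ON SIZE 1000000
    BEGIN
      SYSTEM.pr_expdp_full();
    END;
    /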

  • How to monitor ODI in Grid Control

    Hello
    We are using Grid Control 11.1.0.1.0 and ODI 11.1.1.5.0.
    We were previously using a grep command on the server ODI was running on to monitor whether it was up or not.
    Since the (OEM) upgrade we have lost this metric and would like to take a more elegant approach to monitoring the state of ODI.
    (We didn't install ODI as part of a WebLogic Server install; it is standalone, so the plug-ins and the usual ways to monitor are not relevant to us.)
    Is there a way I can have Grid (via a UDM, for instance) go into the database and query for a process?
    If the process is found, then no alert is sent. This also means that we will only have to black out the database during cold backups instead of the whole server.
    Hopefully this makes sense, and hopefully someone can give me a few hints/tips on how to achieve this.
    Thanks.

    You can find info about the backups in sysman.mgmt$ha_backup, but only one entry per database can be seen, so you will only find info there for the latest RMAN job.
    You could also start using an RMAN catalog that contains all backup info (example: http://oracleinstall.wordpress.com/2011/06/03/oracle-11g-rman-catalog-setup/). You can create reports/queries on the RMAN repository to get the info you need.
    Or, even better: create an RMAN repository and schedule the RMAN jobs using Grid/Cloud Control.
    Eric
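    Coming back to the UDM idea from the question, a minimal sketch of the query such a metric could run (the program filter is an assumption about how the ODI agent's session would appear; verify it against your own v$session contents first):
    SELECT COUNT(*) FROM gv$session WHERE UPPER(program) LIKE '%ODI%';
    A number metric with a critical threshold of "equal to 0" would then alert when no matching session is found.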

  • Grid Control monitoring a particular logfile

    Hi all,
    I have a question: is it possible for Grid Control to monitor a specific log file?
    I have a log file where I register a script's execution status; is it possible to raise an alert in Grid Control just by keeping an eye on this file?
    Thanks in advance,
    Regards,
    Emerson

    Yes.
    Create an external table for that logfile, then create a user-defined metric that queries the table to retrieve the execution status results, and define key values in that metric to raise warning or critical alerts based on the execution status.
    For example, checking for success on a data pump job (admittedly contrived and inefficient) with a logfile called 'impdp.log' sitting in /tmp:
    sqlplus / as sysdba
    CREATE OR REPLACE DIRECTORY TEST_EXTERN_DIR AS '/tmp';
    GRANT READ ON DIRECTORY TEST_EXTERN_DIR TO someuser;
    exit;
    sqlplus someuser/somepassword
    CREATE TABLE myextern ( string VARCHAR2(255) )
      ORGANIZATION EXTERNAL (
        DEFAULT DIRECTORY TEST_EXTERN_DIR
        ACCESS PARAMETERS ( RECORDS DELIMITED BY NEWLINE )
        LOCATION ( 'impdp.log' ) );
    SELECT * FROM myextern WHERE string LIKE 'Job%completed%';
    Then create the user-defined metric, in this case as a string metric run by someuser, with a comparison operator of CONTAINS, a warning value of 'error', and a critical value of 'fail'. Set the metric to run as frequently as necessary, along with any response actions desired.

  • Can not Integrate with Grid Control

    Hi all,
    We are testing the integration features of Ops Center 11g and Grid Control 11g. When we connect to Grid Control from Ops Center, all other tests are OK, except for a message saying that reconfiguring to another EMRepository is not supported, like this:
    Job ID : testserver.90
    Job Name : OEMGCManager-config
    Job Type : OEMGC
    Job Desc : -
    Run ID : 1
    Status : FAILED
    Mode : Actual run
    Owner : john
    Create Date : 03/19/2012 11:51:53 AM ICT
    Start Date : 03/19/2012 11:51:54 AM ICT
    Last Updated : 03/19/2012 11:51:56 AM ICT
    Execution Order : SEQUENTIAL
    Failure Policy : ABORT_ON_ANY_FAILURE
    Task : OEMGCManager-config
    Target : EC->satellite
    Status : FAILED
    Result : Task failed
    Logs :
    03/19/2012 11:51:56 AM ICT INFO Action: config
    03/19/2012 11:51:56 AM ICT INFO Started configuration procedure
    03/19/2012 11:51:56 AM ICT INFO Configuration details:
    03/19/2012 11:51:56 AM ICT INFO OEMGC url:jdbc:oracle:thin:@192.168.0.10:1521:testdb
    03/19/2012 11:51:56 AM ICT INFO oemgc mbean: __local/com.sun.hss.domain.manager:type=OEMGCManager
    03/19/2012 11:51:56 AM ICT INFO oemgc mbean: MXBeanProxy(com.sun.jdmk.JdmkMBeanServerImpl@c1f10e[__local/com.sun.hss.domain.manager:type=OEMGCManager])
    03/19/2012 11:51:56 AM ICT INFO OEMGC Console connection test OK for: testsvr10
    03/19/2012 11:51:56 AM ICT INFO OEMGC Repostory connection test OK for: jdbc:oracle:thin:@192.168.0.10:1521:testdb
    03/19/2012 11:51:56 AM ICT ERROR Reconfiguring to another EMRepository is not supported. (101055)
    We searched Google but cannot see anything to resolve this problem. Can anyone give us some advice to try? Thank you very much.

    We have the same problem.
    I opened a ticket with Oracle:
    Job ID : ge09.73
    Job Name : OEMGCManager-config
    Job Type : OEMGC
    Job Description : -
    Run ID : 1
    Status : FAILED
    Mode : Actual run
    Owner : root
    Create Date : 04/24/2012 08:34:48 AM CEST
    Start Date : 04/24/2012 08:34:48 AM CEST
    Last Updated : 04/24/2012 08:34:49 AM CEST
    Execution Order : SEQUENTIAL
    Failure Policy : ABORT_ON_ANY_FAILURE
    Task : OEMGCManager-config
    Task Run ID : 335
    Target : EC->EC
    Status : FAILED
    Result : Task failed. (15030)
    Logs :
    04/24/2012 08:34:48 AM CEST INFO Action: config
    04/24/2012 08:34:48 AM CEST INFO Started configuration procedure
    04/24/2012 08:34:49 AM CEST INFO Configuration details:
    04/24/2012 08:34:49 AM CEST INFO OEMGC url:jdbc:oracle:thin:@oe01.san.gva.es:1521:GRID
    04/24/2012 08:34:49 AM CEST INFO oemgc mbean: __local/com.sun.hss.domain.manager:type=OEMGCManager
    04/24/2012 08:34:49 AM CEST INFO oemgc mbean: Proxy[com.sun.hss.domain.oemgc.OEMGCManagerMXBean: __local/com.sun.hss.domain.manager:type=OEMGCManager]
    04/24/2012 08:34:49 AM CEST INFO OEMGC Console connection test OK for: oe01.san.gva.es
    04/24/2012 08:34:49 AM CEST INFO OEMGC Repostory connection test OK for: jdbc:oracle:thin:@oe01.san.gva.es:1521:GRID
    04/24/2012 08:34:49 AM CEST ERROR Reconfiguring to another EMRepository is not supported. (101055)

  • Notification Email/Alert - Including Job execution details(step output)

    Hi,
    I am using Oracle 10gR2 Grid Control on Windows. I would like to add the execution details to the default (Long) notification emails. Is there any way, in the current version or a later version, to include the step output (failed/passed details) without adding any code?
    Thanks for your help.

    I am already getting notification emails/alerts with just the status, failed/succeeded. But I want to include the step output (job execution details) in that default email provided by Oracle. I could write a PL/SQL block to manipulate this, but I am looking for a built-in way to include these details in any version.
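    If it does come down to scripting it, the repository views are the place to start; a sketch (this assumes the documented MGMT$JOB_EXECUTION_HISTORY repository view; exact view and column names can differ between Grid Control versions):
    SELECT job_name, status, start_time, end_time
      FROM sysman.mgmt$job_execution_history
     WHERE job_owner = 'SYSMAN'
     ORDER BY start_time DESC;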

  • Error while installing Grid Control 12c (ERROR STREAM: *sys-package-mgr*: skipping bad jar)

    Hi all,
       OS: OEL 6.3 64 bits
       DB: 11.2.0.3
       Grid: 12.1
       While installing Grid Control 12c, the following error appears:
    INFO: SaveInvWCCE JRE files in Scratch
    INFO: oracle.installer.mandatorySetup property is set to false, so skipping the execution of additional tools
    INFO: oracle.installer.installUpdates property is set to false, so skipping the checking of updates
    INFO: Config Initialize JRE files in Scratch
    INFO: no env vars set, no envVars.properties file is generated
    INFO: none of the components are configurable
    INFO: This is a shared oracle home or remote nodes are null. No copy required.
    INFO: Adding iterator oracle.sysman.oii.oiif.oiifw.OiifwRootShWCDE
    INFO: Updating the global context
    INFO: Path To 'globalcontext.xml' = /gridControl/WLS/jdk16/install/chainedInstall/globalcontext
    INFO: Since operation was successful, move the current OiicAPISessionDetails to installed list
    INFO: Install Finished at: 2013-07-04_11-12-49-PM
    INFO: The ARU ID found in shiphomeproperties.xml file is 226
    INFO: ARUId present in shiphomeproperties.xml matches with the 64 bit OMS Platforms ARU ID 226, so -d64 is passed for wls install.
    INFO: Executing the command /gridControl/WLS/jdk16/jdk/bin/java   -d64  -Djava.io.tmpdir=/gridControl/WLS/.wlsinstall_temp -Xms128m -Xmx1024m  -jar /u01/binaries//wls/wls1035_generic.jar -mode=silent -silent_xml=/gridControl/WLS/.wlsinstall_temp/wls.xml -log=/u01/oraInventory/logs/installActions2013-07-04_11-07-45PM.log.wls  -log_priority=debug
    INFO: Extracting 0%....................................................................................................100%
    INFO: ERROR STREAM: *sys-package-mgr*: skipping bad jar, '/u01/binaries/wls/wls1035_generic.jar'
    INFO: #
    INFO: # A fatal error has been detected by the Java Runtime Environment:
    INFO: #
    INFO: #  SIGSEGV (0xb) at pc=0x0000003a23689cdc, pid=24834, tid=139710737405696
    INFO: #
    INFO: # JRE version: 6.0_24-b50
    INFO: # Java VM: Java HotSpot(TM) 64-Bit Server VM (19.1-b02 mixed mode linux-amd64 compressed oops)
    INFO: # Problematic frame:
    INFO: # C  [libc.so.6+0x89cdc]
    INFO: #
    INFO: # An error report file with more information is saved as:
    INFO: # /tmp/hs_err_pid24834.log
    INFO: #
    INFO: # If you would like to submit a bug report, please visit:
    INFO: #   http://java.sun.com/webapps/bugreport/crash.jsp
    INFO: #
    INFO: Return value is 134
    INFO: POPUP ERROR:Installation of Oracle WebLogic Server has failed, check log files for more details.
       The line "skipping bad jar, '/u01/binaries/wls/wls1035_generic.jar'" is the one that's worrying me. Can the jar be corrupt? Or is it something else?
       Thanks in advance.

    It's too early to conclude that the issue is a bug without looking into the logs. For the same reason, I requested that you open an SR so that we can have a look at the logs and identify the cause. If you are an Oracle employee, you can share the VM details; otherwise we need logs to debug this further. If anyone from your company can open an SR and share the logs, that would be helpful. Can you also check that the shiphome you downloaded is correct and that its checksum/byte count matches what is mentioned on OTN? (A sketch of that check follows below.)
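    A quick way to run that check from the shell (a sketch; compare the output against the checksum and size published on the OTN download page):
    cksum /u01/binaries/wls/wls1035_generic.jar
    md5sum /u01/binaries/wls/wls1035_generic.jar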

  • Using Data Pump from Grid Control to import schema

    Hi,
    I have two 10g Oracle databases on two different Windows servers. The databases are called source and dest.
    In dest I have created a database link to source.
    I want to import a schema from source to dest.
    I log on to Grid Control, and in the Maintenance tab I click the 'Import from Database' link.
    I choose import from schema and the database link I created earlier.
    I then choose the schema name.
    On the 'Import From Database: Options' page I expand the advanced options.
    I choose 'Exclude Only Objects Specified Below' and add a row that looks like this:
    object type: TABLE
    object name expression: EXCLUDE=TABLE:"IN('TEST6', 'TEST7')"
    As I submit the job I get this error message:
    Import Submit Failed
    Errors: ORA-39001: invalid argument value. ORA-39071: The value of NAME_EXPR is misformed. ORA-00920: invalid relational operator.
    I have tried several ways, but I am still getting the same error.
    Any suggestion will be appreciated.

    Hi,
    Thanks for your reply.
    I have already tried the following with backslashes:
    EXCLUDE=TABLE:\"IN('TEST6', 'TEST7')\"
    and I get the same error message.
    If you have the correct syntax, could you send it to me?
    Regards.
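    For what it's worth (an assumption, not confirmed in this thread): since the Options page already supplies the EXCLUDE=TABLE: part itself, the object name expression field typically expects only the operator expression, along these lines:
    IN ('TEST6', 'TEST7')
    Passing the full EXCLUDE=TABLE:... string in that field would explain the misformed NAME_EXPR / invalid relational operator errors.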
