No log on DBMS_DATAPUMP schedule on Grid Control

Hello,
I'm trying to schedule my own procedure to make a Data Pump full export of the instance with OEM Grid Control.
It works, but there is no log in the job details:
create or replace
PROCEDURE pr_expdp_full AS
-- Procedure performing a FULL export of the instance
-- with the Data Pump technology.
-- The directory object must be declared in the instance beforehand:
-- CREATE DIRECTORY "DATAPUMP_DIR" AS '<my network path>';
JobHandle NUMBER; -- Data Pump job handle
JobStamp VARCHAR2(13); -- Date time stamp
InstanceName varchar2(30); -- Name of instance
ind NUMBER; -- Loop index
n_Exist NUMBER; -- count Job
JobHandle_Exist NUMBER; --JobHandle exist
percent_done NUMBER; -- Percentage of job complete
job_state VARCHAR2(30); -- To keep track of job state
le ku$_LogEntry; -- For WIP and error messages
js ku$_JobStatus; -- The job status from get_status
jd ku$_JobDesc; -- The job description from get_status
sts ku$_Status; -- The status object returned by get_status
BEGIN
-- TimeStamp File with system date
select to_char(SYSDATE,'DDMMRRRR-HH24MI') into JobStamp from dual ;
-- Instance Name to export
select rtrim(global_name) into InstanceName from global_name;
--Delete Job if exist
select count(*) into n_Exist from user_datapump_jobs where job_name = 'DAILY_EXPDP_'||InstanceName;
IF n_Exist > 0
THEN
JobHandle_Exist := DBMS_DATAPUMP.ATTACH('DAILY_EXPDP_'||InstanceName,'SYSTEM');
dbms_datapump.stop_job(JobHandle_Exist);
DBMS_DATAPUMP.DETACH (JobHandle_Exist);
execute immediate 'DROP TABLE DAILY_EXPDP_'||InstanceName;
END IF;
-- Create a (user-named) Data Pump job to do a schema export.
JobHandle :=
DBMS_DATAPUMP.OPEN(
operation => 'EXPORT'
,job_mode => 'FULL'
,job_name => 'DAILY_EXPDP_'||InstanceName
,version => 'COMPATIBLE' );
dbms_output.put_line('after OPEN');
-- Specify a single dump file for the job (using the handle just returned)
-- and a directory object, which must already be defined and accessible
-- to the user running this procedure.
DBMS_DATAPUMP.ADD_FILE(
handle => JobHandle
,filename => 'FULL_EXPDP_'||InstanceName||'_'||JobStamp||'.dpf'
,directory => 'DATAPUMP_DIR'
,filetype => 1 );
dbms_datapump.set_parameter(handle => JobHandle, name => 'KEEP_MASTER', value => 0);
-- Specify a single log file for the job
DBMS_DATAPUMP.ADD_FILE(
handle => JobHandle
,filename => 'FULL_EXPDP_'||InstanceName||'_'||JobStamp||'.log'
,directory => 'DATAPUMP_DIR'
,filetype => 3 );
dbms_datapump.set_parameter(handle => JobHandle, name => 'INCLUDE_METADATA', value => 1);
dbms_datapump.set_parameter(handle => JobHandle, name => 'DATA_ACCESS_METHOD', value => 'AUTOMATIC');
-- Start the job. An exception will be generated if something is not set up
-- properly.
DBMS_DATAPUMP.START_JOB(JobHandle);
-- The export job should now be running. In the following loop, the job
-- is monitored until it completes. In the meantime, progress information is
-- displayed.
percent_done := 0;
job_state := 'UNDEFINED';
while (job_state != 'COMPLETED') and (job_state != 'STOPPED') loop
dbms_datapump.get_status(JobHandle, dbms_datapump.ku$_status_job_error + dbms_datapump.ku$_status_job_status + dbms_datapump.ku$_status_wip, -1, job_state, sts);
js := sts.job_status;
-- If the percentage done changed, display the new value.
if js.percent_done != percent_done
then
dbms_output.put_line('*** Job percent done = ' ||
to_char(js.percent_done));
percent_done := js.percent_done;
end if;
-- If any work-in-progress (WIP) or error messages were received for the job,
-- display them.
if (bitand(sts.mask,dbms_datapump.ku$_status_wip) != 0)
then
le := sts.wip;
else
if (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0)
then
le := sts.error;
else
le := null;
end if;
end if;
if le is not null
then
ind := le.FIRST;
while ind is not null loop
dbms_output.put_line(le(ind).LogText);
ind := le.NEXT(ind);
end loop;
end if;
end loop;
-- Indicate that the job finished and detach from it.
dbms_output.put_line('Job has completed');
dbms_output.put_line('Final job state = ' || job_state);
dbms_datapump.detach(JobHandle);
END pr_expdp_full;
Afterwards, I schedule:
BEGIN
SYSTEM.pr_expdp_full();
END;
But when I look at the results of the execution in the job details, I can't see anything.
When I do the same job using the direct OEM link "Export Data", after submission I can see the Data Pump log in the job log details.
Can you tell me what is missing from the submission, or what can be done to correct this?
Thanks in advance
Best regards.
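One thing worth checking (a guess, not a confirmed fix): if the anonymous block above is submitted as a Grid Control "SQL Script" job, the dbms_output lines only reach the job output when serveroutput is enabled in the session that runs the script. A minimal sketch of the script body, assuming the job step runs through SQL*Plus:
SET SERVEROUTPUT ON SIZE 1000000
BEGIN
SYSTEM.pr_expdp_full();
END;
/
Either way, the Data Pump log file added with filetype => 3 is written to DATAPUMP_DIR on the database host; the OEM job log only shows what the session itself prints.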


Similar Messages

  • How to delete archive log files from ASM through Grid Control

    Hi
    Can anybody suggest how to delete archive log files from ASM through Grid Control?
    Thanks

    It is important to specify both the Oracle version and the OS version when posting, so confusion can be avoided.
    In this particular case, since you are referring to ASM and Grid Control, you could be talking about either 10gR1, 10gR2 or 11gR1; but I strongly suggest you avoid making us guess. In either case, you should go to the maintenance tab of the target database and schedule a 'delete noprompt obsolete;' procedure. This will purge the information stored in the flash recovery area, which I assume you have declared inside ASM.
    ~ Madrid
    http://hrivera99.blogspot.com/
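    For reference, a minimal RMAN sketch of the scheduled cleanup described above (a sketch only; it assumes a retention policy is already in place, and the 7-day window is just an example):
    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
    CROSSCHECK ARCHIVELOG ALL;
    DELETE NOPROMPT OBSOLETE;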

  • Scheduling an Export Job Using Grid Control

    Hi,
    I have a requirement to schedule export jobs in Oracle Grid Control 10.2.0.5 (Windows). Is this possible, and has anyone done this? If so, please share the steps. The idea is to get alerts if the job fails.
    Thanks.
    GB

    Here are the easy steps (there might be slight differences, as I am posting this based on the 11g screens):
    1. On the Grid Control console, click on the relevant database
    2. Click on the 'Data Movement' tab
    3. Click on 'Export to Export Files'
    4. Choose the export type (Database / Schema / Tables / Tablespace)
    5. Provide host credentials (OS username and password) and click on 'Continue'. You will get a new screen called 'Export: Options'
    6. Click the checkbox called 'Generate Log File' and select the directory object where you want to create the dump files. (You need to create a directory object in the database if you don't have one; see the sketch after these steps.)
    7. Choose the contents option as required, and click 'Next'. You will get a new page called 'Export: Files'
    8. Select the directory object from the drop-down box, provide a name format for the file name, and click 'Next'
    9. Here you provide the job name and description, choose the repeat options (daily, weekly etc.), and click 'Next'
    10. You will get a summary screen called 'Export: Schedule'. Review your job details and click on 'Submit'.
    This is the solution, and it works well.
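    For step 6, if a directory object does not exist yet, a minimal sketch of creating one as a DBA (directory name, path and grantee are placeholders):
    CREATE DIRECTORY dpump_dir AS '/u01/app/oracle/dpdump';
    GRANT READ, WRITE ON DIRECTORY dpump_dir TO system;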

  • Alert Log File Monitoring of 8i and 9i Databases with EM Grid Control 10g

    Is it possible to monitor alert log errors in Oracle 8i/9i Databases with EM Grid Control 10g and EM 10g agents? If yes, is it possible to get some kind of notification?
    I know that in 10g Database, it is possible to use server generated alerts, but what about 8i/9i?
    Regards,
    Martin

    Hello
    I am interested in one very specific feature: is it possible to get notified when alerts occur in the alert logs of an 8i/9i database when using Grid Control and the 10g agent on the 8i/9i systems?
    Moreover, the 10g agent should be able to get performance data using the v$ views or direct SGA access without using Statspack, right?
    Do you know where I can find documentation about the supported features when using Grid Control with 8i/9i databases?

  • Grid control patching log locations

    I looked on oracle support. I am looking for the grid control logs for when I patch through the grid control gui.
    The agent deploy logs are here:
    /opt/app/oracle/product/10.2.0/oms10g/sysman/prov/agentpush
    Where are the logs for Deploy -> Patch Agent ?
    When a job fails, for either patching an agent or a database, the error messages I get in the GUI are not really useful. I need the logs. I have been hunting for them and can't find them.

    Check the job logs in the OMS since the real deploying/patching is done by a job.
    See the Jobs tab. If you select Advanced Search and change Target Type to Targetless, you may even see more jobs that ran on your OMS.
    Select a job that ran. You can click Show All Details and zoom in even further, depending on whether it was a multi-step job.
    Eric

  • Can we schedule Loader Jobs (SQL Loader) using Grid Control  ?

    Can we schedule SQL*Loader jobs/processes using Grid Control for an 11g database?
    Or
    Is it good to schedule it as external jobs using DBMS_SCHEDULER ?
    The OS is Linux. The database is 11gR1. Grid Control will be the latest, I believe 10gR3.
    Any other suggestions... (I know it can be done using OS utilities like cron and others, but that is not an option for now.)
    Thanks in advance.

    Try this:
    -> Create a shell script to execute sqlldr
    -> On Grid Control, create an "OS COMMAND" job and pass this script in the parameters. You'll have options to schedule this job.
    Let us know how it works.
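    If the DBMS_SCHEDULER route from the question is preferred instead, a minimal sketch of an external job wrapping the same shell script (the job name and script path are placeholders; the script must be executable by the OS user configured for external jobs):
    BEGIN
    DBMS_SCHEDULER.CREATE_JOB(
    job_name => 'SQLLDR_LOAD_JOB',
    job_type => 'EXECUTABLE',
    job_action => '/home/oracle/scripts/run_sqlldr.sh',
    repeat_interval => 'FREQ=DAILY;BYHOUR=2',
    enabled => TRUE);
    END;
    /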

  • Cleanup job to remove archive logs automatically through OEM Grid control

    Hi All,
    I am working on an 11gR2 3-node RAC database. We have enabled archivelog mode for the databases, don't have any backup processes (like RMAN), and are not using ASM.
    Please let me know how to clean up the old archivelogs automatically through OEM Grid Control.
    I have some idea how to do it in a standalone database, but I am not sure how it works in a RAC environment through OEM. Please let me know.
    Thanks in advance.

    Hari wrote:
    Thanks for your reply. The requirement is to put the DB in archive log mode and clean up the archive logs that are more than 5 days old. We are doing this because of a space issue; we don't have backups for these files, and the DB must be in archive log mode.
    I have a few questions here.
    1. Is it a must to take a backup of the archive log files before deleting them?
    No, but if you aren't backing up, why create the archivelogs in the first place?
    2. If I delete them without a backup, what is the negative impact?
    If you aren't backing up the database in the first place (as you stated in an earlier post), then it really doesn't matter what you do with the archivelogs, as they are worthless anyway.
    3. What is the recommended process to do it?
    My recommendation is that you first start using RMAN to back up the database.
    4. I need to set up this process through OEM Grid Control.
    Please let me know.
    Thanks,
    Hari
    It all begs the question which has already been asked and you avoided answering... if you are not taking backups, why bother archiving? The archive logs have ZERO VALUE outside of a consistent backup strategy. So how is it you have a 'requirement' to run in archivelog mode but no requirement for backups?
    Edited by: EdStevens on Dec 2, 2011 9:30 PM
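    That said, if the archivelogs really are expendable, the 5-day cleanup could be scheduled as an RMAN script job in Grid Control along these lines (a sketch, not a recommendation; in RAC, run it where the archivelogs of all threads are visible):
    CROSSCHECK ARCHIVELOG ALL;
    DELETE NOPROMPT ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE - 5';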

  • Grid Control 10g Purge log files

    Hi,
    I want to know which log files/directories I can clean without stopping/affecting the OMS. I have cleaned a couple, but am not sure what the complete list is. It's 10.2.0.5 Grid Control.
    Thanks.
    Edited by: gbyte2001 on 2-Jun-2012 9:03 AM

    Except for the current versions of the log files (i.e., those without the .xxxxxxxxxx extension), you can remove the files in the following directories:
    $ORACLE_HOME/Apache/Apache/logs
    e.g., remove access_log.xxxxxxxxxx and keep the access_log file
    $ORACLE_HOME/webcache/logs
    $ORACLE_HOME/sysman/log
    Also, keep an eye on the sizes of the files in the following directories. If necessary, purge them when the OMS is stopped.
    $ORACLE_HOME/j2ee/OC4J_EM/log/OC4J_EM_default_island_1
    $ORACLE_HOME/j2ee/OC4J_EMPROV/log/OC4J_EM_default_island_1
    $ORACLE_HOME/opmn/logs
    Regards,
    - Loc

  • Problem in deleting backup job in Grid Control

    Hi,
    The topology and versions are as follows:
    1 windows 2003 server with Oracle Grid Control 10.2.0.4.0.
    1 windows 2003 server with Oracle Databases 10.2.0.3.
    1 windows 2003 server with Oracle Databases 10.2.0.3 - Data Guard setup
    I have tried to stop (and delete) some scheduled fullbackup jobs from Grid Control.
    The job now has the status SUSPEND. I have tried to stop and delete the job, but I get the message:
    All executions of the job were stopped successfully. Currently running steps will not be stopped.
    I cannot delete the job and the job is not running.
    I have run this procedure in order to delete the job:
    DECLARE
    jguid RAW(16);
    BEGIN
    SELECT job_id
    INTO jguid
    FROM mgmt_job
    WHERE job_name = '<name of your job>'
    AND job_owner = '<owner of your job>';
    mgmt_job_engine.stop_all_executions_with_id(jguid,TRUE);
    COMMIT;
    END;
    /
    With no effect. The job is still shown in Grid Control with status SUSPEND.
    I have restarted all servers and all the components in Grid Control, but the jobs will not disappear from Grid Control although they have been deleted.
    I have been struggling with this for about 2 days now.
    I have searched Metalink and the internet, but I have not found anything that provides a solution.
    Any help will be very much appreciated.

    hi,
    I have in the past used the following from Metalink.
    Do you have a Metalink account?
    SET verify OFF
    SET linesize 255
    SET pagesize 128
    SET trimout ON
    SET trimspool ON
    SPOOL jobdump.log
    ALTER SESSION SET nls_date_format='MON-DD-YYYY hh:mi:ss pm';
    COLUMN status format a15
    COLUMN job_name FORMAT a64
    COLUMN job_type FORMAT a32
    COLUMN job_owner FORMAT a32
    COLUMN job_status format 99
    COLUMN target_type format a64
    COLUMN frequency_code format a20
    COLUMN interval format 99999999
    VARIABLE JOBID VARCHAR2(64);
    PROMPT *********************** JOB INFO ********************************
    REM Get the job id
    SET serveroutput on
    BEGIN
    SELECT job_id INTO :JOBID
    FROM MGMT_JOB
    WHERE job_name='&&jobName'
    AND job_owner='&&jobOwner'
    AND nested=0;
    EXCEPTION
    WHEN NO_DATA_FOUND THEN
    BEGIN
    DBMS_OUTPUT.put_line('JOB NOT FOUND, TRYING NAME ONLY');
    SELECT job_id INTO :JOBID
    FROM MGMT_JOB
    WHERE job_name='&&jobName'
    AND nested=0
    AND ROWNUM=1;
    EXCEPTION
    WHEN NO_DATA_FOUND THEN
    DBMS_OUTPUT.put_line('JOB NOT FOUND');
    END;
    END;
    /
    SELECT job_name, job_owner, job_type, system_job, job_status, target_type
    FROM MGMT_JOB
    WHERE job_id=HEXTORAW(:JOBID);
    PROMPT *********************** JOB SCHEDULE ****************************
    SELECT DECODE(frequency_code,
    1, 'Once',
    2, 'Interval',
    3, 'Daily',
    4, 'Day of Week',
    5, 'Day of Month',
    6, 'Day of Year', frequency_code) "FREQUENCY_CODE",
    start_time, end_time, execution_hours, execution_minutes,
    interval, months, days, timezone_info, timezone_target_index,
    timezone_offset, timezone_region
    FROM MGMT_JOB_SCHEDULE s, MGMT_JOB j
    WHERE s.schedule_id=j.schedule_id
    AND j.job_id=HEXTORAW(:JOBID);
    PROMPT ********************** PARAMETERS ********************************
    SELECT parameter_name,
    decode(parameter_type,
    0, 'Scalar',
    1, 'Vector',
    2, 'Large', parameter_type) "PARAMETER_TYPE",
    scalar_value, vector_value
    FROM MGMT_JOB_PARAMETER
    WHERE job_id=HEXTORAW(:JOBID)
    AND execution_id=HEXTORAW('0000000000000000')
    ORDER BY parameter_name;
    PROMPT ********************** TARGETS ********************************
    SELECT target_name, target_type
    FROM MGMT_JOB_TARGET jt, MGMT_TARGETS t
    WHERE job_id=HEXTORAW(:JOBID)
    AND execution_id=HEXTORAW('0000000000000000')
    AND jt.target_guid=t.target_guid
    ORDER BY target_type, target_name;
    PROMPT ********************** FLAT TARGETS ********************************
    SELECT target_name, target_type
    FROM MGMT_JOB_FLAT_TARGETS jft, MGMT_TARGETS t
    WHERE job_id=HEXTORAW(:JOBID)
    AND jft.target_guid=t.target_guid
    ORDER BY target_type, target_name;
    PROMPT ************************ EXECUTIONS *******************************
    SELECT execution_id,
    DECODE(status,
    1, 'SCHEDULED',
    2, 'RUNNING',
    3, 'FAILED INIT',
    4, 'FAILED',
    5, 'SUCCEEDED',
    6, 'SUSPENDED',
    7, 'AGENT DOWN',
    8, 'STOPPED',
    9, 'SUSPENDED/LOCK',
    10, 'SUSPENDED/EVENT',
    11, 'SUSPENDED/BLACKOUT',
    12, 'STOP PENDING',
    13, 'SUSPEND PENDING',
    14, 'INACTIVE',
    15, 'QUEUED',
    16, 'FAILED/RETRIED',
    17, 'WAITING',
    18, 'SKIPPED', status) "STATUS",
    scheduled_time, start_time, end_time
    FROM MGMT_JOB_EXEC_SUMMARY e
    WHERE job_id=HEXTORAW(:JOBID)
    ORDER BY scheduled_time;
    UNDEFINE jobName
    UNDEFINE jobOwner
    UNDEFINE JOBID
    SPOOL OFF
    Alan

  • How to use stored script from with Grid Control 10gR2

    HI
    My backup method uses a stored script in a recovery catalog that is separate from Grid Control's repository database.
    In Scheduled Backup, there is no way to connect to a recovery catalog, let alone use a stored script. Is that true?
    Thanks,
    Kevin

    It works. Here is a test case; I use this functionality a lot.
    Connected from the host:
    $ rman target / catalog <mycatalog>
    connected to target database: TOPAZ (DBID=3348250252)
    connected to recovery catalog database
    RMAN> create script testbackup
    2> {backup current controlfile;
    3> }
    created script testbackup
    RMAN> run {execute script testbackup;}
    executing script: testbackup
    Starting backup at 12-JUN-07
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: sid=217 devtype=DISK
    allocated channel: ORA_DISK_2
    channel ORA_DISK_2: sid=251 devtype=DISK
    allocated channel: ORA_DISK_3
    channel ORA_DISK_3: sid=200 devtype=DISK
    allocated channel: ORA_DISK_4
    channel ORA_DISK_4: sid=337 devtype=DISK
    allocated channel: ORA_DISK_5
    channel ORA_DISK_5: sid=182 devtype=DISK
    channel ORA_DISK_1: starting full datafile backupset
    channel ORA_DISK_1: specifying datafile(s) in backupset
    including current controlfile in backupset
    channel ORA_DISK_1: starting piece 1 at 12-JUN-07
    channel ORA_DISK_1: finished piece 1 at 12-JUN-07
    piece handle=/rman_backup/d_TOPAZ_s_12344_p_1_t_625077001 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
    Finished backup at 12-JUN-07
    Starting Control File and SPFILE Autobackup at 12-JUN-07
    piece handle=/rman_backup/c-3348250252-20070612-06 comment=NONE
    Finished Control File and SPFILE Autobackup at 12-JUN-07
    Now in Grid Control I have configured my recovery catalog settings; check the OS username, which should be the owner of the Oracle home for the recovery catalog.
    I schedule an RMAN backup job with these parameters in the script:
    {execute script testbackup;}
    Status          Running
         Step ID          203585
         Targets          topaz.ucas.ac.uk
         Started          12-Jun-2007 16:34:56 (UTC+01:00)
         Step Elapsed Time          1 minutes, 37 seconds
         Management Service          stardb1:4889_Management_Service
    Output Log
    Recovery Manager: Release 9.2.0.6.0 - 64bit Production
    Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
    RMAN>
    connected to target database: TOPAZ (DBID=3348250252)
    RMAN>
    connected to recovery catalog database
    RMAN>
    echo set on
    RMAN> {execute script testbackup;}
    2> exit;
    executing script: testbackup
    Starting backup at 12-JUN-07
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: sid=164 devtype=DISK
    allocated channel: ORA_DISK_2
    channel ORA_DISK_2: sid=134 devtype=DISK
    allocated channel: ORA_DISK_3
    channel ORA_DISK_3: sid=22 devtype=DISK
    allocated channel: ORA_DISK_4
    channel ORA_DISK_4: sid=198 devtype=DISK
    allocated channel: ORA_DISK_5
    channel ORA_DISK_5: sid=145 devtype=DISK
    channel ORA_DISK_1: starting full datafile backupset
    channel ORA_DISK_1: specifying datafile(s) in backupset
    including current controlfile in backupset
    channel ORA_DISK_1: starting piece 1 at 12-JUN-07
    channel ORA_DISK_1: finished piece 1 at 12-JUN-07
    piece handle=/rman_backup/d_TOPAZ_s_12346_p_1_t_625077300 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
    Finished backup at 12-JUN-07
    Starting Control File and SPFILE Autobackup at 12-JUN-07
    piece handle=/rman_backup/c-3348250252-20070612-07 comment=NONE
    Finished Control File and SPFILE Autobackup at 12-JUN-07

  • Backing up database and archivelogs with grid control.

    So I was scheduling a job to back up the entire database and archive logs to an ASM diskgroup. For some reason, the backups of both the datafiles and archivelog files keep going to my $ORACLE_HOME/db/dbs directory. I have no clue why, as I've dropped the job, recreated it, and made sure it was going to <diskgroup>. Does anyone have any idea what might be happening? I'm using 10.2.0.4 for the database and ASM. My Grid Control is also 10.2.0.4.
    Thanks in advance
    Luke

    Grid Control:
    Disk Backup Location: I've tried leaving it blank so that it will go to my flash recovery area (which is on ASM), and I have even tried to explicitly state an ASM diskgroup, but both of those fail and the backups go to the dbs directory.
    RMAN:
    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 3 DAYS;
    CONFIGURE BACKUP OPTIMIZATION ON;
    CONFIGURE DEFAULT DEVICE TYPE TO DISK;
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F';
    CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET PARALLELISM 4;
    CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1;
    CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1;
    CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '%U';
    CONFIGURE MAXSETSIZE TO UNLIMITED;
    CONFIGURE ENCRYPTION FOR DATABASE OFF;
    CONFIGURE ENCRYPTION ALGORITHM 'AES128';
    CONFIGURE ARCHIVELOG DELETION POLICY TO NONE;
    CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/SOMEDIR/';
    Edited by: Luke22 on Feb 2, 2010 12:07 PM
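    A likely culprit (an educated guess, not a confirmed diagnosis): CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '%U' gives the channel a format with no path, which overrides the flash recovery area default and sends the pieces to $ORACLE_HOME/dbs. Clearing it, or pointing it at the diskgroup, would look like this (the diskgroup name is a placeholder):
    CONFIGURE CHANNEL DEVICE TYPE DISK CLEAR;
    CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '+FRA/%U';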

  • Grid Control stops sending emails?

    Hello everyone. An interesting thing happened today. For whatever reason, Grid Control stopped sending out the email alerts that we have configured. Let me give some details.
    We have customized some rules so we can receive email alerts if something goes wrong with our production databases. It had worked great until today.
    We have set up an alert to let us know the 'state' of the database. This worked extremely well on our Data Guard box, which would change status to 'MOUNT' twice a day as we applied redo logs from the primary database. Each time this happened, Grid Control would fire off an email alert stating the database was in the 'MOUNT' state.
    Once all the logs were applied and the database returned to 'OPEN', we would get an email saying the database was 'CLEAR' and now 'OPEN'.
    This worked great until our 17:00 log apply happened.
    We did not receive anything at all and now I am trying to figure out why.
    To my knowledge, nothing has changed on the Grid Control server.
    I logged in and checked everything out, and it seemed ok.
    I did restart some of the services and restarted the DB instance.
    Doing some testing, I stopped a database to check the rule. It took a while for Grid Control to recognize that the database was down, and it eventually fired off an email alert stating the database was unreachable. But this was after at least 3-5 minutes, whereas before it was almost instant.
    My guess is that when the database status changes, Grid Control is not finding out right away. Due to the latency of the issue, by the time all the logs have been applied (no more than 5 minutes), the database is back 'OPEN' and looks good to Grid Control.
    I'm a little baffled by this because Grid Control has worked great for months.
    Does anyone have any idea on what could be wrong?
    Any ideas on where to start looking?
    Any additional information I can post to help troubleshoot? Logs?
    I appreciate it.
    Jason

    hi,
    Has a notification schedule been set up for the user? This is normally done when a user is created.
    Does the user have access to the rule set that has been created?
    For example, in my system SYSMAN creates all the rules and then makes them public. Individual users then subscribe to the public rules to get the alerts.
    regards
    Alan

  • Create like  - this options on Grid control fails

    Hi,
    The OS is Linux RHEL4.
    Grid Control 10gR2 is installed and is monitoring a cluster database.
    I have created a scheduled job to back up the database, and I have another job to back up archive logs.
    I am trying to use the Create Like feature to create other jobs, but even though I am logged in as either SYSMAN or SYSTEM,
    I get a message saying:
    'Create like is not supported for this type of job'
    Can anyone help out with this?
    regards

    As the message says, it is not supported for this type of job. Maybe there is a "Show SQL" button for the job definition; then you could adapt this SQL for your new job. HTH

  • 10g grid control installation (hanging problem) on RHEL AS 4.0

    I am trying to install Enterprise Manager 10g (10.2.0.1.0) Grid Control on RHEL AS 4.0. At the stage after specifying security options, the OUI hangs for a long time (about 15 minutes), then works fine until the installation reaches 100%, then hangs all day at the 'execute configuration scripts' step, which I have never seen before. After much frustration I stopped the installation. Has anyone come across this kind of problem? Can anyone help me out? Thanks in advance.
    The last few lines from logfile shows:
    *** End of Installation Page***
    The installation of Oracle Enterprise Manager Repository Database was successful.
    INFO: Path To 'globalcontext.xml' = /u01/app/oracle/product/10.2.0/db10g/install/chainedInstall/globalcontext

    Please paste the install action log and config log

  • Grid Control 11g Installation Unable to find Weblogic Server

    Hi,
    I am trying to install Grid Control 11g on RHEL 5.3. I have already installed WebLogic, but when I give its location as the Middleware home, Grid Control doesn't accept it.
    This is the exact error: ORACLE_MIDDLEWARE_HOME_LOCATION: specified Oracle Middleware Home is not installed in this host.
    I am entering /opt/oracle/Middleware on the Middleware home location screen.
    [oracle@ Middleware]$ pwd
    /opt/oracle/Middleware
    [oracle@]$ ls -l
    total 108
    drwxr-xr-x 2 oracle dba 4096 Dec 15 22:45 logs
    drwxr-xr-x 7 oracle dba 24576 Dec 15 22:45 modules
    -rw-r--r-- 1 oracle dba 852 Dec 15 22:45 ocm.rsp
    drwxr-xr-x 5 oracle dba 4096 Dec 15 22:50 patch_wls1032
    -rw-r--r-- 1 oracle dba 56812 Dec 15 22:45 registry.dat
    -rw-r--r-- 1 oracle dba 1718 Dec 15 22:45 registry.xml
    drwxr-xr-x 8 oracle dba 4096 Dec 15 22:45 utils
    drwxr-xr-x 7 oracle dba 4096 Dec 15 22:45 wlserver_10.3

    Did you install the correct version of WebLogic?
    Check your HOSTS file for incorrect entries.
    Eric

    Hello everyone. I'm hoping someone can help me with a problem I'm having running one of the Survey Builder API's. I need to edit certain survey fields using the following API: OPC_SURVEY_BUILD.EDIT_SURVEY. The problems is that when I try to run this