Scheduled full hot backup to disk job failed

Hello,
We are facing an issue with our hot backup job. Can someone help me figure out how to fix it?
Here is the error log message:
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of backup command on c1 channel at 03/20/2014 01:25:20
ORA-19510: failed to set size of 4985610 blocks for file "+DB_FLASH_1/##########"(block size=8192)
ORA-17505: ksfdrsz:1 Failed to resize file to size 4985610 blocks
ORA-15041: diskgroup space exhausted
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of allocate command on c1 channel at 03/19/2014  23:00:01
ORA-19554: error allocating device, device type: SBT_TAPE, device name:
ORA-27211: Failed to load Media Management Library
Additional information: 2

Hi,
It doesn't look like a Scheduler issue, but an RMAN one instead.
According to the error (ORA-15041), the diskgroup space is exhausted, so you have most likely run out of space in the +DB_FLASH_1 diskgroup that the backup is being written to.
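A quick way to confirm this, assuming the backup target is the ASM diskgroup shown in the error (+DB_FLASH_1), is to check its free space from SQL*Plus:

select name, total_mb, free_mb
from v$asm_diskgroup
where name = 'DB_FLASH_1';

If FREE_MB is close to zero, you will need to add disks to the diskgroup or free space in it (for example by deleting obsolete backups with RMAN's DELETE OBSOLETE once a retention policy is set) before the job can complete.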

Similar Messages

  • RMAN HOT BACKUP ON DISK

    Hi All,
    I want to take an online full hot backup to disk using RMAN.
    Is it possible? Can anyone provide me a sample script?
    Regards,
    Umair

    hi
    run {
    crosscheck backup;
    delete noprompt obsolete;
    allocate channel ch1 type disk;
    allocate channel ch2 type disk;
    backup database tag='full_database_backup_and_ctl' include current controlfile
    format '/oracle/backup/rman_bk/fulldb_%u%s%p';
    sql "alter system switch logfile";
    backup archivelog all delete input format '/oracle/backup/rman_bk/arch_%u%s%p';
    backup current controlfile format '/oracle/backup/rman_bk/ctl_%u%s%p';
    release channel ch1;
    release channel ch2;
    }
    Thanks
    Kuljeet Pal Singh
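    One prerequisite worth noting: an online (hot) backup is only possible when the database runs in ARCHIVELOG mode, otherwise RMAN cannot back up open datafiles. A quick check from SQL*Plus before scheduling the script above:

    select log_mode from v$database;

    If this returns NOARCHIVELOG, switch the database to ARCHIVELOG mode first (shutdown immediate; startup mount; alter database archivelog; alter database open;).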

  • Job scheduling failed because the user has no permission to access this report

    Hi. I have OBIP 10.1.3.4.1.
    When I launch a print job with the scheduler I see this error:
    oracle.apps.xdo.servlet.scheduler.ProcessingException: Job scheduling failed because the user has no permission to access this report. [REPORT_URL]=[folderreport/report/report.xdo], [USERNAME]=[administrator]
         at oracle.apps.xdo.servlet.ui.scheduler.SchedulerServlet.scheduleJob(SchedulerServlet.java:1140)
         at oracle.apps.xdo.servlet.ui.scheduler.SchedulerServlet.doPost(SchedulerServlet.java:295)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:763)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:856)
         at com.evermind.server.http.ResourceFilterChain.doFilter(ResourceFilterChain.java:64)
         at oracle.apps.xdo.servlet.security.SecurityFilter.doFilter(SecurityFilter.java:100)
         at com.evermind.server.http.ServletRequestDispatcher.invoke(ServletRequestDispatcher.java:621)
         at com.evermind.server.http.ServletRequestDispatcher.forwardInternal(ServletRequestDispatcher.java:368)
         at com.evermind.server.http.HttpRequestHandler.doProcessRequest(HttpRequestHandler.java:866)
         at com.evermind.server.http.HttpRequestHandler.processRequest(HttpRequestHandler.java:448)
         at com.evermind.server.http.HttpRequestHandler.serveOneRequest(HttpRequestHandler.java:216)
         at com.evermind.server.http.HttpRequestHandler.run(HttpRequestHandler.java:117)
         at com.evermind.server.http.HttpRequestHandler.run(HttpRequestHandler.java:110)
         at oracle.oc4j.network.ServerSocketReadHandler$SafeRunnable.run(ServerSocketReadHandler.java:260)
         at oracle.oc4j.network.ServerSocketAcceptHandler.procClientSocket(ServerSocketAcceptHandler.java:239)
         at oracle.oc4j.network.ServerSocketAcceptHandler.access$700(ServerSocketAcceptHandler.java:34)
         at oracle.oc4j.network.ServerSocketAcceptHandler$AcceptHandlerHorse.run(ServerSocketAcceptHandler.java:880)
         at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:303)
         at java.lang.Thread.run(Thread.java:595)
    In this environment I have an LDAP security model, and all the other reports and users work.

    Please check whether you have assigned the responsibility below to the user trying to schedule the report:
    XMLP_SCHEDULER

  • DPM 2012 R2 long backup-to-tape job randomly fails after installing SCCM 2012 Client

    Hello,
    I'm managing a two-node 2012 R2 file server cluster that contains a 16 TB CSV. I'm using DPM 2012 R2 to back up this entire shared volume directly to LTO-4 tapes; the job lasts about 55 hours.
    Since the SCCM 2012 client was installed (I don't manage it), the tape jobs have been failing randomly after several hours with the error:
    Type: Tape backup
    Status: Failed
    Description: The DPM service was unable to communicate with the protection agent on serverX.xxxx.xxx . (ID 52 Details: The semaphore timeout period has expired (0x80070079))
     More information
    End time: 19/07/2014 03:11:06
    Start time: 18/07/2014 22:00:00
    Time elapsed: 05:11:05
    Data transferred: 768 289,56 MB
    Cluster node serverX.xxxx.xxx
    Source details: G:\
    Protection group members: 1
     Details
    Protection group: File Server Tape Protection
    Library: Quantum PX500 Series Medium Changer
    Tape Label (Barcode): File Server Tape Protection-00000230 (000043L4)
    If I uninstall the SCCM 2012 client, there is no more issue and the backups succeed. I've asked our SCCM team; no specific task has been scheduled or deployed in SCCM.
    I can't see anything abnormal in the logs.
    Any idea?

    I have disabled "Configuration Manager Maintenance" and I have also tried to set the registry value HKLM\Software\Microsoft\CCM\CcmEval\NotifyOnly to TRUE, but the issue is still the same.
    I can't find any correlated errors in the Windows event logs, the Task Scheduler history, or the DPM logs.
    I've increased the DPM log level by following this procedure:
    http://blogs.msdn.com/b/george_bethanis/archive/2013/11/04/how-to-collect-dpm-verbose-logs.aspx
    Now I suspect the Windows 2012 R2 maintenance job, so I'll try disabling that task. But the fact remains that I don't have this backup issue when the SCCM 2012 client is not installed.
    I'm waiting for the next logs and will keep you informed.

  • Error message if job scheduling fails.

    Hi,
    I want to capture the message when job scheduling fails and send it to an email ID.
    Can this be done?
    How can we do this?
    thanks in advance
    Vikash.

    Hi Vikash,
    I can elaborate on that, though I don't have sample code with me.
    Steps:
    1) Consider the job scheduled via SM36 as Job_A.
    2) Now, develop a program to be scheduled as Job_B. This Job_B monitors whether the first job has triggered successfully, is still running, or has failed.
        a) To develop the program, take a look at the table TBTCO first.
        b) This table holds the details of all the jobs that were scheduled in the system.
        c) Create a work area of the TBTCO structure type and fetch the details of the job which you
            have already triggered.
        d) Now, check the value of the job status in the work area.
        e) If it is active/ready/released, don't take any action.
        f) If it is cancelled or failed, implement the logic to send an e-mail saying the job has
            failed to run.
    3) Note: Remember to run this monitoring job Job_B some time after Job_A has started.
    Hope this information is enough for you to implement it easily.
    Thanks,
    Babu Kilari

  • Windows Server 2012 R2 Job Scheduler Failing.

    Hi,
    Currently, my company's Windows Server 2012 R2 job scheduler is having a problem.
    I found a possible solution at http://support.microsoft.com/kb/2617046, so I downloaded the hotfix to fix the problem. However, I get another error as below.
    Please, what can I do?
    Thanks 
    Best Regard
    Vincent.

    Amy,
    Thanks for the tips. I followed the settings, checked each condition, and troubleshot according to the list you provided.
    The task was configured to run only when a specified network is available. (Not set this)
    The configured expiration time for the task has passed. (no expired time set)
    The task is configured to ignore or queue a new task instance if a previous instance is still running. (not set this)
    The task is configured not to run when the computer is on battery power. (Yes, it will not run on battery power, but this will not happen because it is a server running on AC power all the time.)
    The task is configured to run only if a specific user is logged on. (must select this, if not, the scheduler will not run at all)
    The task was disabled by a user. (is enabled)
    The previous task instance might have been running longer than expected because a component is busy processing data. If the task is normally expected to run for this length of time, consider modifying the task triggers to take this run-time length
    into consideration, or configure the task to be terminated after a preset time. (Already set "Stop the task if it runs longer than 3 days", "If the running task does not end when requested, force it to stop", and "Do not start a new instance".)
    Frankly, for the moment it runs normally, but I don't know when the incident will happen again. It seems to happen randomly, which is why I need the fix for this bug. I cannot trigger an email when this incident happens.
    And the schedule we run is highly critical.
    Thanks.

  • Hot backup question

    Our DBA duties are currently outsourced and I'm just curious about something.
    We asked them to copy a schema from our Production database (during the day) down to its equivalents in the Acceptance and Test environments. We found a problem: the maximum value of a primary key on a table was 100121, but the sequence used to generate the value for that primary key was only at 100081.
    Is this just one of the risks of taking a hot backup, and so should such a copy be scheduled after hours? Or did he possibly do something wrong?
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit

    If the request was to copy only one schema, then it is unlikely that a "hot backup" would have been used. Quite possibly exp/imp or expdp/impdp would have been used to export the schema from Production and import it into the other two databases.
    Sequences are always "a problem" when export - import is used while the database is active. The export of the sequence and the export of the table (or tables) that use the sequence are not at the same time / SCN so the values may diverge as they get updated in the source database.
    The fix is to reset the sequence in the target database by manually incrementing it until it reaches the desired value (100122), as sketched below.
    Hemant K Chitale
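    A minimal SQL sketch of that approach; the sequence and table names are hypothetical placeholders, and the figures are the ones quoted above (max key 100121, sequence at 100081):

    -- jump the sequence past the highest key in one step, then restore the increment
    alter sequence my_seq increment by 41;   -- 100081 + 41 = 100122
    select my_seq.nextval from dual;         -- consumes 100122, now past the max key
    alter sequence my_seq increment by 1;

    Dropping and recreating the sequence with START WITH 100122 achieves the same thing if a short outage on the sequence is acceptable.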

  • ANNOUNCE:::::::: EJB job scheduler for Weblogic

    Paramus, NJ - June 26, 2001 - Indus Consultancy Services today announced
    an upgrade release of its Java scheduling product. Kronos Enterprise
    Scheduler is a full-featured job scheduling system written for the
    Enterprise Java (J2EE) environment.
    As a J2EE application, Kronos Enterprise Scheduler is able to achieve a
    high degree of portability, scalability, and reliability, features which
    are critical to an enterprise system.
    Additionally, Kronos Enterprise Scheduler provides a comprehensive set
    of scheduling options, powerful holiday definitions, job dependencies,
    administrative alerts, J2EE security integration, monitoring
    capabilities, Web and "rich client" user interfaces, and an open
    developer API. The combination of these features make Kronos Enterprise
    Scheduler one of the most complete solutions on the market today.
    Version 2.10 contains a number of minor enhancements and bug fixes and
    is a free upgrade for current customers.
    For more information on Kronos, please visit our products page at
    www.indcon.com

    I would hope that BEA will step up and develop a replacement for their
    deprecated APIs. It is unfair to ask their customers to buy something they
    got previously for free. I also understand that the future EJB spec will
    contain scheduling features.

  • KRONOS JOB Scheduler For WEBLOGIC CUSTOMERS

    Paramus, NJ - June 26, 2001 - Indus Consultancy Services today announced
    an upgrade release of its Java scheduling product. Kronos Enterprise
    Scheduler is a full-featured job scheduling system written for the
    Enterprise Java (J2EE) environment.
    As a J2EE application, Kronos Enterprise Scheduler is able to achieve a
    high degree of portability, scalability, and reliability, features which
    are critical to an enterprise system.
    Additionally, Kronos Enterprise Scheduler provides a comprehensive set
    of scheduling options, powerful holiday definitions, job dependencies,
    administrative alerts, J2EE security integration, monitoring
    capabilities, Web and "rich client" user interfaces, and an open
    developer API. The combination of these features make Kronos Enterprise
    Scheduler one of the most complete solutions on the market today.
    Version 2.10 contains a number of minor enhancements and bug fixes and
    is a free upgrade for current customers.
    For more information on Kronos, please visit our products page at
    www.indcon.com

  • KRONOS - EJB Job scheduler for Weblogic

    Paramus, NJ - February 22, 2002 - Indus Consultancy Services (ICS) today
    announced the latest in a series of improvements to Kronos Enterprise
    Scheduler, a full-featured job scheduling system written for the
    Enterprise Java (J2EE) environment.
    This version fixes some minor bugs reported by customers, and adds a new
    security feature to control user visibility to jobs, tasks, and
    schedules. A new environment setting allows the Kronos Enterprise
    Scheduler administrator to restrict users to seeing only those items
    which they have created. Administrators still have full visibility to
    all items.
    ICS is also proud to have Kronos Enterprise Scheduler competing in the
    Java Developer's Journal 2002 Readers' Choice Awards. Every year, Java
    Developer's Journal holds voting for products in a variety of
    categories. This year, Kronos Enterprise Scheduler is nominated in
    three of those categories:
    · Best Java Application (Standing as of 2/22/02 : 16th out of 69
    products with votes.)
    · Best Java Component (Standing as of 2/22/02 : 10th out of 26 products
    with votes.)
    · Most Innovative Java Product (Standing as of 2/22/02 : 21st out of 63
    products with votes.)
    To vote, visit the Java Developer's Journal website at
    <http://www.sys-con.com/java/readerschoice2002>
    To find out more about Kronos Enterprise Scheduler, and learn why
    companies around the globe are selecting this powerful product to handle
    their scheduling needs, visit the ICS website at <http://www.indcon.com>
    and request a FREE 30-day evaluation.
    © 2002 Indus Consultancy Services, Inc. All rights reserved. Java and
    all Java-based marks are trademarks or registered trademarks of Sun
    Microsystems, Inc. in the U.S. and other countries. All other product
    and company names are trademarks of their respective owners.

    I would hope that BEA will step up and develop a replacement for their
    deprecated APIs. It is unfair to ask their customers to buy something they
    got previously for free. I also understand that the future EJB spec will
    contain scheduling features.

  • FULL HOT Backup failing with a 6 on media management layer....

    Hi all,
    My thanks in advance for reading and for any replies. Apologies for the long post, but I wanted to make sure I got all the required information into it.
    *******************The background*************************
    We are currently doing full hot backups of all production databases to tape. The database is a RAC install (9.2.0.6.0) on Red Hat Enterprise Linux 4 (database on OCFS). We use a separate Solaris 8 box with 2 HP Ultrium tape drives over a standard Cat5 line (100 Mb), Veritas NetBackup 5.1 as our media manager, and an RMAN repository on another box. Currently the backup takes on average 17 hours to complete (something that I have been through with Oracle).
    We use the below rman script to do the full database backup.
    DATE=`date +%m%d%Y-%H%M%S`; export DATE
    USER=`id | cut -f2 -d"(" | cut -f1 -d")"`
    ORACLE_SID=ausl5; export ORACLE_SID
    ORACLE_USER=oracle; export ORACLE_USER
    ORACLE_HOME=/u01/oracle/product/9.2.0; export ORACLE_HOME
    #delete log files older than 35 days
    find /var/adm/netbackup -name "auslive*" -mtime +35 -type f -exec rm {} \;
    #if [ "${USER}" = "root" ]
    #then
    # su - ${ORACLE_USER} "cd /rac/DBA/scripts/BACKUP/" #-c "${CMD_STRING}"
    # SUCCESS=$?
    #else
    # eval "/rac/DBA/scripts/BACKUP"
    # SUCCESS=$?
    #fi
    su - oracle <<EOF
    /u01/oracle/product/9.2.0/bin/rman msglog /var/adm/netbackup/AUSLIVE_backup.${DATE} append <<EOF1
    connect target sys/fowler@rman_ausl5
    connect catalog rman/rman@rman
    #change archivelog all validate;
    run {
    allocate channel ch1 type sbt_tape PARMS="BLKSIZE=1048576";
    #allocate channel ch2 type sbt_tape PARMS="BLKSIZE=1048576";
    #allocate channel ch3 type sbt_tape;
    CONFIGURE DEFAULT DEVICE TYPE TO 'SBT_TAPE';
    CONFIGURE CHANNEL 1 DEVICE TYPE 'SBT_TAPE' MAXPIECESIZE 100000m CONNECT = 'SYS/fowler@rman_ausl5';
    #CONFIGURE CHANNEL 2 DEVICE TYPE 'SBT_TAPE' MAXPIECESIZE 100000m CONNECT = 'SYS/fowler@rman_ausl3';
    #CONFIGURE CHANNEL 3 DEVICE TYPE 'SBT_TAPE' CONNECT = 'SYS/fowler@rman_ausl4';
    change archivelog all validate;
    ###=============== Veritas Calls =========================
    send "NB_ORA_CLASS=ORAhot_AUSLIVE";
    #send "NB_ORA_CLIENT=sunatlsunx008";
    send "NB_ORA_CLIENT=sunatlsunx008";
    send "NB_ORA_SCHED=full";
    ###=====================main backup sections =============
    backup
    incremental level=0
    filesperset 15
    format "bk_%s_%p_%t"
    tag "hot_${ORACLE_SID}_${DATE}"
    database;
    sql "alter system archive log current";
    change archivelog all validate;
    backup
    filesperset 10
    format "ct_%s_%p_%t"
    current controlfile;
    sql "alter system archive log current";
    backup
    filesperset 150
    format "al_%s_%p_%t"
    archivelog all not backed up 2 times;
    change archivelog all validate;
    backup
    filesperset 50
    format "al_%s_%p_%t"
    archivelog until time 'SYSDATE-4' delete input;
    change archivelog all validate;
    release channel ch1;
    #release channel ch2;
    #release channel ch3;
    exit
    EOF1
    EOF
    Now, because the backup takes so long, we often have archive log backups that kick off while it is running. We generally run Restore Validates of this backup once a week.
    *********************************Problem*******************************
    Getting to the problem: the media manager generally completes the job with a status 6, which in Veritas means that the backup failed to back up the requested files. I guess this could be something to do with the media management layer, but my colleagues and I are of the opinion that it is related to the archive log backups running in between (hence the 'change archivelog all validate;' appearing so many times in the script).
    Has anyone got any ideas?
    Thanks again for any posts / ideas,
    Mark.

    No, I mean it really tries to back up everything: control file, archive logs, etc.
    Yeah, I will admit it's a bit strange; I didn't code it myself. The idea was to do a bit of housekeeping on the archived redo logs at the same time (hence the file system operations on the archived redo logs).
    You see, we thought that this might be causing the issue.
    Any ideas are welcome,
    thanks again,
    Mark.

  • Incomplete Recovery Fails using Full hot backup & Archive logs !!

    Hello DBA's !!
    I am working on a recovery scenario where I have taken one full hot backup of my Portal database (EPR) and restored it on a new test server. I also restored the archive logs for the 6 days following that full hot backup, and restored the latest (binary) control file to its original location. Then I started the recovery scenario as follows:
    1) Installed Oracle 10.2.0.2, matching the restored version of Oracle.
    2) Configured tnsnames.ora, listener.ora and sqlnet.ora with the hostname of the test server.
    3) Restored all Hot backup files from Tape to Test Server.
    4) Restored all archive logs from tape to Test server.
    5) Restored Latest Binary Control file from Tape to Test Server.
    6) Now, started recovery using the following command from the SQL prompt:
    SQL> recover database until cancel using backup controlfile;
    7) Open the database after recovery completes using the RESETLOGS option.
    In the above scenario I completed the steps up to 5) successfully. But when I execute step 6), the recovery completes with the warning that RECOVER succeeded but OPEN RESETLOGS would get the error "file needs more recovery to be consistent". Please find the following snapshot:
    ORA-00279: change 7001816252 generated at 01/13/2008 12:53:05 needed for thread
    1
    ORA-00289: suggestion : /oracle/EPR/oraarch/1_9624_601570270.dbf
    ORA-00280: change 7001816252 for thread 1 is in sequence #9624
    ORA-00278: log file '/oracle/EPR/oraarch/1_9623_601570270.dbf' no longer needed
    for this recovery
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    ORA-00308: cannot open archived log '/oracle/EPR/oraarch/1_9624_601570270.dbf'
    ORA-27037: unable to obtain file status
    SVR4 Error: 2: No such file or directory
    Additional information: 3
    ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
    ORA-01194: file 1 needs more recovery to be consistent
    ORA-01110: data file 1: '/oracle/EPR/sapdata1/system_1/system.data1'
    SQL> alter database open resetlogs;
    alter database open resetlogs
    ERROR at line 1:
    ORA-01194: file 1 needs more recovery to be consistent
    ORA-01110: data file 1: '/oracle/EPR/sapdata1/system_1/system.data1'
    Let me know what could be the reason behind the recovery failure.
    Note: I tried to open the database using the last full hot backup only, without applying any archives, and the database opened successfully. This means my database installation and configuration are OK.
    Please let me know why my incomplete recovery using the archive logs fails.
    Atul Patil.

    Oh, you opened a new thread, so here it is again:
    There is nothing wrong.
    You restored your backup, archives, etc.
    You started your recovery and Oracle applied all the archives, but the archive
    '/oracle/EPR/oraarch/1_9624_601570270.dbf'
    does not exist, because it corresponds to your current online redo log, which is not present.
    The recovery process therefore stops at that point.
    The solution is:
    restart your recovery with
    recover database until cancel using backup controlfile
    and when Oracle suggests '/oracle/EPR/oraarch/1_9624_601570270.dbf',
    type CANCEL.
    Now you should be able to open your database with OPEN RESETLOGS, as sketched below.
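    A minimal SQL*Plus sketch of that sequence, assuming the restored datafiles and the backup controlfile are already in place (the archive log name is the one from the post; your prompt will suggest your own):

    startup mount;
    recover database until cancel using backup controlfile;
    -- apply the suggested logs, and when Oracle suggests
    -- /oracle/EPR/oraarch/1_9624_601570270.dbf, type CANCEL
    alter database open resetlogs;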

  • Full Hot Backup using RMAN?

    Dear all,
    I want to take a full hot backup every week on Sunday, and the following are the commands:
    run {
    allocate channel ch1 type disk format '/db/BACKUP/RMAN/backup_%d_%t_%s_%p_%U.bck';
    backup
    incremental level 0
    database
    plus archivelog delete input;
    backup current controlfile;
    backup spfile;
    release channel ch1;
    }
    I have the following questions:
    1) Do I need to create a directory named '/db/BACKUP/RMAN/backup_%d_%t_%s_%p_%U.bck' so that the backup copies are put there?
    2) In what format should the script be saved, where should it go, and how do I run it on a weekly schedule?
    3) I saw the above script uses "channel ch1"; can I change it to "channel ch2"?
    I am a beginner with RMAN, so please let me know.
    Best Regards,
    amy

    Which database version on which OS (looks like Unix/Linux)?
    You have to create the directory '/db/BACKUP/RMAN'; 'backup_%d...' is the filename pattern that RMAN creates.
    %d Specifies the name of the database;
    %t Specifies the backup set time stamp, which is a 4-byte value derived as the number of seconds elapsed since a fixed reference time. The combination of %s and %t can be used to form a unique name for the backup set;
    %s Specifies the backup set number. This number is a counter in the control file that is incremented for each backup set. The counter value starts at 1 and is unique for the lifetime of the control file. If you restore a backup control file, then duplicate values can result. Also, CREATE CONTROLFILE initializes the counter back to 1;
    %p Specifies the piece number within the backup set. This value starts at 1 for each backup set and is incremented by 1 as each backup piece is created;
    %U Specifies a system-generated unique filename (default).
    For scheduling, a cron job can be defined, or you can use Enterprise Manager (10g and higher); a sketch follows below.
    The channel name doesn't matter.
    Werner
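    A minimal sketch of how such a weekly cron schedule might look; every path, the SID and the script names below are assumptions for illustration, not something from the original post:

    #!/bin/sh
    # /home/oracle/scripts/weekly_full_backup.sh -- wrapper invoked by cron
    export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
    export ORACLE_SID=ORCL
    export PATH=$PATH:$ORACLE_HOME/bin
    # run the RMAN commands stored in full_backup.rman and keep a dated log file
    rman target / cmdfile=/home/oracle/scripts/full_backup.rman log=/home/oracle/logs/full_backup_`date +%Y%m%d`.log

    # crontab entry: run the wrapper every Sunday at 02:00
    0 2 * * 0 /home/oracle/scripts/weekly_full_backup.sh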

  • RMAN Full Hot Backup Daily

    Hi,
    I have a production database going live this week.
    This is my RMAN backup script for taking a daily full hot backup at midnight.
    I have enabled FLASH_RECOVERY_AREA
    DB_RECOVERY_FILE_DEST=/backup/flash_recovery_area
    log_archive_dest=/backup/archivelog
    log_archive_format='arch_%d_%S_%T';
    =========================================================
    rman target=/
    configure controlfile autobackup on;
    run {
    allocate channel c1 device type disk;
    crosscheck backup;
    backup as compressed backupset database;
    sql 'alter system switch logfile';
    crosscheck backup;
    release channel c1;
    }
    exit
    ===========================================================
    The above script is scheduled using a cron job.
    =============================================================
    Here I am not taking a separate archive log backup, because by default the archive logs are created in 2 places:
    (a) FLASH RECOVERY AREA
    (b) /backup/archivelog
    So the archive logs are created date-wise in the flash recovery area.
    I have a flash recovery area structure like this for every day:
    /backup/flash_recovery_area/PROD/archivelog/29_09_2007
    /backup/flash_recovery_area/PROD/autobackup/29_09_2007
    /backup/flash_recovery_area/PROD/archivelog/30_09_2007
    /backup/flash_recovery_area/PROD/autobackup/30_09_2007
    The above are 2 days' backups, for the 29th and the 30th.
    I can simply send those 29_09_2007 and 30_09_2007 directories to the tape drive.
    =======================================================================
    For disaster recovery, I can restore those directories to the flash recovery area and do the recovery using RMAN.
    =============================================
    Friends, am I going about this the correct way? If anything is wrong, please correct me.
    My doubt is that I have not used any command for backing up the archive logs in the RMAN script, because the archive logs are already created date-wise in the flash recovery area.
    Am I correct?

    Hi,
    Thanks for the reply,
    My requirement is a full hot backup daily at 2 AM. I have to take this daily, so I scheduled it using a cron job. I need to maintain 7 days of backups in the backup location.
    Here is my script
    # Export Environment Variables
    export ORACLE_HOME=/oradata/oracle
    export ORACLE_SID=PROD
    export PATH=$PATH:$ORACLE_HOME/bin
    # RMAN Full Backup
    rman target / catalog rman/rman << EOF
    configure controlfile autobackup on;
    configure retention policy to redundancy 7;
    run {
    crosscheck backup;
    backup as compressed backupset database;
    sql 'alter system switch logfile';
    backup as compressed backupset archivelog all;
    backup as compressed backupset current controlfile;
    crosscheck backup;
    crosscheck archivelog all;
    delete noprompt obsolete;
    delete noprompt expired backup;
    }
    exit
    EOF
    =============================================================
    Can you please verify whether the script is good or not?

  • Error in Backup job scheduling in DB13

    Hi All
    A backup job scheduled in DB13 throws an error. I am using Oracle as the database and ERP 6.0.
    The database and application are on different servers. It was working fine before, and I didn't change any password.
    I can run the backup job successfully directly from BRTOOLS on the database server. Please provide any hint.
    Job started
    Step 001 started (program RSDBAJOB, variant &0000000000060, user )
    No application server found on database host - rsh/gateway will be used
    Execute logical command BRBACKUP On host DLcSapOraG08
    Parameters:-u / -jid INLOG20090120204230 -c force -t online -m incr -p initerd.sap -w use_dbv -a -c force -p initerd.sap -cds -w use_rmv
    BR0051I BRBACKUP 7.00 (31)
    BR0128I Option 'use_dbv' ignored for 'incr'
    BR0055I Start of database backup: bdztcorv.ind 2009-01-20 20.42.31
    BR0484I BRBACKUP log file: D:\oracle\ERD\sapbackup\bdztcorv.ind
    BR0280I BRBACKUP time stamp: 2009-01-20 20.42.32
    BR0301E SQL error -1017 at location BrDbConnect-2, SQL statement:
    'CONNECT /'
    ORA-01017: invalid username/password; logon denied
    BR0310E Connect to database instance ERD failed
    BR0280I BRBACKUP time stamp: 2009-01-20 20.42.32
    BR0301E SQL error -1017 at location BrDbConnect-2, SQL statement:
    'CONNECT /'
    ORA-01017: invalid username/password; logon denied
    BR0310E Connect to database instance ERD failed
    BR0056I End of database backup: bdztcorv.ind 2009-01-20 20.42.32
    BR0280I BRBACKUP time stamp: 2009-01-20 20.42.32
    BR0054I BRBACKUP terminated with errors
    BR0280I BRBACKUP time stamp: 2009-01-20 20.42.32
    BR0291I BRARCHIVE will be started with options '-U -jid INLOG20090120204230 -d disk -c force -p initerd.sap -cds -w use_rmv'
    BR0002I BRARCHIVE 7.00 (31)
    BR0181E Option '-cds' not supported for 'disk'
    BR0280I BRARCHIVE time stamp: 2009-01-20 20.42.33
    BR0301W SQL error -1017 at location BrDbConnect-2, SQL statement:
    'CONNECT /'
    ORA-01017: invalid username/password; logon denied
    BR0310W Connect to database instance ERD failed
    BR0007I End of offline redo log processing: adztcorw.log 2009-01-20 20.42.32
    BR0280I BRARCHIVE time stamp: 2009-01-20 20.42.33
    BR0005I BRARCHIVE terminated with errors
    BR0280I BRBACKUP time stamp: 2009-01-20 20.42.33
    BR0292I Execution of BRARCHIVE finished with return code 3
    External program terminated with exit code 3
    BRBACKUP returned error status E
    Job finished

    Hi,
    not sure if the recommendations given will address this issue.
    You are getting this error:
    BR0301E SQL error -1017 at location BrDbConnect-2, SQL statement:
    'CONNECT /'
    ORA-01017: invalid username/password; logon denied
    the log file indicates:
    > No application server found on database host - rsh/gateway will be used
    This indicates that the user connecting from the application server to the DB server is not properly configured to perform the DB tasks on it.
    So, the first question is whether you have configured a gateway on the DB server (and how), or whether you are using remote shell.
    Second question: you say you can do backups on the DB server.
    > I can run the backup job successfully directly from BRTOOLS on the database server
    How exactly did you run the backup job (what is the exact command line, and which OS user executed it)?
    What is the OS of the DB server?
    I have reread your post; your OS is Windows, therefore you are hitting the "typical" error on Windows.
    You have executed your backup as <sid>adm and it works. Unfortunately, on Windows, SAP is executed by SAPService<sid>, and this is the user that should be connecting to your DB server, and this is the user that cannot execute the backup.
    The fact that you can run the backup as <sid>adm on Windows does not mean that you have SAPService<sid> properly configured.
    For the error (see above), I think the OPS$ user for this account is not properly configured in the DB server. Take a look at the note mentioned by KT and pay attention to the SAPService<sid> configuration; a quick check is sketched below.
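    A hedged way to check this from SQL*Plus on the DB server (the exact OPS$ user name depends on your domain and SID, so the pattern below is only illustrative; the SAP note mentioned above describes the authoritative setup):

    -- on Windows there should be an OPS$ user for SAPService<SID>,
    -- e.g. OPS$<DOMAIN>\SAPSERVICE<SID>, and it must not be locked
    select username, account_status from dba_users where username like 'OPS$%';

    As long as that user is missing or misconfigured, the 'CONNECT /' issued by BRBACKUP/BRARCHIVE will keep failing with ORA-01017.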
