Archived ZIP logs with 0 KB

Hi all,
I set up some Log Categories and Log Destinations for some JWD applications in my WebAS. These logs are archived into ZIP files in the /logs/archives folder when they reach 10 MB. However, these ZIP files always end up at 0 KB.
Could someone provide some help?
regards,
Angelo

Hi Angelo,
Could you please clarify how you set up the archive rules?
Thanks,
Siva Kumar

Similar Messages

  • RMAN9I HOW TO RESTORE ARCHIVE LOGS WITH LIMITED DISK SPACE

    Product: RMAN
    Date written: 2002-12-09
    RMAN9I HOW TO RESTORE ARCHIVE LOGS WITH LIMITED DISK SPACE
    ==========================================================
    PURPOSE
    This document describes the MAXSIZE feature available in RMAN in Oracle 9.2
    and later.
    How to restore archive logs with limited disk space
    When recovering a database with RMAN from an old backup, you often find that
    the destination disk does not have enough space to restore all of the archived
    redo log files. In that case the work is split into several restore and
    recovery jobs: when the first restore and recovery job finishes, the archived
    redo log files are deleted and the next ones to be applied are restored, then
    recovery is run again. This is repeated until the database has been recovered
    to the desired point in time.
    Starting with Oracle 9iR2 (9.2.0.x), RMAN provides the MAXSIZE option, which
    lets you control the disk space used to restore the archive log files.
    If disk space is very limited, for example when the free disk space is smaller
    than the combined size of all the archive logs, the MAXSIZE option is very useful.
    When this option is specified, RMAN instructs the Media Manager to restore only
    as many archive logs as fit in the given disk space. Additional restore
    operations take place whenever the last restored archive log has been applied.
    The MAXSIZE option lets all of this be handled in a single RMAN job and
    prevents mistakes.
    Below is an example illustrating MAXSIZE; it consists of six steps.
    STEP 1: Add data to the database to force log switches
    STEP 2: Back up the database and archive logs and delete the logs
    STEP 3: Add additional data to force new log switches
    STEP 4: Remove the data file and simulate a database crash
    STEP 5: Restore the data file from the backup
    STEP 6: Recover the database using MAXSIZE
    The example applies to both Unix and Windows.
    STEP 6 will be run twice, under the following conditions:
    1) MAXSIZE smaller than the archive log size:
    In this case you will get the RMAN-6558 error message,
    so MAXSIZE must be set larger than the archive log size.
    2) MAXSIZE larger than the archive log size:
    If, for example, it is set large enough to hold several archive logs,
    the restore/recovery runs transparently to the user.
    That is, the archive logs are restored, then applied and deleted; new
    archive logs are then restored, applied, and deleted, and this is repeated
    until the recovery is finished. While this is going on, RMAN issues no
    messages.
    # Step 1: INSERT enough new data to generate log switches
    create table rman_tst (col1 varchar2 (10));
    begin
    for i in 1..30000 loop
    insert into rman_tst values ('test' || i);
    commit;
    end loop;
    end;
    /
    # Step 2: BACKUP the database and the archive logs automatically
    # and then delete the input
    run {
    backup database format='/web01/usupport/krosenme/admin/backups/db_%d%s%t'
    plus archivelog format='/web01/usupport/krosenme/admin/backups/arch_%d%s%t'
    delete input;
    }
    # Step 3: INSERT enough new data to generate new log switches
    begin
    for i in 1..30000 loop
    insert into rman_tst values ('test' || i);
    commit;
    end loop;
    end;
    /
    # Step 4: REMOVE users01.dbf file and crash the database
    mv users01.dbf users01.org
    shutdown abort
    # Restore is now needed as the data file is deleted. The backup was
    # taken before the new data was added to it, thus archive logs are
    # needed to bring the database up to date
    # Step 5: RESTORE the data file from the full backup
    run {
    restore datafile '/web01/usupport/krosenme/oradata/kro_920/users01.dbf';
    }
    # Step 6: RECOVER
    run {
    recover database delete archivelog maxsize 10 K;
    }
    # This will fail with RMAN-6558 as the archived log has a size of 16 KB,
    # which is bigger than the MAXSIZE limit of 10 KB. So the error is expected
    # and MAXSIZE works as designed.
    # Now rerun STEP 6, but with MAXSIZE 50 K
    run {
    recover database delete archivelog maxsize 50 K;
    }
    RELATED DOCUMENTS
    Recovery Manager Reference, Release 2 (9.2)


  • How to restore archive logs with RMAN

    Hi,
    here is the scenario:
    We have lost everything because of a disk failure.
    We have a full cold backup, plus archive log backups that were created after the cold backup and taken with RMAN.
    After restoring from the cold backup, the archive log backups are not recorded in the control file. How can we catalog the archive log backups and then restore them?
    Could you give me the exact RMAN commands for this?
    Best Regards,
    Kamil

    A cold backup means an offline backup: you shut the database down normally and copy the datafiles to another location.
    I think the operating system does not matter much, but it is Red Hat Enterprise Linux Server.
    We back up the archive logs with an RMAN script, which is:
    run {
    allocate channel c1 type disk format '$BKUPLOC/arch_%d_%u_%s_%p.bkp';
    change archivelog all validate;
    sql 'ALTER SYSTEM ARCHIVE LOG CURRENT';
    backup archivelog time between 'SYSDATE - (30*60/(60*60*24))' and 'sysdate';
    release channel c1;
    }
    So here are the details:
    1. Because everything was lost, we restored all files (datafiles, control files, redo logs, etc.) from the offline backup, meaning we copied all files from the other location back to their corresponding locations.
    2. We want to apply the archive logs that were created between the cold backup and the disk failure, and we have these archive logs in the backups taken by RMAN.
    So,
    first of all, we want to register these archive log backups (because the backup information does not exist in the restored control file),
    and then restore them, and then apply them.
    We need the RMAN commands to register these backups and restore them.
    Regards,
    Kamil
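
    A minimal sketch of the commands in question, assuming Oracle 10g or later (CATALOG START WITH, which registers backup pieces in the control file, was introduced in 10g) and a hypothetical backup directory; adjust the path and the time window to your case:
    run {
    # register every backup piece found under this (hypothetical) directory
    catalog start with '/backup/rman/arch' noprompt;
    # restore the archive logs recorded in those backup pieces
    restore archivelog from time 'SYSDATE - 7';
    }
    The logs can then be applied from SQL*Plus with
    recover database using backup controlfile until cancel;
    followed by
    alter database open resetlogs;
    since the restored control file predates the redo being applied.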

  • Dynamically loading and registering JDBC driver from an archive (zip - jar)

    I'm programming a JDBC driver tester.
    I have to load any driver dynamically from an archive (jar or zip) after the user uploads it.
    I think I did the ClassLoader part correctly: I can get an instance of the driver and call methods like getMinorVersion(), but registering it fails.
    There is no error, but the driver is not registered.
    I read the DriverManager log (via its log writer) and it says:
    skipping: driver[className=oracle.jdbc.driver.OracleDriver,oracle.jdbc.driver.OracleDriver@5439fe]
    skipping: driver[className=oracle.jdbc.driver.OracleDriver,oracle.jdbc.driver.OracleDriver@2b7db1]
    (two times; it looks curious, doesn't it?)
    This is a part of my code :
    Driver pilote = (Driver)Class.forName(driverClass.getName(), true,this).newInstance();
    System.out.println("Minor Version = "+ pilote.getMinorVersion());
    PrintWriter printwriter = new PrintWriter(new OutputStreamWriter(System.out));
    DriverManager.setLogWriter(printwriter);
    DriverManager.registerDriver(pilote);
    System.out.println("Driver registered\n");

    I made a simple test:
    public static void main(String[] param) {
      System.out.println("Loading Driver from JAR ...");
      try {
        File jar = new File("c://mbm//drivers//oracle.jar");
        URL aurl[] = { jar.toURL() };
        URLClassLoader urlclassloader = new URLClassLoader(aurl, ClassLoader.getSystemClassLoader());
        Class.forName("oracle.jdbc.driver.OracleDriver", true, urlclassloader);
        PrintWriter printwriter = new PrintWriter(new OutputStreamWriter(System.out));
        DriverManager.setLogWriter(printwriter);
        Enumeration listDriver = DriverManager.getDrivers();
        System.out.println("[---------Drivers-----------]");
        while (listDriver.hasMoreElements()) {
          Driver driver = (Driver) listDriver.nextElement();
          System.out.println("->> " + driver.getClass().getName());
        }
      } catch (MalformedURLException e) {
        e.printStackTrace();
      } catch (ClassNotFoundException e) {
        e.printStackTrace();
      }
    }
    This displays:
    Loading Driver from JAR ...
    skipping: driver[className=oracle.jdbc.driver.OracleDriver,oracle.jdbc.driver.OracleDriver@9ec21d67]
    [---------Drivers-----------]
    D:\www\tomcat\webapps\mbm\WEB-INF\classes>
    I think in this case there is only one instance.

  • BT Cloud only downloading 22-byte archive ZIPs?

    I've got a few largish files stored on BT Cloud - everything has been fine until today.
    Now, on clicking a file to download, I only get a 22-byte ZIP archive, not the file in question.
    Is this a known bug, that's just surfaced?
    Many thanks in advance,
    Bizarreo

    Hi bizarreo,
    Did you manage to get this sorted? If you need any help please use the 'contact the mods' link in my forum profile under the 'about me' section to send in your details and we can raise an issue with the Cloud team.  You can find the link by clicking on my username.
    Thanks
    Neil
    BTCare Community Mod
    If we have asked you to email us with your details, please make sure you are logged in to the forum, otherwise you will not be able to see our ‘Contact Us’ link within our profiles.
    We are sorry but we are unable to deal with service/account queries via the private message(PM) function so please don't PM your account info, we need to deal with this via our email account :-)
    If someone answers your question correctly please let other members know by clicking on ’Mark as Accepted Solution’.

  • Hoping for a quick response : EXP and Archived REDO log files

    I apologize in advance if this question has been asked and answered 100 times. I admit I didn't search; I don't have time. I'm leaving on vacation tomorrow, and I need to know if I'm correct about something to do with backup / restore.
    We have 10g R2 running a single instance on a single server. The application vendor has "embedded" Oracle with their application. The vendor's backup is a batch file using EXP, thus:
    exp system/xpwdxx@db full=y file=D:\Orant\admin\db\EXP\db_full.dmp log=D:\Orant\admin\db\EXP\db_full.txt direct=y compress=y
    This command is executed nightly at midnight. The files are then backed up by our nightly backup to offsite storage media.
    The database is running in autoarchive mode. The problem is, the archived redo files filled the drive they were being stored on, and it is the drive the database is on. I used OS commands to move 136 GB of archived redo logs onto other storage media to free the drive.
    My question: since the EXP runs at midnight, when there is likely NO activity, do I need to run in autoarchive mode? From what I have read, you cannot even apply archived redo log files to this type of backup strategy (IMP). Is that true? We are OK with losing changes since our last EXP. I have read a lot about restoring consistent vs. inconsistent backups, and just need to know: if my disk fails, and I have to start with a clean install of Oracle and nothing else, can I IMP this EXP and get back up and running as of the last EXP? Or do I need the archived redo log files back to July 2009 (136 GB of them)?
    Hoping for a quick response
    Best Regards, and thanks in advance
    Bruce Davis

    Bruce Davis wrote:
    Amardeep Sidhu,
    Thank you for your quick reply. I am reading in the other responses that since I am using EXP without consistent=y, I might not even have a backup. The application vendor said that with this dmp file they can restore us to the most recent backup. I don't really care for this strategy as it is untested. I asked them to verify that they could restore us, and they said they tested the dmp file and it was OK.
    Thank you for taking the time to reply.
    Best Regards,
    Bruce

    The dump file is probably OK in the sense that it is not corrupted and can be used in an imp operation. That doesn't mean the data in it is transactionally consistent. And to use it at all, you have to have a database up and running. If the database is physically corrupted, you'll have to rebuild a new database from scratch before you can even think about using your dmp file.
    Vendors never understand databases. I once had a vendor tell me that Oracle's performance would be intolerable if there were more than 5 concurrent connections. Well, maybe in HIS product... Discussions terminated quickly after he made that statement.
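
    For reference, a sketch of the same nightly export taken as a single consistent snapshot, assuming the paths and credentials from the command above; consistent=y makes exp read all tables as of one SCN (direct=y and compress=y are omitted here for simplicity):
    exp system/xpwdxx@db full=y consistent=y file=D:\Orant\admin\db\EXP\db_full.dmp log=D:\Orant\admin\db\EXP\db_full.txt
    Either way, an export only protects the data as of the time it ran; it is a logical backup, and archived redo cannot be applied to it.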

  • Archiving EHS document with ADK

    Dear all,
    I am running ECC 5.0. I am trying to archive EHS documents with ADK using the object CV_DVS. I am following the documentation from the IMG. Under the step "Check Document Types and Document Statuses" I created the statuses HI, AL and AR for the document type SBR. I ran the preprocessing step for CV_DVS and get nothing to archive. I checked the log and here is the message:
    Job started
    Step 001 started (Program RC1_DVSARCH_PREP, variant TEST1)
    Change report bodies:
    Report DW_PERFORMI XXX_US_SDS E 00002 is not archived; document for report missing in SAP database
    The status you enter does not exist
    The status you enter does not exist
    No reports were set to the status "Archiving is Running (AL)'
    Job finished.
    I just wonder: has anyone used CV_DVS to archive those MSDS reports? I would like to know how you configure
    "Check Document Types and Document Statuses"; it seems something is missing in my ECC 5.0 default configuration for the table.
    Thanks.

    Hi,
    As you experienced, DIALOG function modules will not work in batch because there is no connection to a frontend (PC).
    You have to get your PDFs onto a server so you can process them in batch.
    Maybe you can then use the FM ALINK_DOCUMENTS_CREATE_FILE.
    Success,
    Rob

  • Cannot extract Zip file with Winzip after zipping with java.util.zip

    Hi all,
    I wrote a class to zip and unzip text files together; the files can be zipped and unzipped successfully within Java. However, I cannot extract the zip file with WinZip or even WinRAR after zipping it with Java.
    Please help to comment, thanks~
    Below is the code:
    =====================================================================
    package myapp.util;
    import java.io.* ;
    import java.util.zip.* ;
    import myapp.exception.UserException ;
    public class CompressionUtil {

      public CompressionUtil() {
        super() ;
      }

      public void createZip(String zipName, String fileName)
          throws ZipException, FileNotFoundException, IOException, UserException {
        FileOutputStream fos = null ;
        BufferedOutputStream bos = null ;
        ZipOutputStream zos = null ;
        File file = null ;
        try {
          file = new File(zipName) ; //new zip file
          if (file.isDirectory()) //check if it is a directory
            throw new UserException("Invalid zip file ["+zipName+"]") ;
          if (file.exists() && !file.canWrite()) //check if it is readonly
            throw new UserException("Zip file is ReadOnly ["+zipName+"]") ;
          if (file.exists()) //overwrite the existing file
            file.delete();
          file.createNewFile();
          //instantiate the ZipOutputStream
          fos = new FileOutputStream(file) ;
          bos = new BufferedOutputStream(fos) ;
          zos = new ZipOutputStream(bos) ;
          this.writeZipFileEntry(zos, fileName); //write the file into the zip
          zos.finish() ;
        } catch (ZipException ze) {
          throw ze ;
        } catch (FileNotFoundException fnfe) {
          throw fnfe ;
        } catch (IOException ioe) {
          throw ioe ;
        } catch (UserException ue) {
          throw ue ;
        } finally {
          //close all the streams and the file
          if (fos != null)
            fos.close() ;
          if (bos != null)
            bos.close();
          if (zos != null)
            zos.close();
          if (file != null)
            file = null ;
        }//end of try-catch-finally
      }

      private void writeZipFileEntry(ZipOutputStream zos, String fileName)
          throws ZipException, FileNotFoundException, IOException, UserException {
        BufferedInputStream bis = null ;
        File file = null ;
        ZipEntry zentry = null ;
        byte[] bArray = null ;
        try {
          file = new File(fileName) ; //instantiate the file
          if (!file.exists()) //check if the file does not exist
            throw new UserException("No such file ["+fileName+"]") ;
          if (file.isDirectory()) //check if the file is a directory
            throw new UserException("Invalid file ["+fileName+"]") ;
          //instantiate the BufferedInputStream
          bis = new BufferedInputStream(new FileInputStream(file)) ;
          //get the content of the file and put it into the byte[]
          int size = (int) file.length();
          if (size == -1)
            throw new UserException("Cannot determine the file size [" +fileName + "]");
          bArray = new byte[size];
          int rb = 0;
          int chunk = 0;
          while ((size - rb) > 0) {
            chunk = bis.read(bArray, rb, size - rb);
            if (chunk == -1)
              break;
            rb += chunk;
          }//end of while ((size - rb) > 0)
          //compute the CRC32, required for STORED entries
          CRC32 crc = new CRC32() ;
          crc.update(bArray, 0, size);
          //instantiate the ZipEntry
          zentry = new ZipEntry(fileName) ;
          zentry.setMethod(ZipEntry.STORED) ;
          zentry.setSize(size);
          zentry.setCrc(crc.getValue());
          //write all the info to the ZipOutputStream
          zos.putNextEntry(zentry);
          zos.write(bArray, 0, size);
          zos.closeEntry();
        } catch (ZipException ze) {
          throw ze ;
        } catch (FileNotFoundException fnfe) {
          throw fnfe ;
        } catch (IOException ioe) {
          throw ioe ;
        } catch (UserException ue) {
          throw ue ;
        } finally {
          //close the input stream
          if (bis != null)
            bis.close();
          if (file != null)
            file = null ;
        }//end of try-catch-finally
      }
    }

    Tried~
    The problem is still there~ >___<
    Anyway, thanks for sharing the information~
    The message is:
    Cannot open file: it does not appear to be a valid archive.
    If you downloaded this file, try downloading the file again.
    The problem may be here:
    if (fos != null)
    fos.close() ;
    if (bos != null)
    bos.close();
    if (zos != null)
    zos.close();
    if (file != null)
    file = null ;
    The fos is closed before bos, so the last buffer is never
    flushed to the file.
    zos.close() is enough.

  • The file structure online redo log, archived redo log and standby redo log

    I have read some Oracle documentation on file structure and settings in a Data Guard environment, but I still have some doubts. What is the best file structure or setup in Oracle 10.2.0.4 on UNIX for a Data Guard environment with 4 primary databases and 4 physical standby databases? Based on the Oracle documents, there are 3 kinds of redo logs: online redo logs, archived redo logs and standby redo logs. The basic settings are:
    1. Online redo logs --- These must exist on the primary database and on a logical standby database, but are not strictly necessary on a physical standby, because a physical standby is not open and does not generate redo. However, if online redo logs are not set up on the physical standby, how can the standby operate as primary after a failover or switchover? In my standby databases, online redo logs have been set up.
    2. Archived redo logs --- It is obvious that the primary database and the logical and physical standby databases all need these. The primary uses them to archive its log files and ship them to the standby; the standby applies the received archived logs to the database.
    3. Standby redo logs --- The documentation says a standby redo log is similar to an online redo log, except that it is used to store redo data received from another database, and that a standby redo log is required to implement the maximum protection and maximum availability levels of data protection, real-time apply, and cascaded destinations. So it seems that standby redo logs only need to be set up on the standby database, not on the primary. Is my understanding correct? When I review the current redo log settings in my environment, I find that standby redo log directories and files have been set up on both the primary and the standby databases. I would like to get more information and education from the experts: what is the best setting or structure on the primary and standby databases?

    FZheng:
    Thanks for your input. It is clear that we need all 3 types of redo logs on both databases. You answered my question.
    But I have another one. The Oracle documentation says that if you have configured a standby redo log on one or more standby databases in the configuration, you should ensure the size of the current standby redo log file on each standby database exactly matches the size of the current online redo log file on the primary database. It says that at log switch time, if there are no available standby redo log files that match the size of the new current online redo log file on the primary database, the primary database will shut down.
    My current Data Guard environment setting is: on the primary DB, the online redo log group size is 512M and the standby redo log group size is 500M; on the standby DB, the online redo log group size is 500M and the standby redo log group size is 750M.
    This was set up by someone I don't know. Is this setting OK, or should I change the standby redo log on the standby DB to 512M to exactly match the redo log size on the primary?
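    A quick way to compare the sizes is a sketch like the following, using the standard v$ views; the ALTER statement uses a hypothetical file name and the 512M size of the primary's online logs mentioned above:
    -- run on each database: online vs. standby redo log sizes
    select group#, thread#, bytes/1024/1024 as mb from v$log;
    select group#, thread#, bytes/1024/1024 as mb from v$standby_log;
    -- on the standby: add a standby log group matching the primary's 512M
    alter database add standby logfile ('/u01/oradata/stby/srl_04.log') size 512m;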

  • Recover Database is taking more time for first archived redo log file

    Hai,
    Environment Used :
    Hardware : IBM p570 machine with P6 processor Lpar of .5 CPU and 2.5 GB Ram
    OS : AIX 5.3 ML 07
    Cluster: HACMP 5.4.1.2
    Oracle Version: 9.2.0.4 RAC
    SAN : DS8100 from IBM
    I have used the flash copy option to copy the database from production to a test machine, then tried to recover the database to a consistent state using the command "recover automatic database until cancel". The system takes a long time, and from the alert log it was found that, for the first archived log only, it reads all the datafiles, taking about 3 seconds per datafile. Since I have more than 500 datafiles, it takes nearly 25 minutes to apply the first archived redo log file. All other log files are applied immediately, without any delay. Any suggestion to improve the speed will be highly appreciated.
    Regards
    Sridhar

    After changing the LPAR settings to 2 CPUs and 5 GB RAM, the problem was solved.

  • Appsutil.zip created with Exceptions

    hi
    I have been following the post-patch steps for patch 9535311.
    In the post steps,
    while creating the appsutil.zip file from the application tier, I got the exception below:
    ppltst@ebsdevdb on /ebdbh/app/ebprdappl # $ADPERLPRG $AD_TOP/bin/admkappsutil.pl
    Starting the generation of appsutil.zip
    Log file located at /ebdbh/app/ebprdappl/admin/log/MakeAppsUtil_06140722.log
    output located at /ebdbh/app/ebprdappl/admin/out/appsutil.zip
    java.lang.NoClassDefFoundError: java/util/HashMap
    at
    at oracle.apps.ad.tools.MakeAppsUtil.<init>(Compiled Code)
    at oracle.apps.ad.tools.MakeAppsUtil.main(Compiled Code)
    Exception in thread "main" MakeAppsUtil completed successfully.
    Can I ignore the above exception?
    Thanks
    With Regards
    A-Z

    Please see these docs.
    After Applying Patch 9535311 Get java.lang.NoClassDefFoundError: java/util/HashMap Error [ID 1188327.1]
    Creating "appsutl.zip" Failed On "java.lang.NoClassDefFoundError: java/util/HashMap", What is the Potential Solution ? [ID 1310838.1]
    Thanks,
    Hussein

  • OBIEE Dev VM Install Issue:Cannot open archive "ARCHIVE.zip" as archive

    All,
    I downloaded all eleven files for the OBIEE 11.1.1.6.2 BP1 - Sample Application (V207).
    I checked the MD5 Sum of the first two files and the MD5 Sums match, however when I use 7zip to extract the files, I get an error that says: Cannot open archive "ARCHIVE.zip" as archive
    Has anyone encountered this issue? I verified that the files are not read-only and that they are not "blocked" by Windows.
    I am running 64-bit Windows 7 Home Edition on a Dell laptop.
    I encountered this error on two separate archives with correct MD5 Sums.
    Any help would be appreciated! Thanks!
    Nathan

    Hi,
    I got the same problem. Have you found any solution for this? Thanks.

  • Archive ZIP files to IXOS

    Hi ,
    Can the FM ARCHIV_CONNECTION_INSERT be used to archive .zip files to an IXOS server? If not, is there another function module that can be used to archive ZIP files?

    Yes, it is true: you can only create a new archive with the added file(s) inside; there is no way to add an existing file to an existing ZIP file.
    Good luck

  • How could I find specified date of REDO log and archive REDO log ?

    We use Oracle 11gR2 on Windows 2008 R2.
    1. How can I find the redo logs and archived redo logs for a specified date (2013/10/17, etc.)?
    2. What is the format of an archived redo log? (Is it a zipped file?)

    user12075536123 wrote:
    1)
    select * from v$archived_log;
    select * from v$log_history;
    but there is a possibility that there is no old data
    The view below contains no filename column:
    SQL> desc v$log_history
    Name                                      Null?    Type
    RECID                                              NUMBER
    STAMP                                              NUMBER
    THREAD#                                            NUMBER
    SEQUENCE#                                          NUMBER
    FIRST_CHANGE#                                      NUMBER
    FIRST_TIME                                         DATE
    NEXT_CHANGE#                                       NUMBER
    RESETLOGS_CHANGE#                                  NUMBER
    RESETLOGS_TIME                                     DATE
    There is NO data when archive log mode is disabled.
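    For the date question, a sketch of a bounded query (v$archived_log has a NAME column with the file path; the date is the one from the question). As to the second question: an archived redo log is a plain binary copy of a filled online redo log, not a zipped file:
    select name, sequence#, first_time, next_time
    from v$archived_log
    where first_time >= to_date('2013/10/17', 'YYYY/MM/DD')
      and first_time <  to_date('2013/10/18', 'YYYY/MM/DD')
    order by sequence#;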

  • Clone from standby  ended in ARC: Cannot archive online log based on backup

    Hi
    I am in a scenario where my prod DB is in one data center and the standby is in another.
    They are geographically separated. I have to get a copy of prod onto the standby data center side.
    Sending data over the network takes a long time, either with duplicate from active database or with taking a backup, copying it to the standby side, and restoring it.
    So I thought of duplicating the DB from the standby DB, which is in the same data center, using the 11g RMAN duplicate-from-active-standby command.
    I have simulated the scenario, which is as below:
    oracle version 11.2.0.1
    os version RHEL 5.4
    My procedure and parameters are as below.
    On the standby side from where I am copying (TARGET):
    1) on the standby:
    alter database recover managed standby database cancel;
    2) alter database convert to snapshot standby;
    which gave me
    /u01/data/DGSTD/archive/1_152_750425930.dbf
    /u01/data/DGSTD/archive/1_153_750425930.dbf
    /u01/data/DGSTD/archive/1_1_752604441.dbf
    /u01/data/DGSTD/archive/1_2_752604441.dbf
    3) alter database open;
    4) alter system switch logfile;
    Now from RMAN:
    RMAN> connect target sys/system@DGSTD
    connected to target database: DGPRM (DBID=578436102)
    RMAN> connect auxiliary sys/system@GGR
    connected to auxiliary database: NOTREAL (not mounted)
    RMAN>
    run{
    allocate channel prmy1 type disk;
    allocate channel prmy2 type disk;
    allocate channel prmy3 type disk;
    allocate channel prmy4 type disk;
    allocate channel prmy5 type disk;
    allocate auxiliary channel stby1 type disk;
    duplicate target database to ggr from active database
    spfile
    parameter_value_convert='DGSTD','GGR','/u01/data/DGSTD/','/u01/data/ggr/'
    set db_file_name_convert='/u01/oradata/DGSTD/','/u01/data/ggr/'
    set log_file_name_convert='/u01/oradata/DGSTD/','/u01/data/ggr/'
    set 'db_unique_name'='ggr'
    set 'audit_file_dest'='/u00/app/oracle/admin/ggr/adump'
    set 'sga_max_size'='140m'
    set 'pga_aggregate_target'='28940697'
    nofilenamecheck;
    }
    When the RMAN output reaches the point below:
    Starting backup at 31-MAY-11
    channel prmy1: starting datafile copy
    input datafile file number=00001 name=/u01/data/DGSTD/datafile/system01.dbf
    channel prmy2: starting datafile copy
    input datafile file number=00002 name=/u01/data/DGSTD/datafile/sysaux01.dbf
    the alert log of the clone DB shows massive numbers of errors saying
    ARC3: Cannot archive online log based on backup controlfile
    ARC2: Cannot archive online log based on backup controlfile
    ARC3: Cannot archive online log based on backup controlfile
    ARC2: Cannot archive online log based on backup controlfile
    and these fill up the whole filesystem, and finally the duplicate command throws an error.
    I am not sure what I am missing in the duplicate command, or whether it is even valid to duplicate a database from a snapshot standby.
    Can somebody shed some light on this, please?

    duplicate target database to ggr from active database
    spfile
    parameter_value_convert='DGSTD','GGR','/u01/data/DGSTD/','/u01/data/ggr/'
    set db_file_name_convert='/u01/oradata/DGSTD/','/u01/data/ggr/'
    set log_file_name_convert='/u01/oradata/DGSTD/','/u01/data/ggr/'
    set 'db_unique_name'='ggr'
    set 'audit_file_dest'='/u00/app/oracle/admin/ggr/adump'
    set 'sga_max_size'='140m'
    set 'pga_aggregate_target'='28940697'
    nofilenamecheck;
    }
    I think you should use the STANDBY clause, as in:
    DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE;
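    A fuller sketch of that suggestion, adapted from the poster's own run block (channels and conversion paths copied from the post above); note that a FOR STANDBY duplicate takes no TO clause, and the optional DORECOVER applies the shipped redo after the restore:
    run {
    allocate channel prmy1 type disk;
    allocate auxiliary channel stby1 type disk;
    duplicate target database for standby from active database
    dorecover
    spfile
    parameter_value_convert 'DGSTD','GGR','/u01/data/DGSTD/','/u01/data/ggr/'
    set db_file_name_convert '/u01/data/DGSTD/','/u01/data/ggr/'
    set log_file_name_convert '/u01/data/DGSTD/','/u01/data/ggr/'
    set db_unique_name 'ggr'
    nofilenamecheck;
    }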
