Skipping archive logs

Hi All,
I have a question regarding Oracle archive log configuration.
My DB is: Oracle 10gR2
Unix: HP-UX
To support Data Guard functionality, the DBA has put the DB in ARCHIVELOG mode with force logging enabled (FORCE_LOGGING=YES).
1* select LOG_MODE,FORCE_LOGGING from v$database
SQL> /
LOG_MODE FORCE_LOGGING
ARCHIVELOG YES
Now I have a table called PARAMETER into which I need to load around 700 million records. Since the DB is in force logging mode, the load will generate a lot of redo and will also take a long time.
Is there any option to keep a table in NOLOGGING mode, even if the DB is in force logging mode?
Thanks

am_73798 wrote:
Is there any option to keep a table in NOLOGGING mode, even if the DB is in force logging mode?

Hi,
No, there is no option to keep a table in NOLOGGING mode if the database is in force logging mode; FORCE LOGGING overrides the NOLOGGING attribute on individual objects.
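For reference, a minimal sketch of the checks involved; the commented ALTER is shown only to illustrate the trade-off, not as a recommendation while a standby is in place:

-- Confirm the database-level settings (as in the output above).
SELECT log_mode, force_logging FROM v$database;

-- Only the DBA can change this, and it defeats the purpose of Data Guard:
-- with force logging disabled, a direct-path (APPEND) load into a NOLOGGING table
-- can generate minimal redo; with FORCE_LOGGING=YES it always generates full redo.
-- ALTER DATABASE NO FORCE LOGGING;   -- not recommended with a standby in place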
Regards
Anurag

Similar Messages

  • Skip archive log on logical standby

    hi,
    I want to skip archive logs from sequence number 1150 to 1161 on a logical standby database.
    I know we can skip DDL and DML on a logical standby.
    How can I achieve this?
    (Oracle 10g Enterprise Edition)

    Hello;
    I do not believe this is an option. The closest to this would be "applying modifications to specific tables"
    See :
    9.4.3 Using DBMS_LOGSTDBY.SKIP to Prevent Changes to Specific Schema Objects
    Data Guard Concepts and Administration 10g Release 2 (10.2) B14239-05
    While this is not the answer you want, skipping those archive logs would create a gap and cause many other issues you don't want.
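    For completeness, a minimal sketch of the documented alternative mentioned above (schema and table HR.EMPLOYEES are placeholders), run on the logical standby:
    ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    BEGIN
      DBMS_LOGSTDBY.SKIP(stmt        => 'DML',
                         schema_name => 'HR',
                         object_name => 'EMPLOYEES');
    END;
    /
    ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;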
    Best Regards
    mseberg

  • Archive log / nologging/ direct path insert

    Could you please confirm if following are true or correct me if my understanding is wrong:
    1) Archive log mode and LOGGING are needed for media recovery; they are not needed for instance recovery.
    2) If an insert is done without the APPEND hint (conventional path), redo is generated even if the table is in NOLOGGING mode and the database is in NOARCHIVELOG mode. This redo is needed for instance recovery.
    3) Direct path insert skips undo generation and may skip redo generation if the object is in nologging mode.
    Thanks.
    In case it is relevant, I am using Oracle 11.2.0.3.

    1) Yes, Archive logs are needed for media recovery.
    2 and 3) Even if the table is in NOLOGGING mode, it generates a little bit of redo for index maintenance and dictionary data. Upon a restart from a failure, Oracle will read the online redo logs and replay any transaction it finds in there. That is the "roll forward" bit. The binary redo information is used to replay everything that did not get written to the datafiles. This replay includes regenerating the UNDO information (UNDO is protected by redo).
    After the redo has been applied, the database is typically available for use, and the rollback phase begins. For any transaction that was being processed when the instance failed, we need to undo its changes, that is, roll it back. We do that by processing the undo for all uncommitted transactions.
    The database is now fully recovered.
    Also read the following links:
    http://docs.oracle.com/cd/B19306_01/server.102/b14220/startup.htm
    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:5280714813869
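    As a rough, self-contained sketch of point 3 (t and t_stage are hypothetical tables, not from the original post): a direct-path insert into a NOLOGGING table, followed by a check of the redo generated by the current session.
    CREATE TABLE t_stage (id NUMBER, val VARCHAR2(30));
    CREATE TABLE t (id NUMBER, val VARCHAR2(30)) NOLOGGING;
    -- Direct-path insert; with the table in NOLOGGING mode (and no FORCE LOGGING at
    -- the database level), redo for the table data is largely skipped, although some
    -- redo is still written for dictionary and space management changes.
    INSERT /*+ APPEND */ INTO t SELECT id, val FROM t_stage;
    COMMIT;
    -- Redo generated by this session so far:
    SELECT vs.name, ms.value
    FROM   v$mystat ms JOIN v$statname vs ON vs.statistic# = ms.statistic#
    WHERE  vs.name = 'redo size';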

  • Archived log is missing

    Dear all,
    I have an RMAN backup, but an archived log is missing. How can I recover my database using RMAN? Your help is appreciated.
    thanks and regards.

    You can't recover the database to any point in time or SCN that is on or after the beginning of the missing archivelog.  Oracle cannot and does not allow you to skip a missing archivelog.
    Thus, if the archivelog Sequence#1020 was for transactions from 10:01:01am to 10:48:42am of 25-Aug and is the missing archivelog, you can recover the database only upto 10:01:01am even if you have Sequence#1021 and subsequent archivelogs available.
    Note that if you have a database backup that began after 10:48:42am, you can use the more recent backup to ignore the missing Sequence#1020.
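    If you need to locate the gap first, a simple query sketch against the control file records:
    SELECT thread#, sequence#, first_time, next_time, name
    FROM   v$archived_log
    ORDER  BY thread#, sequence#;
    -- Recovery can then be run up to (but not including) the first missing sequence.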
    Hemant K Chitale

  • RMAN ALert Log Message: ALTER SYSTEM ARCHIVE LOG

    Created a new Database on Oracle 10.2.0.4 and now seeing "ALTER SYSTEM ARCHIVE LOG" in the Alert Log only when the online RMAN backup runs:
    Wed Aug 26 21:52:03 2009
    ALTER SYSTEM ARCHIVE LOG
    Wed Aug 26 21:52:03 2009
    Thread 1 advanced to log sequence 35 (LGWR switch)
    Current log# 2 seq# 35 mem# 0: /u01/app/oracle/oradata/aatest/redo02.log
    Current log# 2 seq# 35 mem# 1: /u03/oradata/aatest/redo02a.log
    Wed Aug 26 21:53:37 2009
    ALTER SYSTEM ARCHIVE LOG
    Wed Aug 26 21:53:37 2009
    Thread 1 advanced to log sequence 36 (LGWR switch)
    Current log# 3 seq# 36 mem# 0: /u01/app/oracle/oradata/aatest/redo03.log
    Current log# 3 seq# 36 mem# 1: /u03/oradata/aatest/redo03a.log
    Wed Aug 26 21:53:40 2009
    Starting control autobackup
    Control autobackup written to DISK device
         handle '/u03/exports/backups/aatest/c-2538018370-20090826-00'
    I am not issuing a log switch command. The RMAN commands I am running are:
    CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/u03/exports/backups/aatest/%F';
    CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET;
    CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/u03/exports/backups/aatest/%d_%U';
    BACKUP DATABASE PLUS ARCHIVELOG;
    DELETE NOPROMPT OBSOLETE;
    DELETE NOPROMPT ARCHIVELOG UNTIL TIME 'SYSDATE-2';
    I do not see this message on any other 10.2.0.4 instances. Has anyone seen this and if so why is this showing in the log?
    Thank you,
    Curt Swartzlander

    There's no problem with the log switch. Please refer to the documentation for more information on the "PLUS ARCHIVELOG" syntax:
    http://download.oracle.com/docs/cd/B19306_01/backup.102/b14192/bkup003.htm#sthref377
    Adding BACKUP ... PLUS ARCHIVELOG causes RMAN to do the following:
    1. Runs the ALTER SYSTEM ARCHIVE LOG CURRENT command.
    2. Runs BACKUP ARCHIVELOG ALL. Note that if backup optimization is enabled, then RMAN skips logs that it has already backed up to the specified device.
    3. Backs up the rest of the files specified in the BACKUP command.
    4. Runs the ALTER SYSTEM ARCHIVE LOG CURRENT command.
    5. Backs up any remaining archived logs generated during the backup.
    This guarantees that datafile backups taken during the command are recoverable to a consistent state.
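    If you want to confirm it yourself, a quick sketch that correlates the extra log switches with the backup window:
    SELECT thread#, sequence#, first_time
    FROM   v$log_history
    WHERE  first_time > SYSDATE - 1
    ORDER  BY first_time;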

  • What will happen when redo log file or archive log file, which is yet to be

    What will happen when a redo log file or archive log file that is yet to be read by LogMiner is corrupted? It seems that the capture process hangs between "Paused for flow control" and "Enqueuing Messages". How can I come out of this condition without recreating the capture process?
    Any clue is helpful
    Thanks in advance for your help.

    Basically you can't skip SCNs, since it would result in data integrity issues (say you skipped some inserts and later there are updates to data that was never replicated).
    Streams maintains its own checkpoint tables with transaction-related information, so there is no way to jump over a range of SCNs without recreating the capture process.
    The only thing you can try is to temporarily give the capture process a rule without any objects, but it will still need to mine through the redo anyway.

  • How to find out who deleted the archive logs

    Hi All,
    Recently some archive logs were deleted from one of our servers. Is there any way to find out which user deleted the archive logs, either through the OS or through the database?
    OS Version :-
    SunOS Generic_Virtual sun4u sparc SUNW,SPARC-Enterprise
    Database Version:-
    SQL*Plus: Release 9.2.0.8.0 - Production on Mon Apr 9 01:12:15 2012

    888132 wrote:
    Is there any way to find out which user deleted the archive logs, either through the OS or through the database?

    As explained by others, from the Oracle database side there is no record if the files are deleted at the OS level.
    But you can probably find the history of OS commands that were run using the shell's history command :). You can get the date and time.
    Following link can help
    http://stackoverflow.com/questions/99755/how-do-i-get-the-command-buffer-in-solaris-10
    http://www.cyberciti.biz/faq/unix-linux-bash-history-display-date-time/
    http://www.linuxquestions.org/questions/solaris-opensolaris-20/in-solaris-command-line-how-to-get-the-previous-commands-573814/
    But I suggest you post in the Sun OS forum to get more details, as this has nothing to do with the database in this scenario.
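    From the database side, about the most you can see is whether RMAN knows about the deletions; a rough sketch:
    -- Archive logs deleted through RMAN are recorded in the control file (DELETED = 'YES');
    -- files removed directly at the OS level still show as available until
    -- CROSSCHECK ARCHIVELOG ALL marks them expired.
    SELECT name, sequence#, status, deleted, completion_time
    FROM   v$archived_log
    ORDER  BY completion_time;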

  • ARCHIVE LOGS CREATED in WRONG FOLDER

    Hello,
    I'm facing an issue with the Archive logs.
    In my Db the parameters for Archive logs are
    log_archive_dest_1 string LOCATION=/u03/archive/SIEB MANDATORY REOPEN=30
    db_create_file_dest string /u01/oradata/SIEB/dbf
    db_create_online_log_dest_1 string /u01/oradata/SIEB/rdo
    But the archive logs are created in
    /u01/app/oracle/product/9.2.0.6/dbs
    Listed Below :
    bash-2.05$ ls -lrt *.arc
    -rw-r----- 1 oracle dba 9424384 Jan 9 09:30 SIEB_302843.arc
    -rw-r----- 1 oracle dba 7678464 Jan 9 10:00 SIEB_302844.arc
    -rw-r----- 1 oracle dba 1536 Jan 9 10:00 SIEB_302845.arc
    -rw-r----- 1 oracle dba 20480 Jan 9 10:00 SIEB_302846.arc
    -rw-r----- 1 oracle dba 10010624 Jan 9 10:30 SIEB_302847.arc
    -rw-r----- 1 oracle dba 104858112 Jan 9 10:58 SIEB_302848.arc
    bash-2.05$
    Does anyone have an idea why this happens?
    Is this a bug?
    Thanks

    But in another Db I've
    log_archive_dest string
    log_archive_dest_1 string LOCATION=/u03/archive/SIEB MANDATORY REOPEN=30
    and my archivelogs are in
    oracle@srvsdbs7p01:/u03/archive/SIEB/ [SIEB] ls -lrt /u03/archive/SIEB
    total 297696
    -rw-r----- 1 oracle dba 10010624 Jan 9 10:30 SIEB_302847.arc
    -rw-r----- 1 oracle dba 21573632 Jan 9 11:00 SIEB_302848.arc
    -rw-r----- 1 oracle dba 101450240 Jan 9 11:30 SIEB_302849.arc
    -rw-r----- 1 oracle dba 6308864 Jan 9 12:00 SIEB_302850.arc
    -rw-r----- 1 oracle dba 12936704 Jan 9 12:30 SIEB_302851.arc
    oracle@srvsdbs7p01:/u03/archive/SIEB/ [SIEB]
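    A rough sketch of how to check which destination the archiver is actually using at runtime (if no usable LOG_ARCHIVE_DEST_n is in effect for the running instance, archives typically default to $ORACLE_HOME/dbs with the LOG_ARCHIVE_FORMAT name):
    SELECT dest_id, status, destination, error
    FROM   v$archive_dest
    WHERE  status <> 'INACTIVE';
    ARCHIVE LOG LIST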

  • Changing the location of archive log from flash recovery area PLZ HELP!!!

    Hi All,
    My archive logs are being stored in the flash recovery area, which got full, and the production server went down.
    alert log file details.....
    ORA-19809: limit exceeded for recovery files
    ORA-19804: cannot reclaim 43432960 bytes disk space from 2147483648 limit
    *** 2010-04-25 14:22:49.777 62692 kcrr.c
    ARCH: Error 19809 Creating archive log file to
    '/oracle/product/10.2.0/flash_rec
    overy_area/EDWREP/archivelog/2010_04_25/o1_mf_1_232_%u_.arc'
    *** 2010-04-25 14:22:49.777 60970 kcrr.c
    kcrrfail: dest:10 err:19809 force:0 blast:1
    I removed the files and started the database.
    Can someone kindly tell me how to avoid this problem in the future while keeping the archive log destination in the flash recovery area?
    I want to change the location of the archive log files; can someone please guide me on how to do that?
    I changed the size of the flash recovery area for the time being, but I am afraid it will be full again!
    SQL> select * from v$flash_recovery_area_usage;
    FILE_TYPE    PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES
    CONTROLFILE                   0                         0               0
    ONLINELOG                     0                         0               0
    ARCHIVELOG                99.44                         0              57
    BACKUPPIECE                   0                         0               0
    IMAGECOPY                     0                         0               0
    FLASHBACKLOG                  0                         0               0
    6 rows selected.
    SQL> alter system set DB_RECOVERY_FILE_DEST_SIZE = 4G ;
    System altered.
    SQL> select * from v$flash_recovery_area_usage;
    FILE_TYPE    PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES
    CONTROLFILE                   0                         0               0
    ONLINELOG                     0                         0               0
    ARCHIVELOG                49.72                         0              57
    BACKUPPIECE                   0                         0               0
    IMAGECOPY                     0                         0               0
    FLASHBACKLOG                  0                         0               0
    6 rows selected.
    Regards,
    Edited by: user10243788 on Apr 25, 2010 6:12 AM

    user10243788 wrote:
    My archive logs are being stored in the flash recovery area, which got full, and the production server went down. ... I want to change the location of the archive log files; can someone please guide me on how to do that?

    Pointing the archive log destination (and/or the FRA) to a new location, or enlarging them, will do no good if you are not performing regular housekeeping on the archivelogs. You will just keep knocking down the same problem over and over.
    If you simply delete the archivelogs at the OS level, the database will never know about it and it will continue to think the destination is full, based on records kept in the control file.
    For regular housekeeping, you need to be doing something similar to this in RMAN:
    run {
      backup archivelog all not backed up 1 times tag='bkup_vlnxora1_arch';
      delete noprompt archivelog all backed up 1 times to device type disk;
    }
    run {
      delete noprompt obsolete;
      crosscheck archivelog all;
      delete noprompt expired archivelog all;
    }
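    If you also want the archive destination out of the FRA entirely, a minimal sketch (the path is a placeholder; housekeeping of the new directory is then up to you, since it is no longer governed by DB_RECOVERY_FILE_DEST_SIZE):
    ALTER SYSTEM SET log_archive_dest_1 = 'LOCATION=/u02/arch/EDWREP' SCOPE=BOTH;
    ALTER SYSTEM SWITCH LOGFILE;
    ARCHIVE LOG LIST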

  • Archive Logs NOT APPLIED but transferred

    Hi Gurus,
    I have configured primary and standby databases in the same Oracle Home. The OS version is OEL 5 and the database version is 10.2.0.1. The archive logs reach the standby site, but they are not getting applied on the standby database. I don't have OLAP installed in my database version; would this create this issue? I have attached my primary alert log details below for your reference:
    Thu Aug 30 23:55:37 2012
    Starting ORACLE instance (normal)
    Cannot determine all dependent dynamic libraries for /proc/self/exe
    Unable to find dynamic library libocr10.so in search paths
    RPATH = /ade/aime1_build2101/oracle/has/lib/:/ade/aime1_build2101/oracle/lib/:/ade/aime1_build2101/oracle/has/lib/:
    LD_LIBRARY_PATH is not set!
    The default library directories are /lib and /usr/lib
    Unable to find dynamic library libocrb10.so in search paths
    Unable to find dynamic library libocrutl10.so in search paths
    Unable to find dynamic library libocrutl10.so in search paths
    LICENSE_MAX_SESSION = 0
    LICENSE_SESSIONS_WARNING = 0
    Picked latch-free SCN scheme 2
    Autotune of undo retention is turned on.
    IMODE=BR
    ILAT =18
    LICENSE_MAX_USERS = 0
    SYS auditing is disabled
    ksdpec: called for event 13740 prior to event group initialization
    Starting up ORACLE RDBMS Version: 10.2.0.1.0.
    System parameters with non-default values:
    processes = 150
    sga_target = 289406976
    control_files = /home/oracle/oracle/product/10.2.0/db_1/oradata/newprim/control01.ctl, /home/oracle/oracle/product/10.2.0/db_1/oradata/newprim/control02.ctl, /home/oracle/oracle/product/10.2.0/db_1/oradata/newprim/control03.ctl
    db_file_name_convert = /home/oracle/oracle/product/10.2.0/db_1/oradata/newstand, /home/oracle/oracle/product/10.2.0/db_1/oradata/newprim
    log_file_name_convert = /home/oracle/oracle/product/10.2.0/db_1/oradata/newstand, /home/oracle/oracle/product/10.2.0/db_1/oradata/newprim, /home/oracle/oracle/product/10.2.0/db_1/flash_recovery_area/NEWSTAND/onlinelog, /home/oracle/oracle/product/10.2.0/db_1/flash_recovery_area/NEWPRIM/onlinelog
    db_block_size = 8192
    compatible = 10.2.0.1.0
    log_archive_config = DG_CONFIG=(newprim,newstand)
    log_archive_dest_1 = LOCATION=/home/oracle/oracle/product/10.2.0/db_1/oradata/newprim/arch/
    VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
    DB_UNIQUE_NAME=newprim
    log_archive_dest_2 = SERVICE=newstand LGWR ASYNC VALID_FOR=(online_logfiles,primary_role) DB_UNIQUE_NAME=newstand
    log_archive_dest_state_1 = enable
    log_archive_dest_state_2 = enable
    log_archive_max_processes= 30
    log_archive_format = %t_%s_%r.dbf
    fal_client = newprim
    fal_server = newstand
    db_file_multiblock_read_count= 16
    db_recovery_file_dest = /home/oracle/oracle/product/10.2.0/db_1/flash_recovery_area
    db_recovery_file_dest_size= 2147483648
    standby_file_management = AUTO
    undo_management = AUTO
    undo_tablespace = UNDOTBS1
    remote_login_passwordfile= EXCLUSIVE
    db_domain =
    dispatchers = (PROTOCOL=TCP) (SERVICE=newprimXDB)
    job_queue_processes = 10
    background_dump_dest = /home/oracle/oracle/product/10.2.0/db_1/admin/newprim/bdump
    user_dump_dest = /home/oracle/oracle/product/10.2.0/db_1/admin/newprim/udump
    core_dump_dest = /home/oracle/oracle/product/10.2.0/db_1/admin/newprim/cdump
    audit_file_dest = /home/oracle/oracle/product/10.2.0/db_1/admin/newprim/adump
    db_name = newprim
    db_unique_name = newprim
    open_cursors = 300
    pga_aggregate_target = 95420416
    PMON started with pid=2, OS id=28091
    PSP0 started with pid=3, OS id=28093
    MMAN started with pid=4, OS id=28095
    DBW0 started with pid=5, OS id=28097
    LGWR started with pid=6, OS id=28100
    CKPT started with pid=7, OS id=28102
    SMON started with pid=8, OS id=28104
    RECO started with pid=9, OS id=28106
    CJQ0 started with pid=10, OS id=28108
    MMON started with pid=11, OS id=28110
    MMNL started with pid=12, OS id=28112
    Thu Aug 30 23:55:38 2012
    starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
    starting up 1 shared server(s) ...
    Thu Aug 30 23:55:38 2012
    ALTER DATABASE MOUNT
    Thu Aug 30 23:55:42 2012
    Setting recovery target incarnation to 2
    Thu Aug 30 23:55:43 2012
    Successful mount of redo thread 1, with mount id 1090395834
    Thu Aug 30 23:55:43 2012
    Database mounted in Exclusive Mode
    Completed: ALTER DATABASE MOUNT
    Thu Aug 30 23:55:43 2012
    ALTER DATABASE OPEN
    Thu Aug 30 23:55:43 2012
    LGWR: STARTING ARCH PROCESSES
    ARC0 started with pid=16, OS id=28122
    ARC1 started with pid=17, OS id=28124
    ARC2 started with pid=18, OS id=28126
    ARC3 started with pid=19, OS id=28128
    ARC4 started with pid=20, OS id=28133
    ARC5 started with pid=21, OS id=28135
    ARC6 started with pid=22, OS id=28137
    ARC7 started with pid=23, OS id=28139
    ARC8 started with pid=24, OS id=28141
    ARC9 started with pid=25, OS id=28143
    ARCa started with pid=26, OS id=28145
    ARCb started with pid=27, OS id=28147
    ARCc started with pid=28, OS id=28149
    ARCd started with pid=29, OS id=28151
    ARCe started with pid=30, OS id=28153
    ARCf started with pid=31, OS id=28155
    ARCg started with pid=32, OS id=28157
    ARCh started with pid=33, OS id=28159
    ARCi started with pid=34, OS id=28161
    ARCj started with pid=35, OS id=28163
    ARCk started with pid=36, OS id=28165
    ARCl started with pid=37, OS id=28167
    ARCm started with pid=38, OS id=28169
    ARCn started with pid=39, OS id=28171
    ARCo started with pid=40, OS id=28173
    ARCp started with pid=41, OS id=28175
    ARCq started with pid=42, OS id=28177
    ARCr started with pid=43, OS id=28179
    ARCs started with pid=44, OS id=28181
    Thu Aug 30 23:55:44 2012
    ARC0: Archival started
    ARC1: Archival started
    ARC2: Archival started
    ARC3: Archival started
    ARC4: Archival started
    ARC5: Archival started
    ARC6: Archival started
    ARC7: Archival started
    ARC8: Archival started
    ARC9: Archival started
    ARCa: Archival started
    ARCb: Archival started
    ARCc: Archival started
    ARCd: Archival started
    ARCe: Archival started
    ARCf: Archival started
    ARCg: Archival started
    ARCh: Archival started
    ARCi: Archival started
    ARCj: Archival started
    ARCk: Archival started
    ARCl: Archival started
    ARCm: Archival started
    ARCn: Archival started
    ARCo: Archival started
    ARCp: Archival started
    ARCq: Archival started
    ARCr: Archival started
    ARCs: Archival started
    ARCt: Archival started
    LGWR: STARTING ARCH PROCESSES COMPLETE
    ARCt started with pid=45, OS id=28183
    LNS1 started with pid=46, OS id=28185
    Thu Aug 30 23:55:48 2012
    Thread 1 advanced to log sequence 68
    Thu Aug 30 23:55:48 2012
    ARCo: Becoming the 'no FAL' ARCH
    ARCo: Becoming the 'no SRL' ARCH
    Thu Aug 30 23:55:48 2012
    ARCp: Becoming the heartbeat ARCH
    Thu Aug 30 23:55:48 2012
    Thread 1 opened at log sequence 68
    Current log# 1 seq# 68 mem# 0: /home/oracle/oracle/product/10.2.0/db_1/oradata/newprim/redo01.log
    Successful open of redo thread 1
    Thu Aug 30 23:55:48 2012
    MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
    Thu Aug 30 23:55:48 2012
    SMON: enabling cache recovery
    Thu Aug 30 23:55:48 2012
    Successfully onlined Undo Tablespace 1.
    Thu Aug 30 23:55:48 2012
    SMON: enabling tx recovery
    Thu Aug 30 23:55:49 2012
    Database Characterset is WE8ISO8859P1
    replication_dependency_tracking turned off (no async multimaster replication found)
    Starting background process QMNC
    QMNC started with pid=47, OS id=28205
    Thu Aug 30 23:55:49 2012
    Error 1034 received logging on to the standby
    Thu Aug 30 23:55:49 2012
    Errors in file /home/oracle/oracle/product/10.2.0/db_1/admin/newprim/bdump/newprim_arc1_28124.trc:
    ORA-01034: ORACLE not available
    FAL[server, ARC1]: Error 1034 creating remote archivelog file 'newstand'
    FAL[server, ARC1]: FAL archive failed, see trace file.
    Thu Aug 30 23:55:49 2012
    Errors in file /home/oracle/oracle/product/10.2.0/db_1/admin/newprim/bdump/newprim_arc1_28124.trc:
    ORA-16055: FAL request rejected
    ARCH: FAL archive failed. Archiver continuing
    Thu Aug 30 23:55:49 2012
    ORACLE Instance newprim - Archival Error. Archiver continuing.
    Thu Aug 30 23:55:49 2012
    db_recovery_file_dest_size of 2048 MB is 9.77% used. This is a
    user-specified limit on the amount of space that will be used by this
    database for recovery-related files, and does not reflect the amount of
    space available in the underlying filesystem or ASM diskgroup.
    Thu Aug 30 23:55:50 2012
    Errors in file /home/oracle/oracle/product/10.2.0/db_1/admin/newprim/udump/newprim_ora_28120.trc:
    ORA-00604: error occurred at recursive SQL level 1
    ORA-12663: Services required by client not available on the server
    ORA-36961: Oracle OLAP is not available.
    ORA-06512: at "SYS.OLAPIHISTORYRETENTION", line 1
    ORA-06512: at line 15
    Thu Aug 30 23:55:50 2012
    Completed: ALTER DATABASE OPEN
    Thu Aug 30 23:56:33 2012
    FAL[server]: Fail to queue the whole FAL gap
    GAP - thread 1 sequence 1-33
    DBID 1090398314 branch 792689455
    Kindly, guide me please..
    -Vimal.

    CKPT: The trace file details are added below for your reference;
    /home/oracle/oracle/product/10.2.0/db_1/admin/newprim/bdump/newprim_arc1_28124.trc
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning and Data Mining options
    ORACLE_HOME = /home/oracle/oracle/product/10.2.0/db_1
    System name:     Linux
    Node name:     localhost.localdomain
    Release:     2.6.18-8.el5PAE
    Version:     #1 SMP Tue Jun 5 23:39:57 EDT 2007
    Machine:     i686
    Instance name: newprim
    Redo thread mounted by this instance: 1
    Oracle process number: 17
    Unix process pid: 28124, image: [email protected] (ARC1)
    *** SERVICE NAME:() 2012-08-30 23:55:48.314
    *** SESSION ID:(155.1) 2012-08-30 23:55:48.314
    kcrrwkx: nothing to do (start)
    Redo shipping client performing standby login
    OCISessionBegin failed -1
    .. Detailed OCI error val is 1034 and errmsg is 'ORA-01034: ORACLE not available
    *** 2012-08-30 23:55:49.723 60679 kcrr.c
    Error 1034 received logging on to the standby
    Error 1034 connecting to destination LOG_ARCHIVE_DEST_2 standby host 'newstand'
    Error 1034 attaching to destination LOG_ARCHIVE_DEST_2 standby host 'newstand'
    ORA-01034: ORACLE not available
    *** 2012-08-30 23:55:49.723 58941 kcrr.c
    kcrrfail: dest:2 err:1034 force:0 blast:1
    kcrrwkx: unknown error:1034
    ORA-16055: FAL request rejected
    ARCH: Connecting to console port...
    ARCH: Connecting to console port...
    kcrrwkx: nothing to do (end)
    *** 2012-08-31 00:00:43.417
    kcrrwkx: nothing to do (start)
    *** 2012-08-31 00:05:43.348
    kcrrwkx: nothing to do (start)
    *** 2012-08-31 00:10:43.280
    kcrrwkx: nothing to do (start)
    *** 2012-08-31 00:15:43.217
    kcrrwkx: nothing to do (start)
    *** 2012-08-31 00:20:43.160
    kcrrwkx: nothing to do (start)
    *** 2012-08-31 00:25:43.092
    kcrrwkx: nothing to do (start)
    *** 2012-08-31 00:30:43.031
    kcrrwkx: nothing to do (start)
    *** 2012-08-31 00:35:42.961
    kcrrwkx: nothing to do (start)
    *** 2012-08-31 00:40:42.890
    kcrrwkx: nothing to do (start)
    *** 2012-08-31 00:45:42.820
    kcrrwkx: nothing to do (start)
    *** 2012-08-31 00:50:42.755
    kcrrwkx: nothing to do (start)
    *** 2012-08-31 00:55:42.686
    kcrrwkx: nothing to do (start)
    *** 2012-08-31 01:00:42.631
    kcrrwkx: nothing to do (start)
    *** 2012-08-31 01:05:42.565
    kcrrwkx: nothing to do (start)
    *** 2012-08-31 01:10:42.496
    kcrrwkx: nothing to do (start)
    Mahir: Yes, I have my 4 standby redo logs!
    I created the standby manually, without using RMAN.
    Hemant: if it asks for even the first sequence, then obviously nothing has been applied on the standby, so I don't think it is really a 'gap'!
    Thanks.
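    For anyone hitting the same ORA-01034 symptom, a minimal checklist sketch (assuming a physical standby); the errors above suggest the standby instance was not mounted or reachable when the primary tried to ship redo. Run on the standby:
    SELECT open_mode, database_role FROM v$database;
    SELECT sequence#, applied FROM v$archived_log ORDER BY sequence#;
    -- Start (or restart) managed recovery if it is not running:
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;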

  • How to delete the data in archived log files

    Hi,
    How can I delete the entries in archived log files, and what is the disadvantage of deleting archived log entries?

    There is no documented way to delete data stored in archived log files: you can only remove the archived log files if needed.

  • *HOW TO DELETE THE ARCHIVE LOGS ON THE STANDBY*

    HOW TO DELETE THE ARCHIVE LOGS ON THE STANDBY
    I have set the RMAN CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY; on my physical standby server.
    My archivelog files are not deleted on standby.
    I have set the CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default on the Primary server.
    I've checked the archive logs in the FRA and they are not being deleted on the STANDBY. Do I have to do something for the configuration to take effect, like run an RMAN backup?
    I've done a lot of research and I'm getting mixed answers. Please help. Thanks in advance.
    J

    Setting the policy will not delete the archive logs on the standby. (I found a thread where the Data Guard product manager says "The deletion policy on both sides will do what you want".) However, I still
    like to clean them off with RMAN.
    I would use RMAN to delete them so that it honors that policy and you are protected in case of a gap, transport issue, etc.
    There are many ways to do this. You can simply run RMAN and have it clean out the Archive.
    Example :
    #!/bin/bash
    # Name: db_rman_arch_standby.sh
    # Purpose: Database rman backup
    # Usage : db_rman_arch_standby <DBNAME>
    if [ "$1" ]
    then DBNAME=$1
    else
    echo "basename $0 : Syntax error : use . db_rman_full <DBNAME> "
    exit 1
    fi
    . /u01/app/oracle/dba_tool/env/${DBNAME}.env
    echo ${DBNAME}
    MAILHEADER="Archive_cleanup_on_STANDBY_${DBNAME}"
    echo "Starting RMAN..."
    $ORACLE_HOME/bin/rman target / catalog <user>/<password>@<catalog> > /tmp/rmandbarchstandby.out << EOF
    delete noprompt ARCHIVELOG UNTIL TIME 'SYSDATE-8';
    exit
    EOF
    echo `date`
    echo
    echo 'End of archive cleanup on STANDBY'
    mailx -s ${MAILHEADER} $MAILTO < /tmp/rmandbarchstandby.out
    # End of script
    This uses (calls) an ENV file so the crontab has an environment.
    Example ( STANDBY.env )
    ORACLE_BASE=/u01/app/oracle
    ULIMIT=unlimited
    ORACLE_SID=STANDBY
    ORACLE_HOME=$ORACLE_BASE/product/11.2.0.2
    ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
    LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
    LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib
    LIBPATH=$LD_LIBRARY_PATH:/usr/lib
    TNS_ADMIN=$ORACLE_HOME/network/admin
    PATH=$ORACLE_HOME/bin:$ORACLE_BASE/dba_tool/bin:/bin:/usr/bin:/usr/ccs/bin:/etc:/usr/sbin:/usr/ucb:$HOME/bin:/usr/bin/X11:/sbin:/usr/lbin:/GNU/bin/make:/u01/app/oracle/dba_tool/bin:/home/oracle/utils/SCRIPTS:/usr/local/bin:.
    #export TERM=linux=80x25     # wrong - do not use
    export TERM=vt100
    export ORACLE_BASE ORACLE_SID ORACLE_TERM ULIMIT
    export ORACLE_HOME
    export LIBPATH LD_LIBRARY_PATH ORA_NLS33
    export TNS_ADMIN
    export PATH
    export MAILTO=<your email here>
    Note: use the env command in Unix to get your settings.
    There are probably ten other/better ways to do this, but this works.
    other options ( you decide )
    Configure RMAN to purge archivelogs after applied on standby [ID 728053.1]
    http://www.oracle.com/technetwork/database/features/availability/rman-dataguard-10g-wp-1-129486.pdf
    Maintenance Of Archivelogs On Standby Databases [ID 464668.1]
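    Before relying on age-based deletion like the script above, a quick sketch to confirm that the logs about to be removed have actually been applied on the standby:
    SELECT sequence#, applied, completion_time
    FROM   v$archived_log
    WHERE  completion_time < SYSDATE - 8
    ORDER  BY sequence#;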
    Tip: I don't care myself, but in some of the other forums people seem to mind if you use all caps in the subject. They say it's shouting. My take is that if somebody is shouting at me, I'm probably going to just move away.
    Best Regards
    mseberg
    Edited by: mseberg on May 8, 2012 11:53 AM
    Edited by: mseberg on May 8, 2012 11:56 AM

  • IS there a way to view archive logs

    Hi, is there a way to view the contents of the archive logs, which have the extension ".arc"?

    Hi,
    This link is useful for you:
    http://www.oracle.com/technology/deploy/availability/htdocs/LogMinerOverview.htm
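    A minimal LogMiner sketch (the file name is a placeholder; this uses the online catalog as the dictionary):
    BEGIN
      DBMS_LOGMNR.ADD_LOGFILE(logfilename => '/path/to/archive/log_1234.arc',
                              options     => DBMS_LOGMNR.NEW);
      DBMS_LOGMNR.START_LOGMNR(options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
    END;
    /
    SELECT scn, operation, seg_owner, seg_name, sql_redo
    FROM   v$logmnr_contents
    WHERE  ROWNUM <= 50;
    EXEC DBMS_LOGMNR.END_LOGMNR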
    Cheers

  • Error while taking archive log backup

    Dear all,
    We are getting the below mentioned error while taking the archive log backup
    ============================================================================
    BR0208I Volume with name RRPA02 required in device /dev/rmt0.1
    BR0210I Please mount BRARCHIVE volume, if you have not already done so
    BR0280I BRARCHIVE time stamp: 2010-05-27 16.43.41
    BR0256I Enter 'c[ont]' to continue, 's[top]' to cancel BRARCHIVE:
    c
    BR0280I BRARCHIVE time stamp: 2010-05-27 16.43.46
    BR0257I Your reply: 'c'
    BR0259I Program execution will be continued...
    BR0280I BRARCHIVE time stamp: 2010-05-27 16.43.46
    BR0226I Rewinding tape volume in device /dev/rmt0 ...
    BR0351I Restoring /oracle/RRP/sapreorg/.tape.hdr0
    BR0355I from /dev/rmt0.1 ...
    BR0278W Command output of 'LANG=C cd /oracle/RRP/sapreorg && LANG=C cpio -iuvB .tape.hdr0 < /dev/rmt0.1':
    Can't read input
    ===========================================================================
    We are able to take offline and online backups, but we are facing the above-mentioned problem while taking the archive log backup.
    We are on ECC 6 / Oracle / AIX.
    The kernel is the latest.
    The drive is working fine and there is no problem with the tapes, as we have tried using different tapes.
    Can this be a permissions issue?
    I ran saproot.sh, but somehow it is setting the owner to sidadm and the group to sapsys for some of the br* files.
    I tried changing the permissions to oraSID:dba, but the error is still the same.
    Any suggestions?

    This means you have not initialized the media but are trying to take backups.
    First check how many media you have entered in your tape count parameter for archive log backups (just go to initSID.sap and check).
    Then increase/reduce them according to your archive backup plan >> initialize all the tapes according to their names (the same as you initialized in initSID.sap) >> stick a physical label on each medium according to its name >> schedule archive backups.
    It will not ask you for initialization, as you have already initialized the tapes in the second step.
    Suggestion: Use 7 media per week (one tape per day).
    Regards,
    Nick Loy

  • Create procedure is generating too many archive logs

    Hi
    The following procedure was run on one of our databases and it hung, since there were too many archive logs being generated.
    What would be the answer? The DB must remain in ARCHIVELOG mode.
    I understand the NOLOGGING concept, but as far as I know it applies to creating tables, views, indexes and tablespaces. This script creates a procedure.
    CREATE OR REPLACE PROCEDURE APPS.Dfc_Payroll_Dw_Prc(Errbuf OUT VARCHAR2, Retcode OUT NUMBER
    ,P_GRE NUMBER
    ,P_SDATE VARCHAR2
    ,P_EDATE VARCHAR2
    ,P_ssn VARCHAR2
    ) IS
    CURSOR MainCsr IS
    SELECT DISTINCT
    PPF.NATIONAL_IDENTIFIER SSN
    ,ppf.full_name FULL_NAME
    ,ppa.effective_date Pay_date
    ,ppa.DATE_EARNED period_end
    ,pet.ELEMENT_NAME
    ,SUM(TO_NUMBER(prv.result_value)) VALOR
    ,PET.ELEMENT_INFORMATION_CATEGORY
    ,PET.CLASSIFICATION_ID
    ,PET.ELEMENT_INFORMATION1
    ,pet.ELEMENT_TYPE_ID
    ,paa.tax_unit_id
    ,PAf.ASSIGNMENT_ID ASSG_ID
    ,paf.ORGANIZATION_ID
    FROM
    pay_element_classifications pec
    , pay_element_types_f pet
    , pay_input_values_f piv
    , pay_run_result_values prv
    , pay_run_results prr
    , pay_assignment_actions paa
    , pay_payroll_actions ppa
    , APPS.pay_all_payrolls_f pap
    ,Per_Assignments_f paf
    ,per_people_f ppf
    WHERE
    ppa.effective_date BETWEEN TO_DATE(p_sdate) AND TO_DATE(p_edate)
    AND ppa.payroll_id = pap.payroll_id
    AND paa.tax_unit_id = NVL(p_GRE, paa.tax_unit_id)
    AND ppa.payroll_action_id = paa.payroll_action_id
    AND paa.action_status = 'C'
    AND ppa.action_type IN ('Q', 'R', 'V', 'B', 'I')
    AND ppa.action_status = 'C'
    --AND PEC.CLASSIFICATION_NAME IN ('Earnings','Alien/Expat Earnings','Supplemental Earnings','Imputed Earnings','Non-payroll Payments')
    AND paa.assignment_action_id = prr.assignment_action_id
    AND prr.run_result_id = prv.run_result_id
    AND prv.input_value_id = piv.input_value_id
    AND piv.name = 'Pay Value'
    AND piv.element_type_id = pet.element_type_id
    AND pet.element_type_id = prr.element_type_id
    AND pet.classification_id = pec.classification_id
    AND pec.non_payments_flag = 'N'
    AND prv.result_value <> '0'
    --AND( PET.ELEMENT_INFORMATION_CATEGORY LIKE '%EARNINGS'
    -- OR PET.element_type_id IN (1425, 1428, 1438, 1441, 1444, 1443) )
    AND NVL(PPA.DATE_EARNED, PPA.EFFECTIVE_DATE) BETWEEN PET.EFFECTIVE_START_DATE AND PET.EFFECTIVE_END_DATE
    AND NVL(PPA.DATE_EARNED, PPA.EFFECTIVE_DATE) BETWEEN PIV.EFFECTIVE_START_DATE AND PIV.EFFECTIVE_END_DATE --dcc
    AND NVL(PPA.DATE_EARNED, PPA.EFFECTIVE_DATE) BETWEEN Pap.EFFECTIVE_START_DATE AND Pap.EFFECTIVE_END_DATE --dcc
    AND paf.ASSIGNMENT_ID = paa.ASSIGNMENT_ID
    AND ppf.NATIONAL_IDENTIFIER = NVL(p_ssn, ppf.NATIONAL_IDENTIFIER)
    ------------------------------------------------------------------TO get emp.
    AND ppf.person_id = paf.person_id
    AND NVL(PPA.DATE_EARNED, PPA.EFFECTIVE_DATE) BETWEEN ppf.EFFECTIVE_START_DATE AND ppf.EFFECTIVE_END_DATE
    ------------------------------------------------------------------TO get emp. ASSIGNMENT
    --AND paf.assignment_status_type_id NOT IN (7,3)
    AND NVL(PPA.DATE_EARNED, PPA.EFFECTIVE_DATE) BETWEEN paf.effective_start_date AND paf.effective_end_date
    GROUP BY PPF.NATIONAL_IDENTIFIER
    ,ppf.full_name
    ,ppa.effective_date
    ,ppa.DATE_EARNED
    ,pet.ELEMENT_NAME
    ,PET.ELEMENT_INFORMATION_CATEGORY
    ,PET.CLASSIFICATION_ID
    ,PET.ELEMENT_INFORMATION1
    ,pet.ELEMENT_TYPE_ID
    ,paa.tax_unit_id
    ,PAF.ASSIGNMENT_ID
    ,paf.ORGANIZATION_ID
    BEGIN
    DELETE cust.DFC_PAYROLL_DW
    WHERE PAY_DATE BETWEEN TO_DATE(p_sdate) AND TO_DATE(p_edate)
    AND tax_unit_id = NVL(p_GRE, tax_unit_id)
    AND ssn = NVL(p_ssn, ssn);
    COMMIT;
    FOR V_REC IN MainCsr LOOP
    INSERT INTO cust.DFC_PAYROLL_DW(SSN, FULL_NAME, PAY_DATE, PERIOD_END, ELEMENT_NAME, ELEMENT_INFORMATION_CATEGORY, CLASSIFICATION_ID, ELEMENT_INFORMATION1, VALOR, TAX_UNIT_ID, ASSG_ID,ELEMENT_TYPE_ID,ORGANIZATION_ID)
    VALUES(V_REC.SSN,V_REC.FULL_NAME,v_rec.PAY_DATE,V_REC.PERIOD_END,V_REC.ELEMENT_NAME,V_REC.ELEMENT_INFORMATION_CATEGORY, V_REC.CLASSIFICATION_ID, V_REC.ELEMENT_INFORMATION1, V_REC.VALOR,V_REC.TAX_UNIT_ID,V_REC.ASSG_ID, v_rec.ELEMENT_TYPE_ID, v_rec.ORGANIZATION_ID);
    COMMIT;
    END LOOP;
    END ;
    So, how could I assist our developer with this, so that she can run it again without it generating a ton of logs?
    Thanks
    Oracle 9.2.0.5
    AIX 5.2

    The amount of redo generated is a direct function of how much data is changing. If you insert 'x' number of rows, you are going to generate 'y' mbytes of redo. If your procedure is destined to insert 1000 rows, then it is destined to create a certain amount of redo. Period.
    I would question the performance of the procedure shown ... using a cursor loop with a commit after every row is going to be a slug on performance, but that doesn't change the fact that 'x' inserts will always generate 'y' redo.
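    On the performance point, a minimal sketch on hypothetical tables (src_rows and dfc_target are illustrations, not objects from the procedure above) contrasting the row-by-row pattern with a single set-based insert; both generate comparable redo for the same rows, but the set-based form avoids the per-row context switches and commits:
    CREATE TABLE src_rows   (id NUMBER, val VARCHAR2(30));
    CREATE TABLE dfc_target (id NUMBER, val VARCHAR2(30));
    -- Row-by-row with a commit per row (the pattern used in the procedure above): slow.
    BEGIN
      FOR r IN (SELECT id, val FROM src_rows) LOOP
        INSERT INTO dfc_target (id, val) VALUES (r.id, r.val);
        COMMIT;
      END LOOP;
    END;
    /
    -- Set-based equivalent: one statement, one commit.
    INSERT INTO dfc_target (id, val)
    SELECT id, val FROM src_rows;
    COMMIT;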
