How to control excessive archived log generation

Hi,
This is one of the interview questions I was asked.
I gave an answer, but I would just like to know what the correct answer is.
How can we control excessive archived log generation?
Thanks,

796843 wrote:
Hi,
This is one of the interview questions I was asked.
I gave an answer, but I would just like to know what the correct answer is.
How can we control excessive archived log generation?
Thanks,

Do not do any DML, since only DML generates redo.
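A more practical illustration of the same idea is to cut the redo generated by bulk operations, for example with a NOLOGGING direct-path load (a sketch only: the table names are hypothetical, minimal logging does not apply if the database is in FORCE LOGGING mode, and NOLOGGING changes must be covered by a fresh backup because they cannot be recovered from redo):

-- Bulk load with minimal redo: direct-path insert into a NOLOGGING table.
-- stage_sales and sales_external are hypothetical objects.
ALTER TABLE stage_sales NOLOGGING;
INSERT /*+ APPEND */ INTO stage_sales
  SELECT * FROM sales_external;
COMMIT;
-- Take a backup of the affected datafiles afterwards.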

Similar Messages

  • How to control RMAN backup of archive logs in FRA

    Setup:
    11.2.0.2 GI
    ASM Diskgroup for Fast Recovery Area
    10.2.0.4 Enterprise Database
    Without Catalog Database
    To get familiar with Flashback Database we set up RMAN Backups
    without deleting the archive logs,
    as published in Oracle10g / 11g - Getting Started with Recovery Manager (RMAN) (Doc ID 360416.1)
    "These deletions are managed by Oracle when space is required."
    We observed that RMAN takes backups of all archive logs every time;
    we back up archive logs every 30 minutes.
    So the question is:
    Is this (backing up all archive logs every time) the expected behavior or
    the result of not using a catalog database or
    any other mismatch?
    Thanks for any hint
    Michael

    Hi Michael,
    Note 360416.1 refers to two archivelog backup commands:
    BACKUP ARCHIVELOG ALL;
    and
    BACKUP ARCHIVELOG FROM TIME 'SYSDATE-30' UNTIL TIME 'SYSDATE-7';
    Your question:
    Is this (backing up all archive logs every time) the expected behavior or the result of not using a catalog database or any other mismatch?
    If you used the first command, you will back up all archive logs every time.
    If you use the second one, it will back up a subset (probably not the range you want to back up...).
    You could implement another script:
    BACKUP ARCHIVELOG NOT BACKED UP 2 TIMES;
    With that, every archive log is only backed up twice.
    Check
    http://download.oracle.com/docs/cd/B19306_01/backup.102/b14194/rcmsynta009.htm#i78895
    for more information.
    Regards,
    Tycho
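    If the goal is simply to keep the FRA from filling up while still keeping two copies of every log, a variation like the following might work (a sketch only; the DELETE assumes your archivelog backups go to disk, so adjust DEVICE TYPE to your configuration):
    BACKUP ARCHIVELOG ALL NOT BACKED UP 2 TIMES;
    DELETE ARCHIVELOG ALL BACKED UP 2 TIMES TO DEVICE TYPE DISK;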

  • Heavy archive log generation

    Hi,
    Database Version: Oracle 11.1.0.6
    Platform: Enterprise Linux 5
    Can someone please tell me the troubleshooting steps for a situation where there is a heavy inflow of archive log generation; I mean around 3 files of roughly 50 MB every minute, eating away the space on disk.
    1) How to find out what activity is causing such heavy archive log generation? I can run the query below to find the currently running SQL statements and their status:
    select a.username, b.sql_text, a.status from v$session a inner join v$sqlarea b on a.sql_id = b.sql_id;
    But is there any other query or a better way to find out the current database activity in this situation?
    I tried using DBMS_LOGMNR (LogMiner) but failed because (i) utl_file_dir is not set in the init parameter file (so mining the archive log file on production is presently ruled out, as I cannot take an outage), and
    (ii) the alter database add supplemental log data (all) columns statement takes forever because of locks (so I cannot mine the generated archive log file on another machine, due to DBID mismatch).
    2) How to deal with this situation? I read here on the OTN discussion board that increasing the number of redo log groups or redo log members will help manage this situation when there is a lot of DML activity on the application side, but I didn't understand how that is going to help in controlling the excessive archive log generation.

    Hi,
    Other than logminer, which will tell you exactly what the redo is by definition, you can run something like the following:
    select value / ((sysdate - logon_time) * 1440) redo_per_minute,
           s.sid, serial#, logon_time, value
      from v$statname sn,
           v$sesstat ss,
           v$session s
     where s.sid = ss.sid
       and sn.statistic# = ss.statistic#
       and name = 'redo size'
       and value > 0;
    Then trace the "high" sessions above and it should jump out at you. If not, then run logmnr with something like...
    set serveroutput on size unlimited
    begin
      dbms_logmnr.add_logfile(logfilename => '&log_file_name');
      dbms_logmnr.start_logmnr(options => dbms_logmnr.dict_from_online_catalog + dbms_logmnr.no_rowid_in_stmt);
      FOR cur IN (SELECT *
                    FROM v$logmnr_contents) loop
        dbms_output.put_line(cur.sql_redo);
      end loop;
    end;
    /
    Note you don't need utl_file_dir for LogMiner if you use the online catalog.
    HTH,
    Steve

  • Archive log generation in every 7 minute interval

    One of the HP-UX 11.11 hosts runs two databases, uiivc and uiivc1. There is heavy archive log generation, about every 7 minutes, in both databases. The redo log size is 100 MB, configured with 2 members in each of three groups for these databases. The database version is 9.2.0.8. Can anyone help me find out how to monitor the redo log contents that are filling up so frequently and generating so much archived redo (filling up the mount point)?
    Current settings are
    fast_start_mttr_target integer 300
    log_buffer integer 5242880
    Regards
    Manoj

    You can try to find the sessions which are generating lots of redo; check MetaLink Doc ID 167492.1:
    1) Query V$SESS_IO. This view contains the column BLOCK_CHANGES, which indicates
    how many blocks have been changed by the session. High values indicate a
    session generating lots of redo.
    The query you can use is:
    SELECT s.sid, s.serial#, s.username, s.program, i.block_changes
      FROM v$session s, v$sess_io i
     WHERE s.sid = i.sid
     ORDER BY 5 desc, 1, 2, 3, 4;
    Run the query multiple times and examine the delta between each occurrence
    of BLOCK_CHANGES. Large deltas indicate high redo generation by the session.
    2) Query V$TRANSACTION. This view contains information about the number of
    undo blocks and undo records accessed by the transaction (as found in the
    USED_UBLK and USED_UREC columns).
    The query you can use is:
    SELECT s.sid, s.serial#, s.username, s.program, t.used_ublk, t.used_urec
      FROM v$session s, v$transaction t
     WHERE s.taddr = t.addr
     ORDER BY 5 desc, 6 desc, 1, 2, 3, 4;
    Run the query multiple times and examine the delta between each occurrence
    of USED_UBLK and USED_UREC. Large deltas indicate high redo generation by
    the session.
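    Since the values in V$SESS_IO are cumulative, a small PL/SQL block can automate the delta comparison (a sketch only: it assumes EXECUTE privilege on DBMS_LOCK, and the 60-second sampling window is arbitrary):
    set serveroutput on size 1000000
    declare
      type t_changes is table of number index by binary_integer;
      l_before t_changes;
    begin
      -- first snapshot of BLOCK_CHANGES per session
      for r in (select sid, block_changes from v$sess_io) loop
        l_before(r.sid) := r.block_changes;
      end loop;
      dbms_lock.sleep(60);  -- sampling window in seconds
      -- second snapshot: report sessions whose BLOCK_CHANGES grew
      for r in (select s.sid, s.username, s.program, i.block_changes
                  from v$session s, v$sess_io i
                 where s.sid = i.sid) loop
        if l_before.exists(r.sid)
           and r.block_changes - l_before(r.sid) > 0 then
          dbms_output.put_line(r.sid || ' ' || r.username || ' ' || r.program ||
                               ' delta=' || (r.block_changes - l_before(r.sid)));
        end if;
      end loop;
    end;
    /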

  • Archive log generation in standby

    Dear all,
    DB: 11.1.0.7
    We are configuring a physical standby for our production system. We have the same file
    system and configuration on both servers: the primary archive
    destination is d:/arch, and the standby server also has d:/arch. Archive
    logs are properly shipped to the standby and the data is
    intact. The problem is that archive logs are generated properly in the
    primary archive destination, but no archive logs are appearing
    in the standby archive location, even though archive logs are being
    applied to the standby database.
    Is this normal? Will archive logs not be generated on the standby?
    Please guide
    Kai

    No archive logs should be generated on the standby side; why do you think they should be? If you are talking about the parameter standby_archive_dest: if you set this parameter, Oracle will copy the logs it receives into this directory, not create new ones.
    In 11g Oracle recommends not using this parameter. Instead, Oracle recommends setting log_archive_dest_1 and log_archive_dest_3 similar to this:
    ALTER SYSTEM SET log_archive_dest_1 = 'location="USE_DB_RECOVERY_FILE_DEST", valid_for=(ALL_LOGFILES,ALL_ROLES)';
    ALTER SYSTEM SET log_archive_dest_3 = 'SERVICE=<primary_tns> LGWR ASYNC db_unique_name=<prim_db_unique_name> valid_for=(online_logfile,primary_role)';
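    To confirm that the shipped logs really are arriving and being applied, a quick check on the standby (just a sketch against the standard V$ARCHIVED_LOG view) is:
    select sequence#, first_time, applied
      from v$archived_log
     order by sequence#;
    Logs that managed recovery has applied show APPLIED = 'YES'.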

  • Archive Log Generation in EBusiness Suite

    Hi,
    I am responsible for an EBusiness Suite 11.5.10.2 AIX production server. Until last week (and for the past 1.5 years), there was excessive archive log generation (200 MB every 10 minutes), which has now dropped to 200 MB every 4.5 hours.
    I am unable to understand this behavior. The number of users remains the same and the usage is as usual.
    Is there a way I can check what has gone wrong? I could not see any errors in the alert.log either.
    Please suggest what can be done.
    (I have raised this issue in Metalink Forum also and awaiting a response)
    Thanks
    qA

    Redo/archive log generation is directly related to the level of activity on the database, so it is almost certain that the level of activity has dropped significantly.
    If possible, can you run this query and post the result:
    select trunc(FIRST_TIME), count(SEQUENCE#) from v$archived_log
    where to_char(trunc(FIRST_TIME),'MONYYYY') = 'SEP2007'
    group by trunc(first_time)
    order by 1
    --Adams

  • Query help for archive log generation details

    Hi All,
    Do you have a query to see the archive log generation details for today?
    Best regards,
    Rafi.

    Dear user13311731,
    You may use the query below, and I hope you will find it helpful:
    SELECT * FROM (
    SELECT * FROM (
    SELECT   TO_CHAR(FIRST_TIME, 'DD/MM') AS "DAY"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '00', 1, 0)), '999') "00:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '01', 1, 0)), '999') "01:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '02', 1, 0)), '999') "02:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '03', 1, 0)), '999') "03:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '04', 1, 0)), '999') "04:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '05', 1, 0)), '999') "05:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '06', 1, 0)), '999') "06:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '07', 1, 0)), '999') "07:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '08', 1, 0)), '999') "08:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '09', 1, 0)), '999') "09:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '10', 1, 0)), '999') "10:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '11', 1, 0)), '999') "11:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '12', 1, 0)), '999') "12:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '13', 1, 0)), '999') "13:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '14', 1, 0)), '999') "14:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '15', 1, 0)), '999') "15:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '16', 1, 0)), '999') "16:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '17', 1, 0)), '999') "17:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '18', 1, 0)), '999') "18:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '19', 1, 0)), '999') "19:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '20', 1, 0)), '999') "20:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '21', 1, 0)), '999') "21:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '22', 1, 0)), '999') "22:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '23', 1, 0)), '999') "23:00"
        FROM V$LOG_HISTORY
        WHERE extract(year FROM FIRST_TIME) = extract(year FROM sysdate)
    GROUP BY TO_CHAR(FIRST_TIME, 'DD/MM')
    ) ORDER BY TO_DATE(extract(year FROM sysdate) || DAY, 'YYYY DD/MM') DESC
    ) WHERE ROWNUM < 8;
    Hope that helps.
    Ogan

  • Growth of Archive log generation

    Hi,
    In my case the rate of archive log generation has increased, so I want to know a query
    to find the rate of archive log generation per hour.
    Regards
    Syed

    Hi Syed;
    What is your DB version? Also your EBS and OS versions?
    I use the query below for this issue:
    select to_char(first_time,'MM-DD') day, to_char(sum(decode(to_char(first_time,'hh24'),'00',1,0)),'99') "00",
    to_char(sum(decode(to_char(first_time,'hh24'),'01',1,0)),'99') "01",
    to_char(sum(decode(to_char(first_time,'hh24'),'02',1,0)),'99') "02",
    to_char(sum(decode(to_char(first_time,'hh24'),'03',1,0)),'99') "03",
    to_char(sum(decode(to_char(first_time,'hh24'),'04',1,0)),'99') "04",
    to_char(sum(decode(to_char(first_time,'hh24'),'05',1,0)),'99') "05",
    to_char(sum(decode(to_char(first_time,'hh24'),'06',1,0)),'99') "06",
    to_char(sum(decode(to_char(first_time,'hh24'),'07',1,0)),'99') "07",
    to_char(sum(decode(to_char(first_time,'hh24'),'08',1,0)),'99') "08",
    to_char(sum(decode(to_char(first_time,'hh24'),'09',1,0)),'99') "09",
    to_char(sum(decode(to_char(first_time,'hh24'),'10',1,0)),'99') "10",
    to_char(sum(decode(to_char(first_time,'hh24'),'11',1,0)),'99') "11",
    to_char(sum(decode(to_char(first_time,'hh24'),'12',1,0)),'99') "12",
    to_char(sum(decode(to_char(first_time,'hh24'),'13',1,0)),'99') "13",
    to_char(sum(decode(to_char(first_time,'hh24'),'14',1,0)),'99') "14",
    to_char(sum(decode(to_char(first_time,'hh24'),'15',1,0)),'99') "15",
    to_char(sum(decode(to_char(first_time,'hh24'),'16',1,0)),'99') "16",
    to_char(sum(decode(to_char(first_time,'hh24'),'17',1,0)),'99') "17",
    to_char(sum(decode(to_char(first_time,'hh24'),'18',1,0)),'99') "18",
    to_char(sum(decode(to_char(first_time,'hh24'),'19',1,0)),'99') "19",
    to_char(sum(decode(to_char(first_time,'hh24'),'20',1,0)),'99') "20",
    to_char(sum(decode(to_char(first_time,'hh24'),'21',1,0)),'99') "21",
    to_char(sum(decode(to_char(first_time,'hh24'),'22',1,0)),'99') "22",
    to_char(sum(decode(to_char(first_time,'hh24'),'23',1,0)),'99') "23"
    from v$log_history group by to_char(first_time,'MM-DD')
    Regards
    Helios
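    If you want the rate in megabytes per hour rather than a count of log switches, a variation against V$ARCHIVED_LOG such as the following might help (a sketch only; if your logs go to more than one destination, filter on a single DEST_ID to avoid double counting):
    select to_char(first_time, 'YYYY-MM-DD HH24') hour,
           round(sum(blocks * block_size) / 1024 / 1024) mb_generated
      from v$archived_log
     where first_time > sysdate - 7
     group by to_char(first_time, 'YYYY-MM-DD HH24')
     order by 1;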

  • Hourly archive log generation

    Hi,
    I am working on an Oracle 10g RAC database on HP-UX, in the standby environment.
    Instance names:
    R1
    R2
    R3
    For the above three instances, I need to find the hourly archive log generation at the standby site.
    Hours 1 2 3
    R1
    R2
    R3
    Total
    Please share the query.

    Set the parameter archive_lag_target to the required value. It is a dynamic parameter and is specified in seconds.
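    For example (just a sketch; the 1800-second value is only an illustration, pick whatever lag you need):
    -- force a log switch at least every 30 minutes on all instances
    ALTER SYSTEM SET archive_lag_target = 1800 SCOPE=BOTH SID='*';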

  • How to removed the 300GB old archive logs???

    Dear Gurus
    Site-A
    Our Environment:
    Live Production 2-Node RAC Oracle10g r2 10.2.0.4
    Data and Archive is stored in Oracle ASM
    Operating System: Sun Solaris10 64bit
    We have two databases.
    ==> db-1 (Site-A db-1 uses Oracle Streams with Site-B db-1, so archive logs that are still required for Streams must not be deleted)
    ==> db-2
    Existing Backup and Recovery Strategy:
    Daily logical backup of application schema using DATA PUMP.
    We have not taken any RMAN backup yet.
    Please find the status of Shared Storage.
    -bash-3.00$ asmcmd
    ASMCMD> lsdg
    State Type Rebal Unbal Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Name
    MOUNTED EXTERN N N 512 4096 1048576 614256 577801 0 577801 0 DATA/
    MOUNTED EXTERN N N 512 4096 1048576 298948 38635 0 38635 0 FLASH/
    We want to remove the old archive logs.
    Please suggest the best way to remove the old archive logs for db-2 and db-1.
    Can you please also suggest how much time and how much space is required to complete the backup?
    Regards
    Hitesh Gondalia

    You should consider using RMAN for backup/recovery. Logical exports (Data Pump) are useful for some purposes, but are not the preferred way to go for your backup and recovery needs. RMAN will give you better performance and so much more options.
    Long-term, a great way to delete archive logs is via the RMAN “backup archivelog …. delete input” and/or “delete archivelog … ” commands.
    Please delete archive logs only after they have already been backed up. RMAN will also take into consideration whether the logs are needed for Streams and/or a standby.
    About the backup space: a full RMAN backup is typically smaller than the size of your DB. You need to run a test to see the exact size. The frequency and type of backup should be based on your SLA for this DB.
    Iordan Iotzov
    http://iiotzov.wordpress.com/
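    As a minimal sketch of that approach (the seven-day window is only an example; keep whatever your Streams replication and recovery requirements demand before deleting anything):
    BACKUP ARCHIVELOG ALL DELETE INPUT;
    DELETE ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE-7';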

  • How can I set destination for archived logs?

    I would like to know:
    how to set the destination for archived logs?
    how to identify the init.ora that is used by my database?
    With RMAN using compressed backupset by default and running
    backup database;
    what does it back up exactly?

    Another thing I am wondering about: when I make a backup with RMAN (backup database),
    it saves the backups in the autobackup directory of the flash_recovery_area, but it seems that it only saves the data files and the control files. Isn't there a way to save archived log files, control files, and datafiles in a single backup?
    In fact I would like to make a full backup of everything with RMAN on Sunday, and an incremental backup on all other days of the week; how can I accomplish this with a retention of 7 days?
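    As a rough sketch of one way to do this (the schedule and the choice of a recovery window are assumptions to adapt; BACKUP ... PLUS ARCHIVELOG also includes the archived logs, and controlfile autobackup covers the control file and spfile):
    # run once
    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    # Sunday
    BACKUP INCREMENTAL LEVEL 0 DATABASE PLUS ARCHIVELOG;
    # Monday through Saturday
    BACKUP INCREMENTAL LEVEL 1 DATABASE PLUS ARCHIVELOG;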

  • How to calculate storage space for archive log files and database backups?

    Hi all,
    I have a 1.8 terabyte Oracle 9i database and need to plan how much additional disk space I will need to perform nightly backups and to store archivelog files. Is there a script or formula available that can help me estimate how much disk space I will need to hold a day's worth of archived logs, as well as a nightly export dump file and a full hot RMAN backup on disk?
    Thanks!

    I'm not sure how to estimate the size of your backups, especially if you use incrementals. However, the space required for archive logs will be equal to the amount of REDO your DB generates. I would count the number of log switches per day with a query like the following:
    select trunc(first_time), count(*)
    from v$log_history
    group by trunc(first_time)
    I would then take the average and multiply this count by the size of your redo log files (assuming they are all the same size).

  • How to know the size of archived logs created under ASM

    I am using Oracle 10g on Linux x86-64.
    I need to ship some archived logs (not the entire directory, only a few) from the live database to the DR site, so I need an estimate of how much time it will take to ship them across the network.
    Is there any way I can find out the size of a specific archived log file stored under ASM?
    We can use du in ASM to get the size of a directory, but I don't find a command in ASM to get the size of a file.

    No, we are also switching logfiles manually, so the maximum size may not
    have been reached.
    What I need is something like the ls -l command at the Unix prompt, which would
    help us find the size of a file; is there a similar command to help us determine
    the size of a file in ASM?

    What is the objective?
    Anyway, you can get the size of an archived log file by querying the V$ARCHIVED_LOG view.
    SQL> select sequence#, name, blocks*block_size from v$archived_log where sequence# > 180;
    SEQUENCE# NAME                                     BLOCKS*BLOCK_SIZE
           182 C:\MYDB\ARCH\ARC00182_0633314306.001             223053312
           181 C:\MYDB\ARCH\ARC00181_0633314306.001             264281600
           183 C:\MYDB\ARCH\ARC00183_0633314306.001              26209280
           184 C:\MYDB\ARCH\ARC00184_0633314306.001                  4096
           185 C:\MYDB\ARCH\ARC00185_0633314306.001                 16384
    SQL>

  • How to delete the data in archived log files

    Hi,
    How can I delete the entries in archived log files, and what is the disadvantage of deleting archived log entries?

    There is no documented way to delete data stored in archived log files: you can only remove the archived log files if needed.

  • How to recover a database with archive log

    How can I recover a database with archive logs?

    Hi,
    With this little information no one can tell you the answer.
    Kindly post your question with detailed information: do you want to recover the database in archivelog mode, and what type of error do you get? Depending on the errors, you recover your database differently.
    Please tell us more about your database.
    Cheers
    Senthil Kumar
