Urgent: Huge difference between total redo log size and archive log size

Dear DBAs,
I have a concern regarding the size of the redo generated versus the size of the archive logs generated.
Is the equation below correct?
total size of redo generated by all sessions = total size of archive log files generated
I am seeing a situation where, when I compare the total size of redo generated by all sessions with the size of the archive logs generated, there is a huge difference.
The total redo reported for all my sessions is 780 MB, whereas my archive log directory has consumed 23 GB.
Before I started measuring I cleared out the archive directory and began monitoring from a specific point in time.
Environment: Oracle 9i Release 2
Here is how I tracked the sizing information: log on as the SYS user and run the following statements.
DROP TABLE REDOSTAT CASCADE CONSTRAINTS;
CREATE TABLE REDOSTAT
(
  AUDSID NUMBER,
  SID NUMBER,
  SERIAL# NUMBER,
  SESSION_ID CHAR(27 BYTE),
  STATUS VARCHAR2(8 BYTE),
  DB_USERNAME VARCHAR2(30 BYTE),
  SCHEMANAME VARCHAR2(30 BYTE),
  OSUSER VARCHAR2(30 BYTE),
  PROCESS VARCHAR2(12 BYTE),
  MACHINE VARCHAR2(64 BYTE),
  TERMINAL VARCHAR2(16 BYTE),
  PROGRAM VARCHAR2(64 BYTE),
  DBCONN_TYPE VARCHAR2(10 BYTE),
  LOGON_TIME DATE,
  LOGOUT_TIME DATE,
  REDO_SIZE NUMBER
)
TABLESPACE SYSTEM
NOLOGGING
NOCOMPRESS
NOCACHE
NOPARALLEL
MONITORING;
GRANT SELECT ON REDOSTAT TO PUBLIC;
CREATE OR REPLACE TRIGGER TR_SESS_LOGOFF
BEFORE LOGOFF
ON DATABASE
DECLARE
PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
INSERT INTO SYS.REDOSTAT
(AUDSID, SID, SERIAL#, SESSION_ID, STATUS, DB_USERNAME, SCHEMANAME, OSUSER, PROCESS, MACHINE, TERMINAL, PROGRAM, DBCONN_TYPE, LOGON_TIME, LOGOUT_TIME, REDO_SIZE)
SELECT A.AUDSID, A.SID, A.SERIAL#, SYS_CONTEXT ('USERENV', 'SESSIONID'), A.STATUS, USERNAME DB_USERNAME, SCHEMANAME, OSUSER, PROCESS, MACHINE, TERMINAL, PROGRAM, TYPE DBCONN_TYPE,
LOGON_TIME, SYSDATE LOGOUT_TIME, B.VALUE REDO_SIZE
FROM V$SESSION A, V$MYSTAT B, V$STATNAME C
WHERE
A.SID = B.SID
AND
B.STATISTIC# = C.STATISTIC#
AND
C.NAME = 'redo size'
AND
A.AUDSID = sys_context ('USERENV', 'SESSIONID');
COMMIT;
END TR_SESS_LOGOFF;
/
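Once the sessions have logged off, I sum the captured values, roughly like this:
SELECT ROUND(SUM(REDO_SIZE)/1024/1024, 2) TOTAL_SESSION_REDO_MB
FROM SYS.REDOSTAT;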
Now, the total sum of REDO_SIZE (B.VALUE) is far less than the archive log size. This is at a time when no other user is logged in except me.
Is there anything wrong with the query that collects the redo information, or are there hidden processes that do not report redo on a per-session basis?
I have seen implementations similar to the above at many sites.
Kindly suggest a mechanism by which I can trace how much redo (or archive log) each user generates on a session basis. I want to track which users/processes are causing the high redo generation.
If I cannot find a solution I will raise an SR with Oracle.
Thanks
[V]

You can query v$sess_io, column block_changes, to find out which session is generating how much redo.
The following query gives you the session redo statistics:
select a.sid,b.name,sum(a.value) from v$sesstat a,v$statname b
where a.statistic# = b.statistic#
and b.name like '%redo%'
and a.value > 0
group by a.sid,b.name;
If you want, you can restrict this to just the 'redo size' statistic for the current sessions.
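For example (a sketch; the join back to v$session for the username is an extra I have added):
select s.sid, s.username, st.value redo_size
from v$sesstat st, v$statname sn, v$session s
where st.statistic# = sn.statistic#
and sn.name = 'redo size'
and s.sid = st.sid
and st.value > 0
order by st.value desc;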
Jaffar

Similar Messages

  • Total combined size of Archived logs

    DB version : 11.2
    Platform : AIX
    How can I determine the total size of the archive logs generated for a particular DB?
    Googling and an OTN search didn't provide much detail.
    I didn't get a solution from the following thread either, as it digressed from the subject:
    Re: archive log size
    The redo log size for our DB is 100 MB.
    SQL> select count(*) from v$archived_log where status = 'A' and name is not null;
      COUNT(*)
            22
    So, I could multiply 22 * 100 = 2200 MB. But there have been some manual switches, so those files will be smaller. This is why I am looking for an accurate way to determine the total size of the archive logs.

    Hello;
    V$ARCHIVED_LOG contains BLOCKS (the size of the archived log in blocks) and BLOCK_SIZE (which is the same as the logical block size of the online log from which the archived log was copied).
    So with a little help in the query you should be able to get it.
    Archivelog size each day
    select
      trunc(COMPLETION_TIME) TIME,
      SUM(BLOCKS * BLOCK_SIZE)/1024/1024 SIZE_MB
    from
      V$ARCHIVED_LOG
    group by
    trunc (COMPLETION_TIME) order by 1;
    Since COMPLETION_TIME is a DATE you can add another SUM to the query to get the exact total you want for the exact date range you want.
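    For example, a grand total over a date range could look like this (a sketch; the dates are placeholders):
    select
      SUM(BLOCKS * BLOCK_SIZE)/1024/1024 TOTAL_MB
    from
      V$ARCHIVED_LOG
    where
      COMPLETION_TIME >= to_date('2012-02-01','YYYY-MM-DD')
      and COMPLETION_TIME < to_date('2012-02-24','YYYY-MM-DD');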
    Archivelog size each hour
    alter session set nls_date_format = 'YYYY-MM-DD HH24';
    select
      trunc(COMPLETION_TIME,'HH24') TIME,
       SUM(BLOCKS * BLOCK_SIZE)/1024/1024 SIZE_MB
    from
      V$ARCHIVED_LOG
    group by
      trunc (COMPLETION_TIME,'HH24') order by 1;
    Another example:
    SELECT   To_char(completion_time,'YYYYMMDD')    run_date,
             Round(Sum(blocks * block_size + block_size) / 1024 / 1024 / 1024) redo_blocks
    FROM     v$archived_log
    GROUP BY To_char(completion_time,'YYYYMMDD')
    ORDER BY 2
    /
    Best Regards
    mseberg
    Edited by: mseberg on Feb 23, 2012 2:30 AM

  • Why size of archive log file increasing in merge clause

    My database is running in archive log mode.
    Someone is running an Oracle MERGE statement; it is still running.
    He will issue a commit after the operation.
    During that period the redo log files are growing.
    My question is: why is the size of the archive logs increasing along with the redo log files?
    I thought an archive log should only be generated after the commit (maybe that is wrong).
    Please suggest.
    Edited by: 855516 on Mar 13, 2012 11:18 AM

    855516 wrote:
    my database is running in archive log mode.
    someone is running oracle merge statement. still it is running.
    He will issue commit after the operation.
    in that period redolog file increasing now.
    my question is why size of archive log file increasing with redolog file.
    i know that after commit archive log file should generate.(may be it is wrong).
    No, that is not correct; archive logs are not generated only after the commit. A MERGE statement performs inserts (if the data is not already present) or updates (if it is). These operations will generate a lot of redo if the amount of data being processed is high.
    If you feel that this operation is causing excessive redo then a root cause analysis should be done.
    For that, use LogMiner (an excellent tool that provides a segment-level breakdown of redo). V$LOGMNR_CONTENTS has columns for the redo block and the redo byte address associated with the current redo change.
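    A rough LogMiner sketch (the logfile name is a placeholder; assumes the online catalog is used as the dictionary):
    EXEC DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/oracle/arch/dest1/arcr_1_964.arc', OPTIONS => DBMS_LOGMNR.NEW);
    EXEC DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
    -- row counts per segment are only a proxy for redo volume
    SELECT seg_owner, seg_name, operation, COUNT(*) changes
    FROM v$logmnr_contents
    GROUP BY seg_owner, seg_name, operation
    ORDER BY changes DESC;
    EXEC DBMS_LOGMNR.END_LOGMNR;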
    There are some guidelines for reducing redo (which may vary by environment):
    1) Check whether there are unwanted indexes on the tables referenced in the MERGE. If yes, removing them could bring down the redo.
    2) Use global temporary tables to reduce redo (if there is a need to keep data only temporarily in a session).
    3) Use NOLOGGING where possible (but consider its implications).
    Hope this helps

  • Redo and archive log association

    Hi,
    I am curious about the association between redo log files and archive log files: is it one to one, or one to many? That is, does one archive log file hold data from just one redo file or from many redo files?
    Or is the association redo group to archive file, as opposed to redo file to archive file?
    The size of the archive log files on my machine sometimes far exceeds the size of a single redo file and sometimes is well under it.
    Thanks.

    I am curious about the redo file and archive log file association, is it one to one, or one to many?
    One archive log file represents the contents of one redo log group. You can have multiple logfile members in a group; all the members contain the same data.
    The size of the arc log files on my machine, sometimes far exceeds the size of a single redo file and sometimes goes well under the size.
    The archive log can be smaller than the redo logfile in the following scenarios:
    1. Manual log switch.
    2. Setting the archive_lag_target parameter.
    sometimes far exceeds the size of a single redo file
    I am not very sure about that; I haven't seen an archive log larger than the redo logfile.
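    You can compare the two sizes on your own system with something like this (a sketch):
    select group#, bytes/1024/1024 online_mb from v$log;
    select sequence#, blocks*block_size/1024/1024 arch_mb from v$archived_log order by sequence#;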
    Anand
    Edited by: Anand... on Sep 2, 2009 7:45 PM

  • Size of archived logs

    Hi,
    1- How can we define the size of archived logs in 10g R2?
    2- How can we define the size of archived logs in 8.1.7?
    Thank you.

    You cannot drop a logfile member that belongs to the current log group. To drop a member from a non-current group:
    alter database drop logfile member 'C:\ORACLE\PRODUCT\10.1.0\ORADATA\ORCL\REDO02A.LOG';
    You can add a member in the following way:
    alter database add logfile member 'C:\ORACLE\PRODUCT\10.1.0\ORADATA\ORCL\REDO02A.LOG' to group 2;
    But all of this is covered in the Oracle docs, and you would do better to look it up yourself to enhance your knowledge.
    Thanks

  • High redo, log.xml and alert log generation with streams

    Hi,
    We have a setup where Streams and Messaging Gateway are implemented on Oracle 11.1.0.7 to replicate changes.
    Until recently there was no issue with the setup, but for the last few days there has been an excessive amount of redo, log.xml and alert log generation, which takes up about 50 GB for archive logs and 20 GB for the rest of the files.
    For now we have disabled Streams.
    Please suggest possible reasons for this issue.
    Regards,
    Ankit

    Obviously, as no one here has access to the two files with the error messages, log.xml and the alert log, the resolution starts with looking into those files, and you should have posted this question only after doing so.
    As it stands, no help is possible.
    Sybrand Bakker
    Senior Oracle DBA

  • What will happen when a redo log file or archive log file, which is yet to be read by LogMiner, is corrupted?

    What will happen when a redo log file or archive log file that is yet to be read by LogMiner is corrupted? It seems that the capture process hangs between "Paused for flow control" and "Enqueuing Messages". How can we get out of this condition without recreating the capture process?
    Any clue is helpful
    Thanks in advance for your help.

    Basically you can't skip SCNs, since it would result in data integrity issues (say you skipped some inserts and later there are updates to the data that was not replicated).
    Streams maintains its own checkpoint tables with transaction-related information, so there is no way you can jump over a range of SCNs without recreating the capture process.
    The only thing you can try is to temporarily give the capture process a rule set without any objects, but it will still need to mine through the redo anyway.
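    To watch where the capture process is while it mines, a small sketch:
    SELECT capture_name, state, total_messages_captured
    FROM v$streams_capture;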

  • Pros and cons between the large log buffer and small log buffer?

    What are the pros and cons of a large log buffer versus a small log buffer?
    Many people suggest that a small log buffer (1-3 MB) is better because we can avoid wait events for users. But I think the bigger one can also have an advantage, because we can reduce redo log file I/O.
    What is the optimal size of the log buffer? Should I consider OLTP vs DSS as well?

    Hi,
    It's interesting to note that some very large shops find that a > 10m log buffer provides better throughput. Also, check out this new world-record benchmark, with a 60m log_buffer. The TPC submission notes that they chose it based on cpu_count:
    log_buffer = 67108864   # 1048576 x cpu
    http://www.dba-oracle.com/t_tpc_ibm_oracle_benchmark_terabyte.htm
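    One common way to judge whether the current log_buffer is too small is to look at the related wait event and retry statistic (a sketch):
    select event, total_waits, time_waited
    from v$system_event
    where event = 'log buffer space';
    select name, value
    from v$sysstat
    where name = 'redo buffer allocation retries';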

  • RMAN BACKUPS AND ARCHIVED LOG ISSUES

    Product: RMAN
    Date written: 2004-02-17
    RMAN BACKUPS AND ARCHIVED LOG ISSUES
    =====================================
    Scenario #1:
    1) RMAN fails when deleting all archived logs.
    The database creates archive files in two archive destinations.
    The following script is run to delete the archived redo logfiles after the backup:
    run {
    allocate channel c1 type 'sbt_tape';
    backup database;
    backup archivelog all delete input;
    }
    When CROSSCHECK is run to verify whether the archived redo logfiles were deleted, the following messages appear:
    RMAN> change archivelog all crosscheck;
    RMAN-03022: compiling command: change
    RMAN-06158: validation succeeded for archived log
    RMAN-08514: archivelog filename=
    /oracle/arch/dest2/arcr_1_964.arc recid=19 stamp=368726072
    2) Cause
    This is not an error. RMAN deletes only the archived files in one of the archive directories, so the archived log files in the remaining directories are left behind without being deleted.
    3) Solution
    To force RMAN to delete the archived log files in all directories, allocate multiple channels and have each channel back up and delete the archived files in one archive destination.
    This can be implemented as follows:
    run {
    allocate channel t1 type 'sbt_tape';
    allocate channel t2 type 'sbt_tape';
    backup
    archivelog like '/oracle/arch/dest1/%' channel t1 delete input
    archivelog like '/oracle/arch/dest2/%' channel t2 delete input;
    }
    Scenario #2:
    1) A backup fails because RMAN cannot find an archived log.
    In this scenario, assume the database is backed up with incremental backups.
    Because RMAN can use the incremental backups instead of archived redo logs during recovery, an OS utility is used to delete all archived redo logs after the backup.
    However, the next backup then encounters the following error:
    RMAN-6089: archive log NAME not found or out of sync with catalog
    2) Cause
    This problem occurs when archived logs are deleted with OS commands; RMAN does not know that the archived logs have been deleted. RMAN-6089 is raised when RMAN tries to back up an archived log that it believes still exists but that was deleted by the OS command.
    3) Solution
    The easiest solution is to use the DELETE INPUT option when backing up the archived logs.
    For example:
    run {
    allocate channel c1 type 'sbt_tape';
    backup archivelog all delete input;
    }
    The second easiest solution is to run the following commands at the RMAN prompt after the archived logs have been deleted with an OS utility:
    RMAN>allocate channel for maintenance type disk;
    RMAN>change archivelog all crosscheck;
    Oracle 8.0:
         RMAN> change archivelog '/disk/path/archivelog_name' validate;
    Oracle 8i:
    RMAN> change archivelog all crosscheck ;
    Oracle 9i:
    RMAN> crosscheck archivelog all ;
    If the catalog's COMPATIBLE parameter is set to 8.1.5 or lower, RMAN sets the status of all archived logs that cannot be found to "DELETED". If COMPATIBLE is set to 8.1.6 or higher, RMAN deletes the records from the repository.

    Very strange. I issued the following command in RMAN on both the primary and the standby machine, but it does not delete 1_55_758646076.dbf. I can see in v$archived_log that "/home/oracle/app/oracle/dataguard/1_55_758646076.dbf" has already been applied.
    RMAN> connect target /
    RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
    old RMAN configuration parameters:
    CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
    new RMAN configuration parameters:
    CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
    new RMAN configuration parameters are successfully stored
    RMAN>

  • Rounding Value, Minimum Lot Size and Maximum Lot Size parameters

    Hello Gurus,
    Please explain the use of the Rounding Value, Minimum Lot Size and Maximum Lot Size parameters in the product master, and how they impact the heuristic run.
    Thanks.

    Rounding value is the increment in which the order quantity can be produced/procured. E.g. if orders are possible with quantities 40, 60, 80, 100, etc., then the rounding value is 20.
    Min lot size is the minimum quantity in which an order can be produced, e.g. 40 in the above example.
    Max lot size is the maximum quantity in which an order can be produced/procured, e.g. 100 in our example.
    Impact on the heuristic run: the heuristic takes all the above parameters into account when planning supply orders. E.g. if the requirement is 55, it will plan a supply of 60. If the requirement is 120, it will create two orders, one for 100 and the other for 20. If the requirement is 10, the order size will be 40.
    Hope this helps.

  • Cisco Supervisor Desktop shows "Agent Logs - Call" and "Agent Logs - State" as N/A ::: UCCX 8.5.1

    Hi team.
    The Cisco Supervisor Desktop doesn't show any logs in the "Agent Logs - State" and "Agent Logs - Call" panes for some agents.
    I restarted the Cisco Desktop Services in CCX Serviceability but the issue continues.
    I would appreciate any help with this case.
    Thanks a lot.
    ErnestoG

    Hi Ernesto,
    Did you click or select the specific Agent\Inbound call which is currently being handled by the agent? From the screenshot you attached (the first one) it doesn't look like the call has been selected.
    Please select that specific Agent\Inbound call from CSD and check these values.
    Hope this helps.
    Anand
    Please rate helpful posts !!

  • I have just installed a new Yahoo Messenger account on my iPhone and cannot log in, getting a user-underage message. How do I log in and stay logged in on my iPhone 5?

    I closed and installed a new Yahoo Messenger account on my iPhone 5 last night and am unable to log into it now; I continuously get a "user underage, please try again" message. How do I get logged in and stay logged in on my iPhone? Thanks.

    You might receive more assistance from users in the Lion forum.
    Click here: https://discussions.apple.com/community/mac_os/mac_os_x_v10.7_lion?view=discussions
    Then click New / Discussion / Lion

  • How to know the size of archived logs created under ASM

    I am using Oracle 10g on Linux x86-64.
    I need to ship some archived logs (not the entire directory, only a few) from the live database to the DR site, so I need an estimate of how much time it will take to ship them across the network.
    Is there any way I can find the size of a specific archived log file stored under ASM?
    We can use du in ASM to find the size of a directory, but I don't find a command in ASM to get the size of a file.

    No, we are also switching logfiles manually, so the maximum size may not have been reached.
    What I need is something like the ls -l command at the Unix prompt, which would help us find the size of a file; is there a similar command to determine the size of a file in ASM?
    What is the objective?
    Anyway, you can get the size of an archived log file by querying the V$ARCHIVED_LOG view.
    SQL> select sequence#, name, blocks*block_size from v$archived_log where sequence# > 180;
    SEQUENCE# NAME                                     BLOCKS*BLOCK_SIZE
           182 C:\MYDB\ARCH\ARC00182_0633314306.001             223053312
           181 C:\MYDB\ARCH\ARC00181_0633314306.001             264281600
           183 C:\MYDB\ARCH\ARC00183_0633314306.001              26209280
           184 C:\MYDB\ARCH\ARC00184_0633314306.001                  4096
           185 C:\MYDB\ARCH\ARC00185_0633314306.001                 16384
    SQL>
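    If you want something closer to ls -l from inside the ASM instance itself, the ASM views can also be queried (a sketch; run while connected to the ASM instance):
    select g.name diskgroup, a.name file_name, f.bytes, f.space
    from v$asm_alias a, v$asm_file f, v$asm_diskgroup g
    where a.group_number = f.group_number
    and a.file_number = f.file_number
    and f.group_number = g.group_number
    and f.type = 'ARCHIVELOG';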

  • How do I find all database log file sizes and mdf file sizes?

    hi experts,
    Could you share a query to find, for all databases, the log file size, the mdf file size (including ndf files) and the total DB size, in MB and GB?
    I have a task to take the DB sizes for around 300 DBs.
    Desired output: DB_Name | Log_file_size | mdf_file_size | Total_db_size | MB | GB
    Thanks,
    Vijay

    Use this, Vijay:
    set nocount on
    Declare @Counter int
    Declare @Sql nvarchar(1000)
    Declare @DB varchar(100)
    Declare @Status varchar(25)
    Declare @CaptureDate datetime
    Set @Status = ''
    Set @Counter = 1
    Set @CaptureDate = getdate()
    Create Table #Size
    (
    SizeId int identity,
    Name varchar(100),
    Size int,
    FileName varchar(1000),
    FileSizeMB numeric(14,4),
    UsedSpaceMB numeric(14,4),
    UnusedSpaceMB numeric(14,4)
    )
    Create Table #DB
    (
    Dbid int identity,
    Name varchar(100)
    )
    Create Table #Status
    (status sql_Variant)
    Insert Into #DB
    Select Name
    From Sys.Databases
    While @Counter <=(Select Max(dbid) From #Db)
    Begin
    Set @DB =
    (
    Select Name
    From #Db
    Where @Counter = DbId
    )
    Set @Sql = 'SELECT DATABASEPROPERTYEX('''+@DB+''', ''Status'')'
    Insert Into #Status
    Exec (@sql)
    Set @Status = (Select convert(varchar(25),status) From #Status)
    If (@Status)= 'ONLINE'
    Begin
    Set @Sql =
    'Use ['+@DB+']
    Insert Into #Size (Name, Size, FileName, FileSizeMB, UsedSpaceMB, UnusedSpaceMB)
    Select '''+@DB+''',size, FileName ,
    convert(numeric(10,2),round(size/128.,2)),
    convert(numeric(10,2),round(fileproperty( name,''SpaceUsed'')/128.,2)),
    convert(numeric(10,2),round((size-fileproperty( name,''SpaceUsed''))/128.,2))
    From sysfiles'
    Exec (@Sql)
    End
    Else
    Begin
    Set @SQL =
    'Insert Into #Size (Name, FileName)
    select '''+@DB+''','+''''+@Status+''''
    Exec(@SQL)
    End
    Delete From #Status
    Set @Counter = @Counter +1
    Continue
    End
    Select Name, Size, FileName, FileSizeMB, UsedSpaceMB, UnUsedSpaceMB,right(rtrim(filename),3) as type, @CaptureDate as Capturedate
    From #Size
    drop table #db
    drop table #status
    drop table #size
    set nocount off
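    If looping through every database is more than you need, a rough alternative that reads only sys.master_files is below (a sketch; size is reported in 8 KB pages):
    SELECT DB_NAME(database_id) AS db_name,
           SUM(CASE WHEN type_desc = 'LOG'  THEN size ELSE 0 END) * 8 / 1024.0 AS log_mb,
           SUM(CASE WHEN type_desc = 'ROWS' THEN size ELSE 0 END) * 8 / 1024.0 AS data_mb,
           SUM(size) * 8 / 1024.0 AS total_mb
    FROM sys.master_files
    GROUP BY database_id
    ORDER BY total_mb DESC;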
    Andre Porter

  • Import performance and archive logs

    We are working with Oracle 10g R2 on Solaris.
    During an import (impdp) it is generating a huge volume of archive logs.
    Our database size is in terabytes.
    How can we stop archive log generation during the import, or at least minimize it?

    Hello,
    If you can restart your database then you may set your database in NOARCHIVELOG mode.
    Then, after the import has finished, you will have to set your database back to ARCHIVELOG mode (you will need to restart the database again).
    Afterwards, you will have to back up your database.
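    A minimal sketch of that switch (assuming you can take the outage):
    SHUTDOWN IMMEDIATE
    STARTUP MOUNT
    ALTER DATABASE NOARCHIVELOG;
    ALTER DATABASE OPEN;
    -- run the impdp job here, then switch back:
    SHUTDOWN IMMEDIATE
    STARTUP MOUNT
    ALTER DATABASE ARCHIVELOG;
    ALTER DATABASE OPEN;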
    Otherwise, without changing the archive mode of the database, you can back up and compress your archived logs.
    For instance, with RMAN:
    connect target /
    backup
      as compressed backupset
      device type disk
      tag 'BKP_ARCHIVE'
      archivelog all not backed up
      delete all input;
    exit;
    That way you'll save space on disk.
    Hope this helps.
    Best regards,
    Jean-Valentin
