Archive log generation in a day

Hi all,
I want to check the total bytes of archive logs generated in a day. Kindly suggest a query that can help me. I am using a 10.2.0.4 DB.
Thanks
Krishna

set pagesize 18
set linesize 2000
SELECT * FROM (
SELECT * FROM (
SELECT TO_CHAR(FIRST_TIME, 'DD/MM') AS "DAY"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '00', 1, 0)), '999') "00:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '01', 1, 0)), '999') "01:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '02', 1, 0)), '999') "02:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '03', 1, 0)), '999') "03:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '04', 1, 0)), '999') "04:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '05', 1, 0)), '999') "05:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '06', 1, 0)), '999') "06:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '07', 1, 0)), '999') "07:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '08', 1, 0)), '999') "08:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '09', 1, 0)), '999') "09:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '10', 1, 0)), '999') "10:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '11', 1, 0)), '999') "11:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '12', 1, 0)), '999') "12:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '13', 1, 0)), '999') "13:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '14', 1, 0)), '999') "14:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '15', 1, 0)), '999') "15:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '16', 1, 0)), '999') "16:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '17', 1, 0)), '999') "17:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '18', 1, 0)), '999') "18:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '19', 1, 0)), '999') "19:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '20', 1, 0)), '999') "20:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '21', 1, 0)), '999') "21:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '22', 1, 0)), '999') "22:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '23', 1, 0)), '999') "23:00"
FROM V$LOG_HISTORY
WHERE extract(year FROM FIRST_TIME) = extract(year FROM sysdate)
GROUP BY TO_CHAR(FIRST_TIME, 'DD/MM')
) ORDER BY TO_DATE(extract(year FROM sysdate) || DAY, 'YYYY DD/MM') DESC
) WHERE ROWNUM < 16
Or:
Source: archivelogs
select to_char(COMPLETION_TIME,'DD-MON-YYYY'), count(*)
from v$archived_log
group by to_char(COMPLETION_TIME,'DD-MON-YYYY')
order by to_char(COMPLETION_TIME,'DD-MON-YYYY');
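And since you asked for the sum of bytes rather than a count of files, a minimal variant of the same idea (a sketch, assuming v$archived_log still retains rows for the days of interest; BLOCKS * BLOCK_SIZE gives each log's size in bytes):
select trunc(completion_time) "Date",
       count(*) "Logs",
       round(sum(blocks * block_size)/1024/1024) "MB"
from v$archived_log
group by trunc(completion_time)
order by 1;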
Or:
If you wish to know for a particular date:
select to_char(first_time,'DD-MON-RR') "Date",
to_char(sum(decode(to_char(first_time,'HH24'),'00',1,0)),'99') " 00",
to_char(sum(decode(to_char(first_time,'HH24'),'01',1,0)),'99') " 01",
to_char(sum(decode(to_char(first_time,'HH24'),'02',1,0)),'99') " 02",
to_char(sum(decode(to_char(first_time,'HH24'),'03',1,0)),'99') " 03",
to_char(sum(decode(to_char(first_time,'HH24'),'04',1,0)),'99') " 04",
to_char(sum(decode(to_char(first_time,'HH24'),'05',1,0)),'99') " 05",
to_char(sum(decode(to_char(first_time,'HH24'),'06',1,0)),'99') " 06",
to_char(sum(decode(to_char(first_time,'HH24'),'07',1,0)),'99') " 07",
to_char(sum(decode(to_char(first_time,'HH24'),'08',1,0)),'99') " 08",
to_char(sum(decode(to_char(first_time,'HH24'),'09',1,0)),'99') " 09",
to_char(sum(decode(to_char(first_time,'HH24'),'10',1,0)),'99') " 10",
to_char(sum(decode(to_char(first_time,'HH24'),'11',1,0)),'99') " 11",
to_char(sum(decode(to_char(first_time,'HH24'),'12',1,0)),'99') " 12",
to_char(sum(decode(to_char(first_time,'HH24'),'13',1,0)),'99') " 13",
to_char(sum(decode(to_char(first_time,'HH24'),'14',1,0)),'99') " 14",
to_char(sum(decode(to_char(first_time,'HH24'),'15',1,0)),'99') " 15",
to_char(sum(decode(to_char(first_time,'HH24'),'16',1,0)),'99') " 16",
to_char(sum(decode(to_char(first_time,'HH24'),'17',1,0)),'99') " 17",
to_char(sum(decode(to_char(first_time,'HH24'),'18',1,0)),'99') " 18",
to_char(sum(decode(to_char(first_time,'HH24'),'19',1,0)),'99') " 19",
to_char(sum(decode(to_char(first_time,'HH24'),'20',1,0)),'99') " 20",
to_char(sum(decode(to_char(first_time,'HH24'),'21',1,0)),'99') " 21",
to_char(sum(decode(to_char(first_time,'HH24'),'22',1,0)),'99') " 22",
to_char(sum(decode(to_char(first_time,'HH24'),'23',1,0)),'99') " 23"
from v$log_history
where to_char(first_time,'DD-MON-RR')='16-AUG-10'
group by to_char(first_time,'DD-MON-RR')
order by 1
Source: Query help
I have many more in my notes!
Regards
Girish Sharma

Similar Messages

  • How to reduce excessive redo log generation in Oracle 10G

    Hi All,
Please let me know if there is any way to reduce excessive redo log generation in Oracle DB 10.2.0.3.
Previously only about 15 archive log files were generated per day, but nowadays it has increased to 40 to 45.
Below are the sizes of the redo log file members:
BYTES/1024/1024 (MB)     MEMBER
    200     /u05/applprod/prdnlog/redolog1a.dbf
    200     /u06/applprod/prdnlog/redolog1b.dbf
    200     /u05/applprod/prdnlog/redolog2a.dbf
    200     /u06/applprod/prdnlog/redolog2b.dbf
    200     /u05/applprod/prdnlog/redolog3a.dbf
    200     /u06/applprod/prdnlog/redolog3b.dbf
Here is some content from the alert log, for your reference, showing how frequently log switches are occurring:
    Beginning log switch checkpoint up to RBA [0x441f.2.10], SCN: 4871839752
    Thread 1 advanced to log sequence 17439
    Current log# 3 seq# 17439 mem# 0: /u05/applprod/prdnlog/redolog3a.dbf
    Current log# 3 seq# 17439 mem# 1: /u06/applprod/prdnlog/redolog3b.dbf
    Tue Jul 13 14:46:17 2010
    Completed checkpoint up to RBA [0x441f.2.10], SCN: 4871839752
    Tue Jul 13 14:46:38 2010
    Beginning log switch checkpoint up to RBA [0x4420.2.10], SCN: 4871846489
    Thread 1 advanced to log sequence 17440
    Current log# 1 seq# 17440 mem# 0: /u05/applprod/prdnlog/redolog1a.dbf
    Current log# 1 seq# 17440 mem# 1: /u06/applprod/prdnlog/redolog1b.dbf
    Tue Jul 13 14:46:52 2010
    Completed checkpoint up to RBA [0x4420.2.10], SCN: 4871846489
    Tue Jul 13 14:53:33 2010
    Beginning log switch checkpoint up to RBA [0x4421.2.10], SCN: 4871897354
    Thread 1 advanced to log sequence 17441
    Current log# 2 seq# 17441 mem# 0: /u05/applprod/prdnlog/redolog2a.dbf
    Current log# 2 seq# 17441 mem# 1: /u06/applprod/prdnlog/redolog2b.dbf
    Tue Jul 13 14:53:37 2010
    Completed checkpoint up to RBA [0x4421.2.10], SCN: 4871897354
    Tue Jul 13 14:55:37 2010
    Incremental checkpoint up to RBA [0x4421.4b45c.0], current log tail at RBA [0x4421.4b5c5.0]
    Tue Jul 13 15:15:37 2010
    Incremental checkpoint up to RBA [0x4421.4d0c1.0], current log tail at RBA [0x4421.4d377.0]
    Tue Jul 13 15:35:38 2010
    Incremental checkpoint up to RBA [0x4421.545e2.0], current log tail at RBA [0x4421.54ad9.0]
    Tue Jul 13 15:55:39 2010
    Incremental checkpoint up to RBA [0x4421.55eda.0], current log tail at RBA [0x4421.56aa5.0]
    Tue Jul 13 16:15:41 2010
    Incremental checkpoint up to RBA [0x4421.58bc6.0], current log tail at RBA [0x4421.596de.0]
    Tue Jul 13 16:35:41 2010
    Incremental checkpoint up to RBA [0x4421.5a7ae.0], current log tail at RBA [0x4421.5aae2.0]
    Tue Jul 13 16:42:28 2010
    Beginning log switch checkpoint up to RBA [0x4422.2.10], SCN: 4872672366
    Thread 1 advanced to log sequence 17442
    Current log# 3 seq# 17442 mem# 0: /u05/applprod/prdnlog/redolog3a.dbf
    Current log# 3 seq# 17442 mem# 1: /u06/applprod/prdnlog/redolog3b.dbf
    Thanks in advance

Hi,
Use the script below to find out in which hours the most archives are generated, and for those hours check, for example, whether MVs are refreshing or any program is running a whole-table DELETE.
    select
      to_char(first_time,'DD-MM-YY') day,
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'00',1,0)),'999') "00",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'01',1,0)),'999') "01",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'02',1,0)),'999') "02",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'03',1,0)),'999') "03",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'04',1,0)),'999') "04",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'05',1,0)),'999') "05",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'06',1,0)),'999') "06",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'07',1,0)),'999') "07",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'08',1,0)),'999') "08",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'09',1,0)),'999') "09",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'10',1,0)),'999') "10",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'11',1,0)),'999') "11",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'12',1,0)),'999') "12",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'13',1,0)),'999') "13",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'14',1,0)),'999') "14",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'15',1,0)),'999') "15",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'16',1,0)),'999') "16",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'17',1,0)),'999') "17",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'18',1,0)),'999') "18",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'19',1,0)),'999') "19",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'20',1,0)),'999') "20",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'21',1,0)),'999') "21",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'22',1,0)),'999') "22",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'23',1,0)),'999') "23",
      COUNT(*) TOT
    from v$log_history
    group by to_char(first_time,'DD-MM-YY')
    order by day;
    thanks,
    baskar.l

  • Query help for archive log generation details

    Hi All,
Do you have a query to know the archive log generation details for today?
    Best regards,
    Rafi.

Dear user13311731,
You may use the query below; I hope you will find it helpful:
    SELECT * FROM (
    SELECT * FROM (
    SELECT   TO_CHAR(FIRST_TIME, 'DD/MM') AS "DAY"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '00', 1, 0)), '999') "00:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '01', 1, 0)), '999') "01:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '02', 1, 0)), '999') "02:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '03', 1, 0)), '999') "03:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '04', 1, 0)), '999') "04:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '05', 1, 0)), '999') "05:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '06', 1, 0)), '999') "06:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '07', 1, 0)), '999') "07:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '08', 1, 0)), '999') "08:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '09', 1, 0)), '999') "09:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '10', 1, 0)), '999') "10:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '11', 1, 0)), '999') "11:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '12', 1, 0)), '999') "12:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '13', 1, 0)), '999') "13:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '14', 1, 0)), '999') "14:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '15', 1, 0)), '999') "15:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '16', 1, 0)), '999') "16:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '17', 1, 0)), '999') "17:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '18', 1, 0)), '999') "18:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '19', 1, 0)), '999') "19:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '20', 1, 0)), '999') "20:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '21', 1, 0)), '999') "21:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '22', 1, 0)), '999') "22:00"
           , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '23', 1, 0)), '999') "23:00"
        FROM V$LOG_HISTORY
        WHERE extract(year FROM FIRST_TIME) = extract(year FROM sysdate)
    GROUP BY TO_CHAR(FIRST_TIME, 'DD/MM')
    ) ORDER BY TO_DATE(extract(year FROM sysdate) || DAY, 'YYYY DD/MM') DESC
) WHERE ROWNUM < 8;
Hope That Helps.
    Ogan

  • Growth of Archive log generation

    Hi,
In my case the rate of archive log generation has increased, so I want to know a query
to find out the rate of archive log generation per hour.
    Regards
    Syed

    Hi Syed;
What is your DB version? Also your EBS and OS versions?
I use the query below for this issue:
select to_char(first_time,'MM-DD') day,
to_char(sum(decode(to_char(first_time,'hh24'),'00',1,0)),'99') "00",
    to_char(sum(decode(to_char(first_time,'hh24'),'01',1,0)),'99') "01",
    to_char(sum(decode(to_char(first_time,'hh24'),'02',1,0)),'99') "02",
    to_char(sum(decode(to_char(first_time,'hh24'),'03',1,0)),'99') "03",
    to_char(sum(decode(to_char(first_time,'hh24'),'04',1,0)),'99') "04",
    to_char(sum(decode(to_char(first_time,'hh24'),'05',1,0)),'99') "05",
    to_char(sum(decode(to_char(first_time,'hh24'),'06',1,0)),'99') "06",
    to_char(sum(decode(to_char(first_time,'hh24'),'07',1,0)),'99') "07",
    to_char(sum(decode(to_char(first_time,'hh24'),'08',1,0)),'99') "08",
    to_char(sum(decode(to_char(first_time,'hh24'),'09',1,0)),'99') "09",
    to_char(sum(decode(to_char(first_time,'hh24'),'10',1,0)),'99') "10",
    to_char(sum(decode(to_char(first_time,'hh24'),'11',1,0)),'99') "11",
    to_char(sum(decode(to_char(first_time,'hh24'),'12',1,0)),'99') "12",
    to_char(sum(decode(to_char(first_time,'hh24'),'13',1,0)),'99') "13",
    to_char(sum(decode(to_char(first_time,'hh24'),'14',1,0)),'99') "14",
    to_char(sum(decode(to_char(first_time,'hh24'),'15',1,0)),'99') "15",
    to_char(sum(decode(to_char(first_time,'hh24'),'16',1,0)),'99') "16",
    to_char(sum(decode(to_char(first_time,'hh24'),'17',1,0)),'99') "17",
    to_char(sum(decode(to_char(first_time,'hh24'),'18',1,0)),'99') "18",
    to_char(sum(decode(to_char(first_time,'hh24'),'19',1,0)),'99') "19",
    to_char(sum(decode(to_char(first_time,'hh24'),'20',1,0)),'99') "20",
    to_char(sum(decode(to_char(first_time,'hh24'),'21',1,0)),'99') "21",
    to_char(sum(decode(to_char(first_time,'hh24'),'22',1,0)),'99') "22",
    to_char(sum(decode(to_char(first_time,'hh24'),'23',1,0)),'99') "23"
from v$log_history
group by to_char(first_time,'MM-DD');
Regards,
    Helios

  • High redo, log.xml and alert log generation with streams

    Hi,
We have a setup where Streams and Messaging Gateway are implemented on Oracle 11.1.0.7 to replicate changes.
Until recently there was no issue with the setup, but for the last few days there has been an excessive amount of redo, log.xml, and alert log generation, taking up about 50 GB for archive logs and 20 GB for the rest of the files.
    For now we have disabled the streams.
    Please suggest the possible reasons for this issue.
    Regards,
    Ankit

Obviously, as no one here has access to the two files with the error messages, log.xml and the alert log, the resolution starts with looking into those files,
and you should have posted this question only after doing so.
As it stands, no help is possible.
    Sybrand Bakker
    Senior Oracle DBA

  • Redo log generation

    Hi,
I have Oracle 9i on HP-UX. I want to know how much redo was generated every hour for the past month or so. Is there any script I can use to find this?
    Thanks

    Hi,
You can use this script, which gives hour-by-hour log generation and its daily total for a month:
    select
      to_char(first_time,'DD-MM-YY') day,
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'00',1,0)),'999') "00",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'01',1,0)),'999') "01",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'02',1,0)),'999') "02",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'03',1,0)),'999') "03",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'04',1,0)),'999') "04",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'05',1,0)),'999') "05",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'06',1,0)),'999') "06",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'07',1,0)),'999') "07",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'08',1,0)),'999') "08",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'09',1,0)),'999') "09",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'10',1,0)),'999') "10",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'11',1,0)),'999') "11",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'12',1,0)),'999') "12",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'13',1,0)),'999') "13",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'14',1,0)),'999') "14",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'15',1,0)),'999') "15",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'16',1,0)),'999') "16",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'17',1,0)),'999') "17",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'18',1,0)),'999') "18",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'19',1,0)),'999') "19",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'20',1,0)),'999') "20",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'21',1,0)),'999') "21",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'22',1,0)),'999') "22",
      to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'23',1,0)),'999') "23",
      COUNT(*) TOT
    from v$log_history
    group by to_char(first_time,'DD-MM-YY')
    order by day;
    SQL> @log_hour
    DAY      00   01   02   03   04   05   06   07   08   09   10   11   12   13   14   15   16   17   18   19   20   21   22   23          TOT
    01-07-10   29   19   10    8    8   13   12   19   24   20   10   16   10   10   33   24   23   26   27   32   25   13    8   11        430
    02-07-10    9    8    8    8    8    8   10    8   15    8   10    8    8    9   44   30   14    8   25   11    8   12    9   11        297
    03-07-10   10    9    8    8    8    8    9   13   14   18   14   22   16   14   10    8    9   10    9   12   10   11    9   11        270
    04-07-10   10    8    8    9    8    8    8    9   13    9    8   17   10    8   10   13   14    8    9   11    9   10    8   11        236
    05-07-10   10    8    9    8    8    8    9    8   16   14   11    9   10    8    9   10    8   19   13   16    8   10    9   11        249
    06-07-10   10    8    9    8    8    8    9   12   13   15    9    8   10   12   13   11   10    9   10   11   13    9    8   11        244
    07-07-10   10   11    8    8    8    8    9    8   15    8   19   10   19   14   12   10   21    9    8   12    9    9   14   17        276
    08-07-10   13    9    8    8    8    8    9   14   13   11    8    8   13    9    8   21    8    8   13    9    8    9    8    8        239
    09-07-10   10    8    8    8    9    8   10   12   15    9    8   14    8    9   19    9    8    8   16    8    9    8    8    9        238
    10-07-10    9    9    8    8    8    8    9   14   14   10    8    8    9    9   13    8    8    8    8    9    8    9    8    8        218
    11-07-10   10    8    9    8    8    8    9   11   14    8    8    8    8    9    8    8    9    8   10    8    8    8    8    8        209
    12-07-10   10    9    8    8    8    8    9   12   14   11   14   13    8   13   13    9    8   13   10    8   10    8    8    8        240
    13-07-10    9   12    8    8    8    8    8   14   16   17   11    9   12   17   12   11    9    9    8   15    9   12   11    8        261
    14-07-10   10   12    8    8    8    8    9   12   14   10    8    8    8    8    9    9    8    8    9    9    9   31   52   40        315
    15-07-10   16    8    9    8    8    8   12   10   17    9   13   16    9   11    8   15    8   10   15   12    8   12    8    8        258
    16-07-10   10    9    8    8    8    8    9   11   15   11   14   12   20    8    9    9    9    8    8   12   10    8    8    8        240
    17-07-10   10    8    8    8    8    8    8   12   16   10   15    8    9    8    9    8    8    8    9    8    8    9    8    8        219
    18-07-10   11   10    8    8    8    8    8    8   10    9   10    8    9   12   16   10    9    8    8    8    8    8   10    8        220
    19-07-10   10    9    8    8    8    8    9    4   15    9    8    9    8    8    9   11    9    8   17    8   21    8    8    8        228
    20-07-10    9    9    8    8    8    8   12    9   15   16   11    8    9    8   10   12    8    8    9   11    8    9    8    9        230
    21-07-10   19    9    8    8    9    8    8    8    9    9   13    8    8    8    9   11    8   14   24   12   37   40   35    8        330
    22-07-10   10    8   12   10    8    8    8   11   17    9    8    9    8    9    9    8   14   16   31   11   39   53   40    9        365
    23-07-10   10   15   18    8    9    8    9    8   13    9    8   16    9   10    8   14   11   10    9    8    9    9    8   14        250
    24-07-10   10    9    8    8    8    8    8   12   14    9    8    8   10    9    9    9    8   11   12    8    8    8    8    8        218
    25-07-10   10    8    9    8    8    8    9    8   14    9    8    8    8    9    8    9    8   10    8    9    9    8    8    8        209
    26-07-10   10    9    8    8    8    8    9    9   13   10    8   14    8    8   10   19   24   10   27   37   36   23    8    9        333
    27-07-10    9    9    8    8    8    8    8    6    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0         64
    30-06-10    0    1   49   11    9    8   15    8    8   15   17   25   16   27   28    9   13    9   12   14   12   28   40   43        417
28 rows selected.
thanks,
    baskar.l

  • I bought an apple iPod 5th generation the other day and I cannot remember the unlock passcode. I have done the restore thing, but because I never set it up or synced it with iTunes, it does not recognize my device. It is disabled for one hour. Please help

    I bought an apple iPod 5th generation the other day and I cannot remember the unlock passcode. I have done the restore thing, but because I never set it up or synced it with iTunes, it does not recognize my device. It is disabled for one hour. There's only a few things that could be my passcode, but because it disables for an hour, it is time consuming trying to figure it out. What do I do? I do not care about having to erase everything since it's new and there's nothing on it, but it was never synced with iTunes.

    Disabled
    Place the iOS device in Recovery Mode and then connect to your computer and restore via iTunes. The iPod will be erased.
    iOS: Wrong passcode results in red disabled screen                         
    If recovery mode does not work try DFU mode.                        
    How to put iPod touch / iPhone into DFU mode « Karthik's scribblings        
    For how to restore:
    iTunes: Restoring iOS software
    To restore from backup see:
    iOS: Back up and restore your iOS device with iCloud or iTunes
    If you restore from iCloud backup the apps will be automatically downloaded. If you restore from iTunes backup the apps and music have to be in the iTunes library since synced media like apps and music are not included in the backup of the iOS device that iTunes makes.
    You can redownload most iTunes purchases by:
    Downloading past purchases from the App Store, iBookstore, and iTunes Store        
If there is a problem, describe what happens or does not happen, and at which step in the instructions. When you successfully get the iPod into recovery mode and connect it to the computer, iTunes should say it found an iPod in recovery mode.

  • Archive log generation in standby

    Dear all,
    DB: 11.1.0.7
We are configuring a physical standby for our production system. We have the same file
system and configuration on both servers: the primary archive
destination is d:/arch, and the standby server also has d:/arch.
Archive logs are properly shipped to the standby and the data is
intact. The problem: archive log generation is fine in the
primary archive destination, but no archive logs are getting
generated in the standby archive location, even though archive logs are being
applied to the standby database.
Is this normal? Will archive logs not be generated on the standby?
    Please guide
    Kai

No archive logs should be generated on the standby side; why do you think they should be? If you are talking about the standby_archive_dest parameter: if you set it, Oracle will copy each applied log to that directory, not create a new one.
In 11g Oracle recommends not using this parameter. Instead, Oracle recommends setting log_archive_dest_1 and log_archive_dest_3 similar to this:
    ALTER SYSTEM SET log_archive_dest_1 = 'location="USE_DB_RECOVERY_FILE_DEST", valid_for=(ALL_LOGFILES,ALL_ROLES)'
    ALTER SYSTEM SET log_archive_dest_3 = 'SERVICE=<primary_tns> LGWR ASYNC db_unique_name=<prim_db_unique_name> valid_for=(online_logfile,primary_role)'
    /
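    To confirm on the standby that logs are arriving and being applied, one quick check is below (a minimal sketch; run it on the standby):
    select sequence#, first_time, applied
    from v$archived_log
    order by sequence#;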

  • Tx to view error log generated during a day

    hi all,
Can anyone please tell me the transaction code to view all types of error logs generated in a day?
    Regards
    vikas chhabra

Hi,
Go to Easy Access and in the application toolbar press "SAP Business Workplace" or Ctrl+F12, then select Inbox and then Unread Documents; here you can find the list of errors.
Reward points if it helps.
    regards,
    kishore.

  • How to control too much of archived log generation

Hi,
This is one of the interview questions I was asked.
I have given my reply; I would just like to know the correct answer.
How can we control excessive archived log generation?
    Thanks,

    796843 wrote:
Hi,
This is one of the interview questions I was asked.
I have given my reply; I would just like to know the correct answer.
How can we control excessive archived log generation?
Thanks,
Do not do any DML, since only DML generates redo.

  • Archive Log Generation in EBusiness Suite

    Hi,
I am responsible for an E-Business Suite 11.5.10.2 AIX production server. Until last week (for the past 1.5 years), there was excessive archive log generation (200 MB every 10 minutes), which has now been reduced to 200 MB every 4.5 hours.
I am unable to understand this behavior. The number of users remains the same and the usage is as usual.
Is there a way I can check what has gone wrong? I also could not see any errors in the alert.log.
    Please suggest what can be done.
    (I have raised this issue in Metalink Forum also and awaiting a response)
    Thanks
    qA

Log/archive log generation is directly related to the level of activity on the database, so it is almost certain that the level of activity has dropped significantly.
    If possible, can you run this query and post the result:
    select trunc(FIRST_TIME), count(SEQUENCE#) from v$archived_log
    where to_char(trunc(FIRST_TIME),'MONYYYY') = 'SEP2007'
    group by trunc(first_time)
    order by 1
--Adams

  • Hourly archive log generation

    Hi,
I am working on an Oracle 10g RAC database on HP-UX, in the standby environment.
Instance names:
    R1
    R2
    R3
For the above three instances, I need to find the hourly archive log generation on the standby site:
    Hours 1 2 3
    R1
    R2
    R3
    Total
Please share the query.

Set the archive_lag_target parameter to the required value. It is a dynamic parameter and is specified in seconds.
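    For example, a minimal sketch assuming you want a log switch forced at least every 30 minutes and the instance uses an spfile (a value of 0 disables the behaviour):
    ALTER SYSTEM SET archive_lag_target = 1800 SCOPE=BOTH;
    Note this caps the time between log switches; it does not by itself reduce the volume of redo, only how it is packaged into archives.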

  • Heavy archive log generation

    Hi,
    Database Version: Oracle 11.1.0.6
    Platform: Enterprise Linux 5
Can someone please tell me the troubleshooting steps in a situation where there is a heavy inflow of archive log generation, I mean around 3 files of about 50 MB every minute, eating away the space on disk.
1) How to find out what activity is causing such heavy archive log generation? I can run the query below to find the currently running SQL statements with their status:
    select a.username, b.sql_text,a.status from v$session a inner join v$sqlarea b on a.sql_id=b.sql_id;
But is there any other query, or a better way to find out current DB activity in this situation?
I tried using DBMS_LOGMNR (Log Miner) but failed because (i) utl_file_dir is not set in the init parameter file (so mining the archive log file on production is presently ruled out, as I cannot take an outage);
(ii) the alter database add supplemental log data (all) columns statement takes forever because of locks (so I cannot mine the generated archive log file on another machine, due to DBID mismatch).
2) How to deal with this situation? I read here on the OTN discussion board that increasing the number of redo log groups or redo log members helps when there is a lot of DML activity on the application side, but I didn't understand how that would help control the rigorous archive log generation.

    Hi,
    Other than logminer, which will tell you exactly what the redo is by definition, you can run something like the following:
select value/((sysdate-logon_time) * 1440) redo_per_minute,
       s.sid, serial#, logon_time, value
  from v$statname sn,
       v$sesstat ss,
       v$session s
  where s.sid = ss.sid
    and sn.statistic# = ss.statistic#
    and name = 'redo size'
    and value > 0;
Then trace the "high" sessions above and it should jump out at you. If not, then run logmnr with something like:
set serveroutput on size unlimited
begin
  dbms_logmnr.add_logfile(logfilename => '&log_file_name');
  dbms_logmnr.start_logmnr(options => dbms_logmnr.dict_from_online_catalog + dbms_logmnr.no_rowid_in_stmt);
  FOR cur IN (SELECT *
                FROM v$logmnr_contents) loop
    dbms_output.put_line(cur.sql_redo);
  end loop;
  dbms_logmnr.end_logmnr;  -- release the LogMiner session when done
end;
/
Note you don't need utl_file_dir for Log Miner if you use the online catalog.
    HTH,
    Steve

  • Archive log generation in every 7 minute interval

One of our HP-UX 11.11 servers hosts two databases, uiivc and uiivc1. There is heavy archive log generation every 7 minutes in both databases. The redo log size is 100 MB, configured with 2 members in each of three groups. The database version is 9.2.0.8. Can anyone help me find out how to monitor the redo log contents that are filling up so frequently, causing more archived redo to be generated (and filling up the mount point)?
    Current settings are
    fast_start_mttr_target integer 300
    log_buffer integer 5242880
    Regards
    Manoj

    You can try to find the sessions which are generating lots of redo logs, check metalink doc id: 167492.1
1) Query V$SESS_IO. This view contains the column BLOCK_CHANGES, which indicates
how many blocks have been changed by the session. High values indicate a
session generating lots of redo.
    The query you can use is:
SELECT s.sid, s.serial#, s.username, s.program, i.block_changes
FROM v$session s, v$sess_io i
WHERE s.sid = i.sid
ORDER BY 5 desc, 1, 2, 3, 4;
    Run the query multiple times and examine the delta between each occurrence
    of BLOCK_CHANGES. Large deltas indicate high redo generation by the session.
    2) Query V$TRANSACTION. This view contains information about the amount of
    undo blocks and undo records accessed by the transaction (as found in the
    USED_UBLK and USED_UREC columns).
    The query you can use is:
SELECT s.sid, s.serial#, s.username, s.program, t.used_ublk, t.used_urec
FROM v$session s, v$transaction t
WHERE s.taddr = t.addr
ORDER BY 5 desc, 6 desc, 1, 2, 3, 4;
    Run the query multiple times and examine the delta between each occurrence
    of USED_UBLK and USED_UREC. Large deltas indicate high redo generation by
    the session.
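    If examining the deltas by hand is tedious, the two snapshots can be scripted. A minimal sketch, assuming a hypothetical 60-second sample interval and execute privilege on DBMS_LOCK:
    set serveroutput on
    DECLARE
      TYPE t_changes IS TABLE OF NUMBER INDEX BY PLS_INTEGER;
      v_before t_changes;
    BEGIN
      -- First snapshot: block changes per session
      FOR r IN (SELECT sid, block_changes FROM v$sess_io) LOOP
        v_before(r.sid) := r.block_changes;
      END LOOP;
      DBMS_LOCK.SLEEP(60);  -- sample interval in seconds (assumption)
      -- Second snapshot: report sessions that changed blocks in the interval
      FOR r IN (SELECT s.sid, s.username, i.block_changes
                  FROM v$session s, v$sess_io i
                 WHERE s.sid = i.sid) LOOP
        IF v_before.EXISTS(r.sid) AND r.block_changes > v_before(r.sid) THEN
          DBMS_OUTPUT.PUT_LINE('SID ' || r.sid || ' ' || r.username ||
                               ' block_changes delta = ' ||
                               (r.block_changes - v_before(r.sid)));
        END IF;
      END LOOP;
    END;
    /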

Suppressing Redo Log Generation

    Hi,
I have a PL/SQL script that uses a FORALL statement with the SAVE EXCEPTIONS clause to bulk insert/update a table.
How can I suppress redo log generation in my script?
I have placed the database in NOARCHIVELOG mode; does this suppress redo log generation?

Read these chapters:
Oracle Database Instance: http://download.oracle.com/docs/cd/E11882_01/server.112/e16508/startup.htm#g12154
Overview of the Online Redo Log: http://download.oracle.com/docs/cd/E11882_01/server.112/e16508/physical.htm#CNCPT11302
    Redo is needed to redo transactions. Archiving of the redo is needed to redo transactions that go beyond the online redo. Archivelog mode means to keep the redo beyond the time it is immediately used. If your instance or disk crashes, you need to recover the transactions that were in process. There are reasons to use noarchivelog mode, but if you do, you have to be able to recreate what gets lost in a crash. That's why most normal production OLTP is done in archivelog mode, the data is important enough to need to come back if the instance or computer goes away. Other types of production may not need it, but it needs to be explicitly justified why running in noarchivelog mode is done, as well as how to recover if things go south.
    Since archiving redo requires reading the redo and writing it elsewhere, heavy batch loads may benefit from noarchivelog mode. So it might be ok to suppress archiving, but you must have redo. Sometimes it can be justified to do large loads in noarchivelog mode, then switch to archivelog mode, but you must take backups before and after. Since the backups will use the same order of magnitude I/O and cpu as archivelog mode, only a time-window limitation can make it make sense, and you can't have production access either. So it usually makes more sense just to have the hardware to stay archivelog.
Demo, test, archive and development databases are usually somebody's production, too. They all have their own restoration and recovery requirements.
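    If the goal is to cut redo for the bulk load itself rather than just skip archiving, a direct-path insert into a NOLOGGING table is the usual lever. A minimal sketch with hypothetical table names (stage_orders, orders_hist); note that index maintenance still generates redo, and the segment must be backed up afterwards because it cannot be recovered from the redo stream:
    ALTER TABLE orders_hist NOLOGGING;
    -- APPEND requests a direct-path load; with NOLOGGING (or NOARCHIVELOG mode)
    -- the table-data redo drops to a small amount of bookkeeping.
    INSERT /*+ APPEND */ INTO orders_hist
    SELECT * FROM stage_orders;
    COMMIT;
    -- Back up the segment, then restore normal logging.
    ALTER TABLE orders_hist LOGGING;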
