Excessive archive generation

DB: 8.1.7
OS: Sun 5.3
Suddenly my database has started generating too many archive logs. It normally generated a few dozen archive logs per day; now it produces one every 2 to 3 minutes. Nothing has been changed on the database side or on the application side. What could be the obvious reason for this?

Hi Vignesh,
Thanks for the reply. I have checked, and there is no background job or process running, yet the rate of archive generation is still very high.
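One way to pin down when the rate jumped (a sketch only, using the standard V$LOG_HISTORY view) is to count log switches per hour and look for the point where the counts climb:

-- log switches per hour; high counts mark the window when redo generation spiked
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS switch_hour,
       COUNT(*)                               AS log_switches
FROM   v$log_history
GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
ORDER  BY 1;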

Similar Messages

  • Excess archive generation after migration

    hi
    I am facing a large volume of archive generation after migrating from 10.2.0.1 to 11.2.0.1 on 32-bit Windows.
    While I was on 10g my total archive size per day was only 10 GB; after the migration it is almost 50 GB.
    Can anybody let me know what I need to do to reduce my archive generation?
    I am using the same application and queries as on 10g.

    Disconnect everyone from the database.
    Or learn how to read documentation and how to administer Oracle.
    Or read up on the differences between 11gR2 and 10gR2.
    Sybrand Bakker
    Senior Oracle DBA

  • Excess archive log generation

    Hello to All,
    The production database has been generating a lot of archives for the last 15 days. Nearly 30 archives are generated per hour, whereas before only one or two were generated per hour. The logfile size is 300M and the database has 3 log groups. Now I want to know which application or which user is generating this many archives, and how I can find the reason for this much archive generation.
    Thankx...

    Can you tell us the Oracle version which you are using?
    For the time being, you can query v$sess_io to find out which session is generating too much redo; a sketch of that query is below.
    Jaffar
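    A minimal sketch of that V$SESS_IO check (standard columns assumed): run it a few times and watch whose BLOCK_CHANGES figure grows fastest between runs.
    -- sessions ordered by cumulative block changes; repeat and compare the deltas
    SELECT s.sid, s.serial#, s.username, s.program, i.block_changes
    FROM   v$session s, v$sess_io i
    WHERE  s.sid = i.sid
    ORDER  BY i.block_changes DESC;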

  • Excessive Redo Generation After Upgrading on Oracle 10gR2

    We had our production database hosted on Oracle 9.2.0. A few months back we migrated it to Oracle 10.2.0.4.0.
    After the migration I noticed that redo generation has become very high. Previously the number of log files generated during production hours was around 20, whereas after the migration it is around 200 files per day. I have run a Statspack report on this database, and it also shows that the log file switch wait has become very high. The parameter timed_statistics has been set to FALSE. The workload on the database is the same before and after the upgrade, the queries running in the sessions are the same, and all parameters and memory structures are the same after the upgrade. The Statspack report shows that db block changes and disk writes have become very high. I used export/import to upgrade the database. Please provide a solution for this problem.
    Thanks In advance for all your favours....

    Hi;
    Please check the notes below, which could be helpful for your issue:
    Diagnosing excessive redo generation [ID 199298.1]
    Excessive Archives / Redo Logs Generation Troubleshooting [ID 832504.1]
    Troubleshooting High Redo Generation Issues [ID 782935.1]
    How to Disable (Temporary) Generation of Archive Redo Log Files [ID 177218.1]
    Regards
    Helios
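    As a supplement to those notes, one rough way to watch the extra redo live (a sketch only, using the standard V$SYSSTAT statistics) is to snapshot the cumulative figures, wait a fixed interval, re-run the query, and compare the deltas against a pre-upgrade baseline or a quiet period:
    -- cumulative redo-related statistics since instance startup; diff two snapshots
    SELECT name, value
    FROM   v$sysstat
    WHERE  name IN ('redo size', 'redo entries', 'db block changes');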

  • Urgent: Excessive archive logs 10g 10.2.0.4

    Hi,
    I am experiencing excessive archive logging.
    This has been happening for the past 3 days. Usually the archive logs were taking about 250 MB per day.
    Yesterday they jumped to 129 GB; today, 30 GB, until the hard drive ran out of space.
    Now I obviously have the
    "Archiver is unable to archive a redo log because the output device is full or unavailable" message.
    The database is servicing a low-transaction-volume application and is awaiting deployment into production.
    Not sure where to start.
    Any advice would be appreciated.
    I can provide any other information necessary to troubleshoot.
    Thanks in advance.

    Run this; it should point out the sessions with transactions currently in flight:
    SELECT s.sid, s.serial#, s.username, s.program,
           t.used_ublk, t.used_urec, vsql.sql_text
    FROM   v$session s, v$transaction t, v$sqlarea vsql
    WHERE  s.taddr = t.addr
    AND    s.sql_id = vsql.sql_id (+)
    ORDER  BY 5 DESC, 6 DESC, 1, 2, 3, 4;
    This assumes you can log in, perhaps after moving some of your archives elsewhere to free up space and let database operations resume.
    Also, reach out to developers/users (if you can) and see if anyone is testing something or loading tables, etc.
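    If it helps to see where the archives are going and how much room is left before you move anything, a sketch against the standard 10.2 views (the second query only applies if a flash recovery area is configured):
    -- where archived logs are being written
    SELECT dest_name, destination, status FROM v$archive_dest WHERE status = 'VALID';
    -- flash recovery area usage, if one is in use
    SELECT name, space_limit, space_used, space_reclaimable FROM v$recovery_file_dest;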

  • Archive generation is too high

    HI
    Archive generation is too high in my 11i instance. The database is 10g, and every 7 minutes a 100 MB archive is generated (the redo log size is 100 MB).
    There are 15 seeded programs running on a schedule of 1 to 2 minutes. Can someone give me tips as to what has to be done?
    Regards

    Hi,
    Please refer to these threads.
    Archive generation
    Archive generation
    Regards,
    Hussein

  • Estimation of archive generation

    Hi Experts,
    I am planning to shrink a segment in the production DB to reclaim space. The size of the table is 217 GB, and on calculation its size is estimated to be at most 50 GB, taking into consideration the overhead in the blocks due to PCTFREE etc. The agenda is to estimate the archive generation so that I can arrange enough space at the OS level to accommodate the archives generated by this shrink activity. Please suggest a way to estimate archive generation before this operation so as to make it successful. Here are the details that may be required:
    DB: 10.2.0.4 Enterprise edition
    OS: server 2003 (windows)
    index on table: 1 unique index of size 12gb

    Hi Aman/Kuljeet
    I have already tested this on a test-and-development server, but since the data volume there was much smaller than in production, I came up with this question. However, I observed that approximately 4 times as much archive was generated as the space reclaimed. Using that observation, it comes to approximately 600 GB in production, which is a concern for keeping our Data Guard in sync. Anyway, thanks a lot for your response.
    Regards
    Asif Khan
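    For anyone repeating that measurement, a sketch of how to capture it: record the 'redo size' statistic for your own session immediately before and after the test shrink (V$MYSTAT and V$STATNAME are standard views); the difference is the redo, and hence roughly the archive volume, attributable to the operation, which can then be scaled up to the production segment size.
    -- redo generated so far by the current session, in bytes; snapshot before and after the shrink
    SELECT sn.name, ms.value
    FROM   v$mystat ms, v$statname sn
    WHERE  ms.statistic# = sn.statistic#
    AND    sn.name = 'redo size';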

  • Oracle 9i Streams - Lots of Archive generation.

    Hi,
    We are using Oracle 9i Streams for data replication between two database servers running in ARCHIVELOG mode.
    We observed that during data replication, with very few transactions, lots of redo, and hence archived logs, are being generated: almost 2.5 GB per day. What might be the reason for the generation of so much redo? It is almost 20 times more than the normal redo generation when the replication processes (capture, propagation and apply) were not running. Streams replication does generate a multiple of the normal redo, but not as much as we are getting. What might be the reason for this?
    Kamlesh C

    Hello,
    You can query the v$sesstat and v$session views to find the session which generates
    the most redo. For instance:
    select a.sid, a.serial#, a.username, a.program, b.value "Redo size (bytes)"
    from v$session a, v$sesstat b
    where a.sid = b.sid
    and b.statistic# = (select statistic# from v$statname
                        where name = 'redo size')
    order by b.value desc;
    The query should be executed while the database is generating the archived logs.
    Hope this helps.
    Best regards,
    Jean-Valentin

  • Excessive redo generation in Production DB?

    My production database has been generating a lot of redo for the last month. It is archiving nearly 45 redo logs per hour; before, it used to generate 2 per hour. The size of the redo log file is 300M and we have 3 redo groups. I want to know what is causing this much redo to be generated, since the system is the same as it was one month ago, and where and how I should start the investigation.
    Thankx...

    Hello,
    Can you check if your tablespaces are in backup mode?
    for example like this:
    select a.name, b.status from v$datafile a, v$backup b
    where a.file# = b.file#;
    does it say "ACTIVE" for any of the files ?
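    If any file does show ACTIVE, the corresponding tablespace was left in hot-backup mode, where the first change to each block is logged as a full block image and redo volume balloons. A sketch of the fix (the tablespace name is illustrative; the database-level form takes every datafile out of backup mode on 10g and later):
    ALTER TABLESPACE users END BACKUP;   -- 'users' is an illustrative tablespace name
    ALTER DATABASE END BACKUP;           -- all datafiles at once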

  • How to check archive generation?

    Hi,
    A lot of archives are being generated in my db.
    How can I know which session is generating more archives? Is there any SQL for it?
    Please suggest how I should approach this kind of issue.

    Hi,
    To find sessions generating lots of redo, you can use either of the following methods. Both methods examine the amount of undo generated. When a transaction generates undo, it will automatically generate redo as well.
    The methods are:
    1) Query V$SESS_IO. This view contains the column BLOCK_CHANGES, which indicates how many blocks have been changed by the session. High values indicate a session generating lots of redo.
    The query you can use is:
           SELECT s.sid, s.serial#, s.username, s.program,
                  i.block_changes
           FROM   v$session s, v$sess_io i
           WHERE  s.sid = i.sid
           ORDER  BY 5 desc, 1, 2, 3, 4;
    Run the query multiple times and examine the delta between each occurrence of BLOCK_CHANGES. Large deltas indicate high redo generation by the session.
    2) Query V$TRANSACTION. This view contains information about the amount of undo blocks and undo records used by the transaction (found in the USED_UBLK and USED_UREC columns).
    The query you can use is:
          SELECT s.sid, s.serial#, s.username, s.program,
                 t.used_ublk, t.used_urec
          FROM   v$session s, v$transaction t
          WHERE  s.taddr = t.addr
          ORDER  BY 5 desc, 6 desc, 1, 2, 3, 4;
    Run the query multiple times and examine the delta between each occurrence of USED_UBLK and USED_UREC. Large deltas indicate high redo generation by the session.
    Use the first query when you need to check for programs generating lots of redo and those programs run more than one transaction. The second query can be used to find out which particular transactions are generating redo.
    Regards,
    Francisco Munoz Alvarez
    www.oraclenz.com

  • Archive Generation in RAC environment.

    Gems,
    I have a doubt that I have not been able to resolve.
    As per my knowledge,
    1) Every instance has its own thread, so a thread is instance-specific. My questions are: how is the SCN maintained in the archives? Will thread 1 and thread 2 contain the same SCN?
    2) If I have a primary database with two nodes, and the standby is a single standalone instance, how will it apply the archives on the standby? I monitored the ALERT log file of the standby; mostly it applies a thread 1 sequence first, then a thread 2 sequence, then a thread 1 sequence, then a thread 2 sequence, and so on. Let me know if my assumption is wrong.
    3) If I clone from RAC to non-RAC and then perform an SCN-based recovery such as
    run {
      set until scn 10000;
      restore database;
      recover database;
    }
    then will SCN *10000* be contained in the archives from both threads, or only in one archive / one sequence? Please explain.
    I would appreciate correct answers. If my question is unclear, let me know and I will try to explain more clearly.
    Thanks

    Hi,
    first of all, there are 2 kinds of SCNs: a "global" SCN and a local SCN per instance. Since 10g, the SCNs are advanced with a mechanism called BOC (broadcast on commit).
    Each time the LGWR writes to the redo log, it sends a message to update the global SCN in all instances and to update the local SCN in all active instances.
    1.) So each redo log entry knows the global SCN. So even though you have multiple instances, the SCN is unique (and in order).
    2.) That depends on how the redo logs were written. If, for example, one instance did not write redo logs for a while, it could be that 2 logs from instance 1 are needed before the log from instance 2 is needed. But in general it will be "parts of redo log 1, then parts of redo log 2", etc.
    3.) Yes you can, since the (global) SCNs are unique and will be used.
    Regards
    Sebastian
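    To see this in practice, a sketch against the standard V$ARCHIVED_LOG view: each archived log records, per thread, the SCN range it covers, which is how an UNTIL SCN clause maps onto logs from both RAC threads during recovery.
    -- SCN range covered by each archived log, ordered so both threads interleave by SCN
    SELECT thread#, sequence#, first_change#, next_change#
    FROM   v$archived_log
    ORDER  BY first_change#, thread#;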

  • DC archive generation - deploy without JDI

    Hi everybody
    I have a Web Dynpro DC "parent" depending on a Web Dynpro DC "child", and no JDI in use.
    I would like to create a deployable archive for the parent and then one for the child, and then ask the customer's sysadmins to deploy them (e.g. with SDM), as I cannot do that directly.
    I can see that by right-clicking on the LocalDevelopment~<project name> project folder in the WD perspective - WD explorer, I can create an archive.
    Where is it stored? Can I use it for the specified purpose?
    Thanks
    Points will be awarded
    Vincenzo

    Hi Vincenzo,
    All the DCs are stored in the file system.
    You can find the path for the DCs under Window -> Preferences -> Java Development Infrastructure -> Development Configuration Pool -> Root Folder.
    Go to that path and you will find the "LocalDevelopment" folder, where all the local DCs are stored. Under "LocalDevelopment" -> DCs -> "vendor name" (e.g. sap.com) -> project name -> _comp -> gen -> default -> deploy, you will find the EAR file.
    You can use that file to deploy with SDM.
    Hope this helps!
    Regards,
    VJR.

  • Exchange 2010 Excessive Archive Transaction Logs

    I've got an Exchange 2010 environment with about 225 users over 6 mailbox DBs with corresponding archive DBs.
    They're seeing log files for the archive DBs growing by about 25-30% of the total DB size daily. Those logs are generally about 8x the logs generated by the corresponding primary mailbox database.
    Something like
    DB1 = 20GB  daily average logs 2GB
    DB1Archive = 90GB daily average logs ~28GB
    I don't see any Retention policies that have duplicated Tags and the Default policy has no tags assigned.
    The customer wants to continue with their existing backup process of weekly fulls and daily incrementals/differentials, and missing two days is enough to max out their transaction log volumes (which are as large as their combined database size).
    The archive DBs have been in place for at least 12 weeks now, so this is not just initial population; that is obvious from the fact that the archive DBs are much larger than the original DBs and the total logs in a week exceed the DB volumes by 2 to 3 times.

    Hi,
    First, please make sure there is no corruption in the retention policies.
    If there is no corruption, I recommend you create a new archive mailbox database and move the archive mailboxes to this new database to check the result.
    Besides, here is a similar thread for your reference.
    http://social.technet.microsoft.com/Forums/en-US/77845ba9-96c4-4b94-8d55-b47c51ef8974/exchange-2010-archive-transaction-logs?forum=exchange2010
    Hope this is helpful to you.
    Best regards,
    Belinda Ma
    TechNet Community Support

  • Archive generation growth.

    Hi All,
    DB-10.2.0.2
    Is there any way we can know which SQL/transaction is causing the maximum redo to be generated? Also, how much redo (in MB) is Oracle generating every second?
    hare krishna

    You can see how many archives you have generated recently by running:
    select * from v$log_history order by first_time desc;
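    To put the growth in MB per day rather than a count of log switches, a sketch using the size columns of the standard V$ARCHIVED_LOG view:
    -- approximate archive volume generated per day, in MB
    SELECT TRUNC(completion_time)                        AS archive_day,
           ROUND(SUM(blocks * block_size) / 1024 / 1024) AS archived_mb
    FROM   v$archived_log
    GROUP  BY TRUNC(completion_time)
    ORDER  BY 1;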

  • Support.mozilla - excessive archiving, no contact info

    Hi,
    First up... I don't think this is the appropriate place to discuss site policy, but when I hit the "contact us" link there is no email address and no link to an administrator; the most helpful link I found was "forums", which contained some 300 forums which, based on past experience, prefer NOT to answer questions from other .mozilla subdomains. So here I am. Just to clarify, you might update your "contact us" page with a link to "file a ticket" or "leave a complaint". I'm sure you don't want to deal with that, but neither does a person with a valid complaint want to be left with no clue as to where to file their site-administrative complaint. On the other hand, I could be mistaken, and HERE (not via email or such) is exactly the place you want these kinds of questions answered?
    Secondly, as to policy, I think you should relax your "this discussion is locked/archived" policy. For example, the closed article at https://support.mozilla.org/en-US/questions/920556 contained an inline image that was pertinent to the article at https://support.mozilla.org/en-US/questions/959999?esab=a&as=aaq, and yet I couldn't insert such an image because both articles were locked.
    So let's look at the principles of the thing:
    Q) you don't want LONG threads in which new answer-seekers get lost.
    A) Yet I myself was getting lost. First I was directed to the first page, and when it was locked, I went to file a question, but then I was directed to a hypothetical answer, which linked to the second page, which in fact was correct but wasn't linked to by Google, nor did it have the attached image, and I was just about to move on when I realized it was in fact the correct article. So I was getting lost, because I had to surf through a bunch of defunct articles/hoops before I reached a valid article.
    I'm not saying you are wrong to desire relatively short articles, just that there is no continuity or connectedness of information. BOTH articles had information pertinent to the total solution of the problem, but there was no link between them (such as "related articles"), nor was there a way of merging info from both articles. Further, it would have been fruitless of me to attempt to start a new article on the topic, when that would just have further fragmented the coherency of the topic and made finding a result via Google harder.
    As a proposed solution or two (probably not optimal), try inserting links to "related articles" on article pages (and not just when you want to ask a question). Secondly, you could allow the submission of addenda to archived articles. You could limit these addenda to solutions (not questions), or even to links to further articles.
    Hope you can take constructive criticism, and don't forget that I am not demeaning all the great solutions this site and its administrators have afforded many a confused Firefox answer-seeker.

    That's a lot of stuff.
    Regarding locked threads, they are auto-locked after 6 months. Due to frequent changes in Firefox, they tend to go out of date... also, we encourage users to post their own question and include their system information, since OS, Firefox version, and add-ons are so often relevant. But I think it would be a good idea to make it easier to refer to an old post if you know that one is highly relevant.
    When you're researching solutions, the site generally tries to find Knowledge Base and Forum matches for your query. Once you pick one of those, it turns out to be pretty difficult to keep in mind enough context to suggest related articles that actually are relevant to the question. Or perhaps it's fruitless to try being relevant, because the reader may have developed a new idea about the nature of their question? Anyway, the site used to have links to related articles, and discovered that at least at this point, the software wasn't doing a very good job. More info in this thread (on the separate forum for discussion among "contributors"): [https://support.mozilla.org/forums/contributors/709718]. Maybe you have some ideas for improvement that would make it possible to restore this feature?
    Regarding adding to Knowledge Base articles, it's a wiki and you can propose edits 24/7. Check out this page for how to get started: https://support.mozilla.org/get-involved/kb
