Oracle 9i Streams - Lots of Archive generation.

Hi,
We are using Oracle 9i Streams for data replication between two database servers running in ARCHIVELOG mode.
We observed that during data replication, with very few transactions, a lot of redo is generated, and hence a lot of archived logs: almost 2.5 GB per day. What might be the reason for so much redo? It is almost 20 times more than the normal redo generated when the replication processes (capture, propagation and apply) were not running. Streams replication does generate a multiple of the normal redo, but not as much as we are getting. What might be the reason for this?
Kamlesh C

Hello,
You can query the v$sesstat and v$session views to find the session that generates
the most redo. For instance:
select a.sid, a.serial#, a.username, a.program, b.value "redo size (bytes)"
from v$session a, v$sesstat b
where a.sid = b.sid
and b.statistic# = (select statistic# from v$statname
                    where name = 'redo size')
order by b.value desc;
The query should be executed while the database is generating the archived logs.
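If you also want to confirm how much archived redo is actually produced per day, v$archived_log can be summed up like this (a minimal sketch; adjust the time window as needed):
-- archived redo per day, in MB, for the last week
select trunc(completion_time) arch_day,
       round(sum(blocks * block_size) / 1024 / 1024) mb_archived
from v$archived_log
where completion_time > sysdate - 7
group by trunc(completion_time)
order by 1;
Comparing this figure before and after the capture, propagation and apply processes are started makes the overhead of Streams itself visible.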
Hope this helps.
Best regards,
Jean-Valentin

Similar Messages

  • Excess archive generation after migration

    hi
    I am facing a large amount of archive generation after my migration from 10.2.0.1 to 11.2.0.1 on 32-bit Windows.
    While I was using 10g, my total archive size in a day was only 10 GB; after the migration it is almost 50 GB.
    Can anybody let me know what I need to do to reduce my archive generation?
    I am using the same application and queries as in 10g.

    Disconnect everyone from the database.
    Or learn how to read documentation and how to administer Oracle.
    Or read up on the differences between 11gR2 and 10gR2.
    Sybrand Bakker
    Senior Oracle DBA

  • Estimation of archive generation

    Hi Experts,
    I am planning to shrink a segment in a production DB to reclaim space. The size of the table is 217 GB, and on calculation its size is estimated to be at most 50 GB, taking into account the overhead in the blocks due to PCTFREE etc. The goal is to estimate the archive generation so that I can arrange the space at OS level to accommodate the archives generated by this shrink activity. Please suggest a way to estimate archive generation before this operation so as to make it successful. Here are the details that may be required:
    DB: 10.2.0.4 Enterprise edition
    OS: Windows Server 2003
    Index on table: 1 unique index of size 12 GB

    Hi Aman/Kuljeet
    I have already tested this on the test/dev server, but since the data and the size were quite a bit smaller than the real thing, I came up with this question. However, I observed that approximately 4 times as much archive was generated as the space reclaimed. Using that ratio, it comes to approximately 600 GB in production, which was a concern for keeping our Data Guard in sync. Anyway, thanks a lot for your response.
    Regards
    Asif Khan
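    One way to calibrate this on the test server is to measure the redo your own session generates around the shrink, for example (a rough sketch; the table name is only an illustration):
        -- redo generated so far by the current session, in bytes
        select s.value
        from v$mystat s, v$statname n
        where s.statistic# = n.statistic#
        and n.name = 'redo size';

        alter table big_table enable row movement;
        alter table big_table shrink space cascade;

        -- run the first query again: the difference is the redo generated by the shrink
    Scaling that ratio (redo generated versus space reclaimed) up to the production segment gives a rough estimate of the archive volume to plan for.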

  • How to check archive generation?

    Hi,
    A lot of archives are being generated in my DB.
    How can I know which session is generating the most archives? Is there any SQL for it?
    Please suggest how to approach this kind of issue.

    Hi,
    To find sessions generating lots of redo, you can use either of the following methods. Both methods examine the amount of undo generated. When a transaction generates undo, it will automatically generate redo as well.
    The methods are:
    1) Query V$SESS_IO. This view contains the column BLOCK_CHANGES, which indicates how many blocks have been changed by the session. High values indicate a session generating lots of redo.
    The query you can use is:
        SELECT s.sid, s.serial#, s.username, s.program, i.block_changes
        FROM   v$session s, v$sess_io i
        WHERE  s.sid = i.sid
        ORDER  BY 5 DESC, 1, 2, 3, 4;
    Run the query multiple times and examine the delta between each occurrence of BLOCK_CHANGES. Large deltas indicate high redo generation by the session.
    2) Query V$TRANSACTION. This view contains information about the amount of undo blocks and undo records accessed by the transaction (in the USED_UBLK and USED_UREC columns).
    The query you can use is:
        SELECT s.sid, s.serial#, s.username, s.program, t.used_ublk, t.used_urec
        FROM   v$session s, v$transaction t
        WHERE  s.taddr = t.addr
        ORDER  BY 5 DESC, 6 DESC, 1, 2, 3, 4;
    Run the query multiple times and examine the delta between each occurrence of USED_UBLK and USED_UREC. Large deltas indicate high redo generation by the session.
    Use the first query when you need to check for programs generating lots of redo and those programs activate more than one transaction. The second query can be used to find out which particular transactions are generating redo.
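    If you prefer to have the delta computed for you instead of comparing two runs by eye, a rough sketch (the snapshot table name is made up for this example):
        -- snapshot v$sess_io, wait while the archives are being generated,
        -- then report the per-session delta
        create global temporary table sess_io_snap on commit preserve rows as
          select sid, block_changes from v$sess_io where 1 = 0;

        insert into sess_io_snap select sid, block_changes from v$sess_io;
        commit;

        -- ... wait a few minutes ...

        select s.sid, s.serial#, s.username, s.program,
               i.block_changes - p.block_changes delta_block_changes
        from v$session s, v$sess_io i, sess_io_snap p
        where s.sid = i.sid
        and i.sid = p.sid
        order by 5 desc;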
    Regards,
    Francisco Munoz Alvarez
    www.oraclenz.com

  • Lots of archives are getting generated due to db block changes in tables fnd_concurrent_requests and fnd_concurrent_queues

    Hi Guys,
    In our R12.1.3 EBS RAC production database (size ~450 GB), we found heavy archive generation, almost 200-250 GB daily, for the past couple of months (due to which our standby/DR database has been lagging). Reviewing AWR reports for the last 3 months, I found that fnd_concurrent_requests and fnd_concurrent_queues are the two tables generating the heaviest redo (from the db block changes section). The same is confirmed from the dba_hist_seg_stat, dba_hist_seg_stat_obj and dba_hist_snapshot tables. The 'Purge Concurrent Request and/or Manager Data' request has been scheduled with the ALL, age, 14 days parameters.
    We have also done a re-org on these tables, but still no luck. Is there any way to reduce archive generation due to these FND% tables? Is there any other purge request? Any help would be appreciated.
    Thanks in Advance,
    Regards,
    Manish Nashikkar

    Have you checked the contents of those archived log files?
    Do you purge FND_CONCURRENT% tables (and other tables) on a regular basis? -- https://community.oracle.com/search.jspa?q=purging+strategy+EBS
    Please see previous threads which cover the same topic -- https://community.oracle.com/search.jspa?q=too+many+archived+log+files
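    If you want to see exactly which objects those archived logs are full of, LogMiner can break the changes down by segment (a minimal sketch; the file name is just an example, use one of your own archived logs):
        begin
          dbms_logmnr.add_logfile(
            logfilename => '/arch/arch_1_12345.arc',   -- example path
            options     => dbms_logmnr.new);
          dbms_logmnr.start_logmnr(
            options => dbms_logmnr.dict_from_online_catalog);
        end;
        /

        select seg_owner, seg_name, operation, count(*)
        from v$logmnr_contents
        group by seg_owner, seg_name, operation
        order by 4 desc;

        exec dbms_logmnr.end_logmnr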
    Thanks,
    Hussein

  • When I forward an HTML mail, the mail arrives split into a lot of files

    When I forward an HTML mail, or even simple rich text, the mail arrives to the recipients split into a lot of separate files.
    Example: if it is a history of three mails, three or more files will arrive (more if there are photos), to be opened as .html in some browser, plus the photo files.
    I use: OS X 10.9
    The server is an Exchange server.
    The strange thing is that IF I REPLY, the mail arrives fine. Only when I forward, manually and/or automatically (via some rule), does what I described above happen.
    Any tip?
    Thanks

    Thanks for the tip. But I am very upset that Apple has no solution for this. I mean, you have the option to forward, and you have the option to resend in HTML; why do we need to find workarounds to do this?
    It is a shame, after buying a Mac, to start becoming aware of a lot of problems....
    Thanks again...

  • Archive generation is too high

    HI
    Archive generation is too high in my 11i instance; the database is 10g. Every 7 minutes a 100 MB archive is generated (the size of each redo log is 100 MB).
    There are 15 seeded programs running on a schedule of 1 to 2 minutes. Can someone give me any tips as to what has to be done?
    Regards

    Hi,
    Please refer to this thread.
    Archive generation
    Archive generation
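    Beyond those threads, a quick way to confirm how often the logs are switching is to count switches per hour from v$log_history (a minimal sketch):
        select trunc(first_time, 'HH24') switch_hour, count(*) log_switches
        from v$log_history
        where first_time > sysdate - 1
        group by trunc(first_time, 'HH24')
        order by 1;
    With 100 MB redo logs, the switch count per hour translates directly into the archive volume you need to plan for.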
    Regards,
    Hussein

  • Oracle EBS Data Purging and Archival

    Hi,
    I would like to know if there is any tool available in market for Oracle EBS data purging and Archival?
    Thanks,

    Yes, there are 3rd-party tools available which will apply a set of business rules (e.g. all data older than Nov 1, 2007) across the various Oracle modules implemented at a customer site.
    They are 3rd-party tools; you can go to Oracle.com and look in the partners' validated integration solutions. At the moment there are 2 partners offering such an integrated solution:
    Solix EDMS Validated Integration with 12.1
    IBM Optim Data Growth Solution
    The only other option is to hire OCS for a custom-developed solution.

  • GoldenGate Replication or Oracle 11g Streams?

    We are working with a customer that is interested in implementing Oracle 11g Streams.
    I don't know the GoldenGate products, and
    I am not really familiar with Streams, other than hearing that it can be tough to tune and, if something goes wrong, it can be a pain to get right.
    They recently heard of Oracle's purchase of GoldenGate and became interested when they read that the company may offer a simpler replication strategy than Streams as far as setup, tuning, maintaining, etc.
    Has anyone heard what Oracle's future plans/direction are for integrating GoldenGate into their fold, specifically the replication side like Streams?
    Thanks

    I predict that in the future the GoldenGate Oracle users are going to get the short end of the stick now that Oracle has purchased them. I do not believe that Oracle purchased GoldenGate for their Oracle to Oracle capabilities. I assume that they wanted to acquire their ability to replicate from non-Oracle databases.
    There is nothing inherently wrong (design-wise) that makes the Oracle Streams product inferior to GoldenGate or Quest's SharePlex product. But both companies have continued to sell their products (which are not cheap) against a product which is basically free.
    Why is this? Do you think that Oracle does not have the technical capability to build a great replication solution? If Oracle was interested in making a great Oracle-to-Oracle replication product, they could have invested a small fraction of the amount they paid for GoldenGate and added the necessary resources to build the best replication product on the market. But they don't, because that is not what they are interested in doing.
    There are a couple of press releases that Oracle put out about the GoldenGate acquisition, and one of the advantages they claim GoldenGate will benefit from is their $30 billion R&D budget. But why didn't Oracle use some of that 30 billion to make Streams a great product?
    Oracle purchased GG for reasons that have nothing to do with Oracle to Oracle replication. Future resources and priorities will go towards those goals.

  • Replication environment from non-Oracle to Oracle using Streams

    Hi guys,
    I'm finding my way to establish a replication environment from any non-Oracle db to Oracle using Streams.
    I've checked Oracle's documentation about this. The answer is that I have to write a custom application which captures the changes in the heterogeneous database and uses PL/SQL to format those changes into LCRs and enqueue them into an Oracle Streams queue.
    Theoretically we can all understand this idea, but I don't know if there are any more detailed comments on it.
    If anyone has such experience, please share it here, and we can build an example such as replicating from SQL Server to Oracle using Streams, or even other techniques.
    Assume we have established HS connectivity using Transparent Gateway.
    Follow-ups are welcome.

    Hi Yukun,
    I'm developing right now an environment that will replicate changes from an Adabas (mainframe) database to Oracle. The greatest challenge is to get your data (from SQL Server or another database) into the Oracle DB. Once that is done, you must have (as the documentation says) a manual process that converts your data into LCRs and then enqueues them. From there, it is a normal Streams environment. I can (try to) answer more detailed questions if you need. I don't have any experience with the SQL Server database.
    Claudine
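    For the "convert to LCRs and enqueue" step, the manual process looks roughly like this (a sketch only, assuming 10g or later for DBMS_STREAMS_MESSAGING; the source name, queue name, table and column values are made-up examples, and capturing the changes on the non-Oracle side is still up to your own code):
        declare
          lcr sys.lcr$_row_record;
        begin
          lcr := sys.lcr$_row_record.construct(
                   source_database_name => 'SQLSERVER_SRC',   -- made-up source name
                   command_type         => 'INSERT',
                   object_owner         => 'SCOTT',
                   object_name          => 'EMP_COPY',
                   new_values           => sys.lcr$_row_list(
                     sys.lcr$_row_unit('EMPNO', anydata.convertnumber(7900),
                                       dbms_lcr.not_a_lob, null, null),
                     sys.lcr$_row_unit('ENAME', anydata.convertvarchar2('SMITH'),
                                       dbms_lcr.not_a_lob, null, null)));
          dbms_streams_messaging.enqueue(
            queue_name => 'strmadmin.streams_queue',           -- made-up queue name
            payload    => anydata.convertobject(lcr));
          commit;
        end;
        /
    An apply process created for user-enqueued LCRs (apply_captured => false) then applies these changes to the target tables.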

  • HT4437 Can I use AirPlay to stream from a 1st generation iPad?

    Can I use AirPlay to stream from a 1st generation iPad?  Thanks

    Yes. The AirPlay option is available for your iPad as long as it is running iOS 4.3 or higher. Hope this helps.

  • Excessive archive generation

    DB: 817
    OS : Sun 5.3
    Suddenly my DB started generating too many archive logs. Normally it used to generate a few dozen archive logs in a day; now one is generated every 2 to 3 minutes. Nothing has been changed on the database side or the application side. What could be the obvious reason for this?

    Hi Vignesh,
    Thanks for the reply. I have checked; there is no background job or process running.... Still, the rate of archive generation is very high.

  • OVM 3.0 Database Creating Lots of Archive Logs

    Greetings - ever since we initially installed OVM 3.0 earlier this fall (~October), the OVM database has generated archive logs at a very rapid rate. It continually threatens to fill up our 16 GB filesystem dedicated to archive logs, even after daily backup and purging.
    Our OVM database itself is only about 4-6 GB in size, and we would need to increase the archive log filesystem to about 20-25 GB, which we see as unreasonable for such a small database.
    What is causing OVM to generate so many redo logs? Our best guess is that OVM is continuously gathering guest VM CPU usage on each physical server.
    Is there a way to configure the OVM application in order to reduce the amount of redo/archive logs being created?
    We are currently running 3.0.3, having upgraded each time a 3.0.* patch was released. OVMM running on OEL 6.1, database running on latest HP-UX.

    majedian21 wrote:
    Greetings - ever since we initially installed OVM 3.0 earlier this fall (~October), the OVM database has generated archive logs at a very rapid rate. It continually threatens to fill up our 16 GB filesystem dedicated to archive logs, even after daily backup and purging.
    I would log an SR with Oracle Support for this, so that Development can look at it. It sounds like your environment has lots of VMs running and, yes, collecting usage stats for all of those environments. However, there may be some old data from the previous versions that is causing more stats to be collected than necessary.

  • For Streams, database in archive log mode or noarchive log mode?

    Hello ,
    I have a basic question,
    To set up Oracle Streams, what should the database mode be (archive log or noarchive log)?
    Thanks in advance,
    Raj

    It needs to be in archive log mode... that's where the capture process gets the necessary information from.
    Kapil
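    For reference, checking the current mode and switching to archive log mode looks like this (a minimal sketch, run as SYSDBA; plan for the bounce, since the database must be mounted but not open):
        select log_mode from v$database;

        -- if it reports NOARCHIVELOG:
        shutdown immediate
        startup mount
        alter database archivelog;
        alter database open;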

  • Oracle 10g streams replication problem.

    Hi,
    Can someone help me with Oracle replication troubleshooting?
    I have source and target databases, and I have set up one-way Streams replication successfully. Whatever data I insert into table A is replicated to the target database successfully.
    However, after I delete the records from the destination database, the old values from the source database are not re-applied to the destination database.
    I reset the START_SCN in the source database, then set the destination SCN using DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN and started the capture process on the source database.
    Can somebody advise how I should recover the destination database?

    Hi Basu,
    Thanks a lot for the response.
    I set up Streams between the source and destination databases using schema instantiation, set the instantiation SCN for the destination, after which the apply process applied all the changes and everything was working fine.
    Then I deleted records from a table of that schema in the destination database. Now how do I get all the records back into this table using Streams?
    I looked at the Oracle documentation and found that I need to do a build using the DBMS_CAPTURE_ADM package, and I did that; after that I reset START_SCN to the old value on the source database. I also reset the SCN at the schema level on the destination database for the apply process to re-apply the old changes. But all this did not help me get the old data back.
    Do you know the specific steps I need to follow to get the old data?
    Thanks a lot
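    In outline, getting the deleted rows back usually means re-instantiating that table rather than rewinding capture: note an SCN at the source, re-copy the table data to the destination as of that SCN (export/import or a copy over a database link), and then tell the apply process to start applying changes from that SCN onwards. A rough sketch (object and database names are only examples):
        -- on the source: note the current SCN
        select dbms_flashback.get_system_change_number from dual;

        -- re-copy the table data as of that SCN, then on the destination:
        begin
          dbms_apply_adm.set_table_instantiation_scn(
            source_object_name   => 'SCOTT.EMP',           -- example table
            source_database_name => 'SRCDB.EXAMPLE.COM',   -- example global name
            instantiation_scn    => 1234567);              -- the SCN noted above
        end;
        /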
