Archive log filling up a gig per minute

Hi there,
No one is on the database, and every day now the archive log destination is filling up at over a gig per minute, so when I get in to work in the morning the backup hasn't run, the archive directory is full, and the database is not available.
This has been happening for the last 6 days and I can't figure it out. I have to keep clearing out archive files to get the directory usage down so a few users can run some reports.
This database is not growing at all. There is no data being added to it; it's a historical database used for running reports only.
Here is the file system, as you can see, big drive so I am at a loss.
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
3.1G 829M 2.2G 28% /
/dev/mapper/VolGroup00-LogVol_USR
4.0G 3.0G 813M 79% /usr
/dev/mapper/VolGroup00-LogVol_d01
160G 116G 36G 77% /d01
/dev/mapper/VolGroupBackup-LogVol_Backup
213G 195G 7.6G 97% /backup
/dev/sda1 99M 11M 84M 11% /boot
none 1.5G 0 1.5G 0% /dev/shm

user13286861 wrote:
Hi there,
No one is on the database, and every day now the archive log destination is filling up at over a gig per minute...

Run off a statspack report and post the "Instance Activity", "Top 5" and "Load Profile" sections here. Don't forget to use the "code" tags to get the output in fixed font (see end of post).
Question 1 - what did you do 6 days ago?
Question 2 - what method are you using for backups?
Guess 1 - you have a load of tablespaces in backup mode, and you're doing a lot of delayed block cleanout.
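A quick way to test guess 1 (just a sketch of a query I'd run as a DBA user from SQL*Plus) is to list any datafiles still in backup mode:

select d.tablespace_name, d.file_name, b.status, b.time
from   v$backup b, dba_data_files d
where  d.file_id = b.file#
and    b.status = 'ACTIVE';

If this returns rows, something has put those tablespaces into backup mode (typically a hot backup script that failed part-way through) and block changes will be generating whole-block redo.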
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk
To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
There is a +"Preview"+ tab at the top of the text entry panel. Use this to check what your message will look like before you post the message. If it looks a complete mess you're unlikely to get a response. (Click on the +"Plain text"+ tab if you want to edit the text to tidy it up.)                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                               

Similar Messages

  • Archive log writes frequency rate (4.73 minute(s)) is too high

    hello,
    DB version = 10.2.0.1.0
    OS: Sun Solaris 5.10
    I got an alert telling me:
    Archive log writes frequency rate (4.73 minute(s)) is too high
    Could someone help me understand what this error means and what is causing it to occur?
    thanks

    Raman wrote:
    please go through this URL
    http://www.dba-oracle.com/t_redo_log_tuning.htm
    Well, the first step on that page is wrong. The link in the step corrects it. However, both pages advocate putting redo logs on SSD, and if you google for recent blog postings about that, you'll see that it is a bad idea. Even on Exadata it only works because the database is also writing to normal spinning rust, and treats the write to redo as done when the first write is acknowledged. For normal non-Exadata databases it's at best an expensive waste of time, and at worst the deterioration of SSDs shows up as redo log corruption. So you might not want to link there.
    You should size the redo for the data rate expected at maximum, and use the parameters CKPT mentioned to switch for normal operating data rates.
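    To put numbers on the alert before tuning anything, a sketch of a query against v$log_history (one row per log switch) shows the switch rate per hour over the last day:

    select to_char(trunc(first_time, 'HH24'), 'YYYY-MM-DD HH24') hour,
           count(*) log_switches
    from   v$log_history
    where  first_time > sysdate - 1
    group  by trunc(first_time, 'HH24')
    order  by 1;

    Sustained rates well above roughly three switches per hour suggest the online redo logs are undersized for the workload.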

  • To generate email alert when archive logs fill

    Hi. I was away yesterday and the archive logs were filling up and were at 92% when I got in this morning.
    I need to have an email generated once the /archlogs directory fills past 90%.
    The total size of that directory is 8064M. So once it hits around 7257M I would like an email fired to me and my boss.
    Oracle 9.2.0.5.0
    UNIX AIX 5.2

    In addition to Sybrand's reply, you might want to consider some other factors.
    First, when setting up email notifications, you need to adjust your expectations to account for the possible latency in email delivery. I have seen 'critical' emails take several hours to get through the system. This is not a function of Oracle, but of the email servers.
    Second, if archivelog destination filling up is an ongoing problem, I'd be looking at how my archivelogs are backed up and deleted - how I do my housekeeping on that destination. You should have more important things to do than constant monitoring and responding to 'destination full' conditions.

  • Archive logs every 15 minutes (Oracle 11g 64-bit EE on Linux RHEL 4)

    In our production database we have very few transactions, maybe a few MB in a whole day, but it is generating archive logs constantly, every 14-15 minutes, at 50 MB each, which consumes 4 GB of space per day for archive logs. That is way above what we expect.
    I have checked archive_lag_target and its value is 0.
    Any clue why it is creating a 50 MB archive log file every 14-15 minutes?

    It's easy enough to reduce redo log file size without downtime; just add new smaller redo log files, switch logfile a couple of times and drop the old redo log files.
    However, if the redo logs are filling up before they switch, then this will probably only make matters worse.
    If the redo logs are switching before they are full then maybe you also need to consider log_checkpoint_interval and log_checkpoint_timeout settings.
    If the redo logs are filling up before they switch then use the techniques suggested by a couple of the other posters to track down the guilty SQL.
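    For reference, the resize described above is just DDL; a minimal sketch (group numbers, file paths and sizes here are invented, adjust to your own layout and to whatever size you decide on):

    alter database add logfile group 4 ('/u01/oradata/PROD/redo04.log') size 100m;
    alter database add logfile group 5 ('/u01/oradata/PROD/redo05.log') size 100m;
    alter system switch logfile;  -- repeat until the old groups show INACTIVE in v$log
    alter database drop logfile group 1;  -- then delete the old file at OS level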

  • Data Warehouse Archive logging questions

    Hi all,
    I'd like some opinions/advice on archive logging and OWB 10.2 with a 10.2 database.
    Do you use archive logging on your non-production OWB instances? I have a development system that only has "on demand" backups done and the archive logs fill frequently. In this scenario, should I disable archive logging? I realize that this limits my recovery options to cold backups but on a development environment, this seems sufficient for me. Would I be messing up any OWB features by turning off archive logging?
    For production instances, how large do you make your archive log (as a percentage of your total DW size perhaps)?
    How do you manage them? With Flash recovery areas? Manually? RMAN or other tools?
    Thanks in Advance,
    Mike

    Usually, I don't set any DW tables to log. Since it's a data warehouse, I believe it's better to make cold backups. In some cases, ETL mappings may work like backup procedures themselves.
    In OWB, select the object you need (table or index) to create. Right-click it, select Configuration -> Performance Parameters -> Logging Mode -> NOLOGGING
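    Outside the OWB client, the equivalent at the database level (a hedged sketch; the table names are invented) is just the NOLOGGING attribute plus direct-path loads:

    alter table sales_fact nologging;
    insert /*+ APPEND */ into sales_fact
    select * from stg_sales;  -- direct-path insert into a NOLOGGING table generates minimal redo
    commit;

    Note that conventional-path DML still generates full redo regardless of the NOLOGGING setting.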
    Flash Recovery - Don't think it's going to help you, since most of your data manipulation is based on batch jobs.
    RMAN - If you want to make hot backups, this is something that can really help you manage backup procedures.
    Manually - Maybe... Why not?
    I don't take hot backups from DW databases. I prefer to take cold backups. In a recovery scenario, I restore the cold backup and, if it's 3 days old, execute the ETL mappings for the last 3 days.
    Regards,
    Marcos

  • Archive log generation at 7-minute intervals

    One HP-UX 11.11 host runs two databases, uiivc and uiivc1. There is heavy archive log generation, every 7 minutes, in both databases. The redo log size is 100mb, configured with 2 members in each of three groups for both databases. The database version is 9.2.0.8. Can anyone help me find out how to monitor the redo log contents to see what is filling them up so frequently and generating so much archived redo (filling up the mount point)?
    Current settings are
    fast_start_mttr_target integer 300
    log_buffer integer 5242880
    Regards
    Manoj

    You can try to find the sessions which are generating lots of redo logs, check metalink doc id: 167492.1
    1) Query V$SESS_IO. This view contains the column BLOCK_CHANGES, which indicates how many blocks have been changed by the session. High values indicate a session generating lots of redo.
    The query you can use is:
    SQL> SELECT s.sid, s.serial#, s.username, s.program,
    2 i.block_changes
    3 FROM v$session s, v$sess_io i
    4 WHERE s.sid = i.sid
    5 ORDER BY 5 desc, 1, 2, 3, 4;
    Run the query multiple times and examine the delta between each occurrence
    of BLOCK_CHANGES. Large deltas indicate high redo generation by the session.
    2) Query V$TRANSACTION. This view contains information about the amount of
    undo blocks and undo records accessed by the transaction (as found in the
    USED_UBLK and USED_UREC columns).
    The query you can use is:
    SQL> SELECT s.sid, s.serial#, s.username, s.program,
    2 t.used_ublk, t.used_urec
    3 FROM v$session s, v$transaction t
    4 WHERE s.taddr = t.addr
    5 ORDER BY 5 desc, 6 desc, 1, 2, 3, 4;
    Run the query multiple times and examine the delta between each occurrence
    of USED_UBLK and USED_UREC. Large deltas indicate high redo generation by
    the session.

  • Archived logs quickly filling up

    The DBA at a client site where one of our products is running is complaining that one of the processes from our application is generating too much redo and filling up the archived redo log destination quickly. How can I find the SQL which is creating the most redo?

    Hi,
    Enable STATSPACK/AWR on the database and monitor whats happening at the peak time.
    If you would like to see which process/statement generated more redo then you need to mine the archive logs.
    To monitor which sessions are currently generating redo, use a query like the one below (instance-wide totals alone won't identify the process, so break the figure down by session):
    select s.sid, s.username, s.program, a.value redo_size
    from v$sesstat a, v$statname b, v$session s
    where a.statistic# = b.statistic#
    and a.sid = s.sid
    and b.name = 'redo size'
    order by a.value desc;
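    If you do need to mine the archive logs to see, after the fact, which objects the redo was for, a minimal LogMiner sketch along these lines works (the file name is an example):

    exec dbms_logmnr.add_logfile('/archlogs/arch_1_12345.arc', dbms_logmnr.new);
    exec dbms_logmnr.start_logmnr(options => dbms_logmnr.dict_from_online_catalog);

    select seg_owner, seg_name, operation, count(*)
    from   v$logmnr_contents
    group  by seg_owner, seg_name, operation
    order  by 4 desc;

    exec dbms_logmnr.end_logmnr;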

  • Managing ARCHIVE Logs in Oracle 10.2.0.3

    I am working with a customer who seems to think there is a way of controlling the database, other than a custom job, script or RMAN, in how it creates, manages and deletes its archive logs while running in archivelog mode. He wants the database to automatically delete obsolete archive logs. He also wants to control the interval between archive log writes in order to stop the growth of archive logs from filling up disk space.
    I am saying this is not possible. You either configure RMAN to delete the obsolete or expired archive logs based on your retention policy, or do it manually in the Enterprise Manager or Grid Control console by deleting obsolete or expired logs.
    Am I correct, or am I off base here?

    4.1.3 Sizing Redo Log Files
    The size of the redo log files can influence performance, because the behavior of the database writer and archiver processes depend on the redo log sizes. Generally, larger redo log files provide better performance. Undersized log files increase checkpoint activity and reduce performance.
    Although the size of the redo log files does not affect LGWR performance, it can affect DBWR and checkpoint behavior. Checkpoint frequency is affected by several factors, including log file size and the setting of the FAST_START_MTTR_TARGET initialization parameter. If the FAST_START_MTTR_TARGET parameter is set to limit the instance recovery time, Oracle automatically tries to checkpoint as frequently as necessary. Under this condition, the size of the log files should be large enough to avoid additional checkpointing due to under sized log files. The optimal size can be obtained by querying the OPTIMAL_LOGFILE_SIZE column from the V$INSTANCE_RECOVERY view. You can also obtain sizing advice on the Redo Log Groups page of Oracle Enterprise Manager Database Control.
    It may not always be possible to provide a specific size recommendation for redo log files, but redo log files in the range of a hundred megabytes to a few gigabytes are considered reasonable. Size your online redo log files according to the amount of redo your system generates. A rough guide is to switch logs at most once every twenty minutes.
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/build_db.htm#sthref237
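    The optimal size mentioned above can be read straight off the instance (the value is in megabytes, and may be null unless FAST_START_MTTR_TARGET has been set):

    select optimal_logfile_size from v$instance_recovery;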
    If you are talking about data guard then:
    Automatic Deletion of Applied Archive Logs
    Archived logs, once they are applied on the logical standby database, will be automatically deleted by SQL Apply.
    This feature reduces storage consumption on the logical standby database and improves Data Guard manageability.
    See also:
    Oracle Data Guard Concepts and Administration for details
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14214/chapter1.htm#sthref269

  • DB Back to a Specific Time with Archive Logs (Until cancel or time?)

    I'll try to be as clear as possible with my intended goals and the limitations of the system I'm working with:
    1. We have a test instance of our Oracle DB that I would like to be able to refresh with data from the production instance at any time, without having to shut down the production database or put tablespaces into hot backup mode.
    2. Both systems are HP Itanium boxes with differing numbers of CPUs and RAM. Those differences have been taken into account in the init.ora file for the DB instances.
    3. The test and production instances are using a SAN to hold the following file systems: ora_redo1, ora_redo2, ora_archlog, oradata10g. The test instance is using SAN snapshots (HP EVA series SAN) of the production file systems to pull the data over when needed. The problem is that the only window to do this is when the production system is down for nightly maintenance which is about a 20 minute period. I want to escape this limitation.
    What I've been doing is using the HP SAN to take snapshots of the file systems on the production system mentioned above. I do this while the production DB is up and running. I then import those snapshots into the test system, run an fsck to ensure file system integrity, then start up the Oracle instance as follows:
    startup mount
    Then I run the following query I found on line to determine the current redo log:
    select member from v$logfile lf , v$log l where l.status='CURRENT' and lf.group#=l.group#;
    Then I attempt to run a recovery as follows:
    recover database using backup controlfile until cancel;
    When prompted, I enter the path to the first of the current redo logs and hit enter. After waiting, sometimes it says that the recovery completed; other times it stops, saying there was an error and that more files are needed to make the DB consistent.
    I took the above route because doing a recover until time (which is what I really want to do) kept prompting me for the next archive log in sequence that didn't yet exist when I took the SAN snapshot. Here was my recover until time command:
    recover database until time 'yyyy-mm-dd:hh:mm:ss' using backup controlfile;
    What I would like to do is take the SAN snapshot and then recover the database to about a minute before the snapshot using the archive logs. I don't want to use RMAN since that seems to be overkill for this purpose. A simple recovery to a specific point in time seems to be all that is needed and I have archive logs which, I assume, SHOULD help me get there. Even if I have to lose the last hour's worth of transactions I could live with that. But my tests setting the specific time of recovery to even 12 hours earlier still resulted in a prompt for the next, non-existent archivelog.
    I will also note that I even tried copying the next archive log over once it did exist and the recovery would then prompt me for the next archive log! I will admit right now that I really don't know a whole lot about Oracle or DBs, but it's my task to try and make it possible to "refresh" the test DB with the most recent data with no impact on the production DB.
    The reason I don't want to use hot backup mode is that I don't know the DB schema other than there are probably 58 or more tablespaces. The goal is to use SAN snapshots for their speed instead of having to take RMAN files and copy them to the test instance. I'm sure I'm not the only person who has ever tried this. But most of what I've found on line refers to RMAN, hot backup mode, or down time. The first two don't take advantage of SAN snapshots for a quick swap of all the Oracle file systems and I can't afford downtime other than that window at night. Is there some reason that the recover to time didn't work even though I have archive logs?
    One final point. The recover until cancel actually worked a couple of times, but it seems to be sporadic. It likely has something to do with what was happening on the production DB when I created the SAN snapshots. I actually thought I had a solution with recover until cancel last week until it didn't work three times in a row.

    I haven't completely discounted it, but it seems like I would need to back up to files, then restore from files. I don't want to do that. I want a full file-system-level SAN snapshot that I can just drop into place. This does work when the production database is shut down. However, considering that I do have archive logs, shouldn't it be possible to use them to recover to a specific SCN without having to do an RMAN backup at all? It seems that doing an RMAN backup/recovery would just make this whole process a lot longer, since the DB is 160 gigs in size (not huge, but the dump would take more time than I would like). With a SAN snapshot, if I can get this to work, I'm looking at about a 15-20 minute window to move the production DB over to test.
    Since you are suggesting that RMAN may be a better approach, I'll provide more details about what I'm trying to do. Essentially this is like trying to recover a DB from a server that had its power plug pulled. I was hoping that Oracle's automatic recovery would do the same thing it would do in that case. But obviously that doesn't work. What I want to do is bring over all the datafiles, redo logs, and archive logs using the SAN snapshot, then if possible use some aspect of Oracle (RMAN if it can do it) to mount the database and recover to a specific time or SCN. However, when I tried using RMAN to do it, I got an error saying that it couldn't restore the data file because it already existed. Since I don't want to start from scratch and have RMAN rebuild files that I've already taken snapshots of (needless copying of data), I gave up on the RMAN approach. But if you know of a way to use RMAN so that it can recover to a specific incarnation without needing to run a backup first, I am completely open to trying it.
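    One thing that might help diagnose the sporadic failures: after mounting the restored snapshot, the datafile headers show each file's checkpoint SCN and whether the file is fuzzy, which tells you how much redo recovery must apply (a sketch; run it with the database mounted):

    select file#, checkpoint_change#, fuzzy
    from   v$datafile_header
    order  by checkpoint_change#;

    Recovery has to keep applying logs until every file reaches a consistent SCN and shows FUZZY = NO; if the redo covering that point had not yet been archived when the snapshot was taken, no UNTIL TIME value can get you there, which would explain the prompts for logs that don't exist.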

  • Secondary destination for Archived logs

    Version: 10.2, 11.1, 11.2
    We occasionally get 'archiver error' on our production DBs due to our LOG_ARCHIVE_DEST_1 being full. How can I have a secondary location for archive logs in case my 'primary' location (LOG_ARCHIVE_DEST_1) becomes full ?
    I gather that LOG_ARCHIVE_DEST_2 is reserved for shipping archive logs to Dataguard standby DB in which you specify the tns entry of standby using SERVICE parameter.
    Can I specify LOG_ARCHIVE_DEST_3 as my secondary location in case LOG_ARCHIVE_DEST_1 becomes full? Is that what LOG_ARCHIVE_DEST_n is meant for? The documentation says you can have up to 10 locations, but I am confused about whether they are meant to store multiplexed copies of archive logs, which is not what I am looking for.

    >
    Hi again Tom,
    I have one more question:
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_4 = 'LOCATION=/disk4/arch';
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_3 = 'LOCATION=/disk3/arch
        ALTERNATE=LOG_ARCHIVE_DEST_4';
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_4=ALTERNATE;
    SQL> SELECT dest_name, status, destination FROM v$archive_dest;
    DEST_NAME               STATUS    DESTINATION
    LOG_ARCHIVE_DEST_1      VALID     /disk1/arch     -------------> Dest1
    LOG_ARCHIVE_DEST_2      VALID     +RECOVERY       -------------> Dest2
    LOG_ARCHIVE_DEST_3      VALID     /disk3/arch     -------------> Dest3
    LOG_ARCHIVE_DEST_4      ALTERNATE /disk4/arch

    My understanding is (and I'm not terribly sure at the minute - I don't have a test system to hand, and I haven't set up a backup/recovery strategy in a while; I just restore backups from time to time, normally every 4 weeks, to ensure that the database recovers as it should) that under the scheme above DEST_3 will be a copy of what's in DEST_1, while DEST_4 will "step in" should DEST_1 or DEST_3 fill up/fail.
    As to DEST_2, I'm not sure - maybe something to do with the Fast Recovery Area? I've Googled but can't find anything - the trouble is that all the pages about this contain the word "recovery", and the "+" sign doesn't appear to affect the search - does "+" mean something special to Google?
    I don't have a system at the moment - if you do, why don't you test and see? On a test system, fill up the file system for DEST_1 with rubbish and check what happens.
    All of the above is to be taken with a pinch of salt - I don't have a system to hand and am not certain, so CAVEAT EMPTOR.
    HTH,
    Paul...
    Edited by: Paulie on 21-Jul-2012 17:20

  • OID (iasdb) 9.0.1.4.0 generating tons of archive logs

    Hi,
    When the tspurge jobs (visible in dba_jobs) run, it looks like a lot of archive logs are being generated. Is this normal? If so, are records being deleted from certain log and audit tables (PORTAL, ORASSO etc.)? I have a situation where, when tspurgeDirObject ('cn=secrefresh events purgeconfig' ); END; runs, 125MB of archive logs are generated every 2 minutes, filling up disk space.
    Thanks for your help.
    Ramesh.


  • Archive log generation

    Hi all,
    In my production environment, archive logs are sometimes generated at 5-6 logs a minute, even though very few users are connected to the database right now.
    -rw-r----- 1 oraprod dba 10484224 Jan 12 14:10 prod_arch_4810.arc
    -rw-r----- 1 oraprod dba 10484224 Jan 12 14:10 prod_arch_4811.arc
    -rw-r----- 1 oraprod dba 10483712 Jan 12 14:10 prod_arch_4812.arc
    -rw-r----- 1 oraprod dba 10484224 Jan 12 14:10 prod_arch_4813.arc
    -rw-r----- 1 oraprod dba 10484224 Jan 12 14:10 prod_arch_4814.arc
    -rw-r----- 1 oraprod dba 10484224 Jan 12 14:10 prod_arch_4815.arc
    Why is this happening?
    Any comments or ideas to resolve this?
    Yusuf

    Whenever you create a thread, it is always advisable to specify your current OS and DB versions.
    You could be generating this redo information by means of your scheduled tasks or by the current users activity, the few concurrent number of users doesn't mean they won't be generating a lot of transactions. Check your v$undostat and v$rollstat, v$transaction, and v$session to monitor users and transaction activity.
    10M for a redo log size is, IMO, very little for the current transaction requirements of most databases. Your database currently generates redo at a rate of about 50M/min. With 100M redo log files you would be generating one archivelog roughly every two minutes, instead of the current 5 archivelogs per minute.
    Since your database is highly transactional, make sure you have enough free space to store your generated archive log files, you will be generating about 3G/hr.
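    To confirm the current log configuration before adding larger groups, something like:

    select group#, bytes/1024/1024 size_mb, members, status
    from   v$log;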
    ~ Madrid

  • System.log filling up with: mbp kernel[0]: dp events: 0x04

    Dec 21 17:09:52 mbp kernel[0]: dp events: 0x04
    This line seems to repeat 5 or 6 times per minute, filling system.log.
    Looks like it started Dec 19th. I don't see any obvious unusual thing I did at that time.
    Any ideas?
    Thanks, jeff

    Although Apple has not fixed this yet, I have learned some interesting things. While on the phone with Apple, they had me run a Data Capture program to collect info about my system to send back to them.
    While running the application, I had Console open so I could look at the system log. To my surprise, after running this application, the repeating dp events: 0x04 completely stopped and the mouse cursor behaved normally. The mouse continued to behave normally until I put the computer to sleep and woke it up again.
    In summary:
    1) The dp events: 0x04 messages are present when the mouse skips and not present when it doesn't. This seems like proof that the muliple writes to the system log are what make the cursor skip.
    2) Running Capture Data.app (from the Apple tech) runs a handful of scripts that not only capture data, but also kills the dp events for some reason.
    I have verified this four different times and reported my findings to the Apple tech I was speaking to.
    Can anyone else who received the data capture program from Apple give this a shot? Just have Console open and look at your system log before, during, and after running the program. Perhaps you'll see, as I did, that the dp events go away and your mouse works right (at least until sleep).

  • Restore archive log which were not backed up

    Hi experts,
    As per my understanding, after the creation of a database the SCN starts from 1 and grows sequentially.
    1) I have not taken any backup until today.
    2) I deleted all the archivelogs, but the database is alive.
    3) I have taken a full backup.
    So I tried to restore from SCN 100 to 200:
    restore archivelog from scn 100 until scn 200;
    error:
    RMAN-20242: specification does not match any archivelog in recovery catalog
    Now my SCN is 504575.
    Is it possible to restore archive logs from SCN 100 to SCN 200?
    Thanks & Best Regards. Right answers will be appreciated.

    784585 wrote:
    Hi,
    I am not using any STANDBY database.
    But earlier you said "see if any case accidently missed archives are deleted, those archives need to apply in standby, so what can we do?" So of course people thought you were trying to sync up a standby...
    You also said: "I want to generate all the archives from the scratch of the database. My question is: the database SCN will start from 1, so the database has information from 1 to the end. So why can't we take a backup again as archives from the database?"
    This is pretty basic. You cannot recover from a backup you do not have. You've already said:
    1) I have not taken any backup until today.
    2) I deleted all the archivelogs, but I have the database alive.
    So with no backup and no archive logs, what would you expect to recover from?
    OK, here's how it works.
    As changes are made to the database, a record of the change is written to a memory (not disk) structure called the "redo log buffer". When a transaction is committed the redo log buffer is written to the online redo log file. You must have a minimum of two online redo log files. This online redo log file is of fixed size. When it is full, the process of writing to it switched to the other (or next, if you have more than two). When an online redo log file is filled, and writing of redo info is switched to the next file, the filled file is copied to a NEW archived log file. After going through all the online redo logs in this round-robin fashion, it will come back to the first and start overwriting its previous contents. The archived log files are given unique names so that they are not over written but instead generate a constant stream of new archivelogs. It is these archive log files that contain all of your redo info that pre-dates the oldest info remaining in the online redo logs.
    In a recovery operation, the following happens
    first, the database files are restored from backup, starting with the newest full backup that meets your recovery needs, then following with all incremental backups after that. You said you have no backups prior to today.
    Second, after the files are restored from backups (which you said you don't have) additional redo information is applied to get the changes since the most recent full or incremental backup that was restored. This additional info comes from the archivelog files, which you said you didn't back up and apparently you deleted.
    So, you've said you have no database backup and no archivelogs, either backed up or original. So from where do you expect to be able to recover this information?
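    For what it's worth, the archived logs the controlfile still knows about, and the SCN range each one covers, can be listed with a query like:

    select sequence#, first_change#, next_change#, name, deleted
    from   v$archived_log
    order  by first_change#;

    Any SCN range with no surviving (and no backed-up) archivelog covering it is simply unrecoverable.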

  • Error while taking archive log backup

    Dear all,
    We are getting the below mentioned error while taking the archive log backup
    ============================================================================
    BR0208I Volume with name RRPA02 required in device /dev/rmt0.1
    BR0210I Please mount BRARCHIVE volume, if you have not already done so
    BR0280I BRARCHIVE time stamp: 2010-05-27 16.43.41
    BR0256I Enter 'c[ont]' to continue, 's[top]' to cancel BRARCHIVE:
    c
    BR0280I BRARCHIVE time stamp: 2010-05-27 16.43.46
    BR0257I Your reply: 'c'
    BR0259I Program execution will be continued...
    BR0280I BRARCHIVE time stamp: 2010-05-27 16.43.46
    BR0226I Rewinding tape volume in device /dev/rmt0 ...
    BR0351I Restoring /oracle/RRP/sapreorg/.tape.hdr0
    BR0355I from /dev/rmt0.1 ...
    BR0278W Command output of 'LANG=C cd /oracle/RRP/sapreorg && LANG=C cpio -iuvB .tape.hdr0 < /dev/rmt0.1':
    Can't read input
    ===========================================================================
    We are able to take offline and online backups, but we are facing the above-mentioned problem while taking the archive log backup.
    We are on ECC 6 / Oracle / AIX.
    The kernel is the latest.
    The drive is working fine and there is no problem with the tapes, as we have tried using different tapes.
    Can this be a permissions issue?
    I ran saproot.sh, but somehow it is setting the owner as sidadm and the group as sapsys on some of the br* files.
    I tried changing the permissions to oraSID : dba, but the error is still the same.
    Any suggestions?

    That means you have not initialized the media but are trying to take backups.
    First check how many media you have entered in the tape count parameter for archive log backups (go to initSID.sap and check).
    Then increase/reduce them according to your archive backup plan >> initialize all the tapes according to their names (the same names as in initSID.sap) >> stick a physical label on each volume matching its name >> schedule the archive backups.
    It will not ask you for initialization, as you already initialized the tapes in the second step.
    Suggestion: use 7 volumes per week (one tape per day).
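    For illustration only (the volume names are invented, and the parameter names are from memory, so double-check them against your own profile), the relevant init<SID>.sap entries look something like:

    volume_archive = (RRPA01, RRPA02, RRPA03, RRPA04, RRPA05, RRPA06, RRPA07)
    expir_period = 7

    Each tape is then initialized once with brarchive -i before it is used.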
    Regards,
    Nick Loy

Maybe you are looking for

  • Battery life and Heating up problems

    After updating to mavericks , it seems that my macbook pro is always heated and the battery life got shorter even if I'm not using any app... anyone could help me ?

  • Read data from a .csv located on another server

    Hi!: I've an application that currently reads some data from a .CSV file that I upload everyday using a file browser item plus a custom procedure to parse it. I was wondering if it'd be possible to automatically read the information from a remote fil

  • Error when creating a new Portfolio

    Hi guys, When I try to create a new Portfolio, I get the following error message:                "The value of the mandatory input field Portfolio Type is initial" This error occurs in spite of the field being populated with a valid Portfolio Type. C

  • External Monitor in Bootcamp question.

    Hi All! For the life of me, I cannot get my LG display to run dual with my macbook pro retina in windows 7 bootcamp. I have installed every driver with no luck. HDMI, nor thunderbolt>DVI are even recognised. Windows doesnt even recognise the display

  • HT4623 My update to ios 6 seems stalled - I cant get past the network security pane. Suggestions?

    I made the mistake of hitting the update and now I cant access anything on my iPhone - I am usually more confident with tech but this is scary