Restoring full and transaction log backups with NORECOVERY fails due to LSN

In SQL Server 2012 SP1 Enterprise, when I take a full backup followed immediately by a transaction log backup, the transaction log backup starts with an earlier LSN than the ending LSN of the full backup. As a result, I cannot restore
the transaction log backup after the full backup, both with NORECOVERY, on another machine. I was trying to bring the two machines in sync for mirroring purposes. An example is as follows.
full backup:       first 1121000022679500037, last 1121000022681200001
transaction log: first 1121000022679000001, last 1121000022682000001
--- SQL Scripts used  
BACKUP DATABASE xxx  TO DISK = xxx WITH FORMAT
go
backup log  xxx to disk = xxx
--- When restoring, I tried the following
restore log BarraOneArchive  from disk=xxx  WITH STOPATMARK  = 'lsn:1121000022682000001', NORECOVERY
I also tried STOPBEFOREMARK, which did not work either; it complained that the LSN is too early to apply to the database.
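For reference, here is a minimal sketch of the restore sequence I am attempting on the other machine (the paths and file names are placeholders). My understanding is that the log backup should normally apply after the full backup because its last LSN is later than the full backup's last LSN, even though its first LSN is earlier:
-- paths and file names below are placeholders
RESTORE DATABASE BarraOneArchive FROM DISK = 'D:\Backup\BarraOneArchive_full.bak' WITH NORECOVERY
go
RESTORE LOG BarraOneArchive FROM DISK = 'D:\Backup\BarraOneArchive_log.trn' WITH NORECOVERY
go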

I think that what I am saying is correct. I was talking about synchronous mirroring (not about a witness): if the network goes down for a few minutes, or for some longer time, maybe 20 minutes (more than that is a rare scenario; the IS team has a backup plan for that), the log on the principal will
continue to grow, because transactions cannot commit while the connection to the mirror is gone and no commit acknowledgement is coming from the mirror. After the network comes back online, the mirror will replay all the log and will soon catch up with the principal.
Books Online says this: This is achieved by waiting to commit a transaction on the principal database, until the principal server receives a message from the mirror server stating that it has hardened the transaction's log to disk. That is,
if the remote server went away in a way that the principal does not notice, transactions would not commit and the principal would also be stalled.
In practice it does not work that way. When a timeout expires, the principal will consider the mirror to be gone, and Books Online says about this case:
If the mirror server instance goes down, the principal server instance is unaffected and runs exposed (that is, without mirroring the data). In this section, BOL does not discuss transaction logs, but it appears reasonable that the log records are
retained so that the mirror can resynchronize once it is back.
In asynchronous mirroring, the transaction log is sent to the mirror, but the principal does not wait for an acknowledgement from the mirror before committing the transaction.
But I would expect that the principal still gets an acknowledgement that the log records have been consumed, or else your mirroring could start failing if you back up the log too frequently. That is, I would not expect any major difference between synchronous and asynchronous
mirroring in this regard. (Where it matters is when you fail over: with asynchronous mirroring, you are prepared to accept some data loss in case of a failover.)
These are theories that could be fairly easily tested if you have a mirroring environment set up in a lab, but I don't.
Erland Sommarskog, SQL Server MVP, [email protected]

Similar Messages

  • Question about full backup and Transaction Log file

    I had a query: will taking a full backup daily keep my log file from growing? After taking the full backup I still see some of the VLFs in status 2; they only went away when I manually took a backup of the log file. I am a bit confused: should I
    perform a transaction log backup as well as a full database backup daily to avoid such things in future? Also, until I run SHRINKFILE, the storage space on the server won't be reduced, right?

    Yes, a full backup does not clear the log file; only a log backup does. Once a log backup is taken, it sets the inactive VLFs in the log file back to status 0.
    You should perform log backups as per your business SLA for data loss.
    Go ahead and ask yourself this:
    if a disaster strikes and your database server is lost and your only option is to restore it from backup,
    how much data loss can your business handle?
    The answer to this question determines how frequent your log backups should be:
    if the answer is 10 mins, you should take log backups at least every 10 mins;
    if the answer is 30 mins, you should take log backups at least every 30 mins;
    if the answer is 90 mins, you should take log backups at least every 90 mins.
    So, when you restore, you will restore the latest full backup + the latest differential (taken after that full backup)
    and all the log backups taken since the latest restored full or differential backup.
    There are several resources on the web, including YouTube videos, that explain these concepts clearly; I advise you to look at them.
    To release the file space to the OS, you should shrink the file. A log file shrink happens from the end of the file up to the point where it reaches an active VLF.
    If there are no inactive VLFs at the end, then no matter how many inactive VLFs the log file has at the beginning, the log file cannot be shrunk.
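    A minimal T-SQL sketch of the backup-then-shrink sequence described above (the database name, logical log file name, path and target size are placeholders):
    -- back up the log so inactive VLFs are marked reusable (status 0)
    BACKUP LOG MyDatabase TO DISK = 'D:\Backups\MyDatabase_log.trn'
    go
    -- inspect VLF status (2 = active, 0 = reusable)
    DBCC LOGINFO ('MyDatabase')
    go
    -- shrink the log file back toward the last active VLF (target size in MB)
    DBCC SHRINKFILE (MyDatabase_log, 1024)
    go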
    Hope it Helps!!

  • RMAN BACKUPS AND ARCHIVED LOG ISSUES

    Product: RMAN
    Date written: 2004-02-17
    RMAN BACKUPS AND ARCHIVED LOG ISSUES
    =====================================
    Scenario #1:
    1) RMAN fails to delete all archived logs.
    The database creates archive files in two archive destinations.
    The following script is run to back up the database and delete the archived redo logfiles afterwards:
    run {
    allocate channel c1 type 'sbt_tape';
    backup database;
    backup archivelog all delete input;
    }
    When CROSSCHECK is run to verify whether the archived redo logfiles were deleted, the following messages appear:
    RMAN> change archivelog all crosscheck;
    RMAN-03022: compiling command: change
    RMAN-06158: validation succeeded for archived log
    RMAN-08514: archivelog filename=
    /oracle/arch/dest2/arcr_1_964.arc recid=19 stamp=368726072
    2) Cause
    This is not an error. RMAN deletes only the archived files in one of the archive directories, so the archived log files in the remaining directories are left behind.
    3) Solution
    To force RMAN to delete the archived log files in every directory, allocate multiple channels and have each channel back up and delete the archived files in its own archive destination.
    This can be implemented as follows:
    run {
    allocate channel t1 type 'sbt_tape';
    allocate channel t2 type 'sbt_tape';
    backup
    archivelog like '/oracle/arch/dest1/%' channel t1 delete input
    archivelog like '/oracle/arch/dest2/%' channel t2 delete input;
    }
    Scenario #2:
    1) The backup fails because RMAN cannot find an archived log.
    In this scenario, assume the database is backed up with incremental backups.
    Because RMAN can use the incremental backups instead of the archived redo logs during recovery, an OS utility is used to delete all archived redo logs after the backup.
    On the next backup, however, the following error occurs:
    RMAN-6089: archive log NAME not found or out of sync with catalog
    2) Cause
    This problem occurs when archived logs are deleted with an OS command; RMAN does not know that they have been deleted. RMAN-6089 is raised when RMAN tries to back up archived logs that it still believes exist but that were deleted by the OS command.
    3) Solution
    The easiest solution is to use the DELETE INPUT option when backing up the archived logs. For example:
    run {
    allocate channel c1 type 'sbt_tape';
    backup archivelog all delete input;
    }
    The next easiest solution is to run the following commands at the RMAN prompt after deleting the archived logs with the OS utility:
    RMAN>allocate channel for maintenance type disk;
    RMAN>change archivelog all crosscheck;
    Oracle 8.0:
         RMAN> change archivelog '/disk/path/archivelog_name' validate;
    Oracle 8i:
    RMAN> change archivelog all crosscheck ;
    Oracle 9i:
    RMAN> crosscheck archivelog all ;
    If the catalog's COMPATIBLE parameter is set to 8.1.5 or lower, RMAN sets the status of every archived log it cannot find to "DELETED". If COMPATIBLE is set to 8.1.6 or higher, RMAN deletes the records from the repository.

    Very strange. I issue the following command in RMAN on both the primary and the standby machine, but it does not delete 1_55_758646076.dbf; I can see in v$archived_log that "/home/oracle/app/oracle/dataguard/1_55_758646076.dbf" has already been applied.
    RMAN> connect target /
    RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
    old RMAN configuration parameters:
    CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
    new RMAN configuration parameters:
    CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
    new RMAN configuration parameters are successfully stored
    RMAN>
    ----------------------------------------------------------------------------------

  • The log shipping restore job restores a corrupted transaction log backup to a secondary database

    Dear Sir,
    I have primary sql instances in cluster node and it is configured with log shipping for DR system.
    The instance fails over before the log shipping backup job finishes, so a corrupted transaction log backup may be generated. How can I handle log shipping without breaking the chain, and how can I tell whether a transaction log backup is damaged?
    Cheers,

    Well, when a failover happens, SQL Server is stopped and restarted on the other node. If SQL Server is stopped while it is doing a log backup, the backup operation stops and no .trn file is produced; because the backup operation does not complete, no backup
    information is stored in msdb and no .trn file is generated.
    You can run RESTORE VERIFYONLY on a .trn file to see whether it is damaged or not. Log shipping is quite flexible: even if a previous log backup did not complete, the next one is not affected, because SQL Server has no record of the incomplete backup.
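    For example, a minimal sketch of checking a log backup file (the path is a placeholder):
    RESTORE VERIFYONLY FROM DISK = 'D:\LogShipping\MyDatabase_20120416.trn'
    go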
    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
    My Technet Wiki Article
    MVP

  • IMDB Cache and transaction logs

    Hi,
    We have installed the IMDB Cache as part of a proof of concept. We want to cache a large Oracle table (approx 900 million rows) into a read-only local cache group, and we are finding that the amount of space taken by transaction logs during the initial cache load operation exceeds the amount of disk space available. Is there a way to prevent transaction logging during the initial cache load? A failure during the initial load is acceptable for us, as we can always reload the cache from the base Oracle table. We are using a datastore with 60GB of memory; however, the filesystem available is 273GB, less the 120GB for the two datastore backing files, leaving approximately 150GB for transaction logs. To date we have only been able to load approximately 350 million rows before failing with
    5056: The cache operation fails: error_type=<TimesTen Error>, error_code=<802>, error_message: [TimesTen]TT0802: Data store space exhausted
    The datastore attributes we are using are
    [EntResPP]
    Driver=/app1/oracle/product/11.2.0/TimesTen/ER/lib/libtten.so
    DataStore=/prod100/oradata/EntResPP
    LogPurge=1
    PermSize=60000
    TempSize=2000
    PLSQL=1
    DatabaseCharacterSet=AL32UTF8
    OracleNetServiceName=TRAQPP.world
    The command we use to load the cache is
    load cache group ro commit every 256 rows parallel 4
    Thanks
    Mark

    The replication agent is only involved if you have AWT cache groups or if you are using replication. If this is a standalone datastore with a readonly cache group then it is not necessary (or possible) to run the replication agent.
    The error message you mentioned has nothing to do with transaction log space. What has happened is that the memory allocated to the permanent data region within the datastore (where table data, indexes etc. reside) has become full (this corresponds to PermSize in your DSN attributes). This means you have not allocated enough memory in TimesTen to hold all the data. Be aware that there is typically significant storage space 'inflation' when caching data. This can range from 2x through to 5x or more. So, if the table data occupies a real 10 GB in Oracle, it will require between 20 and 50 GB in TimesTen.
    It is possible to suppress logging while loading the cache data (or at least it used to be prior to TT 11.2.1 - I haven't tried this in 11.2.1 myself). You'd do this as follows:
    1. Stop all application connections etc. to the datastore, stop the cache and replication agents, and make sure that the datastore is unloaded from memory.
    2. Change the value for 'Logging' in the DSN attributes to 0 and connect to the DSN using ttIsql as the instance administrator user.
    3. Start the cache agent. From the ttIsql session, issue the command:
    load cache group ro commit every 0 rows;
    You have to use 0 (load the entire cache group as a single 'transaction'), and you cannot use the 'parallel' clause.
    If this fails you may have to manually delete any rows that were loaded since TT cannot rollback.
    4. When the load has completed successfully, stop the cache agent and disconnect the ttIsql session.
    5. Change Logging back to 1 and reconnect as the instance administrator from ttIsql. Restart the cache agent.
    6. Start applications etc. as required.
    Note that I would consider this at best a temporary workaround. Really, you need to ensure you have enough disk space to perform the load with logging enabled. Of course, as I mentioned, the error you are getting right now has nothing to do with log disk space...
    Chris

  • WAE 512 and transaction logs problem

    Hi guys,
    I have a WAE 512 with ACNS 5.5.1b7 and I'm not able to export archived logs correctly. I tried to configure the WAE as below:
    transaction-logs enable
    transaction-logs archive interval every-day at 23:00
    transaction-logs export enable
    transaction-logs export interval every-day at 23:30
    transaction-logs export ftp-server 10.253.8.125 cache **** .
    and the WAE exported only one file of about 9 MB, even though the files were stored on the WAE, as you can see from the output:
    Transaction log configuration:
    Logging is enabled.
    End user identity is visible.
    File markers are disabled.
    Archive interval: every-day at 23:00 local time
    Maximum size of archive file: 2000000 KB
    Log File format is squid.
    Windows domain is not logged with the authenticated username
    Exporting files to ftp servers is enabled.
    File compression is disabled.
    Export interval: every-day at 23:30 local time
    server type username directory
    10.253.8.125 ftp cache .
    HTTP Caching Proxy logging to remote syslog host is disabled.
    Remote syslog host is not configured.
    Facility is the default "*" which is "user".
    Log HTTP request authentication failures with auth server to remote syslog host.
    HTTP Caching Proxy Transaction Log File Info
    Working Log file - size : 96677381
    age: 44278
    Archive Log file - celog_213.175.3.19_20070420_210000.txt size: 125899771
    Archive Log file - celog_213.175.3.19_20070422_210000.txt size: 298115568
    Archive Log file - celog_213.175.3.19_20070421_210000.txt size: 111721404
    I made a test: I configured archiving every hour from 12:00 to 15:00 and the export at 15:10. The files transferred by the WAE were only three, the ones of 12:00, 13:00 and 14:00; the 15:00 one was missed.
    What can I do?
    Thx
    davide

    Hi Davide,
    You seem to be missing the path on the FTP server, which goes on the export command.
    Disable transaction logs, then remove the export command and add it again like this: transaction-logs export ftp-server 10.253.8.125 cache **** / ; after that, enable transaction logs again and test it.
    Let me know how it goes. Thanks!
    Jose Quesada.

  • Temp tables and transaction log

    Hi All,
    I am on SQL 2000.
    When I am inserting (or updating or deleting) data to/from temp tables (i.e. # tables), is transaction log generated for those DML operations?
    The process is: we have a huge input dataset to process, so we insert subset(s) of the input data into a temp table, treat that as our input set, and do the processing in parts. Can I avoid transaction log generation for these intermediate steps?
    Soon we will be moving to 2008 R2. Are there any features in 2008 which can help me avoid this transaction logging?
    Thanks in advance

    Every DML operation is logged in the LOG file. Is it possible to insert the data in small chunks?
    http://www.dfarber.com/computer-consulting-blog/2011/1/14/processing-hundreds-of-millions-records-got-much-easier.aspx
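    For illustration, a minimal sketch of processing the input in fixed-size chunks so the temp table, and the log it generates, stays small (the source table, columns and per-chunk procedure are hypothetical):
    -- hypothetical staging table and batching loop
    CREATE TABLE #work (id int PRIMARY KEY, payload varchar(100));
    DECLARE @lastId int, @rows int;
    SET @lastId = 0;
    SET @rows = 1;
    WHILE @rows > 0
    BEGIN
        TRUNCATE TABLE #work;
        INSERT INTO #work (id, payload)
        SELECT TOP 10000 id, payload
        FROM dbo.InputData          -- hypothetical source table
        WHERE id > @lastId
        ORDER BY id;
        SET @rows = @@ROWCOUNT;
        IF @rows > 0
        BEGIN
            SELECT @lastId = MAX(id) FROM #work;
            EXEC dbo.ProcessChunk;  -- hypothetical per-chunk processing
        END
    END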
    Best Regards, Uri Dimant, SQL Server MVP,
    http://sqlblog.com/blogs/uri_dimant/
    MS SQL optimization: MS SQL Development and Optimization
    MS SQL Blog:
    Large scale of database and data cleansing
    Remote DBA Services:
    Improves MS SQL Database Performance

  • Full Backups, Level 0 Backups, and Archived Logs

    We have an active Oracle server and a standby Oracle server. We keep the standby database up to date with a cron script. The script tells the active database to do 'alter system switch logfile;'. We then rsync the archived logs to our standby server and have rman apply them.
    This works everyday except Monday (of course!) and it only recently started failing on Mondays. The only change was that our Sunday backups used to be 'Full' backups but are now 'level 0' backups. Ever since that change, the first attempt to apply the archived logs to the standby server after the level 0 is taken on the active server gives us something like this:
    ORA-00308: cannot open archived log
    '/opt/oracle/flash_recovery_area/ORCL/archivelog/2012_04_16/o1_mf_1_60519_%u_.arc'
    ORA-27037: unable to obtain file status
    Of course, the file is not there and doesn't exist on the active server either. And of course, the nightly level 1 backups do not give us problems applying archived logs to the standby database the rest of the week.
    The only way I know to recover from this is to apply the level 0 backup, or take a new level 0 and apply it. After that, all subsequent archive logs just work. Any idea why changing from Full to Level 0 would break this? The Oracle docs insist that a Level 0 is identical to a Full except that level 1s can reference them as parents. This simply cannot be true based on what I'm seeing! I really want to keep the level 0 backups in play if possible; level 1 cumulatives won't be useful without them.

    Here are the RMAN settings:
    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
    CONFIGURE BACKUP OPTIMIZATION OFF; # default
    CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
    CONFIGURE CONTROLFILE AUTOBACKUP OFF; # default
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
    CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
    CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE MAXSETSIZE TO UNLIMITED; # default
    CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
    CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
    CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
    CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/opt/oracle/102/dbs/snapcf_ORCL.f'; # default
    I'm not sure how changing ARCHIVELOG BACKUP COPIES would help. Can you give me a little more information about how that setting comes into play in this situation?
    I actually don't want an archive deletion policy here. We have this done in a script three days after the needed archive logs have been applied. Is it possible that we're deleting archive logs too soon? Would we ever need to reach back in time to previously applied archive logs to apply new ones?
    The %u does resolve, but this message isn't showing it. Here is that same log entry plus a few previous entries that show it does resolve.
    ORA-00279: change 1284618956 generated at 04/13/2012 15:30:05 needed for thread
    1
    ORA-00289: suggestion :
    /opt/oracle/flash_recovery_area/ORCL/archivelog/2012_04_16/o1_mf_1_60518_%u_.arc
    ORA-00280: change 1284618956 for thread 1 is in sequence #60518
    ORA-00278: log file
    '/opt/oracle/flash_recovery_area/ORCL/archivelog/2012_04_13/o1_mf_1_60517_7rjzox
    0l_.arc' no longer needed for this recovery
    ORA-00279: change 1284618958 generated at 04/13/2012 15:30:05 needed for thread
    1
    ORA-00289: suggestion :
    /opt/oracle/flash_recovery_area/ORCL/archivelog/2012_04_16/o1_mf_1_60519_%u_.arc
    ORA-00280: change 1284618958 for thread 1 is in sequence #60519
    ORA-00278: log file
    '/opt/oracle/flash_recovery_area/ORCL/archivelog/2012_04_13/o1_mf_1_60518_7rjzox
    0x_.arc' no longer needed for this recovery
    ORA-00308: cannot open archived log
    '/opt/oracle/flash_recovery_area/ORCL/archivelog/2012_04_16/o1_mf_1_60519_%u_.ar
    c'
    ORA-27037: unable to obtain file status
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 3

  • My Backups and Archive Logs

    Hi All,
    I have an Oracle 10g installation on a Solaris host, and at the moment there are a lot of archive log files on the server. Full level 0 backups are executed every week, and level 1 differentials are done on a daily basis. The problem is that I have a lot of archive logs which have not been deleted, and I am not sure if I can delete them now.
    At the moment, the last backup, which was done today, also backed up the archive logs; the latest sequence ID is 2367, and the backup done this morning covered the archive logs up to sequence IDs 2351 - 2360. The oldest backup goes as far back as sequence ID 2070. I was wondering whether I can delete all archive logs prior to this sequence ID.
    Also, is there functionality in Enterprise Manager that deletes them automatically?
    I look forward to your replies.
    thanks

    If it's a simple "no change to existing scripts" sort of thing you are after, then just add another job to run after the existing ones, as follows (you'll want to follow whatever approach your existing scripts use - this is just a demo.):
    $ORACLE_HOME/bin/rman <<EOF
    connect target
    backup archivelog all delete input;
    exit;
    EOF
    And add whatever error checking the existing script has (if any).
    Or, if changing the existing script/s is OK, add PLUS ARCHIVELOG DELETE INPUT to the script/s.
    And TEST it first, before putting it into production.

  • Database in archive log mode and a redo log file not in archive mode

    Hello,
    I have a database running in archive log mode (recently changed). I have 5 redo log groups, and one of them (the current one) shows ARC: NO in the v$log view, meaning not archived. All the other redo logs show ARC: YES.
    What does it mean?
    Am I going to have problems with this redo log file?
    Thanks

    If you do a DESCRIBE on v$log, you'll find that the full column name is ARCHIVED (meaning: has it been archived yet?).
    You could try ALTER SYSTEM SWITCH LOGFILE and then check v$log again a few times afterwards.
    Use the documentation to find out more about v$ views and so on:
    http://www.oracle.com/pls/db102/print_hit_summary?search_string=v%24log
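    For example, a minimal sketch of forcing a log switch and then checking the ARCHIVED column (requires a privileged account):
    ALTER SYSTEM SWITCH LOGFILE;
    SELECT group#, sequence#, archived, status FROM v$log ORDER BY group#;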

  • Starting firefox brings up a dialog box. Title is "Session Manager" and the message is "This operation failed due to a file access error: unknown error". What's up?

    Above says it all

    In case anyone is interested, I figured out the solution. In the WebLogic Console, under JDBC Data Sources, I added the correct JNDI name for Name: soademoDatabase, which was jdbc/soademoDatabase. Redeploy and restart the server; it works fine.

  • How to delete Transaction Logs in SQL database

    Hi,
    Can anyone explain the process of how to delete the transaction logs in a SQL database?
    Thanks
    Sunil

    Sunil,
    Yes, you can take an online backup in MS SQL Server.
    The transaction log files contain information about all changes made to the database. The log files are necessary components of the database and may never be deleted. Why do you want to delete them?
    If I am taking a backup, do I need to shut down the SAP server that is running at the moment, or can I take the backup online?
    There are three main types of SQL Server backup: full database backup, differential database backup and transaction log backup. All of these backups can be made while the database is online and do not require you to stop the SAP system.
    Check below link for details
    http://help.sap.com/erp2005_ehp_04/helpdata/EN/89/68807c8c984855a08b60f14b742ced/frameset.htm
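    For illustration, a minimal T-SQL sketch of the three backup types (the database name PRD and the paths are placeholders); all three can be taken while the database is online:
    BACKUP DATABASE PRD TO DISK = 'E:\Backup\PRD_full.bak'                     -- full database backup
    go
    BACKUP DATABASE PRD TO DISK = 'E:\Backup\PRD_diff.bak' WITH DIFFERENTIAL   -- differential backup
    go
    BACKUP LOG PRD TO DISK = 'E:\Backup\PRD_log.trn'                           -- transaction log backup
    go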
    Thanks
    Sushil

  • How to do the following restore? The full backup takes long and there are transaction log backups in the middle. Thanks!

    One database starts to run full backup at 10am. Full backup finishes at 11:45 AM
    transaction log backups every 30 minutes: 10:00am, 10:30am,11:00 am , 11:30 am and 12:00 PM
    At 1:30 PM
    I need to restore database back to 12:00PM.
    So I should restore:
    1) Full backup at 10:00 am + transaction log backup 10:30am+ transaction log backup 11:00 am +transaction log backup at 11:30 am + transaction log backup at 12:00 PM
    or
    2) Full backup at 10:00 am + transaction log backup at 12:00PM
    Because the full backup starts at 10:00 AM and ends at 11:45 AM, I am not sure whether I should choose 1) or 2).
    Please let me know which one is correct, 1) or 2). Thanks

    Alternatively, you can make use of
    RESTORE HEADERONLY FROM <BACKUP_DEVICE>
    A database backup has LSN information in the backup header, and the last LSN will help you locate the next log backup in sequence to be restored.
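    For example, a minimal sketch (the paths are placeholders): read the header of the full backup and note its LastLSN; the next log backup to restore is the one whose FirstLSN-LastLSN range covers that value.
    RESTORE HEADERONLY FROM DISK = 'D:\Backup\MyDb_full_1000.bak'
    go
    -- compare the FirstLSN / LastLSN columns of each log backup header the same way
    RESTORE HEADERONLY FROM DISK = 'D:\Backup\MyDb_log_1200.trn'
    go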
    Please mark solved if I've answered your question, vote for it as helpful to help other users find a solution quicker
    Praveen Dsa | MCITP - Database Administrator 2008 |
    My Blog | My Page

  • SQL Server Restore Full + SQL Server Transaction Log backup

    Dear DB Admin,
    I would like to restore my SQL full backup + SQL Server transaction log backup. Let me tell you my current setup: we have a SQL Server 2005 full backup at 6 AM and a SQL Server transaction log backup at 8 PM. I would like to restore
    the full + transaction log backup to a new server. How can I do it? Could you please explain it to us?
    Best Regards
    Subash

    Dear Boss,
    Thanks for your support. I followed your steps and restored the full database with the "No Recovery" option. It completed 100% without error. But now my database shows a green arrow mark (Restoring...) and it is still in that state.
    Could you please tell me what the problem is and which options I chose wrongly?
    Best regards
    Subash
    Did you also restore a transaction log backup, or just the full backup?
    Do you have any transaction log backup taken after the full backup you restored?
    Do you want to restore transaction log backups as well? How many transaction log backups did you take after the full backup you just restored? Per your explanation, there should be one transaction log backup.
    So, you now need to restore that transaction log backup:
    RESTORE LOG <<RestoreDatabaseName>>
    FROM DISK = 'BackupFileLocation\Transaction_log_BackupFileName.bak'
    WITH FILE = <<fileposition>>, RECOVERY
    Hope it Helps!!

  • Please tell me the step by step process of backup and full restore

    Dear all,
    We are using ECC5 on Windows with Oracle 9i, and I am going to apply a support package. Please tell me the step-by-step process for backup and full restore. I am using DB13 for backup. Please suggest.
    Regards,
    Shiva

    Hi,
    Log in as the <SID>adm user and run the following command:
    brrestore -b <backup logfile> -m full
    This command restores the backup.
    You can find the backup logfile in /oracle/<SID>/sapbackup/ (with extension .ant or .aft).
    For a backup using brtools, follow this procedure:
    Run brtools; you then get the following options:
    1 = Database backup
    2 - Archivelog backup
    3 - Database copy
    4 - Non-database backup
    5 - Verification of database backup
    6 - Verification of archivelog backup
    7 + Additional functions
    8 - Reset input values
    Standard keys: c - cont, b - back, s - stop, r - refr, h - help
    BR662I Enter your choice:
    1
    BR280I Time stamp 2008-04-04 18.41.57
    BR663I Your choice: '1'
    BR280I Time stamp 2008-04-04 18.41.57
    BR657I Input menu 15 - please check/enter input values
    BRBACKUP main options for backup and database copy
    1 - BRBACKUP profile (profile) ....... [initPRD.sap]
    2 - Backup device type (device) ...... [tape]
    3 ~ Tape volumes for backup (volume) . []
    4 # BACKINT/Mount profile (parfile) .. []
    5 - Database user/password (user) .... [system/*******]
    6 - Backup type (type) ............... [online]
    7 - Back up disk backup (backup) ..... [no]
    8 # Delete disk backup (delete) ...... [no]
    9 ~ Files for backup (mode) .......... [all]
    Standard keys: c - cont, b - back, s - stop, r - refr, h - help
    BR662I Enter your choice:
    3
    BR280I Time stamp 2008-04-04 18.42.25
    BR663I Your choice: '3'
    BR280I Time stamp 2008-04-04 18.42.25
    BR681I Enter string value for "volume" (scratch|<tape_vol>|<tape_vol
    PRDB04
    BR280I Time stamp 2008-04-04 18.43.06
    BR683I New value for "volume": 'PRDB04'
    BR280I Time stamp 2008-04-04 18.43.06
    BR657I Input menu 15 - please check/enter input values
    BRBACKUP main options for backup and database copy
    1 - BRBACKUP profile (profile) ....... [initPRD.sap]
    2 - Backup device type (device) ...... [tape]
    3 ~ Tape volumes for backup (volume) . [PRDB04]
    4 # BACKINT/Mount profile (parfile) .. []
    5 - Database user/password (user) .... [system/*******]
    6 - Backup type (type) ............... [online]
    7 - Back up disk backup (backup) ..... [no]
    8 # Delete disk backup (delete) ...... [no]
    9 ~ Files for backup (mode) .......... [all]
    Standard keys: c - cont, b - back, s - stop, r - refr, h - help
    BR662I Enter your choice:
    c
    BR280I Time stamp 2008-04-04 18.43.46
    BR663I Your choice: 'c'
    BR259I Program execution will be continued...
    BR280I Time stamp 2008-04-04 18.43.46
    BR657I Input menu 16 - please check/enter input values
    Additional BRBACKUP options for backup and database copy
    1 - Confirmation mode (confirm) ....... [yes]
    2 - Query mode (query) ................ [no]
    3 - Compression mode (compress) ....... [hardware]
    4 - Verification mode (verify) ........ [no]
    5 - Fill-up previous backups (fillup) . [no]
    6 - Parallel execution (execute) ...... [0]
    7 - Additional output (output) ........ [no]
    8 - Message language (language) ....... [E]
    9 - BRBACKUP command line (command) ... [-p initPRD.sap -d tape -v
    Standard keys: c - cont, b - back, s - stop, r - refr, h - help
    BR662I Enter your choice:
    c
    BR280I Time stamp 2008-04-04 18.44.29
    BR663I Your choice: 'c'
    BR259I Program execution will be continued...
    BR291I BRBACKUP will be started with options '-p initPRD.sap -d tape
    E'
    BR280I Time stamp 2008-04-04 18.44.29
    BR670I Enter 'c[ont]' to continue, 'b[ack]' to go back, 's[top]' to
    c
    BR280I Time stamp 2008-04-04 18.45.13
    BR257I Your reply: 'c'
    BR259I Program execution will be continued...
    ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    BR051I BRBACKUP 6.20 (113)
    BR189W Expiration period equal 0 - backup volumes could be immediate
    BR055I Start of database backup: bdxpzadt.ant 2008-04-04 18.45.13
    BR319I Control file copy was created: /oracle/PRD/sapbackup/cntrlPRD
    BR280I Time stamp 2008-04-04 18.45.13
    BR057I Backup of database: PRD
    BR058I BRBACKUP action ID: bdxpzadt
    BR059I BRBACKUP function ID: ant
    BR110I Backup mode: ALL
    BR077I Database file for backup: /oracle/PRD/sapbackup/cntrlPRD.dbf
    BR061I 25 files found for backup, total size 45884.883 MB
    BR143I Backup type: online
    BR113I Files will be compressed by hardware
    BR130I Backup device type: tape
    BR102I Following backup device will be used: /dev/rmt0.1
    BR103I Following backup volume will be used: PRDB04
    BR280I Time stamp 2008-04-04 18.45.13
    BR256I Enter 'c[ont]' to continue, 's[top]' to cancel the program:
    c
    BR280I Time stamp 2008-04-04 18.45.50
    BR257I Your reply: 'c'
    BR259I Program execution will be continued...
    BR208I Volume with name PRDB04 required in device /dev/rmt0.1
    BR210I Please mount BRBACKUP volume, if you have not already done so
    BR280I Time stamp 2008-04-04 18.45.50
    BR256I Enter 'c[ont]' to continue, 's[top]' to cancel the program:
    c
    BR280I Time stamp 2008-04-04 18.46.26
    BR257I Your reply: 'c'
    BR259I Program execution will be continued...
    BR280I Time stamp 2008-04-04 18.46.26
    BR226I Rewinding tape volume in device /dev/rmt0 ...
    BR351I Restoring /oracle/PRD/sapbackup/.tape.hdr0
    BR355I from /dev/rmt0.1 ...
    BR241I Checking label on volume in device /dev/rmt0.1
    BR280I Time stamp 2008-04-04 18.46.26
    BR226I Rewinding tape volume in device /dev/rmt0 ...
    BR202I Saving /oracle/PRD/sapbackup/.tape.hdr0
    BR203I to /dev/rmt0.1 ...
    BR209I Volume in device /dev/rmt0.1 has name PRDB04
    BR202I Saving init_ora
    BR203I to /dev/rmt0.1 ...
    BR202I Saving /oracle/PRD/920_64/dbs/initPRD.sap
    BR203I to /dev/rmt0.1 ...
    Let me know if you have any problems regarding backup & restore
    karan
