Hoping for a quick response : EXP and Archived REDO log files

I apologize in advance if this question has been asked and answered 100 times. I admit I didn't search, I don't have time. I'm leaving on vacation tomorrow, and I need to know if I'm correct about something to do with backup / restore.
We have 10g R2 running as a single instance on a single server. The application vendor has "embedded" Oracle with their application. The vendor's backup is a batch file using EXP, thus:
exp system/xpwdxx@db full=y file=D:\Orant\admin\db\EXP\db_full.dmp log=D:\Orant\admin\db\EXP\db_full.txt direct=y compress=y
This command is executed nightly at midnight. The files are then backed up by our nightly backup to offsite storage media.
The database is running in archivelog mode with automatic archiving. The problem is that the archived redo log files filled the drive they were being stored on, and it is the same drive the database is on. I used OS commands to move 136 GB of archived redo logs onto other storage media to free the drive.
My question: since the EXP runs at midnight, when there is likely NO activity, do I need to run in archivelog mode at all? From what I have read, you cannot even apply archived redo log files on top of this type of backup (IMP). Is that true? We are OK with losing changes made since our last EXP. I have read a lot about consistent vs. inconsistent restores, and I just need to know: if my disk fails and I have to start with a clean install of Oracle and nothing else, can I IMP this EXP and get back up and running as of the last EXP? Or do I need the archived redo log files going back to July 2009 (136 GB of them)?
Hoping for a quick response
Best Regards, and thanks in advance
Bruce Davis

Bruce Davis wrote:
Amardeep Sidhu,
Thank you for your quick reply. I am reading in the other responses that since I am using EXP without consistent=y, I might not even have a usable backup. The application vendor said that with this dmp file they can restore us to the most recent backup. I don't really care for this strategy, as it is untested. I asked them to verify that they could restore us, and they said they tested the dmp file and it was OK.
Thank you for taking the time to reply.
Best Regards
Bruce

The dump file is probably OK in the sense that it is not corrupted and can be used in an imp operation. That doesn't mean the data in it is transactionally consistent. And to use it at all, you have to have a database up and running. If the database is physically corrupted, you'll have to rebuild a new database from scratch before you can even think about using your dmp file.
Vendors never understand databases. I once had a vendor tell me that Oracle's performance would be intolerable if there were more than 5 concurrent connections. Well, maybe in HIS product ..... Discussions terminated quickly after he made that statement.
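For anyone weighing this up: a minimal sketch of a transactionally consistent export and a bare-metal re-import, based on the vendor's command above (paths, passwords and the connect string are the placeholders from the original post; consistent=y only matters if anything is changing data while the export runs, and a full import into a freshly created empty database assumes the datafile/tablespace paths in the dump can be created or are adjusted):

rem Nightly export, made read-consistent as of the time it starts
exp system/xpwdxx@db full=y consistent=y file=D:\Orant\admin\db\EXP\db_full.dmp log=D:\Orant\admin\db\EXP\db_full.txt

rem After a disk failure: install Oracle, create a new empty database, then import the last dump.
rem Archived redo logs cannot be applied on top of an import; you are back to the time of the EXP.
imp system/xpwdxx@db full=y file=D:\Orant\admin\db\EXP\db_full.dmp log=D:\Orant\admin\db\EXP\db_imp.txt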

Similar Messages

  • Recover Database is taking more time for first archived redo log file

    Hi,
    Environment Used :
    Hardware : IBM p570 machine with P6 processor Lpar of .5 CPU and 2.5 GB Ram
    OS : AIX 5.3 ML 07
    Cluster: HACMP 5.4.1.2
    Oracle Version: 9.2.0.4 RAC
    SAN : DS8100 from IBM
    I have used the flash copy option to copy the database from production to the test machine. Then I tried to recover the database to a consistent state using the command "recover automatic database until cancel". The system was taking a long time, and from the alert log it was found that, while applying the first archived redo log file, it reads all the datafiles and takes about 3 seconds per datafile. Since I have more than 500 datafiles, it takes nearly 25 minutes to apply the first archived redo log file. All the other log files are applied immediately without any delay. Any suggestion to improve the speed will be highly appreciated.
    Regards
    Sridhar

    After changing the LPAR settings to 2 CPUs and 5 GB RAM, the problem was solved.
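    For reference, the recovery sequence described above, run in SQL*Plus against the flash-copied database, looks roughly like this (a sketch; archive log locations and the stopping point depend on how the copy was taken):

    SQL> STARTUP MOUNT;
    SQL> RECOVER AUTOMATIC DATABASE UNTIL CANCEL;   -- applies archived redo logs; type CANCEL to stop
    SQL> ALTER DATABASE OPEN RESETLOGS;             -- open once recovery is cancelled at a consistent point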

  • Delete old and unused Archived Redo Log Files

    Hello forum!
    My db was in ARCHIVELOG mode and It created 9GB of archived redo log files.
    Now I have put the db in NOARCHIVELOG mode. Can I delete all the archived redo log files? Can I be sure that the DB will never need those files in the future?
    If yes, how can I delete them? Should I use the 'del' operating system command?
    In addition, I found this command:
    SQL>alter database open resetlogs;
    is it useful for my purpose?
    thank you!

    You are safe to remove those archived log files if you have altered your database to NOARCHIVELOG mode. Just remove them at the OS level.
    Please bear in mind that a database in NOARCHIVELOG mode will lose data in the event of a disaster.
    SQL> alter database open resetlogs;
    This doesn't help in your situation; it is used to bring up the database after an incomplete recovery.
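    If the database is also registered with RMAN, an alternative to OS-level deletion is to let RMAN remove the files and clean up its records at the same time (a sketch, assuming RMAN has been used with this database):

    SQL> SELECT log_mode FROM v$database;   -- confirm the database really is in NOARCHIVELOG mode now

    RMAN> CROSSCHECK ARCHIVELOG ALL;        -- mark any files already deleted at the OS level as expired
    RMAN> DELETE ARCHIVELOG ALL;            -- delete the remaining archived log files and their records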

  • How can I find the REDO log and archived REDO log for a specified date?

    We use Oracle 11gR2 on Windows 2008 R2.
    1) How can I find the REDO log and archived REDO log for a specified date (2013/10/17, etc.)?
    2) What is the format of an archived REDO log? (Is it a zipped file?)

    user12075536123 wrote:
    1)
    select * from v$archived_log;
    select * from v$log_history;
    but there is a possibility that there is no old data
    Note that v$log_history, described below, contains no filename column:
    SQL> desc v$log_history
    Name                                      Null?    Type
    RECID                                              NUMBER
    STAMP                                              NUMBER
    THREAD#                                            NUMBER
    SEQUENCE#                                          NUMBER
    FIRST_CHANGE#                                      NUMBER
    FIRST_TIME                                         DATE
    NEXT_CHANGE#                                       NUMBER
    RESETLOGS_CHANGE#                                  NUMBER
    RESETLOGS_TIME                                     DATE
    There is NO data in these views when archive log mode is disabled.
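    To narrow those views down to a particular date, a query along these lines works (a sketch; the date is the one from the question, and v$archived_log only has rows while the database runs in archive log mode):

    SQL> SELECT name, sequence#, first_time, completion_time
           FROM v$archived_log
          WHERE completion_time >= DATE '2013-10-17'
            AND completion_time <  DATE '2013-10-18'
          ORDER BY sequence#;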

  • What is the difference between undo tablespace and online redo log files.

    What is the difference between the undo tablespace and the online redo log files? I am confused.
    As per my knowledge, the undo tablespace is used to store undo information while a table is being updated so that, just in case we need to roll back a transaction, we know what was present in the table earlier.
    When a transaction fails, SMON performs the rollback of the data.
    This undo data is stored in the undo tablespace, and read consistency, if any, is enforced from it.
    Is my understanding till here correct?
    Now, can this undo data/before image not be stored in the redo log buffer and online redo log files instead?
    Can the redo log files not store this information?
    In fact, is it that when undo tablespaces exist in a database, the undo data/before image is stored in both the undo tablespace and also the redo log files?
    Kindly clarify my doubt.
    Thank you.

    This question has been asked many times before. The answer is always the same.
    Yes, redo contains the before image of data (and the after-image). Therefore, it **COULD** be used to roll back a transaction.
    BUT... Redo is written sequentially. Using it to rollback your transaction would involve reading through all the redo written by maybe thousands of other people. It would be painfully slow.
    Your transaction is, however, directly linked to just the UNDO that it generates (which is JUST the before image of the data). So, your undo is your undo and doesn't share space with anyone else's undo. Therefore, using it to roll back YOUR transaction is fast.
    The fact that undo is only the before image of the data also makes it faster than wading through a sea of before and AFTER images as you'd find in redo. About twice as fast, in fact, since there's half the data. Roughly.
    Redo also gets written and flushed to disk whenever there's a commit, 3 seconds are up or too much (1MB, actually) redo gets generated between flushes caused by other factors. Your redo gets flushed when those things happen, even if you haven't actually committed your transaction. And redo logs recycle themselves, meaning that your redo -even if your transaction hasn't been committed yet- can be over-written by later transactions. Try rolling back when that's happened, if redo was the source of your rollback data!
    Undo, however, cannot be over-written if the transaction has not been committed. Ever. If you don't commit for three years, there will be three years' undo stored in your database (assuming you had the space, of course!).
    I could go on, but that will do. Redo is there for RECOVERY, after catastrophe. Undo is there for read-consistency (and the occasional change of mind). Two different functions. Two different mechanisms. Each one highly tuned to doing what it does, and why it does it, most efficiently and effectively.
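    As a small illustration of undo being tied to a single transaction, you can watch the undo blocks your own uncommitted transaction is holding (a sketch; the UPDATE against an emp table is just a hypothetical change to start a transaction):

    SQL> UPDATE emp SET sal = sal * 1.1;   -- any uncommitted change starts a transaction

    SQL> SELECT t.used_ublk, t.used_urec
           FROM v$transaction t, v$session s
          WHERE t.addr = s.taddr
            AND s.sid = SYS_CONTEXT('USERENV', 'SID');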

  • Dataguard lost both Primary redo log and standby redo log files

    Hi,
    I am new to Data Guard. I came across a scenario where we lose both the primary redo log files and the standby redo log files.
    Can someone please help me understand how to recover from this situation.
    Thanks!

    >lose both primary redo log files and standby redo log files
    We have to be very clear.
    There are (set A) online redo log files and (set B) standby redo log files, at (location 1) the Primary and (location 2) the Standby.
    The standby redo log files, depending on the configuration, aren't strictly mandatory. The standby can be applying redo without online redo log files present as well, depending on how it was set up.
    So, the question is: did you lose the online redo log files at the primary? Didn't the primary shut itself down then? If so, you have to do an incomplete recovery at the primary, OR switch over to the standby (which may or may not have received the last transaction, depending on how it was configured and operating), OR restore from the standby (again, with possible loss of transactions) to the primary.
    Hemant K Chitale
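    For the branch where you stay on the primary after losing its online redo log files, the incomplete recovery Hemant mentions would look roughly like this (a sketch; whatever redo existed only in the lost online logs is gone):

    SQL> STARTUP MOUNT;
    SQL> RECOVER DATABASE UNTIL CANCEL;    -- apply whatever archived redo is available, then type CANCEL
    SQL> ALTER DATABASE OPEN RESETLOGS;    -- opens the database with data loss up to the cancel point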

  • Do 9i Primary and Standby redo log files require the same size?

    Hi,
    We have 9.2.0.6 Oracle RAC (2 nodes) and have configured Data Guard (physical standby).
    I want to increase the redo log file size, but I can't do this at the same time on the primary and the standby side.
    Is there a rule that primary and standby database instances must have the same size redo log files?
    If I increase only the primary redo log files, is there any side effect? I tried this on a test system: I increased all the primary redo log files (if status='INACTIVE', drop the redo log group and add a new one, switch logfile, ...), but I couldn't change the standby side. The system still works well. Is this a correct solution or not? How can I increase the redo log files on both sides?
    Thank you for your help.

    Thank you for your help. I found the answer to this issue:
    http://download-west.oracle.com/docs/cd/B19306_01/server.102/b14239/manage_ps.htm#i1010448
    Consequently, when you add or drop an online redo log file at the primary site, it is important that you synchronize the changes in the standby database by following these steps:
    1. If Redo Apply is running, you must cancel Redo Apply before you can change the log files.
    2. If the STANDBY_FILE_MANAGEMENT initialization parameter is set to AUTO, change the value to MANUAL.
    3. Add or drop an online redo log file:
       To add an online redo log file, use a SQL statement such as this:
       SQL> ALTER DATABASE ADD LOGFILE '/disk1/oracle/oradata/payroll/prmy3.log' SIZE 100M;
       To drop an online redo log file, use a SQL statement such as this:
       SQL> ALTER DATABASE DROP LOGFILE '/disk1/oracle/oradata/payroll/prmy3.log';
    4. Repeat the statement you used in Step 3 on each standby database.
    5. Restore the STANDBY_FILE_MANAGEMENT initialization parameter and the Redo Apply options to their original states.
    bye..
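    Put together, those documented steps translate into something like the following on each standby (a sketch; the file name and the 100M size are just the placeholders from the documentation example):

    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;     -- step 1: stop Redo Apply
    SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT = MANUAL;          -- step 2
    SQL> ALTER DATABASE ADD LOGFILE '/disk1/oracle/oradata/payroll/prmy3.log' SIZE 100M;  -- steps 3 and 4
    SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT = AUTO;            -- step 5
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT; -- step 5: restart Redo Apply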

  • Is it possible to use Archive Redo log file

    Hi friends,
    My database is running in archive log mode. I had taken a cold backup on Sunday, but I take an archive log backup daily in the evening.
    On Wednesday my database crashed, which means I lost all the control files, redo log files, datafiles, etc.
    I have the archived log file backups till Tuesday night, and the other files (control files, datafiles, etc.) from Sunday.
    1) Is it possible to recover the database till Tuesday? If yes, HOW do I use the archive log files?
    (The SCN of the control file and the datafiles is the same; if we use the RECOVER DATABASE command, Oracle shows that media recovery is not required.)
    We don't have the current control file; we lost it in the media crash.

    Dear friend,
    In this scenario you lost the control file. If you have an old copy of the control file which reflects the current structure of the database, and all the archive files, then you can recover the database with point-in-time recovery (using a backup controlfile).
    suresh

  • Where RFS exactly write redo data ?  ( archived redo log or standby redo log ) ?

    Good Morning to all ;
    I am getting a bit confused by the official Oracle documentation. REF_LINK: Log Apply Services
    Redo data transmitted from the primary database is received by the RFS on the standby system ,
    where the RFS process writes the redo data to either archived redo log files  or  standby redo log files.
    At the standby site, does RFS write the redo data to just one of these files, or to both?
    Thanks in advance ..

    Hi GTS,
    GTS (DBA) wrote:
    Primary & standby log file size should be same - this is okay.
    1) What are you trying to say about largest and smallest here? You are confusing me.
    Read: http://docs.oracle.com/cd/E11882_01/server.112/e25608/log_transport.htm#SBYDB4752
    "Each standby redo log file must be at least as large as the largest redo log file in the redo log of the redo source database. For administrative ease, Oracle recommends that all redo log files in the redo log at the redo source database and the standby redo log at a redo transport destination be of the same size."
    GTS (DBA) wrote:
    2) What about the group members? Should they be the same as on the primary, or do I need to add some additional members?
    The Data Guard best practice for performance is to create one member per group in the standby DB. On the standby DB, one member per group is reasonable enough. Why? To avoid the write penalty of writing to more than one log file at the standby DB.
    SCENARIO 1: if in your source primary DB you have 2 log members per group, in the standby DB you can have 1 member per group; additionally, create an extra group.
                              Primary   Standby
    Members per group            2         1
    Number of log groups         4         5
    SCENARIO 2: you can also have this scenario, but I would not encourage it:
                              Primary   Standby
    Members per group            2         2
    Number of log groups         4         5
    GTS (DBA) wrote:
    All standby redo logs of the correct size have not yet been archived.
      - In this situation, can we force it on the standby site? Are there any possibilities?
    You cannot force it; just size your standby redo log files correctly and make sure you do not have network failures that will cause a redo gap.
    hope there is clarity now
    Tobi
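    If standby redo log files do need to be added (or recreated at the right size), the usual statement is along these lines (a sketch; the thread, group number, path and 512M size are placeholders to be matched to the online redo logs of the redo source):

    SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 5
             ('/u01/oradata/stby/srl05.log') SIZE 512M;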

  • The file structure of online redo logs, archived redo logs and standby redo logs

    I have read some Oracle documentation on file structure and settings in a Data Guard environment, but I still have some doubts. What is the best file structure or set of settings in Oracle 10.2.0.4 on UNIX for a Data Guard environment with 4 primary databases and 4 physical standby databases? Based on the Oracle documents, there are 3 kinds of redo logs: online redo logs, archived redo logs and standby redo logs. The basic settings are:
    1. Online redo logs --- This redo log must exist on the primary database and on a logical standby database. But it is not necessary on a physical standby database, because the physical standby is not open and doesn't generate redo. However, if we don't set up online redo logs on the physical standby, how can the standby operate after a failover, when it is switched to the primary role, without online redo logs? In my standby databases, online redo logs have been set up.
    2. Archived redo logs --- Obviously the primary database and the logical and physical standby databases all need this log file set up. The primary uses it to archive log files and ship them to the standby. The standby uses it to receive data from the archived logs and apply it to the database.
    3. Standby redo logs --- The documentation says a standby redo log is similar to an online redo log, except that a standby redo log is used to store redo data received from another database. A standby redo log is required if you want to implement the maximum protection and maximum availability levels of data protection, real-time apply, or cascaded destinations. So it seems that the standby redo log should only be set up on the standby database, not on the primary database. Is my understanding correct? When I review the current redo log settings in my environment, I find that standby redo log directories and files have been set up on both the primary and standby databases. I would like to get more information and education from the experts. What is the best setting or structure on the primary and standby databases?

    FZheng:
    Thanks for your input. It is clear that we need 3 types of redo logs on both databases. You answered my question.
    But I have another one. The Oracle documentation says that if you have configured a standby redo log on one or more standby databases in the configuration, you should ensure the size of the current standby redo log file on each standby database exactly matches the size of the current online redo log file on the primary database. It says that at log switch time, if there are no available standby redo log files that match the size of the new current online redo log file on the primary database, the primary database will shut down.
    My current Data Guard environment setting is: on the primary DB, the online redo log group size is 512M and the standby redo log group size is 500M. On the standby DB, the online redo log group size is 500M and the standby redo log group size is 750M.
    This was set up by someone I don't know. Is this setting OK? Or should I change the standby redo logs on the standby DB to 512M to exactly match the redo log size on the primary?
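    A quick way to see whether the sizes line up is to compare the two views on each side (a sketch, run separately on the primary and on the standby):

    -- on the primary: size of the online redo logs whose redo is shipped
    SQL> SELECT group#, bytes/1024/1024 AS mb FROM v$log;

    -- on the standby: size of the standby redo logs that receive it
    SQL> SELECT group#, bytes/1024/1024 AS mb FROM v$standby_log;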

  • Database in archive log mode but a redo log file showing as not archived

    Hello,
    I have a database running in archive log mode (recently changed). I have 5 redo log groups, and one of them (the current one) shows ARC: NO in the v$log view, meaning not archived. All the other redo log groups show ARC: YES.
    What does it mean?
    Am I going to have problems with this redo log file?
    Thanks

    If you do describe on v$log, you'll find that the full column name is Archived (meaning is it archived yet?).
    You could try alter system switch logfile and then check v$log again a few times after.
    Use the documentation to find out more about v$ views and so on:
    http://www.oracle.com/pls/db102/print_hit_summary?search_string=v%24log
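    In practice that check looks like this (the ARC column for the old current group should flip to YES shortly after the switch):

    SQL> SELECT group#, sequence#, status, archived FROM v$log;

    SQL> ALTER SYSTEM SWITCH LOGFILE;   -- force a log switch so the current group gets archived

    SQL> SELECT group#, sequence#, status, archived FROM v$log;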

  • (Cisco Historical Reporting / HRC ) All available connections to database server are in use by other client machines. Please try again later and check the log file for error 5054

    Hi All,
    I am getting the error message "All available connections to database server are in use by other client machines. Please try again later and check the log file for error 5054" when trying to log into HRC (this user has the reporting capabilities). I checked the log files, and this is what I found out:
    The log file stated that there were ongoing connections from HRC to the CCX (I am sure there isn't any active login to HRC).
    || When you tried to login the following error was being displayed because the maximum number of connections were reached for the server .  We can see that a total number of 5 connections have been configured . ||
    1: 6/20/2014 9:13:49 AM %CHC-LOG_SUBFAC-3-UNK:Current number of connections (5) from historical Clients/Scheduler to 'CRA_DATABASE' database exceeded the maximum number of possible connections (5).Check with your administrator about changing this limit on server (wfengine.properties), however this might impact server performance.
    || Below we can see all 5 connections being used up . ||
    2: 6/20/2014 9:13:49 AM %CHC-LOG_SUBFAC-3-UNK:[DB Connections From Clients (count=5)]|[(#1) 'username'='uccxhrc','hostname'='3SK5FS1.ucsfmedicalcenter.org']|[(#2) 'username'='uccxhrc','hostname'='PFS-HHXDGX1.ucsfmedicalcenter.org']|[(#3) 'username'='uccxhrc','hostname'='PFS-HHXDGX1.ucsfmedicalcenter.org']|[(#4) 'username'='uccxhrc','hostname'='PFS-HHXDGX1.ucsfmedicalcenter.org']|[(#5) 'username'='uccxhrc','hostname'='47BMMM1.ucsfmedicalcenter.org']
    || Once the maximum number of connection was reached it threw an error . ||
    3: 6/20/2014 9:13:49 AM %CHC-LOG_SUBFAC-3-UNK:Number of max connection to 'CRA_DATABASE' database was reached! Connection could not be established.
    4: 6/20/2014 9:13:49 AM %CHC-LOG_SUBFAC-3-UNK:Database connection to 'CRA_DATABASE' failed due to (All available connections to database server are in use by other client machines. Please try again later and check the log file for error 5054.)
    Current exact UCCX Version 9.0.2.11001-24
    Current CUCM Version 8.6.2.23900-10
    Business impact  Not Critical
    Exact error message  All available connections to database server are in use by other client machines. Please try again later and check the log file for error 5054
    What is the OS version of the PC you are running  and is it physical machine or virtual machine that is running the HRC client ..
    OS Version Windows 7 Home Premium  64 bit and it’s a physical machine.
    . The Max DB Connections for Report Client Sessions is set to 5 for each servers (There are two servers). The no of HR Sessions is set to 10.
    I wanted to know if there is a way to find the HRC sessions that are active now and terminate one, several, or all of those sessions from the server end.

    We have had this "PRX5" problem with Exchange 2013 since the RTM version.  We recently applied CU3, and it did not correct the problem.  We have seen this problem on every Exchange 2013 we manage.  They are all installations where all roles
    are installed on the same Windows server, and in our case, they are all Windows virtual machines using Windows 2012 Hyper-V.
    We have tried all the "this fixed it for me" solutions regarding DNS, network cards, host file entries and so forth.  None of those "solutions" made any difference whatsoever.  The occurrence of the temporary error PRX5 seems totally random. 
    About 2 out of 20 incoming mail test by Microsoft Connectivity Analyzer fail with this PRX5 error.
    Most people don't ever notice the issue because remote mail servers retry the connection later.  However, telephone voice mail systems that forward voice message files to email, or other such applications such as your scanner, often don't retry and
    simply fail.  Our phone system actually disables all further attempts to send voice mail to a particular user if the PRX5 error is returned when the email is sent by the phone system.
    Is Microsoft totally oblivious to this problem?
    PRX5 is a serious issue that needs an Exchange team resolution, or at least an acknowledgement that the problem actually does exist and has negative consequences for proper mail flow.
    JSB

  • Best RAID configuration for storing Datafiles and Redo log files

    Database version:10gR2
    OS version: Solaris
    Which is the best RAID level for storing datafiles and redo log files?

    Oracle recommends SAME - Stripe And Mirror Everything.
    In the RAC Starter Kit documentation, they specifically recommend not using RAID5 for things like voting disk and so on.
    SAN vendors, on the other hand, claim that their RAID5 implementations are as fast as RAID10. They do have these massive memory caches...
    But I would rather err on the safer side. I usually insist on RAID10 - and for those databases that I do not have a vested interest in (other than as a DBA), and owners, developers and management accept RAID5, I put the lead pipe away and do not insist on having it my way. :-)

  • Select from .. as of - using archived redo logs - 10g

    Hi,
    I was under the impression I could issue a "Select from .. as of" statement back in time if I have the archived redo logs.
    I've been searching for a while and can't find an answer.
    My undo_management=AUTO, the database is 10.2.0.1, and the retention is the default of 900 seconds, as I've never changed it.
    I want to query a table as of 24 hours ago, so I have all the archived redo logs from the last 48 hours in the correct directory.
    When I issue the following query
    select * from supplier_codes AS OF TIMESTAMP
    TO_TIMESTAMP('2009-08-11 10:01:00', 'YYYY-MM-DD HH24:MI:SS')
    I get a 'snapshot too old' ORA-01555 error. I guess that is because my retention is only 900 seconds, but I thought the database would query the archived redo logs, or have I got that totally wrong?!
    My undo tablespace is set to AUTOEXTEND ON and MAXSIZE UNLIMITED so there should be no space issues
    Any help would be greatly appreciated!
    Thanks
    Robert

    If you want to go back 24 hours, you need to undo the changes...
    See e.g. the app dev guide - fundamentals, chapter on Flashback features: [doc search|http://www.oracle.com/pls/db102/ranked?word=flashback&remark=federated_search].
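    In other words, a flashback query reads undo, not archived redo, so the knob to turn is undo retention (a sketch; 86400 seconds is 24 hours, it only helps for changes made after the setting is raised, and the undo tablespace must actually be able to hold that much undo):

    SQL> ALTER SYSTEM SET UNDO_RETENTION = 86400;

    SQL> SELECT *
           FROM supplier_codes AS OF TIMESTAMP SYSTIMESTAMP - INTERVAL '24' HOUR;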

  • Db restore non archive mode lost redo log file..restore from controlfile tr

    I have an 11g db. I had taken a noarchivelog (cold) backup but failed to back up the redo log files...
    So when I restored the db after formatting the machine, the Oracle instance won't start.
    I created a controlfile trace, but when I run it I get errors.
    Since I don't have the old log files, how do I get around this issue?
    Thanks
    Following is a sample of the control file trace. Note that I cannot create the redo log files,
    since the db won't mount; at most it will be in nomount mode.
    And below is my created controlfile:
    CREATE CONTROLFILE REUSE DATABASE "XE" NORESETLOGS NOARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 100
    MAXINSTANCES 8
    MAXLOGHISTORY 292
    LOGFILE
    GROUP 1
    'C:\ORACLEXE\APP\ORACLE\FLASH_RECOVERY_AREA\XE\ONLINELOG\O1_MF_1_80L7C259_.LOG'
    SIZE 50M BLOCKSIZE 512,
    GROUP 2
    'C:\ORACLEXE\APP\ORACLE\FLASH_RECOVERY_AREA\XE\ONLINELOG\O1_MF_2_80L7C375_.LOG'
    SIZE 50M BLOCKSIZE 512
    -- STANDBY LOGFILE
    DATAFILE
    'C:\ORACLEXE\APP\ORACLE\ORADATA\XE\SYSTEM.DBF',
    'C:\ORACLEXE\APP\ORACLE\ORADATA\XE\UNDOTBS1.DBF',
    'C:\ORACLEXE\APP\ORACLE\ORADATA\XE\SYSAUX.DBF',
    'C:\ORACLEXE\APP\ORACLE\ORADATA\XE\USERS.DBF'
    CHARACTER SET AL32UTF8
    I don't have these 2 files; what do I do to get around this situation?
    'C:\ORACLEXE\APP\ORACLE\FLASH_RECOVERY_AREA\XE\ONLINELOG\O1_MF_1_80L7C259_.LOG'
    SIZE 50M BLOCKSIZE 512,
    GROUP 2
    'C:\ORACLEXE\APP\ORACLE\FLASH_RECOVERY_AREA\XE\ONLINELOG\O1_MF_2_80L7C375_.LOG'
    SIZE 50M BLOCKSIZE 512
    -- STANDBY LOGFILE
    DATAFILE

    If you have a cold backup (database shut down properly) without the redo logs, change this:
    CREATE CONTROLFILE REUSE DATABASE "XE" NORESETLOGS NOARCHIVELOG
    to
    CREATE CONTROLFILE REUSE DATABASE "XE" RESETLOGS NOARCHIVELOG
    You have to change NORESETLOGS to RESETLOGS for Oracle to recreate the online redo logs.
    Hemant K Chitale
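    So the whole sequence from that cold backup would be roughly as follows (a sketch; it assumes the datafiles come from a clean shutdown, and keeps the rest of the generated script unchanged):

    SQL> STARTUP NOMOUNT;
    SQL> CREATE CONTROLFILE REUSE DATABASE "XE" RESETLOGS NOARCHIVELOG
         ...   -- remainder of the script from the trace, unchanged
    SQL> -- if Oracle still asks for recovery: RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL, then CANCEL
    SQL> ALTER DATABASE OPEN RESETLOGS;   -- Oracle recreates the missing online redo log files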
