Brarchive help - regarding the arch<SID>.log file ...

Hi all,
Recently the disk holding the archivelog directory was filling up fast, so I had to do the following:
01 - move some files to a temporary directory
02 - run brarchive - SUCCESSFUL, with warnings due to the archivelogs that I moved
03 - return some of the files that I moved to the original archivelog directory
04 - make a copy of arch<SID>.log as arch<SID>.log.01
05 - modify the arch<SID>.log file to start backing up from the files that I returned to their original directory
06 - run brarchive again - SUCCESSFUL, with warnings due to the archivelogs that I moved and have not yet returned to the original directory
07 - repeat the process from step 03 until all archivelogs have been backed up
NOTE:
- For step 04, I rename arch<SID>.log to arch<SID>.log.02 on the second run, arch<SID>.log.03 on the third run, and so on.
Now, after all is said and done, I end up with several files named arch<SID>.log.01, arch<SID>.log.02 ... arch<SID>.log.NN, plus arch<SID>.log.
I want to know if anyone knows how I can combine all these files into one new arch<SID>.log file. The reason is that some of the archivelogs reported as missing in arch<SID>.log have actually already been backed up, but are not recorded in the most recent arch<SID>.log file.
I just discovered that I need to do this; otherwise brrestore complains that the file to restore is not found, because it is not listed in the most recent arch<SID>.log file. Or is there a way to tell brrestore which arch<SID>.log file to use? That is, can I run brrestore and tell it to use arch<SID>.log.01?
Any feedback will be very much appreciated. Thanks in advance.
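
Update, in case it helps anyone searching later: one way to rebuild a single summary log is sketched below. It is only a sketch - it assumes arch<SID>.log.01 is the oldest copy, that every archived redo log is recorded as a single self-contained line in the summary log, and that the log lives in the usual saparch directory - so please verify all of that against your own files before letting brrestore use the result.
# Hedged sketch only - replace SID with the actual system ID and check the path.
cd /oracle/SID/saparch                    # assumed location of the BRARCHIVE summary log
cp archSID.log archSID.log.current        # keep a safety copy of the current log
# Concatenate the old copies in numeric order, then the current log,
# and drop duplicate lines while preserving the original order.
cat archSID.log.[0-9][0-9] archSID.log.current | awk '!seen[$0]++' > archSID.log.merged
# Inspect archSID.log.merged, then put it in place:
# mv archSID.log.merged archSID.log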


Similar Messages

  • Help! SQL server database log file increasing enormously

    I have 5 SSIS jobs running in the SQL Server Agent, and some of them pull transactional data into our database every 4 hours. The problem is that the log file of our database is growing rapidly: in a day it eats up 160 GB of disk space. Since our requirement doesn't need point-in-time recovery, I set the recovery model to SIMPLE, but even with SIMPLE the log data consumes more than 160 GB in a day. Because the disk fills up, the scheduled jobs often fail. Temporarily I am using a detach/attach approach to clean up the log.
    FYI: all the SSIS packages in the job use transactions on some tasks, e.g. a Sequence Container.
    I want a permanent solution to keep the log file within a particular size limit, and as I said earlier I don't need the log data for point-in-time recovery, so there is no need to take log backups at all.
    And one more problem: in our database the transactional table has 10 million records and some master tables have over 1000 records, but our mdf file size is now about 50 GB. I don't believe that these 10 million records should amount to 50 GB of space. What's the problem here?
    Help me on these issues. Thanks in advance.

    For the SSIS part of the question it would be better to ask in the SSIS forum, although nothing is going to change about the logging behavior. You can add some space to the log file, and you should also batch your transactions as already suggested.
    Regarding the memory question about SQL Server: once it acquires memory it is not going to release it unless the Windows OS faces memory pressure and SQLOS asks SQL Server to trim down its memory consumption. So again, if you have set max server memory to somewhere near 50 GB, SQL Server will eventually utilize that much memory. What you are seeing is totally normal. Remember it is a costly task for SQL Server to release and re-acquire memory, so it avoids that by caching as much as possible, and it caches more so as to avoid physical reads, which are expensive.
    When the log file is getting full, what does the query below return?
    select log_reuse_wait_desc from sys.databases where name='db_name'
    Can you manually introduce a checkpoint in the ETL query? Try this, it might help you.
    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
    My Technet Wiki Article
    MVP
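    A rough illustration of the two checks suggested above, runnable from a shell (a sketch only: the server and database names are placeholders, it assumes the sqlcmd utility is installed, and -E stands in for whatever authentication you actually use):
    # Placeholder names throughout - adjust before running.
    # 1. See what is preventing log reuse for the ETL database.
    sqlcmd -S MYSERVER -E -Q "SELECT name, log_reuse_wait_desc FROM sys.databases WHERE name = 'MyEtlDb'"
    # 2. In SIMPLE recovery the log is truncated at checkpoints, so issuing a manual
    #    CHECKPOINT between large ETL batches can keep the log from growing unchecked.
    sqlcmd -S MYSERVER -E -d MyEtlDb -Q "CHECKPOINT"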

  • Moving the online redo log files to different location

    We just installed a few more drives into our sandbox system and I want to move the online redo log files there for better performance. We've got the SAPARCH directory moved to a different location.
    Does anyone know how/where I can change the parameters so the redo log files point at different drives? It's not in the init<SID>.ora file...
    Regards,
    Sumit

    Hi Sumit,
    The following link contains information about moving the redo logs:
    http://www.stanford.edu/dept/itss/docs/oracle/9i/server.920/a96521/onlineredo.htm
    Best regards,
    Alwin
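    To add a little detail for the archives: the online redo log locations are not init<SID>.ora parameters; the member file names are stored in the control file and are changed with ALTER DATABASE RENAME FILE. Below is a minimal sketch of the shutdown-and-rename approach the linked chapter describes (the paths and member names are made up - adjust them to your own layout and take a backup first):
    # Hedged sketch only - run as the oracle OS user, with example file names.
    echo "shutdown immediate" | sqlplus -s "/ as sysdba"
    # Move the redo log members to the new drives at the OS level.
    mv /oracle/SID/origlogA/log_g1_m1.dbf /newdisk/origlogA/log_g1_m1.dbf
    mv /oracle/SID/origlogB/log_g2_m1.dbf /newdisk/origlogB/log_g2_m1.dbf
    # Record the new locations in the control file, then reopen the database.
    {
        echo "startup mount"
        echo "alter database rename file '/oracle/SID/origlogA/log_g1_m1.dbf' to '/newdisk/origlogA/log_g1_m1.dbf';"
        echo "alter database rename file '/oracle/SID/origlogB/log_g2_m1.dbf' to '/newdisk/origlogB/log_g2_m1.dbf';"
        echo "alter database open;"
    } | sqlplus -s "/ as sysdba"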

  • NImax.exe has generated errors. You must restart the program, a log file is being created.

    NImax.exe has generated errors. You must restart the program, a log file is
    being created.
    This error occurs just after bootup, with no programs started except Windows.
    Running the latest version of Windows 2000 on a brand new computer
    (Pentium 4, 3.0 GHz, 500 MB RAM).
    Can anyone help me with this problem? It happened after installing LabVIEW 7.0
    Express. I installed twice and the same problem occurred with both installs.
    LabVIEW 7.0 is the only thing installed right now. I also selected the "Visual
    Basic Support" install from the driver CDs.
    One other thing I just noticed: System Info reports 2 processors and I
    think I only have one.
    TIA

    Happy update.
    After ignoring this problem for a while and installing some NI DAQ boards,
    this problem is not happening anymore.
    I would urge NI to investigate further though. It messed me up for a couple of
    days, and I still don't know why it happened.
    Much thanks to those who helped.
    JJ
    "JJF" wrote in message
    news:qB6Hb.48702$VB2.90660@attbi_s51...
    > One other point, I found out this is a message generated by drwtsn32. Can
    > get rid of the error by unchecking "show visual feedback on errors" box,
    but
    > don't like that fix. Still need help.
    >
    > Thanks,
    >
    > JJ
    >
    > "JJF" wrote in message
    > news:%JRGb.138921$8y1.419649@attbi_s52...
    > > Hi Nirmal and all,
    > >
    > > Happy Holidays to you too, and thanks for the reply.
    > >
    > > Making progress. Did all the uninstalling and registry editing as you
    > > suggested. Also, updated Win2K at Microsoft.com as suggested.
    > > Re-installed with default settings of LV express 7.0. Now I only get
    > the
    > > error when I exit NImax.
    > >
    > > Also, my p2p home network is not that hot. Have it set up as a
    workgroup.
    > > Sometimes when I boot up I have file access between the two computers
    and
    > > sometimes windows explorer can't find the workgroup network path
    (between
    > > the two computers). Both computers can always access the internet
    though.
    > > Using a 4 port Linksys cable/dsl router on a cable modem. Using XP
    home
    > > edition on base computer (the one set up for the isp) and Win2K on the
    > other
    > > computer.
    > >
    > > Also, any idea where this log file is going? Thought it was part of the
    > > event viewer but the errors in the event viewer don't seem to correspond
    > to
    > > the NImax.exe logging event now. There is a file in
    > > "winnt\sytem32\config\software.log" that seems to change at about the
    same
    > > time but I can't access the file because it is in use by the system.
    > >
    > > Using NImax ver 3.02.3005. Do you know if I can download "NI
    measurement
    > > and automation explorer" and/or the driver cd's from NI.com? Maybe I
    have
    > a
    > > flaky disk or something. I have a feeling it may also have to do with
    the
    > > "on again/off again" network neighborhood connection problem.
    > >
    > >
    > > Thanks again,
    > >
    > > JJ
    > >
    > >
    > >
    > > "Nirmal Sharma" wrote in message
    > > news:506500000005000000AE470100-1068850981000@exch​ange.ni.com...
    > > > Hi,
    > > > Happy cristmas & new year for you & ur brand new pc...
    > > >
    > > > How are you uninstalling & then installing LV in your pc ?
    > > >
    > > > I suggest to remove complete LV (Remove All option) from your
    > > > computer. Once uninstallation is completed, go to windows registry -
    > > > by windows start -> run -> regedit -> Enter
    > > >
    > > > Goto HKEY_LOCAL_MACHINE-> SOFTWARE -> National Instruments - Delete
    > > > this folder (National Instruments folder)
    > > >
    > > > Remove any other foloder/file related to NI's software. Be very
    > > > cautious while deleting files from windows registry bcoz wrong file
    > > > deletion may hang your whole system.
    > > >
    > > > Restart your computer..hope it should bootup without any errors.
    > > >
    > > > Now as answered by Alexander, update windows.
    > > >
    > > > After updating, bootup your system. If it boots up without any error
    > > > message, install LV with the typical (default installation).
    > > >
    > > > Hope this helps. Your feedbacks are welcome.
    > > >
    > > > Best Regards,
    > > > Nirmal Sharma
    > > > India
    > >
    > >
    >
    >

  • The SiteLog_<appl_name>.log file in $ORACLE_HOME/j2ee/log is not created

    Hello experts,
    The SiteLog_<appl_name>.log file in the $ORACLE_HOME/j2ee/log directory stopped being created 2 days ago.
    No errors.
    What could be the reason for that? How can I re-enable creation of this file?
    Regards,
    Ram

    Hi,
    Are there any errors when logging in to OWA or ECP?
    As Ed mentioned, please post specific information about your question for further troubleshooting.
    Besides, please check to see if there is a SharedWebConfig.config file in the following directories:
    <install drive>\Program Files\Microsoft\Exchange Server\V15\ClientAccess
    <install drive>\Program Files\Microsoft\Exchange Server\V15\FrontEnd\HttpProxy
    Also, here's a thread about this file missing from its expected location, for your reference:
    https://social.technet.microsoft.com/Forums/office/en-US/6699ad92-701d-4966-b202-90c9be6bf735/exchange-2013-cu6-event-id-1003?forum=exchangesvrgeneral
    Thanks
    Please remember to mark the replies as answers if they help, and unmark the answers if they provide no help. If you have feedback for TechNet Support, contact [email protected]
    Allen Wang
    TechNet Community Support

  • Abrupt increase in alert<SID>.log file size

    The alert<SID>.log file is abruptly increasing in size, which in the process fills up the disk space, so no further DB logins are possible.
    I shut down the database, took a backup of the alert log file, nullified the alert log (using cat /dev/null > alert.log) and started up the database.
    As of now it's okay, but can I nullify this alert log file while the database is up and running?

    Yes - the alert log is only ever appended to, so it is generally considered safe to truncate it while the database is up. It is better to write a simple shell script to housekeep the alert.log.
    Below is an example:
    # Rotate the alert log once it grows beyond ~2.5 MB; keep compressed copies for 10 days.
    if [ `ls -al $ALERTLOG | awk '{print $5}'` -gt 2500000 ]
    then
        cp -p $ALERTLOG $ALERTLOG.`date +%d%m%y`
        cat /dev/null > $ALERTLOG
        gzip $ALERTLOG.`date +%d%m%y`
        find $ALERTLOGFOLDER -name '*.gz' -mtime +10 -print -exec rm {} \;
    fi
    Also, you need to housekeep the adump, bdump, cdump ... etc. folders.
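    For those dump directories, a hedged example along the same lines (the layout below is the default OFA one under ORACLE_BASE and is an assumption - check the background_dump_dest / user_dump_dest / audit_file_dest parameters before pointing find at anything):
    # Example only: clean old trace and audit files; assumes ORACLE_BASE and ORACLE_SID are set.
    for d in bdump udump; do
        find "$ORACLE_BASE/admin/$ORACLE_SID/$d" -name '*.trc' -mtime +10 -print -exec rm {} \;
    done
    find "$ORACLE_BASE/admin/$ORACLE_SID/adump" -name '*.aud' -mtime +10 -print -exec rm {} \;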

  • How to increase the size and the number of log files

    OS: Win2000, DB version: Oracle 8.0.5
    When I execute a big transaction, for example an insert into a table with 100,000 records,
    Oracle tells me that an archive log is required,
    but I have log_archive_start = true in init.ora.
    I think maybe the size of the log files is too small?
    Any other suggestions?
    Thanks

    Hello Long Guohui,
    If your database is running in archive log mode and automatic archiving is not enabled, then obviously Oracle
    will complain that an archive log is required.
    But if your database is running in noarchivelog mode, then Oracle should never ask you for an archive log.
    If your database is running in archive log mode, then Oracle recommends that the tablespace where
    you are going to insert the 100,000 records be switched from LOGGING to NOLOGGING, so that
    archive logs are not generated for those entries, because redo is not generated for that tablespace while it is
    running in NOLOGGING mode. If redo were generated for those 100,000 entries, it could affect the other DML
    executing in your database. So it is always recommended to do it like this. After the insert has completed
    successfully, Oracle recommends immediately taking a backup of the tablespace where you have just
    inserted the 100,000 rows.
    For making a tablespace NOLOGGING:
    ALTER TABLESPACE <TABLESPACE NAME> NOLOGGING;
    For changing a database from noarchivelog to archivelog mode:
    1) SHUTDOWN IMMEDIATE
    2) In the init<sid>.ora file, set the following parameter:
    LOG_ARCHIVE_START = TRUE
    3) STARTUP MOUNT
    4) ALTER DATABASE ARCHIVELOG;
    5) ALTER DATABASE OPEN;
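    A hedged way to run steps 3) to 5) as a single session from the command line is sketched below. It assumes a release whose SQL*Plus can issue STARTUP/SHUTDOWN (8i and later); on 8.0.x the same commands would be entered in Server Manager instead.
    # Sketch only: enable archivelog mode in one SQL*Plus session (run as the oracle user,
    # with LOG_ARCHIVE_START = TRUE already set in init<sid>.ora as per step 2).
    {
        echo "shutdown immediate"
        echo "startup mount"
        echo "alter database archivelog;"
        echo "alter database open;"
        echo "archive log list"
    } | sqlplus -s "/ as sysdba"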

  • Retrieve Cookie Information in the Apache Access Log Files

    Hi All,
    Can anyone give me the solution, or any link with the steps to follow, for retrieving cookie information and user information in the Apache access log files using the httpd.conf file?
    We are using Oracle AppServer 10.1.2 and we have specified the command below in the httpd.conf file:
    LogFormat "%h %l %u %t \"%r\" %>s %b %v \"%{Referer}i\" \"%{User-Agent}i\" \"%{cookie}n\"" combined
    But it failed to retrieve the cookie and user information.
    Looking forward to any help.
    Thanks
    Regards
    Sona

    Thanks for your reply.
    Can you please check the link below for the cookie flag information:
    http://download-west.oracle.com/docs/cd/B31017_01/web.1013/q20201/mod/mod_usertrack.html
    For your information, I have logged in already.
    Our sample output is given below:
    151.146.191.186 - - [28/Dec/2006:10:13:05 +0530] "GET /Tab_files/lowerbox.gif HTTP/1.1" 200 150 - "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows)"
    We are using the format below:
    LogFormat "%h %l %u %t \"%r\" %>s %b %{cookie}n \"%{Referer}i\" \"%{User-Agent}i\"" combined
    But the user and cookie information is not displayed.
    What steps should I follow?
    Looking forward to a favourable reply.
    Thanks
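    A note for anyone landing here later (a suggestion only, not something confirmed in this thread): %{cookie}n logs the note that mod_usertrack sets, so it only produces a value when CookieTracking is enabled for the request. To log the raw Cookie header sent by the browser, the request-header form can be used instead, for example:
    # Variation on the format above: log the incoming Cookie request header directly.
    LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Cookie}i\" \"%{Referer}i\" \"%{User-Agent}i\"" combined
    Similarly, the %u field is only populated when Apache itself authenticated the request (e.g. HTTP basic auth); application-level logins will not show up there.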

  • Looking for the Garbage Collection log files

    Hello,
    I am looking for the Garbage Collection log files which contain the GC events, like this:
    [GC 2095K->1709K(2160K), 0.0017628 secs]
    [Full GC 2161K->1018K(2276K), 0.0576353 secs]
    The server's GC is already configured accordingly:
    -verbose:gc
    I have examined the std_server<x>.out files in the work folder but can't see this info.
    My question is: Where would I find these files on the server?

    I can't find the info in the format I know; it is not in dev_server* or std_server*.
    In std_server I see XML like this:
    <gc type="scavenger" id="1" totalid="1" intervalms="0.000">
        <flipped objectcount="303111" bytes="24994248" />
        <tenured objectcount="0" bytes="0" />
        <refs_cleared soft="429" weak="4329" phantom="0" />
        <finalization objectsqueued="1711" />
        <scavenger tiltratio="50" />
        <nursery freebytes="497866672" totalbytes="524288000" percent="94" tenureage="10" />
        <tenured freebytes="1094921936" totalbytes="1098907648" percent="99" >
          <soa freebytes="1039977168" totalbytes="1043962880" percent="99" />
          <loa freebytes="54944768" totalbytes="54944768" percent="100" />
        </tenured>
        <time totalms="185.585" />
      </gc>
    I guess this is the info I need, but it's not formatted in a way that your tool can read, and having a tool like that read it would be very helpful.
    Any ideas?
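    In case it helps: the XML above looks like the IBM J9 verbose GC format, and with plain -verbose:gc it is written into std_server<x>.out itself rather than into a separate GC log. A rough shell sketch for pulling the per-collection figures out of it (file names are examples):
    # Example only: list the GC type and total pause time (ms) of each collection.
    grep -o '<gc type="[a-z]*"' std_server*.out
    grep -o 'totalms="[0-9.]*"' std_server*.out
    On IBM JVMs the same output can usually be redirected to a standalone file with the -Xverbosegclog:<path> option, which GC analysis tools tend to handle better.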

  • How to correct the corrupted archive log files?

    Friends,
    Our restore method is the cloning type.
    Today I fired this statement (the one we usually use for the restore): "recover database until cancel using backup controlfile".
    I have 60 files in the archive folder.
    It processed only 50 files; when it came to the 51st file it stopped and said it could not copy the file.
    Is that particular file corrupted?
    Or is there any restriction on the number of archivelog files that can be copied? I mean, only 60 files or something like that?
    Suppose the archive file is corrupted - how can I correct it?
    thanks
    sathyguy

    Now, this is the error message:
    ORA-00310: archived log contains sequence 17480; sequence 17481 required
    ORA-00334: archived log: '/archive2/OURDB/archive/ar0000017481.arc'
    I googled and found:
    ORA-00310: archived log contains sequence string; sequence string required
    Cause: The archived log is out of sequence, probably because it is corrupted or
    the wrong redo log file name was specified during recovery.
    Action: Specify the correct redo log file and then retry the operation.
    So, from the above error messages, I think that particular archive file (17481) is corrupted. Now, can I correct this corrupted archive file?
    According to the above action, it says to specify the correct redo log file. If the file is not corrupted, then where should I specify the redo log file's path?
    thanks
    sathyguy

  • What determines the location of the shared services log files

    I've just completed an install and configuration of Shared Services 9.2.1. The shared services log files, SharedServices_Security.log, SharedServices_Admin.log, etc., are being written to the C:\WINDOWS\system32\null\logs\SharedServices9 path. These files should be written to the Tomcat application folders in the Shared Services home, CSS_HOME, folder. e.g. d:\hyperion\sharedservices\9.2\appserver\installedapps\tomcat\5.0.28. This is according to the Shared Services installation documentation.
    Is there any way to get these log files written to the d: drive? The line below references sharedservices.logdir in the hsslogger.properties file, but I don't see where its value is set or how I can change it.
    log4j.appender.FILE.File=${sharedservices.logdir}${file.separator}SharedServices_Security.log
    Thanks,
    Tom

    Hi there!
    Are you looking for the AOM log file?
    By default it's located at:
    SIEBEL_ROOT\ENTERPRISE\SIEBEL_SERVER\log
    If you can't find the AOM log files here, try to check the "Log directory" server parameter value.
    Best regards,
    João Paulo
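    Back to the original Shared Services question: log4j resolves ${sharedservices.logdir} from a Java system property, so one option - an assumption on my part, not something taken from the Hyperion documentation - is to pass that property to the Shared Services Tomcat JVM and point it at the d: drive:
    # Assumption only: make the property visible to the Tomcat 5.0.28 JVM before it starts
    # (shown Unix-style; on Windows add the same -D option to catalina.bat or the
    # service's "Java Options").
    export JAVA_OPTS="$JAVA_OPTS -Dsharedservices.logdir=/path/to/sharedservices/logs"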

  • Change the Data and Log file locations in livecache

    Hi
    We have installed liveCache on a Unix system under the /sapdb mount directory, where the installer has created the sapdata and sapdblog directories. But the Unix team had already created two mount points as follows:
    /sapdb/LC1/lvcdata and /sapdb/LC1/lvclog.
    While installing liveCache we had selected these locations for creating the DATA and LOG volumes. Now they are asking us to move the DATA and LOG volumes created in the sapdata and saplog directories to these mount points. How do we move the data and log files and keep the database consistent? Is there a procedure to move the files to the mount point directories and change liveCache's pointers to these locations?
    regards
    bala

    Hi Lars,
    Thanks for the link. I will try it and let you know.
    But this is a liveCache database (even though it uses MaxDB) which was created by sapinst. Moreover, is there anything to be adjusted in SCM, as well as any modification to be done at the DB level?
    regards
    bala

  • SCCM 2007 R2 Copying the sms setup log files ( C:\ConfigMgr*.lo*) failed.

    Copying the sms setup log files ( C:\ConfigMgr*.lo*) failed.
    Copying setup logs failed...
    Backup task completed successfully with zero errors but there could be some warnings, AFTERBACKUP.BAT will be started if available in its predefined location

    Just for the record!
    Check the permissions of the SCCMBkp$ and SCCMBkpArch$ folders and compare with a server that is working. After that, restart the services (SMS_SITE_VSS_WRITER and SQL Server VSS Writer). Then open Smsbkup.log, restart SMS_SITE_BACKUP, and check the logs to see if it works.

  • Alert<SID>.log file size too big - how to keep it under control

    The alert<SID>.log file size is too big. How do I keep it under control?
    -rw-r--r-- 1 oracle dba 182032983 Aug 29 07:14 alert_g54nha.log

    Metalink Note:296354.1

  • Troubleshooting Mail using the Mail.crash.log file and TN2123 CrashReporter

    This morning, for apparently no reason at all, Mail began to consistently crash each time I attempted to send or reply to a message. By observing what occurred in Mail, opening the crash log [mail.crash.log] located here…
    Macintosh HD:Users:<username>:Library:Logs:CrashReporter
    …and parsing the information with the help of the Apple Developer Technical Note linked below, I was able to quickly determine that my last action just prior to sending the message—appending a signature to it—resulted in the application crashing. Simply by deleting the existing signature and recreating it, I was able to resolve the problem in only a matter of minutes.
    This demonstrates how useful crash logs can be, and why they should likely be the very first place you look for an indication of trouble when an application unexpectedly quits.
    Here is a very short, relevant section of my Mail crash log which you can examine using the instructions linked below:
    Host Name: Michael-Lafferty
    Date/Time: 2006-05-15 09:37:57.885 -0700
    OS Version: 10.4.6 (Build 8I127)
    Report Version: 4
    Command: Mail
    Path: /Applications/Mail.app/Contents/MacOS/Mail
    Parent: WindowServer [64]
    Version: 2.0.7 (746.2)
    Build Version: 2
    Project Name: MailViewer
    Source Version: 7460200
    PID: 1662
    Thread: 0
    Exception: EXC_BAD_ACCESS (0x0001)
    Codes: KERN_PROTECTION_FAILURE (0x0002) at 0x00000000
    Thread 0 Crashed:
    0 com.apple.WebCore 0x956065d4 khtml::CompositeEditCommand::insertNodeAfter(DOM::No
    1 com.apple.WebCore 0x956b0b34 khtml::DeleteSelectionCommand::moveNodesAfterNode()
    2 com.apple.WebCore 0x956b1640 khtml::DeleteSelectionCommand::doApply() + 688
    3 com.apple.WebCore 0x955ff41c khtml::EditCommand::apply() + 52
    4 com.apple.WebCore 0x955ff9c8 khtml::CompositeEditCommand::applyCommandToComposite
    5 com.apple.WebCore 0x956a75c4 khtml::CompositeEditCommand::deleteSelection(khtml::…
    This is the most significant section of a very long, multi-threaded log, because it shows the detail of the thread which crashed, the actual step at which it crashed, and the particular crash type: in this case, an EXC_BAD_ACCESS (0x0001) - KERN_PROTECTION_FAILURE, indicating an attempt to write to a read-only memory location.
    Here is a link to the Apple Developer documentation you can use to troubleshoot such issues:
    http://developer.apple.com/technotes/tn2004/tn2123.html
    Technical Note TN2123
    CrashReporter

