Large and Many Replication Manager Database Log Files

Hi All,
I've recently added replication manager support to our database systems. After enabling the replication manager, I end up with many log.* files of several gigabytes apiece on the master, which makes backing up the database difficult. Is there a way to purge the log files more often?
It also seems that the replication slave never finishes synchronizing with the master.
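On the log question: the only removal hook I have found so far is the DB_ENV->log_archive() call with the DB_ARCH_REMOVE flag. Below is a minimal C sketch of what I had in mind (hypothetical and untested on my side; I am not sure it is safe to run while replication clients may still need those log files):
/* Sketch: ask Berkeley DB to unlink log files that are no longer
 * needed for recovery. With replication enabled this may remove
 * logs a lagging client still wants, so treat with caution. */
#include <db.h>
int purge_logs(DB_ENV *dbenv)
{
    return dbenv->log_archive(dbenv, NULL, DB_ARCH_REMOVE);
}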
Thank you,
Rob

So, I set up a debug environment on test machines with a snapshot of the db. We now set rep_set_limit to 5 MB.
It is still failing to sync, so I recompiled with --enable-diagnostic and enabled DB_VERB_REPLICATION.
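For reference, this is roughly what our environment setup does now (a sketch; the environment open and error handling are omitted, and the 5 MB figure is just the limit mentioned above):
/* Cap the data sent in a single bulk transfer to 5 MB and turn on
 * replication diagnostics (requires a --enable-diagnostic build). */
dbenv->rep_set_limit(dbenv, 0, 5 * 1024 * 1024);
dbenv->set_verbose(dbenv, DB_VERB_REPLICATION, 1);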
On the master we see this:
2007-06-06 18:40:26.646768500 DBMSG: ERROR:: sendpages: 2257, page lsn [293276][4069284]
2007-06-06 18:40:26.646775500 DBMSG: ERROR:: bulk_msg: Copying LSN [640947][6755391] of 4152 bytes to 0x2afb35e370
2007-06-06 18:40:26.646782500 DBMSG: ERROR:: sendpages: 2257, lsn [640947][6755391]
2007-06-06 18:40:26.646794500 DBMSG: ERROR:: sendpages: 2258, page lsn [309305][9487507]
2007-06-06 18:40:26.646801500 DBMSG: ERROR:: bulk_msg: Copying LSN [640947][6755391] of 4152 bytes to 0x2afb35f3b4
2007-06-06 18:40:26.646803500 DBMSG: ERROR:: sendpages: 2258, lsn [640947][6755391]
2007-06-06 18:40:26.646809500 DBMSG: ERROR:: send_bulk: Send 562140 (0x893dc) bulk buffer bytes
2007-06-06 18:40:26.646816500 DBMSG: ERROR:: /ask/bloglines/db/sitedb rep_send_message: msgv = 3 logv 12 gen = 12 eid 0, type bulk_page, LSN [640947][6755391] nobuf
2007-06-06 18:40:26.647064500 DBMSG: ERROR:: wrote only 147456 bytes to site 10.0.3.235:9003
2007-06-06 18:40:26.648559500 DBMSG: ERROR:: /ask/bloglines/db/sitedb rep_process_message: msgv = 3 logv 12 gen = 12 eid 0, type page_req, LSN [0][0]
2007-06-06 18:40:26.648561500 DBMSG: ERROR:: page_req: file 0 page 2124 to 2124
2007-06-06 18:40:26.648562500 DBMSG: ERROR:: page_req: Open 0 via mpf_open
2007-06-06 18:40:26.648563500 DBMSG: ERROR:: sendpages: file 0 page 2124 to 2124
2007-06-06 18:40:26.649966500 DBMSG: ERROR:: /ask/bloglines/db/sitedb rep_process_message: msgv = 3 logv 12 gen = 12 eid 0, type page_req, LSN [0][0]
2007-06-06 18:40:26.649968500 DBMSG: ERROR:: page_req: file 0 page 2124 to 2124
2007-06-06 18:40:26.649970500 DBMSG: ERROR:: page_req: Open 0 via mpf_open
2007-06-06 18:40:26.649971500 DBMSG: ERROR:: sendpages: file 0 page 2124 to 2124
2007-06-06 18:40:26.651699500 DBMSG: ERROR:: sendpages: 2124, page lsn [516423][7302945]
2007-06-06 18:40:26.651702500 DBMSG: ERROR:: bulk_msg: Copying LSN [640947][6755391] of 4152 bytes to 0x2afb3d801c
2007-06-06 18:40:26.651704500 DBMSG: ERROR:: sendpages: 2124, lsn [640947][6755391]
2007-06-06 18:40:26.651705500 DBMSG: ERROR:: send_bulk: Send 4164 (0x1044) bulk buffer bytes
2007-06-06 18:40:26.651706500 DBMSG: ERROR:: /ask/bloglines/db/sitedb rep_send_message: msgv = 3 logv 12 gen = 12 eid 0, type bulk_page, LSN [640947][6755391] nobuf
2007-06-06 18:40:26.652858500 DBMSG: ERROR:: sendpages: 2124, page lsn [516423][7302945]
2007-06-06 18:40:26.652860500 DBMSG: ERROR:: bulk_msg: Copying LSN [640947][6755391] of 4152 bytes to 0x2afb2d701c
2007-06-06 18:40:26.652861500 DBMSG: ERROR:: sendpages: 2124, lsn [640947][6755391]
2007-06-06 18:40:26.652862500 DBMSG: ERROR:: send_bulk: Send 4164 (0x1044) bulk buffer bytes
2007-06-06 18:40:26.652864500 DBMSG: ERROR:: /ask/bloglines/db/sitedb rep_send_message: msgv = 3 logv 12 gen = 12 eid 0, type bulk_page, LSN [640947][6755391] nobuf
2007-06-06 18:40:38.951290500 1 28888 dbnet: 0,0: MSG: ** checkpoint start **
2007-06-06 18:40:38.951321500 1 28888 dbnet: 0,0: MSG: ** checkpoint end **
On the slave, we see this:
2007-06-06 18:40:26.668636500 DBMSG: ERROR:: rep_bulk_page: rep_page ret 0
2007-06-06 18:40:26.668637500 DBMSG: ERROR:: rep_bulk_page: Processing LSN [640947][6755391]
2007-06-06 18:40:26.668644500 DBMSG: ERROR:: rep_bulk_page: p 0x2afb66c1fc ep 0x2afb671344 pgrec data 0x2afb66c1fc, size 4152 (0x1038)
2007-06-06 18:40:26.668645500 DBMSG: ERROR:: PAGE: Received page 2254 from file 0
2007-06-06 18:40:26.668658500 DBMSG: ERROR:: PAGE: Received duplicate page 2254 from file 0
2007-06-06 18:40:26.668664500 DBMSG: ERROR:: rep_bulk_page: rep_page ret 0
2007-06-06 18:40:26.668666500 DBMSG: ERROR:: rep_bulk_page: Processing LSN [640947][6755391]
2007-06-06 18:40:26.668672500 DBMSG: ERROR:: rep_bulk_page: p 0x2afb66d240 ep 0x2afb671344 pgrec data 0x2afb66d240, size 4152 (0x1038)
2007-06-06 18:40:26.668674500 DBMSG: ERROR:: PAGE: Received page 2255 from file 0
2007-06-06 18:40:26.668686500 DBMSG: ERROR:: PAGE: Received duplicate page 2255 from file 0
2007-06-06 18:40:26.668703500 DBMSG: ERROR:: rep_bulk_page: rep_page ret 0
2007-06-06 18:40:26.668704500 DBMSG: ERROR:: rep_bulk_page: Processing LSN [640947][6755391]
2007-06-06 18:40:26.668706500 DBMSG: ERROR:: rep_bulk_page: p 0x2afb66e284 ep 0x2afb671344 pgrec data 0x2afb66e284, size 4152 (0x1038)
2007-06-06 18:40:26.668707500 DBMSG: ERROR:: PAGE: Received page 2256 from file 0
2007-06-06 18:40:26.668714500 DBMSG: ERROR:: PAGE: Received duplicate page 2256 from file 0
2007-06-06 18:40:26.668715500 DBMSG: ERROR:: rep_bulk_page: rep_page ret 0
2007-06-06 18:40:26.668722500 DBMSG: ERROR:: rep_bulk_page: Processing LSN [640947][6755391]
2007-06-06 18:40:26.668723500 DBMSG: ERROR:: rep_bulk_page: p 0x2afb66f2c8 ep 0x2afb671344 pgrec data 0x2afb66f2c8, size 4152 (0x1038)
2007-06-06 18:40:26.668730500 DBMSG: ERROR:: PAGE: Received page 2257 from file 0
2007-06-06 18:40:26.668743500 DBMSG: ERROR:: PAGE: Received duplicate page 2257 from file 0
2007-06-06 18:40:26.668750500 DBMSG: ERROR:: rep_bulk_page: rep_page ret 0
2007-06-06 18:40:26.668752500 DBMSG: ERROR:: rep_bulk_page: Processing LSN [640947][6755391]
2007-06-06 18:40:26.668758500 DBMSG: ERROR:: rep_bulk_page: p 0x2afb67030c ep 0x2afb671344 pgrec data 0x2afb67030c, size 4152 (0x1038)
2007-06-06 18:40:26.668760500 DBMSG: ERROR:: PAGE: Received page 2258 from file 0
2007-06-06 18:40:26.668772500 DBMSG: ERROR:: PAGE: Received duplicate page 2258 from file 0
2007-06-06 18:40:26.668779500 DBMSG: ERROR:: rep_bulk_page: rep_page ret 0
2007-06-06 18:40:26.690980500 DBMSG: ERROR:: /ask/bloglines/db/sitedb-slave rep_process_message: msgv = 3 logv 12 gen = 12 eid 0, type bulk_page, LSN [640947][6755391]
2007-06-06 18:40:26.690982500 DBMSG: ERROR:: rep_bulk_page: Processing LSN [640947][6755391]
2007-06-06 18:40:26.690983500 DBMSG: ERROR:: rep_bulk_page: p 0x736584 ep 0x7375bc pgrec data 0x736584, size 4152 (0x1038)
2007-06-06 18:40:26.690985500 DBMSG: ERROR:: PAGE: Received page 2124 from file 0
2007-06-06 18:40:26.690986500 DBMSG: ERROR:: PAGE: Received duplicate page 2124 from file 0
2007-06-06 18:40:26.690992500 DBMSG: ERROR:: rep_bulk_page: rep_page ret 0
2007-06-06 18:40:36.289310500 DBMSG: ERROR:: election thread is exiting
I have the full log files if they would help; these are just the tail ends.
Any ideas? Thanks...
-Paul

Similar Messages

  • How to create and manage the log file

    Hi,
    I want to trace and debug my program as it runs.
    I wrote the following code to create a log file and log debug output, but I am not getting any result.
    Please help me understand how to create and manage the log file.
    Here is the sample program:
    package Src;
    import java.io.IOException;
    import java.util.logging.FileHandler;
    import java.util.logging.Logger;
    import java.util.logging.SimpleFormatter;
    public class Mylog {
         private static final Logger log = Logger.getLogger(Mylog.class.getName());
         public static void main(String[] args) {
             try {
                 // Create an appending file handler
                 boolean append = true;
                 FileHandler handler = new FileHandler("debug.log", append);
                 handler.setFormatter(new SimpleFormatter());
                 // Attach the handler to the desired logger and write one line
                 Logger logger = Logger.getLogger("com.mycompany");
                 logger.addHandler(handler);
                 logger.info("after creating log file");
             } catch (IOException e) {
                 System.out.println("Sys Err");
             }
         }
    }
    Please give me your valuable suggestions!
    Thanks
    MerlinRoshina

    I just need to write a single line to the log file.

  • Database Log file Shrink information

    Hello Team,
    Database log file shrink information (due to a space problem):
    One of my databases is 600 GB, and within it the log file is 260 GB; for this database we take a full backup daily and no log backups. If we shrink the log file, will there be any impact?
    What happens internally when we shrink the log file?
    Another of my databases is 600 GB with a 260 GB log file; for this one we take a full backup daily and a log backup every 15 minutes.
    In this scenario, what will happen if we shrink the log file?

    Hello,
    You should not shrink the log file regularly: it is resource intensive, it takes a lot of time, and it creates fragmentation on the disk storage.
    If you do not back up the log of a database (option 1), you do not have the ability to restore to a point in time between full and differential backups. If you do not want to take log backups, it makes sense to change the recovery model to SIMPLE.
    Backing up the log regularly minimizes the risk of it filling up and growing.
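    As a rough T-SQL illustration of the two options (MyDB and the backup path are placeholders, not from the original post):
    -- Option 1: keep FULL recovery and back the log up regularly so it can be reused
    BACKUP LOG MyDB TO DISK = N'D:\Backups\MyDB_log.trn';
    -- Option 2: if point-in-time restore is not required, switch to SIMPLE recovery
    ALTER DATABASE MyDB SET RECOVERY SIMPLE;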
    Hope this helps.
    Regards,
    Alberto Morillo
    SQLCoffee.com

  • Large and many layered file slow on CS5

    I've installed CS5 on a new 17" MBP i5 8gb. Opening a CMYK PSB of 5.75 gb and around 100 layers, I was disheartened to find that scrolling and zooming were noticeably slower on CS5 than on CS3 on the same machine, and not substantially faster than CS3 on a 2007 17" MBP. It's frustrating, as Adobe has suggested that I might "process very large images up to ten times faster by taking advantage of cross-platform 64-bit support."
    Zooms for this file vary between a few seconds and a minute. At the extremes, it is noticeably worse than CS3 on the same MBP. But even where the time to redraw is similar, CS5's only-when-complete redraw offers less information and less ability to responsively modify zooms than CS3.
    Scrolls cause freezes of several seconds to a half minute. I assume that CS3's progressive redraw may sometimes feel faster than it is. But that rapid if partial response is usable information when scrolling. It's often unnecessary to fully image a screen, but CS5 makes it unavoidable. In both CS3 and CS5, once the whole image has been viewed at a particular zoom, scrolling becomes relatively fluid within that zoom until changes have been made.
    Looking around this site, I've seen and taken suggestions to repair permissions with Disk Utility, turn off Font Preview and increase the Cache Tile Size to 1024k. There are no 3rd party plugins loaded. I've not seen a substantial improvement in zooming or scrolling, after these changes.
    I'm a bit confused by the "tall and thin" and "big and fat" options, as this file and many that I work on have both large height/width and many layers, so neither description fits.
    Can anyone recommend settings that might substantially speed up CS5's handling of large files with many layers? Thanks!

    Unless you work with us to find the cause of your slowdown, it won't get fixed.
    We don't know why you're running slow.
    So far the slowdowns we have seen have been due to bad fonts or bad third-party plugins.
    But without steps and files to reproduce a slowdown, we aren't going to be able to identify a cause, much less fix anything.
    Chris, are you going to ask me for information that I have not already supplied? Where do I send the multi-GB files?

  • Managing Alert log files : Best practices?

    DB Version : 10.2.0.4
    Several of our DBs' alert logs have become quite large. I know that Oracle will create a brand new alert log file if I delete the existing one.
    But I just want to know how you manage your alert log files. Do you archive them (move them to a different directory) and let Oracle recreate a new alert log? I just want to know if there are any best practices I could follow.

    ScottsTiger wrote:
    DB Version : 10.2.0.4
    Several of our DBs' alert logs have become quite large. I know that Oracle will create a brand new alert log file if I delete the existing one.
    But I just want to know how you manage your alert log files. Do you archive them and let Oracle recreate a new alert log? I just want to know if there are any best practices I could follow.
    At the end of every day (or at whatever interval suits you), archive your alert.log: move it to another directory, then remove the original. The next time it needs to write to it, the database instance will automatically create a new alert.log.

  • Crystal Report Server Database Log File Growth Out Of Control?

    We are hosting Crystal Report Server 11.5 on Microsoft SQL Server 2005 Enterprise.  Our Crystal Report Server SQL 2005 database file size = 6,272 KB, and the log file that goes with the database has a size = 23,839,552.
    I have been reviewing the Application Logs and this log file size is auto-increasing about 3-times a week.
    We backup the database each night, and run maintenance routines to Check Database Integrity, re-organize index, rebuild index, update statistics, and backup the database.
    Is it "Normal" to have such a large LOG file compared to the DATABASE file?
    Can you tell me if there is a recommended way to SHRINK the log file?
    Some technical documents suggest first truncating the log and then using the DBCC SHRINKFILE command:
    USE CRS
    GO
    --Truncate the log by changing the database recovery model to SIMPLE
    ALTER DATABASE CRS
    SET RECOVERY SIMPLE;
    --Shrink the truncated log file to 1 gigabyte
    DBCC SHRINKFILE (CRS_log, 1000);
    GO
    --Reset the database recovery model.
    ALTER DATABASE CRS
    SET RECOVERY FULL;
    GO
    Do you think this approach would help?
    Do you think this approach would cause any problems?

    My bad, you didn't put the K on the 2nd number.
    Looking at my SQL Server, that is crazy big; my logs are in the KBs, like 4-8.
    I think someone enabled some type of debugging on your SQL Server. It is more of a Microsoft issue, as our product doesn't require anything like that, judging from my own SQL DBs.
    Regards,
    Tim

  • Database Log File Size

    We are in the process of migrating disabled users to a new Exchange 2013 database on secondary storage. I've noticed that the Logs folder is abnormally large (182 GB). I was wondering if there was a way to clean this up?
    We have other Exchange 2013 databases whose Logs folder is much smaller in comparison (~200 MB). How can I go about cleaning these log files?

    Hi,
    Have you checked the above suggestion to do a full backup and checked the result?
    Is there any update on your issue?
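    If it helps, you can check when the last full backup ran and whether circular logging is enabled with something like this (a sketch; the database name is a placeholder):
    Get-MailboxDatabase "DB01" -Status | Format-List Name,LastFullBackup,LastIncrementalBackup,CircularLoggingEnabled
    A successful full backup (or enabling circular logging, at the cost of point-in-time recovery) is what allows Exchange to truncate those transaction log files.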
    Best regards,
    Amy Wang
    TechNet Community Support
    Sorry to reply to this so late but I wanted to provide somewhat of an update. I am running a DPM job on the database now. We'll see if it resolves the issue. When I ran the DPM job originally it had an error. I believe it was because the drive was at capacity.
    I've since expanded the drive and kicked off the DPM job again.

  • Question about how Oracle manages Redo Log Files

    Good morning,
    Assuming a configuration that consists of 2 redo log groups (Group A and B), each group consisting of 2 disks (Disks A1 & A2 for Group A and Disks B1 and B2 for group B). Further, let's assume that each redo log file resides by itself in a disk storage device and that the device is dedicated to it. Therefore in the above scenario, there are 4 disks, one for each redo log file and, each disk contains nothing else other than a redo log file. Furthermore, let's assume that the database is in ARCHIVELOG mode and that the archive files are stored on yet another different set of devices.
    sort of graphically:
        GROUP A             GROUP B
          A1                  B1
          A2                  B2
    The question is: When the disks that comprise Group A are filled and Oracle switches to the disks in Group B, can the disks in Group A be taken offline, maybe even physically removed from the system if necessary, without affecting the proper operation of the database? Can the Archiver process be temporarily delayed until the disks (that were removed) are brought back online, or is the DBA forced to wait until the Archiver process has finished creating a copy of the redo log file in the archive?
    Thank you for your help,
    John.

    Hello,
    Dropping Log Groups
    To drop an online redo log group, you must have the ALTER DATABASE system privilege. Before dropping an online redo log group, consider the following restrictions and precautions:
    * An instance requires at least two groups of online redo log files, regardless of the number of members in the groups. (A group is one or more members.)
    * You can drop an online redo log group only if it is inactive. If you need to drop the current group, first force a log switch to occur.
    * Make sure an online redo log group is archived (if archiving is enabled) before dropping it. To see whether this has happened, use the V$LOG view.
    SELECT GROUP#, ARCHIVED, STATUS FROM V$LOG;
    GROUP# ARC STATUS
    1 YES ACTIVE
    2 NO CURRENT
    3 YES INACTIVE
    4 YES INACTIVE
    Drop an online redo log group with the SQL statement ALTER DATABASE with the DROP LOGFILE clause.
    The following statement drops redo log group number 3:
    ALTER DATABASE DROP LOGFILE GROUP 3;
    When an online redo log group is dropped from the database, and you are not using the Oracle Managed Files feature, the operating system files are not deleted from disk. Rather, the control files of the associated database are updated to drop the members of the group from the database structure. After dropping an online redo log group, make sure that the drop completed successfully, and then use the appropriate operating system command to delete the dropped online redo log files.
    When using Oracle-managed files, the cleanup of operating systems files is done automatically for you.
    Your database will not be affected as long as at least two redo log groups remain online; the minimum number of groups is two because LGWR (the log writer) writes to them in a circular manner. With only two groups you therefore cannot drop one, or the instance will hang. If you want to take one of your two groups offline, first add a third group, force a log switch so the group you want to remove is no longer current, and then drop it.
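    A rough SQL sketch of that sequence (file paths, sizes, and group numbers are illustrative only; check V$LOG first so the group being dropped is INACTIVE and archived):
    ALTER DATABASE ADD LOGFILE GROUP 3 ('/u05/oradata/redo03a.log', '/u06/oradata/redo03b.log') SIZE 100M;
    ALTER SYSTEM SWITCH LOGFILE;
    ALTER DATABASE DROP LOGFILE GROUP 1;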
    Please refer to:
    http://download.oracle.com/docs/cd/B10500_01/server.920/a96521/onlineredo.htm#7438
    Kind regards
    Mohamed
    Oracle DBA

  • How to Create a batch file to display and count specific words in log file

    Hi All,
    I have a requirement for a program to be written that will go through a log file and look for the following keyword:
    Unexpected Essbase error
    It should also count the number of times the word "error" appears in the log file.
    You may use a batch file or a Perl script to complete this task (see the Perl sketch at the end of this thread).
    e.g. for a given log file, it should flag "yes" if the keyword "Unexpected Essbase error" is found, and report that the word "error" occurs 9 times.
    Please help me understand the process to achieve the above requirement,
    and please let me know what Perl scripting is.
    Thanks in Advance
    Regards,
    SM

    Sorry, but it sounds like you have been given a task and have simply pasted the requirement on the forum. Have you done any research to find out which scripting language you are going to use, or looked for any examples? There are so many different examples and so much help on the internet; it just takes a little bit of time and investment.
    Cheers
    John
    http://john-goodwin.blogspot.com/
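    For what it's worth, a minimal Perl sketch of the kind of check being asked about (the log file name is passed as the first argument; this is an illustration, not a tested utility):
    #!/usr/bin/perl
    # Count occurrences of "error" and flag the Essbase keyword.
    use strict;
    use warnings;
    my ($count, $found) = (0, 0);
    open my $fh, '<', $ARGV[0] or die "cannot open $ARGV[0]: $!";
    while (my $line = <$fh>) {
        $found = 1 if $line =~ /Unexpected Essbase error/;
        $count++ while $line =~ /error/gi;
    }
    close $fh;
    print 'Unexpected Essbase error found: ', ($found ? 'yes' : 'no'), "\n";
    print "word 'error' occurs $count times\n";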

  • Help! SQL server database log file increasing enormously

    I have 5 SSIS jobs running in the SQL Server job agent, and some of them pull transactional data into our database every 4 hours. The problem is that the log file of our database is growing rapidly; in a day it eats up 160 GB of disk space. Since our requirement does not need point-in-time recovery, I set the recovery model to SIMPLE, but even so the log consumes more than 160 GB in a day. Because the disk fills up, the scheduled jobs often fail. As a temporary measure I am using the DETACH approach to clean up the log.
    FYI: all the SSIS packages in the job use transactions on some tasks, e.g. a Sequence Container.
    I want a permanent solution to keep the log file within a particular size limit, and as I said earlier I do not need the log data for point-in-time recovery, so there is no need to take log backups at all.
    One more problem: in our database the transactional table has 10 million records and some master tables have over 1000 records, but our mdf file is now about 50 GB. I do not believe that 10 million records should consume 50 GB. What is the problem here?
    Help me with these issues. Thanks in advance.

    > One more problem: in our database the transactional table has 10 million records and some master tables have over 1000 records, but our mdf file is now about 50 GB. I do not believe that 10 million records should consume 50 GB. What is the problem here? Help me with these issues.
    For the SSIS part of the question it would be better to ask in the SSIS forum, although nothing about the logging behavior is going to change. You can add some space to the log file, and you should also batch your transactions, as already suggested.
    Regarding the memory question about SQL Server: once it takes memory it does not release it unless the Windows OS comes under memory pressure and SQLOS asks SQL Server to trim its consumption. So if you have set max server memory to somewhere near 50 GB, SQL Server will eventually use that much memory. What you are seeing is normal: it is costly for SQL Server to release and re-acquire memory, so it avoids that by caching as much as possible, which also avoids costly physical reads.
    When the log file is getting full, what does the query below return?
    select log_reuse_wait_desc from sys.databases where name='db_name'
    Can you manually introduce a checkpoint in the ETL query? Try this; it might help.
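    For instance, a sketch of what that could look like at the end of each ETL batch (only useful when the database really is in SIMPLE recovery, where the log is truncated at checkpoints):
    -- run after each batch so committed log records become reusable
    CHECKPOINT;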

  • Database Log File becomes very big, What's the best practice to handle it?

    The log of my production database is getting very big and the hard disk is almost full. I am pretty new to SAP, but familiar with SQL Server. Can anybody give me advice on the best practice for handling this issue?
    Should I shrink the database?
    I know a bigger hard disk is needed for the long term.
    Thanks in advance.

    Hi Finke,
    Usually the log file fills up and grows huge due to not having regular transaction log backups. If your database is in FULL recovery mode, every transaction is logged to the transaction log file, and the log is only cleared when you take a log backup. If this is a production system and you do not have regular transaction log backups, the problem is just sitting there waiting to explode when you need a point-in-time restore. Please check your backup/restore strategy.
    Follow these steps to get the transaction log file back into shape:
    1.) Take a transaction log backup.
    2.) Shrink the log file: DBCC SHRINKFILE('logfilename', 10240)
          The above command shrinks the file to 10 GB (a recommended size for highly transactional systems).
    Finke Xie wrote:
    > Should I shrink the database?
    "NEVER SHRINK DATA FILES"; shrink only the log file.
    3.) Schedule log backups every 15 minutes.
    Thanks
    Mush

  • Remote management audit log file

    I've read the documentation @
    http://www.novell.com/documentation/...a/ad4zt4x.html
    which indicates that the audit file is auditlog.txt and is located in the
    system directory of the managed workstation. The problem is I can't find the
    log file in that location or anywhere else on the computer. I even looked in
    C:\Program Files\Novell\ZENworks\RemoteManagement\RMAgent but I can't find
    anything. Any ideas? Can someone point me in the right direction.
    BTW, I'm using ZDM 6.5 SP2 for both the server and the workstations.
    Jim Webb

    Just an FYI, with ZDM 6.5 HP3 the file name changed from AuditLog.txt to
    ZRMAudit.txt still located under system32 on Windows XP.
    Jim Webb
    >>> On 5/22/2006 at 3:27 PM, in message
    <[email protected]>,
    Jim Webb<[email protected]> wrote:
    > Well I found out the ZDM 6.5 HP2 fixes the problem of the log file not
    > being
    > created.
    >
    > Jim Webb
    >
    >>>> On 5/19/2006 at 8:37 AM, in message
    > <[email protected]>,
    > Jim Webb<[email protected]> wrote:
    >> Well, it does show up in the event log but not in the inventory. If I
    >> disable inventory the log file won't be deleted, correct?
    >>
    >> Jim Webb
    >>
    >>>>> On 5/18/2006 at 10:03 AM, in message
    >> <[email protected]>, Marcus
    >> Breiden<[email protected]> wrote:
    >>> Jim Webb wrote:
    >>>
    >>>> I did a search on a machine I am remote controlling, no log file. What
    >>>> next?
    >>> good question... does the session show up in the eventlog?

  • Database Log File getting full by Reindex Job

    Hey guys
    I have an issue with one of my databases during the reindex job. Most of the time the log file is 99% free, but during the reindex job the log file fills up and runs out of space, so the job fails and I also get errors from the DB due to lack of log space. Any suggestions?

    Please note that changing to BULK_LOGGED recovery will make you lose point-in-time recovery: an ALTER INDEX ... REBUILD is minimally logged in that model, so for the period this job is running you cannot restore to a point in time; plan accordingly. You also need to take a log backup after changing back to FULL recovery.
    I guess Ola's script would suffice; if not, you will have to increase space on the drive where the log file resides. An index rebuild is fully logged under FULL recovery.
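    A rough T-SQL sketch of that switch around the maintenance window (MyDB, the table name, and the backup path are placeholders):
    ALTER DATABASE MyDB SET RECOVERY BULK_LOGGED;
    ALTER INDEX ALL ON dbo.BigTable REBUILD;
    ALTER DATABASE MyDB SET RECOVERY FULL;
    -- take a log backup right away to restart the point-in-time recovery chain
    BACKUP LOG MyDB TO DISK = N'D:\Backups\MyDB_log.trn';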

  • Too many lines in my log file

    Hi everybody,
    I've made a Tomcat project under Eclipse and I'm using log4j.
    I've configured my log4j in "log4j-config.xml" file.
    I've made one file per level (DEBUG, INFO, WARN and ERROR), but I'm receiving too many lines in my "debug.log" file.
    Example:
    17:52:39 - [main] [DEBUG] org.apache.commons.beanutils.BeanUtils : BeanUtils.populate(org.apache.struts.tiles.TilesPlugin@14d556e, {definitions-parser-validate=true, definitions-parser-details=2, definitions-debug=2, moduleAware=true, definitions-config=/WEB-INF/struts/struts-tiles-defs.xml})
    17:52:39 - [main] [DEBUG] org.apache.commons.beanutils.BeanUtils : setProperty(org.apache.struts.tiles.TilesPlugin@14d556e, definitions-parser-validate, true)
    17:52:39 - [main] [DEBUG] org.apache.commons.beanutils.BeanUtils : setProperty(org.apache.struts.tiles.TilesPlugin@14d556e, definitions-parser-details, 2)
    Can someone tell me how to filter my logs?
    Here is a part of my "log4j-config.xml" file:
    <appender name="DEBUG" class="org.apache.log4j.FileAppender">
        <param name="File" value="${log.path}/debug.log" />
        <param name="Threshold" value="DEBUG" />
        <param name="Append" value="false" />
        <layout class="org.apache.log4j.PatternLayout">
            <param name="ConversionPattern" value="%d{HH:mm:ss} - [%t] [%-5p] %c : %m%n" />
        </layout>
    </appender>
    Thanks in advance.

    Hi everybody,
    I've made a Tomcat project under Eclipse and I'm using log4j.
    I've configured my log4j in the "log4j-config.xml" file.
    I've made one file per level (DEBUG, INFO, WARN and ERROR), but I'm receiving too many lines in my "debug.log" file.
    Then don't use that level. The whole point of the various levels is to be able to trade off level of detail against volume of output. DEBUG is meant to be very verbose.
    You can set various classes or packages to log at different levels, overriding the default for that logger. So if you don't want debug output for com.acme.whatever, then in log4j.properties or log4j.xml or whatever, you can configure that package and all "subpackages" for info, warn, or error. See log4j's docs for details.
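    For example, in your log4j-config.xml a logger element like the following (a sketch; org.apache.commons.beanutils is taken from the log lines above) raises that package's threshold so its DEBUG output no longer reaches the appenders:
    <logger name="org.apache.commons.beanutils">
        <level value="info" />
    </logger>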

  • Correct cmdlet for moving my database/log files?

    Currently, my databases/logs reside on an external USB 1TB drive (D:\Mailbox\Mail400).  
    That being said, I need to move both database/logs to the C:\ drive
    Database - "C:\Program Files\Microsoft\Exchange Server\V15\Mailbox\Mail400"
    Logs-  "C:\Program Files\Microsoft\Exchange Server\V15\Mailbox\Mail400\Logs\". 
    Is this the correct cmdlet I would use to accomplish this move?
    Move-DatabasePath "Mail400" -EdbFilePath "C:\Program Files\Microsoft\Exchange Server\V15\Mailbox\Mail400\Mail400.edb" -LogFolderPath "C:\Program Files\Microsoft\Exchange Server\V15\Mailbox\Mail400\Logs\"
    I realize the best practice is to separate the database file and transaction logs onto separate disks but that isn't an option for us at this time.
    Ex2013sp1
    PennyM

    Hi PennyM,
    I have a test in my environment using the following cmdlet. I use Exchange server 2013.
    Move-DatabasePath -Identity 'Mailbox Database 1294421375' -EdbFilePath 'C:\Data\Mailbox Database 1294421375\test.edb' -LogFolderPath 'C:\Data\Mailbox Database 1294421375'
    There is no need to create these folders manually. When you run the above cmdlet, the folders you set will be created automatically.
    Besides, the move will take some time, it depends on the size of the database and transaction log files being moved.
    Hope it helps.
    Best regards,
    Amy Wang
    TechNet Community Support
