Database Log File getting full by Reindex Job

Hey guys
I have an issue with one of my databases during the Reindex job. Most of the time the log file is 99% free, but during the Reindex job the log file fills up and runs out of space, so the job fails and I also get errors from the database due to lack of log space. Any suggestions?

Please note that changing to BULK_LOGGED recovery will make you lose point-in-time recovery: ALTER INDEX ... REBUILD is minimally logged under that model, so for the period this job is running you cannot restore to a point in time. Plan accordingly, and take a log backup after changing back to FULL recovery.
Ola's script should suffice; if not, you will have to increase space on the drive where the log file resides. An index rebuild is fully logged under the FULL recovery model.
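If you do go the BULK_LOGGED route, a minimal sketch might look like the following; the database, index, and backup path names are placeholders, not from the original post:
-- Switch to BULK_LOGGED so ALTER INDEX ... REBUILD is minimally logged
ALTER DATABASE [YourDb] SET RECOVERY BULK_LOGGED;
GO
ALTER INDEX ALL ON dbo.YourBigTable REBUILD;
GO
-- Switch back, then take a log backup so point-in-time recovery resumes from here
ALTER DATABASE [YourDb] SET RECOVERY FULL;
GO
BACKUP LOG [YourDb] TO DISK = N'X:\Backups\YourDb_log.trn';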

Similar Messages

  • What will happen if the log file is full

    Hi All,
    I have a doubt.
    I have one user db called monitorindb. If the log is full in this db, will it make the server shut down?
    Thanks
    Shashikala

    No, the server will not shut down because a database's log file is full. The application accessing that database might crash, and if a transaction is running it will stop and try to roll back; if the log cannot grow, the rollback cannot complete, which can leave the DB stuck in recovery.
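    To see how full the log is and why it cannot be reused, a quick check (standard commands, not from the original reply) is:
    -- Percentage of log space in use for every database
    DBCC SQLPERF(LOGSPACE);
    -- Why SQL Server cannot reuse the log for this database
    SELECT name, log_reuse_wait_desc FROM sys.databases WHERE name = 'monitorindb';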

  • DB2: "Log File has reached its saturation point" DIA8309C Log file was full

    Hello Experts,
    I have successfully installed a ECC 6.0 System-ABAP + JAVA (DB2 v9.5 windows server 2008-x64 bit).
    Kernel: 700 , Patch: 185 ; SP level : rel 700 , level 17.
    However, now I suddenly cannot connect to the database and SAP is down.
    C:\Users\dsqadm.DUCATI>r3trans -d
    This is r3trans version 6.14 (release 700 - 16.10.08 - 16:26:00).
    unicode enabled version
    2EETW169 no connect possible: "DBMS = DB6 --- DB2DBDFT = 'DSQ'"
    r3trans finished (0012).
    db2diag.log:-
    ADM1823E  The active log is full and is held by application handle "51886".  Terminate this application by COMMIT, ROLLBACK or FORCE APPLICATION.
    "Log File has reached its saturation point" DIA8309C Log file was full.
    "Backup pending.  Database has been made recoverable.  Backup now required."  DIA8168C Backup pending for database .
    Also, regarding DB2 licensing, I have a query:
    db2licm -l gives the following:
    C:\Users\db2dsq.DUCATI>db2licm -l
    Product name:                     "DB2 Enterprise Server Edition"
    License type:                     "CPU Option"
    Expiry date:                      "Permanent"
    Product identifier:               "db2ese"
    Version information:              "9.5"
    Enforcement policy:               "Soft Stop"
    Features:
    DB2 Database Partitioning:        "Licensed"
    DB2 Performance Optimization ESE: "Licensed"
    DB2 Storage Optimization:         "Licensed"
    DB2 Advanced Access Control:      "Not licensed"
    DB2 Geodetic Data Management:     "Not licensed"
    IBM Homogeneous Replication ESE:  "Not licensed"
    Product name:                     "DB2 Connect Server"
    License type:                     "Trial"
    Expiry date:                      "10/19/2009"
    Product identifier:               "db2consv"
    Version information:              "9.5"
    I have applied both the SAP and DB2 licenses. Is everything OK regarding the licensing of DB2 v9.5 for use with SAP?
    I am new to DB2 and looking for expert guidance on the above issues.
    Thanks,
    Rakesh

    C:\Users\db2dsq.DUCATI>db2 get dbm cfg
              Database Manager Configuration
         Node type = Enterprise Server Edition with local and remote clients
    Database manager configuration release level            = 0x0c00
    Maximum total of files open               (MAXTOTFILOP) = 16000
    CPU speed (millisec/instruction)             (CPUSPEED) = 4,723442e-007
    Communications bandwidth (MB/sec)      (COMM_BANDWIDTH) = 1,000000e+002
    Max number of concurrently active databases     (NUMDB) = 8
    Federated Database System Support           (FEDERATED) = NO
    Transaction processor monitor name        (TP_MON_NAME) =
    Default charge-back account           (DFT_ACCOUNT_STR) =
    Default database monitor switches
       Buffer pool                         (DFT_MON_BUFPOOL) = ON
       Lock                                   (DFT_MON_LOCK) = ON
       Sort                                   (DFT_MON_SORT) = ON
       Statement                              (DFT_MON_STMT) = ON
       Table                                 (DFT_MON_TABLE) = ON
       Timestamp                         (DFT_MON_TIMESTAMP) = ON
       Unit of work                            (DFT_MON_UOW) = ON
    Monitor health of instance and databases   (HEALTH_MON) = OFF
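
    The ADM1823E message names the application handle holding the active log, so a hedged way out (standard DB2 CLP commands; the handle is taken from the message above, while the log sizes and backup path are examples only) is to force that unit of work, give the log more room, and clear the backup-pending state:
    db2 "force application (51886)"
    db2 update db cfg for DSQ using LOGPRIMARY 20 LOGSECOND 40
    db2 backup db DSQ to D:\db2backup
    Note that a LOGPRIMARY change only takes effect once all applications have disconnected from the database.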

  • Database Log file Shrink information

    Hello Team,
    Database log file shrink information, due to a space problem:
    Scenario 1: one of my databases is 600 GB, with a 260 GB log file. On this database we take a full backup daily and no log backups. If we shrink the log file, is there any impact?
    Scenario 2: one of my databases is 600 GB, with a 260 GB log file. On this database we take a full backup daily and a log backup every 15 minutes. What happens if we shrink the log file in this scenario?

    Hello,
    You should not shrink the log file regularly: it is resource-intensive, it takes a lot of time, and it creates fragmentation on the disk storage.
    If you don't back up the log (scenario 1), you don't have the ability to restore to a point in time between full and differential backups. If you don't want to take log backups, then it makes sense to change the recovery model to SIMPLE.
    Backing up the log regularly minimizes the risk of it filling up and growing in size.
    Hope this helps.
    Regards,
    Alberto Morillo
    SQLCoffee.com
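
    A minimal sketch of the recovery-model change described above, using placeholder database and log file names:
    -- If you never take log backups, SIMPLE recovery stops the log growing unchecked
    ALTER DATABASE [YourDb] SET RECOVERY SIMPLE;
    -- Optional one-time cleanup after the switch; target size is in MB
    DBCC SHRINKFILE ('YourDb_log', 1024);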

  • The event log file is full

    Hi All
    In Message monitoring(RWB) in  adapter engine i am getting the following error
    SOAP: response message contains an error XIAdapter/PARSING/ADAPTER.SOAP_EXCEPTION - soap fault: Server was unable to process request. ---> The event log file is full
    Can any one suggest me what might be the problem
    Thanks
    Jayaraman

    >
    Jayaraman P wrote:
    > SOAP: response message contains an error XIAdapter/PARSING/ADAPTER.SOAP_EXCEPTION - soap fault: Server was unable to process request. ---> The event log file is full
    this is because of a problem at the WS server (most likely a Windows server).
    You can ask the WS team to look into this issue; it is not a PI problem.

  • "The event log file is full" error - SAP LVS Report Viewer 1.0

    Hi Experts,
    I'm getting an error when trying to generate a report in SAP LVS.
    Error:
    The event log file is full
    How do I resolve this? Anyone encountered this error?
    Thanks in advance,
    Cyrous

    Hi,
    It seems that the audit log mechanism is activated on your system. But if you didn't set the parameter DIR_AUDIT, the audit log will be created under "/usr/sap/<SID>/<instance>/log", not the "data" folder. You can check its default value in RZ11. Review the parameters below:
    DIR_AUDIT = <path>
    FN_AUDIT = audit_++++++++.AUD
    rsau/enable = 1
    rsau/max_diskspace/local = <SizeOfFile>
    Best regards,
    Orkun Gedik

  • How to recover the database when some of the archive log files get deleted

    I am facing a problem with an Oracle database, related to archive logs.
    Our development database is running in archivelog mode, but we don't have backups scheduled and have no recovery catalog.
    While the database was running, the disk got full, so some archive logs were deleted manually.
    After this the DB was restarted, and now it is not coming up. The errors are as follows:
    SQL> startup
    ORACLE instance started.
    Total System Global Area 1444383504 bytes
    Fixed Size 731920 bytes
    Variable Size 486539264 bytes
    Database Buffers 956301312 bytes
    Redo Buffers 811008 bytes
    Database mounted.
    ORA-01589: must use RESETLOGS or NORESETLOGS option for database open
    SQL> alter database open resetlogs;
    alter database open resetlogs
    ERROR at line 1:
    ORA-01113: file 1 needs media recovery
    ORA-01110: data file 1: '/export/home/oracle/dev/ADVFRW/ADVFRW.system'
    SQL> recover datafile '/export/home/oracle/dev/ADVFRW/ADVFRW.system'
    ORA-00283: recovery session canceled due to errors
    ORA-01610: recovery using the BACKUP CONTROLFILE option must be done
    SQL> recover database using backup controlfile;
    ORA-00279: change 215548705 generated at 09/02/2008 17:06:10 needed for thread
    1
    ORA-00289: suggestion :
    /export/home/oracle/dev/ADVFRW/ADVFRW.archivelog1/LOG_ADVFRW_1107_1.ARC
    ORA-00280: change 215548705 for thread 1 is in sequence #1107
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    /export/home/oracle/dev/ADVFRW/ADVFRW.archivelog1/LOG_ADVFRW_1107_1.ARC
    ORA-00308: cannot open archived log
    '/export/home/oracle/dev/ADVFRW/ADVFRW.archivelog1/LOG_ADVFRW_1107_1.ARC'
    ORA-27037: unable to obtain file status
    SVR4 Error: 2: No such file or directory
    Additional information: 3
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    CANCEL
    Media recovery cancelled.
    SQL>
    How can I recover the database and bring it online?
    Any help will be highly appreciated.
    With Regards
    Hemant Joshi

    Hi,
    Archive log files are copies of redo log files. As redo log files are circularly overwritten, Oracle generates an archive log file for each redo log file about to be overwritten. So if you have a backup from 10 am and your database crashed at 3 pm, you cannot use the redo log files alone, as they hold incomplete information; to recover the database up to 3 pm you need the archive log files generated between 10 am and 3 pm. In your case, since you are missing one archive log file, you cannot perform complete recovery and will suffer data loss.
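    Since complete recovery is impossible without the deleted archive log, the usual way forward (standard SQL*Plus commands, sketched here; changes after the missing log are lost) is incomplete recovery followed by RESETLOGS:
    -- Apply the archive logs that still exist, then stop at the gap
    RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
    -- Enter CANCEL when the missing LOG_ADVFRW_1107_1.ARC is requested, then:
    ALTER DATABASE OPEN RESETLOGS;
    -- Take a full backup immediately afterwards; earlier backups and archive logs no longer apply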

  • Help! SQL Server database log file increasing enormously

    I have 5 SSIS jobs running in the SQL Server job agent, and some of them pull transactional data into our database every 4 hours. The problem is that our database's log file is growing rapidly: it eats up 160 GB of disk space in a day. Our requirements don't need point-in-time recovery, so I set the recovery model to SIMPLE, but even then the log consumes more than 160 GB a day. Because the disk fills up, the scheduled jobs often fail. As a temporary measure I am using the detach approach to clean up the log.
    FYI: the SSIS packages in the job use transactions on some tasks, e.g. a Sequence Container.
    I want a permanent solution that keeps the log file within a particular limit, and as I said earlier I don't need the log data for point-in-time recovery, so there is no need to take log backups at all.
    And one more problem: in our database the transactional table has 10 million records and some master tables have over 1,000 records, but our mdf file is now about 50 GB. I don't believe 10 million records should amount to 50 GB. What's the problem here?
    Help me on these issues. Thanks in advance.

    >
    > And one more problem: in our database the transactional table has 10 million records and some master tables have over 1,000 records, but our mdf file is now about 50 GB. I don't believe 10 million records should amount to 50 GB. What's the problem here?
    > Help me on these issues.
    For the SSIS part of the question you would do better to ask in the SSIS forum, although nothing is going to change about the logging behavior. You can add some space to the log file, and you should also batch your transactions, as already suggested.
    Regarding the memory question about SQL Server: once it takes memory it is not going to release it unless the Windows OS faces memory pressure and SQLOS asks SQL Server to trim its consumption. So if you have set max server memory to somewhere near 50 GB, SQL Server will eventually use that much. What you are seeing is totally normal: it is a costly task for SQL Server to release and re-acquire memory, so it avoids this by caching as much as possible, which also spares it costly physical reads.
    When the log file is getting full, what does the query below return?
    select log_reuse_wait_desc from sys.databases where name='db_name'
    Can you manually introduce a CHECKPOINT in the ETL query? Try it; it might help.
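    A minimal sketch of batching with a manual checkpoint, assuming SIMPLE recovery and placeholder table and column names:
    -- Delete (or load) in small batches so the log can be reused between them
    DECLARE @cutoff date = '20140101';  -- placeholder
    WHILE 1 = 1
    BEGIN
        DELETE TOP (100000) FROM dbo.StagingTable WHERE LoadDate < @cutoff;
        IF @@ROWCOUNT = 0 BREAK;
        CHECKPOINT;  -- under SIMPLE recovery this lets the inactive log be truncated
    END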

  • Database Log File becomes very big. What's the best practice to handle it?

    The log of my production database is getting very big and the hard disk is almost full. I am pretty new to SAP but familiar with SQL Server; can anybody give me advice on the best practice to handle this issue?
    Should I shrink the database?
    I know increasing the hard disk is needed for the long term.
    Thanks in advance.

    Hi Finke,
    Usually the log file fills up and grows huge due to not having regular transaction log backups. If your database is in FULL recovery mode, every transaction is logged in the transaction log file, and the log is cleared when you take a log backup. If it is a production system and you don't have regular transaction log backups, the problem is just sitting there waiting to explode when you need a point-in-time restore. Please check your backup/restore strategy.
    Follow these steps to get the transaction log file back into a normal shape:
    1.) Take a transaction log backup.
    2.) Shrink the log file: DBCC SHRINKFILE('logfilename', 10240)
          The above command will shrink the file to 10 GB (a recommended size for high-transaction systems).
    >
    Finke Xie wrote:
    > Should I Shrink the Database? .
    "NEVER SHRINK DATA FILES", shrink only log file
    3.) Schedule log backups every 15 minutes.
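    Put together, the steps look roughly like this (the backup path and logical log file name are placeholders):
    -- 1.) Back up the log so the inactive portion can be cleared
    BACKUP LOG [YourDb] TO DISK = N'X:\Backups\YourDb_log.trn';
    -- 2.) Shrink the log file back to 10 GB (size is in MB)
    DBCC SHRINKFILE ('YourDb_log', 10240);
    -- 3.) Then schedule the BACKUP LOG statement every 15 minutes, e.g. via a SQL Server Agent job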
    Thanks
    Mush

  • Database Log File Size

    We are in the process of migrating disabled users to a new Exchange 2013 database on secondary storage. I've noticed that the Logs folder is abnormally large (182 GB). I was wondering if there was a way to clean this up?
    We have other Exchange 2013 databases whose Logs folder is much smaller in comparison (~200 MB). How can I go about cleaning these log files?

    Hi,
    Have you checked the suggestion above to do a full backup and checked the result?
    Is there any update on your issue?
    Best regards,
    Amy Wang
    TechNet Community Support
    Sorry to reply to this so late but I wanted to provide somewhat of an update. I am running a DPM job on the database now. We'll see if it resolves the issue. When I ran the DPM job originally it had an error. I believe it was because the drive was at capacity.
    I've since expanded the drive and kicked off the DPM job again.

  • Unable to delete records as the transaction log file is full

    My disk is running out of space, so I decided to free some by deleting old data. There are 240 million records to delete, and I tried deleting them 100,000 at a time, but I am unable to delete them at once, and shrinking the database doesn't free much space. This is the error I am getting at times:
    The transaction log for database 'TEST_ARCHIVE' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases
    How can I overcome this situation and delete all the old records? Please advise.
    mayooran99

    In order to delete rows, SQL Server needs to write the information to the log file, and you do not have room in the log for those rows. You might succeed by deleting fewer rows each time, backing up the log after each batch, and then shrinking the log file, but this is not the way I would choose.
    The best option is probably to add another disk (a simple disk does not cost a lot) and move the log file there permanently. That will improve the database's performance as well (in most cases it is highly recommended not to put the log file on the same disk as the data file).
    If you can't add a new disk permanently, then add one temporarily. Add a file to the database on that disk, create a new table there, move all the data that you do not want to delete into the new table, truncate the current table, bring the data back from the new table, then drop the new table and the new file to release the temporary disk, as sketched below.
    Are you using the FULL or SIMPLE recovery model?
    * In FULL mode you have to back up the log file if you want to shrink it.
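    A rough sketch of the move-and-truncate approach (the table and column names are hypothetical; only the database name comes from the error message). TRUNCATE TABLE deallocates pages with minimal logging, which is the point:
    -- Copy only the rows to keep, ideally onto the temporary disk's filegroup
    SELECT * INTO dbo.Archive_keep
    FROM dbo.Archive
    WHERE RecordDate >= '20140101';   -- placeholder cutoff
    TRUNCATE TABLE dbo.Archive;       -- minimally logged, frees the space
    INSERT INTO dbo.Archive
    SELECT * FROM dbo.Archive_keep;
    DROP TABLE dbo.Archive_keep;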
      Ronen Ariely

  • Crystal Report Server Database Log File Growth Out Of Control?

    We are hosting Crystal Report Server 11.5 on Microsoft SQL Server 2005 Enterprise.  Our Crystal Report Server SQL 2005 database file size = 6,272 KB, and the log file that goes with the database has a size = 23,839,552.
    I have been reviewing the Application Logs and this log file size is auto-increasing about 3-times a week.
    We backup the database each night, and run maintenance routines to Check Database Integrity, re-organize index, rebuild index, update statistics, and backup the database.
    Is it "Normal" to have such a large LOG file compared to the DATABASE file?
    Can you tell me if there is a recommended way to SHRINK the log file?
    Some technical documents suggest first truncating the log, and then using the DBCC SHRINKFILE command:
    USE CRS
    GO
    --Truncate the log by changing the database recovery model to SIMPLE
    ALTER DATABASE CRS
    SET RECOVERY SIMPLE;
    --Shrink the truncated log file to 1 gigabyte
    DBCC SHRINKFILE (CRS_log, 1000);
    GO
    --Reset the database recovery model.
    ALTER DATABASE CRS
    SET RECOVERY FULL;
    GO
    Do you think this approach would help?
    Do you think this approach would cause any problems?

    My bad, you didn't put the K on the 2nd number.
    Looking at my SQL Server, that's crazy big; my logs are in the KBs, like 4-8.
    I think someone enabled some type of debugging on your SQL Server. It's more of a Microsoft issue; our product doesn't require it, judging from my SQL DBs.
    Regards,
    Tim
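
    One caveat about the script in the question (general SQL Server behavior, not something from the replies): switching to SIMPLE breaks the log backup chain, so after switching back to FULL, take a full (or differential) backup before log backups will work again:
    -- Restart the log backup chain after returning to FULL recovery (path is a placeholder)
    BACKUP DATABASE CRS TO DISK = N'X:\Backups\CRS_full.bak';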

  • [SOLVED]Log files getting LARGE

    I ran pacman -Syu for the first time in several months yesterday, and my computer has become almost useless because everything.log, kernel.log and messages.log get extremely large (3.8 GB) after a while, causing / to become 100% full.
    I've located the following in kernel.log:
    Mar 14 15:06:44 elvix attempt to access beyond end of device
    Mar 14 15:06:45 elvix attempt to access beyond end of device
    Mar 14 15:06:45 elvix sda5: rw=0, want=1812442544, limit=412115382
    Mar 14 15:06:45 elvix attempt to access beyond end of device
    Mar 14 15:06:45 elvix sda5: rw=0, want=1812442544, limit=412115382
    Mar 14 15:06:45 elvix attempt to access beyond end of device
    Not sure what it means, but the last two lines are repeated many times and are the reason why the log files grow beyond limits. Anyone got ideas as to what can be done to fix this?

    logrotate works really well:
    http://www.archlinux.org/packages/14754/
    There's quite a few threads about configuration floating around.

  • Large and Many Replication Manager Database Log Files

    Hi All,
    I've recently added replication manager support to our database systems. After enabling the replication manager I end up with many log.* files of many gigabytes apiece on the master. This makes backing up the database difficult. Is there a way to purge the log files more often?
    It also seems that the replication slave never finishes synchronizing with the master.
    Thank you,
    Rob

    So, I set up a debug environment on test machines, with a snapshot of the db. We now set rep_set_limit to 5 MB.
    Now it's failing to sync, so I recompiled with --enable-diagnostic and enabled DB_VERB_REPLICATION.
    On the master we see this:
    2007-06-06 18:40:26.646768500 DBMSG: ERROR:: sendpages: 2257, page lsn [293276][4069284]
    2007-06-06 18:40:26.646775500 DBMSG: ERROR:: bulk_msg: Copying LSN [640947][6755391] of 4152 bytes to 0x2afb35e370
    2007-06-06 18:40:26.646782500 DBMSG: ERROR:: sendpages: 2257, lsn [640947][6755391]
    2007-06-06 18:40:26.646794500 DBMSG: ERROR:: sendpages: 2258, page lsn [309305][9487507]
    2007-06-06 18:40:26.646801500 DBMSG: ERROR:: bulk_msg: Copying LSN [640947][6755391] of 4152 bytes to 0x2afb35f3b4
    2007-06-06 18:40:26.646803500 DBMSG: ERROR:: sendpages: 2258, lsn [640947][6755391]
    2007-06-06 18:40:26.646809500 DBMSG: ERROR:: send_bulk: Send 562140 (0x893dc) bulk buffer bytes
    2007-06-06 18:40:26.646816500 DBMSG: ERROR:: /ask/bloglines/db/sitedb rep_send_message: msgv = 3 logv 12 gen = 12 eid 0, type bulk_page, LSN [640947][6755391] nobuf
    2007-06-06 18:40:26.647064500 DBMSG: ERROR:: wrote only 147456 bytes to site 10.0.3.235:9003
    2007-06-06 18:40:26.648559500 DBMSG: ERROR:: /ask/bloglines/db/sitedb rep_process_message: msgv = 3 logv 12 gen = 12 eid 0, type page_req, LSN [0][0]
    2007-06-06 18:40:26.648561500 DBMSG: ERROR:: page_req: file 0 page 2124 to 2124
    2007-06-06 18:40:26.648562500 DBMSG: ERROR:: page_req: Open 0 via mpf_open
    2007-06-06 18:40:26.648563500 DBMSG: ERROR:: sendpages: file 0 page 2124 to 2124
    2007-06-06 18:40:26.649966500 DBMSG: ERROR:: /ask/bloglines/db/sitedb rep_process_message: msgv = 3 logv 12 gen = 12 eid 0, type page_req, LSN [0][0]
    2007-06-06 18:40:26.649968500 DBMSG: ERROR:: page_req: file 0 page 2124 to 2124
    2007-06-06 18:40:26.649970500 DBMSG: ERROR:: page_req: Open 0 via mpf_open
    2007-06-06 18:40:26.649971500 DBMSG: ERROR:: sendpages: file 0 page 2124 to 2124
    2007-06-06 18:40:26.651699500 DBMSG: ERROR:: sendpages: 2124, page lsn [516423][7302945]
    2007-06-06 18:40:26.651702500 DBMSG: ERROR:: bulk_msg: Copying LSN [640947][6755391] of 4152 bytes to 0x2afb3d801c
    2007-06-06 18:40:26.651704500 DBMSG: ERROR:: sendpages: 2124, lsn [640947][6755391]
    2007-06-06 18:40:26.651705500 DBMSG: ERROR:: send_bulk: Send 4164 (0x1044) bulk buffer bytes
    2007-06-06 18:40:26.651706500 DBMSG: ERROR:: /ask/bloglines/db/sitedb rep_send_message: msgv = 3 logv 12 gen = 12 eid 0, type bulk_page, LSN [640947][6755391] nobuf
    2007-06-06 18:40:26.652858500 DBMSG: ERROR:: sendpages: 2124, page lsn [516423][7302945]
    2007-06-06 18:40:26.652860500 DBMSG: ERROR:: bulk_msg: Copying LSN [640947][6755391] of 4152 bytes to 0x2afb2d701c
    2007-06-06 18:40:26.652861500 DBMSG: ERROR:: sendpages: 2124, lsn [640947][6755391]
    2007-06-06 18:40:26.652862500 DBMSG: ERROR:: send_bulk: Send 4164 (0x1044) bulk buffer bytes
    2007-06-06 18:40:26.652864500 DBMSG: ERROR:: /ask/bloglines/db/sitedb rep_send_message: msgv = 3 logv 12 gen = 12 eid 0, type bulk_page, LSN [640947][6755391] nobuf
    2007-06-06 18:40:38.951290500 1 28888 dbnet: 0,0: MSG: ** checkpoint start **
    2007-06-06 18:40:38.951321500 1 28888 dbnet: 0,0: MSG: ** checkpoint end **
    On the slave, we see this:
    2007-06-06 18:40:26.668636500 DBMSG: ERROR:: rep_bulk_page: rep_page ret 0
    2007-06-06 18:40:26.668637500 DBMSG: ERROR:: rep_bulk_page: Processing LSN [640947][6755391]
    2007-06-06 18:40:26.668644500 DBMSG: ERROR:: rep_bulk_page: p 0x2afb66c1fc ep 0x2afb671344 pgrec data 0x2afb66c1fc, size 4152 (0x1038)
    2007-06-06 18:40:26.668645500 DBMSG: ERROR:: PAGE: Received page 2254 from file 0
    2007-06-06 18:40:26.668658500 DBMSG: ERROR:: PAGE: Received duplicate page 2254 from file 0
    2007-06-06 18:40:26.668664500 DBMSG: ERROR:: rep_bulk_page: rep_page ret 0
    2007-06-06 18:40:26.668666500 DBMSG: ERROR:: rep_bulk_page: Processing LSN [640947][6755391]
    2007-06-06 18:40:26.668672500 DBMSG: ERROR:: rep_bulk_page: p 0x2afb66d240 ep 0x2afb671344 pgrec data 0x2afb66d240, size 4152 (0x1038)
    2007-06-06 18:40:26.668674500 DBMSG: ERROR:: PAGE: Received page 2255 from file 0
    2007-06-06 18:40:26.668686500 DBMSG: ERROR:: PAGE: Received duplicate page 2255 from file 0
    2007-06-06 18:40:26.668703500 DBMSG: ERROR:: rep_bulk_page: rep_page ret 0
    2007-06-06 18:40:26.668704500 DBMSG: ERROR:: rep_bulk_page: Processing LSN [640947][6755391]
    2007-06-06 18:40:26.668706500 DBMSG: ERROR:: rep_bulk_page: p 0x2afb66e284 ep 0x2afb671344 pgrec data 0x2afb66e284, size 4152 (0x1038)
    2007-06-06 18:40:26.668707500 DBMSG: ERROR:: PAGE: Received page 2256 from file 0
    2007-06-06 18:40:26.668714500 DBMSG: ERROR:: PAGE: Received duplicate page 2256 from file 0
    2007-06-06 18:40:26.668715500 DBMSG: ERROR:: rep_bulk_page: rep_page ret 0
    2007-06-06 18:40:26.668722500 DBMSG: ERROR:: rep_bulk_page: Processing LSN [640947][6755391]
    2007-06-06 18:40:26.668723500 DBMSG: ERROR:: rep_bulk_page: p 0x2afb66f2c8 ep 0x2afb671344 pgrec data 0x2afb66f2c8, size 4152 (0x1038)
    2007-06-06 18:40:26.668730500 DBMSG: ERROR:: PAGE: Received page 2257 from file 0
    2007-06-06 18:40:26.668743500 DBMSG: ERROR:: PAGE: Received duplicate page 2257 from file 0
    2007-06-06 18:40:26.668750500 DBMSG: ERROR:: rep_bulk_page: rep_page ret 0
    2007-06-06 18:40:26.668752500 DBMSG: ERROR:: rep_bulk_page: Processing LSN [640947][6755391]
    2007-06-06 18:40:26.668758500 DBMSG: ERROR:: rep_bulk_page: p 0x2afb67030c ep 0x2afb671344 pgrec data 0x2afb67030c, size 4152 (0x1038)
    2007-06-06 18:40:26.668760500 DBMSG: ERROR:: PAGE: Received page 2258 from file 0
    2007-06-06 18:40:26.668772500 DBMSG: ERROR:: PAGE: Received duplicate page 2258 from file 0
    2007-06-06 18:40:26.668779500 DBMSG: ERROR:: rep_bulk_page: rep_page ret 0
    2007-06-06 18:40:26.690980500 DBMSG: ERROR:: /ask/bloglines/db/sitedb-slave rep_process_message: msgv = 3 logv 12 gen = 12 eid 0, type bulk_page, LSN [640947][6755391]
    2007-06-06 18:40:26.690982500 DBMSG: ERROR:: rep_bulk_page: Processing LSN [640947][6755391]
    2007-06-06 18:40:26.690983500 DBMSG: ERROR:: rep_bulk_page: p 0x736584 ep 0x7375bc pgrec data 0x736584, size 4152 (0x1038)
    2007-06-06 18:40:26.690985500 DBMSG: ERROR:: PAGE: Received page 2124 from file 0
    2007-06-06 18:40:26.690986500 DBMSG: ERROR:: PAGE: Received duplicate page 2124 from file 0
    2007-06-06 18:40:26.690992500 DBMSG: ERROR:: rep_bulk_page: rep_page ret 0
    2007-06-06 18:40:36.289310500 DBMSG: ERROR:: election thread is exiting
    I have full log files if that could help, these are just the end of those.
    Any ideas? Thanks...
    -Paul

  • Do the windowserver log files get rotated?

    Hi,
    I have been looking at my /var/log directory and found the two windowserver related log files: windowserver.log and windowserver_last.log
    I am wondering if these get rotated on a regular basis. If so, how often? (Seeing that there is a windowserver_last.log gives me the impression that these files are rotated.)
    Does anyone know?
    Thanks,
    Steve
    Power Mac G5/2Ghz Dual   Mac OS X (10.4.2)  

    Hi Stephen,
       I agree with Michael. I could find no reference to "windowserver" in /etc or in the StartupItems directory that would explain its rotation. I have a few machines at work that I can check and it's my guess that the WindowServer simply creates a new one when it starts.
       Thus, if you shutdown regularly, the log file shouldn't get very big. I don't shutdown unless I have to so I added the windowserver.log to my own log rotation script. All you really have to do is to duplicate Apple's /etc/periodic/weekly/500.weekly file. Rename it with a new name and number and cut out everything but the log rotation code and the initial stuff that sets up the environment. Then simply put your own filenames into the rotation code. You can also move it to a different directory if you don't want it to run weekly.
       I went ahead and altered their code so that it uses a loop instead of seven lines that differ only by a number. However, I did that mostly because I enjoy scripting. Apple's code is simpler and a little faster.
    Gary
    ~~~~
       The alarm clock that is louder than God's own belongs to
       the roommate with the earliest class.
