ACC log files filling up my computer

I did not turn my computer off during the night, and when I started using it today I noticed there was no space at all left on the hard drive. It had gone from 30GB free to 0GB overnight.
I investigated what had happened and found 30GB of "ACC" log files from last night.
The contents of these log files were millions and millions of repetitions of this sequence:
04/22/15 07:07:43:276 | [INFO] | 19783 | ACC | ProductsAppsBL | handleTriggerNotification |  | ProductsAppsBL | 8412 | Received Trigger Notification from Server
04/22/15 07:07:43:277 | [INFO] | 19783 | ACC | ProductsAppsBL | BLDataHandlerCommand |  | ProductsAppsBL | 8412 | apps panel busy. pushing product Notification back to queue
04/22/15 07:07:43:277 | [INFO] | 19783 | ACC | ProductsAppsBL | handleTriggerNotification |  | ProductsAppsBL | 8412 | Received Trigger Notification from Server
04/22/15 07:07:43:277 | [INFO] | 19783 | ACC | ProductsAppsBL | BLDataHandlerCommand |  | ProductsAppsBL | 8412 | apps panel busy. pushing product Notification back to queue
04/22/15 07:07:43:277 | [INFO] | 19783 | ACC | ProductsAppsBL | handleTriggerNotification |  | ProductsAppsBL | 8412 | Received Trigger Notification from Server
04/22/15 07:07:43:277 | [INFO] | 19783 | ACC | ProductsAppsBL | BLDataHandlerCommand |  | ProductsAppsBL | 8412 | apps panel busy. pushing product Notification back to queue
04/22/15 07:07:43:277 | [INFO] | 19783 | ACC | ProductsAppsBL | handleTriggerNotification |  | ProductsAppsBL | 8412 | Received Trigger Notification from Server
04/22/15 07:07:43:277 | [INFO] | 19783 | ACC | ProductsAppsBL | BLDataHandlerCommand |  | ProductsAppsBL | 8412 | apps panel busy. pushing product Notification back to queue
I uninstalled the Creative Cloud updater, and the issue disappeared.
This must be a bug in the updater that Adobe should resolve, right?

Same here - seems like it. I count 8 log entries per millisecond:
04/22/15 12:31:41:934 | [INFO] | 90220 | ADCS | ProductsAppsBL | BLDataHandlerCommand |  | ProductsAppsBL | 14116 | apps panel busy. pushing product Notification back to queue
04/22/15 12:31:41:935 | [INFO] | 90220 | ADCS | ProductsAppsBL | handleTriggerNotification |  | ProductsAppsBL | 14116 | Received Trigger Notification from Server
04/22/15 12:31:41:935 | [INFO] | 90220 | ADCS | ProductsAppsBL | BLDataHandlerCommand |  | ProductsAppsBL | 14116 | apps panel busy. pushing product Notification back to queue
04/22/15 12:31:41:935 | [INFO] | 90220 | ADCS | ProductsAppsBL | handleTriggerNotification |  | ProductsAppsBL | 14116 | Received Trigger Notification from Server
04/22/15 12:31:41:935 | [INFO] | 90220 | ADCS | ProductsAppsBL | BLDataHandlerCommand |  | ProductsAppsBL | 14116 | apps panel busy. pushing product Notification back to queue
04/22/15 12:31:41:935 | [INFO] | 90220 | ADCS | ProductsAppsBL | handleTriggerNotification |  | ProductsAppsBL | 14116 | Received Trigger Notification from Server
04/22/15 12:31:41:935 | [INFO] | 90220 | ADCS | ProductsAppsBL | BLDataHandlerCommand |  | ProductsAppsBL | 14116 | apps panel busy. pushing product Notification back to queue
04/22/15 12:31:41:935 | [INFO] | 90220 | ADCS | ProductsAppsBL | handleTriggerNotification |  | ProductsAppsBL | 14116 | Received Trigger Notification from Server
04/22/15 12:31:41:935 | [INFO] | 90220 | ADCS | ProductsAppsBL | BLDataHandlerCommand |  | ProductsAppsBL | 14116 | apps panel busy. pushing product Notification back to queue
04/22/15 12:31:41:936 | [INFO] | 90220 | ADCS | ProductsAppsBL | handleTriggerNotification |  | ProductsAppsBL | 14116 | Received Trigger Notification from Server
It could have something to do with the facial detection in Lightroom CC: I started scanning for faces before going to bed, and by 5 in the morning my SSD was full (approx. 35GB of this). A quick way to check which log files are eating the space is sketched below.
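For anyone hitting this, here is a minimal Python sketch for finding out which log files are actually consuming the space before deleting anything. The log directory below is only an assumption (the ACC/CoreSync logs live in different places per platform), so point it at wherever your logs really are:

# Rank the largest *.log files under a log directory and total their size.
# LOG_DIR is an assumed location; adjust it for your system (e.g. ~/Library/Logs
# on OS X, %TEMP% or %LOCALAPPDATA% on Windows).
from pathlib import Path

LOG_DIR = Path.home() / "Library" / "Logs"   # assumed location

sizes = []
for path in LOG_DIR.rglob("*.log"):
    try:
        sizes.append((path.stat().st_size, path))
    except OSError:
        continue  # a file may disappear while we scan

sizes.sort(reverse=True)
total = sum(size for size, _ in sizes)
print(f"{len(sizes)} log files, {total / 2**30:.1f} GiB total")
for size, path in sizes[:10]:
    print(f"{size / 2**20:8.1f} MiB  {path}")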

Similar Messages

  • System.log files filling up in 10.6?

    Something is filling up my system log files, to the point that console.app will only show the last half hour of each file. It looks like something is trying to escape its sandbox, but I'm not exactly sure what... I'm getting MANY repeated messages that look like:
    Feb 3 00:29:57 Brians-mini sandboxd[16]: syslogd(15) deny file-read-data /private/var/log/asl/StoreData
    Feb 3 00:29:57 Brians-mini sandboxd[16]: syslogd(15) deny mach-task-name
    Feb 3 00:29:59: --- last message repeated 1 time ---
    Feb 3 00:29:57 Brians-mini sandboxd[16]: * process 16 exceeded 500 log message per second limit - remaining messages this second discarded *
    Feb 3 00:29:57 Brians-mini sandboxd[16]: syslogd(15) deny mach-task-name
    Has anyone seen this? (And does this belong in a different forum?)

    It sounds like someone may be trying to connect to your computer. When things are repeatedly denied, especially task-opening commands, it is usually a sign that something is trying to get in. This may or may not be malicious, depending on how you connect to the Internet. If you share a connection at home or even between residences (check with your ISP if you're not sure), it can be normal. See if it still happens after you physically disconnect the network. Also disconnect peripherals to rule them out as a cause.

  • Huge system.log file filled with over a million messages about NSATSTypeset

    Hi,
    I just happened to notice the system.log file on my MacBook Pro laptop was absolutely huge - over 1.2 GBytes in just one day. Almost all of it is messages like this:
    Mar 31 12:26:23 macbook quicklookd[228]: <NSATSTypesetter: 0x146b40>: Exception * table 0x1473caa0 has block 0x147b3040 rather than 0x147b4850 at index 7573 raised during typesetting layout manager <NSLayoutManager: 0x124140>\n 1 containers, text backing has 16461 characters\n selected character range {16461, 0} affinity: downstream granularity: character\n marked character range {16461, 0}\n Currently holding 16461 glyphs.\n Glyph tree contents: 16461 characters, 16461 glyphs, 4 nodes, 128 node bytes, 16384 storage bytes, 16512 total bytes, 1.00 bytes per character, 1.00 bytes per glyph\n Layout tree contents: 16461 characters, 16461 glyphs, 7573 laid glyphs, 207 laid line fragments, 3 nodes, 96 node bytes, 13656 storage bytes, 13752 total bytes, 0.84 bytes per character, 0.84 bytes per glyph, 36.58 laid glyphs per laid line fragment, 66.43 bytes per laid line fragment\n, glyph range {7573 0}. Ignoring...
    and there are over 1.3 million of these messages! There is a tiny bit of variation in some of the numbers (but not all). That is just a little more than the total number of files I have on the hard drive in the laptop, so it would appear that quicklookd is doing something to each and every file on the computer. Any idea why these messages suddenly appeared, and why so many? I only have about 7 versions of the system.log files and none of them are even close to this big, but the one thing I did do today that I have not done in a few weeks is reboot my laptop, because of another problem: the laptop screen did not wake this morning after being put to sleep last night (it was just black, but the computer was running and I could log in to it from another computer on the LAN it is attached to).
    Any ideas why this is happening, or is this something that always happens on a reboot/boot rather than on waking from sleep? Why would quicklookd be printing out so many of these messages that are almost exactly alike?
    I have only had this MacBook for a few weeks, so don't have a good feel for what is normal and what isn't yet.
    Thanks...
    -Bob

    Bob,
    Thanks for your further thoughts and the additional information. My guess is that Quick Look does its file processing independently of whether, or how recently, the computer has been rebooted. The NSATSTypesetter messages filling up the log file are almost certainly error messages and should not occur with normal operation of Quick Look. I suspect that your reboot doesn't directly have anything to do with this problem. (It might have contributed indirectly, in the sense that either whatever caused the need for the reboot, or the reboot process itself, corrupted a file, which in turn caused Quick Look to fail and generate all those error messages in the log file.)
    In the meantime, I may have a solution for this problem. This morning I rebooted in single user mode and ran AppleJack in manual mode so that I could tell it to clean up all user cache files. (I'd previously downloaded the AppleJack application from http://applejack.sourceforge.net/ . To boot in single user mode, hold the command and s keys at the startup chime. ... Run the five AppleJack maintenance tasks in order. The third task will give you the option to enter the numbers of users whose cache files will be cleaned. Do this cache cleaning for all users.) In the six hours since I ran AppleJack I've seen exactly two NSATSTypesetter error messages in /var/log/system.log . This compares with hundreds of thousands in the same period yesterday. I just set an iCal alarm to remind me to report back to this discussion thread in two weeks on this issue.
    Best,
    Chris.
    PS: Above you mention 7 log files. Are the older ones of the form system.log.0.bz2? If so, they have been compressed. Just because they are small doesn't necessarily mean there are not a lot of nearly identical error messages; uncompress them to check. I haven't tried this because large files are very inconvenient to work with on my old iBook. A short script for tallying the most frequent messages (including those inside the compressed rotations) is sketched below.
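    A minimal Python sketch of the tallying idea from the PS above, assuming the standard /var/log/system.log location and bzip2-compressed rotations; it simply counts lines per process name so you can see which daemon is flooding the log:

    # Tally which process writes the most lines to system.log, including the
    # rotated .bz2 archives. Paths are the usual OS X ones; adjust as needed.
    import bz2
    import glob
    import re
    from collections import Counter

    # syslog lines look like: "Mar 31 12:26:23 host process[pid]: message"
    LINE_RE = re.compile(r"^\w{3}\s+\d+ [\d:]+ \S+ ([^\[:]+)(?:\[\d+\])?:")

    counts = Counter()
    for path in ["/var/log/system.log"] + glob.glob("/var/log/system.log.*.bz2"):
        opener = bz2.open if path.endswith(".bz2") else open
        with opener(path, "rt", errors="replace") as fh:
            for line in fh:
                m = LINE_RE.match(line)
                if m:
                    counts[m.group(1)] += 1

    for process, n in counts.most_common(10):
        print(f"{n:10d}  {process}")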

  • Log files filling up rapidly

    I have noticed that each time I mount a smbfs share the wireless network fails after a couple of minutes, and has to be reset. The log files seem to be filling up rapidly.
    The daemon and sys log contain similar lines.
    Jan 2 21:51:34 TOSHIBA-User NetworkManager: <info> Supplicant state changed: 0
    Jan 2 21:51:40 TOSHIBA-User NetworkManager: <info> Supplicant state changed: 1
    Jan 2 21:53:48 TOSHIBA-User NetworkManager: <info> Supplicant state changed: 0
    Jan 2 21:53:54 TOSHIBA-User NetworkManager: <info> Supplicant state changed: 1
    Jan 2 21:56:03 TOSHIBA-User NetworkManager: <info> Supplicant state changed: 0
    Jan 2 21:56:09 TOSHIBA-User NetworkManager: <info> Supplicant state changed: 1
    Jan 2 21:58:10 TOSHIBA-User gdmgreeter[21619]: Gtk-CRITICAL: gtk_tree_view_get_selection: assertion `GTK_IS_TREE_VIEW (tree_view)' failed
    Jan 2 21:58:10 TOSHIBA-User gdmgreeter[21619]: Gtk-CRITICAL: gtk_tree_selection_unselect_all: assertion `GTK_IS_TREE_SELECTION (selection)' failed
    Jan 2 21:58:10 TOSHIBA-User gdmgreeter[21619]: Gtk-CRITICAL: gtk_tree_selection_select_iter: assertion `GTK_IS_TREE_SELECTION (selection)' failed
    Jan 2 21:58:10 TOSHIBA-User gdmgreeter[21619]: Gtk-CRITICAL: gtk_tree_view_scroll_to_cell: assertion `GTK_IS_TREE_VIEW (tree_view)' failed
    Jan 2 21:58:17 TOSHIBA-User NetworkManager: <info> Supplicant state changed: 0
    Jan 2 21:58:23 TOSHIBA-User NetworkManager: <info> Supplicant state changed: 1
    Jan 2 21:58:41 TOSHIBA-User NetworkManager: <info> Updating allowed wireless network lists.
    Jan 2 22:00:31 TOSHIBA-User NetworkManager: <info> Supplicant state changed: 0
    Jan 2 22:00:37 TOSHIBA-User NetworkManager: <info> Supplicant state changed: 1
    Jan 2 22:02:45 TOSHIBA-User NetworkManager: <info> Supplicant state changed: 0
    This might have some bearing on another issue already logged.
    Any ideas?
    malcolli

    After further reading and checking the log files, it appears that this may be two separate faults:
    NetworkManager and the GNOME Display Manager.
    Now to find out more.
    malcolli

  • [SOLVED] dvd drive not working, log file filled with error.

    Hi,
    I don't know if this is something to do with the latest kernel update or something else that may have occurred,
    but my DVD writer has stopped working: there is no light on the front, and pressing the button does not open the tray.
    It still works fine when I boot into Windows (dual boot with XP), so it's not a hardware issue.
    Also, my everything.log file is filling up with the following every two seconds!
    Mar  1 19:27:56 adax hda: status error: status=0x59 { DriveReady SeekComplete DataRequest Error }
    Mar  1 19:27:56 adax hda: status error: error=0x00 { }
    Mar  1 19:27:56 adax ide: failed opcode was: unknown
    Mar  1 19:27:56 adax hda: drive not ready for command
    Mar  1 19:27:56 adax hda: status error: status=0x59 { DriveReady SeekComplete DataRequest Error }
    Mar  1 19:27:56 adax hda: status error: error=0x00 { }
    Mar  1 19:27:56 adax ide: failed opcode was: unknown
    Mar  1 19:27:56 adax hda: drive not ready for command
    Mar  1 19:27:56 adax hda: status error: status=0x59 { DriveReady SeekComplete DataRequest Error }
    Mar  1 19:27:56 adax hda: status error: error=0x00 { }
    Mar  1 19:27:56 adax ide: failed opcode was: unknown
    Mar  1 19:27:56 adax hda: drive not ready for command
    Mar  1 19:27:56 adax hda: status error: status=0x59 { DriveReady SeekComplete DataRequest Error }
    Mar  1 19:27:56 adax hda: status error: error=0x00 { }
    Mar  1 19:27:56 adax ide: failed opcode was: unknown
    Mar  1 19:27:56 adax hda: drive not ready for command
    Can someone please help me before I run out of disk space for my logs!
    thanks,
    ad.
    Last edited by adax (2008-03-01 23:25:37)

    Hello,
    I strongly suspect that you need to change 'ide' to 'pata' in your /etc/mkinitcpio.conf
    #HOOKS="base udev autodetect ide scsi sata usb keymap filesystems"
    HOOKS="base udev autodetect pata scsi sata usb keymap filesystems"
    Then recreate with: 
    mkinitcpio -g /boot/kernel26.img
    Last edited by colnago (2008-03-01 21:21:37)

  • URGENT: SBS 2011 Exchange log files filling up drive in minutes!

    I need some help with ideas as to why Exchange is generating hundreds of log files every minute.
    The server had 0MB free on the C: drive, and come to find out, there were over 119,000 log files in the Exchange server folder (dated within the last 7 days). These files are named like E00001D046C.log. Oddly, the Exchange database store is not growing in size as you'd expect. Frantically searching for a way to free up space, I turned on circular logging and remounted the store (after freeing up enough space for it to mount!). Almost instantly, the 119,000+ log files disappeared, but now there are about 40 or so that are constantly being created/written/deleted, over and over and over.
    This is a small 5-person office with a 4GB database store. The 119,000 log files were taking up over 121GB. It's nice to have that space back, but something is in a loop, constantly creating log files as fast as the system can write them. (A small script to measure how fast the logs are being generated is sketched at the end of this thread.)
    I checked the queues...nothing. Where else can I look to see what might be causing this?
    Thanks for the help.
    PS: Windows Server Backup failed about the time this problem started, stating the backup drive is out of space. It's a 2TB drive, backing up 120GB of data. Isn't it supposed to delete old backups to make room for new ones?

    Hi,
    Regarding the current issue, please refer to the following article to see if it could help.
    Exchange log disk is full, Prevention and Remedies
    http://www.msexchange.org/articles-tutorials/exchange-server-2003/planning-architecture/Exchange-log-disk-full.html
    If you want to disable Exchange ActiveSync feature, please refer to the following article.
    Disable Exchange ActiveSync
    http://technet.microsoft.com/en-us/library/bb124502(v=exchg.141).aspx
    Best Regards,
    Andy Qi
    TechNet Community Support
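    As a follow-up to the question above: a minimal Python sketch for measuring how quickly the E00*.log files are being generated (a count per minute, grouped by file modification time). The directory below is an assumption; substitute the actual transaction log folder for your mailbox database.

    # Group Exchange transaction logs by the minute they were last written to,
    # to see how fast they are being generated. LOG_DIR is an assumed path.
    from collections import Counter
    from datetime import datetime
    from pathlib import Path

    LOG_DIR = Path(r"D:\ExchangeLogs")   # assumed location of the E00*.log files

    per_minute = Counter()
    total_bytes = 0
    for path in LOG_DIR.glob("E00*.log"):
        st = path.stat()
        total_bytes += st.st_size
        minute = datetime.fromtimestamp(st.st_mtime).strftime("%Y-%m-%d %H:%M")
        per_minute[minute] += 1

    print(f"{sum(per_minute.values())} log files, {total_bytes / 2**30:.1f} GiB")
    for minute, count in sorted(per_minute.items())[-15:]:   # last 15 minutes seen
        print(f"{minute}  {count} files")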

  • System Log file filled with arp message from teamed nics

    Hello,
    We are running a Win2k3 server with dual NICs teamed to one IP address. All of our Macs' system logs are being filled with the following message:
    Mar 28 12:48:58 Son-of-Spacedog kernel[0]: arp: 192.168.200.50 moved from 00:04:23:b9:00:4a to 00:04:23:b9:00:4b on en0
    This is repeated over and over. Is there any way to get the Macs to stop generating this message? Is the fix for this on the server side or on the Mac side? From what I have seen online in other forums, this started happening with Panther.

    There are actually several different types of NIC Teaming Modes:
    Adapter Fault Tolerance (AFT) - provides automatic redundancy for a server's network connection. If the primary adapter fails, the secondary adapter takes over. Adapter Fault Tolerance supports two to eight adapters per team. This teaming mode works with any hub or switch, and all team members must be connected to the same device.
    Switch Fault Tolerance (SFT) - provides a failover relationship between two adapters when each adapter is connected to a separate switch. Switch Fault Tolerance supports two adapters per team. This feature works with any switch. Spanning Tree Protocol (STP) must be enabled when you create a team in SFT mode.
    Adaptive Load Balancing (ALB) - provides load balancing for transmission traffic and adapter fault tolerance. You can also enable or disable receive load balancing (RLB) in ALB teams. This teaming mode works with any switch.
    Fast EtherChannel/Link Aggregation - provides increased transmission and reception throughput in a team of two to eight adapters. This mode also includes adapter fault tolerance and load balancing (only routed protocols). This requires a switch with Link Aggregation or FEC capability.
    Gigabit EtherChannel/Link Aggregation - is the gigabit extension of the FEC/Link Aggregation/802.3ad: static mode. All team members must operate at gigabit speed.
    IEEE 802.3ad: dynamic mode - creates one or more teams using dynamic Link Aggregation with same-speed or mixed-speed adapters. Like the static Link Aggregation modes, Dynamic 802.3ad teams increase transmission and reception throughput and provide fault tolerance. This mode requires a switch that fully supports the IEEE 802.3ad standard.
    Now, most likely, somewhere on the network a server is configured for ALB or AFT, as these are the most common methods of NIC teaming (sadly, the Xserves do not support them). They also do not require any particular type of switch, but they do require STP (Spanning Tree Protocol) to be turned off to function most effectively, or you will see errors on the network like the ones you are seeing in your log. Most managed switches come with STP turned on by default, so you will need to check ALL switches (as the errors will cascade throughout the network) to see if STP is turned on. Most people do not use STP any more (it basically allows ports on one switch to appear as on another switch), so see if you can turn it off. That should stop the error messages.
    -MacBoy in SLC
    PowerMac G5 Dual 2.0 GHz   Mac OS X (10.4.6)  

  • Windows 8.1 RTM, Windows Temp Filling up with MSI*.LOG files

    I have a machine that seems to be filling up its C: drive by placing a large number of 3MB .LOG files, all of the form MSI*.LOG, in the C:\Windows\Temp directory. I've tried researching, and the closest I found was KB 223300; however, this machine does not appear to have the registry setting referenced in that article, and the hotfix says it's not valid for this OS (Windows 8.1). I've never seen this before. Can someone help? I've completely turned off Windows Update on this machine and so far that seems to be working.
    Also, I couldn't figure out for a while where the disk space was going. Nothing was reporting anywhere close to the full disk utilization; I had to use ROBOCOPY /MIR /L to essentially get a full view of what's on the drive.

    I'm getting the same problem too, with 3MB MSI*.LOG files filling up C:\Windows\Temp. It puts so many on that it completely fills the C: drive. It does it randomly, sometimes going for weeks without a problem, sometimes doing it every couple of days, but it has done so since the OS was reinstalled from scratch on an SSD several months ago.
    It has been doing it pretty much since I did a complete fresh reinstall. I don't get the problem on any of my other machines, only this one, but this is my only 64-bit machine, so that may have something to do with it.
    I first discovered the problem while I was installing the basics for the first time: I did a clean OS install, ran the updates, installed Office, and then when I was installing Visual Studio it failed with an out-of-disk-space error. That's when I discovered that C:\WINDOWS\TEMP was full of log files.
    Unfortunately I've just deleted them again, so I can't upload one. But the last time I looked in one, there was something that pointed to Visual Studio possibly being the culprit, though I can't remember what it was. I uninstalled and reinstalled Visual Studio and thought that had fixed it, as it went for a couple of months without the problem, but it has done it three times in the last week.
    The files are about 3MB each. I'm new on the forums, so what would be the best way of uploading one so that someone might be able to have a look at it? (In the meantime, a quick way to see which product the logs belong to is sketched below.)
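    Following up on the question of identifying the culprit: a minimal Python sketch that scans the MSI*.LOG files in C:\Windows\Temp and pulls out product-name lines. The "Product:"/"ProductName" patterns are heuristics based on typical Windows Installer verbose logs rather than a guaranteed format, and the UTF-16 decode is a guess at the usual encoding.

    # Extract a product name from each MSI*.LOG in C:\Windows\Temp and count how
    # often each product appears, to spot which installer produces the flood.
    import re
    from collections import Counter
    from pathlib import Path

    TEMP = Path(r"C:\Windows\Temp")
    PATTERN = re.compile(r"Product(?:Name\s*=|:)\s*(.+)")   # heuristic match

    products = Counter()
    for path in TEMP.glob("MSI*.LOG"):
        raw = path.read_bytes()
        try:
            text = raw.decode("utf-16")          # MSI logs are often UTF-16
        except UnicodeError:
            text = raw.decode("latin-1", errors="replace")
        for line in text.splitlines():
            match = PATTERN.search(line)
            if match:
                products[match.group(1).strip()] += 1
                break                            # one product name per file is enough

    for name, count in products.most_common():
        print(f"{count:5d}  {name}")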

  • Problem with the log file path of the VeriStand Data Logging Control

    Hello everyone,
    My problem is that I use one computer as a gateway on the network. The gateway is connected to the PXI for real-time acquisition. I have another computer connected to the gateway to read the data from the PXI. I cannot log to my local hard drive when using the VeriStand Data Logging Control on that second computer; however, it can log to a hard drive located on the network. Also, I have no problem logging if the computer is the gateway itself.
    Regards,
    Kamal Bouamran

    Am I correct in assuming that you have a Logging Control connected to a remote gateway running on another computer and you want to access the log file on your local computer?

  • Set alert for log file free space

    We need to set an alert in case the log file fills up to a certain level (ST04 - Space usage - Total log size / Free space)
    Which parameter should we set in solman for this?
    Thanks in advance

    Set up your CCMS agents to monitor the log files:
    http://help.sap.com/saphelp_nw04/helpdata/en/65/f3156d443e744abe15dbe14e4e32b5/content.htm

  • Database Log File becomes very big, What's the best practice to handle it?

    The log file of my production database is getting very big and the hard disk is almost full. I am pretty new to SAP but familiar with SQL Server; can anybody give me advice on the best practice for handling this issue?
    Should I shrink the database?
    I know increasing the hard disk is needed for the long term.
    Thanks in advance.

    Hi Finke,
    Usually the log file fills up and grows huge due to not having regular transaction log backups. If your database is in FULL recovery mode, every transaction is logged in the transaction log file, and it gets cleared when you take a log backup. If it is a production system and you don't have regular transaction log backups, the problem is just sitting there waiting to explode when you need a point-in-time restore. Please check your backup/restore strategy.
    Follow these steps to get the transaction log file back into normal shape (a small script to check current log usage is sketched at the end of this post):
    1.) Take a transaction log backup.
    2.) Shrink the log file: DBCC SHRINKFILE('logfilename', 10240)
          The above command will shrink the file to 10 GB (a recommended size for high-transaction systems).
    Finke Xie wrote:
    > Should I Shrink the Database?
    "NEVER SHRINK DATA FILES"; shrink only the log file.
    3.) Schedule log backups every 15 minutes.
    Thanks
    Mush
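    A minimal sketch of how you might watch log usage while setting this up, assuming Python with pyodbc and a reachable SQL Server instance (the driver name and server below are placeholders). It simply runs DBCC SQLPERF(LOGSPACE) and prints how full each database's transaction log is:

    # Print transaction log size and percentage used for every database.
    # Connection details are placeholders; adjust driver/server/credentials.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=myserver;Trusted_Connection=yes;"
    )
    cursor = conn.cursor()
    cursor.execute("DBCC SQLPERF(LOGSPACE)")     # one row per database
    for db_name, log_size_mb, log_used_pct, _status in cursor.fetchall():
        print(f"{db_name:30s} {log_size_mb:10.1f} MB  {log_used_pct:5.1f}% used")

    If the "Log Space Used (%)" value keeps climbing between log backups, the backup schedule (or a long-running open transaction) is the first thing to investigate.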

  • Need to understand when redo log file content is written to datafiles

    Hi all,
    I have a question about when the contents of the redo log files are written to the datafiles.
    Supposing that the database is in NOARCHIVELOG mode and all redo log files are filled, the official Oracle database documentation says that "a filled redo log file is available after the changes recorded in it have been written to the datafiles", which would mean that we just need to have all the redo log files filled to "commit" changes to the database.
    Thanks for the help
    Edited by: rachid on Sep 26, 2012 5:05 PM

    rachid wrote:
    > the official oracle database documentation says that: a filled redo log file is available after the changes recorded in it have been written to the datafiles
    It helps if you include a URL to the page where you found this quote (if you were using the online html manuals).
    The wording is poor and should be modified to something like:
    "a filled online redo log file is available for re-use after all the data blocks that have been changed by change vectors recorded in the log file have been written to the data files"
    Remember if a data block that is NOT an undo block has been changed by a transaction, then an UNDO block has been changed at the same time, and both change vectors will be in the redo log file. The redo log file cannot, therefore, be re-used until the data block and the associated UNDO block have been written to disc. The change to the data block can thus be rolled back (uncommitted changes can be written to data files) because the UNDO is also available on disc if needed.
    If you find the manuals too fragmented to follow you may find that my book, Oracle Core, offers a narrative description that is easier to comprehend.
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    Author: Oracle Core

  • How open and see Log File in Active Directory

    Hello friends ^-^
    How can I open the Active Directory log files and see the data in them?
    Can I export these logs?
    Thanks for the help.

    And a definition of edbxxxxx.log, for completeness:
    These are auxiliary transaction logs used to store changes if the main Edb.log file gets full before it can be flushed to Ntds.dit. The xxxxx stands for a sequential number in hex. When the Edb.log file fills up, an Edbtemp.log file is opened. The original Edb.log file is renamed to Edb00001.log, Edbtemp.log is renamed to Edb.log, and the process starts over again. Excess log files are deleted after they have been committed. You may see more than one Edbxxxxx.log file if a busy domain controller has many updates pending.

  • Database Log File getting full by Reindex Job

    Hey guys
    I have an issue with one of my databases during the Reindex job. Most of the time, the log file is 99% free, but during the Reindex job the log file fills up and runs out of space, so the Reindex job fails and I also get errors from the DB due to log file space. Any suggestions?

    Please note that changing to BULK_LOGGED recovery will make you lose point-in-time recovery: ALTER INDEX REBUILD would be minimally logged, and for the period this job is running you lose point-in-time recovery, so plan your steps accordingly. You also need to take a log backup after changing back to FULL recovery.
    I guess Ola's script would suffice; if not, you would have to increase the space on the drive where the log file resides. Index rebuild is fully logged in FULL recovery.

  • Log Groups/Log Files Oracle8i

    Sorry for the basic question but I need clarification help.
    The Oracle8i DBA Certification Exam Guide indicates:
    1. You must have at least two Log Groups, each with at least one Log File. For recoverability you must have more than one Log File in each Log Group (and obviously the same number in each Log Group).
    2. LGWR writes to only one Log Group at a time. If more than one Log File is in the Log Group, LGWR will write to each at the same time. For recoverability it is advised that each Log File within a Log Group should reside on a separate disk.
    3. As the Log File fills, a Checkpoint occurs and LGWR will then begin writing to the next Log Group as previously described.
    PROBLEM:
    My current DBA instructor maintains that the understanding above is incorrect. He maintains:
    1. LGWR will write to the first Log File in each Log Group simultaneously (for recoverability).
    2. As the first Log File is filled, a Checkpoint occurs, and LGWR will write to the second (etc.) Log File in each Log Group.
    3. All Log Files within one Log Group must reside on the same disk (syntax problem if you don't).
    4. Each Log Group should be placed on a different disk for recoverability.
    Who is correct? Did I misunderstand the DBA Certification Exam Guide?
    Thanks for any help.

    Hi,
    When you take an online backup including logs (which is the default option for online backups in 700 systems), this means that a backup is taken and all the logs that are written to during the time of the backup are also included in the backup. This means that you have the backup image and the logs to enable a restore and a roll-forward through the logs to a consistent point in time. If you took an online backup without including the logs, you would need to ensure you have the logs from the time of that backup to be able to restore that image and roll forward to ensure consistency.
    One major point: never ever delete or touch active log files in /db2/NMS/log_dir, as this will result in the system becoming inconsistent due to the logging mechanism being interrupted. If such a case does occur, please contact support.
    Regards,
    Paul
