6 Log Files every minute

I have a networked HP All In One Office Jet Pro L7500 running on Windows XP.
I removed all HP software except the drivers. I was constantly getting messages from the HP monitoring software that my printer was connected, disconnected, connected, on and on and on.
In addition, my temp folder was accumulating six HP log files every minute. I hoped removing everything but the drivers would fix this, but it did not.
I checked to see if there were updated drivers and I seem to have the latest.
How can this be fixed? My computer freezes up if I do not delete these files every few hours, and while the files are being written it slows to a halt. The files are typically 1 KB, but every tenth file or so is around 1 MB.
HELP!!
My next step is removing the drivers and buying a better printer without this issue, and trust me, it will not be an HP if that is the case.

This is worrying to me because
A. nobody has answered yet
B. I have the same issue with an HP Photosmart D7400
I love the printer and the network capability, but it goes crazy with the log files, to the point where it will eventually clog the hard drive on my Windows XP machine.
HP should reconsider the banner across the top of this page saying "Join the Conversation" if the chances of anyone doing so are slim.

Similar Messages

  • Forcing log switch every minute.

    Hi,
    I want to force a log switch every one minute; how can I do it?
    What should be the value of fast_start_mttr_target?
    Does a checkpoint force a log switch?
    Do I need to only reduce the size of the redo logs to a small size?
    How can I make sure that a log switch will happen after a particular time period, e.g. 1 minute or 2 minutes?
    I want to force a log switch every minute because I want to send the archived redo logs to the standby database so that no more than one minute of database changes is lost. I am using 10g R2 on Windows 2003 Server.
    I am unable to find a solution. Any help?

    Hi,
    "I want to force a log switch every one minute, how can I do it?"
    Yes, with the ARCHIVE_LAG_TARGET parameter:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/onlineredo.htm#sthref934
    "What should be the value of fast_start_mttr_target?"
    FAST_START_MTTR_TARGET governs incremental (rather than normal) checkpointing, the "fast instance recovery" feature introduced in Oracle 8 and exposed as an initialization parameter in 9i. With it set, the database writer tries to keep the number of dirty blocks in the buffer cache low enough to guarantee rapid recovery in the event of a crash, and it frequently updates the file headers to record that there are no dirty buffers older than a particular SCN.
    http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmtunin004.htm#sthref1110
    "Does a checkpoint force a log switch?"
    No: a log switch forces a checkpoint, but a checkpoint never forces a log switch.
    "Do I need to only reduce the size of the redo logs to a small size?"
    That depends on your SLA and how much data you can afford to risk, but small logs will affect your database performance. The usual recommendation is to size the logs so that a switch occurs after roughly 20 minutes of filling; it is a trade-off between risk and performance.
    "How can I make sure that a log switch will happen after a particular time period, e.g. 1 minute or 2 minutes? I want to force a log switch every minute because I want to send the archived redo logs to the standby database so that no more than one minute of database changes is lost. I am using 10g R2 on Windows 2003 Server. I am unable to find a solution. Any help?"
    Again, ARCHIVE_LAG_TARGET:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/onlineredo.htm#sthref934
    Khurram
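
    As a rough sketch of the ARCHIVE_LAG_TARGET suggestion above, the parameter can also be set from JDBC; the connection details below are hypothetical, and the same ALTER SYSTEM statement can of course be run directly in SQL*Plus:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class ForceLogSwitchInterval {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection details; any session with ALTER SYSTEM privilege will do.
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:oracle:thin:@dbhost:1521:ORCL", "system", "password");
                 Statement stmt = conn.createStatement()) {
                // Force a log switch whenever the current log has been active for 60 seconds.
                stmt.execute("ALTER SYSTEM SET ARCHIVE_LAG_TARGET = 60 SCOPE=BOTH");
            }
        }
    }

    Note that SCOPE=BOTH assumes the instance was started with an spfile.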

  • SQL Server log generating minidump files every minute

    Good afternoon.
    Our SQL server recently started generating minidump files almost every minute.
    I'm waiting on the final DBCC CHECKDB of the last database, but all the others have returned without error.
    I've run the dump files through the debugging utility and have the following stack dump, which I'm not sure how to interpret.
    At this point I'm not quite sure how to proceed, pending the results from DBCC CHECKDB.
    There are no other obvious errors in the event or system logs to cross-reference.
    Thanks
    0:027> kC 1000
    Call Site
    KERNELBASE!RaiseException
    sqlservr!CDmpDump::Dump
    sqlservr!SQLDumperLibraryInvoke
    sqlservr!CImageHelper::DoMiniDump
    sqlservr!stackTrace
    sqlservr!stackTraceCallBack
    sqlservr!ex_raise2
    sqlservr!ex_raise
    sqlservr!RaiseInconsistencyError
    sqlservr!Page::DeleteRow
    sqlservr!PageRef::ExpungeGhostRow
    sqlservr!IndexPageRef::ExpungeGhost
    sqlservr!CleanVersionsOnBTreePage
    sqlservr!IndexDataSetSession::CleanupVersionsOnPage
    sqlservr!GhostExorciser::CleanupPage
    sqlservr!TaskGhostCleanup::ProcessTskPkt
    sqlservr!GhostRecordCleanupTask
    sqlservr!CGhostCleanupTask::ProcessTskPkt
    sqlservr!TaskReqPktTimer::ExecuteTask
    sqlservr!OnDemandTaskContext::ProcessTskPkt
    sqlservr!SystemTaskEntryPoint
    sqlservr!OnDemandTaskContext::FuncEntryPoint
    sqlservr!SOS_Task::Param::Execute
    sqlservr!SOS_Scheduler::RunTask
    sqlservr!SOS_Scheduler::ProcessTasks
    sqlservr!SchedulerManager::WorkerEntryPoint
    sqlservr!SystemThread::RunWorker
    sqlservr!SystemThreadDispatcher::ProcessWorker
    sqlservr!SchedulerManager::ThreadEntryPoint
    msvcr80!_callthreadstartex
    0x0
    0x0
    0x0

    I agree with Erland: run DBCC CHECKDB on the database. Most likely there is corruption in the database.
    Balmukund Lakhani
    Please mark solved if I've answered your question, vote for it as helpful to help other users find a solution quicker
    This posting is provided "AS IS" with no warranties, and confers no rights.
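
    Following the DBCC CHECKDB suggestion, here is a minimal sketch of running the check over JDBC; the database name and connection string are hypothetical, and most people would simply run the command in Management Studio:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class CheckDb {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection details for the Microsoft JDBC driver.
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:sqlserver://dbhost:1433;databaseName=master;user=sa;password=secret");
                 Statement stmt = conn.createStatement()) {
                // Full integrity check; suppress informational messages, report all errors.
                stmt.execute("DBCC CHECKDB ('MyDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS");
            }
        }
    }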

  • [WRT400N] Several security log entries - every minute

    I have several entries in the security log that I have no idea where they are coming from. They are all blank like this:
    Incorrect User login : Username is , Password is From 192.168.1.101=> Wed Mar 17 17:42:48 2010
    Incorrect User login : Username is , Password is From 192.168.1.101=> Wed Mar 17 17:43:48 2010
    Incorrect User login : Username is , Password is From 192.168.1.101=> Wed Mar 17 17:44:48 2010
    The only exceptions are a few from the first day these started appearing:
    Incorrect User login : Username is badcred, Password is himom From 192.168.1.101=> Tue Mar 9 15:04:12 2010
    Incorrect User login : Username is admin, Password is From 192.168.1.101=> Tue Mar 9 15:04:12 2010
    Incorrect User login : Username is admin, Password is admin From 192.168.1.101=> Tue Mar 9 15:04:12 2010
    Incorrect User login : Username is admin, Password is 1234 From 192.168.1.101=> Tue Mar 9 15:04:12 2010
    Incorrect User login : Username is admin, Password is password From 192.168.1.101=> Tue Mar 9 15:04:12 2010
    Incorrect User login : Username is , Password is admin From 192.168.1.101=> Tue Mar 9 15:04:12 2010
    Incorrect User login : Username is , Password is From 192.168.1.101=> Tue Mar 9 15:04:12 2010
    Incorrect User login : Username is admin, Password is motorola From 192.168.1.101=> Tue Mar 9 15:04:12 2010
    Incorrect User login : Username is root, Password is From 192.168.1.101=> Tue Mar 9 15:04:12 2010
    Incorrect User login : Username is , Password is password From 192.168.1.101=> Tue Mar 9 15:04:12 2010
    Incorrect User login : Username is root, Password is !root From 192.168.1.101=> Tue Mar 9 15:04:12 2010
    Incorrect User login : Username is Admin, Password is Admin From 192.168.1.101=> Tue Mar 9 15:04:12 2010
    Incorrect User login : Username is Admin, Password is From 192.168.1.101=> Tue Mar 9 15:04:12 2010
    Incorrect User login : Username is admin, Password is junxion From 192.168.1.101=> Tue Mar 9 15:04:12 2010
    Incorrect User login : Username is admin, Password is cableroot From 192.168.1.101=> Tue Mar 9 15:04:12 2010
    Since I am getting blank entries every minute, I'm curious what could be causing this. The first entries obviously show someone trying to get in with various passwords.

    Well, if 192.168.1.101 is your computer, it looks as if you have some malware running on it which tries to hack into your router... Or did you try those passwords at any time?

  • Create a new log file every hour

    I have a program which receives measurement data and writes it to a TDMS file. However, I want to create a new TDMS file every hour. When I put a while loop in my subVI which creates the new file, the program doesn't run. I have attached the subVI. Can anyone help me out here?
    Cheers
    Attachments:
    ConfigTDMS (SubVI)_loopTDMS.vi ‏16 KB

    One trick for doing this is to prepend the file name with YYJJJHH (two-digit year, day of year, hour); see the example below.
    Then just check whether the file exists (if not, create it and add the header) when you write to the file.
    This also generates the reports in "alphabetical" order, so Windows Explorer presents them chronologically.
    Jeff
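
    The thread is about LabVIEW, but as a sketch of the same naming trick in a general-purpose language, here it is in Java (the file suffix and header line are placeholders):

    import java.io.File;
    import java.io.FileWriter;
    import java.io.IOException;
    import java.time.LocalDateTime;
    import java.time.format.DateTimeFormatter;

    public class HourlyLog {
        // yy = two-digit year, DDD = day of year (the "JJJ" part), HH = hour of day
        private static final DateTimeFormatter STAMP = DateTimeFormatter.ofPattern("yyDDDHH");

        public static void write(String line) throws IOException {
            File file = new File(STAMP.format(LocalDateTime.now()) + "_data.log");
            boolean isNew = !file.exists();
            try (FileWriter w = new FileWriter(file, true)) { // append mode
                if (isNew) w.write("timestamp,value\n"); // header only once per new file
                w.write(line + "\n");
            }
        }
    }

    Because the stamp changes once an hour, a new file (with its header) starts automatically at the top of each hour, with no loop or extra state needed.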

  • Is there any harm in switching the production log file every minute

    Is there any harm in switching the production log file every minute? We are trying to build a replication process: we are planning to switch the redo logs every minute, then FTP them and apply them for replication to achieve a near-real-time scenario.

    Is there a reason that you're not using DataGuard and/or Streams here? It would seem a lot easier to use the tools Oracle provides than to roll your own solution...
    Switching the log file every minute may have some negative performance implications in your environment.
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC
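
    For completeness, a hedged sketch of the roll-your-own approach the poster describes, scheduling ALTER SYSTEM SWITCH LOGFILE once a minute over JDBC; the connection details are hypothetical, and Justin's point about Data Guard being the easier route still stands:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class MinuteLogSwitcher {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection details; requires ALTER SYSTEM privilege.
            Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@prodhost:1521:PROD", "system", "password");
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            scheduler.scheduleAtFixedRate(() -> {
                try (Statement stmt = conn.createStatement()) {
                    // Force the current redo log to archive so it can be shipped.
                    stmt.execute("ALTER SYSTEM SWITCH LOGFILE");
                } catch (Exception e) {
                    e.printStackTrace(); // in production, alert rather than swallow
                }
            }, 1, 1, TimeUnit.MINUTES);
        }
    }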

  • URGENT: SBS 2011 Exchange log files filling up drive in minutes!

    I need some help with ideas as to why Exchange is generating hundreds of log files every minute.
    The server had 0MB free on the C: drive, and come to find out, there were over 119,000 log files in the Exchange server folder (dated within the last 7 days).  These files are named like E00001D046C.log.  Oddly, the Exchange database store
    is not growing in size as you'd expect.  Frantically searching for a way to free up space, I turned on circular logging and remounted the store (after freeing up enough space for it to mount!).  Almost instantly, the 119,000+ log files disappeared,
    but now there are about 40 or so that are constantly being created/written/deleted, over and over and over.
    This is a small 5 person office with a 4GB database store.  The 119,000 log files were taking up over 121GB.  It's nice to have that space back, but something is in a loop, constantly creating log files as fast as the system can write them.
    I checked the queues...nothing.  Where else can I look to see what might be causing this?
    Thanks for the help.
    P.S. Windows Server Backup failed around the time this problem started, stating the backup drive is out of space. It's a 2TB drive backing up 120GB of data. Isn't it supposed to delete old backups to make room for new ones?

    Hi,
    Regarding the current issue, please refer to the following article to see if it could help.
    Exchange log disk is full, Prevention and Remedies
    http://www.msexchange.org/articles-tutorials/exchange-server-2003/planning-architecture/Exchange-log-disk-full.html
    If you want to disable Exchange ActiveSync feature, please refer to the following article.
    Disable Exchange ActiveSync
    http://technet.microsoft.com/en-us/library/bb124502(v=exchg.141).aspx
    Best Regards,
    Andy Qi
    TechNet Community Support

  • Exchange 2010 personal archive database massive log file generation

    Exchange Server 2010 SP3 + Update Rollup 4
    Windows Server 2008 R2, all updates
    VMware ESXi 5.5
    Server config: 2 x Xeon Quad Core 2.20GHz, 16GB RAM
    We recently started using personal archives. I created a database for this purpose ("Archive Mailboxes") on the same datastore as our live mailbox database ("Live Mailboxes"). It works great, except that mailbox maintenance generates massive amounts of log files, over 220GB per day on average. I need to know why. The Live Mailbox database generates around 70GB of log files every day. The database sizes are: Live = 159.9GB, Archive = 196.8GB. Everything appears to be working fine; there are no Error events related to archiving. MSExchangeMailboxAssistant warning events (event ID 10025) are logged every day. I have moved those mailboxes back and forth to temp databases (both Live and Archive mailboxes) and the 10025 events have not stopped, so I'm reasonably certain there is no corruption. Even if there were, it still doesn't make sense to me that over 100 log files are generated every single minute of the day for the Archive store. And it's not that the database isn't being fully backed up; it is, every day.
    Do I need to disable the 24x7 option for mailbox maintenance to stop this massive log file generation? Should I disable mailbox maintenance altogether for the Archive store? Should I enable circular logging for the Archive store (would prefer to NOT do this,
    though I am 100% certain we have great backups)? It appears to me that mailbox maintenance on the Live store takes around 12 hours to run so I'm not sure it needs the 24x7 option.
    This is perplexing. Need to find a solution. Backup storage space is being rapidly consumed.

    I'm sure it will be fine for maintenance to run only on weekends so I'll do that.
    We use Veeam B&R Enterprise 7.0.0.833. We do not run incremental backups during the day but probably could if necessary. All this is fine and dandy but it still doesn't explain why this process generates so many logs. There are a lot of posts around
    the internet from people with the same issue so it would be nice to hear something from Microsoft, even if this is expected behavior.
    Thank you for the suggestions!

  • Setting up a log file monitor to alert on inactivity for a set amount of time

    I have set up a number of log file monitors to alert when certain conditions apply, such as the word "ERROR" or "exception".  Now I have a request to set up an alert if the log file has not changed for 20 minutes.  I have been
    searching and have not found any information on how or if this can be done.   Anyone???
    I am running Operations Manager 2012 SP1
    The log files are simple text files.

    Hi!
    You could create a timer reset monitor that reads the log file every 19 minutes for a wildcard pattern (everything matches) and configure the successful search as healthy. Further, you have to configure the timer reset to 20 minutes and set the timer reset state to unhealthy (warning/critical).
    Keep in mind that SCOM reads from the last line of the previous run every time. If your file rotates (based on a schedule or size), SCOM will not read the new lines until the latest line is reached. For more information refer to
    http://www.systemcenterrocks.com/2011/06/log-file-monitoring.html
    HTH, Patrick
    Please 'Propose/Mark as answer' if this post solved your problem. http://www.syliance.com | http://www.systemcenterrocks.com

  • Reading and Writing to a Log file

    Hello,
    I'm writing a class that will write a user id to a log file every time they click on a particular button. However, I don't want a new line every time a user clicks the button; I'd like to find their id in the log, increment a counter, and write that back to the log in place of the previous entry.
    For example, User A clicks on the button for the first time so I write in the log:
    User A 1
    Next time they come along, I want to see if User A exists in the log - which they do - add 1 to the counter and replace User A 1 with User A 2. So the log will now say:
    User A 2
    I was thinking of writing to the log, reading the log back into a HashMap, and then writing the HashMap out every time. That seems like a rather inefficient solution. Is there anything else I can do?
    Thanks!

    Hi,
    counters are a standard topic. Many solutions are to be found in textbooks. Here is one of them (Hunter & Crawford, Java Servlet Programming):
    // requires java.io.* and java.util.StringTokenizer
    String activeUser = ... ; // the user who just clicked the button
    try {
      FileReader fileReader = new FileReader( "mylog" );
      BufferedReader bufferedReader = new BufferedReader( fileReader );
      FileWriter fileWriter = new FileWriter( "mylog.new" );
      BufferedWriter bufferedWriter = new BufferedWriter( fileWriter );
      String line;
      String user;
      String count;
      boolean found = false;
      while( (line = bufferedReader.readLine()) != null ) {
        StringTokenizer tokenizer = new StringTokenizer( line );
        if( tokenizer.countTokens() != 2 ) continue; // skip bogus lines
        user = tokenizer.nextToken();
        if( activeUser.equals( user ) ) {
          count = tokenizer.nextToken();
          // replace "user N" with "user N+1"
          bufferedWriter.write( user + " " + (Integer.parseInt( count ) + 1) + "\n" );
          found = true;
        } else {
          bufferedWriter.write( line + "\n" ); // copy other users' lines unchanged
        }
      }
      if( !found ) // first click for this user
        bufferedWriter.write( activeUser + " 1\n" );
      bufferedWriter.close();
      bufferedReader.close();
      File oldLog = new File( "mylog" );
      File newLog = new File( "mylog.new" );
      oldLog.delete(); // renameTo can fail if the target still exists
      newLog.renameTo( oldLog );
    } catch( Exception e ) {
      // do whatever is appropriate
    }
    Have fun,
    Klaus
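
    Since the original poster was already considering a HashMap, here is a minimal alternative sketch using java.util.Properties, which handles the parsing and rewriting for you; the file name is hypothetical, and this assumes the counts file comfortably fits in memory:

    import java.io.*;
    import java.util.Properties;

    public class ClickCounter {
        // Increments the per-user click count kept in a small properties file.
        public static synchronized void recordClick(String userId) throws IOException {
            File file = new File("clicks.properties"); // hypothetical file name
            Properties counts = new Properties();
            if (file.exists()) {
                try (InputStream in = new FileInputStream(file)) {
                    counts.load(in); // read existing userId=count pairs
                }
            }
            int current = Integer.parseInt(counts.getProperty(userId, "0"));
            counts.setProperty(userId, String.valueOf(current + 1));
            try (OutputStream out = new FileOutputStream(file)) {
                counts.store(out, "per-user click counts"); // rewrite the whole file
            }
        }
    }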

  • Can anyone make sense of these log files?

    Hey Guys,
    I'm getting ~100MB log files every day in private/var/log/DiagnosticMessages. Mostly it seems to be something resembling the following over and over and over again:
    {And here is where the log file gets eaten every time I try to post it - mostly messages from com.apple.message.signature, com.apple.message.domain0, my-Computer-4mDNSResponder, com.apple.message.uuid, and com.apple.mDNSResponder.autotunnel.domainstatus}
    Can anyone help based on the above? Is there any way to post log code with lots of weird binary characters without it getting eaten? Or maybe a screenshot of the log code?
    These logs do not appear when I am out of town, which makes me think that it's one of my Airport Extremes, but I've done factory restores on both of them and the logs are still being generated. I've also had to reinstall my OS due to an unrelated issue and the logs are still being generated.
    Any ideas of what might be generating these giant logs would be really helpful. Thanks!

    I see seven logs in private/var/log, which IS normal. There isn't anything repeating in those beyond the aforementioned Epson thing, which was not what was causing the logs to be generated in private/var/log/DiagnosticMessages. The Epson errors in system.log and its daily predecessors stopped when I got rid of a bunch of Epson cruft, but the giant log files in the DiagnosticMessages logs continue.
    The private/var/log/DiagnosticMessages logs don't seem to be clearing out - I had them going all the way back to when I first installed Snow Leopard before I reinstalled the OS recently (the reinstall wiped out the old logs, obviously). This was actually how I discovered this weird logging problem in the first place - I couldn't figure out what was taking up so much space on my drive and I ran DiskUtilityX to find I had 10 gigs of log files in the DiagnosticMessages folder dating back from my move on 7/3.
    Everything before my move was well under 1MB daily. Everything after my move (and when my computer wasn't staying somewhere besides my own house) was between 75-100MB daily.
    Even after I reinstalled my OS on 2/16, the private/var/log/DiagnosticMessages logs are not clearing out - until I ran @LincDavis's terminal commands, I had logs going back to that date, which was obviously more than 7 days ago.
    Does any of that detail help you guys in trying to pinpoint what's happening here?

  • Throttling a file adapter to consume a max number of files per minute

    Is there a way to design a file adapter to throttle its processing bandwidth.
    A simple use case scenario is describes as follows;
    A file adapter can only consume a maximum of 5 files per minute. The producer's average throughput is 3 files per minute, but during peak times it can send 100 files per minute. The peak times occur during end-of-year or quarterly accounting periods. If the consumer consumes more than 5 files per minute, the integrity of the environment and data is compromised.
    The SLA for the adapter is to :
    - Each file will be processed within 2 seconds.
    - Maximum File Transactions Per minute is 5
    An example is as follows.
    The producer sends 20 files to its staging directory within a minute. The consumer only processes 5 of these files in the first minute, sleeps then wakes up to consume the next 5 files in the second minute. This process is repeated until all files are processed.
    The producer can send another batch of files when ever it likes. So in the second minute the producer can send another 70 files. The consumer will throttle the files so it only processes 5 of these files every minute.

    Hi,
    If you have the polling frequency set to 2 seconds, then controlling it to read only five files a minute is difficult. You could change the polling frequency to 12 seconds, or schedule a BPEL process every minute and use a synchronous read operation, looping 5 times to read 5 files.
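
    As a sketch of the throttling idea itself, outside any particular adapter framework and with a hypothetical staging directory, a consumer that wakes once a minute and takes at most five of the oldest files could look like this:

    import java.io.File;
    import java.util.Arrays;
    import java.util.Comparator;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class ThrottledConsumer {
        private static final int MAX_FILES_PER_MINUTE = 5;

        public static void main(String[] args) {
            File staging = new File("/staging"); // hypothetical staging directory
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            // Wake up once a minute and process at most five files, oldest first;
            // anything beyond the limit simply waits for the next minute.
            scheduler.scheduleAtFixedRate(() -> {
                File[] files = staging.listFiles(File::isFile);
                if (files == null) return;
                Arrays.sort(files, Comparator.comparingLong(File::lastModified));
                for (int i = 0; i < Math.min(MAX_FILES_PER_MINUTE, files.length); i++) {
                    process(files[i]);
                    files[i].delete(); // remove from staging once consumed
                }
            }, 0, 1, TimeUnit.MINUTES);
        }

        private static void process(File f) {
            System.out.println("processing " + f.getName()); // stand-in for real work
        }
    }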

  • Standby log files

    I want to convert our old script based primary /standby database into a dataguard config using the LGWR as log transport.
    I already have old log files on the standby database, but the data in them is from 2004. Not entirely interesting since the database gets recovered from the arch log files every night.
    Point is, can I use these as standby log files, or do I have to (somehow) drop them and re-create new standby logfiles? I can't drop them anyway: when I try, I get "ORA-01624: log 1 needed for crash recovery". (Like h*ll, since the data is older than Noah.)
    Will these just get re-written?

    Note 219344.1 on Metalink gives "Usage, Benefits and Limitations of Standby Redo Logs (SRL)".
    Standby redo logs are only supported for physical standby databases in Oracle 9i, and for logical standby databases as well from 10g. Standby redo logs are only used if you have the LGWR activated for archival to the remote standby database.
    The great advantage of standby redo logs is that every entry written into the online redo logs of the primary database is transferred to the standby site and written into the standby redo logs at the same time; therefore, you reduce the probability of data loss on the standby database.
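
    If the old groups do have to be dropped and re-created, here is a hedged sketch of the statements involved; the group number, file path and connection details are all hypothetical, and the standby must not be in managed recovery when you run them:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;
    import java.util.Properties;

    public class RecreateStandbyLogs {
        public static void main(String[] args) throws Exception {
            // Hypothetical SYSDBA connection to the standby.
            Properties props = new Properties();
            props.put("user", "sys");
            props.put("password", "password");
            props.put("internal_logon", "sysdba");
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:oracle:thin:@standbyhost:1521:STBY", props);
                 Statement stmt = conn.createStatement()) {
                // Drop the stale group, then add a fresh standby redo log in its place.
                stmt.execute("ALTER DATABASE DROP STANDBY LOGFILE GROUP 4");
                stmt.execute("ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 "
                           + "('/u01/oradata/stby/srl04.log') SIZE 50M");
            }
        }
    }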

  • Any ideas for creating an app to read a log file when it is updated?

    Afternoon all,
    I have been asked to write a java app which will read the contents of the server log file every time the log file is updated by the server. The app will be deployed onto WebSphere Application Server.
    Can anyone point me in the right direction? I have never written anything like this before and I don't know where to start. Any help will be much appreciated.
    Thanks in advance,
    A.

    alex@work wrote:
    "I agree with most of what you've said but unfortunately I don't have a say in what the company wants. However, I am interested in the appender idea; perhaps they may go for that if I suggest it."
    I'd say it'll take you a day to read up about Log4J and how to write basic appenders, and another day to write your own appender for this problem. Compare that to the effort of writing something to poll a log file, re-read it constantly and update another file, operations which will get slower and slower as they go along. That's a fair amount more code than a single appender would be. There's how to sell it to your company.
    "Can you give me a brief overview of how it works?"
    Log4J uses objects called appenders, which take logging info - generated by your container - and do something with it. It ships with some appenders already in it, for writing to stdout, files, sockets and databases. You can also write your own appenders that do something more than these standard ones do. You write logging code in your application - in this case, your container already does this so you don't have to - and the configuration of Log4J decides what happens to those logging messages. That's what you're interested in. You could write an appender - a simple class - that takes raw logging messages in and writes them out to a file in whatever format you want.
    Come to think of it, depending on how complex the required XML is, you may even be able to do this without writing any code at all. You can write formatting patterns in the Log4J config that existing file appenders will use to write your XML files.
    A bit of an abstract explanation, I guess. Your best bet is to first ascertain that Log4J is indeed in use, and then read the documentation, which is surprisingly good for an Apache project :-)
    [http://logging.apache.org/log4j/1.2/index.html]
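
    To make the appender idea concrete, here is a minimal sketch of a custom Log4J 1.2 appender that writes each event as a simple XML element; the class name, output file and XML shape are all made up for illustration:

    import java.io.FileWriter;
    import java.io.IOException;
    import org.apache.log4j.AppenderSkeleton;
    import org.apache.log4j.spi.LoggingEvent;

    public class XmlFileAppender extends AppenderSkeleton {
        private String fileName = "server-log.xml"; // hypothetical output file

        public void setFileName(String fileName) { this.fileName = fileName; }

        @Override
        protected void append(LoggingEvent event) {
            try (FileWriter w = new FileWriter(fileName, true)) { // append mode
                w.write("<entry level=\"" + event.getLevel()
                        + "\" time=\"" + event.getTimeStamp() + "\">"
                        + event.getRenderedMessage() + "</entry>\n");
            } catch (IOException e) {
                errorHandler.error("could not write log entry", e, 0);
            }
        }

        @Override
        public void close() { closed = true; }

        @Override
        public boolean requiresLayout() { return false; }
    }

    The appender is then wired up purely in the Log4J configuration, so the application code (here, the container's) never changes.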

  • AFP log files not rotating

    When I enable logging on my AFP server, the log files aren't being rotated correctly. Does anyone know the location of the script Apple uses to rotate the log files in /Library/Logs/AppleFileService? There must be a bug.
    The AFP service seems to ignore the interval you set for rotating AppleFileServiceAccess.log. I had it set to 1 day but it's only rotating the log files every 7 days. What's worse, when it does try to rotate the log it doesn't work completely: it doesn't remove the old uncompressed log and stops writing new entries to the log file. The only way to get the AFP service to log entries again is to stop the AFP service, rename the log file, then start the service again.

    I'm observing this same bug on a 10.5.6 server. Is this a known issue?
