Rolling logs by date instead of size

Hi
Does anyone have an example/solution to share regarding rolling logs by date instead of by size using java.util.logging?
Thanks in advance
-Dave

Thanks for your reply - a follow-up question: what is the best way to roll the log over to a different directory at the specified date/time? I am attempting to extend the FileHandler class but am not making much progress. Do you have a method I could use as an example?
Thanks in advance
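One possible approach (a minimal sketch, not from the original thread - the class name, the one-file-per-day layout, and the file-name pattern are illustrative assumptions): wrap a FileHandler and swap in a fresh one whenever the calendar date changes.

```java
import java.io.IOException;
import java.time.LocalDate;
import java.util.logging.ErrorManager;
import java.util.logging.FileHandler;
import java.util.logging.Handler;
import java.util.logging.LogRecord;
import java.util.logging.SimpleFormatter;

// Sketch: a handler that delegates to a FileHandler and replaces it with a
// new one (named after the current date) when the day changes.
public class DailyFileHandler extends Handler {
    private FileHandler delegate;
    private LocalDate currentDay;
    private final String directory;

    public DailyFileHandler(String directory) throws IOException {
        this.directory = directory;
        roll(LocalDate.now());
    }

    private void roll(LocalDate day) throws IOException {
        if (delegate != null) {
            delegate.close();
        }
        // One file per day, e.g. logs/app-2024-05-01.log (illustrative naming)
        delegate = new FileHandler(directory + "/app-" + day + ".log", true);
        delegate.setFormatter(new SimpleFormatter());
        currentDay = day;
    }

    @Override
    public synchronized void publish(LogRecord record) {
        LocalDate today = LocalDate.now();
        if (!today.equals(currentDay)) {
            try {
                roll(today);  // the date changed: start a new file
            } catch (IOException e) {
                reportError(null, e, ErrorManager.OPEN_FAILURE);
            }
        }
        delegate.publish(record);
    }

    @Override public void flush() { delegate.flush(); }
    @Override public void close() { delegate.close(); }
}
```

Rolling into a different directory would then just be a matter of computing a date-based directory path inside roll() instead of a date-based file name.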

Similar Messages

  • Do we need to format data and log files with 64k cluster size for sql server 2012?

    Do we need to format data and log files with 64k cluster size for sql server 2012?
    Does this best practice still apply to SQL Server 2012 & 2014?

    Yes.  The extent size of SQL Server data files, and the max log block size have not changed with the new versions, so the guidance should remain the same.
    Microsoft SQL Server Storage Engine PM

  • Tuning of Redo logs in data warehouses (dwh)

    Hi everybody,
    I'm looking for some guidance to configure redo logs in data warehouse environments.
    Of course we are running in noarchivelog mode and use direct-path inserts (nologging) wherever possible.
    Nevertheless, every ETL process (one process per day) produces 150 GB of redo logs. That seems quite a lot compared to the overall data volume (1 TB of tables + indexes).
    Actually I'm not sure if there is a tuning problem, but because of the large amount of redo I'm interested in examining it.
    Here are the facts:
    - Oracle 10g, 32 GB RAM
    - 6 GB SGA, 20 GB PGA
    - 5 log groups, each with a 1 GB log file
    - 4 MB log buffer
    - about 150 log switches per day (with peaks: some switches only 10 seconds apart)
    - some sysstat metrics after one etl load:
    Select name, to_char(value, '9G999G999G999G999G999G999') from v$sysstat Where name like 'redo %';
    "NAME" "TO_CHAR(VALUE,'9G999G999G999G999G999G999')"
    "redo synch writes" " 300.636"
    "redo synch time" " 61.421"
    "redo blocks read for recovery"" 0"
    "redo entries" " 327.090.445"
    "redo size" " 159.588.263.420"
    "redo buffer allocation retries"" 95.901"
    "redo wastage" " 212.996.316"
    "redo writer latching time" " 1.101"
    "redo writes" " 807.594"
    "redo blocks written" " 321.102.116"
    "redo write time" " 183.010"
    "redo log space requests" " 10.903"
    "redo log space wait time" " 28.501"
    "redo log switch interrupts" " 0"
    "redo ordering marks" " 2.253.328"
    "redo subscn max counts" " 4.685.754"
    So the questions:
    Can anybody see tuning needs? Should the redo logs be made larger, or should more log groups be added? What about placing the redo logs on solid-state disks?
    kind regards,
    Mirko

    user5341252 wrote:
    > I'm looking for some guidance to configure redo logs in data warehouse environments.
    > Of course we are running in noarchivelog mode and use direct-path inserts (nologging) wherever possible.
    Why "of course"? What's your recovery strategy if you wreck the database?
    > Nevertheless every ETL process (one process per day) produces 150 GB of redo logs. That seems quite a lot compared to the overall data volume (1 TB tables + indexes).
    This may be an indication that you need to do something to reduce index maintenance during data loading.
    > Actually I'm not sure if there is a tuning problem, but because of the large amount of redo I'm interested in examining it.
    For a quick check you might be better off running statspack (or AWR) snapshots across the start and end of the batch to get an idea of what work goes on and where the most time goes. (A better strategy would be to examine specific jobs in detail, though.)
    > "redo synch time" "61.421"
    > "redo log space wait time" "28.501"
    Rough guideline - if the redo is slowing you down, then you've lost less than 15 minutes across the board to the log writer. Given the number of processes loading and the elapsed time to load, is this significant?
    > "redo buffer allocation retries" "95.901"
    This figure tells us how OFTEN we couldn't get space in the log buffer - but not how much time we lost as a result. We also need to see your 'log buffer space' wait time.
    > Can anybody see tuning needs? Should the redo logs be made larger, or should more log groups be added? What about placing the redo logs on solid-state disks?
    Based on the information you've given so far, I don't think anyone should be giving you concrete recommendations on what to do; only suggestions on where to look or what to tell us.
    Regards
    Jonathan Lewis

  • How to efficiently log multiple data streams with TDMS

    Ok, first off, I'll admit I am completely clueless when it comes to logging, TDMS in particular.  That said, I'm trying to work out the best way to log some data from an existing LabVIEW-based control system, so that users can later access that data in the event of catastrophic failure or other situations where they might want to see exactly what happened during a particular run.
    I've got a total of between 6 and 12 data points that need to be stored (depending on how many sensors are on the system).  These are values being read from a cRIO control system.  They can all be set to Single data type, if necessary - even the one Boolean value I'm tracking is already being put through the "convert to 0,1" for graph display purposes.  The data is currently read at 100ms intervals for display, but I will be toying with the rate that I want to dump data to the disk - a little loss is OK, just need general trending for long term history.  I need to keep file sizes manageable, but informative enough to be useful later.
    So, I am looking for advice on the best way to set this up.  It will need to be a file that can be read concurrently as it is being written, when necessary - one of the reasons I am looking at TDMS in the first place (it was recommended to me previously).  I also need an accurate date/time stamp that can be used when displaying the data graphically on a chart, so it can sync up with the external camera recordings to correlate just what happened and when.
    Are there specific pitfalls I should watch for?  Should I bundle all of the data points into an array for each storage tick, then decimate the array on the other end when reading?  I've dug through many of the examples, even found a few covering manual timestamp writing, but is there a preferred method that keeps file size minimized (or extraction simplified)?
    I definitely appreciate any help...  It's easy to get overwhelmed and confused in all of the various methods I am finding for handling TDMS files, and determining which method is right for me.

    I need to bump this topic again...  I'll be honest, the TDMS examples and available help are completely letting me down here.
    As I stated, I have up to 12 data values that I need to stream into a log file, so TDMS was suggested to me.  The fact that I can concurrently read a file being written to was a prime reason I chose this format.  And, "it's super easy" as I was told...
    Here's the problem.  I have multiple data streams.  Streams that are not waveform data, but actual realtime data feedback from a control system, that is being read from a cRIO control system into a host computer (which is where I want to log the data).  I also need to log an accurate timestamp with this data.  This data will be streamed to a log file in a loop that consistently writes a data set every 200ms (that may change, not exactly sure on the timing yet).
    Every worthwhile example that I've found has assumed I'm just logging a single waveform, and the data formatting is totally different from what I need.  I've been flailing around with the code, trying to find a correct structure to write my data (put it all in an array, write individual points, etc) and it is, quite honestly, giving me a headache.  And finding the correct way for applying the correct timestamp (accurate data and time the data was collected) is so uncharacteristically obtuse and hard to track down...  This isn't even counting how to read the data back out of the file to display for later evaluation and/or troubleshooting...  Augh!
    It's very disheartening when a colleague can throw everything I'm trying to do together in 12 minutes in the very limited SCADA user interface program he uses to monitor his PLCs...  Yet LabVIEW, the superior program I always brag about, is slowly driving me insane trying to do what seems like a relatively simple task like logging...
    So, does anyone have any actual useful examples of logging multiple DIFFERENT data points (not waveforms) and timestamps into a TDMS file?  Or real suggestions for how to accomplish it, other than "go look at the examples" which I have done (and redone).  Unless, of course, you have an actual relevant example that won't bring up more questions than it answers for me, in which case I say "bring it on!"
    Thanks for any help...  My poor overworked brain will be eternally grateful.

  • Rolling Log Files

    Hi,
    This is more of an application-management issue, and it has nothing to do with WebLogic as such. My problem is that the log file created by redirecting the application server's standard out is getting bigger day by day, and the system is running out of space. Some really important statements are being logged, which are monitored by a real-time alarming system. I would like the log files to roll over after reaching a particular size so that the old files can be archived. I know this is quite easy with log4j, but the statements being output are not coming from log4j.
    I am running WebLogic 8.1 on Sun Solaris.
    Any help on this would be highly appreciated.
    Regards
    Nitin

    I do not have the details yet about the structure of the logs, but these will be similar to web logs.  IVR systems produce log records about each call and each "jump" that a caller makes through the system (like clickthru in a web log).  These logs are written to constantly by the IVR application, and it decides when a log has "filled up".  When it has, it closes the active log and opens a new one.  I suspect that each log file has a timestamp appended to its name to indicate when it was created.  I also suspect that a lock is placed on the file while it is active and then released when it is closed.
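    The rollover the original poster asks about can be approximated with the JDK alone: java.util.logging's FileHandler supports size-based rotation through its limit/count constructor arguments, and standard out can be redirected into it. A hedged sketch (the class name, logger name, file pattern, and sizes are illustrative, not from the thread):

```java
import java.io.PrintStream;
import java.util.logging.FileHandler;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

// Sketch: capture System.out into a size-rolling java.util.logging
// FileHandler. "limit" is the max bytes per file and "count" the number of
// rotated files kept.
public class StdoutCapture {
    public static Logger install(String pattern) throws Exception {
        // Roll at ~10 MB, keep 5 files (the pattern should contain %g).
        FileHandler handler = new FileHandler(pattern, 10 * 1024 * 1024, 5, true);
        handler.setFormatter(new SimpleFormatter());
        final Logger logger = Logger.getLogger("stdout");
        logger.setUseParentHandlers(false);
        logger.addHandler(handler);
        // Forward every println(String) on System.out to the rolling log.
        System.setOut(new PrintStream(System.out, true) {
            @Override public void println(String line) {
                super.println(line);  // still echo to the original stdout
                logger.info(line);    // and record it in the rolling file
            }
        });
        return logger;
    }
}
```

    A log4j RollingFileAppender configured the same way (maxFileSize/maxBackupIndex) is the equivalent if log4j is already on the classpath.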

  • Error logging for data rules in owb11gr2

    Hi all,
    I was playing around with error logging for data rules, and I realized that when an error gets logged into the error table for failing a particular data rule on a table, some of the columns in the error table, such as ORA_ERR_NUMBER$, ORA_ERR_MESG$, ORA_ERR_OPTYP$, are not filled in. Why is this so? Is there any way to populate these fields as well when a row gets logged? The optype field would be useful to identify the operation type of the erroneous row.
    Also, does anyone know whether the error table for dimensions works correctly? I replicated the portion of the mapping flow that goes to my dimension, and even though the erroneous row gets logged into the error table, the ERR$$$_OPERATOR_NAME for that row did not show the dimension object but instead showed another of my table operators in the mapping. Pretty bewildered as to why this is the case.

    The cube operator in 11gR2 also supports DML error logging (as well as orphan management handling). This is enabled by setting the property 'DML Error table name' (in group Error table) on the Cube operator inside the mapping. The error table specified will be created when the mapping is deployed (if you specify an existing one the error is trapped).
    The DML error handling will catch physical errors on the load of the fact.
    Cheers
    David

  • [svn:osmf:] 14476: DVR: using Date instead of getInterval to do local recording timing.

    Revision: 14476
    Author:   [email protected]
    Date:     2010-03-01 05:24:17 -0800 (Mon, 01 Mar 2010)
    Log Message:
    DVR: using Date instead of getInterval to do local recording timing.
    Modified Paths:
        osmf/trunk/framework/OSMF/org/osmf/net/dvr/DVRCastDVRTrait.as
        osmf/trunk/framework/OSMF/org/osmf/net/dvr/DVRCastNetConnectionFactory.as
        osmf/trunk/framework/OSMF/org/osmf/net/dvr/DVRCastRecordingInfo.as
        osmf/trunk/framework/OSMF/org/osmf/net/dvr/DVRCastTimeTrait.as
        osmf/trunk/framework/OSMFTest/org/osmf/net/dvr/TestDVRCastRecordingInfo.as

    Are you trying to record to the "live" application that comes with the default installation of FMS? If so, you will get Record.NoAccess, as the "live" application does not allow recording. Hopefully you have FMIS and not FMSS; with FMSS you only get the live & vod apps, and neither of them allows recording.
    Now I will assume you have FMIS:
    Please use an app other than "live" to record. Renaming "live" to "live_test" or changing its location won't help; you will still get the error.
    In order to record, do the following:
    1. Create a folder called "recordApp" - since you are triggering recording from the client side, you don't need any server-side code.
    2. Just change the application name in your client connection URL to connect to "recordApp" instead of "live_test".
    3. You don't have to change anything in fms.ini, so keep the defaults as they come with the installation.
    4. As mentioned by JayCharles, please use http://livedocs.adobe.com/flashmediaserver/3.0/docs/help.html?content=03_configtasks_37.html for the VirtualDirectory settings so that you can record your files in a particular directory. (Note, however, that the VirtualDirectory feature is more for play than for record/ingest.)
    5. Give write permissions to the folder where you will be recording.
    Please let me know if this does not work.

  • I did a fresh install on my computer, but when I tried to sync, it put the new bookmarks (standard firefox ones) onto the sync data instead. Is there any way to recover my bookmarks?

    Like the question says, is there any way to recover them?
    I can't pull them off my old install since it was erased.

    Let's hope that you did not really erase your old profile; you would have had to go a little out of your way to do that. I've never used Sync, but you have to have something to sync to.
    Use the following to see if you have more than one profile, or to find your lost bookmarks
    in the bookmarkbackups folder.
    Type about:support into the location bar, then click on "Open Containing Folder"; that will take you to your current profile. The bookmarkbackups folder has 5 backups (.json files), any of which can be restored with Firefox through restore bookmarks.
    It doesn't sound like the install pulled in your old bookmarks, so continuing ...
    Sounds like you created a new profile instead of using your existing profile.
    Open your profile folder and go up one level to see if you have multiple
    profiles ...
    HELP (Alt+H) --> Troubleshooting Information... --> "Open Containing Folder"
    (or type about:support into the location bar, then click on "Open Containing Folder")
    Note the folder name, then go up one level to see if you have additional profile folders.
    Check the date of places.sqlite (new in Fx3) for an update just before you went to Fx4.
    Since it is updated each time you close your session, that would be a good place to check.
    If you find one, note its folder name (the profile name is part of the name) and
    then start the Firefox Profile Manager and open with that profile.
    To prevent the problem in the future, don't let Firefox start up for you after
    downloading and installing a new version. Instead, exit the installer at that
    last point and restart Firefox in your normal manner.
    If you really do want to start with a new profile like the one you have, then look in the bookmarkbackups folder of the other profiles, looking particularly at the dates and sizes of the .json files, to find one that you can restore to your current profile -- restoring is a complete replacement of your current bookmarks.
    Lost bookmarks - MozillaZine Knowledge Base
    http://kb.mozillazine.org/Lost_bookmarks
    Backing up and restoring bookmarks - Firefox - MozillaZine Knowledge Base
    http://kb.mozillazine.org/Backing_up_and_restoring_bookmarks_-_Firefox

  • I just got an iphone 6. Not all of my music was on itunes, so I logged in and instead of clicking set up as new iphone I hit restore from backup, thinking it only pertained to the music in itunes. Now my new iphone looks just like my old iphone and h

    I just got an iPhone 6. Not all of my music was in iTunes, so I logged in, and instead of clicking "Set up as new iPhone" I hit "Restore from backup", thinking it only pertained to the music in iTunes. Now my new iPhone looks just like my old iPhone and has iPhone 5 settings. How can I undo this and get back to iPhone 6 settings?

    Settings/Reset/Erase all content and settings...
    Recently I had to do the sequence above and when the phone rebooted, it came up as a new phone.

  • HT4847 Can we check camera roll and documents data which we had backed up by iPhone?

    Can we check camera roll and documents data which we had backed up by iPhone?

    Consider it a loss... sorry

  • Document date instead of the BP Ref. No.

    Hi All,
    In the aging reports:
      On both the customer and vendor aging reports:
      For the detailed aging report printout, I need to see the document date instead of the BP Ref. No.
    Will you please guide me on how to do this?
    thanks,
    philip

    Hi, hope it is still not too late to solve your issue. You may use this link to get the aging report in XL Reporter:
    https://websmp209.sap-ag.de/~form/sapnet?_SHORTKEY=01100035870000654469&_SCENARIO=01100035870000000183&_ADDINC=&_OBJECT=011000358700001172792006E.htm
    under Supporting Materials >> Sample Reports XL Reporter 2005.
    Cheers

  • Need help in logging JTDS data packets

    Hi All,
    I have a web application which uses a SQL Server database.
    I have to track down some problems in the database connection, and for that I need to log the jTDS data packets.
    I have tried to use the class net.sourceforge.jtds.jdbc.TdsCore, but the constructor of TdsCore takes two parameters: a ConnectionJDBC2 and a SQLDiagnostic.
    I have tried a lot, but it did not allow me to import the class SQLDiagnostic.
    I need help in logging jTDS data packets. Are there any other ways, or does anybody have any idea about logging jTDS data packets / SQLDiagnostic?
    Please reply, it is urgent!
    Thanks in advance!

    If you want to use log4j then, in your project, create a file called log4j.properties and add this:
    # Set root logger level to INFO and its only appender to ConsoleOut.
    log4j.rootLogger=INFO,ConsoleOut
    # ConsoleOut is set to be a ConsoleAppender.
    log4j.appender.ConsoleOut=org.apache.log4j.ConsoleAppender
    # ConsoleOut uses PatternLayout.
    log4j.appender.ConsoleOut.layout=org.apache.log4j.PatternLayout
    log4j.appender.ConsoleOut.layout.ConversionPattern=%-5p: [%d] %c{1} - %m%n
    log4j.logger.org.apache.jsp=DEBUG
    # Addon for
    com.sun.faces.level=FINE
    Go to your class and add this line:
    private static final Logger logger = Logger.getLogger("classname");
    and then you can use the
    logger.info();
    logger.error();
    methods.
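    For the jTDS question specifically, an alternative to constructing internals like TdsCore is JDBC's standard trace hook, which jTDS (like most JDBC drivers) is expected to route its diagnostics through. A hedged sketch (the class and method names here are illustrative, and the URL is a hypothetical example):

```java
import java.io.PrintWriter;
import java.sql.DriverManager;

// Sketch: enable JDBC-level tracing via DriverManager's log writer;
// drivers loaded through DriverManager write their trace output here.
public class JtdsTrace {
    public static PrintWriter enableTrace(PrintWriter out) {
        DriverManager.setLogWriter(out);
        return out;
    }
    // A jTDS connection would then be opened as usual, e.g. (hypothetical
    // host/database):
    // DriverManager.getConnection("jdbc:jtds:sqlserver://localhost:1433/mydb", user, pass);
}
```

    Whether packet-level detail appears depends on the driver's own logging configuration, so treat this as a starting point rather than a guaranteed packet dump.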

  • How can I log the data transmission of my switch in a file to analyze the quality of my communication channel?

    How can I log the data transmission of my switch in a file to analyze the quality of my communication channels?

    A lot depends on what type of switch you have and what kind of communication channels you're asking about.
    There are several Cisco tools (e.g., "ip sla", SNMP-queried values, show commands etc.) that can give useful information.
    If you give us some more information we can help more specifically.

  • Windows native file dialog default sort is by date instead of name

    I somehow managed to get the Windows common file dialog's default sort to be by date instead of by name, so I have to re-sort by name every time I select a VI to load. I haven't been able to identify the registry entry that tells how the MRU list is sorted. How do I make the default sort by name instead of date, short of removing and reinstalling LabVIEW?

    I think it's a Windows issue and nothing to do with LabVIEW, since LabVIEW just calls the MS common dialog. Do you see the same behaviour when using Windows Explorer? I know that with Win2K I can set the default folder view to always sort by date. To go back to the default, select My Computer > Tools > Folder Options > View > Reset All Folders. I don't have easy access to any computers with a different OS right now, so I can't describe the process for them.

  • BADI exchange rate according to document date instead of posting date

    Hello,
    I am looking for a BADI that will enable us to change the exchange rate according to document date instead of posting date.
    This change is needed for documents of travel expenses.
    I tried to use substitution but it didn't work for documents created in Tcode PRRW.
    Thank you,
    Dan

    Friend,
    Try to use the field exit functionality.
    You can create a field exit for the translation date (WWERT). During currency conversion, only this date is taken as the base.
    Technically, you can pass the Doc. Date (BLDAT) value to the translation date.
    Or, in Substitution (GGB1), also set for the header "Translation date = Doc. Date".
    I hope it serves your purpose.
    Chandu

Maybe you are looking for

  • Scroll Bar Troubles, Please Help.

    When In A WebPage, I Have No Scroll Bars, And The Up And Down Arrows Will Not Scroll The Page Either. The Only Way I Can Scroll Is By Dragging The Screen Up And Down By Highlighting And It's Rather Bugging Now. I'm New To Macs By Two Weeks So Please

  • How to Connect to Server using Server Admin

    I am trying to connect to Server using Server Admin so I can setup a wiki. I enter my hostname, username and password and I get the error unable to connect to server, any ideas what is going wrong? I can ping the server name and that works. Is there

  • How to have voicemail in my iphone?

    i can't find voicemail in my phone....

  • Pro Tools files imported to FCP X causing QuickTime movie problem

    Well 2 days later and hours on the phone with Apple tech support Senior Advisors, and problem still not solved. HELP.  Everything works fine in FCP X, but when I create the QuickTime movie, it is way out of sync (by many seconds). We have isolated th

  • Last date of the month

    Hi All, Can anyone tell me the logic to get the last date of the month for vbrk-fkdat. The date format is dd/mm/yyyy. i need the output on the same format. for ex: if vbrk-fkdat = 17/12/2007 the output should be 31/12/2007. Thanks, Madhu