Parse XMLFormatter log files back into LogRecords

My app uses a java.util.logging.FileHandler and XMLFormatter to generate XML log files, which is great. Now I want to display those logs in my app's web admin console.
I assumed that there would be an easy way to read my XML log files back into a List of LogRecord objects, but my initial investigation has turned up nothing.
Anyone have any advice?

If you remove an "active" log file, this can cause problems. If you remove an archived log file, it is OK.
If you change the log directory, you SHOULD inform all your applications that use this directory... Depending on the service, this information is usually stored inside the service's config files.
Mihalis.
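
As far as I know, java.util.logging does not ship anything that reads XMLFormatter output back into LogRecord objects, so one option is to parse the <record> elements yourself and rebuild each LogRecord field by field. Below is a rough, untested sketch of that idea (the class name XmlLogReader and the approach are just illustrative); it assumes the standard XMLFormatter element names (level, millis, logger, class, method, message), swallows the logger.dtd reference since the DTD usually isn't next to the file, and an "active" log file may still need its closing </log> tag appended before it parses cleanly.
{code}
import java.io.File;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

public class XmlLogReader {

    public static List<LogRecord> read(File xmlLog) throws Exception {
        DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        // The file references logger.dtd, which is normally not on disk next to it;
        // resolve the reference to an empty stream instead of failing.
        builder.setEntityResolver((publicId, systemId) -> new InputSource(new StringReader("")));

        Document doc = builder.parse(xmlLog);
        NodeList records = doc.getElementsByTagName("record");

        List<LogRecord> result = new ArrayList<>();
        for (int i = 0; i < records.getLength(); i++) {
            Element rec = (Element) records.item(i);
            // Rebuild the LogRecord from the child elements XMLFormatter wrote out.
            LogRecord lr = new LogRecord(Level.parse(text(rec, "level")), text(rec, "message"));
            lr.setMillis(Long.parseLong(text(rec, "millis")));
            lr.setLoggerName(text(rec, "logger"));
            lr.setSourceClassName(text(rec, "class"));
            lr.setSourceMethodName(text(rec, "method"));
            result.add(lr);
        }
        return result;
    }

    private static String text(Element record, String tag) {
        NodeList nodes = record.getElementsByTagName(tag);
        return nodes.getLength() > 0 ? nodes.item(0).getTextContent() : "";
    }
}
{code}
The resulting List<LogRecord> can then be handed to whatever formatter the admin console uses for display.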

Similar Messages

  • How can I import an ibook file back into iBook Author?

    I exported an ebook from iBook Author thinking it is the equivalent of saving the file.
    I did not save the file, I only exported it and then exited the program.
    My active file has vanished.
    I need to make some adjustments to the ebook, but cannot bring it back up.
    Is there a way that I can re-import the iBook file back into iBook Author?
    When I open iBook Author and try to open the exported file the program does not recognise the ibook file...
    Thanks for your help!

    Check for previous versions of that book in iBA....even if not saved.
    You can dig into the .ibooks file by changing the suffix to zip, but you'll have to manually extract what you can from assets you find. Please only do this with your own books.

  • Is it possible to export contacts from Outlook 2011 for Mac and import the file back into Mac Address Book? Please reply, it's urgent

    Is it possible to export contacts from Outlook 2011 for Mac and import the file back into Mac Address Book? Please reply, it's urgent.

  • I have accidentally deleted a large number of developed images in Lightroom before I did a backup. I reimported the original raw files back into Lightroom hoping the develop settings would be re-established but no luck. Notice system may have done an auto-

    Question?
    I have accidentally deleted a large number of developed images in Lightroom before I did a backup. I reimported the original raw files back into Lightroom hoping the develop settings would be re-established, but no luck. I notice the system may have done an auto-backup, as I have an lrcat-journal file. Can I use this to restore my develop settings? I also have JPGs generated from all the deleted images.

    Hello,
    if you have a backup of your catalog you can do the following:
    1. Backup your catalog first
    2. Restore your backup catalog to some location
    3. Open your current catalog and select "files->import from another catalog".
    4. Select your backup catalog and your lost images. LR asks you whether you want to overwrite the current settings or save them as a virtual copy.
    As an alternative you can open your backup catalog, select the "lost" images and save the development settings as XMP sidecar files (using ctrl-s). Then open your current catalog, select the images and use "Metadata->Read Metadata from Files".

  • Hi there.  As logical as I can be.  I export a file back into my folder as a PSD.  I then export that file to PS.  Work on it, save and return it to LR.  The file goes to the bottom of the folder from where I have to put it back in the original place, taki

    Hi there.  As logical as I can be.  I export a file back into my folder as a PSD.  I then export that file to PS.  Work on it, save and return it to LR.  The file goes to the bottom of the folder, from where I have to put it back in the original place, taking time and driving me slightly mad.  Why does the file not go back to where it came from, please?  Thanks  Tim

    Umm, but this folder is an amalgam of two cameras and five cards, and I want them in capture order!  Thanks for trying though Jim.
    Cheers
    Tim

  • Is it possible to get published to folder files back into iweb?

    Hello,
    I have created a website in iWeb and published it to a folder and to my .mac account. I have recently closed down my .mac account so I could reopen one with my new business name. In the meantime I have upgraded to Leopard, which deleted my iWeb application. I have got another copy of iWeb, but now when I open iWeb it does not automatically open my website. Does anyone know if it is possible to open my old website, which I have published to a folder, in iWeb so I can publish it to my new .mac site?
    If I click on the html files in the folder it runs like the website, but from that folder only. All the work is there, I just can't find out how to get it back on the net.
    Please help so I don't have to spend weeks re-entering all the files back into iWeb one by one!
    Thanks

    You can upload your website files by drag and drop to Finder/Go/iDisk/My iDisk/Web/Sites to get your website up and running.
    To be able to publish changes you will have to do a search for the original domain.sites2 file and drop this into Home Folder/Library/Application Support/iWeb.
    If you can't find it, then rebuilding is your only option. This is frustrating but doesn't take too long if you copy and paste all your text, images etc from the published files to your new iWeb pages.

  • Parsing a log file on Weblogic

    Hi!
    I'd like to know how to get started on parsing a log file present in the default directory of Weblogic (ver 6.1 to be precise).
    I thought of using regular expressions via java.util.regex, but that is only supported from JDK 1.4 onwards, whereas WL 6.1 supports JDK 1.3.
    If you can also provide a code template for this, that would be nice.
    Thanks in advance,
    Deepthy.

    uncle_alice wrote:
    {code}String regex = "([^\"\\\\]++|\\\\.)++"{code}
    The trick is to match anything except a quotation mark or a backslash, OR match a backslash followed by anything (because the backslash is usually used to escape other characters as well, including backslashes).

    Superb! Thanks! I have to admit I've never used the ++ before (only the greedies), but that's the thing I was looking for.
    Just for completeness, this is the whole thing that's able to parse a log line:
    {code}
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class LogParser {
        private static final String NOSPACE_PARAM = "([^ ]++)";
        private static final String DATE_PARAM = "([^\\]]++)";
        private static final String ESCAPED_PARAM = "((?:[^\"\\\\]++|\\\\.)++)";
        private static final String PATTERN_STRING = NOSPACE_PARAM
                + " " + NOSPACE_PARAM
                + " " + NOSPACE_PARAM
                + " \\[" + DATE_PARAM + "\\]"
                + " \"" + ESCAPED_PARAM + "\""
                + " " + NOSPACE_PARAM
                + " " + NOSPACE_PARAM
                + " \"" + ESCAPED_PARAM + "\""
                + " \"" + ESCAPED_PARAM + "\""
                + " \"" + ESCAPED_PARAM + "\""
                + " \"" + ESCAPED_PARAM + "\""
                + " \"" + ESCAPED_PARAM + "\""
                + " " + NOSPACE_PARAM
                + " \"" + ESCAPED_PARAM + "\""
                + " \"" + ESCAPED_PARAM + "\""
                + " \"" + ESCAPED_PARAM + "\""
                + " \"" + ESCAPED_PARAM + "\"";
        private static final Pattern PATTERN = Pattern.compile(PATTERN_STRING);

        public static String[] parse(String line) {
            Matcher m = PATTERN.matcher(line);
            if (m.matches()) {
                String[] result = new String[m.groupCount()];
                for (int i = 0; i < m.groupCount();) {
                    result[i] = m.group(++i);
                }
                return result;
            }
            return null;
        }
    }
    {code}
    Any idea about the efficiency of this thing?
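    Not sure the ++ possessive quantifiers make a measurable difference either way, but if you want to try it out, a tiny throwaway harness like the one below could run the class above over a log file and report how many lines match (the file name is just a placeholder, and it assumes a JDK with java.util.regex, i.e. 1.4 or later):
    {code}
    import java.io.BufferedReader;
    import java.io.FileReader;

    public class LogParserDemo {
        public static void main(String[] args) throws Exception {
            // Placeholder path; point this at a real log file.
            BufferedReader in = new BufferedReader(new FileReader("access.log"));
            String line;
            int matched = 0, total = 0;
            while ((line = in.readLine()) != null) {
                total++;
                if (LogParser.parse(line) != null) {
                    matched++;
                }
            }
            in.close();
            System.out.println(matched + " of " + total + " lines matched the pattern");
        }
    }
    {code}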

  • Move System Preferences Help file back into System Preferences folder?

    How do I move System Preferences Help file back into System Preferences folder.  I unintentionally moved it.  My Mac blocks putting it back.  Suggestions?  Thanks!

    Somehow I managed to delete my System Preferences shortly after upgrading, and was able to use the 10.4 one for a while until I applied the patch. Luckily, I already had Pacifist, and with your directions I was able to get close enough so that I could find the Applications folder within Contents of EssentialSystemSoftwareGroup > Contents of Essentials.pkg > Applications > System Preferences.app.
    For that, I owe you the deepest gratitude and a gigantic THANK YOU. My System Preferences is now fully reinstalled and working wonderfully.
    Pacifist might actually be worth purchasing.

  • How do I put recovery files back into ASM?

    I'm trying to recover a database for testing. I have all the backup files in a directory. I've restored the controlfile.
    When I run the restore, the error is:
    RMAN-03002: failure of restore command at 10/21/2009 19:24:52
    RMAN-06026: some targets not found - aborting restore
    RMAN-06023: no backup or copy of datafile 4 found to restore
    RMAN-06023: no backup or copy of datafile 3 found to restore
    RMAN-06023: no backup or copy of datafile 2 found to restore
    RMAN-06023: no backup or copy of datafile 1 found to restore
    I'm guessing that I either need to tell rman where to find the files or put the files back into the ASM +FLASHBACK.
    So 1) How do I tell rman that the files are now in a directory
    or 2) How do I put the files back into ASM?
    Thanks - JR

    I restored the control file with RMAN's "RESTORE CONTROLFILE FROM 'filenamegoeshere';" command. It was the control file backup from the corresponding DB backup.
    I couldn't use the RESTORE CONTROLFILE FROM AUTOBACKUP because it was expecting the backup files to be in the ASM +FLASHBACK.
    The catalog lists all the files that I have, but it thinks they are in ASM.
    Thanks - JR

  • How to import modified files back into Muse

    Maybe this is an easy sync issue....hopefully.
    I've exported my html files from Muse (after publishing to Biz Catalyst) and want to modify the html, such as adding hrefs to divs in the slideshow, etc. How can I import these modified html files back into Muse so I can publish back to Business Catalyst?
    I realize it's an easy solution to host the exported files on another server, but I'd really like to stay within the Muse/Biz Catalyst environment with updated html's.
    thanks much!!

    There isn't a way at this time to import HTML files into Muse and expect it to load in Design mode for a subsequent re-publish. The only kind of editing that Muse supports today (outside of Muse) is through Business Catalyst's In-Browser editing feature but that is limited to editing only text (without any formatting or styles) and foreground images <http://www.adobekb.com/working-with-in-browser-editing_pt01.html>. Feel free to submit your feature request over at our Ideas section.
    Thanks,
    Vinayak

  • Mastered files back into WaveBurner

    Hi everyone.
    I'm currently getting my projects mastered abroad; usually when I master locally the mastering engineer can embed my ISRC codes and information on a CD ready for duplication. However, I don't think I can get a CD sent to me; it would just be the mastered WAV file.
    Can I take the mastered WAV file back into WaveBurner to insert the relevant information and codes and export it back out without affecting the file?

    That doesn't sound like a problem to me as long as you don't add any plug-ins, normalize, or make any volume level changes. If the mastered files come back as individual tracks, you will have to deal with timing gaps between tracks. If they come back as a single file, you will need to insert track markers.
    Out of curiosity, where do you have them mastered? Are you getting good results?

  • iTunes is empty, how do I get my files back into it?

    This morning I was trying to clean up my MacBook, get rid of applications I'm not using, clearing caches etc. (Everything seemed to be running a bit slow, which prompted the attempt to clean up unnecessary stuff.)
    Something ended up happening with my iTunes library - it's like it reset itself; nothing's in it anymore. However, I'm almost certain all the music files are still on my computer. It seems like the library is still there too, I just can't really figure out how I'm supposed to get it back into iTunes. I started to add one (library file, I think it was) back into iTunes - and it was going fine, until it said that it was running out of memory. I click "OK", I click "Cancel", the error message keeps popping up and not going away. I force quit iTunes, thinking what's already been imported will be there and I can find a way to work out the rest. Didn't work. iTunes is still empty and now about half of my available memory is gone. I have 2 previous iTunes Library files (of the lib file type) and an iTunes Library.xml; there's also a file with all of the music. But I want to see if I can restore my videos as well. I do have everything backed up on an external hard drive, but that's kind of a pain to restore, and I think my files are still here, I'm just confused as to what to do.

    The iTunes folder now seems to contain only a fraction of the stuff that I originally had - I'm thinking from when I tried to reimport the stuff and then it said I was out of memory. (The things that were imported are in there, even though they're not showing up in iTunes, and the things that didn't get imported aren't.) I don't really get how my entire iTunes library can be gone (5000 songs and about a week's worth of videos) yet I have 13MG less memory available. If I lost that much stuff, shouldn't I have more?

  • Every time I close my laptop, and then re-open it later, it automatically has logged me back into my account. Why is this?

    Hi,
    So, normally when I close my laptop after usage (putting it in sleep mode) it will automatically log me out of the account I am using. Whenever I want to go back into it, I have to type in my password before getting back in. However, every time I now open it, I don't have to type in any password; I am already logged into the account I was using before I closed my laptop. Why is this happening? And how do I get this to stop?
    Any help would be appreciated!!
    -Thanks

    Some settings may have changed. Open System Preferences > Security & Privacy > General, and make sure that "Require password immediately after sleep or screen saver begins" is ticked.

  • How do i get hosted website files back into iWeb after they were lost off iWeb?

    Lost all website files from my iWeb program.  Possibly I updated to a newer, non-compatible iWeb version?  At any rate, I have the website files back on my desktop and have no idea how to get them back into iWeb so I can modify them.  Any help will be greatly appreciated!

    You can't.  iWeb has no import capability.  Search your hard drive for a file named Domain.sites2.  It normally resides in your Users/Home/Library/Application Support/iWeb folder. If you use Time Machine you can go back in it to find the latest copy of that file.
    However,
    NOTE: In Lion and Mountain Lion the Home/Library folder is now invisible. To make it permanently visible enter the following in the Terminal application window: chflags nohidden ~/Library and hit the Enter button - 10.7: Un-hide the User Library folder.
    If you can't find it you'll have to start a new website.  You can open the files from the website in your browser and copy/drag photos from the existing website pages into a new or blank iWeb page.  Text should be selected and copied and then pasted into the new webpage.  No need to add a text box first.
    OT

  • Parse robocopy Log File - new value

    Hello,
    I have found a script that parses the robocopy log file, which looks like this:
       ROBOCOPY     ::     Robust File Copy for Windows                             
      Started : Thu Aug 07 09:30:18 2014
       Source : e:\testfolder\
         Dest : w:\testfolder\
        Files : *.*
      Options : *.* /V /NDL /S /E /COPYALL /NP /IS /R:1 /W:5
         Same          14.6 g e:\testfolder\bigfile - Copy (5).out
         Same          14.6 g e:\testfolder\bigfile - Copy.out
         Same          14.6 g e:\testfolder\bigfile.out
                   Total    Copied   Skipped  Mismatch    FAILED    Extras
       Dirs :         1         0         1         0         0         0
      Files :         3         3         0         0         0         0
       Bytes :  43.969 g  43.969 g         0         0         0         0
       Times :   0:05:44   0:05:43                       0:00:00   0:00:00
       Speed :           137258891 Bytes/sec.
       Speed :            7854.016 MegaBytes/min.
       Ended : Thu Aug 07 09:36:02 2014
    Most of the values are included in the output file, but not the two speed parameters.
    How can I get these two speed parameters into the output file?
    Here is the script:
    param(
    [parameter(Position=0,Mandatory=$true,ValueFromPipeline=$false,HelpMessage='Source Path with no trailing slash')][string]$SourcePath,
    [switch]$fp
    write-host "Robocopy log parser. $(if($fp){"Parsing file entries"} else {"Parsing summaries only, use -fp to parse file entries"})"
    #Arguments
    # -fp File parse. Counts status flags and oldest file Slower on big files.
    $ElapsedTime = [System.Diagnostics.Stopwatch]::StartNew()
    $refreshrate=1 # progress counter refreshes this often when parsing files (in seconds)
    # These summary fields always appear in this order in a robocopy log
    $HeaderParams = @{
    "04|Started" = "date";
    "01|Source" = "string";
    "02|Dest" = "string";
    "03|Options" = "string";
    "07|Dirs" = "counts";
    "08|Files" = "counts";
    "09|Bytes" = "counts";
    "10|Times" = "counts";
    "05|Ended" = "date";
    #"06|Duration" = "string"
    $ProcessCounts = @{
    "Processed" = 0;
    "Error" = 0;
    "Incomplete" = 0
    $tab=[char]9
    $files=get-childitem $SourcePath
    $writer=new-object System.IO.StreamWriter("$(get-location)\robocopy-$(get-date -format "dd-MM-yyyy_HH-mm-ss").csv")
    function Get-Tail([object]$reader, [int]$count = 10) {
    $lineCount = 0
    [long]$pos = $reader.BaseStream.Length - 1
    while($pos -gt 0)
    $reader.BaseStream.position=$pos
    # 0x0D (#13) = CR
    # 0x0A (#10) = LF
    if ($reader.BaseStream.ReadByte() -eq 10)
    $lineCount++
    if ($lineCount -ge $count) { break }
    $pos--
    # tests for file shorter than requested tail
    if ($lineCount -lt $count -or $pos -ge $reader.BaseStream.Length - 1) {
    $reader.BaseStream.Position=0
    } else {
    # $reader.BaseStream.Position = $pos+1
    $lines=@()
    while(!$reader.EndOfStream) {
    $lines += $reader.ReadLine()
    return $lines
    function Get-Top([object]$reader, [int]$count = 10)
    $lines=@()
    $lineCount = 0
    $reader.BaseStream.Position=0
    while(($linecount -lt $count) -and !$reader.EndOfStream) {
    $lineCount++
    $lines += $reader.ReadLine()
    return $lines
    function RemoveKey ( $name ) {
    if ( $name -match "|") {
    return $name.split("|")[1]
    } else {
    return ( $name )
    function GetValue ( $line, $variable ) {
    if ($line -like "*$variable*" -and $line -like "* : *" ) {
    $result = $line.substring( $line.IndexOf(":")+1 )
    return $result
    } else {
    return $null
    function UnBodgeDate ( $dt ) {
    # Fixes RoboCopy botched date-times in format Sat Feb 16 00:16:49 2013
    if ( $dt -match ".{3} .{3} \d{2} \d{2}:\d{2}:\d{2} \d{4}" ) {
    $dt=$dt.split(" ")
    $dt=$dt[2],$dt[1],$dt[4],$dt[3]
    $dt -join " "
    if ( $dt -as [DateTime] ) {
    return $dt.ToStr("dd/MM/yyyy hh:mm:ss")
    } else {
    return $null
    function UnpackParams ($params ) {
    # Unpacks file count bloc in the format
    # Dirs : 1827 0 1827 0 0 0
    # Files : 9791 0 9791 0 0 0
    # Bytes : 165.24 m 0 165.24 m 0 0 0
    # Times : 1:11:23 0:00:00 0:00:00 1:11:23
    # Parameter name already removed
    if ( $params.length -ge 58 ) {
    $params = $params.ToCharArray()
    $result=(0..5)
    for ( $i = 0; $i -le 5; $i++ ) {
    $result[$i]=$($params[$($i*10 + 1) .. $($i*10 + 9)] -join "").trim()
    $result=$result -join ","
    } else {
    $result = ",,,,,"
    return $result
    $sourcecount = 0
    $targetcount = 1
    # Write the header line
    $writer.Write("File")
    foreach ( $HeaderParam in $HeaderParams.GetEnumerator() | Sort-Object Name ) {
    if ( $HeaderParam.value -eq "counts" ) {
    $tmp="~ Total,~ Copied,~ Skipped,~ Mismatch,~ Failed,~ Extras"
    $tmp=$tmp.replace("~","$(removekey $headerparam.name)")
    $writer.write(",$($tmp)")
    } else {
    $writer.write(",$(removekey $HeaderParam.name)")
    if($fp){
    $writer.write(",Scanned,Newest,Summary")
    $writer.WriteLine()
    $filecount=0
    # Enumerate the files
    foreach ($file in $files) {
    $filecount++
    write-host "$filecount/$($files.count) $($file.name) ($($file.length) bytes)"
    $results=@{}
    $Stream = $file.Open([System.IO.FileMode]::Open,
    [System.IO.FileAccess]::Read,
    [System.IO.FileShare]::ReadWrite)
    $reader = New-Object System.IO.StreamReader($Stream)
    #$filestream=new-object -typename System.IO.StreamReader -argumentlist $file, $true, [System.IO.FileAccess]::Read
    $HeaderFooter = Get-Top $reader 16
    if ( $HeaderFooter -match "ROBOCOPY :: Robust File Copy for Windows" ) {
    if ( $HeaderFooter -match "Files : " ) {
    $HeaderFooter = $HeaderFooter -notmatch "Files : "
    [long]$ReaderEndHeader=$reader.BaseStream.position
    $Footer = Get-Tail $reader 16
    $ErrorFooter = $Footer -match "ERROR \d \(0x000000\d\d\) Accessing Source Directory"
    if ($ErrorFooter) {
    $ProcessCounts["Error"]++
    write-host -foregroundcolor red "`t $ErrorFooter"
    } elseif ( $footer -match "---------------" ) {
    $ProcessCounts["Processed"]++
    $i=$Footer.count
    while ( !($Footer[$i] -like "*----------------------*") -or $i -lt 1 ) { $i-- }
    $Footer=$Footer[$i..$Footer.Count]
    $HeaderFooter+=$Footer
    } else {
    $ProcessCounts["Incomplete"]++
    write-host -foregroundcolor yellow "`t Log file $file is missing the footer and may be incomplete"
    foreach ( $HeaderParam in $headerparams.GetEnumerator() | Sort-Object Name ) {
    $name = "$(removekey $HeaderParam.Name)"
    $tmp = GetValue $($HeaderFooter -match "$name : ") $name
    if ( $tmp -ne "" -and $tmp -ne $null ) {
    switch ( $HeaderParam.value ) {
    "date" { $results[$name]=UnBodgeDate $tmp.trim() }
    "counts" { $results[$name]=UnpackParams $tmp }
    "string" { $results[$name] = """$($tmp.trim())""" }
    default { $results[$name] = $tmp.trim() }
    if ( $fp ) {
    write-host "Parsing $($reader.BaseStream.Length) bytes"
    # Now go through the file line by line
    $reader.BaseStream.Position=0
    $filesdone = $false
    $linenumber=0
    $FileResults=@{}
    $newest=[datetime]"1/1/1900"
    $linecount++
    $firsttick=$elapsedtime.elapsed.TotalSeconds
    $tick=$firsttick+$refreshrate
    $LastLineLength=1
    try {
    do {
    $line = $reader.ReadLine()
    $linenumber++
    if (($line -eq "-------------------------------------------------------------------------------" -and $linenumber -gt 16) ) {
    # line is end of job
    $filesdone=$true
    } elseif ($linenumber -gt 16 -and $line -gt "" ) {
    $buckets=$line.split($tab)
    # this test will pass if the line is a file, fail if a directory
    if ( $buckets.count -gt 3 ) {
    $status=$buckets[1].trim()
    $FileResults["$status"]++
    $SizeDateTime=$buckets[3].trim()
    if ($sizedatetime.length -gt 19 ) {
    $DateTime = $sizedatetime.substring($sizedatetime.length -19)
    if ( $DateTime -as [DateTime] ){
    $DateTimeValue=[datetime]$DateTime
    if ( $DateTimeValue -gt $newest ) { $newest = $DateTimeValue }
    if ( $elapsedtime.elapsed.TotalSeconds -gt $tick ) {
    $line=$line.Trim()
    if ( $line.Length -gt 48 ) {
    $line="[...]"+$line.substring($line.Length-48)
    $line="$([char]13)Parsing > $($linenumber) ($(($reader.BaseStream.Position/$reader.BaseStream.length).tostring("P1"))) - $line"
    write-host $line.PadRight($LastLineLength) -NoNewLine
    $LastLineLength = $line.length
    $tick=$tick+$refreshrate
    } until ($filesdone -or $reader.endofstream)
    finally {
    $reader.Close()
    $line=$($([string][char]13)).padright($lastlinelength)+$([char]13)
    write-host $line -NoNewLine
    $writer.Write("`"$file`"")
    foreach ( $HeaderParam in $HeaderParams.GetEnumerator() | Sort-Object Name ) {
    $name = "$(removekey $HeaderParam.Name)"
    if ( $results[$name] ) {
    $writer.Write(",$($results[$name])")
    } else {
    if ( $ErrorFooter ) {
    #placeholder
    } elseif ( $HeaderParam.Value -eq "counts" ) {
    $writer.Write(",,,,,,")
    } else {
    $writer.Write(",")
    if ( $ErrorFooter ) {
    $tmp = $($ErrorFooter -join "").substring(20)
    $tmp=$tmp.substring(0,$tmp.indexof(")")+1)+","+$tmp
    $writer.write(",,$tmp")
    } elseif ( $fp ) {
    $writer.write(",$LineCount,$($newest.ToString('dd/MM/yyyy hh:mm:ss'))")
    foreach ( $FileResult in $FileResults.GetEnumerator() ) {
    $writer.write(",$($FileResult.Name): $($FileResult.Value);")
    $writer.WriteLine()
    } else {
    write-host -foregroundcolor darkgray "$($file.name) is not recognised as a RoboCopy log file"
    write-host "$filecount files scanned in $($elapsedtime.elapsed.tostring()), $($ProcessCounts["Processed"]) complete, $($ProcessCounts["Error"]) have errors, $($ProcessCounts["Incomplete"]) incomplete"
    write-host "Results written to $($writer.basestream.name)"
    $writer.close()
    I hope somebody can help me,
    Horst
    Thanks Horst MOSS 2007 Farm; MOSS 2010 Farm; TFS 2010; TFS 2013; IIS 7.5

    Hi Horst,
    To convert multiple robocopy log files to a .csv file with the "speed" option, the script below may be helpful for you. I tested it with a single robocopy log file, and the .csv file will be output to "D:\":
    $SourcePath="e:\1\1.txt" #robocopy log file
    write-host "Robocopy log parser. $(if($fp){"Parsing file entries"} else {"Parsing summaries only, use -fp to parse file entries"})"
    #Arguments
    # -fp File parse. Counts status flags and oldest file Slower on big files.
    $ElapsedTime = [System.Diagnostics.Stopwatch]::StartNew()
    $refreshrate=1 # progress counter refreshes this often when parsing files (in seconds)
    # These summary fields always appear in this order in a robocopy log
    $HeaderParams = @{
     "04|Started" = "date"; 
     "01|Source" = "string";
     "02|Dest" = "string";
     "03|Options" = "string";
     "09|Dirs" = "counts";
     "10|Files" = "counts";
     "11|Bytes" = "counts";
     "12|Times" = "counts";
     "05|Ended" = "date";
     "07|Speed" = "default";
     "08|Speednew" = "default"
    $ProcessCounts = @{
     "Processed" = 0;
     "Error" = 0;
     "Incomplete" = 0
    $tab=[char]9
    $files=get-childitem $SourcePath
    $writer=new-object System.IO.StreamWriter("D:\robocopy-$(get-date -format "dd-MM-yyyy_HH-mm-ss").csv")
    function Get-Tail([object]$reader, [int]$count = 10) {
     $lineCount = 0
     [long]$pos = $reader.BaseStream.Length - 1
     while($pos -gt 0)
      $reader.BaseStream.position=$pos
      # 0x0D (#13) = CR
      # 0x0A (#10) = LF
      if ($reader.BaseStream.ReadByte() -eq 10)
       $lineCount++
       if ($lineCount -ge $count) { break }
      $pos--
     # tests for file shorter than requested tail
     if ($lineCount -lt $count -or $pos -ge $reader.BaseStream.Length - 1) {
      $reader.BaseStream.Position=0
     } else {
      # $reader.BaseStream.Position = $pos+1
     $lines=@()
     while(!$reader.EndOfStream) {
      $lines += $reader.ReadLine()
     return $lines
    function Get-Top([object]$reader, [int]$count = 10)
     $lines=@()
     $lineCount = 0
     $reader.BaseStream.Position=0
     while(($linecount -lt $count) -and !$reader.EndOfStream) {
      $lineCount++
      $lines += $reader.ReadLine()  
     return $lines
    function RemoveKey ( $name ) {
     if ( $name -match "|") {
      return $name.split("|")[1]
     } else {
      return ( $name )
    function GetValue ( $line, $variable ) {
     if ($line -like "*$variable*" -and $line -like "* : *" ) {
      $result = $line.substring( $line.IndexOf(":")+1 )
      return $result
     } else {
      return $null
    }function UnBodgeDate ( $dt ) {
     # Fixes RoboCopy botched date-times in format Sat Feb 16 00:16:49 2013
     if ( $dt -match ".{3} .{3} \d{2} \d{2}:\d{2}:\d{2} \d{4}" ) {
      $dt=$dt.split(" ")
      $dt=$dt[2],$dt[1],$dt[4],$dt[3]
      $dt -join " "
     if ( $dt -as [DateTime] ) {
      return $dt.ToStr("dd/MM/yyyy hh:mm:ss")
     } else {
      return $null
    function UnpackParams ($params ) {
     # Unpacks file count bloc in the format
     # Dirs :      1827         0      1827         0         0         0
     # Files :      9791         0      9791         0         0         0
     # Bytes :  165.24 m         0  165.24 m         0         0         0
     # Times :   1:11:23   0:00:00                       0:00:00   1:11:23
     # Parameter name already removed
     if ( $params.length -ge 58 ) {
      $params = $params.ToCharArray()
      $result=(0..5)
      for ( $i = 0; $i -le 5; $i++ ) {
       $result[$i]=$($params[$($i*10 + 1) .. $($i*10 + 9)] -join "").trim()
      $result=$result -join ","
     } else {
      $result = ",,,,,"
     return $result
    $sourcecount = 0
    $targetcount = 1
    # Write the header line
    $writer.Write("File")
    foreach ( $HeaderParam in $HeaderParams.GetEnumerator() | Sort-Object Name ) {
     if ( $HeaderParam.value -eq "counts" ) {
      $tmp="~ Total,~ Copied,~ Skipped,~ Mismatch,~ Failed,~ Extras"
      $tmp=$tmp.replace("~","$(removekey $headerparam.name)")
      $writer.write(",$($tmp)")
     } else {
      $writer.write(",$(removekey $HeaderParam.name)")
    if($fp){
     $writer.write(",Scanned,Newest,Summary")
    $writer.WriteLine()
    $filecount=0
    # Enumerate the files
    foreach ($file in $files) { 
     $filecount++
        write-host "$filecount/$($files.count) $($file.name) ($($file.length) bytes)"
     $results=@{}
    $Stream = $file.Open([System.IO.FileMode]::Open,
                       [System.IO.FileAccess]::Read,
                        [System.IO.FileShare]::ReadWrite)
     $reader = New-Object System.IO.StreamReader($Stream)
     #$filestream=new-object -typename System.IO.StreamReader -argumentlist $file, $true, [System.IO.FileAccess]::Read
     $HeaderFooter = Get-Top $reader 16
     if ( $HeaderFooter -match "ROBOCOPY     ::     Robust File Copy for Windows" ) {
      if ( $HeaderFooter -match "Files : " ) {
       $HeaderFooter = $HeaderFooter -notmatch "Files : "
      [long]$ReaderEndHeader=$reader.BaseStream.position
      $Footer = Get-Tail $reader 16
      $ErrorFooter = $Footer -match "ERROR \d \(0x000000\d\d\) Accessing Source Directory"
      if ($ErrorFooter) {
       $ProcessCounts["Error"]++
       write-host -foregroundcolor red "`t $ErrorFooter"
      } elseif ( $footer -match "---------------" ) {
       $ProcessCounts["Processed"]++
       $i=$Footer.count
       while ( !($Footer[$i] -like "*----------------------*") -or $i -lt 1 ) { $i-- }
       $Footer=$Footer[$i..$Footer.Count]
       $HeaderFooter+=$Footer
      } else {
       $ProcessCounts["Incomplete"]++
       write-host -foregroundcolor yellow "`t Log file $file is missing the footer and may be incomplete"
      foreach ( $HeaderParam in $headerparams.GetEnumerator() | Sort-Object Name ) {
       $name = "$(removekey $HeaderParam.Name)"
                            if ($name -eq "speed"){ #handle two speed
                            ($HeaderFooter -match "$name : ")|foreach{
                             $tmp=GetValue $_ "speed"
                             $results[$name] = $tmp.trim()
                             $name+="new"}
                            elseif ($name -eq "speednew"){} #handle two speed
                            else{
       $tmp = GetValue $($HeaderFooter -match "$name : ") $name
       if ( $tmp -ne "" -and $tmp -ne $null ) {
        switch ( $HeaderParam.value ) {
         "date" { $results[$name]=UnBodgeDate $tmp.trim() }
         "counts" { $results[$name]=UnpackParams $tmp }
         "string" { $results[$name] = """$($tmp.trim())""" }  
         default { $results[$name] = $tmp.trim() }  
      if ( $fp ) {
       write-host "Parsing $($reader.BaseStream.Length) bytes"
       # Now go through the file line by line
       $reader.BaseStream.Position=0
       $filesdone = $false
       $linenumber=0
       $FileResults=@{}
       $newest=[datetime]"1/1/1900"
       $linecount++
       $firsttick=$elapsedtime.elapsed.TotalSeconds
       $tick=$firsttick+$refreshrate
       $LastLineLength=1
       try {
        do {
         $line = $reader.ReadLine()
         $linenumber++
         if (($line -eq "-------------------------------------------------------------------------------" -and $linenumber -gt 16)  ) {
          # line is end of job
          $filesdone=$true
         } elseif ($linenumber -gt 16 -and $line -gt "" ) {
          $buckets=$line.split($tab)
          # this test will pass if the line is a file, fail if a directory
          if ( $buckets.count -gt 3 ) {
           $status=$buckets[1].trim()
           $FileResults["$status"]++
           $SizeDateTime=$buckets[3].trim()
           if ($sizedatetime.length -gt 19 ) {
            $DateTime = $sizedatetime.substring($sizedatetime.length -19)
            if ( $DateTime -as [DateTime] ){
             $DateTimeValue=[datetime]$DateTime
             if ( $DateTimeValue -gt $newest ) { $newest = $DateTimeValue }
         if ( $elapsedtime.elapsed.TotalSeconds -gt $tick ) {
          $line=$line.Trim()
          if ( $line.Length -gt 48 ) {
           $line="[...]"+$line.substring($line.Length-48)
          $line="$([char]13)Parsing > $($linenumber) ($(($reader.BaseStream.Position/$reader.BaseStream.length).tostring("P1"))) - $line"
          write-host $line.PadRight($LastLineLength) -NoNewLine
          $LastLineLength = $line.length
          $tick=$tick+$refreshrate      
        } until ($filesdone -or $reader.endofstream)
       finally {
        $reader.Close()
       $line=$($([string][char]13)).padright($lastlinelength)+$([char]13)
       write-host $line -NoNewLine
      $writer.Write("`"$file`"")
      foreach ( $HeaderParam in $HeaderParams.GetEnumerator() | Sort-Object Name ) {
       $name = "$(removekey $HeaderParam.Name)"
       if ( $results[$name] ) {
        $writer.Write(",$($results[$name])")
       } else {
        if ( $ErrorFooter ) {
         #placeholder
        } elseif ( $HeaderParam.Value -eq "counts" ) {
         $writer.Write(",,,,,,")
        } else {
         $writer.Write(",")
      if ( $ErrorFooter ) {
       $tmp = $($ErrorFooter -join "").substring(20)
       $tmp=$tmp.substring(0,$tmp.indexof(")")+1)+","+$tmp
       $writer.write(",,$tmp")
      } elseif ( $fp ) {
       $writer.write(",$LineCount,$($newest.ToString('dd/MM/yyyy hh:mm:ss'))")   
       foreach ( $FileResult in $FileResults.GetEnumerator() ) {
        $writer.write(",$($FileResult.Name): $($FileResult.Value);")
      $writer.WriteLine()
     } else {
      write-host -foregroundcolor darkgray "$($file.name) is not recognised as a RoboCopy log file"
    write-host "$filecount files scanned in $($elapsedtime.elapsed.tostring()), $($ProcessCounts["Processed"]) complete, $($ProcessCounts["Error"]) have errors, $($ProcessCounts["Incomplete"]) incomplete"
    write-host  "Results written to $($writer.basestream.name)"
    $writer.close()
    If you have any other questions, please feel free to let me know.
    Best Regards,
    Anna Wang
    TechNet Community Support
