Robocopy transfer speed in log file
In XP or Server 2003, robocopy logs the transfer speed at the end of the log file. How do I get this in Windows 7 or Server 2008 R2?
thanks
Hi, I have the same problem. I've tested all possible log options, but the speed lines are still missing on 2k8 R2 Ent. German. I noticed that on a 2k8 R2 Std. English server, robocopy does show me the speed summary. I have tested with no options and standard options (as below) too.
> Example from 2008 R2 Ent. German (robocopy version 5.1.10.1027 - XP027):
Optionen: *.* /S /E /COPY:DAT /PURGE /MIR /XJF /XJD /XA:SH /MT:16 /R:0 /W:0
Insgesamt Kopiert Übersprungen Keine Übereinstimmung FEHLER Extras
Verzeich.: 1 0 1 0 0 0
Dateien: 9 9 0 0 0 0
Bytes: 236.352 g 236.352 g 0 0 0 0
Zeiten: 3:28:12 0:43:31 0:00:00 0:18:26
Beendet: Tue Dec 04 02:08:18 2012
> Example from 2008 R2 Std. English (same robocopy version: 5.1.10.1027 - XP027):
Options : *.* /S /E /COPY:DAT /PURGE /MIR /XJF /XJD /XA:SH /R:0 /W:0
Total Copied Skipped Mismatch FAILED Extras
Dirs : 5165 2 5163 0 0 0
Files : 60646 72 60574 0 0 0
Bytes : 32.662 g 3.281 g 29.380 g 0 0 0
Times : 2:02:00 1:33:04 0:00:00 0:28:56
Speed : 630988 Bytes/sec.
Speed : 36.105 MegaBytes/min.
Ended : Tue Dec 04 03:02:01 2012
The /MT switch isn't the reason; maybe it's the language? Or can you tell me the exact switch (with an example) for the German language pack?
I need help. Thanks, Andy
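One hedged observation: besides the language, the two logs above also differ in /MT:16. Running the same job twice on the German server, once with and once without /MT, would confirm or rule out that switch. A sketch with hypothetical paths:

```powershell
# Hypothetical source, destination, and log paths; same job with and without /MT
robocopy C:\src \\server\share /S /E /COPY:DAT /MIR /R:0 /W:0 /MT:16 /LOG:C:\logs\mt.log
robocopy C:\src \\server\share /S /E /COPY:DAT /MIR /R:0 /W:0 /LOG:C:\logs\nomt.log
# Compare the two summaries; the German locale may label the speed line differently
Select-String -Path C:\logs\mt.log, C:\logs\nomt.log -Pattern 'Speed'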
Similar Messages
-
Robocopy unicode output: gibberish log file
When I use the unicode option for a log file, or even redirect unicode output from robocopy, and then try to open the resulting file in notepad.exe or whatever, it just looks like gibberish. How can I make this work, or when will Microsoft fix it? Since Microsoft put that option in there, one supposes that it works with something. What is the expected usage for this option?
Yes, I have file names with non-ASCII characters and I want to be able to view them correctly. Without unicode support robocopy just converts such characters into a '?'. It does, however, actually copy the file over correctly, thankfully. I have tried running robocopy from PowerShell and from cmd /u. Neither makes any difference. Also, one odd thing is that if I use the /unicode switch, the output to screen does properly show the non-ASCII characters, except that it doesn't show all of them, such as the oe ligature used in French 'œ'. That was just converted into an 'o' (not even an oe as is usually the case). Again, it does properly make a copy of the file. This just makes it not quite possible to search log results.
Let's see if this post has those non-ASCII characters transmuted when this gets posted even though everything looks fine as I type it. âéèïöùœ☺♥♪
Use /UNILOG:logfile instead of /LOG:logfile -
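For reference, a minimal /UNILOG invocation might look like the following (paths are hypothetical). Robocopy writes the Unicode log as UTF-16, which Notepad can open and which PowerShell can read with an explicit encoding:

```powershell
# Hypothetical paths; /TEE echoes output to the console as well as the log
robocopy C:\src D:\dst /E /TEE /UNILOG:C:\logs\copy-unicode.log
# Read the UTF-16 log back in PowerShell
Get-Content C:\logs\copy-unicode.log -Encoding Unicode | Select-Object -First 20
```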
Maximum write speed to log file, trying to get a log entry every 100ms.
I have a DAQmx application whose data I want to log. The acquisition runs at a high rate, 1 kHz. I want to log the measurements to a text file that can be read in MATLAB or Excel. I am wondering what the best approach is for this and what the maximum speed is.
I have created a program below to test. But it seems that the log is not consistent even at 10 Hz. The log interval is set to 100 ms while the loop is running at 10 ms. If I look at the time between the samples, 6 of them are higher than 100 ms. The time between some steps is 700 ms, which is quite high. Is this implementation wrong, or is this just due to the non-deterministic computer?

LennartM wrote:
But the program does not stop... It seems it is waiting on another notification to stop or something?
That is a very likely scenario. The simple solution here is to destroy the notifier after the top loop completes and make your bottom loop stop when there is an error from the Notifier (when it was destroyed). That would eliminate the need for the local variable and would ensure that both of your loops stopped.
-
Hi All
My home setup has a Homehub 3, with a gigabit cable coming out of the gigabit port, into an old Netgear router that I'm using as a switch, and from that I have a media PC, and homeplugs coming out of the switch. The homeplugs link my network to a server, and 2 WDTV Live boxes around my house.
I've recently upgraded my Comtrend 200mbps to TP-Link 200mbps homeplugs, and upgraded my Dlink NAS to a HP server.
My transfer speeds have gone up from 6mbps to 8mbps, when I'm copying data from my media PC to my server and back.
I also bought a TP-Link Wireless N USB adapter last week, in the hope that I could get significantly better speeds between my PC and server, however, after changing every setting on my Homehub (N, mixed mode, manually shifted channels) I can only get 3mbps. The Homehub sits downstairs, and the server sits upstairs, probably about 15 feet away in a direct line - no major walls. I even went direct from PC to homehub via the gigabit switch, but the speed was unaltered. I finally turned off every other device possible to avoid interference, but it made no difference.
I've always assumed that the Dlink NAS was the weak link in my network, but I've upgraded that (HP ProLiant server with 2mb of memory). I then assumed it was my homeplugs, but I've upgraded those. I therefore assumed it was my house wiring giving me poor speeds. However, now that I'm only getting 3mbps from a USB N adapter, I'm starting to think that it's the hub.
Has anyone got any experience of hub transfer speeds via homeplugs and USB N adapters? Can anybody give me any suggestions as to how I could improve my speeds (new router)?
Thanks in advance!
Simon

Nobody?
Is the Homehub 3 relatively slow at data transfer speeds when copying files around my network? Would there be any benefit in purchasing a new modem router to replace my BT HH3? -
Robocopy Log File - Skipped files - Interpreting the Log file
Hey all,
I am migrating our main file server, which contains approximately 8TB of data, a few large folders at a time. The folder below is about 1.2TB. Looking at the log file (which is over 330MB), I can see it skipped a large number of files;
however, I haven't found text in the file specifying what was skipped. Any idea what I should search for?
I used the following Robocopy command to transfer the data:
robocopy E:\DATA Z:\DATA /MIR /SEC /W:5 /R:3 /LOG:"Z:\Log\data\log.txt"
The final log output is:
Total Copied Skipped Mismatch FAILED Extras
Dirs : 141093 134629 6464 0 0 0
Files : 1498053 1310982 160208 0 26863 231
Bytes : 2024.244 g 1894.768 g 117.468 g 0 12.007 g 505.38 m
Times : 0:00:00 18:15:41 0:01:00 -18:-16:-41
Speed : 30946657 Bytes/sec.
Speed : 1770.781 MegaBytes/min.
Ended : Thu Jul 03 04:05:33 2014
I assume some are files that are in use but others may be permissions issues, does the log file detail why a file is not copied?
TIA
Carl

Hi,
Files that are skipped are files that already exist. Files that are open, or that fail due to permissions etc., will be listed under FAILED. As Noah said, use /V to see which files were skipped. From robocopy /?:
:: Logging Options :
/V :: produce Verbose output, showing skipped files.
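With /V enabled, each skipped file gets its own tagged line (e.g. `same`, `older`, `newer`), while failures appear as ERROR lines with a Win32 code. A sketch for searching a large log, assuming the log path from the question:

```powershell
$log = 'Z:\Log\data\log.txt'  # path from the original /LOG: switch; adjust as needed
# Per-file skip tags written by /V
Select-String -Path $log -Pattern '\b(same|older|newer)\b' | Select-Object -First 20
# Failures (file in use, access denied, ...) are logged as ERROR lines
Select-String -Path $log -Pattern 'ERROR \d+ \(0x' | Select-Object -First 20
```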
Oscar Virot -
Parse robocopy Log File - new value
Hello,
I have found a script that parses the robocopy log file, which looks like this:
ROBOCOPY :: Robust File Copy for Windows
Started : Thu Aug 07 09:30:18 2014
Source : e:\testfolder\
Dest : w:\testfolder\
Files : *.*
Options : *.* /V /NDL /S /E /COPYALL /NP /IS /R:1 /W:5
Same 14.6 g e:\testfolder\bigfile - Copy (5).out
Same 14.6 g e:\testfolder\bigfile - Copy.out
Same 14.6 g e:\testfolder\bigfile.out
Total Copied Skipped Mismatch FAILED Extras
Dirs : 1 0 1 0 0 0
Files : 3 3 0 0 0 0
Bytes : 43.969 g 43.969 g 0 0 0 0
Times : 0:05:44 0:05:43 0:00:00 0:00:00
Speed : 137258891 Bytes/sec.
Speed : 7854.016 MegaBytes/min.
Ended : Thu Aug 07 09:36:02 2014
Most values are included in the output file, but the two speed parameters are not.
How can I get these two speed parameters into the output file?
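If only the two Speed values are needed, a minimal extraction sketch, independent of the full parser, would be (the log path is an assumption, and an English-locale log is assumed):

```powershell
$log = 'e:\1\1.txt'  # hypothetical robocopy log path
# Grab both summary Speed lines and keep just the values after the colon
Select-String -Path $log -Pattern '^\s*Speed :' | ForEach-Object {
    ($_.Line -split ':', 2)[1].Trim()
}
```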
Here is the script:
param(
[parameter(Position=0,Mandatory=$true,ValueFromPipeline=$false,HelpMessage='Source Path with no trailing slash')][string]$SourcePath,
[switch]$fp
)
write-host "Robocopy log parser. $(if($fp){"Parsing file entries"} else {"Parsing summaries only, use -fp to parse file entries"})"
#Arguments
# -fp File parse. Counts status flags and oldest file Slower on big files.
$ElapsedTime = [System.Diagnostics.Stopwatch]::StartNew()
$refreshrate=1 # progress counter refreshes this often when parsing files (in seconds)
# These summary fields always appear in this order in a robocopy log
$HeaderParams = @{
"04|Started" = "date";
"01|Source" = "string";
"02|Dest" = "string";
"03|Options" = "string";
"07|Dirs" = "counts";
"08|Files" = "counts";
"09|Bytes" = "counts";
"10|Times" = "counts";
"05|Ended" = "date";
#"06|Duration" = "string"
}
$ProcessCounts = @{
"Processed" = 0;
"Error" = 0;
"Incomplete" = 0
}
$tab=[char]9
$files=get-childitem $SourcePath
$writer=new-object System.IO.StreamWriter("$(get-location)\robocopy-$(get-date -format "dd-MM-yyyy_HH-mm-ss").csv")
function Get-Tail([object]$reader, [int]$count = 10) {
$lineCount = 0
[long]$pos = $reader.BaseStream.Length - 1
while($pos -gt 0)
$reader.BaseStream.position=$pos
# 0x0D (#13) = CR
# 0x0A (#10) = LF
if ($reader.BaseStream.ReadByte() -eq 10)
$lineCount++
if ($lineCount -ge $count) { break }
$pos--
# tests for file shorter than requested tail
if ($lineCount -lt $count -or $pos -ge $reader.BaseStream.Length - 1) {
$reader.BaseStream.Position=0
} else {
# $reader.BaseStream.Position = $pos+1
$lines=@()
while(!$reader.EndOfStream) {
$lines += $reader.ReadLine()
return $lines
function Get-Top([object]$reader, [int]$count = 10)
$lines=@()
$lineCount = 0
$reader.BaseStream.Position=0
while(($linecount -lt $count) -and !$reader.EndOfStream) {
$lineCount++
$lines += $reader.ReadLine()
return $lines
function RemoveKey ( $name ) {
if ( $name -match "|") {
return $name.split("|")[1]
} else {
return ( $name )
function GetValue ( $line, $variable ) {
if ($line -like "*$variable*" -and $line -like "* : *" ) {
$result = $line.substring( $line.IndexOf(":")+1 )
return $result
} else {
return $null
function UnBodgeDate ( $dt ) {
# Fixes RoboCopy botched date-times in format Sat Feb 16 00:16:49 2013
if ( $dt -match ".{3} .{3} \d{2} \d{2}:\d{2}:\d{2} \d{4}" ) {
$dt=$dt.split(" ")
$dt=$dt[2],$dt[1],$dt[4],$dt[3]
$dt=$dt -join " "
}
if ( $dt -as [DateTime] ) {
return ([DateTime]$dt).ToString("dd/MM/yyyy hh:mm:ss")
} else {
return $null
}
}
function UnpackParams ($params ) {
# Unpacks file count bloc in the format
# Dirs : 1827 0 1827 0 0 0
# Files : 9791 0 9791 0 0 0
# Bytes : 165.24 m 0 165.24 m 0 0 0
# Times : 1:11:23 0:00:00 0:00:00 1:11:23
# Parameter name already removed
if ( $params.length -ge 58 ) {
$params = $params.ToCharArray()
$result=(0..5)
for ( $i = 0; $i -le 5; $i++ ) {
$result[$i]=$($params[$($i*10 + 1) .. $($i*10 + 9)] -join "").trim()
$result=$result -join ","
} else {
$result = ",,,,,"
return $result
$sourcecount = 0
$targetcount = 1
# Write the header line
$writer.Write("File")
foreach ( $HeaderParam in $HeaderParams.GetEnumerator() | Sort-Object Name ) {
if ( $HeaderParam.value -eq "counts" ) {
$tmp="~ Total,~ Copied,~ Skipped,~ Mismatch,~ Failed,~ Extras"
$tmp=$tmp.replace("~","$(removekey $headerparam.name)")
$writer.write(",$($tmp)")
} else {
$writer.write(",$(removekey $HeaderParam.name)")
if($fp){
$writer.write(",Scanned,Newest,Summary")
$writer.WriteLine()
$filecount=0
# Enumerate the files
foreach ($file in $files) {
$filecount++
write-host "$filecount/$($files.count) $($file.name) ($($file.length) bytes)"
$results=@{}
$Stream = $file.Open([System.IO.FileMode]::Open,
[System.IO.FileAccess]::Read,
[System.IO.FileShare]::ReadWrite)
$reader = New-Object System.IO.StreamReader($Stream)
#$filestream=new-object -typename System.IO.StreamReader -argumentlist $file, $true, [System.IO.FileAccess]::Read
$HeaderFooter = Get-Top $reader 16
if ( $HeaderFooter -match "ROBOCOPY :: Robust File Copy for Windows" ) {
if ( $HeaderFooter -match "Files : " ) {
$HeaderFooter = $HeaderFooter -notmatch "Files : "
[long]$ReaderEndHeader=$reader.BaseStream.position
$Footer = Get-Tail $reader 16
$ErrorFooter = $Footer -match "ERROR \d \(0x000000\d\d\) Accessing Source Directory"
if ($ErrorFooter) {
$ProcessCounts["Error"]++
write-host -foregroundcolor red "`t $ErrorFooter"
} elseif ( $footer -match "---------------" ) {
$ProcessCounts["Processed"]++
$i=$Footer.count
while ( !($Footer[$i] -like "*----------------------*") -or $i -lt 1 ) { $i-- }
$Footer=$Footer[$i..$Footer.Count]
$HeaderFooter+=$Footer
} else {
$ProcessCounts["Incomplete"]++
write-host -foregroundcolor yellow "`t Log file $file is missing the footer and may be incomplete"
foreach ( $HeaderParam in $headerparams.GetEnumerator() | Sort-Object Name ) {
$name = "$(removekey $HeaderParam.Name)"
$tmp = GetValue $($HeaderFooter -match "$name : ") $name
if ( $tmp -ne "" -and $tmp -ne $null ) {
switch ( $HeaderParam.value ) {
"date" { $results[$name]=UnBodgeDate $tmp.trim() }
"counts" { $results[$name]=UnpackParams $tmp }
"string" { $results[$name] = """$($tmp.trim())""" }
default { $results[$name] = $tmp.trim() }
if ( $fp ) {
write-host "Parsing $($reader.BaseStream.Length) bytes"
# Now go through the file line by line
$reader.BaseStream.Position=0
$filesdone = $false
$linenumber=0
$FileResults=@{}
$newest=[datetime]"1/1/1900"
$linecount++
$firsttick=$elapsedtime.elapsed.TotalSeconds
$tick=$firsttick+$refreshrate
$LastLineLength=1
try {
do {
$line = $reader.ReadLine()
$linenumber++
if (($line -eq "-------------------------------------------------------------------------------" -and $linenumber -gt 16) ) {
# line is end of job
$filesdone=$true
} elseif ($linenumber -gt 16 -and $line -gt "" ) {
$buckets=$line.split($tab)
# this test will pass if the line is a file, fail if a directory
if ( $buckets.count -gt 3 ) {
$status=$buckets[1].trim()
$FileResults["$status"]++
$SizeDateTime=$buckets[3].trim()
if ($sizedatetime.length -gt 19 ) {
$DateTime = $sizedatetime.substring($sizedatetime.length -19)
if ( $DateTime -as [DateTime] ){
$DateTimeValue=[datetime]$DateTime
if ( $DateTimeValue -gt $newest ) { $newest = $DateTimeValue }
if ( $elapsedtime.elapsed.TotalSeconds -gt $tick ) {
$line=$line.Trim()
if ( $line.Length -gt 48 ) {
$line="[...]"+$line.substring($line.Length-48)
$line="$([char]13)Parsing > $($linenumber) ($(($reader.BaseStream.Position/$reader.BaseStream.length).tostring("P1"))) - $line"
write-host $line.PadRight($LastLineLength) -NoNewLine
$LastLineLength = $line.length
$tick=$tick+$refreshrate
} until ($filesdone -or $reader.endofstream)
finally {
$reader.Close()
$line=$($([string][char]13)).padright($lastlinelength)+$([char]13)
write-host $line -NoNewLine
$writer.Write("`"$file`"")
foreach ( $HeaderParam in $HeaderParams.GetEnumerator() | Sort-Object Name ) {
$name = "$(removekey $HeaderParam.Name)"
if ( $results[$name] ) {
$writer.Write(",$($results[$name])")
} else {
if ( $ErrorFooter ) {
#placeholder
} elseif ( $HeaderParam.Value -eq "counts" ) {
$writer.Write(",,,,,,")
} else {
$writer.Write(",")
if ( $ErrorFooter ) {
$tmp = $($ErrorFooter -join "").substring(20)
$tmp=$tmp.substring(0,$tmp.indexof(")")+1)+","+$tmp
$writer.write(",,$tmp")
} elseif ( $fp ) {
$writer.write(",$LineCount,$($newest.ToString('dd/MM/yyyy hh:mm:ss'))")
foreach ( $FileResult in $FileResults.GetEnumerator() ) {
$writer.write(",$($FileResult.Name): $($FileResult.Value);")
$writer.WriteLine()
} else {
write-host -foregroundcolor darkgray "$($file.name) is not recognised as a RoboCopy log file"
write-host "$filecount files scanned in $($elapsedtime.elapsed.tostring()), $($ProcessCounts["Processed"]) complete, $($ProcessCounts["Error"]) have errors, $($ProcessCounts["Incomplete"]) incomplete"
write-host "Results written to $($writer.basestream.name)"
$writer.close()
I hope somebody can help me,
Horst
Thanks, Horst

Hi Horst,
To convert multiple robocopy log files to a .csv file with the "speed" values included, the script below may be helpful for you. I tested with a single robocopy log file, and the .csv file will output to "D:\":
$SourcePath="e:\1\1.txt" #robocopy log file
write-host "Robocopy log parser. $(if($fp){"Parsing file entries"} else {"Parsing summaries only, use -fp to parse file entries"})"
#Arguments
# -fp File parse. Counts status flags and oldest file Slower on big files.
$ElapsedTime = [System.Diagnostics.Stopwatch]::StartNew()
$refreshrate=1 # progress counter refreshes this often when parsing files (in seconds)
# These summary fields always appear in this order in a robocopy log
$HeaderParams = @{
"04|Started" = "date";
"01|Source" = "string";
"02|Dest" = "string";
"03|Options" = "string";
"09|Dirs" = "counts";
"10|Files" = "counts";
"11|Bytes" = "counts";
"12|Times" = "counts";
"05|Ended" = "date";
"07|Speed" = "default";
"08|Speednew" = "default"
}
$ProcessCounts = @{
"Processed" = 0;
"Error" = 0;
"Incomplete" = 0
}
$tab=[char]9
$files=get-childitem $SourcePath
$writer=new-object System.IO.StreamWriter("D:\robocopy-$(get-date -format "dd-MM-yyyy_HH-mm-ss").csv")
function Get-Tail([object]$reader, [int]$count = 10) {
$lineCount = 0
[long]$pos = $reader.BaseStream.Length - 1
while($pos -gt 0)
$reader.BaseStream.position=$pos
# 0x0D (#13) = CR
# 0x0A (#10) = LF
if ($reader.BaseStream.ReadByte() -eq 10)
$lineCount++
if ($lineCount -ge $count) { break }
$pos--
# tests for file shorter than requested tail
if ($lineCount -lt $count -or $pos -ge $reader.BaseStream.Length - 1) {
$reader.BaseStream.Position=0
} else {
# $reader.BaseStream.Position = $pos+1
$lines=@()
while(!$reader.EndOfStream) {
$lines += $reader.ReadLine()
return $lines
function Get-Top([object]$reader, [int]$count = 10)
$lines=@()
$lineCount = 0
$reader.BaseStream.Position=0
while(($linecount -lt $count) -and !$reader.EndOfStream) {
$lineCount++
$lines += $reader.ReadLine()
return $lines
function RemoveKey ( $name ) {
if ( $name -match "|") {
return $name.split("|")[1]
} else {
return ( $name )
function GetValue ( $line, $variable ) {
if ($line -like "*$variable*" -and $line -like "* : *" ) {
$result = $line.substring( $line.IndexOf(":")+1 )
return $result
} else {
return $null
}
function UnBodgeDate ( $dt ) {
# Fixes RoboCopy botched date-times in format Sat Feb 16 00:16:49 2013
if ( $dt -match ".{3} .{3} \d{2} \d{2}:\d{2}:\d{2} \d{4}" ) {
$dt=$dt.split(" ")
$dt=$dt[2],$dt[1],$dt[4],$dt[3]
$dt=$dt -join " "
}
if ( $dt -as [DateTime] ) {
return ([DateTime]$dt).ToString("dd/MM/yyyy hh:mm:ss")
} else {
return $null
}
}
function UnpackParams ($params ) {
# Unpacks file count bloc in the format
# Dirs : 1827 0 1827 0 0 0
# Files : 9791 0 9791 0 0 0
# Bytes : 165.24 m 0 165.24 m 0 0 0
# Times : 1:11:23 0:00:00 0:00:00 1:11:23
# Parameter name already removed
if ( $params.length -ge 58 ) {
$params = $params.ToCharArray()
$result=(0..5)
for ( $i = 0; $i -le 5; $i++ ) {
$result[$i]=$($params[$($i*10 + 1) .. $($i*10 + 9)] -join "").trim()
$result=$result -join ","
} else {
$result = ",,,,,"
return $result
$sourcecount = 0
$targetcount = 1
# Write the header line
$writer.Write("File")
foreach ( $HeaderParam in $HeaderParams.GetEnumerator() | Sort-Object Name ) {
if ( $HeaderParam.value -eq "counts" ) {
$tmp="~ Total,~ Copied,~ Skipped,~ Mismatch,~ Failed,~ Extras"
$tmp=$tmp.replace("~","$(removekey $headerparam.name)")
$writer.write(",$($tmp)")
} else {
$writer.write(",$(removekey $HeaderParam.name)")
if($fp){
$writer.write(",Scanned,Newest,Summary")
$writer.WriteLine()
$filecount=0
# Enumerate the files
foreach ($file in $files) {
$filecount++
write-host "$filecount/$($files.count) $($file.name) ($($file.length) bytes)"
$results=@{}
$Stream = $file.Open([System.IO.FileMode]::Open,
[System.IO.FileAccess]::Read,
[System.IO.FileShare]::ReadWrite)
$reader = New-Object System.IO.StreamReader($Stream)
#$filestream=new-object -typename System.IO.StreamReader -argumentlist $file, $true, [System.IO.FileAccess]::Read
$HeaderFooter = Get-Top $reader 16
if ( $HeaderFooter -match "ROBOCOPY :: Robust File Copy for Windows" ) {
if ( $HeaderFooter -match "Files : " ) {
$HeaderFooter = $HeaderFooter -notmatch "Files : "
[long]$ReaderEndHeader=$reader.BaseStream.position
$Footer = Get-Tail $reader 16
$ErrorFooter = $Footer -match "ERROR \d \(0x000000\d\d\) Accessing Source Directory"
if ($ErrorFooter) {
$ProcessCounts["Error"]++
write-host -foregroundcolor red "`t $ErrorFooter"
} elseif ( $footer -match "---------------" ) {
$ProcessCounts["Processed"]++
$i=$Footer.count
while ( !($Footer[$i] -like "*----------------------*") -or $i -lt 1 ) { $i-- }
$Footer=$Footer[$i..$Footer.Count]
$HeaderFooter+=$Footer
} else {
$ProcessCounts["Incomplete"]++
write-host -foregroundcolor yellow "`t Log file $file is missing the footer and may be incomplete"
foreach ( $HeaderParam in $headerparams.GetEnumerator() | Sort-Object Name ) {
$name = "$(removekey $HeaderParam.Name)"
if ($name -eq "speed"){ #handle the two Speed lines
($HeaderFooter -match "$name : ") | foreach {
$tmp = GetValue $_ "speed"
$results[$name] = $tmp.trim()
$name += "new"
}
} elseif ($name -eq "speednew"){ #already handled above
} else {
$tmp = GetValue $($HeaderFooter -match "$name : ") $name
if ( $tmp -ne "" -and $tmp -ne $null ) {
switch ( $HeaderParam.value ) {
"date" { $results[$name]=UnBodgeDate $tmp.trim() }
"counts" { $results[$name]=UnpackParams $tmp }
"string" { $results[$name] = """$($tmp.trim())""" }
default { $results[$name] = $tmp.trim() }
if ( $fp ) {
write-host "Parsing $($reader.BaseStream.Length) bytes"
# Now go through the file line by line
$reader.BaseStream.Position=0
$filesdone = $false
$linenumber=0
$FileResults=@{}
$newest=[datetime]"1/1/1900"
$linecount++
$firsttick=$elapsedtime.elapsed.TotalSeconds
$tick=$firsttick+$refreshrate
$LastLineLength=1
try {
do {
$line = $reader.ReadLine()
$linenumber++
if (($line -eq "-------------------------------------------------------------------------------" -and $linenumber -gt 16) ) {
# line is end of job
$filesdone=$true
} elseif ($linenumber -gt 16 -and $line -gt "" ) {
$buckets=$line.split($tab)
# this test will pass if the line is a file, fail if a directory
if ( $buckets.count -gt 3 ) {
$status=$buckets[1].trim()
$FileResults["$status"]++
$SizeDateTime=$buckets[3].trim()
if ($sizedatetime.length -gt 19 ) {
$DateTime = $sizedatetime.substring($sizedatetime.length -19)
if ( $DateTime -as [DateTime] ){
$DateTimeValue=[datetime]$DateTime
if ( $DateTimeValue -gt $newest ) { $newest = $DateTimeValue }
if ( $elapsedtime.elapsed.TotalSeconds -gt $tick ) {
$line=$line.Trim()
if ( $line.Length -gt 48 ) {
$line="[...]"+$line.substring($line.Length-48)
$line="$([char]13)Parsing > $($linenumber) ($(($reader.BaseStream.Position/$reader.BaseStream.length).tostring("P1"))) - $line"
write-host $line.PadRight($LastLineLength) -NoNewLine
$LastLineLength = $line.length
$tick=$tick+$refreshrate
} until ($filesdone -or $reader.endofstream)
finally {
$reader.Close()
$line=$($([string][char]13)).padright($lastlinelength)+$([char]13)
write-host $line -NoNewLine
$writer.Write("`"$file`"")
foreach ( $HeaderParam in $HeaderParams.GetEnumerator() | Sort-Object Name ) {
$name = "$(removekey $HeaderParam.Name)"
if ( $results[$name] ) {
$writer.Write(",$($results[$name])")
} else {
if ( $ErrorFooter ) {
#placeholder
} elseif ( $HeaderParam.Value -eq "counts" ) {
$writer.Write(",,,,,,")
} else {
$writer.Write(",")
if ( $ErrorFooter ) {
$tmp = $($ErrorFooter -join "").substring(20)
$tmp=$tmp.substring(0,$tmp.indexof(")")+1)+","+$tmp
$writer.write(",,$tmp")
} elseif ( $fp ) {
$writer.write(",$LineCount,$($newest.ToString('dd/MM/yyyy hh:mm:ss'))")
foreach ( $FileResult in $FileResults.GetEnumerator() ) {
$writer.write(",$($FileResult.Name): $($FileResult.Value);")
$writer.WriteLine()
} else {
write-host -foregroundcolor darkgray "$($file.name) is not recognised as a RoboCopy log file"
write-host "$filecount files scanned in $($elapsedtime.elapsed.tostring()), $($ProcessCounts["Processed"]) complete, $($ProcessCounts["Error"]) have errors, $($ProcessCounts["Incomplete"]) incomplete"
write-host "Results written to $($writer.basestream.name)"
$writer.close()
If you have any other questions, please feel free to let me know.
Best Regards,
Anna Wang
TechNet Community Support -
File transfer speed when on battery power
Hey there - I've noticed that my file transfer speed is ridiculously slow when i'm not plugged in to a power source. Even when my battery is fully charged it's super slow. This happens not only when I'm uploading files from a memory card to my computer but when I'm attaching files to an email, or even sending an email. I've asked other mac users and they say this is not the norm for their machines....any ideas?
thanks!
Sunny

Additionally, you can disable power management for your wireless card completely through iwconfig's power option. I suggest you read the manpage for iwconfig; I have quoted the two related options for you.
txpower
For cards supporting multiple transmit powers, sets the transmit power in dBm. If W is the power in Watt, the power in dBm is P = 30 + 10.log(W). If the value is postfixed by mW, it will be automatically converted to dBm.
In addition, on and off enable and disable the radio, and auto and fixed enable and disable power control (if those features are available).
Examples :
iwconfig eth0 txpower 15
iwconfig eth0 txpower 30mW
iwconfig eth0 txpower auto
iwconfig eth0 txpower off
power
Used to manipulate power management scheme parameters and mode.
To set the period between wake ups, enter period 'value'. To set the timeout before going back to sleep, enter timeout 'value'. You can also add the min and max modifiers. By default, those values are in seconds, append the suffix m or u to specify values in milliseconds or microseconds. Sometimes, those values are without units (number of beacon periods, dwell or similar).
off and on disable and reenable power management. Finally, you may set the power management mode to all (receive all packets), unicast (receive unicast packets only, discard multicast and broadcast) and multicast (receive multicast and broadcast only, discard unicast packets).
Examples :
iwconfig eth0 power period 2
iwconfig eth0 power 500m unicast
iwconfig eth0 power timeout 300u all
iwconfig eth0 power off
iwconfig eth0 power min period 2 power max period 4
website with the current manpage:
http://linux.die.net/man/8/iwconfig
or simply:
$ man iwconfig
Last edited by stefanwilkens (2010-09-13 11:11:37) -
External hard drive, error -36, corrupt files and slow transfer speeds
This is a long post but stay with me.
I've got an external 250GB Lacie drive connected to my 24" Intel iMac via Firewire 800. I have approximately 80GB worth of files (including my iPhoto library) on the Lacie that I'm trying to move over to my iMac but have run into some frustrating problems.
Overall, transfer speeds are erratic – fine at times and SUPER SLOW (5 minutes to transfer a 3MB file) at other times. At first I was transferring entire folders, but after always getting errors (error -36 to be exact), I was having to transfer individual files. This works fine as long as the files aren't too big (which makes the transfer speeds SUPER SLOW) and not corrupt (which gets me an error -36). Is my Lacie crashing on me? Is there any hope?
Here's what I've done to try and salvage my sanity with no resolve to my problems (not exactly in this order):
1) Ran Disk Utility to repair Lacie. Everything came back OK
2) Ran Disk Warrior on Lacie. Found a handful of bad items that Disk Warrior was able to repair. Had Disk Warrior replace the old directory of the Lacie with the new directory.
3) Ran Drive Genius: repaired and rebuilt the directory, did an Integrity Check (came back OK), attempted to Defrag but could not because of errors on the drive, did a Scan for bad blocks (which after running for 3 days! was only 20% complete and had found over 150 bad blocks, at which point I quit to try something else)
4) Checked that my Firewire cable was properly connected and replaced the older one I was using with a brand new one.
5) Disconnected all peripherals except for keyboard, mouse and Lacie.
6) And at times, even viewing a file on the Lacie has caused my Finder to crash, requiring a restart of my iMac.
I'm sure I've left something off I've tried but you get the idea.
I've read that maybe I can transfer my files using Terminal. Yes/No? Does this bypass any corruption the files may have?
What about creating a second partition on the Lacie and moving files over to the second partition? Does this solve anything? I don't know anything about partitioning so maybe I'm talking crazy.
I've also read that iPhoto tends to corrupt files which makes me hate iPhoto, and myself for not backing up.
And I've read a lot about error -36's and none of it sounds good so I'm just hoping that someone might have other suggestions for me to try and save my files (especially my iPhoto files which include baby pictures that if I lose, my wife will kill me).
So if anyone has any ideas, let me know. Please. I'll try anything.

Hi Aaron Thompson,
I am fairly certain you have a bad power supply brick to the LaCie drive. This is a hard problem to troubleshoot as generally when the bricks go bad they do so slowly and the drive appears to power up and stuff but it just doesn't work.
If the drive is under warranty LaCie will replace the brick at no charge. Otherwise they sell them for around $20US on their website.
Luck-
-DP -
Efficient way to get FCE4 Log and Transfer to read .mts files stored on a drive?
Hi All
I've searched the FCE discussion forum and not found an answer verified by more than one user to this question: What is an efficient way to get FCE4 (via the Log and Transfer window) to see .mts files from an AVCHD camera stored on a drive (NOT via the camera -- directly from the drive)?
I am trying to plan the most space-efficient system possible for storing un-transcoded .mts files from a Panasonic AG-HMC151 on a harddrive so that I can easily ingest them into FCE4. I am shooting a long project and I want to be able to look at .mts files so that I can decide which ones to transcode to AIC for the edit.
Since FCE4 cannot see .mts files unless they have their metadata wrapper the question is really 'how do I most efficiently transfer .mts files from the camera to a storage harddrive with their metadata wrappers so that FCE4 can see them via the log and transfer window?'
Nick Holmes, in a reply in this thread
http://discussions.apple.com/thread.jspa?messageID=10423384
gives 2 options: Use the Disk Utility to make a disk image of the whole SD card, or copy the whole contents of the card to a folder. He says he prefers the first option because it makes sure everything on the card is copied.
a) Have other FCE users done this successfully and been able to read the .mts files via Log and Transfer?
In a response to this thread:
http://discussions.apple.com/thread.jspa?messageID=10257620
wallybarthman gives a method for getting Log and Transfer to see .mts files that have been stored on a harddrive without their metadata wrappers by using Toast 9 or 10.
b) Have any other FCE4 users used this method? Does it work well?
c) Why is FCE4 unable to see .mts files without their metadata wrappers in the Log and Transfer window? Is it just a matter of writing a few lines of code?
d) Is there an archiving / library app. on the market that would allow one to file / name / tag many .mts clips and view them prior to transcoding into space-hungry AIC files in FCE?
Any/all help would be most gratefully received!
I have saved the complete file structure on DVD as a backup, but have not needed to open them yet. But I will add this: as I understand the options with Toast, you are in fact converting the video to AIC or something like it. I haven't looked into it myself, and I can't imagine the extra files are that large, but maybe they are significant; I don't know. The transcoded files are huge in comparison to the AVCHD files.
A new player on the scene for AVCHD is ClipWrap 2.0. As I understand this product, it rewraps the AVCHD into a wrapper that QuickTime can open and play. This works with the MTS files only; the rest of the file structure is not needed. The rewrap is much faster than the transcode to AIC, so you get the added benefit of being able to play the files as well as not storing the extra files. The 2.0 version (which is for AVCHD) was just recently released. I haven't tried it and don't personally know of anyone who has. You might want to try it; there is a trial version as I recall. -
K8T Mstr2-FAR7 ---- File Transfer Speed
I measured file transfer speed between partitions, 1) both in the same physical drive and 2) each in a different physical drive. The measurements were made on WXP and WVT by using a file having 1048MB in size. Results are as follows:
1) Partitions in the same physical Drive
WXPx86 : 18.55 seconds
WVTx86 : 38.91 seconds
2) Each partition in different physical drive
WXPx86 : 9.31 seconds
WVTx86 : 19.13 seconds
Are the above results normal? Vista's speed appears way lower than the maximum allowed by SATA-150. The speed may be affected by many factors. My hardware spec is shown below. RAM is in single-channel mode and is running as DDR333.
Could anyone shed light on the matter of file transfer speed?
It's because Vista is buggy, everybody knows this.
If you want speed, use XP.
And that isn't the only problem with Vista, also graphics is 3x faster under XP. -
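Rather than arguing over numbers, the XP-vs-Vista timings above are easy to reproduce yourself. A minimal Python sketch that times a single large-file copy and reports throughput — the 64 MB test size and temp-directory paths are illustrative; point `timed_copy` at the actual partitions or drives you want to compare:

```python
import os
import shutil
import tempfile
import time


def timed_copy(src: str, dst: str) -> float:
    """Copy src to dst and return throughput in MB/s."""
    size = os.path.getsize(src)
    start = time.perf_counter()
    shutil.copyfile(src, dst)
    elapsed = time.perf_counter() - start
    return (size / (1024 * 1024)) / elapsed


# Demo with a throwaway 64 MB file of random bytes; replace the
# paths with the source/destination partitions you want to test.
with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "testfile.bin")
    with open(src, "wb") as f:
        f.write(os.urandom(64 * 1024 * 1024))
    mb_per_sec = timed_copy(src, os.path.join(tmp, "copy.bin"))
    print(f"{mb_per_sec:.1f} MB/s")
```

Note that a same-drive copy (the partitions-on-one-disk case) reads and writes through the same heads, so roughly half the throughput of a two-drive copy is expected regardless of OS.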
DataGuard Windows 9201 - log file transfer interrupted with a big redo log
OS WINDOWS
Oracle 9201
Primary: service_name orcl1 db_name orcl1
Standby: service_name orcl2 db_name orcl1
Same dir structure distribute on different VMware machine but connect with a real physical fiber network enviorment, two node distance more than 20km.
LOG FILE - 100M
MAX PERFORMACE MODE
We get a successful result when we issue 'alter system switch logfile' manually; the log is usually smaller than 20 MB.
But when we try to switch a full redo log, the error occurs and the log can't be transferred to the standby site.
It seems the transfer is interrupted for some unexplained reason.
We checked the network ping, lsnrctl status of the service name, the Data Guard configuration and the Windows TCP/IP configuration, but reached no conclusion.
We're going crazy!! Help!
the log trace that use log_archive_trace=128 on primary site show:
Destination LOG_ARCHIVE_DEST_2 is in CLUSTER CONSISTENT mode
Destination LOG_ARCHIVE_DEST_2 is in MAXIMUM PERFORMANCE mode
- Created archivelog as 'C:\ORACLE\ORAARCH\ARC00095.001'
*** 2010-09-02 15:30:39.000
Fail to ping standby 'orcl2', error = 12571
Error 12571 when pinging standby orcl2.
*** 2010-09-02 15:30:39.000
kcrrfail: dest:2 err:12571 force:0
*** 2010-09-02 15:31:40.000
Fail to ping standby 'orcl2', error = 1010
Error 1010 when pinging standby orcl2.
*** 2010-09-02 15:31:41.000
kcrrfail: dest:2 err:1010 force:0
*** 2010-09-02 15:32:32.000
Setting trace level: 31 (1f)
*** 2010-09-02 15:32:32.000
ARC0: Evaluating archive log 3 thread 1 sequence 97
VALIDATE
PREPARE
*** 2010-09-02 15:32:32.000
Acquiring global enqueue on thread 1 sequence 97
*** 2010-09-02 15:32:32.000
Acquired global enqueue on thread 1 sequence 97
INITIALIZE
SPOOL
*** 2010-09-02 15:32:32.000
ARC0: Beginning to archive log 3 thread 1 sequence 97
*** 2010-09-02 15:32:32.000
Creating archive destination LOG_ARCHIVE_DEST_2: 'orcl2'
Network re-configuration required
Detaching RFS server from standby instance at 'orcl2'
RFS message number 151
Error 1010 detaching RFS from standby instance at host 'orcl2'
Disconnecting from destination LOG_ARCHIVE_DEST_2 standby host 'orcl2'
Ignoring kcrrvnc() detach error 1010
Primary database is in CLUSTER CONSISTENT mode
Primary database is in MAXIMUM PERFORMANCE mode
Connecting to destination LOG_ARCHIVE_DEST_2 standby host 'orcl2'
Attaching RFS server to standby instance at 'orcl2'
RFS message number 152
Dest LOG_ARCHIVE_DEST_2 standby mount ID: '42590f20'
Standby database restarted; old mount ID 0x4258a5ae now 0x42590f20
Destination LOG_ARCHIVE_DEST_2 is in CLUSTER CONSISTENT mode
Destination LOG_ARCHIVE_DEST_2 is in MAXIMUM PERFORMANCE mode
Issuing standby Create archive destination at 'orcl2'
RFS message number 153
*** 2010-09-02 15:32:32.000
Creating archive destination LOG_ARCHIVE_DEST_1: 'C:\ORACLE\ORAARCH\ARC00097.001'
- Created archivelog as 'C:\ORACLE\ORAARCH\ARC00097.001'
Dest LOG_ARCHIVE_DEST_1 primary mount ID: '0x42586021'
Archiving block 1 count 2048 block(s) to 'orcl2'
Issuing standby archive of block 1 count 2048 to 'orcl2'
RFS message number 154
Archiving block 1 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
Archiving block 2049 count 2048 block(s) to 'orcl2'
Issuing standby archive of block 2049 count 2048 to 'orcl2'
RFS message number 155
Archiving block 2049 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
[... the same four-line pattern repeats for blocks 4097 through 43009 (RFS messages 156-175), archiving 2048-block chunks to both 'orcl2' and 'C:\ORACLE\ORAARCH\ARC00097.001' ...]
Archiving block 45057 count 2048 block(s) to 'orcl2'
Issuing standby archive of block 45057 count 2048 to 'orcl2'
RFS message number 176
*** 2010-09-02 15:33:22.000
RFS network connection lost at host 'orcl2'
Error 3114 writing standby archive log file at host 'orcl2'
*** 2010-09-02 15:33:22.000
ARC0: I/O error 3114 archiving log 3 to 'orcl2'
*** 2010-09-02 15:33:22.000
kcrrfail: dest:2 err:3114 force:0
Local destination LOG_ARCHIVE_DEST_1 is still active
ORA-03114: not connected to ORACLE
Archiving block 45057 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
Archiving block 47105 count 2048 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
[... local-only archiving continues in 2048-block chunks for blocks 49153 through 200705 to 'C:\ORACLE\ORAARCH\ARC00097.001' ...]
Archiving block 202753 count 2024 block(s) to 'C:\ORACLE\ORAARCH\ARC00097.001'
Closing archive destination LOG_ARCHIVE_DEST_1: C:\ORACLE\ORAARCH\ARC00097.001
FINISH
Archival failure destination LOG_ARCHIVE_DEST_2: 'orcl2'
Archival success destination LOG_ARCHIVE_DEST_1: 'C:\ORACLE\ORAARCH\ARC00097.001'
COMPLETE, min-succeed count met
*** 2010-09-02 15:33:27.000
ArchivedLog entry added for thread 1 sequence 97 ID 0x42585a2b: C:\ORACLE\ORAARCH\ARC00097.001
Marking [1] log 3 thread 1 sequence 97 spooled
Updating thread 1 sequence 97 archive SCN 0:4503061
Scanning 'to be archived' list': kcrrdal
log 2 thread 1 sequence 98
Completed 'to be archived' list
*** 2010-09-02 15:33:27.000
Releasing global enqueue
ARCHIVED
*** 2010-09-02 15:33:27.000
ARC0: Completed archiving log 3 thread 1 sequence 97
Scanning 'to be archived' list': kcrrwk
log 2 thread 1 sequence 98
Completed 'to be archived' list
Scanning 'to be archived' list': kcrrwk
log 2 thread 1 sequence 98
Completed 'to be archived' list
*** 2010-09-02 15:34:29.000
ARC0: Heartbeat ticks... (thread 1)
Establishing link for destination LOG_ARCHIVE_DEST_2 to standby orcl2
Primary database is in CLUSTER CONSISTENT mode
Primary database is in MAXIMUM PERFORMANCE mode
Connecting to destination LOG_ARCHIVE_DEST_2 standby host 'orcl2'
Attaching RFS server to standby instance at 'orcl2'
RFS message number 177
Dest LOG_ARCHIVE_DEST_2 standby mount ID: '42590f20'
Pinging destination LOG_ARCHIVE_DEST_2 at standby orcl2
RFS message number 178
Not in RAC mode
*** 2010-09-02 15:35:30.000
ARC0: Heartbeat ticks... (thread 1)
Establishing link for destination LOG_ARCHIVE_DEST_2 to standby orcl2
Pinging destination LOG_ARCHIVE_DEST_2 at standby orcl2
RFS message number 179
Not in RAC mode
*** 2010-09-02 15:36:22.000
ARC0: Heartbeat ticks... (thread 1)
Establishing link for destination LOG_ARCHIVE_DEST_2 to standby orcl2
Pinging destination LOG_ARCHIVE_DEST_2 at standby orcl2
RFS message number 180
Not in RAC mode
*** 2010-09-02 15:36:39.000
Setting trace level: 128 (80)
Setting trace level: 128 (80)
Destination LOG_ARCHIVE_DEST_2 is in CLUSTER CONSISTENT mode
Destination LOG_ARCHIVE_DEST_2 is in MAXIMUM PERFORMANCE mode
- Created archivelog as 'C:\ORACLE\ORAARCH\ARC00099.001'
Setting trace level: 128 (80)
*** 2010-09-02 15:37:32.000
Setting trace level: 128 (80)
Something is going on in your network:
RFS network connection lost at host 'orcl2'
Error 3114 writing standby archive log file at host 'orcl2'
Network Administrators may help -
Measure file-transfer speed?
Hello to all of you!
I would like to know if there's a way to monitor the speed during a file transfer (either from the internal disk to an external, or from an internal folder to another internal folder). I'm not referring to online file tranfers (eg. FTP).
To help describe my question even more, lets say I'm transfering 5GB of data. I can see the estimated time in the progress window, but I'd like to view the speed at which the data is transfered.
Any tips?
Thanks a lot!
Efthymis, it really depends on just what you want to measure. 'Benchmarking', which is what you are trying to do, is (truly) a complex art IF you want meaningful results. There are various special-purpose tools; you CAN'T rely on the computer's/program's estimates, because they tend to be second-by-second and don't allow for start-up, overheads, etc., and are usually quite misleading.
You WILL get different results if you transfer 100 * 1MB files compared to those you get if you xfer one 100MB file, for example.
The most accurate test of speed uses a set of known file sizes, e.g. 100 MB, 500 MB, etc., a stopwatch and a quick hand.
For an accurate 'real-life' test, include a folder containing 3 or 4 real-life file sizes, e.g. a few 2 KB, a few 100 KB, a few 500 KB, a 1 MB and a couple of 5 MB files. Duplicate that folder a few times, place the results in yet another folder, then copy THAT somewhere whilst armed with a stopwatch.
Otherwise, as I said here, there are various tools (see versiontracker and try a few of those)
best of luck -
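The mixed-size "real-life" folder test described above can also be automated rather than stopwatched. A hedged Python sketch — the file sizes and counts here are illustrative, not prescriptive:

```python
import os
import shutil
import tempfile
import time

# A mix of small and large files, roughly matching the suggested
# real-life test: a few KB-sized files up to a couple of 5 MB ones.
SIZES = ([2 * 1024] * 4 + [100 * 1024] * 4 + [500 * 1024] * 4
         + [1024 * 1024, 5 * 1024 ** 2, 5 * 1024 ** 2])


def build_test_folder(root: str) -> int:
    """Create mixed-size files of random bytes; return total bytes written."""
    total = 0
    for i, size in enumerate(SIZES):
        with open(os.path.join(root, f"file_{i:02d}.bin"), "wb") as f:
            f.write(os.urandom(size))
        total += size
    return total


with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "src")
    os.makedirs(src)
    total_bytes = build_test_folder(src)
    start = time.perf_counter()
    shutil.copytree(src, os.path.join(tmp, "dst"))  # copy the whole tree
    elapsed = time.perf_counter() - start
    print(f"{total_bytes / 1024 ** 2:.1f} MB in {elapsed:.2f}s "
          f"({total_bytes / 1024 ** 2 / elapsed:.1f} MB/s)")
```

As the reply notes, many small files will measure much slower per megabyte than one big file, because per-file overhead dominates.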
Local file transfer speed slow on E1200
On a local file transfer (computer to NAS or NAS to computer) my file transfer speed with the E1200 is way too slow. I would expect the Wireless N speed to be at least 54mbps (megabits per second), which would be equivalent to 6.75 MegaBytes per second.
In practice the speed of transfer appears to be at 8mbps (1 MegaByte per second). Here is a screen shot:
The speed is the same whether I copy from the computer (on a wireless N link) to the NAS, or from the NAS (on a 100Mbit ethernet port on the E1200) to the computer. The NAS is brand new, auto configured to a RAID-1 setup with two new drives.
Anyone know why this is? is the encryption (WPA2-PSK/AES) slowing down the transfer? The same problem occurred with copying a folder of photos as with copying this large file. I don't think I can upgrade the firmware on my device, as I have the first version of the E1200.
Solved!
Go to Solution.
Sabretooth --
There is no option to set the QoS to zero. I can set it to "auto", in which case the number has no effect.
I could set it to manual but it restricts me to a range as you can see from the graphic.
In case you are wondering, my internet speed is 10Mbps downstream, 5Mbps upstream. So I am getting the same speed for the internet that I am getting for a local transfer.
The next troubleshooting step is to bypass the wireless and hook my PC to the router. I will let you know if that makes a difference. -
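The megabit/megabyte confusion that runs through this thread is just a factor of 8. A tiny sketch checking the link rates mentioned above:

```python
def mbps_to_MBps(megabits_per_sec: float) -> float:
    """Convert a link rate in megabits/s to file-transfer megabytes/s."""
    return megabits_per_sec / 8.0


# The rates mentioned in this thread:
assert mbps_to_MBps(8) == 1.0      # observed transfer: 8 Mbps ≈ 1 MB/s
assert mbps_to_MBps(54) == 6.75    # 802.11g/N nominal 54 Mbps ≈ 6.75 MB/s
assert mbps_to_MBps(100) == 12.5   # the E1200's fast-ethernet port
print("all conversions check out")
```

Real wireless throughput is typically well below the nominal rate once protocol overhead, encryption and signal quality are accounted for, so 1 MB/s over Wireless N is slow but not a factor-of-50 anomaly.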
Slow file transfer speed on OS X / Windows LAN
I have a Mac Mini with OS X Server (Yosemite) on a network of 5 Windows 7 PCs, the server hosts FileMaker Server 13 and also some files in a shared folder.
I notice the file transfer speed when copying a file from one PC up to the server's shared folder is around 350KB/second. Is this an acceptable speed? How could I improve it?
Thanks for your help.
I'm having the same issues.. I recorded a bunch of video with the 4S.. now I'm copying them to my Windows 7 PC with it peaking out at 900KB/sec. My Corsair thumb drive in the same port gets 5-10x this speed. Any luck getting faster speeds?
-
Configure log file transfer to downstream capture daabase!
Dear all,
I am setting up bidirectional replication among 2 database server that are Linux based and oracle 11gR1 is the database.
I am following the Oracle Streams Administrator's Guide and have completed all the pre-configuration tasks, but I am confused by the step where log file transfer to the downstream capture database is configured.
I am unable to understand this from the documentation.
I mean, how do I configure Oracle Net so that the source and downstream databases can communicate with each other in bi-directional replication?
Configure authentication at both databases to support the transfer of redo data? How can I do this?
The third thing is the parameter settings, which obviously I can do.
Kindly help me through this step.
Regards, Imran
And what about this:
Configure authentication at both databases to support the transfer of redo data?
Thanks, Imran
For communication between the two databases, you create a Streams administrator at both databases. The strmadmin users talk to each other.
Regards,
S.K.