ACS - current log file CSMonLog Active.csv is showing blank

Under the ACS Service Monitoring tab, the current log file CSMonLog Active.csv is showing blank.
Could anyone let me know why this happens?

CSMon—the CSMon service is responsible for monitoring, recording, and notification of Cisco Secure ACS performance, and includes automatic response to some scenarios. For instance, if the TACACS+ or RADIUS service dies, CS ACS by default restarts all services unless configured otherwise. Monitoring covers the overall status of Cisco Secure ACS and the system on which it is running. CSMon actively monitors three basic sets of system parameters:
    Generic host system state—monitors disk space, processor utilization, and memory utilization.
    Application-specific performance—performs a test login using a special built-in test account, by default once each minute.
    System resource consumption by Cisco Secure ACS—CSMon periodically monitors and records Cisco Secure ACS's usage of a small set of key system resources: handle counts, memory utilization, processor utilization, threads used, and failed log-on attempts, and compares these against predetermined thresholds for indications of atypical behavior.
CSMon works with CSAuth to keep track of user accounts that are disabled for exceeding their maximum failed-attempts count. If configured, CSMon provides immediate warning of brute-force attacks by alerting the administrator that a large number of accounts have been disabled.
By default, CSMon records exception events both in a CSV file and in the Windows Event Log, which you can use to diagnose problems. Optionally, you can configure event notification via e-mail, so that notification of exception events and outcomes includes the current state of Cisco Secure ACS at the time of message transmission. The default notification method is Simple Mail Transfer Protocol (SMTP) e-mail, but you can create scripts to enable other methods. If the event is a failure, CSMon takes hard-coded actions when the triggering event is detected, and after a sequence of retries it also attempts to fix the cause of the failure, including restarting individual services. If the event is a warning, it is logged, the administrator is notified (if configured), and no further action is taken. It is also possible to integrate custom-defined actions with the CSMon service, so that a user-defined action can be taken based on specific events.
Answering your query: this may be a brand-new installation, none of the ACS services may have restarted lately (so there is nothing new to log), or CSMon logging may have been disabled under System Configuration > Logging.
~BR
Jatin Katyal
**Do rate helpful posts**

Similar Messages

  • Current log file is lost

    Hi all,
    I am using Oracle 9i on Windows; my database is in archive log mode and I have 2 log groups. I have now lost my current log file, and I understand this is resolved with incomplete recovery.
    Kindly, can anyone tell me the command? How can I recover it, and shall I use the backup control file?
    senthil

    Plan A. Is your log group multiplexed? Drop the lost logfile member and add a new one.
    ALTER SYSTEM SWITCH LOGFILE;
    ALTER DATABASE DROP LOGFILE MEMBER 'lost redo log';
    ALTER DATABASE ADD LOGFILE MEMBER 'new redo log file' TO GROUP <n>;
    Plan B. It is not multiplexed, but the database is still open.
    ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP <n>;
    ALTER SYSTEM SWITCH LOGFILE;
    ALTER DATABASE DROP LOGFILE MEMBER 'lost redo log';
    ALTER DATABASE ADD LOGFILE MEMBER 'new redo log file' TO GROUP <n>;
    Plan C. Not multiplexed, and the database is shut down.
    Perform incomplete recovery.
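    For completeness, a minimal SQL*Plus sketch of Plan C, assuming you are not using RMAN (the same incomplete-recovery sequence is covered in more detail in the "Missing current redo log files" thread below):
    1) Shut down the instance and restore all data files (*.dbf) from the most recent backup.
    2) SQL> startup mount;
    3) SQL> recover database until cancel;
       (apply archived logs up to, but not including, the lost current log, then type CANCEL)
    4) SQL> alter database open resetlogs;
    5) Take a fresh full backup.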

  • File system task - Move file issue, skip the current log file that's open by another process - IIS logs

    Hi,
    I have created a package to move IIS log files from their source directory to another directory called processing. However, when the package tries to move the current log file it errors because the file is in use by another process (IIS is still writing
    to it). I would like the package to just ignore the "in-use" file, process the rest, and show the package completing successfully. The file in use will be rolled by IIS at midnight and will be picked up by the next package run.
    However, when the file is rolled and becomes free, a new log file is created which will in turn be marked as "in-use" and should be ignored at the next run.
    Is there some way that I can tell the file system task to ignore in use files or add an event handler to do the same sort of thing?
    Any assistance is appreciated

    Hi Arthur,
    Thank you for your reply.
    I have resolved the issue with the following example:
    http://www.timmitchell.net/post/2010/08/23/ssis-conditional-file-processing-in-a-foreach-loop/
    What I realised was that I needed to just ignore the current log file which gets created daily, so using the above example (and changing the last-written date to the created date) I am able to ignore/skip the log that's in use.
    Adam

  • Logical Standby 'CURRENT' log files

    I have an issue with a Logical Standby implementation where although everything in the Grid Control 'Data Guard' page is Normal, when I view Log File Details I have 62 files listed with a status of 'Committed Transactions Applied'.
    The oldest (2489) of these files is over 2 days old and the newest (2549) is 1 day old. The most recent applied log is 2564 (current log is 2565).
    As for the actual APPLIED_SCN in the standby, it's greater than the highest NEXT_CHANGE# for the newest logfile 2549. The READ_SCN is less than the NEXT_CHANGE# of the oldest log (2489) appearing in the list of files - which is why all these log files appear on this list.
    I am confused why the READ_SCN is not advancing. The documentation states that once the NEXT_CHANGE# of a logfile falls below READ_SCN the information in those logs has been applied or 'persistently stored in the database'.
    Is it possible that there is a transaction that spans all these log files? More recent logfiles have dropped off the list and have been applied or 'persistently stored'.
    Basically I'm unsure how to proceed and clean up this list of files and ensure that everything has been applied.
    Regards
    Graeme King

    Thank you Larry. I have actually already reviewed this document. We are not getting the error they list for long running transactions though.
    I wonder if it is related to the RMAN restore we did, where I restored the whole standby database while the standby redo log files obviously were not restored and therefore were 'newer' than the restored database?
    After I restored I did see lots of trace files with this message:
    ORA-00314: log 5 of thread 1, expected sequence# 2390 doesn't match 2428
    ORA-00312: online log 5 thread 1: 'F:\ORACLE\PRODUCT\10.2.0\PR0D_SRL0.F'
    ORA-00314: log 5 of thread 1, expected sequence# 2390 doesn't match 2428
    ORA-00312: online log 5 thread 1: 'F:\ORACLE\PRODUCT\10.2.0\PR0D_SRL0.F'
    I just stopped and restarted the SQL apply and sure enough it cycled through all the log files in the list (from the READ_SCN onwards), but they are still in the list. Also, there is very little activity on this non-production database.
    regards
    Graeme
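    As a starting point for the investigation above, a couple of hedged queries against the standard Data Guard views may help confirm where READ_SCN sits relative to the logs still listed (column names are as in the 10g reference; verify on your release):
    SQL> select applied_scn, read_scn, newest_scn from dba_logstdby_progress;
    SQL> select sequence#, first_change#, next_change#, applied
         from dba_logstdby_log order by sequence#;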

  • How to open and see log files in Active Directory

    Hello Friends..   ^-^
    how can I open the Active Directory log files and see the data in them?
    Can these logs be exported?
    Thanks for the help.

    And adds a definition of edbxxxxx.log for completeness:
    These are auxiliary transaction logs used to store changes if the main Edb.log file gets full before it can be flushed to Ntds.dit. The xxxxx stands for a sequential number in hex. When the Edb.log file fills up, an Edbtemp.log file is opened. The original Edb.log file is renamed to Edb00001.log, Edbtemp.log is renamed to Edb.log, and the process starts over again. Excess log files are deleted after they have been committed. You may see more than one Edbxxxxx.log file if a busy domain controller has many updates pending.
    Please take a moment to "Vote as Helpful" and/or "Mark as Answer", where applicable. This helps the community, keeps the forums tidy, and recognises useful contributions. Thank you!

  • Logging File Management Activity

    We are trying to figure out if Dreamweaver keeps a text file that logs the history of renaming and moving files within a site.  We want a list of both the old and new file name and path for each file moved and/or renamed within Dreamweaver.  We tried the "Log..." button below the Files pane in Dreamweaver 8, but after making some test changes we found that it lists "File activity complete" for each change made instead of the specific changes applied.  Is there a way to get this list?  Or do we need to use subversion software in order to do this?
    Thank you!

    The log feature is meant for syncing local vs. server. If you must track local changes, you will indeed need to use other tools. I'm not sure why you would go for Subversion, though. Since it uses explicit check-in/check-out it doesn't really track anything automatically, and its primary purpose is documenting changes inside specific files anyway, not their location. It would also often duplicate files rather than just move them, as it uses its own versioning. What you want could be achieved using tools like GridIron Flow or, most simply, by configuring your intranet file server software to use shadow copies and log all user activity.
    Mylenium

  • Any ways to roll over to a different log file when the current log file gets big

    How can I roll over to a different log file when the current one reaches its maximum size?
    Are there any ways of doing this?

    More info at the new owner's site:
    http://www.oracle.com/technology/pub/articles/hunter_logging.html
    And more here, on building a configuration file with a FileHandler properly set to a specified size:
    http://www.linuxtopia.org/online_books/programming_books/thinking_in_java/TIJ317_021.htm

  • Audit Current logged in User activity

    Dear Legends,
    I am trying to find out who is logged in to our database and what queries are being executed by those users. While trying out the query below:
    QUERY
    SELECT DISTINCT
      USERNAME,
      STATUS,
      SCHEMANAME,
      OSUSER,
      MACHINE,
      TO_CHAR(AR.LAST_LOAD_TIME, 'DD-Mon-YYYY HH24:MI:SS') LOAD_TIME,
      ss.WAIT_CLASS,
      ss.SECONDS_IN_WAIT,
      ss.STATE,
      ar.sql_text SQLTEXT
    FROM
      v$session ss,
      v$sqlarea ar
    WHERE
      ss.MACHINE NOT LIKE 'ip%'
    AND ss.STATUS = 'ACTIVE'
    AND ss.SQL_ADDRESS = ar.ADDRESS;
    The issue is that when I execute the above query it returns valid output showing that I'm logged in from a machine with SQL Developer as the client, but I'm not able to view the same query's output in SQL*Plus. I have scheduled this in cron, so I need this query to work from SQL*Plus.
    Do you know why? Let me know your suggestions.
    Note: I need the condition "ss.MACHINE NOT LIKE 'ip%'"; without it, the query also fetches the hosting server's machine and user.
    Thanks,
    Karthik

    I'm able to view output without the "ss.MACHINE NOT LIKE 'ip%'" condition, but that also fetches the internal Oracle user. I need only the users who are logged in from a remote machine using any tool except SQL*Plus, because we have restricted SQL*Plus access.
    BELOW is the output from SQLPLUS
    SQL> SELECT DISTINCT
      USERNAME,
      2    3    STATUS,
      4    SCHEMANAME,
      5    OSUSER,
      6    MACHINE,
      7    TO_CHAR(AR.LAST_LOAD_TIME, 'DD-Mon-YYYY HH24:MM:SS') LOAD_TIME,
      8    ss.WAIT_CLASS,
      9    ss.SECONDS_IN_WAIT,
      ss.STATE,
    10   11    ar.sql_text SQLTEXT
    12  FROM
    13    v$session ss,
    14    v$sqlarea ar
    15  WHERE ss.STATUS = 'ACTIVE'
    16  AND ss.SQL_ADDRESS = ar.ADDRESS;
    USERNAME                       STATUS   SCHEMANAME
    OSUSER
    MACHINE
    LOAD_TIME
    WAIT_CLASS                                                       SECONDS_IN_WAIT
    STATE
    SQLTEXT
    APPS                           ACTIVE   APPS
    oracle
    ip-10-20-30-40
    27-Jan-2014 15:01:17
    Idle                                                                           5
    WAITING
    begin fndcp_tmsrv.read_message(:ec, :to, :typ, :enddt, :rid, :retid, :nl, :nc, :
    dl, :secgrp, :usr, :rspap, :rsp, :log, :app, :prg, :argc, :orgt, :orgid, :a1,  :
    a2,  :a3,  :a4,  :a5,  :a6,  :a7,  :a8,  :a9,  :a10, :a11, :a12, :a13, :a14, :a1
    5, :a16, :a17, :a18, :a19, :a20); end;
    APPS                           ACTIVE   APPS
    oracle
    ip-10-20-30-40
    28-Jan-2014 22:01:48
    Network                                                                        0
    WAITED SHORT TIME
    SELECT DISTINCT   USERNAME,   STATUS,   SCHEMANAME,   OSUSER,   MACHINE,   TO_CH
    AR(AR.LAST_LOAD_TIME, 'DD-Mon-YYYY HH24:MM:SS') LOAD_TIME,   ss.WAIT_CLASS,   ss
    .SECONDS_IN_WAIT,   ss.STATE,   ar.sql_text SQLTEXT FROM   v$session ss,   v$sql
    area ar WHERE ss.STATUS = 'ACTIVE' AND ss.SQL_ADDRESS = ar.ADDRESS
                                   ACTIVE   SYS
    oracle
    ip-10-20-30-40
    27-Jan-2014 15:01:16
    Idle                                                                           0
    WAITING
    update seg$ set type#=:4,blocks=:5,extents=:6,minexts=:7,maxexts=:8,extsize=:9,e
    xtpct=:10,user#=:11,iniexts=:12,lists=decode(:13, 65535, NULL, :13),groups=decod
    e(:14, 65535, NULL, :14), cachehint=:15, hwmincr=:16, spare1=DECODE(:17,0,NULL,:
    17),scanhint=:18, bitmapranges=:19 where ts#=:1 and file#=:2 and block#=:3
    APPS                           ACTIVE   APPS
    oracle
    ip-10-20-30-40
    27-Jan-2014 15:01:17
    Idle                                                                           4
    WAITING
    begin fndcp_tmsrv.read_message(:ec, :to, :typ, :enddt, :rid, :retid, :nl, :nc, :
    dl, :secgrp, :usr, :rspap, :rsp, :log, :app, :prg, :argc, :orgt, :orgid, :a1,  :
    a2,  :a3,  :a4,  :a5,  :a6,  :a7,  :a8,  :a9,  :a10, :a11, :a12, :a13, :a14, :a1
    5, :a16, :a17, :a18, :a19, :a20); end;
    APPS                           ACTIVE   APPS
    oracle
    ip-10-20-30-40
    27-Jan-2014 15:01:17
    Idle                                                                           3
    WAITING
    begin fndcp_tmsrv.read_message(:ec, :to, :typ, :enddt, :rid, :retid, :nl, :nc, :
    dl, :secgrp, :usr, :rspap, :rsp, :log, :app, :prg, :argc, :orgt, :orgid, :a1,  :
    a2,  :a3,  :a4,  :a5,  :a6,  :a7,  :a8,  :a9,  :a10, :a11, :a12, :a13, :a14, :a1
    5, :a16, :a17, :a18, :a19, :a20); end;
    SQL>
    Message was edited by: karthiksingh_dba
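    Given the requirement stated above (exclude internal/background sessions and SQL*Plus clients rather than filtering on machine name), a hedged variant of the filter might look like the sketch below; the V$SESSION columns used are standard, but treat this as a starting point rather than the final query:
    SELECT DISTINCT ss.username, ss.status, ss.osuser, ss.machine, ss.program, ar.sql_text
    FROM   v$session ss, v$sqlarea ar
    WHERE  ss.type = 'USER'                         -- exclude background/internal sessions
    AND    ss.username IS NOT NULL
    AND    UPPER(ss.program) NOT LIKE 'SQLPLUS%'    -- exclude SQL*Plus clients
    AND    ss.status = 'ACTIVE'
    AND    ss.sql_address = ar.address;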

  • Missing current redo log files

    Hello to all,
    I have a question; please guide me if you can.
    If I lose all current redo log files (the files of the current redo log group), how can I repair this and open the
    database? (I don't have a problem with missing INACTIVE redo groups.)
    thanks

    Hi,
    >>if i lose all current redo log files (current redo log group files) how i can repair it and open database ? (i don't have problem by missing INACTIVE redo group)
    Well, it depends. Was the database active when the current log file was lost? Are you using RMAN? Is the database operating in ARCHIVELOG mode or NOARCHIVELOG mode? Basically, an incomplete recovery is necessary when there is a loss of the current redo log files. It means that you don't have all the redo log files up to the point of failure, so the only alternative is to recover to a point prior to the failure. To perform incomplete recovery after the redo log files have been lost, you do the following (if you are not using RMAN):
    1) Execute a SHUTDOWN command and restore all data files (*.dbf) from the most recent backup.
    2) Execute a STARTUP MOUNT command to read the contents of the control file.
    SQL> startup mount;
    3) Execute a RECOVER DATABASE UNTIL CANCEL command to start the recovery process.
    SQL> recover database until cancel;
    4) Apply the necessary archived logs up to, but not including, the lost or corrupted log, then cancel recovery.
    5) Open the database and reset the log files.
    SQL> alter database open resetlogs;
    6) Shut down the database.
    SQL> shutdown normal;
    7) Take a full cold backup.
    To sum up, for more information take a look at [url http://download-west.oracle.com/docs/cd/B19306_01/backup.102/b14191/recoscen008.htm#sthref1872]Recovering After the Loss of Online Redo Log Files: Scenarios
    Cheers
    Legatti

  • Moving the log file of a publisher database SQL Server 2008

    There are many threads on this here. Most of them not at all helpful and some of them wrong. Thus a fresh post.
    This post regards SQL Server 2008 (10.0.5841)
    The PUBLISHER database's primary log file, which is currently only 3 blocks and not extendable,
    must be moved as the LUN is going away.
    The database has several TB of data and a large number of push transactional replications as well as a couple of bi-directional replications.
    While the primary log file is active, it is almost never (if ever) used due to its small fixed size.
    We are in the 20,000 TPS range at peak (according to perfmon). This is a non-trivial installation.
    This means that:
    - backup/restore is not even a remotely viable option (it never is in the real world);
    - downtime minimization is critical - measured in minutes or less;
    - dismantling and recreating the replications is doable, but I have to say I have zero trust in the script writer to generate accurate scripts. Many of these replications were originally set up in older versions of SQL Server and have come along for the ride as upgrades have occurred. I consider scripting everything and dismantling the whole lot pretty high risk. In any case, I do not want to have to reinitialize any replications, as this takes, effectively, an eternity.
    Possible solution:
    The only option I can think of is to wind down everything, such that there are zero outstanding uncommitted transactions and detach the database, delete the offending log file and reattach using the CREATE DATABASE xyz ATTACH_REBUILD_LOG option.
    This should, if I have understood things correctly, cause SQL Server to recreate the default log file in the same directory as the .mdf file. I am not sure what will happen to the secondary log file which is not moving anywhere at this point.
    The hard bit is ensuring that every transaction in the active log files has been replicated before shutdown. This is probably doable, though I do not know how to manually flush any left-over transactions to replication. I expect that if I shut down all "real"
    activity and wait for a certain amount of time, eventually all the replications will show "No replicated transactions are available" and then I would be good to go.
    Hillary, if you happen to be there, comments appreciated.
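    For reference, a minimal T-SQL sketch of the detach / ATTACH_REBUILD_LOG approach described above; the database name and .mdf path are placeholders, it assumes replication is fully drained, and note that SQL Server may refuse to detach a database that is still marked as published (part of what the thread linked in the reply below discusses):
    USE master;
    EXEC sp_detach_db @dbname = N'PublisherDB';
    -- delete or rename the old primary log file at the OS level, then:
    CREATE DATABASE PublisherDB
        ON (FILENAME = N'D:\Data\PublisherDB.mdf')   -- list any secondary data files here as well
        FOR ATTACH_REBUILD_LOG;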

    Hi Philip
    you should try the approach suggested long back: stop replication, restore the db and rename it, or detach/attach
    http://social.msdn.microsoft.com/Forums/sqlserver/en-US/6731803b-3efa-4820-a303-4ffb7edf154a/detaching-a-replicated-database?forum=sqlreplication
    Thanks
    Saurabh Sinha
    http://saurabhsinhainblogs.blogspot.in/
    Please click the Mark as answer button and vote as helpful
    if this reply solves your problem
    I do not wish to be rude, but which part of the OP didn't you understand?
    Specifically the bit about 20,000 transactions a second and database size of several TB. Do you have any concept whatsoever of what this means? I will answer for you, "no, you are clueless" as your answer clearly shows.
    Stop wasting bandwidth by proposing pointless and wrong solutions which indicate that you did not read the OP, or do you just do this to generate points?
    Also, you clearly failed to notice that I was on the thread to which you referred, and I had some pointed comments to make. This thread was an attempt to garner some input for an alternative proposal.

  • ACS SE Log Retention?

    Can I preserve ACS SE logs for, let's say, 90 days?
    Is there a way to purge logs automatically or manually?

    In the Solution Engine, a log file is written to until it reaches 10 MB in size, at which point Cisco Secure ACS starts a new log file. Cisco Secure ACS retains the most recent 7 log files for each CSV log. There is no option to create daily files in the Solution Engine unless we use the Remote Agent for remote logging, which does give us an option for creating daily log files.
    The links given below give details about the default logging and remote logging in the Solution Engine.
    http://www.cisco.com/univercd/cc/td/doc/product/access/acs_soft/csacsapp/csapp33/user/r.htm#wp952081
    http://www.cisco.com/univercd/cc/td/doc/product/access/acs_soft/csacsapp/csapp33/user/r.htm#wp952361
    Regards,
    ~JG
    Do rate helpful posts

  • Hardening & Keeping Log files in 10.9

    I'm not in IT, but I'm trying to harden our Macs to please a client.  I found several hardening tips and guides written for older versions of OS X, but none for 10.9.  Does anyone know of a hardening guide written with commands for 10.9?
    Right now I've found a guide written for 10.8 and have been mostly successful implementing it, except for a couple of sticking points.
    They suggested keeping security.log files for 30 days, I found out that they got rid of security.log and most of its functionality is in authd.log.  But I can't figure out how to keep authd logs for 30 days.  Does anyone know how I can set this?
    I also need to keep install.log for 30 days as well, but I'm not seeing a way to control this in /etc/newsyslog.conf. Does anyone know how to set this as well?
    Does anyone know if the following audit flags should still work: lo,ad,fd,fm,-all?
    I'm trying to keep system.log & appfirewall.log for 30 days as well. I've figured out these have moved from /etc/newsyslog.conf to /etc/asl.conf, but I'm not sure if I've set this correctly. Right now I have added "store_ttl=30" to these 2 lines in asl.conf.  Should this work? Is there a better way to do this?
              > system.log mode=0640 format=bsd rotate=seq compress file_max=5M all_max=100M store_ttl=30
              ? [= Facility com.apple.alf.logging] file appfirewall.log file_max=5M all_max=100M store_ttl=30

    Hi Alex...
    Jim,
    who came up with this solution????
    I got these solutions for creating log files and reconstructing the database from this forum a while back....probably last year sometime.
    Up until recently after doing this, there has been no problem - the server runs as it should.
    I dare to say pure luck.
    The reason I do this is because if I don't, the server does NOT automatically create new empty .log files, and when it fills the current log file, it "crashes" with the "unknown mailbox path" displayed for all mailboxes.
    I would think you have some fundamental underlying issue there. I assume by the "unknown mailbox path" problem you mean a corrupt cyrus database?
    Yes, I believe that db corruption is the case...
    You should never ever manually modify anything inside cyrus' configuration database. This is just a disaster waiting to happen. If your database gets regularly corrupted, we need to investigate why. Many possible reasons: related processes crashing, disk failure, power failure/surges and so on.
    Aha!...about a month ago - thinking back to when this problem started - there was a power outage here, over a weekend! The hard drive was "kicked out" of the server box when I returned to work on that Monday....and that's when this problem started!
    I suggest you increase the logging level for a few days and keep an eye on things. Then post log extracts and /etc/imapd.conf and we'll take it from there.
    Alex
    Ok, thanks, will do!
    P.S. Download mailbfr from here:
    http://osx.topicdesk.com/downloads/
    This will allow you to easily rebuild if needed and, most important, to do proper backups of your mail services.
    Thanks for that, too. I will check it out and return to this forum with an update in the near future.
    Jim
    Mac OS X (10.3.9)

  • CFMX7 cfserver.log and exception.log files on Linux

    Is there any way to make these log files CF Admin viewer
    friendly in Linux? The CF Admin log viewer doesn't have any sorting
    or filtering capabilities for these files and they start at the
    beginning of the log file. So it'll show me for example, 1 - 40 of
    54376 and I have to hit next, next, next, next, etc... to see the
    most recent activity. There's no last page button, which makes it
    impossible to use. I know I can cat the dang thing to the screen or
    use another linux utility but shouldn't this work in the CF Admin?
    Anyone have a tweak that does this? I'm sure there's an xml
    file somewhere I can change < log-usability
    logfile="exception.log" value="unusable" /> to <
    log-usability logfile="exception.log" value="admin-friendly" />.
    Thanks in advance.

  • Parse robocopy Log File - new value

    Hello,
    I have found a script, that parse the robocopy log file, which looks like this:
       ROBOCOPY     ::     Robust File Copy for Windows                             
      Started : Thu Aug 07 09:30:18 2014
       Source : e:\testfolder\
         Dest : w:\testfolder\
        Files : *.*
      Options : *.* /V /NDL /S /E /COPYALL /NP /IS /R:1 /W:5
         Same          14.6 g e:\testfolder\bigfile - Copy (5).out
         Same          14.6 g e:\testfolder\bigfile - Copy.out
         Same          14.6 g e:\testfolder\bigfile.out
                   Total    Copied   Skipped  Mismatch    FAILED    Extras
      Dirs :         1         0         1         0         0         0
     Files :         3         3         0         0         0         0
       Bytes :  43.969 g  43.969 g         0         0         0         0
       Times :   0:05:44   0:05:43                       0:00:00   0:00:00
       Speed :           137258891 Bytes/sec.
       Speed :            7854.016 MegaBytes/min.
       Ended : Thu Aug 07 09:36:02 2014
    Most values are included in the output file, but the two Speed parameters are not.
    How can I get these two Speed parameters into the output file?
    Here is the script:
    param(
    [parameter(Position=0,Mandatory=$true,ValueFromPipeline=$false,HelpMessage='Source Path with no trailing slash')][string]$SourcePath,
    [switch]$fp
    write-host "Robocopy log parser. $(if($fp){"Parsing file entries"} else {"Parsing summaries only, use -fp to parse file entries"})"
    #Arguments
    # -fp File parse. Counts status flags and oldest file Slower on big files.
    $ElapsedTime = [System.Diagnostics.Stopwatch]::StartNew()
    $refreshrate=1 # progress counter refreshes this often when parsing files (in seconds)
    # These summary fields always appear in this order in a robocopy log
    $HeaderParams = @{
    "04|Started" = "date";
    "01|Source" = "string";
    "02|Dest" = "string";
    "03|Options" = "string";
    "07|Dirs" = "counts";
    "08|Files" = "counts";
    "09|Bytes" = "counts";
    "10|Times" = "counts";
    "05|Ended" = "date";
    #"06|Duration" = "string"
    $ProcessCounts = @{
    "Processed" = 0;
    "Error" = 0;
    "Incomplete" = 0
    $tab=[char]9
    $files=get-childitem $SourcePath
    $writer=new-object System.IO.StreamWriter("$(get-location)\robocopy-$(get-date -format "dd-MM-yyyy_HH-mm-ss").csv")
    function Get-Tail([object]$reader, [int]$count = 10) {
    $lineCount = 0
    [long]$pos = $reader.BaseStream.Length - 1
    while($pos -gt 0)
    $reader.BaseStream.position=$pos
    # 0x0D (#13) = CR
    # 0x0A (#10) = LF
    if ($reader.BaseStream.ReadByte() -eq 10)
    $lineCount++
    if ($lineCount -ge $count) { break }
    $pos--
    # tests for file shorter than requested tail
    if ($lineCount -lt $count -or $pos -ge $reader.BaseStream.Length - 1) {
    $reader.BaseStream.Position=0
    } else {
    # $reader.BaseStream.Position = $pos+1
    $lines=@()
    while(!$reader.EndOfStream) {
    $lines += $reader.ReadLine()
    return $lines
    function Get-Top([object]$reader, [int]$count = 10)
    $lines=@()
    $lineCount = 0
    $reader.BaseStream.Position=0
    while(($linecount -lt $count) -and !$reader.EndOfStream) {
    $lineCount++
    $lines += $reader.ReadLine()
    return $lines
    function RemoveKey ( $name ) {
    if ( $name -match "|") {
    return $name.split("|")[1]
    } else {
    return ( $name )
    function GetValue ( $line, $variable ) {
    if ($line -like "*$variable*" -and $line -like "* : *" ) {
    $result = $line.substring( $line.IndexOf(":")+1 )
    return $result
    } else {
    return $null
    function UnBodgeDate ( $dt ) {
    # Fixes RoboCopy botched date-times in format Sat Feb 16 00:16:49 2013
    if ( $dt -match ".{3} .{3} \d{2} \d{2}:\d{2}:\d{2} \d{4}" ) {
    $dt=$dt.split(" ")
    $dt=$dt[2],$dt[1],$dt[4],$dt[3]
    $dt -join " "
    if ( $dt -as [DateTime] ) {
    return $dt.ToStr("dd/MM/yyyy hh:mm:ss")
    } else {
    return $null
    function UnpackParams ($params ) {
    # Unpacks file count bloc in the format
    # Dirs : 1827 0 1827 0 0 0
    # Files : 9791 0 9791 0 0 0
    # Bytes : 165.24 m 0 165.24 m 0 0 0
    # Times : 1:11:23 0:00:00 0:00:00 1:11:23
    # Parameter name already removed
    if ( $params.length -ge 58 ) {
    $params = $params.ToCharArray()
    $result=(0..5)
    for ( $i = 0; $i -le 5; $i++ ) {
    $result[$i]=$($params[$($i*10 + 1) .. $($i*10 + 9)] -join "").trim()
    $result=$result -join ","
    } else {
    $result = ",,,,,"
    return $result
    $sourcecount = 0
    $targetcount = 1
    # Write the header line
    $writer.Write("File")
    foreach ( $HeaderParam in $HeaderParams.GetEnumerator() | Sort-Object Name ) {
    if ( $HeaderParam.value -eq "counts" ) {
    $tmp="~ Total,~ Copied,~ Skipped,~ Mismatch,~ Failed,~ Extras"
    $tmp=$tmp.replace("~","$(removekey $headerparam.name)")
    $writer.write(",$($tmp)")
    } else {
    $writer.write(",$(removekey $HeaderParam.name)")
    if($fp){
    $writer.write(",Scanned,Newest,Summary")
    $writer.WriteLine()
    $filecount=0
    # Enumerate the files
    foreach ($file in $files) {
    $filecount++
    write-host "$filecount/$($files.count) $($file.name) ($($file.length) bytes)"
    $results=@{}
    $Stream = $file.Open([System.IO.FileMode]::Open,
    [System.IO.FileAccess]::Read,
    [System.IO.FileShare]::ReadWrite)
    $reader = New-Object System.IO.StreamReader($Stream)
    #$filestream=new-object -typename System.IO.StreamReader -argumentlist $file, $true, [System.IO.FileAccess]::Read
    $HeaderFooter = Get-Top $reader 16
    if ( $HeaderFooter -match "ROBOCOPY :: Robust File Copy for Windows" ) {
    if ( $HeaderFooter -match "Files : " ) {
    $HeaderFooter = $HeaderFooter -notmatch "Files : "
    [long]$ReaderEndHeader=$reader.BaseStream.position
    $Footer = Get-Tail $reader 16
    $ErrorFooter = $Footer -match "ERROR \d \(0x000000\d\d\) Accessing Source Directory"
    if ($ErrorFooter) {
    $ProcessCounts["Error"]++
    write-host -foregroundcolor red "`t $ErrorFooter"
    } elseif ( $footer -match "---------------" ) {
    $ProcessCounts["Processed"]++
    $i=$Footer.count
    while ( !($Footer[$i] -like "*----------------------*") -or $i -lt 1 ) { $i-- }
    $Footer=$Footer[$i..$Footer.Count]
    $HeaderFooter+=$Footer
    } else {
    $ProcessCounts["Incomplete"]++
    write-host -foregroundcolor yellow "`t Log file $file is missing the footer and may be incomplete"
    foreach ( $HeaderParam in $headerparams.GetEnumerator() | Sort-Object Name ) {
    $name = "$(removekey $HeaderParam.Name)"
    $tmp = GetValue $($HeaderFooter -match "$name : ") $name
    if ( $tmp -ne "" -and $tmp -ne $null ) {
    switch ( $HeaderParam.value ) {
    "date" { $results[$name]=UnBodgeDate $tmp.trim() }
    "counts" { $results[$name]=UnpackParams $tmp }
    "string" { $results[$name] = """$($tmp.trim())""" }
    default { $results[$name] = $tmp.trim() }
    if ( $fp ) {
    write-host "Parsing $($reader.BaseStream.Length) bytes"
    # Now go through the file line by line
    $reader.BaseStream.Position=0
    $filesdone = $false
    $linenumber=0
    $FileResults=@{}
    $newest=[datetime]"1/1/1900"
    $linecount++
    $firsttick=$elapsedtime.elapsed.TotalSeconds
    $tick=$firsttick+$refreshrate
    $LastLineLength=1
    try {
    do {
    $line = $reader.ReadLine()
    $linenumber++
    if (($line -eq "-------------------------------------------------------------------------------" -and $linenumber -gt 16) ) {
    # line is end of job
    $filesdone=$true
    } elseif ($linenumber -gt 16 -and $line -gt "" ) {
    $buckets=$line.split($tab)
    # this test will pass if the line is a file, fail if a directory
    if ( $buckets.count -gt 3 ) {
    $status=$buckets[1].trim()
    $FileResults["$status"]++
    $SizeDateTime=$buckets[3].trim()
    if ($sizedatetime.length -gt 19 ) {
    $DateTime = $sizedatetime.substring($sizedatetime.length -19)
    if ( $DateTime -as [DateTime] ){
    $DateTimeValue=[datetime]$DateTime
    if ( $DateTimeValue -gt $newest ) { $newest = $DateTimeValue }
    if ( $elapsedtime.elapsed.TotalSeconds -gt $tick ) {
    $line=$line.Trim()
    if ( $line.Length -gt 48 ) {
    $line="[...]"+$line.substring($line.Length-48)
    $line="$([char]13)Parsing > $($linenumber) ($(($reader.BaseStream.Position/$reader.BaseStream.length).tostring("P1"))) - $line"
    write-host $line.PadRight($LastLineLength) -NoNewLine
    $LastLineLength = $line.length
    $tick=$tick+$refreshrate
    } until ($filesdone -or $reader.endofstream)
    finally {
    $reader.Close()
    $line=$($([string][char]13)).padright($lastlinelength)+$([char]13)
    write-host $line -NoNewLine
    $writer.Write("`"$file`"")
    foreach ( $HeaderParam in $HeaderParams.GetEnumerator() | Sort-Object Name ) {
    $name = "$(removekey $HeaderParam.Name)"
    if ( $results[$name] ) {
    $writer.Write(",$($results[$name])")
    } else {
    if ( $ErrorFooter ) {
    #placeholder
    } elseif ( $HeaderParam.Value -eq "counts" ) {
    $writer.Write(",,,,,,")
    } else {
    $writer.Write(",")
    if ( $ErrorFooter ) {
    $tmp = $($ErrorFooter -join "").substring(20)
    $tmp=$tmp.substring(0,$tmp.indexof(")")+1)+","+$tmp
    $writer.write(",,$tmp")
    } elseif ( $fp ) {
    $writer.write(",$LineCount,$($newest.ToString('dd/MM/yyyy hh:mm:ss'))")
    foreach ( $FileResult in $FileResults.GetEnumerator() ) {
    $writer.write(",$($FileResult.Name): $($FileResult.Value);")
    $writer.WriteLine()
    } else {
    write-host -foregroundcolor darkgray "$($file.name) is not recognised as a RoboCopy log file"
    write-host "$filecount files scanned in $($elapsedtime.elapsed.tostring()), $($ProcessCounts["Processed"]) complete, $($ProcessCounts["Error"]) have errors, $($ProcessCounts["Incomplete"]) incomplete"
    write-host "Results written to $($writer.basestream.name)"
    $writer.close()
    I hope somebody can help me,
    Horst
    Thanks Horst MOSS 2007 Farm; MOSS 2010 Farm; TFS 2010; TFS 2013; IIS 7.5

    Hi Horst,
    To convert multiple robocopy log files to a .csv file with the "speed" option, the script below may be helpful for you. I tested with a single robocopy log file, and the .csv file will be output to "D:\":
    $SourcePath="e:\1\1.txt" #robocopy log file
    write-host "Robocopy log parser. $(if($fp){"Parsing file entries"} else {"Parsing summaries only, use -fp to parse file entries"})"
    #Arguments
    # -fp File parse. Counts status flags and oldest file Slower on big files.
    $ElapsedTime = [System.Diagnostics.Stopwatch]::StartNew()
    $refreshrate=1 # progress counter refreshes this often when parsing files (in seconds)
    # These summary fields always appear in this order in a robocopy log
    $HeaderParams = @{
     "04|Started" = "date"; 
     "01|Source" = "string";
     "02|Dest" = "string";
     "03|Options" = "string";
     "09|Dirs" = "counts";
     "10|Files" = "counts";
     "11|Bytes" = "counts";
     "12|Times" = "counts";
     "05|Ended" = "date";
     "07|Speed" = "default";
     "08|Speednew" = "default"
    $ProcessCounts = @{
     "Processed" = 0;
     "Error" = 0;
     "Incomplete" = 0
    $tab=[char]9
    $files=get-childitem $SourcePath
    $writer=new-object System.IO.StreamWriter("D:\robocopy-$(get-date -format "dd-MM-yyyy_HH-mm-ss").csv")
    function Get-Tail([object]$reader, [int]$count = 10) {
     $lineCount = 0
     [long]$pos = $reader.BaseStream.Length - 1
     while($pos -gt 0)
      $reader.BaseStream.position=$pos
      # 0x0D (#13) = CR
      # 0x0A (#10) = LF
      if ($reader.BaseStream.ReadByte() -eq 10)
       $lineCount++
       if ($lineCount -ge $count) { break }
      $pos--
     # tests for file shorter than requested tail
     if ($lineCount -lt $count -or $pos -ge $reader.BaseStream.Length - 1) {
      $reader.BaseStream.Position=0
     } else {
      # $reader.BaseStream.Position = $pos+1
     $lines=@()
     while(!$reader.EndOfStream) {
      $lines += $reader.ReadLine()
     return $lines
    function Get-Top([object]$reader, [int]$count = 10)
     $lines=@()
     $lineCount = 0
     $reader.BaseStream.Position=0
     while(($linecount -lt $count) -and !$reader.EndOfStream) {
      $lineCount++
      $lines += $reader.ReadLine()  
     return $lines
    function RemoveKey ( $name ) {
     if ( $name -match "|") {
      return $name.split("|")[1]
     } else {
      return ( $name )
    function GetValue ( $line, $variable ) {
     if ($line -like "*$variable*" -and $line -like "* : *" ) {
      $result = $line.substring( $line.IndexOf(":")+1 )
      return $result
     } else {
      return $null
    }function UnBodgeDate ( $dt ) {
     # Fixes RoboCopy botched date-times in format Sat Feb 16 00:16:49 2013
     if ( $dt -match ".{3} .{3} \d{2} \d{2}:\d{2}:\d{2} \d{4}" ) {
      $dt=$dt.split(" ")
      $dt=$dt[2],$dt[1],$dt[4],$dt[3]
      $dt -join " "
     if ( $dt -as [DateTime] ) {
      return $dt.ToStr("dd/MM/yyyy hh:mm:ss")
     } else {
      return $null
    function UnpackParams ($params ) {
     # Unpacks file count bloc in the format
     # Dirs :      1827         0      1827         0         0         0
     # Files :      9791         0      9791         0         0         0
     # Bytes :  165.24 m         0  165.24 m         0         0         0
     # Times :   1:11:23   0:00:00                       0:00:00   1:11:23
     # Parameter name already removed
     if ( $params.length -ge 58 ) {
      $params = $params.ToCharArray()
      $result=(0..5)
      for ( $i = 0; $i -le 5; $i++ ) {
       $result[$i]=$($params[$($i*10 + 1) .. $($i*10 + 9)] -join "").trim()
      $result=$result -join ","
     } else {
      $result = ",,,,,"
     return $result
    $sourcecount = 0
    $targetcount = 1
    # Write the header line
    $writer.Write("File")
    foreach ( $HeaderParam in $HeaderParams.GetEnumerator() | Sort-Object Name ) {
     if ( $HeaderParam.value -eq "counts" ) {
      $tmp="~ Total,~ Copied,~ Skipped,~ Mismatch,~ Failed,~ Extras"
      $tmp=$tmp.replace("~","$(removekey $headerparam.name)")
      $writer.write(",$($tmp)")
     } else {
      $writer.write(",$(removekey $HeaderParam.name)")
    if($fp){
     $writer.write(",Scanned,Newest,Summary")
    $writer.WriteLine()
    $filecount=0
    # Enumerate the files
    foreach ($file in $files) { 
     $filecount++
        write-host "$filecount/$($files.count) $($file.name) ($($file.length) bytes)"
     $results=@{}
    $Stream = $file.Open([System.IO.FileMode]::Open,
                       [System.IO.FileAccess]::Read,
                        [System.IO.FileShare]::ReadWrite)
     $reader = New-Object System.IO.StreamReader($Stream)
     #$filestream=new-object -typename System.IO.StreamReader -argumentlist $file, $true, [System.IO.FileAccess]::Read
     $HeaderFooter = Get-Top $reader 16
     if ( $HeaderFooter -match "ROBOCOPY     ::     Robust File Copy for Windows" ) {
      if ( $HeaderFooter -match "Files : " ) {
       $HeaderFooter = $HeaderFooter -notmatch "Files : "
      [long]$ReaderEndHeader=$reader.BaseStream.position
      $Footer = Get-Tail $reader 16
      $ErrorFooter = $Footer -match "ERROR \d \(0x000000\d\d\) Accessing Source Directory"
      if ($ErrorFooter) {
       $ProcessCounts["Error"]++
       write-host -foregroundcolor red "`t $ErrorFooter"
      } elseif ( $footer -match "---------------" ) {
       $ProcessCounts["Processed"]++
       $i=$Footer.count
       while ( !($Footer[$i] -like "*----------------------*") -or $i -lt 1 ) { $i-- }
       $Footer=$Footer[$i..$Footer.Count]
       $HeaderFooter+=$Footer
      } else {
       $ProcessCounts["Incomplete"]++
       write-host -foregroundcolor yellow "`t Log file $file is missing the footer and may be incomplete"
      foreach ( $HeaderParam in $headerparams.GetEnumerator() | Sort-Object Name ) {
       $name = "$(removekey $HeaderParam.Name)"
                            if ($name -eq "speed"){ #handle two speed
                            ($HeaderFooter -match "$name : ")|foreach{
                             $tmp=GetValue $_ "speed"
                             $results[$name] = $tmp.trim()
                             $name+="new"}
                            elseif ($name -eq "speednew"){} #handle two speed
                            else{
       $tmp = GetValue $($HeaderFooter -match "$name : ") $name
       if ( $tmp -ne "" -and $tmp -ne $null ) {
        switch ( $HeaderParam.value ) {
         "date" { $results[$name]=UnBodgeDate $tmp.trim() }
         "counts" { $results[$name]=UnpackParams $tmp }
         "string" { $results[$name] = """$($tmp.trim())""" }  
         default { $results[$name] = $tmp.trim() }  
      if ( $fp ) {
       write-host "Parsing $($reader.BaseStream.Length) bytes"
       # Now go through the file line by line
       $reader.BaseStream.Position=0
       $filesdone = $false
       $linenumber=0
       $FileResults=@{}
       $newest=[datetime]"1/1/1900"
       $linecount++
       $firsttick=$elapsedtime.elapsed.TotalSeconds
       $tick=$firsttick+$refreshrate
       $LastLineLength=1
       try {
        do {
         $line = $reader.ReadLine()
         $linenumber++
         if (($line -eq "-------------------------------------------------------------------------------" -and $linenumber -gt 16)  ) {
          # line is end of job
          $filesdone=$true
         } elseif ($linenumber -gt 16 -and $line -gt "" ) {
          $buckets=$line.split($tab)
          # this test will pass if the line is a file, fail if a directory
          if ( $buckets.count -gt 3 ) {
           $status=$buckets[1].trim()
           $FileResults["$status"]++
           $SizeDateTime=$buckets[3].trim()
           if ($sizedatetime.length -gt 19 ) {
            $DateTime = $sizedatetime.substring($sizedatetime.length -19)
            if ( $DateTime -as [DateTime] ){
             $DateTimeValue=[datetime]$DateTime
             if ( $DateTimeValue -gt $newest ) { $newest = $DateTimeValue }
         if ( $elapsedtime.elapsed.TotalSeconds -gt $tick ) {
          $line=$line.Trim()
          if ( $line.Length -gt 48 ) {
           $line="[...]"+$line.substring($line.Length-48)
          $line="$([char]13)Parsing > $($linenumber) ($(($reader.BaseStream.Position/$reader.BaseStream.length).tostring("P1"))) - $line"
          write-host $line.PadRight($LastLineLength) -NoNewLine
          $LastLineLength = $line.length
          $tick=$tick+$refreshrate      
        } until ($filesdone -or $reader.endofstream)
       finally {
        $reader.Close()
       $line=$($([string][char]13)).padright($lastlinelength)+$([char]13)
       write-host $line -NoNewLine
      $writer.Write("`"$file`"")
      foreach ( $HeaderParam in $HeaderParams.GetEnumerator() | Sort-Object Name ) {
       $name = "$(removekey $HeaderParam.Name)"
       if ( $results[$name] ) {
        $writer.Write(",$($results[$name])")
       } else {
        if ( $ErrorFooter ) {
         #placeholder
        } elseif ( $HeaderParam.Value -eq "counts" ) {
         $writer.Write(",,,,,,")
        } else {
         $writer.Write(",")
      if ( $ErrorFooter ) {
       $tmp = $($ErrorFooter -join "").substring(20)
       $tmp=$tmp.substring(0,$tmp.indexof(")")+1)+","+$tmp
       $writer.write(",,$tmp")
      } elseif ( $fp ) {
       $writer.write(",$LineCount,$($newest.ToString('dd/MM/yyyy hh:mm:ss'))")   
       foreach ( $FileResult in $FileResults.GetEnumerator() ) {
        $writer.write(",$($FileResult.Name): $($FileResult.Value);")
      $writer.WriteLine()
     } else {
      write-host -foregroundcolor darkgray "$($file.name) is not recognised as a RoboCopy log file"
    write-host "$filecount files scanned in $($elapsedtime.elapsed.tostring()), $($ProcessCounts["Processed"]) complete, $($ProcessCounts["Error"]) have errors, $($ProcessCounts["Incomplete"]) incomplete"
    write-host  "Results written to $($writer.basestream.name)"
    $writer.close()
    If you have any other questions, please feel free to let me know.
    Best Regards,
    Anna Wang
    TechNet Community Support

  • ODI Log file encoding

    Greetings Everyone!
    I faced the following problem.
    I load data through ODI into Hyperion. For convenient display I produce my log file in Excel *.csv format. Due to encoding, the error description text in my log file is displayed incorrectly.
    Is it possible to configure the encoding for the log files? Maybe I need to put an "ENCODING=UTF8" string somewhere?
    Thank you!
    Regards,
    Ilshat

    The other method would be just to run a pure MaxL data load - a MaxL script and a data load - which will allow an error file to be generated by Essbase.
    This can be developed outside of ODI and then, once you are happy, called from within ODI.
    It is worth trying that method to see if Essbase will generate the error logs you are looking for.
    Cheers
    John
    http://john-goodwin.blogspot.com/

    hi, I have created a schema in database which has exp_full_database privilege to run datapump jobs (verified by running a datapump API job too). I have a os user(non dba group) which has ability to run expdp command. I also created an administrator i