Centralized logging producing empty log file searches

Not sure what I am doing wrong here. I'm experimenting with Lync 2013 Centralized Logging. I started the AlwaysOn scenario, which was off by default. I checked the directories on all 3 of my FE servers and a bunch of ETL files are present, so it's doing something.
Thing is, no matter how I search, the output (or the log file, if I pipe the output) is always empty. I'm following the Microsoft documentation on Centralized Logging, and they make it look so easy. Has anyone had success with this tool? It seems like a nice feature and more convenient
than OCSLogger, but it's not producing the correct search results.
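For reference, a minimal sketch of the kind of search I would expect to work (the pool FQDN, time window, and output path are placeholders, and the -LogLevel value is my assumption; the default level can yield an empty file even when ETL data exists):

# Confirm the scenario is actually running on the pool before searching
Show-CsClsLogging -Pools "pool01.contoso.com"

# Search the last hour at Verbose level and write the result to a file
Search-CsClsLogging -Pools "pool01.contoso.com" `
    -StartTime (Get-Date).AddHours(-1) -EndTime (Get-Date) `
    -LogLevel Verbose -OutputFilePath "C:\temp\search.log"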

I am quickly finding out that this utility is nothing but a headache. I am getting errors in the Lync Server log telling me the threshold for logging has been reached. I changed CacheFileLocalMaxDiskUsage from 80 to 20. 80%! Seriously, who wants a utility to take up 80% of the disk space? Even at 20%, with a 125GB drive, I should be able to go up to 25GB. The ETL file was 14MB and I started getting errors saying the threshold was reached!
Then I could not stop the scenario. I tried 3 times; either it would keep running or I got some weird error. I finally typed AlwaysOn with the exact capitalization, as if it were case-sensitive, and it worked. This utility is whacked. Maybe I am doing something wrong.
According to the MS article, CacheFileLocalMaxDiskUsage is defined as the percentage of disk space that can be used by the cache files. So 20 for this value means 20% of 125GB, or, if it's talking about free disk space, 18GB in my case. Below is the error I am getting; 90,479,939,584 is the amount of free space on the disk. I did do the search again and it did work this time, after I restarted the agent on all FE servers. If I can get around this threshold error I think I am in business.
Lync Server Centralized Logging Service Agent Service reached the local disk usage threshold and no network share is configured
EtlFileFolder:  c:\temp\tracing - 90,479,939,584 (67.47 %)
CacheFileLocalMaxDiskUsage: 20 %
CacheFileLocalFolders:
  c:\temp\tracing - 90,479,939,584 (67.47 %)
CacheFileNetworkFolder: Not set
Cause: Lync Server Centralized Logging Service Agent Service will stop tracing when the local disk usage threshold is reached and no network share is configured. Verify CLS configuration using Get-CsCentralizedLoggingConfiguration. Necessary scenarios will need to be re-enabled once more space is made available locally or a network share is configured
Resolution:
Free up local disk space, or increase disk usage threshold for CLS, or configure network share with write permissions for Network Service account
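The last two options in that resolution correspond to Set-CsClsConfiguration parameters; a minimal sketch (the values and share path are placeholders, and the Network Service account on each server needs write access to the share):

# Raise the local disk usage threshold...
Set-CsClsConfiguration -CacheFileLocalMaxDiskUsage 40

# ...or give the agent somewhere to move cache files off-box
Set-CsClsConfiguration -CacheFileNetworkFolder "\\fileserver\clsshare"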

Similar Messages

  • Lync 2013 - Issue with Snooper and Centralized Logging files

    I ran the AlwaysOn trace on 3 pools from last night until this morning. When I run the search script and include the specific time I want, I get a different time in the trace. For example, I want to see the traces from 3:00-3:30 AM. This is what I ran:
    Search-CsClsLogging -Pools "X","Y","Z" -StartTime "12/27/2013 03:00:00 AM" -EndTime "12/27/2013 03:30:00 AM" -OutputFilePath \\file01\LyncShare\Traces\lync_trace122713.txt
    When I open Snooper, the timestamp on the trace is from 8:00-8:30 AM. I checked all the servers in the 3 pools, and there are .hdr and .cache files from yesterday through this morning (screenshot not included).
    The timestamps on the servers are correct.  
    Not sure what the issue is.  Suggestions?  Thanks.

    Hi,
    The issue may be caused by results cached from a previous search.
    Please try running the cmdlet Sync-CsClsLogging, and then run the search cmdlet again to test the issue.
    Sync-CsClsLogging flushes the cache used by previous searches. Flushing the cache helps ensure that there is a clean log and trace file capture buffer at the CLSController for the next search operation.
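    A minimal sketch of that sequence against the same pools (that Sync-CsClsLogging accepts the same -Pools list as the search cmdlet is my assumption; check Get-Help Sync-CsClsLogging):

    # Flush the CLSController search cache, then repeat the original search
    Sync-CsClsLogging -Pools "X","Y","Z"
    Search-CsClsLogging -Pools "X","Y","Z" `
        -StartTime "12/27/2013 03:00:00 AM" -EndTime "12/27/2013 03:30:00 AM" `
        -OutputFilePath \\file01\LyncShare\Traces\lync_trace122713.txt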
    Best Regards,
    Eason Huang
    TechNet Community Support

  • Search in log file archive by using findevent

    Hi @all,
    We are using several IronPort C-series systems. All our log files are stored via scp on a central log file server running Linux. The log files are stored in subfolders for each system.
    Now it has become necessary to search emails from last year. I did it using the grep command, and it was very complicated to find all the information (MID, ICID, DCID).
    Does anyone know a way to use the findevent command on a Linux-based system, or does anyone have a shell script which does the same work as the findevent command?
    Regards, Thomas

    There is a tool on the Support Portal that emulates the AsyncOS findevent command. The tool was written in Python, which should work on your Linux system, assuming that Python is available on it.
    Find Event Tool
    Python
    This is the core code of the CLI findevent command, which will dump log information based on MID or regular-expression searches on "To", "From" and "Subject". The help description for findevent is "Find events in mail log files".
    1. Log onto the support portal (http://www.ironport.com/support/login.html).
    2. After you log in, click on "Appliance Documentation > Tools" on the left side and go down near the bottom of the page.
    good luck

  • Re: identifying server log file when running distributed

    Hi John, I can give you some TOOL code which will get the process id, but I do have to stray outside of Framework :-). The following code uses classes from the SystemMonitor project, which was introduced in release 2 of Forte (this code won't work on R1):
    partAgent : SystemAgent;
    pidInst : ConfigValueInst;
    pid : TextData;

    partAgent = SystemAgent(task.Part.Agent);
    pidInst = ConfigValueInst(partAgent.FindInstrument('ProcessId'));
    pid = TextData(pidInst.GetData);
    The result is that the variable pid contains the process id in string form.
    This could be converted to numeric form if needed.
    If what you're really after is the partition's log file name, then the following code will do the trick (it takes into account the differences in how the log files are named for interpreted vs. compiled partitions):
    partAgent : SystemAgent;
    logFileInst : ConfigValueInst;
    logFileName : TextData;

    -- Get our agent and try to get the log file instrument
    partAgent = SystemAgent(task.Part.Agent);
    logFileInst = ConfigValueInst(partAgent.FindInstrument('LogFile'));

    -- Interpreted partitions don't have their own log file, so check
    if (logFileInst = NIL) then
        pidInst : ConfigValueInst;
        pid : TextData;

        -- We must be an interpreted partition; get our pid
        pidInst = ConfigValueInst(partAgent.FindInstrument('ProcessId'));
        pid = TextData(pidInst.GetData);

        -- Build the log file name
        logFileName = 'forte_ex_';
        logFileName.Concat(pid);
    else
        -- Get the name of the log file from the instrument
        logFileName = TextData(logFileInst.GetData);
    end;
    The available agents and their instruments and commands are documented in the manual "SystemMonitor Project". I'm at home now, so I don't have the page numbers. Some additional agents (which were added after this manual went to press) can be found in Tech Note #10475. Also, econsole and escript can be handy, since any instrument you can see in these tools can be accessed from TOOL code. Hope this is of some use.
    Sean
    At 05:24 PM 7/30/96 -0700, John L. Jamison wrote:
    I'd like to solicit some ideas from you folks. As many of you are probably aware, when running in distributed mode, log output for server partitions is written out to log files on the server partition. However, it is sometimes tricky to identify the process which is running your individual partitions, and thus to know which log file to read.
    At one client, we added a 3GL call-out to obtain the process id and return it to the client. However, this is not a good option at a new client which uses Sequent (3GL wrappering is difficult in statically linked environments such as Sequent). I am also aware that Econsole allows you to browse active partitions and display log files, but you still have to know which active partitions to watch.
    I have not yet seen a way to programmatically obtain the process ID for a partition within TOOL using Framework classes.
    What kinds of strategies are folks employing out there?
    Thanks in advance,
    -John
    John Jamison
    Sage Solutions, Inc.
    353 Sacramento Street, Suite 1360
    San Francisco, CA 94111
    415 392 7243 x 508
    [email protected]

    Hi John,
    I think that Sean Fitts answered your question about TOOL code to get the PID number. I just want to add a comment on the logging strategy.
    There is one log file for every active partition of an application. I think it is useful in some cases for a distributed application to have a centralized log file, to trace the exact sequential flow of processing among all the partitions. This is useful during initial debugging and tuning; in fact, it is something similar to the UNIX syslog file.
    To do so, it is easy to implement a custom central Log Mgr in one partition and have all partitions use it when needed (this doesn't prevent you from continuing to use the standard LogMgr in addition). This central LogMgr automatically adds the date and time, plus the node name, partition name, etc., to the log messages it receives.
    The flags which apply are those of the partition where the central Log Mgr is.
    Because of potentially concurrent requests from the several partitions accessing the central Log Mgr, it is not possible to support the "Put" and "PutHex" methods. Only complete lines can be logged (PutLine and PutHexLine).
    Attached is the TOOL code of my TraceService plan that implements it.
    Remark: the "Phr" in the names relates to the name of the application we have here under development.
    To use the central Log Mgr, a partition must create an object of class PartitionLog, and then log messages must be sent to it the way you send them to the standard LogMgr; it will manage forwarding them to the central Log Mgr.

  • Where can I find Installer and uninstaller log files in JSE 6?

    Where can I find Installer and uninstaller log files in JSE 6?

    For the installer log file, search for Sun_Java_Studio_Enterprise_6_2004Q1_install, or some portion of that.
    To find Java ES component product log files, search on Sun_ONE.
    You can also search for the timestamp that is part of each log file name. It is of the format MMddhhmm (month/day/hour/minute).
    Refer also to the Troubleshooting chapter of the Installation Guide for additional details on reports and log files used by Java ES.

  • Empty Log File - log settings will not save

    Description of Problem or Question:
    Cannot get logging to work in folder D:\Program Files\Business Objects\Dashboard and Analytics 12.0\server\log
    (empty log file is created)
    Product\Version\Service Pack\Fixpack (if applicable):
    BO Enterprise 12.0
    Relevant Environment Information (OS & version, java or .net & version, DB & version):
    Server: Windows Server 2003 Enterprise SP2
    Database: Oracle 10g
    Client: Vista
    Sporadic or Consistent (if applicable):
    Consistent
    What has already been tried (where have you searched for a solution to your question/problem):
    Searched forum, SMP
    Steps to Reproduce (if applicable):
    From InfoViewApp, logged in as Admin
    Open -> Dashboard and Analytics Setup -> Parameters -> Trace
    Check "Log to folder" and "SQL Queries", click Apply.
    Now navigate away and return to this page: the "Log to folder" box is unchecked, and an empty log file is created.

    Send Apple feedback. They won't answer, but at least will know there is a problem. If enough people send feedback, it may get the problem solved sooner.
    Feedback
    Or you can use your Apple ID to register with this site and go the Apple BugReporter. Supposedly you will get an answer if you submit feedback.
    Feedback via Apple Developer
    Do a backup.
    Quit the application.
    Go to Finder and select your user/home folder. With that Finder window as the front window, either select Finder/View/Show View Options or press Command-J. When the View Options open, check "Show Library Folder". That should make your user library folder visible in your user/home folder. Select Library, then go to Preferences/com.apple.systempreferences.plist. Move the .plist to your desktop.
    Restart, open the application and test. If it works okay, delete the plist from the desktop.
    If the application behaves the same, return the .plist to where you got it from, overwriting the newer one.
    Thanks to leonie for some information contained in this.

  • Event ID 33020 LS Centralized Logging Agent - Error while moving cache files to network share

    I have the "AlwaysOn" CLS logging scenario running in my Lync 2013 Enterprise deployment.  I did not configure the CacheFileNetworkFolder option since i don't care about retaining these logs anywhere other than on the local drives of the Lync
    servers so i just left it blank.  Now every few hours or so I am getting Event ID 33020 in each Lync server and SCOM is firing an alert as well.
    The CsClsLogging configuration is as follows:
    PS C:\> Get-CsClsConfiguration
    Identity                      : Global
    Scenarios                     : {Name=AlwaysOn, Name=MediaConnectivity, Name=ApplicationSharing,
                                    Name=AudioVideoConferencingIssue...}
    SearchTerms                   : {Type=Phone;Inserts=ItemE164,ItemURI,ItemSIP,ItemPII,
                                    Type=URI;Inserts=ItemURI,ItemSIP,ItemPII,
                                    Type=CallId;Inserts=ItemCALLID,ItemURI,ItemSIP,ItemPII,
                                    Type=ConfId;Inserts=ItemCONFID,ItemURI,ItemSIP,ItemPII...}
    SecurityGroups                : {}
    Regions                       : {}
    EtlFileFolder                 : C:\CLSTracing
    EtlFileRolloverSizeMB         : 20
    EtlFileRolloverMinutes        : 60
    TmfFileSearchPath             : C:\Program Files\Common Files\Microsoft Lync Server 2013\Tracing\
    CacheFileLocalFolders         : C:\CLSTracing
    CacheFileNetworkFolder        :
    CacheFileLocalRetentionPeriod : 14
    CacheFileLocalMaxDiskUsage    : 80
    ComponentThrottleLimit        : 5000
    ComponentThrottleSample       : 3
    MinimumClsAgentServiceVersion : 6
    Is there a way to stop the flow of these events without having to configure CLS to transfer the logs to a network share?

    Yes, the CacheFileLocalFolders path of 'c:\CLSTracing' is valid on all the Lync servers. The AlwaysOn scenario is started, running, and producing .hdr and .cache files in this folder on all servers across my front-end, director and edge pools.
    In addition, I am able to search for and extract valid logging information that I can analyze using Snooper.exe.
    Reconfigure the CentralizedLoggingConfiguration how???
    I tried setting the CacheFileNetworkFolder value to null by running Set-CsClsConfiguration -CacheFileNetworkFolder $null and then restarting the CLS agent. As expected, event 33037 fired, confirming the settings were received from the CMS.
    New config received from CMS
    Following are the changed settings:
    EtlFileRolloverSizeMB: Old - NULL, New - 20
    CacheFileLocalRetentionPeriod: Old - NULL, New - 14
    CacheFileLocalMaxDiskUsage: Old - NULL, New - 80
    ComponentThrottleLimit: Old - NULL, New - 5000
    ComponentThrottleSample: Old - NULL, New - 3
    MinimumClsAgentServiceVersion: Old - NULL, New - 6
    TmfFileSearchPath: Old - NULL, New - C:\Program Files\Common Files\Microsoft Lync Server 2013\Tracing\
    CacheFileLocalFolders: Old - NULL, New - C:\CLSTracing
    CacheFileNetworkFolder: Old - NULL, New -
    SearchTerms: Old - NULL, New - Type=Phone;Inserts=ItemE164,ItemURI,ItemSIP,ItemPII,Type=URI;Inserts=ItemURI,ItemSIP,ItemPII,Type=CallId;Inserts=ItemCALLID,ItemURI,ItemSIP,ItemPII,Type=ConfId;Inserts=ItemCONFID,ItemURI,ItemSIP,ItemPII,Type=IP;Inserts=ItemIP,ItemIPAddr,ItemIPv6Addr,ItemURI,ItemSIP,ItemPII,Type=SIPContents;Inserts=ItemSIP
    Scenarios Added:
      Scenario: Name - AlwaysOn
        Provider List:...................... omitted to save space.
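    Since the goal here is to avoid a network share, one possible mitigation (a sketch, not a confirmed fix; the values are examples) is to keep local usage below the threshold by shortening the cache retention window and raising the allowed disk percentage, then restarting the CLS agent service so the change is picked up:

    Set-CsClsConfiguration -CacheFileLocalRetentionPeriod 3 -CacheFileLocalMaxDiskUsage 90
    # Watch for event 33037 ("New config received from CMS") after the agent restart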

  • Steps to empty SAPDB (MaxDB) log file

    Hello All,
    I am on Red Hat Linux with NW 7.1 CE and SAPDB (MaxDB) as the back end. I am trying to log in, but my log file is full. I want to empty the log file, but I haven't done any data backup yet. Can anybody guide me on how to proceed to handle this problem?
    I do have some idea what to do, like the steps below:
    1. Take a data backup (but I want to skip this step if possible, since this is a QA system and we are not a production company).
    2. Take a log backup using the same method as the data backup but with type Log (am I right, or is there something else?).
    3. It will automatically overwrite the log after log backups.
    Or should I use this as an alternative? I found this in Note 869267 - FAQ: SAP MaxDB LOG area:
    Can the log area be overwritten cyclically without having to make a log backup?
    Yes, the log area can be automatically overwritten without log backups. Use the DBM command
    util_execute SET LOG AUTO OVERWRITE ON
    to set this status. The behavior of the database corresponds to the DEMO log mode in older versions. With version 7.4.03 and above, this behavior can be set online.
    Log backups are not possible after switching on automatic overwrite. Backup history is broken down and flagged by the abbreviation HISTLOST in the backup history (dbm.knl file). The backup history is restarted when you switch off automatic overwrite without log backups using the command
    util_execute SET LOG AUTO OVERWRITE OFF
    and by creating a complete data backup in the ADMIN or ONLINE status.
    Automatic overwrite of the log area without log backups is NOT suitable for production operation. Since no backup history exists for the following changes in the database, you cannot track transactions in the case of recovery.
    any reply will be highly appreciated.
    Thanks
    Mani

    Hello Mani,
    1. Please review the document "Using SAP MaxDB X Server Behind a Firewall" in the MaxDB library:
    http://maxdb.sap.com/doc/7_7/44/bbddac91407006e10000000a155369/content.htm
               "To enable access to X Server (and thus the database) behind a firewall using a client program such as Database Studio, open the necessary ports in your firewall and restrict access to these ports to only those computers that need to access the database."
                 Is the database server behind a firewall? If yes, then the X Server port needs to be open. You could restrict access to this port to the computers of your database administrators, for example.
    Is "nq2host" the name of the database server? Can you ping the server "nq2host" from your machine?
    2. If the database server and your PC are on the local area network, you can start the x_server on the database server and connect to the database using DB Studio on your PC, as Lars already told you.
    See the document "Network Communication" at
    http://maxdb.sap.com/doc/7_7/44/d7c3e72e6338d3e10000000a1553f7/content.htm
    Thank you and best regards, Natalia Khlopina

  • How can I search for details in job log files

    Hi,
    I'm looking for a specific entry in the job log. I don't know when it was written (other than the date); to find the log without trial and error, I need a specific time so I can open the correct one in IDC.
    The entry was written by the modifyADSuser pass, and it would have a userID tag in the log file, but there are many hundreds a day for me to hunt through. If I could find where Identity Center pulls the log files from, I could either use a SQL select (if it's held in the database) or a text search (if it's held in a folder) to zero in on the correct log file. Does anyone know where the information that's shown in the IDC job logs is stored?
    Thanks,
    Pete

    Thanks for the response. I checked MC_LOGS, and that looks to be the same detail that is available in the management console: basically, the rows displayed in the job log. Do you know the table relationships beyond MC_LOGS? What's the table name for the data (even if encrypted) that details each pass, etc.?
    Thanks,
    Pete

  • Teradata fast load log file empty

    Hi all,
    After updating ODI 11g, the Teradata FastLoad script is not running. The error says to see the log file, but the log is empty.
    Any solution?
    Naseer

    Any solution, please?

  • OES2 SP3 AFP How to empty AFP log file

    Hello All,
    I can't find any information on how to empty the AFP log file /var/log/afptcpd/afptcp.log. It has grown to 1.2 GB, and it is very awkward to search for information in such a big file.
    Any ideas?
    Thank you
    Andreas

    If you get a lot of AFP activity, you're probably best off just setting the log to rotate.
    Create a file under /etc/logrotate.d/ and name it whatever you want.
    Then just enter something like this:
    /var/log/afptcpd/afptcp.log {
        compress
        dateext
        maxage 365
        rotate 99
        size=+4096k
        notifempty
        missingok
        create 644 root root
        postrotate
            /etc/init.d/novell-afptcpd reload
        endscript
    }

  • Odi 11g - IKM SQL to Hyperion Essbase (DATA) log file always empty

    In ODI 11g, when using *"IKM SQL to Hyperion Essbase (DATA)"* with "LOG_ENABLED" = true, only an empty log file is generated.
    Only the "LOG_ERRORS" file is created (if errors occur).
    Is this just my issue? Can someone help me?
    P.S.: I get the same issue with *"IKM SQL to Hyperion Planning"*.
    Thanks in advance, Paolo

    Thanks, John, for your suggestion.
    Here is the patch: *"Patch 10302682: IKM SQL TO PLANNING: LOG FILE IS CREATED BUT NOTHING INSIDE."*
    I didn't see any other patch for Essbase... I'll keep checking the support site.
    Paolo
    Edited by: Paolo on 19-apr-2011 8.44

  • Errors in Ultra Search crawler's log file

    Hi all,
    I'm trying to configure Ultra Search to do advanced search on a set of attributes of my Portal items.
    After executing the synchronization schedule in the Ultra Search Administration Tool, I found this in the Crawler Progress Summary:
    Documents to Fetch 0
    Documents Fetched 284
    Document Fetch Failures 0
    Documents Rejected 0
    Documents Discovered 284
    Documents Indexed 59
    Documents non-indexable 225
    Document Conversion Failures 0
    The number of non-indexable documents is 225, while only 59 documents were indexed. Then I looked into the crawler's log file and found that almost all the processes that tried to index Portal items got errors. The errors look like:
    http://winas10g.tinhvan.com/pls/portal/PORTAL.wwsbr_srchxml.execute?p_action=generate_item&p_thingid=276935&p_siteid=753&p_result_lang=en: Portal server returned an error message.
    Documents to process = 0
    Error message returned from Portal server is: User-Defined Exception.
    The successfully processed documents look like:
    Documents to process = 186
    Processing http://winas10g.tinhvan.com/pls/portal/PORTAL.wwsbr_srchxml.execute?p_action=generate_item&p_thingid=276386&p_siteid=753
    Total documents successfully processed = 86
    Documents to process = 185
    Then, when I performed a search, I could only find text that appears directly on Portal pages with the URL address format http://host:port/pls/portal/url/PAGE/pagegroup_name/page_name.
    Is there any error in my Ultra Search configuration here? How can I index all the Portal items along with their attributes?
    Thanks,
    Vietdt

    Liya
    The log file for the crawler schedule should be explicitly listed in the information for the schedule. Please check the information for that schedule.
    This log file will be located in the log directory that you've specified in the "crawler" tab.
    Please locate the log file and let us know what you see in that.
    thanks
    edward

  • Empty Log files not deleted by Cleaner

    Hi,
    We have a NoSQL database installed on 3 nodes with a replication factor of 3 (see the exact topology below).
    We ran a test which consisted of the following operations repeated in a loop: store a LOB, read it, delete it.
    store.putLOB(key, new ByteArrayInputStream(source),Durability.COMMIT_SYNC, 5, TimeUnit.SECONDS);
    store.getLOB(key,Consistency.NONE_REQUIRED, 5, TimeUnit.SECONDS);
    store.deleteLOB(key, Durability.COMMIT_SYNC, 5, TimeUnit.SECONDS);
    During the test, the space occupied by the database kept growing!
    Cleaner threads are running, but they log these warnings:
    2015-02-03 14:32:58.936 UTC WARNING [rg3-rn2] JE: Replication prevents deletion of 12 files by Cleaner. Start file=0x0 holds CBVLSN 1, end file=0xe holds last VLSN 24,393
    2015-02-03 14:32:58.937 UTC WARNING [rg3-rn2] JE: Cleaner has 12 files not deleted because they are protected by replication.
    2015-02-03 14:32:58.920 UTC WARNING [rg3-rn1] JE: Replication prevents deletion of 12 files by Cleaner. Start file=0x0 holds CBVLSN 1, end file=0xe holds last VLSN 24,393
    2015-02-03 14:32:58.921 UTC WARNING [rg3-rn1] JE: Cleaner has 12 files not deleted because they are protected by replication.
    2015-02-03 14:32:58.908 UTC WARNING [rg3-rn3] JE: Replication prevents deletion of 12 files by Cleaner. Start file=0x0 holds CBVLSN 1, end file=0xe holds last VLSN 24,393
    2015-02-03 14:32:58.909 UTC WARNING [rg3-rn3] JE: Cleaner has 12 files not deleted because they are protected by replication.
    2015-02-03 14:33:31.704 UTC INFO [rg3-rn2] JE: Chose lowest utilized file for cleaning. fileChosen: 0xc (adjustment disabled) totalUtilization: 1 bestFileUtilization: 0 isProbe: false
    2015-02-03 14:33:32.137 UTC INFO [rg3-rn2] JE: CleanerRun 13 ends on file 0xc probe=false invokedFromDaemon=true finished=true fileDeleted=false nEntriesRead=1129 nINsObsolete=64 nINsCleaned=2 nINsDead=0 nINsMigrated=2 nBINDeltasObsolete=2 nBINDeltasCleaned=0 nBINDeltasDead=0 nBINDeltasMigrated=0 nLNsObsolete=971 nLNsCleaned=88 nLNsDead=0 nLNsMigrated=88 nLNsMarked=0 nLNQueueHits=73 nLNsLocked=0 logSummary=<CleanerLogSummary endFileNumAtLastAdjustment="0xe" initialAdjustments="5" recentLNSizesAndCounts=""> inSummary=<INSummary totalINCount="68" totalINSize="7570" totalBINDeltaCount="2" totalBINDeltaSize="254" obsoleteINCount="66" obsoleteINSize="7029" obsoleteBINDeltaCount="2" obsoleteBINDeltaSize="254"/> estFileSummary=<summary totalCount="2072" totalSize="13069531" totalINCount="68" totalINSize="7570" totalLNCount="1059" totalLNSize="13024352" maxLNSize="102482" obsoleteINCount="66" obsoleteLNCount="971" obsoleteLNSize="12974449" obsoleteLNSizeCounted="971" getObsoleteSize="13019405" getObsoleteINSize="7347" getObsoleteLNSize="12974449" getMaxObsoleteSize="13019405" getMaxObsoleteLNSize="12974449" getAvgObsoleteLNSizeNotCounted="NaN"/> recalcFileSummary=<summary totalCount="2072" totalSize="13069531" totalINCount="68" totalINSize="7570" totalLNCount="1059" totalLNSize="13024352" maxLNSize="0" obsoleteINCount="66" obsoleteLNCount="971" obsoleteLNSize="12974449" obsoleteLNSizeCounted="971" getObsoleteSize="13019405" getObsoleteINSize="7347" getObsoleteLNSize="12974449" getMaxObsoleteSize="13019405" getMaxObsoleteLNSize="12974449" getAvgObsoleteLNSizeNotCounted="NaN"/> lnSizeCorrection=NaN newLnSizeCorrection=NaN estimatedUtilization=0 correctedUtilization=0 recalcUtilization=0 correctionRejected=false
    Log files are not deleted even when empty, as seen using the DbSpace utility:
    java -cp /mam2g/kv-3.2.5/lib/kvstore.jar com.sleepycat.je.util.DbSpace -h /mam2g/data/sn1/u01/rg2-rn1/env
      File    Size (KB)  % Used
    00000000      12743       0
    00000001      12785       0
    00000002      12725       0
    00000003      12719       0
    00000004      12703       0
    00000005      12751       0
    00000006      12795       0
    00000007      12725       0
    00000008      12752       0
    00000009      12720       0
    0000000a      12723       0
    0000000b      12764       0
    0000000c      12715       0
    0000000d      12799       0
    0000000e      12724       1
    0000000f       5717       0
    TOTALS      196867       0
    Here is the configured topology:
    kv-> show topology
    store=MMS-KVstore  numPartitions=90 sequence=106
      zn: id=zn1 name=MAMHA repFactor=3 type=PRIMARY
      sn=[sn1] zn:[id=zn1 name=MAMHA] 192.168.144.11:5000 capacity=3 RUNNING
        [rg1-rn1] RUNNING
                 single-op avg latency=4.414467 ms   multi-op avg latency=0.0 ms
        [rg2-rn1] RUNNING
                 single-op avg latency=1.5962526 ms   multi-op avg latency=0.0 ms
        [rg3-rn1] RUNNING
                 single-op avg latency=1.3068943 ms   multi-op avg latency=0.0 ms
      sn=[sn2] zn:[id=zn1 name=MAMHA] 192.168.144.12:6000 capacity=3 RUNNING
        [rg1-rn2] RUNNING
                 single-op avg latency=1.5670061 ms   multi-op avg latency=0.0 ms
        [rg2-rn2] RUNNING
                 single-op avg latency=8.637241 ms   multi-op avg latency=0.0 ms
        [rg3-rn2] RUNNING
                 single-op avg latency=1.370075 ms   multi-op avg latency=0.0 ms
      sn=[sn3] zn:[id=zn1 name=MAMHA] 192.168.144.35:7000 capacity=3 RUNNING
        [rg1-rn3] RUNNING
                 single-op avg latency=1.4707285 ms   multi-op avg latency=0.0 ms
        [rg2-rn3] RUNNING
                 single-op avg latency=1.5334034 ms   multi-op avg latency=0.0 ms
        [rg3-rn3] RUNNING
                 single-op avg latency=9.05199 ms   multi-op avg latency=0.0 ms
      shard=[rg1] num partitions=30
        [rg1-rn1] sn=sn1
        [rg1-rn2] sn=sn2
        [rg1-rn3] sn=sn3
      shard=[rg2] num partitions=30
        [rg2-rn1] sn=sn1
        [rg2-rn2] sn=sn2
        [rg2-rn3] sn=sn3
      shard=[rg3] num partitions=30
        [rg3-rn1] sn=sn1
        [rg3-rn2] sn=sn2
        [rg3-rn3] sn=sn3
    Why are empty files not deleted by the cleaner? And why are empty log files protected by replication if all the replicas seem to be aligned with the master?
    java -jar /mam2g/kv-3.2.5/lib/kvstore.jar ping -host 192.168.144.11 -port 5000
    Pinging components of store MMS-KVstore based upon topology sequence #106
    Time: 2015-02-03 13:44:57 UTC
    MMS-KVstore comprises 90 partitions and 3 Storage Nodes
    Storage Node [sn1] on 192.168.144.11:5000    Zone: [name=MAMHA id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.2.5 2014-12-05 01:47:33 UTC  Build id: 7ab4544136f5
            Rep Node [rg1-rn1]      Status: RUNNING,MASTER at sequence number: 24,413 haPort: 5011
            Rep Node [rg2-rn1]      Status: RUNNING,REPLICA at sequence number: 13,277 haPort: 5012
            Rep Node [rg3-rn1]      Status: RUNNING,REPLICA at sequence number: 12,829 haPort: 5013
    Storage Node [sn2] on 192.168.144.12:6000    Zone: [name=MAMHA id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.2.5 2014-12-05 01:47:33 UTC  Build id: 7ab4544136f5
            Rep Node [rg3-rn2]      Status: RUNNING,REPLICA at sequence number: 12,829 haPort: 6013
            Rep Node [rg2-rn2]      Status: RUNNING,MASTER at sequence number: 13,277 haPort: 6012
            Rep Node [rg1-rn2]      Status: RUNNING,REPLICA at sequence number: 24,413 haPort: 6011
    Storage Node [sn3] on 192.168.144.35:7000    Zone: [name=MAMHA id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.2.5 2014-12-05 01:47:33 UTC  Build id: 7ab4544136f5
            Rep Node [rg1-rn3]      Status: RUNNING,REPLICA at sequence number: 24,413 haPort: 7011
            Rep Node [rg2-rn3]      Status: RUNNING,REPLICA at sequence number: 13,277 haPort: 7012
            Rep Node [rg3-rn3]      Status: RUNNING,MASTER at sequence number: 12,829 haPort: 7013

    Solved by setting an undocumented parameter, je.rep.minRetainedVLSNs.
    The solution is described in the NoSQL forum thread "Store cleaning policy".

  • Log file audit script to search and collect

    Hi guys,
    I'm trying to figure out the best way to complete this log file audit. I would like it scripted, but I can't seem to get a grasp on how best to do it. I need to search for the log files (all OS and application log files) on a few dozen systems, across a few different drives per system. I'm looking to collect the log location, name, size, and last event in each log, then export that info to a CSV file and email it to myself monthly to report on.
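    One possible starting point in PowerShell (a sketch only: the server list, drive letters, and mail settings are placeholders, remoting must be enabled on the targets, and Get-Content -Tail needs PowerShell 3.0+):

    # Collect *.log files from several servers, with the last line of each
    # file as a rough "last event" stand-in
    $servers = Get-Content "C:\scripts\servers.txt"
    $report = foreach ($server in $servers) {
        Invoke-Command -ComputerName $server -ScriptBlock {
            Get-ChildItem -Path "C:\", "D:\" -Recurse -Filter *.log -ErrorAction SilentlyContinue |
                Select-Object FullName, Length, LastWriteTime,
                    @{ Name = 'LastEvent'; Expression = { Get-Content $_.FullName -Tail 1 } }
        }
    }
    $report | Export-Csv "C:\scripts\log-audit.csv" -NoTypeInformation

    # Email the report; schedule the whole script monthly with Task Scheduler
    Send-MailMessage -To "[email protected]" -From "[email protected]" `
        -Subject "Monthly log file audit" -SmtpServer "smtp.example.com" `
        -Attachments "C:\scripts\log-audit.csv"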

    Please read the following:
    Posting guidelines
    Handy tips for posting to this forum
    How to ask questions in a technical forum
    Rubber duck problem solving
    How to write a bad forum post
    Help Vampires: A Spotter's Guide
    -- Bill Stewart [Bill_Stewart]
