Question: usage of the transaction log

Hello,
our BI database (DB2 v9.5) is 3 TB; the log directory (log_dir) is 20 GB.
1) What would you suggest regarding the size of log_dir to prevent the "transaction log is full" error?
2) How can I monitor transaction log usage? We would like to set up monitoring that sends out an alarm when usage reaches 90%. Is there a database select for this, or can I find the information in db2diag.log?
Regards,
Alexander

Hi Alexander,
in addition to the script Joachim provided, the following gives an SQL-based solution for finding out about log space usage:
db2 "SELECT TOTAL_LOG_AVAILABLE AS LOG_AVAILABLE, TOTAL_LOG_USED AS LOG_USED, APPL_ID_OLDEST_XACT AS OLDEST_APPL_ID FROM SYSIBMADM.SNAPDB"
You can find more interesting values in the SNAPDB admin view here:
DB2 9.5 Information Center
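For the 90% alarm, a minimal sketch built on the same view (my own suggestion, not a tested monitoring script; note that TOTAL_LOG_AVAILABLE is reported as -1 when infinite logging is active, so the percentage is meaningless in that case):
db2 "SELECT DEC(TOTAL_LOG_USED * 100.0 / NULLIF(TOTAL_LOG_USED + TOTAL_LOG_AVAILABLE, 0), 5, 2) AS PCT_LOG_USED FROM SYSIBMADM.SNAPDB"
Your monitoring tool can then fire the alarm when PCT_LOG_USED exceeds 90.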
Regards,
Hans-Jürgen

Similar Messages

  • Transaction log usage grows due to replication even if I don't use replication at all

    Hi
    The transaction log usage on my user database has been growing a lot for the past few days. The database is in the full recovery model and I take transaction log backups every 10 minutes. The DB was part of database mirroring, but I removed it. The usage was kept under control for many years by the backups, but something happened that is messing up the transaction log.
    This is the output of DBCC OPENTRAN:
    Transaction information for database 'MyDB'.
    Replicated Transaction Information:
            Oldest distributed LSN     : (0:0:0)
            Oldest non-distributed LSN : (1450911:6823:1)
    DBCC execution completed. If DBCC printed error messages, contact your system administrator.
    log_reuse_wait_desc reports REPLICATION
    The funny thing is that I am not using replication at all; I am using CDC.
    To reduce transaction log usage, I have run the statement below every day since the problem started:
    EXEC sp_repldone @xactid = NULL, @xact_segno = NULL, 
        @numtrans = 0, @time = 0, @reset = 1
    Any idea what I should do to solve this problem and get back to a normal situation?
    BTW, the server is SQL Server 2012 (11.0.2383).
    Thanks
    Javier Villegas | @javier_vill | http://sql-javier-villegas.blogspot.com/

    CDC uses the replication log reader agent, and if you manually ran sp_repldone like that, you lost information in your CDC capture. If the capture job can't keep up with the workload, or is not running for CDC, you would have the exact problems you describe.
    If you execute sp_repldone like that, you might as well disable CDC.
    http://technet.microsoft.com/en-us/library/dd266396(v=sql.100).aspx
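    As a quick first check, a sketch along these lines (hypothetical database name; sys.sp_cdc_help_jobs lists the CDC capture/cleanup job configuration) shows what is holding the log and whether the capture job is set up:
    USE MyDB;
    SELECT name, log_reuse_wait_desc FROM sys.databases WHERE name = N'MyDB';
    EXEC sys.sp_cdc_help_jobs;
    If log_reuse_wait_desc still reports REPLICATION while the capture job is running, the log reader simply has not caught up yet.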
    Jonathan Kehayias | Principal Consultant | MCM: SQL Server 2008

  • Question about full backup and Transaction Log file

    I had a query: will taking a full backup daily keep my log file from growing? After taking the full backup I still see some of the VLFs in status 2; they went away when I manually took a backup of the log file. I am a bit confused: should I perform both transaction log backups and daily full database backups to avoid this in future? Also, until I run SHRINKFILE, the storage space on the server won't be reduced, right?

    Yes: a full backup does not clear the log file, only a log backup does. Once a log backup is taken, the inactive VLFs in the log file are set to status 0.
    You should perform log backups as per your business SLA for data loss.
    Go ahead and ask this of yourself:
    If a disaster strikes and your database server is lost and your only option is to restore it from backup, how much data loss can your business handle?
    The answer to this question is how frequent your log backups should be:
    if the answer is 10 minutes, you should take log backups at least every 10 minutes;
    if the answer is 30 minutes, at least every 30 minutes;
    if the answer is 90 minutes, at least every 90 minutes.
    So, when you restore, you will restore the latest full backup + the latest differential taken after that full backup, and all the log backups taken since that restored full or differential backup.
    There are several resources on the web, including YouTube videos, that explain these concepts clearly; I advise you to look at them.
    To release the file space to the OS, you should shrink the file. A log file shrink happens from the end of the file up to the point where it reaches an active VLF; if there are no inactive VLFs at the end, the log file is not shrinkable, no matter how many inactive VLFs it has at the beginning. A minimal sketch of both operations is shown below.
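    Sketch (hypothetical database, file, and path names; adjust to your environment):
    BACKUP LOG MyDB TO DISK = N'D:\Backups\MyDB_log.trn';
    DBCC SHRINKFILE (MyDB_log, 1024);  -- target size in MB; only inactive VLFs at the end are released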
    Hope it Helps!!

  • The system failed to flush data to the transaction log. Corruption may occur.

    We have a windows server 2008 R2 Virtual machine and we are getting the following Warning Event.
    Event 51 Volmgr
    The system failed to flush data to the transaction log.  Corruption may occur.
    Any idea what is wrong with this server? Why is this event occurring?

    Hi Jitender KT,
    Before going further, would you please let me know the complete error message (a screenshot if you can provide one)? Please also check in Event Viewer whether there are other related events, such as Event 57. Meanwhile, can you remember what operations you performed before the warning occurred?
    Based on the message you provided, please run the chkdsk command to check whether it finds any errors; the issue seems to be related to the storage device. An example invocation is shown below the links. Please refer to the following similar question:
    http://social.technet.microsoft.com/Forums/windowsserver/en-US/044b10af-c253-46de-b40d-ce9d128b83d7/event-id-57-source-volmgr?forum=winservergen
    In addition, please also refer to the following link. It should be helpful.
    http://www.eventid.net/display-eventid-57-source-volmgr-eventno-8865-phase-1.htm
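    A sketch of the disk check (hypothetical drive letter; /f fixes file system errors, /r locates bad sectors and recovers readable information; the volume may need to be dismounted or checked at reboot):
    chkdsk D: /f /r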
    Hope this helps.
    Best regards,
    Justin Gu

  • Log Reader Agent: transaction log file scan and failure to construct a replicated command

    I encountered the following error message related to Log Reader job generated as part of transactional replication setup on publisher. As a result of this error, none of the transactions propagated from publisher to any of its subscribers.
    Error Message
    2008-02-12 13:06:57.765 Status: 4, code: 22043, text: 'The Log Reader Agent is scanning the transaction log for commands to be replicated. Approximately 24500000 log records have been scanned in pass # 1, 68847 of which were marked for replication, elapsed time 66018 (ms).'.
    2008-02-12 13:06:57.843 Status: 0, code: 20011, text: 'The process could not execute 'sp_replcmds' on ServerName.'.
    2008-02-12 13:06:57.843 Status: 0, code: 18805, text: 'The Log Reader Agent failed to construct a replicated command from log sequence number (LSN) {00065e22:0002e3d0:0006}. Back up the publication database and contact Customer Support Services.'.
    2008-02-12 13:06:57.843 Status: 0, code: 22037, text: 'The process could not execute 'sp_replcmds' on 'ServerName'.'.
    The replication agent job kept retrying at the specified intervals and kept failing with that message.
    Investigation
    I could clearly see there were transactions waiting to be delivered to subscribers from the following:
    SELECT * FROM dbo.MSrepl_transactions -- 1162
    SELECT * FROM dbo.MSrepl_commands -- 821922
    The following steps were taken to investigate further. They confirmed that transactions were queued, waiting to be delivered to the distribution database.
    -- Returns the commands for transactions marked for replication
    EXEC sp_replcmds
    -- Returns a result set of all the transactions in the publication database transaction log that are marked for replication but have not been marked as distributed.
    EXEC sp_repltrans
    -- Returns the commands for transactions marked for replication in readable format
    EXEC sp_replshowcmds
    Resolution
    Taking a backup as suggested in the message wouldn't resolve the issue. None of the commands retrieved from sp_browsereplcmds for the LSN mentioned in the message had any syntactic problems either.
    exec sp_browsereplcmds @xact_seqno_start = '0x00065e220002e3d00006'
    In a desperate attempt to resolve the problem, I decided to drop all subscriptions. To my surprise, the Log Reader kept failing with the same error. I thought that, having no subscriptions for its publications, the log reader agent would have no reason to scan the publisher's transaction log, but obviously I was wrong. Even adding a new log reader with sp_addlogreader_agent after deleting the old one didn't help. Restarting the server couldn't do much good either.
    EXEC sp_addlogreader_agent
    @job_login = 'LoginName',
    @job_password = 'Password',
    @publisher_security_mode = 1;
    When nothing else worked for me, I decided to give the following procedures, reserved for troubleshooting replication, a try:
    --Updates the record that identifies the last distributed transaction of the server
    EXEC sp_repldone @xactid = NULL, @xact_segno = NULL, @numtrans = 0, @time = 0, @reset = 1
    -- Flushes the article cache
    EXEC sp_replflush
    Bingo!
    The log reader agent managed to start successfully this time. I wish I had used both commands before I decided to drop the subscriptions; it would have saved me the considerable effort and time spent re-doing them.
    Question
    Even though I managed to resolve the error and have replication functioning again, I think there might have been a better solution, and I would appreciate any feedback on how you would have approached the problem.

    Hi Hilary,
    Will the statement below truncate the log records marked for replication? Is there any data loss when we execute this command? Can you please help me understand its internal workings?
    EXEC sp_repldone @xactid = NULL, @xact_segno = NULL, @numtrans = 0, @time = 0, @reset = 1

  • You want to know the amount of space the transaction log for the Customer database is using. Which T-SQL command would you use?

    You want to know the amount of space the transaction log for the Customer database is using. Which T-SQL command would you use?

    Forced me to do a little research.
    DBCC SQLPERF(logspace)
    See also
    http://stackoverflow.com/questions/198343/how-can-i-get-the-size-of-the-transaction-log-in-sql-2005-programmatically
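    If you need to evaluate the result programmatically, a common sketch (my addition, not from the original answer) is to capture the DBCC output into a temp table and filter for the Customer database:
    -- INSERT ... EXEC captures the DBCC SQLPERF(logspace) result set
    CREATE TABLE #logspace (
        DatabaseName    sysname,
        LogSizeMB       float,
        LogSpaceUsedPct float,
        Status          int
    );
    INSERT INTO #logspace
    EXEC ('DBCC SQLPERF(logspace)');
    SELECT DatabaseName, LogSizeMB, LogSpaceUsedPct
    FROM #logspace
    WHERE DatabaseName = N'Customer';
    DROP TABLE #logspace;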
    For every expert, there is an equal and opposite expert. - Becker's Law

  • Audit Vault 12.1.1 error creating audit trail with TRANSACTION LOG

    Hi,
    I installed AV 12.1.1; the target DB is set up with Data Guard.
    When I run the oracle_user_setup script in REDO_COLL mode, the final message says it was successful, but when I go to the AV console and try to create an audit trail with TRANSACTION LOG, the console shows an error and the log shows this:
    [2013-10-16T03:37:18.593-05:00] [collfwk] [ERROR] [] [] [tid: 10] [ecid: 192.168.56.8:78800:1381912639433:0,0] RedoCollector : runSourceScript : Error while running script on source for REDO collector.
    [2013-10-16T03:37:19.528-05:00] [collfwk] [ERROR] [] [] [tid: 10] [ecid: 192.168.56.8:78800:1381912639433:0,0] OAV-8004: Failed to start collector {0}:{1}CollectionFactory : createCollection : Exception while creating collection. [[
    Failed to start collector {0}:{1}
                    at oracle.av.platform.agent.collfwk.impl.redo.RedoCollector.runSourceScript(RedoCollector.java:816)
                    at oracle.av.platform.agent.collfwk.impl.redo.RedoCollector.sourceSetup(RedoCollector.java:579)
                    at oracle.av.platform.agent.collfwk.impl.redo.RedoCollector.setup(RedoCollector.java:454)
                    at oracle.av.platform.agent.collfwk.impl.redo.RedoCollector.startCollector(RedoCollector.java:216)
                    at oracle.av.platform.agent.collfwk.impl.redo.RedoCollectorManager.startTrail(RedoCollectorManager.java:199)
                    at oracle.av.platform.agent.collfwk.impl.factory.CollectionFactory.createCollection(CollectionFactory.java:504)
                    at oracle.av.platform.agent.collfwk.impl.factory.CollectionFactory.createCollection(CollectionFactory.java:354)
                    at oracle.av.platform.agent.StartTrailCommandHandler.processMessage(StartTrailCommandHandler.java:63)
                    at oracle.av.platform.agent.AgentController.processMessage(AgentController.java:325)
                    at oracle.av.platform.agent.AgentController$MessageListenerThread.run(AgentController.java:1859)
                    at java.lang.Thread.run(Thread.java:679)
    Nested Exception:
    java.sql.SQLSyntaxErrorException: ORA-01031: insufficient privileges
    ORA-06512: at line 1
                    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:445)
                    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:396)
                    at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:879)
                    at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:450)
                    at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:192)
                    at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:531)
                    at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:207)
                    at oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:1044)
                    at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1329)
                    at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3584)
                    at oracle.jdbc.driver.OraclePreparedStatement.execute(OraclePreparedStatement.java:3685)
                    at oracle.jdbc.driver.OraclePreparedStatementWrapper.execute(OraclePreparedStatementWrapper.java:1376)
                    at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
                    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
                    at java.lang.reflect.Method.invoke(Method.java:616)
                    at oracle.ucp.jdbc.proxy.StatementProxyFactory.invoke(StatementProxyFactory.java:230)
                    at oracle.ucp.jdbc.proxy.PreparedStatementProxyFactory.invoke(PreparedStatementProxyFactory.java:124)
                    at $Proxy2.execute(Unknown Source)
                    at oracle.av.platform.agent.collfwk.impl.redo.RedoCollector.runSourceScript(RedoCollector.java:747)
                    at oracle.av.platform.agent.collfwk.impl.redo.RedoCollector.sourceSetup(RedoCollector.java:579)
                    at oracle.av.platform.agent.collfwk.impl.redo.RedoCollector.setup(RedoCollector.java:454)
                    at oracle.av.platform.agent.collfwk.impl.redo.RedoCollector.startCollector(RedoCollector.java:216)
                    at oracle.av.platform.agent.collfwk.impl.redo.RedoCollectorManager.startTrail(RedoCollectorManager.java:199)
                    at oracle.av.platform.agent.collfwk.impl.factory.CollectionFactory.createCollection(CollectionFactory.java:504)
                    at oracle.av.platform.agent.collfwk.impl.factory.CollectionFactory.createCollection(CollectionFactory.java:354)
                    at oracle.av.platform.agent.StartTrailCommandHandler.processMessage(StartTrailCommandHandler.java:63)
                    at oracle.av.platform.agent.AgentController.processMessage(AgentController.java:325)
                    at oracle.av.platform.agent.AgentController$MessageListenerThread.run(AgentController.java:1859)
                    at java.lang.Thread.run(Thread.java:679)
    I don't understand why this happens, because the user has the privileges granted by the script, and I also tried granting them as SYSDBA, but without any result.
    I don't understand what privileges the collector needs.
    Any idea?
    Thanks for any help.

    Hi
    Just run the script $AV_AGENT/av/plugins/com.oracle.av.plugin.oracle/config/oracle_user_setup.sql USER_NAME REDO_COLL
    This will grant the user the needed privileges and roles, such as DBA and CREATE DATABASE LINK.
    I hope this answers your question.
    Thanks
    Ahmed Moustafa

  • We have "dbbackup.exe" in SqlAnywhere in BI 4.1 for running the transaction log truncation/backup. This wasn't present in BOXI 3.1. Any alternative for 3.1?

    1) OS version:
    OS Name : Windows Server 2008 R2
    2) BO version:
        BusinessObjects XI 3.1 SP05.
    3) My question:
    We have the "dbbackup.exe" utility in SQL Anywhere in BI 4.1 for running the transaction log (CMS and Audit) truncation/backup, but the same utility is not present in BOXI 3.1 SP05.
    Is there an equivalent/alternative utility in BOXI 3.1 SP05 for the same purpose? We use the command below for BI 4.1 transaction log truncation/backup:
    E:\Program Files\SAP BusinessObjects\sqlanywhere\BIN64>dbbackup.exe -c "dsn=<System DSN>;uid=<SQL_AW_DBA_UID>;pwd=<SQL_AW_DBA_PASSWD>;host=localhost:2638" -t -x -n "E:\Transaction_log_backup\CMS"
    Any help or clarification on this issue would be greatly appreciated.
    Thanks in advance.
    Conor.

    Hi Conor,
    BOXI 3.1 SP05 does not include the dbbackup utility.  Instead, you issue SQL statements to create the backup.  We published a paper on the subject:
    http://scn.sap.com/docs/DOC-48608
    The paper uses a maintenance plan to schedule regular backups, but you don't need to do that if you want to simply create a backup when required.  To do that (along with transaction log truncation), you run the SQL statement:
    BACKUP DATABASE DIRECTORY 'backup-dir'
    TRANSACTION LOG TRUNCATE;
    For complete details about the BACKUP statement, have a look here:
    http://dcx.sap.com/index.html#1201/en/dbreference/backup-statement.html
    You'll need to execute the statement inside a SQL console - the paper above describes how to get that.
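    If you want to script it (e.g., from a scheduled task), one possible sketch is to pass the statement to the dbisql utility that ships with SQL Anywhere; the DSN and backup path below are assumptions carried over from your 4.1 command, not something from the paper:
    dbisql -c "dsn=<System DSN>;uid=<SQL_AW_DBA_UID>;pwd=<SQL_AW_DBA_PASSWD>" "BACKUP DATABASE DIRECTORY 'E:\Transaction_log_backup\CMS' TRANSACTION LOG TRUNCATE"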
    I hope this helps!
    José Ramos
    Product Manager
    SAP Canada

  • Exchange 2010 SP3, RU5 - Massive Transaction Log File Generation

    Hey All,
    I am trying to figure out why one of our databases is generating 30k log files a day! The other one is generating 20k log files a day. The database does not grow in size as the log files are generated; the problem is the log file generation itself.
    I've tried running through some of the various solutions out there and reviewed message tracking logs, RPC Client Access logs, and IIS logs - all of which show important info, but none of which actually provide the answers.
    I stopped the following services to see whether that would affect the log file generation in any way, and it has not!
    MS Exchange Transport
    Mail Submission
    IIS (Site Stopped in IIS)
    Mailbox Assistants
    Content Indexing Service
    With the above services stopped, I still see dozens (or more) of log files generated in under 10 minutes. I also checked the mailbox size reports (top 10) and found an item count increase of about 300 for one user and a size increase of about 150 MB for another (over the whole day).
    I am not sure what else to check here. Any ideas?
    Thanks,
    Robert

    Hmm - this sounds like a device is chewing up the logs.
    If you use Log Parser Studio, are there any stand-out devices in terms of the number of hits?
    And for ExMon, was that logged over a period of time? The default 60-second window normally misses a lot of stuff. Just curious!
    Cheers,
    Rhoderick
    Microsoft Senior Exchange PFE
    Blog:
    http://blogs.technet.com/rmilne 
    Rhoderick,
    Thanks for the response. When checking the logs, the highest number of hits came from the (source) load balancers, port 25 VIP. The problems I was experiencing were the following:
    1) I kept expecting the log file generation to drop to an acceptable rate of 10-20 MB per minute (max). We have a large environment and use the Exchange servers as the mail relays for the hated Nagios monitoring environment.
    2) We didn't have our enterprise monitoring system watching SMTP traffic; this is being resolved.
    3) I needed to look closer at the SMTP transport database counters, logs, and log files, and focus less on the database log generation; I did some of that, but not enough.
    4) My troubleshooting kept getting thrown off because the monitoring notifications seemed to be sent out in batches (or something similar); stopping the transport service for 10-15 minutes several times seemed to finally stop the transaction logs from growing at a psychotic rate.
    5) I am re-running my data captures now that I have told the "Nagios team" to quit killing the Exchange servers with their notifications, sometimes 100+ of the same notification for the same servers and issues. So far, at a quick glance, the log file generation seems to have dropped by about 30%.
    Question: What would be the best counters to review in order to "put it all together"? Also note: our server roles are split, MBX and CAS/HT.
    Robert

  • Knowledge on Transaction log ?

    Hi All,
    I have a couple of questions.
    Question 1:
    I need to know whether running the import/export wizard will increase T-log growth, or whether running a simple SELECT statement will increase the T-log.
    To my limited knowledge, only data modification (insert, update, or delete) and data definition language (DDL) statements increase the T-log; how about the import/export wizard or a simple SELECT statement?
    Question 2:
    Also, what happens inside the simple recovery model compared to the full recovery model?
    I assume the data is first written to the T-log and, once committed, moves to the MDF. In this scenario, what happens in simple and in full recovery, and how do they differ from each other? Please help me understand the internal architecture/inner workings of the recovery models.
    Best Regards,
    Moug

    Hi,
    Q1) No, SELECT statements don't get logged. An import/export writes to the database, hence the T-log will be used. Any statement other than DRL (Data Retrieval Language) is either fully or minimally logged.
    Q2) In the simple recovery model, the data stays in the transaction log until the transaction commits. Once it is committed, it is written to the MDF and the log space is cleared, which means it can be reused by other transactions. In the full recovery model, the space can only be reused once a log backup has been taken.
    Check this link about the transaction log; it should clear all your doubts:
    http://msdn.microsoft.com/en-gb/library/ms190925.aspx
    You can check the log_reuse_wait_desc column in sys.databases to see why the transaction log is not being reused:
    http://msdn.microsoft.com/en-gb/library/ms178534.aspx
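    A quick sketch of that check (hypothetical database name):
    SELECT name, recovery_model_desc, log_reuse_wait_desc
    FROM sys.databases
    WHERE name = N'MyDB';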
    Listen to this video to learn about the transaction log internals in depth:
    http://technet.microsoft.com/en-US/sqlserver/gg313762.aspx
    Regards, Ashwin Menon | My Blog - http://sqllearnings.com

  • Viewing a transaction log for a query

    Hi all,
    I have an application which has a query in some functionality of it.
    I want to check how many times in a month that query is referenced or used.
    I hope oracle maintains a transaction log or something of this sort.
    Is there a way to analyze this?
    Thanks in advance.
    BRK

    > I want to check how many times in a month that query is referenced or used.
    Simple answer, no.
    Yes, Oracle does have transaction (redo and undo) logs. Yes, Oracle has advanced features like flashback queries. But keeping track of how many times a query (SQL) has been used in a month? Not the best of ideas. I have SQLs that are run over a billion times per month. I do not have the storage space for Oracle to keep track of just when these were run, with what values, what the performance was, etc.
    Remember that on a busy system running thousands of SQLs per second, there is no time to waste on maintaining something like a query log.
    > Is there a way to analyze this
    Simple answer, yes. The SQL shared pool will tell you how many times a query (cursor) in the pool has been executed - assuming it has not been aged out of the pool as being "old and cold"; a sketch follows below. AWR reporting can be used. Etc.
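    For example (hypothetical filter text; since cursors age out of the pool, the counts are not a complete monthly history):
    SELECT sql_id, executions, SUBSTR(sql_text, 1, 80) AS sql_text_start
    FROM v$sql
    WHERE sql_text LIKE '%your_query_fragment%';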
    The real question is what do you want to achieve with this analysis? The number of times a SQL is executed is meaningless on its own - additional measures are required for meaningful analysis.

  • Usage of Transaction types in other than asset accounting

    Hi,
    Can someone tell me the usage of transaction types in FI other than in Asset Accounting?
    Thanx,
    Sowmya

    Hi,
    Transaction types are used in several components in SAP, e.g. in CS
    http://help.sap.com/erp2005_ehp_04/helpdata/EN/d7/07542843b911d189ee0000e81ddfac/frameset.htm
    Since it's more of a 'what is it?' question, you can easily find the answer by searching for 'transaction type' on help.sap.com.
    Regards,
    Eli

  • Big transaction log file

    Hi,
    I found a SQL Server database with a transaction log file of 65 GB.
    The database is configured with the recovery model option = FULL.
    Also, I noticed that since the database was created, they have only taken database backups.
    No transaction log backups were ever executed.
    Now, the 65 GB transaction log file uses more than 70% of the disk space.
    Which scenario do you recommend?
    1- Backup the database, backup the transaction log to a new disk, shrink the transaction log file, schedule transaction log backup each hour.
    2- Backup the database, put the recovery model option= simple, shrink the transaction log file, Backup the database.
    Would the 65 GB file shrink operation have an impact on my database users?
    The SQL Server version is 2008 SP2 (10.0.4000).
    regards
    D

    I've read the other posts and my position is: it really doesn't matter.
    You've not needed point-in-time restore ability up to this date and time since inception. Since a full database backup contains all of the log needed to bring the database into a consistent state, doing a full backup and then a log backup is redundant
    and just takes up space.
    For the fastest option I would personally do the following:
    1. Take a full database backup
    2. Set the database recovery model to Simple
    3. Manually issue two checkpoints for good measure, or check to make sure the current (active) VLF is near the beginning of the log file
    4. Shrink the log using the truncate option to lop off the end of the log
    5. Manually re-size the log based on usage needed
    6. Set the recovery model to full
    7. Take a differential database backup to bridge the log gap
    The total time that will take is really just the full database backup and the expanding of the log file. The shrink should be close to instantaneous, since you're just truncating the end, and the differential backup should be fairly quick as well. If you don't need the full recovery model, leave the database in simple, reset the log size (through multiple grows if needed), and take a new full backup for safekeeping. A sketch of the seven steps is below.
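    Sketch (hypothetical database, file, and path names; the target size is illustrative):
    BACKUP DATABASE MyDB TO DISK = N'E:\Backups\MyDB_full.bak';                  -- 1. full backup
    ALTER DATABASE MyDB SET RECOVERY SIMPLE;                                      -- 2. switch to simple
    CHECKPOINT;                                                                   -- 3. two checkpoints for good measure
    CHECKPOINT;
    DBCC SHRINKFILE (MyDB_log, TRUNCATEONLY);                                     -- 4. lop off the end of the log
    ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_log, SIZE = 8192MB);             -- 5. re-size to expected usage
    ALTER DATABASE MyDB SET RECOVERY FULL;                                        -- 6. back to full
    BACKUP DATABASE MyDB TO DISK = N'E:\Backups\MyDB_diff.bak' WITH DIFFERENTIAL; -- 7. bridge the log gap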
    Sean Gallardy

  • Content Engine transaction logs -- monitoring and analysis

    At our remote sites there's a local Cisco CE511 to ease our WAN bandwidth. I have been tasked with finding a method to gather CE usage data for trending and troubleshooting.
    From my search on the internet I decided to go with the Webalizer application. I set up the CEs to export their transaction logs every hour to my FTP server. After a test of Webalizer on a log file, it produced a nice HTML report for that hour.
    I would like to discuss bringing this up to a new level with anyone interested. I would like Webalizer to run as a cron job, but the log file names change every hour, so that's a hurdle I need to figure out. Keeping track of user web hits is also important; I would like to make sure my reports accurately identify which IP address is the top talker.
    I hope this will start a productive exchange of ideas. Thanks.

    Simple Network Management Protocol (SNMP) is an interoperable standards-based protocol that allows for external monitoring of the Content Engine through an SNMP agent.
    An SNMP-managed network consists of three primary components: managed devices, agents, and management systems. A managed device is a network node that contains an SNMP agent and resides on a managed network. Managed devices collect and store management information and use SNMP to make this information available to management systems that use SNMP. Managed devices include routers, access servers, switches, bridges, hubs, computer hosts, and printers.
    An SNMP agent is a software module that resides in a managed device. An agent has local knowledge of management information and translates that information into a form compatible with SNMP. The SNMP agent gathers data from the Management Information Base (MIB), which is the repository for information about device parameters and network data. The agent can also send traps, or notification of certain events, to the manager.
    http://www.cisco.com/en/US/products/sw/conntsw/ps491/products_configuration_guide_chapter09186a0080236630.html#wp1101506

  • Moving Exchange 2010 Transaction Log

    Hi. Running a single instance of SBS2011 with Exchange 2010. We only have one mailbox.
    Earlier today I moved our main exchange database (.edb) file to a different hard drive. Used EMC, went through smoothly, no issues.
    I then started moving the transaction logs. Same process:
    Exchange Management Console > Organization Configuration > Mailbox > Move Database Paths
    This appeared to be going fine but after 40 mins came up with an error along the lines of "WinRM cannot process the request in the time specified"
    Now I can't access any mailboxes. OWA gives the error: "Your mailbox appears to be unavailable. Try to access it again in 10 seconds. If you see this error again, contact your helpdesk."
    I have checked in EMC and the mailbox is successfully mounted. I have restarted all Exchange services.
    I also now have 8.5 GB of log files in the "new" location; I'm not sure if these are just copies or if Exchange is actually using this drive.

    Hi:
    I think you may be overcomplicating this. SBS includes a wizard in the SBS Console to move most/all of the user-impacting data stores, including Exchange, SharePoint, WSUS and redirected folders. The transaction logs are limited in size if you use either the SBS backup or a third-party backup that is "Exchange aware" to back up the server or the Exchange subsystem, so there is really no need to move the logs, assuming you are making good backups.
    Larry Struckmeyer [MVP]
