Content Engine transaction logs -- monitoring and analysis

At our remote sites there's a local Cisco CE511 to ease our WAN bandwidth. I have been tasked to find a method to gather CE usage for trending and troubleshooting.
From my search on the internet, I decided to go with the Webalizer application. I set up the CEs to export their transaction logs every hour to my FTP server. After a test of Webalizer on one log file, it produced a nice HTML report for that hour.
I would like to discuss taking this to the next level with anyone who is interested. I would like Webalizer to run as a cron job, but the log file names change every hour, so that's a hurdle I need to figure out. Keeping track of user web hits is also important: I want to make sure my reports are accurate in identifying which IP address is the top talker.
I hope this will start a productive exchange of ideas. Thanks.

Simple Network Management Protocol (SNMP) is an interoperable standards-based protocol that allows for external monitoring of the Content Engine through an SNMP agent.
An SNMP-managed network consists of three primary components: managed devices, agents, and management systems. A managed device is a network node that contains an SNMP agent and resides on a managed network. Managed devices collect and store management information and use SNMP to make this information available to management systems that use SNMP. Managed devices include routers, access servers, switches, bridges, hubs, computer hosts, and printers.
An SNMP agent is a software module that resides in a managed device. An agent has local knowledge of management information and translates that information into a form compatible with SNMP. The SNMP agent gathers data from the Management Information Base (MIB), which is the repository for information about device parameters and network data. The agent can also send traps, or notification of certain events, to the manager.
http://www.cisco.com/en/US/products/sw/conntsw/ps491/products_configuration_guide_chapter09186a0080236630.html#wp1101506

Similar Messages

  • Content database transaction log is full

    Hi guys,
    I am facing a very serious issue right here: SharePoint content can't be updated because the drive holding the transaction logs is full. The following message is displayed in the event viewer:
    'The transaction log for database wss_content_guid is full. To find out why space in the log can't be reused, see the log_reuse_wait_desc column in sys.databases'
    Please help.

    Hi,
    The recommended way to truncate the transaction log if you are using a full recovery model is to back up the log. SQL Server 2005 automatically truncates the inactive parts of the transaction log when you back up the log. It is also recommended that you pre-grow the transaction log to avoid auto-growing the log. For more information about growing the transaction log, see Managing the Size of the Transaction Log File (http://go.microsoft.com/fwlink/?LinkId=124882). For more information about using a full recovery model, see Backup Under the Full Recovery Model (http://go.microsoft.com/fwlink/?LinkId=127985). For more information about using a simple recovery model, see Backup Under the Simple Recovery Model (http://go.microsoft.com/fwlink/?LinkId=127987).
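    In practice that looks something like this (a minimal T-SQL sketch; the database name and backup path are placeholders, and it assumes the database is in the full recovery model):
    -- Backing up the log lets SQL Server truncate (reuse) the inactive portion.
    BACKUP LOG [WSS_Content] TO DISK = N'E:\Backups\WSS_Content_log.trn';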
    We do not recommend that you manually shrink the transaction log size or manually truncate the log by using the Truncate method.
    Transaction logs are also automatically backed up when you back up the farm, Web application, or databases by using either the SharePoint Central Administration Web site or the Stsadm command-line tool. For more information about the Stsadm command-line tool, see Backup: Stsadm operation (Windows SharePoint Services).
    So I would suggest backing up SharePoint with either the SharePoint Central Administration Web site or the Stsadm command-line tool.
    For more information about Best Practice on Backups, please refer to the following articles:
    Best Practice on Backups
    http://blogs.msdn.com/joelo/archive/2007/07/09/best-practice-on-backups.aspx
    Back up logs (Windows SharePoint Services 3.0)
    http://technet.microsoft.com/en-us/library/cc811601.aspx
    Hope this helps.
    Rock Wang - MSFT

  • Transaction linking, Monitoring and Alerting product

    Hi,
    I wanted to check whether there is any product from BEA Systems which can be used to capture metadata at different points in a business transaction. The business process could be a multi-hop process. Metadata from one hop to the next can usually be linked with some attributes, and sometimes the linking algorithm can be complex. We are investigating whether there are any products which can capture the transaction attributes, perform the linking between the metadata at the different hops, and also let us specify business rules, such as performing some alerting function if we don't have matching data at some hops for a transaction flow. So the product would be doing some transaction monitoring activity.
    If anyone has any information about this kind of software, please let me know.
    Thanks
    Surajit

    Hi Ralph,
    We are facing the same issue in our system. I have checked the specified note and observed that a few of the systems had to be updated in Managed System Configuration. I have updated the systems, but the alerts have not stopped.
    Can you please tell me the procedure to resolve this issue?
    Thanks.
    Regards,
    Deepika R

  • Transaction log and access log

    The transaction log (TransactionLogFilePrefix) and the access log are stored relative to the directory where the server is started, rather than where the server resides, as is the case with the rest of the log files. Why is this?
    Eg.
    I start the server with a batch file contained in
    projects\bat
    My server is in
    projects\server\config\myDomain
    When I start the server the access and transaction logs end up in
    projects\bat
    while all the rest of the log files (such as the domain and server log) end
    up in
    projects\server
    My batch file that starts the server looks like this
    "%JAVA_HOME%\bin\java" -hotspot -ms64m -mx64m -classpath %CLASSPATH%
    "-Dbea.home=e:\bea"
    "-Djava.security.policy==i:\projects\server\config\myDomain\weblogic.policy"
    "-Dweblogic.Domain=myDomain" "-Dweblogic.Name=adminServer"
    "-Dweblogic.RootDirectory=i:/projects/server"
    "-Dweblogic.management.password=weblogic" weblogic.Server
    Thanks for help on this,
    Myles

    The same case with me: I sent an email to Apple support but got no reply.
    The Apple status page indicated that everything is fine now, what a joke.
    Many devs are in this situation too. I guess we can do nothing but wait for their system to come up.

  • WAE 512 and transaction logs problem

    Hi guys,
    I have a WAE 512 with ACNS 5.5.1b7 and I'm not able to export archived logs correctly. I tried to configure the WAE as below:
    transaction-logs enable
    transaction-logs archive interval every-day at 23:00
    transaction-logs export enable
    transaction-logs export interval every-day at 23:30
    transaction-logs export ftp-server 10.253.8.125 cache **** .
    and the WAE exported only one file of about 9 MB, even though the files were stored on the WAE, as you can see from the output:
    Transaction log configuration:
    Logging is enabled.
    End user identity is visible.
    File markers are disabled.
    Archive interval: every-day at 23:00 local time
    Maximum size of archive file: 2000000 KB
    Log File format is squid.
    Windows domain is not logged with the authenticated username
    Exporting files to ftp servers is enabled.
    File compression is disabled.
    Export interval: every-day at 23:30 local time
    server type username directory
    10.253.8.125 ftp cache .
    HTTP Caching Proxy logging to remote syslog host is disabled.
    Remote syslog host is not configured.
    Facility is the default "*" which is "user".
    Log HTTP request authentication failures with auth server to remote syslog host.
    HTTP Caching Proxy Transaction Log File Info
    Working Log file - size : 96677381
    age: 44278
    Archive Log file - celog_213.175.3.19_20070420_210000.txt size: 125899771
    Archive Log file - celog_213.175.3.19_20070422_210000.txt size: 298115568
    Archive Log file - celog_213.175.3.19_20070421_210000.txt size: 111721404
    As a test I configured archiving every hour from 12:00 to 15:00 and the export at 15:10; the WAE transferred only three files (the 12:00, the 13:00 and the 14:00), and the 15:00 one was missed.
    What can I do?
    Thx
    davide

    Hi Davide,
    You seem to be missing the path on the FTP server, which is specified on the export command.
    Disable transaction logs, then remove the export command and then add it again like this: transaction-logs export ftp-server 10.253.8.125 cache **** / ; after that enable transaction logs again and test it.
    Let me know how it goes. Thanks!
    Jose Quesada.

  • Transaction log shipping restore with standby failed: log file corrupted

    The transaction log restore failed and I get the error below. Only this one of the four log-shipped databases on the same SQL Server fails; the remaining ones are working fine.
    Date: 9/10/2014 6:09:27 AM
    Log: Job History (LSRestore_DATA_TPSSYS)
    Step ID: 1
    Server: DATADR
    Job Name: LSRestore_DATA_TPSSYS
    Step Name: Log shipping restore log job step.
    Duration: 00:00:03
    Sql Severity: 0
    Sql Message ID: 0
    Operator Emailed:
    Operator Net sent:
    Operator Paged:
    Retries Attempted: 0
    Message:
    2014-09-10 06:09:30.37  *** Error: Could not apply log backup file '\\10.227.32.27\LsSecondery\TPSSYS\TPSSYS_20140910003724.trn' to secondary database 'TPSSYS'.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: An error occurred while processing the log for database 'TPSSYS'. 
    If possible, restore from backup. If a backup is not available, it might be necessary to rebuild the log.
    An error occurred during recovery, preventing the database 'TPSSYS' (13:0) from restarting. Diagnose the recovery errors and fix them, or restore from a known good backup. If errors are not corrected or expected, contact Technical Support.
    RESTORE LOG is terminating abnormally.
    Processed 0 pages for database 'TPSSYS', file 'TPSSYS' on file 1.
    Processed 1 pages for database 'TPSSYS', file 'TPSSYS_log' on file 1.(.Net SqlClient Data Provider) ***
    2014-09-10 06:09:30.37  *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  Skipping log backup file '\\10.227.32.27\LsSecondery\TPSSYS\TPSSYS_20140910003724.trn' for secondary database 'TPSSYS' because the file could not be verified.
    2014-09-10 06:09:30.37  *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  *** Error: An error occurred restoring the database access mode.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteScalar requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  *** Error: An error occurred restoring the database access mode.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteScalar requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  Deleting old log backup files. Primary Database: 'TPSSYS'
    2014-09-10 06:09:30.37  *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  The restore operation completed with errors. Secondary ID: 'dd25135a-24dd-4642-83d2-424f29e9e04c'
    2014-09-10 06:09:30.37  *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  *** Error: Could not cleanup history.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.38  ----- END OF TRANSACTION LOG RESTORE    
    Exit Status: 1 (Error)

    I have restored the database to a new server and checked with new log shipping, but it gives this same error again. If it were a network issue, I believe the issue would have to occur on every log-shipped database on that server.
    Error:
    Message
    2014-09-12 10:50:03.18    *** Error: Could not apply log backup file 'E:\LsSecondery\EAPDAT\EAPDAT_20140912051511.trn' to secondary database 'EAPDAT'.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-12 10:50:03.18    *** Error: An error occurred while processing the log for database 'EAPDAT'.  If possible, restore from backup. If a backup is not available, it might be necessary to rebuild the log.
    An error occurred during recovery, preventing the database 'EAPDAT' (8:0) from restarting. Diagnose the recovery errors and fix them, or restore from a known good backup. If errors are not corrected or expected, contact Technical Support.
    RESTORE LOG is terminating abnormally.
    Can this happen due to database or log file corruption? If so, how can I check to verify the issue?

    It's not necessarily the case that a network issue would happen every day; IMO it basically happens when the load on the network is high and you transfer a log file which is big in size.
    As per the message, the database engine was not able to restore the log backup and said that you must rebuild the log because it did not find the log to be consistent. From here it looks like log corruption.
    Is it the same log file you restored? If so, since the log file was corrupt, it would of course give an error on whatever server you restore it to.
    Can you try creating log shipping on a new server by taking a fresh full and log backup and see whether you get the issue there as well? I would also suggest you raise a case with Microsoft and let them tell you the root cause of this problem.
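    To check whether a particular log backup file is intact before restoring it, you can verify it directly; a minimal T-SQL sketch, using the path from the error message above:
    -- Checks that the backup file is complete and readable without restoring it.
    RESTORE VERIFYONLY
    FROM DISK = N'\\10.227.32.27\LsSecondery\TPSSYS\TPSSYS_20140910003724.trn';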
    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
    My Technet Articles

  • Monitoring and optimization

    I am mainly a .NET programmer and have worked with Microsoft SQL my entire career, so I have picked up many skills, but I am far from a DBA. The issue I am running into now is that we are starting to need more maintenance and optimization. I currently work for a small company with a very limited IT department, so the DBA duties fall to me. The majority of the time everything seems fine, but our databases are beginning to grow rapidly and we are beginning to have some performance issues.
    In most cases the database transaction log and data files are below 100 MB, but we have a few larger ones: one database now has a log file of 130 GB with a data file of 380 MB, and another has a 90 GB data file and a 75 GB log file.
    My question is: is there a good piece of monitoring software that will help monitor issues and optimize the databases?

    If you are not doing something like 15-minute transaction log backups for point-in-time recovery, log shipping or similar, then it is best to put the database into the SIMPLE recovery model. You need to shrink it once; after that it will be managed automatically and will not grow huge.
    Quote from the http://www.sqlusa.com/bestpractices2005/shrinklog/ blog: "Simplest method: in SSMS Object Explorer change Recovery Model to SIMPLE (DB properties, options). Then shrink log file (DB tasks, shrink, file, File Type : log). Change database back to original FULL Recovery Model only if you plan to do transaction log backup for disaster recovery. For FULL DB backup only disaster recovery, leave database in SIMPLE mode."
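    As a minimal T-SQL sketch of those steps (the database and logical log file names are placeholders):
    ALTER DATABASE [MyDB] SET RECOVERY SIMPLE;   -- stop the log from growing without log backups
    DBCC SHRINKFILE (N'MyDB_log', 1024);         -- one-time shrink; target size is in MB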
    Kalman Toth Database & OLAP Architect
    T-SQL Scripts at sqlusa.com
    New Book / Kindle: Exam 70-461 Bootcamp: Querying Microsoft SQL Server 2012

  • Transaction log maintenance

    Hi All,
    Is there any way that I can clear the transaction logs within a specific time period?
    My primary objective is to automate this daily, so that it backs up the transaction logs for transaction replay and clears the previous logs.

    Stop the application and have a script that deletes the files in the transaction log directory.
    Or the official Oracle take is:
    "Periodically, you might want to remove the transaction log store and the files in the Replay directory to increase available disk space on Essbase Server.
    Transaction log store: Oracle recommends removing the transaction log store for one database at a time. The log store is in a subdirectory under the log location specified by the TRANSACTIONLOGLOCATION configuration setting. For example, if the log location for the Sample.Basic database is /Hyperion/trlog, delete the contents of the following directory:
    /Hyperion/trlog/Sample/Basic
    Replay directory: After you have replayed transactions, the data and rules files associated with the replayed transactions can be removed from the ARBORPATH/app/appname/dbname/Replay directory (see Configuring Transaction Replay). You can delete all of the files in the Replay directory, or follow these guidelines for selectively removing files:
    - Remove the data and rules files in chronological order, from earliest to latest.
    - Do not remove data and rules files with a timestamp that is later than the timestamp of the most recent archive file.
    Note: Oracle recommends waiting until several subsequent database backups have been taken before deleting files associated with transaction logging and replay."
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • We have "dbbackup.exe" in SqlAnywhere in BI 4.1 for running the transaction log truncation/backup. This wasn't present in BOXI 3.1. Any alternative for 3.1?

    1) OS version:
    OS Name : Windows Server 2008 R2
    2) BO version:
        BusinessObjects XI 3.1 SP05.
    3) My question:
    We have a “dbbackup.exe” utility in SQL Anywhere in BI 4.1 for running the transaction log (CMS and Audit) truncation/backup, but the same utility is not present in BOXI 3.1 SP05.
    Is there an equivalent/alternative utility in BOXI 3.1 SP05 for the same purpose? We use the command below for BI 4.1 transaction log truncation/backup:
    E:\Program Files\SAP BusinessObjects\sqlanywhere\BIN64>dbbackup.exe -c "dsn=<System DSN>;uid=<SQL_AW_DBA_UID>;pwd=<SQL_AW_DBA_PASSWD>;host=localhost:2638" -t -x -n "E:\Transaction_log_backup\CMS"
    Any help or clarification on this issue would be greatly appreciated.
    Thanks in advance.
    Conor.

    Hi Conor,
    BOXI 3.1 SP05 does not include the dbbackup utility.  Instead, you issue SQL statements to create the backup.  We published a paper on the subject:
    http://scn.sap.com/docs/DOC-48608
    The paper uses a maintenance plan to schedule regular backups, but you don't need to do that if you want to simply create a backup when required.  To do that (along with transaction log truncation), you run the SQL statement:
    BACKUP DATABASE DIRECTORY 'backup-dir'
    TRANSACTION LOG TRUNCATE;
    For complete details about the BACKUP statement, have a look here:
    http://dcx.sap.com/index.html#1201/en/dbreference/backup-statement.html
    You'll need to execute the statement inside a SQL console - the paper above describes how to get that.
    I hope this helps!
    José Ramos
    Product Manager
    SAP Canada

  • Database Transaction log suspected pages

    We migrated our production databases to a new SQL cluster, and when I ran a query to find any suspect page entries in the msdb database, I found 5 entries in the msdb.dbo.suspected_pages table. These entries are for the production database transaction log file (file_id = 2), page_ids 1, 2, 3, 6 and 7; the event_type was updated to 4 for all pages after I did the DB restore, and error_count is 1 for each page_id.
    As I understand it, before I did the DB restore there were corrupted transaction log pages, but the restore repaired those corrupted pages, so since the pages are repaired there is no need for concern for now. I have a database consistency check job scheduled each night to check for corruption on the report server, where I restore a copy of the production database backup. Can someone please help me understand what caused the log file pages to get corrupted? Are page_ids 1, 2, 3, 6 and 7 called boot pages for the log file? What should I do if I find suspect pages for the log file?
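    For reference, the entries can be listed with a query like this (a minimal sketch; the columns are those documented for msdb.dbo.suspected_pages):
    SELECT database_id, file_id, page_id, event_type, error_count, last_update_date
    FROM msdb.dbo.suspected_pages;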
    Thanks for your help in advance.
    Daizy

    Hi Andreas, thanks for your reply.
    FYI, you have event_types 1 and 3 for your database, but the event_type was updated to 4 on my system after I did the restore, and the date/time shows the exact date/time when the event_type was updated.
    Please help me understand: isn't it usually the database data file that is organized in pages, not the log file?
    Thanks
    Daizy
    Hello Daizy
    Yes, the event types 1-3 were the error state before the "repair".
    After I did a full backup + restore, I now have type 4 just as you do.
    Yes, the log file is organized in so-called "Virtual Log Files" (VLFs), which have nothing in common with the 8-KB data pages of the data files. Therefore a page_id does not make sense there.
    You can read more on the architecture of the Transaction Log here:
    SQL Server Transaction Log Architecture and Management
    This article by Paul Randal might also be of interest to you:
    Transaction log corruption and backups
    Hope that helps.
    Andreas Wolter (Blog |
    Twitter)
    MCSM: Microsoft Certified Solutions Master Data Platform, MCM, MVP
    www.SarpedonQualityLab.com |
    www.SQL-Server-Master-Class.com

  • JTA Transaction log circular collision

    Greetings:
    Just thought I'd share some knowledge concerning a recent JTA-related issue within WebLogic Server 6.1.2.0.
    On our Production cluster, we recently ran into the following critical-level problem:
    <Jan 10, 2003 6:00:14 PM EST> <Critical> <JTA> <Transaction log circular collision, file number 176>
    After numerous discussions with BEA Support, it appears to be a (rare) race condition within the tlog file. It was also noted by BEA during their testing of WebLogic 7.0.
    Some additional research led to an MBean attribute under WebLogic Server 7.0 entitled "CheckpointIntervalSeconds". The documentation states:
    ~~~~
    Interval at which the transaction manager creates a new transaction log file and checks all old transaction log files to see if they are ready to be deleted. Default is 300 seconds (5 minutes); minimum is 10 seconds; maximum is 1800 seconds (30 minutes).
    Default value = 300
    Minimum = 10
    Maximum = 1800
    Configurable = Yes
    Dynamic = Yes
    MBean class = weblogic.management.configuration.JTAMBean
    MBean attribute = CheckpointIntervalSeconds
    ~~~~
    After searching for an equivalent setting under WebLogic Server 6.1.2.0, nothing was found, and a custom (unsupported) patch was created to change this hardcoded setting under 6.1:
    from
    ... CHECKPOINT_THRESHOLD_MILLIS = 5 * 60 * 1000;
    to
    ... CHECKPOINT_THRESHOLD_MILLIS = 10 * 60 * 1000;
    within com.bea.weblogic.transaction.internal.ServerTransactionManagerImpl.
    If you'd like additional details, feel free to contact me via e-mail <[email protected]> or by phone +1.404.327.7238. Hope this helps!
    Brian J. Mitchell
    BEA Systems Administrator
    TRX
    6 West Druid Hills Drive
    Atlanta, GA 30329 USA

    Hi 783703,
    As Sridhar suggested for your problem, you have to set the transaction timeout in j2ee/home/config/transaction-manager.xml.
    If you set idempotent to false for your partner links, BPEL PM will store the status up to that invoke (proof that the invoke was executed).
    So it is better to increase the timeout rather than using idempotent, as it has some side effects.
    And coming to dehydration: ideally, performance is better when there are not many dehydration points in the process, but for some scenarios it is better to have dehydration (e.g., so we can know the status of the process).
    The dehydration store does not get cleared after completion of the process. Here dehydration means that it stores these details in tables (like cube_instance, cube_scope, etc.).
    Regards
    PavanKumar.M

  • Client Deletion, Transaction log getting full.

    Hi Gurus,
    We are trying to delete a client by running:
    clientremove
    client = 200 (200 being the client we want to remove)
    select *
    The transaction log disk space allocated is 50 GB; it is getting full (in simple mode) and the client deletion never completes. The size of the table it is accessing is 86 GB, and I think client 200 occupies around 40-45 GB of it. Client 200 has 15.5 million rows in the table.
    Am I giving the proper command? Is there any explicit commit I can include, or any workaround for deleting the client without hammering the log file?
    Thanks guys
    Edited by: SAP_SQLDBA on Jan 22, 2010 6:51 PM

    Hi,
    Back up the active transaction log file and shrink the file directly.
    Please refer the following SAP Notes to get more information.
    [  Note 625546 - Size of transaction log file is too big|https://websmp130.sap-ag.de/sap%28bD1lbiZjPTAwMQ==%29/bc/bsp/spn/sapnotes/index2.htm?numm=625546]
    [  Note 421644 - SQL error 9002: The transaction log is full|https://websmp130.sap-ag.de/sap%28bD1lbiZjPTAwMQ==%29/bc/bsp/spn/sapnotes/index2.htm?numm=421644]
    Which version of SQL Server are you using? Which SP level?
    Frequently perform transaction log backups (BACKUP TRANS) to remove inactive space within the transaction log files.
    Please refer to [Note 307911 - Transaction Log Filling Up in SQL Server 7.0|https://websmp130.sap-ag.de/sap%28bD1lbiZjPTAwMQ==%29/bc/bsp/spn/sapnotes/index2.htm?numm=307911] to get more information about the reasons for this kind of situation.
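    As a minimal T-SQL sketch of that sequence (the database name, logical log file name and backup path are placeholders):
    BACKUP LOG [PRD] TO DISK = N'F:\Backup\PRD_log.trn';  -- frees inactive space inside the log
    DBCC SHRINKFILE (N'PRDLOG1', 1024);                   -- then shrink the file; target size is in MB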
    Regards,
    Bhavik G. Shroff

  • Content engine newbie

    I am fairly new to content engines, although I think I understand the concepts. I have about 80 content engines in remote sites, and the person who set them up did it for web traffic, to permit and deny the specific sites that each remote office wants to access or not.
    I want to extend this to do file-share caching. Can this be done on the same physical CE, or do I need another box? They are internal to the 3825 routers.
    Thanks!

    Olivier,
    You'll need to have a router running WCCP in order to redirect HTTP requests to the cache. Without this, the cache has no visibility of the traffic on your LAN.
    Regards,
    Dave

  • The transaction log for database 'BizTalkMsgBoxDb' is full.

    Hi All,
    We are getting the following error continuously in the event viewer of our UAT servers. I checked the jobs, and all the backup jobs were failing on the step that backs up the transaction log file, giving the same error. Our DBAs cleaned the message box manually and backed up the DB, but after some time the jobs start failing again and this error is logged in the event viewer.
    The transaction log for database 'BizTalkMsgBoxDb' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases.
    Thanks,
    Abdul Rafay
    http://abdulrafaysbiztalk.wordpress.com/
    Please mark this answer if it helps

    Putting the database into simple recovery mode and shrinking the log file isn't going to help: it'll just grow again, it will probably fragment across the disk, thereby impacting performance, and eventually it will fill up again for the same reason as before. Plus, you put yourself in a very vulnerable position for disaster recovery if you change the recovery mode of the database, and that's before we've addressed the distributed transaction aspect of the BizTalk databases.
    First, make sure you're backing up the log file using the BizTalk job Backup BizTalk Server (BizTalkMgmtDb). It might be that the log hasn't been backed up, is full of transactions and has eventually run out of space. Configuration instructions are at this link:
    http://msdn.microsoft.com/en-us/library/aa546765(v=bts.70).aspx  Your DBA needs to get the backup job running properly rather than panicking!
    If this is running properly, and backing up (which was the case for me) and the log file is still full, run the following query:
    SELECT Name, log_reuse_wait_desc
    FROM sys.databases
    This will tell you why the log file isn't properly clearing down and why it cannot use the space inside.  When I had this issue, it was due to an active transaction.
    I checked for open transactions on the server using this query:
    SELECT
        s_tst.[session_id],
        s_es.[login_name] AS [Login Name],
        DB_NAME(s_tdt.database_id) AS [Database],
        s_tdt.[database_transaction_begin_time] AS [Begin Time],
        s_tdt.[database_transaction_log_record_count] AS [Log Records],
        s_tdt.[database_transaction_log_bytes_used] AS [Log Bytes],
        s_tdt.[database_transaction_log_bytes_reserved] AS [Log Rsvd],
        s_est.[text] AS [Last T-SQL Text],
        s_eqp.[query_plan] AS [Last Plan]
    FROM sys.dm_tran_database_transactions s_tdt
    JOIN sys.dm_tran_session_transactions s_tst
        ON s_tst.[transaction_id] = s_tdt.[transaction_id]
    JOIN sys.[dm_exec_sessions] s_es
        ON s_es.[session_id] = s_tst.[session_id]
    JOIN sys.dm_exec_connections s_ec
        ON s_ec.[session_id] = s_tst.[session_id]
    LEFT OUTER JOIN sys.dm_exec_requests s_er
        ON s_er.[session_id] = s_tst.[session_id]
    CROSS APPLY sys.dm_exec_sql_text (s_ec.[most_recent_sql_handle]) AS s_est
    OUTER APPLY sys.dm_exec_query_plan (s_er.[plan_handle]) AS s_eqp
    ORDER BY [Begin Time] ASC;
    GO
    This told me the spid of the process with an open transaction on BizTalkMsgBoxDb (in my case, something that had been open for several days). I killed the transaction using KILL spid, where spid is an integer. Then I ran the BizTalk Database Backup job again, and the log file backed up and cleared properly.
    Incidentally, just putting the database into simple recovery mode would have emptied the log file, giving it lots of space to fill up again. But it doesn't deal with the root cause: why the backups were failing in the first place.

  • The transaction log for database 'ECC' is full + ECC6.0 Installation Failure

    Guys,
    my ECC6 installation failed after an 8-hour run with the following error log snippet:
    exec sp_bindefault 'numc3_default','SOMG.MSGNO'
    DbSlExecute: rc = 99
      (SQL error 9002)
      error message returned by DbSl:
    The transaction log for database 'ECC' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases
    (DB) ERROR: DDL statement failed
    (ALTER TABLE [SOMG] ADD CONSTRAINT [SOMG~0] PRIMARY KEY CLUSTERED ( [MANDT], [OBJTP], [OBJYR], [OBJNO] ) )
    DbSlExecute: rc = 99
      (SQL error 4902)
      error message returned by DbSl:
    Cannot find the object "SOMG" because it does not exist or you do not have permissions.
    The ECCLOG1 log file has an initial size of 25 GB, and growth was restricted to 10% (proposed by SAPinst).
    I'm assuming this error was due to a lack of growth space for the ECCLOG1 file. Am I right? If so, how much space should I allocate for this log, or is there any workaround?
    thanks in advance

    Kasu,
    If SQL is complaining that the log file is full, then the phase of the install that creates the SQL data/log files has already occurred (it happens early in the install) and the install is importing programs, configuration and data into the DB.
    Look at the windows application event log for "Transaction log full" events to confirm.
    To continue, in SQL Query Analyzer try:
    "Backup log [dbname] with truncate_only"
    This will remove only inactive parts of the log and is safe when you don't require point-in-time recovery (which you don't during an install).
    Then go to SQL Enterprise Manager, choose the DB in question and choose the shrink database function; choose to shrink only the transaction log file, and the space emptied by the truncate will be removed from the file.
    Change the recovery mode in SQL Server to "simple" so that the log file does not grow for the remainder of the install.
    Make sure you change the recovery mode back to "full" after the install is complete.
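    Put together, the sequence looks like this (a minimal T-SQL sketch for the SQL 2000/2005 tooling mentioned above; 'ECCLOG1' is assumed to be the logical name of the log file):
    BACKUP LOG [ECC] WITH TRUNCATE_ONLY;         -- discard the inactive log records (SQL 2000/2005 syntax)
    DBCC SHRINKFILE (N'ECCLOG1', 1024);          -- reclaim the emptied space; target size is in MB
    ALTER DATABASE [ECC] SET RECOVERY SIMPLE;    -- keep the log from growing for the rest of the install
    -- After the installation completes:
    ALTER DATABASE [ECC] SET RECOVERY FULL;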
    Your transaction log appears to have filled the disk partition you have assigned to it.
    25 GB is huge for a transaction log, and you would normally not see them grow this large if you are doing regular scheduled tlog backups (say every 30-60 minutes), because the log truncates every time; but it's not unusual to see one get big during an install, upgrade or when applying hotpacks.
    Tim
