Terminate Suspended Instances without filling Transaction Log

In one of our Test environments, we have around 200K suspended instances. Is there a way to terminate all those without filling up the transaction log?
Thanks, Pavan. MCTS: Microsoft BizTalk, Windows Server 2010

Hi Pavan,
You can consider using the BizTalk Terminator tool to terminate this high number of suspended instances gracefully.
BizTalk Terminator tool
You don't need to worry about filling up the transaction log when terminating the suspended instances. If you have any data integrity issues, this tool can also be used to resolve those.
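If you still want to keep an eye on the MessageBox log while the cleanup runs, a standard SQL Server check is enough; nothing BizTalk-specific (the DMV form assumes SQL Server 2008 R2 SP1 or later, and BizTalkMsgBoxDb is the default MessageBox database name):
  DBCC SQLPERF(LOGSPACE)   -- log size and percent used for every database
  -- or, for the current database only:
  SELECT total_log_size_in_bytes / 1048576 AS log_mb,
         used_log_space_in_percent
  FROM sys.dm_db_log_space_usage   -- run this in the BizTalkMsgBoxDb context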
If this answers your question please mark it accordingly. If this post is helpful, please vote as helpful.

Similar Messages

  • Maxdb restore - transaction log backup

    Hi,
Is it possible to restore the DB backup without the transaction log backup? I know this is kind of a lame question, but I'm just wondering if this is possible and how it can be done.
    Database is MaxDB and OS is Linux.
    Thanks in advance!

    Hi,
the restore does not depend on the database state in which the data backup was made.
Instead, you are able to recover every complete data backup without a log recovery. You can do this by using the dbmcli command DB_ACTIVATE RECOVER <medium_name> or the corresponding DBMGUI actions.
After the recovery you simply need to restart the database.
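As a sketch of such a data-only recovery (SID, user, password and medium name are placeholders; the annotations after # are explanation, not part of the commands):
  dbmcli -d <SID> -u <dbm_user>,<password>    # open a DBM session
  db_admin                                    # bring the database into ADMIN state
  db_activate RECOVER <medium_name>           # recover the complete data backup, no log recovery
  db_online                                   # restart the database afterwards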
    Kind regards, Martin

  • System Crash after transactional log filled filesystem

    Dear gurus,
We have an issue in our PRD system on the FlexFrame platform. We run SAP NW 7.4 (SP03) with ASE 15.7.0.042 (SuSE SLES 11 SP1) as a BW system.
While uploading data from the ERP system, the transaction log filled up. We can see this in <SID>.log:
    Can't allocate space for object 'syslogs' in database '<SID>' because 'logsegment' segment is full/has no free extents. If you ran out of space in syslogs, dump the transaction log. Otherwise, use ALTER DATABASE to increase the size of the segment.
After this, we increased the transaction log device (disk resize), then executed: ALTER DATABASE <SID> log on <LOGDEVICE> = '<size>'
While the ALTER was running, the log filesystem filled up (100%); after this, <SID>.log began to grow tremendously.
We stopped Sybase, and now when we try to start it, the whole FF node goes down. The filesystem has free space (around 10 GB).
Could you help us?
Added: we think a possible solution could be to delete the transaction log, since we understand the failure is related to this log (maybe corrupted?)
    Regards

    ====================
    00:0008:00000:00009:2014/06/26 15:49:37.09 server  Checkpoint process detected hardware error writing logical page '2854988', device 5, virtual page 6586976 for dbid 4, cache 'log cache'. It will sleep until write completes successfully.
    00:0010:00000:00000:2014/06/26 15:49:37.10 kernel  sddone: write error on virtual disk 5 block 6586976:
    00:0010:00000:00000:2014/06/26 15:49:37.10 kernel  sddone: No space left on device
    00:0008:00000:00009:2014/06/26 15:49:37.10 server  bufwritedes: write error detected - spid=9, ppage=2854988, bvirtpg=(device 5, page 6586976), db id=4
    =======================
1 - Check that the filesystem that device #5 (vdevno=5) sits on is not full; make sure the filesystem is large enough to hold the entire defined size of device #5; and make sure no other processes are writing to that filesystem.
2 - Have your OS/disk admin(s) make sure the disk fragment(s) underlying device #5's filesystem aren't referenced by other filesystems and/or raw device definitions. A quick way to map device #5 to its physical file from isql is sketched below.
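A minimal isql sketch (sp_helpdevice and sp_helpdb are standard ASE procedures; <SID> is the database name from the error):
  sp_helpdevice          -- lists each device with its physical name, size and vdevno
  go
  sp_helpdb <SID>        -- shows which devices hold the data and log segments
  go
Compare the physical file for vdevno 5 against the free space on its filesystem before retrying the ALTER.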

  • Transaction logs filling

    Hi,
Our transaction logs are filling up very rapidly: within two hours, about 60 GB of transaction logs
are generated. But user activity is not that high and is quite normal.
    As I am new to MSSQL, kindly guide us in this regard.
    Best Regards,
    DVRK

    Hi Rama,
Are you running a client import or any other process that could be filling the transaction logs?
Go to SM37 and check the active jobs; see whether any background job running continuously is causing this issue.
Also go to SM50 and look for any long-running activities.
To free up transaction log space you can follow either of the options below (see the sketch after this list):
1) Take a backup of the transaction logs; this will free up some space.
2) Shrink the transaction log. Refer to SAP Note 625546.
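As a plain T-SQL sketch of both options (the database name <SID>, the logical log file name and the backup path are placeholders; see the SAP note before shrinking a production log):
  -- option 1: back up the transaction log, which frees space inside it for reuse
  BACKUP LOG <SID> TO DISK = N'X:\backup\<SID>_log.trn'
  -- option 2: shrink the physical log file afterwards
  USE <SID>
  EXEC sp_helpfile                      -- find the logical name of the log file
  DBCC SHRINKFILE (<SID>_log, 10240)    -- target size in MB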
    regards,
    Nibu

  • Update deactivated when DB2 transaction logs fill.

    Hi All,
    We are running SAP on DB2 Version9 on AIX platform.
We noticed that when the transaction logs get full, the update work process gets deactivated. This happens in every usage type that we have: SRM, ECC, BI.
    In the workprocess log we find this
    C  &+     ABAP location info 'SAPLSNR3', 2480
    C  &+     SAP user 'P1245690', transaction code 'WRF_WSOA2'
    C  &+
    C  *** ERROR in DB6Execute[dbdb6.c, 4556] (END)
    B  ***LOG BYL=> DBQ action required because of database error            [dbsh#2 @ 1100] [dbsh    1100 ]
    B  SQL code: -964, SQL text: SQL0964C  The transaction log for the database is full.  SQLSTATE=57011 row=1
    M  ThIVBChangeState: update deactivated
    M  ***LOG R0R=> ThIVBChangeState, update deactivated () [thxxvb.c     11442]
    M
    M Sun Aug  8 08:30:44 2010
    M  *** ERROR => ThVBCheckCommit: reject bad ROLLBACK [thxxvb.c     13471]
    A
    A Sun Aug  8 08:57:56 2010
    A  TH VERBOSE LEVEL FULL
    A  ** RABAX: level LEV_RX_PXA_RELEASE_MTX entered.
    A  ** RABAX: level LEV_RX_PXA_RELEASE_MTX completed.
    A  ** RABAX: level LEV_RX_COVERAGE_ANALYSER entered.
    A  ** RABAX: level LEV_RX_COVERAGE_ANALYSER completed.
I have never heard of Update being deactivated because the transaction log is full at the DB level. Update can be held up, but will it get deactivated?
Is it possible that update gets deactivated if it is not able to post the transaction data in the DB?
Please let me know your ideas on this and how we can prevent the update from deactivating.
    Thanks and REgards,
    Raghavan

    Hi,
If you search for "SQL0964C The transaction log for the database is full", some of the solutions ask you to execute the following steps:
Use the following procedure to increase the size of the DB2 transaction log (LOGFILSIZ):
1. Determine the current log file size setting by issuing the commands:
su - <db2instance>
db2 list db directory                 # to list the database name
db2 connect to <databaseName>
db2 get db cfg for <databaseName>     # check the LOGFILSIZ parameter
A sample output from my test system:
    Log file size (4KB)                         (LOGFILSIZ) = 16380
    Number of primary log files                (LOGPRIMARY) = 20
    Number of secondary log files               (LOGSECOND) = 40
    Changed path to log files                  (NEWLOGPATH) =
    Path to log files                                       = /db2/<dataname>/log_dir/NODE0000/
    Overflow log path                     (OVERFLOWLOGPATH) =
    Mirror log path                         (MIRRORLOGPATH) =
    First active log file                                   =
    Block log on disk full                (BLK_LOG_DSK_FUL) = YES
    Percent max primary log space by transaction  (MAX_LOG) = 0
    Num. of active log files for 1 active UOW(NUM_LOG_SPAN) = 0
    2. Increase the size of the log file size setting by issuing the command:
    db2 UPDATE db cfg for <databaseName> using LOGFILSIZ <new_value>
    Example:
    db2 UPDATE db cfg for <databaseName> using LOGFILSIZ 5000
    3. Stop the ibmslapd process.
    4. Issue the commands:
    db2 force applications all
    db2stop force
    5. Restart ibmslapd process.
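If a restart is not desirable, LOGSECOND can usually be raised online instead, as a hedge against occasional peaks (the value 80 is just an example; LOGPRIMARY + LOGSECOND must stay within DB2's overall limit):
  db2 connect to <databaseName>
  db2 update db cfg for <databaseName> using LOGSECOND 80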
Once the DB issue is resolved, your update work process should be running fine again.

  • Need a Walkthrough on How to Create Database & Transaction Log Backups

    Is this the proper forum to ask for this type of guidance?  There has been bad blood between my department (Research) and the MIS department for 30 years, and long story short I have been "given" a virtual server and cut loose by my MIS department
    -- it's my responsibility for installs, updates, backups, etc.  I have everything running really well, I believe, with the exception of my transaction log backups -- my storage unit is running out of space on a daily basis, so I feel like I have to be
    doing something wrong.
    If this is the proper forum, I'll supply the details of how I currently have things set up, and I'm hoping with some loving guidance I can work the kinks out of my backup plan.  High level -- this is for a SQL Server 2012 instance running on a Windows
    2012 Server...

    Thanks all, after posting this I'm going to read the materials provided above.  As for the details:
    I'm running on a virtual Windows Server 2012 Standard, Intel Xeon CPU 2.6 GHz with 16 GB of RAM; 64 bit OS.  The computer name is e275rd8
    Drives (NTFS, Compression off, Indexing on):
    DB_HVSQL_SQL-DAT_RD8-2(E:) 199 GB (47.2 used; 152 free)
    DB_HVSQL_SQL-Dat_RD8(F:) 199 GB (10.1 used; 189 free)
    DB_HVSQL_SQL-LOG_RD8-2(L:) 199 GB (137 used; 62 free) **
    DB_HVSQL_SQL-BAK_RDu-2(S:) 99.8 GB (64.7 used; 35 free)
    DB_HVSQL_SQL-TMP_RD8-2(T:) 99.8 GB (10.6 used; 89.1 free)
    SQL Server:
    Product: SQL Server Enterprise (64-bit)
    OS: Windows NT 6.2 (9200)
    Platform: NT x64
    Version: 11.0.5058.0
    Memory: 16384 (MB)
    Processors: 4
    Root Directory: f:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL
    Is Clustered: False
    Is HADR Enabled: False
    Database Settings:
    Default index fill factor: 0
    Default backup media retention (in days): 0
    Compress backup is checkmarked/on
    Database default locations:
      Data: E:\SQL\Data
      Log: L:\SQL\LOGs
      Backup: S:\SQLBackups
    There is currently only one database: DistrictAssessmentDW
    To create my backups, I'm using two maintenance plans, and this is where I'm pretty sure I'm not doing something correctly.  My entire setup is me just guessing what to do, so feel free to offer suggestions...
    Maintenance Plan #1: Backup DistrictAssessmentDW
      Scheduled to run daily Monday Through Friday at 3:33 AM
      Step 1: Backup Database (Full) 
        Backup set expires after 8 days 
        Back up to Disk (S:\SQLBackups)
        Set backup compression: using the default server setting
      Step 2: Maintenance Cleanup Task
        Delete files of the following type: Backup files
        Search folder and delete files based on an extension:
          Folder: L:\SQL\Logs
          File extension: trn
          Include first-level subfolders: checkmarked/on
        File age: Delete files based on the age of the file at task run time older than 1 Day
      Step 3: Maintenance Cleanup Task
        Delete files of the following type: Backup files
        Search folder and delete files based on an extension:
          Folder: S:\SQLBackups
          File extension: bak
          Include first-level subfolders: checkmarked/on
        File age: Delete files based on the age of the file at task run time older than 8 Days
    Maintenance Plan #2: Backup DistrictAssessmentDW TRANS LOG ONLY
      Scheduled to run daily Monday through Friday; every 20 minutes starting at 6:30 AM & ending at 7:00 PM
      Step 1: Backup Database Task
        Backup Type: Transaction Log
        Database(s): Specific databases (DistrictAssessmentDW)
        Backup Set will expire after 1 day
        Backup to Disk (L:\SQL\Logs\)
        Set backup compression: Use the default server setting
    Around 2:30 each day my transaction log backup drive (L:) runs out of space.  As you can see, transactions are getting backed up every 20 minutes, and the average size of the backup files is about 5,700,000 KB.
    I hope this covers everything, if not please let me know what other information I need to provide...
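For reference, the T-SQL equivalent of Maintenance Plan #2's backup step, with compression made explicit (the database name and path are from the setup above; WITH COMPRESSION is a suggestion to shrink those 5,700,000 KB files, not necessarily what the plan currently does):
  -- a real schedule would put a timestamp in the file name
  BACKUP LOG DistrictAssessmentDW
  TO DISK = N'L:\SQL\Logs\DistrictAssessmentDW_log.trn'
  WITH COMPRESSION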

  • Backup and restore full and transaction log in nonrecovery mode failed due to LSN

In SQL 2012 SP1 Enterprise, when taking a full backup followed immediately by a transaction log backup, the transaction log backup starts with an earlier LSN than the ending LSN of the full backup. As a result, I cannot restore
the transaction log backup after the full backup, both with NORECOVERY, on another machine. I was trying to bring the two machines in sync for mirroring purposes. An example is as follows.
    full backup:       first 1121000022679500037, last 1121000022681200001
    transaction log: first 1121000022679000001, last 1121000022682000001
    --- SQL Scripts used  
    BACKUP DATABASE xxx  TO DISK = xxx WITH FORMAT
    go
    backup log  xxx to disk = xxx
--- When restoring, I tried
restore log BarraOneArchive  from disk=xxx  WITH STOPATMARK  = 'lsn:1121000022682000001', NORECOVERY
I also tried STOPBEFOREMARK; that did not work either. It complained about the LSN being too early to apply to the database.
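For comparison, the conventional sequence for seeding a mirror, which tolerates overlapping LSN ranges because RESTORE LOG only applies records the database does not already have (file paths are placeholders, and this assumes an unbroken log chain):
  RESTORE DATABASE BarraOneArchive FROM DISK = N'<full.bak>' WITH NORECOVERY, REPLACE
  RESTORE LOG BarraOneArchive FROM DISK = N'<log.trn>' WITH NORECOVERY   -- no STOPATMARK needed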

I think that what I am saying is correct. I said that in sync mirroring (I was not talking about a witness), if the network goes down for a few minutes or some longer time, maybe 20 minutes (more than that is a rare scenario; the IS team has a backup plan for that), logs on the principal will
continue to grow, as transactions won't be able to commit: the connection with the mirror is gone, so the commit acknowledgement from the mirror is not coming. After the network comes back online, the mirror will replay all logs and will soon try to catch up with the principal.
Books Online says this: this is achieved by waiting to commit a transaction on the principal database until the principal server receives a message from the mirror server stating that it has hardened the transaction's log to disk. That is,
if the remote server went away in a way the primary does not notice, transactions would not commit and the primary would be stalled.
In practice it does not work that way. When a timeout expires, the principal considers the mirror to be gone, and Books Online says about this case:
if the mirror server instance goes down, the principal server instance is unaffected and runs exposed (that is, without mirroring the data). In this section, BOL does not discuss transaction logs, but it appears reasonable that the log records are
retained so that the mirror can resync once it is back.
In async mirroring, the transaction log is sent to the mirror, but the principal does not wait for an acknowledgement from the mirror before committing the transaction.
But I would expect that the principal still gets an acknowledgement that the log records have been consumed, or else your mirroring could start failing if you back up the log too frequently. That is, I would not expect any major difference between sync and async
mirroring in this regard. (Where it matters is when you fail over: with async mirroring, you are prepared to accept some data loss in case of a failover.)
    These are theories that could be fairly easily tested if you have a mirroring environment set up in a lab, but I don't.
    Erland Sommarskog, SQL Server MVP, [email protected]

  • Transaction Logs in SQL Server

    Hi, the BW system has the following properties:
    BW 3.1C Patch 14
    BASIS/ABA 6.20 Patch 38
    BI_CONT 310 Patch 2
    PI_BASIS Patch 2004_1_620
    Windows 2000 Service Pack 4
    SQL Server 2000 SP3 version 8.00.760
    Database used space: 52 GB
    Database free space: 8.9 GB
    Transaction log space: 8 GB
    I am having the following problem.  The SQL transaction logs on the SQL Server fill up very rapidly while aggregates are rolling up.  Sometimes taking up to 16-20 GB of transaction log space in the SQL Server.  We only have 8 GB of space available for the transaction logs.  When the aggregates are not rolling up, the logs do not fill up at all.  I have tried changing the logs to Simple logging, but all that does is delay the fill, and at that point you cannot backup simple logs to free up DB space.
    What is it about aggregates that fills up the transaction log?  Anybody know a solution to this without adding disk space to the transaction log disk?
    Thanks,

    Hello,
a log backup in simple recovery mode is not necessary (and not possible). A full database backup after switching back to full recovery is a must.
Please keep in mind that even running in simple mode the log can fill up, as all transactions are still written to the log. Committed transactions can then be truncated from the log. But when you run a huge transaction like a client copy, the log might grow as well. The log space will be freed once the transaction commits or rolls back. And no, you can't split a client copy into several transactions.
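To check and switch the recovery model from Query Analyzer (DATABASEPROPERTYEX and this ALTER syntax work on SQL Server 2000; 'BWP' is the database from the error above, the backup path is a placeholder):
  SELECT DATABASEPROPERTYEX('BWP', 'Recovery')   -- FULL, SIMPLE or BULK_LOGGED
  ALTER DATABASE BWP SET RECOVERY SIMPLE
  -- after the big transaction, switch back and take the mandatory full backup:
  ALTER DATABASE BWP SET RECOVERY FULL
  BACKUP DATABASE BWP TO DISK = N'<path>'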
    Best regards
      Clas

  • The log shipping restore job restores a corrupted transaction log backup to a secondary database

    Dear Sir,
    I have primary sql instances in cluster node and it is configured with log shipping for DR system.
The instance fails over before the log shipping backup job finishes, and therefore a corrupted transaction log backup is generated. How do I handle log shipping without a break, and how do I know that a transaction log backup is damaged?
    Cheers,

Well, when failover happens, SQL Server is stopped and restarted on the other node. So if SQL Server is stopped while it is doing a log backup, the backup operation stops and no .trn file is produced. Because the backup operation doesn't complete, no backup
information is stored in msdb and no .trn file is generated.
You can run RESTORE VERIFYONLY on a .trn file to see whether it is damaged. Log shipping is quite flexible: even if the previous log backup did not complete, the next one won't be affected, because SQL Server has no information about whether the backup completed.
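A minimal check on a suspect file (the path and file name are placeholders):
  RESTORE VERIFYONLY FROM DISK = N'<path>\<db>_log.trn'
  -- RESTORE HEADERONLY additionally shows whether the backup set is complete
  RESTORE HEADERONLY FROM DISK = N'<path>\<db>_log.trn'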
    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it

  • The transaction log for database 'Test_db' is full due to 'LOG_BACKUP'

    My dear All,
    Came up with another issue:
The app team is pushing data from one Prod1 server, 'test_1db', to another Prod2 server, 'User_db', through a job. While pushing the data, after some time the job fails and throws the following error:
'Error: 9002, Severity: 17, State: 2. The transaction log for database 'User_db' is full due to 'LOG_BACKUP'.'
On the Prod2 server, the 'User_db' log has enough space (400 GB on the drive) and growth is 250 MB. I am really confused about why the job is failing when there is so much space available. Kindly guide me in troubleshooting this issue, as it has been occurring for more than
a week. Kindly refer to the screenshot for the same.
Environment: SQL Server 2012 with SP1, Enterprise edition; log backups run every 15 minutes, and there is no high availability between the servers.
Note: Changing to the simple recovery model might resolve this, but the app team requires the full recovery model because they need log backups.
    Thanks in advance,
    Nagesh

    Dear V,
Thanks for the suggestions.
I followed some steps to resolve the issue; as of now my jobs are working without issue.
Steps:
Generating a log backup every 5 minutes.
Increased the growth from 500 MB to unrestricted.
Once the whole job completes, we shrink the log file.
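For anyone hitting the same 9002/LOG_BACKUP error, a quick way to confirm what is pinning the log (a standard DMV on SQL Server 2012):
  SELECT name, recovery_model_desc, log_reuse_wait_desc
  FROM sys.databases
  WHERE name = 'User_db'   -- LOG_BACKUP here means: take a log backup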
    Nagesh

  • Performance problem with transaction log

We are having a performance problem in an SAP BW 3.5 system running on MS SQL Server 2000. The box is sized at 63,574 MB. The transaction log fills up after loading data into a transactional cube or after doing a selective deletion. The size of the transaction log is currently 7,587 MB.
The Basis team feels that when performing either a load or a selective deletion, SQL Server views it as a single transaction and doesn't commit until every record is written. As a result, the transaction log fills up, ultimately bringing the system down.
The system log shows a DBIF error while the transaction log fills up, as follows:
    Database error 9002 at COM
    > [9002] the log file for database 'BWP' is full. Back up the
    > Transaction log for the database to free up some log space.
    Function COMMIT on connection R/3 failed
    Perform rollback
Can we make changes to the database so that commits happen more frequently? Are there any parameters we could change to reduce the packet size? Is there some setting to be changed in SQL Server?
    Any Help will be appreciated.

If you have disk space available, you can allocate more space to the transaction log.
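A sketch of that, assuming the logical log file name is BWP_log (check it with sp_helpfile; the target size is only an example):
  USE BWP
  EXEC sp_helpfile                                -- find the logical name of the log file
  ALTER DATABASE BWP
  MODIFY FILE (NAME = BWP_log, SIZE = 16000MB)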

  • Audit Vault 12.1.1 error creating audit trail with TRANSACTION LOG

    Hi,
I installed AV 12.1.1; the DB target is with Data Guard.
When I run the script oracle_user_setup in REDO_COLL mode, the final message says it was successful, but when I go to the AV console and try to create an audit trail with TRANSACTION LOG, the AV console shows me an error and the log shows this:
    [2013-10-16T03:37:18.593-05:00] [collfwk] [ERROR] [] [] [tid: 10] [ecid: 192.168.56.8:78800:1381912639433:0,0] RedoCollector : runSourceScript : Error while running script on source for REDO collector.
    [2013-10-16T03:37:19.528-05:00] [collfwk] [ERROR] [] [] [tid: 10] [ecid: 192.168.56.8:78800:1381912639433:0,0] OAV-8004: Failed to start collector {0}:{1}CollectionFactory : createCollection : Exception while creating collection. [[
    Failed to start collector {0}:{1}
                    at oracle.av.platform.agent.collfwk.impl.redo.RedoCollector.runSourceScript(RedoCollector.java:816)
                    at oracle.av.platform.agent.collfwk.impl.redo.RedoCollector.sourceSetup(RedoCollector.java:579)
                    at oracle.av.platform.agent.collfwk.impl.redo.RedoCollector.setup(RedoCollector.java:454)
                    at oracle.av.platform.agent.collfwk.impl.redo.RedoCollector.startCollector(RedoCollector.java:216)
                    at oracle.av.platform.agent.collfwk.impl.redo.RedoCollectorManager.startTrail(RedoCollectorManager.java:199)
                    at oracle.av.platform.agent.collfwk.impl.factory.CollectionFactory.createCollection(CollectionFactory.java:504)
                    at oracle.av.platform.agent.collfwk.impl.factory.CollectionFactory.createCollection(CollectionFactory.java:354)
                    at oracle.av.platform.agent.StartTrailCommandHandler.processMessage(StartTrailCommandHandler.java:63)
                    at oracle.av.platform.agent.AgentController.processMessage(AgentController.java:325)
                    at oracle.av.platform.agent.AgentController$MessageListenerThread.run(AgentController.java:1859)
                    at java.lang.Thread.run(Thread.java:679)
    Nested Exception:
    java.sql.SQLSyntaxErrorException: ORA-01031: insufficient privileges
    ORA-06512: at line 1
                    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:445)
                    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:396)
                    at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:879)
                    at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:450)
                    at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:192)
                    at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:531)
                    at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:207)
                    at oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:1044)
                    at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1329)
                    at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3584)
                    at oracle.jdbc.driver.OraclePreparedStatement.execute(OraclePreparedStatement.java:3685)
                    at oracle.jdbc.driver.OraclePreparedStatementWrapper.execute(OraclePreparedStatementWrapper.java:1376)
                    at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
                    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
                    at java.lang.reflect.Method.invoke(Method.java:616)
                    at oracle.ucp.jdbc.proxy.StatementProxyFactory.invoke(StatementProxyFactory.java:230)
                    at oracle.ucp.jdbc.proxy.PreparedStatementProxyFactory.invoke(PreparedStatementProxyFactory.java:124)
                    at $Proxy2.execute(Unknown Source)
                    at oracle.av.platform.agent.collfwk.impl.redo.RedoCollector.runSourceScript(RedoCollector.java:747)
                    at oracle.av.platform.agent.collfwk.impl.redo.RedoCollector.sourceSetup(RedoCollector.java:579)
                    at oracle.av.platform.agent.collfwk.impl.redo.RedoCollector.setup(RedoCollector.java:454)
                    at oracle.av.platform.agent.collfwk.impl.redo.RedoCollector.startCollector(RedoCollector.java:216)
                    at oracle.av.platform.agent.collfwk.impl.redo.RedoCollectorManager.startTrail(RedoCollectorManager.java:199)
                    at oracle.av.platform.agent.collfwk.impl.factory.CollectionFactory.createCollection(CollectionFactory.java:504)
                    at oracle.av.platform.agent.collfwk.impl.factory.CollectionFactory.createCollection(CollectionFactory.java:354)
                    at oracle.av.platform.agent.StartTrailCommandHandler.processMessage(StartTrailCommandHandler.java:63)
                    at oracle.av.platform.agent.AgentController.processMessage(AgentController.java:325)
                    at oracle.av.platform.agent.AgentController$MessageListenerThread.run(AgentController.java:1859)
                    at java.lang.Thread.run(Thread.java:679)
I don't understand why this happens, because the user has the privileges granted by the script, and I also tried granting them as SYSDBA, but without any result.
I don't understand which privileges the collector needs.
Any idea?
Thanks for any help

    Hi
Just run the script $AV_AGENT/av/plugins/com.oracle.av.plugin.oracle/config/oracle_user_setup.sql  USER_NAME REDO_COLL
This will grant the user some privileges and roles, such as DBA and CREATE DATABASE LINK.
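As a hedged sketch of the invocation (the connection string, password and AV source user name are placeholders; run it as a privileged user against the source database):
  sqlplus sys/<password>@<target_db> as sysdba
  @$AV_AGENT/av/plugins/com.oracle.av.plugin.oracle/config/oracle_user_setup.sql <av_source_user> REDO_COLL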
I hope this answers your question.
    Thanks
    Ahmed Moustafa

  • Exchange 2010 SP3, RU5 - Massive Transaction Log File Generation

    Hey All,
I am trying to figure out why one of our databases is generating 30K log files a day! The other one is generating 20K log files a day. The database does not grow in size as the log files are generated; the problem is purely log file generation.
I've tried running through some of the various solutions out there and reviewed message tracking logs, RPC client access logs, and IIS logs, all of which show important info, but none of which actually provide the answers.
I stopped the following services to see if that would affect the log file generation in any way, and it has not!
    MS Exchange Transport
    Mail Submission
    IIS (Site Stopped in IIS)
    Mailbox Assistants
    Content Indexing Service
With the above services stopped, I still see dozens (or more) of log files generated in under 10 minutes. I also checked mailbox size reports (top 10) and found several users' mailboxes with increases: an item count increase of
about 300 for one user, and a size increase of about 150 MB for another (over the whole day).
    I am not sure what else to check here? Any ideas?
    Thanks,
    Robert

Hmm - this sounds like a device is chewing up the logs.
If you use Log Parser Studio, are there any standout devices in terms of the number of hits?
And for the ExMon trace, was that logged over a period of time? The default 60-second window normally misses a lot of stuff. Just curious!
    Cheers,
    Rhoderick
    Microsoft Senior Exchange PFE
    Blog:
    http://blogs.technet.com/rmilne 
    Note: Posts are provided “AS IS” without warranty of any kind, either expressed or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose.
Rhoderick,
Thanks for the response. When checking the logs, the highest number of hits were from the (source) load balancers, port 25 VIP. The problems I was experiencing were the following:
1) I kept expecting the log file generation to drop to an acceptable rate of 10~20 MB per minute (max). We have a large environment and use the Exchange servers as the mail relays for the hated Nagios monitoring environment.
2) We didn't have our enterprise monitoring system watching SMTP traffic; this is being resolved.
3) I needed to look closer at the SMTP transport database counters, logs, and log files, and focus less on the database log generation; I did do some of that, but not enough.
4) My troubleshooting kept getting thrown off because the monitoring notifications seem to be sent out in batches (or something similar); stopping the transport service for 10~15 minutes several times seemed to finally stop the transaction logs
from growing at a psychotic rate.
5) I am re-running my data captures now that I have told the "Nagios Team" to quit killing the Exchange servers with their notifications, sometimes as many as 100+ of the same notification for the same servers and issues. So far, at a quick glance,
the log file generation seems to have dropped by about 30%.
Question: what would be the best counters to review in order to "put it all together"? Also note: our server roles are split, MBX and CAS/HT.
Robert

  • Oracle DB equivalent of SQL Server's Simple Transaction Logging mode?

    G'Day Experts !
    Was wondering if Oracle DB has the functional equivalent of the 'simple' transaction logging available in SQL Server?
Would this be available at the schema level, or would it have to be the entire instance?
I'm asking because the WebCenter Interaction portal and related services have no practical use for point-in-time rollbacks. The portal uses discrete event boundaries which unfortunately do not map into the relational world.
    Thanks!
    Rob in Vermont

Hi Rob,
I assume you are referring to the simple recovery model, i.e. lose everything since the last backup. Oracle's equivalent of that is to run a database in NOARCHIVELOG mode. It applies to the database rather than the instance, though you probably meant database when you said instance.
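To check the current mode, and the switch itself (run from SQL*Plus as SYSDBA; the switch requires a clean shutdown):
  SELECT log_mode FROM v$database;   -- ARCHIVELOG or NOARCHIVELOG
  SHUTDOWN IMMEDIATE
  STARTUP MOUNT
  ALTER DATABASE NOARCHIVELOG;
  ALTER DATABASE OPEN;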
    Niall Litchfield
    http://www.orawin.info/

  • How to determine which RAC-instance the appl. is logged onto?

    Dear all,
I need to have my application server determine which RAC instance it is currently logged onto. I have a tnsnames.ora file with a primary and a secondary RAC instance configured, and failover/failback between the instances works fine. However, I would be interested in
determining which instance I am currently using.
Does the Oracle Net protocol have support for letting me "read" this out, or...?
    Thanks.
    Regards, Eldor R.

    Thank you for the prompt reply.
Is there, in the Oracle Net protocol, an available function for reading out this information directly, without "parsing" the trace file?
I would like to read out this information from my application at run-time.
    Thanks.
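For reference, if the application can issue a query over its existing connection, no Net-layer parsing is needed. A sketch (the SYS_CONTEXT form assumes Oracle 10g or later):
  SELECT instance_name, host_name FROM v$instance;
  -- or, without access to v$ views:
  SELECT SYS_CONTEXT('USERENV', 'INSTANCE_NAME') FROM dual;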
