Problems with the size of the transaction log
Hello,
we are having problems with the transaction log on our BW system.
Key system data:
Database:
Total size: 737,280 MB
Allocated: 534,735 MB
Free: 202,545 MB
Log:
Number of files: 1
Total size: 46,080 MB
SAP Version: SAP EHP 1 for SAP NetWeaver 7.0 (x64bit)
MSSQL: 9.00.4053
The transaction log is currently 45 GB. Certain BW operations (e.g. compressing an InfoCube, deleting InfoCube indexes, building aggregates, ...) fail with the following error:
The transaction log for database 'BW1' is full. To find out
why space in the log cannot be reused, see the
log_reuse_wait_desc column in sys.databases
Database error 9002 occurred
The transaction log backup is currently set up as follows:
Automatic hourly backups.
As soon as the transaction log has 7 GB allocated, an alert triggers an additional backup.
SAP Note 421644 explains that long-running or very large transactions can prevent the log from being truncated, and that is exactly the case here!
The InfoCube affected by the "Compress InfoCube" operation looks like this (excerpt from DB02):
Schema TabName Used (KB) Reserved Data Rows RowModCtr
bw1 /BIC/FUCSA_C83 56,659,872 57,195,384 10,638,776 119,828,524 13,024,553
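The error message points at sys.databases; before resizing, it is worth checking what is actually blocking log reuse. A sketch (database name 'BW1' taken from the error message above; SQL Server 2005 syntax):

```sql
-- Shows why log space cannot be reused: ACTIVE_TRANSACTION points to a
-- long-running transaction, LOG_BACKUP to a missing log backup
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'BW1';

-- Shows the oldest active transaction that is holding up truncation
DBCC OPENTRAN ('BW1');
```

If log_reuse_wait_desc shows ACTIVE_TRANSACTION while the compression job runs, that confirms the Note 421644 scenario.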
What would be a sensible size for the transaction log? Are there other possible solutions (besides enlarging the transaction log)?
I'm grateful for any help or suggestions!
Best regards, Bernd
Hi Bernd,
we run a BW of around 11.5 TB with a correspondingly large number of cubes, DSOs, and so on. To avoid the situation you describe, our transaction log is about 280 GB (and may still grow further!). We sized it from the average volume of transaction log backups per day, which is roughly 250 GB. That way the system also survives a night in which a lot goes wrong, without coming to a complete standstill or all processes failing.
Have you investigated the situation in more detail yet?
Regards,
Sven
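The sizing rule above (log size roughly matching the daily log-backup volume) can be checked against the backup history in msdb; a sketch, assuming the standard msdb backup tables and the example database name 'BW1':

```sql
-- Total transaction log backup volume per day (type 'L' = log backup)
SELECT CONVERT(char(10), backup_start_date, 120) AS backup_day,
       SUM(backup_size) / 1024.0 / 1024.0 / 1024.0 AS log_backup_gb
FROM msdb.dbo.backupset
WHERE database_name = 'BW1' AND type = 'L'
GROUP BY CONVERT(char(10), backup_start_date, 120)
ORDER BY backup_day;
```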
Similar Messages
-
Transaction Logs in SQL Server
Hi, the BW system has the following properties:
BW 3.1C Patch 14
BASIS/ABA 6.20 Patch 38
BI_CONT 310 Patch 2
PI_BASIS Patch 2004_1_620
Windows 2000 Service Pack 4
SQL Server 2000 SP3 version 8.00.760
Database used space: 52 GB
Database free space: 8.9 GB
Transaction log space: 8 GB
I am having the following problem. The SQL transaction logs fill up very rapidly while aggregates are rolling up, sometimes taking up to 16-20 GB of transaction log space. We only have 8 GB of space available for the transaction logs. When the aggregates are not rolling up, the logs do not fill up at all. I have tried switching to simple recovery, but all that does is delay the fill, and in simple mode you cannot back up the log to free up space.
What is it about aggregates that fills up the transaction log? Anybody know a solution to this without adding disk space to the transaction log disk?
Thanks,
Hello,
a log backup is not necessary (or possible) in simple recovery mode. A full database backup after switching back to full recovery is a must.
Please keep in mind that even when running in simple mode the log can fill up, as all transactions are still written to the log. Committed transactions can then be truncated from the log. But when you run a huge transaction like a client copy, the log might grow as well; the space is only freed once the transaction commits or rolls back. And no, you can't split a client copy into several transactions.
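To watch this happening during a large operation, the log fill level can be monitored; a sketch (the recovery-model switch is only shown to illustrate the point above about the mandatory full backup afterwards; the database name 'BWX' and backup path are examples):

```sql
-- Current log usage per database, also meaningful under SIMPLE recovery
DBCC SQLPERF (LOGSPACE);

-- Switching recovery models: after going back to FULL, a full database
-- backup is required to restart the log backup chain
ALTER DATABASE BWX SET RECOVERY SIMPLE;
-- ... run the large operation ...
ALTER DATABASE BWX SET RECOVERY FULL;
BACKUP DATABASE BWX TO DISK = 'D:\backup\BWX_full.bak';  -- hypothetical path
```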
Best regards
Clas -
Performance problem with transaction log
We are having a performance problem in a SAP BW 3.5 system running on MS SQL Server 2000. The database is sized at 63,574 MB. The transaction log fills up after loading data into a transactional cube or after doing a selective deletion. The size of the transaction log is currently 7,587 MB.
The Basis team feels that when performing either a load or a selective deletion, SQL Server treats it as a single transaction and doesn't commit until every record is written. As a result, the transaction log fills up, ultimately bringing the system down.
The system log shows a DBIF error during the transaction log fill up as follows:
Database error 9002 at COM
> [9002] the log file for database 'BWP' is full. Back up the
> Transaction log for the database to free up some log space.
Function COMMIT on connection R/3 failed
Perform rollback
Can we make the database commit more frequently? Are there any parameters we could change to reduce the packet size? Is there some setting to be changed in SQL Server?
Any help will be appreciated.
If you have disk space available, you can allocate more space to the transaction log.
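Allocating more space can be done explicitly rather than waiting for autogrow; a sketch (the database name 'BWP' is from the error above, the logical file name and target size are assumptions):

```sql
-- Find the logical name of the log file first
USE BWP;
EXEC sp_helpfile;

-- Grow the transaction log to 16 GB (logical name 'BWP_log' assumed)
ALTER DATABASE BWP
MODIFY FILE (NAME = 'BWP_log', SIZE = 16384MB);
```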
-
Problems with transaction-logs on cache engines
Good Day All,
I have a Cache Engine 550 here and the transaction log working.log file got quite large.
I was not able to export it to my ftp server so I logged into the Cache engine via ftp and downloaded the file to a PC.
I then deleted the working.log file on the Cache Engine and rebooted the cache engine.
The working.log file was not re-created as I had hoped it might be.
I have created a file called working.log in the correct directory. This file does not seem to get updated though so this must not be right either.
Any suggestions?
regards,
amanda
Hi Zach,
Thank you so much for writing back. I am running an archaic version of the software... I can check tomorrow. As to the logging: I had not enabled transaction-logging itself, so it was a silly config error...
:) amanda -
WAE 512 and transaction logs problem
Hi guys,
I have a WAE 512 with ACNS 5.5.1b7 and I'm not able to export archived logs correctly. I tried to configure the WAE as below:
transaction-logs enable
transaction-logs archive interval every-day at 23:00
transaction-logs export enable
transaction-logs export interval every-day at 23:30
transaction-logs export ftp-server 10.253.8.125 cache **** .
and the WAE exported only one file of about 9 MB, even though the files were stored on the WAE, as you can see from the output:
Transaction log configuration:
Logging is enabled.
End user identity is visible.
File markers are disabled.
Archive interval: every-day at 23:00 local time
Maximum size of archive file: 2000000 KB
Log File format is squid.
Windows domain is not logged with the authenticated username
Exporting files to ftp servers is enabled.
File compression is disabled.
Export interval: every-day at 23:30 local time
server type username directory
10.253.8.125 ftp cache .
HTTP Caching Proxy logging to remote syslog host is disabled.
Remote syslog host is not configured.
Facility is the default "*" which is "user".
Log HTTP request authentication failures with auth server to remote syslog host.
HTTP Caching Proxy Transaction Log File Info
Working Log file - size : 96677381
age: 44278
Archive Log file - celog_213.175.3.19_20070420_210000.txt size: 125899771
Archive Log file - celog_213.175.3.19_20070422_210000.txt size: 298115568
Archive Log file - celog_213.175.3.19_20070421_210000.txt size: 111721404
I ran a test and configured archiving every hour from 12:00 to 15:00 and the export at 15:10; the WAE transferred only three files (12:00, 13:00, and 14:00), and the 15:00 file was missing.
What can I do?
Thx
davide
Hi Davide,
You seem to be missing the path on the FTP server; which goes on the export command.
Disable transaction logs, then remove the export command and add it again like this: transaction-logs export ftp-server 10.253.8.125 cache **** / ; after that, enable transaction logs again and test it.
Let me know how it goes. Thanks!
Jose Quesada. -
MS SQL Server Transaction Log Problem
Hello,
The MS SQL 2005 database on our R/3 production system takes a transaction log backup every 15 minutes. I can see the transaction log backups being taken normally in SQL Server Management Studio, but they are not visible in transaction DB02. The system stops operating when the transaction log fills up; this always happens while the following CO cost-closing processes run:
CO-PC Post Closing Transaction (SAPRCKMA_RUN_CLOSE)
CO-PC single-level price determination transaction (SAPRCKMA_RUN_SETTLE)
Windows2003Entx64 > MS SQL Server2005Entx64 > ECC 5.0
Regards
ismail KARAYAKA
Hi
All backup logs can be seen in DB12.
Thanks
Adil -
Problem with PSE 3: shape of the stamp cursor
I have the following problem:
After selecting the clone stamp or other tools, the crosshair cursor is ALWAYS preset. Even if I go into the preferences and change the setting under "Display & Cursors" to "Standard" or "Brush tip size", I see NO change.
On my second PC I can switch between these settings without any trouble, and I actually prefer the "Brush tip size" setting.
I hope someone can help me.
Regards and thanks in advance,
Mario from Austria
Thank you very much for the answer. This sort of thing isn't documented anywhere; at most in publications like "The hidden tricks"...
Thanks!
Klaus W. from H. -
Problem while processing transaction data from DSO to cube
Hi Guru's,
we are facing a problem while loading transaction data from a DSO to a cube: the data packets are processing and updating very slowly. Please help me with this.
Thanks and regards,
Sridhar
Hi,
I will suggest you to check a few places where you can see the status
1) SM37 job log (give the BI request name), which should give you the details about the request. If it's active, make sure the job log is being updated at frequent intervals.
2) SM66: get the job details (server name, PID, etc. from SM37) and see in SM66 whether the job is running or not. See if it's accessing/updating some tables or not doing anything at all.
If it's running and you can see it active in SM66, you can wait for some time to let it finish.
3) RSMO: see what is available in the details tab. It may be stuck in the update rules.
4) ST22: check whether any short dump has occurred.
You can also try SM50 / SM51 to see what is happening in the system level like reading/inserting tables etc.
If you feel its active and running you can verify by checking if the number of records has increased in the cube.
Thanks,
JituK -
db6conv failed due to transaction log full
Hi,
I have a huge problem with my production system.
I was executing db6conv v4.08 to convert a table to a new tablespace, and it stopped because the transaction log was full.
Now I have this situation:
table soffcont1
db6conv: status: preliminary
I checked the job db6conv_job_soffcont1; its status is "scheduled".
The problem is that when I try to execute this job, it gives me an error:
Definition of job db6conv_job_soffcont1 is incomplete. Operation is not possible.
regards,
filipe vasconcelos
hi filipe,
i will follow up on this problem in your OSS message.
regards, frank -
Transaction log shipping restore with standby failed: log file corrupted
The transaction log restore failed and I get the error below. It occurs only for this one database; the remaining four databases on the same SQL Server are working fine.
Date: 9/10/2014 6:09:27 AM
Log: Job History (LSRestore_DATA_TPSSYS)
Step ID: 1
Server: DATADR
Job Name: LSRestore_DATA_TPSSYS
Step Name: Log shipping restore log job step.
Duration: 00:00:03
Sql Severity: 0
Sql Message ID: 0
Operator Emailed:
Operator Net sent:
Operator Paged:
Retries Attempted: 0
Message:
2014-09-10 06:09:30.37 *** Error: Could not apply log backup file '\\10.227.32.27\LsSecondery\TPSSYS\TPSSYS_20140910003724.trn' to secondary database 'TPSSYS'.(Microsoft.SqlServer.Management.LogShipping) ***
2014-09-10 06:09:30.37 *** Error: An error occurred while processing the log for database 'TPSSYS'.
If possible, restore from backup. If a backup is not available, it might be necessary to rebuild the log.
An error occurred during recovery, preventing the database 'TPSSYS' (13:0) from restarting. Diagnose the recovery errors and fix them, or restore from a known good backup. If errors are not corrected or expected, contact Technical Support.
RESTORE LOG is terminating abnormally.
Processed 0 pages for database 'TPSSYS', file 'TPSSYS' on file 1.
Processed 1 pages for database 'TPSSYS', file 'TPSSYS_log' on file 1.(.Net SqlClient Data Provider) ***
2014-09-10 06:09:30.37 *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
2014-09-10 06:09:30.37 *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
2014-09-10 06:09:30.37 Skipping log backup file '\\10.227.32.27\LsSecondery\TPSSYS\TPSSYS_20140910003724.trn' for secondary database 'TPSSYS' because the file could not be verified.
2014-09-10 06:09:30.37 *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
2014-09-10 06:09:30.37 *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
2014-09-10 06:09:30.37 *** Error: An error occurred restoring the database access mode.(Microsoft.SqlServer.Management.LogShipping) ***
2014-09-10 06:09:30.37 *** Error: ExecuteScalar requires an open and available Connection. The connection's current state is closed.(System.Data) ***
2014-09-10 06:09:30.37 *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
2014-09-10 06:09:30.37 *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
2014-09-10 06:09:30.37 *** Error: An error occurred restoring the database access mode.(Microsoft.SqlServer.Management.LogShipping) ***
2014-09-10 06:09:30.37 *** Error: ExecuteScalar requires an open and available Connection. The connection's current state is closed.(System.Data) ***
2014-09-10 06:09:30.37 *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
2014-09-10 06:09:30.37 *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
2014-09-10 06:09:30.37 Deleting old log backup files. Primary Database: 'TPSSYS'
2014-09-10 06:09:30.37 *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
2014-09-10 06:09:30.37 *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
2014-09-10 06:09:30.37 The restore operation completed with errors. Secondary ID: 'dd25135a-24dd-4642-83d2-424f29e9e04c'
2014-09-10 06:09:30.37 *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
2014-09-10 06:09:30.37 *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
2014-09-10 06:09:30.37 *** Error: Could not cleanup history.(Microsoft.SqlServer.Management.LogShipping) ***
2014-09-10 06:09:30.37 *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
2014-09-10 06:09:30.38 ----- END OF TRANSACTION LOG RESTORE
Exit Status: 1 (Error)
I have restored the database to a new server and set up new log shipping, but it gives the same error again. If it were a network issue, I believe it would have to occur on every log-shipped database on that server.
error :
Message
2014-09-12 10:50:03.18 *** Error: Could not apply log backup file 'E:\LsSecondery\EAPDAT\EAPDAT_20140912051511.trn' to secondary database 'EAPDAT'.(Microsoft.SqlServer.Management.LogShipping) ***
2014-09-12 10:50:03.18 *** Error: An error occurred while processing the log for database 'EAPDAT'. If possible, restore from backup. If a backup is not available, it might be necessary to rebuild the log.
An error occurred during recovery, preventing the database 'EAPDAT' (8:0) from restarting. Diagnose the recovery errors and fix them, or restore from a known good backup. If errors are not corrected or expected, contact Technical Support.
RESTORE LOG is terminating abnormally.
Can this happen due to database or log file corruption? If so, how can I check in order to verify the issue?
It's not necessarily the case that a network issue would happen every day; IMO it basically happens when the load on the network is high and you transfer a log file that is big.
As per the message, the database engine was not able to restore the log backup and said that you must rebuild the log, because it did not find the log to be consistent. That points to log corruption.
Is it the same log file you restored? If so, since the log file was corrupt, it would of course give the error on whatever server you restore it to.
Can you try setting up log shipping on the new server from a fresh full and log backup and see whether you get the issue there as well? I would also suggest raising a case with Microsoft and letting them determine the root cause of this problem.
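To check whether a .trn file itself is corrupt (rather than blaming the network), the backup file can be verified directly on the secondary; a sketch using the UNC path from the job log above:

```sql
-- Verify that the log backup file is readable and complete
RESTORE VERIFYONLY
FROM DISK = '\\10.227.32.27\LsSecondery\TPSSYS\TPSSYS_20140910003724.trn';

-- Inspect the backup header (first/last LSN, backup type) to check the chain
RESTORE HEADERONLY
FROM DISK = '\\10.227.32.27\LsSecondery\TPSSYS\TPSSYS_20140910003724.trn';
```

If VERIFYONLY fails for the same file on more than one server, the file was most likely already damaged when it was written or copied.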
My Technet Articles -
Just spent the entire night with virtually no sleep with our firm's group of IT engineers trying to keep our Exchange system online due to massive transaction log growth. Confirmed the issue is related to the iOS 6.1 calendar bug with ActiveSync. The workarounds are not practical for large groups of users who depend on their mobile devices for work. Our users have no way of knowing that they are causing an issue, so the Apple guidance isn't terribly useful to communicate to 1000 users. When can we expect a resolution? The problem is only going to get worse as more and more users hit the bug. Does anyone know if the issue will resolve as soon as someone installs the 6.1.2 update, assuming that has the fix? I'm not trying to bash anyone, but this is a very serious problem in enterprise deployments.
The update was released some time today. 6.1.2 appears to specifically fix the Exchange issue causing the excess communication and logging. However, although the update is available, I do not see the notification badge on the Settings icon. Is this controlled by Apple, or is there a user setting I am missing somewhere? I would prefer that all users see the badge to expedite user action.
-
Log Reader Agent: transaction log file scan and failure to construct a replicated command
I encountered the following error message related to Log Reader job generated as part of transactional replication setup on publisher. As a result of this error, none of the transactions propagated from publisher to any of its subscribers.
Error Message
2008-02-12 13:06:57.765 Status: 4, code: 22043, text: 'The Log Reader Agent is scanning the transaction log for commands to be replicated. Approximately 24500000 log records have been scanned in pass # 1, 68847 of which were marked for replication, elapsed time 66018 (ms).'.
2008-02-12 13:06:57.843 Status: 0, code: 20011, text: 'The process could not execute 'sp_replcmds' on ServerName.'.
2008-02-12 13:06:57.843 Status: 0, code: 18805, text: 'The Log Reader Agent failed to construct a replicated command from log sequence number (LSN) {00065e22:0002e3d0:0006}. Back up the publication database and contact Customer Support Services.'.
2008-02-12 13:06:57.843 Status: 0, code: 22037, text: 'The process could not execute 'sp_replcmds' on 'ServerName'.'.
Replication agent job kept trying after specified intervals and kept failing with that message.
Investigation
I could clearly see there were transactions waiting to be delivered to subscribers from the following:
SELECT * FROM dbo.MSrepl_transactions -- 1162
SELECT * FROM dbo.MSrepl_commands -- 821922
The following steps were taken to investigate further. They confirmed that transactions were queued, waiting to be delivered to the distribution database.
-- Returns the commands for transactions marked for replication
EXEC sp_replcmds
-- Returns a result set of all the transactions in the publication database transaction log that are marked for replication but have not been marked as distributed.
EXEC sp_repltrans
-- Returns the commands for transactions marked for replication in readable format
EXEC sp_replshowcmds
Resolution
Taking a backup as suggested in the message wouldn't resolve the issue. None of the commands retrieved via sp_browsereplcmds for the LSN mentioned in the message had any syntactic problems either.
exec sp_browsereplcmds @xact_seqno_start = '0x00065e220002e3d00006'
In a desperate attempt to resolve the problem, I decided to drop all subscriptions. To my surprise, the Log Reader kept failing with the same error. I had thought that with no subscriptions on the publications, the Log Reader Agent would have no reason to scan the publisher's transaction log, but obviously I was wrong. Even adding a new log reader using sp_addlogreader_agent after deleting the old one did not help. Restarting the server didn't do much good either.
EXEC sp_addlogreader_agent
@job_login = 'LoginName',
@job_password = 'Password',
@publisher_security_mode = 1;
When nothing else worked, I decided to try the following procedures, which are reserved for troubleshooting replication:
--Updates the record that identifies the last distributed transaction of the server
EXEC sp_repldone @xactid = NULL, @xact_segno = NULL, @numtrans = 0, @time = 0, @reset = 1
-- Flushes the article cache
EXEC sp_replflush
Bingo !
The Log Reader Agent managed to start successfully this time. I wish I had used both commands before deciding to drop the subscriptions; it would have saved me the considerable effort and time spent re-doing them.
Question
Even though I managed to resolve the error and have replication functioning again, I think there might have been a better solution, and I would appreciate any feedback on how you would approach this problem.
Hi Hilary,
Will the command below truncate the portion of the log marked for replication? Is there any data loss when we execute this command? Can you please help me understand its internal workings?
EXEC sp_repldone @xactid = NULL, @xact_segno = NULL, @numtrans = 0, @time = 0, @reset = 1 -
How to reduce transaction log in SQL Server 2000
Dear All gurus/experts,
I need your help. The problem is the time it takes to shrink the transaction log in SQL Server 2000 when I use the shrink method. Is there another way to do this besides shrinking the database (right-click: All Tasks --> Shrink Database)?
I appreciate your answers. TIA
Rgds,
Hi Steve,
Is this for a test system or a production system?
For a test system, as per Ad's post, setting the recovery model to simple should do the trick.
For a production system, I'd recommend you leave the recovery model at full and set up transaction log backups. This will keep the log file at a reasonable size, and you gain point-in-time recovery (e.g. if you back up the logs hourly, you can recover the database to the last log backup, meaning you would never lose more than an hour's work).
Kind Regards,
Owen -
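Owen's approach can be sketched in T-SQL (SQL Server 2000 syntax; the database name, file name, backup path, and target size are examples, not taken from the thread):

```sql
-- Back up the log so the inactive portion becomes reusable
BACKUP LOG BWDB TO DISK = 'E:\backup\BWDB_log.trn'

-- Find the logical name of the log file
USE BWDB
EXEC sp_helpfile

-- Shrink the physical log file to roughly 2 GB (logical name assumed)
DBCC SHRINKFILE (BWDB_log, 2048)
```

This is usually much faster than shrinking the whole database, because only the log file is touched.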
The transaction log for database 'SharePoint_Config' is full
Hi all ,
I am very new to SharePoint. When I tried to remove a WSP file from Central Administration, I got a message like:
The transaction log for database 'SharePoint_Config' is full. To find out why space in the log cannot
be reused, see the log_reuse_wait_desc column in sys.databases.
Can anybody help me solve this, please? I saw a suggested solution on the net (below), but I don't know how to carry out these steps. Can anybody help me with them?
1. Take the configuration database offline and detach it.
2. Copy the current MDF to a new location (to be used as a way of recovering the database if needed).
3. Put the database back online, reattach it, and switch it to simple recovery mode (from full), with the aim of stopping the database from increasing in size.
4. Shrink the database and recover log space.
5. Should the shrinking fail, we'd look at detaching the database and making a sideways copy of the log file to another database.
6. We would then reattach the database, which should generate a new log file.
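Steps 3 and 4 of the list above can be sketched in T-SQL (a rough sketch only: take a full backup first and test outside production; the logical log file name is an assumption):

```sql
-- Step 3: switch the configuration database to simple recovery
ALTER DATABASE SharePoint_Config SET RECOVERY SIMPLE;

-- Step 4: shrink the log file (logical name 'SharePoint_Config_log' assumed)
USE SharePoint_Config;
DBCC SHRINKFILE (SharePoint_Config_log, 1024);  -- target size in MB
```

Note that a database should not be left in simple recovery permanently if point-in-time recovery is required.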
Thank you
Hi Soumya,
I don't have any DBA resources, and it is not a lab environment; it's a corporate environment.
Thanks a lot for your quick reply.
I normally don't just come onto these threads and tell people that they need a consultant, but you might want to look at getting one to help make sure that your SharePoint environment is healthy and can be restored in the event of a server failure.
Based on the problem you have described, it might not be recoverable, or at least not as recoverable as you want it to be.
At the very least, you'll want to watch this video of my session at TechEd 2014, which talks about backups and how to set them up:
http://channel9.msdn.com/events/TechEd/NorthAmerica/2014/DBI-B214#fbid=
Thank You,
Denny Cherry -
Exchange 2010 SP3, RU5 - Massive Transaction Log File Generation
Hey All,
I am trying to figure out why one of our databases is generating 30K log files a day! The other one is generating 20K log files a day. The database does not grow in size as the log files are generated; the problem is the log file generation itself.
I've tried running through some of the various solutions out there and reviewed message tracking logs, RPC Client Access logs, and IIS logs, all of which show important information, but none of which actually provide the answers.
I stopped the following services to see if that would affect the log file generation in any way, and it has not:
MS Exchange Transport
Mail Submission
IIS (Site Stopped in IIS)
Mailbox Assistants
Content Indexing Service
With the above services stopped, I still see dozens (or more) of log files generated in under 10 minutes. I also checked the mailbox size reports (top 10) and found an item count increase of about 300 for one user and a size increase of about 150 MB for another (over the whole day).
I am not sure what else to check here? Any ideas?
Thanks,
Robert
Robert
Hmm, this sounds like a device is chewing up the logs.
If you use Log Parser Studio, are there any stand-out devices in terms of the number of hits?
And for ExMon, was that logged over a period of time? The default 60-second window normally misses a lot of stuff. Just curious!
Cheers,
Rhoderick
Microsoft Senior Exchange PFE
Blog:
http://blogs.technet.com/rmilne
Rhoderick,
Thanks for the response. When checking the logs, the highest number of hits came from the (source) load balancers' port 25 VIP. The problems I was experiencing were the following:
1) I kept expecting the log file generation to drop to an acceptable rate of 10-20 MB per minute (max). We have a large environment and use the Exchange servers as the mail relays for the hated Nagios monitoring environment.
2) We didn't have our enterprise monitoring system watching SMTP traffic; this is being resolved.
3) I needed to look closer at the SMTP transport database counters, logs, and log files and focus less on database log generation; I did some of that, but not enough.
4) My troubleshooting kept getting thrown off because the monitoring notifications seemed to be sent out in batches (or something similar); stopping the transport service for 10-15 minutes several times seemed to finally stop the transaction logs from growing at a psychotic rate.
5) I am re-running my data captures now that I have told the "Nagios team" to quit killing the Exchange servers with their notifications (sometimes 100+ of the same notification for the same servers and issues). So far, at a quick glance, log file generation seems to have dropped by about 30%.
Question: What would be the best counters to review in order to "put it all together"? Also note: our server roles are split, MBX and CAS/HT.
Robert
Robert