CSA MC Events Log and Agent Panel Events Correlation

I have recently installed CSA MC 6.0.0.201 and the agent on a Win2003 server. I have a question about events showing up in the agent panel but not showing up in the MC event log.
I see a number of events in the agent 'panel' event viewer. At the end of each event is a number in brackets, like [176].
When I look at the MC event viewer, those events are not being reported.
My questions are:
#1 I believe the example [176] is the rule being triggered. So if the event is not showing up in the event viewer, how do I find that rule in the policies? I finally did stumble across the rule and I see that logging is disabled for that rule, but finding it was a needle-in-a-haystack search. Is there an easier way to find rules?
#2 Maybe I do not understand this part, but in the MC I placed this server (the one with the MC) into 'Audit Mode' in hopes that would get the events from the agent to show up in the MC event log. No good. Is there a way to get all events - even if the rule says not to log the event - to show up in the MC log so I can create an exception?
Thanks
Larry

Tom,
I think I may have made some progress. Yes, I'm in advanced mode. I went into Systems | Groups, first selected 'Servers', and turned on logging. Still, most of the events in the agent event viewer were not making it to the MC event log.
So I went back into Systems | Groups, found there was a group called 'Servers - CSA Management Center', and turned on logging there, and that got the events flowing into the MC event log.
Maybe this will help me get going.
Larry

Similar Messages

  • I can no longer use all of the "Computer Management" tools against a remote computer. "Local Users and Groups", "Event Viewer", "Performance Logs and Alerts" and "Device Manager"

    Hello All,
    I can no longer use all of the "Computer Management" tools against a remote
    computer. "Local Users and Groups", "Event Viewer", "Performance Logs and
    Alerts" and "Device Manager"
    kindly see the below snapshot for assistance
    REGARDS DANISH DANIE

    This link may help....
    http://windowsxp.mvps.org/admintools.htm
    Freeman

  • The process could not execute 'sp_repldone/sp_replcounters' error for Log Reader Agent and SQL Server Assertion 17066 & 3624 errors in SQL Logs

    One of our SQL Servers started creating SQLDUMP files, and on investigation I found the error logs are filled with errors 3624 & 17066. There is transactional replication configured on one of the databases, and the Log Reader Agent is failing with the error "The
    process could not execute 'sp_repldone/sp_replcounters' on XXXXX".
    Not sure if the assertion and Log Reader Agent errors are related. Before I remove and re-create the replication, I wanted to check if anyone has experienced the same issues or knows what the cause might be.
    ***********Error messages from SQL Logs******
    **Dump thread - spid = 0, EC = 0x0000000111534460
    Message
    A system assertion check has failed. Check the SQL Server error log for details. Typically, an assertion failure is caused by a software bug or data corruption. To check for database corruption, consider running DBCC CHECKDB. If you agreed to send dumps to
    Microsoft during setup, a mini dump will be sent to Microsoft. An update might be available from Microsoft in the latest Service Pack or in a QFE from Technical Support.
    Error: 3624, Severity: 20, State: 1.
    SQL Server Assertion: File: <logscan.cpp>, line=2123 Failed Assertion = 'UtilDbccIsInsideDbcc () || (m_ProxyLogMgr->GetPru ()->GetStartupState () < RecoveryUnit::Recovered)'. This error may be timing-related. If the error persists after rerunning
    the statement, use DBCC CHECKDB to check the database for structural integrity, or restart the server to ensure in-memory data structures are not corrupted.
    Error: 17066, Severity: 16, State: 1.
    External dump process return code 0x20000001.
    External dump process returned no errors.
    Thank you in advance.

    You need to determine if this error is a transient one or a show stopper one.
    It sounds like your log reader agent has crashed and can't continue.
    If so your best bet is to call Microsoft CSS and open a support incident.
    It also sounds like DBCC CHECKDB was running while the log reader agent crashed.
    If you need to get up and running again, run sp_replrestart, but then you might find that replicated commands are not picked up. You will need to run a validation to determine whether you need to reinitialize the entire publication or a single article.
    I have run into errors like this, but they tend to be transient, i.e. the log reader agent crashes, and on restart it works fine.
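    For illustration, a minimal T-SQL sketch of that recovery path (not from the original reply; the publication name TestPub is a placeholder, and the commands must be run in the publication database):
    -- Move the replication watermark forward so the Log Reader Agent can continue.
    -- Commands skipped this way are NOT delivered to subscribers, which is why a
    -- validation pass follows.
    EXEC sys.sp_replrestart;
    -- Request a row-count validation of every article in the publication, so you can
    -- decide whether to reinitialize a single article or the whole publication.
    EXEC sp_publication_validation
        @publication   = N'TestPub',   -- placeholder publication name
        @rowcount_only = 1,            -- 1 = row-count-only validation
        @full_or_fast  = 2;            -- 2 = fast count, falling back to full count
    If the validation reports out-of-sync articles, reinitialize as described above.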
    looking for a book on SQL Server 2008 Administration?
    http://www.amazon.com/Microsoft-Server-2008-Management-Administration/dp/067233044X looking for a book on SQL Server 2008 Full-Text Search?
    http://www.amazon.com/Pro-Full-Text-Search-Server-2008/dp/1430215941

  • Log Reader Agent: transaction log file scan and failure to construct a replicated command

    I encountered the following error message related to Log Reader job generated as part of transactional replication setup on publisher. As a result of this error, none of the transactions propagated from publisher to any of its subscribers.
    Error Message
    2008-02-12 13:06:57.765 Status: 4, code: 22043, text: 'The Log Reader Agent is scanning the transaction log for commands to be replicated. Approximately 24500000 log records have been scanned in pass # 1, 68847 of which were marked for replication, elapsed time 66018 (ms).'.
    2008-02-12 13:06:57.843 Status: 0, code: 20011, text: 'The process could not execute 'sp_replcmds' on ServerName.'.
    2008-02-12 13:06:57.843 Status: 0, code: 18805, text: 'The Log Reader Agent failed to construct a replicated command from log sequence number (LSN) {00065e22:0002e3d0:0006}. Back up the publication database and contact Customer Support Services.'.
    2008-02-12 13:06:57.843 Status: 0, code: 22037, text: 'The process could not execute 'sp_replcmds' on 'ServerName'.'.
    Replication agent job kept trying after specified intervals and kept failing with that message.
    Investigation
    I could clearly see there were transactions waiting to be delivered to subscribers from the following:
    SELECT * FROM dbo.MSrepl_transactions -- 1162
    SELECT * FROM dbo.MSrepl_commands -- 821922
    The following steps were taken to further investigate the problem. They confirmed that transactions were queued, waiting to be delivered to the distribution database.
    -- Returns the commands for transactions marked for replication
    EXEC sp_replcmds
    -- Returns a result set of all the transactions in the publication database transaction log that are marked for replication but have not been marked as distributed.
    EXEC sp_repltrans
    -- Returns the commands for transactions marked for replication in readable format
    EXEC sp_replshowcmds
    Resolution
    Taking a backup as suggested in the message wouldn't resolve the issue. None of the commands retrieved from sp_browsereplcmds for the LSN mentioned in the message had any syntactic problems either.
    exec sp_browsereplcmds @xact_seqno_start = '0x00065e220002e3d00006'
    In a desperate attempt to resolve the problem, I decided to drop all subscriptions. To my surprise, the Log Reader kept failing with the same error. I thought that with no subscriptions for the publication, the log reader agent would have no reason to scan the publisher's transaction log, but obviously I was wrong. Even adding a new log reader using sp_addlogreader_agent after deleting the old one did not help. Restarting the server couldn't do much good either.
    EXEC sp_addlogreader_agent
    @job_login = 'LoginName',
    @job_password = 'Password',
    @publisher_security_mode = 1;
    When nothing else worked for me, I decided to give it a try to the following procedures reserved for troubleshooting replication
    --Updates the record that identifies the last distributed transaction of the server
    EXEC sp_repldone @xactid = NULL, @xact_seqno = NULL, @numtrans = 0, @time = 0, @reset = 1
    -- Flushes the article cache
    EXEC sp_replflush
    Bingo !
    The Log Reader Agent managed to start successfully this time. I wish I could have used both commands before I decided to drop the subscriptions; it would have saved me considerable effort and the time spent re-creating the subscriptions.
    Question
    Even though I managed to resolve the error and have replication functioning again, I think there might have been a better solution, and I would appreciate it if you could provide some feedback and propose your approach to resolving the problem.

    Hi Hilary,
    Will the command below truncate the log records marked for replication? Is there any data loss when we execute this command? Can you please help me understand the internal workings of this command?
    EXEC sp_repldone @xactid = NULL, @xact_seqno = NULL, @numtrans = 0, @time = 0, @reset = 1

  • ACD Logs Display and Agent Call Display is N/A on CSD

    Hello,
    We are facing the problem that the ACD logs display and Agent call display are N/A on the Cisco Supervisor Desktop. The version of CSD is 8.5.4, and the overall status of CSD is fine; for reference please see the attached screenshot. Thanks.
    Looking forward to the response.
    BR,
    Durraze 

    Hi Karthi,
    On the PGA, run the postinstall.exe setup and turn the Rascal Replication setup off/on, then in services.msc cycle the services listed below:
    Cisco Chat Service 
    Cisco Enterprise Service 
    Cisco LDAP monitor service
    Cisco License and Resource management
    Cisco Recording and Playback
    Cisco Recording and Statistics Service
    Cisco Sync Service
    Cisco VOIP monitor
    Please rate if it helps.
    BR,
    Durraze Khan

  • Log Reader Agent error "could not execute sp_replcmds" causes stack dump

    Publisher/Subscriber db:  SQL 2008 R2, 2000 compatibility mode
    Distributor database is on a separate server.
    (Note: there is another database on this instance that is running replication without error; it is not in compatibility mode.)
    After snapshot agent finishes, the log reader agent starts and fails immediately with this error in the Agent Job.
    Then I get a SEV20 error and stack dump in the error logs.
    Date  6/12/2014 3:12:26 PM
    Log  Job History (SERVER\INSTANCE-DBNAME-43)
    Step ID  2
    Server  ######RT02
    Job Name  SERVER\INSTANCE-DBNAME-43
    Step Name  Run agent.
    Duration  00:00:01
    Sql Severity  0
    Sql Message ID  0
    Operator Emailed  
    Operator Net sent  
    Operator Paged  
    Retries Attempted  0
    Message
    2014-06-12 20:12:26.302 Copyright (c) 2008 Microsoft Corporation
    2014-06-12 20:12:26.302 Microsoft SQL Server Replication Agent: logread
    2014-06-12 20:12:26.302
    2014-06-12 20:12:26.302 The timestamps prepended to the output lines are expressed in terms of UTC time.
    2014-06-12 20:12:26.302 User-specified agent parameter values:
       -Publisher SERVER\INSTANCE
       -PublisherDB DBNAME
       -Distributor ######RT02
       -DistributorSecurityMode 1
       -Continuous
       -XJOBID 0x8958DF32810C6849B28A037A8FF8DD92
       -XJOBNAME SERVER\INSTANCE-DBNAME-43
       -XSTEPID 2
       -XSUBSYSTEM LogReader
       -XSERVER SERVER\INSTANCE
       -XCMDLINE 0
       -XCancelEventHandle 0000000000000F98
       -XParentProcessHandle 0000000000000F34
    2014-06-12 20:12:26.459 Parameter values obtained from agent profile:
       -pollinginterval 5000
       -historyverboselevel 1
       -logintimeout 15
       -querytimeout 1800
       -readbatchsize 500
       -readbatchsize 500000
    2014-06-12 20:12:26.493 Status: 4096, code: 20024, text: 'Initializing'.
    2014-06-12 20:12:26.493 The agent is running. Use Replication Monitor to view the details of this agent session.
    2014-06-12 20:12:27.885 Status: 0, code: 20011, text: 'The process could not execute 'sp_replcmds' on 'SERVER\INSTANCE'.'.
    2014-06-12 20:12:27.886 The process could not execute 'sp_replcmds' on 'SERVER\INSTANCE'.
    2014-06-12 20:12:27.886 Status: 0, code: 21, text: 'Warning: Fatal error 3624 occurred at Jun 12 2014  3:12PM. Note the error and time, and contact your system administrator.'.
    2014-06-12 20:12:27.886 Status: 0, code: 22037, text: 'The process could not execute 'sp_replcmds' on 'SERVER\INSTANCE'.'.
    I've tried removing replication and setting it back up again, restarting SQL, and restarting the server itself.
    Let me know if you need any more information to help troubleshoot.  Thanks.
    Please help, thanks. 

    Hi,
    Enable verbose logging and check the results.
    Add the following parameters to the agent command line: -Output C:\Temp\OUTPUTFILE.txt -OutputVerboseLevel 2.
    Please refer following KB article for your reference -
    http://support.microsoft.com/kb/q312292/
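    For illustration only (not part of the reply above), one way to attach those switches is to append them to the "Run agent." step of the Log Reader Agent job shown earlier; a hedged T-SQL sketch, assuming the job name and step number from the post:
    USE msdb;
    DECLARE @job_name sysname = N'SERVER\INSTANCE-DBNAME-43',   -- the agent job from the post
            @command  nvarchar(max);
    -- Read the current "Run agent." step command (step 2 in the job history above).
    SELECT @command = js.command
    FROM dbo.sysjobsteps AS js
    JOIN dbo.sysjobs AS j ON j.job_id = js.job_id
    WHERE j.name = @job_name AND js.step_id = 2;
    -- Append the verbose-output switches and save the step back.
    SET @command = @command + N' -Output C:\Temp\OUTPUTFILE.txt -OutputVerboseLevel 2';
    EXEC dbo.sp_update_jobstep @job_name = @job_name, @step_id = 2, @command = @command;
    Restart the agent job afterwards and review the output file for the underlying error.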
    Thanks.
    Tracy Cai
    TechNet Community Support

  • Cisco Supervisor Desktop show "Agent Logs - call" and "Agent Logs - state" in N/A ::: UCCX 8.5.1

    Hi team.
    The Cisco Supervisor Desktop doesn't show any logs in the "Agent Logs - State" and "Agent Logs - Call" panes for some agents.
    I restarted the Cisco Desktop Services in CCX Serviceability but the issue continues.
    I appreciate any help respect this case.
    Thanks a lot.
    ErnestoG

    Hi Ernesto,
    Did you click or select the specific Agent\Inbound call which is currently being handled by the agent? From the screenshot you have attached (the first one), it doesn't look like the call has been selected.
    Please select or click on that Specific Agent\Inbound call from CSD and check these values.
    Hope this helps.
    Anand
    Please rate helpful posts !!

  • Log Reader Agent and Snapshot Agent won't start

    Hi There,
    I've two SQL 2012 servers with multiple instances installed.
    I've started replicating the databases in these instances using transactional replication and thus far they've worked without a hitch.
    One of my instances, annoyingly, has an issue where the Log Reader Agent and Snapshot Agent refuse to start, and I've followed exactly the same process as with the other instances\databases.
    The Agents are configured to make use of a domain user account with sysadmin permissions to the instances on both servers.
    I get the following two error when I View Log Reader Agent Status:
    The job failed. The Job was invoked by User sa. The last step to run was step 2 (Run agent.).
    I've asked the agent to run as my DOMAIN\sqlservice account, so I've no idea why it's moaning about sa?!!?
    I get the following error when I View Snapshot Agent Status:
    The replication agent has not logged a progress message in 10 minutes. This might indicate an unresponsive agent or high system activity. Verify that records are being replicated to the destination and that connections to the Subscriber, Publisher, and Distributor are still active.
    If I try to start either agent I'm told that the request to run job was refused because the job has been suspended, "Changed database context"??  Error 22022.
    Can anyone help?

    This is because your job owner is sa. Right-click on your job and notice the owner - it should be sa.
    You likely have another issue. You may need to run the job and configure it for logging to see what the error is.
    http://support.microsoft.com/kb/312292/en-us
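    As a hedged illustration (not part of the reply above), the owners of the replication agent jobs can also be checked from T-SQL rather than the right-click dialog:
    USE msdb;
    -- List the Log Reader and Snapshot agent jobs with their owners.
    SELECT j.name AS job_name,
           p.name AS owner_login,
           j.enabled
    FROM dbo.sysjobs AS j
    JOIN sys.server_principals AS p ON p.sid = j.owner_sid
    WHERE j.category_id IN (SELECT category_id
                            FROM dbo.syscategories
                            WHERE name IN (N'REPL-LogReader', N'REPL-Snapshot'));
    -- If an owner ever needs changing (job name is a placeholder):
    -- EXEC dbo.sp_update_job @job_name = N'<agent job name>', @owner_login_name = N'sa';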
    looking for a book on SQL Server 2008 Administration?
    http://www.amazon.com/Microsoft-Server-2008-Management-Administration/dp/067233044X looking for a book on SQL Server 2008 Full-Text Search?
    http://www.amazon.com/Pro-Full-Text-Search-Server-2008/dp/1430215941

  • Xgrid server admin controller tab won't create password entries for client and agent authentication.

    I am trying to set up password-based access for my OSX Server 10.7.3 running on a Mac mini.  When I try to enter passwords into the Client Authentication and Agent Authentication fields on the Controller tab and click Save, the fields empty out.  When I then try to start the Xgrid service, it fails with an error in the log file: controller missing password file "/etc/xgrid/controller/agent-password".  Can someone help?
    Thanks,
    Chris

    Thanks for the pointer to createhomedir - that did indeed do the trick. (How on earth do people find these little nuggets).
    I hesitate to mark this as solved however - it's a functioning workaround, but does nothing to explain why on earth the GUI suddenly stopped functioning.
    But in the (likely) event that that question never gets answered, thanks again for letting me get on with working!

  • VSTS Controller and Agent don't communicate

    We are doing an exercise of load testing a website using VSTS 2010 with multiple machines as clients. To do so, we have tried setting up a controller and an agent on 2 different machines,
    Machine1 (Controller) and Machine2 (Agent). When the test is initiated from Machine1 (this machine has VSTS 2010 installed), Machine2 is not providing any load to the test. The tests are configured to run as a service. The
    results were the same when running the tests as an interactive process instead of a service.
    To test the setup of the machines, the following tests were conducted:
    The below services were successfully run from both the machines
    Performance logs and Alerts
    Performance counter DLL host
    Remote registry
    Setting the execution method as Remote Execution under Edit
    Test Settings > Roles
    The agent machine is also getting displayed under Test >
    Manage Test Controller; we are able to take the agent machine online
    The controller and agent machines were able to communicate using the Typeperf command
    When the load test is run with this configuration, we don't see any Page Response/Key Indicators in the graph and there are no errors in the test run or in the event viewer, although we can see the Controller/Agent graph getting generated. During this time, there
    were no updates in the database, but whenever we ran the test settings with the execution method set to
    Local Execution, the test ran successfully and wrote to the database.
    This configuration was done in our client's environment, where we have McAfee anti-virus running and the Windows firewall enabled.
    Please let us know how to go forward with this. Is there anything missing in the configuration?

    Hi Gautam_ap,
    Thank you for posting in MSDN forum.
    According to your description, it seems that you are trying to run a load test remotely from VS2010 with a test controller and test agent, but the load test fails when run remotely.
    Based on your issue, could you please share the detailed error message from the failed remote load test run in VS2010?
    As far as I know, if we want to run the load test remotely from VS2010, we need to ensure that the versions of the test controller and test agent match VS2010. So please check that you have installed Test Controller 2010
    and Test Agent 2010.
    As you said that you have McAfee anti-virus running and the Windows firewall enabled, please disable the anti-virus and Windows firewall and then check this issue again.
    Generally, if you want to run automated tests that interact with the desktop, you must set up your agent to run as a process instead of a service. For example, if you want to run a coded UI test remotely using a test controller and test agent,
    you must set up your agent to run as a process.
    Since you are running the load test remotely from the VS IDE with a test controller and test agent, you do not need to run the agent as an interactive process instead of a service.
    Therefore, I suggest you also refer to the following MSDN document on installing and configuring the test controller and test agent for running load tests remotely with VS2010.
    https://msdn.microsoft.com/en-us/library/ff400223(v=vs.100).aspx
    Best Regards,

  • CSA - file copy log

    Hi all!
    I have a question.
    We want to protect the data in our company, so I set the CSA MC to log when someone tries to copy private data to a removable device (pen drive, ...),
    and the CSA sends me a mail about this event.
    But it isn't enough protection. If the user changes the filename (to .mp3), I don't know what the file actually is - is it really an "mp3" or private data?
    What can you suggest?
    Can I save the file somewhere to check it later?
    Or can I create a better rule to catch someone trying to steal the data?
    (I don't want to deny saving, just log the stealing.)
    I hope you understand what I want.
    Thank you, br Gabor

    Gabor,
    The first key step is to identify where this sensitive data lives, or what program is generating it (even easier). Let's say that you want to secure everything from your financial application. You would set up CSA to statically tag any data file written by that program as "sensitive." Then you would write some CSA rules to monitor whenever that sensitive data is modified (e.g. its extension is changed) and/or block it when it is used improperly (e.g. copied to USB).
    All those options are available without the optional DLP license.
    Thanks,
    Josh

  • Can you please take a look at my TM Buddy log and opine on what the problem is?

    Pondini,
    Can you please take a look at my TM Buddy log and opine on what the problem is?  I'm stuck in the "Preparing Backup" phase for what must be hours now.  My last successful backup was this morning at 7:16 am.  I did a series of Software Updates this morning, one of which, a security update I believe, required a restart.
    I'm confused as to what the issue is, and how to get everything back to "it just works".
    Many thanks in advance.
    Starting standard backup
    Backing up to: /Volumes/JDub's Drop Zone/Backups.backupdb
    Error: (5) getxattr for key:com.apple.backupd.SnapshotState path:/Volumes/JDub's Drop Zone/Backups.backupdb/Jason Wisniowski’s iMac/2013-05-30-002104
    Error: (5) getxattr for key:com.apple.backupd.SnapshotContainer path:/Volumes/JDub's Drop Zone/Backups.backupdb/Jason Wisniowski’s iMac/2013-05-30-002104
    Error: (5) getxattr for key:com.apple.backupd.SnapshotState path:/Volumes/JDub's Drop Zone/Backups.backupdb/Jason Wisniowski’s iMac/2013-05-30-002104
    Error: (5) getxattr for key:com.apple.backupd.SnapshotContainer path:/Volumes/JDub's Drop Zone/Backups.backupdb/Jason Wisniowski’s iMac/2013-05-30-002104
    Error: (5) getxattr for key:com.apple.backupd.SnapshotState path:/Volumes/JDub's Drop Zone/Backups.backupdb/Jason Wisniowski’s iMac/2013-05-30-002104
    Error: (5) getxattr for key:com.apple.backupd.SnapshotContainer path:/Volumes/JDub's Drop Zone/Backups.backupdb/Jason Wisniowski’s iMac/2013-05-30-002104
    Error: (5) getxattr for key:com.apple.backupd.SnapshotState path:/Volumes/JDub's Drop Zone/Backups.backupdb/Jason Wisniowski’s iMac/2013-05-30-002104
    Error: (5) getxattr for key:com.apple.backupd.SnapshotContainer path:/Volumes/JDub's Drop Zone/Backups.backupdb/Jason Wisniowski’s iMac/2013-05-30-002104
    Error: (5) getxattr for key:com.apple.backupd.SnapshotState path:/Volumes/JDub's Drop Zone/Backups.backupdb/Jason Wisniowski’s iMac/2013-05-30-002104
    Error: (5) getxattr for key:com.apple.backupd.SnapshotContainer path:/Volumes/JDub's Drop Zone/Backups.backupdb/Jason Wisniowski’s iMac/2013-05-30-002104
    Error: (5) getxattr for key:com.apple.backupd.SnapshotState path:/Volumes/JDub's Drop Zone/Backups.backupdb/Jason Wisniowski’s iMac/2013-05-30-002104
    Error: (5) getxattr for key:com.apple.backupd.SnapshotContainer path:/Volumes/JDub's Drop Zone/Backups.backupdb/Jason Wisniowski’s iMac/2013-05-30-002104
    Error: (5) getxattr for key:com.apple.backupd.SnapshotState path:/Volumes/JDub's Drop Zone/Backups.backupdb/Jason Wisniowski’s iMac/2013-05-30-002104
    Error: (5) getxattr for key:com.apple.backupd.SnapshotContainer path:/Volumes/JDub's Drop Zone/Backups.backupdb/Jason Wisniowski’s iMac/2013-05-30-002104
    Error: (5) getxattr for key:com.apple.backupd.SnapshotState path:/Volumes/JDub's Drop Zone/Backups.backupdb/Jason Wisniowski’s iMac/2013-05-30-002104
    Error: (5) getxattr for key:com.apple.backupd.SnapshotContainer path:/Volumes/JDub's Drop Zone/Backups.backupdb/Jason Wisniowski’s iMac/2013-05-30-002104
    Event store UUIDs don't match for volume: Area 420
    Event store UUIDs don't match for volume: Macintosh HD
    Error: (5) getxattr for key:com.apple.backupd.SnapshotSt

    Time Machine can't read some data it needs from your backups (each of those date-stamps is one of your backups). 
    That's usually a problem with the drive itself, but could be the directory on it. First be sure all plugs are snug and secure, then see if you can repair it, per #A5 in Time Machine - Troubleshooting. 
    If that doesn't help, post back with the results.  Also tell us what kind of Mac you have and what version of OSX you're running, or post that to your Profile, so it's accessible.
    This is unrelated to the original post here, so I'm going to ask the Hosts to split it off into a new thread.  Since you've posted in the Lion forum, I'll assume that's what you're running.  You should get a notice from them

  • Q & A on NET8 LOGGING AND TRACE-related PARAMETERs

    Product: SQL*NET
    Date written: 1999-07-30
    Q & A on NET8 LOGGING AND TRACE-related PARAMETERs
    ==================================================
    PURPOSE
    This note explains the NET8 LOGGING AND TRACE-related PARAMETERs.
    Explanation
    1. Why is trace used in NET8, and which components can be traced?
    Tracing describes the network events that occur while network operations run;
    that is, a detailed series of trace-related statements is generated.
    By turning on "tracing" you can obtain more internal information about the NET8
    components than is provided in the log files.
    This information is written to files for the same events that result from an error,
    and can be used to determine the cause of the problem.
    Note: Using the trace facility requires sufficient disk space and can cause a
    significant degradation of system performance.
    It is therefore recommended to use tracing only when absolutely necessary.
    Example
    Reference Document
    << Components that can be traced using the trace facility >>
    * Network listener
    * Net8 components on the client and server
    * Connection Manager
    * Oracle Names Server
    * Oracle Names Control Utility
    * TNSPING utility
    2. Which parameters need to be set to use the trace facility?
    Tracing is enabled by setting specific trace parameters; they can be set
    using one of the methods or utilities given below.
    * Component Configuration Files
    * Component Control Utilities
    * Oracle Trace
    To set tracing parameters using a component's configuration file:
    1) Set the following tracing parameters in the component's configuration file
    - SQLNET.ORA for client or server, LISTENER.ORA for listener:
    TRACE_LEVEL_<CLIENT/LISTENER/SERVER>=(0/4/10/16)
    TRACE_DIRECTORY_<CLIENT/LISTENER/SERVER>=<directory name>
    LOG_DIRECTORY_<CLIENT/LISTENER/SERVER>=<directory name>
    2) If the configuration file was modified while the components were running,
    the components must be restarted for the changed parameters to take effect.
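    For illustration only, a minimal SQLNET.ORA fragment combining the client-side parameters above (the directory path is a placeholder; the same pattern applies to the server and, in LISTENER.ORA, to the listener):
    # sqlnet.ora - illustrative client-side tracing settings
    TRACE_LEVEL_CLIENT = 10                          # 0/OFF, 4/USER, 10/ADMIN, 16/SUPPORT
    TRACE_DIRECTORY_CLIENT = /oracle/network/trace   # placeholder directory
    LOG_DIRECTORY_CLIENT = /oracle/network/trace     # placeholder directory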
    To set trace parameters using the component control utilities:
    1) For the listener, the TRACE command of the Listener Control Utility (lsnrctl)
    can be used to set the trace level even while the listener is running.
    EX)
    RC80:/mnt3/rctest80> lsnrctl
    LSNRCTL for SVR4: Version 8.0.4.0.0 - Production on 01-SEP-98 15:16:52
    (c) Copyright 1997 Oracle Corporation. All rights reserved.
    Welcome to LSNRCTL, type "help" for information.
    LSNRCTL> trace admin
    Connecting to (ADDRESS=(PROTOCOL=ipc)(KEY=PNPKEY))
    Opened trace file: /mnt4/coe/app/oracle/product/8.0.4/network/trace/
    lsnr_coe.trc
    The command completed successfully
    LSNRCTL> trace off
    Connecting to (ADDRESS=(PROTOCOL=ipc)(KEY=PNPKEY))
    The command completed successfully
    LSNRCTL> exit
    RC80:/mnt3/rctest80>
    2) For Oracle Names, the TRACE_LEVEL command of the Names Control Utility (namesctl)
    can be used to set the trace level even while Oracle Names is running.
    Note: For the Connection Manager, the trace level can only be set in its
    configuration file, CMAN.ORA.
    Oracle Trace, part of Oracle Enterprise Manager (OEM), is a tracing tool that lets
    you set trace parameters and view the trace data through a GUI.
    3. Are there other utilities that can interpret traced data?
    Using the Trace Assistant, you can interpret the trace information in your *.trc
    files (generated in the SQL*Net v2 format) or *.txt files (output generated by
    Oracle Trace and TRCFMT).
    This utility provides additional information to help you diagnose and resolve
    problems caused by network issues, including:
    * the source and destination of trace files
    * the flow of packets between network nodes
    * which component of Net8 is failing
    * pertinent error codes
    The Trace Assistant is run with the following command:
    trcasst [options] <filename>
    Trace Assistant Text Formatting Options
    -o Displays connectivity and Two Task Common (TTC) information.
    After the -o the following options may be used:
    c (for summary connectivity information)
    d (for detailed connectivity information)
    u (for summary TTC information)
    t (for detailed TTC information)
    q (displays SQL commands enhancing summary TTC
    information)
    -p Oracle Internal Use Only
    -s Displays statistical information
    -e Enables display of error information After the -e, zero
    or one error decoding level may follow:
    0 or nothing (translates the NS error numbers dumped
    from the nserror function plus lists all
    other errors)
    1 (displays only the NS error translation from
    the nserror function)
    2 (displays error numbers without translation)
    If no options are supplied, -odt -e -s is used by default, producing detailed
    connectivity, Two-Task Common, error level 0, and statistical information.
    4. How does this differ from SQL*Net v2 tracing?
    Net8 tracing includes all of the options provided in the previous version,
    SQL*NET V2, and adds the Oracle Trace functionality.
    This allows you to manage your trace information in the Oracle Trace Repository
    through the OEM console.
    5. What are the *.cdf and *.dat files?
    The *.cdf and *.dat files are generated by Oracle Trace, and the trcfmt utility
    must be used to read them.
    trcfmt extracts the data contained in the binary files (*.dat and *.cdf extensions)
    into plain text (.txt extension). To use this tool, run the following command:
    $ trcfmt collection.cdf
    Note: If you run this tool from a directory other than the one containing the
    .cdf and .dat files, the path must be included. If tracing information for
    several processes is collected in a single pair of .cdf and .dat files, it
    will be extracted into files named process_id.txt.
    6. Which configuration files are related to tracing, and which parameters
    can be set in them?
    ==========================================================================
    || SQLNET.ORA Parameters ||
    ==========================================================================
    DAEMON.TRACE_DIRECTORY
    Purpose: Controls the destination directory of the Oracle Enterprise Manager daemon trace file
    Default Value: $ORACLE_HOME/network/trace
    Description Available In: Oracle Enterprise Manager Installation Guide
    Example: DAEMON.TRACE_DIRECTORY=/oracle/traces
    DAEMON.TRACE_LEVEL
    Purpose: Turns tracing on/off to a certain specified level for the Oracle Enterprise Manager daemon.
    Default Value: 0 or OFF
    Available Values:
    * 0 or OFF - No trace output
    * 4 or USER - User trace information
    * 10 or ADMIN - Administration trace information
    * 16 or SUPPORT - WorldWide Customer Support trace information
    Description Available In: Oracle Enterprise Manager Installation Guide
    Example: DAEMON.TRACE_LEVEL=10
    DAEMON.TRACE_MASK
    Purpose: Specifies that only the Oracle Enterprise Manager daemon trace entries are logged into the trace file.
    Default Value: $ORACLE_HOME/network/trace
    Description Available In: Oracle Enterprise Manager Installation Guide
    Example: DAEMON.TRACE_MASK=(106)
    LOG_DIRECTORY_CLIENT
    Purpose: Controls the directory for where the log file is written
    Default Value: Current directory where executable is started from.
    Example: LOG_DIRECTORY_CLIENT=/oracle/network/trace
    LOG_DIRECTORY_SERVER
    Purpose: Controls the directory for where the log file is written
    Default Value: Current directory where executable is started from.
    Valid in File: SQLNET.ORA
    Example: LOG_DIRECTORY_SERVER=/oracle/network/trace
    LOG_FILE_CLIENT
    Purpose: Controls the log output filename for an Oracle client.
    Default Value: SQLNET.LOG
    Example: LOG_FILE_CLIENT=client
    LOG_FILE_SERVER
    Purpose: Controls the log output filename for an Oracle server.
    Default Value: SQLNET.LOG
    Example: LOG_FILE_SERVER=svr
    NAMESCTL.TRACE_LEVEL
    Purpose: Indicates the level at which the NAMESCTL program should be traced.
    Default Value: OFF
    Values: OFF, USER, or ADMIN
    Example: NAMESCTL.TRACE_LEVEL=ADMIN
    NAMESCTL.TRACE_FILE
    Purpose: Indicates the file in which the NAMESCTL trace output is placed.
    Default Value: namesctl_PID.cdf and namesctl_PID.dat
    Example: NAMESCTL.TRACE_FILE=NMSCTL
    NAMESCTL.TRACE_DIRECTORY
    Purpose: Indicates the directory where trace output from the NAMESCTL utility is placed.
    Default Value: $ORACLE_HOME/network/trace
    Example: NAMESCTL.TRACE_DIRECTORY=/ORACLE/TRACE
    NAMESCTL.TRACE_UNIQUE
    Purpose: Indicates whether a process identifier is appended to the name of each trace file generated, so that several can coexist.
    Default Value: OFF
    Values: OFF or ON
    Example: NAMESCTL.TRACE_UNIQUE = ON
    TNSPING.TRACE_DIRECTORY
    Purpose: Control the destination directory of the trace file
    Default Value: $ORACLE_HOME/network/trace
    Example: TNSPING.TRACE_DIRECTORY=/oracle/traces
    TNSPING.TRACE_LEVEL
    Purpose: Turns tracing on/off to a certain specified level
    Default Value: 0 or OFF
    Available Values:
    * 0 or OFF - No trace output
    * 4 or USER - User trace information
    * 10 or ADMIN - Administration trace information
    * 16 or SUPPORT - WorldWide Customer Support trace information
    Example: TNSPING.TRACE_LEVEL=10
    TRACE_DIRECTORY_CLIENT
    Purpose: Control the destination directory of the trace file
    Default Value: $ORACLE_HOME/network/trace
    Example: TRACE_DIRECTORY_CLIENT=/oracle/traces
    TRACE_DIRECTORY_SERVER
    Purpose: Control the destination directory of the trace file
    Default Value: $ORACLE_HOME/network/trace
    Example: TRACE_DIRECTORY_SERVER=/oracle/traces
    TRACE_FILE_CLIENT
    Purpose: Controls the name of the client trace file
    Default Value: SQLNET.CDF and SQLNET.DAT
    Example: TRACE_FILE_CLIENT=cli
    TRACE_FILE_SERVER
    Purpose: Controls the name of the server trace file
    Default Value: SVR_PID.CDF and SVR_PID.DAT
    Example: TRACE_FILE_SERVER=svr
    TRACE_LEVEL_CLIENT
    Purpose: Turns tracing on/off to a certain specified level
    Default Value: 0 or OFF
    Available Values:
    * 0 or OFF - No trace output
    * 4 or USER - User trace information
    * 10 or ADMIN - Administration trace information
    * 16 or SUPPORT - WorldWide Customer Support trace information
    Example: TRACE_LEVEL_CLIENT=10
    TRACE_LEVEL_SERVER
    Purpose: Turns tracing on/off to a certain specified level
    Default Value: 0 or OFF
    Available Values:
    * 0 or OFF - No trace output
    * 4 or USER - User trace information
    * 10 or ADMIN - Administration trace information
    * 16 or SUPPORT - WorldWide Customer Support trace information
    Example: TRACE_LEVEL_SERVER=10
    TRACE_UNIQUE_CLIENT
    Purpose: Used to make each client trace file have a unique name to prevent each trace file from being overwritten with the next occurrence of the client. The PID is attached to the end of the filename.
    Default Value: OFF
    Example: TRACE_UNIQUE_CLIENT=ON
    USE_CMAN
    Purpose: If the session is in an Enhanced Discovery Network with a Names Server, this parameter forces all sessions to go through a Connection Manager to get to the server.
    Default Value: FALSE
    Values: TRUE or FALSE
    Example: USE_CMAN=TRUE
    ==========================================================================
    || LISTENER.ORA Parameters ||
    ==========================================================================
    LOG_DIRECTORY_listener_name
    Purpose: Controls the directory for where the log file is written
    Default Value: Current directory where executable is started from.
    Example: LOG_DIRECTORY_LISTENER=/oracle/traces
    LOG_FILE_listener_name
    Purpose: Specifies the filename where the log information is
    written
    Default Value: listener_name.log
    Example: LOG_FILE_LISTENER=lsnr
    TRACE_DIRECTORY_listener_name
    Purpose: Control the destination directory of the trace file
    Default Value: $ORACLE_HOME/network/trace
    Example: TRACE_DIRECTORY_LISTENER=/oracle/traces
    TRACE_FILE_listener_name
    Purpose: Controls the name of the listener trace file
    Default Value: LISTENER_NAME.CDF and LISTENER_NAME.DAT
    Example: TRACE_FILE_LISTENER=lsnr
    TRACE_LEVEL_listener_name
    Purpose: Turns tracing on/off to a certain specified level
    Default Value: 0 or OFF
    Available Values:
    * 0 or OFF - No trace output
    * 4 or USER - User trace information
    * 10 or ADMIN - Administration trace information
    * 16 - WorldWide Customer Support trace information
    Example: TRACE_LEVEL_LISTENER=10
    ==========================================================================
    || NAMES.ORA Parameters ||
    ==========================================================================
    NAMES.TRACE_DIRECTORY
    Purpose: Indicates the name of the directory to which trace files from a Names Server trace session are written.
    Default Value: platform specific
    Example: names.trace_directory = complete_directory_name
    NAMES.TRACE_FILE
    Purpose: Indicates the name of the output file from a Names Server trace session. The filename extension is always .trc
    Default Value: names
    Example: names.trace_file = filename
    NAMES.TRACE_LEVEL
    Purpose: Indicates the level at which the Names Server is to be traced.
    Default Value: OFF
    Example: names.trace_level = OFF
    NAMES.TRACE_UNIQUE
    Purpose: Indicates whether each trace file has a unique name, allowing multiple trace files to coexist. If the value is set to ON, a process identifier is appended to the name of each trace file generated.
    Default Value: OFF
    Example: names.trace_unique = ON
    names.trace_file = names_05.trc
    ==========================================================================
    || CMAN.ORA Parameters ||
    ==========================================================================
    TRACING
    Default Value: NO
    Example: TRACING = NO
    References
    7. Is there a way to keep logging information from being written to the listener.log file?
    When an application developed by a customer connects or disconnects through NET8,
    the related information is written to listener.log. With many users connecting,
    listener.log can grow very quickly, fill the file system containing $ORACLE_HOME,
    and end up causing the database to hang.
    Customers sometimes want to limit the amount of messages that can be written to
    listener.log, but such a feature is not provided. However, listener logging can
    be turned ON or OFF through configuration.
    In Net8, setting "LOGGING_(the listener name)=off" in listener.ora stops the
    listener's logging.
    Of course, after changing the setting you must stop and restart the listener for
    the changed parameter to take effect.
    Reference: Is this parameter also valid in SQL*NET 2.3.x?
    Yes, it can be used there as well; it works by setting the parameter in
    listener.ora exactly as in NET8.
    EX)
    LOGGING_LISTENER=OFF
    This parameter disables the listener's logging entirely; it is not a facility for
    filtering and logging only part of the information.
    This parameter is known to NET8 and, although it does not appear in the SQL*NET
    2.3.x manuals, it can be used there normally.

  • Failed attempt to move log and database paths

    Hi. Can anyone offer any advice on what might have caused an attempt to move Exchange 2010 (SP3) mailbox database and log folder paths to fail? I can't diagnose it, and would appreciate any advice.
    We have two databases in a two-node DAG, one mounted on each node. I removed the passive copy of each database before the move attempts. The first moved ok. The second failed. Here's a summary of what happened:
    Task 1:
    DAG-NODE-A ----------------------------DAG-NODE-B
    MAILBOX-DB-01                               MAILBOX-DB-01
    MOUNTED                                        HEALTHY (passive)
    - Dismount database
    - Remove passive copy.
    - Move logfolder path from R: to L:
      and move EDB path from S: to M:
    - Succeeded.
    - Mount database.
    - Re-add passive copy
    All ok.
    Task 2:
    DAG-NODE-A ----------------------------DAG-NODE-B
    MAILBOX-DB-02                               MAILBOX-DB-02
    HEALTHY (passive)                           MOUNTED
    - Dismount database
    - Remove passive copy.                                                
    - Move logfolder path from R: to L:
    and move EDB path from S: to M:
    - Failed. Paths not moved.
    - Database now won't mount.
    - Database eventually mounts
    after approx. 30 auto retries.
    - Abandon attempt to move paths.
    - Re-add passive copy.                                            
    END
    Here's some more detail, including event log and shell output:
    So, as per the above, with Mailbox-DB-01 (active copy on DAG-NODE-A), Exchange moved the paths without a hitch and I was then able to mount the database and re-add the passive copy. I then tried to move the paths for Mailbox-DB-02 (active copy on DAG-NODE-B),
    but after a few minutes Exchange aborted the move, outputting errors to the Shell, the application log and the MSExchange Management log. A second attempt failed because Exchange found that "The .edb file path is not available. There is already a file
    named M:\Mailbox-DB-02\Mailbox-DB-02.edb" - Exchange had created an EDB file in the target location, but the source EDB file was still in the source location and the path was unchanged. Rather than re-trying the move again, I decided to try to mount the
    database, to check it was ok, but it wouldn’t mount.
    I left it alone for an hour or so to have a think about what to do, came back to it and found Exchange had mounted the database (without moving the paths) after around 30 automatic retries. I was then able to re-add the passive copy.
    I’m not keen to re-try the move without knowing why it failed, why the database then wouldn’t mount, and why it subsequently recovered. Any advice would be appreciated. Many thanks.
    Output included
    - a WinRM error in the Shell;
    - an App Log error event from my attempt to mount the database after the move failure, saying an attempt to open the EDB file for read / write access failed with system error 32 because the file was being used by another process;
    - events recording failure to configure and start the database, and noting a serious error which caused it to terminate its functional activity;
    - Event ID 3154 from MSExchangeRepl, saying an Active Manager operation failed / database action failed / MapiExceptionCallFailed;
    - Event ID 10056 from source MSExchangeIS Mailbox Store in the App Log just before Exchange successfully mounted the troublesome database, saying “Patch all ID counters for database Mailbox-DB-02.”
    One other thing which may or may not be relevant – the MSExchange Service Host service stops soon after being started on DAG-NODE-B (reboots have made no difference). On DAG-NODE-A, it runs consistently.
    Here’s what seems to me to be the most relevant output from the Application Log and the MSExchange Management Log on DAG-NODE-B, and from the Exchange Shell:
    ==================================================================================
    DAG-NODE-B, Application Log:
    Information DD/MM/2014 10:46:19 ESE 327 General "Information Store (3720) Mailbox-DB-02: The database engine detached a database (4, S:\Mailbox-DB-02\Database\Mailbox-DB-02.edb). (Time=6 seconds)
    Internal Timing Sequence: [1] 0.000, [2] 0.000, [3] 0.000, [4] 0.000, [5] 0.000, [6] 6.188, [7] 0.156, [8] 0.016, [9] 0.015, [10] 0.016, [11] 0.031.
    Revived Cache: 0"
    ==================================================================================
    DAG-NODE-B, Application Log:
    Information DD/MM/2014 10:46:19 ESE 103 General "Information Store (3720) Mailbox-DB-02: The database engine stopped the instance (4).
    Dirty Shutdown: 0
    ==================================================================================
    DAG-NODE-B, Application Log:
    Information DD/MM/2014 10:46:19 MSExchangeIS Mailbox Store 9539 General The Microsoft Exchange Information Store database "Mailbox-DB-02" was stopped.
    ==================================================================================
    DAG-NODE-B, MSExchange Management log
    Information DD/MM/14 10:46:19 MSExchange CmdletLogs 1 General Cmdlet succeeded. Cmdlet Dismount-Database, parameters {Identity=Mailbox-DB-02}.
    ==================================================================================
    DAG-NODE-B, Application Log:
    Information DD/MM/2014 10:46:19 MSExchangeRepl 3161 Service Active Manager dismounted database Mailbox-DB-02 on server DAG-NODE-B.company.corp.
    ==================================================================================
    DAG-NODE-B, MSExchange Management log
    Error DD/MM/14 10:46:20 MSExchange CmdletLogs 6 General Cmdlet failed. Cmdlet Get-PublicFolderDatabase, parameters {Status=True, Identity=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx}.
    ==================================================================================
    DAG-NODE-B, Application Log:
    Information DD/MM/2014 10:46:21 MSExchange Assistants 9002 Assistants Service MSExchangeMailboxAssistants. Stopped processing database Mailbox-DB-02 (xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx).
    ==================================================================================
    DAG-NODE-B, Application Log:
    Information DD/MM/2014 10:46:21 MSExchange Assistants 9002 Assistants Service MSExchangeMailSubmission. Stopped processing database Mailbox-DB-02 (xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx).
    ==================================================================================
    DAG-NODE-B, Application Log:
    Error DD/MM/2014 10:51:08 MSExchange Search Indexer 104 General Exchange Search Indexer failed to enable the Mailbox Database Mailbox-DB-02 (GUID = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx) after 10 tries. The last failure was: MapiExceptionMdbOffline: Unable to
    get CI watermark (hr=0x80004005, ec=1142)
    ==================================================================================
    DAG-NODE-B, MSExchange Management log
    Warning DD/MM/14 11:06:44 MSExchange CmdletLogs 4 General Cmdlet stopped. Cmdlet Move-DatabasePath, parameters {Identity=Mailbox-DB-02, EdbFilePath=M:\Mailbox-DB-02\Mailbox-DB-02.edb, LogFolderPath=L:\Mailbox-DB-02\Logs}.
    ==================================================================================
    DAG-NODE-B, Exchange Management Shell output
    Processing data from remote server failed with the following error message: The WinRM client cannot complete the operation within the time specified. Check if the machine name is valid and is reachable over the network and firewall exception for Windows Remote
    Management service is enabled. For more information, see the about_Remote_Troubleshooting Help topic.
    + CategoryInfo : OperationStopped: (System.Manageme...pressionSyncJob:PSInvokeExpressionSyncJob) [], PSRemotingTransportException
    + FullyQualifiedErrorId : JobFailure
    ==================================================================================
    DAG-NODE-B, Application Log:
    Information DD/MM/2014 11:23:42 MSExchangeIS Mailbox Store 1000 General Attempting to start the Information Store "Mailbox-DB-02".
    ==================================================================================
    DAG-NODE-B, Application Log:
    Information DD/MM/2014 11:23:43 ESE 102 General "Information Store (3720) Mailbox-DB-02: The database engine (14.03.0162.0000) is starting a new instance (4).
    ==================================================================================
    DAG-NODE-B, Application Log:
    Information DD/MM/2014 11:23:43 ESE 105 General "Information Store (3720) Mailbox-DB-02: The database engine started a new instance (4). (Time=0 seconds)
    ==================================================================================
    DAG-NODE-B, Application Log:
    Error DD/MM/2014 11:23:53 ESE 490 General "Information Store (3720) Mailbox-DB-02: An attempt to open the file ""S:\Mailbox-DB-02\Database\Mailbox-DB-02.edb"" for read / write access failed with system error 32 (0x00000020): ""The
    process cannot access the file because it is being used by another process. "". The open file operation will fail with error -1032 (0xfffffbf8).
    ==================================================================================
    DAG-NODE-B, Application Log:
    Error DD/MM/2014 11:23:53 MSExchangeIS 9519 General "The following error occurred while starting database Mailbox-DB-02: 0xfffffbf8.
    Failed to configure MDB. "
    ==================================================================================
    DAG-NODE-B, Application Log:
    Error DD/MM/2014 11:23:53 MSExchangeIS 9519 General "The following error occurred while starting database Mailbox-DB-02: 0xfffffbf8.
    Start DB failed.. "
    ==================================================================================
    DAG-NODE-B, Application Log:
    Error DD/MM/2014 11:23:53 ExchangeStoreDB 215 Database recovery At 'DD/MM/2014 11:23:53' the Microsoft Exchange Information Store Database 'Mailbox-DB-02' copy on this server experienced a serious error which caused it to terminate its functional activity.
    The error returned by the remount attempt was "There is only one copy of this mailbox database (Mailbox-DB-02). Automatic recovery is not available.". Consult the event log on the server for other storage and "ExchangeStoreDb" events for
    more specific information about the failures.
    ==================================================================================
    DAG-NODE-B, Application Log:
    Error DD/MM/2014 11:23:53 ExchangeStoreDB 231 Database recovery At 'DD/MM/2014 11:23:53', the copy of database 'Mailbox-DB-02' on this server encountered an error during the mount operation. For more information, consult the Event log on the server for "ExchangeStoreDb"
    or "MSExchangeRepl" events. The mount operation will be tried again automatically.
    ==================================================================================
    DAG-NODE-B, Application Log:
    Information DD/MM/2014 11:23:54 ESE 103 General "Information Store (3720) Mailbox-DB-02: The database engine stopped the instance (4).
    Dirty Shutdown: 0
    ==================================================================================
    DAG-NODE-B, Application Log:
    Error DD/MM/2014 11:23:54 ExchangeStoreDB 231 Database recovery At 'DD/MM/2014 11:23:54', the copy of database 'Mailbox-DB-02' on this server encountered an error during the mount operation. For more information, consult the Event log on the server for "ExchangeStoreDb"
    or "MSExchangeRepl" events. The mount operation will be tried again automatically.
    ==================================================================================
    DAG-NODE-B, Application Log:
    Error DD/MM/2014 11:23:54 MSExchangeRepl 3154 Service Active Manager failed to mount database Mailbox-DB-02 on server DAG-NODE-B.company.corp. Error: An Active Manager operation failed. Error The database action failed. Error: Operation failed with message:
    MapiExceptionCallFailed: Unable to mount database. (hr=0x80004005, ec=-1032)
    ==================================================================================
    NOTE:repeat events similar to some of the above continue until Exchange eventually manages to mount the database:
    ==================================================================================
    DAG-NODE-B, Application Log:
    Information DD/MM/2014 12:15:56 MSExchangeIS Mailbox Store 1000 General Attempting to start the Information Store "Mailbox-DB-02".
    ==================================================================================
    DAG-NODE-B, Application Log:
    Information DD/MM/2014 12:15:56 ESE 102 General "Information Store (3720) Mailbox-DB-02: The database engine (14.03.0162.0000) is starting a new instance (4).
    ==================================================================================
    DAG-NODE-B, Application Log:
    Information DD/MM/2014 12:15:56 ESE 105 General "Information Store (3720) Mailbox-DB-02: The database engine started a new instance (4). (Time=0 seconds)
    ==================================================================================
    DAG-NODE-B, Application Log:
    Information DD/MM/2014 12:15:57 ESE 326 General "Information Store (3720) Mailbox-DB-02: The database engine attached a database (5, S:\Mailbox-DB-02\Database\Mailbox-DB-02.edb). (Time=0 seconds)
    ==================================================================================
    DAG-NODE-B, Application Log:
    Information DD/MM/2014 12:15:57 MSExchangeIS Mailbox Store 10056 General Patch all ID counters for database Mailbox-DB-02.
    ==================================================================================
    DAG-NODE-B, Application Log:
    Information DD/MM/2014 12:15:57 MSExchangeIS Mailbox Store 1133 General Allocating message database resources for database "Mailbox-DB-02".
    ==================================================================================
    DAG-NODE-B, Application Log:
    Information DD/MM/2014 12:15:58 MSExchangeIS Mailbox Store 9523 General "The Microsoft Exchange Database ""Mailbox-DB-02"" has been started.
    Database File: S:\Mailbox-DB-02\Database\Mailbox-DB-02.edb
    Transaction Logfiles: R:\Mailbox-DB-02\Logs\
    Base Name (logfile prefix): E03
    System Path: R:\Mailbox-DB-02\Logs\
    (Start Duration=00:00:01.844) "
    ==================================================================================
    DAG-NODE-B, Application Log:
    Information DD/MM/2014 12:15:58 MSExchangeRepl 3156 Service Active Manager successfully mounted database Mailbox-DB-02 on server DAG-NODE-B.company.corp.
    ==================================================================================
    DAG-NODE-B, Application Log:
    Information DD/MM/2014 12:16:06 MSExchange Assistants 9001 Assistants Service MSExchangeMailSubmission. Started to process mailbox database Mailbox-DB-02 (xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx).
    ==================================================================================
    DAG-NODE-B, Application Log:
    Information DD/MM/2014 12:16:09 MSExchange Assistants 9001 Assistants Service MSExchangeMailboxAssistants. Started to process mailbox database Mailbox-DB-02 (xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx).
    ==================================================================================
    DAG-NODE-B, Application Log:
    Information DD/MM/2014 12:16:12 MSExchange Assistants 9017 Assistants Service MSExchangeMailboxAssistants. Managed Folder Mailbox Assistant for database Mailbox-DB-02 (xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx) is entering a work cycle. There are 125 mailboxes on
    this database.
    ==================================================================================
    DAG-NODE-B, Application Log:
    Information DD/MM/2014 12:16:30 MSExchange Search Indexer 108 General Exchange Search Indexer has enabled indexing for the Mailbox Database Mailbox-DB-02 (GUID = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx).
    ==================================================================================
      

    If you used the steps outlined in this TechNet article - http://technet.microsoft.com/en-us/library/dd979782.aspx - then you should have been OK.  I will say that I read one
    article that said to work one database at a time, and only when you are sure it is operational to go back and do the next, etc.
    (I also assume by your shorthand at the beginning of this document that you performed the tasks on the active node when you did this work.  If so, that's the best way to do it, so if it failed, it wasn't because you used the wrong steps or did them
    on the wrong system.)

  • Best practice for Error logging and alert emails

    I have SQL Server 2012 SSIS. I have Excel files that are imported with an Excel Source and OLE DB Destination. Scheduled jobs run the SSIS packages every night.
    I'm looking for advice on what the best practice is for a production environment. The requirements are the following:
    1) If error occurs with tasks, email is sent to admin
    2) If error occurs with tasks, we have log in flat file or DB
    Kenny_I

    Are you asking about the difference between using standard logging and event handlers? I prefer the latter, as standard logging will not always capture data in the way we desire. So we've developed a framework to add the necessary functionality inside event handlers
    and log the required data, in the required format, to a set of tables that we maintain internally.
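    Not part of Visakh's reply, but a minimal T-SQL sketch of the kind of internal log table such an event-handler framework might write to (table and column names are hypothetical; the values would normally be mapped from SSIS system variables such as System::PackageName, System::SourceName, System::ErrorCode and System::ErrorDescription in an Execute SQL Task inside the OnError handler):
    -- Hypothetical error-log table populated by SSIS OnError event handlers.
    CREATE TABLE dbo.SsisErrorLog
    (
        ErrorLogID       int IDENTITY(1,1) PRIMARY KEY,
        PackageName      nvarchar(200)  NOT NULL,
        TaskName         nvarchar(200)  NOT NULL,
        ErrorCode        int            NULL,
        ErrorDescription nvarchar(4000) NULL,
        LoggedAt         datetime2(0)   NOT NULL DEFAULT (SYSUTCDATETIME())
    );
    -- Parameterized statement for the Execute SQL Task; the ? markers are mapped
    -- to the SSIS system variables listed above (OLE DB connection).
    INSERT INTO dbo.SsisErrorLog (PackageName, TaskName, ErrorCode, ErrorDescription)
    VALUES (?, ?, ?, ?);
    The email requirement can then be met with a Send Mail Task in the same OnError handler or with the SQL Agent job's failure notification.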
    Please Mark This As Answer if it helps to solve the issue Visakh ---------------------------- http://visakhm.blogspot.com/ https://www.facebook.com/VmBlogs
