Opmn logs not being written
Hi All,
We are facing an issue.
No logs are being written to the opmn/logs directory. Logs were written correctly until 4 December and then stopped all of a sudden.
Are there any configuration files which may have been affected?
Best regards,
Brinda
To clarify.
We are now rotating the logfiles with the linux/unix command logrotate. I suspect that this is what is causing the issue: the logs are not being filled after rotation, and we need to restart opmn for the logs to start getting populated.
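If logrotate renames the file while opmn keeps the old file descriptor open, writes keep going to the rotated inode; the usual workaround when staying with logrotate is copytruncate. A hedged sketch (the path below is hypothetical; adjust it to your ORACLE_HOME):

```
# /etc/logrotate.d/opmn -- hypothetical snippet; path must match your install
/u01/app/oracle/product/10.1.3/opmn/logs/*.log {
    weekly
    rotate 4
    compress
    copytruncate    # truncate in place so opmn keeps writing to the same inode
}
```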
So I think we need to configure rotating logs in opmn.xml.
The Application server version is 10.1.3. This is the log line in our opmn.xml.
<log path="$ORACLE_HOME/opmn/logs/opmn.log" comp="internal;ons;pm" rotation-size="1500000"/>
So the question is: how do we activate opmn to rotate the log so that we do not need to use logrotate?
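For reference, this is the kind of <log> element we are considering; the rotation-hour attribute is our assumption from the 10.1.3 docs, and the units of rotation-size should be verified there as well:

```xml
<!-- Sketch only: rotation-hour is assumed from the 10.1.3 OPMN docs;
     verify attribute names and the units of rotation-size before use. -->
<log path="$ORACLE_HOME/opmn/logs/opmn.log"
     comp="internal;ons;pm"
     rotation-size="1500000"
     rotation-hour="1"/>
```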
In this document it says that you have to activate ODL for log rotation to work:
http://download.oracle.com/docs/cd/B25221_04/web.1013/b14432/logadmin.htm#sthref370
Is this true, or can we rotate text logs as well? That is what we would prefer.
Best regards,
Gustav
Similar Messages
-
OCR file is not being written to. Open file issues. Help please.
I've been troubleshooting an open file issue on our Test environment for quite a while now. Oracle has had me update to the latest CRS bundle for 10.2.0.3, then upgrade to 10.2.0.4, then apply two more patches via OPatch to bring 10.2.0.4 RAC to its most recent patch level. None of these patches resolved our problem. We have ~8700 datafiles in the database; once the database is started, we're at ~11k open files on Production, but on Test we're at ~37k or higher. It takes 1-2 days to hit the 65536 limit, at which point it crashes. I have to 'bounce' the database to keep it from crashing. Yes, I could raise the ulimit, but that isn't solving the problem.
Over the weekend I noticed that on Production and DEV the ocrfile is being written to constantly and has a current timestamp, but on Test the ocrfile has not been written to since the last OPatch install. I've checked the CRS status via 'cluvfy stage -post crsinst -n all -verbose' and everything comes back as 'passed'. The database is up and running, but the ocrfile is still timestamped April 14th, and open files jump to 37k upon opening the database and continue to grow to the ulimit. Before hitting the limit, I'll have over 5,000 open files for hc_<instance>.dat, which is what led me down the path of patching Oracle CRS and RDBMS to resolve the hc_<instance>.dat bug that was supposed to be fixed in all of the patches I've applied.
From imon_<instance>.log:
Health check failed to connect to instance.
GIM-00090: OS-dependent operation:mmap failed with status: 22
GIM-00091: OS failure message: Invalid argument
GIM-00092: OS failure occurred at: sskgmsmr_13
That info started the patching process, but it seems like there's more to it and this is just a symptom of some other issue. The fact that my ocrfile on Test is not being written to, while it updates frequently on Prod and Dev, seems odd.
We're using OCFS2 as our CFS, updated to most recent version for our kernel (RHEL AS 4 u7 -- 2.6.9-67.0.15.ELsmp for x86_64)
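For watching the descriptor growth generically, /proc gives a quick per-process count (a generic Linux sketch, not Oracle-specific; substitute the real database pid for $$):

```shell
# Count open file descriptors for a process via /proc (Linux).
# $$ (the current shell) stands in here for the real database pid.
pid=$$
count=$(ls "/proc/$pid/fd" | wc -l)
echo "pid $pid has $count open fds (soft limit: $(ulimit -Sn))"
```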
Any help greatly appreciated.
Check on Metalink whether you are hitting Bug 6931689.
Solution:
To fix this issue please apply following patch:
Patch 7298531 CRS MLR#2 ON TOP OF 10.2.0.4 FOR BUGS 6931689 7174111 6912026 7116314
or
Patch 7493592 CRS 10.2.0.4 Bundle Patch #2
Be aware that the patch has to be applied to the 10.2.0.4 database home to fix the problem.
Good Luck -
IO Labels are NOT being written to prefs!
I've followed all the advice on existing threads about this and it's definitely a serious bug for me. IO Labels are not being written to prefs for me.
Any ideas? Already tried deleting prefs, creating a new blank project and not saving; nothing has worked.
I found a workaround for anyone having this issue - and this is the ONLY thing that has worked after a week of trying everything on several forums.
Open Logic, set your labels how you want.
While Logic is open go to ~/Library/Preferences and delete com.apple.logic.pro.plist
Quit Logic. I don't think it matters whether you save your project or not. When Logic quits a new plist will be written, and this one WILL have your labels!
It seems that on my machine, Logic would not update the IO labels part of the prefs unless it was writing a completely new prefs file. -
Hprof heap dump not being written to specified file
I am running with the following
-Xrunhprof:heap=all,format=b,file=/tmp/englog.txt (java 1.2.2_10)
When I start the appserver, the file /tmp/englog1.txt gets created, but
when I do a kill -3 pid on the .kjs process, nothing else is being written to
/tmp/englog1.txt. In the kjs log I do see the "Dumping java heap..." message
and a core file is generated.
Any ideas on why I'm not getting anything else written to /tmp/englog1.txt?
Thanks.
Hi
It seems that the option you are using is correct. I might modify it to something like
java -Xrunhprof:heap=all,format=a,cpu=samples,file=/tmp/englog.txt,doe=n ClassFile
This seems to work on 1.3.1_02, so it may be something specific to the JDK version that
you are using. Try a later version just to make sure.
-Manish -
Transaction logs not being generated.
Hi Team,
We have configured log shipping successfully on SQL 2000 & ECC 5.0. However, it ran fine until last night, and today we found that log shipping is not happening as expected.
Now we found that transaction logs are not being generated. We couldn't find any errors, and the log shipping maintenance job seems to have been in an executing state for quite a long time. We tried stopping/starting the job manually, but with no success.
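One generic first check we could run (a sketch; DBCC SQLPERF exists on SQL 2000, but verify the output on your build, and the database name below is hypothetical):

```sql
-- How full is each transaction log? A log that never empties can point
-- at a stalled backup chain.
DBCC SQLPERF(LOGSPACE);

-- Inspect recent backup history for the database in question
-- (type 'L' = transaction log backup).
SELECT TOP 10 backup_start_date, type
FROM msdb.dbo.backupset
WHERE database_name = 'YourDatabase'   -- hypothetical name
ORDER BY backup_start_date DESC;
```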
Can someone please suggest?
Thanks & Regards,
Vinod
Hi Markus,
The recovery model is set to 'Full'. Even when we take a manual backup of the transaction logs, it's not initiating the process; it just hangs.
Any idea what could be causing this?
Thanks & Regards,
Vinod -
Snapshot Logs not being purged
Hi All,
I have a problem where my snapshot logs are not being purged, due to a materialized view having been dropped from the remote site.
I have looked at the DBMS_SNAPSHOT.PURGE_SNAPSHOT_FROM_LOG procedure to remove the orphaned entries in the materialized view logs.
I want to execute this procedure using the snapshot_id parameter, based on the snapshot ID from SYS.SLOG$, since the target snapshot is not listed in the list of registered snapshots (DBA_REGISTERED_SNAPSHOTS).
Below are the results of the query to determine which entries in SYS.SLOG$ at the master site are no longer being used.
SELECT NVL (r.NAME,'-') snapname
, snapid
, NVL (r.snapshot_site, 'not registered') snapsite
, snaptime
FROM SYS.slog$ s
, dba_registered_snapshots r
WHERE s.snapid = r.snapshot_id(+) AND r.snapshot_id IS NULL;
SNAPNAME SNAPID SNAPSITE SNAPTIME
- 435 not registered 27/09/2010 22:11
- 456 not registered 27/09/2010 22:11
Please let me know if there is any other method to purge the logs safely.
DBMS_MVIEW.PURGE_MVIEW_FROM_LOG is overloaded.
If you have the MVIEW_ID, you do not have to specify the (MVIEW_OWNER, MVIEW_NAME, MVIEW_SITE) parameters.
Check the PLSQL Procedures documentation on DBMS_MVIEW.
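To illustrate (a sketch only; 435 is just the first SNAPID from the query above, and the exact overload should be checked against your version's DBMS_MVIEW documentation):

```sql
-- Hedged sketch: purge by mview id alone, per the overloaded signature.
BEGIN
  DBMS_MVIEW.PURGE_MVIEW_FROM_LOG(435);
END;
/
```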
Hemant K Chitale
http://hemantoracledba.blogspot.com -
I have two WFE servers; on one server the log files are being generated, but on the other server there are no log files, only usage files
What could be the reason? Under Central Admin monitoring, ULS logs have already been enabled.
Hi Rizzk,
Have you solved this issue?
Have you checked and added the service account for SharePoint Services Tracing to the local group Performance Log Users or Administrators on the SharePoint server, then restarted this service?
http://www.justinkobel.com/post/2013/06/07/Solving-ULS-Log-Files-Being-Created-But-Empty-(0-Kb).aspx
http://sharepointinsight.wordpress.com/2010/04/05/solution-for-zero-byte-sharepoint-2010-log-files/
Thanks
Daniel Yang
TechNet Community Support -
Finder data not being written to .DS_Store file
The symptom of the problem I am experiencing is that the positions of icons on my desktop (as well as icons in other folders in my home directory, and Finder window settings for folders in my home directory) are lost every time I log out. On a subsequent login, all of the icons "reflow" starting from the top right corner of the screen, just below the boot volume icon. I have been able to determine that this has to do with the .DS_Store file(s) in the Desktop folder and other folders of my home directory. If a .DS_Store file exists, the timestamp does not change when I change the icon layout and log out. If I delete the .DS_Store file, it is not re-created.
In my case, my home directory (and the child Desktop folder) is being mounted off an SMB share on a Windows 2003 server. My Mac is using AD authentication to the same Windows 2003 server, which is the domain controller. I'm logging in with my AD credentials, and my home directory mapping is picked up from the home directory field in AD.
Now, Googling this problem, I found a lot of people complaining about wanting to suppress the use/creation of the .DS_Store files on Windows network volumes. This led to an Apple KB article (http://support.apple.com/kb/HT1629) on how to modify a default to prevent the creation of .DS_Store files on network volumes--essentially the very behavior I am experiencing. The upshot of the KB article is to use the following command in Terminal:
*defaults write com.apple.desktopservices DSDontWriteNetworkStores true*
I did a 'defaults read' to confirm this default isn't set on my install of Mac OS X 10.5.6--and it isn't. I then tried using the following variation in the hope I could force the behavior I wanted:
*defaults write com.apple.desktopservices DSDontWriteNetworkStores false*
Predictably, this had no effect.
The upshot is, NONE of the Finder data for files and folders in my home directory (icon positions, Finder window view defaults, etc.) is preserved between logons. And this is driving me nuts! I've spent several hours over the past two evenings trying to troubleshoot this, and I've had no luck.
As a footnote, I'll mention that if I drag a folder from my home directory onto the local hard disk, the .DS_Store file is created/updated and things behave as expected (icon positions and Finder window defaults are preserved). But if I then drag the folder back over to my home directory, the values in the .DS_Store file become "frozen" and don't change.
Hey, try this:
1.
Put a file in a folder on your Desktop.
Edit: not your Desktop, but rather a typical local home on an HFS+ volume ~/Desktop
2.
Use "Get Info" to add a "Comment".
The comment can be up to somewhere between 700 and 800 characters.
3.
Copy the folder to a FAT formatted flash drive, SMB share, etc.
4.
Create a new folder on the flash drive.
5.
Move the file from the first folder to the second folder.
Confirm that the "Finder" shows the comment to be there.
6.
Quit the "Finder" - in 10.4 and earlier, this action would ensure that all .DS_Store files get updated, including transfer of comments. I.e. don't "relaunch" or "kill", etc. Enable and user the "Quit" menu, or use:<pre>
osascript -e 'tell application "Finder" to quit</pre>
7.
Now where's the comment?
In step 2, you could also have just created a file on the non-HFS volume and wasted your time typing in a 700 character comment.
In step 6, a more real-world scenario is ejecting the drive or logging out, but deliberately quitting the "Finder" is the most conservative way to ensure that changes get written to .DS_Store files and comments survive. In 10.5.6, even under these conditions, comments are lost.
Icon positions and view preferences are one thing, but with comments - that's real user-inputted data that Apple is playing fast and loose with. And if they no longer support comments on non-HFS volumes, the "Finder" shouldn't be showing you the comments when you double-check them in step 5, or allow them to be added in the alternate version of step 2.
..."C'mon Apple... what gives here?"...
Unfortunately, this "Discussions" site is not frequented by Apple devs. Have you considered filing a bug report? I wonder what they would tell you, e.g. whether the treatment of .DS_Store files actually does turn out to be a bug or is intentional, as it suspiciously seems...
http://developer.apple.com/bugreporter/index.html -
OSD Logs not getting written to SLShare
We have the SLShare defined in the CustomSettings.ini as..
SLShare=\\<servername>\Logs$
SLShareDynamicLogging=\\<servername>\OSDLogs$
Logs used to get written to both locations with no issue. Over the past couple of months, no logs get written to either location. Shares have been recreated and still nothing. Checked the permissions; nothing has changed.
Ideas?
I do apologize for not providing a lot more info.
It's been a bad week with other issues, and I'm trying to clear up some old questions and concerns - this being one of them.
During the WinPE phase of the deployment, we've connected to the SLShare with the account that we use for joining the domain; that account connects without any issue.
In the ZTIGather.log, when we compare existing logs taken from the local drive of a machine that has been imaged (or is in the process of imaging) with the logs centrally stored on the server, we see that the property
SLShare is now = \\<servername>\Logs$
In the logs extracted from the local drives (..Windows\CCM\Logs\) of machines that have been imaged in the past couple of months, that property value does not exist, but we also see the following errors, which are not in the logs centrally stored on the server from a couple of months back. We have verified that the file
Microsoft.BDD.Utility.dll is in the correct location for
… Toolkit Package\Tools\x86 and …Toolkit Package\Tools\x64.
Finished getting network info ZTIGather 11/5/2014 10:29:11 AM 0 (0x0000)
Getting DP info ZTIGather 11/5/2014 10:29:11 AM 0 (0x0000)
Unable to determine ConfigMgr distribution point ZTIGather 11/5/2014 10:29:11 AM 0 (0x0000)
Finished getting DP info ZTIGather 11/5/2014 10:29:11 AM 0 (0x0000)
Getting WDS server info ZTIGather 11/5/2014 10:29:11 AM 0 (0x0000)
Unable to determine WDS server name, probably not booted from WDS. ZTIGather 11/5/2014 10:29:11
AM 0 (0x0000)
Finished getting WDS server info ZTIGather 11/5/2014 10:29:11 AM 0 (0x0000)
Property HostName is now = MININT-PUPTGTH ZTIGather 11/5/2014 10:29:11 AM
0 (0x0000)
Getting asset info ZTIGather 11/5/2014 10:29:11 AM 0 (0x0000)
FindFile: The file x86\Microsoft.BDD.Utility.dll could not be found in any standard locations. ZTIGather
11/5/2014 10:29:13 AM 0 (0x0000)
FindFile(...\Microsoft.BDD.Utility.dll) Result : 1 ZTIGather 11/5/2014
10:29:13 AM 0 (0x0000)
RUN: regsvr32.exe /s "" ZTIGather 11/5/2014 10:29:13 AM
0 (0x0000)
FindFile: The file x64\Microsoft.BDD.Utility.dll could not be found in any standard locations. ZTIGather
11/5/2014 10:29:13 AM 0 (0x0000)
FindFile(...\Microsoft.BDD.Utility.dll) Result : 1 ZTIGather 11/5/2014
10:29:13 AM 0 (0x0000)
RUN: regsvr32.exe /s "" ZTIGather 11/5/2014 10:29:13 AM
0 (0x0000)
FAILURE (Err): 429: CreateObject(Microsoft.BDD.Utility) - ActiveX component can't create object ZTIGather
11/5/2014 10:29:13 AM 0 (0x0000)
Property AssetTag is now = No Asset Information ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
Property SerialNumber is now = R900VB6B ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
Property Make is now = LENOVO ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
Property Model is now =*** ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
Property Product is now =*** ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
Property UUID is now = 08136381-5318-11CB-8777-F9DA97025E14 ZTIGather 11/5/2014 10:29:13 AM
0 (0x0000)
Property Memory is now = 7851 ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
Property Architecture is now = X86 ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
Property ProcessorSpeed is now = 2701 ZTIGather 11/5/2014 10:29:13 AM
0 (0x0000)
Property CapableArchitecture is now = AMD64 X64 X86 ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
Property IsLaptop is now = True ZTIGather 11/5/2014 10:29:13 AM
0 (0x0000)
Property IsDesktop is now = False ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
Property IsServer is now = False ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
Property IsUEFI is now = False ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
Property IsOnBattery is now = False ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
Property SupportsX86 is now = True ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
Property SupportsX64 is now = True ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
Property SupportsSLAT is now = True ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
Finished getting asset info ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
Getting OS SKU info ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
Unable to determine Windows SKU while in Windows PE. ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
Determining the Disk and Partition Number from the Logical Drive X:\windows ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
Property OriginalArchitecture is now = ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
Getting virtualization info ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
FindFile: The file x86\Microsoft.BDD.Utility.dll could not be found in any standard locations. ZTIGather 11/5/2014 10:29:13 AM
0 (0x0000)
FindFile(...\Microsoft.BDD.Utility.dll) Result : 1 ZTIGather 11/5/2014
10:29:13 AM 0 (0x0000)
RUN: regsvr32.exe /s "" ZTIGather 11/5/2014 10:29:13 AM
0 (0x0000)
FindFile: The file x64\Microsoft.BDD.Utility.dll could not be found in any standard locations. ZTIGather 11/5/2014 10:29:13 AM
0 (0x0000)
FindFile(...\Microsoft.BDD.Utility.dll) Result : 1 ZTIGather 11/5/2014
10:29:13 AM 0 (0x0000)
RUN: regsvr32.exe /s "" ZTIGather 11/5/2014 10:29:13 AM
0 (0x0000)
FAILURE (Err): 429: CreateObject(Microsoft.BDD.Utility) - ActiveX component can't create object ZTIGather
11/5/2014 10:29:13 AM 0 (0x0000)
FAILURE (Err): 424: GetVirtualizationInfo for Gather process - Object required ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
Connection succeeded to MicrosoftVolumeEncryption ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
There are no encrypted drives ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
Property IsBDE is now = False ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
Processing the phase. ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
Determining the INI file to use. ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
Finished determining the INI file to use. ZTIGather 11/5/2014 10:29:13
AM 0 (0x0000) -
Hello there,
I had to edit the errors file in DS 7.0, but after that DS 7.0 is not writing anything into the errors file, although I did not change any permissions. I just opened the file and deleted a few lines, and now error logs are not written anymore.
Any idea how to solve this issue?
Usually restarting slapd does the trick. Worst case, you might need to stop the server, move all files out of the logs directory, and restart. If that doesn't work, then there's probably something wrong with the system or the configuration.
I've never tried it, but the rotate-now function might fix it too. -
Messages not being written to page
Hi,
What could stop messages from being displayed/written? I have changed my page template from a table-based layout to a CSS layout and have wrapped the following #GLOBAL_NOTIFICATION# #SUCCESS_MESSAGE# #NOTIFICATION_MESSAGE# in a div. I intentionally cause an error on my page, but the message does not appear. When I inspect the page source, it confirms that the message was not written to the page. I have an identical application that uses table layouts for the page template, and the same process produces a message there; when I inspect the page source, the message is written in the #GLOBAL_NOTIFICATION# #SUCCESS_MESSAGE# #NOTIFICATION_MESSAGE# position as expected.
Would anybody have any ideas for me to pursue to get to the bottom of this problem?
Thank You
Ben
Scott,
I found what the problem was ...
The page template has a subtemplate region, and the entries for Success Message and Notification were empty.
Ben
Edited by: Benton on Jun 18, 2009 2:09 PM -
Hi,
I'm using the workflow application "Audit" as an activity in my custom workflow and I'm passing the required arguments.
In the workflow trace file, I can see that the Audit application is run using the passed parameters but no record is being created matching that information in the "log" table.
Any ideas/suggestions?
Thanks
Here is the trace for your information:
Resolved reference requesterWSUser = object
Assigning requesterFullName = Test1 Manager1
Action Set Audit Resources List
Result title set to 'Set Audit Resources List'
Evaluating XPRESS
Resolved reference approved = false
Resolved reference auditApps = [AD_Simulated]
Resolved reference auditApps = [AD_Simulated]
Assigning depApps = [AD_Simulated]
Action Audit
Result title set to 'Audit'
Iterating over depApps = [AD_Simulated]
Iteration 0
app = AD_Simulated
Argument op = audit
Argument type = User
Argument status = success
Argument action = View
Argument reason = User Access Recertification
Argument subject = TestManager1
Resolved reference user.waveset.organization = null
Resolved reference app = AD_Simulated
Resolved reference app = AD_Simulated
Argument resource = AD_Simulated
Resolved reference enduserId = testuser4
Argument accountId = testuser4
Resolved reference enduserView.accounts[Lighthouse].firstname = Test4
Resolved reference enduserView.accounts[Lighthouse].lastname = User4
Resolved reference enduserId = testuser4
Resolved reference requesterFullName = Test1 Manager1
Argument error = The access of the user Test4 User4(testuser4) has been recertified by Test1 Manager1
Calling application 'com.waveset.session.WorkflowServices'
Application requested argument op
Application requested argument logResultErrors
Application requested argument action
Application requested argument status
Application requested argument type
Application requested argument subject
Application requested argument name
Application requested argument resource
Application requested argument accountId
Application requested argument error
Application requested argument parameters
Application requested argument attributes
Application requested argument originalAttributes
Application requested argument overflowAttributes
Application requested argument auditableAttributesList
Application requested argument organizations
Step complete 'Audit'
Step inactive 'Display Message'
-------------------------------------------------------------------------
I agree with anokun7. Check to make sure the action you are giving it is a valid one. (See the IDM Workflow Forms and Views PDF and search for "Action Names"; it will give you a list of all the valid actions.) Also, you can add your own attributes to the Audit object using the attributes variable. It expects a map:
<map>
<s>Key</s>
<ref>value</ref>
</map>
Value can be a reference, a string, or however complex you want to make it. Just be aware of what view (if any) is available at the time you call the audit. Hope this helps.
Message was edited by:
dmac28
Oh yeah: the attributes will appear on the audit log reports. Based on what action and type you audited, they will show up on that record. E.g. for a Delete action on type User, that audit record will have a "changes" value containing whatever attributes you passed to the audit object.
Chat logs not being saved?
I went into my chat logs today and found that for the last two weeks, iChat has not been saving any of my chats. I am now missing several important chat logs that I was depending on, and they are not there. I obviously checked to make sure the setting was still set to save them to the correct folder, and nothing has changed; however, it seems to randomly not save the logs I am looking for.
Any ideas?
-Scott
SOLVED!
I even got my missing chat back!
See, every once in a while I go through my iChat logs. As you know, iChat creates a folder with the date, and then puts each chat in that folder inside of the main iChat Logs folder. What I do every once in a while is go through those folders and clean them up. If a chat is worth saving, I take it OUT of the "dated" folder and put it in the "regular" iChat folder, and if the chat is not needed, I delete it along with the empty dated folder.
What happened was, one of logs I was looking for must have accidentally been deleted, as when I opened my Time Machine backup for that date, I found it there!
So, I guess it was just my mistake deleting the file by accident. And why did it seem that it was not saving future chats? I don't know. I guess no one chatted with me in a while, and that is why there were no recent logs in there.
So just wanted to report back that if you think you are missing something, CHECK YOUR TIME MACHINE BACK UP!
Thanks for all the help!
-Scott -
Agent logs not being generated
On a recent installation of 2 x Exchange Server 2013 Cumulative Update 2 (CU2-v2), neither server appears to be generating agent logs. The default path for such logs (C:\Program Files\Microsoft\Exchange Server\v15\TransportRoles\logs\Hub) does not even appear to contain an "AgentLog" folder.
The Get-TransportService cmdlet returns:
AgentLogMaxAge : 7.00:00:00
AgentLogMaxDirectorySize : 250 MB (262,144,000 bytes)
AgentLogMaxFileSize : 10 MB (10,485,760 bytes)
AgentLogPath : C:\Program Files\Microsoft\Exchange Server\v15\TransportRoles\logs\Hub\AgentLog
AgentLogEnabled : True
I have tried modifying these settings, changing the path, manually creating the "AgentLog" folder, restarting the transport service, and nothing seems to make any difference.
If anyone can offer any suggestions / provide any pointers, I would be extremely grateful. We regularly use agent logs when querying mail flow.
Many thanks in advance,
Simon.
Hi,
As Martina mentioned, you should install the Anti-Spam Agents first.
By default, anti-spam features aren't enabled in the Transport service on a Mailbox server. Typically, you only enable the anti-spam features on a Mailbox server if your Exchange organization doesn't do any prior anti-spam filtering before accepting incoming messages.
In addition, if you want to configure anti-spam agent logging, please refer to the following article:
Configure Anti-Spam Agent Logging
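As a sketch of the install step (the script ships in the Exchange Scripts folder; verify the path and script name on your servers before running):

```powershell
# Hedged sketch: enable the anti-spam agents on a Mailbox server,
# then restart the transport service so agent logging can begin.
& "$env:ExchangeInstallPath\Scripts\Install-AntiSpamAgents.ps1"
Restart-Service MSExchangeTransport
```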
Thanks.
Niko Cheng
TechNet Community Support -
"Sidecar file has conflict", keyword changes in LR not being written to JPEGs
I am new to Lightroom and evaluating 3.6 on a trial.
I performed the following test:
1. Imported 3 pictures into Lightroom from my hard drive
2. In Lightroom, I made a keyword change to one photo (removed a keyword).
3. All three photos' Metadata icons show the arrow, indicating that there has been a change. (I only changed the keyword on one photo.)
4. I click the icon for the photo whose keyword I changed.
5. I get the message "The metadata for this photo has been changed in Lightroom. Save the changes to disk?"
6. I click "Yes".
7. The photo's Metadata icon shows an exclamation point. When you hover over the exclamation point, it shows the message "Sidecar has conflict".
I have two questions:
1. My expectation is that when I changed a keyword, or any other metadata, in LR, it will show up on the photo outside of LR. My understanding is that this is a functionality of LR. Am I wrong? (The Help files seem to indicate that this is a reasonable expectation.)
2. I am assuming, having read many posts on the forum, that the error message "Sidecar has conflict" is a bug. Am I right? (My understanding is that JPEGs don't have sidecar files, which just makes this message even more odd.)
I am on Windows 7 Home Premium (upgraded from Vista Home Premium), 32 bit.
Thanks for your help,
--Elisabeth.
Beat, thanks for taking the time to answer.
Your expectation is not quite right:
Any change to metadata will primarily be recorded in your catalog, and nowhere else. Only if you perform "Save Metadata to File" (which can be set to be done automatically in your catalog settings) or "Save the Changes to Disk" after pressing the arrow will the changes be written into your JPEGs (or into *.xmp sidecar files for Raw images) and be visible from outside of LR.
Yes, you are right. What I should have written was: My expectation is that when I change a keyword, or any other metadata in LR, the change will show up outside of LR, after I "Save Metadata to File". This is not happening, instead I get the error message and different keywords in LR than outside.
Where do your originals reside? Could it be you don't have the proper authority to rewrite the JPEGs?
My originals reside on my Public drive: I definitely have rights. In fact, I made many other changes to photos on the same drive after I posted this on the forum.