WCF 4.5 doesn't generate log file
Hi,
Recently I have been using WCF 4.5 on Windows 7 to test a WCF application. When tracing is enabled, the WCF service works fine; however, the application doesn't generate the log files (web_messages.svclog or web_tracelog.svclog).
Here is the configuration from the web.config file:
<system.diagnostics>
<sources>
<source name="System.ServiceModel.MessageLogging" switchValue="Warning,ActivityTracing">
<listeners>
<add type="System.Diagnostics.DefaultTraceListener" name="Default">
<filter type="" />
</add>
<add name="ServiceModelMessageLoggingListener">
<filter type="" />
</add>
</listeners>
</source>
<source propagateActivity="true" name="System.ServiceModel" switchValue="Verbose,ActivityTracing">
<listeners>
<add type="System.Diagnostics.DefaultTraceListener" name="Default">
<filter type="" />
</add>
<add name="ServiceModelTraceListener">
<filter type="" />
</add>
</listeners>
</source>
</sources>
<sharedListeners>
<add initializeData="e:\project\test\wcf_tutorial\webapp_consume_wcf\webapp_consume_wcf\web_messages.svclog"
type="System.Diagnostics.XmlWriterTraceListener, System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"
name="ServiceModelMessageLoggingListener" traceOutputOptions="Timestamp">
<filter type="" />
</add>
<add initializeData="e:\project\test\wcf_tutorial\webapp_consume_wcf\webapp_consume_wcf\web_tracelog.svclog"
type="System.Diagnostics.XmlWriterTraceListener, System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"
name="ServiceModelTraceListener" traceOutputOptions="Timestamp">
<filter type="" />
</add>
</sharedListeners>
<trace autoflush="true" />
</system.diagnostics>
<system.serviceModel>
<serviceHostingEnvironment aspNetCompatibilityEnabled="true"></serviceHostingEnvironment>
<diagnostics wmiProviderEnabled="true" performanceCounters="All">
<messageLogging logEntireMessage="true" logMalformedMessages="true"
logMessagesAtServiceLevel="true" logMessagesAtTransportLevel="true" />
</diagnostics>
</system.serviceModel>
Regards
Hi Paul_PMA,
Based on this MSDN document, trace files are not created unless the log directory already exists. Make sure that the directory for web_messages.svclog and web_tracelog.svclog exists, or specify an alternate logging directory in the listener configuration. Please also make sure that IIS has write permission to the log file path.
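To illustrate the point above, here is a minimal shell sketch (paths are placeholders; on the Windows machine in question the equivalent would be `mkdir` plus granting the IIS application pool identity write access). The XmlWriterTraceListener will not create a missing directory, so the directory must exist and be writable before tracing starts:

```shell
#!/bin/sh
# Sketch only: ensure the trace listener's target directory exists and is
# writable before enabling tracing. LOGDIR is a placeholder for the
# initializeData path in the listener configuration.
LOGDIR="/tmp/wcf_logs"
mkdir -p "$LOGDIR"                      # create the directory if missing
touch "$LOGDIR/web_messages.svclog"     # verify the path is writable
ls "$LOGDIR"
```

If the `touch` fails, the account running the application does not have write permission on that path, which matches the symptom of no .svclog file being produced.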
For more information, please try to refer to the following articles:
#Configuring Message Logging:
https://msdn.microsoft.com/en-us/library/ms730064(v=vs.110).aspx .
#Simple Steps to Enable Tracing in WCF:
http://www.codeproject.com/Articles/420538/Simple-Steps-to-Enable-Tracing-in-WCF .
#How to enable wcf tracing and wcf message logging:
http://dotnetmentors.com/how-to-enable-wcf-tracing-and-wcf-message-loging.aspx .
Best Regards,
Amy Peng
Similar Messages
-
NFS File adapter doesn't generate log file.
Hello!
We have a problem with a File adapter. When the adapter picks up a file, it doesn't archive the file into the archive directory.
We have:
Processing mode: Archive.
Archive directory : /XIcom/INT181_GECAT/LOG
This directory is created correctly.
Can someone help me? Thanks.
Best regards.
Hi,
>>>NFS File adapter doesn't generate log file
Do you mean you are not getting the processed files archived only when the file adapter is set to the NFS file system?
Did you try the same thing by setting the file adapter to FTP?
If you face the same issue with the file adapter set to FTP mode as well, then there is some issue with access to the folders.
Please check this ...
Regards,
Nanda
Message was edited by: Nanda kishore Reddy Narapu Reddy -
How do you generate log files using ActionScript 3?
Is there any way to log activity using AS3 by using a Red5 Media Server?
I came across www.as3commons.org/as3-commons-logging, but from this I have no clue how to access log files or where the log files are generated.
I was wondering if there's a way to generate and store log files on a media server. Say for example, I need to log when the user starts recording a video, when the connection to the media server is successful and so on.
Any help would be greatly appreciated. Thanks.
In FMS, back-end scripts also work, so the log file can be created from back-end scripts.
-
Aperture doesn't generate xml file
Aperture 3.5.1 is supposed to generate an ApertureDatabase.xml file to use images in third-party software. I have the Preference set to generate this file whenever Aperture closes, but my main Aperture library does not generate this file. I tried setting the Preference to Always and that also doesn't work.
My smaller, newer Aperture libraries do generate the xml file as they're supposed to. Only the older, much larger Aperture library does not generate it. This same problem exists on my iMac as well as on a copy of the library on my MacBook Air.
I created a new Aperture library that did generate the xml file. Then I imported the older library into it (took forever) but this library no longer generated the xml file.
Can I get Aperture to do what it's supposed to do?
I was asking the OP; it is usually best to start your own thread because, as this case shows, posting to another's thread can cause confusion.
You are looking at the preferences when the library in question is opened, correct? As I wrote this is one (if not the only) preference that is set on a per library basis. If you look at it while a different library is opened you might not be seeing the correct setting.
You could try the steps shown in Aperture 3: Troubleshooting Basics, that is always a good place to start.
Having said that, and while not having the XML file can be a problem, I don't understand why Hazel would need the XML file to import into Aperture. Aperture is responsible for writing the XML file, and it does that to reflect the state of the library.
If you are looking for a simple way to import into Aperture you should look at Automator. A simple one or two action workflow does it.
regards -
Rman can not able generate log file in linux shell script
Friends,
In shell script we use below command to save processing information.
$ORACLE_HOME/bin/rman "target=/" > ${LOG} <<- EOF
Recent days, it does not work and return message as
./test.sh: line 78: LOG: command not found
./test.sh: line 83: /u01/test/oracle/scripts/crons/logs/testdb_013120141148.log: No such file or directory
find: /u01/test/oracle/scripts/crons/logs: No such file or directory
It seems that RMAN cannot create a log file from the log variable value we passed.
Thanks for the help.
Hi,
It seems someone removed the log directory. Check the log directory path specified in the script, and create it if it does not exist. -
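A hedged sketch of that RMAN fix (directory and file names are illustrative, and the RMAN invocation itself is commented out): recreate the log directory and define LOG with no spaces around `=` before redirecting into it, since both "LOG: command not found" and "No such file or directory" point at the variable assignment and the missing directory:

```shell
#!/bin/sh
# Sketch only: guard against a missing log directory before invoking RMAN.
LOGDIR="/tmp/rman_logs"   # stands in for /u01/test/oracle/scripts/crons/logs
mkdir -p "$LOGDIR"        # recreate it if someone removed it
LOG="$LOGDIR/testdb_$(date +%m%d%Y%H%M).log"   # no spaces around '='
# $ORACLE_HOME/bin/rman "target=/" > "${LOG}" <<- EOF
#     ... rman commands ...
# EOF
: > "$LOG"                # stand-in for the redirected RMAN output
echo "created $LOG"
```

With the directory in place and the assignment written without spaces, the `> ${LOG}` redirection in the original script can succeed.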
Idoc types that generate the log files
Hi All,
Can anybody provide me with the list of IDoc types that generate application log files?
I found that material master IDocs generate log files. Are there IDoc types other than material master that generate log files?
Regards,
Azra.Hi Azra,
You can tell whether an IDoc creates the application log from the function module it uses.
Go to SE37, type in IDOC_INPUT_<message_type> for inbound or IDOC_OUTPUT_<message_type> for outbound, and search for any function module call that starts with 'BAL_'. If you find one, then it creates application log files.
Regards,
Lim... -
Hello All,
When we execute a .sql file in CMD prompt using oracle client, the generated log file will be created in the file execution path.
e.g.,
C:\Users\xxx\Desktop>
But when a .sql file is executed in the Pentaho tool as a transformation, I am unable to get any physical log file as a result of the .sql execution. Is there any default location for the log creation, or doesn't Pentaho create any physical files for logging? Kindly help me.
You can specify the log file location at the command line:
-logfile=Location/log_file.txt
And please note this is a forum for SQL and PL/SQL issues. Please don't post these questions here. -
Can I generate .txt / .log file from Oracle Diagnostic Logging?
Hi all,
on my application I need to limit the size of the application log. I've read this [http://download.oracle.com/docs/cd/B25221_04/web.1013/b14432/logadmin.htm#CHDJHHHB|http://download.oracle.com/docs/cd/B25221_04/web.1013/b14432/logadmin.htm#CHDJHHHB] about using ODL, but the generated file is .xml and I need a .log / .txt file. How can I do this?
thanks in advance.
Regards,
Kahlil
ODL generates log files in XML format only. Using ODL you cannot have plain-text-format log files. You have to decide what is more desired: log rotation (which ODL provides) or text format (which is the default, non-ODL, format). If text format is more desired, then don't enable ODL and write your own shell script to rotate the application log (but that can only be done while OC4J is down).
At the same time, if you are concerned about the readability of the log file (i.e. text format is easier to read than XML format), then you might consider using one of the log viewer tools (the log viewer in EM or the printlogs command line) provided by Oracle. Both of these tools help you view the logs in a much more readable format than just looking at an XML-format log file.
(printlogs utility is under $OH/diagnostic/bin directory. Run "printlogs -help" to read about it).
Hope this helps.
Thanks
Shail -
Rule created to monitor single-line entries in a text .log file does not work
Hi All,
I have this strange issue. I created a script which generates .log file and i have configured a rule to monitor it. Whenever the .log is altered the alert does not come at all in SCOM 2012 R2.
I want this alert to be raised when one specific line in the center is altered from LISTENING to NOT LISTENING.
I have configured it. It triggered a alert for the first time and again it did not trigger at all.
I created this rule and disabled it and overrided the value to true only to the MS acting as the watcher for this log file.
The log file generates in the local drive of the MS itself.
Changed the log watcher to a different server and also mentioned the application data source to a network location when the watcher was changed so it can pull the log accordingly.
The log is generated in the MS itself. Tried using both local location where the log is located as well as converted the same to a network location still didn't help.
C:\Port_checker is the directory where the .log file is located also there is no other log file present only 1.
I also changed parameters such as "Contains", "Wildcard matches", etc., but nothing worked.
The SCOM Action account has Full permissions on all servers over the entire forest itself.
Target used to create this rule is "Windows server operating system"
Can any one help me please.
Gautam.75801
Since you have a script that updates a file line from "LISTENING" to "NOT LISTENING", you might want to try configuring a two-state script unit monitor rather than a rule. Your script would just need to check the content of the log file, say every 5 minutes, generate an alert when it matches "NOT LISTENING", and clear it when it changes back to "LISTENING".
http://www.systemcentercentral.com/wp-content/uploads/2009/04/HOW-TO_2-state_ScriptMonitor.pdf
Cheers,
Martin
Blog:
http://sustaslog.wordpress.com
LOG FILE for batch scripting in MAXL
Hello,
I just wanted to know how to create a LOG FILE for batch scripting.
essmsh E:\Batch\Apps\TOG_DET\Scripts\unload_App.msh
copy e:\batch\apps\tog_det\loadfile\gldetail.otl e:\hyperion\analyticservices\app\tog_det\gldetail /Y
essmsh E:\Batch\Apps\TOG_DET\Scripts\Build_Hier_Data.msh
REM
ECHO OFF
ECHO Loading GL actuals into WFS \ Combined......
E:\HYPERION\common\Perl\5.8.3\bin\MSWin32-x86-multi-thread\PERL.EXE E:\Batch\Apps\WFS.COMBINED\AMLOAD\WFSUATAMLOAD.PLX
E:\HYPERION\common\Perl\5.8.3\bin\MSWin32-x86-multi-thread\PERL.EXE E:\Batch\Apps\WFS.COMBINED\AMLOAD\WFSUATAMLOAD.PLX
Drop object d:\NDM\Data\StampFiles\STAMPLOADBKUP.csv of type outline force;
Alter object d:\NDM\Data\StampFiles\STAMPLOAD_cwoo.csv of type outline rename to d:\NDM\Data\StampFiles\STAMPLOADBKUP.CSV;
SET LogFile=E:\Batch\Apps\TOG_DET\Logs.log
This script does not generate a log file. Can anyone help me with what the problem might be? Even though some of the steps above are not correct, it should at least generate a log file. I need the syntax, whatever it is, to generate a log file.
Regards
Soma
I wanted a log file of the following batch script, regardless of whether the script runs successfully or not.
essmsh E:\Batch\Apps\TOG_DET\Scripts\unload_App.msh
copy e:\batch\apps\tog_det\loadfile\gldetail.otl e:\hyperion\analyticservices\app\tog_det\gldetail /Y
essmsh E:\Batch\Apps\TOG_DET\Scripts\Build_Hier_Data.msh
REM
ECHO OFF
ECHO Loading GL actuals into WFS \ Combined......
E:\HYPERION\common\Perl\5.8.3\bin\MSWin32-x86-multi-thread\PERL.EXE E:\Batch\Apps\WFS.COMBINED\AMLOAD\WFSUATAMLOAD.PLX
Drop object d:\NDM\Data\StampFiles\STAMPLOADBKUP.csv of type outline force;
Alter object d:\NDM\Data\StampFiles\STAMPLOAD_cwoo.csv of type outline rename to d:\NDM\Data\StampFiles\STAMPLOADBKUP.CSV;
What I really want is a log file of the above batch script showing how the scripts are running. I do not care whether they give positive results, but I need to see what is happening in the log file. How will the log file be generated?
Regards
Soma
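One common pattern for this, shown here as a hedged POSIX-shell sketch (the Windows batch equivalent is `command >> %LogFile% 2>&1` after `SET LogFile=...`): define the log file first, then redirect both stdout and stderr of every step into it, so the log records what happened even when a step fails:

```shell
#!/bin/sh
# Sketch: capture each step's stdout and stderr in one log file, whether
# or not the step succeeds. The commands inside the braces are placeholders
# for the essmsh/perl steps in the batch script above.
LOG="/tmp/tog_det_$(date +%Y%m%d).log"
{
    echo "Loading GL actuals into WFS / Combined..."
    # essmsh E:/Batch/Apps/TOG_DET/Scripts/unload_App.msh
    false || echo "step failed, continuing"   # failures are logged, not fatal
} >> "$LOG" 2>&1
wc -l < "$LOG"   # number of lines captured so far
```

The key point is that the `SET LogFile=...` line alone does nothing: every command must actually be redirected into the log file for its output to be recorded.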
Multiple log files using Log4j
Hello,
I want to generate log files based on package structure, e.g. com.temp.test in test.log; I also have an application-level log file, app.log.
My requirement is that what is logged in test.log should not be logged in app.log. This is my log4j.properties file.
# Log4j configuration file.
# Available levels are DEBUG, INFO, WARN, ERROR, FATAL
# Default logger
log4j.rootLogger=DEBUG, PFILE
log4j.logger.com.temp.test=DEBUG,TEST
# PFILE is the primary log file
log4j.appender.PFILE=org.apache.log4j.RollingFileAppender
log4j.appender.PFILE.File=./App.log
log4j.appender.PFILE.MaxFileSize=5120KB
log4j.appender.PFILE.MaxBackupIndex=10
#log4j.appender.PFILE.Threshold=DEBUG
log4j.appender.PFILE.layout=org.apache.log4j.PatternLayout
log4j.appender.PFILE.layout.ConversionPattern=%p %d[%l][%C] %m%n
#log4j.appender.PFILE.layout.ConversionPattern=%p %d %m%n
log4j.appender.TEST=org.apache.log4j.RollingFileAppender
log4j.appender.TEST.File=./test.log
log4j.appender.TEST.MaxFileSize=5120KB
log4j.appender.TEST.MaxBackupIndex=10
log4j.appender.TEST.layout=org.apache.log4j.PatternLayout
log4j.appender.TEST.layout.ConversionPattern=%p %d[%l][%C] %m%n
Can you help me!!!
You have to configure the com.temp.test logger so that it does not send its output on to the root logger.
For this, you can use the additivity flag.
# Default logger
log4j.rootLogger=DEBUG, PFILE
log4j.additivity.com.temp.test=false
log4j.logger.com.temp.test=DEBUG,TEST
The rest of the file remains the same. -
Hi everyone,
One difficult question; I need help.
Environment: WLS8.1sp2 + WLI8.1sp2 + ORACLE9i + solaris9
When I started the archiver manually, in just a short while the WLI system generated about 40,000 JMS messages in wli.internal.worklist.timer.queue and consumed a great deal of the database server's resources. I had to stop these archive processes immediately to keep other applications using the same database running normally. I did so by following these steps:
(1) in the WLI console, delete wli.internal.worklist.timer.queue;
(2) in the WLI console, reconstruct wli.internal.worklist.timer.queue;
(3) restart the WLI server.
After the server was restarted, the WLI server output endless, repeated exceptions to the log file; the typical exception was:
####<May 8, 2005 3:08:26 PM CST> <Info> <EJB> <app01> <jcwliserver> <ExecuteThread: '54' for queue:
'weblogic.kernel.Default'> <<anonymous>> <BEA1-54B26B551CC1A8856F80> <BEA-010049> <EJB Exception in method: remove:
java.sql.SQLException: Transaction rolled back: Unknown reason.
java.sql.SQLException: Transaction rolled back: Unknown reason
at weblogic.jdbc.jta.DataSource.enlist(DataSource.java:1299)
at weblogic.jdbc.jta.DataSource.refreshXAConnAndEnlist(DataSource.java:1250)
at weblogic.jdbc.jta.DataSource.getConnection(DataSource.java:385)
at weblogic.jdbc.jta.DataSource.connect(DataSource.java:343)
at weblogic.jdbc.common.internal.RmiDataSource.getConnection(RmiDataSource.java:305)
at weblogic.ejb20.cmp.rdbms.RDBMSPersistenceManager.getConnection(RDBMSPersistenceManager.java:2247)
at com.bea.wli.worklist.beans.entity.ListenerBean_1nsp14__WebLogic_CMP_RDBMS.__WL_loadGroup0(ListenerBean_1nsp14__WebLogic_CMP_RDBMS.java:1055)
at com.bea.wli.worklist.beans.entity.ListenerBean_1nsp14__WebLogic_CMP_RDBMS.__WL_setTaskBean_listeners(ListenerBean_1nsp14__WebLogic_CMP_RDBMS.java:596)
at com.bea.wli.worklist.beans.entity.ListenerBean_1nsp14__WebLogic_CMP_RDBMS.__WL_setTaskBean_listeners(ListenerBean_1nsp14__WebLogic_CMP_RDBMS.java:584)
at com.bea.wli.worklist.beans.entity.ListenerBean_1nsp14__WebLogic_CMP_RDBMS.ejbRemove(ListenerBean_1nsp14__WebLogic_CMP_RDBMS.java:2423)
at weblogic.ejb20.manager.DBManager.remove(DBManager.java:1318)
at weblogic.ejb20.internal.EntityEJBLocalHome.remove(EntityEJBLocalHome.java:214)
at com.bea.wli.worklist.beans.entity.ListenerBean_1nsp14_LocalHomeImpl.remove(ListenerBean_1nsp14_LocalHomeImpl.java:131)
at com.bea.wli.worklist.beans.session.RemoteWorklistManagerBean.removeTaskListeners(RemoteWorklistManagerBean.java:3001)
at com.bea.wli.worklist.beans.session.RemoteWorklistManagerBean_us8t1c_EOImpl.removeTaskListeners(RemoteWorklistManagerBean_us8t1c_EOImpl.java:698)
at com.bea.wli.worklist.timer.WorklistTimerMDB.processListenerToRemove(WorklistTimerMDB.java:102)
at com.bea.wli.worklist.timer.WorklistTimerMDB.onMessage(WorklistTimerMDB.java:61)
at weblogic.ejb20.internal.MDListener.execute(MDListener.java:382)
at weblogic.ejb20.internal.MDListener.transactionalOnMessage(MDListener.java:316)
at weblogic.ejb20.internal.MDListener.onMessage(MDListener.java:281)
at weblogic.jms.client.JMSSession.onMessage(JMSSession.java:2596)
at weblogic.jms.client.JMSSession.execute(JMSSession.java:2516)
at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:197)
at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:170)
>
####<May 8, 2005 3:08:26 PM CST> <Info> <EJB> <app01> <jcwliserver> <ExecuteThread: '96' for queue:
'weblogic.kernel.Default'> <<anonymous>> <BEA1-54B96B551CC1A8856F80> <BEA-010049> <EJB Exception in method: remove:
javax.ejb.NoSuchEntityException: [EJB:010140]Bean with primary key: '153.22.52.28-17343c7.10243c3c6ec.a51' not found..
javax.ejb.NoSuchEntityException: [EJB:010140]Bean with primary key: '153.22.52.28-17343c7.10243c3c6ec.a51' not found.
at com.bea.wli.worklist.beans.entity.ListenerBean_1nsp14__WebLogic_CMP_RDBMS.__WL_loadGroup0(ListenerBean_1nsp14__WebLogic_CMP_RDBMS.java:1165)
at com.bea.wli.worklist.beans.entity.ListenerBean_1nsp14__WebLogic_CMP_RDBMS.__WL_setTaskBean_listeners(ListenerBean_1nsp14__WebLogic_CMP_RDBMS.java:596)
at com.bea.wli.worklist.beans.entity.ListenerBean_1nsp14__WebLogic_CMP_RDBMS.__WL_setTaskBean_listeners(ListenerBean_1nsp14__WebLogic_CMP_RDBMS.java:584)
at com.bea.wli.worklist.beans.entity.ListenerBean_1nsp14__WebLogic_CMP_RDBMS.ejbRemove(ListenerBean_1nsp14__WebLogic_CMP_RDBMS.java:2423)
at weblogic.ejb20.manager.DBManager.remove(DBManager.java:1318)
at weblogic.ejb20.internal.EntityEJBLocalHome.remove(EntityEJBLocalHome.java:214)
at com.bea.wli.worklist.beans.entity.ListenerBean_1nsp14_LocalHomeImpl.remove(ListenerBean_1nsp14_LocalHomeImpl.java:131)
at com.bea.wli.worklist.beans.session.RemoteWorklistManagerBean.removeTaskListeners(RemoteWorklistManagerBean.java:3001)
at com.bea.wli.worklist.beans.session.RemoteWorklistManagerBean_us8t1c_EOImpl.removeTaskListeners(RemoteWorklistManagerBean_us8t1c_EOImpl.java:698)
at com.bea.wli.worklist.timer.WorklistTimerMDB.processListenerToRemove(WorklistTimerMDB.java:102)
at com.bea.wli.worklist.timer.WorklistTimerMDB.onMessage(WorklistTimerMDB.java:61)
at weblogic.ejb20.internal.MDListener.execute(MDListener.java:382)
at weblogic.ejb20.internal.MDListener.transactionalOnMessage(MDListener.java:316)
at weblogic.ejb20.internal.MDListener.onMessage(MDListener.java:281)
at weblogic.jms.client.JMSSession.onMessage(JMSSession.java:2596)
at weblogic.jms.client.JMSSession.execute(JMSSession.java:2516)
at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:197)
at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:170)
>
The WLI server generated log output very quickly: it could write 1 MB of log per second, and all the logged information was similar to the <BEA-010049> exception mentioned above. A BEA support engineer suggested I stop the archive completely; I did so, but the server still output the log like crazy as before, and the normal log information was completely overridden by the <BEA-010049> exception.
I checked the EntityEJBs in the WLI console (Mywlidomain > Applications > WLI System EJBs > WLI Worklist Persistence) and found the following in the statistics table:
ListenerBean: Pool miss ratio = 99.67%, transaction rollback ratio = 99.90%, Destroy Bean Ratio = 99.48% (see attachment)
WorklistTimerMDB: transaction rollback ratio = 99.97%
It seems ListenerBean worked incorrectly. I searched support.bea.com and found one example, also about a server outputting an endless log file; the author solved the problem by changing the bean's transaction-attribute from 'Required' to 'RequiresNew', though he didn't know why it works. I tried this method by changing ListenerBean's transaction-attribute from 'Required' to 'RequiresNew'.
$weblogic_home/integration/lib/wli-ejbs.ear/ejb-jar-generic.xml:
<ejb-name>CommentBean</ejb-name>
<method-name>*</method-name>
</method>
<trans-attribute>Required</trans-attribute>
</container-transaction>
<container-transaction>
<method>
<ejb-name>ListenerBean</ejb-name>
<method-name>*</method-name>
</method>
<trans-attribute>RequiresNew</trans-attribute> <!-- the default value is Required; I modified it to RequiresNew -->
</container-transaction>
<container-transaction>
It really works; the log file output returned to normal. But there are still some problems:
(1) this exception still exists:
javax.ejb.NoSuchEntityException: [EJB:010140]Bean with primary key: '153.22.52.28-17343c7.10243c3c6ec.a51' not found.
(2) is this method safe? (Does modifying ListenerBean's transaction-attribute impact other parts of the WLI system?)
(3) after changing the transaction attribute, if I turn on archiving again, the server outputs an endless exception:
####<Jun 1, 2005 5:14:58 PM CST> <Info> <EJB> <app01> <jcwliserver> <ExecuteThread: '63' for queue:
'weblogic.kernel.Default'> <<anonymous>> <BEA1-2F43890B86B0A8856F80> <BEA-010036> <Exception from ejbStore:
java.sql.SQLException: XA error: XAER_RMERR : A resource manager error has occured in the transaction branch start()
failed on resource 'weblogic.jdbc.jta.DataSource': XAER_RMERR : A resource manager error has occured in the transaction
branch
oracle.jdbc.xa.OracleXAException
at oracle.jdbc.xa.OracleXAResource.checkError(OracleXAResource.java:1160)
at oracle.jdbc.xa.client.OracleXAResource.start(OracleXAResource.java:311)
at weblogic.jdbc.wrapper.VendorXAResource.start(VendorXAResource.java:50)
at weblogic.jdbc.jta.DataSource.start(DataSource.java:617)
at weblogic.transaction.internal.XAServerResourceInfo.start(XAServerResourceInfo.java:1075)
at weblogic.transaction.internal.XAServerResourceInfo.xaStart(XAServerResourceInfo.java:1007)
at weblogic.transaction.internal.XAServerResourceInfo.enlist(XAServerResourceInfo.java:218)
at weblogic.transaction.internal.ServerTransactionImpl.enlistResource(ServerTransactionImpl.java:419)
at weblogic.jdbc.jta.DataSource.enlist(DataSource.java:1287)
at weblogic.jdbc.jta.DataSource.refreshXAConnAndEnlist(DataSource.java:1250)
at weblogic.jdbc.jta.DataSource.getConnection(DataSource.java:385)
at weblogic.jdbc.jta.DataSource.connect(DataSource.java:343)
at weblogic.jdbc.common.internal.RmiDataSource.getConnection(RmiDataSource.java:305)
at weblogic.ejb20.cmp.rdbms.RDBMSPersistenceManager.getConnection(RDBMSPersistenceManager.java:2247)
at com.bea.wli.worklist.beans.entity.TaskBean_9fxazu__WebLogic_CMP_RDBMS.__WL_store(TaskBean_9fxazu__WebLogic_CMP_RDBMS.java:3636)
at com.bea.wli.worklist.beans.entity.TaskBean_9fxazu__WebLogic_CMP_RDBMS.ejbStore(TaskBean_9fxazu__WebLogic_CMP_RDBMS.java:3548)
at weblogic.ejb20.manager.DBManager.beforeCompletion(DBManager.java:927)
at weblogic.ejb20.internal.TxManager$TxListener.beforeCompletion(TxManager.java:745)
at weblogic.transaction.internal.ServerSCInfo.callBeforeCompletions(ServerSCInfo.java:1010)
at weblogic.transaction.internal.ServerSCInfo.startPrePrepareAndChain(ServerSCInfo.java:115)
at weblogic.transaction.internal.ServerTransactionImpl.localPrePrepareAndChain(ServerTransactionImpl.java:1142)
at weblogic.transaction.internal.ServerTransactionImpl.globalPrePrepare(ServerTransactionImpl.java:1868)
at weblogic.transaction.internal.ServerTransactionImpl.internalCommit(ServerTransactionImpl.java:250)
at weblogic.transaction.internal.ServerTransactionImpl.commit(ServerTransactionImpl.java:221)
at weblogic.ejb20.internal.MDListener.execute(MDListener.java:412)
at weblogic.ejb20.internal.MDListener.transactionalOnMessage(MDListener.java:316)
at weblogic.ejb20.internal.MDListener.onMessage(MDListener.java:281)
at weblogic.jms.client.JMSSession.onMessage(JMSSession.java:2596)
at weblogic.jms.client.JMSSession.execute(JMSSession.java:2516)
at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:197)
at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:170)
java.sql.SQLException: XA error: XAER_RMERR : A resource manager error has occured in the transaction branch start()
failed on resource 'weblogic.jdbc.jta.DataSource': XAER_RMERR : A resource manager error has occured in the transaction
branch
oracle.jdbc.xa.OracleXAException
at oracle.jdbc.xa.OracleXAResource.checkError(OracleXAResource.java:1160)
at oracle.jdbc.xa.client.OracleXAResource.start(OracleXAResource.java:311)
at weblogic.jdbc.wrapper.VendorXAResource.start(VendorXAResource.java:50)
at weblogic.jdbc.jta.DataSource.start(DataSource.java:617)
at weblogic.transaction.internal.XAServerResourceInfo.start(XAServerResourceInfo.java:1075)
at weblogic.transaction.internal.XAServerResourceInfo.xaStart(XAServerResourceInfo.java:1007)
at weblogic.transaction.internal.XAServerResourceInfo.enlist(XAServerResourceInfo.java:218)
at weblogic.transaction.internal.ServerTransactionImpl.enlistResource(ServerTransactionImpl.java:419)
at weblogic.jdbc.jta.DataSource.enlist(DataSource.java:1287)
at weblogic.jdbc.jta.DataSource.refreshXAConnAndEnlist(DataSource.java:1250)
at weblogic.jdbc.jta.DataSource.getConnection(DataSource.java:385)
at weblogic.jdbc.jta.DataSource.connect(DataSource.java:343)
at weblogic.jdbc.common.internal.RmiDataSource.getConnection(RmiDataSource.java:305)
at weblogic.ejb20.cmp.rdbms.RDBMSPersistenceManager.getConnection(RDBMSPersistenceManager.java:2247)
at com.bea.wli.worklist.beans.entity.TaskBean_9fxazu__WebLogic_CMP_RDBMS.__WL_store(TaskBean_9fxazu__WebLogic_CMP_RDBMS.java:3636)
at com.bea.wli.worklist.beans.entity.TaskBean_9fxazu__WebLogic_CMP_RDBMS.ejbStore(TaskBean_9fxazu__WebLogic_CMP_RDBMS.java:3548)
at weblogic.ejb20.manager.DBManager.beforeCompletion(DBManager.java:927)
at weblogic.ejb20.internal.TxManager$TxListener.beforeCompletion(TxManager.java:745)
at weblogic.transaction.internal.ServerSCInfo.callBeforeCompletions(ServerSCInfo.java:1010)
at weblogic.transaction.internal.ServerSCInfo.startPrePrepareAndChain(ServerSCInfo.java:115)
at weblogic.transaction.internal.ServerTransactionImpl.localPrePrepareAndChain(ServerTransactionImpl.java:1142)
at weblogic.transaction.internal.ServerTransactionImpl.globalPrePrepare(ServerTransactionImpl.java:1868)
at weblogic.transaction.internal.ServerTransactionImpl.internalCommit(ServerTransactionImpl.java:250)
at weblogic.transaction.internal.ServerTransactionImpl.commit(ServerTransactionImpl.java:221)
at weblogic.ejb20.internal.MDListener.execute(MDListener.java:412)
at weblogic.ejb20.internal.MDListener.transactionalOnMessage(MDListener.java:316)
at weblogic.ejb20.internal.MDListener.onMessage(MDListener.java:281)
at weblogic.jms.client.JMSSession.onMessage(JMSSession.java:2596)
at weblogic.jms.client.JMSSession.execute(JMSSession.java:2516)
at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:197)
at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:170)
at weblogic.jdbc.jta.DataSource.enlist(DataSource.java:1292)
at weblogic.jdbc.jta.DataSource.refreshXAConnAndEnlist(DataSource.java:1250)
at weblogic.jdbc.jta.DataSource.getConnection(DataSource.java:385)
at weblogic.jdbc.jta.DataSource.connect(DataSource.java:343)
at weblogic.jdbc.common.internal.RmiDataSource.getConnection(RmiDataSource.java:305)
at weblogic.ejb20.cmp.rdbms.RDBMSPersistenceManager.getConnection(RDBMSPersistenceManager.java:2247)
at com.bea.wli.worklist.beans.entity.TaskBean_9fxazu__WebLogic_CMP_RDBMS.__WL_store(TaskBean_9fxazu__WebLogic_CMP_RDBMS.java:3636)
at com.bea.wli.worklist.beans.entity.TaskBean_9fxazu__WebLogic_CMP_RDBMS.ejbStore(TaskBean_9fxazu__WebLogic_CMP_RDBMS.java:3548)
at weblogic.ejb20.manager.DBManager.beforeCompletion(DBManager.java:927)
at weblogic.ejb20.internal.TxManager$TxListener.beforeCompletion(TxManager.java:745)
at weblogic.transaction.internal.ServerSCInfo.callBeforeCompletions(ServerSCInfo.java:1010)
at weblogic.transaction.internal.ServerSCInfo.startPrePrepareAndChain(ServerSCInfo.java:115)
at weblogic.transaction.internal.ServerTransactionImpl.localPrePrepareAndChain(ServerTransactionImpl.java:1142)
at weblogic.transaction.internal.ServerTransactionImpl.globalPrePrepare(ServerTransactionImpl.java:1868)
at weblogic.transaction.internal.ServerTransactionImpl.internalCommit(ServerTransactionImpl.java:250)
at weblogic.transaction.internal.ServerTransactionImpl.commit(ServerTransactionImpl.java:221)
at weblogic.ejb20.internal.MDListener.execute(MDListener.java:412)
at weblogic.ejb20.internal.MDListener.transactionalOnMessage(MDListener.java:316)
at weblogic.ejb20.internal.MDListener.onMessage(MDListener.java:281)
at weblogic.jms.client.JMSSession.onMessage(JMSSession.java:2596)
at weblogic.jms.client.JMSSession.execute(JMSSession.java:2516)
at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:197)
at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:170)
>
How can I solve these problems? Any suggestion is welcome.
Thanks in advance.
Great Lou
Back up all data to at least two different storage devices, if you haven't already done so. The backups can be made with Time Machine or with a mirroring tool such as Carbon Copy Cloner. Preferably both.
Boot into Recovery (command-R at startup), launch Disk Utility, and erase the startup volume with the default options. This operation will destroy all data on the volume, so you had better be sure of your backups. Quit Disk Utility and install OS X. When you reboot, you'll be prompted to go through the initial setup process. That's when you transfer the data from one of your backups. For details of how this works, see here:
Using Setup Assistant
Transfer only "Users" and "Settings" – not "Applications" or "Other files." Don't transfer the Guest account, if it was enabled on the old system. Test. If the problem is still there, you have a hardware fault. Take the machine to an Apple Store for diagnosis.
If the problem is resolved, reinstall your third-party software cautiously. Self-contained applications that install into the Applications folder by drag-and-drop or download from the App Store are safe. Anything that comes packaged as an installer or that prompts for an administrator password is suspect, and you must test thoroughly after reinstalling each such item to make sure you haven't restored the problem.
Note: You need an always-on Ethernet or Wi-Fi connection to the Internet to use Recovery. It won’t work with USB or PPPoE modems, or with proxy servers, or with networks that require a certificate for authentication. -
Why is the transaction log file not truncated even though the database uses the simple recovery model?
My database uses the simple recovery model, and when I view the free space in the log file it shows 99%. Why doesn't my log file automatically truncate the committed data to free space in the .ldf file? When I shrink it, it does shrink. Please advise.
mayooran99
If log records were never deleted (truncated) from the transaction log, it would not show 99% free. Under the simple recovery model, log truncation automatically frees space in the logical log for reuse by the transaction log, and that is what you are seeing. Truncation does not change the file size; it is more like log clearing, marking parts of the log as free for reuse.
As you said, "When I shrink it does shrink," so I don't see any issue here. Log truncation and shrinking the file are two different things.
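The distinction is easier to see in a toy model (a conceptual sketch only, not SQL Server internals; the class and numbers are invented for illustration):

```python
class ToyLog:
    """Toy model of a transaction log file.

    'size' is the physical .ldf file size; 'used' is the active portion.
    Truncation reduces 'used' (space becomes reusable inside the file)
    but never touches 'size'; only a shrink changes 'size'.
    """
    def __init__(self, size):
        self.size = size   # physical file size on disk
        self.used = size   # active (not yet truncated) log records

    def truncate(self):
        # Simple recovery: committed records are cleared at checkpoint.
        # Space is freed for reuse *inside* the file; the file stays the same size.
        self.used = 0

    def shrink(self, target):
        # DBCC SHRINKFILE analogue: release unused space back to the OS,
        # but never below what the active log still needs.
        self.size = max(target, self.used)

log = ToyLog(size=1000)
log.truncate()
print(log.size, log.used)   # 1000 0 -> file still 1000, ~100% free inside
log.shrink(100)
print(log.size, log.used)   # 100 0  -> only the shrink changed the file size
```

This mirrors what the original poster observed: after truncation the log reports 99% free, yet the .ldf on disk stays the same size until an explicit shrink.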
Please read the link below for an explanation of transaction log truncate vs. shrink:
http://blog.sqlxdetails.com/transaction-log-truncate-why-it-didnt-shrink-my-log/ -
Problem specifying SQL Loader Log file destination using EM
Good evening,
I am following the example given in the 2 Day DBA document chapter 8 section 16.
In step 5 of 7, EM does not allow me to specify the destination of the SQL Loader log file to be on a mapped network drive.
The question: Does SQL Loader have a limitation that I am not aware of, that prevents placing the log file on a network share or am I getting this error because of something else I am inadvertently doing wrong ?
Note: I have placed the DDL, load file data and steps I follow in EM at the bottom of this post to facilitate reproducing the problem *(drive Z is a mapped drive)*.
Thank you for your help,
John.
DDL (generated using SQL Developer; you may want to reduce the allocated space)
CREATE TABLE "NICK"."PURCHASE_ORDERS"
(
"PO_NUMBER" NUMBER NOT NULL ENABLE,
"PO_DESCRIPTION" VARCHAR2(200 BYTE),
"PO_DATE" DATE NOT NULL ENABLE,
"PO_VENDOR" NUMBER NOT NULL ENABLE,
"PO_DATE_RECEIVED" DATE,
PRIMARY KEY ("PO_NUMBER") USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS NOCOMPRESS LOGGING TABLESPACE "USERS" ENABLE
)
SEGMENT CREATION DEFERRED PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING STORAGE
(INITIAL 67108864)
TABLESPACE "USERS" ;
Load.dat file contents
1, Office Equipment, 25-MAY-2006, 1201, 13-JUN-2006
2, Computer System, 18-JUN-2006, 1201, 27-JUN-2006
3, Travel Expense, 26-JUN-2006, 1340, 11-JUL-2006
Steps I am carrying out in EM
log in, select data movement -> Load Data from User Files
Automatically generate control file
(enter host credentials that work on your machine)
continue
Step 1 of 7 ->
Data file is located on your browser machine
"Z:\Documentation\Oracle\2DayDBA\Scripts\Load.dat"
click next
step 2 of 7 ->
Table Name
nick.purchase_orders
click next
step 3 of 7 ->
click next
step 4 of 7 ->
click next
step 5 of 7 ->
Generate log file where logging information is to be stored
Z:\Documentation\Oracle\2DayDBA\Scripts\Load.LOG
Validation Error
Examine and correct the following errors, then retry the operation:
LogFile - The directory does not exist.
Hi John,
I didn't get any error when I tried the same thing you did.
My Oracle version is 10.2.0.1 on Windows XP. Here is what I did, and it worked.
1. I created a table in the scott schema:
SCOTT@orcl> CREATE TABLE "PURCHASE_ORDERS"
2 (
3 "PO_NUMBER" NUMBER NOT NULL ENABLE,
4 "PO_DESCRIPTION" VARCHAR2(200 BYTE),
5 "PO_DATE" DATE NOT NULL ENABLE,
6 "PO_VENDOR" NUMBER NOT NULL ENABLE,
7 "PO_DATE_RECEIVED" DATE,
8 PRIMARY KEY ("PO_NUMBER") USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS NOCOMPRESS LOGGING TABLESPACE "USERS" ENABLE
9 )
10 TABLESPACE "USERS";
Table created.
I logged into EM: Maintenance --> Data Movement --> Load Data from User Files --> My Host Credentials.
Here there are 3 text boxes in total:
1. Server Data File: C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\USERS01.DBF
2. Data File is Located on Your Browser Machine: z:\load.dat <--- Here z:\ is another machine's shared folder; I selected this option (radio button) and created the same load.dat you mentioned.
3. Temporary File Location: C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\ <--- I didn't enter anything.
Step 2 of 7 Table Name : scott.PURCHASE_ORDERS
Step 3 of 7 I just clicked Next
Step 4 of 7 I just clicked Next
Step 5 of 7 I just clicked Next
Step 6 of 7 I just clicked Next
Step 7 of 7 Here it is Control File Contents:
LOAD DATA
APPEND
INTO TABLE scott.PURCHASE_ORDERS
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(
PO_NUMBER INTEGER EXTERNAL,
PO_DESCRIPTION CHAR,
PO_DATE DATE,
PO_VENDOR INTEGER EXTERNAL,
PO_DATE_RECEIVED DATE
)
And I just clicked Submit Job.
Now I got all 3 rows in purchase_orders:
SCOTT@orcl> select count(*) from purchase_orders;
COUNT(*)
3
So there is no bug; it worked. Please retry, and post back if you still get an error or issue.
HTH
Girish Sharma -
Streaming log file analyzer?
Hi there,
We host several flv files with Akamai and recently enabled
the log file delivery service. So, we now have Akamai-generated log
files in W3C format. I was assuming I could use WebTrends to
analyze these, but after looking at them briefly, it shows
different events like play, stop, seek, etc., and I don't know that
WebTrends would be able to process all of that.
Our most basic requirement is to see how many times each
video was viewed. If we could get more detailed analysis, like
video X gets viewed on average for 2:00, but video Y only gets
viewed for 20 seconds, that would be great as well.
Does anyone have any suggestions for the best software to
analyze these files?
Thanks,
Matt
Immediate AI:
0. Check the log file autogrowth settings too, verify they are practical, and check whether the disk still has free space.
1. If the disk holding the log file is full, add a log file on another disk (via the database property page) where you plan to keep log files, in case you can't afford to take the database down. Once done, you can truncate data out of the log file and remove the extra file if this was a one-time issue. If it happens now and then, review capacity.
2. Consider shrinking the log files only when no other backup is running and no maintenance job (rebuild/reorg indexes, update stats) is executing, as these will block the shrink.
If the database is small and copying files from prod to DR is not latency-prone, and the shrink is still not happening, you can try changing the recovery model, shrinking, and then reconfiguring log shipping after reverting the recovery model.
3. You can also check whether someone mistakenly placed old files on the disk and forgot to remove them, causing the disk-full issue.
4. As a permanent solution, monitor the environment for capacity and allocate sufficient space for the log file disks. Also consider tuning the log backup frequency from the default to suit your environment.
Santosh Singh
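Coming back to the original question about the Akamai W3C logs: even before choosing an analytics product, the most basic requirement (views per video) can be extracted with a small script. A sketch, with the caveat that the field names `x-event` and `x-sname` are assumptions; check the `#Fields:` directive at the top of your actual log files and adjust accordingly:

```python
from collections import Counter

def count_plays(lines):
    """Count 'play' events per video stream in a W3C extended log.

    Column names are read from the log's own '#Fields:' directive.
    The event-type field ('x-event') and stream-name field ('x-sname')
    are assumptions; match them to your Akamai log's #Fields header.
    """
    fields = []
    plays = Counter()
    for line in lines:
        line = line.strip()
        if line.startswith("#Fields:"):
            fields = line.split()[1:]   # e.g. ['date', 'time', 'x-event', 'x-sname']
            continue
        if not line or line.startswith("#"):
            continue                    # skip other directives and blank lines
        values = dict(zip(fields, line.split()))
        if values.get("x-event") == "play":
            plays[values.get("x-sname", "?")] += 1
    return plays

# Hypothetical sample in the assumed format:
sample = [
    "#Fields: date time x-event x-sname",
    "2008-01-01 10:00:00 play intro.flv",
    "2008-01-01 10:02:00 stop intro.flv",
    "2008-01-01 10:05:00 play promo.flv",
    "2008-01-01 10:06:00 play intro.flv",
]
print(count_plays(sample))  # Counter({'intro.flv': 2, 'promo.flv': 1})
```

The same loop could accumulate play-to-stop durations per stream to get the average-view-time numbers mentioned above, though a dedicated streaming analytics tool will handle seeks and aborted sessions more robustly.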