Writing to log files
Hello,
I want to use my own simple logging system in my web application, without log4j.
public class AppLoger {
    // NOTE: "session" must be in scope here (e.g. passed in as a parameter);
    // as written, a static method has no access to an HttpSession.
    public static void writeMessage(String msg) {
        try {
            String date = new Date().toString();
            String basePath = session.getServletContext().getRealPath("/");
            String path = basePath + java.io.File.separator + "logFiles" + java.io.File.separator;
            String fileName = path + "logs.txt";
            BufferedWriter out = new BufferedWriter(new FileWriter(fileName, true));
            out.write("[" + date + "] " + msg + "\n");
            out.flush();
            out.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
In other classes:
try {
    // ...
} catch (Exception e) {
    AppLoger.writeMessage("here is my message");
}
Locally on my home PC it works fine and the logs are written to logs.txt, but after hosting the application the logs can no longer be written; logs.txt stays empty after hosting. What could the problem be? Regards.
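One way to narrow this down is to log what path the application actually resolves after hosting, and whether it is writable: some hosts deploy the WAR unexploded, in which case getRealPath("/") returns null, or the logFiles directory may simply not exist or not be writable for the server user. A small self-contained sketch of such a check (the directory name mirrors the one above; the class and method are hypothetical):

```java
import java.io.File;

public class LogPathCheck {
    // Returns a human-readable diagnosis of why writing to basePath/logFiles
    // might fail; "ok" if nothing obvious is wrong.
    static String diagnose(String basePath) {
        if (basePath == null) {
            // ServletContext.getRealPath("/") can return null on some hosts
            return "getRealPath returned null (WAR not exploded to disk?)";
        }
        File dir = new File(basePath, "logFiles");
        if (!dir.exists()) {
            return "log directory does not exist: " + dir.getAbsolutePath();
        }
        if (!dir.canWrite()) {
            return "no write permission on: " + dir.getAbsolutePath();
        }
        return "ok";
    }
}
```

Logging the result of this check once at startup usually reveals immediately whether the problem is a null path, a missing directory, or file permissions on the host.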
So, to use log4j in the web application, I did the following:
1. I put log4j-1.2.8.jar in WEB-INF/lib.
2. In WEB-INF/classes I put log4j.properties:
# set the level of the root logger to DEBUG and set its appender as X
log4j.rootLogger=DEBUG, X
# set the appender named X to be a console appender
log4j.appender.X=org.apache.log4j.ConsoleAppender
# set the layout for the appender X
log4j.appender.X.layout=org.apache.log4j.PatternLayout
log4j.appender.X.layout.ConversionPattern=%m%n
# define the appender named FILE
log4j.appender.FILE=org.apache.log4j.FileAppender
log4j.appender.FILE.File=${user.home}/myLogs
3. I created myLogs as a text file.
Now:
1. Is this configuration OK?
2. How do I write messages to myLogs? For example:
try {
    // ...
} catch (Exception e) {
    // your help
}
Regards.
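Two notes on the configuration above: the FILE appender is defined but never attached to a logger, so nothing will reach ${user.home}/myLogs until it is added to the root logger (e.g. log4j.rootLogger=DEBUG, X, FILE), and FileAppender creates the file itself, so step 3 is unnecessary. With that in place, messages are written through a Logger instance; a minimal log4j 1.2 sketch (the class name is just an example):

```java
import org.apache.log4j.Logger;

public class MyAction {
    // The usual convention: one static logger per class
    private static final Logger log = Logger.getLogger(MyAction.class);

    public void doWork() {
        try {
            log.info("doWork started");
            // ... business logic ...
        } catch (Exception e) {
            // logs the message together with the stack trace
            log.error("doWork failed", e);
        }
    }
}
```

Each message goes to every appender attached to the root logger, so the same call writes to both the console and the file.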
Similar Messages
-
Node.js loss of permission to write/create log files
We have been operating Node.js as a worker role cloud service. To track server activity, we write log files (via log4js) to C:\logs.
Originally the logging was configured with size-based roll-over, e.g. a new file every 20 MB. I noticed on some servers the sequencing was uneven:
socket.log <-- current active file
socket.log.1
socket.log.3
socket.log.5
socket.log.7
it should be
socket.log.1
socket.log.2
socket.log.3
socket.log.4
Wherever there was an uneven sequence, I realised the beginning of each file revealed that the Node process had been restarted. The Windows Azure event log further indicated that the worker role hosting mechanism had found node.exe to have terminated abruptly.
With no other information as to what exactly was happening, I thought there was some fault in log4js's roll-over implementation (updating to the latest version did not help). I subsequently switched to date-based roll-over mode, saw that roll-over happened every midnight, and was happy with it.
However, some weeks later I realised the roll-over was (not always, but pretty predictably) only happening every alternate midnight:
socket.log-2014-06-05
socket.log-2014-06-07
socket.log-2014-06-09
And each file again revealed that at the midnight where the roll-over did not happen, node.exe had crashed again. Additional logging on uncaughtException and exit events showed nothing, which seems to suggest node.exe was killed by external influence (e.g. a process kill), but it is unfathomable that anything in the OS would want to kill node.exe.
Additionally, having two instances in the cloud service, we observed both node.exe processes crashing within minutes of each other. Always. However, if the two server instances had been brought up on different days, then the "schedule" for crashing would be offset by the difference in the instance launch dates.
Unable to trap more details of what was going on, we tried a different logging library, winston. winston has the additional feature of logging uncaughtExceptions, so it was not necessary to log those manually. Since winston does not have date-based roll-over, we went back to size-based roll-over, which obviously meant no more midnight crashes.
Eventually, I spotted a random midday crash today. It did not coincide with a size-based roll-over event, but winston was able to log an interesting uncaughtException:
"date": "Wed Jun 18 2014 06:26:12 GMT+0000 (Coordinated Universal Time)",
"process": {
"pid": 476,
"uid": null,
"gid": null,
"cwd": "E:
approot",
"execPath": "E:\\approot
node.exe",
"version": "v0.8.26",
"argv": ["E:\\approot\\node.exe", "E:\\approot\\server.js"],
"memoryUsage":
{ "rss": 80433152, "heapTotal": 37682920, "heapUsed": 31468888 }
"os":
{ "loadavg": [0, 0, 0], "uptime": 163780.9854492 }
"trace": [],
"stack": ["Error: EPERM, open 'c:\\logs\\socket1.log'"],
"level": "error",
"message": "uncaughtException: EPERM, open 'c:\\logs\\socket1.log'",
"timestamp": "2014-06-18T06:26:12.572Z"
Interesting question: the Node process _was_ writing to socket1.log all along; why would there suddenly be an EPERM error?
On restart it could resume writing to the same log file, while in the previous cases it seemed to lack permission to create a new log file.
Any clues as to what could possibly cause this, on a "scheduled" basis per server? Given that it happens so frequently and in sync with sister instances in the cloud service, something is happening behind the scenes that I cannot put my finger on.
thanks
The melody of logic will always play out the truth. ~ Narumi Ayumu, Spiral

Hi,
It is strange. From your description, how many instances does your worker role have? Do you store the log file on the VM's local disk? To avoid this problem, the best choice would be to store your log files in Azure blob storage; if you do, all log files will be kept in blob storage. About how to use Azure blob storage, please see this doc:
http://azure.microsoft.com/en-us/documentation/articles/storage-introduction/
Please try it.
If I misunderstood, please let me know.
Regards,
Will
-
How does LGWR write redo log files? I am puzzled!
The documentation says:
"The LGWR concurrently writes the same information to all online redo log files in a group."
My understanding of the sentence is the following. For example:
group a includes files (a1, a2)
group b includes files (b1, b2)
LGWR write sequence: write a1 and a2 concurrently; afterwards, write b1 and b2 concurrently.
My questions are:
1. Is my understanding right?
2. If my understanding is right, I think the separate log files in a group should be stored on different disks; if not, correct recovery cannot be guaranteed. Is my opinion right?
Thanks everyone!

Hi,
>> That is multiplexing... you should always have the members of a log file group on more than one disk
Exactly. You can keep multiple copies of the online redo log file to safeguard against damage to these files. When multiplexing online redo log files, LGWR concurrently writes the same redo log information to multiple identical online redo log files, thereby eliminating a single point of redo log failure. In addition, when multiplexing redo log files, it is preferable to keep the members of a group on different disks, so that one disk failure will not affect the continuing operation of the database. If LGWR can write to at least one member of the group, database operation proceeds as normal.
Cheers
Legatti -
Who writes alert log file?
Hi guys ,
I have been searching for the answer to this question:
Which background process writes to the alert log file?

Trace files for user processes are normally written to user_dump_dest. They are created only when tracing is requested or when Oracle encounters an error.
The alert.log is a file that is used to continuously record the status of the database as it changes, with important events (e.g. archival of logs, ALTER SYSTEM commands, ORA-1555 errors, unusable indexes, datafile space allocation, etc.). Most of these are issues which affect the entire instance/database. However, where a severe error is encountered in a user process, Oracle writes a trace file in user_dump_dest for that error, and a message indicating the error and the name of the trace file goes to the alert.log.
Similarly, the background processes may also write to their own trace files to indicate status/tracing.
The level of detail being logged can vary by version and by setting specific database events (if specified by Oracle Support) in the instance.
See http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/manproc.htm#sthref729
Hemant K Chitale -
[SOLVED] Writing to a log file on Oracle AS in ADF BC
Hi!
I'm trying to write some information to any of the log files on Oracle Application Server but can't seem to get it working.
I have an HTTP servlet class in which I try to log records like this:
Logger loLogger = Logger.getAnonymousLogger();
loLogger.info("HttpServlet: doGet start!");
This code displays
1.4.2008 9:44:02 package.HttpServlet doGet
INFO: HttpServlet: doGet start!
in the JDeveloper output window, but after deployment to AS it is not written to any of the files. I do not use a custom logger or log files in my application, so my question is: how can I write information to any of the log files on AS (for example the logs in the <ORACLE_HOME>\opmn\logs folder, or the log.xml file in the oc4j folder of the j2ee home log)?
I've been reading about J2EE logging but, as I said, can't get it to work. Are there any ADF classes to do this?
Thanks,
Regards!

Hi!
Thank you both for your effort. Both ways work now; I had messed something up at the beginning.
Logger loLogger = Logger.getAnonymousLogger();
loLogger.info("method start!");
writes to the 'default_group~home~default_group~1.log' file in the opmn/logs directory, while
ADFLogger loADFLogger = ADFLogger.createADFLogger("oracle");
loADFLogger.log(ADFLogger.NOTIFICATION, "method start!");
writes to 'log.xml' in the j2ee\home\log\home_default_group_1\oc4j directory.
Now I just have to decide which is better for me ;).
BB -
How to write to log files from JSP using Java
Does anybody know the different options for writing to log files from JSP? Do you have an example?

In the init() method of the servlet, put the following:
FileOutputStream out = new FileOutputStream("your-log-file");
PrintStream ps = new PrintStream(out);
System.setOut(ps);
System.setErr(ps);
Then load the servlet on startup using <load-on-startup> in web.xml. -
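Redirecting System.out/System.err as above works, but it hijacks all console output for the whole JVM. A less intrusive sketch using the JDK's built-in java.util.logging, which needs no extra jar (the file name and class are just examples):

```java
import java.io.IOException;
import java.util.logging.FileHandler;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

public class JspLogDemo {
    static final Logger LOG = Logger.getLogger(JspLogDemo.class.getName());

    // Call once at startup, e.g. from the servlet's init() method.
    static void setupFileLogging(String fileName) throws IOException {
        FileHandler handler = new FileHandler(fileName, true); // true = append
        handler.setFormatter(new SimpleFormatter());           // plain text instead of XML
        LOG.addHandler(handler);
    }

    public static void main(String[] args) throws IOException {
        setupFileLogging("jsp-demo.log");
        LOG.info("request handled");
    }
}
```

From a JSP or servlet you would then call LOG.info(...) wherever you previously used System.out.println.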
Oracle writes archive log files continuously
Hi all,
I don't know why my Oracle database has this problem. Online logs are written to archive log files continuously (with a 3-minute period). My archive logfile is 300 MB. I did a STARTUP FORCE of the database; it works, but the archive log files are being written very frequently. This is the alert log:
>
Sat Jan 1 14:23:19 2011
Successfully onlined Undo Tablespace 5.
Sat Jan 1 14:23:19 2011
SMON: enabling tx recovery
Sat Jan 1 14:23:19 2011
Database Characterset is AL32UTF8
Opening with internal Resource Manager plan
where NUMA PG = 1, CPUs = 16
replication_dependency_tracking turned off (no async multimaster replication found)
Sat Jan 1 14:23:40 2011
WARNING: AQ_TM_PROCESSES is set to 0. System operation might be adversely affected.
Sat Jan 1 14:24:32 2011
db_recovery_file_dest_size of 204800 MB is 28.64% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
Sat Jan 1 14:24:40 2011
Completed: ALTER DATABASE OPEN
Sat Jan 1 14:27:05 2011
Warning: PROCESSES may be too low for current load
shared servers=360, want 7 more but starting only 3 more
Warning: PROCESSES may be too low for current load
shared servers=363, want 9 more but starting only 0 more
Sat Jan 1 14:27:39 2011
Warning: PROCESSES may be too low for current load
shared servers=363, want 9 more but starting only 1 more
Warning: PROCESSES may be too low for current load
shared servers=364, want 9 more but starting only 0 more
Sat Jan 1 14:28:58 2011
Thread 1 advanced to log sequence 9463 (LGWR switch)
Current log# 3 seq# 9463 mem# 0: /u01/oradata/TNORA3/redo03a.log
Current log# 3 seq# 9463 mem# 1: /u02/oradata/TNORA3/redo03b.log
Sat Jan 1 14:30:20 2011
Errors in file /opt/app/oracle/admin/TNORA3/bdump/tnora_j000_17762.trc:
ORA-00604: error occurred at recursive SQL level 1
ORA-00018: maximum number of sessions exceeded
Sat Jan 1 14:39:47 2011
Thread 1 advanced to log sequence 9464 (LGWR switch)
Current log# 1 seq# 9464 mem# 0: /u01/oradata/TNORA3/redo01a.log
Current log# 1 seq# 9464 mem# 1: /u02/oradata/TNORA3/redo01b.log
Sat Jan 1 14:42:51 2011
Errors in file /opt/app/oracle/admin/TNORA3/bdump/tnora_s008_17165.trc:
ORA-07445: exception encountered: core dump [_intel_fast_memcpy.J()+80] [SIGSEGV] [Address not mapped to object] [0x2B8988CE2018] [] []
Sat Jan 1 14:42:57 2011
Thread 1 advanced to log sequence 9465 (LGWR switch)
Current log# 2 seq# 9465 mem# 0: /u01/oradata/TNORA3/redo02a.log
Current log# 2 seq# 9465 mem# 1: /u02/oradata/TNORA3/redo02b.log
Sat Jan 1 14:43:11 2011
found dead shared server 'S008', pid = (42, 1)
Sat Jan 1 14:45:39 2011
Thread 1 advanced to log sequence 9466 (LGWR switch)
Current log# 3 seq# 9466 mem# 0: /u01/oradata/TNORA3/redo03a.log
Current log# 3 seq# 9466 mem# 1: /u02/oradata/TNORA3/redo03b.log
Sat Jan 1 14:48:47 2011
Thread 1 cannot allocate new log, sequence 9467
Checkpoint not complete
Current log# 3 seq# 9466 mem# 0: /u01/oradata/TNORA3/redo03a.log
Current log# 3 seq# 9466 mem# 1: /u02/oradata/TNORA3/redo03b.log
Sat Jan 1 14:48:50 2011
Thread 1 advanced to log sequence 9467 (LGWR switch)
Current log# 1 seq# 9467 mem# 0: /u01/oradata/TNORA3/redo01a.log
Current log# 1 seq# 9467 mem# 1: /u02/oradata/TNORA3/redo01b.log
Sat Jan 1 14:52:11 2011
Thread 1 advanced to log sequence 9468 (LGWR switch)
Current log# 2 seq# 9468 mem# 0: /u01/oradata/TNORA3/redo02a.log
Current log# 2 seq# 9468 mem# 1: /u02/oradata/TNORA3/redo02b.log
Sat Jan 1 14:55:12 2011
Thread 1 advanced to log sequence 9469 (LGWR switch)
Current log# 3 seq# 9469 mem# 0: /u01/oradata/TNORA3/redo03a.log
Current log# 3 seq# 9469 mem# 1: /u02/oradata/TNORA3/redo03b.log
Sat Jan 1 14:58:12 2011
Thread 1 advanced to log sequence 9470 (LGWR switch)
Current log# 1 seq# 9470 mem# 0: /u01/oradata/TNORA3/redo01a.log
Current log# 1 seq# 9470 mem# 1: /u02/oradata/TNORA3/redo01b.log
Sat Jan 1 15:02:00 2011
Thread 1 advanced to log sequence 9471 (LGWR switch)
Current log# 2 seq# 9471 mem# 0: /u01/oradata/TNORA3/redo02a.log
Current log# 2 seq# 9471 mem# 1: /u02/oradata/TNORA3/redo02b.log
Sat Jan 1 15:05:16 2011
Thread 1 advanced to log sequence 9472 (LGWR switch)
Current log# 3 seq# 9472 mem# 0: /u01/oradata/TNORA3/redo03a.log
Current log# 3 seq# 9472 mem# 1: /u02/oradata/TNORA3/redo03b.log
Sat Jan 1 15:08:30 2011
Thread 1 advanced to log sequence 9473 (LGWR switch)
Current log# 1 seq# 9473 mem# 0: /u01/oradata/TNORA3/redo01a.log
Current log# 1 seq# 9473 mem# 1: /u02/oradata/TNORA3/redo01b.log
Sat Jan 1 15:11:12 2011
Thread 1 cannot allocate new log, sequence 9474
Checkpoint not complete
Current log# 1 seq# 9473 mem# 0: /u01/oradata/TNORA3/redo01a.log
Current log# 1 seq# 9473 mem# 1: /u02/oradata/TNORA3/redo01b.log
Sat Jan 1 15:11:14 2011
Thread 1 advanced to log sequence 9474 (LGWR switch)
Current log# 2 seq# 9474 mem# 0: /u01/oradata/TNORA3/redo02a.log
Current log# 2 seq# 9474 mem# 1: /u02/oradata/TNORA3/redo02b.log
Sat Jan 1 15:14:15 2011
Thread 1 advanced to log sequence 9475 (LGWR switch)
Current log# 3 seq# 9475 mem# 0: /u01/oradata/TNORA3/redo03a.log
Current log# 3 seq# 9475 mem# 1: /u02/oradata/TNORA3/redo03b.log
>
Please, help me.

This is the output of: tail -100 /opt/app/oracle/admin/TNORA3/bdump/tnora_s008_17165.trc | more
KCBS: Tot bufs in set segwise
KCBS: nbseg[0] is 1568
KCBS: nbseg[1] is 1568
KCBS: nbseg[2] is 1569
KCBS: nbseg[3] is 1568
KCBS: nbseg[4] is 1568
KCBS: nbseg[5] is 1568
KCBS: nbseg[6] is 1569
KCBS: nbseg[7] is 1568
KCBS: nbseg[8] is 1568
KCBS: nbseg[9] is 1568
KCBS: nbseg[10] is 1569
KCBS: nbseg[11] is 1568
KCBS: nbseg[12] is 1568
KCBS: nbseg[13] is 1568
KCBS: nbseg[14] is 1569
KCBS: nbseg[15] is 1568
KCBS: nbseg[16] is 1568
KCBS: nbseg[17] is 1568
KCBS: nbseg[18] is 1569
KCBS: nbseg[19] is 1568
KCBS: Act cnt = 15713
KCBS: bufcnt = 31365, nb_kcbsds = 31365
KCBS: fbufcnt = 445
KCBS: Tot bufs in set segwise
KCBS: nbseg[0] is 1568
KCBS: nbseg[1] is 1568
KCBS: nbseg[2] is 1569
KCBS: nbseg[3] is 1568
KCBS: nbseg[4] is 1568
KCBS: nbseg[5] is 1568
KCBS: nbseg[6] is 1569
KCBS: nbseg[7] is 1568
KCBS: nbseg[8] is 1568
KCBS: nbseg[9] is 1568
KCBS: nbseg[10] is 1569
KCBS: nbseg[11] is 1568
KCBS: nbseg[12] is 1568
KCBS: nbseg[13] is 1568
KCBS: nbseg[14] is 1569
KCBS: nbseg[15] is 1568
KCBS: nbseg[16] is 1568
KCBS: nbseg[17] is 1568
KCBS: nbseg[18] is 1569
KCBS: nbseg[19] is 1568
KCBS: Act cnt = 15713
KCBS: bufcnt = 31365, nb_kcbsds = 31365
KCBS: fbufcnt = 445
KCBS: Tot bufs in set segwise
KCBS: nbseg[0] is 1568
KCBS: nbseg[1] is 1568
KCBS: nbseg[2] is 1568
KCBS: nbseg[3] is 1569
KCBS: nbseg[4] is 1568
KCBS: nbseg[5] is 1568
KCBS: nbseg[6] is 1568
KCBS: nbseg[7] is 1569
KCBS: nbseg[8] is 1568
KCBS: nbseg[9] is 1568
KCBS: nbseg[10] is 1568
KCBS: nbseg[11] is 1569
KCBS: nbseg[12] is 1568
KCBS: nbseg[13] is 1568
KCBS: nbseg[14] is 1568
KCBS: nbseg[15] is 1569
KCBS: nbseg[16] is 1568
KCBS: nbseg[17] is 1568
KCBS: nbseg[18] is 1568
KCBS: nbseg[19] is 1569
KCBS: Act cnt = 15713
KCBS: bufcnt = 31365, nb_kcbsds = 31365
KCBS: fbufcnt = 444
KCBS: Tot bufs in set segwise
KCBS: nbseg[0] is 1568
KCBS: nbseg[1] is 1568
KCBS: nbseg[2] is 1568
KCBS: nbseg[3] is 1569
KCBS: nbseg[4] is 1568
KCBS: nbseg[5] is 1568
KCBS: nbseg[6] is 1568
KCBS: nbseg[7] is 1569
KCBS: nbseg[8] is 1568
KCBS: nbseg[9] is 1568
KCBS: nbseg[10] is 1568
KCBS: nbseg[11] is 1569
KCBS: nbseg[12] is 1568
KCBS: nbseg[13] is 1568
KCBS: nbseg[14] is 1568
KCBS: nbseg[15] is 1569
KCBS: nbseg[16] is 1568
KCBS: nbseg[17] is 1568
KCBS: nbseg[18] is 1568
KCBS: nbseg[19] is 1569
KCBS: Act cnt = 15713
KSOLS: Begin dumping all object level stats elements
KSOLS: Done dumping all elements. Exiting.
Dump event group for SESSION
Unable to dump event group - no SESSION state object
Dump event group for SYSTEM
ssexhd: crashing the process...
Shadow_Core_Dump = partial -
Unable to write a log file from EJB
Hi, I have a stateless EJB deployed on OC4J 10.1.3, and it is trying to create a logfile at the location given in a properties file. When it tries to create the file, it gets "Access denied" for that particular folder. I have changed the folder to another location, but the result is the same. I am able to create a file in the same folder using a simple Java class.
Here is the stack trace:
javax.ejb.CreateException: D:\Kernel7.3\GW_EJB\log (Access is denied)
at com.xxxxx.fcubs.gw.ejb.GWEJBBean.ejbCreate(GWEJBBean.java:140)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.
java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAcces
sorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at com.evermind.server.ejb.interceptor.joinpoint.EJBJoinPointImpl.invoke
(EJBJoinPointImpl.java:35)
at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(Inv
ocationContextImpl.java:69)
at com.evermind.server.ejb.interceptor.system.DMSInterceptor.invoke(DMSI
nterceptor.java:52)
at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(Inv
ocationContextImpl.java:69)
at com.evermind.server.ejb.interceptor.system.SetContextActionIntercepto
r.invoke(SetContextActionInterceptor.java:34)
at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(Inv
ocationContextImpl.java:69)
at com.evermind.server.ejb.LifecycleManager$LifecycleCallback.invokeLife
cycleMethod(LifecycleManager.java:619)
at com.evermind.server.ejb.LifecycleManager$LifecycleCallback.invokeLife
cycleMethod(LifecycleManager.java:606)
at com.evermind.server.ejb.LifecycleManager.postConstruct(LifecycleManag
er.java:89)
at com.evermind.server.ejb.StatelessSessionBeanPool.createContextImpl(St
atelessSessionBeanPool.java:41)
at com.evermind.server.ejb.BeanPool.createContext(BeanPool.java:405)
at com.evermind.server.ejb.BeanPool.allocateContext(BeanPool.java:232)
at com.evermind.server.ejb.StatelessSessionEJBHome.getContextInstance(St
atelessSessionEJBHome.java:51)
at com.evermind.server.ejb.StatelessSessionEJBObject.OC4J_invokeMethod(S
tatelessSessionEJBObject.java:83)
at GWEJBRemote_StatelessSessionBeanWrapper2.processMsg(GWEJBRemote_State
lessSessionBeanWrapper2.java:66)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.
java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAcces
sorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at com.evermind.server.rmi.RmiMethodCall.run(RmiMethodCall.java:53)
at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(Relea
sableResourcePooledExecutor.java:303)
at java.lang.Thread.run(Thread.java:595)
javax.ejb.EJBException: Exception while creating bean/context instance for bean
GW_EJB_Bean; nested exception is: javax.ejb.CreateException: D:\Kernel7.3\GW_EJB
\log (Access is denied)
at com.evermind.server.rmi.RMICall.EXCEPTION_ORIGINATES_FROM_THE_REMOTE_
SERVER(RMICall.java:110)
at com.evermind.server.rmi.RMICall.throwRecordedException(RMICall.java:1
28)
at com.evermind.server.rmi.RMIClientConnection.obtainRemoteMethodRespons
e(RMIClientConnection.java:472)
at com.evermind.server.rmi.RMIClientConnection.invokeMethod(RMIClientCon
nection.java:416)
at com.evermind.server.rmi.RemoteInvocationHandler.invoke(RemoteInvocati
onHandler.java:63)
at com.evermind.server.rmi.RecoverableRemoteInvocationHandler.invoke(Rec
overableRemoteInvocationHandler.java:28)
at com.evermind.server.ejb.StatelessSessionRemoteInvocationHandler.invok
e(StatelessSessionRemoteInvocationHandler.java:43)
at __Proxy1.processMsg(Unknown Source)
at GW_EJB_Client.callEJB(GW_EJB_Client.java:68)
at GW_EJB_Client.main(GW_EJB_Client.java:22)
Caused by: javax.ejb.CreateException: D:\Kernel7.3\GW_EJB\log (Access is denied)
at com.xxxxx.fcubs.gw.ejb.GWEJBBean.ejbCreate(GWEJBBean.java:140)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.
java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAcces
sorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at com.evermind.server.ejb.interceptor.joinpoint.EJBJoinPointImpl.invoke
(EJBJoinPointImpl.java:35)
at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(Inv
ocationContextImpl.java:69)
at com.evermind.server.ejb.interceptor.system.DMSInterceptor.invoke(DMSI
nterceptor.java:52)
at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(Inv
ocationContextImpl.java:69)
at com.evermind.server.ejb.interceptor.system.SetContextActionIntercepto
r.invoke(SetContextActionInterceptor.java:34)
at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(Inv
ocationContextImpl.java:69)
at com.evermind.server.ejb.LifecycleManager$LifecycleCallback.invokeLife
cycleMethod(LifecycleManager.java:619)
at com.evermind.server.ejb.LifecycleManager$LifecycleCallback.invokeLife
cycleMethod(LifecycleManager.java:606)
at com.evermind.server.ejb.LifecycleManager.postConstruct(LifecycleManag
er.java:89)
at com.evermind.server.ejb.StatelessSessionBeanPool.createContextImpl(St
atelessSessionBeanPool.java:41)
at com.evermind.server.ejb.BeanPool.createContext(BeanPool.java:405)
at com.evermind.server.ejb.BeanPool.allocateContext(BeanPool.java:232)
at com.evermind.server.ejb.StatelessSessionEJBHome.getContextInstance(St
atelessSessionEJBHome.java:51)
at com.evermind.server.ejb.StatelessSessionEJBObject.OC4J_invokeMethod(S
tatelessSessionEJBObject.java:83)
at GWEJBRemote_StatelessSessionBeanWrapper2.processMsg(GWEJBRemote_State
lessSessionBeanWrapper2.java:66)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.
java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAcces
sorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at com.evermind.server.rmi.RmiMethodCall.run(RmiMethodCall.java:53)
at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(Relea
sableResourcePooledExecutor.java:303)
at java.lang.Thread.run(Thread.java:595)

public void ejbCreate() throws CreateException {
    try {
        initializeprop();
    } catch (Exception ex) {
        throw new CreateException(ex.getMessage());
    }
}

On line 140 I am throwing the CreateException. The initializeprop() method is used to initialize the logger properties etc. -
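"Access is denied" from a FileWriter/FileOutputStream usually means either that the OC4J process user lacks write permission on the directory, or that the path being opened is actually a directory rather than a file (the trace shows D:\Kernel7.3\GW_EJB\log, which looks like a folder). Failing fast with a precise message makes such deployments easier to debug; a small sketch of that idea (class and names are just examples):

```java
import java.io.File;
import java.io.IOException;

public class LogDirCheck {
    // Creates the log directory if needed and verifies it is writable,
    // so the EJB fails with a precise message instead of "Access is denied".
    static File prepareLogFile(String dirPath, String fileName) throws IOException {
        File dir = new File(dirPath);
        if (!dir.exists() && !dir.mkdirs()) {
            throw new IOException("cannot create log directory: " + dir.getAbsolutePath());
        }
        if (!dir.isDirectory()) {
            throw new IOException("not a directory: " + dir.getAbsolutePath());
        }
        if (!dir.canWrite()) {
            throw new IOException("no write permission on: " + dir.getAbsolutePath());
        }
        return new File(dir, fileName);
    }
}
```

Calling this from initializeprop() would turn the opaque CreateException into a message naming the exact failing directory.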
Wait Events "log file parallel write" / "log file sync" during CREATE INDEX
Hello guys,
At my current project I am performing some performance tests for Oracle Data Guard. The question is: "How does a LGWR SYNC transfer influence system performance?"
To get some performance values that I can compare, I first built up a normal Oracle database.
Now I am performing different tests, like creating "large" indexes and massive parallel inserts/commits, to get the benchmark.
My database is Oracle 10.2.0.4 with multiplexed redo log files on AIX.
I am creating an index on a "normal" table. I execute "dbms_workload_repository.create_snapshot()" before and after the CREATE INDEX to get an equivalent timeframe for the AWR report.
After the index is built (roughly 9 GB), I run awrrpt.sql to get the AWR report.
And now take a look at these values from the AWR report:

                                       %Time  Total Wait  Avg wait   Waits
Event                          Waits   -outs    Time (s)      (ms)    /txn
log file parallel write       10,019      .0         132        13    33.5
log file sync                    293      .7           4        15     1.0

How can this be possible?
According to the documentation:
-> log file sync: http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/waitevents003.htm#sthref3120
"Wait Time: The wait time includes the writing of the log buffer and the post."
-> log file parallel write: http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/waitevents003.htm#sthref3104
"Wait Time: Time it takes for the I/Os to complete. Even though redo records are written in parallel, the parallel write is not complete until the last I/O is on disk."
This was also my understanding: the "log file sync" wait time should be higher than the "log file parallel write" wait time, because it includes the I/O plus the response time to the user session.
I could accept it if the values were close to each other (maybe about 1 second in total), but the difference between 132 seconds and 4 seconds is too noticeable.
Is the behavior of log file sync/write different when performing a DDL like CREATE INDEX (maybe asynchronous, as can be influenced with the initialization parameter COMMIT_WRITE)?
Do you have any idea how these values come about?
Any thoughts/ideas are welcome.
Thanks and regards.

Surachart Opun (HunterX) wrote:
Thank you for the nice idea. In this case, how can we reduce the "log file parallel write" and "log file sync" wait times? Would CREATE INDEX with NOLOGGING help?

Yes - if you create the index nologging then you wouldn't be generating that 10GB of redo log, so the waits would disappear.
Two points on nologging, though:
- It's "only" an index, so you could always rebuild it in the event of media corruption; but if you had lots of indexes created nologging this might cause an unreasonable delay before the system was usable again, so you should decide on a fallback option, such as taking a new backup of the tablespace as soon as all the nologging operations had completed.
- If the database, or that tablespace, is in "force logging" mode, the nologging will not work.
Don't get too alarmed by the waits, though. My guess is that the "log file sync" waits are mostly from other sessions, and since there aren't many of them the other sessions are probably not seeing a performance issue. The "log file parallel write" waits are caused by your create index, but they are happening to lgwr in the background, which runs concurrently with your session; so your session is not (directly) affected by them and may not be seeing a performance issue.
The other sessions are seeing relatively high sync times because their log file syncs have to wait for one of the large writes that you have triggered to complete, and then the log writer includes their (little) writes with your next (large) write.
There may be a performance impact, though, from the pure volume of I/O. Apart from the I/O to write the index, you have LGWR writing (N copies of) the redo for the index, and ARCH is reading and writing the completed log files caused by the index build. So the 9GB of index could easily be responsible for vastly more I/O than the initial 9GB.
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk
-
Getting script to write output to console and a log file
Hello everyone. I'm working on a script that will search through a bunch of folders, pull out anything larger than 50 KB, and then write a log file of what it moved. What I have so far is what I've pieced together through my research. It works, but it doesn't write anything to the console, which I believe is why my log isn't showing any info.
Here's the code:
get-ChildItem -path $path -recurse -ErrorAction "SilentlyContinue" -include $Extension | ? {$_.GetType().Name -eq "FileInfo" } | where-Object {$_.Length -gt $size} | Copy-Item -Destination c:\test|results_big | Out=File c:\test\log.txt
What I've been trying to do is add a | Write-Host in there somewhere, but all I get is countless red text with a variety of messages reminding me of my ignorance :). Thoughts?

Hi Scotty,
here's an edit:
Get-ChildItem -path $path -recurse -ErrorAction "SilentlyContinue" -include $Extension | Where-Object { (-not $_.PSIsContainer) -and ($_.Length -gt $size)} | Copy-Item -Destination "c:\test_results_big" -PassThru | Out-File "c:\test\log.txt"
So, what did I change?
- Instead of checking the item type, I checked the PSIsContainer property (which is true for folders)
- Combined the two Where-Object calls into a single one
- Fixed the path for the Copy-Item destination
- Added the -PassThru parameter - otherwise Copy-Item gives no output, causing you to export nothing to the file
- Fixed a typo: Out=File --> Out-File
Cheers,
Fred
There's no place like 127.0.0.1 -
With alert configuration we can notify a user when there is an error.
Is there a way to write a log file when there is an error (assume no BPM is used)?

There are a couple of things you can do to create a log file:
1. You can create a custom alert and trigger it by raising an exception in your message mapping. This can be done using a user-defined function (UDF). In the same UDF you can call a communication channel to write the error log file.
2. You can create two interfaces. The first is your main interface; in it, do not let the message mapping fail. Instead, populate a certain error value in the target and use that value in another message mapping to route the error records:
IntfA --> UDF (Error --> X) --> Target (<field Z>X</field>) --> IntfB
IntfB --> check the value of X in field Z to route the error records to the target structure --> IntfC
The same condition needs to be put in the receiver determination so that it does not raise an error. -
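A standalone sketch of the logic behind option 1: the mapping step passes values through normally but throws when it sees the agreed error marker, and in PI that thrown exception fails the message mapping, which is what can be wired to a custom alert. A real UDF would use the PI mapping API's signature; this plain-Java version only illustrates the control flow, and the marker value "X" mirrors the field above:

```java
public class ErrorUdfSketch {
    // Pass the value through unchanged unless it carries the error marker.
    // In a message mapping, the RuntimeException aborts the mapping, which
    // triggers the configured alert.
    public static String checkForError(String value) {
        if ("X".equals(value)) {
            throw new RuntimeException("error marker found, failing the mapping");
        }
        return value;
    }
}
```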
Log file sync vs log file parallel write probably not bug 2669566
This is a continuation of a previous thread about ‘log file sync’ and ‘log file parallel write’ events.
Version : 9.2.0.8
Platform : Solaris
Application : Oracle Apps
The number of commits per second ranges between 10 and 30.
When querying statspack performance data, the calculated average wait time on the 'log file sync' event is on average 10 times the wait time for the 'log file parallel write' event.
Below are just 2 samples where the ratio is even about 20:
"snap_time" "log file parallel write avg" "log file sync avg" "ratio"
11/05/2008 10:38:26 8,142 156,343 19.20
11/05/2008 10:08:23 8,434 201,915 23.94
So the wait time for a ‘log file sync’ is 10 times the wait time for a ‘log file parallel write’.
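The ratio column can be reproduced directly from the two statspack samples above (both averages are in microseconds):

```python
# Reproduce the "ratio" column from the two samples quoted above:
# ratio = avg 'log file sync' wait / avg 'log file parallel write' wait
samples = [
    ("11/05/2008 10:38:26", 8142, 156343),
    ("11/05/2008 10:08:23", 8434, 201915),
]
for snap_time, parallel_write_avg, sync_avg in samples:
    print(f"{snap_time}  ratio = {sync_avg / parallel_write_avg:.2f}")
# → ratio = 19.20 and 23.94, matching the report
```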
First I thought that I was hitting bug 2669566.
But then Jonathan Lewis's blog pointed me to Tanel Poder's snapper tool.
And I think that it proves that I am NOT hitting this bug.
Below is a sample of the output for the log writer.
-- End of snap 3
HEAD,SID, SNAPSHOT START ,SECONDS,TYPE,STATISTIC , DELTA, DELTA/SEC, HDELTA, HDELTA/SEC
DATA, 4, 20081105 10:35:41, 30, STAT, messages sent , 1712, 57, 1.71k, 57.07
DATA, 4, 20081105 10:35:41, 30, STAT, messages received , 866, 29, 866, 28.87
DATA, 4, 20081105 10:35:41, 30, STAT, background timeouts , 10, 0, 10, .33
DATA, 4, 20081105 10:35:41, 30, STAT, redo wastage , 212820, 7094, 212.82k, 7.09k
DATA, 4, 20081105 10:35:41, 30, STAT, redo writer latching time , 2, 0, 2, .07
DATA, 4, 20081105 10:35:41, 30, STAT, redo writes , 867, 29, 867, 28.9
DATA, 4, 20081105 10:35:41, 30, STAT, redo blocks written , 33805, 1127, 33.81k, 1.13k
DATA, 4, 20081105 10:35:41, 30, STAT, redo write time , 652, 22, 652, 21.73
DATA, 4, 20081105 10:35:41, 30, WAIT, rdbms ipc message ,23431084, 781036, 23.43s, 781.04ms
DATA, 4, 20081105 10:35:41, 30, WAIT, log file parallel write , 6312957, 210432, 6.31s, 210.43ms
DATA, 4, 20081105 10:35:41, 30, WAIT, LGWR wait for redo copy , 18749, 625, 18.75ms, 624.97us
When adding the DELTA/SEC (which is in micro seconds) for the wait events it always roughly adds up to a million micro seconds.
In the example above 781036 + 210432 = 991468 micro seconds.
This is the case for all the snaps taken by snapper.
So I think that the wait time for the 'log file parallel write' event must be more or less correct.
So I still have the question “Why is the ‘log file sync’ about 10 times the time of the ‘log file parallel write’?”
Any clues?
Yes, that is true! But that is the way I calculate the average wait time: total wait time / total waits.
So the average wait time per wait for the 'log file sync' event should be near the wait time for the 'log file parallel write' event.
I use the query below:
select snap_id
, snap_time
, event
, time_waited_micro
, (time_waited_micro - p_time_waited_micro)/((snap_time - p_snap_time) * 24) corrected_wait_time_h
, total_waits
, (total_waits - p_total_waits)/((snap_time - p_snap_time) * 24) corrected_waits_h
, trunc(((time_waited_micro - p_time_waited_micro)/((snap_time - p_snap_time) * 24))/((total_waits - p_total_waits)/((snap_time - p_snap_time) * 24))) average
from (
select sn.snap_id, sn.snap_time, se.event, se.time_waited_micro, se.total_waits,
lag(sn.snap_id) over (partition by se.event order by sn.snap_id) p_snap_id,
lag(sn.snap_time) over (partition by se.event order by sn.snap_time) p_snap_time,
lag(se.time_waited_micro) over (partition by se.event order by sn.snap_id) p_time_waited_micro,
lag(se.total_waits) over (partition by se.event order by sn.snap_id) p_total_waits,
row_number() over (partition by event order by sn.snap_id) r
from perfstat.stats$system_event se, perfstat.stats$snapshot sn
where se.SNAP_ID = sn.SNAP_ID
and se.EVENT = 'log file sync'
order by snap_id, event
)
where time_waited_micro - p_time_waited_micro > 0
order by snap_id desc; -
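The core of that query — the corrected average between consecutive snapshots — can be sketched in Python; the function and the snapshot tuples below are illustrative, not taken from the actual statspack data:

```python
def corrected_averages(snapshots):
    """snapshots: list of (snap_id, time_waited_micro, total_waits),
    cumulative values ordered by snap_id, as in stats$system_event.
    Returns (snap_id, average wait in microseconds) per interval,
    mirroring the lag()-based delta and trunc() in the SQL."""
    out = []
    for prev, cur in zip(snapshots, snapshots[1:]):
        d_time = cur[1] - prev[1]    # delta of time_waited_micro
        d_waits = cur[2] - prev[2]   # delta of total_waits
        if d_time > 0 and d_waits > 0:
            out.append((cur[0], d_time // d_waits))
    return out
```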
Log file sync top event during performance test -av 36ms
Hi,
During the performance test for our product before deployment into production, I see "log file sync" on top with an avg wait (ms) of 36, which I feel is too high.
Avg
wait % DB
Event Waits Time(s) (ms) time Wait Class
log file sync 208,327 7,406 36 46.6 Commit
direct path write 646,833 3,604 6 22.7 User I/O
DB CPU 1,599 10.1
direct path read temp 1,321,596 619 0 3.9 User I/O
log buffer space 4,161 558 134 3.5 Configurat
Although testers are not complaining about the performance of the application, we DBAs are expected to be proactive about any bad signals from the DB.
I am not able to figure out why "log file sync" is having such slow response.
Below is the snapshot from the load profile.
Snap Id Snap Time Sessions Curs/Sess
Begin Snap: 108127 16-May-13 20:15:22 105 6.5
End Snap: 108140 16-May-13 23:30:29 156 8.9
Elapsed: 195.11 (mins)
DB Time: 265.09 (mins)
Cache Sizes Begin End
~~~~~~~~~~~ ---------- ----------
Buffer Cache: 1,168M 1,136M Std Block Size: 8K
Shared Pool Size: 1,120M 1,168M Log Buffer: 16,640K
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~ --------------- --------------- ---------- ----------
DB Time(s): 1.4 0.1 0.02 0.01
DB CPU(s): 0.1 0.0 0.00 0.00
Redo size: 607,512.1 33,092.1
Logical reads: 3,900.4 212.5
Block changes: 1,381.4 75.3
Physical reads: 134.5 7.3
Physical writes: 134.0 7.3
User calls: 145.5 7.9
Parses: 24.6 1.3
Hard parses: 7.9 0.4
W/A MB processed: 915,418.7 49,864.2
Logons: 0.1 0.0
Executes: 85.2 4.6
Rollbacks: 0.0 0.0
Transactions: 18.4
Some of the top background wait events:
^LBackground Wait Events DB/Inst: Snaps: 108127-108140
-> ordered by wait time desc, waits desc (idle events last)
-> Only events with Total Wait Time (s) >= .001 are shown
-> %Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0
Avg
%Time Total Wait wait Waits % bg
Event Waits -outs Time (s) (ms) /txn time
log file parallel write 208,563 0 2,528 12 1.0 66.4
db file parallel write 4,264 0 785 184 0.0 20.6
Backup: sbtbackup 1 0 516 516177 0.0 13.6
control file parallel writ 4,436 0 97 22 0.0 2.6
log file sequential read 6,922 0 95 14 0.0 2.5
Log archive I/O 6,820 0 48 7 0.0 1.3
os thread startup 432 0 26 60 0.0 .7
Backup: sbtclose2 1 0 10 10094 0.0 .3
db file sequential read 2,585 0 8 3 0.0 .2
db file single write 560 0 3 6 0.0 .1
log file sync 28 0 1 53 0.0 .0
control file sequential re 36,326 0 1 0 0.2 .0
log file switch completion 4 0 1 207 0.0 .0
buffer busy waits 5 0 1 116 0.0 .0
LGWR wait for redo copy 924 0 1 1 0.0 .0
log file single write 56 0 1 9 0.0 .0
Backup: sbtinfo2 1 0 1 500 0.0 .0
During a previous perf test, things didn't look this bad for "log file sync". A few sections from the comparison report (awrddprt.sql):
{code}
Workload Comparison
~~~~~~~~~~~~~~~~~~~ 1st Per Sec 2nd Per Sec %Diff 1st Per Txn 2nd Per Txn %Diff
DB time: 0.78 1.36 74.36 0.02 0.07 250.00
CPU time: 0.18 0.14 -22.22 0.00 0.01 100.00
Redo size: 573,678.11 607,512.05 5.90 15,101.84 33,092.08 119.13
Logical reads: 4,374.04 3,900.38 -10.83 115.14 212.46 84.52
Block changes: 1,593.38 1,381.41 -13.30 41.95 75.25 79.38
Physical reads: 76.44 134.54 76.01 2.01 7.33 264.68
Physical writes: 110.43 134.00 21.34 2.91 7.30 150.86
User calls: 197.62 145.46 -26.39 5.20 7.92 52.31
Parses: 7.28 24.55 237.23 0.19 1.34 605.26
Hard parses: 0.00 7.88 100.00 0.00 0.43 100.00
Sorts: 3.88 4.90 26.29 0.10 0.27 170.00
Logons: 0.09 0.08 -11.11 0.00 0.00 0.00
Executes: 126.69 85.19 -32.76 3.34 4.64 38.92
Transactions: 37.99 18.36 -51.67
Top events, first (1st) run vs second (2nd) run:
1st: Event Wait Class Waits Time(s) Avg Time(ms) %DB time | 2nd: Event Wait Class Waits Time(s) Avg Time(ms) %DB time
SQL*Net more data from client Network 2,133,486 1,270.7 0.6 61.24 | log file sync Commit 208,355 7,407.6 35.6 46.57
CPU time N/A 487.1 N/A 23.48 | direct path write User I/O 646,849 3,604.7 5.6 22.66
log file sync Commit 99,459 129.5 1.3 6.24 | log file parallel write System I/O 208,564 2,528.4 12.1 15.90
log file parallel write System I/O 100,732 126.6 1.3 6.10 | CPU time N/A 1,599.3 N/A 10.06
SQL*Net more data to client Network 451,810 103.1 0.2 4.97 | db file parallel write System I/O 4,264 784.7 184.0 4.93
-direct path write User I/O 121,044 52.5 0.4 2.53 | -SQL*Net more data from client Network 7,407,435 279.7 0.0 1.76
-db file parallel write System I/O 986 22.8 23.1 1.10 | -SQL*Net more data to client Network 2,714,916 64.6 0.0 0.41
{code}
To sum it up:
1. Why is the IO response taking such a hit during the new perf test? Please suggest.
2. Does the number of DB writers impact the "log file sync" wait event? We have only one DB writer as the number of CPUs on the host is only 4.
{code}
select * from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
PL/SQL Release 11.1.0.7.0 - Production
CORE 11.1.0.7.0 Production
TNS for HPUX: Version 11.1.0.7.0 - Production
NLSRTL Version 11.1.0.7.0 - Production
{code}
Please let me know if you would like to see any other stats.
Edited by: Kunwar on May 18, 2013 2:20 PM
1. A snapshot interval of 3 hours always generates meaningless results.
Below are some details from the 1 hour interval AWR report.
Platform CPUs Cores Sockets Memory(GB)
HP-UX IA (64-bit) 4 4 3 31.95
Snap Id Snap Time Sessions Curs/Sess
Begin Snap: 108129 16-May-13 20:45:32 140 8.0
End Snap: 108133 16-May-13 21:45:53 150 8.8
Elapsed: 60.35 (mins)
DB Time: 140.49 (mins)
Cache Sizes Begin End
~~~~~~~~~~~ ---------- ----------
Buffer Cache: 1,168M 1,168M Std Block Size: 8K
Shared Pool Size: 1,120M 1,120M Log Buffer: 16,640K
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~ --------------- --------------- ---------- ----------
DB Time(s): 2.3 0.1 0.03 0.01
DB CPU(s): 0.1 0.0 0.00 0.00
Redo size: 719,553.5 34,374.6
Logical reads: 4,017.4 191.9
Block changes: 1,521.1 72.7
Physical reads: 136.9 6.5
Physical writes: 158.3 7.6
User calls: 167.0 8.0
Parses: 25.8 1.2
Hard parses: 8.9 0.4
W/A MB processed: 406,220.0 19,406.0
Logons: 0.1 0.0
Executes: 88.4 4.2
Rollbacks: 0.0 0.0
Transactions: 20.9
Top 5 Timed Foreground Events
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Avg
wait % DB
Event Waits Time(s) (ms) time Wait Class
log file sync 73,761 6,740 91 80.0 Commit
log buffer space 3,581 541 151 6.4 Configurat
DB CPU 348 4.1
direct path write 238,962 241 1 2.9 User I/O
direct path read temp 487,874 174 0 2.1 User I/O
Background Wait Events DB/Inst: Snaps: 108129-108133
-> ordered by wait time desc, waits desc (idle events last)
-> Only events with Total Wait Time (s) >= .001 are shown
-> %Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0
Avg
%Time Total Wait wait Waits % bg
Event Waits -outs Time (s) (ms) /txn time
log file parallel write 61,049 0 1,891 31 0.8 87.8
db file parallel write 1,590 0 251 158 0.0 11.6
control file parallel writ 1,372 0 56 41 0.0 2.6
log file sequential read 2,473 0 50 20 0.0 2.3
Log archive I/O 2,436 0 20 8 0.0 .9
os thread startup 135 0 8 60 0.0 .4
db file sequential read 668 0 4 6 0.0 .2
db file single write 200 0 2 9 0.0 .1
log file sync 8 0 1 152 0.0 .1
log file single write 20 0 0 21 0.0 .0
control file sequential re 11,218 0 0 0 0.1 .0
buffer busy waits 2 0 0 161 0.0 .0
direct path write 6 0 0 37 0.0 .0
LGWR wait for redo copy 380 0 0 0 0.0 .0
log buffer space 1 0 0 89 0.0 .0
latch: cache buffers lru c 3 0 0 1 0.0 .0
2. The log file sync is a result of commit --> you are committing too often, maybe even every individual record.
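The cost of committing every record versus batching can be sketched outside Oracle; SQLite is used here purely as a stand-in for any database that must flush its log synchronously on each commit, and the table/function names are illustrative:

```python
import sqlite3

def load_rows(conn, rows, commit_every_row):
    """Insert rows either committing per row (one synchronous log flush
    per record, the pattern behind heavy 'log file sync' waits) or once
    at the end of the batch."""
    cur = conn.cursor()
    for r in rows:
        cur.execute("INSERT INTO t (x) VALUES (?)", (r,))
        if commit_every_row:
            conn.commit()
    if not commit_every_row:
        conn.commit()
```

With a file-backed database, timing the two modes shows the per-row-commit version dominated by synchronous flushes; batching the commit amortizes that cost across the whole load.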
Thanks for the explanation. Actually my question is: WHY is it so slow (avg wait of 91 ms)?
3. Your IO subsystem hosting the online redo log files can be a limiting factor. We don't know anything about your online redo log configuration.
Below is my redo log configuration.
GROUP# STATUS TYPE MEMBER IS_
1 ONLINE /oradata/fs01/PERFDB1/redo_1a.log NO
1 ONLINE /oradata/fs02/PERFDB1/redo_1b.log NO
2 ONLINE /oradata/fs01/PERFDB1/redo_2a.log NO
2 ONLINE /oradata/fs02/PERFDB1/redo_2b.log NO
3 ONLINE /oradata/fs01/PERFDB1/redo_3a.log NO
3 ONLINE /oradata/fs02/PERFDB1/redo_3b.log NO
6 rows selected.
04:13:14 perf_monitor@PERFDB1> col FIRST_CHANGE# for 999999999999999999
04:13:26 perf_monitor@PERFDB1> select * from v$log;
GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIME
1 1 40689 524288000 2 YES INACTIVE 13026185905545 18-MAY-13 01:00
2 1 40690 524288000 2 YES INACTIVE 13026185931010 18-MAY-13 03:32
3 1 40691 524288000 2 NO CURRENT 13026185933550 18-MAY-13 04:00
Edited by: Kunwar on May 18, 2013 2:46 PM -
What are the log files created while running a mapping?
Hi Sasi, I have a doubt: who actually generates the "Log Events"? Is it the Log Manager or the Application services?
Hi Anitha, the Integration Service will generate two logs when the mapping runs:
1) Session log -- has the details of the task, session errors, and load statistics.
2) Workflow log -- has the details of the workflow processing and workflow errors.
The workflow log is generated when the workflow starts, and the session log is generated once the session is initiated. For more detail please refer to the Infa help docs. Normally the services generate logs, e.g. the IS and RS log their own activity. The below process happens when the workflow is initiated [copied from Infa help docs]:
1. The Integration Service writes binary log files on the node. It sends information about the sessions and workflows to the Log Manager.
2. The Log Manager stores information about workflow and session logs in the domain configuration database, such as the path to the log file location, the node that contains the log, and the Integration Service that created the log.
3. When you view a session or workflow in the Log Events window, the Log Manager retrieves the information from the domain configuration database to determine the location of the session or workflow logs.
4. The Log Manager dispatches a Log Agent to retrieve the log events on each node to display in the Log Events window.
Thanks
Sasiramesh
-
Important Configuration/Log Files for Storage Tek
Hi,
I have 6140 Storage Arrays, in the same ref. I would like to know more about command line details.
1. Important Configuration files
2. Important Log Files
3. Event Messages file
4. Important utilities like sscs, service, suppport and their default path.
5. Solaris FC constroller configuration files.
Please suggest in case I can get these details from any doc, as I have already referred to the Sun StorageTek 6140 Getting Started Guide for the same but not found much detail.
Thanks
Rajan
I figured out what my problem was here. The Windows user that the portal processes run as is the user who writes the log files. In this case, this user is a local user on the portal server and not a Windows domain user. This means that the user does not have access to write to the shared drive.
We had two options as to what we could do here. The first would be to change the user that's running the portal to a user that could write to that shared drive. The second option would be to have the portal write the log files locally on the portal server and then have a script run as a domain user on that server sometime in the day and copy the files to the shared drive. We are going with the second option.