Redirect all logs and trace files.

How can I redirect all the logs and trace files in the WebAS Java to a different directory?
Thanks for the help.
José Simões

Hi Jose,
You can't change the standard directory, so you'd better look for a workaround.
Regards,
Hari.

Similar Messages

  • Timestamp in Java log and trace files.

    Hi SAP'ies
    Running PI 7.11 on AIX 6.3, we face an issue with the content of the Java log and trace files.
    Eg.
    The file DefaultTrace_00.0.trc is timestamped 18-05-10 11:16:15. (The same time as the time of the OS/AIX)
    Looking inside the file the last statement is timestamped 2010 05 18 09:16:15.
    How can we ensure that the content of these Java files is timestamped with OS time?
    Looking into ABAP files like dev_w0 the timestamp of the file and the content are equal.
    Best regards,
    Teddy Løv Andersen

    Hello All
    The best way to convert the default trace time is to visit this site:
    http://www.csgnetwork.com/epochtime.html
    For example, if you have the following entry in the default trace:
    #1.#00265510DE7300120000000F000022030004C77AD7309DF5#1345230317066#com.sap.portal.fpn.rdl
    1345230317066 is the timestamp; enter it on the above site to get the time:
    Fri Aug 17 2012 21:05:17 GMT+0200
    Regards
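If you'd rather not depend on a website, the same conversion can be done locally. A minimal sketch, assuming GNU date (the `-d` flag is not portable to BSD/AIX date); the 13-digit stamp is epoch milliseconds, so the last three digits are dropped:

```shell
# Convert the epoch-milliseconds stamp from a default-trace entry
# to a readable date (GNU date assumed).
epoch_ms=1345230317066
date -u -d "@$((epoch_ms / 1000))" '+%a %b %d %Y %H:%M:%S UTC'
# -> Fri Aug 17 2012 19:05:17 UTC (= 21:05:17 GMT+0200, as on the website)
```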

  • Useful logs and trace files

    Hello experts, for our Netweaver AS administration, I am in charge of periodically checking logs and trace files. I would like to know which are the most useful logs and trace files and the information each one will hold. I am familiar with "DefaultTrace.trc", and as of today it is the only one I have used, but I believe I should also be looking at other logs and trace files.
    Any suggestions?

    Hi Pedro,
    If you are talking about a JAVA-only system, the defaulttrace is the best log/trace to look at. There are other log files, like the application log, but maybe the best way to check your logs is using NWA (NetWeaver Administrator) at the following URL on your JAVA system:
    http://<hostname>:<port>/nwa
    From there you need to go to Monitoring -> Logs and Traces and then Predefined View/SAP logs.
    My other recommendation is to change the severity level to ERROR for all your JAVA components within the Visual Administrator -> Server Node -> Services -> Log Configurator -> Locations; otherwise you may see a lot of garbage in the default traces. You can still change the severity level per component, on demand, to investigate any possible problem.
    The work directory is very important, and you can also check the file "dev_serverX", which will give you information about any out-of-memory conditions and garbage collection activity, provided you have these values set for the server node using the config tool:
    -verbose:gc
    -XX:+PrintGCDetails
    -XX:+PrintGCTimeStamps
    You can find more information here:
    http://help.sap.com/saphelp_nw70/helpdata/en/ac/e9d8a51c732e42bd0e7de54b9ff4e2/content.htm
    Hopefully this helps you; let me know if you need more information.
    Zareh
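As a small sketch of the dev_serverX check Zareh mentions, one could grep the work-directory trace for memory trouble. The path in the usage line is an assumption for a typical install, not taken from the thread, and the helper name is mine:

```shell
# Hypothetical helper: count out-of-memory / full-GC lines in a server trace.
# The path in the usage line is an assumption -- substitute your own SID/instance.
scan_trace() {
    grep -cE 'OutOfMemoryError|Full GC' "$1"
}
# Usage: scan_trace /usr/sap/<SID>/JC00/work/dev_server0
```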

  • External portal capturing internal portal URL in Log and trace file

    Hi,
    We are facing an issue with our portals: we have two portals, one for internal (intranet) and one for external (internet) users.
    Once users log in to the application and try to get information about "my link" from the external portal link (internet), they should not get any information about the internal portal.
    But in the log and trace file we can see the external portal link capturing the internal portal URL.
    We need to find out where the system is capturing the internal portal URL from.
    Thanks.

    The tkprof'ed trace file is in seconds.
    "set timing" is in hh:mi:ss.uu format, so 00:00:01.01 is 1.01 seconds.
    You have to remember that most of these measurements are rounded: while your trace file says it contains one second of trace data, you know it's more.
    One excellent resource for trace files is "Optimizing Oracle Performance" by Cary Millsap & Jeff Holt. (http://www.amazon.com/Optimizing-Oracle-Performance-Cary-Millsap/dp/059600527X ) I thought I knew trace files before, but this book brings your knowledge to a whole new level.
    There is also an excellent WP by Cary Millsap ( http://method-r.com/downloads/doc_details/10-for-developers-making-friends-with-the-oracle-database-cary-millsap ) that gives you some insight.

  • How to configure logs and trace files

    Hello people,
    We have just implemented ESS-MSS. We have around 25,000 people using this service, and every 2 days the logs and trace files on the server get full and the portal goes down.
    Please suggest how to solve this problem. How can I reduce the trace and log files? Is there any configuration or setting to manage this? Please explain how it can be done.
    Biren

    Hi,
    You can control which messages get logged depending on the severity.
    This can be configured using the Log Configurator; check this guide on how you can set severity for different locations.
    Netweaver Portal Log Configuration & Viewing (Part 1)
    Regards,
    Praveen Gudapati

  • Removing alert logs and trace files

    Hi everyone!
    I noticed that in all the Oracle databases the trace files are piling up and the alert log is growing like anything...
    I thought of making a copy of the trace files somewhere and removing them from the hard disk, excluding the most recent ones.
    For the alert log, I thought of making a copy and renaming the current file so that Oracle can create a new one.
    Any advice if there are better approaches in handling this?
    Thanks in advance.

    I would include the alert log file in my backup strategy, as it contains much important information: database parameter values, when and how the database was shut down, the important database errors and when they occur, etc. I usually back up the alert log file once a month and keep one year of alert log copies.
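A minimal sketch of that monthly routine. File locations, the function name, and the one-year retention are illustrative assumptions, not something the thread prescribes:

```shell
# Monthly alert-log rotation: keep a dated copy, truncate the live file,
# and prune copies older than roughly a year. Names here are illustrative.
rotate_alert_log() {
    local alert="$1"
    cp "$alert" "${alert}.$(date +%Y%m)"    # dated backup copy
    : > "$alert"                            # truncate in place; Oracle keeps writing
    find "$(dirname "$alert")" -name "$(basename "$alert").*" -mtime +365 -delete
}
# Usage: rotate_alert_log /u01/app/oracle/admin/ORCL/bdump/alert_ORCL.log
```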

  • Alert log and Trace file error

    Just found this in one of our alertlog file
    Thread 1 cannot allocate new log, sequence 199023
    checkpoint not complete
    and this in trace file:
    RECO.TRC file
    ERROR, tran=7.93.23662, session# =1, ose=60:
    ORA-12535: TNS: operation timed out

    Why would you increase the log files when the problem is a distributed transaction timed out?
    Distributed transactions time out when the data they need to access is locked. Unlike a local session that wants to update a row, which will wait forever, a distributed transaction times out. In earlier versions of Oracle you could set init.ora parameter distributed_lock_timeout to manage the timeout period. Oracle has since made this into an underbar parameter.
    The solution is to ignore the problem unless it appears regularly in which case you have an application design issue.
    HTH -- Mark D Powell --
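For concreteness, the pre-underbar parameter Mark mentions would have looked like this in init.ora. The value 60 is an arbitrary example, not from the thread:

```ini
# init.ora (older Oracle releases; later versions made this a hidden
# "_" parameter, as noted above). Value in seconds; 60 is illustrative only.
distributed_lock_timeout = 60
```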

  • Agent10g: Size of Management Agent Log and Trace Files get oversize ...

    Hi,
    I have the following problem:
    I installed the EM Agent 10g (v10.2.0.4) on each of my Oracle servers, a long time ago (a few months or a few years, depending on which server). Recently, I got a PERL error because the Agent's trace file was too big (emagent.trc was more than 1 GB)!
    I don't know why. On one particular server I checked AGENT_HOME\sysman\config (Windows) for the emd.properties file.
    The following properties are specified in the emd.properties file:
    LogFileMaxSize=4096
    LogFileMaxRolls=4
    TrcFileMaxSize=4096
    TrcFileMaxRolls=4
    This file has never been modified (those properties correspond to the default values). It's the same situation for every Agent10g setup on all of the Oracle servers.
    Any idea ?
    NOTE: The Agent is stopped and started weekly ...
    Thanks,
    Yves

    Why don't you truncate the trace file weekly? You can also delete the file; it will be created automatically whenever there is a trace.

  • Delete logs or trace file in current directory

    How can I delete trace files or the alert log older than 5 days from the current directory?

    Syntax
    find /path -name '*.trc' -mtime +<days> -exec <command> {} \;
    Example
    find /backup -name '*.log' -mtime +5 -exec ls -ltr {} \;
    find /backup -name '*.trc' -mtime +5 -exec ls -ltr {} \;
    find /backup -name '*.log' -mtime +5 -exec rm -rf {} \;
    find /backup -name '*.trc' -mtime +5 -exec rm -rf {} \;
    Edited by: hitgon on Jun 26, 2012 9:44 AM

  • Export all logs into ascii files

    Hi,
    is there a way to automatically export all logs created with SignalExpress (I use the 2009 edition) to ASCII files, taking the name of each log as a base for the file name?
    I am aware of the export-to-LVM/ASCII possibility, but this requires manually editing the file name for each file.
    So what I am looking for would be a way to apply the Options->Logging->"Automatically export log to text file" _afterwards_. Is there a way?
    Cheers,
    Niels

    Hi Niels,
    There is currently no utility or simple mechanism built into SignalExpress that will take an already-created log and export it to ASCII.
    I can think of a way to do it as a workaround, to say the least (assuming Time Waveform or Scalar logs).
    1. Create a new SignalExpress project.
    2. Import the logs you want to export to ASCII.
    3. Go to Tools >> Options... and in the Logging section, set Automatically export log to text file = Yes.
    4. Switch to Playback mode and select the first log.
    5. Drop a Formula step and leave the default values (with Formula Y=x0)
    6. Rename the output of the formula step as desired.
    7. Right-click on the output of the formula step and select "Enable Recording".
    8. Run the project.
    From this point on, running the project will create a new (duplicate) log, and inside the new log folder will also be the exported ASCII file. By changing the active log, you can export each log.
    Note 1: I did try using the Batch Process Log feature to export all logs at once, but SignalExpress returned an error after the first log...
    Note 2: I'm not sure the names will be generated exactly as you would have liked.
    I completely realize this is NOT ideal!
    Phil

  • Troubleshooting Network Problems Using Log and Trace Files

    Hi,
    Can anyone tell me how to generate trace and log files related to network errors?

    Start by inspecting listener.log.
    Post the tail end (the last 40 lines) of listener.log here.

  • How to catch all logs to one file with rsyslog?

    Hi,
    I have this config in rsyslog:
    # cat /etc/rsyslog.conf
    # Minimal config
    $ModLoad imuxsock # provides support for local system logging
    $ModLoad imklog # provides kernel logging support
    $ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
    $FileOwner root
    $FileGroup root
    $FileCreateMode 0640
    $DirCreateMode 0755
    $Umask 0022
    $WorkDirectory /var/spool/rsyslog
    $IncludeConfig /etc/rsyslog.d/*.conf
    # Single log file
    *.* /var/log/missatges.log
    # On critical (panic) notifications, send to the console of all users
    *.emerg :omusrmsg:*
    [root@serviedre ~]#
    but in 'missatges.log' there is no systemd-timesyncd output:
    # journalctl -n
    -- Logs begin at Fri 2010-01-01 01:00:05 CET, end at Mon 2014-12-08 18:33:08 CET. --
    Dec 08 18:31:20 serviedre systemd[162]: Starting Default.
    Dec 08 18:31:20 serviedre systemd[162]: Reached target Default.
    Dec 08 18:31:20 serviedre systemd[162]: Startup finished in 168ms.
    Dec 08 18:31:20 serviedre systemd[1]: Started User Manager for UID 0.
    Dec 08 18:31:44 serviedre systemd-timesyncd[129]: Using NTP server 86.59.80.170:123 (2.europe.pool.ntp.org).
    Dec 08 18:31:31 serviedre systemd[162]: Time has been changed
    Dec 08 18:31:31 serviedre systemd-timesyncd[129]: interval/delta/delay/jitter/drift 32s/-12.456s/0.115s/0.000s/+0ppm
    Dec 08 18:31:31 serviedre systemd[1]: Time has been changed
    Dec 08 18:32:03 serviedre systemd-timesyncd[129]: interval/delta/delay/jitter/drift 64s/+0.001s/0.113s/0.000s/+0ppm
    Dec 08 18:33:08 serviedre systemd-timesyncd[129]: interval/delta/delay/jitter/drift 128s/+0.004s/0.114s/0.001s/+15ppm
    [root@serviedre ~]# cat /var/log/missatges.log | grep drift
    [root@serviedre ~]#
    Why?
    Last edited by xanb (2014-12-17 09:54:50)

    This happens with a lot of info: `journalctl -b` prints:
    Dec 17 09:36:16 serviedre systemd[6276]: Received SIGRTMIN+24 from PID 6328 (kill).
    Dec 17 09:36:16 serviedre systemd[6277]: pam_unix(systemd-user:session): session closed for user root
    Dec 17 09:36:16 serviedre systemd[1]: Stopped User Manager for UID 0.
    Dec 17 09:36:16 serviedre systemd[1]: Stopping user-0.slice.
    Dec 17 09:36:17 serviedre systemd[1]: Removed slice user-0.slice.
    Dec 17 09:41:38 serviedre systemd-timesyncd[133]: interval/delta/delay/jitter/drift 2048s/-0.000s/0.099s/0.000s/+55ppm
    Dec 17 09:55:51 serviedre sshd[6333]: Accepted password for root from 172.26.0.7 port 35114 ssh2
    Dec 17 09:55:51 serviedre sshd[6333]: pam_unix(sshd:session): session opened for user root by (uid=0)
    Dec 17 09:55:51 serviedre systemd[1]: Starting user-0.slice.
    Dec 17 09:55:51 serviedre systemd[1]: Created slice user-0.slice.
    Dec 17 09:55:51 serviedre systemd[1]: Starting User Manager for UID 0...
    Dec 17 09:55:52 serviedre systemd[1]: Starting Session c2 of user root.
    Dec 17 09:55:52 serviedre systemd-logind[137]: New session c2 of user root.
    Dec 17 09:55:52 serviedre systemd[6335]: pam_unix(systemd-user:session): session opened for user root by (uid=0)
    Dec 17 09:55:52 serviedre systemd[1]: Started Session c2 of user root.
    Dec 17 09:55:52 serviedre systemd[6335]: Starting Paths.
    Dec 17 09:55:52 serviedre systemd[6335]: Reached target Paths.
    Dec 17 09:55:52 serviedre systemd[6335]: Starting Timers.
    Dec 17 09:55:52 serviedre systemd[6335]: Reached target Timers.
    Dec 17 09:55:52 serviedre systemd[6335]: Starting Sockets.
    Dec 17 09:55:52 serviedre systemd[6335]: Reached target Sockets.
    Dec 17 09:55:52 serviedre systemd[6335]: Starting Basic System.
    Dec 17 09:55:52 serviedre systemd[6335]: Reached target Basic System.
    Dec 17 09:55:52 serviedre systemd[6335]: Starting Default.
    Dec 17 09:55:52 serviedre systemd[6335]: Reached target Default.
    Dec 17 09:55:52 serviedre systemd[6335]: Startup finished in 143ms.
    Dec 17 09:55:52 serviedre systemd[1]: Started User Manager for UID 0.
    Dec 17 10:15:46 serviedre systemd-timesyncd[133]: interval/delta/delay/jitter/drift 2048s/-0.000s/0.100s/0.000s/+55ppm
    Dec 17 10:19:43 serviedre systemd[1]: Reloading.
    Dec 17 10:19:44 serviedre systemd[1]: Unknown serialization item 'subscribed=:1.0'
    but
    [root@serviedre ~]# tail /var/log/missatges.log
    Dec 14 18:33:12 localhost kernel: [ 4.959569] mmcblk0: p1
    Dec 14 18:33:12 localhost kernel: [ 6.833607] EXT4-fs (sda1): re-mounted. Opts: discard
    Dec 14 18:33:12 localhost kernel: [ 9.783482] EXT4-fs (sda8): mounted filesystem with ordered data mode. Opts: discard
    Dec 14 18:33:12 localhost kernel: [ 9.885975] EXT4-fs (sda5): mounted filesystem with ordered data mode. Opts: discard
    Dec 14 18:33:12 localhost kernel: [ 10.004117] EXT4-fs (sda6): mounted filesystem with ordered data mode. Opts: discard
    Dec 14 18:33:12 localhost kernel: [ 10.109992] EXT4-fs (sda2): mounted filesystem with ordered data mode. Opts: discard
    Dec 14 18:33:12 localhost kernel: [ 10.243718] EXT4-fs (sda7): mounted filesystem with ordered data mode. Opts: discard
    Dec 14 18:33:12 localhost kernel: [ 10.596835] EXT4-fs (sda3): mounted filesystem with ordered data mode. Opts: discard
    Dec 14 18:33:12 localhost kernel: [ 12.914250] sunxi_emac sunxi_emac.0: eth0: link up, 100Mbps, full-duplex, lpa 0x41E1
    Dec 14 18:33:22 localhost kernel: [ 23.247589] eth0: no IPv6 routers present
    [root@serviedre ~]#

  • 1250737 - SMMS: Trace file dev_ms could not be opened

    Hi All,
    "1250737 - SMMS: Trace file dev_ms could not be opened"
    I get this error in SMMS on a CRM 7.0 W2k8 failover cluster, but the SNote does not apply due to the version. Everything works, so I think it's not security related.
    Could you please help me in finding the root cause for the above error.
    Thanks,
    Rudolf

    Hi Chetan,
    according to smms (Goto / Trace level / increase, decrease) it seems that the trace level can be changed in every combination where CI/DI/ASCS are running (see the investigations below):
    But by chance I found the following (1.):
    1. If only the DI is started, I can read dev_ms via smms when the ASCS is on the "DI" node -> neither (I)* nor (II)*
    - After moving the ASCS to the "CI" node, dev_ms was not found -> (II)*
    *Compared with Note 1250737:
    (I) using CI -> dev_ms could not be opened (due to M in CI "dev_ms on C":)
    (II) using an application server not on the "ASCS node" -> dev_ms not found (an absolute path is used instead of the share)
    2. If only CI was started, dev_ms could not be opened, regardless where the ASCS is running -> (I)
    3. Both Application-Server running and ASCS on "CI"-Node:
    - logged on DI for smms trying reading dev_ms via DI -> dev_ms not found (II)
      (logged on DI for smms trying reading dev_ms via CI -> dev_ms not found (II))
    - logged on CI for smms trying reading dev_ms via DI -> dev_ms could not be opened (I)
      (logged on CI for smms trying reading dev_ms via CI -> dev_ms could not be opened (I))
    4. Both Application-Server running and ASCS on "DI"-Node:
    - logged on DI for smms trying reading dev_ms via DI -> dev_ms could not be opened (?)
      (logged on DI for smms trying reading dev_ms via CI -> dev_ms could not be opened (?))
    - logged on CI for smms trying reading dev_ms via DI -> dev_ms could not be opened (I)
      (logged on CI for smms trying reading dev_ms via CI -> dev_ms could not be opened (I))
    I don't know if it's really related to note 1250737; maybe it has something to do with an access violation: why is (1.) working, and (4.) not?
    Thank you very much for your effort
    Regards
    Rudolf

  • How to clear the alertlog and trace file?

    Since the database was created, the logs and trace files haven't been cleared. How do I clear the alert log and trace files? Thanks!

    Hi Friend,
    You can delete all the ".TRC" (trace) files to clean up the BDUMP directory. These files are not necessary for the Oracle Server to work.
    The alert.log is a file that the Oracle Server can recreate if you delete it. When an Oracle Server process performs certain actions (for example ARCH), Oracle creates the file again (if it does not exist) or appends text to it (if it exists) with the new entries.
    It can happen that some bug appears if the alert.log does not exist, though this is unlikely.
    Anyhow, on UNIX I recommend using "> filename" from the prompt to take the size of the file down to 0 bytes without deleting it. The same applies when you want to clean up the listener.log, the sqlnet.log or the sbtio.log.
    I hope my comments are of great help.
    Bye Friend...
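The "> filename" idiom above, spelled out as a tiny sketch; the helper name is mine, not from the thread:

```shell
# Truncate a log to 0 bytes without deleting it, so the process that has it
# open keeps writing to the same file. ':' is a no-op; '>' does the truncation.
truncate_log() { : > "$1"; }
# Usage: truncate_log "$ORACLE_HOME/network/log/listener.log"
```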

  • Capping dev_ms trace file size

    Hi - I'm wondering if anyone knows a way to help. We have been asked by SAP to run our message server at an elevated trace level (trace level 3 - we set it from SMMS). This writes out a huge amount of data to the dev_ms trace file.
    The default trace file (dev*) size, per the rdisp/TRACE_LOGGING parameter default, is "on, 10m" (i.e. 10 MB).
    That applies to all the dev* trace files, obviously.
    With our elevated dev_ms trace set, it wraps around in ~3 minutes, so between dev_ms.old and dev_ms we never have more than ~10 minutes of logs kept at any one time.
    The point of running at this elevated dev_ms trace level is so that we can capture (save off) the trace file and send it to SAP the next time our message server crashes.
    Our SAP file system mount point /usr/sap/<SID> is limited in size, and setting rdisp/TRACE_LOGGING to a higher value affects all the dev* files, not just the one file I really care about raising the cap on (dev_ms).
    QUESTION: does anyone know a way I could keep dev_ms capped at a large value like 100 MB yet keep all the other dev* files at the normal 10 MB default? Thanks in advance.

    1.  Increase rdisp/TRACE_LOGGING to 100MB.
    2.  Set (SM51) > Select All Processes > Menu > Process > Trace > Active Components > Uncheck everything and set trace level to 1.
    3.  Menu > Process > Trace > Dispatcher > Change Trace Level > Set to 2
    Wouldn't this essentially just increase dev_ms to 100 MB while leaving the other dev* trace files to not log anything?
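For reference, the parameter change in step 1 would look like this in the instance profile. The value format is taken from the "on, 10m" default quoted above; whether the per-component trace settings then keep the other dev* files quiet is exactly the open question in this thread:

```ini
# Instance profile (sketch): raise the shared dev* trace-file cap to 100 MB.
# Note this parameter applies to all dev* files, as the thread says.
rdisp/TRACE_LOGGING = on, 100m
```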
