Trace file getting generated
Hi,
We have PI 7.1 installed in our landscape. Trace files are being generated at
server\sapmnt\SID\DVEBMGS01\j2ee\cluster\server0\jrfc07500_06148.trc and are consuming a lot of disk space, around 1 GB.
Could you please let me know where this trace can be disabled?
thanks
Hi Yash,
Please find the details on this link:
http://help.sap.com/saphelp_nw04/Helpdata/EN/f6/daea401675752ae10000000a155106/content.htm
Name: jrfc*.trc
Location: directory j2ee/cluster/server* or a defined path
How to switch on and off: set JVM options for the server via the Config Tool and restart the server: -Djrfc.trace=0/1, -Djco.trace_path=[defined_path]
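In other words, in the Config Tool the switch is just a server JVM parameter (a sketch; the trace path shown is a hypothetical example):

```
-Djrfc.trace=0                  # 0 = JRFC tracing off, 1 = on
-Djco.trace_path=/tmp/jrfc      # optional, redirects trace output (hypothetical path)
```

Restart the server instance afterwards for the change to take effect.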
Kindly let us know if this resolves the problem.
Thanks.
Regards,
Shweta
Similar Messages
-
Sqlnet.ora trace files getting generated even after turning off tracing
Hi,
I have recently added the following parameters to the sqlnet.ora file.
TRACE_LEVEL_SERVER=16
TRACE_FILE_SERVER=SERVER
TRACE_DIRECTORY_SERVER=/ftpland/trace
TRACE_TIMESTAMP_SERVER=on
Even after removing these entries from the sqlnet.ora, I still see the trace files being generated.
#####enable tracing###################
#Trace_level_server=0
#Trace_filelen_server=1000
#Trace_fileno_server=1
#Trace_timestamp_server=on
#Trace_directory_server=/opt/oracle/product/10.2.0/network/trace
#Diag_adr_enabled=off
AUTOMATIC_IPC = ON
TRACE_LEVEL_CLIENT = OFF
SQLNET.EXPIRE_TIME = 10
NAMES.DEFAULT_DOMAIN = bsca.eds.com
NAME.DEFAULT_ZONE = bsca.eds.com
SQLNET.CRYPTO_SEED = "232166927-1713903352"
NAMES.DIRECTORY_PATH = (ONAMES,TNSNAMES)
NAMES.PREFERRED_SERVERS =
  (ADDRESS_LIST =
    (ADDRESS =
      (COMMUNITY = TCP.bsca.eds.com)
      (PROTOCOL = TCP)
      (Host = oraclenames1.bsca.eds.com)
      (Port = 1575)
    )
    (ADDRESS =
      (COMMUNITY = TCP.bsca.eds.com)
      (PROTOCOL = TCP)
      (Host = oraclenames2.bsca.eds.com)
      (Port = 1575)
    )
  )
NAME.PREFERRED_SERVERS =
  (ADDRESS_LIST =
    (ADDRESS =
      (COMMUNITY = TCP.bsca.eds.com)
      (PROTOCOL = TCP)
      (Host = oraclenames1.bsca.eds.com)
      (Port = 1575)
    )
    (ADDRESS =
      (COMMUNITY = TCP.bsca.eds.com)
      (PROTOCOL = TCP)
      (Host = oraclenames2.bsca.eds.com)
      (Port = 1575)
    )
  )
BEQUEATH_DETACH=YES
Regards,
VN
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = ROSDMP.bsca.eds.com)
      (ORACLE_HOME = /opt/oracle/product/10.2.0)
      (SID_NAME = ROSDMP)
    )
  )
TRACE_LEVEL_LISTENER=16
You have TRACE_LEVEL_LISTENER=16 set in the listener.ora. I believe this is the reason you are still seeing trace files even after disabling tracing in sqlnet.ora. -
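If that is the case, a sketch of the fix (TRACE_LEVEL_LISTENER is a standard listener.ora keyword):

```
# listener.ora: switch listener tracing off (or remove the parameter entirely)
TRACE_LEVEL_LISTENER = OFF
```

Then reload the configuration with `lsnrctl reload` so the running listener picks up the change.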
Numerous trace files are generating every minute causing space issue
Hi All,
Numerous trace files are being generated every minute in the <SID>_<PID>_APPSPERF01.trc format.
The entries in the trace file look like this:
EXEC #10:c=0,e=0,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=1734896627,tim=1339571764486430
WAIT #10: nam='SQL*Net message to client' ela= 6 driver id=1952673792 #bytes=1 p3=0 obj#=34562 tim=1339571764491273
FETCH #10:c=0,e=0,p=0,cr=2,cu=0,mis=0,r=1,dep=0,og=1,plh=1734896627,tim=1339571764486430
WAIT #10: nam='SQL*Net message from client' ela= 277 driver id=1952673792 #bytes=1 p3=0 obj#=34562 tim=1339571764491806
EXEC #11:c=0,e=0,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=2638510909,tim=1339571764486430
FETCH #11:c=0,e=0,p=0,cr=9,cu=0,mis=0,r=0,dep=0,og=1,plh=2638510909,tim=1339571764486430
WAIT #11: nam='SQL*Net message to client' ela= 6 driver id=1952673792 #bytes=1 p3=0 obj#=34562 tim=1339571764493265
*** 2012-06-13 03:16:14.496
WAIT #11: nam='SQL*Net message from client' ela= 10003326 driver id=1952673792 #bytes=1 p3=0 obj#=34562 tim=1339571774496705
BINDS #10:
Bind#0
oacdty=01 mxl=32(21) mxlc=00 mal=00 scl=00 pre=00
oacflg=00 fl2=1000001 frm=01 csi=871 siz=2064 off=0
kxsbbbfp=2b8ec799df38 bln=32 avl=03 flg=05
value="535"
Bind#1
oacdty=01 mxl=32(21) mxlc=00 mal=00 scl=00 pre=00
oacflg=00 fl2=1000001 frm=01 csi=871 siz=0 off=32
kxsbbbfp=2b8ec799df58 bln=32 avl=04 flg=01
value="1003"
SQL> show parameter trace
NAME TYPE VALUE
tracefiles_public boolean TRUE
log_archive_trace integer 0
sec_protocol_error_trace_action string TRACE
sql_trace boolean FALSE
trace_enabled boolean TRUE
tracefile_identifier string
Profile options like "FND:Debug Log Enabled" and "Utilities:SQL Trace" are set to No
Can someone help me stop this trace generation?
Is there any way to find the cause of these traces?
Thanks in advance.
Hi;
Please check who enabled the trace. Please see:
How to audit users who enabled traces?
Check concurrent programs first:
Concurrent > Program > Define
Open the form, press F11 (query mode), select the trace checkbox, then press Ctrl+F11. This should return all concurrent programs that have trace enabled.
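On the database side, a sketch that may also help (assuming 10g or later): traces enabled via DBMS_MONITOR are listed in DBA_ENABLED_TRACES, though traces set by event or init parameter will not appear there:

```sql
-- Show session/service/module level traces currently enabled via DBMS_MONITOR
SELECT trace_type, primary_id, qualifier_id1, waits, binds
  FROM dba_enabled_traces;
```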
Regards
Helios -
Session trace files not generated
Hi all,
I am tracing a session as follows:
EXEC DBMS_SUPPORT.start_trace_in_session(sid=>2166, serial=>30629, waits=>TRUE, binds=>FALSE);
EXEC DBMS_SUPPORT.stop_trace_in_session(sid=>2166, serial=>30629);
but no trace files are getting generated in the udump directory.
Any idea ?
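A sketch of an alternative worth trying on 10g and later: the documented DBMS_MONITOR package instead of DBMS_SUPPORT (the sid/serial values here are the ones from the post):

```sql
-- Enable and disable SQL trace for another session via DBMS_MONITOR
EXEC DBMS_MONITOR.session_trace_enable(session_id => 2166, serial_num => 30629, waits => TRUE, binds => FALSE);
-- ... let the target session do some work ...
EXEC DBMS_MONITOR.session_trace_disable(session_id => 2166, serial_num => 30629);
```

Note the trace file only appears once the traced session actually executes SQL after tracing is enabled.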
Kai
Likely a version- and/or platform-specific bug.
However, according to you this information is not important, or too cumbersome to type, so no one can help you, even though you were told to include it several times.
If you want free support from volunteers, at least provide information to work with.
Or consider not posting at all.
Sybrand Bakker
Senior Oracle DBA -
How to proceed further once the explain plan and trace files are generated?
Hi Friends,
I need to improve the performance of one of the views that I am working on.
As suggested in the thread http://forums.oracle.com/forums/thread.jspa?threadID=863295&tstart=0 , I have generated the explain plan and the trace file.
From the explain plan, we can see the expensive operations for the query.
Can anyone please tell me how to proceed from here, i.e. how to make these expensive operations less expensive?
For example, a FULL TABLE SCAN might be an expensive operation when the table has indexes. In such cases, how can we avoid such operations to make the query faster?
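As a sketch of the usual next step, with hypothetical table and column names (the real candidates come from your own plan and predicates):

```sql
-- Hypothetical example: generate a plan, inspect it, then address a full scan
EXPLAIN PLAN FOR
  SELECT * FROM orders WHERE customer_id = :b1;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- If the plan shows TABLE ACCESS FULL on ORDERS and the predicate is selective,
-- an index on the filter column may help:
CREATE INDEX orders_cust_idx ON orders (customer_id);
```

Whether an index actually helps depends on the selectivity of the predicate; a full scan is the right plan when most rows are returned.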
Regards,
Sreekanth Munagala.
Hi Veena,
An earlier post by you regarding P45 is as below
Starter report P45(3) / P46 efiling for UK
From my understanding (though I have not worked on GB Payroll), you said that you deleted the IT 65 details of the leaver; however, there must be clusters generated in the system from which the earlier data needs to be deleted, and that may be why you are facing the issue.
In Indian payroll, when we execute the text file for e-filing of tax after challan mapping, all the data compiles and sits in the PCL cluster, and we are then unable to generate Form 16 with proper output; here we delete the clusters, rerun the mappings, and then check Form 16.
Hope this helps. Experts have advised you earlier as well; they may correct me on this.
Salil -
Trace file not Generated.
Hey Guys,
I am using oracle 10g and trying to generate a trace file.
Using sql_Plus I can see that timed_statistics is set to true, max_dump_file_size is set to unlimited, and user_dump_dest is set to C:\ORACLE\PRODUCT\10.2.0\ADMIN\ORCL\DUMP.
In Oracle SQL Developer I am running the script:
alter session set sql_trace = true;
... PL/SQL block that I want to trace...
alter session set sql_trace = false;
After this sql runs without error there is no file on my computer at the user_dump_dest. In fact the path under user_dump_dest does not even exist. Do I have to create the path first? Am I looking in the wrong location? Am I doing something wrong when setting sql_trace?
Thanks,
Ian
In fact the path under user_dump_dest does not even exist.
Perhaps this is the source of the mystery.
If Oracle has no place to actually write the file, how should it proceed?
As a proof of concept, do the following:
ALTER SESSION SET SQL_TRACE=TRUE;
show parameter dump
SELECT SYSDATE FROM DUAL;
ALTER SESSION SET SQL_TRACE=FALSE;
Now find the trace file and post the results from the session above.
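A small extension of that proof of concept (a sketch; tracefile_identifier is a standard session parameter that tags the trace file name, making it easier to find):

```sql
-- Tag the trace file name so it is easy to spot in user_dump_dest
ALTER SESSION SET tracefile_identifier = 'my_test';
ALTER SESSION SET sql_trace = TRUE;
SELECT SYSDATE FROM DUAL;
ALTER SESSION SET sql_trace = FALSE;

-- Confirm where the server is actually writing trace files
SELECT value FROM v$parameter WHERE name = 'user_dump_dest';
```

The resulting file name should contain 'my_test'. Be aware that SQL Developer may pool connections, so the trace can land in a different session's file; running the test in SQL*Plus avoids that ambiguity.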
Edited by: sb92075 on Jun 1, 2010 4:45 PM -
Unusuall TREX Trace file getting created..
Dear All
we have installed TREX, and on the TREX server it is creating a trace file, TrexQueueServerAlert_myportalci.trc, with a size of more than 30 GB.
Following is an extract from that file. What can be the reason for this huge trace file?
Regards
Buddhike
[2700] 2008-12-17 16:34:47.158 e Qidx_publi QueueDocStore::getDocID(01189) : : DocNotFound
[2700] 2008-12-17 16:34:47.158 e Qidx_publi QueueDocStore::hasDocument(01235) : : DocIDMissing
[5296] 2008-12-17 16:34:51.111 e Qidx_publi QueueDocStore::getDocID(01189) : : DocNotFound
[4960] 2008-12-17 16:34:51.111 e Qidx_publi QueueDocStore::getDocID(01189) : : DocNotFound
[3772] 2008-12-17 16:34:51.111 e Qidx_publi QueueDocStore::getDocID(01189) : : DocNotFound
[6052] 2008-12-17 16:34:51.111 e Qidx_publi QueueDocStore::getDocID(01189) : : DocNotFound
[6052] 2008-12-17 16:34:51.111 e Qidx_publi DocStateStore.cpp(00570) : DocStateStore::getDocument(UDIV): udiv: 756, result: 4501
[6052] 2008-12-17 16:34:51.111 e Qidx_publi QueueDocStore::getDocID(01189) : : DocNotFound
[6052] 2008-12-17 16:34:51.111 e Qidx_publi QueueDocStore::hasDocument(01235) : : DocIDMissing
[6052] 2008-12-17 16:34:51.111 e Qidx_publi QueueDocStore::getDocID(01189) : : DocNotFound
[4960] 2008-12-17 16:34:51.111 e Qidx_publi QueueDocStore::getDocID(01189) : : DocNotFound
[5296] 2008-12-17 16:34:51.111 e Qidx_publi QueueDocStore::getDocID(01189) : : DocNotFound
[6052] 2008-12-17 16:34:51.111 e Qidx_publi Queue.cpp(04093) : Queue::preprocessMsg: preprocessing doc: not found
[5296] 2008-12-17 16:34:51.111 e Qidx_publi DocStateStore.cpp(00570) : DocStateStore::getDocument(UDIV): udiv: 756, result: 4501
[5296] 2008-12-17 16:34:51.111 e Qidx_publi QueueDocStore::getDocID(01189) : : DocNotFound
[5296] 2008-12-17 16:34:51.111 e Qidx_publi QueueDocStore::hasDocument(01235) : : DocIDMissing
[5296] 2008-12-17 16:34:51.111 e Qidx_publi QueueDocStore::getDocID(01189) : : DocNotFound
[3772] 2008-12-17 16:34:51.111 e Qidx_publi QueueDocStore::getDocID(01189) : : DocNotFound
[5296] 2008-12-17 16:34:51.111 e Qidx_publi Queue.cpp(04093) : Queue::preprocessMsg: preprocessing doc: not found
[3772] 2008-12-17 16:34:51.111 e Qidx_publi DocStateStore.cpp(00570) : DocStateStore::getDocument(UDIV): udiv: 756,
Dear Michell
Thanks for your post. How can I change the trace levels in TREX?
Which trace level should I keep?
Regards
Buddhike -
Trace File STAT generated on J2EE
Hi everyone,
I am having a problem with a trace file called STAT. This file is being generated, but I don't know by what.
The files are being written to the sapdata3 file system.
The system is J2EE SP15, XI 3.0 with Oracle Database!
Thanks
Hi Ricardo,
The stat files are being generated by the CCMS agent. Check
this link for more details:
http://help.sap.com/saphelp_erp2005/helpdata/en/c7/69bcc3f36611d3a6510000e835363f/frameset.htm
Regards
Srikishan -
Huge user trace files being generated under udump
For the last few days, one of my databases has been generating a huge number of user trace files under udump. I tried to investigate the alert.log file, but as I am not expert enough to analyze it, I could not find the cause. Only the ORA-00600 internal error code is found there, and it is a generic error. Please help.
Thank you again. I got the clue from your reply.
The error message is pasted below in short.
ORA-00600: internal error code, arguments: [12333], [0], [0], [0], [], [], [], []
Current SQL statement for this session:
BEGIN :1 := TESTDBA.LOCAL_SYSDATE_SFT_FNC( 1,:2,:3 ); END;
And it is obvious that the function had some problem recently. We need to investigate that. Now my request is:
Please guide me on how to search Metalink with the value '12333' that came as the first argument of the ORA-00600 error. Here my job was easy, as I got the name of the problem function, but I want to learn the art of analyzing the error messages in the alert.log file.
Also, tell me which places I should check regularly as part of my responsibility as a DBA.
What are the common errors in the alert log file? Where should I be cautious, and what can I ignore? Please give me some tips, and do not restrict your discussion to the alert.log only; please cover all important areas. If you think I am taking too much of your time, then please send me some links or PDFs, etc. -
Web service WSDL file getting generated with HTTPS protocol instead of HTTP
Hi Experts,
I have a requirement
I have created a web service in development client which is used in interactive adobe form.
After moving the web service to production, only the definition is getting generated.
I have tried to manually create the service in SOAMANAGER, but it gets created with the HTTPS protocol.
In the development client it was created with the HTTP protocol.
Warm Regards
Abhinav
Hi Abhinav,
I'm assuming you don't have a QA system in your landscape if you're transporting straight from Dev to Production. If you transported to QA before Production, you would have noticed that only the definition is ever transported. You have to redo the configuration each time you transport a change; this is SAP default behaviour, because each web service is client-dependent, so SAP cannot know which client the service will be relevant for in the environment you're transporting to.
The HTTPS issue, this makes sense to me in the production environment. You don't normally have the HTTP protocol enabled in a production environment, only HTTPS for security reasons.
In your production environment, check transaction code SMICM --> Goto (drop-down menu) --> Services.
Check which protocols are active there (with a green tick). This will indicate whether the HTTP protocol is enabled in Production or not.
Regards, Trevor -
Backup file getting generated in FRA instead of format path
Hi,
I have the below script to generate an RMAN full backup to the path given in the FORMAT directive, yet RMAN still generates one .bkp file in the FRA.
We need all backup files to be generated to the given format path only.
The file below was generated in the FRA:
o1_mf_nnndf_TAG20100804T043926_65l9t06h_.bkp
Below is the code for generating the full backup.
backup database plus archivelog FORMAT '/export/home/nfs_bak/lims/backup/ndb/backup_%d_DB_%D%M%Y_%U_%s' delete all input;
resync catalog;
report schema;
list backup;
CROSSCHECK BACKUP;
CROSSCHECK BACKUPSET;
DELETE NOPROMPT EXPIRED BACKUP;
DELETE NOPROMPT OBSOLETE DEVICE TYPE DISK;
CROSSCHECK ARCHIVELOG ALL;
DELETE NOPROMPT EXPIRED ARCHIVELOG ALL;
Oracle: 11gR2
OS: Solaris
Output for the old command:
RMAN> backup database plus archivelog FORMAT '/export/home/nfs_bak/lims/backup/ndb/backup_%d_DB_%D%M%Y_%U_%s' delete all input;
Starting backup at 05-AUG-10
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=194 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=225 device type=DISK
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=26 RECID=24 STAMP=726199910
channel ORA_DISK_1: starting piece 1 at 05-AUG-10
channel ORA_DISK_1: finished piece 1 at 05-AUG-10
piece handle=/export/home/nfs_bak/lims/backup/ndb/backup_NDB_DB_05082010_16lkhrj8_1_1_38 tag=TAG20100805T021152 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
channel ORA_DISK_1: deleting archived log(s)
archived log file name=/export/home/nfs_bak/lims/backup/ndb/NDB/archivelog/2010_08_05/o1_mf_1_26_65nok6fb_.arc RECID=24 STAMP=726199910
Finished backup at 05-AUG-10
Starting backup at 05-AUG-10
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=+DATA/lims/ndb/system01.dbf
input datafile file number=00002 name=+DATA/lims/ndb/sysaux01.dbf
input datafile file number=00006 name=+DATA/lims/ndb/datafile/ndb_cellmap_idx_tbs_01.dbf
input datafile file number=00008 name=+DATA/lims/ndb/datafile/ndb_fingerprint_tbs_idx_01.dbf
input datafile file number=00010 name=+DATA/lims/ndb/datafile/ndb_sys_idx_tbs_01.dbf
channel ORA_DISK_1: starting piece 1 at 05-AUG-10
channel ORA_DISK_2: starting full datafile backup set
channel ORA_DISK_2: specifying datafile(s) in backup set
input datafile file number=00005 name=+DATA/lims/ndb/datafile/ndb_cellmap_tbs_01.dbf
input datafile file number=00007 name=+DATA/lims/ndb/datafile/ndb_fingerprint_tbs_01.dbf
input datafile file number=00009 name=+DATA/lims/ndb/datafile/ndb_sys_tbs_01.dbf
input datafile file number=00003 name=+DATA/lims/ndb/undotbs01.dbf
input datafile file number=00004 name=+DATA/lims/ndb/users01.dbf
channel ORA_DISK_2: starting piece 1 at 05-AUG-10
channel ORA_DISK_1: finished piece 1 at 05-AUG-10
piece handle=/export/home/nfs_bak/lims/backup/ndb/backup_NDB_DB_05082010_17lkhrj9_1_1_39 tag=TAG20100805T021153 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:06:46
channel ORA_DISK_2: finished piece 1 at 05-AUG-10
piece handle=/export/home/nfs_bak/lims/backup/ndb/NDB/backupset/2010_08_05/o1_mf_nnndf_TAG20100805T021153_65nokdyo_.bkp tag=TAG20100805T021153 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:06:56
Finished backup at 05-AUG-10
Starting backup at 05-AUG-10
current log archived
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=27 RECID=25 STAMP=726200330
channel ORA_DISK_1: starting piece 1 at 05-AUG-10
channel ORA_DISK_1: finished piece 1 at 05-AUG-10
piece handle=/export/home/nfs_bak/lims/backup/ndb/backup_NDB_DB_05082010_19lkhs0b_1_1_41 tag=TAG20100805T021851 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
channel ORA_DISK_1: deleting archived log(s)
archived log file name=/export/home/nfs_bak/lims/backup/ndb/NDB/archivelog/2010_08_05/o1_mf_1_27_65noyb7m_.arc RECID=25 STAMP=726200330
Finished backup at 05-AUG-10
Starting Control File and SPFILE Autobackup at 05-AUG-10
piece handle=/export/home/nfs_bak/lims/backup/ndb/db11g_c-1074407344-20100805-01 comment=NONE
Finished Control File and SPFILE Autobackup at 05-AUG-10
Output for the updated command:
RMAN> RUN
2> {
3> allocate channel channel1 device type disk format '/export/home/nfs_bak/lims/backup/ndb/backup_%d_DB_%D%M%Y_%U_%s';
4> backup database plus archivelog delete all input;
5> release channel channel1;
6> }
7> resync catalog;
8> report schema;
9> list backup;
10> crosscheck backup;
11> crosscheck backupset;
12> delete noprompt expired backup;
13> delete noprompt obsolete;
14> crosscheck archivelog all;
15> DELETE NOPROMPT EXPIRED ARCHIVELOG ALL;
16>
starting full resync of recovery catalog
full resync complete
allocated channel: channel1
channel channel1: SID=98 device type=DISK
Starting backup at 05-AUG-10
current log archived
channel channel1: starting archived log backup set
channel channel1: specifying archived log(s) in backup set
input archived log thread=1 sequence=20 RECID=18 STAMP=726148869
input archived log thread=1 sequence=21 RECID=19 STAMP=726175480
input archived log thread=1 sequence=22 RECID=20 STAMP=726185922
input archived log thread=1 sequence=23 RECID=21 STAMP=726195156
input archived log thread=1 sequence=24 RECID=22 STAMP=726197409
channel channel1: starting piece 1 at 05-AUG-10
channel channel1: finished piece 1 at 05-AUG-10
piece handle=/export/home/nfs_bak/lims/backup/ndb/backup_NDB_DB_05082010_12lkhp52_1_1_34 tag=TAG20100805T013010 comment=NONE
channel channel1: backup set complete, elapsed time: 00:00:07
channel channel1: deleting archived log(s)
archived log file name=/export/home/nfs_bak/lims/backup/ndb/NDB/archivelog/2010_08_04/o1_mf_1_20_65m3p0rz_.arc RECID=18 STAMP=726148869
archived log file name=/export/home/nfs_bak/lims/backup/ndb/NDB/archivelog/2010_08_04/o1_mf_1_21_65mxonbz_.arc RECID=19 STAMP=726175480
archived log file name=/export/home/nfs_bak/lims/backup/ndb/NDB/archivelog/2010_08_04/o1_mf_1_22_65n7vtog_.arc RECID=20 STAMP=726185922
archived log file name=/export/home/nfs_bak/lims/backup/ndb/NDB/archivelog/2010_08_05/o1_mf_1_23_65njwhld_.arc RECID=21 STAMP=726195156
archived log file name=/export/home/nfs_bak/lims/backup/ndb/NDB/archivelog/2010_08_05/o1_mf_1_24_65nm31ko_.arc RECID=22 STAMP=726197409
Finished backup at 05-AUG-10
Starting backup at 05-AUG-10
channel channel1: starting full datafile backup set
channel channel1: specifying datafile(s) in backup set
input datafile file number=00005 name=+DATA/lims/ndb/datafile/ndb_cellmap_tbs_01.dbf
input datafile file number=00001 name=+DATA/lims/ndb/system01.dbf
input datafile file number=00002 name=+DATA/lims/ndb/sysaux01.dbf
input datafile file number=00007 name=+DATA/lims/ndb/datafile/ndb_fingerprint_tbs_01.dbf
input datafile file number=00006 name=+DATA/lims/ndb/datafile/ndb_cellmap_idx_tbs_01.dbf
input datafile file number=00008 name=+DATA/lims/ndb/datafile/ndb_fingerprint_tbs_idx_01.dbf
input datafile file number=00009 name=+DATA/lims/ndb/datafile/ndb_sys_tbs_01.dbf
input datafile file number=00003 name=+DATA/lims/ndb/undotbs01.dbf
input datafile file number=00010 name=+DATA/lims/ndb/datafile/ndb_sys_idx_tbs_01.dbf
input datafile file number=00004 name=+DATA/lims/ndb/users01.dbf
channel channel1: starting piece 1 at 05-AUG-10
channel channel1: finished piece 1 at 05-AUG-10
piece handle=/export/home/nfs_bak/lims/backup/ndb/backup_NDB_DB_05082010_13lkhp5a_1_1_35 tag=TAG20100805T013018 comment=NONE
channel channel1: backup set complete, elapsed time: 00:06:55
Finished backup at 05-AUG-10
Starting backup at 05-AUG-10
current log archived
channel channel1: starting archived log backup set
channel channel1: specifying archived log(s) in backup set
input archived log thread=1 sequence=25 RECID=23 STAMP=726197834
channel channel1: starting piece 1 at 05-AUG-10
channel channel1: finished piece 1 at 05-AUG-10
piece handle=/export/home/nfs_bak/lims/backup/ndb/backup_NDB_DB_05082010_14lkhpia_1_1_36 tag=TAG20100805T013714 comment=NONE
channel channel1: backup set complete, elapsed time: 00:00:01
channel channel1: deleting archived log(s)
archived log file name=/export/home/nfs_bak/lims/backup/ndb/NDB/archivelog/2010_08_05/o1_mf_1_25_65nmj9y9_.arc RECID=23 STAMP=726197834
Finished backup at 05-AUG-10
Starting Control File and SPFILE Autobackup at 05-AUG-10
piece handle=/export/home/nfs_bak/lims/backup/ndb/db11g_c-1074407344-20100805-00 comment=NONE
Finished Control File and SPFILE Autobackup at 05-AUG-10
released channel: channel1 -
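To keep every piece, including the controlfile/SPFILE autobackup, out of the FRA without a RUN block, the format can also be made the configured default (a sketch reusing the path from the post; the autobackup format must contain %F):

```
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/export/home/nfs_bak/lims/backup/ndb/backup_%d_DB_%D%M%Y_%U_%s';
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/export/home/nfs_bak/lims/backup/ndb/%F';
```

Once configured, a plain `backup database plus archivelog delete all input;` honours these settings on every channel.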
Agent10g: Size of Management Agent Log and Trace Files get oversize ...
Hi,
I have the following problem:
I installed the EM Agent 10g (v10.2.0.4) on each of my Oracle servers, some time ago (a few months or a few years, depending on the server). Recently I got a PERL error because the trace file of the agent was too big (emagent.trc was more than 1 GB)!
I don't know why. On one particular server I checked AGENT_HOME\sysman\config (Windows) for the emd.properties file.
The following properties are specified in the emd.properties file:
LogFileMaxSize=4096
LogFileMaxRolls=4
TrcFileMaxSize=4096
TrcFileMaxRolls=4
This file had never been modified (those properties correspond to the default value). It's the same situation for all Agent10g (setup) on all of the Oracle Server.
Any idea ?
NOTE: The Agent is stopped and started weekly ...
Thanks,
Yves
Why don't you truncate the trace file weekly? You can also delete the file; it will be created automatically whenever there is a trace.
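As a sketch, the rotation settings quoted above live in AGENT_HOME/sysman/config/emd.properties (the values are in KB per the shipped defaults, an assumption worth verifying against the comments in your own file); after editing, bounce the agent:

```
# emd.properties: cap emagent.trc at ~4 MB and keep 4 rotated copies
TrcFileMaxSize=4096
TrcFileMaxRolls=4
```

Restart with `emctl stop agent` and `emctl start agent`. Note that if the trace file grew to 1 GB despite these defaults, rotation itself was likely not working, which points at an agent defect or a permissions problem rather than the settings.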
-
Hi Experts,
We are testing a newly migrated scenario from XI 3.0 to PI 7.1.
The interface mapping consists of a message mapping followed by an ABAP mapping. This scenario works fine in XI 3.0,
whereas when I try to test the same scenario in 7.1, it creates an empty file.
I have tried to test the scenario first with the message mapping alone and then with the ABAP mapping alone.
Both work fine, generating the expected results, but when I try to run them together it creates an empty file.
I have maintained the values in exchange profile
IntegrationBuilder ->IntegrationBuilder.Repository -> com.sap.aii.repository.mapping.additionaltypes
R3_ABAP|Abap-class;R3_XSLT|XSL (ABAP Engine)
We are testing an ABAP mapping for the first time in 7.1. Please let me know if I am missing any specific setting.
Thanks,
Sudhansu
Hi,
I assume that the ABAP mapping does not get called from the ABAP stack.
Try entering the ABAP mapping name again, then save and activate. Sometimes this works.
Check the class in the ABAP stack and try to regenerate it.
In the worst case, modify something in the ABAP mapping and regenerate it.
Regards
Krish -
Oraarch files getting generated very often in /oracle/<SID>/oraarch/
Dear All
I am facing a problem: Oracle archive logs (oraarch) are being generated very frequently, a nearly 50 MB file in less than a minute, due to which my oraarch filesystem is filling up quickly.
I have monitored the system: there is no batch job, upload job, or other load running; even at idle late at night the same archiving frequency continues, so over 6-7 hours I get a total of 50 GB of archive logs.
I have Oracle 10g patch set 10.2.0.4.0 with all 39 patches and 2 CPU patches applied via OPatch and MOPatch, and all Oracle 10g parameters are set as per note 830576.
Can you please let me know what the issue can be?
Thanks & Best Regards
Rahul
You probably have statistics being populated and AWR running. Oracle 10g has system jobs that are set to run frequently (i.e. every hour, every night, etc.).
column repeat_interval format a70
select job_name, repeat_interval, enabled from dba_scheduler_jobs;
column what format a50 word_wrapped
select job, last_date, next_date, broken, what from dba_jobs;
50 MB is probably too small for your environment; you'll probably want to at least double your redo log file size.
And if your oraarch is filling up so frequently, you should consider increasing the space for oraarch. It sounds like your system is much busier than you expected.
Run this command to get an idea of what Oracle recommends for your redo log file sizes...
SELECT TARGET_MTTR,ESTIMATED_MTTR,WRITES_MTTR,WRITES_LOGFILE_SIZE, OPTIMAL_LOGFILE_SIZE FROM V$INSTANCE_RECOVERY;
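A sketch of acting on that recommendation (group numbers and the 200M size are hypothetical; pick a size guided by OPTIMAL_LOGFILE_SIZE from the query above):

```sql
-- Inspect current groups, add larger ones, then retire the small ones
SELECT group#, bytes/1024/1024 AS mb, status FROM v$log;

ALTER DATABASE ADD LOGFILE GROUP 4 SIZE 200M;
ALTER DATABASE ADD LOGFILE GROUP 5 SIZE 200M;

-- Switch until an old group shows INACTIVE in v$log, then drop it
ALTER SYSTEM SWITCH LOGFILE;
ALTER DATABASE DROP LOGFILE GROUP 1;
```

Keep in mind that larger redo logs only reduce how often logs switch; the total archive volume per day stays the same, so the oraarch filesystem still needs room for it.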
If that doesn't work, it's probably because you're not using the fast_start_mttr_target parameter, which I guess is something SAP advises against. I'm an Oracle DBA but a SAP newbie, so I'm still learning which parameters SAP wants me to set and why.
Hope that helps.
Rich -
Where is this file getting generated
DATA : w_csv(200) TYPE c. "CSV output file.
m_check_selscr_fnms c3 o.
OPEN DATASET w_srvr_nm_o_c3 FOR OUTPUT IN TEXT MODE.
IF sy-subrc NE 0.
MESSAGE e001 WITH 'File open error for output' w_srvr_nm_o_c3.
ENDIF.
WRITE: / c_output_file, 20 w_srvr_nm_o_c3.
Thanks in advance
BWer
Do you have an idea about the FILE transaction?
See the FILE_GET_NAME function module; they are probably using this FM.
Go to the FILE transaction and look at the logical path and physical path.
Reward Points if it is helpful
Thanks
Seshu