Unusual TREX trace file getting created
Dear All,
We have installed TREX, and on the TREX server a trace file TrexQueueServerAlert_myportalci.trc is being created that has grown to more than 30 GB.
The following is an extract from that file. What can be the reason for this huge trace file?
Regards
Buddhike
[2700] 2008-12-17 16:34:47.158 e Qidx_publi QueueDocStore::getDocID(01189) : : DocNotFound
[2700] 2008-12-17 16:34:47.158 e Qidx_publi QueueDocStore::hasDocument(01235) : : DocIDMissing
[5296] 2008-12-17 16:34:51.111 e Qidx_publi QueueDocStore::getDocID(01189) : : DocNotFound
[4960] 2008-12-17 16:34:51.111 e Qidx_publi QueueDocStore::getDocID(01189) : : DocNotFound
[3772] 2008-12-17 16:34:51.111 e Qidx_publi QueueDocStore::getDocID(01189) : : DocNotFound
[6052] 2008-12-17 16:34:51.111 e Qidx_publi QueueDocStore::getDocID(01189) : : DocNotFound
[6052] 2008-12-17 16:34:51.111 e Qidx_publi DocStateStore.cpp(00570) : DocStateStore::getDocument(UDIV): udiv: 756, result: 4501
[6052] 2008-12-17 16:34:51.111 e Qidx_publi QueueDocStore::getDocID(01189) : : DocNotFound
[6052] 2008-12-17 16:34:51.111 e Qidx_publi QueueDocStore::hasDocument(01235) : : DocIDMissing
[6052] 2008-12-17 16:34:51.111 e Qidx_publi QueueDocStore::getDocID(01189) : : DocNotFound
[4960] 2008-12-17 16:34:51.111 e Qidx_publi QueueDocStore::getDocID(01189) : : DocNotFound
[5296] 2008-12-17 16:34:51.111 e Qidx_publi QueueDocStore::getDocID(01189) : : DocNotFound
[6052] 2008-12-17 16:34:51.111 e Qidx_publi Queue.cpp(04093) : Queue::preprocessMsg: preprocessing doc: not found
[5296] 2008-12-17 16:34:51.111 e Qidx_publi DocStateStore.cpp(00570) : DocStateStore::getDocument(UDIV): udiv: 756, result: 4501
[5296] 2008-12-17 16:34:51.111 e Qidx_publi QueueDocStore::getDocID(01189) : : DocNotFound
[5296] 2008-12-17 16:34:51.111 e Qidx_publi QueueDocStore::hasDocument(01235) : : DocIDMissing
[5296] 2008-12-17 16:34:51.111 e Qidx_publi QueueDocStore::getDocID(01189) : : DocNotFound
[3772] 2008-12-17 16:34:51.111 e Qidx_publi QueueDocStore::getDocID(01189) : : DocNotFound
[5296] 2008-12-17 16:34:51.111 e Qidx_publi Queue.cpp(04093) : Queue::preprocessMsg: preprocessing doc: not found
[3772] 2008-12-17 16:34:51.111 e Qidx_publi DocStateStore.cpp(00570) : DocStateStore::getDocument(UDIV): udiv: 756,
Dear Michell,
Thanks for your post. How can I change the trace levels in TREX?
Which trace level should I keep?
Regards
Buddhike
Similar Messages
-
Delete TREX Trace Files and Turn TREX Off
Hi,
We have massive amounts of TREX trace files that are taking up all of the disk space. How do I delete these files? How do I know which files to delete? Also, we are not using TREX on our BI server; how do I turn this off so that these files are never created?
Note from Moderator- don't use all caps in title as it is seen as shouting
Thanks
Edited by: Marilyn Pratt on Jan 27, 2009 12:19 PM
Hi Sanam,
For the TREX Preprocessor trace, you define the maximum file size for the input and output of documents in the INI file TREXPreprocessor.ini.
The default size is set to 'Unlimited'.
Please check the following values maintained in the configuration file.
Section: [ httpclient ]
Parameter: max_content_length
Default: Unlimited
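For illustration, a bounded setting in TREXPreprocessor.ini might look like the following; the section and parameter names come from the values above, while the 100 MB figure is only an example, not a recommendation:

```
; TREXPreprocessor.ini -- example value only
[httpclient]
max_content_length = 100000000
```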
Also read,
http://help.sap.com/saphelp_sm32/helpdata/en/1d/8d49cc304dd843a3a4a23d74888939/content.htm
Hope this is helpful.
Regards,
Varun -
Hi,
We have PI 7.1 installed in our landscape. Some trace files are getting generated at
server\sapmnt\SID\DVEBMGS01\j2ee\cluster\server0\jrfc07500_06148.trc and are using around 1 GB of disk space.
Could you please let me know where this trace can be disabled?
Thanks
Hi Yash,
Please find the details on this link:
http://help.sap.com/saphelp_nw04/Helpdata/EN/f6/daea401675752ae10000000a155106/content.htm
Name: jrfc*.trc
Location: directory j2ee/cluster/server* or a defined path
How to switch on and off: set JVM options for the server via the Config Tool and restart the server: -Djrfc.trace=0/1, -Djco.trace_path=[defined_path]
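For example, to switch the JRFC trace off, the Java parameters entered for the server node in the Config Tool would look like this (the trace path is an example value, not from the original post):

```
-Djrfc.trace=0
-Djco.trace_path=/tmp/jco_trace
```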
Kindly let us know if this resolves the problem.
Thanks.
Regards,
Shweta -
I have recently noticed that I am getting strange files, and I am not sure where they are coming from. The files, when opened using vi, show the text MUTX and that is all they contain. Their names start with 0x, followed by a rather long string of hexadecimal digits, for example 0x6c6d48142. The files are getting placed directly on my main hard drive. The files are 44 bytes in size, but many, many of them are getting created daily. I have found little to no articles explaining what may be related to MUTX, except that it may be related to some type of "leak", and I do not understand what that means. Has anyone seen this, or this type of thing, before? Thank you for your help.
AnkitV wrote:
Hi
I am facing the below peculiar problem.
Our prod database D1 is installed on host H1 and PL/SQL jobs run on server S1 accessing D1 and create files on S1 only.
Recently I got the same jobs scheduled to run on S2, accessing the D2 (test) database installed on host H2, but the files are getting created on H2 instead of S2.
Directory object is EXTERNAL_TABLE_DIR on both S1 and S2.
Actually files should be created on S2 only as jobs are being run there only.
Can you please tell why are the files getting created on H2 given that jobs are run from S2 and what can be done to rectify this ?
Thanks a lot
Hi,
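A quick illustration of the behavior in question, as a hedged sketch (only the directory name comes from the post; the path and the UTL_FILE snippet are examples): a directory object always resolves to a path on the database host, regardless of which machine submits the job.

```sql
-- The directory object lives in the database, so its path is
-- interpreted on the database server (H2), not on the job host (S2).
CREATE OR REPLACE DIRECTORY external_table_dir AS '/data/exports';

-- Any file written through it, e.g. with UTL_FILE, lands on the DB host.
DECLARE
  f UTL_FILE.FILE_TYPE;
BEGIN
  f := UTL_FILE.FOPEN('EXTERNAL_TABLE_DIR', 'demo.txt', 'w');
  UTL_FILE.PUT_LINE(f, 'written on the database server');
  UTL_FILE.FCLOSE(f);
END;
/
```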
The job will create the file only on the server where the database is installed, not on any other server. -
When we create a queue how many files get created???
Hi All,
When we create a resource suppose a queue then how many files get created or modified from that action internally??
Thanks in advance
Sumeet
In 8.1, one file: "config.xml"; in 9.0, one or two files (your JMS module file, and potentially the config.xml).
-
Which file gets created automatically while migrating
Which file gets created automatically while migrating from non-ASM files to an ASM diskgroup by using RMAN? (No. 113)
A. the control file
B. the alert log file
C. the parameter file
D. the server parameter file
E. the archive redo log file
Message was edited by:
frank.qian
Reviewing the steps in the documentation provided below:
Oracle® Database Backup and Recovery Advanced User's Guide
Migrating a Database into ASM
http://download-east.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmasm001.htm#i1016547
None of the choices are correct. -
Which file gets created automatically while migrating from... (Q No. 113)
Which file gets created automatically while migrating from non-ASM files to ASM diskgroup by using RMAN?
A. the control file
B. the alert log file
C. the parameter file
D. the server parameter file
E. the archive redo log file
Check this out:
http://www.stanford.edu/dept/itss/docs/oracle/10g/server.101/b10734/rcmasm.htm -
Too many trace files apparently created on behalf of Health Monitor
Every ~10 minutes I see two pairs of trc+trm files that appear to be nothing more than a nuisance. A nuisance because if you don't get rid of them, commands such as "ls -lt | head" take too long to complete. Even after setting _disable_health_check=true, the files still keep coming. The only thing I can find is a correspondence between the rows in v$hm_run and these trace files.
The RDBMS appears to be working fine, and there's nothing in the alert log that would explain their existence. Having said that, I'll go ahead and admit that I can't prove these are false alarms, but it sure looks like they are.
Below is a sample of the two trace files among the four files that get created every 10 minutes. Thanks.
Here is V11106_m001_8138.trc (including the preamble)
Trace file /opt/oracle/diag/rdbms/v11/V11106/trace/V11106_m001_8138.trc
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORACLE_HOME = /opt/oracle/product/11.1.0.6
System name: Linux
Node name: rhel01.dev.method-r.com
Release: 2.6.18-92.1.13.el5xen
Version: #1 SMP Thu Sep 4 04:20:55 EDT 2008
Machine: i686
Instance name: V11106
Redo thread mounted by this instance: 1
Oracle process number: 26
Unix process pid: 8138, image: [email protected] (m001)
*** 2009-04-03 00:31:36.650
*** SESSION ID:(139.36) 2009-04-03 00:31:36.650
*** CLIENT ID:() 2009-04-03 00:31:36.650
*** SERVICE NAME:(SYS$BACKGROUND) 2009-04-03 00:31:36.650
*** MODULE NAME:(MMON_SLAVE) 2009-04-03 00:31:36.650
*** ACTION NAME:(DDE async action) 2009-04-03 00:31:36.650
========= Dump for error ORA 1110 (no incident) ========
----- DDE Action: 'DB_STRUCTURE_INTEGRITY_CHECK' (Async) -----
Here is V11106_m000_8136.trc (minus the preamble)
*** 2009-04-03 00:31:36.471
*** SESSION ID:(139.34) 2009-04-03 00:31:36.471
*** CLIENT ID:() 2009-04-03 00:31:36.471
*** SERVICE NAME:(SYS$BACKGROUND) 2009-04-03 00:31:36.471
*** MODULE NAME:(MMON_SLAVE) 2009-04-03 00:31:36.471
*** ACTION NAME:(Monitor Tablespace Thresholds) 2009-04-03 00:31:36.471
DDE rules only execution for: ORA 1110
----- START Event Driven Actions Dump ----
---- END Event Driven Actions Dump ----
----- START DDE Actions Dump -----
----- DDE Action: 'DB_STRUCTURE_INTEGRITY_CHECK' (Async) -----
Successfully dispatched
----- (Action duration in csec: 1) -----
----- END DDE Actions Dump -----
Oracle Database 11g by Sam R. Alapati and Charles Kim seems to be a better resource. Here's a link:
http://books.google.com/books?id=14ZH0eZV6G8C&pg=PA60&lpg=PA60&dq=oracle+disable+adr&source=bl&ots=brbhVP05RD&sig=WpaASLcGzJHgBB8Q-RqHu0Efy3k&hl=en&ei=AybaSemkNJSwywWSptXuDg&sa=X&oi=book_result&ct=result&resnum=7#PPA81,M1
When I list the hm rows (from x$dbkrun, I presume), it shows this detail, which repeats many, many times:
HM RUN RECORD 1
RUN_ID 184481
RUN_NAME HM_RUN_184481
CHECK_NAME DB Structure Integrity Check
NAME_ID 2
MODE 2
START_TIME 2009-04-05 22:44:43.385054 -05:00
RESUME_TIME <NULL>
END_TIME 2009-04-05 22:44:43.718198 -05:00
MODIFIED_TIME 2009-04-05 22:44:43.718198 -05:00
TIMEOUT 0
FLAGS 0
STATUS 5
SRC_INCIDENT_ID 0
NUM_INCIDENTS 0
ERR_NUMBER 0
REPORT_FILE <NULL>
The corresponding data from v$hm_run (and the view definition itself) show that STATUS = 5 means 'COMPLETED'. The interesting thing is that RUN_MODE is 'REACTIVE'. This seems to say that it's not a proactive thing. But if something is reacting, then why is it showing no error (i.e., ERR_NUMBER = 0)? -
Agent10g: Management Agent log and trace files getting oversized
Hi,
I have the following problem:
I had installed the EM Agent 10g (v10.2.0.4) on each of my Oracle servers, a long time ago (a few months or a few years ago, depending on which server it was installed on). Recently, I got PERL errors because the "trace" file of the Agent was too big (the emagent.trc was more than 1 GB)!
I don't know why. I checked a particular server's AGENT_HOME\sysman\config (Windows) for the emd.properties file.
The following properties are specified in the emd.properties file:
LogFileMaxSize=4096
LogFileMaxRolls=4
TrcFileMaxSize=4096
TrcFileMaxRolls=4
This file has never been modified (those properties correspond to the default values). It's the same situation for all the Agent 10g installations on all of the Oracle servers.
Any idea ?
NOTE: The Agent is stopped and started weekly ...
Thank's
Yves
Why don't you truncate the trace file weekly? You can also delete the file; it will be created automatically whenever there is a trace.
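A minimal housekeeping sketch along those lines; the AGENT_HOME path is an example and should be adjusted for your installation:

```shell
# Empty the agent trace file in place rather than deleting it, so a
# running agent keeps a valid open file handle on the same inode.
AGENT_HOME=${AGENT_HOME:-/u01/app/oracle/agent10g}
TRC="$AGENT_HOME/sysman/log/emagent.trc"

if [ -f "$TRC" ]; then
  : > "$TRC"   # truncate to zero bytes
fi
```

Scheduled weekly from cron, this keeps emagent.trc from growing unbounded without disturbing the agent.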
-
Sqlnet.ora trace files getting generated even after turning off tracing
Hi,
I have recently added the following parameters to the sqlnet.ora file.
TRACE_LEVEL_SERVER=16
TRACE_FILE_SERVER=SERVER
TRACE_DIRECTORY_SERVER=/ftpland/trace
TRACE_TIMESTAMP_SERVER=on
Even after removing these entries from the sqlnet.ora, I still see the trace files being generated.
#####enable tracing###################
#Trace_level_server=0
#Trace_filelen_server=1000
#Trace_fileno_server=1
#Trace_timestamp_server=on
#Trace_directory_server=/opt/oracle/product/10.2.0/network/trace
#Diag_adr_enabled=off
AUTOMATIC_IPC = ON
TRACE_LEVEL_CLIENT = OFF
SQLNET.EXPIRE_TIME = 10
NAMES.DEFAULT_DOMAIN = bsca.eds.com
NAME.DEFAULT_ZONE = bsca.eds.com
SQLNET.CRYPTO_SEED = "232166927-1713903352"
NAMES.DIRECTORY_PATH = (ONAMES,TNSNAMES)
NAMES.PREFERRED_SERVERS =
(ADDRESS_LIST =
(ADDRESS =
(COMMUNITY = TCP.bsca.eds.com)
(PROTOCOL = TCP)
(Host = oraclenames1.bsca.eds.com)
(Port = 1575)
(ADDRESS =
(COMMUNITY = TCP.bsca.eds.com)
(PROTOCOL = TCP)
(Host = oraclenames2.bsca.eds.com)
(Port = 1575)
NAME.PREFERRED_SERVERS =
(ADDRESS_LIST =
(ADDRESS =
(COMMUNITY = TCP.bsca.eds.com)
(PROTOCOL = TCP)
(Host = oraclenames1.bsca.eds.com)
(Port = 1575)
(ADDRESS =
(COMMUNITY = TCP.bsca.eds.com)
(PROTOCOL = TCP)
(Host = oraclenames2.bsca.eds.com)
(Port = 1575)
BEQUEATH_DETACH=YES
Regards,
VN
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME =ROSDMP.bsca.eds.com)
(ORACLE_HOME = /opt/oracle/product/10.2.0)
(SID_NAME = ROSDMP)
TRACE_LEVEL_LISTENER=16
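That listener.ora setting keeps listener tracing on regardless of what sqlnet.ora says. A sketch of the change that would switch it off (file location assumed to be the usual network/admin directory):

```
# listener.ora -- set the listener trace level off
TRACE_LEVEL_LISTENER = off

# then reload the listener configuration from the shell:
#   lsnrctl reload
```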
I believe this is the reason you are seeing trace files even after disabling tracing in sqlnet.ora. -
Oracle 10gR2 OMF - Control file getting created at default unix location
Dear Experts,
I have configured the following parameters in the initialization file:
DB_CREATE_FILE_DEST
DB_CREATE_ONLINE_LOG_DEST_1
DB_CREATE_ONLINE_LOG_DEST_2
DB_RECOVERY_FILE_DEST
While creating a new database (SQL> create database test), all files are created as documented, but the control files are created at the OS default location and not as per the defined parameters.
Am I missing something, or is this the default behavior? As far as I understood the admin guide, they should go along with the redo log file locations.
Kind Rgds
Rajendra
Resolved.
-
IDOC to CSV: file getting created in target without any data
Dear All,
Scenario:IDOC to CSV
We have developed the ID part and reused the IR part. The interface is successful end to end in both the ABAP and Java stacks, and a file has been created in the target, but without any data.
I think we need to make changes in the content conversion.
Below are the details which we are using in content conversion:
Name Value
ListPrice.fieldSeparator ,
ListPrice.ProcessFieldNames fromConfiguration
ListPrice.fieldNames CompanyID,SalesOrg,ProductID,ValidFrom,ValidTo,UOM,ListPrice,RecordType,LineID,UpdatedDate
Can you please give me the parameters we need to use other than the above, and the reason why the target file has no data?
Note: We are re-using the IR part, and we have given the field names in content conversion in the correct order with proper case sensitivity.
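For reference, receiver content conversion normally also carries an end separator for each record; a hedged sketch of the full set (the endSeparator line and its 'nl' value are an assumption to check, not from the original configuration):

```
ListPrice.fieldSeparator ,
ListPrice.endSeparator 'nl'
ListPrice.fieldNames CompanyID,SalesOrg,ProductID,ValidFrom,ValidTo,UOM,ListPrice,RecordType,LineID,UpdatedDate
```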
Thanks and regards,
Manikandan
Hi Abhishek and Rahul,
I am sorry, I gave the sender payload earlier. Please find the receiver payload below.
<?xml version="1.0" encoding="UTF-8"?>
<ns0:SendListPriceToHub xmlns:ns0="http://tempuri.org/"><ns0:XMLData><ns0:ListPrice><ns0:CompanyID>1525</ns0:CompanyID><ns0:SalesOrg>A001</ns0:SalesOrg><ns0:ProductID>20002648</ns0:ProductID><ns0:ValidFrom>2009-11-12T00:00:0010:00</ns0:ValidFrom><ns0:ValidTo>9999-12-31T00:00:0010:00</ns0:ValidTo><ns0:UOM>PC</ns0:UOM><ns0:ListPrice>111.00</ns0:ListPrice><ns0:RecordType>U</ns0:RecordType><ns0:LineID>22158</ns0:LineID><ns0:UpdatedDate>0000-00-00T00:00:0000:00</ns0:UpdatedDate></ns0:ListPrice><ns0:ListPrice><ns0:CompanyID>1525</ns0:CompanyID><ns0:SalesOrg>A001</ns0:SalesOrg><ns0:ProductID>100000001</ns0:ProductID><ns0:ValidFrom>2009-11-23T00:00:0010:00</ns0:ValidFrom><ns0:ValidTo>2199-12-31T00:00:0010:00</ns0:ValidTo><ns0:UOM>PC</ns0:UOM><ns0:ListPrice>1000.00</ns0:ListPrice><ns0:RecordType>U</ns0:RecordType><ns0:LineID>22363</ns0:LineID><ns0:UpdatedDate>0000-00-00T00:00:0000:00</ns0:UpdatedDate></ns0:ListPrice><ns0:ListPrice><ns0:CompanyID>1525</ns0:CompanyID><ns0:SalesOrg>A001</ns0:SalesOrg><ns0:ProductID>100000003</ns0:ProductID><ns0:ValidFrom>2009-12-01T00:00:0010:00</ns0:ValidFrom><ns0:ValidTo>9999-12-31T00:00:0010:00</ns0:ValidTo><ns0:UOM>PC</ns0:UOM><ns0:ListPrice>230.00</ns0:ListPrice><ns0:RecordType>U</ns0:RecordType><ns0:LineID>22572</ns0:LineID><ns0:UpdatedDate>0000-00-00T00:00:0000:00</ns0:UpdatedDate></ns0:ListPrice><ns0:ListPrice><ns0:CompanyID>1525</ns0:CompanyID><ns0:SalesOrg>A001</ns0:SalesOrg><ns0:ProductID>20002901</ns0:ProductID><ns0:ValidFrom>2000-12-04T00:00:0010:00</ns0:ValidFrom><ns0:ValidTo>2009-11-30T00:00:0010:00</ns0:ValidTo><ns0:UOM>PC</ns0:UOM><ns0:ListPrice>20.00</ns0:ListPrice><ns0:RecordType>U</ns0:RecordType><ns0:LineID>22673</ns0:LineID><ns0:UpdatedDate>0000-00-00T00:00:0000:00</ns0:UpdatedDate></ns0:ListPrice><ns0:ListPrice><ns0:CompanyID>1525</ns0:CompanyID><ns0:SalesOrg>A001</ns0:SalesOrg><ns0:ProductID>20002647</ns0:ProductID><ns0:ValidFrom>2009-12-04T00:00:0010:00</ns0:ValidFrom><ns0:ValidTo>9999-12-31T00:00:0010:00</ns0:ValidTo><
ns0:UOM>PC</ns0:UOM><ns0:ListPrice>90.00</ns0:ListPrice><ns0:RecordType>U</ns0:RecordType><ns0:LineID>22674</ns0:LineID><ns0:UpdatedDate>0000-00-00T00:00:0000:00</ns0:UpdatedDate></ns0:ListPrice><ns0:ListPrice><ns0:CompanyID>1525</ns0:CompanyID><ns0:SalesOrg>A001</ns0:SalesOrg><ns0:ProductID>100000007</ns0:ProductID><ns0:ValidFrom>2009-12-01T00:00:0010:00</ns0:ValidFrom><ns0:ValidTo>2010-01-19T00:00:0010:00</ns0:ValidTo><ns0:UOM>PC</ns0:UOM><ns0:ListPrice>900.00</ns0:ListPrice><ns0:RecordType>U</ns0:RecordType><ns0:LineID>22715</ns0:LineID><ns0:UpdatedDate>0000-00-00T00:00:0000:00</ns0:UpdatedDate></ns0:ListPrice><ns0:ListPrice><ns0:CompanyID>1525</ns0:CompanyID><ns0:SalesOrg>A001</ns0:SalesOrg><ns0:ProductID>100000010</ns0:ProductID><ns0:ValidFrom>2009-12-11T00:00:0010:00</ns0:ValidFrom><ns0:ValidTo>9999-12-31T00:00:0010:00</ns0:ValidTo><ns0:UOM>PC</ns0:UOM><ns0:ListPrice>14.00</ns0:ListPrice><ns0:RecordType>U</ns0:RecordType><ns0:LineID>22831</ns0:LineID><ns0:UpdatedDate>0000-00-00T00:00:0000:00</ns0:UpdatedDate></ns0:ListPrice><ns0:ListPrice><ns0:CompanyID>1525</ns0:CompanyID><ns0:SalesOrg>A001</ns0:SalesOrg><ns0:ProductID>20002655</ns0:ProductID><ns0:ValidFrom>2009-12-16T00:00:0010:00</ns0:ValidFrom><ns0:ValidTo>9999-12-31T00:00:0010:00</ns0:ValidTo><ns0:UOM>PC</ns0:UOM><ns0:ListPrice>350.00</ns0:ListPrice><ns0:RecordType>U</ns0:RecordType><ns0:LineID>22985</ns0:LineID><ns0:UpdatedDate>0000-00-00T00:00:0000:00</ns0:UpdatedDate></ns0:ListPrice><ns0:ListPrice><ns0:CompanyID>1525</ns0:CompanyID><ns0:SalesOrg>A001</ns0:SalesOrg><ns0:ProductID>20002901</ns0:ProductID><ns0:ValidFrom>2009-12-01T00:00:0010:00</ns0:ValidFrom><ns0:ValidTo>9999-12-31T00:00:0010:00</ns0:ValidTo><ns0:UOM>PC</ns0:UOM><ns0:ListPrice>80.00</ns0:ListPrice><ns0:RecordType>U</ns0:RecordType><ns0:LineID>23196</ns0:LineID><ns0:UpdatedDate>0000-00-00T00:00:0000:00</ns0:UpdatedDate></ns0:ListPrice><ns0:ListPrice><ns0:CompanyID>1525</ns0:CompanyID><ns0:SalesOrg>A001</ns0:SalesOrg><ns0:ProductID>2000302
7</ns0:ProductID><ns0:ValidFrom>2010-01-04T00:00:0010:00</ns0:ValidFrom><ns0:ValidTo>9999-12-31T00:00:0010:00</ns0:ValidTo><ns0:UOM>PC</ns0:UOM><ns0:ListPrice>120.00</ns0:ListPrice><ns0:RecordType>U</ns0:RecordType><ns0:LineID>23309</ns0:LineID><ns0:UpdatedDate>0000-00-00T00:00:0000:00</ns0:UpdatedDate></ns0:ListPrice><ns0:ListPrice><ns0:CompanyID>1525</ns0:CompanyID><ns0:SalesOrg>A001</ns0:SalesOrg><ns0:ProductID>20003028</ns0:ProductID><ns0:ValidFrom>2010-01-06T00:00:0010:00</ns0:ValidFrom><ns0:ValidTo>9999-12-31T00:00:0010:00</ns0:ValidTo><ns0:UOM>PC</ns0:UOM><ns0:ListPrice>12.00</ns0:ListPrice><ns0:RecordType>U</ns0:RecordType><ns0:LineID>23428</ns0:LineID><ns0:UpdatedDate>0000-00-00T00:00:0000:00</ns0:UpdatedDate></ns0:ListPrice><ns0:ListPrice><ns0:CompanyID>1525</ns0:CompanyID><ns0:SalesOrg>A001</ns0:SalesOrg><ns0:ProductID>20002924</ns0:ProductID><ns0:ValidFrom>2010-01-07T00:00:0010:00</ns0:ValidFrom><ns0:ValidTo>9999-12-31T00:00:0010:00</ns0:ValidTo><ns0:UOM>PC</ns0:UOM><ns0:ListPrice>96.00</ns0:ListPrice><ns0:RecordType>U</ns0:RecordType><ns0:LineID>23469</ns0:LineID><ns0:UpdatedDate>0000-00-00T00:00:0000:00</ns0:UpdatedDate></ns0:ListPrice><ns0:ListPrice><ns0:CompanyID>1525</ns0:CompanyID><ns0:SalesOrg>A001</ns0:SalesOrg><ns0:ProductID>20091001</ns0:ProductID><ns0:ValidFrom>2010-01-14T00:00:0010:00</ns0:ValidFrom><ns0:ValidTo>9999-12-31T00:00:0010:00</ns0:ValidTo><ns0:UOM>PC</ns0:UOM><ns0:ListPrice>123.00</ns0:ListPrice><ns0:RecordType>U</ns0:RecordType><ns0:LineID>23647</ns0:LineID><ns0:UpdatedDate>0000-00-00T00:00:0000:00</ns0:UpdatedDate></ns0:ListPrice><ns0:ListPrice><ns0:CompanyID>1525</ns0:CompanyID><ns0:SalesOrg>A001</ns0:SalesOrg><ns0:ProductID>100000007</ns0:ProductID><ns0:ValidFrom>2010-01-20T00:00:0010:00</ns0:ValidFrom><ns0:ValidTo>9999-12-31T00:00:0010:00</ns0:ValidTo><ns0:UOM>PC</ns0:UOM><ns0:ListPrice>10.00</ns0:ListPrice><ns0:RecordType>U</ns0:RecordType><ns0:LineID>23681</ns0:LineID><ns0:UpdatedDate>0000-00-00T00:00:0000:00</ns0:UpdatedDat
e></ns0:ListPrice></ns0:XMLData></ns0:SendListPriceToHub> -
A problem within a third-party application is causing it to create and abandon Oracle sessions. At times three hundred or more abandoned sessions accumulated in the instance. The software company is working on the problem. Oracle's background processes will get rid of those sessions after several hours, but at times there were so many they caused the server to start using paging space. We wrote a SQL*Plus script to identify the abandoned sessions and kill them with command "alter system kill session <sid, serial#> immediate;". We automated the execution of the script a week ago. Today I noticed that in my udump directory an Oracle trace file has been created each time our script kills a session. A single trace file is created regardless of how many sessions are killed. No errors appear in the trace file.
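A sketch of the kind of kill script described above; the program name and the two-hour idle criterion are assumptions for illustration, not the actual script:

```sql
-- Generate kill commands for long-idle sessions from the problem
-- application (the program name is hypothetical).
SELECT 'alter system kill session ''' || sid || ',' || serial# ||
       ''' immediate;'
FROM   v$session
WHERE  program = 'thirdpartyapp.exe'
AND    status  = 'INACTIVE'
AND    last_call_et > 7200;
```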
Is the creation of these trace files an indication that problems have occurred or are they there for information only?
Since I know how and why the sessions are being killed, is it safe to ignore the trace files?
Thank you,
Bill
The OS is AIX 5.2. The database server is 10.2.0.2. We are in the process of upgrading to AIX 7.1 and database server 11.2.0.3.6.
The script does not enable tracing for the SQL*Plus session.
Below is the alert log message from a session killed at 11:22, and the corresponding trace file created at that same time:
From alert_<sid>.log:
Wed Jul 31 11:22:01 2013
Immediate Kill Session#: 1119, Serial#: 59885
Immediate Kill Session: sess: 70000014dc4a7e0 OS pid: 267254
/u02/admin/EXPRESS/udump $ ls -l express_ora_113358.trc
-rw-r----- 1 oracle dba 2276 Jul 31 11:22 express_ora_113358.trc
/u02/admin/EXPRESS/udump $ pg express_ora_113358.trc
Dump file /u02/admin/EXPRESS/udump/express_ora_113358.trc
Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
ORACLE_HOME = /u01/app/oracle/product/10.2.0
System name: AIX
Node name: navis
Release: 2
Version: 5
Machine: 0005CD8C4C00
Instance name: EXPRESS
Redo thread mounted by this instance: 1
Oracle process number: 225
Unix process pid: 113358, image: oracleEXPRESS@navis
*** ACTION NAME:() 2013-07-31 11:22:01.181
*** MODULE NAME:(SQL*Plus) 2013-07-31 11:22:01.181
*** SERVICE NAME:(EXPRESS.WORLD) 2013-07-31 11:22:01.181
*** SESSION ID:(1723.61000) 2013-07-31 11:22:01.181
SO: 70000014d44d278, type: 2, owner: 0, flag: INIT/-/-/0x00
(process) Oracle pid=463, calls cur/top: 0/700000139166298, flag: (0) -
int error: 0, call error: 0, sess error: 0, txn error 0
(post info) last post received: 108 0 4
last post received-location: kslpsr
last process to post me: 70000014d36c398 1 6
last post sent: 0 0 24
last post sent-location: ksasnd
last process posted by me: 70000014d36c398 1 6
(latch info) wait_event=0 bits=0
Process Group: DEFAULT, pseudo proc: 70000014d6aec00
O/S info: user: oracle, term: UNKNOWN, ospid: 267254
OSD pid info: Unix process pid: 267254, image: oracleEXPRESS@navis
Short stack dump:
ksdxfstk+002c<-ksdxcb+04e4<-sspuser+0074<-00004CB0<-nttrd+0120<-nsprecv+0750<-ns
rdr+0114<-nsdo+1714<-nsbrecv+0040<-nioqrc+04a8<-opikndf2+0688<-opitsk+088c<-opii
no+0990<-opiodr+0adc<-opidrv+0474<-sou2o+0090<-opimai_real+01bc<-main+0098<-__st
art+0090
Dump of memory from 0x070000014D2CC3B0 to 0x070000014D2CC5B8
70000014D2CC3B0 00000004 00000000 07000001 39DA8D48 [............9..H]
70000014D2CC3C0 00000010 0003139D 07000001 39166298 [............9.b.]
70000014D2CC3D0 00000003 0003139D 07000001 4C73D508 [............Ls..]
70000014D2CC3E0 0000000B 0003139D 07000001 4DC4A7E0 [............M...]
70000014D2CC3F0 00000004 00031291 00000000 00000000 [................]
70000014D2CC400 00000000 00000000 00000000 00000000 [................]
Repeat 26 times
70000014D2CC5B0 00000000 00000000 [........]
Thanks,
Bill -
IDOC File target - file not getting created
I am using a realtime job for sending an IDoc from Data Services to SAP. I am using an IDoc message as source, a query, and an IDoc file as target transform in the data flow. I have the following settings in IDoc file as target:
Idoc file : /tmp/costcentertest.txt
Idoc type: COSMAS01
Data store name: DS_SAP
Application server: saptest
When I send the IDoc, I can load the data into a table or create the file in the /tmp directory. After changing the target to IDoc file and entering the above settings, I don't see any file getting created. I do see that the IDoc was received (WE05) and it also shows green in the Data Management Console.
I was checking whether I have to change something or am missing some steps.
Hi Satheesh,
Check whether the order quantity field is present in the flat file. Second, if the data is there, check whether it is getting picked up in the read file, and subsequently in the converted data. When the mapping is correct and the data is populated in the read file and the converted file, it will get uploaded. Hope this helps you.
Thanks
Sushant -
Silent Installation - Response file not getting created
Hi,
I am trying to create a response file by running the installation as follows:
./runInstaller -ignoreSysPrereqs -record -destinationFile /backup/ORA_DOWNLOADS/9201.rsp
-ignoreSysPrereqs is used because we have Sun Solaris 5.10, which is not in oraparam.ini.
I am installing 9.2.0.1. Please do not ask why 9.2.0.1 now.
Now the problem is that my response file is not getting created when I run the installer. Does the response file get created after the installation is complete? I do not want to end up doing the installation on a server without the response file.
Please help.
Thanks,
Ankit.
Hm, you might be hitting a bug. I need to check my archive of historic Oracle events (meaning I'll get back when I have looked through my notes in a few hours or more).