MORE TRACE FILES IN 11gR2
Recently we upgraded databases from 10.2.0.2 to 11.2.0.3.4 and noticed that many trace files (and of course .trm files as well) are being generated. Now I have to investigate why so many trace files are being generated, identify the source of all of them, and reduce their number. Any inputs on an investigation approach?
Thanks in advance.
Regards
DBA.
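As a first step, it may help to confirm where the traces are being written; a minimal sketch for 11g, using V$DIAG_INFO:

```sql
-- Sketch: locate the ADR home and trace directory for this instance (11g+).
SELECT name, value
FROM   v$diag_info
WHERE  name IN ('ADR Home', 'Diag Trace', 'Default Trace File');
```

From the OS, listing that directory by size, or running adrci and `show tracefile -rt` (newest first), usually makes the dominant trace writer obvious from the file-name suffix (mmon, dbrm, vktm, server PIDs, and so on).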
ADRCI: ADR Command Interpreter
Similar Messages
-
Ingo,
I am getting the below error message in the BO Test environment when I refresh a WebI document.
MDDataSetBW.GetCellData. See RFC trace file or SAP system log for more details
It was working when we migrated the objects initially. Subsequently, when we try to refresh the WebI document, we get the above error message. It still works in the BO Dev environment, and the number of records is the same in Dev and Test.
I am able to run the underlying BEx query in the SAP BW Test environment and it does return data.
I checked the ST22 and SM21 logs, but there are no details there.
Is it related to authorization on the Universe or the BEx query?
We are in the middle of UAT and are not able to move forward. I would greatly appreciate your input.
Thanks
Ram
Ingo,
I ran the zip file and added the required entries to registry.
And then I tried to reproduce the error, but the files are not generated; instead I noticed the below error message on the server:
The description for Event ID ( 7939 ) in Source ( Crystal OLAP Client ) cannot be found. The local computer may not have the necessary registry information or message DLL files to display messages from a remote computer. You may be able to use the /AUXSOURCE= flag to retrieve this description; see Help and Support for details. The following information is part of the event: Registry Access Error: , [HKEY_LOCAL_MACHINE]Software\Business Objects\Suite 12.0\MDA\Log\Modules: The system cannot find the file specified..
Does that mean we do not have permissions to read the registry?
I would greatly appreciate your input.
Thanks in advance.
Ram -
Time drift detected. Please check VKTM trace file for more details
Running 11.2.0.2 on windows 64 bit virtualized..
VKTM, Oracle's Virtual Keeper of Time, is throwing a lot of warnings in the alert log. According to my research this is not a great concern (if someone could explain why, that would be great as well), but you should be able to suppress the trace files if you are on 11.2.0.2 by setting event = "10795 trace name context forever, level 2" in the parameter file. I am using an spfile and did not use "alter system"; I created a pfile, edited it, opened the instance with it, created a new spfile, and then opened with that. However, I am still receiving the trace files and the messages in the alert log, approximately 10 a day now, so the alert log is filling rather quickly. Has anyone else encountered this, or does anyone have advice on how to solve it? Thanks
user12243721 wrote:
Running 11.2.0.2 on windows 64 bit virtualized.. [rest of quoted question snipped]
I get this as well on Windows 64-bit with a multi-CPU machine, with the VM set up with 2 virtual CPUs.
Since it's not a production machine it hasn't bothered me, but there are three other side effects:
a) the "tim=" values in 10046 trace files look as if they have two parallel clocks running out of synch with each other - with the reported values jumping from one clock to the other every few milliseconds.
b) sometimes the database refuses to restart with "vktm didn't start in time" error messages
c) a couple of times a call to dbms_lock.sleep(0.01) has slept for a very long time - possibly because the timer started on the faster of the two clocks, and the system then jumped to the slower.
I never trust the machine for performance tests, so the timing anomalies aren't a big issue for me and I haven't followed it up; but I'd guess it's a VMware issue with the way it has virtualised the multiple CPUs.
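As an aside on the suppression event mentioned above: if an spfile is in use, the event can be set with ALTER SYSTEM instead of hand-editing a pfile. A sketch (events like 10795 are generally best set only under Oracle Support's guidance):

```sql
-- Sketch: persist the VKTM trace-suppression event in the spfile;
-- it takes effect after the next instance restart.
ALTER SYSTEM SET EVENT = '10795 trace name context forever, level 2'
  SCOPE = SPFILE;
```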
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
Author: Oracle Core -
Hi,
We have PI 7.1 installed in our landscape. Some trace files are getting generated at
server\sapmnt\SID\DVEBMGS01\j2ee\cluster\server0\jrfc07500_06148.trc and are using a lot of disk space, around 1 GB.
Could you please let me know where this trace can be disabled?
Thanks
Hi Yash,
Please find the details on this link:
http://help.sap.com/saphelp_nw04/Helpdata/EN/f6/daea401675752ae10000000a155106/content.htm
name: jrfc*.trc
location: directory j2ee/cluster/server* or a defined path
how to switch on and off: set JVM options for the server via the Config Tool and restart the server: -Djrfc.trace=0/1, -Djco.trace_path=[defined_path]
Kindly let us know if this resolves the problem.
Thanks.
Regards,
Shweta -
Different Deadlock trace files
Hello,
In our application we often run into deadlock issues, and I need to analyze the
trace files. Sometimes I get trace files that have the current session and
waiting session information, with the modules and the queries they are executing, in the top section
of the trace file, so there is no need to read the rest of the file. But sometimes the
trace files are different: all the update or select-for-update queries are spread
across the file, and it is very difficult to understand which was locking what. Do RAC or 11g environments
produce deadlock trace files with a different structure?
One more question regarding deadlocks: many times we have found that the current
query is updating table A while the waiting query is updating table B. Is it possible
to have a deadlock scenario when the queries are working on different tables, or
does that happen only if the tables are related, like parent and child?
Hi,
Are you referring to the .trm extension trace files, which you are unable to read?
Here is good explanation of reading deadlock trace files
ORA-00060 Deadlock trace files.. how to read?
Thanks,
Ajay More
http://www.moreajays.com -
Logrotate and open trace files
On our RACs we see a lot of open (lsof) trace files in our dump directories. We've been using logrotate with cumbersome pre/post-rotate code to rotate the files, with the code supposedly filtering out open files, but we still run into issues and are going to revisit this code soon.
I'm wondering if anybody has found a clean way to rotate and delete trace files without accidentally deleting open files?
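One option worth sketching for 11g: the trace files live under the ADR, so ADRCI can handle retention itself instead of logrotate (the ADR home and retention values below are examples):

```
# Sketch: set short/long retention policies (in hours) and purge trace
# files older than one day (purge -age is in minutes).
adrci <<'EOF'
set home diag/rdbms/orcl/orcl
set control (SHORTP_POLICY = 168, LONGP_POLICY = 720)
purge -age 1440 -type trace
EOF
```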
-Jeff
Hollis, let's set expectations first.
Digital Editions runs on your computer, not your ereader. What it does is
interface between the publisher or distributor and you, by taking files
from them, reformatting them into readable documents in some cases (in
others, checking and maintaining digital rights), and transferring those
files (now in .epub format) to your computer. It keeps a library of these
epubs and manages it for you. You have to have Digital Editions or some
other similar program on your computer if you intend to obtain ebooks from
various sources. If you're satisfied with Barnes and Noble and the ebooks
they have, then you can link up with them via the Nook's wireless features
or through your computer and download ebooks directly to it that way.
You do not load Digital Editions or any similar program onto your Nook.
Next, at least a couple of people have told you how to do the download. Arpit
Kapoor works for Adobe. His reply tells you how to download Digital
Editions and get it installed onto your computer. So my question to you is: do
you understand what he's saying?
If you have a problem with the download, then tell us in a bit more detail
what your computer is and what it's running (Windows or Mac). There are
some issues that others have had which relate to Win7, but your post isn't
clear enough to tell us whether you're encountering those issues.
=================== -
Do you know Timmings for trace files generated?
Hi,
I have done some SQL tracing using the DBMS_MONITOR package.
We can also enable SQL tracing using DBMS_SESSION.
I want to generate a SQL trace file for a "particular part of the application".
When I did that I got some SQL trace files. That "particular part of the application" is now over and the application is idle,
but as time goes on these files are still growing in size, which means SQL tracing is still going on...
My question is: when and how are trace files generated?
Do you have any idea?
Thanks and Regards,
Rushang Kansara
Message was edited by:
Rush
Also, what content of my SQL trace file should I
consider for exactly tracing that "particular part of
the application"?
Rushang
Parse Count To Execute Ratio
Take the parse count and divide it by the execute count. If this ratio is 1, you are parsing the same statement every time, which latches the shared SQL area and degrades overall performance. For example, if you execute a query using bind variables from a front-end (Forms) POST_QUERY trigger, it will show parse count = execute count, meaning you parse for every triggering event, which is bad; instead, put that SQL inside a PL/SQL procedure, which caches the cursor and turns it into parse count < execute count.
Large Difference Between Elapsed Time And CPU Time
If the difference (elapsed time - CPU time) is large, you are spending your time waiting for resources, and these waits turn up as wait events. For example, if someone updated a row and did not release it with COMMIT or ROLLBACK, and during that span of time you want to update the same row, you will see a lock in the wait-event section of the tkprof result. If you read data from disk (the first time you issue a query it reads from disk and puts the blocks into the buffer cache; during this read a latch is grabbed, and others cannot read the buffer until the read from disk into the cache completes), this will also show up in the wait events as cache buffers chains.
Fetch Calls
If your fetch calls = rows, you are not using bulk fetch, and your code will make a lot of round trips, which will in turn jam the network.
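The fetch-calls point can be sketched in PL/SQL with BULK COLLECT; the table and column names below are invented for the example:

```sql
-- Sketch: array fetching, so fetch calls are far fewer than rows returned.
DECLARE
  TYPE t_name_tab IS TABLE OF VARCHAR2(100);
  l_names t_name_tab;
  CURSOR c IS SELECT ename FROM emp;              -- example table and column
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_names LIMIT 100;  -- 100 rows per fetch call
    EXIT WHEN l_names.COUNT = 0;
    NULL;  -- process l_names here
  END LOOP;
  CLOSE c;
END;
/
```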
Disk Count
If your disk count equals current + query mode every time, you are reading all blocks from disk all the time; usually Oracle reads from disk once, puts the blocks into the SGA, and should find them there the second time.
And there are many more... it depends on your environment setup, but the above are the common ones.
As you said it is reproducing the tkprof output again and again; make sure you terminate the session, or explicitly turn the tracing off with
ALTER SESSION SET SQL_TRACE=FALSE
Khurram
How to read a trace file?
Can someone point me to a good resource where I can learn how to read a trace file? I have read somewhere that TKPROF can leave some things unattended and, worse, can report things incorrectly.
I usually recommend using Trace Analyzer (TRCA), Note 224270.1.
It includes all the details found in TKPROF, plus additional information normally requested and used for transaction performance analysis. The generated report is more readable and extensive than the text format used in prior versions of this tool and in the current TKPROF. -
Hello experts, for our Netweaver AS administration, I am in charge of periodically checking logs and trace files. I would like to know which are the most useful logs and trace files and the information each one will hold. I am familiar with "DefaultTrace.trc", and as of today it is the only one I have used, but I believe I should also be looking at other logs and trace files.
Any suggestions?
Hi Pedro,
If you are talking about a JAVA-only system, defaulttrace is the best log/trace to look at. There are other log files, like the application log, but maybe the best way to check your logs is using NWA (NetWeaver Administrator) at the following URL on your JAVA system:
http://<hostname>:<port>/nwa
From there you need to go to Monitoring -> Logs and Traces and then Predefined View/SAP logs.
My other recommendation is to change the severity level to ERROR for all your JAVA components within the Visual Administrator -> ServerNode -> Services -> Log Configurator -> Locations; otherwise you may see a lot of garbage in the default traces. You can still change the severity level per component, on demand, to investigate any possible problem.
The work directory is very important, and you can also check the file "dev_serverX", which will give you information about any out-of-memory conditions and garbage-collection activity if you have these values set for the server node using the config tool:
-verbose:gc
-XX:+PrintGCDetails
-XX:+PrintGCTimeStamps
You can find more information on here:
http://help.sap.com/saphelp_nw70/helpdata/en/ac/e9d8a51c732e42bd0e7de54b9ff4e2/content.htm
Hopefully this helps you; let me know if you need more information,
Zareh -
External portal capturing internal portal URL in Log and trace file
Hi,
We are facing an issue in the portal: we have two portals, one for internal (Intranet) and one for external (Internet) users.
Once users are logged in to the application and try to get information about mylink from the external portal link (internet), they should not get any information about the internal portal.
But in the log and trace file we can see the external portal link capturing the internal portal URL.
We need to find from where the system is capturing the internal portal URL.
Thanks.
The tkprof'd trace file is in seconds.
"set timing" is in hh:mi:ss.uu format. So 00:00:01.01 is 1.01 seconds.
You have to remember that most of these measurements are rounded. While your trace file says it contains one second of trace data, you know it's more.
One excellent resource for trace files is "Optimizing Oracle Performance" by Cary Millsap & Jeff Holt. (http://www.amazon.com/Optimizing-Oracle-Performance-Cary-Millsap/dp/059600527X ) I thought I knew trace files before, but this book brings your knowledge to a whole new level.
There is also an excellent WP by Cary Millsap ( http://method-r.com/downloads/doc_details/10-for-developers-making-friends-with-the-oracle-database-cary-millsap ) that gives you some insight. -
Agent10g: Size of Management Agent Log and Trace Files get oversize ...
Hi,
I have the following problem:
I installed the EM Agent 10g (v10.2.0.4) on each of my Oracle servers. I did this a long time ago (a few months or a few years, depending on the server). Recently, I got a PERL error because the "trace" file of the Agent was too big (the emagent.trc was more than 1 GB)!
I don't know why. On one particular server I checked AGENT_HOME\sysman\config (Windows) for the emd.properties file.
The following properties are specified in the emd.properties file:
LogFileMaxSize=4096
LogFileMaxRolls=4
TrcFileMaxSize=4096
TrcFileMaxRolls=4
This file has never been modified (those properties correspond to the default values). It's the same situation for every Agent10g setup on all of the Oracle servers.
Any idea ?
NOTE: The Agent is stopped and started weekly ...
Thanks
Yves
Why don't you truncate the trace file weekly? You can also delete the file; it will be recreated automatically whenever there is a trace.
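A minimal sketch of the weekly truncation suggested above, for a Unix host (the file name is an example; on Windows you would typically stop the agent and delete the file instead):

```shell
# Sketch: zero a grown trace file in place. ": >" truncates without
# removing the file, so a process holding it open can keep writing.
TRC=emagent.trc        # example name; point this at the real trace file
: > "$TRC"
ls -l "$TRC"           # the size should now be 0
```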
-
Get blocker from the (self) deadlock trace file
Hi,
Recently I had an issue on a 10.2.0.4 single-instance database where deadlocks were occurring. The following test case reproduces the problem (I create three parent tables, one child table with indexed foreign keys to all three parent tables, and a procedure that performs an insert into the child table in an autonomous transaction):
create table parent_1(id number primary key);
create table parent_2(id number primary key);
create table parent_3(id number primary key);
create table child( id_c number primary key,
id_p1 number,
id_p2 number,
id_p3 number,
constraint fk_id_p1 foreign key (id_p1) references parent_1(id),
constraint fk_id_p2 foreign key (id_p2) references parent_2(id),
constraint fk_id_p3 foreign key (id_p3) references parent_3(id)
);
create index i_id_p1 on child(id_p1);
create index i_id_p2 on child(id_p2);
create index i_id_p3 on child(id_p3);
create or replace procedure insert_into_child as
pragma autonomous_transaction;
begin
insert into child(id_c, id_p1, id_p2, id_p3) values(1,1,1,1);
commit;
end;
insert into parent_1 values(1);
insert into parent_2 values(1);
commit;
And now the action that causes the deadlock:
SQL> insert into parent_3 values(1);
1 row created.
SQL> exec insert_into_child;
BEGIN insert_into_child; END;
ERROR at line 1:
ORA-00060: deadlock detected while waiting for resource
ORA-06512: at "SCOTT.INSERT_INTO_CHILD", line 4
ORA-06512: at line 1
My question is: how can I determine which table the insert into CHILD was waiting on? It could be waiting on PARENT_1, PARENT_2, PARENT_3, a combination of them, or even on CHILD if I tried to insert a duplicate primary key into CHILD. Since we have the full test case, we know that it was waiting on PARENT_3 (or, better said, it was waiting for the "parent" transaction to perform a commit/rollback), but is it possible to determine that solely from the deadlock trace file? I'm asking because, to pinpoint the problem, I had to perform redo log mining, PL/SQL tracing with DBMS_TRACE, and manual debugging on a clone of the production database restored to an SCN just before the deadlock occurred. So I had to do quite a lot of work to get to the blocker table, and if this information is already in the deadlock trace file, it would have saved me a lot of time.
Below is the deadlock trace file. From the "DML LOCK" part I guess that the child table (tab=227042) holds a mode 3 lock (SX) and the other three parent tables have mode 2 locks (SS), but from this extract I can't see that parent_3 (tab=227040) is blocking the insert into child:
Deadlock graph:
---------Blocker(s)-------- ---------Waiter(s)---------
Resource Name process session holds waits process session holds waits
TX-00070029-00749150 23 476 X 23 476 S
session 476: DID 0001-0017-00000003 session 476: DID 0001-0017-00000003
Rows waited on:
Session 476: obj - rowid = 000376E2 - AAA3biAAEAAA4BwAAA
(dictionary objn - 227042, file - 4, block - 229488, slot - 0)
Information on the OTHER waiting sessions:
End of information on OTHER waiting sessions.
Current SQL statement for this session:
INSERT INTO CHILD(ID_C, ID_P1, ID_P2, ID_P3) VALUES(1,1,1,1)
----- PL/SQL Call Stack -----
object line object
handle number name
3989eef50 4 procedure SCOTT.INSERT_INTO_CHILD
391f3d870 1 anonymous block
SO: 397691978, type: 36, owner: 39686af98, flag: INIT/-/-/0x00
DML LOCK: tab=227042 flg=11 chi=0
his[0]: mod=3 spn=35288
(enqueue) TM-000376E2-00000000 DID: 0001-0017-00000003
lv: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 res_flag: 0x6
res: 0x398341fe8, mode: SX, lock_flag: 0x0
own: 0x3980df420, sess: 0x3980df420, proc: 0x39859c660, prv: 0x398341ff8
SO: 397691878, type: 36, owner: 39686af98, flag: INIT/-/-/0x00
DML LOCK: tab=227040 flg=11 chi=0
his[0]: mod=2 spn=35288
(enqueue) TM-000376E0-00000000 DID: 0001-0017-00000003
lv: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 res_flag: 0x6
res: 0x3983386e8, mode: SS, lock_flag: 0x0
own: 0x3980df420, sess: 0x3980df420, proc: 0x39859c660, prv: 0x3983386f8
SO: 397691778, type: 36, owner: 39686af98, flag: INIT/-/-/0x00
DML LOCK: tab=227038 flg=11 chi=0
his[0]: mod=2 spn=35288
(enqueue) TM-000376DE-00000000 DID: 0001-0017-00000003
lv: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 res_flag: 0x6
res: 0x398340f58, mode: SS, lock_flag: 0x0
own: 0x3980df420, sess: 0x3980df420, proc: 0x39859c660, prv: 0x398340f68
SO: 397691678, type: 36, owner: 39686af98, flag: INIT/-/-/0x00
DML LOCK: tab=227036 flg=11 chi=0
his[0]: mod=2 spn=35288
(enqueue) TM-000376DC-00000000 DID: 0001-0017-00000003
lv: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 res_flag: 0x6
res: 0x39833f358, mode: SS, lock_flag: 0x0
own: 0x3980df420, sess: 0x3980df420, proc: 0x39859c660, prv: 0x39833f368
----------------------------------------
Thank you in advance for any comments,
Jure
Hi Jonathan,
thank you very much for your reply, which more than answers my question. I think it actually clears up a lot of doubts I had about TX locks, since your mentioning of the "undo segment header transaction table" pointed me in the right direction for further research on this topic (honestly, I didn't know what was "behind" TX locks). So, if I understood correctly, to determine which table is the blocker (in the test case presented above), you have to have some kind of history of executed SQL statements (e.g. by mining the redo logs)?
The statement you wrote:
At this point, and with your example, the waiting session is waiting on a TX (transaction) lock - this means it has no idea of (and no interest in) the actual data involved; it is merely waiting for an undo segment header transaction table slot to clear. That, and the example with the savepoint you gave, made me think about some of the consequences of this behaviour. It is probably the reason why it is not possible to get the "blocker" table from v$lock (although sometimes it's possible to get it from v$session.row_wait_obj#) when a session tries to change a row another session holds in exclusive mode, e.g.:
create table t1 (id number);
insert into t1 values (1);
commit;
Session 126:
SID = 126> update t1 set id=2 where id=1;
1 row updated.
Session 146:
SID = 146> update t1 set id=2 where id=1;
{session hangs}
In a separate session:
SQL> SELECT CASE
2 WHEN TYPE = 'TM'
3 THEN (SELECT object_name
4 FROM user_objects
5 WHERE object_id = l.id1)
6 END object_name,
7 SID, TYPE, id1, id2, lmode, request, BLOCK
8 FROM v$lock l
9 WHERE SID IN (126, 146)
10 ORDER BY SID, TYPE, 1
11 /
OBJECT_NAME SID TY ID1 ID2 LMODE REQUEST BLOCK
T1 126 TM 68447 0 3 0 0
126 TX 262153 4669 6 0 1
T1 146 TM 68447 0 3 0 0
146 TX 262153 4669 0 6 0
The only thing I can tell from this output is that session 146 is trying to get a TX lock in exclusive mode and that session 126 is blocking it, the reason for the blocking being unknown from this view alone.
Since I'd like to get a better understanding of the mechanics behind this (e.g. why can't the blocked session know which segment it is waiting for, given that it has to go to the same segment's data block to find the address of the undo segment header transaction table slot? And can we get the content/structure of the transaction table in the data block - probably by making a block dump?), do you have any source where a more in-depth explanation of what happens "behind the scenes" is available (perhaps in Oracle Core?)? Some time ago I found a link on your blog http://jonathanlewis.wordpress.com/2010/06/21/locks/ which points to Franck Pachot's article where he nicely explains the various locking modes: http://knol.google.com/k/oracle-table-lock-modes#. There I also found Kyle Hailey's presentation about locks http://www.perfvision.com/papers/09_enqueues.ppt where slide 23 nicely depicts what's going on when acquiring TX locks. Of course I'll try to search on my own, but any other source (especially from an authority like you) is more than welcome.
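For completeness, the v$session.row_wait_obj# mentioned above can be checked while the second session is hanging; a small sketch using the SIDs from the example (the column is -1 when no row wait applies):

```sql
-- Sketch: the row-wait columns sometimes identify the object behind a block.
SELECT sid, row_wait_obj#, row_wait_file#, row_wait_block#, row_wait_row#
FROM   v$session
WHERE  sid IN (126, 146);
```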
Thank you again and regards,
Jure -
Oracle XE 11.2 writing trace files every 30 sec.
Hi all,
we are using an Oracle XE 11.2 database on Linux. While searching the logs we found some trace files from DBRM that are updated every 30 seconds.
Can anyone help us understand what the problem is?
Thanks a lot
/u01/app/oracle/diag/rdbms/xe/XE/trace> tail -f XE_dbrm_8880.trc
Trace file /u01/app/oracle/diag/rdbms/xe/XE/trace/XE_dbrm_8880.trc
Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Beta
ORACLE_HOME = /u01/app/oracle/product/11.2.0/xe
System name: Linux
Node name: vtsbpm1
Release: 2.6.32.36-0.5-default
Version: #1 SMP 2011-04-14 10:12:31 +0200
Machine: x86_64
VM name: VMWare Version: 6
Instance name: XE
Redo thread mounted by this instance: 1
Oracle process number: 7
Unix process pid: 8880, image: oracle@vtsbpm1 (DBRM)
*** 2012-02-28 11:39:42.567
*** SESSION ID:(240.1) 2012-02-28 11:39:42.567
*** CLIENT ID:() 2012-02-28 11:39:42.567
*** SERVICE NAME:() 2012-02-28 11:39:42.567
*** MODULE NAME:() 2012-02-28 11:39:42.567
*** ACTION NAME:() 2012-02-28 11:39:42.567
kgsksysstop: blocking mode (2) timestamp: 1330425582566557
kgsksysstop: successful
kgskreset: Threshold setting[numa_pg(0)]
Threshold low[0] = 1, high[0] = 3
kgsksysresume: successful
RESOURCE MANAGER PLAN/CONSUMER GROUP DUMP
type: PLAN, Name: INTERNAL_PLAN_XE, number of directives: 2, bit mask: 0x3
policy index: 0, inst state index: 0, plan id: 1
Data from Management module:
Plan Parameters:
<None>
Plan Directives:
[1] Plan Directive Parameters:
Directive name: MGMT_P1, value: 100
Directive:
type: CONSUMER GROUP, Name: OTHER_GROUPS (addr: 0x8f85b120)
policy index: 0, inst state index: 0, class num: 0x1
mast: INFINITE, ASL qtout: INFINITE, PQQ qtout: INFINITE, mdop: INFINITE
Statistics:
current queued threads: 0,
class total time: 0 msec, penalty # 0
total threads: 0
total CPU yields: 0
total CPU wait: 0 msec
total IO wait: 0 msec
*** 2012-02-28 13:23:15.183
cpu%: cputm: cpuwt: avgrun: avgwt:
1 324 0 0.05 1.00
RQs: < 5: < 10: < 50: < 100: < 200: < 1000: > 1K:
4054 0 3 0 0 0 0
*** 2012-02-28 13:24:45.189
1 308 0 0.00 1.00
4112 0 3 0 0 0 0
4 2049 0 0.05 1.00
*** 2012-02-28 13:26:15.187
6190 14 17 1 1 0 0
*** 2012-02-28 13:27:45.191
24 23051 210 0.15 1.00
20357 30 37 26 17 5 0
*** 2012-02-28 13:29:15.196
9 377 0 0.00 1.00
4309 1 2 0 0 0 0
*** 2012-02-28 13:30:45.207
cpu%: cputm: cpuwt: avgrun: avgwt:
16 448 0 0.00 1.00
RQs: < 5: < 10: < 50: < 100: < 200: < 1000: > 1K:
4294 2 4 0 0 0 0
*** 2012-02-28 13:32:15.207
4 360 0 0.00 1.00
4136 0 4 0 0 0 0
*** 2012-02-28 13:33:45.207
1 392 0 0.00 1.00
4197 1 3 0 0 0 0
understand whats the problem?
Without knowing specifics about what is going on in your instance, it's hard to say.
It could be a problem indication, but more likely not. These appear to be Resource Manager trace files; the Resource Manager doesn't have much effect on the instance until the host starts getting starved for resources, at which point the engine can throttle the resources given to the different resource groups.
http://docs.oracle.com/cd/E11882_01/server.112/e25494/dbrm001.htm#sthref2760 -
Oracle version: 11.2.0.3.0 Enterprise Edition
OS - IBM/AIX RISC System/6000
I am trying to generate a trace file from a piece of code executed by a java server. I asked the java developer to place this block immediately after establishing a connection:
BEGIN
EXECUTE IMMEDIATE 'ALTER SESSION SET TRACEFILE_IDENTIFIER = ''M1''';
dbms_monitor.session_trace_enable(waits => FALSE, binds => TRUE);
END;
And at the end of the logical java block of code:
BEGIN
dbms_monitor.session_trace_disable;
END;
What I want to know is how many rows the java server fetches after executing one particular select statement, because they complain about receiving fewer rows from the select statement than expected.
For example, if I execute the same sql query in sqlplus session, then I fetch let's say 1000 rows.
When the same query is executed from the java side, fewer rows are fetched, let's say 500.
And because I doubt it, I wanted to trace to see what actually is executed and how.
From the excerpt of the trace file I see exactly the same query which I execute myself in a sqplus session.
There is no fine-grained access control on the underlying tables in the query.
And my question is, how to interpret the FETCH phase of the cursor (for the select statement)?
For example, if I see one FETCH for this cursor, does this mean that the java server has fetched only one row?
If I see 100 FETCHes, does this mean they fetched 100 rows from the cursor?
Here is a short excerpt from the trace file (please don't crucify me for the query and the obviously denormalized design of the tables; it was not invented by me):
PARSING IN CURSOR #4573587152 len=667 dep=0 uid=737 oct=3 lid=737 tim=17685516462413 hv=954980718 ad='70000006d3e4940' sqlid='69pm96nwfrqbf'
select /* ordered */ o.id, nvl(o.par_id, -1) as par_id, o.NAME_GER, o.NAME_ENG, o.NAME_ESP, o.NAME_ITL,o.NAME_FRA, decode(lo.lflag, 'Y', 'L', 'N') as leaf_or_node, lo.distance + 1 as "LEVEL", to_char(o.beg_date, 'DD.MM.YYYY HH24:MI:SS'), o.mais_id, l.path, nvl(o.non_selectable, 'N') from st_prod o, lprod_new l, lprod lo where o.end_date = to_date('31.12.3999', 'DD.MM.YYYY') and (lo.id, lo.beg_date) in (select id, beg_date from st_prod where par_id is null and end_date = to_date('31.12.3999', 'DD.MM.YYYY')) and lo.lid = o.id and lo.lid_beg_date = o.beg_date and l.st_prod_id = o.id and l.st_prod_beg_date = o.beg_date order by lo.distance, o.name_ger
END OF STMT
PARSE #4573587152:c=31,e=152,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=2027551050,tim=17685516462412
EXEC #4573587152:c=80,e=375,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=2027551050,tim=17685516462936
*** 2013-03-11 11:28:09.122
FETCH #4573587152:c=519446,e=892645,p=0,cr=113446,cu=0,mis=0,r=10,dep=0,og=1,plh=2027551050,tim=17685517355715
FETCH #4573587152:c=37,e=59,p=0,cr=0,cu=0,mis=0,r=10,dep=0,og=1,plh=2027551050,tim=17685517359109
FETCH #4573587152:c=39,e=63,p=0,cr=0,cu=0,mis=0,r=10,dep=0,og=1,plh=2027551050,tim=17685517361128
FETCH #4573587152:c=29,e=46,p=0,cr=0,cu=0,mis=0,r=10,dep=0,og=1,plh=2027551050,tim=17685517362849
FETCH #4573587152:c=31,e=48,p=0,cr=0,cu=0,mis=0,r=10,dep=0,og=1,plh=2027551050,tim=17685517364621
<162 more FETCH-es here>
<STAT phase>
CLOSE #4573587152:c=533,e=849,dep=0,type=1,tim=17685517671878
Is it possible, based on the trace file (even if I have to change something in the way of tracing), to determine how many rows were fetched?
Hi
I read the traces into a table on the client from which I log, and then read from the table. If you can copy the content of the table column you are reading and paste it into a file, say your_trace_name.trc, then you can use it to generate a TKPROF report
C:\>tkprof your_trace_file.trc your_trace_file.txt
TKPROF: Release 10.2.0.3.0 - Production on Mon Mar 11 15:28:13 2013
Copyright (c) 1982, 2005, Oracle. All rights reserved.
To find the array size you are using, use this formula:
rows / fetches = arraysize
A few details about interpreting TKPROF are available here:
http://hourim.wordpress.com/2012/09/14/tuning-by-tkprof-a-case-study/
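The rows question can also be answered straight from the raw trace: each FETCH line's r= value is the number of rows that fetch call returned, so summing them over the cursor gives the total rows fetched. A sketch (the two sample lines mirror the excerpt above; point the awk at the real .trc file instead):

```shell
# Sketch: sum the r= values of all FETCH lines for one cursor number.
cat > sample.trc <<'EOF'
FETCH #4573587152:c=519446,e=892645,p=0,cr=113446,cu=0,mis=0,r=10,dep=0,og=1,plh=2027551050,tim=17685517355715
FETCH #4573587152:c=37,e=59,p=0,cr=0,cu=0,mis=0,r=10,dep=0,og=1,plh=2027551050,tim=17685517359109
EOF
awk -F',r=' '/^FETCH #4573587152:/ { split($2, a, ","); total += a[1] }
             END { print total + 0 }' sample.trc   # prints 20 for the sample
```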
Best regards
Mohamed Houri -
There is an anonymous block:
begin
execute immediate 'alter session set tracefile_identifier = ''TS''';
dbms_monitor.session_trace_enable;
some_proc(true);
end;
The procedure some_proc contains the following code:
loop
select val into i from a where par = 'Bar';
if i = 'EXIT' then
exit;
end if;
for cur in (select fld from t order by r) loop
processing(cur);
end loop;
end loop;
Tables A and T are both very small; table T is in fact empty.
As you can see, the expectation was that the loop would run, selecting from a very small table.
I executed the block, and it ran for about 477 seconds.
select value
2 from v$sesstat s
3 natural
4 join v$statname n
5 where sid = sys_context('USERENV', 'SID')
6 and name = 'CPU used by this session';
VALUE
2
declare
2 t date;
3 begin
4 execute immediate 'alter session set tracefile_identifier = ''TS''';
5 dbms_monitor.session_trace_enable;
6 come_proc(true);
7 end;
8 /
PL/SQL procedure successfully completed.
Elapsed: 00:07:57.63
select value
2 from v$sesstat s
3 natural
4 join v$statname n
5 where sid = sys_context('USERENV', 'SID')
6 and name = 'CPU used by this session';
VALUE
45175
But there are some strange moments:
1. The tkprof report shows only 277.83 sec (whereas the "CPU used by this session" statistic above is different and more appropriate: 451.75 sec).
declare
t date;
begin
execute immediate 'alter session set tracefile_identifier = ''TS''';
dbms_monitor.session_trace_enable;
some_proc(true);
end;
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 1 260.95 277.83 0 64 0 1
Fetch 0 0.00 0.00 0 0 0 0
total 1 260.95 277.83 0 64 0 1
Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer mode: FIRST_ROWS
Parsing user id: 10757
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 20.64 20.64
SELECT VAL
FROM
A WHERE PAR = 'BAR'
call count cpu elapsed disk query current rows
Parse 1 0.01 0.00 0 0 0 0
Execute 1782640 29.01 28.20 0 0 0 0
Fetch 1782640 32.78 31.77 0 5347922 0 1782640
total 3565281 61.80 59.97 0 5347922 0 1782640
Misses in library cache during parse: 1
Optimizer mode: FIRST_ROWS
Parsing user id: 43 (recursive depth: 1)
Rows Row Source Operation
1782640 INDEX RANGE SCAN A_UI (cr=5347922 pr=0 pw=0 time=31762812 us)(object id 530778)
SELECT FLD
FROM
T ORDER BY R
call count cpu elapsed disk query current rows
Parse 1 0.01 0.01 0 0 0 0
Execute 1782639 33.21 31.91 0 0 0 0
Fetch 1782639 95.52 95.82 0 12478473 0 0
total 3565279 128.74 127.75 0 12478473 0 0
Misses in library cache during parse: 1
Optimizer mode: FIRST_ROWS
Parsing user id: 43 (recursive depth: 1)
Rows Row Source Operation
0 SORT ORDER BY (cr=12478473 pr=0 pw=0 time=103178656 us)
0 PARTITION RANGE SINGLE PARTITION: 1 1 (cr=12478473 pr=0 pw=0 time=92028737 us)
0 TABLE ACCESS FULL T PARTITION: 1 1 (cr=12478473 pr=0 pw=0 time=86376673 us)
2. In the raw trace there are very many rows with c=0, and sometimes rows with c=10000:
EXEC #9:c=0,e=13,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451912283
FETCH #9:c=0,e=42,p=0,cr=7,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451912345
EXEC #8:c=0,e=12,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451912395
FETCH #8:c=0,e=13,p=0,cr=3,cu=0,mis=0,r=1,dep=1,og=2,tim=5857451912427
EXEC #9:c=0,e=14,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451912475
FETCH #9:c=0,e=37,p=0,cr=7,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451912534
EXEC #8:c=0,e=11,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451912580
FETCH #8:c=0,e=12,p=0,cr=3,cu=0,mis=0,r=1,dep=1,og=2,tim=5857451912612
EXEC #9:c=0,e=13,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451912659
FETCH #9:c=0,e=39,p=0,cr=7,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451912718
EXEC #8:c=0,e=16,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451912807
FETCH #8:c=0,e=14,p=0,cr=3,cu=0,mis=0,r=1,dep=1,og=2,tim=5857451912865
EXEC #9:c=0,e=14,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451912916
FETCH #9:c=0,e=46,p=0,cr=7,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451912982
EXEC #8:c=0,e=12,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451913040
FETCH #8:c=0,e=13,p=0,cr=3,cu=0,mis=0,r=1,dep=1,og=2,tim=5857451913148
EXEC #9:c=0,e=14,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451913197
FETCH #9:c=0,e=40,p=0,cr=7,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451913256
EXEC #8:c=0,e=11,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451913302
FETCH #8:c=0,e=12,p=0,cr=3,cu=0,mis=0,r=1,dep=1,og=2,tim=5857451913334
EXEC #9:c=0,e=14,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451913381
FETCH #9:c=0,e=39,p=0,cr=7,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451913440
EXEC #8:c=0,e=12,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451913487
FETCH #8:c=0,e=19,p=0,cr=3,cu=0,mis=0,r=1,dep=1,og=2,tim=5857451913525
EXEC #9:c=0,e=18,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451913590
FETCH #9:c=0,e=36,p=0,cr=7,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451913661
EXEC #8:c=10000,e=12,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451913710
FETCH #8:c=0,e=13,p=0,cr=3,cu=0,mis=0,r=1,dep=1,og=2,tim=5857451913742
EXEC #9:c=0,e=13,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451913790
FETCH #9:c=0,e=37,p=0,cr=7,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451913846
EXEC #8:c=0,e=11,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451913893
FETCH #8:c=0,e=12,p=0,cr=3,cu=0,mis=0,r=1,dep=1,og=2,tim=5857451913924
EXEC #9:c=0,e=18,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451913996
FETCH #9:c=0,e=51,p=0,cr=7,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451914077
EXEC #8:c=0,e=18,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451914149
FETCH #8:c=0,e=17,p=0,cr=3,cu=0,mis=0,r=1,dep=1,og=2,tim=5857451914207
EXEC #9:c=0,e=14,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451914284
FETCH #9:c=0,e=37,p=0,cr=7,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451914347
Questions:
1. Where do you think the lost time was spent:
a. in the PL/SQL engine during context switches,
b. in SQL processing where the CPU time per call is below the minimum accuracy (0.01 s),
c. or in writing the trace file (tracing overhead)?
2. Is the value c=10000 an accumulated value? I think not, but maybe I am wrong.
If I am right, and the processing time was smaller than 0.01 s, then the CPU time in TKPROF would come out as zero, right?
From that point of view, it is strange that the CPU time and elapsed time are so close.
3. Is the time spent writing to the trace file included in the "elapsed time" of the trace steps, e.g. EXEC and FETCH?
In other words, I want to understand the process of SQL execution and tracing more deeply.
What version of Oracle are you using?
Can you post the entire contents of the trace file?