Lost time in trace file

Here is an anonymous block:
begin
  execute immediate 'alter session set tracefile_identifier = ''TS'''; 
  dbms_monitor.session_trace_enable;
  some_proc(true);
end;
Procedure some_proc contains the following code:
loop
  select val into i from a where par = 'Bar';
  if i = 'EXIT' then
    exit;
  end if;
  for cur in (select fld from t order by r) loop
    processing(cur);
  end loop;
end loop;
Tables A and T are very small tables, and table T is actually empty.
As you can see, the expected work is just a loop of selects against very small tables.
I executed the block, and it ran for about 477 seconds. Below are the session CPU statistic before the run, the run itself, and the CPU statistic after the run:
   select value
  2      from v$sesstat s
  3   natural
  4      join v$statname n
  5     where sid = sys_context('USERENV', 'SID')
  6       and name = 'CPU used by this session';
     VALUE
         2
declare
  2    t date;
  3  begin
  4    execute immediate 'alter session set tracefile_identifier = ''TS''';
  5    dbms_monitor.session_trace_enable;
  6    some_proc(true);
  7  end;
  8  /
PL/SQL procedure successfully completed.
Elapsed: 00:07:57.63
   select value
  2      from v$sesstat s
  3   natural
  4      join v$statname n
  5     where sid = sys_context('USERENV', 'SID')
  6       and name = 'CPU used by this session';
     VALUE
     45175
But there are some strange things:
1. The tkprof report shows only 277.83 s of elapsed time, whereas the "CPU used by this session" statistic above (45175 centiseconds, i.e. 451.75 s) is different and looks more plausible.
declare
  t date;
begin
  execute immediate 'alter session set tracefile_identifier = ''TS''';
  dbms_monitor.session_trace_enable;
  some_proc(true);
end;
call     count       cpu    elapsed       disk      query    current        rows
Parse        0      0.00       0.00          0          0          0           0
Execute      1    260.95     277.83          0         64          0           1
Fetch        0      0.00       0.00          0          0          0           0
total        1    260.95     277.83          0         64          0           1
Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer mode: FIRST_ROWS
Parsing user id: 10757 
Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       1        0.00          0.00
  SQL*Net message from client                     1       20.64         20.64
SELECT VAL
FROM
A WHERE PAR = 'BAR'
call     count       cpu    elapsed       disk      query    current        rows
Parse        1      0.01       0.00          0          0          0           0
Execute 1782640     29.01      28.20          0          0          0           0
Fetch   1782640     32.78      31.77          0    5347922          0     1782640
total   3565281     61.80      59.97          0    5347922          0     1782640
Misses in library cache during parse: 1
Optimizer mode: FIRST_ROWS
Parsing user id: 43     (recursive depth: 1)
Rows     Row Source Operation
1782640  INDEX RANGE SCAN A_UI (cr=5347922 pr=0 pw=0 time=31762812 us)(object id 530778)
SELECT FLD
FROM
T ORDER BY R
call     count       cpu    elapsed       disk      query    current        rows
Parse        1      0.01       0.01          0          0          0           0
Execute 1782639     33.21      31.91          0          0          0           0
Fetch   1782639     95.52      95.82          0   12478473          0           0
total   3565279    128.74     127.75          0   12478473          0           0
Misses in library cache during parse: 1
Optimizer mode: FIRST_ROWS
Parsing user id: 43     (recursive depth: 1)
Rows     Row Source Operation
      0  SORT ORDER BY (cr=12478473 pr=0 pw=0 time=103178656 us)
      0   PARTITION RANGE SINGLE PARTITION: 1 1 (cr=12478473 pr=0 pw=0 time=92028737 us)
      0    TABLE ACCESS FULL T PARTITION: 1 1 (cr=12478473 pr=0 pw=0 time=86376673 us)
2. In the raw trace there are very many rows with c=0, and sometimes there are rows with c=10000:
EXEC #9:c=0,e=13,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451912283
FETCH #9:c=0,e=42,p=0,cr=7,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451912345
EXEC #8:c=0,e=12,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451912395
FETCH #8:c=0,e=13,p=0,cr=3,cu=0,mis=0,r=1,dep=1,og=2,tim=5857451912427
EXEC #9:c=0,e=14,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451912475
FETCH #9:c=0,e=37,p=0,cr=7,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451912534
EXEC #8:c=0,e=11,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451912580
FETCH #8:c=0,e=12,p=0,cr=3,cu=0,mis=0,r=1,dep=1,og=2,tim=5857451912612
EXEC #9:c=0,e=13,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451912659
FETCH #9:c=0,e=39,p=0,cr=7,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451912718
EXEC #8:c=0,e=16,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451912807
FETCH #8:c=0,e=14,p=0,cr=3,cu=0,mis=0,r=1,dep=1,og=2,tim=5857451912865
EXEC #9:c=0,e=14,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451912916
FETCH #9:c=0,e=46,p=0,cr=7,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451912982
EXEC #8:c=0,e=12,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451913040
FETCH #8:c=0,e=13,p=0,cr=3,cu=0,mis=0,r=1,dep=1,og=2,tim=5857451913148
EXEC #9:c=0,e=14,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451913197
FETCH #9:c=0,e=40,p=0,cr=7,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451913256
EXEC #8:c=0,e=11,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451913302
FETCH #8:c=0,e=12,p=0,cr=3,cu=0,mis=0,r=1,dep=1,og=2,tim=5857451913334
EXEC #9:c=0,e=14,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451913381
FETCH #9:c=0,e=39,p=0,cr=7,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451913440
EXEC #8:c=0,e=12,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451913487
FETCH #8:c=0,e=19,p=0,cr=3,cu=0,mis=0,r=1,dep=1,og=2,tim=5857451913525
EXEC #9:c=0,e=18,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451913590
FETCH #9:c=0,e=36,p=0,cr=7,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451913661
EXEC #8:c=10000,e=12,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451913710
FETCH #8:c=0,e=13,p=0,cr=3,cu=0,mis=0,r=1,dep=1,og=2,tim=5857451913742
EXEC #9:c=0,e=13,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451913790
FETCH #9:c=0,e=37,p=0,cr=7,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451913846
EXEC #8:c=0,e=11,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451913893
FETCH #8:c=0,e=12,p=0,cr=3,cu=0,mis=0,r=1,dep=1,og=2,tim=5857451913924
EXEC #9:c=0,e=18,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451913996
FETCH #9:c=0,e=51,p=0,cr=7,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451914077
EXEC #8:c=0,e=18,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451914149
FETCH #8:c=0,e=17,p=0,cr=3,cu=0,mis=0,r=1,dep=1,og=2,tim=5857451914207
EXEC #9:c=0,e=14,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451914284
FETCH #9:c=0,e=37,p=0,cr=7,cu=0,mis=0,r=0,dep=1,og=2,tim=5857451914347
Questions:
1. Where do you think the lost time was spent:
a. in the PL/SQL engine during context switches,
b. in SQL processing whose per-call CPU time is below the minimum measurement granularity (0.01 s),
c. or in writing the trace file (tracing overhead)?
2. Is the value c=10000 an accumulated value? I think not, but maybe I am wrong.
If I am right, and the processing time of a call was smaller than 0.01 s, its CPU time in tkprof would be reported as zero, right?
From that point of view it is strange that the CPU time and elapsed time are so close.
3. Is the time spent writing the trace file included in the "elapsed time" of the trace file steps, e.g. "execute" and "fetch"?
In other words, I want to understand SQL processing and tracing more deeply.
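To illustrate what I mean by measurement granularity in question 2, here is a minimal sketch of my own (assuming 10g or later, where dbms_utility.get_cpu_time is available): CPU is reported in 0.01 s ticks, so a call that uses less than one tick shows c=0, and occasionally a call absorbs a whole tick and shows c=10000.
declare
  cpu_before pls_integer;
  cpu_after  pls_integer;
  dummy      number;
begin
  cpu_before := dbms_utility.get_cpu_time;   -- CPU in centiseconds, the same 0.01 s granularity
  for i in 1 .. 1000 loop
    select 1 into dummy from dual;           -- each call costs far less than 0.01 s of CPU
  end loop;
  cpu_after := dbms_utility.get_cpu_time;
  dbms_output.put_line('CPU ticks (0.01 s each): ' || (cpu_after - cpu_before));
end;
/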

What version of Oracle are you using?
Can you post the entire contents of the trace file?
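If it helps to locate the file: on 11g you can query v$diag_info, and on earlier versions you can look in user_dump_dest for the file carrying your TS tracefile_identifier. A quick sketch:
select value from v$diag_info where name = 'Default Trace File';
select value from v$parameter where name = 'user_dump_dest';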

Similar Messages

  • Different Deadlock trace files

    Hello,
    In our application we occasionally hit deadlock issues and I need to analyze the
    trace file. Sometimes the trace files contain the current session and waiting session
    information, with the modules and queries they are executing, right in the top section
    of the trace file, so there is no need to read the rest of the data. But sometimes the
    trace files are different: all the UPDATE or SELECT FOR UPDATE queries are spread
    across the file and it is very difficult to understand which session was locking what.
    Do deadlock trace files have a different structure in a RAC or 11g environment?
    One more question regarding deadlocks: many times we found that the current
    query is updating table A while the waiting query is updating table B. Is it possible
    to have a deadlock when the queries are working on different tables (see the sketch
    at the end of this thread), or can that happen only if the tables are related, like parent and child?

    hi,
    Are you referring to the .trm extension trace files, which you are unable to read?
    Here is a good explanation of reading deadlock trace files:
    ORA-00060 Deadlock trace files.. how to read?
    Thanks,
    Ajay More
    http://www.moreajays.com
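    For what it is worth, a deadlock can certainly involve two different tables. A minimal sketch (hypothetical tables TAB_A and TAB_B, each holding a row with id = 1, and two sessions updating them in opposite order):
    -- Session 1:
    update tab_a set val = 1 where id = 1;
    -- Session 2:
    update tab_b set val = 1 where id = 1;
    -- Session 1 (now blocks, waiting for session 2's row lock on TAB_B):
    update tab_b set val = 2 where id = 1;
    -- Session 2 (closes the cycle; Oracle raises ORA-00060 in one of the two sessions):
    update tab_a set val = 2 where id = 1;
    No parent/child relationship is required; any circular wait on row locks is enough.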

  • Not able to extract performance data from .ETL file using xperf commands. getting error "Events were lost in this trace. Data may be unreliable ..."

    Not able to extract  performance data from .ETL file using xperf commands.
    Xperf Commands:
    xperf -i C:\TempFolder\Test.etl -o C:\TempFolder\BootData.csv -a process
    Getting the following error after executing the above command:
    "33288636 Events were lost in this trace. Data may be unreliable.
    This is usually caused by insufficient disk bandwidth for ETW logging.
    Please try increasing the minimum and maximum number of buffers and/or the buffer size.
    Doubling these values would be a good first attempt. Please note, though, that this action
    increases the amount of memory reserved for ETW buffers, increasing memory pressure on your scenario.
    See "xperf -help start" for the associated command line options."
    I changed the page file size but it did not work for me.
    Does anyone have an idea how to solve this problem and extract the ETL file data?
    (See the example start command at the end of this thread.)

    I want to mention one point here: I have 4 machines in total, and on 3 of them the above
    commands work properly. Only one machine has this problem.
    Hi,
    You could try using xperf to collect the trace ETL file on this computer and see whether it can be extracted there.
    Refer to following articles:
    start
    http://msdn.microsoft.com/en-us/library/windows/hardware/hh162977.aspx
    Using Xperf to take a Trace (updated)
    http://blogs.msdn.com/b/pigscanfly/archive/2008/02/16/using-xperf-to-take-a-trace.aspx
    Kate Li
    TechNet Community Support
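    As a follow-up to the buffer advice quoted in the error: the authoritative list of options comes from "xperf -help start", but a start command with larger buffers looks roughly like this (switch names written from memory, so please verify them against your WPT version):
    xperf -on Base -BufferSize 1024 -MinBuffers 128 -MaxBuffers 512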

  • How can I extract the data from a Real-time Execution Trace ".log" file?

    I would like to get the data for the traces from the Real-time Execution Trace toolkit ".log" file to read in Excel and generate a report.

    Hi Chuck,
    Have you tried reading it into a text or binary file first and then generating a report using the Report Generation VIs?
    Ipshita C.
    National Instruments
    Applications Engineer

  • Changed date and time and lost reference to proxy files

    I had a batch of transcoded proxy files (from a Canon 550d) that had the wrong time on them. I used "Modify > Adjust content created date and time" to shift them by an hour. FCP now reports "Missing Proxy" for all of them. I've checked in the events folder and all the proxy files are still there, it just seems FCP has lost its reference to them.
    I can re-transcode them but of course that takes a long time. Two questions: anyone any idea why this happened and is there a way to re-link FCP with the existing files?
    Curiously I adjusted the time on some files that had been transcoded from a AVCHD source and these are still OK!
    (Using V 10.0.5 on a 2011 MBA with LaCie Thunderbolt disc)

    In the end I re-transcoded the files but I think this article may have been the answer:
    http://www.larryjordan.biz/fcpx-relinking/

  • Time stamp information in default trace file

    How can I check the timestamp in the defaultTrace.trc or application.trc files for logs on an XI server?
    I have seen the following kind of timestamp in the above-mentioned trace files, but it seems to me the system only writes it at server shutdown:
    ERROR       gateway shutdown
    TIME        Fri Jun 20 11:01:08 2008
    Apart from shutdown, how does the system record the timestamp for other activities in the default trace and application trace files?
    Sometimes I need to check the timestamp to find specific information about system activities in the default trace file.
    Is the timestamp encoded in the following kind of log line?
    #1.#36CC34C00F02009B000001B400001FF40004510B33EAB6D3#1215008464352#com.sap.engine.services.rfcengine##com.sap.engine.services.rfcengine.RFCDefaultRequestHand
    ler.handleRequest()#J2EE_GUEST#0##XQA#SAPSYS                          #4869A1FC78690A8DE10000000A2C0AC7#SAPEngine_Application_Thread[impl:3]_51##0#0#Error##P
    lain###java.lang.reflect.InvocationTargetException#
    #1.#36CC34C00F02009B000001B500001FF40004510B33EAB76C#1215008464352#com.sap.engine.services.rfcengine##com.sap.engine.services.rfcengine.RFCDefaultRequestHand
    ler.handleRequest()#J2EE_GUEST#0##XQA#SAPSYS                          #4869A1FC78690A8DE10000000A2C0AC7#SAPEngine_Application_Thread[impl:3]_51##0#0#Error##P
    lain###java.lang.reflect.InvocationTargetException
    Thanks
    Amar

    Hi Amarjit
    The timestamp is noted in unix epoch time (java does use this as well). It is this field:
    #1.#36CC34C00F02009B000001B400001FF40004510B33EAB6D3# 1215008464352 #com.sap.engine.services.rfcengine##com.sap.engine.services.rfcengine.RFCDefaultRequestHand
    ler.handleRequest()#J2EE_GUEST#0##XQA#SAPSYS
    The first 10 digits are seconds since the epoch; the last 3 digits are milliseconds. Converted to human-readable format: 07/02/2008 16:21:04.352 (see the conversion example at the end of this thread).
    [Unix Time|http://en.wikipedia.org/wiki/Unix_time]
    I remember having answered this already here in the forums, but I cannot find the thread anymore myself :-(((
    Best regards
    Michael
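    For reference, a quick way to do the conversion in SQL (a sketch; DATE arithmetic drops the milliseconds and the result is UTC, so the 16:21 shown above is presumably the poster's UTC+2 local time):
    select to_char(date '1970-01-01' + 1215008464352 / 86400000,
                   'MM/DD/YYYY HH24:MI:SS') as utc_time
      from dual;
    -- returns 07/02/2008 14:21:04 (UTC)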

  • SQL Query taking longer time as seen from Trace file

    Below are the query execution timings.
    Any help will be beneficial, as this is affecting business needs.
    SELECT MATERIAL_DETAIL_ID
    FROM
    GME_MATERIAL_DETAILS WHERE BATCH_ID = :B1 FOR UPDATE OF ACTUAL_QTY NOWAIT
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.70          0          0          0           0
    Execute   2256   8100.00   24033.51        627      12298      31739           0
    Fetch     2256    900.00     949.82          0      12187          0       30547
    total     4513   9000.00   24984.03        627      24485      31739       30547
    Thanks and Regards

    Thanks Buddy.
    Data Collected from Trace file:
    SELECT STEP_CLOSE_DATE
    FROM
    GME_BATCH_STEPS WHERE BATCH_ID
    IN (SELECT
    DISTINCT BATCH_ID FROM
    GME_MATERIAL_DETAILS START WITH BATCH_ID = :B2 CONNECT BY PRIOR PHANTOM_ID=BATCH_ID)
    AND NVL(STEP_CLOSE_DATE, :B1) > :B1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.54          0          0          0           0
    Execute   2256    800.00    1120.32          0          0          0           0
    Fetch     2256   9100.00   13551.45        396      77718          0           0
    total     4513   9900.00   14672.31        396      77718          0           0
    Misses in library cache during parse: 0
    Optimizer goal: CHOOSE
    Parsing user id: 66 (recursive depth: 1)
    Rows Row Source Operation
    0 TABLE ACCESS BY INDEX ROWID GME_BATCH_STEPS
    13160 NESTED LOOPS
    6518 VIEW
    6518 SORT UNIQUE
    53736 CONNECT BY WITH FILTERING
    30547 NESTED LOOPS
    30547 INDEX RANGE SCAN GME_MATERIAL_DETAILS_U1 (object id 146151)
    30547 TABLE ACCESS BY USER ROWID GME_MATERIAL_DETAILS
    23189 NESTED LOOPS
    53736 BUFFER SORT
    53736 CONNECT BY PUMP
    23189 TABLE ACCESS BY INDEX ROWID GME_MATERIAL_DETAILS
    23189 INDEX RANGE SCAN GME_MATERIAL_DETAILS_U1 (object id 146151)
    4386 INDEX RANGE SCAN GME_BATCH_STEPS_U1 (object id 146144)
    In the package there are lots of SQL statements using the CONNECT BY clause.
    Does the use of the CONNECT BY clause degrade performance?
    As you can see, the Rows column is 0, yet the query (buffer gets) and elapsed time figures are high.
    Regards

  • Time drift detected. Please check VKTM trace file for more details

    Running 11.2.0.2 on windows 64 bit virtualized..
    VKTM, Oracle's virtual keeper of time, is throwing a lot of warnings in the alert log. According to my research this is not a great concern (if someone could explain why, that would be great as well), but you should be able to suppress the trace file if you are at 11.2.0.2 by setting event = "10795 trace name context forever, level 2" in the parameter file (see the example statement at the end of this thread). I am using an spfile and did not use "alter system": I created a pfile and edited it, opened with it and created a new spfile, then opened with that. However, I am still receiving the trace files and the messages in the alert log; I am getting approximately 10 a day now, so the alert log is filling rather quickly. Has anyone else encountered this, or does anyone have advice on how to solve it? Thanks

    user12243721 wrote: [question quoted above]
    I get this as well on Windows 64-bit with a multi-CPU machine, with the VM set up with 2 virtual CPUs.
    Since it's not a production machine it hasn't bothered me, but there are three other side effects
    a) the "tim=" values in 10046 trace files look as if they have two parallel clocks running out of synch with each other - with the reported values jumping from one clock to the other every few milliseconds.
    b) sometimes the database refuses to restart with "vktm didn't start in time" error messages
    c) a couple of times a call to dbms_lock.sleep(0.01) has slept for a very long time - possibly because the timer started on the faster of the two clocks, and the system then jumped to the slower.
    I never trust the machine for performance tests, so the timing anomalies aren't a big issue for me, so I haven't followed it up; but I'd guess it's a vmware issue with the way it has virtualised the multiple CPUs.
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    Author: Oracle Core
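    For reference, the statement being described would look something like this (a sketch; EVENT is a static parameter, so it can only be changed with scope=spfile and takes effect after an instance restart):
    alter system set event='10795 trace name context forever, level 2' scope=spfile;
    -- then bounce the instance for the setting to take effect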

  • How to know date and time information from Trace File defaultTrace.X.trc?

    Hi, all.
      i'm using EP 6.0 SP9 patch 1.
      As you know, default trace is written in the <SID>\JC00\j2ee\cluster\server0\log\defaultTrace.X.trc.
      By using the default trace formatter, it shows like the following sample messages.
    #1.5#0011110E7B2000590000000100000A100003EC71562E6EBC#1104396451585#com.sap.jms.server.ServerClientAdapter##com.sap.jms.server.ServerClientAdapter#Guest#18####716257115a3f11d9b73b0011110e7b20#SAPEngine_Application_Thread[impl:3]_23##0#0#Error#1#/Applications/JMS#Plain###JMS internal error at ServerClientAdapter! JMS Service is not started!#
      My questions are
      1. how do we retrieve the date and time information from the above message? It seems that this message has date and time info because LogViewer shows the date and time based on the above message.
      2. how do we change the above date format to easier one
    like "YYYY:MM:DD:hh:mm:ss"? i know that it can be configured from the Visual Admin --> Services --> Log Configurator but i don't know the exact place for the trace file.
      Thanks.

    Hi Sejoon,
    I use the standalone Logviewer to read the log files. Works great.
    "If you have an SAP Web Application Server Java 6.20 or below you may also get a standalone_logviewer.zip file at the SAP Service Marketplace at service.sap.com/download &#8594; SAP NetWeaver &#8594; Release ‘04. In this case JDK version 1.3 or higher must be installed on the system. The java version must be same on the server and the client.
    In the SAP Web AS Java 6.30 installation a folder named logviewer_standalone can be found under: <path Of J2EE installation>/<SysID>/JC<nr>/j2ee/admin/logviewer_standalone. Verify that the batch file logviewer.bat is installed in the directory logviewer-standalone.
    (source -> http://help.sap.com/saphelp_nw04/helpdata/en/e4/540c404a435509e10000000a1550b0/frameset.htm)
    Best wishes,
    Noel

  • Fail to ping standby . error = 3113 in trace files (oracle 9i)

    Hi there. One of my standby databases is not working properly; I mean it is not synchronized with the primary. Both databases are 9i Release 2 running on Compaq Tru64 servers.
    I looked at the primary trace files and found this trace file:
    $ vi umercado_arc1_141731.trc
    "umercado_arc1_141731.trc" 28 lines, 1164 characters
    /oracle/app/oracle/admin/umercado/bdump/umercado_arc1_141731.trc
    Oracle9i Enterprise Edition Release 9.2.0.4.0 - Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.4.0 - Production
    ORACLE_HOME = /oracle/app/oracle/product/9.2.0
    System name: OSF1
    Node name: utora01.emsut.com.sv
    Release: V5.1
    Version: 2650
    Machine: alpha
    Instance name: umercado
    Redo thread mounted by this instance: 1
    Oracle process number: 13
    Unix process pid: 141731, image: [email protected] (ARC1)
    *** SESSION ID:(10.1) 2008-09-02 16:03:38.240
    Destination LOG_ARCHIVE_DEST_2 is in CLUSTER CONSISTENT mode
    Destination LOG_ARCHIVE_DEST_2 is in MAXIMUM PERFORMANCE mode
    Destination LOG_ARCHIVE_DEST_2 is in CLUSTER CONSISTENT mode
    Destination LOG_ARCHIVE_DEST_2 is in MAXIMUM PERFORMANCE mode
    *** 2008-09-03 11:22:29.405
    RFS network connection lost at host 'umercado3'
    Fail to ping standby 'umercado3', error = 3113
    Error 3113 when pinging standby umercado3.
    *** 2008-09-03 11:22:29.411
    kcrrfail: dest:3 err:3113 force:0
    Destination LOG_ARCHIVE_DEST_2 is in CLUSTER CONSISTENT mode
    Destination LOG_ARCHIVE_DEST_2 is in MAXIMUM PERFORMANCE mode
    The error I want to highlight is "FAIL TO PING STANDBY UMERCADO3, ERROR 3113"
    I did these tests on the primary server:
    - Unix ping to the standby node: ok
    - tnsping to the standby database (tnsping umercado3): ok
    Therefore I cannot find the reason for this error. For the time being I have deferred the transfer from the primary to the standby.
    Any advice is welcome.
    Thanks!

    Suggestion: would attempt to make a sqlplus connection from the primary to the standby using the tns entry that you have configured for the log_archive_dest parameter. you need to connect to the standby as sys.
    What happened:
    I did a sqlplus connection ON THE PRIMARY DATABASE using the tns entry that is exactly the same used in log_archive_dest_2 parameter. Here is it:
    SQL> conn sys/*********@UMERCADO3 as sysdba
    Connected.
    Therefore I do not think it is a problem in the standby password file.
    Any suggestions?

  • Sql net trace file

    Hi All,
    Our prod db is a 2-node RAC 10g on MS Windows 2003 servers, located in a remote place. Recently one of our local app servers lost its connection to this db: we could still tnsping it, but we got an ORA-3113 error when we tried to log on using SQL*Plus. Most of our local PCs had no problem logging on at all. We enabled the SQL*Net trace on this app server, and the network administrators spent a lot of time on this and finally shut down one of the app servers in the same remote location, which seemed to be causing a lot of network traffic with a local app server. With that, the problem went away and the app server can log on to the db now.
    I (I know nothing about networks) tried to read the SQL*Net trace file generated during the trouble time using the help outlined in "Examining Oracle Net Trace Files" written by Kevin Reardon. The error happened after the client sent the character set and conversion graph it supports, and before it received the character set and conversion graph from the server. It took a minute at this step and finally gave up. Following is the section of the trace file where the error happens.
    My question is: even though we know when the error happens and at what step, how can we use this info to further identify the root cause? Is it because we have too much network traffic, which caused a timeout, or some other reason(s)? By the way, our db servers have "INBOUND_CONNECT_TIMEOUT=180" (3 minutes) in the sqlnet.ora file, and the whole trace file starts at [06-NOV-2009 14:50:23:352] and ends at [06-NOV-2009 14:51:24:758], which is a little over 1 minute. I greatly appreciate your insights and thoughts.
    Shirley
    [06-NOV-2009 14:50:23:836] nsdo: normal exit
    [06-NOV-2009 14:50:23:836] nsdo: entry
    [06-NOV-2009 14:50:23:836] nsdo: cid=0, opcode=85, *bl=0, *what=0, uflgs=0x0, cflgs=0x3
    [06-NOV-2009 14:50:23:836] nsdo: rank=64, nsctxrnk=0
    [06-NOV-2009 14:50:23:836] nsdo: nsctx: state=8, flg=0x400d, mvd=0
    [06-NOV-2009 14:50:23:836] nsdo: gtn=127, gtc=127, ptn=10, ptc=32730
    [06-NOV-2009 14:50:23:836] nsdo: switching to application buffer
    [06-NOV-2009 14:50:23:836] nsrdr: entry
    [06-NOV-2009 14:50:23:836] nsrdr: recving a packet
    [06-NOV-2009 14:50:23:836] nsprecv: entry
    [06-NOV-2009 14:50:23:836] nsprecv: reading from transport...
    [06-NOV-2009 14:50:23:836] nttrd: entry
    [06-NOV-2009 14:51:24:742] nttrd: exit
    [06-NOV-2009 14:51:24:742] ntt2err: entry
    [06-NOV-2009 14:51:24:742] ntt2err: Read unexpected EOF ERROR on 644
    [06-NOV-2009 14:51:24:742] ntt2err: exit
    [06-NOV-2009 14:51:24:742] nsprecv: error exit
    [06-NOV-2009 14:51:24:742] nserror: entry
    [06-NOV-2009 14:51:24:742] nserror: nsres: id=0, op=68, ns=12537, ns2=12560; nt[0]=507, nt[1]=0, nt[2]=0; ora[0]=0, ora[1]=0, ora[2]=0
    [06-NOV-2009 14:51:24:742] nsrdr: error exit
    [06-NOV-2009 14:51:24:742] nsdo: nsctxrnk=0
    [06-NOV-2009 14:51:24:742] nsdo: error exit
    [06-NOV-2009 14:51:24:742] nioqer: entry
    [06-NOV-2009 14:51:24:742] nioqer: incoming err = 12151
    [06-NOV-2009 14:51:24:742] nioqce: entry
    [06-NOV-2009 14:51:24:742] nioqce: exit
    [06-NOV-2009 14:51:24:742] nioqer: returning err = 3113
    [06-NOV-2009 14:51:24:742] nioqer: exit
    [06-NOV-2009 14:51:24:742] nioqrc: exit
    [06-NOV-2009 14:51:24:742] nioqrs: entry

    I am certainly not an expert in this area, but these lines are of interest
    >
    06-NOV-2009 14:51:24:742 ntt2err: Read unexpected EOF ERROR on 644
    06-NOV-2009 14:51:24:742 nserror: nsres: id=0, op=68, ns=12537, ns2=12560; nt[0]=507, nt[1]=0, nt[2]=0; ora[0]=0, ora[1]=0, ora[2]=0
    06-NOV-2009 14:51:24:742 nioqer: incoming err = 12151
    >
    The TNS 12537, 12560 and 12151 codes indicate network errors.
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14219/tnsus.htm#sthref14851
    Is this a consistently reproducible error ? Do other applications on this network operate without error ?
    HTH
    Srini

  • Not able to get the actual plan from trace file

    Hi all,
    I have a DB package and want to get the actual execution plan of all the statements in that package. The trace does provide the plan for the system's (recursive) statements, but it does not display the plan for my SQL statements.
    DB version is 9.2.0. I am using the following sequence of instructions:
    set timing on
    set serveroutput on
    alter session set events '10046 trace name context forever ,level 12';
    begin
    run_service.collect_data(sysdate);
    end;
    alter session set sql_trace=false;
    exit; ---exit from Sql
    Now look at the output:
    select distinct obj#,containerobj#,pflags,xpflags,mflags
    from
    sum$, suminline$ where sumobj#=obj# and inline#=:1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1      0.00       0.00          1          1          0           0
    total        3      0.00       0.00          1          1          0           0
    Misses in library cache during parse: 0
    Optimizer goal: CHOOSE
    Parsing user id: SYS (recursive depth: 2)
    Rows Row Source Operation
    0 SORT UNIQUE
    0 NESTED LOOPS
    0 TABLE ACCESS BY INDEX ROWID SUMINLINE$
    0 INDEX RANGE SCAN I_SUMINLINE$_2 (object id 1614116)
    0 TABLE ACCESS BY INDEX ROWID SUM$
    0 INDEX UNIQUE SCAN I_SUM$_1 (object id 319)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file sequential read 1 0.00 0.00
    SELECT SEQ_NUM, S_DATE, S_TIME, CSTATUS, G_SERVICE,
    B_REFERENCE, V_REFERENCE, M_PRIORITY
    FROM GL_HIST
    ORDER BY S_DATE DESC, S_TIME DESC
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.01       0.01          0          0          0           0
    Execute   2819      0.37       0.32          0          0          0           0
    Fetch     2819      2.50      20.47       2786      20164          0        2819
    total     5639      2.88      20.81       2786      20164          0        2819
    Misses in library cache during parse: 1
    Optimizer goal: CHOOSE
    Parsing user id: 15550 (recursive depth: 1)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file sequential read 2786 0.05 18.19
    latch free 4 0.04 0.06
    UPDATE G_ORIG SET G_SERVICE = :B1
    WHERE
    SEQ_NUM = :B5 AND S_DATE = :B4 AND S_TIME = :B3 AND
    C_STATUS = :B2 AND NVL(G_SERVICE, '+') <> NVL(:B1, '+')
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.03          0          0          0           0
    Execute   3731      0.74       0.99        261      18712        119          54
    Fetch        0      0.00       0.00          0          0          0           0
    total     3732      0.74       1.02        261      18712        119          54
    Misses in library cache during parse: 1
    Optimizer goal: CHOOSE
    Parsing user id: 15550 (recursive depth: 1)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file sequential read 261 0.01 0.19
    latch free 9 0.01 0.04
    COMMIT

    Remove the line alter session set sql_trace=false and just exit/disconnect. The explain plan is contained in the STAT lines in the trace file, and those are only written when the cursor closes. If you turn off tracing before the cursor closes, the STAT lines will not get written. (A corrected sequence is sketched below.)
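    In other words, a corrected sequence would look roughly like this (a sketch based on the advice above):
    set timing on
    set serveroutput on
    alter session set events '10046 trace name context forever, level 12';
    begin
    run_service.collect_data(sysdate);
    end;
    /
    -- do not turn tracing off here; just disconnect so that all cursors close
    -- and the STAT (row source) lines get written to the trace file
    exit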

  • How to create the trace file using run_report_object at runtime

    Dear All
    using :
    Oracel Application Server 10g
    Oracle Database 11g
    Windows XP/sp3
    I'm using run_report_object to call a report inside the form. This report runs OK from Reports Builder; however, it is too slow when run from the Application Server.
    How can I create a trace file (at runtime) that contains the time spent in SQL and in formatting the layout of the report? (See the sketch at the end of this thread.)
    Here is My code :
    repid := find_report_object('report5');
    SET_REPORT_OBJECT_PROPERTY(repid,REPORT_FILENAME,'INVOICE.REP');
    v_url :='paramform=no';
    v_url := v_url||' FROM_NO=' || :PRINT_BLOCK.FROM_NO ;
    v_url := v_url ||' TO_NO=' || :PRINT_BLOCK.TO_NO ||' FROM_DATE=' || v_from_date ||' TO_DATE='|| v_to_date ||' NO_DATE=' ;
    v_url := v_url ||:PRINT_BLOCK.NO_DATE||' IDENT=' ||:PRINT_BLOCK.IDENT_NO||' REPORT_HEADING='''||V_REPORT_HEADING||'''' ;
    v_url := v_url||' COMPANY_NO='||:global.company_no;
    SET_REPORT_OBJECT_PROPERTY(repid,REPORT_OTHER,v_url);
    SET_REPORT_OBJECT_PROPERTY(repid,REPORT_SERVER,:GLOBAL.INV_REPORT_SERVER_NAME);
    SET_REPORT_OBJECT_PROPERTY(repid,REPORT_DESFORMAT,'pdf');
    v_rep := RUN_REPORT_OBJECT(repid);
    IF rep_status = 'FINISHED' THEN
    V1:='/reports/rwservlet/getjobid'||substr(v_rep,instr(v_rep,'_',-1)+1);
    WEB.SHOW_DOCUMENT('/reports/rwservlet/getjobid'||substr(v_rep,instr(v_rep,'_',-1)+1)||'?server='||REPORT_SERVER_NAME,'_blank');
    END IF;
    Thanks a lot

    Slow running reports often are not the result of a flawed report, but rather a flawed configuration. For example:
    1. If you call your reports (from Forms) via the default or inProcess Reports Server, often because startup time is slow, it will appear that it took too long for the report to be delivered. Using a stand-alone Reports Server is the preferred way to do this.
    2. If your Forms application makes numerous calls to RRO (RUN_REPORT_OBJECT), this can tend to result in what might appear as a memory leak (although it is not). The result is delayed processing because of the excessive memory use. This problem has been overcome in Forms/Reports 11 by the use of JVM pooling. However in v10 enabling "6i compatibility" mode is the way to overcome the issue. See Note 266073.1
    3. If the report runs fine from the Builder and it is connecting to the same db as when you run it from App Server, the issue is unlikely a db problem. However, if you want to look anyway, enable sqlnet tracing.
    4. To enable Reports tracing and investigate other tuning options, refer to the Reports 10 documentation:
    http://docs.oracle.com/cd/B14099_11/bi.1012/b14048/pbr_tune.htm
    Almost forgot to mention this one....
    If you are using a v11 db with App Server 10, you will probably want to consider reviewing Note 1099035.1 as it discusses an issue related to performance with such a configuration.
    Edited by: Michael Ferrante on Apr 10, 2012 8:49 AM
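    Regarding the original question about capturing SQL vs. formatting time at runtime: if I recall correctly, Reports accepts TRACEFILE, TRACEMODE and TRACEOPTS parameters, which could be appended to the REPORT_OTHER string. This is only a sketch from memory, so please verify the parameter names and values against the Reports reference before relying on it:
    -- hypothetical: append Reports tracing parameters to the existing v_url string
    v_url := v_url || ' TRACEFILE=invoice_trace.txt'
                   || ' TRACEMODE=TRACE_REPLACE'
                   || ' TRACEOPTS=TRACE_PRF';  -- TRACE_PRF should log performance/timing statistics
    SET_REPORT_OBJECT_PROPERTY(repid, REPORT_OTHER, v_url);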

  • Unable to generate trace file

    Hi,
    I have written the stored procedure for starting sql trace on a given session for say n number of seconds.
    create or replace procedure start_trace (
    v_sid in number,
    v_serial# in number,
    seconds in number)
    IS
    v_user varchar2 (32);
    duration number;
    dump_dest varchar2 (200);
    db_name varchar2 (32);
    no_session_found exception;
    stmt varchar2(100);
    stmt1 varchar2(100);
    BEGIN
    begin
    select username into v_user
    from v$session
    where sid= v_sid and
    serial# = v_serial#;
    exception
    when NO_DATA_FOUND then
    raise no_session_found;
    end;
    dbms_output.put_line('Tracing Started for User: '|| v_user);
    dbms_output.put_line('Tracing Start Time: '|| TO_CHAR(SYSDATE, 'MM-DD-YYYY HH24:MI:SS'));
    dbms_system.set_sql_trace_in_session(v_sid,v_serial#,true);
    if seconds is null then
    duration := 60;
    else
    duration := seconds;
    end if;
    dbms_lock.sleep(duration);
    dbms_system.set_sql_trace_in_session(v_sid,v_serial#,false);
    dbms_output.put_line ('Tracing Stop Time: '|| TO_CHAR(SYSDATE, 'MM-DD-YYYY HH24:MI:SS'));
    select value into dump_dest
    from v$parameter
    where name = 'user_dump_dest';
    dbms_output.put_line('Trace Directory: ' || dump_dest);
    exception
    when no_session_found then
    dbms_output.put_line('No session found for sid and serial# specified');
    END start_trace;
    The above procedure compiles successfully, and when I call it from the SQL prompt it reports that the PL/SQL procedure completed successfully and all the put_line statements are displayed.
    The real problem comes when I check udump for the trace file: I cannot find it there. It seems all the statements in the procedure execute successfully except dbms_system.set_sql_trace_in_session(v_sid,v_serial#,true) and dbms_system.set_sql_trace_in_session(v_sid,v_serial#,false), for some strange reason.
    Any help will be appreciated.
    Thanks.

    Thanks for the reply.
    I do not get any error message. The following is the output:
    SQL> exec start_trace(118,6243,30);
    Tracing Started for User: SVCWRK
    Tracing Start Time: 09-26-2011 16:28:29
    Tracing Stop Time: 09-26-2011 16:28:59
    Trace Directory: /orasoft/app/oracle/admin/testsvcb/udump
    PL/SQL procedure successfully completed.
    But the trace file is not generated.
    I am using Oracle 10.2.0.4.0.
    Will try using DBMS_MONITOR (see the sketch below).
    Thanks again.
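    For reference, the DBMS_MONITOR equivalent of the two dbms_system calls would look roughly like this (available from 10g on; sid/serial# taken from the example run above):
    -- enable extended SQL trace for the target session (waits on, binds off)
    exec dbms_monitor.session_trace_enable(session_id => 118, serial_num => 6243, waits => true, binds => false);
    -- ... wait for the desired interval ...
    exec dbms_monitor.session_trace_disable(session_id => 118, serial_num => 6243);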

  • Ora-00604 error while taking tkprof of a trace file

    Sorry, I am giving the full error but omitting the exact table names.
    Hi ,
    I have an error while taking tkprof of a trace file.
    I gave the following command ---
    tkprof <source.trc> <file.prc> sys=no sort=exeela,fchela,prsela explain= /
    error is --
    Error in create table of EXPLAIN PLAN table : unix_session_user.prof$paln_table
    ORA-00604: error occurred at recursive SQL level 1
    ORA-20001: Step-6:DDL
    Event Security. You are not permitted to perform the requested structural
    changes to PROF (TABLE)
    Event triggered : CREATE
    ora_login_user
    (session_user) : unix_session_user(dummy)
    Search : select count(*) from
    tabl(dummy table name) where obj_name like '%\%%' escape '\' and obj_type =
    'TABLE' and obj_type = 'USER' and ( event_CREATE = 'Y' or status =
    'Override')
    ORA-06512: at line 162
    ORA-06510: PL/SQL: unhandled
    user-defined exception
    EXPLAIN PLAN option disabled.
    I searched for the error, and in an Oracle forum I found a solution: http://forums.oracle.com/forums/thread.jspa?threadID=844287&tstart=0
    but after giving the table option it is giving the same error
    tkprof <source.trc> <file.prc> sys=no sort=exeela,fchela,prsela table=old_schema.plan_table explain= /
    it again gave the same error.
    In both cases it gives the elapsed time results, library cache misses, etc., but before that it throws the ORA-00604 error as stated above.
    Then I corrected the tkprof statement again:
    tkprof <source.trc> <file.prc> sys=no sort=exeela,fchela,prsela table=new_schema.plan_table explain= /
    (The schema names used here are dummy names.)
    My question is: did this error occur because we do not have sufficient privileges in the old_schema, while we do have those privileges in the new_schema? (See also the alternative command at the end of this thread.)
    My database version is 9.2.0.4.0
    Thanks in advance
    Edited by: bp on Feb 3, 2009 11:36 PM
    Edited by: bp on Feb 3, 2009 11:40 PM

    Please post the full error message here; there should be lines with ORA-00604 and then some other ORA errors as well.
    Are there any trace files generated during this error?
    And as you can see from the error description, you will probably have to contact Oracle Support in order to solve this case:
    oerr ora 00604
    00604, 00000, "error occurred at recursive SQL level %s"
    // *Cause:  An error occurred while processing a recursive SQL statement
    // (a statement applying to internal dictionary tables).
    // *Action: If the situation described in the next error on the stack
    // can be corrected, do so; otherwise contact Oracle Support.
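    Incidentally, the row source operations are already recorded in the trace file itself (the STAT lines mentioned in the previous thread), so if the plan-table creation keeps being blocked by that DDL trigger you could simply run tkprof without the explain option, for example:
    tkprof <source.trc> <file.prc> sys=no sort=exeela,fchela,prsela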
