Trace file rapidly increasing - dev_rd

Hello All,
The file system /usr/sap/SID is full because the dev_rd trace file is growing rapidly.
I have reset the trace, but it is growing rapidly again:
-rw-r--r--    1 sidadm   sapsys   21778106980 Dec 06 22:11 dev_rd
plaixlps101:sidadm 5> pwd
/usr/sap/SID/DVEBMGS00/work
Is there any method to prevent dev_rd from filling the file system (any parameter?)
Regards
Mohsin M

As per SAP Note 573800 - Reasons for trace files increasing in size:
4. With transaction SMGW, check to see whether the trace level was set to "2".
5. Call the "rspfpar" report in transaction SE38 and search for rdisp/TRACE. Make sure that rdisp/TRACE is set to 1.
trc file: "dev_rd", trc level: 2, release: "640"
rdisp/TRACE is set to 1
Answer (workaround):
6. Reset the trace files (as described in Section 1 of Note 532918).
7. Set the "rdisp/TRACE_LOGGING = on, 10 m" profile parameter
(10 m corresponds to a size of 10 megabytes)
8. If you do not want a trace to be propagated in your system, set the following profile parameter:
gw/accept_remote_trace_level = 0 (as of Kernel46D)
I have already reset the files; it made no difference.
The current value of rdisp/TRACE_LOGGING is OFF (can I set it to on, 10 m, i.e. rdisp/TRACE_LOGGING = on, 10 m?).
The current value of gw/accept_remote_trace_level is already 0.
So please suggest: can I set rdisp/TRACE_LOGGING = on, 10 m?
Regards
Mohsin Mulani
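
For completeness, a minimal sketch of applying the remaining part of the Note 573800 workaround at OS level; gw/accept_remote_trace_level is already 0 here, so only the TRACE_LOGGING cap is new. The instance profile path is an assumption derived from the host name shown above, and profile changes would normally be made via RZ10 rather than a raw append:

# 1. Truncate the runaway trace file in place to free the work directory right away
#    (the writer normally keeps appending to the truncated file, but verify on your release)
cat /dev/null > /usr/sap/SID/DVEBMGS00/work/dev_rd

# 2. Add the size cap recommended in the note to the instance profile and restart
#    the instance (profile path is an assumption):
echo 'rdisp/TRACE_LOGGING = on, 10 m' >> /usr/sap/SID/SYS/profile/SID_DVEBMGS00_plaixlps101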

Similar Messages

  • File system getting full // dev_icmmon rapidly increasing

    Hello All,
    The /sapmnt/SID/profile file system is getting full;
    in this file system, dev_icmmon is growing rapidly.
    more dev_icmmon
    [Thr 258] **** SigHandler: signal 1 received
    [Thr 01] *** ERROR => IcmReadAuthFile: could not open authfile: icmauth.txt - errno: 2 [icxxsec_mt.c 728]
    [Thr 258] **** SigHandler: signal 1 received
    [Thr 01] *** ERROR => IcmReadAuthFile: could not open authfile: icmauth.txt - errno: 2 [icxxsec_mt.c 728]
    please help me to resolve the issue
    Regards
    Mohsin M

    I have killed the icmon process; the problem is now solved.
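
    A rough sketch of that workaround and of the check the error message points to (errno 2 means the file was not found); the icmauth.txt location is controlled by the icm/authfile profile parameter, so the path below is only an assumption:

# find and stop the stray ICM monitor that keeps writing dev_icmmon
ps -ef | grep -i icmon
kill "$ICMON_PID"          # placeholder: PID taken from the ps output above

# check whether the authentication file the ICM complains about exists
# (default location is an assumption; see the icm/authfile parameter)
ls -l /usr/sap/SID/DVEBMGS00/sec/icmauth.txt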

  • Trace file of directory /ORACLE/ SID /SAPTRACE/USERTRACE increase ceaseless

    Dear all,
    Now I face a problem: the trace files in directory /ORACLE/<SID>/SAPTRACE/USERTRACE are increasing ceaselessly and very quickly; they grew to 8 GB in only two days, so the directory is constantly full because of this.
    Could anybody tell me why this is, and what could help reduce the amount of trace being produced?
    Thanks & Regards,
    Michael

    >
    mho wrote:
    > There could be various issues causing this. I recommend having a look at [1505012 - Unrequired Oracle trace data in R3trans and tp|https://service.sap.com/sap/support/notes/1505012], for a bug in R3trans and tp.
    >
    > If this does not match, then please tell us what's inside the files.
    >
    > Cheers Michael
    Thanks for your reply. I looked at the note and used transaction SM50 to check the trace level; the level is the default.
    This system is our Solution Manager; although the directory is full, the system can still be connected to.
    The trace files contents as below:
    *** 2011-04-25 12:22:41.995
    ksedmp: internal or fatal error
    ORA-00600: internal error code, arguments: [qertbFetchByRowID], [], [], [], [], [], [], []
    Current SQL statement for this session:
    SELECT * FROM "TBTCO" WHERE "JOBNAME" = :A0 AND "JOBCOUNT" = :A1
    Call Stack Trace -
    calling              call     entry                argument values in hex
    location             type     point                (? means dubious value)
    _ksedst+38           CALLrel  _ksedst+10           0 1
    _ksedmp+898          CALLrel  _ksedst+0            0
    _ksfdmp+14           CALLrel  _ksedmp+0            3
    _kgerinv+140         CALLreg  00000000             32560400 3
    _kgeasnmierr+19      CALLrel  _kgerinv+0           32560400 9548210 38FFDE0 0
                                                       ECDD670
    __VInfreq__qertbFet  CALLrel  _kgeasnmierr+0       32560400 9548210 38FFDE0 0
    chByRowID+2583
    _opifch2+3115        CALL???  00000000             2AFAEC3C 1E9B2F4 ECDD9D8 2
    _opiefn0+348         CALLrel  _opifch2+0           89 4 ECDDB7C
    _opiefn+21           CALLrel  _opiefn0+0           4E 4 ECDF698 0 0 0 0 0 0 0
    _opiodr+1099         CALLreg  00000000             4E 4 ECDF698
    _ttcpip+1273         CALLreg  00000000             4E 4 ECDF698 C
    _opitsk+1017         CALL???  00000000
    _opiino+1087         CALLrel  _opitsk+0            0 0
    _opiodr+1099         CALLreg  00000000             3C 4 ECDFC30
    _opidrv+819          CALLrel  _opiodr+0            3C 4 ECDFC30 0
    _sou2o+45            CALLrel  _opidrv+0            3C 4 ECDFC30
    _opimai_real+112     CALLrel  _sou2o+0             ECDFC24 3C 4 ECDFC30
    _opimai+92           CALLrel  _opimai_real+0       2 ECDFC5C
    _OracleThreadStart@  CALLrel  _opimai+0
    4+708
    77E66060             CALLreg  00000000
    Binary Stack Dump -
    ========== FRAME [1] (_ksedst+38 -> _ksedst+10) ==========
    Dump of memory from 0x0ECDD544 to 0x0ECDD554
    ECDD540          0ECDD554 0040467B 00000000      [T...{F@.....]
    ECDD550 00000001                             [....]
    ========== FRAME [2] (_ksedmp+898 -> _ksedst+0) ==========
    Dump of memory from 0x0ECDD554 to 0x0ECDD614
    Could you help to check this issue ?
    Thanks
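
    As a side note, ORA-600 [qertbFetchByRowID] generally points to a row fetched by a ROWID taken from an index that no longer matches the table. A minimal consistency check on the table from the failing statement (TBTCO) could look like the sketch below; the SAPSR3 schema name and the sqlplus connection method are assumptions, and the check should be scheduled with care since it reads the whole table and its indexes and takes locks:

# hedged sketch: validate table/index consistency for TBTCO (run as the Oracle software owner)
sqlplus -s / as sysdba <<'EOF'
ANALYZE TABLE SAPSR3.TBTCO VALIDATE STRUCTURE CASCADE;
EOF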

  • Generation of MMON process trace files in large file size (GB Size)

    Hi,
    I have created a database using DBCA on the Windows platform. A few days ago I found that MMON process trace files are being generated in the BDUMP directory. The files start out a few MB in size and grow to several GB. I know that background process trace files cannot be disabled, so for now I am forced to delete these files manually from the bdump directory. Please help me to resolve this issue.
    I have checked and verified the SGA size, Shared Pool size and other memory areas.
    The statistics level is set to TYPICAL as well.
    But still the files are generated.
    Please help.
    Shiyas

    Hi,
    as per your instruction I have checked the alert log file. I have pasted below a part of the errors found in it.
    Mon Jun 07 09:30:58 2010
    Errors in file d:\oracle\product\10.2.0\admin\mir\bdump\mir_mmon_652.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Mon Jun 07 09:31:02 2010
    Errors in file d:\oracle\product\10.2.0\admin\mir\bdump\mir_mmon_652.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Mon Jun 07 09:36:00 2010
    Errors in file d:\oracle\product\10.2.0\admin\mir\bdump\mir_mmon_652.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Mon Jun 07 09:36:08 2010
    Restarting dead background process MMON
    MMON started with pid=11, OS id=656
    Mon Jun 07 09:36:11 2010
    Errors in file d:\oracle\product\10.2.0\admin\mir\bdump\mir_mmon_656.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Mon Jun 07 09:36:15 2010
    Errors in file d:\oracle\product\10.2.0\admin\mir\bdump\mir_mmon_656.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Mon Jun 07 09:41:12 2010
    Errors in file d:\oracle\product\10.2.0\admin\mir\bdump\mir_mmon_656.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Mon Jun 07 09:41:16 2010
    Errors in file d:\oracle\product\10.2.0\admin\mir\bdump\mir_mmon_656.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Mon Jun 07 09:46:13 2010
    Errors in file d:\oracle\product\10.2.0\admin\mir\bdump\mir_mmon_656.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Mon Jun 07 09:46:17 2010
    Errors in file d:\oracle\product\10.2.0\admin\mir\bdump\mir_mmon_656.trc:
    ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []
    Mon Jun 07 09:50:18 2010
    Shutting down instance: further logons disabled
    Mon Jun 07 09:50:19 2010
    Stopping background process QMNC
    Mon Jun 07 09:50:19 2010
    Stopping background process CJQ0
    Mon Jun 07 09:50:20 2010
    Stopping background process MMNL
    Mon Jun 07 09:50:21 2010
    Stopping background process MMON
    Mon Jun 07 09:50:22 2010
    Shutting down instance (immediate)
    License high water mark = 4
    Mon Jun 07 09:50:22 2010
    Stopping Job queue slave processes, flags = 7
    Mon Jun 07 09:50:22 2010
    Job queue slave processes stopped
    Waiting for dispatcher 'D000' to shutdown
    All dispatchers and shared servers shutdown
    Mon Jun 07 09:50:23 2010
    alter database close normal
    Mon Jun 07 09:50:23 2010
    SMON: disabling tx recovery
    SMON: disabling cache recovery
    Mon Jun 07 09:50:23 2010
    Shutting down archive processes
    Archiving is disabled
    Archive process shutdown avoided: 0 active
    Thread 1 closed at log sequence 71
    Successful close of redo thread 1
    Mon Jun 07 09:50:23 2010
    Completed: alter database close normal
    Mon Jun 07 09:50:23 2010
    alter database dismount
    Completed: alter database dismount
    ARCH: Archival disabled due to shutdown: 1089
    Shutting down archive processes
    Archiving is disabled
    Archive process shutdown avoided: 0 active
    ARCH: Archival disabled due to shutdown: 1089
    Shutting down archive processes
    Archiving is disabled
    Archive process shutdown avoided: 0 active
    But I am not able to understand anything in the output above.
    And I am sorry, we do not have Metalink or SR support.
    Is there any other way to resolve this issue?
    Shiyas

  • Deleting default trace file in java

    Hello all,
    What's the best way to automatically delete the default trace files on a Java instance? I want to delete these files, let's say, every week or two weeks. Is there any configuration that can be done through NWA or at OS level, or do I have to write some scripts? Please advise.

    Perfect.
    When the count is set to 10, files 0 to 9 are used cyclically. Regarding archiving, yes, this is possible. Apart from the defaulttrace files you can also archive other Java log files (system, network, security etc.). The configuration procedure is explained here:
    http://help.sap.com/saphelp_nw04/helpdata/en/48/2edfd5bd3e0d4a81b90325fe195a70/frameset.htm
    Once the 10th trace file (i.e. defaulttrace9) is full, all the trace files (defaulttrace0 to defaulttrace9) are zipped into a single file. The compression ratio is extremely good.
    Unfortunately there is no automated mechanism to clear off the archived files; this needs manual intervention (regular cleanup, a script, etc.), as in the sketch after this post.
    The archived log files may also be viewed for analysis from NWA.
    While estimating the size for each defaulttrace, consider three factors: i) system load, ii) number of defaulttrace files, and iii) retention days (without archiving).
    Example, based on observed system load:
    Say the trace file size is 10 MB, 6 trace files get written per 4 hours of peak time and 1 trace file per 4 hours of non-peak time, and assume 8 hours of peak business hours with the rest non-working hours.
    Number of files generated per weekday: 6 x 2 + 1 x 4 = 16 files of 10 MB each.
    If the defaulttrace file count is set to 16, you can retain one day's logs. If you wish to retain the same with fewer files (say 8), you would have to increase the file size to 20 MB. You will also have to look at file system space availability.
    CAUTION: For a file system occupancy estimate, you also have to consider how many server nodes are configured, as each node has its own log area!
    For this you need to observe the pattern of how many files are being written, then estimate the size and count accordingly.
    cheers !
    PRADi
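
    Since there is no built-in purge for the archived files, a minimal cleanup sketch that could be run from cron is shown below; the log path, the defaultTrace*.zip name pattern and the 14-day retention are all assumptions to adapt to your instance layout:

# hedged sketch: remove archived default trace zips older than 14 days
find /usr/sap/SID/JC00/j2ee/cluster/server0/log -name 'defaultTrace*.zip' -mtime +14 -exec rm -f {} \;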

  • Alert log and Trace file error

    Just found this in one of our alert log files:
    Thread 1 cannot allocate new log, sequence 199023
    checkpoint not complete
    and this in trace file:
    RECO.TRC file
    ERROR, tran=7.93.23662, session# =1, ose=60:
    ORA-12535: TNS: operation timed out

    Why would you increase the log files when the problem is a distributed transaction timed out?
    Distributed transactions time out when the data they need to access is locked. Unlike a local session that wants to update a row, which will wait forever, a distributed transaction times out. In earlier versions of Oracle you could set init.ora parameter distributed_lock_timeout to manage the timeout period. Oracle has since made this into an underbar parameter.
    The solution is to ignore the problem unless it appears regularly in which case you have an application design issue.
    HTH -- Mark D Powell --
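
    For reference, a sketch of how the timeout Mark mentions could be inspected; since it is now a hidden parameter, this relies on undocumented X$ views and SYSDBA access, so treat it purely as illustrative:

# hedged sketch: show the current value of the hidden distributed lock timeout
sqlplus -s / as sysdba <<'EOF'
SELECT a.ksppinm AS name, b.ksppstvl AS value
  FROM x$ksppi a, x$ksppcv b
 WHERE a.indx = b.indx
   AND a.ksppinm = '_distributed_lock_timeout';
EOF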

  • Details missing in default trace file of XI

    Hi Experts,
    I am doing an RFC-to-RFC scenario. I am getting the checkered flag in SXMB_MONI, but for some reason I am not getting the details of my scenario in the default trace of XI. I am not sure whether the details of scenarios involving RFCs are written to the trace files at all. If they are, please help me find the reasons why it is not happening in my case.
    Thanks and Regards,
    Hari.

    Hi,
    Increase your logging/tracing levels in the Integration Engine configuration to see the synchronous messages in SXMB_MONI.
    1.Execute SXMB_ADM in the ABAP stack of XI
    2.Navigate to Configuration --> Integration Engine Configuration --> Change Specific Configuration Data
    Set the following:
    Category : Runtime
    Parameter : LOGGING_SYNC
    value : 1 (activated)
    Parameter : TRACE_LEVEL
    value : 3 (activated)
    For information on how to activate and deactivate traces, see the following SAP Note: 532918    (RFC Trace Generation)
    also go with below links
    Configuring the Trace File
    http://help.sap.com/saphelp_nw04/helpdata/en/3d/93532ad37011d194ba00a0c94260a5/frameset.htm
    Enqueue Trace Analysis
    http://help.sap.com/saphelp_nw04/helpdata/en/3d/93532ad37011d194ba00a0c94260a5/frameset.htm
    Enqueue Trace Records
    http://help.sap.com/saphelp_nw04/helpdata/en/3d/93532ad37011d194ba00a0c94260a5/frameset.htm
    Please let me know if this helps you or if you need any more info.

  • Process m001 died, see its trace file

    Hi
    Kindly tell me why the process died; please look at the trace file:
    Process m001 died, see its trace file
    Thu Dec 28 14:30:53 2006
    ksvcreate: Process(m001) creation failed
    ORACLE V10.2.0.1.0 - Production vsnsta=0
    vsnsql=14 vsnxtr=3
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    Windows 2000 Version V5.0 Service Pack 3
    CPU : 4 - type 586
    Process Affinity : 0x00000000
    Memory (Avail/Total): Ph:443M/2047M, Ph+PgF:3071M/4967M, VA:1572M/2047M
    Instance name: xxxx
    Redo thread mounted by this instance: 1
    Oracle process number: 11
    Windows thread id: 4712, image: ORACLE.EXE (MMON)
    *** SERVICE NAME:(SYS$BACKGROUND) 2006-12-26 18:31:12.750
    *** SESSION ID:(29.1) 2006-12-26 18:31:12.750
    *** 2006-12-26 18:31:12.750
    Process m001 is dead (pid=5516, state=3):
    *** 2006-12-27 15:30:13.421
    Process m001 is dead (pid=6128, state=3):
    *** 2006-12-27 16:31:07.359
    Process m001 is dead (pid=5572, state=3):
    *** 2006-12-28 14:30:53.312
    Process m001 is dead (pid=5792, state=3):

    According to
    'KSVCREATE: PROCESS(M000) CREATION FAILED' MESSAGES IN ALERT LOG
         Doc ID:      Note:352388.1      Type:      PROBLEM
         Last Revision Date:      14-SEP-2006      Status:      PUBLISHED
    you should increase the PROCESSES parameter of your database instance.
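
    A minimal sketch of checking how close the instance is to the limit before raising it; the target value of 300 is only an example, and an spfile plus an instance restart are assumed:

# hedged sketch: check process usage, then raise PROCESSES (restart required)
sqlplus -s / as sysdba <<'EOF'
SELECT resource_name, current_utilization, max_utilization, limit_value
  FROM v$resource_limit
 WHERE resource_name = 'processes';
ALTER SYSTEM SET processes=300 SCOPE=SPFILE;
EOF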

  • 1250737 - SMMS: Trace file dev_ms could not be opened

    Hi All,
    "1250737 - SMMS: Trace file dev_ms could not be opened"
    I have this error in SMMS on a CRM 7.0 W2K8 failover cluster, but the SAP Note does not apply due to the version. Everything works, so I think it is not security related.
    Could you please help me in finding the root cause for the above error.
    Thanks,
    Rudolf

    Hi Chetan,
    according to SMMS (Goto / Trace level / Increase, Decrease) it seems that the trace level can be changed in every combination where CI/DI/ASCS are running (see the investigations below):
    But by chance I found the following (1.):
    1. If only the DI is started, I can read dev_ms via SMMS when the ASCS is on the "DI" node -> neither (I)* nor (II)*
    - After moving the ASCS to the "CI" node, dev_ms was not found -> (II)*
    *Compared with Note 1250737:
    (I) using CI -> dev_ms could not be opened (due to M in CI "dev_ms on C":)
    (II) using Application-Server not on the "ASCS-Node" -> dev_ms not found (absolute Path used instead of share)
    2. If only CI was started, dev_ms could not be opened, regardless where the ASCS is running -> (I)
    3. Both Application-Server running and ASCS on "CI"-Node:
    - logged on DI for smms trying reading dev_ms via DI -> dev_ms not found (II)
      (logged on DI for smms trying reading dev_ms via CI -> dev_ms not found (II))
    - logged on CI for smms trying reading dev_ms via DI -> dev_ms could not be opened (I)
      (logged on CI for smms trying reading dev_ms via CI -> dev_ms could not be opened (I))
    4. Both Application-Server running and ASCS on "DI"-Node:
    - logged on DI for smms trying reading dev_ms via DI -> dev_ms could not be opened (?)
      (logged on DI for smms trying reading dev_ms via CI -> dev_ms could not be opened (?))
    - logged on CI for smms trying reading dev_ms via DI -> dev_ms could not be opened (I)
      (logged on CI for smms trying reading dev_ms via CI -> dev_ms could not be opened (I))
    I don't know if it is really related to Note 1250737; maybe it has something to do with an access violation: why is (1.) working, and (4.) not?
    Thank you very much for your effort
    Regards
    Rudolf

  • Trace File -ST01

    In QAS and PRD I am getting an excellent trace file in ST01, but on the PRD system I am not getting any; the file itself is not created. I am on 46C.
    Thanks

    A specified value overrides the default.
    What this means is that rstr/max_diskspace has a default value of 16 384 000; if the current value of the parameter rstr/max_diskspace is 0, then that is the value the system will take.
    You can either delete the parameter from your profile (which returns it to the default) or increase the value to more than 0 to give the trace some space to grow, as in the sketch below.
    Regards
    Juan
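
    A minimal sketch of checking and correcting the parameter at OS level; the profile path and placeholders are assumptions, and the instance must be restarted for a profile change to take effect:

# hedged sketch: check the current profile setting for the ST01 trace space
grep rstr/max_diskspace "/usr/sap/<SID>/SYS/profile/<SID>_DVEBMGS00_<host>"

# then either delete that line (falling back to the default of 16 384 000 bytes)
# or set an explicit non-zero value in the instance profile, e.g.:
#   rstr/max_diskspace = 16384000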

  • Capping dev_ms trace file size

    Hi, I am wondering if anyone knows a way to help. We have been asked by SAP to run our message server at an elevated trace level (trace level 3, set from SMMS). This writes a huge amount of data to the dev_ms trace file.
    The default trace file (dev*) size, per the rdisp/TRACE_LOGGING parameter default, is "on, 10m" (i.e. 10 MB).
    That obviously applies to all the dev* trace files.
    With our elevated dev_ms trace set, the file wraps around in about 3 minutes, so between dev_ms.old and dev_ms we never have more than roughly 10 minutes of logs kept at any one time.
    The point of running at this elevated dev_ms trace level is so that we can capture (save off) the trace file and send it to SAP the next time our message server crashes.
    Our SAP file system mount point /usr/sap/<SID> is limited in size, and setting rdisp/TRACE_LOGGING to a higher value affects all the dev* files, not just the one file I really care about raising the cap on (dev_ms).
    QUESTION: does anyone know a way I could keep dev_ms capped at a large value like 100 MB yet keep all the other dev* files at the normal 10 MB default? Thanks in advance.

    1.  Increase rdisp/TRACE_LOGGING to 100MB.
    2.  Set (SM51) > Select All Processes > Menu > Process > Trace > Active Components > Uncheck everything and set trace level to 1.
    3.  Menu > Process > Trace > Dispatcher > Change Trace Level > Set to 2
    Wouldn't this essentially just increase dev_ms to 100MB while leaving other dev* trace files to not log anything?
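
    If the parameter route is not granular enough, one OS-level workaround is to keep copying dev_ms.old to a larger archive area before it is overwritten again, along the lines of the sketch below; the paths, the archive target and the 2-minute interval are assumptions tuned to the ~3-minute wrap described above:

# hedged workaround sketch: preserve message server trace history outside /usr/sap/<SID>
# (keeps duplicate copies when the file has not rotated yet; prune them later)
while true; do
  cp -p "/usr/sap/<SID>/DVEBMGS00/work/dev_ms.old" \
        "/archive/dev_ms_trace/dev_ms.$(date +%Y%m%d%H%M%S)"
  sleep 120
done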

  • ORA-48913: Writing into trace file failed

    Hi
    my OS: OUL5x64
    DB: 11.1.0.7
    I receive this error in alert.log but could not figure out which parameter to increase.
    Can someone please help.
    Non critical error ORA-48913 caught while writing to trace file
    Error message: ORA-48913: Writing into trace file failed, file size limit [10485760] reached
    the suggestion:
    ORA-48913: Writing into trace file failed
    *Cause:An attempt was made to write into a trace file that exceeds the trace's file size limit
    *Action:increase the trace's file size limit.
    Thanks in advance.

    Hi ,
    I have one more doubt :
    ORACLE_SID=XXXX
    /XXXX/XX/ofaroot/XXXX/diag/rdbms/xxxx/XXXXX/trace
    Non critical error ORA-48913 caught while writing to trace file "/XXXX/XX/ofaroot/XXXX/diag/rdbms/xxxx/XXXXX/trace/XXXX_ora_8218.trc"
    Error message: ORA-48913: Writing into trace file failed, file size limit [10485760] reached
    Everywhere it is written to increase the parameter max_dump_file_size or to relocate the alert log, but as far as I understand, this happened because a trace file named XXXX_ora_8218.trc was being generated with a size greater than the one defined in max_dump_file_size. Is that what happened?
    Also, I am not able to find which directory this parameter points to: is it the trace directory or the diag directory?
    I checked select * from v$diag_info, but I could not reach any conclusion.
    Probably, once I have the above information, I will be able to decide where to move the alert.log to create space.
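
    A sketch of how both questions could be checked from sqlplus; the 100M value is only an example of raising the limit, and SYSDBA access plus an spfile are assumed:

# hedged sketch: locate the trace directory and raise the per-trace-file size limit
sqlplus -s / as sysdba <<'EOF'
SELECT name, value FROM v$diag_info WHERE name IN ('Diag Trace', 'Default Trace File');
SHOW PARAMETER max_dump_file_size
ALTER SYSTEM SET max_dump_file_size='100M' SCOPE=BOTH;
EOF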

  • Not able to get the actual plan from trace file

    Hi all
    I have a DB package and want to get the actual execution plan of all the statements in that package. The trace provides the plan for the system's recursive statements but does not display the plan for my own SQL statements.
    DB version is 9.2.0. I am using the following sequence of instructions:
    set timing on
    set serveroutput on
    alter session set events '10046 trace name context forever ,level 12';
    begin
    run_service.collect_data(sysdate);
    end;
    alter session set sql_trace=false;
    exit; ---exit from Sql
    now look at the output
    select distinct obj#,containerobj#,pflags,xpflags,mflags
    from
    sum$, suminline$ where sumobj#=obj# and inline#=:1
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 1 0.00 0.00 1 1 0 0
    total 3 0.00 0.00 1 1 0 0
    Misses in library cache during parse: 0
    Optimizer goal: CHOOSE
    Parsing user id: SYS (recursive depth: 2)
    Rows Row Source Operation
    0 SORT UNIQUE
    0 NESTED LOOPS
    0 TABLE ACCESS BY INDEX ROWID SUMINLINE$
    0 INDEX RANGE SCAN I_SUMINLINE$_2 (object id 1614116)
    0 TABLE ACCESS BY INDEX ROWID SUM$
    0 INDEX UNIQUE SCAN I_SUM$_1 (object id 319)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file sequential read 1 0.00 0.00
    SELECT SEQ_NUM, S_DATE, S_TIME, CSTATUS, G_SERVICE,
    B_REFERENCE, V_REFERENCE, M_PRIORITY
    FROM GL_HIST
    ORDER BY S_DATE DESC, S_TIME DESC
    call count cpu elapsed disk query current rows
    Parse 1 0.01 0.01 0 0 0 0
    Execute 2819 0.37 0.32 0 0 0 0
    Fetch 2819 2.50 20.47 2786 20164 0 2819
    total 5639 2.88 20.81 2786 20164 0 2819
    Misses in library cache during parse: 1
    Optimizer goal: CHOOSE
    Parsing user id: 15550 (recursive depth: 1)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file sequential read 2786 0.05 18.19
    latch free 4 0.04 0.06
    UPDATE G_ORIG SET G_SERVICE = :B1
    WHERE
    SEQ_NUM = :B5 AND S_DATE = :B4 AND S_TIME = :B3 AND
    C_STATUS = :B2 AND NVL(G_SERVICE, '+') <> NVL(:B1, '+')
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.03 0 0 0 0
    Execute 3731 0.74 0.99 261 18712 119 54
    Fetch 0 0.00 0.00 0 0 0 0
    total 3732 0.74 1.02 261 18712 119 54
    Misses in library cache during parse: 1
    Optimizer goal: CHOOSE
    Parsing user id: 15550 (recursive depth: 1)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file sequential read 261 0.01 0.19
    latch free 9 0.01 0.04
    COMMIT

    Remove the line "alter session set sql_trace=false" and just exit/disconnect. The execution plan is contained in the STAT lines in the trace file, and these are only written when the cursor closes. If you turn off tracing before the cursor closes, the STAT lines will not get written.
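
    A sketch of the corrected sequence following that advice; the connect string is a placeholder, and exiting the session closes the cursors so the STAT lines are flushed to the trace file:

# hedged sketch: enable extended SQL trace, run the package, then just disconnect
# ($DB_CONNECT is a placeholder connect string, e.g. user/password@db)
sqlplus "$DB_CONNECT" <<'EOF'
ALTER SESSION SET events '10046 trace name context forever, level 12';
BEGIN
  run_service.collect_data(SYSDATE);
END;
/
EXIT
EOF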

  • How to create the trace file using run_report_object at runtime

    Dear All
    using :
    Oracel Application Server 10g
    Oracle Database 11g
    Windows XP/sp3
    I am using run_report_object to call a report from inside the form. The report runs fine from Reports Builder, but it is too slow when run from the Application Server.
    How can I create a trace file (at runtime) that contains the time spent in SQL and in formatting the layout of the report?
    Here is My code :
    repid := find_report_object('report5');
    SET_REPORT_OBJECT_PROPERTY(repid,REPORT_FILENAME,'INVOICE.REP');
    v_url :='paramform=no';
    v_url := v_url||' FROM_NO=' || :PRINT_BLOCK.FROM_NO ;
    v_url := v_url ||' TO_NO=' || :PRINT_BLOCK.TO_NO ||' FROM_DATE=' || v_from_date ||' TO_DATE='|| v_to_date ||' NO_DATE=' ;
    v_url := v_url ||:PRINT_BLOCK.NO_DATE||' IDENT=' ||:PRINT_BLOCK.IDENT_NO||' REPORT_HEADING='''||V_REPORT_HEADING||'''' ;
    v_url := v_url||' COMPANY_NO='||:global.company_no;
    SET_REPORT_OBJECT_PROPERTY(repid,REPORT_OTHER,v_url);
    SET_REPORT_OBJECT_PROPERTY(repid,REPORT_SERVER,:GLOBAL.INV_REPORT_SERVER_NAME);
    SET_REPORT_OBJECT_PROPERTY(repid,REPORT_DESFORMAT,'pdf');
    v_rep := RUN_REPORT_OBJECT(repid);
    IF rep_status = 'FINISHED' THEN
    V1:='/reports/rwservlet/getjobid'||substr(v_rep,instr(v_rep,'_',-1)+1);
    WEB.SHOW_DOCUMENT('/reports/rwservlet/getjobid'||substr(v_rep,instr(v_rep,'_',-1)+1)||'?server='||REPORT_SERVER_NAME,'_blank');
    END IF;
    Thanks a lot

    Slow running reports often are not the result of a flawed report, but rather a flawed configuration. For example:
    1. If you call your reports (from Forms) via the default or in-process Reports Server, often because startup time is slow, it will appear that it took too long for the report to be delivered. Using a stand-alone Reports Server is the preferred way to do this.
    2. If your Forms application makes numerous calls to RRO (RUN_REPORT_OBJECT), this can tend to result in what might appear as a memory leak (although it is not). The result is delayed processing because of the excessive memory use. This problem has been overcome in Forms/Reports 11 by the use of JVM pooling. However in v10 enabling "6i compatibility" mode is the way to overcome the issue. See Note 266073.1
    3. If the report runs fine from the Builder and it is connecting to the same db as when you run it from App Server, the issue is unlikely a db problem. However, if you want to look anyway, enable sqlnet tracing.
    4. To enable Reports tracing and investigate other tuning options, refer to the Reports 10 documentation:
    http://docs.oracle.com/cd/B14099_11/bi.1012/b14048/pbr_tune.htm
    Almost forgot to mention this one:
    If you are using a v11 db with App Server 10, you will probably want to consider reviewing Note 1099035.1 as it discusses an issue related to performance with such a configuration.
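
    One hedged possibility for the runtime trace itself: Oracle Reports accepts the TRACEFILE, TRACEMODE and TRACEOPTS command-line keywords, which can usually also be passed in an rwservlet request (or appended to the REPORT_OTHER string in the form). The sketch below is illustrative only; the host, port, server and report names are assumptions, and whether your configuration allows these keywords via URL should be verified:

# hedged sketch: request the report through rwservlet with Reports tracing enabled
curl "http://appserver:7778/reports/rwservlet?report=INVOICE.REP&server=rep_server&destype=cache&desformat=pdf&tracefile=/tmp/invoice_trace.txt&tracemode=trace_replace&traceopts=trace_all"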

  • Unable to generate trace file

    Hi,
    I have written the stored procedure for starting sql trace on a given session for say n number of seconds.
    create or replace procedure start_trace (
      v_sid in number,
      v_serial# in number,
      seconds in number)
    IS
    v_user varchar2 (32);
    duration number;
    dump_dest varchar2 (200);
    db_name varchar2 (32);
    no_session_found exception;
    stmt varchar2(100);
    stmt1 varchar2(100);
    BEGIN
    begin
    select username into v_user
    from v$session
    where sid= v_sid and
    serial# = v_serial#;
    exception
    when NO_DATA_FOUND then
    raise no_session_found;
    end;
    dbms_output.put_line('Tracing Started for User: '|| v_user);
    dbms_output.put_line('Tracing Start Time: '|| TO_CHAR(SYSDATE, 'MM-DD-YYYY HH24:MI:SS'));
    dbms_system.set_sql_trace_in_session(v_sid,v_serial#,true);
    if seconds is null then
    duration := 60;
    else
    duration := seconds;
    end if;
    dbms_lock.sleep(duration);
    dbms_system.set_sql_trace_in_session(v_sid,v_serial#,false);
    dbms_output.put_line ('Tracing Stop Time: '|| TO_CHAR(SYSDATE, 'MM-DD-YYYY HH24:MI:SS'));
    select value into dump_dest
    from v$parameter
    where name = 'user_dump_dest';
    dbms_output.put_line('Trace Directory: ' || dump_dest);
    exception
    when no_session_found then
    dbms_output.put_line('No session found for sid and serial# specified');
    END start_trace;
    The above procedure compiles successfully, and when I call it from the SQL prompt it reports that the PL/SQL procedure completed successfully and all the put_line output is displayed.
    The real problem comes when I check udump for the trace file: I cannot find it there. It seems all the statements in the procedure are executed successfully except dbms_system.set_sql_trace_in_session(v_sid, v_serial#, true) and dbms_system.set_sql_trace_in_session(v_sid, v_serial#, false), for some strange reason.
    Any help will be appreciated.
    Thanks.

    Thanks for the reply.
    I do not get any error message. The following is the output:
    SQL> exec start_trace(118,6243,30);
    Tracing Started for User: SVCWRK
    Tracing Start Time: 09-26-2011 16:28:29
    Tracing Stop Time: 09-26-2011 16:28:59
    Trace Directory: /orasoft/app/oracle/admin/testsvcb/udump
    PL/SQL procedure successfully completed.
    But the trace file is not generated.
    I am using Oracle 10.2.0.4.0.
    Will try using DBMS_MONITOR.
    Thanks again.
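
    For reference, a sketch of the DBMS_MONITOR route mentioned above, using the same sid/serial# as in the test call; SYSDBA access is assumed, and on 10.2 the trace file still lands in user_dump_dest (udump):

# hedged sketch: extended SQL trace for one session via DBMS_MONITOR
# (sid 118 / serial# 6243 taken from the test call above)
sqlplus -s / as sysdba <<'EOF'
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 118, serial_num => 6243, waits => TRUE, binds => FALSE);
EOF
# ... wait while the traced session does its work, then:
sqlplus -s / as sysdba <<'EOF'
EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 118, serial_num => 6243);
EOF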
