Trace file handling

SQLDeveloper 1.5 gives us sophisticated trace file handling.
However, this trace file handling appears to be designed for viewing files containing sql traces
(alter session set sql_trace=true).
While it's certainly true that the majority of trace files viewed in SQLDeveloper will contain
SQL traces, there are cases when I have to view other content in trace files (anything
from error reports to the output of tracing events).
The current interface only allows me to view the raw file contents in the history tab of the trace
file's window, and I'm not convinced that this is the best way to view the raw contents of
a trace file.
I know that I could always open the trace file with the editor of my choice to view the raw
contents, but I would still like to be able to do that from within SQLDeveloper.
I'd love to see a tab to just view the raw contents of the file. Do others feel the need for such
a tab too? Do you feel that this idea has enough merit to be posted as an enhancement request?
All the best
Michael
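
As an aside, a minimal sketch (not part of Michael's post) of the workflow he describes - enabling a SQL trace and locating the resulting file so it can be opened in an external editor; on the 10g databases SQL Developer 1.5 typically connects to, the file lands in user_dump_dest (the tracefile_identifier tag is a hypothetical label, just to make the file easy to spot):

alter session set tracefile_identifier = 'raw_view_demo';
alter session set sql_trace = true;
-- ... run the statements to be traced ...
alter session set sql_trace = false;
select value from v$parameter where name = 'user_dump_dest';  -- directory containing the .trc file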

Similar Messages

  • Duplicate File Handling Issues - Sender File Adapter - SAP PO 7.31 - Single Stack

    Hi All,
    We have a requirement to avoid processing of duplicate files. Our system is PI 7.31 Enh. Pack 1 SP 23. I tried using the 'Duplicate File Handling' feature in the Sender File Adapter, but things are not working out as expected. I processed the same file again and again, and PO creates successful messages every time rather than generating alerts/warnings or deactivating the channel.
    I went through the link Michal's PI tips: Duplicate handling in file adapter - 7.31. I have maintained similar settings but am unable to get the functionality to work. Is there anything I am missing, or is any setting required apart from the 'Duplicate File Handling' checkbox and a threshold count?
    Any help will be highly appreciated.
    Thanks,
    Abhishek

    Hello Sarvjeet,
    I had to write a UDF in message mapping to identify duplicate files and throw an exception. In my case, I had to compare the file load directory (source directory) with the archive directory to identify whether the new file is a duplicate or not. I'm not sure if this is the same case for you. See if the code below helps (I used parameterized mapping to input the file locations in the Integration Directory rather than hard-coding them in the mapping):
    AbstractTrace trace = container.getTrace();
    double archiveFileSize = 0;
    double newFileSizeDouble = Double.parseDouble(newFileSize);
    String archiveFile = "";
    String archiveFileTrimmed = "";
    int var2 = 0;
    File directory = new File(directoryName);
    File[] fList = directory.listFiles();
    Arrays.sort(fList, Collections.reverseOrder());
    // Traverse all entries in the archive directory
    for (File file : fList) {
        // Only consider regular files
        if (file.isFile()) {
            trace.addInfo("Filename: " + file.getName() + " :: Archive File Time: " + Long.toString(file.lastModified()));
            archiveFile = file.getName();
            archiveFileTrimmed = archiveFile.substring(20);
            archiveFileSize = file.length();
            // Same trimmed name and same size => duplicate of the new file
            if (archiveFileTrimmed.equals(newFile) && archiveFileSize == newFileSizeDouble) {
                var2 = var2 + 1;
                trace.addInfo("Duplicate File Found. " + newFile);
                if (var2 == 2) {
                    break;
                }
            }
        }
    }
    if (var2 == 2) {
        var2 = 0;
        throw new StreamTransformationException("Duplicate File Found. Processing for the current file is stopped. File: " + newFile + ", File Size: " + newFileSize);
    }
    return Integer.toString(var2);
    Regards,
    Abhishek

  • Webdynpro Exception in Default Trace File

    Hello ,
    We are on EP7.0 ECC6.0 ESS1.0 and keep getting the following error in the Default Trace file for the ESS Travel Webdynpro:
    #1.5^H#00145EC6B0AC001200000072000740EC00045AF21F04ABB9#1225895855827#com.sap.engine.services.servlets_jsp.client.RequestInfo
    Server#sap.com/tcwddispwda#com.sap.engine.services.servlets_jsp.client.RequestInfoServer#JSS0WHZ#4383##goxsa664_PP1_7622452
    #JSS0WHZ#36d947e0ab4711ddb54c00145ec6b0ac#SAPEngine_Application_Thread[impl:3]_10##0#0#Error##Plain###application [webdynpro/
    dispatcher] Processing HTTP request to servlet [dispatcher] finished with error. The error is: com.sap.tc.webdynpro.services.
    sal.core.DispatcherException: Wrong Web Dynpro URL: "../WebDynpro/Servlet/<deployableObject>/<application>/xx?..". xx is not
    allowed without exchange key. Retrieved URI path: /sap.com/esstratri/TripForm/~wd_key115_1225895826865/background.gif.
            at com.sap.tc.webdynpro.serverimpl.wdc.adapter.HttpRequestAdapter.checkApplicationUri(HttpRequestAdapter.java:111)
            at com.sap.tc.webdynpro.clientserver.session.RequestManager.checkApplicationUri(RequestManager.java:665)
            at com.sap.tc.webdynpro.clientserver.session.RequestManager.doProcessing(RequestManager.java:141)
            at com.sap.tc.webdynpro.serverimpl.defaultimpl.DispatcherServlet.doContent(DispatcherServlet.java:62)
            at com.sap.tc.webdynpro.serverimpl.defaultimpl.DispatcherServlet.doGet(DispatcherServlet.java:46)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:740)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)
            at com.sap.engine.services.servlets_jsp.server.HttpHandlerImpl.runServlet(HttpHandlerImpl.java:401)
            at com.sap.engine.services.servlets_jsp.server.HttpHandlerImpl.handleRequest(HttpHandlerImpl.java:266)
            at com.sap.engine.services.httpserver.server.RequestAnalizer.startServlet(RequestAnalizer.java:387)
            at com.sap.engine.services.httpserver.server.RequestAnalizer.startServlet(RequestAnalizer.java:365)
            at com.sap.engine.services.httpserver.server.RequestAnalizer.invokeWebContainer(RequestAnalizer.java:944)
            at com.sap.engine.services.httpserver.server.RequestAnalizer.handle(RequestAnalizer.java:266)
            at com.sap.engine.services.httpserver.server.Client.handle(Client.java:95)
            at com.sap.engine.services.httpserver.server.Processor.request(Processor.java:175)
            at com.sap.engine.core.service630.context.cluster.session.ApplicationSessionMessageListener.process(ApplicationSessionMessageListener.java:33)
            at com.sap.engine.core.cluster.impl6.session.MessageRunner.run(MessageRunner.java:41)
            at com.sap.engine.core.thread.impl3.ActionObject.run(ActionObject.java:37)
            at java.security.AccessController.doPrivileged(AccessController.java:215)
            at com.sap.engine.core.thread.impl3.SingleThread.execute(SingleThread.java:100)
            at com.sap.engine.core.thread.impl3.SingleThread.run(SingleThread.java:170)
    If anybody has encountered such a situation, please let me know.
    Thanks in advance.
    Shikhil

    Hi Shikhil,
    We are running into this exact same issue. Can you please tell me if you were able to resolve this and how it was done?
    Thank you so much,
    -Kevin

  • How to find trace file error in form

    Hi all,
    I have a big problem with my form in Apps: when I open the form, an error occurs.
    I got the trace file and checked the error:
    PARSE ERROR #82:len=2903 dep=0 uid=173 oct=3 lid=173 tim=4265689973879 err=904 - this is the trace file error message.
    How do I find this error in the form? It occurred on one select statement - how do I find where this select statement is used in the form?
    I have one button, and whenever I press the button this error occurs. I checked the button's PL/SQL procedure code, but that code does not use this SQL statement.
    If anyone knows how to find the SQL statement through the trace file, please reply as soon as possible.
    Thanks

    The newest SQL Developer can convert the trace to a readable format too; I'm not sure if you will see the SQL statement related to the error there.
    It seems you get ORA-00904, which says you use an invalid column in a DML statement.
    Normally such an error should pop up as a message - do you override the message handling or use exception handling in your form that blocks this message?
    The easiest way to find this is to compile the form against the target database.
    If this does not give an error, you should check whether you use dynamic SQL statements that are wrong.
    If you call database routines from your form, then this could be the error cause too.

  • Removing alert logs and trace files

    Hi everyone!
    I noticed that in all the Oracle databases, the trace files are piling up and the alert log is growing like anything ....
    Thought of making a copy of the trace files somewhere and remove them from the hard disk excluding the most recent ones.
    For alert log, thought of making a copy and renaming the current file so that Oracle can create a new one.
    Any advice if there are better approaches in handling this?
    Thanks in advance.

    user645399 wrote:
    Hi everyone!
    I noticed that in all the Oracle databases, the trace files are piling up and the alert log is growing like anything ....
    Thought of making a copy of the trace files somewhere and remove them from the hard disk excluding the most recent ones.
    For alert log, thought of making a copy and renaming the current file so that Oracle can create a new one.
    Any advice if there are better approaches in handling this?
    Thanks in advance.
    I would include the alert log file in my backup strategy as it contains a lot of important information: database parameter values, when and how the database was shut down, the important database errors and when they occurred, etc. I usually back up the alert log file once a month and keep one year of alert log file copies.
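    As a side note (not from the original reply), the directories holding the alert log and trace files can be confirmed with a query before anything is moved; a minimal sketch for pre-11g databases:
    select name, value
      from v$parameter
     where name in ('background_dump_dest', 'user_dump_dest', 'core_dump_dest');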

  • Location of client trace files sync'ed to the middleware

    Hi
    In the documentation (link below) it is mentioned that trace files sync'ed from the client to the middleware can be opened with an editor (on the middleware) but where are the files located?
    http://help.sap.com/saphelp_nw04/helpdata/en/42/f9943fbaf93268e10000000a1553f6/content.htm
    Thanks for any help,
    Andre

    Hi Andre,
    The transaction ME_RTRACE is used to set the trace settings on the MI client instead of having the user of the mobile device set them. This is usually handled by the administrator. Setting this on the middleware alone will not get the trace onto the middleware. The process is as follows:
    The administrator makes the trace settings on the middleware. This is updated on the client device when that device makes a sync with the middleware. Now if the administrator wants to see the trace file then he has to activate the checkbox - Send with Next sync or the user on the mobile device can select the option - Send trace to server.
    This trace can be viewed in CCMS. For this go to transaction RZ11 - Mobile Infrastructure - Logs and traces.
    Here you can do a select for the particular device which you are interested in.
    Regards,
    Nameeta

  • Get blocker from the (self) deadlock trace file

    Hi,
    Recently I had an issue on a 10.2.0.4 single instance database where deadlocks were occurring. The following test case reproduces the problem (I create three parent tables, one child table with indexed foreign keys to all three parent tables and a procedure which performs an insert into the child table in an autonomous transaction):
    create table parent_1(id number primary key);
    create table parent_2(id number primary key);
    create table parent_3(id number primary key);
    create table child( id_c number primary key,
                       id_p1 number,
                       id_p2 number,
                       id_p3 number,
                       constraint fk_id_p1 foreign key (id_p1) references parent_1(id),
                       constraint fk_id_p2 foreign key (id_p2) references parent_2(id),
                       constraint fk_id_p3 foreign key (id_p3) references parent_3(id)
    );
    create index i_id_p1 on child(id_p1);
    create index i_id_p2 on child(id_p2);
    create index i_id_p3 on child(id_p3);
    create or replace procedure insert_into_child as
    pragma autonomous_transaction;
    begin
      insert into child(id_c, id_p1, id_p2, id_p3) values(1,1,1,1);
      commit;
    end;
    insert into parent_1 values(1);
    insert into parent_2 values(1);
    commit;
    And now the action that causes the deadlock:
    SQL> insert into parent_3 values(1);
    1 row created.
    SQL> exec insert_into_child;
    BEGIN insert_into_child; END;
    ERROR at line 1:
    ORA-00060: deadlock detected while waiting for resource
    ORA-06512: at "SCOTT.INSERT_INTO_CHILD", line 4
    ORA-06512: at line 1
    My question is: how can I determine which table the insert into CHILD was waiting on? It could be waiting on PARENT_1, PARENT_2, PARENT_3, a combination of them, or even on CHILD if I tried to insert a duplicate primary key into CHILD. Since we have the full testcase we know that it was waiting on PARENT_3 (or better said, it was waiting for the "parent" transaction to perform a commit/rollback), but is it possible to determine that solely from the deadlock trace file? I'm asking because to pinpoint the problem I had to perform redo log mining, PL/SQL tracing with DBMS_TRACE and manual debugging on a clone of the production database which was restored to an SCN just before the deadlock occurred. So I had to do quite a lot of work to get to the blocker table, and if this information is already in the deadlock trace file, it would have saved me a lot of time.
    Below is the deadlock trace file. From the "DML LOCK" part I guess that the child table (tab=227042) holds a mode 3 lock (SX), all the other three parent tables have mode 2 locks (SS), but from this extract I can't see that parent_3 (tab=227040) is blocking the insert into child:
    Deadlock graph:
                           ---------Blocker(s)--------  ---------Waiter(s)---------
    Resource Name          process session holds waits  process session holds waits
    TX-00070029-00749150        23     476     X             23     476           S
    session 476: DID 0001-0017-00000003     session 476: DID 0001-0017-00000003
    Rows waited on:
    Session 476: obj - rowid = 000376E2 - AAA3biAAEAAA4BwAAA
      (dictionary objn - 227042, file - 4, block - 229488, slot - 0)
    Information on the OTHER waiting sessions:
    End of information on OTHER waiting sessions.
    Current SQL statement for this session:
    INSERT INTO CHILD(ID_C, ID_P1, ID_P2, ID_P3) VALUES(1,1,1,1)
    ----- PL/SQL Call Stack -----
      object      line  object
      handle    number  name
    3989eef50         4  procedure SCOTT.INSERT_INTO_CHILD
    391f3d870         1  anonymous block
            SO: 397691978, type: 36, owner: 39686af98, flag: INIT/-/-/0x00
            DML LOCK: tab=227042 flg=11 chi=0
                      his[0]: mod=3 spn=35288
            (enqueue) TM-000376E2-00000000  DID: 0001-0017-00000003
            lv: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  res_flag: 0x6
            res: 0x398341fe8, mode: SX, lock_flag: 0x0
            own: 0x3980df420, sess: 0x3980df420, proc: 0x39859c660, prv: 0x398341ff8
            SO: 397691878, type: 36, owner: 39686af98, flag: INIT/-/-/0x00
            DML LOCK: tab=227040 flg=11 chi=0
                      his[0]: mod=2 spn=35288
            (enqueue) TM-000376E0-00000000  DID: 0001-0017-00000003
            lv: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  res_flag: 0x6
            res: 0x3983386e8, mode: SS, lock_flag: 0x0
            own: 0x3980df420, sess: 0x3980df420, proc: 0x39859c660, prv: 0x3983386f8
            SO: 397691778, type: 36, owner: 39686af98, flag: INIT/-/-/0x00
            DML LOCK: tab=227038 flg=11 chi=0
                      his[0]: mod=2 spn=35288
            (enqueue) TM-000376DE-00000000  DID: 0001-0017-00000003
            lv: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  res_flag: 0x6
            res: 0x398340f58, mode: SS, lock_flag: 0x0
            own: 0x3980df420, sess: 0x3980df420, proc: 0x39859c660, prv: 0x398340f68
            SO: 397691678, type: 36, owner: 39686af98, flag: INIT/-/-/0x00
            DML LOCK: tab=227036 flg=11 chi=0
                      his[0]: mod=2 spn=35288
            (enqueue) TM-000376DC-00000000  DID: 0001-0017-00000003
            lv: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  res_flag: 0x6
            res: 0x39833f358, mode: SS, lock_flag: 0x0
            own: 0x3980df420, sess: 0x3980df420, proc: 0x39859c660, prv: 0x39833f368
          ----------------------------------------
    Thank you in advance for any comments,
    Jure

    Hi Jonathan,
    thank you very much for your reply which more than answers my question. I think it actually clears a lot of doubts I had about TX locks, since your mentioning of "undo segment header transaction table" pointed me in the right direction for further research on this topic (honestly, I didn't know what's "behind" TX locks). So if I understood correctly, to determine which table is the blocker (in the testcase presented above), you have to have some kind of history of executed SQL statements (e.g. by mining redo logs)?
    The statement you wrote:
    At this point, and with your example, the waiting session is waiting on a TX (transaction) lock - this means it has no idea of (and no interest in) the actual data involved; it is merely waiting for an undo segment header transaction table slot to clear.
    That statement, and the example with the savepoint you gave, made me think of some of the consequences of that behaviour. That is probably the reason why it is not possible to get the "blocker" table from v$lock (although sometimes it's possible to get it from v$session.row_wait_obj#) when a session tries to change a row another session holds in exclusive mode, e.g.:
    create table t1 (id number);
    insert into t1 values (1);
    commit;
    Session 126:
    SID = 126> update t1 set id=2 where id=1;
    1 row updated.
    Session 146:
    SID = 146> update t1 set id=2 where id=1;
    {session hangs}
    In a separate session:
    SQL> SELECT   CASE
      2                  WHEN TYPE = 'TM'
      3                     THEN (SELECT object_name
      4                             FROM user_objects
      5                            WHERE object_id = l.id1)
      6               END object_name,
      7                  SID, TYPE, id1, id2, lmode, request, BLOCK
      8          FROM v$lock l
      9         WHERE SID IN (126, 146)
    10     ORDER BY SID, TYPE, 1
    11  /
    OBJECT_NAME    SID TY        ID1        ID2      LMODE    REQUEST      BLOCK
    T1             126 TM      68447          0          3          0          0
                   126 TX     262153       4669          6          0          1
    T1             146 TM      68447          0          3          0          0
                   146 TX     262153       4669          0          6          0
    The only thing I can tell from this output is that session 146 is trying to get a TX lock in exclusive mode and session 126 is blocking it; the reason for the blocking is unknown from this view alone.
    Since I'd like to get a better understanding of the mechanics behind this (e.g. why can't the blocked session know which segment it is waiting for, since it has to go to the same segment's data block to find the address of the undo segment header transaction table slot?; can we get the content/structure of the transaction table in the data block - probably by making a block dump?), do you have any source where a more in-depth explanation of what happens "behind the scenes" is available (perhaps in Oracle Core?)? Some time ago I found a link on your blog http://jonathanlewis.wordpress.com/2010/06/21/locks/ which points to Franck Pachot's article where he nicely explains the various locking modes: http://knol.google.com/k/oracle-table-lock-modes#. There I also found Kyle Hailey's presentation about locks http://www.perfvision.com/papers/09_enqueues.ppt where slide 23 nicely depicts what's going on when acquiring TX locks. Of course I'll try to search on my own, but any other source (especially from an authority like you) is more than welcome.
    Thank you again and regards,
    Jure
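    A minimal sketch (not part of the original thread) of the block dump Jure mentions, assuming the file and block numbers reported in the "Rows waited on" line of the trace above (file 4, block 229488); the dump, including the ITL entries, is written to the dumping session's own trace file:
    alter system dump datafile 4 block 229488;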

  • Interpreting Trace File.

    Hi, I am using Oracle version 10.2.0.4.0.
    I have trace file info as below for one of the queries. How should I interpret the trace file? What is the issue in the query, and what is the scope for improvement? Please note that I have removed the query and its plan from the trace file; I have only posted the wait sections.
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.14       0.13          0          0          1           0
    Execute      1      6.63     162.12      33540      72921        383           0
    Fetch    17272    178.89    1933.95     274835    3147603         20      259063
    total    17274    185.66    2096.21     308375    3220524        404      259063
    Misses in library cache during parse: 1
    Optimizer mode: CHOOSE
    Parsing user id: 36 
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      control file sequential read                    4        0.00          0.00
      db file sequential read                    302812        0.62       1913.89
      latch: cache buffers chains                     3        0.04          0.04
      direct path write temp                        501        0.01          0.30
      SQL*Net message to client                   17272        0.00          0.04
      db file scattered read                        120        0.02          0.63
      direct path read temp                         608        0.14          1.71
      SQL*Net message from client                 17272       44.81      31865.74
      SQL*Net more data to client                    15        0.00          0.00
      latch: object queue header operation            1        0.00          0.00
      latch: library cache                            3        0.03          0.04
      latch: library cache pin                        1        0.00          0.00
      latch: cache buffer handles                     1        0.00          0.00
    OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.14       0.13          0          0          1           0
    Execute      1      6.63     162.12      33540      72921        383           0
    Fetch    17272    178.89    1933.95     274835    3147603         20      259063
    total    17274    185.66    2096.21     308375    3220524        404      259063
    Misses in library cache during parse: 1
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                   17275        0.00          0.04
      SQL*Net message from client                 17274       75.57      31941.39
      SQL*Net more data from client                   2        0.00          0.01
      db file sequential read                    302812        0.62       1913.89
      control file sequential read                    4        0.00          0.00
      latch: cache buffers chains                     3        0.04          0.04
      direct path write temp                        501        0.01          0.30
      db file scattered read                        120        0.02          0.63
      direct path read temp                         608        0.14          1.71
      SQL*Net more data to client                    15        0.00          0.00
      latch: object queue header operation            1        0.00          0.00
      latch: library cache                            3        0.03          0.04
      latch: library cache pin                        1        0.00          0.00
      latch: cache buffer handles                     1        0.00          0.00
    OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse       11      0.02       0.01          0          0          0           0
    Execute    348      0.20       0.17          0          0          1           0
    Fetch      367      0.06       0.37         59       1187          0        3806
    total      726      0.28       0.56         59       1187          1        3806
    Misses in library cache during parse: 11
    Misses in library cache during execute: 10
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file sequential read                        59        0.01          0.32
        1  user  SQL statements in session.
      348  internal SQL statements in session.
      349  SQL statements in session.
    ********************************************************************************

    Below are the estimated and actual results.
    | Id  | Operation                              | Name                        | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Us
    ed-Mem | Used-Tmp|
    |*  1 |  COUNT STOPKEY                         |                             |  13475 |        |  13475 |00:01:32.33 |     134K|  11357 |      0 |       |       |
        |         |
    |   2 |   NESTED LOOPS                         |                             |  13475 |      2 |  13475 |00:01:31.66 |     134K|  11357 |      0 |       |       |
        |         |
    |   3 |    NESTED LOOPS                        |                             |  13475 |      1 |  13475 |00:01:29.22 |   94325 |  11357 |      0 |       |       |
        |         |
    |*  4 |     INDEX RANGE SCAN                   |                |  13475 |      1 |  13475 |00:00:26.63 |   40425 |   4014 |      0 |       |       |
        |         |
    |   5 |     TABLE ACCESS BY INDEX ROWID        | |  13475 |      1 |  13475 |00:01:02.46 |   53900 |   7343 |      0 |       |       |
        |         |
    |*  6 |      INDEX RANGE SCAN                  ||  13475 |      1 |  13475 |00:00:16.80 |   40425 |   2056 |      0 |       |       |
        |         |
    |*  7 |    TABLE ACCESS FULL                   | |  13475 |      2 |  13475 |00:00:02.26 |   40425 |      0 |      0 |       |       |
        |         |
    |   8 |  TABLE ACCESS BY INDEX ROWID           ||  94399 |      1 |  94399 |00:06:17.09 |     389K|  32207 |      0 |       |       |
        |         |
    |*  9 |   INDEX UNIQUE SCAN                    | |  94399 |      1 |  94399 |00:02:59.79 |     294K|  15488 |      0 |       |       |
        |         |
    |  10 |  TEMP TABLE TRANSFORMATION             |                             |      1 |        |    170K|00:35:11.08 |    1575K|    195K|   6158 |       |       |
        |         |
    |  11 |   LOAD AS SELECT                       |                             |      1 |        |      1 |00:04:49.06 |   53704 |  28653 |    264 |   525K|   525K|  5
    25K (0)|              |
    |  12 |    PARTITION RANGE ALL                 |                             |      1 |  55430 |  16097 |00:06:26.06 |   53433 |  28651 |      0 |       |       |
        |         |
    |  13 |     PARTITION HASH ALL                 |                             |     54 |  55430 |  16097 |00:09:20.69 |   53433 |  28651 |      0 |       |       |
        |         |
    |* 14 |      TABLE ACCESS BY LOCAL INDEX ROWID | INV                         |    432 |  55430 |  16097 |00:06:11.42 |   53433 |  28651 |      0 |       |       |
        |         |
    |* 15 |       INDEX SKIP SCAN                  | |    432 |    125K|  16097 |00:00:39.90 |    4642 |   4508 |      0 |       |       |
        |         |
    |  16 |   TABLE ACCESS BY INDEX ROWID          | |      1 |      2 |    170K|00:30:21.66 |    1522K|    166K|   5894 |       |       |
        |         |
    |  17 |    NESTED LOOPS                        |                             |      1 |     97 |    276K|34:55:49.92 |    1470K|    150K|   5894 |       |       |
        |         |
    |  18 |     NESTED LOOPS                       |                             |      1 |     55 |    105K|00:22:14.57 |    1128K|    134K|   5894 |       |       |
        |         |
    |  19 |      NESTED LOOPS OUTER                |                             |      1 |     52 |    105K|00:16:32.91 |     694K|    105K|   5894 |       |       |
        |         |
    |* 20 |       HASH JOIN                        |                             |      1 |     52 |    105K|00:16:19.68 |     402K|    102K|   5894 |  9641K|  2205K| 16
    27K (1)|        10240 |
    |  21 |        VIEW                            |                             |      1 |  65234 |    105K|00:16:16.46 |     402K|    101K|   4655 |       |       |
        |         |
    |  22 |         SORT UNIQUE                    |                             |      1 |  65234 |    105K|00:16:16.46 |     402K|    101K|   4655 |  8724K|  1161K| 61
    8K (48)|         9216 |
    |  23 |          UNION-ALL                     |                             |      1 |        |    105K|00:14:59.93 |     402K|  97342 |    252 |       |       |
        |         |
    |  24 |           NESTED LOOPS OUTER           |                             |      1 |  19975 |    105K|00:14:10.24 |     395K|  94655 |      0 |       |       |
        |         |
    |  25 |            NESTED LOOPS                |                             |      1 |  19975 |    105K|00:13:58.47 |     140K|  93616 |      0 |       |       |
        |         |
    |  26 |             VIEW                       |                             |      1 |  55430 |  16097 |00:00:00.43 |     270 |    531 |      0 |       |       |
        |         |
    |  27 |              TABLE ACCESS FULL         | |      1 |  55430 |  16097 |00:00:00.19 |     270 |    531 |      0 |       |       |
        |         |
    |* 28 |             TABLE ACCESS BY INDEX ROWID| |  16097 |      1 |    105K|00:13:59.70 |     140K|  93085 |      0 |       |       |
        |         |
    |* 29 |              INDEX RANGE SCAN          |     |  16097 |     10 |    145K|00:00:40.42 |   32685 |   8237 |      0 |       |       |
        |         |
    |  30 |            TABLE ACCESS BY INDEX ROWID | |    105K|      1 |  84716 |00:00:16.78 |     254K|   1039 |      0 |       |       |
        |         |
    |* 31 |             INDEX UNIQUE SCAN          | |    105K|      1 |  84716 |00:00:13.05 |     169K|    982 |      0 |       |       |
        |         |
    |  32 |           NESTED LOOPS                 |                             |      1 |  45259 |      0 |00:00:17.19 |    7336 |   2687 |    252 |       |       |
        |         |
    |* 33 |            HASH JOIN RIGHT OUTER       |                             |      1 |  45259 |      0 |00:00:17.19 |    7336 |   2687 |    252 |   884K|   884K|  3
    09K (0)|              |
    |  34 |             TABLE ACCESS FULL          | |      1 |   1673 |   1677 |00:00:00.01 |      24 |      8 |      0 |       |       |
        |         |
    |* 35 |             HASH JOIN                  |                             |      1 |  45259 |      0 |00:00:17.13 |    7310 |   2678 |    252 |  3318K|  1235K|  4
    47K (1)|         2048 |
    |* 36 |              TABLE ACCESS FULL         | |      1 |  45259 |  49043 |00:00:07.41 |    7043 |   2170 |      0 |       |       |
        |         |
    |  37 |              VIEW                      |                             |      1 |  55430 |  16097 |00:00:00.14 |     267 |    256 |      0 |       |       |
        |         |
    |  38 |               TABLE ACCESS FULL        ||      1 |  55430 |  16097 |00:00:00.12 |     267 |    256 |      0 |       |       |
        |         |
    |  39 |            TABLE ACCESS BY INDEX ROWID | |      0 |      1 |      0 |00:00:00.01 |       0 |      0 |      0 |       |       |
        |         |
    |* 40 |             INDEX UNIQUE SCAN          | |      0 |      1 |      0 |00:00:00.01 |       0 |      0 |      0 |       |       |
        |         |
    |  41 |        VIEW                            |                             |      1 |  55430 |  16097 |00:00:00.02 |     267 |      0 |      0 |       |       |
        |         |
    |  42 |         TABLE ACCESS FULL              | |      1 |  55430 |  16097 |00:00:00.01 |     267 |      0 |      0 |       |       |
        |         |
    |  43 |       TABLE ACCESS BY INDEX ROWID      | |    105K|      1 |  93385 |00:00:24.90 |     291K|   2104 |      0 |       |       |
        |         |
    |* 44 |        INDEX UNIQUE SCAN               | |    105K|      1 |  93385 |00:00:16.45 |     196K|   1405 |      0 |       |       |
        |         |
    |  45 |      TABLE ACCESS BY INDEX ROWID       | |    105K|      1 |    105K|00:05:49.82 |     434K|  29495 |      0 |       |       |
        |         |
    |* 46 |       INDEX UNIQUE SCAN                ||    105K|      1 |    105K|00:02:54.37 |     328K|  14644 |      0 |       |       |
        |         |
    |* 47 |     INDEX RANGE SCAN                   | |    105K|      2 |    170K|00:03:01.14 |     342K|  15690 |      0 |       |       |
        |         |
    Predicate Information (identified by operation id):
       1 - filter(1>=ROWNUM)
       4 - access("XS"."SITEPK"=:B1)
       6 - access("XS"."VENDORPK"="XB"."VENDORPK")
       7 - filter(("XB"."BUYERCOMPANYPK"="CC"."PARENTCOMPANYPK" OR "XB"."BUYERCOMPANYPK"="CC"."CHILDCOMPANYPK"))
       9 - access("INVOICEPK"=:B1 AND "LINENUM"=:B2)
      14 - filter(("IH"."INVOICEORIGIN"='APP' AND "IH"."PO_PK" IS NULL AND "IH"."ISPOSTED"='Y'))
      15 - access("IH"."PAYPK"=3914297352 AND "IH"."POSTDATE">=1338508800000 AND "IH"."POSTDATE"<1341014400000)
           filter(("IH"."POSTDATE">=1338508800000 AND "IH"."PAYPK"=3914297352 AND "IH"."POSTDATE"<1341014400000))
      20 - access("NEWVIEW"."PRIMARYKEY"="TAB"."INVOICEPK")
      28 - filter(TO_NUMBER("RAT"."AUDITTYPE")<2)
      29 - access("INNERTAB1"."INVOICEPK"="RAT"."INVOICEPK")
           filter("RAT"."INVOICEPK" IS NOT NULL)
      31 - access("RAT"."USERPK"="UR"."USERPK")
      33 - access("RA"."QUEUEPK"="Q"."QUEUEPK")
      35 - access("INNERTAB2"."INVOICEPK"="RA"."INVOICEPK")
      36 - filter(("RA"."INVOICEPK" IS NOT NULL AND "RA"."RECEIVERPK" IS NOT NULL))
      40 - access("RA"."RECEIVERPK"="UR"."USERPK")
      44 - access("TAB"."ENTEREDBY"="UR"."USERPK")
      46 - access("TAB"."INVOICEPK"="ISUM"."INVOICEPK")
      47 - access("IDD"."INVOICEPK"="TAB"."INVOICEPK")
    87 rows selected.
    Elapsed: 00:00:04.10
    SQL>
    Edited by: 930254 on Aug 7, 2012 9:33 AM
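    For reference (not stated in the original post), row source statistics such as the E-Rows/A-Rows columns above are typically produced with the gather_plan_statistics hint and DBMS_XPLAN; a minimal sketch with a placeholder query:
    select /*+ gather_plan_statistics */ count(*) from dual;   -- placeholder for the real query
    select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));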

  • Best practice to reduce size of BIA trace files

    Hi,
    I saw an alert on the BIA monitor that says 'check size of trace files'. Most of my trace files are above 20 MB. When I clicked on details it says "Check the size of your trace files. Remove or move the trace files with the memory usage that is too high or trace files that are no longer needed."
    I would like to reduce these trace files but am not sure what the safest way to do it is. Any suggestion would be appreciated!
    Thanks.
    Mimosa

    Mimosa,
    Let's be clear here first. The tracing set via SM50 is for tracing on the ABAP side of BI, not the BIA.
    Yes, it is safe to move/delete TrexAlertServer.trc, TrexIndexServer.trc, etc. at the OS level. You can also right-click the individual trace on the "Trace" tab in the TREX Admin Tool (python), and I believe there are options to delete them there, but it is certainly OKAY to do this at the OS level. They are simply recreated when new traces are generated.
    I would recommend that you simply .zip the files and move the .zip files to another folder in case SAP support needs them to analyze an issue. As long as they aren't huge, and if hard disk space permits, this shouldn't be a problem. After this you will then need to delete the trace files. Be aware that if a trace file has an open handle registered to it, then it won't let you delete/move it. Therefore it might be a good idea to do this task when system activity is low or non-existent.
    2 things also to check:
    1. Make sure the python trace is not on.
    2. In the python TREXAdmin Tool, check the Alerts tab and click "Alert Server Configuration". Make sure the trace level is set to "error".
    Hope that helps. As always check the TOM for any concerns:
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/46e11c10-0e01-0010-1182-b02db2e8bafb
    Edited by: Mike Bestvina on Apr 1, 2008 3:59 AM - revised some statements to be more clear

  • 7.1: Insert-Exception only appears in trace-file ?

    Hello,
    this is a question relating to SAP Mobile 7.1, we are developing an application for Handhelds.
    I have the following situation:
    - In my Services Component I define the DO "Equip" with an unique index on attribute "nr".
    - I create a new Equip and set its attribute "nr" to an already existing value in the DB.
    - I perform a commit.
    => I can only see in the application trace file that an SQLException occurred ("Error inserting row --> java.sql.SQLException: Duplicate unique index"), but the exception isn't propagated to my application, so my application has no chance to notice that something went wrong. Even worse: the complete record is lost!
    What do I have to do to receive this exception so I am able to notify the user that the record contains invalid data before it is lost?
    Thanks a lot,
    Björn

    we have another situation in which the framework catches an exception and just drops it silently, and drops the data as well.
    In another support message we noted the following behaviour:
    - form field length: 20
    - db field length: 10
    - insert succeeded (in an older Patch Level)
    - but an error happened during the sync (SAP message 510534)
    we just received a fix (SP05 PL04) and this is what happens now:
    - insert throws an exception (text see below)
    - the framework catches the exception
    - and drops it silently
    - the application does not see that anything went wrong
    OK, I understand that I should just fix the length of the form field. Easy.
    But the whole exception handling in the persistence layer should be reviewed.
    Cheers, Andre
    <r id="1219153804103" t="15:50:04" d="2008-08-19" s="E" c="000" u="ZKIAGE" g="en" m="Error inserting row --&gt; java.sql.SQLException: Value is too large, column: EQUIP_NR
         at com.sap.sdb.minDB.util.ErrorMsg.newSQLException(Unknown Source)
         at com.sap.sdb.minDB.util.ErrorMsg.conversionError(Unknown Source)
         at com.sap.sdb.minDB.common.ColumnDesc.checkColumnSize(Unknown Source)

  • Get the session trace files and also the TKPROF reports for storedprocedure

    Hi,
    I am trying to find the bottlenecks in a stored procedure which does an insert into a table where the target table has a lot of indexes/constraints, so I want to see which index/constraint is causing the problem. In order to do that, I want to get the session trace files and also the TKPROF reports to see the bottlenecks for the Oracle stored procedure.
    Could you please give us the list of steps to get the trace files and TKPROF reports?

    781649 wrote:
    Thanks for input, I am using Oracle 10g Standard Edition. I don't think I have these tools available (DBMS_PROFILER or DBMS_HPROF).
    Did you even bother to try the following?
    SQL> DESC DBMS_PROFILER
    SQL> DESC DBMS_HPROF
    I understand it would be too much to expect you to actually Read The Fine Manual
    I am using BULK COLLECT with FORALL in my stored procedure to insert the rows into a big table. In order to perform analysis on this bulk collect, which tool will help me to identify the bottlenecks?
    I want to compare background session properties for both runs (like inserting the data with indexes/constraints vs. without indexes/constraints). Please let me know.
    I am willing to bet you the problem/slowness is on the SELECT side & not the INSERT!
    Just Curious
    Handle:      781649
    Status Level:      Newbie
    Registered:      Jul 12, 2010
    Total Posts:      35
    Total Questions:      17 (14 unresolved)
    Why so many unanswered questions?
    Edited by: sb92075 on Jan 17, 2012 3:13 PM
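    For completeness (not part of the original replies), a minimal sketch of how a session trace for the procedure run could be captured on 10g and located for TKPROF; my_procedure is a placeholder for the stored procedure under test:
    alter session set tracefile_identifier = 'proc_trace';
    alter session set events '10046 trace name context forever, level 8';
    exec my_procedure
    alter session set events '10046 trace name context off';
    select value from v$parameter where name = 'user_dump_dest';  -- directory holding the .trc file for tkprof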

  • Need help deleting trace files

    Hi,
    I am new to Oracle RAC and I need to get rid of trace files since they are growing too big. I have one that is 22Gigs and I only have 60 Gigs available right now on that drive. I want to know if I can delete the file manually using the rm command.
    My trace files are in: /u01/app/oracle/diag/rdbms/.../.../trace
    The files I am concerned about are called "linux1_pz99_13299.trc" and "linux1_pz99_13299.trm".
    I am using Oracle 11.1.0.6.0 on Oracle Unbreakable Linux.
    Thanks for any help that can be provided.

    Is that really going to work?
    It is my understanding (supported by some inconclusive observations and experiments) that once a trace file grows beyond about 8 kilobytes Oracle will hang onto it in a death grip. You may delete it but Oracle will still have the file handle open and be writing to it.
    If the database is running on a Windows platform then an equivalent script will not even give the illusion of deleting the file.
    It used to be the case that switching trace file output to a different directory would cause Oracle to release the active trace files, but now the trace directory is part of the ADR hierarchy, so switching is a more global and more intrusive operation and could interfere with other diagnostics.
    You may have to rely on ADRCI's "purge" command unless there is a PL/SQL equivalent.
    I shall be very interested to see what others have to say.
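    As an aside (not from the original reply), on 11.1 the ADR trace location and the session's active trace file can be checked before purging, e.g.:
    select name, value
      from v$diag_info
     where name in ('Diag Trace', 'Default Trace File');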

  • Error 00007 : 0Stale NFS file handle in Module rslgcoll(041)

    What could be the reason for "Error 00007 : 0Stale NFS file handle in Module rslgcoll(041)" in SM21? After restarting the application server I get these errors, and very soon I cannot connect to the server anymore (I have to restart it).

    The error area is as follows (however, we are not using SSO, in the sense of single sign-on functionality that avoids having to enter a password at logon):
    N  =================================================
    N  === SSF INITIALIZATION:
    N  ===...SSF Security Toolkit name SAPSECULIB .
    N  ===...SSF trace level is 0 .
    N  ===...SSF library is /usr/sap/CR7/SYS/exe/run/libsapsecu.sl .
    N  ===...SSF hash algorithm is SHA1 .
    N  ===...SSF symmetric encryption algorithm is DES-CBC .
    N  ===...sucessfully completed.
    N  =================================================
    N  MskiInitLogonTicketCacheHandle: Logon Ticket cache pointer retrieved from shared memory.
    N  MskiInitLogonTicketCacheHandle: Workprocess runs with Logon Ticket cache.
    M
    M Mon Apr  6 21:28:36 2009
    M  ThReschedAfterCommit: th_force_sched_after_commit = 1
    A
    A Mon Apr  6 21:28:50 2009
    A  *** ERROR => RFC ======> Name or password is incorrect. Please re-enter
    [abrfcio.c    6880]
    N
    N Mon Apr  6 21:29:13 2009

  • 11g trace files

    Hi all,
    In my production database the trace file is occupying over 40 GB of space. Can I delete it to save space?
    It is of the name PRD_mrp0_24180.trc.
    Please help.
    Thanks.

    can I delete it to save space?
    The solution is both OS & application dependent.
    If the application holds the file handle OPEN, Windows will prevent another session from messing with the file, and *NIX will fake you out: the file appears to go away but stays untouched (and the space is not released) until the application CLOSEs the file.
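    A small aside (not part of the original reply): since PRD_mrp0_24180.trc belongs to the managed recovery process, one could first check whether MRP is still running (and therefore still holding the file open), for example:
    select p.spid, b.name
      from v$bgprocess b, v$process p
     where p.addr = b.paddr
       and b.name like 'MRP%';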

  • Not able to get the actual plan from trace file

    Hi all
    I have a DB package and want to get the actual execution plan of all the statements in that package. It provides the plan for the system's statements but does not display the plan for my SQL statements.
    DB version 9.2.0; I am using the following sequence of instructions:
    set timing on
    set serveroutput on
    alter session set events '10046 trace name context forever ,level 12';
    begin
    run_service.collect_data(sysdate);
    end;
    alter session set sql_trace=false;
    exit; -- exit from SQL*Plus
    Now look at the output:
    select distinct obj#,containerobj#,pflags,xpflags,mflags
    from
    sum$, suminline$ where sumobj#=obj# and inline#=:1
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 1 0.00 0.00 1 1 0 0
    total 3 0.00 0.00 1 1 0 0
    Misses in library cache during parse: 0
    Optimizer goal: CHOOSE
    Parsing user id: SYS (recursive depth: 2)
    Rows Row Source Operation
    0 SORT UNIQUE
    0 NESTED LOOPS
    0 TABLE ACCESS BY INDEX ROWID SUMINLINE$
    0 INDEX RANGE SCAN I_SUMINLINE$_2 (object id 1614116)
    0 TABLE ACCESS BY INDEX ROWID SUM$
    0 INDEX UNIQUE SCAN I_SUM$_1 (object id 319)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file sequential read 1 0.00 0.00
    SELECT SEQ_NUM, S_DATE, S_TIME, CSTATUS, G_SERVICE,
    B_REFERENCE, V_REFERENCE, M_PRIORITY
    FROM GL_HIST
    ORDER BY S_DATE DESC, S_TIME DESC
    call count cpu elapsed disk query current rows
    Parse 1 0.01 0.01 0 0 0 0
    Execute 2819 0.37 0.32 0 0 0 0
    Fetch 2819 2.50 20.47 2786 20164 0 2819
    total 5639 2.88 20.81 2786 20164 0 2819
    Misses in library cache during parse: 1
    Optimizer goal: CHOOSE
    Parsing user id: 15550 (recursive depth: 1)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file sequential read 2786 0.05 18.19
    latch free 4 0.04 0.06
    UPDATE G_ORIG SET G_SERVICE = :B1
    WHERE
    SEQ_NUM = :B5 AND S_DATE = :B4 AND S_TIME = :B3 AND
    C_STATUS = :B2 AND NVL(G_SERVICE, '+') <> NVL(:B1, '+')
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.03 0 0 0 0
    Execute 3731 0.74 0.99 261 18712 119 54
    Fetch 0 0.00 0.00 0 0 0 0
    total 3732 0.74 1.02 261 18712 119 54
    Misses in library cache during parse: 1
    Optimizer goal: CHOOSE
    Parsing user id: 15550 (recursive depth: 1)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file sequential read 261 0.01 0.19
    latch free 9 0.01 0.04
    COMMIT

    Remove the line "alter session set sql_trace=false" and just exit/disconnect. The execution plan is contained in the STAT lines in the trace file, and these are only written when the cursor closes. If you turn off tracing before the cursor closes, the STAT lines will not get written.
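    A minimal sketch of the corrected sequence (using the same package call as above), leaving tracing on so the STAT lines are written when the cursors close at disconnect:
    set timing on
    set serveroutput on
    alter session set events '10046 trace name context forever, level 12';
    begin
    run_service.collect_data(sysdate);
    end;
    /
    -- do not disable tracing here; just exit/disconnect so the cursors close
    -- and the STAT (row source) lines are written to the trace file
    exit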
