Large number of trace files generated

Many of the following trace files are being generated throughout the day, sometimes four or five per minute.
There is nothing in the alert log
Any ideas?
Many thanks in advance.
Dump file e:\oracle\admin\nauti1\udump\nauti1_ora_5552.trc
Tue Nov 18 17:36:11 2008
ORACLE V10.2.0.4.0 - Production vsnsta=0
vsnsql=14 vsnxtr=3
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
Windows Server 2003 Version V5.2 Service Pack 2
CPU : 4 - type 586, 4 Physical Cores
Process Affinity : 0x00000000
Memory (Avail/Total): Ph:2045M/3839M, Ph+PgF:3718M/5724M, VA:649M/2047M
Instance name: nauti1
Redo thread mounted by this instance: 1
Oracle process number: 32
Windows thread id: 5552, image: ORACLE.EXE (SHAD)
*** ACTION NAME:() 2008-11-18 17:36:11.432
*** MODULE NAME:(Nautilus.Exe) 2008-11-18 17:36:11.432
*** SERVICE NAME:(nauti1) 2008-11-18 17:36:11.432
*** SESSION ID:(130.42066) 2008-11-18 17:36:11.432
KGX cleanup...
KGX Atomic Operation Log 342CD2A4
Mutex 452CC5F8(130, 0) idn 0 oper EXAM
Cursor Parent uid 130 efd 17 whr 26 slp 0
oper=DEFAULT pt1=00000000 pt2=00000000 pt3=00000000
pt4=00000000 u41=0 stt=0
KGX cleanup...
KGX Atomic Operation Log 342CD2A4
Mutex 452CC5F8(130, 0) idn 0 oper EXAM
Cursor Parent uid 130 efd 17 whr 26 slp 0
oper=DEFAULT pt1=48265D6C pt2=48265E68 pt3=48265D3C
pt4=00000000 u41=0 stt=0
Dump file e:\oracle\admin\nauti1\udump\nauti1_ora_5552.trc
Sat Nov 22 12:52:32 2008
ORACLE V10.2.0.4.0 - Production vsnsta=0
vsnsql=14 vsnxtr=3
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
Windows Server 2003 Version V5.2 Service Pack 2
CPU : 4 - type 586, 4 Physical Cores
Process Affinity : 0x00000000
Memory (Avail/Total): Ph:2070M/3839M, Ph+PgF:3896M/5724M, VA:673M/2047M
Instance name: nauti1
Redo thread mounted by this instance: 1
Oracle process number: 29
Windows thread id: 5552, image: ORACLE.EXE (SHAD)

Check out the Metalink bug description for Bug 6638558.

Similar Messages

  • I cannot add another book or PDF to my large iBooks library (iPad 2, iOS 5.1, iBooks 2.1.1, 64GB with 80% used)

    I have an iPad 2 with iOS 5.1 and iBooks version 2.1.1. I have 64GB of storage, 80% is used. iBooks is using 250MB of storage. I have a large number of PDF files in my iBooks library. At this time I cannot add another book or PDF file to my library. When I try to move a PDF file to iBooks the system works for a while... sometimes the file appears and then disappears... sometimes the file never appears. Is there some limit to the number of books or total storage used in iBooks? Thanks...

    Hi jybravo70, 
    Welcome to the Apple Support Communities!
    It sounds like you may be experiencing issues on your non iOS 8 devices because iOS 8 is required to set up or join a Family Sharing group.
    The following information is located in the print at the bottom of the article. 
    Apple - iCloud - Family Sharing
    Family Sharing requires a personal Apple ID signed in to iCloud and iTunes. Music, movies, TV shows, and books can be downloaded on up to 10 devices per account, five of which can be computers. iOS 8 and OS X Yosemite are required to set up or join a Family Sharing group and are recommended for full functionality. Not all content is eligible for Family Sharing.
    Have a great day, 
    Joe

  • Trouble loading a large number of csv files

    Hi All,
    I am having an issue loading a large number of csv files into my LabVIEW program. I have attached a png of the simplified code for the load sequence alone.
    What I want to do is load data from 5000 laser beam profiles, so 5000 csv files (68x68 elements), and then carry out some data analysis. However, the program will only ever load 2117 files, and I get no error messages. I have also tried, initially loading a single file, selecting a crop area - say 30x30 elements - and then loading the rest of the files cropped to these dimensions, but I still only get 2117 files.
    Any thoughts would be much appreciated,
    Kevin
    Kevin Conlisk
    Ph.D Student
    National Centre for Laser Applications
    National University of Ireland, Galway
    IRELAND
    Solved!
    Go to Solution.
    Attachments:
    Load csv files.PNG ‏14 KB

    How many elements are in the array of paths (your size(s) indicator)?
    I suspect that the number of files you can open is limited.
    You could also select a certain folder and use 'List Folder' to get a list of files and load those.
    Your data set is 170 MB, not really astonishing; however, you should watch your programming to prevent duplicate copies of the data.
    Ton
    Free Code Capture Tool! Version 2.1.3 with comments, web-upload, back-save and snippets!
    Nederlandse LabVIEW user groep www.lvug.nl
    My LabVIEW Ideas
    LabVIEW, programming like it should be!

  • 10053 - no trace file generated

    Hi,
    no 10053 trace file is generated in the diag directory.
    sql_trace = true
    trace_enabled = true
    i set
    ALTER SESSION SET TRACEFILE_IDENTIFIER = 'TEST';
    ALTER SESSION SET EVENTS='10053 trace name context forever, level 1';
    but there is no trace file generated.
    Something seems to be missing.
    Any help would be much appreciated!
    Best Regards
    user11368124

    thanks for your messages.
    @Dom Brooks
    the Oracle release is 11.2 running on Ubuntu.
    Added flushing pool. That statement was missing.
    But unfortunately the 10053 trace file is still not generated.
    I am running the following query mentioned in the article "Examining the Oracle Database 10053 Trace Event Dump File" by Steve Callan:
    alter system set TRACE_ENABLED = true;
    alter system set SQL_TRACE = true;
    alter session set statistics_level=all;
    --alter session set max_dump_file_size = unlimited;
    --oradebug setmypid
    --oradebug unlimit
    --oradebug event 10053 trace name context forever, level 1
    ALTER SESSION SET TRACEFILE_IDENTIFIER = 'TEST';
    alter session set events '10046 trace name context forever, level 12';
    alter session set events '10053 trace name context forever, level 1';
    -- plan_table exists
    select * from plan_table
    -- flushing pool
    alter system flush shared_pool;
    explain plan for
    SELECT ch.channel_class,
    c.cust_city,
    t.calendar_quarter_desc,
    SUM(s.amount_sold) sales_amount
    FROM sh.sales s,
    sh.times t,
    sh.customers c,
    sh.channels ch
    WHERE s.time_id = t.time_id
    AND s.cust_id = c.cust_id
    AND s.channel_id = ch.channel_id
    AND c.cust_state_province = 'CA'
    AND ch.channel_desc in ('Internet','Catalog')
    AND t.calendar_quarter_desc IN ('1999-01','1999-02')
    GROUP BY ch.channel_class, c.cust_city, t.calendar_quarter_desc
    ORDER by 1,2,3,4;
    Best Regards
    user11368124

  • Optimize a large number of pdf files using Acrobat Pro XI

    I have Adobe Acrobat Pro XI and need to optimize a large number of scanned files and other pdf's that are quite large. Is there any way to optimize multiple files at the same time, or do I have to do them one at a time (a major bummer if this is the case). I read on the forums that Acrobat Pro 9 had this capablity, so I can't imagine that a newer version wouldn't have it. Thanks for your help.

    You can use an Action to process multiple files in Acrobat Pro XI, via Tools - Action Wizard.
    Add to it a Save command and there you'll be able to specify that you want to optimize the file.

  • Do you know Timmings for trace files generated?

    Hi,
    I have done some SQL tracing using the DBMS_MONITOR package.
    We can also enable SQL tracing using DBMS_SESSION.
    I want to generate a SQL trace file for a "particular part of the application".
    When I did that I got some SQL trace files; now that "particular part of the application" is over, the application is idle,
    but as time goes on these files are still growing in size, which means SQL tracing is still going on.
    My question is: when and how are trace files generated?
    Do you have any ideas?
    Thanks and Regards,
    Rushang Kansara
    Message was edited by:
    Rush

    Also, what content of my SQL trace file should I
    consider to trace exactly that "particular part of
    the application"?
    Rushang
    Parse Count To Execute Ratio
    Take the parse count and divide it by the execute count. If this ratio is 1, it means you are parsing the same statement every time, which latches the shared SQL area and degrades overall performance. For example, if you execute a query using bind variables from a front-end (Forms) POST_QUERY trigger, you will see parse count = execute count, which shows you are parsing on every triggering event; that is bad. Instead, put that SQL inside a PL/SQL procedure, which caches the cursor and turns it into parse count < execute count.
    Large Difference Between Elapsed Time And CPU Time
    If the difference (elapsed time - CPU time) > 1, it means you are spending time waiting on resources, and that waiting shows up as wait events. For example, if someone updates a row and does not release it with COMMIT or ROLLBACK, and in the same span of time you try to update it, you will see a lock in the tkprof wait event section. Likewise, when you read data from disk for the first time, it is read from disk into the buffer cache; a latch is grabbed during this read and other sessions cannot read the data until it completes, which shows up in the wait events as cache buffers chains.
    Fetch Calls
    If your fetch calls = rows, it means you are not using bulk fetching, and your code will make a lot of round trips, which in turn jams the network.
    Disk Count
    If your disk count = current + query mode reads every time, you are reading all blocks from disk all the time; usually Oracle reads from disk once, puts the blocks into the SGA, and finds them there the second time.
    There are many more checks, depending on your environment, but the above are the common ones.
    As you said the tkprof output keeps growing, make sure you terminate the session or explicitly turn off the tracer with
    ALTER SESSION SET SQL_TRACE=FALSE; Khurram
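    The rules of thumb above can be captured as simple checks over tkprof summary totals. A hedged sketch (function name and the exact thresholds are illustrative, not an Oracle API):

```python
def tkprof_flags(parse_count, execute_count, elapsed, cpu, fetch_calls, rows):
    """Apply the rule-of-thumb tkprof checks described above.

    Returns a list of human-readable warnings; empty means none fired."""
    flags = []
    # Parse-to-execute ratio of 1 means the statement is re-parsed every run
    if execute_count and parse_count / execute_count >= 1:
        flags.append("parse count = execute count: cursor not cached / no bind reuse")
    # A large elapsed-vs-CPU gap means time was spent on wait events
    if elapsed - cpu > 1:
        flags.append("elapsed far exceeds CPU: check wait events (locks, I/O latches)")
    # One fetch call per row means no array/bulk fetching
    if rows > 0 and fetch_calls == rows:
        flags.append("fetch calls = rows: no bulk fetching, many round trips")
    return flags
```

Feeding in the totals from a tkprof report gives a quick triage list before reading the full wait-event section.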

  • Too many trace files generated by program ORACLE.EXE (J001) in "bdump"

    Hi,
    Please help!
    The following trace file messages have been created in my "bdump" folder about every 6 minutes, generating about 200 files per hour. Can someone tell me how to solve the error or stop the trace files from being generated? Thanks.
    Windows thread id: 5520, image: ORACLE.EXE (J001)
    *** 2009-05-22 12:49:21.372
    *** ACTION NAME:() 2009-05-22 12:49:21.372
    *** MODULE NAME:() 2009-05-22 12:49:21.372
    *** SERVICE NAME:(SYS$USERS) 2009-05-22 12:49:21.372
    *** SESSION ID:(312.292) 2009-05-22 12:49:21.372
    java.io.IOException: service early exit: code=1 : err=The system cannot find the path specified.
    : out=
    at oracle.wh.runtime.server.Util.execRuntimeService(Util.java:122)
    CJ

    No version number and not enough information in what you posted to help you.
    Did this just start?
    If so what actions preceded it?
    Or is this a new install?
    Have you tried bouncing the instance?
    If it were my system I would have already searched the knowledgebase at metalink and opened an SR if I couldn't find a solution. Did you?

  • Trace files generated by Portal

    Our DBA has just sent me an email saying that Portal is generating "thousands of trace files" on the server with messages like this:
    *** SESSION ID:(8.11147) 2002-08-05 15:46:12.338
    Traverse response tree:
    SOAP-ENV:Envelope:
    SOAP-ENV:Body:
    portal:initSessionResponse:
    sessionTimeout:
    1800
    I've never had/seen this problem before. Anyone know how to prevent this?
    Thanks
    Rich Zapata

    No version number and not enough information in what you posted to help you.
    Did this just start?
    If so what actions preceded it?
    Or is this a new install?
    Have you tried bouncing the instance?
    If it were my system I would have already searched the knowledgebase at metalink and opened an SR if I couldn't find a solution. Did you?

  • ESO application creates a large number of temporary file

    Hello,
    To summarize, when users run large queries, use attachments, or generate PDFs, etc., temporary files are created in the /sourcing/tmp folder. Files older than 24 hours are cleared from this folder when logs roll over.
    Our /sourcing filesystem is not sized to handle this 'feature' and I want to look for options. I am not keen on having a large quantity of temporary files going through the application filesystem /sourcing. The option that I can think of to get around it is to create a new filesystem or reuse an existing one and softlink /sourcing/tmp to it. We'll have to do this for each application server.
    Does anybody have any suggestions as to how to avoid this problem?
    Does anybody have any inputs for sizing /sourcing/tmp?
    Thanks,
    Dnyandev

    All,
    The number of .blob and .bin files that get generated is large when you run volume testing. SAP has identified it as a bug in the product.
    Does anybody have a solution?
    We are currently using SAP ESourcing CLM 5.1
    Thanks,
    Dnyandev Kondekar

  • Approach to parse large number of XML files into the relational table.

    We are exploring the option of XML DB for processing a large number of files arriving on the same day.
    The objective is to parse the XML files and store the data in multiple relational tables. Once the data is in the relational tables we do not care about the XML file.
    The file cannot be stored on the file server and needs to be stored in a table before parsing due to security issues. A third party system will send the file and will store it in the XML DB.
    File size can be between 1MB and 50MB, and high performance is very much expected; otherwise the solution will be tossed.
    Although we do not have an XSD, the XML file is well structured. We are on 11g Release 2.
    Based on my reading, this is my approach:
    1. CREATE TABLE xml_data
       (xml_col XMLTYPE)
       XMLTYPE COLUMN xml_col STORE AS SECUREFILE BINARY XML;
    2. Third party will store the data in XML_DATA table.
    3. Create XMLINDEX on the unique XML element
    4. Create views on XMLTYPE
    CREATE OR REPLACE FORCE VIEW V_XML_DATA(
       Stype,
       Mtype,
       MNAME,
       OIDT)
    AS
       SELECT x."Stype",
              x."Mtype",
              x."Mname",
              x."OIDT"
       FROM   xml_data t,
              XMLTABLE (
                 '/SectionMain'
                 PASSING t.xml_col
                 COLUMNS Stype VARCHAR2 (30) PATH 'Stype',
                         Mtype VARCHAR2 (3) PATH 'Mtype',
                         MNAME VARCHAR2 (30) PATH 'MNAME',
                         OIDT VARCHAR2 (30) PATH 'OID') x;
    5. Bulk load the parsed data into the staging table based on the indexed column.
    Please comment on the above approach; any suggestions that can improve performance are welcome.
    Thanks
    AnuragT

    Thanks for your response. It gives me more confidence.
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    TNS for Linux: Version 11.2.0.3.0 - Production
    Example XML
    <SectionMain>
    <SectionState>Closed</SectionState>
    <FunctionalState>CP FINISHED</FunctionalState>
    <CreatedTime>2012-08</CreatedTime>
    <Number>106</Number>
    <SectionType>Reel</SectionType>
    <MachineType>CP</MachineType>
    <MachineName>CP_225</MachineName>
    <OID>99dd48cf-fd1b-46cf-9983-0026c04963d2</OID>
    </SectionMain>
    <SectionEvent>
    <SectionOID>99dd48cf-2</SectionOID>
    <EventName>CP.CP_225.Shredder</EventName>
    <OID>b3dd48cf-532d-4126-92d2</OID>
    </SectionEvent>
    <SectionAddData>
    <SectionOID>99dd48cf2</SectionOID>
    <AttributeName>ReelVersion</AttributeName>
    <AttributeValue>4</AttributeValue>
    <OID>b3dd48cf</OID>
    </SectionAddData>
    <SectionAddData>
    <SectionOID>99dd48cf-fd1b-46cf-9983</SectionOID>
    <AttributeName>ReelNr</AttributeName>
    <AttributeValue>38</AttributeValue>
    <OID>b3dd48cf</OID>
    <BNCounter>
    <SectionID>99dd48cf-fd1b-46cf-9983-0026c04963d2</SectionID>
    <Run>CPFirstRun</Run>
    <SortingClass>84</SortingClass>
    <OutputStacker>D2</OutputStacker>
    <BNCounter>54605</BNCounter>
    </BNCounter>
    I was not aware of Virtual column but looks like we can use it and avoid creating views by just inserting directly into
    the staging table using virtual column.
    Suppose OID is the unique identifier of each XML file, and I created a virtual column:
    CREATE TABLE po_Virtual OF XMLTYPE
    XMLTYPE STORE AS BINARY XML
    VIRTUAL COLUMNS
    (OID_1 AS (XMLCAST(XMLQUERY('/SectionMain/OID'
    PASSING OBJECT_VALUE RETURNING CONTENT)
    AS VARCHAR2(30))));
    1. My question is: how will I then write this query without using the column XML_COL?
    SELECT x."SECTIONTYPE",
    x."MACHINETYPE",
    x."MACHINENAME",
    x."OIDT"
    FROM po_Virtual t,
    XMLTABLE (
    '/SectionMain'
    PASSING t.xml_col                          <--WHAT WILL PASSING HERE SINCE NO XML_COL
    COLUMNS SectionType VARCHAR2 (30) PATH 'SectionType',
    MachineType VARCHAR2 (3) PATH 'MachineType',
    MachineName VARCHAR2 (30) PATH 'MachineName',
    OIDT VARCHAR2 (30) PATH 'OID') x;
    2. Instead of creating the view, can I do
    insert into STAGING_table_yyy (col1, col2, col3, col4)
    SELECT x."SECTIONTYPE",
    x."MACHINETYPE",
    x."MACHINENAME",
    x."OIDT"
    FROM xml_data t,
    XMLTABLE (
    '/SectionMain'
    PASSING t.xml_col                         <--WHAT WILL PASSING HERE SINCE NO XML_COL
    COLUMNS SectionType VARCHAR2 (30) PATH 'SectionType',
    MachineType VARCHAR2 (3) PATH 'MachineType',
    MachineName VARCHAR2 (30) PATH 'MachineName',
    OIDT VARCHAR2 (30) PATH 'OID') x
    where oid_1 = '99dd48cf-fd1b-46cf-9983';<--VIRTUAL COLUMN
    insert into STAGING_table_yyy (col1, col2, col3)
    SELECT x."SectionOID",
    x."EventName",
    x."OIDT"
    FROM xml_data t,
    XMLTABLE (
    '/SectionMain'
    PASSING t.xml_col                         <--WHAT WILL PASSING HERE SINCE NO XML_COL
    COLUMNS SectionOID VARCHAR2 (30) PATH 'SectionOID',
    EventName VARCHAR2 (30) PATH 'EventName',
    OID VARCHAR2 (30) PATH 'OID') x
    where oid_1 = '99dd48cf-fd1b-46cf-9983';<--VIRTUAL COLUMN
    The same insert applies for the other tables, using the OID_1 virtual column.
    3. Finally, once done, how can I delete the XML document from the XML table?
    If I am using the virtual column then I believe it will be easy:
    DELETE FROM po_Virtual WHERE oid_1 = '99dd48cf-fd1b-46cf-9983';
    But in case we cannot use the virtual column, how can we delete the data?
    Thanks in advance
    AnuragT
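    Outside the database, the shredding step can be prototyped quickly to validate the element paths before writing the XMLTABLE projections. This sketch mirrors the SectionMain projection above; the wrapper root element and function name are assumptions:

```python
import xml.etree.ElementTree as ET

def shred_section_main(xml_text):
    """Extract one relational row per SectionMain element, mirroring
    the XMLTABLE projection (SectionType, MachineType, MachineName, OID)."""
    root = ET.fromstring(xml_text)
    rows = []
    for sec in root.iter("SectionMain"):
        rows.append({
            "SectionType": sec.findtext("SectionType"),
            "MachineType": sec.findtext("MachineType"),
            "MachineName": sec.findtext("MachineName"),
            "OID": sec.findtext("OID"),
        })
    return rows
```

Each dict corresponds to one row destined for the staging table; the same pattern extends to SectionEvent and SectionAddData with their own element paths.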

  • Lots of trace files generated in the udump dir at the propagation target site

    About one file every 5 minutes; what can I do to disable it?
    trace file sample:
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
    ORACLE_HOME = /oraps/product/10.2.0
    System name: SunOS
    Node name: BjYwDbserver
    Release: 5.10
    Version: Generic_118833-03
    Machine: sun4u
    Instance name: zgs
    Redo thread mounted by this instance: 1
    Oracle process number: 19
    Unix process pid: 3885, image: oraclezgs@BjYwDbserver
    *** SERVICE NAME:(zgs.psmis) 2006-11-30 02:17:35.629
    *** SESSION ID:(267.13934) 2006-11-30 02:17:35.629
    *** Destination Propagation Status ***
    Source Q Name :"STRMADMIN"."STREAMS_CAPTURE_Q"
    Destination :HUAD.PSMIS
    Hwm : 517894
    Dest Q Name :"STRMADMIN"."STREAMS_APPLY_HUAD_Q" Hwm : 517894 Ack : 517894
    LsM : 517894
    LsP : 163197
    --------------------------------------------------

    Hi,
    I have seen this issue on 10.2.0.2 too.
    This tracing is on by default and you have to put in some cron job to remove the trace files.
    Regards
    Fairlie Rego
    www.el-caro.blogspot.com

  • Trace files generated for every session in 11g

    Hi
    I have two databases - both 11.1.0.7, both on RHEL5
    Database A runs on Server A
    Database B runs on Server B
    Both installation of 11g and each database are new installations.
    On Database A a trace file is being created for every session in ADR_HOME.../trace.
    On Database B - this is not happening
    The problem I have is Database A. As every session connection creates a trace file (or two, being *.trc and *.trm), at the end of the day we have thousands of unnecessary trace files.
    A trace file is created for every user - SYS, SYSTEM, application users, etc... It's being created immediately - even if no SQL statements are run in the session.
    I've compared the init.ora parameters running in each database - and can find no differences. btw - SQL_TRACE is set to FALSE.
    Any ideas why a trace file is being generated for every session on Database A? And how to switch this off?
    TIA
    Regards
    Paul

    What type of content is in generated trace files? Is it SQL trace or something different?
    Have you any AFTER LOGON trigger? It can be checked with:
    col text format a100
    select name, text
      from dba_source
    where name in (select trigger_name from dba_triggers where triggering_event like 'LOGON%')
    order by name, line

  • Trace enabled, but no trace file generated

    I have enabled trace (Application Developer -> Concurrent -> Program, find the report, tick 'Enable Trace', save), then I execute the report, but no trace file is generated. Checking the fnd_concurrent_requests table, the record has oracle_process_id=null.
    What can I do to have a trace generated?

    Hi,
    I have enabled trace (Application Developer -> Concurrent -> Program, find the report, tick 'Enable Trace', save), then I execute the report, but no trace file is generated. Checking the fnd_concurrent_requests table, the record has oracle_process_id=null. What can I do to have a trace generated?
    Please enable trace from the Apps 11i form -> Help -> Diagnostics -> Trace (here you can select an option like regular trace, trace with binds, etc.) as per your requirement. But for this, the trace profile option should be enabled for your user.
    Regards,
    X A H E E R

  • Trace files generated

    Oracle generated the following trace files; can someone explain to me what error they report?
    ora_28324.trc
    *** 2004-07-30 12:12:26.065
    *** SESSION ID:(19.7) 2004-07-30 12:12:26.064
    Probe:write_request: backend error 1003
    ora_28326.trc
    *** 2004-07-30 12:12:26.063
    *** SESSION ID:(20.2) 2004-07-30 12:12:26.054
    Probe:S:get_scalar: exception 10: ORA-06502: PL/SQL: numeric or value error
    *** 2004-07-30 12:23:42.230
    Probe:read_pipe: receive failed, status 3
    Probe:S:debug_loop: timeout. Action 1

    These are coming from the PL/SQL debugging API. They are non-fatal and can be ignored. Something/someone must be debugging some PL/SQL on the machine.

  • SAP instance Slow and trace file generates below errors :  DiagOConvert...

    Hello,
    Please help to resolve the problem below.
    SAP information:
    - SAP ECC 6.0
    - ORACLE : 10.2.0.4.0
    - Kernel release : 700
    Trace file content :
    *** ERROR => platform      : WINDOWS/NT [diagsrv.c    842]
      *** ERROR => gui version   : 710        [diagsrv.c    851]
      *** ERROR => gui patchlevel: 12                                                 [diagsrv.c
      *** ERROR => transaction   : SP01                 [diagsrv.c    867]
      *** ERROR => user          : SAPSERVE     [diagsrv.c    881]
      *** ERROR => display    : FRBOUD083 [diagsrv.c    887]
      *** ERROR => DiagOConvert: input and output buffer overlap [diagconv.c   1146]
    +0)  0x400000000149b6a8   CTrcStack2 + 0x1d8  [dw.sapP05_DVEBMGS21]+
    +1)  0x400000000149b4c0   CTrcStack + 0x18  [dw.sapP05_DVEBMGS21]+
    +2)  0x40000000015d5080   DiagOConvert + 0x1e0  [dw.sapP05_DVEBMGS21]+
    +3)  0x40000000015d7050   DiagoString0 + 0xf0  [dw.sapP05_DVEBMGS21]+
    +4)  0x400000000161e740   DiagoCurrentCodepage + 0x2c0  [dw.sapP05_DVEBMGS21]+
    +5)  0x4000000001601490   diagoutput + 0xb50  [dw.sapP05_DVEBMGS21]+
    +6)  0x40000000016003fc   diagmout + 0x1cc  [dw.sapP05_DVEBMGS21]+
    +7)  0x4000000001620f20   diagmsgo + 0x638  [dw.sapP05_DVEBMGS21]+
    +8)  0x4000000001508728   dytrcexit + 0x648  [dw.sapP05_DVEBMGS21]+
    +9)  0x4000000001507a30   dypex00 + 0x858  [dw.sapP05_DVEBMGS21]+
    +10)  0x4000000001513498   dynpoutf + 0x450  [dw.sapP05_DVEBMGS21]+
    +11)  0x400000000150a464   dynprctl + 0x61c  [dw.sapP05_DVEBMGS21]+
    +12)  0x4000000001504554   dynpen00 + 0x1f04  [dw.sapP05_DVEBMGS21]+
    +13)  0x400000000128e9c8   Thdynpen00 + 0xf08  [dw.sapP05_DVEBMGS21]+
    +14)  0x400000000128d1a8   TskhLoop + 0x5980  [dw.sapP05_DVEBMGS21]+
    +15)  0x4000000001281e44   ThStart + 0x214  [dw.sapP05_DVEBMGS21]+
    +16)  0x40000000011c8648   DpMain + 0x410  [dw.sapP05_DVEBMGS21]+
    +17)  0x40000000011c5c0c   nlsui_main + 0x14  [dw.sapP05_DVEBMGS21]+
    +18)  0x40000000011c5bd4   main + 0x3c  [dw.sapP05_DVEBMGS21]+
    +19)  0xc00000000000b7a8   $START$ + 0xa0  [/usr/lib/pa20_64/dld.sl]+
    Thank you.

    First of all, thank you for your answer.
    Yes, I have already seen that note, but what exactly do they propose as an applicable solution in this text:
    Solution
    By applying a kernel patch, the trace output is enhanced with the information of the callpoint so that the cause of the problem can be analyzed faster.
    The error traces were issued with the print view of texts. The problem is solved with a kernel patch.
    The error trace output now works at trace levels higher than 1 only.
