OMS generates tons of core files in OMS_HOME?

Hi,
We recently upgraded OMS to 10.2.0.4 and found that OMS has generated a lot of core files, each with a file size of 0 bytes. Please let me know what the cause might be.
Thanks,
Regards,
Vinoth

Hi,
Thanks for your response.
The file name format is:
core.3586
core.3587
core.35XX
core.****
You can guess the rest.
Thanks,
Regards,
Vinoth
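
A quick way to start narrowing this down (a minimal sketch; the core name comes from the list above, and the coreadm line applies only if the OMS host runs Solaris):
# ulimit -c                  # a low or zero core-size limit, or a full filesystem, commonly leaves empty/truncated cores
# df -h $OMS_HOME            # check free space where the cores are written
# file $OMS_HOME/core.3586   # on a usable core, this names the binary that crashed
# coreadm                    # Solaris only: shows where and how core files are written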

Similar Messages

  • OEM Grid generating too many core files

    Hi all,
    Database: Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
    OEM (OMS and Agent): 10.2.0.5
    OS: Solaris 10 (SPARC 64bit)
    I have a weird problem concerning the agent: each and every time I start the agent, the cdump directory is filled with hundreds of core files (66 MB each), filling up the whole filesystem where the cdump directory resides. I'm not sure why this is happening. Is this a bug? Has anyone experienced this before?
    Thanks in advance for your help.
    Regards,
    Durbanite - South Africa

    Hi again,
    This is the content of the alert log:
    bash-3.00$ tail alert_clpr1.log
    ORA-07445: exception encountered: core dump [00000001002258A4] [SIGBUS] [Invalid address alignment] [0x60708090A0B0002] [] []
    Fri Jul 17 10:01:11 2009
    Errors in file /udd001/app/oracle/admin/clpr/udump/clpr1_ora_27740.trc:
    ORA-07445: exception encountered: core dump [00000001002258A4] [SIGBUS] [Invalid address alignment] [0x60708090A0B0002] [] []
    bash-3.00$ tail /udd001/app/oracle/admin/clpr/udump/clpr1_ora_27839.trc
    FD6C6EF9:00005D6A 265 0 10280 1 0x0000000000000109
    F3FED575:00005992 266 0 10280 1 0x000000000000010A
    FD6E1007:00005D6B 266 0 10280 1 0x000000000000010A
    F40260C6:00005994 267 0 10280 1 0x000000000000010B
    F40735E8:00005995 268 0 10280 1 0x000000000000010C
    F40C0992:00005997 269 0 10280 1 0x000000000000010D
    F40F9C50:00005999 270 0 10280 1 0x000000000000010E
    KSTDUMP: End of in-memory trace dump
    ssexhd: crashing the process...
    Shadow_Core_Dump = PARTIAL
    I think I might need to contact Oracle Support for this one; I'll start by using the ORA-07445 tool on Metalink.
    Thanks and regards,
    Durbanite
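
    To line the cores up with the ORA-07445 before logging the SR, a rough sketch (the bdump path is an assumption based on the udump path above):
    ls -lt /udd001/app/oracle/admin/clpr/cdump | head -5
    grep -n "ORA-07445" /udd001/app/oracle/admin/clpr/bdump/alert_clpr1.log | tail -5
    pstack <one of the core files>    # Solaris: prints the crashing stack without loading mdb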

  • Generate IP core files from a xco file

    I have a .xco file, and I have been told that I need to generate the other IP core files by running my .xco file.
    Q1. How is that done? 
    Q2. Do I need to use the Xilinx Core Generator to do this? Do I have to create a 'project' in Xilinx Core Generator?
    Thanks for the help

    Never mind! I figured out how to do it. For anyone else out there who might have this problem...
    ->Create a new project in the XILINX CORE Generator
    ->Import your desired .xco file.
    ->Double click on the Instance Name of the IP
    ->Set up your IP as desired
    ->Click 'Generate'
    This should generate all of the needed files for you to use your IP block in EDK.
    This was for ISE 14.4
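
    For anyone who prefers the command line, CORE Generator also has a batch mode; a sketch from memory of the ISE tools (treat the flags as an assumption and confirm with coregen -h):
    coregen -b my_ip.xco -p ./coregen_project
    # -b runs the .xco as a batch script, -p names the project directory to use or create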

  • Application running in a Solaris 9 container is generating core files. What to do?

    My solaris 9 zone configuration in solaris 10 looks like:
    zonecfg:sms> info
    zonename: sms
    zonepath: /zone/sms
    brand: solaris9
    autoboot: true
    bootargs:
    pool:
    limitpriv: default,proc_priocntl,proc_clock_highres,proc_lock_memory,sys_time,priv_proc_priocntl,priv_sys_time,net_rawaccess,sys_ipc_config,priv_proc_lock_memory
    scheduling-class:
    ip-type: exclusive
    hostid:
    [max-shm-memory: 4G]
    [max-shm-ids: 100]
    [max-sem-ids: 100]
    fs:
      dir: /var
      special: /dev/dsk/c1t0d0s5
      raw: /dev/rdsk/c1t0d0s5
      type: ufs
      options: []
    net:
      address not specified
      physical: bge0
      defrouter not specified
    device
      match: /dev/dsk/c1t0d0s5
    device
      match: /dev/rdsk/c1t0d0s5
    device
      match: /dev/dsk/c1t0d0s6
    device
      match: /dev/rdsk/c1t0d0s6
    device
      match: /dev/dsk/c1t0d0s7
    device
      match: /dev/rdsk/c1t0d0s7
    capped-cpu:
      [ncpus: 2.00]
    capped-memory:
      physical: 4G
      [swap: 8G]
      [locked: 2G]
    attr:
      name: hostid
      type: string
      value: 84b18f64
    attr:
      name: machine
      type: string
      value: sun4u
    rctl:
      name: zone.max-sem-ids
      value: (priv=privileged,limit=100,action=deny)
    rctl:
      name: zone.max-shm-ids
      value: (priv=privileged,limit=100,action=deny)
    rctl:
      name: zone.max-shm-memory
      value: (priv=privileged,limit=4294967296,action=deny)
    rctl:
      name: zone.max-swap
      value: (priv=privileged,limit=8589934592,action=deny)
    rctl:
      name: zone.max-locked-memory
      value: (priv=privileged,limit=2147483648,action=deny)
    rctl:
      name: zone.cpu-cap
      value: (priv=privileged,limit=200,action=deny)
    Solaris 9 zone /etc/system file looks like:
    * The directive below is not applicable in the virtualized environment.
    * The directive below is not applicable in the virtualized environment.
    * The directive below is not applicable in the virtualized environment.
    * The directive below is not applicable in the virtualized environment.
    * The directive below is not applicable in the virtualized environment.
    * The directive below is not applicable in the virtualized environment.
    set noexec_user_stack=1
    set semsys:seminfo_semmni=100
    set semsys:seminfo_semmns=1024
    set semsys:seminfo_semmsl=256
    set semsys:seminfo_semvmx=32767
    set shmsys:shminfo_shmmax=4294967295
    set shmsys:shminfo_shmmin=1
    set shmsys:shminfo_shmmni=100
    set shmsys:shminfo_shmseg=10
    set rlim_fd_max=65536
    set rlim_fd_cur=60000
    * The directive below is not applicable in the virtualized environment.
    My questions are:
    1. An application running in the Solaris 9 container is generating core files. What should I do?
    2. prstat -Z for the zone shows almost 95% CPU usage. What should I do?
    3. Can you share how to move Solaris 9 into Solaris 10 containers?

    Since you posted new questions for the same issue in the other communities, some posts were removed as duplicates; here is the answer:
    For point #3, please look at Table 17-1 at the following URL:
    Zone Components - System Administration Guide: Oracle Solaris Containers-Resource Management and Oracle Solaris Zones
    You can also customize your container's /etc/system file, but its values cannot exceed the global zone and zone configuration values.
    For the other point, #2, this can be complicated without a complete picture of what the whole system is doing.
    First check whether you have a busy process in your zone, then check whether a bottleneck exists on the I/O side. You may be using wrong parameters or a wrong configuration, or your system configuration may be insufficient in terms of resources.
    What I can see in the outputs you provided is that the S9 zone uses half of the swap space. This can impact your zone's performance and I/O activity, and can in this case have a side effect on some processes. Check why your zone uses swap and how you can remedy this.
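
    For question #1, a minimal sketch run inside the sms zone (the core paths are placeholders) to see what is dumping and where the cores land:
    coreadm                      # shows the per-process and global core file patterns in effect for this zone
    file <path-to-a-core>        # names the binary that produced the core
    pstack <path-to-a-core>      # prints the crashing stack
    prstat -Z 5 5                # re-checks the ~95% CPU figure over a few samples (question #2)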

  • Solaris 10 update 6 keeps generating core file (/core)

    I wonder if somebody has encountered the following issue.
    I did a fresh install of Solaris 10 update 6 on two servers (T5140 and T5240) from DVD.
    I noticed that a core file was in the root filesystem (/core).
    So, I deleted it.
    As soon as I delete the core file, another one is generated.
    This is happening on both servers where I installed Solaris 10 update 6 from the DVD.
    This is not a live upgrade install; Solaris was installed from scratch. When prompted to preserve previous data, I replied 'do not preserve data'.
    Does anybody know where the core file is coming from and how to stop it being generated?
    Found out that is coming from vold
    SunOS b1osdtsun02 5.10 Generic_137137-09 sun4v sparc SUNW,T5240
    # more /etc/release
                          Solaris 10 10/08 s10s_u6wos_07b SPARC
               Copyright 2008 Sun Microsystems, Inc.  All Rights Reserved.
                            Use is subject to license terms.
                                Assembled 27 October 2008
    # mdb /core
    Loading modules: [ libsysevent.so.1 libnvpair.so.1 libc.so.1 ld.so.1 ]
    ::status
    debugging core file of vold (32-bit) from b1osdtsun02
    file: /usr/sbin/vold
    initial argv: /usr/sbin/vold -f /etc/vold.conf
    threading model: multi-threaded
    status: process terminated by SIGSEGV (Segmentation Fault)
    ::stack
    libc.so.1`strlen+0x18(408450a5, 0, 0, 88b70, 600, 180)
    read_slices+0x114(874a0, b, 889a0, feeafd34, 1, 5)
    read_hsfs_partition+0x88(b, 46c00, 6d0000, 2c, 34400, 1010101)
    read_partition+0x30(874a0, 341a4, 3, 34000, 34400, 9)
    create_top_partition+0x140(7cbe0, 7cc24, 7cbe0, 874a0, ffffffff, b)
    0x265e0(800012, feeaff9c, c, 598e0, 7cbe0, ffffffff)
    create_medium+0x74(800012, feeaff9c, 20, 12, 47800, c)
    0x2232c(5d278, 0, 0, 800012, 20, 33000)
    libc.so.1`_lwp_start(0, 0, 0, 0, 0, 0)
    >
    #
    It seems that vold is failing to mount the DVD on both servers after Solaris was installed.
    Is this a Solaris 10 update 6 bug?

    Never mind.
    It is a known bug, documented in the Solaris 10 10/08 Release Notes, Chapter 2 "Solaris Runtime Issues" (http://docs.sun.com/app/docs/doc/820-5245/chapter2-1000?a=view), as shown below.
    The solution is to apply vold patch 138130-01 (http://sunsolve.sun.com/search/document.do?assetkey=1-21-138130-01-1).
    Solaris 10 10/08 DVD Media Might Not be Automatically Mounted by vold (6712352)
    The Solaris 10 10/08 DVD does not mount by default during runtime. No error message is displayed.
    Workaround: Perform the following steps:
       1. Become superuser.
       2. Disable vold:
          * On Solaris 10 Systems:
                # svcadm disable -t volfs
          * On Solaris 8 and Solaris 9 systems:
                /etc/init.d/volmgt stop
       3. Mount the media manually by using the mount -F hsfs <path to block device> <path to mount point> command. For example:
          # mount -F hsfs /dev/rdsk/c0t2d0s2 /mnt
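
    In addition to the patch, a sketch for keeping stray cores out of / in the meantime (the pattern shown is just an example):
    # coreadm -g /var/core/core.%f.%p -e global   # collect global cores under /var/core, named by program and PID
    # coreadm -d process                          # optionally stop per-process cores landing in the working directory
    # svcadm restart volfs                        # recheck vold once patch 138130-01 is applied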

  • System hangs frequently and generates a core file

    Hi All,
    One of my workstations is hanging frequently; when I kill that session it generates a core file. The details are as follows.
    Hardware : Sun Blade 2500
    Memory: 2 GB
    Operating System : Solaris 8 2/04
    Patch Version : Generic_117350-12
    I found the following errors in the core file:
    XtToolkitError.XtToolkitError
    CancelDrag
    typeConversionError.noConverter
    typeConversionError.noConverter
    files.so/usr/lib/locale/C/LC_CTYPE/textboundary.so.1
    override
    rter registered for 'Pixel' to 'SelectColor' conversion.
    No type converter registered for 'Pixel' to 'SelectColor' conversion.
    ListKbdCancel
    Cp9{
    override
    No type converter registered for '%s' to '%s' conversion.
    No type converter registered for '%s' to '%s' conversion.
    Please help me to fix this issue.
    Thanks & Regards,
    Ramana

    Hi,
    These files are being generated because your JVM is crashing frequently.
    When the JVM crashes, it captures its state at the time of the crash, and that is when these files are generated.
    Please study your std_server log very carefully and you will find some lines indicating an "Out of Memory" kind of error.
    If you can stop the JVM from crashing, these files won't be generated any more.
    I have faced the same issue and fixed it in the following ways:
    1) Upgrade your JVM to the latest version so that better memory management is in place.
    2) If these files are still being generated, you can tune the new version of the JVM.
    With Regards,
    Saurabh
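
    A short sketch of those two points (the log file name and heap sizes are assumptions; adjust for your install):
    egrep -n "OutOfMemory" std_server0.log | tail -5   # confirm the "Out of Memory" pattern mentioned above
    # then raise the heap in the server's JVM settings, for example:
    #   -Xms512m -Xmx1024m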

  • Core file generated in Tuxedo 8.1

    Hi,
    We are using Tuxedo 8.1 and we are seeing core files getting generated, and we are not able to identify the reason.
    Below are the details.
    Using the debugger, below is the output we got:
    file core
    core: ELF-64 core file - IA64 from 'ck_CustomerUdt' - received SIGSEGV
    Core was generated by `ck_CustomerUdt'.
    warning: ck_CustomerUdt is 14 characters in length. Due to a limitation
    in the HP-UX kernel, core files contain only the first 14 characters
    of an executable's name. Check if ck_CustomerUdt is a truncated name.
    If it is so, core-file, packcore and other commands dealing with
    core files will exhibit incorrect behavior. To avoid this, issue
    exec-file and symbol-file commands with the full name of the executable
    that produced the core; then issue the core-file, packcore or other
    core file command of interest.
    Program terminated with signal 11, Segmentation fault.
    SEGV_MAPERR - Address not mapped to object
    warning: Load module /oraclehometux/oracle/product/10.2.0/db_1/lib/libclntsh.so.10.1 has been stripped.
    Debugging information is not available.
    warning: Load module /oraclehometux/oracle/product/10.2.0/db_1/lib/libnnz10.so has been stripped.
    Debugging information is not available.
    #0 0xc00000000866d1f0:0 in UDTProcess () at udt_Process.c:89
    89 udt_Process.c: No such file or directory.
    in udt_Process.c
    (gdb) bt
    #0 0xc00000000866d1f0:0 in UDTProcess () at udt_Process.c:89
    #1 0xc0000000214e33c0:0 in ck_CustomerUdtRequest () at udt.c:616
    #2 0x400000000000b800:0 in commonServiceWrapper () at ck_CustomerUdt.c:866
    #3 0x40000000000101e0:0 in I_CustomerUdtRequest () at ck_CustomerUdt.c:1285
    #4 0xc000000003c15020:0 in _tmsvcdsp () at tmsvcdsp.c:545
    #5 0xc000000003c66bb0:0 in _tmrunserver () at tmrunsvr.c:2015
    #6 0xc000000003c12a90:0 in _tmstartserver () at tmstrtsrvr.c:141
    #7 0x4000000000005240:0 in main () at BS-109a.c:76
    Current language: auto; currently c
    It shows udt_Process.c: no such file.
    Please help, as we are facing this every day, and every time the core is generated with the same error.

    Hi,
    Well, from the call stack it looks like your server made a bad pointer reference on line 89 of udt_Process.c. This is really unlikely to be a Tuxedo-related issue; more likely it is just a coding error in the service.
    Regards,
    Todd Little
    Oracle Tuxedo Chief Architect
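
    Following the debugger's own hint about the 14-character truncation, a gdb sketch (the full executable name and path are hypothetical):
    gdb
    (gdb) exec-file /apps/tux/bin/ck_CustomerUdtServer
    (gdb) symbol-file /apps/tux/bin/ck_CustomerUdtServer
    (gdb) core-file core
    (gdb) frame 0
    (gdb) list udt_Process.c:89    # only works if the source tree is reachable from here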

  • Core files frequently generated in /var/core in 11.5.10

    Hi,
    We have recently moved our production server from NetApp to EMC storage. After this we noticed that a lot of core files are being generated by different modules like GL, AP, etc.
    If someone has encountered the same issue, please let us know the reason.
    Regards,

    User,
    My program completed successfully, yet core files are still being generated.
    Please let me know the reason for the core files even though the program completed successfully.
    The file names look like:
    core_<server name>GLPPOS504_501_1266506917_27175
    Did you check my previous post? Did you check Note 580120.1, "Program Was Terminated By Signal 11 On GLLEZL After Applying Patch 11I.ATG_PF.H.DELTA.6"? Did you try to relink GLLEZL via adadmin and retest the issue?
    Regards,
    Helios
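
    For reference, the relink Helios suggests can also be run from the shell with adrelink.sh instead of the adadmin menu (the environment file name is an assumption):
    . $APPL_TOP/APPSORA.env
    adrelink.sh force=y "gl GLLEZL"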

  • Lots of core files are generated on the web server machine

    We have an iPlanet web server running in our production environment. It is creating a lot of entries in the file /var/adm/messages:
    Aug 11 13:00:01 uk17 sendmail[1449]: [ID 702911 mail.warning] gethostbyaddr(192.168.245.62) failed: 1
    Aug 11 13:00:01 uk17 sendmail[1449]: [ID 702911 mail.warning] gethostbyaddr(192.168.245.64) failed: 1
    Aug 11 13:00:01 uk17 sendmail[1449]: [ID 702911 mail.warning] gethostbyaddr(192.168.245.84) failed: 1
    Aug 11 13:00:01 uk17 sendmail[1449]: [ID 702911 mail.warning] gethostbyaddr(192.168.245.88) failed: 1
    Aug 11 13:00:01 uk17 sendmail[1449]: [ID 702911 mail.warning] gethostbyaddr(192.168.245.90) failed: 1
    Aug 11 13:00:01 uk17 sendmail[1449]: [ID 702911 mail.warning] gethostbyaddr(192.168.245.92) failed: 1
    Aug 11 13:00:01 uk17 sendmail[1449]: [ID 702911 mail.warning] gethostbyaddr(192.168.245.94) failed: 1
    Aug 11 13:00:01 uk17 sendmail[1449]: [ID 702911 mail.warning] gethostbyaddr(192.168.245.106) failed: 1
    Aug 11 13:00:01 uk17 sendmail[1449]: [ID 702911 mail.warning] gethostbyaddr(192.168.245.108) failed: 1
    Aug 11 13:00:01 uk17 sendmail[1449]: [ID 702911 mail.warning] gethostbyaddr(192.168.245.110) failed: 1
    It is creating a lot of core files in /var/core/ with names like:
    core.ns-httpd.14156.uk17.0.0.1218243851
    core.ns-httpd.14922.uk17.0.0.1217950925
    core.ns-httpd.14922.uk17.0.0.1217950926
    core.ns-httpd.14937.uk17.0.0.1218243696
    core.ns-httpd.14937.uk17.0.0.1218243697
    core.ns-httpd.14949.uk17.0.0.1218243760
    core.ns-httpd.14949.uk17.0.0.1218243762
    core.ns-httpd.14955.uk17.0.0.1218243765
    core.ns-httpd.14955.uk17.0.0.1218243767
    core.ns-httpd.14977.uk17.0.0.1218243777
    Those files are binary, so I am unable to read them.
    Can anyone please help me with this issue?

    First migrate your server to the latest Web Server 7.0 update 3
    http://www.sun.com/software/products/web_srvr/index.xml
    https://cds.sun.com/is-bin/INTERSHOP.enfinity/WFS/CDS-CDS_SMI-Site/en_US/-/USD/ViewProductDetail-Start?ProductRef=SJWS-7.0U3-OTH-G-F@CDS-CDS_SMI
    You are getting those messages because gethostbyaddr(192.168.245.92) is failing. Try writing a small C program which calls gethostbyaddr(192.168.245.92) and see if it's an OS issue.
    If you are on Solaris 10, try to see why Web Server is dumping core with:
    mdb core.pid
    ::stack
    Are you sure you have the patch levels recommended in the release notes of Web Server?
    Have you enabled IPv6? Since when are you seeing these core dumps?
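
    A quick shell stand-in for the small C test suggested above, plus the core inspection (the core name is taken from the listing above):
    getent hosts 192.168.245.92    # performs the same reverse lookup as gethostbyaddr()
    nslookup 192.168.245.92        # compares against the resolver directly
    mdb core.ns-httpd.14922.uk17.0.0.1217950925
    > ::status
    > ::stack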

  • CMS crash with core files and multiple report output generation

    Happy new year to everyone,
    Our BOXIR3.1SP6FP2 environment has recently started behaving weirdly: scheduled reports trigger multiple outputs to users' inboxes and multiple email notifications. We have also noticed the CMS crashing and generating a core file (almost 4 GB) at the time of the multiple report output.
    Most of the time, the CMC crashes and recycles itself; a few times the CMS service alone shut down.
    Environment details: RHEL 5.5, 32 GB RAM, 8-core processor on each clustered node; Oracle 10gR2.4 CMS DB server, 11gR2.4 Oracle reporting DB server, and Oracle 11.1.0.6 client.
    2015/01/21 23:54:37.946|>=| | |28123|1534131088|{|||||||||||||||DBQueue::Read
    2015/01/21 23:54:37.946|==| | |28123|1496185744| |||||||||||||||(OracleStatement.cpp:156) Prepare: SQL: SELECT ObjectID, Version, LastModifyTime, CRC, Properties FROM CMS_InfoObjects6 WHERE ObjectID IN (1004050) ORDER BY ObjectID
    2015/01/21 23:54:37.946|==| | |28123|1496185744| ||||||||||||||(OracleStatement.cpp:183) Prepared statement Execute
    2015/01/21 23:54:37.965|==| | |28123|1496451984| |||||||||||||||SResourceSource::LoadString 50293
    2015/01/21 23:54:37.966|==| | |28123|1496451984| |||||||||||||||SResourceSource::LoadString Unknown exception in database thread
    2015/01/21 23:54:37.967|==| | |28123|1496451984| |||||||||||||||SResourceSource::LoadString 33007
    2015/01/21 23:54:37.967|==| | |28123|1496451984| |||||||||||||||SResourceSource::LoadString CMS is unstable and will shut down immediately. Reason: %1...
    2015/01/21 23:54:38.506|==| | |28123|1496185744| |||||||||||||||(OracleStatement.cpp:156) Prepare: SQL: SELECT ObjectID, Version, LastModifyTime, CRC, Properties FROM CMS_InfoObjects6 WHERE ObjectID IN (1009213) ORDER BY ObjectID
    2015/01/21 23:54:38.506|==| | |28123|1496185744| |||||||||||||||(OracleStatement.cpp:183) Prepared statement Execute
    2015/01/21 23:54:38.512|==| | |28123|1455592672| |||||||||||||||(sidaemon.cpp:549) SUNIXDaemon::run: server restart flag is 1..
    2015/01/21 23:54:38.513|==| | |28123|1455592672| |||||||||||||||(sidaemon.cpp:552) SUNIXDaemon::run: in abort ...
    2015/01/21 23:54:38.513|==| | |28123|1455592672| |||||||||||||||(sidaemon.cpp:555) SUNIXDaemon::run: doing the WithAbort case ...
    2015/01/21 23:54:38.520|==| | |28123|1496185744| |||||||||||||||(dbq.cpp:1357) DBQ: Time required to read 1 objects: 20.000000 ms
    Thank you,
    Karthik

    Hi Denis,
    I have been trying my best for the last few weeks to understand the core issue along with SAP; however, it is still a mystery.
    >ulimit -a
    core file size          (blocks, -c) 0
    data seg size           (kbytes, -d) unlimited
    scheduling priority             (-e) 0
    file size               (blocks, -f) unlimited
    pending signals                 (-i) 270335
    max locked memory       (kbytes, -l) 32
    max memory size         (kbytes, -m) unlimited
    open files                      (-n) 1024
    pipe size            (512 bytes, -p) 8
    POSIX message queues     (bytes, -q) 819200
    real-time priority              (-r) 0
    stack size              (kbytes, -s) 10240
    cpu time               (seconds, -t) unlimited
    max user processes              (-u) 270335
    virtual memory          (kbytes, -v) unlimited
    file locks                      (-x) unlimited
    Below are the observations from troubleshooting so far:
    1. The CMS breaks at a threshold of 3.9 GB.
    2. The CMS DB sits on a different Linux server from the BOE server.
    3. All core files were generated by the boe_cmsd process and are almost 4 GB in size (the same as the maximum threshold at which it breaks).
    4. A shell script which I've added on the BOE servers shows that the CMS DB is available and accepting connections at the time of the CMS crash.
    5. SAP analysed the core files and is skeptical about the lines below:
         #3  0x58687b80 in skgesigCrash ()
          from /opt/oracle/product/11.1.0/client_1/lib32/libclntsh.so
         #4  0x58687e0d in skgesig_sigactionHandler ()
    I'll continue troubleshooting in the hope of fixing it as soon as possible.
    Thanks,
    Karthik
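
    One more check worth adding to the list (a sketch; the path lookup and core name are assumptions): if boe_cmsd is a 32-bit process, the ~3.9 GB ceiling and the ~4 GB cores would be consistent with it simply exhausting its address space.
    file $(which boe_cmsd)       # reports ELF 32-bit vs 64-bit
    file core.28123              # confirms which binary wrote the core
    gdb $(which boe_cmsd) core.28123 -ex bt -ex quit   # prints the crashing stack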

  • Possible to get data from a partly optimized/stripped core file?

    Hello,
    This may not be possible, but I figured it was worth asking about.
    I've got a C/C++ GUI application compiled with Solaris Studio 12.3 that is experiencing an infrequent crash when compiled for production and running on production boxes.  This is on Solaris 10 for x86 running in 64-bit mode.  Most of the app is in libraries which are statically linked.
    I am working on trying to replicate the issue in a development environment, but have not had luck yet. In any case, it would be interesting to know what kind of data can be gleaned postmortem from the core file I've got access to.
    The application is actually a small "main.c" file which is compiled and linked in debug mode with "-g" and no optimization, but this thin wrapper calls into the main logic in statically linked libraries which are optimized and not built in debug mode.  (See the call stack below.)
    From the core file :
    1) For functions in the call stack that have names, can I get the value of one of the parameters?  I ask because several such functions take pointers to structs with data that should be very useful.
    2) For functions in the call stack that appear as ??????, is it possible to determine at least what .o or .a file they came from?  This could help narrow things down.
    Some basic Googling indicates that either of the above may not be trivial or even possible.  But I'm wondering if the fact that we've got a "main.c" debuggable wrapper might somehow help.
    As a related question, pstack produces sensible output, but dbx shows the error: "dbx: internal error: could not iterate over load objects -- link-maps are not initialized".  Is there some flag I need to supply to dbx?
    Thank you for any help,
    David
    Background info:
    I've been unable to replicate on non-production deployments, but the machines do differ a bit.   Eventually I will be able to borrow a production box to deploy an instrumented binary, but for now all I've got is a core file and access to source.
    The core was generated with gcore while the app was displaying a popup from its SIGABRT cleanup handler.   The production build scripts do some binary stripping, but I'm not yet sure where it is getting done.
    Here is the (slightly cleaned up) output of pstack for the core file:
    fffffd7ffeb3244a nanosleep (fffffd7fffdfd4b0, 0)
    0000000000514485 ZWidget_ModalEventLoop () + 65
    00000000004f74a9 ZWidget_ShowPopup () + 4a9
    000000000049d2ab ???????? ()
    fffffd7ffeb2dd16 __sighndlr () + 6
    fffffd7ffeb225e2 call_user_handler () + 252
    fffffd7ffeb2280e sigacthandler (6, 0, fffffd7fffdfd640) + ee
    --- called from signal handler with signal 6 (SIGABRT) ---
    fffffd7ffeb3351a _lwp_kill () + a
    fffffd7ffead81b9 raise () + 19
    fffffd7ffeab6b4e abort () + 5e
    000000000052c3bc ZUtil_Query () + 3c
    000000000059b66e ZUtil_QueryString () + 3e
    00000000004a1e2a ???????? ()
    00000000004a0879 ???????? ()
    000000000058b303 ???????? ()
    000000000052d517 ZUtil_Set () + 767
    00000000004f4805 ZUtil_DBSet () + 35
    00000000005094b5 ZWidget_ProcessCallback () + 465
    0000000000516814 ???????? ()
    fffffd7fff242424 XtCallCallbackList () + 114
    fffffd7ffef84d2e ActivateCommon () + 126
    fffffd7ffef84b72 Activate () + 1e
    fffffd7fff244efa HandleActions () + 14a
    fffffd7fff24b1b7 HandleComplexState () + 177
    fffffd7fff243a9e _XtTranslateEvent () + 4e
    fffffd7fff24382a XtDispatchEventToWidget () + 2ea
    fffffd7fff2430ee _XtDefaultDispatcher () + 15e
    fffffd7fff242db6 XtDispatchEvent () + 106
    00000000005142df ZWidget_ProcessEvent () + ff
    0000000000514099 ZWidget_ProcessEvents () + 19
    00000000005ac67a ZEventLoop_ProcessEvents () + 5a
    00000000005ac528 ZEventLoop_Execute () + 48
    000000000049d133 Main () + c93
    000000000049bdf9 main () + 9
    000000000049bc7b ???????? ()

    Thanks for reporting this problem.
    >1) For functions in the call stack that have names, can I get the value of one of the parameters?  I ask because several such functions take pointers to structs with data that should be very useful.
    Use compiler option -preserve_argvalues={none|simple|complete} to preserve incoming argument values. Note that this feature was introduced in Oracle Solaris Studio 12.4.
    You may also be interested in a new option in Oracle Solaris Studio 12.4 which provides much finer-grained control over debug information, which allows you to choose how much information is provided and to reduce the amount of disk space needed for the executable. Dev Tip: How to Get Finer-Grained Control of Debugging Information.
    >2) For functions in the call stack that appear as ??????, is it possible to determine at least what .o or .a file they came from?  This could help narrow things down.
    The following 2 commands may help:
    where -l                   # Include library name with function name.
    whereis -a <addr-of-?????> # Print location of an address expression
    >As a related question, pstack produces sensible output, but dbx shows the error: "dbx: internal error: could not iterate over load objects -- link-maps are not initialized".  Is there some flag I need to supply to dbx?
    This may be caused by corefile mismatch. See dbx online help: "help core mismatch" for suggestions.
    Hope this helps.
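
    Putting those suggestions together, a short dbx sketch (the binary and core names are examples; the address comes from one of the ???????? frames in the pstack output above):
    dbx ./myapp ./core
    (dbx) where -l
    (dbx) whereis -a 0x49d2ab
    (dbx) help core mismatch    # for the link-maps / corefile mismatch error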

  • Core files for GLLEZL in  /var/core

    Hi All,
    We are running 11.5.10.2 with a 10.2.0.3 DB.
    Today we got lots of core files in the /var/core directory:
    core_<server name>GLLEZL201_201_1257203633_25107
    Can you please advise why these were generated and whether there are any issues because of this?
    Please advise.
    Thanks & Regards,
    Rakesh

    Please identify whether any changes were made recently. Core files indicate OS errors (like signal 11, etc.). Please verify that the GLLEZL concurrent program is completing successfully and check whether there are any errors recorded in the log/out files of the GLLEZL concurrent requests. Please also check the database alert log file.
    580120.1 - Program Was Terminated By Signal 11 On GLLEZL After Applying Patch 11I.ATG_PF.H.DELTA.6
    HTH
    Srini
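
    A small sketch of Srini's last point (the alert log path is an assumption for a 10.2 database): check whether the database logged a matching error around the time the cores appeared.
    ls -lt /var/core/core_*GLLEZL* | head -3
    egrep -n "ORA-07445|ORA-00600" $ORACLE_BASE/admin/$ORACLE_SID/bdump/alert_$ORACLE_SID.log | tail -5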

  • BO Core File Analysis

    Hi Experts,
    BO core file analysis:
    I would like to learn how to analyse core files. It happens many times that we get a core generated; we usually run the file command on the core file, then go to dbx and get the output, and we try to find something in the log files for the crashed process or server.
    Is there any detailed guide, document, or tutorial available? I want to understand the core that is written, to find out why it is happening.
    Please advise.
    Regards,
    Neo.

    Expecting something on this .
    Regards,
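
    In the meantime, a sketch of the routine described in the question (the names are examples, not BO-specific paths):
    file core.23456                      # names the crashed server binary
    ls -l core.23456                     # crash time, to line up against the server's logging directory
    dbx <path-to-that-binary> core.23456
    (dbx) where                          # call stack of the crashing thread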

  • Core file in $ORACLE_HOME/dbs

    Hi All,
    Database version: 11.2.0.1
    OS: SunOS with Sun cluster
    Huge core files are being generated in $ORACLE_HOME/dbs, and $ORACLE_HOME reaches 100%.
    Can anyone tell me why the core files are being generated?
    background_core_dump is set to partial.
    Regards,
    Prasanna

    See CORE_DUMP_DEST in the docs. You can change it to somewhere with more room. Also see http://www.orafaq.com/faq/what_should_one_do_with_those_core_files
    If you don't get a hint from the file command or the alert log as to what is causing these, then you have to deal with Oracle support.
    You can also limit core size from the OS side, details depend on OS version.
    background_core_dump = partial means the SGA is not dumped with background-process core dumps. Are background processes dumping?
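
    A sketch of the two suggestions above (the target directory and the use of an spfile are assumptions): redirect core dumps away from $ORACLE_HOME/dbs and cap their size at the OS level.
    echo "ALTER SYSTEM SET core_dump_dest='/u01/app/oracle/admin/ORCL/cdump' SCOPE=BOTH;" | sqlplus -s / as sysdba
    ulimit -c 0    # in the shell that starts the instance; suppresses OS core files entirely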

  • Large Core file

    Hello,
    In the home of my domain (/oracle/product/ias10g/j2ee/home) I found a core file sized 1.2 GB.
    It looks like a memory dump file; it is full of binary data.
    Can you please let me know exactly what this file is and when/how it is generated? Or maybe point me to another resource.
    Is there a way for me to configure the generation of this file?
    Thanks

    Hi Abhishek,
    Can you explain more?
    OS?
    DB?
    SAP Version?
    Is the SAP folder /usr/sap/SID/DVEBMGS00/work/ full and creating the issue?
    Regards,
    V Srinivasan
