Javacore files are generated in the Oracle home

Hi all,
I have a problem with Web Enterprise Manager. It is resolved when I shut down and then restart the system, but the issue is that it generates javacore files in the Oracle home. Should these files be taken seriously, and why are they generated? I am using Oracle 10gR3.
Regards

Hi,
My application is also generating .phd heapdump files due to an out-of-memory error. Did you get any fix for that? I don't want these files to be generated; how can I stop them from being generated?
Please suggest a fix as soon as possible.
Thanks..
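For what it's worth, javacore and .phd heapdump files are produced by the IBM JVM's dump agents, so (assuming an IBM JVM, which is what writes these file types) the -Xdump options are the usual way to control them. A minimal sketch, with an illustrative target path; note that suppressing the dumps only hides the underlying OutOfMemoryError, it does not cure it:
java -Xdump:what -version       # list the dump agents currently configured
-Xdump:heap:none                # JVM option: do not write heapdump (.phd) files
-Xdump:java:none                # JVM option: do not write javacore files
-Xdump:heap:defaults:file=/var/dumps/heapdump.%Y%m%d.%H%M%S.%pid.phd   # or just move them out of the Oracle home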

Similar Messages

  • Images and js files within a deployed war file are generating 404 errors.

    Greetings,
    I have an odd situation in that image files and js files that are part of a deployed war file are generating 404 (not found) errors when accessed via https but ARE found when accessed via http.
    Unfortunately we are required to use https.
I have verified via jar -tf that the files are indeed part of the war file, so it is not that they are missing; when accessed via http they appear as expected.
A workaround is to create the subdirectories under the document root on the OHS and populate them with the images and js files that were used as part of the war file build. While this works, it doesn't explain why they would generate a 404 error when referenced from within the war via https.
    The war file works correctly on our 10g installation.
I also have a very simple deployed war file, and it too seems to have an issue finding directories/files that are part of the war file when referenced via https but not via http.
We are using Oracle 11g OHS and the WLS Admin Console to do the deploying. We are also using OSSO to perform the required authentication.
I have an SR open with Oracle and have been working with them, but I thought I would post here too.
    Suggestions?
    Thanks in advance.
    Edited by: emmett on Jan 5, 2011 2:47 PM

    Don't crosspost. Continue here: http://forum.java.sun.com/thread.jspa?threadID=5251627
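    A quick way to narrow down the https-only 404s described above (a diagnostic sketch; the hostnames, ports, and /myapp path are hypothetical, and the log location assumes a default OHS 11g instance):
    # Compare how OHS answers for the same static resource over http and https
    curl -o /dev/null -s -w "%{http_code}\n" http://ohs.example.com:7777/myapp/images/logo.gif
    curl -k -o /dev/null -s -w "%{http_code}\n" https://ohs.example.com:4443/myapp/images/logo.gif
    # Then check which virtual host handled the request and whether the 404 came
    # from the OHS document root ("File does not exist: ...") or from WebLogic
    tail -f $ORACLE_INSTANCE/diagnostics/logs/OHS/ohs1/access_log
    tail -f $ORACLE_INSTANCE/diagnostics/logs/OHS/ohs1/*.log | grep "File does not exist"
    If the https request is answered from the OHS document root instead of being proxied to WebLogic, the SSL virtual host is probably missing the mod_wl_ohs mappings that the non-SSL host has.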

OWB 10g: how are control files generated?

We are using OWB 10g within a 10g database. We want to know how control files are generated by OWB in the file system. The reason we need to know is that our DBAs do not want to create directory objects pointing to NAS devices; their policy states that all directory objects should be on SAN shares. We would rather use NAS shares since that simplifies our batch (too long to explain here). OWB has the "CREATE ANY DIRECTORY" privilege granted. Is it using this privilege to create a directory object for the path where we specify the control file in the mapping, or is it writing directly to this path? We checked the directory objects created (SELECT * FROM DBA_DIRECTORIES) after deploying a control file and it didn't seem to have created any new ones. Does anyone know how OWB creates these control files?

Yes, indeed that's what we were after. We basically wanted to be sure we are not breaking an internal policy that says that "Oracle directory objects can not be located on NAS shares". The reasoning behind this policy is that NAS shares are not deemed highly available or high-I/O devices, hence our Oracle DBAs will not allow us to create any Oracle directory objects on NAS shares. The policy states that all database data should be stored on SAN shares, which are directly attached to the servers and are therefore high-I/O devices. It is arguable whether the OWB data we want to load is really part of the database; we believe it is not. There are other implications in our environment of using NAS instead of SAN (NAS can run in active-active mode across different data centres, whereas SAN requires replication since it doesn't usually work well in an active-active mode across different data centres). So based on your answer we should be fine, since OWB reads and writes the files directly without using Oracle directory objects, which supports our theory that these are not DB-specific files but only "OWB app" files, which can then sit on a NAS without breaking the above-stated policy.
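    To double-check the behaviour described above, the before/after comparison can be scripted (a sketch; the grantee name is a placeholder for your OWB runtime/repository owner):
    -- Run before and after deploying a control file: no new rows should appear
    SELECT owner, directory_name, directory_path FROM dba_directories ORDER BY directory_name;
    -- Confirm which system privileges the OWB user actually holds
    SELECT privilege FROM dba_sys_privs WHERE grantee = 'OWB_RUNTIME_USER';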

Very large HEAPDUMP files are generated when executing BI Web reports (NW 7.0)

    Dear Gurus,
I'm facing a new problem.
When a few users are working in the Portal executing BI Web reports and queries, the system stops and big files are generated in the directory /usr/sap/BWQ/DVEBMGS42/j2ee/cluster/server0
I'm using AIX 5.3. The files are these:
    2354248 Sep 29 12:31 Snap0001.20080929.153102.766064.trc
    1028628480 Sep 29 12:32 heapdump.20080929.153102.766064.phd
    0 Sep 29 12:32 javacore.20080929.153102.766064.txt
I have been searching for a solution in SAP Help and notes. I've read a lot of notes:
    SAP Note 1030279 - Reports with very large result sets-BI Java
    SAP Note 1053495 - Settings to get a heapdump with IBM JVM on AIX
    SAP Note 1008619 - java.lang.OutOfMemoryError in BEx Web Applications
SAP Note 1127156 - Safety belt: Result set is too large
    SAP Note 723909 - Java VM settings for J2EE
    SAP Note 1150242 - Improving performance/memory in the BEX Analyzer
    SAP Note 950602 - Performance problems when you start a query in Java Web
    SAP Note 1021517 - NW 2004s BI Web memory optimization for large analysis item
    SAP Note 1044330 - Java parameterization for BI systems
    SAP Note 1025307 - Composite note for NW2004s performance: Reporting
    But still not having found an answer to solve this problem.
In note 1030279 it is written:
"We will provide an optimization of the memory requirement in the next Support Package Stack. With this optimization, you can display a report as 'stateless', so that the system can then immediately release the memory that is required to set up the result set."
I'm using Support Package Stack 15 for ABAP and Java, but I don't have more information about this problem or the stateless function in any other note, and I don't know how I can use this STATELESS function in BI.
Does anybody have an idea how to solve this problem?
    Thanks a lot,
    Carlos

    Hi,
Heap dumps are generated when there is an imbalance in the Java VM parameterization.
Also, please remove the parameter "-XX:+HeapDumpOnOutOfMemoryError" in the Config Tool, so that heap dumps will not be generated and fill up the disk space.
My advice is to send the heap dumps to SAP for recommendations. Meanwhile, check the SAP notes for Java VM recommendations.
    Regards
    Thilip Kumar
    Edited by: Thilip Kumar on Sep 30, 2008 5:58 PM
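    Until the memory problem itself is solved, simple housekeeping keeps the dumps from filling the filesystem (a sketch using the paths from the post above; the 7-day retention is just an example, and dumps should only be removed once SAP has the copies it needs):
    cd /usr/sap/BWQ/DVEBMGS42/j2ee/cluster/server0
    du -sk heapdump*.phd javacore*.txt Snap*.trc 2>/dev/null   # how much space do the dump files use?
    find . \( -name 'heapdump*.phd' -o -name 'javacore*.txt' -o -name 'Snap*.trc' \) -mtime +7 -print
    # ...and once verified, append -exec rm {} \; to the find command above to clean them up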

Numerous trace files are generated every minute, causing a space issue

    Hi All,
Numerous trace files are being generated every minute in <SID>_<PID>_APPSPERF01.trc format.
The entries in a trace file look like this:
    EXEC #10:c=0,e=0,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=1734896627,tim=1339571764486430
    WAIT #10: nam='SQL*Net message to client' ela= 6 driver id=1952673792 #bytes=1 p3=0 obj#=34562 tim=1339571764491273
    FETCH #10:c=0,e=0,p=0,cr=2,cu=0,mis=0,r=1,dep=0,og=1,plh=1734896627,tim=1339571764486430
    WAIT #10: nam='SQL*Net message from client' ela= 277 driver id=1952673792 #bytes=1 p3=0 obj#=34562 tim=1339571764491806
    EXEC #11:c=0,e=0,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=2638510909,tim=1339571764486430
    FETCH #11:c=0,e=0,p=0,cr=9,cu=0,mis=0,r=0,dep=0,og=1,plh=2638510909,tim=1339571764486430
    WAIT #11: nam='SQL*Net message to client' ela= 6 driver id=1952673792 #bytes=1 p3=0 obj#=34562 tim=1339571764493265
    *** 2012-06-13 03:16:14.496
    WAIT #11: nam='SQL*Net message from client' ela= 10003326 driver id=1952673792 #bytes=1 p3=0 obj#=34562 tim=1339571774496705
    BINDS #10:
    Bind#0
    oacdty=01 mxl=32(21) mxlc=00 mal=00 scl=00 pre=00
    oacflg=00 fl2=1000001 frm=01 csi=871 siz=2064 off=0
    kxsbbbfp=2b8ec799df38 bln=32 avl=03 flg=05
    value="535"
    Bind#1
    oacdty=01 mxl=32(21) mxlc=00 mal=00 scl=00 pre=00
    oacflg=00 fl2=1000001 frm=01 csi=871 siz=0 off=32
    kxsbbbfp=2b8ec799df58 bln=32 avl=04 flg=01
    value="1003"
    SQL> show parameter trace
    NAME TYPE VALUE
    tracefiles_public boolean TRUE
    log_archive_trace integer 0
    sec_protocol_error_trace_action string TRACE
    sql_trace boolean FALSE
    trace_enabled boolean TRUE
    tracefile_identifier string
    Profile options like "FND:Debug Log Enabled" and "Utilities:SQL Trace" are set to No
Can someone help me stop this trace generation?
Is there any way to find the cause of these traces?
    Thanks in adv...

    Hi;
Please check who enabled the trace. Please see:
How to audit users who enabled traces?
Check the concurrent programs first, from the screen:
Concurrent > Program > Define
Open the form, press F11 (query mode), select the trace option, then press Ctrl+F11. This should return all concurrent programs which have trace enabled.
Regards
    Helios
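    The same check can also be done in SQL rather than through the form (a sketch, assuming Oracle E-Business Suite; verify the view and column names for your release):
    -- Concurrent programs that currently have tracing enabled
    SELECT concurrent_program_name, user_concurrent_program_name
      FROM fnd_concurrent_programs_vl
     WHERE enable_trace = 'Y';
    -- Traces enabled on the database side via DBMS_MONITOR
    SELECT trace_type, primary_id, waits, binds FROM dba_enabled_traces;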

Offline redolog files are deleted in Oracle 11

    Hi
We recently upgraded our Oracle release from Oracle 10.2.0.2 to Oracle Database 11g Release 2.
I was running the table compression process with BRSPACE 7.20 using the command:
brspace -u / -p /oracle/LUK/112_64/dbs/reorgEXCL.tab -f tbreorg -a reorg -o SAPR3 -s PSAPLUK -t allsel -n PSAPLUKCOMP -c ctab -SCT
The process finished OK, but I have a problem with the offline redologs: they are generated successfully but are deleted after a short time and disappear from the filesystem. I do not understand this behavior; the deletion of the archive logs is not a deliberate configuration.
If I execute the archive log backup, I always get a lot of warnings about redolog files not found:
    Offline redolog file '/oracle/XXX/saparch/XXXarch1_11155_666372933.dbf' not found
After the compression process, the offline redolog files are being deleted too.
I need to know the reason for this behavior; could it be a wrong parameter in our database?
Is this a normal situation with Oracle 11?
What happens to the database and the backup/restore strategy without offline redologs?
Next week I need to upgrade the Oracle release in our productive system; in that system the archive and database backups are very important, and I need to solve or fix this issue first.
Please help me to solve this problem.
Thanks in advance,
    OMM

    Hi,
    Offline redolog file '/oracle/XXX/saparch/XXXarch1_11155_666372933.dbf' not found
Such messages are only seen if someone has deleted or moved the reported offline redo log files from the archive log destination.
Please check whether someone has moved or deleted the offline redo log files. It may have been done to free up space in the archive log destination, as the reorganization/compression activities generate a lot of archive logs.
Also, try to perform the same activity again and monitor the archive log destination directory. I think such behavior ("offline redo log files deleted after a table compression activity done by BRSPACE") is not possible at all.
What happens to the database and the backup/restore strategy without offline redologs?
Offline redo log files are required for the database media recovery process. Without them, only point-in-time recovery is possible.
    Regards,
    Bhavik G. Shroff
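    A quick way to see what the database itself thinks happened to the archived logs, and whether Oracle (for example RMAN or Fast Recovery Area space management) deleted them rather than an external job (a sketch to run in SQL*Plus):
    show parameter log_archive_dest
    show parameter db_recovery_file_dest
    -- Which archived logs does the controlfile still know about, and were they deleted?
    SELECT sequence#, name, deleted, status, completion_time
      FROM v$archived_log
     ORDER BY sequence#;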

Lots of core files are generated on the web server machine

We have an iPlanet web server running in our production environment. It is creating a lot of entries in the file /var/adm/messages:
    Aug 11 13:00:01 uk17 sendmail[1449]: [ID 702911 mail.warning] gethostbyaddr(192.168.245.62) failed: 1
    Aug 11 13:00:01 uk17 sendmail[1449]: [ID 702911 mail.warning] gethostbyaddr(192.168.245.64) failed: 1
    Aug 11 13:00:01 uk17 sendmail[1449]: [ID 702911 mail.warning] gethostbyaddr(192.168.245.84) failed: 1
    Aug 11 13:00:01 uk17 sendmail[1449]: [ID 702911 mail.warning] gethostbyaddr(192.168.245.88) failed: 1
    Aug 11 13:00:01 uk17 sendmail[1449]: [ID 702911 mail.warning] gethostbyaddr(192.168.245.90) failed: 1
    Aug 11 13:00:01 uk17 sendmail[1449]: [ID 702911 mail.warning] gethostbyaddr(192.168.245.92) failed: 1
    Aug 11 13:00:01 uk17 sendmail[1449]: [ID 702911 mail.warning] gethostbyaddr(192.168.245.94) failed: 1
    Aug 11 13:00:01 uk17 sendmail[1449]: [ID 702911 mail.warning] gethostbyaddr(192.168.245.106) failed: 1
    Aug 11 13:00:01 uk17 sendmail[1449]: [ID 702911 mail.warning] gethostbyaddr(192.168.245.108) failed: 1
    Aug 11 13:00:01 uk17 sendmail[1449]: [ID 702911 mail.warning] gethostbyaddr(192.168.245.110) failed: 1
It is also creating a lot of core files in /var/core/ with names like:
core.ns-httpd.14156.uk17.0.0.1218243851
    core.ns-httpd.14922.uk17.0.0.1217950925
    core.ns-httpd.14922.uk17.0.0.1217950926
    core.ns-httpd.14937.uk17.0.0.1218243696
    core.ns-httpd.14937.uk17.0.0.1218243697
    core.ns-httpd.14949.uk17.0.0.1218243760
    core.ns-httpd.14949.uk17.0.0.1218243762
    core.ns-httpd.14955.uk17.0.0.1218243765
    core.ns-httpd.14955.uk17.0.0.1218243767
    core.ns-httpd.14977.uk17.0.0.1218243777
Those files are binary, so I am unable to read them.
Please can anyone help me with this issue?

    First migrate your server to the latest Web Server 7.0 update 3
    http://www.sun.com/software/products/web_srvr/index.xml
    https://cds.sun.com/is-bin/INTERSHOP.enfinity/WFS/CDS-CDS_SMI-Site/en_US/-/USD/ViewProductDetail-Start?ProductRef=SJWS-7.0U3-OTH-G-F@CDS-CDS_SMI
You are getting those messages because gethostbyaddr(192.168.245.92) is failing. Try writing a small C program which calls gethostbyaddr(192.168.245.92) and see if it's an OS issue.
If you are on Solaris 10, try to see why the Web Server is dumping core by running:
mdb core.pid
::stack
Are you sure you have the patch levels recommended in the release notes of Web Server?
Have you enabled IPv6? Since when have you been seeing these core dumps?
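    Before opening a support case, a few standard Solaris commands narrow this down (a sketch; the core file name is copied from the listing above and the IP from the sendmail warnings):
    getent hosts 192.168.245.62          # does reverse lookup work at all?
    nslookup 192.168.245.62
    file /var/core/core.ns-httpd.14922.uk17.0.0.1217950925     # which binary dumped core?
    pstack /var/core/core.ns-httpd.14922.uk17.0.0.1217950925   # quick stack trace
    echo ::stack | mdb /var/core/core.ns-httpd.14922.uk17.0.0.1217950925
    coreadm                              # where and how core files are being captured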

  • How to proceed further once the explain plan and trace files are generated?

    Hi Friends,
I need to improve the performance of one of the views that I am working on.
As suggested in the thread http://forums.oracle.com/forums/thread.jspa?threadID=863295&tstart=0 , I have generated the explain plan and the trace file.
From the explain plan, we can see the expensive operations for the query.
Can anyone please tell me how to proceed from here, i.e. how to make these expensive operations less expensive?
For example, a FULL TABLE SCAN might be an expensive operation when the table has indexes. In such cases, how can we avoid such operations to make the query faster?
    Regards,
    Sreekanth Munagala.

    Hi Veena,
    An earlier post by you regarding P45 is as below
    Starter report P45(3) / P46 efiling for UK
From my understanding (though I have not worked on GB Payroll): you have said that you deleted the IT 65 details of the leaver; however, there must be clusters generated in the system from which the earlier data needs to be deleted, and maybe that is why you are facing the issue.
In Indian Payroll, when we execute the text file for e-filing of tax after challan mapping, all the data compiles and sits in the PCL cluster, and therefore we are unable to generate Form 16 with proper output; here we delete the clusters, rerun the mappings, and then check Form 16.
Hope this might help you. Experts have advised you earlier as well; they may correct me on this.
    Salil
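    Going back to the original tuning question: the usual next steps after generating the plan are to read it with DBMS_XPLAN, make sure the optimizer statistics are current, and only then consider an index for the predicate behind the full scan (a generic sketch; schema, table, and column names are placeholders, not taken from the poster's view):
    EXPLAIN PLAN FOR
      SELECT ...;                      -- the problem query/view goes here
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    EXEC DBMS_STATS.GATHER_TABLE_STATS('MY_SCHEMA', 'MY_TABLE', cascade => TRUE);
    -- If the full scan is driven by a selective filter column, an index may help
    CREATE INDEX my_table_col_idx ON my_table (filter_column);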

PDF files are generated at update

    Hello all,
I'm new to RH. When generating the help file (the first time around or during subsequent updates), RH generates a PDF file for each page of my manual. Can someone tell me where this can be disabled? Thanks so much.

    Hmmm, apologies as I'm lost on exactly what you are doing. Are you familiar with Jing or ScreenR?
    http://www.techsmith.com/jing.html
    http://www.screenr.com/
    These apps allow you to record what you see on the screen and provide a link that allows others to watch the recording. I suggest these because I'm thinking in your case it might be beneficial for us to see first hand what you are seeing.
    I promise that I'm not trying to be intentionally obtuse here.
    In RoboHelp, I know we can establish links between Microsoft Word documents and Adobe Framemaker documents and update those links when the source content changes so it updates accordingly in RoboHelp. But I was unaware the same could be done with PDF. I'm also confused about your mention of seeing a PDF file for each "Book", so hopefully by showing us what is happening it will make more sense on our end.
    Cheers... Rick
    Helpful and Handy Links
    RoboHelp Wish Form/Bug Reporting Form
    Begin learning RoboHelp HTML 7, 8 or 9 within the day!
    Adobe Certified RoboHelp HTML Training
    SorcerStone Blog
    RoboHelp eBooks

  • Exported CSV files are locked by Oracle SQL Developer 2.1.0.63

    Whenever I export my query result into a CSV file, the resulting file is always locked for editing by Oracle SQL Developer 2.1.0.63.
Has anybody encountered the same problem? Do you know how I can report this as a bug to Oracle?
    Thanks,
    Adrian Wijasa
    Banner Programmer/Analyst
    College of St. Benedict/St. John's University MN
    [email protected]

I got it too: if I leave SQL Developer open and then go to Excel to edit my CSV, I get the message 'table_export.csv is locked for editing by another user'. If I close SQL Developer and then open the file, it works. It seems SQL Developer is holding onto the file after creation.
Bugs are reported in this forum.
    Evita
    Edited by: Evita on Jan 8, 2010 12:53 PM

Can we display associated functions when files are viewed via Oracle Drive?

We are on Oracle Portal 10.1.4 and have customised it extensively. We have certain functionality available to users on the files; these functions are provided using associated functions. The user community commonly uses WebDAV as an easier interface to upload and update files on the Portal, which is a drawback for us, since it bypasses the custom functionality we provide via associated functions. Can you please help with how we can enable the associated functions to be viewed via Oracle Drive or WebDAV?

    Hi Jonathan. You've done a commendable job of isolating the problem. I'm not aware of any problem with a middle initial (and we have many users with a middle initial using the client here).
    Please try again with WebFolders to verify that your colleague can get in correctly, but can't through Oracle Drive.
    The Oracle Drive should be using the exact same mechanism.
    If it doesn't work with WebFolders, then the user should reset their password, which should regenerate the DAV verifiers.
If it works with WebFolders but not with Oracle Drive, you'll have to look at the log file in
C:\Documents and Settings\<username>\Application Data\Oracle\ODrive\cache\logs and see exactly what is sent to the server and what error message comes back.
    Send that along to us, and we can take a look.

How are media files stored in an Oracle 10g database?

I guess they have introduced new datatypes to handle multimedia objects (audio files, video files, images, etc.). Can anyone tell me which datatype is used to handle media files in an Oracle 10g database?
    thanks,
    shekar.

    Check this out.
    http://download-west.oracle.com/docs/cd/B14117_01/appdev.101/b10840/mm_uses.htm#sthref433
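    For reference, 10g handles media either as plain LOB columns or through the Oracle interMedia (ORDSYS) object types described in that link. A minimal sketch of a table using both, assuming interMedia is installed; the table and column names are illustrative:
    CREATE TABLE media_assets (
      id        NUMBER PRIMARY KEY,
      raw_bytes BLOB,                -- generic binary content stored in the database
      ext_file  BFILE,               -- file kept outside the database
      image     ORDSYS.ORDIMAGE,     -- interMedia image type
      audio     ORDSYS.ORDAUDIO,
      video     ORDSYS.ORDVIDEO
    );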

All the secondary database files are generated with .mdf extensions

    Hi,
I have set up a DR server with a SQL Server database. The problem is that when it creates the new database on the DR server, all the secondary files, which have the .ndf extension on the primary server, get created as .mdf files instead of .ndf files.
Can anybody tell me how to solve this problem?
Will it create any problem when the secondary (DR server) acts as the primary server in case of failure?

    Hello,
    you can easily rename the files by following this procedure (see note 151603 for more details):
    - detach the database from the server
    - rename the files to the naming convention you like
    - attach the database with the new names again
I do not know exactly what you mean by DR server, but I assume that you mean
log shipping or database mirroring. Neither is affected by the different filenames, as they use only the internal database name (e.g. PRD).
    Regards
      Clas
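    Clas's detach/rename/attach procedure written out as T-SQL (a sketch; the database name, drive letters, and file names are placeholders, and users must be disconnected while this runs):
    EXEC sp_detach_db @dbname = N'PRD';
    -- rename the physical files at operating system level, e.g. PRD_data2.mdf -> PRD_data2.ndf
    CREATE DATABASE PRD
    ON (FILENAME = N'E:\PRDDATA1\PRD_data1.mdf'),
       (FILENAME = N'E:\PRDDATA2\PRD_data2.ndf'),
       (FILENAME = N'F:\PRDLOG\PRD_log1.ldf')
    FOR ATTACH;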

  • Photoshop Elements 7 Crashes When Peak Files Are Generated

I have a PC with an 80 GB HD, a Pentium 4 and 2 GB (400 MHz) RAM, running Windows XP. PSE7 generates peak files each time a project is opened. When the blue bar at the bottom right of the screen reaches 100%, Elements 7 freezes and must be shut down using Windows Task Manager.
I have copied the file from the 340 GB external HD back to the C drive (the location of the Adobe Photoshop Elements 7 program), and this has not resolved the problem.
I note that this appears to have been a common problem with Adobe since 2006 (earlier versions of Photoshop).
Can anyone help?
    Regards
    Ozyjohn

    Elements User
I have taken a screenshot, but the forum site will not allow the upload of a 1.8 MB image even though the initial message states "Image will be scaled to fit 2mb limit". I have edited the shot to show only the bottom right of the screen (700 KB) and scanned it to PDF (895 KB), but it still will not accept the images!
In the PRE 7 application, the message beside the blue progress bar is "Generating Peak Files For ......". When the progress bar reaches 100%, the program freezes.
Any suggestions?
    Regards
    Ozyjohn

Oracle reading binary files from the other Oracle home

    Hi all,
I have two Oracle databases installed on the same server, but in different Oracle homes.
It seems to me that the second Oracle instance is reading the binaries from the other installation.
I can start this database and the SAP instance, and it starts from the correct Oracle home, but I see that it is reading files in the first Oracle home.
For example, it saves the spfile under the first oracle_home\database directory. I started it using pfile=<the_correct_ora-home>\database\initSID.ora and set the correct path for the spfile in this init.ora file.
But even after this, if I change any parameter, it still changes the spfile in the wrong <oracle_home>\database.
Is there a way to fix this, i.e. to make the second Oracle instance see the files under its own Oracle home?
It is Oracle 11g under Windows 2008.
Note that all the environment variables point to the correct Oracle home, and I can start SAP and Oracle on this oracle_home as well.
    Thanks in advance,
    Joao.

    Hi Orkun,
    No, we don't have anything else installed yet.
    Please see what I found out:
Checking those binary errors, I found out that the Oracle service
OracleServiceDPC (the second instance on this server) is being started from the wrong oracle_home path. Please
see the path to the executable in that service's properties:
    g:\oracle\dnc\11202\bin\ORACLE.EXE DPC
It is pointing to the other Oracle home (DNC) but starting instance DPC. How can I change that?
Note that the Oracle home, the listener, etc. correctly point
to the Oracle home G:\oracle\DPC\11202.
I would like to disable this service and create the correct one.
Is that possible?
    thanks and regards,
    Joao
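    Recreating the Windows service so that it points at the right home is normally done with oradim, run from the bin directory of the home the service should use (a sketch; stop the instance first, verify the pfile/spfile location, and confirm the steps with Oracle Support before doing this on a production system; the SID and path come from the post above):
    REM run from G:\oracle\DPC\11202\bin so the new service uses that home
    oradim -delete -sid DPC
    oradim -new -sid DPC -startmode auto
    REM then re-check "Path to executable" in the service properties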

Maybe you are looking for

  • Keeping Page Viewer Web Part Content Exclusive to Each Workspace Instance

    I would like the Page Viewer Web Part to display different content for each instance in the workspace. The Shared Documents list already does this, but for some reason, when I display content in the Page Viewer, it applies to all instances. I'm using

  • Change Data Capture on an existing database

    Hi Readers, I am using the Sql Server CTP5 Developer edition on which i am able to perform CDC only if i start with creating a new database and then load the tables.The complete data change could be captured. But when i implement the same on a existi

  • Line wrapping - JTextPane

    Hi, Does anyone know how to disable line wrapping in a JTextPane? Thanks, Michael

  • IMacG5 won't recognize itouch 4.1

    My iTouch doesn't show up in finder or iTunes when I plug it in. I need to sync my calendars!!!! I have tried changing all the usb cords and ports - they are not the problem. The cords work with the charger and the ports work with other devices. I pl

  • How to calculate the in use percentage of cache ?

    Hi, All, I have two questions here: 1. Can we calculate the proper cache size based on the average key/data pair size and the number of key/data pairs? Is there any formula or something? ( I doubt it :), so next is the second question ) 2. If we set