Very large HEAPDUMP files are generated when executing BI Web reports NW7.0

Dear Gurus,
I'm facing a new problem.
When a few users are working in the Portal executing BI Web reports and queries, the system stops and big files are generated in the directory /usr/sap/BWQ/DVEBMGS42/j2ee/cluster/server0.
I'm using AIX 5.3. The files are these:
2354248 Sep 29 12:31 Snap0001.20080929.153102.766064.trc
1028628480 Sep 29 12:32 heapdump.20080929.153102.766064.phd
0 Sep 29 12:32 javacore.20080929.153102.766064.txt
I have been searching for a solution in SAP Help and the Notes. I've read a lot of notes:
SAP Note 1030279 - Reports with very large result sets-BI Java
SAP Note 1053495 - Settings to get a heapdump with IBM JVM on AIX
SAP Note 1008619 - java.lang.OutOfMemoryError in BEx Web Applications
SAP Note 1127156 - Safety belt: Result set is too large
SAP Note 723909 - Java VM settings for J2EE
SAP Note 1150242 - Improving performance/memory in the BEX Analyzer
SAP Note 950602 - Performance problems when you start a query in Java Web
SAP Note 1021517 - NW 2004s BI Web memory optimization for large analysis item
SAP Note 1044330 - Java parameterization for BI systems
SAP Note 1025307 - Composite note for NW2004s performance: Reporting
But I still haven't found an answer to this problem.
In Note 1030279 it is written:
"We will provide an optimization of the memory requirement in the next Support Package Stack. With this optimization, you can display a report as "stateless", so that the system can then immediately release the memory that is required to set up the result set."
I'm using Support Package Stack 15 for ABAP and Java, but I haven't found more information about this problem or the stateless function in any other note, and I don't know how I can use this STATELESS function in BI.
Does anybody have an idea how to solve this problem?
Thanks a lot,
Carlos

Hi,
Heap dumps are generated when there is an imbalance in the Java VM parameterization.
Also, please remove the parameter "-XX:+HeapDumpOnOutOfMemoryError" in the Config Tool, so that heap dumps will not be generated and fill up the disk space.
My advice is to send the heap dumps to SAP for recommendations. Meanwhile, check the SAP notes for Java VM recommendations.
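Note that depending on the JVM release, that HotSpot-style flag may not be the only switch: on the IBM JVM on AIX, heap dump generation is usually steered through environment variables (SAP Note 1053495 covers the exact settings for your release). Purely as an illustration, assuming the IBM 1.4.2 JVM that NW 7.0 typically runs on, the relevant settings look roughly like this:

   IBM_HEAPDUMP=false         # suppress heapdump.*.phd files on OutOfMemoryError
   IBM_HEAPDUMPDIR=/tmp/dumps # or at least move them off the SAP filesystem
   IBM_JAVACOREDIR=/tmp/dumps # same for the javacore.*.txt files

Keep in mind that suppressing the dumps only saves disk space; the underlying OutOfMemoryError remains, so the sizing recommendations in the Java VM notes above still need to be applied.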
Regards
Thilip Kumar
Edited by: Thilip Kumar on Sep 30, 2008 5:58 PM

Similar Messages

  • Project HTM Files are Blurry when opened in Web Browser

    Hello, I'm hoping someone on this discussion board can help me!
    I have created a video project for work in Adobe Captivate 6 and have published it. I now need to upload this video project onto our company's website.
    In short, here is my problem: when I open the HTM file for my video project from the original file source, it appears normal. However, once I wrote a path to the HTM file in our website's HTML code and went to open the page in my internet browser (I've been using Firefox), the video appears blurry.
    How can this be? Clearly the quality of the HTM file is fine, since it was not blurry when I opened it from the original file source. Can anyone think of a reason why it is blurry when opened in an internet browser? I know this sometimes can happen with the SWF files because internet browsers can alter their dimensions, but I thought using HTM files was supposed to help in avoiding these sorts of problems.
    Any help at all would be appreciated. Thanks!

    Sure.
    Here is a screenshot from one of the older videos:
    And here is a screenshot from my video:
    They are essentially the same screen, but for some reason in my version everything looks far more pixelated. Any thoughts on this?

  • Today, I randomly happened to have less than 1GB of hard drive space left. I found very large "frame" files, what are they?

    I found very large "frame" files, what are they & can I delete them? (See screenshot). I'm a (17 today)-year-old film-maker and can't edit in FCP X anymore because I "don't have enough space". Every time I try to delete one, another identical file creates itself...
    If that can help: I just upgraded to FCP 10.0.4 and every time I launch it, it asks to convert my current projects (I knew it would do it at least once) and I accept, but every time I have to get it done AGAIN. My computer is slower than ever and I have a deadline this Friday.
    I also just upgraded to Mac OS X 10.7.4, and the problem hasn't been here for long, so it may be linked...
    Please help me!
    Alex

    The first thing you should do is to back up your personal data. It is possible that your hard drive is failing. If you are using Time Machine, that part is already done.
    Then, I think it would be easiest to reformat the drive and restore. If you ARE using Time Machine, you can start up from your Leopard installation disc. At the first Installer screen, go up to the menu bar, and from the Utilities menu, first select to run Disk Utility. Completely erase the internal drive using the Erase tab; make sure you have the internal DRIVE (not the volume) selected in the sidebar, and make sure you are NOT erasing your Time Machine drive by mistake. After erasing, quit Disk Utility, and select the command to restore from backup from the same Utilities menu. Using that Time Machine volume restore utility, you can restore it to a time and date immediately before you went on vacation, when things were working.
    If you are not using Time Machine, you can erase and reinstall the OS (after you have backed up your personal data). After restarting from the new installation and installing all the updates using Software Update, you can restore your personal data from the backup you just made.

  • What are the best tools for opening very large XML files and examining the tree and confirming they are valid?

    I am generating some very large XML files (600,000+ lines, 50+ MB). I finally have them all being valid XML and valid UTF-8.
    But the files are so large that Safari and Chrome will often not open them. Firefox will, though.
    Instead of these browsers, I was wondering if there are any other recommended apps for the Mac for opening and viewing the XML, getting an error message if they are not valid for some reason, and examining the XML tree?
    I opened the file in the default app for XML which is Xcode, but that is just like opening it in a plain text editor. You can't expand/collapse the XML tree like you can with a browser, and it doesn't report errors.
    Thanks,
    Doug

    Hi Tom,
    I had not seen that list. I'll look it over.
    I'm also in touch with the developer of BBEdit (they are quite responsive) and they are willing to look at the file in question and see why it is not reporting UTF-8 errors while Chrome is.
    For now I have all the invalid characters quashed and things are working. But it would be useful in the future.
    By the by, some of those editors are quite pricey!
    doug

  • HELP!! Very Large Spooling / File Size after Data Merge

    My question is: If the image is the same and only the text is different why not use the same image over and over again?
    Here is what happens...
    Using CS3 and XP (P4 2.4 GHz, 1 GB RAM, 256 MB video card), I have taken a postcard PDF (the backside), placed it in a document, and then drawn a text box. Then I select a data source and put the fields I wish to print (Name, address, zip, etc.) in the text box.
    Now, under the Create Merged Document menu I select Multiple Records and then use the Multiple Records Layout tab to adjust the placement of this postcard on the page. I use the preview multiple records option to lay out 4 postcards on my page. Then I merge the document (it has 426 records).
    Now that my merged document is created with four postcards per page and the mailing data on each card, I go to print. When I print the file it spools up huge! The PDF I originally placed in the document is 2.48 MB, but when it spools I can only print 25 pages at a time and that still takes FOREVER. So again my question is: if the image is the same and only the text is different, why not use the same image over and over again?
    How can I prevent the gigantic spooling? I have tried putting the PDF on the master page and then using the document page to create the merged document, and still the same result. I have also tried creating a merged document with just the addresses and then adding the PDF on the master page afterward, but again, a huge file size while spooling. Am I missing something? Any help is appreciated :)

    The size of the EMF spool file may become very large when you print a document that contains lots of raster data
    Article ID : 919543
    Last Review : June 7, 2006
    Revision : 2.0
    SYMPTOMS
    When you print a document that contains lots of raster data, the size of the Enhanced Metafile (EMF) spool file may become very large. Files such as Adobe .pdf files or Microsoft Word .doc documents may contain lots of raster data. Adobe .pdf files and Word .doc documents that contain gradients are even more likely to contain lots of raster data.
    CAUSE
    This problem occurs because the Graphics Device Interface (GDI) does not compress raster data when it generates EMF spool files.
    This problem is very prominent with printers that support higher resolutions. The size of the raster data increases by four times if the dots-per-inch (dpi) in the file increases by two times. For example, a .pdf file of 1 megabyte (MB) may generate an EMF spool file of 500 MB. Therefore, you may notice that the printing process decreases in performance.
    RESOLUTION
    To resolve this problem, bypass EMF spooling. To do this, follow these steps:
    1. Open the properties dialog box for the printer.
    2. Click the Advanced tab.
    3. Click the Print directly to the printer option.
    Note: This will disable all print-processor-based features, such as the following:
    - N-up
    - Watermark
    - Booklet printing
    - Driver collation
    - Scale-to-fit
    STATUS
    Microsoft has confirmed that this is a problem in the Microsoft products that are listed in the "Applies to" section.
    MORE INFORMATION
    Steps to reproduce the problem
    1. Open the properties dialog box for any inbox printer.
    2. Click the Advanced tab.
    3. Make sure that the Print directly to the printer option is not selected.
    4. Click to select the Keep printed documents check box.
    5. Print an Adobe .pdf document that contains many groups of raster data.
    6. Check the size of the EMF spool file.

  • I need to sort very large Excel files and perform other operations.  How much faster would this be on a MacPro rather than my MacBook Pro i7, 2.6, 15R?

    I am a scientist and run my own business.  Money is tight.  I have some very large Excel files (~200 MB) that I need to sort and perform logic operations on.  I currently use a MacBook Pro (i7 core, 2.6 GHz, 16 GB 1600 MHz DDR3) and I am thinking about buying a multicore Mac Pro.  Some of the operations take half an hour to perform.  How much faster should I expect these operations to happen on a new Mac Pro?  Is there a significant speed advantage in the 6-core vs 4-core?  Practically speaking, what are the features I should look at, and what is the speed bump I should expect if I go to 32 GB or 64 GB?  Related to this, I am using a 32-bit version of Excel.  Is there a 64-bit spreadsheet that I can use on a Mac that has no limit on column and row size?

    Grant Bennet-Alder,
    It’s funny you mentioned using Activity Monitor.  I use it all the time to watch when a computation cycle is finished so I can avoid a crash.  I keep it up in the corner of my screen while I respond to email or work on a grant.  Typically the %CPU will hang at ~100% (sometimes even saying the application is not responding in red) but will almost always complete the cycle if I let it go for 30 minutes or so.  As long as I leave Excel alone while it is working it will not crash.  I had not thought of using the Activity Monitor as you suggested. Also I did not realize using a 32 bit application limited me to 4GB of memory for each application.  That is clearly a problem for this kind of work.  Is there any work around for this?   It seems like a 64-bit spreadsheet would help.  I would love to use the new 64 bit Numbers but the current version limits the number of rows and columns.  I tried it out on my MacBook Pro but my files don’t fit.
    The hatter,
    This may be the solution for me. I’m OK with assembling the unit you described (I’ve even etched my own boards) but feel very bad about needing to step away from Apple products.  When I started computing this was the sort of thing computers were designed to do.  Is there any native 64-bit spreadsheet that allows unlimited rows/columns, which will run on an Apple?  Excel is only 64-bit on their machines.
    Many thanks to both of you for your quick and on point answers!

  • Images and js files within a deployed war file are generating 404 errors.

    Greetings,
    I have an odd situation in that image files and js files that are part of a deployed war file are generating 404 (not found) errors when accessed via https but ARE found when accessed via http.
    Unfortunately we are required to use https.
    I have verified via jar -tf that the files are indeed part of the war file. So it is not that they are missing, as is evident when accessed via http (they appear as expected).
    A workaround is to create the subdirectories under the document root on the OHS and populate them with the images and js files that were used as part of the war file build. While this works, it doesn't explain why they would generate a 404 error when referenced from within the war via https.
    The war file works correctly on our 10g installation.
    I also have a very simple deployed war file, and it too seems to have an issue finding directories/files that are part of the war file when referenced via https but not http.
    We are using Oracle 11g OHS and using the WLS Admin Console to do the deploying. We are also using OSSO for performing the required authentication.
    I have an SR in with oracle and have been working with them but I thought I would post here too.
    Suggestions?
    Thanks in advance.
    Edited by: emmett on Jan 5, 2011 2:47 PM

    Don't crosspost. Continue here: http://forum.java.sun.com/thread.jspa?threadID=5251627

  • Best data structure for dealing with very large CSV files

    Hi, I'm writing an object that stores data from a very large CSV file. The idea being that you initialize the object with the CSV file; it then has lots of methods to make manipulating and working with the CSV file simpler. Operations like copy column, eliminate rows, perform some equations on all values in a certain column, etc. Also a method for printing back to a file.
    However, the CSV files will probably be in the 10 MB range, maybe larger, so simply loading them into an array isn't possible, as it produces an OutOfMemory error.
    Does anyone have a data structure they could recommend that can store the large amounts of data required and is easily writable? I've currently been using a RandomAccessFile, but it is awkward to write to, as well as needing an external file which would need to be cleaned up after the object is removed (something very hard to guarantee occurs).
    Any suggestions would be greatly appreciated.
    Message was edited by:
    ninjarob

    How much internal storage ("RAM") is in the computer where your program should run? I think I have 640 MB in mine, and I can't believe loading 10 MB of data would be prohibitive, not even if the size doubles when the data comes into Java variables.
    If the data size turns out to be prohibitive of loading into memory, how about a relational database?
    Another thing you may want to consider is more object-oriented (in the sense of domain-oriented) analysis and design. If the data is concerned with real-life things (persons, projects, monsters, whatever), row and column operations may be fine for now, but future requirements could easily make you prefer something else (for example, a requirement to sort projects by budget or monsters by proximity to the hero).
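    If loading the whole file does turn out to be prohibitive, a single streaming pass is often all that column operations like yours need, since only one line is held in memory at a time. A minimal sketch (the class name and the scale-one-column operation are made-up examples, and the naive split does not handle quoted commas):

        import java.io.*;

        // Sketch: stream a large CSV once, scale one numeric column, and
        // write the result to a new file. Only one line is held in memory
        // at a time. Assumes the column holds plain numbers (no header row).
        public class CsvColumnScaler {
            public static void main(String[] args) throws IOException {
                int col = Integer.parseInt(args[2]);          // column to transform
                double factor = Double.parseDouble(args[3]);  // e.g. 2.0
                BufferedReader in = new BufferedReader(new FileReader(args[0]));
                PrintWriter out = new PrintWriter(new BufferedWriter(new FileWriter(args[1])));
                try {
                    String line;
                    while ((line = in.readLine()) != null) {
                        String[] f = line.split(",", -1);     // -1 keeps trailing empty fields
                        if (col < f.length) {
                            f[col] = String.valueOf(Double.parseDouble(f[col]) * factor);
                        }
                        StringBuffer sb = new StringBuffer();
                        for (int i = 0; i < f.length; i++) {
                            if (i > 0) sb.append(',');
                            sb.append(f[i]);
                        }
                        out.println(sb.toString());
                    }
                } finally {
                    in.close();
                    out.close();
                }
            }
        }

    For anything more ad hoc (sorting, joining, repeated lookups), the relational database suggestion above is probably the better fit.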

  • Have a very large text file, and need to read lines in the middle.

    I have very large text files (around several hundred megabytes), and I want to be able to skip ahead and read specific lines. More specifically, say the file looks like:
    scan 1
    scan 2
    scan 3
    ...
    scan 100,000
    I want to be able to move the file reader immediately to scan 50,000, rather than having to read through scans 1-49,999.
    Thanks for any help.

    If the lines are all different lengths (as in your example) then there is nothing you can do except to read and ignore the lines you want to skip over.
    If you are going to be doing this repeatedly, you should consider reformatting those text files into something that supports random access.
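    If reformatting isn't an option, a cheap middle ground is to scan the file once and remember the byte offset where each line starts; after that, any line is a single seek away. A rough sketch (assumes a single-byte encoding such as ASCII or Latin-1, because RandomAccessFile.readLine works on bytes, not characters):

        import java.io.*;
        import java.util.*;

        // Sketch: one linear pass builds an index of line-start offsets;
        // every later lookup is a single seek. Costs one Long per line.
        public class LineIndex {
            private final RandomAccessFile raf;
            private final List<Long> offsets = new ArrayList<Long>();

            public LineIndex(File f) throws IOException {
                raf = new RandomAccessFile(f, "r");
                offsets.add(Long.valueOf(0L));            // first line starts at byte 0
                while (raf.readLine() != null) {
                    offsets.add(Long.valueOf(raf.getFilePointer()));
                }
                offsets.remove(offsets.size() - 1);       // drop the end-of-file entry
            }

            // n is zero-based: readLine(49999) jumps straight to "scan 50,000"
            public String readLine(int n) throws IOException {
                raf.seek(offsets.get(n).longValue());
                return raf.readLine();
            }
        }

    The initial scan still reads the whole file once, but every lookup after that is effectively instant, which pays off as soon as you read more than a handful of scattered lines.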

  • How can NI FBUS Monitor display very large recorded files?

    NI FBUS Monitor version 3.0.1 outputs an "Out of memory" error message if I try to load a large recorded file of 272 MB. Is there any combination of operating system (possibly Vista 32-bit or 64-bit) and/or physical memory size where NI FBUS Monitor can display such large recordings? Are there any patches, workarounds, or tools to display very large recorded files?

    Hi,
    NI-FBUS Monitor does not set a limit on the maximum record file size. The physical memory size of the system is one of the most important factors affecting the loading of a large record file.  The Monitor will try to load the entire file into memory during the file open operation.
    272 MB is a really large file. To open the file, your system must have sufficient physical memory available; otherwise an "Out of memory" error will occur.
    I would recommend you do not use the Monitor to open a file larger than 100 MB. Loading too large a file will consume the system memory quickly and decrease performance.
    Message Edited by Vince Shen on 11-30-2009 09:38 PM
    Feilian (Vince) Shen

  • I was recently sent a very large emoji message, and now when I try to access Messages it will not display messages and then brings me back to the home screen


    Quit the Messages app and reset your phone.
    Go to the Home screen and double-click the Home button. That will reveal the row of recently used apps at the bottom of the screen. Tap and hold on the app in question until it jiggles and displays a minus sign. Tap the minus sign to actually quit the app. Then tap anywhere on the screen above that bottom row to return the screen to normal. Then restart the app and see if it works normally. Next, reset your device: press and hold the Home and Sleep buttons simultaneously, ignoring the red slider should one appear, until the Apple logo appears. Let go of the buttons and let the device restart.
    After the phone restarts go into Messages and delete the thread containing the problem message.

  • Best technology to navigate through a very large XML file in a web page

    Hi!
    I have a very large XML file that needs to be displayed in my web page, maybe as a tree structure. Visitors should be able to go to nodes at any depth and access the children elements or text element of those nodes.
    I thought about using a DOM parser with Java but dropped that idea, as the DOM would be stored in memory and is hence space-consuming. Neither does SAX work for me, as every time there is a click on any of the nodes, my SAX parser parses the whole document for the node, which is time-consuming.
    Could anyone please tell me the best technology and best parser to use for very large XML files?

    Thank you for your suggestion. I have a question, though. If I use a relational database and try to access it for EACH and EVERY click the user makes, wouldn't that take much time to populate the page with data? Isn't an XML store more efficient here? Please reply.

    You have the choice of reading a small number of records (10 children per element?) from a database, or parsing multiple megabytes. Reading 10 records from a database should take maybe 100 milliseconds (1/10 of a second). I have written a web application that reads several hundred records and returns them with acceptable response time, and I am no expert. To parse an XML file of many megabytes... you have already tried this, so you know how long it takes, right? If you haven't tried it, then you should. It's possible to waste a lot of time considering alternatives -- the term is "analysis paralysis". Speculating on how fast something might be doesn't get you very far.
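    If you do end up staying with XML rather than a database, it may also be worth looking at a pull parser such as StAX (javax.xml.stream, bundled with the JDK since Java 6): like SAX it streams, but your code drives the cursor, so a lookup can stop as soon as it reaches the wanted node instead of parsing the whole document on every click. A rough sketch (the "node" element and "id" attribute are placeholders for whatever your schema actually uses):

        import java.io.*;
        import javax.xml.stream.*;

        // Sketch: stream a large XML file with StAX and stop at the first
        // element whose "id" attribute matches, never building a full tree.
        public class FindNode {
            public static String findText(File xml, String wantedId) throws Exception {
                XMLInputFactory factory = XMLInputFactory.newInstance();
                XMLStreamReader r = factory.createXMLStreamReader(new FileInputStream(xml));
                try {
                    while (r.hasNext()) {
                        if (r.next() == XMLStreamConstants.START_ELEMENT
                                && "node".equals(r.getLocalName())
                                && wantedId.equals(r.getAttributeValue(null, "id"))) {
                            return r.getElementText(); // assumes text-only content
                        }
                    }
                    return null;                       // not found
                } finally {
                    r.close();
                }
            }
        }

    On average this still reads half the file per lookup, so for frequent random access the database route above remains the stronger answer; StAX mainly buys you bounded memory.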

  • Final Cut Pro X keeps saying files are full when trying to export, when they're not?

    Hi
    Final Cut Pro X keeps saying files are full when I'm trying to export, when they are not full. Also, what setting do I export them on for a small QuickTime?

    The preset that will give you the smallest file is Email, and you will have 3 choices of size. Not sure what you consider small, but also take a look at Apple Device.
    Is your error message "Files are Full" or "Disk is Full"? We will need more info about your system specs, the format of the clips you're using, and your project settings.
    Russ

  • Numerous trace files are generating every minute causing space issue

    Hi All,
    Numerous trace files are being generated every minute in <SID>_<PID>_APPSPERF01.trc format.
    The entries in the trace files look like this:
    EXEC #10:c=0,e=0,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=1734896627,tim=1339571764486430
    WAIT #10: nam='SQL*Net message to client' ela= 6 driver id=1952673792 #bytes=1 p3=0 obj#=34562 tim=1339571764491273
    FETCH #10:c=0,e=0,p=0,cr=2,cu=0,mis=0,r=1,dep=0,og=1,plh=1734896627,tim=1339571764486430
    WAIT #10: nam='SQL*Net message from client' ela= 277 driver id=1952673792 #bytes=1 p3=0 obj#=34562 tim=1339571764491806
    EXEC #11:c=0,e=0,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=2638510909,tim=1339571764486430
    FETCH #11:c=0,e=0,p=0,cr=9,cu=0,mis=0,r=0,dep=0,og=1,plh=2638510909,tim=1339571764486430
    WAIT #11: nam='SQL*Net message to client' ela= 6 driver id=1952673792 #bytes=1 p3=0 obj#=34562 tim=1339571764493265
    *** 2012-06-13 03:16:14.496
    WAIT #11: nam='SQL*Net message from client' ela= 10003326 driver id=1952673792 #bytes=1 p3=0 obj#=34562 tim=1339571774496705
    BINDS #10:
    Bind#0
    oacdty=01 mxl=32(21) mxlc=00 mal=00 scl=00 pre=00
    oacflg=00 fl2=1000001 frm=01 csi=871 siz=2064 off=0
    kxsbbbfp=2b8ec799df38 bln=32 avl=03 flg=05
    value="535"
    Bind#1
    oacdty=01 mxl=32(21) mxlc=00 mal=00 scl=00 pre=00
    oacflg=00 fl2=1000001 frm=01 csi=871 siz=0 off=32
    kxsbbbfp=2b8ec799df58 bln=32 avl=04 flg=01
    value="1003"
    SQL> show parameter trace

    NAME                             TYPE     VALUE
    -------------------------------- -------- -------
    tracefiles_public                boolean  TRUE
    log_archive_trace                integer  0
    sec_protocol_error_trace_action  string   TRACE
    sql_trace                        boolean  FALSE
    trace_enabled                    boolean  TRUE
    tracefile_identifier             string
    Profile options like "FND:Debug Log Enabled" and "Utilities:SQL Trace" are set to No.
    Can someone help me stop this trace generation? Is there any way to find the cause of these traces?
    Thanks in advance...

    Hi;
    Please check who enabled the trace. Please see:
    How to audit users who enabled traces?
    Check the concurrent programs first, from the screen: Concurrent > Program > Define. Open the form, press F11 (query mode), select the trace option, then press Ctrl+F11; this should return all concurrent programs which have trace enabled.
    Regards
    Helios

  • OWB 10g: how control files are generated?

    We are using OWB 10g within a 10g Database. We want to know how control files are generated by OWB in the file system. The reason we need to know is that our DBAs do not want to create directory objects pointing to NAS devices; their policy states that all directory objects should be on SAN shares. We would rather use NAS shares, since that simplifies our batch (too long to explain here). OWB has the "CREATE ANY DIRECTORY" privilege granted. Is it using this privilege to create a directory object for the path where we specify the control file in the mapping, or is it writing directly to this path? We checked the directory objects created (SELECT * FROM DBA_DIRECTORIES) after deploying a control file, and it didn't seem to have created any new ones. Does anyone know how OWB creates these control files?

    Yes, indeed that's what we were after. We basically wanted to be sure we are not breaking an internal policy that says that "Oracle directory objects cannot be located on NAS shares". The reasoning behind this policy is that NAS shares are not deemed highly available or high-I/O devices, hence our Oracle DBAs will not allow us to create any Oracle directory objects on NAS shares. The policy states that all database data should be stored on SAN shares, which are directly attached to the servers and are therefore high-I/O devices. It is arguable whether the OWB data we want to load is really part of a database; we believe it is not. There are other implications in our environment about using NAS instead of SAN (NAS can run in active-active mode across different data centres, whereas SAN requires replication, since it doesn't usually work well in an active-active mode across different data centres). So based on your answer we should be fine, since OWB reads and writes directly to the files without using Oracle directory objects, which supports our theory that these are not DB-specific files but only "OWB app" files, which can then sit on a NAS without breaking the above-stated policy.

Maybe you are looking for

  • Retrieving Images Embedded In XML (from rss using xsl)

    I am trying to consume an RSS feed into my site, but I don't seem to see an "image" element in the Item node. Can someone tell me how to retrieve images embedded in XML in Dreamweaver? I am using an XSL fragment on a PHP server. Doh

  • After upgrading iTunes to 7.7.1, touch makes Windows restart

    Since upgrading, when I plug my iPod touch in it causes Windows to restart. My older nano doesn't do this. I am running Win XP Pro SP2. Thanks for any ideas!

  • Problems installing Acrobat X as part of CS6 upgrade

    After upgrading to CS6 from CS5, my Acrobat Pro 9 stopped working - it would say that there is an error and shut down, each time I tried to open it.  I have already tried uninstalling and reinstalling both Acrobat 9 and the CS6 upgrade.  Latest reins

  • Error occured (retrying)

    I receive "error occurred (retrying)" when starting a workflow on a document. I feel that this is because these documents have long URLs, but I don't know how to fix it.

  • H55M-P33 Can't Connect to Internet or Install the LAN drivers.

    I cannot connect to the internet. I have a working ethernet plugged in. I checked the lights and they are working but I still can't connect to the internet. I ran the Windows Troubleshooter and it said that I needed to update my drivers. I tried to i