Memory consumption too high!!!

Hi SDN,
we're on SAP ECC 6.0 and we have already applied SAP Zero Administration for Windows (SQL 2005). When we start the SAP system, it occupies about 8GB of memory, but 2 to 4 hours later it is consuming about 12GB (!!!!!) even if the system is not used by any user. It seems that memory is not released for other processes.
Does someone know how I can reduce the memory consumption of an SAP system?
Thanks in advance, best Regards,
Pedro Rodrigues

Hi,
thanks for your answers. This is Win2003; I think MS KB 931308 is applied, but I'm not able to check it right now. This is an ABAP-only stack.
Regards,
Pedro

Similar Messages

  • High memory consumption in XSL transformations (XSLT)

    Hello colleagues!
    We have a problem with very high memory consumption when transforming XML files with CALL TRANSFORMATION.
    Code example:
    CALL TRANSFORMATION /ipro/wml_translate_cls_ilfo
                SOURCE XML lx_clause_text
                RESULT XML lx_temp.
    lx_clause_text is a WordML xstring (i.e. it is a Microsoft Word file in XML format) and can therefore not easily be split into several parts.
    Unfortunately this string can get very large (e.g. 50MB). The problem is that
    it seems that CALL TRANSFORMATION allocates memory for the source and result
    xstrings but doesn't free them after the transformation.
    So in this example this would mean that the transformation allocates ~100MB
    memory (50MB for source, ~50MB for result) and doesn't free it. Multiply
    this with a couple of transformations and a good amount of users and you see
    we get in trouble.
    I found this note regarding the problem: 1081257
    But we couldn't figure out how this problem could be solved in our case. The note proposes to "use several short-running programs". What is meant by this? By the way, our application is built with Web Dynpro for ABAP.
    Thank you very much!
    With best regards,
    Mario Düssel

    Hi,
    q1: How come the RAM consumption increased to 99% on all three boxes?
    If we continue with the theory that network connectivity was lost between the hosts, the Coherence servers on the local hosts would form their own clusters. Prior to the "split", each cache server would hold 1/12 of the primary and 1/12 of the backup (assuming you have one backup). Since Coherence avoids selecting a backup on the same host as the primary when possible, the 4 servers on each host would hold 2/3 of the cache. After the split, each server would hold 1/6 of the primary and 1/6 of the backup, i.e., twice the memory it previously consumed for the cache. It is also possible that a substantial portion of the missing 1/3 of the cache may be restored from the near caches, in which case each server would then hold 1/4 of the primary and 1/4 of the backup, i.e., thrice the memory it previously consumed for the cache.
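    The arithmetic above in a minimal, runnable form (the 12-server / 3-host / one-backup layout matches the 1/12-per-server scenario described here and is an assumption, not a measurement from your cluster):
    public class SplitBrainMath {
        public static void main(String[] args) {
            int servers = 12;                               // storage-enabled nodes in the original cluster
            int hosts = 3;
            int perHost = servers / hosts;                  // 4 nodes per host

            // Before the split each node holds 1/12 primary + 1/12 backup.
            double perServerBefore = 2.0 / servers;         // ~0.17 of the cache

            // Backups avoid the primary's host, so each host already stores
            // 1/3 primary + 1/3 backup = 2/3 of the cache.
            double cachePerHost = 2.0 * perHost / servers;

            // After the split the 4-node cluster promotes its backups and
            // re-creates new backups locally, so each node stores twice as much.
            double perServerAfter = 2.0 * cachePerHost / perHost;   // ~0.33 of the cache

            System.out.printf("per server before: %.2f, after: %.2f%n",
                    perServerBefore, perServerAfter);
        }
    }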
    q2: Where is the cache data stored on the Coherence servers, and in which memory?
    The cache data is typically stored in the JVM's heap memory area.
    Have you reviewed the logs?
    Regards,
    Harv

  • Continuously refreshing a tab after an interval leads to high memory consumption (400MB to 800MB in 30 seconds for 3 refreshes at 10 sec intervals), why?

    Environment:
    MAC OSX 10.9.5
    Firefox 32.0.3
    Firefox keeps consuming a lot of memory when you keep refreshing a tab after an interval.
    I opened a single tab in my Firefox and logged into my Gmail account on that. At this stage the memory consumption was about 400MB. I refreshed the page after 10 seconds and it went to 580MB. Again I refreshed after 10 seconds and this time it was 690MB. Finally, when I refreshed a 3rd time after 10 seconds, it was showing 800MB.
    Nothing had changed on the page (no new email, chat conversation, etc.). Somehow I feel that Firefox is not doing a good job at garbage collection. I tested this use case with a lot of other applications and websites and got similar results. Other browsers like Google Chrome, Safari, etc. work just fine.
    For one of my applications with three tabs open, Firefox literally crashed after the high memory consumption (around 2GB).
    Can someone tell me if this is a known issue in Firefox? And is Firefox planning to fix it? Right now, is there any workaround or fix for this?

    Hi FredMcD,
    Thanks for the reply. Unfortunately, I don't see any crash reports in about:crashes. I am trying to reproduce the issue that made the browser crash, but somehow it's not happening anymore; the browser just gets stuck at a point. Here is what I am doing:
    - 3 tabs are open with the same page of my application. The page has several panels with charts, and the JavaScript libraries used for this page are backbone.js, underscore.js, require.js, highcharts.js
    - The page automatically reloads after every 30 seconds
    - After the first load of these three tabs, the memory consumption is 600MB. But after 5 minutes, the memory consumption goes to 1.6GB and stays at that level.
    - After some time, the page won't load completely in any of the tabs. At this stage the browser becomes very slow and I have to either hard-refresh the tabs or restart the browser.

  • Memory and CPU consumption very high after OS X 10.7.5 upgrade.

    Hi,
    I am new to the Mac world (just a couple of months and still getting used to the Mac).
    After upgrading to OS X 10.7.5, I am seeing gradual memory consumption growth upon normal bootup and login. The loginwindow, systemUIServer and systemUIAgent processes are using almost all the remaining RAM. Also, the _coreaudiod process is consuming the most CPU.
    System Preferences is not responding.
    I don't have TimeMachine setup. No backup is available.
    Any remedy?

    Disable your Extensis / Suitcase font plugins until they can get a patch out.

  • Very high memory consumption of B1i and cockpit widgets

    Hi all,
    I have finally managed to install B1i successfully, but I think something is wrong.
    Memory consumption in my test environment (Win2003, 1024 MB RAM), while no other applications and no SAP addons are started:
    tomcat5.exe 305 MB
    SAP B1 client 315 MB
    SAP B1DIProxy.exe 115 MB
    sqlservr.exe 40 MB
    SAPB1iEventSender.exe 15 MB
    others less than 6 MB and almost only system based processes...
    For each widget I open (3 default widgets, one on each standard cockpit), the Tomcat process grows bigger and leaves less for the SQL server, which has to fetch all the data (several seconds at 100% CPU usage).
    Is this heavy memory consumption normal? What happens if several users are logged into SAP B1 using widgets?
    Thanks in advance
    Regards
    Sebastian

    Hi Gordon,
    so this is normal? Then I guess the dashboards are not suitable for many customers, especially those working on a terminal server infrastructure. Even if the Tomcat server has this memory consumption only on the SAP server, with each client needing about 300 MB (plus a few hundred more for the several add-ons they need!), I could not activate the widgets. And generally SAP B1 is not the only application running at the customer's site. Suggesting to buy more memory for some Xcelsius dashboards won't convince the customer.
    I hope that this feature will be improved in the future, otherwise the cockpit is just an extension of the old user menu (except for the brilliant quickfinder on top of the screen).
    Regards
    Sebastian

  • MAIL Version 7.2 (1874) High Memory Consumption

    My MacBook Air has lately increased its memory consumption (it has 4 GB of RAM) to the point that it is getting very slow. Is there something I can do to improve or control this?

    Have you installed anything recently? Open up Activity Monitor and check what applications are running and see what's consuming the most resources.

  • Integration Builder Memory Consumption

    Hello,
    we are experiencing very high memory consumption in the Java IR designer (not the directory), especially when loading normal graphical IDoc-to-EDI mappings, but also for normal IDoc-to-IDoc mappings. Examples (RAM on the client side):
    - open a normal IDoc-to-IDoc mapping: + 40 MB
    - IDoc to EDI ORDERS D93A: + 70 MB
    - a second IDoc to EDI ORDERS D93A: + 70 MB
    - executing those mappings: no additional consumption
    - a third EDI-to-EDI ORDERS D93A: + 100 MB
    (all mappings in the same namespace)
    After three more mappings, RAM on the client side reaches 580 MB and then a Java heap error occurs. Sometimes also OutOfMemory, after which you have to terminate the application.
    Obviously the mapping editor is not well optimized for RAM usage. It seems not to cache the in/out message structures, or it loads a lot of dedicated functionality for every mapping.
    So we cannot really call that fun. Working is very slow.
    Do you have similar experiences? Are there workarounds? I know the JNLP memory setting parameters, but the problem is the high load of each mapping, not only the overall maximum memory.
    And we are using only graphical mappings, no XSLT!
    We are on XI 3.0 SP 21
    CSY

    Hi,
    Apart from raising the tablespace...
    Note 425207 - SAP memory management, current parameter ranges
    you have to configure operation modes to change work processes dynamically using RZ03 and RZ04.
    Please see the below link
    http://help.sap.com/saphelp_nw04s/helpdata/en/c4/3a7f53505211d189550000e829fbbd/frameset.htm
    You can contact your Basis administrator for the necessary action.

  • BW data model and impacts to HANA memory consumption

    Hi All,
    As I consider how to create BW models where HANA is the DB for a BW application, it makes sense to move the reporting target from Cubes to DSOs. The next logical progression of thought is that the DSO should store the lowest granularity of data (document level). So a consolidated data model that reports on cross-functional data would combine sales, inventory and purchasing data, all stored at document level. In this scenario:
    Will a single report execution that requires data from all 3 DSOs use more memory vs. the 3 DSOs aggregated, say, at site/day/material? In other words: lower granularity data = higher memory consumption per report execution?
    I'm thinking that more memory is required to aggregate the data in HANA before sending to BW.  Is aggregation still necessary to manage execution memory usage?
    Regards,
    Dae Jin

    Let me rephrase.
    I got an EarlyWatch report that said the dimensions on one of my cubes were too big. I ran SAP_INFOCUBE_DESIGNS in SE38 in my development box and that confirmed it.
    So, I redesigned the cube, reactivated it and reloaded it.  I then ran SAP_INFOCUBE_DESIGNS again.  The cube doesn't even show up on it.  I suspect I have to trigger something in BW to make it populate for that cube.  How do I make that happen manually?
    Thanks.
    Dave

  • Excessive memory consumption when loading Customers through Component Interface

    Hi All,
    I'm facing a big problem with high memory consumption when loading Customers, Companies and Sites using the Component Interfaces delivered by the product (RD_CONSUMER_CI_API, RD_COMPANY_CI_API, RD_SITE_CI_API) within Application Engine programs. I'm loading about 7 million customers, an amount that is not so big in my opinion, but the memory consumption is too high.
    We have 3 batch servers, each running under Red Hat OS with 32 GB of RAM plus 32 GB of swap per server. We are running 2 processes per server, and within a day and a half the servers crash with 100% of memory consumed (RAM and swap).
    Is there a good practice for using Component Interfaces in a heavy load process?
    Are there parameters in the process scheduler configuration file that could help reduce the memory consumption?
    Is there a way to free the memory through PeopleCode or by running another process?
    Thanking you in advance.

    You may want to try cutting down on the input data to confirm that the load volume is the problem.
    You may try using the garbage collector, but it might not help in your case.
    To get an idea of the size allocated in the buffer for the Rowset being used, you may want to check out the memory overhead ...
    Also, you could check which process is consuming a lot of memory.

  • Graphic failure after full on memory consumption

    After a few hours of usage, mostly after high memory consumption, my Mac starts showing these lines. I never had this issue before the Mavericks upgrade. Can someone let me know what is going on? Rebooting fixes the issue for now.
    MacBook Pro
    15-inch, Mid 2010
    Processor  2.53 GHz Intel Core i5
    Memory  4 GB 1067 MHz DDR3
    Graphics  Intel HD Graphics 288 MB

    Yup, same thing happening here. Just upgraded to 10.9 Mavericks.
    2010 MBP 15-inch, i5 2.4GHz.
    Kind of odd that all of us have the same model Mac. I hope this isn't related to another issue described over here: http://support.apple.com/kb/TS4088
    I just spent money to get my logic board replaced because of that issue, as the diagnostic test said I "passed" even though I was still getting kernel panics occasionally. Now I upgrade to 10.9 and see video artifacts everywhere.
    Random video/graphics glitches appear all over the place after a while. Sometimes it's minor, other times it makes the system unusable as you can't see anything. Usually goes away again if you restart, but it's obvious there's some kind of graphics issue with 10.9 on the 2010 15-inch MBP.

  • SetTransform() memory consumption

    Hi,
    I'm currently working on an application which needs to move a sphere very quickly. The position is calculated every 40 ms and set via the TransformGroup.setTransform() method. This raises a problem, as this call rapidly consumes huge amounts of memory, especially when called at short time intervals.
    I also tested it with the Java 3D example program "AWTInteraction" by simply putting the statement in a for loop and watching the memory climb:
    for (int i = 0; i < 1e+6; i++)
         objTrans.setTransform(trans);
    The result is a java.lang.OutOfMemoryError.
    Is there a solution or workaround for this kind of problem?
    Any hints appreciated. (Project has to be finished on Monday. It's really urgent.)
    TIA
    Erich

    Erich,
    I've never had any memory problems when dealing with Transforms. Is it perhaps possible that the leakage results from another instruction? So far I have only seen textures be responsible for high memory consumption. In order to find the problem, I would check three things in your code:
    1. Are you working with textures and if yes, how does the program behave, if the textures are omitted?
    2. Are there any "new" instructions inside your loop? If yes, try to reuse objects and eliminate all "new" commands inside the loop (see the sketch after this list).
    3. Did you consider the Mantra to do all changes on a live scene graph within a behavior (and from the behavior scheduler)? It seems unusual to me to change transforms inside a loop.
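    A minimal sketch of point 2, assuming the new position is calculated elsewhere every 40 ms (the class and method names are illustrative, not taken from your code):
    import javax.media.j3d.Transform3D;
    import javax.media.j3d.TransformGroup;
    import javax.vecmath.Vector3f;

    // Reuse one Transform3D and one Vector3f for every update instead of
    // creating new objects each time the position changes; only the values
    // are overwritten, so far fewer temporaries are left for the GC.
    public class SphereMover {
        private final TransformGroup objTrans;
        private final Transform3D trans = new Transform3D();
        private final Vector3f position = new Vector3f();

        public SphereMover(TransformGroup objTrans) {
            this.objTrans = objTrans;
        }

        // Called with the newly calculated position.
        public void moveTo(float x, float y, float z) {
            position.set(x, y, z);
            trans.setTranslation(position);
            objTrans.setTransform(trans);
        }
    }
    Ideally moveTo() would be driven from a Behavior's processStimulus(), as point 3 suggests, rather than from a tight loop.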
    Good luck,
    Oliver

  • J2EE Engine memory consumption (Usage)

    Dear experts,
    We have a J2EE Engine (a Java stack). When I run routine monitoring via the browser and read the memory consumption, I see a chart that shows a sawtooth-like graph. Every hour from 19:00 to 02:00 the memory consumption rises by approx. 200 MB; after 7 hours, all of a sudden, the memory consumption drops back down to the normal idle level and starts over again. I can confirm that at that time there are no users on the system.
    My question is: what is the J2EE Engine doing, since there is no user activity? Is the J2EE Engine running some system applications? Is it filling up the log files and then emptying (storing) them?
    I hope some of the experts can answer.
    I just want to understand what's going on in the system. If there is some documentation/white paper on how to interpret/read the J2EE monitor, I would be grateful if you could drop the information or a link here.
    Mike

    Hi Mike
    To understand what exactly is being executed in Java engine, I'd suggest you perform Thread dump analysis as per:
    http://help.sap.com/saphelp_smehp1/helpdata/en/10/3ca29d9ace4b68ac324d217ba7833f/frameset.htm
    Generally 4-5 thread dumps are triggered at intervals of 20-25 seconds for better analysis.
    Here are some useful SAP Notes related to thread dump analysis:
    710154 - How to create a thread dump for the J2EE Engine 6.40/7.0
    1020246 - Thread Dump Viewer for SAP Java Engine
    742395 - Analyzing High CPU usage by the J2EE Engine
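    Purely as an illustration of what a thread dump contains (this is not the SAP tool; the notes above describe the supported way to create dumps for the J2EE Engine), a few stack-trace snapshots taken roughly 20 seconds apart can be sketched in plain Java like this:
    import java.util.Map;

    public class MiniThreadDump {
        public static void main(String[] args) throws InterruptedException {
            for (int dump = 1; dump <= 4; dump++) {           // 4 dumps, as suggested above
                System.out.println("=== dump " + dump + " ===");
                for (Map.Entry<Thread, StackTraceElement[]> e
                        : Thread.getAllStackTraces().entrySet()) {
                    System.out.println(e.getKey());
                    for (StackTraceElement frame : e.getValue()) {
                        System.out.println("    at " + frame);
                    }
                }
                Thread.sleep(20000);                          // pause between dumps
            }
        }
    }
    Comparing which threads are busy across the snapshots while no users are logged on usually points to the background jobs or services responsible for the sawtooth.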
    Kind regards,
    Ved

  • How to measure memory consumption during unit tests?

    Hello,
    I'm looking for simple tools to automate measurement of overall memory consumption during some memory-sensitive unit tests.
    I would like to apply this when running a batch of some test suite targeting tests that exercise memory-sensitive operations.
    The intent is to verify that a modification of code in this area does not introduce a regression (a rise) in memory consumption.
    I would include it in the nightly build, and monitor the evolution of a summary figure (a-ha, the "userAccount" test suite consumed 615MB last night, compared to 500MB the night before... What did we check in yesterday?)
    Running on Win32, the system-level info on memory consumed is known not to be accurate.
    Using perfmon is more accurate but it seems overkill - plus it's difficult to automate; you have to attach it to an existing process...
    I've looked at the hprof agent included in Sun's JDK, but it seems to be targeted at investigating problems rather than discovering them. In particular there isn't a "summary line" for the total memory consumed...
    What tools do you use/suggest?

    However this requires manual code in my unit test classes themselves, e.g. in my setUp/tearDown methods. I was expecting something more orthogonal to the tests, that I could activate or not depending on the purpose of the test.
    Some IDEs display memory usage and execution time for each test/group of tests.
    If I don't have another option, OK, I'll wire my own pre/post memory counting, maybe using AOP, and will activate memory measurement only when needed.
    If you need to check the memory used, I would do this. You can do the same thing with AOP. Unless you are using an AOP library, I doubt it is worth the additional effort.
    Have you actually used your suggestion to automate memory consumption measurement as part of daily builds?
    Yes, but I have less than a dozen tests which fail if the memory consumption is significantly different. I have more tests which fail if the execution time is significantly different.
    Rather than use the setUp()/tearDown() approach, I use the testMethod() as a wrapper for the real test and add the check inside it. This is useful as different tests will use different amounts of memory.
    Plus, I did not understand your suggestion, can you elaborate?
    - I first assumed you meant freeMemory(), which, as you suggest, is not accurate, since it returns "an approximation of [available memory]"
    freeMemory() gives the free memory out of the total. The total can change, so you need to take total - free as the memory used.
    - I re-read it and now assume you do mean totalMemory(), which unfortunately will grow only when more memory than the initial heap setting is needed.
    More memory is needed when more memory is used. Unless your test uses a significant amount of memory, there is no way to measure it reliably; i.e., if a GC is performed during a test, the test can appear to use less memory than it consumes.
    - Eventually, I may need to include calls to System.gc() but I seem to remember it is best-effort only (endless discussion) and may not help accuracy.
    If you do a System.gc() followed by a Thread.yield() at the start, it can improve things marginally.
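    A minimal sketch of the wrapper approach described above (the class and method names and the threshold are illustrative assumptions, not the actual tests mentioned here):
    public class UserAccountMemoryTest {

        // Used heap after nudging the collector; System.gc() is best-effort
        // only, so treat the figure as an approximation.
        private static long usedMemory() {
            System.gc();
            Thread.yield();
            Runtime rt = Runtime.getRuntime();
            return rt.totalMemory() - rt.freeMemory();
        }

        // The test method wraps the real scenario and checks the memory delta,
        // instead of putting the bookkeeping into setUp()/tearDown().
        public void testUserAccountScenario() {
            long before = usedMemory();
            runUserAccountScenario();                      // the real test body
            long used = usedMemory() - before;
            if (used > 550L * 1024 * 1024) {               // illustrative threshold
                throw new AssertionError("Memory regression: " + used + " bytes used");
            }
        }

        private void runUserAccountScenario() {
            // ... exercise the memory-sensitive operations here ...
        }
    }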

  • Problems updating projects to new versions of Premiere (CS5 to CC and CC to CC 2014): memory consumption during re-index and offline MPEG clips in CC 2014

    I have 24GB of RAM in my 64 bit Windows 7 system running on RAID 5 with an i7 CPU.
    A while ago I updated from Premiere CS5 to CC and then from Premiere CC to CC 2014. I updated all my then current projects to the new version as well.
    Most of the projects contained 1080i 25fps (1080x1440 anamorphic) MPEG clips originally imported (captured from HDV tape) from a Sony HDV camera using Premiere CS5 or CC.
    Memory consumption during re-indexing.
    When updating projects I experienced frequent crashes going from CS5 to CC and later going from CC to CC 2014. Updating projects caused all clips in the project to be re-indexed. The crashes were due to the re-indexing process causing excessive RAM consumption and I had to re-open each project several times before the re-index would eventually complete successfully. This is despite using the setting to limit the RAM consumed by Premiere to much less than the 24GB RAM in my system.
    I checked that clips played; there were no errors generated; no clips showed as Offline.
    Some clips now "Offline: Importer" in CC 2014
    Now, after some months editing one project I found some of the MPEG clips have been flagged as "Offline: Importer" and will not relink. The error reported is "An error occurred decompressing video or audio".
    The same clips play perfectly well in, for example, Windows Media Player.
    I still have the earlier Premiere CC and the project file and the clips that CC 2014 importer rejects are still OK in the Premiere CC version of the project.
    It seems that the importer in CC 2014 has a bug that causes it to reject MPEG clips with which earlier versions of Premiere had no problem.
    It's not the sort of problem expected with a premium product.
    After this experience, I will not be updating premiere mid-project ever again.
    How can I get these clips into CC 2014? I can't go back to the version of the project in Premiere CC without losing hours of work/edits in Premiere CC 2014.
    Any help appreciated. Thanks.

    To answer my own question: I could find no answer to this myself and, with there being no replies in this forum, I have resorted to re-capturing the affected HDV tapes from scratch.
    Luckily, I still had my HDV camera and the source tapes and had not already used any of the clips that became Offline in Premiere Pro CC 2014.
    It seems clear that the MPEG importer in Premiere Pro CC 2014 rejects clips that Premiere Pro CC once accepted. It's a pretty horrible bug that ought to be fixed. Whether Adobe have a workaround or at least know about this issue and are working on it is unknown.
    It also seems clear that the clip re-indexing process that occurs when upgrading a project (from CS5 to CC and also from CC to CC 2014) has a bug which causes memory consumption to grow continuously while it runs. I have 24GB of RAM in my system and regardless of the amount of RAM I allocated to Premiere Pro, it would eventually crash. Fortunately, on restarting Premiere Pro and re-loading the project, re-indexing would resume where it left off, and, depending on the size of the project (number of clips to be indexed), after many repeated crashes and restarts re-indexing would eventually complete and the project would be OK after that.
    It also seems clear that Adobe support isn't the greatest at recognising and responding when there are technical issues, publishing "known issues" (I could find no Adobe reference to either of these issues) or publishing workarounds. I logged the re-index issue as a bug and had zero response. Surely I am not the only one who has experienced these particular issues?
    This is very poor support for what is supposed to be a premium product.
    Lesson learned: I won't be upgrading Premiere again mid project after these experiences.

  • Query on memory consumption during SQL

    Hi SAP Gurus,
    Could I kindly request for your inputs concerning the following scenario?
    To put it quite simply, we have a program where we're required to retrieve all the fields from a lengthy custom table, i.e. the select statement uses an asterisk. Unfortunately, there isn't really a way to avoid this short of a total overhaul of the code, so we had to settle for this (for now).
    The program retrieves from the database table using a where clause filtering only on a single company code value. Kindly note that company code is not the only key in the table. In order to help with the memory consumption, the original developer employed retrieval by packages (also note that the total length of each record is 1803...).
    The problem encountered is as follows:
    - Using company code A, retrieving 700k entries in packages of 277, the program ran without any issues.
    - However, using company code B, retrieving 1.8 million entries in packages of 277, the program encountered a TSV_TNEW_PAGE_ALLOC_FAILED short dump. This error is encountered the very first time the program goes through the select statement, ergo it has not even been able to get to any additional internal table processing yet.
    About the only big difference between the two company codes is the number of corresponding records they have in the table. I've checked whether company code B has more values in its columns than company code A, but they're just the same.
    What I do not quite understand is why memory consumption changed just by changing the company code in the selection.  I thought that the memory consumed by both company codes should be the same... at least, in the beginning, considering that we're retrieving by packages, so we're not trying to get all of the records all at once.  However, the fact that it failed at the very beginning has shown me that I'm gravely mistaken.
    Could someone please enlighten me on how memory is consumed during database retrieval?
    Thanks!

    Hi,
    with FAE (FOR ALL ENTRIES) the whole query is executed even for a single record in the itab, and all results for the company code are transferred from the database to the DBI, since the duplicates are removed by the DBI, not by the database.
    If you use PACKAGE SIZE, the result set is buffered in a system table in the DBI (which allocates memory from your user quota). From there the packages are built and handed over to your application (into table lt_temp).
    see recent ABAP documentation:
    Since duplicate rows are only removed on the application server, all rows specified using the WHERE condition are sometimes transferred to an internal system table and aggregated here. This system table has the same maximum size as the normal internal tables. The system table is always required if addition PACKAGE SIZE or UP TO n ROWS is used at the same time. These do not affect the amount of rows transferred from the database server to the application server; instead, they are used to transfer the rows from the system table to the actual target area.
    What you should do:
    calculate the size needed for your big company code B: the number of rows multiplied by the line length.
    That is the minimum amount you need for your user memory quota (quotas can be checked with ABAP report RSMEMORY). If that amount of memory is sufficient, then try without PACKAGE SIZE:
    SELECT * FROM <custom table>
      INTO TABLE lt_temp
      FOR ALL ENTRIES IN lt_bukrs
      WHERE bukrs = lt_bukrs-bukrs
      ORDER BY PRIMARY KEY.
    This might actually use less memory than the PACKAGE SIZE option for the FOR ALL ENTRIES. Since with FAE the result is buffered in the DBI anyway (and subtracted from your quota), you can do it right away and avoid storing portions twice (once in the DBI buffer and a portion of that again in the package in lt_temp).
    If the amount of memory is still too big, you have to either increase the quotas or select
    less data (additional where conditions) or avoid using FAE in this case in order to not read all
    the data in one go.
    Hope this helps,
    Hermann
