Reducing the memory footprint of our Sybase ASE-based SolMan install

Hello All,
We are doing a test install of SAP Solution Manager 7.01 on Sybase ASE 15.7.
Since this is just a test setup, we started off with a lower-than-recommended hardware configuration (4 GB RAM only) due to time constraints and since we were 'assured' that we could do basic testing with this setup.
While post-install performance of SolMan was decent, performance during solman_setup (setting up technical monitoring) has become appalling. We are not able to complete the configuration process at all, as the SolMan configuration web application has become very unpredictable and extremely slow.
The SolMan install is centralized and on a Windows 2008 box. Windows Task Manager shows consistent memory usage of 90-95%. We also tried reducing the total number of work processes to just 8, but that did not help much. We see in Task Manager > Resource Monitor that the sqlserver.exe process is taking a shareable working set close to 2 GB of RAM, whereas the committed memory is much less (34 MB).
Please tell us about any memory optimization we can perform for SolMan / Sybase ASE in order to complete the technical monitoring setup using solman_setup. We were hoping that we could change the 'total logical memory' setting for the DB directly using the DBACOCKPIT tcode (in order to reduce the max memory setting), but could not do so, as it seems to be read-only. We could not find much documentation or many posts regarding memory optimization for the DB. Please help out. Thanks!
-Regards,
Arvind

FWIW ... ASE's 'max memory' setting can be changed on the fly, while 'total logical memory' is a calculated value that you cannot change (i.e., it's read-only; changing 'max memory' will cause 'total logical memory' to change automatically). [NOTE: DBACOCKPIT is an SAP-provided application that sits on top of ASE; while I know what's doable when connected directly to ASE, I do not know if DBACOCKPIT has disabled the ability to change some configuration settings like 'max memory'.]
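If DBACOCKPIT won't allow it, the change can be made over a direct connection. Below is a hedged sketch using jConnect from Java - host, port, and credentials are placeholders, the login needs sa_role, and the example value of 786432 2K pages (= 1.5 GB) is illustrative only; also note that 'max memory' cannot be lowered below the current 'total logical memory':

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.ResultSetMetaData;
    import java.sql.Statement;

    public class AseMaxMemory {
        public static void main(String[] args) throws Exception {
            // jConnect 4.x driver class; connection details are placeholders
            Class.forName("com.sybase.jdbc4.jdbc.SybDriver");
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:sybase:Tds:solmanhost:5000", "sa", "secret");
                 Statement stmt = conn.createStatement()) {
                // 'max memory' is dynamic: the new value takes effect immediately
                stmt.execute("sp_configure 'max memory', 786432");
                // 'total logical memory' is recalculated automatically; read it back
                if (stmt.execute("sp_configure 'total logical memory'")) {
                    try (ResultSet rs = stmt.getResultSet()) {
                        ResultSetMetaData md = rs.getMetaData();
                        while (rs.next()) {
                            // sp_configure's column layout varies by ASE version
                            for (int i = 1; i <= md.getColumnCount(); i++) {
                                System.out.print(rs.getString(i) + "  ");
                            }
                            System.out.println();
                        }
                    }
                }
            }
        }
    }

(The same two sp_configure calls can of course be run from isql; the Java wrapper is only for illustration.)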
As for the SolMan performance issues ... I'd recommend reposting your issue in the SAP Applications on ASE discussion group where you're likely to get the attention of more folks with SAP application (on ASE) experience.  (While someone may jump in here with SolMan suggestions, SolMan is a SAP application and this group isn't really geared towards SAP applications.)

Similar Messages


  • Reducing the memory utilisation of my database

    Hi,
    I want to reduce the memory utilisation of my database. I want to know which SQL statements have been assigned OS memory by Oracle in my database. I have the AWR reports with me.
    My questions:
    1. Which section of the AWR report will give exactly this information?
    (The 'SQL ordered by Sharable Memory' section doesn't help.)
    2. Or can you tell me some views or tables in my database that I can query to get the needed information?
    3. How can I reduce the memory utilisation once I have identified the problematic SQLs?
    Thanks,
    Sach

    I'm not sure that I understand your question. Can you clarify a couple points for me?
    What memory are we talking about here? Normally, most of the RAM allocated to Oracle is going to be SGA. But SGA isn't associated with any particular SQL statement, at least not in a fashion that I could contemplate doing reporting on. Individual SQL statements require RAM temporarily in the PGA during execution, but it sounds like you're not interested in that.
    What is the problem you are trying to solve here? If you want to reduce the amount of RAM allocated to Oracle from the operating system, you should be able to do that without analyzing any specific SQL statements by adjusting memory parameters. Mentioning what version of Oracle, what parameters you've set, and how much you'd like to reduce memory consumption would be helpful if you want specific suggestions for parameters to change.
    What does "problematic sqls" mean in this context?
    Justin

  • Reduce SQLDeveloper memory footprint with JDK 1.7

    Hi!
    Some time ago in another thread (Re: Memory problems with Oracle sql developer) there was a suggestion to try the new Garbage-First garbage collector, which should be production in JDK 1.7.
    I use SQLDeveloper with JDK 1.7 on 64bit Linux with good results:
    - everything feels faster, snappier
    - font rendering is different, but it is OK
    - the bugs noted in other threads are not a showstopper for me (the connections pane not showing up on startup, not being able to scroll more than 1 OCI array size of records in results grid)
    In the above-mentioned thread there is a suggestion that the new garbage collector should improve the memory footprint of SQLDeveloper; however, this is not my experience, since it behaves pretty much the same as with JDK 1.6 (resident size between 700 and 900 MB).
    Do I need to use these options (as per the referring thread) to enable the new garbage collector (see below), or is it switched on by default in JDK 1.7? The reduced memory footprint would be very welcome, because I use Oracle Warehouse Builder at the same time (also a Java app) and there is always much pressure on memory.
    AddVMOption -XX:+UnlockExperimentalVMOptions
    AddVMOption -XX:+UseG1GC
    AddVMOption -XX:G1YoungGenSize=25m
    AddVMOption -XX:+G1ParallelRSetUpdatingEnabled
    AddVMOption -XX:+G1ParallelRSetScanningEnabled
    Thanx
    Aleksander

    Hi Aleksander,
    Glad to hear of your good report on Java 7's HotSpot VM regarding performance -- it has various enhancements, of which the new garbage collector is just one. In terms of interpreting memory footprints, take a look at:
    http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html#generation_sizing
    Note the diagram indicates total heap size does not include the permanent generation memory. Xmx limits the heap size (the young and tenured generation). MaxPermSize limits class and method metadata plus static variable content. (Apparently starting back in Java 5 there are even some cases where the permanent generation space can be shared by multiple VM instances to improve start-up time and reduce memory usage.) These two limits control distinct, non-overlapping areas of memory.
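    If you do want explicit ceilings on both areas, they can be set the same way as the GC options you quoted - e.g. in sqldeveloper.conf (the values here are purely illustrative, not recommendations):

        AddVMOption -Xmx640m
        AddVMOption -XX:MaxPermSize=128m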
    When monitoring a Java application's heap consumption with a profiling tool, I doubt the reported usage will exceed the Xmx limit by much. Monitoring with Windows Task Manager, however, can be a bit misleading. I have read several critiques in years past on how Task Manager reports program memory consumption. "Mem Usage" is actually the working set size. "VM Size" is program private memory rather than the true virtual size. And who knows how it tracks the Java VM's permanent generation size. Will it depend on whether it is shared or not?
    So I cannot really recommend any additional parameters to you. Just trust in the Xmx setting and hope that SQL Developer keeps any memory leaks to a minimum.
    Hope this helps,
    Gary

  • Reducing JVM memory footprint

    I want to deploy what may turn out to be a JavaSpaces application on some Windows PC clients. These clients will receive event notifications from a central server and then pop up a GUI for the user to respond to. These clients may be memory-limited, and I am worried about deploying a J2SE runtime purely for this application and eating up 30+ MB of RAM.
    The J2ME runtime seemed to offer some hope as regards reducing the memory footprint, but doesn't really seem aimed at a standard PC.
    Does anyone have any advice as to which direction I should go in as regards the JRE?
    Gary Roussak

    I haven't checked into it thoroughly, but I've heard repeatedly that 1.4 has a smaller RAM footprint. It is still in beta, so that may not be the way you want to go, but you may still want to look into it.
    m

  • Reducing JRE memory footprint

    I want to deploy what may turn out to be a JavaSpaces application on some Windows PC clients. These clients will receive event notifications from a central server and then pop up a GUI for the user to respond to. These clients may be memory-limited, and I am worried about deploying a J2SE runtime purely for this application and eating up 30+ MB of RAM.
    The J2ME runtime seemed to offer some hope as regards reducing the memory footprint, but doesn't really seem aimed at a standard PC.
    Does anyone have any advice as to which direction I should go in as regards the JRE?
    Gary Roussak

    Not all of the virtual memory assigned to a process is a problem - you need to look at the amount of memory that is committed (i.e. consumes pages of RAM and/or disk) to the process and not shared with other processes. Furthermore, the working set (the memory that has been recently accessed and needs to be in memory to avoid thrashing) is often much smaller than that.
    You can substantially reduce the private memory needed by the JVM by minimizing the heap size (-Xmx and -Xms parameters) - I've been able to run real programs accessing databases in as little as 2MB of heap.
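    To pick a sensible floor, it can help to watch what the program actually keeps live - a rough sketch (gc() is only a request, so treat the number as a ballpark):

        public class HeapWatermark {
            public static void main(String[] args) {
                Runtime rt = Runtime.getRuntime();
                rt.gc(); // a hint, not a guarantee
                long usedKb = (rt.totalMemory() - rt.freeMemory()) / 1024;
                System.out.println("live heap after GC: ~" + usedKb + " KB");
            }
        }

    Run your application with something like -Xms2m -Xmx8m and tighten or loosen from there.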
    Chuck

  • Is this the best approach to reduce the memory ??

    Hi -
    I have been given the task of reducing the heap memory so that the system can support a larger number of users. I have used various suggestions given in this forum to find out the size of the object in memory, and I have reached a point where I think I have an approximate (not 100% exact) size of the object in memory.
    I basically have some objects of another class which are created when this object is created. The intent was to initialize the nested objects once and use them in the main object. I saw a significant reduction in the size of the object when I create these objects local to the methods which use them.
    Before moving the objects to method level
    class Helper { void someMethod() { /* stand-in for the poster's real nested class */ } }

    class A {
        private final Helper b = new Helper();
        private final Helper c = new Helper();
        private final Helper d = new Helper();

        public void method1() {
            b.someMethod();
        }
        public void method2() {
            b.someMethod();
        }
        public void method3() {
            c.someMethod();
        }
        public void method4() {
            c.someMethod();
        }
        public void method5() {
            d.someMethod();
        }
        public void method6() {
            d.someMethod();
        }
    }
    After moving the objects to method level
    class A {
        public void method1() {
            Helper b = new Helper(); // created, used, and discarded within the call
            b.someMethod();
        }
        public void method2() {
            Helper b = new Helper();
            b.someMethod();
        }
        public void method3() {
            Helper c = new Helper();
            c.someMethod();
        }
        public void method4() {
            Helper c = new Helper();
            c.someMethod();
        }
        public void method5() {
            Helper d = new Helper();
            d.someMethod();
        }
        public void method6() {
            Helper d = new Helper();
            d.someMethod();
        }
    }
    Note: this object remains in the HTTP session for at least 2 hours. I cannot change the session timeout.
    Is this the better approach to reducing the heap size? What are the side effects of creating all the objects in the local methods, which will be on the stack?
    Thanks in advance

    The point is not that the objects are on the stack - they aren't, all objects are in heap, but that they have a much shorter life. They'll become unreachable as soon as the method exits, rather than surviving until the session times out. And the garbage collector will probably recycle them pretty promptly, because they remain in "Eden space".
    (In future versions of the JVM Sun is hoping to use "escape analysis" to reclaim such objects even faster).
    Of course some objects might have a significant creation overhead, in which case you might want to consider creating some kind of pool of them from which one could get borrowed for the duration of the call. With simple objects, though, the overheads of pooling are likely to be higher.
    Are these objects modified during use? If not then you might simply be able to create one instance of each for the whole application, and simply change the fields in the original class to static. The decision depends on thread safety.
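    A sketch of that last option, with Helper again standing in for the real nested class (safe only if the shared instances are immutable or otherwise thread-safe):

        class A {
            // one shared instance per class instead of one per session-scoped A
            private static final Helper b = new Helper();
            private static final Helper c = new Helper();
            private static final Helper d = new Helper();

            public void method1() { b.someMethod(); }
            public void method3() { c.someMethod(); }
            public void method5() { d.someMethod(); }
            // ... methods 2, 4, and 6 follow the same pattern
        }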

  • How to reduce memory footprint

    Hello, I have observed that Audition always loads the whole audio document into memory, so I'm having quite a problem when processing long multichannel audio files: the whole physical memory gets utilized instantly and I get exhaustive disk swapping. This is especially awkward for long operations. My question is whether there's a way to reduce the memory footprint, i.e. to have Audition load into memory just the part of the file it is currently working with, or to turn off undo. I assume the rapid memory usage is also caused by collecting undo data, which I don't need in most cases.

    You can use the Clear History button in the History page to clear all undo data or selectively clear individual steps from History.

  • SQL Developer Memory Footprint

    We're looking at replacing around 200 TOAD licenses with SQL Developer. The only technical concern is the memory footprint, as in many cases it would be run from a terminal server with dozens of people logging on. A VM Size of 150MB seems to be not unusual for SQL Developer, and that all adds up of course.
    Are there any recommendations for reducing the memory footprint, or at least not letting it get much higher than 150? Features that can be turned off by default, versions of JDK, etc?

    Hi,
    The memory consumption is quite worrying.
    However, changing the code to VB / Delphi would lose Java's write-once-run-anywhere availability. :-)
    You wouldn't be able to use this tool on Solaris, Linux, and Mac without changing the code and compiler, which would be less acceptable.
    I wonder if limiting SQL Dev's initial class load would have an impact on memory consumption.
    And why does it seem that Java's garbage collector doesn't do any collecting, since the memory gets higher and higher over time?
    Or maybe the code doesn't let the objects become collectable?
    I once saw memory reach 500 MB after a cancelled Export Wizard for USER.
    But..... the memory never came down.
    Regards,
    Buntoro

  • How to minimize OC4J memory footprint?

    We use Oracle 9iAS R2 on a Windows 2K server with dual Xeons and 2 GB physical RAM as an integration server. There the various developers all have different OC4Js set up for testing. Now, an Oracle rep once told us that many OC4Js within an iAS instance are no big deal because an OC4J has a memory footprint of about 3 MB RAM.
    That sounded too good to be true. And in our case, it isn't true - an OC4J takes up about 50 MB of RAM right after its start. I tried to minimize the Java heap through the OC4J server attributes in Oracle Enterprise Manager (like "-Xms16") but that didn't make an ounce of a difference.
    So what's the memory footprint of an OC4J inside an iAS instance? And can I reduce it somehow?

    This must be a marketing gag. Client Swing apps tend to consume more than 3 MB.
    I'm not sure, but I guess the iAS is the memory eater. We currently have a test installation on a Solaris machine, and the iAS pages up to 3 GB when accessing the Enterprise Manager web site. Have a look into the performance guide and try to switch off all unnecessary things. Note that there are two different VM switches: -Xms configures the initial heap size a VM should have (growing is expensive), whereas -Xmx configures the maximum size the heap can grow to (64 MB is the default). The documentation also says that 2 MB is the minimum (not much available for OC4J).
    If you find it - please tell me! :-)

  • JavaServer memory footprint - 80-100MB

    Hi,
    We are running WLE 5.1 (patch 63) on Solaris 7. Our WLE domain contains a combination of Java and C++ servers. We are concerned about the memory footprint of the Java servers. On Solaris, the image size of each 'JavaServer' ranges from 80 MB up to 110 MB. We are running up to 20 Java servers in each domain, so obviously we are very concerned about this in production. Even simpapp uses up to 50 MB!
    Are there any good approaches to reducing the footprint of the Java servers, e.g. will combining multiple servers into a single VM (using the MODULE syntax) help?
    We have tried the -mX etc. commands on the VM but with little impact or success.
    Thanks
    Dermot

    You can find out the address ranges for your young, old, and permanent generations by running with the -XX:+PrintHeapAtGC flag. That produces output meant for VM developers, but the information is there. You'll get something like:

        Heap after GC invocations=2 (full 1):
         PSYoungGen      total 3584K, used 0K [0xffffffff38000000, 0xffffffff38400000, 0xffffffff78000000)
          eden space 3072K, 0% used [0xffffffff38000000,0xffffffff38000000,0xffffffff38300000)
          from space 512K, 0% used [0xffffffff38300000,0xffffffff38300000,0xffffffff38380000)
          to   space 512K, 0% used [0xffffffff38380000,0xffffffff38380000,0xffffffff38400000)
         PSOldGen        total 8192K, used 134K [0xfffffffeb8000000, 0xfffffffeb8800000, 0xffffffff38000000)
          object space 8192K, 1% used [0xfffffffeb8000000,0xfffffffeb8021a98,0xfffffffeb8800000)
         PSPermGen       total 24576K, used 2415K [0xfffffffe98000000, 0xfffffffe99800000, 0xfffffffeb8000000)
          object space 24576K, 9% used [0xfffffffe98000000,0xfffffffe9825bfd8,0xfffffffe99800000)

    showing your generations, in order: the permanent generation at [0xfffffffe98000000..0xfffffffeb8000000), the old generation at [0xfffffffeb8000000..0xffffffff38000000), and the young generation at [0xffffffff38000000..0xffffffff78000000). You should be able to match those address ranges up with the output of pmap to explain the pmap output. The collectors we have currently always map the heap as three separate (but adjacent) regions. The order of the generations is different for the different collectors, for obscure internal reasons. The address range shown will be reserved, but won't be committed unless you are using the space.
    That said, I don't know what your mapping at 0000000100114000 is. Do you have any native libraries that malloc space?

  • Memory footprint is HUGE

    I just wanted to see if anyone else has a concern with the memory footprint and when/if this will be addressed. We have an ADF web app and now when we try to run under jdeveloper 11G the combination of jdeveloper and the weblogic java process is over 900M and grows when you do any clicking around. Under the previous TP4 release this was less than half.
    I have windows XP with Firefox, OracleXE, jdeveloper/weblogic and the memory footprint is at 2G. We already had to upgrade our systems, do we need to upgrade yet again???

    It seems that the command line for starting the embedded WebLogic has two instances of the -Xmx and -Xms parameters. I think the last one is the one that is used, and it is set to 1024M, which is large for a good portion of development projects.
    The parameters are present in setDomainEnv.sh/cmd, which is situated in <JDEV_HOME>/system11.1.1.0.31.51.56/DefaultDomain/bin.
    I've seen this directory show up in funny places so search for it if you can't find it.
    I've set the second set of parameters to the same as the first ones -Xms256m -Xmx512m.
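    For reference, the edit amounts to making the second occurrence match the first, i.e. something like the following in setDomainEnv.cmd (the exact variable names differ between releases, so check your copy rather than pasting this in):

        set MEM_ARGS=-Xms256m -Xmx512m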
    Trygve

  • Lightroom's large memory footprint

    After massaging many pictures in "develop" mode, the system began to become very slow (like locking up for 30 seconds). I opened process explorer and found LightRoom was consuming 1.8Gig of virtual memory and had a working set of about 1.2Gig. This seems quite excessive for general photo editing. I'm really only performing simple adjustments like color and contrast.
    I closed down Lightroom and restarted it, and it then worked fine again for another 50 or 60 pictures, at which time slowness occurred again, and the memory footprint was up again. Now that I know what to expect, I'm shutting LR down every 30 pictures or so to avoid the excessive memory consumption.
    I suspect there is a memory leak or creep in LR.
    I have a machine with 4Gig of RAM, running Vista Ultimate.

    EricP,
    LR does "accumulated background work" when nothing else is going on, esp if you have the Library set to All Photos. Also it appears that LR is very sensitive to where the pagefile(s) are located and their size. I only can speak to XP Pro though. Vista is a different animal. You might try putting a controlled size [1.5 RAM size for both Min and Max values] on both [or more] HDs you have. Also set the system to keep as much of the Kernel in RAM as possible and set the system to optimize for Applications. Those changes helped me. If they can be accomplished in Vista, they may help also.
    Good luck and keep us informed if you get any fixes working.
    Mel

  • Apache FOP: reducing/reclaiming memory?

    FOP does a marvelous job of generating documents, indeed. However, the memory footprint is also extraordinary (especially for larger documents). What's the safest way to clean up FOP and reclaim all the memory it used (if possible)?

    The best way is to use multiple page-sequences. Only put so much content in a single page-sequence. I use FOP in production and have successfully produced 1,000+ page documents without running out of memory. It also helps to up the heap size of your JVM using something like -Xmx512m when starting the JVM.
    Another trick is to use SAX instead of DOM to produce the XML that is eventually turned into the PDF. The FOP distribution has examples of using SAX.
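    As a concrete illustration of the SAX route, here is a hedged embedding sketch (FOP 1.x-era API - FopFactory.newInstance() takes arguments in FOP 2.x - and the file names are placeholders):

        import java.io.BufferedOutputStream;
        import java.io.FileOutputStream;
        import java.io.OutputStream;
        import javax.xml.transform.Transformer;
        import javax.xml.transform.TransformerFactory;
        import javax.xml.transform.sax.SAXResult;
        import javax.xml.transform.stream.StreamSource;
        import org.apache.fop.apps.Fop;
        import org.apache.fop.apps.FopFactory;
        import org.apache.fop.apps.MimeConstants;

        public class FoToPdf {
            public static void main(String[] args) throws Exception {
                FopFactory fopFactory = FopFactory.newInstance();
                try (OutputStream out = new BufferedOutputStream(
                        new FileOutputStream("out.pdf"))) {
                    Fop fop = fopFactory.newFop(MimeConstants.MIME_PDF, out);
                    Transformer transformer =
                            TransformerFactory.newInstance().newTransformer();
                    // SAX events stream straight into FOP; no DOM is built,
                    // so completed page-sequences can be rendered and released
                    transformer.transform(new StreamSource("input.fo"),
                            new SAXResult(fop.getDefaultHandler()));
                }
            }
        }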

  • Firefox memory footprint

    greetings,
    i write regarding the memory footprint of the mozilla-firefox package for arch. i downloaded the gtk2/xft binary of firefox 0.9 from the mozilla website and used it in anticipation of the arch package being released. previously on my arch box i had used the mozilla-fire* package rather than the mozilla.org binary. but now i have noticed a discrepancy in memory footprint between the two. 'ps v' gives for the arch package and mozilla.org package, respectively:
    1798 pts/1    S      0:04      0    66 37829 23280 18.2 /opt/mozilla-firefox/l
    1979 pts/1    S      0:06      9  9190 27345 19564 15.3 /tmp/firefox/firefox
    both were taken immediately after firefox startup.  this seems to be a pretty significant difference.  it is enough for me to prefer the mozilla.org package on my obsolete box with 128 megs ram, anyway.
    p.s.  i've been using arch for some time now.  i would just like to take this opportunity to thank those who created and maintain arch linux.  it is an enjoyable distribution

    i have mozilla-firefox using mem this way:
    13600 pts/32 S 0:00 0 47 3604 2140 0.2 /opt/gnome/libexec/gconfd-2 11
    13651 pts/32 S+ 0:00 0 577 1666 1096 0.1 /bin/sh /opt/mozilla-firefox/bin/firefox
    13669 pts/32 S+ 0:00 0 577 1702 1108 0.1 /bin/sh /opt/mozilla-firefox/lib/firefox-0.9/run-mozilla.sh /opt
    13674 pts/32 S+ 0:02 0 66 49069 27464 3.5 /opt/mozilla-firefox/lib/firefox-0.9/firefox-bin
    13675 pts/32 S+ 0:00 0 66 49069 27464 3.5 /opt/mozilla-firefox/lib/firefox-0.9/firefox-bin
    13676 pts/32 S+ 0:00 0 66 49069 27464 3.5 /opt/mozilla-firefox/lib/firefox-0.9/firefox-bin
    13678 pts/33 Ss 0:00 0 577 2854 2456 0.3 -bash
    13691 pts/32 S+ 0:00 0 66 49069 27464 3.5 /opt/mozilla-firefox/lib/firefox-0.9/firefox-bin
    13692 pts/32 S+ 0:00 0 66 49069 27464 3.5 /opt/mozilla-firefox/lib/firefox-0.9/firefox-bin
    13693 pts/32 S+ 0:00 0 66 49069 27464 3.5 /opt/mozilla-firefox/lib/firefox-0.9/firefox-bin
    13694 pts/32 S+ 0:00 0 66 49069 27464 3.5 /opt/mozilla-firefox/lib/firefox-0.9/firefox-bin
    and i cannot see any problem with that
    but i found out something strange while running it:
    [damir@Asteraceae /]$ mozilla-firefox
    LoadPlugin: failed to initialize shared library /opt/mozilla-plugins/Blender3DPlugin.so [/opt/mozilla-plugins/Blender3DPlugin.so: undefined symbol: _ZTV16nsQueryInterface]
    libxpt: bad magic header in input file; found 'XPCOM
    TypeLib
    –@', expected 'XPCOMnTypeLibrn32'
    *** loading the extensions datasource
    [damir@Asteraceae /]$
    the blender plugin is broken --- funny enough, this is the first time i've heard that such a thing exists, so it would be nice if someone else could confirm this ;-)
