Reentrant shared clones between instances: memory usage

Hi guys. 
I have a question on the "shared clones between instances" reentrancy setting. 
I understand the concept of shared clones between instances: the next clone is brought into memory after a previous clone leaves memory. 
My question is: how many instances of a clone can run simultaneously? How much memory is allocated for reentrant shared clones between instances?
If I use Call VI by Reference as shown in the figure, will it open more than one VI and keep them all in memory? 

You might be just a bit confused about clones, clone pools, and that 0x80 option setting. 
Your code as written will return an error on subsequent calls unless the VI has finished first.
Combining the 0x80 flag with 0x40 prepares the VI to be called reentrantly, so each call to the modified VI causes one dataspace to be checked out of the clone pool. The default is one dataspace per core in the clone pool; that can be increased (but not decreased) with the Populate Async Call Pool method. 
Each time that subVI runs, another dataspace is checked out of the clone pool. If none are available, additional dataspaces are created on the fly; this takes time you will notice. 
The subVI is responsible for returning the dataspace to the clone pool when it finishes.
As for how many can be in memory at one time: how much memory can your application access, and how much memory does each VI's dataspace take?
Jeff
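The checkout/return behavior Jeff describes is essentially an object pool. A rough sketch in Java (the names and structure here are my own analogy for LabVIEW's clone pool, not any NI API):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Rough analogy (my own names, not an NI API) for how a reentrant
// clone pool hands out dataspaces: each call checks one out, a new
// one is created on the fly if the pool is empty, and the callee
// returns it to the pool when it finishes.
public class ClonePool {
    private final Deque<int[]> pool = new ArrayDeque<>();
    private int created = 0;

    public ClonePool(int preallocated) {
        // Default would be one dataspace per core; preallocation grows this.
        for (int i = 0; i < preallocated; i++) {
            pool.push(newDataspace());
        }
    }

    private int[] newDataspace() {
        created++;
        return new int[1024]; // stand-in for a VI's dataspace
    }

    // Called at VI call time: reuse a pooled dataspace, or grow the pool
    // on the fly (the slow path mentioned above).
    public synchronized int[] checkOut() {
        return pool.isEmpty() ? newDataspace() : pool.pop();
    }

    // Called when the VI finishes.
    public synchronized void checkIn(int[] dataspace) {
        pool.push(dataspace);
    }

    public int totalCreated() {
        return created;
    }
}
```

With a pool preallocated to the core count, concurrent calls beyond that number force on-the-fly allocation; once returned, dataspaces are reused without further growth.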

Similar Messages

  • Linux memory usage

    Hey all,
    I have a 1.6 GHz desktop running with 128 MB of memory and a Core 2 Duo laptop with 1 GB of memory. Both machines are running Arch Linux and Fluxbox. At boot, both machines use roughly the same amount of memory, around 26-30 MB, but as soon as Fluxbox is started the desktop runs around 40-50 MB while the laptop sits at 130-140 MB. Those numbers do not include buffer/cache. They both have similar configurations, so I'm curious why there is such a drastic difference in memory usage between the systems when Fluxbox is started?
    -Vincent

    What graphics card do you use on your laptop? Any integrated graphics with shared memory by chance? Then you have your answer.

  • Shared memory:  apache memory usage in solaris 10

    Hi people, I have set up a project for the apache user ID and set the new equivalent of shmmax for the user via projadd. In apache I crank up StartServers to 100, but the RAM is soon exhausted: apache appears not to use shared memory under Solaris 10. Under the same version of apache on Solaris 9 I can fire up 100 apache StartServers with little RAM usage. Any ideas what can cause this / what else I need to do? Thanks!

    a) How or why does Solaris choose to share memory between processes from the same program invoked multiple times, if that program has not been specifically coded to use shared memory?
    Take a look at 'pmap -x' output for a process.
    Basically it depends on where the memory comes from. If it's a page loaded from disk (executable, shared library) then the page begins life shared among all programs using the same page. So a small program with lots of shared libraries mapped may have a large memory footprint but have most of it shared.
    If a page is written to, then a new copy is created that is no longer shared. If the program requests memory (malloc()), then the heap is grown and it gathers more private (non-shared) page mappings.
    Simply: if we run pmap / ipcs we can see a shared memory reference for our Oracle database and LDAP server. There is no entry for apache. But the total memory usage is far less than all the apache procs' individual memory totted up (all 100 of them, in prstat). So there is some hidden sharing going on somewhere that Solaris (2.9) is doing, but not showing in pmap or ipcs. (Virtually no swap is being used.)
    pmap -x should be showing you exactly which pages are shared and which are not.
    b) Under Solaris 10, each apache process takes up precisely the memory reported in prstat: add up the 100 apache memory details and you get the total RAM in use. Crank up the number of procs any more and you get out-of-memory errors, so it looks like prstat is pretty accurate here. The question is: why on Solaris 10 is apache not 'shared', but it is on Solaris 9? We set up all the usual project details for this user (in /etc/projects), but I'm guessing now that these project tweaks, where you explicitly set the shared memory for a user, only take effect for programs explicitly coded to use shared memory, e.g. the Oracle database, which correctly shows up with a shared memory reference in ipcs.
    We can fire up thousands of apaches on the 2.9 system without running out of memory, and both machines have the same RAM! But the binary versions of apache are exactly the same, and the config directives are identical. Please tell me that there is something really simple we have missed!
    On Solaris 10, do all the pages for one of the apache processes appear private? That would be really, really unusual.
    Darren
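    The sharing arithmetic in this thread can be made concrete with a toy calculation (numbers entirely made up, not real pmap or prstat output): summing each process's full RSS counts every shared page once per process, while the real footprint counts shared pages only once.

```java
// Toy model of shared vs. private pages (made-up numbers, not real
// pmap or prstat output). Each process is {privateKb, sharedKb}.
public class SharedPages {
    // prstat-style naive total: add up each process's full RSS.
    public static long summedRss(long[][] procs) {
        long total = 0;
        for (long[] p : procs) total += p[0] + p[1];
        return total;
    }

    // Closer to reality: private pages per process, shared pages counted once.
    public static long actualRam(long[][] procs, long sharedKb) {
        long total = sharedKb;
        for (long[] p : procs) total += p[0];
        return total;
    }
}
```

With 100 processes each showing 1 MB private plus the same 9 MB shared text segment, the naive sum is about 1000 MB while the real footprint is about 109 MB, which is the kind of "hidden sharing" the poster saw on Solaris 9.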

  • How to find out CPU and memory usage for an instance?

    Hi DBA Gurus,
    How to find out CPU usage and memory usage for an instance?
    Any information is appreciated!
    Thank you!
    Robert

    You can estimate CPU usage by adding the following three components, which you can get from v$sysstat:
    1. Parse CPU time: this represents the percentage of CPU time spent parsing SQL statements. High parse-time CPU can be a strong indication that an application has not been well tuned; it usually means the application is spending too much time opening and closing cursors or is not using bind variables.
    2. Recursive CPU time: sometimes, to execute a SQL statement issued by a user, the Oracle Server must issue additional statements. Such statements are called recursive calls or recursive SQL statements. For example, if you insert a row into a table that does not have enough space to hold that row, the Oracle Server makes recursive calls to allocate the space dynamically if dictionary-managed tablespaces are being used.
    Recursive calls are also generated by unavailability of dictionary info in the dictionary cache, firing of database triggers, execution of DDL, execution of SQL within PL/SQL blocks, functions or stored procedures, and enforcement of referential integrity constraints.
    3. Other CPU time: this represents the percentage of time spent looking for buffers, fetching rows or index keys, etc. Generally, "Other" CPU should represent the highest percentage of CPU time out of the total CPU time used.
    Total memory used you can estimate by adding the total aggregate PGA area + SGA.
    Memory usage at the OS level you can check with commands such as
    vmstat 5 20, depending upon the OS.
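    The breakdown above is just subtraction and percentages. A minimal sketch (method names are mine; the three inputs are assumed to have already been read from v$sysstat):

```java
// Minimal sketch of the CPU breakdown described above. Inputs are
// assumed to come from v$sysstat counters; names are illustrative only.
public class CpuBreakdown {
    // "Other" CPU is whatever total CPU remains after parse and recursive time.
    public static long otherCpu(long totalCpu, long parseCpu, long recursiveCpu) {
        return totalCpu - parseCpu - recursiveCpu;
    }

    // Percentage of total CPU that one component accounts for.
    public static double pct(long component, long totalCpu) {
        return 100.0 * component / totalCpu;
    }
}
```

For example, with 1000 units of total CPU, 150 parse and 50 recursive, "Other" is 800 units, or 80% of the total, which matches the advice that "Other" should normally dominate.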

  • Memory usage in more than 1 instance

    Hi,
    We have 2 instances on linux, RAM is 16GB. SGA_MAX_SIZE is 3 GB each instance and kernel parameter shmmax is set to 8GB.
    Now I wanted to know how Oracle memory is counted on the server: does it count 3 GB + 3 GB = 6 GB, or only 3 GB?
    If possible then please provide any doc. link.
    Thanx

    Each instance has its own memory that is never shared with any other instance.
    Oracle instance memory is always SGA and PGA.
    See Concepts Guide for more info: http://download.oracle.com/docs/cd/E11882_01/server.112/e10713/memory.htm#i12483
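    For the sizing question above, the arithmetic is simply additive per instance (a sketch; real usage also includes PGA on top of each SGA):

```java
// Each instance reserves its own SGA; nothing is shared between instances,
// so the server must accommodate the sum (sketch; PGA comes on top).
public class SgaSizing {
    public static long totalSgaGb(long... sgaGbPerInstance) {
        long total = 0;
        for (long gb : sgaGbPerInstance) total += gb;
        return total;
    }
}
```

Two instances with SGA_MAX_SIZE of 3 GB each therefore account for 6 GB of the 16 GB of RAM.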

  • Preallocated clones vs shared clones

    Hi,
    I'm building an application that plays and records sound via multiple (20) sound cards. The playback and recording parts are encapsulated within a VI called Process.VI. The Process.VI contains several layers of different subVIs.
    The Process.VI runs in parallel (one instance per sound card) and is therefore a reentrant clone, along with all its subVIs. I have chosen to set the Process.VI and all its subVIs to be preallocated reentrant clones instead of shared clones, to minimize jitter and call overhead. I know this means that LabVIEW must allocate a clone for each instance of every VI, but will this become a problem for me given that the loops containing all the VIs run pretty fast? The playback loop and the recording loop run with 40 ms timing due to the requirement of small buffer sizes.
    Will the memory usage grow because all VIs are preallocated clones and are repeatedly called every 40 ms?
    LabVIEW version: 12.0.1f5
    Best regards
    Jon
    JonS

    If they are all running, then shared vs preallocated won't matter.
    I would reevaluate your subVIs. Those that are really quick may be able to be non-reentrant. That could save you a lot of memory at the expense of possible jitter (no more than Windows will do to you, though). The compromise between preallocated reentrant and non-reentrant is the shared clone reentrant setting. But if you keep state in any of those subVIs, they should never be set to shared clone.

  • Memory usage indicator - accurate? successful live gig notes

    I did a 3-set coverband thing last night with MainStage, and all went well. I covered everything using MainStage, Pianoteq for pianos, the rest was all Logic's own mass of sounds (out of all the installed Logic Studio package plus a bunch of stuff from the L7 era).
    Worked great, but what bothered me is this:
    while setting up the sets, I was watching the memory meter, and it was nearly pegged, so I started getting creative about how to set stuff up.
    Strangely though, when I saved, quit, and reloaded the concert, the memory indicator showed practically nothing (comparatively). What exactly does this meter display: available memory after taking into account any other processes possibly running on the machine, or something else? After the initial worry that I lost something somewhere, I ran through all the patches and everything was fine. Even after running through all the patches, the memory indicator never moved up again...
    By the way. I literally programmed the whole concert on my quad G5, saved the concert file out to a memory stick, and popped it into my MacBook Pro and read it in from there. The G5 uses mLAN, for this gig I took along an old MOTU 828 instead of ripping stuff out of my studio. Worked like a charm... Quad is Tiger, MBP is Leopard; no problem! Did I test it first, yes, but not completely. Forgot to load the USB MIDI driver from Yamaha so that the MBP would see my Motif ES8 controller. No problem. Connected over Bluetooth to my 3G phone here and downloaded the driver while the rest of the crew were hooking up cables. Came back in all fixed in less than 10 minutes.
    Set list? In Numbers. Used Spaces to switch back and forth between seeing the setlist and the display of MainStage. I actually think I'll put some of the charts in .pdf format and put those in another space to look at (I'm not a regular in this particular band so there's some arrangement "stuff" I need reminders of).
    Sounds? I had some reservations about using Pianoteq for pianos live, but there are some "patches" on their website that are better for this kind of gig. Used one of those for the "stock" piano sound. I have Ivory and TruePianos as well but wanted to work without an external drive to stream Ivory samples from and TruePianos seems to use a lot of CPU on my Quad, so I didn't want to run into a possible issue there. Can load up lots of Pianoteq instances without any probs... The EVB3 just plain rocks. Did a couple of things that worked nicely with Delay Designer including a cool built-up intro to Chaka Khan's Ain't Nobody (I hate playing that song - need 6 hands to cover everything! but the intro got smiles from the band). Normally I'd use some stuff from Atmosphere, but the Intel-wrapper thing they've done was something I didn't want to try live just yet.
    The most striking "takeaway" from this gig was that although I had my Motif with me (and it has always been the "main horse" for live gigs for the sporadic once in a while case where I do them), I never used it. It was purely a controller. I hooked up audio to for the case that the Mac would fail horribly, but it didn't happen. Seriously, a live gig, all sounds coming from a Mac. I used to haul around 3 synths, an 88 key controller, and a 20-space rack years ago, this just kills that!!!
    Anyway, repeat gig next week Saturday with the same band - gotta work up some surprises in the mean time!
    Michael

    Hi Michael,
    thanks for sharing your experience. I too have had the same kind of issue with memory as you describe (see previous posts). Usually when I am trying to assemble a number of patches for a gig, I frequently load and unload different synths, sounds, etc., trying to get the right sounds for a song. However, as I do this, the memory usage continues to grow until it's in the red. What I have noticed (on my MacBook Pro) is that if I just shut down and restart MainStage, the memory meter stays pretty much the same, but once I shut down and restart my computer, loading up MainStage indicates a smaller memory usage. I haven't found a workaround for this other than to make sure I do an absolutely clean boot, with networking shut down and no other apps running, just to ensure I have a stable environment. So far, I haven't had any gig-crashing events, but the instability causes me to check MainStage very frequently throughout my gig to make sure nothing has gone wrong. I would sure like Apple to fix these issues.

  • Problem with Firefox and very heavy memory usage

    For several releases now, Firefox has been particularly heavy on memory usage. With its most recent version, with a single browser instance and only one tab, Firefox consumes more memory than any other application running on my Windows PC. The memory footprint grows significantly as I open additional tabs, getting to almost 1GB when there are 7 or 8 tabs open. This is true with no extensions or plugins, and with the usual set (Firebug, Firecookie, Fireshot, XMarks). Right now, with 2 tabs, the memory is at 217,128K and climbing; CPU is between 0.2 and 1.5%.
    I have read dozens of threads providing "helpful" suggestions, and tried any that seem reasonable. But like most others who experience Firefox's memory problems, none address the issue.
    Firefox is an excellent tool for web developers, and I rely on it heavily, but I have now resorted to using Chrome as the default and only open Firefox when I must, in order to test or debug a page.
    Is there no hope of resolving this problem? So far, from responses to other similar threads, the response has been to deny any responsibility and blame extensions and/or plugins. This is not helpful and not accurate. Will Firefox accept ownership of this problem and try to address it properly, or must we continue to suffer for your failings?

    55%, it's still 1.6Gb....there shouldn't be a problem scanning something that it says will take up 300Mb, then actually only takes up 70Mb.
    And not wrong, it obviously isn't releasing the memory when other applications need it because it doesn't, I have to close PS before it will release it. Yes, it probably is supposed to release it, but it isn't.
    Thank you for your answer (even if it did appear to me to be a bit rude/shouty, perhaps something more polite than "Wrong!" next time) but I'm sitting at my computer, and I can see what is using how much memory and when, you can't.

  • High RAM memory usage compared to other systems

    I'm having memory usage problems: just opening 2 tabs of Facebook (only an example; any 'heavy' page freezes it too) in Firefox (also just an example; this happens in Chromium too) freezes the computer.
    I have 1 GB of memory; it isn't big, but it also isn't very little.
    I'm using XFCE with Compiz (yes, Compiz, since using a very lightweight Openbox-based session doesn't help). When the system starts (after login) it is consuming approximately 130 MB of RAM.
    After 1 hour, if I close all open apps and check memory usage again, consumption is about 350 MB; the "only solution" is to reboot.
    All applications I've tried for monitoring memory usage (htop, free -m, XFCE's system monitor...) show the above scenario, except the 'ps_mem' script from the AUR.
    In the 'ps_mem' script the result is the following (with the Opera browser open, the least memory-hungry browser, but I really prefer Firefox):
    Private + Shared = RAM used Program
    88.0 KiB + 10.0 KiB = 98.0 KiB agetty
    380.0 KiB + 34.5 KiB = 414.5 KiB sshd
    408.0 KiB + 93.0 KiB = 501.0 KiB gpg-agent (2)
    372.0 KiB + 142.0 KiB = 514.0 KiB avahi-daemon (2)
    456.0 KiB + 60.0 KiB = 516.0 KiB systemd-logind
    280.0 KiB + 261.0 KiB = 541.0 KiB sh
    476.0 KiB + 103.5 KiB = 579.5 KiB xfconfd
    476.0 KiB + 153.5 KiB = 629.5 KiB gvfsd
    604.0 KiB + 34.5 KiB = 638.5 KiB systemd-udevd
    588.0 KiB + 102.5 KiB = 690.5 KiB dbus-launch (3)
    576.0 KiB + 119.5 KiB = 695.5 KiB sudo
    668.0 KiB + 77.5 KiB = 745.5 KiB cups-browsed
    620.0 KiB + 179.5 KiB = 799.5 KiB at-spi2-registryd
    804.0 KiB + 67.0 KiB = 871.0 KiB htop
    756.0 KiB + 131.0 KiB = 887.0 KiB gconfd-2
    800.0 KiB + 126.0 KiB = 926.0 KiB upowerd
    752.0 KiB + 183.5 KiB = 935.5 KiB xscreensaver
    920.0 KiB + 85.0 KiB = 1.0 MiB cupsd
    876.0 KiB + 241.0 KiB = 1.1 MiB gvfsd-fuse
    692.0 KiB + 429.0 KiB = 1.1 MiB systemd-journald
    880.0 KiB + 273.5 KiB = 1.1 MiB at-spi-bus-launcher
    1.2 MiB + 125.0 KiB = 1.3 MiB udisksd
    1.4 MiB + 486.5 KiB = 1.9 MiB dbus-daemon (5)
    1.4 MiB + 533.5 KiB = 1.9 MiB panel-6-systray
    1.5 MiB + 430.0 KiB = 2.0 MiB lightdm (2)
    1.6 MiB + 572.0 KiB = 2.1 MiB xfce4-session
    1.5 MiB + 683.0 KiB = 2.2 MiB panel-5-datetim
    1.5 MiB + 706.5 KiB = 2.2 MiB panel-2-actions
    1.6 MiB + 723.0 KiB = 2.3 MiB panel-4-systeml
    2.0 MiB + 543.5 KiB = 2.5 MiB xfsettingsd
    2.0 MiB + 579.5 KiB = 2.6 MiB systemd (3)
    2.3 MiB + 815.0 KiB = 3.1 MiB emerald
    2.8 MiB + 578.5 KiB = 3.3 MiB gnome-keyring-daemon (3)
    3.2 MiB + 946.5 KiB = 4.1 MiB zsh (2)
    16.1 MiB + -12144.0 KiB = 4.2 MiB polkitd
    4.1 MiB + 465.0 KiB = 4.6 MiB notify-osd
    4.5 MiB + 1.6 MiB = 6.2 MiB xfce4-panel
    5.1 MiB + 1.2 MiB = 6.3 MiB panel-7-mixer
    7.7 MiB + 1.4 MiB = 9.0 MiB xterm (2)
    12.0 MiB + 636.0 KiB = 12.6 MiB opera:libflashp
    24.3 MiB + -10774.5 KiB = 13.8 MiB Xorg
    18.6 MiB + 1.1 MiB = 19.7 MiB gnome-do (2)
    23.2 MiB + 2.4 MiB = 25.6 MiB compiz
    168.4 MiB + 3.2 MiB = 171.7 MiB opera
    320.1 MiB
    =================================
    As you can see, in 'ps_mem' the memory usage isn't absurd at all; it looks OK.
    Another example: this problem stops me from doing certain things. For instance, it is impossible to browse the web while programming in Eclipse, which is very uncomfortable.
    Also, this does not occur in Windows XP: there I can open more than 10 tabs in Firefox with Eclipse open, etc...
    If you need any more information, tell me!
    firefox*: with memory cache turned off in about:config
    Last edited by hotvic (2013-08-12 17:42:21)

    hotvic, could you please change the title to something more descriptive? 'memory problems' sounds like you have problems e.g. remembering / recalling things, which is not an Arch issue ;P
    kenny3794 wrote:
    I've had some limiting performance with browsers lately (Firefox and Chromium).  I found it had a considerable cache size utilizing lots of fragmented space on my home directory. I cleared the cache and have since set a limit to the cache size of 30 MB for Firefox.  Between multiple users across multiple browser sessions, I had at least 2 GB of cache data on my 10 GB home partition!  So far (1 day), this has been helpful.  Perhaps this could help you also?
    Ken
    OP is talking about RAM, not HDD space.

  • Free Disk Space vs Mac Memory Usage

    Hi,
    I'm wondering if you might be able to help me - I'm so confused with this issue.
    I have a MacBook 2008 with an upgraded 320GB HDD, with currently 203GB free disk space.
    My backup disk is 120GB - perfect for a Time Machine backup of 117 GB of data. However, TM says there's not enough space because the full size of the backup is 150GB? How can this be?
    For some reason, the free disk space figure is different from the memory usage figures in About My Mac?
    Can anyone explain to me why this is?
    My other half uses this computer and has only pictures and files on it, no music or movies? So how can she have utilised 117GB of data? This was previously a shared Mac, but I've deleted my user acc since upgrading and wanted to make this Mac entirely hers.
    Any help regarding this will be truly appreciated.
    Sincerely,
    Tarmak

    Carbon Copy Cloner. It is "donateware"... pay if you want to. Others use SuperDuper! (both a free and a paid version). I prefer CCC, but do not let that stop you from paying for SuperDuper if others convince you, as both are very competent.
    It makes a bootable copy of your system partition, and used repeatedly will make "incremental" backups that take far less time than the original backup. I have a Time Machine backup (I consider it "secondary") and a CCC backup which I consider "primary" and which will eventually save me when my system partition gets cranky and refuses to boot -- use the option key at startup to choose the external disk to boot from.
    When you boot from the external drive, which will have CCC running in it, you can then "clone back" to the new internal drive.

  • Logic Pro 8, Kontakt 2.23 & Memory usage

    Hello -
    *What type of known memory problems might exist between Logic (Logic 8 in particular) and either Kontakt 2.23 or Kontakt Player 2.23??*
    *If there are known memory usage issues, what can be done to help/fix the situation??*
    My computer set up is listed below. I have not upgraded to Leopard (but did upgrade to Logic Pro 8). By and large, Logic Pro 8 works fine. However, I usually "network" the Logic sequencing projects with Muse Research's Receptor (version B) and/or an aging Dell Precision Workstation 650 using MusicLab's MIDIoverLAN CP3. This has proven stable for me. For my most recent project, I decided to see how many instrument plug-ins I can utilize within the Logic Pro 8 program (without benefit of this "network" set-up). For this project, I'm using one instance of MusicLab's RealGuitar and 16 instances of +*Kontakt Player version 2.23*+ for a total of 17 instrument tracks. The instrument libraries used are Garritan's GPO and JABB. With this many individual (not multi) track instances used, Activity Monitor is showing high RAM usage. For some of the instruments, I could not use the KS choices (trumpet, trombone) because the extra RAM caused Logic to crash as I mounted more instruments.
    I tried adjusting the DFD. Still, there remained high RAM usage and the occasional crash. For some of the crashes, I either got the endless colored spinning wheel or a message basically stating that there was no memory available for the DFD. I am no expert with the advanced workings of Logic and the DFD associated with the Kontakt plug-ins. But I am surprised that using just 17 instrument tracks with this newer Mac Pro computer (with 5 GBs RAM) is causing problems. Sadly, I am not an expert Apple user to be able to tell whether it's the computer or Logic or Kontakt. I suspect, though, its some kind of issue between Kontakt and Logic.
    For my audio, I use MOTU's 2408 MK3. For my midi in-put I use M-Audio's Keystation Pro 88. I do not think either one of these devices are causing the apparent memory problems.
    By the way... before I purchased my newer Mac Pro computer, I had a similar instrumental set-up using my aging Dell Precision 650 workstation with 2 GB of RAM running Steinberg's SX3 program. I used the same GPO and JABB sound libraries. It did "challenge" the Dell computer (I had to adjust buffer/latency settings on MOTU's 828 MK2 used with the Dell computer), but the computer didn't crash like what I'm experiencing at present.
    I don't mind "networking" my computers together in order to use larger instrumental plug-in tracks. But I would think that given the "power" of my newer MacPro I should be able to handle more than 17 GPO & JABB tracks. Yes? No?
    *Given my current computer set-up, how many instrumental tracks (using libraries associated with Kontakt or the Kontakt Player) should I get??*
    Thank you in advance.
    Ted

    Sadly, I can chime in with the same basic problem: namely, that Kontakt 2 seems to leak RAM. I've watched Activity Monitor show an increase in RAM allocated to Logic if I simply LOOK at a Kontakt instance. Just clicking on the instance causes RAM to be allocated, and no matter how much I purge samples, I just can't retrieve that RAM. Once 2 gigs are sent to Logic, crash.
    The only solution I've found is to save & restart before it climbs too high. I can then work for an hour or so until the ram overflows again. This is obviously a major bug, and I agree that Logic doesn't appear to be the culprit.
    Have either of you tried K3? I wonder if the problem is more under control there? There's no way they will bother to fix this bug in v2.23 with v3 on the shelves. The question is, how do we know that v3 fixes it without buying it? I've heard reports that v3 is even more bloated than v2.
    I've recently done a clean install of Leopard on a blank drive, and I'm patiently waiting until NI updates the installers for K2. Then I'll have a clean OS, a clean newly installed Logic 8, and a cleanly installed K2. Maybe that will help mitigate, if not solve, the problem.
    I'm almost sure that Kontakt is the source of the problem, but I've also been told that Intel Macs have serious flaws in their RAM allocation WITH TIGER.
    Some people with G5s have reported having access to 3.5 gigs of RAM; Intel Macs can only use 2 gigs with any one app. This was thought to be a problem with Tiger and Intels. So maybe Leopard will expand the RAM wallet, and then we just have to wait for Kontakt to stop spending so much.

  • [HELP ]unexpected low memory usage

    Hi all,
    Could someone help me out with the unexpectedly low memory usage I am seeing when I start the DB instance?
    I am running Oracle 8.1.7.3.2 on Windows 2000 Adv Srv with
    2GB of RAM and 3GB of swap area.
    I only have 1 instance, with memory-related init parameters set as below.
    shared_pool_size = 293510451
    large_pool_size = 104857600
    java_pool_size = 20971520
    db_block_buffers = 107289
    sort_area_size = 1048576
    With an average of about 150~250 sessions, I was expecting Oracle to allocate about 1.5GB of RAM when the instance starts up.
    Now, when I start up the instance, EXPECTING Oracle to allocate about 1.5GB of RAM for itself, and check the memory allocation status from the Task Manager, it shows that 1.4GB of physical memory is AVAILABLE. In other words, the whole system is only using 600MB of physical memory.
    Could anyone tell me why this is happening and how to solve this problem?
    BTW, even if I put load on Oracle using Mercury LoadRunner, the memory allocation status does not change much while Oracle is barely processing each transaction.
    Thanks in advance

    I work mostly with UNIX but found the following from Metalink document# 46001.1 Oracle Database and the Windows NT memory architecture, Technical Bulletin. I think this answers your question:
    On starting up an Oracle instance all global memory pages are reserved and committed (Total Shared Global Area, Buffer Cache and Redo Buffers). Only a small number of these memory pages are touched on startup and are thus not in Oracle's working set, as more pages are touched they will be brought into memory. Oracle must contend equally with other processes and will have its working set trimmed if other processes are faulting at a greater rate.
    It is unlikely that this trimming will be desirable, especially when the database has a varying workload (high and low usage periods). Two registry parameters exist to allow the administrator to manipulate the working set bounds of the Oracle process, these are :
    - ORA_WORKINGSETMIN or ORA_%SID%_WORKINGSETMIN
    : Minimum working set for the ORACLE.EXE process (units = MB)
    - ORA_WORKINGSETMAX or ORA_%SID%_WORKINGSETMAX
    : Maximum working set for the ORACLE.EXE process (units = MB)
    These parameters apply to all releases from 7.3.x and should be added to the registry.

  • Memory Usage:  Heap space vs System Monitor

    Hi guys
    I'm trying to wrap my head around something which seems pretty complicated. We have an enterprise app making use of Jasper Reports. When a report is generated, the app's memory usage in the Linux (Ubuntu 8.10) System Monitor shoots up by around 20MB (the more data on the report, the higher that number). After the report is closed, memory usage doesn't decrease. This is resulting in performance issues, and memory usage given by the System Monitor doesn't match up with the results given by the NetBeans profiler. There is a huge difference between the values indicated by the System Monitor, and the heap space given by the profiler. I've done some fairly detailed checking using the profiler, and according to that, there are 0 instances of our report generating classes in memory after closing a generated report. I've read a bunch of articles on the net, but I'm still stumped. My best guess is that something weird is happening somewhere in the Jasper code, but I can't really think of a way to confirm that. Does anyone have any idea what could be causing this and how to prevent it? Any help will be much appreciated; thanks.
    P.S. I've posted on the Jasper forums as well, but to no avail. [http://jasperforge.org/plugins/espforum/view.php?group_id=102&forumid=103&topicid=65335]
    Edit: We're using Java 6 (we've recreated the issue on a few release of Java 6), and Jasper Reports 3.0.1. (Just in case that is important)

    One thing about your test is that using -Xms110m will cause the JVM to NEVER give back any memory, because the heap won't ever shrink below 110M. I altered your test a bit:
    import java.util.LinkedList;
    public class MemTest {
        private static LinkedList<byte[]> mem = new LinkedList<byte[]>();
        public static void main(String[] args) {
            try {
                System.out.println("Initial:");
                print();
                useMem();
                System.out.println("Allocated:");
                print();
                clearMem();
                System.out.println("Deallocated:");
                print();
                for (;;) {
                    Thread.sleep(10000);
                    System.out.println("After 10 seconds: ");
                    print();
                }
            } catch (Exception ex) {
                ex.printStackTrace();
            }
        }
        private static void useMem() {
            // Fill the heap until allocation fails.
            for (;;) {
                try {
                    mem.add(new byte[1024 * 1024]);
                } catch (OutOfMemoryError e) {
                    // Free up a few MB to make sure we don't get an OOME while trying to print, etc.
                    mem.removeFirst();
                    mem.removeFirst();
                    mem.removeFirst();
                    break;
                }
            }
        }
        private static void clearMem() {
            mem = null;
            System.gc();
            System.gc();
            System.gc();
        }
        private static void print() {
            long free = Runtime.getRuntime().freeMemory();
            long total = Runtime.getRuntime().totalMemory();
            double freeFraction = (((double) free) / total) * 100;
            System.out.printf("\tAvailable: %d\tTotal: %d\tFree: %4.2f%%%n", free, total, freeFraction);
        }
    }
    When I ran it with
    -Xms16m -Xmx1024m -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=30
    I got these results:
    Initial:
         Available: 16518016     Total: 16711680     Free: 98.84%
    Allocated:
         Available: 2070952     Total: 1065484288     Free: 0.19%
    Deallocated:
         Available: 582957376     Total: 583073792     Free: 99.98%
    After 10 seconds:
         Available: 582953224     Total: 583073792     Free: 99.98%

    As you can see, the JVM did return about half of the memory back. ProcExp shows ~592MB used, peaking at ~1000MB. But right now it's still running, and still holding at 583MB.
    I've read bug reports like this, but honestly have no idea what the people at Sun are talking about. I don't see the footprint being reduced. I don't know, maybe if I wait a couple of days...
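    The "Total" figure printed above is the heap the JVM has committed from the OS (what Runtime.totalMemory() reports); the same numbers can be read through the java.lang.management API, which separates "used" from "committed" explicitly. A minimal sketch for watching this from inside the process (the class name HeapWatch is just for illustration):

    ```java
    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryUsage;

    public class HeapWatch {
        public static void main(String[] args) {
            // "used" is data currently in the heap; "committed" is what the
            // JVM has actually reserved from the OS -- the figure that tracks
            // what top/ProcExp report for the heap's share of the process.
            MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
            System.out.println("used      = " + heap.getUsed());
            System.out.println("committed = " + heap.getCommitted());
            System.out.println("max       = " + heap.getMax());
        }
    }
    ```

    Committed only shrinks back toward used if the collector decides to release pages, which is exactly what the MinHeapFreeRatio/MaxHeapFreeRatio flags influence.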

  • QUESTION: Multi-instance memory requirements on Linux

    Hi, all.
    I've been out of the loop on Oracle technical details for a while, and need to re-educate myself on a few things. I'm hoping someone can point me to a book in the online docs which discusses my question.
    Oracle db 10.2.0.2, on Redhat Linux 2.6.9-67.0.0.0.1. This server is a virtual machine, on a VMWare ESX server.
    My question concerns the utilization of memory resources in a multi-instance environment.
    I currently have 2 instances/dbs on this server. Each was configured with an SGA_TARGET of approximately 900MB. java_pool_size, large_pool_size and shared_pool_size are also assigned values in the pfile, which I believe supersedes SGA_TARGET.
    I am tasked with determining if the server can handle a third instance. It's unclear how much load the database will see, so I don't yet know how much memory I will want to allocate to the shared pool, buffer cache, etc.
    I wanted to see how much memory was being used by the existing instances, so on the server I attempted to capture memory usage information both before, and after, the startup of the second instance.
    I used 'top' for this, and found that the server has a total of 3.12GB of physical memory. Currently there's about 100MB free physical memory.
    The information from 'top' also indicated that physical memory utilization had actually decreased after I started the second instance:
    Before second instance was started:
    Mem: 3115208k total, 3012172k used, 103036k free, 46664k buffers
    Swap: 2031608k total, 77328k used, 1954280k free, 2391148k cached
    After second instance was started:
    Mem: 3115208k total, 2989244k used, 125964k free, 47144k buffers
    Swap: 2031608k total, 89696k used, 1941912k free, 2320184k cached
    Logging into the instance, I ran a 'show SGA', and got an SGA size of about 900MB (as expected). But before I started the instance, there wasn't anywhere near that amount of physical memory available.
    The question I need to answer is whether this server can accommodate a third instance. I gather that the actual amount of memory listed in SGA_TARGET won't be allocated until needed, and I also understand that virtual memory will be used if needed.
    So rather than just asking for 'the answer', I'm hoping someone can point me to a resource which will help me better understand *NIX memory usage behavior in a multi-instance environment...
    Thanks!!
    DW

    Each was configured with an SGA_TARGET of approximately 900MB. java_pool_size, large_pool_size and shared_pool_size are also assigned values in the pfile, which I believe supersedes SGA_TARGET.
    Not quite. If you set non-zero values for those parameters as well as setting SGA_TARGET, then they act as minimum values that have to be maintained before extra free memory is distributed automatically amongst all auto-tuned memory pools. If you've set them as well as SGA_TARGET, you've possibly got a mish-mash of memory settings that aren't doing what you expected. If it was me, I'd stick either to the old settings, or to the new, and try not to mix them (unless your application is very strange and causes the auto-allocate mechanism to apportion memory in ways you know are wrong, in which case setting a floor below which memory allocations cannot go might be useful).
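    For illustration only, a pfile that sets SGA_TARGET with explicit floors for the individual pools might look like this (all values hypothetical, not a recommendation):

    ```
    # Auto-tuned total; Oracle distributes free memory among the pools
    sga_target       = 900M
    # With sga_target set, non-zero values below act as minimums, not fixed sizes
    shared_pool_size = 200M
    large_pool_size  = 32M
    java_pool_size   = 64M
    ```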
    3GB of physical memory is not much these days. The general rule is that your total SGAs should not make up more than about 50% of physical memory, because you probably need most of the other 50% for PGA use. But if your users aren't going to be doing strenuous sorting (for example), then you can shrink the PGA requirement and nudge the SGA allowance up in its place.
    At 900MB per SGA, you can have two SGAs and not much user activity. That's 1800MB SGA plus, say, 200MB each PGA = 2200MB, leaving about 800MB for contingencies and Linux itself. That's quite tight and I personally wouldn't try to squeeze another instance of the same size into that, not if you want performance to be meaningful.
    Your top figures seem to me to suggest you're already paging physical memory out to swap, which can't be good!

  • High memory usage when many tabs are open (and closed)

    When I open Firefox, memory usage is about 70-100 MB RAM. When I'm working with Firefox I often open 15-20 tabs at once, then I close them and open others. Memory usage increases up to 450-500 MB RAM. After closing the tabs the usage usually decreases, but only after some time. It starts decreasing very slowly and never comes back to the level from the beginning. After a few hours of work Firefox uses about 400 MB RAM even if only one tab is open. First I thought it was because of my plugins (Firebug, Speed Dial, Adblock Plus) but I've checked and it's not. I reinstalled Firefox but the problem still occurs. I'm not sure if this is normal. Could you help me, please?

    Hi,
    Not an exact answer, but please [http://kb.mozillazine.org/Reducing_memory_usage_(Firefox) see this.] The KB article walks through various general situations which may or may not apply to specific cases, but could nevertheless be helpful.
    Useful links:
    [https://support.mozilla.com/en-US/kb/Options%20window All about Tools > Options]
    [http://kb.mozillazine.org/About:config Going beyond Tools > Options - about:config]
    [http://kb.mozillazine.org/About:config_entries about:config Entries]
    [https://addons.mozilla.org/en-US/firefox/addon/whats-that-preference/ What's That Preference? add-on] - Quickly decode about:config entries - After installation, go inside about:config, right-click any preference, enable (tick) MozillaZine Results to Panel and again right-click a pref and choose MozillaZine Reference to begin with.
    [https://support.mozilla.com/en-US/kb/Keyboard%20shortcuts Keyboard Shortcuts]
    [http://kb.mozillazine.org/Profile_folder_-_Firefox Firefox Profile Folder & Files]
    [https://support.mozilla.com/en-US/kb/Safe%20Mode Safe Mode]
    [http://kb.mozillazine.org/Problematic_extensions Problematic Extensions/Add-ons]
    [https://support.mozilla.com/en-US/kb/Troubleshooting%20extensions%20and%20themes Troubleshooting Extensions and Themes]
