Shared Memory and Memory Paging - VDI 3.2

Hello
What metrics have been identified for the performance and optimisation of these new features in VDI 3.2?
We are looking for substantiated indicators with respect to builds for sites of various user-base sizes versus SunRays on desks.
Regards
Jeremy.

Good question!
I'm also confused about how much memory is really used and what is shown where (top vs. the Desktop Provider info in VDI Admin vs. what I see/assigned in the virtual desktop).
Sometimes ballooning works like a charm and I can overcommit lots of memory while running several desktops. Other times, running one single Win7 desktop (1 GB RAM assigned) on an SF4140 with 8 GB RAM installed eats up 75% of my memory.
Right now the Win7 virtual desktop I'm using has 1024 MB assigned by the VDI Manager, with 50% memory sharing.
The manager displays ~3.8 GB used. As the Win7 VBox is the only one running in an All-in-One configuration, and the host takes roughly 2.2-2.5 GB without a running VM, I assume 1.2-1.3 GB is attributable to the Win7 VBox.
In top, however, the VBoxHeadless process shows up with 1891 MB of memory usage.
Which one should I trust? This makes forecasting rather hard.
Could someone shed some light on this?
Regards,
zoltan
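One rough way to square the numbers quoted above (a back-of-the-envelope sketch using only the figures from this post; the interpretation that the gap is VRAM plus hypervisor overhead is my assumption, not something from the VirtualBox docs): the per-VM cost on the host is the resident size of the VBoxHeadless process, not the RAM assigned in the VDI Manager.

```shell
#!/bin/sh
# Figures quoted in the post above, in MB.
assigned=1024   # RAM assigned to the Win7 guest in the VDI Manager
resident=1891   # VBoxHeadless resident size as reported by top
# Per-VM overhead = resident size minus assigned guest RAM
# (guest page tables, VRAM, VBox runtime, etc. -- an assumption here).
awk -v r="$resident" -v a="$assigned" 'BEGIN {
    printf "per-VM overhead: %d MB (%.0f%% on top of guest RAM)\n",
           r - a, (r - a) * 100 / a
}'
```

So for capacity planning, the top figure is the safer one to trust; the manager's "used" figure may already reflect pages collapsed by memory sharing.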

Similar Messages

  • Class Data Sharing (CDS) and memory analysis with pmap

    Hi everybody,
    I read that CDS can decrease the memory footprint by sharing some libraries.
    I ran some tests on a PowerPC board with an embedded JVM 1.5 and 128 MB of ROM.
    I wrote a HelloWorld with a sleep(10) and saw these results:
    # java -Xshare:off HelloWorld
    mapped:   172568 KB writable/private: 165708 KB shared: 32 KB
    # java -Xshare:on HelloWorld
    mapped:   201112 KB writable/private: 188604 KB shared: 5680 KB
    The shared memory increases: normal...
    But the mapped and writable/private memory increase too; I thought they would decrease...
    So where is the benefit?
    And the mapped memory is larger than the board's memory! Why?
    Thanks for your help
    Obelix

    Could somebody run these tests, please:
    Write a HelloWorld.java with a sleep() of 10 s and execute these scripts:
    #!/bin/sh
    java -version >> testsCDSoff.txt 2>&1
    java -Xshare:off HelloWorld &
    PID=$!
    sleep 2
    pmap -d $PID | grep mapped >> testsCDSoff.txt
    java -Xshare:off HelloWorld &
    PID=$!
    sleep 2
    pmap -d $PID | grep mapped >> testsCDSoff.txt
    java -Xshare:off HelloWorld &
    PID=$!
    sleep 2
    pmap -d $PID | grep mapped >> testsCDSoff.txt
    and:
    #!/bin/sh
    java -version >> testsCDSon.txt 2>&1
    java -Xshare:on HelloWorld &
    PID=$!
    sleep 2
    pmap -d $PID | grep mapped >> testsCDSon.txt
    java -Xshare:on HelloWorld &
    PID=$!
    sleep 2
    pmap -d $PID | grep mapped >> testsCDSon.txt
    java -Xshare:on HelloWorld &
    PID=$!
    sleep 2
    pmap -d $PID | grep mapped >> testsCDSon.txt
    and then dump testsCDSoff.txt and testsCDSon.txt here :D
    Here are mine:
    testsCDSoff.txt
    java version "1.5.0_06"
    Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_06-b05)
    Java HotSpot(TM) Client VM (build 1.5.0_06-b05, mixed mode, sharing)
    mapped: 237220K    writeable/private: 186972K    shared: 44740K
    mapped: 237220K    writeable/private: 186972K    shared: 44740K
    mapped: 237220K    writeable/private: 186972K    shared: 44740K
    testsCDSon.txt
    java version "1.5.0_06"
    Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_06-b05)
    Java HotSpot(TM) Client VM (build 1.5.0_06-b05, mixed mode, sharing)
    mapped: 265808K    writeable/private: 210252K    shared: 50048K
    mapped: 265808K    writeable/private: 210252K    shared: 50048K
    mapped: 265808K    writeable/private: 210252K    shared: 50048K
    Where is the benefit? Did I make an error somewhere, or do I misunderstand CDS?
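For reference, in pmap -d output the per-process cost is the writeable/private figure (pages each JVM must own privately), while shared pages are paid only once however many JVMs map them. A quick comparison of the two runs pasted above (a sketch that just restates the posted figures):

```shell
#!/bin/sh
# writeable/private KB per JVM, copied from the results pasted above.
off=186972   # java -Xshare:off
on=210252    # java -Xshare:on
# For N identical JVMs, private pages are paid N times but shared pages
# only once, so the per-process private delta decides the overall benefit.
awk -v off="$off" -v on="$on" 'BEGIN {
    printf "extra writeable/private KB per JVM with CDS on: %d\n", on - off
}'
```

On this particular embedded JVM the delta is positive, i.e. the runs above really do show CDS costing private memory rather than saving it, which matches the poster's observation.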

  • SAP ECC6 memory and paging issues

    Dear Experts,
    I have recently upgraded my 4.6C systems to an ECC 6 system (DB2 LUW 9.5 on AIX 5.3 TL9, 64-bit OS).
    I have been running the LPAR with 14 GB of memory, with around 100-200+ users using the system. Monitoring with nmon, I found that physical memory was around 99.8% used (14311.8 MB used, 22.6 MB free) and the paging space was around 37.2% used, at times causing the system to run slowly, which can have a very negative effect on the users.
    After further investigation I found that after a system restart physical memory would start at around 50.9% and increase at a steady pace until it reached 99.8%; that is when the system would start using the paging space, which would then steadily increase. The only solution I found was a system restart at least once a week to reduce the memory consumption.
    At first glance it looked like a database manager memory leak in the db2sysc process, so I searched the net for "db2 memory leak" and found the following APARs and notes.
    APAR JR30285 - Pervasive memory leak when compiling SQL statements that use SQL/XML functions
    APAR IZ35230 - There is a pervasive unix-specific private memory leak in the security component
    Note 1288341 - Memory leak in APPLHEAPSZ -> SQL0954C 
    Note 1352361 - Memory leak in shared memory area abrfci
    Note 1147821 - DB6: Known Errors and available Fixes in DB2 9.5 LUW
    After reading the notes and APARs I decided to update DB2 to the latest fix pack (5SAP), but implementing the fix pack did not solve the memory problem.
    I then started looking at other known problems with SAP ECC6, DB2 and AIX paging/memory, and found the following notes on AIX memory and paging, but none of them helped, as all parameters and settings were already set accordingly:
    789477 - Large extended memory on AIX (64-bit) as of Kernel 6.20
    191801 - AIX 64-bit with very large amount of Extended Memory
    973227 - AIX Virtual Memory Management: Tuning Recommendations
    884393 - AIX saposcol consumes large amount of memory.
    856848 - AIX Extended Memory Disclaiming
    1048686 - Recommended AIX settings for SAP
    1121904 - SAP on AIX: Recommendations for Paging
    1086130 - DB6: DB2 Standard Parameter Settings
    After even more investigation I found the following evidence suggesting the AIX Virtual Memory Manager might have a problem:

    Shared memories inside of pool 40
    Key:       42  Size:    17792992 (  17.0 MB) DB TTAB buffer              
    Key:       43  Size:    53606392 (  51.1 MB) DB FTAB buffer              
    Key:       44  Size:     8550392 (   8.2 MB) DB IREC buffer              
    Key:       45  Size:     7014392 (   6.7 MB) DB short nametab buffer     
    Key:       46  Size:       20480 (   0.0 MB) DB sync table               
    Key:       47  Size:    10241024 (   9.8 MB) DB CUA buffer               
    Key:       48  Size:      300000 (   0.3 MB) Number range buffer         
    Key:       49  Size:     2769392 (   2.6 MB) Spool admin (SpoolWP+DiaWP) 
    Shared memories outside of pools
    Key:        3  Size:   114048000 ( 108.8 MB) Disp. communication areas   
    Key:        4  Size:      523048 (   0.5 MB) statistic area              
    Key:        6  Size:   692224000 ( 660.2 MB) ABAP program buffer         
    Key:        7  Size:       14838 (   0.0 MB) Update task administration  
    Key:        8  Size:   134217828 ( 128.0 MB) Paging buffer               
    Key:        9  Size:   134217828 ( 128.0 MB) Roll buffer                 
    Key:       18  Size:     1835108 (   1.7 MB) Paging adminitration        
    Key:       19  Size:   119850000 ( 114.3 MB) Table-buffer                
    Key:       41  Size:    25010000 (  23.9 MB) DB statistics buffer        
    Key:       63  Size:      409600 (   0.4 MB) ICMAN shared memory         
    Key:       64  Size:     4202496 (   4.0 MB) Online Text Repository Buf. 
    Key:       65  Size:     4202496 (   4.0 MB) Export/Import Shared Memory 
    Key:     1002  Size:      400000 (   0.4 MB) Performance monitoring V01.0
    Key: 58900114  Size:        4096 (   0.0 MB) SCSA area                   
    Nr of operating system shared memory segments: 16
    Shared memory resource requirements estimated
    ================================================================
    Total Nr of shared segments required.....:         16
    System-imposed number of shared memories.:       1000
    Shared memory segment size required min..:  692224000 ( 660.2 MB)
    System-imposed maximum segment size......: 35184372088832 (33554432.0 MB)
    Swap space requirements estimated
    ================================================
    Shared memory....................: 1654.8 MB
    ..in pool 10  328.6 MB,   58% used
    ..in pool 40  143.3 MB,   30% used
    ..not in pool: 1174.1 MB
    Processes........................:  413.4 MB
    Extended Memory .................: 6144.0 MB
    Total, minimum requirement.......: 8212.2 MB
    Process local heaps, worst case..: 3814.7 MB
    Total, worst case requirement....: 21882.9 MB
    Errors detected..................:    0
    Warnings detected................:    3
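Before concluding that this is a leak, it is worth checking how much of the "used" real memory on AIX is file cache, since AIX keeps file pages resident until there is memory pressure and nmon counts them as used. `svmon -G` separates computational ("virtual") pages from the rest; the sketch below parses a sample `svmon -G` memory line (the numbers are hypothetical, purely to illustrate the arithmetic; pages are 4 KB):

```shell
#!/bin/sh
# Sample `svmon -G` memory line (hypothetical numbers; on the LPAR, pipe
# the real `svmon -G` output in instead). Columns: size inuse free pin virtual.
cat <<'EOF' |
memory   3670016  3662976   7040  524288  1835008
EOF
awk '$1 == "memory" {
    # "virtual" = computational pages; the remainder of "inuse" is file
    # cache, which AIX reclaims under pressure and is not really used up.
    file = $3 - $6
    printf "inuse: %d MB, computational: %d MB, file cache: %d MB\n",
           $3 * 4 / 1024, $6 * 4 / 1024, file * 4 / 1024
}'
```

If the file-cache share is large, the minperm/maxperm tuning covered by note 973227 above is the relevant knob; if the computational share itself keeps growing week over week, a leak remains the better suspicion.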

  • Physical memory and paging

    I know that in SAP's world, memory means physical memory + paging. I want to know which programs contribute to swap.
    Suppose there is 4 GB of physical memory on an application server, em/initial_size_MB = 4G, and abap/heap_area_nondia = 4G.
    1) If no dialog work process is running and one background job is running that claims 2 GB of memory, will swap occur?
    2) If one dialog work process is running that claims 2 GB of memory and one background job is running that claims 2 GB of memory, will swap occur?

    With ST03N you can check the workload, and with OS06 the swap.
    With transaction ST02 you can check the following parameters:
    SAP Roll area parameters:
    - ztta/roll_first : first amount of roll area used in a dialog WP
    - ztta/roll_area : size of the local SAP roll area in the work process
    - rdisp/ROLL_SHM : size of the SAP roll buffer
    - rdisp/ROLL_MAXFS : size of the entire shared SAP roll area
    SAP Extended Memory main parameters:
    - em/initial_size_MB : size of SAP extended memory allocated when the SAP instance starts up
    - em/blocksize_KB : size of the blocks into which SAP Extended Memory is split
    - ztta/roll_extension : maximum size of a user context in SAP Extended Memory
    SAP Heap Memory main parameters:
    - abap/heap_area_dia : quota of SAP heap memory that a dialog process can allocate
    - abap/heap_area_nondia : quota of SAP heap memory that a non-dialog process can allocate
    - abap/heap_area_total : size that can be allocated in total by all work processes
    - abap/heaplimit : work process restart limit for heap memory

  • HELP!!! I did everything I could all day to fix this, and I can't upload my video to YouTube or export the file because it keeps saying "sharing requires more memory to be available". How do I enable my movie to be uploaded or exported?

    Question 1 / Plan A: When I finished making my movie project in iMovie 11 on my Mac, I tried sharing it every way (YouTube, Export, etc.), but it kept popping up: "sharing requires more memory to be available". I searched the internet all day for how to fix this and followed all the advice I found, but nothing seems to help. By the way, I am already permitted to post videos longer than 15 minutes on YouTube, and it's also weird that my computer has more than 600 GB of free space. So how do I upload my 2-hour video to YouTube, and how do I avoid the "sharing requires more memory to be available" popup?
    Question 2 / Plan B: If Plan A won't work out, I definitely don't want to do this all over again, because I spent ages and a lot of effort on this. What I'm trying to do is divide it into multiple projects; maybe it will let me upload them because of the smaller file size. So how do I copy parts of a project and paste them into another project or the Event Library, so it would be easier for me to finish in iMovie?
    Question 3 / Plan C: If Plan B won't work out, how do I screen-record my movie with audio? What I'm trying to do is record it from the screen using QuickTime and divide it into parts. I know how QuickTime and Soundflower work, but when I tried it, the Soundflower/QuickTime audio was very crackly and broken at some points. So how do I record good-quality audio with a screen recording?
    Question 4 / Plan D: If Plan C won't work out, I guess the worst way to do this is to record it with my phone. It's a bitter idea, but I want this published in good quality. I am so tired; I spent weeks finishing this project and the end result just ****** me off. I'd be glad if someone has an idea about this problem. I hope you all and the Mac experts understand what I'm struggling with; leave a reply if you might have an idea for solving this!

    When you select Share, are you able to get either of these two windows?
    The first one is Export As;
    the second one is Save as QuickTime.
    I looked at an earlier post and see you have 4 GB RAM. Maybe try converting to a smaller format,
    then save it to a file first before trying YouTube.

  • SHARED MEMORY AND DATABASE MEMORY giving problem.

    Hello Friends,
    I am facing a problem with EXPORT to memory and IMPORT from memory.
    I have developed one program which EXPORTs an internal table and some variables to memory. This program calls another program via a background job, and IMPORT is used in that other program to get the first program's data.
    The IMPORT command works perfectly in the foreground, but it does not work in the background.
    So I reviewed a couple of forums and tried both SHARED MEMORY and DATABASE export. No use; the background run still fails.
    When I remove the VIA JOB parameter in the SUBMIT statement, it works. But I need to execute this program in the background via a background job. Please help me: what should I do?
    Please find my code below.
    option1
    EXPORT TAB = ITAB
           TO DATABASE indx(Z1)
                FROM   w_indx
                CLIENT sy-mandt
                ID     'XYZ'.
    option2
    EXPORT ITAB   FROM ITAB
      TO SHARED MEMORY indx(Z1)
      FROM w_indx
      CLIENT sy-mandt
      ID 'XYZ'.
       SUBMIT   ZPROG2   TO SAP-SPOOL
                      SPOOL PARAMETERS print_parameters
                       WITHOUT SPOOL DYNPRO
                       VIA JOB name NUMBER number
                       AND RETURN.
    ===
    I hope everybody understood the problem.
    My sincere request: please post only relevant answers; do not post dummy answers for points.
    Thanks
    Raghu

    Hi.
    You cannot exchange data between your programs using ABAP memory, because ABAP memory is only visible within the same session: when you call your report using VIA JOB, the job runs in a new session with its own ABAP memory.
    Instead of using EXPORT and IMPORT to memory, put both programs into the same function group and use global data objects of the _TOP include to exchange data.
    Another option is to use SPA/GPA parameters (SET PARAMETER ID / GET PARAMETER ID), because SAP memory is available across all open sessions. Of course, it depends on which type of data you want to export.
    Hope it was helpful,
    Kind regards.
    F.S.A.

  • Shared memory and ABAP

    Dear developers,
    I have some documentation on ABAP Shared Objects, and I wonder how I could define shared memory objects at run time (so I would need something like an API) instead of using transaction SHMA.
    I also wonder what would happen if two users of the same shared object were to call one of its methods. Can we have concurrent access? How does the locking work? Is it possible to implement waiting calls/locks?
    If you have any serious/technical documentation on semaphores/locks in SAP/ABAP, I'd also be very interested.
    Sincerely,
    Olivier MATT

    Hi Matt,
    This is Vijay. If you are working with frequently changed data, you should use this locking technique.
    Go to SE11 and create a lock object; its name must start with EZ, e.g. EZENMARA.
    You can then set the lock mode to shared, (exclusive, cumulative) or (exclusive, not cumulative),
    and you can also set the lock parameters (specific fields). Then just save and activate.
    In your program, call function module ENQUEUE_EZENMARA for locking and DEQUEUE_EZENMARA for unlocking; you can get the rest from my program below.
    Regards,
    Vijay
    TABLES: MARA.
    PARAMETERS: MATNR LIKE MARA-MATNR.
    CALL FUNCTION 'ENQUEUE_EZENMARA'
      EXPORTING
        MODE_MARA      = 'S'
        MANDT          = SY-MANDT
        MATNR          = MATNR
      EXCEPTIONS
        FOREIGN_LOCK   = 1
        SYSTEM_FAILURE = 2
        OTHERS         = 3.
    IF SY-SUBRC <> 0.
      MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
              WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
      WRITE: 'TABLE IS LOCKED'.
    ELSE.
      SELECT * FROM MARA WHERE MATNR = MATNR.
        WRITE: / MARA-MATNR, MARA-ERNAM, MARA-MTART.
      ENDSELECT.
    ENDIF.

  • What is the recommended size of the system drive to keep the operating system files, paging files and memory dump of a Hyper-V host?

    Hi ,
    I want to set up a Hyper-V host with 128 GB RAM and Windows 2012 R2.
    What is the recommended size of the system drive to keep the operating system files, paging files and memory dump?
    I tried using 150 GB, but when the server crashed, there was no free space to keep the memory dump file.
    Ramy

    Hi Ramy,
    For Server 2012 R2 the absolute minimum system drive is 32 GB, but this assumes you have limited RAM or have your page file located on another drive. It used to be best practice to set up a small page file, but Microsoft PFEs now suggest leaving Windows to manage the page file size.
    Obviously this is not always possible, depending on the amount of RAM in the system, so either size the system drive around this or offload the page file to another drive. On top of this you also need space for the memory dump to be written, potentially again up to the size of the RAM. Assuming you fire the machine back up after a crash, you need space for the OS and the page file, plus space for the associated dump file.
    There is a nice little article here that may be of assistance: http://social.technet.microsoft.com/wiki/contents/articles/13383.best-practices-for-page-file-and-minimum-drive-size-for-os-partition-on-windows-servers.aspx
    Kind Regards
    Michael Coutanche

  • (SOLVED) fork() and memory sharing?

    Let me get this straight: when you fork() a child process, this is what happens:
    A copy of the parent isn't made just yet; instead, the child gets read-only access to the parent's actual memory pages. Once the child executes a write, the affected stack and heap pages are copied and marked with read and write rights.
    Questions:
    What happens to shared memory segments? Is this section shared both before and after the copy/RW escalation? How do the stack, heap and shared memory segments differ?
    Last edited by Google (2010-08-25 16:01:17)

    Think of shared memory segments as their own entities, truly separate from the process that created them. They aren't copied during fork() because they are not technically part of the process's address space.
    A pointer to the shared memory segment may be copied if one exists before you fork, which would allow the child to access the segment created by the parent, but the segment itself is ... well, it's shared.
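This is easy to watch from a shell (a sketch that assumes util-linux's ipcmk/ipcrm are installed): the System V segment created below keeps existing after the command that created it has exited, and only disappears when explicitly removed:

```shell
#!/bin/sh
# Create a 4 KiB System V shared memory segment. ipcmk prints
# "Shared memory id: <id>" and exits -- yet the segment persists,
# because it is a kernel object, not part of any process's image.
id=$(ipcmk -M 4096 | awk '{ print $NF }')
# List it: it exists even though its creator has already exited.
ipcs -m | awk -v id="$id" '$2 == id { print "segment", id, "exists, bytes:", $5 }'
# Segments live until removed (or reboot), so clean up explicitly.
ipcrm -m "$id" && echo "segment $id removed"
```

A forked child inherits the parent's attachment (the mapping), and writes through it are seen by both processes, precisely because shared mappings are excluded from copy-on-write.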

  • Segment memory and shared memory

    Hi guys,
    Is it possible to know the path of "segment memory and shared memory" at the OS level?
    I don't know the exact forum for this question, so I am posting it here.
    Please help me!

    If you are using Linux you can use ipcs to get all shared memory segments at system level:
    $  ipcs -m
    ------ Shared Memory Segments --------
    key        shmid      owner      perms      bytes      nattch     status
    0x0000f47a 65536      root      777        512000     4
    0x26278554 163842     oracle    640        132120576  16
    0xac004cf0 196611     oracle    660        16108224512 128
    The Oracle executable sysresv uses the ORACLE_SID environment variable to map the shared memory segment to the current instance:
    $ sysresv
    IPC Resources for ORACLE_SID "XXX1" :
    Shared Memory:
    ID              KEY
    196611          0xac004cf0
    Semaphores:
    ID              KEY
    229377          0x2dac12a4
    Oracle Instance alive for sid "XXX1"

  • Contribute "low on memory" and Acrobat won't redraw

    I have two issues that I believe may be related.  First here's my system information
    WinXP SP3
    Intel Core2Duo E6550 = (2.33GHz)
    3GB RAM
    GeForce 8600GTS 512 MB Video
    Running two 19" monitors, (at 1440x900 each)
    Creative Suite Premium 2.0 (InDesign CS2, Photoshop CS2, Illustrator CS2, Acrobat 7 Professional, Bridge CS2, GoLive CS2; Version Cue is disabled.)
    Production Studio Premium 1.0 (Premiere Pro 2, Encore DVD 2, After Effects 7, Audition 2)
    Contribute CS3, (including Bridge CS3, but I don't use it, I use Bridge CS2 for everything).
    First issue: intermittently, in both Adobe Reader 9 and Acrobat 7.0, PDF files that I have open will not display in the window properly unless I reduce the size of the window to something quite small. The window shows the desktop behind it, or if I open another window on top, it shows that window even after I've moved it away.
    The second problem is similar, but occurs in Contribute CS3 and is always accompanied by a "Your system is low on memory..." error, until I reduce the size of the Contribute window, at which point it is so small that it becomes unusable.
    The only other program that I have open at the time is Outlook 2007, with a couple of messages.
    As this is happening, I open the Task Manager and see that there is plenty of available physical memory and the CPU usage is very low, under 10%. I've set my virtual memory paging file to a custom size of 4591 MB.
    The only way to fix it is to reduce the size of the windows or close all the programs. These two problems occur independently of each other, sometimes when I'm just trying to open a PDF file.
    3GB should be plenty to run Contribute and Outlook, or Acrobat and Bridge.
    What the heck is going on?

    Thanks for the reply.
    Yes, I have tried that, but to no avail. The crazy thing is that it only really happens after I've been working in Contribute for a while, sometimes an hour or more. Then it just starts doing this.
    The redraw problem also affects other programs that are not Adobe programs. For example, when I open a Remote Desktop Connection to the web server where I am publishing while this problem is occurring, that screen will not "draw" properly either.
    I have a feeling that there is something strange about how Contribute, Acrobat and Windows share the memory, either the system memory or the graphics memory.
    Could this also be a problem caused by having both CS2 and CS3 programs installed?
    Chris

  • What is the difference between Azure RemoteApp Basic vs Standard Plans in terms of compute cores and memory?

    So our customer has asked us to compare the Amazon WorkSpaces and Azure RemoteApp offerings for them to choose from. Amazon WorkSpaces clearly defines bundles with specific CPU cores, memory and user storage. However, Azure RemoteApp only specifies user storage and vaguely compares its Basic vs. Standard plans in terms of "task worker" vs. "information worker".
    I tried looking up its documentation but couldn't find the specific CPU cores that are dedicated per user in the Basic vs. Standard plans. I have the following questions:
    1. Can anyone point me in the right direction or help me understand how many CPU cores and how much memory are dedicated (or shared) per user in each plan?
    2. Our customer would most likely need a "custom" image for their custom apps. Is it possible for us to choose specific CPU cores and memory for the users to be able to run their apps in Azure RemoteApp?
    In case I am misunderstanding the basic difference between AWS WorkSpaces and Azure RemoteApp, I'd appreciate some help in understanding that as well.
    Thanks!

    Hi,
    With Azure RemoteApp, users see just the applications themselves, and the applications appear to be running on their local machine like other programs. With WorkSpaces, users connect to a full desktop and launch applications within that.
    1. Azure RemoteApp currently uses size A3 Virtual Machines, which have 4 vCPUs and 7 GB RAM. Under Basic each VM can have a maximum of 16 users using it, whereas under Standard each VM is limited to 10 users. The amount of CPU available to a user depends on the current demands on the CPU at that moment from other users and system processes that may be on the server.
    For example, say a user is logged on to a VM with 3 other users and the other users are idle (not consuming any CPU). At that moment the user could use all 4 vCPUs if a program they are running needed to. If a few moments later the other 3 users all needed lots of CPU as well, then the first user would only have approximately 1 vCPU for their use. The process is dynamic and seeks to give each user their fair share of available CPU when multiple users are demanding CPU.
    Under the Standard plan a user will receive a minimum of approximately 0.4 vCPU, assuming the VM has the maximum number of users logged on and all users are using as much CPU as possible at a given moment. Under the Basic plan the approximate minimum would be 0.25 vCPU.
    2. You cannot choose the specific number of cores and memory. What you can do is choose the Azure RemoteApp billing plan, which affects the user density of each VM as described above. If you need a lower density than Standard, you may contact support.
    -TP
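The per-user minimums quoted in the answer are simply the A3 VM's 4 vCPUs divided by each plan's maximum user density:

```shell
#!/bin/sh
# Worst-case vCPU per user = vCPUs on the VM / maximum users per VM
# (A3: 4 vCPUs; Basic: 16 users; Standard: 10 users, per the answer above).
awk 'BEGIN {
    vcpus = 4
    printf "Basic:    %.2f vCPU minimum per user\n", vcpus / 16
    printf "Standard: %.2f vCPU minimum per user\n", vcpus / 10
}'
```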

  • MS 6390LE and Memory

    I have:
    MS 6390 LE w/ lan v 1.00
    AMD XP 1700 w/hsf
    30G WD hard drive 5400rpm
    128 M ddr266 memory
    Antec case with an Antec sl300 PSU
    floppy
    zip 100
    MS 8348 Dragonwriter CD RW(currently not working yet, but am working on that too)
    Everything seems to be working, and I flashed the BIOS to the current version (not the one for PCB 2.0), and I have this question:
    When booting up, the memory test only shows 98 MB of memory plus 32 MB of shared memory. After it boots up and I am in Windows 98 SE, it only shows 96 MB of memory, and system resources are at 70%.
    Have I done something wrong, or is there something else wrong? The machine doesn't seem to run as fast as I would expect it to, but then the HD is a slower model. And I can't get the CD-RW to work, but that is a different problem I posted to another forum.
    Thanks!

    No, it doesn't appear that you have done anything wrong. You do have 128 MB (96 + 32 = 128); the 32 MB is carved out as shared video memory. The difference between what the BIOS and Windows report comes down to how each counts megabytes: Windows counts a megabyte as 1000 KB, while your BIOS counts 2^10 = 1024 KB. This also accounts for the discrepancy you see when you boot up and your 128 MB stick is reported as 131 MB.
    As for system resources, you only have 96 MB of RAM available to Windows. Add up what you have running in the background, plus what Windows itself normally needs, and 70% is about right.
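For what it's worth, the "131 MB" figure is consistent with reading kibibytes as decimal: 128 MiB is 131072 KiB, which looks like "131 MB" if you count 1000 KB to the megabyte. Checking the arithmetic (this is my reading of where the numbers come from, not something the BIOS documents):

```shell
#!/bin/sh
awk 'BEGIN {
    kib = 128 * 1024                # 128 MiB expressed in KiB
    printf "128 MiB = %d KiB\n", kib
    printf "counted at 1000 KB/MB: %.0f MB\n", kib / 1000   # the 131 on screen
    printf "shared video + system: %d MB\n", 32 + 96        # the 96 + 32 = 128
}'
```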

  • KT3 ULTRA and memory

    I have a KT3 Ultra with 512 MB (2x 256 MB in sockets 1 and 2),
    Windows XP Home and an Athlon XP 1800. I am trying to increase the RAM with another 256 MB module in socket 3.
    When there is a module in the 3rd socket, the machine will crash. I have tried 3 different memory modules and even taken out the existing memory and swapped modules around, but it will still crash. Sometimes it takes a couple of hours to crash, and other times it happens on powering up. Today this is what the blue screen message said:
    Problem with WIN32K.sys: page fault in nonpaged area
    It will crash in any software; it doesn't seem fussy:
    Word, Photoshop, IE6, whatever.
    Anybody have any ideas or advice on where to go?
    TIA

    Are all 3 modules totally identical (maker, revision, etc.)?
    Mixed modules never seem to work.
    Try less aggressive RAM timings.
    Sometimes a third module just will not work; that is what I've seen.
    More RAM draws more power, which is another possible reason.

  • Performance and memory problems

    Hello.
    I've installed Oracle AS 10g on Red Hat Linux EL 3. The machine has 3 GB of memory and 2 GB of swap; it's a dual Xeon 2.6 GHz.
    Imagine I reboot the machine: everything works fine. The problem is that within 6 days it fills all the swap space and almost all the available memory. Once it reaches these values, the machine gets very slow. To correct the situation temporarily, I need to perform a reboot. But this is Linux, not Windows...
    Does anyone have an idea why this is happening? I've never seen anything like it: after 6 days, only 30 MB of the 5 GB remain available. This should not be possible.
    Help please!

    We have the same problem. We read a lot about the VM in RHAS 3.0: the VM was improved, but with bugs. We found a lot of inactive pages that, after reaching the clean state, are not deallocated properly. This makes the kernel thread "kswapd" work very hard, taking all the CPU during I/O.
    There is a known bug related to the "kswapd" thread in the RHAS 3.0 kernel. Red Hat promises to solve the problem in upcoming releases (I really doubt it, because there have already been 3 releases since the announcement).
    We have observed some strange behavior too: machines with Linux RHAS 3.0 + Oracle products (Oracle DB and Oracle iAS) freeze randomly. We think the shared memory is allocating kernel-space pages, but we haven't found any evidence yet.
    The workarounds (not tested yet):
    - Upgrade to the latest kernel.
    or
    - Use the hugemem kernel (even when using less than 16 GB of RAM, some people reported that this solves the problem).
    or
    - Compile a clean kernel directly from kernel.org. In this case we don't have support from Oracle, but maybe the problem can be worked around until Red Hat publishes bug-free VM code in their kernel.
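On Linux, before treating climbing "used" memory as a leak, it helps to subtract reclaimable buffers and page cache, which the kernel keeps around on purpose. The sketch below runs over a sample /proc/meminfo (the values are hypothetical; on a live box read /proc/meminfo itself, and the same fields exist on the 2.4-based RHAS 3 kernel). Note this only explains "memory looks full"; swap actually filling up, as described above, still points at the kswapd/VM bug or a real leak.

```shell
#!/bin/sh
# Sample /proc/meminfo fields (hypothetical values, in kB).
cat <<'EOF' |
MemTotal:      3099000 kB
MemFree:         30000 kB
Buffers:        120000 kB
Cached:        2100000 kB
EOF
awk '
    /^MemFree/ { free  = $2 }
    /^Buffers/ { buf   = $2 }
    /^Cached/  { cache = $2 }
    END {
        # Buffers and Cached are dropped on demand; MemFree alone being
        # low does not mean the box is actually out of memory.
        printf "effectively available: %d MB\n", (free + buf + cache) / 1024
    }'
```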
