Understand the memory consumption in Solaris

Hi All,
There are some things I don't understand about memory utilization on Solaris.
While researching on Google, I found this very useful command: echo "::memstat" | mdb -k
The output is:
root@localhost > echo "::memstat" | mdb -k
Page Summary        Pages     MB  %Tot
Kernel             290375   1134    7%
ZFS File Data       55829    218    1%
Anon                82841    323    2%
Exec and libs        9844     38    0%
Page cache          10370     40    0%
Free (cachelist)    15677     61    0%
Free (freelist)   3726068 *14554*  89%
Total             4191004  16371
Physical          4101523  16021
I already know the vmstat command:
root@PA-OFC-SRV-UAT-1 > vmstat 2
kthr memory page disk faults cpu
r b w swap free re mf pi po fr de sr lf s0 s1 -- in sy cs us sy id
0 0 0 16286044 14528236 24 106 0 0 0 0 1 0 -0 2 0 704 950 410 0 0 100
0 0 0 16390660 14629360 6 24 0 0 0 0 0 0 0 0 0 680 346 390 0 0 100
0 0 0 16390572 14629308 1 2 0 0 0 0 0 0 0 0 0 685 219 362 0 0 100
0 0 0 16390572 *14629308* 1 1 0 0 0 0 0 0 0 0 0 680 274 342 0 0 100
^C
What I don't understand is the difference between the two outputs for the free memory:
Free per mdb: 14554 MB -> 14.2 GB
Free per vmstat: 14629308 KB -> 14286.4 MB -> 13.9 GB
I can tell that nothing is running on my system (except the OS).
The difference is not very big (about 0.3 GB), given that on this server I have:
Memory size: 16380 Megabytes. (prtconf)
A strange thing is that I successfully started a Java process with a 15 GB heap (15360 MB), even though I only have 14554 MB free:
root@localhost > java -d64 -Xms15G -version
java version "1.6.0_23"
Java(TM) SE Runtime Environment (build 1.6.0_23-b05)
Java HotSpot(TM) 64-Bit Server VM (build 19.0-b09, mixed mode)
root@localhost > /home/ullink/COMMON/JAVA/latest/bin/java -d64 -Xms16G -version
Error occurred during initialization of VM
Could not reserve enough space for object heap
BUT, on a server where I have 73720 Megabytes (prtconf), the difference is bigger:
root@host > echo "::memstat" | mdb -k
Page Summary        Pages     MB  %Tot
Kernel             662611   2588    3%
ZFS File Data      998688   3901    5%
Anon              2139403   8357   11%
Exec and libs       13959     54    0%
Page cache          35561    138    0%
Free (cachelist)    97387    380    1%
Free (freelist)  14922431 *58290*  79%
Total            18870040  73711
Physical         18362749  71729
vmstat:
root@host > vmstat 2
kthr memory page disk faults cpu
r b w swap free re mf pi po fr de sr lf s1 s2 s3 in sy cs us sy id
0 0 0 50601596 60622968 64 179 0 0 0 0 0 0 1 0 0 1112 11666 959 0 0 100
0 0 0 48801836 58047100 59 78 0 0 0 0 0 0 0 0 0 1163 1200 860 0 0 100
3 0 0 48801756 58047480 246 929 0 0 0 0 0 0 0 0 0 1160 194171 968 1 0 99
0 0 0 48801556 *58047224* 149 656 0 0 0 0 0 0 0 0 0 1186 183253 980 1 0 99
^C
So here I have:
Free per mdb: 58290 MB -> 56.9 GB
Free per vmstat: 58047224 KB -> 55.4 GB
So here I have a difference of about 1.5 GB...
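
For reference, here is the arithmetic behind those conversions as a small sketch (my own illustration, not part of the original outputs): ::memstat reports pages and MB, while the vmstat free column is in KB; the page size is assumed to be 4 KB here, which matches the Pages/MB columns above and can be confirmed with the pagesize command.

// Illustration only: converts the figures quoted above into GB.
public class FreeMemoryCompare {
    public static void main(String[] args) {
        long pageSizeBytes = 4096; // assumed 4 KB pages; confirm with the `pagesize` command

        // Server 1: mdb freelist (pages) vs. vmstat free (KB)
        long mdbFreelistPages = 3726068;
        long vmstatFreeKb = 14629308;
        System.out.printf("server 1: mdb %.2f GB, vmstat %.2f GB%n",
                mdbFreelistPages * pageSizeBytes / (1024.0 * 1024 * 1024),
                vmstatFreeKb / (1024.0 * 1024));

        // Server 2: same comparison
        long mdbFreelistPages2 = 14922431;
        long vmstatFreeKb2 = 58047224;
        System.out.printf("server 2: mdb %.2f GB, vmstat %.2f GB%n",
                mdbFreelistPages2 * pageSizeBytes / (1024.0 * 1024 * 1024),
                vmstatFreeKb2 / (1024.0 * 1024));
        // Prints roughly 14.21 vs 13.95 GB and 56.92 vs 55.36 GB.
    }
}

So the units convert consistently; the gap is a real difference between what ::memstat counts as the freelist and what vmstat reports as free, not a rounding or unit error.
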
I have the feeling that the more memory we have, the bigger the difference.
I am investigating this because, on this (last) server, the production team of my company told me that they don't know "where" some GB of RAM on this server have gone... so I am trying to understand.
I am not saying that all my interpretations are correct, but there is something I don't understand...
So the questions I have are:
- Why was I able to start a Java process with a 15 GB heap?
- What is the difference between the mdb and vmstat figures?
Thanks a lot in advance.


Similar Messages

  • 8i Memory Consumption in Solaris 8

    Hi,
    This time I'm hoping I've posted this question to the right place. :-)
    This is the memory consumption I took from my Solaris 8 Netra T1 with 256 MB RAM after running the
    prstat -t command. User oracle is the Oracle 8.1.6 DB owner.
    Initial Consumption
    ===================
    NPROC USERNAME SIZE RSS MEMORY TIME CPU
    4 oracle 8760K 5976K 2.4% 0:00.00 0.1%
    25 root 52M 21M 8.7% 0:00.04 0.1%
    1 daemon 2488K 1016K 0.4% 0:00.00 0.0%
    After Starting the Listener
    ==========================
    NPROC USERNAME SIZE RSS MEMORY TIME CPU
    5 oracle 19M 11M 4.6% 0:00.00 0.4%
    25 root 52M 21M 8.7% 0:00.04 0.1%
    1 daemon 2488K 1016K 0.4% 0:00.00 0.0%
    After Starting Oracle
    =====================
    NPROC USERNAME SIZE RSS MEMORY TIME CPU
    17 oracle 1792M 1502M 99% 0:00.01 0.4%
    25 root 52M 21M 1.4% 0:00.04 0.1%
    1 daemon 2488K 1016K 0.1% 0:00.00 0.0%
    First Access
    ============
    NPROC USERNAME SIZE RSS MEMORY TIME CPU
    18 oracle 1941M 1631M 99% 0:00.08 4.3%
    1 daemon 2488K 1016K 0.1% 0:00.00 0.0%
    25 root 52M 21M 1.3% 0:00.04 0.0%
    And the memory stays at 99%. The consumption is not coming down.
    Any idea why this is happening? If it's a problem with the configuration, please let me know
    how to correct it.
    Thx
    Shafeen

    How big is your SGA? This is the memory Oracle is using.
    (If you don't know, see init.ora:
    -> shared_pool
    -> large_pool
    -> java_pool
    -> db_block_buffers * db_block_size (the buffer cache)
    -> log_buffer)
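
    As a worked example of that formula (a minimal sketch with made-up init.ora values, not the poster's actual settings), the components add up like this:

    // Illustration only: hypothetical init.ora values, just to show the arithmetic.
    public class SgaEstimate {
        public static void main(String[] args) {
            long sharedPool   = 300L * 1024 * 1024;   // shared_pool_size
            long largePool    =  16L * 1024 * 1024;   // large_pool_size
            long javaPool     =  64L * 1024 * 1024;   // java_pool_size
            long logBuffer    =   1L * 1024 * 1024;   // log_buffer
            long blockBuffers = 160000;               // db_block_buffers
            long blockSize    = 8192;                 // db_block_size

            long sgaBytes = sharedPool + largePool + javaPool
                          + blockBuffers * blockSize + logBuffer;
            System.out.printf("Approximate SGA: %.0f MB%n", sgaBytes / (1024.0 * 1024));
            // With these made-up numbers the SGA alone is ~1631 MB, the kind of
            // figure that would account for most of the ~1.8 GB prstat reports.
        }
    }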

  • Need help to understand the MEMORY output on WLC

    Hi All,
    I'll be glad if someone can help me understand the output below from a WLC. I actually want to know the flash and RAM size of my WLC, so here is what I did: I ran the command "show memory statistics" and below is what I got. Can someone elaborate on this?
    System Memory Statistics:
    Total System Memory............: 259112960 bytes  (I believe this is the flash size, 256 MB???)
    Used System Memory.............: 154288128 bytes
    Free System Memory.............: 104824832 bytes
    Bytes allocated from RTOS......: 13717504 bytes
    Chunks Free....................: 24 bytes
    Number of mmapped regions......: 37
    Total space in mmapped regions.: 23429120 bytes
    Total allocated space..........: 13399760 bytes
    Total non-inuse space..........: 317744 bytes
    Top-most releasable space......: 61832 bytes
    Total allocated (incl mmap)....: 37146624 bytes
    Total used (incl mmap).........: 36828880 bytes
    Total free (incl mmap).........: 317744 bytes

    Hi Salman,
    Yes, that value gives the RAM size of your WLC.
    259112960 bytes works out to 253040 KB, or roughly 247 MB, dividing by 1024 at each step. So I think it is 256 MB of RAM.
    I do not think the flash size can be determined from this output.
    Refer this post as it relate to your query as well
    https://supportforums.cisco.com/discussion/12023396/ram-size-5508-and-5760-wireless-lan-controllers
    HTH
    Rasika
    *** Pls rate all useful responses ****

  • Need the memory consumption details of a interface

    Hi,
    We have a scenario where 5000 messages are triggered at a time, i.e. during peak time.
    Is there any way in XI 3.0 to analyse the memory and CPU consumption of a single interface?
    I am aware of the option to pull the entries from Performance Monitoring in the RWB. Is there any file where we can get the memory dumps every 10 minutes?
    The scenario consists of a BPM where 5000 messages are triggered and processed.
    Please help.
    Regards.
    MM

    You can check this on a time basis in transactions ST06N/ST06, but to break it down by interface you can check per user in transaction STAD, provided different users are used for the RFC connections.
    Thanks!

  • How to measure memory consumption during unit tests?

    Hello,
    I'm looking for simple tools to automate measurement of overall memory consumption during some memory-sensitive unit tests.
    I would like to apply this when running a batch of some test suite targeting tests that exercise memory-sensitive operations.
    The intent is to verify that a modification of code in this area does not introduce a regression (raise) of memory consumption.
    I would include it in the nightly build and monitor the evolution of a summary figure (aha, the "userAccount" test suite consumed 615 MB last night, compared to 500 MB the night before... what did we check in yesterday?)
    Running on Win32, the system-level figure of memory consumed is known not to be accurate.
    Using perfmon is more accurate but it seems overkill; plus it's difficult to automate, since you have to attach it to an existing process...
    I've looked at the hprof included in Sun's JDK, but it seems to be targeted at investigating problems rather than discovering them. In particular there isn't a "summary line" of the total memory consumed...
    What tools do you use/suggest?

    > However this requires manual code in my unit test classes themselves,
    > e.g. in my setUp/tearDown methods. I was expecting something more
    > orthogonal to the tests, that I could activate or not depending on the
    > purpose of the test.
    Some IDEs display memory usage and execution time for each test/group of tests.
    > If I don't have another option, OK I'll wire my own pre/post memory
    > counting, maybe using AOP, and will activate memory measurement only
    > when needed.
    If you need to check the memory used, I would do this. You can do the same thing with AOP; unless you are already using an AOP library, I doubt it is worth the additional effort.
    > Have you actually used your suggestion to automate memory consumption
    > measurement as part of daily builds?
    Yes, but I have less than a dozen tests which fail if the memory consumption is significantly different.
    I have more tests which fail if the execution time is significantly different.
    Rather than use the setUp()/tearDown() approach, I use the testMethod() as a wrapper for the real test and add the check inside it. This is useful as different tests will use different amounts of memory.
    > Plus, I did not understand your suggestion, can you elaborate?
    > - I first assumed you meant freeMemory(), which, as you suggest, is not
    > accurate, since it returns "an approximation of [available memory]"
    freeMemory() gives the free memory out of the total. The total can change, so you need to take total - free as the memory used.
    > - I re-read it and now assume you do mean totalMemory(), which
    > unfortunately will grow only when more memory than the initial heap
    > setting is needed.
    More memory is needed when more memory is used. Unless your test uses a significant amount of memory there is no way to measure it reliably, i.e. if a GC is performed during a test, the test can appear to use less memory than it consumes.
    > - Eventually, I may need to include calls to System.gc() but I seem to
    > remember it is best-effort only (endless discussion) and may not help
    > accuracy.
    If you do a System.gc() followed by a Thread.yield() at the start, it can improve things marginally.
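
    Here is a minimal, self-contained sketch of the wrapper approach described above (my own illustration; the class name, the workload and the 10 MB budget are hypothetical). It samples totalMemory() - freeMemory() around the test body, with the System.gc()/Thread.yield() stabilisation mentioned above:

    import java.util.ArrayList;
    import java.util.List;

    // Illustration only: a test "wrapper" that fails on a memory-consumption regression.
    public class MemoryCheckExample {
        static long usedHeap() {
            // Best-effort stabilisation before sampling, as suggested above.
            System.gc();
            Thread.yield();
            Runtime rt = Runtime.getRuntime();
            return rt.totalMemory() - rt.freeMemory();
        }

        public static void main(String[] args) {
            long before = usedHeap();

            // The "real" test body would go here (hypothetical workload):
            List<int[]> data = new ArrayList<int[]>();
            for (int i = 0; i < 1000; i++) {
                data.add(new int[1024]);   // roughly 4 KB per element
            }

            long after = usedHeap();
            System.out.println("Elements created: " + data.size()); // keeps data reachable
            long deltaMb = (after - before) / (1024 * 1024);
            System.out.println("Approximate memory used by the test: " + deltaMb + " MB");

            // In a JUnit-style wrapper you would fail when the delta exceeds a budget:
            if (deltaMb > 10) {
                throw new AssertionError("Memory consumption regression: " + deltaMb + " MB");
            }
        }
    }

    As noted above, different tests use different amounts of memory, so the budget has to be chosen per test, and only tests that allocate a significant amount can be checked reliably this way.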

  • Query on memory consumption during SQL

    Hi SAP Gurus,
    Could I kindly request for your inputs concerning the following scenario?
    To put it quite simply, we have a program where we're required to retrieve all the fields from a lengthy custom table, i.e. the select statement uses an asterisk.  Unfortunately, there isn't really a way to avoid this short of a total overhaul of the code, so we had to settle for this (for now).
    The program retrieves from the database table using a where clause filtering on a single company code value.  Kindly note that company code is not the only key in the table.  In order to help with the memory consumption, the original developer employed retrieval by packages (also note that the total length of each record is 1803...).
    The problem encountered is as follows:
    - Using company code A, retrieving 700k entries in packages of 277, the program ran without any issues.
    - However, using company code B, retrieving 1.8m entries in packages of 277, the program encountered a TSV_TNEW_PAGE_ALLOC_FAILED short dump.  This error is encountered the very first time the program goes through the select statement, ergo it has not even been able to pass through any additional internal table processing yet.
    About the only significant difference between the two company codes is the number of corresponding records they have in the table.  I've checked whether company code B had more values in its columns than company code A, but they're just the same.
    What I do not quite understand is why memory consumption changed just by changing the company code in the selection.  I thought that the memory consumed by both company codes should be the same... at least, in the beginning, considering that we're retrieving by packages, so we're not trying to get all of the records all at once.  However, the fact that it failed at the very beginning has shown me that I'm gravely mistaken.
    Could someone please enlighten me on how memory is consumed during database retrieval?
    Thanks!

    Hi,
    with FAE (FOR ALL ENTRIES) the whole query, even for a single record in the itab, is executed and all results for
    the company code are transferred from the database to the DBI, since the duplicates are removed by the DBI,
    not by the database.
    If you use package size the resultset is buffered in a system table in the DBI (which allocates memory from your user quota). And from there on the package sizes are built and handed over to your application (into table lt_temp).
    see recent ABAP documentation:
    Since duplicate rows are only removed on the application server, all rows specified using the WHERE condition are sometimes transferred to an internal system table and aggregated here. This system table has the same maximum size as the normal internal tables. The system table is always required if addition PACKAGE SIZE or UP TO n ROWS is used at the same time. These do not affect the amount of rows transferred from the database server to the application server; instead, they are used to transfer the rows from the system table to the actual target area.
    What you should do:
    Calculate the size needed for your big company code B: the number of rows multiplied by the line length.
    That is the minimum amount you need for your user memory quota (quotas can be checked with
    ABAP report RSMEMORY). If the amount of memory is sufficient, then try without PACKAGE SIZE.
    SELECT * FROM <custom table>
    INTO TABLE lt_temp
    FOR ALL ENTRIES IN lt_bukrs
    WHERE bukrs = lt_bukrs-bukrs
    ORDER BY primary key.
    This might actually use less memory than the PACKAGE SIZE option for the FOR ALL ENTRIES.
    Since with FAE it is buffered anyway in the DBI (and subtracted from your quota), you can
    do it right away and avoid saving portions twice (once in the DBI buffer and a portion of that again in the
    package in lt_temp).
    If the amount of memory is still too big, you have to either increase the quotas, or select
    less data (additional where conditions), or avoid using FAE in this case in order to not read all
    the data in one go.
    Hope this helps,
    Hermann
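
    To make the sizing arithmetic above concrete (rows multiplied by line length), here is a small sketch; the record length and row counts are the figures from the question, while the 2 GB quota is purely hypothetical, and Java is used only because this is plain arithmetic:

    // Illustration only: estimate the data volume per company code.
    public class FaeMemoryEstimate {
        public static void main(String[] args) {
            long lineLengthBytes = 1803;   // record length from the question
            long rowsCompanyA = 700000;    // roughly 700k rows for company code A
            long rowsCompanyB = 1800000;   // roughly 1.8m rows for company code B

            double gb = 1024.0 * 1024 * 1024;
            System.out.printf("Company code A: ~%.2f GB%n", rowsCompanyA * lineLengthBytes / gb);
            System.out.printf("Company code B: ~%.2f GB%n", rowsCompanyB * lineLengthBytes / gb);

            // Against a hypothetical 2 GB user quota, A (~1.2 GB) fits while
            // B (~3.0 GB) does not, which would explain why only company code B
            // triggers the TSV_TNEW_PAGE_ALLOC_FAILED dump.
        }
    }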

  • J2EE Engine memory consumption (Usage)

    Dear experts,
    We have a J2EE Engine (a Java stack).  When I run routine monitoring via the browser and look at the memory consumption, I am met with a chart that shows a sawtooth-like graph. Every hour from 19:00 to 02:00 the memory consumption rises by approx. 200 MB; after 7 hours, all of a sudden, the memory consumption drops back down to the normal idle level and starts over again. I can confirm that at the time there are no users on the system.
    My question is: what is the J2EE engine doing, since there is no user activity? Is the J2EE engine running some system applications? Is it filling up the log files and then emptying (storing) them?
    I hope some of the experts can answer.
    I just want to understand what's going on on the system. If there is some documentation/white paper on how to interpret/read the J2EE monitor, I will be grateful if you drop the information or a link here.
    Mike

    Hi Mike
    To understand what exactly is being executed in Java engine, I'd suggest you perform Thread dump analysis as per:
    http://help.sap.com/saphelp_smehp1/helpdata/en/10/3ca29d9ace4b68ac324d217ba7833f/frameset.htm
    Generally 4-5 thread dumps are triggered at intervals of 20-25 seconds for better analysis.
    Here are some useful SAP Notes related to thread dump analysis:
    710154 - How to create a thread dump for the J2EE Engine 6.40/7.0
    1020246 - Thread Dump Viewer for SAP Java Engine
    742395 - Analyzing High CPU usage by the J2EE Engine
    Kind regards,
    Ved
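
    As a generic illustration only (this is not the SAP tool-based procedure from the notes above, just a minimal in-process sketch using the standard Thread API), something like the following prints the stack of every live thread, which is essentially the information a thread dump gives you about background activity:

    import java.util.Map;

    // Illustration only: a programmatic "thread dump" of the current JVM.
    public class SimpleThreadDump {
        public static void main(String[] args) {
            for (Map.Entry<Thread, StackTraceElement[]> e : Thread.getAllStackTraces().entrySet()) {
                Thread t = e.getKey();
                System.out.printf("Thread \"%s\" (state: %s)%n", t.getName(), t.getState());
                for (StackTraceElement frame : e.getValue()) {
                    System.out.println("    at " + frame);
                }
                System.out.println();
            }
        }
    }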

  • How to overcome the Memory leakage issue in crystal report 2008 SP2 setup.

    I have developed a small Windows-based tool with the help of Visual Studio 2008 to identify the memory consumption of the Crystal Reports object. It loads the Crystal Reports objects into memory and then releases the objects from memory. The tool simply does the "loading and unloading" of the objects in memory.
    The tool starts once "Test_MemoryConsumption.Exe" is executed. "Test_MemoryConsumption.Exe" consumes 9768 KB of memory before the Crystal Reports object is loaded into memory. That means 9768 KB is the normal memory consumption for running the tool.
    The Crystal Reports object is initiated by the tool, and the object loads the report into memory. Now "Test_MemoryConsumption.Exe" consumes 34980 KB of memory during the Crystal Reports object creation and report load process. The actual memory consumption of the Crystal Reports object is 34980 - 9768 = 25212 KB.
    The memory consumption of 34980 KB continues until the end of the process. The memory consumption is reduced from 34980 KB to 34652 KB once the report load process completes. That means only 328 KB is released. The tool issues the release command for the Crystal Reports object, but the object does not respond to the command and does not release its memory.
    The 34652 KB of memory consumption stays in memory once the job ends.  If I again initiate the Crystal Reports object, then it starts to consume memory from 34652 KB.
    Database objects and Crystal Reports objects are used properly in the tool. The object release commands are properly communicated to the Crystal Reports runtime, but the Crystal Reports Service Pack 2 runtime does not respond to the commands issued from the .NET tool.  Crystal Reports objects are properly initiated and disposed in the tool, but the memory is not released on the server.
    The memory consumption is only reduced once the server is restarted or the application is killed.
    Crystal Reports 2008 and Crystal Reports 2008 SP2 are installed on the server.
    Microsoft .NET Framework 2.0 SP2, Microsoft .NET Framework 3.0 SP2 and Microsoft .NET Framework 3.5 SP1 are available on the server.
    Could you please suggest how to avoid the memory consumption continually increasing, and how to release the memory once the Crystal Reports object is disposed?

    Hi Don,
    My case is a different one. I suspect the problem is with the runtime installation (the Crystal Reports 2008 Service Pack 2 installer) that we installed on the server.
    Let me explain with a live scenario our client faced with the Crystal Reports 2008 Service Pack 2 runtime.
    Our client is using an application to print their reports. The application is developed as a Windows service
    that keeps running on the server. The service executes the client's Crystal Reports (labels report, stock report), which are designed for the client's needs, and the reports are printed on a printer.
    Ten reports of the same type (label report) are printed per minute. Reports are not printed during non-business hours, but the Windows service keeps running.  The memory consumption of the application is about 160 MB during business hours.
    For example, on Monday the application's memory consumption starts at 160 MB, reaches 165 MB during peak business hours, and ends at 163 MB at the end of Monday. That means the memory consumption stays at 163 MB during the non-business hours, when no reports are printed.
    On Tuesday, the memory consumption starts at 163 MB, reaches 168 MB during peak hours, and ends at 165 MB at the end of Tuesday. The same process continues until Friday; by the end of Friday the memory consumption of the application has reached 170 MB.
    The application's memory consumption slowly increases on the server. In 5 days it reaches the threshold value (170 MB) for the server. The application hangs once the memory consumption reaches 170 MB, and we get error messages such as "Attempted to read or write protected memory" / "Not enough memory for process".  If we restart the server, or restart the service, the memory consumption of the application drops back to 160 MB.
    From the above scenario, we know that the problem is either with the application objects or with the Crystal Reports objects. We have checked the dispose methods of the application objects completely; I am sure the application objects are properly disposed, so I don't think the problem is with them. The problem is with the Crystal Reports objects.
    The application properly calls the dispose methods on the Crystal Reports objects, but the Crystal Reports objects are not released from
    the memory.
    Crystal Reports 2008 Service Pack 2 is installed on the server.
    As you said, if the Crystal Reports runtime is not released from memory, will the memory consumption keep increasing? In a service-oriented application, how do we unload the Crystal Reports runtime?
    Do you have any fix for this kind of issue?
    Will Crystal Reports 2008 Service Pack 3 help with this issue?

  • Check Process memory consumption and Kill it

    Hello
    I have just installed Orchestrator and have a problem that I think is perfect for Orchestrator to handle.
    I have a process that sometimes hangs, and the only way to spot it is that its memory consumption has stopped increasing.
    The process is started every 15 minutes and scans a folder; if it finds a file, it reads the file into a system. You can see that it is working by the increasing memory consumption. If the read fails, then the memory consumption stops growing. The process is still running
    and responding, but it is hung.
    I'm thinking about doing a runbook that checks the memory consumption every 5 minutes and compares it with the previous value. If the last three values are the same, then I will kill the process and start it again.
    My problem is that I have not found a way to check the memory consumption of a process.
    I have set up a small test, just verify that I get the correct process, with the activity Monitor process -> Get Process Status -> Append Line (process name).
    But How do I get the process memory consumption?
    /Anders

    Now that I think about it a bit more, I don't think there will be an easy way to set up a monitor for your situation in SCOM. Not that it couldn't be done, just not easily. Getting back to SCORCH: what you are trying to do isn't an everyday kind of
    scenario. I don't think there is a built-in activity for this.
    The hardest thing to overcome, whether you use SCORCH or SCOM, is likely going to be determining the error condition of three consecutive samples of the same memory usage. You'll need a way to track the samples. I can't think of a good way to do
    this without utilizing scripting.

  • Query on Memory consumption of an object

    Hi,
    I am able to get information on the number of instances loaded and the memory occupied by those instances using a heap histogram.
    Class                     Instance Count      Total Size
    class [C                       10965            557404
    class [B                        2690            379634
    class [S                        3780            220838
    class java.lang.String         10807            172912
    Is there a way to get more detailed info, like which class's String objects consume the most memory?
    In other words,
    the memory consumption of String is 172912 bytes. Can I have a split-up like:
    String objects of class A - 10%
    String objects of class B - 90%
    Thanks

    I don't know what profiler you are using but many memory profilers can tell you where the strings are allocated.

  • Query memory consumption

    Hi,
    Need some expert input on SQL here. May I know how much memory (RAM) a simple query like 'SELECT SUM(Balance) FROM OCRD' consumes?
    What about a query like
    select (select sum(doctotal) from ordr) + (select sum(doctotal) from odln) + (select sum(doctotal) from oinv)
    How much memory would it normally take? The reason is that I have a query quite similar to this and it would be run quite often, so I wonder if it is feasible to use this type of query without bringing the server to a crawl.
    Please note that the real query would include JOINs and such. Thanks
    Any information is appreciated

    Hi Melvin,
    Not sure I'd call myself an expert, but I'll have a go at an answer.
    I think you are going to need to set up a test environment and then stress test your solution to see what happens. There are so many different variables that affect the memory consumption that no-one is likely to be able to say just what the impact will be on your server. SQL Server, by default, will allocate 1024 KB to each query but, of course, quite a number of factors will affect whether SQL needs more memory than this to execute a particular query (e.g. the number of joins, the locks created, whether the data is grouped or sorted, the size of the data, etc.). Also, SQL will release memory as soon as it can (based on its own algorithms), so a query that is run periodically has much less impact on the server than a query that will be run concurrently by multiple users. For these reasons, the impact can only really be assessed if you test it in a real-world scenario.
    If you've ever seen SQL Server memory usage when XL Reporter is running a very large report then you'll know that this is a very memory-hungry operation. XL Reporter bombards SQL with a huge number of separate little queries, and SQL Server starts grabbing significant amounts of memory to fulfill these queries. As the queries are coming so fast, SQL hasn't yet got around to releasing the memory used by previous queries, so SQL instead grabs available memory from the server.
    You'll get better performance and scalability by using stored procedures, but SDK certification does not allow the use of SPs in the SBO databases.
    Hope this helps,
    Owen

  • Continuously refreshing a tab after an interval leads to high memory consumption (400MB to 800MB in 30 seconds for 3 refreshes at 10 secs interval), why?

    Environment:
    MAC OSX 10.9.5
    Firefox 32.0.3
    Firefox keeps consuming a lot of memory when you keep refreshing a tab at an interval.
    I opened a single tab in Firefox and logged into my Gmail account. At this stage the memory consumption was about 400 MB. I refreshed the page after 10 seconds and it went to 580 MB. Again I refreshed after 10 seconds and this time it was 690 MB. Finally, when I refreshed a third time after 10 seconds, it was showing 800 MB.
    Nothing had changed on the page (no new email, chat conversation, etc., nothing). Somehow I feel that Firefox is not doing a good job at garbage collection. I tested this use case with a lot of other applications and websites and got similar results. Other browsers like Google Chrome, Safari, etc. work just fine.
    For one of my applications with three tabs open, Firefox literally crashed after the high memory consumption (around 2 GB).
    Can someone tell me if this is a known issue in Firefox, and is Firefox planning to fix it? Right now, is there any workaround or fix for this?

    Hi FredMcD,
    Thanks for the reply. Unfortunately, I don't see any crash reports in about:crashes. I am trying to reproduce the issue that makes the browser crash, but somehow it's not happening anymore; instead the browser gets stuck at some point. Here is what I am doing:
    - 3 tabs are open with the same page of my application. The page has several panels with charts, and the JavaScript libraries used for this page are backbone.js, underscore.js, require.js, highcharts.js
    - The page automatically reloads every 30 seconds
    - After the first load of these three tabs, the memory consumption is 600 MB. But after 5 minutes, the memory consumption goes to 1.6 GB and stays at that level.
    - After some time, the page won't load completely in any of the tabs. At this stage the browser becomes very slow and I have to either hard refresh the tabs or restart the browser.

  • How to find out memory consumption for table in HANA without load it into memory

    Hi,
    To determine the memory consumption of a table in HANA, you can query M_CS_TABLES; however, that requires the table to be loaded into memory first. I just wonder if there is another view that stores memory consumption information for all HANA tables, regardless of whether they are loaded into memory or not. Below is a screenshot for one of the tables in my system: since that table is only partially loaded into memory, "Total Memory Consumption (KB)" tells me the memory consumption of the portion loaded into memory. What I am looking for is something like "Estimated Maximum Memory Consumption (KB)", which gives the total memory consumption for the table including the portion not loaded into memory. Of course I can use this estimated information, but considering I already have close to a thousand tables in my HANA system, it's not practical to check the tables one by one.
    Thanks,
    Xiaogang.

    Hi Xiaogang,
    The Estimated Memory Size that you see in the table's Runtime Information is also available in M_CS_TABLES.
    If you don't get the size of a table in the M_CS_TABLES view, then it will also not be available in the table's Runtime Information.
    Even if tables are not loaded into memory, you can get the estimated size; just try running the query with the filter LOADED = 'NO'.
    Regards,
    Vivek

  • Memory consumption using cvitdms.dll

    Hi all!
    I am using a DIAdem library to create .tdms files.
    Through the DLL, I create and open a file, and then I start to append data values.
    When I look at the Task Manager, I can see that the memory consumption of my application does not stop increasing until I stop my program.
    I also tried starting the program without starting the logging methods that use the TDMS libraries, and in that case this behavior does not occur.
    I flush the data every 30 seconds or every 500 records.
    How can I solve this problem of memory consumption?
    Regards
    Gustavo

    Hey Gustavo,
    Do you have a small program that demonstrates this behavior?  If so, could you please upload the CVI source so I can reproduce your issue here?  Also, what version of CVI are you using?  I look forward to hearing back from you!
    Best Regards,
    Software Engineer
    Jett R

  • Memory Consumption in Multidimensional Arrays

    Hi,
    I've noticed that the memory consumption of multidimensional arrays in Java is sometimes far above what one could expect for the amount of data being stored. For example, here is a simple program which stores a table containing only integers and reports the memory consumption after it is filled:
    import java.util.Random;

    public class ArrayMemory {
        public static void main(String[] args) {
            int tableSize = 1000000;
            int noFields = 10;
            Random rnd3 = new Random();
            int[][] arr = new int[tableSize][noFields];
            for (int i = 0; i < tableSize; i++) {
                for (int j = 0; j < noFields; j++) {
                    arr[i][j] = rnd3.nextInt(100);
                }
            }
            // Encourage a collection so the reading is more stable
            Runtime.getRuntime().gc();
            Runtime.getRuntime().gc();
            Runtime.getRuntime().gc();
            // Ensures the table's data is still referenced
            System.out.println(arr[rnd3.nextInt(arr.length)]);
            long totalMemory = Runtime.getRuntime().totalMemory();
            long usedMemory = totalMemory - Runtime.getRuntime().freeMemory();
            System.out.println("Total Memory: " + totalMemory / (1024.0 * 1024) + " MB.");
            System.out.println("Used Memory: " + usedMemory / (1024.0 * 1024) + " MB.");
        }
    }
    Output:
    Total Memory: 866.1875 MB.
    Used Memory: 62.124053955078125 MB.
    In this case the memory consumption was around 20MB above the expected 38MB required for storing 10M integers. The interesting thing is that the memory consumption varies when the numbers of rows and columns are changed, even though the total amount of items is kept fixed (see below):
    Rows:100; Cols:100000 -> Used Memory: 43,05 MB
    Rows:1000; Cols:10000 -> Used Memory: 43,07 MB
    Rows:10000; Cols:1000 -> Used Memory: 43,24 MB
    Rows:100000; Cols:100 -> Used Memory: 44,96 MB
    Rows:1000000; Cols:10 -> Used Memory: 62,15 MB
    Rows:10000000; Cols:1 -> Used Memory: 192,15 MB
    Any ideas about the reasons for that behavior?
    Thanks,
    Marcelo

    mrnm wrote:
    > In this case the memory consumption was around 20MB above the expected
    > 38MB required for storing 10M integers.
    That's only the expected value if you assume that a 2D array of ints is nothing more than a bunch of ints lined up end to end. This is not the case. A "2D" array in Java is really just a plain ol' array whose component type is "reference to array".
    > The interesting thing is that the memory consumption varies when the
    > numbers of rows and columns are changed, even though the total amount of
    > items is kept fixed (see below):
    That's because, e.g., new int[200][100] creates 200 array objects (and references to each of them), each of which holds 100 ints, while new int[100][200] creates 100 array objects (and references to each of them), each of which holds 200 ints.
    Edited by: jverd on Feb 24, 2010 11:17 AM
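
    A rough back-of-the-envelope sketch of that per-row overhead (my own addition, not from the thread; it assumes roughly a 16-byte header per array object and 4-byte compressed references, which are typical 64-bit HotSpot values but are implementation-dependent):

    // Illustration only: estimate the footprint of new int[1000000][10].
    public class ArrayOverheadEstimate {
        public static void main(String[] args) {
            long rows = 1000000, cols = 10;

            // Assumed per-object costs (implementation-dependent):
            long arrayHeaderBytes = 16; // object header plus length field
            long referenceBytes = 4;    // compressed reference to each row array

            long dataBytes = rows * cols * 4;                  // the ints themselves
            long rowOverheadBytes = rows * arrayHeaderBytes;   // one header per row array
            long outerArrayBytes = rows * referenceBytes + arrayHeaderBytes;

            long totalMb = (dataBytes + rowOverheadBytes + outerArrayBytes) / (1024 * 1024);
            System.out.println("Estimated footprint: ~" + totalMb + " MB");
            // Roughly 38 MB of ints plus ~19 MB of per-row overhead, i.e. ~57 MB,
            // in the same ballpark as the ~62 MB measured above. The more rows,
            // the more per-row overhead, which is why the 10000000 x 1 case is so
            // much larger than the 100 x 100000 case.
        }
    }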
