Will TimesTen performance degrade if the allocated memory is nearly used up?

If I insert data until it nearly uses up the memory allocated to TimesTen, will performance be affected?

There are two kinds of disk files used by TimesTen. Transaction log files (dsname.logn) are created as transactions are executed. These are normally purged automatically by checkpoints (unless you have disabled checkpointing...), but incorrect use of some features such as replication, XLA, AWT cache groups and incremental backups can prevent log files from being purged automatically.
The other type of file is the checkpoint file (there are two of them). These are each the size of the datastore (PermSize plus perhaps 20 MB) and they exist until you drop the entire datastore (ttDestroy). That is normal and correct behavior. Although the files may start out very small, they will grow as you add data to the database. Even dropping all the tables will not cause the files to shrink. This is just an artifact of how checkpoints are structured and is not a problem. Most other databases (including Oracle) behave similarly: they do not free parts of their database files back to the OS just because some data is deleted or a table is dropped.
I'm afraid I do not see what the issue is here...
Chris

Similar Messages

  • Memory Allocation problem when using JNI

    For a project we need to interface LabWindows/CVI and TestStand with an application written in Java, so we are using JNI. The code uses JNI_CreateJavaVM to start a JVM to run the Java interface code. The code ran fine for some time, but now (without any obvious change on either the CVI side or the Java side) JNI_CreateJavaVM fails with error code -4, which means that the start of the JVM failed due to a memory allocation failure. A first investigation showed that even when Windows Task Manager shows about 600 MB of free physical memory, CVI can only allocate about 250 MB as a single block at the moment we call JNI_CreateJavaVM. That might be a little too small, as we need to pass -Xmx192m to the JVM to run our code. Unfortunately, just increasing the physical memory of the machine from 1.5 GB to 2 GB doesn't change anything: the free memory shown by Task Manager increases, but the allocatable memory block size does not. Are there any tricks to optimize CVI/TestStand for this use case? Or are there known problems with JNI?

    Hi,
    have you tried other functions to allocate memory?
    The -Xmx option only sets the maximum heap size. You can also try -Xms, which sets the initial Java heap size.
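    For what it's worth, here is a minimal Java-side sketch (a hypothetical example of mine, not from the original code) that prints the heap limits the JVM actually ended up with. Running it with, say, -Xms64m -Xmx192m shows that -Xms controls the initial (committed) heap while -Xmx is only the ceiling; note that HotSpot typically reserves the whole -Xmx range as one contiguous block of address space at startup, which is why a fragmented 32-bit process can fail to create the JVM even with plenty of free physical memory.

    public class HeapCheck {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            long mb = 1024 * 1024;
            // totalMemory(): heap currently committed (roughly -Xms right after startup)
            // maxMemory():   the ceiling the heap may grow to (-Xmx)
            System.out.println("committed heap: " + rt.totalMemory() / mb + " MB");
            System.out.println("max heap:       " + rt.maxMemory() / mb + " MB");
        }
    }

    Run it as, for example: java -Xms64m -Xmx192m HeapCheck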

  • External memory allocation and management using C / LabVIEW 8.20 poor scalability

    Hi,
    I have multiple C functions that I need to interface. I need to support numeric scalars, strings and booleans, and 1-4 dimensional arrays of these. The programming problem I am trying to avoid is that I have multiple different functions in my DLLs that all take as an input, or return, all of these data types. I can create a polymorphic interface for all these functions, but I end up with about 100 interface VIs for each of my C functions. This was still somehow acceptable in LabVIEW 8.0, but in LabVIEW 8.2 all these polymorphic VIs in my LVOOP project get read into memory when the project is opened. So I have close to 1000 VIs read into memory whenever I open my project. It now takes about ten minutes to open the project and some 150 MB of memory is consumed instantly. I still need to expand my C interface library, and LabVIEW simply doesn't scale up to meet the needs of my project anymore.
    I currently reserve my LabVIEW data types using the DSNewHandle and DSNewPtr functions. I then initialize the allocated memory blocks correctly and return the handles to LabVIEW. The LabVIEW compiler interprets the Call Library Function Node terminals of my memory block as a specific data type.
    So here is what I thought. I don't want the LabVIEW compiler to interpret the data type at compile time. What I want to do is return a handle to the memory structure together with some metadata describing the data type. All of my many functions would then return this kind of handle; let's call it a data handle. I can later convert this handle into a real data type, either by typecasting it somehow or by passing it back to C code and expecting a certain type as a return. This way I can reduce the number of needed interface VIs to close to 100, which is still acceptable (i.e. LabVIEW 8.2 doesn't freeze).
    So I practically need functionality similar to a variant. I cannot use variants, since I need to avoid making memory copies; when I convert to and from a variant, my memory consumption triples. I handle arrays that consume almost all available memory and I cannot accept that memory is used ineffectively.
    The questions are: can I use the DSNewPtr and DSNewHandle functions to reserve a memory block but not return a LabVIEW structure of that size? Does LabVIEW garbage collection automatically decide to dispose of my block if I don't return it from my C code immediately but only later at the next call to C code? And can I typecast a 1D U8 array to an array of any dimensionality and any numeric data type without a memory copy (i.e. does typecast work the way it works in C)?
    If I cannot find a solution to this LabVIEW 8.20 scalability issue, I will have to seriously consider moving our project from LabVIEW to some other development environment such as C++ or one of the .NET languages.
    Regards,
    Tomi
    Tomi Maila

    I have to answer myself since nobody else has answered yet. I came up with one solution that relies on LabVIEW queues. Queues of different types are all referred to in the same way and can also be typecast from one type to another. This means that one can use single-element queues as a kind of variant data type, which is quite safe. However, one copy of the data is made when you enqueue and dequeue the data.
    See the attached image for details.
    Tomi Maila
    Attachments:
    variant.PNG (9 KB)

  • Why do I keep being asked whether Firefox must send information that will repeat an action performed earlier? This only happens when using Lexolous. How can I stop it? Thank you

    The message reads:
    "To display this page, Firefox must send information that will repeat any action (such as a search or order confirmation) that was performed earlier."
    Resend / Cancel
    How do I stop getting this pop-up?

    I assume that you mean a Greasemonkey script for KoC ?
    Try [/questions/780792]

  • Will a large number of waiting threads in a thread pool degrade performance?

    Hi,
    I use a thread pool and set the maximum number of threads to execute. My problem is that the thread pool starts this number of threads, and these threads wait until they get work. Will this degrade performance? (The threads do I/O operations.)

    Threads waiting for work will not degrade performance. If your work involves those threads waiting for I/O then that in itself will not degrade performance either (as long as they block and don't poll, of course).
    All live threads consume resources, however, so if you are short on memory don't create too many of them.
    Pre-starting a large number of threads in a pool can cause a startup delay, of course. Generally you should let the pool create threads as needed until it reaches the core pool size.
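    To illustrate that last point, here is a small Java sketch (my own hypothetical example using java.util.concurrent; none of these names come from the original post) of a pool that creates its worker threads lazily, on demand, up to the core size rather than pre-starting them:

    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class LazyPoolDemo {
        public static void main(String[] args) throws InterruptedException {
            // Core size 4, max size 4; extra tasks wait in the queue.
            ThreadPoolExecutor pool = new ThreadPoolExecutor(
                    4, 4, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());

            // No threads exist yet - they are created only as tasks arrive.
            System.out.println("threads before work: " + pool.getPoolSize()); // 0
            // pool.prestartAllCoreThreads(); // opt-in pre-start; this is what can add startup delay

            for (int i = 0; i < 10; i++) {
                final int id = i;
                pool.execute(() -> {
                    // Simulate blocking I/O: a waiting thread costs some memory but essentially no CPU.
                    try { Thread.sleep(100); } catch (InterruptedException ignored) { }
                    System.out.println("task " + id + " done on " + Thread.currentThread().getName());
                });
            }

            System.out.println("threads after work submitted: " + pool.getPoolSize()); // grows to 4

            pool.shutdown();
            pool.awaitTermination(10, TimeUnit.SECONDS);
        }
    }

    Idle pool threads simply block waiting on the work queue, which consumes no CPU; the main cost of a large pool is the per-thread stack memory.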

  • Memory Allocation to JVM

    Hi
    How do I allocate contiguous memory space to JVM 1.3 on Windows and AIX?
    Regards
    Sudhindra

    Hi,
    You are right, but the issue here is performance enhancement. I feel that when we are talking about huge volumes of data and transactions, performance will be enhanced if the memory allocated is contiguous.
    As far as I know, all modern platforms use virtual addressing. The overhead of that applies regardless of the layout of physical memory, so I am not sure how contiguous blocks would help.
    So can you please tell me how to do this? Where can I get more information about this?
    Like I said, it may be specific to your platform. If so, you need to look at the documentation for your platform. And the only way you are going to get any advantage for Java objects from that is if you write your own JVM.

  • TimesTen not releasing shared memory even after DB destroy

    Hi,
    After the TimesTen DB is destroyed, the shared memory allocated to the DB is not released by the system.
    We are using TimesTen Release 11.2.1.7.0 (64 bit Linux/x86_64)
    We need to reboot the system to clear the stale shared memory used by TimesTen.
    Please let me know what is the issue here.
    Regards
    Pratheej

    Hi Pratheej,
    How are you actually destroying TimesTen? Are you using ttDestroy? It looks like you may be forcing a shutdown of the TimesTen master daemon, in which case current connections won't be aware that the master daemon has gone until they next try to access TimesTen; in the meantime they can keep the shared memory segment in memory.
    Take a look at ttStatus to see what connections are open. Disconnect them all; by default TimesTen will then unload the datastore from memory, and then you can use ttDestroy.
    Tim

  • Memory allocation error while using UPV equipment

    Hi,
    While running an audio test scenario using a UPV (audio test equipment, see http://www2.rohde-schwarz.com/product/upv.html),
    I got the attached message after 4 hours.
    I need help understanding the cause of that message.
    TNX,
    Shmulik dekel
    Texas Instruments
    [email protected]
    Attachments:
    UPV_Problem.doc (158 KB)

    Hi Shmulik!
    It appears that error can be caused by a number of different things. My guess would be that you are not closing your VISA sessions when you are finished with them. I think that the following two links might be helpful to you:
    VISA Troubleshooting Wizard
    KnowledgeBase: Insufficient System Resources to Perform Necessary Memory Allocation
    Let me know if you have any further questions!
    NickB

  • Doubt in memory allocation

    Hi..
    I have a doubt about object memory allocation. For example, see the following code:
    String str = new String("JAVA PROGRAMMING");
    1) int index = str.toLowerCase().indexOf("ram", 0);
    2) String temp = str.toLowerCase();
    int index = temp.indexOf("ram", 0);
    Questions:
    Of the above two forms of code, which one is advisable and gives better performance?
    What will happen when I execute str.toLowerCase().indexOf("ram", 0)? I mean, will it do the lowercase conversion in the same memory location or create a temporary memory area?

    It means that the memory of a String is never reused to hold, for example, a substring. The substring will always be allocated in new memory.
    Are you sure?
    /**
     * Initializes a newly created <code>String</code> object so that it
     * represents the same sequence of characters as the argument; in other
     * words, the newly created string is a copy of the argument string. Unless
     * an explicit copy of <code>original</code> is needed, use of this
     * constructor is unnecessary since Strings are immutable.
     * @param   original   a <code>String</code>.
     */
    public String(String original) {
        this.count = original.count;
        if (original.value.length > this.count) {
            // The array representing the String is bigger than the new
            // String itself.  Perhaps this constructor is being called
            // in order to trim the baggage, so make a copy of the array.
            this.value = new char[this.count];
            System.arraycopy(original.value, original.offset,
                             this.value, 0, this.count);
        } else {
            // The array representing the String is the same
            // size as the String, so no point in making a copy.
            this.value = original.value;
        }
    }
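    To make the original question concrete, here is a small self-contained sketch (my own example, not from the thread) showing that the two forms allocate exactly the same lowercase copy; the only difference is whether you keep a named reference to it. toLowerCase() never converts in place, because Strings are immutable: it builds a new String whenever any character actually changes case.

    public class LowerCaseIndexOf {
        public static void main(String[] args) {
            String str = new String("JAVA PROGRAMMING");

            // Form 1: the lowercase copy is created, used for indexOf, then becomes garbage.
            int index1 = str.toLowerCase().indexOf("ram", 0);

            // Form 2: the same single copy is created; we merely hold a reference to it,
            // which only pays off if the lowercase string is reused later.
            String temp = str.toLowerCase();
            int index2 = temp.indexOf("ram", 0);

            System.out.println(index1 + " " + index2); // both print 9
        }
    }

    So in terms of allocations the two forms are equivalent; prefer the temporary variable only when you need the lowercase string more than once.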

  • Queue of arrays without dynamic memory allocation

    Hey folks,
    I'm working on optimizing a timing-critical VI. This VI is the producer in a producer/consumer architecture. I'm trying to populate a queue from a file in a manner that is as efficient as possible. My current plan of attack is:
    - read a block of data from the file and populate a pre-allocated array;
    - add the array (always of the same size) to a queue with a defined maximum size (e.g. 50 elements);
    - do this in a while loop, as in the standard producer/consumer model.
    To improve performance I would like to ensure that there is no dynamic memory allocation on the queue's behalf. From what I understand, this is easily done if the data type in the queue is a fixed-size scalar (e.g. double, int). However, since the size of an array can vary, does this mean that any queue of arrays will always dynamically allocate memory? Is there a way to ensure that the queue always reuses the same memory, as in a circular queue?
    Thanks,
    Steve

    Duplicate.
    Try to take over the world!

  • Problem in dynamic memory allocation

    Hi,
    My name is Ravi Kumar. I'm working on a project to improve organizational performance, which includes using Visual Studio for simulation. I'm using dynamic memory allocation to allocate space for the arrays used in the program. Now I have a run-time error and I can't work out where it is going wrong. Can someone please help me with this issue?
    If anyone is interested in helping, please leave a comment with your email ID so that I can share the whole project folder.
    Thanks,
    Ravi

    Hi Ravi,
    Don is right; this forum is for discussing questions and feedback about the Microsoft Office client.
    Please post in the MSDN forum for Visual Studio, where you can get more experienced responses:
    https://social.msdn.microsoft.com/Forums/en-US/home?forum=visualstudiogeneral
    The reason we recommend posting in the appropriate forum is that you will reach the most qualified pool of respondents, and other partners who read the forums regularly can either share their knowledge or learn from your interaction with us. Thank you for your understanding.
    Regards,
    Ethan Hua
    TechNet Community Support
    It's recommended to download and install the Office Configuration Analyzer Tool (OffCAT), which is developed by Microsoft Support teams. Once the tool is installed, you can run it at any time to scan for hundreds of known issues in Office programs.

  • Cluster memory allocation

    Hello,
    I have a beginner's question: I have a large number of variables I need to pass to a function. Will there be a difference in memory use if I pass them directly, compared to bundling them into a cluster, passing the cluster to the function, and unbundling it inside?
    I tried to read online posts, and some say that a cluster is like a struct... does that mean that bundling variables into a cluster creates new memory locations for each variable, with overhead? My application already refuses to run ("Not enough memory"), so if clusters create new memory allocations, it's critical that I know it...
    Any information is appreciated
    MichalM 

    Norbert B wrote:
    Different tunnels are always different dataspaces, so an output tunnel creates a copy with respect to the input tunnel. A shift register can address this because the left and right nodes grant access to the same dataspace. Please note that most often this does not have much effect, but when working with arrays it is mandatory to work with shift registers.
    Hi Norbert, you're right (as usual).
    One small thing about shift registers (vs. auto-indexing)... Let's say we have a pretty big array (near the memory limit) which has to be computed inside a loop.
    We have two possibilities: a) use an auto-indexing tunnel, or b) use a "preallocated" array and a shift register. Something like this:
    Method b), with a preallocated array, is "more understandable" for me also from the "traditional programming" point of view: contiguous memory is allocated (as with malloc), the shift register then acts as a pointer, and we replace elements step by step.
    Method a) should in theory be slow, because a new element is added to the array at each iteration (which would cause memory reallocation every time), but it seems LabVIEW is intelligent enough to allocate the memory before the loop rather than during it, and auto-indexing is actually faster than the shift register in this case.
    But it looks completely different with while loops: now the array cannot be preallocated by auto-indexing, because the total number of iterations is unknown, and this causes a big performance penalty (at least on the first run). So here the shift register is preferred. What is funny is that on the second run method a) will be faster than method b) (it looks like internal LabVIEW caching), but when the total number of iterations changes it becomes slow again.
    And finally, a For Loop with a conditional terminal vs. a While Loop: the total number of iterations is unknown in both cases, but the For Loop is still fast because memory is preallocated before the iterations (the created array is "trimmed" if the loop is stopped by the condition).
    It means that sometimes (but not always) auto-indexing is preferable to a shift register.
    Andrey.

  • Short term memory allocator and cache memory allocator out of memory

    Hi,
    I have three NW 6.5 SP8 servers in production. One of them, the one which holds Pervasive SQL 9.7, began to show the following errors:
    Cache memory allocator out of available memory.
    Short term memory allocator is out of memory.
    360396 attempts to get more memory failed.
    request size in bytes 1048576 from Module SERVER.NLM
    I show here segstats.txt:
    *** Memory Pool Configuration for : DBASE_SERVER
    Time and date : 02:42:36 AM 12/02/2012
    Server version : NetWare 6.5 Support Pack 8
    Server uptime : 11d 04h 35m 28s
    SEG.NLM uptime : 0d 00h 01m 17s
    SEG.NLM version : v2.00.17
    Original Memory : 4,292,812,800 bytes (4.00 GB)
    ESM Memory : 805,302,272 bytes (768.0 MB)
    0xFFFFFFFF --------------------------------------------------------------
    | Kernel Reserved Space |
    | |
    | Size : 180,355,071 bytes (172.0 MB) |
    | |
    0xF5400000 --------------------------------------------------------------
    | User Address Space (L!=P) |
    | |
    | User Pool Size : 671,088,640 bytes (640.0 MB) |
    | High Water Mark : 559,710,208 bytes (533.8 MB) |
    | PM Pages In Use : 1,855,488 bytes (1.8 MB) |
    | |
    0xCD400000 --------------------------------------------------------------
    | Virtual Memory Address Space (L!=P) |
    | |
    | VM Address Space : 2,369,781,760 bytes (2.21 GB) |
    | Available : 801,435,648 bytes (764.3 MB) |
    | Total VM Pages : 800,870,400 bytes (763.8 MB) |
    | Free Clean VM : 785,563,648 bytes (749.2 MB) |
    | Free Cache VM : 15,306,752 bytes (14.6 MB) |
    | Total LP Pages : 0 bytes (0 KB) |
    | Free Clean LP : 0 bytes (0 KB) |
    | Free Cache LP : 0 bytes (0 KB) |
    | Free Dirty : 0 bytes (0 KB) |
    | NLM Memory In Use : 1,767,256,064 bytes (1.65 GB) |
    | NLM/VM Memory : 1,751,785,472 bytes (1.63 GB) |
    | Largest Segment : 2,097,152 bytes (2.0 MB) |
    | Lowest Kernel Page: 0 bytes (0 KB) |
    | : [0x00000000] |
    | High Water Mark : 2,243,096,576 bytes (2.09 GB) |
    | Alloc Failures : 370,804 |
    | |
    0x40000000 --------------------------------------------------------------
    | File System Address Space (L==P or L!=P) |
    | |
    | FS Address Space : 1,067,290,624 bytes (1017.8 MB) |
    | Available : 108,978,176 bytes (103.9 MB) |
    | Largest Segment : 3,362,816 bytes (3.2 MB) |
    | |
    | NSS Memory (85%) : 613,683,200 bytes (585.3 MB) |
    | NSS (avail cache) : 610,455,552 bytes (582.2 MB) |
    | |
    0x00627000 --------------------------------------------------------------
    | DOS / SERVER.NLM |
    | |
    | Size : 6,451,200 bytes (6.2 MB) |
    | |
    0x00000000 --------------------------------------------------------------
    Total NLMs loaded on the server: 307
    Top 20 Memory Consuming NLMs
    NLM Name Version Date Total NLM Memory
    ================================================== =============
    1. NWMKDE.NLM 9.70.07 Nov 14, 2008 813,035,623 bytes (775.4 MB)
    2. SERVER.NLM 5.70.08 Oct 3, 2008 467,216,096 bytes (445.6 MB)
    3. NSS.NLM 3.27.02 Nov 11, 2009 203,168,848 bytes (193.8 MB)
    4. NCPL.NLM 3.02 May 6, 2009 41,854,837 bytes (39.9 MB)
    5. NWSQLMGR.NLM 9.70.07 Nov 14, 2008 39,309,132 bytes (37.5 MB)
    6. DS.NLM 20217.07 Jan 30, 2009 24,851,303 bytes (23.7 MB)
    7. APACHE2.NLM 2.00.63 Apr 25, 2008 19,863,493 bytes (18.9 MB)
    8. CIOS.NLM 1.60 Feb 12, 2008 10,569,567 bytes (10.1 MB)
    9. OWCIMOMD.NLM 3.02 Nov 27, 2007 9,318,616 bytes (8.9 MB)
    10. APRLIB.NLM 0.09.17 Apr 25, 2008 8,959,760 bytes (8.5 MB)
    11. APACHE2.NLM 2.00.63 Apr 25, 2008 7,702,469 bytes (7.3 MB)
    12. FATFS.NLM 1.24 Aug 27, 2007 5,859,413 bytes (5.6 MB)
    13. NWPA.NLM 3.21.02 Oct 29, 2008 4,990,686 bytes (4.8 MB)
    14. PKI.NLM 3.32 Aug 25, 2008 4,069,957 bytes (3.9 MB)
    15. WS2_32.NLM 6.24.01 Feb 14, 2008 3,623,596 bytes (3.5 MB)
    16. NWMPM100.NLM 9.70.07 Nov 14, 2008 3,597,747 bytes (3.4 MB)
    17. NWODBCEI.NLM 9.70.07 Nov 14, 2008 3,459,159 bytes (3.3 MB)
    18. PORTAL.NLM 4.03 Sep 22, 2008 3,404,576 bytes (3.2 MB)
    19. JVM.NLM 1.43 Oct 16, 2008 2,701,919 bytes (2.6 MB)
    20. NLDAP.NLM 20218.11 Jan 30, 2009 2,579,131 bytes (2.5 MB)
    Top 20 NLM - Memory Trends
    NLM Name Original Memory Current Change
    ================================================== =========
    1. NWMKDE.NLM 842,068,071 bytes 813,035,623 bytes -27.7 MB
    2. SERVER.NLM 463,894,240 bytes 467,216,096 bytes 3.2 MB
    3. NSS.NLM 203,168,848 bytes 203,168,848 bytes 0 KB
    4. NCPL.NLM 41,850,741 bytes 41,854,837 bytes 4 KB
    5. NWSQLMGR.NLM 39,092,044 bytes 39,309,132 bytes 212 KB
    6. DS.NLM 24,896,359 bytes 24,851,303 bytes -44 KB
    7. APACHE2.NLM 19,855,301 bytes 19,863,493 bytes 8 KB
    8. CIOS.NLM 10,569,567 bytes 10,569,567 bytes 0 KB
    9. OWCIMOMD.NLM 9,277,656 bytes 9,318,616 bytes 40 KB
    10. APRLIB.NLM 8,959,760 bytes 8,959,760 bytes 0 KB
    11. APACHE2.NLM 7,702,469 bytes 7,702,469 bytes 0 KB
    12. FATFS.NLM 5,859,413 bytes 5,859,413 bytes 0 KB
    13. NWPA.NLM 4,957,918 bytes 4,990,686 bytes 32 KB
    14. PKI.NLM 4,135,493 bytes 4,069,957 bytes -64 KB
    15. WS2_32.NLM 3,619,500 bytes 3,623,596 bytes 4 KB
    16. NWMPM100.NLM 3,597,747 bytes 3,597,747 bytes 0 KB
    17. NWODBCEI.NLM 3,459,159 bytes 3,459,159 bytes 0 KB
    18. PORTAL.NLM 3,400,480 bytes 3,404,576 bytes 4 KB
    19. JVM.NLM 2,701,919 bytes 2,701,919 bytes 0 KB
    20. NLDAP.NLM 2,505,403 bytes 2,579,131 bytes 72 KB
    Logical Memory Summary Information
    ================================================== ===============================
    File System Cache Information
    FS Cache Free : 4,591,616 bytes (4.4 MB)
    FS Cache Fragmented : 104,386,560 bytes (99.6 MB)
    FS Cache Largest Segment : 3,362,816 bytes (3.2 MB)
    Logical System Cache Information
    LS Cache Free : 0 bytes (0 KB)
    LS Cache Fragmented : 722,448,384 bytes (689.0 MB)
    LS OS Reserved Data : 333,455,360 bytes (318.0 MB)
    LS Cache Largest Segment : 2,097,152 bytes (2.0 MB)
    LS Cache Largest Position : 2DE00000
    Summary Statistics
    Total Address Space : 4,294,967,296 bytes (4.00 GB)
    Total Free : 4,591,616 bytes (4.4 MB)
    Total Fragmented : 826,834,944 bytes (788.5 MB)
    Highest Physical Address : CFE53000
    User Space : 671,088,640 bytes (640.0 MB)
    User Space (High Water Mark) : 559,710,208 bytes (533.8 MB)
    NLM Memory (High Water Mark) : 2,243,096,576 bytes (2.09 GB)
    Kernel Address Space In Use : 2,572,759,040 bytes (2.40 GB)
    Available Kernel Address Space : 43,929,600 bytes (41.9 MB)
    Memory Summary Screen (.ms)
    ================================================== ===============================
    KNOWN MEMORY Bytes Pages Bytes Pages
    Server: 3487425552 851422 Video: 8192 2
    Dos: 86000 20 Other: 131072 32
    FS CACHE KERNEL NLM MEMORY
    Original: 3483172864 850384 Code: 46854144 11439
    Current: 108978176 26606 Data: 27242496 6651
    Dirty: 0 0 Sh Code: 49152 12
    Largest seg: 3362816 821 Sh Data: 20480 5
    Non-Movable: 81920 20 Help: 172032 42
    Other: 4235538432 4292855635 Message: 1236992 302
    Avail NSS: 610439168 149033 Alloc L!=P: 1661366272 405607
    Movable: 8192 2 Alloc L==P: 14843904 3624
    Total: 1751785472 427682
    VM SYSTEM
    Free clean VM: 785563648 191788
    Free clean LP: 0 0
    Free cache VM: 15306752 3737
    Free cache LP: 0 0
    Free dirty: 0 0
    In use: 1855488 453
    Total: 801435648 195663
    Memory Configuration (set parameters)
    ================================================== ==============================
    Auto Tune Server Memory = ON
    File Cache Maximum Size = 1073741825
    File Service Memory Optimization = 1
    Logical Space Compression = 1
    Garbage Collection Interval (ON) = 299.9 seconds
    VM Garbage Collector Period (ON) = 300.0 seconds
    server -u<number> = 671088640
    NSS Configuration File:
    C:\NWSERVER\NSSSTART.CFG
    File does not exist,
    or is zero byte in size.
    DS Configuration File:
    SYS:\_NETWARE\_NDSDB.INI
    File does not exist,
    or is zero byte in size.
    TSAFS Memory Information/Configuration
    ================================================== ==============================
    Cache Memory Threshold : 1%
    Read Buffer Size : 65536 bytes
    Max Data Sets for Read Ahead : 2
    Read Threads Per Job : 4
    NSS Memory Information/Configuration
    ================================================== ==============================
    Current NSS Memory Settings
    Cache Balance Percentage : 85%
    Cache Memory Allocated : 585.3 MB
    Available Cache from NSS : 582.2 MB
    Current NSS Caching Percentages
    Buffer cache hit percentage : 63%
    Name Tree cache hit percentage : 94%
    File cache hit percentage : 99%
    NSS Flush Status: Not Flushed
    Server High/Low Water Mark Values
    ================================================== ==============================
    NLM Memory High Water Mark = 2,243,096,576 bytes
    File System High Water Mark = 443,108 bytes
    User Space Information:
    User Space High Water Mark = 559,710,208 bytes
    Committed Pages High Water Mark = 87 pages
    Mapped VM Pages High Water Mark = 3,875 pages
    Reserved Pages High Water Mark = 400,103 pages
    Swapped Pages High Water Mark = 3,785 pages
    Available Low Water Mark = 294,670,336
    ESM Memory High Water Mark = 173 pages
    It seems that SERVER.NLM is growing without limit. When that occurs, I get the errors mentioned above.
    Although NWMKDE seems to have grown, it remains steady around the values shown.
    I'm not brave enough to apply memcalc's recommended fix, because the following line:
    set file cache maximum size=822083584
    returns an error saying the minimum value should be 1073741824.
    Can someone help me? I'm completely blind here.
    Thanks in advance.
    Gabriel

    I take it this is primarily a database server, in which case it's OK that Btrieve is using so much memory? You wouldn't want this to be a general file server too. Is the memory error causing any actual problem?
    The server is asking for only 1 MB, and due to fragmentation there is little free memory (actually 2 MB left, which is a little odd, but neither here nor there).
    Also, let's see your bti.cfg, which is the Btrieve config file. I'll paste below an ArcServe TID on Btrieve using excessive memory:
    Symptoms
    Btrieve was upgraded to version 8.5 during the installation of ARCserve r11.1. The cachesize in the BTI.cfg microkernel section is at 20 MB (20480). (Pervasive would like this setting placed to 20% of the server memory or database size which ever is less.) The server will keep adding 20 additional Megs of memory to the total amount of memory the server is using for database transactions after each backup job. This can be verified by performing the following at the server console:
    LOAD MONITOR
    Scroll down to System Resources under Available Options and hit enter.
    Scroll down to Alloc Memory (Bytes) and hit enter.
    Locate NWMKDE.nlm in the Resource Tags list.
    Sort by memory bytes and you will slowly see nwmkde.nlm move to the top of the usage list. Unless the server is rebooted, the small memory allocations stay at the increased amount.
    Explanation
    Starting with Btrieve version 8.5 and higher, Pervasive has been working to make the Btrieve database more dynamic. They have created a two-tier memory allocation approach. The first level is controlled by the cache size setting in the BTI.cfg. If this becomes inadequate, the second level will be accessed. The default setting for the second level is 60% of the server's total memory.
    The following line in the BTI.cfg will control the second level of memory caching:
    MaxCacheUsage=60; default is 60% of memory.
    An example would be a server with 100 MB of memory and the following settings in sys:\system\bti.cfg:
    [microkernel]
    cachesize=20480
    MaxCacheUsage=60
    This will cause the nwmkde.nlm to use 20 MB (20480) of memory initially and grow up to 60 percent of the total server memory or 60 MB.
    Now you also have to throw Max worker threads into the mix. A setting of Max worker threads = 3 in the BTI.cfg > Btrieve Communications Manager section will also use server memory. It will use 1 MB per thread. In this example, 3 Megs of additional memory will be used. That will bring the total amount of memory used by nwmkde.nlm to 20 MB (20480) + 3 MB = 23 MB when the server is first booted. After running some backups, this number could go up to as high as 60 MB (60% of server memory) if the server dynamically requires it.
    Resolution
    The MaxCacheUsage=60 setting must be set down from this 60% number. Pervasive recommends setting this from 0 to 20. The server needs to be rebooted for this change to take effect.

  • IOS app crashes on return from cameraUI - a memory allocation problem?

    Hey all,
    I'm trying to finish my first app.
    When running on iOS, the app SOMETIMES crashes after returning from cameraUI (either "use"/MediaEvent.COMPLETE or "cancel"/Event.CANCEL).
    When I exit some other running apps on my iPhone 3GS (and not that many are open), the problem goes away, which makes me think this is a memory allocation problem.
    In that respect, can I trust iOS to exit inactive applications to free up more memory for my currently active AIR app?
    (There is no memory leak.)
    This is an iPhone 3GS running OS version 4.3.5.
    The app was made with Flash Pro 5.5 overlaid with the AIR 3.1 SDK, and deployed using the "deploy for app store" type (which should be the most bug-free).
    (There are no crashes on the Android or desktop versions.)
    Has anyone had this cameraUI problem, or a similar one where an app crashes if more than some number of apps are open?
    Thanks,
    Saar

    I don't get this. It's beyond frustrating:
    we are not talking about using an uncommon phone capability, access to a phones camera is about the most basic native level of access you would be looking for in a mobile framework
    we are not talking about an edge case in usage, just trying to take a simple picture consistently
    we are not talking about a feature issue where it doesn't quite work the way you want, it crashes the whole app hard! 
    we are not talking about a hard to recreate, only happening to a few people case - it seems from what I have read the Camera integration is fundamentally broken and I have spent days researching this and only found frustration from people out there
    we are not talking about an issue that has no consequences - in several places on this forum and others, people have emphasized how it is affecting their platform decisions and their ability to submit apps. You even have people on this board recommending not to use Flex Mobile and to move to other platforms. That is not what you want to be happening when you are at the adoption phase of a new product.
    And that is the response - on this thread and here http://forums.adobe.com/message/4125590#4125590 - we know it's an issue but we don't know when it will be fixed, with no proactive communication on status - only a growing body of people like me getting increasingly frustrated. What does it take for an issue to be a show stopper? Priority 1? An "affecting customer decisions" priority?
    In my case I am in a place where I am trying to make a platform decision, and since this experience happened I have subscribed to the live feed for this forum and to as many relevant Adobe blogs, news feeds, etc. as I could find. I did this to get a feel for how well Adobe is supporting mobile development on the AIR platform, something increasingly important given recent decisions.
    My perception so far is quite poor, especially for a recently released product, i.e. the 4.6 release - in fact, the release that finally addresses performance well enough to make AIR mobile development a risk-free decision. You would expect Adobe to be all over boards like this, with core developers and platform experts contributing actively. My perception, rightly or wrongly, is of a community trying to support itself without much help or clear communication from Adobe. In fact, if you look at the articles coming out of Adobe recently, it's all PhoneGap, HTML5, etc. It does not fill you with confidence for the future.
    To be clear - I have had a great experience with actionscript, flex etc and as a company we have developed the backend portion of our platform solely on Flex. I don't believe that we could have done it any other way and even now when I look at the alternatives for web development I feel vindicated in our decision.
    However, this rant is caused by a genuine frustration and fear. I don't expect this to get a meaningful response but maybe if there are enough voices it will create an overall improvement.
    Sean

  • Free Stmt Handle NOT release Memory allocated inDefineByPos[LongRaw]

    My question is: why does freeing the statement handle not release the memory OCI allocated during OCIDefineByPos for the data type LONG RAW?
    Please note the highlighted (bold) entries in the dtrace logging. Ptr is the pointer returned by malloc; Size is the memory size allocated.
    I use OCI to fetch records; each round I fetch 1000 records. I found that memory increases round by round, so I used dtrace to probe the memory malloc/free calls. At the end of each round I free the handle of the SQL statement. According to the documentation, all the sub-handles belonging to the statement handle should be freed as well. But I noticed that the memory allocated during OCIDefineByPos for LONG RAW does not get released.
    1. The SQL sent to the DB each round is the same.
    2. I do the "Define" each round, so it's actually a "re-Define" each round.
    The length of the LONG RAW is 512 KB. Here's the dtrace result when I call OCIDefineByPos. There is a 2 MB memory allocation, and I don't know the reason.
    CPU ID FUNCTION:NAME
    3 45971 free:entry Ptr=0x247652d0 Size=0 TS=1568035638893113 FreeTime=2008 Jul 20 23:26:09
    libc.so.1`free
    libclntsh.so.10.1`sktsfFree+0x18
    libclntsh.so.10.1`kpummfpg+0xb6
    libclntsh.so.10.1`kghfrempty+0x17c
    libclntsh.so.10.1`kghgex+0x13c
    libclntsh.so.10.1`kghfnd+0x28a
    libclntsh.so.10.1`kghalo+0x669
    libclntsh.so.10.1`kghgex+0x1e6
    libclntsh.so.10.1`kghfnd+0x28a
    libclntsh.so.10.1`kghalo+0x669
    libclntsh.so.10.1`kghgex+0x1e6
    libclntsh.so.10.1`kghfnd+0x28a
    libclntsh.so.10.1`kghalo+0x669
    libclntsh.so.10.1`kpuhhalo+0x1fd
    libclntsh.so.10.1`kpuex_reallocTempBufOnly+0x5f
    libclntsh.so.10.1`kpuertb_reallocTempBuf+0x36
    libclntsh.so.10.1`kpudefn+0x2d1
    libclntsh.so.10.1`kpudfn+0x39e
    libclntsh.so.10.1`OCIDefineByPos+0x38
    scrubber`_ZN29clsDatabasePhysicalHostOracle6DefineEN7voyager15eBayTypeDefEnumEiPhiPs11EnumCharSet+0x17f
    3 45966 malloc:return Ptr=0x252ad100 Size=524356 TS=1568035638916654 AllocTime=2008 Jul 20 23:26:09
    libc.so.1`malloc+0x49
    libclntsh.so.10.1`kpummapg+0xcc
    libclntsh.so.10.1`kghgex+0x1aa
    libclntsh.so.10.1`kghfnd+0x28a
    libclntsh.so.10.1`kghalo+0x669
    libclntsh.so.10.1`kghgex+0x1e6
    libclntsh.so.10.1`kghfnd+0x28a
    libclntsh.so.10.1`kghalo+0x669
    libclntsh.so.10.1`kghgex+0x1e6
    libclntsh.so.10.1`kghfnd+0x28a
    libclntsh.so.10.1`kghalo+0x669
    libclntsh.so.10.1`kpuhhalo+0x1fd
    libclntsh.so.10.1`kpuex_reallocTempBufOnly+0x5f
    libclntsh.so.10.1`kpuertb_reallocTempBuf+0x36
    libclntsh.so.10.1`kpudefn+0x2d1
    libclntsh.so.10.1`kpudfn+0x39e
    libclntsh.so.10.1`OCIDefineByPos+0x38
    scrubber`_ZN29clsDatabasePhysicalHostOracle6DefineEN7voyager15eBayTypeDefEnumEiPhiPs11EnumCharSet+0x17f
    scrubber`_ZN18clsDatabaseManager6DefineEN7voyager15eBayTypeDefEnumEiPhiPs+0x13c
    scrubber`_ZN17clsDatabaseOracle13DefineLongRawEiPhiPs+0x2f
    3 45971 free:entry Ptr=0x2476ab30 Size=0 TS=1568035639035363 FreeTime=2008 Jul 20 23:26:09
    libc.so.1`free
    libclntsh.so.10.1`sktsfFree+0x18
    libclntsh.so.10.1`kpummfpg+0xb6
    libclntsh.so.10.1`kghfrempty+0x17c
    libclntsh.so.10.1`kghgex+0x13c
    libclntsh.so.10.1`kghfnd+0x28a
    libclntsh.so.10.1`kghalo+0x669
    libclntsh.so.10.1`kghgex+0x1e6
    libclntsh.so.10.1`kghfnd+0x28a
    libclntsh.so.10.1`kghalo+0x669
    libclntsh.so.10.1`kghgex+0x1e6
    libclntsh.so.10.1`kghfnd+0x28a
    libclntsh.so.10.1`kghalo+0x669
    libclntsh.so.10.1`kpuhhalo+0x1fd
    libclntsh.so.10.1`kpuertb_reallocTempBuf+0x8f
    libclntsh.so.10.1`kpudefn+0x2d1
    libclntsh.so.10.1`kpudfn+0x39e
    libclntsh.so.10.1`OCIDefineByPos+0x38
    scrubber`_ZN29clsDatabasePhysicalHostOracle6DefineEN7voyager15eBayTypeDefEnumEiPhiPs11EnumCharSet+0x17f
    scrubber`_ZN18clsDatabaseManager6DefineEN7voyager15eBayTypeDefEnumEiPhiPs+0x13c
    3 45966 malloc:return Ptr=0x2532d150 Size=2097220 TS=1568035639040090 AllocTime=2008 Jul 20 23:26:09
    libc.so.1`malloc+0x49
    libclntsh.so.10.1`kpummapg+0xcc
    libclntsh.so.10.1`kghgex+0x1aa
    libclntsh.so.10.1`kghfnd+0x28a
    libclntsh.so.10.1`kghalo+0x669
    libclntsh.so.10.1`kghgex+0x1e6
    libclntsh.so.10.1`kghfnd+0x28a
    libclntsh.so.10.1`kghalo+0x669
    libclntsh.so.10.1`kghgex+0x1e6
    libclntsh.so.10.1`kghfnd+0x28a
    libclntsh.so.10.1`kghalo+0x669
    libclntsh.so.10.1`kpuhhalo+0x1fd
    libclntsh.so.10.1`kpuertb_reallocTempBuf+0x8f
    libclntsh.so.10.1`kpudefn+0x2d1
    libclntsh.so.10.1`kpudfn+0x39e
    libclntsh.so.10.1`OCIDefineByPos+0x38
    scrubber`_ZN29clsDatabasePhysicalHostOracle6DefineEN7voyager15eBayTypeDefEnumEiPhiPs11EnumCharSet+0x17f
    scrubber`_ZN18clsDatabaseManager6DefineEN7voyager15eBayTypeDefEnumEiPhiPs+0x13c
    scrubber`_ZN17clsDatabaseOracle13DefineLongRawEiPhiPs+0x2f
    scrubber`_ZN5dblib11DbExtractor18DefineLongRawFieldEiPhiPs+0x2b

    Assumption: you are using OCIHandleFree((dvoid*)DBctx->stmthp, (ub4)OCI_HTYPE_STMT);
    There are two things going on here: the LONG RAW mapping and the release of memory.
    Unlike the other data types, LONG RAW memory mapping is different, since the data type requires larger memory chunks.
    45971 free:entry Ptr=0x2476ab30 Size=0 TS=1568035639035363 FreeTime=2008 Jul 20 23:26:09
    libc.so.1`free
    Since the statement above makes it clear that free is called, I suspect the memory leak is in another area of the code (such as native storage).
    Moreover, the standard OS behavior is not to release heap/stack memory back to the system until the process exits, even when free() is called by the process; this is done for memory-management performance. Having said that, it should not increase memory usage drastically when the memory is not required.
