Queue of arrays without dynamic memory allocation

Hey folks,
I'm working on optimizing a timing-critical VI. This VI is the
producer in a producer/consumer architecture. I'm trying to populate
a queue from a file as efficiently as possible. My
current plan of attack is:
- read a block of data from the file and populate a (pre-allocated) array.
- add the array (always of the same size) to a queue with a defined maximum size
(e.g. 50 elements)
- this runs in a while loop, as in the standard producer/consumer model.
To improve performance I would like to ensure that the queue performs no
dynamic memory allocation. From what I understand this is straightforward
when the queue's data type is a scalar (e.g. double, int). However, since
the size of an array can vary, does that mean any queue of arrays will
always allocate memory dynamically? Is there a way to ensure that the queue
always reuses the same memory, as in a circular buffer?
Thanks,
Steve
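
A minimal sketch, in C++ rather than LabVIEW, of the behaviour being asked for: a bounded queue of fixed-size blocks that allocates all of its storage up front and afterwards only copies into pre-allocated slots. The class and the sizes involved are illustrative assumptions, not LabVIEW's queue API.

#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Fixed-capacity ring buffer of fixed-size blocks: every buffer is allocated in
// the constructor; push/pop afterwards only copy data and move indices.
// (Synchronization is omitted; a real producer/consumer pair would add a lock.)
class BlockQueue {
public:
    BlockQueue(std::size_t slots, std::size_t blockSize)
        : storage_(slots, std::vector<double>(blockSize)), blockSize_(blockSize) {}

    bool push(const std::vector<double>& block) {            // producer side
        assert(block.size() == blockSize_);
        if (count_ == storage_.size()) return false;         // full: caller waits or drops
        std::copy(block.begin(), block.end(), storage_[tail_].begin());
        tail_ = (tail_ + 1) % storage_.size();
        ++count_;
        return true;
    }

    bool pop(std::vector<double>& out) {                     // consumer side
        assert(out.size() == blockSize_);
        if (count_ == 0) return false;                       // empty
        std::copy(storage_[head_].begin(), storage_[head_].end(), out.begin());
        head_ = (head_ + 1) % storage_.size();
        --count_;
        return true;
    }

private:
    std::vector<std::vector<double>> storage_;   // the slots, each of fixed block size
    std::size_t blockSize_;
    std::size_t head_ = 0, tail_ = 0, count_ = 0;
};

Once the constructor has run, nothing in push or pop touches the allocator, which is the property the question is after.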


Similar Messages

  • Templates and Dynamic Memory Allocation

    Hi, I was reading a detailed article about templates and I came across the following paragraph:
    template<class T, size_t N>
    class Stack {
        T data[N]; // Fixed capacity is N
        size_t count;
    public:
        void push(const T& t);
    };
    "You must provide a compile-time constant value for the parameter N when you request an instance of this template, such as *Stack<int, 100> myFixedStack;*
    Because the value of N is known at compile time, the underlying array (data) can be placed on the run-time stack instead of on the free store.
    This can improve runtime performance by avoiding the overhead associated with dynamic memory allocation."
    Now, in the above paragraph, what does
    "This can improve runtime performance by avoiding the overhead associated with dynamic memory allocation." mean? And what does the template overhead mean?
    I am a bit puzzled and I would really appreciate it if someone could explain to me what this sentence means. Thanks.

    The run-time memory model of a C or C++ program consists of statically allocated data, automatically allocated data, and dynamically allocated data.
    Data objects (e.g. variables) declared at namespace scope (which includes global scope) are statically allocated. Data objects local to a function that are declared static are also statically allocated. Static allocation means the storage for the data is available when the program is loaded, even before it begins to run. The data remains allocated until after the program exits.
    Data objects local to a function that are not declared static are automatically allocated when the function starts to run. Example:
    int foo() { int i; ... }
    Variable i does not exist until function foo begins to run, at which time space for it appears automatically. Each new invocation of foo gets its own location for i, independent of other invocations of foo. Automatic allocation is usually referred to as stack allocation, since that is the usual implementation method: an area of storage that works like a stack, referenced by a dedicated machine register. Allocating the automatic data consists of adding (or subtracting) a value to the stack register. Popping the stack involves only subtracting (or adding) a value to the stack register. When the function exits, the stack is popped, releasing storage for all its automatic data.
    Dynamically allocated storage is acquired by an explicit use of a new-expression, or a call to an allocation function like malloc(). Example:
    int* ip = new int[100]; // allocate space for 100 integers
    double* id = (double*)malloc(100*sizeof(double)); // allocate space for 100 doubles
    Dynamic storage is not released until you release it explicitly via a delete-expression or a call to free(). Managing the "heap", the area from where dynamic storage is acquired, and to which it is released, can be quite time-consuming.
    Your example of a Stack class (not to be confused with the program stack that is part of the C or C++ implementation) uses a fixed-size (that is, fixed at the point of template instance creation) automatically-allocated array to act as a stack data type. It has the advantage of taking zero time to allocate and release the space for the array. It has the disadvantages of any fixed-size array: it can waste space, or result in a program failure when you try to put N+1 objects into it, and it cannot be re-sized once created.
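
    A short sketch of the contrast being drawn here (the body of Stack is filled in with assumed details; the quoted article only declares push):

    #include <cstddef>

    template<class T, size_t N>
    class Stack {
        T data[N];          // fixed capacity N, stored inside the object itself
        size_t count = 0;
    public:
        void push(const T& t) { data[count++] = t; }   // no overflow check, for brevity
    };

    int main() {
        Stack<int, 100> myFixedStack;                  // automatic allocation: no call into the heap manager
        myFixedStack.push(42);

        Stack<int, 100>* p = new Stack<int, 100>();    // dynamic allocation: pays the new/delete overhead
        p->push(42);
        delete p;
        return 0;
    }

    The first object lives on the program stack, so creating and destroying it costs essentially nothing; the second goes through the heap manager, which is the overhead the article refers to.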

  • Problem in dynamic memory allocation

    Hi,
    My name is Ravi Kumar. I'm working on a project to improve organizational performance, which includes a Visual Studio simulation. I'm using dynamic memory allocation to allocate space for the arrays used in the program. Now I have a run-time error
    and I can't work out where it is going wrong. Can someone please help me regarding this issue?
    If anyone interested in helping please leave a comment with your email id so that I will share the whole project folder.
    Thanks,
    Ravi

    Hi Ravi,
    Don is right that this forum is for discussing questions and feedback about the Microsoft Office client.
    Please post in the MSDN forum for Visual Studio, where you can get more experienced responses:
    https://social.msdn.microsoft.com/Forums/en-US/home?forum=visualstudiogeneral
    The reason we recommend posting in the appropriate forum is that you will reach the most qualified pool of respondents, and other partners who read the forums regularly can either share their knowledge or learn from your interaction with us. Thank you for your understanding.
    Regards,
    Ethan Hua
    TechNet Community Support
    It's recommended to download and install the Office Configuration Analyzer Tool (OffCAT), which is developed by Microsoft Support teams. Once the tool is installed, you can run it at any time to scan for hundreds of known issues in Office programs.

  • Degree in Dynamic memory allocation need help!

    I'm a senior-year Computer Science student at the University of Bucharest. I'm looking for some specifications for my degree work on dynamic memory allocation. In particular I was looking for specs about how the JVM heap and garbage collector work. Can you please direct me to someone who can help me find the necessary specs?
    Thank you.

    [http://java.sun.com/javase/technologies/hotspot/]
    ~

  • Dynamic memory allocation failure

    Dear reader,
    We sometimes have a problem where our Windows 2012 R2 RDS virtual servers, which reside on Windows 2012 R2 Hyper-V hosts, lose their dynamic memory and are left with only their startup memory to work with. Users start complaining that things are very slow, etc.
    If I check several screens (RDS Broker load balancing, Hyper-V Manager, Cluster Manager and the VM's Task Manager), it's clear that the VM only has its startup memory allocated. I'm not sure if this happens instantly or immediately after the nightly reboot.
    To resolve the problem we have to call all users on the VM where it happens and ask them to log off (if they are even able to), and then we reboot the machine.
    I have checked the logs from the machine the VM resides on and the logs from the VM itself, but I cannot find anything. We also have a lot of Windows 2008 R2 VMs with dynamic memory, but none of those have ever had this problem.
    I have searched the internet, but so far it seems we are the only ones.
    Can anyone give me a lead to troubleshoot this?
    Best regards,
    Ruud Boersma
    MCITP Enterprise administrator

    Hi all,
    I'm going to be "one of those people" who revives dead posts for something that is relevant but obviously not fixed... sorry in advance!
    We have the exact same situation: a bunch of RDSH guests with dynamic memory turned on (60+ of them). Every day between 1 and 8 of them fail to allocate dynamic memory and get stuck on their startup RAM amount. This really hurts our users at peak
    times.
    I have engaged our TAM and have raised a case with PSS. Usual story: "you're the only one with this problem", which obviously isn't true.
    So, we have tons of free RAM on the hosts, 600GB+ on most of them; the issue affects RDSH hosts at random, across multiple hosts and clusters.
    The screenshots attached are of one of our hosts from this morning. It has 8GB startup, 8GB minimum and 32GB maximum RAM configured, with a 23% buffer; the host has 752GB of RAM free. Notice how the perf counter "Hyper-V Dynamic Memory Integration Service"
    is reporting "0" when it should be reporting "32768". Also, under Task Manager on the VM, we are missing "Maximum memory", which should be just below "Hardware reserved" in the bottom right-hand corner.
    It looks like the balloon driver is being delayed at boot time, so we are going to XPerf all the servers in the hope that we can catch the critter. It's an unusual problem.
    We only have the Acrobat PDF viewer, Word viewer, Excel viewer and two custom .NET applications installed on the guests. Some of the servers are also just dumb RDSH hosts, with no connection broker configured, using an F5 load balancer for load distribution
    and session management. All guests are 2012 R2 patched up to March 2015, and Integration Services are installed and up to date (it's an enlightened OS, remember).

  • Crash the dynamic memory allocation

    Hi,
    I am new to Java. My professor asked us to crash a Java program: write simple programs that allocate dynamic memory in C++ and Java, and push the limits to see how much allocation causes the programs to crash.
    How do I implement this in Java? Can anyone help out here?
    Thanks in advance.

    Write a program that allocates larger and larger objects until it crashes. Make sure it prints the sizes of the objects as it runs.
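
    As a rough illustration of that approach, here is a minimal C++ sketch (the Java version would be analogous, allocating ever larger arrays until an OutOfMemoryError is thrown); the starting size and growth factor are arbitrary choices:

    #include <cstdio>
    #include <cstdlib>
    #include <new>

    int main() {
        std::size_t size = 1u << 20;                // start with 1 MiB
        for (;;) {
            char* block = nullptr;
            try {
                block = new char[size];             // dynamic allocation
            } catch (const std::bad_alloc&) {
                std::printf("allocation of %zu bytes failed\n", size);
                return 1;                           // the point where allocation gives out
            }
            std::printf("allocated %zu bytes\n", size);
            delete[] block;                         // release before trying a larger block
            size *= 2;                              // push the limit
        }
    }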

  • Dynamic memory allocation on HP-UX for multiple instances on one host

    Hi everyone,
    I was wondering what the possibilities are nowadays for running multiple SAP instances on one very large host with regard to resource sharing. Normally, for each instance, using PHYS_MEMSIZE etc. you have to set the memory to a fixed size and then optimize it.
    Preferably we would like the memory to be allocated based on actual usage. Is that possible at all? On HP-UX? Using third-party techniques?
    Thank you
    Marcel Rabe

    Hello Marcel,
    As Juan said you may not be able to change the parameters at runtime.
    The only parameters that can be dynamically switched are:
    ztta/roll_first
    ztta/roll_extension
    ztta/roll_area
    abap/heap_area_dia
    abap/heap_area_nondia
    abap/heap_area_total
    em/stat_log_size_MB
    em/stat_log_timeout
    These parameters would put a cap on memory allocations; however, they wouldn't help increase the total addressable memory area.
    I would suggest that you consider Adaptive Computing for dynamic use of resources.
    Adaptive Computing
    Regards,
    Siddhesh

  • Dynamic memory allocation

    Hi guys, does anyone know whether I can use dynamic memory allocation on a Real-Time system with the "Call Library Function" node? The DLL is programmed in C. Thanks.
    Machman

    You certainly can. LabVIEW Real-Time functionality is really not much different from what you can do on Windows. The only difference is that you can now assign priority and timing to your execution loops; essentially, you have determinism with RT, whereas with Windows you never have a guarantee.
    Thus, as far as calling a DLL goes, you can do it the same way you would in LabVIEW for Windows. Here's a KnowledgeBase article on how to do this if you're not already familiar.
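
    For illustration only (the exported names and the Windows-style export macro below are assumptions, not from the post or the article), a C-compatible DLL function that allocates dynamically might look like this and can be called through the Call Library Function Node in the same way on Windows and RT targets:

    // Hypothetical DLL source, kept C-compatible but built as C++ here.
    #include <cstdlib>

    extern "C" __declspec(dllexport) double* allocate_buffer(int n)
    {
        // Dynamic allocation happens inside the DLL; LabVIEW only receives the pointer.
        return static_cast<double*>(std::malloc(static_cast<std::size_t>(n) * sizeof(double)));
    }

    extern "C" __declspec(dllexport) void free_buffer(double* buf)
    {
        std::free(buf);   // whatever the DLL allocates, it should also provide a matching release
    }

    The usual caution on RT targets is only about determinism: malloc/free calls can take a variable amount of time, so they are best kept out of the time-critical loop.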
    Cheers,
    Emilie K. | Applications Engineer | National Instruments

  • Why doesn't the debugger follow dynamic memory allocations well?

    Here's an example of a code block that doesn't seem to work right with the CVI compiler/debugger, but works fine with other C compilers/debuggers:
    struct arg_int* arg_intn(const char* shortopts,
                             const char* longopts,
                             const char *datatype,
                             int mincount,
                             int maxcount,
                             const char *glossary)
    {
        size_t nbytes;
        struct arg_int *result;

        /* foolproof things by ensuring maxcount is not less than mincount */
        maxcount = (maxcount<mincount) ? mincount : maxcount;

        nbytes = sizeof(struct arg_int)     /* storage for struct arg_int */
               + maxcount * sizeof(int);    /* storage for ival[maxcount] array */

        result = (struct arg_int*)malloc(nbytes);
        if (result)
        {
            /* init the arg_hdr struct */
            result->hdr.flag = ARG_HASVALUE;
            result->hdr.shortopts = shortopts;
            result->hdr.longopts = longopts;
            result->hdr.datatype = datatype ? datatype : "<int>";
            result->hdr.glossary = glossary;
            result->hdr.mincount = mincount;
            result->hdr.maxcount = maxcount;
            result->hdr.parent = result;
            result->hdr.resetfn = (arg_resetfn*)resetfn;
            result->hdr.scanfn = (arg_scanfn*)scanfn;
            result->hdr.checkfn = (arg_checkfn*)checkfn;
            result->hdr.errorfn = (arg_errorfn*)errorfn;

            /* store the ival[maxcount] array immediately after the arg_int struct */
            result->ival = (int*)(result+1);
            result->count = 0;
        }
        /*printf("arg_intn() returns %p\n",result);*/
        return result;
    }
    When I try to dereference this structure's 'ival[0]' the debugger constantly complains of a fatal runtime error and declares it out of the array bounds.
    This is from the argtable2 open source library available at http://argtable.sourceforge.net/ if you want to try and reproduce it.  I'm using CVI 2010 SP1.

    Unfortunately, you have run into one of the inherent limitations of CVI's run-time checking. Even though it is perfectly legal in C and somewhat common practice, our run-time checking doesn't like it when you conceptually split up a block of memory and treat it as two or more separate blocks. 
    While I cannot fix the problem in our run-time checking without breaking other use cases, I can explain what's causing the error and how you can work around it.
    When you malloc memory, we assume that the memory block is going to hold one or more instances of the type to which you're casting the memory block. In this case, CVI is treating the new memory block as an array of arg_int structures. But your new memory block is not big enough to hold more than one instance of struct arg_int, so we implicitly truncate the memory block to sizeof(struct arg_int). Because of the implicit truncation, s->values is now pointing past the end of the memory block and any access will result in an error (s refers to the simplified example below).
    #include <ansi_c.h>

    struct arg_int {
        char some_large_block[64];
        int count;
        int *values;
    };

    int main (int argc, char *argv[])
    {
        int i, n = 4;
        struct arg_int *s = malloc(sizeof *s + n * sizeof *s->values);
        s->count = n;
        s->values = (int *)(s+1);
        for (i = 0; i < n; ++i)
            s->values[i] = i;
        return 0;
    }
    You can avoid the implicit truncation in the original code by assigning to a (void*) first. This retains the original block size by keeping the actual type of the data in the memory block in limbo. Subsequent casts do not truncate the block. We truncate only when the cast occurs implicitly as part of a malloc. s->values points to the remainder of the block and you can access it freely, but you do not get any run-time checking on it.
    #include <ansi_c.h>

    struct arg_int {
        char some_large_block[64];
        int count;
        int *values;
    };

    int main (int argc, char *argv[])
    {
        int i, n = 4;
        struct arg_int *s;
        void *memory = malloc(sizeof *s + n * sizeof *s->values);
        s = memory;
        s->count = n;
        s->values = (int *)(s+1);
        for (i = 0; i < n; ++i)
            s->values[i] = i;
        return 0;
    }
    If you want full run-time checking on s->values, you have to allocate the two components separately.
    #include <ansi_c.h>

    struct arg_int {
        char some_large_block[64];
        int count;
        int *values;
    };

    int main (int argc, char *argv[])
    {
        int i, n = 4;
        struct arg_int *s = malloc(sizeof *s);
        s->count = n;
        s->values = malloc(n * sizeof *s->values);
        for (i = 0; i < n; ++i)
            s->values[i] = i;
        return 0;
    }
    Another option is to use an incomplete array. An incomplete array is an array of unspecified or zero size at the end of a structure definition. CVI implicitly grows the array to fill up the rest of the allocated memory block. You do not get run-time checking for the array.
    #include <ansi_c.h>

    struct arg_int {
        char some_large_block[64];
        int count;
        int values[0];
    };

    int main (int argc, char *argv[])
    {
        int i, n = 4;
        struct arg_int *s = malloc(sizeof *s + n * sizeof *s->values);
        s->count = n;
        for (i = 0; i < n; ++i)
            s->values[i] = i;
        return 0;
    }
    You can also disable run-time checking for your memory block by going through a series of casts: http://zone.ni.com/reference/en-XX/help/370051K-01/cvi/disablinguserprotectionforindividualpointer/
    Best regards.

  • Finding dynamic memory allocations in core file

    Hi,
    Is it possible to find out which data structures were allocated by analysing a core file?
    I want to find out which objects are causing the memory to increase, and I have the core file of the program.
    Thanks in advance,
    Paulo

    It's almost impossible. Anyway, it would be pure heuristics - looking at stack contents, finding familiar patterns in heap, etc, etc.

  • Office 2001/Dynamic Memory?

    Do the programs in Office 2001 have Dynamic Memory while using OS 9.2?

    Hi, HappyWarlock -
    Welcome to Apple's Discussions.
    Do the programs in Office 2001 have Dynamic Memory while using OS 9.2?
    Probably not. I don't use Office, but the vast majority of programs in OS 9 do not have the ability to use dynamic memory allocation; a couple of exceptions are the OS itself (Finder), and SimpleText.
    You can change the allocation for the programs in Office manually; the number to change is the Preferred allocation amount.
    Article #18278 - Assigning More Memory to an Application

  • Reliability of protected void finalize() for freeing native dynamic memory

    In different places, I've been reading different things about the specification of garbage collection in Java. From what I've found so far, it is pretty clear that if the garbage collector of the VM used decides to reclaim object X, then X.finalize() is called. However, opinions vary on whether the garbage collector will ever decide to reclaim X.
    I'm using the JNI and, unfortunately, have to keep a few bits in memory on the native side. Even worse, I keep those bits in dynamic memory (allocated with C's malloc). These chunks of memory always correspond to a single object on the Java side, so I thought it would be a good idea to have my objects in the native interface include this bit of code:
    native void free();
    protected void finalize() throws Throwable {
      Exception x = null;
      try {
        free();
        /* whatever other destruction seems useful */
      } catch(Exception e) {
        CentralHandler.handle(this, e);
        x = e;
      } finally {
        try {
          super.finalize();
        } catch(Exception e) {
          CentralHandler.handle(this, e);
          if(x != null)
            throw CentralHandler.combine(x, e);
        }
      }
    }

    I have been told the problem with this is that I can't be sure the garbage collector will ever clear out this object, even when the program comes to an end (inside the VM; when the VM itself is killed, the user's on his own). Does it matter if finalize is never called, and will all my malloc-ed memory just become free with the disappearance of the VM?
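
    For context, a minimal sketch of what the native side of that free() could look like (the Java class name "NativeHolder" and the long field "nativePtr" holding the malloc'd pointer are assumptions made for this example, not details from the post):

    #include <jni.h>
    #include <cstdlib>

    // Native implementation bound to "native void free();" declared in class NativeHolder.
    extern "C" JNIEXPORT void JNICALL
    Java_NativeHolder_free(JNIEnv* env, jobject obj)
    {
        jclass cls = env->GetObjectClass(obj);
        jfieldID fid = env->GetFieldID(cls, "nativePtr", "J");     // long field storing the pointer
        void* p = reinterpret_cast<void*>(env->GetLongField(obj, fid));
        std::free(p);                                              // release the malloc'd block
        env->SetLongField(obj, fid, 0);                            // guard against a double free
    }

    The point made in the reply still stands: calling free() explicitly from Java code, rather than waiting for finalize(), is the only way to be sure the native memory is released while the VM is running; anything still malloc'd when the process exits is reclaimed by the operating system anyway.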

    What you have been told about GC is correct: You should not depend on finalize for anything.
    Assuming yours is a real-world program (and won't be graded), then if you are simply worried about memory being freed up when your program exits: Don't worry about it.

  • 9400 Graphics Memory Allocation

    For the GeForce 9400M chip is it possible to increase the graphics memory allocation if I were to upgrade the system memory?
    In other words: Does the GeForce 9400M chip and Mac OS Leopard+ support dynamic memory allocation for the graphics memory?
    If I increase the system RAM from 2GB to 4GB will the 9400 graphics memory allocation increase from 256MB to 512MB?
    (The Intel graphics chipset supported it)

    No. The 9400 uses 256 MB of RAM, no more.

  • TargetDataLine.stop() causes memory alloc error

    Hi,
    I'm trying to run a Thread that records audio when I press a button. I have no problem starting the thread, but I cannot seem to stop it without receiving memory allocation errors. I'm trying to stop it using targetDataLine.stop() followed by targetDataLine.close().
    I am running Java SE 6 version 1.6.0_29-b11-402, on Mac OS X (I am also using Netbeans IDE 7.1)
    java(20704,0x1100ec000) malloc: *** error for object 0x2f6a6f72706c2e68: pointer being freed was not allocated
    *** set a breakpoint in malloc_error_break to debug
    Java Result: 134
    public AudioThread(String fileName) throws LineUnavailableException {
        soundFile = new File(fileName + ".wav");
        audioFormat = new AudioFormat(8000.0F, 16, 1, true, false);
        dataLineInfo = new DataLine.Info(TargetDataLine.class, audioFormat);
        targetDataLine = (TargetDataLine) AudioSystem.getLine(dataLineInfo);
        audioInputStream = new AudioInputStream(targetDataLine);
        targetDataLine.open(audioFormat);
    }

    @Override
    public void run() {
        try {
            AudioSystem.write(audioInputStream, Type.WAVE, soundFile);
        } catch (IOException ex) {
            Logger.getLogger(AudioThread.class.getName()).log(Level.SEVERE, null, ex);
        }
    }

    public void startThread() throws LineUnavailableException {
        targetDataLine.start();
        super.start();
    }

    public void stopThread() throws IOException {
        targetDataLine.stop();
        targetDataLine.close();
    }
    Thanks,
    Lucas

    It seems to happen after the close().
    I tried moving targetDataLine.close() to the end of the run() function (after AudioSystem.write()), but no such luck.
    public void run() {
        try {
            AudioSystem.write(audioInputStream, Type.WAVE, soundFile);
        } catch (IOException ex) {
            Logger.getLogger(AudioThread.class.getName()).log(Level.SEVERE, null, ex);
        }
        targetDataLine.close();
    }

    public void stopThread() throws IOException {
        targetDataLine.stop();
    ...

  • Queue Memory Allocation Weirdness

    I am at a loss as to how LabVIEW allocates memory in its queues.
    In the example I attached you must open Windows Task Manager and look at the Memory column.
    I'll try to lead you through the steps of the example.
    1.) Make a queue of length 10, data type double array.
    2.) Fill the queue with 10 arrays of 1M points each and watch the memory increase in 8 MB steps in the Task Manager. (This makes complete sense to me.)
    3.) Empty the queue of the 10 elements. Note that the memory stays constant in the Task Manager. (This also makes sense to me; LabVIEW does not release the memory in case it is needed later.)
    4.) Fill the queue with empty arrays and watch the memory decrease in 8 MB steps in the Task Manager. (This also makes complete sense to me.)
    5.) Put one array element of 1M doubles in the queue, then dequeue the element. Now the memory increases by 8 MB per iteration for all 10 iterations. Note that if you increase the number of iterations of this last loop, the total memory used never goes beyond 80 MB, the original allocation.
    Why is LabVIEW using all the memory initially allocated to the queue when there is only one element in it at any time?
    I ask this because at points in my real program I want to limit my queue to a single element to save memory, but this example shows that will require a workaround.
    If anybody has any insights, I greatly appreciate it.
    Cheers,
    Andrew
    Attachments:
    QueueMemoryWeirdness.vi ‏23 KB
    QueueMemoryWeirdness.png ‏64 KB
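
    Purely as an illustrative analogy, and not a description of LabVIEW's internals: a bounded queue whose slots keep their buffers after a dequeue would show exactly the Task Manager pattern described in step 5. A C++ sketch of that behaviour, using std::vector capacity as the retained buffer:

    #include <cstddef>
    #include <cstdio>
    #include <vector>

    int main() {
        const std::size_t kSlots = 10;             // queue length from the example
        const std::size_t kPoints = 1000000;       // 1M doubles ~ 8 MB per slot
        std::vector<std::vector<double>> slots(kSlots);   // slots start out empty

        for (int i = 0; i < 25; ++i) {
            std::size_t idx = i % kSlots;          // round-robin reuse of slots
            slots[idx].assign(kPoints, 0.0);       // "enqueue": slot grows to ~8 MB the first time it is used
            slots[idx].clear();                    // "dequeue": logically empty, but capacity is retained
            std::printf("iteration %d: slot %zu still owns %zu bytes\n",
                        i, idx, slots[idx].capacity() * sizeof(double));
        }
        // After 10 iterations every slot owns an ~8 MB buffer, so the process size
        // plateaus at ~80 MB even though only one element was ever "in the queue".
        return 0;
    }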

    Nathand,
    Thanks for your advice, although I am having a hard time following it. I'm trying to put the words into images in my head, but presently it is not working. It has been a long week.
    As far as the real program goes, a brief description is as follows:
    I have a bunch of A/D boards in a PXI chassis; the slow part of my program is writing to a file. After the data is processed, I store the values in a queue, where they are passed to a "consumer loop" that saves the data. If I have a single-element queue, then my DAQ acquisition speed is limited by how fast the files can be saved. Thus, I have a finite-length queue to buffer the data while it gets saved.
    The person that I work for likes to store millions of double-precision points, so memory can grow fast. Typically we only get 2M total points from each board, and my queue can grow to 500MB in size. However, there are times when my colleague wants to acquire 10M or more points from each board. In this case, I want to limit the queue size to 1. Since this cannot be done dynamically while the program is running, I am using a semaphore-like construct to limit my queue to 1 element. (That is, I wait until my file is saved before adding another element.) When I tried to do this, my memory usage kept growing, similar to the example I provided earlier.
    I came up with a kludge-type solution: fill the queue to N-1 elements with empty arrays, then insert my real data set at the front. It works, but I am not happy with it.
    On this board there are plenty of people like yourself who are much more clever than me, and I am hoping they can figure out what is going on here. (I'm still trying to figure out your advice.)
    Thanks again for your help.
    Cheers,
    mcduff
