Memory Leak in multidimensional int array?

I have a simple class
MyMatrix.h
@interface MyMatrix : NSObject {
    int twoDMatrix[4][4];
    int threeDMatrix[4][4][2];
}
@end

MyMatrix.m
- (id)init {
    if ((self = [super init])) {
        twoDMatrix[0][0] = 0;
        // and so on
        twoDMatrix[3][3] = 1;
        threeDMatrix[0][0][0] = 0;
        // and so on
        threeDMatrix[3][3][1] = 1;
        // and so on
    }
    return self;
}

- (void)dealloc {
    [super dealloc];
}
@end
Do you think the MyMatrix class will cause a memory leak?

No. The matrices are fixed-size C arrays stored inline in the object, so nothing is allocated separately for them and there is nothing extra to release in dealloc.

Similar Messages

  • [Bug?] X-Control Memory Leak with Large Data Array

    [LV2009]
    [Cross-posted to LAVA]
    I have found that if I pass a large data array (~4MB in this example) into an X-Control, it causes massive memory allocations (1 GB+).
    Is this a known issue?
    The X-Control in the video was created, then the Data.ctl was changed to 2D Array - it has not been edited in any other way.
    I also compare the allocations to that of a native 2D Array (which is only ~4MB).
    Note: I jiggled the Windows Task Manager about so that JING would update correctly; it's a bit slow, but the memory usage essentially just keeps rolling up and doesn't stop.
    Demo code attached.
    Cheers
    -JG
    Certified LabVIEW Architect * LabVIEW Champion
    Attachments:
    X Control Bug [LV2009].zip ‏42 KB

    Hi Jon (cool name) 
    Thank you very much for your reply. We came to this conclusion in the cross post and it is good to have it confirmed by LabVIEW R&D. Your response is also similar to that of my AE which I got this morning as well - see below:
    Note: Your reference number is included in the Subject field of this
    message. It is very important that you do not remove or modify this
    reference number, or your message may be returned to you.
    Hi Jon,
    You probably found some information on the forum already. The US engineer has gotten back to us and, after conducting some tests, he said that unfortunately this is expected behaviour. This is what he replied:
    "XControls use event structures in the background. In particular, the Data Change event is called when the value of the XControl changes (writing to the terminal, a local variable, or the value change property). What is happening in this case is that the XControl is being called too fast with a large set of data, so the event structure queues up the events and their data and a memory leak is produced. It is, unfortunately, expected behaviour. The main workaround for the customer in this case is to not call the XControl as often. Another possibility is to use the Synchronous Display property to defer updates to the XControl; this might slow down the leak."
    He would also like to know if you can provide more details about how you are using the XControl; perhaps there is a better way. Please refer to the link below for synchronous display. Thank you.
    http://zone.ni.com/reference/en-XX/help/371361G-01/lvprop/control_synchronous_display/
    In my application I updated the X-Control at 1 Hz and it allocated memory at MBs/s, up to 1+ GB before it crashed, all within a few hours. That is why I called it a leak. I am really worried that if this CAR gets killed, there will still be a lingering issue that makes using X-Controls a major problem under the above conditions. I have had to pull two sets of libraries from my code because of this - when they got replaced with native LabVIEW controls the leak went away (but I lost reuse and encapsulation etc...).
    Anyway, I really want to keep using X-Controls (now and in the future) as I like all their other aspects. If you do not consider this a leak, can a different CAR be raised that may modify the existing behaviour? I offered the suggestion (in the cross-post) that the data be ignored rather than queued, similar to Christian's idea, but for X-Controls. Maybe as an option?
    I look forward to discussing this with you further.
    Regards
    -Jon
    Certified LabVIEW Architect * LabVIEW Champion

  • Memory leak when when returning array of strings to calling vb program

    Hi,
    I have a VBA script running in Excel 2010 which invokes a java swing app using the ActiveX bridge.
    Below is a short snippet of what happens
    Set Requester = CreateObject("ERequester.Bean.1")
    Set RequestResult = Requester.executeRequest(some parameters.....)
    Dim dArray
    dArray = RequestResult.getResultStringArray()
    Erase dArray
    At the Java side (using jdk1.6.0_26) the getResultStringArray() function is as given below
    public String[] getResultStringArray() {
        String[] ticValues = new String[10000];
        // some code which creates local String objects and sets one at each position in the array
        return ticValues;
    }
    After calling RequestResult.getResultStringArray() and releasing the array on the VBA side, the strings are still present in JVM memory and are not garbage collected.
    As the function gets called multiple times, the JVM continuously keeps increasing its memory footprint until it reaches the max limit and causes an OutOfMemoryError.
    Any idea what may be going wrong?
    Thanks in advance

    When the program starts, you could create an array of all control references (from the pane property "Controls[]") and get their labels. To get a reference from a label, just search the label array and index into the array of references to get the corresponding reference.
    Still, this method is flawed.
    You can have multiple controls with the same name, which would confuse things.
    How about controls within clusters, etc.?
    I think somebody wrote a tool once for this, but I don't remember where I saw it.
    LabVIEW Champion . Do more with less code and in less time .

  • Memory leak in String(byte[] bytes, int offset, int length)

    Has anyone run into a memory leak problem using the String(byte[] bytes, int offset, int length) constructor? I am using it to convert a byte array to a string, and I am seeing a memory leak with it. Any idea what is going on?

    Hi,
    If you post in the Native Methods forum I assume you are using this constructor on the native side.
    Be aware that getting a char * from a jstring allocates memory that you must free before returning from native code, with env->ReleaseStringUTFChars().
    --Marc (http://jnative.sf.net)

  • Memory Consumption in Multidimensional Arrays

    Hi,
    I've noticed that the memory consumption of multidimensional arrays in Java is sometimes far above one could expect for the amount of data that is being stored. For example, here is a simple program which stores a table containing only integers and reports the memory consumption after it is filled:
    public static void main(String[] args) {
        int tableSize = 1000000;
        int noFields = 10;
        Random rnd3 = new Random();
        int arr[][] = new int[tableSize][noFields];
        for (int i = 0; i < tableSize; i++) {
            for (int j = 0; j < noFields; j++) {
                arr[i][j] = rnd3.nextInt(100);
            }
        }
        Runtime.getRuntime().gc();
        Runtime.getRuntime().gc();
        Runtime.getRuntime().gc();
        // Ensures table's data is still referenced
        System.out.println(arr[rnd3.nextInt(arr.length)]);
        long totalMemory = Runtime.getRuntime().totalMemory();
        long usedMemory = totalMemory - Runtime.getRuntime().freeMemory();
        System.out.println("Total Memory: " + totalMemory/(1024.0*1024) + " MB.");
        System.out.println("Used Memory: " + usedMemory/(1024.0*1024) + " MB.");
    }
    Output:
    Total Memory: 866.1875 MB.
    Used Memory: 62.124053955078125 MB.
    In this case the memory consumption was around 20MB above the expected 38MB required for storing 10M integers. The interesting thing is that the memory consumption varies when the numbers of rows and columns are changed, even though the total amount of items is kept fixed (see below):
    Rows:100; Cols:100000 -> Used Memory: 43.05 MB
    Rows:1000; Cols:10000 -> Used Memory: 43.07 MB
    Rows:10000; Cols:1000 -> Used Memory: 43.24 MB
    Rows:100000; Cols:100 -> Used Memory: 44.96 MB
    Rows:1000000; Cols:10 -> Used Memory: 62.15 MB
    Rows:10000000; Cols:1 -> Used Memory: 192.15 MB
    Any ideas about the reasons for that behavior?
    Thanks,
    Marcelo

    mrnm wrote:
    In this case the memory consumption was around 20MB above the expected 38MB required for storing 10M integers.
    That's only the expected value if you assume that a 2D array of ints is nothing more than a bunch of ints lined up end to end. This is not the case. A "2D" array in Java is really just a plain ol' array whose component type is "reference to array".
    The interesting thing is that the memory consumption varies when the numbers of rows and columns are changed, even though the total amount of items is kept fixed (see below):
    That's because, e.g., new int[200][100] creates 200 array objects (and references to each of them), each of which holds 100 ints, while new int[100][200] creates 100 array objects (and references to each of them), each of which holds 200 ints.
    Edited by: jverd on Feb 24, 2010 11:17 AM
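    To put rough numbers on that point, here is a small back-of-the-envelope estimate (the ~16-byte per-row array header and 8-byte per-row reference are assumptions; the exact overhead depends on the JVM, pointer compression, and alignment) that reproduces the trend in the measurements above:
    public class TwoDOverheadEstimate {
        // Estimate the footprint of new int[rows][cols]: the ints themselves,
        // plus one array object header per row, plus one reference per row
        // held by the outer array.
        static long estimateBytes(long rows, long cols) {
            long data = rows * cols * 4L;      // the int data
            long rowHeaders = rows * 16L;      // assumed per-row array header
            long rowRefs = rows * 8L;          // assumed per-row reference in the outer array
            return data + rowHeaders + rowRefs;
        }

        public static void main(String[] args) {
            long[][] shapes = {{100, 100000}, {1000, 10000}, {10000, 1000},
                               {100000, 100}, {1000000, 10}, {10000000, 1}};
            for (long[] s : shapes) {
                System.out.printf("Rows:%d Cols:%d -> ~%.2f MB%n",
                        s[0], s[1], estimateBytes(s[0], s[1]) / (1024.0 * 1024));
            }
        }
    }
    The shape with ten million single-element rows is dominated by per-row overhead, which is why it uses several times the memory of the flatter layouts.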

  • Will the memory leak for queue when used in producer and consumer mode in DAQ to transfer different sized array.

    In the data acquisition, I use one loop to poll data from the hardware and another loop to receive the data that the polling loop sends via a queue.
    But the size of the transferred data array may not be the same every time, so the system may allocate a differently sized array and recycle it very frequently.
    Will this cause a memory leak? Or will it slow down the performance, since the array size is not fixed and a newly sized array needs to be created every time?
    Any suggestion or better method?
    Solved!
    Go to Solution.

    As I understand your description, your DAQ loop acquires data with the setting '-1' for samples to read at the DAQmx Read function. This results in the different array sizes.
    Passing those arrays directly to a queue is valid; it does not have a significant drawback in performance (at least as far as I know) and it definitely does not leak memory.
    So the question is more or less:
    Is it valid that your consumer receives different array sizes for analysis? How does your consumer handle those arrays? 
    hope this helps,
    Norbert 
    CEO: What exactly is stopping us from doing this?
    Expert: Geometry
    Marketing Manager: Just ignore it.

  • Memory leak on an array

    I have a global NSMutableArray. I am allocating memory for it in the viewWillAppear method because I want it to be empty every time the view appears. On running Instruments I am getting a memory leak on this array.
    I am releasing this array in dealloc.
    Am I doing something wrong?

    sURBHIT Mehrotra wrote:
    I am allocating memory to it in the viewWillAppear method as I want it to be empty everytime
    How are you doing that? If you are using a method other than "removeAllObjects", then, yes, you are doing it wrong.

  • Pro*c multithreaded application has memory leak

    Hi there,
    I posted this message a week ago in the OCI section, and nobody answered me.
    I am really curious whether my application has a bug or Pro*C has a bug.
    Anyone can compile the sample code and test it easily.
    I made a multithreaded application which executes dynamic SQL, and it works.
    But memory leaks when I execute the SQL statement.
    The more I execute SQL statements (even the same statement), the more memory leaks.
    I checked it with top, the shell command.
    My machine is SUN E450, Solaris 8. Oracle 9.2.0.1
    Compiler : gcc (GCC) 3.2.2
    I changed source code which is from
    $(ORACLE_HOME)/precomp/demo/proc/sample10.pc
    sample10 doesn't need to be multithreaded, but I think it should still work correctly after I change it into a multithreaded application.
    the make file and source code will be placed below.
    I have to figure out the problem.
    Please help
    Thanks in advance,
    the make file is below
    HOME = /user/jkku
    ORA = $(ORACLE_HOME)
    CC = gcc
    PROC = proc
    LC_INCL = -I$(HOME)/work/dbmss/libs/include
    lc_incl = include=$(HOME)/work/dbmss/libs/include
    SYS_INCL =
    sys_incl =
    ORA_INCL = -I. \
    -I$(ORA)/precomp/public \
    -I$(ORA)/rdbms/public \
    -I$(ORA)/rdbms/demo \
    -I$(ORA)/rdbms/pbsql/public \
    -I$(ORA)/network/public \
    -DSLMXMX_ENABLE -DSLTS_ENABLE -D_SVID_GETTOD
    INCLUDES = $(LC_INCL) $(SYS_INCL) $(ORA_INCL)
    includes = $(lc_incl) $(sys_incl)
    LC_LIBS =
    SYS_LIBS = -lpthread -lsocket -lnsl -lrt
    ORA_LIBS = -L$(ORA)/lib/ -lclntsh
    LIBS = $(LC_LIBS) $(SYS_LIBS) $(ORA_LIBS)
    # Define C Compiler flags
    CFLAGS += -D_Solaris64_ -m64
    CFLAGS += -g -D_REENTRANT
    # Define pro*c Compiler flags
    PROCFLAGS += THREADS=YES
    PROCFLAGS += CPOOL=YES
    # Our object files
    PRECOMPS = sample10.c
    OBJS = sample10.o
    .SUFFIXES: .o .c .pc
    .c.o:
    $(CC) -c $(CFLAGS) $(INCLUDES) $*.c
    .pc.c:
    $(PROC) $(PROCFLAGS) $(includes) $*.pc $*.c
    all: sample10
    sample10: $(PRECOMPS) $(OBJS)
    $(CC) $(CFLAGS) -o sample10 $(OBJS) $(LIBS)
    clean:
    rm -rf *.o sample10 sample10.c
    the source code is below, which I changed from the Oracle sample10.pc into a multithreaded application.
    Sample Program 10: Dynamic SQL Method 4
    This program connects you to ORACLE using your username and
    password, then prompts you for a SQL statement. You can enter
    any legal SQL statement. Use regular SQL syntax, not embedded SQL.
    Your statement will be processed. If it is a query, the rows
    fetched are displayed.
    You can enter multi-line statements. The limit is 1023 characters.
    This sample program only processes up to MAX_ITEMS bind variables and
    MAX_ITEMS select-list items. MAX_ITEMS is #defined to be 40.
    #include <stdio.h>
    #include <string.h>
    #include <setjmp.h>
    #include <sqlda.h>
    #include <stdlib.h>
    #include <sqlcpr.h>
    /* Maximum number of select-list items or bind variables. */
    #define MAX_ITEMS 40
    /* Maximum lengths of the names of the
    select-list items or indicator variables. */
    #define MAX_VNAME_LEN 30
    #define MAX_INAME_LEN 30
    #ifndef NULL
    #define NULL 0
    #endif
    /* Prototypes */
    #if defined(__STDC__)
    void sql_error(void);
    int oracle_connect(void);
    int alloc_descriptors(int, int, int);
    int get_dyn_statement(void);
    void set_bind_variables(void);
    void process_select_list(void);
    void help(void);
    #else
    void sql_error(/*_ void _*/);
    int oracle_connect(/*_ void _*/);
    int alloc_descriptors(/*_ int, int, int _*/);
    int get_dyn_statement(/* void _*/);
    void set_bind_variables(/*_ void -*/);
    void process_select_list(/*_ void _*/);
    void help(/*_ void _*/);
    #endif
    char *dml_commands[] = {"SELECT", "select", "INSERT", "insert",
    "UPDATE", "update", "DELETE", "delete"};
    EXEC SQL INCLUDE sqlda;
    EXEC SQL INCLUDE sqlca;
    EXEC SQL BEGIN DECLARE SECTION;
    char dyn_statement[1024];
    EXEC SQL VAR dyn_statement IS STRING(1024);
    EXEC SQL END DECLARE SECTION;
    EXEC ORACLE OPTION (ORACA=YES);
    EXEC ORACLE OPTION (RELEASE_CURSOR=YES);
    SQLDA *bind_dp;
    SQLDA *select_dp;
    /* Define a buffer to hold longjmp state info. */
    jmp_buf jmp_continue;
    char *db_uid="dbmuser/dbmuser@dbmdb";
    sql_context ctx;
    int err_sql;
    enum{
    SQL_SUCC=0,
    SQL_ERR,
    SQL_NOTFOUND,
    SQL_UNIQUE,
    SQL_DISCONNECT,
    SQL_NOTNULL
    int main()
    int i;
    EXEC SQL ENABLE THREADS;
    EXEC SQL WHENEVER SQLERROR DO sql_error();
    EXEC SQL WHENEVER NOT FOUND DO sql_not_found();
    /* Connect to the database. */
    if (connect_database() < 0)
    exit(1);
    EXEC SQL CONTEXT USE :ctx;
    /* Process SQL statements. */
    for (;;)
    /* Allocate memory for the select and bind descriptors. */
    if (alloc_descriptors(MAX_ITEMS, MAX_VNAME_LEN, NAME_LEN) != 0)
    exit(1);
    (void) setjmp(jmp_continue);
    /* Get the statement. Break on "exit". */
    if (get_dyn_statement() != 0)
    break;
    EXEC SQL PREPARE S FROM :dyn_statement;
    EXEC SQL DECLARE C CURSOR FOR S;
    /* Set the bind variables for any placeholders in the
    SQL statement. */
    set_bind_variables();
    /* Open the cursor and execute the statement.
    * If the statement is not a query (SELECT), the
    * statement processing is completed after the
    * OPEN.
    EXEC SQL OPEN C USING DESCRIPTOR bind_dp;
    /* Call the function that processes the select-list.
    * If the statement is not a query, this function
    * just returns, doing nothing.
    process_select_list();
    /* Tell user how many rows processed. */
    for (i = 0; i < 8; i++)
    if (strncmp(dyn_statement, dml_commands, 6) == 0)
    printf("\n\n%d row%c processed.\n", sqlca.sqlerrd[2], sqlca.sqlerrd[2] == 1 ? '\0' : 's');
    break;
    /* Close the cursor. */
    EXEC SQL CLOSE C;
    /* When done, free the memory allocated for pointers in the bind and
    select descriptors. */
    for (i = 0; i < MAX_ITEMS; i++)
    if (bind_dp->V != (char *) 0)
    free(bind_dp->V);
    free(bind_dp->I); /* MAX_ITEMS were allocated. */
    if (select_dp->V != (char *) 0)
    free(select_dp->V);
    free(select_dp->I); /* MAX_ITEMS were allocated. */
    /* Free space used by the descriptors themselves. */
    SQLSQLDAFree(ctx, bind_dp);
    SQLSQLDAFree(ctx, select_dp);
    } /* end of for(;;) statement-processing loop */
    disconnect_database();
    EXEC SQL WHENEVER SQLERROR CONTINUE;
    EXEC SQL COMMIT WORK RELEASE;
    puts("\nHave a good day!\n");
    return;
    * Allocate the BIND and SELECT descriptors using sqlald().
    * Also allocate the pointers to indicator variables
    * in each descriptor. The pointers to the actual bind
    * variables and the select-list items are realloc'ed in
    * the set_bind_variables() or process_select_list()
    * routines. This routine allocates 1 byte for select_dp->V
    * and bind_dp->V, so the realloc will work correctly.
    alloc_descriptors(size, max_vname_len, max_iname_len)
    int size;
    int max_vname_len;
    int max_iname_len;
    int i;
    * The first sqlald parameter determines the maximum number of
    * array elements in each variable in the descriptor. In
    * other words, it determines the maximum number of bind
    * variables or select-list items in the SQL statement.
    * The second parameter determines the maximum length of
    * strings used to hold the names of select-list items
    * or placeholders. The maximum length of column
    * names in ORACLE is 30, but you can allocate more or less
    * as needed.
    * The third parameter determines the maximum length of
    * strings used to hold the names of any indicator
    * variables. To follow ORACLE standards, the maximum
    * length of these should be 30. But, you can allocate
    * more or less as needed.
    if ((bind_dp =
    SQLSQLDAAlloc(ctx, size, max_vname_len, max_iname_len)) ==
    (SQLDA *) 0)
    fprintf(stderr,
    "Cannot allocate memory for bind descriptor.");
    return -1; /* Have to exit in this case. */
    if ((select_dp =
    SQLSQLDAAlloc(ctx, size, max_vname_len, max_iname_len)) == (SQLDA *)
    0)
    fprintf(stderr,
    "Cannot allocate memory for select descriptor.");
    return -1;
    select_dp->N = MAX_ITEMS;
    /* Allocate the pointers to the indicator variables, and the
    actual data. */
    for (i = 0; i < MAX_ITEMS; i++) {
    bind_dp->I = (short *) malloc(sizeof (short));
    select_dp->I = (short *) malloc(sizeof(short));
    bind_dp->V = (char *) malloc(1);
    select_dp->V = (char *) malloc(1);
    return 0;
    int get_dyn_statement()
    char *cp, linebuf[256];
    int iter, plsql;
    for (plsql = 0, iter = 1; ;)
    if (iter == 1)
    printf("\nSQL> ");
    dyn_statement[0] = '\0';
    fgets(linebuf, sizeof linebuf, stdin);
    cp = strrchr(linebuf, '\n');
    if (cp && cp != linebuf)
    *cp = ' ';
    else if (cp == linebuf)
    continue;
    if ((strncmp(linebuf, "EXIT", 4) == 0) ||
    (strncmp(linebuf, "exit", 4) == 0))
    return -1;
    else if (linebuf[0] == '?' ||
    (strncmp(linebuf, "HELP", 4) == 0) ||
    (strncmp(linebuf, "help", 4) == 0))
    help();
    iter = 1;
    continue;
    if (strstr(linebuf, "BEGIN") ||
    (strstr(linebuf, "begin")))
    plsql = 1;
    strcat(dyn_statement, linebuf);
    if ((plsql && (cp = strrchr(dyn_statement, '/'))) ||
    (!plsql && (cp = strrchr(dyn_statement, ';'))))
    *cp = '\0';
    break;
    else
    iter++;
    printf("%3d ", iter);
    return 0;
    void set_bind_variables()
    int i, n;
    char bind_var[64];
    /* Describe any bind variables (input host variables) */
    EXEC SQL WHENEVER SQLERROR DO sql_error();
    bind_dp->N = MAX_ITEMS; /* Initialize count of array elements. */
    EXEC SQL DESCRIBE BIND VARIABLES FOR S INTO bind_dp;
    /* If F is negative, there were more bind variables
    than originally allocated by sqlald(). */
    if (bind_dp->F < 0)
    printf ("\nToo many bind variables (%d), maximum is %d\n.",
    -bind_dp->F, MAX_ITEMS);
    return;
    /* Set the maximum number of array elements in the
    descriptor to the number found. */
    bind_dp->N = bind_dp->F;
    /* Get the value of each bind variable as a
    * character string.
    * C contains the length of the bind variable
    * name used in the SQL statement.
    * S contains the actual name of the bind variable
    * used in the SQL statement.
    * L will contain the length of the data value
    * entered.
    * V will contain the address of the data value
    * entered.
    * T is always set to 1 because in this sample program
    * data values for all bind variables are entered
    * as character strings.
    * ORACLE converts to the table value from CHAR.
    * I will point to the indicator value, which is
    * set to -1 when the bind variable value is "null".
    for (i = 0; i < bind_dp->F; i++)
    printf ("\nEnter value for bind variable %.*s: ",
    (int)bind_dp->C, bind_dp->S);
    fgets(bind_var, sizeof bind_var, stdin);
    /* Get length and remove the new line character. */
    n = strlen(bind_var) - 1;
    /* Set it in the descriptor. */
    bind_dp->L = n;
    /* (re-)allocate the buffer for the value.
    sqlald() reserves a pointer location for
    V but does not allocate the full space for
    the pointer. */
    bind_dp->V = (char *) realloc(bind_dp->V, (bind_dp->L + 1));
    /* And copy it in. */
    strncpy(bind_dp->V, bind_var, n);
    /* Set the indicator variable's value. */
    if ((strncmp(bind_dp->V, "NULL", 4) == 0) ||
    (strncmp(bind_dp->V, "null", 4) == 0))
    *bind_dp->I = -1;
    else
    *bind_dp->I = 0;
    /* Set the bind datatype to 1 for CHAR. */
    bind_dp->T = 1;
    return;
    void process_select_list()
    int i, null_ok, precision, scale;
    if ((strncmp(dyn_statement, "SELECT", 6) != 0) &&
    (strncmp(dyn_statement, "select", 6) != 0))
    select_dp->F = 0;
    return;
    /* If the SQL statement is a SELECT, describe the
    select-list items. The DESCRIBE function returns
    their names, datatypes, lengths (including precision
    and scale), and NULL/NOT NULL statuses. */
    select_dp->N = MAX_ITEMS;
    EXEC SQL DESCRIBE SELECT LIST FOR S INTO select_dp;
    /* If F is negative, there were more select-list
    items than originally allocated by sqlald(). */
    if (select_dp->F < 0)
    printf ("\nToo many select-list items (%d), maximum is %d\n",
    -(select_dp->F), MAX_ITEMS);
    return;
    /* Set the maximum number of array elements in the
    descriptor to the number found. */
    select_dp->N = select_dp->F;
    /* Allocate storage for each select-list item.
    sqlprc() is used to extract precision and scale
    from the length (select_dp->L).
    sqlnul() is used to reset the high-order bit of
    the datatype and to check whether the column
    is NOT NULL.
    CHAR datatypes have length, but zero precision and
    scale. The length is defined at CREATE time.
    NUMBER datatypes have precision and scale only if
    defined at CREATE time. If the column
    definition was just NUMBER, the precision
    and scale are zero, and you must allocate
    the required maximum length.
    DATE datatypes return a length of 7 if the default
    format is used. This should be increased to
    9 to store the actual date character string.
    If you use the TO_CHAR function, the maximum
    length could be 75, but will probably be less
    (you can see the effects of this in SQL*Plus).
    ROWID datatype always returns a fixed length of 18 if
    coerced to CHAR.
    LONG and
    LONG RAW datatypes return a length of 0 (zero),
    so you need to set a maximum. In this example,
    it is 240 characters.
    printf ("\n");
    for (i = 0; i < select_dp->F; i++)
    char title[MAX_VNAME_LEN];
    /* Turn off high-order bit of datatype (in this example,
    it does not matter if the column is NOT NULL). */
    sqlnul ((unsigned short *)&(select_dp->T), (unsigned short
    *)&(select_dp->T), &null_ok);
    switch (select_dp->T)
    case 1 : /* CHAR datatype: no change in length
    needed, except possibly for TO_CHAR
    conversions (not handled here). */
    break;
    case 2 : /* NUMBER datatype: use sqlprc() to
    extract precision and scale. */
    sqlprc ((unsigned int *)&(select_dp->L), &precision,
    &scale);
    /* Allow for maximum size of NUMBER. */
    if (precision == 0) precision = 40;
    /* Also allow for decimal point and
    possible sign. */
    /* convert NUMBER datatype to FLOAT if scale > 0,
    INT otherwise. */
    if (scale > 0)
    select_dp->L = sizeof(float);
    else
    select_dp->L = sizeof(int);
    break;
    case 8 : /* LONG datatype */
    select_dp->L = 240;
    break;
    case 11 : /* ROWID datatype */
    case 104 : /* Universal ROWID datatype */
    select_dp->L = 18;
    break;
    case 12 : /* DATE datatype */
    select_dp->L = 9;
    break;
    case 23 : /* RAW datatype */
    break;
    case 24 : /* LONG RAW datatype */
    select_dp->L = 240;
    break;
    /* Allocate space for the select-list data values.
    sqlald() reserves a pointer location for
    V but does not allocate the full space for
    the pointer. */
    if (select_dp->T != 2)
    select_dp->V = (char *) realloc(select_dp->V,
    select_dp->L + 1);
    else
    select_dp->V = (char *) realloc(select_dp->V,
    select_dp->L);
    /* Print column headings, right-justifying number
    column headings. */
    /* Copy to temporary buffer in case name is null-terminated */
    memset(title, ' ', MAX_VNAME_LEN);
    strncpy(title, select_dp->S, select_dp->C);
    if (select_dp->T == 2)
    if (scale > 0)
    printf ("%.*s ", select_dp->L+3, title);
    else
    printf ("%.*s ", select_dp->L, title);
    else
    printf("%-.*s ", select_dp->L, title);
    /* Coerce ALL datatypes except for LONG RAW and NUMBER to
    character. */
    if (select_dp->T != 24 && select_dp->T != 2)
    select_dp->T = 1;
    /* Coerce the datatypes of NUMBERs to float or int depending on
    the scale. */
    if (select_dp->T == 2)
    if (scale > 0)
    select_dp->T = 4; /* float */
    else
    select_dp->T = 3; /* int */
    printf ("\n\n");
    /* FETCH each row selected and print the column values. */
    EXEC SQL WHENEVER NOT FOUND GOTO end_select_loop;
    for (;;)
    EXEC SQL FETCH C USING DESCRIPTOR select_dp;
    /* Since each variable returned has been coerced to a
    character string, int, or float very little processing
    is required here. This routine just prints out the
    values on the terminal. */
    for (i = 0; i < select_dp->F; i++)
    if (*select_dp->I < 0)
    if (select_dp->T == 4)
    printf ("%-*c ",(int)select_dp->L+3, ' ');
    else
    printf ("%-*c ",(int)select_dp->L, ' ');
    else
    if (select_dp->T == 3) /* int datatype */
    printf ("%*d ", (int)select_dp->L,
    *(int *)select_dp->V);
    else if (select_dp->T == 4) /* float datatype */
    printf ("%*.2f ", (int)select_dp->L,
    *(float *)select_dp->V);
    else /* character string */
    printf ("%-*.*s ", (int)select_dp->L,
    (int)select_dp->L, select_dp->V);
    printf ("\n");
    end_select_loop:
    return;
    void help()
    puts("\n\nEnter a SQL statement or a PL/SQL block at the SQL> prompt.");
    puts("Statements can be continued over several lines, except");
    puts("within string literals.");
    puts("Terminate a SQL statement with a semicolon.");
    puts("Terminate a PL/SQL block (which can contain embedded
    semicolons)");
    puts("with a slash (/).");
    puts("Typing \"exit\" (no semicolon needed) exits the program.");
    puts("You typed \"?\" or \"help\" to get this message.\n\n");
    int connect_database()
    err_sql = SQL_SUCC;
    EXEC SQL WHENEVER SQLERROR DO sql_error();
    EXEC SQL WHENEVER NOT FOUND DO sql_not_found();
    EXEC SQL CONTEXT ALLOCATE :ctx;
    EXEC SQL CONTEXT USE :ctx;
    EXEC SQL CONNECT :db_uid;
    if(err_sql != SQL_SUCC){
    printf("err => connect database(ctx:%ld, uid:%s) failed!\n", ctx, db_uid);
    return -1;
    return 1;
    int disconnect_database()
    err_sql = SQL_SUCC;
    EXEC SQL WHENEVER SQLERROR DO sql_error();
    EXEC SQL WHENEVER NOT FOUND DO sql_not_found();
    EXEC SQL CONTEXT USE :ctx;
    EXEC SQL COMMIT WORK RELEASE;
    EXEC SQL CONTEXT FREE:ctx;
    return 1;
    void sql_error()
    printf("err => %.*s", sqlca.sqlerrm.sqlerrml, sqlca.sqlerrm.sqlerrmc);
    printf("in \"%.*s...\'\n", oraca.orastxt.orastxtl, oraca.orastxt.orastxtc);
    printf("on line %d of %.*s.\n\n", oraca.oraslnr, oraca.orasfnm.orasfnml,
    oraca.orasfnm.orasfnmc);
    switch(sqlca.sqlcode) {
    case -1: /* unique constraint violated */
    err_sql = SQL_UNIQUE;
    break;
    case -1012: /* not logged on */
    case -1089:
    case -3133:
    case -1041:
    case -3114:
    case -3113:
    /* reconnection is attempted when the server is shut down or we are not logged in */
    /* immediate shutdown in progress - no operations are permitted */
    /* end-of-file on communication channel */
    /* internal error. hostdef extension doesn't exist */
    err_sql = SQL_DISCONNECT;
    break;
    case -1400:
    err_sql = SQL_NOTNULL;
    break;
    default:
    err_sql = SQL_ERR;
    break;
    EXEC SQL CONTEXT USE :ctx;
    EXEC SQL WHENEVER SQLERROR CONTINUE;
    EXEC SQL ROLLBACK WORK;
    void sql_not_found()
    err_sql = SQL_NOTFOUND;

    Hi Jane,
    What version of Berkeley DB XML are you using?
    What is your operating system and your hardware platform?
    For how long have been the application running?
    What is your current container size?
    What's set for EnvironmentConfig.setThreaded?
    Do you know if containers have previously not been closed correctly?
    Can you please post the entire error output?
    What's the JDK version, 1.4 or 1.5?
    Thanks,
    Bogdan

  • Memory Leak with new Font()

    Hello all,
    Not sure if this is the right place to post this, but here goes. I have a program that needs to change the font of at least 250,000 letters; however, after doing this the program runs slow and laggy, as if changing the font of each letter took up memory and never released it. Here is a compilable example. Click the button at the bottom of the window and watch the letter it is currently working on. You will notice that around 200,000 it will begin to not count up smoothly anymore, but count in a kind of skipping pattern. This seems to get worse the more you do it and seems to indicate to me that something is using memory and not releasing it. I'm not sure why replacing a letter's font with a new font would cause more memory to be taken up; I would think that if I was doing it properly it would simply replace the old font with the new one, not taking any more memory than it did before. Here is the example (this program sometimes locks up, so be prepared). If someone could point out what is causing this to take up more memory after changing the fonts, that would be great, and hopefully we can find a solution :) Thanks in advance.
    -neptune692
    * To change this template, choose Tools | Templates
    * and open the template in the editor.
    package paintsurface;
    import java.awt.*;
    import java.awt.event.*;
    import javax.swing.*;
    import java.util.*;
    import java.util.List;
    public class PaintSurface implements Runnable, ActionListener {
    public static void main(String[] args) {
            SwingUtilities.invokeLater(new PaintSurface());
    List<StringState> states = new ArrayList<StringState>();
    Tableaux tableaux;
    Random random = new Random();
    Font font = new Font("Arial",Font.PLAIN,15);
    //        Point mouselocation = new Point(0,0);
    static final int WIDTH = 1000;
    static final int HEIGHT = 1000;
    JFrame frame = new JFrame();
    JButton add;
    public void run() {
            tableaux = new Tableaux();
            for (int i=250000; --i>=0;)
                    addRandom();
            frame.add(tableaux, BorderLayout.CENTER);
            add = new JButton("Change Font of letters - memory leak?");
            add.addActionListener(this);
            frame.add(add, BorderLayout.SOUTH);
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setSize(WIDTH, HEIGHT);
            frame.setLocationRelativeTo(null);
            frame.setVisible(true);
    public void actionPerformed(ActionEvent e) {
        new Thread(new ChangeFonts()).start();
    void addRandom() {
            tableaux.add(
                            Character.toString((char)('a'+random.nextInt(26))),
                            UIManager.getFont("Button.font"),
                            random.nextInt(WIDTH), random.nextInt(HEIGHT));
    //THIS CLASS SEEMS TO HAVE SOME KIND OF MEMORY LEAK I'M NOT SURE?
    class ChangeFonts implements Runnable {
        public void run() {
        Random rand = new Random();
            for(int i = 0; i<states.size(); i++) {
                font = new Font("Arial",Font.PLAIN,rand.nextInt(50));
                states.get(i).font = font;
                add.setText("Working on letter - "+i);
    class StringState extends Rectangle {
            StringState(String str, Font font, int x, int y, int w, int h) {
                    super(x, y, w, h);
                    string = str;
                    this.font = font;
            String string;
            Font font;
    class Tableaux extends JComponent {
            Tableaux() {
                    this.enableEvents(MouseEvent.MOUSE_MOTION_EVENT_MASK);
                    lagState = createState("Lag", new Font("Arial",Font.BOLD,20), 0, 0);
            protected void processMouseMotionEvent(MouseEvent e) {
                    repaint(lagState);
                    lagState.setLocation(e.getX(), e.getY());
                    repaint(lagState);
                    super.processMouseMotionEvent(e);
            StringState lagState;
            StringState createState(String str, Font font, int x, int y) {
                FontMetrics metrics = getFontMetrics(font);
                int w = metrics.stringWidth(str);
                int h = metrics.getHeight();
                return new StringState(str, font, x, y-metrics.getAscent(), w, h);
            public void add(String str, Font font, int x, int y) {
                    StringState state = createState(str, font, x, y);
                    states.add(state);
                    repaint(state);
            protected void paintComponent(Graphics g) {
                    Rectangle clip = g.getClipBounds();
                    FontMetrics metrics = g.getFontMetrics();
                    for (StringState state : states) {
                            if (state.intersects(clip)) {
                                    if (!state.font.equals(g.getFont())) {
                                            g.setFont(state.font);
                                            metrics = g.getFontMetrics();
                                    g.drawString(state.string, state.x, state.y+metrics.getAscent());
                    if (lagState.intersects(clip)) {
                    g.setColor(Color.red);
                    if (!lagState.font.equals(g.getFont())) {
                        g.setFont(lagState.font);
                        metrics = g.getFontMetrics();
                    g.drawString("Lag", lagState.x, lagState.y+metrics.getAscent());
    }
    Here is the block of code that I think is causing the problem:
    //THIS CLASS SEEMS TO HAVE SOME KIND OF MEMORY LEAK I'M NOT SURE?
    class ChangeFonts implements Runnable {
        public void run() {
            Random rand = new Random();
            for (int i = 0; i < states.size(); i++) {
                font = new Font("Arial", Font.PLAIN, rand.nextInt(50));
                states.get(i).font = font; // this line seems to cause the problem?
                add.setText("Working on letter - " + i);
            }
        }
    }

    neptune692 wrote:
    jverd wrote:
    You're creating a quarter million distinct Font objects, and obviously you must be hanging on to all of them because each character is having its font set to the newly created object. So if you have 250k chars, you're forcing it to have 250k Font objects.
    Since the only difference is that rand.nextInt(50) parameter, just pre-create 50 Font objects with 0..49, stick 'em in the corresponding elements in an array, and use rand.nextInt to select the Font object to use.
    That does make sense, but it does that when the program is first launched and doesn't lag. But the second and third time you change the letters' font it seems to lag, so if it wasn't taking up more memory the second time it should perform like it did when it first launched.
    I don't care to investigate any further. The real problem is almost certainly the quarter million Font objects. It could be that 250k is fine, but by the time you get to 500k, it has to do a lot of GC, and that's where the slowdown is coming from. You might even be able to make it work better with the code you have just by tweaking the GC parameters at startup, but I wouldn't bother. Fix the code first, and then see if you have issues.
    Does creating a new font for each of those letters not replace the old font object? If it didn't use more memory the second and third time I don't think you would see the skipping in the counter and the slowing down of the iterations. So it must be remembering some of the old font objects, or am I wrong?
    Using new always creates a new object. When you do it the second time around and call letter.setFont(newFont), the old Font object is eligible for GC. That doesn't mean it will be GCed right away though. The JVM can leave them all lying around until it runs out of memory, and then GC some or all of them.

  • Memory Leak in ArrayList ?

    Dear all,
    When I went through the implementation of ArrayList, it appeared to me that there can be a memory leak in the ensureCapacity method, which allocates a new backing array. The code is below:
    /**
     * Increases the capacity of this ArrayList instance, if
     * necessary, to ensure that it can hold at least the number of elements
     * specified by the minimum capacity argument.
     *
     * @param minCapacity the desired minimum capacity
     */
    public void ensureCapacity(int minCapacity) {
        modCount++;
        int oldCapacity = elementData.length;
        if (minCapacity > oldCapacity) {
            Object oldData[] = elementData;
            int newCapacity = (oldCapacity * 3)/2 + 1;
            if (newCapacity < minCapacity)
                newCapacity = minCapacity;
            // minCapacity is usually close to size, so this is a win:
            elementData = Arrays.copyOf(elementData, newCapacity);
        }
    }
    Don't we need to set the oldData reference to null (oldData = null) so that the GC can reclaim the memory used by the old elementData soon after the array copy (Arrays.copyOf)?
    Please suggest if my understanding is wrong..
    Thanks & Regards
    Joby

    JOBY1985 wrote:
    Dear all,
    When I went thru the implementation of ArrayList, it just appeared to me that there can be a memory leak in the ensureCapacity method, which allocated new memory... The code is as below.
    *Increases the capacity of this <tt>ArrayList</tt> instance, if*
    necessary, to ensure that it can hold at least the number of elements
    *specified by the minimum capacity argument.*
    *@param   minCapacity   the desired minimum capacity*
    public void ensureCapacity(int minCapacity) {
         modCount++;
         int oldCapacity = elementData.length;
         if (minCapacity > oldCapacity) {
    *Object oldData[]* = elementData;
             int newCapacity = (oldCapacity * 3)/2 + 1;
             if (newCapacity < minCapacity)
              newCapacity = minCapacity;
    // minCapacity is usually close to size, so this is a win:
    elementData = Arrays.copyOf(elementData, newCapacity);
    }
    Dont we need to make null the reference of oldData, as oldData = null, so that gc can reclaim the memory used by the old elementData, soon after the array copy (Arrays.copyOf)??
    No, we don't need to set anything to 'null'. The reference 'elementData' is updated by the Arrays.copyOf() invocation, and whatever it used to reference is then only referenced by oldData; when the block in which 'oldData' is declared exits, there will be no reference to the original array, so it is eligible for GC.
    >
    Please suggest if my understanding is wrong..
    Your understanding is wrong.
    >
    Thanks & Regards
    Joby
    Edited by: sabre150 on Oct 20, 2009 1:20 PM
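    To illustrate the point about scope with a minimal sketch (a hypothetical GrowDemo class rather than the real ArrayList source):
    import java.util.Arrays;

    class GrowDemo {
        private int[] elementData = new int[10];

        void grow(int minCapacity) {
            if (minCapacity > elementData.length) {
                int[] oldData = elementData; // soon the only reference to the old array
                elementData = Arrays.copyOf(elementData, minCapacity);
                // 'oldData' goes out of scope when this block exits, so the old
                // backing array becomes unreachable and GC-eligible with or
                // without an explicit 'oldData = null'.
            }
        }
    }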

  • Memory leak in Waveform Graph?

    Either there is a huge memory leak in the waveform graph or I am really doing something wrong.
    I created an example app with a waveform graph and a button; the constructor looks as follows:
    Form1(void)
    {
        InitializeComponent();
        vals = gcnew array<double>(60000);
        for (int i = 0; i < 60000; i++)
            vals[i] = Math::Sin(Math::PI * 2 * 60 / 6000.0 * i);
    }
    and the click event looks like this:
    System::Void button1_Click(System::Object^ sender, System::EventArgs^ e)
    {
        this->waveformGraph1->Plots->Clear();
        NationalInstruments::UI::WaveformPlot^ plot = gcnew NationalInstruments::UI::WaveformPlot(xAxis1, yAxis1);
        plot->PlotY(vals);
        plot->LineColor = Color::Red;
        this->waveformGraph1->Plots->Add(plot);
    }
    Every time I click the button, the memory used on my system goes up by about 10 MB. I also tried this using this->waveformGraph1->PlotY(vals) and the memory usage stays solid as a rock.
    Am I doing something wrong, or what is causing the leak so I can work around it? My program plots 4 arrays of this size on one graph per test result.

    The plot uses some unmanaged resources (GDI objects and other handles), which is why it implements IDisposable. Because of the GC, resource cleanup is not deterministic; cleanup occurs when the GC deems it necessary. Calling delete (which, based on C++/CLI syntax, ultimately ends up calling Dispose) forces the object to release any handles it might have immediately. See the documentation for the .NET Dispose pattern for more information.
    If you don't call delete, what would end up happening is that eventually, at some point in the application, the GC would fire and clean up all the objects and handles and you would see a drop in the application's memory footprint, but you would need to run the application for a while before that might happen. In a long application run, things would end up stabilizing.
    Bilal Durrani
    NI

  • Custom MediaStreamSource and Memory Leaks During SampleRequested

    Greetings,
    I have a nasty memory leak problem that is causing me to pull my hair out.
    I'm implementing a custom MediaStreamSource along with MediaTranscoder to generate video to disk. The frame generation operation occurs in the SampleRequested handler (as in the MediaStreamSource example). No matter what I do - and I've tried a
    ton of options - inevitably the app runs out of memory after a couple hundred frames of HD video. Investigating, I see that indeed GC.GetTotalMemory reports an increasing, and never decreasing, amount of allocated RAM. 
    The frame generator in my actual app is using RenderTargetBitmap to get screen captures, and is handing the buffer to MediaStreamSample.CreateFromBuffer(). However, as you can see in the example below, the issue occurs even with a dumb allocation
    of RAM and no other actual logic. Here's the code:
    void _mss_SampleRequested(Windows.Media.Core.MediaStreamSource sender, MediaStreamSourceSampleRequestedEventArgs args)
    {
        if (args.Request.StreamDescriptor is VideoStreamDescriptor)
        {
            if (_FrameCount >= 3000) return;
            var videoDeferral = args.Request.GetDeferral();
            var descriptor = (VideoStreamDescriptor)args.Request.StreamDescriptor;
            uint frameWidth = descriptor.EncodingProperties.Width;
            uint frameHeight = descriptor.EncodingProperties.Height;
            uint size = frameWidth * frameHeight * 4;
            byte[] buffer = null;
            try
            {
                buffer = new byte[size];
                // do something to create the frame
            }
            catch
            {
                App.LogAction("Ran out of memory", this);
                return;
            }
            args.Request.Sample = MediaStreamSample.CreateFromBuffer(buffer.AsBuffer(), TimeFromFrame(_FrameCount++, _frameSource.Framerate));
            args.Request.Sample.Duration = TimeFromFrame(1, _frameSource.Framerate);
            buffer = null; // attempt to release the memory
            videoDeferral.Complete();
            App.LogAction("Completed Video frame " + (_FrameCount - 1).ToString() + "\n" +
                "Allocated memory: " + GC.GetTotalMemory(true), this);
            return;
        }
    }
    It usually fails around frame 357, with GC.GetTotalMemory() reporting 750MB allocated.
    I've tried tons of work-arounds, none of which made a difference. I tried putting the code that allocates the bytes in a separate thread - no dice.  I tried Task.Delay to give the GC a chance to work, on the assumption that it just had no time
    to do its job. No luck.
    As another experiment, I wanted to see if the problem went away if I allocated memory each frame, but never assigned it to the MediaStreamSample, instead giving the sample (constant) dummy data. Indeed, in that scenario, memory consumption stayed
    constant. However, while I never get an out-of-memory exception, RequestSample just stops getting called around frame 1600 and as a result the transcode operation never actually returns to completion.
    I also tried taking a cue from the SDK sample which uses C++ entirely to generate the frame. So I passed the buffer as a Platform::Array<BYTE> to a static Runtime extension class function I wrote in C++.
    I won't bore you with the C++ code, but even directly copying the bytes of the array to the media sample using memcpy still had the same result! It seems that there is no way to communicate the contents of the byte[] array to the media sample without
    it never being released.
    I know what some will say: the difference between my code and the SDK sample, of course, is that the SDK sample generates the frame _entirely_ in C++, thus taking care of its own memory allocation and deallocation. Because I want to get
    the data from RenderTargetBitmap, this isn't an option for me. (As a side note, if anyone knows if there's a way to get the contents of an RT Window using DirectX, that might work too, but I know this is not a C++ forum, so...). But more importantly,
    MediaStreamSource and MediaStreamSample are managed classes that appear to allow you to generate custom frames using C# or other managed code. The MediaStreamSample.CreateFromBuffer function appears to be tailored for exactly what I want. But there appears
    to be no way to release the buffer when giving the bytes to the MediaStreamSample. At least none that I can find.
    I know the RT version of these classes are new to Windows 8.1, but I did see other posts going back 3 years discussing a similar issue in Silverlight. That never appears to have been resolved.
    I guess the question boils down to this: how do I safely get managed data, allocated during the SampleRequested handler, to the MediaStreamSample without causing a memory leak? Also, why would the SampleRequested handler just stop getting called
    out of the blue, even when I artificially eliminate the memory leak problem?
    Thanks so much for all input!

    Hi Rob - 
    Thanks for your quick reply and for clarifying the terminology. 
    In the Memory Usage test under Analyze/Performance and Diagnostics (is that what you mean?) it's clear that each frame of video being created is not released from memory except when memory consumption gets very high. GC will occasionally kick in, but eventually
    it succumbs.
    Interestingly, if I reduce the frame size substantially, say 320x240, it never runs out of RAM no matter how many frames I throw at it. The Memory Usage test, however, shows the same pattern. But this time the GC can keep up and release the RAM.
    After playing with this ad nauseam, I am fairly convinced I know what the problem is, but the solution still escapes me. It appears that the Transcoder is requesting frames from the MediaStreamSource (and the MediaStreamSource is providing them via
    my SampleRequested handler) faster than the Transcoder can write them to disk and release them. Why would this be happening? The MediaStreamSource.BufferTime property is - I thought - used to prevent this very problem. However, changing the BufferTime seems
    to have no effect at all - even changing it to ZERO doesn't change anything. If I'm right, this would explain why the GC can't do its job - it can't release the buffers I'm giving to the Transcoder via SampleRequested because the Transcoder won't give them
    up until it's finished transcoding and writing them to disk. And yet the transcoder keeps requesting samples until there's no more memory to create them with.
    The following code, which I made from scratch to illustrate my scenario, should be air-tight according to everything I've read. And yet, it still runs out of memory when the frame size is too large. 
    If you or anyone else can spot the problem in this code, I'd be thrilled to hear it. Maybe I'm omitting a key step with regard to getting the deferral? Or maybe it's a bug in the back-end? Can I "slow down" the transcoder and force it to release samples
    it's already used?
    Anyway here's the new code, which other than App.cs is everything. So if I'm doing something wrong it will be in this module:
    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Threading.Tasks;
    using System.Linq;
    using System.Runtime.InteropServices.WindowsRuntime;
    using System.Diagnostics;
    using Windows.Foundation;
    using Windows.Foundation.Collections;
    using Windows.UI.Xaml;
    using Windows.UI.Xaml.Controls;
    using Windows.UI.Xaml.Controls.Primitives;
    using Windows.UI.Xaml.Data;
    using Windows.UI.Xaml.Input;
    using Windows.UI.Xaml.Media;
    using Windows.UI.Xaml.Navigation;
    using Windows.UI.Popups;
    using Windows.Storage;
    using Windows.Storage.Pickers;
    using Windows.Storage.Streams;
    using Windows.Media.MediaProperties;
    using Windows.Media.Core;
    using Windows.Media.Transcoding;
    // The Blank Page item template is documented at http://go.microsoft.com/fwlink/?LinkId=234238
    namespace MyTranscodeTest
    /// <summary>
    /// An empty page that can be used on its own or navigated to within a Frame.
    /// </summary>
    public sealed partial class MainPage : Page
    MediaTranscoder _transcoder;
    MediaStreamSource _mss;
    VideoStreamDescriptor _videoSourceDescriptor;
    const int c_width = 1920;
    const int c_height = 1080;
    const int c_frames = 10000;
    const int c_frNumerator = 30000;
    const int c_frDenominator = 1001;
    uint _frameSizeBytes;
    uint _frameDurationTicks;
    uint _transcodePositionTicks = 0;
    uint _frameCurrent = 0;
    Random _random = new Random();
    public MainPage()
    this.InitializeComponent();
    private async void GoButtonClicked(object sender, RoutedEventArgs e)
    Windows.Storage.Pickers.FileSavePicker picker = new Windows.Storage.Pickers.FileSavePicker();
    picker.FileTypeChoices.Add("MP4 File", new List<string>() { ".MP4" });
    Windows.Storage.StorageFile file = await picker.PickSaveFileAsync();
    if (file == null) return;
    Stream outputStream = await file.OpenStreamForWriteAsync();
    var transcodeTask = (await this.InitializeTranscoderAsync(outputStream)).TranscodeAsync();
    transcodeTask.Progress = (asyncInfo, progressInfo) =>
    Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
    _ProgressReport.Text = "Sourcing frame " + _frameCurrent.ToString() + " of " + c_frames.ToString() +
    " with " + GC.GetTotalMemory(false).ToString() + " bytes allocated.";
    await transcodeTask;
    MessageDialog dialog = new MessageDialog("Transcode completed.");
    await dialog.ShowAsync();
    async Task<PrepareTranscodeResult> InitializeTranscoderAsync (Stream output)
    _transcoder = new MediaTranscoder();
    _transcoder.HardwareAccelerationEnabled = false;
    _videoSourceDescriptor = new VideoStreamDescriptor(VideoEncodingProperties.CreateUncompressed( MediaEncodingSubtypes.Bgra8, c_width, c_height ));
    _videoSourceDescriptor.EncodingProperties.PixelAspectRatio.Numerator = 1;
    _videoSourceDescriptor.EncodingProperties.PixelAspectRatio.Denominator = 1;
    _videoSourceDescriptor.EncodingProperties.FrameRate.Numerator = c_frNumerator;
    _videoSourceDescriptor.EncodingProperties.FrameRate.Denominator = c_frDenominator;
    _videoSourceDescriptor.EncodingProperties.Bitrate = (uint)((c_width * c_height * 4 * 8 * (ulong)c_frDenominator) / (ulong)c_frNumerator);
    _frameDurationTicks = (uint)(10000000 * (ulong)c_frDenominator / (ulong)c_frNumerator);
    _frameSizeBytes = c_width * c_height * 4;
    _mss = new MediaStreamSource(_videoSourceDescriptor);
    _mss.BufferTime = TimeSpan.FromTicks(_frameDurationTicks);
    _mss.Duration = TimeSpan.FromTicks( _frameDurationTicks * c_frames );
    _mss.Starting += _mss_Starting;
    _mss.Paused += _mss_Paused;
    _mss.SampleRequested += _mss_SampleRequested;
    MediaEncodingProfile outputProfile = MediaEncodingProfile.CreateMp4(VideoEncodingQuality.Ntsc);
    outputProfile.Audio = null;
    return await _transcoder.PrepareMediaStreamSourceTranscodeAsync(_mss, output.AsRandomAccessStream(), outputProfile);
    void _mss_Paused(MediaStreamSource sender, object args)
    throw new NotImplementedException();
    void _mss_Starting(MediaStreamSource sender, MediaStreamSourceStartingEventArgs args)
    args.Request.SetActualStartPosition(new TimeSpan(0));
    /// <summary>
    /// This is derived from the sample in "Windows 8.1 Apps with Xaml and C# Unleashed"
    /// </summary>
    /// <param name="sender"></param>
    /// <param name="args"></param>
    void _mss_SampleRequested(MediaStreamSource sender, MediaStreamSourceSampleRequestedEventArgs args)
    if (_frameCurrent == c_frames) return;
    var deferral = args.Request.GetDeferral();
    byte[] frameBuffer;
    try
    frameBuffer = new byte[_frameSizeBytes];
    this._random.NextBytes(frameBuffer);
    catch
    throw new Exception("Sample source ran out of RAM");
    args.Request.Sample = MediaStreamSample.CreateFromBuffer(frameBuffer.AsBuffer(), TimeSpan.FromTicks(_transcodePositionTicks));
    args.Request.Sample.Duration = TimeSpan.FromTicks(_frameDurationTicks);
    args.Request.Sample.KeyFrame = true;
    _transcodePositionTicks += _frameDurationTicks;
    _frameCurrent++;
    deferral.Complete();
    Again, I can't see any reason why this shouldn't work. You'll note it mirrors pretty closely the sample in the Windows 8.1 Apps With Xaml Unleashed book (Chapter 14). The difference is I'm feeding the samples to a transcoder rather than a MediaElement (which,
    again should be no issue).
    Thanks again for any suggestions!
    Peter

  • Memory leak with SystemTray

    Hello,
    I just found a memory leak in my code caused by the SystemTray, but I don't know whether it is me misusing SystemTray (I add a trayIcon, then remove it, then add another one, and so on). I know how to get around the problem, but I don't know its origin (my code, or a Java bug?). So I made the following code reproducing the leak. If someone can give me an idea or advice :-)
    import java.awt.*;
    import java.io.*;
    import java.lang.ref.*;
    import java.util.Vector;
    import java.util.logging.*;
    import javax.imageio.ImageIO;

    class MyThread extends Thread {
      ReferenceQueue<?> referenceQueue;

      public MyThread(ReferenceQueue<?> referenceQueue) {
        this.referenceQueue = referenceQueue;
      }

      @Override
      public void run() {
        System.out.println("Starting ...");
        try {
          while (true) {
            referenceQueue.remove();
            System.out.println("We release a TrayIcon");
          }
        } catch (InterruptedException ex) {
          Logger.getLogger(MyThread.class.getName()).log(Level.SEVERE, null, ex);
        }
      }
    }

    class MyPopup extends PopupMenu {
      static final int MAX = 1000000;
      Integer[] toto = new Integer[MAX];

      public MyPopup() {
        for (int i = 0; i < MAX; i++)
          toto[i] = i;
      }
    }

    public class Main {
      public static void main(String[] args) {
        PopupMenu popup;
        ReferenceQueue referenceQueue = new ReferenceQueue();
        Vector<WeakReference> weakReference = new Vector<WeakReference>();
        MyThread myThread = new MyThread(referenceQueue);
        myThread.start();
        SystemTray tray = SystemTray.getSystemTray();
        TrayIcon trayIcon;
        File file = new File(<AN IMAGE HERE>);
        Image image;
        try {
          image = ImageIO.read(file);
          while (true) {
            popup = new MyPopup();  // MyPopup, so its large toto array is allocated on every loop
            trayIcon = new TrayIcon(image, "Frame Title", popup);
            tray.add(trayIcon);
            tray.remove(trayIcon);
            weakReference.add(new WeakReference(trayIcon, referenceQueue));
          }
        } catch (IOException ex) {
          Logger.getLogger(Main.class.getName()).log(Level.SEVERE, null, ex);
        } catch (AWTException e) {
          Logger.getLogger(Main.class.getName()).log(Level.SEVERE, null, e);
        }
      }
    }
    ------------------------------------------- ADDED -----------------------------------
    P.S.:
    - the toto array in MyPopup is there to trigger an OutOfMemoryError (OOM) more quickly
    - you may cap the allowed heap size with the Java parameters to get an OOM sooner (-Xms30m -Xmx30m)
    - you must have an image to test this code (and put its path where <AN IMAGE HERE> is)
    - if you remove the following lines there is no OOM:
    tray.add(trayIcon);
    tray.remove(trayIcon);
    so it seems to me that TrayIcon is the origin of the OOM
    - the weak reference is there in order to see when a trayIcon can be garbage collected
    Edited by: lemmel on May 5, 2009 7:22 AM - added comments
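    The workaround referred to above is presumably along these lines (a minimal sketch, not necessarily the poster's exact code): add a single TrayIcon to the tray once and update it in place, instead of repeatedly adding and removing icons.

    // Sketch of such a workaround: reuse one TrayIcon rather than add/remove in a loop.
    SystemTray tray = SystemTray.getSystemTray();
    TrayIcon icon = new TrayIcon(image, "Frame Title", new PopupMenu());
    tray.add(icon);                        // added exactly once (may throw AWTException)
    while (running) {                      // 'running' is a placeholder loop condition
        icon.setPopupMenu(new MyPopup());  // swap the popup without touching the tray
        icon.setToolTip("Frame Title");
    }
    tray.remove(icon);                     // only on shutdown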

    So I did the test and:
    - you were right about the vector, but it was added only for debugging purposes (to find the cause of the OOM)
    - your code sample gets an OOM on my computer just as mine does (with the vector removed, obviously); I made some alterations to your code, see below
    Could you try changing your Java memory settings (see below)?
    P.S.:
    - I first tried your code as it was, got the OOM, then decided to give the garbage collector some time (and later to force a call to it) by adding the lines tagged ADDED
      import java.awt.*;
      import java.awt.image.BufferedImage;

      static class MyPopup extends PopupMenu {
      }

      public static void main(String[] args) throws Exception {
        SystemTray tray = SystemTray.getSystemTray();
        BufferedImage image = new BufferedImage(16, 16, BufferedImage.TYPE_INT_RGB);
        int i = 0;                         //ADDED
        while (true) {
          PopupMenu popup = new MyPopup();
          TrayIcon newTrayIcon = new TrayIcon(image, "Frame Title", popup);
          tray.add(newTrayIcon);
          tray.remove(newTrayIcon);
          if ((i % 10) == 0) {             //ADDED
            System.gc();                   //ADDED
            System.runFinalization();      //ADDED
            Thread.sleep(100);             //ADDED
          }                                //ADDED
          i++;                             //ADDED
        }
      }

    I removed the array trick in the MyPopup class in order to be able to do at least one loop (see my settings below)
    - my environment:
    ---- Debian testing, kernel 2.6.26-1-686, libc6 2.9-4
    ---- tried with both Java 6u12 and 6u13 (1.6.0_13; Java HotSpot(TM) Client VM 11.3-b02), using NetBeans IDE 6.5 (Build 200811100001)
    ---- used these flags: -Xms5m -Xmx5m

  • Memory leak with callback function

    Hi,
    I am fairly new to LabWindows and the ninetv library; I have mostly been working with LabVIEW.
    I am trying to create a basic (no GUI) C++ client that sets up subscriptions to several network variables publishing DAQ data from a PXI.
    The data for each variable is sent in a cluster and contains various datatypes along with a large int16 2D array for the acquired data (the average array size is 100k in total, and the average time between sends is 10 ms). I have on average 10 of these DAQ variables.
    I am passing the same callback function as an argument to all of these subscriptions (CNVCreateSubscriber).
    It reads all the correct data, but I have one problem: I am experiencing a memory leak in the callback function that I pass to CNVCreateSubscriber.
    I have reduced the code line by line and found the function that actually causes the memory leak, which is CNVGetStructFields(). At this point in the program the data has still not been passed to the client's variables.
    This is a simplified version of the callback function, where I just unpack the cluster and get the data (only one field of the cluster is shown in the example, and declarations are omitted).
    The function is passed to the subscribe function like so:
    static void CNVCALLBACK SubscriberCallback(void * handle, CNVData data, void * callbackData);

    CNVCreateSubscriber(url.c_str(), SubscriberCallback, NULL, 0, CNVWaitForever, 0, &subscriber);

    static void CNVCALLBACK SubscriberCallback(void * handle, CNVData data, void * callbackData)
    {
        int16_t daqValue[100000];
        long unsigned int nDims;
        long unsigned int daqDims[2];
        CNVData fields[1];
        CNVDataType type;
        unsigned short int numFields;

        CNVGetDataType(data, &type, &nDims);
        CNVGetNumberOfStructFields(data, &numFields);
        CNVGetStructFields(data, fields, numFields); /* <------- HERE IS THE PROBLEM; I can comment out the code after this point and it still leaks. */
        CNVGetDataType(fields[0], &type, &nDims);
        CNVGetArrayDataDimensions(fields[0], nDims, daqDims);
        CNVGetArrayDataValue(fields[0], type, daqValue, daqDims[0] * daqDims[1]);
        CNVDisposeData(data);
    }
    At the average settings I use, all of my system's memory (4 GB) is consumed within one hour.
    My question is: has anyone else experienced this, and what could the problem/solution be?
    Thanks.
    Solved!
    Go to Solution.

    Of course... if there is something I hate more than mistakes, it is obvious mistakes.
    Thank you for pointing it out; now everything works.
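    For anyone hitting the same leak: the fix being acknowledged here is presumably that the CNVData handles returned by CNVGetStructFields must themselves be disposed; CNVDisposeData(data) alone only releases the top-level cluster. A minimal sketch of the corrected tail of the callback, assuming that was indeed the fix:

    /* Sketch: dispose each field handle obtained from CNVGetStructFields, not just the parent cluster. */
    CNVGetStructFields(data, fields, numFields);
    CNVGetDataType(fields[0], &type, &nDims);
    CNVGetArrayDataDimensions(fields[0], nDims, daqDims);
    CNVGetArrayDataValue(fields[0], type, daqValue, daqDims[0] * daqDims[1]);
    CNVDisposeData(fields[0]);   /* release the field's CNVData handle */
    CNVDisposeData(data);        /* release the top-level data        */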

  • Memory leak with 1.6.0_07 in applet using Swing

    Java Plug-in 1.6.0_07
    Using JRE version 1.6.0_07 Java HotSpot(TM) Client VM
    Windows XP - SP2
    I have a commercial application that has developed a memory leak with the introduction of the latest plugin. The applets chew up memory and eventually freeze. They did not before. Using jvisualvm I see a build-up of native arrays, primarily int[][] and char[]. I'm still investigating. Has anyone had a similar experience?
    The applet uses a Swing interface, uses buffered images and Swing timers, and regularly performs HTTP connections to the server which result in actions via the SwingUtilities.invokeLater() method.

    "I am using Internet Explorer Browser Version 6.0." Huge security hole.
    "It's not throwing an Error / Exception." Wrap a try/catch at the highest level possible.
    Catch 'Throwable', and log/display it somewhere.
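    A rough sketch of that suggestion, with the top-level catch placed inside the Runnable handed to invokeLater (the handler method name here is purely illustrative):

    SwingUtilities.invokeLater(new Runnable() {
        public void run() {
            try {
                handleServerResponse();   // hypothetical application code
            } catch (Throwable t) {
                // Log/display the problem instead of letting it disappear silently.
                t.printStackTrace();
                JOptionPane.showMessageDialog(null, "Unexpected error: " + t);
            }
        }
    });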
