Database toolkit memory leaks

I have memory leaks in my LabVIEW application, which connects to and inserts data into an MSSQL server using the LabVIEW Database Toolkit.
Versions: LabVIEW 7.1, Toolkit 1.0.1.
I have searched the forums and searched everywhere else... I see there are some known issues (for example the List Tables VI). Removing the List Tables VI helped a lot, but I can still see that I'm losing memory here and there, now and then.
My question is: are there any general guidelines to prevent all the memory leaks mentioned?
Or, possibly, will it all go away if I upgrade the versions? (As I understand it, I have to update LabVIEW to be able to update the Toolkit?)
Best Regards
/Jesper

Have you seen this Developer Zone document?
LabVIEW Database Connectivity Toolset 1.0.1 Known Issues
Functions, VIs, and Express VIs - ID 2F7E2352    
Memory Leak in the Database Toolset in DB Tools Open Schema VI
The DB Tools Open Schema VI leaks memory which, if the VI is called continuously, eventually causes a LabVIEW crash. This is a subVI of the DB Tools Insert Data VI.
Workaround—At the highest level, if you are continuously updating values in a table, make sure the table already exists and leave the Create Table input to the DB Tools Insert Data VI set to FALSE.
Other than this, the most important thing is to close all references...

Similar Messages

  • SQL toolkit 2.2 Stored Procedure Memory Leaks

    Hi
    we are using CVI 2012 and SQL Toolkit 2.2. My DB is "MySQL 5.5.28" and I use "MySQL Connector/ODBC 5.2(w)".
    I use only stored procedures (with and without output parameters). If I continuously call a stored procedure
    using SQL Toolkit code, I get memory leaks!
    My code (without error handler) is:
    // iDbConnId is the handle of DBConnect() called when program starts
    iStmt = DBPrepareSQL(iDbConnId, "spGetPrData");
    DBSetStatementAttribute(iStmt, ATTR_DB_STMT_COMMAND_TYPE, DB_COMMAND_STORED_PROC);
    DBExecutePreparedSQL(iStmt);
    DBBindColLongLong(iStmt, 1, &llPrId, &lStatus1);
    DBBindColInt(iStmt, 2, &iIpPort, &lStatus2);
    while (DBFetchNext(iStmt) != DB_EOF)
    {
        /* get data */
    }
    DBClosePreparedSQL(iStmt);
    DBDiscardSQLStatement(iStmt);
    If I call the same stored procedure by sql code ("CALL spProcedure()")
    I don't have memory leaks!!
    The code is (without error handler):
    // iDbConnId is the handle of DBConnect() called when program starts
    iStmt = DBActivateSQL(iDbConnId, "CALL spGetPrData();");
    DBBindColLongLong(iStmt, 1, &llPrId, &lStatus1);
    DBBindColInt(iStmt, 2, &iIpPort, &lStatus2);
    while (DBFetchNext(iStmt) != DB_EOF)
    {
        /* get data */
    }
    DBDeactivateSQL(iStmt);
    It seems to me that DBDeactivateSQL frees the memory correctly,
    while DBClosePreparedSQL and DBDiscardSQLStatement do not release the memory properly
    (or I am using the library functions improperly!).
    I also have a question to ask:
    Can I open and close the connection every time I connect to the database, or is it mandatory to open and close it just once?
    If the connection must be opened and closed just once, how can I check the status of the database connection?
    Thanks for your help.
    Best Regards
    Patrizio

    Hi,
    the behavior of the functions DBClosePreparedSQL and DBDiscardSQLStatement is a known problem. In fact, there is a CAR (Corrective Action Request 91490) reporting the problem. What version of the toolkit are you using?
    For these functions, refer to the manual, where they are described:
    http://digital.ni.com/manuals.nsf/websearch/D3DF4019471D3A9386256B3C006CDC78
    To avoid memory leaks, DBDeactivateSQL must be used. DBDeactivateSQL is equivalent to calling DBCloseSQLStatement and then DBDiscardSQLStatement, and it works for parameterized SQL as well.
    Regarding the connection, my advice is to open the connection once and close it at the end of your operations. What do you mean by "state of the database connection"? Remember that if you have connection problems or something goes wrong, the errors will appear at any point in your code where you query the database. Bye
    Mario

  • Memory leak pulling in data from database view or database table

    Post Author: Thang Nguyen
    CA Forum: Data Integration
    Hi,
    I'm experiencing memory leaks when using DI to load from a database view or table. I have seen the issue on 11.7.2.0 and 11.7.2.2 and was wondering if anyone else has seen it.
    You can see the row count in the monitor tab going up, but with every 1000 rows it pulls in, the al_engine process consumes more and more memory until it gets to 2GB and crashes with an unknown error.
    Similar behaviour is seen in the validation transform when doing an "IN" against another table, and with table comparisons.
    I've got a WebEx with support tomorrow, as they don't seem to believe that this is happening, and I just want to get a heads-up on whether anyone else is seeing this problem.
    Thanks

    Post Author: tambol
    CA Forum: Data Integration
    Hi,
    I am experiencing a similar error:
    Unknown error in transform <AIView4>.
    I am using an older version of BO. How could I possibly fix this without upgrading to the newest version?
    Please don't be too technical when explaining... I'm new here.
    Thanks a lot!

  • Memory leak when running in database

    I am somewhat new to Java and very new to Java in the DB. I just ran into a problem with what appears to be a memory leak. I have a substantial Java program used to parse XML files. I developed this app in JDeveloper and, for testing purposes, created a method that would connect to the database so that I could run the app from JDeveloper instead of having to deploy it every time I needed to run it. When deployed, the application uses the existing connection to connect to the DB.
    I am running Oracle 9i on a Windows 2000 machine.
    When I run the application through JDeveloper, the javaw.exe process takes up roughly 20 MB of RAM and doesn't increase. I also watched the oracle.exe process and there was little to no increase in the RAM it was using.
    When deployed to the DB and run through a Java stored procedure, the RAM used by the oracle.exe process skyrockets, jumping from 70 MB to 139 MB at about 2 MB per second.
    Hopefully this will make some sense to someone, as posting code would be somewhat difficult considering the size of the project. Is there something I'm missing? I've tried calling the garbage collector explicitly but it has had no effect. I have made sure that all my cursors, statements and result sets are closing. I have a number of Vectors which are all being de-allocated (as far as I can tell). Are there any known issues with the garbage collector in a 9i DB?
    Thanks
    Butch Wesley

    The size of oracle.exe is not an indication of how the Java VM GC works, so you are not comparing apples to apples. It would be too long to explain here, but in my upcoming book (see hereafter) I give a detailed explanation of the various memory areas the Java VM uses, how these are GCed, and also how you can measure their size (not all of them, though).
    In short, you want to use OracleRuntime methods such as:
    OracleRuntime.getSessionSize(); --> gets the current size of Sessionspace
    OracleRuntime.getNewspaceSize(); --> gets the current size of Newspace
    there are other memory areas described in the book
    http://www.oracle.com/technology/pub/articles/mensah_dws.html
    http://www.elsevier.com/wps/find/bookdescription.cws_home/706089/description#description
    Sample chapter: http://www.oracle.com/technology/books/pdfs/mensah_ch1.pdf
    Kuassi
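    As a rough illustration of the approach described above, here is a minimal sketch of a helper that could be loaded into the database and called before and after the XML-parsing routine. It assumes the class runs inside the Oracle JVM, where oracle.aurora.vm.OracleRuntime is available; the class and method names below (other than the OracleRuntime calls named in the reply) are hypothetical.

    import oracle.aurora.vm.OracleRuntime;

    public final class SessionMemoryProbe {
        // Print the current sizes of the per-session memory areas so their
        // growth can be tracked across repeated calls to the Java stored procedure.
        public static void dump(String label) {
            System.out.println(label
                    + " sessionspace=" + OracleRuntime.getSessionSize()
                    + " newspace=" + OracleRuntime.getNewspaceSize());
        }
    }

    Calling dump("before parse") and dump("after parse") from the stored procedure would show which session memory area is actually growing, rather than relying on the size of oracle.exe.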

  • Memory leak when using database connectivity toolset and ODBC driver for MySQL

    The "DB Tools Close Connection VI" does not free all the memory used when "DB Tools Open Connection VI" connects to "MySQL ODBC Driver 3.51".
    I have made a small program that only opens and closes the connection; I run it continuously and it slowly eats all my memory. No error is given from any of the VIs. (I'm using the "Simple Error Handler VI" to check for errors.)
    I've also tried different options in the DSN configuration without any results.
    Attachments:
    TestProgram.vi ‏16 KB
    DSNconfig1.jpg ‏36 KB
    DSNconfig2.jpg ‏49 KB

    Also,
    I've duplicated the OP's example: a simple VI that opens and closes a connection. I can say this definitely causes a memory leak.
    Watching the memory:
    10:17AM - 19308K
    10:19AM - 19432K
    10:22AM - 19764K
    10:58AM - 22124K
    Regards,
    Ken
    Attachments:
    OpenCloseConnection.vi ‏13 KB

  • Memory leak due to heavy database operations

    Our application is a network monitoring application, where we get a lot of data from the network every 2 minutes.
    When we try to process a large amount of data in our application, we start seeing a memory leak. But if we control the data inflow rate, then our application works fine with no leak.
    I am not able to profile our application with tools like JProbe, as the profiler itself runs out of memory even before I can reproduce the issue.
    Most likely the DB is not able to handle the data at the rate we would like it to. We are using MySQL with C3P0.
    I am not sure which objects are not getting garbage collected and are causing our application to run out of memory, since I am not able to profile the application.
    I would appreciate it if anyone could share their experience in debugging such problems.

    We have added a throttling mechanism to temporarily fix the problem. As I said before, we get data every 2 minutes from the network. We have a data structure which is populated when we receive the data. So we count the number of rows in the data structure, and once we reach the throttle we drop the rest of the data. The throttle gets reset so that when data comes in the next cycle we can process it again.
    The problem is that we are not able to handle data at a higher rate. Most of the time the data received from the network is within our throttle limit, so there are no issues; we were seeing issues only when we receive a burst of data. So we fixed the issue temporarily by adding throttling.
    I hope this clarifies your question.
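    As a rough sketch of the per-cycle throttle described above (the class and method names are hypothetical, not taken from the original application):

    import java.util.ArrayList;
    import java.util.List;

    // Rows beyond the per-cycle limit are dropped; the counter resets each cycle.
    final class CycleThrottle<Row> {
        private final int maxRowsPerCycle;
        private final List<Row> buffer = new ArrayList<Row>();

        CycleThrottle(int maxRowsPerCycle) {
            this.maxRowsPerCycle = maxRowsPerCycle;
        }

        // Accept a row unless the current cycle is already full.
        synchronized boolean offer(Row row) {
            if (buffer.size() >= maxRowsPerCycle) {
                return false;                 // drop the rest of this cycle's data
            }
            buffer.add(row);
            return true;
        }

        // Called at the start of each 2-minute cycle: hand the batch off and reset.
        synchronized List<Row> drainForProcessing() {
            List<Row> batch = new ArrayList<Row>(buffer);
            buffer.clear();                   // throttle resets for the next cycle
            return batch;
        }
    }

    Bounding the in-memory batch like this keeps the working set flat even during bursts, at the cost of dropping data once the limit is reached.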

  • Memory leak in VM

    Hello,
    I'd like to point out a memory leak in the VM, and it is very annoying for the application I'm working on. Here is the code that shows the problem :
    import java.awt.Image;
    import java.awt.Toolkit;
    import java.io.File;
    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;

    public final class MemoryLeak {
        private static final void printUsedMem(final MemoryMXBean mx) {
            // I force garbage collection twice to be sure!
            System.gc();
            System.gc();
            System.out.println(mx.getHeapMemoryUsage().getUsed());
        }

        public static final void main(final String[] args) {
            final MemoryMXBean mx = ManagementFactory.getMemoryMXBean();
            final File imageFile = new File("D:\\TEMP\\test1.jpg");
            printUsedMem(mx);
            Image image = Toolkit.getDefaultToolkit().createImage(
                    imageFile.getAbsolutePath());
            printUsedMem(mx);
            image = null;
            printUsedMem(mx);
        }
    }

    This prints:
    149728
    221144
    221144
    The last line should be the same as the first one!
    I'm using jdk 1.5.0_11, on a Windows platform. I found people having the same problem, and the bug is referenced in your database and known for 12 years.
    Bug ID : #4014323
    Link : http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4014323
    Can you give me a workaround for this problem please ?
    Thank you.

    I realize that you're a one-post-wonder, who for some reason thinks that this forum is a direct line to the Sun development team (of which I am not a member), but in case other people read this.
    public static final void main(final String[] args) {
        final MemoryMXBean mx = ManagementFactory.getMemoryMXBean();
        final File imageFile = new File("D:\\TEMP\\test1.jpg");
        printUsedMem(mx);
        Image image = Toolkit.getDefaultToolkit().createImage(
                imageFile.getAbsolutePath());
        printUsedMem(mx);
        image = null;
        printUsedMem(mx);
    }

    OK, so you load a single image into memory, and think that it should be garbage collected after calling System.gc(). As other posters have pointed out, this isn't guaranteed to collect garbage, but it is likely.

    This prints:
    149728
    221144
    221144
    The last line should be the same as the first one!

    Why?
    I can think of a number of reasons why it wouldn't be, the most obvious being that your first line is printed before either the Image or Toolkit classes get loaded. Since Toolkit maintains a lot of in-memory data, it's likely that you're seeing constant overhead that won't be eligible for collection.
    If you want to write a real test, you need to put in a loop, to simulate actual image loading by an application. Ten times should be sufficient: if the memory increases by a constant each time, then you might have found an issue.
    Alternatively, you could use a profiler and see exactly what memory is being held, and by whom.
    I'm using jdk 1.5.0_11, on a Windows platform. I found people having the same problem, and the bug is referenced in your database and known for 12 years.
    Bug ID : #4014323
    Link : http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4014323
    That bug has absolutely nothing to do with your problem. And it was fixed in 1999.
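    To make the loop suggestion above concrete, here is a minimal sketch of such a test; the image path is just the placeholder from the original post, and the Image.flush() call is my addition rather than something the original code did.

    import java.awt.Image;
    import java.awt.Toolkit;
    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;

    public final class ImageLoadLoopTest {
        public static void main(String[] args) {
            MemoryMXBean mx = ManagementFactory.getMemoryMXBean();
            for (int i = 0; i < 10; i++) {
                Image image = Toolkit.getDefaultToolkit()
                        .createImage("D:\\TEMP\\test1.jpg");
                image.flush();   // release the resources the image is holding
                image = null;
                System.gc();     // only a hint; collection is not guaranteed
                System.out.println("iteration " + i + ": used heap = "
                        + mx.getHeapMemoryUsage().getUsed());
            }
        }
    }

    If the used-heap figure grows by a roughly constant amount on every iteration, that points to a real leak; a one-time jump after the first iteration is just the Toolkit and image pipeline classes being loaded.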

  • Pro*C multithreaded application has a memory leak

    Hi there,
    I posted this message a week ago in the OCI section and nobody answered me.
    I am really curious whether my application has a bug or Pro*C has a bug.
    Anyone can compile the sample code and test it easily.
    I made a multithreaded application which executes dynamic SQL, and it works.
    But memory leaks when I execute the SQL statement: the more I execute the SQL
    statement, the more memory leaks, even for the same SQL statement.
    I checked it with the top shell command.
    My machine is a Sun E450, Solaris 8, Oracle 9.2.0.1.
    Compiler: gcc (GCC) 3.2.2
    I changed the source code from
    $(ORACLE_HOME)/precomp/demo/proc/sample10.pc
    sample10 doesn't need to be multithreaded, but I think it should still work
    correctly after I changed it into a multithreaded application.
    The makefile and source code are placed below.
    I have to figure out the problem.
    Please help.
    Thanks in advance,
    The makefile is below:
    HOME = /user/jkku
    ORA = $(ORACLE_HOME)
    CC = gcc
    PROC = proc
    LC_INCL = -I$(HOME)/work/dbmss/libs/include
    lc_incl = include=$(HOME)/work/dbmss/libs/include
    SYS_INCL =
    sys_incl =
    ORA_INCL = -I. \
    -I$(ORA)/precomp/public \
    -I$(ORA)/rdbms/public \
    -I$(ORA)/rdbms/demo \
    -I$(ORA)/rdbms/pbsql/public \
    -I$(ORA)/network/public \
    -DSLMXMX_ENABLE -DSLTS_ENABLE -D_SVID_GETTOD
    INCLUDES = $(LC_INCL) $(SYS_INCL) $(ORA_INCL)
    includes = $(lc_incl) $(sys_incl)
    LC_LIBS =
    SYS_LIBS = -lpthread -lsocket -lnsl -lrt
    ORA_LIBS = -L$(ORA)/lib/ -lclntsh
    LIBS = $(LC_LIBS) $(SYS_LIBS) $(ORA_LIBS)
    # Define C Compiler flags
    CFLAGS += -D_Solaris64_ -m64
    CFLAGS += -g -D_REENTRANT
    # Define pro*c Compiler flags
    PROCFLAGS += THREADS=YES
    PROCFLAGS += CPOOL=YES
    # Our object files
    PRECOMPS = sample10.c
    OBJS = sample10.o
    .SUFFIXES: .o .c .pc
    .c.o:
    $(CC) -c $(CFLAGS) $(INCLUDES) $*.c
    .pc.c:
    $(PROC) $(PROCFLAGS) $(includes) $*.pc $*.c
    all: sample10
    sample10: $(PRECOMPS) $(OBJS)
    $(CC) $(CFLAGS) -o sample10 $(OBJS) $(LIBS)
    clean:
    rm -rf *.o sample10 sample10.c
    The source code below is the Oracle sample10.pc changed into a
    multithreaded application.
    Sample Program 10: Dynamic SQL Method 4
    This program connects you to ORACLE using your username and
    password, then prompts you for a SQL statement. You can enter
    any legal SQL statement. Use regular SQL syntax, not embedded SQL.
    Your statement will be processed. If it is a query, the rows
    fetched are displayed.
    You can enter multi-line statements. The limit is 1023 characters.
    This sample program only processes up to MAX_ITEMS bind variables and
    MAX_ITEMS select-list items. MAX_ITEMS is #defined to be 40.
    #include <stdio.h>
    #include <string.h>
    #include <setjmp.h>
    #include <sqlda.h>
    #include <stdlib.h>
    #include <sqlcpr.h>
    /* Maximum number of select-list items or bind variables. */
    #define MAX_ITEMS 40
    /* Maximum lengths of the names of the
    select-list items or indicator variables. */
    #define MAX_VNAME_LEN 30
    #define MAX_INAME_LEN 30
    #ifndef NULL
    #define NULL 0
    #endif
    /* Prototypes */
    #if defined(__STDC__)
    void sql_error(void);
    int oracle_connect(void);
    int alloc_descriptors(int, int, int);
    int get_dyn_statement(void);
    void set_bind_variables(void);
    void process_select_list(void);
    void help(void);
    #else
    void sql_error(/*_ void _*/);
    int oracle_connect(/*_ void _*/);
    int alloc_descriptors(/*_ int, int, int _*/);
    int get_dyn_statement(/* void _*/);
    void set_bind_variables(/*_ void -*/);
    void process_select_list(/*_ void _*/);
    void help(/*_ void _*/);
    #endif
    char *dml_commands[] = {"SELECT", "select", "INSERT", "insert",
    "UPDATE", "update", "DELETE", "delete"};
    EXEC SQL INCLUDE sqlda;
    EXEC SQL INCLUDE sqlca;
    EXEC SQL BEGIN DECLARE SECTION;
    char dyn_statement[1024];
    EXEC SQL VAR dyn_statement IS STRING(1024);
    EXEC SQL END DECLARE SECTION;
    EXEC ORACLE OPTION (ORACA=YES);
    EXEC ORACLE OPTION (RELEASE_CURSOR=YES);
    SQLDA *bind_dp;
    SQLDA *select_dp;
    /* Define a buffer to hold longjmp state info. */
    jmp_buf jmp_continue;
    char *db_uid="dbmuser/dbmuser@dbmdb";
    sql_context ctx;
    int err_sql;
    enum{
    SQL_SUCC=0,
    SQL_ERR,
    SQL_NOTFOUND,
    SQL_UNIQUE,
    SQL_DISCONNECT,
    SQL_NOTNULL
    int main()
    int i;
    EXEC SQL ENABLE THREADS;
    EXEC SQL WHENEVER SQLERROR DO sql_error();
    EXEC SQL WHENEVER NOT FOUND DO sql_not_found();
    /* Connect to the database. */
    if (connect_database() < 0)
    exit(1);
    EXEC SQL CONTEXT USE :ctx;
    /* Process SQL statements. */
    for (;;)
    /* Allocate memory for the select and bind descriptors. */
    if (alloc_descriptors(MAX_ITEMS, MAX_VNAME_LEN, NAME_LEN) != 0)
    exit(1);
    (void) setjmp(jmp_continue);
    /* Get the statement. Break on "exit". */
    if (get_dyn_statement() != 0)
    break;
    EXEC SQL PREPARE S FROM :dyn_statement;
    EXEC SQL DECLARE C CURSOR FOR S;
    /* Set the bind variables for any placeholders in the
    SQL statement. */
    set_bind_variables();
    /* Open the cursor and execute the statement.
    * If the statement is not a query (SELECT), the
    * statement processing is completed after the
    * OPEN.
    EXEC SQL OPEN C USING DESCRIPTOR bind_dp;
    /* Call the function that processes the select-list.
    * If the statement is not a query, this function
    * just returns, doing nothing.
    process_select_list();
    /* Tell user how many rows processed. */
    for (i = 0; i < 8; i++)
    if (strncmp(dyn_statement, dml_commands, 6) == 0)
    printf("\n\n%d row%c processed.\n", sqlca.sqlerrd[2], sqlca.sqlerrd[2] == 1 ? '\0' : 's');
    break;
    /* Close the cursor. */
    EXEC SQL CLOSE C;
    /* When done, free the memory allocated for pointers in the bind and
    select descriptors. */
    for (i = 0; i < MAX_ITEMS; i++)
    if (bind_dp->V != (char *) 0)
    free(bind_dp->V);
    free(bind_dp->I); /* MAX_ITEMS were allocated. */
    if (select_dp->V != (char *) 0)
    free(select_dp->V);
    free(select_dp->I); /* MAX_ITEMS were allocated. */
    /* Free space used by the descriptors themselves. */
    SQLSQLDAFree(ctx, bind_dp);
    SQLSQLDAFree(ctx, select_dp);
    } /* end of for(;;) statement-processing loop */
    disconnect_database();
    EXEC SQL WHENEVER SQLERROR CONTINUE;
    EXEC SQL COMMIT WORK RELEASE;
    puts("\nHave a good day!\n");
    return;
    * Allocate the BIND and SELECT descriptors using sqlald().
    * Also allocate the pointers to indicator variables
    * in each descriptor. The pointers to the actual bind
    * variables and the select-list items are realloc'ed in
    * the set_bind_variables() or process_select_list()
    * routines. This routine allocates 1 byte for select_dp->V
    * and bind_dp->V, so the realloc will work correctly.
    alloc_descriptors(size, max_vname_len, max_iname_len)
    int size;
    int max_vname_len;
    int max_iname_len;
    int i;
    * The first sqlald parameter determines the maximum number of
    * array elements in each variable in the descriptor. In
    * other words, it determines the maximum number of bind
    * variables or select-list items in the SQL statement.
    * The second parameter determines the maximum length of
    * strings used to hold the names of select-list items
    * or placeholders. The maximum length of column
    * names in ORACLE is 30, but you can allocate more or less
    * as needed.
    * The third parameter determines the maximum length of
    * strings used to hold the names of any indicator
    * variables. To follow ORACLE standards, the maximum
    * length of these should be 30. But, you can allocate
    * more or less as needed.
    if ((bind_dp =
    SQLSQLDAAlloc(ctx, size, max_vname_len, max_iname_len)) ==
    (SQLDA *) 0)
    fprintf(stderr,
    "Cannot allocate memory for bind descriptor.");
    return -1; /* Have to exit in this case. */
    if ((select_dp =
    SQLSQLDAAlloc(ctx, size, max_vname_len, max_iname_len)) == (SQLDA *)
    0)
    fprintf(stderr,
    "Cannot allocate memory for select descriptor.");
    return -1;
    select_dp->N = MAX_ITEMS;
    /* Allocate the pointers to the indicator variables, and the
    actual data. */
    for (i = 0; i < MAX_ITEMS; i++) {
    bind_dp->I = (short *) malloc(sizeof (short));
    select_dp->I = (short *) malloc(sizeof(short));
    bind_dp->V = (char *) malloc(1);
    select_dp->V = (char *) malloc(1);
    return 0;
    int get_dyn_statement()
    char *cp, linebuf[256];
    int iter, plsql;
    for (plsql = 0, iter = 1; ;)
    if (iter == 1)
    printf("\nSQL> ");
    dyn_statement[0] = '\0';
    fgets(linebuf, sizeof linebuf, stdin);
    cp = strrchr(linebuf, '\n');
    if (cp && cp != linebuf)
    *cp = ' ';
    else if (cp == linebuf)
    continue;
    if ((strncmp(linebuf, "EXIT", 4) == 0) ||
    (strncmp(linebuf, "exit", 4) == 0))
    return -1;
    else if (linebuf[0] == '?' ||
    (strncmp(linebuf, "HELP", 4) == 0) ||
    (strncmp(linebuf, "help", 4) == 0))
    help();
    iter = 1;
    continue;
    if (strstr(linebuf, "BEGIN") ||
    (strstr(linebuf, "begin")))
    plsql = 1;
    strcat(dyn_statement, linebuf);
    if ((plsql && (cp = strrchr(dyn_statement, '/'))) ||
    (!plsql && (cp = strrchr(dyn_statement, ';'))))
    *cp = '\0';
    break;
    else
    iter++;
    printf("%3d ", iter);
    return 0;
    void set_bind_variables()
    int i, n;
    char bind_var[64];
    /* Describe any bind variables (input host variables) */
    EXEC SQL WHENEVER SQLERROR DO sql_error();
    bind_dp->N = MAX_ITEMS; /* Initialize count of array elements. */
    EXEC SQL DESCRIBE BIND VARIABLES FOR S INTO bind_dp;
    /* If F is negative, there were more bind variables
    than originally allocated by sqlald(). */
    if (bind_dp->F < 0)
    printf ("\nToo many bind variables (%d), maximum is %d\n.",
    -bind_dp->F, MAX_ITEMS);
    return;
    /* Set the maximum number of array elements in the
    descriptor to the number found. */
    bind_dp->N = bind_dp->F;
    /* Get the value of each bind variable as a
    * character string.
    * C contains the length of the bind variable
    * name used in the SQL statement.
    * S contains the actual name of the bind variable
    * used in the SQL statement.
    * L will contain the length of the data value
    * entered.
    * V will contain the address of the data value
    * entered.
    * T is always set to 1 because in this sample program
    * data values for all bind variables are entered
    * as character strings.
    * ORACLE converts to the table value from CHAR.
    * I will point to the indicator value, which is
    * set to -1 when the bind variable value is "null".
    for (i = 0; i < bind_dp->F; i++)
    printf ("\nEnter value for bind variable %.*s: ",
    (int)bind_dp->C, bind_dp->S);
    fgets(bind_var, sizeof bind_var, stdin);
    /* Get length and remove the new line character. */
    n = strlen(bind_var) - 1;
    /* Set it in the descriptor. */
    bind_dp->L = n;
    /* (re-)allocate the buffer for the value.
    sqlald() reserves a pointer location for
    V but does not allocate the full space for
    the pointer. */
    bind_dp->V = (char *) realloc(bind_dp->V, (bind_dp->L + 1));
    /* And copy it in. */
    strncpy(bind_dp->V, bind_var, n);
    /* Set the indicator variable's value. */
    if ((strncmp(bind_dp->V, "NULL", 4) == 0) ||
    (strncmp(bind_dp->V, "null", 4) == 0))
    *bind_dp->I = -1;
    else
    *bind_dp->I = 0;
    /* Set the bind datatype to 1 for CHAR. */
    bind_dp->T = 1;
    return;
    void process_select_list()
    int i, null_ok, precision, scale;
    if ((strncmp(dyn_statement, "SELECT", 6) != 0) &&
    (strncmp(dyn_statement, "select", 6) != 0))
    select_dp->F = 0;
    return;
    /* If the SQL statement is a SELECT, describe the
    select-list items. The DESCRIBE function returns
    their names, datatypes, lengths (including precision
    and scale), and NULL/NOT NULL statuses. */
    select_dp->N = MAX_ITEMS;
    EXEC SQL DESCRIBE SELECT LIST FOR S INTO select_dp;
    /* If F is negative, there were more select-list
    items than originally allocated by sqlald(). */
    if (select_dp->F < 0)
    printf ("\nToo many select-list items (%d), maximum is %d\n",
    -(select_dp->F), MAX_ITEMS);
    return;
    /* Set the maximum number of array elements in the
    descriptor to the number found. */
    select_dp->N = select_dp->F;
    /* Allocate storage for each select-list item.
    sqlprc() is used to extract precision and scale
    from the length (select_dp->L).
    sqlnul() is used to reset the high-order bit of
    the datatype and to check whether the column
    is NOT NULL.
    CHAR datatypes have length, but zero precision and
    scale. The length is defined at CREATE time.
    NUMBER datatypes have precision and scale only if
    defined at CREATE time. If the column
    definition was just NUMBER, the precision
    and scale are zero, and you must allocate
    the required maximum length.
    DATE datatypes return a length of 7 if the default
    format is used. This should be increased to
    9 to store the actual date character string.
    If you use the TO_CHAR function, the maximum
    length could be 75, but will probably be less
    (you can see the effects of this in SQL*Plus).
    ROWID datatype always returns a fixed length of 18 if
    coerced to CHAR.
    LONG and
    LONG RAW datatypes return a length of 0 (zero),
    so you need to set a maximum. In this example,
    it is 240 characters.
    printf ("\n");
    for (i = 0; i < select_dp->F; i++)
    char title[MAX_VNAME_LEN];
    /* Turn off high-order bit of datatype (in this example,
    it does not matter if the column is NOT NULL). */
    sqlnul ((unsigned short *)&(select_dp->T), (unsigned short
    *)&(select_dp->T), &null_ok);
    switch (select_dp->T)
    case 1 : /* CHAR datatype: no change in length
    needed, except possibly for TO_CHAR
    conversions (not handled here). */
    break;
    case 2 : /* NUMBER datatype: use sqlprc() to
    extract precision and scale. */
    sqlprc ((unsigned int *)&(select_dp->L), &precision,
    &scale);
    /* Allow for maximum size of NUMBER. */
    if (precision == 0) precision = 40;
    /* Also allow for decimal point and
    possible sign. */
    /* convert NUMBER datatype to FLOAT if scale > 0,
    INT otherwise. */
    if (scale > 0)
    select_dp->L = sizeof(float);
    else
    select_dp->L = sizeof(int);
    break;
    case 8 : /* LONG datatype */
    select_dp->L = 240;
    break;
    case 11 : /* ROWID datatype */
    case 104 : /* Universal ROWID datatype */
    select_dp->L = 18;
    break;
    case 12 : /* DATE datatype */
    select_dp->L = 9;
    break;
    case 23 : /* RAW datatype */
    break;
    case 24 : /* LONG RAW datatype */
    select_dp->L = 240;
    break;
    /* Allocate space for the select-list data values.
    sqlald() reserves a pointer location for
    V but does not allocate the full space for
    the pointer. */
    if (select_dp->T != 2)
    select_dp->V = (char *) realloc(select_dp->V,
    select_dp->L + 1);
    else
    select_dp->V = (char *) realloc(select_dp->V,
    select_dp->L);
    /* Print column headings, right-justifying number
    column headings. */
    /* Copy to temporary buffer in case name is null-terminated */
    memset(title, ' ', MAX_VNAME_LEN);
    strncpy(title, select_dp->S, select_dp->C);
    if (select_dp->T == 2)
    if (scale > 0)
    printf ("%.*s ", select_dp->L+3, title);
    else
    printf ("%.*s ", select_dp->L, title);
    else
    printf("%-.*s ", select_dp->L, title);
    /* Coerce ALL datatypes except for LONG RAW and NUMBER to
    character. */
    if (select_dp->T != 24 && select_dp->T != 2)
    select_dp->T = 1;
    /* Coerce the datatypes of NUMBERs to float or int depending on
    the scale. */
    if (select_dp->T == 2)
    if (scale > 0)
    select_dp->T = 4; /* float */
    else
    select_dp->T = 3; /* int */
    printf ("\n\n");
    /* FETCH each row selected and print the column values. */
    EXEC SQL WHENEVER NOT FOUND GOTO end_select_loop;
    for (;;)
    EXEC SQL FETCH C USING DESCRIPTOR select_dp;
    /* Since each variable returned has been coerced to a
    character string, int, or float very little processing
    is required here. This routine just prints out the
    values on the terminal. */
    for (i = 0; i < select_dp->F; i++)
    if (*select_dp->I < 0)
    if (select_dp->T == 4)
    printf ("%-*c ",(int)select_dp->L+3, ' ');
    else
    printf ("%-*c ",(int)select_dp->L, ' ');
    else
    if (select_dp->T == 3) /* int datatype */
    printf ("%*d ", (int)select_dp->L,
    *(int *)select_dp->V);
    else if (select_dp->T == 4) /* float datatype */
    printf ("%*.2f ", (int)select_dp->L,
    *(float *)select_dp->V);
    else /* character string */
    printf ("%-*.*s ", (int)select_dp->L,
    (int)select_dp->L, select_dp->V);
    printf ("\n");
    end_select_loop:
    return;
    void help()
    puts("\n\nEnter a SQL statement or a PL/SQL block at the SQL> prompt.");
    puts("Statements can be continued over several lines, except");
    puts("within string literals.");
    puts("Terminate a SQL statement with a semicolon.");
    puts("Terminate a PL/SQL block (which can contain embedded
    semicolons)");
    puts("with a slash (/).");
    puts("Typing \"exit\" (no semicolon needed) exits the program.");
    puts("You typed \"?\" or \"help\" to get this message.\n\n");
    int connect_database()
    err_sql = SQL_SUCC;
    EXEC SQL WHENEVER SQLERROR DO sql_error();
    EXEC SQL WHENEVER NOT FOUND DO sql_not_found();
    EXEC SQL CONTEXT ALLOCATE :ctx;
    EXEC SQL CONTEXT USE :ctx;
    EXEC SQL CONNECT :db_uid;
    if(err_sql != SQL_SUCC){
    printf("err => connect database(ctx:%ld, uid:%s) failed!\n", ctx, db_uid);
    return -1;
    return 1;
    int disconnect_database()
    err_sql = SQL_SUCC;
    EXEC SQL WHENEVER SQLERROR DO sql_error();
    EXEC SQL WHENEVER NOT FOUND DO sql_not_found();
    EXEC SQL CONTEXT USE :ctx;
    EXEC SQL COMMIT WORK RELEASE;
    EXEC SQL CONTEXT FREE:ctx;
    return 1;
    void sql_error()
    printf("err => %.*s", sqlca.sqlerrm.sqlerrml, sqlca.sqlerrm.sqlerrmc);
    printf("in \"%.*s...\'\n", oraca.orastxt.orastxtl, oraca.orastxt.orastxtc);
    printf("on line %d of %.*s.\n\n", oraca.oraslnr, oraca.orasfnm.orasfnml,
    oraca.orasfnm.orasfnmc);
    switch(sqlca.sqlcode) {
    case -1: /* unique constraint violated */
    err_sql = SQL_UNIQUE;
    break;
    case -1012: /* not logged on */
    case -1089:
    case -3133:
    case -1041:
    case -3114:
    case -3113:
    /* immediate shutdown in progress - no operations are permitted */
    /* end-of-file on communication channel */
    /* internal error. hostdef extension doesn't exist */
    err_sql = SQL_DISCONNECT;
    break;
    case -1400:
    err_sql = SQL_NOTNULL;
    break;
    default:
    err_sql = SQL_ERR;
    break;
    EXEC SQL CONTEXT USE :ctx;
    EXEC SQL WHENEVER SQLERROR CONTINUE;
    EXEC SQL ROLLBACK WORK;
    void sql_not_found()
    err_sql = SQL_NOTFOUND;

    Hi Jane,
    What version of Berkeley DB XML are you using?
    What is your operating system and your hardware platform?
    For how long have been the application running?
    What is your current container size?
    What's set for EnvironmentConfig.setThreaded?
    Do you know if containers have previously not been closed correctly?
    Can you please post the entire error output?
    What's the JDK version, 1.4 or 1.5?
    Thanks,
    Bogdan

  • Memory leak in Application

    Hi,
    we're load testing an application using Coherence. It uses a distributed cache, with a web application as the client, and separate cache servers implementing a CacheStore.
    We're getting several memory leaks, and one area that was reported was a great number of instances of the com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache$GetRequest class.
    Does anybody know what could cause these instances to be left hanging around? I think the Coherence classes are a symptom of another problem rather than an issue with Coherence itself, but it would be useful to know what could cause these objects to be left in the heap.
    Thanks
    Mike

    After further load testing, our application definitely seems to be leaking Coherence objects.
    The classes seem to be com.tangosol.util.Binary &
    com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache$GetRequest.
    During testing, the number of Binary objects rose from 0 to about 9000 and the number of GetRequest objects rose from 0 to about 3600 in two hours.
    The application stores objects in the cache and updates them, the main operation being to append to a very long log string (required by the client) on these objects, which represent sessions.
    Our test keeps a constant number of these objects in the cache during the run, removing the same number it creates after it has ramped up to full load.
    The application is stateless; it receives frequent requests to get an object from the cache, work on it and put it back in the cache. The cache also has a cache store to persist the data to a database.
    Are there any references to the cache objects that would stop them being garbage collected?
    Our whole application is based on a simple stateless request/response operation and I cannot see where the huge number of objects is coming from.
    Mike
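    For reference, here is a minimal sketch of the stateless get/modify/put cycle described above, written so that no reference to the cached value outlives the request. The cache name "sessions" is hypothetical, and the cached value is treated as a plain log String for simplicity.

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public final class SessionLogAppender {
        public static void appendLog(String sessionId, String logLine) {
            NamedCache cache = CacheFactory.getCache("sessions");   // hypothetical cache name
            String log = (String) cache.get(sessionId);             // deserialized local copy
            String updated = (log == null ? "" : log) + logLine;    // work on the copy
            cache.put(sessionId, updated);                          // write it back
            // Nothing (no field, static collection or listener) keeps a reference to
            // the value after this point, so the request/response path itself should
            // not be what pins Binary or GetRequest instances.
        }
    }

    If the client code follows this shape and the objects still accumulate, it is worth checking for listeners, static maps, or other long-lived structures that might still reference the results.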

  • Memory Leak in Report after upgrading from Crystal Reports 10.5 to 13.0.1/2

    I'm currently having an issue with Crystal Reports 13 (Visual Studio 2010). We have recently updated our CRM solution to use the newer Crystal Reports runtime, as we are now using Visual Studio 2010.
    We have a client with a report containing a subreport that pulls an image from an MSSQL database as a BLOB image field; these are scanned images relating to the report. The report worked fine before the client updated our CRM solution to the latest version. Now, for each individual page containing a scanned image, the application swallows 100 MB of RAM. As there are around 32 of these scanned images and our solution is a 32-bit application, we are getting out-of-memory errors due to the 2 GB limit addressable by a 32-bit process.
    The images are around 4 MP and stored in JPG format in the database, so they should not be consuming over 100 MB of RAM per image displayed by the report, even if they are being stored uncompressed in RGBA format.
    Likewise, when viewing the pages of the report manually, after a specific page (when the amount of memory addressable by a 32-bit application is hit) the images just don't display; any further pages with the image are not displayed, and no error or exception is given.
    I have tried re-saving the .rpt files so that they are in the newer Crystal Reports format and this still happens; likewise, I have tried uninstalling the 13.0.1 runtime and installing the 13.0.2 runtime.
    I am just about to check the SQL which pulls the image for the subreport. Although I am sure each subreport should only be pulling one row with one JPEG image, it may be that the subreport is holding quite a few images but only displaying the first.
    Likewise, if all else fails I will try re-creating the report, as I have experienced issues with some other specific reports doing strange things after being updated from the 2008 runtime. I really don't like that idea given how fiddly Crystal Reports can be; it is good and does the job, but it takes far longer than some other solutions to get what you are trying to achieve done.
    This report had been working fine for 2+ years before the client updated to the most recent version of our CRM software.
    Has anyone else experienced similar issues with the latest runtime?

    I have just been reviewing the code for this and it appears that the subreport is pulling all of the images.
    It is strange that this previously worked fine. It seems the newer runtime does not dispose of the data once it has been displayed on a subreport, which would explain the memory leak: it calls the select again, pulling approximately 60 images of roughly 800-900 KB each, plus the subreport and the uncompressed image to display, and then filters.
    I am about to modify this report and will post whether the fix I put in place resolves the issue.

  • Memory leak in weblogic 6.0 sp2 oracle 8.1.7 thin driver

    Hi,
         I have a simple client that opens a database connection, selects from
    a table containing five rows of data (with four columns in each row)
    and then closes all connections. On running this in a loop, I get the
    following error after some time:
    <Nov 28, 2001 5:57:40 PM GMT+06:00> <Error> <Adapter>
    <OutOfMemoryError in
    Adapter
    java.lang.OutOfMemoryError
    <<no stack trace available>>
    >
    <Nov 28, 2001 5:57:40 PM GMT+06:00> <Error> <Kernel> <ExecuteRequest
    failed
    java.lang.OutOfMemoryError
    I am running with a heap size of 64 Mb. The java command that runs
    the client is:
    java -ms64m -mx64m -cp .:/opt/bea/wlserver6.0/lib/weblogic.jar \
        -Djava.naming.factory.initial=weblogic.jndi.WLInitialContextFactory \
        -Djava.naming.provider.url=t3://garlic:7001 -verbose:gc Test
    The following is the client code that opens the db connection and does
    the select:
    import java.util.*;
    import java.sql.*;
    import javax.naming.*;
    import javax.sql.*;

    public class Test {
        private static final String strQuery = "SELECT * from tblPromotion";

        public static void main(String argv[]) throws Exception {
            String ctxFactory = System.getProperty("java.naming.factory.initial");
            String providerUrl = System.getProperty("java.naming.provider.url");
            Properties jndiEnv = System.getProperties();
            System.out.println("ctxFactory : " + ctxFactory);
            System.out.println("ProviderURL : " + providerUrl);
            Context ctx = new InitialContext(jndiEnv);
            for (int i = 0; i < 1000000; i++) {
                System.out.println("Running query for the " + i + " time");
                Connection con = null;
                Statement stmnt = null;
                ResultSet rs = null;
                try {
                    DataSource ds = (DataSource) ctx.lookup(
                            System.getProperty("eaMDataStore", "jdbc/eaMarket"));
                    con = ds.getConnection();
                    stmnt = con.createStatement();
                    rs = stmnt.executeQuery(strQuery);
                    while (rs.next()) {
                        // rows are read and discarded
                    }
                    ds = null;
                } catch (java.sql.SQLException sqle) {
                    System.out.println("SQL Exception : " + sqle.getMessage());
                } finally {
                    try {
                        rs.close();
                        rs = null;
                    } catch (Exception e) {
                        System.out.println("Exception closing result set");
                    }
                    try {
                        stmnt.close();
                        stmnt = null;
                    } catch (Exception e) {
                        System.out.println("Exception closing statement");
                    }
                    try {
                        con.close();
                        con = null;
                    } catch (Exception e) {
                        System.out.println("Exception closing connection");
                    }
                }
            }
        }
    }
    I am using the Oracle 8.1.7 thin driver. Please let me know if this
    memory leak is a known issue or if it's something I am doing.
    thanks,
    rudy

    Repost in JDBC section ... very serious issue but it may be due to Oracle or
    to WL ... does it happen if you test inside WL itself?
    How many iterations does it take to blow? How long? Does changing to a
    different driver (maybe Cloudscape) have the same result?
    Peace,
    Cameron Purdy
    Tangosol Inc.
    "R.C." <[email protected]> wrote in message
    news:[email protected]...

  • Memory leak issue with link server between SQL Server 2012 and Oracle

    Hi,
    We are trying to use the linked server feature in SQL Server 2012 to connect SQL Server and an Oracle database. We are concerned about the existing memory leak issue; for more context please refer to this link:
    http://blogs.msdn.com/b/psssql/archive/2009/09/22/if-you-use-linked-server-queries-you-need-to-read-this.aspx
    The above link talks about the issues with SQL Server versions 2005 and 2008; I am not sure if this is still the case in 2012. I could not find any article that says whether this issue was fixed by Microsoft in a later version.
    We know that the SQL Server process can crash because of a third-party linked server provider that is loaded inside the SQL Server process: if the third-party linked server provider is enabled together with the Allow inprocess option, the SQL Server process crashes when this third-party provider experiences internal problems.
    We wanted to know if this is fixed in SQL Server 2012?

    So is your question more of an informational one, or are you really facing an OOM issue?
    There can be two causes for OOM:
    1. There is a bug in SQL Server which is causing the issue, which might be fixed in 2012.
    2. The linked server provider used to connect to Oracle is not up to date and some patch is missing, or a more recent version should be used. Did you make sure that you are using the latest version?
    What Oracle version are you trying to connect to (9i, 10g, R2...)?

  • Memory leak in Crystal Reports 2008

    We are running CR runtime 2008 version 12.3.4 and it looks like there is a memory leak. This is what we see happen (see chart #1): just between these 4 report runs, memory usage has gone up 64 MB. We lose 20 MB of memory with each report that we call. We actually see memory keep going up until the application fails due to out-of-memory issues at around 1100 MB of memory usage. This has been an issue ever since Crystal 2008 was released; we did not see this issue with Crystal Reports 11 or 10.
    Chart #1: memory usage in MB after opening and closing a small CR report
    Start 123
    open rpt 225
    close rpt 204
    open rpt 245
    close rpt 226
    open rpt 266
    close rpt 247
    open rpt 287
    close rpt 268
    When is SAP going to address this issue? Our workaround for the last 2 years has been to close the CR application and reopen it multiple times a day to free up this lost memory and to prevent "Out of memory" errors. We will be forced to move to Reporting Services if this issue is not addressed soon.
    Josh

    Hi Josh,
    There are no memory leaks in CR 10, 11, 2008 or 2010 or 2011.
    Make sure you are closing and disposing of all of your report objects, database connections, DataSets, etc., and then calling GC.Collect.
    And if you want to pursue this, I suggest you give more details rather than just saying there is a leak and showing a bunch of numbers that mean nothing without the code to verify them.
    Thank you
    Don

  • Memory leak in occi

    Hi ,
    I am working on Solaris 9 x86, with Oracle installed on it. I am getting memory leaks reported in the OCCI library.
    Machine Details:
    Hostname: ALEXANDER
    Hostid: 2ed11ae9
    Release: 5.9
    Kernel architecture: i86pc
    Application architecture: i386
    Hardware provider:
    Domain:
    Kernel version: SunOS 5.9 Generic 112234-10 Nov 2003
    Database Version :
    Oracle Database 10g Enterprise Edition Release 10.1.0.3.0 - Production
    With the Partitioning, OLAP and Data Mining options
    Compiler Version:
    CC: Sun C++ 5.5 Patch 113819-09 2004/08/03.
    Compiler arguments:
    CC -w -g Main.cpp -I$ORACLE_INCLUDE -o Main -lclntsh -locci
    I started with the demo apps for OCCI that come with Oracle.
    After running the executables and checking with the bcheck utility, I get the following errors due to memory leaks:
    bcheck -all ./Main
    Actual leaks report (actual leaks: 2 total size: 43 bytes)
    <rtc> Memory Leak (mel):
    Found leaked block of size 31 bytes at address 0x812d6f8
    At time of allocation, the call stack was:
         [1] operator new() at 0xd7906718
         [2] operator new[]() at 0xd7905ab8
         [3] oracle::occi::StatementImpl::do_setSQL() at 0xda13e681
         [4] oracle::occi::StatementImpl::StatementImpl() at 0xda13e1bf
         [5] oracle::occi::ConnectionImpl::createStatement() at 0xda136863
         [6] occidml::executeSelectStatement() at line 46 in "Main.h"
         [7] main() at line 12 in "Main.cpp"
    <rtc> Memory Leak (mel):
    Found leaked block of size 12 bytes at address 0x8084c40
    At time of allocation, the call stack was:
         [1] operator new() at 0xd7906718
         [2] std::vector<OCIParam*,std::allocator<OCIParam*> >::__insert_aux() at 0xda145c19
         [3] std::vector<OCIParam*,std::allocator<OCIParam*> >::resize() at 0xda145443
         [4] oracle::occi::StatementImpl::initParamVec() at 0xda144e70
         [5] oracle::occi::ResultSetImpl::next() at 0xda146404
         [6] main() at line 15 in "Main.cpp"
    If anybody needs more info, please send me your email address so that I can send my source files.

    Can you post this in the OCCI forum?

  • ITunes 6.0.4(3) memory leak

    I know this topic has popped up on the Windows forum, but I'd like to officially document the same problem in iTunes for OSX.
    Using the iStat Nano widget, I've been monitoring my memory usage when running iTunes. When I first start the program, memory usage jumps to 600 MB. The longer iTunes is open, the more memory disappears, whether I'm actively using the program or not (i.e. listening to music). Eventually, this causes a hard crash. By hard crash I mean that I must use the on/off button to reboot the machine. I have had this problem since getting my computer (in mid-March), so there has been at least one iTunes update since then that has not fixed this issue. (I have a previous post on this topic called "iTunes freezing" that was made before I realized that it was caused by a memory leak.)
    I am very, very pleased with my MacBook Pro and I realize that this is not a hardware issue. Has anyone else encountered this problem? Are there any plans to fix it in a future update?
    Thanks in advance for any comments, suggestions, or solutions!
    2.16GHz 15.4" MacBook Pro, 2GB RAM   Mac OS X (10.4.6)   4G 40GB Photo iPod, 5G 2GB nano

    I posted a question this evening on what sounds like the same issue: the link is here:
    http://discussions.apple.com/thread.jspa?threadID=548752
    I think your original post makes clear that this issue is about iTunes itself; it's not about any devices or iPods, nor does it have to do with Intel or non-Intel processors... I will try looking at the memory stats too.
    What baffles me is that I had been using iTunes 6 for quite a long time (and only updating it manually, not automatically) before I ever experienced this.
    My hunch (but what do I know?) is that my initial freeze might have happened for some other reason (too many applications working hard at once, as sometimes happens; though I have had this happen reasonably often on the PowerBook, it is extremely rare on the G5), and then in the course of that freeze/crash something in iTunes got corrupted (perhaps the database file that allows iTunes to navigate the iTunes library?).
    By the way, what is the technical name for the total freeze thing (i.e. no beachballs, just complete loss of mouse and keyboard response, the clock stops, no shortcuts work, and only the power button allows you to shut down)?
    Has anyone with this issue tried uninstalling iTunes (by Trashing the App) and reinstalling from a newly-downloaded copy? (I might try that ...)
    PowerMac G5 Dual 1.8GHz, PowerBook G4 12" 1.33GHz   Mac OS X (10.4.5)  
