ResultSet from Database results in out of memory

The application generates reports based on a huge amount of data in the database. The JDBC query results in an out-of-memory error while fetching 72,000 records. Is there any solution on the application side that could resolve this problem? Mail [email protected]

Let's see...
72,000 rows, with each row on a line and 80 lines per page, give a 900-page report.
Is someone going to actually read this? Is it possible that they actually want something else, like a summary?
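If the full detail report really is needed, one application-side approach is to stream rows straight to the report output instead of collecting them in memory. A minimal sketch follows; the JDBC URL, query, and column names are placeholders, and setFetchSize is only a hint that the driver may or may not honor:

import java.io.BufferedWriter;
import java.io.FileWriter;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ReportWriter {
    public static void main(String[] args) throws Exception {
        Connection con = DriverManager.getConnection(args[0]); // JDBC URL supplied by the caller
        Statement stmt = con.createStatement(ResultSet.TYPE_FORWARD_ONLY,
                                             ResultSet.CONCUR_READ_ONLY);
        stmt.setFetchSize(500); // ask the driver to fetch rows in chunks
        ResultSet rs = stmt.executeQuery("SELECT id, amount FROM report_data"); // placeholder query
        BufferedWriter out = new BufferedWriter(new FileWriter("report.txt"));
        try {
            while (rs.next()) {
                // write each row immediately instead of holding all 72,000 rows in memory
                out.write(rs.getString("id") + "\t" + rs.getBigDecimal("amount"));
                out.newLine();
            }
        } finally {
            out.close();
            rs.close();
            stmt.close();
            con.close();
        }
    }
}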

Similar Messages

  • RE: Big Log Files resulting in Out Of Memory of server partition

    To clean a log on NT, you can open it with Notepad, select all and delete, add a space and save as... with the same file name.
    On Unix, you can just truncate the file by redirecting empty output to the file name, e.g.:
    # > forte_ex_2390.log
    (Should work on NT too, but I never tried it.)
    Hope that will help
    From: Vincent R Figari
    Date: Monday, March 30, 1998, 21:42
    To: [email protected]
    Subject: Big Log Files resulting in Out Of Memory of server partition
    Hi Forte users,
    Using the Component/View Log from EConsole on a server partition triggers an
    Out of Memory of the server partition when the log file is too big (a few MB).
    Does anyone know how to change the log file name or clean the log file of
    a server partition running interpreted with Forte 2.0H16 ???
    Any help welcome,
    Thanks,
    Vincent Figari

    So try treating your development box like a production box for a day and see if the problem manifests itself.
    Do a load test and simulate massive numbers of changes on your development box.
    Are there any OS differences between production and development?
    How long does it take to exhaust the memory?
    Does it just add new jsp files, or can it also replace old ones?

  • Big Log Files resulting in Out Of Memory of server partition

    Hi Forte users,
    Using the Component/View Log from EConsole on a server partition triggers an
    Out of Memory of the server partition when the log file is too big (a few MB).
    Does anyone know how to change the log file name or clean the log file of
    a server partition running interpreted with Forte 2.0H16 ???
    Any help welcome,
    Thanks,
    Vincent Figari

    Ask in Photoshop General Discussion, or go to Microsoft and search for an article on memory allocation:
    http://search.microsoft.com/search.aspx?mkt=en-US&setlang=en-US
    This forum is about the Cloud as a delivery process, not about using individual programs.
    If you start at the Forums Index https://forums.adobe.com/welcome
    you will be able to select a forum for the specific Adobe product(s) you use.
    Click the "down arrow" symbol on the right (where it says All communities) to open the drop-down list and scroll.

  • Importing a single track results in "Out of Memory!"

    I'm trying to import an audio track from another session into my current song, and Logic gives me "Out of Memory! Couldn't insert or delete data".
    I have 24 gigs on this system, and 12 currently free. I'm running Logic in 64-bit.
    What the heck is going on???

    Gowtam,
    Are you sure that the HashMap object is available for GC at the end of your controller code?
    If your JVM heap is relatively small compared to the size of your HashMap, then you can hit this issue. Analyze whether you really require such a huge collection of objects to work on; if there is no other alternative, then go ahead and do memory tuning to find out the optimum memory requirement per user and tune your JVM accordingly.
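    To illustrate the GC point with a generic sketch (this is not the original controller code; the class and method names are made up): keep the big map in the narrowest possible scope and drop every reference to it as soon as it has been processed, then size the heap with -Xmx based on measurement.

    import java.util.HashMap;
    import java.util.Map;

    public class ControllerSketch {
        // Hypothetical controller step built around a large intermediate map.
        public void handleRequest() {
            Map results = buildLargeMap();   // large intermediate data
            process(results);                // use it...
            results = null;                  // ...then drop the only reference so the map is GC-eligible
            // If a long-lived object (session, static cache, etc.) still points at the map,
            // nulling this local changes nothing; that reference must be cleared as well.
        }

        private Map buildLargeMap() {
            return new HashMap();
        }

        private void process(Map m) {
            // business logic over the map
        }
    }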

  • Out of memory - Cannot allocate 2 GB memory to SGA - SuSE 9 / 10gR2

    I need help, please!!!!
    Cannot allocate > 2 GB memory to SGA
    SHMMAX:
    SUsE:/home/oracle # /sbin/sysctl -p
    kernel.shmall = 2097152
    kernel.shmmax = 3221225472
    kernel.shmmni = 4096
    kernel.sem = 250 32000 100 128
    fs.file-max = 65536
    net.ipv4.ip_local_port_range = 1024 65000
    net.core.rmem_default = 1048576
    net.core.rmem_max = 1048576
    net.core.wmem_default = 262144
    net.core.wmem_max = 262144
    vm.disable_cap_mlock = 1
    SUsE:/home/oracle #
    When starting up the database:
    ORA-27102: out of memory
    Linux Error: 12: Cannot allocate memory
    Additional information: 1
    Additional information: 4620291
    Dell PowerEdge 2800 - 6 GB memory
    SuSE 9
    Oracle 10g Std.

    You're most likely correct - I have been running without issue with the existing kernel params and SGA set to 1536M - but at this point, I just want to get back to my original settings to start the first node, then do additional research on the kernel params and SGA settings. So, any help in setting the SGA back to what I had previously would be most appreciated.
    Here are my kernel params:
    kernel.shmall = 2097152
    kernel.shmmax = 2147483648
    kernel.shmmni = 4096
    kernel.sem = 250 32000 100 128
    net.ipv4.ip_local_port_range = 1024 65500
    net.core.rmem_default = 1048576
    net.core.rmem_max = 4194304
    net.core.wmem_default = 262144
    net.core.wmem_max =1048536

  • Getting HeapDump on out of memory error when executing method through JNI

    I have a C++ code that executes a method inside the jvm through the JNI.
    I have a memory leak in my Java code that results in an out-of-memory error; this exception is caught in my C++ code, and as a result the heap dump is not created on the disk.
    I am running the jvm with
    -XX:+HeapDumpOnOutOfMemoryError
    -XX:HeapDumpPath=C:\x.hprof
    Any suggestions?
    Thanks

    I'll rephrase it then.
    I have a java class named PbsExecuter and one static method in it ExecuteCommand.
    I am calling this method through JNI (using CallStaticObjectMethod). Sometimes this method causes the JVM to throw an OutOfMemoryError, and I would like to get a heap dump on the disk when this happens in order to locate my memory leak.
    I've started the JVM with JNI_CreateJavaVM and I've put two options inside the JavaVMInitArgs used to create the JVM, -XX:+HeapDumpOnOutOfMemoryError and -XX:HeapDumpPath=C:\x.hprof,
    which are supposed to create a heap dump on the disk when an OutOfMemoryError occurs.
    Normally, if I executed plain Java code and this exception occurred without being caught, the JVM would crash and the heap dump would be created on the disk.
    Since I need to handle errors in my C++ code, I use ExceptionOccurred(), extract the exception message from the exception itself, and write it out.
    For some reason, when I execute this method through JNI it doesn't create the dump.
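    One possible workaround, sketched here under the assumption that the Java entry point can intercept the error itself: catch the OutOfMemoryError inside ExecuteCommand, trigger a dump programmatically through the HotSpot diagnostic MXBean, and then rethrow so the JNI caller still sees the failure. The doExecute method below is a placeholder for the real work.

    import java.lang.management.ManagementFactory;
    import com.sun.management.HotSpotDiagnosticMXBean;

    public class PbsExecuter {
        public static Object ExecuteCommand(String command) {
            try {
                return doExecute(command); // placeholder for the real implementation
            } catch (OutOfMemoryError oome) {
                try {
                    HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                            ManagementFactory.getPlatformMBeanServer(),
                            "com.sun.management:type=HotSpotDiagnostic",
                            HotSpotDiagnosticMXBean.class);
                    bean.dumpHeap("C:\\x.hprof", false); // false = include unreachable objects too
                } catch (Exception ignore) {
                    // dumping is best effort; never mask the original error
                }
                throw oome; // let the JNI caller still observe the OutOfMemoryError
            }
        }

        private static Object doExecute(String command) {
            // real work goes here
            return null;
        }
    }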

  • Result Set Causing out of memory issue

    Hi,
    I am having trouble fixing a memory issue caused by a result set. I am using JDK 1.5 and SQL Server 2000 as the backend. When I try to execute a statement, the result set returns a minimum of 400,000 records, and I have to go through each and every record one by one, apply some business logic, and update the rows; after updating around 1,000 rows my application goes out of memory. Here is the original code:
    Statement stmt = con.createStatement();
    ResultSet rs = stmt.executeQuery("Select * from database tablename where field= 'done'");
    while (rs != null && rs.next()) {
        System.out.println("doing some logic here");
    }
    rs.close();
    stmt.close();
    I am planning to fix the code in this way:
    Statement stmt = con.createStatement(ResultSet.TYPE_FORWARD_ONLY,
                          ResultSet.CONCUR_UPDATABLE);
    stmt.setFetchSize(50);
    ResultSet rs = stmt.executeQuery("Select * from database tablename where field= 'done'");
    while (rs != null && rs.next()) {
        System.out.println("doing some logic here");
    }
    rs.close();
    stmt.close();
    But one of my colleagues told me that the setFetchSize() method does not work with the SQL Server 2000 driver.
    So please suggest how I can fix this issue. I am sure there is a way to do it; I am just not aware of it.
    Thanks for your help in advance.
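    A sketch of one possible direction (untested against your environment): the old Microsoft SQL Server 2000 JDBC driver reportedly buffers the whole result unless it is told to use server-side cursors, typically by adding SelectMethod=cursor to the connection URL, so check your driver's documentation for the exact property. With that in place, a forward-only, read-only statement with a small fetch size lets you walk the 400,000 rows without holding them all, doing the updates through a separate PreparedStatement:

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class DoneRowProcessor {
        // Walks the matching rows one at a time instead of materializing them all.
        public static void process(Connection con) throws SQLException {
            Statement stmt = con.createStatement(ResultSet.TYPE_FORWARD_ONLY,
                                                 ResultSet.CONCUR_READ_ONLY);
            ResultSet rs = null;
            try {
                stmt.setFetchSize(50); // only a hint; honored if the driver/cursor mode supports it
                rs = stmt.executeQuery("Select * from database tablename where field= 'done'");
                while (rs.next()) {
                    // apply the business logic to the current row here,
                    // issuing updates through a separate PreparedStatement
                }
            } finally {
                if (rs != null) rs.close();
                stmt.close();
            }
        }
    }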

    Here is the full-fledged code. Team Connect and the TopLink API are being used. The code has already been developed; it works for 2-3 hours and then it fails. I just have to fix the memory issue. Please suggest something:
    Statement stmt = con.createStatement();
    ResultSet rs = stmt.executeQuery("Select * from database tablename where field= 'done'");
    while (rs != null && rs.next()) {
        // vo is the value object obtained from the rs row by row
        if (updateInfo(vo, user)) {
            logger.info("updated : " + rs.getString("number_string"));
            projCount++;
        }
    }
    rs.close();
    stmt.close();
    private boolean updateInfo(CostCenter vo, YNUser tcUser) {
              boolean updated;
              UnitOfWork unitOfWork;
              updated = false;
              unitOfWork = null;
              List projList_m = null;
              try {
                   logger.info("Before vo.getId() HERE i AM" + vo.getId());
                   unitOfWork = FNClientSessionManager.acquireUnitOfWork(tcUser);
                   ExpressionBuilder expressionBuilder = new ExpressionBuilder();
                   Expression ex1 = expressionBuilder.get("application")
                             .get("projObjectDefinition").get("uniqueCode").equal(
                                       "TABLE-NAME");
                   Expression ex2 = expressionBuilder.get("primaryKey")
                             .equal(vo.getPrimaryKey());// primaryKey;
                   Expression finalExpression = ex1.and(ex2);
                   ReadAllQuery projectQuery = new ReadAllQuery(FQUtility
                             .classForEntityName("EntryTable"), finalExpression);
                   List projList = (List) unitOfWork.executeQuery(projectQuery);
                   logger.info("list value1" + projList.size());
                   TNProject project_hist = (TNProject) projList.get(0); // primary key
                   // value
                   logger.info("vo.getId1()" + vo.getId());
                   BNDetail detail = project_hist.getDetailForKey("TABLE-NAME");
                   project_hist.setNumberString(project_hist.getNumberString());
                   project_hist.setName(project_hist.getName());
                   String strNumberString = project_hist.getNumberString();
                   TNHistory history = FNHistFactory.createHistory(project_hist,
                             "Proj Update");
                   history.addDetail("HIST_TABLE-NAME");
                   history.setDefaultCategory("HIST_TABLE-NAME");
                   BNDetail histDetail = history.getDetailForKey("HIST_TABLE-NAME");
                   String strName = project_hist.getName();
                   unitOfWork.registerNewObject(histDetail);
                   setDetailCCGSHistFields(strNumberString, strName, detail,
                             histDetail);
                   logger.info("No Issue");
                   TNProject project = (TNProject) projList.get(0);
                   project.setName(vo.getName());
                   logger.info("vo.getName()" + vo.getName());
                   project.setNumberString(vo.getId());
                   BNDetail detailObj = project.getDetailForKey("TABLE-NAME"); // required
                   setDetailFields(vo, detailObj);//this method gets the value from vo and sets in the detail_up object
                   FNClientSessionManager.commit(unitOfWork);
                   updated = true;
                   unitOfWork.release();
              } catch (Exception e) {
                   logger.warn("update: caused exception, "
                             + e.getMessage());
                   unitOfWork.release();
              }
              return updated;
         }
    Now I have tried to change the code a little bit, and I added the following lines:
                        updated = true;
                     FNClientSessionManager.release(unitOfWork);
                     project_hist=null;
                     detail=null;
                     history=null;
                     project=null;
                     detailObj=null;
                        unitOfWork.release();
                        unitOfWork=null;
                     expressionBuilder=null;
                     ex1=null;
                     ex2=null;
                     finalExpression=null;
    and also I added the code to request the Garbage collector after every 5th update:
    if (updateInfo(vo, user)) {
         logger.info("project update : " + rs.getString("number_string"));
         projCount++;
         // call the garbage collector every 5th record update
         if (projCount % 5 == 0) {
              System.gc();
              logger.debug("Called Garbage Collector on " + projCount + "th update");
         }
    }
    But now the code won't even update a single record. So please look into the code and suggest something so that I can stop banging my head against the wall.

  • Database out of memory error getting Web Intelligence prompts

    The following code generates an exception for a particular web intelligence report object ID:
    m_engines = (ReportEngines)m_entSession.getService("ReportEngines");
    m_widocRepEngine = (ReportEngine)m_engines.getService(ReportEngines.ReportEngineType.WI_REPORT_ENGINE);
    DocumentInstance doc = m_widocRepEngine.openDocument(id);
    Prompts prompts = doc.getPrompts();
    The exception is as follows:
    A database error occurred. The database error text is: The system is out of memory. Use system side cursors for large result sets: Java heap space. Result set size: 31,207,651. JVM total memory size: 66,650,112. (WIS 10901).
    I can't understand how the result set could be over 31 million, or how to fix this. Any ideas?

    So what happens in InfoView?
    I ask since it doesn't appear to be a SDK coding issue.
    Sincerely,
    Ted Ueda

  • Out of Memory error caused by huge result set

    Hi,
    I'm using version 3.3.4 of KODO with the enterprise license. Our
    application has some places where we're using straight SQL to create a
    query using the following statement:
    Query query = pm.newQuery ("javax.jdo.query.SQL", queryString);
    There is one query that performs an inner join and returns over 2 billion
    results. Of course this throws an out of memory error when we try to add
    the results to a list. I printed out the exact SQL statement that is
    being executed and it should return 0 results. The application is hitting
    an Oracle 9i database, and running the query through the Oracle tool
    SQLPlus returns the correct result. I'm stumped as to why this massive
    result set is being returned when we use KODO. Here is the SQL string for
    your examination:
    SELECT T_ORDERITEM.* FROM T_ORDERITEM INNER JOIN T_REVENUE ON
    T_ORDERITEM.ORDER_HOID = T_REVENUE.HOID WHERE T_ORDERITEM.COMMODITY_HOID
    = '1871816913731466' AND T_ORDERITEM.DATESHIPPED BETWEEN
    TO_DATE('12/25/2005', 'MM/DD/YYYY') AND TO_DATE('01/24/2006',
    'MM/DD/YYYY') AND T_REVENUE.CUSTOMER_HOID IN (1844203535241210)
    Thanks for your help,
    Bennett

    I'm also using JDK 1.4.2 so I'm not sure why you wouldn't be running into
    the issue using the ArrayList constructor.
    -Bennett
    Paul Mogren wrote:
    What result list implementation are you using? What's your JDK
    version? I'm sure I've written new ArrayList(results) numerous times
    without trouble, though I have not used Kodo 3.3.4. I had a look at
    the JDK 1.4.2 source code for ArrayList(Collection) and
    ArrayList.addAll(Collection), and it looks to me like the latter would
    consume more memory than the former. However, it might be worth noting
    that the former invokes size() on the argument.
    -Paul Mogren, CommerceHub
    On Thu, 19 Jan 2006 22:10:50 +0000 (UTC), [email protected]
    (Bennett Hunter) wrote:
    Never mind, issue resolved. This wasn't due to the query, but rather how
    we were creating a list with the results returned. This issue is probably
    already posted, but I'll give the info again:
    You can't create an arraylist by passing in the kodo collection of results
    to the constructor of ArrayList. For example: List l = new
    ArrayList(results) .... that will throw the out of memory error because of
    the way that the constructor builds the kodo collection. Maybe the bug is
    with the kodo collection. However, the way around this is to create the
    list with an empty constructor and then do an addAll() operation.
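    A minimal illustration of that workaround (generic Java, not Kodo-specific; results stands for the lazily loaded query result):

    import java.util.ArrayList;
    import java.util.Collection;
    import java.util.List;

    public class ResultCopy {
        // 'results' is assumed to be the collection of query results returned by the query.
        static List copyResults(Collection results) {
            // Reported to blow up: the ArrayList(Collection) constructor sizes itself from
            // the argument up front (it calls size() and copies the contents in one step).
            // List all = new ArrayList(results);

            // Workaround from the thread: start with an empty list, then append.
            List all = new ArrayList();
            all.addAll(results);
            return all;
        }
    }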

  • Out of Memory: ResultSet.getBytes accounts for 24,000 allocations of byte[]

    Hi,
    We are using Weblogic6.0 deployed on Win NT
    There is a query in one of our classes which returns the Catalog Items from the
    d/b. We have 2000 Catalog Items in the database. When 144 users concurrently
    try to retrieve this data we run out of memory.
    We used Jprobe for profiling this, the maximum memory is being consumed by the
    byte[] (64,766,540 Bytes, 40,570 allocations) ,it is consuming 80% of memory.
    The maximum allocations(24,552 allocations) are being made at TdsEntry.getBytes
    (which actually is from ResultSet.getBytes).
    I am very curious as to why so many byte[] are being allocated which are not being
    released.
    (Each of our Catalog Items has a size of 100 bytes, so 100*14*2000 = 2800 kB, which is
    approx. 2.8 MB)
    Any help is appreciated
    Thanks and Regards
    Rashmi

    Hi. I am sure we will look into whether there is any memory wastage in our
    driver. You should repeat the test using someone else's driver too. This
    application seems to be crying out for a better way. Unless the catalog
    is changing by the instant, it would be much more efficient to have
    a server-side class that occasionally went to the DBMS and got the latest
    contents of the catalog, and have all your user classes go to this class
    to share the one in-memory copy.
    Joe Weinstein at B.E.A.
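    A rough sketch of that idea (the names below are made up and it is not WebLogic-specific): one shared holder refreshes the catalog from the database occasionally, and every user class reads the shared in-memory copy instead of re-querying.

    import java.util.Collections;
    import java.util.List;

    public class CatalogCache {
        private static volatile List cachedItems = Collections.EMPTY_LIST;
        private static volatile long lastRefresh = 0L;
        private static final long REFRESH_INTERVAL_MS = 5 * 60 * 1000L; // refresh at most every 5 minutes

        // All user classes call this instead of querying the database themselves.
        public static List getCatalogItems() {
            long now = System.currentTimeMillis();
            if (now - lastRefresh > REFRESH_INTERVAL_MS) {
                synchronized (CatalogCache.class) {
                    if (now - lastRefresh > REFRESH_INTERVAL_MS) {
                        cachedItems = Collections.unmodifiableList(loadFromDatabase());
                        lastRefresh = now;
                    }
                }
            }
            return cachedItems;
        }

        // Placeholder: run the catalog query once and map the rows to catalog item objects.
        private static List loadFromDatabase() {
            return Collections.EMPTY_LIST;
        }
    }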

  • DB error - Ran out of memory retrieving results - (Code: 200,302) (Code: 209,879)

    I am encountering an error while running a big job of about 2.5 million records through the EDQ cleansing/match process.
    Process failed: A database error has occurred : Ran out of memory retrieving query results.. (Code: 200,302) (Code: 209,879)
    The server has 8 GB of memory, with 3 GB allocated to Java for processing. I could not see any PostgreSQL configuration files to tune any parameters. I need some help with configuring the PostgreSQL database, I guess. Appreciate any suggestions!

    Hi,
    This sounds very much like a known issue with the latest maintenance releases of EDQ (9.0.7 and 9.0.8) where the PostgreSQL driver that we ship with EDQ was updated to support later versions of PostgreSQL but has been seen to use huge amounts more memory.
    The way to resolve this is to change the PostgreSQL driver that ships with EDQ to the conventional PostgreSQL version:
    1. Go to the PostgreSQL JDBC Download page and download the JDBC4 PostgreSQL driver, version 9.1-902.
    2. Put this into the tomcat/webapps/dndirector/WEB-INF/lib folder
    3. Remove/rename the existing postgresql.jar from the same location
    4. Rename the newly downloaded driver postgresql.jar
    5. Restart the 3 services in the following order: Director database, Results database, Application Server
    With this version of the driver, the memory issues have not been seen.
    Note that there are two reasons why we do not ship this driver as standard, so you may wish to be aware of the impact of these if you use the standard driver:
    a. Drilldown performance from some of the results views from the Parse processor may be a little slower.
    b. There is a slim possibility of hitting deadlocks in the database when attempting to insert very wide columns.
    Regards,
    Mike

  • Oracle 9i Database installation error ORA-27102: out of memory HELP

    Hello
    Apologies if this post has been answered already, or if I am meant to post some data capture to show what the issue is; however, I am a bit unsure what I need.
    I have downloaded oracle 9i for my university course as I need to have it to do some SQL and Forms building.
    I have had a lot of issues but I have battled through them - however now I am stuck on this one.
    I install Oracle and then the below:
    Install Oracle Database 9.2.0.1.0
    Personal Edition 2.80gb
    General Purpose
    I leave the default port
    Set my database name
    Select the location
    Character set etc
    Then the Database Configuration Assistant starts to install the new database; at 46% I get the error in a pop-up window:
    ORA-27102: out of memory
    How can I resolve this??
    I am a mainframe programmer and not in any way a Windows whizz - please could someone help a dummy understand?
    Again thank you all very much

    You have too little RAM on your machine; even if you could successfully create an instance, it is going to be slow as hell.
    When you run DBCA to create the database, instead of actually creating the database you can choose to dump the SQL scripts and files used for database creation to a directory. This gives you a chance to modify the pfile and reduce the SGA parameters. I believe the default SGA of an instance created by DBCA is already beyond your RAM limit.

  • SQL Result Cache  vs In-Memory Database Cache

    Hi,
    Can anyone help me understand the relationship and differences between the 11g new features SQL Result Cache and In-Memory Database Cache?
    Thanks

    I highly recommend you read the 11g New Features Guide. Here is a sample from it:
    1.11.2.9 Query Result Cache
    A separate shared memory pool is now used for storing and retrieving
    cached results. Query retrieval from the query result cache is faster
    than rerunning the query. Frequently executed queries will see
    performance improvements when using the query result cache.
    The new query result cache enables explicit caching of results in
    database memory. Subsequent queries using the cached results will
    experience significant performance improvements.
    See Also:
    [Oracle Database Performance Tuning Guide|http://download.oracle.com/docs/cd/B28359_01/server.111/b28274/memory.htm#PFGRF10121] for details
    [Results Cache Concepts|http://download.oracle.com/docs/cd/B28359_01/server.111/b28274/memory.htm#PFGRF10121|Results Cache Concepts]
    HTH!

  • Repeated opening of a database in a Txn causes Logging region out of memory

    Hi
    BDB 4.6.21
    When I open and close a single database file repeatedly, it causes the error message "Logging region out of memory; you may need to increase its size". I have set the 65 KB default size for set_lg_regionmax. Is there any workaround for solving this issue other than increasing the value of set_lg_regionmax? Even if we set it to a higher value, we cannot predict how the clients of BDB will open and close a database file. Following is a stand-alone program with which one can reproduce the scenario.
    int main()
    {
        const int SUCCESS = 0;
        ULONG uEnvFlags = DB_CREATE | DB_INIT_MPOOL | DB_INIT_LOG | DB_INIT_TXN | DB_INIT_LOCK | DB_THREAD; // | DB_RECOVER;
        LPCSTR lpctszHome = "D:\\Nisam\\Temp";
        int nReturn = 0;
        DbEnv* pEnv = new DbEnv( DB_CXX_NO_EXCEPTIONS );
        nReturn = pEnv->set_thread_count( 20 );
        nReturn = pEnv->open( lpctszHome, uEnvFlags, 0 );
        if( SUCCESS != nReturn )
            return 0;
        DbTxn* pTxn = 0;
        char szBuff[MAX_PATH];
        UINT uDbFlags = DB_CREATE | DB_THREAD;
        lstrcpy( szBuff, "DBbbbbbbbbbbbbbbbbbbbbbbbbbb________0" ); // some long name
        // First create the database
        Db Database( pEnv, 0 );
        nReturn = Database.open( pTxn, szBuff, 0, DB_BTREE, uDbFlags, 0 );
        nReturn = Database.close( 0 );
        for( int nCounter = 0; 10000 > nCounter; ++nCounter )
        {
            // Now repeatedly open and close the above created database
            pEnv->txn_begin( pTxn, &pTxn, 0 );
            Db db( pEnv, 0 );
            nReturn = db.open( pTxn, szBuff, 0, DB_BTREE, uDbFlags, 0 );
            if( SUCCESS != nReturn )
            {
                // when the count reaches 435, the error occurs
                pTxn->abort();
                db.close( 0 );
                pEnv->close( 0 );
                return 0;
            }
            pTxn->abort();
            pTxn = 0;
            db.close( 0 );
        }
        pEnv->close( 0 );
        return 0;
    }
    By the way, following is the content of my DB_CONFIG file
    set_tx_max 1000
    set_lk_max_lockers 10000
    set_lk_max_locks 100000
    set_lk_max_objects 100000
    set_lock_timeout 20000
    set_lg_bsize 1048576
    set_lg_max 10485760
    #log region: 66KB
    set_lg_regionmax 67584
    set_cachesize 0 8388608 1
    Thanks and Regards
    Nisam

    Hi Nisam,
    I was able to reproduce the problem using Berkeley DB 4.6.21. The problem is with releasing the FNAME structure in certain cases involving aborted transactions. In a situation where you have continuous (in a loop) transactional open/abort/close of databases, you will notice (as you did) that the log region size needs to be increased (set_lg_regionmax).
    This problem was identified and reproduced yesterday (thanks for letting us know about this) and is reported as SR #15953. It will be fixed in the next release of Berkeley DB and is currently in code review/regression testing. I have a patch that you can apply to Berkeley DB 4.6 and have confirmed that your test program runs with the patch applied. If you send me email at (Ron dot Cohen at Oracle) I'll send the patch to you.
    As you noticed, committing the transaction will run cleanly without error. You could do that (with the DB_TXN_NOSYNC suggestion below), but you may not even need transactions for this.
    I want to expand a bit on my recommendation that you not abort transactions in the manner that you are doing (though with the patch you can certainly do that). First, a database open/close is a heavyweight operation. Typically you create/open your databases and keep them open for the life of the application (or a long time).
    You also mentioned that you noticed commits may take longer. We can talk about that (if you email me), but you could consider using the DB_TXN_NOSYNC flag, at the cost of durability. Make sure that this suggestion will work with your application requirements.
    Even if you have (create/open/get/commit/abort), that should not need transactions for a single get operation. In that case there would be no logging for the open and close, so this sequence would be faster. This was a code snippet, so what you have in your application may be a lot more complicated and justify what you have done. But the simple test case above should not require a transaction, since you are doing a single atomic get.
    I hope this helps!
    Ron Cohen
    Oracle Corporation

  • Database creation = ora-27102 out of memory

    Hi,
         I have a solaris sparc 9.5
    Memory size: 16384 Megabytes
         swapfile dev swaplo blocks free
    /dev/dsk/c1t0d0s1 32,25 16 1068464 1068464
         And when I try to create a database with the following configuration
         DUMMY AREA NAME SUM(BYTES)
    2 Shared Pool shared pool 603979776
    3 Large Pool large pool 352321536
    4 Java Pool java pool 33554432
    5 Redo Log Buffer log_buffer 787456
    6 Fixed SGA fixed_sga 731328
         =>     ora-27102 out of memory
         Help me please

    The error is reported by Oracle during allocation of the SGA and will happen
    in cases where the kernel parameter SHMMAX/SHM_MAX is not set high enough.
    The SHMMAX kernel parameter decides the maximum size of a shared memory segment
    that can be allocated in the system. Since Oracle implements SGA using shared
    memory, this parameter should be set appropriately. The value of the SHMMAX
    kernel parameter should be higher than the maximum of SGA sizes of the Oracle
    instances used in the server. In cases where the SHMMAX is smaller than the SGA
    size, Oracle tries to fit the entire SGA into a single shared memory segment,
    which will fail, and you will see the warning message in the alert.log.
    The recommended value for this parameter is 4294967295 (4 GB), or the size of
    physical memory, or half the size of physical memory, depending on platform.
    Setting the SHMMAX to recommended value in the kernel parameter configuration
    file and rebooting the server will get rid of the warning messages. See the
    platform specific Oracle installation guide for detailed information on how to
    modify the SHMMAX/SHM_MAX kernel parameter.
    General guidelines for SHMMAX on common platforms
    (check with your vendor for maximum settings):
    Platform          Recommended value
    Solaris/Sun       4 GB or max SGA, whichever is higher
