Out of memory while committing UnitOfWork objects

We are getting this error message when we try to persist our objects. Is there an upper limit on the size of the objects we can persist? Do we need to enable or disable any property? We currently have OptimizeDataConversion turned off.
com.integral.finance.dealing.ejb.RequestService.process(com.integral.message.WorkflowMessage) throws java.rmi.RemoteException
java.lang.OutOfMemoryError: allocLargeObjectOrArray - Object size: 1328752, Num elements: 664365
at oracle.jdbc.driver.OraclePreparedStatement.allocBinds(OraclePreparedStatement.java:1147)
at oracle.jdbc.driver.OraclePreparedStatement.growBinds(OraclePreparedStatement.java:1294)
at oracle.jdbc.driver.OraclePreparedStatement.processCompletedBindRow(OraclePreparedStatement.java:1920)
at oracle.jdbc.driver.OraclePreparedStatement.addBatch(OraclePreparedStatement.java:8930)
at oracle.toplink.internal.databaseaccess.DatabasePlatform.addBatch(DatabasePlatform.java:138)
at oracle.toplink.platform.database.oracle.Oracle9Platform.addBatch(Oracle9Platform.java:207)
at oracle.toplink.internal.databaseaccess.ParameterizedSQLBatchWritingMechanism.prepareBatchStatements(ParameterizedSQLBatchWritingMechanism.java:170)
at oracle.toplink.internal.databaseaccess.ParameterizedSQLBatchWritingMechanism.executeBatchedStatements(ParameterizedSQLBatchWritingMechanism.java:124)
at oracle.toplink.internal.databaseaccess.ParameterizedSQLBatchWritingMechanism.appendCall(ParameterizedSQLBatchWritingMechanism.java:71)
at oracle.toplink.internal.databaseaccess.DatabaseAccessor.basicExecuteCall(DatabaseAccessor.java:478)
at oracle.toplink.internal.databaseaccess.DatabaseAccessor.executeCall(DatabaseAccessor.java:441)
at oracle.toplink.publicinterface.Session.executeCall(Session.java:728)
at oracle.toplink.internal.queryframework.DatasourceCallQueryMechanism.executeCall(DatasourceCallQueryMechanism.java:117)
at oracle.toplink.internal.queryframework.DatasourceCallQueryMechanism.executeCall(DatasourceCallQueryMechanism.java:103)
at oracle.toplink.internal.queryframework.DatasourceCallQueryMechanism.insertObject(DatasourceCallQueryMechanism.java:238)
at oracle.toplink.internal.queryframework.StatementQueryMechanism.insertObject(StatementQueryMechanism.java:158)
at oracle.toplink.internal.queryframework.StatementQueryMechanism.insertObject(StatementQueryMechanism.java:173)
at oracle.toplink.internal.queryframework.DatabaseQueryMechanism.insertObjectForWrite(DatabaseQueryMechanism.java:429)
at oracle.toplink.queryframework.InsertObjectQuery.executeCommit(InsertObjectQuery.java:63)
at oracle.toplink.queryframework.InsertObjectQuery.executeCommitWithChangeSet(InsertObjectQuery.java:76)
at oracle.toplink.internal.queryframework.DatabaseQueryMechanism.performUserDefinedWrite(DatabaseQueryMechanism.java:522)
at oracle.toplink.internal.queryframework.DatabaseQueryMechanism.insertObjectForWrite(DatabaseQueryMechanism.java:358)
at oracle.toplink.queryframework.WriteObjectQuery.executeCommitWithChangeSet(WriteObjectQuery.java:107)
at oracle.toplink.internal.queryframework.DatabaseQueryMechanism.executeWriteWithChangeSet(DatabaseQueryMechanism.java:254)
at oracle.toplink.queryframework.WriteObjectQuery.executeDatabaseQuery(WriteObjectQuery.java:47)
at oracle.toplink.queryframework.DatabaseQuery.execute(DatabaseQuery.java:620)
at oracle.toplink.queryframework.DatabaseQuery.executeInUnitOfWork(DatabaseQuery.java:542)
at oracle.toplink.queryframework.ObjectLevelModifyQuery.executeInUnitOfWorkObjectLevelModifyQuery(ObjectLevelModifyQuery.java:100)
at oracle.toplink.queryframework.ObjectLevelModifyQuery.executeInUnitOfWork(ObjectLevelModifyQuery.java:72)
at oracle.toplink.publicinterface.UnitOfWork.internalExecuteQuery(UnitOfWork.java:2631)
at oracle.toplink.publicinterface.Session.executeQuery(Session.java:993)
at oracle.toplink.publicinterface.Session.executeQuery(Session.java:950)
at oracle.toplink.mappings.ObjectReferenceMapping.preInsert(ObjectReferenceMapping.java:469)
at oracle.toplink.internal.queryframework.DatabaseQueryMechanism.insertObjectForWrite(DatabaseQueryMechanism.java:380)
at oracle.toplink.queryframework.InsertObjectQuery.executeCommit(InsertObjectQuery.java:65)
at oracle.toplink.queryframework.InsertObjectQuery.executeCommitWithChangeSet(InsertObjectQuery.java:76)
at oracle.toplink.internal.queryframework.DatabaseQueryMechanism.performUserDefinedWrite(DatabaseQueryMechanism.java:522)
at oracle.toplink.internal.queryframework.DatabaseQueryMechanism.insertObjectForWrite(DatabaseQueryMechanism.java:358)
at oracle.toplink.queryframework.WriteObjectQuery.executeCommitWithChangeSet(WriteObjectQuery.java:110)
at oracle.toplink.internal.queryframework.DatabaseQueryMechanism.executeWriteWithChangeSet(DatabaseQueryMechanism.java:254)
at oracle.toplink.queryframework.WriteObjectQuery.executeDatabaseQuery(WriteObjectQuery.java:47)
at oracle.toplink.queryframework.DatabaseQuery.execute(DatabaseQuery.java:620)
at oracle.toplink.queryframework.DatabaseQuery.executeInUnitOfWork(DatabaseQuery.java:542)
at oracle.toplink.queryframework.ObjectLevelModifyQuery.executeInUnitOfWorkObjectLevelModifyQuery(ObjectLevelModifyQuery.java:100)
at oracle.toplink.queryframework.ObjectLevelModifyQuery.executeInUnitOfWork(ObjectLevelModifyQuery.java:72)
at oracle.toplink.publicinterface.UnitOfWork.internalExecuteQuery(UnitOfWork.java:2631)
at oracle.toplink.publicinterface.Session.executeQuery(Session.java:993)
at oracle.toplink.publicinterface.Session.executeQuery(Session.java:950)
at oracle.toplink.mappings.ObjectReferenceMapping.preInsert(ObjectReferenceMapping.java:469)
at oracle.toplink.internal.queryframework.DatabaseQueryMechanism.insertObjectForWrite(DatabaseQueryMechanism.java:380)
at oracle.toplink.queryframework.InsertObjectQuery.executeCommit(InsertObjectQuery.java:65)
at oracle.toplink.queryframework.InsertObjectQuery.executeCommitWithChangeSet(InsertObjectQuery.java:76)
at oracle.toplink.internal.queryframework.DatabaseQueryMechanism.executeWriteWithChangeSet(DatabaseQueryMechanism.java:254)
at oracle.toplink.queryframework.WriteObjectQuery.executeDatabaseQuery(WriteObjectQuery.java:47)
at oracle.toplink.queryframework.DatabaseQuery.execute(DatabaseQuery.java:620)
at oracle.toplink.queryframework.DatabaseQuery.executeInUnitOfWork(DatabaseQuery.java:542)
at oracle.toplink.queryframework.ObjectLevelModifyQuery.executeInUnitOfWorkObjectLevelModifyQuery(ObjectLevelModifyQuery.java:100)
at oracle.toplink.queryframework.ObjectLevelModifyQuery.executeInUnitOfWork(ObjectLevelModifyQuery.java:72)
at oracle.toplink.publicinterface.UnitOfWork.internalExecuteQuery(UnitOfWork.java:2631)
at oracle.toplink.publicinterface.Session.executeQuery(Session.java:993)
at oracle.toplink.publicinterface.Session.executeQuery(Session.java:950)
at oracle.toplink.internal.sessions.CommitManager.commitNewObjectsForClassWithChangeSet(CommitManager.java:243)
at oracle.toplink.internal.sessions.CommitManager.commitAllObjectsForClassWithChangeSet(CommitManager.java:219)
at oracle.toplink.internal.sessions.CommitManager.commitAllObjectsWithChangeSet(CommitManager.java:174)
at oracle.toplink.publicinterface.Session.writeAllObjectsWithChangeSet(Session.java:3195)
at oracle.toplink.publicinterface.UnitOfWork.commitToDatabase(UnitOfWork.java:1320)
at oracle.toplink.publicinterface.UnitOfWork.commitToDatabaseWithChangeSet(UnitOfWork.java:1416)
at oracle.toplink.publicinterface.UnitOfWork.commitRootUnitOfWork(UnitOfWork.java:1164)
at oracle.toplink.publicinterface.UnitOfWork.commit(UnitOfWork.java:932)

Hello, we have encountered this problem for years with TopLink; it does not occur in Hibernate.
The problem is that TopLink waits for uow.commit() before merging changes into the cached objects and before releasing its working objects (backup and clone).
If a transaction modifies 10,000 objects, TopLink references 30,000 objects (10,000 cache references, 10,000 clones, 10,000 backups). The 30,000 objects are only released once the database commit has completed and the merge in memory has finished.
We would like TopLink to offer an option to perform several flushes within a UnitOfWork. Each flush would send the SQL (insert, update, delete) to the database without a SQL commit, then mark the flushed objects as invalid before releasing them, so that the garbage collector can reclaim them.
Example:
uow.begin() /* begin a transaction over 10,000 objects */
application modifies the first 1,000 objects (1 to 1,000)
uow.flush() /* TopLink sends the SQL to the database, flags the 1,000 objects as invalid, and releases them to the GC */
application modifies the next 1,000 objects (1,001 to 2,000)
uow.flush() /* same again for this batch */
application modifies the last 1,000 objects (9,001 to 10,000)
uow.flush() /* same again for this batch */
uow.commit(); /* TopLink sends the SQL commit to the database; no objects left to release to the GC */
Can you tell me whether you intend to implement this feature? We need long-running transaction processing that does not suffer from OutOfMemoryError.
Thank you for your understanding
Tabi
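
A bounded-memory variant of this pattern can be approximated today at the application level with the standard JPA API (flush plus clear per batch) instead of TopLink's native UnitOfWork, which offers no mid-transaction release. This is a minimal sketch, not TopLink's own mechanism: the entity list and the batch size of 1,000 are assumptions, and clearing the persistence context is only safe if later code no longer touches the flushed objects.

    import java.util.List;
    import javax.persistence.EntityManager;
    import javax.persistence.EntityTransaction;

    public class BatchedWriter {
        private static final int BATCH_SIZE = 1000;

        // Persists a large list in a single database transaction while keeping
        // the number of managed (tracked) objects bounded by BATCH_SIZE.
        public void persistAll(EntityManager em, List<?> objects) {
            EntityTransaction tx = em.getTransaction();
            tx.begin();
            int count = 0;
            for (Object o : objects) {
                em.persist(o);
                if (++count % BATCH_SIZE == 0) {
                    em.flush();  // send the INSERTs to the database, no SQL commit yet
                    em.clear();  // detach the flushed objects so the GC can reclaim them
                }
            }
            tx.commit();         // single SQL commit at the end
        }
    }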

Similar Messages

  • Getting Error Out Of Memory while importing the work repository in ODI 10g

    I exported the work repository from the topology of ODI 10g on one database and tried importing it into the ODI 10g topology of another database. While importing I got the error 'Out of Memory'.
    Can somebody suggest how to solve the heap-size out-of-memory issue while importing in ODI 10g?
    Thanks in advance.

    Hi,
    you have to post your question in the ODI forum:
    Data Integrator
    Suresh

  • Out of Memory while running report

    Hi All,
    DB Version :- 10.1.0.5.0
    Our application is used to generate reports, and one report has a limitation on the date range. This was due to an 'out of memory' error which occurred before; the report also used to take a huge amount of time to complete. We have since tuned the SQL and it now completes in minutes (performance increased by 80%) :-)
    Now we want to remove the 10-year limit and run for the past 30 years of data. When we remove the limitation, it throws the error message below:
    "when others Error ORA-04030: out of process memory when trying to allocate 1872664 bytes (PLS non-lib hp,DARWIN) with query in ....package"
    While the report was running I asked the DBA to check the usage, and he mentioned that it was at its peak: a single report was using 42 GB of RAM.
    Can anyone advise how to tackle the issue, or how to start an investigation to determine its cause?
    When the report runs, some dynamic SQL statements are generated before the report completes.
    Regards,
    Sunny

    Hi All,
    DB Version :- 10.1.0.5.0
    I captured the dynamic SQL statements and found that the piece of SQL below is taking a huge amount of time: around 15 minutes to complete.
    Below is the SQL:
    SELECT
    X.TIME_PERIOD EXPOSURE_PERIOD, Y.TIME_PERIOD EVALUATION_PERIOD,b.BUSINESS_UNIT,
    decode(GROUPING(LOB_VALUE),1,'SEL',LOB_VALUE) BUSINESS_UNIT_LOB_ID_ACT,
    0 CALC_VALUE
    FROM
    ACTUARIAL_REF_DATA.TIME_PERIOD_HIERARCHY X, ACTUARIAL_REF_DATA.TIME_PERIOD_HIERARCHY Y,
    ANALYSIS_BUSINESS_UNITS B, ANALYSIS_LOBS L
    WHERE
    B.ANALYSIS_ID = L.ANALYSIS_ID
    AND X.TIME_PERIOD BETWEEN TO_NUMBER('198001') AND TO_NUMBER('201006')
    AND Y.TIME_PERIOD BETWEEN TO_NUMBER('198001') AND TO_NUMBER('201006')
    AND b.BUSINESS_UNIT='31003'
    AND LOB_VALUE IN (SELECT TO_NUMBER(LOB_VALUE) FROM ANALYSIS_LOBS WHERE ANALYSIS_ID=TO_NUMBER('3979'))
    GROUP BY X.TIME_PERIOD, Y.TIME_PERIOD,BUSINESS_UNIT,CUBE(LOB_VALUE)
    PLAN_TABLE_OUTPUT
    Plan hash value: 929111431
    | Id  | Operation                    | Name                       | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT             |                            |    26M|   996M|       |   203K  (3)| 00:47:35 |
    |   1 |  SORT GROUP BY               |                            |    26M|   996M|       |   203K  (3)| 00:47:35 |
    |   2 |   GENERATE CUBE              |                            |    26M|   996M|       |   203K  (3)| 00:47:35 |
    |   3 |    SORT GROUP BY             |                            |    26M|   996M|  2868M|   203K  (3)| 00:47:35 |
    |*  4 |     HASH JOIN                |                            |    35M|  1324M|       |  6694  (17)| 00:01:34 |
    |*  5 |      INDEX RANGE SCAN        | ANALYSIS_LOBS_PK           |    48 |   480 |       |     2   (0)| 00:00:01 |
    |*  6 |      HASH JOIN               |                            |   148M|  4097M|       |  5619   (1)| 00:01:19 |
    |   7 |       TABLE ACCESS FULL      | ANALYSIS_LOBS              | 24264 |   236K|       |    12   (0)| 00:00:01 |
    |   8 |       MERGE JOIN CARTESIAN   |                            |  3068K|    55M|       |  5584   (1)| 00:01:19 |
    |   9 |        MERGE JOIN CARTESIAN  |                            |  8401 |   114K|       |    20   (0)| 00:00:01 |
    |* 10 |         INDEX FAST FULL SCAN | ANALYSIS_BUSINESS_UNITS_PK |    23 |   207 |       |     3   (0)| 00:00:01 |
    |  11 |         BUFFER SORT          |                            |   365 |  1825 |       |    17   (0)| 00:00:01 |
    |* 12 |          INDEX FAST FULL SCAN| TIME_PERIOD_HIERARCHY_PK   |   365 |  1825 |       |     1   (0)| 00:00:01 |
    |  13 |        BUFFER SORT           |                            |   365 |  1825 |       |  5583   (1)| 00:01:19 |
    |* 14 |         INDEX FAST FULL SCAN | TIME_PERIOD_HIERARCHY_PK   |   365 |  1825 |       |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       4 - access(TO_NUMBER("LOB_VALUE")=TO_NUMBER("LOB_VALUE"))
       5 - access("ANALYSIS_ID"=3979)
       6 - access("B"."ANALYSIS_ID"="L"."ANALYSIS_ID")
      10 - filter("B"."BUSINESS_UNIT"='31003')
      12 - filter("X"."TIME_PERIOD">=198001 AND "X"."TIME_PERIOD"<=201006)
      14 - filter("Y"."TIME_PERIOD">=198001 AND "Y"."TIME_PERIOD"<=201006)
    Number of rows returned: 58404816
    Table definitions:
    TABLE TIME_PERIOD_HIERARCHY
      TIME_PERIOD        NUMBER(6),
      QUARTER_DESC       VARCHAR2(6 CHAR),
      QUARTER_START      NUMBER(6),
      QUARTER_END        NUMBER(6),
      SEMI_ANNUAL_DESC   VARCHAR2(12 CHAR),
      SEMI_ANNUAL_START  NUMBER(6),
      SEMI_ANNUAL_END    NUMBER(6),
      YEAR               NUMBER(4),
      YEAR_START         NUMBER(6),
      YEAR_END           NUMBER(6)
    TABLE ANALYSIS_LOBS
      ANALYSIS_ID  NUMBER(10),
      LOB_TYPE_ID  NUMBER(1),
      LOB_VALUE    VARCHAR2(7 CHAR)
    TABLE ANALYSIS_BUSINESS_UNITS
      ANALYSIS_ID    NUMBER(10),
      BUSINESS_UNIT  VARCHAR2(5 CHAR)
    Kindly let me know if there is any point where we can improve the performance.
    Regards,
    Sunny
    Edited by: k_17 on Nov 22, 2011 2:47 PM

  • Getting ORA-27102: out of memory while creating DB using DBCA

    Hi All,
    I am working on Oracle 11.2.0.3 on Linux. I am trying to create a new database using DBCA and am getting the error "ORA-27102: out of memory".
    Please find the DB version and OS-level parameter information below, and let me know what I need to do to overcome this issue.
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    CORE 11.2.0.3.0 Production
    TNS for Linux: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production
    $uname -a
    Linux greenlantern1a 2.6.18-92.1.17.0.1.el5 #1 SMP Tue Nov 4 17:10:53 EST 2008 x86_64 x86_64 x86_64 GNU/Linux
    $cat /etc/sysctl.conf
    kernel.shmall = 2097152
    kernel.shmmax = 4294967295
    kernel.shmmni = 4096
    kernel.sem = 250 32000 100 128
    net.core.rmem_default = 4194304
    net.core.wmem_default = 262144
    net.core.rmem_max = 4194304
    net.core.wmem_max = 1048576
    fs.file-max = 6815744
    fs.aio-max-nr = 1048576
    net.ipv4.ip_local_port_range = 9000 65500
    $free -g
    total used free shared buffers cached
    Mem: 94 44 49 0 0 31
    -/+ buffers/cache: 12 81
    Swap: 140 6 133
    $ulimit -l
    32
    $ipcs -lm
    ------ Shared Memory Limits --------
    max number of segments = 4096
    max seg size (kbytes) = 4194303
    max total shared memory (kbytes) = 8388608
    min seg size (bytes) = 1
    Please let me know for any other details.
    Thanks in advance.

    Ok, first, let's set aside the issue of hugepages for a moment. (Personally, IMHO, if you're doing manual memory management and you're not using hugepages, you're doing it wrong.)
    Anyhow, looking at your SHM parameters:
    kernel.shmall = 2097152
    kernel.shmmax = 4294967295
    kernel.shmmni = 4096
    Let's take those in reverse order:
    1.) shmmni - This is the max number of shared memory segments you can have on your system, regardless of the size of each segment.
    2.) shmmax - Contrary to popular belief, this is NOT the max amount of shared memory you can allocate system wide! This is the max size, in bytes of a single shared memory segment. You currently have it set to 4GB-1. This is probably fine. Even if you wanted an SGA larger than 4GB, having shmmax set to this wouldn't hurt you. Oracle would simply allocate multiple shared memory segments, until it had allocated enough memory for the SGA. There's really no harm there, unless this parameter is set really low, causing a huge number of tiny shared memory segments to be allocated.
    3.) shmall - This is the real system-wide shared memory limit. This number is the total amount of shared memory you're permitted to allocate, system wide, expressed in pages. The page size here is the native OS page size, which is 4096 bytes, so this is 2097152 * 4096 = 8589934592, or 8GB. So, 8GB is the maximum amount of memory that can currently be allocated to shared memory on your machine.
    So, having said all that, you haven't mentioned how many, if any, other Oracle databases are running on the server or their sizes. Secondly, we have no idea what memory sizing parameters you have set on the database that you're trying to create, that's getting the error.
    So, if you can provide more details, in terms of how many other databases are already on this server, and their SGA sizes, and the parameters you've chosen for the database that's failing to create, perhaps we can help more.
    Finally, if you're not using SGA_TARGET or MEMORY_TARGET, you really need to take the time to configure hugepages. Particularly if you've got a server that has as much memory as you do, and you're planning to have non-trivially sized SGA (10s of GB), then you really want to configure hugepages.
    Hope that helps,
    -Mark
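
    To make the shmall arithmetic above concrete, here is the same computation as a small Java sketch; the 4096-byte page size and the kernel.shmall value of 2097152 come from the post above, while the 16 GB figure on the last lines is a made-up target for illustration:
    // Relate kernel.shmall (counted in pages) to a byte total, and back.
    long pageSize = 4096L;                         // native OS page size, in bytes
    long shmall   = 2097152L;                      // kernel.shmall from the poster's sysctl.conf
    long totalShmBytes = shmall * pageSize;        // 8589934592 bytes = 8 GB system-wide cap
    long desiredBytes  = 16L * 1024 * 1024 * 1024; // hypothetical 16 GB cap instead
    long neededShmall  = desiredBytes / pageSize;  // 4194304 pages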

  • ORA-27102: out of memory (while creation of drsite problem)

    Hi all,
    I am trying to create a DR site at a remote location, but while using the pfile of the primary server I am getting the error ORA-27102: out of memory. We are using Oracle 9.2 on RHEL. Also, the primary server runs Oracle 9.2.0.8 while the DR site runs Oracle 9.2.0.6 (our patch got corrupted, which is why we are using 9.2.0.6). Could the difference in patch levels be creating the problem? I don't think so, but please correct me if I am wrong.
    SQL> conn sys/pwd as sysdba
    Connected to an idle instance.
    SQL> startup nomount pfile='/u01/initicai.ora';
    ORA-27102: out of memory
    SQL>
    We have a total of 8 GB of memory, of which we are using 6 GB for Oracle, i.e.:
    [oracle@icdb u01]$ cat /proc/meminfo
    MemTotal:      8175080 kB
    MemFree:         39912 kB
    Buffers:         33116 kB
    Cached:        7780188 kB
    SwapCached:         32 kB
    Active:          78716 kB
    Inactive:      7761396 kB
    HighTotal:           0 kB
    HighFree:            0 kB
    LowTotal:      8175080 kB
    LowFree:         39912 kB
    SwapTotal:    16779884 kB
    SwapFree:     16779660 kB
    Dirty:              28 kB
    Writeback:           0 kB
    Mapped:          48356 kB
    Slab:           265028 kB
    CommitLimit:  20867424 kB
    Committed_AS:    61372 kB
    PageTables:       2300 kB
    VmallocTotal: 536870911 kB
    VmallocUsed:    271252 kB
    VmallocChunk: 536599163 kB
    HugePages_Total:     0
    HugePages_Free:      0
    Hugepagesize:     2048 kB
    and
    [oracle@icdb u01]$ cat /etc/sysctl.conf
    # Kernel sysctl configuration file for Red Hat Linux
    # For binary values, 0 is disabled, 1 is enabled.  See sysctl(8) and
    # sysctl.conf(5) for more details.
    # Controls IP packet forwarding
    net.ipv4.ip_forward = 0
    # Controls source route verification
    net.ipv4.conf.default.rp_filter = 1
    # Do not accept source routing
    net.ipv4.conf.default.accept_source_route = 0
    # Controls the System Request debugging functionality of the kernel
    kernel.sysrq = 0
    # Controls whether core dumps will append the PID to the core filename.
    # Useful for debugging multi-threaded applications.
    kernel.core_uses_pid = 1
    kernel.shmall=2097152
    kernel.shmmax=6187593113
    kernel.shmmni = 4096
    kernel.sem = 250 32000 100 128
    fs.file-max =65536
    net.ipv4.ip_local_port_range = 1024 65000
    [oracle@icdb u01]$
    and the bash profile is:
    PATH=$PATH:$HOME/bin
    ORACLE_BASE=/u01/app/oracle
    ORACLE_HOME=$ORACLE_BASE/product/9.2.0
    ORACLE_SID=ic
    LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/lib32
    PATH=$PATH:$ORACLE_HOME/bin
    export  ORACLE_BASE ORACLE_HOME ORACLE_SID LD_LIBRARY_PATH PATH
    export PATH
    unset USERNAME
    Please suggest a solution.

    init file
    [oracle@icdb u01]$ cat initicai.ora
    *.aq_tm_processes=1
    *.background_dump_dest='/u01/app/oracle/admin/ic/bdump'
    *.compatible='9.2.0.0.0'
    *.control_files='/bkp/data/ctl/control03.ctl'
    *.core_dump_dest='/u01/app/oracle/admin/ic/cdump'
    *.db_block_size=8192
    *.db_cache_size=4294967296
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_name='icai'
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=icaiXDB)'
    *.fast_start_mttr_target=300
    *.hash_join_enabled=TRUE
    *.instance_name='icai'
    *.java_pool_size=157286400
    *.job_queue_processes=20
    *.large_pool_size=104857600
    *.open_cursors=300
    *.pga_aggregate_target=938860800
    *.processes=1000
    *.query_rewrite_enabled='FALSE'
    *.remote_login_passwordfile='EXCLUSIVE'
    *.shared_pool_size=818103808
    *.sort_area_size=524288
    *.star_transformation_enabled='FALSE'
    *.timed_statistics=TRUE
    *.undo_management='AUTO'
    *.undo_retention=10800
    *.undo_tablespace='UNDOTBS1'
    *.user_dump_dest='/u01/app/oracle/admin/ic/udump'
    #log_archive_dest='/bkp/arch/ic_'
    Log_archive_start=True
    sga_max_size=6444442450944
    log_archive_dest_1='location=/bkp/arch/ mandatory'
    log_archive_dest_2='service=prim optional reopen=15'
    log_archive_dest_state_1=enable
    remote_archive_enable=true
    standby_archive_dest='/arch/ic/ic_'
    standby_file_management=auto
    [oracle@icdb u01]$
    Edited by: user00726 on Nov 11, 2009 10:27 PM

  • ORA-27201 Out of Memory while installing 8.1.6

    I just received a new Solaris 2.7 machine. I am trying to install Oracle Enterprise 8.1.6 (and also tried 8.1.7). I tried just the most simple install, with the basic db. In doing so the db creation fails with 'Oracle not available'. If I ignore the problem and continue the install, I then can't start Oracle afterwards. In svrmgrl I run startup and I get error 27201 - Out of Memory. A check of swap and memory shows plenty available.

    Check your total SGA size, your total physical memory, and the configured swap space, then check the kernel parameters for shared memory: shmmax, shmseg, shmmni (all are important). By default the maximum SGA size available is 1.7 GB; you can alter that.

  • "Out of memory" while importing AVI  type (AJA) file with AE CS3

    With After Effects CS3 (version 8.0.2.27), when we import an AVI-type (AJA) file larger than 2 GB captured by another PC with an AJA Xena board, a message window appears: After Effects: Out of memory. (945949kK requested) (23 :: 40). If we import the same file with After Effects CS2 (version 7.0.1) we have no problem. If we import an AVI (AJA) file smaller than 2 GB with AE CS3, no problem appears. If we capture an AVI-type (Matrox) file larger than 2 GB and import it with AE CS3, no message appears. If we import a MOV-type (AJA) file larger than 2 GB, no problem appears with AE CS3 either. So to bypass this problem we are working with footage captured as MOV-type AJA files. The PC on which AE CS3 runs has 4 GB of RAM and two quad-core Xeon CPUs.
    Thanks
    Marc

    Most likely an issue with MediaCore trying to take over and then not getting it right... Have you checked with AJA? Maybe they know something about it.
    Mylenium

  • Ipad since v5 out of memory while running applications

    Since installing v5 on my iPad (original 3G, 64 GB), applications that ran OK now get an out-of-memory error: the screen goes dark, then returns to one of the icon screens.

  • Running out of memory while using cursored stream with large data

    We are following the suggestions/recommendations for the cursored stream:
    CursoredStream cursor = null;
    try {
        Session session = getTransaction();
        int batchSize = 50;
        ReadAllQuery raq = getQuery();
        raq.useCursoredStream(batchSize, batchSize);
        int num = 0;
        ArrayList<Request> limitRequests = null;
        int totalLimitRequest = 0;
        cursor = (CursoredStream) session.executeQuery(raq);
        while (!cursor.atEnd()) {
            Request request = (Request) cursor.read();
            if (num == 0) {
                limitRequests = new ArrayList<Request>(batchSize);
            }
            limitRequests.add(request);
            totalLimitRequest++;
            num++;
            if (num >= batchSize) {
                log.warn("Migrating batch of " + batchSize + " Requests.");
                updateLimitRequestFillPriceForBatch(limitRequests);
                num = 0;
                cursor.releasePrevious(); // release already-read rows from the stream
            }
        }
        if (num > 0) {
            updateLimitRequestFillPriceForBatch(limitRequests);
        }
    } finally {
        if (cursor != null) {
            cursor.close();
        }
    }
    We are committing every 50 records in the unit of work. If we set dontMaintainCache on the ReadAllQuery we get intermittent PrimaryKeyExceptions, and we do not see much difference in the IdentityMap size.
    Any suggestions/ideas for dealing with large data sets? Thanks

    Hi,
    If I use read-only classes with CursoredStream and execute the query within UOW, should I be saving any memory?
    I had to use UOW because when I use Session to execute the query I get
    6115: ISOLATED_QUERY_EXECUTED_ON_SERVER_SESSION
    Cause: An isolated query was executed on a server session: queries on isolated classes, or queries set to use exclusive connections, must not be executed on a ServerSession or in CMP outside of a transaction.
    I assume marking the descriptor as read-only will avoid registering in UOW, but I want to make sure that this is the case while using CursoredStream.
    We are running in OC4J (OAS 10.1.3.4) with bean-managed transactions.
    Please suggest.
    Thanks
    -Raam
    Edited by: Raam on Apr 2, 2009 1:45 PM
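
    For reference, here is a minimal sketch of the query setup this thread keeps circling around, using only the TopLink calls already named above (ReadAllQuery, useCursoredStream, dontMaintainCache, releasePrevious). Note that the original post reports intermittent PrimaryKeyExceptions with dontMaintainCache, so treat this as an experiment rather than a fix; Request and session are the names from the snippet above:
    ReadAllQuery raq = new ReadAllQuery(Request.class);
    raq.useCursoredStream(50, 50);    // stream results in pages of 50
    raq.dontMaintainCache();          // skip identity-map registration for a read-only pass
    CursoredStream cursor = (CursoredStream) session.executeQuery(raq);
    try {
        while (!cursor.atEnd()) {
            Request request = (Request) cursor.read();
            // ... process the row ...
            cursor.releasePrevious(); // drop references to rows already processed
        }
    } finally {
        cursor.close();
    }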

  • Problem : Application run out of memory while processing image I/O.

    Hi,
    I have coded an application (utility) for creating scaled JPEG images (thumbnails) and writing (saving) them to files. I tried this with both the imageio package of JDK 1.4 and the com.sun.image.codec.jpeg.* classes of an earlier version. The application works with both versions, but when I execute it for more images (say more than 10), the system becomes very slow and the application starts throwing an OutOfMemoryError. Do objects of the Java image classes consume too much memory? How can I solve this problem? (My system runs a P4 processor with 256 MB RAM.)
    Awaiting solution,
    Thanx and regards,
    IB

    So I am not alone in this...
    I wrote the JPEG thumbnail generating code over a year ago, and now that I am ready to use it in the finished app I have noticed that each process (as monitored using Forte 3's execution window) that instantiates a new ImageIcon object is never killed by the JVM. As more and more processes are started for distinct tasks, they build up endlessly. Here's the method, which is almost identical to the test code provided in Sun's tutorial for generating thumbnails from JPEG and GIF image files (http://developer.java.sun.com/developer/TechTips/1999/tt1021.html):
    public boolean createThumbImage(int maxDim) throws EntegraEntityException {
        /*
         * Reads an image from a file and creates a thumbnail in another file.
         * this.getImagePath() - the name of the image file.
         * thumbdest - the name of the thumbnail file; will be created if necessary.
         * maxDim - the width and height of the thumbnail must be maxDim pixels or less.
         */
        String thumbsource = this.path + this.filename;
        String thumbdest = this.path + "thumbs_" + maxDim + File.separator
                + Utilities.replaceString(Utilities.replaceString(this.filename, " ", "_", "ALL").trim(),
                        "[.][a-zA-Z]*", ".jpg", "ALL");
        // System.out.println("thumbsource in createThumbImage(): " + thumbsource + "\n");
        // System.out.println("thumbdest in createThumbImage(): " + thumbdest + "\n");
        try {
            // Get the image from a file.
            java.awt.Image inImage = new ImageIcon(thumbsource).getImage();
            // Determine the scale.
            double scale = (double) maxDim / (double) inImage.getHeight(null);
            if (inImage.getWidth(null) > inImage.getHeight(null)) {
                scale = (double) maxDim / (double) inImage.getWidth(null);
            }
            // Determine the size of the new image; one dimension should equal maxDim.
            int scaledW = (int) (scale * inImage.getWidth(null));
            int scaledH = (int) (scale * inImage.getHeight(null));
            // Create an image buffer in which to paint.
            BufferedImage outImage = new BufferedImage(scaledW, scaledH, BufferedImage.TYPE_INT_RGB);
            // Set the scale; if the image is smaller than the desired size, don't bother scaling.
            AffineTransform tx = new AffineTransform();
            if (scale < 1.0d) {
                tx.scale(scale, scale);
            }
            // Paint the image.
            Graphics2D g2d = outImage.createGraphics();
            g2d.drawImage(inImage, tx, null);
            g2d.dispose();
            // JPEG-encode the image and write it to file.
            Utilities.fileMakeDirs(this.path + "thumbs_" + maxDim + File.separator);
            Utilities.fileMakeFile(thumbdest);
            OutputStream os = new FileOutputStream(thumbdest);
            JPEGImageEncoder encoder = JPEGCodec.createJPEGEncoder(os);
            encoder.encode(outImage);
            os.close();
            return true;
        } catch (IOException e) {
            throw new EntegraEntityException("IO exception creating thumbnail for Image object ID: " + this.getId() + " ::", e);
        } catch (EntegraFileIOException fi) {
            throw new EntegraEntityException("File IO exception creating thumbnail for Image object ID: " + this.getId() + " ::", fi);
        }
    }
    During my troubleshooting of the method execution yesterday, I discovered that the stuck process only occurs as a consequence of the line shown below:
    java.awt.Image inImage = new ImageIcon(thumbsource).getImage();
    This indicates to me that there is something wrong with how either the Image object or the ImageIcon file resources are allocated. At first I tried using the flush() method of the java.awt.Image class to release the resources, to no avail. Later I was able to confirm that the error does not occur for the default constructor of ImageIcon, only for the one that specifies the source path (as shown in the line above). This indicates that a file I/O stream is created and probably never released inside ImageIcon or Image, but since that stream is probably private to those core classes, we can't access it to close() it properly. I sure hope I am wrong on this, or that there is an alternative to using these classes.
    Please let me know if any of you have other ideas on how to quash this bug, or can otherwise find flaws in my logic for its occurrence.
    Regards,
    Sent2null
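
    Since the first post mentions the JDK 1.4 imageio package, one avenue worth testing is to avoid ImageIcon entirely: ImageIO.read() returns a BufferedImage and closes the stream it opened, and ImageIO.write() replaces the com.sun JPEG codec. A minimal sketch under those assumptions (the file arguments are hypothetical; error handling is left out):
    import java.awt.Graphics2D;
    import java.awt.Image;
    import java.awt.image.BufferedImage;
    import java.io.File;
    import java.io.IOException;
    import javax.imageio.ImageIO;

    public class ThumbnailSketch {
        // Scales src so its longer side is at most maxDim, then writes it as a JPEG.
        static void writeThumbnail(File src, File dest, int maxDim) throws IOException {
            BufferedImage in = ImageIO.read(src);  // no lingering ImageIcon resources
            double scale = Math.min(1.0, (double) maxDim / Math.max(in.getWidth(), in.getHeight()));
            int w = Math.max(1, (int) (scale * in.getWidth()));
            int h = Math.max(1, (int) (scale * in.getHeight()));
            BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
            Graphics2D g2d = out.createGraphics();
            g2d.drawImage(in.getScaledInstance(w, h, Image.SCALE_SMOOTH), 0, 0, null);
            g2d.dispose();
            ImageIO.write(out, "jpg", dest);       // replaces JPEGImageEncoder/JPEGCodec
        }
    }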

  • RT-Target running out of memory while writing to Network Stream

    Hey there,
    I have a program that transfers acquired data from the FPGA to the host PC. The RT VI reads the data from the DMA FIFO and writes it to a Network Stream (BlockDiagram.png).
    Now I am experiencing a phenomenon where the RT target fills its RAM until it is full, and then crashes.
    I have no idea why this happens: the buffer of the Network Stream is empty, all elements are read by the host, and there is no array being built by indexing or the like.
    Does anybody have an idea how I can handle this?
    Best regards,
    Solved!
    Go to Solution.
    Attachments:
    BlockDiagram.png 43 KB
    DSM.png 78 KB

    Hey there,
    I got the problem solved: the buffer of the sender endpoint was too big. Unlike this problem (http://digital.ni.com/public.nsf/allkb/784CB8093AE30551862579AB0050C429), it wasn't memory growth caused by dynamic memory allocation;
    it's just the normal behavior of the cRIO while allocating the buffer memory. After setting the sender buffer much smaller, memory growth stops at a specific level (DSM2.png).
    It's only strange that memory usage grows that slowly despite the endpoint being created with a preallocated buffer, while usage drops rapidly when the VI execution stops...
    Best regards...
    Attachments:
    DSM2.png 63 KB

  • While creating DB using DBCA getting ORA-27102: out of memory in Linux

    Hi All,
    I am working on Oracle 11.2.0.3 on Red Hat Linux. I am getting the error "ORA-27102: out of memory" while creating a new database using DBCA.
    Below are the DB and OS details. Please check them and let me know what I need to do to overcome this issue.
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    CORE 11.2.0.3.0 Production
    TNS for Linux: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production
    $uname -a
    Linux greenlantern1a 2.6.18-92.1.17.0.1.el5 #1 SMP Tue Nov 4 17:10:53 EST 2008 x86_64 x86_64 x86_64 GNU/Linux
    $cat /etc/sysctl.conf
    # Controls the maximum shared segment size, in bytes
    kernel.shmmax = 68719476736
    # Controls the maximum number of shared memory segments, in pages
    kernel.shmall = 4294967296
    kernel.shmall = 2097152
    kernel.shmmax = 4294967295
    kernel.shmmni = 4096
    kernel.sem = 250 32000 100 128
    net.core.rmem_default = 4194304
    net.core.wmem_default = 262144
    net.core.rmem_max = 4194304
    net.core.wmem_max = 1048576
    fs.file-max = 6815744
    fs.aio-max-nr = 1048576
    net.ipv4.ip_local_port_range = 9000 65500
    $free -g
    total used free shared buffers cached
    Mem: 94 44 49 0 0 31
    -/+ buffers/cache: 12 81
    Swap: 140 6 133
    $ulimit -l
    32
    $ipcs -lm
    Shared Memory Limits
    max number of segments = 4096
    max seg size (kbytes) = 4194303
    max total shared memory (kbytes) = 8388608
    min seg size (bytes) = 1
    A trace file was also created under the trace location, and it suggests changing an shm parameter value, but I am not sure which parameter (shmmax or shmall) to modify, or to what value.
    Below is the trace file info:
    Trace file /u02/app/oracle/diag/rdbms/beaconpt/beaconpt/trace/beaconpt_ora_9324.trc
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    ORACLE_HOME = /u02/app/oracle/product/11.2.0.3
    System name: Linux
    Node name: greenlantern1a
    Release: 2.6.18-92.1.17.0.1.el5
    Version: #1 SMP Tue Nov 4 17:10:53 EST 2008
    Machine: x86_64
    Instance name: beaconpt
    Redo thread mounted by this instance: 0 <none>
    Oracle process number: 0
    Unix process pid: 9324, image: oracle@greenlantern1a
    *** 2012-02-02 11:09:53.539
    Switching to regular size pages for segment size 33554432
    Switching to regular size pages for segment size 4261412864
    skgm warning: ENOSPC creating segment of size 00000000fe000000
    fix shm parameters in /etc/system or equivalent
    Please let me know which kernel parameter values I need to change to make this work.
    Thanks in advance.

    Yes, it is the same question, but I didn't get a solution there and am still looking for help. The solution provided in the last post is not working; I get the same error even with less than 20% of the memory. Please let me know how to overcome this issue.
    Thanks
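
    As a side note on reading that trace: the segment size in the skgm warning is hexadecimal and matches the earlier "Switching to regular size pages for segment size 4261412864" line. A quick arithmetic check (plain Java, nothing Oracle-specific):
    long segBytes = 0xfe000000L;              // 00000000fe000000 from the skgm warning
    System.out.println(segBytes);             // 4261412864 bytes, just under 4 GB
    System.out.println(segBytes / (1 << 20)); // 4064 MB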

  • Adobe Indesign CC out of memory error/crash while scrolling

    As I am moving about in my InDesign document (on a Mac), I get an error message saying I'm "OUT OF MEMORY" while I scroll with either the space bar or the document scroll bar. I just applied a new Adobe patch for InDesign and have restarted my computer several times.
    I need some answers, please.
    Thanks,
    Moofia66

    Please visit the InDesign forum and look for the discussions on issues with InDesign CC. I'm locking this thread rather than move it there because there are already enough discussions there.

  • Blackberry Desktop Software "Out of Memory Message"

    Very new to BlackBerry. Have had it two days. Dig it so far.
    On a Mac using the BlackBerry Desktop Software, tonight when updating the BlackBerry through this software I started getting the message: "This device has run out of memory while synchronizing your Notes and Memos."
    But Device Manager states that I have 1.2 GB of free memory, and the BlackBerry itself states I have 1.6 GB free, so I have plenty of memory. What's the cause of this error message, and how do I fix it?
    Thanks.

    gordonscobie wrote:
    I have been getting an 'out of memory' message when using twitter app. The message appears inside the twitter app at the bottom of the screen. My phone has plenty of memory - so does anyone know what is going on and what I can do about it. When I get the message the app stops working and I need to close and reopen it.
    Have you tried uninstalling and reinstalling the app? or at least a hard reset?

  • Ora-2702 out of memory

    Hello,
    I am getting ora-2702 out of memory while configuring my SGA to 4 GB on Linux (OEL 5.2) 32-bit for my 11g database.
    Can anyone tell me what changes need to be made in sysctl or any other file?
    The database version is 11gR2. The system has 16 GB of RAM.
    Regards
    Nidhish

    02702, 00000, "osnoraenv: error translating orapop image name"
    // *Cause:   ORACLE_HOME environment variable not set.
    // *Action:  Make sure that the ORACLE_HOME environment variable has been
    //           properly set and exported.
    Please use COPY & PASTE showing exactly what you do and how Oracle actually responds.
