Data Cartridge Implementation - PGA Memory Problem

Hi,
I have modified the Data Cartridge implementation explained in the article at http://www.oracle-developer.net/display.php?id=422.
More precisely, I have modified it to build dynamically not only the type returned, but also the select statement.
The problem I have is that the PGA memory keeps growing throughout the execution of my package, and when execution ends the PGA is not released.
I'm wondering if I'm missing something in ODCITableClose.
Object spec:
create or replace type t_conf_obj as object
(
  atype anytype, -- transient record type
  static function ODCITableDescribe(rtype                  out anytype,
                                    p_cz_conf_categoria in     number,
                                    p_id_profilo        in     number   default null,
                                    p_cz_conf_livello   in     number   default null,
                                    p_dt_rif            in     varchar2 default to_char(sysdate,'ddmmyyyy')
                                   ) return number,
  static function ODCITablePrepare(sctx                   out t_conf_obj,
                                   tf_info             in     sys.ODCITabFuncInfo,
                                   p_cz_conf_categoria in     number,
                                   p_id_profilo        in     number   default null,
                                   p_cz_conf_livello   in     number   default null,
                                   p_dt_rif            in     varchar2 default to_char(sysdate,'ddmmyyyy')
                                  ) return number,
  static function ODCITableStart(sctx                in out t_conf_obj,
                                 p_cz_conf_categoria in     number,
                                 p_id_profilo        in     number   default null,
                                 p_cz_conf_livello   in     number   default null,
                                 p_dt_rif            in     varchar2 default to_char(sysdate,'ddmmyyyy')
                                ) return number,
  member function ODCITableFetch(self  in out t_conf_obj,
                                 nrows in     number,
                                 rws      out anydataset
                                ) return number,
  member function ODCITableClose(self in t_conf_obj
                                ) return number
);
/

Object body:
create or replace type body t_conf_obj as
  static function ODCITableDescribe(rtype                  out anytype,
                                    p_cz_conf_categoria in     number,
                                    p_id_profilo        in     number   default null,
                                    p_cz_conf_livello   in     number   default null,
                                    p_dt_rif            in     varchar2 default to_char(sysdate,'ddmmyyyy')
                                   ) return number
  is
    r_sql   pck_conf_calcolo.rt_dynamic_sql;
    v_rtype anytype;
    stmt    dbms_sql.varchar2a;
  begin
    -- here I build my select statement...
    if p_cz_conf_livello is null and p_id_profilo is null then
      stmt(1) := pck_conf_calcolo.GetSQLConfCategoria(p_cz_conf_categoria);
    elsif p_cz_conf_livello is not null then
      stmt(1) := pck_conf_calcolo.GetSQLConfCategoria(p_cz_conf_categoria,
                                                      p_id_profilo,
                                                      p_cz_conf_livello,
                                                      p_dt_rif);
    else
      stmt(1) := pck_conf_calcolo.GetSQLConfCategoria(p_cz_conf_categoria,
                                                      p_id_profilo);
    end if;
    -- parse the sql and describe its format and structure.
    r_sql.cursor := dbms_sql.open_cursor;
    dbms_sql.parse(r_sql.cursor, stmt, stmt.first, stmt.last, false, dbms_sql.native);
    dbms_sql.describe_columns2(r_sql.cursor,
                               r_sql.column_cnt,
                               r_sql.description);
    dbms_sql.close_cursor(r_sql.cursor);
    -- create the anytype record structure from this sql structure.
    anytype.begincreate(dbms_types.typecode_object, v_rtype);
    for i in 1 .. r_sql.column_cnt loop
      v_rtype.addattr(r_sql.description(i).col_name,
                      case
                      --<>--
                        when r_sql.description(i).col_type in (1, 96, 11, 208) then
                         dbms_types.typecode_varchar2
                      --<>--
                        when r_sql.description(i).col_type = 2 then
                         dbms_types.typecode_number
                        when r_sql.description(i).col_type in (112) then
                         dbms_types.typecode_clob
                      --<>--
                        when r_sql.description(i).col_type = 12 then
                         dbms_types.typecode_date
                      --<>--
                        when r_sql.description(i).col_type = 23 then
                         dbms_types.typecode_raw
                      --<>--
                        when r_sql.description(i).col_type = 180 then
                         dbms_types.typecode_timestamp
                      --<>--
                        when r_sql.description(i).col_type = 181 then
                         dbms_types.typecode_timestamp_tz
                      --<>--
                        when r_sql.description(i).col_type = 182 then
                         dbms_types.typecode_interval_ym
                      --<>--
                        when r_sql.description(i).col_type = 183 then
                         dbms_types.typecode_interval_ds
                      --<>--
                        when r_sql.description(i).col_type = 231 then
                         dbms_types.typecode_timestamp_ltz
                      --<>--
                      end,
                      r_sql.description(i).col_precision,
                      r_sql.description(i).col_scale,
                      r_sql.description(i).col_max_len,
                      r_sql.description(i).col_charsetid,
                      r_sql.description(i).col_charsetform);
    end loop;
    v_rtype.endcreate;
    -- now we can use this transient record structure to create a table type
    -- of the same. this will create a set of types on the database for use
    -- by the pipelined function...
    anytype.begincreate(dbms_types.typecode_table, rtype);
    rtype.setinfo(null,
                  null,
                  null,
                  null,
                  null,
                  v_rtype,
                  dbms_types.typecode_object,
                  0);
    rtype.endcreate();
    return odciconst.success;
  exception
    when others then
      -- make sure the parse cursor is not leaked, then indicate that
      -- an error has occurred somewhere.
      if dbms_sql.is_open(r_sql.cursor) then
        dbms_sql.close_cursor(r_sql.cursor);
      end if;
      return odciconst.error;
  end odcitabledescribe;
  static function ODCITablePrepare(sctx                   out t_conf_obj,
                                   tf_info             in     sys.ODCITabFuncInfo,
                                   p_cz_conf_categoria in     number,
                                   p_id_profilo        in     number   default null,
                                   p_cz_conf_livello   in     number   default null,
                                   p_dt_rif            in     varchar2 default to_char(sysdate,'ddmmyyyy')
                                  ) return number
  is
    r_meta pck_conf_calcolo.rt_anytype_metadata;
  begin
    -- we prepare the dataset that our pipelined function will return by
    -- describing the anytype that contains the transient record structure...
    r_meta.typecode := tf_info.rettype.getattreleminfo(1,
                                                       r_meta.precision,
                                                       r_meta.scale,
                                                       r_meta.length,
                                                       r_meta.csid,
                                                       r_meta.csfrm,
                                                       r_meta.type,
                                                       r_meta.name);
    -- using this, we initialise the scan context for use in this and
    -- subsequent executions of the same dynamic sql cursor...
    sctx := t_conf_obj(r_meta.type);
    return odciconst.success;
  end;
  static function ODCITableStart(sctx                in out t_conf_obj,
                                 p_cz_conf_categoria in     number,
                                 p_id_profilo        in     number   default null,
                                 p_cz_conf_livello   in     number   default null,
                                 p_dt_rif            in     varchar2 default to_char(sysdate,'ddmmyyyy')
                                ) return number
  is
    r_meta pck_conf_calcolo.rt_anytype_metadata;
    stmt    dbms_sql.varchar2a;
  begin
    -- here I build my select statement...
    if p_cz_conf_livello is null and p_id_profilo is null then
      stmt(1) := pck_conf_calcolo.GetSQLConfCategoria(p_cz_conf_categoria);
    elsif p_cz_conf_livello is not null then
      stmt(1) := pck_conf_calcolo.GetSQLConfCategoria(p_cz_conf_categoria,
                                                      p_id_profilo,
                                                      p_cz_conf_livello,
                                                      p_dt_rif);
    else
      stmt(1) := pck_conf_calcolo.GetSQLConfCategoria(p_cz_conf_categoria,
                                                      p_id_profilo);
    end if;
    -- we now describe the cursor again and use this and the described
    -- anytype structure to define and execute the sql statement...
    pck_conf_calcolo.r_sql.cursor := dbms_sql.open_cursor;
    dbms_sql.parse(pck_conf_calcolo.r_sql.cursor, stmt, stmt.first, stmt.last, false, dbms_sql.native);
    dbms_sql.describe_columns2(pck_conf_calcolo.r_sql.cursor,
                               pck_conf_calcolo.r_sql.column_cnt,
                               pck_conf_calcolo.r_sql.description);
    for i in 1 .. pck_conf_calcolo.r_sql.column_cnt loop
      -- get the anytype attribute at this position...
      r_meta.typecode := sctx.atype.getattreleminfo(i,
                                                    r_meta.precision,
                                                    r_meta.scale,
                                                    r_meta.length,
                                                    r_meta.csid,
                                                    r_meta.csfrm,
                                                    r_meta.type,
                                                    r_meta.name);
      case r_meta.typecode
      --<>--
        when dbms_types.typecode_varchar2 then
          dbms_sql.define_column(pck_conf_calcolo.r_sql.cursor, i, '', 32767);
          --<>--
        when dbms_types.typecode_number then
          dbms_sql.define_column(pck_conf_calcolo.r_sql.cursor,
                                 i,
                                 cast(null as number));
          --<>--
        when dbms_types.typecode_date then
          dbms_sql.define_column(pck_conf_calcolo.r_sql.cursor,
                                 i,
                                 cast(null as date));
          --<>--
        when dbms_types.typecode_raw then
          dbms_sql.define_column_raw(pck_conf_calcolo.r_sql.cursor,
                                     i,
                                     cast(null as raw),
                                     r_meta.length);
          --<>--
        when dbms_types.typecode_timestamp then
          dbms_sql.define_column(pck_conf_calcolo.r_sql.cursor,
                                 i,
                                 cast(null as timestamp));
          --<>--
        when dbms_types.typecode_timestamp_tz then
          dbms_sql.define_column(pck_conf_calcolo.r_sql.cursor,
                                 i,
                                 cast(null as timestamp with time zone));
          --<>--
        when dbms_types.typecode_timestamp_ltz then
          dbms_sql.define_column(pck_conf_calcolo.r_sql.cursor,
                                 i,
                                 cast(null as timestamp with local time zone));
          --<>--
        when dbms_types.typecode_interval_ym then
          dbms_sql.define_column(pck_conf_calcolo.r_sql.cursor,
                                 i,
                                 cast(null as interval year to month));
          --<>--
        when dbms_types.typecode_interval_ds then
          dbms_sql.define_column(pck_conf_calcolo.r_sql.cursor,
                                 i,
                                 cast(null as interval day to second));
          --<>--
        when dbms_types.typecode_clob then
          --<>--
          dbms_sql.define_column(pck_conf_calcolo.r_sql.cursor,
                                 i,
                                 cast(null as clob));
          --<>--
      end case;
    end loop;
    -- the cursor is prepared according to the structure of the type we wish
    -- to fetch it into. we can now execute it and we are done for this method...
    pck_conf_calcolo.r_sql.execute := dbms_sql.execute(pck_conf_calcolo.r_sql.cursor);
    return odciconst.success;
  end;
  member function ODCITableFetch(self  in out t_conf_obj,
                                 nrows in     number,
                                 rws      out anydataset
                                ) return number
  is
    type rt_fetch_attributes is record(
      v2_column    varchar2(32767),
      num_column   number,
      date_column  date,
      clob_column  clob,
      raw_column   raw(32767),
      raw_error    number,
      raw_length   integer,
      ids_column   interval day to second,
      iym_column   interval year to month,
      ts_column    timestamp,
      tstz_column  timestamp with time zone,
      tsltz_column timestamp with local time zone,
      cvl_offset   integer := 0,
      cvl_length   integer);
    r_fetch rt_fetch_attributes;
    r_meta  pck_conf_calcolo.rt_anytype_metadata;
  begin
    rws := null;
    if dbms_sql.fetch_rows(pck_conf_calcolo.r_sql.cursor) > 0 then
      -- first we describe our current anytype instance (self.atype) to determine
      -- the number and types of the attributes...
      r_meta.typecode := self.atype.getinfo(r_meta.precision,
                                            r_meta.scale,
                                            r_meta.length,
                                            r_meta.csid,
                                            r_meta.csfrm,
                                            r_meta.schema,
                                            r_meta.name,
                                            r_meta.version,
                                            r_meta.attr_cnt);
      -- we can now begin to piece together our returning dataset. we create an
      -- instance of anydataset and then fetch the attributes off the dbms_sql
      -- cursor using the metadata from the anytype. longs are converted to clobs...
      anydataset.begincreate(dbms_types.typecode_object, self.atype, rws);
      rws.addinstance();
      rws.piecewise();
      for i in 1 .. pck_conf_calcolo.r_sql.column_cnt loop
        r_meta.typecode := self.atype.getattreleminfo(i,
                                                      r_meta.precision,
                                                      r_meta.scale,
                                                      r_meta.length,
                                                      r_meta.csid,
                                                      r_meta.csfrm,
                                                      r_meta.attr_type,
                                                      r_meta.attr_name);
        case r_meta.typecode
        --<>--
          when dbms_types.typecode_varchar2 then
            dbms_sql.column_value(pck_conf_calcolo.r_sql.cursor,
                                  i,
                                  r_fetch.v2_column);
            rws.setvarchar2(r_fetch.v2_column);
            --<>--
          when dbms_types.typecode_number then
            dbms_sql.column_value(pck_conf_calcolo.r_sql.cursor,
                                  i,
                                  r_fetch.num_column);
            rws.setnumber(r_fetch.num_column);
            --<>--
          when dbms_types.typecode_date then
            dbms_sql.column_value(pck_conf_calcolo.r_sql.cursor,
                                  i,
                                  r_fetch.date_column);
            rws.setdate(r_fetch.date_column);
            --<>--
          when dbms_types.typecode_raw then
            dbms_sql.column_value_raw(pck_conf_calcolo.r_sql.cursor,
                                      i,
                                      r_fetch.raw_column,
                                      r_fetch.raw_error,
                                      r_fetch.raw_length);
            rws.setraw(r_fetch.raw_column);
            --<>--
          when dbms_types.typecode_interval_ds then
            dbms_sql.column_value(pck_conf_calcolo.r_sql.cursor,
                                  i,
                                  r_fetch.ids_column);
            rws.setintervalds(r_fetch.ids_column);
            --<>--
          when dbms_types.typecode_interval_ym then
            dbms_sql.column_value(pck_conf_calcolo.r_sql.cursor,
                                  i,
                                  r_fetch.iym_column);
            rws.setintervalym(r_fetch.iym_column);
            --<>--
          when dbms_types.typecode_timestamp then
            dbms_sql.column_value(pck_conf_calcolo.r_sql.cursor,
                                  i,
                                  r_fetch.ts_column);
            rws.settimestamp(r_fetch.ts_column);
            --<>--
          when dbms_types.typecode_timestamp_tz then
            dbms_sql.column_value(pck_conf_calcolo.r_sql.cursor,
                                  i,
                                  r_fetch.tstz_column);
            rws.settimestamptz(r_fetch.tstz_column);
            --<>--
          when dbms_types.typecode_timestamp_ltz then
            dbms_sql.column_value(pck_conf_calcolo.r_sql.cursor,
                                  i,
                                  r_fetch.tsltz_column);
            rws.settimestampltz(r_fetch.tsltz_column);
            --<>--
          when dbms_types.typecode_clob then
            --<>--
            dbms_sql.column_value(pck_conf_calcolo.r_sql.cursor,
                                  i,
                                  r_fetch.clob_column);
            rws.setclob(r_fetch.clob_column);
            --<>--
        end case;
      end loop;
      -- our anydataset instance is complete. we end our create session...
      rws.endcreate();
    end if;
    return odciconst.success;
  end;
  member function ODCITableClose(self in t_conf_obj
                                ) return number
  is
  begin
    -- close the package-level cursor; guard against it already being closed
    -- (note: a PL/SQL record such as pck_conf_calcolo.r_sql cannot be assigned null)
    if dbms_sql.is_open(pck_conf_calcolo.r_sql.cursor) then
      dbms_sql.close_cursor(pck_conf_calcolo.r_sql.cursor);
    end if;
    return odciconst.success;
  end;
end;
/

We have Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 running on Windows 2003 SP2.
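For reference, this is how I am measuring the growth. The figures come from the standard v$mystat / v$statname views, run from the session that executes the pipelined function (nothing here is specific to the cartridge); running it before and after each execution shows whether the session actually gives memory back:

```sql
-- current and peak PGA for the current session
select sn.name,
       round(st.value / 1024 / 1024, 1) as mb
from   v$mystat st
       join v$statname sn on sn.statistic# = st.statistic#
where  sn.name in ('session pga memory', 'session pga memory max');
```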
Thanks in advance.
Riccardo.

I'm getting confused with so many answers!

Similar Messages

  • PGA memory problem - Oracle 10.2.0.4 on windows 2003

    Hi,
    I have recently started work at a new company and we are running Oracle 10g (10.2.0.4, Enterprise Edition) on Windows 2003 (Standard Edition). The server has 4GB of RAM (and we have modified boot.ini to include the /3GB switch).
    RE: SGA/PGA, we have the following Oracle parameters set:
    sga_target 1G
    pga_aggregate_target 194M
    The employees tell me that they frequently "have to reboot the database" because of ORA-4030 and ORA-4031 problems. Looking at taskmgr on the server, Oracle is using "too much" memory (~3Gb). New sessions cannot connect etc. and they restart the database. Being a DBA (experience in UNIX, not Windows) I'm not so keen on this "solution" and am trying to find out what's happening.
    When this problem occurred yesterday, before allowing the reboot, I bought myself some time to have a little dig around in the database. In v$sesstat I saw one process that had a value of over 1GB for "session pga memory". Memory usage on the server for oracle.exe was (as predicted) ~1GB over the "expected" 1.2Gb value (of SGA+PGA agg target). So, part 1 of my question is:
    - Is this "normal" behaviour for Oracle to allow a process to go so wild on the PGA?
    (I understood that Oracle would attempt to keep total PGA memory close to the value of PGA_AGGREGATE_TARGET. I believe I read in the documentation that it could allow PGA memory to increase "up to 20% over this value", but please don't quote me on that - I can't find it again.)
    Part 2 of this problem is that sessions "collect" in the database and do not release their PGA memory, leading to a slow build-up of memory until the errors are encountered. I believe Dead Connection Detection (DCD) is not working here (sqlnet.expire_time=1 is set on the server but appears to do nothing). I've started reading docs/notes on this and it seems that DCD is not reliable on Windows. Metalink Doc 151972.1 suggests testing and adjusting some underlying TCP/IP settings in the O/S kernel (I'm not even sure how to do this in Windows yet, let alone whether it's something I want to get involved with!). So:
    - I'm wondering if anyone has any tips for this (killing off dead connections, getting DCD working in Windows 2003 etc.)?  Any experiences, tips welcome here!!
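    For what it's worth, this is roughly the query I used to spot the 1GB session - standard v$sesstat / v$statname / v$session joins, ranking sessions by their current PGA so a runaway session can be found without a reboot:

```sql
-- sessions ranked by current PGA allocation
select s.sid, s.serial#, s.username, s.program,
       round(st.value / 1024 / 1024) as pga_mb
from   v$sesstat st
       join v$statname sn on sn.statistic# = st.statistic#
       join v$session  s  on s.sid = st.sid
where  sn.name = 'session pga memory'
order  by st.value desc;
```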
    Thanks in advance.
    Regards,
    Ados

    user647632 wrote:
    (By the way, can anyone recommend how to sort the formatting of these results please?!!)You can find all by clicking the Oracle Forum FAQ
    </br>
    Here is my PGASTAT result. Have a look at the values.
    SQL> column name  format a60
    SQL> column value format 9,999,999,999,999,999
    SQL> select * from gv$pgastat order by inst_id, name;
       INST_ID NAME                                                                          VALUE UNIT
             1 PGA memory freed back to OS                                         202,362,322,944 bytes
             1 aggregate PGA auto target                                             1,831,209,984 bytes
             1 aggregate PGA target parameter                                        2,147,483,648 bytes
             1 bytes processed                                                     287,247,907,840 bytes
             1 cache hit percentage                                                             68 percent
             1 extra bytes read/written                                            133,790,002,176 bytes
             1 global memory bound                                                     214,743,040 bytes
             1 max processes count                                                              48
             1 maximum PGA allocated                                                 1,708,733,440 bytes
             1 maximum PGA used for auto workareas                                   1,112,871,936 bytes
             1 maximum PGA used for manual workareas                                       271,360 bytes
             1 over allocation count                                                             0
             1 process count                                                                    42
             1 recompute count (total)                                                     136,756
             1 total PGA allocated                                                     328,158,208 bytes
             1 total PGA inuse                                                         196,502,528 bytes
             1 total PGA used for auto workareas                                        81,608,704 bytes
             1 total PGA used for manual workareas                                               0 bytes
             1 total freeable PGA memory                                                96,927,744 bytes
    19 rows selected.
    SQL>
    SQL> column BYTES_PROCESSED format 9,999,999,999,999,999
    SQL> column EST_RW_EXTRA_BYTES format 9,999,999,999,999,999
    SQL> select inst_id,round(pga_target_for_estimate/1024/1024) as target_size_MB,
      2                bytes_processed,estd_extra_bytes_rw as est_rw_extra_bytes,
      3                estd_pga_cache_hit_percentage as est_hit_pct,
      4                estd_overalloc_count as est_overalloc
      5  from gv$pga_target_advice  order by inst_id,target_size_mb;
       INST_ID TARGET_SIZE_MB        BYTES_PROCESSED     EST_RW_EXTRA_BYTES EST_HIT_PCT EST_OVERALLOC
             1            256        285,418,388,480        188,648,610,816          60             4
             1            512        285,418,388,480        131,006,145,536          69             0
             1           1024        285,418,388,480         92,476,995,584          76             0
             1           1536        285,418,388,480         91,536,565,248          76             0
             1           2048        285,418,388,480         72,373,725,184          80             0
             1           2458        285,418,388,480         68,650,139,648          81             0
             1           2867        285,418,388,480         68,650,139,648          81             0
             1           3277        285,418,388,480         68,650,139,648          81             0
             1           3686        285,418,388,480         68,650,139,648          81             0
             1           4096        285,418,388,480         68,650,139,648          81             0
             1           6144        285,418,388,480         68,650,139,648          81             0
       INST_ID TARGET_SIZE_MB        BYTES_PROCESSED     EST_RW_EXTRA_BYTES EST_HIT_PCT EST_OVERALLOC
             1           8192        285,418,388,480         68,650,139,648          81             0
             1          12288        285,418,388,480         68,650,139,648          81             0
             1          16384        285,418,388,480         68,650,139,648          81             0
    14 rows selected.
    SQL>
    SQL> show parameters pga
    NAME                                 TYPE        VALUE
    pga_aggregate_target                 big integer 2G
    SQL> show parameters sga_max
    NAME                                 TYPE        VALUE
    sga_max_size                         big integer 2G
    SQL> show parameters sga_target
    NAME                                 TYPE        VALUE
    sga_target                           big integer 2G
    SQL>

  • Oracle Com Data Cartridge

    I have installed the Oracle COM Data Cartridge, but I have a problem with creating objects.
    While calling the ORDCom.CreateObject method I always get an error represented by HRESULT value -2147467259 (0x80004005). Does anybody know what may be the cause of this error?

    Hello Wei Xiong,
    I have the same error when I try to configure the Oracle CQL Processor Table Source according to:
    http://docs.oracle.com/cd/E23943_01/dev.1111/e14301/processorcql.htm#CIHCCADG
    I've configured the table source and got the same error on deployment:
    <The application context "check_entrance" could not be started: org.springframework.beans.FatalBeanException: Error in context lifecycle initialization; nested exception is com.bea.wlevs.ede.api.StatementException: Could not start rule [DBEventQuery] due to error: java.lang.ClassNotFoundException: weblogic.jdbc.wrapper.PoolConnection
    org.springframework.beans.FatalBeanException: Error in context lifecycle initialization; nested exception is com.bea.wlevs.ede.api.StatementException: Could not start rule [DBEventQuery] due to error: java.lang.ClassNotFoundException: weblogic.jdbc.wrapper.PoolConnection
         at com.bea.wlevs.spring.ApplicationContextLifecycle.onApplicationEvent(ApplicationContextLifecycle.java:145)
    It looks like the same issue.
    I've added zip file of my Eclipse project and config.xml of my CEP domain to e-mail.
    Thank you !
    Regards,
    Dmitry

  • Why is data loading running out of PGA memory?

    Hi folks.
    I've got a question that I haven't been able to find an answer to myself so I'll give this forum a shot :).
    So here's the deal:
    We have a 9.2 db that we're using as a base for a reporting tool (Business Objects). The tables in this database are being filled by a "special group" - we can call them The Gang - whose only job is to fill our tables with data (ok, before anyone gets started - I'm not in charge of company organization :) ). They're like a human ETL tool.
    We have this table containing about 26 million rows that has to be transferred from another database (I think it's OpenVMS or something) to ours. To load this table The Gang is using some kind of load program written in Java. This import keeps failing and I got curious why. I asked the one who wrote the loading program and he told me that he opens a cursor in each database and then copies the contents row by row. It commits every 100,000 rows. When the load is running it gets to about 6 million rows and then we get a PGA memory error.
    So I guess my question is: is there any other, more efficient way to do this load? I've raised the pga_max_size from 25Mb to 200Mb (though the advisor pointed more towards 2GB of RAM), but our database is not on a dedicated machine (and we're not in control of the actual server either, by the way), so I don't want to raise this parameter too high. What could be the problem and what should I check?
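    For illustration, a batched PL/SQL load along these lines keeps the memory used per batch bounded instead of holding everything row by row in the client - this is only a sketch, and src_table, dest_table and remote_db are made-up names:

```sql
declare
  cursor c is
    select * from src_table@remote_db;  -- hypothetical source over a db link
  type t_rows is table of c%rowtype;
  l_rows t_rows;
begin
  open c;
  loop
    -- fetch in bounded batches so memory usage stays roughly constant
    fetch c bulk collect into l_rows limit 1000;
    exit when l_rows.count = 0;
    forall i in 1 .. l_rows.count
      insert into dest_table values l_rows(i);
    commit;
  end loop;
  close c;
end;
/
```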

    Wow. Thanks for the quick reply! :D

    > > They're like a human ETL tool.
    > Is that the title on business cards as well? :)
    Maybe it should be. :)

    > > gets to about 6 million rows and then we get a PGA memory error.
    > And the error is...?
    I'm not at work right now, but I'll have the error code in a short while by mail from my colleague.

    > > So I guess my question is - is there any other, more efficient way, to make this load?
    > Are you saying that there's more problems than this pga error?
    There were other errors in the beginning, mostly because the people (not us) who installed the database from the beginning were not given any input on the use of the database, so I guess they just took standard values for everything. Then, when it started to get filled with data to the equivalent of production data, the segments started to blow up - since the IT dept that is in charge of the physical server doesn't like autoextents and stuff.

    > > I've raised the pga_max_size from 25Mb to 200Mb (though the advisor
    > You could check v$pgastat
    Yeah, I've checked it and also the advisors in the database. I've looked in several books as well, but I haven't been able to find out why the PGA should be a problem at all when using cursors.
    Is there any data about each opening of a cursor that gets saved in the PGA during the session? Is it "bad" to open and close cursors too often, for example? Should you open one cursor and leave it open for the whole transfer?
    I'm not a programmer so I don't really know.
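    As a side note on the cursor question: the number of cursors a session currently holds open can at least be checked with the standard v$open_cursor view, which helps tell cursor leaks apart from genuine workarea memory:

```sql
-- open cursors per session
select sid, count(*) as open_cursors
from   v$open_cursor
group  by sid
order  by open_cursors desc;
```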

  • How do I store data permanently in phone memory?

    How do I store data permanently in phone memory?
    With RecordStore, records are removed when the MIDlet is removed.
    Using a resource file from the JAR (getResourceAsStream) doesn't solve the problem,
    because the resource is in the MIDlet JAR.
    I want to use data from the previous MIDlet after installing a new version of the same MIDlet.
    How do I write and read data outside of the MIDlet suite?

    As you already found out, it doesn't work with MIDP 1.0, because of the sandbox model.
    My (weird) suggestion: Make a server-based backup procedure. You implement a functionality in your MIDlet to connect to the server and backup or restore the data via HTTP. Before updating, the MIDlet uploads the data, and after updating, the new MIDlet downloads the data. But then you have to implement a user database on the server...
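    The payload side of that server-backup suggestion can be sketched in plain Java so it runs anywhere. The class, method names, and the length-prefixed record format below are all assumptions for illustration; on the device the records would come from RecordStore via a RecordEnumeration, and the resulting blob would travel in the body of an HTTP POST.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Hypothetical wire format for the backup idea: pack each RMS record
// (a byte[]) into one length-prefixed stream that the old MIDlet would
// upload, and unpack it again in the new MIDlet after the download.
public class RecordBackup {

    // Pack records as: count, then (length, bytes) per record.
    static byte[] pack(byte[][] records) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeInt(records.length);
        for (byte[] rec : records) {
            out.writeInt(rec.length);
            out.write(rec);
        }
        out.flush();
        return bos.toByteArray();
    }

    // Reverse of pack(): restore the records after reinstalling.
    static byte[][] unpack(byte[] blob) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(blob));
        byte[][] records = new byte[in.readInt()][];
        for (int i = 0; i < records.length; i++) {
            records[i] = new byte[in.readInt()];
            in.readFully(records[i]);
        }
        return records;
    }
}
```

    Both ends only use java.io, which is available in MIDP's CLDC subset, so the same format works on the phone and on the server.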

  • PGA memory increases indefinitely

    Hello.
    I'm using Oracle XE and doing some load tests on a Java application using JDBC as the connector.
    During the load, 6 data tables grow, but every 3 minutes a scheduled job purges everything.
    My problem is that my PGA memory size increases indefinitely and, after 12 hours, I exceed 1 GB of memory in total and Oracle stops responding.
    I run about 40 queries per second.
    Thank you very much !
    Bruno

    Sounds like some pretty serious testing. You might want to try backing off some of the workload and get a successful test baseline established. Memory might be the fastest storage method, but it's not infinite - even more so when using XE with its 1 GB / 1 CPU limits.
    What are you trying to accomplish, what do you want the database to do, and what are you asking the database to do? Those aren't always quite the same things ... ;)

  • Oracle CEP JDBC Data Cartridge

    Hello,
    I'm trying to implement the Oracle CEP JDBC Data Cartridge
    according to:
    http://docs.oracle.com/cd/E23943_01/apirefs.1111/e12048/datacartjdbc.htm#CIHCEFBH
    The problem is that it fails on deployment with the following error:
    <Exception thrown from prepare method com.oracle.cep.cartridge.jdbc.JdbcCartridgeContext.checkCartridgeContextConfig.
    java.lang.AssertionError: java.lang.ClassNotFoundException: weblogic.jdbc.wrapper.PoolConnection
    I've added the file that contains this class (com.bea.core.datasource6_1.10.0.0.jar) to the classpath,
    but I get the same error.
    Any help would be appreciated.
    Regards,
    Dmitry

    Hello Wei Xiong,
    I get the same error when I try to configure an Oracle CQL Processor Table Source according to:
    http://docs.oracle.com/cd/E23943_01/dev.1111/e14301/processorcql.htm#CIHCCADG
    I've configured the table source and got the same error on deployment:
    <The application context "check_entrance" could not be started: org.springframework.beans.FatalBeanException: Error in context lifecycle initialization; nested exception is com.bea.wlevs.ede.api.StatementException: Could not start rule [DBEventQuery] due to error: java.lang.ClassNotFoundException: weblogic.jdbc.wrapper.PoolConnection
    org.springframework.beans.FatalBeanException: Error in context lifecycle initialization; nested exception is com.bea.wlevs.ede.api.StatementException: Could not start rule [DBEventQuery] due to error: java.lang.ClassNotFoundException: weblogic.jdbc.wrapper.PoolConnection
         at com.bea.wlevs.spring.ApplicationContextLifecycle.onApplicationEvent(ApplicationContextLifecycle.java:145)
    It looks like the same issue.
    I've sent a zip file of my Eclipse project and the config.xml of my CEP domain by e-mail.
    Thank you !
    Regards,
    Dmitry

  • JPA @Lob annotation and memory problem

    Hi.
    Is there any way to overcome the memory problem with @Lob types?
    FetchType.LAZY means lazy, as it's spelled - am I right?
    But it doesn't mean any stream handling of big lobs, and my lobs are so big that physical memory can't hold them.
    @Entity
    public class Parent implements Serializable {
        @Lob private byte[] big; // what can I do if it is too big to hold in memory?
    }
    Is there any standard way to handle this problem?
    I found that OpenEJB serves a *@Persistent* annotation for InputStream/Reader for this purpose,
    but I want to know if there is another, more portable way.
    You know what? I heard from a JPA spec guy that this problem (maybe it's not a problem) won't even be considered in the next version (2.0?).
    How horrible...
    Could it be a good idea to split the Lob column into OneToMany children?
    This is the best output, for now, from my head.
    @Entity
    public class Parent implements Serializable {
        @OneToMany private Collection<Child> children;
    }
    @Entity
    public class Child implements Serializable {
        @Lob private byte[] chopped; // small enough to fit into memory
    }

    If the Lob is too big for your memory, then you are best off handling whatever processing you need to do on it through raw JDBC. If you're only interested in the first n bytes, you could probably map to something like this using a view that truncates the lob, and map your class to the view instead of the table.
    You should also be able to define your variable as Blob or Clob and get the locator directly. But the locator is tied to the connection, so it will not be usable after the connection is returned to the pool. I'm not sure how such a lob could be modeled if it can't be read into memory - perhaps as some special JPA Lob type that can return a stream on the data by re-querying the lob from the db.
    What exactly do you want to do with the lob?
    -- James : http://www.eclipselink.org
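    The raw-JDBC approach James suggests amounts to processing the lob as a stream in fixed-size chunks while the connection is still open. A rough sketch - the class and method names are made up for illustration, and a ByteArrayInputStream stands in for what you would really get from resultSet.getBlob(1).getBinaryStream():

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Sketch: process a large LOB as a stream in fixed-size chunks instead of
// materializing a byte[]. Only CHUNK bytes live in memory at any moment.
public class LobStreaming {

    static final int CHUNK = 8192;

    static long copyInChunks(InputStream lobStream, OutputStream sink) throws IOException {
        byte[] buf = new byte[CHUNK];
        long total = 0;
        int n;
        while ((n = lobStream.read(buf)) != -1) {
            sink.write(buf, 0, n);   // e.g. a file, a socket, a digest...
            total += n;
        }
        return total;
    }
}
```

    The key constraint from the answer still applies: all of this must happen before the connection goes back to the pool, because the locator dies with the connection.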

  • Memory problem with SET and GET PARAMETER

    hi,
    I am working on exits: one exit for importing and another for changing a parameter.
    The SET PARAMETER exit code is:
    data: v_nba like eban-bsart,
          v_nbc like eban-bsart,
          v_nbo like eban-bsart.
    v_nbc = 'CAPX'.
    v_nbo = 'OPEX'.
    v_nba = 'OVH'.
    if im_data_new-werks is initial.
      if im_data_new-knttp is initial.
        if im_data_new-bsart = 'NBC' or im_data_new-bsart = 'SERC' or im_data_new-bsart = 'SERI'
           or im_data_new-bsart = 'SER' or im_data_new-bsart = 'SERM' or im_data_new-bsart = 'NBI'.
          set parameter id 'ZC1' field v_nbc.
        elseif im_data_new-bsart = 'NBO' or im_data_new-bsart = 'NBM' or im_data_new-bsart = 'SERO'.
          set parameter id 'ZC2' field v_nbo.
        elseif im_data_new-bsart = 'NBA' or im_data_new-bsart = 'SERA'.
          set parameter id 'ZC3' field v_nba.
        endif.
      endif.
    endif.
    And the GET PARAMETER code is:
    get parameter id 'ZC1' field c_fmderive-fund.
    get parameter id 'ZC2' field c_fmderive-fund.
    get parameter id 'ZC3' field c_fmderive-fund.
    FREE MEMORY ID 'ZC1'.
    FREE MEMORY ID 'ZC2'.
    FREE MEMORY ID 'ZC3'.
    With this code I am facing a memory problem: the memory is not refreshed every time.
    Please suggest a proper solution. It's urgent.
    Thanks
    Ranveer

    Hi,
       I suppose you are trying to store a particular value in memory in one program and then retrieve it in another.
    If so, try using EXPORT data TO MEMORY ID 'ZC1' and IMPORT data FROM MEMORY ID 'ZC1'.
    To use SET PARAMETER/GET PARAMETER, the specified parameter name should exist in table TPARA, which I don't think is the case here.
    Sample code:
    * Data declarations for the function codes to be transferred
    DATA : v_first  TYPE syucomm,
           v_second TYPE syucomm.
    CONSTANTS : c_memid TYPE char10 VALUE 'ZCCBPR1'.
    * Move the function codes to the program variables
      v_first  = gv_bdt_fcode.
      v_second = sy-ucomm.
    * Export the function codes to the memory ID
    EXPORT v_first
           v_second TO MEMORY ID c_memid.        "ZCCBPR1 - here you are sending the values to memory
    Then retrieve it:
    * Retrieve the function codes from the memory ID
      IMPORT v_first  TO v_fcode_1
             v_second TO v_fcode_2
      FROM MEMORY ID c_memid.                    "ZCCBPR1
      FREE MEMORY ID c_memid.                    "ZCCBPR1
    After reading the values from the memory ID, free it; your problem should be solved.
    Thanks
    Barada
    Edited by: Baradakanta Swain on May 27, 2008 10:20 AM

  • Memory problems with PreparedStatements

    Driver: 9.0.1 JDBC Thin
    I am having memory problems using PreparedStatement via JDBC.
    After profiling our application, we found that a large number of oracle.jdbc.ttc7.TTCItem objects were being created but not released, even though we were closing the ResultSets of the prepared statements.
    Tracing through the application, it appears that most of these TTCItem objects are created when the statement is executed (not when it is prepared), so I would have assumed they would be released when the ResultSet is closed, but this does not seem to be the case.
    We tend to have a large number of PreparedStatement objects in use (over 100, most with closed ResultSets) and find that our application uses huge amounts of memory compared to the same code closing each PreparedStatement at the same time as its ResultSet.
    Has anyone else seen similar problems? If so, does anyone have a work-around, or know if this is something Oracle is looking at fixing?
    Thanks
    Bruce Crosgrove

    From your mail, it is not very clear:
    a) whether your session is an HTTPSession or an application-defined session.
    b) what is meant by saying the JSP/Servlet process is growing.
    However, some pointers:
    a) Are there any timeouts associated with the session?
    b) Try to profile your code to see what is causing the memory leak.
    c) Are there references to stale data in your application code?
    Marilla Bax wrote:
    hi,
    we have some memory problems with the WebLogic Application Server 4.5.1 on Sun Solaris.
    In our customer projects we are working with EJBs; for each customer transaction we create a session to the WebLogic application server.
    Now there are some urgent problems with the Java process on the server: for each session, 200-500 KB of memory are allocated, and within a day the process on our server grows with each session and doesn't release the memory reserved for the old sessions. As a workaround we now restart the server every night.
    How can we solve this problem? Is it a problem with the operating system, the application server, or the EJBs? Have you seen problems like this before?
    greetings from germany,
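    The work-around the PreparedStatement poster found - closing each statement together with its ResultSet - is exactly what try-with-resources automates on any JDBC 4+ driver. A toy sketch of the pattern; the Resource class below is invented purely to demonstrate the close ordering, whereas with a real driver the same shape wraps connection.prepareStatement(...) and ps.executeQuery():

```java
import java.util.ArrayList;
import java.util.List;

public class CloseOrder {
    static final List<String> closed = new ArrayList<>();

    // Minimal stand-in for a JDBC resource (PreparedStatement, ResultSet, ...).
    static class Resource implements AutoCloseable {
        final String name;
        Resource(String name) { this.name = name; }
        @Override public void close() { closed.add(name); }
    }

    static void runQuery() {
        // Resources close in reverse order of creation when the block exits,
        // even on exceptions - so no per-statement buffers linger afterwards.
        try (Resource ps = new Resource("statement");
             Resource rs = new Resource("resultset")) {
            // ... iterate rs here ...
        }
    }
}
```

    Keeping 100+ open PreparedStatements, as in the question, defeats this: each open statement pins its driver-side buffers until close() runs.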

  • Nokia C5-03 Low memory problems ( Other Files )

    Please help. I have a Nokia C5-03; the handset is showing that phone memory is full. I have checked and uninstalled all unnecessary applications, but the problem is still there. When I check the phone memory, it indicates that I have "other files" of 42 MB installed and using phone memory; however, I am unable to check what is installed or classified as other files. Can you please assist me so I can know what these other files consist of, or where they are located?

    Hi Thetao,
    I typed a more extensive reaction before, but it got lost when I pressed "post", so I'll just respond to the main points you mentioned (and some I found out myself).
    Strange: I can't find the 40 MB Maximum User Storage on the Nokia website anymore (nor the 75 MB), but it sounds very familiar to me. It looks as if they removed this from the phone specs, also for other smartphones by the way.
    Yesterday I deleted some small apps that I don't use (anymore), such as InternetRadio, and I also removed Nokia email. Although the apps were below 2 MB together, this freed up over 7 MB of phone memory (24 MB free now)! I think there were still some old emails stored on C: which I couldn't delete any other way. This helped me a great deal already, but I tried your suggestions as well.
    1. No map data or CITIES folder on C:.
    2. Switched messages memory to phone (and phone to offline mode), and I did indeed find a forgotten email account with 30 email messages. Not much, but I had 24.7 MB free after that. Of course, I put messages memory back on the memory card.
    3. Used the free edition of Y-Browser to manually delete the cache folder. Not much data in that, but 25.1 MB free after that. Nice tool, with which you can reach folders that normally stay hidden! Used Y-Browser to search all of C: for files over 300 kB. Only 2 files: boot_space.txt in C:\ (500 kB; it contains only the letter X repeatedly as far as I can see, but is probably essential for the operating system) and C:\resource\python25\pyton25.zip (1 MB). It looks like an installation package, but I'm not sure if I can delete it. By the way: Y-Browser hasn't made a shortcut in one of the menus; the only way I found to start it was the phone's search function. Is there a way to make this shortcut myself?
    4. Yes I did. No Images folder on C: anymore, nor other big files (see point 3).
    5. I use Bluetooth for file transfer sometimes, mainly for installation files (such as YBrowser.sis, but I did this one via USB cable). However, no big files are left on C:, so I don't think I have this problem.
    6. I tried to delete Nokia Chat yesterday as well (with the other apps), but it won't uninstall the normal way; it says "Uninstall cancelled" (not sure about the exact translation, since my phone 'speaks' Dutch). Do you know another way to get rid of this 3 MB app that I don't use at all?
    I think I may have found an explanation and a solution for the memory problem while navigating. You mentioned the "memory in use" option in the map settings. Above that option there's a slider for the percentage of memory that navigation can use; the default is 70%. I always thought this was about storage memory on (in my case) the memory card, but another topic mentioned that this is the working memory (the RAM) that navigation may use. Setting it to 70% means there's only 30% left for other apps that run in the background. That topic states this is not enough, so the slider should be set to, for instance, 30% for navigation, leaving 70% free for "the phone". From behind my computer, navigation seems much more stable. I'll try this setting in my car soon and let you know how it works.
    Thanks a lot for thinking along with me so far! There's already 25.1 MB of space, which is great since it was only 7 MB last Sunday. And navigation looks more stable. I'd appreciate it if you have some more answers to my latest questions, but if not, I think my phone will work a lot better already!
    Regards, Paul

  • Memory problem with loading a csv file and displaying 2 xy graphs

    Hi there, I'm having some memory issues with this little program.
    What I'm trying to do is read a .csv file of 215 MB (6 million lines, more or less), extract the x-y values as 1D arrays and display them in 2 x-y graphs (VI attached).
    I've noticed that this process eats from 1.6 to 2 GB of RAM, and the 2 x-y graphs, as soon as they are loaded (2 minutes, more or less), are really, really slow to move with the scrollbar.
    My question is: is there a way to use fewer memory resources and make the graphs move more smoothly?
    Thanks in advance,
    Ierman Gert
    Attachments:
    read from file test.vi ‏106 KB

    Hi Ierman,
    how many datapoints do you need to handle? How many do you display on the graphs?
    Some notes:
    - Each graph has its own data buffer, so all data wired to the graph will be buffered again in memory. When wiring a (big) 1D array to a graph, a copy is made in memory - and you mentioned 2 graphs...
    - Load the array in parts: read a number of lines, parse them to arrays as before (maybe using "Spreadsheet String to Array"?), and finally append the parts to build the big array (this may lead to memory problems too).
    - Avoid data copies when handling big arrays. You can show buffer creation using menu->Tools->Advanced->Show buffer allocations.
    - Use SGL instead of DBL when possible...
    Message Edited by GerdW on 05-12-2009 10:02 PM
    Best regards,
    GerdW
    CLAD, using 2009SP1 + LV2011SP1 + LV2014SP1 on WinXP+Win7+cRIO
    Kudos are welcome
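    GerdW's "load the array in parts" advice can be sketched outside LabVIEW too. Here is a rough Java version that reads the file line by line and decimates the points for display - a graph a few hundred pixels wide cannot show 6 million points anyway. The "x,y" line format, the class, and the method names are assumptions about the attached file, not its actual structure:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;
import java.util.ArrayList;
import java.util.List;

public class CsvDecimate {

    // Keep every 'stride'-th point; returns {xs, ys} as float arrays
    // (single precision, in the spirit of the "use SGL" advice).
    static float[][] loadDecimated(Reader src, int stride) throws IOException {
        BufferedReader br = new BufferedReader(src);
        List<Float> xs = new ArrayList<>(), ys = new ArrayList<>();
        String line;
        long i = 0;
        while ((line = br.readLine()) != null) {
            if (i++ % stride != 0) continue;          // decimation for display
            int comma = line.indexOf(',');
            xs.add(Float.parseFloat(line.substring(0, comma)));
            ys.add(Float.parseFloat(line.substring(comma + 1)));
        }
        float[][] out = new float[2][xs.size()];
        for (int k = 0; k < xs.size(); k++) {
            out[0][k] = xs.get(k);
            out[1][k] = ys.get(k);
        }
        return out;
    }
}
```

    With a stride of 100, the 6-million-line file becomes 60,000 points per graph, which also keeps the per-graph data buffers GerdW mentions small.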

  • Memory problem if OLE-object references to WMF files

    Hi there,
    I have a report with an OLE object containing WMFs.
    The graphic files are variable, and their names (path + filename) are loaded from the database at runtime.
    Running the report produces 185 pages, each containing a different WMF.
    If I preview the report in CR, everything looks fine.
    If I print the report, the OLE object / graphic is left empty...
    If I export the report to PDF (as an example), I get the error message 'memory full'. Reducing the data set to ~50, the PDF is created, but the pictures get resized (much bigger) and only parts are visible.
    The machine I'm using doesn't have any memory problems.
    The WMF files are only 3 to 12 KB each.
    If I convert the WMFs to JPG and use those within the report, it works...
    The problem with this: a loss of quality (it is necessary to stretch the pictures to a certain size).
    Thanks in advance for any ideas!
    Susanne
    I'm using CR 2008 SP 3 on Windows 2003 Server

    Format the pictures outside of CR for best results.

  • Memory problem on my e3500

    Hi all,
    I have a problem on this e3500 server: it rebooted several times without printing anything in messages.
    Now I've found something. I don't think CPU19 itself is involved (score 05 and syndrome not equal to 0x3); I suspect a fault in 2 memory slots on board 7, or in the DIMMs. Nothing was evidenced by the advanced POST.
    Now the question is: how can I find the physical address of the bad DIMMs (Nov 13 05:32:57 rhea SUNW,UltraSPARC-II: [ID 989652 kern.info] [AFT2] E$Data (0x10): 0x696cf36f.6e74726f Bad PSYND=0xff00)? Is it possible to translate the hex code and find the J3*** number? Is there a table or a doc where I can find the answer? And why is an Oracle pid involved in this case - maybe only because that pid was the one that tripped the parity check?
    Thank you in advance
    Nov 13 05:32:57 rhea SUNW,UltraSPARC-II: [ID 949434 kern.warning] WARNING:
    [AFT1] Uncorrectable Memory Error on CPU19 Data access at TL=0, err
    ID 0x0000e56e.7c3643da
    Nov 13 05:32:57 rhea AFSR 0x00000001<ME>.00300000<UE,CE> AFAR
    0x00000000.8b212380
    Nov 13 05:32:57 rhea AFSR.PSYND 0x0000(Score 05) AFSR.ETS 0x00 Fault_PC
    0xffffffff7d000970
    Nov 13 05:32:57 rhea UDBH 0x029c<UE> UDBH.ESYND 0x9c UDBL 0x0333<UE,CE>
    UDBL.ESYND 0x33
    Nov 13 05:32:57 rhea UDBH Syndrome 0x9c Memory Module Board 7 J3101
    J3201 J3301 J3401 J3501 J3601 J3701 J3801
    Nov 13 05:32:57 rhea SUNW,UltraSPARC-II: [ID 549381 kern.info] [AFT2] errID
    0x0000e56e.7c3643da PA=0x00000000.8b212380
    Nov 13 05:32:57 rhea E$tag 0x00000000.1cc01164 E$State: Exclusive
    E$parity 0x0e
    Nov 13 05:32:57 rhea SUNW,UltraSPARC-II: [ID 359263 kern.info] [AFT2]
    E$Data
    (0x00): 0x060337ff.01800180
    Nov 13 05:32:57 rhea SUNW,UltraSPARC-II: [ID 989652 kern.info] [AFT2]
    E$Data
    (0x08): 0xffff3100.1c746578 Bad PSYND=0x00ff
    Nov 13 05:32:57 rhea SUNW,UltraSPARC-II: [ID 989652 kern.info] [AFT2]
    E$Data
    (0x10): 0x696cf36f.6e74726f Bad PSYND=0xff00
    Nov 13 05:32:57 rhea SUNW,UltraSPARC-II: [ID 359263 kern.info] [AFT2]
    E$Data
    (0x18): 0x6c736e63.31407669
    Nov 13 05:32:57 rhea SUNW,UltraSPARC-II: [ID 359263 kern.info] [AFT2]
    E$Data
    (0x20): 0x7267696c.696f2e69
    Nov 13 05:32:57 rhea SUNW,UltraSPARC-II: [ID 359263 kern.info] [AFT2]
    E$Data
    (0x28): 0x74ff0180.01800180
    Nov 13 05:32:57 rhea SUNW,UltraSPARC-II: [ID 359263 kern.info] [AFT2]
    E$Data
    (0x30): 0x02c10201.80013001
    Nov 13 05:32:57 rhea SUNW,UltraSPARC-II: [ID 359263 kern.info] [AFT2]
    E$Data
    (0x38): 0x30018009.42393935
    Nov 13 05:32:57 rhea unix: [ID 321153 kern.notice] NOTICE: Scheduling
    clearing of error on page 0x00000000.8b212000
    Nov 13 05:32:57 rhea SUNW,UltraSPARC-II: [ID 512463 kern.info] [AFT3] errID
    0x0000e56e.7c3643da Above Error is in User Mode
    Nov 13 05:32:57 rhea and is fatal: will reboot
    Nov 13 05:32:57 rhea SUNW,UltraSPARC-II: [ID 820260 kern.warning] WARNING:
    [AFT1] Uncorrectable Memory Error on CPU19 Data access at TL=0, err
    ID 0x0000e56e.7c3643da
    Nov 13 05:32:57 rhea AFSR 0x00000001<ME>.00300000<UE,CE> AFAR
    0x00000000.8b212380
    Nov 13 05:32:57 rhea AFSR.PSYND 0x0000(Score 05) AFSR.ETS 0x00 Fault_PC
    0xffffffff7d000970
    Nov 13 05:32:57 rhea UDBH 0x029c<UE> UDBH.ESYND 0x9c UDBL 0x0333<UE,CE>
    UDBL.ESYND 0x33
    Nov 13 05:32:57 rhea UDBL Syndrome 0x33 Memory Module Board 7 J3101
    J3201 J3301 J3401 J3501 J3601 J3701 J3801
    Nov 13 05:32:57 rhea SUNW,UltraSPARC-II: [ID 549381 kern.info] [AFT2] errID
    0x0000e56e.7c3643da PA=0x00000000.8b212380
    Nov 13 05:32:57 rhea E$tag 0x00000000.1cc01164 E$State: Exclusive
    E$parity 0x0e
    Nov 13 05:32:57 rhea SUNW,UltraSPARC-II: [ID 359263 kern.info] [AFT2]
    E$Data
    (0x00): 0x060337ff.01800180
    Nov 13 05:32:57 rhea SUNW,UltraSPARC-II: [ID 989652 kern.info] [AFT2]
    E$Data
    (0x08): 0xffff3100.1c746578 Bad PSYND=0x00ff
    Nov 13 05:32:57 rhea SUNW,UltraSPARC-II: [ID 989652 kern.info] [AFT2]
    E$Data
    (0x10): 0x696cf36f.6e74726f Bad PSYND=0xff00
    Nov 13 05:32:57 rhea SUNW,UltraSPARC-II: [ID 359263 kern.info] [AFT2]
    E$Data
    (0x18): 0x6c736e63.31407669
    Nov 13 05:32:57 rhea SUNW,UltraSPARC-II: [ID 359263 kern.info] [AFT2]
    E$Data
    (0x20): 0x7267696c.696f2e69
    Nov 13 05:32:57 rhea SUNW,UltraSPARC-II: [ID 359263 kern.info] [AFT2]
    E$Data
    (0x28): 0x74ff0180.01800180
    Nov 13 05:32:57 rhea SUNW,UltraSPARC-II: [ID 359263 kern.info] [AFT2]
    E$Data
    (0x30): 0x02c10201.80013001
    Nov 13 05:32:57 rhea SUNW,UltraSPARC-II: [ID 359263 kern.info] [AFT2]
    E$Data
    (0x38): 0x30018009.42393935
    Nov 13 05:32:57 rhea SUNW,UltraSPARC-II: [ID 512463 kern.info] [AFT3] errID
    0x0000e56e.7c3643da Above Error is in User Mode
    Nov 13 05:32:57 rhea and is fatal: will reboot
    Nov 13 05:32:57 rhea unix: [ID 855177 kern.warning] WARNING: [AFT1]
    initiating reboot due to above error in pid 19609 (oracle)

    Now a friend of mine has a similar problem; he's very far from my city, so I can't see the server, and I only have this message that appeared at boot:
    Rebooting with command: boot
    Boot device: diskbrd File and args:
    SunOS Release 5.8 Version Generic_117350-14 64-bit
    Copyright 1983-2003 Sun Microsystems, Inc. All rights reserved.
    WARNING: [AFT1] Uncorrectable Memory Error on CPU1 at TL=0, errID 0x00000028.9184e3e9
    AFSR 0x00000001<ME>.80300000<PRIV,UE,CE> AFAR 0x00000000.00003cc0
    AFSR.PSYND 0x0000(Score 05) AFSR.ETS 0x00 Fault_PC 0x1014f10c
    UDBH 0x0333<UE,CE> UDBH.ESYND 0x33 UDBL 0x034d<UE,CE> UDBL.ESYND 0x4d
    UDBH Syndrome 0x33 Memory Module Board 2 J3100 J3200 J3300 J3400 J3500 J3600 J3700 J3800
    WARNING: [AFT1] Uncorrectable Memory Error on CPU1 at TL=0, errID 0x00000028.9184e3e9
    AFSR 0x00000001<ME>.80300000<PRIV,UE,CE> AFAR 0x00000000.00003cc0
    AFSR.PSYND 0x0000(Score 05) AFSR.ETS 0x00 Fault_PC 0x1014f10c
    UDBH 0x0333<UE,CE> UDBH.ESYND 0x33 UDBL 0x034d<UE,CE> UDBL.ESYND 0x4d
    UDBL Syndrome 0x4d Memory Module Board 2 J3100 J3200 J3300 J3400 J3500 J3600 J3700 J3800
    panic[cpu1]/thread=2a1001ddd20: [AFT1] errID 0x00000028.9184e3e9 UE Error(s)
    See previous message(s) for details
    000002a1001dd3a0 SUNW,UltraSPARC-II:cpu_aflt_log+568 (2a1001dd45e, 1, 10155300, 2a1001dd5e8, 2a1001dd4ab, 10155328)
    %l0-3: 00000300003a6a90 0000000000000003 000002a1001dd6b0 0000000000000010
    %l4-7: 0000030001d8c290 0000000000000000 000002a75029c000 000002a100176fd0
    000002a1001dd5f0 SUNW,UltraSPARC-II:cpu_async_error+868 (1046b370, 2a1001dd6b0, 180300000, 0, c7a6e6780300000, 2a1001dd870)
    %l0-3: 0000000010475e90 0000000000000063 000000000000034d 0000000000000333
    %l4-7: 0000000000003cc0 0000000000800000 0000000000800000 0000000000000001
    000002a1001dd7c0 unix:prom_rtt+0 (f0803cc0, 3cc0, 800000, 0, 16, 14)
    %l0-3: 0000000000000006 0000000000001400 0000004400001605 000000001014c848
    %l4-7: 000002a75029c000 0000000000000000 0000000000000009 000002a1001dd870
    000002a1001dd910 SUNW,UltraSPARC-II:scrub_ecache_line+2b4 (f0803cc0, c, 1046b370, 300002015d8, 30001dcdf40, 83)
    %l0-3: 0000030001c49518 0000000000000003 0000000000000070 0000000000000000
    %l4-7: 0000000000000000 0000000000800000 0000000000003cc0 0000000000000004
    000002a1001dda60 SUNW,UltraSPARC-II:scrub_ecache_line_intr+30 (30001dcdf40, 1, 1, 2a1001ddd20, 102e0, 1014f27c)
    %l0-3: 0000000000000001 0000000000000001 0000031001e7e8a0 000003000020df88
    %l4-7: 0000029fffd82000 0000031005127540 0000031001e7e8f8 0000000000000000
    syncing file systems... done
    skipping system dump - no dump device configured
    rebooting...
    Resetting...
    Software Power ON
    He pulled out board 2 and the server started correctly; nothing was recorded in messages.* He has no spare parts. What do you think - a memory problem again?

  • Stringbuffer memory problem

    Hi guys
    I am developing an applet which reads a 25 MB text file into a StringBuffer and outputs the StringBuffer content to a text area. The file is an auto-generated report.
    File f = new File("c:\\my_file.txt");
    FileReader fr = new FileReader(f);
    BufferedReader br = new BufferedReader(fr);
    StringBuffer sb = new StringBuffer();
    String oLine;
    while ((oLine = br.readLine()) != null) {
        sb.append(oLine + "\n");
    }
    br.close();
    TextArea data = new TextArea(sb.toString(), 50, 110);
    This works fine when I increase my JVM size to 128 MB, but it runs out of memory when using the default JVM size.
    Is there a more efficient way I can do this so that it works with the default JVM memory size? A datatype that takes up less memory than StringBuffer, perhaps?
    Thanks in advance.

    The question is: what are 25 MB of text doing in an applet? Who's supposed to look at all that info? Trim down the amount of information you display. You couldn't even navigate through that pile of text.
    > A datatype that takes up less memory than stringbuffer perhaps?
    How would that work? If you have 25 MB of data, you'll have 25 MB of data.
    I guess your problem comes up when you create a String from the StringBuffer - I don't know if or how StringBuffer and String share their backing arrays, but I'd assume for simplicity's sake they don't. So you end up with a 25 MB String and a 25 MB StringBuffer.
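    The "trim down what you display" advice can be sketched directly: cap how many characters are loaded and tell the user the rest was cut. The class name, method name, and cap value below are illustrative assumptions, not part of the original applet:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;

// Read at most maxChars characters of the report for display; anything
// beyond the cap is replaced by a truncation marker. This bounds the heap
// use regardless of the file size on disk.
public class CappedLoad {

    static String readCapped(Reader src, int maxChars) throws IOException {
        BufferedReader br = new BufferedReader(src);
        StringBuilder sb = new StringBuilder();   // unsynchronized, cheaper than StringBuffer
        String line;
        while ((line = br.readLine()) != null) {
            if (sb.length() + line.length() + 1 > maxChars) {
                sb.append("... [truncated]\n");
                break;
            }
            sb.append(line).append('\n');
        }
        return sb.toString();
    }
}
```

    Note this also sidesteps the double-copy issue from the answer: the capped String handed to the TextArea is small, so it no longer matters that the builder and the String don't share storage.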
