appendBytes with large memory consumption

Hi all,
when I use appendBytes, my memory consumption grows very large. How can I reduce it?
I tried closing the NetStream, seeking to 0, and appendBytesAction; none of it helped.
Thanks.

Yes, the URLStream: I collect the data into a ByteArray in the ProgressEvent handler, then feed it to appendBytes from a Timer.

Similar Messages

  • Dbxml memory consumption

    I have a query that returns about 10MB worth of data when run against my db -- it looks something like the following
    'for $doc in collection("VcObjStore")/doc
    where $doc[@type="Foo"]
    return <item>{$doc}</item>'
    when I run this query in dbxml.exe, I see the memory footprint of dbxml.exe increase by 125 MB. Once the query finishes, it comes back down.
    I expected memory consumption to be somewhat larger than what the query actually returns, but this seems quite extreme.
    Is this behavior expected? What is a general rule of thumb on memory usage with respect to result size (is it really 10x)? Any way to make it less of a hog?
    Thanks

    Hi Ron,
    Thanks for a quick reply!
    - I wasn't actually benchmarking DBXML. We've observed large memory consumption during query execution in our test application and verified the same issue with dbxml.exe. Since dbxml.exe is well understood by everyone familiar with DBXML, I thought it would help to start with that.
    - Yes, an environment was created for this db. Here is the code we used to set it up
    EnvironmentConfig envConfig = new EnvironmentConfig();
    envConfig.setInitializeLocking(true);
    envConfig.setInitializeCache(true);
    envConfig.setAllowCreate(true);
    envConfig.setErrorStream(System.err);
    envConfig.setCacheSize(1024 * 1024 * 100);
    - I'd like an explanation on reasons behind the performance difference between these two queries
    Query 1:
    dbxml> time query 'for $doc in collection("VcObjStore")/doc
    where $doc[@type="VirtualMachine"]
    return $doc'
    552 objects... <snip>
    Time in seconds for command 'query': 0.031
    Query 2:
    dbxml> time query 'for $doc in collection("VcObjStore")/doc
    where $doc[@type="VirtualMachine"]
    return <val>{$doc}</val>'
    552 objects... <snip>
    Time in seconds for command 'query': 5.797
    - Is there any way to make query #2 go as fast as query #1? (A lazy-evaluation sketch follows this post.)
    Thanks!
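    (For reference, a minimal sketch of running such a query through the BDB XML Java API with lazy result evaluation, which streams results as they are consumed instead of materializing them all at once. The environment path is hypothetical, and whether lazy evaluation helps the element-construction case above is an assumption, not something confirmed in this thread.)

    import java.io.File;
    import com.sleepycat.db.Environment;
    import com.sleepycat.db.EnvironmentConfig;
    import com.sleepycat.dbxml.XmlManager;
    import com.sleepycat.dbxml.XmlManagerConfig;
    import com.sleepycat.dbxml.XmlQueryContext;
    import com.sleepycat.dbxml.XmlResults;
    import com.sleepycat.dbxml.XmlValue;

    public class LazyQueryDemo {
        public static void main(String[] args) throws Exception {
            // Same environment setup as in the post above.
            EnvironmentConfig envConfig = new EnvironmentConfig();
            envConfig.setInitializeLocking(true);
            envConfig.setInitializeCache(true);
            envConfig.setAllowCreate(true);
            envConfig.setCacheSize(1024 * 1024 * 100); // 100 MB cache
            Environment env = new Environment(new File("/path/to/env"), envConfig); // hypothetical path

            XmlManager mgr = new XmlManager(env, new XmlManagerConfig());
            XmlQueryContext ctx = mgr.createQueryContext();
            // Lazy evaluation: results are produced as they are consumed,
            // which can keep the peak footprint closer to one result at a time.
            ctx.setEvaluationType(XmlQueryContext.Lazy);

            XmlResults results = mgr.query(
                    "collection('VcObjStore')/doc[@type='Foo']", ctx);
            while (results.hasNext()) {
                XmlValue v = results.next();
                // process one result at a time instead of holding the whole set
                System.out.println(v.asString().length());
            }
            results.delete(); // free native resources held by the result set
            mgr.delete();
            env.close();
        }
    }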

  • OutOfMemory exception on device with 2 MB memory

    Hi all
    I have experienced the following problem.
    The application I have developed works fine on a Sony Ericsson W800i but throws OutOfMemory on a Nokia 6131.
    The W800i reports 1048572 bytes of available memory (1 MB).
    The Nokia 6131 reports 2097152 bytes of available memory (2 MB).
    For debugging purposes I do a System.gc() and report free memory; on average the W800i has 77% free, whereas the Nokia starts with 55% free and it degrades quickly.
    It's the same application and code, so does this mean the Nokia has a problem? Or is my code at fault?
    When I first encountered the OutOfMemory exception I followed some guidelines: removing any inner classes, dereferencing objects with obj = null, and making sure I close streams, etc. This helped with general memory consumption. I tested it with the emulator limiting memory to 400k and I have an average of 20% free.
    I suspected the screen size to play a role, as I paint the canvas manually, but on the emulator I use the DefaultColorPhone, which has a larger screen than the Nokia.
    Where could the problem lie?
    GX

    Hi MelGohan
    Thanks for your reply. To check how much memory is free, I am using the following method:
        public static String getFreeMem() {
            System.gc();
            long tot = Runtime.getRuntime().totalMemory();
            long free = Runtime.getRuntime().freeMemory();
            int per = (int) (((float) free / (float) tot) * 100);
            return gx.Utils.padString(String.valueOf(free), -8, " ") + " "
                    + gx.Utils.padString(String.valueOf(per), -3, " ") + "%"
                    + " / " + String.valueOf(tot);
        }
    There are no other applications in memory, as I tried restarting the phone and the results were the same; hence I can see exactly how much memory is available (2097152 bytes) and how much is free/used.
    On the emulator and the Sony Ericsson the memory usage is stable, whereas the Nokia keeps using more and more. I am currently setting all unused objects to null when done with them but have found no difference so far.
    I have put the same post on the nokia boards and there is more info there:
    http://discussion.forum.nokia.com/forum/showthread.php?t=108401
    GX

  • Query on memory consumption during SQL

    Hi SAP Gurus,
    Could I kindly request for your inputs concerning the following scenario?
    To put it quite simply, we have a program where we're required to retrieve all the fields from a lengthy custom table, i.e. the select statement uses an asterisk. Unfortunately, there isn't really a way to avoid this short of a total overhaul of the code, so we had to settle for this (for now).
    The program retrieves from the database table using a where clause filtering on a single company code value. Kindly note that company code is not the only key in the table. To help with memory consumption, the original developer employed retrieval by packages (also note that the total length of each record is 1803...).
    The problem encountered is as follows:
    - Using company code A, retrieving 700k entries in packages of 277, the program ran without any issues.
    - However, using company code B, retrieving 1.8 million entries in packages of 277, the program encountered a TSV_TNEW_PAGE_ALLOC_FAILED short dump. This error is encountered the very first time the program goes through the select statement, ergo it has not even been able to pass through any additional internal table processing yet.
    The biggest difference between the two company codes is the number of corresponding records they have in the table. I've checked whether company code B had more values in its columns than company code A, but they're just the same.
    What I do not quite understand is why memory consumption changed just by changing the company code in the selection. I thought that the memory consumed by both company codes should be the same, at least in the beginning, considering that we're retrieving by packages, so we're not trying to get all of the records at once. However, the fact that it failed at the very beginning has shown me that I'm gravely mistaken.
    Could someone please enlighten me on how memory is consumed during database retrieval?
    Thanks!

    Hi,
    with FAE (FOR ALL ENTRIES), the whole query is executed even if the itab holds a single record, and all results for
    the company code are transferred from the database to the DBI, since the duplicates are removed by the DBI,
    not by the database.
    If you use PACKAGE SIZE, the result set is buffered in a system table in the DBI (which allocates memory from your user quota), and from there the packages are built and handed over to your application (into table lt_temp).
    see recent ABAP documentation:
    Since duplicate rows are only removed on the application server, all rows specified using the WHERE condition are sometimes transferred to an internal system table and aggregated here. This system table has the same maximum size as the normal internal tables. The system table is always required if addition PACKAGE SIZE or UP TO n ROWS is used at the same time. These do not affect the amount of rows transferred from the database server to the application server; instead, they are used to transfer the rows from the system table to the actual target area.
    What you should do:
    Calculate the size needed for your big company code B: the number of rows multiplied by the line length. With 1.8 million rows of 1803 bytes each, that is roughly 3.2 GB.
    That is the minimum amount you need for your user memory quota (quotas can be checked with
    ABAP report RSMEMORY). If the amount of memory is sufficient, then try without PACKAGE SIZE:
    SELECT * FROM <custom table>
    INTO TABLE lt_temp
    FOR ALL ENTRIES IN lt_bukrs
    WHERE bukrs = lt_bukrs-bukrs
    ORDER BY PRIMARY KEY.
    This might actually use less memory than the PACKAGE SIZE option for the FOR ALL ENTRIES.
    Since with FAE it is buffered anyway in the DBI (and subtracted from your quota), you can
    do it right away and avoid saving portions twice (once in the DBI buffer and again as a
    package in lt_temp).
    If the amount of memory is still too big, you have to either increase the quotas, select
    less data (additional where conditions), or avoid using FAE in this case so as not to read
    all the data in one go.
    Hope this helps,
    Hermann

  • Running out of memory while using cursored stream with large data

    We are following the suggestions/recommendations for the cursored stream:
    CursoredStream cursor = null;
    try {
        Session session = getTransaction();
        int batchSize = 50;
        ReadAllQuery raq = getQuery();
        raq.useCursoredStream(batchSize, batchSize);
        int num = 0;
        ArrayList<Request> limitRequests = null;
        int totalLimitRequest = 0;
        cursor = (CursoredStream) session.executeQuery(raq);
        while (!cursor.atEnd()) {
            Request request = (Request) cursor.read();
            if (num == 0) {
                limitRequests = new ArrayList<Request>(batchSize);
            }
            limitRequests.add(request);
            totalLimitRequest++;
            num++;
            if (num >= batchSize) {
                log.warn("Migrating batch of " + batchSize + " Requests.");
                updateLimitRequestFillPriceForBatch(limitRequests);
                num = 0;
                cursor.releasePrevious(); // release already-processed objects from the stream
            }
        }
        if (num > 0) {
            updateLimitRequestFillPriceForBatch(limitRequests);
        }
    } finally {
        if (cursor != null) {
            cursor.close(); // always close the cursor, even on failure
        }
    }
    We are committing every 50 records in the unit of work. If we set dontMaintainCache on the ReadAllQuery we get PrimaryKeyExceptions intermittently, and we do not see much difference in the IdentityMap size.
    Any suggestions/ideas for dealing with large data sets? Thanks

    Hi,
    If I use read-only classes with CursoredStream and execute the query within UOW, should I be saving any memory?
    I had to use UOW because when I use Session to execute the query I get
    6115: ISOLATED_QUERY_EXECUTED_ON_SERVER_SESSION
    Cause: An isolated query was executed on a server session: queries on isolated classes, or queries set to use exclusive connections, must not be executed on a ServerSession or in CMP outside of a transaction.
    I assume marking the descriptor as read-only will avoid registering in UOW, but I want to make sure that this is the case while using CursoredStream.
    We are running in OC4J(OAS10.1.3.4) with BeanManagedTransaction.
    Please suggest.
    Thanks
    -Raam

  • ORA-00385: cannot enable Very Large Memory with new buffer cache 11.2.0.2

    [oracle@bnl11237dat01][DWH11]$ sqlplus / as sysdba
    SQL*Plus: Release 11.2.0.2.0 Production on Mon Jun 20 09:19:49 2011
    Copyright (c) 1982, 2010, Oracle. All rights reserved.
    Connected to an idle instance.
    SQL> startup mount pfile=/u01/app/oracle/product/11.2.0/dbhome_1/dbs//initDWH11.ora
    ORA-00385: cannot enable Very Large Memory with new buffer cache parameters
    DWH12.__large_pool_size=16777216
    DWH11.__large_pool_size=16777216
    DWH11.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
    DWH12.__pga_aggregate_target=2902458368
    DWH11.__pga_aggregate_target=2902458368
    DWH12.__sga_target=4328521728
    DWH11.__sga_target=4328521728
    DWH12.__shared_io_pool_size=0
    DWH11.__shared_io_pool_size=0
    DWH12.__shared_pool_size=956301312
    DWH11.__shared_pool_size=956301312
    DWH12.__streams_pool_size=0
    DWH11.__streams_pool_size=134217728
    #*._realfree_heap_pagesize_hint=262144
    #*._use_realfree_heap=TRUE
    *.audit_file_dest='/u01/app/oracle/admin/DWH/adump'
    *.audit_trail='db'
    *.cluster_database=true
    *.compatible='11.2.0.0.0'
    *.control_files='/dborafiles/mdm_bn/dwh/oradata01/DWH/control01.ctl','/dborafiles/mdm_bn/dwh/orareco/DWH/control02.ctl'
    *.db_block_size=8192
    *.db_domain=''
    *.db_name='DWH'
    *.db_recovery_file_dest='/dborafiles/mdm_bn/dwh/orareco'
    *.db_recovery_file_dest_size=7373586432
    *.diagnostic_dest='/u01/app/oracle'
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=DWH1XDB)'
    DWH12.instance_number=2
    DWH11.instance_number=1
    DWH11.local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=bnl11237dat01-vip)(PORT=1521))))'
    DWH12.local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=bnl11237dat02-vip)(PORT=1521))))'
    *.log_archive_dest_1='LOCATION=/dborafiles/mdm_bn/dwh/oraarch'
    *.log_archive_format='DWH_%t_%s_%r.arc'
    #*.memory_max_target=7226785792
    *.memory_target=7226785792
    *.open_cursors=1000
    *.processes=500
    *.remote_listener='LISTENERS_SCAN'
    *.remote_login_passwordfile='exclusive'
    *.sessions=555
    DWH12.thread=2
    DWH11.thread=1
    DWH12.undo_tablespace='UNDOTBS2'
    DWH11.undo_tablespace='UNDOTBS1'
    SPFILE='/dborafiles/mdm_bn/dwh/oradata01/DWH/spfileDWH1.ora' # line added by Agent
    [oracle@bnl11237dat01][DWH11]$ cat /etc/sysctl.conf
    # Kernel sysctl configuration file for Red Hat Linux
    # For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
    # sysctl.conf(5) for more details.
    # Controls IP packet forwarding
    net.ipv4.ip_forward = 0
    # Controls source route verification
    net.ipv4.conf.default.rp_filter = 1
    # Do not accept source routing
    net.ipv4.conf.default.accept_source_route = 0
    # Controls the System Request debugging functionality of the kernel
    kernel.sysrq = 0
    # Controls whether core dumps will append the PID to the core filename
    # Useful for debugging multi-threaded applications
    kernel.core_uses_pid = 1
    # Controls the use of TCP syncookies
    net.ipv4.tcp_syncookies = 1
    # Controls the maximum size of a message, in bytes
    kernel.msgmnb = 65536
    # Controls the default maximum size of a message queue
    kernel.msgmax = 65536
    # Controls the maximum shared segment size, in bytes
    kernel.shmmax = 68719476736
    # Controls the maximum number of shared memory segments, in pages
    #kernel.shmall = 4294967296
    kernel.shmall = 8250344
    # Oracle kernel parameters
    fs.aio-max-nr = 1048576
    fs.file-max = 6815744
    kernel.shmmni = 4096
    kernel.sem = 250 32000 100 128
    kernel.shmmax = 536870912
    net.ipv4.ip_local_port_range = 9000 65500
    net.core.rmem_default = 262144
    net.core.rmem_max = 4194304
    net.core.wmem_default = 262144
    net.core.wmem_max = 1048586
    net.ipv4.tcp_wmem = 262144 262144 262144
    net.ipv4.tcp_rmem = 4194304 4194304 4194304
    Could you please tell me how to resolve this error?

    CAUSE: User specified one or more of { db_cache_size, db_recycle_cache_size, db_keep_cache_size, db_nK_cache_size (where n is one of 2, 4, 8, 16, 32) } AND use_indirect_data_buffers is set to TRUE. This is illegal.
    ACTION: Very Large Memory can only be enabled with the old (pre-Oracle_8.2) parameters.
    Note also that the posted sysctl.conf sets kernel.shmmax twice (68719476736, then 536870912); since the last assignment wins, shmmax ends up at 512 MB, far below the ~6.7 GB memory_target. A too-small shmmax is a commonly reported trigger for ORA-00385 on 11.2.0.2, so raising shmmax (and removing the duplicate entry) would be the first thing to try.

  • Memory consumption is higher with an Oracle database compared to Sybase

    Hi,
    We are executing the same Java source code against a Sybase backend and against an Oracle database, but with Oracle it consumes more memory than with Sybase.
    We are currently using 11g R2 with the ojdbc6.jar driver.
    Could you please provide information on how to optimize memory consumption when using Oracle?
    Thanks,
    Nagaraj

    user12569889 wrote:
    > We are executing the same Java source code against a Sybase backend and against an Oracle database, but with Oracle it consumes more memory than with Sybase.
    That is not saying anything at all.
    What memory in Oracle? Shared pool? Buffer cache? PGA? UGA? Library cache? Something else?
    What memory in Sybase did you compare it to?
    What type of client-server connection to Oracle was made? Dedicated or shared?
    What commands/method were used to determine the memory consumption in Oracle and then Sybase?
    What serves as the baseline for comparison?
    Comparing product A with product B is a COMPLEX thing to do. And IMO, beyond the abilities of the majority of developers - as they lack the technical expertise to extract usable metrics and correctly compare these between products that can work VERY differently.
    And unless you can provide technical details to back up your claim that "memory consumption is more with Oracle database compared to Sybase", I would say that you have no idea what you are actually observing and are in no position to deduce that Oracle consumes more memory.

  • Mac Pro with 10.7.3: running a Python script in Terminal I see "There is no more application memory available on your startup disk". Python uses 10 GB of 16 GB RAM and VM = 238 GB with 1 TB free. Log: macx-swapon FAILED - 12. It only happens with larger inputs

    On my Mac Pro with 10.7.3, while running a Python script in Terminal, after a while (several hours, actually) I see a system message for the Terminal app: "There is no more application memory available on your startup disk". Both RAM and VM appear to be fine at this point, i.e. Python uses only 10 GB of 16 GB RAM and VM = 238 GB with ~1 TB free. The log reads "macx-swapon FAILED - 12" multiple times. Furthermore, other Terminal windows can be opened and commands run there. It only happens with larger inputs (text files); with inputs about half the size everything runs smoothly. So the issue must indeed be memory, but where do I look for the problem/fix?

    http://liulab.dfci.harvard.edu/MACS/README.html
    Have you tried the --diag flag for diagnostics? Or changing verbose to 3 to show debug messages? Clearly one of three things is happening:
    1. You ARE running out of disk space, but once it errors out the space is reclaimed because the output file is deleted on error. When it fails, does your output have the content generated up to the point of termination?
    2. The application (Terminal) is allocated memory that you are exceeding
    3. The task within Terminal is allocated memory that you are exceeding
    I don't know anything about what this does, but is there a way to run a smaller test of it? Something that takes 10 minutes? Just to see if it works.

  • Memory Consumption: Start A Petition!

    I am using SQL Developer 4.0.0.13 Build MAIN 13.80. I was praying that SQL Developer 4.0 would no longer use so much memory and, in doing so, slow to a crawl. But that is not the case.
    Is there a way to start a "petition" to have the SQL Developer team focus on the product's memory usage? This problem has been there for years now, with many posts and no real answer.
    If there isn't a place to start a "petition", let's do something here that Oracle will respond to.
    Thank you

    Yes, at this point (after restarting) SQL Developer is functioning fine. Windows reports 1+ GB of free memory. I have 3 worksheets open, all connected to two different DB connections. Each worksheet has 1 to 3 pinned query results. My problem is that after working in SQL Developer for a day or so, with perhaps 10 worksheets open across 3 database connections, having queried large data sets and performed large exports, it becomes unresponsive even after closing worksheets. It appears to me that it does not clean up after itself.
    I will use Java VisualVM to compare memory consumption and see if it reports that SQL Developer is releasing memory but in the end I don't care about that.  I just need a responsive SQL Developer and if I need to close some worksheets at times I can understand doing so but at this time that does not help.

  • Query memory consumption

    Hi,
    Need some SQL expertise here. May I know how much memory (RAM) a simple query like 'SELECT SUM(Balance) FROM OCRD' consumes?
    What about query like
    select (select sum(doctotal) from ordr) + (select sum(doctotal) from odln) + (select sum(doctotal) from oinv)
    How much memory would it normally take? The reason I ask is that I have a query quite similar to this, and it would be run quite often, so I wonder whether it is feasible to use this type of query without bringing the server to a crawl.
    Please note that the real query would include JOINS and such. Thanks
    Any information is appreciated

    Hi Melvin,
    Not sure I'd call myself an expert but I'll have a go at an answer
    I think you are going to need to set up a test environment and then stress test your solution to see what happens. There are so many different variables affecting memory consumption that no one is likely to be able to say just what the impact will be on your server. SQL Server, by default, will allocate 1024 KB to each query, but of course quite a number of factors affect whether SQL needs more memory than this to execute a particular query (e.g. the number of joins, the locks created, whether the data is grouped or sorted, the size of the data, etc.). Also, SQL will release memory as soon as it can (based on its own algorithms), so a query that is run periodically has much less impact on the server than a query run concurrently by multiple users. For these reasons, the impact can only really be assessed if you test it in a real-world scenario.
    If you've ever seen SQL Server memory usage when XL Reporter is running a very large report then you'll know that this is a very memory hungry operation. XL Reporter bombards SQL with a huge number of separate little queries and SQL Server starts grabbing significant amounts of memory to fulfill these queries. As the queries are coming so fast, SQL hasn't yet got around to releasing the memory used by previous queries so SQL instead grabs available memory from the server.
    You'll get better performance and scalability by using stored procedures, but SDK certification does not allow the use of SPs in the SBO databases.
    Hope this helps,
    Owen

  • Working with large images

    I need to work with very large images (3000x3000 pixels) and they take a lot of memory (50 MB). The thing is that they don't use a lot of colors, but they are loaded into memory with a 24-bit color model.
    Each is a JPEG image decoded to a BufferedImage with the standard JPEG decoder. I want to use BufferedImage because it is very simple to work with; however, I need to find a way to reduce memory consumption.
    Is it possible to somehow use a different color model to reduce the color count to 256, which is enough for me? I am clueless about imaging and simply haven't got time to get into it. I need some hints: I just want to know whether it is possible (and how) to reduce the size of a BufferedImage in memory by decreasing its number of colors.
    I was thinking about using a GIF encoder, but that works with Images, not BufferedImages, and the conversion takes a lot of time, so it is not a solution.
    Thanks!
    dario

    You can create a byte-indexed BufferedImage. This uses just 1 byte per pixel.
    Use
    public BufferedImage(int width, int height, int imageType, IndexColorModel cm)
    with
    imageType == BufferedImage.TYPE_BYTE_INDEXED
    (TYPE_BYTE_BINARY packs several pixels per byte and supports at most 16 colors; TYPE_BYTE_INDEXED is the one-byte-per-pixel type)
    and create a 256-color palette with
    new IndexColorModel()
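    (A minimal runnable sketch of this approach; the grayscale palette is just an illustration, any 256 colors will do:)

    import java.awt.image.BufferedImage;
    import java.awt.image.IndexColorModel;

    public class IndexedImageDemo {
        public static void main(String[] args) {
            // Build a 256-entry palette; grayscale here, but any colors work.
            byte[] r = new byte[256], g = new byte[256], b = new byte[256];
            for (int i = 0; i < 256; i++) {
                r[i] = g[i] = b[i] = (byte) i;
            }
            IndexColorModel cm = new IndexColorModel(8, 256, r, g, b);

            // One byte per pixel: 3000x3000 needs ~9 MB here, versus ~36 MB
            // for the default 4-bytes-per-pixel RGB types.
            BufferedImage img = new BufferedImage(
                    3000, 3000, BufferedImage.TYPE_BYTE_INDEXED, cm);

            // To convert the decoded JPEG, draw it into the indexed image,
            // e.g. img.createGraphics().drawImage(decodedJpeg, 0, 0, null);
            System.out.println("image type = " + img.getType());
        }
    }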

  • Query is allocating too large memory error in OBIEE 11g

    Hi ,
    We have one pivot table (A) in our dashboard displaying revenue against an Entity hierarchy (i.e. 8 levels under the hierarchy), and we have another pivot table (B) displaying revenue against a customer hierarchy (3 levels under it).
    Both tables run fine in our OBIEE 11.1.1.6 environment (Windows).
    After deploying the same code (RPD & catalog) on a Unix OBIEE 11.1.1.6 server, it throws the error below while populating pivot table A:
    Error Codes: OPR4ONWY:U9IM8TAC:OI2DL65P
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 43113] Message returned from OBIS. [nQSError: 96002] Essbase Error: Internal error: Query is allocating too large memory ( > 4GB) and cannot be executed. Query allocation exceeds allocation limits. (HY000)
    But pivot table B runs fine. Help please!
    data source used : essbase 11.1.2.1
    Thanks
    sayak

    Hi Dpka ,
    Yes! We are hitting a separate Essbase server with the Linux OBIEE environment.
    I'll execute the query in Essbase and get back to you!
    Thanks
    sayak

  • How to measure memory consumption during unit tests?

    Hello,
    I'm looking for simple tools to automate measurement of overall memory consumption during some memory-sensitive unit tests.
    I would like to apply this when running a batch of a test suite targeting tests that exercise memory-sensitive operations.
    The intent is to verify that a modification of code in this area does not introduce a regression (a rise) in memory consumption.
    I would include it in the nightly build and monitor the evolution of a summary figure (a-ha, the "userAccount" test suite consumed 615 MB last night, compared to 500 MB the night before... What did we check in yesterday?).
    Running on Win32, the system-level info on memory consumed is known not to be accurate.
    Using perfmon is more accurate, but it seems overkill - plus it's difficult to automate; you have to attach it to an existing process...
    I've looked at the hprof included in Sun's JDK, but it seems to be targeted at investigating problems rather than discovering them. In particular there isn't a "summary line" of the total memory consumed...
    What tools do you use/suggest?

    > However this requires manual code in my unit test classes themselves, e.g. in my setUp/tearDown methods.
    > I was expecting something more orthogonal to the tests, that I could activate or not depending on the purpose of the test.
    Some IDEs display memory usage and execution time for each test/group of tests.
    > If I don't have another option, OK I'll wire my own pre/post memory counting, maybe using AOP, and will activate memory measurement only when needed.
    If you need to check the memory used, I would do this. You can do the same thing with AOP. Unless you are using an AOP library, I doubt it is worth the additional effort.
    > Have you actually used your suggestion to automate memory consumption measurement as part of daily builds?
    Yes, but I have less than a dozen tests which fail if the memory consumption is significantly different.
    I have more tests which fail if the execution time is significantly different.
    Rather than use the setUp()/tearDown() approach, I use the testMethod() as a wrapper for the real test and add the check inside it. This is useful as different tests will use different amounts of memory.
    > Plus, I did not understand your suggestion, can you elaborate?
    > - I first assumed you meant freeMemory(), which, as you suggest, is not accurate, since it returns "an approximation of [available memory]"
    freeMemory gives the free memory out of the total. The total can change, so you need to take total minus free as the memory used.
    > - I re-read it and now assume you do mean totalMemory(), which unfortunately will grow only when more memory than the initial heap setting is needed.
    More memory is needed when more memory is used. Unless your test uses a significant amount of memory there is no way to measure it reliably, i.e. if a GC is performed during a test, the test can appear to use less memory than it consumes.
    > - Eventually, I may need to include calls to System.gc(), but I seem to remember it is best-effort only (endless discussion) and may not help accuracy.
    If you do a System.gc() followed by a Thread.yield() at the start, it can improve things marginally.
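    (Below is a minimal sketch of the total-minus-free approach discussed here, wrapped so it can be called from a test. The workload and regression threshold are hypothetical, and the measurement remains best-effort for the GC reasons given above.)

    import java.util.ArrayList;
    import java.util.List;

    public final class MemoryProbe {
        // Keep allocations reachable so the GC cannot collect them mid-measurement.
        static final List<byte[]> retained = new ArrayList<byte[]>();

        // Used heap = total - free, as discussed above.
        static long used() {
            Runtime rt = Runtime.getRuntime();
            return rt.totalMemory() - rt.freeMemory();
        }

        // System.gc() is best-effort; repeating gc + yield improves the odds.
        static void gcHard() {
            for (int i = 0; i < 3; i++) {
                System.gc();
                Thread.yield();
            }
        }

        public static void main(String[] args) {
            gcHard();
            long before = used();
            for (int i = 0; i < 100; i++) {          // hypothetical workload:
                retained.add(new byte[1024 * 1024]); // retain ~100 MB on the heap
            }
            gcHard();
            long deltaMb = (used() - before) / (1024 * 1024);
            System.out.println("retained ~" + deltaMb + " MB");
            if (deltaMb > 120) {                     // hypothetical regression threshold
                throw new AssertionError("memory regression: " + deltaMb + " MB");
            }
        }
    }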

  • Bad Performance/OutOfMemory Error in CMP Entity Bean with Large DB

    Hello:
    I have a CMP entity bean deployed on WLS 7.0.
    The entity bean maps to a table that has 97,480 records.
    It has a finder: findAll() -- SELECT OBJECT(e) FROM Equipment e
    I have a JSP client that invokes findAll().
    The performance is very poor: ~150 seconds just to perform the findAll() (benchmarked
    from within the JSP code).
    If more than one simultaneous call is made, I get an OutOfMemory error.
    WLS is started with a max memory of 512MB.
    The EJB is deployed with <max-beans-in-cache>100000</max-beans-in-cache>
    (without the max-beans-in-cache directive, the performance is worse).
    Is there any documentation available to help us in deploying CMP entity beans
    with a very large number of records (instances)?
    Any help is greatly appreciated.
    Regards
    Rajan

    Hi
    You should use a Select Method; it does support cursors.
    Or a Home Select Method combination.
    Regards
    Thomas

  • Query is allocating too large memory error (> 4GB) in Essbase 11.1.2

    Hi All,
    We are currently preparing dashboards in OBIEE from Hyperion Essbase ASO (11.1.2) cubes. When we try to retrieve data with more attributes, we face the error below:
    "Odbc driver returned an error (SQLExecDirectW).
    Error Details
    Error Codes: OPR4ONWY:U9IM8TAC:OI2DL65P
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 43113] Message returned from OBIS. [nQSError: 96002] Essbase Error: Internal error: Query is allocating too large memory ( > 4GB) and cannot be executed. Query allocation exceeds allocation limits. (HY000)"
    Currently our data file size is less than 2 GB, so we are using "Pending Cache Size=64MB".
    Please let me know which memory setting I have to increase to resolve this issue.
    Thanks,
    SatyaB

    Hi,
    Do you have any dynamic hierarchies? What is the size of the data set?
    Thanks,
    Nathan
