Cache query results in too much garbage collection activity

Oracle Coherence Version 3.6.1.3 Enterprise Edition: Production mode
JRE 6 Update 21
Linux OS 64 bit
The application uses a Customer object with the following structure:
Customer(CustID, FirstName, LastName, CCNumber, OCNumber)
Each property of Customer is an inner class that exposes a getValue method returning a value. The getValue methods of CCNumber and OCNumber return a Long. There are 150 million Customer instances in the cache. To hold this much data in cache we are running several nodes on 2 machines.
The following code is used to create indexes on CCNumber and OCNumber:
     ValueExtractor[] valExt = new ValueExtractor[]{
          new ReflectionExtractor("getCCNumber"), new ReflectionExtractor("getValue")};
     ChainedExtractor chExt = new ChainedExtractor(valExt);
     Long value = new Long(0);
     Filter f = new NotEqualsFilter(chExt, value);
     ValueExtractor condExtractor = new ConditionalExtractor(f, chExt, true);
     cache.addIndex(condExtractor, false, null);

The client code queries the cache with the following code:
     ValueExtractor[] valExt1 = new ValueExtractor[]{
          new ReflectionExtractor("getCCNumber"), new ReflectionExtractor("getValue")};
     ChainedExtractor chExt1 = new ChainedExtractor(valExt1);
     EqualsFilter filter1 = new EqualsFilter(chExt1, ccnumber);
     ValueExtractor[] valExt2 = new ValueExtractor[]{
          new ReflectionExtractor("getOCNumber"), new ReflectionExtractor("getValue")};
     ChainedExtractor chExt2 = new ChainedExtractor(valExt2);
     EqualsFilter filter2 = new EqualsFilter(chExt2, ocnumber);
     AnyFilter anyFilter = new AnyFilter(new Filter[]{filter1, filter2});
     cache.entrySet(anyFilter);

The observation is that with 20 client threads the application performs well (average response time = 200 ms), but as the number of client threads increases, performance degrades disproportionately (queries take anywhere between 1000 ms and 8000 ms with 60 threads). I think this is because the eden space fills up very fast as the number of client threads goes up. The number of collections per second rises with the number of client threads: there are almost 2-3 ParNew collections every second with 60 client threads, whereas there is only one collection per second with 20 client threads. Even a 100-200 ms pause degrades the overall query performance.
My question is: why is Coherence creating so many objects that eden fills up so fast? Is there anything I need to do in my code?

Hi Coh,
The reason for so much garbage is that you are using ReflectionExtractors in your filters; I assume you do not have any indexes on your caches either. This means that each time you execute a query, Coherence has to scan the cache for matches to the filter - like a full table scan in a DB. For each entry in the cache, Coherence has to deserialize that entry into a real object and then call the methods in the filters via reflection. Once the query is finished, all these deserialized objects are garbage that needs to be collected. For a big cache this can be a lot of garbage.
You can change to POF extractors to avoid the deserialization step, which should reduce the garbage quite a bit, although not eliminate it. You could also use indexes, which should eliminate pretty much all of the garbage you are seeing during queries.
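As a rough sketch rather than a drop-in replacement - assuming Customer is POF-serialized, and reusing the cache, ccnumber and ocnumber variables from your own snippets; the POF indexes 3 and 4 below are placeholders you would adjust (or replace with a SimplePofPath) to match your actual POF layout - the index creation and query could look like this, with no Customer deserialization on the storage nodes:
     import java.util.Set;
     import com.tangosol.util.Filter;
     import com.tangosol.util.ValueExtractor;
     import com.tangosol.util.extractor.PofExtractor;
     import com.tangosol.util.filter.AnyFilter;
     import com.tangosol.util.filter.EqualsFilter;

     // Extract the Long values straight out of the POF binary. The property
     // indexes 3 and 4 are assumptions - they must match the order in which
     // Customer writes CCNumber and OCNumber in its POF serialization.
     ValueExtractor ccExtractor = new PofExtractor(Long.class, 3);
     ValueExtractor ocExtractor = new PofExtractor(Long.class, 4);

     // Build indexes once, at application start-up, so queries become index
     // lookups instead of full cache scans.
     cache.addIndex(ccExtractor, false, null);
     cache.addIndex(ocExtractor, false, null);

     // Reuse the same extractors in the filters so Coherence can match them
     // against the indexes.
     Filter anyFilter = new AnyFilter(new Filter[]{
          new EqualsFilter(ccExtractor, ccnumber),
          new EqualsFilter(ocExtractor, ocnumber)});
     Set results = cache.entrySet(anyFilter);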
JK

Similar Messages

  • Query is taking too much time

    hi
    The following query is taking too much time (more than 30 minutes), working with 11g.
    The table has three columns: rid, ida, geometry; an index has been created on each column.
    The table has around 5,40,000 records of point geometries.
    Please help me with your suggestions. I want to select duplicate point geometries where ida='CORD'.
    SQL> select a.rid, b.rid from totalrecords a, totalrecords b
         where a.ida='CORD' and b.ida='CORD'
         and sdo_equal(a.geometry, b.geometry)='TRUE'
         and a.rid != b.rid order by 1,2;
    regards

    "I have removed some AND conditions." That was not necessary. It's just that Oracle can see, for example, in
    a.ida='CORD' AND
    b.ida='CORD' AND
    a.rid != b.rid AND
    sdo_equal(a.geometry, b.geometry)='TRUE'
    ORDER BY 1,2;
    that if a.ida does not equal 'CORD', the whole set of conditions evaluates to FALSE, so Oracle will not bother evaluating the rest of the conditions because it is all AND'ed together, and TRUE AND FALSE = FALSE.
    So if you place your least expensive conditions first (even though the optimizer can and will reorder conditions), this will give you a small performance benefit - usually too small to notice, but on 5.4 million records it should be noticeable.
    "and I have set layer_gtype=POINT." Good, that will help. I forgot about that one (Thanks Luc!).
    "Now I am facing the problem of how to DELETE duplicate point geometries. The following query is taking too much time." What is too much time? Do you need to delete these duplicate points on a daily or hourly basis? Or is this a one-time cleanup action? If it's a one-time cleanup operation, does it really matter if it takes half an hour?
    And if this is a daily or even hourly operation, then why don't you prevent the duplicates from entering the table in the first place? That will save you from having to clean up afterwards. Of course, this might not be possible with your business requirements.
    Lastly: can you post an explain plan for your queries? Those might give us an idea of what is taking so much time. Please enclose the results of the explain plan with
    [ c o d e ]
    <code/results here>
    [ / c o d e ]
    that way the original formatting is kept and it makes things much easier to read.
    Regards,
    Stefan

  • Cannot query using both conforming and cached query result

    TopLink doesn't allow me to use both conforming and cached query results at the same time.
    Conforming is certainly not a superset of the [cached query result] features.
    Can you confirm that it's a limitation of TopLink?
    Any known workaround to end up with the same features as using both conforming and cached query results?
    Conforming is about seeing modifications you make in the same transaction. As a bonus, if you query for one object and specify at least the id as criteria, TopLink has to check in memory anyway, so it can avoid going to the database.
    But if I do a query like "give me employees hired before now and after 30 days ago", it involves more than one object and is about finding existence, so cached query results are needed to get acceptable performance in a complex application that tries to avoid generating the same SQL over and over again.
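    For readers hitting the same wall, here is a minimal sketch of what the two settings look like individually on the TopLink query API (the Employee class, the date variables and the oracle.toplink.* package layout are assumptions for illustration; per this thread, the two options cannot be combined on one query):
        import java.util.Date;
        import oracle.toplink.expressions.ExpressionBuilder;
        import oracle.toplink.queryframework.ReadAllQuery;

        Date now = new Date();
        Date thirtyDaysAgo = new Date(now.getTime() - 30L * 24 * 60 * 60 * 1000);

        // Option 1: conforming - results reflect uncommitted changes made in the
        // current unit of work, at the cost of in-memory filtering.
        ExpressionBuilder emp1 = new ExpressionBuilder();
        ReadAllQuery conforming = new ReadAllQuery(Employee.class);
        conforming.setSelectionCriteria(emp1.get("hireDate").between(thirtyDaysAgo, now));
        conforming.conformResultsInUnitOfWork();

        // Option 2: cached query results - the query holds on to its own result
        // set and skips the database on re-execution, but does not see
        // uncommitted changes.
        ExpressionBuilder emp2 = new ExpressionBuilder();
        ReadAllQuery cached = new ReadAllQuery(Employee.class);
        cached.setSelectionCriteria(emp2.get("hireDate").between(thirtyDaysAgo, now));
        cached.setShouldCacheQueryResults(true);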

    That's where the trace just ends? It doesn't look like there's any LIKE or filtering going on (with respect to the Oracle pieces anyway); apparently MS Access simply requested the whole table.
    What do you mean by 'hang' exactly? Are you sure it's not just taking a long time to complete? How long have you waited? How fast does it complete in the other environment?
    ODBC tracing isn't likely to help much for that. SQL*Net tracing would be better to see what is going on at a lower level. Specifically, what is going on at the network level? Is the client waiting for a packet to be returned from the database?
    Is the database having a hard time processing the query, perhaps due to index/tuning issues?
    Assuming that is indeed the query that is "hung", how much data does that return?
    Are you able to reproduce the same behavior with that query and vbscript for example?
    Greg

  • cache-query-results question

    I have another post for general descriptor tag information but I do have a specific question. In a project I am looking at I see:
    <cache-usage> check cache by primary key </cache-usage>
    <cache-query-results>false</cache-query-results>
    <maintain-cache>true</maintain-cache>
    I'm not sure how to interpret this. Does this mean that a cache is in place or not? cache-query-results is set to false, which implies no caching, yet the other parameters imply a cache is in place. What overrides here?
    Thanks

    The XML maps directly to the API so the JavaDocs and related documentation are the best tools:
    cache-usage: query.setCacheUsage(int)
    This option indicates how the object cache should be used when processing the query. This is how in-memory query is configured as well as support for cache-hits on ReadObjectQuery.
    cache-query-result: query.setShouldCacheQueryResults(boolean)
    This option allows you to indicate that the results returned from the query execution should be held. When the query is executed again these results will be returned without going to the database or searching the object cache. This is just caching the results locally within the query.
    maintain-cache: query.maintainCache() or query.dontMaintainCache()
    This setting determines whether the results returned from the query should be cached in the shared object cache. It is on by default, and turning it off is very rare; it is occasionally done to compare the cached version with the database version when handling an optimistic locking failure.
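    As a rough illustration of how the quoted XML maps onto that API (the Employee class and the oracle.toplink.* package layout are placeholders; adjust to your release):
        import oracle.toplink.queryframework.ObjectLevelReadQuery;
        import oracle.toplink.queryframework.ReadObjectQuery;

        ReadObjectQuery query = new ReadObjectQuery(Employee.class);

        // <cache-usage> check cache by primary key </cache-usage>
        query.setCacheUsage(ObjectLevelReadQuery.CheckCacheByPrimaryKey);

        // <cache-query-results>false</cache-query-results>
        query.setShouldCacheQueryResults(false);

        // <maintain-cache>true</maintain-cache> (the default)
        query.maintainCache();
    In other words: check the object cache by primary key, do not hold a private result set on the query itself, and still place whatever is read into the shared cache.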
    Doug

  • I have an iMac and a MacBook with Intel Core 2 Duo processors. I realize that this is within the stated requirements for Lion.  However was wondering if by migrating to Lion this will result in too much demand on processor resources, thus a slower machine

    I have an iMac and a MacBook with Intel Core 2 Duo processors. I realize that this is within the stated requirements for Lion. However, I was wondering whether migrating to Lion will result in too much demand on processor resources, and thus a slower machine than with Snow Leopard?

    Which iMac? Which MacBook? Both have had several model iterations, even within the framework of a Core 2 Duo processor. With that said, I have an early 2009 24" iMac with a 2.66 GHz Core 2 Duo and 8 GB of RAM, and in my opinion it seems to be running smoother and faster with Lion.

  • Caching query results?

    Hi guys,
    I have this page which calls the same query (across a database link!) four times: first when an initial LOV is generated (and the page is loaded), again for another LOV based off the initial one, and finally for the report generated based off the 2 selections. This is crazily slow (roughly 30-40 seconds each load). Also, when you try to sort an Interactive Report by clicking the column names, it queries the database again (I assumed this part was cached, but apparently not). I therefore need to cache the results from an initial query and then just limit the results locally. How can I do this? Guides? Examples?
    Thanks for help.
    Mike

    I tried this example based on a view:
    CREATE MATERIALIZED VIEW MV_TEST2
         REFRESH COMPLETE
         START WITH SYSDATE
         NEXT  SYSDATE + 1/48
         WITH ROWID
         AS SELECT * FROM test1;

    REFRESH COMPLETE -- The complete refresh re-creates the entire materialized view.
    START WITH SYSDATE -- run now
    NEXT SYSDATE + 1/48 -- run again in half an hour
    WITH ROWID -- I think this option is important if you use partial refresh of the view.
    AS SELECT * FROM test1; -- test1 is a view:
    CREATE OR REPLACE VIEW TEST1 AS
    SELECT st_id, st_name
        FROM aaw_solution_tree;

    Are column indexes still possible? I'm not sure. With respect to indexing MVs on 10gR2, Jonathan Lewis wrote: "... you are allowed to create indexes on the tables that sit under materialized views - just don't make them unique indexes."
    How much freedom is there in setting the refresh rate?
    What type of refreshing do you need?
    Another useful link: http://asktom.oracle.com/pls/ask/search?p_string=materialized+view
    Hope it helps.
    Tobias

  • Low resolution - a result of too much compression or is it the source file?

    A friend of mine is helping me produce a DVD using DVD Studio Pro 4, and as our source files we are using 14 video files that were generated using Final Cut Pro.
    When the final DVD was produced, we were both disappointed to discover that the screen resolution appears to be quite low - the darks are pixelated and the entire image lacks sharpness. It looks fine on anything up to a large computer screen, but it definitely seems inadequate when viewed on a large flat-screen HD television.
    The DVD is not HD formatted, by the way.
    Being quite the amateur on DVD Studio Pro, my question is this: have the original video files been compressed too much, and is this the cause of the problem?
    The 14 video files that were loaded into DVD Studio Pro, in their original format, take up over 15 GB of space - they are all QuickTime files (.mov format), generated from Final Cut Pro as DVCPRO files in NTSC (those were my export settings) - yet when all of the files are compressed and loaded into DVD Studio Pro, they take up only 1.4 GB.
    It would seem that we have at least 3 GB to spare.
    Any feedback would be welcomed, and please excuse me for my beginner questions.
    Additionally, it would be great if anyone could take the time to explain to me where I can change the compression settings / preferences in DVD Studio Pro.
    Again, thanks very much.

    OK, do not use DVD SP to compress your video. Use Compressor to create a .m2v file of your video and an .ac3 file of your audio. Tutorials can be found at: http://www.kenstone.net/fcphomepage/fcp_homepageindex.html
    With Compressor you will have much more control over the encoding process and you can get better results. That being said, Compressor is good, but there are even better third party MPEG-2 encoders out there. They will give you even better results at lower bitrates.

  • How to force Work Item Query Policy to refresh its cached query results?

    I have enabled the Work Item Query Policy on my TFS project.  It works as expected, except when using Visual Studio 2013 with the following sequence of steps:
    User selects Check in Pending Changes from the Source Control Explorer
    User enters in the ID of the work item to be associated to the changeset
    User clicks the Check In button
    Work Item Query Policy displays message ' Work item ##### was not found in the results of stored query...'
    User realizes his mistake, and modifies (and saves) the work item so that it is returned in the query result set
    User clicks the Check In button again expecting the TFS policy to accept the association
    Work Item Query Policy still displays message ' Work item ##### was not found in the results of stored query...'
    Removing the Work Item and re-associating it does not make a difference.  The only workaround that I have found is to close Visual Studio and reopen it.  Does anyone have a better solution than this?

    Our setup is different from the one you are using:
    - User is using VS 2013 Update 4.
    - We are running TFS 2010 SP1
    The test case that you described is different from the one that is causing my problem (that scenario works fine for me as well). I am trying to associate the check-in to the same work item both times, whereas you are associating it to a different work item the second time. I can consistently reproduce the error using the following steps:
    1) Create a query that returns All Bugs in Active state, and set it as the query for the Work Item Query Policy
    2) Create and save a new Bug
    3) Run the query to confirm that the new bug does not appear in the result set
    4) Checkout a file, modify it, save it
    5) Check in the file and associate it to the bug from step 2)
    - the Work Item Query Policy will issue an error message saying that the work item cannot be found in the associated query
    6) Change the state of the bug to Active, and save
    7) Refresh the query to confirm that the bug now appears in the result set
    8) Check in the file again
    - error message from step 5) will not go away

  • Discoverer Query is taking too much time

    Hi,
    I am having performance problems with some queries in Discoverer (Relational). Discoverer is taking around 30 minutes to run the report, but if I run the same query through TOAD it takes only 5 to 6 seconds. Why is that?
    Structure of Report:
    The report is using crosstab with 3 dimensions on the left side and 3 dimensions on the page items.
    Why is the performance in Discoverer slow, and how can I improve it?
    Thanks & Kind Regards
    Rana

    Hi all
    Russ' comments are correct. When you use crosstabs or page items, Discoverer has to execute the entire query before it can bring back any results. This is a known factor that should be taken into account when end users create workbooks.
    The following conditions will greatly impact performance:
    1. Crosstabs with many items on the left axis
    2. Multiple Crosstab values
    3. Page items with a large set of values
    4. Crosstabs or page items that use complex calculations
    5. Multiple page items
    Thus, users must avoid building worksheets that use too many of the above. As commented previously, this is well documented in the forums and on the Oracle website and should not come as a surprise. If it does, then either suitable training has not been given to the end users or Oracle's own end user documentation has not been read. Section 6 of the Discoverer Plus user guide has the following advice:
    Whether you are using Discoverer Plus Relational to perform ad hoc queries, or to create reports for other end users, you want to minimize the time it takes to run queries and reports. By following a few simple design guidelines, you can maximize Discoverer performance.
    Where possible:
    use tabular reports rather than cross-tabular reports
    minimize the number of page items in reports
    avoid wide cross-tabular reports
    avoid creating reports that return tens of thousands of rows
    provide parameters to reduce the amount of data produced
    minimize the number of worksheets in workbooks
    remove extraneous worksheets from workbooks (especially if end users frequently use Discoverer’s export option, see Notes below)
    Notes:
    When end users export data in Discoverer Plus Relational or Discoverer Viewer, they can export either the current worksheet or all the worksheets. In other words, they cannot selectively choose the worksheets to be exported. Remove extraneous worksheets so that extra data is not included when end users export all worksheets.
    I hope this helps
    Regards
    Michael

  • BW query takes too much time to run

    Hi,
    We have a BW query that takes 5 minutes to run.
    Our user asks us to minimize the time of the extraction.
    Please let us know if you can help us.
    Best Regards
    Carmen

    Hi,
    use InfoCubes instead of DSOs for reporting.
    If you use InfoCubes, then create aggregates for your queries.
    Also partition the cubes, based on e.g. 0CALYEAR.
    Here are some useful documents you should read:
    [SAP BW Query Performance Tuning with Aggregates|https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9]
    [Performance Tuning for SAP Business Information Warehouse|https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c]
    SAP notes:
    Note [859456|https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes/sdn_oss_bw_bex/~form/handler%7b5f4150503d3030323030363832353030303030303031393732265f4556454e543d444953504c4159265f4e4e554d3d383539343536%7d] - Increasing performance using the cache
    Note [356732|https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes/sdn_oss_bw_bex/~form/handler%7b5f4150503d3030323030363832353030303030303031393732265f4556454e543d444953504c4159265f4e4e554d3d333536373332%7d] - Performance Tuning for Queries with Aggregates
    Note [1241794|https://websmp230.sap-ag.de/sap(bD1kZSZjPTAwMQ==)/bc/bsp/spn/sapnotes/index2.htm?numm=1241794] - Query performance with partitioned InfoCubes
    Regards
    Andreas

  • Delete query is taking too much time...

    Hi All,
    The delete query below is taking at least 1 hour.
    DELETE aux_current_link aux
    WHERE EXISTS (
    SELECT *
    FROM link_trans_cons link2
    WHERE aux.tr_leu_id = link2.tr_leu_id
    AND aux.kind_of_link = link2.kind_of_link
    AND link2.TYPE = 'H');
    The table aux_current_link has 284,279 records and has 6 normal indexes.
    Please help me.
    Subir

    Not even close to enough information.
    Look here to see if you can tune the operation:
    When your query takes too long ...
    But for a delete you need to understand that the indexes need to be maintained - you have 6 of them, and that requires effort. Also, foreign keys need to be checked to make sure you don't violate any enabled foreign keys (so you may need an index on any table that references a column of the table you are deleting from).
    If you are deleting a HIGH percentage of the table, you may be better off doing a CREATE TABLE AS SELECT ... query to keep the rows you want from your table, then dropping your current table, renaming the new one you made, and adding all the indexes to it.
    Otherwise, once you've tuned the operation (query), assuming it can be tuned, it's going to take as long as it needs to take.
    Message was edited by:
    Tubby

  • Performance problem...is there a way to cache query results?

    Greetings team,
    I've been deploying DS5.2 for a while now and am on the cusp of pushing it into our production environment; however, I've been noticing lately that some hosts are taking an exorbitantly long time to log in (actually, a user noted it, and I'm now investigating).
    Logins to hosts in this environment can take anywhere from 10 to 50 seconds. One thing that I've noticed is that any time you run a command that requires any awareness of uid-to-username translation (e.g. ls -l /opt/home), queries are made to the configured directory server for this information. Is this normal? Since uids and usernames don't often change (in most environments, anyway), is there a way this could be cached?
    I see also in my access log for my primary server (configured as a hub, btw) that there is near constant traffic to that host for LDAP info. I'm not sure why it's so chatty, but it does appear to be slowing things down a bit. The load on my LDAP host (a SunFire V210 w/ 1GHz processor, 1024MB RAM) seems to float between 1 and 12, with sar reporting an average idle time of about 44%.
    Any ideas? I'm really at a loss to explain why there's so much traffic to this host when much of it seems to come from hosts with nobody logged into them.
    Patrick

    It is great that you have found the root cause of your issue.
    nscd is by default started at boot time by a usual OS install. There is an /etc/nscd.conf, but I doubt that anyone will change anything there, as the default settings are good for most cases.
    I think LDAP search performance is also affected by the existence of search indexes.
    I have observed that if the user home directory is NFS mounted, especially over a WAN, be it via /etc/fstab or automount maps, the login process will be very slow; it will take a while to obtain a command prompt at the home directory level.
    Gary

    Gary et al,
    In my environment nscd has been explicitly disabled for some historical reasons, none of which are still a problem. So, I'm going to enable it for only passwd and group caching, with the default values for those caches.
    I'm in the process of working out my performance tuning plan for my LDAP servers, but I'm definitely going to have an eye on indices and caches. Those will probably have the least impact on search times and such for the moment since my directory is so tiny (261 entries!), but preventing that traffic from hitting the server at all will be a huge savings.
    I can definitely see why WAN mounted homedirs would cause things to lag. That's not the case here since NFS is a big no-no.
    Patrick

  • Query is taking too much time for inserting into a temp table and for spooling

    Hi,
    I am working on a query optimization project where I have found a query that takes a very long time to execute.
    Temp table is defined as follows:
    DECLARE @CastSummary TABLE (CastID INT, SalesOrderID INT, ProductionOrderID INT, Actual FLOAT,
    ProductionOrderNo NVARCHAR(50), SalesOrderNo NVARCHAR(50), Customer NVARCHAR(MAX), Targets FLOAT)

    INSERT INTO @CastSummary
    SELECT
    C.CastID,
    SO.SalesOrderID,
    PO.ProductionOrderID,
    F.CalculatedWeight,
    PO.ProductionOrderNo,
    SO.SalesOrderNo,
    SC.Name,
    SO.OrderQty
    FROM
    CastCast C
    JOIN Sales.Production PO ON PO.ProductionOrderID = C.ProductionOrderID
    join Sales.ProductionDetail d on d.ProductionOrderID = PO.ProductionOrderID
    LEFT JOIN Sales.SalesOrder SO ON d.SalesOrderID = SO.SalesOrderID
    LEFT JOIN FinishedGoods.Equipment F ON F.CastID = C.CastID
    JOIN Sales.Customer SC ON SC.CustomerID = SO.CustomerID
    WHERE
    (C.CreatedDate >= @StartDate AND C.CreatedDate < @EndDate)
    It takes almost 33% for the Table Insert when I insert the data into the temp table, and then 67% for Spooling. I changed 2 LEFT JOINs in the above query to plain JOINs and tried again; query execution became a bit faster, but it still needs improvement.
    How can I improve it further? Will it be good enough if I create indexes on the columns for the temp table, or what if I use derived tables? Please suggest.
    -Pep

    "How can I improve it further? Will it be good enough if I create indexes on the columns for the temp table, or what if I use derived tables?"
    I suggest you start with index tuning.  Specifically, make sure the columns specified in the WHERE and JOIN clauses are properly indexed (ideally clustered or covering, and unique when possible).  Changing outer joins to inner joins is appropriate if you don't need outer joins in the first place.
    Dan Guzman, SQL Server MVP, http://www.dbdelta.com

  • 3000 old emails back to 2009 came along for the ride when I transferred aol address book to Mac address book. All attempts to delete en masse have no results. Too much stuff. Procedures?

    3000 old emails, some as far back as 2009, came along for the ride when I transferred my AOL address book to my iMac address book. I can't find the magic key to permanently delete them. Ideas on procedures?

    I'd try deleting 50 at a time - 3000 is a stretch.
    By the way, double posting is not at all helpful.

  • Query result caching on oracle 9 and 10 vs indexing

    I am trying to improve performance on Oracle 9i and 10g.
    We use some queries that take up to 30 minutes to execute.
    I heard that there are some products to cache query results.
    Would this have any advantage over using indexes or materialized views?
    Does anyone know any products that I can use to cache the results of this queries on disk?
    Personally I think that by using query result caching I would reduce the CPU time needed to process the query.
    Is this true?

    Your post pushes all the wrong buttons, starting with the fact that 9i and 10g are marketing labels, not version numbers.
    You don't tune queries by spending money and throwing resources at them. You tune them by identifying the problem queries, running explain plans, visualizing their output using DBMS_XPLAN, and addressing the root cause.
    If you want help post full version numbers, the SQL statements, and the DBMS_XPLAN outputs.
