Coherence-Extend and Continuous Query performance

Hi,
I am trying to evaluate the performance impact of continuous queries when using Coherence*Extend (TCP). The idea is that desktop clients will run continuous queries against a cluster, while other processes update the data in that cluster. The clients themselves take a purely read-only view of the data.
In my tests, the updater process takes about 250ms to update 5000 values in the cache (using a putAll operation). When a continuous query is running against a remote cache connected via Coherence*Extend, the update time increases to about 1500ms. This is not CPU bound.
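For reference, a minimal sketch of the updater-side measurement described above, assuming a plain NamedCache; the cache name and the generated values are hypothetical:
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import java.util.HashMap;
import java.util.Map;

public class UpdaterTimingSketch
{
    public static void main(String[] args)
    {
        // hypothetical cache name; the real test uses whatever cache the CQC clients observe
        NamedCache cache = CacheFactory.getCache("dist-prices");

        // build a batch of 5000 values and time a single putAll, as in the test described above
        Map batch = new HashMap();
        for (int i = 0; i < 5000; i++)
        {
            batch.put(Integer.valueOf(i), Double.valueOf(Math.random()));
        }

        long start = System.currentTimeMillis();
        cache.putAll(batch);
        System.out.println("putAll of 5000 entries took "
                + (System.currentTimeMillis() - start) + " ms");

        CacheFactory.shutdown();
    }
}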
Is this what people would expect?
If so this raises questions to me about:
1) Slow subscribers: what if one of my clients is very badly behaved? Can I detect this and/or take action?
2) Conflation of updates: can Coherence do conflation?
3) Can I get control to send object deltas over the wire rather than entire objects?
Is this a use case for which Coherence*Extend and continuous queries were designed?
Robert

Yes, it is certainly possible, although depending on your requirements it may require more or less additional coding. You have a few choices. For example, since you have a CQC on the cache, you could conceivably aggregate locally (on any event). In other words, since all the data are local, there is no need to do the parallel aggregation (unless it is CPU limited). Depending on the aggregation, you may only have to recalculate part of it.
You can access the internal data structure (Map) within the CQC as follows:
Map map = cqc.getInternalCache();
// now we can do aggregation against a wrapper around the local map
NamedCache cache = new WrapperNamedCache(map, "local-aggregation");
cache.aggregate(..);
More complex approaches would only recalculate portions based on the event, or (depending on the function) might use the event to adjust the aggregated results.
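For illustration, a minimal sketch of the "adjust the aggregate from the event" approach, assuming the cache values are numeric; the cache name is hypothetical:
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.net.cache.ContinuousQueryCache;
import com.tangosol.util.MapEvent;
import com.tangosol.util.MapListener;
import com.tangosol.util.filter.AlwaysFilter;

public class RunningTotalSketch
{
    // running sum maintained from CQC events instead of re-aggregating on every change
    static volatile double s_total;

    public static void main(String[] args) throws InterruptedException
    {
        NamedCache cache = CacheFactory.getCache("dist-prices"); // hypothetical cache name

        MapListener listener = new MapListener()
        {
            public void entryInserted(MapEvent evt)
            {
                s_total += ((Number) evt.getNewValue()).doubleValue();
            }
            public void entryUpdated(MapEvent evt)
            {
                s_total += ((Number) evt.getNewValue()).doubleValue()
                         - ((Number) evt.getOldValue()).doubleValue();
            }
            public void entryDeleted(MapEvent evt)
            {
                s_total -= ((Number) evt.getOldValue()).doubleValue();
            }
        };

        // the CQC delivers synthetic insert events for the current contents,
        // so s_total is primed once and then adjusted as updates arrive
        ContinuousQueryCache cqc = new ContinuousQueryCache(cache, new AlwaysFilter(), listener);

        Thread.sleep(5000);
        System.out.println("running total: " + s_total);

        cqc.release();
        CacheFactory.shutdown();
    }
}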
Peace,
Cameron Purdy | Oracle Coherence
http://coherence.oracle.com/

Similar Messages

  • Coherence Extend and Local Cache

    I am trying to use Coherence*Extend to do some work with a cache. Is that possible with a local cache?
    I keep getting a NullPointerException, as if the data is not being stored in the cache.
         <cache-mapping>
              <cache-name>local-pds2-*</cache-name>
              <scheme-name>local-cache</scheme-name>
         </cache-mapping>
         <local-scheme>
              <scheme-name>local-cache</scheme-name>
                   <eviction-policy>LRU</eviction-policy>
                   <high-units>32000</high-units>
                   <low-units>10</low-units>
                   <unit-calculator>FIXED</unit-calculator>
                   <expiry-delay>10ms</expiry-delay>
                   <flush-delay>1000ms</flush-delay>
         </local-scheme>
    is there something wrong in my configuration?

    This is the config I use for the client:
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
          <caching-scheme-mapping>
               <cache-mapping>
                    <cache-name>local-pds2-*</cache-name>
                    <scheme-name>local-cache</scheme-name>
               </cache-mapping>
               <cache-mapping>
                    <cache-name>dist-pds2-*</cache-name>
                    <scheme-name>extend-dist</scheme-name>
               </cache-mapping>
          </caching-scheme-mapping>
          <caching-schemes>
               <local-scheme>
                    <scheme-name>local-cache</scheme-name>
                    <eviction-policy>LRU</eviction-policy>
                    <high-units>32000</high-units>
                    <low-units>10</low-units>
                    <unit-calculator>FIXED</unit-calculator>
                    <expiry-delay>10ms</expiry-delay>
                    <flush-delay>1000ms</flush-delay>
               </local-scheme>
               <remote-cache-scheme>
                    <scheme-name>extend-dist</scheme-name>
                    <service-name>ExtendTcpCacheService</service-name>
                    <initiator-config>
                         <tcp-initiator>
                              <remote-addresses>
                                   <socket-address>
                                        <address>172.16.2.229</address>
                                        <address>localhost</address>
                                        <port>5354</port>
                                   </socket-address>
                              </remote-addresses>
                              <connect-timeout>10s</connect-timeout>
                         </tcp-initiator>
                         <outgoing-message-handler>
                              <request-timeout>5s</request-timeout>
                         </outgoing-message-handler>
                    </initiator-config>
               </remote-cache-scheme>
          </caching-schemes>
    </cache-config>
    and this for the server
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
         <defaults>
              <serializer system-property="tangosol.coherence.serializer"/>
              <socket-provider system-property="tangosol.coherence.socketprovider"/>
         </defaults>
         <caching-scheme-mapping>
              <cache-mapping>
                   <cache-name>dist-pds2-*</cache-name>
                   <scheme-name>dist-default</scheme-name>
              </cache-mapping>
         </caching-scheme-mapping>
         <cache-mapping>
              <cache-name>dist-*</cache-name>
              <scheme-name>distributed</scheme-name>
              <init-params>
                   <init-param>
                        <param-name>back-size-limit</param-name>
                        <param-value>8MB</param-value>
                   </init-param>
              </init-params>
         </cache-mapping>
         <distributed-scheme>
              <scheme-name>distributed</scheme-name>
              <service-name>DistributedCache</service-name>
              <backing-map-scheme>
                   <local-scheme>
                        <scheme-ref>binary-backing-map</scheme-ref>
                   </local-scheme>
              </backing-map-scheme>
              <autostart>true</autostart>
         </distributed-scheme>
         <local-scheme>
              <scheme-name>binary-backing-map</scheme-name>
              <eviction-policy>HYBRID</eviction-policy>
              <high-units>{back-size-limit 0}</high-units>
              <unit-calculator>BINARY</unit-calculator>
              <expiry-delay>{back-expiry 1h}</expiry-delay>
              <flush-delay>1m</flush-delay>
              <cachestore-scheme></cachestore-scheme>
         </local-scheme>
         <caching-schemes>
              <distributed-scheme>
                   <scheme-name>dist-default</scheme-name>
                   <backing-map-scheme>
                        <local-scheme/>
                   </backing-map-scheme>
                   <autostart>true</autostart>
              </distributed-scheme>
              <proxy-scheme>
                   <service-name>ExtendTcpProxyService</service-name>
                   <acceptor-config>
                        <tcp-acceptor>
                             <local-address>
                                  <address >localhost</address>
                                  <port >5354</port>
                             </local-address>
                        </tcp-acceptor>
                   </acceptor-config>
                   <proxy-config>
                        <cache-service-proxy>
                             <enabled>true</enabled>
                        </cache-service-proxy>
                        <invocation-service-proxy>
                             <enabled>true</enabled>
                        </invocation-service-proxy>
                   </proxy-config>
                   <autostart >true</autostart>
              </proxy-scheme>
         </caching-schemes>
    </cache-config>
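    For what it's worth, a minimal client-side sketch that exercises both cache mappings from the configuration above; only the local-pds2-*/dist-pds2-* name patterns come from the config, the exact cache names and keys/values are hypothetical:
     import com.tangosol.net.CacheFactory;
     import com.tangosol.net.NamedCache;

     public class ExtendClientCheck
     {
         public static void main(String[] args)
         {
             // matches the local-pds2-* mapping: a purely in-process local cache
             NamedCache localCache = CacheFactory.getCache("local-pds2-test");
             localCache.put("key-1", "local-value");

             // matches the dist-pds2-* mapping: routed over Extend to the configured proxy
             NamedCache distCache = CacheFactory.getCache("dist-pds2-test");
             distCache.put("key-1", "remote-value");

             System.out.println("local  get: " + localCache.get("key-1"));
             System.out.println("remote get: " + distCache.get("key-1"));

             CacheFactory.shutdown();
         }
     }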

  • Sql:variable and XML query performance

    Can someone help with sql:variable() in XML queries? It seems that when I attempt to reference variables with the sql:variable(...) function in an XPath function (exist or nodes), it comes up with a totally different query plan, possibly ignoring my secondary indexes like the ones for VALUE and PATH.
    But if I replace sql:variable("@p_ObjectIdentifierForReference") with the literal (i.e. 'ord/p/ord0616.p') then it uses the secondary indexes more consistently.
    Below you will see an unsuccessful attempt to get the query to OPTIMIZE FOR a specific literal value of @p_ObjectIdentifierForReference. It doesn't work; it doesn't give me a plan using the secondary index I expect.
    Ideally there would be a way to get the sql:variable(...) function to give the same query plan as a literal. Not sure why that isn't the default behavior.
     DECLARE @p_ObjectIdentifierForReference varchar(500);
     SET @p_ObjectIdentifierForReference = 'ord/p/ord0616.p';
     WITH XMLNAMESPACES ('uri:schemas-progress-com:XREFD:0004' as D)
     SELECT
          XREF_FileDataReference.XREF_FileData AS XrefFileData,
          InnerRowNode.value('/D:Reference[1]/D:File-num[1]', 'int') AS FileNumber,
          InnerRowNode.value('/D:Reference[1]/D:Line-num[1]', 'int') AS LineNumber
     FROM (
          SELECT
               XREF.XREF_FileData.XREF_FileData,
               XREF.XREF_FileData.XREF_FileEntry,
               InnerRow.query('.') AS InnerRowNode
          FROM XREF.XREF_FileData
          OUTER APPLY DataXref.nodes('/D:Cross-reference/D:Source/D:Reference[@Object-identifier = sql:variable("@p_ObjectIdentifierForReference") and @Reference-type = "RUN"]') as T(InnerRow)
          WHERE DataXref.exist('/D:Cross-reference/D:Source/D:Reference[@Object-identifier = sql:variable("@p_ObjectIdentifierForReference") and @Reference-type = "RUN"]') = 1
     ) AS XREF_FileDataReference
     INNER JOIN XREF.XREF_MemberBuilt ON XREF_MemberBuilt.XREF_FileData = XREF_FileDataReference.XREF_FileData
     INNER JOIN XREF.XREF_FileEntry ON XREF_FileEntry.XREF_FileEntry = XREF_FileDataReference.XREF_FileEntry
     WHERE XREF_MemberBuilt.XREF_ProjectBuilt = 69
     OPTION( RECOMPILE, OPTIMIZE FOR (@p_ObjectIdentifierForReference = 'ord/p/ord0616.p') )

    I tried to create a "repro" of your query so we can work on it and try and improve it, but I got the best results by just adding text() and [1] to it, eg
    SELECT
    XREF_FileDataReference.XREF_FileData AS XrefFileData,
    InnerRowNode.value('(/D:Reference/D:File-num/text())[1]', 'int') AS FileNumber,
    InnerRowNode.value('(/D:Reference/D:Line-num/text())[1]', 'int') AS LineNumber
    FROM (
    In my main repro, even with a large piece of xml with 100,000 elements, there still wasn't much difference between the queries:
    USE tempdb
    GO
    IF NOT EXISTS ( SELECT * FROM sys.schemas WHERE name = 'XREF' )
    EXEC( 'CREATE SCHEMA XREF' )
    GO
    IF OBJECT_ID('XREF.XREF_FileData') IS NOT NULL DROP TABLE XREF.XREF_FileData
     CREATE TABLE XREF.XREF_FileData (
          rowId INT IDENTITY,
          DataXref XML,
          XREF_FileData INT,
          XREF_FileEntry INT,
          CONSTRAINT PK_XREF_FileData PRIMARY KEY ( rowId )
     )
     GO
     IF OBJECT_ID('XREF.XREF_MemberBuilt') IS NOT NULL DROP TABLE XREF.XREF_MemberBuilt
     CREATE TABLE XREF.XREF_MemberBuilt (
          XREF_ProjectBuilt INT,
          XREF_FileData INT
     )
     GO
     IF OBJECT_ID('XREF.XREF_FileEntry') IS NOT NULL DROP TABLE XREF.XREF_FileEntry
     CREATE TABLE XREF.XREF_FileEntry (
          XREF_FileEntry INT
     )
     GO
     -- Create larger piece of xml for repro
     ;WITH XMLNAMESPACES ( DEFAULT 'uri:schemas-progress-com:XREFD:0004' ), cte AS (
          SELECT TOP 100000 ROW_NUMBER() OVER ( ORDER BY ( SELECT 1 ) ) rn
          FROM master.sys.columns c1
          CROSS JOIN master.sys.columns c2
          CROSS JOIN master.sys.columns c3
     )
     INSERT INTO XREF.XREF_FileData ( DataXref, XREF_FileData, XREF_FileEntry )
     SELECT
          (
               SELECT
                    CASE rn WHEN 9999 THEN 'ord/p/ord0616.p' ELSE CAST( rn AS VARCHAR(20) ) END AS "@Object-identifier",
                    'RUN' AS "@Reference-type",
                    (
                         SELECT
                              rn AS "File-num",
                              rn * 10 AS "Line-num"
                         FOR XML PATH(''), TYPE
                    ) AS "*"
               FROM cte
               FOR XML PATH('Reference'), ROOT('Source'), TYPE
          ).query('<Cross-reference xmlns="uri:schemas-progress-com:XREFD:0004">{.}</Cross-reference>'),
          1, 100
     INSERT INTO XREF.XREF_FileEntry ( XREF_FileEntry )
     VALUES ( 100 )
     INSERT INTO XREF.XREF_MemberBuilt ( XREF_ProjectBuilt, XREF_FileData )
     VALUES ( 69, 1 )
    GO
    --SELECT * FROM XREF.XREF_FileData
    --SELECT * FROM XREF.XREF_FileEntry
    --SELECT * FROM XREF.XREF_MemberBuilt
    --GO
    -- Add primary XML index
    CREATE PRIMARY XML INDEX xidx_XREF_FileData ON XREF.XREF_FileData (DataXref)
    GO
    -- Add value, property and path xml indexes
    CREATE XML INDEX xvalidx_XREF_FileData ON XREF.XREF_FileData (DataXref)
    USING XML INDEX xidx_XREF_FileData FOR VALUE
    CREATE XML INDEX xpthidx_XREF_FileData ON XREF.XREF_FileData (DataXref)
    USING XML INDEX xidx_XREF_FileData FOR PATH
    CREATE XML INDEX xprpidx_XREF_FileData ON XREF.XREF_FileData (DataXref)
    USING XML INDEX xidx_XREF_FileData FOR PROPERTY
    GO
    :exit
    DBCC DROPCLEANBUFFERS
    DBCC FREEPROCCACHE
    GO
    DECLARE @p_ObjectIdentifierForReference varchar(500);
    SET @p_ObjectIdentifierForReference = 'ord/p/ord0616.p';
    ;WITH XMLNAMESPACES ('uri:schemas-progress-com:XREFD:0004' as D)
    SELECT
    XREF_FileDataReference.XREF_FileData AS XrefFileData,
    InnerRowNode.value('/D:Reference[1]/D:File-num[1]', 'int') AS FileNumber,
    InnerRowNode.value('/D:Reference[1]/D:Line-num[1]', 'int') AS LineNumber
    FROM (
    SELECT
    XREF.XREF_FileData.XREF_FileData,
    XREF.XREF_FileData.XREF_FileEntry,
    InnerRow.query('.') AS InnerRowNode
    FROM XREF.XREF_FileData
    OUTER APPLY DataXref.nodes('/D:Cross-reference/D:Source/D:Reference[@Object-identifier = sql:variable("@p_ObjectIdentifierForReference") and @Reference-type = "RUN"]') as T(InnerRow)
    WHERE DataXref.exist('/D:Cross-reference/D:Source/D:Reference[@Object-identifier = sql:variable("@p_ObjectIdentifierForReference") and @Reference-type = "RUN"]') = 1
    ) AS XREF_FileDataReference
    INNER JOIN XREF.XREF_MemberBuilt ON XREF_MemberBuilt.XREF_FileData = XREF_FileDataReference.XREF_FileData
    INNER JOIN XREF.XREF_FileEntry ON XREF_FileEntry.XREF_FileEntry = XREF_FileDataReference.XREF_FileEntry
    WHERE XREF_MemberBuilt.XREF_ProjectBuilt = 69
    OPTION( RECOMPILE, OPTIMIZE FOR (@p_ObjectIdentifierForReference = 'ord/p/ord0616.p') )
    GO
    DBCC DROPCLEANBUFFERS
    DBCC FREEPROCCACHE
    GO
    DECLARE @p_ObjectIdentifierForReference varchar(500);
    SET @p_ObjectIdentifierForReference = 'ord/p/ord0616.p';
    ;WITH XMLNAMESPACES ('uri:schemas-progress-com:XREFD:0004' as D)
    SELECT
    XREF_FileDataReference.XREF_FileData AS XrefFileData,
    InnerRowNode.value('(/D:Reference/D:File-num/text())[1]', 'int') AS FileNumber,
    InnerRowNode.value('(/D:Reference/D:Line-num/text())[1]', 'int') AS LineNumber
    FROM (
    SELECT
    XREF.XREF_FileData.XREF_FileData,
    XREF.XREF_FileData.XREF_FileEntry,
    InnerRow.query('.') AS InnerRowNode
    FROM XREF.XREF_FileData
    OUTER APPLY DataXref.nodes('/D:Cross-reference/D:Source/D:Reference[@Object-identifier = sql:variable("@p_ObjectIdentifierForReference") and @Reference-type = "RUN"]') as T(InnerRow)
    WHERE DataXref.exist('/D:Cross-reference/D:Source/D:Reference[@Object-identifier = sql:variable("@p_ObjectIdentifierForReference") and @Reference-type = "RUN"]') = 1
    ) AS XREF_FileDataReference
    INNER JOIN XREF.XREF_MemberBuilt ON XREF_MemberBuilt.XREF_FileData = XREF_FileDataReference.XREF_FileData
    INNER JOIN XREF.XREF_FileEntry ON XREF_FileEntry.XREF_FileEntry = XREF_FileDataReference.XREF_FileEntry
    WHERE XREF_MemberBuilt.XREF_ProjectBuilt = 69
    OPTION( RECOMPILE, OPTIMIZE FOR (@p_ObjectIdentifierForReference = 'ord/p/ord0616.p') )
    GO
    So I guess I'm saying I cannot reproduce your problem on SQL 2008 R2 or SQL 2012.  Does anything about this repro stand out as different from your situation?
    Looking at your query I would say you might consider the following:
     - are you really seeing big differences in query duration?
     - pretty much ignore estimated plan costs for xml queries
     - consider breaking it up, e.g. carve off the xml then do the joins; if poor cardinality estimation is part of the problem this might help
     - understand what PATH, PROPERTY and VALUE are for, then only create the ones you need; do you really have the range of queries that requires all three? This is still a great article on xml indexes: http://technet.microsoft.com/en-us/library/ms191497.aspx
     - what's performance like with the primary xml index only?
     - if performance is that important, consider materialising the columns permanently
     - I think the buffer_descriptors stuff is a distraction; mostly your cache is warm, right?
     - plan forcing could be a last resort
     - Selective XML indexes in SQL 2012 onwards are great : ) much less storage required, for example, but much more specific

  • Query performance decreases dramatically after adding 0AGE characteristic

    Hi gurus,
    I've added 0AGE as a free characteristic to a query based on the 0PAPA_C02 cube, and query performance drops dramatically.
    In transaction RSRT I have compared statistics for two identical queries, one with 0AGE and one without it.
    Total DBTRANS changed from 1,528 to 284,342.
    Total DBSEL didn't change.
    "OLAP: Data Selection" changed from 0.207267 to 232.879858.
    There is also a new line, OLAP: USER_EXIT, with a long duration (289.386668) and a counter of 6,840,863.
    And I am not using any user exit variable, or anything...
    What may be wrong and how can I fix that?
    Regards,
    Dorota

    Found the answer.
    There is a BAPI for 0PAPA_C01 InfoCube which makes 0AGE a virtual characteristic.
    It needs to be deactivated.
    The BAPI name is RS_BCT_PA.

  • Missed and duplicate events with Continuous Query Cache

    We have seen missed events and duplicate events when we register to receive events (using a Continuous Query Cache) on an entry in the cache while the entry is being updated.
    Use case:
    Start a Node
    Start a Proxy
    Start Extend Client
    Implementation of the Extend Client
    Create Cache
    Add Entry to Cache
     Initiate Thread 1 {
          For each (1 to 30) {
               Run Update Entry Processor on the cache entry; the Entry Processor increments the cache entry value by 1
          }
     }
     Initiate Thread 2 {
          Wait until the cache entry has been updated 10 times
          Create MAP Listener {
               For Entry Insert Event {
                    Print event
                    Set Initial value = new value
               }
               For Entry Update Event {
                    Print event
                    Set Update value += 1
               }
          }
          Initiate Continuous Query Cache (cache, Always Filter, MAP Listener)
     }
     Start Thread 1
     Start Thread 2
     Wait until Thread 1 and Thread 2 have terminated
     Expected Result = read the value of the entry from cache
     Actual Result = Initial value + Update value
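     For illustration, a condensed Java sketch of the client logic described above (an entry processor that increments the value, plus a CQC listener registered while the updates are in flight). The cache name and key are hypothetical, and the POF/serialization setup used in the real test is omitted:
     import com.tangosol.net.CacheFactory;
     import com.tangosol.net.NamedCache;
     import com.tangosol.net.cache.ContinuousQueryCache;
     import com.tangosol.util.InvocableMap;
     import com.tangosol.util.MapEvent;
     import com.tangosol.util.MapListener;
     import com.tangosol.util.filter.AlwaysFilter;
     import com.tangosol.util.processor.AbstractProcessor;
     import java.util.concurrent.atomic.AtomicLong;

     public class CqcEventTest
     {
         /** Entry processor that increments the (Long) entry value by one. */
         public static class IncrementProcessor extends AbstractProcessor
         {
             public Object process(InvocableMap.Entry entry)
             {
                 long value = ((Long) entry.getValue()).longValue() + 1;
                 entry.setValue(Long.valueOf(value));
                 return Long.valueOf(value);
             }
         }

         public static void main(String[] args) throws InterruptedException
         {
             final NamedCache cache = CacheFactory.getCache("dist-test"); // hypothetical cache name
             cache.put(Integer.valueOf(1), Long.valueOf(0));

             final AtomicLong initial = new AtomicLong();
             final AtomicLong updates = new AtomicLong();

             MapListener listener = new MapListener()
             {
                 public void entryInserted(MapEvent evt)
                 {
                     System.out.println(evt);
                     initial.set(((Long) evt.getNewValue()).longValue());
                 }
                 public void entryUpdated(MapEvent evt)
                 {
                     System.out.println(evt);
                     updates.incrementAndGet();
                 }
                 public void entryDeleted(MapEvent evt) { }
             };

             // thread 1: apply 30 increments through the entry processor
             Thread updater = new Thread(new Runnable()
             {
                 public void run()
                 {
                     for (int i = 0; i < 30; i++)
                     {
                         cache.invoke(Integer.valueOf(1), new IncrementProcessor());
                     }
                 }
             });
             updater.start();

             // thread 2 equivalent: register the CQC while updates are in flight
             // (the original test waits for about 10 updates before registering)
             new ContinuousQueryCache(cache, new AlwaysFilter(), listener);

             updater.join();
             Thread.sleep(2000); // allow trailing events to arrive over Extend

             System.out.println("expected = " + cache.get(Integer.valueOf(1)));
             System.out.println("actual   = " + (initial.get() + updates.get()));
             CacheFactory.shutdown();
         }
     }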
     Results we have seen in two tests:
     Test 1: Expected Result > Actual Result: missing events
    Entry Inserted Event: MapEvent{ContinuousQueryCache inserted: key=1, value=UpdateObject [intNumber=1, longNumber=13]}
     Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=13], new value=UpdateObject [intNumber=1, longNumber=15]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=15], new value=UpdateObject [intNumber=1, longNumber=16]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=16], new value=UpdateObject [intNumber=1, longNumber=17]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=17], new value=UpdateObject [intNumber=1, longNumber=18]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=18], new value=UpdateObject [intNumber=1, longNumber=19]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=19], new value=UpdateObject [intNumber=1, longNumber=20]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=20], new value=UpdateObject [intNumber=1, longNumber=21]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=21], new value=UpdateObject [intNumber=1, longNumber=22]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=22], new value=UpdateObject [intNumber=1, longNumber=23]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=23], new value=UpdateObject [intNumber=1, longNumber=24]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=24], new value=UpdateObject [intNumber=1, longNumber=25]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=25], new value=UpdateObject [intNumber=1, longNumber=26]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=26], new value=UpdateObject [intNumber=1, longNumber=27]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=27], new value=UpdateObject [intNumber=1, longNumber=28]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=28], new value=UpdateObject [intNumber=1, longNumber=29]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=29], new value=UpdateObject [intNumber=1, longNumber=30]}
    Expected Result: 30
    Actual Result: 29
     Issue: The event for the 14th update was not sent.
    Test 2: Expected Result < Actual Result: Duplicate events
    Entry Inserted Event: MapEvent{ContinuousQueryCache inserted: key=1, value=UpdateObject [intNumber=1, longNumber=13]}
     Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=13], new value=UpdateObject [intNumber=1, longNumber=13]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=13], new value=UpdateObject [intNumber=1, longNumber=14]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=14], new value=UpdateObject [intNumber=1, longNumber=15]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=15], new value=UpdateObject [intNumber=1, longNumber=16]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=16], new value=UpdateObject [intNumber=1, longNumber=17]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=17], new value=UpdateObject [intNumber=1, longNumber=18]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=18], new value=UpdateObject [intNumber=1, longNumber=19]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=19], new value=UpdateObject [intNumber=1, longNumber=20]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=20], new value=UpdateObject [intNumber=1, longNumber=21]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=21], new value=UpdateObject [intNumber=1, longNumber=22]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=22], new value=UpdateObject [intNumber=1, longNumber=23]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=23], new value=UpdateObject [intNumber=1, longNumber=24]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=24], new value=UpdateObject [intNumber=1, longNumber=25]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=25], new value=UpdateObject [intNumber=1, longNumber=26]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=26], new value=UpdateObject [intNumber=1, longNumber=27]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=27], new value=UpdateObject [intNumber=1, longNumber=28]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=28], new value=UpdateObject [intNumber=1, longNumber=29]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=29], new value=UpdateObject [intNumber=1, longNumber=30]}
    Expected Result: 30
    Actual Result: 31
     Issue: The 13th update was delivered as both an Insert event and an Update event.
    reg
    Dasun.

    Hi Paul,
    I tested with 3.7.1.4 and 3.7.1.5. In both versions I can see the issue.
    reg
    Dasun.

  • How to improve query performance at the report level and designer level

    How can I improve query performance at the report level and at the designer level?
    Please explain in detail.

    First, it's all based on the design of the database, the universe, and the report.
    At the universe level, you have to check your contexts very well to get the optimal performance of the universe, and also your joins; keeping your joins on key fields will give you the best performance.
    At the report level, try to make the reports as dynamic as you can (parameters and so on).
    And when you create a parameter, try to match it with the key fields in the database.
    Good luck
    Amr

  • Query performance and data loading performance issues

    What are the query performance issues we need to take care of? Please explain and let me know the T-codes. This is urgent.
    What are the data loading performance issues we need to take care of? Please explain and let me know the T-codes.
    Will reward full points.
    Regards,
    Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8). Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9). Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much of a help when accessing data. In this case it is better to create secondary indexes with selection fields on the associated table using ABAP Dictionary to improve better selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
    Hope it Helps
    Chetan
    @CP..

  • Weblogic 8.1.6 and Oracle 9.2.0.8 - query performance

    Folks,
    We are upgrading WebLogic from 8.1.5 to 8.1.6 and Oracle from 9.2.0.6 to 9.2.0.8. We use the Oracle thin client driver for 9.2.0.8 to connect from the application to Oracle.
    When we use the following combination of the stack we see SQL query performance degradation: -
    Oracle 9.2.0.8 database, Oracle 9.2.0.8 driver, WL 8.1.6
    Oracle 9.2.0.8 database, Oracle 9.2.0.1 driver, WL 8.1.6
    We do not see the degradation in case of the following: -
    Oracle 9.2.0.8 database, Oracle 9.2.0.1 driver, WL 8.1.5
    Oracle 9.2.0.6 database, Oracle 9.2.0.1 driver, WL 8.1.5
    This suggests that the problem could be with the WL 8.1.6 version, and I was wondering if any of you have faced this before. The query retrieves a set of data from Oracle, none of which contains the AsciiStream data type, which is noted as a problem in WL 8.1.6, but only for the WebLogic JDBC drivers.
    Any ideas appreciated.


  • Select query and Insert statement performance

    Hi all,
    Can anyone please guide me on the problems below?
    1) A simple INSERT statement runs very slowly. What might be the reason? It is a simple table without any LOBs, LONGs, or the like, and everything else in the DB works fine.
    2) A SELECT statement runs very slowly. It selects all records (around 1000) from a table. How can I improve its performance?
    3) Which columns in the master and detail tables should be indexed to improve query performance on them?
    Many Thanks
    Regards
    sandeep

    To get an answer to your questions, you have to post some information about your system:
    1. operating system
    2. RAM
    3. oracle version
    4. init.ora
    Thomas

  • Can an index that is not being used still affect query performance?

    Hi, I have a query with a high cost, so I created two indexes, A and B, to improve its performance.
    After creating the indexes, I reviewed the execution plan of the query and the cost had been reduced, but I noticed that index B is not being used,
    and if I try to force the query to use index B with a HINT the cost increases, so I decided to drop index B.
    Once I dropped index B I checked the execution plan again and noticed that the cost of the query had increased; if I recreate index B the explain plan
    shows a lower cost even though the index is not used by the execution plan.
    Does anyone know why this is happening?
    Can an index that is not used by the execution plan still affect query performance?

    user11173393 wrote:
    Can an index that is not used by the execution plan still affect query performance?
    You said that is what is happening, and I believe you.

  • Impact of real time cube on query performance and OLAP cache

    Hi:
    We have actual and plan cubes, both set up as real-time cubes (only the plan cube is being planned against, not the actual cube), and both cubes are compressed once a day.
    We are planning to implement the BIA accelerator and have questions related to query performance optimization:
    1/ Are there any query performance benefits in changing the actual cube to a basic cube (using program SAP_CONVERT_NORMAL_TRANS) if the F table is fully compressed?
    2/ Can the OLAP cache be leveraged for queries run against the real-time cubes, e.g. the actual cube?
    3/ What is the impact on BIA of having the actual cube as real-time (whether or not data is being loaded/planned during the day in that cube)?
    Thank you in advance,
    Catherine

    1) Are there any query performance benefits in changing the actual cube to a basic cube (using program SAP_CONVERT_NORMAL_TRANS) if the F table is fully compressed?
    From the performance point of view, the actual cubes, i.e. standard cubes, are relatively better.
    2) Yes, the OLAP cache can be leveraged for bringing up the plan query, but all the calculations are done in the planning buffer.
    3) Not sure.

  • Query performance and Transport

    Guys (those of you who already work in the field may be able to answer this question better):
    Let's say an end user complains about a certain query's performance, and the developer ends up creating aggregates on the cube. Would he do all of this on the dev box and then transport to QA and on to the production environment, or directly in the production environment without needing to transport anything? Any kind of documentation would be helpful for this.
    Thanks,
    RG

    Aggregates are created directly in production.
    We create aggregates based on query statistics, which are based on the data read by the query and the various run times.
    Since this is based on data in prod, we develop directly in prod; dev won't have any data, so you don't have any basis for creating the aggregate. Even if you use statistics data from prod to create the aggregate in dev, you cannot verify the performance improvement from the aggregate in dev, because it won't have the data prod has.

  • How do index fragmentation and statistics affect SQL query performance?

    Hi,
    How do index fragmentation and statistics affect SQL query performance?
    Thanks
    Shashikala

    How do index fragmentation and statistics affect SQL query performance?
    Very simple answer: outdated statistics will lead the optimizer to create bad plans, which in turn will require more resources, and this will impact performance. If an index is fragmented (mainly the clustered index, though it holds true for nonclustered indexes as well), the time spent finding a value will be greater, because the query has to search the fragmented index to look for the data, and the additional space increases search time.

  • Fuzzy searching and concatenated datastore query performance problems.

    I am using the concatenated datastore and indexing two columns.
    The query I am executing includes an exact match on one column and a fuzzy match on the second column.
    When I execute the query, performance should improve as the exact-match column is set to return fewer values.
    This is the case when we execute an exact match search on both columns.
    However, when one column is an exact match and the second column is a fuzzy match, this is not the case.
    Is this normal processing, and why? Is this a bug?
    If you need more information please let me know.
    We are under a deadline and this is our final road block.
    TIA
    Colleen GEislinger

    I see that you have posted the message in the Oracle text forum, good! You should get a better, more timely answer there.
    Larry

  • Structures Vs RKFs and CKFs In Query performance

    Hi Gurus,
    I am creating a GL query that returns a couple of key figures and some calculations across different GL accounts, and I wanted to know which will be more beneficial: creating restricted key figures and calculated key figures, or just using a structure for all the selections and formula calculations?
    Which option will be better for query performance?
    Thanks in advance

    As compared to formulas that are evaluated during query execution, calculated key figures are pre-calculated and their definitions are stored in the metadata repository for reuse in queries. The incorporation of business metrics and key performance indicators as calculated key figures, such as gross profit and return on investment (which are frequently used, widely understood, and rarely changed), improve query performance and ensure that calculated key figures are reported consistently by different users. Note that this approach improves query runtime performance but slows InfoCube or ODS object update time. As a rule of thumb, if multiple and frequently used queries use the same formula to compute calculated fields, use calculated key figures instead of formulas.
    RKFs result in additional database processing and complexity in retrieving the query result and therefore should be avoided when possible.
