Is a continuous query aggregator possible?

How can I make something like this a continuous query so I'm constantly aware of the count of working Orders in the orderCache?
Filter f2 = new EqualsFilter("isWorking", true);
Integer workingOrdersCount = (Integer) orderCache.aggregate(f2, new com.tangosol.util.aggregator.Count());
Thanks,
Andrew

Hi Andrew,
We do not have a continuous aggregator at this time. The specific use case you describe can be accomplished by maintaining a continuous query cache (CQC) and calling its size() method. The resource consumption of the CQC can be kept to a minimum by configuring it to cache only the matching keys and not to hold the values.
thanks,
mark
Coherence Development Team
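A minimal sketch of the keys-only CQC approach Mark describes, assuming an "orders" cache and the same isWorking filter as in the question (the names are illustrative):
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.net.cache.ContinuousQueryCache;
import com.tangosol.util.filter.EqualsFilter;

public class WorkingOrderCounter {
    public static void main(String[] args) {
        NamedCache orderCache = CacheFactory.getCache("orders");
        // fCacheValues = false: the CQC keeps only the matching keys locally,
        // so its memory footprint stays small while size() stays accurate.
        ContinuousQueryCache cqc = new ContinuousQueryCache(orderCache,
                new EqualsFilter("isWorking", Boolean.TRUE), false);
        // Continuously maintained count of working orders; no repeated aggregation needed.
        System.out.println("Working orders: " + cqc.size());
    }
}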

Similar Messages

  • Coherence-Extend and Continuous Query performance

    Hi,
    I am trying to evaluate the performance impact of continuous queries when using Coherence Extend (TCP). The idea is that desktop clients will be running continuous queries against a cluster, and other processes will be updating the data in that cluster. The clients themselves take a purely read-only view of the data.
    In my tests, I find that the updater process takes about 250ms to update 5000 values in the cache (using a putAll operation). When I have a continuous query running against a remote cache, linked with coherence extend, the update time increases to about 1500ms. This is not CPU bound.
    Is this what people would expect?
    If so this raises questions to me about:
    1) slow subscribers - what if one of my clients is very badly behaved? Can I detect this and/or take action?
    2) conflation of updates - can Coherence do conflation?
    3) can I get control to send object deltas over the wire rather than entire objects?
    Is this a use case for which Coherence Extend and continuous queries were designed?
    Robert

    Yes, it is certainly possible, although depending on your requirements it may involve more or less additional coding. You have a few choices. For example, since you have a CQC on the cache, you could conceivably aggregate locally (on any event). In other words, since all the data are local, there is no need to do the parallel aggregation (unless it is CPU limited). Depending on the aggregation, you may only have to recalculate part of it.
    You can access the internal data structure (Map) within the CQC as follows:
    Map map = cqc.getInternalCache();
    // now we can do the aggregation locally over the CQC's contents
    NamedCache cache = new WrapperNamedCache(map, "local");
    cache.aggregate(..);
    More complex approaches would only recalculate portions based on the event, or (depending on the function) might use the event to adjust the aggregated results.
    Peace,
    Cameron Purdy | Oracle Coherence
    http://coherence.oracle.com/
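    For the count case in the original question, the "use the event to adjust the aggregated results" idea can be as simple as a listener that bumps a counter; a rough sketch, assuming the same orders cache and isWorking accessor as above (not the only way to do it):
    import java.util.concurrent.atomic.AtomicInteger;
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.cache.ContinuousQueryCache;
    import com.tangosol.util.MapEvent;
    import com.tangosol.util.MapListener;
    import com.tangosol.util.filter.EqualsFilter;

    public class WorkingOrderTracker implements MapListener {
        private final AtomicInteger count = new AtomicInteger();
        public void entryInserted(MapEvent evt) { count.incrementAndGet(); } // entry entered the filtered view
        public void entryDeleted(MapEvent evt)  { count.decrementAndGet(); } // entry left the view or was removed
        public void entryUpdated(MapEvent evt)  { /* still matches the filter; count unchanged */ }
        public int getCount() { return count.get(); }

        public static void main(String[] args) {
            WorkingOrderTracker tracker = new WorkingOrderTracker();
            // The CQC delivers insert/delete events as entries enter or leave the filtered
            // view (including the initial population), so the counter tracks the working-order
            // count without ever re-running the aggregation.
            ContinuousQueryCache cqc = new ContinuousQueryCache(CacheFactory.getCache("orders"),
                    new EqualsFilter("isWorking", Boolean.TRUE), tracker);
            System.out.println("Working orders: " + tracker.getCount() + " (view size: " + cqc.size() + ")");
        }
    }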

  • Continuous Query

    Hi,
    Which model does a continuous query follow for fetching results (event driven or polling)? Is there a way to configure how often a continuous query executes? Please provide pointers and recommendations. Is it possible to have dynamic filters for fetching the results? For example, the user wants the last 5 minutes of data locally:
    Calendar date = Calendar.getInstance();
    date.add(Calendar.MINUTE, -5);
    ValueExtractor valueExtractor = new ReflectionExtractor("getTime");
    Filter filter = new BetweenFilter(valueExtractor, date.getTime(), Calendar.getInstance().getTime());
    NamedCache continuousQueryCache = new ContinuousQueryCache(namedCache, filter, false);
    In the above example, the Coherence cluster stores data for 3 hours, flowing in continuously, but the user needs only the last 5 minutes of data locally available at any point in time. Since the filter does not change dynamically, the ContinuousQueryCache only holds the 5 minutes of data from the time the filter was instantiated. How can this be achieved, if not with a continuous query?
    Regards,
    Neeraj

    Why a single threaded model? There's a thread that's receiving data into the CQC, and then the listener thread that gets fired when that object is added. This will, in turn, add the object to your local timestamped collection. There'll be another "cleaner" thread that gets kicked off every little while to delete objects from the collection, and then there'll be the thread from your application that actually reads the keys and determines which value objects to get (assuming everything else just uses keys to keep the load low). But I think your gut feel that this is a large load is probably correct - 70000 events a second is a high number, in my experience.
    I wonder if you're going about this the right way!
    Your current approach uses the grid simply as storage. When you need to display the graph you're pulling all the data to the client, working on all the data to do whatever is needed to compose the graph, and then displaying the graph. Is my understanding correct?
    If I'm correct then you're making virtually no use of the fact the cluster has many (potentially very many) processors that can operate in parallel. If you could get them to do much of the work for you then your client can be much more lightly loaded.
    Is it possible to do the statistical calculations in the cluster itself, splitting the problem into parallel operations, and then use a distributed query to get the pieces (a much smaller set of objects) and simply assemble them on the client? This model works if you can partition the objects by some simple value - time, let's say - so that node A processes all the objects in the even minutes and node B processes all the objects in the odd minutes (say each minute you average some value from those objects and store those minute aggregates). Now the client can simply do a distributed query across node A and node B to pull back the already calculated values - there are fewer of them AND the work has already been done in the cluster, so the client is less heavily loaded. Furthermore, if you find you can't keep up, simply adding new nodes and ensuring the work is split in parallel across them lets you scale your performance up until the system works at the speeds you need.
    Hope that helps.
    Toby
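    A rough sketch of the in-cluster, parallel aggregation Toby describes, assuming the cached objects expose getMinuteBucket() and getValue() accessors and live in a cache named "ticks" (all of these names are illustrative, not from the thread):
    import java.util.Map;
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.aggregator.DoubleAverage;
    import com.tangosol.util.aggregator.GroupAggregator;
    import com.tangosol.util.extractor.ReflectionExtractor;
    import com.tangosol.util.filter.AlwaysFilter;

    public class MinuteAverages {
        public static void main(String[] args) {
            NamedCache cache = CacheFactory.getCache("ticks");
            // Group by minute bucket and average a numeric field; the work runs in
            // parallel on the storage nodes, and only the small result map comes back.
            Map results = (Map) cache.aggregate(
                    new AlwaysFilter(),
                    GroupAggregator.createInstance(
                            new ReflectionExtractor("getMinuteBucket"),
                            new DoubleAverage("getValue")));
            System.out.println("Per-minute averages: " + results);
        }
    }
    The client then assembles or charts the per-minute results instead of pulling every raw object across the wire.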

  • VO Auto Refresh error - Unsupported query for Continuous Query Notification

    I created a read-only view object and set the auto refresh property to true. According to the documentation, that is all that needs to be done to enable auto-refresh for a view instance of a shared application module (which I am using to retrieve reference data). When I run a test, I get the following error:
    [371] DCBindingContainer.reportException :oracle.jbo.SQLStmtException
    [372] oracle.jbo.SQLStmtException: JBO-27122: SQL error during statement preparation. Statement: SELECT
    CCIPO_REF_SENSITIVITY_LEVEL.SENSITIVITY_ID SENSITIVITY_ID,
    CCIPO_REF_SENSITIVITY_LEVEL.SENSITIVITY_DESC SENSITIVITY_DESC
    FROM
    CCIPO_REF_SENSITIVITY_LEVEL
         at oracle.jbo.server.BaseSQLBuilderImpl.processException(BaseSQLBuilderImpl.java:3721)
         at oracle.jbo.server.OracleSQLBuilderImpl.processException(OracleSQLBuilderImpl.java:4722)
         at oracle.jbo.server.QueryCollection.buildResultSet(QueryCollection.java:1386)
         at oracle.jbo.server.QueryCollection.executeQuery(QueryCollection.java:928)
         at oracle.jbo.server.ViewObjectImpl.executeQueryForCollection(ViewObjectImpl.java:6968)
         at oracle.jbo.server.ViewRowSetImpl.execute(ViewRowSetImpl.java:1183)
         at oracle.jbo.server.ViewRowSetImpl.execute(ViewRowSetImpl.java:1037)
         at oracle.jbo.server.ViewRowSetIteratorImpl.ensureRefreshed(ViewRowSetIteratorImpl.java:2774)
         at oracle.jbo.server.ViewRowSetIteratorImpl.ensureRefreshed(ViewRowSetIteratorImpl.java:2751)
         at oracle.jbo.server.ViewRowSetIteratorImpl.first(ViewRowSetIteratorImpl.java:1580)
         at oracle.jbo.server.ViewRowSetImpl.first(ViewRowSetImpl.java:3500)
         at oracle.jbo.server.ViewObjectImpl.first(ViewObjectImpl.java:9943)
         at oracle.adf.model.binding.DCIteratorBinding.setupRSIstate(DCIteratorBinding.java:779)
         at oracle.adf.model.binding.DCIteratorBinding.refreshControl(DCIteratorBinding.java:679)
         at oracle.jbo.uicli.binding.JUIteratorBinding.refreshControl(JUIteratorBinding.java:474)
         at oracle.adf.model.binding.DCIteratorBinding.refresh(DCIteratorBinding.java:4437)
         at oracle.adf.model.binding.DCExecutableBinding.refreshIfNeeded(DCExecutableBinding.java:347)
         at oracle.adf.model.binding.DCIteratorBinding.getRowSetIterator(DCIteratorBinding.java:1605)
         at oracle.jbo.jbotester.panel.BindingPanel.setBindingContext(BindingPanel.java:116)
         at oracle.jbo.jbotester.panel.BindingPanel.<init>(BindingPanel.java:88)
         at oracle.jbo.jbotester.panel.BindingPanel.<init>(BindingPanel.java:71)
         at oracle.jbo.jbotester.form.BindingForm.createMasterPanel(BindingForm.java:63)
         at oracle.jbo.jbotester.form.BindingForm.init(BindingForm.java:98)
         at oracle.jbo.jbotester.form.JTForm.<init>(JTForm.java:72)
         at oracle.jbo.jbotester.form.BindingForm.<init>(BindingForm.java:50)
         at oracle.jbo.jbotester.form.FormType$1.createForm(FormType.java:63)
         at oracle.jbo.jbotester.form.FormType.createForm(FormType.java:199)
         at oracle.jbo.jbotester.form.FormType.createTab(FormType.java:270)
         at oracle.jbo.jbotester.form.FormType.showForm(FormType.java:248)
         at oracle.jbo.jbotester.form.FormType.showForm(FormType.java:207)
         at oracle.jbo.jbotester.form.FormType.showForm(FormType.java:203)
         at oracle.jbo.jbotester.tree.ObjTreeNode.showForm(ObjTreeNode.java:140)
         at oracle.jbo.jbotester.tree.ObjTreeNode.showForm(ObjTreeNode.java:123)
         at oracle.jbo.jbotester.tree.Tree.processTreeMouseClicked(Tree.java:728)
         at oracle.jbo.jbotester.tree.Tree.access$100(Tree.java:96)
         at oracle.jbo.jbotester.tree.Tree$TreeMouseListener.mouseClicked(Tree.java:141)
         at java.awt.AWTEventMulticaster.mouseClicked(AWTEventMulticaster.java:253)
         at java.awt.Component.processMouseEvent(Component.java:6292)
         at javax.swing.JComponent.processMouseEvent(JComponent.java:3267)
         at java.awt.Component.processEvent(Component.java:6054)
         at java.awt.Container.processEvent(Container.java:2041)
         at java.awt.Component.dispatchEventImpl(Component.java:4652)
         at java.awt.Container.dispatchEventImpl(Container.java:2099)
         at java.awt.Component.dispatchEvent(Component.java:4482)
         at java.awt.LightweightDispatcher.retargetMouseEvent(Container.java:4577)
         at java.awt.LightweightDispatcher.processMouseEvent(Container.java:4247)
         at java.awt.LightweightDispatcher.dispatchEvent(Container.java:4168)
         at java.awt.Container.dispatchEventImpl(Container.java:2085)
         at java.awt.Window.dispatchEventImpl(Window.java:2478)
         at java.awt.Component.dispatchEvent(Component.java:4482)
         at java.awt.EventQueue.dispatchEventImpl(EventQueue.java:644)
         at java.awt.EventQueue.access$000(EventQueue.java:85)
         at java.awt.EventQueue$1.run(EventQueue.java:603)
         at java.awt.EventQueue$1.run(EventQueue.java:601)
         at java.security.AccessController.doPrivileged(Native Method)
         at java.security.AccessControlContext$1.doIntersectionPrivilege(AccessControlContext.java:87)
         at java.security.AccessControlContext$1.doIntersectionPrivilege(AccessControlContext.java:98)
         at java.awt.EventQueue$2.run(EventQueue.java:617)
         at java.awt.EventQueue$2.run(EventQueue.java:615)
         at java.security.AccessController.doPrivileged(Native Method)
         at java.security.AccessControlContext$1.doIntersectionPrivilege(AccessControlContext.java:87)
         at java.awt.EventQueue.dispatchEvent(EventQueue.java:614)
         at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:269)
         at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:184)
         at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:174)
         at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:169)
         at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:161)
         at java.awt.EventDispatchThread.run(EventDispatchThread.java:122)
    Caused by: java.sql.SQLException: ORA-29983: Unsupported query for Continuous Query Notification
         at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:457)
         at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:405)
         at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:889)
         at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:476)
         at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:204)
         at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:540)
         at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:217)
         at oracle.jdbc.driver.T4CPreparedStatement.executeForDescribe(T4CPreparedStatement.java:924)
         at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:1261)
         at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1419)
         at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3752)
         at oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:3806)
         at oracle.jdbc.driver.OraclePreparedStatementWrapper.executeQuery(OraclePreparedStatementWrapper.java:1667)
         at oracle.jbo.server.QueryCollection.buildResultSet(QueryCollection.java:1262)
         ... 65 more
    The query is selecting two varchar2 columns and according to the criteria in the documentation is valid to be registered for QRCN in Guaranteed Mode. Any thoughts?
    JDev version: JDeveloper 11.1.1.5.0.

    Well, could be a DB bug. I seem to remember that there was a bug in that area in an older version of 11g.
    Or you could use jnettrace to trace the calls made by the driver to the DB. That should show you what's going wrong.
    Sascha

  • Problem in continuous query cache with PofExtractor

    Hi,
    I am creating a CQC with a filter that uses a PofExtractor. When I try to insert any record into the cache it gives me an exception on the server side.
    When I do not use the PofExtractor it works fine.
    If anyone knows about this problem, please help.
    I am using C# for my client application.
    Regards
    Nitin Jain

    Hi JK,
    I have made some changes on my server. Now I am not using any POF definition on the server side. I think that's why it is giving me the error.
    2010-04-13 15:39:07.481/20.593 Oracle Coherence GE 3.5.1/461 <Error> (thread=DistributedCache, member=1): Exception occured during filter evaluation: MapEventFilter(mask=INSERTED|UPDATED_ENTERED|UPDATED_WITHIN, filter=GreaterEqualsFilter(PofExtractor(target=VALUE, navigator=SimplePofPath(indices=0)), Nitin)); removing the filter...
    2010-04-13 15:39:07.481/20.593 Oracle Coherence GE 3.5.1/461 <Error> (thread=DistributedCache, member=1):
    (Wrapped) java.io.StreamCorruptedException: unknown user type: 1001
    at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:261)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$ConverterFromBinary.convert(DistributedCache.CDB:4)
    at com.tangosol.util.ConverterCollections$ConverterMapEvent.getNewValue(ConverterCollections.java:3594)
    at com.tangosol.util.filter.MapEventFilter.evaluate(MapEventFilter.java:172)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage.prepareDispatch(DistributedCache.CDB:82)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage.postInvoke(DistributedCache.CDB:10)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage.put(DistributedCache.CDB:156)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onPutRequest(DistributedCache.CDB:37)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$PutRequest.run(DistributedCache.CDB:1)
    at com.tangosol.coherence.component.net.message.requestMessage.DistributedCacheKeyRequest.onReceived(DistributedCacheKeyRequest.CDB:12)
    at com.tangosol.coherence.component.net.message.requestMessage.distributedCacheKeyRequest.ExtendedKeyRequest.onReceived(ExtendedKeyRequest.CDB:4)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:9)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:136)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onNotify(DistributedCache.CDB:3)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
    at java.lang.Thread.run(Thread.java:595)
    Caused by: java.io.StreamCorruptedException: unknown user type: 1001
    at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3289)
    at com.tangosol.io.pof.PofBufferReader.readObject(PofBufferReader.java:2600)
    at com.tangosol.io.pof.ConfigurablePofContext.deserialize(ConfigurablePofContext.java:348)
    at com.tangosol.util.ExternalizableHelper.deserializeInternal(ExternalizableHelper.java:2673)
    at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:257)
    ... 15 more
    Is there any way to use a continuous query cache without providing the object definition on the server side?
    Regards
    Nitin Jain

  • Continuous Query Caching - Expensive?

    Hello,
    I have had a look at the documentation but I still cannot find a reasonable answer to the following question: how expensive are continuous query caches?
    Is it appropriate to have many of them?
    Is the following example an acceptable usage of Continuous query caching (does it scale?)
    In the context of a web application:
    User logs onto a website
    User performs a "Search" for financial instruments
    A continuous query cache is created with a filter for those instruments returned (say, 50) to listen to price updates.
    If the user pages, or does another search, the query cache is released and a new one, with an updated filter, is created.
    Does it make a difference if we are using the extend client?

    Hi,
    So 100 CQCs is probably not too excessive depending on the configuration of the process instantiating the CQCs and the cluster size etc.
    Each CQC will hold its own set of deserialized keys and values, so yes they are distinct objects, although a CQC of 50 entries would not be very big.
    One query I have - you mention that this is a Web Application but you also mention an Extend client. Is your Web App an Extend client of the main cluster? Is there a reason why you did this? Most people would make a Web App a storage-disabled cluster member so it would perform a bit better. Providing the Web App sits on a server that is very close in network terms to the cluster (i.e. same switch) then I would make it part of the cluster - or is the Web App the thing that is in the "regional environment"?
    If you are running CQCs over Extend then there used to be some issues with this if the Extend connection was lost. AFAIK this is supposed to be fixed in later patches so I would get 3.7.1.8 and make sure you test that the Web App continues to work and properly fails over if you kill its Extend connection. When the CQC fails over it will reinitialize all its data so you will need to cope with that if you are pushing changes based on the CQC.
    JK
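    A rough sketch of the per-search pattern described in this thread - build a CQC scoped to just the instruments the search returned, and release it when the user pages or runs a new search (the cache name and getInstrumentId accessor are illustrative, not from the thread):
    import java.util.Set;
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.net.cache.ContinuousQueryCache;
    import com.tangosol.util.extractor.ReflectionExtractor;
    import com.tangosol.util.filter.InFilter;

    public class PriceView {
        private ContinuousQueryCache cqc;
        // Build a small view over just the instruments on the current results page.
        public void showPage(Set instrumentIds) {
            release(); // drop the previous page's view and its listeners first
            NamedCache prices = CacheFactory.getCache("prices");
            cqc = new ContinuousQueryCache(prices,
                    new InFilter(new ReflectionExtractor("getInstrumentId"), instrumentIds),
                    true); // cache values locally so price reads never touch the wire
        }
        public void release() {
            if (cqc != null) {
                cqc.release(); // unregisters the view's listeners on the cluster/proxy
                cqc = null;
            }
        }
    }
    Each view holds only around 50 deserialized entries, so the per-user cost stays close to what JK describes above.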

  • ORA-29983: Unsupported query for Continuous Query Notification

    Hi, I have a LOV with auto refresh. If I add an "Order by" clause to the LOV query, the following exception is thrown.
    SQL error during statement preparation. Statement: SELECT * FROM (SELECT DISTINCT XXXX FROM YYYY) QRSLT ORDER BY "XXXX"
    Error     ORA-29983: Unsupported query for Continuous Query Notification
    I tried to override the create method of the VO with "this.setNestedSelectForFullSql(false);", but that disables the auto refresh property.
    Could someone help to solve this? Thanks.

    Hi, I looked into the ADF runtime source code and I can see that the connection object's "OracleDatabaseChangeListenerWrapper" class is setting the mode to "BEST_EFFORT".
    I would assume that by default the ADF runtime uses best-effort mode, not guaranteed mode.
    The database documentation also says that in best-effort mode the "order by" clause as well as the keyword "like" can be used for Continuous Query Notification. But I'm still getting "ORA-29983".
    Could someone clarify? Thanks.

  • Unsupported query for Continuous Query Notification

    Hi all,
    I'm following an application which supports ADS(Active Data Service). http://www.consideringred.com/files/oracle/2011/ActiveDataServiceADFBCApp-v0.02.zip
    I built the application and it works well.
    Then I created a ViewCriteria for the same VO and added a search panel through that ViewCriteria.
    Now when I change data in the database it shows the data change (ADS works).
    But when I search for data through the search boxes it gives me the following error:
    java.sql.SQLException: ORA-29983: Unsupported query for Continuous Query Notification
    How can I overcome this error?
    Thanks,
    Dinuka.

    Here it is, Mr. John:
    >
    <ADFLogger> <begin> Execute query
    <ViewObjectImpl> <closeStatementsResetRowSet> [7421] ViewObject: [########.client.views.TestVO]ClientAM.TestVO1 close prepared statements...
    <ViewObjectImpl> <getPreparedStatement> [7422] ViewObject: [########.client.views.TestVO]ClientAM.TestVO1 Created new QUERY statement
    <ViewObjectImpl> <buildQuery> [7423] TestVO1>#q computed SQLStmtBufLen: 2911, actual=2823, storing=2853
    <ViewObjectImpl> <buildQuery> [7424] SELECT * FROM (SELECT
    ClientsEO.ADDED_BY,
    ClientsEO.APPROVAL_STATUS,
    ClientsEO.CHANGED_BY,
    ClientsEO.CLIENT_PREFIX,
    ClientsEO.CLIENT_SUFFIX,
    ClientsEO.CLIENT_TYPE,
    ClientsEO.COUNTRY_OF_RESIDENCE,
    ClientsEO.CUSTODIAN_NUMBERED_ACC,
    ClientsEO.DATE_ADDED,
    ClientsEO.DATE_CHANGED,
    ClientsEO.DATE_OF_INCORPORATION,
    ClientsEO.DATE_STATUS_CHANGED,
    ClientsEO.DISPOSAL_INSTRUCTIONS,
    ClientsEO.DIVIDEND_DISP_TYPE,
    ClientsEO.ENTITLEMENT_TO_UNASSIGNED,
    ClientsEO.ENTITLEMENT_UNASSIGNED,
    ClientsEO.GENDER,
    ClientsEO.INITIALS,
    ClientsEO.LOCAL_CLIENT_ID,
    ClientsEO.MEMBER_CODE,
    ClientsEO.MEMBER_TYPE,
    ClientsEO.NATIONALITY,
    ClientsEO.OTHER_NAMES,
    ClientsEO.REMARKS,
    ClientsEO.STATUS,
    ClientsEO.STATUS_CHANGE_REASON_CODE,
    ClientsEO.STATUS_CHANGED_BY,
    ClientsEO.SURNAME,
    ClientsEO.TAX_CODE_DEBT,
    ClientsEO.TAX_CODE_EQT,
    ClientsEO.TITLE,
    CS.DESCRIPTION STATUS_DESCRIPTION,
    CT.DESCRIPTION TYPES_DESCRIPTION,
    ClientsEO.CLIENT_SUFFIX as Client_Suff,
    DECODE(ClientsEO.GENDER, 'M', 'MALE', 'F', 'FEMALE', 'N/A') CLIENT_GENDER,
    CASE
    WHEN CSF.COMPANY = 'Y' THEN ClientsEO.SURNAME
    ELSE ClientsEO.TITLE || ' ' || ClientsEO.OTHER_NAMES || ' ' || ClientsEO.SURNAME
    END Name,
    C.COUNTRY_NAME,
    C.NATIONALITY COUNTRY_NATIONALITY,
    CSF.DESCRIPTION SUFFIX_DESCRIPTION,
    DECODE(ClientsEO.MEMBER_TYPE,'',' ',(SELECT MT.DESCRIPTION FROM MEMBER_TYPES MT WHERE MT.MEMBER_TYPE=ClientsEO.MEMBER_TYPE )) MEM_TYPE_DESCRIPTION,
    DECODE(ClientsEO.TAX_CODE_DEBT,'',' ',(SELECT TS.TAX_DESCRIPTION FROM TAX_STATUS_CODES TS WHERE TS.TAX_CODE=ClientsEO.TAX_CODE_DEBT )) TAX_DEBT_DESCRIPTION,
    DECODE(ClientsEO.TAX_CODE_DEBT,'',' ',(SELECT TS.TAX_RATE FROM TAX_STATUS_CODES TS WHERE TS.TAX_CODE=ClientsEO.TAX_CODE_DEBT )) TAX_DEBT_RATE,
    DECODE(ClientsEO.TAX_CODE_EQT,'',' ',(SELECT TS.TAX_DESCRIPTION FROM TAX_STATUS_CODES TS WHERE TS.TAX_CODE=ClientsEO.TAX_CODE_EQT )) TAX_EQT_DESCRIPTION,
    DECODE(ClientsEO.TAX_CODE_EQT,'',' ',(SELECT TS.TAX_RATE FROM TAX_STATUS_CODES TS WHERE TS.TAX_CODE=ClientsEO.TAX_CODE_EQT )) TAX_EQT_RATE,
    DECODE(ClientsEO.DIVIDEND_DISP_TYPE, 'C', 'CASH','B' , 'BANK','') DIVIDEND_DISP_DESCRIPTION ,
    DECODE(ClientsEO.STATUS_CHANGE_REASON_CODE,'','',(SELECT SR.REASON FROM SUSPENDING_REASONS SR WHERE SR.REASON_CODE=ClientsEO.STATUS_CHANGE_REASON_CODE )) STATUS_CHANGE_REASON
    FROM
    CLIENTS ClientsEO,
    CLIENT_STATUS CS,
    CLIENT_TYPES CT,
    COUNTRIES C,
    CLIENT_SUFFIXES CSF
    WHERE
    ClientsEO.CLIENT_TYPE = CT.CLIENT_TYPE AND ClientsEO.STATUS = CS.STATUS AND ClientsEO.COUNTRY_OF_RESIDENCE = C.COUNTRY_CODE(+) AND ClientsEO.CLIENT_SUFFIX = CSF.CLIENT_SUFFIX) QRSLT WHERE ( ( (UPPER(CLIENT_PREFIX) = UPPER(:vc_temp_1) ) ) )
    <ViewObjectImpl> <bindParametersForCollection> [7425] Bind params for ViewObject: [########.client.views.TestVO]ClientAM.TestVO1
    <OracleSQLBuilderImpl> <bindParamValue> [7426] Binding param "vc_temp_1": 325
    <ADFLogger> <addContextData> Execute query
    MyOracleDatabaseChangeListenerWrapper.getRegistrationProperties() : vProperties = {DCN_QUERY_CHANGE_NOTIFICATION=true, DCN_NOTIFY_ROWIDS=true, DCN_BEST_EFFORT=true}
    <ADFLogger> <addContextData> Execute query
    <ViewObjectImpl> <freeStatement> [7427] ViewObject: [########.client.views.TestVO]ClientAM.TestVO1 close single-use prepared statements
    <QueryCollection> <buildResultSet> [7428] QueryCollection.executeQuery failed...
    <QueryCollection> <buildResultSet> [7429] java.sql.SQLException: ORA-29983: Unsupported query for Continuous Query Notification
    >
    I think this is the query you asked for.
    Thanks for paying attention, Mr. John.
    Dinuka.
    Edited by: dinuka on Jul 25, 2011 12:18 PM
    some package names removed due to company rules and regulations.

  • Missed and duplicate events with Continuous Query Cache

    We have seen missed events and duplicate events when we register to receive events (using a Continuous Query Cache) on an entry in the cache while the entry is being updated.
    Use case:
    Start a Node
    Start a Proxy
    Start Extend Client
    Implementation of the Extend Client
    Create Cache
    Add Entry to Cache
    Initiate Thread 1 {
        For each (1 to 30)
            Run Update Entry Processor on the cache entry; the Entry Processor increments the Cache Entry value by 1
    }
    Initiate Thread 2 {
        Wait until the Cache entry is updated 10 times
        Create Map Listener {
            For Entry Insert Event {
                Print event
                Set Initial value = new value
            }
            For Entry Update Event {
                Print event
                Set Update value = Update value + 1
            }
        }
        Initiate Continuous Query Cache (cache, Always Filter, Map Listener)
    }
    Start Thread 1
    Start Thread 2
    Wait until Thread 1 and Thread 2 are terminated
    Expected Result = read the value of the entry from the cache
    Actual Result = Initial value + Update value
    Results we have seen in two tests:
    Test 1: Expected Result > Actual Result: missing events
    Entry Inserted Event: MapEvent{ContinuousQueryCache inserted: key=1, value=UpdateObject [intNumber=1, longNumber=13]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=13], new value=UpdateObject [intNumber=1, longNumber=15]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=15], new value=UpdateObject [intNumber=1, longNumber=16]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=16], new value=UpdateObject [intNumber=1, longNumber=17]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=17], new value=UpdateObject [intNumber=1, longNumber=18]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=18], new value=UpdateObject [intNumber=1, longNumber=19]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=19], new value=UpdateObject [intNumber=1, longNumber=20]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=20], new value=UpdateObject [intNumber=1, longNumber=21]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=21], new value=UpdateObject [intNumber=1, longNumber=22]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=22], new value=UpdateObject [intNumber=1, longNumber=23]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=23], new value=UpdateObject [intNumber=1, longNumber=24]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=24], new value=UpdateObject [intNumber=1, longNumber=25]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=25], new value=UpdateObject [intNumber=1, longNumber=26]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=26], new value=UpdateObject [intNumber=1, longNumber=27]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=27], new value=UpdateObject [intNumber=1, longNumber=28]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=28], new value=UpdateObject [intNumber=1, longNumber=29]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=29], new value=UpdateObject [intNumber=1, longNumber=30]}
    Expected Result: 30
    Actual Result: 29
    Issue: the event for the 14th update was not sent.
    Test 2: Expected Result < Actual Result: Duplicate events
    Entry Inserted Event: MapEvent{ContinuousQueryCache inserted: key=1, value=UpdateObject [intNumber=1, longNumber=13]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=13], new value=UpdateObject [intNumber=1, longNumber=13]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=13], new value=UpdateObject [intNumber=1, longNumber=14]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=14], new value=UpdateObject [intNumber=1, longNumber=15]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=15], new value=UpdateObject [intNumber=1, longNumber=16]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=16], new value=UpdateObject [intNumber=1, longNumber=17]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=17], new value=UpdateObject [intNumber=1, longNumber=18]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=18], new value=UpdateObject [intNumber=1, longNumber=19]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=19], new value=UpdateObject [intNumber=1, longNumber=20]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=20], new value=UpdateObject [intNumber=1, longNumber=21]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=21], new value=UpdateObject [intNumber=1, longNumber=22]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=22], new value=UpdateObject [intNumber=1, longNumber=23]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=23], new value=UpdateObject [intNumber=1, longNumber=24]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=24], new value=UpdateObject [intNumber=1, longNumber=25]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=25], new value=UpdateObject [intNumber=1, longNumber=26]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=26], new value=UpdateObject [intNumber=1, longNumber=27]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=27], new value=UpdateObject [intNumber=1, longNumber=28]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=28], new value=UpdateObject [intNumber=1, longNumber=29]}
    Entry Updated Event: MapEvent{ContinuousQueryCache updated: key=1, old value=UpdateObject [intNumber=1, longNumber=29], new value=UpdateObject [intNumber=1, longNumber=30]}
    Expected Result: 30
    Actual Result: 31
    Issue: the 13th update was delivered as both an Insert event and an Update event.
    reg
    Dasun.

    Hi Paul,
    I tested with 3.7.1.4 and 3.7.1.5. In both versions I can see the issue.
    reg
    Dasun.
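    For reference, a compact Java sketch of the kind of client-side check described above - a CQC with a listener that tallies insert and update events so the totals can be compared with the entry's final value (the cache name and key are illustrative; the updating threads are elided):
    import java.util.concurrent.atomic.AtomicInteger;
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.net.cache.ContinuousQueryCache;
    import com.tangosol.util.MapEvent;
    import com.tangosol.util.MultiplexingMapListener;
    import com.tangosol.util.filter.AlwaysFilter;

    public class EventCountCheck {
        public static void main(String[] args) throws InterruptedException {
            NamedCache cache = CacheFactory.getCache("test");
            final AtomicInteger inserts = new AtomicInteger();
            final AtomicInteger updates = new AtomicInteger();
            // Tally every event the CQC delivers, for comparison with the cache value later.
            MultiplexingMapListener listener = new MultiplexingMapListener() {
                protected void onMapEvent(MapEvent evt) {
                    System.out.println(evt);
                    if (evt.getId() == MapEvent.ENTRY_INSERTED) {
                        inserts.incrementAndGet();
                    } else if (evt.getId() == MapEvent.ENTRY_UPDATED) {
                        updates.incrementAndGet();
                    }
                }
            };
            ContinuousQueryCache cqc = new ContinuousQueryCache(cache, new AlwaysFilter(), listener);
            // ... start the updating thread(s) here and wait for them to finish ...
            Thread.sleep(5000);
            System.out.println("inserts=" + inserts + ", updates=" + updates
                    + ", view size=" + cqc.size()
                    + ", final cache value=" + cache.get(Integer.valueOf(1)));
        }
    }
    Counting this way makes a missed or duplicated event show up as a mismatch between (inserts + updates) and the number of entry processor invocations.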

  • Continuous Query Cache Local caching meaning

    Hi,
    I encountered the following problem when working with a continuous query cache with local caching set to TRUE.
    I was able to insert data into the Coherence cache and read it as well.
    Then I stopped the process and tried to read the data in the cache for the keys I had inserted earlier.
    But I received NULL as the result.
    NamedCache::Handle hCache = DerivedCQC::create(CacheFactory::getCache("dist-AccountGCE"), AlwaysFilter::getInstance(), true);
    DerivedCQC.hpp:
    /*
     * File: DerivedCQC.hpp
     * Author: srathna1
     * Created on 15 July 2011, 02:47
     */
    #ifndef DERIVEDCQC_HPP
    #define DERIVEDCQC_HPP
    #include "coherence/lang.ns"
    #include "coherence/net/cache/ContinuousQueryCache.hpp"
    #include "coherence/net/NamedCache.hpp"
    #include "coherence/util/Filter.hpp"
    #include "coherence/util/MapListener.hpp"
    using namespace coherence::lang;
    using coherence::net::cache::ContinuousQueryCache;
    using coherence::net::NamedCache;
    using coherence::util::Filter;
    using coherence::util::MapListener;

    class DerivedCQC
        : public class_spec<DerivedCQC,
            extends<ContinuousQueryCache> >
        {
        friend class factory<DerivedCQC>;

        protected:
            DerivedCQC(NamedCache::Handle hCache,
                    Filter::View vFilter, bool fCacheValues = false,
                    MapListener::Handle hListener = NULL)
                : super(hCache, vFilter, fCacheValues, hListener) {}

        public:
            // delegate containsKey to the CQC's local (front) map
            virtual bool containsKey(Object::View vKey) const
                {
                return m_hMapLocal->containsKey(vKey);
                }
        };
    #endif /* DERIVEDCQC_HPP */
    When I switch off the local storage flag to FALSE.
    I was able to read the data.
    NamedCache::Handle hCache = DerivedCQC::create(CacheFactory::getCache("dist-AccountGCE"), AlwaysFilter::getInstance(), false);
    Ideally I'm expecting that in the TRUE scenario, while I'm connected to Coherence, all keys and values are synced up locally and stored locally, and each update is also synced up.
    In the FALSE scenario it should hook into the Coherence cache, read from it for each key, and cache values from that moment onwards. Please share how it is implemented underneath.
    Thanks and regards,
    Sura

    Hi Wei,
    I found that when you declare your cache as a global variable you won't get data in the TRUE scenario, but if you declare the cache inside a method you will retrieve the data.
    Try this:
    #include <iostream>
    #include <coherence/net/CacheFactory.hpp>
    #include "coherence/lang.ns"
    #include <coherence/net/NamedCache.hpp>
    #include <stdio.h>
    #include <stdlib.h>
    #include <pthread.h>
    #include <coherence/net/cache/ContinuousQueryCache.hpp>
    #include <coherence/util/filter/AlwaysFilter.hpp>
    #include <coherence/util/filter/EntryFilter.hpp>
    #include "DerivedCQC.hpp"
    #include <fstream>
    #include <string>
    #include <sstream>
    #include <coherence/util/Set.hpp>
    #include <coherence/util/Iterator.hpp>
    #include <sys/types.h>
    #include <unistd.h>
    #include <coherence/stl/boxing_map.hpp>
    #include "EventPrinter.hpp"
    using namespace coherence::lang;
    using coherence::net::CacheFactory;
    using coherence::net::NamedCache;
    using coherence::net::ConcurrentMap;
    using coherence::net::cache::ContinuousQueryCache;
    using coherence::util::filter::AlwaysFilter;
    using coherence::util::filter::EntryFilter;
    using coherence::util::Set;
    using coherence::util::Iterator;
    using coherence::stl::boxing_map;
    // global handle: created during static initialization, before main() runs
    NamedCache::Handle hCache = DerivedCQC::create(CacheFactory::getCache("gce-Account"), AlwaysFilter::getInstance(), true);
    int main(int argc, char** argv) {
        std::cout << "size: " << hCache->size() << std::endl;
        return 0;
    }
    In the above example you will see that size is 0 in the TRUE case, while it equals the amount of data in the cache in the FALSE scenario.
    But if you declare the cache as below, you will get the results the documentation leads you to expect:
    int main(int argc, char** argv) {
        NamedCache::Handle hCache = DerivedCQC::create(CacheFactory::getCache("gce-Account"), AlwaysFilter::getInstance(), true);
        std::cout << "size: " << hCache->size() << std::endl;
        return 0;
    }
    Is this a bug or is this the expected behaviour? To my understanding it is a bug.
    Thanks and regards,
    Sura

  • Continuous Query, who can help?

    Hello,
    For my final project I want to do something with continuous query, and I am now looking for information about it. Where can I find good information about what it is (here on the site and maybe on Wikipedia)? On Wikipedia I found an article in German, but I don't understand that language very well.
    Hope someone can help me...
    Thanks in advance...

    http://wiki.tangosol.com/display/COH31UG/Continuous+Query
    HTH...

  • Is a jump query possible in Web Intelligence?

    Hello Friends,
    I am from an SAP BI background and I am very new to the BusinessObjects tools. At my current client there is a requirement where I need to do a jump query from one query, passing the variables and the query result.
    1) I know this is possible in BEx using transaction RSBBS. Is there any similar feature in WebI?
    2) We can also pass the result of one query as a filter value to another query in BEx (using a replacement path). Is this possible in WebI?
    Thanks for your time, friends.

    Hello Alican Polat,
    Thanks again for your answers; they are very informative.
    "You can jump from a BEx report to a Crystal Report
    (e.g. you have customer in the rows of your BEx report and jump to a nicely formatted customer sheet).
    Have a look at RSBBS. There you can choose Crystal Reports as a jump target."
    1) So if this is possible, can I jump from one Crystal Report to another using the same transaction RSBBS?
    Thanks,

  • Instant query results - possible?

    I just want to get a general idea of query response times (BEx).
    Usually the client expectation is that they will be instantaneous. That rarely happens. I know I could do pre-calculation, use indexes, build aggregates and use the cache properly, besides having a good data model.
    But I still doubt that queries would be instantaneous.
    What is the general experience on this forum?
    Please share your experiences. Thanks.

    Within the data warehousing industry it is generally accepted that response times for data warehouse queries are not instantaneous. This expectation generally comes from experience with OLTP systems (R/3), where the processing is vastly different.
    That is not to say, however, that instantaneous response times cannot be achieved for specific reports. The more one knows about the report requirements, the more it is possible to tune.
    However, for data warehousing in general, and especially for ad-hoc reporting, it would be prudent to manage your users' and your management's expectations: help them understand that R/3 and BW are very different animals and that, in general, R/3 performance should not be expected from BW.

  • Change of Row Property during query execution possible?

    Hello Experts,
    Is it possible to change a row property during query execution?
    In this case, an input-ready IP query can be executed for different customers. Depending on the customer class (an attribute of the customer), a certain row (containing account data) should either be input ready or only be displayed, but not input ready.
    How can this be achieved?
    Thank you!
    Angie

    Hi Angie,
    To determine input readiness at run time based on the master data value of a particular characteristic (in your case, Customer), you can create a data slice of type exit.
    The data slice is based on an exit class. In the exit class you can implement customer-specific logic to protect data records.
    Please refer to http://help.sap.com/saphelp_nw2004s/helpdata/en/43/0c033316cd2bc4e10000000a114cbd/frameset.htm for more information.
    By using this, the required rows will be shown but will not be input ready (as per your requirement).
    Hope this info helps, or do let us know!
    Best regards,
    Akshata

  • Combining two queries into one query if possible

    Hi there. I would like, if possible, to somehow combine the queries 1) and 2) into a single query.
    1) select distinct user_id from system_user_sessions;
    This query returns all unique users from the indicated table.
    2) select count(session_id) from system_user_sessions where user_id = "each user_id returned from 1)";
    This query will return, for each distinct user_id, the number of sessions involving that user. In other words I would like to return the user_id of each user together with the number of session_ids involving that user.
    any ideas? Joe

    I assume you are looking for something like this:
    select count(session_id)
      from system_user_sessions
     where user_id in (select distinct user_id
                         from system_user_sessions);
    HTH
    Ghulam
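    For what the original poster describes (each user_id together with its session count), a GROUP BY form may be closer to the intent; a sketch against the same table:
    select user_id, count(session_id) as session_count
      from system_user_sessions
     group by user_id;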
