Querying just cache

Hi,
I want to query a Coherence cache every 5 seconds with a CQL query. The cache is connected to a CQL processor, but the processor does not receive events from any channel; no channel is connected to the CQL processor.
In the adapter components I set and update the Coherence cache, and I want to query and filter the data in the cache with a CQL query.
When I deploy the CQL query below, CEP raises the error shown after it.
             <![CDATA[
             select
                  t1.telNo as telNo
             from
                  customerCache t1
              ]]>
<BEA-2045016> <The application context "SOLNoActiveSubs" could not be started. Could not initialize component "<unknown>":
Invalid statement: "select
                  t1.telNo as telNo
             from
                  >>customerCache t1<<"
Description: generic syntax error
Cause: This DDL command has syntax error
Action: The syntax expects  '[', as, match_recognize, xmltable, end-of-file, ')', ',', where, group, having, order, left, right, partition, on, primary token>
####<Mar 1, 2011 12:07:01 AM EET> <Info> <OSGiLogReaderAdapter> <> <myServer> <Log Event Dispatcher> <> <> <> <1298930821652> <BEA-000000> <Bundle[308] SOLNoActiveSubs, Message (BundleEvent STOPPED), Exception (null), Time (1298930821652)>
####<Mar 1, 2011 12:07:01 AM EET> <Notice> <Deployment> <> <myServer> <[ACTIVE] ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)'> <> <> <> <1298930821653> <BEA-2045001> <The application bundle "SOLNoActiveSubs" was undeployed successfully>
Is there a way to query the Coherence cache from a CQL processor even though no channel is connected to it? Or should I use an event bean to achieve this instead?
Thanks.

Hi, please follow the CEP programming model when writing the application. An adapter retrieves external data and converts it to CEP events (POJOs or tuples), and a processor is driven by streaming data, which means its rules are evaluated only when events arrive at the processor.
Therefore you need to revise your adapter and configure your EPN along the lines of: Adapter -> Channel -> Processor -> Channel -> UserBean (or outbound adapter).
If you need a Coherence cache to implement clustered processing, you can alternatively use an HA adapter. For further discussion, please describe your use case.
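As a hedged sketch of what that EPN change makes possible (the channel name pollChannel and the status attribute below are assumptions, not taken from your application): once a channel feeds the processor, the Coherence cache can be joined against that stream in CQL, so a heartbeat event arriving every 5 seconds effectively re-runs the cache query. Depending on the CEP version, the join may also require an equality predicate on the cache key.
             <![CDATA[
             select
                  c.telNo as telNo
             from
                  pollChannel [now] as p,
                  customerCache as c
             where
                  c.status = 'ACTIVE'
              ]]>
The adapter (or a simple timer bean) then only has to push one heartbeat event onto pollChannel every 5 seconds; alternatively, an event bean can bypass CQL entirely and query the Coherence NamedCache API directly on its own timer.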

Similar Messages

  • The query just runs the first time, the second time it doesn't run

    The query runs only the first time; the second time it doesn't run. I checked it in SM50 and this message appears:
    CL_RSR_CACHE_BO_FF============CP
    It's not a problem with the indexes; I repaired them, but it's really weird because the first time the query runs OK and the second time it doesn't. It seems the cache is confused, or I don't know what. Help, guys, I'll really appreciate it.

    I was looking at the notes. It's really strange: when I load the cube and execute the query for the first time it is fine, but the second time it stays in the cache. I also noticed that the cube doesn't allow me to activate the DB statistics; I don't know if this is necessary to improve the performance of the query. What I know is this:
    1- Before I created a hierarchy for an InfoObject that is in the cube, the query used to run fine. Now when I load and execute the first time it is fine, but the second time it stays in the cache.
    2- Now I cannot activate the statistics for the cube. The indexes are OK; I checked with RSRV and everything is fine except the statistics.
    What can I do? Help, friends...

  • Query result caching on oracle 9 and 10 vs indexing

    I am trying to improve performance on oracle 9i and 10g.
    We use some queries that take up to 30 minutes to execute.
    I heard that there are some products to cache query results.
    Would this have any advantage over using indexes or materialized views?
    Does anyone know any products that I can use to cache the results of these queries on disk?
    Personally I think that by using the query result caching I would reduce the cpu time needed to process the query.
    Is this true?

    Your post pushes all the wrong buttons, starting with the fact that 9i and 10g are marketing labels, not version numbers.
    You don't tune queries by spending money and throwing resources at them. You tune them by identifying the problem queries, running explain plans, visualizing their output using DBMS_XPLAN, and addressing the root cause.
    If you want help post full version numbers, the SQL statements, and the DBMS_XPLAN outputs.
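    As a minimal sketch of that workflow (the substitution variables below are placeholders for the sql_id and child number you would take from V$SQL), the actual runtime plan of one problem query can be pulled with DBMS_XPLAN:
    -- Run the problem query once with the gather_plan_statistics hint (or statistics_level = ALL),
    -- then display its actual plan and row-source statistics.
    SELECT *
    FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', &child_no, 'ALLSTATS LAST'));
    Comparing the estimated and actual row counts in that output is usually what points to the root cause.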

  • Pinning the sql query in cache

    Hi All,
    I want to pin the SQL query in the cache because the physical reads are very high. Can anyone tell me the steps to pin the SQL query in the cache? Current version: 10.2.1.0, OS: Windows.
    Physical Reads  Executions  Reads per Exec  %Total  CPU Time (s)  Elapsed Time (s)  SQL Id
    19,836          38          522.0           2.1     25.40         50.00             1r0wh3v6bayyk
    With regards
    kccrga

    Oscar,
    I've read Cary's paper, and a good paper it is.
    The point I was trying to get across is that there should be no rules of thumb (bar this one, of course ;) ).
    It all depends. Should one concentrate on reducing CPU usage on a disk-bound system? Is doing, say, 100 single-block reads from disk faster than doing 100 current-mode gets?

  • Using the client result cache without the query result cache

    I have constructed a client in C# using ODP.NET to connect to an Oracle database and want to perform client result caching for some of my queries.
    This is done using a result_cache hint in the query.
    select /*+ result_cache */ * from table
    As far as I can tell, query result caching on the server is done using the same hint, so I was wondering whether there is any way to differentiate between the two. I want the query results to be cached on the client, but not on the server.
    The only way I have found to do this is to disable all caching on the server, but I don't want to do this as I want to use the server cache for PL/SQL function results.
    Thanks.

    You haven't provided ANY information about how you configured the result cache. Different parameters are used for configuring the client versus the server result cache so you need to post what, if anything, you configured.
    Post the code you executed when you set the 'client_result_cache_lag' and 'client_result_cache_size' parameters so we can see what values you used. Also post the results of querying those parameters after you set them that show that they really are set.
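    As a quick, minimal check (nothing application-specific is assumed here), the two parameters can be read back from V$PARAMETER:
    -- Both parameters are set on the server but govern the OCI/ODP.NET client result cache.
    SELECT name, value
    FROM   v$parameter
    WHERE  name IN ('client_result_cache_lag', 'client_result_cache_size');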
    You also need to post the app code showing that you are using the OCI statements that are required when you want to use client-side result caching.
    See the OCI dev guide
    http://docs.oracle.com/cd/B28359_01/appdev.111/b28395/oci10new.htm#sthref1491
    Statement Caching in OCI
    Statement caching refers to the feature that provides and manages a cache of statements for each session. In the server, it means that cursors are ready to be used without the need to parse the statement again. Statement caching can be used with connection pooling and with session pooling, and will improve performance and scalability. It can be used without session pooling as well. The OCI calls that implement statement caching are:
      OCIStmtPrepare2()
      OCIStmtRelease()

  • Query not cached in BIServerCache

    Hi,
    I am trying to seed the cache using an agent. I see from the log file nqquery.log that a query is fired at the database whenever the agent is run, and the presentation cache is populated. So, if I use the analysis again without clearing the presentation cache, it opens quickly and I don't see any entry in nqquery.log. But if I clear the presentation cache from Administration using
    "clear all cursors" and try to access the analysis again, I see that a query is again fired at the database instead of using the BI Server cache.
    Any help is greatly appreciated.
    Thanks,
    KK

    Hi,
    Can you check the points in the URL below to ensure that your queries are actually getting cached by the BI Server:
    http://obieeblog.wordpress.com/2009/01/19/obiee-cache-is-enabled-but-why-is-the-query-not-cached/
    Thanks

  • Query read / cache modes are set in BW

    Hi Experts,
    Which query read / cache modes should be set in BW in order to improve query performance?
    Thanks
    Rohan

    Hi
    The read mode determines how the OLAP processor gets data during navigation. You can set the mode in Customizing for an InfoProvider and in the Query Monitor for a query.
    http://help.sap.com/saphelp_nw04/helpdata/en/57/b10022e849774f9961aa179e8763b6/content.htm
    Assign points if it helps...
    Regards,
    ARK

  • Query result cache with functions

    Hi all,
    one of my colleagues has found somewhat odd behavior of the query result cache. He set result_cache_mode = 'FORCE' (it can also be reproduced with a result_cache hint) and suddenly functions called from the query are executed twice (on the first call).
    An easy example:
    alter session set result_cache_mode = 'FORCE';
    create sequence test_seq;
    create or replace function test_f(i number)
    return number
    is                  
    begin
      dbms_output.put_line('TEST_F executed');
      --autonomous transaction or package variable can be used too
      return test_seq.nextval;
    end;
    prompt First call
    select test_f(1) from dual;
    prompt Second call
    select test_f(1) from dual;
    drop sequence test_seq;
    drop function test_f;
    First call
    TEST_F(1)
             2
    TEST_F executed
    TEST_F executed
    Second call
    TEST_F(1)
             1
    As you can see, on the first call the function is executed twice and the query returns the value from the second execution. When I execute the query again, it returns the value from the first execution... but that doesn't matter; the problem is the double execution. Our developers are used to sending emails via SELECT (it's easier for them):
    select send_mail(...) from dual;
    ... and now the customers complain that they get the emails twice.
    And now the question: is there any way to get rid of this behavior (without changing the parameter back or rewriting the code)? I thought the result cache was automatically disabled for non-deterministic functions... or is this expected behavior?
    Thanks,
    Ivan

    Interesting... you are right:
    SELECT /*+ RESULT_CACHE */ 'dog' FROM DUAL;
    And at the second execution:
    | Id  | Operation        | Name                       | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT |                            |     1 |     2   (0)| 00:00:01 |
    |   1 |  RESULT CACHE    | cc5k01xyqz3ypf9t0j28r5gtd1 |       |            |          |
    |   2 |   FAST DUAL      |                            |     1 |     2   (0)| 00:00:01 |
    Hmmm..

  • Query results cached?

    For a report region, are the query results cached?
    That is, is the report query executed again
    in the following conditions:
    * when a user navigates between pages of a report
    (from page 1 (rows 1-100 of 1000) to page 2 (rows 101-200 of 1000))
    * when a user clicks on a column heading to sort the report

    Ken,
    In both cases the query is executed again.
    Regards,
    Marc

  • Querying TopLink Cache

    My system requires caching the result sets obtained from a ReadAllQuery. I might run several queries against this static set of cached data, and the data to be cached is small. I configure TopLink with a FullIdentityMap to cache the output of a ReadAllQuery that I execute at system startup.
    But I am not able to query the cache from an external API later (which could be several minutes later). How can I use the TopLink cache APIs to get this done? Kindly reply.

    Hi Manoj,
    If I understand you correctly, you persist some objects and then later query them. You don't get the results you expect when you use checkCacheOnly(). You need to use checkCacheThenDatabase(), and when you do this you see SQL being issued, which is what I would expect.
    If your cache type for the class is FullIdentityMap then TopLink will never release objects of that class once read and your checkCacheOnly() query should work.
    I'm guessing that you're using a different TopLink session. You mention you have a number of services. What environment are you running in and what is your architecture (e.g., servlet or EJB)? Statics don't solve sharing problems, especially in an application server or web application environment in which multiple classloaders are employed.
    --Shaun
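    As a hedged illustration of the checkCacheOnly() approach (Customer is a hypothetical mapped class, the package names assume the classic native TopLink API, and the Session is assumed to be the same one that originally read the objects):
    import java.util.List;
    import oracle.toplink.expressions.ExpressionBuilder;
    import oracle.toplink.queryframework.ReadAllQuery;
    import oracle.toplink.sessions.Session;

    public class CustomerCacheLookup {
        @SuppressWarnings("unchecked")
        public static List<Customer> activeCustomers(Session session) {
            ReadAllQuery query = new ReadAllQuery(Customer.class);
            ExpressionBuilder builder = query.getExpressionBuilder();
            query.setSelectionCriteria(builder.get("status").equal("ACTIVE"));
            query.checkCacheOnly(); // restrict the query to the identity map; no SQL is issued
            return (List<Customer>) session.executeQuery(query);
        }
    }
    If this runs against a different Session (or a different classloader's TopLink instance) than the one that performed the original ReadAllQuery, the identity map will be empty and checkCacheOnly() will return nothing, which is Shaun's point above.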

  • Oracle 11g/R2 Query Result Cache - Incremental Update

    Hi,
    In Oracle 11gR2, I created a replica of the HR.Employees table and executed the following statement (although using the SUM() function is not logical in this case, I am just testing the result):
    STEP - 1
    SELECT /*+ RESULT_CACHE */ employee_id, first_name, last_name, SUM(salary)
    FROM HR.Employees_copy
    WHERE department_id = 20
    GROUP BY employee_id, first_name, last_name;
    EMPLOYEE_ID  FIRST_NAME  LAST_NAME   SUM(SALARY)
    202          Pat         Fay                6000
    201          Michael     Hartstein         13000
    Elapsed: 00:00:00.01
    Execution Plan
    Plan hash value: 3837552314
    | Id | Operation            | Name                       | Rows | Bytes | Cost (%CPU)| Time     |
    |  0 | SELECT STATEMENT     |                            |    2 |   130 |     4  (25)| 00:00:01 |
    |  1 |  RESULT CACHE        | 3acbj133x8qkq8f8m7zm0br3mu |      |       |            |          |
    |  2 |   HASH GROUP BY      |                            |    2 |   130 |     4  (25)| 00:00:01 |
    |* 3 |    TABLE ACCESS FULL | EMPLOYEES_COPY             |    2 |   130 |     3   (0)| 00:00:01 |
    Statistics
    0 recursive calls
    0 db block gets
    0 consistent gets
    0 physical reads
    0 redo size
    690 bytes sent via SQL*Net to client
    416 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    2 rows processed
    STEP - 2
    INSERT INTO HR.employees_copy
    VALUES(200, 'Dummy', 'User','[email protected]',NULL, sysdate, 'MANAGER',5000, NULL,NULL,20);
    STEP - 3
    SELECT /*+ RESULT_CACHE */ employee_id, first_name, last_name, SUM(salary)
    FROM HR.Employees_copy
    WHERE department_id = 20
    GROUP BY employee_id, first_name, last_name;
    EMPLOYEE_ID  FIRST_NAME  LAST_NAME   SUM(SALARY)
    202          Pat         Fay                6000
    201          Michael     Hartstein         13000
    200          Dummy       User               5000
    Elapsed: 00:00:00.03
    Execution Plan
    Plan hash value: 3837552314
    | Id | Operation            | Name                       | Rows | Bytes | Cost (%CPU)| Time     |
    |  0 | SELECT STATEMENT     |                            |    3 |   195 |     4  (25)| 00:00:01 |
    |  1 |  RESULT CACHE        | 3acbj133x8qkq8f8m7zm0br3mu |      |       |            |          |
    |  2 |   HASH GROUP BY      |                            |    3 |   195 |     4  (25)| 00:00:01 |
    |* 3 |    TABLE ACCESS FULL | EMPLOYEES_COPY             |    3 |   195 |     3   (0)| 00:00:01 |
         Statistics
    0 recursive calls
    0 db block gets
    4 consistent gets
    0 physical reads
    0 redo size
    714 bytes sent via SQL*Net to client
    416 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    3 rows processed
    In the execution plan of STEP 3, the RESULT CACHE operation is shown against Id 1, which suggests the result has been retrieved directly from the result cache. Does this mean that the Oracle server has incrementally retrieved the result set?
    Before the execution of STEP 2 the cache contained only 2 records; then 1 record was inserted, and after STEP 3 a total of 3 records was returned from the cache. Does this mean that the newly inserted row is retrieved from the database and merged into the cached result of STEP 1?
    If the Oracle server has incrementally retrieved and merged the newly inserted record, what mechanism does Oracle use to do so?
    Regards,
    Wasif
    Edited by: 965300 on Oct 15, 2012 12:25 AM

    No, the RESULT CACHE operation doesn't necessarily mean that the results are retrieved from the cache. It could be that they are being written to it.
    Look at the number of consistent gets: it's zero in the first step (I assume you had already run this query before), and I would conclude that the data is being read from the result cache.
    In the third step there are 4 consistent gets. I would conclude that the data is being written to the result cache; a fourth step repeating the SQL should show zero consistent gets, and that would be the results being read.
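    As a hedged way to double-check that interpretation (these are the standard dynamic performance views for the result cache; no application objects are assumed), you could look at the cache statistics and objects after each run:
    -- "Create Count Success" grows when a result is written to the cache,
    -- "Find Count" grows when a cached result is reused.
    SELECT name, value
    FROM   v$result_cache_statistics;

    -- SCAN_COUNT on the cached result increases each time it is reused.
    SELECT id, type, status, name, scan_count
    FROM   v$result_cache_objects
    WHERE  type = 'Result';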

  • Query Template Caching Properties

    I need to change the IsCachable on the fly in an IRPT page. It seems every property is exposed in the query object EXCEPT IsCachable. Am I missing something?
    Assuming I'm not, how can I simply tell the query template NOT to check the cache for this update AND not to create a new cache? I know I can change the RowCount, or anything really, so it won't match up with an existing cache. But IsCachable is true, and this will cause a new cache to be created, which I don't want.

    The problem is I need to change the IsCachable property in a web page in response to user input (back and forth at will), and then re-update the applet(s) on the page. So I really need to drive this from JavaScript, changing IsCachable to whatever the user has selected and then doing the applet update. QueryObject.IsCachable returns an error saying no such method or property exists. It's also not listed in the Script Assistant.
    The idea is I have a scheduled transaction that runs every few minutes and pre-caches BLS transactions that feed reports with performance problems (Rajeev, you should be able to relate to that one). When these reports are run, they now render in a few seconds. I display how old the data is, and provide a checkbox they can tick to run the report again without using the cache if they have a problem with data that is 10 minutes (or whatever) old. They make the decision to wait for the absolute latest data. Doing this in JavaScript is by far the easiest way to handle it. I just don't seem to have access to that IsCachable property.
    Rick - It's interesting that you consider it easier to do your own caching. We started down this road and decided it was all getting too messy. It took a while to get total control over xMII caching, but now I find using it much cleaner.

  • Is there a way to stop a query just after the cursor/plan is produced by CBO?

    Following the suggestions of Kerry Osborne's Oracle Blog » Blog Archive » Explain Plan Lies – Kerry Osborn…
    on the lies of "explain plan" (and of "set autotrace on" too), I'd like to try to stop a query/DML statement before it actually starts, just after the plan has been produced and a sql_id assigned.
    Is there any CLEAN way (other than trying CTRL-C) to do that?
    Thanks
    Paolo

    Hi
    PaolFili wrote:
    Thanks rp.
    I think my question is a little different, but your reply gives me an idea (which has clear disadvantages, but can do the job).
    The problem is to obtain in the library cache a plan, a SQL_ID and a PLAN_HASH for a query (i.e. a query that runs for 10 days) that I cannot start up.
    So my idea (suggested by your reply) is:
    1) LOCK in EXCLUSIVE mode at table level (if it's possible) every table accessed by the query (yes, it can be really expensive in some production environments, but sometimes it can be necessary),
      using: LOCK TABLE table IN EXCLUSIVE MODE
    for each table accessed by the query.
    2) Start the query, which will be suspended by the locks on the accessed tables, then kill the sid, serial#, @inst_id of the query.
    3) Unlock the tables by issuing ROLLBACK in the session where the LOCK TABLE ... statements were issued (to make the tables available again for other queries).
    Any other ideas?
    Thanks
    Paolo
    you're planning on using locks to stop a query, and you think that in order to do so you need exclusive locks on every table accessed in the query? And you are prepared to do that on a production system?
    And all of this is needed to troubleshoot a query that was running for 10 days -- i.e. a query that was available for all kinds of diagnostics during 10 days?
    Sorry, I think it wasn't a good idea for you to read that Osborne's blog post -- you should've started with more basic things. Way more basic.
    Best regards,
      Nikolay

  • Query Data cached? (Virtual Cube)

    Hi folks,
    I have a problem with a query object that gets data from a virtual cube. The virtual cube is based on a 3.x InfoSource and gets data from a table in the ERP system.
    When I call the query from VC, everything works fine and current data is shown. But if I change the data in the table the virtual cube points to and send a refresh event to the query object, the changed data is not shown; it always returns the data fetched on the first call. If I refresh the whole application in the browser (via F5), the changed data is shown. I disabled the cache mode in RSRT for this query, but it doesn't help.
    Is there any way to get the current data by just sending a refresh action and calling the query object again, without reloading the whole application? Any ideas?
    Points will be awarded for useful information.

    Hello,
    The reason the changed data does not show up in the query even after a refresh is sent is that the cache for the virtual provider does not get reset as it would be for a normal InfoCube.
    So, it does not know when to reset its cache even when the underlying data changes.
    The way we have worked around this is to have a process chain that runs frequently and executes the function module RSDMD_SET_DTA_TIMESTAMP for the virtual cube in question.
    Thanks
    Dharma.

  • Querying multiple caches parallely

    Hi experts,
    I have multiple caches defined in the client cache config, and I want to query the different caches concurrently. If I use threads to do the job, there will be a network call for each cache (I use TCP Extend). Instead, I want this to be handled inside the grid. Any suggestions on this?
    I appreciate your valuable inputs.
    Regards,
    karthik

    Hi karthik,
    Each cache access will be a network call, although they will all share the same TCP connection from the client to the server.
    If you really want everything to happen via a single call to the server, you could use an invocation service. Configure an invocation-scheme on your server and a remote-invocation-scheme on the client, and then write all the logic that accesses the caches in an Invocable.
    For Example:
    The server config:
    <cache-config>
      <caching-scheme-mapping>
        <cache-mapping>
          <cache-name>dist-*</cache-name>
          <scheme-name>example-distributed</scheme-name>
        </cache-mapping>
      </caching-scheme-mapping>
      <caching-schemes>
        <invocation-scheme>
          <scheme-name>EXAMPLE-INVOCATION</scheme-name>
          <service-name>EXAMPLE-INVOCATION-SERVICE</service-name>
          <serializer>
            <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
          </serializer>
          <thread-count>5</thread-count>
          <autostart>true</autostart>
        </invocation-scheme>
        <proxy-scheme>
          <scheme-name>Example-Proxy</scheme-name>
          <service-name>EXAMPLE-PROXY-SERVICE</service-name>
          <thread-count>5</thread-count>
          <acceptor-config>
            <tcp-acceptor>
              <local-address>
                <address>localhost</address>
                <port>50115</port>
              </local-address>
            </tcp-acceptor>
            <serializer>
              <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
            </serializer>
          </acceptor-config>
          <proxy-config>
            <cache-service-proxy>
              <enabled>false</enabled>
            </cache-service-proxy>
            <invocation-service-proxy>
              <enabled>true</enabled>
            </invocation-service-proxy>
          </proxy-config>
          <autostart>true</autostart>
        </proxy-scheme>
      </caching-schemes>
    </cache-config>
    The client config:
    <cache-config>
      <caching-scheme-mapping/>
      <caching-schemes>
        <remote-invocation-scheme>
          <scheme-name>EXAMPLE-INVOCATION-SERVICE-SCHEME</scheme-name>
          <service-name>EXAMPLE-INVOCATION-SERVICE</service-name>
          <initiator-config>
            <tcp-initiator>
              <remote-addresses>
                <socket-address>
                  <address>localhost</address>
                  <port>50115</port>
                </socket-address>
              </remote-addresses>
              <connect-timeout>2s</connect-timeout>
            </tcp-initiator>
            <outgoing-message-handler>
              <request-timeout>5s</request-timeout>
            </outgoing-message-handler>
            <serializer>
              <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
            </serializer>
          </initiator-config>
        </remote-invocation-scheme>
      </caching-schemes>
    </cache-config>
    Example Invocable:
    public class CacheQuery extends AbstractInvocable implements PortableObject {
        public void run() {
            Object result = null; // ... perform your cache queries here and combine the results
            // set the results to pass back to the client
            setResult(result);
        }
        public void readExternal(PofReader pofReader) throws IOException {
            // Implement any POF deserialization
        }
        public void writeExternal(PofWriter pofWriter) throws IOException {
            // Implement any POF serialization
        }
    }
    Example client code:
    InvocationService service = (InvocationService) CacheFactory.getService("EXAMPLE-INVOCATION-SERVICE");
    CacheQuery invocable = new CacheQuery();
    Object result = service.query(invocable, null);
    JK
