Tuning Distributed Queries

Hello to all,
I have a new scenario; please help me work out which tool or technique is appropriate for it.
My scenario relates to distributed queries, and I want to tune the process. My working environment is Windows Server 2003 with Oracle 11g.
I run a query from one server that uses database links to extract data from two other databases, but I observe that the query runs on the first node and only then on the second node. I want to run this process in parallel, with both servers utilized at the same time, and simply using database links in the query does not solve the problem. Would Oracle Parallel Server be the solution, or something else? Please advise which architecture is appropriate for this scenario.
regards
Adeel

Yes, I started this thread. I don't know what the problem is with my OTN user ID; sometimes it shows 'ORACLE student' and sometimes my ID.
-- sum(store_code) is calculated on the local server, Node0
select sum(store_code) from
(
-- this branch runs on one server, say Node1
select * from PK0113_jt@D2_9_2010634013255218206180
inner join PK0113_dg0@D2_9_2010634013255218206180 on PK0113_dg0.dg0_id = PK0113_jt.dg0_id
where store_code = 113
union all
-- this branch runs on the other server, say Node2
select * from PK113S_jt@D2_12_2010634015728553161367
inner join PK113S_dg0@D2_12_2010634015728553161367 on PK113S_dg0.dg0_id = PK113S_jt.dg0_id
where store_code = 102
) t
I send this request from the local server to two star schemas whose data reside on two different servers. In my case the first request goes to Node1, then to Node2, and finally the answer is returned; the execution plan shows REMOTE steps.
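One way to get both remote servers working at the same time (a sketch of an approach, not something suggested in this thread; the staging table, link names, and simplified job bodies are hypothetical) is to load each remote branch from its own session, for example with two DBMS_SCHEDULER jobs writing into a local staging table, and then aggregate the staging table locally once both jobs finish:
begin
  -- each one-shot job runs in its own session, so the two remote queries overlap in time
  dbms_scheduler.create_job(
    job_name   => 'LOAD_NODE1',
    job_type   => 'PLSQL_BLOCK',
    job_action => 'begin insert into stage_codes select store_code from pk0113_jt@node1_link where store_code = 113; commit; end;',
    enabled    => true);
  dbms_scheduler.create_job(
    job_name   => 'LOAD_NODE2',
    job_type   => 'PLSQL_BLOCK',
    job_action => 'begin insert into stage_codes select store_code from pk113s_jt@node2_link where store_code = 102; commit; end;',
    enabled    => true);
end;
/
-- after both jobs have completed:
select sum(store_code) from stage_codes;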

Similar Messages

  • Xml data type is not supported in distributed queries. Remote object 'OPENROWSET' has xml column(s).

    Hi,
    Can anyone help me out, please?
    I have written a stored procedure that creates views using OPENROWSET (OPENQUERY), but for tables that contain xml data types it throws an error while executing the SP:
    "Xml data type is not supported in distributed queries. Remote object 'OPENROWSET' has xml column(s)."
    Please refer to the stored procedure and error message below.
    USE [Ice]
    GO
    /****** Object:  StoredProcedure [dbo].[Pr_DBAccess]    Script Date: 08/14/2014 16:08:20 ******/
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    ALTER PROCEDURE [dbo].[Pr_DBAccess]
    (
        @SERVERTYPE   NVARCHAR(50),
        @SERVERNAME   NVARCHAR(100),
        @DATABASENAME NVARCHAR(100),
        @SCHEMANAME   NVARCHAR(100),
        @TABLENAME    NVARCHAR(100),
        @USERNAME     NVARCHAR(100),
        @PASSWORD     NVARCHAR(100)
    )
    AS
    BEGIN
        DECLARE @openquery      NVARCHAR(4000),
                @ETL_CONFIG_IDN NVARCHAR(100);

        IF @SERVERTYPE = 'SQL'
        BEGIN
            SET @openquery =
                'CREATE VIEW ' + @TABLENAME +
                ' WITH ENCRYPTION AS SELECT * FROM OPENROWSET(''SQLNCLI'',''SERVER=' + @SERVERNAME +
                ';TRUSTED_CONNECTION=YES;'',''SELECT * FROM ' + @DATABASENAME + '.' + @SCHEMANAME + '.' + @TABLENAME + ''')'
            SELECT @openquery
        END

        EXECUTE sp_executesql @openquery
    END
    ---- While running the SP manually, the error below occurred

    Hi,
    1. You cannot use a table or view that contains an xml or CLR type as a four-part name in your query.
    2. You need to cast the column to nvarchar(max), varbinary(max), or another appropriate type.
    3. If you have a table with an xml column, for example, you need to create a view that contains all columns other than the xml one and query that instead. Or you can issue a pass-through query using OPENQUERY with the appropriate columns only.
    Here is a workaround:
    -- CAST without a length defaults to VARCHAR(30), so use VARCHAR(MAX) to avoid truncation
    SELECT
          CAST(a.XML_Data AS XML) AS XML_Data
    FROM
          OPENQUERY([LINKED SERVER NAME HERE],'
              SELECT
                 CAST(XML_Data AS VARCHAR(MAX)) AS XML_Data
              FROM
                 [DATABASE NAME].[SCHEMA].[TABLE NAME]'
          ) a
    Basically, the data is queried on the remote server: the XML is converted to varchar there, sent to the requesting server, and then converted back to XML locally.
    You can find more details in the thread below:
    http://social.msdn.microsoft.com/Forums/sqlserver/en-US/c6e0f4da-821f-4ba2-9b01-c141744076ef/xml-data-type-not-supported-in-distributed-queries?forum=transactsql
    Thanks

  • OLE DB provider 'MSOLAP' cannot be used for distributed queries because the provider is configured to run in single-threaded apartment mode

    Hopefully this will save somebody some trouble.
    Running 64-bit Enterprise SQL Server and SSAS with Service Pack 2 installed.
    Also running ProClarity, so 32-bit Reporting Services is running as well.
    When trying to create a linked server to my OLAP database I kept getting the following error:
    OLE DB provider 'MSOLAP' cannot be used for distributed queries because the provider is configured to run in single-threaded apartment mode. (Microsoft SQL Server, Error: 7308)
    Many posts suggested I select the "in process" check box under the OLAP provider, but this did not help.
    Finally, instead of using the IDE to create the linked server, I used a script to call sp_addlinkedserver with @provider='MSOLAP.3'. This fixed the problem (a sketch of the call is below).
    If you have a clearer idea of why I was having the issue in the first place, feel free to let me know what you think.
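    For reference, the scripted version looks roughly like this (the linked server name, data source, and catalog are placeholders; @provider='MSOLAP.3' is the part that mattered here):
    EXEC sp_addlinkedserver
         @server     = N'OLAP_LINKED',    -- linked server name (placeholder)
         @srvproduct = N'',
         @provider   = N'MSOLAP.3',       -- version-specific ProgID instead of plain MSOLAP
         @datasrc    = N'MySsasServer',   -- SSAS instance (placeholder)
         @catalog    = N'MyOlapDatabase'  -- SSAS database (placeholder)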

    Try this thread:
    https://social.msdn.microsoft.com/Forums/sqlserver/en-US/f02a3921-7d0b-4038-97bb-9f17d381b345/linked-ssas-server-fails?forum=sqlanalysisservices

  • Facing problem in distributed queries over Oracle linked server

    Hi,
    I have a SQL Server 2005 x64 Standard Edition SP3 instance. On the instance, we had distributed (four-part name) queries over an Oracle linked server running fine just a few hours back, but now they have started taking too long. They seem to work fine when OPENQUERY is used. I have a huge number of queries using the same mechanism, and it is not feasible for me to convert them all to OPENQUERY; please help in getting this resolved.
    Thanks in advance.

    Hi Ashutosh,
    According to your description, you face performance issues with distributed queries and it is not feasible for you to convert all queries to OPENQUERY. To improve the performance, you could try the solutions below:
    1. Make sure that you have a high-speed network between the local server and the linked server.
    2. Use the DRIVING_SITE hint. It forces query execution to be done at a site other than the initiating instance. This is done when the remote table is much larger than the local table and you want the work (join, sorting) done remotely, to save back-and-forth network traffic. In the following example, the DRIVING_SITE hint forces the work to be done on the site where the huge table resides:
    select /*+ DRIVING_SITE(h) */ ename
      from tiny_table t,
           huge_table@remote h
     where t.deptno = h.deptno;
    3. Use views. For instance, you could create a view on the remote site referencing the tables and query the remote view through a local view, as in the following example:
    create view local_cust as select * from cust@remote;
    4. Use procedural code. On rare occasions it can be more efficient to replace a distributed query with procedural code, such as a PL/SQL procedure or a precompiler program; a sketch follows below.
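    As a rough sketch of option 4 (the local table cust_copy and the remote view name are placeholders), a PL/SQL block that pulls the remote rows across the link in bulk and inserts them locally:
    declare
      cursor c is select * from cust@remote;
      type t_rows is table of cust@remote%rowtype;
      l_rows t_rows;
    begin
      open c;
      loop
        fetch c bulk collect into l_rows limit 1000;  -- batch fetches across the link
        forall i in 1 .. l_rows.count
          insert into cust_copy values l_rows(i);
        exit when c%notfound;
      end loop;
      close c;
      commit;
    end;
    /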
    For more information about the process, please refer to the article:
    http://www.dba-oracle.com/t_sql_dblink_performance.htm
    Regards,
    Michelle Li

  • Linked Server and Distributed Queries  in Oracle

    In MS SQL Server, linked servers and distributed queries give SQL Server access to data from remote data sources. What about in Oracle?
    I have table A on server A and table B on server B, and I want to join these two tables together. How can I do this in Oracle?

    Use a database link: http://www.stanford.edu/dept/itss/docs/oracle/10g/server.101/b10759/statements_5005.htm
    For instance, if you have created on database A a link to database B named 'database_b',
    you can use
    select * from table@database_b
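    For completeness, a minimal sketch (the link name, credentials, and TNS alias are placeholders):
    -- on database A, create the link to database B
    CREATE DATABASE LINK database_b
      CONNECT TO scott IDENTIFIED BY tiger
      USING 'db_b_tns_alias';
    -- then join a local table to the remote one
    SELECT a.id, b.amount
      FROM table_a a,
           table_b@database_b b
     WHERE a.id = b.id;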

  • Ad Hoc Distributed Queries

    Hello Experts
    I was trying to enable 'Ad Hoc Distributed Queries' using
    sp_configure 'show advanced options', 1;
    RECONFIGURE;
    sp_configure 'Ad Hoc Distributed Queries', 1;
    RECONFIGURE;
    GO
    but I am getting the following error; please advise. Thank you.
    Msg 5833, Level 16, State 1, Line 1
    The affinity mask specified is greater than the number of CPUs supported or licensed on this edition of SQL Server

    Hi,
    Why a duplicate post? Please avoid this:
    http://social.technet.microsoft.com/Forums/en-US/2ebf1d6e-ffe3-41bf-b741-5f1e4f08f46e/ad-hoc-distributed-queries?forum=sqldatabaseengine
    Please refer to the marked answer in the thread below:
    http://social.msdn.microsoft.com/Forums/sqlserver/en-US/700bd948-1a1d-4912-ac6d-723d4478bd55/license-issues-when-virtualizing-a-sql-2008-onto-windows-2003?forum=sqlsetupandupgrade

  • Distributed queries+pipelined table function

    Hi friends,
    Can I get better performance for distributed queries if I use a pipelined table function? My data is distributed across three different databases.
    Thanks,
    somy

    You will need to grant EXECUTE access on the pipelined table function to whatever users want it. When other users call this function, they may need to prefix the schema owner (i.e. <<owner>>.getValue('001') ) unless you've set up the appropriate synonym.
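    For what it's worth, a minimal sketch of a pipelined table function reading across a database link, with the grant mentioned above (all object names are invented for illustration):
    CREATE TYPE number_tab AS TABLE OF NUMBER;
    /
    CREATE OR REPLACE FUNCTION get_remote_values
      RETURN number_tab PIPELINED
    AS
    BEGIN
      FOR r IN (SELECT val FROM some_table@remote_db) LOOP
        PIPE ROW (r.val);
      END LOOP;
      RETURN;
    END;
    /
    -- callers in other schemas need EXECUTE (and possibly a synonym):
    GRANT EXECUTE ON get_remote_values TO report_user;
    SELECT * FROM TABLE(get_remote_values);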
    What version of SQL*Plus do you have on the NT machine?
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC

  • Distributed Queries - Index Server

    Using Kodo, is it possible to perform distributed queries, that is, to combine a standard tabular SQL query with one that queries an index server?
    I suppose the real question is: is it possible to query a full-text index using Kodo?

    Kodo doesn't provide any built-in support for querying a separate
    server for a particular JDOQL query. If the index server has a JDBC API,
    then it wouldn't be too difficult to issue the query using a separate
    PMF for the index server, and then manually join the results to get back
    the appropriate objects from the main database.
    There are also a bunch of interesting things you can do with custom field/class mappings; you might want to investigate these APIs (preferably in 3.0, where they are more sophisticated).
    Finally, the next release of 3.0 will contain a new "textindex" sample,
    which demonstrates how you might roll your own full text index purely in
    JDO.
    Marc Prud'hommeaux [email protected]
    SolarMetric Inc. http://www.solarmetric.com

  • Distributed Queries w/interMedia

    Does interMedia support simple distributed queries such as the following:
    select doc_id from doc_table@dblink where contains(text,'November',0)>0;

    Originally posted by Paul Dixon ([email protected]):
    This does not work so far in >= 8i.
    I finally figured this one out. Add @dblink between "contains" and the "(". It works fine like this against 8.1.7.
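    In other words, the working form against 8.1.7 is:
    select doc_id from doc_table@dblink where contains@dblink(text,'November',0)>0;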

  • Help tuning SQL queries

    Hi,
    I need your advice on the following query (using Oracle EBS tables). It runs for more than an hour for the subquery part alone, and I would like a much faster result. I'd appreciate any help tuning this query.
    SELECT item_id, item_code, org_id,
           CASE
             WHEN COUNT (a) = 6 THEN 1
             WHEN COUNT (a) = 5 THEN 2
             WHEN COUNT (a) = 3 OR COUNT (a) = 4 THEN 3
             WHEN COUNT (a) = 1 OR COUNT (a) = 2 THEN 4
           END "MC"
      FROM (SELECT oel.inventory_item_id item_id, msi.segment1 item_code,
                   oel.ship_from_org_id org_id, SUM (oel.ordered_quantity) a,
                   TO_CHAR (oel.request_date, 'Mon-YYYY') b
              FROM mtl_system_items_b msi,
                   mtl_item_categories mic,
                   oe_order_headers_all oeh,
                   oe_order_lines_all oel
             WHERE oeh.header_id = oel.header_id
               AND oel.request_date BETWEEN TRUNC (ADD_MONTHS (LAST_DAY (SYSDATE), -7)) + 1
                                        AND TRUNC (ADD_MONTHS (LAST_DAY (SYSDATE), -1)) + 1
               AND msi.creation_date < TRUNC (ADD_MONTHS (LAST_DAY (SYSDATE), -7)) + 1
               AND oel.ship_from_org_id = msi.organization_id
               AND oel.inventory_item_id = msi.inventory_item_id
               AND msi.inventory_item_id = mic.inventory_item_id
               AND msi.organization_id = mic.organization_id
               AND mic.category_set_id = 1
               AND mic.category_id = 178
               AND oel.org_id = oeh.org_id
             GROUP BY oel.inventory_item_id,
                      msi.segment1,
                      oel.ship_from_org_id,
                      TO_CHAR (oel.request_date, 'Mon-YYYY'))
     GROUP BY item_id, item_code, org_id
    Here is the explain plan for the query; it looks OK, but the query still takes a long time.
    Plan
    SELECT STATEMENT CHOOSECost: 3,955 Bytes: 38 Cardinality: 1                                              
         15 SORT GROUP BY Cost: 3,955 Bytes: 38 Cardinality: 1                                         
              14 VIEW APPS. Cost: 3,955 Bytes: 38 Cardinality: 1                                    
                   13 SORT GROUP BY Cost: 3,955 Bytes: 91 Cardinality: 1                               
                        12 FILTER                          
                             11 NESTED LOOPS Cost: 3,908 Bytes: 91 Cardinality: 1                     
                                  8 NESTED LOOPS Cost: 3,907 Bytes: 82 Cardinality: 1                
                                       5 NESTED LOOPS Cost: 1,303 Bytes: 1,612 Cardinality: 31           
                                            2 TABLE ACCESS BY INDEX ROWID INV.MTL_ITEM_CATEGORIES Cost: 59 Bytes: 11,818 Cardinality: 622      
                                                 1 INDEX SKIP SCAN NON-UNIQUE INV.MTL_ITEM_CATEGORIES_N1 Cost: 42 Cardinality: 622
                                            4 TABLE ACCESS BY INDEX ROWID INV.MTL_SYSTEM_ITEMS_B Cost: 2 Bytes: 33 Cardinality: 1      
                                                 3 INDEX UNIQUE SCAN UNIQUE INV.MTL_SYSTEM_ITEMS_B_U1 Cost: 1 Cardinality: 1
                                       7 TABLE ACCESS BY INDEX ROWID ONT.OE_ORDER_LINES_ALL Cost: 84 Bytes: 30 Cardinality: 1           
                                            6 INDEX RANGE SCAN NON-UNIQUE ONT.OE_ORDER_LINES_N3 Cost: 2 Cardinality: 94      
                                  10 TABLE ACCESS BY INDEX ROWID ONT.OE_ORDER_HEADERS_ALL Cost: 1 Bytes: 9 Cardinality: 1                
                                       9 INDEX UNIQUE SCAN UNIQUE ONT.OE_ORDER_HEADERS_U1 Cardinality: 1           
    Thanks in advance,
    Dapid Candra

    Check out these links on how to post proper tuning requests.
    {message:id=1812597}
    {thread:id=863295}
    After a quick look at your execution plan, I noticed you have a lot of steps that report a cardinality of 1. Do you know whether statistics have been gathered recently on these tables? If not, you probably should gather them.
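    For example, to gather statistics on one of the tables from the plan (a sketch; adjust owner, table, and options to your environment, and note that on EBS systems FND_STATS is usually preferred):
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname => 'ONT',
        tabname => 'OE_ORDER_LINES_ALL',
        cascade => TRUE);  -- also gathers statistics for the table's indexes
    END;
    /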

  • Optimizing Distributed Queries

    Hello All,
    We have a serious problem optimizing a job that fetches data (200k+ rows) through views that reside in a remote db and inserts it into a local table. We run this job from the local db and, due to constraints, we can neither create any object in the remote db nor select from the remote base tables; we were only granted select on remote views built on those tables. How do we optimize the job? We tried two methods and neither was fast (an hour plus at minimum...).
    example 1 (using the driving_site hint and the append hint):
    begin
      for irec in (select /*+ driving_site(c) */
                          c.customer_id, c.customer_name, d.dept_id, d.dept_name
                     from customers_view@remotedb c, departments_view@remotedb d
                    where d.unique_id = c.unique_id)
      loop
        -- note: the append hint is ignored for single-row VALUES inserts
        insert /*+ append */
          into local_table (cust_id, cust_name, dept_id, dept_name)
        values (irec.customer_id,
                irec.customer_name,
                irec.dept_id,
                irec.dept_name);
      end loop;
      commit;
    end;
    example 2 (a conventional insert with the append hint; the driving_site hint does not work for DML like this):
    insert /*+ append */
      into local_table (cust_id, cust_name, dept_id, dept_name)
    select c.customer_id, c.customer_name, d.dept_id, d.dept_name
      from customers_view@remotedb c, departments_view@remotedb d
     where d.unique_id = c.unique_id;
    Limitations:
    1) We do not have the privilege to run explain plan for the remote objects :( so whatever we do, we have no clue whether it will improve performance.
    2) The job fetches data only from remote objects (views), no local objects.
    3) We are not allowed to create any object in the remote db (we will never get a grant for that, so no second thoughts about creating objects in the remote db to improve performance).
    If anyone has encountered similar problems or has suggestions to optimize this, please help us out. Thank you all in advance.
    Edited by: 843561 on Aug 26, 2011 1:53 PM

    Dev_Indy wrote:
    Thanks Tubby for your suggestion, will give it a shot for sure and let you know how it worked!!
    No problem. Please do let us know how that works out for you :)

  • Distributed Queries

    I want to query data from two tables residing on another Oracle database, based on a date value that is a variable.
    The SQL statement below works:
    select t1.market_cd, t1.NT_LOC_ENTITY_CD, t1.NTI_NO
    FROM customer_order@phoenix t1, Customer_Order_line@phoenix t2
    WHERE t1.nt_LOC_ENTITY_CD = '515'
    and t1.NTI_NO NOT LIKE 'Y%'
    AND customer_po NOT LIKE 'TD0000%'
    AND customer_po NOT LIKE 'TDMN%'
    and Customer_NO NOT LIKE '20352%'
    and t1.nti_no = t2.nti_no AND
    t1.bo_no = t2.Bo_no AND
    contract_annix not like 'B06%' AND
    contract_annix not like 'B17%' AND
    LINE_ITEM_SEQ_NO='0000' AND
    t2.actual_ship_date IS NULL AND
    (t2.Orig_sched_ship_date = '30-DEC-00' OR t2.Orig_cust_req_date <= '30-DEC-00');
    When I try to replace the hardcoded date '30-DEC-00' above with a variable, as shown below, I get errors.
    As BEGIN
    SELECT sysdate INTO todaysDate from Dual;
    INSERT INTO Uma
    select t1.market_cd, t1.NT_LOC_ENTITY_CD, t1.NTI_NO
    FROM customer_order@phoenix t1, Customer_Order_line@phoenix t2
    WHERE t1.nt_LOC_ENTITY_CD = '515'
    and t1.NTI_NO NOT LIKE 'Y%'
    AND customer_po NOT LIKE 'TD0000%'
    AND customer_po NOT LIKE 'TDMN%'
    and Customer_NO NOT LIKE '20352%'
    and t1.nti_no = t2.nti_no AND
    t1.bo_no = t2.Bo_no AND
    contract_annix not like 'B06%' AND
    contract_annix not like 'B17%' AND
    LINE_ITEM_SEQ_NO='0000' AND
    t2.actual_ship_date IS NULL AND
    (t2.Orig_sched_ship_date = todaysDate OR t2.Orig_cust_req_date <= '30-DEC-00');
    end;
    Can anyone tell me how I can rewrite this query to use variables?
    Thank you.

    Hi,
    I don't know if this may be a workaround, but you may try the following:
    - Create a view on the PHOENIX db as:
    CREATE OR REPLACE
    VIEW Customer_Full_Orders
    AS
    SELECT t1.market_cd,
    t1.nt_loc_entity_cd,
    t1.nti_no,
    customer_po,
    customer_no,
    contract_annix,
    line_item_seq_no,
    t2.actual_ship_date,
    t2.orig_sched_ship_date,
    t2.orig_cust_req_date
    FROM customer_order t1,
    customer_order_line t2
    WHERE t1.nti_no = t2.nti_no
    AND t1.bo_no = t2.bo_no
    /
    If the filter conditions can be hard-coded, you may skip some fields and include the filter in the view.
    - Recreate the procedure as:
    DECLARE
    v_Filter_Date DATE := SYSDATE;
    BEGIN
    INSERT
    INTO uma
    SELECT r.market_cd,
    r.nt_loc_entity_cd,
    r.nti_no
    FROM Customer_Full_Orders@PHOENIX r
    WHERE r.nt_loc_entity_cd = '515'
    AND r.nti_no NOT LIKE 'Y%'
    AND r.customer_po NOT LIKE 'TD0000%'
    AND r.customer_po NOT LIKE 'TDMN%'
    AND r.contract_annix NOT LIKE 'B06%'
    AND r.contract_annix NOT LIKE 'B17%'
    AND r.actual_ship_date IS NULL
    AND (r.orig_sched_ship_date = v_Filter_Date
    OR r.orig_cust_req_date <= v_Filter_Date);
    END;
    /
    Hope this is useful.
    Bye Max

  • Join two remote sites, use_nl or use_hash

    We are using Oracle 10g R2 on Linux platform.
    Suppose we have three remote sites A, B and C. I want to join two tables on B and C by executing a query on site A. I cannot use the driving_site hint because I do not have privileges for that.
    Can you please answer the following questions?
    If I use a nested loop join, will both tables on sites B and C be copied to the local site A and the join performed there, or will only one table (the driving table) be copied to site A and the second table probed remotely?
    In a hash join, will Oracle copy both tables from B and C to site A and then perform the join?
    | Id  | Operation          | Name                           | Rows  | Bytes | Cost (%CPU)| Time     | Inst   |IN-OUT|
    |   0 | SELECT STATEMENT   |                                | 13084 |   907K|    11M  (3)| 37:04:41 |        |      |
    |   1 |  NESTED LOOPS OUTER|                                | 13084 |   907K|    11M  (3)| 37:04:41 |        |      |
    |   2 |   REMOTE           | VISA_CIL01_GDA_INTEREST_PERIOD | 13084 |   344K|    15   (7)| 00:00:01 | CIL_G~ |
    |   3 |   REMOTE           | VGUA_CIL01_ACCOUNT_ID          |     1 |    44 |   850   (3)| 00:00:11 | CIL_G~ | R->S |

    "No; it means that when you are trying to work out whether or not a nested loop join ......."
    Can you please tell me a scenario where I should use NL instead of a hash join?
    "This means that the point at which you decide to switch from NL to hash join will typically be for a smaller number of cycles round the loop, i.e. for a smaller amount of data."
    Sorry, I could not get this point. Does it mean that we should use a hash join when we have larger tables to join?
    "Do you have a URL for the document you read that gave this impression?"
    Actually this is from the Oracle 8i documentation ("Tuning Distributed Queries"), from which I got this concept; perhaps it no longer applies in Oracle 10g.
    "For the nested loop, the rows and columns needed by the outer (first) table will be pulled to the local site in relatively small batches, and for each row in that rowsource the inner (second) table will be probed across the network."
    Can you please give me a reference from the Oracle documentation for the above description, to help me better understand the idea?
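    To make the quoted description concrete, a hinted nested loop over the two links might look like this (a sketch; the table and link names are invented):
    select /*+ leading(b) use_nl(c) */
           b.id, c.val
      from tab_b@site_b b,
           tab_c@site_c c
     where b.id = c.id;
    -- rows from tab_b are pulled to the local site in batches; for each row, tab_c is probed across the network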

  • Select count(*) where exists (takes 5 hours).

    Hello Gurus,
    I have two databases on two servers. I am counting how many rows are the same in two tables that are identical in structure (and whose rows should be identical as well). I am running a SELECT COUNT(*) WHERE EXISTS query, and it takes 5 hours to complete.
    Each table has only two million rows.
    What can I do to speed it up?

    5 hours to process 2M rows does sound a bit long :(
    I didn't see this mentioned explicitly, but I thought the idea of comparing data on 2 servers implied a database link. Tuning distributed queries can be nasty.
    Start by getting an execution plan of the query to figure out what it is doing. Compare that to the plan generated by the already-suggested MINUS operator; you'll need to run MINUS twice, with each query in the other's position the second time (see the sketch below). Alternatively, check the indexing on the subqueries: EXISTS tends to work best with fast indexed lookups, and a full table scan on an EXISTS subquery is not good :(
    Think about copying the data locally to one system or the other first, maybe in a materialized view or even global temporary table.
    Finally, think about tuning the transfer. There are articles on Metalink about tuning the transfer packet sizes (SDU/TDU) which might help IF YOU ARE ON UNIX; I haven't had any luck changing these values on Windows. You can also look into setting tcp.nodelay, which can affect when packets get sent (another Metalink article covers this).
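    A sketch of the MINUS comparison mentioned above (the table and link names are placeholders):
    -- rows present locally but missing or different remotely
    select * from orders_local
    minus
    select * from orders_copy@remote_db;
    -- and the reverse direction
    select * from orders_copy@remote_db
    minus
    select * from orders_local;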

  • Why is it only possible to run queries on a Distributed cache?

    I found by experimentation that if you put a NearCache (only for the benefit of its QueryMap functions) on top of a ReplicatedCache, it throws a runtime exception saying that the query operations are not supported on the ReplicatedCache.
    I understand that the primary goal of the QueryMap interface is to be able to do large, distributed queries on the data across machines in the cluster. However, there are definitely situations (such as in my application) where it is useful to be able to run a local query on the cache, to take advantage of the index APIs, etc., for your searches.

    Kris,
    I believe the only APIs that are currently not supported for ReplicatedCache(s) are "addIndex" and "removeIndex". The query methods "keySet(Filter)" and "entrySet(Filter, Comparator)" are fully implemented.
    The reason the index functionality was "pushed" out of 2.x timeframe was an assumption that ReplicatedCache would hold a not-too-big number of entries and since all the data is "local" to the querying JVM the performance of non-indexed iterator would be acceptable. We do, however, plan to fully support the index functionality for ReplicatedCache in our future releases.
    Unless I misunderstand your design, since the com.tangosol.net.NamedCache interface extends com.tangosol.util.QueryMap there is no reason to wrap the NamedCache created by the ReplicatedCache service (i.e. returned by CacheFactory.getReplicatedCache method) using the NearCache construct.
    Gene
