Distributed Queries w/interMedia

Does interMedia support simple distributed queries such as the following:
select doc_id from doc_table@dblink where contains(text,'November',0)>0;

quote: Originally posted by Paul Dixon ([email protected]):
This does not work so far in >= 8i.
I finally figured this one out. Add @dblink between "contains" and the "(". Works fine like this against 8.1.7.
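For reference, the corrected statement would look like this (a sketch reusing the table, column, and dblink names from the original post):
select doc_id from doc_table@dblink where contains@dblink(text,'November',0)>0;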

Similar Messages

  • Xml data type is not supported in distributed queries. Remote object 'OPENROWSET' has xml column(s).

    Hi,
    Can anyone help me out, please?
    I have written a stored procedure to create views using OPENROWSET (OPENQUERY), but for tables that contain xml data types it throws an error while executing the SP:
    "Xml data type is not supported in distributed queries. Remote object 'OPENROWSET' has xml column(s)."
    Please refer to the stored procedure and error message below.
    USE [Ice]
    GO
    /****** Object:  StoredProcedure [dbo].[Pr_DBAccess]    Script Date: 08/14/2014 16:08:20 ******/
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    ALTER PROCEDURE [dbo].[Pr_DBAccess]
    (
        @SERVERTYPE   NVARCHAR(50),
        @SERVERNAME   NVARCHAR(100),
        @DATABASENAME NVARCHAR(100),
        @SCHEMANAME   NVARCHAR(100),
        @TABLENAME    NVARCHAR(100),
        @USERNAME     NVARCHAR(100),
        @PASSWORD     NVARCHAR(100)
    )
    AS
    BEGIN
        DECLARE @openquery      NVARCHAR(4000),
                @ETL_CONFIG_IDN NVARCHAR(100);
        IF @SERVERTYPE = 'SQL'
        BEGIN
            -- Build a CREATE VIEW statement that selects from the remote table via OPENROWSET
            SET @openquery =
                'CREATE VIEW ' + @TABLENAME +
                ' WITH ENCRYPTION AS SELECT * FROM OPENROWSET(''SQLNCLI'',''SERVER=' + @SERVERNAME +
                ';TRUSTED_CONNECTION=YES;'',''SELECT * FROM ' + @DATABASENAME + '.' + @SCHEMANAME + '.' + @TABLENAME + ''')';
            SELECT @openquery;
        END
        EXECUTE sp_executesql @openquery;
    END
    ---- While running the SP manually, the error below occurred

    Hi,
    1. You cannot use a table or view that contains an xml or CLR type as a 4-part name in your query.
    2. You need to cast the column to nvarchar(max), varbinary(max), or another appropriate type to use it.
    3. If you have a table that has an xml column, for example, then you need to create a view that contains all columns other than the xml one and query it instead (a sketch is shown further below). Or you can issue a pass-through query using OPENQUERY with the appropriate columns only.
    Here is a workaround:
    SELECT
          Cast(a.XML_Data as XML) as XML_Data
    FROM
          OPENQUERY([LINKED SERVER NAME HERE],'
              SELECT
                Cast(XML_Data as varchar(max)) as XML_Data
             FROM
                [DATABASE NAME].[SCHEMA].[TABLE NAME]'
    ) a
    Basically, the query runs on the remote server, converts the XML data to varchar, sends the data to the requesting server, and then converts it back to XML there.
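    Alternatively, for point 3 above, you can create a view on the remote server that leaves out the xml column and then query it with a four-part name. A rough sketch, where every object and column name is a placeholder:
    -- On the remote server: expose every column except the xml one
    CREATE VIEW dbo.MyTable_NoXml
    AS
    SELECT Id, Name, CreatedDate   -- placeholder column list; the xml column is omitted
    FROM dbo.MyTable;
    GO
    -- On the local server: the four-part name now works against the view
    SELECT *
    FROM [LINKED SERVER NAME HERE].[DATABASE NAME].dbo.MyTable_NoXml;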
    For more help, see the link below:
    http://social.msdn.microsoft.com/Forums/sqlserver/en-US/c6e0f4da-821f-4ba2-9b01-c141744076ef/xml-data-type-not-supported-in-distributed-queries?forum=transactsql
    Thanks

  • OLE DB provider 'MSOLAP' cannot be used for distributed queries because the provider is configured to run in single-threaded

    Hopefully this will save somebody some trouble.
    Running 64bit Enterprise SQL and SSAS with Service pack 2 installed.
    Also running Proclarity so 32bit mode Reporting Services is running.
    When trying to create a linked server to my OLAP database I was continually getting the following Error:
    OLE DB provider 'MSOLAP' cannot be used for distributed queries because the provider is configured to run in single-threaded apartment mode. (Microsoft SQL Server, Error: 7308)
    Many posts suggested I select the "in Proc" check box under the olap provider, but this did not help.
    Finally, instead of using the IDE to create the linked server I used a script to call sp_addlinkedserver and used @provider='MSOLAP.3'.  This fixed the problem.
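    For reference, a minimal sketch of that script (the linked server name, SSAS instance, and catalog below are placeholders, not values from this post):
    EXEC sp_addlinkedserver
         @server     = N'OLAP_LINK',       -- linked server name (placeholder)
         @srvproduct = N'',
         @provider   = N'MSOLAP.3',        -- naming the provider version explicitly is what resolved error 7308 here
         @datasrc    = N'MySsasServer',    -- SSAS instance (placeholder)
         @catalog    = N'MySsasDatabase';  -- SSAS database (placeholder)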
    If you have a clearer idea of why I was having the issue in the first place, feel free to let me know what you think.

    Try this thread:
    https://social.msdn.microsoft.com/Forums/sqlserver/en-US/f02a3921-7d0b-4038-97bb-9f17d381b345/linked-ssas-server-fails?forum=sqlanalysisservices

  • Facing problem in distributed queries over Oracle linked server

    Hi,
    I have a SQL Server 2005 x64 Standard Edition SP3 instance. On this instance, distributed (four-part) queries over an Oracle linked server were running fine just a few hours back, but now they have started taking too long. They seem to work fine when OPENQUERY is used. I have a huge number of queries using the same mechanism, and it is not feasible for me to convert all of them to OPENQUERY. Please help in getting this resolved.
    Thanks in advance.
    Thanks in advance.

    Hi Ashutosh,
    According to your description, you face performance issues with distributed queries and it is not feasible for you to convert all queries to OPENQUERY. To improve the performance, you could follow the solutions below:
    1. Make sure that you have a high-speed network between the local server and the linked server.
    2. Use the driving_site hint. The driving_site hint forces query execution to be done at a different site than the initiating instance. This is done when the remote table is much larger than the local table and you want the work (join, sorting) done remotely to save the back-and-forth network traffic. In the following example, we use the driving_site hint to force the "work" to be done on the site where the huge table resides:
    select /*+ DRIVING_SITE(h) */ ename
      from tiny_table t,
           huge_table@remote h
     where t.deptno = h.deptno;
    3. Use views. For instance, you could create a view referencing the remote tables and query the remote data through the local view, as in the following example.
    create view local_cust as select * from cust@remote;
    4. Use procedural code. On rare occasions it can be more efficient to replace a distributed query with procedural code, such as a PL/SQL procedure or a precompiler program (see the sketch after this list).
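    As a rough illustration of point 4 (the cust@remote table and the region filter are purely illustrative, not from this thread), pull the remote rows across the link once in bulk and then process them locally:
    DECLARE
      TYPE t_rows IS TABLE OF cust@remote%ROWTYPE;
      l_rows t_rows;
    BEGIN
      -- One bulk fetch across the database link
      SELECT * BULK COLLECT INTO l_rows
        FROM cust@remote
       WHERE region = 'WEST';
      -- Remaining joins/processing happen locally
      FOR i IN 1 .. l_rows.COUNT LOOP
        NULL;  -- local processing goes here
      END LOOP;
    END;
    /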
    For more information about the process, please refer to the article:
    http://www.dba-oracle.com/t_sql_dblink_performance.htm
    Regards,
    Michelle Li

  • Linked Server and Distributed Queries  in Oracle

    In MSSQL, Linked Servers and Distributed Queries provide SQL Server with access to data from remote data sources. How about in Oracle?
    I have table A on server A and table B on server B, and I want to join these two tables together. How can I do this in Oracle?

    Use a database link: http://www.stanford.edu/dept/itss/docs/oracle/10g/server.101/b10759/statements_5005.htm
    For instance, if you have created on database A a link to database B with name 'database_b'
    you can use
    select * from table@database_b
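    A minimal sketch of the whole flow (the credentials, TNS alias, and table/column names below are placeholders):
    -- On database A: create the link to database B
    CREATE DATABASE LINK database_b
      CONNECT TO scott IDENTIFIED BY tiger   -- placeholder credentials
      USING 'DB_B_TNS_ALIAS';                -- placeholder TNS alias for server B
    -- Join the local table A with the remote table B over the link
    SELECT a.id, a.name, b.amount
      FROM table_a a
      JOIN table_b@database_b b ON b.id = a.id;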

  • Ad Hoc Distributed Queries

    Hello Experts
    I was trying to enable 'Ad Hoc Distributed Queries' using
    sp_configure 'show advanced options', 1;
    RECONFIGURE;
    sp_configure 'Ad Hoc Distributed Queries', 1;
    RECONFIGURE;
    GO
    but I am getting the following error. Please advise, thank you.
    Msg 5833, Level 16, State 1, Line 1
    The affinity mask specified is greater than the number of CPUs supported or licensed on this edition of SQL Server

    Hi,
    Why a duplicate post? Please avoid this:
    http://social.technet.microsoft.com/Forums/en-US/2ebf1d6e-ffe3-41bf-b741-5f1e4f08f46e/ad-hoc-distributed-queries?forum=sqldatabaseengine
    Please refer to the marked answer in the thread below:
    http://social.msdn.microsoft.com/Forums/sqlserver/en-US/700bd948-1a1d-4912-ac6d-723d4478bd55/license-issues-when-virtualizing-a-sql-2008-onto-windows-2003?forum=sqlsetupandupgrade

  • Distributed queries+pipelined table function

    Hi friends,
    Can I get better performance for distributed queries if I use a pipelined table function? I have my data distributed across three different databases.
    Thanks,
    Somy

    You will need to grant EXECUTE access on the pipelined table function to whatever users want it. When other users call this function, they may need to prefix the schema owner (i.e. <<owner>>.getValue('001') ) unless you've set up the appropriate synonym.
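    For example (the app_owner schema and report_user grantee are placeholders; getValue is the function name from the post):
    GRANT EXECUTE ON app_owner.getValue TO report_user;
    -- From the other schema, prefix the owner when calling the pipelined function
    SELECT * FROM TABLE(app_owner.getValue('001'));
    -- Or create a synonym so the prefix is no longer needed
    CREATE SYNONYM getValue FOR app_owner.getValue;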
    What version of SQL*Plus do you have on the NT machine?
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC

  • Distributed Queries - Index Server

    Using Kodo, is it possible to perform distributed queries, that is, to combine a standard tabular SQL query with one that queries an index server?
    I suppose the real question is: is it possible to query a full text index using Kodo?

    Kodo doesn't provide any built-in support for querying a separate
    server for a particular JDOQL query. If the index server has a JDBC API,
    then it wouldn't be too difficult to issue the query using a separate
    PMF for the index server, and then manually join the results to get back
    the appropriate objects from the main database.
    There are also a bunch of interesting things you can do with custom
    field/class mappings; you might want to investigate these APIs
    (preferably in 3.0, where they are more sophisticated).
    Finally, the next release of 3.0 will contain a new "textindex" sample,
    which demonstrates how you might roll your own full text index purely in
    JDO.
    Marc Prud'hommeaux [email protected]
    SolarMetric Inc. http://www.solarmetric.com

  • Tuning Distributed Queries

    Hello to all,
    I have a new scenario; please help me with how to handle it, or what tool or technique is appropriate for it.
    My scenario relates to distributed queries. I want to tune this process. My working environment is Windows Server 2003 with 11g.
    I run a query from one server which involves db_links to extract data from two other databases. But I observe that the query runs on the first node and then on the second node. I want to run this process in parallel, so that both servers are utilized at the same time, but using db_links in the query does not solve the problem. Could Oracle parallel server be the solution, or is there another? Please help me with which architecture would be appropriate in this scenario.
    Regards,
    Adeel

    Yes, I have started this thread. I don't know what the problem is with my OTN user ID. Sometimes it shows "ORACLE student" and sometimes my ID.
    -- sum(store_code) is calculated on the local server, Node0
    select sum(store_code) from
    (
      -- this query runs on one server, say Node1
      select * from PK0113_jt@D2_9_2010634013255218206180
      inner join PK0113_dg0@D2_9_2010634013255218206180 on PK0113_dg0.dg0_id = PK0113_jt.dg0_id
      where store_code = 113
      union all
      -- this query runs on another server, say Node2
      select * from PK113S_jt@D2_12_2010634015728553161367
      inner join PK113S_dg0@D2_12_2010634015728553161367 on PK113S_dg0.dg0_id = PK113S_jt.dg0_id
      where store_code = 102
    ) t
    I have sent this request from the local server to two stores whose data reside on two different servers. In my case the first request goes to Node1 and then to Node2, and finally the answer is returned; the execution plan shows REMOTE in the plan.

  • The 'Contain' Queries on Intermedia ORDDOC fails

    Hi
    We are using interMedia to manage multimedia content. We store a lot of documents using ORDDoc.
    We were able to index the content using
    "create index contentstoreindex on CONTENTSTORE(CNTSTR_FILE.Comments) indextype is ctxsys.context"
    Now that we have loaded some documents, the select statement does not return any rows:
    SELECT SCORE(1), CNTSTR_SUBJECT
         FROM CONTENTSTORE WHERE CONTAINS(CNTSTR_FILE.Comments, 'test', 1) > 0;
    No error, no rows fetched.
    (There are 10 documents containing the string 'test' in the database.)
    Can someone please help with what is wrong and what else we need to do?
    Thanks

    Some more info: when I changed my indexing method, the following is the dump.
    SQL> create index contentstoreindex on CONTENTSTORE(CNTSTR_FILE.Comments)
    2 indextype is ctxsys.context
    3 parameters ('filter ctxsys.inso_filter');
    Index created.
    SQL> SELECT SCORE(1), CNTSTR_SUBJECT, CNTSTR_OID
    2 FROM CONTENTSTORE
    3 WHERE CONTAINS(CNTSTR_FILE.Comments, 'Test', 1) > 0 ;
    WHERE CONTAINS(CNTSTR_FILE.Comments, 'Test', 1) > 0
    ERROR at line 3:
    ORA-00904: invalid column name

  • Optimizing Distributed Queries

    Hello All,
    We have a serious problem optimizing a job that fetches data (200k+ rows) through views that reside in a remote db and inserts it into a local table. We run this job from the local db and, due to constraints, we can neither create any object in the remote db nor do we have select access to the tables in the remote db. We were given a grant to select from the remote views built on the remote base tables. How do we optimize the job? We tried 2 methods; neither was fast (over an hour each).
    example 1: (using driving_site hint & append hint)
    begin
    for irec in (Select /*+ driving_site (c)(d) */ c.customer_id, c.customer_name, d.dept_id, d.dept_name
    from
    customers_view@remotedb c, departments_view@remotedb d
    where d.unique_id = c.unique_id)
    loop
    insert /*+ append */
    into local_table (cust_id, cust_name, dept_id, dept_name)
    values
    (irec.customer_id,
    irec.customer_name,
    irec.dept_id,
    irec.dept_name);
    end loop;
    commit;
    end;
    example 2: (conventional insert with append hint; the driving_site hint will not work here)
    insert /*+ append */
    into local_table (cust_id, cust_name, dept_id, dept_name)
    Select c.customer_id, c.customer_name, d.dept_id, d.dept_name
    from
    customers_view@remotedb c, departments_view@remotedb d
    where d.unique_id = c.unique_id;
    Limitations:
    1) We do not have the privilege to run explain plan for the remote objects, so whatever we do, we have no clue whether it will improve performance.
    2) The job fetches data only from remote objects (views) and no local objects.
    3) We are not allowed to create any object in the remote db (we will never get a grant for that, so no second thought about creating objects in the remote db to increase performance).
    If anyone has encountered similar problems or has any suggestions to optimize this, please do help us out. Thank you all in advance.

    Dev_Indy wrote:
    "Thanks Tubby for your suggestion, will give it a shot for sure and let you know how it worked!!"
    No problem.
    Please do let us know how that works out for you :)

  • Distributed Queries

    I want to query data from 2 tables residing on another Oracle database, based on a date value that is a variable.
    The SQL statement below works:
    select t1.market_cd, t1.NT_LOC_ENTITY_CD, t1.NTI_NO
    FROM customer_order@phoenix t1, Customer_Order_line@phoenix t2
    WHERE t1.nt_LOC_ENTITY_CD = '515'
    and t1.NTI_NO NOT LIKE 'Y%'
    AND customer_po NOT LIKE 'TD0000%'
    AND customer_po NOT LIKE 'TDMN%'
    and Customer_NO NOT LIKE '20352%'
    and t1.nti_no = t2.nti_no AND
    t1.bo_no = t2.Bo_no AND
    contract_annix not like 'B06%' AND
    contract_annix not like 'B17%' AND
    LINE_ITEM_SEQ_NO='0000' AND
    t2.actual_ship_date IS NULL AND
    (t2.Orig_sched_ship_date = '30-DEC-00' OR t2.Orig_cust_req_date <= '30-DEC-00');
    When I try to replace the hardcoded date '30-DEC-00' above with a variable, as shown below, I get errors.
    As BEGIN
    SELECT sysdate INTO todaysDate from Dual;
    INSERT INTO Uma
    select t1.market_cd, t1.NT_LOC_ENTITY_CD, t1.NTI_NO
    FROM customer_order@phoenix t1, Customer_Order_line@phoenix t2
    WHERE t1.nt_LOC_ENTITY_CD = '515'
    and t1.NTI_NO NOT LIKE 'Y%'
    AND customer_po NOT LIKE 'TD0000%'
    AND customer_po NOT LIKE 'TDMN%'
    and Customer_NO NOT LIKE '20352%'
    and t1.nti_no = t2.nti_no AND
    t1.bo_no = t2.Bo_no AND
    contract_annix not like 'B06%' AND
    contract_annix not like 'B17%' AND
    LINE_ITEM_SEQ_NO='0000' AND
    t2.actual_ship_date IS NULL AND
    (t2.Orig_sched_ship_date = todaysDate OR t2.Orig_cust_req_date <= '30-DEC-00');
    end;
    Can anyone tell me how I can rewrite this query to use variables?
    Thank you.

    Hi,
    I don't know whether this is a workaround for you, but you may try:
    - Create a view at the PHOENIX db as:
    CREATE OR REPLACE
    VIEW Customer_Full_Orders
    AS
    SELECT t1.market_cd,
    t1.nt_loc_entity_cd,
    t1.nti_no,
    customer_po,
    customer_no,
    contract_annix,
    line_item_seq_no,
    t2.actual_ship_date,
    t2.orig_sched_ship_date,
    t2.orig_cust_req_date
    FROM customer_order t1,
    customer_order_line t2
    WHERE t1.nti_no = t2.nti_no
    AND t1.bo_no = t2.bo_no
    /
    If the filter conditions can be hard-coded, you may skip some fields and include the filter in the view.
    - Recreate the procedure as:
    DECLARE
    v_Filter_Date DATE := SYSDATE;
    BEGIN
    INSERT
    INTO uma
    SELECT r.market_cd,
    r.nt_loc_entity_cd,
    r.nti_no
    FROM Customer_Full_Orders@PHOENIX r
    WHERE r.nt_loc_entity_cd = '515'
    AND r.nti_no NOT LIKE 'Y%'
    AND r.customer_po NOT LIKE 'TD0000%'
    AND r.customer_po NOT LIKE 'TDMN%'
    AND r.contract_annix NOT LIKE 'B06%'
    AND r.contract_annix NOT LIKE 'B17%'
    AND r.actual_ship_date IS NULL
    AND (r.orig_sched_ship_date = v_Filter_Date
    OR r.orig_cust_req_date <= v_Filter_Date);
    END;
    /
    Hope this is useful.
    Bye Max

  • Why is it only possible to run queries on a Distributed cache?

    I found by experimentation that if you put a NearCache (only for the benefit of its QueryMap functions) on top of a ReplicatedCache, it will throw a runtime exception saying that the query operations are not supported on the ReplicatedCache.
    I understand that the primary goal of the QueryMap interface is to be able to do large, distributed queries on the data across machines in the cluster. However, there are definitely situations where it is useful (such as in my application) to be able to run a local query on the cache to take advantage of the index APIs, etc, for your searches.

    Kris,
    I believe the only API that is currently not supported for ReplicatedCache(s) is "addIndex" and "removeIndex". The query methods "keySet(Filter)" and "entrySet(Filter, Comparator)" are fully implemented.
    The reason the index functionality was "pushed" out of 2.x timeframe was an assumption that ReplicatedCache would hold a not-too-big number of entries and since all the data is "local" to the querying JVM the performance of non-indexed iterator would be acceptable. We do, however, plan to fully support the index functionality for ReplicatedCache in our future releases.
    Unless I misunderstand your design, since the com.tangosol.net.NamedCache interface extends com.tangosol.util.QueryMap there is no reason to wrap the NamedCache created by the ReplicatedCache service (i.e. returned by CacheFactory.getReplicatedCache method) using the NearCache construct.
    Gene

  • Report with Multiple queries too slow in BI Publisher 11g

    Hi, I have a report in 11g where I need to create multiple queries to show in the report. I tried to combine everything into one query, but I found that the query is too huge and hard to understand and maintain. I created 3 data sets and linked them together. In SQL Developer, the main query returns about 315 records, the first detail query returns less than 100 records, and the second detail query returns one record, which is a BLOB. Each query returns data within a couple of seconds from SQL Developer. The entire report from BI Publisher should be just a 21-page PDF output. Did anyone face performance issues while running reports with multiple queries in 11g? I ran reports with a single query that returned a 10K-page PDF and never had an issue while everything was in one query. This is the first time I am attempting to create multiple queries. Can someone help me understand what I might be doing wrong or missing here? Thank you.

    Isn't there a way for you to do this via a Package/Procedure versus having multiple queries?
    Per the BI Publisher guide,
    Following are recommended guidelines for building data models:
    Reduce the number of data sets or queries in your data model as much as possible. In general, the fewer data sets and queries you have, the faster your data model will run. While multiquery data models are often easier to understand, single-query data models tend to execute more quickly. It is important to understand that in parent-child queries, for every parent, the child query is executed.
    You should only use multiquery data models in the following scenarios:
    To perform functions that the query type, such as a SQL query, does not support directly.
    To support complex views (for example, distributed queries or GROUP BY queries).
    To simulate a view when you do not have or want to use a view.
    Thanks,
    Bipuser

  • Matrix report with Multiple queries

    I have created one cross product group containing 4 subgroups under a single query. Now I have another query, separate from the matrix query, to be joined with it. I also have one formula column and a summary column within the cross product group. The logic is: if the formula condition is true, it should take the 2nd query's value in the placeholder column; otherwise it should take the first query's value. Is this possible?
    If anyone knows about this, please update me ASAP at [email protected]

