XMB2DB_XML queries with join

Using XMB2DB_XML I have done some simple queries such as SELECT, INSERT and UPDATE in SAP XI 2.0.
Now I want to run queries involving joins.
Does anyone have an idea of the XML document format for the XMB2DB_XML mode in this case?
Abhijeet
[email protected]

Hi,
You will probably need to create the join on the database side (for example as a database view) and use a plain SELECT within XMB2DB_XML.
XMB2DB_XML does not support joins across tables.
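A minimal sketch of this database-side approach, using hypothetical ORDERS and CUSTOMERS tables (adjust names and columns to your schema) - once the view exists, the SELECT sent via XMB2DB_XML only needs to address a single object:

CREATE VIEW v_order_customer AS
SELECT o.order_id,
       o.order_date,
       c.customer_id,
       c.customer_name
FROM   orders o
JOIN   customers c ON c.customer_id = o.customer_id;

-- The XMB2DB_XML SELECT can then treat the view like one table:
SELECT order_id, order_date, customer_name
FROM   v_order_customer
WHERE  customer_id = 4711;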
Regards,
Bill

Similar Messages

  • Bug-fix needs your vote: queries w/ joins against SQLite return incorrect values because Adobe treats PK col as alias for rowid when it should not

    For someone new to Adobe, the forums and products can be bewildering. I've been advised to repost here something I originally posted in the Flash Data Integration forum.
    Here is the link to the post I put there:
    http://forums.adobe.com/message/2363777#2363777
    I have reported this bug: http://bugs.adobe.com/jira/browse/FB-23750
    I gather bugs get fixed if people vote for them, so please vote for this one. It is serious, and you might not even realize you're affected, because the incorrect values returned by the query will seem perfectly plausible.
    If the link above doesn't work, here it is again:
    When I execute the following query in Flex and/or Lita:
    select wrdid, uspelling from WRD WHERE uspelling = 'wingeard'
    the results are:
    uspelling   wrdid
    wingeard    3137
    Look at what comes back when I execute this query using the .NET SQLite provider by Robert Simpson and SQLite Manager by Mrinal Kant:
    SELECT     rowid, wrdid, uspelling
    FROM         WRD
    WHERE     (uspelling = 'wingeard')
    rowid    wrdid    uspelling
    3137     3042     wingeard
    No wonder none of my queries with joins is working correctly in Flex.
    wrdid is defined as "int" not INTEGER.
    http://www.sqlite.org/lang_createtable.html (see INTEGER PRIMARY KEY section):
    "The special behavior of INTEGER PRIMARY KEY is only available if the type name is exactly "INTEGER" (in any mixture of upper and lower case.)  Other integer type names like "INT" or "BIGINT" or "SHORT INTEGER" or "UNSIGNED INTEGER" causes the primary key column to behave as an ordinary table column with integer affinity and a unique index, not as an alias for the rowid."  [emphasis added]
    Now, I happen to think the SQLite developers made a mistake here in failing to follow standards, preferring not to break legacy code. They'd rather break current code instead? I would not characterize this as a "corner case", and the bug at hand is de facto evidence of that.
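    For anyone who wants to reproduce the difference locally, here is a minimal SQLite sketch (hypothetical table names) of the behavior the documentation describes:
    -- id declared as INTEGER PRIMARY KEY becomes an alias for rowid
    CREATE TABLE t_alias (id INTEGER PRIMARY KEY, name TEXT);
    -- id declared as "int" PRIMARY KEY stays an ordinary column with a unique index
    CREATE TABLE t_noalias (id int PRIMARY KEY, name TEXT);
    INSERT INTO t_alias (id, name) VALUES (3042, 'wingeard');
    INSERT INTO t_noalias (id, name) VALUES (3042, 'wingeard');
    SELECT rowid, id FROM t_alias;    -- rowid = 3042, id = 3042 (same storage)
    SELECT rowid, id FROM t_noalias;  -- rowid assigned independently (here 1), id = 3042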

    Did you try running the queries I posted? What were your results with those?
    What I am seeing is that when I use "int PRIMARY KEY" in a CREATE TABLE statement, that column becomes the special "rowid" column. I believe this is also what you are seeing.
    However, what confuses me is how you're getting a table with three columns "rowid", "id", and "name" in the first place. When I run this SQL...
    CREATE TABLE test (
        id int PRIMARY KEY,
        name String
    )
    ...I get a table with two columns: a normal column named "name", and a special primary key column named "id", which for this table is identical to the column represented by the rowid identifier.
    However, if I understand correctly, your table has three columns, "id", "name", and the special primary key column (i.e. "rowid"). Is that right? Can you give me the SQL that was used to create the table, or tell me how the table was created (e.g. if you used a tool like Lita) so I can try to re-create your exact situation? That would really be very very helpful -- it was the only detail that was missing in your last post, so I had to guess on that one detail.
    I tried something else to re-create your situation. I ran the following statement:
    CREATE TABLE test (
        id int,
        name String
    )
    That gave me a table with two real columns plus the rowid column. Then I ran the three insert statements on that table, and when I ran the select statement I got the expected result:
    id     name
    1     one
    2     two
    7     seven
    Again, I'm guessing that your table was created differently than my test table, and that's the explanation for the difference.
    Some other possibilities to consider:
    The screen shot doesn't show a SQLResult object, so it seems that you're using some wrapper library or code to execute the query, or at least that you've copied the SQLResult.data Array to another variable named results. Although it seems less likely to me, it's possible that somewhere in that code something is getting scrambled. (But I'd rather rule out AIR as the underlying cause first before attempting to explore those paths.)
    As a side note, if you really want the database to have three columns (the special rowid column and your two columns id and name), and you don't want id to be the special rowid column, then it sounds to me like you don't actually want to define id as the primary key. If you just want the id column to have a constraint that prevents duplicate values, you can define it as a UNIQUE column:
    CREATE TABLE test (
        id int UNIQUE,
        name String
    )
    That gives you the same database-enforced constraint of not allowing duplicate values, but it tells the database explicitly that id isn't the same thing as the rowid primary key. (You can still join another table to the id column even if it's not defined as the primary key.)
    P.S. I'm sure you don't mean it this way, but using gigantic red text comes across like shouting -- it's very "loud". I'm trying my best to understand the issue you're having and help you resolve it, and using multiple colors and font sizes doesn't really make your post any more or less clear. Just because I ask you questions, or say that I'm seeing different results than you, doesn't mean I don't believe that you're seeing the results you're seeing. I've definitely seen strange variations and cases where something happens on my computer but others can't duplicate it on their computers -- so I believe that you are getting the results you're getting. I'm just trying to figure out how to make it so that I can also get those results, so that I can pass that on to the engineers who are in a position to make changes.

  • Joining two queries with Visual Composer

    Hello,
    I was working with Visual Composer to join two queries with the distinct operator. But in my table view, when I link it with the distinct operator, I get only the key element customer (customer, customer_ext_key, customer_key). These are the queries that I want to join:
    Query 1:
    Customer
    Net Sales
    Query 2:
    Customer
    Billed Quantity
    What I want to see is the following result:
    Customer
    Net Sales
    Billed Quantity
    Can anybody help me with this problem?
    Greetings,
    Murat.

  • Materialized View with Joins

    Dear Dev/DBAs,
    I have the following scenario:
    SQL> CREATE TABLE T1 (ID NUMBER(3),NAME VARCHAR2(10));
    SQL> CREATE TABLE T2 (ID NUMBER(3),NAME VARCHAR2(10));
    T1 contains records with IDs from 10 to 80 and T2 contains records with IDs from 90 to 170.
    SQL> SELECT * FROM T1 UNION ALL SELECT * FROM T2;
    It gives all the records in the two tables.
    I'm planning to create a materialized view (like CREATE MATERIALIZED VIEW V_TAB REFRESH ON COMMIT AS SELECT * FROM T1 UNION ALL SELECT * FROM T2), but it fails with ORA-12054; further, the Oracle documentation says that such a materialized view can only be used with a simple join.
    Do you have another solution?
    Note that materialized views can be used to improve query performance.
    Thank you in advance

    Straight from the link I posted:
    Restrictions on Fast Refresh on Materialized Views with UNION ALL: materialized views with the UNION ALL set operator support the REFRESH FAST option if the following conditions are satisfied:
    * The defining query must have the UNION ALL operator at the top level.
    The UNION ALL operator cannot be embedded inside a subquery, with one exception: The UNION ALL can be in a subquery in the FROM clause provided the defining query is of the form SELECT * FROM (view or subquery with UNION ALL) as in the following example:
    CREATE VIEW view_with_unionall AS
    (SELECT c.rowid crid, c.cust_id, 2 umarker
    FROM customers c WHERE c.cust_last_name = 'Smith'
    UNION ALL
    SELECT c.rowid crid, c.cust_id, 3 umarker
    FROM customers c WHERE c.cust_last_name = 'Jones');
    CREATE MATERIALIZED VIEW unionall_inside_view_mv
    REFRESH FAST ON DEMAND AS
    SELECT * FROM view_with_unionall;
    Note that the view view_with_unionall satisfies the requirements for fast refresh.
    * Each query block in the UNION ALL query must satisfy the requirements of a fast refreshable materialized view with aggregates or a fast refreshable materialized view with joins.
    The appropriate materialized view logs must be created on the tables as required for the corresponding type of fast refreshable materialized view.
    Note that the Oracle Database also allows the special case of a single table materialized view with joins only provided the ROWID column has been included in the SELECT list and in the materialized view log. This is shown in the defining query of the view view_with_unionall.
    * The SELECT list of each query must include a maintenance column, called a UNION ALL marker. The UNION ALL column must have a distinct constant numeric or string value in each UNION ALL branch. Further, the marker column must appear in the same ordinal position in the SELECT list of each query block.
    * Some features such as outer joins, insert-only aggregate materialized view queries and remote tables are not supported for materialized views with UNION ALL.
    * Partition Change Tracking (PCT)-based refresh is not supported for UNION ALL materialized views.
    * The compatibility initialization parameter must be set to 9.2.0 or higher to create a fast refreshable materialized view with UNION ALL.
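    Applied to the T1/T2 example above, a hedged sketch following these rules would look roughly like this (ON DEMAND refresh as in the documentation example; the ID and NAME columns come from the CREATE TABLE statements in the question). Each single-table branch carries its ROWID and a distinct marker, and each table gets a ROWID materialized view log:
    CREATE MATERIALIZED VIEW LOG ON t1 WITH ROWID;
    CREATE MATERIALIZED VIEW LOG ON t2 WITH ROWID;
    CREATE MATERIALIZED VIEW v_tab
      REFRESH FAST ON DEMAND AS
    SELECT t1.ROWID rid, 1 umarker, t1.id, t1.name FROM t1
    UNION ALL
    SELECT t2.ROWID rid, 2 umarker, t2.id, t2.name FROM t2;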

  • Spatial Performance with join

    I have an Oracle Spatial table with 3.5 million rows plus another auxiliary table with 3.5 million rows. A query joining these two tables returns a full result (250 rows) based on a one-to-one join - here's an example:
    Select count(*)
    from F, N
    where F.id = n.id
    and sdo_relate(F.GEOM,
          mdsys.sdo_geometry(2003, 8307, null, mdsys.sdo_elem_info_array(1,1003,1),
            mdsys.sdo_ordinate_array(-120.0,49.5, -119.0,49.5, -119.0,60.35, -120.0,60.35, -120.0,49.5)),
          'mask=ANYINTERACT querytype=WINDOW') = 'TRUE'
    AND N.PNUM = '4';
    It takes an average of 35 seconds to get the full result set back. I've gathered statistics, tweaked memory parameters and this is the best I can get. Does anyone have any suggestions?

    This is an interesting problem. It looks like Oracle is doing the right thing for each of the table accesses - use the index and fetch by rowid.
    The only thing you have to play with if you don't go to materialized views or temp tables is how the results of the two table queries are joined.
    You don't have a lot of options. Hash join seems to be slow, but you don't know if it is faster or slower compared with nested loops or merge join.
    I'd compare what you have done with something like the following to test nested loops:
    select /*+ no_merge use_nl(f1,n1) */ count(*)
    from
      (select id
       from   f
       where  sdo_anyinteract(F.GEOM,
                sdo_geometry(2003, 8307, null, sdo_elem_info_array(1,1003,1),
                  sdo_ordinate_array(-120.0,49.5, -119.0,49.5, -119.0,60.35,
                                     -120.0,60.35, -120.0,49.5))) = 'TRUE') f1,
      (select id
       from   n
       where  n.pnum = '4') n1
    where f1.id = n1.id;
    and presort with a merge join hint to see how it performs:
    select /*+ no_merge use_merge(f1,n1) */ count(*)
    from
      (select id
       from   f
       where  sdo_anyinteract(F.GEOM,
                sdo_geometry(2003, 8307, null, sdo_elem_info_array(1,1003,1),
                  sdo_ordinate_array(-120.0,49.5, -119.0,49.5, -119.0,60.35,
                                     -120.0,60.35, -120.0,49.5))) = 'TRUE'
       order by id) f1,
      (select id
       from   n
       where  n.pnum = '4'
       order by id) n1
    where f1.id = n1.id;
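    For completeness, a hash-join variant of the same comparison (a sketch mirroring the two queries above, using the use_hash hint) could serve as a third data point:
    select /*+ no_merge use_hash(f1,n1) */ count(*)
    from
      (select id
       from   f
       where  sdo_anyinteract(F.GEOM,
                sdo_geometry(2003, 8307, null, sdo_elem_info_array(1,1003,1),
                  sdo_ordinate_array(-120.0,49.5, -119.0,49.5, -119.0,60.35,
                                     -120.0,60.35, -120.0,49.5))) = 'TRUE') f1,
      (select id
       from   n
       where  n.pnum = '4') n1
    where f1.id = n1.id;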
    It might be that you already have the best Oracle can do, but I'd be curious to know how you make out.
    Dan Abugov
    VP Software Support and Services
    Acquis Inc.

  • EclipseLink Direct Map with Joining mapping issue

    Hi EclipseLink Team,
    We encountered an issue when mapping an attribute as a Direct Map with Join Fetch enabled in EclipseLink Workbench 1.1.1 (Build v20090430-r4097).
    Basically, we have the following data model:
    SESSION
        SESS_NO (PK)
    PARAM_SESS
        SESS_NO (FK) (PK),
        PARAM_NAME (PK),
        PARAM_VALUE
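    In SQL terms, that corresponds roughly to the following (a sketch; the column types are assumed for illustration):
    CREATE TABLE session (
        sess_no NUMBER PRIMARY KEY
    );
    CREATE TABLE param_sess (
        sess_no     NUMBER        NOT NULL REFERENCES session (sess_no),
        param_name  VARCHAR2(64)  NOT NULL,
        param_value VARCHAR2(255),
        PRIMARY KEY (sess_no, param_name)
    );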
    Then we have a Session.java persistent entity associated with the SESSION table. This class contains a Map attribute, sessionParameters, which we map as a Direct Map with PARAM_SESS.PARAM_NAME as the key and PARAM_SESS.PARAM_VALUE as the value (referenced by PARAM_SESS.SESS_NO = SESSION.SESS_NO).
    The described mapping works fine without Joining enabled (both for Lazy and Eager Loading).
    But we have cases where we want to get the parameters for a number of sessions, and with Joining disabled this leads to a large number of SELECT SQL queries (both for Lazy and Eager Loading), which is bad for performance.
    So we have chosen the Eager Loading and set the Join Fetch option to Outer. Then we have got the following:
    I see in the log the only SQL query performed: SELECT DISTINCT t0.SESS_NO, …, t1.PARAM_VALUE, t1.PARAM_NAME FROM {oj SESSION t0 LEFT OUTER JOIN PARAM_SESS t1 ON (t1.SESS_NO = t0.SESS_NO)}. This is pretty good for us. This query returns exactly what we expect when executing on the database. But the Map attribute for every session is populated incorrectly: Maps are empty (not corresponding to available relational data).
    Could you please let us know if this is a bug, or kind of known issue or we made something wrong? Some hints and proposals would be very helpful and appreciated.
    I should mention that for now we want to map all of this for read only purpose.
    Thanks.
    Best Regards,
    Alexey Elisawetski

    James,
    I've tried both the 1.2 release and 2.0 (v20091121-r5847) but received the same result - an empty Map.
    Moreover, for both versions the following line was absent from the deployed XML file:
    <direct-key-field table="PARAM_SESS" name="PARAM_NAME" xsi:type="column"/>
    Therefore, on application initialization I get an exception: org.eclipse.persistence.exceptions.DescriptorException with the message "This descriptor contains a mapping with a DirectMapMapping and no key field set".
    So I was forced to add the line manually.
    This seems buggy to me...
    Regards,
    Alexey

  • Strange situation with joins!!!

    Hi,
    I am having a weird situation with joins. I have two tables "EMPLOYER" and "ADDR". Table "ADDR" has columns "ADDRESS_ID" and "ADDRESS1".
    Table "EMPLOYER" has two columns, "BUS_ADDRESS_ID" and "MAIL_ADDRESS_ID" which are left outer joined to the "ADDR" table on "ADDRESS_ID". I have also created an alias for "ADDR" to get both addresses.
    The logical table Employer has two LTS, one with "ADDR" and one with "ADDR1". (I do not want to create one LTS with both the tables as left outer joins, that will create other problems)
    When I query EmployerID, EmployerName, ADDR.ADDRESS1, ADDR1.ADDRESS1, I get the correct results, but the query is very slow.
    I looked at the log and there is no second left outer join at all, yet the results are correct! I have attached the log sequence.
    -------------------- Logical Request (before navigation):
    RqList distinct
    EMPLOYER.EMPLOYER ID as c1 GB,
    EMPLOYER.EMPLOYER NAME as c2 GB,
    EMPLOYER.PRIMARY ADDRESS 1 as c3 GB,
    EMPLOYER.MAILING ADDRESS 1 as c4 GB
    DetailFilter: APPLIED EMPLOYER.EMPLOYER ID = 7501
    OrderBy: c1 asc, c2 asc, c3 asc, c4 asc
    +++USER1:1750000:1750004:----2009/03/04 15:22:22
    -------------------- Sending query to database named DEV (id: <<2378603>>):
    select T30717.EMPLOYER_ID as c1,
    T30717.EMPLOYER_NAME as c2,
    T44205.ADDRESS_1 as c3,
    T30717.MAIL_ADDRESS_ID as c4
    from EAPP_EMPLOYER T30717, ADDR T44205
    where ( T30717.BUS_ADDRESS_ID = T44205.ADDRESS_ID (+) ) and ( T30717.EMPLOYER_ID = 7501 )
    order by c4
    +++USER1:1750000:1750004:----2009/03/04 15:22:22
    -------------------- Sending query to database named DEV (id: <<2378624>>):
    select T43657.ADDRESS_1 as c1,
    T43657.ADDRESS_ID as c2
    from
    ADDR T43657
    order by c2
    +++USER1:1750000:1750004:----2009/03/04 15:22:30
    -------------------- Query Status: Successful Completion
    +++USER1:1750000:1750004:----2009/03/04 15:22:30
    -------------------- Rows 1, bytes 160 retrieved from database query id: <<2378603>>
    +++USER1:1750000:1750004:----2009/03/04 15:22:30
    -------------------- Physical query response time 0 (seconds), id <<2378603>>
    +++USER1:1750000:1750004:----2009/03/04 15:22:30
    -------------------- Rows 0, bytes 0 retrieved from database query id: <<2378624>>
    +++USER1:1750000:1750004:----2009/03/04 15:22:30
    -------------------- Physical query response time 0 (seconds), id <<2378624>>
    +++USER1:1750000:1750004:----2009/03/04 15:22:30
    -------------------- Physical Query Summary Stats: Number of physical queries 2, Cumulative time 0, DB-connect time 0 (seconds)
    +++USER1:1750000:1750004:----2009/03/04 15:22:30
    -------------------- Rows returned to Client 1
    +++USER1:1750000:1750004:----2009/03/04 15:22:30
    -------------------- Logical Query Summary Stats: Elapsed time 8, Response time 7, Compilation time 0 (seconds)
    Has anyone come across this situation? I have used the same concept in the same RPD for another model, but that used Oracle 9i as the datasource. This one uses Oracle 8i.
    Thanks
    rkingmdu

    1) Have you used different joins for the 2 versions of the addr table (original and alias)?
    i.e. one joins on bus_addr_id to employer and
    the other joins on mail_addr_id to employer - YES
    2) Do both have a left outer join specified? - YES
    3) Can you run the employee, business_address query once,
    then run the employee, mail_address query once separately, to see if the individual queries give correct results? - YES, I get the correct results
    4) Also check whether business_address and mail_address in the logical employer table are mapped to different LTSs - YES, they are

  • Hi! Everyone, I have some problems with JOIN and Sub-query; Could you help me, Please?

    Dear Sir/Madam
    I'm a student who is interested in Oracle Database and
    I have some problems with JOIN and Sub-query.
    I hope some of you can help me.
    If I use a JOIN without a sub-query, will it be faster or not?
    SELECT field1, field2 FROM tableA INNER JOIN tableB
    If I use a JOIN with a sub-query, will it be faster or not?
    SELECT field1, field2, field3 FROM tableA INNER JOIN (SELECT field1, field2 FROM tableB)
    Thanks in advance!

    Hi,
    fac30d8e-74d3-42aa-b643-e30a3780e00f wrote:
    Dear Sir/Madam
    I'm a student who is interested in Oracle Database and
    I have some problems with JOIN and Sub-query.
    I hope some of you can help me.
    If I use a JOIN without a sub-query, will it be faster or not?
    SELECT field1, field2 FROM tableA INNER JOIN tableB
    If I use a JOIN with a sub-query, will it be faster or not?
    SELECT field1, field2, field3 FROM tableA INNER JOIN (SELECT field1, field2 FROM tableB)
    Thanks in advance!
    As the others have said, the execution plan will give you a better idea about which is faster.
    If you're trying to see how using (or not using) a sub-query affects performance, make the rest of the queries as similar as possible.  For example, include field3 in both queries, or ignore field3 in both queries.
    In this particular case, I guess the optimizer would do the same thing either way, but that's just a guess.  I can't see your execution plans.
    In general, simpler code is faster, and better in other ways, too.  In this case
    tableB
    is simpler than
    (SELECT field1, field2  FROM tableB)
    Why do you want a sub-query in this example?
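    If you do want to compare them yourself, a quick sketch is to look at the plan Oracle produces for each form (the ON clause below is assumed, since your examples omit one, and the select lists are kept identical as suggested above):
    EXPLAIN PLAN FOR
    SELECT a.field1, a.field2
    FROM   tableA a
    INNER JOIN tableB b ON b.field1 = a.field1;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

    EXPLAIN PLAN FOR
    SELECT a.field1, a.field2
    FROM   tableA a
    INNER JOIN (SELECT field1, field2 FROM tableB) b ON b.field1 = a.field1;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);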

  • Linked Server - Getting Error when performing Cross Instance Query with joins

    Hi Guys,
    I've successfully created a Linked Server that connects a local DB Engine with another DB Engine through an ip over an extranet. I am able to run simple Select statement queries on the Local DB
    Engine and get results from the linked server. However when attempting to perform more complex queries that join tables from
    the linked server with tables from the local DB server, I get the following error message after several minutes of execution:
    OLE DB provider "SQLNCLI11" for linked server "<ip of Linked Server>" returned message "Protocol error in TDS stream".
    OLE DB provider "SQLNCLI11" for linked server "<ip of Linked Server>"
    returned message "Communication link failure".
    Msg -1, Level 16, State 1, Line 0
    Session Provider: Physical connection is not usable [xFFFFFFFF].
    OLE DB provider "SQLNCLI11" for linked server "<ip of Linked Server>"
    returned message "Communication link failure".
    Msg -1, Level 16, State 1, Line 0
    Session Provider: Physical connection is not usable [xFFFFFFFF].
    OLE DB provider "SQLNCLI11" for linked server "<ip of Linked Server>"
    returned message "Communication link failure".
    Msg 10054, Level 16, State 1, Line 0
    TCP Provider: An existing connection was forcibly closed by the remote host.
    I would be grateful if you could advise what may be causing this issue and how I can resolve it. I've read about distributed transactions, but I understand they only apply to data manipulation statements?
    Both are SQL servers. Linked Server is SQL2008R2 if not mistaken. Local DB Engine is SQL2014.
    Thanks and Regards,
    Rhyan.

    Thank you for your insight, Erland. Can you advise how I can check the SQL Server errorlog, please?
    I just got word from a user that the query ran successfully this morning but took 15 minutes to complete. Does that ring a bell in terms of issues that may be causing this?
    This kind of behaviour would suggest something makes the query "time out" or stop abruptly. How can I investigate it further? Could a BEGIN DISTRIBUTED TRANSACTION statement in the query help me mitigate or resolve the error? ref: https://msdn.microsoft.com/en-us/library/ms188386.aspx?f=255&MSPPError=-2147217396
    SQL Version for Remote Server is : 
    Microsoft SQL Server 2012 (SP1) - 11.0.3000.0 (X64) Oct 19 2012 13:38:57 Copyright
    (c) Microsoft Corporation Standard Edition (64-bit) on Windows NT 6.2 <X64> (Build 9200: ) (Hypervisor)
    SQL Version for Local Server is:
    Microsoft SQL Server 2014 - 12.0.2000.8 (X64) Feb 20 2014 20:04:26 Copyright
    (c) Microsoft Corporation Enterprise Edition (64-bit) on Windows NT 6.3 <X64> (Build 9600: ) (Hypervisor)
    Thanks and Regards,
    Rhyan
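    A note for completeness: one common way to read the errorlog from T-SQL (a sketch; run it on the instance whose log you want to inspect, or browse Management > SQL Server Logs in SSMS) is:
    EXEC sp_readerrorlog 0, 1;                    -- current errorlog, SQL Server log
    EXEC sp_readerrorlog 0, 1, N'Communication';  -- same log, filtered on a search string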

  • How to query with a bind variable in findMode?

    Hello,
    Is it possible to use a query with a bind variable in findMode? And if so, how can the bind variable be initialized?
    Any help is appreciated.

    Hi,
    Bind variables can be linked from the ADF binding to e.g. a managed bean, the sessionScope or other binding attributes via EL. So when you go into findMode and execute the query, the EL will grab the bind variable values automatically.
    Frank

  • Multiple queries with 1 connection

    Can I execute multiple queries with one connection?
    //Example -
    <%
    String firstconn;
    Class.forName("org.gjt.mm.mysql.Driver");
    // create connection string
    firstconn = "jdbc:mysql://localhost/profile?user=mark&password=mstringham";
    // pass database parameters to JDBC driver
    Connection aConn = DriverManager.getConnection(firstconn);
    // query statement
    Statement firstSQLStatement = aConn.createStatement();
    String firstquery = "UPDATE auth_users SET last_log='" + rightnow + "' WHERE name='" + username + "' ";
    // get result code
    int firstSQLStatus = firstSQLStatement.executeUpdate(firstquery);
    // close statement (the connection aConn remains open for further queries)
    firstSQLStatement.close();
    %>     
    Now, instead of building a new connection for each query, can I use the same connection info for another query?
    if so - how do you do this?
    thanks for any help.
    Mark

    Create multiple Statement objects from your connection. It's a good idea to close these in a finally block after you're done with them:
    Connection conn = null;
    Statement stmt1 = null;
    Statement stmt2 = null;
    try {
        conn = DriverManager.getConnection(firstconn); // reuse the same JDBC URL as above
        stmt1 = conn.createStatement();
        // some sql here
        stmt2 = conn.createStatement();
        // some more sql here
    } finally {
        if ( stmt1 != null ) stmt1.close();
        if ( stmt2 != null ) stmt2.close();
    }

  • Fast Refresh on Materialized View With Join

    Hi All,
    I have created the following materialized view logs and MV
    on EMP and DEPT:
    CREATE MATERIALIZED VIEW LOG ON DEPT
    WITH PRIMARY KEY,SEQUENCE
    INCLUDING NEW VALUES;
    CREATE MATERIALIZED VIEW LOG ON EMP
    WITH PRIMARY KEY,SEQUENCE
    INCLUDING NEW VALUES;
    CREATE MATERIALIZED VIEW EMP_MVW
    TABLESPACE OSMIS_REPORT_DATA
    REFRESH FAST ON COMMIT
    AS
    SELECT A.EMPNO,A.ENAME,A.DEPTNO,A.JOB,A.MGR,A.HIREDATE,A.SAL,A.COMM,B.DNAME,B.LOC
    FROM EMP A,DEPT B
    WHERE A.DEPTNO=B.DEPTNO
    The CREATE MATERIALIZED VIEW statement raised the following error:
    ERROR at line 6:
    ORA-12052: cannot fast refresh materialized view SCOTT.EMP_MVW
    I have tried the same with ROWID also, but got the same error
    while creating the MV.
    Please can anyone give me an idea how to fast refresh an MV with joins?
    Thanks,
    Raj.G.
    mail : [email protected]

    Actually you can get Oracle to tell you why the view does not have the capabilities you want using the DBMS_MVIEW package. For example you could have done something like this...
    Personal Oracle Database 10g Release 10.1.0.2.0 - Production
    With the Partitioning, OLAP and Data Mining options
    SQL> CREATE MATERIALIZED VIEW LOG ON dept
      2    WITH PRIMARY KEY, SEQUENCE
      3      INCLUDING NEW VALUES;
    Materialized view log created.
    SQL> CREATE MATERIALIZED VIEW LOG ON emp
      2    WITH PRIMARY KEY,SEQUENCE
      3      INCLUDING NEW VALUES;
    Materialized view log created.
    SQL> CREATE MATERIALIZED VIEW emp_mvw
      2    REFRESH FAST ON COMMIT
      3  AS
      4  SELECT a.empno, a.ename, a.deptno,
      5         a.job, a.mgr, a.hiredate,
      6         a.sal, a.comm, b.dname, b.loc
      7  FROM   emp a,dept b
      8  WHERE  a.deptno = b.deptno;
    FROM   emp a,dept b
    ERROR at line 7:
    ORA-12052: cannot fast refresh materialized view SCOTT.EMP_MVW
    SQL> CREATE MATERIALIZED VIEW emp_mvw
      2    REFRESH ON COMMIT -- remove the FAST to allow view to compile
      3  AS
      4  SELECT a.empno, a.ename, a.deptno,
      5         a.job, a.mgr, a.hiredate,
      6         a.sal, a.comm, b.dname, b.loc
      7  FROM   emp a,dept b
      8  WHERE  a.deptno = b.deptno;
    Materialized view created.
    SQL> @\oracle\product\10.1.0\Db_1\RDBMS\ADMIN\utlxmv.sql -- create mv_capabilities_table
    Table created.
    SQL> EXEC DBMS_MVIEW.EXPLAIN_MVIEW ('EMP_MVW');
    PL/SQL procedure successfully completed.
    SQL> SELECT capability_name, possible, msgtxt
      2  FROM   mv_capabilities_table
      3  WHERE  capability_name LIKE '%REFRESH_FAST_AFTER%';
    CAPABILITY_NAME                P MSGTXT
    REFRESH_FAST_AFTER_INSERT      N the SELECT list does not have the rowids of all the detail tables
    REFRESH_FAST_AFTER_INSERT      N mv log must have ROWID
    REFRESH_FAST_AFTER_INSERT      N mv log must have ROWID
    REFRESH_FAST_AFTER_ONETAB_DML  N see the reason why REFRESH_FAST_AFTER_INSERT is disabled
    REFRESH_FAST_AFTER_ANY_DML     N see the reason why REFRESH_FAST_AFTER_ONETAB_DML is disabled
    SQL> DROP MATERIALIZED VIEW LOG ON dept;
    Materialized view log dropped.
    SQL> DROP MATERIALIZED VIEW LOG ON emp;
    Materialized view log dropped.
    SQL> DROP MATERIALIZED VIEW emp_mvw;
    Materialized view dropped.
    SQL> CREATE MATERIALIZED VIEW LOG ON dept
      2    WITH ROWID, SEQUENCE
      3      INCLUDING NEW VALUES;
    Materialized view log created.
    SQL> CREATE MATERIALIZED VIEW LOG ON emp
      2    WITH ROWID,SEQUENCE
      3      INCLUDING NEW VALUES;
    Materialized view log created.
    SQL> CREATE MATERIALIZED VIEW emp_mvw
      2    REFRESH FAST ON COMMIT
      3  AS
      4  SELECT a.ROWID emp_rowid, b.ROWID dept_rowid,
      5         a.empno, a.ename, a.deptno,
      6         a.job, a.mgr, a.hiredate,
      7       a.sal, a.comm, b.dname, b.loc
      8  FROM   emp a,dept b
      9  WHERE  a.deptno = b.deptno;
    Materialized view created.
    SQL> DELETE mv_capabilities_table;
    18 rows deleted.
    SQL> EXEC DBMS_MVIEW.EXPLAIN_MVIEW ('EMP_MVW');
    PL/SQL procedure successfully completed.
    SQL> SELECT capability_name, possible, msgtxt
      2  FROM   mv_capabilities_table
      3  WHERE  capability_name LIKE '%REFRESH_FAST_AFTER%';
    CAPABILITY_NAME                P MSGTXT
    REFRESH_FAST_AFTER_INSERT      Y
    REFRESH_FAST_AFTER_ONETAB_DML  Y
    REFRESH_FAST_AFTER_ANY_DML     Y
    SQL>
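    As a quick sanity check of the ON COMMIT fast refresh (a sketch against the standard SCOTT schema), a committed change should show up in the MV without a manual refresh:
    INSERT INTO emp (empno, ename, deptno) VALUES (9999, 'TEST', 10);
    COMMIT;
    SELECT ename, dname FROM emp_mvw WHERE empno = 9999;  -- the new row should appear here
    DELETE FROM emp WHERE empno = 9999;
    COMMIT;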

  • Create a VC Model for BEx queries with Cell Editor

    Hello All,
    I am trying to create a VC Model for BEx queries with Cell Editor.
    The BW development team has created a BEx query with complex restrictions and calculations using the Cell Editor feature of BI BEx.
    The query output in BEx Analyzer is correct: all values are calculated at each cell level and displayed.
    But while creating the VC model, the system does not display the cells. Thus, no VC model can be created.
    I have executed the steps below:
    1. Created a VC model for BEx query ZQRY_XYZ
    2. Create iView -> Create a data service -> Provide a table from the output
    In the Column field, the system does not show any of the cells (present in the Cell Editor).
    Please help me to solve this issue.
    Thanks,
    Siva

    Hi
    If the Cell Editor has been used, then that query must have a structure in it. You have to select that 'structure' object in your VC table.
    If you select it, you will get the required result in the output. The structure to select is the one where the cell references are used in the BI query; you have to select that structure's name.
    Regards
    Sandeep

  • Materialized View  with Joins and Possibilities of Partitioning

    Hi,
    We have a materialized view which has around 12 GB of data to query. The query goes something like this:
      Select a.c1,a.c2,b.c1,b.c2,c.c1,c.c2
      from a,b,c
      where a.c1=b.c1
      --and the where condition goes on...
      --i.e Basically joining 3 different tables with complex join conditions but without any group functions and subqueries.
    Now I have a few questions here.
    Firstly, as the MView is created with joins, we had to create a separate log for each table, and we have done so. The logs are created with ROWID and SEQUENCE for better refresh performance.
    Question No 1
    Is this the best approach for materializing a query with complex join conditions? Or is it better to make 3 different materialized views, one for each of the 3 tables, and then join those 3 MViews in the query for my report? I ask this just to make sure the refresh time is brought down by this method. In any case, as we have created 3 different logs for the 3 tables, the refresh must be happening separately.
    Question No 2
    The data is likely to grow faster and faster. So we have the idea of making this a partitioned MView. Can the pros advise me on this, suggest the right practice for it, and point me to links where I can pick up an example and continue from there?
    Question No 3
    How can I find whether the materialized view was refreshed in fast mode or complete mode on the last refresh,
    apart from querying and checking the ROWID, which I feel is quite cumbersome for an MView with a data volume like ours? By default, when I look at the info in TOAD for this MView, it shows the Refresh Mode as "Force". But how do I ascertain what actually happened?
    Thanks in anticipation for a good round of discussion
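    On Question 3, a sketch of a data-dictionary check (column names from the standard USER_MVIEWS view) instead of inspecting ROWIDs:
    SELECT mview_name,
           refresh_mode,       -- e.g. DEMAND or COMMIT
           refresh_method,     -- FORCE / FAST / COMPLETE as defined
           last_refresh_type,  -- what the last refresh actually did: FAST / COMPLETE / NA
           last_refresh_date
    FROM   user_mviews
    WHERE  mview_name = 'YOUR_MVIEW_NAME';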

  • Materialized view with join

    In 10g Release 2, I tried to create the following materialized view with a join:
    test_link is a normal table
    test_geom is a table that contains an SDO_GEOMETRY column
    CREATE MATERIALIZED VIEW LOG ON test_link WITH ROWID;
    CREATE MATERIALIZED VIEW LOG ON test_geom WITH ROWID, PRIMARY KEY;
    CREATE MATERIALIZED VIEW MV_LINK USING INDEX REFRESH FAST ON DEMAND AS
    SELECT li.rowid link_rowid, geom.rowid geom_rowid, li.link_id, geom.link
    FROM test_link li, test_geom geom
    WHERE li.link_id = geom.link_id;
    But I always got an error like:
    ORA-12015: cannot create a fast refresh materialized view from a complex query
    If I change the geometry table to another table, everything works fine.
    Anyone have ideas?

    Unfortunately, creating a fast refreshable materialized view on a join with one of the select columns being a user defined type (sdo_geometry is a user defined type) is not allowed. See 5303489 in the metalink bug database.
    You could do like the workaround in the article suggests and create two materialized views and then create a regular view on top.
    In our scenario, our materialized view also contains unions, so we would really like to have one physical object at the end of the day. One approach that we are currently investigating is to create the materialized view (MV1) without the geometry column, which makes it fast refreshable, and also create a materialized view (MV2) on the table containing the geometry column. MV2 is also fast refreshable. We then create a table (T3) that contains all of the columns from MV1, plus a geometry column. An insert, update, delete trigger on MV1 is created. The trigger is used to push all of the columns from MV1 to T3 and the geometry column from MV2 to T3. I have created the above in one of our test environments and haven't encountered any issues yet.
    Let me know if you come up with a better approach.
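    For reference, a rough sketch of the two-MV-plus-view workaround described above, using the tables from the original post (the names of the new objects are made up; adjust columns as needed):
    -- MV1: the join, without the SDO_GEOMETRY column (fast refreshable)
    CREATE MATERIALIZED VIEW mv_link_attrs REFRESH FAST ON DEMAND AS
    SELECT li.rowid link_rowid, geom.rowid geom_rowid, li.link_id
    FROM   test_link li, test_geom geom
    WHERE  li.link_id = geom.link_id;

    -- MV2: the geometry table on its own (single-table MV)
    CREATE MATERIALIZED VIEW mv_geom REFRESH FAST ON DEMAND AS
    SELECT g.rowid geom_rowid, g.link_id, g.link
    FROM   test_geom g;

    -- Regular view stitched on top for querying
    CREATE VIEW v_link AS
    SELECT a.link_id, g.link
    FROM   mv_link_attrs a, mv_geom g
    WHERE  a.geom_rowid = g.geom_rowid;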
