Need Query - join in same table

I need a query for the following criteria.
Table: test
No     Order
1     a
1     b
1     c
2     a
2     b
2     d
3     e
3     f
3     g
3     h
1     f
2     f
Consider the above table.
1) If I give the input orders a and b, it should return No 1 and 2.
2) If I give the input orders f, g and h, it should return No 3.
Please give me a query that will return the above results.
Thanks

I am not sure I understand you, but it may be this:
with test as (
select 1 N, 'a' Ord from dual union all
select 1,'b' from dual union all
select 1,'c' from dual union all
select 2,'a' from dual union all
select 2,'b' from dual union all
select 2,'d' from dual union all
select 3,'e' from dual union all
select 3,'f' from dual union all
select 3,'g' from dual union all
select 3,'h' from dual union all
select 1,'f' from dual union all
select 2,'f' from dual )
select N from test tp where Ord = 'a'
intersect
select N from test tp where Ord = 'b';
with test as (
select 1 N, 'a' Ord from dual union all
select 1,'b' from dual union all
select 1,'c' from dual union all
select 2,'a' from dual union all
select 2,'b' from dual union all
select 2,'d' from dual union all
select 3,'e' from dual union all
select 3,'f' from dual union all
select 3,'g' from dual union all
select 3,'h' from dual union all
select 1,'f' from dual union all
select 2,'f' from dual )
select N from test tp where Ord = 'f'
intersect
select N from test tp where Ord = 'g'
intersect
select N from test tp where Ord = 'h';
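A more general sketch of the same idea, reusing the test/N/Ord names from the WITH clause above: group by N and check that the number of distinct matching order codes equals the number of inputs, so the input list can grow without adding another INTERSECT branch per value.
select N
  from test
 where Ord in ('f', 'g', 'h')           -- the input order codes
 group by N
having count(distinct Ord) = 3;         -- 3 = number of input order codes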

Similar Messages

  • Abap query, join between same tables

    Hi,
    I have an ABAP Query (SQ01) and I need to create a join between the same table (ESLL-ESLL) to obtain the services from a PO. The join is from the packno of ESLL to the subpackno of ESLL (the same table), but I don't know how I can do that with ABAP Query, because the InfoSet doesn't allow inserting the table twice.
    Can somebody help me?
    Thanks.
    Victoria

    Hi:
    I was able to create a query to retrieve the service line entries using table ESSR (header, with the service entry sheet number as an input parameter), linked by package number to the view ML_ESLL, and then from the view by sub-package number to ESLL. That way I was able to retrieve all the service line information from table ESLL using only SQ02 and SQ01, no ABAP.
    I hope this helps.
    Juan
    PS: I know the post is old, but there may be people out there with no ABAP access who need to create reports for Service Entry Sheet lines. The join conditions are:
    Table             Table
    ESSR            EKKO
    ESSR            ML_ESLL
    ML_ESLL      ESLL
    ESLL             ESLH

  • Query performance on same table with many DML operations

    Hi all,
    I have a table with 100 rows of data. After that, I inserted, deleted, and modified data many times.
    A select statement after the DML operations takes much longer than before the DML operations (there is not much difference in the data).
    If I create the same table again with the same data and run the same select statement, it takes less time.
    My question is: is there any command, like compress or re-indexing or something like that, to improve the performance without creating the table again?
    Thanks in advance,
    Pal

    Try searching for "rebuilding indexes" on http://asktom.oracle.com. You will get lots of hits and many lively discussions. Certainly Tom's opinion is that rebuilds are very rarely required.
    As far as I know, Oracle has always re-used deleted rows in indexes as long as the new row belongs in that place in the index. The only situation I am aware of where deleted rows do not get re-used is where you have a monotonically increasing key (e.g. one generated by a sequence), and most, but not all, of the older rows are deleted over time.
    For example, if you had a table like this, where seq_no is populated by a sequence and indexed:
    seq_no         NUMBER
    processed_flag VARCHAR2(1)
    trans_date     DATE
    and then did deletes like:
    DELETE FROM t
    WHERE processed_flag = 'Y' and
          trans_date <= ADD_MONTHS(sysdate, -24);
    that deleted 99% of the rows in the time period that were processed, leaving only a few, then the index leaf blocks would be very sparsely populated (i.e. lots of deleted rows in them), but since the current seq_no values are much larger than those old ones remaining, the space could not be re-used. Any leaf block that had all of its rows deleted would be reused in another part of the index.
    HTH
    John
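    If you do want to check whether an index really is sparse, and compact it, a hedged sketch of the usual commands follows; ix_t_seq_no is a placeholder name for the index on the sequence-generated key, and COALESCE is generally the lighter option compared with a full REBUILD.
    -- Hedged sketch: inspect the index, then compact it only if it really is sparse.
    ANALYZE INDEX ix_t_seq_no VALIDATE STRUCTURE;
    SELECT lf_rows, del_lf_rows,
           ROUND(del_lf_rows / NULLIF(lf_rows, 0) * 100, 1) AS pct_deleted
    FROM   index_stats;                      -- populated by the ANALYZE above
    ALTER INDEX ix_t_seq_no COALESCE;        -- merge sparsely filled leaf blocks
    -- ALTER INDEX ix_t_seq_no REBUILD;      -- heavier alternative, rarely required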

  • Sql query with multiple joins to same table

    I have to write a query for a client to display business officers' names and title along with the business name
    The table looks like this
    AcctNumber
    OfficerTitle
    OfficerName
    RecKey
    90% of the businesses have exactly 4 officer records, although some have fewer and some have more.
    There is a separate table that has the AcctNumber, BusinessName, and about 30 other fields that I don't need.
    An individual account can have 30 or 40 records on the other table.
    The client wants to display 1 record per account.
    Initially I wrote a query to join the table to itself:
    Select A.OfficerTtitle, A.OfficerName, B.OfficerTitle, B.OfficerName, C.OfficerTtitle, C.OfficerName, D.OfficerTitle, D.OfficerName where A.AcctNumber = B.AcctNumber and A.AcctNumber = C.AcctNumber and A.AcctNumber = D.AcctNumber
    This returned tons of duplicate rows for each account (number of records * number of records, I think).
    So I added:
    And A.RecKey > B.RecKey and B.RecKey > C.RecKey and C.RecKey > D.RecKey
    This works when there are exactly 4 records per account. If there are fewer than 4 records on the account it skips the account, and if there are more than 4 records it returns multiple rows.
    But when I try to join this to the other table to get the business name, I get a row for every record on the other table.
    I tried SELECT DISTINCT on the other table, and the query runs forever and never returns anything.
    I tried outer joins and subqueries, but no luck so far. I was thinking maybe a subquery (if exists), because I don't know how many records there are on an account, but I don't know how to structure that.
    Any suggestions would be appreciated

    Welcome to the forum!
    user13319842 wrote:
    I have to write a query for a client to display business officers' names and title along with the business name
    The table looks like this
    AcctNumber
    OfficerTitle
    OfficerName
    RecKey
    90% of the businesses have exactly 4 officer records, although some have less and some have more.
    There is a separate table that has the AcctNumber, BusinessName about 30 other fields that I don’t need
    An individual account can have 30 or 40 records on the other table.
    The client wants to display 1 record per account.
    As someone has already mentioned, you should post CREATE TABLE and INSERT statements for both tables (relevant columns only). You don't have to post a lot of sample data. For example, you need to pick 1 out of 30 or 40 rows (max) for the same account, but it's almost certainly enough if you post only 3 or 4 rows (max) for an account.
    Also, post the results you want from the sample data that you post, and explain how you get those results from that data.
    Always say which version of Oracle you're using. This sounds like a PIVOT problem, and a new SELECT .... PIVOT feature was introduced in Oracle 11.1. If you're using Oracle 11, you don't want to have to learn the old way to do pivots. On the other hand, if you have Oracle 10, a solution that uses a new feature that you don't have won't help you.
    Whenever you have a question, please post CREATE TABLE and INSERT statements for some sample data, the results you want from that data, an explanation, and your Oracle version.
    Initially I wrote a query to join the table to itself:
    Select A.OfficerTtitle, A.OfficerName, B.OfficerTitle, B.OfficerName, C.OfficerTtitle, C.OfficerName, D.OfficerTitle, D.OfficerName where A.AcctNumber = B.AcctNumber and A.AcctNumber = C.AcctNumber and A.AcctNumber = D.AcctNumber
    Be careful, and post the exact code that you're running. The statement above can't be what you ran, because it doesn't have a FROM clause.
    This returned tons of duplicate rows for each account ( number of records * number of records, I think)
    So added
    And A.RecKey > B.RecKey and B.RecKey > C.RecKey and C.RecKey > D.RecKey
    This works when there are exactly 4 records per account. If there are less than 4 records on the account it skips the account and if there are more than 4 records, it returns multiple rows.
    But when I try to l join this to the other table to get the business name, I get a row for every record on the other table
    I tried select distinct on the other table and the query runs for ever and never returns anything
    I tried outer joins and subqueries, but no luck so far. I was thinking maybe a subquery - if exists - because I don't know how many records there are on an account, but don't know how to structure that
    Any suggestions would be appreciated
    Displaying 1 column from n rows as n columns on 1 row is called pivoting. See the following link for several ways to do pivots:
    SQL and PL/SQL FAQ
    Pivoting requires that you know exactly how many columns will be in the result set. If that number depends on the data in the table, then you might prefer to use string aggregation, where the output consists of a huge string column that contains the concatenation of the data from n rows. This big string can be formatted so that it looks like multiple columns. For different string aggregation techniques, see:
    http://www.oracle-base.com/articles/10g/StringAggregationTechniques.php
    The following thread discusses some options for pivoting a variable number of columns:
    Re: Report count and sum from many rows into many columns
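    A minimal sketch of the fixed-width pivot, using hypothetical table and column names (officers(acct_number, officer_title, officer_name, rec_key) and businesses(acct_number, business_name)) and keeping at most four officers per account:
    -- Number the officers per account, then collapse each account to one row
    -- with conditional aggregation (works on Oracle 10g and later).
    SELECT b.acct_number,
           b.business_name,
           MAX(CASE WHEN o.rn = 1 THEN o.officer_title END) AS title_1,
           MAX(CASE WHEN o.rn = 1 THEN o.officer_name  END) AS name_1,
           MAX(CASE WHEN o.rn = 2 THEN o.officer_title END) AS title_2,
           MAX(CASE WHEN o.rn = 2 THEN o.officer_name  END) AS name_2,
           MAX(CASE WHEN o.rn = 3 THEN o.officer_title END) AS title_3,
           MAX(CASE WHEN o.rn = 3 THEN o.officer_name  END) AS name_3,
           MAX(CASE WHEN o.rn = 4 THEN o.officer_title END) AS title_4,
           MAX(CASE WHEN o.rn = 4 THEN o.officer_name  END) AS name_4
    FROM   businesses b
    LEFT   JOIN (SELECT acct_number, officer_title, officer_name,
                        ROW_NUMBER() OVER (PARTITION BY acct_number ORDER BY rec_key) AS rn
                 FROM   officers) o
           ON o.acct_number = b.acct_number
    GROUP  BY b.acct_number, b.business_name;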

  • Left Outer Join on same table clarification

    Hi,
    I have a table that gets populated from a 3rd party system. We don't have control over it. The table has master records (type 78) and children (type 64). In the 3rd party system, if a master transaction gets cancelled, it is recorded as type 178. If a child is cancelled, it is type 164. Once the child is cancelled and created again using one process, the newly created transaction will have type 65. The same goes for a cancelled master transaction: it will be 79. So, to summarize:
    Master:
    Brand New Transaction type = 78
    Cancelled Transaction type = 178
    Cancelled with creation transaction type = 79
    Child:
    Brand New Transaction type = 64
    Cancelled Transaction type = 164
    Cancelled with creation transaction type = 65
    I don’t have to bother about master records. I need to focus on only children for my query.
    ID          TxnID       Master      Type    TDate       Location
    193075      211554      211543      64      20140805    ABC
    193076      211555      211543      64      20140805    NBC
    193077      211556      211543      64      20140805    ABC
    193080      211559      211558      64      20140805    ABC
    193081      211562      211561      64      20140805    ABC
    193082      211565      211564      64      20140805    CBC
    193083      211565      211564      164     20140805    CBC
    193084      211566      211564      65      20140805    AZC
    --drop table #Transactions
    CREATE TABLE #Transactions
    (
        ID        int,
        TxnID     int,
        mstTicket int,
        Typecode  int,
        Tdate     datetime,
        Location  varchar(10)
    );
    select * from #Transactions;
    Insert into #Transactions (ID, TxnID, mstTicket, Typecode, Tdate, Location)
    Select 193075, 211554, 211543, 64,  '2014-08-05', 'ABC' UNION ALL
    Select 193076, 211555, 211543, 64,  '2014-08-05', 'NBC' UNION ALL
    Select 193077, 211556, 211543, 64,  '2014-08-05', 'ABC' UNION ALL
    Select 193080, 211559, 211558, 64,  '2014-08-05', 'ABC' UNION ALL
    Select 193081, 211562, 211561, 64,  '2014-08-05', 'ABC' UNION ALL
    Select 193082, 211565, 211564, 64,  '2014-08-05', 'CBC' UNION ALL
    Select 193083, 211565, 211564, 164, '2014-08-05', 'CBC' UNION ALL
    Select 193084, 211566, 211564, 65,  '2014-08-05', 'AZC';
    -- first query: TypeCode filter only in the join condition
    select T.TxnID, T.TypeCode, T.Location, TL.TxnID
    From #Transactions T
    Left Outer JOIN #Transactions TL
           ON TL.TxnID = T.TxnID
          and TL.TypeCode = 164;
    -- second query: same join, plus a WHERE filter on the left table
    select T.TxnID, T.TypeCode, T.Location, TL.TxnID
    From #Transactions T
    Left Outer JOIN #Transactions TL
           ON TL.TxnID = T.TxnID
          and TL.TypeCode = 164
    Where T.typecode in (64, 65);
    I need a clarification regarding left outer join.
    In the first left outer join query, both the 64 and the 164 rows have TL.TxnID populated. Why is that? What I understand from left outer join is that it returns all the rows from the left table and only the matching data from the right table.
    Here, the matching row from the right table is TxnID 211565 with type 164 (ID 193083), so only that row should have TL.TxnID populated. But the row with TxnID 211565 and type 64 (ID 193082) also gets TL.TxnID populated.
    Why is that? Am I not understanding left outer join properly?
    Thanks,

    Thank you Shailesh. I understand now what the join condition does in a left outer join. I was thinking that
    Left Outer JOIN #Transactions TL ON TL.TxnID = T.TxnID and TL.TypeCode = 164
    is the same as
    Left Outer JOIN #Transactions TL ON TL.TxnID = T.TxnID and TL.TypeCode = T.TypeCode and TL.TypeCode = 164
    #Transactions T
    Left Outer JOIN #Transactions TL
           ON TL.TxnID = T.TxnID
          and TL.TypeCode = 164
    Where T.typecode in (64, 65)
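    A short sketch of the behaviour being asked about, reusing the #Transactions sample above (the correlation on ID in the second query is my own suggestion, not something from the thread):
    -- With the extra condition only in the ON clause, it merely restricts which TL rows
    -- can match; every T row still comes back.  Both the 64 and the 164 row for TxnID
    -- 211565 match the same cancelled row, so both show TL.TxnID populated.
    select T.ID, T.TxnID, T.TypeCode, TL.TxnID as cancelled_txn
    from #Transactions T
    left outer join #Transactions TL
           on TL.TxnID = T.TxnID
          and TL.TypeCode = 164;
    -- To flag only the row that is itself cancelled, correlate on something unique to
    -- the row (here ID), so the type-64 row no longer matches its type-164 sibling.
    select T.ID, T.TxnID, T.TypeCode, TL.TxnID as cancelled_txn
    from #Transactions T
    left outer join #Transactions TL
           on TL.ID = T.ID
          and TL.TypeCode = 164;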

  • Outer/Cartesian Join using same table

    Hi All,
    I need to write a query, which is kind of like an outer join/cartesian join, that essentially tags each customer in my database with a flag indicating whether they have received a particular product in a given year. There are 8 products, so for each distinct customer_id in my database I want 8 records, with a 'Yes' or 'No' indicating whether they have received that product or not. Currently, for a given year, if a customer only receives 2 out of the 8 products there will be two records for that individual, but I want 8 records for each individual so that each product is shown as either received or not received by the customer. This is the table format I'm looking for:
    CUSTOMER_ID     PRODUCT_CD     PRODUCT_RECEIVED_FLAG     YEAR
    999999999     1     Y     2010
    999999999     2     N     2010
    999999999     3     N     2010
    999999999     4     N     2010
    999999999     5     N     2010
    999999999     6     N     2010
    999999999     7     Y     2010
    999999999     8     Y     2010
    888888888     1     N     2010
    888888888     2     N     2010
    888888888     3     Y     2010
    888888888     4     Y     2010
    888888888     5     N     2010
    888888888     6     N     2010
    888888888     7     Y     2010
    888888888     8     Y     2010
    777777777     1     Y     2010
    777777777     2     Y     2010
    777777777     3     Y     2010
    777777777     4     Y     2010
    777777777     5     Y     2010
    777777777     6     Y     2010
    777777777     7     N     2010
    777777777     8     N     2010
    Thanks,
    Ed

    I am in good mood today ;) :
    with customer as (
                      select '999999999' customer_id from dual union all
                      select '888888888' from dual union all
                      select '777777777' from dual
                     ),
         product as (
                      select level product_cd from dual connect by level <= 8
                     ),
          orders as (
                      select '999999999' customer_id,1 product_cd,2010 year from dual union all
                      select '999999999',7,2010 from dual union all
                      select '999999999',8,2010 from dual union all
                      select '888888888',3,2010 from dual union all
                      select '888888888',4,2010 from dual union all
                      select '888888888',7,2010 from dual union all
                      select '888888888',8,2010 from dual union all
                      select '777777777',1,2010 from dual union all
                      select '777777777',2,2010 from dual union all
                      select '777777777',3,2010 from dual union all
                      select '777777777',4,2010 from dual union all
                      select '777777777',5,2010 from dual union all
                      select '777777777',6,2010 from dual union all
                      select '777777777',7,2010 from dual union all
                      select '777777777',8,2010 from dual
                     )
    -- end of on-the-fly data sample
    select  c.customer_id,
            p.product_cd,
            nvl2(o.product_cd,'Y','N') product_received_flag,
            y.year
      from       customer c
            cross join
                 product p
            cross join
                 (
                  select  distinct year
                    from  orders
                 ) y
            left join
                orders o
              on (
                      c.customer_id = o.customer_id
                  and
                      p.product_cd = o.product_cd
                  and
                      y.year = o.year
                 )
      order by y.year,
               c.customer_id desc,
               p.product_cd;
    CUSTOMER_ PRODUCT_CD P       YEAR
    999999999          1 Y       2010
    999999999          2 N       2010
    999999999          3 N       2010
    999999999          4 N       2010
    999999999          5 N       2010
    999999999          6 N       2010
    999999999          7 Y       2010
    999999999          8 Y       2010
    888888888          1 N       2010
    888888888          2 N       2010
    888888888          3 Y       2010
    888888888          4 Y       2010
    888888888          5 N       2010
    888888888          6 N       2010
    888888888          7 Y       2010
    888888888          8 Y       2010
    777777777          1 Y       2010
    777777777          2 Y       2010
    777777777          3 Y       2010
    777777777          4 Y       2010
    777777777          5 Y       2010
    777777777          6 Y       2010
    777777777          7 Y       2010
    777777777          8 Y       2010
    24 rows selected.
    SY.
    P.S. The code assumes at least one customer ordered at least one product within each year. Otherwise you will need a year table.

  • Slow Query: Join rows from table A with first match in table B

    Hi,
    I have been struggling with this for days. It is very slow: with a table of 4.5 million records it took over 2 hours.
    Records with anType 2 and 3: 4 million.
    Records with anType 1 and 4: 500,000.
    Different acWarehouse values: 20.
    Different acIdent values: 9799.
    Could this be written in any other way so that it would be faster?
    anId | acWarehouse | acIdent | anType | anQty | anTotalQuantity
    1| WarehouseA | IdentA | 1 | 100 | 100
    2| WarehouseA | IdentA | 1 | 100 | 200
    3| WarehouseA | IdentA | 1 | 100 | 300
    4| WarehouseA | IdentA | 1 | 100 | 400
    5| WarehouseA | IdentA | 2 | -100 | 100
    6| WarehouseA | IdentA | 2 | -100 | 200
    7| WarehouseA | IdentA | 2 | -100 | 300
    8| WarehouseA | IdentA | 2 | -100 | 400
    Result should be:
    anId | anEdge_Transaction_Id | anQuantity
    5| 1| 100
    6| 2 | 100
    7| 3 | 100
    8| 4 | 100
    Table definition:
    CREATE TABLE iPA_Transaction
    (
         ANID            NUMBER(9,0)      -- PRIMARY KEY
    ,    ACWAREHOUSE     VARCHAR2(30 CHAR)
    ,    ACIDENT         VARCHAR2(16 CHAR)
    ,    ANTYPE          NUMBER(1)
    ,    ANQTY           NUMBER(19,4)
    ,    ANTOTALQUANTITY NUMBER(19,4)     -- RUNNING TOTAL
    );
    ALTER TABLE iPA_Transaction ADD CONSTRAINT PK_Transaction PRIMARY KEY (anId);
    CREATE INDEX IX_Transaction_TEST4 ON iPA_Transaction(acIdent,acWarehouse,anType,anTotalQuantity);
    CREATE TYPE edge_transaction_data AS OBJECT (
         anId                  NUMBER(9,0)
    ,    anEdge_Transaction_Id NUMBER(9,0)
    ,    anQuantity            NUMBER(19,4)
    );
    /
    CREATE TYPE edge_transaction AS TABLE OF edge_transaction_data;
    /
    Query:
         SELECT
              iPA_Transaction.anId
         ,     first_transaction.anEdge_Transaction_Id
         ,     first_transaction.anQuantity
         FROM
              iPA_Transaction
              INNER JOIN TABLE(
                   CAST(
                        MULTISET(
                             SELECT
                                  iPA_Transaction.anId
                             ,     MIN(transaction_stock.anId) KEEP (DENSE_RANK FIRST ORDER BY transaction_stock.anTotalQuantity) AS anEdge_Transaction_Id
                             ,     MIN(transaction_stock.anTotalQuantity) KEEP (DENSE_RANK FIRST ORDER BY transaction_stock.anTotalQuantity) AS anTotalQuantity
                             FROM
                                  iPA_Transaction transaction_stock
                             WHERE
                                  transaction_stock.anType IN (1,4)
                             AND transaction_stock.acIdent = iPA_Transaction.acIdent
                             AND transaction_stock.acWarehouse = iPA_Transaction.acWarehouse
                             AND transaction_stock.anTotalQuantity > (iPA_Transaction.antotalquantity + iPA_Transaction.anqty)
                         ) AS edge_transaction
                    )
               ) first_transaction ON (iPA_Transaction.anId = first_transaction.anId)
          WHERE
               iPA_Transaction.anType IN (2,3);
     -- EXECUTION PLAN
    PLAN_TABLE_OUTPUT
    Plan hash value: 1731335374
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 6634 | 362K| 107M (1)|357:36:32 |
    | 1 | NESTED LOOPS | | 6634 | 362K| 107M (1)|357:36:32 |
    |* 2 | TABLE ACCESS FULL | IPA_TRANSACTION | 3946K| 203M| 15004 (1)| 00:03:01 |
    |* 3 | COLLECTION ITERATOR SUBQUERY FETCH| | 1 | 2 | 27 (0)| 00:00:01 |
    | 4 | VIEW | | 1 | 39 | 6 (0)| 00:00:01 |
    | 5 | SORT AGGREGATE | | 1 | 50 | | |
    | 6 | INLIST ITERATOR | | | | | |
    | 7 | TABLE ACCESS BY INDEX ROWID | IPA_TRANSACTION | 1 | 50 | 6 (0)| 00:00:01 |
    |* 8 | INDEX RANGE SCAN | IX_TRANSACTION_TEST4 | 1 | | 5 (0)| 00:00:01 |
    Query Block Name / Object Alias (identified by operation id):
    1 - SEL$80EA2A9E
    2 - SEL$80EA2A9E / IPA_TRANSACTION@SEL$1
    3 - SEL$80EA2A9E / KOKBF$@SEL$2
    4 - SEL$4 / KOKSDML$@SEL$540AC7B0
    5 - SEL$4
    7 - SEL$4 / TRANSACTION_STOCK@SEL$4
    8 - SEL$4 / TRANSACTION_STOCK@SEL$4
    Predicate Information (identified by operation id):
    2 - filter("IPA_TRANSACTION"."ANTYPE"=2 OR "IPA_TRANSACTION"."ANTYPE"=3)
    3 - filter("IPA_TRANSACTION"."ANID"=SYS_OP_ATG(VALUE(KOKBF$),1,2,2))
    8 - access("TRANSACTION_STOCK"."ACIDENT"=:B1 AND "TRANSACTION_STOCK"."ACWAREHOUSE"=:B2 AND
    ("TRANSACTION_STOCK"."ANTYPE"=1 OR "TRANSACTION_STOCK"."ANTYPE"=4) AND
    "TRANSACTION_STOCK"."ANTOTALQUANTITY">:B3+:B4 AND "TRANSACTION_STOCK"."ANTOTALQUANTITY" IS NOT NULL)
    Column Projection Information (identified by operation id):
    1 - (#keys=0) "IPA_TRANSACTION"."ANID"[NUMBER,22],
    "IPA_TRANSACTION"."ACWAREHOUSE"[VARCHAR2,120], "IPA_TRANSACTION"."ACIDENT"[VARCHAR2,64],
    "IPA_TRANSACTION"."ANTYPE"[NUMBER,22], "IPA_TRANSACTION"."ANQTY"[NUMBER,22],
    "IPA_TRANSACTION"."ANTOTALQUANTITY"[NUMBER,22], VALUE(A0)[96]
    2 - "IPA_TRANSACTION"."ANID"[NUMBER,22], "IPA_TRANSACTION"."ACWAREHOUSE"[VARCHAR2,120],
    "IPA_TRANSACTION"."ACIDENT"[VARCHAR2,64], "IPA_TRANSACTION"."ANTYPE"[NUMBER,22],
    "IPA_TRANSACTION"."ANQTY"[NUMBER,22], "IPA_TRANSACTION"."ANTOTALQUANTITY"[NUMBER,22]
    3 - VALUE(A0)[96]
    4 - "KOKSDML$"."KOKSDML$_C00000"[NUMBER,22], "KOKSDML$"."ANEDGE_TRANSACTION_ID"[NUMBER,22],
    "KOKSDML$"."ANTOTALQUANTITY"[NUMBER,22]
    5 - (#keys=0) MIN("TRANSACTION_STOCK"."ANTOTALQUANTITY") KEEP (DENSE_RANK FIRST ORDER BY
    "TRANSACTION_STOCK"."ANTOTALQUANTITY")[22], MIN("TRANSACTION_STOCK"."ANID") KEEP (DENSE_RANK FIRST
    ORDER BY "TRANSACTION_STOCK"."ANTOTALQUANTITY")[22]
    6 - "TRANSACTION_STOCK".ROWID[ROWID,10], "TRANSACTION_STOCK"."ANID"[NUMBER,22],
    "TRANSACTION_STOCK"."ACWAREHOUSE"[VARCHAR2,120], "TRANSACTION_STOCK"."ACIDENT"[VARCHAR2,64],
    "TRANSACTION_STOCK"."ANTYPE"[NUMBER,22], "TRANSACTION_STOCK"."ANTOTALQUANTITY"[NUMBER,22]
    7 - "TRANSACTION_STOCK".ROWID[ROWID,10], "TRANSACTION_STOCK"."ANID"[NUMBER,22],
    "TRANSACTION_STOCK"."ACWAREHOUSE"[VARCHAR2,120], "TRANSACTION_STOCK"."ACIDENT"[VARCHAR2,64],
    "TRANSACTION_STOCK"."ANTYPE"[NUMBER,22], "TRANSACTION_STOCK"."ANTOTALQUANTITY"[NUMBER,22]
    8 - "TRANSACTION_STOCK".ROWID[ROWID,10], "TRANSACTION_STOCK"."ACIDENT"[VARCHAR2,64],
    "TRANSACTION_STOCK"."ACWAREHOUSE"[VARCHAR2,120], "TRANSACTION_STOCK"."ANTYPE"[NUMBER,22],
    "TRANSACTION_STOCK"."ANTOTALQUANTITY"[NUMBER,22]

    In addition to what has just been said by Hoek, which I also quote, I feel this could be a problem similar to the one posted here:
    https://forums.oracle.com/forums/thread.jspa?threadID=2387388 (SQL - Which positive covered the negative?)
    Could you please let us know a bit more about the logic of the output?
    1) Do you want to know which transaction with a positive quantity covers the current transaction with a negative quantity?
    2) How does it need to be partitioned?
    3) Are the quantities always equal for corresponding transactions?
    If I just look at your data I can do something really simple but it might be not what you need.
    CREATE TABLE iPA_Transaction
    (
      ANID            NUMBER(9,0)  -- PRIMARY KEY
    , ACWAREHOUSE     VARCHAR2(30 CHAR)
    , ACIDENT         VARCHAR2(16 CHAR)
    , ANTYPE          NUMBER(1)
    , ANQTY           NUMBER(19,4)
    , ANTOTALQUANTITY NUMBER(19,4) -- RUNNING TOTAL
    );
    ALTER TABLE iPA_Transaction ADD CONSTRAINT PK_Transaction PRIMARY KEY (anId);
    CREATE INDEX IX_Transaction_TEST4 ON iPA_Transaction(acIdent,acWarehouse,anType,anTotalQuantity);
    INSERT INTO iPA_Transaction VALUES(1, 'WarehouseA', 'IdentA', 1 , 100, 100);
    INSERT INTO iPA_Transaction VALUES(2, 'WarehouseA', 'IdentA', 1 , 100, 200);
    INSERT INTO iPA_Transaction VALUES(3, 'WarehouseA', 'IdentA', 1 , 100, 300);
    INSERT INTO iPA_Transaction VALUES(4, 'WarehouseA', 'IdentA', 1 , 100, 400);
    INSERT INTO iPA_Transaction VALUES(5, 'WarehouseA', 'IdentA', 2 , -100, 100);
    INSERT INTO iPA_Transaction VALUES(6, 'WarehouseA', 'IdentA', 2 , -100, 200);
    INSERT INTO iPA_Transaction VALUES(7, 'WarehouseA', 'IdentA', 2 , -100, 300);
    INSERT INTO iPA_Transaction VALUES(8, 'WarehouseA', 'IdentA', 2 , -100, 400);
    SELECT a.anid, b.anid anedge_transaction_id, -a.anqty anqty
      FROM ipa_transaction a, ipa_transaction b
    WHERE     a.acwarehouse = b.acwarehouse
           AND a.acident = b.acident
           AND a.antype IN (2, 4)
           AND b.antype IN (1, 4)
           AND a.antotalquantity = b.antotalquantity;
          ANID ANEDGE_TRANSACTION_ID      ANQTY
             5                     1        100
             6                     2        100
             7                     3        100
             8                     4        100
    Try to give additional details.
    Regards.
    Al                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                             
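    As an aside on the original slow query itself, a hedged sketch of an alternative that expresses the same "first matching positive row" lookup as a plain join plus KEEP (DENSE_RANK FIRST) aggregation, instead of a correlated MULTISET per row. Table and column names are taken from the thread; whether it is actually faster would need testing on the real volumes.
    SELECT t.anId,
           MIN(s.anId)            KEEP (DENSE_RANK FIRST ORDER BY s.anTotalQuantity) AS anEdge_Transaction_Id,
           MIN(s.anTotalQuantity) KEEP (DENSE_RANK FIRST ORDER BY s.anTotalQuantity) AS anQuantity
    FROM   iPA_Transaction t
    JOIN   iPA_Transaction s
           ON  s.acIdent     = t.acIdent
           AND s.acWarehouse = t.acWarehouse
           AND s.anType      IN (1, 4)
           AND s.anTotalQuantity > t.anTotalQuantity + t.anQty
    WHERE  t.anType IN (2, 3)
    GROUP  BY t.anId;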
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              

  • Filter and Join on same table

    Hi All,
    I am having a bit of a hard time implementing the following.
    All suggestions welcome.
    (1) I have a file being mapped into an initial table with say, 10 fields (field1...field10).
    (2) I want to execute following logic for mapping
    For all records in the table, in a cursor
    If field1 = 10 Then
    update field10=x
    Else If field2 = 20 Then
    update field10=y
    End if
    If field10=x then
    update field3 =222
    End if
    I was thinking of using the filter, but am grappling with the problem that after I define the filter, how do I merge it with the original table data and execute the last conditional update (based on field10) on all records in the table?
    - I am thinking of doing it as below:
    Join the output from the mother table with the output from the filter into a temp table (using a non-equal row id as the join condition), but this creates duplicate column names, and I am wondering how to collapse them back into one column set again.
    Is there an alternative? This approach is very kludgy (if it works at all).
    Question 2
    (1) I have a SQL expression defined which I want to use in the filter bifurcation and also after the join. The expression's input and output columns are the same; only the targets are different.
    Is it possible to do this, or do I have to duplicate the SQL expression?
    Question 3
    My current load/stage is a PL/SQL procedure, which I am trying to model with OWB. Is there a guideline or recommended best practice for doing this kind of activity?
    Appreciate your help.
    Deepak

    1. I think I already gave an answer on this question (and there is one more of the same in the forum), but here it is again:
    You can use an expression with a case statement:
    CASE field1
    WHEN 10 THEN 'x'
    WHEN 20 THEN 'y'
    WHEN ... THEN ...
    ELSE field10 END
    The inputs to this expression are fields field1 and field10, and the output goes to field10. So field10 will be updated with the value coming out of the expression: if field1 is 10 then it will be updated with 'x', when field1 is 20 then it will be updated with 'y', etc. When none of the CASE conditions are true (the ELSE case) it will be updated with field10 itself (pass-through). (A searched-CASE sketch for the question's exact branching follows at the end of this reply.)
    2. The best solution would be to create a transformation (a function, for example) that contains your expression, then use it throughout the project without having to retype it.
    3. You should:
    - Import the source object structures and (where possible) the target ones
    - Design the new objects in OWB
    - Import your custom transformation library, if there is one
    - Design the extraction processes as mappings in OWB (you will not be able to reuse much of your old code if you want to take advantage of OWBs metadata management, runtime management etc. and if you want to maintain the system through OWB)
    - Run the two systems side by side for some time until you are comfortable that the process logic you designed in OWB gives the same results as the old process.
    - Move the OWB system to production and switch the old system off.
    Regards:
    Igor
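    For the exact branching in the original question (field1 driving 'x', field2 driving 'y', then field3 depending on the new field10), a searched CASE is closer. A minimal sketch in plain SQL, assuming the staging table is called stage_tbl (a placeholder name):
    -- Hedged sketch: searched CASE matching the asker's stated logic.
    UPDATE stage_tbl
    SET    field10 = CASE
                       WHEN field1 = 10 THEN 'x'
                       WHEN field2 = 20 THEN 'y'
                       ELSE field10            -- pass-through
                     END;
    -- Second conditional update, driven by the new field10 value.
    UPDATE stage_tbl
    SET    field3 = 222
    WHERE  field10 = 'x';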

  • Select Query failing on a table that has heavy per-second insertions

    Hi
    Problem statement
    1- We are using 11g as the database.
    2- We have a table that is range-partitioned on the date.
    3- The insertion rate is very high, i.e. several hundred records per second into the current partition.
    4- The data goes continuously into the current partition as and when the buffer is full or the per-second timer expires.
    5- We also have to run select queries on the same table, on the current partition, say for the latest 500 records.
    6- Efficient indexes are also created on the table.
    Solutions Tried
    1- After analysing with tkprof it is observed that select and execute work fine, but the fetch takes too much time to produce the output, say 1 hour.
    2- Using the 11g SQL advisor and SPM several baselines were created, but their success rate is also too low.
    Please suggest any solution to this issue:
    1- i.e. a redesign of the table,
    2- any better way to query, to fix the fetch issue,
    3- any Oracle settings or parameter changes to fix the fetch issue.
    Thanks in advance.
    Regards
    Vishal Sharma

    I am uploading the latest stats; please let me know how I can improve this, as it is taking 25 minutes.
    ####TKPROF output#########
    SQL ID : 2j5w6bv437cak
    select almevttbl.AlmEvtId, almevttbl.AlmType, almevttbl.ComponentId,
      almevttbl.TimeStamp, almevttbl.Severity, almevttbl.State,
      almevttbl.Category, almevttbl.CauseCode, almevttbl.UnitType,
      almevttbl.UnitId, almevttbl.UnitName, almevttbl.ServerName,
      almevttbl.StrParam, almevttbl.ExtraStrParam, almevttbl.ExtraStrParam2,
      almevttbl.ExtraStrParam3, almevttbl.ParentCustId, almevttbl.ExtraParam1,
      almevttbl.ExtraParam2, almevttbl.ExtraParam3,almevttbl.ExtraParam4,
      almevttbl.ExtraParam5, almevttbl.SRCIPADDRFAMILY,almevttbl.SrcIPAddress11,
      almevttbl.SrcIPAddress12,almevttbl.SrcIPAddress13,almevttbl.SrcIPAddress14,
      almevttbl.DESTIPADDRFAMILY,almevttbl.DestIPAddress11,
      almevttbl.DestIPAddress12,almevttbl.DestIPAddress13,
      almevttbl.DestIPAddress14,  almevttbl.DestPort, almevttbl.SrcPort,
      almevttbl.SessionDir, almevttbl.CustomerId, almevttbl.ProfileId,
      almevttbl.ParentProfileId, almevttbl.CustomerName, almevttbl.AttkDir,
      almevttbl.SubCategory, almevttbl.RiskCategory, almevttbl.AssetValue,
      almevttbl.IPSAction, almevttbl.l4Protocol,almevttbl.ExtraStrParam4 ,
      almevttbl.ExtraStrParam5,almevttbl.username,almevttbl.ExtraStrParam6,
      IpAddrFamily1,IPAddrValue11,IPAddrValue12,IPAddrValue13,IPAddrValue14,
      IpAddrFamily2,IPAddrValue21,IPAddrValue22,IPAddrValue23,IPAddrValue24
    FROM
           AlmEvtTbl PARTITION(ALMEVTTBLP20100323) WHERE AlmEvtId IN ( SELECT  * FROM
      ( SELECT /*+ FIRST_ROWS(1000) INDEX (AlmEvtTbl AlmEvtTbl_Index) */AlmEvtId
      FROM AlmEvtTbl PARTITION(ALMEVTTBLP20100323) where       ((AlmEvtTbl.Customerid
      = 0 or AlmEvtTbl.ParentCustId = 0))  ORDER BY AlmEvtTbl.TIMESTAMP DESC) 
      WHERE ROWNUM  <  602) order by timestamp desc
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.10       0.17          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch       42   1348.25    1521.24       1956   39029545          0         601
    total       44   1348.35    1521.41       1956   39029545          0         601
    Misses in library cache during parse: 1
    Optimizer mode: FIRST_ROWS
    Parsing user id: 82 
    Rows     Row Source Operation
        601  PARTITION RANGE SINGLE PARTITION: 24 24 (cr=39029545 pr=1956 pw=1956 time=11043 us cost=0 size=7426 card=1)
        601   TABLE ACCESS BY LOCAL INDEX ROWID ALMEVTTBL PARTITION: 24 24 (cr=39029545 pr=1956 pw=1956 time=11030 us cost=0 size=7426 card=1)
        601    INDEX FULL SCAN ALMEVTTBL_INDEX PARTITION: 24 24 (cr=39029377 pr=1956 pw=1956 time=11183 us cost=0 size=0 card=1)(object id 72557)
        601     FILTER  (cr=39027139 pr=0 pw=0 time=0 us)
    169965204      COUNT STOPKEY (cr=39027139 pr=0 pw=0 time=24859073 us)
    169965204       VIEW  (cr=39027139 pr=0 pw=0 time=17070717 us cost=0 size=13 card=1)
    169965204        PARTITION RANGE SINGLE PARTITION: 24 24 (cr=39027139 pr=0 pw=0 time=13527031 us cost=0 size=48 card=1)
    169965204         TABLE ACCESS BY LOCAL INDEX ROWID ALMEVTTBL PARTITION: 24 24 (cr=39027139 pr=0 pw=0 time=10299895 us cost=0 size=48 card=1)
    169965204          INDEX FULL SCAN ALMEVTTBL_INDEX PARTITION: 24 24 (cr=1131414 pr=0 pw=0 time=3222624 us cost=0 size=0 card=1)(object id 72557)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                      42        0.00          0.00
      SQL*Net message from client                    42       11.54        133.54
      db file sequential read                      1956        0.20         28.00
      latch free                                     21        0.00          0.01
      latch: cache buffers chains                     9        0.01          0.02
    SQL ID : 0ushr863b7z39
    SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS IGNORE_WHERE_CLAUSE
      NO_PARALLEL(SAMPLESUB) opt_param('parallel_execution_enabled', 'false')
      NO_PARALLEL_INDEX(SAMPLESUB) NO_SQL_TUNE */ NVL(SUM(C1),0), NVL(SUM(C2),0)
    FROM
    (SELECT /*+ IGNORE_WHERE_CLAUSE NO_PARALLEL("PLAN_TABLE") FULL("PLAN_TABLE")
      NO_PARALLEL_INDEX("PLAN_TABLE") */ 1 AS C1, CASE WHEN
      "PLAN_TABLE"."STATEMENT_ID"=:B1 THEN 1 ELSE 0 END AS C2 FROM
      "SYS"."PLAN_TABLE$" "PLAN_TABLE") SAMPLESUB
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1      0.00       0.01          1          3          0           1
    total        3      0.00       0.01          1          3          0           1
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 82     (recursive depth: 1)
    Rows     Row Source Operation
          1  SORT AGGREGATE (cr=3 pr=1 pw=1 time=0 us)
          0   TABLE ACCESS FULL PLAN_TABLE$ (cr=3 pr=1 pw=1 time=0 us cost=29 size=138856 card=8168)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file sequential read                         1        0.01          0.01
    SQL ID : bjkdb51at8dnb
    EXPLAIN PLAN SET STATEMENT_ID='PLUS30350011' FOR select almevttbl.AlmEvtId,
      almevttbl.AlmType, almevttbl.ComponentId, almevttbl.TimeStamp,
      almevttbl.Severity, almevttbl.State, almevttbl.Category,
      almevttbl.CauseCode, almevttbl.UnitType, almevttbl.UnitId,
      almevttbl.UnitName, almevttbl.ServerName, almevttbl.StrParam,
      almevttbl.ExtraStrParam, almevttbl.ExtraStrParam2, almevttbl.ExtraStrParam3,
       almevttbl.ParentCustId, almevttbl.ExtraParam1, almevttbl.ExtraParam2,
      almevttbl.ExtraParam3,almevttbl.ExtraParam4,almevttbl.ExtraParam5,
      almevttbl.SRCIPADDRFAMILY,almevttbl.SrcIPAddress11,almevttbl.SrcIPAddress12,
      almevttbl.SrcIPAddress13,almevttbl.SrcIPAddress14,
      almevttbl.DESTIPADDRFAMILY,almevttbl.DestIPAddress11,
      almevttbl.DestIPAddress12,almevttbl.DestIPAddress13,
      almevttbl.DestIPAddress14,  almevttbl.DestPort, almevttbl.SrcPort,
      almevttbl.SessionDir, almevttbl.CustomerId, almevttbl.ProfileId,
      almevttbl.ParentProfileId, almevttbl.CustomerName, almevttbl.AttkDir,
      almevttbl.SubCategory, almevttbl.RiskCategory, almevttbl.AssetValue,
      almevttbl.IPSAction, almevttbl.l4Protocol,almevttbl.ExtraStrParam4 ,
      almevttbl.ExtraStrParam5,almevttbl.username,almevttbl.ExtraStrParam6,
      IpAddrFamily1,IPAddrValue11,IPAddrValue12,IPAddrValue13,IPAddrValue14,
      IpAddrFamily2,IPAddrValue21,IPAddrValue22,IPAddrValue23,IPAddrValue24 FROM 
           AlmEvtTbl PARTITION(ALMEVTTBLP20100323) WHERE AlmEvtId IN ( SELECT  * FROM
      ( SELECT /*+ FIRST_ROWS(1000) INDEX (AlmEvtTbl AlmEvtTbl_Index) */AlmEvtId
      FROM AlmEvtTbl PARTITION(ALMEVTTBLP20100323) where       ((AlmEvtTbl.Customerid
      = 0 or AlmEvtTbl.ParentCustId = 0))  ORDER BY AlmEvtTbl.TIMESTAMP DESC) 
      WHERE ROWNUM  <  602) order by timestamp desc
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.28       0.26          0          0          0           0
    Execute      1      0.01       0.00          0          0          0           0
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.29       0.27          0          0          0           0
    Misses in library cache during parse: 1
    Optimizer mode: FIRST_ROWS
    Parsing user id: 82 
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                       1        0.00          0.00
      SQL*Net message from client                     1        0.00          0.00
    OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse       13      0.71       0.96          3         10          0           0
    Execute     14      0.20       0.29          4        304         26          21
    Fetch       92   2402.17    2714.85       3819   70033708          0        1255
    total      119   2403.09    2716.10       3826   70034022         26        1276
    Misses in library cache during parse: 10
    Misses in library cache during execute: 6
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                      49        0.00          0.00
      SQL*Net message from client                    48       29.88        163.43
      db file sequential read                      1966        0.20         28.10
      latch free                                     21        0.00          0.01
      latch: cache buffers chains                     9        0.01          0.02
      latch: session allocation                       1        0.00          0.00
    OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse      940      0.51       0.73          1          2         38           0
    Execute   3263      1.93       2.62          7       1998         43          23
    Fetch     6049      1.32       4.41        214      12858         36       13724
    total    10252      3.78       7.77        222      14858        117       13747
    Misses in library cache during parse: 172
    Misses in library cache during execute: 168
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file sequential read                        88        0.04          0.62
      latch: shared pool                              8        0.00          0.00
      latch: row cache objects                        2        0.00          0.00
      latch free                                      1        0.00          0.00
      latch: session allocation                       1        0.00          0.00
       34  user  SQL statements in session.
    3125  internal SQL statements in session.
    3159  SQL statements in session.
    Trace file: ora11g_ora_2064.trc
    Trace file compatibility: 11.01.00
    Sort options: default
           6  sessions in tracefile.
          98  user  SQL statements in trace file.
        9111  internal SQL statements in trace file.
        3159  SQL statements in trace file.
          89  unique SQL statements in trace file.
       30341  lines in trace file.
        6810  elapsed seconds in trace file.
    ###################################### AutoTrace Output#################  
    Statistics
           3901  recursive calls
              0  db block gets
       39030275  consistent gets
           1970  physical reads
            140  redo size
         148739  bytes sent via SQL*Net to client
            860  bytes received via SQL*Net from client
             42  SQL*Net roundtrips to/from client
             73  sorts (memory)
              0  sorts (disk)
            601  rows processed
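    For what it's worth, a hedged sketch of one common rewrite: if a local index ordered by TimeStamp exists, the ORDER BY ... ROWNUM stopkey can read the newest rows first and stop after 601 rows instead of walking the whole partition. The index name is a placeholder, and whether the OR predicate is selective enough for this to pay off is an assumption to be verified.
    -- Hedged sketch only: placeholder index name; verify against the real data volumes.
    CREATE INDEX almevttbl_ts_ix ON AlmEvtTbl (TimeStamp) LOCAL;
    SELECT *
      FROM (SELECT /*+ FIRST_ROWS(601) */ a.*
              FROM AlmEvtTbl PARTITION (ALMEVTTBLP20100323) a
             WHERE a.CustomerId = 0 OR a.ParentCustId = 0
             ORDER BY a.TimeStamp DESC)
     WHERE ROWNUM < 602;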

  • Join on two tables using "LIKE"

    Hi all,
    I need to make a join on two tables (QMEL and STXL) using these keys for the connection:
    - field for join of first table is (QMEL-QMNUM - Notif. number - Char 12)
    - field for join of second table is  (STXL-TDNAME - Char 70)
    If the connection were based on EQ it would be no problem, I think, but I need to connect them based on 'LIKE'.
    Example:
    QMEL-QMNUM = '100100698075'
    I would like to get all rows from STXL which contain, or even better start with, the notification number.
    Examples I would like to connect with QMEL-QMNUM:
    STXL-TDNAME = '100100698075'
    STXL-TDNAME = '1001006980750001'
    STXL-TDNAME = '10010069807500010001'
    STXL-TDNAME = '10010069807500010002'
    etc..
    Am I able to manage that with a select which joins these two tables this way?
    Thanks for any solution
    Vaclav Hosek.

    Hi,
    Write 2 separate selects for this:
    select * from qmel into table i_qmel
    where ...
    if sy-subrc = 0.
      loop at i_qmel into wa.
        r_tdname-option = 'CP'.
        r_tdname-sign = 'I'.
        concatenate wa-qmnum '*' into r_tdname-low.
        append r_tdname.
      endloop.
      select *
      from stxl
      where tdname in r_tdname.
    endif.

  • SAP Query - Need to join 3 Tables via outer join

    Hi,
    I need to join 3 Tables using SAP Query. I wish an OUTER JOIN to be performed for the table join.
    For Example:
    Table 1 has 1000 Entries           Field A             Field B          
    Table 2 has 300 Entries             Field A            Field C
    Table 3 has 100 Entries             Field A           Field D
    The normal Join (INNER JOIN) gives me only the records that exists in all the 3 Tables.
    But what I need is:
    In the above example, if one entry in Table 1 has no matching records in Table 2 / Table 3, there should be an output entry in the SAP Query like
    Field A            Field B              Field C            Field  D
    xxxx              yyyy                  Blank             Blank
    If there is a common record that exists in the tables, that record should appear in the same entry in the Query output.
    Field A            Field B              Field C            Field  D
    xxxx              yyyy                  zzzz              aaaa
    In this way, there should be a minimum of 1000 entries (the largest number of records among the joined tables). Whether there are more than 1000 records in the query output depends on the number of common records.
    Kindly help if you have come across such a scenario.
    thanks & regds
    sriram

    Hi
    Please define the outer joins as below:
    Table1 with Field A  to Table 2 Field A-----outer join
    Table1 with Field A  to Table 3 Field A------outer join
    then you get the output as per your requirement
    Regards
    Damu
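    In plain SQL terms, the join Damu describes looks like the following hedged sketch (placeholder table and column names, not an actual SAP Query/InfoSet definition); every Table 1 row is kept even when Table 2 or Table 3 has no match.
    -- table1(a, b), table2(a, c), table3(a, d) stand in for the three tables.
    SELECT t1.a, t1.b, t2.c, t3.d
    FROM   table1 t1
    LEFT OUTER JOIN table2 t2 ON t2.a = t1.a
    LEFT OUTER JOIN table3 t3 ON t3.a = t1.a;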

  • Opinion needed on best way to map multiple table joins (of the same table)

    Hi all
    I have a query of the format:
    select A.col1, B.col1,C.col1
    FROM
    MASTER_TABLE A, ATTRIBUTE_TABLE B, ATTRIBUTE_TABLE C
    WHERE
    A.key1 = B.key1 (+)
    AND
    A.key1 = C.key1(+)
    AND
    B.key2(+) = 100001
    AND
    C.key2(+) = 100002
    As you can see, I am joining the master table to the attribute table MANY times over (over 30 attributes in my actual query), and I am struggling to find the best way to map this efficiently, as the comparison of script vs. mapping is 1:10 in execution time.
    I would appreciate the opinion of experienced OWB users as to how they would tackle this in a mapping and to see if they use the same approach as I have done.
    Many thanks
    Adi

    SELECT external_reference, b.attribute_value AS req_date,
    c.attribute_value AS network, d.attribute_value AS spid,
    e.attribute_value AS username, f.attribute_value AS ctype,
    g.attribute_value AS airtimecredit, h.attribute_value AS simnum,
    i.attribute_value AS lrcredit, j.attribute_value AS airlimitbar,
    k.attribute_value AS simtype, l.attribute_value AS vt,
    m.attribute_value AS gt, n.attribute_value AS dt,
    o.attribute_value AS datanum, p.attribute_value AS srtype,
    q.attribute_value AS faxnum,
    R.ATTRIBUTE_VALUE AS FAXSRTYPE,
    s.attribute_value AS extno,
    t.attribute_value AS tb, u.attribute_value AS gb,
    v.attribute_value AS mb, w.attribute_value AS stolenbar,
    x.attribute_value AS hcredit, y.attribute_value AS adminbar,
    z.attribute_value AS portdate
    FROM csi_item_instances a,
    csi_iea_values b,
    csi_iea_values c,
    csi_iea_values d,
    csi_iea_values e,
    csi_iea_values f,
    csi_iea_values g,
    csi_iea_values h,
    csi_iea_values i,
    csi_iea_values j,
    csi_iea_values k,
    csi_iea_values l,
    csi_iea_values m,
    csi_iea_values n,
    csi_iea_values o,
    csi_iea_values p,
    csi_iea_values q,
    CSI_IEA_VALUES R,
    csi_iea_values s,
    csi_iea_values t,
    csi_iea_values u,
    csi_iea_values v,
    csi_iea_values w,
    csi_iea_values x,
    csi_iea_values y,
    csi_iea_values z
    WHERE a.instance_id = b.instance_id(+)
    AND a.instance_id = c.instance_id(+)
    AND a.instance_id = d.instance_id(+)
    AND a.instance_id = e.instance_id(+)
    AND a.instance_id = f.instance_id(+)
    AND A.INSTANCE_ID = G.INSTANCE_ID(+)
    AND a.instance_id = h.instance_id(+)
    AND a.instance_id = i.instance_id(+)
    AND a.instance_id = j.instance_id(+)
    AND a.instance_id = k.instance_id(+)
    AND a.instance_id = l.instance_id(+)
    AND a.instance_id = m.instance_id(+)
    AND a.instance_id = n.instance_id(+)
    AND a.instance_id = o.instance_id(+)
    AND a.instance_id = p.instance_id(+)
    AND a.instance_id = q.instance_id(+)
    AND A.INSTANCE_ID = R.INSTANCE_ID(+)
    AND a.instance_id = s.instance_id(+)
    AND a.instance_id = t.instance_id(+)
    AND a.instance_id = u.instance_id(+)
    AND a.instance_id = v.instance_id(+)
    AND a.instance_id = w.instance_id(+)
    AND a.instance_id = x.instance_id(+)
    AND a.instance_id = y.instance_id(+)
    AND a.instance_id = z.instance_id(+)
    AND b.attribute_id(+) = 10000
    AND c.attribute_id(+) = 10214
    AND d.attribute_id(+) = 10132
    AND e.attribute_id(+) = 10148
    AND f.attribute_id(+) = 10019
    AND g.attribute_id(+) = 10010
    AND h.attribute_id(+) = 10129
    AND i.attribute_id(+) = 10198
    AND j.attribute_id(+) = 10009
    AND k.attribute_id(+) = 10267
    AND l.attribute_id(+) = 10171
    AND m.attribute_id(+) = 10184
    AND n.attribute_id(+) = 10060
    AND o.attribute_id(+) = 10027
    AND p.attribute_id(+) = 10049
    AND q.attribute_id(+) = 10066
    AND R.ATTRIBUTE_ID(+) = 10068
    AND s.attribute_id(+) = 10065
    AND t.attribute_id(+) = 10141
    AND u.attribute_id(+) = 10072
    AND v.attribute_id(+) = 10207
    AND w.attribute_id(+) = 10135
    AND x.attribute_id(+) = 10107
    AND y.attribute_id(+) = 10008
    AND z.attribute_id(+) = 10103
    AND external_reference ='07920490103'
    If I run this it takes less than a second in TOAD; when mapped in OWB it takes ages. 10:1 is a conservative estimate; in reality it takes 15-20 minutes. CSI_IEA_VALUES has 30 million rows and CSI_ITEM_INSTANCES has 500,000 rows.
    Hope that helps. I would love to know how others would tackle this query.
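    One way this shape of query is often tackled (a hedged sketch, independent of OWB specifics): a single outer join to csi_iea_values plus conditional aggregation, so the 30-million-row attribute table is scanned once instead of 25 times. Only a few of the attribute IDs from the query above are shown.
    -- Hedged sketch: one pass over csi_iea_values, pivoting attributes by attribute_id.
    SELECT a.external_reference,
           MAX(CASE WHEN v.attribute_id = 10000 THEN v.attribute_value END) AS req_date,
           MAX(CASE WHEN v.attribute_id = 10214 THEN v.attribute_value END) AS network,
           MAX(CASE WHEN v.attribute_id = 10132 THEN v.attribute_value END) AS spid,
           MAX(CASE WHEN v.attribute_id = 10148 THEN v.attribute_value END) AS username
    FROM   csi_item_instances a
    LEFT   JOIN csi_iea_values v
           ON v.instance_id = a.instance_id
    WHERE  a.external_reference = '07920490103'
    GROUP  BY a.external_reference;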

  • Need different kinds of queries for the same result

    Hi all,
    I need a query for the following scenario. The query I mention below gives me the result I need; I would like to know whether there are any other, better ways to get the same result.
    create table xxeaccess
    (client_name varchar2(100), service_id number,requestee_id number,requestor_name varchar2(100));
    insert into xxeaccess values('UBS', 100,1000,'Raghu');
    insert into xxeaccess values('UBS', 100,1000,'Tedla');
    insert into xxeaccess values('SBI', 200,2000,'Sai');
    insert into xxeaccess values('SBH',300,3000,'Radha');
    insert into xxeaccess values ('SBH',300,3000,'Krishna');
    insert into xxeaccess values('Canara Bank',400,4000,'Bharath');
    select * from xxeaccess
    where (client_name,service_id,requestee_id) in
    (select client_name,service_id,requestee_id
    from xxeaccess
    group by client_name,service_id,requestee_id
    having count(*) > 1);
    The requirement is this: client_name, service_id and requestee_id should be the same, only requestor_name should be different, and there should be more than one record with the same client_name, service_id and requestee_id.
    Thanks
    Raghu

    SQL> select * from xxeaccess
      2  where (client_name,service_id,requestee_id) in
      3  (select client_name,service_id,requestee_id
      4  from xxeaccess
      5  group by client_name,service_id,requestee_id
      6  having count(*) > 1)
      7  /
    CLIENT_NAME          SERVICE_ID REQUESTEE_ID REQUESTOR_NAME
    SBH                         300         3000 Radha
    SBH                         300         3000 Krishna
    UBS                         100         1000 Raghu
    UBS                         100         1000 Tedla
    4 rows selected.
    Execution plan
       0      SELECT STATEMENT Optimizer=CHOOSE
       1    0   MERGE JOIN
       2    1     SORT (JOIN)
       3    2       TABLE ACCESS (FULL) OF 'XXEACCESS'
       4    1     SORT (JOIN)
       5    4       VIEW OF 'VW_NSO_1'
       6    5         FILTER
       7    6           SORT (GROUP BY)
       8    7             TABLE ACCESS (FULL) OF 'XXEACCESS'
    SQL> select client_name
      2       , service_id
      3       , requestee_id
      4       , requestor_name
      5    from ( select t.*
      6                , count(*) over (partition by client_name,service_id,requestee_id) mycount
      7             from xxeaccess t
      8         )
      9   where mycount > 1
    10  /
    CLIENT_NAME          SERVICE_ID REQUESTEE_ID REQUESTOR_NAME
    SBH                         300         3000 Radha
    SBH                         300         3000 Krishna
    UBS                         100         1000 Raghu
    UBS                         100         1000 Tedla
    4 rows selected.
    Execution plan
       0      SELECT STATEMENT Optimizer=CHOOSE
       1    0   VIEW
       2    1     WINDOW (SORT)
       3    2       TABLE ACCESS (FULL) OF 'XXEACCESS'
    Regards,
    Rob.
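    Not from the original thread, but since the requirement says the requestor_name values must actually differ, a hedged variation of the analytic approach above counts distinct requestors rather than rows:
    -- Sketch: keeps a row only when at least two *different* requestor_name
    -- values exist for the same client_name/service_id/requestee_id.
    select client_name
         , service_id
         , requestee_id
         , requestor_name
      from ( select t.*
                  , count(distinct requestor_name)
                      over (partition by client_name, service_id, requestee_id) mycount
               from xxeaccess t
           )
     where mycount > 1;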

  • Cost of using subquery vs using same table twice in query

    Hi all,
    In a current project, my supervisor asked me what the cost difference is between the following two methods. The first method uses a subquery to get the name field from table2; a subquery is needed because it requires the field sa_id from table1. The second method uses table2 again under a different alias to obtain table2.name. The two occurrences of table2 are not self-joined. The outcome of the two queries is the same.
    Using subquery:
    select a.sa_id R1, b.other_field R2,
    (select t2.name from table2 t2
    where t2.b_id = a.sa_id) R3
    from table1 a, table2 b
    where ...
    Using same table twice (table2 under 2 different aliases):
    select a.sa_id R1, b.other_field R2, c.name R3
    from table1 a, table2 b, table2 c
    where
    c.b_id = a.sa_id
    and ....
    Can anyone tell me which version is better and why (or under what circumstances which version is better)? And what are the costs involved? Many thanks.

    pl/sql novice wrote:
    Hi all,
    In a current project, my supervisor asked me what the cost difference is between the following two methods. The first method uses a subquery to get the name field from table2; a subquery is needed because it requires the field sa_id from table1. The second method uses table2 again under a different alias to obtain table2.name. The two occurrences of table2 are not self-joined. The outcome of the two queries is the same.
    Using subquery:
    Using same table twice (table2 under 2 different aliases)
    Can anyone tell me which version is better and why (or under what circumstances which version is better)? And what are the costs involved? Many thanks.
    In theory, if you use the scalar "subquery" approach, the correlated subquery needs to be executed for each row of your result set. Depending on how efficiently the subquery performs, this could require significant resources, since that recursive SQL needs to be executed for each row.
    The "join" approach needs to read the table only twice, may be it can even use an indexed access path. So in theory the join approach should perform better in most cases.
    The Oracle runtime engine (since version 8) offers a feature called "filter optimization" that also applies to correlated scalar subqueries. Basically it works like an in-memory hash table that caches the (hashed) input values to the (deterministic) correlated subquery and the corresponding output values. The number of entries in the hash table is fixed up to 9i (256 entries), whereas in 10g it is controlled by an internal parameter that determines the size of the table (and can therefore hold a different number of entries depending on the size of each element).
    If the input value of the next row is the same as the input value of the previous row, the optimization immediately returns the corresponding output value without any further action. If the input value can be found in the hash table, the corresponding output value is returned; otherwise the subquery is executed, the result is kept, and an attempt is made to store the new combination in the hash table, but if a hash collision occurs the combination is discarded.
    So the effectiveness of this clever optimization largely depends on three factors: the order of the input values (as long as the input value doesn't change, the corresponding output value is returned immediately without any further action required), the number of distinct input values, and the rate of hash collisions that occur when attempting to store a combination in the in-memory hash table.
    In summary, you unfortunately can't really tell how well this optimization is going to work at runtime, and it therefore can't be properly reflected in the execution plan.
    You need to test both approaches individually, because in the optimal case the scalar subquery optimization will be superior to the join approach, but it could just as well be the other way around, depending on the factors mentioned.
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/
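    Not part of Randolf's reply, but one way to see the scalar subquery caching he describes is to route the lookup through a PL/SQL function that counts its own calls. This is only a sketch: the call_counter package is hypothetical, and table1/table2 are the made-up tables from the question (assuming b_id is unique in table2 and name fits in varchar2(100)).
    -- Hypothetical counter package: counts how often the lookup is actually executed.
    create or replace package call_counter as
      g_calls number := 0;
      function lookup_name(p_id number) return varchar2;
      function get_calls return number;
    end call_counter;
    /
    create or replace package body call_counter as
      function lookup_name(p_id number) return varchar2 is
        l_name varchar2(100);
      begin
        g_calls := g_calls + 1;   -- count every execution
        select name into l_name from table2 where b_id = p_id;
        return l_name;
      end lookup_name;
      function get_calls return number is
      begin
        return g_calls;
      end get_calls;
    end call_counter;
    /
    -- Plain function call: executed once per row of table1.
    select a.sa_id, call_counter.lookup_name(a.sa_id) as name from table1 a;
    select call_counter.get_calls from dual;
    -- Wrapped in a scalar subquery: the runtime engine can cache the result,
    -- so the call count is usually much lower when sa_id values repeat.
    select a.sa_id, (select call_counter.lookup_name(a.sa_id) from dual) as name
      from table1 a;
    select call_counter.get_calls from dual;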

  • Query Builder - 2 occurrences of same table and Show Related Tables

    In SQL Developer 1.5 the Query Builder still does not allow you to drag two copies of the same table onto the canvas. Additionally, Show Related Tables does not appear to do anything, even after Hide Related Tables. Both of these operations set the cursor to the hourglass, although you can continue working as if the commands had completed OK. The Query Builder is a great time saver, but with limitations like this it is really hobbled.
    Additionally, in order to get join conditions you have to double-click the join in the diagram before the appropriate WHERE clause appears. I have not found any reference to how to use the Query Builder in any documentation. This forum has stated that the Query Builder is supposed to be intuitive and easy to use but this is not the case so far.
    The rest of SQL Developer is a pleasure to use!

    Thanks for your feedback. There are a number of things we want to do to the Query Builder. The issue about dragging 2 copies of a table onto the work surface is a logged enhancement request that we want to address. The Show Related tables and hourglass issues are bugs and need to be addressed. I'll document those.
    Better documentation is also important and we can add that into the product for a future release.
    In the short term, I have on a previous occasion said I'd do a demo of using the Query Builder, so I'll make that a priority.
    Sue
