Logical equivalence of 2 queries

Hi All,
I don't know whether this is the right place for this question, but I'm not sure, so I'm putting it here.
I have a query I am trying to tune which I feel is doing unnecessary fetches and additional joins that are not required (I don't know the background of this query, e.g. its purpose; it was just given to me for tuning).
Hence I have rewritten the query in a form that I believe is logically equivalent to the original, and I need your guidance on whether it really is.
This is original query:
WITH
fi as (select feed_book_status.COB_DATE,
            feed_book_status.FEED_INSTANCE_ID,BOOK_ID , feed_id
            from feed_book_status, feed_instance
            where feed_book_status.cob_date = v_cob_date
            and feed_book_status.FEED_INSTANCE_ID=feed_instance.FEED_INSTANCE_ID
            AND FEED_BOOK_STATUS.COB_DATE= FEED_INSTANCE.COB_DATE),
p as (select book_id, feed_instance_id, position_id from position where feed_instance_id in
       (select feed_instance_id from fi)),
s as (select book_id, position_id, feed_instance_id, type_id from sensitivity where feed_instance_id in
     (select feed_instance_id from fi))
select
  fi.cob_date, fi.feed_id, p.book_id, s.type_id, count(s.book_id) Cnt
from  fi
      inner join p  on p.feed_instance_id = fi.feed_instance_id and fi.book_id = p.book_id
      inner join s on p.feed_instance_id = s.feed_instance_id and p.position_id = s.position_id
group by fi.cob_date, fi.feed_id, p.book_id, s.type_id;

Execution plan
| Id  | Operation                      | Name                        | Rows  | Bytes |TempSpc| Cost (%CPU)| Pstart| Pstop |
|   0 | SELECT STATEMENT               |                             | 40730 |  4613K|       | 65255  (17)|       |       |
|   1 |  TEMP TABLE TRANSFORMATION     |                             |       |       |       |            |       |       |
|   2 |   LOAD AS SELECT               | SENSITIVITY                 |       |       |       |            |       |       |
|*  3 |    HASH JOIN                   |                             |   240 |  9600 |       |    32   (7)|       |       |
|   4 |     TABLE ACCESS BY INDEX ROWID| FEED_INSTANCE               |   238 |  4284 |       |    12   (0)|       |       |
|*  5 |      INDEX RANGE SCAN          | IDX_FEED_INSTANCE_CD_MR     |   238 |       |       |     3   (0)|       |       |
|*  6 |     INDEX RANGE SCAN           | IDX_FBS_CD_FII_BI           |  2981 | 65582 |       |    19   (6)|       |       |
|   7 |   HASH GROUP BY                |                             | 40730 |  4613K|   138M| 65223  (17)|       |       |
|*  8 |    HASH JOIN                   |                             |  1073K|   118M|  9848K| 57613  (18)|       |       |
|*  9 |     HASH JOIN                  |                             |   113K|  8515K|       | 14875  (13)|       |       |
|  10 |      VIEW                      |                             |   240 |  6720 |       |     2   (0)|       |       |
|  11 |       TABLE ACCESS FULL        | SYS_TEMP_0FD9D6813_D59D2159 |   240 |  6720 |       |     2   (0)|       |       |
|* 12 |      HASH JOIN                 |                             |  3854K|   180M|       | 14555  (11)|       |       |
|  13 |       MERGE JOIN CARTESIAN     |                             |   240 |  2400 |       |   496   (3)|       |       |
|  14 |        VIEW                    | VW_NSO_2                    |   240 |  1200 |       |     2   (0)|       |       |
|  15 |         HASH UNIQUE            |                             |   240 |  3120 |       |            |       |       |
|  16 |          VIEW                  |                             |   240 |  3120 |       |     2   (0)|       |       |
|  17 |           TABLE ACCESS FULL    | SYS_TEMP_0FD9D6813_D59D2159 |   240 |  6720 |       |     2   (0)|       |       |
|  18 |        BUFFER SORT             |                             |   240 |  1200 |       |     2 (-240|       |       |
|  19 |         VIEW                   | VW_NSO_1                    |   240 |  1200 |       |     2   (0)|       |       |
|  20 |          HASH UNIQUE           |                             |   240 |  3120 |       |            |       |       |
|  21 |           VIEW                 |                             |   240 |  3120 |       |     2   (0)|       |       |
|  22 |            TABLE ACCESS FULL   | SYS_TEMP_0FD9D6813_D59D2159 |   240 |  6720 |       |     2   (0)|       |       |
|  23 |       PARTITION RANGE ALL      |                             |  3854K|   143M|       | 13741   (9)|     1 |   130 |
|  24 |        TABLE ACCESS FULL       | POSITION                    |  3854K|   143M|       | 13741   (9)|     1 |   130 |
|  25 |     PARTITION RANGE ALL        |                             |    18M|   691M|       | 21789  (22)|     1 |   130 |
|  26 |      TABLE ACCESS FULL         | SENSITIVITY                 |    18M|   691M|       | 21789  (22)|     1 |   130 |
Predicate Information (identified by operation id):
   3 - access("FEED_BOOK_STATUS"."FEED_INSTANCE_ID"="FEED_INSTANCE"."FEED_INSTANCE_ID" AND
              "FEED_BOOK_STATUS"."COB_DATE"="FEED_INSTANCE"."COB_DATE")
   5 - access("FEED_INSTANCE"."COB_DATE"=TO_DATE(' 2010-10-12 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
   6 - access("FEED_BOOK_STATUS"."COB_DATE"=TO_DATE(' 2010-10-12 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
   8 - access("FEED_INSTANCE_ID"="FEED_INSTANCE_ID" AND "POSITION_ID"="POSITION_ID" AND
              "FEED_INSTANCE_ID"="$nso_col_1")
   9 - access("FEED_INSTANCE_ID"="FI"."FEED_INSTANCE_ID" AND "FI"."BOOK_ID"="BOOK_ID")
  12 - access("FEED_INSTANCE_ID"="$nso_col_1")

This is the new query:
WITH
fi as (select feed_book_status.COB_DATE,
            feed_book_status.FEED_INSTANCE_ID,BOOK_ID , feed_id
            from feed_book_status, feed_instance
            where feed_book_status.cob_date = '12-Oct-2010'
            and feed_book_status.FEED_INSTANCE_ID=feed_instance.FEED_INSTANCE_ID
            AND FEED_BOOK_STATUS.COB_DATE= FEED_INSTANCE.COB_DATE)
select  fi.cob_date, fi.feed_id, p.book_id, s.type_id, count(s.book_id) Cnt
from  fi, position p, sensitivity s
where p.feed_instance_id = fi.feed_instance_id
and p.book_id= fi.book_id
and p.feed_instance_id = s.feed_instance_id
and p.position_id = s.position_id
group by fi.cob_date, fi.feed_id, p.book_id, s.type_id;

Execution plan
| Id  | Operation                              | Name                     | Rows  | Bytes | Cost (%CPU)| Pstart| Pstop |
|   0 | SELECT STATEMENT                       |                          |   157 | 18526 |  1380  (14)|       |       |
|   1 |  HASH GROUP BY                         |                          |   157 | 18526 |  1380  (14)|       |       |
|*  2 |   TABLE ACCESS BY LOCAL INDEX ROWID    | SENSITIVITY              |  2276 | 88764 |     6   (0)|       |       |
|   3 |    NESTED LOOPS                        |                          |   255K|    28M|  1209   (2)|       |       |
|   4 |     NESTED LOOPS                       |                          |   112 |  8848 |   533   (3)|       |       |
|*  5 |      HASH JOIN                         |                          |   240 |  9600 |    32   (7)|       |       |
|   6 |       TABLE ACCESS BY INDEX ROWID      | FEED_INSTANCE            |   238 |  4284 |    12   (0)|       |       |
|*  7 |        INDEX RANGE SCAN                | IDX_FEED_INSTANCE_CD_MR  |   238 |       |     3   (0)|       |       |
|*  8 |       INDEX RANGE SCAN                 | IDX_FBS_CD_FII_BI        |  2981 | 65582 |    19   (6)|       |       |
|   9 |      PARTITION RANGE ITERATOR          |                          |     1 |    39 |    14   (0)|   KEY |   KEY |
|  10 |       TABLE ACCESS BY LOCAL INDEX ROWID| POSITION                 |     1 |    39 |    14   (0)|   KEY |   KEY |
|* 11 |        INDEX RANGE SCAN                | IDX_RISK_POSITION_FII_BI |   172 |       |     2   (0)|   KEY |   KEY |
|  12 |     PARTITION RANGE ITERATOR           |                          |     5 |       |     2   (0)|   KEY |   KEY |
|* 13 |      INDEX RANGE SCAN                  | IDX_SENSITIVITY_RPI      |     5 |       |     2   (0)|   KEY |   KEY |
Predicate Information (identified by operation id):
   2 - filter("P"."FEED_INSTANCE_ID"="S"."FEED_INSTANCE_ID")
   5 - access("FEED_BOOK_STATUS"."FEED_INSTANCE_ID"="FEED_INSTANCE"."FEED_INSTANCE_ID" AND
              "FEED_BOOK_STATUS"."COB_DATE"="FEED_INSTANCE"."COB_DATE")
   7 - access("FEED_INSTANCE"."COB_DATE"=TO_DATE(' 2010-10-12 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
   8 - access("FEED_BOOK_STATUS"."COB_DATE"=TO_DATE(' 2010-10-12 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
  11 - access("P"."FEED_INSTANCE_ID"="FEED_BOOK_STATUS"."FEED_INSTANCE_ID" AND "P"."BOOK_ID"="BOOK_ID")
  13 - access("P"."POSITION_ID"="S"."POSITION_ID")

This is Oracle version 10.2.
Rgds,
Aashish

Aashish S. wrote:
This is original query:
WITH
fi as (select feed_book_status.COB_DATE,
feed_book_status.FEED_INSTANCE_ID,BOOK_ID , feed_id
from feed_book_status, feed_instance
where feed_book_status.cob_date = v_cob_date
and feed_book_status.FEED_INSTANCE_ID=feed_instance.FEED_INSTANCE_ID
AND FEED_BOOK_STATUS.COB_DATE= FEED_INSTANCE.COB_DATE),
p as (select book_id, feed_instance_id, position_id from position where feed_instance_id in
(select feed_instance_id from fi)),
s as (select book_id, position_id, feed_instance_id, type_id from sensitivity where feed_instance_id in
(select feed_instance_id from fi))
select
fi.cob_date, fi.feed_id, p.book_id, s.type_id, count(s.book_id) Cnt
from  fi
inner join p  on p.feed_instance_id = fi.feed_instance_id and fi.book_id = p.book_id
inner join s on p.feed_instance_id = s.feed_instance_id and p.position_id = s.position_id
group by fi.cob_date, fi.feed_id, p.book_id, s.type_id;

This is the new query:
WITH
fi as (select feed_book_status.COB_DATE,
feed_book_status.FEED_INSTANCE_ID,BOOK_ID , feed_id
from feed_book_status, feed_instance
where feed_book_status.cob_date = '12-Oct-2010'
and feed_book_status.FEED_INSTANCE_ID=feed_instance.FEED_INSTANCE_ID
AND FEED_BOOK_STATUS.COB_DATE= FEED_INSTANCE.COB_DATE)
select  fi.cob_date, fi.feed_id, p.book_id, s.type_id, count(s.book_id) Cnt
from  fi, position p, sensitivity s
where p.feed_instance_id = fi.feed_instance_id
and p.book_id= fi.book_id
and p.feed_instance_id = s.feed_instance_id
and p.position_id = s.position_id
group by fi.cob_date, fi.feed_id, p.book_id, s.type_id
The two queries don't seem to be equivalent - take subquery P as an example:
because of the IN subquery, every row in POSITION can appear no more than once in P, but when you turn the subquery into a join, each row in POSITION may appear many times, because there may be many rows in FI with the same feed_instance_id.
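A tiny self-contained illustration of that multiplicity point (hypothetical data built on DUAL, not the OP's tables):

```sql
-- Hypothetical example: fi has two rows with the same feed_instance_id
-- (e.g. two books for one feed instance), pos has one matching row.
with fi as (
  select 101 feed_instance_id, 'A' book_id from dual union all
  select 101 feed_instance_id, 'B' book_id from dual
),
pos as (
  select 101 feed_instance_id, 1 position_id from dual
)
-- IN-subquery form: each pos row can appear at most once (returns 1 row)
select p.* from pos p
where  p.feed_instance_id in (select feed_instance_id from fi);

-- Join form: the pos row repeats once per matching fi row (returns 2 rows)
-- unless the join is restricted by further columns or a DISTINCT.
-- select p.* from pos p
-- join   fi on fi.feed_instance_id = p.feed_instance_id;
```

Whether the OP's two queries actually diverge depends on the uniqueness constraints mentioned in step 4b below.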
Part of your performance problem may relate to the fact that the optimizer suffers from a few defects when optimising code that uses subquery factoring ( http://jonathanlewis.wordpress.com/2010/09/13/subquery-factoring-4/ ); it also has problems with ANSI SQL.
For testing purposes, I would try the following:
Step 1 - copy the P and S subqueries inline and see how the plan changes. (My guess would be that Oracle would still do a temp table transformation and create a result set for FI and use it three times - but it's possible that it would do something a little more sensible with POSITION and SENSITIVITY.)
Step 2 - as step 1, but add the /*+ inline */ hint to the definition of FI to force Oracle to copy the code in-line rather than creating a temp table (assuming this hasn't happened in step 1)
Step 3 - copy the FI code inline three times.
Step 4a - assuming the rewrite performs well, document it, consider whether you want to make it look prettier and more understandable.
Step 4b - assuming the rewrite doesn't perform well, start thinking about logical equivalence. Watch out particularly for the implied "select distinct" from IN subqueries, and check what uniqueness constraints you have on the data. (It's possible that your rewrite is actually equivalent because of uniqueness constraints on the data that you haven't told us about.)
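Steps 1 and 2 combined might look like the sketch below. Note this is only a sketch: the /*+ inline */ hint is undocumented in 10.2, so verify in a test environment that it actually takes effect, and v_cob_date is the OP's bind variable.

```sql
-- Sketch of steps 1 and 2: P and S copied inline, FI hinted to stay inline
-- (/*+ inline */ is undocumented in 10.2 - check the plan to confirm it works).
WITH
fi as (select /*+ inline */
              feed_book_status.cob_date,
              feed_book_status.feed_instance_id, book_id, feed_id
       from   feed_book_status, feed_instance
       where  feed_book_status.cob_date = v_cob_date
       and    feed_book_status.feed_instance_id = feed_instance.feed_instance_id
       and    feed_book_status.cob_date = feed_instance.cob_date)
select fi.cob_date, fi.feed_id, p.book_id, s.type_id, count(s.book_id) cnt
from   fi
       inner join (select book_id, feed_instance_id, position_id
                   from   position
                   where  feed_instance_id in (select feed_instance_id from fi)) p
               on p.feed_instance_id = fi.feed_instance_id
              and fi.book_id = p.book_id
       inner join (select position_id, feed_instance_id, type_id
                   from   sensitivity
                   where  feed_instance_id in (select feed_instance_id from fi)) s
               on p.feed_instance_id = s.feed_instance_id
              and p.position_id = s.position_id
group by fi.cob_date, fi.feed_id, p.book_id, s.type_id;
```

Step 3 would then replace each remaining reference to fi with a copy of its defining query.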
Regards
Jonathan Lewis

Similar Messages

  • Use of Logical database in ABAP Queries

    hi,
    Can anybody tell me when/why do we use logical database in Infosets for ABAP Query?
    Regards,
    Divya

    hi,
    pros: you don't need to define so many of your own tables, fields, and additional coding in your infoset
    cons: performance in huge DBs
    A.
    Message was edited by: Andreas Mann

  • Conversion of logical system names in queries

    hello,
    We have the following problem in our multisource model.
    Constant values have been assigned in queries in development : for instance, in line 1, the profit center is restricted to the constant value D3/9000/TTBC600
    D3 : development source system
    9000 : controlling area
    TTBC600 : Profit center code
    When we transport the query, we were expecting a conversion of the source system ID in this constant value, but that is not the case.
    Could you tell me if this is normal behaviour?
    If yes, it means that we have to hard-code the constant value without reference to the source system ID (with indirect input, you cannot enter more than the length of the object, i.e. TTBC600 in this case), and this can be the origin of conflicts if we have the same code in different source systems.
    Thanks in advance for any answer
    OD

    Did your transports move up ok to your target system?
    I am asking because your problem could be your mapping of source system to source system ID is not set up correctly or your "conversion of source system names after transport" is not set up correctly in your target system.

  • DRG-50920 part of phrase not itself a phrase or equivalence in phrase query

    Hello everyone on this forum.
    How can I do grouping of Booleans in a phrase query ?
    Specifically, I need to use a query such as the following one (fictitious example):
    CONTAINS( my_indexed_column
            , ' aaaa ( (bbb1 bbb2) OR (bbb3 bbb4 bbb5) ) cccc ( (ddd1 ddd2) OR (ddd3) ) eeee '
            , 1
            ) > 0
    Unfortunately I found that executing such a query causes the following error:
    ORA-29902: error in executing ODCIIndexStart() routine
    ORA-20000: Oracle Text error:
    DRG-50900: text query parser error on line 1, column 43
    DRG-50920: part of phrase not itself a phrase or equivalence
    I am using Oracle Text version 10.2.0.2.0
    It looks like the Oracle Text parser, in phrase queries, only allows grouping if it is/evaluates to a simple sequence of terms.
    Unfortunately the EQUIV and SYN operators don't solve my problem, because:
    1) The EQUIV operator only works on a term-by-term basis ;
    Using
    (bbb1 bbb2) EQUIV (bbb3 bbb4 bbb5) causes error
    DRG-50921: EQUIV operand not a word or another EQUIV expression
    2) The SYN operator can deal with compound terms/phrases, but Oracle Text parser will choke on a phrase query ;
    Assuming I created a CTX_THES thesaurus , that has the SYN relation: bbb1 bbb2 ==> bbb3 bbb4 bbb5
    Using
            ' aaaa '
         || ' (' || CTX_THES.SYN('bbb1 bbb2','my_thesaurus') || ') '
         || ' cccc '
        causes error
    DRG-50920: part of phrase not itself a phrase or equivalence
    So, short of expanding all the boolean operations in my query to their logical equivalents, I have not figured a solution for my problem.
    For my fictitious example, that would mean something like:
    CONTAINS( my_indexed_column
            , '  ( aaaa (bbb1 bbb2)      cccc (ddd1 ddd2) eeee )
              OR ( aaaa (bbb3 bbb4 bbb5) cccc (ddd1 ddd2) eeee )
              OR ( aaaa (bbb1 bbb2)      cccc (ddd3) eeee )
              OR ( aaaa (bbb3 bbb4 bbb5) cccc (ddd3) eeee ) '
            , 1
            ) > 0
    which seems a complicated workaround.
    Does anybody have a better solution ?
    Thanks in advance.

    Dear Roger Ford
    I was afraid that might be precisely the answer to my question; bad news :-( ...
    Thanks anyway for your quick answer.
    Best regards

  • Nvl logical equivalent

    Hi,
    I've got something like that in my query:
    SELECT *
      FROM (SELECT   *
                FROM zzz_firmy nad
               WHERE NOT EXISTS (
                        SELECT 1
                          FROM zzz_firmy nzm
                         WHERE nzm.frm_nzm_id = nad.frm_nzm_id
                           AND NVL (nad.frm_audyt_dm, nad.frm_audyt_dt) >
                                          NVL (nzm.frm_audyt_dm, nzm.frm_audyt_dt))
                 AND frm_audyt_st = '1'
                 AND frm_nazwa LIKE :1
            ORDER BY NVL (frm_audyt_dm, frm_audyt_dt) DESC)
    WHERE ROWNUM < 2;
    Plan   
    SELECT STATEMENT  CHOOSE Cost: 5,999,041  Bytes: 2,181  Cardinality: 1         
    8 COUNT STOPKEY       
      7 VIEW XXXADMIN.    Cost: 5,999,041  Bytes: 39,992,997  Cardinality: 18,337        
       6 SORT ORDER BY STOPKEY      Cost: 39,516  Bytes: 4,180,836  Cardinality: 18,337    
        5 FILTER   
         2 TABLE ACCESS BY INDEX ROWID XXX.zzz_FIRMY       Cost: 38,884  Bytes: 4,180,836  Cardinality: 18,337    
          1 INDEX RANGE SCAN NON-UNIQUE XXX.NBI_FRM_NAZWA        Cost: 562  Cardinality: 72,982   
         4 TABLE ACCESS BY INDEX ROWID XXX.zzz_FIRMY       Cost: 325  Bytes: 720  Cardinality: 45         
          3 INDEX RANGE SCAN NON-UNIQUE XXX.NBI_FRM_FRM_NZM_FK        Cost: 6  Cardinality: 896   
    How to improved that ?
    Morover , what is logical equivalence of :
    NVL (nad.frm_audyt_dm, nad.frm_audyt_dt) > NVL (nzm.frm_audyt_dm, nzm.frm_audyt_dt)
    I think that's the problem: indexes are not used.
    Regards.
    HR

    user10388717 wrote:
    Plan
    SELECT STATEMENT CHOOSE Cost: 5,999,041 Bytes: 2,181 Cardinality: 1
    8 COUNT STOPKEY
    7 VIEW XXXADMIN. Cost: 5,999,041 Bytes: 39,992,997 Cardinality: 18,337
    6 SORT ORDER BY STOPKEY Cost: 39,516 Bytes: 4,180,836 Cardinality: 18,337
    5 FILTER
    2 TABLE ACCESS BY INDEX ROWID XXX.zzz_FIRMY Cost: 38,884 Bytes: 4,180,836 Cardinality: 18,337
    1 INDEX RANGE SCAN NON-UNIQUE XXX.NBI_FRM_NAZWA Cost: 562 Cardinality: 72,982
    4 TABLE ACCESS BY INDEX ROWID XXX.zzz_FIRMY Cost: 325 Bytes: 720 Cardinality: 45
    3 INDEX RANGE SCAN NON-UNIQUE XXX.NBI_FRM_FRM_NZM_FK Cost: 6 Cardinality: 896
    i think thats the problem indekses are not used.
    The plan says indexes are being used. Are the cardinalities right out of each step?
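    On the NVL question itself: NVL(x, y) behaves like CASE WHEN x IS NOT NULL THEN x ELSE y END, so rather than expanding the predicate into four OR'd cases, one option (a sketch only; the index name is made up, and you would need to check it suits your other queries) is a function-based index on the NVL expression:

    ```sql
    -- Sketch: a function-based index on the NVL expression lets the optimizer
    -- evaluate it from the index instead of the table (index name hypothetical).
    CREATE INDEX idx_frm_nzm_audyt
        ON zzz_firmy (frm_nzm_id, NVL(frm_audyt_dm, frm_audyt_dt));

    -- The correlated NOT EXISTS predicate can then match the indexed expression:
    --   nzm.frm_nzm_id = nad.frm_nzm_id
    --   AND NVL(nad.frm_audyt_dm, nad.frm_audyt_dt)
    --         > NVL(nzm.frm_audyt_dm, nzm.frm_audyt_dt)
    ```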

  • Help Needed in Join

    Hi,
    Am using oracle 10g.
    I have two tables namely Table A, Table B. having common column b/w these two tables.
    Table A
    Id
    Name
    Age
    Table B
    ID
    Salary
    designation
    My output needed as follows,
    ID, Name,Age, Salary,Designation
    I want to join Table A and Table B based on Id and display all the fields. I am able to do this. But on some occasions the Id will not be available in Table A or Table B.
    For example: ID = 5 is available in Table A but not in Table B; then my output should have Salary and Designation as NULL.
    On the other hand, if ID = 5 is not in Table A but is available in Table B, then in my output Name and Age will be NULL, and Salary and Designation will have data.
    We cannot use a plain right/left outer join for this logic. Please suggest which join I need to use, with some sample queries.

    born2win wrote:
    we cannot use right/left outer join on this logic. Please suggest me join do i need to do for this logic with some sample queries.
    You need a FULL OUTER JOIN in this case:
    SELECT *
    FROM tableA a
         FULL OUTER JOIN tableB b on a.id = b.id ;
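    Since either side of a full outer join can be NULL, it is common to coalesce the key so the ID column is always populated (same hypothetical table and column names as in the question):

    ```sql
    -- COALESCE picks the id from whichever table actually has the row.
    SELECT COALESCE(a.id, b.id) AS id,
           a.name, a.age,
           b.salary, b.designation
    FROM   tableA a
           FULL OUTER JOIN tableB b ON a.id = b.id;
    ```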

  • Alv output- download to excel file

    Hi
    I have ALV report. My requirement is
    For example i have 10 records in my ALV output.
    I want to download first 5 data to excel file.
    so i need to select the data and click the button download in alv screen. I created the download button in ALV screen.
    how to write coding for this

    Hi Kumar K,
    You can do it by filling another internal table from the final internal table which you displayed...
    Suppose you want records 5 to 12, then:
    LOOP AT itab FROM 5 TO 12.
    Append itab to itab2.
    ENDLOOP.
    So now itab2 contains records 5 to 12...
    Logic:
    Create one Custom Button ... Now For Sy-ucomm of that button... provide popup window with FROM and TO parameters...
    Then using Loop... Endloop... select that much records form internal table to another internal table say itab2...
    Now using GUI_DOWNLOAD or WS_DOWNLOAD or any other FMs and pass the internal table to this FM...
    For more information on LOOP Syntax...
    LOOP AT itab - cond
    Syntax
    ... [FROM idx1] [TO idx2] [WHERE log_exp].
    Extras:
    1. ... FROM idx1
    2. ... TO idx2
    3. ... WHERE log_exp
    Effect
    The table rows to be read in a LOOP-loop can be limited by optional conditions; if no conditions are specified , all rows of the table are read.
    Addition 1
    ... FROM idx1
    Effect
    The specification FROM is only possible with standard tables and sorted tables. This specification only accepts table rows starting from table index idx1. For idx1, a data object of the type i is expected. If the value of idx1 is smaller or equal to 0, then it will be set to 1. If the value is larger than the number of table rows, the loop is not passed through.
    Addition 2
    ... TO idx2
    Effect
    The specification TO is only possible with standard tables and sorted tables. The specification only accepts table rows after table index idx2. For idx2, a data object of the type i is expected. If the value of idx2 is smaller or equal to 0, then the loop will not be passed. If the value is larger than the number of table rows, then the value will be set to the number of rows. If idx2 is smaller than idx1, then the loop is not passed as well.
    Addition 3
    ... WHERE log_exp
    Effect
    WHERE can be specified with all table-types. After WHERE, you can specify any logical expression log_exp in which the first operand of any singular comparison is a component of the internal table. For this reason, all logical expressions are possible except for IS ASSIGNED, IS REQUESTED and IS SUPPLIED. Dynamic specification of a component through bracketed character-type data objects is not possible. Loops at sorted tables must have compatible operands of the logical expression. All rows are read for which the logical expression is true.
    Notes
    The logical expression specified after WHERE is analyzed once at entry into the loop. Possible changes of the second operand during loop processing are not taken into account.
    While with standard tables all rows of the internal table are checked for the logical expression of the WHERE- addition, with sorted tables and hash tables (as of Release 7.0) you can achieve optimized access by checking that at least the beginning part of the table key in sorted tables and the entire table key in hash tables is equal in the logical expression through queries linked with AND. Optimization also takes effect if the logical expression contains other queries linked with AND with arbitrary operators.
    Hope it will solve your problem..
    Thanks & Regards
    ilesh 24x7

  • Issue more than Planned in Production Order

    Hi All,
    I have a problem with 2007B SP00 PL11 for Issue for Production.
    In 2005B, I can issue the BOM Components more than planned. But in 2007B version, I am unable to issue more than planned. When I right-click at Production Order to Issue Component or at Issue for Production and copy from Production order. no items are displayed.
    I have already issued in full at the 1st issue. Then subsequently, i need to issue some more qty for certain components. But i am unable to do that.
    Has this feature of issue more been changed in 2007B?
    What other method can be adopted without changing the planned qty?
    Please advise.
    Regards
    Jessie

    Hi guys,
    Thanks for the fast response.
    Duplicating the previous issue to create the over-issue is not a good solution. It is very troublesome; it is not a good procedure to ask customers to do this step.
    Changing the planned qty will mean that I am not able to trace my initial plan and have to go back to the BOM. Again, this is not logical, especially when the queries use figures directly from the production order tables.
    Since it can be done in 2005 version, why is SAP taking it out this feature in 2007?

  • Condition WHERE in LOOP

    Good afternoon,
    I have the following issue,
    I have to do a LOOP over an internal table, i.e.:
    LOOP AT it_tabla WHERE lgart = '0100' OR lgart = '0200' OR lgart = '0300' OR lgart = '0400' OR lgart = '0500' OR lgart = '0600' OR lgart = '0700' OR lgart = '0800'.
    ENDLOOP.
    The question is: is there any way to avoid writing so many ORs in the LOOP statement?
    Kind regards.

    I think Rob means that it will go through the table once.
    It's true for Standard tables.  But as it says in the help on the WHERE clause.
    While with standard tables all rows of the internal table are checked for the logical expression of the WHERE- addition, with sorted tables and hash tables (as of Release 7.0) you can achieve optimized access by checking that at least the beginning part of the table key in sorted tables and the entire table key in hash tables is equal in the logical expression through queries linked with AND. Optimization also takes effect if the logical expression contains other queries linked with AND with arbitrary operators
    In this case though, the queries are linked with OR - so you still get all rows checked! 
    Now, if it_tabla is a sorted table with key lgart, then you could do the following:
    PERFORM read_the_table USING: '0100', '0200',...
    FORM read_the_table USING i_lgart TYPE ...
      data: l_tabix TYPE sytabix.
      READ TABLE it_table WITH TABLE KEY lgart = i_lgart TRANSPORTING NO FIELDS.
      l_tabix = sy-tabix.
      LOOP AT it_table FROM l_tabix INTO ls_wa.
        IF ls_wa-lgart NE i_lgart.
          EXIT.
        ENDIF.
        " Do stuff
      ENDLOOP.
    ENDFORM.
    matt

  • Return latest transaction data, based upon transaction dates.

    I appreciate I'm being a little dense here, and I have searched, read, and tried out a few different solutions I've seen given around the place. However, I think I'm struggling more with the naming conventions and logic of other people's queries than you might in understanding mine (here's hoping!)
    I have a huge table, which contains a record for every transaction which has an effect on our inventory (yup - BIG table!)
    For a given transaction type 'CHC' (CHange Costs) I want to return the Part code, Transaction Date and Transaction cost of the LAST TWO changes.
    Because its to be used for tracking updates to the cost of materials, and further for calculating the ongoing effect of these, I just need the two for now.
    So,
    Table is I50F
    Columns required are
    I50PART - Part Code
    I50TDAT - Transaction Date
    I50UCOST - Price changed to
    I50TRNS - Transaction Type - we just want CHC
    Sample Data (Including just columns we are interested in)
    I50PART              I50TDAT             I50UCOST         I50TRNS
    BACCA001             08/03/2006 07:34:51 0.08829          CHC    
    BACCA001             25/07/2007 08:26:30 0.10329          CHC    
    BACCA001             10/04/2008 16:29:02 0.10639          CHC    
    BACCA003             20/06/2006 12:22:30 0.16814          CHC    
    BACCA003             25/07/2007 08:26:54 0.17024          CHC    
    BACCA003             10/04/2008 13:30:12 0.17535          CHC    
    BACCA004             28/08/2007 15:46:03 0.06486          CHC    
    BACCA004             28/08/2007 15:49:15 0.06328          CHC    
    BACCA004             30/10/2008 09:22:40 0.06952          CHC    
    BACCA004             13/01/2009 09:09:07 0.06867          CHC    
    BACCA005             25/07/2007 08:27:24 0.06715          CHC    
    BACCA005             10/04/2008 15:45:14 0.06916          CHC    
    BACCA005             30/10/2008 09:05:17 0.07453          CHC    
    BACCA005             13/01/2009 09:06:49 0.07275          CHC
    To take a part in isolation, BACCA005;
    I'm interested in the last two records.
    It makes sense for there to be two records output per part at this stage, as it may be that the powers that be decide that they want the last 3, or 4, or whatever (I'm sure everybody has similar experiences with beancounters).
    Is it A) easy, and B) relatively efficient? There are 2.4m records in the table.
    If I've been stupid and not included enough info, please do [metaphorically] poke me in the eye, and I'll pad out a bit.
    Thanks ever so much for reading - and even more so if you can help!
    Cheers
    J

    Analytic functions FTW!
    with I50F as (select 'BACCA001' I50PART, to_date('08/03/2006 07:34:51', 'dd/mm/yyyy hh24:mi:ss') I50TDAT, 0.08829 I50UCOST, 'CHC' I50TRNS from dual union all
                  select 'BACCA001' I50PART, to_date('25/07/2007 08:26:30', 'dd/mm/yyyy hh24:mi:ss') I50TDAT, 0.10329 I50UCOST, 'CHC' I50TRNS from dual union all
                  select 'BACCA001' I50PART, to_date('10/04/2008 16:29:02', 'dd/mm/yyyy hh24:mi:ss') I50TDAT, 0.10639 I50UCOST, 'CHC' I50TRNS from dual union all
                  select 'BACCA003' I50PART, to_date('20/06/2006 12:22:30', 'dd/mm/yyyy hh24:mi:ss') I50TDAT, 0.16814 I50UCOST, 'CHC' I50TRNS from dual union all
                  select 'BACCA003' I50PART, to_date('25/07/2007 08:26:54', 'dd/mm/yyyy hh24:mi:ss') I50TDAT, 0.17024 I50UCOST, 'CHC' I50TRNS from dual union all
                  select 'BACCA003' I50PART, to_date('10/04/2008 13:30:12', 'dd/mm/yyyy hh24:mi:ss') I50TDAT, 0.17535 I50UCOST, 'CHC' I50TRNS from dual union all
                  select 'BACCA004' I50PART, to_date('28/08/2007 15:46:03', 'dd/mm/yyyy hh24:mi:ss') I50TDAT, 0.06486 I50UCOST, 'CHC' I50TRNS from dual union all
                  select 'BACCA004' I50PART, to_date('28/08/2007 15:49:15', 'dd/mm/yyyy hh24:mi:ss') I50TDAT, 0.06328 I50UCOST, 'CHC' I50TRNS from dual union all
                  select 'BACCA004' I50PART, to_date('30/10/2008 09:22:40', 'dd/mm/yyyy hh24:mi:ss') I50TDAT, 0.06952 I50UCOST, 'CHC' I50TRNS from dual union all
                  select 'BACCA004' I50PART, to_date('13/01/2009 09:09:07', 'dd/mm/yyyy hh24:mi:ss') I50TDAT, 0.06867 I50UCOST, 'CHC' I50TRNS from dual union all
                  select 'BACCA005' I50PART, to_date('25/07/2007 08:27:24', 'dd/mm/yyyy hh24:mi:ss') I50TDAT, 0.06715 I50UCOST, 'CHC' I50TRNS from dual union all
                  select 'BACCA005' I50PART, to_date('10/04/2008 15:45:14', 'dd/mm/yyyy hh24:mi:ss') I50TDAT, 0.06916 I50UCOST, 'CHC' I50TRNS from dual union all
                  select 'BACCA005' I50PART, to_date('30/10/2008 09:05:17', 'dd/mm/yyyy hh24:mi:ss') I50TDAT, 0.07453 I50UCOST, 'CHC' I50TRNS from dual union all
                  select 'BACCA005' I50PART, to_date('13/01/2009 09:06:49', 'dd/mm/yyyy hh24:mi:ss') I50TDAT, 0.07275 I50UCOST, 'CHC' I50TRNS from dual)
    select I50PART, I50TDAT, I50UCOST, I50TRNS
    from   (select I50PART, I50TDAT, I50UCOST, I50TRNS, row_number() over (partition by I50PART order by I50TDAT desc) rn
            from   I50F
            where  I50TRNS = 'CHC')
    where  rn <= 2
    order by I50PART, I50TDAT desc;
    I50PART  I50TDAT               I50UCOST I50
    BACCA001 10/04/2008 16:29:02     .10639 CHC
    BACCA001 25/07/2007 08:26:30     .10329 CHC
    BACCA003 10/04/2008 13:30:12     .17535 CHC
    BACCA003 25/07/2007 08:26:54     .17024 CHC
    BACCA004 13/01/2009 09:09:07     .06867 CHC
    BACCA004 30/10/2008 09:22:40     .06952 CHC
    BACCA005 13/01/2009 09:06:49     .07275 CHC
    BACCA005 30/10/2008 09:05:17     .07453 CHC

  • To select all files if * is applied

Hi,
I have a requirement as below.
On the selection screen of a program there are two fields:
file path
file name
The path field specifies the path of the file and the file name field the name of the file.
My requirement: up till now only the one file specified on the selection screen is picked up. We now want to select all files matching a pattern, just as in a search: e.g. if the user gives the value File_* then all files beginning with File_ should be fetched, the same as it works when searching for file names.
Please suggest how to approach this.
Regards,
arora

Also, to add on: I have defined the selection parameter as
p_file TYPE epsf-epsfilnam.
If more than one file is selected, that will need to be handled in the program as well. I am giving the logic implemented so far to explain better; the first concern is that the file names should be picked up according to the pattern specified.
Please suggest.
Further on in the program I am making the FM call below.
* Get files of directory
    CALL FUNCTION 'EPS_GET_DIRECTORY_LISTING'
      EXPORTING
        dir_name               = p_path
        file_mask              = p_file
      TABLES
        dir_list               = gt_dir_file
      EXCEPTIONS
        invalid_eps_subdir     = 1
        sapgparam_failed       = 2
        build_directory_failed = 3
        no_authorization       = 4
        read_directory_failed  = 5
        too_many_read_errors   = 6
        empty_directory_list   = 7
        OTHERS                 = 8.
where gt_dir_file TYPE STANDARD TABLE OF epsfili.
Then I am reading the files one by one:
    *Read files of directory one by one
    LOOP AT gt_dir_file INTO gs_dir_file.
      CONCATENATE p_path
                  gs_dir_file-name
                  INTO g_dataset SEPARATED BY gc_sep.
*Open file saved on the application server
    OPEN DATASET g_dataset FOR INPUT IN TEXT MODE ENCODING DEFAULT.
      IF sy-subrc = 0.
        DO.
    *Read file contents
          READ DATASET g_dataset INTO gs_string_data.

  • ExecutorService and awaitTermination

I have an executor service (MainES) which starts a thread. This thread in turn uses another executor service (S1) to execute, say, 10 threads (T1-T10), each of which sleeps for 20 secs.
My question is: when I awaitTermination on MainES, why does it return true even though the tasks in S1 have not completed? Why does it not wait for all the tasks in S1 to finish? They are the child threads.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class TestES {

    /** Creates a new instance of TestES */
    public TestES() {
    }

    public static void main(String[] args) {
        TestES testEs = new TestES();
        testEs.process();
    }

    public void process() {
        ExecutorService esMain = Executors.newCachedThreadPool();
        esMain.execute(new Handler());
        esMain.shutdown();
        boolean result = false;
        try {
            result = esMain.awaitTermination(1000, TimeUnit.SECONDS);
        } catch (InterruptedException ex) {
            ex.printStackTrace();
        }
        System.out.println("awaitTermination? " + result);
    }

    class Handler implements Runnable {
        Handler() {
        }

        public void run() {
            ExecutorService es = Executors.newCachedThreadPool();
            for (int i = 0; i < 10; i++) {
                es.execute(new HandlerChild());
            }
            es.shutdown();
        }
    }

    class HandlerChild implements Runnable {
        HandlerChild() {
        }

        public void run() {
            // read and service request
            try {
                Thread.sleep(100);
                System.out.println("HandlerChild........" + Thread.currentThread());
            } catch (InterruptedException ex) {
                ex.printStackTrace();
            }
        }
    }
}
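The short answer: esMain considers itself terminated as soon as its only task (Handler) returns, and Handler returns right after submitting the children to the inner pool; the inner pool's tasks are invisible to esMain. One way to make the outer awaitTermination wait is to have Handler.run() block on the inner service's own awaitTermination, so the Handler task itself stays alive until the children finish. A sketch (class names are hypothetical, not from the original post):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class TestESFixed {

    static class HandlerChild implements Runnable {
        public void run() {
            try {
                Thread.sleep(100);
                System.out.println("HandlerChild " + Thread.currentThread().getName());
            } catch (InterruptedException ex) {
                Thread.currentThread().interrupt();
            }
        }
    }

    static class Handler implements Runnable {
        public void run() {
            ExecutorService es = Executors.newCachedThreadPool();
            for (int i = 0; i < 10; i++) {
                es.execute(new HandlerChild());
            }
            es.shutdown();
            try {
                // Block this Handler task until all children finish;
                // this keeps esMain "busy" and delays its termination.
                es.awaitTermination(1000, TimeUnit.SECONDS);
            } catch (InterruptedException ex) {
                Thread.currentThread().interrupt();
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService esMain = Executors.newCachedThreadPool();
        esMain.execute(new Handler());
        esMain.shutdown();
        boolean done = esMain.awaitTermination(1000, TimeUnit.SECONDS);
        // Reported only after all 10 children have completed.
        System.out.println("awaitTermination? " + done);
    }
}
```

Alternatives would be sharing one pool, or having Handler collect Futures and get() them; the key point is that an ExecutorService only tracks tasks submitted to it directly.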

    ForumKid2 wrote:
    I have two long queries that run about 30 seconds each. Without the ExecutorService it takes approximately 1 minute to get the results single threaded.
I setup the above logic to call the queries and it takes the same amount of time (give or take a few milliseconds) to return the results. In essence, the queries should run at the same time, therefore taking approximately half the time, which would be around 30 seconds. That's why I think I am doing something wrong.

If your examples are anything to go by, you most likely do the work in the constructor. Don't do that; do the work in the call() method. That is what it is there for. If you do the work in the constructor, the calling thread will simply do the work before adding it to the ExecutorService, robbing you of the benefit of that service.
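To illustrate that point, here is a minimal sketch (SlowQuery is a hypothetical stand-in for the poster's query tasks): the constructor only stores state, the expensive work runs inside call(), and invokeAll runs both tasks concurrently so the total wall time is roughly one task's duration, not the sum:

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelQueries {

    // Hypothetical task: the slow work happens inside call(),
    // so it runs on a pool thread, not on the submitting thread.
    static class SlowQuery implements Callable<String> {
        private final String name;

        SlowQuery(String name) {
            this.name = name;   // constructor only stores state, does no work
        }

        public String call() throws Exception {
            Thread.sleep(500);  // stand-in for a 30-second query
            return name + " done";
        }
    }

    public static void main(String[] args) throws Exception {
        ExecutorService es = Executors.newFixedThreadPool(2);
        long start = System.nanoTime();
        // invokeAll submits both tasks and blocks until both complete
        List<Future<String>> results = es.invokeAll(
                List.of(new SlowQuery("q1"), new SlowQuery("q2")));
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        for (Future<String> f : results) {
            System.out.println(f.get());
        }
        // With a 2-thread pool the tasks overlap, so elapsed is close
        // to one task's 500 ms rather than 1000 ms.
        System.out.println("elapsed ms: " + elapsedMs);
        es.shutdown();
    }
}
```

If the sleep were moved into the constructor, the main thread would serialize the two delays before invokeAll even started, reproducing the "same as single threaded" symptom described above.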

  • MII Trends - add data onto chart object

    Hello,
Can anyone please provide some thoughts on my current requirement? It's pretty basic:
I select a site (Historian datasource), search for a historian tag, and then based on the selection I trend it (let's say a line chart).
Now I should select another site (historian datasource), search for a historian tag, and add that tag onto the line chart along with the previous one.
So basically, give the user an option to select any historian data source and trend it.
While I do this, I should still be able to use the built-in chart properties, mainly the time control, to go back and forth in time.
While I am doing this, I am also keeping a running list of selected tags in another table with the tag's other properties (current value, Eng Units, calculation, Description, datasource), and giving this running list a check box option: the user can check/uncheck an item and that removes the pen from the chart, while keeping the chart pen color consistent with the running list item's color (check box color, for ease of matching). I am able to accomplish this part successfully.
I greatly appreciate it if anyone can give me some ideas on how to dynamically add tags onto the chart from multiple datasources while maintaining the built-in properties of the line chart.
As you can see, I am almost building a product here, similar to Aspen's web21 or OSI's process explorer, using MII.
Any input is greatly appreciated.
    Regards,
    pramod

    The built-in features of the iChart allow the basic premise of what you are trying to do, however it limits you to one Data Server at a time in the iChart, because a single query template only has one data server source, but with an XacuteConnector as your data source you can do whatever you need.
    You'll need to keep track of their choices either in an array or string list so you can appropriately bind it to the Param.x in your underlying Xacute Query template.  The BLS logic for running the queries, and overloading the Server and Tags will be up to you.
    The start and end date mapping for your Xacute query template will make the chart work like a typical Historian line trend with time controls. You'll just need to create the date inputs (DateTime datatype in your Transaction property) and link to the QueryStartDate/QueryEndDate of each tag query action.
    You don't need to merge the data into one rowset. The chart will be happy with 1 Rowset per tag like you see from either Current, History, or HistoryEvent from a real Historian (not like Simulator's merged normalized dataset).  You may just want to consider creating the Transaction OutputXML, and use the AssignXML for the first data source query, followed by AppendXML for just the Rowset(s) from each subsequent data source query.  This way you keep building into the OutputXML in your loop.

  • ABAP in CRM

    Hi experts,
I am actually new to CRM, and the ABAP here is rather different. I couldn't get on with the logic for the select queries; here things are different, like GUID, Object ID, business transaction, etc. Please, someone kindly help me out with some sample code, etc.
    Thank You.

    Check the below links :
    Links to CRM Documentation
    Brainstorming Discussion - Links to CRM Documentation
    http://help.sap.com/bp_crmv340/CRM_DE/BBLibrary/html/BBLibrary.htm
    http://www.ixos.com/local/es/home/products/pro-integration/pro-integration-crm/pro-integration-crm-sap.htm
    http://wiki.ittoolbox.com/index.php/Topic:MySAP_ERP
    http://www.cadservices.nl/site/cadservices/28/27/Designer_mySAP_PLM_2005.pdf
    http://www.unevoc.unesco.org/southernafrica/8-km/SAPKW_SolutionBrief.pdf
    CRM material
    http://help.sap.com/saphelp_erp2005/helpdata/en/06/4f220a51d173478a0d60f01645d914/frameset.htm
    http://help.sap.com/saphelp_erp2005/helpdata/en/6a/0e1d3ba1f2ab13e10000000a11402f/frameset.htm
    Thanks
    Seshu

  • Output redirect in ttisql

    Hi,
    I am trying to use TimesTen for benchmarking tpc-h queries. My first approach is to use:
    time ttisql < query > /dev/null
    , as I do not want to account for result printing. However, opening and closing the ODBC connection introduces a delay of 2-3 seconds, while the query execution time is 1-2 seconds, which is unacceptable.
    Another thought was to connect through ttisql and use "timing 1" to get execution times. Still, I have not managed to find any way of redirecting the output to /dev/null inside ttisql (starting with ttisql > /dev/null will make execution time printing impossible).
    What is the best way of running benchmark queries in TimesTen?
    Thanks.

    The 'best' way is usually to run the queries from a program that includes timing logic but for long queries such as TPC/H ttIsql is okay.
    If you issue the command 'verbosity 0' in ttIsql it will suppress the display of returned data while still outputting the timing information. That should do the trick for you.
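Putting the two together, a benchmark script might look like this (the DSN and query are placeholders; "timing 1" prints each statement's execution time while "verbosity 0" suppresses the returned rows):

```
timing 1;
verbosity 0;
SELECT COUNT(*) FROM lineitem;
quit;
```

Running it with something like `ttIsql -f bench.sql yourDSN` pays the connect/disconnect cost once per script rather than once per query, so the printed timings reflect only query execution.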
    Chris
