About indexing and select conditions

Hi,
I hope to clear up some doubts about table indexing...
1)
SAP help says that:
NOT, OR and IN are not supported by indexes unless all of the fields in the SELECT clause and WHERE condition are also contained in the index.
But how come, when I use e.g. IN and check the index used through ST05, it still shows the selected index? So is it still true that NOT, OR and IN are not supported?
2)
Will the positioning of the fields in the WHERE condition affect performance with respect to the fields in the index?
e.g. fields a, b and c are indexed in that sequence in the table.
So, if you do the following: select ... from table
where a = 1 and d = 1 and b = 1 and c = 1.
If b and c are positioned in front of d, will it help performance?
Thanks,
Charles

Hi!
SAP runs on many different database systems - some are smarter than others. Some databases won't recognize that IN can be translated into a list of EQ comparisons; others are better and do this translation internally (your case).
NOT, however, seems to always prevent index use. So instead of 'flag ne space' use 'flag eq c_x'.
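For illustration, a minimal sketch in plain SQL (a made-up table ORDERS with a FLAG column; the same idea applies to the ABAP condition above):
-- A negative condition usually prevents use of an index on FLAG:
select * from orders where flag <> ' ';
-- Rewritten as a positive equality (assuming FLAG only ever holds 'X' or space),
-- an index on FLAG can be used:
select * from orders where flag = 'X';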
About the order of the fields: no database optimizer should be influenced by the order of the fields in the WHERE clause; the result should be completely independent of that order.
Still, there are two reasons to code them in a specific order:
- If several programs make identical selects on the same table, the statement cache can be reused - but only if the order of the fields in the ABAP statement is identical. So, by convention, code them in the order of the table definition.
- If you are going for a specific secondary index, it is easier to control and to understand (for others later) if the fields are coded in the order of that index.
Regards,
Christian

Similar Messages

  • Questions about Indexing and Using an Indexing POA

    Although I have only about 50 users, at least 15 of them have in excess of 100,000 messages in their accounts and the POA (version 7.0.2) is regularly slowing to a crawl. (I just know that plans for revolution are fomenting!) I have embarked on a campaign to reduce these accounts by archiving everything off to get mail accounts down to 3000 or fewer pieces. I have achieved user buy-in, but have worked on only a few users so far.
    In another closely related thread, it was suggested to me that the PO speed issues relate to broken indexes. And I suspect that given so many messages, the indexes were never getting fully rebuilt with the default QF POA settings. I am trying to fix that situation in addition to reducing mail account sizes. So, I have set up a second POA on another server and dedicated it to the indexing task. The /qfinterval is set for 1 hour, other /qf switches at default. The POA-QF does no mail delivery, but it does do nightly user upkeep.
    The POA-QF seems to be steadily working away and making progress at reducing the number of unindexed messages. However, I have questions about what I am seeing and what more I can do:
    1. Is the progress I am seeing real progress? For example I have a user with over 100,000 messages to be indexed and every time I check the logs, the count drops by about 500 messages per hourly QF cycle. I assume that if I just let it keep running, it will eventually get caught up and fixed. Not only with this user, but with all the others as well. Will my patience (and theirs) be rewarded? Are there any gotchas I need to prepare for?
    2. One user has recently had virtually all of her messages successfully moved to archive. I can see them in the Archive, and do not see them in the online account. However, now over a week later, QF still shows >130,000 items still left to index for that user. The POA-QF is making slow, steady progress reducing that number, but why is this user's QF count still so high? Does it just need more time, or is there something amiss for this user?
    3. I may want to rebuild indexes for single users from scratch. I have seen the TID 3105742 which tells how to do this: Essentially you turn off mail delivery functions, and make some other switch changes to dedicate the POA to indexing for just a single user, and then you let the POA rebuild the indexes. The implication of that scenario is that the POA is now enjoying exclusive access to the user's databases.
    If I want to use my secondary POA-QF to rebuild a user's index from scratch, does the main POA have to be offline and the user out of GWise? That is, does the QF process require exclusive access in order to rebuild indexes from scratch?
    Thanks for any thoughts or suggestions.
    Peter Smick

    pgsmick wrote:
    > 1. Is the progress I am seeing real progress? For example I have a user with
    > over 100,000 messages to be indexed and every time I check the logs, the count
    > drops by about 500 messages per hourly QF cycle. I assume that if I just let
    > it keep running, it will eventually get caught up and fixed. Not only with
    > this user, but with all the others as well. Will my patience (and theirs) be
    > rewarded? Are there any gotchas I need to prepare for?
    Set this switch for this indexing POA - /qflevel=999 - this will index
    everything in one run. It will take a long time, but with no qflevel switch you
    are indeed only indexing 500 messages at a time, and if the user has that much
    mail, it might never really catch up.
    >
    > 2. One user has recently had virtually all of her messages successfully moved
    > to archive. I can see them in the Archive, and do not see them in the online
    > account. However, now over a week later, QF still shows >130,000 items still
    > left to index for that user. The POA-QF is making slow, steady progress
    > reducing that number, but why is this user's QF count still so high? Does it
    > just need more time, or is there something amiss for this user?
    >
    This is odd, because really the index count should drop to nothing, but with the
    above switch this might get resolved as well.
    > 3. I may want to rebuild indexes for single users from scratch. I have seen
    > the TID 3105742 which tells how to do this: Essentially you turn off mail
    > delivery functions, and make some other switch changes to dedicate the POA to
    > indexing for just a single user, and then you let the POA rebuild the indexes.
    > The implication of that scenario is that the POA is now enjoying exclusive
    > access to the user's databases.
    Not really - the POA is not enjoying exclusive access to the user's database,
    the indexer is just avoiding an attempt to index anything else.
    > If I want to use my secondary POA-QF to rebuild a user's index from scratch,
    > does the main POA have to be offline and the user out of GWise? That is, Does
    > the QF process require exclusive access in order to rebuild indexes from
    > scratch?
    No - QF never requires exclusive access. That said, you may find that an
    extremely vigorous QF can cause slowdowns for the user.
    Danita
    Novell Knowledge Partner
    Moving GroupWise to Linux?
    http://www.caledonia.net/gwmove.html

  • Question about index for some condition

    Hello expert,
    I have the following condition in the WHERE clause; could you please tell me whether it will use the index on transaction_log_fk?
    pp.transaction_log_fk < p_transaction_log_fk
    Many Thanks,

    You'll have to check the execution plan if you want to know whether Oracle's optimizer will use the index.
    It depends on the selectivity of your data, table statistics etc.
    If your query returns a lot of records then a full table scan (FTS) would be the obvious choice, for example.
    Here's some more:
    http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:9422487749968
    And you should read these links from the SQL and PL/SQL FAQ:
    SQL and PL/SQL FAQ
    Read the following threads from there:
    - When your query takes too long...
    - How to post a SQL statement tuning request
    They explain it all.
    Edited by: hoek on Jun 24, 2011 4:48 PM
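    As a minimal sketch of how to check the plan for the predicate in question (the table alias and bind variable are placeholders for the poster's actual query):
    explain plan for
    select *
    from   pp_table pp
    where  pp.transaction_log_fk < :p_transaction_log_fk;
    select * from table(dbms_xplan.display);
    In SQL*Plus, SET AUTOTRACE TRACEONLY EXPLAIN gives similar information.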

  • Function Based Index And Selectivity

    Hi All,
    I have some doubts w.r.t. FBIs. I am on 10gR2 (10.2.0.4) with Solaris 5.9.
    I was under the impression that an FBI does not guarantee index access and that the CBO chooses the access path purely on the basis of the available stats and the query's selectivity.
    However, I have often found that the CBO goes for the FBI even in cases where an FTS gives a better elapsed time.
    I created the following test case:
    create table fbi_test (id number, flag varchar2(1));
    begin
      for i in 1..1000000 loop
        insert into fbi_test values (i, 'Y');
      end loop;
    end;
    /
    commit;
    begin
      for i in 1..10 loop
        insert into fbi_test values (i, 'N');
      end loop;
    end;
    /
    commit;
    ANALYZE TABLE FBI_TEST COMPUTE STATISTICS;
    CREATE INDEX fbi_test_FBI
    ON fbi_test (CASE WHEN flag = 'Y' THEN 1 ELSE NULL END);
    Autotrace for FBI ACCESS
    SQL> select * from fbi_test where (CASE WHEN flag = 'Y' THEN 1 ELSE NULL END) = 1;
    1000000 rows selected.
    Elapsed: 00:00:18.43
    Execution Plan
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
    | 0 | SELECT STATEMENT | | 10000 | 50000 | 342 (1)|
    | 1 | TABLE ACCESS BY INDEX ROWID| FBI_TEST | 10000 | 50000 | 342 (1)|
    |* 2 | INDEX RANGE SCAN | FBI_TEST_FBI | 4000 | | 1958 (1)|
    Statistics
    0 recursive calls
    0 db block gets
    136812 consistent gets
    0 physical reads
    0 redo size
    22180292 bytes sent via SQL*Net to client
    733814 bytes received via SQL*Net from client
    66668 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1000000 rows processed
    Autotrace for FTS
    SQL> select * from fbi_test where flag = 'Y';
    1000000 rows selected.
    Elapsed: 00:00:16.56
    Execution Plan
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
    | 0 | SELECT STATEMENT | | 500K| 2441K| 371 (9)|
    |* 1 | TABLE ACCESS FULL| FBI_TEST | 500K| 2441K| 371 (9)|
    Statistics
    0 recursive calls
    0 db block gets
    68372 consistent gets
    0 physical reads
    0 redo size
    22180292 bytes sent via SQL*Net to client
    733814 bytes received via SQL*Net from client
    66668 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1000000 rows processed
    FYI...
    SQL> show parameter opt
    NAME TYPE VALUE
    filesystemio_options string asynch
    object_cache_optimal_size integer 102400
    optimizer_dynamic_sampling integer 2
    optimizer_features_enable string 10.2.0.3
    optimizer_index_caching integer 0
    optimizer_index_cost_adj integer 100
    optimizer_mode string ALL_ROWS
    optimizer_secure_view_merging boolean TRUE
    plsql_optimize_level integer 2
    My questions are,
    1. Why is the Oracle optimizer going for the FBI despite the high cardinality of 100000 rows?
    2. Why is the cost of the FTS higher (371) compared to the FBI (342), even though the FTS does fewer IOs (68372 consistent gets) and has a lower elapsed time?
    3. Why is the optimizer considering ELAPSED TIME during plan generation?
    Any input would be highly appreciated.

    user635930 wrote:
    Hi All,
    I have some doubts w.r.t. FBIs. I am on 10gR2 (10.2.0.4) with Solaris 5.9
    Execution Plan
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
    | 0 | SELECT STATEMENT | | 10000 | 50000 | 342 (1)|
    | 1 | TABLE ACCESS BY INDEX ROWID| FBI_TEST | 10000 | 50000 | 342 (1)|
    |* 2 | INDEX RANGE SCAN | FBI_TEST_FBI | 4000 | | 1958 (1)|
    You're seeing three different effects here.
    First - for the table access by index, Oracle has "lost" the cost of the index range scan - notice that the total cost of the query is 342, but the cost of the index access is 1958. The total cost of the query should be 2,300 and Oracle should have chosen the full tablescan automatically.
    Second, the stats on the index show just one distinct value for "distinct_keys", and the optimizer has decided (for no reason I can think of - it may be a bug) to assume a 0.4% selectivity on the index.
    Third, the estimated cardinality of the table and index lines differs. Whatever Oracle has done in the index line has been forgotten, and the cardinality of the table line has been based on the predicate given by your case statement and, as a "complex function", that predicate has been given a selectivity of 1% - hence the 10,000 rows estimate.
    The combination of unsuitable statistics, an extreme case, and a couple of quirks in the optimizer mean that the chosen path is clearly unsuitable.
    [Addendum]: It just occurred to me that part of the problem is that you collected stats on the table before you created the index. Given you're running 10g, the 'create index' would automatically generate index stats at the same time - but since it's a function-based index, there's a "virtual column" created for the table as well, and that column won't have any statistics on it - which is why you get the "fixed percentage" selectivities.
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    "Science is more than a body of knowledge; it is a way of thinking" Carl Sagan
    Edited by: Jonathan Lewis on Nov 29, 2008 1:15 PM
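    Following up on the addendum: a sketch of re-gathering statistics once the index exists, so that the hidden virtual column created for the function-based index also gets column statistics (the method_opt shown is one common way to include hidden columns, applied to the FBI_TEST table above):
    begin
      dbms_stats.gather_table_stats(
        ownname    => user,
        tabname    => 'FBI_TEST',
        method_opt => 'for all hidden columns size 1',
        cascade    => true
      );
    end;
    /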

  • Help me about Indexes and Exchange rate

    I see that SAP B1 has forms for Exchange Rates and Indexes. I don't know the meaning of Index here. What is it used for? And how do Exchange Rates relate to Indexes? I can't understand this. Can you help me?

    Hello Tien,
    You are clever. There are also some forum members who do not realize how much the help files could benefit them, and a certain kind of member in this forum is just a forum point seeker, so take this as an opportunity.
    Indexes in SAP B1 are used to define a consumer price index (CPI). Here is the definition:
    The Consumer Price Index (CPI) measures inflation as experienced by consumers in their day-to-day living expenses. (It is sometimes referred to as the retail price index.) There are separate indexes for two groups or populations of consumers:
    The CPI for All Urban Consumers (CPI-U) is the index most often reported by the national media.
    The CPI for Urban Wage Earners and Clerical Workers (CPI-W) is the index most often used for wage escalation agreements.
    The CPI inflation calculator allows customers to calculate the value of current dollars in an earlier period, or to calculate the current value of dollar amounts from years ago.
    Consumer price indexes often are used to escalate or adjust payments for rents, wages, alimony, child support and other obligations that may be affected by changes in the cost of living. There is a fact sheet explaining how to use the CPI for escalating contracts.
    A new price index called the Chained Consumer Price Index (C-CPI-U) is now available. This new measure is designed to be a closer approximation to a "cost-of-living" index than the CPI-U or CPI-W.
    Based on that definition, we can use it as data when determining exchange rates in daily business or the prices of inventory items.
    Rgds,
    JM

  • Sql query question! about insert and select

    I kind of want to know if I can do something like this:
    INSERT INTO "table1" ("column1", "column2", ...) SELECT "column3"+1, "column4", ... FROM "table2"
    Since I want the inserted column value to increase by one, can I just do something like ++ or +1?
    Please help.

    Hi,
    Yes, you can do this.
    This is what I tried and it works fine:
    insert into emp1(empno, sal) select empno, sal+1 from emp
    What is your column's data type?
    Regards,
    Ashish
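    A self-contained sketch of the same pattern, with made-up tables rather than the poster's EMP/EMP1:
    create table src (id number, val number);
    create table dst (id number, val number);
    insert into src values (1, 10);
    insert into src values (2, 20);
    -- Any expression is allowed in the SELECT list, so "+ 1" is applied to every selected row:
    insert into dst (id, val)
    select id, val + 1 from src;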

  • Regarding Indexes and performance tuning.

    Hi Everyone,
    I need some elaborate explanation about indexes and performance tuning.
    1. How do you find out whether the select query I write is utilizing the indexes?
    2. Is it true that if the index fields are defined in SE11 in a sequence, for example:
       MANDT
       KNUMH
       KOPOS
       then the select query should also have the same sequence in the WHERE clause, or else the index is not utilized well?
    3. Are there any precautions or special methods for writing select queries so that indexes are utilized properly?
    Thanks in advance to all who read and answer.
    Rgds,
    Anu.

    Hi
    You can find out like this:
    If your select is like this:
    select matnr mtart from mara into table <itab>
    where matnr = ...
    go to the table MARA in SE11: if MATNR is part of the primary key, then you are using the primary index.
    Also check the secondary indexes of the table; if the fields of a secondary index are the ones you are using in the select, then you are using that secondary index.
    2. It is not the sequence in the WHERE clause that matters; check which fields are ticked as key fields - all of those together form the primary index.
    3. Always try to use a proper primary index in the select statement and avoid nested select statements.
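    To illustrate point 3, a sketch in plain SQL with made-up tables (in ABAP the corresponding change is replacing a SELECT inside a LOOP with a join or FOR ALL ENTRIES):
    -- Row-by-row lookup, issued once per order inside a loop:
    select customer_name from customers where customer_id = :current_customer;
    -- Better: one joined select, so the database can use its indexes in a single pass:
    select o.order_id, c.customer_name
    from   orders o
    join   customers c on c.customer_id = o.customer_id
    where  o.order_date >= date '2011-01-01';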
    Thanks

  • First Try with JE BDB - Indexes and Inheritance troubles... (FIXED)

    Hi Mark,
    Mark wrote:
    Hi, I'm a newbie here trying some stuff on JE BDB. And now I'm having
    I am happy to help you with this, but I'll have to ask you to re-post this question to the BDB JE forum, which is...
    Sorry for the mistake. I know now that here is the place to post my doubts.
    I'm really interested in JE BDB product. I think it is fantastic!
    Regarding my first post about "Indexes and Inheritance" on JE BDB, I found the fix for it; actually, it wasn't about "Indexes and Inheritance" but about "*Inheritance and Sequence*", because I have my "@Persistent public abstract class AbstractEntity" with a "@PrimaryKey(sequence = "ID_SEQ") private Long id" property.
    This class is extended by all my business classes (@Entity Employee and @Entity Department) so that my business classes have their PrimaryKey autoincremented by the sequence.
    But all my business classes share the same sequence name, "ID_SEQ". When I ran my JE BDB application for the first time, I started by saving 3 new Department objects, and the sequence for these Department objects started with #1 and finished with #3.
    Then I continued saving Employee objects (here was the problem). I thought that my next sequence number would be #4, but actually it was #101. When I saved my very first Employee, I set the property "managerId=null" since this employee is the manager. Then, when I tried to save my second Employee, who works under the first one (the manager), I got the following exception message:
    TryingJEBDBApp DatabaseExcaption: com.sleepycat.je.ForeignConstraintException: (JE 4.0.71) Secondary persist#EntityStoreName#com.dmp.gamblit.persistence.BDB.store.eployee.Employee#*managerId*
    foreign key not allowed: it is not present in the foreign database
    persist#EntityStoreName#com.dmp.gamblit.persistence.BDB.store.eployee.Employee
    The solution:
    I fixed it by modifying the managerId value from "4" to "101" and it works now!
    At this moment I'm trying to understand the Sequence mechanism and refining concerns about Cursors manipulation...
    Have you any good material about these topics, perhaps a link where I can find more detailed information on these?
    Thanks in advance Mark, thanks for your attention on this and I will post more doubts in the future for sure ;0)
    Regards,
    Diego

    Hi Diego,
    > I fixed it modifying the managerId value from "4" to "101" and it works now!
    I'm glad you found the problem. It is usually best to get the assigned ID from the entity object after you call put(), and then use that value to fill in related fields in other entities. The primary key field (assigned from the sequence) is set by the put() method.
    > At this moment I'm trying to understand the Sequence mechanism and refining concerns about Cursors manipulation... Have you any good material about these topics, perhaps a link where I can find more detailed information on these?
    To find documentation, start at the first message in the forum, the Welcome message:
    http://forums.oracle.com/forums/ann.jspa?annID=250
    This refers to the main JE doc page, the JE FAQ and a white paper on DPL queries. The FAQ has a section on the DPL, which refers to the javadoc. The DPL javadoc has lots of info on using cursors along with indexes (see EntityIndex, PrimaryIndex, SecondaryIndex). The white paper will be useful to you if you're accustomed to using SQL.
    I don't know of any doc on sequences other than the javadoc:
    http://www.oracle.com/technology/documentation/berkeley-db/je/java/com/sleepycat/persist/model/PrimaryKey.html#sequence()
    This doc will point you to info on configuring the sequence, if that's what you're interested in.
    --mark

  • Function-based Index and an OR-condition in the WHERE-clause

    We have some problems with function-based indexes and
    the OR condition in a WHERE clause.
    (We use oracle 8i (8.1.7))
    create table TPERSON(ID number(10),NAME varchar2(20),...);
    create index I_NORMAL_TPERSON_NAME on TPERSON(NAME);
    create index I_FUNCTION_TPERSON_NAME on TPERSON(UPPER(NAME));
    The following two statements run very fast on a large table
    and the execution plan confirms the usage of the indexes
    (while the session is appropriately configured and the table is analyzed):
    1)     select count(ID) FROM TPERSON where upper(NAME) like 'MIL%';
    2)     select count(ID) from TPERSON where NAME like 'Mil%' or (3=5);
    In particular, we see that a normal index is used even though the WHERE clause contains
    an OR condition.
    But if we try the similar select statement
    3)     select count(ID) FROM TPERSON where upper(NAME) like 'MIL%' or (3=5);
    the CBO will not use the function-based index.
    (We would only expect this behavior with views, not with indexes.)
    We are looking for advice, such as a hint, that enables the CBO
    to use function-based indexes in connection with OR.
    The problem may seem artificial because it contains this dummy logic:
         or (3=5).
    It stems from a prepared statement, where this kind of boolean
    flag reduces the number of different select statements needed to
    cover the whole business logic, while using bind variables for the
    concrete query parameters.
    A more realistic (still boiled down) version of our prepared select statement, run in
    SQL*Plus:
    define x_name = 'MIL%';
    define x_firstname = '';
    select * FROM TPERSON
    where (upper(NAME) like '&x_name' or ( '&x_name' = ''))
    and (upper(FIRSTNAME) like '&x_firstname' or ('&x_firstname' = ''))
    and ...;
    In particular, we don't reference the table column; the query parameter
    yields the second boolean value in the OR condition.
    The problem is that this condition ('&x_name' = '') doesn't use any index.
    Thanks a lot for spending your time on this problem.

    Try
    SELECT /*+ RULE */
    as your hint. I don't have the book with me, but last weekend I read a section about your very problem. The book was an Oracle Press gold cover about Oracle 8i performance tuning. If you e-mail me I can quote you the chapter when I get home Friday.
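    Another workaround that is often used for this pattern (shown only as a sketch against the TPERSON table above) is to split the OR into a UNION ALL, so that each branch can be optimized on its own and the first branch can use the function-based index:
    select * from tperson
    where  :x_name is not null
    and    upper(name) like :x_name
    union all
    select * from tperson
    where  :x_name is null;
    The two branches are mutually exclusive for any given bind value, so the combined result matches the original OR condition.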

  • Question about Index in data selection

    Hi experts,
    I am reading the documentation about indexes today and there are some questions that confuse me. Please help, thanks!
    1. What is the technical structure of an index?
    For example, I have a table ZTABLE with 5 fields: F1 (key), F2 (key), F3, F4, F5. If I create an index on F3, F4, the system will create a copy of this table. Is the structure of this copy as follows?
    F3, F4, pointer (to the line number of the record)
    So this copy will have fewer lines than ZTABLE and is sorted. If the DB optimizer chooses this index in the selection, will it be fast for these 2 reasons?
    2. What determines the sequence of the fields in the index? I read a blog about performance where the author made an example with DB05 and said that if a certain field has between 1 and 1000 distinct values, it is good to create a secondary index for that field.
    I wonder: if F3 has only 5 distinct values across all the data in ZTABLE, and F4 has 100 distinct values, which one should be the first field in the index and why?
    Thanks!

    Hi,
    You seem to be correct for some of it, but not all.
    The structure of the index:
    For a unique index it stores F3, F4 and the row id of the database record. There is only one row id per key value; when your WHERE condition matches the index, the database goes to the index first, reads the row id and then reads the record directly via that row id.
    For a non-unique index, the index stores all the (F3, F4, row id) entries that match a given value of F3 and F4, so there can be multiple entries per key value.
    When your WHERE condition matches the index partially, i.e. the left part of the key, or matches the whole key, the database reads all matching entries from the index and then reads the physical table, checking any additional WHERE conditions.
    Which index is chosen is determined by how much of the left-most part of the index the WHERE condition matches.
    If your WHERE condition only uses F4 and your index is on F3, F4, the index will not be considered: there is a gap in the index columns, so the whole table is read. But if you also restrict F3, the index is considered, because the condition matches the left-most part of the index fields.
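    A small sketch of this left-most matching rule in plain SQL (the index layout in the comment is conceptual, using the ZTABLE example above):
    -- Conceptually the secondary index stores (F3, F4, row id), sorted by F3, then F4.
    create index ztable_i1 on ztable (f3, f4);
    -- The index can be used when the leading column F3 is restricted:
    select * from ztable where f3 = 'A' and f4 = 10;
    select * from ztable where f3 = 'A';
    -- Restricting only F4 skips the leading column, so this index is normally not considered:
    select * from ztable where f4 = 10;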
    For more details on how to create a secondary index, search for the blog on SDN; I have replied to this many times.

  • One question about Pricing and Conditions puzzle me for a long time!

    One question about Pricing and Conditions has puzzled me for a long time. I'll take one example to explain my question:
    1 - First, my sales order uses pricing procedure RVAA01.
    2 - Next, the pricing procedure RVAA01 has some condition types, such as EK01 (Actual Costs), PR00 (Price), ... and so on.
    3 - Next, the condition type PR00 defines the access sequence PR00 as its access sequence.
    4 - Next, the access sequence PR00 has some condition tables, such as:
         table 118 : "Empties" Prices (Material-Dependent)
         table 5 : Customer/Material
         table 6 : Price List Type/Currency/Material
         table 4 : Material
    5 - Next, I need to maintain the condition tables' records, for example for table 5 (Customer/Material). I guessed that SAP would supply one screen for me to input the data of table 5. On this screen, SAP would ask me to select one table, such as table 5, and when I select table 5 it would go to the screen that lets me input the data of table 5. But when I use the T-code VK31 or VK32 to maintain condition table records, I found it is totally different from my guess:
    A - First, I cannot find a place to open the table, such as table 5, to let me input the data.
    B - Second, for example, when I select VK31->Discounts/Surcharges->By Customer/Material, SAP shows a grid view on the right side. On each line of the grid view, you need to select the condition type in the first field. This confuses me very much. Why does SAP need me to select a condition type and not the condition table? By normal logic, it ought to ask for the condition table, not the condition type!
    Dear all, I'm new to SD. Maybe this is a very stupid question, but it has puzzled me for a long time. If anyone can explain this in detail and let me understand the concept, I will appreciate it very much. Thank you.

    Hi,
    You said that you are using the T-codes VK31 or VK32.
    These transaction codes are used to enter condition records for standard condition types. As you can see, there is a grid on the left side with all the standard condition types like prices, discounts, taxes, freights.
    Please check using T-code VK11 or VK12 (change mode).
    Here you can enter the required condition type in the initial screen (like PR00, MWST, K004, K005, etc.).
    After giving the condition type, press Enter or click on the Combinations icon at the top of the screen. Then you can see all the condition tables which you maintained for that condition type - like, as you said, table 118, table 5, table 6 and table 4.
    You can select any table and press Enter; then you go into a screen which has all the field catalogues you maintained for that table. For example, if you selected the combination Customer/Material (table 5), then after you press Enter you can see the customer field on top and the material fields.
    You can give all the required values and save the condition record.
    Hope this is clear.
    REWARD IF HELPFUL.
    Regards,
    praveen

  • Question about composite index and index skip scan

    Hi,
    I have a confusion.
    I have read Burleson's post on composite index column ordering (http://www.dba-oracle.com/t_composite_index_multi_column_ordering.htm) where he writes that
    "......for composite indexes the most restrictive column value(the column with the highest unique values) should be put first to trim down the result set..."
    But the 10g performance tuning book says this about INDEX SKIP SCAN:
    "... Index Skip scanning lets a composite index be split logically into smaller subindexes. In skip
    scanning, the initial column of the composite index is not specified in the query. In other words, it is skipped.
    The number of logical subindexes is determined by the number of distinct values in the initial column.
    Skip scanning is advantageous if there are few distinct values in the leading column of the composite index and many distinct values in the non-leading key of the index......."
    So if we design composite indexes according to what Burleson said, how can we take advantage of index skip scanning? These two statements oppose each other, don't they?
    Can anybody explain this?
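    As a small sketch of the skip-scan idea the book describes (hypothetical table; whether the optimizer actually chooses a skip scan depends on the data volume and statistics):
    -- Composite index whose leading column has very few distinct values:
    create index emp_ix on employees (gender, employee_id);
    -- GENDER is not referenced below, yet the optimizer may still use the index via an
    -- INDEX SKIP SCAN: it probes one logical subindex per distinct GENDER value
    -- instead of scanning the whole table.
    select * from employees where employee_id = 42;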

    Dear,
    For the moment, forget the distinct values and the compressibility. I've tried to reproduce your case below (using Jonathan Lewis's table script):
    create table t1
    as
    with generator as (
        select    --+ materialize
            rownum id
        from dual
        connect by
            rownum <= 10000
    )
    select
        rownum              c1,
        mod(rownum,1000)    c2,
        mod(rownum,5000)    c3,
        mod(rownum,10000)   c4,
        lpad(rownum,10,'0') c5,
        rpad('x',5)         c6
    from
        generator    v1,
        generator    v2
    where
        rownum <= 1000
    ;
    alter table t1 add constraint t1_pk primary key (c3,c4,c5,c6);
    create index idx_1 on t1 (c1,c2);
    begin
         dbms_stats.gather_table_stats(
              ownname          => user,
              tabname          => 'T1',
              estimate_percent => 100,
              method_opt       => 'for all columns size 1'
         );
    end;
    /
    and here is what I got with your two selects:
    mho.sql>> SELECT c1,c2
      2  FROM t1
      3  WHERE c2 = 3;
            C1         C2
             3          3
    Elapsed: 00:00:00.00
    mho.sql>> start dispcursor
    PLAN_TABLE_OUTPUT
    SQL_ID  4dbgq3m2utd9f, child number 0
    SELECT c1,c2 FROM t1 WHERE c2 = 3
    Plan hash value: 3723378319
    | Id  | Operation            | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
    |*  1 |  INDEX FAST FULL SCAN| IDX_1 |      1 |      1 |      1 |00:00:00.01 |       7 |
    Predicate Information (identified by operation id):
       1 - filter("C2"=3)
    17 rows selected.
    Elapsed: 00:00:00.34
    mho.sql>> SELECT c3,c4,c5,c6
      2  FROM t1
      3  WHERE c4 = 3
      4  AND c5 = '0000000003'
      5  AND c6 = 'x    ';
            C3         C4 C5         C6
             3          3 0000000003 x
    Elapsed: 00:00:00.00
    mho.sql>> @dispcursor
    PLAN_TABLE_OUTPUT
    SQL_ID  fv62c9uqtktw9, child number 0
    SELECT c3,c4,c5,c6 FROM t1 WHERE c4 = 3 AND c5 = '0000000003' AND c6 = 'x    '
    Plan hash value: 2969533764
    | Id  | Operation            | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
    |*  1 |  INDEX FAST FULL SCAN| T1_PK |      1 |      1 |      1 |00:00:00.01 |       9 |
    Predicate Information (identified by operation id):
       1 - filter(("C4"=3 AND "C5"='0000000003' AND "C6"='x    '))
    17 rows selected.
    I didn't succeed in reproducing your index skip scan situation.
    Best Regards
    Mohamed Houri

  • I have a 64GB iPad2 and selected to "Automatically fill free space with songs".... will it have to download that 60GB of music to my ipad everytime I sync?  It took about 8 hours to sync the music to it

    I have a 64GB iPad2 and selected to "Automatically fill free space with songs".... will it have to download that 60GB of music to my ipad everytime I sync?  It took about 8 hours to sync the music to it.
    I did this last night at 10pm thinking it would take 30 minutes to sync these 10,000 songs or so to my ipad to take up the 64 GB.  I went to bed because it was taking so long and woke up at 6am and it was just about to finish.  It finished around 630am.  So it took a good 8 hours to sync.
    I unhooked it and used it for an hour or so and then hooked it back up so that I could update a few apps on it. When I did that, it acted like it was re-adding all of those songs again and it took about 8 hours again to sync all the music. Was that just a fluke because I made changes to the iPad? Or will it always recalculate and then re-sync all of those 10,000 songs?
    Thanks so much for any help!!!
    -Michael

    Anyone?

  • About case and switch in multiple condition step in workflow.

    I have some information about case and switch in the multiple condition step in workflow:
    case - static determination
    switch - runtime determination
    But I want a brief explanation about case and switch and the difference between them. Please help me.

    Hi Velmurugan,
    In a case, we can have only one value for comparison and any number of branches for it.
    In a switch, we can compare any number of values and have any number of branches.
    E.g.: consider that I am triggering a workflow for a purchase order change and I have a multiple condition step.
    If I go for a case:
            I can have only one value (i.e. PO number / vendor number ...) as the parameter and can check different values against it (e.g. vendor number < 1000, vendor number > 1000, and so on).
            A branch will be created for each condition.
    If I go for a switch:
            I can take any parameter needed (e.g. vendor number > 1000, order type = 'NB', and so on),
            so a single branch can have any number of comparisons with the help of 'and' and 'or' operators, and I can have any parameter in my condition.
    ---Regards,
       Alex B Justin

  • 8520 Curve error: Sqlite Error (schema update): net.rim.device.api.database.DatabaseException: SELECT name FROM sqlite_master WHERE type = 'index' AND name = 'chat_history_jid_index': disk I / O error (10).

    Dear team support,
    I have a problem with my WhatsApp Messenger.
    My WhatsApp won't save the message history; it causes an error.
    Error: Sqlite Error (schema update):
    net.rim.device.api.database.DatabaseException: SELECT name FROM sqlite_master WHERE type = 'index' AND name = 'chat_history_jid_index': disk I / O error (10).
    Please advise me how I can solve my memory card issue.
    Thanks

    ls -l /var/run/lighttpd/
    And how are you spawning the PHP instances? I don't see that in the daemons array anywhere.
    EDIT: It looks like the info on that page is no longer using pre-spawned instances, but lighttpd adaptive-spawn. It looks like the documentation has been made inconsistent.
    You will note that with pre-spawned instances, the config looks different[1].
    You need to do one or the other, not both (e.g. choose adaptive-spawn or pre-spawn, not both).
    [1]: http://wiki.archlinux.org/index.php?tit … oldid=8051 "change"

Maybe you are looking for

  • Custom Report to display Material Number PR00 K004 KF00 MWST and Total net effect
  • Test if segment exist in XML IDOC source structure
  • Regarding MM - pending order
  • Truncating decimal places in smartforms
  • What is time and date