Add dummy LIKE condition '%%' in order to use index scan

Hi,
I have a table like the one below, which captures transactions for each minute (columnB):
CREATE TABLE tableA
(
  columnA   CHAR(6),
  columnB   DATE,
  columnC   NUMBER(3,0),
  CRT_ID    CHAR(8),
  CRT_TS    TIMESTAMP(6),
  UPD_ID    CHAR(8),
  UPD_TS    TIMESTAMP(6),
  CONSTRAINT PK_tableA_colAB PRIMARY KEY (columnA, columnB)
);
When I query the table to get the list of transactions between particular dates, it goes for a TABLE FULL SCAN (the cost and execution time are also high).
The total number of records in the table (tableA) is 13669094, while the query below returns only around 150 to 200 records:
select columnA, columnB, columnC, Crt_Id, Crt_ts
from tableA
where columnB between TO_DATE('06/28/2013','MM/DD/YYYY') and TO_DATE('06/29/2013','MM/DD/YYYY');
When I use the query like below, it uses an INDEX SCAN (the cost and execution time are also less):
select columnA, columnB, columnC, Crt_Id, Crt_ts
from tableA
where columnB between TO_DATE('06/28/2013','MM/DD/YYYY') and TO_DATE('06/29/2013','MM/DD/YYYY')
and columnA like '%%';
Please advise: is it good to add the condition LIKE '%%' in order to get the index scan? Also, kindly let me know if it works the same way in all environments.
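To compare the two variants, the plan can be checked in each environment, for example with EXPLAIN PLAN and DBMS_XPLAN (a minimal sketch, just to verify which access path is actually chosen before relying on the '%%' trick):

EXPLAIN PLAN FOR
  select columnA, columnB, columnC, Crt_Id, Crt_ts
  from tableA
  where columnB between TO_DATE('06/28/2013','MM/DD/YYYY') and TO_DATE('06/29/2013','MM/DD/YYYY')
  and columnA like '%%';   -- repeat without this line to compare

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);   -- shows whether PK_tableA_colAB is scanned or the table is full scanned

Note that columnA LIKE '%%' also filters out rows where columnA is NULL, and the optimizer's choice can change with version and statistics, which is why checking the plan per environment seems the safer approach.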

Hi RP0428,
Thank you very much for your response.
1. Are you collecting statistics on the table and indexes?
2. Post the exact command you use to collect those statistics.
Recently my DBA gathered the statistics on 17-Dec-2013 22:01:32 (LAST_ANALYZED). I am not aware of what command he ran to gather them.
After that the query executed in 28 seconds; before, it took several minutes.
My concern is whether gathering the statistics periodically will, on its own, keep the performance acceptable.
The table is growing by thousands of records daily and already has 13669094 records. In order to avoid the full table scan,
from my knowledge, I feel that creating composite Range-Hash partitioning will help improve the performance.
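For reference, a typical statistics-gathering call for a single table looks something like the sketch below (the schema name is a placeholder; this only illustrates the usual DBMS_STATS options, not necessarily what my DBA ran):

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'MY_SCHEMA',                      -- placeholder schema name
    tabname          => 'TABLEA',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
    cascade          => TRUE);                            -- also gathers stats on the PK index
END;
/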
3. Post info about the data distribution for the two columns used by the index. This is 'counts and amounts' for each of the two columns and a GROUP BY date showing the skew of the values.
Please find the details below (row counts per ColumnA value):
ColumnA    Rows
AA1        118800
AA2        117600
AA3        118200
AA4        118200
AA5        118200
AA6        118800
AA7        118800
AA8        117600
AA9        117600
AA10       117600
AA11       118200
AA12       118200
AA13       37234
AA14       118200
AA118      18450
AA119      96600
AA120      105000
AA121      105000
AA122      105600

Similar Messages

  • Multi-choice parameters with "like" condition

    Hi,
    I defined a parameter, multi-choice, using a "like" condition.  It always works for the first value of the LOV and ignores the rest of the values chosen.
    Is there any way that the "like" condition can be used for more than one parameter value?
    In other words, if someone chooses values A,B,C from the LOV, I would like the condition to be interpreted as:
    Field like 'A' or Field like 'B' or Field like 'C'.
    --Further explanation--->
    The reason that I need to use LIKE is that the field contains, e.g. A-1, A-2, A-3, B-1,B-2...
    Instead of having the LOV list the variations, it would be much easier for the user to choose A and then get all the A's that exist.
    Thanks.
    Leah

    What I am still not clear on is defining the calculated field with a predefined length, which I don't think will work since some of the values are shorter and some longer.
    However, you gave me another idea (unless this is what you really meant). I can define the calculated field similarly to how I defined the list of values, i.e. take the value of the field up to the '-' sign (if it exists). If the actual field value is "permit medical-2", then in the LOV it only says "permit medical". If the original field is "surgery", then the LOV is "surgery". I can do the same thing in the calculated field. Then the condition would be: calculated field = LOV.
    Thank you!  Now my only problem is that I will not be at work for 10 days so I will have to wait until then to try it.
    I will let you know how it worked out.
    Thanks.
    Leah
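    A rough SQL sketch of the calculated-field idea described above (table and column names are invented purely for illustration): the expression strips everything from the '-' onwards before comparing with the LOV value, so choosing "permit medical" also matches "permit medical-2".

    SELECT *
    FROM   permits                                          -- hypothetical table
    WHERE  CASE
             WHEN INSTR(field, '-') > 0
             THEN SUBSTR(field, 1, INSTR(field, '-') - 1)   -- "permit medical-2" -> "permit medical"
             ELSE field
           END = :p_lov_value;                              -- value chosen from the LOV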

  • Optimizer not using indexes

    DBAs,
    I have a select query which uses an index scan when run in the prod database and executes in 20 secs, but uses a full table scan in the non-prod DB and takes 48 secs. I rebuilt the indexes and gathered stats in the non-prod DB, but it still takes 47 secs.
    Please advise.

    Here are the details
    EXECUTE DBMS_STATS.GATHER_SCHEMA_STATS( -
      ownname => 'TCD_PRD_STG', -
      estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, -
      method_opt => 'for all columns size AUTO');
    SQL> EXECUTE DBMS_STATS.GATHER_SCHEMA_STATS('JOE','EMPLOYEE');
    EXECUTE DBMS_STATS.GATHER_SCHEMA_STATS('TCD_PRD_STG',DBMS_STATS.AUTO_SAMPLE_SIZE);
    1) Oracle versions are 10.2.0.2 in both prod & non-prod.
    2) Explain plan of the prod DB:
    SQL> SELECT ITEM_REFERENCE_ID FROM
           (SELECT DISTINCT * FROM ITEMS
             WHERE PUBLICATION_ID=20 AND ITEM_TYPE=16 AND ( ( ( SCHEMA_ID=31 ) )
               AND ( ( (ITEM_REFERENCE_ID IN
                     (SELECT ITEM_REFERENCE_ID FROM
                       (SELECT ITEM_REFERENCE_ID, COUNT(KEYWORD) AS tempkeywordcount
                          FROM ITEM_CATEGORIES_AND_KEYWORDS
                         WHERE KEYWORD IN ('Africa') AND CATEGORY = 'Region' AND PUBLICATION_ID=20
                         GROUP BY ITEM_REFERENCE_ID) tempselectholder WHERE tempkeywordcount=1))
                 OR (ITEM_REFERENCE_ID IN
                     (SELECT ITEM_REFERENCE_ID FROM
                       (SELECT ITEM_REFERENCE_ID, COUNT(KEYWORD) AS tempkeywordcount
                          FROM ITEM_CATEGORIES_AND_KEYWORDS
                         WHERE KEYWORD IN ('Aig') AND CATEGORY = 'Region' AND PUBLICATION_ID=20
                         GROUP BY ITEM_REFERENCE_ID) tempselectholder WHERE tempkeywordcount=1)) ) ) )
             ORDER BY LAST_PUBLISHED_DATE DESC) WHERE ROWNUM<51;
    no rows selected
    Elapsed: 00:00:21.74
    Execution Plan
    0       SELECT STATEMENT Optimizer=ALL_ROWS (Cost=192 Card=50 Bytes=650)
    1   0     COUNT (STOPKEY)
    2   1       VIEW (Cost=192 Card=79 Bytes=1027)
    3   2         SORT (ORDER BY STOPKEY) (Cost=192 Card=79 Bytes=92272)
    4   3           HASH (UNIQUE) (Cost=191 Card=79 Bytes=92272)
    5   4             FILTER
    6   5               TABLE ACCESS (BY INDEX ROWID) OF 'ITEMS' (TABLE) (Cost=190 Card=808 Bytes=943744)
    7   6                 INDEX (RANGE SCAN) OF 'IDX_ITEMS_PUB_URL' (INDEX) (Cost=107 Card=17024)
    8   5             FILTER
    9   8               HASH (GROUP BY) (Cost=42 Card=1 Bytes=540)
    10  9                 TABLE ACCESS (BY INDEX ROWID) OF 'ITEM_CATEGORIES_AND_KEYWORDS' (TABLE) (Cost=41 Card=1 Bytes=540)
    11  10                  INDEX (RANGE SCAN) OF 'IX_ITEM_KEYWORDS' (INDEX) (Cost=35 Card=7403)
    12  5             FILTER
    13  12              HASH (GROUP BY) (Cost=3 Card=1 Bytes=540)
    14  13                TABLE ACCESS (BY INDEX ROWID) OF 'ITEM_CATEGORIES_AND_KEYWORDS' (TABLE) (Cost=2 Card=1 Bytes=540)
    15  14                  INDEX (RANGE SCAN) OF 'IX_ITEM_KEYWORDS' (INDEX) (Cost=1 Card=50)
    Statistics
    21 recursive calls
    0 db block gets
    4950582 consistent gets
    4060 physical reads
    13100 redo size
    240 bytes sent via SQL*Net to client
    333 bytes received via SQL*Net from client
    1 SQL*Net roundtrips to/from client
    1 sorts (memory)
    0 sorts (disk)
    0 rows processed
    Explain plan of the non-prod DB:
    1* SELECT ITEM_REFERENCE_ID FROM (SELECT DISTINCT * FROM ITEMS WHERE PUBLICATION_ID=20 AND ITEM_T
    SQL> /
    ITEM_REFERENCE_ID
    96672
    96680
    Elapsed: 00:00:47.74
    Execution Plan
    0       SELECT STATEMENT Optimizer=ALL_ROWS (Cost=502 Card=50 Bytes=650)
    1   0     COUNT (STOPKEY)
    2   1       VIEW (Cost=502 Card=255 Bytes=3315)
    3   2         SORT (ORDER BY STOPKEY) (Cost=502 Card=255 Bytes=40035)
    4   3           HASH (UNIQUE) (Cost=501 Card=255 Bytes=40035)
    5   4             FILTER
    6   5               TABLE ACCESS (FULL) OF 'ITEMS' (TABLE) (Cost=500 Card=2618 Bytes=411026)
    7   5             FILTER
    8   7               HASH (GROUP BY) (Cost=881 Card=1 Bytes=29)
    9   8                 TABLE ACCESS (FULL) OF 'ITEM_CATEGORIES_AND_KEYWORDS' (TABLE) (Cost=880 Card=11 Bytes=319)
    10  5             FILTER
    11  10              HASH (GROUP BY) (Cost=881 Card=1 Bytes=29)
    12  11                TABLE ACCESS (FULL) OF 'ITEM_CATEGORIES_AND_KEYWORDS' (TABLE) (Cost=880 Card=1 Bytes=29)
    Statistics
    0 recursive calls
    0 db block gets
    5912606 consistent gets
    0 physical reads
    0 redo size
    387 bytes sent via SQL*Net to client
    435 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    1 sorts (memory)
    0 sorts (disk)
    2 rows processed

  • How is it possible to use Index Seek for LIKE %search-string% case?

    Hello,
    I have the following SP:
    CREATE PROCEDURE dbo.USP_SAMPLE_PROCEDURE(@Beginning nvarchar(15))
    AS
    SELECT * FROM HumanResources.Employee
    WHERE NationalIDNumber LIKE @Beginning + N'%';
    GO
    If I run the SP the first time with the param N'94', then the following plan is generated and added to the cache:
    SQL Server "sniffs" the input value (94) when compiling the query, so for this param using an Index Seek on the AK_Employee_NationalIDNumber index is the best option. On the other hand, the query plan should be generic enough to handle any value specified in the @Beginning param.
    If I call the SP with @Beginning = N'%94':
    EXEC dbo.USP_SAMPLE_PROCEDURE N'%94'
    I see the same execution plan as above. The question is: how is it possible to reuse this execution plan in this case? To be more precise, how can an Index Seek be used for the LIKE %search-string% case? I expected that ONLY an Index Scan operation could be used here.
    Alexey

    The key is that the index seek operator includes both a seek (greater than and less than) and a predicate (LIKE). With the leading wildcard, the seek effectively returns all rows, just like a scan, and the filter returns only the rows matching the LIKE expression.
    Do you want to say that in the case of a leading wildcard, the expressions Expr1007 and Expr1008 (see image below) are calculated in such a way that the Seek Predicates retrieve all rows from the index, and only the Predicate does the real job by keeping the rows matching the LIKE expression? If this is the case, then it explains how an Index Seek can be used to resolve such queries: LIKE N'%94'.
    However, it leads me to another question: since the Index Seek in this particular case scans all the rows, what is the difference between Index Seek and Index Scan?
    According to MSDN:
    The Index Seek operator uses the seeking ability of indexes to retrieve rows from a nonclustered index. The storage engine uses the index to process only those rows that satisfy the SEEK:() predicate. It optionally may include a WHERE:() predicate, which the storage engine will evaluate against all rows that satisfy the SEEK:() predicate (it does not use the indexes to do this).
    The Index Scan operator retrieves all rows from the nonclustered index specified in the Argument column. If an optional WHERE:() predicate appears in the Argument column, only those rows that satisfy the predicate are returned.
    It seems like Index Scan is a special case of Index Seek, which means that when we see an Index Seek in the execution plan, it does NOT mean that the storage engine does NOT scan all rows. Right?
    Alexey
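    A small illustrative sketch of the difference discussed above (standard T-SQL against the same HumanResources.Employee table; shown only as an example, not taken from the thread): with a trailing wildcard the predicate stays sargable and the seek touches a narrow index range, while a leading wildcard makes the "seek" cover the whole range and apply the LIKE only as a residual predicate.

    -- Narrow seek range, roughly NationalIDNumber >= N'94' AND NationalIDNumber < N'95'
    SELECT NationalIDNumber FROM HumanResources.Employee
    WHERE  NationalIDNumber LIKE N'94%';

    -- Whole index range covered by the seek; LIKE applied as a residual filter on every row
    SELECT NationalIDNumber FROM HumanResources.Employee
    WHERE  NationalIDNumber LIKE N'%94';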

  • Add terms and conditions on last page of PO using XSL-FO stylesheet

    Hi
    I need to add terms and conditions on the last page of the PO PDF. I have changed the values for the profile options "PO: Terms and Conditions Filename" and "PO: In File Path" to the file name and file path of my terms and conditions txt file. Do I have to make any changes to the stylesheet for it to be printed on the last page? If so, what are the changes to be made?
    This is really urgent and I would appreciate any help.
    Thanks,
    Sharmila

    Hi,
    In the main window the condition &NEXT-PAGE& eq 0 does not work, and even &PAGE& eq &FORMPAGES& may not work out, I am not sure. So what you need to do is this:
    In the print program, DESCRIBE your internal table, say LINES v_lines, and in the loop keep a count variable, say v_count. Then in the script put a condition like the one below:
    /: IF &V_COUNT& EQ &V_LINES&
    /:   NEW-PAGE pagename   (say LAST)
    /: ENDIF
    Reward points if useful.
    Regards,
    Nageswar

  • I use Browsec but today this add-on was filtered in Iran. Could you please suggest another add-on like this?

    I need an add-on like Browsec, because Browsec is filtered in Iran and doesn't work.

    Your request does not give any real information. Please describe fully
    what you need. If you have trouble with English, use your own language.

  • Query tuning and how to force a table to use an index?

    Dear Experts,
    I have two (2) questions regarding performance during DRL.
    Question # 1
    There is a column named co_id in every transaction table. My DBA suggested that I add [co_id='KPG'] to every WHERE clause, which forces the query to use an index, resulting in immediate processing, as an index was created on the co_id column of each table.
    Please note that co_id has the constant value 'KPG' throughout the table. Does it make sense to add that column in the WHERE clause, like:
    select a,b,c from tab1
    where a='89' and co_id='KPG'
    Question # 2
    If I am using a column in the WHERE clause that has an index on it, and that column is not in my select list, does that restrict the query to a full table scan? Like:
    select a,b,c,d from tabletemp
    where e='ABC';
    Thanks in advance
    Edited by: Fiz Dosani on Mar 27, 2009 12:00 PM

    Fiz Dosani wrote:
    Dear Experts,
    I have two (2) questions regarding performance during DRL.
    Question # 1
    There is a column named co_id in every transaction table. My DBA suggested that I add [co_id='KPG'] to every WHERE clause, which forces the query to use an index, resulting in immediate processing, as an index was created on the co_id column of each table.
    Please note that co_id has the constant value 'KPG' throughout the table. Does it make sense to add that column in the WHERE clause, like:
    select a,b,c from tab1
    where a='89' and co_id='KPG'
    If co_id is always 'KPG' it is not needed to add this condition to the query. It would be very stupid to add a (normal) index on that column. An index is used to reduce the result set of a query by storing the values and the rowids in a specified order. When all the values are equal, an index just makes all DML operations slower without making any select faster.
    And of course the CBO is clever enough not to use such an index.
    >
    Question # 2
    If I am using a column in the WHERE clause that has an index on it, and that column is not in my select list, does that restrict the query to a full table scan? Like:
    select a,b,c,d from tabletemp
    where e='ABC';
    Yes, this is possible. However, it depends on a few things.
    1) How selective the condition is. In general an index will be used when the selectivity is less than 5%. This factor depends a bit on the database version. It means that when less than 5% of your rows have the value 'ABC', an index access will be faster than the full table scan.
    2) Whether the statistics are up to date. The cost based optimizer (CBO) needs to know how many values are in that table, in the columns and in that index to make a good decision about using an index access or a full table scan. Often one forgets to create statistics for freshly created data, such as in temp tables.
    Edited by: Sven W. on Mar 27, 2009 8:53 AM
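    A rough sketch of the two checks Sven describes, using the table and column names from the question purely as placeholders: first estimate the selectivity of the predicate, then refresh the statistics after loading the temp data.

    -- 1) Selectivity check: matching_rows / total_rows well under ~5% favours an index access
    SELECT COUNT(*) AS total_rows,
           COUNT(CASE WHEN e = 'ABC' THEN 1 END) AS matching_rows
    FROM   tabletemp;

    -- 2) Refresh optimizer statistics (and index statistics) after the data load
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'TABLETEMP', cascade => TRUE);
    END;
    /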

  • When does MaxDB use Index strategy?

    Hello,
    I want to improve the performance of my MaxDB, so I created some indexes and now I want to check whether these indexes are used or not.
    1st: Is there a possibility to force MaxDB to use an index if the SQL statement contains e.g. LIKE '%N600%'?
    2nd: Is there a list or something similar where I can see which SQL statements in MaxDB don't support an index strategy?
    Thx for help and kind regards,
    Frank

    Hello Frank,
    A1: You could force MaxDB to use an index - but this wouldn't help much. The index could not be used efficiently with a like condition which starts with %.
    Q2: Please check out the tuning section in the Wiki:
    https://wiki.sdn.sap.com/wiki/x/jRI
    Especially this section might be of interest for you:
    https://wiki.sdn.sap.com/wiki/x/GXE
    In General: if you want to check the execution plan for a specific SQL statement, use the EXPLAIN statement (just add EXPLAIN before the SELECT and execute it in SQLStudio).
    HTH,
    Melanie
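    As a small illustration of the EXPLAIN check Melanie mentions (the table and column names below are invented), you simply prefix the statement in SQL Studio:

    EXPLAIN
    SELECT item_no, item_text
    FROM   items
    WHERE  item_text LIKE '%N600%'   -- leading %: an index cannot be used efficiently here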

  • How to make sql to use index/make to query to perform better

    Hi,
    I have 2 SQL queries which return the same results,
    but they differ in the SQL trace.
    create table test_table
    (u_id number(10),
    u_no number(4),
    s_id number(10),
    s_no number(4),
    o_id number(10),
    o_no number(4),
    constraint pk_test primary key(u_id, u_no));
    insert into test_table(u_id, u_no, s_id, s_no, o_id, o_no)
    values (2007030301, 1, 1001, 1, 2001, 1);
    insert into test_table(u_id, u_no, s_id, s_no, o_id, o_no)
    values (2007030302, 1, 1001, 1, 2001, 2);
    insert into test_table(u_id, u_no, s_id, s_no, o_id, o_no)
    values (2007030303, 1, 1001, 1, 2001, 3);
    insert into test_table(u_id, u_no, s_id, s_no, o_id, o_no)
    values (2007030304, 1, 1001, 1, 2001, 4);
    insert into test_table(u_id, u_no, s_id, s_no, o_id, o_no)
    values (2007030305, 1, 1002, 1, 1001, 2);
    insert into test_table(u_id, u_no, s_id, s_no, o_id, o_no)
    values (2007030306, 1, 1002, 1, 1002, 1);
    commit;
    CREATE INDEX idx_test_s_id ON test_table(s_id, s_no);
    set autotrace on
    select s_id, s_no, o_id, o_no
    from test_table
    where s_id <> o_id
    and s_no <> o_no
    union all
    select o_id, o_no, s_id, s_no
    from test_table
    where s_id <> o_id
    and s_no <> o_no;
    Execution Plan
    0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=6 Card=8 Bytes=416)
    1 0 UNION-ALL
    2 1 TABLE ACCESS (FULL) OF 'TEST_TABLE' (TABLE) (Cost=3 Card=4 Bytes=208)
    3 1 TABLE ACCESS (FULL) OF 'TEST_TABLE' (TABLE) (Cost=3 Card=4 Bytes=208)
    Statistics
    223 recursive calls
    2 db block gets
    84 consistent gets
    0 physical reads
    0 redo size
    701 bytes sent via SQL*Net to client
    508 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    5 sorts (memory)
    0 sorts (disk)
    8 rows processed
    -- I didn't understand why the above query is not using the index idx_test_s_id,
    -- but it is still faster.
    select s_id, s_no, o_id, o_no
    from test_table
    where (u_id, u_no) in
    (select u_id, u_no from test_table
    minus
    select u_id, u_no from test_table
    where s_id = o_id
    or s_no = o_no)
    union all
    select o_id, o_no, s_id, s_no
    from test_table
    where (u_id, u_no) in
    (select u_id, u_no from test_table
    minus
    select u_id, u_no from test_table
    where s_id = o_id
    or s_no = o_no);
    Execution Plan
    0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=16 Card=2 Bytes=156)
    1 0 UNION-ALL
    2 1 FILTER
    3 2 TABLE ACCESS (FULL) OF 'TEST_TABLE' (TABLE) (Cost=3 Card=6 Bytes=468)
    4 2 MINUS
    5 4 INDEX (UNIQUE SCAN) OF 'PK_TEST' (INDEX (UNIQUE)) (Cost=1 Card=1 Bytes=26)
    6 4 TABLE ACCESS (BY INDEX ROWID) OF 'TEST_TABLE' (TABLE) (Cost=2 Card=1 Bytes=78)
    7 6 INDEX (UNIQUE SCAN) OF 'PK_TEST' (INDEX (UNIQUE)) (Cost=1 Card=1)
    8 1 FILTER
    9 8 TABLE ACCESS (FULL) OF 'TEST_TABLE' (TABLE) (Cost=3 Card=6 Bytes=468)
    10 8 MINUS
    11 10 INDEX (UNIQUE SCAN) OF 'PK_TEST' (INDEX (UNIQUE)) (Cost=1 Card=1 Bytes=26)
    12 10 TABLE ACCESS (BY INDEX ROWID) OF 'TEST_TABLE' (TABLE) (Cost=2 Card=1 Bytes=78)
    13 12 INDEX (UNIQUE SCAN) OF 'PK_TEST' (INDEX (UNIQUE)) (Cost=1 Card=1)
    Statistics
    53 recursive calls
    8 db block gets
    187 consistent gets
    0 physical reads
    0 redo size
    701 bytes sent via SQL*Net to client
    508 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    4 sorts (memory)
    0 sorts (disk)
    8 rows processed
    -- The above query is using the index PK_TEST, but it still does a FULL SCAN of the
    -- table two times and has the higher cost.
    1st query --> SELECT STATEMENT Optimizer=ALL_ROWS (Cost=6 Card=8 Bytes=416)
    2nd query --> SELECT STATEMENT Optimizer=ALL_ROWS (Cost=16 Card=2 Bytes=156)
    My questions are:
    1) Performance-wise, which query is better?
    2) How do I make the 1st query use an index?
    3) Is there any other method to get the same result by using an index?
    I appreciate your immediate help.
    Best regards
    Muthu

    Hi William,
    Nice... it works. I have added "o_id" and "o_no" as part of the index,
    and now the query uses the index:
    Execution Plan
    0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=2 Card=8 Bytes=416)
    1 0 UNION-ALL
    2 1 INDEX (FULL SCAN) OF 'IDX_TEST_S_ID' (INDEX) (Cost=1 Card=4 Bytes=208)
    3 1 INDEX (FULL SCAN) OF 'IDX_TEST_S_ID' (INDEX) (Cost=1 Card=4 Bytes=208)
    Statistics
    7 recursive calls
    0 db block gets
    21 consistent gets
    0 physical reads
    0 redo size
    701 bytes sent via SQL*Net to client
    507 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    8 rows processed
    But my questions are:
    1) In a WHERE clause, if a "<>" condition is used, will the system still use the index? I have observed in several situations that even though the column in the WHERE clause is indexed, when the WHERE condition is "like" or "is null/is not null",
    the index is not used. In the same way, I assumed that if we use <> then indexes will not be used. Is that true?
    2) Now, after adding "o_id" and "o_no" columns to the index, the Execution plan is:
    Execution Plan
    0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=2 Card=8 Bytes=416)
    1 0 UNION-ALL
    2 1 INDEX (FULL SCAN) OF 'IDX_TEST_S_ID' (INDEX) (Cost=1 Card=4 Bytes=208)
    3 1 INDEX (FULL SCAN) OF 'IDX_TEST_S_ID' (INDEX) (Cost=1 Card=4 Bytes=208)
    Before it was :
    Execution Plan
    0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=6 Card=8 Bytes=416)
    1 0 UNION-ALL
    2 1 TABLE ACCESS (FULL) OF 'TEST_TABLE' (TABLE) (Cost=3 Card=4 Bytes=208)
    3 1 TABLE ACCESS (FULL) OF 'TEST_TABLE' (TABLE) (Cost=3 Card=4 Bytes=208)
    The difference is only in Cost (reduced), not in Card or Bytes.
    Can you explain how I can decide which makes the performance better (Cost / Card / Bytes)? Full Scan / Range Scan?
    On statistics also:
    Before:
    Statistics
    52 recursive calls
    0 db block gets
    43 consistent gets
    0 physical reads
    0 redo size
    701 bytes sent via SQL*Net to client
    507 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    8 rows processed
    After:
    Statistics
    7 recursive calls
    0 db block gets
    21 consistent gets
    0 physical reads
    0 redo size
    701 bytes sent via SQL*Net to client
    507 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    8 rows processed
    The difference is in recursive calls & consistent gets.
    Which one shows the query with better performance?
    Please explain.
    Regards
    Muthu
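    For reference, the covering index that produced the INDEX (FULL SCAN) plans above presumably looks something like the sketch below (the thread never shows the exact DDL, so this is an assumption based on "o_id and o_no are part of the index"):

    DROP INDEX idx_test_s_id;
    CREATE INDEX idx_test_s_id ON test_table(s_id, s_no, o_id, o_no);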

  • How to add Terms and Conditions in Standard PO XSL-FO template

    Hi all,
    I want to add terms and conditions in the PO_STANDARD_XSLFO template.
    Does anyone have an idea how to do it?
    In the template one place is given to add Terms and Conditions, but I am not able to use it.
    Regards
    Ravi

    You are not getting my question, Vicha... let me explain.
    For PO-related work:
    SQL and PL/SQL files we keep in the PO_TOP/SQL directory,
    shell script files in the PO_TOP/bin directory,
    RDFs in the PO_TOP/report... directory.
    Now I am confused whether we have to keep this Terms and Conditions file in a specific directory under PO_TOP,
    OR
    whether we can keep it anywhere on the server, in any directory.
    Regards
    Ravi

  • Creating a script for a PRIMARY KEY USING INDEX SORT doesn't work

    Probably a bug.
    h1. Environment
    Application: Oracle SQL Developer Data Modeler
    Version: 3.0.0.655
    h1. Test Case:
    1. Create a new table TRANSACTIONS with some columns.
    2. Mark one of numeric columns as the primary key - PK_TRANSACTIONS.
    3. Go to Physical Models and create new Oracle Database 11g.
    4. Go to Physical Models -> Oracle Database 11g -> Tables -> TRANSACTIONS -> Primary Keys -> PK_TRANSACTIONS -> Properties:
    a) on General tab set Using Index to BY INDEX NAME
    b) on Using Index tab choose a tablespace
    c) on Using Index tab set Index Sort to SORTED.
    5. Export the schema to DDL script. For the primary key you will get something like this:
    ALTER TABLE TRANSACTION
    ADD CONSTRAINT PK_TRANSACTION PRIMARY KEY ( TRAN_ID ) DEFERRABLE
    USING INDEX
    PCTFREE 10
    MAXTRANS 255
    TABLESPACE TBSPC_INDX
    LOGGING
    STORAGE (
    INITIAL 65536
    NEXT 1048576
    PCTINCREASE 0
    MINEXTENTS 1
    MAXEXTENTS 2147483645
    FREELISTS 1
    FREELIST GROUPS 1
    BUFFER_POOL DEFAULT
    ) SORTED
    h1. Reason of failure
    The script will fail because SORTED is not allowed here. It should be SORT.
    Additionally, the default behaviour for Data Modeler is to set Index Sort to NO, but the default setting for Oracle Database 11g is SORT. Shouldn't Data Modeler use SORT as the default value?
    Edited by: user7420841 on 2011-05-07 03:15

    Hi,
    Thanks for reporting this problem. As you say, it should be SORT rather than SORTED. I have logged a bug on this.
    I also agree that, for consistency with the database default, it would be better to have SORT as the default in Data Modeler.
    David
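    For illustration, the corrected ending of the generated statement would read as below (per the thread, SORT rather than SORTED; the storage clause from the script above is unchanged and omitted here for brevity):

    ALTER TABLE TRANSACTION
    ADD CONSTRAINT PK_TRANSACTION PRIMARY KEY ( TRAN_ID ) DEFERRABLE
    USING INDEX
    TABLESPACE TBSPC_INDX
    SORT;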

  • ABAP: How to add a Price Condition Item to an order item

    Hi experts,
       Is it possible to add a Price Condition Item to an order item by BAPI or Function in sap crm? (like: 0k04 10 USD 1 EA)

    Hi,
    You need to use CRM_ORDER_MAINTAIN and have to use structure IT_PRIDOC to update or add new pricing data.
    But do remember, you have to call FM 'BAPI_TRANSACTION_COMMIT' once you are done with the call to FM CRM_ORDER_MAINTAIN, otherwise all your updates will remain in the buffer and have no effect in the database, which will make you feel as if things are not working as they should.
    Best Regards,
    Pratik Patel
    Reward with Points!

  • Field name DUMMY is reserved (Do not use structure as include in DB table)

    We are trying to add a custom field, e.g. one called ZZZ, in the LIS communication structure MCEKKO (Purchasing Document Header) by creating a new custom append structure and adding the field ZZZ to it. When we activate the new append structure, we get the warning message "Field name DUMMY is reserved (Do not use structure as include in DB table)".
    We do find a field called DUMMY in the structure MCEKKO. How do we get rid of the warning message and successfully activate the new custom append structure with the new field ZZZ?
    We will give you reward points for correct solutions for the above problem!
    Thanks

    hi Dinesh,
    But we wonder why this newly appended custom field cannot be seen in the right frame of the extraction structure in LBWE?
    Any idea?
    Thanks

  • Query takes 5 min when using Indexes and 1 Second without Indexes !!

    Hi,
    We have been using indexes on all tables until recently when we faced problems with queries like the one below:
    SELECT a.std_id FROM students a, student_degree b, student_course c, course d
    WHERE b.std_id = a.std_id
    AND c.std_id = a.std_id
    AND d.crn = c.crn
    AND b.in_progress = 'Y'
    AND b.major_code = 'ABTC'
    AND a.program_code = 'DP'
    AND a.level_code = 'S2'
    AND a.campus_code = '05'
    AND a.termcode = '200702';
    This query takes more than 5 minutes to return a result, but as soon as we remove the indexes on the columns termcode and campus_code, it shows the result in 1 second.
    What could be the problem? Is there an attribute that needs to be set when creating these indexes?
    Thanks in advance
    Madani

    Thank you, Karthik, for your reply. Here are the explain plan reports (as shown in Oracle 9i Enterprise Manager).
    *1 - Explain plan without indexes:*
    Execution Steps:
    Step # Step Name
    11 SELECT STATEMENT
    10 MERGE JOIN
    7 SORT [JOIN]
    6 NESTED LOOPS
    4 NESTED LOOPS
    1 ERMS.STUDENT_DEGREE TABLE ACCESS [FULL]
    3 ERMS.STUDENTS TABLE ACCESS [BY INDEX ROWID]
    2 ERMS.SYS_C006642 INDEX [UNIQUE SCAN]
    5 ERMS.SYS_C007065 INDEX [RANGE SCAN]
    9 SORT [JOIN]
    8 ERMS.COURSE TABLE ACCESS [FULL]
    Step # Description
    1 This plan step retrieves all rows from table STUDENT_DEGREE.
    2 This plan step retrieves a single ROWID from the B*-tree index SYS_C006642.
    3 This plan step retrieves rows from table STUDENTS through ROWID(s) returned by an index.
    4 This plan step joins two sets of rows by iterating over the driving, or outer, row set (the first child of the join) and, for each row, carrying out the steps of the inner row set (the second child). Corresponding pairs of rows are tested against the join condition specified in the query's WHERE clause.
    5 This plan step retrieves one or more ROWIDs in ascending order by scanning the B*-tree index SYS_C007065.
    6 This plan step joins two sets of rows by iterating over the driving, or outer, row set (the first child of the join) and, for each row, carrying out the steps of the inner row set (the second child). Corresponding pairs of rows are tested against the join condition specified in the query's WHERE clause.
    7 This plan step accepts a row set (its only child) and sorts it in preparation for a merge-join operation.
    8 This plan step retrieves all rows from table COURSE.
    9 This plan step accepts a row set (its only child) and sorts it in preparation for a merge-join operation.
    10 This plan step accepts two sets of rows sorted on the join key. By walking both sets of rows in the order of the join key, every distinct pair of rows satisfying the join condition in the WHERE clause is found through a single pass of the row sets.
    11 This plan step designates this statement as a SELECT statement.
    *2 - Explain plan with indexes:* (I added an index on column campus_code, which is a VARCHAR2 column)
    Execution Steps:
    Step # Step Name
    11 SELECT STATEMENT
    10 MERGE JOIN
    7 SORT [JOIN]
    6 NESTED LOOPS
    4 NESTED LOOPS
    1 ERMS.COURSE TABLE ACCESS [FULL]
    3 ERMS.STUDENTS TABLE ACCESS [BY INDEX ROWID]
    2 ERMS.INDEX_STUDENTS_CAMPUS_CODE INDEX [RANGE SCAN]
    5 ERMS.SYS_C007065 INDEX [RANGE SCAN]
    9 SORT [JOIN]
    8 ERMS.STUDENT_DEGREE TABLE ACCESS [FULL]
    Step # Description
    1 This plan step retrieves all rows from table COURSE.
    2 This plan step retrieves one or more ROWIDs in ascending order by scanning the B*-tree index INDEX_STUDENTS_CAMPUS_CODE.
    3 This plan step retrieves rows from table STUDENTS through ROWID(s) returned by an index.
    4 This plan step joins two sets of rows by iterating over the driving, or outer, row set (the first child of the join) and, for each row, carrying out the steps of the inner row set (the second child). Corresponding pairs of rows are tested against the join condition specified in the query's WHERE clause.
    5 This plan step retrieves one or more ROWIDs in ascending order by scanning the B*-tree index SYS_C007065.
    6 This plan step joins two sets of rows by iterating over the driving, or outer, row set (the first child of the join) and, for each row, carrying out the steps of the inner row set (the second child). Corresponding pairs of rows are tested against the join condition specified in the query's WHERE clause.
    7 This plan step accepts a row set (its only child) and sorts it in preparation for a merge-join operation.
    8 This plan step retrieves all rows from table STUDENT_DEGREE.
    9 This plan step accepts a row set (its only child) and sorts it in preparation for a merge-join operation.
    10 This plan step accepts two sets of rows sorted on the join key. By walking both sets of rows in the order of the join key, every distinct pair of rows satisfying the join condition in the WHERE clause is found through a single pass of the row sets.
    11 This plan step designates this statement as a SELECT statement.
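    One way to test this theory without dropping the index is to disable it for a single run with Oracle's NO_INDEX hint; the sketch below reuses the alias and index name from the plans above and is only an illustration, not something suggested in the thread:

    SELECT /*+ NO_INDEX(a INDEX_STUDENTS_CAMPUS_CODE) */ a.std_id
    FROM students a, student_degree b, student_course c, course d
    WHERE b.std_id = a.std_id
    AND c.std_id = a.std_id
    AND d.crn = c.crn
    AND b.in_progress = 'Y'
    AND b.major_code = 'ABTC'
    AND a.program_code = 'DP'
    AND a.level_code = 'S2'
    AND a.campus_code = '05'
    AND a.termcode = '200702';

    If the hinted run matches the fast no-index timing, the low-selectivity campus_code index is the likely culprit.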

  • Crash when using Index Array with an In Place Element structure

    Hello !
    I have a problem with the In Place Element structure. I want to index a waveform array (16 elements), and when I execute or save, LabVIEW closes.
    I don't have a problem with a waveform array of 15 elements or less, but I need to index 16 elements.
    Solved!
    Go to Solution.
    Attachments:
    Test.PNG 8 KB

    I am attaching my code, but it works because I used a waveform array with only 15 elements. I can't save or execute with 16 elements.
    So add the 16th element (like in the picture Test.png) and you will see.
    Thank you
    Attachments:
    Test.vi 25 KB
