Query performance with %

Hi,
I have a system running Oracle 11gR2 on 64-bit Windows with Oracle Text.
90% of the data is in Hebrew and the rest is in English.
We get great performance when running queries with % at the end of the word, like: word%
When we issue a search with % at the beginning of the word, like: %word, performance is extremely bad.
I have created a test case:
-- CREATE TABLE
CREATE TABLE news (pkey NUMBER,lang VARCHAR2 (2), short_content CLOB);
-- INSERT DATA
insert into news values (myseq.nextval,'iw','&1');
-- The next step is to configure the MULTI_LEXER
BEGIN
-- hebrew
ctx_ddl.create_preference ('hebrew_lexer', 'basic_lexer');
--english
ctx_ddl.create_preference('english_lexer','basic_lexer');
ctx_ddl.set_attribute('english_lexer','index_themes','yes');
ctx_ddl.set_attribute('english_lexer','theme_language','english');
END;
-- CREATE THE MULTI_LEXER
--Create the multi-lexer preference:
BEGIN
ctx_ddl.create_preference('global_lexer', 'multi_lexer');
END;
-- make the hebrew lexer the default using CTX_DDL.ADD_SUB_LEXER:
BEGIN
ctx_ddl.add_sub_lexer('global_lexer','default','hebrew_lexer');
END;
--add the English  language with CTX_DDL.ADD_SUB_LEXER procedure.
BEGIN
ctx_ddl.add_sub_lexer('global_lexer','english','english_lexer','eng');
END;
-- create the wordlist
begin
-- exec ctx_ddl.drop_preference('my_wordlist');
ctx_ddl.create_preference('my_wordlist','basic_wordlist');
ctx_ddl.set_attribute('my_wordlist','stemmer','auto');
ctx_ddl.set_attribute('my_wordlist','substring_index','yes');
end;
-- CREATE THE INDEX
-- drop index search_news;
create index search_news
on news (short_content)
indextype is ctxsys.context
parameters
('lexer        global_lexer
  language column lang
  wordlist     my_wordlist');
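A query of the problematic slow form looks like this (the search term shown is just a hypothetical example):
select pkey from news where contains(short_content, '%word') > 0;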
Still the performance is bad.
I know I am missing something here.
I'd appreciate any help.

That's expected. Internally Oracle Text has a list of words (the $I table) on which there is an index (the $X index).
If you use a leading wildcard, then the $X index cannot be used and Oracle Text has to do a full-table scan of the $I table.
If you MUST have leading wildcards, you should use the SUBSTRING_INDEX wordlist preference when creating the index. That creates an extra ($P) table which allows Oracle Text to resolve leading wildcards without resorting to a full table scan.
Be warned that your index creation will take considerably longer and use a lot more space with this option in place. Many customers prefer to disallow leading wildcards from their search interface.
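If the $P table was built for the index, that can be checked directly; a quick sanity check (a sketch — the table name follows Oracle Text's DR$<index_name>$P convention, so for the index above):
select count(*) from dr$search_news$p;
If that table is missing or empty, the index was probably created before the SUBSTRING_INDEX attribute took effect, and it needs to be dropped and recreated with the my_wordlist preference in place.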

Similar Messages

  • Query Performance with Exception aggregation

    Hello,
    My query's key figures have exception aggregation at the order line level, as per the requirement.
    Currently the cube holds 5M records; when we run the query it runs for more than 30 minutes.
    We cannot remove the exception aggregation.
    The cube is already modeled correctly, and we don't want to use the cache.
    Can anybody please advise whether there is any better approach to improve query performance with exception aggregation?
    Thanks

    Hi,
    We have the same problem and raised an OSS ticket. They replied with note 1257455, which covers the ways of improving performance in such cases. I guess there is nothing else to do but precalculate this exception-aggregated formula in the data model via transformations or ABAP.
    By the way, the cache cannot help you in this case, since exception aggregation is calculated after cache retrieval.
    Hope this helps,
    Sunil

  • Disappointing query performance with object-relational storage

    Hello,
    after some frustrating days trying to improve query performance on an XMLType table, I'm at my wits' end. I have tried all possible combinations of indexes, added scopes, tried out-of-line and inline storage, removed the recursive type definition from the schema, and tried the examples from the forum thread Setting Attribute SQLInline to false for Out-of-Line Storage (which has the same problems), and I still have no clue. I have prepared a stripped-down example of my schema which shows the same problems as the real one. I'm using 10.2.0.4.0:
    SQL> select * from v$version;
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Prod
    PL/SQL Release 10.2.0.4.0 - Production
    CORE 10.2.0.4.0 Production
    TNS for Linux: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production
    You can find the script at http://www.grmblfrz.de/xmldbproblem.sql (I tried including it here but got an internal server error). The results are at http://www.grmblfrz.de/xmldbtest.lst . I have no idea how to improve the performance (and if query rewrite does not work even with this simple schema, how can Oracle XML DB be feasible for more complex structures?). I must have made a mistake somewhere; hopefully someone can spot it.
    Thanks in advance.
    --Swen
    Edited by: user636644 on Aug 30, 2008 3:55 PM
    Edited by: user636644 on Aug 30, 2008 4:12 PM

    Marc,
    thanks, I did not know that it is possible to use "varray store as table" for the reference tables. I have tried your example. I can create the nested table, the scope and the indexes, but I get a different result - a full table scan on t_element. With the original table I get an index scan. On the original table there is a trigger (t_element$xd) which is missing on the new table. I have tried the same with an XMLType table (drop table t_element; create table t_element of xmltype ...) with the same result. My script ... is on [google groups|http://groups.google.com/group/oracle-xmldb-temporary-group/browse_thread/thread/f30c3cf0f3dbcafc] (internal server error while trying to include it here). Here is the plan of the query:
    select rt.object_value
    from t_element rt
    where existsnode(rt.object_value,'/mddbelement/group[attribute[@name="an27"]="99"]') = 1;
    Execution Plan
    Plan hash value: 4104484998
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 40 | 2505 (1)| 00:00:38 |
    | 1 | TABLE ACCESS BY INDEX ROWID | NT_GROUP | 1 | 20 | 3 (0)| 00:00:01 |
    |* 2 | INDEX RANGE SCAN | SYS_C0082879 | 1 | | 2 (0)| 00:00:01 |
    |* 3 | FILTER | | | | | |
    | 4 | TABLE ACCESS FULL | T_ELEMENT | 1000 | 40000 | 4 (0)| 00:00:01 |
    | 5 | NESTED LOOPS SEMI | | 1 | 88 | 5 (0)| 00:00:01 |
    | 6 | NESTED LOOPS | | 1 | 59 | 4 (0)| 00:00:01 |
    | 7 | TABLE ACCESS BY INDEX ROWID| NT_GROUP | 1 | 20 | 3 (0)| 00:00:01 |
    |* 8 | INDEX RANGE SCAN | SYS_C0082879 | 1 | | 2 (0)| 00:00:01 |
    |* 9 | TABLE ACCESS BY INDEX ROWID| T_GROUP | 1 | 39 | 1 (0)| 00:00:01 |
    |* 10 | INDEX UNIQUE SCAN | SYS_C0082878 | 1 | | 0 (0)| 00:00:01 |
    |* 11 | INDEX RANGE SCAN | SYS_IOT_TOP_184789 | 1 | 29 | 1 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    2 - access("NESTED_TABLE_ID"=:B1)
    3 - filter( EXISTS (SELECT /*+ ???)
    8 - access("NESTED_TABLE_ID"=:B1)
    9 - filter("T_GROUP"."SYS_NC0001300014$" IS NOT NULL AND
    SYS_CHECKACL("ACLOID","OWNERID",xmltype('<privilege
    xmlns="http://xmlns.oracle.com/xdb/acl.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-insta
    nce" xsi:schemaLocation="http://xmlns.oracle.com/xdb/acl.xsd
    http://xmlns.oracle.com/xdb/acl.xsd DAV:http://xmlns.oracle.com/xdb/dav.xsd"><read-properties
    /><read-contents/></privilege>'))=1)
    10 - access("SYS_ALIAS_3"."COLUMN_VALUE"="T_GROUP"."SYS_NC_OID$")
    11 - access("NESTED_TABLE_ID"="T_GROUP"."SYS_NC0001300014$")
    filter("SYS_XDBBODY$"='99' AND "NAME"='an27')
    Edited by: user636644 on Sep 1, 2008 9:56 PM

  • Query Performance with Unit Conversion

    Hi Experts,
    Right now my customer has asked me to improve the runtime of some queries.
    I detected a problem in one of them related to unit conversion. I ran workbook statistics and found that the time is concentrated in the data conversion step.
    I'm querying the whole year, and it takes around 20 minutes to return a result, which is too much time. The only notable thing in the query is the unit conversion.
    How can I improve the performance? What is the checklist in this case?
    Thanks for your help.
    Jose

    Hi Jose,
    You might not be able to reduce the unit conversion time itself, so try the general query performance improvement techniques, e.g. caching the query results.
    But there is one thing which can help if the end user only ever uses one unit: for example, the user always executes the report in USD but the source currency is different from USD. In such cases you can create another data source and do the conversion at data load time, so that the new data source holds all data in the required currency; then no conversion happens at runtime, which improves query performance drastically.
    But the above solution is not feasible if there are many currencies and the report needs to be run in multiple currencies frequently.
    Regards,
    Durgesh.
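    For illustration, the load-time conversion idea expressed in plain SQL (a minimal sketch; the table, column and rate names are all hypothetical, and in BW this logic would live in a transformation rather than in SQL):
    insert into sales_usd (doc_id, amount_usd)
    select s.doc_id, s.amount * r.rate_to_usd
    from sales s
    join exchange_rates r on r.currency = s.currency;
    After a load like this, queries read the pre-converted amounts directly and no conversion runs at query time.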

  • How can we improve query performance without indexes?

    Hello Experts,
    I have a problem with a table (calc) which contains 3 crore (30 million) records and has no index; on that table a view (View_A) was created.
    I use the view in the query below:
    SELECT COUNT(*)
    FROM a
    INNER JOIN b ON (a.a = b.b)
    LEFT OUTER JOIN view_a ON (a.a = view_a.a)
    LEFT OUTER JOIN view_b ON (a.a = view_b.a);
    In the above query View_A is causing the problem; View_A is created on the calc table. One more thing: when I execute a select statement directly on the view, it runs fine.
    Without View_A the query fetches data fine. The statistics of the table are also up to date. When I run the cost plan, scanning alone accounts for 90% of the cost.
    Can anyone help me, please?
    Thank you all.
    Regards,
    Jason.

    Jason,
    Not sure what you are trying to do, but outer joins are bad for performance, so try to avoid them.
    You also say that you have a view on a calc table. What are you calculating? Are you perhaps using user-defined functions?
    Regards,
    Nico

  • Query performance with filters

    Hi there,
    I've noticed that when I run a query in Answers, if the query has a filter which is not in the displayed columns, the query runs very slowly. However, if I run the same query WITH the filtering column in the displayed columns, the query returns the results almost immediately.
    Take the example of a sales report. If I run a query of [Store Number] vs. [Sales Amount] and ctrl-click filter it with the [Region] dimension column equal to 'North America', the query takes about 5 to 10 minutes to run. However, if I include the [Region] column in the displayed columns (i.e. [Region], [Store Number] vs. [Sales Amount]) or in the "Excluded" columns in Answers, then the query takes less than a minute to run.
    I am using Oracle BI to connect to a MS Analysis Services cube by the way.
    Any ideas or suggestions on how to improve the performance? I don't want to include the filtering columns in the select query because when users use the dashboard filters, they just want to filter the results by different dimension values instead of seeing them in the report.
    Thanks.

    Thanks.
    However, when I run a similar query directly in the backend (MS Analysis Services), the performance is very good. Only when I run the query through Oracle BI does the performance suffer. I know it has something to do with the way Oracle BI constructs the query it sends back to Analysis Services.
    The main thing about my issue is that in Answers, queries with the filtering columns in both the select and where clauses run much faster than queries with the filtering columns ONLY in the where clause. Why is that, and how can it be sped up?

  • Poor query performance with BETWEEN

    I'm using Oracle Reports 6i.
    I needed to add Date range parameters (Starting and Ending dates) to a report. I used lexicals in the Where condition to handle the logic.
    If no dates given,
    Start_LEX := '/**/' and
    End_LEX := '/**/'
    If Start_date given,
    Start_LEX := 'AND t1.date >= :Start_date'
    If End_date given,
    End_LEX := 'AND t1.date <= :End_date'
    When I run the report with no dates or only one of the dates, it finishes in 3 to 8 seconds.
    But when I supply both dates, it takes > 5 minutes.
    So I did the following
    If both dates are given and Start_date = End date,
    Start_LEX := 'AND t1.date = :Start_date'
    End_LEX := '/**/'
    This got the response back to the 3 - 8 second range.
    I then tried this
    if both dates are given and Start_date != End date,
    Start_LEX := 'AND t1.date BETWEEN :Start_date AND :End_date'
    End_LEX := '/**/'
    This didn't help. The response was still in the 5+ minutes range.
    If I run the query outside of Oracle Reports, in PL/SQL Developer or SQLplus, it returns the same data in 3 - 8 seconds in all cases.
    Does anyone know what is going on in Oracle Reports when a date is compared with two values either separately or with a BETWEEN? Why does the query take approx. 60 times as long to execute?

    Hi,
    First compare the access plans when using BETWEEN and when using >= and <=.
    Then try building the lexical parameters with NVL logic.
    Adinath Kamode
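    For example, the two optional date filters can collapse into one static predicate with NVL (a sketch; it assumes t1.date is never NULL, since a row with a NULL date would otherwise be filtered out):
    AND t1.date >= NVL(:Start_date, t1.date)
    AND t1.date <= NVL(:End_date, t1.date)
    This gives the report one stable WHERE clause for all four combinations of supplied dates, instead of a different query shape per case.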

  • Query Performance with and without cache

    Hi Experts
    I have a query that takes 50 seconds to execute without any caching or precalculation.
    Once I have run the query in the Portal, any subsequent execution takes about 8 seconds.
    I assumed that this was to do with the cache, so I went into RSRT and deleted the Main Memory Cache and the Blob cache, where my queries seemed to be.
    I ran the query again and it took 8 seconds.
    Is the query cached somewhere else? Maybe on the portal, or in the user's local cache? Does anyone have any idea why the reports are still fast, even though the cache is deleted?
    Forum points always awarded for helpful answers!!
    Many thanks!
    Dave

    Hi,
    Cached data automatically becomes invalid whenever data in the InfoCube is loaded or purged, and when a query is changed or regenerated. Once cached data becomes invalid, the system reverts to the fact table or associated aggregate to pull data for the query. You can see the cache settings for all queries in your system by using transaction SE16 to view table RSRREPDIR. The CACHEMODE field shows the settings of the individual queries. The numbers in this field correspond to the cache mode settings above.
    To set the cache mode on the InfoCube, follow the path Business Information Warehouse Implementation Guide (IMG) > Reporting-Relevant Settings > General Reporting Settings > Global Cache Settings, or use transaction SPRO. Setting the cache mode at the InfoCube level establishes a default for each query created from that specific InfoCube.

  • Query Performance with/without PK Index

    Hi!
    Please take a look at these queries and tell me why their performance differs so extremely!
    1.
    SELECT DISTINCT column_name   -- many NULL values in this column
    FROM table_name
    WHERE primary_key_index_name IN (...long list...)
    AND column_name IS NOT NULL;
    --> 1 row, 120 msec
    2.
    -- only the order of the predicates altered:
    SELECT DISTINCT column_name
    FROM table_name
    WHERE column_name IS NOT NULL
    AND primary_key_index_name IN (...long list...);
    --> 1 row, 2 sec (nearly 20 times slower!)
    3.
    -- without the NOT NULL predicate:
    SELECT DISTINCT column_name
    FROM table_name
    WHERE primary_key_index_name IN (...long list...);
    --> 1 row, 2 sec, just as slow as No. 2!
    Can anyone explain?
    TIA! Dominic

    As mentioned, you really should create explain plans for all 3 queries. It could be that the first query loaded all the blocks into the buffer cache, so when you ran the 2nd query, the data it needed was already in memory.
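    For example (a sketch against the first query; the IN-list values are hypothetical stand-ins for the long list):
    EXPLAIN PLAN FOR
    SELECT DISTINCT column_name
    FROM table_name
    WHERE primary_key_index_name IN (1, 2, 3)
    AND column_name IS NOT NULL;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    Comparing the three plans will show whether the predicate order actually changed the access path, or whether buffer-cache warming explains the timing difference.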

  • Report performance with hierarchies

    Hi
    How can I improve query performance with hierarchies? We have to do a lot of navigation in the query, and the data volume is very big.
    Thanks
    P G

    Hi,
    check these:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
    Query Performance – Is "Aggregates" the way out for me?
    /people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
    ° The OLAP cache is architected to store query result sets and to give all users access to those result sets.
    If a user executes a query, the result set for that query's request can be stored in the OLAP cache; if that same query (or a derivative) is then executed by another user, the subsequent query request can be filled by accessing the result set already stored in the OLAP cache.
    In this way, a query request filled from the OLAP cache is significantly faster than queries that receive their result set from database access.
    ° The indexes that are created in the fact table for each dimension allow you to easily find and select the data.
    see http://help.sap.com/saphelp_nw04/helpdata/en/80/1a6473e07211d2acb80000e829fbfe/content.htm
    ° When you load data into the InfoCube, each request has its own request ID, which is included in the fact table in the packet dimension.
    This (besides giving the possibility to manage/delete single requests) increases the volume of data and reduces performance in reporting, as the system has to aggregate over the request ID every time you execute a query. By using compression, you can eliminate these disadvantages and bring data from different requests together into one single request (request ID 0).
    This function is critical, because compressed data can no longer be deleted from the InfoCube by request ID, so you must be absolutely certain that the data loaded into the InfoCube is correct.
    see http://help.sap.com/saphelp_nw04/helpdata/en/ca/aa6437e7a4080ee10000009b38f842/content.htm
    ° By using partitioning you can split the whole dataset of an InfoCube into several smaller, physically independent and redundancy-free units. Thanks to this separation, performance is increased when reporting, and also when deleting data from the InfoCube.
    see http://help.sap.com/saphelp_nw04/helpdata/en/33/dc2038aa3bcd23e10000009b38f8cf/content.htm
    Hope it helps!
    Thank you,
    dst

  • Problem with query performance

    Hi All,
    If we run reports while loading data, will it make any difference to query performance? I have a cube with 230 million records, and I am running a delta load which brings another 230 million records. Now I am trying to run queries on the cube, and they all time out.
    But I have a DSO with 480 million records, and the same queries run fine on the DSO. I want to run the reports on the cube, not on the DSO; what can I do? Is the data load causing a problem in the query runtime?
    Please advise me what to do.
    Regards
    Kiran

    Hi,
    My load has finished. Now I have created indexes and tried to run the query, but every query on this cube times out. Yet on top of the DSO we have the same queries, and those run fine. What could be the reason, and how can I go ahead and improve the query performance? Please advise me.
    Kiran
    Edited by: kiran kumar on Mar 14, 2009 1:28 AM

  • How to improve query performance of an ODS with 320 million records

    Issue:
    The reports are giving time-outs during execution.
    Scenario:
    We have an ODS containing approximately 320 million records.
    The reports are based on
    the ODS and
    InfoSets based on this ODS.
    These reports are giving time-outs during execution.
    A few facts about this ODS:
    There are around 75 restricted and calculated key figures used in the query definition.
    We can't replace this ODS with a cube, as there is a requirement for an InfoSet on it.
    This is in a BW 3.5 environment.
    A few things we tried:
    Secondary indexes were created on the fields which appear in the selection screens of the reports. That hasn't worked.
    The restriction/calculation logic in the query definition could be moved to the backend. Will that make a difference?
    Question:
    Can you suggest ways to improve the query performance of this ODS?
    Your immediate response is highly appreciated. Thanks in advance.

    Hey!
    I think Oliver's questions are good. 320 million records are too much for an ODS. If you can get rid of the InfoSet, that would be helpful; why exactly do you need it? If you don't need it, you could partition your ODS by a characteristic and report over a MultiProvider.
    Is there a way to delete some data from the ODS?
    Maybe you will upgrade to 7.0 soon? There you can use InfoSets on InfoCubes.
    You could also try precalculation, as Sam says. This is possible with the reporting agent or Information Broadcasting; then you have it in your cache. Make sure your cache is large enough. Maybe you can use a table or something similar.
    Do you just need to produce one or a few special reports at a specific time? Maybe you can run an update into another ODS, writing just the result into it. For this you can use update rules, or maybe the Analysis Process Designer (transaction RSANWB) is the better way.
    Maybe it is also possible to increase the parameter for your dialog runtime, rdisp/max_wprun_time (if you don't know it, your Basis team should; otherwise look here: https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/ab254cf2-0c01-0010-c28d-b26d04627e61)
    Best regards,
    Peter

  • Issue with query performance

    Hi,
    I have a MultiProvider with 2 cubes, 3 ODS objects and 1 InfoObject.
    On top of this MP, queries are built. All the queries are in 3.5.
    It takes more than 30 minutes to execute each query.
    Now we are planning to replace all 3.5 queries with 7.0 ones.
    We will build a cube on top of the 3 ODS objects. The new MP will have 3 cubes and 1 InfoObject.
    My issue is that the other 2 cubes are too large; we have around 4 million records in each.
    And I need only a few fields from these cubes.
    Is there any way I can put less load on the MP?
    What would be the best approach to improve query performance?
    Thanks,
    Gowri

    Hello Gowri,
    I think you should take the help of aggregates.
    You may create aggregates on the 2 large cubes, using the characteristics that you are using in the query.
    Since those cubes hold a large amount of data, the use of aggregates will considerably reduce the data manager time, i.e. the time spent by the query in retrieving data from the InfoProvider.
    for more details on aggregates please refer the following link:
    http://help.sap.com/saphelp_nw04/helpdata/en/7d/eb683cc5e8ca68e10000000a114084/content.htm
    Thanks and regards
    Priyanka

  • Poor query performance only with migrated 7.0 queries

    Dear Team,
    We are facing a serious query performance issue after migrating queries from 3.5 to 7.0.
    I executed a query in 3.5 with certain variable values, and it takes a fraction of a second to display the output. But the same migrated query with the same variable entries takes a very long time and gives a time-out error.
    We are not using any aggregates at the InfoProvider level.
    Both queries are based on the same cube, but the 3.5 query takes less time while the 7.0 one takes very long when more selections are made.
    I checked for notes but didn't find one specific to this scenario; I found notes only on general query performance improvement.
    I want to know why only in 7.0 the same 3.5 query takes a long time and gives a time-out error. Please suggest any notes or ideas related to this scenario.
    Regards,
    Chan

    Hi,
    Queries in BI 7.0 are almost the same as queries in 3.x format.
    In order to check whether the problem is in the query runtime (database time) or the Java runtime (probably rendering), you should try running the query from RSRT, once in Java Web and once in ABAP Web.
    If the problem occurs only with Java Web, you should take the URL and add &profiling=X at the end.
    After the query execution you can use the statistics shown at the top of the page.
    In my experience, the problem is usually in the rendering phase of the query. One thing that can be done is to limit the number of rows shown on each page; this can be done by changing the 0ANALYSIS web template - it's one of the web template parameters.
    Tomer.

  • Query performance.

    Hi
    I have created a procedure that accepts two bind variables from a report. The user will select one or the other, both, or neither of the variables. To return the appropriate results, I have created a view with the entire result set, and in the procedure a number of IF statements determine what to place in the WHERE clause when selecting from the view, depending on which variables are populated.
    My concern is that the query that generates the view includes several joins, outputs around 150,000 records in total, and seems rather slow to run.
    Would you recommend another solution, such as placing the query in the procedure itself, repeated for every IF statement?
    Or should I work on the query performance?
    What would be the most efficient solution to my problem?
    Any advice would be greatly appreciated.
    Thanks

    When your query takes too long: http://forums.oracle.com/forums/thread.jspa?threadID=501834&tstart=0
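    The linked thread covers the general diagnosis; as one alternative to per-branch IF statements, a single parameterized cursor over the view can serve all four cases (a minimal sketch; all names are hypothetical):
    create or replace procedure get_report_rows (
      p_val1 in varchar2,          -- first optional report parameter
      p_val2 in varchar2,          -- second optional report parameter
      p_rc   out sys_refcursor
    ) as
    begin
      open p_rc for
        select *
        from my_report_view v
        where (p_val1 is null or v.col1 = p_val1)
          and (p_val2 is null or v.col2 = p_val2);
    end;
    /
    Whether this beats the IF-statement approach depends on the data and plans; comparing the execution plan for each parameter combination is still the decisive step.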
