Oracle text performance with context search indexes

Search performance using a CONTEXT index.
We intend to move our search engine to a new one based on Oracle Text, but we are running into serious performance issues when searching.
Our application lets users search stored documents by name, object identifier and annotations (previously set on the objects).
For example, suppose I want to find a document named ImportSax2.c. Depending on the parameters the user sets, our search engine builds the following
search queries:
1) If the user explicitly asks for a search by document name, the query is the following one =>
     select objid FROM ADSOBJ WHERE CONTAINS( OBJFIELDURL , 'ImportSax2.c WITHIN objname' , 1 ) > 0;
2) If the user doesn't specify any extra parameters, the query is the following one =>
     select objid FROM ADSOBJ WHERE CONTAINS( OBJFIELDURL , 'ImportSax2.c' , 1 ) > 0;
Oracle Text needs only around 7 seconds to answer the second query, whereas it needs around 50 seconds to answer the first one.
Here is part of the SQL script used to create the Oracle Text index on the column OBJFIELDURL
(this column stores the path to an XML file containing the properties to be indexed for each object):
begin
Ctx_Ddl.Create_Preference('wildcard_pref', 'BASIC_WORDLIST');
ctx_ddl.set_attribute('wildcard_pref', 'wildcard_maxterms', 200) ;
ctx_ddl.set_attribute('wildcard_pref','prefix_min_length',3);
ctx_ddl.set_attribute('wildcard_pref','prefix_max_length',6);
ctx_ddl.set_attribute('wildcard_pref','STEMMER','AUTO');
ctx_ddl.set_attribute('wildcard_pref','fuzzy_match','AUTO');
ctx_ddl.set_attribute('wildcard_pref','prefix_index','TRUE');
ctx_ddl.set_attribute('wildcard_pref','substring_index','TRUE');
end;
/
begin
ctx_ddl.create_preference('doc_lexer_perigee', 'BASIC_LEXER');
ctx_ddl.set_attribute('doc_lexer_perigee', 'printjoins', '_-');
ctx_ddl.set_attribute('doc_lexer_perigee', 'BASE_LETTER', 'YES');
ctx_ddl.set_attribute('doc_lexer_perigee','index_themes','yes');
ctx_ddl.create_preference('english_lexer','basic_lexer');
ctx_ddl.set_attribute('english_lexer','index_themes','yes');
ctx_ddl.set_attribute('english_lexer','theme_language','english');
ctx_ddl.set_attribute('english_lexer', 'printjoins', '_-');
ctx_ddl.set_attribute('english_lexer', 'BASE_LETTER', 'YES');
ctx_ddl.create_preference('german_lexer','basic_lexer');
ctx_ddl.set_attribute('german_lexer','composite','german');
ctx_ddl.set_attribute('german_lexer','alternate_spelling','GERMAN');
ctx_ddl.set_attribute('german_lexer','printjoins', '_-');
ctx_ddl.set_attribute('german_lexer', 'BASE_LETTER', 'YES');
ctx_ddl.set_attribute('german_lexer','NEW_GERMAN_SPELLING','YES');
ctx_ddl.set_attribute('german_lexer','OVERRIDE_BASE_LETTER','TRUE');
ctx_ddl.create_preference('japanese_lexer','JAPANESE_LEXER');
ctx_ddl.create_preference('global_lexer', 'multi_lexer');
ctx_ddl.add_sub_lexer('global_lexer','default','doc_lexer_perigee');
ctx_ddl.add_sub_lexer('global_lexer','german','german_lexer','ger');
ctx_ddl.add_sub_lexer('global_lexer','japanese','japanese_lexer','jpn');
ctx_ddl.add_sub_lexer('global_lexer','english','english_lexer','en');
end;
/
begin
     ctx_ddl.create_section_group('axmlgroup', 'AUTO_SECTION_GROUP');
end;
/
drop index ADSOBJ_XOBJFIELDURL force;
create index ADSOBJ_XOBJFIELDURL on ADSOBJ(OBJFIELDURL) indextype is ctxsys.context
parameters
('datastore ctxsys.file_datastore
filter ctxsys.inso_filter
sync (on commit)
lexer global_lexer
language column OBJFIELDURLLANG
charset column OBJFIELDURLCHARSET
format column OBJFIELDURLFORMAT
section group axmlgroup
wordlist wildcard_pref');
Oracle created a table named DR$ADSOBJ_XOBJFIELDURL$I which now contains around 25 million records.
ADSOBJ is the table containing the information for our documents; OBJFIELDURL is the column that holds the path to the XML file containing the
data to index. That file looks like this:
<?xml version="1.0" encoding="UTF-8" ?>
<fields>
<OBJNAME><![CDATA[NomLnk_177527o.jpgp]]></OBJNAME>
<OBJREM><![CDATA[Z_CARACT_141]]></OBJREM>
<OBJID>295926o.jpgp</OBJID>
</fields>
Can someone tell me how I can make this kind of query
"select objid FROM ADSOBJ WHERE CONTAINS( OBJFIELDURL , 'ImportSax2.c WITHIN objname' , 1 ) > 0;"
run faster?

Below are the execution plans for the two queries:
select objid FROM ADSOBJ WHERE CONTAINS( OBJFIELDURL , 'ImportSax2.c WITHIN objname' , 1 ) > 0
PLAN_TABLE_OUTPUT
| Id | Operation                    | Name                | Rows | Bytes | Cost (%CPU) |
|  0 | SELECT STATEMENT             |                     | 1272 | 119K  |     4   (0) |
|  1 |  TABLE ACCESS BY INDEX ROWID | ADSOBJ              | 1272 | 119K  |     4   (0) |
|  2 |   DOMAIN INDEX               | ADSOBJ_XOBJFIELDURL |      |       |     4   (0) |
Note
- 'PLAN_TABLE' is old version
Executed in 2 seconds
select objid FROM ADSOBJ WHERE CONTAINS( OBJFIELDURL , 'ImportSax2.c' , 1 ) > 0
PLAN_TABLE_OUTPUT
| Id | Operation                    | Name                | Rows | Bytes | Cost (%CPU) |
|  0 | SELECT STATEMENT             |                     | 1272 | 119K  |     4   (0) |
|  1 |  TABLE ACCESS BY INDEX ROWID | ADSOBJ              | 1272 | 119K  |     4   (0) |
|  2 |   DOMAIN INDEX               | ADSOBJ_XOBJFIELDURL |      |       |     4   (0) |
Sorry for the result formatting, I can't get it "easily" readable :(

Similar Messages

  • Oracle 10g  – Performance with BIG CONTEXT indexes

    I would like to use Oracle XE 10.2.0.1.0 only for full-text searching of files residing outside the database on an FTP server.
    Recently I found out that the size of the files to be indexed is 5 GB.
    As I have read somewhere on this forum before, the size of the index should be 30-40% of the indexed text files (so with formatted documents like PDF or DOC even less).
    Let's say that the CONTEXT index over these files will be 1.5-2 GB.
    The number of concurrent users will be at most 5.
    I cannot easily test it myself yet.
    Does anybody have any experience with Oracle XE or another Oracle Database edition and a CONTEXT index this big?
    Will the Oracle XE hardware resource license limitations be sufficient to handle one CONTEXT index this big?
    (Oracle XE license limitations: 1 GB RAM and 1 CPU)
    Regards.

    That depends on at least three things:
    (1) what is the range of words that will appear in the document set (wide range of documents = smaller resultsets = better performance)
    (2) how precise are the users' queries likely to be (more precise = smaller resultsets = better performance)
    (3) how many milliseconds are your users willing to wait for results
    So, unfortunately, you'll probably have to experiment a bit before you'll know...

  • Performance issue with context search

    We have a performance problem with a table tise with about 10 million rows
    and a text column tise_desc with short descriptions (about 300 characters per row).
    We currently use mixed queries of the form (here very simplified)
    SELECT /*+ FIRST_ROWS(10) */ * FROM tise
    WHERE reg_id = 'REGI0000000000000132'
    AND contains(tise_desc, '(employment)' ) > 0
    When the structured query part (here reg_id) and the full-text part (contains) are not selective, some queries take about 30 to 90 seconds to process.
    When we repeat the query it only takes about 1-3 seconds (maybe due to caching).
    We are not interested in scoring nor in sorting the data.
    Until now we have tried different hints, different index types (b-tree versus bitmap), different SQL query syntax, and one full-text index with XML and WITHIN syntax, without any real progress.
    Does anybody have additional ideas?

    Since you are doing a combination of structured queries and simple text queries on only one column, a ctxcat index with a catsearch may be better than a context index with a contains search. The catsearch has fewer additional features available than contains, but some context features can be done with catsearch through a query template. If you only do simple searches and don't need those extra features then it doesn't matter.
    Make sure you have current statistics, so that the optimizer can use them to select the best execution plan.
    Use bind variables instead of literal values in your searches. When you use bind variables, the previous query in the SGA can be reused for different variable values without reparsing, so every query with new values has the same effect as rerunning the previous query. This alone could account for the difference between 30 to 90 seconds on the first run and 1-3 seconds on the second run.
    If you are not using scoring or sorting, then you might be better off without the first_rows hint, which chooses the best path for sorting. Try it both ways and see which works best for you.
    Once you have tested with a ctxcat index, current statistics, and bind variables, if it is still slow you may be able to use some tracing to determine the cause of the slowness and adjust some memory settings or some such thing. You can find more tuning hints for Oracle Text here:
    http://download.oracle.com/docs/cd/B28359_01/text.111/b28303/aoptim.htm#i1007227
    Please see the demonstration below that implements the recommendations above.
    SCOTT@orcl_11g> CREATE TABLE tise
      2    (reg_id        VARCHAR2 (20),
      3       tise_desc  VARCHAR2 (300))
      4  /
    Table created.
    SCOTT@orcl_11g> INSERT INTO tise VALUES ('REGI0000000000000132', 'employment')
      2  /
    1 row created.
    SCOTT@orcl_11g> INSERT INTO tise SELECT object_id, object_name FROM all_objects
      2  /
    68770 rows created.
    SCOTT@orcl_11g> -- try a ctxcat index instead of a context index
    SCOTT@orcl_11g> -- (make sure the index is not fragmented by periodically
    SCOTT@orcl_11g> --  dropping and recreating or altering and rebuilding or optimizing)
    SCOTT@orcl_11g> BEGIN
      2    CTX_DDL.CREATE_INDEX_SET ('tise_iset');
      3    CTX_DDL.ADD_INDEX ('tise_iset', 'reg_id');
      4  END;
      5  /
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11g> CREATE INDEX tise_id_desc_idx ON tise (tise_desc)
      2  INDEXTYPE IS CTXSYS.CTXCAT
      3  PARAMETERS ('INDEX SET tise_iset')
      4  /
    Index created.
    SCOTT@orcl_11g> -- make sure you have current statistics:
    SCOTT@orcl_11g> EXEC DBMS_STATS.GATHER_TABLE_STATS (USER, 'TISE')
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11g> -- use bind variables so that the query in the sga can be reused
    SCOTT@orcl_11g> -- for different variable values without reparsing:
    SCOTT@orcl_11g> VARIABLE search_reg_id VARCHAR2 (20)
    SCOTT@orcl_11g> VARIABLE search_desc VARCHAR2 (2000)
    SCOTT@orcl_11g> EXEC :search_reg_id := 'REGI0000000000000132'
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11g> EXEC :search_desc := 'employment'
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11g> -- query using catsearch with a ctxcat index and bind variables:
    SCOTT@orcl_11g> COLUMN tise_desc FORMAT A30 WORD_WRAPPED
    SCOTT@orcl_11g> SET AUTOTRACE ON EXPLAIN
    SCOTT@orcl_11g> SELECT *
      2  FROM   tise
      3  WHERE  CATSEARCH (tise_desc, :search_desc, 'reg_id=''' || :search_reg_id || '''') > 0
      4  /
    REG_ID               TISE_DESC
    REGI0000000000000132 employment
    Execution Plan
    Plan hash value: 409728589
    | Id  | Operation                   | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |                  |  3439 |   100K|   102   (0)| 00:00:02 |
    |   1 |  TABLE ACCESS BY INDEX ROWID| TISE             |  3439 |   100K|   102   (0)| 00:00:02 |
    |*  2 |   DOMAIN INDEX              | TISE_ID_DESC_IDX |       |       |            |          |
    Predicate Information (identified by operation id):
       2 - access("CTXSYS"."CATSEARCH"("TISE_DESC",:SEARCH_DESC,'reg_id='''||:SEARCH_REG_ID|
                  |'''')>0)
    SCOTT@orcl_11g> SET AUTOTRACE OFF
    SCOTT@orcl_11g>

  • Regular expression vs oracle text performance

    Does anyone have experience comparing the performance of regular expressions vs Oracle Text?
    We need to implement a text search on a large table, 100K-500K rows.
    The select statement will select from a _VL view joining 2 tables, _B and _TL.
    We need to search 2 text columns from this _VL view.
    Using regex seems less complex, but the deciding factor is of course performance.
    Would an Oracle Text search perform better than regular expressions in general?
    Thanks,
    Margaret

    Hi Dominic,
    Thanks, we'll try both...
    Would you be able to validate our code to create the multi-table index:
    CREATE OR REPLACE PACKAGE requirements_util AS
    PROCEDURE concat_columns(i_rowid IN ROWID, io_text IN OUT NOCOPY VARCHAR2);
    END requirements_util;
    /
    CREATE OR REPLACE PACKAGE BODY requirements_util AS
    PROCEDURE concat_columns(i_rowid IN ROWID, io_text IN OUT NOCOPY VARCHAR2)
    AS
    tl_req pjt_requirements_tl%ROWTYPE;
    b_req pjt_requirements_b%ROWTYPE;
    CURSOR cur_req_name (i_rqmt_id IN pjt_requirements_tl.rqmt_id%TYPE) IS
    SELECT rqmt_name FROM pjt_requirements_tl
    WHERE rqmt_id = i_rqmt_id;
    PROCEDURE add_piece(i_add_str IN VARCHAR2) IS
    lx_too_big EXCEPTION;
    PRAGMA EXCEPTION_INIT(lx_too_big, -6502);
    BEGIN
    io_text := io_text||' '||i_add_str;
    EXCEPTION WHEN lx_too_big THEN NULL; -- silently don't add the string.
    END add_piece;
    BEGIN
         BEGIN
              SELECT * INTO b_req FROM pjt_requirements_b WHERE ROWID = i_rowid;
              EXCEPTION
    WHEN NO_DATA_FOUND THEN
              RETURN;
         END;
         add_piece(b_req.req_code);
         FOR tl_req IN cur_req_name(b_req.rqmt_id) LOOP
    add_piece(tl_req.rqmt_name);
    END LOOP;
    END concat_columns;
    END requirements_util;
    /
    EXEC ctx_ddl.drop_section_group('rqmt_sectioner');
    EXEC ctx_ddl.drop_preference('rqmt_user_ds');
    BEGIN
    ctx_ddl.create_preference('rqmt_user_ds', 'USER_DATASTORE');
    ctx_ddl.set_attribute('rqmt_user_ds', 'procedure', sys_context('userenv','current_schema')||'.'||'requirements_util.concat_columns');
    ctx_ddl.set_attribute('rqmt_user_ds', 'output_type', 'VARCHAR2');
    END;
    /
    CREATE INDEX rqmt_cidx ON pjt_requirements_b(req_code)
    INDEXTYPE IS CTXSYS.CONTEXT
    PARAMETERS ('DATASTORE rqmt_user_ds
    SYNC (ON COMMIT)');
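    For what it's worth, once such an index is in place it is queried through the indexed column, even though the text comes from both tables via the user datastore; a minimal sketch (the search term 'safety' is just a made-up example):
    -- the index is on pjt_requirements_b.req_code, but the indexed text is
    -- whatever concat_columns writes: req_code plus the related rqmt_name rows
    SELECT b.rowid, b.req_code
      FROM pjt_requirements_b b
     WHERE CONTAINS(b.req_code, 'safety', 1) > 0;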

  • Problem with context search in iFS

    Hello, here is my problem with iFS.
    We have an installation of Oracle 8.1.7 Enterprise Edition with interMedia and iFS 1.1 on the same server (Windows NT Server 4.0, 512 MB RAM). During the install everything went fine.
    I have uploaded about 200 MB of files into iFS (PDFs and HTMLs).
    The problem is when I try to use the context-based search. If I search by file name everything is fine, but when I search for a word that is inside a file it almost immediately returns "0 file(s) found", and I'm sure there are files that contain that word in their bodies.
    What can be the problem?
    Any suggestions would help.
    Thanks in advance.

    Originally posted by mark_d_Drake:
    "That's the way it works. Content indexing is not done on insert; it occurs when the ctxsrv process runs. See the interMedia Text doc for more information."
    Documents' content is stored in the GLOBALINDEXEDBLOB column of the IFSSYS.ODMM_CONTENTSTORE table.
    There is a text index GLOBALINDEXEDBLOB_I built on this column.
    To make the context search possible, just update this index using the following command in SQL*Plus:
    SQL> exec ctx_ddl.sync_index('GLOBALINDEXEDBLOB_I');
    If you want this index to be updated automatically when new documents are uploaded/changed/deleted in iFS, then start the ctxsrv utility on the computer where your Oracle database resides. To do this, issue the following command at the OS command line:
    ctxsrv -user ctxsys/ctxpwd@db_alias
    just replace here ctxpwd and db_alias with real values you specified during the installation.

  • Oracle Text performance -- failed attempts

    We are trying to implement a simple search of text data stored in a heavily used table (inserts/updates). There are 3 columns to index --
    Headline (varchar2(255))
    Subheadline (varchar2(255))
    Teaser (varchar2(4000))
    The first attempt to implement Oracle text w/ CATSEARCH
    begin
    ctx_ddl.create_index_set('cms_iset');
    ctx_ddl.add_index('cms_iset','poolid_cp, mediaid_cp'); /* sub-index A */
    end;
    ---- We knew we were going to filter on poolid_cp and mediaid_cp ---
    CREATE INDEX cms_headlineidx ON con_properties (headline)
    INDEXTYPE IS ctxsys.CTXCAT
    PARAMETERS ('index set cms_iset');
    CREATE INDEX cms_subheadlineidx ON con_properties (subheadline)
    INDEXTYPE IS ctxsys.CTXCAT
    PARAMETERS ('index set cms_iset');
    CREATE INDEX cms_teaseridx ON con_properties (teaser)
    INDEXTYPE IS ctxsys.CTXCAT
    PARAMETERS ('index set cms_iset');
    *********THE RESULTS*************
    Our application server would spin up threads that appeared to hang. The load on the DB servers (RAC) was higher than normal. This implementation would have saved us from having to do manual resyncs.
    The next attempt was implementing w/ CONTEXT:
    alter table con_properties add (dummy varchar2(1));
    begin
    ctx_ddl.create_preference('con_propsearch', 'MULTI_COLUMN_DATASTORE');
    ctx_ddl.set_attribute('con_propsearch', 'columns', 'headline,subheadline,teaser');
    end;
    CREATE INDEX con_properties_searchidx
    ON con_properties(dummy)
    INDEXTYPE IS CTXSYS.CONTEXT
    PARAMETERS ('datastore CTXSYS.con_propsearch')
    Records were getting put into the ctx_user_pending view at a rate of a few hundred per hour.
    ********THE RESULTS*************
    Same issue with the application servers spinning off threads that seem to be hung. Spikey load on the DB servers (RAC).
    NOTE: In both implementations, running search queries worked OK. However, dropping the text index in BOTH cases caused the application servers to behave normally again.
    Can anyone tell me what's going on internally with Oracle Text when a table is heavily inserted into and updated? What is going on in the background? Is there some sort of lock that the app servers are waiting on? I know there is overhead with inserts on a normal b-tree index. Is it "exponential" with Oracle Text?
    Thank you!

    When documents in the base table are inserted, updated, or deleted, their ROWIDs are held in a DML queue until you synchronize the index. You can view this queue with the CTX_USER_PENDING view. Apparently, you are not synchronizing your context index, so the queue is building up indefinitely. You need to establish some method of synchronizing your index. You can use parameters('sync(on commit)') in your index creation; or create an after insert or update statement-level trigger (not a row trigger) that uses dbms_job.submit to schedule ctx_ddl.sync_index upon commit of the DML; or you can run ctx_ddl.sync_index manually or on a schedule; or you can alter and rebuild your index periodically; or you can drop and recreate it periodically. Which method you choose depends on how current the information that you query needs to be. If your data needs to be current up to the moment, then you should sync on commit. Otherwise it may be better to do it in periodic batches.
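    To make the trade-off concrete, here is a minimal sketch of the two most common options, reusing the index and datastore names from the post above. The scheduler job name and the 15-minute interval are arbitrary examples, and the reply mentions dbms_job; a DBMS_SCHEDULER job is shown here as the more modern equivalent.
    -- Option 1: (re)create the index so it syncs as part of each committing transaction
    CREATE INDEX con_properties_searchidx ON con_properties(dummy)
    INDEXTYPE IS CTXSYS.CONTEXT
    PARAMETERS ('datastore CTXSYS.con_propsearch sync (on commit)');
    -- Option 2: leave the index asynchronous and sync it in periodic batches
    BEGIN
      DBMS_SCHEDULER.create_job(
        job_name        => 'SYNC_CON_PROPERTIES_SEARCHIDX',
        job_type        => 'PLSQL_BLOCK',
        job_action      => 'BEGIN ctx_ddl.sync_index(''CON_PROPERTIES_SEARCHIDX''); END;',
        repeat_interval => 'FREQ=MINUTELY;INTERVAL=15',
        start_date      => SYSTIMESTAMP,
        enabled         => TRUE);
    END;
    /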

  • Oracle Spatial Performance with 10-20.000 users

    Does anyone have any experience when Oracle Spatial is used with say 20.000 concurrent users. I am not interested in MapViewer response time, but lets say there is:
    - an app using 800 different tables each having an sdo_geometry column
    - the app is configured with different tables visible on different view scales
    - let's say an average of 40-50 tables is visible at any given time
    - some tables will have only a few records, while other can hold millions.
    - there is no client side caching
    - clients can zoom in/out pan.
    Anwers I am interested in:
    - What sort of server would be required
    - How can Oracle serve all that data (each Refresh renders the map and retrieves the data over the wire as there is no client side caching).
    - What sort of network infrastructure would be required.
    - Can clients connect to different servers and hence use load balancing or does Oracle have an automatic mechanism for that?
    Thanks in advance,
    Patrick

    Patrick, et al.
    There are lots of things one can do to improve performance in mapping environments, because a lot of the visualisation is based on "background" or read-only data. Here are some tips:
    1. Spatially sort read-only data.
    This tip makes sure that data that is close to each other in space are next to each other on disk! Dan gave a good suggestion when he referenced Chapter 14, "Reorganize the Table Data to Minimize I/O" pp 580- 582, Pro Oracle Spatial. But just as easily one can create a table as select ... where sdo_filter() where the filtering object is an optimized rectangle across the whole of the dataset. (This is quite quick on 10g and above but much slower on earlier releases.)
    When implementing this make sure that the created table is created such that its blocks are next to each other in the tablespace. (Consider tablespace defragmentation beforehand.) Also, if the data is READ ONLY set the PCTFREE to 0 in order to pack the data up into as small a number of blocks as possible.
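    A minimal sketch of tip 1, with made-up table and column names (src_table, geom) and a made-up whole-of-dataset MBR; it simply re-creates the table in the order the spatial index returns the rows, packed with PCTFREE 0:
    -- requires an existing spatial index on src_table.geom;
    -- SRID 8307 and the MBR coordinates are placeholders for your data
    CREATE TABLE src_table_sorted PCTFREE 0 AS
    SELECT t.*
      FROM src_table t
     WHERE SDO_FILTER(
             t.geom,
             SDO_GEOMETRY(2003, 8307, NULL,
                          SDO_ELEM_INFO_ARRAY(1, 1003, 3),   -- optimized rectangle
                          SDO_ORDINATE_ARRAY(-180, -90, 180, 90))
           ) = 'TRUE';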
    2. Generalise data
    Rendering spatial data can be expensive where the data is geometrically detailed (many vertices) esp where the data is being visualised at smaller scales than it was captured at. So, if your "zoom thresholds" allow 1:10,000 data to be used at 1:100,000 then you are going to have problems. Consider pre-generalising the data (see sdo_util.simplify) before deployment. You can add multiple columns to your base table to hold this data. Be careful with polygon data because generalising polygons that share boundaries will create gaps etc as the data is more generalised. Often it is better to export the data to a GIS which can maintain the boundary relationships when generalising (say via topological relationships).
    Oracle's MapViewer has excellent on-the-fly generalisation but here one needs to be careful. Application tier caching (cf Bryan's comments) can help here a lot.
    3. Don't draw data that is sub-pixel.
    As one zooms out objects become smaller and smaller until they reach a point where the whole object can be drawn within a single pixel. If you have control over your map visualisation application you might want to consider setting the SDO_FILTER parameter "min_resolution" flag dynamically so that its value is the same as the number of meters / pixel (eg min_resolution=10). If this is set Oracle Spatial will only include spatial objects in the returned search set if one side of a geometry's MBR is greater than or equal to this value. Thus any geometries smaller than a pixel will not be returned. Very useful for large scale data being drawn at small scales and for which no selection (eg identify) is required. With Oracle MapViewer this behaviour can be set via the generalized_pixels parameter.
    3. SDO_TOLERANCE, Clean Data
    If you are querying data other than via MBR (eg find all land parcels that touch each other) then make sure that your sdo_tolerance values are appropriate. I have seen sites where data captured to 1cm had an sdo_tolerance value set to a millionth of a meter!
    A corollary to this is make sure that all your data passes validation at the chosen sdo_tolerance value before deploying to visualisation. Run sdo_geom.validate_geometry()/validate_layer()...
    4. Rtree Spatial Indexing
    At 10g and above lots of great work went in to the RTree indexing. So, make sure you are using RTrees and not QuadTrees. Also, many GIS applications create sub-optimal RTrees by not using the additional parameters available at 10g and above.
    4.1 If your table/column sdo_geometry data contains only points, lines or polygons then let the RTree indexer know (via layer_gtype) as it can implement certain optimizations based on this knowledge.
    4.2 With 10g you can set the RTree's spatial index data block use via sdo_pct_free. Consider setting this parameter to 0 if the table/column sdo_geometry data is read only.
    4.3 If a table/column is in high demand (eg it is the most commonly used table in all visualisations) you can consider loading (a part of) the RTree index into memory. Now, with the RTree indexing, the sdo_non_leaf_tbl=true parameter will split the RTree index into its leaf (contains actual rowid reference) and non-leaf (the tree built on the leaves) components. Most RTrees are built without this so only the MDRT*** secondary tables are built. But if sdo_non_leaf_tbl is set to true you will see the creation of an additional MDNT*** secondary table (for the non_leaf part of the rtree index). Now, if appropriate, the non_leaf table can be loaded into memory via the following:
    ALTER TABLE MDNT*** STORAGE (BUFFER_POOL KEEP);
    This is NOT a general panacea for all performance problems. One should investigate other options before embarking on this (cf Tom Kyte's books such as Expert Oracle Database Architecture, 9i and 10g Programming Techniques and Solutions.)
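    As a hedged illustration of 4.1-4.3 above (the table, column and index names are placeholders; add whatever block-use and tablespace settings your site needs to the same PARAMETERS string):
    -- point-only layer: tell the R-tree about it and split leaf / non-leaf tables
    CREATE INDEX src_table_sorted_sidx ON src_table_sorted(geom)
    INDEXTYPE IS MDSYS.SPATIAL_INDEX
    PARAMETERS ('layer_gtype=POINT sdo_non_leaf_tbl=TRUE');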
    4.4 Don't forget to check your spatial index data quality regularly. Because many sites use GIS package GUI tools to create tables, load data and index them, there is a real tendency to not check what they have done or regularly monitor the objects. Check the SDO_RTREE_QUALITY column in USER_SDO_INDEX_METADATA and look for indexes with an SDO_RTREE_QUALITY setting that is > 2. If > 2 consider rebuilding or recreating the index.
    5. The rendering engine.
    Whatever rendering engine one uses make sure you try and understand fully what it can and cannot do. AutoDesk's MapGuide is an excellent product but I have seen it simply cache table/column data and never dynamically access it. Also, I have been at one site which was running Deegree and MapViewer and MapViewer was so fast in comparison to Deegree that I was called in to find out why. I discovered that Deegree was using SDO_RELATE(... ANYINTERACT ...) for all MBR queries while MapViewer was using SDO_FILTER. Just this difference was causing some queries to perform at < 10% of the speed of MapViewer!!!!
    6. Consider "denormalising" data
    There is an old adage in databases that is "normalise for edit, denormalise for performance". When we load spatial data we often get it from suppliers in a fairly flat or normalised form. In consort with spatial sorting, consider denormalising the data via aggregations based on a rendering attribute and some sort of spatial unit. For example, if you have 1 million points stored as single points in SDO_GEOMETRY.SDO_POINT which you want to render by a single attribute containing 20 values, consider aggregating the data using this attribute AND some sort of spatial BUCKET or BIN. So, consider using SDO_AGGR_UNION coupled with Spatial Analysis and Mining package functions to GROUP the data BY <<column_name>> and a set of spatial extents.
    6. Tablespace use
    Finally, talk to your DBA in order to find out how the oracle database's physical and logical storage is organised. Is a SAN being used or SAME arranged disk arrays? Knowing this you can organise your spatial data and indexes using more effective and efficient methods that will ensure greater scalability.
    7. Network fetch
    If your rendering engine (app server) and database are on separate machines you need to investigate what sort of fetch sizes are being used when returning data from queries to the middle-tier. Fetch sizes for attribute only data rows and rows containing spatial data can be, and normally are, radically different. Accepting the default settings for these sizes could be killing you (as could the sort_area_size of the Oracle session the application server has created on the database). For example I have been informed that MapInfo Pro uses a fixed value of 25 records per fetch when communicating with Oracle. I have done some testing to show that this value can be too small for certain types of spatial data. SQL Developer's GeoRaptor uses 100 which is generally better (but this one can modify this). Most programmers accept defaults for network properties when programming in ADO/ODBC/OLEDB/JDBC: just be careful as to what is being set here. (This is one of the great strengths of ArcSDE: its TCP/IP network transport is well written, tuneable and very efficient.)
    8. Physical Format
    Finally, while Oracle's excellent MapViewer requires data its spatial data to be in Oracle, other commercial rendering engines do not. So, consider using alternate, physical file formats that are more optimal for your rendering engine. For example, Google Earth Enterprise "compiles" all the source data into an optimal format which the server then serves to Google Earth Enterprise clients. Similarly, a shapefile on local disk to the application server (with spatial indexing) may be faster that storing the data back in Oracle on a database server that is being shared with other business databases (eg Oracle financials). If you don't like this approach and want to use Oracle only consider using a dedicated Oracle XE on the application server for the data that is read only and used in most of your generated maps eg contour or drainage data.
    Just some things to think about.
    regards
    Simon

  • Oracle Text Help with XML column values

    Hello. In addition to being new to Oracle Text, I am inheriting an Oracle Text application and have a couple of questions.
    First, a context-based index has been set up on a CLOB column which contains an XML-formatted document. The AUTO_SECTION_GROUP parameter has been set so that zones are created for each tag of the XML document. I have found that when using a browser to display the content of the CLOB, some of the column values have trouble displaying, and I receive an XML processing error. I believe this is because some of the XML document rows contain URLs that are not wrapped in a CDATA section. In any case, if the browser has trouble displaying the XML, will Oracle Text have trouble indexing the XML and creating the section group zones?
    Second, I understand that the NOT operator takes a right operand term and a left operand term. Can either of the terms be the result of the WITHIN operator, e.g. "dogs not (cats within animals)"?
    Thank you.

    I bet you just whipped that out, and I thank you with all my heart; it's amazing to me how many ways I tried to do what you did. Thanks.
    I have a second question relating to the same problem, and that is referencing the over state. Currently, I can write 'text' into the text field and see what I have coming in from XML in its place during the 'up' state.
    However, when the timeline hits the 'over' state, the textfield displays nothing, or 'text' if I have that written in. I suspect that I am not referencing the 'over' state correctly. Should I add one line of code referencing the text field, and not just the button, while in the over state?

  • Oracle Text - Problem with filtering binary documents (.doc, .pdf, etc...)

    Hi, I have a problem with filtering binary documents (.doc, .pdf, etc.). I use SQL*Plus for remote access to Oracle 10.2 on Linux and I create a table:
    CREATE TABLE test (id NUMBER PRIMARY KEY, text VARCHAR2(100));
    I insert into this table:
    INSERT into test values(1, 'PATH/text1.doc');
    INSERT into test values(2, 'PATH/text2.doc');
    and then:
    CREATE INDEX test_index ON test(text) indextype is ctxsys.context
    parameters ('datastore ctxsys.file_datastore
    filter ctxsys.auto_filter');
    The message "Index created" is displayed, but the objects DR$test_index$I, DR$test_index$K, DR$test_index$N, DR$test_index$R and DR$test_index$P are empty, so the index was probably not populated.
    I don't know where the problem is: either it is somewhere in this code or on the server (a wrong Oracle installation or restricted privileges). Do you know what the problem is?

    The following is an excerpt from the 10g online documentation. Note the items that I have put in bold.
    "FILE_DATASTORE
    The FILE_DATASTORE type is used for text stored in files accessed through the local file system.
    Note:
    FILE_DATASTORE may not work with certain types of remote mounted file systems.
    FILE_DATASTORE has the following attribute(s):
    Table 2-4 FILE_DATASTORE Attributes
    Attribute Attribute Value
    path path1:path2:pathn
    path
    Specify the full directory path name of the files stored externally in a file system. When you specify the full directory path as such, you need only include file names in your text column.
    You can specify multiple paths for path, with each path separated by a colon (:) on UNIX and semicolon(;) on Windows. File names are stored in the text column in the text table.
    If you do not specify a path for external files with this attribute, Oracle Text requires that the path be included in the file names stored in the text column.
    PATH Attribute Limitations
    The PATH attribute has the following limitations:
    If you specify a PATH attribute, you can only use a simple filename in the indexed column. You cannot combine the PATH attribute with a path as part of the filename. If the files exist in multiple folders or directories, you must leave the PATH attribute unset, and include the full file name, with PATH, in the indexed column.
    On Windows systems, the files must be located on a local drive. They cannot be on a remote drive, whether the remote drive is mapped to a local drive letter."
    With accessible paths and files, you get something like:
    SCOTT@orcl_11g> CREATE TABLE test (id NUMBER PRIMARY KEY, text VARCHAR2(100));
    Table created.
    SCOTT@orcl_11g>
    SCOTT@orcl_11g>
    SCOTT@orcl_11g> INSERT into test values(1,'c:\oracle11g\banana.pdf');
    1 row created.
    SCOTT@orcl_11g> INSERT into test values(2,'c:\oracle11g\cranberry.pdf');
    1 row created.
    SCOTT@orcl_11g>
    SCOTT@orcl_11g> CREATE INDEX test_index ON test(text) indextype is ctxsys.context
      2  parameters ('datastore ctxsys.file_datastore
      3  filter ctxsys.auto_filter');
    Index created.
    SCOTT@orcl_11g>
    SCOTT@orcl_11g> select count(*) from dr$test_index$i
      2  /
      COUNT(*)
           608
    SCOTT@orcl_11g>
    In the following, I used a non-existent path and non-existent file name, which produces the same results as when you use a remote path that does not exist locally.
    SCOTT@orcl_11g> CREATE TABLE test (id NUMBER PRIMARY KEY, text VARCHAR2(100));
    Table created.
    SCOTT@orcl_11g>
    SCOTT@orcl_11g>
    SCOTT@orcl_11g> INSERT into test values(3,'c:\nosuchpath\nosuchfile.pdf');
    1 row created.
    SCOTT@orcl_11g>
    SCOTT@orcl_11g> CREATE INDEX test_index ON test(text) indextype is ctxsys.context
      2  parameters ('datastore ctxsys.file_datastore
      3  filter ctxsys.auto_filter');
    Index created.
    SCOTT@orcl_11g>
    SCOTT@orcl_11g> select count(*) from dr$test_index$i
      2  /
      COUNT(*)
             0
    SCOTT@orcl_11g>

  • Free text retrieval with Context Cartridge

    I am wondering whether it would be a problem to create a Java application doing free text retrieval via the Context Cartridge. By that I mean:
    1. A JDBC call via the thin Oracle driver with CONTAINS in the WHERE clause. Is it supposed to work?
    2. Assuming we have several hits in a BLOB, can we provide next hit / previous hit functionality in a textarea field of a Java frame?

    This one was forwarded to the JDBC experts. Two experts (A and B) replied. Their answers are below.
    Michael Mitiaguin (guest) wrote:
    I am wondering whether it would be a problem to create a Java application doing free text retrieval via the Context Cartridge. By that I mean:
    1. A JDBC call via the thin Oracle driver with CONTAINS in the WHERE clause. Is it supposed to work?
    A. Yes.
    B. There is no limitation in JDBC on using CONTAINS in a WHERE clause.
    2. Assuming we have several hits in a BLOB, can we provide next hit / previous hit functionality in a textarea field of a Java frame?
    A. I don't really get what you mean. If you mean that you want to map a Java Frame to a BLOB column and, depending on the hit in the Java frame, you want to read the data from a certain position (offset), then yes. If you are using 8.0.x drivers, you can use the dbms_lob.read call. If you are using 8.1.x drivers, you can use the Blob.getBytes call.
    B. It can be achieved by creating a wrapper around the following method of oracle.sql.BLOB in the 8.1 driver:
    public long position(byte[] pattern, long start) throws SQLException
    Determines the byte position at which the specified byte pattern begins within the BLOB value that this Blob object represents. The search begins at position start.
    Parameters:
    pattern - the byte array for which to search
    start - the position at which to begin searching; the first position is 1
    Returns: the position at which the pattern appears, else -1.
    Throws: SQLException - if there is an error accessing the BLOB

  • Oracle text - issue with contains query

    Hello,
    Need urgent help.
    The following code in my procedure is giving me an error.
    TYPE c_1 is ref cursor;
    result_cursor c_1;
    i_text2 := 'NEW%';
    open result_cursor for
    'select /*+ INDEX_SS_DESC(e cad_addr_idx2 )*/
    from cad_address
    where
    contains(text, {:i_text2}, 1) > 0
    and rec_type in (1,2,3,4)
    order by occur_count desc'
    using
    i_text2;
    ORA-00936: missing expression
    ORA-06512: at "AV_OWNER.MY_PROC", line 43
    ORA-06512: at line 6
    Oracle version is 11.2.0.3.0.
    Thanks,

    Check that your table has a text index on this 'text' column. To know more about text indexes, go to
    http://docs.oracle.com/cd/B19306_01/text.102/b14217/ind.htm
    Also refer to the thread below, where someone faced issues with the CONTAINS clause:
    ORA-20000: Oracle Text error: DRG-10599: column is not indexed
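    For what it's worth, the ORA-00936 in the posted block is probably not an Oracle Text problem at all: the dynamic SQL has no select list between the hint and FROM, and the curly braces around the bind variable end up as literal characters in the statement. A minimal corrected sketch, assuming the table, columns and hint from the post:
    open result_cursor for
       'select /*+ INDEX_SS_DESC(e cad_addr_idx2) */ e.*
          from cad_address e
         where contains(e.text, :txt, 1) > 0
           and e.rec_type in (1, 2, 3, 4)
         order by e.occur_count desc'
    using i_text2;    -- i_text2 still holds the value 'NEW%'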

  • Using oracle text in apex report search

    I am trying to use Oracle Text in APEX, integrating it into an existing application. The idea is that it will allow searching in bigger text fields; that's how I want it to work. One of the Oracle packaged applications uses Oracle Text as well, so I will have a look at that too. I've adapted this search. I've added
    AND t. contains(oplossing, :P15_OPLOSSING)
    AND t.contains(sleutelwoorden, :P15_SLEUTELWOORDEN)
    That didn't work, so I changed those two to:
    AND t.oplossing = (t.contains(oplossing, :P15_OPLOSSING)>0)
    AND t.sleutelwoorden = (t.contains(sleutelwoorden, :P15_SLEUTELWOORDEN)>0)
    which didn't work either, as I expected. Clearly I'm not doing it correctly; I intend to look it up tonight in the packaged applications, as I do want to find it myself.
    But can anyone give me a hint on what I am doing wrong?
    SELECT t.ticketid ticketnr, t.ticketid,
    g.voornaam||' '||g.naam aangemaaktdoor,
    t.credt, t.applicatiecd, t.titel,
    s.statusdefoms,
    si.statusdefoms instat,
    NVL2(t.toegekend,'Y','N') toegekend,
    sleutelwoorden, klantprioriteitid, oplossing, s.htmlkleur, si.htmlkleur inthtmlkleur
    FROM ticket t,
    gebruiker g,
    status s,
    status si
    WHERE t.gebruikerid = g.gebruikerid
    AND t.statusid = s.statusid
    AND t.statusinternid = si.statusid (+)
    AND t.applicatiecd = NVL(:P0_APPLICATIECD, :F101_APPLICATIECD)
    AND (t.categorieid = :P15_CATEGORIEID OR NVL(:P15_CATEGORIEID, 0) = 0)
    AND (t.moduleid = :P15_MODULEID OR NVL(:P15_MODULEID, 0) = 0)
    AND (t.statusid = :P15_STATUSID OR NVL(:P15_STATUSID, 0) = 0)
    AND (t.statusinternid = :P15_INTSTATUSID OR NVL(:P15_INTSTATUSID, 0) = 0)
    AND (t.versieid = :P15_VERSIEID OR NVL(:P15_VERSIEID, 0) = 0)
    AND t.ticketid LIKE '%'||:P15_TICKETID||'%'
    AND t.gebruikerid = DECODE(NVL(:P15_GEBRUIKERID,0), 0, t.gebruikerid, :P15_GEBRUIKERID)
    AND t.credt BETWEEN NVL(:P15_DATUMVAN, To_Date('01-01-1900', 'DD-MM-YYYY')) AND NVL(To_Date(:P15_DATUMTOT, 'DD-MM-YYYY'), sysdate) +1
    AND t.titel LIKE '%'||:P15_TITEL||'%'
    AND t. contains(oplossing, :P15_OPLOSSING)
    AND t.contains(sleutelwoorden, :P15_SLEUTELWOORDEN)
    AND PCK$Ticket_Admin.getklantid(t.gebruikerid) = DECODE(Pck$Ticket_Admin.isklantadminroleN(:APP_USER,NVL(:P0_APPLICATIECD, :F101_APPLICATIECD)), 1, PCK$Ticket_Admin.getklantid(:APP103_GEBRUIKERID), PCK$Ticket_Admin.getklantid(t.gebruikerid))
    AND (:APP103_GEBRUIKERID IN (t.voor_gebruikerid, t.gebruikerid)
    OR Pck$Ticket_Admin.isintern(:APP_USER,:P0_APPLICATIECD) = 1)
    changed to:
    AND t.oplossing = (t.contains(oplossing, :P15_OPLOSSING)>0)
    AND t.sleutelwoorden = (t.contains(sleutelwoorden, :P15_SLEUTELWOORDEN)>0)

    I have worked it out further now and looked at the search in the packaged application. It turned out to be a PL/SQL block. I used what I found there to adapt the previous search. I added the following:
    OR (CONTAINS(t.oplossing, :P15_OPLOSSING)>0)
    OR (CONTAINS(t.sleutelwoorden, :P15_SLEUTELWOORDEN)>0)
         OR (CONTAINS(t.titel,:P15_SEARCH_T_O_S)>0 OR
         CONTAINS (t.oplossing, :P15_SEARCH_T_O_S)>0 OR
         CONTAINS(t.sleutelwoorden, :P15_SEARCH_T_O_S)>0 )
    OR (CONTAINS(t.titel,:P15_SEARCH_T_O_S)>0 AND
         CONTAINS (t.oplossing, :P15_SEARCH_T_O_S)>0 AND
         CONTAINS(t.sleutelwoorden, :P15_SEARCH_T_O_S)>0 )
    oplossing means solution
    sleutelwoorden means keywords
    titel means title
    Yet this doesn't work either. It gives an error message:
    failed to parse SQL query:
    ORA-01719: outer join operator (+) not allowed in operand of OR or IN
    I've tried adding the addition in a different place, but that gives the same error message. I'm not sure what to do now.
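    For what it's worth, ORA-01719 is a restriction of the old (+) outer-join notation, not of CONTAINS itself; rewriting the joins in ANSI syntax usually gets around it. A hedged sketch of just the join and search part of the report query above:
    FROM ticket t
    JOIN gebruiker g    ON t.gebruikerid    = g.gebruikerid
    JOIN status    s    ON t.statusid       = s.statusid
    LEFT JOIN status si ON t.statusinternid = si.statusid    -- replaces "si.statusid (+)"
    WHERE t.applicatiecd = NVL(:P0_APPLICATIECD, :F101_APPLICATIECD)
      -- ... the other filters from the original query ...
      AND (   CONTAINS(t.oplossing,      :P15_OPLOSSING)      > 0
           OR CONTAINS(t.sleutelwoorden, :P15_SLEUTELWOORDEN) > 0
           OR CONTAINS(t.titel,          :P15_SEARCH_T_O_S)   > 0)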

  • Oracle Database Performance With Semantic

    Hello,
    Is there a Developer's Guide for Semantic that specifically talks about database performance with the semantic network/tables/indexes? We are having performance issues that get worse as the semantic network grows.
    Any help or pointers would be appreciated.
    Thanks
    -MichaelB

    Matt,
    Thanks for your response. Here are the answers to the questions about our setup/environment.
    1) Are you querying multiple models and/or a model + entailment? If so, are you using a virtual model and using the ALLOW_DUP=T query option?
    A single model, no entailments. We attempted to use multiple models, and a virtual model (with ALLOW_DUP=T), however the UNION ALL in the explain plan made the query duration unacceptable.
    2) Are you using named graphs?
    No named graphs.
    3) How many triples are you querying?
    Approximately 85 million.
    4) What semantic network and/or datatype indexes have been created?
    We have PCSGM, PSCGM, PSCM, PCSM, CPSM, and SCM.
    5) What is your hardware setup (number and type of disks, RAM, processor, etc.)?
    We are running the 11.2.0.3 database on a Sun Solaris T2000, we have ASM managing our disks from RAID5, I believe currently we have two Disk Groups with the indexes in one and the data tables in the other. We have 32 GB of memory, and 32 CPUs. However, it is not the only thing running on the machine.
    6) How much memory have you allocated to the database (pga, sga, memory_target, etc.)?
    We have the memory_target set to 9GB, the db_cache_size set to 2GB, and the db_keep_cache_size set to 4.5GB. `pga_aggregate_target` is set to 0 (auto), as is `sga_target`.
    (Since my initial request, we pinned the RDF_VALUE$ (~2.5GB) and C_PK_VID (~1.7GB) objects in the KEEP buffer cache, which drastically improved performance)
    7) Are you using parallel query execution?
    Yes, some of the more complex queries we run with the parallel hint set to 8.
    8) Have you tried dynamic sampling?
    Yes. We have ODS set to 3 for our more complex queries, we have not altered this much to see if there is a performance gained by changing this value.
    Thanks again,
    -Michael

  • "Oracle text" performance Problem

    Architecture for performance of a web site search!
    I want to use the text service of Oracle, but I am worried about the performance...
    How should I design the system if I want the best performance and scalability?
    1. Should I build a separate column in every table, merge all the information into that one column, and full-text index that column?
    2. Put a full-text index on all columns in the table and use OR clauses (and reverse-rank for AND), using the CONTAINSTABLE function?
    3. Make a different table with ID, TYPE and _VALUE fields and search in that table with fewer columns?
    4. Separate the full-text database and search in a separate DB so that I can scale better?
    Did anybody have a similar problem? Any books on full-text search?

    The number of indexes is irrelevant as such. If you really need 100 tables and you really need full text search on all of them, you need 100 indexes. When you are inserting data in any given table, the fact that there are 99 other tables with 99 other Text indexes is irrelevant.
    That being said, I would seriously question whether a data model that involves doing full-text searches on 100 separate tables was actually a proper data model. That strikes me as highly unlikely.
    Justin

  • Oracle text performance

    Hi all,
    I am using Oracle Text for indexing purposes.
    The query below does a wildcard search and its performance is very poor.
    /* Formatted on 2009/08/11 16:06 (Formatter Plus v4.8.5) */
    SELECT *
      FROM (SELECT z.*, ROWNUM r
              FROM (SELECT   *
                        FROM (SELECT *
                                FROM (SELECT score (1) score,
                                             SUBSTR (note, 1, 200) note, tmplt_id,
                                             appl_name, appl_use, appl_reference,
                                             appl_entity, effctv_start_date,
                                             effctv_end_date, eq_name, note_id,
                                             Bfcommon.getvaluebasedontemplate
                                                                 (tmplt_id,
                                                                  note_id
                                                                 ) VALUE,
                                             (SELECT em_name
                                                FROM ip_eq
                                               WHERE eq_name = a.eq_name) em_name,
                                             Bfcommon.is_edit_allowed_note1
                                                       (a.note_id,
                                                        'PACRIM1\E317329',
                                                        SYSDATE,
                                                        SYSDATE
                                                       ) LOCKED
                                        FROM bf_note a
                                       WHERE tmplt_id NOT IN (19, 14, 16)
                                         AND effctv_start_date >=
                                                TO_DATE ('08/12/2008 00:00:00',
                                                          'MM/DD/YYYY HH24:MI:SS')
                                         AND (   effctv_end_date <=
                                                    TO_DATE
                                                          ('08/12/2009 00:00:00',
                                                            'MM/DD/YYYY HH24:MI:SS')
                                               OR effctv_end_date IS NULL)
                                         AND contains (note, '%test%', 1) > 0) r
                              UNION
                              SELECT score (1) score, SUBSTR (note, 1, 200) note,
                                     tmplt_id, appl_name, appl_use,
                                     appl_reference, appl_entity,
                                     effctv_start_date, effctv_end_date, eq_name,
                                     a.note_id, b.trgt_name VALUE,
                                     (SELECT em_name
                                        FROM ip_eq
                                       WHERE eq_name = a.eq_name) em_name,
                                     Bfcommon.is_edit_allowed_note1
                                                       (a.note_id,
                                                        'PACRIM1\E317329',
                                                        SYSDATE,
                                                        SYSDATE
                                                       ) LOCKED
                                FROM bf_note a, om_limit_hist b
                               WHERE (   date_changed =
                                            (SELECT MAX (date_changed)
                                               FROM om_limit_hist h1
                                              WHERE date_changed <
                                                       (SELECT MAX (date_changed)
                                                          FROM om_limit_hist h2
                                                         WHERE h2.trgt_name =
                                                                       b.trgt_name)
                                                AND h1.trgt_name = b.trgt_name)
                                      OR date_changed =
                                               (SELECT MAX (date_changed)
                                                  FROM om_limit_hist h1
                                                  WHERE h1.trgt_name = b.trgt_name))
                                 AND b.note_id = a.note_id(+)
                                 AND tmplt_id = 19
                                 AND effctv_start_date >=
                                        TO_DATE ('08/12/2008 00:00:00',
                                                  'MM/DD/YYYY HH24:MI:SS')
                                 AND (   effctv_end_date <=
                                            TO_DATE ('08/12/2009 00:00:00',
                                                      'MM/DD/YYYY HH24:MI:SS')
                                       OR effctv_end_date IS NULL)
                                 AND contains (note, '%test%', 1) > 0)
                    ORDER BY 1 DESC) z
             WHERE ROWNUM <= 50)
    WHERE r >= 1
    Here it does a wildcard search for the string 'test'. Please tell me how to index it in this case.
    Its execution plan is:
    Execution Plan
    Plan hash value: 3535478881
    | Id  | Operation                            | Name                | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                     |                     |    50 |   128K|       |   632   (1)| 00:00:08 |
    |*  1 |  VIEW                                |                     |    50 |   128K|       |   632   (1)| 00:00:08 |
    |*  2 |   COUNT STOPKEY                      |                     |       |       |       |         |     |
    |   3 |    VIEW                              |                     |    69 |   177K|       |   632   (1)| 00:00:08 |
    |*  4 |     SORT ORDER BY STOPKEY            |                     |    69 |   177K|   376K|   632   (1)| 00:00:08 |
    |   5 |      VIEW                            |                     |    69 |   177K|       |   590   (1)| 00:00:08 |
    |   6 |       SORT UNIQUE                    |                     |    69 |  7281 |       |   590   (2)| 00:00:08 |
    |   7 |        UNION-ALL                     |                     |       |       |       |         |     |
    |*  8 |         TABLE ACCESS BY INDEX ROWID  | BF_NOTE             |    68 |  7140 |       |   585   (1)| 00:00:08 |
    |*  9 |          DOMAIN INDEX                | BF_NOTE_TEXT_SEARCH |       |       |       |   221   (0)| 00:00:03 |
    |* 10 |         FILTER                       |                     |       |       |       |         |     |
    |  11 |          NESTED LOOPS                |                     |     1 |   141 |       |     3   (0)| 00:00:01 |
    |  12 |           TABLE ACCESS FULL          | OM_LIMIT_HIST       |     1 |    36 |       |     2   (0)| 00:00:01 |
    |* 13 |           TABLE ACCESS BY INDEX ROWID| BF_NOTE             |     1 |   105 |       |     1   (0)| 00:00:01 |
    |* 14 |            INDEX UNIQUE SCAN         | BF_NOTE_PK          |     1 |       |       |     1   (0)| 00:00:01 |
    |  15 |          SORT AGGREGATE              |                     |     1 |    23 |       |         |     |
    |* 16 |           INDEX RANGE SCAN           | OM_LIMIT_HIST_IDX1  |     1 |    23 |       |     0   (0)| 00:00:01 |
    |  17 |            SORT AGGREGATE            |                     |     1 |    23 |       |         |     |
    |* 18 |             INDEX RANGE SCAN         | OM_LIMIT_HIST_IDX1  |     1 |    23 |       |     0   (0)| 00:00:01 |
    |  19 |            SORT AGGREGATE            |                     |     1 |    23 |       |         |     |
    |* 20 |             INDEX RANGE SCAN         | OM_LIMIT_HIST_IDX1  |     1 |    23 |       |     0   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("R">=1)
       2 - filter(ROWNUM<=50)
       4 - filter(ROWNUM<=50)
       8 - filter("EFFCTV_START_DATE">=TO_DATE(' 2008-08-12 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
                  "TMPLT_ID"19 AND "TMPLT_ID"14 AND "TMPLT_ID"16 AND ("EFFCTV_END_DATE" IS NULL OR
                  "EFFCTV_END_DATE"<=TO_DATE(' 2009-08-12 00:00:00', 'syyyy-mm-dd hh24:mi:ss')))
       9 - access("CTXSYS"."CONTAINS"("NOTE",'%test%',1)>0)
      10 - filter("DATE_CHANGED"= (SELECT /*+ */ MAX("DATE_CHANGED") FROM "OM_LIMIT_HIST" "H1" WHERE
                  "H1"."TRGT_NAME"=:B1 AND "DATE_CHANGED"< (SELECT /*+ */ MAX("DATE_CHANGED") FROM "OM_LIMIT_HIST" "H2" WHER
    E
                  "H2"."TRGT_NAME"=:B2)) OR "DATE_CHANGED"= (SELECT /*+ */ MAX("DATE_CHANGED") FROM "OM_LIMIT_HIST" "H1"
                  WHERE "H1"."TRGT_NAME"=:B3))
      13 - filter("TMPLT_ID"=19 AND "EFFCTV_START_DATE">=TO_DATE(' 2008-08-12 00:00:00', 'syyyy-mm-dd
                  hh24:mi:ss') AND "CTXSYS"."CONTAINS"("NOTE",'%test%',1)>0 AND ("EFFCTV_END_DATE" IS NULL OR
                  "EFFCTV_END_DATE"<=TO_DATE(' 2009-08-12 00:00:00', 'syyyy-mm-dd hh24:mi:ss')))
      14 - access("B"."NOTE_ID"="A"."NOTE_ID")
      16 - access("H1"."TRGT_NAME"=:B1 AND "DATE_CHANGED"< (SELECT /*+ */ MAX("DATE_CHANGED") FROM
                  "OM_LIMIT_HIST" "H2" WHERE "H2"."TRGT_NAME"=:B2))
           filter("DATE_CHANGED"< (SELECT /*+ */ MAX("DATE_CHANGED") FROM "OM_LIMIT_HIST" "H2" WHERE
                  "H2"."TRGT_NAME"=:B1))
      18 - access("H2"."TRGT_NAME"=:B1)
      20 - access("H1"."TRGT_NAME"=:B1)

    What version of Oracle are you using?
    Your sample query is a good example of the benefits and costs of 'mixed' queries, and the challenges of mixed query performance. Oracle 11g has some very helpful new features (search for SDATA) that can really improve performance. (Specifically, it looks like your query does some useful date-range bounding. You need to get that into the FT index).
    In the end, it's not going to be easy to look at your reasonably complex query, understand the data and relationships, and wave the magic wand to make the thing go fast.
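    Following the SDATA pointer above: on 11g the date columns can be built into the Text index as a composite domain index (FILTER BY), so the range predicates are resolved inside the index instead of on every CONTAINS hit. A hedged sketch using the names from the post (keep whatever datastore/lexer/wordlist parameters the existing index already uses):
    drop index bf_note_text_search;
    create index bf_note_text_search on bf_note(note)
      indextype is ctxsys.context
      filter by effctv_start_date, effctv_end_date
      parameters ('sync (on commit)');
    The double-truncated '%test%' term is a separate problem; a wordlist with substring_index=TRUE (as in the first post of this thread) is the usual way to make leading wildcards bearable.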
