Tuning Simple Intermedia Queries

I am comparing the performance of doing a text search with
standard sql and Oracle Intermedia.
I have a table with a text column NARRLONG, defined as below:
Name        Null?     Type
----------  --------  --------------
NARRPREFIX  NOT NULL  VARCHAR2(2)
NARRKEY     NOT NULL  VARCHAR2(240)
NARRLANG    NOT NULL  VARCHAR2(2)
NARRSEQ     NOT NULL  VARCHAR2(5)
NARRLSEQ              VARCHAR2(5)
NARRCHADTE            DATE
NARRFLAG              CHAR(1)
NARRLONG              VARCHAR2(1260)
The table has a primary key on NARRPREFIX, NARRKEY, NARRLANG,
NARRSEQ and a text index on NARRLONG.
When I run the query:
select count(*) from maa240 where narrprefix = 'BA'
and narrkey like '000010000100001C%'
and (upper(narrlong) like ('%VDU%')
or upper(narrlong) like ('%INPUT%'))
I get a count of 273 back in approx. 16 secs.
When I run the equivalent interMedia query (Oracle 8.1.7):
select count(*) from maa240
where contains (narrlong, '(%VDU%) OR (%INPUT%)') > 0
and narrprefix = 'BA'
and narrkey like '000010000100001C%'
I get the same count of 273, but in approx. 3 minutes.
Analyzing the table made no difference.
Can anyone point me in the right direction for tuning this query?
Thanks
Andy Lamberth
OCP DBA

I believe that the problem revolves around the way that
interMedia Text now works. The contains search returns rowids,
which are then accessed as necessary. In your example I suspect
that a very large number of rows match the text criteria you
have specified; each of those rows must then be read to check the
other (non-IM) criteria before giving you the results.
By contrast, your non-IM query may well be just accessing an
ordinary index based on most or all of your criteria and therefore
reading comparatively few rows.
To confirm this, either run the IM predicate on its own to get a
count of the number of rows it matches, or use tkprof to examine
trace files.
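A minimal sketch of both checks, reusing the query from the post above
(the trace steps assume you can read the files written to user_dump_dest):

-- How many rows does the text index alone match?
select count(*) from maa240
where contains (narrlong, '(%VDU%) OR (%INPUT%)') > 0;

-- Trace the full query, then run tkprof on the resulting trace file:
alter session set timed_statistics = true;
alter session set sql_trace = true;
-- ... run the slow query here, then exit the session and run:
-- $ tkprof <tracefilename> <outputfilename>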
We use a large number of IM indexes and have found that queries
matching a large number of rows on an individual IM index
perform poorly. By contrast, when there are (relatively) few
matches, performance is certainly superior to that of ordinary indexes.

Similar Messages

  • Intermedia queries..

    I would like to know if there is any solution for this...
    A. When the interMedia process runs in the background, it requires a user name and password, and these will be shown in the process list when you use the ps command. This could be a security problem, since the interMedia admin user has certain system privileges.
    B. There are no good monitoring tools or logs for interMedia. A couple of times the interMedia process went south without any error message or logs. If you want to monitor whether the interMedia process is still running, you have to create your own tool to monitor the process.
    thanks,
    Manish.

    Need more info - please supply version/platform information, and interMedia components used.

  • Performance tuning required in queries having where clause on >= or <=

    Hi,
    I have a query which is taking a lot of time because of a filter condition with >=.
    Say:
    select * from <tableA> where <column1> >= sysdate;
    By any chance, can we do something to avoid a full scan on the above table?

    Hi,
    The oracle version is :
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    PL/SQL Release 10.2.0.4.0 - Production
    CORE     10.2.0.4.0     Production
    TNS for Solaris: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production
    the query is :
    with temp as(
    select sysdate as valuedate from dual
    union
    select sysdate+1 as valuedate from dual
    union
    select sysdate+2 as valuedate from dual
    union
    select sysdate+3 as valuedate from dual
    )
    select valuedate from temp where valuedate >= sysdate
    Given the above scenario, is it possible to build an index which can be used to improve the performance?
    The above query is just an example; the original query fetches data from a huge table with a >= filter condition, and I want my index to be used there.
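    As a hedged aside (the table and index names here are hypothetical): a plain B-tree index on the filtered column can support a >= range scan, but the optimizer will only choose it when the predicate is selective; a filter that matches most of the table will still get a full scan.

    -- Index the date column used in the filter:
    create index big_tab_valuedate_ix on big_tab (valuedate);
    -- A selective range predicate can then use an index range scan:
    select * from big_tab where valuedate >= sysdate;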

  • Distributed Queries w/interMedia

    Does interMedia support simple distributed queries such as the following:
    select doc_id from doc_table@dblink where contains(text,'November',0)>0;

    Quoting Paul Dixon ([email protected]): "This does not work so far in >= 8i."
    I finally figured this one out. Add @dblink between "contains" and the "(". Works fine like this against 8.1.7.
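    A sketch of the fix described above, applied to the earlier query:

    select doc_id from doc_table@dblink
    where contains@dblink(text, 'November', 0) > 0;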

  • TUNING QUERIES THAT USE AN INTERMEDIA TEXT INDEX

    Product: ORACLE SERVER
    Date written: 2002-04-12
    TUNING QUERIES THAT USE AN INTERMEDIA TEXT INDEX
    ===============================================
    Purpose
    Ways to improve the speed of queries that use interMedia Text.
    Explanation
    1. Analyze all the tables in the query
    Analyze every table referenced by a query that uses the text index.
    For example, you can use one of the following commands:
    ANALYZE TABLE <table_name> COMPUTE STATISTICS;
    or
    ANALYZE TABLE <table_name> ESTIMATE STATISTICS 1000 ROWS;
    or
    ANALYZE TABLE <table_name> ESTIMATE STATISTICS 50 PERCENT;
    2. Use the FIRST_ROWS hint
    For better response time, try the first_rows hint. The optimizer mode
    set by default in the database is choose mode, which follows a plan
    aimed at the fastest overall throughput (all_rows mode), so from the
    user's point of view it can feel slower than first_rows.
    Give the query a hint as follows and check the performance:
    select /*+ FIRST_ROWS */ pk, col from ctx_tab
    where contains(txt_col, 'test', 1) > 0;
    Note, however, that with the first_rows hint the results are not
    automatically ordered by score, because each matching document is
    returned as soon as it is found.
    3. Make sure the text index is not fragmented
    For tables with many inserts and deletes, index fragmentation must be
    removed. Index fragmentation can be checked as follows:
    select count(*) from dr$<indexname>$i; -> A
    select count(*) from (select distinct(token_text) from dr$<indexname>$i); -> B
    If the ratio A/B is greater than 3:1, it is advisable to run
    optimize_index. The index can be optimized with the following command:
    alter index <index_name> rebuild online
    parameters('optimize full unlimited');
    Giving the online option during the rebuild lets the index remain
    usable while it is being rebuilt, but if possible it is better to
    rebuild when no users are connected.
    4. Check the execution plan and SQL trace
    If the speed does not improve much despite these basic steps, you need
    to take a SQL trace yourself and check the execution plan. For example,
    take a SQL trace in SQL*Plus as follows:
    alter session set timed_statistics=true;
    alter session set sql_trace=true;
    select ..... -> execute the query
    After execution,
    exit
    When the trace file appears in the directory specified by
    user_dump_dest, run tkprof on it as follows and review the output:
    $ tkprof <tracefilename> <outputfilename> explain=username/password
    Reference Documents
    Bulletin #10134: How to use SQL trace and tkprof

  • Tuning sql - uneven execution times

    Hi-
    A common problem we have when we are performance tuning our sql queries is that an initial run of the query will be slow, but subsequent runs of the same sql might be up to ten times faster. Presumably this results from some kind of caching.
    This confounds our ability to tune the sql, as we cannot distinguish changes in run time that result from changes to the sql from those that result from whatever caching Oracle is doing.
    Is there a way for us to tell Oracle "we are trying to tune this query, please don't do any caching", or some other way to get consistent run times for performance comparison?
    Thanks,
    Steve
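    As a hedged aside (this is not from the replies below): on a private 10g+ test instance you can flush the caches between runs to make timings more repeatable, though note that production queries usually run against a warm cache anyway.

    -- Discard cached data blocks and cached plans between test runs:
    alter system flush buffer_cache;
    alter system flush shared_pool;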

    jgarry wrote:
    "Which brings up another fundamental tuning issue - how do you order the importance of problems unless you start by measuring the elapsed times?"
    I would argue that looking at what that elapsed time is spent on (wait events) is a lot more useful.
    (I happen to agree with you Billy, I'm just pointing out what some tuning methodologies say, and an aspect of compulsive tuning disorder. Never really suffered from that.)
    90%, if not more, of all performance problems I've ever dealt with were due to either application code or application/database design.
    When you pop open the hood of an RDBMS or operating system to address performance problems, then you are saying by implication that the code is 100% correct and the design is 100% correct.
    I always get uncomfortable with people wanting to immediately pop the hood to fix performance problems. Set this s/pfile parameter.. turn that Oracle knob.. throw that switch. This is second-guessing as to what the actual root cause of the problem is.
    When I'm given a problem SQL to tune for performance (some of these spanned a couple of A4 pages for a single SQL!), my first question is always: what is the goal? What is the query supposed to do? Not setting up a SQL trace and playing with trace events and the like. That comes afterwards, when you know for a fact that the design of the query is correct.
    I think people underestimate SQL. It is a relatively simple language, unlike object-oriented and even procedural languages. And because it is seen as simple, very little thought goes into designing a SQL "+program+" - as that is what a SQL statement is: a program that specifies how (sometimes enormous) data sets need to be processed, applying (sometimes very complex) business logic.
    Ask the same developer to write that in Java/.Net/etc and the developer will spend a significant time designing the program. Yet, when writing it in SQL, very little thought is given (IMO) by developers to the design of that SQL program... even though with very little code you are hitting large data sets and doing some pretty complex processing.
    So because SQL is treated as "+simplistic+", performance problems in this regard are often treated as SQL problems. CBO not doing it correctly. Slapping more indexes on a table. Etc. Instead of looking at the design of that SQL and first determining whether it is correct or not.
    Top down. Something that was drilled into me when I started programming in Cobol on mainframes years ago. You deal with design and problems from the top down. :-)
    "The real reconciliation comes in recognizing the importance of context, sequencing and iteration in problem definition. When you add multiuser issues into the mix you can get things like: users complain their queries take too long, developer starts tuning the sql since hey, most performance problems are sql, but the resources are sucked by something else entirely, or even the problem sql punching itself in the face."
    Yep. Have seen many SQLs that hurt themselves badly (in addition to the rest of the server). Like multiple passes through the same data set.
    "So shouldn't there be a 'typical' load on the test system when tuning, rather than fresh system or empty system with iterations?"
    It depends... It makes sense to deal with a baseline ito performance and measurement and data volumes and so on when designing and coding.

  • Stored Procedures for Simple SQL statements

    Hi Guys,
    We are using an Oracle 10g database and WebLogic for the frontend.
    The product was previously developed in .NET and SQL Server, and now it is being redeveloped in Java (WebLogic) and Oracle 10g.
    Since the project was developed in SQL Server, there are a lot of procedures written for simple sql queries. Now I would like to gather your suggestions / pointers on using procedures for simple select statements or inserts from Java.
    I have gathered a list of points on using PL/SQL procedures for simple select queries:
    Cons
    If we use procedures for select statements, a lot of ref cursors are opened for simple select statements (open cursors at a huge rate)
    Simple select statements are much faster than executing them from a procedure
    Pros
    Code changes for modifying a select query in PL/SQL are much easier than in Java
    Your help in this regard is most valuable. Please post your points / thoughts here.
    Thanks & Regards
    Srinivas
    Edited by: Srinivas_Reddy on Dec 1, 2009 4:52 PM

    Srinivas_Reddy wrote:
    "Cons: If we use procedures for select statements there are lot many Ref Cursors opened for Simple select statements (Open cursors at huge rate)"
    Not entirely correct. All SQLs that hit the SQL engine are stored as cursors.
    On the client side, you have an interface that deals with this SQL cursor. It can be a Java class, a Delphi dataset, or a PL/SQL ref cursor.
    Yes, cursors are created/opened at a huge rate by the SQL engine. But it is capable of doing that. What you need to do to facilitate that is send it SQLs that use bind variables. This enables the SQL engine to simply re-use the existing cursor for that SQL.
    "Simple select statements are much faster than executing them from Procedure"
    Also not really correct. SQL performance is SQL performance. It has nothing to do with how you create the SQL on the client side and what client interface you use. The SQL engine does not care whether you use a PL/SQL ref cursor or a Java class as your client interface. That does not change the SQL engine's performance.
    Yes, this can change the performance on the client side. But that is entirely in the hands of the developer and how the developer selected to use the available client interfaces to interface with the SQL cursor in the SQL engine.
    "Pros: Code changes for modifying select query in PL/SQL much easier than in Java"
    This is not a pro merely for ref cursors, but for using PL/SQL as the abstraction layer for the data model implemented, and having it provide a "business function" interface to clients, instead of having the clients deal with the complexities of the data model and SQL.
    I would seriously consider ref cursors in your environment. With PL/SQL serving as the interface, there is a single place to tune SQL, and a single place to update SQL. It allows one to make data model changes without changing or even recompiling the client. It allows one to add new business logic and processing rules, again without having to touch the client.
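    A minimal sketch of that ref-cursor interface (the procedure, table, and column names here are hypothetical):

    -- The client calls the procedure and fetches from the returned cursor;
    -- it never needs to know the underlying data model.
    create or replace procedure get_customer_orders(
        p_cust_id in  number,
        p_orders  out sys_refcursor
    ) as
    begin
        open p_orders for
            select order_id, order_date, order_total
            from   orders                  -- hypothetical table
            where  cust_id = p_cust_id;    -- bind variable via the parameter
    end;
    /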

  • Steps for performance Tuning....!!!!

    Hi all,
    I need your help with performance tuning.
    While we do tuning in Oracle, apart from indexes, the where clause and the order by clause, what are the other points we need to check? I mean explain plan etc...
    I am working as an Informatica developer, but I need to produce a document which points out the steps we can check while doing performance tuning on SQL queries.
    Thanks in advance for your help.
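    As a hedged aside (the table and predicate here are hypothetical), checking the execution plan is usually the first of those steps:

    -- Show the plan the optimizer chose for a statement:
    explain plan for
    select * from emp where deptno = 10;
    select * from table(dbms_xplan.display);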

    Hi,
    have a look at these links; they may be helpful to you:
    When your query takes too long ...
    HOW TO: Post a SQL statement tuning request - template posting
    Edited by: Ravi291283 on Jul 28, 2009 4:02 AM

  • Simple query performance problem

    Hey!
    I'm using two simple XQUpdate queries in my wholedoc container.
    a) insert nodes <node name="my_name"/> as last into collection('xml_content.dbxml')[dbxml:metadata('dbxml:name')='document.xml']/nodes[1]
    b) delete node collection('xml_content.dbxml')[dbxml:metadata('dbxml:name')='document.xml']/nodes[1]/node[@name='my_name'][last()]
    The queries are operating on the same document.
    1) First a bunch of 'insert' queries is executed (ca. 50),
    2) then a bunch of 'delete' queries (ca. 50).
    The name attribute of the node element varies.
    After a couple of iterations of 1) and 2), each XQUpdate statement takes a long time to complete (ca. 5-10 secs, whereas before it took much less than a second).
    The number of node elements in the nodes element never exceeded 50, and eventually it works very slowly even with 2 node elements.
    Does anybody have an idea what goes wrong after certain number of queries? What are the possible solutions here? How can I examine what is wrong?
    I didn't find relevant information in DB XML docs. Maybe I should look at BDB docs?
    Thanks in advance,
    Vyacheslav

    Here is a patch to fix the problem in 2.4.16. Note that the slowdown that this patch fixes only applies to whole document containers.
    Lauren Foutz
    diff -ru dbxml-2.4.16-orig/dbxml/src/dbxml/Indexer.cpp dbxml-2.4.16/dbxml/src/dbxml/Indexer.cpp
    --- dbxml-2.4.16-orig/dbxml/src/dbxml/Indexer.cpp     2008-10-21 17:27:22.000000000 -0400
    +++ dbxml-2.4.16/dbxml/src/dbxml/Indexer.cpp     2009-04-27 14:06:40.000000000 -0400
    @@ -477,7 +477,8 @@
                        if(updateStats_) {
                             // Get the size of the node
                             size_t nodeSize = 0;
    -                         if(ninfo != 0) {
    +                         // Node size is kept only for node containers
    +                         if(ninfo != 0 && container_->isNodeContainer()) {
                                  const NsFormat &fmt =
                                       NsFormat::getFormat(NS_PROTOCOL_VERSION);
                                  nodeSize = ninfo->getNodeDataSize();
    @@ -487,18 +488,22 @@
                                                        0, /*count*/true);
    -                         // Store the node stats for this node
    +                         /* Store the node stats for this node, only the descendants
    +                          * of the node being partially indexed are being removed/added
    +                          */
                             StructuralStats *cstats = &cis->stats[0];
    -                         cstats->numberOfNodes_ = 1;
    +                         cstats->numberOfNodes_ = this->getStatsNumberOfNodes(ninfo);
                             cstats->sumSize_ = nodeSize;
                             // Increment the descendant stats in the parent
                             StructuralStats *pstats = 0;
                             if (pis) {
                                  pstats = &pis->stats[0];
    -                              pstats->sumChildSize_ += nodeSize;
    -                              pstats->sumDescendantSize_ +=
    -                                   nodeSize + cstats->sumDescendantSize_;
    +                              if (container_->isNodeContainer()) {
    +                                   pstats->sumChildSize_ += nodeSize;
    +                                   pstats->sumDescendantSize_ +=
    +                                        nodeSize + cstats->sumDescendantSize_;
    +                              }
                                  pstats = &pis->stats[k.getID1()];
                                  pstats->sumNumberOfChildren_ += 1;
    diff -ru dbxml-2.4.16-orig/dbxml/src/dbxml/Indexer.hpp dbxml-2.4.16/dbxml/src/dbxml/Indexer.hpp
    --- dbxml-2.4.16-orig/dbxml/src/dbxml/Indexer.hpp     2008-10-21 17:27:18.000000000 -0400
    +++ dbxml-2.4.16/dbxml/src/dbxml/Indexer.hpp     2009-04-27 14:08:20.000000000 -0400
    @@ -19,6 +19,7 @@
    #include "OperationContext.hpp"
    #include "KeyStash.hpp"
    #include "StructuralStatsDatabase.hpp"
    +#include "nodeStore/NsNode.hpp"
    namespace DbXml
    @@ -181,6 +182,8 @@
         void checkUniqueConstraint(const Key &key);
         void addIDForString(const unsigned char *strng);
    +     
    +     virtual int64_t getStatsNumberOfNodes(const IndexNodeInfo *ninfo) const { return 1; }
    protected:     
         // The operation context within which the index keys are added
    diff -ru dbxml-2.4.16-orig/dbxml/src/dbxml/nodeStore/NsReindexer.cpp dbxml-2.4.16/dbxml/src/dbxml/nodeStore/NsReindexer.cpp
    --- dbxml-2.4.16-orig/dbxml/src/dbxml/nodeStore/NsReindexer.cpp     2008-10-21 17:27:22.000000000 -0400
    +++ dbxml-2.4.16/dbxml/src/dbxml/nodeStore/NsReindexer.cpp     2009-04-27 14:04:42.000000000 -0400
    @@ -103,6 +103,7 @@
              const DocID &did = document_.getID();
              DbWrapper &db = *document_.getDocDb();
              ElementIndexList nodes(*this);
    +          partialIndexNode_ = node->getNid();
              do {
                   bool hasValueIndex = false;
                   bool hasEdgePresenceIndex = false;
    @@ -124,6 +125,7 @@
              nodes.generate(*this);
    +     partialIndexNode_ = 0;
         return ancestorHasValueIndex;
    @@ -203,6 +205,19 @@
    +
    +int64_t NsReindexer::getStatsNumberOfNodes(IndexNodeInfo *ninfo) const
    +{
    +     /* Get the number of this node being removed or added, only the descendants
    +      * of the node being partially indexed are being removed/added
    +      */
    +     DBXML_ASSERT(!partialIndexNode_ || (ninfo != 0));
    +     if (!partialIndexNode_ || (partialIndexNode_.compareNids(ninfo->getNodeID()) < 0)) {
    +          return 1;     
    +     }
    +     return 0;
    +}
    +
    const char *NsReindexer::lookupUri(int uriIndex)
    diff -ru dbxml-2.4.16-orig/dbxml/src/dbxml/nodeStore/NsReindexer.hpp dbxml-2.4.16/dbxml/src/dbxml/nodeStore/NsReindexer.hpp
    --- dbxml-2.4.16-orig/dbxml/src/dbxml/nodeStore/NsReindexer.hpp     2008-10-21 17:27:18.000000000 -0400
    +++ dbxml-2.4.16/dbxml/src/dbxml/nodeStore/NsReindexer.hpp     2009-04-27 14:09:04.000000000 -0400
    @@ -45,6 +45,7 @@
         const char *lookupUri(int uriIndex);
         void indexAttribute(const char *aname, int auri,
                       NsNodeRef &parent, int index);
    +     virtual int64_t getStatsNumberOfNodes(IndexNodeInfo *ninfo) const;
    private:
         IndexSpecification is_;
         KeyStash stash_;
    @@ -54,6 +55,9 @@
         // this is redundant wrt Indexer, but dict_ in Indexer triggers
         // behavior that this class does not want
         DictionaryDatabase *dictionary_;
    +          
    +     // The node being indexed in partial indexing
    +     NsNid partialIndexNode_;
    }
    Edited by: LaurenFoutz on May 1, 2009 5:47 AM

  • Bitmap index and group by queries

    Please could someone offer me some advice for a data warehouse table I am designing which will have ad hoc queries running against it mainly grouping by day/month/year and needs to use as little resources as possible.
    TRANS_DATE DATE, LOC_ID VARCHAR2(8), USER_ID VARCHAR2(8), TRANS_CODE VARCHAR2(3), COUNT NUMBER(8,0)
    In populating the table I truncated the trans_date to hourly data and aggregated the other columns to give me an hourly count for every combination of location, user and code. I wasn't sure if I should create 2 more columns with truncated dates by day and by month? There are 200,000 rows per day in this table.
    The first 4 columns have low cardinality so I decided to create bitmap indexes on them. However, when querying in Application Express SQL Workshop and looking at the query plan, it seems that a full table scan is being performed whenever I use a group by (example below), even when I use a hint for the index. The bitmap index is used on simple select queries with where clauses but no grouping.
    SELECT LOC_ID, count(TOTAL)
    FROM TRANS_SUMMARY
    GROUP BY LOC_ID
    Am I doing this the right way? Or would multiple materialised views / B-tree indexes be a better way of ensuring fast group by queries on this table?
    Thanks in advance.
    Paul

    You don't need a separate materialized view for every combination. You may need a few carefully chosen combinations, in addition to appropriate dimensions for rollups.
    For example, a materialized view that aggregated data based on location_id, trans_code, and hour could be used to aggregate by location_id, trans_code and day (i.e. adding up 24 rows from the MV for each trans_code and location_id) or by location_id, trans_code and month (i.e. adding up 24*30 rows from the MV for each trans_code and location_id). The more rows you have to aggregate, and the more frequently the higher-level aggregates are accessed, the more likely you'd want a separate MV that aggregates at the higher level (i.e. if you're accessing monthly summary totals all the time).
    If you are rarely aggregating by trans_code, and you have lots of different trans_code values, you could use the one MV that aggregates by location_id, trans_code, and hour to do a monthly aggregate just on location_id and month, but that requires adding up 24*30*(number of trans_codes) rows from the MV, which may be expensive. It may make sense to have a separate MV for that, or a set of MVs aggregated just by location_id and day, or some combination.
    Justin
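    A minimal sketch of the hourly MV described above, using the column names from the original post (the MV name and rewrite option are assumptions):

    -- Hourly aggregate; query rewrite lets group-by queries roll up to
    -- hour/day/month from it transparently. "count" is the column from
    -- the posted table definition, qualified here to avoid ambiguity.
    create materialized view trans_summary_hr
        enable query rewrite
    as
    select loc_id,
           trans_code,
           trunc(trans_date, 'HH24') as trans_hour,
           sum(s.count)              as total
    from   trans_summary s
    group  by loc_id, trans_code, trunc(trans_date, 'HH24');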

  • I want the list sql queries performed someone in one session based

    V$SESSION, V$SQLTEXT and audsid (taken from sys_context('USERENV','SESSIONID'), which can be captured by a trigger when the application logs on) are great, but capturing the list of sql queries made by that user (application) until they log off seems impossible.
    I've tried creating triggers that check whether V$SESSION changes with respect to an audsid number and sql_hash_value, but it didn't work. I'm trying to work out why.
    In the meantime I thought I'd post this to see if anyone else found another way.
    I don't want to use sql trace, as that just gives loads of info; I just want the simple sql queries.
    Auditing seemed a bit complicated. Can it list ALL DML and DDL commands made by a user? If so, how?
    thanks
    Bobby

    quite right.
    Oracle Enterprise, dedicated server, 9.2.0.8 on Solaris 10.
    I just want to log all DML and DDL commands, without stats or anything like that; simply a list, in chronological order, would be nice.
    I found something called the dbms_fga package but I'm not sure if that's what I want. It seems I have to implement it on specific tables, which is no good if the application hasn't created the tables yet.
    Thanks in advance.
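    As a hedged aside (not from this thread): standard auditing, rather than dbms_fga, can produce this kind of chronological list. A minimal sketch, assuming the audit_trail=db init parameter is set:

    -- Audit DML and table DDL for one user, one audit row per statement:
    audit insert table, update table, delete table by bobby by access;
    audit table by bobby by access;
    -- Review the trail in chronological order:
    select timestamp, username, action_name, owner, obj_name
    from   dba_audit_trail
    where  username = 'BOBBY'
    order  by timestamp;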

  • 8i intermedia ORA-03113 ORA-07445 CTX$N

    We get the occasional ORA-03113 end of file communication error along with a trace file with ORA-07445. On looking at the CTX$N - described as the negative list table I found it had quite a few entries (nearly 10,000). When I rebuilt the index the problem went away and the CTX$N table was empty. Clearly it gets written to during the course of time. Does anyone know any details or have any suggestions as to if/when large numbers of entries might cause this sort of failure.

    In the last couple of days we have started seeing ORA-03113 on some interMedia queries, which leads me to think we're running into bug 1517789 (8.1.7.0.0 on NT4WSSP6a). The bug is supposed to be taken care of in the 8.1.7.1b patchset, but I haven't yet found the patch for NT. Does anybody have more information or a link to the patch?
    Interestingly, the number of rows in the table (in the hundreds), rows in the DR$ tables (a few thousand), and the number of hits (hundreds) are all small. It does respond to the "well, don't do that" workarounds mentioned in the bug discussion, though--changing the contains from '%' to a couple of characters preceding the '%' (sketched below).
    Any information appreciated
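    A sketch of that workaround (the table and column names are hypothetical):

    -- Instead of a bare wildcard, which expands to every indexed token:
    select id from docs where contains(text, '%') > 0;
    -- anchor the pattern with a couple of leading characters:
    select id from docs where contains(text, 'ab%') > 0;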

  • Sql tuning...guide me

    Hi all,
    Even though I don't face frequent problems in PL/SQL programming, I often face problems tuning SQL queries. Basically I don't have an idea of which way to tune. Can anybody please direct me in this regard? What documentation do I have to read? Most people tell me that tuning comes with experience, but I am a beginner at tuning.. please guide me
    thanks
    asp

    Hello
    There are a lot of myths about performance tuning, some of which used to be correct - or kind of correct. Tom Kyte is a great advocate of testing and proving everything, and his site and books are excellent sources of information and techniques. Jonathan Lewis, Cary Millsap, Howard Dizwel, and several others I can't think of right now are widely considered to be some of the top experts in their field, so it is worth looking for their web sites and keeping an eye out for their publications.
    If you are new to performance tuning, you should question every rule of thumb and seek to prove or disprove it. If someone tells you that you should put equi-joins first in the WHERE section of a query, ask them why. If you see a technique that you don't understand, or haven't used, have a go at using it. Read the documentation and if you still don't understand it, ask. I personally think this is the fastest way you can get up to speed on pretty much any subject area of the database. I probably have the quote hideously wrong, but: I hear, I forget; I see, I remember; I do, I understand.
    Anyway, just my two penneth.
    HTH
    David

  • TimesTen Queries not returning

    We are evaluating TimesTen IMDB Cache as an option to improve our performance of queries that do aggregate operations across millions of rows.
    My environment is a Windows Server 2003 R2 X64 with Intel Xeon X5660 16 CPU cores, and 50G of RAM. The database is Oracle Enterprise 11.2.0.2.
    I have the following DSN parameters.
    DataStore Path + Name : H:\ttdata\database\my_ttdb
    Transaction Log Directory : H:\ttdata\logs
    Database Character Set: WE8MSWIN1252
    First Connection:
    Permanent Data Size: 26000
    Temporary Data Size: 1600
    IMDB Cache:
    PassThrough :1
    Rest of the parameters are default.
    I have created 2 read only cache groups and 1 asynchronous write-through cache group.
    The first read only cache group is on a table A with 108 rows , the second read only cache group is on table B with 878689 rows and table C is the fact table with 20.5 million rows.
    I have loaded these cache groups. Now I am trying to do join queries across these tables that do aggregation and group by on some measures in table C.
    I have seen using dssize that the data has been loaded in permanent data area.
    These queries execute in Oracle in around 5s and my expectation was that they would return in sub-second times.
    But these queries are not returning at all, even after hours. I have looked at the query plan and there are no lookups flagged as not indexed.
    I have even tried simpler join queries without any aggregation. Even those get stuck. The only queries that I have been able to run are plain select * from the tables.
    What may be wrong in this setup/configuration? How do I debug what is causing the problem?
    Thanks,
    Mehta

    Dear user2057059,
    Could you give more details about your question:
    - table structure (columns, indexes, constraints)
    - your query, with its execution plan
    20M rows is not a big database, especially for your hardware.
    In my example:
    CPU: Intel Core 2 Duo CPU 2.33 GHz,
    RAM: 4 GB DDR2
    HDD: 100 Gb SATA-II
    OS: Fedora 8 x64 (Linux 2.6.23.1-42.fcb)
    +
    Oracle TimesTen 7.0.5.0.0 (64 bit Linux)
    Command> select count(*) from accounts;
    Query Optimizer Plan:
      STEP:                1
      LEVEL:               1
      OPERATION:           TblLkSerialScan
      TBLNAME:             ACCOUNTS
      IXNAME:              <NULL>
      INDEXED CONDITION:   <NULL>
      NOT INDEXED:         <NULL>
    < 30000000 >
    1 row found.
    average time: 1.920321 sec, direct connection.
    Regards,
    Gennady

  • Tuning: what is VW_NSO_1?

    When I do an explain plan, I see VW_NSO_1 show up. Does anyone know what this is? Is this some temporary table?
    Also, any tuning hints for the query below would help. I am a bit distressed that it is doing full table scans of the index-organized tables (instead of fast full scans on the indexes). Will a spatial join always do a full table scan on one of the base tables, or is there a way to use a hash join to avoid this?
    0 SELECT STATEMENT GOAL: CHOOSE
    1061 NESTED LOOPS
    11226 HASH JOIN
    11225 VIEW OF 'VW_NSO_1'
    11225 SORT (UNIQUE)
    13495 HASH JOIN
    218612 TABLE ACCESS (FULL) OF 'T1_IDX_FL15$'
    2890 TABLE ACCESS (FULL) OF 'T2_IDX_FL15$'
    1891 TABLE ACCESS GOAL: ANALYZED (FULL) OF
    'T2123'
    1061 TABLE ACCESS GOAL: ANALYZED (BY USER ROWID) OF
    'T1'
    Query used:
    select ...
    from t2 b,
    t1 t
    where sdo_relate(
    t.location,
    b.location,
    'mask=ANYINTERACT querytype=JOIN') = 'TRUE'
    and t.user_id = 2
    and 1=2
    order by ...
    Thanks,
    Andrew

    Hi Andrew,
    The absolute best way to determine a tiling level is to run queries like those which will be running in production, and tune to those queries.
    What are the geometries like that you are comparing?
    The estimate tiling level procedures are designed to give your tuning efforts a starting point, but again, using real data and queries is the way to go if you have that knowledge.
    You might want to try rewriting the query as a window query:
    select /*+ordered */ ...
    from t2 b, t1 t
    where sdo_relate( t.location, b.location,
    'mask=ANYINTERACT querytype=WINDOW') = 'TRUE'
    and t.user_id = 2 and 1=2
    order by ...
    also, how many items in t1 have user_id = 2?
    If there are only a few, you might want to try:
    select /*+ordered */ ...
    from t1 t, t2 b
    where sdo_relate( b.location, t.location,
    'mask=ANYINTERACT querytype=WINDOW') = 'TRUE'
    and t.user_id = 2
    order by ...
    regards,
    dan
