Performance Problems - Index and Statistics

Dear Gurus,
I am having problems with losing indexes and statistics on cubes. The system reports that my indexes are too old, which in fact they are not; they were created just a month back. We check the indexes daily, and the Manage tab shows them as RED.
Please help.

Dear Mr. Syed,
The solution steps mentioned in my previous reply already describe the so-called re-org of tables; however, to clarify the issue further:
Occasionally, the Oracle Cost-Based Optimizer may estimate the cost of a full table scan to be lower than that of an index scan, even though the actual runtime of an access via the index would be considerably shorter than that of the full table scan. A few important points should be considered in order to improve performance in problem areas such as long runtimes for change runs and aggregate activation and fill-up.
Performance problems based on a wrong optimizer decision indicate that something serious is missing at the database level, and we need to re-org the degenerated indexes in order to improve overall performance and avoid daily manual activities (RSRV + RSNAORA) on nearly the same indexes.
For re-organizing degenerated indexes, three options are available:
1) DROP INDEX ..., and CREATE INDEX ...
2) ALTER INDEX <index name> REBUILD (ONLINE PARALLEL x NOLOGGING)
3) ALTER INDEX <index name> COALESCE [as of Oracle 8i (8.1) only]
Each option has its pros and cons; option 2 seems to have the most advantages (a minimal sketch of option 2 follows the list below).
Advantages of option 2:
1) Fast, and storage in a different tablespace is possible
2) Creates a new index tree
3) Gives the option to change storage parameters without deleting the index
4) As of Oracle 8i (8.1), you can avoid a lock on the table by specifying the ONLINE option. In this case, Oracle waits until the resource has been released, and then starts the rebuild. The "resource busy" error no longer occurs.
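A minimal sketch of option 2 (the index name below is only a placeholder for one of the cube's fact-table indexes; please have the DBA team confirm the parallel degree, logging and tablespace settings before running it):
ALTER INDEX "/BIC/FSALESCUBE~010" REBUILD ONLINE PARALLEL 4 NOLOGGING;
-- Reset the index to serial, logged behaviour afterwards so it is not left
-- with unexpected PARALLEL/NOLOGGING attributes:
ALTER INDEX "/BIC/FSALESCUBE~010" NOPARALLEL;
ALTER INDEX "/BIC/FSALESCUBE~010" LOGGING;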
I would still leave it to the database technical team, who are best placed to judge and take the call on these.
This procedure could be standardized for all affected cubes and their indexes as well.
However, I leave the thought with you.
Hope it Helps
Chetan
@CP..

Similar Messages

  • Which Event Classes should I use for finding good indexes and statistics for queries in an SP?

    Dear all,
    I am trying to use Profiler to create a trace so that it can be used as a workload in
    the "Database Engine Tuning Advisor" for optimization of one stored procedure.
    Please tell me about the event classes which I should use in the trace.
    The stored proc contains three INSERT queries which insert data into a table variable.
    Finally, a SELECT query is used on the same table variable, with one UNION of the same table variable, to generate a sequence for records based on certain conditions on a few columns.
    There are three cases where I am using the above structure of the SP, so there are three SPs; out of the three, I will choose one based on their performance.
    1) There is only one table with three inserts which go into a table variable with a final sequence creation block.
    2) There are 15 tables with 45 inserts which go into a table variable with a final sequence creation block.
    3) There are 3 tables with 9 inserts which go into a table variable with a final sequence creation block.
    In all of the above cases, the number of records will be around 500,000 (5 lakh).
    The purpose is optimization of the queries in the SP,
    i.e. which event classes I should use for finding good indexes and statistics for queries in the SP.
    yours sincerely

    "Database Engine Tuning Advisor" for optimization of one stored procedure.
    Please tel me about the Event classes which i  should use in trace.
    You can use the "Tuning" template to capture the workload to a trace file that can be used by the DETA.  See
    http://technet.microsoft.com/en-us/library/ms190957(v=sql.105).aspx
    If you are capturing the workload of a production server, I suggest you not do that directly from Profiler as that can impact server performance.  Instead, start/stop the Profiler Tuning template against a test server and then script the trace
    definition (File-->Export-->Script Trace Definition).  You can then customize the script (e.g. file name) and run the script against the prod server to capture the workload to the specified file.  Stop and remove the trace after the workload
    is captured with sp_trace_setstatus:
    DECLARE @TraceID int = <trace id returned by the trace create script>
    EXEC sp_trace_setstatus @TraceID, 0; --stop trace
    EXEC sp_trace_setstatus @TraceID, 2; --remove trace definition
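    If the TraceID returned by the create script was not noted, a small lookup against sys.traces can recover it (the LIKE pattern below is just a placeholder for the file name you scripted):
    SELECT id, path, status FROM sys.traces WHERE path LIKE N'%YourTraceFile%';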
    Dan Guzman, SQL Server MVP, http://www.dbdelta.com

  • Index and statistics

    For an InfoProvider, what is the problem with building the indexes but not the statistics?

    Statistics tell the optimizer things like how many values there are, how similar or unique the values are, etc. The database optimizer uses the index and table statistics to make decisions about how best to execute the query, e.g. whether to perform indexed reads of the table or a full table scan, or in what order to read tables. The CBO (Cost-Based Optimizer) is generally trying to find the execution plan that requires the fewest system resources, generally the fewest I/Os (but there are other factors in the mix). It may examine hundreds or thousands of different execution plans to find what it thinks is best.
    Without any (or good) statistics, the CBO may make bad choices about how to execute the query that are not the most efficient.
    When you drop and rebuild the indices, you lose the statistics, which could easily result in bad choices by the CBO.
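    As an illustrative sketch (the schema and table names below are placeholders, not from this thread), statistics can be regathered right after an index drop/rebuild so the CBO is not left guessing:
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname          => 'SAPR3',            -- placeholder schema
        tabname          => 'MY_FACT_TABLE',    -- placeholder table
        cascade          => TRUE,               -- also gathers statistics on the table's indexes
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
    END;
    /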

  • Performance problem - XML and Hashtable

    Hello!
    I have a strange (or not strange, we don't know) performance problem. The situation is simple: our application runs on Oracle IAS, and gets XML messages from an MQ, and we have to process them.
    Sample message:
    <RequestDeal RequestId="RequestDeal-20090209200010-998493885" CurrentFragment="0" EndFragment="51" Expires="2009-03-7T08:00:10">
         <Deal>
              <Id>385011</Id>
              <ModifyDate>2009-02-09</ModifyDate>
              <PastDuesOnly>false</PastDuesOnly>
         </Deal>
         <Deal>
              <Id>385015</Id>
              <ModifyDate>2009-02-09</ModifyDate>
              <PastDuesOnly>false</PastDuesOnly>
         </Deal>
    </RequestDeal>
    (There is an average of 50,000-80,000 deals in the whole message.)
    After the application receives the whole message, we'd like to parse it, and put the values into a hashtable for further processing:
        Hashtable ht = new Hashtable();
        Node eDeals = getElementNode(docReq, "/RequestDeal");
        Node eDeal = (Element)(eDeals.getFirstChild());
        long start = System.currentTimeMillis();
        while (eDeal != null) {
          String id = getElementText((Element)eDeal, "Id");
          Date modifyDate = getElementDateValue((Element)eDeal, "ModifyDate");
          boolean szukitett = getElementBool((Element)eDeal, "PastDuesOnly", false);
          ht.put(id, new UgyletStatusz(id, modifyDate, szukitett));
          eDeal = eDeal.getNextSibling();
        }
        logger.info("Request adatok betöltve.");   // "Request data loaded."
        long end = System.currentTimeMillis();
        logger.info("Hosszu muvelet ideje: " + (end - start) + "ms");   // "Duration of the long operation: ...ms"
    The problem is the while statement. On my PC it runs for 15-25 seconds, depending on the number of deals; but on our customer's server it runs for 2-5 hours with very high CPU load. The application uses the Oracle XML parser.
    On my PC there are Websphere MQ 5.3 and Oracle 10ias with 1.5 JVM.
    On our customer's server there are Websphere MQ 5.3 and Oracle 9ias with a 1.4 HotSpot JVM.
    Do you have any idea, what can be the cause of the problem?

    gyulaur wrote:
    Hello!
    The problem is the while statement. On my PC it runs for 15-25 seconds, depends on the number of the deals; but on our customer's server, it runs for 2-5 hours with very high CPU load. The application uses oracle xml parser.
    On my PC there are Websphere MQ 5.3 and Oracle 10ias with 1.5 JVM.
    On our customers's server there are Websphere MQ 5.3 and Oracle 9ias with 1.4 HotSpot JVM.
    Do you have any idea, what can be the cause of the problem?
    All sorts of things are possible:
    - MQ is configured differently. For instance transactional versus non-transactional.
    - The customer uses a network (multiple computers) and you use a single box, and something is using a lot of bandwidth on the network. It could be your process, one of the dependent processes, or something else that has nothing to do with your system.
    - The processing computer is loaded with something else or something is artificially limiting the processing time of your application (there is a app that allows one to limit another app to one CPU.)
    - At least one version of 1.4 had an XML bug that consumed massive amounts of memory when processing large XML. Sound familiar? If the physical memory is not up to it, then it will be thrashing the hard drive as it swaps virtual memory in and out.
    - Of course virtual memory causing swaps would impact it regardless of the cause.
    - The database is loaded.
    You can of course at least get the same version of java that they are using. Actually that would seem like a rather good idea to me regardless.

  • What is the difference between drop and create index and rebuild index?

    Hi All,
    What is the difference between drop-and-create index and rebuild index? I think both are the same... Please clarify whether both are the same or if there is any difference...
    Thanks in Advance,
    rup

    Both are the same: rebuilding an index drops and re-creates the index. (A short illustration follows the links below.)
    Ref:
    SSMS - https://technet.microsoft.com/en-us/library/ms187874(v=sql.105).aspx
    TSQL - https://msdn.microsoft.com/en-us/library/ms188388.aspx
    I would suggest you to also refer one of the best index maintenance script as below:
    https://ola.hallengren.com/sql-server-index-and-statistics-maintenance.html
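    To illustrate the two forms (the table and index names are made up for this sketch):
    -- Drop and re-create: the index is missing while the CREATE runs, and the
    -- full definition has to be restated.
    DROP INDEX IX_Orders_CustomerID ON dbo.Orders;
    CREATE NONCLUSTERED INDEX IX_Orders_CustomerID ON dbo.Orders (CustomerID);
    -- Rebuild: the existing definition is reused; ONLINE = ON requires Enterprise Edition.
    ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REBUILD WITH (ONLINE = ON);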

  • Jdeveloper dual core processor performance problem

    I have a dual-core 2.4 GHz processor and 2 GB of RAM, and I'm running JDeveloper 9.0.5.2, but my performance is terrible. Other developers in the company have non-dual-core processors and can start our application in debug mode using the JDeveloper embedded OC4J in 10 seconds, where it takes me 4 minutes! It is a Struts/EJB web application. Is there anything I can do to help in debug mode? Cheers.
    Murray

    Hi Bernard,
    Which version of McAfee are you using?
    On my (personal) laptop, I'm using McAfee VirusScan 9.0.10. I don't frequently run JDeveloper on this laptop, but when I do, it's not experiencing significant startup delays (it's a very low-power machine: PIII 650, 512 MB).
    McAfee VirusScan seems to have very few configuration options (noticeably different from Norton, which I use on my corporate desktop machine). I specifically remember changing the "File Types to Scan" option to "Program files and documents only". You can get to this by right-clicking the "M" notification area icon, VirusScan->Options menu, Advanced button on the ActiveShield page.
    In Norton, I think I have it configured so that it only scans files on write rather than on read. I also exclude directories which contain JDeveloper installs or other large Java apps (although scanning only on write eliminates most of the performance problems anyway and still leaves your system reasonably secure).
    The easiest way to convince your MIS dept that the virus checker is the source of your problems might be to ask them to allow you to turn it off in order to test the difference it makes to performance. It's a reasonable request if you're trying to eliminate possible causes for the slowdown (from the description you gave, it does sound like the AV upgrade is the first place I'd start looking).
    If the virus checker is the source of your problems, you'll probably be seeing massive slowness in most large Java applications that have a large number of JARs on their classpath.
    Thanks,
    Brian

  • How do index fragmentation and statistics affect SQL query performance?

    Hi,
    How do index fragmentation and statistics affect SQL query performance?
    Thanks
    Shashikala

    How do index fragmentation and statistics affect SQL query performance?
    Very simple answer: outdated statistics will lead the optimizer to create bad plans, which in turn will require more resources, and this will impact performance. If an index is fragmented (mainly the clustered index, though this holds true for nonclustered indexes as well), the time spent finding a value will be longer, because the query has to search the fragmented index to look for the data, and the additional empty space increases search time. A minimal sketch of how to check fragmentation and refresh statistics follows.
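    For example (the object names are illustrative only):
    -- Check fragmentation of the indexes on one table:
    SELECT i.name, ips.avg_fragmentation_in_percent, ips.page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.Orders'), NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i ON i.object_id = ips.object_id AND i.index_id = ips.index_id;
    -- Refresh statistics so the optimizer works from current distribution data:
    UPDATE STATISTICS dbo.Orders WITH FULLSCAN;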

  • Performance problems with XMLTABLE and XMLQUERY involving relational data

    Hello-
    Is anyone out there using XMLTABLE or XMLQUERY with more than a toy set of data? I am running into serious performance problems trying to do basic things such as:
    * Combine records in 10 relational tables into a single table of XMLTYPE records using XMLTABLE. This hangs indefinitely for any more than 800 records. Oracle has confirmed that this is a problem and is working on a fix.
    * Combine a single XMLTYPE record with several relational code tables into a single XMLTYPE record using XMLQUERY and ora:view() to insert code descriptions after each code. Performance is 10 seconds for 10 records (terrible) passing a batch of records , or 160 seconds for one record (unacceptable!). How can it take 10 times longer to process 1/10th the number of records? Ironically, the query plan says it will do a full table scan of records for the batch, but an index access for the one record passed to the XMLQUERY.
    I am rapidly losing faith in XML DB, and desperately need some hints on how to work around these performance problems, or at least some assurance that others have been able to get this thing to perform.

    <Note>Long post, sorry.</Note>
    First, thanks for the responses above. I'm impressed with the quality of thought put into them. (Do the forum rules allow me to offer rewards? :) One suggestion in particular made a big performance improvement, and I’m encouraged to hear of good performance in pure XML situations. Unfortunately, I think there is a real performance challenge in two use cases that are pertinent to the XML+relational subject of this post and probably increasingly common as XML DB usage increases:
    •     Converting legacy tabular data into XML records; and
    •     Performing code table lookups for coded values in XML records.
    There are three things I want to accomplish with this post:
    •     Clarify what we are trying to accomplish, which might expose completely different approaches than I have tried
    •     Let you know what I tried so far and the rationale for my approach to help expose flaws in my thinking and share what I have learned
    •     Highlight remaining performance issues in hopes that we can solve them
    What we are trying to accomplish:
    •     Receive a monthly feed of 10,000 XML records (batched together in text files), each containing information about an employee, including elements that repeat for every year of service. We may need to process an annual feed of 1,000,000 XML records in the future.
    •     Receive a one-time feed of 500,000 employee records stored in about 10 relational tables, with a maximum join depth of 2 or 3. This is inherently a relational-to-XML process. One record/second is minimally acceptable, but 10 records/sec would be better.
    •     Consolidate a few records (from different providers) for each employee into a single record. Given the data volume, we need to achieve a minimum rate of 10 records per second. This may be an XML-only process, or XML+relational if code lookups are done during consolidation.
    •     Allow the records to be viewed and edited, with codes resolved into user-friendly descriptions. Since a user is sitting there, code lookups done when a record is viewed (vs. during consolidation) should not take more than 3 seconds total. We have about 20 code tables averaging a few hundred rows each, though one has 450,000 rows.
    As requested earlier, I have included code at the end of this post for example tables and queries that accurately (but simply) replicate our real system.
    What we did and why:
    •     Stored the source XML records as CLOBS: We did this to preserve the records exactly as they were certified and sent from providers. In addition, we always access the entire XML record as a whole (e.g., when viewing a record or consolidating employee records), so this storage model seemed like a good fit. We can copy them into another format if necessary.
    •     Stored the consolidated XML employee records as “binary XML”. We did this because we almost always access a single, entire record as a whole (for view/edit), but might want to create some summary statistics at some point. Binary XML seemed the best fit.
    •     Used ora:view() for both tabular source records and lookup tables. We are not aware of any alternatives at this time. If it made sense, most code tables could be pre-converted into XML documents, but this seemed risky from a performance standpoint because the lookups use both code and date range constraints (the meaning of codes changes over time).
    •     Stored records as XMLTYPE columns in a table with other key columns on the table, plus an XMLTYPE metadata column. We thought this would facilitate pulling a single record (or a few records for a given employee) quickly. We knew this might be unnecessary given XML indexes and virtual columns, but were not experienced with those and wanted the comfort of traditional keys. We did not use XMLTYPE tables or the XML Repository for documents.
    •     Used XMLTABLE to consolidate XML records by looping over each distinct employee ID in the source batch. We also tried XMLQUERY and it seems to perform about the same. We can achieve 10 to 20 records/second if we do not do any code lookups during consolidation, just meeting our performance requirement, but still much slower than expected.
    •     Used PL/SQL with XMLFOREST to convert tabular source records to XML by looping over distinct employee IDs. We tried this outside PL/SQL both with XMLFOREST and XMLTABLE+ora:view(), but it hangs in both cases for more than 800 records (a known/open issue). We were able to get it to work by using an explicit cursor to loop over distinct employee IDs (rather than processing all records at once within the query). The performance is one record/second, which is minimally acceptable and interferes with other database activity.
    •     Used XMLQUERY plus ora:view() plus XPATH constraints to perform code lookups. When passing a single employee record, the response time ranges from 1 sec to 160 sec depending on the length of the record (i.e., number of years of service). We achieved a 5-fold speedup using an XMLINDEX (thank you Marco!!). The result may be minimally acceptable, but I’m baffled why the index would be needed when processing a single XML record. Other things we tried: joining code tables in the FOR...WHERE clauses, joining code tables using LET with XPATH constraints and LET with WHERE clause constraints, and looking up codes individually via JDBC from the application code at presentation time. All those approaches were slower. Note: the difference I mentioned above in equality/inequality constraint performance was due to data record variations not query plan variations.
    What issues remain?
    We have a minimally acceptable solution from a performance standpoint with one very awkward PL/SQL workaround. The performance of a mixed XML+relational data query is still marginal IMHO, until we properly utilize available optimizations, fix known problems, and perhaps get some new query optimizations. On the last point, I think the query plan for tabular lookups of codes in XML records is falling short right now. I’m reminded of data warehousing in the days before hash joins and star join optimization. I would be happy to be wrong, and just as happy for viable workarounds if I am right!
    Here are the details on our code lookup challenge. Additional suggestions would be greatly appreciated. I’ll try to post more detail on the legacy table conversion challenge later.
    -- The main record table:
    create table RECORDS (
      SSN varchar2(20),
      XMLREC sys.xmltype
    ) xmltype column XMLREC store as binary xml;
    create index records_ssn on records(ssn);
    -- A dozen code tables represented by one like this:
    create table CODES (
      CODE varchar2(4),
      DESCRIPTION varchar2(500)
    );
    create index codes_code on codes(code);
    -- Some XML records with coded values (the real records are much more complex of course):
    -- I think this took about a minute or two
    DECLARE
    ssn varchar2(20);
    xmlrec xmltype;
    i integer;
    BEGIN
    xmlrec := xmltype('<?xml version="1.0"?>
    <Root>
    <Id>123456789</Id>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    </Root>');
    for i IN 1..100000 loop
    insert into records(ssn, xmlrec) values (i, xmlrec);
    end loop;
    commit;
    END;
    -- Some code data like this (ignoring date ranges on codes):
    DECLARE
    description varchar2(100);
    i integer;
    BEGIN
    description := 'This is the code description ';
    for i IN 1..3000 loop
    insert into codes(code, description) values (to_char(i), description);
    end loop;
    commit;
    end;
    -- Retrieve one record while performing code lookups. Takes about 5-6 seconds...pretty slow.
    -- Each additional lookup (times 3 repeating elements in the data) adds about 1 second.
    -- A typical real record has 5 Elements and 20 Subelements, meaning more than 20 seconds to display the record
    -- Note we are accessing a single XML record based on SSN
    -- Note also we are reusing the one test code table multiple times for convenience of this test
    select xmlquery('
      for $r in Root
      return
        <Root>
          <Id>123456789</Id>
          {for $e in $r/Element
           return
             <Element>
               <Subelement1>
                 {$e/Subelement1/Code}
                 <Description>
                   {ora:view("disaac","codes")/ROW[CODE=$e/Subelement1/Code]/DESCRIPTION/text()}
                 </Description>
               </Subelement1>
               <Subelement2>
                 {$e/Subelement2/Code}
                 <Description>
                   {ora:view("disaac","codes")/ROW[CODE=$e/Subelement2/Code]/DESCRIPTION/text()}
                 </Description>
               </Subelement2>
               <Subelement3>
                 {$e/Subelement3/Code}
                 <Description>
                   {ora:view("disaac","codes")/ROW[CODE=$e/Subelement3/Code]/DESCRIPTION/text()}
                 </Description>
               </Subelement3>
             </Element>
          }
        </Root>
    ' passing xmlrec returning content)
    from records
    where ssn = '10000';
    The plan shows the nested loop access that slows things down.
    By contrast, a functionally-similar SQL query on relational data will use a hash join and perform 10x to 100x faster, even for a single record. There seems to be no way for the optimizer to see the regularity in the XML structure and perform a corresponding optimization in joining the code tables. Not sure if registering a schema would help. Using structured storage probably would. But should that be necessary given we’re working with a single record?
    Operation Object
    |SELECT STATEMENT ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    With an xmlindex, the same query above runs in about 1 second, so is about 5x faster (0.2 sec/lookup), which is almost good enough. Is this the answer? Or is there a better way? I’m not sure why the optimizer wants to scan the code tables and index into the (one) XML record, rather than the other way around, but maybe that makes sense if the optimizer wants to use the same general plan as when the WHERE clause constraint is relaxed to multiple records.
    -- Add an xmlindex. Takes about 2.5 minutes
    create index records_record_xml ON records(xmlrec)
    indextype IS xdb.xmlindex;
    Operation Object
    |SELECT STATEMENT ()
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    Am I on the right path, or am I totally using the wrong approach? I thought about using XSLT but was unsure how to reference the code tables.
    I’ve done the best I can constraining the main record to a single row passed to the XMLQUERY. Given Mark’s post (thanks!) should I be joining and constraining the code tables in the SQL WHERE clause too? That’s going to make the query much more complicated, but right now we’re more concerned about performance than complexity.

  • Performance problem because of ignored index

    Hi,
    We have a performance problem with kodo ignoring indexes in Oracle:
    Our baseclass of all our persistent classes (LogasPoImpl) has a subclass
    CODEZOLLMASSNAHMENIMPL.
    We use vertical mapping for all subclasses and have 400.000 instances of
    CODEZOLLMASSNAHMENIMPL.
    We defined an additional index on an attribute of CODEZOLLMASSNAHMENIMPL.
    A query with a filter like "myIndexedAttribute = 'DE'" takes about 15
    seconds on Oracle 8.1.7.
    Kodo logs something like the following:
    [14903 ms] executing prepstmnt 6156689 SELECT (...)
    FROM CODEZOLLMASSNAHMENIMPL t0, LOGASPOIMPL t1
    WHERE (t0.myIndexedAttribute = ?)
    AND t1.JDOCLASS = ?
    AND t0.JDOID = t1.JDOID
    [params=(String) DE, (String)
    de.logas.zoll.eztneu.CodeZollMassnahmenImpl] [reused=0]
    When I execute the same statement from a SQL-prompt, it takes that long as
    well, but when I swap the tablenames in the from part
    (to "FROM LOGASPOIMPL t1, CODEZOLLMASSNAHMENIMPL t0") the result comes
    immediately.
    I've had a look at the query plans, oracle creates for the two statements
    and found, that our index on myIndexedAttribute is not used
    by the first statement, but it is by the second.
    How can I make Kodo use the faster statement?
    I've tried to use the "jdbc-indexed" tag, but without success so far.
    Thanks,
    Wolfgang

    Thank you very much, Stefan & Alex.
    After computing statistics the index is used and the performance is fine
    now.
    - Wolfgang
    Alex Roytman wrote:
    ANALYZE TABLE MY_TABLE COMPUTE STATISTICS;
    "Stefan" <[email protected]> wrote in message
    news:btlqsj$f18$[email protected]..
    When I execute the same statement from a SQL prompt, it takes that long as
    well, but when I swap the tablenames in the from part
    (to "FROM LOGASPOIMPL t1, CODEZOLLMASSNAHMENIMPL t0") the result comes
    immediately.
    I've had a look at the query plans Oracle creates for the two statements
    and found, that our index on myIndexedAttribute is not used
    by the first statement, but it is by the second.
    How can I make Kodo use the faster statement?
    I've tried to use the "jdbc-indexed" tag, but without success so far.
    I know that in DB2 there is a function called "Run Statistics" which you
    can (and should) run on all tables involved in a query (at least once a
    month, when there are heavy changes in the tables).
    Based on the information gathered by these statistics, DB2 can optimize your queries
    and execution paths.
    Since I was once involved in query performance optimization on DB2, I can
    say you can get improvements of 80% on big tables depending on whether statistics
    have been run or not (since the execution plans created by the optimizer differ
    heavily).
    Since I'm now working with Oracle as well, I can at least say that Oracle
    has a feature like statistics too (go into the Enterprise Manager
    Console and click on a table; you will find a row "statistics last run").
    I don't know how to trigger these statistics, nor whether they would
    influence the query execution path on Oracle (thus "swapping" table names
    by itself), since I didn't have time to do further research on that matter.
    But it's worth a try to find out, and maybe it helps with your problem?
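    For reference, a minimal sketch of triggering statistics collection on the two tables from this thread (Oracle 8i-era ANALYZE syntax; on later releases DBMS_STATS is the preferred interface):
    ANALYZE TABLE LOGASPOIMPL COMPUTE STATISTICS;
    ANALYZE TABLE CODEZOLLMASSNAHMENIMPL COMPUTE STATISTICS;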

  • Gathering statistics on interMedia indexes and tables

    Has anyone found any differences (like which one is better or worse) between using the ANALYZE sql command, dbms_utility package, or dbms_stats package to compute or estimate statistics for interMedia text indexes and tables for 8.1.6? I've read the documentation on the subject, but it is still unclear as to which method should be used. The interMedia text docs say the ANALYZE command should be used, and the dbms_stats docs say that dbms_stats should be used.
    Any help or past experience will be greatly appreciated.
    Thanks,
    jj

    According to the Support Document "Using statistics with Oracle Text" (Doc ID 139979.1), no:
    Q. Should we gather statistics on the underlying DR$/DR# tables? If yes/no, why?
    A. The recommendation is NO. All internal recursive queries have hints to fix the plans that are deemed most optimal. We have seen in the past that statistics on the underlying DR$ tables may cause query plan changes leading to serious query performance problems.
    Q. Should we gather statistics on Text domain indexes ( in our example above, BOOKS_INDEX)? Does it have any effect?
    A: As documented in the reference manual, gathering statistics on Text domain index will help CBO to estimate selectivity and costs for processing a CONTAINS() predicate. If the Text index does not have statistics collected, default selectivity and cost will be used.
    So 'No' on the DR$ tables and indexes, 'yes' on the user table being indexed.
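    A minimal sketch following that guidance, using the BOOKS/BOOKS_INDEX names from the note's example (adjust to your own table and index; the DR$ table name below follows the standard DR$<index>$I convention):
    -- Statistics on the indexed user table help the CBO cost the CONTAINS() predicate:
    ANALYZE TABLE books COMPUTE STATISTICS;
    -- If statistics were ever gathered on an internal DR$ table by mistake, they can be removed:
    EXEC DBMS_STATS.DELETE_TABLE_STATS(USER, 'DR$BOOKS_INDEX$I');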

  • Performance problem on function-based index

    Hi guys,
    I am having performance problems with the addition of new function-based indexes.
    alter session set nls_comp='ANSI';
    alter session set nls_sort='BINARY_CI';
    * We have to run this because of the case-insensitivity requirements
    I have a view. for ex:
    create or replace view view1
    as
    select * from emp1,user
    where emp1.empno=user.empno
    union
    select * from emp2,user
    where emp2.empno=user.empno
    union
    select * from emp3,user
    where emp3.empno=user.empno and so on
    When I run this, it works, but with a full table scan. Then I created a function-based index:
    create index user_ix on
    user(nlssort(empno,'NLS_SORT=BINARY_CI'));
    analyze index user_ix compute statistics;
    analyze table user compute statistics;
    The view hangs, but when I run the individual SELECT statements they work.
    Do you guys have any idea what's going on? Any advice is greatly appreciated.
    Thanks.

    LC is absolutely right. Brain cramp on my part.
    On the other hand, I can't seem to coerce Oracle to apply a to_binary_double conversion as part of an implicit conversion.
    var bin_dbl binary_double;
    select to_binary_double(14) into :bin_dbl from dual;
    SCOTT @ nx102 JCAVE9420> select * from emp where empno = :bin_dbl;
    no rows selected
    Elapsed: 00:00:00.14
    Execution Plan
    Plan hash value: 2949544139
    | Id  | Operation                   | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |        |     1 |    39 |     1   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS BY INDEX ROWID| EMP    |     1 |    39 |     1   (0)| 00:00:01 |
    |*  2 |   INDEX UNIQUE SCAN         | PK_EMP |     1 |       |     0   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("EMPNO"=TO_NUMBER(:BIN_DBL))
    I'd expect that Oracle would try to convert the binary double to a number, not the other way around.
    Justin

  • Urgent: Performance problem with where clause using IN and an OR condition

    Select statement is:
    select fl.feed_line_id
    from ap_expense_feed_lines_all fl
    where ((:1 is not null and
    fl.feed_line_id in (select distinct r2.object_id
    from xxdl_pcard_wf_routing_lists r2,
         per_people_f hr2
    where upper(hr2.full_name) like upper(:1||'%')
              and hr2.person_id = r2.person_id
    and r2.fyi_list is null
              and r2.sequence_number <> 0))
    or
    (:1 is null))
    If I modify the statement to remove the "or (:1 is null))" part at the bottom of the where clause, it returns in .16 seconds. If I modify the statement to only contain the "(:1 is null))" part of the where clause, it returns in .02 seconds. With the whole statement above, it returns in 477 seconds. Anyone have any suggestions?
    Explain plan for the whole statement is:
    (1) SELECT STATEMENT CHOOSE
    Est. Rows: 10,960 Cost: 212
    FILTER
    (2) TABLE ACCESS FULL AP.AP_EXPENSE_FEED_LINES_ALL [Analyzed]
    (2) Blocks: 8,610 Est. Rows: 10,960 of 209,260 Cost: 212
    Tablespace: APD
    (6) TABLE ACCESS BY INDEX ROWID HR.PER_ALL_PEOPLE_F [Analyzed]
    (6) Blocks: 4,580 Est. Rows: 1 of 85,500 Cost: 2
    Tablespace: HRD
    (5) NESTED LOOPS
    Est. Rows: 1 Cost: 4
    (3) TABLE ACCESS FULL XXDL.XXDL_PCARD_WF_ROUTING_LISTS [Analyzed]
    (3) Blocks: 19 Est. Rows: 1 of 1,303 Cost: 2
    Tablespace: XXDLD
    (4) UNIQUE INDEX RANGE SCAN HR.PER_PEOPLE_F_PK [Analyzed]
    Est. Rows: 1 Cost: 1
    Thanks in advance,
    Peter

    Thanks for the reply, but I have already checked what you are suggesting and I am pretty sure those are not causing the problem. The hr2.full_name column has an upper index and the (4) line of the explain plan shows that index being used. In addition, that part of the query executes on its own quickly.
    Because the sql is not displayed in an indented format on this page it is a little hard to understand the structure so I am going to restate what is happening.
    My sql is:
    select a_column
    from a_table
    where ((:1 is not null) and a_column in (sub-select statement)
    or
    (:1 is null))
    The :1 bind variable is set to a varchar2 entered on the screen of an application.
    If I execute either part of the sql without the OR condition, performance is good.
    If the :1 bind variable is null with the whole sql statement (so all rows or a_table are returned), performance is still good.
    If the :1 bind variable is a not-null value with the whole sql statement, performance stinks.
    As an example:
    where (('wa' is not null) and a_column in (sub-select statement)) -- fast
    where (('wa' is null)) -- fast
    where (('' is not null) and a_column in (sub-select statement) -- fast
    or
    ('' is null))
    where (('wa' is not null) and a_column in (sub-select statement) -- slow
    or
    ('wa' is null))

  • Performance Problems with "For all Entries" and a big internal table

    We have big Performance Problems with following Statement:
    SELECT * FROM zeedmt_zmon INTO TABLE gt_zmon_help
      FOR ALL ENTRIES IN gt_zmon_help
        WHERE
        status = 'IAI200' AND
        logdat IN gs_dat AND
        ztrack = gt_zmon_help-ztrack.
    In the internal table gt_zmon_help are over 1000000 entries.
    Does anyone have an idea how to improve the performance?
    Thank you!

    >
    Matthias Weisensel wrote:
    > We have big Performance Problems with following Statement:
    >
    >  
    SELECT * FROM zeedmt_zmon INTO TABLE gt_zmon_help
    >   FOR ALL ENTRIES IN gt_zmon_help
    >     WHERE
    >     status = 'IAI200' AND
    >     logdat IN gs_dat AND
    >     ztrack = gt_zmon_help-ztrack.
    >
    > In the internal table gt_zmon_help are over 1000000 entries.
    > Does anyone have an idea how to improve the performance?
    >
    > Thank you!
    You can't expect miracles.  With over a million entries in your itab any select is going to take a bit of time. Do you really need all these records in the itab?  How many records is the select bringing back?  I'm assuming that you have got and are using indexes on your ZEEDMT_ZMON table. 
    In this situation, I'd first of all try to think of another way of running the query and restricting the amount of data, but if this were not possible I'd just run it in the background and accept that it is going to take a long time.

  • Performance problems with JDBC and mysql

    Hi,
    I have a single-table database which has 1,000,000 data rows. The problem is that deleting a specific entry (i.e. delete from mytable where da_value="foo1") takes between 10 and 60 seconds.
    Can anyone help me out of this performance trap?
    LoCal

    Is the table indexed, and are keys being used?
    always preferred postgres! :)
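    A minimal sketch using the names from the post (assuming da_value is not indexed yet):
    -- An index on the filtered column lets the DELETE locate rows without a full table scan:
    CREATE INDEX idx_mytable_da_value ON mytable (da_value);
    DELETE FROM mytable WHERE da_value = 'foo1';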

  • Performance problem with selecting records from BSEG and KONV

    Hi,
    I am having a performance problem while selecting records from the BSEG and KONV tables. As these two tables contain a large amount of data, the selects are taking a lot of time. Can anyone help me improve the performance? Thanks in advance.
    Regards,
    Prashant

    Hi,
    Some steps to improve performance:
    1. Avoid using SELECT...ENDSELECT... construct and use SELECT ... INTO TABLE.
    2. Use WHERE clause in your SELECT statement to restrict the volume of data retrieved.
    3. Design your Query to Use as much index fields as possible from left to right in your WHERE statement
    4. Use FOR ALL ENTRIES in your SELECT statement to retrieve the matching records at one shot.
    5. Avoid using nested SELECT statements, i.e. SELECT within LOOPs.
    6. Avoid using INTO CORRESPONDING FIELDS OF TABLE. Instead use INTO TABLE.
    7. Avoid using SELECT * and Select only the required fields from the table.
    8. Avoid nested loops when working with large internal tables.
    9. Use assign instead of into in LOOPs for table types with large work areas
    10. When in doubt call transaction SE30 and use the examples and check your code
    11. Whenever using READ TABLE use BINARY SEARCH addition to speed up the search. Be sure to sort the internal table before binary search. This is a general thumb rule but typically if you are sure that the data in internal table is less than 200 entries you need not do SORT and use BINARY SEARCH since this is an overhead in performance.
    12. Use "CHECK" instead of IF/ENDIF whenever possible.
    13. Use "CASE" instead of IF/ENDIF whenever possible.
    14. Use "MOVE" with individual variable/field moves instead of "MOVE-
    CORRESPONDING" creates more coding but is more effcient.
